* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-11-21 13:12 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-11-21 13:12 UTC (permalink / raw)
To: gentoo-commits
commit: 67d76cc6cc2bdc81a481ca7563853da3307b9331
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Nov 21 13:11:30 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Nov 21 13:11:30 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=67d76cc6
BMQ(BitMap Queue) Scheduler. (USE=experimental)
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 +
5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch | 11188 +++++++++++++++++++++++++
5021_BMQ-and-PDS-gentoo-defaults.patch | 13 +
3 files changed, 11209 insertions(+)
diff --git a/0000_README b/0000_README
index 79d80432..2f20a332 100644
--- a/0000_README
+++ b/0000_README
@@ -86,3 +86,11 @@ Desc: Add Gentoo Linux support config settings and defaults.
Patch: 5010_enable-cpu-optimizations-universal.patch
From: https://github.com/graysky2/kernel_compiler_patch
Desc: Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
+
+Patch: 5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch
+From: https://gitlab.com/alfredchen/projectc
+Desc: BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS (included). Inspired by the scheduler in Zircon.
+
+Patch: 5021_BMQ-and-PDS-gentoo-defaults.patch
+From: https://gitweb.gentoo.org/proj/linux-patches.git/
+Desc: Set defaults for BMQ. Defaults to n.
diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch b/5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch
new file mode 100644
index 00000000..9eb3139f
--- /dev/null
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch
@@ -0,0 +1,11188 @@
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index f8bc1630eba0..1b90768a0916 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -1673,3 +1673,12 @@ is 10 seconds.
+
+ The softlockup threshold is (``2 * watchdog_thresh``). Setting this
+ tunable to zero will disable lockup detection altogether.
++
++yield_type:
++===========
++
++BMQ/PDS CPU scheduler only. This determines how calls to
++sched_yield() are handled.
++
++ 0 - No yield.
++ 1 - Requeue task. (default)
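As a quick illustration of the tunable documented above, a minimal userspace
sketch (assuming a kernel built with this patch, which exposes
/proc/sys/kernel/yield_type as described):

	#include <stdio.h>

	/* Sketch: switch sched_yield() handling to "no yield" (0).
	 * Needs root; the path assumes the BMQ/PDS sysctl above. */
	int main(void)
	{
		FILE *f = fopen("/proc/sys/kernel/yield_type", "w");

		if (!f) {
			perror("yield_type");
			return 1;
		}
		fputs("0\n", f);
		return fclose(f) == 0 ? 0 : 1;
	}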
+diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
+new file mode 100644
+index 000000000000..05c84eec0f31
+--- /dev/null
++++ b/Documentation/scheduler/sched-BMQ.txt
+@@ -0,0 +1,110 @@
++ BitMap queue CPU Scheduler
++ --------------------------
++
++CONTENT
++========
++
++ Background
++ Design
++ Overview
++ Task policy
++ Priority management
++ BitMap Queue
++ CPU Assignment and Migration
++
++
++Background
++==========
++
++The BitMap Queue CPU scheduler, referred to as BMQ from here on, is an
++evolution of the earlier Priority and Deadline based Skiplist multiple
++queue scheduler (PDS), and is inspired by the Zircon scheduler. Its goal is
++to keep the scheduler code simple while staying efficient and scalable for
++interactive workloads such as desktop use, movie playback and gaming.
++
++Design
++======
++
++Overview
++--------
++
++BMQ uses a per-CPU run queue design: each (logical) CPU has its own run
++queue and is responsible for scheduling the tasks placed into it.
++
++The run queue is a set of priority queues. In terms of data structures,
++these are FIFO queues for non-rt tasks and priority queues for rt tasks;
++see BitMap Queue below for details. BMQ is optimized for non-rt tasks,
++reflecting the fact that most applications are non-rt. Whether a queue is
++FIFO or priority based, each queue is an ordered list of runnable tasks
++awaiting execution, and the data structures are the same. When it is time
++for a new task to run, the scheduler simply looks up the lowest-numbered
++queue that contains a task and runs the first task from the head of that
++queue. The per-CPU idle task is also kept in the run queue, so the
++scheduler can always find a task to run.
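As a sketch of that lookup in plain C, with a simplified 64-level queue
(illustrative only; the names and layout are assumptions, not the patch's
actual structures):

	#include <strings.h>	/* ffsll() */

	struct task;				/* opaque for this sketch */

	struct simple_rq {
		unsigned long long bitmap;	/* bit n set => queue n non-empty */
		struct task *heads[64];		/* first task of each level */
	};

	/* Lowest set bit == lowest-numbered non-empty queue == next to run.
	 * The per-CPU idle task keeps at least one bit set at all times. */
	static struct task *pick_next(const struct simple_rq *rq)
	{
		int prio = ffsll(rq->bitmap) - 1;

		return rq->heads[prio];
	}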
++
++Each task is assigned the same time slice (default 4ms) when it is picked
++to start running. A task is reinserted at the end of the appropriate
++priority queue when it has used up its whole time slice. When the scheduler
++selects a new task from the priority queue, it sets the CPU's preemption
++timer for the remainder of the previous time slice. When that timer fires,
++the scheduler stops execution of that task, selects another task and starts
++over again.
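The preemption timer described here maps naturally onto a per-CPU hrtimer;
a hedged sketch of the arming step (the helper name is an assumption, though
rq->hrtick_timer and p->time_slice do appear later in this patch):

	/* Sketch: arm the CPU's preemption timer for the incoming task's
	 * remaining time slice (in nanoseconds); when the timer fires, the
	 * handler marks the CPU for rescheduling. */
	static void arm_preempt_timer(struct rq *rq, struct task_struct *p)
	{
		hrtimer_start(&rq->hrtick_timer, ns_to_ktime(p->time_slice),
			      HRTIMER_MODE_REL_PINNED_HARD);
	}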
++
++If a task blocks waiting for a shared resource, it is taken out of its
++priority queue and placed in a wait queue for that resource. When it is
++unblocked, it is reinserted into the appropriate priority queue of an
++eligible CPU.
++
++Task policy
++-----------
++
++BMQ supports the DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies,
++like the mainline CFS scheduler, but it is heavily optimized for non-rt
++tasks, that is, NORMAL/BATCH/IDLE policy tasks. Below are the implementation
++details for each policy.
++
++DEADLINE
++ It is squashed into a priority 0 FIFO task.
++
++FIFO/RR
++ All RT tasks share one single priority queue in the BMQ run queue design.
++The complexity of the insert operation is O(n). BMQ is not designed for
++systems that mostly run rt policy tasks.
++
++NORMAL/BATCH/IDLE
++ BATCH and IDLE tasks are treated as the same policy. They compete for CPU
++time with NORMAL policy tasks, but they never receive a priority boost. To
++control the priority of NORMAL/BATCH/IDLE tasks, simply use nice levels.
++
++ISO
++ ISO policy is not supported in BMQ. Please use a nice level -20 NORMAL
++policy task instead.
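For the replacement suggested above, a userspace sketch using the standard
setpriority(2) call (raising priority needs CAP_SYS_NICE or a sufficient
RLIMIT_NICE):

	#include <stdio.h>
	#include <sys/resource.h>

	/* Sketch: run the calling process as a nice -20 NORMAL task,
	 * the recommended BMQ substitute for SCHED_ISO. */
	int main(void)
	{
		if (setpriority(PRIO_PROCESS, 0, -20) != 0) {
			perror("setpriority");
			return 1;
		}
		/* ... latency-sensitive work runs here ... */
		return 0;
	}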
++
++Priority management
++-------------------
++
++RT tasks have priorities from 0-99. For non-rt tasks, two different factors
++are used to determine the effective priority of a task, where the effective
++priority determines which queue the task will be placed in.
++
++The first factor is simply the task's static priority, which is assigned
++from the task's nice level: [-20, 19] from userland's point of view, [0, 39]
++internally.
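That userland/internal mapping is a simple offset; as a one-line sketch:

	/* nice [-20, 19]  <->  static priority [0, 39] */
	#define NICE_TO_STATIC(nice)	((nice) + 20)
	#define STATIC_TO_NICE(sprio)	((sprio) - 20)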
++
++The second factor is the priority boost. This is a value bounded between
++[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] that is used to offset the base
++priority; it is modified in the following cases:
++
++* When a thread has used up its entire time slice, its boost is always
++decayed by increasing the value by one.
++* When a thread gives up CPU control (voluntarily or not) to reschedule,
++and its switch-in time (the time since it was last switched in to run) is
++below a threshold based on its priority boost, its boost is strengthened
++by decreasing the value by one.
++
++The intent of this system is to ensure that interactive threads are serviced
++quickly. These are usually the threads that interact directly with the user
++and cause user-perceivable latency. These threads usually do little work and
++spend most of their time blocked awaiting another user event. So they get the
++priority boost from unblocking while background threads that do most of the
++processing receive the priority penalty for using their entire timeslice.
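A hedged sketch of the boost bookkeeping this section describes (the helper
names and clamping are assumptions, not the patch's exact code; in BMQ a
smaller boost_prio means a higher effective priority):

	/* Task used its whole time slice: decay the boost toward lower prio. */
	static inline void deboost(struct task_struct *p)
	{
		if (p->boost_prio < MAX_PRIORITY_ADJ)
			p->boost_prio++;
	}

	/* Task rescheduled quickly after switch-in: strengthen the boost. */
	static inline void boost(struct task_struct *p)
	{
		if (p->boost_prio > -MAX_PRIORITY_ADJ)
			p->boost_prio--;
	}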
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index b31283d81c52..e27c5c7b05f6 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -516,7 +516,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
+ seq_puts(m, "0 0 0\n");
+ else
+ seq_printf(m, "%llu %llu %lu\n",
+- (unsigned long long)task->se.sum_exec_runtime,
++ (unsigned long long)tsk_seruntime(task),
+ (unsigned long long)task->sched_info.run_delay,
+ task->sched_info.pcount);
+
+diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
+index 8874f681b056..59eb72bf7d5f 100644
+--- a/include/asm-generic/resource.h
++++ b/include/asm-generic/resource.h
+@@ -23,7 +23,7 @@
+ [RLIMIT_LOCKS] = { RLIM_INFINITY, RLIM_INFINITY }, \
+ [RLIMIT_SIGPENDING] = { 0, 0 }, \
+ [RLIMIT_MSGQUEUE] = { MQ_BYTES_MAX, MQ_BYTES_MAX }, \
+- [RLIMIT_NICE] = { 0, 0 }, \
++ [RLIMIT_NICE] = { 30, 30 }, \
+ [RLIMIT_RTPRIO] = { 0, 0 }, \
+ [RLIMIT_RTTIME] = { RLIM_INFINITY, RLIM_INFINITY }, \
+ }
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index bb343136ddd0..212d9204e9aa 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -804,9 +804,13 @@ struct task_struct {
+ struct alloc_tag *alloc_tag;
+ #endif
+
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
+ int on_cpu;
++#endif
++
++#ifdef CONFIG_SMP
+ struct __call_single_node wake_entry;
++#ifndef CONFIG_SCHED_ALT
+ unsigned int wakee_flips;
+ unsigned long wakee_flip_decay_ts;
+ struct task_struct *last_wakee;
+@@ -820,6 +824,7 @@ struct task_struct {
+ */
+ int recent_used_cpu;
+ int wake_cpu;
++#endif /* !CONFIG_SCHED_ALT */
+ #endif
+ int on_rq;
+
+@@ -828,6 +833,19 @@ struct task_struct {
+ int normal_prio;
+ unsigned int rt_priority;
+
++#ifdef CONFIG_SCHED_ALT
++ u64 last_ran;
++ s64 time_slice;
++ struct list_head sq_node;
++#ifdef CONFIG_SCHED_BMQ
++ int boost_prio;
++#endif /* CONFIG_SCHED_BMQ */
++#ifdef CONFIG_SCHED_PDS
++ u64 deadline;
++#endif /* CONFIG_SCHED_PDS */
++ /* sched_clock time spent running */
++ u64 sched_time;
++#else /* !CONFIG_SCHED_ALT */
+ struct sched_entity se;
+ struct sched_rt_entity rt;
+ struct sched_dl_entity dl;
+@@ -842,6 +860,7 @@ struct task_struct {
+ unsigned long core_cookie;
+ unsigned int core_occupation;
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_CGROUP_SCHED
+ struct task_group *sched_task_group;
+@@ -1609,6 +1628,15 @@ struct task_struct {
+ */
+ };
+
++#ifdef CONFIG_SCHED_ALT
++#define tsk_seruntime(t) ((t)->sched_time)
++/* replace the uncertain rt_timeout with 0UL */
++#define tsk_rttimeout(t) (0UL)
++#else /* CFS */
++#define tsk_seruntime(t) ((t)->se.sum_exec_runtime)
++#define tsk_rttimeout(t) ((t)->rt.timeout)
++#endif /* !CONFIG_SCHED_ALT */
++
+ #define TASK_REPORT_IDLE (TASK_REPORT + 1)
+ #define TASK_REPORT_MAX (TASK_REPORT_IDLE << 1)
+
+@@ -2135,7 +2163,11 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
+
+ static inline bool task_is_runnable(struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return p->on_rq;
++#else
+ return p->on_rq && !p->se.sched_delayed;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ extern bool sched_task_on_rq(struct task_struct *p);
+diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
+index 3a912ab42bb5..269a1513a153 100644
+--- a/include/linux/sched/deadline.h
++++ b/include/linux/sched/deadline.h
+@@ -2,6 +2,25 @@
+ #ifndef _LINUX_SCHED_DEADLINE_H
+ #define _LINUX_SCHED_DEADLINE_H
+
++#ifdef CONFIG_SCHED_ALT
++
++static inline int dl_task(struct task_struct *p)
++{
++ return 0;
++}
++
++#ifdef CONFIG_SCHED_BMQ
++#define __tsk_deadline(p) (0UL)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define __tsk_deadline(p) ((((u64) ((p)->prio))<<56) | (p)->deadline)
++#endif
++
++#else
++
++#define __tsk_deadline(p) ((p)->dl.deadline)
++
+ /*
+ * SCHED_DEADLINE tasks has negative priorities, reflecting
+ * the fact that any of them has higher prio than RT and
+@@ -23,6 +42,7 @@ static inline bool dl_task(struct task_struct *p)
+ {
+ return dl_prio(p->prio);
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ static inline bool dl_time_before(u64 a, u64 b)
+ {
+diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
+index 6ab43b4f72f9..ef1cff556c5e 100644
+--- a/include/linux/sched/prio.h
++++ b/include/linux/sched/prio.h
+@@ -19,6 +19,28 @@
+ #define MAX_PRIO (MAX_RT_PRIO + NICE_WIDTH)
+ #define DEFAULT_PRIO (MAX_RT_PRIO + NICE_WIDTH / 2)
+
++#ifdef CONFIG_SCHED_ALT
++
++/* Undefine MAX_PRIO and DEFAULT_PRIO */
++#undef MAX_PRIO
++#undef DEFAULT_PRIO
++
++/* +/- priority levels from the base priority */
++#ifdef CONFIG_SCHED_BMQ
++#define MAX_PRIORITY_ADJ (12)
++#endif
++
++#ifdef CONFIG_SCHED_PDS
++#define MAX_PRIORITY_ADJ (0)
++#endif
++
++#define MIN_NORMAL_PRIO (128)
++#define NORMAL_PRIO_NUM (64)
++#define MAX_PRIO (MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
++#define DEFAULT_PRIO (MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2)
++
++#endif /* CONFIG_SCHED_ALT */
++
+ /*
+ * Convert user-nice values [ -20 ... 0 ... 19 ]
+ * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
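Assuming the mainline NICE_WIDTH of 40, the definitions above work out to:

	MAX_PRIO     = MIN_NORMAL_PRIO + NORMAL_PRIO_NUM = 128 + 64 = 192
	DEFAULT_PRIO = MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2
	             = 192 - 12 - 20 = 160	(BMQ, MAX_PRIORITY_ADJ = 12)
	             = 192 -  0 - 20 = 172	(PDS, MAX_PRIORITY_ADJ = 0)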
+diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
+index 4e3338103654..6dfef878fe3b 100644
+--- a/include/linux/sched/rt.h
++++ b/include/linux/sched/rt.h
+@@ -45,8 +45,10 @@ static inline bool rt_or_dl_task_policy(struct task_struct *tsk)
+
+ if (policy == SCHED_FIFO || policy == SCHED_RR)
+ return true;
++#ifndef CONFIG_SCHED_ALT
+ if (policy == SCHED_DEADLINE)
+ return true;
++#endif
+ return false;
+ }
+
+diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
+index 4237daa5ac7a..3cebd93c49c8 100644
+--- a/include/linux/sched/topology.h
++++ b/include/linux/sched/topology.h
+@@ -244,7 +244,8 @@ static inline bool cpus_share_resources(int this_cpu, int that_cpu)
+
+ #endif /* !CONFIG_SMP */
+
+-#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
++#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
++ !defined(CONFIG_SCHED_ALT)
+ extern void rebuild_sched_domains_energy(void);
+ #else
+ static inline void rebuild_sched_domains_energy(void)
+diff --git a/init/Kconfig b/init/Kconfig
+index c521e1421ad4..131a599fcde2 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -652,6 +652,7 @@ config TASK_IO_ACCOUNTING
+
+ config PSI
+ bool "Pressure stall information tracking"
++ depends on !SCHED_ALT
+ select KERNFS
+ help
+ Collect metrics that indicate how overcommitted the CPU, memory,
+@@ -817,6 +818,7 @@ menu "Scheduler features"
+ config UCLAMP_TASK
+ bool "Enable utilization clamping for RT/FAIR tasks"
+ depends on CPU_FREQ_GOV_SCHEDUTIL
++ depends on !SCHED_ALT
+ help
+ This feature enables the scheduler to track the clamped utilization
+ of each CPU based on RUNNABLE tasks scheduled on that CPU.
+@@ -863,6 +865,35 @@ config UCLAMP_BUCKETS_COUNT
+
+ If in doubt, use the default value.
+
++menuconfig SCHED_ALT
++ bool "Alternative CPU Schedulers"
++ default y
++ help
++ This feature enables the alternative CPU schedulers.
++
++if SCHED_ALT
++
++choice
++ prompt "Alternative CPU Scheduler"
++ default SCHED_BMQ
++
++config SCHED_BMQ
++ bool "BMQ CPU scheduler"
++ help
++ The BitMap Queue CPU scheduler for excellent interactivity and
++ responsiveness on the desktop and solid scalability on normal
++ hardware and commodity servers.
++
++config SCHED_PDS
++ bool "PDS CPU scheduler"
++ help
++ The Priority and Deadline based Skip list multiple queue CPU
++ Scheduler.
++
++endchoice
++
++endif
++
+ endmenu
+
+ #
+@@ -928,6 +959,7 @@ config NUMA_BALANCING
+ depends on ARCH_SUPPORTS_NUMA_BALANCING
+ depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
+ depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
++ depends on !SCHED_ALT
+ help
+ This option adds support for automatic NUMA aware memory/task placement.
+ The mechanism is quite primitive and is based on migrating memory when
+@@ -1036,6 +1068,7 @@ menuconfig CGROUP_SCHED
+ tasks.
+
+ if CGROUP_SCHED
++if !SCHED_ALT
+ config GROUP_SCHED_WEIGHT
+ def_bool n
+
+@@ -1073,6 +1106,7 @@ config EXT_GROUP_SCHED
+ select GROUP_SCHED_WEIGHT
+ default y
+
++endif #!SCHED_ALT
+ endif #CGROUP_SCHED
+
+ config SCHED_MM_CID
+@@ -1334,6 +1368,7 @@ config CHECKPOINT_RESTORE
+
+ config SCHED_AUTOGROUP
+ bool "Automatic process group scheduling"
++ depends on !SCHED_ALT
+ select CGROUPS
+ select CGROUP_SCHED
+ select FAIR_GROUP_SCHED
+diff --git a/init/init_task.c b/init/init_task.c
+index 136a8231355a..03770079619a 100644
+--- a/init/init_task.c
++++ b/init/init_task.c
+@@ -71,9 +71,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ .stack = init_stack,
+ .usage = REFCOUNT_INIT(2),
+ .flags = PF_KTHREAD,
++#ifdef CONFIG_SCHED_ALT
++ .on_cpu = 1,
++ .prio = DEFAULT_PRIO,
++ .static_prio = DEFAULT_PRIO,
++ .normal_prio = DEFAULT_PRIO,
++#else
+ .prio = MAX_PRIO - 20,
+ .static_prio = MAX_PRIO - 20,
+ .normal_prio = MAX_PRIO - 20,
++#endif
+ .policy = SCHED_NORMAL,
+ .cpus_ptr = &init_task.cpus_mask,
+ .user_cpus_ptr = NULL,
+@@ -86,6 +93,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ .restart_block = {
+ .fn = do_no_restart_syscall,
+ },
++#ifdef CONFIG_SCHED_ALT
++ .sq_node = LIST_HEAD_INIT(init_task.sq_node),
++#ifdef CONFIG_SCHED_BMQ
++ .boost_prio = 0,
++#endif
++#ifdef CONFIG_SCHED_PDS
++ .deadline = 0,
++#endif
++ .time_slice = HZ,
++#else
+ .se = {
+ .group_node = LIST_HEAD_INIT(init_task.se.group_node),
+ },
+@@ -93,6 +110,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+ .run_list = LIST_HEAD_INIT(init_task.rt.run_list),
+ .time_slice = RR_TIMESLICE,
+ },
++#endif
+ .tasks = LIST_HEAD_INIT(init_task.tasks),
+ #ifdef CONFIG_SMP
+ .pushable_tasks = PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
+index fe782cd77388..d27d2154d71a 100644
+--- a/kernel/Kconfig.preempt
++++ b/kernel/Kconfig.preempt
+@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
+
+ config SCHED_CORE
+ bool "Core Scheduling for SMT"
+- depends on SCHED_SMT
++ depends on SCHED_SMT && !SCHED_ALT
+ help
+ This option permits Core Scheduling, a means of coordinated task
+ selection across SMT siblings. When enabled -- see
+@@ -135,7 +135,7 @@ config SCHED_CORE
+
+ config SCHED_CLASS_EXT
+ bool "Extensible Scheduling Class"
+- depends on BPF_SYSCALL && BPF_JIT && DEBUG_INFO_BTF
++ depends on BPF_SYSCALL && BPF_JIT && DEBUG_INFO_BTF && !SCHED_ALT
+ select STACKTRACE if STACKTRACE_SUPPORT
+ help
+ This option enables a new scheduler class sched_ext (SCX), which
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index a4dd285cdf39..5b4ebe58d032 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -620,7 +620,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
+ return ret;
+ }
+
+-#ifdef CONFIG_SMP
++#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
+ /*
+ * Helper routine for generate_sched_domains().
+ * Do cpusets a, b have overlapping effective cpus_allowed masks?
+@@ -1031,7 +1031,7 @@ void rebuild_sched_domains_locked(void)
+ /* Have scheduler rebuild the domains */
+ partition_and_rebuild_sched_domains(ndoms, doms, attr);
+ }
+-#else /* !CONFIG_SMP */
++#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
+ void rebuild_sched_domains_locked(void)
+ {
+ }
+@@ -2926,12 +2926,15 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ goto out_unlock;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(task)) {
+ cs->nr_migrate_dl_tasks++;
+ cs->sum_migrate_dl_bw += task->dl.dl_bw;
+ }
++#endif
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ if (!cs->nr_migrate_dl_tasks)
+ goto out_success;
+
+@@ -2952,6 +2955,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ }
+
+ out_success:
++#endif
+ /*
+ * Mark attach is in progress. This makes validate_change() fail
+ * changes which zero cpus/mems_allowed.
+@@ -2973,12 +2977,14 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
+ mutex_lock(&cpuset_mutex);
+ dec_attach_in_progress_locked(cs);
+
++#ifndef CONFIG_SCHED_ALT
+ if (cs->nr_migrate_dl_tasks) {
+ int cpu = cpumask_any(cs->effective_cpus);
+
+ dl_bw_free(cpu, cs->sum_migrate_dl_bw);
+ reset_migrate_dl_data(cs);
+ }
++#endif
+
+ mutex_unlock(&cpuset_mutex);
+ }
+diff --git a/kernel/delayacct.c b/kernel/delayacct.c
+index dead51de8eb5..8edef9676ab3 100644
+--- a/kernel/delayacct.c
++++ b/kernel/delayacct.c
+@@ -149,7 +149,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
+ */
+ t1 = tsk->sched_info.pcount;
+ t2 = tsk->sched_info.run_delay;
+- t3 = tsk->se.sum_exec_runtime;
++ t3 = tsk_seruntime(tsk);
+
+ d->cpu_count += t1;
+
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 619f0014c33b..7dc53ddd45a8 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -175,7 +175,7 @@ static void __exit_signal(struct task_struct *tsk)
+ sig->curr_target = next_thread(tsk);
+ }
+
+- add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
++ add_device_randomness((const void*) &tsk_seruntime(tsk),
+ sizeof(unsigned long long));
+
+ /*
+@@ -196,7 +196,7 @@ static void __exit_signal(struct task_struct *tsk)
+ sig->inblock += task_io_get_inblock(tsk);
+ sig->oublock += task_io_get_oublock(tsk);
+ task_io_accounting_add(&sig->ioac, &tsk->ioac);
+- sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
++ sig->sum_sched_runtime += tsk_seruntime(tsk);
+ sig->nr_threads--;
+ __unhash_process(tsk, group_dead);
+ write_sequnlock(&sig->stats_lock);
+diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
+index ebebd0eec7f6..802112207855 100644
+--- a/kernel/locking/rtmutex.c
++++ b/kernel/locking/rtmutex.c
+@@ -363,7 +363,7 @@ waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
+
+ waiter->tree.prio = __waiter_prio(task);
+- waiter->tree.deadline = task->dl.deadline;
++ waiter->tree.deadline = __tsk_deadline(task);
+ }
+
+ /*
+@@ -384,16 +384,20 @@ waiter_clone_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
+ * Only use with rt_waiter_node_{less,equal}()
+ */
+ #define task_to_waiter_node(p) \
+- &(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
++ &(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
+ #define task_to_waiter(p) \
+ &(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
+
+ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++ return (left->deadline < right->deadline);
++#else
+ if (left->prio < right->prio)
+ return 1;
+
++#ifndef CONFIG_SCHED_BMQ
+ /*
+ * If both waiters have dl_prio(), we check the deadlines of the
+ * associated tasks.
+@@ -402,16 +406,22 @@ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
+ */
+ if (dl_prio(left->prio))
+ return dl_time_before(left->deadline, right->deadline);
++#endif
+
+ return 0;
++#endif
+ }
+
+ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ struct rt_waiter_node *right)
+ {
++#ifdef CONFIG_SCHED_PDS
++ return (left->deadline == right->deadline);
++#else
+ if (left->prio != right->prio)
+ return 0;
+
++#ifndef CONFIG_SCHED_BMQ
+ /*
+ * If both waiters have dl_prio(), we check the deadlines of the
+ * associated tasks.
+@@ -420,8 +430,10 @@ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
+ */
+ if (dl_prio(left->prio))
+ return left->deadline == right->deadline;
++#endif
+
+ return 1;
++#endif
+ }
+
+ static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
+diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
+index 76d204b7d29c..de1a52f963e5 100644
+--- a/kernel/locking/ww_mutex.h
++++ b/kernel/locking/ww_mutex.h
+@@ -247,6 +247,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
+
+ /* equal static prio */
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_prio(a_prio)) {
+ if (dl_time_before(b->task->dl.deadline,
+ a->task->dl.deadline))
+@@ -256,6 +257,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
+ b->task->dl.deadline))
+ return false;
+ }
++#endif
+
+ /* equal prio */
+ }
+diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
+index 976092b7bd45..31d587c16ec1 100644
+--- a/kernel/sched/Makefile
++++ b/kernel/sched/Makefile
+@@ -28,7 +28,12 @@ endif
+ # These compilation units have roughly the same size and complexity - so their
+ # build parallelizes well and finishes roughly at once:
+ #
++ifdef CONFIG_SCHED_ALT
++obj-y += alt_core.o
++obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
++else
+ obj-y += core.o
+ obj-y += fair.o
++endif
+ obj-y += build_policy.o
+ obj-y += build_utility.o
+diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
+new file mode 100644
+index 000000000000..c59691742340
+--- /dev/null
++++ b/kernel/sched/alt_core.c
+@@ -0,0 +1,7458 @@
++/*
++ * kernel/sched/alt_core.c
++ *
++ * Core alternative kernel scheduler code and related syscalls
++ *
++ * Copyright (C) 1991-2002 Linus Torvalds
++ *
++ * 2009-08-13 Brainfuck deadline scheduling policy by Con Kolivas deletes
++ * a whole lot of those previous things.
++ * 2017-09-06 Priority and Deadline based Skip list multiple queue kernel
++ * scheduler by Alfred Chen.
++ * 2019-02-20 BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
++ */
++#include <linux/sched/clock.h>
++#include <linux/sched/cputime.h>
++#include <linux/sched/debug.h>
++#include <linux/sched/hotplug.h>
++#include <linux/sched/init.h>
++#include <linux/sched/isolation.h>
++#include <linux/sched/loadavg.h>
++#include <linux/sched/mm.h>
++#include <linux/sched/nohz.h>
++#include <linux/sched/stat.h>
++#include <linux/sched/wake_q.h>
++
++#include <linux/blkdev.h>
++#include <linux/context_tracking.h>
++#include <linux/cpuset.h>
++#include <linux/delayacct.h>
++#include <linux/init_task.h>
++#include <linux/kcov.h>
++#include <linux/kprobes.h>
++#include <linux/nmi.h>
++#include <linux/rseq.h>
++#include <linux/scs.h>
++
++#include <uapi/linux/sched/types.h>
++
++#include <asm/irq_regs.h>
++#include <asm/switch_to.h>
++
++#define CREATE_TRACE_POINTS
++#include <trace/events/sched.h>
++#include <trace/events/ipi.h>
++#undef CREATE_TRACE_POINTS
++
++#include "sched.h"
++#include "smp.h"
++
++#include "pelt.h"
++
++#include "../../io_uring/io-wq.h"
++#include "../smpboot.h"
++
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
++EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
++
++/*
++ * Export tracepoints that act as a bare tracehook (ie: have no trace event
++ * associated with them) to allow external modules to probe them.
++ */
++EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
++
++#ifdef CONFIG_SCHED_DEBUG
++#define sched_feat(x) (1)
++/*
++ * Print a warning if need_resched is set for the given duration (if
++ * LATENCY_WARN is enabled).
++ *
++ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
++ * per boot.
++ */
++__read_mostly int sysctl_resched_latency_warn_ms = 100;
++__read_mostly int sysctl_resched_latency_warn_once = 1;
++#else
++#define sched_feat(x) (0)
++#endif /* CONFIG_SCHED_DEBUG */
++
++#define ALT_SCHED_VERSION "v6.12-r0"
++
++#define STOP_PRIO (MAX_RT_PRIO - 1)
++
++/*
++ * Time slice
++ * (default: 4 msec, units: nanoseconds)
++ */
++unsigned int sysctl_sched_base_slice __read_mostly = (4 << 20);
++
++#include "alt_core.h"
++#include "alt_topology.h"
++
++/* Reschedule if less than this many nanoseconds are left (~100 μs) */
++#define RESCHED_NS (100 << 10)
++
++/**
++ * sched_yield_type - How calls to sched_yield() are handled.
++ * 0: No yield.
++ * 1: Requeue task. (default)
++ */
++int sched_yield_type __read_mostly = 1;
++
++#ifdef CONFIG_SMP
++cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
++
++#ifdef CONFIG_SCHED_SMT
++DEFINE_STATIC_KEY_FALSE(sched_smt_present);
++EXPORT_SYMBOL_GPL(sched_smt_present);
++
++cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++#endif
++
++/*
++ * Keep a unique ID per domain (we use the first CPUs number in the cpumask of
++ * the domain), this allows us to quickly tell if two cpus are in the same cache
++ * domain, see cpus_share_cache().
++ */
++DEFINE_PER_CPU(int, sd_llc_id);
++#endif /* CONFIG_SMP */
++
++DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next) do { } while (0)
++#endif
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch() do { } while (0)
++#endif
++
++static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS + 2] ____cacheline_aligned_in_smp;
++
++cpumask_t *const sched_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS - 1];
++cpumask_t *const sched_sg_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_pcore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
++cpumask_t *const sched_ecore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS + 1];
++
++/* task function */
++static inline const struct cpumask *task_user_cpus(struct task_struct *p)
++{
++ if (!p->user_cpus_ptr)
++ return cpu_possible_mask; /* &init_task.cpus_mask */
++ return p->user_cpus_ptr;
++}
++
++/* sched_queue related functions */
++static inline void sched_queue_init(struct sched_queue *q)
++{
++ int i;
++
++ bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
++ for(i = 0; i < SCHED_LEVELS; i++)
++ INIT_LIST_HEAD(&q->heads[i]);
++}
++
++/*
++ * Init idle task and put into queue structure of rq
++ * IMPORTANT: may be called multiple times for a single cpu
++ */
++static inline void sched_queue_init_idle(struct sched_queue *q,
++ struct task_struct *idle)
++{
++ INIT_LIST_HEAD(&q->heads[IDLE_TASK_SCHED_PRIO]);
++ list_add_tail(&idle->sq_node, &q->heads[IDLE_TASK_SCHED_PRIO]);
++ idle->on_rq = TASK_ON_RQ_QUEUED;
++}
++
++#define CLEAR_CACHED_PREEMPT_MASK(pr, low, high, cpu) \
++ if (low < pr && pr <= high) \
++ cpumask_clear_cpu(cpu, sched_preempt_mask + pr);
++
++#define SET_CACHED_PREEMPT_MASK(pr, low, high, cpu) \
++ if (low < pr && pr <= high) \
++ cpumask_set_cpu(cpu, sched_preempt_mask + pr);
++
++static atomic_t sched_prio_record = ATOMIC_INIT(0);
++
++/* water mark related functions */
++static inline void update_sched_preempt_mask(struct rq *rq)
++{
++ int prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
++ int last_prio = rq->prio;
++ int cpu, pr;
++
++ if (prio == last_prio)
++ return;
++
++ rq->prio = prio;
++#ifdef CONFIG_SCHED_PDS
++ rq->prio_idx = sched_prio2idx(rq->prio, rq);
++#endif
++ cpu = cpu_of(rq);
++ pr = atomic_read(&sched_prio_record);
++
++ if (prio < last_prio) {
++ if (IDLE_TASK_SCHED_PRIO == last_prio) {
++ rq->clear_idle_mask_func(cpu, sched_idle_mask);
++ last_prio -= 2;
++ }
++ CLEAR_CACHED_PREEMPT_MASK(pr, prio, last_prio, cpu);
++
++ return;
++ }
++ /* last_prio < prio */
++ if (IDLE_TASK_SCHED_PRIO == prio) {
++ rq->set_idle_mask_func(cpu, sched_idle_mask);
++ prio -= 2;
++ }
++ SET_CACHED_PREEMPT_MASK(pr, last_prio, prio, cpu);
++}
++
++/*
++ * Serialization rules:
++ *
++ * Lock order:
++ *
++ * p->pi_lock
++ * rq->lock
++ * hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
++ *
++ * rq1->lock
++ * rq2->lock where: rq1 < rq2
++ *
++ * Regular state:
++ *
++ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
++ * local CPU's rq->lock, it optionally removes the task from the runqueue and
++ * always looks at the local rq data structures to find the most eligible task
++ * to run next.
++ *
++ * Task enqueue is also under rq->lock, possibly taken from another CPU.
++ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
++ * the local CPU to avoid bouncing the runqueue state around [ see
++ * ttwu_queue_wakelist() ]
++ *
++ * Task wakeup, specifically wakeups that involve migration, are horribly
++ * complicated to avoid having to take two rq->locks.
++ *
++ * Special state:
++ *
++ * System-calls and anything external will use task_rq_lock() which acquires
++ * both p->pi_lock and rq->lock. As a consequence the state they change is
++ * stable while holding either lock:
++ *
++ * - sched_setaffinity()/
++ * set_cpus_allowed_ptr(): p->cpus_ptr, p->nr_cpus_allowed
++ * - set_user_nice(): p->se.load, p->*prio
++ * - __sched_setscheduler(): p->sched_class, p->policy, p->*prio,
++ * p->se.load, p->rt_priority,
++ * p->dl.dl_{runtime, deadline, period, flags, bw, density}
++ * - sched_setnuma(): p->numa_preferred_nid
++ * - sched_move_task(): p->sched_task_group
++ * - uclamp_update_active() p->uclamp*
++ *
++ * p->state <- TASK_*:
++ *
++ * is changed locklessly using set_current_state(), __set_current_state() or
++ * set_special_state(), see their respective comments, or by
++ * try_to_wake_up(). This latter uses p->pi_lock to serialize against
++ * concurrent self.
++ *
++ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
++ *
++ * is set by activate_task() and cleared by deactivate_task(), under
++ * rq->lock. Non-zero indicates the task is runnable, the special
++ * ON_RQ_MIGRATING state is used for migration without holding both
++ * rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
++ *
++ * Additionally it is possible to be ->on_rq but still be considered not
++ * runnable when p->se.sched_delayed is true. These tasks are on the runqueue
++ * but will be dequeued as soon as they get picked again. See the
++ * task_is_runnable() helper.
++ *
++ * p->on_cpu <- { 0, 1 }:
++ *
++ * is set by prepare_task() and cleared by finish_task() such that it will be
++ * set before p is scheduled-in and cleared after p is scheduled-out, both
++ * under rq->lock. Non-zero indicates the task is running on its CPU.
++ *
++ * [ The astute reader will observe that it is possible for two tasks on one
++ * CPU to have ->on_cpu = 1 at the same time. ]
++ *
++ * task_cpu(p): is changed by set_task_cpu(), the rules are:
++ *
++ * - Don't call set_task_cpu() on a blocked task:
++ *
++ * We don't care what CPU we're not running on, this simplifies hotplug,
++ * the CPU assignment of blocked tasks isn't required to be valid.
++ *
++ * - for try_to_wake_up(), called under p->pi_lock:
++ *
++ * This allows try_to_wake_up() to only take one rq->lock, see its comment.
++ *
++ * - for migration called under rq->lock:
++ * [ see task_on_rq_migrating() in task_rq_lock() ]
++ *
++ * o move_queued_task()
++ * o detach_task()
++ *
++ * - for migration called under double_rq_lock():
++ *
++ * o __migrate_swap_task()
++ * o push_rt_task() / pull_rt_task()
++ * o push_dl_task() / pull_dl_task()
++ * o dl_task_offline_migration()
++ *
++ */
++
++/*
++ * Context: p->pi_lock
++ */
++static inline struct rq *
++task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock, unsigned long *flags)
++{
++ struct rq *rq;
++ for (;;) {
++ rq = task_rq(p);
++ if (p->on_cpu || task_on_rq_queued(p)) {
++ raw_spin_lock_irqsave(&rq->lock, *flags);
++ if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++ *plock = &rq->lock;
++ return rq;
++ }
++ raw_spin_unlock_irqrestore(&rq->lock, *flags);
++ } else if (task_on_rq_migrating(p)) {
++ do {
++ cpu_relax();
++ } while (unlikely(task_on_rq_migrating(p)));
++ } else {
++ raw_spin_lock_irqsave(&p->pi_lock, *flags);
++ if (likely(!p->on_cpu && !p->on_rq && rq == task_rq(p))) {
++ *plock = &p->pi_lock;
++ return rq;
++ }
++ raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
++ }
++ }
++}
++
++static inline void
++task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock, unsigned long *flags)
++{
++ raw_spin_unlock_irqrestore(lock, *flags);
++}
++
++/*
++ * __task_rq_lock - lock the rq @p resides on.
++ */
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ lockdep_assert_held(&p->pi_lock);
++
++ for (;;) {
++ rq = task_rq(p);
++ raw_spin_lock(&rq->lock);
++ if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
++ return rq;
++ raw_spin_unlock(&rq->lock);
++
++ while (unlikely(task_on_rq_migrating(p)))
++ cpu_relax();
++ }
++}
++
++/*
++ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
++ */
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(p->pi_lock)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ for (;;) {
++ raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
++ rq = task_rq(p);
++ raw_spin_lock(&rq->lock);
++ /*
++ * move_queued_task() task_rq_lock()
++ *
++ * ACQUIRE (rq->lock)
++ * [S] ->on_rq = MIGRATING [L] rq = task_rq()
++ * WMB (__set_task_cpu()) ACQUIRE (rq->lock);
++ * [S] ->cpu = new_cpu [L] task_rq()
++ * [L] ->on_rq
++ * RELEASE (rq->lock)
++ *
++ * If we observe the old CPU in task_rq_lock(), the acquire of
++ * the old rq->lock will fully serialize against the stores.
++ *
++ * If we observe the new CPU in task_rq_lock(), the address
++ * dependency headed by '[L] rq = task_rq()' and the acquire
++ * will pair with the WMB to ensure we then also see migrating.
++ */
++ if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
++ return rq;
++ }
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++
++ while (unlikely(task_on_rq_migrating(p)))
++ cpu_relax();
++ }
++}
++
++static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock_irqsave(&rq->lock, rf->flags);
++}
++
++static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
++}
++
++DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
++ rq_lock_irqsave(_T->lock, &_T->rf),
++ rq_unlock_irqrestore(_T->lock, &_T->rf),
++ struct rq_flags rf)
++
++void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
++{
++ raw_spinlock_t *lock;
++
++ /* Matches synchronize_rcu() in __sched_core_enable() */
++ preempt_disable();
++
++ for (;;) {
++ lock = __rq_lockp(rq);
++ raw_spin_lock_nested(lock, subclass);
++ if (likely(lock == __rq_lockp(rq))) {
++ /* preempt_count *MUST* be > 1 */
++ preempt_enable_no_resched();
++ return;
++ }
++ raw_spin_unlock(lock);
++ }
++}
++
++void raw_spin_rq_unlock(struct rq *rq)
++{
++ raw_spin_unlock(rq_lockp(rq));
++}
++
++/*
++ * RQ-clock updating methods:
++ */
++
++static void update_rq_clock_task(struct rq *rq, s64 delta)
++{
++/*
++ * In theory, the compiler should just see 0 here, and optimize out the call
++ * to sched_rt_avg_update. But I don't trust it...
++ */
++ s64 __maybe_unused steal = 0, irq_delta = 0;
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++ irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
++
++ /*
++ * Since irq_time is only updated on {soft,}irq_exit, we might run into
++ * this case when a previous update_rq_clock() happened inside a
++ * {soft,}IRQ region.
++ *
++ * When this happens, we stop ->clock_task and only update the
++ * prev_irq_time stamp to account for the part that fit, so that a next
++ * update will consume the rest. This ensures ->clock_task is
++ * monotonic.
++ *
++ * It does however cause some slight miss-attribution of {soft,}IRQ
++ * time, a more accurate solution would be to update the irq_time using
++ * the current rq->clock timestamp, except that would require using
++ * atomic ops.
++ */
++ if (irq_delta > delta)
++ irq_delta = delta;
++
++ rq->prev_irq_time += irq_delta;
++ delta -= irq_delta;
++ delayacct_irq(rq->curr, irq_delta);
++#endif
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++ if (static_key_false((&paravirt_steal_rq_enabled))) {
++ steal = paravirt_steal_clock(cpu_of(rq));
++ steal -= rq->prev_steal_time_rq;
++
++ if (unlikely(steal > delta))
++ steal = delta;
++
++ rq->prev_steal_time_rq += steal;
++ delta -= steal;
++ }
++#endif
++
++ rq->clock_task += delta;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++ if ((irq_delta + steal))
++ update_irq_load_avg(rq, irq_delta + steal);
++#endif
++}
++
++static inline void update_rq_clock(struct rq *rq)
++{
++ s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
++
++ if (unlikely(delta <= 0))
++ return;
++ rq->clock += delta;
++ sched_update_rq_clock(rq);
++ update_rq_clock_task(rq, delta);
++}
++
++/*
++ * RQ Load update routine
++ */
++#define RQ_LOAD_HISTORY_BITS (sizeof(s32) * 8ULL)
++#define RQ_UTIL_SHIFT (8)
++#define RQ_LOAD_HISTORY_TO_UTIL(l) (((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
++
++#define LOAD_BLOCK(t) ((t) >> 17)
++#define LOAD_HALF_BLOCK(t) ((t) >> 16)
++#define BLOCK_MASK(t) ((t) & ((0x01 << 18) - 1))
++#define LOAD_BLOCK_BIT(b) (1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
++#define CURRENT_LOAD_BIT LOAD_BLOCK_BIT(0)
++
++static inline void rq_load_update(struct rq *rq)
++{
++ u64 time = rq->clock;
++ u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp), RQ_LOAD_HISTORY_BITS - 1);
++ u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
++ u64 curr = !!rq->nr_running;
++
++ if (delta) {
++ rq->load_history = rq->load_history >> delta;
++
++ if (delta < RQ_UTIL_SHIFT) {
++ rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
++ if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
++ rq->load_history ^= LOAD_BLOCK_BIT(delta);
++ }
++
++ rq->load_block = BLOCK_MASK(time) * prev;
++ } else {
++ rq->load_block += (time - rq->load_stamp) * prev;
++ }
++ if (prev ^ curr)
++ rq->load_history ^= CURRENT_LOAD_BIT;
++ rq->load_stamp = time;
++}
++
++unsigned long rq_load_util(struct rq *rq, unsigned long max)
++{
++ return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
++}
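Reading the helpers above: load_history is a 32-bit busy/idle bitmap of
~131 μs blocks (bit 31 tracks the in-progress block), and
RQ_LOAD_HISTORY_TO_UTIL() extracts the eight most recently completed blocks,
so the reported utilization scales with how many of them were busy. A worked
example of that reading (an interpretation of the macros as written, not
authoritative):

	/* Blocks 1..8 all busy  =>  load_history == 0x7f800000:
	 *   RQ_LOAD_HISTORY_TO_UTIL(0x7f800000) == 0xff
	 *   rq_load_util(rq, 1024) == 0xff * (1024 >> 8) == 1020  (~max) */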
++
++#ifdef CONFIG_SMP
++unsigned long sched_cpu_util(int cpu)
++{
++ return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_CPU_FREQ
++/**
++ * cpufreq_update_util - Take a note about CPU utilization changes.
++ * @rq: Runqueue to carry out the update for.
++ * @flags: Update reason flags.
++ *
++ * This function is called by the scheduler on the CPU whose utilization is
++ * being updated.
++ *
++ * It can only be called from RCU-sched read-side critical sections.
++ *
++ * The way cpufreq is currently arranged requires it to evaluate the CPU
++ * performance state (frequency/voltage) on a regular basis to prevent it from
++ * being stuck in a completely inadequate performance level for too long.
++ * That is not guaranteed to happen if the updates are only triggered from CFS
++ * and DL, though, because they may not be coming in if only RT tasks are
++ * active all the time (or there are RT tasks only).
++ *
++ * As a workaround for that issue, this function is called periodically by the
++ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
++ * but that really is a band-aid. Going forward it should be replaced with
++ * solutions targeted more specifically at RT tasks.
++ */
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++ struct update_util_data *data;
++
++#ifdef CONFIG_SMP
++ rq_load_update(rq);
++#endif
++ data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data, cpu_of(rq)));
++ if (data)
++ data->func(data, rq_clock(rq), flags);
++}
++#else
++static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
++{
++#ifdef CONFIG_SMP
++ rq_load_update(rq);
++#endif
++}
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++/*
++ * Tick may be needed by tasks in the runqueue depending on their policy and
++ * requirements. If tick is needed, lets send the target an IPI to kick it out
++ * of nohz mode if necessary.
++ */
++static inline void sched_update_tick_dependency(struct rq *rq)
++{
++ int cpu = cpu_of(rq);
++
++ if (!tick_nohz_full_cpu(cpu))
++ return;
++
++ if (rq->nr_running < 2)
++ tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
++ else
++ tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
++}
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_update_tick_dependency(struct rq *rq) { }
++#endif
++
++bool sched_task_on_rq(struct task_struct *p)
++{
++ return task_on_rq_queued(p);
++}
++
++unsigned long get_wchan(struct task_struct *p)
++{
++ unsigned long ip = 0;
++ unsigned int state;
++
++ if (!p || p == current)
++ return 0;
++
++ /* Only get wchan if task is blocked and we can keep it that way. */
++ raw_spin_lock_irq(&p->pi_lock);
++ state = READ_ONCE(p->__state);
++ smp_rmb(); /* see try_to_wake_up() */
++ if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
++ ip = __get_wchan(p);
++ raw_spin_unlock_irq(&p->pi_lock);
++
++ return ip;
++}
++
++/*
++ * Add/Remove/Requeue task to/from the runqueue routines
++ * Context: rq->lock
++ */
++#define __SCHED_DEQUEUE_TASK(p, rq, flags, func) \
++ sched_info_dequeue(rq, p); \
++ \
++ __list_del_entry(&p->sq_node); \
++ if (p->sq_node.prev == p->sq_node.next) { \
++ clear_bit(sched_idx2prio(p->sq_node.next - &rq->queue.heads[0], rq), \
++ rq->queue.bitmap); \
++ func; \
++ }
++
++#define __SCHED_ENQUEUE_TASK(p, rq, flags, func) \
++ sched_info_enqueue(rq, p); \
++ { \
++ int idx, prio; \
++ TASK_SCHED_PRIO_IDX(p, rq, idx, prio); \
++ list_add_tail(&p->sq_node, &rq->queue.heads[idx]); \
++ if (list_is_first(&p->sq_node, &rq->queue.heads[idx])) { \
++ set_bit(prio, rq->queue.bitmap); \
++ func; \
++ } \
++ }
++
++static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++ lockdep_assert_held(&rq->lock);
++
++ /*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: dequeue task reside on cpu%d from cpu%d\n",
++ task_cpu(p), cpu_of(rq));
++#endif
++
++ __SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++ --rq->nr_running;
++#ifdef CONFIG_SMP
++ if (1 == rq->nr_running)
++ cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++ sched_update_tick_dependency(rq);
++}
++
++static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
++{
++#ifdef ALT_SCHED_DEBUG
++ lockdep_assert_held(&rq->lock);
++
++ /*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: enqueue task reside on cpu%d to cpu%d\n",
++ task_cpu(p), cpu_of(rq));
++#endif
++
++ __SCHED_ENQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
++ ++rq->nr_running;
++#ifdef CONFIG_SMP
++ if (2 == rq->nr_running)
++ cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
++#endif
++
++ sched_update_tick_dependency(rq);
++}
++
++void requeue_task(struct task_struct *p, struct rq *rq)
++{
++ struct list_head *node = &p->sq_node;
++ int deq_idx, idx, prio;
++
++ TASK_SCHED_PRIO_IDX(p, rq, idx, prio);
++#ifdef ALT_SCHED_DEBUG
++ lockdep_assert_held(&rq->lock);
++ /*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
++ WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task reside on cpu%d\n",
++ cpu_of(rq), task_cpu(p));
++#endif
++ if (list_is_last(node, &rq->queue.heads[idx]))
++ return;
++
++ __list_del_entry(node);
++ if (node->prev == node->next && (deq_idx = node->next - &rq->queue.heads[0]) != idx)
++ clear_bit(sched_idx2prio(deq_idx, rq), rq->queue.bitmap);
++
++ list_add_tail(node, &rq->queue.heads[idx]);
++ if (list_is_first(node, &rq->queue.heads[idx]))
++ set_bit(prio, rq->queue.bitmap);
++ update_sched_preempt_mask(rq);
++}
++
++/*
++ * try_cmpxchg based fetch_or() macro so it works for different integer types:
++ */
++#define fetch_or(ptr, mask) \
++ ({ \
++ typeof(ptr) _ptr = (ptr); \
++ typeof(mask) _mask = (mask); \
++ typeof(*_ptr) _val = *_ptr; \
++ \
++ do { \
++ } while (!try_cmpxchg(_ptr, &_val, _val | _mask)); \
++ _val; \
++})
++
++#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
++/*
++ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
++ * this avoids any races wrt polling state changes and thereby avoids
++ * spurious IPIs.
++ */
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++ struct thread_info *ti = task_thread_info(p);
++ return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
++}
++
++/*
++ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
++ *
++ * If this returns true, then the idle task promises to call
++ * sched_ttwu_pending() and reschedule soon.
++ */
++static bool set_nr_if_polling(struct task_struct *p)
++{
++ struct thread_info *ti = task_thread_info(p);
++ typeof(ti->flags) val = READ_ONCE(ti->flags);
++
++ do {
++ if (!(val & _TIF_POLLING_NRFLAG))
++ return false;
++ if (val & _TIF_NEED_RESCHED)
++ return true;
++ } while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
++
++ return true;
++}
++
++#else
++static inline bool set_nr_and_not_polling(struct task_struct *p)
++{
++ set_tsk_need_resched(p);
++ return true;
++}
++
++#ifdef CONFIG_SMP
++static inline bool set_nr_if_polling(struct task_struct *p)
++{
++ return false;
++}
++#endif
++#endif
++
++static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++ struct wake_q_node *node = &task->wake_q;
++
++ /*
++ * Atomically grab the task, if ->wake_q is !nil already it means
++ * it's already queued (either by us or someone else) and will get the
++ * wakeup due to that.
++ *
++ * In order to ensure that a pending wakeup will observe our pending
++ * state, even in the failed case, an explicit smp_mb() must be used.
++ */
++ smp_mb__before_atomic();
++ if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
++ return false;
++
++ /*
++ * The head is context local, there can be no concurrency.
++ */
++ *head->lastp = node;
++ head->lastp = &node->next;
++ return true;
++}
++
++/**
++ * wake_q_add() - queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ */
++void wake_q_add(struct wake_q_head *head, struct task_struct *task)
++{
++ if (__wake_q_add(head, task))
++ get_task_struct(task);
++}
++
++/**
++ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
++ * @head: the wake_q_head to add @task to
++ * @task: the task to queue for 'later' wakeup
++ *
++ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
++ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
++ * instantly.
++ *
++ * This function must be used as-if it were wake_up_process(); IOW the task
++ * must be ready to be woken at this location.
++ *
++ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
++ * that already hold reference to @task can call the 'safe' version and trust
++ * wake_q to do the right thing depending whether or not the @task is already
++ * queued for wakeup.
++ */
++void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
++{
++ if (!__wake_q_add(head, task))
++ put_task_struct(task);
++}
++
++void wake_up_q(struct wake_q_head *head)
++{
++ struct wake_q_node *node = head->first;
++
++ while (node != WAKE_Q_TAIL) {
++ struct task_struct *task;
++
++ task = container_of(node, struct task_struct, wake_q);
++ /* task can safely be re-inserted now: */
++ node = node->next;
++ task->wake_q.next = NULL;
++
++ /*
++ * wake_up_process() executes a full barrier, which pairs with
++ * the queueing in wake_q_add() so as not to miss wakeups.
++ */
++ wake_up_process(task);
++ put_task_struct(task);
++ }
++}
++
++/*
++ * resched_curr - mark rq's current task 'to be rescheduled now'.
++ *
++ * On UP this means the setting of the need_resched flag, on SMP it
++ * might also involve a cross-CPU call to trigger the scheduler on
++ * the target CPU.
++ */
++static inline void resched_curr(struct rq *rq)
++{
++ struct task_struct *curr = rq->curr;
++ int cpu;
++
++ lockdep_assert_held(&rq->lock);
++
++ if (test_tsk_need_resched(curr))
++ return;
++
++ cpu = cpu_of(rq);
++ if (cpu == smp_processor_id()) {
++ set_tsk_need_resched(curr);
++ set_preempt_need_resched();
++ return;
++ }
++
++ if (set_nr_and_not_polling(curr))
++ smp_send_reschedule(cpu);
++ else
++ trace_sched_wake_idle_without_ipi(cpu);
++}
++
++void resched_cpu(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ if (cpu_online(cpu) || cpu == smp_processor_id())
++ resched_curr(cpu_rq(cpu));
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++#ifdef CONFIG_SMP
++#ifdef CONFIG_NO_HZ_COMMON
++/*
++ * This routine will record that the CPU is going idle with tick stopped.
++ * This info will be used in performing idle load balancing in the future.
++ */
++void nohz_balance_enter_idle(int cpu) {}
++
++/*
++ * In the semi idle case, use the nearest busy CPU for migrating timers
++ * from an idle CPU. This is good for power-savings.
++ *
++ * We don't do similar optimization for completely idle system, as
++ * selecting an idle CPU will add more delays to the timers than intended
++ * (as that CPU's timer base may not be up to date wrt jiffies etc).
++ */
++int get_nohz_timer_target(void)
++{
++ int i, cpu = smp_processor_id(), default_cpu = -1;
++ struct cpumask *mask;
++ const struct cpumask *hk_mask;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
++ if (!idle_cpu(cpu))
++ return cpu;
++ default_cpu = cpu;
++ }
++
++ hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
++
++ for (mask = per_cpu(sched_cpu_topo_masks, cpu);
++ mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
++ for_each_cpu_and(i, mask, hk_mask)
++ if (!idle_cpu(i))
++ return i;
++
++ if (default_cpu == -1)
++ default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
++ cpu = default_cpu;
++
++ return cpu;
++}
++
++/*
++ * When add_timer_on() enqueues a timer into the timer wheel of an
++ * idle CPU then this timer might expire before the next timer event
++ * which is scheduled to wake up that CPU. In case of a completely
++ * idle system the next event might even be infinite time into the
++ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
++ * leaves the inner idle loop so the newly added timer is taken into
++ * account when the CPU goes back to idle and evaluates the timer
++ * wheel for the next timer event.
++ */
++static inline void wake_up_idle_cpu(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ if (cpu == smp_processor_id())
++ return;
++
++ /*
++ * Set TIF_NEED_RESCHED and send an IPI if in the non-polling
++ * part of the idle loop. This forces an exit from the idle loop
++ * and a round trip to schedule(). Now this could be optimized
++ * because a simple new idle loop iteration is enough to
++ * re-evaluate the next tick. Provided some re-ordering of tick
++ * nohz functions that would need to follow TIF_NR_POLLING
++ * clearing:
++ *
++ * - On most architectures, a simple fetch_or on ti::flags with a
++ * "0" value would be enough to know if an IPI needs to be sent.
++ *
++ * - x86 needs to perform a last need_resched() check between
++ * monitor and mwait which doesn't take timers into account.
++ * There a dedicated TIF_TIMER flag would be required to
++ * fetch_or here and be checked along with TIF_NEED_RESCHED
++ * before mwait().
++ *
++ * However, remote timer enqueue is not such a frequent event
++ * and testing of the above solutions didn't appear to report
++ * much benefits.
++ */
++ if (set_nr_and_not_polling(rq->idle))
++ smp_send_reschedule(cpu);
++ else
++ trace_sched_wake_idle_without_ipi(cpu);
++}
++
++static inline bool wake_up_full_nohz_cpu(int cpu)
++{
++ /*
++ * We just need the target to call irq_exit() and re-evaluate
++ * the next tick. The nohz full kick at least implies that.
++ * If needed we can still optimize that later with an
++ * empty IRQ.
++ */
++ if (cpu_is_offline(cpu))
++ return true; /* Don't try to wake offline CPUs. */
++ if (tick_nohz_full_cpu(cpu)) {
++ if (cpu != smp_processor_id() ||
++ tick_nohz_tick_stopped())
++ tick_nohz_full_kick_cpu(cpu);
++ return true;
++ }
++
++ return false;
++}
++
++void wake_up_nohz_cpu(int cpu)
++{
++ if (!wake_up_full_nohz_cpu(cpu))
++ wake_up_idle_cpu(cpu);
++}
++
++static void nohz_csd_func(void *info)
++{
++ struct rq *rq = info;
++ int cpu = cpu_of(rq);
++ unsigned int flags;
++
++ /*
++ * Release the rq::nohz_csd.
++ */
++ flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
++ WARN_ON(!(flags & NOHZ_KICK_MASK));
++
++ rq->idle_balance = idle_cpu(cpu);
++ if (rq->idle_balance && !need_resched()) {
++ rq->nohz_idle_balance = flags;
++ raise_softirq_irqoff(SCHED_SOFTIRQ);
++ }
++}
++
++#endif /* CONFIG_NO_HZ_COMMON */
++#endif /* CONFIG_SMP */
++
++static inline void wakeup_preempt(struct rq *rq)
++{
++ if (sched_rq_first_task(rq) != rq->curr)
++ resched_curr(rq);
++}
++
++static __always_inline
++int __task_state_match(struct task_struct *p, unsigned int state)
++{
++ if (READ_ONCE(p->__state) & state)
++ return 1;
++
++ if (READ_ONCE(p->saved_state) & state)
++ return -1;
++
++ return 0;
++}
++
++static __always_inline
++int task_state_match(struct task_struct *p, unsigned int state)
++{
++ /*
++ * Serialize against current_save_and_set_rtlock_wait_state(),
++ * current_restore_rtlock_saved_state(), and __refrigerator().
++ */
++ guard(raw_spinlock_irq)(&p->pi_lock);
++
++ return __task_state_match(p, state);
++}
++
++/*
++ * wait_task_inactive - wait for a thread to unschedule.
++ *
++ * Wait for the thread to block in any of the states set in @match_state.
++ * If it changes, i.e. @p might have woken up, then return zero. When we
++ * succeed in waiting for @p to be off its CPU, we return a positive number
++ * (its total switch count). If a second call a short while later returns the
++ * same number, the caller can be sure that @p has remained unscheduled the
++ * whole time.
++ *
++ * The caller must ensure that the task *will* unschedule sometime soon,
++ * else this function might spin for a *long* time. This function can't
++ * be called with interrupts off, or it may introduce deadlock with
++ * smp_call_function() if an IPI is sent by the same process we are
++ * waiting to become inactive.
++ */
++unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
++{
++ unsigned long flags;
++ int running, queued, match;
++ unsigned long ncsw;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ for (;;) {
++ rq = task_rq(p);
++
++ /*
++ * If the task is actively running on another CPU
++ * still, just relax and busy-wait without holding
++ * any locks.
++ *
++ * NOTE! Since we don't hold any locks, it's not
++ * even sure that "rq" stays as the right runqueue!
++ * But we don't care, since this will return false
++ * if the runqueue has changed and p is actually now
++ * running somewhere else!
++ */
++ while (task_on_cpu(p)) {
++ if (!task_state_match(p, match_state))
++ return 0;
++ cpu_relax();
++ }
++
++ /*
++ * Ok, time to look more closely! We need the rq
++ * lock now, to be *sure*. If we're wrong, we'll
++ * just go back and repeat.
++ */
++ task_access_lock_irqsave(p, &lock, &flags);
++ trace_sched_wait_task(p);
++ running = task_on_cpu(p);
++ queued = p->on_rq;
++ ncsw = 0;
++ if ((match = __task_state_match(p, match_state))) {
++ /*
++ * When matching on p->saved_state, consider this task
++ * still queued so it will wait.
++ */
++ if (match < 0)
++ queued = 1;
++ ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
++ }
++ task_access_unlock_irqrestore(p, lock, &flags);
++
++ /*
++ * If it changed from the expected state, bail out now.
++ */
++ if (unlikely(!ncsw))
++ break;
++
++ /*
++ * Was it really running after all now that we
++ * checked with the proper locks actually held?
++ *
++ * Oops. Go back and try again..
++ */
++ if (unlikely(running)) {
++ cpu_relax();
++ continue;
++ }
++
++ /*
++ * It's not enough that it's not actively running,
++ * it must be off the runqueue _entirely_, and not
++ * preempted!
++ *
++ * So if it was still runnable (but just not actively
++ * running right now), it's preempted, and we should
++ * yield - it could be a while.
++ */
++ if (unlikely(queued)) {
++ ktime_t to = NSEC_PER_SEC / HZ;
++
++ set_current_state(TASK_UNINTERRUPTIBLE);
++ schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
++ continue;
++ }
++
++ /*
++ * Ahh, all good. It wasn't running, and it wasn't
++ * runnable, which means that it will never become
++ * running in the future either. We're all done!
++ */
++ break;
++ }
++
++ return ncsw;
++}
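++
++/*
++ * Typical usage (a sketch of the pattern described above): sample the
++ * switch count twice to prove the task stayed off its CPU in between:
++ *
++ *	unsigned long ncsw = wait_task_inactive(p, TASK_UNINTERRUPTIBLE);
++ *	...
++ *	if (ncsw && wait_task_inactive(p, TASK_UNINTERRUPTIBLE) == ncsw)
++ *		;	// p has not been scheduled since the first call
++ */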
++
++#ifdef CONFIG_SCHED_HRTICK
++/*
++ * Use HR-timers to deliver accurate preemption points.
++ */
++
++static void hrtick_clear(struct rq *rq)
++{
++ if (hrtimer_active(&rq->hrtick_timer))
++ hrtimer_cancel(&rq->hrtick_timer);
++}
++
++/*
++ * High-resolution timer tick.
++ * Runs from hardirq context with interrupts disabled.
++ */
++static enum hrtimer_restart hrtick(struct hrtimer *timer)
++{
++ struct rq *rq = container_of(timer, struct rq, hrtick_timer);
++
++ WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
++
++ raw_spin_lock(&rq->lock);
++ resched_curr(rq);
++ raw_spin_unlock(&rq->lock);
++
++ return HRTIMER_NORESTART;
++}
++
++/*
++ * Use hrtick when:
++ * - enabled by features
++ * - hrtimer is actually high res
++ */
++static inline int hrtick_enabled(struct rq *rq)
++{
++ /**
++ * Alt schedule FW doesn't support sched_feat yet
++ if (!sched_feat(HRTICK))
++ return 0;
++ */
++ if (!cpu_active(cpu_of(rq)))
++ return 0;
++ return hrtimer_is_hres_active(&rq->hrtick_timer);
++}
++
++#ifdef CONFIG_SMP
++
++static void __hrtick_restart(struct rq *rq)
++{
++ struct hrtimer *timer = &rq->hrtick_timer;
++ ktime_t time = rq->hrtick_time;
++
++ hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
++}
++
++/*
++ * called from hardirq (IPI) context
++ */
++static void __hrtick_start(void *arg)
++{
++ struct rq *rq = arg;
++
++ raw_spin_lock(&rq->lock);
++ __hrtick_restart(rq);
++ raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and IRQs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++ struct hrtimer *timer = &rq->hrtick_timer;
++ s64 delta;
++
++ /*
++ * Don't schedule slices shorter than 10000ns, that just
++ * doesn't make sense and can cause timer DoS.
++ */
++ delta = max_t(s64, delay, 10000LL);
++
++ rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
++
++ if (rq == this_rq())
++ __hrtick_restart(rq);
++ else
++ smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
++}
++
++#else
++/*
++ * Called to set the hrtick timer state.
++ *
++ * called with rq->lock held and IRQs disabled
++ */
++static inline void hrtick_start(struct rq *rq, u64 delay)
++{
++ /*
++ * Don't schedule slices shorter than 10000ns, that just
++ * doesn't make sense. Rely on vruntime for fairness.
++ */
++ delay = max_t(u64, delay, 10000LL);
++ hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
++ HRTIMER_MODE_REL_PINNED_HARD);
++}
++#endif /* CONFIG_SMP */
++
++static void hrtick_rq_init(struct rq *rq)
++{
++#ifdef CONFIG_SMP
++ INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
++#endif
++
++ hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
++ rq->hrtick_timer.function = hrtick;
++}
++#else /* CONFIG_SCHED_HRTICK */
++static inline int hrtick_enabled(struct rq *rq)
++{
++ return 0;
++}
++
++static inline void hrtick_clear(struct rq *rq)
++{
++}
++
++static inline void hrtick_rq_init(struct rq *rq)
++{
++}
++#endif /* CONFIG_SCHED_HRTICK */
++
++/*
++ * activate_task - move a task to the runqueue.
++ *
++ * Context: rq->lock
++ */
++static void activate_task(struct task_struct *p, struct rq *rq)
++{
++ enqueue_task(p, rq, ENQUEUE_WAKEUP);
++
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++ ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++
++ /*
++ * If in_iowait is set, the code below may not trigger any cpufreq
++ * utilization updates, so do it here explicitly with the IOWAIT flag
++ * passed.
++ */
++ cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
++}
++
++static void block_task(struct rq *rq, struct task_struct *p)
++{
++ dequeue_task(p, rq, DEQUEUE_SLEEP);
++
++ WRITE_ONCE(p->on_rq, 0);
++ ASSERT_EXCLUSIVE_WRITER(p->on_rq);
++ if (p->sched_contributes_to_load)
++ rq->nr_uninterruptible++;
++
++ if (p->in_iowait) {
++ atomic_inc(&rq->nr_iowait);
++ delayacct_blkio_start();
++ }
++}
++
++static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
++{
++#ifdef CONFIG_SMP
++ /*
++ * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
++ * successfully executed on another CPU. We must ensure that updates of
++ * per-task data have been completed by this moment.
++ */
++ smp_wmb();
++
++ WRITE_ONCE(task_thread_info(p)->cpu, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
++{
++#ifdef CONFIG_SCHED_DEBUG
++ unsigned int state = READ_ONCE(p->__state);
++
++ /*
++ * We should never call set_task_cpu() on a blocked task,
++ * ttwu() will sort out the placement.
++ */
++ WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
++
++#ifdef CONFIG_LOCKDEP
++ /*
++ * The caller should hold either p->pi_lock or rq->lock, when changing
++ * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
++ *
++ * sched_move_task() holds both and thus holding either pins the cgroup,
++ * see task_group().
++ */
++ WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
++ lockdep_is_held(&task_rq(p)->lock)));
++#endif
++ /*
++ * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
++ */
++ WARN_ON_ONCE(!cpu_online(new_cpu));
++
++ WARN_ON_ONCE(is_migration_disabled(p));
++#endif
++ trace_sched_migrate_task(p, new_cpu);
++
++ if (task_cpu(p) != new_cpu) {
++ rseq_migrate(p);
++ sched_mm_cid_migrate_from(p);
++ perf_event_task_migrate(p);
++ }
++
++ __set_task_cpu(p, new_cpu);
++}
++
++#define MDF_FORCE_ENABLED 0x80
++
++static void
++__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++ /*
++ * This here violates the locking rules for affinity, since we're only
++ * supposed to change these variables while holding both rq->lock and
++ * p->pi_lock.
++ *
++ * HOWEVER, it magically works, because ttwu() is the only code that
++ * accesses these variables under p->pi_lock and only does so after
++ * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
++ * before finish_task().
++ *
++ * XXX do further audits, this smells like something putrid.
++ */
++ SCHED_WARN_ON(!p->on_cpu);
++ p->cpus_ptr = new_mask;
++}
++
++void migrate_disable(void)
++{
++ struct task_struct *p = current;
++ int cpu;
++
++ if (p->migration_disabled) {
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Warn about overflow half-way through the range.
++ */
++ WARN_ON_ONCE((s16)p->migration_disabled < 0);
++#endif
++ p->migration_disabled++;
++ return;
++ }
++
++ guard(preempt)();
++ cpu = smp_processor_id();
++ if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
++ cpu_rq(cpu)->nr_pinned++;
++ p->migration_disabled = 1;
++ p->migration_flags &= ~MDF_FORCE_ENABLED;
++
++ /*
++ * Violates locking rules! see comment in __do_set_cpus_ptr().
++ */
++ if (p->cpus_ptr == &p->cpus_mask)
++ __do_set_cpus_ptr(p, cpumask_of(cpu));
++ }
++}
++EXPORT_SYMBOL_GPL(migrate_disable);
++
++void migrate_enable(void)
++{
++ struct task_struct *p = current;
++
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Check both overflow from migrate_disable() and superfluous
++ * migrate_enable().
++ */
++ if (WARN_ON_ONCE((s16)p->migration_disabled <= 0))
++ return;
++#endif
++
++ if (p->migration_disabled > 1) {
++ p->migration_disabled--;
++ return;
++ }
++
++ /*
++ * Ensure stop_task runs either before or after this, and that
++ * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
++ */
++ guard(preempt)();
++ /*
++ * Assumption: current should be running on an allowed CPU.
++ */
++ WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
++ if (p->cpus_ptr != &p->cpus_mask)
++ __do_set_cpus_ptr(p, &p->cpus_mask);
++ /*
++ * Mustn't clear migration_disabled() until cpus_ptr points back at the
++ * regular cpus_mask, otherwise things that race (eg.
++ * select_fallback_rq) get confused.
++ */
++ barrier();
++ p->migration_disabled = 0;
++ this_rq()->nr_pinned--;
++}
++EXPORT_SYMBOL_GPL(migrate_enable);
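++
++/*
++ * Usage sketch (illustrative): sections nest, and migration is only
++ * re-enabled by the outermost migrate_enable():
++ *
++ *	migrate_disable();	// pin current to this CPU, count = 1
++ *	migrate_disable();	// nested, count = 2
++ *	...			// preemptible, but not migratable
++ *	migrate_enable();	// count = 1, still pinned
++ *	migrate_enable();	// count = 0, cpus_ptr restored
++ */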
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++ return rq->nr_pinned;
++}
++
++/*
++ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
++ * __set_cpus_allowed_ptr() and select_fallback_rq().
++ */
++static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
++{
++ /* When not in the task's cpumask, no point in looking further. */
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++ return false;
++
++ /* migrate_disabled() must be allowed to finish. */
++ if (is_migration_disabled(p))
++ return cpu_online(cpu);
++
++ /* Non-kernel threads are not allowed during either online or offline. */
++ if (!(p->flags & PF_KTHREAD))
++ return cpu_active(cpu) && task_cpu_possible(cpu, p);
++
++ /* KTHREAD_IS_PER_CPU is always allowed. */
++ if (kthread_is_per_cpu(p))
++ return cpu_online(cpu);
++
++ /* Regular kernel threads don't get to stay during offline. */
++ if (cpu_dying(cpu))
++ return false;
++
++ /* But are allowed during online. */
++ return cpu_online(cpu);
++}
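++
++/*
++ * Informal summary of the checks above:
++ *
++ *	task type		online && !active	active
++ *	user task		no			yes (if possible)
++ *	per-CPU kthread		yes			yes
++ *	regular kthread		yes, unless dying	yes
++ *
++ * A migration-disabled task only needs the CPU to be online.
++ */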
++
++/*
++ * This is how migration works:
++ *
++ * 1) we invoke migration_cpu_stop() on the target CPU using
++ * stop_one_cpu().
++ * 2) stopper starts to run (implicitly forcing the migrated thread
++ * off the CPU)
++ * 3) it checks whether the migrated task is still in the wrong runqueue.
++ * 4) if it's in the wrong runqueue then the migration thread removes
++ * it and puts it into the right queue.
++ * 5) stopper completes and stop_one_cpu() returns and the migration
++ * is done.
++ */
++
++/*
++ * move_queued_task - move a queued task to new rq.
++ *
++ * Returns (locked) new rq. Old rq's lock is released.
++ */
++struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu)
++{
++ lockdep_assert_held(&rq->lock);
++
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
++ dequeue_task(p, rq, 0);
++ set_task_cpu(p, new_cpu);
++ raw_spin_unlock(&rq->lock);
++
++ rq = cpu_rq(new_cpu);
++
++ raw_spin_lock(&rq->lock);
++ WARN_ON_ONCE(task_cpu(p) != new_cpu);
++
++ sched_mm_cid_migrate_to(rq, p);
++
++ sched_task_sanity_check(p, rq);
++ enqueue_task(p, rq, 0);
++ WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
++ wakeup_preempt(rq);
++
++ return rq;
++}
++
++struct migration_arg {
++ struct task_struct *task;
++ int dest_cpu;
++};
++
++/*
++ * Move (not current) task off this CPU, onto the destination CPU. We're doing
++ * this because either it can't run here any more (set_cpus_allowed()
++ * away from this CPU, or CPU going down), or because we're
++ * attempting to rebalance this task on exec (sched_exec).
++ *
++ * So we race with normal scheduler movements, but that's OK, as long
++ * as the task is no longer on this CPU.
++ */
++static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
++{
++ /* Affinity changed (again). */
++ if (!is_cpu_allowed(p, dest_cpu))
++ return rq;
++
++ return move_queued_task(rq, p, dest_cpu);
++}
++
++/*
++ * migration_cpu_stop - this will be executed by a high-prio stopper thread
++ * and performs thread migration by bumping thread off CPU then
++ * 'pushing' onto another runqueue.
++ */
++static int migration_cpu_stop(void *data)
++{
++ struct migration_arg *arg = data;
++ struct task_struct *p = arg->task;
++ struct rq *rq = this_rq();
++ unsigned long flags;
++
++ /*
++ * The original target CPU might have gone down and we might
++ * be on another CPU but it doesn't matter.
++ */
++ local_irq_save(flags);
++ /*
++ * We need to explicitly wake pending tasks before running
++ * __migrate_task() such that we will not miss enforcing cpus_ptr
++ * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
++ */
++ flush_smp_call_function_queue();
++
++ raw_spin_lock(&p->pi_lock);
++ raw_spin_lock(&rq->lock);
++ /*
++ * If task_rq(p) != rq, it cannot be migrated here, because we're
++ * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
++ * we're holding p->pi_lock.
++ */
++ if (task_rq(p) == rq && task_on_rq_queued(p)) {
++ update_rq_clock(rq);
++ rq = __migrate_task(rq, p, arg->dest_cpu);
++ }
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ return 0;
++}
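++
++/*
++ * Invocation sketch (this is what affine_move_task() below does):
++ *
++ *	struct migration_arg arg = { p, dest_cpu };
++ *	stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++ */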
++
++static inline void
++set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
++{
++ cpumask_copy(&p->cpus_mask, ctx->new_mask);
++ p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
++
++ /*
++ * Swap in a new user_cpus_ptr if SCA_USER flag set
++ */
++ if (ctx->flags & SCA_USER)
++ swap(p->user_cpus_ptr, ctx->user_mask);
++}
++
++static void
++__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
++{
++ lockdep_assert_held(&p->pi_lock);
++ set_cpus_allowed_common(p, ctx);
++}
++
++/*
++ * Used for kthread_bind() and select_fallback_rq(); in both cases the user
++ * affinity (if any) should be destroyed too.
++ */
++void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
++{
++ struct affinity_context ac = {
++ .new_mask = new_mask,
++ .user_mask = NULL,
++ .flags = SCA_USER, /* clear the user requested mask */
++ };
++ union cpumask_rcuhead {
++ cpumask_t cpumask;
++ struct rcu_head rcu;
++ };
++
++ __do_set_cpus_allowed(p, &ac);
++
++ /*
++ * Because this is called with p->pi_lock held, it is not possible
++ * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
++ * kfree_rcu().
++ */
++ kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
++}
++
++int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
++ int node)
++{
++ cpumask_t *user_mask;
++ unsigned long flags;
++
++ /*
++ * Always clear dst->user_cpus_ptr first, as the two tasks' user_cpus_ptr
++ * pointers may differ by now due to racing.
++ */
++ dst->user_cpus_ptr = NULL;
++
++ /*
++ * This check is racy and losing the race is a valid situation.
++ * It is not worth the extra overhead of taking the pi_lock on
++ * every fork/clone.
++ */
++ if (data_race(!src->user_cpus_ptr))
++ return 0;
++
++ user_mask = alloc_user_cpus_ptr(node);
++ if (!user_mask)
++ return -ENOMEM;
++
++ /*
++ * Use pi_lock to protect content of user_cpus_ptr
++ *
++ * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
++ * do_set_cpus_allowed().
++ */
++ raw_spin_lock_irqsave(&src->pi_lock, flags);
++ if (src->user_cpus_ptr) {
++ swap(dst->user_cpus_ptr, user_mask);
++ cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
++ }
++ raw_spin_unlock_irqrestore(&src->pi_lock, flags);
++
++ if (unlikely(user_mask))
++ kfree(user_mask);
++
++ return 0;
++}
++
++static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
++{
++ struct cpumask *user_mask = NULL;
++
++ swap(p->user_cpus_ptr, user_mask);
++
++ return user_mask;
++}
++
++void release_user_cpus_ptr(struct task_struct *p)
++{
++ kfree(clear_user_cpus_ptr(p));
++}
++
++#endif
++
++/**
++ * task_curr - is this task currently executing on a CPU?
++ * @p: the task in question.
++ *
++ * Return: 1 if the task is currently executing. 0 otherwise.
++ */
++inline int task_curr(const struct task_struct *p)
++{
++ return cpu_curr(task_cpu(p)) == p;
++}
++
++#ifdef CONFIG_SMP
++/***
++ * kick_process - kick a running thread to enter/exit the kernel
++ * @p: the to-be-kicked thread
++ *
++ * Cause a process which is running on another CPU to enter
++ * kernel-mode, without any delay. (to get signals handled.)
++ *
++ * NOTE: this function doesn't have to take the runqueue lock,
++ * because all it wants to ensure is that the remote task enters
++ * the kernel. If the IPI races and the task has been migrated
++ * to another CPU then no harm is done and the purpose has been
++ * achieved as well.
++ */
++void kick_process(struct task_struct *p)
++{
++ guard(preempt)();
++ int cpu = task_cpu(p);
++
++ if ((cpu != smp_processor_id()) && task_curr(p))
++ smp_send_reschedule(cpu);
++}
++EXPORT_SYMBOL_GPL(kick_process);
++
++/*
++ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
++ *
++ * A few notes on cpu_active vs cpu_online:
++ *
++ * - cpu_active must be a subset of cpu_online
++ *
++ * - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
++ * see __set_cpus_allowed_ptr(). At this point the newly online
++ * CPU isn't yet part of the sched domains, and balancing will not
++ * see it.
++ *
++ * - on cpu-down we clear cpu_active() to mask the sched domains and
++ * prevent the load balancer from placing new tasks on the to-be-removed
++ * CPU. Existing tasks will remain running there and will be taken
++ * off.
++ *
++ * This means that fallback selection must not select !active CPUs.
++ * And can assume that any active CPU must be online. Conversely
++ * select_task_rq() below may allow selection of !active CPUs in order
++ * to satisfy the above rules.
++ */
++static int select_fallback_rq(int cpu, struct task_struct *p)
++{
++ int nid = cpu_to_node(cpu);
++ const struct cpumask *nodemask = NULL;
++ enum { cpuset, possible, fail } state = cpuset;
++ int dest_cpu;
++
++ /*
++ * If the node that the CPU is on has been offlined, cpu_to_node()
++ * will return -1. There is no CPU on the node, and we should
++ * select a CPU on another node.
++ */
++ if (nid != -1) {
++ nodemask = cpumask_of_node(nid);
++
++ /* Look for allowed, online CPU in same node. */
++ for_each_cpu(dest_cpu, nodemask) {
++ if (is_cpu_allowed(p, dest_cpu))
++ return dest_cpu;
++ }
++ }
++
++ for (;;) {
++ /* Any allowed, online CPU? */
++ for_each_cpu(dest_cpu, p->cpus_ptr) {
++ if (!is_cpu_allowed(p, dest_cpu))
++ continue;
++ goto out;
++ }
++
++ /* No more Mr. Nice Guy. */
++ switch (state) {
++ case cpuset:
++ if (cpuset_cpus_allowed_fallback(p)) {
++ state = possible;
++ break;
++ }
++ fallthrough;
++ case possible:
++ /*
++ * XXX When called from select_task_rq() we only
++ * hold p->pi_lock and again violate locking order.
++ *
++ * More yuck to audit.
++ */
++ do_set_cpus_allowed(p, task_cpu_possible_mask(p));
++ state = fail;
++ break;
++
++ case fail:
++ BUG();
++ break;
++ }
++ }
++
++out:
++ if (state != cpuset) {
++ /*
++ * Don't tell them about moving exiting tasks or
++ * kernel threads (both mm NULL), since they never
++ * leave the kernel.
++ */
++ if (p->mm && printk_ratelimit()) {
++ printk_deferred("process %d (%s) no longer affine to cpu%d\n",
++ task_pid_nr(p), p->comm, cpu);
++ }
++ }
++
++ return dest_cpu;
++}
++
++static inline void
++sched_preempt_mask_flush(cpumask_t *mask, int prio, int ref)
++{
++ int cpu;
++
++ cpumask_copy(mask, sched_preempt_mask + ref);
++ if (prio < ref) {
++ for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
++ if (prio < cpu_rq(cpu)->prio)
++ cpumask_set_cpu(cpu, mask);
++ }
++ } else {
++ for_each_cpu_andnot(cpu, mask, sched_idle_mask) {
++ if (prio >= cpu_rq(cpu)->prio)
++ cpumask_clear_cpu(cpu, mask);
++ }
++ }
++}
++
++static inline int
++preempt_mask_check(cpumask_t *preempt_mask, cpumask_t *allow_mask, int prio)
++{
++ cpumask_t *mask = sched_preempt_mask + prio;
++ int pr = atomic_read(&sched_prio_record);
++
++ if (pr != prio && SCHED_QUEUE_BITS - 1 != prio) {
++ sched_preempt_mask_flush(mask, prio, pr);
++ atomic_set(&sched_prio_record, prio);
++ }
++
++ return cpumask_and(preempt_mask, allow_mask, mask);
++}
++
++__read_mostly idle_select_func_t idle_select_func ____cacheline_aligned_in_smp = cpumask_and;
++
++static inline int select_task_rq(struct task_struct *p)
++{
++ cpumask_t allow_mask, mask;
++
++ if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
++ return select_fallback_rq(task_cpu(p), p);
++
++ if (idle_select_func(&mask, &allow_mask, sched_idle_mask) ||
++ preempt_mask_check(&mask, &allow_mask, task_sched_prio(p)))
++ return best_mask_cpu(task_cpu(p), &mask);
++
++ return best_mask_cpu(task_cpu(p), &allow_mask);
++}
++
++void sched_set_stop_task(int cpu, struct task_struct *stop)
++{
++ static struct lock_class_key stop_pi_lock;
++ struct sched_param stop_param = { .sched_priority = STOP_PRIO };
++ struct sched_param start_param = { .sched_priority = 0 };
++ struct task_struct *old_stop = cpu_rq(cpu)->stop;
++
++ if (stop) {
++ /*
++ * Make it appear like a SCHED_FIFO task; it's something
++ * userspace knows about and won't get confused about.
++ *
++ * Also, it will make PI more or less work without too
++ * much confusion -- but then, stop work should not
++ * rely on PI working anyway.
++ */
++ sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
++
++ /*
++ * The PI code calls rt_mutex_setprio() with ->pi_lock held to
++ * adjust the effective priority of a task. As a result,
++ * rt_mutex_setprio() can trigger (RT) balancing operations,
++ * which can then trigger wakeups of the stop thread to push
++ * around the current task.
++ *
++ * The stop task itself will never be part of the PI-chain, it
++ * never blocks, therefore that ->pi_lock recursion is safe.
++ * Tell lockdep about this by placing the stop->pi_lock in its
++ * own class.
++ */
++ lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
++ }
++
++ cpu_rq(cpu)->stop = stop;
++
++ if (old_stop) {
++ /*
++ * Reset it back to a normal scheduling policy so that
++ * it can die in pieces.
++ */
++ sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
++ }
++}
++
++static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
++ raw_spinlock_t *lock, unsigned long irq_flags)
++ __releases(rq->lock)
++ __releases(p->pi_lock)
++{
++ /* Can the task run on the task's current CPU? If so, we're done */
++ if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
++ if (p->migration_disabled) {
++ if (likely(p->cpus_ptr != &p->cpus_mask))
++ __do_set_cpus_ptr(p, &p->cpus_mask);
++ p->migration_disabled = 0;
++ p->migration_flags |= MDF_FORCE_ENABLED;
++ /* When p is migrate_disabled, rq->lock should be held */
++ rq->nr_pinned--;
++ }
++
++ if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
++ struct migration_arg arg = { p, dest_cpu };
++
++ /* Need help from migration thread: drop lock and wait. */
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
++ return 0;
++ }
++ if (task_on_rq_queued(p)) {
++ /*
++ * OK, since we're going to drop the lock immediately
++ * afterwards anyway.
++ */
++ update_rq_clock(rq);
++ rq = move_queued_task(rq, p, dest_cpu);
++ lock = &rq->lock;
++ }
++ }
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ return 0;
++}
++
++static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
++ struct affinity_context *ctx,
++ struct rq *rq,
++ raw_spinlock_t *lock,
++ unsigned long irq_flags)
++{
++ const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
++ const struct cpumask *cpu_valid_mask = cpu_active_mask;
++ bool kthread = p->flags & PF_KTHREAD;
++ int dest_cpu;
++ int ret = 0;
++
++ if (kthread || is_migration_disabled(p)) {
++ /*
++ * Kernel threads are allowed on online && !active CPUs,
++ * however, during cpu-hot-unplug, even these might get pushed
++ * away if not KTHREAD_IS_PER_CPU.
++ *
++ * Specifically, migration_disabled() tasks must not fail the
++ * cpumask_any_and_distribute() pick below, esp. so on
++ * SCA_MIGRATE_ENABLE, otherwise we'll not call
++ * set_cpus_allowed_common() and actually reset p->cpus_ptr.
++ */
++ cpu_valid_mask = cpu_online_mask;
++ }
++
++ if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ /*
++ * Must re-check here, to close a race against __kthread_bind(),
++ * sched_setaffinity() is not guaranteed to observe the flag.
++ */
++ if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
++ goto out;
++
++ dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
++ if (dest_cpu >= nr_cpu_ids) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ __do_set_cpus_allowed(p, ctx);
++
++ return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
++
++out:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++
++ return ret;
++}
++
++/*
++ * Change a given task's CPU affinity. Migrate the thread to a
++ * proper CPU and schedule it away if the CPU it is executing on
++ * is removed from the allowed bitmask.
++ *
++ * NOTE: the caller must have a valid reference to the task, the
++ * task must not exit() & deallocate itself prematurely. The
++ * call is not atomic; no spinlocks may be held.
++ */
++int __set_cpus_allowed_ptr(struct task_struct *p,
++ struct affinity_context *ctx)
++{
++ unsigned long irq_flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++ rq = __task_access_lock(p, &lock);
++ /*
++ * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
++ * flags are set.
++ */
++ if (p->user_cpus_ptr &&
++ !(ctx->flags & SCA_USER) &&
++ cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
++ ctx->new_mask = rq->scratch_mask;
++
++ return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
++}
++
++int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
++{
++ struct affinity_context ac = {
++ .new_mask = new_mask,
++ .flags = 0,
++ };
++
++ return __set_cpus_allowed_ptr(p, &ac);
++}
++EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
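++
++/*
++ * Usage sketch (illustrative; "worker" is a made-up task pointer):
++ *
++ *	set_cpus_allowed_ptr(worker, cpumask_of(2));	 // pin to CPU 2
++ *	set_cpus_allowed_ptr(worker, cpu_possible_mask); // allow anywhere
++ */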
++
++/*
++ * Change a given task's CPU affinity to the intersection of its current
++ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
++ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
++ * affinity or use cpu_online_mask instead.
++ *
++ * If the resulting mask is empty, leave the affinity unchanged and return
++ * -EINVAL.
++ */
++static int restrict_cpus_allowed_ptr(struct task_struct *p,
++ struct cpumask *new_mask,
++ const struct cpumask *subset_mask)
++{
++ struct affinity_context ac = {
++ .new_mask = new_mask,
++ .flags = 0,
++ };
++ unsigned long irq_flags;
++ raw_spinlock_t *lock;
++ struct rq *rq;
++ int err;
++
++ raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
++ rq = __task_access_lock(p, &lock);
++
++ if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
++ err = -EINVAL;
++ goto err_unlock;
++ }
++
++ return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
++
++err_unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
++ return err;
++}
++
++/*
++ * Restrict the CPU affinity of task @p so that it is a subset of
++ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
++ * old affinity mask. If the resulting mask is empty, we warn and walk
++ * up the cpuset hierarchy until we find a suitable mask.
++ */
++void force_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++ cpumask_var_t new_mask;
++ const struct cpumask *override_mask = task_cpu_possible_mask(p);
++
++ alloc_cpumask_var(&new_mask, GFP_KERNEL);
++
++ /*
++ * __migrate_task() can fail silently in the face of concurrent
++ * offlining of the chosen destination CPU, so take the hotplug
++ * lock to ensure that the migration succeeds.
++ */
++ cpus_read_lock();
++ if (!cpumask_available(new_mask))
++ goto out_set_mask;
++
++ if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
++ goto out_free_mask;
++
++ /*
++ * We failed to find a valid subset of the affinity mask for the
++ * task, so override it based on its cpuset hierarchy.
++ */
++ cpuset_cpus_allowed(p, new_mask);
++ override_mask = new_mask;
++
++out_set_mask:
++ if (printk_ratelimit()) {
++ printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
++ task_pid_nr(p), p->comm,
++ cpumask_pr_args(override_mask));
++ }
++
++ WARN_ON(set_cpus_allowed_ptr(p, override_mask));
++out_free_mask:
++ cpus_read_unlock();
++ free_cpumask_var(new_mask);
++}
++
++/*
++ * Restore the affinity of a task @p which was previously restricted by a
++ * call to force_compatible_cpus_allowed_ptr().
++ *
++ * It is the caller's responsibility to serialise this with any calls to
++ * force_compatible_cpus_allowed_ptr(@p).
++ */
++void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
++{
++ struct affinity_context ac = {
++ .new_mask = task_user_cpus(p),
++ .flags = 0,
++ };
++ int ret;
++
++ /*
++ * Try to restore the old affinity mask with __sched_setaffinity().
++ * Cpuset masking will be done there too.
++ */
++ ret = __sched_setaffinity(p, &ac);
++ WARN_ON_ONCE(ret);
++}
++
++#else /* CONFIG_SMP */
++
++static inline int select_task_rq(struct task_struct *p)
++{
++ return 0;
++}
++
++static inline bool rq_has_pinned_tasks(struct rq *rq)
++{
++ return false;
++}
++
++#endif /* !CONFIG_SMP */
++
++static void
++ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq;
++
++ if (!schedstat_enabled())
++ return;
++
++ rq = this_rq();
++
++#ifdef CONFIG_SMP
++ if (cpu == rq->cpu) {
++ __schedstat_inc(rq->ttwu_local);
++ __schedstat_inc(p->stats.nr_wakeups_local);
++ } else {
++ /** Alt schedule FW ToDo:
++ * How to do ttwu_wake_remote
++ */
++ }
++#endif /* CONFIG_SMP */
++
++ __schedstat_inc(rq->ttwu_count);
++ __schedstat_inc(p->stats.nr_wakeups);
++}
++
++/*
++ * Mark the task runnable.
++ */
++static inline void ttwu_do_wakeup(struct task_struct *p)
++{
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ trace_sched_wakeup(p);
++}
++
++static inline void
++ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
++{
++ if (p->sched_contributes_to_load)
++ rq->nr_uninterruptible--;
++
++ if (
++#ifdef CONFIG_SMP
++ !(wake_flags & WF_MIGRATED) &&
++#endif
++ p->in_iowait) {
++ delayacct_blkio_end(p);
++ atomic_dec(&task_rq(p)->nr_iowait);
++ }
++
++ activate_task(p, rq);
++ wakeup_preempt(rq);
++
++ ttwu_do_wakeup(p);
++}
++
++/*
++ * Consider @p being inside a wait loop:
++ *
++ * for (;;) {
++ * set_current_state(TASK_UNINTERRUPTIBLE);
++ *
++ * if (CONDITION)
++ * break;
++ *
++ * schedule();
++ * }
++ * __set_current_state(TASK_RUNNING);
++ *
++ * between set_current_state() and schedule(). In this case @p is still
++ * runnable, so all that needs doing is change p->state back to TASK_RUNNING in
++ * an atomic manner.
++ *
++ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
++ * then schedule() must still happen and p->state can be changed to
++ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
++ * need to do a full wakeup with enqueue.
++ *
++ * Returns: %true when the wakeup is done,
++ * %false otherwise.
++ */
++static int ttwu_runnable(struct task_struct *p, int wake_flags)
++{
++ struct rq *rq;
++ raw_spinlock_t *lock;
++ int ret = 0;
++
++ rq = __task_access_lock(p, &lock);
++ if (task_on_rq_queued(p)) {
++ if (!task_on_cpu(p)) {
++ /*
++ * When on_rq && !on_cpu the task is preempted, see if
++ * it should preempt the task that is current now.
++ */
++ update_rq_clock(rq);
++ wakeup_preempt(rq);
++ }
++ ttwu_do_wakeup(p);
++ ret = 1;
++ }
++ __task_access_unlock(p, lock);
++
++ return ret;
++}
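++
++/*
++ * The matching waker side of the wait-loop sketched above (illustrative):
++ *
++ *	CONDITION = 1;
++ *	wake_up_process(p);	// full barrier pairs with set_current_state()
++ */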
++
++#ifdef CONFIG_SMP
++void sched_ttwu_pending(void *arg)
++{
++ struct llist_node *llist = arg;
++ struct rq *rq = this_rq();
++ struct task_struct *p, *t;
++ struct rq_flags rf;
++
++ if (!llist)
++ return;
++
++ rq_lock_irqsave(rq, &rf);
++ update_rq_clock(rq);
++
++ llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
++ if (WARN_ON_ONCE(p->on_cpu))
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++ if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
++ set_task_cpu(p, cpu_of(rq));
++
++ ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
++ }
++
++ /*
++ * Must be after enqueueing at least one task such that
++ * idle_cpu() does not observe a false-negative -- if it does,
++ * it is possible for select_idle_siblings() to stack a number
++ * of tasks on this CPU during that window.
++ *
++ * It is OK to clear ttwu_pending when another task is pending.
++ * We will receive an IPI after local IRQs are enabled and then enqueue it.
++ * Since now nr_running > 0, idle_cpu() will always get the correct result.
++ */
++ WRITE_ONCE(rq->ttwu_pending, 0);
++ rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Prepare the scene for sending an IPI for a remote smp_call
++ *
++ * Returns true if the caller can proceed with sending the IPI.
++ * Returns false otherwise.
++ */
++bool call_function_single_prep_ipi(int cpu)
++{
++ if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
++ trace_sched_wake_idle_without_ipi(cpu);
++ return false;
++ }
++
++ return true;
++}
++
++/*
++ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
++ * necessary. The wakee CPU on receipt of the IPI will queue the task
++ * via sched_ttwu_wakeup() for activation so the wakee incurs the cost
++ * of the wakeup instead of the waker.
++ */
++static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
++
++ WRITE_ONCE(rq->ttwu_pending, 1);
++ __smp_call_single_queue(cpu, &p->wake_entry.llist);
++}
++
++static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
++{
++ /*
++ * Do not complicate things with the async wake_list while the CPU is
++ * in hotplug state.
++ */
++ if (!cpu_active(cpu))
++ return false;
++
++ /* Ensure the task will still be allowed to run on the CPU. */
++ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
++ return false;
++
++ /*
++ * If the CPU does not share cache, then queue the task on the
++ * remote rq's wakelist to avoid accessing remote data.
++ */
++ if (!cpus_share_cache(smp_processor_id(), cpu))
++ return true;
++
++ if (cpu == smp_processor_id())
++ return false;
++
++ /*
++ * If the wakee CPU is idle, or the task is descheduling and the
++ * only running task on the CPU, then use the wakelist to offload
++ * the task activation to the idle (or soon-to-be-idle) CPU as
++ * the current CPU is likely busy. nr_running is checked to
++ * avoid unnecessary task stacking.
++ *
++ * Note that we can only get here with (wakee) p->on_rq=0,
++ * p->on_cpu can be whatever, we've done the dequeue, so
++ * the wakee has been accounted out of ->nr_running.
++ */
++ if (!cpu_rq(cpu)->nr_running)
++ return true;
++
++ return false;
++}
++
++static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
++ sched_clock_cpu(cpu); /* Sync clocks across CPUs */
++ __ttwu_queue_wakelist(p, cpu, wake_flags);
++ return true;
++ }
++
++ return false;
++}
++
++void wake_up_if_idle(int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ guard(rcu)();
++ if (is_idle_task(rcu_dereference(rq->curr))) {
++ guard(raw_spinlock_irqsave)(&rq->lock);
++ if (is_idle_task(rq->curr))
++ resched_curr(rq);
++ }
++}
++
++extern struct static_key_false sched_asym_cpucapacity;
++
++static __always_inline bool sched_asym_cpucap_active(void)
++{
++ return static_branch_unlikely(&sched_asym_cpucapacity);
++}
++
++bool cpus_equal_capacity(int this_cpu, int that_cpu)
++{
++ if (!sched_asym_cpucap_active())
++ return true;
++
++ if (this_cpu == that_cpu)
++ return true;
++
++ return arch_scale_cpu_capacity(this_cpu) == arch_scale_cpu_capacity(that_cpu);
++}
++
++bool cpus_share_cache(int this_cpu, int that_cpu)
++{
++ if (this_cpu == that_cpu)
++ return true;
++
++ return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
++}
++#else /* !CONFIG_SMP */
++
++static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
++{
++ return false;
++}
++
++#endif /* CONFIG_SMP */
++
++static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ if (ttwu_queue_wakelist(p, cpu, wake_flags))
++ return;
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++ ttwu_do_activate(rq, p, wake_flags);
++ raw_spin_unlock(&rq->lock);
++}
++
++/*
++ * Invoked from try_to_wake_up() to check whether the task can be woken up.
++ *
++ * The caller holds p::pi_lock if p != current or has preemption
++ * disabled when p == current.
++ *
++ * The rules of saved_state:
++ *
++ * The related locking code always holds p::pi_lock when updating
++ * p::saved_state, which means the code is fully serialized in both cases.
++ *
++ * For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
++ * No other bits set. This allows us to distinguish all wakeup scenarios.
++ *
++ * For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
++ * allows us to prevent early wakeup of tasks before they can be run on
++ * asymmetric ISA architectures (e.g. ARMv9).
++ */
++static __always_inline
++bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
++{
++ int match;
++
++ if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++ WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
++ state != TASK_RTLOCK_WAIT);
++ }
++
++ *success = !!(match = __task_state_match(p, state));
++
++ /*
++ * Saved state preserves the task state across blocking on
++ * an RT lock or TASK_FREEZABLE tasks. If the state matches,
++ * set p::saved_state to TASK_RUNNING, but do not wake the task
++ * because it waits for a lock wakeup or __thaw_task(). Also
++ * indicate success because from the regular waker's point of
++ * view this has succeeded.
++ *
++ * After acquiring the lock the task will restore p::__state
++ * from p::saved_state which ensures that the regular
++ * wakeup is not lost. The restore will also set
++ * p::saved_state to TASK_RUNNING so any further tests will
++ * not result in false positives vs. @success
++ */
++ if (match < 0)
++ p->saved_state = TASK_RUNNING;
++
++ return match > 0;
++}
++
++/*
++ * Notes on Program-Order guarantees on SMP systems.
++ *
++ * MIGRATION
++ *
++ * The basic program-order guarantee on SMP systems is that when a task [t]
++ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
++ * execution on its new CPU [c1].
++ *
++ * For migration (of runnable tasks) this is provided by the following means:
++ *
++ * A) UNLOCK of the rq(c0)->lock scheduling out task t
++ * B) migration for t is required to synchronize *both* rq(c0)->lock and
++ * rq(c1)->lock (if not at the same time, then in that order).
++ * C) LOCK of the rq(c1)->lock scheduling in task
++ *
++ * Transitivity guarantees that B happens after A and C after B.
++ * Note: we only require RCpc transitivity.
++ * Note: the CPU doing B need not be c0 or c1
++ *
++ * Example:
++ *
++ * CPU0 CPU1 CPU2
++ *
++ * LOCK rq(0)->lock
++ * sched-out X
++ * sched-in Y
++ * UNLOCK rq(0)->lock
++ *
++ * LOCK rq(0)->lock // orders against CPU0
++ * dequeue X
++ * UNLOCK rq(0)->lock
++ *
++ * LOCK rq(1)->lock
++ * enqueue X
++ * UNLOCK rq(1)->lock
++ *
++ * LOCK rq(1)->lock // orders against CPU2
++ * sched-out Z
++ * sched-in X
++ * UNLOCK rq(1)->lock
++ *
++ *
++ * BLOCKING -- aka. SLEEP + WAKEUP
++ *
++ * For blocking we (obviously) need to provide the same guarantee as for
++ * migration. However the means are completely different as there is no lock
++ * chain to provide order. Instead we do:
++ *
++ * 1) smp_store_release(X->on_cpu, 0) -- finish_task()
++ * 2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
++ *
++ * Example:
++ *
++ * CPU0 (schedule) CPU1 (try_to_wake_up) CPU2 (schedule)
++ *
++ * LOCK rq(0)->lock LOCK X->pi_lock
++ * dequeue X
++ * sched-out X
++ * smp_store_release(X->on_cpu, 0);
++ *
++ * smp_cond_load_acquire(&X->on_cpu, !VAL);
++ * X->state = WAKING
++ * set_task_cpu(X,2)
++ *
++ * LOCK rq(2)->lock
++ * enqueue X
++ * X->state = RUNNING
++ * UNLOCK rq(2)->lock
++ *
++ * LOCK rq(2)->lock // orders against CPU1
++ * sched-out Z
++ * sched-in X
++ * UNLOCK rq(2)->lock
++ *
++ * UNLOCK X->pi_lock
++ * UNLOCK rq(0)->lock
++ *
++ *
++ * However; for wakeups there is a second guarantee we must provide, namely we
++ * must observe the state that led to our wakeup. That is, not only must our
++ * task observe its own prior state, it must also observe the stores prior to
++ * its wakeup.
++ *
++ * This means that any means of doing remote wakeups must order the CPU doing
++ * the wakeup against the CPU the task is going to end up running on. This,
++ * however, is already required for the regular Program-Order guarantee above,
++ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
++ *
++ */
++
++/**
++ * try_to_wake_up - wake up a thread
++ * @p: the thread to be awakened
++ * @state: the mask of task states that can be woken
++ * @wake_flags: wake modifier flags (WF_*)
++ *
++ * Conceptually does:
++ *
++ * If (@state & @p->state) @p->state = TASK_RUNNING.
++ *
++ * If the task was not queued/runnable, also place it back on a runqueue.
++ *
++ * This function is atomic against schedule() which would dequeue the task.
++ *
++ * It issues a full memory barrier before accessing @p->state, see the comment
++ * with set_current_state().
++ *
++ * Uses p->pi_lock to serialize against concurrent wake-ups.
++ *
++ * Relies on p->pi_lock stabilizing:
++ * - p->sched_class
++ * - p->cpus_ptr
++ * - p->sched_task_group
++ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
++ *
++ * Tries really hard to only take one task_rq(p)->lock for performance.
++ * Takes rq->lock in:
++ * - ttwu_runnable() -- old rq, unavoidable, see comment there;
++ * - ttwu_queue() -- new rq, for enqueue of the task;
++ * - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
++ *
++ * As a consequence we race really badly with just about everything. See the
++ * many memory barriers and their comments for details.
++ *
++ * Return: %true if @p->state changes (an actual wakeup was done),
++ * %false otherwise.
++ */
++int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
++{
++ guard(preempt)();
++ int cpu, success = 0;
++
++ if (p == current) {
++ /*
++ * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
++ * == smp_processor_id()'. Together this means we can special
++ * case the whole 'p->on_rq && ttwu_runnable()' case below
++ * without taking any locks.
++ *
++ * In particular:
++ * - we rely on Program-Order guarantees for all the ordering,
++ * - we're serialized against set_special_state() by virtue of
++ * it disabling IRQs (this allows not taking ->pi_lock).
++ */
++ if (!ttwu_state_match(p, state, &success))
++ goto out;
++
++ trace_sched_waking(p);
++ ttwu_do_wakeup(p);
++ goto out;
++ }
++
++ /*
++ * If we are going to wake up a thread waiting for CONDITION we
++ * need to ensure that CONDITION=1 done by the caller can not be
++ * reordered with p->state check below. This pairs with smp_store_mb()
++ * in set_current_state() that the waiting thread does.
++ */
++ scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
++ smp_mb__after_spinlock();
++ if (!ttwu_state_match(p, state, &success))
++ break;
++
++ trace_sched_waking(p);
++
++ /*
++ * Ensure we load p->on_rq _after_ p->state, otherwise it would
++ * be possible to, falsely, observe p->on_rq == 0 and get stuck
++ * in smp_cond_load_acquire() below.
++ *
++ * sched_ttwu_pending() try_to_wake_up()
++ * STORE p->on_rq = 1 LOAD p->state
++ * UNLOCK rq->lock
++ *
++ * __schedule() (switch to task 'p')
++ * LOCK rq->lock smp_rmb();
++ * smp_mb__after_spinlock();
++ * UNLOCK rq->lock
++ *
++ * [task p]
++ * STORE p->state = UNINTERRUPTIBLE LOAD p->on_rq
++ *
++ * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++ * __schedule(). See the comment for smp_mb__after_spinlock().
++ *
++ * A similar smp_rmb() lives in __task_needs_rq_lock().
++ */
++ smp_rmb();
++ if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
++ break;
++
++#ifdef CONFIG_SMP
++ /*
++ * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
++ * possible to, falsely, observe p->on_cpu == 0.
++ *
++ * One must be running (->on_cpu == 1) in order to remove oneself
++ * from the runqueue.
++ *
++ * __schedule() (switch to task 'p') try_to_wake_up()
++ * STORE p->on_cpu = 1 LOAD p->on_rq
++ * UNLOCK rq->lock
++ *
++ * __schedule() (put 'p' to sleep)
++ * LOCK rq->lock smp_rmb();
++ * smp_mb__after_spinlock();
++ * STORE p->on_rq = 0 LOAD p->on_cpu
++ *
++ * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
++ * __schedule(). See the comment for smp_mb__after_spinlock().
++ *
++ * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
++ * schedule()'s deactivate_task() has 'happened' and p will no longer
++ * care about its own p->state. See the comment in __schedule().
++ */
++ smp_acquire__after_ctrl_dep();
++
++ /*
++ * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
++ * == 0), which means we need to do an enqueue, change p->state to
++ * TASK_WAKING such that we can unlock p->pi_lock before doing the
++ * enqueue, such as ttwu_queue_wakelist().
++ */
++ WRITE_ONCE(p->__state, TASK_WAKING);
++
++ /*
++ * If the owning (remote) CPU is still in the middle of schedule() with
++ * this task as prev, consider queueing p on the remote CPU's wake_list
++ * which potentially sends an IPI instead of spinning on p->on_cpu to
++ * let the waker make forward progress. This is safe because IRQs are
++ * disabled and the IPI will deliver after on_cpu is cleared.
++ *
++ * Ensure we load task_cpu(p) after p->on_cpu:
++ *
++ * set_task_cpu(p, cpu);
++ * STORE p->cpu = @cpu
++ * __schedule() (switch to task 'p')
++ * LOCK rq->lock
++ * smp_mb__after_spin_lock() smp_cond_load_acquire(&p->on_cpu)
++ * STORE p->on_cpu = 1 LOAD p->cpu
++ *
++ * to ensure we observe the correct CPU on which the task is currently
++ * scheduling.
++ */
++ if (smp_load_acquire(&p->on_cpu) &&
++ ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
++ break;
++
++ /*
++ * If the owning (remote) CPU is still in the middle of schedule() with
++ * this task as prev, wait until it's done referencing the task.
++ *
++ * Pairs with the smp_store_release() in finish_task().
++ *
++ * This ensures that tasks getting woken will be fully ordered against
++ * their previous state and preserve Program Order.
++ */
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++
++ sched_task_ttwu(p);
++
++ if ((wake_flags & WF_CURRENT_CPU) &&
++ cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
++ cpu = smp_processor_id();
++ else
++ cpu = select_task_rq(p);
++
++ if (cpu != task_cpu(p)) {
++ if (p->in_iowait) {
++ delayacct_blkio_end(p);
++ atomic_dec(&task_rq(p)->nr_iowait);
++ }
++
++ wake_flags |= WF_MIGRATED;
++ set_task_cpu(p, cpu);
++ }
++#else
++ sched_task_ttwu(p);
++
++ cpu = task_cpu(p);
++#endif /* CONFIG_SMP */
++
++ ttwu_queue(p, cpu, wake_flags);
++ }
++out:
++ if (success)
++ ttwu_stat(p, task_cpu(p), wake_flags);
++
++ return success;
++}
++
++static bool __task_needs_rq_lock(struct task_struct *p)
++{
++ unsigned int state = READ_ONCE(p->__state);
++
++ /*
++ * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
++ * the task is blocked. Make sure to check @state since ttwu() can drop
++ * locks at the end, see ttwu_queue_wakelist().
++ */
++ if (state == TASK_RUNNING || state == TASK_WAKING)
++ return true;
++
++ /*
++ * Ensure we load p->on_rq after p->__state, otherwise it would be
++ * possible to, falsely, observe p->on_rq == 0.
++ *
++ * See try_to_wake_up() for a longer comment.
++ */
++ smp_rmb();
++ if (p->on_rq)
++ return true;
++
++#ifdef CONFIG_SMP
++ /*
++ * Ensure the task has finished __schedule() and will not be referenced
++ * anymore. Again, see try_to_wake_up() for a longer comment.
++ */
++ smp_rmb();
++ smp_cond_load_acquire(&p->on_cpu, !VAL);
++#endif
++
++ return false;
++}
++
++/**
++ * task_call_func - Invoke a function on task in fixed state
++ * @p: Process for which the function is to be invoked, can be @current.
++ * @func: Function to invoke.
++ * @arg: Argument to function.
++ *
++ * Fix the task in its current state by avoiding wakeups and/or rq operations
++ * and call @func(@arg) on it. This function can use task_is_runnable() and
++ * task_curr() to work out what the state is, if required. Given that @func
++ * can be invoked with a runqueue lock held, it had better be quite
++ * lightweight.
++ *
++ * Returns:
++ * Whatever @func returns
++ */
++int task_call_func(struct task_struct *p, task_call_f func, void *arg)
++{
++ struct rq *rq = NULL;
++ struct rq_flags rf;
++ int ret;
++
++ raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
++
++ if (__task_needs_rq_lock(p))
++ rq = __task_rq_lock(p, &rf);
++
++ /*
++ * At this point the task is pinned; either:
++ * - blocked and we're holding off wakeups (pi->lock)
++ * - woken, and we're holding off enqueue (rq->lock)
++ * - queued, and we're holding off schedule (rq->lock)
++ * - running, and we're holding off de-schedule (rq->lock)
++ *
++ * The called function (@func) can use: task_curr(), p->on_rq and
++ * p->__state to differentiate between these states.
++ */
++ ret = func(p, arg);
++
++ if (rq)
++ __task_rq_unlock(rq, &rf);
++
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
++ return ret;
++}
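++
++/*
++ * Usage sketch (illustrative; get_task_state() is a made-up callback):
++ *
++ *	static int get_task_state(struct task_struct *p, void *arg)
++ *	{
++ *		*(unsigned int *)arg = READ_ONCE(p->__state);
++ *		return 0;
++ *	}
++ *	...
++ *	unsigned int state;
++ *	task_call_func(p, get_task_state, &state);
++ */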
++
++/**
++ * cpu_curr_snapshot - Return a snapshot of the currently running task
++ * @cpu: The CPU on which to snapshot the task.
++ *
++ * Returns the task_struct pointer of the task "currently" running on
++ * the specified CPU. If the same task is running on that CPU throughout,
++ * the return value will be a pointer to that task's task_struct structure.
++ * If the CPU did any context switches even vaguely concurrently with the
++ * execution of this function, the return value will be a pointer to the
++ * task_struct structure of a randomly chosen task that was running on
++ * that CPU somewhere around the time that this function was executing.
++ *
++ * If the specified CPU was offline, the return value is whatever it
++ * is, perhaps a pointer to the task_struct structure of that CPU's idle
++ * task, but there is no guarantee. Callers wishing a useful return
++ * value must take some action to ensure that the specified CPU remains
++ * online throughout.
++ *
++ * This function executes full memory barriers before and after fetching
++ * the pointer, which permits the caller to confine this function's fetch
++ * with respect to the caller's accesses to other shared variables.
++ */
++struct task_struct *cpu_curr_snapshot(int cpu)
++{
++ struct task_struct *t;
++
++ smp_mb(); /* Pairing determined by caller's synchronization design. */
++ t = rcu_dereference(cpu_curr(cpu));
++ smp_mb(); /* Pairing determined by caller's synchronization design. */
++ return t;
++}
++
++/**
++ * wake_up_process - Wake up a specific process
++ * @p: The process to be woken up.
++ *
++ * Attempt to wake up the nominated process and move it to the set of runnable
++ * processes.
++ *
++ * Return: 1 if the process was woken up, 0 if it was already running.
++ *
++ * This function executes a full memory barrier before accessing the task state.
++ */
++int wake_up_process(struct task_struct *p)
++{
++ return try_to_wake_up(p, TASK_NORMAL, 0);
++}
++EXPORT_SYMBOL(wake_up_process);
++
++int wake_up_state(struct task_struct *p, unsigned int state)
++{
++ return try_to_wake_up(p, state, 0);
++}
++
++/*
++ * Perform scheduler related setup for a newly forked process p.
++ * p is forked by current.
++ *
++ * __sched_fork() is basic setup used by init_idle() too:
++ */
++static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++ p->on_rq = 0;
++ p->on_cpu = 0;
++ p->utime = 0;
++ p->stime = 0;
++ p->sched_time = 0;
++
++#ifdef CONFIG_SCHEDSTATS
++ /* Even if schedstat is disabled, there should not be garbage */
++ memset(&p->stats, 0, sizeof(p->stats));
++#endif
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++ INIT_HLIST_HEAD(&p->preempt_notifiers);
++#endif
++
++#ifdef CONFIG_COMPACTION
++ p->capture_control = NULL;
++#endif
++#ifdef CONFIG_SMP
++ p->wake_entry.u_flags = CSD_TYPE_TTWU;
++#endif
++ init_sched_mm_cid(p);
++}
++
++/*
++ * fork()/clone()-time setup:
++ */
++int sched_fork(unsigned long clone_flags, struct task_struct *p)
++{
++ __sched_fork(clone_flags, p);
++ /*
++ * We mark the process as NEW here. This guarantees that
++ * nobody will actually run it, and a signal or other external
++ * event cannot wake it up and insert it on the runqueue either.
++ */
++ p->__state = TASK_NEW;
++
++ /*
++ * Make sure we do not leak PI boosting priority to the child.
++ */
++ p->prio = current->normal_prio;
++
++ /*
++ * Revert to default priority/policy on fork if requested.
++ */
++ if (unlikely(p->sched_reset_on_fork)) {
++ if (task_has_rt_policy(p)) {
++ p->policy = SCHED_NORMAL;
++ p->static_prio = NICE_TO_PRIO(0);
++ p->rt_priority = 0;
++ } else if (PRIO_TO_NICE(p->static_prio) < 0)
++ p->static_prio = NICE_TO_PRIO(0);
++
++ p->prio = p->normal_prio = p->static_prio;
++
++ /*
++ * We don't need the reset flag anymore after the fork. It has
++ * fulfilled its duty:
++ */
++ p->sched_reset_on_fork = 0;
++ }
++
++#ifdef CONFIG_SCHED_INFO
++ if (unlikely(sched_info_on()))
++ memset(&p->sched_info, 0, sizeof(p->sched_info));
++#endif
++ init_task_preempt_count(p);
++
++ return 0;
++}
++
++int sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
++{
++ unsigned long flags;
++ struct rq *rq;
++
++ /*
++ * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
++ * required yet, but lockdep gets upset if rules are violated.
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ /*
++ * Share the timeslice between parent and child, thus the
++ * total amount of pending timeslices in the system doesn't change,
++ * resulting in more scheduling fairness.
++ */
++ rq = this_rq();
++ raw_spin_lock(&rq->lock);
++
++ rq->curr->time_slice /= 2;
++ p->time_slice = rq->curr->time_slice;
++#ifdef CONFIG_SCHED_HRTICK
++ hrtick_start(rq, rq->curr->time_slice);
++#endif
++
++ if (p->time_slice < RESCHED_NS) {
++ p->time_slice = sysctl_sched_base_slice;
++ resched_curr(rq);
++ }
++ sched_task_fork(p, rq);
++ raw_spin_unlock(&rq->lock);
++
++ rseq_migrate(p);
++ /*
++ * We're setting the CPU for the first time and we don't migrate,
++ * so use __set_task_cpu().
++ */
++ __set_task_cpu(p, smp_processor_id());
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ return 0;
++}
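++
++/*
++ * Worked example of the split above (numbers are illustrative): if the
++ * parent enters fork with 3ms of slice left, parent and child each end
++ * up with 1.5ms, so fork mints no new slice. Only when the halved slice
++ * falls below RESCHED_NS is the child topped up to a full
++ * sysctl_sched_base_slice, with the parent marked for reschedule.
++ */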
++
++void sched_cancel_fork(struct task_struct *p)
++{
++}
++
++void sched_post_fork(struct task_struct *p)
++{
++}
++
++#ifdef CONFIG_SCHEDSTATS
++
++DEFINE_STATIC_KEY_FALSE(sched_schedstats);
++
++static void set_schedstats(bool enabled)
++{
++ if (enabled)
++ static_branch_enable(&sched_schedstats);
++ else
++ static_branch_disable(&sched_schedstats);
++}
++
++void force_schedstat_enabled(void)
++{
++ if (!schedstat_enabled()) {
++ pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
++ static_branch_enable(&sched_schedstats);
++ }
++}
++
++static int __init setup_schedstats(char *str)
++{
++ int ret = 0;
++ if (!str)
++ goto out;
++
++ if (!strcmp(str, "enable")) {
++ set_schedstats(true);
++ ret = 1;
++ } else if (!strcmp(str, "disable")) {
++ set_schedstats(false);
++ ret = 1;
++ }
++out:
++ if (!ret)
++ pr_warn("Unable to parse schedstats=\n");
++
++ return ret;
++}
++__setup("schedstats=", setup_schedstats);
++
++#ifdef CONFIG_PROC_SYSCTL
++static int sysctl_schedstats(const struct ctl_table *table, int write, void *buffer,
++ size_t *lenp, loff_t *ppos)
++{
++ struct ctl_table t;
++ int err;
++ int state = static_branch_likely(&sched_schedstats);
++
++ if (write && !capable(CAP_SYS_ADMIN))
++ return -EPERM;
++
++ t = *table;
++ t.data = &state;
++ err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
++ if (err < 0)
++ return err;
++ if (write)
++ set_schedstats(state);
++ return err;
++}
++
++static struct ctl_table sched_core_sysctls[] = {
++ {
++ .procname = "sched_schedstats",
++ .data = NULL,
++ .maxlen = sizeof(unsigned int),
++ .mode = 0644,
++ .proc_handler = sysctl_schedstats,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_ONE,
++ },
++};
++static int __init sched_core_sysctl_init(void)
++{
++ register_sysctl_init("kernel", sched_core_sysctls);
++ return 0;
++}
++late_initcall(sched_core_sysctl_init);
++#endif /* CONFIG_PROC_SYSCTL */
++#endif /* CONFIG_SCHEDSTATS */
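++
++/*
++ * With CONFIG_PROC_SYSCTL, the knob registered above is also reachable
++ * at runtime, e.g.:
++ *
++ *	sysctl -w kernel.sched_schedstats=1
++ *
++ * Writes require CAP_SYS_ADMIN, as enforced by the handler.
++ */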
++
++/*
++ * wake_up_new_task - wake up a newly created task for the first time.
++ *
++ * This function will do some initial scheduler statistics housekeeping
++ * that must be done for every newly created context, then puts the task
++ * on the runqueue and wakes it.
++ */
++void wake_up_new_task(struct task_struct *p)
++{
++ unsigned long flags;
++ struct rq *rq;
++
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ rq = cpu_rq(select_task_rq(p));
++#ifdef CONFIG_SMP
++ rseq_migrate(p);
++ /*
++ * Fork balancing, do it here and not earlier because:
++ * - cpus_ptr can change in the fork path
++ * - any previously selected CPU might disappear through hotplug
++ *
++ * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
++ * as we're not fully set-up yet.
++ */
++ __set_task_cpu(p, cpu_of(rq));
++#endif
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++
++ activate_task(p, rq);
++ trace_sched_wakeup_new(p);
++ wakeup_preempt(rq);
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++}
++
++#ifdef CONFIG_PREEMPT_NOTIFIERS
++
++static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
++
++void preempt_notifier_inc(void)
++{
++ static_branch_inc(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_inc);
++
++void preempt_notifier_dec(void)
++{
++ static_branch_dec(&preempt_notifier_key);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_dec);
++
++/**
++ * preempt_notifier_register - tell me when current is being preempted & rescheduled
++ * @notifier: notifier struct to register
++ */
++void preempt_notifier_register(struct preempt_notifier *notifier)
++{
++ if (!static_branch_unlikely(&preempt_notifier_key))
++ WARN(1, "registering preempt_notifier while notifiers disabled\n");
++
++	hlist_add_head(&notifier->link, &current->preempt_notifiers);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_register);
++
++/**
++ * preempt_notifier_unregister - no longer interested in preemption notifications
++ * @notifier: notifier struct to unregister
++ *
++ * This is *not* safe to call from within a preemption notifier.
++ */
++void preempt_notifier_unregister(struct preempt_notifier *notifier)
++{
++	hlist_del(&notifier->link);
++}
++EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
++
++static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++ struct preempt_notifier *notifier;
++
++ hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++ notifier->ops->sched_in(notifier, raw_smp_processor_id());
++}
++
++static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++ if (static_branch_unlikely(&preempt_notifier_key))
++ __fire_sched_in_preempt_notifiers(curr);
++}
++
++static void
++__fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++ struct preempt_notifier *notifier;
++
++ hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
++ notifier->ops->sched_out(notifier, next);
++}
++
++static __always_inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++ if (static_branch_unlikely(&preempt_notifier_key))
++ __fire_sched_out_preempt_notifiers(curr, next);
++}
++
++#else /* !CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
++{
++}
++
++static inline void
++fire_sched_out_preempt_notifiers(struct task_struct *curr,
++ struct task_struct *next)
++{
++}
++
++#endif /* CONFIG_PREEMPT_NOTIFIERS */
++
++static inline void prepare_task(struct task_struct *next)
++{
++ /*
++	 * Claim the task as running; we do this before switching to it
++ * such that any running task will have this set.
++ *
++ * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
++ * its ordering comment.
++ */
++ WRITE_ONCE(next->on_cpu, 1);
++}
++
++static inline void finish_task(struct task_struct *prev)
++{
++#ifdef CONFIG_SMP
++ /*
++ * This must be the very last reference to @prev from this CPU. After
++ * p->on_cpu is cleared, the task can be moved to a different CPU. We
++ * must ensure this doesn't happen until the switch is completely
++ * finished.
++ *
++ * In particular, the load of prev->state in finish_task_switch() must
++ * happen before this.
++ *
++ * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
++ */
++ smp_store_release(&prev->on_cpu, 0);
++#else
++ prev->on_cpu = 0;
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++ void (*func)(struct rq *rq);
++ struct balance_callback *next;
++
++ lockdep_assert_held(&rq->lock);
++
++ while (head) {
++ func = (void (*)(struct rq *))head->func;
++ next = head->next;
++ head->next = NULL;
++ head = next;
++
++ func(rq);
++ }
++}
++
++static void balance_push(struct rq *rq);
++
++/*
++ * balance_push_callback is a right abuse of the callback interface and plays
++ * by significantly different rules.
++ *
++ * Where the normal balance_callback's purpose is to be run in the same context
++ * that queued it (only later, when it's safe to drop rq->lock again),
++ * balance_push_callback is specifically targeted at __schedule().
++ *
++ * This abuse is tolerated because it places all the unlikely/odd cases behind
++ * a single test, namely: rq->balance_callback == NULL.
++ */
++struct balance_callback balance_push_callback = {
++ .next = NULL,
++ .func = balance_push,
++};
++
++static inline struct balance_callback *
++__splice_balance_callbacks(struct rq *rq, bool split)
++{
++ struct balance_callback *head = rq->balance_callback;
++
++ if (likely(!head))
++ return NULL;
++
++ lockdep_assert_rq_held(rq);
++ /*
++ * Must not take balance_push_callback off the list when
++ * splice_balance_callbacks() and balance_callbacks() are not
++ * in the same rq->lock section.
++ *
++ * In that case it would be possible for __schedule() to interleave
++ * and observe the list empty.
++ */
++ if (split && head == &balance_push_callback)
++ head = NULL;
++ else
++ rq->balance_callback = NULL;
++
++ return head;
++}
++
++struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++ return __splice_balance_callbacks(rq, true);
++}
++
++static void __balance_callbacks(struct rq *rq)
++{
++ do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
++}
++
++void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++ unsigned long flags;
++
++ if (unlikely(head)) {
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ do_balance_callbacks(rq, head);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++ }
++}
++
++#else
++
++static inline void __balance_callbacks(struct rq *rq)
++{
++}
++#endif
++
++static inline void
++prepare_lock_switch(struct rq *rq, struct task_struct *next)
++{
++ /*
++ * Since the runqueue lock will be released by the next
++ * task (which is an invalid locking op but in the case
++	 * of the scheduler it's an obvious special-case), we
++ * do an early lockdep release here:
++ */
++ spin_release(&rq->lock.dep_map, _THIS_IP_);
++#ifdef CONFIG_DEBUG_SPINLOCK
++ /* this is a valid case when another task releases the spinlock */
++ rq->lock.owner = next;
++#endif
++}
++
++static inline void finish_lock_switch(struct rq *rq)
++{
++ /*
++ * If we are tracking spinlock dependencies then we have to
++ * fix up the runqueue lock - which gets 'carried over' from
++ * prev into current:
++ */
++ spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
++ __balance_callbacks(rq);
++ raw_spin_unlock_irq(&rq->lock);
++}
++
++/*
++ * NOP if the arch has not defined these:
++ */
++
++#ifndef prepare_arch_switch
++# define prepare_arch_switch(next) do { } while (0)
++#endif
++
++#ifndef finish_arch_post_lock_switch
++# define finish_arch_post_lock_switch() do { } while (0)
++#endif
++
++static inline void kmap_local_sched_out(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++ if (unlikely(current->kmap_ctrl.idx))
++ __kmap_local_sched_out();
++#endif
++}
++
++static inline void kmap_local_sched_in(void)
++{
++#ifdef CONFIG_KMAP_LOCAL
++ if (unlikely(current->kmap_ctrl.idx))
++ __kmap_local_sched_in();
++#endif
++}
++
++/**
++ * prepare_task_switch - prepare to switch tasks
++ * @rq: the runqueue preparing to switch
++ * @next: the task we are going to switch to.
++ *
++ * This is called with the rq lock held and interrupts off. It must
++ * be paired with a subsequent finish_task_switch after the context
++ * switch.
++ *
++ * prepare_task_switch sets up locking and calls architecture specific
++ * hooks.
++ */
++static inline void
++prepare_task_switch(struct rq *rq, struct task_struct *prev,
++ struct task_struct *next)
++{
++ kcov_prepare_switch(prev);
++ sched_info_switch(rq, prev, next);
++ perf_event_task_sched_out(prev, next);
++ rseq_preempt(prev);
++ fire_sched_out_preempt_notifiers(prev, next);
++ kmap_local_sched_out();
++ prepare_task(next);
++ prepare_arch_switch(next);
++}
++
++/**
++ * finish_task_switch - clean up after a task-switch
++ * @rq: runqueue associated with task-switch
++ * @prev: the thread we just switched away from.
++ *
++ * finish_task_switch must be called after the context switch, paired
++ * with a prepare_task_switch call before the context switch.
++ * finish_task_switch will reconcile locking set up by prepare_task_switch,
++ * and do any other architecture-specific cleanup actions.
++ *
++ * Note that we may have delayed dropping an mm in context_switch(). If
++ * so, we finish that here outside of the runqueue lock. (Doing it
++ * with the lock held can cause deadlocks; see schedule() for
++ * details.)
++ *
++ * The context switch has flipped the stack from under us and restored the
++ * local variables which were saved when this task called schedule() in the
++ * past. 'prev == current' is still correct but we need to recalculate this_rq
++ * because prev may have moved to another CPU.
++ */
++static struct rq *finish_task_switch(struct task_struct *prev)
++ __releases(rq->lock)
++{
++ struct rq *rq = this_rq();
++ struct mm_struct *mm = rq->prev_mm;
++ unsigned int prev_state;
++
++ /*
++ * The previous task will have left us with a preempt_count of 2
++ * because it left us after:
++ *
++ * schedule()
++ * preempt_disable(); // 1
++ * __schedule()
++ * raw_spin_lock_irq(&rq->lock) // 2
++ *
++ * Also, see FORK_PREEMPT_COUNT.
++ */
++ if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
++ "corrupted preempt_count: %s/%d/0x%x\n",
++ current->comm, current->pid, preempt_count()))
++ preempt_count_set(FORK_PREEMPT_COUNT);
++
++ rq->prev_mm = NULL;
++
++ /*
++ * A task struct has one reference for the use as "current".
++ * If a task dies, then it sets TASK_DEAD in tsk->state and calls
++ * schedule one last time. The schedule call will never return, and
++ * the scheduled task must drop that reference.
++ *
++ * We must observe prev->state before clearing prev->on_cpu (in
++ * finish_task), otherwise a concurrent wakeup can get prev
++	 * running on another CPU and we could race with its RUNNING -> DEAD
++ * transition, resulting in a double drop.
++ */
++ prev_state = READ_ONCE(prev->__state);
++ vtime_task_switch(prev);
++ perf_event_task_sched_in(prev, current);
++ finish_task(prev);
++ tick_nohz_task_switch();
++ finish_lock_switch(rq);
++ finish_arch_post_lock_switch();
++ kcov_finish_switch(current);
++ /*
++ * kmap_local_sched_out() is invoked with rq::lock held and
++ * interrupts disabled. There is no requirement for that, but the
++ * sched out code does not have an interrupt enabled section.
++ * Restoring the maps on sched in does not require interrupts being
++ * disabled either.
++ */
++ kmap_local_sched_in();
++
++ fire_sched_in_preempt_notifiers(current);
++ /*
++ * When switching through a kernel thread, the loop in
++ * membarrier_{private,global}_expedited() may have observed that
++ * kernel thread and not issued an IPI. It is therefore possible to
++ * schedule between user->kernel->user threads without passing though
++ * switch_mm(). Membarrier requires a barrier after storing to
++ * rq->curr, before returning to userspace, so provide them here:
++ *
++ * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
++ * provided by mmdrop(),
++ * - a sync_core for SYNC_CORE.
++ */
++ if (mm) {
++ membarrier_mm_sync_core_before_usermode(mm);
++ mmdrop_sched(mm);
++ }
++ if (unlikely(prev_state == TASK_DEAD)) {
++ /* Task is done with its stack. */
++ put_task_stack(prev);
++
++ put_task_struct_rcu_user(prev);
++ }
++
++ return rq;
++}
++
++/**
++ * schedule_tail - first thing a freshly forked thread must call.
++ * @prev: the thread we just switched away from.
++ */
++asmlinkage __visible void schedule_tail(struct task_struct *prev)
++ __releases(rq->lock)
++{
++ /*
++ * New tasks start with FORK_PREEMPT_COUNT, see there and
++ * finish_task_switch() for details.
++ *
++ * finish_task_switch() will drop rq->lock() and lower preempt_count
++ * and the preempt_enable() will end up enabling preemption (on
++ * PREEMPT_COUNT kernels).
++ */
++
++ finish_task_switch(prev);
++ preempt_enable();
++
++ if (current->set_child_tid)
++ put_user(task_pid_vnr(current), current->set_child_tid);
++
++ calculate_sigpending();
++}
++
++/*
++ * context_switch - switch to the new MM and the new thread's register state.
++ */
++static __always_inline struct rq *
++context_switch(struct rq *rq, struct task_struct *prev,
++ struct task_struct *next)
++{
++ prepare_task_switch(rq, prev, next);
++
++ /*
++ * For paravirt, this is coupled with an exit in switch_to to
++ * combine the page table reload and the switch backend into
++ * one hypercall.
++ */
++ arch_start_context_switch(prev);
++
++ /*
++ * kernel -> kernel lazy + transfer active
++ * user -> kernel lazy + mmgrab() active
++ *
++ * kernel -> user switch + mmdrop() active
++ * user -> user switch
++ *
++ * switch_mm_cid() needs to be updated if the barriers provided
++ * by context_switch() are modified.
++ */
++ if (!next->mm) { // to kernel
++ enter_lazy_tlb(prev->active_mm, next);
++
++ next->active_mm = prev->active_mm;
++ if (prev->mm) // from user
++ mmgrab(prev->active_mm);
++ else
++ prev->active_mm = NULL;
++ } else { // to user
++ membarrier_switch_mm(rq, prev->active_mm, next->mm);
++ /*
++ * sys_membarrier() requires an smp_mb() between setting
++ * rq->curr / membarrier_switch_mm() and returning to userspace.
++ *
++ * The below provides this either through switch_mm(), or in
++ * case 'prev->active_mm == next->mm' through
++ * finish_task_switch()'s mmdrop().
++ */
++ switch_mm_irqs_off(prev->active_mm, next->mm, next);
++ lru_gen_use_mm(next->mm);
++
++ if (!prev->mm) { // from kernel
++ /* will mmdrop() in finish_task_switch(). */
++ rq->prev_mm = prev->active_mm;
++ prev->active_mm = NULL;
++ }
++ }
++
++ /* switch_mm_cid() requires the memory barriers above. */
++ switch_mm_cid(rq, prev, next);
++
++ prepare_lock_switch(rq, next);
++
++ /* Here we just switch the register state and the stack. */
++ switch_to(prev, next, prev);
++ barrier();
++
++ return finish_task_switch(prev);
++}
++
++/*
++ * nr_running, nr_uninterruptible and nr_context_switches:
++ *
++ * externally visible scheduler statistics: current number of runnable
++ * threads, total number of context switches performed since bootup.
++ */
++unsigned int nr_running(void)
++{
++ unsigned int i, sum = 0;
++
++ for_each_online_cpu(i)
++ sum += cpu_rq(i)->nr_running;
++
++ return sum;
++}
++
++/*
++ * Check if only the current task is running on the CPU.
++ *
++ * Caution: this function does not check that the caller has disabled
++ * preemption, thus the result might have a time-of-check-to-time-of-use
++ * race. The caller is responsible for using it correctly, for example:
++ *
++ * - from a non-preemptible section (of course)
++ *
++ * - from a thread that is bound to a single CPU
++ *
++ * - in a loop with very short iterations (e.g. a polling loop)
++ */
++bool single_task_running(void)
++{
++ return raw_rq()->nr_running == 1;
++}
++EXPORT_SYMBOL(single_task_running);
++
++unsigned long long nr_context_switches_cpu(int cpu)
++{
++ return cpu_rq(cpu)->nr_switches;
++}
++
++unsigned long long nr_context_switches(void)
++{
++ int i;
++ unsigned long long sum = 0;
++
++ for_each_possible_cpu(i)
++ sum += cpu_rq(i)->nr_switches;
++
++ return sum;
++}
++
++/*
++ * Consumers of these two interfaces, like for example the cpuidle menu
++ * governor, are using nonsensical data: they prefer shallow idle state
++ * selection for a CPU that has IO-wait pending, even though that CPU might
++ * not even end up running the task when it does become runnable.
++ */
++
++unsigned int nr_iowait_cpu(int cpu)
++{
++ return atomic_read(&cpu_rq(cpu)->nr_iowait);
++}
++
++/*
++ * IO-wait accounting, and how it's mostly bollocks (on SMP).
++ *
++ * The idea behind IO-wait accounting is to account the idle time that we could
++ * have spent running if it were not for IO. That is, if we were to improve the
++ * storage performance, we'd have a proportional reduction in IO-wait time.
++ *
++ * This all works nicely on UP, where, when a task blocks on IO, we account
++ * idle time as IO-wait, because if the storage were faster, it could've been
++ * running and we'd not be idle.
++ *
++ * This has been extended to SMP, by doing the same for each CPU. This however
++ * is broken.
++ *
++ * Imagine for instance the case where two tasks block on one CPU, only the one
++ * CPU will have IO-wait accounted, while the other has regular idle. Even
++ * though, if the storage were faster, both could've run at the same time,
++ * utilising both CPUs.
++ *
++ * This means that, when looking globally, the current IO-wait accounting on
++ * SMP is a lower bound, due to under-accounting.
++ *
++ * Worse, since the numbers are provided per CPU, they are sometimes
++ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
++ * associated with any one particular CPU, it can wake to another CPU than it
++ * blocked on. This means the per CPU IO-wait number is meaningless.
++ *
++ * Task CPU affinities can make all that even more 'interesting'.
++ */
++
++unsigned int nr_iowait(void)
++{
++ unsigned int i, sum = 0;
++
++ for_each_possible_cpu(i)
++ sum += nr_iowait_cpu(i);
++
++ return sum;
++}
++
++#ifdef CONFIG_SMP
++
++/*
++ * sched_exec - execve() is a valuable balancing opportunity, because at
++ * this point the task has the smallest effective memory and cache
++ * footprint.
++ */
++void sched_exec(void)
++{
++}
++
++#endif
++
++DEFINE_PER_CPU(struct kernel_stat, kstat);
++DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
++
++EXPORT_PER_CPU_SYMBOL(kstat);
++EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
++
++static inline void update_curr(struct rq *rq, struct task_struct *p)
++{
++ s64 ns = rq->clock_task - p->last_ran;
++
++ p->sched_time += ns;
++ cgroup_account_cputime(p, ns);
++ account_group_exec_runtime(p, ns);
++
++ p->time_slice -= ns;
++ p->last_ran = rq->clock_task;
++}
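++
++/*
++ * Accounting sketch (HZ and slice values are illustrative): with HZ=1000
++ * the tick calls this roughly every 1,000,000ns, so each call charges
++ * ~1ms: sched_time grows by the delta while time_slice shrinks by it.
++ * Assuming a 4ms sysctl_sched_base_slice, a CPU-bound task burns through
++ * its slice in about four ticks before scheduler_task_tick() below sees
++ * time_slice < RESCHED_NS and requests a reschedule.
++ */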
++
++/*
++ * Return accounted runtime for the task.
++ * In addition, return the current task's pending runtime that has not
++ * been accounted yet.
++ */
++unsigned long long task_sched_runtime(struct task_struct *p)
++{
++ unsigned long flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++ u64 ns;
++
++#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
++ /*
++ * 64-bit doesn't need locks to atomically read a 64-bit value.
++ * So we have an optimization chance when the task's delta_exec is 0.
++ * Reading ->on_cpu is racy, but this is OK.
++ *
++ * If we race with it leaving CPU, we'll take a lock. So we're correct.
++ * If we race with it entering CPU, unaccounted time is 0. This is
++ * indistinguishable from the read occurring a few cycles earlier.
++ * If we see ->on_cpu without ->on_rq, the task is leaving, and has
++ * been accounted, so we're correct here as well.
++ */
++ if (!p->on_cpu || !task_on_rq_queued(p))
++ return tsk_seruntime(p);
++#endif
++
++ rq = task_access_lock_irqsave(p, &lock, &flags);
++ /*
++ * Must be ->curr _and_ ->on_rq. If dequeued, we would
++ * project cycles that may never be accounted to this
++ * thread, breaking clock_gettime().
++ */
++ if (p == rq->curr && task_on_rq_queued(p)) {
++ update_rq_clock(rq);
++ update_curr(rq, p);
++ }
++ ns = tsk_seruntime(p);
++ task_access_unlock_irqrestore(p, lock, &flags);
++
++ return ns;
++}
++
++/* This manages tasks that have run out of timeslice during a scheduler_tick */
++static inline void scheduler_task_tick(struct rq *rq)
++{
++ struct task_struct *p = rq->curr;
++
++ if (is_idle_task(p))
++ return;
++
++ update_curr(rq, p);
++ cpufreq_update_util(rq, 0);
++
++ /*
++	 * Tasks that have less than RESCHED_NS of time slice left will be
++	 * rescheduled.
++ */
++ if (p->time_slice >= RESCHED_NS)
++ return;
++ set_tsk_need_resched(p);
++ set_preempt_need_resched();
++}
++
++#ifdef CONFIG_SCHED_DEBUG
++static u64 cpu_resched_latency(struct rq *rq)
++{
++ int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
++ u64 resched_latency, now = rq_clock(rq);
++ static bool warned_once;
++
++ if (sysctl_resched_latency_warn_once && warned_once)
++ return 0;
++
++ if (!need_resched() || !latency_warn_ms)
++ return 0;
++
++ if (system_state == SYSTEM_BOOTING)
++ return 0;
++
++ if (!rq->last_seen_need_resched_ns) {
++ rq->last_seen_need_resched_ns = now;
++ rq->ticks_without_resched = 0;
++ return 0;
++ }
++
++ rq->ticks_without_resched++;
++ resched_latency = now - rq->last_seen_need_resched_ns;
++ if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
++ return 0;
++
++ warned_once = true;
++
++ return resched_latency;
++}
++
++static int __init setup_resched_latency_warn_ms(char *str)
++{
++ long val;
++
++ if ((kstrtol(str, 0, &val))) {
++ pr_warn("Unable to set resched_latency_warn_ms\n");
++ return 1;
++ }
++
++ sysctl_resched_latency_warn_ms = val;
++ return 1;
++}
++__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
++#else
++static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
++#endif /* CONFIG_SCHED_DEBUG */
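++
++/*
++ * For example, booting with "resched_latency_warn_ms=100" raises the
++ * warning threshold to 100ms, while 0 disables the check entirely, per
++ * the !latency_warn_ms test in cpu_resched_latency().
++ */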
++
++/*
++ * This function gets called by the timer code, with HZ frequency.
++ * We call it with interrupts disabled.
++ */
++void sched_tick(void)
++{
++ int cpu __maybe_unused = smp_processor_id();
++ struct rq *rq = cpu_rq(cpu);
++ struct task_struct *curr = rq->curr;
++ u64 resched_latency;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++ arch_scale_freq_tick();
++
++ sched_clock_tick();
++
++ raw_spin_lock(&rq->lock);
++ update_rq_clock(rq);
++
++ scheduler_task_tick(rq);
++ if (sched_feat(LATENCY_WARN))
++ resched_latency = cpu_resched_latency(rq);
++ calc_global_load_tick(rq);
++
++ task_tick_mm_cid(rq, rq->curr);
++
++ raw_spin_unlock(&rq->lock);
++
++ if (sched_feat(LATENCY_WARN) && resched_latency)
++ resched_latency_warn(cpu, resched_latency);
++
++ perf_event_task_tick();
++
++ if (curr->flags & PF_WQ_WORKER)
++ wq_worker_tick(curr);
++}
++
++#ifdef CONFIG_NO_HZ_FULL
++
++struct tick_work {
++ int cpu;
++ atomic_t state;
++ struct delayed_work work;
++};
++/* Values for ->state, see diagram below. */
++#define TICK_SCHED_REMOTE_OFFLINE 0
++#define TICK_SCHED_REMOTE_OFFLINING 1
++#define TICK_SCHED_REMOTE_RUNNING 2
++
++/*
++ * State diagram for ->state:
++ *
++ *
++ * TICK_SCHED_REMOTE_OFFLINE
++ * | ^
++ * | |
++ * | | sched_tick_remote()
++ * | |
++ * | |
++ * +--TICK_SCHED_REMOTE_OFFLINING
++ * | ^
++ * | |
++ * sched_tick_start() | | sched_tick_stop()
++ * | |
++ * V |
++ * TICK_SCHED_REMOTE_RUNNING
++ *
++ *
++ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
++ * and sched_tick_start() are happy to leave the state in RUNNING.
++ */
++
++static struct tick_work __percpu *tick_work_cpu;
++
++static void sched_tick_remote(struct work_struct *work)
++{
++ struct delayed_work *dwork = to_delayed_work(work);
++ struct tick_work *twork = container_of(dwork, struct tick_work, work);
++ int cpu = twork->cpu;
++ struct rq *rq = cpu_rq(cpu);
++ int os;
++
++ /*
++ * Handle the tick only if it appears the remote CPU is running in full
++ * dynticks mode. The check is racy by nature, but missing a tick or
++	 * having one too many is no big deal because the scheduler tick updates
++ * statistics and checks timeslices in a time-independent way, regardless
++ * of when exactly it is running.
++ */
++ if (tick_nohz_tick_stopped_cpu(cpu)) {
++ guard(raw_spinlock_irqsave)(&rq->lock);
++ struct task_struct *curr = rq->curr;
++
++ if (cpu_online(cpu)) {
++ update_rq_clock(rq);
++
++ if (!is_idle_task(curr)) {
++ /*
++ * Make sure the next tick runs within a
++ * reasonable amount of time.
++ */
++ u64 delta = rq_clock_task(rq) - curr->last_ran;
++ WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
++ }
++ scheduler_task_tick(rq);
++
++ calc_load_nohz_remote(rq);
++ }
++ }
++
++ /*
++	 * Run the remote tick once per second (1Hz). This arbitrary
++	 * interval is long enough to avoid overload but short enough
++	 * to keep scheduler internal stats reasonably up to date. But
++ * first update state to reflect hotplug activity if required.
++ */
++ os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
++ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
++ if (os == TICK_SCHED_REMOTE_RUNNING)
++ queue_delayed_work(system_unbound_wq, dwork, HZ);
++}
++
++static void sched_tick_start(int cpu)
++{
++ int os;
++ struct tick_work *twork;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++ return;
++
++ WARN_ON_ONCE(!tick_work_cpu);
++
++ twork = per_cpu_ptr(tick_work_cpu, cpu);
++ os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
++ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
++ if (os == TICK_SCHED_REMOTE_OFFLINE) {
++ twork->cpu = cpu;
++ INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
++ queue_delayed_work(system_unbound_wq, &twork->work, HZ);
++ }
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++static void sched_tick_stop(int cpu)
++{
++ struct tick_work *twork;
++ int os;
++
++ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
++ return;
++
++ WARN_ON_ONCE(!tick_work_cpu);
++
++ twork = per_cpu_ptr(tick_work_cpu, cpu);
++ /* There cannot be competing actions, but don't rely on stop-machine. */
++ os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
++ WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
++ /* Don't cancel, as this would mess up the state machine. */
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++int __init sched_tick_offload_init(void)
++{
++ tick_work_cpu = alloc_percpu(struct tick_work);
++ BUG_ON(!tick_work_cpu);
++ return 0;
++}
++
++#else /* !CONFIG_NO_HZ_FULL */
++static inline void sched_tick_start(int cpu) { }
++static inline void sched_tick_stop(int cpu) { }
++#endif
++
++#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
++ defined(CONFIG_PREEMPT_TRACER))
++/*
++ * If the value passed in is equal to the current preempt count
++ * then we just disabled preemption. Start timing the latency.
++ */
++static inline void preempt_latency_start(int val)
++{
++ if (preempt_count() == val) {
++ unsigned long ip = get_lock_parent_ip();
++#ifdef CONFIG_DEBUG_PREEMPT
++ current->preempt_disable_ip = ip;
++#endif
++ trace_preempt_off(CALLER_ADDR0, ip);
++ }
++}
++
++void preempt_count_add(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Underflow?
++ */
++ if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
++ return;
++#endif
++ __preempt_count_add(val);
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Spinlock count overflowing soon?
++ */
++ DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
++ PREEMPT_MASK - 10);
++#endif
++ preempt_latency_start(val);
++}
++EXPORT_SYMBOL(preempt_count_add);
++NOKPROBE_SYMBOL(preempt_count_add);
++
++/*
++ * If the value passed in equals the current preempt count
++ * then we just enabled preemption. Stop timing the latency.
++ */
++static inline void preempt_latency_stop(int val)
++{
++ if (preempt_count() == val)
++ trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
++}
++
++void preempt_count_sub(int val)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ /*
++ * Underflow?
++ */
++ if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
++ return;
++ /*
++ * Is the spinlock portion underflowing?
++ */
++ if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
++ !(preempt_count() & PREEMPT_MASK)))
++ return;
++#endif
++
++ preempt_latency_stop(val);
++ __preempt_count_sub(val);
++}
++EXPORT_SYMBOL(preempt_count_sub);
++NOKPROBE_SYMBOL(preempt_count_sub);
++
++#else
++static inline void preempt_latency_start(int val) { }
++static inline void preempt_latency_stop(int val) { }
++#endif
++
++static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
++{
++#ifdef CONFIG_DEBUG_PREEMPT
++ return p->preempt_disable_ip;
++#else
++ return 0;
++#endif
++}
++
++/*
++ * Print scheduling while atomic bug:
++ */
++static noinline void __schedule_bug(struct task_struct *prev)
++{
++ /* Save this before calling printk(), since that will clobber it */
++ unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
++
++ if (oops_in_progress)
++ return;
++
++ printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
++ prev->comm, prev->pid, preempt_count());
++
++ debug_show_held_locks(prev);
++ print_modules();
++ if (irqs_disabled())
++ print_irqtrace_events(prev);
++ if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
++ pr_err("Preemption disabled at:");
++ print_ip_sym(KERN_ERR, preempt_disable_ip);
++ }
++ check_panic_on_warn("scheduling while atomic");
++
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++
++/*
++ * Various schedule()-time debugging checks and statistics:
++ */
++static inline void schedule_debug(struct task_struct *prev, bool preempt)
++{
++#ifdef CONFIG_SCHED_STACK_END_CHECK
++ if (task_stack_end_corrupted(prev))
++ panic("corrupted stack end detected inside scheduler\n");
++
++ if (task_scs_end_corrupted(prev))
++ panic("corrupted shadow stack detected inside scheduler\n");
++#endif
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++ if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
++ printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
++ prev->comm, prev->pid, prev->non_block_count);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++ }
++#endif
++
++ if (unlikely(in_atomic_preempt_off())) {
++ __schedule_bug(prev);
++ preempt_count_set(PREEMPT_DISABLED);
++ }
++ rcu_sleep_check();
++ SCHED_WARN_ON(ct_state() == CT_STATE_USER);
++
++ profile_hit(SCHED_PROFILING, __builtin_return_address(0));
++
++ schedstat_inc(this_rq()->sched_count);
++}
++
++#ifdef ALT_SCHED_DEBUG
++void alt_sched_debug(void)
++{
++ printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx,"
++ " ecore_idle: 0x%04lx\n",
++ sched_rq_pending_mask.bits[0],
++ sched_idle_mask->bits[0],
++ sched_pcore_idle_mask->bits[0],
++ sched_ecore_idle_mask->bits[0]);
++}
++#endif
++
++#ifdef CONFIG_SMP
++
++#ifdef CONFIG_PREEMPT_RT
++#define SCHED_NR_MIGRATE_BREAK 8
++#else
++#define SCHED_NR_MIGRATE_BREAK 32
++#endif
++
++const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
++
++/*
++ * Migrate pending tasks in @rq to @dest_cpu
++ */
++static inline int
++migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
++{
++ struct task_struct *p, *skip = rq->curr;
++ int nr_migrated = 0;
++ int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
++
++	/* Workaround to check that rq->curr is still on the rq */
++ if (!task_on_rq_queued(skip))
++ return 0;
++
++ while (skip != rq->idle && nr_tries &&
++ (p = sched_rq_next_task(skip, rq)) != rq->idle) {
++ skip = sched_rq_next_task(p, rq);
++ if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
++ __SCHED_DEQUEUE_TASK(p, rq, 0, );
++ set_task_cpu(p, dest_cpu);
++ sched_task_sanity_check(p, dest_rq);
++ sched_mm_cid_migrate_to(dest_rq, p);
++ __SCHED_ENQUEUE_TASK(p, dest_rq, 0, );
++ nr_migrated++;
++ }
++ nr_tries--;
++ }
++
++ return nr_migrated;
++}
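++
++/*
++ * Worked example: with rq->nr_running == 9 and the default
++ * sysctl_sched_nr_migrate of 32 (8 on PREEMPT_RT), nr_tries is
++ * min(9 / 2, 32) = 4, so at most four tasks after rq->curr are
++ * examined, and only those whose cpus_ptr allows dest_cpu get moved.
++ */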
++
++static inline int take_other_rq_tasks(struct rq *rq, int cpu)
++{
++ cpumask_t *topo_mask, *end_mask, chk;
++
++ if (unlikely(!rq->online))
++ return 0;
++
++ if (cpumask_empty(&sched_rq_pending_mask))
++ return 0;
++
++ topo_mask = per_cpu(sched_cpu_topo_masks, cpu);
++ end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
++ do {
++ int i;
++
++ if (!cpumask_and(&chk, &sched_rq_pending_mask, topo_mask))
++ continue;
++
++ for_each_cpu_wrap(i, &chk, cpu) {
++ int nr_migrated;
++ struct rq *src_rq;
++
++ src_rq = cpu_rq(i);
++ if (!do_raw_spin_trylock(&src_rq->lock))
++ continue;
++ spin_acquire(&src_rq->lock.dep_map,
++ SINGLE_DEPTH_NESTING, 1, _RET_IP_);
++
++ if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
++ src_rq->nr_running -= nr_migrated;
++ if (src_rq->nr_running < 2)
++ cpumask_clear_cpu(i, &sched_rq_pending_mask);
++
++ spin_release(&src_rq->lock.dep_map, _RET_IP_);
++ do_raw_spin_unlock(&src_rq->lock);
++
++ rq->nr_running += nr_migrated;
++ if (rq->nr_running > 1)
++ cpumask_set_cpu(cpu, &sched_rq_pending_mask);
++
++ update_sched_preempt_mask(rq);
++ cpufreq_update_util(rq, 0);
++
++ return 1;
++ }
++
++ spin_release(&src_rq->lock.dep_map, _RET_IP_);
++ do_raw_spin_unlock(&src_rq->lock);
++ }
++ } while (++topo_mask < end_mask);
++
++ return 0;
++}
++#endif
++
++static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
++{
++ p->time_slice = sysctl_sched_base_slice;
++
++ sched_task_renew(p, rq);
++
++ if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
++ requeue_task(p, rq);
++}
++
++/*
++ * Timeslices below RESCHED_NS are considered as good as expired as there's no
++ * point rescheduling when there's so little time left.
++ */
++static inline void check_curr(struct task_struct *p, struct rq *rq)
++{
++ if (unlikely(rq->idle == p))
++ return;
++
++ update_curr(rq, p);
++
++ if (p->time_slice < RESCHED_NS)
++ time_slice_expired(p, rq);
++}
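++
++/*
++ * Flow sketch (slice values are illustrative): assuming a 4ms base slice,
++ * a task that check_curr() finds below RESCHED_NS gets a fresh slice from
++ * time_slice_expired() and, unless it is SCHED_FIFO, is requeued behind
++ * its priority-level peers.
++ */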
++
++static inline struct task_struct *
++choose_next_task(struct rq *rq, int cpu)
++{
++ struct task_struct *next = sched_rq_first_task(rq);
++
++ if (next == rq->idle) {
++#ifdef CONFIG_SMP
++ if (!take_other_rq_tasks(rq, cpu)) {
++ if (likely(rq->balance_func && rq->online))
++ rq->balance_func(rq, cpu);
++#endif /* CONFIG_SMP */
++
++ schedstat_inc(rq->sched_goidle);
++ /*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
++ return next;
++#ifdef CONFIG_SMP
++ }
++ next = sched_rq_first_task(rq);
++#endif
++ }
++#ifdef CONFIG_HIGH_RES_TIMERS
++ hrtick_start(rq, next->time_slice);
++#endif
++ /*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
++ return next;
++}
++
++/*
++ * Constants for the sched_mode argument of __schedule().
++ *
++ * The mode argument allows RT enabled kernels to differentiate a
++ * preemption from blocking on an 'sleeping' spin/rwlock.
++ */
++ #define SM_IDLE (-1)
++ #define SM_NONE 0
++ #define SM_PREEMPT 1
++ #define SM_RTLOCK_WAIT 2
++
++/*
++ * schedule() is the main scheduler function.
++ *
++ * The main means of driving the scheduler and thus entering this function are:
++ *
++ * 1. Explicit blocking: mutex, semaphore, waitqueue, etc.
++ *
++ * 2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
++ * paths. For example, see arch/x86/entry_64.S.
++ *
++ * To drive preemption between tasks, the scheduler sets the flag in timer
++ * interrupt handler sched_tick().
++ *
++ * 3. Wakeups don't really cause entry into schedule(). They add a
++ * task to the run-queue and that's it.
++ *
++ * Now, if the new task added to the run-queue preempts the current
++ * task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
++ * called on the nearest possible occasion:
++ *
++ * - If the kernel is preemptible (CONFIG_PREEMPTION=y):
++ *
++ *       - in syscall or exception context, at the next outermost
++ * preempt_enable(). (this might be as soon as the wake_up()'s
++ * spin_unlock()!)
++ *
++ * - in IRQ context, return from interrupt-handler to
++ * preemptible context
++ *
++ * - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
++ * then at the next:
++ *
++ * - cond_resched() call
++ * - explicit schedule() call
++ * - return from syscall or exception to user-space
++ * - return from interrupt-handler to user-space
++ *
++ * WARNING: must be called with preemption disabled!
++ */
++static void __sched notrace __schedule(int sched_mode)
++{
++ struct task_struct *prev, *next;
++ /*
++ * On PREEMPT_RT kernel, SM_RTLOCK_WAIT is noted
++ * as a preemption by schedule_debug() and RCU.
++ */
++ bool preempt = sched_mode > SM_NONE;
++ unsigned long *switch_count;
++ unsigned long prev_state;
++ struct rq *rq;
++ int cpu;
++
++ cpu = smp_processor_id();
++ rq = cpu_rq(cpu);
++ prev = rq->curr;
++
++ schedule_debug(prev, preempt);
++
++	/* bypass the sched_feat(HRTICK) check, which the Alt schedule framework doesn't support */
++ hrtick_clear(rq);
++
++ local_irq_disable();
++ rcu_note_context_switch(preempt);
++
++ /*
++ * Make sure that signal_pending_state()->signal_pending() below
++ * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
++ * done by the caller to avoid the race with signal_wake_up():
++ *
++ * __set_current_state(@state) signal_wake_up()
++ * schedule() set_tsk_thread_flag(p, TIF_SIGPENDING)
++ * wake_up_state(p, state)
++ * LOCK rq->lock LOCK p->pi_state
++ * smp_mb__after_spinlock() smp_mb__after_spinlock()
++ * if (signal_pending_state()) if (p->state & @state)
++ *
++ * Also, the membarrier system call requires a full memory barrier
++ * after coming from user-space, before storing to rq->curr; this
++ * barrier matches a full barrier in the proximity of the membarrier
++ * system call exit.
++ */
++ raw_spin_lock(&rq->lock);
++ smp_mb__after_spinlock();
++
++ update_rq_clock(rq);
++
++ switch_count = &prev->nivcsw;
++
++	/* Task state changes only consider SM_PREEMPT as preemption */
++ preempt = sched_mode == SM_PREEMPT;
++
++ /*
++ * We must load prev->state once (task_struct::state is volatile), such
++ * that we form a control dependency vs deactivate_task() below.
++ */
++ prev_state = READ_ONCE(prev->__state);
++ if (sched_mode == SM_IDLE) {
++ if (!rq->nr_running) {
++ next = prev;
++ goto picked;
++ }
++ } else if (!preempt && prev_state) {
++ if (signal_pending_state(prev_state, prev)) {
++ WRITE_ONCE(prev->__state, TASK_RUNNING);
++ } else {
++ prev->sched_contributes_to_load =
++ (prev_state & TASK_UNINTERRUPTIBLE) &&
++ !(prev_state & TASK_NOLOAD) &&
++ !(prev_state & TASK_FROZEN);
++
++ /*
++ * __schedule() ttwu()
++ * prev_state = prev->state; if (p->on_rq && ...)
++ * if (prev_state) goto out;
++ * p->on_rq = 0; smp_acquire__after_ctrl_dep();
++ * p->state = TASK_WAKING
++ *
++ * Where __schedule() and ttwu() have matching control dependencies.
++ *
++ * After this, schedule() must not care about p->state any more.
++ */
++ sched_task_deactivate(prev, rq);
++ block_task(rq, prev);
++ }
++ switch_count = &prev->nvcsw;
++ }
++
++ check_curr(prev, rq);
++
++ next = choose_next_task(rq, cpu);
++picked:
++ clear_tsk_need_resched(prev);
++ clear_preempt_need_resched();
++#ifdef CONFIG_SCHED_DEBUG
++ rq->last_seen_need_resched_ns = 0;
++#endif
++
++ if (likely(prev != next)) {
++ next->last_ran = rq->clock_task;
++
++ /*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
++ rq->nr_switches++;
++ /*
++ * RCU users of rcu_dereference(rq->curr) may not see
++ * changes to task_struct made by pick_next_task().
++ */
++ RCU_INIT_POINTER(rq->curr, next);
++ /*
++ * The membarrier system call requires each architecture
++ * to have a full memory barrier after updating
++ * rq->curr, before returning to user-space.
++ *
++ * Here are the schemes providing that barrier on the
++ * various architectures:
++ * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
++ * RISC-V. switch_mm() relies on membarrier_arch_switch_mm()
++ * on PowerPC and on RISC-V.
++ * - finish_lock_switch() for weakly-ordered
++ * architectures where spin_unlock is a full barrier,
++ * - switch_to() for arm64 (weakly-ordered, spin_unlock
++ * is a RELEASE barrier),
++ *
++ * The barrier matches a full barrier in the proximity of
++ * the membarrier system call entry.
++ *
++ * On RISC-V, this barrier pairing is also needed for the
++ * SYNC_CORE command when switching between processes, cf.
++ * the inline comments in membarrier_arch_switch_mm().
++ */
++ ++*switch_count;
++
++ trace_sched_switch(preempt, prev, next, prev_state);
++
++ /* Also unlocks the rq: */
++ rq = context_switch(rq, prev, next);
++
++ cpu = cpu_of(rq);
++ } else {
++ __balance_callbacks(rq);
++ raw_spin_unlock_irq(&rq->lock);
++ }
++}
++
++void __noreturn do_task_dead(void)
++{
++ /* Causes final put_task_struct in finish_task_switch(): */
++ set_special_state(TASK_DEAD);
++
++ /* Tell freezer to ignore us: */
++ current->flags |= PF_NOFREEZE;
++
++ __schedule(SM_NONE);
++ BUG();
++
++ /* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
++ for (;;)
++ cpu_relax();
++}
++
++static inline void sched_submit_work(struct task_struct *tsk)
++{
++ static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
++ unsigned int task_flags;
++
++ /*
++ * Establish LD_WAIT_CONFIG context to ensure none of the code called
++ * will use a blocking primitive -- which would lead to recursion.
++ */
++ lock_map_acquire_try(&sched_map);
++
++ task_flags = tsk->flags;
++ /*
++ * If a worker goes to sleep, notify and ask workqueue whether it
++ * wants to wake up a task to maintain concurrency.
++ */
++ if (task_flags & PF_WQ_WORKER)
++ wq_worker_sleeping(tsk);
++ else if (task_flags & PF_IO_WORKER)
++ io_wq_worker_sleeping(tsk);
++
++ /*
++ * spinlock and rwlock must not flush block requests. This will
++ * deadlock if the callback attempts to acquire a lock which is
++ * already acquired.
++ */
++ SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
++
++ /*
++ * If we are going to sleep and we have plugged IO queued,
++ * make sure to submit it to avoid deadlocks.
++ */
++ blk_flush_plug(tsk->plug, true);
++
++ lock_map_release(&sched_map);
++}
++
++static void sched_update_worker(struct task_struct *tsk)
++{
++ if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER | PF_BLOCK_TS)) {
++ if (tsk->flags & PF_BLOCK_TS)
++ blk_plug_invalidate_ts(tsk);
++ if (tsk->flags & PF_WQ_WORKER)
++ wq_worker_running(tsk);
++ else if (tsk->flags & PF_IO_WORKER)
++ io_wq_worker_running(tsk);
++ }
++}
++
++static __always_inline void __schedule_loop(int sched_mode)
++{
++ do {
++ preempt_disable();
++ __schedule(sched_mode);
++ sched_preempt_enable_no_resched();
++ } while (need_resched());
++}
++
++asmlinkage __visible void __sched schedule(void)
++{
++ struct task_struct *tsk = current;
++
++#ifdef CONFIG_RT_MUTEXES
++ lockdep_assert(!tsk->sched_rt_mutex);
++#endif
++
++ if (!task_is_running(tsk))
++ sched_submit_work(tsk);
++ __schedule_loop(SM_NONE);
++ sched_update_worker(tsk);
++}
++EXPORT_SYMBOL(schedule);
++
++/*
++ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
++ * state (have scheduled out non-voluntarily) by making sure that all
++ * tasks have either left the run queue or have gone into user space.
++ * As idle tasks do not do either, they must not ever be preempted
++ * (schedule out non-voluntarily).
++ *
++ * schedule_idle() is similar to schedule_preempt_disabled() except that it
++ * never enables preemption because it does not call sched_submit_work().
++ */
++void __sched schedule_idle(void)
++{
++ /*
++ * As this skips calling sched_submit_work(), which the idle task does
++ * regardless because that function is a NOP when the task is in a
++ * TASK_RUNNING state, make sure this isn't used someplace that the
++ * current task can be in any other state. Note, idle is always in the
++ * TASK_RUNNING state.
++ */
++ WARN_ON_ONCE(current->__state);
++ do {
++ __schedule(SM_IDLE);
++ } while (need_resched());
++}
++
++#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
++asmlinkage __visible void __sched schedule_user(void)
++{
++ /*
++ * If we come here after a random call to set_need_resched(),
++ * or we have been woken up remotely but the IPI has not yet arrived,
++ * we haven't yet exited the RCU idle mode. Do it here manually until
++ * we find a better solution.
++ *
++ * NB: There are buggy callers of this function. Ideally we
++ * should warn if prev_state != CT_STATE_USER, but that will trigger
++ * too frequently to make sense yet.
++ */
++ enum ctx_state prev_state = exception_enter();
++ schedule();
++ exception_exit(prev_state);
++}
++#endif
++
++/**
++ * schedule_preempt_disabled - called with preemption disabled
++ *
++ * Returns with preemption disabled. Note: preempt_count must be 1
++ */
++void __sched schedule_preempt_disabled(void)
++{
++ sched_preempt_enable_no_resched();
++ schedule();
++ preempt_disable();
++}
++
++#ifdef CONFIG_PREEMPT_RT
++void __sched notrace schedule_rtlock(void)
++{
++ __schedule_loop(SM_RTLOCK_WAIT);
++}
++NOKPROBE_SYMBOL(schedule_rtlock);
++#endif
++
++static void __sched notrace preempt_schedule_common(void)
++{
++ do {
++ /*
++ * Because the function tracer can trace preempt_count_sub()
++ * and it also uses preempt_enable/disable_notrace(), if
++ * NEED_RESCHED is set, the preempt_enable_notrace() called
++ * by the function tracer will call this function again and
++ * cause infinite recursion.
++ *
++ * Preemption must be disabled here before the function
++ * tracer can trace. Break up preempt_disable() into two
++ * calls. One to disable preemption without fear of being
++ * traced. The other to still record the preemption latency,
++ * which can also be traced by the function tracer.
++ */
++ preempt_disable_notrace();
++ preempt_latency_start(1);
++ __schedule(SM_PREEMPT);
++ preempt_latency_stop(1);
++ preempt_enable_no_resched_notrace();
++
++ /*
++ * Check again in case we missed a preemption opportunity
++ * between schedule and now.
++ */
++ } while (need_resched());
++}
++
++#ifdef CONFIG_PREEMPTION
++/*
++ * This is the entry point to schedule() from in-kernel preemption
++ * off of preempt_enable.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule(void)
++{
++ /*
++ * If there is a non-zero preempt_count or interrupts are disabled,
++ * we do not want to preempt the current task. Just return..
++ */
++ if (likely(!preemptible()))
++ return;
++
++ preempt_schedule_common();
++}
++NOKPROBE_SYMBOL(preempt_schedule);
++EXPORT_SYMBOL(preempt_schedule);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_dynamic_enabled
++#define preempt_schedule_dynamic_enabled preempt_schedule
++#define preempt_schedule_dynamic_disabled NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
++void __sched notrace dynamic_preempt_schedule(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
++ return;
++ preempt_schedule();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule);
++EXPORT_SYMBOL(dynamic_preempt_schedule);
++#endif
++#endif
++
++/**
++ * preempt_schedule_notrace - preempt_schedule called by tracing
++ *
++ * The tracing infrastructure uses preempt_enable_notrace to prevent
++ * recursion and tracing preempt enabling caused by the tracing
++ * infrastructure itself. But as tracing can happen in areas coming
++ * from userspace or just about to enter userspace, a preempt enable
++ * can occur before user_exit() is called. This will cause the scheduler
++ * to be called when the system is still in usermode.
++ *
++ * To prevent this, the preempt_enable_notrace will use this function
++ * instead of preempt_schedule() to exit user context if needed before
++ * calling the scheduler.
++ */
++asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
++{
++ enum ctx_state prev_ctx;
++
++ if (likely(!preemptible()))
++ return;
++
++ do {
++ /*
++ * Because the function tracer can trace preempt_count_sub()
++ * and it also uses preempt_enable/disable_notrace(), if
++ * NEED_RESCHED is set, the preempt_enable_notrace() called
++ * by the function tracer will call this function again and
++ * cause infinite recursion.
++ *
++ * Preemption must be disabled here before the function
++ * tracer can trace. Break up preempt_disable() into two
++ * calls. One to disable preemption without fear of being
++ * traced. The other to still record the preemption latency,
++ * which can also be traced by the function tracer.
++ */
++ preempt_disable_notrace();
++ preempt_latency_start(1);
++ /*
++ * Needs preempt disabled in case user_exit() is traced
++ * and the tracer calls preempt_enable_notrace() causing
++ * an infinite recursion.
++ */
++ prev_ctx = exception_enter();
++ __schedule(SM_PREEMPT);
++ exception_exit(prev_ctx);
++
++ preempt_latency_stop(1);
++ preempt_enable_no_resched_notrace();
++ } while (need_resched());
++}
++EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#ifndef preempt_schedule_notrace_dynamic_enabled
++#define preempt_schedule_notrace_dynamic_enabled preempt_schedule_notrace
++#define preempt_schedule_notrace_dynamic_disabled NULL
++#endif
++DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
++EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
++void __sched notrace dynamic_preempt_schedule_notrace(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
++ return;
++ preempt_schedule_notrace();
++}
++NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
++EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
++#endif
++#endif
++
++#endif /* CONFIG_PREEMPTION */
++
++/*
++ * This is the entry point to schedule() from kernel preemption
++ * off of IRQ context.
++ * Note that this is called and returns with IRQs disabled. This will
++ * protect us against recursive calling from IRQ contexts.
++ */
++asmlinkage __visible void __sched preempt_schedule_irq(void)
++{
++ enum ctx_state prev_state;
++
++ /* Catch callers which need to be fixed */
++ BUG_ON(preempt_count() || !irqs_disabled());
++
++ prev_state = exception_enter();
++
++ do {
++ preempt_disable();
++ local_irq_enable();
++ __schedule(SM_PREEMPT);
++ local_irq_disable();
++ sched_preempt_enable_no_resched();
++ } while (need_resched());
++
++ exception_exit(prev_state);
++}
++
++int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
++ void *key)
++{
++ WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
++ return try_to_wake_up(curr->private, mode, wake_flags);
++}
++EXPORT_SYMBOL(default_wake_function);
++
++void check_task_changed(struct task_struct *p, struct rq *rq)
++{
++ /* Trigger resched if task sched_prio has been modified. */
++ if (task_on_rq_queued(p)) {
++ update_rq_clock(rq);
++ requeue_task(p, rq);
++ wakeup_preempt(rq);
++ }
++}
++
++void __setscheduler_prio(struct task_struct *p, int prio)
++{
++ p->prio = prio;
++}
++
++#ifdef CONFIG_RT_MUTEXES
++
++/*
++ * Would be more useful with typeof()/auto_type but they don't mix with
++ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
++ * name such that if someone were to implement this function we get to compare
++ * notes.
++ */
++#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
++
++void rt_mutex_pre_schedule(void)
++{
++ lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
++ sched_submit_work(current);
++}
++
++void rt_mutex_schedule(void)
++{
++ lockdep_assert(current->sched_rt_mutex);
++ __schedule_loop(SM_NONE);
++}
++
++void rt_mutex_post_schedule(void)
++{
++ sched_update_worker(current);
++ lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
++}
++
++/*
++ * rt_mutex_setprio - set the current priority of a task
++ * @p: task to boost
++ * @pi_task: donor task
++ *
++ * This function changes the 'effective' priority of a task. It does
++ * not touch ->normal_prio like __setscheduler().
++ *
++ * Used by the rt_mutex code to implement priority inheritance
++ * logic. Call site only calls if the priority of the task changed.
++ */
++void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
++{
++ int prio;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ /* XXX used to be waiter->prio, not waiter->task->prio */
++ prio = __rt_effective_prio(pi_task, p->normal_prio);
++
++ /*
++ * If nothing changed; bail early.
++ */
++ if (p->pi_top_task == pi_task && prio == p->prio)
++ return;
++
++ rq = __task_access_lock(p, &lock);
++ /*
++ * Set under pi_lock && rq->lock, such that the value can be used under
++ * either lock.
++ *
++	 * Note that there is a lot of trickiness in making this pointer cache work
++ * right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
++ * ensure a task is de-boosted (pi_task is set to NULL) before the
++ * task is allowed to run again (and can exit). This ensures the pointer
++ * points to a blocked task -- which guarantees the task is present.
++ */
++ p->pi_top_task = pi_task;
++
++ /*
++	 * For FIFO/RR we only need to set prio; if that matches, we're done.
++ */
++ if (prio == p->prio)
++ goto out_unlock;
++
++ /*
++ * Idle task boosting is a no-no in general. There is one
++ * exception, when PREEMPT_RT and NOHZ is active:
++ *
++ * The idle task calls get_next_timer_interrupt() and holds
++ * the timer wheel base->lock on the CPU and another CPU wants
++ * to access the timer (probably to cancel it). We can safely
++ * ignore the boosting request, as the idle CPU runs this code
++ * with interrupts disabled and will complete the lock
++ * protected section without being interrupted. So there is no
++ * real need to boost.
++ */
++ if (unlikely(p == rq->idle)) {
++ WARN_ON(p != rq->curr);
++ WARN_ON(p->pi_blocked_on);
++ goto out_unlock;
++ }
++
++ trace_sched_pi_setprio(p, pi_task);
++
++ __setscheduler_prio(p, prio);
++
++ check_task_changed(p, rq);
++out_unlock:
++ /* Avoid rq from going away on us: */
++ preempt_disable();
++
++ if (task_on_rq_queued(p))
++ __balance_callbacks(rq);
++ __task_access_unlock(p, lock);
++
++ preempt_enable();
++}
++#endif
++
++#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
++int __sched __cond_resched(void)
++{
++ if (should_resched(0)) {
++ preempt_schedule_common();
++ return 1;
++ }
++ /*
++ * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
++ * whether the current CPU is in an RCU read-side critical section,
++ * so the tick can report quiescent states even for CPUs looping
++ * in kernel context. In contrast, in non-preemptible kernels,
++ * RCU readers leave no in-memory hints, which means that CPU-bound
++ * processes executing in kernel context might never report an
++ * RCU quiescent state. Therefore, the following code causes
++ * cond_resched() to report a quiescent state, but only when RCU
++ * is in urgent need of one.
++ */
++#ifndef CONFIG_PREEMPT_RCU
++ rcu_all_qs();
++#endif
++ return 0;
++}
++EXPORT_SYMBOL(__cond_resched);
++#endif
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define cond_resched_dynamic_enabled __cond_resched
++#define cond_resched_dynamic_disabled ((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(cond_resched);
++
++#define might_resched_dynamic_enabled __cond_resched
++#define might_resched_dynamic_disabled ((void *)&__static_call_return0)
++DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
++EXPORT_STATIC_CALL_TRAMP(might_resched);
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
++int __sched dynamic_cond_resched(void)
++{
++ klp_sched_try_switch();
++ if (!static_branch_unlikely(&sk_dynamic_cond_resched))
++ return 0;
++ return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_cond_resched);
++
++static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
++int __sched dynamic_might_resched(void)
++{
++ if (!static_branch_unlikely(&sk_dynamic_might_resched))
++ return 0;
++ return __cond_resched();
++}
++EXPORT_SYMBOL(dynamic_might_resched);
++#endif
++#endif
++
++/*
++ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
++ * call schedule, and on return reacquire the lock.
++ *
++ * This works OK both with and without CONFIG_PREEMPTION. We do strange low-level
++ * operations here to prevent schedule() from being called twice (once via
++ * spin_unlock(), once by hand).
++ */
++int __cond_resched_lock(spinlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held(lock);
++
++ if (spin_needbreak(lock) || resched) {
++ spin_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ spin_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_lock);
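++
++/*
++ * Minimal usage sketch (hypothetical caller): a long scan under a spinlock
++ * can yield the CPU without ever calling schedule() with the lock held:
++ *
++ *	spin_lock(&lock);
++ *	for (i = 0; i < nr; i++) {
++ *		process(i);			// hypothetical per-item work
++ *		cond_resched_lock(&lock);	// may drop and retake the lock
++ *	}
++ *	spin_unlock(&lock);
++ *
++ * A non-zero return means the lock was dropped and reacquired, so any
++ * state read under the lock must be revalidated.
++ */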
++
++int __cond_resched_rwlock_read(rwlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held_read(lock);
++
++ if (rwlock_needbreak(lock) || resched) {
++ read_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ read_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_read);
++
++int __cond_resched_rwlock_write(rwlock_t *lock)
++{
++ int resched = should_resched(PREEMPT_LOCK_OFFSET);
++ int ret = 0;
++
++ lockdep_assert_held_write(lock);
++
++ if (rwlock_needbreak(lock) || resched) {
++ write_unlock(lock);
++ if (!_cond_resched())
++ cpu_relax();
++ ret = 1;
++ write_lock(lock);
++ }
++ return ret;
++}
++EXPORT_SYMBOL(__cond_resched_rwlock_write);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++
++#ifdef CONFIG_GENERIC_ENTRY
++#include <linux/entry-common.h>
++#endif
++
++/*
++ * SC:cond_resched
++ * SC:might_resched
++ * SC:preempt_schedule
++ * SC:preempt_schedule_notrace
++ * SC:irqentry_exit_cond_resched
++ *
++ *
++ * NONE:
++ * cond_resched <- __cond_resched
++ * might_resched <- RET0
++ * preempt_schedule <- NOP
++ * preempt_schedule_notrace <- NOP
++ * irqentry_exit_cond_resched <- NOP
++ *
++ * VOLUNTARY:
++ * cond_resched <- __cond_resched
++ * might_resched <- __cond_resched
++ * preempt_schedule <- NOP
++ * preempt_schedule_notrace <- NOP
++ * irqentry_exit_cond_resched <- NOP
++ *
++ * FULL:
++ * cond_resched <- RET0
++ * might_resched <- RET0
++ * preempt_schedule <- preempt_schedule
++ * preempt_schedule_notrace <- preempt_schedule_notrace
++ * irqentry_exit_cond_resched <- irqentry_exit_cond_resched
++ */
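++
++/*
++ * A usage sketch, assuming CONFIG_PREEMPT_DYNAMIC and debugfs mounted at
++ * /sys/kernel/debug (knob location as in the mainline scheduler debug code):
++ *
++ *	# cat /sys/kernel/debug/sched/preempt
++ *	none voluntary (full)
++ *	# echo voluntary > /sys/kernel/debug/sched/preempt
++ *
++ * The same switch is available at boot via the "preempt=" parameter
++ * parsed by setup_preempt_mode() below.
++ */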
++
++enum {
++ preempt_dynamic_undefined = -1,
++ preempt_dynamic_none,
++ preempt_dynamic_voluntary,
++ preempt_dynamic_full,
++};
++
++int preempt_dynamic_mode = preempt_dynamic_undefined;
++
++int sched_dynamic_mode(const char *str)
++{
++ if (!strcmp(str, "none"))
++ return preempt_dynamic_none;
++
++ if (!strcmp(str, "voluntary"))
++ return preempt_dynamic_voluntary;
++
++ if (!strcmp(str, "full"))
++ return preempt_dynamic_full;
++
++ return -EINVAL;
++}
++
++#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
++#define preempt_dynamic_enable(f) static_call_update(f, f##_dynamic_enabled)
++#define preempt_dynamic_disable(f) static_call_update(f, f##_dynamic_disabled)
++#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
++#define preempt_dynamic_enable(f) static_key_enable(&sk_dynamic_##f.key)
++#define preempt_dynamic_disable(f) static_key_disable(&sk_dynamic_##f.key)
++#else
++#error "Unsupported PREEMPT_DYNAMIC mechanism"
++#endif
++
++static DEFINE_MUTEX(sched_dynamic_mutex);
++static bool klp_override;
++
++static void __sched_dynamic_update(int mode)
++{
++ /*
++ * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
++ * the ZERO state, which is invalid.
++ */
++ if (!klp_override)
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_enable(might_resched);
++ preempt_dynamic_enable(preempt_schedule);
++ preempt_dynamic_enable(preempt_schedule_notrace);
++ preempt_dynamic_enable(irqentry_exit_cond_resched);
++
++ switch (mode) {
++ case preempt_dynamic_none:
++ if (!klp_override)
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_disable(might_resched);
++ preempt_dynamic_disable(preempt_schedule);
++ preempt_dynamic_disable(preempt_schedule_notrace);
++ preempt_dynamic_disable(irqentry_exit_cond_resched);
++ if (mode != preempt_dynamic_mode)
++ pr_info("Dynamic Preempt: none\n");
++ break;
++
++ case preempt_dynamic_voluntary:
++ if (!klp_override)
++ preempt_dynamic_enable(cond_resched);
++ preempt_dynamic_enable(might_resched);
++ preempt_dynamic_disable(preempt_schedule);
++ preempt_dynamic_disable(preempt_schedule_notrace);
++ preempt_dynamic_disable(irqentry_exit_cond_resched);
++ if (mode != preempt_dynamic_mode)
++ pr_info("Dynamic Preempt: voluntary\n");
++ break;
++
++ case preempt_dynamic_full:
++ if (!klp_override)
++		preempt_dynamic_disable(cond_resched);
++ preempt_dynamic_disable(might_resched);
++ preempt_dynamic_enable(preempt_schedule);
++ preempt_dynamic_enable(preempt_schedule_notrace);
++ preempt_dynamic_enable(irqentry_exit_cond_resched);
++ if (mode != preempt_dynamic_mode)
++ pr_info("Dynamic Preempt: full\n");
++ break;
++ }
++
++ preempt_dynamic_mode = mode;
++}
++
++void sched_dynamic_update(int mode)
++{
++ mutex_lock(&sched_dynamic_mutex);
++ __sched_dynamic_update(mode);
++ mutex_unlock(&sched_dynamic_mutex);
++}
++
++#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
++
++static int klp_cond_resched(void)
++{
++ __klp_sched_try_switch();
++ return __cond_resched();
++}
++
++void sched_dynamic_klp_enable(void)
++{
++ mutex_lock(&sched_dynamic_mutex);
++
++ klp_override = true;
++ static_call_update(cond_resched, klp_cond_resched);
++
++ mutex_unlock(&sched_dynamic_mutex);
++}
++
++void sched_dynamic_klp_disable(void)
++{
++ mutex_lock(&sched_dynamic_mutex);
++
++ klp_override = false;
++ __sched_dynamic_update(preempt_dynamic_mode);
++
++ mutex_unlock(&sched_dynamic_mutex);
++}
++
++#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
++
++
++static int __init setup_preempt_mode(char *str)
++{
++ int mode = sched_dynamic_mode(str);
++ if (mode < 0) {
++ pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
++ return 0;
++ }
++
++ sched_dynamic_update(mode);
++ return 1;
++}
++__setup("preempt=", setup_preempt_mode);
++
++static void __init preempt_dynamic_init(void)
++{
++ if (preempt_dynamic_mode == preempt_dynamic_undefined) {
++ if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
++ sched_dynamic_update(preempt_dynamic_none);
++ } else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
++ sched_dynamic_update(preempt_dynamic_voluntary);
++ } else {
++ /* Default static call setting, nothing to do */
++ WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
++ preempt_dynamic_mode = preempt_dynamic_full;
++ pr_info("Dynamic Preempt: full\n");
++ }
++ }
++}
++
++#define PREEMPT_MODEL_ACCESSOR(mode) \
++ bool preempt_model_##mode(void) \
++ { \
++ WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
++ return preempt_dynamic_mode == preempt_dynamic_##mode; \
++ } \
++ EXPORT_SYMBOL_GPL(preempt_model_##mode)
++
++PREEMPT_MODEL_ACCESSOR(none);
++PREEMPT_MODEL_ACCESSOR(voluntary);
++PREEMPT_MODEL_ACCESSOR(full);
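++
++/*
++ * For example, PREEMPT_MODEL_ACCESSOR(none) expands to:
++ *
++ *	bool preempt_model_none(void)
++ *	{
++ *		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined);
++ *		return preempt_dynamic_mode == preempt_dynamic_none;
++ *	}
++ *
++ * (plus the EXPORT_SYMBOL_GPL), so callers can test the live preemption
++ * model after preempt_dynamic_init() has run.
++ */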
++
++#else /* !CONFIG_PREEMPT_DYNAMIC: */
++
++static inline void preempt_dynamic_init(void) { }
++
++#endif /* CONFIG_PREEMPT_DYNAMIC */
++
++int io_schedule_prepare(void)
++{
++ int old_iowait = current->in_iowait;
++
++ current->in_iowait = 1;
++ blk_flush_plug(current->plug, true);
++ return old_iowait;
++}
++
++void io_schedule_finish(int token)
++{
++ current->in_iowait = token;
++}
++
++/*
++ * This task is about to go to sleep on IO. Increment rq->nr_iowait so
++ * that process accounting knows that this is a task in IO wait state.
++ *
++ * But don't do that if it is a deliberate, throttling IO wait (this task
++ * has set its backing_dev_info: the queue against which it should throttle)
++ */
++
++long __sched io_schedule_timeout(long timeout)
++{
++ int token;
++ long ret;
++
++ token = io_schedule_prepare();
++ ret = schedule_timeout(timeout);
++ io_schedule_finish(token);
++
++ return ret;
++}
++EXPORT_SYMBOL(io_schedule_timeout);
++
++void __sched io_schedule(void)
++{
++ int token;
++
++ token = io_schedule_prepare();
++ schedule();
++ io_schedule_finish(token);
++}
++EXPORT_SYMBOL(io_schedule);
++
++void sched_show_task(struct task_struct *p)
++{
++ unsigned long free;
++ int ppid;
++
++ if (!try_get_task_stack(p))
++ return;
++
++ pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
++
++ if (task_is_running(p))
++ pr_cont(" running task ");
++ free = stack_not_used(p);
++ ppid = 0;
++ rcu_read_lock();
++ if (pid_alive(p))
++ ppid = task_pid_nr(rcu_dereference(p->real_parent));
++ rcu_read_unlock();
++ pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d flags:0x%08lx\n",
++ free, task_pid_nr(p), task_tgid_nr(p),
++ ppid, read_task_thread_flags(p));
++
++ print_worker_info(KERN_INFO, p);
++ print_stop_info(KERN_INFO, p);
++ show_stack(p, NULL, KERN_INFO);
++ put_task_stack(p);
++}
++EXPORT_SYMBOL_GPL(sched_show_task);
++
++static inline bool
++state_filter_match(unsigned long state_filter, struct task_struct *p)
++{
++ unsigned int state = READ_ONCE(p->__state);
++
++ /* no filter, everything matches */
++ if (!state_filter)
++ return true;
++
++ /* filter, but doesn't match */
++ if (!(state & state_filter))
++ return false;
++
++ /*
++ * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
++ * TASK_KILLABLE).
++ */
++ if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
++ return false;
++
++ return true;
++}
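++
++/*
++ * For example, the SysRq "show blocked tasks" handler passes
++ * TASK_UNINTERRUPTIBLE as @state_filter, so plain D-state tasks are
++ * listed while TASK_IDLE kthreads (TASK_UNINTERRUPTIBLE | TASK_NOLOAD)
++ * are skipped by the last check above.
++ */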
++
++
++void show_state_filter(unsigned int state_filter)
++{
++ struct task_struct *g, *p;
++
++ rcu_read_lock();
++ for_each_process_thread(g, p) {
++ /*
++ * reset the NMI-timeout, listing all files on a slow
++ * console might take a lot of time:
++ * Also, reset softlockup watchdogs on all CPUs, because
++ * another CPU might be blocked waiting for us to process
++ * an IPI.
++ */
++ touch_nmi_watchdog();
++ touch_all_softlockup_watchdogs();
++ if (state_filter_match(state_filter, p))
++ sched_show_task(p);
++ }
++
++#ifdef CONFIG_SCHED_DEBUG
++ /* TODO: Alt schedule FW should support this
++ if (!state_filter)
++ sysrq_sched_debug_show();
++ */
++#endif
++ rcu_read_unlock();
++ /*
++ * Only show locks if all tasks are dumped:
++ */
++ if (!state_filter)
++ debug_show_all_locks();
++}
++
++void dump_cpu_task(int cpu)
++{
++ if (in_hardirq() && cpu == smp_processor_id()) {
++ struct pt_regs *regs;
++
++ regs = get_irq_regs();
++ if (regs) {
++ show_regs(regs);
++ return;
++ }
++ }
++
++ if (trigger_single_cpu_backtrace(cpu))
++ return;
++
++ pr_info("Task dump for CPU %d:\n", cpu);
++ sched_show_task(cpu_curr(cpu));
++}
++
++/**
++ * init_idle - set up an idle thread for a given CPU
++ * @idle: task in question
++ * @cpu: CPU the idle task belongs to
++ *
++ * NOTE: this function does not set the idle thread's NEED_RESCHED
++ * flag, to make booting more robust.
++ */
++void __init init_idle(struct task_struct *idle, int cpu)
++{
++#ifdef CONFIG_SMP
++ struct affinity_context ac = (struct affinity_context) {
++ .new_mask = cpumask_of(cpu),
++ .flags = 0,
++ };
++#endif
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ __sched_fork(0, idle);
++
++ raw_spin_lock_irqsave(&idle->pi_lock, flags);
++ raw_spin_lock(&rq->lock);
++
++ idle->last_ran = rq->clock_task;
++ idle->__state = TASK_RUNNING;
++ /*
++ * PF_KTHREAD should already be set at this point; regardless, make it
++ * look like a proper per-CPU kthread.
++ */
++ idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
++ kthread_set_per_cpu(idle, cpu);
++
++ sched_queue_init_idle(&rq->queue, idle);
++
++#ifdef CONFIG_SMP
++ /*
++ * It's possible that init_idle() gets called multiple times on a task,
++ * in that case do_set_cpus_allowed() will not do the right thing.
++ *
++ * And since this is boot we can forgo the serialisation.
++ */
++ set_cpus_allowed_common(idle, &ac);
++#endif
++
++ /* Silence PROVE_RCU */
++ rcu_read_lock();
++ __set_task_cpu(idle, cpu);
++ rcu_read_unlock();
++
++ rq->idle = idle;
++ rcu_assign_pointer(rq->curr, idle);
++ idle->on_cpu = 1;
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
++
++ /* Set the preempt count _outside_ the spinlocks! */
++ init_idle_preempt_count(idle, cpu);
++
++ ftrace_graph_init_idle_task(idle, cpu);
++ vtime_init_idle(idle, cpu);
++#ifdef CONFIG_SMP
++ sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
++#endif
++}
++
++#ifdef CONFIG_SMP
++
++int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
++ const struct cpumask __maybe_unused *trial)
++{
++ return 1;
++}
++
++int task_can_attach(struct task_struct *p)
++{
++ int ret = 0;
++
++ /*
++ * Kthreads which disallow setaffinity shouldn't be moved
++ * to a new cpuset; we don't want to change their CPU
++ * affinity and isolating such threads by their set of
++ * allowed nodes is unnecessary. Thus, cpusets are not
++ * applicable for such threads. This prevents checking for
++ * success of set_cpus_allowed_ptr() on all attached tasks
++ * before cpus_mask may be changed.
++ */
++ if (p->flags & PF_NO_SETAFFINITY)
++ ret = -EINVAL;
++
++ return ret;
++}
++
++bool sched_smp_initialized __read_mostly;
++
++#ifdef CONFIG_HOTPLUG_CPU
++/*
++ * Ensures that the idle task is using init_mm right before its CPU goes
++ * offline.
++ */
++void idle_task_exit(void)
++{
++ struct mm_struct *mm = current->active_mm;
++
++ BUG_ON(current != this_rq()->idle);
++
++ if (mm != &init_mm) {
++ switch_mm(mm, &init_mm, current);
++ finish_arch_post_lock_switch();
++ }
++
++ /* finish_cpu(), as ran on the BP, will clean up the active_mm state */
++}
++
++static int __balance_push_cpu_stop(void *arg)
++{
++ struct task_struct *p = arg;
++ struct rq *rq = this_rq();
++ struct rq_flags rf;
++ int cpu;
++
++ raw_spin_lock_irq(&p->pi_lock);
++ rq_lock(rq, &rf);
++
++ update_rq_clock(rq);
++
++ if (task_rq(p) == rq && task_on_rq_queued(p)) {
++ cpu = select_fallback_rq(rq->cpu, p);
++ rq = __migrate_task(rq, p, cpu);
++ }
++
++ rq_unlock(rq, &rf);
++ raw_spin_unlock_irq(&p->pi_lock);
++
++ put_task_struct(p);
++
++ return 0;
++}
++
++static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
++
++/*
++ * This is enabled below SCHED_AP_ACTIVE; when !cpu_active(), but only
++ * effective while the hotplug direction is down, i.e. the CPU is going
++ * offline.
++ */
++static void balance_push(struct rq *rq)
++{
++ struct task_struct *push_task = rq->curr;
++
++ lockdep_assert_held(&rq->lock);
++
++ /*
++ * Ensure the thing is persistent until balance_push_set(.on = false);
++ */
++ rq->balance_callback = &balance_push_callback;
++
++ /*
++ * Only active while going offline and when invoked on the outgoing
++ * CPU.
++ */
++ if (!cpu_dying(rq->cpu) || rq != this_rq())
++ return;
++
++ /*
++	 * Both the cpu-hotplug and stop task are in this class and are
++ * required to complete the hotplug process.
++ */
++ if (kthread_is_per_cpu(push_task) ||
++ is_migration_disabled(push_task)) {
++
++ /*
++ * If this is the idle task on the outgoing CPU try to wake
++ * up the hotplug control thread which might wait for the
++ * last task to vanish. The rcuwait_active() check is
++ * accurate here because the waiter is pinned on this CPU
++ * and can't obviously be running in parallel.
++ *
++ * On RT kernels this also has to check whether there are
++ * pinned and scheduled out tasks on the runqueue. They
++ * need to leave the migrate disabled section first.
++ */
++ if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
++ rcuwait_active(&rq->hotplug_wait)) {
++ raw_spin_unlock(&rq->lock);
++ rcuwait_wake_up(&rq->hotplug_wait);
++ raw_spin_lock(&rq->lock);
++ }
++ return;
++ }
++
++ get_task_struct(push_task);
++ /*
++ * Temporarily drop rq->lock such that we can wake-up the stop task.
++ * Both preemption and IRQs are still disabled.
++ */
++ preempt_disable();
++ raw_spin_unlock(&rq->lock);
++ stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
++ this_cpu_ptr(&push_work));
++ preempt_enable();
++ /*
++ * At this point need_resched() is true and we'll take the loop in
++ * schedule(). The next pick is obviously going to be the stop task
++	 * which is kthread_is_per_cpu() and will push this task away.
++ */
++ raw_spin_lock(&rq->lock);
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++ struct rq *rq = cpu_rq(cpu);
++ struct rq_flags rf;
++
++ rq_lock_irqsave(rq, &rf);
++ if (on) {
++ WARN_ON_ONCE(rq->balance_callback);
++ rq->balance_callback = &balance_push_callback;
++ } else if (rq->balance_callback == &balance_push_callback) {
++ rq->balance_callback = NULL;
++ }
++ rq_unlock_irqrestore(rq, &rf);
++}
++
++/*
++ * Invoked from a CPUs hotplug control thread after the CPU has been marked
++ * inactive. All tasks which are not per CPU kernel threads are either
++ * pushed off this CPU now via balance_push() or placed on a different CPU
++ * during wakeup. Wait until the CPU is quiescent.
++ */
++static void balance_hotplug_wait(void)
++{
++ struct rq *rq = this_rq();
++
++ rcuwait_wait_event(&rq->hotplug_wait,
++ rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
++ TASK_UNINTERRUPTIBLE);
++}
++
++#else
++
++static void balance_push(struct rq *rq)
++{
++}
++
++static void balance_push_set(int cpu, bool on)
++{
++}
++
++static inline void balance_hotplug_wait(void)
++{
++}
++#endif /* CONFIG_HOTPLUG_CPU */
++
++static void set_rq_offline(struct rq *rq)
++{
++ if (rq->online) {
++ update_rq_clock(rq);
++ rq->online = false;
++ }
++}
++
++static void set_rq_online(struct rq *rq)
++{
++ if (!rq->online)
++ rq->online = true;
++}
++
++static inline void sched_set_rq_online(struct rq *rq, int cpu)
++{
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ set_rq_online(rq);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++static inline void sched_set_rq_offline(struct rq *rq, int cpu)
++{
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ set_rq_offline(rq);
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++}
++
++/*
++ * used to mark begin/end of suspend/resume:
++ */
++static int num_cpus_frozen;
++
++/*
++ * Update cpusets according to cpu_active mask. If cpusets are
++ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
++ * around partition_sched_domains().
++ *
++ * If we come here as part of a suspend/resume, don't touch cpusets because we
++ * want to restore it back to its original state upon resume anyway.
++ */
++static void cpuset_cpu_active(void)
++{
++ if (cpuhp_tasks_frozen) {
++ /*
++ * num_cpus_frozen tracks how many CPUs are involved in suspend
++ * resume sequence. As long as this is not the last online
++ * operation in the resume sequence, just build a single sched
++ * domain, ignoring cpusets.
++ */
++ partition_sched_domains(1, NULL, NULL);
++ if (--num_cpus_frozen)
++ return;
++ /*
++ * This is the last CPU online operation. So fall through and
++ * restore the original sched domains by considering the
++ * cpuset configurations.
++ */
++ cpuset_force_rebuild();
++ }
++
++ cpuset_update_active_cpus();
++}
++
++static int cpuset_cpu_inactive(unsigned int cpu)
++{
++ if (!cpuhp_tasks_frozen) {
++ cpuset_update_active_cpus();
++ } else {
++ num_cpus_frozen++;
++ partition_sched_domains(1, NULL, NULL);
++ }
++ return 0;
++}
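++
++/*
++ * Worked example: suspending a 4-CPU system offlines CPUs 3..1 with
++ * cpuhp_tasks_frozen set, so num_cpus_frozen counts up to 3 while a
++ * single degenerate sched domain is kept. On resume, each online
++ * operation counts it back down, and only the last one (reaching 0)
++ * falls through to rebuild the cpuset-driven sched domains.
++ */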
++
++static inline void sched_smt_present_inc(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++ static_branch_inc_cpuslocked(&sched_smt_present);
++ cpumask_or(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++ }
++#endif
++}
++
++static inline void sched_smt_present_dec(int cpu)
++{
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
++ static_branch_dec_cpuslocked(&sched_smt_present);
++ if (!static_branch_likely(&sched_smt_present))
++ cpumask_clear(sched_pcore_idle_mask);
++ cpumask_andnot(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
++ }
++#endif
++}
++
++int sched_cpu_activate(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ /*
++ * Clear the balance_push callback and prepare to schedule
++ * regular tasks.
++ */
++ balance_push_set(cpu, false);
++
++ set_cpu_active(cpu, true);
++
++ if (sched_smp_initialized)
++ cpuset_cpu_active();
++
++ /*
++ * Put the rq online, if not already. This happens:
++ *
++ * 1) In the early boot process, because we build the real domains
++ * after all cpus have been brought up.
++ *
++ * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
++ * domains.
++ */
++ sched_set_rq_online(rq, cpu);
++
++ /*
++ * When going up, increment the number of cores with SMT present.
++ */
++ sched_smt_present_inc(cpu);
++
++ return 0;
++}
++
++int sched_cpu_deactivate(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ int ret;
++
++ set_cpu_active(cpu, false);
++
++ /*
++ * From this point forward, this CPU will refuse to run any task that
++ * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
++ * push those tasks away until this gets cleared, see
++ * sched_cpu_dying().
++ */
++ balance_push_set(cpu, true);
++
++ /*
++ * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
++ * users of this state to go away such that all new such users will
++ * observe it.
++ *
++ * Specifically, we rely on ttwu to no longer target this CPU, see
++ * ttwu_queue_cond() and is_cpu_allowed().
++ *
++	 * Do the sync before parking smpboot threads to take care of the RCU boost case.
++ */
++ synchronize_rcu();
++
++ sched_set_rq_offline(rq, cpu);
++
++ /*
++ * When going down, decrement the number of cores with SMT present.
++ */
++ sched_smt_present_dec(cpu);
++
++ if (!sched_smp_initialized)
++ return 0;
++
++ ret = cpuset_cpu_inactive(cpu);
++ if (ret) {
++ sched_smt_present_inc(cpu);
++ sched_set_rq_online(rq, cpu);
++ balance_push_set(cpu, false);
++ set_cpu_active(cpu, true);
++ return ret;
++ }
++
++ return 0;
++}
++
++static void sched_rq_cpu_starting(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++
++ rq->calc_load_update = calc_load_update;
++}
++
++int sched_cpu_starting(unsigned int cpu)
++{
++ sched_rq_cpu_starting(cpu);
++ sched_tick_start(cpu);
++ return 0;
++}
++
++#ifdef CONFIG_HOTPLUG_CPU
++
++/*
++ * Invoked immediately before the stopper thread is invoked to bring the
++ * CPU down completely. At this point all per CPU kthreads except the
++ * hotplug thread (current) and the stopper thread (inactive) have been
++ * either parked or have been unbound from the outgoing CPU. Ensure that
++ * any of those which might be on the way out are gone.
++ *
++ * If after this point a bound task is being woken on this CPU then the
++ * responsible hotplug callback has failed to do its job.
++ * sched_cpu_dying() will catch it with the appropriate fireworks.
++ */
++int sched_cpu_wait_empty(unsigned int cpu)
++{
++ balance_hotplug_wait();
++ return 0;
++}
++
++/*
++ * Since this CPU is going 'away' for a while, fold any nr_active delta we
++ * might have. Called from the CPU stopper task after ensuring that the
++ * stopper is the last running task on the CPU, so nr_active count is
++ * stable. We need to take the tear-down thread which is calling this into
++ * account, so we hand in adjust = 1 to the load calculation.
++ *
++ * Also see the comment "Global load-average calculations".
++ */
++static void calc_load_migrate(struct rq *rq)
++{
++ long delta = calc_load_fold_active(rq, 1);
++
++ if (delta)
++ atomic_long_add(delta, &calc_load_tasks);
++}
++
++static void dump_rq_tasks(struct rq *rq, const char *loglvl)
++{
++ struct task_struct *g, *p;
++ int cpu = cpu_of(rq);
++
++ lockdep_assert_held(&rq->lock);
++
++ printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
++ for_each_process_thread(g, p) {
++ if (task_cpu(p) != cpu)
++ continue;
++
++ if (!task_on_rq_queued(p))
++ continue;
++
++ printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
++ }
++}
++
++int sched_cpu_dying(unsigned int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ unsigned long flags;
++
++ /* Handle pending wakeups and then migrate everything off */
++ sched_tick_stop(cpu);
++
++ raw_spin_lock_irqsave(&rq->lock, flags);
++ if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
++ WARN(true, "Dying CPU not properly vacated!");
++ dump_rq_tasks(rq, KERN_WARNING);
++ }
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++ calc_load_migrate(rq);
++ hrtick_clear(rq);
++ return 0;
++}
++#endif
++
++#ifdef CONFIG_SMP
++static void sched_init_topology_cpumask_early(void)
++{
++ int cpu;
++ cpumask_t *tmp;
++
++ for_each_possible_cpu(cpu) {
++ /* init topo masks */
++ tmp = per_cpu(sched_cpu_topo_masks, cpu);
++
++ cpumask_copy(tmp, cpu_possible_mask);
++ per_cpu(sched_cpu_llc_mask, cpu) = tmp;
++ per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
++ }
++}
++
++#define TOPOLOGY_CPUMASK(name, mask, last)\
++ if (cpumask_and(topo, topo, mask)) { \
++ cpumask_copy(topo, mask); \
++ printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name, \
++ cpu, (topo++)->bits[0]); \
++ } \
++ if (!last) \
++ bitmap_complement(cpumask_bits(topo), cpumask_bits(mask), \
++ nr_cpumask_bits);
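++
++/*
++ * Example (hypothetical 4-CPU box with 2 SMT threads per core): for cpu#0
++ * the cascade below typically records masks widening from the SMT
++ * siblings (0x3) through the coregroup (0xf) up to "others"
++ * (cpu_online_mask), with sched_cpu_llc_mask left pointing at the
++ * coregroup-level entry.
++ */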
++
++static void sched_init_topology_cpumask(void)
++{
++ int cpu;
++ cpumask_t *topo;
++
++ for_each_online_cpu(cpu) {
++ topo = per_cpu(sched_cpu_topo_masks, cpu);
++
++ bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
++ nr_cpumask_bits);
++#ifdef CONFIG_SCHED_SMT
++ TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
++#endif
++ TOPOLOGY_CPUMASK(cluster, topology_cluster_cpumask(cpu), false);
++
++ per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
++ per_cpu(sched_cpu_llc_mask, cpu) = topo;
++ TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
++
++ TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
++
++ TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
++
++ per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
++ printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
++ cpu, per_cpu(sd_llc_id, cpu),
++ (int) (per_cpu(sched_cpu_llc_mask, cpu) -
++ per_cpu(sched_cpu_topo_masks, cpu)));
++ }
++}
++#endif
++
++void __init sched_init_smp(void)
++{
++ /* Move init over to a non-isolated CPU */
++ if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
++ BUG();
++ current->flags &= ~PF_NO_SETAFFINITY;
++
++ sched_init_topology();
++ sched_init_topology_cpumask();
++
++ sched_smp_initialized = true;
++}
++
++static int __init migration_init(void)
++{
++ sched_cpu_starting(smp_processor_id());
++ return 0;
++}
++early_initcall(migration_init);
++
++#else
++void __init sched_init_smp(void)
++{
++ cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
++}
++#endif /* CONFIG_SMP */
++
++int in_sched_functions(unsigned long addr)
++{
++ return in_lock_functions(addr) ||
++ (addr >= (unsigned long)__sched_text_start
++ && addr < (unsigned long)__sched_text_end);
++}
++
++#ifdef CONFIG_CGROUP_SCHED
++/*
++ * Default task group.
++ * Every task in the system belongs to this group at bootup.
++ */
++struct task_group root_task_group;
++LIST_HEAD(task_groups);
++
++/* Cacheline aligned slab cache for task_group */
++static struct kmem_cache *task_group_cache __ro_after_init;
++#endif /* CONFIG_CGROUP_SCHED */
++
++void __init sched_init(void)
++{
++ int i;
++ struct rq *rq;
++
++ printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
++ " by Alfred Chen.\n");
++
++ wait_bit_init();
++
++#ifdef CONFIG_SMP
++ for (i = 0; i < SCHED_QUEUE_BITS; i++)
++ cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
++#endif
++
++#ifdef CONFIG_CGROUP_SCHED
++ task_group_cache = KMEM_CACHE(task_group, 0);
++
++ list_add(&root_task_group.list, &task_groups);
++ INIT_LIST_HEAD(&root_task_group.children);
++ INIT_LIST_HEAD(&root_task_group.siblings);
++#endif /* CONFIG_CGROUP_SCHED */
++ for_each_possible_cpu(i) {
++ rq = cpu_rq(i);
++
++ sched_queue_init(&rq->queue);
++ rq->prio = IDLE_TASK_SCHED_PRIO;
++#ifdef CONFIG_SCHED_PDS
++ rq->prio_idx = rq->prio;
++#endif
++
++ raw_spin_lock_init(&rq->lock);
++ rq->nr_running = rq->nr_uninterruptible = 0;
++ rq->calc_load_active = 0;
++ rq->calc_load_update = jiffies + LOAD_FREQ;
++#ifdef CONFIG_SMP
++ rq->online = false;
++ rq->cpu = i;
++
++ rq->clear_idle_mask_func = cpumask_clear_cpu;
++ rq->set_idle_mask_func = cpumask_set_cpu;
++ rq->balance_func = NULL;
++ rq->active_balance_arg.active = 0;
++
++#ifdef CONFIG_NO_HZ_COMMON
++ INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
++#endif
++ rq->balance_callback = &balance_push_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++ rcuwait_init(&rq->hotplug_wait);
++#endif
++#endif /* CONFIG_SMP */
++ rq->nr_switches = 0;
++
++ hrtick_rq_init(rq);
++ atomic_set(&rq->nr_iowait, 0);
++
++ zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
++ }
++#ifdef CONFIG_SMP
++ /* Set rq->online for cpu 0 */
++ cpu_rq(0)->online = true;
++#endif
++ /*
++ * The boot idle thread does lazy MMU switching as well:
++ */
++ mmgrab(&init_mm);
++ enter_lazy_tlb(&init_mm, current);
++
++ /*
++ * The idle task doesn't need the kthread struct to function, but it
++ * is dressed up as a per-CPU kthread and thus needs to play the part
++ * if we want to avoid special-casing it in code that deals with per-CPU
++ * kthreads.
++ */
++ WARN_ON(!set_kthread_struct(current));
++
++ /*
++ * Make us the idle thread. Technically, schedule() should not be
++ * called from this thread, however somewhere below it might be,
++ * but because we are the idle thread, we just pick up running again
++ * when this runqueue becomes "idle".
++ */
++ init_idle(current, smp_processor_id());
++
++ calc_load_update = jiffies + LOAD_FREQ;
++
++#ifdef CONFIG_SMP
++ idle_thread_set_boot_cpu();
++ balance_push_set(smp_processor_id(), false);
++
++ sched_init_topology_cpumask_early();
++#endif /* SMP */
++
++ preempt_dynamic_init();
++}
++
++#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
++
++void __might_sleep(const char *file, int line)
++{
++ unsigned int state = get_current_state();
++ /*
++ * Blocking primitives will set (and therefore destroy) current->state,
++ * since we will exit with TASK_RUNNING make sure we enter with it,
++ * otherwise we will destroy state.
++ */
++ WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
++ "do not call blocking ops when !TASK_RUNNING; "
++ "state=%x set at [<%p>] %pS\n", state,
++ (void *)current->task_state_change,
++ (void *)current->task_state_change);
++
++ __might_resched(file, line, 0);
++}
++EXPORT_SYMBOL(__might_sleep);
++
++static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
++{
++ if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
++ return;
++
++ if (preempt_count() == preempt_offset)
++ return;
++
++ pr_err("Preemption disabled at:");
++ print_ip_sym(KERN_ERR, ip);
++}
++
++static inline bool resched_offsets_ok(unsigned int offsets)
++{
++ unsigned int nested = preempt_count();
++
++ nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
++
++ return nested == offsets;
++}
++
++void __might_resched(const char *file, int line, unsigned int offsets)
++{
++ /* Ratelimiting timestamp: */
++ static unsigned long prev_jiffy;
++
++ unsigned long preempt_disable_ip;
++
++ /* WARN_ON_ONCE() by default, no rate limit required: */
++ rcu_sleep_check();
++
++ if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
++ !is_idle_task(current) && !current->non_block_count) ||
++ system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
++ oops_in_progress)
++ return;
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ /* Save this before calling printk(), since that will clobber it: */
++ preempt_disable_ip = get_preempt_disable_ip(current);
++
++ pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
++ file, line);
++ pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(), current->non_block_count,
++ current->pid, current->comm);
++ pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
++ offsets & MIGHT_RESCHED_PREEMPT_MASK);
++
++ if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
++ pr_err("RCU nest depth: %d, expected: %u\n",
++ rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
++ }
++
++ if (task_stack_end_corrupted(current))
++ pr_emerg("Thread overran stack, or stack corrupted\n");
++
++ debug_show_held_locks(current);
++ if (irqs_disabled())
++ print_irqtrace_events(current);
++
++ print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
++ preempt_disable_ip);
++
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL(__might_resched);
++
++void __cant_sleep(const char *file, int line, int preempt_offset)
++{
++ static unsigned long prev_jiffy;
++
++ if (irqs_disabled())
++ return;
++
++ if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++ return;
++
++ if (preempt_count() > preempt_offset)
++ return;
++
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
++ printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(),
++ current->pid, current->comm);
++
++ debug_show_held_locks(current);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_sleep);
++
++#ifdef CONFIG_SMP
++void __cant_migrate(const char *file, int line)
++{
++ static unsigned long prev_jiffy;
++
++ if (irqs_disabled())
++ return;
++
++ if (is_migration_disabled(current))
++ return;
++
++ if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
++ return;
++
++ if (preempt_count() > 0)
++ return;
++
++ if (current->migration_flags & MDF_FORCE_ENABLED)
++ return;
++
++ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
++ return;
++ prev_jiffy = jiffies;
++
++ pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
++ pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
++ in_atomic(), irqs_disabled(), is_migration_disabled(current),
++ current->pid, current->comm);
++
++ debug_show_held_locks(current);
++ dump_stack();
++ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
++}
++EXPORT_SYMBOL_GPL(__cant_migrate);
++#endif
++#endif
++
++#ifdef CONFIG_MAGIC_SYSRQ
++void normalize_rt_tasks(void)
++{
++ struct task_struct *g, *p;
++ struct sched_attr attr = {
++ .sched_policy = SCHED_NORMAL,
++ };
++
++ read_lock(&tasklist_lock);
++ for_each_process_thread(g, p) {
++ /*
++ * Only normalize user tasks:
++ */
++ if (p->flags & PF_KTHREAD)
++ continue;
++
++ schedstat_set(p->stats.wait_start, 0);
++ schedstat_set(p->stats.sleep_start, 0);
++ schedstat_set(p->stats.block_start, 0);
++
++ if (!rt_or_dl_task(p)) {
++ /*
++ * Renice negative nice level userspace
++ * tasks back to 0:
++ */
++ if (task_nice(p) < 0)
++ set_user_nice(p, 0);
++ continue;
++ }
++
++ __sched_setscheduler(p, &attr, false, false);
++ }
++ read_unlock(&tasklist_lock);
++}
++#endif /* CONFIG_MAGIC_SYSRQ */
++
++#if defined(CONFIG_KGDB_KDB)
++/*
++ * These functions are only useful for KDB.
++ *
++ * They can only be called when the whole system has been
++ * stopped - every CPU needs to be quiescent, and no scheduling
++ * activity can take place. Using them for anything else would
++ * be a serious bug, and as a result, they aren't even visible
++ * under any other configuration.
++ */
++
++/**
++ * curr_task - return the current task for a given CPU.
++ * @cpu: the processor in question.
++ *
++ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
++ *
++ * Return: The current task for @cpu.
++ */
++struct task_struct *curr_task(int cpu)
++{
++ return cpu_curr(cpu);
++}
++
++#endif /* defined(CONFIG_KGDB_KDB) */
++
++#ifdef CONFIG_CGROUP_SCHED
++static void sched_free_group(struct task_group *tg)
++{
++ kmem_cache_free(task_group_cache, tg);
++}
++
++static void sched_free_group_rcu(struct rcu_head *rhp)
++{
++ sched_free_group(container_of(rhp, struct task_group, rcu));
++}
++
++static void sched_unregister_group(struct task_group *tg)
++{
++ /*
++ * We have to wait for yet another RCU grace period to expire, as
++ * print_cfs_stats() might run concurrently.
++ */
++ call_rcu(&tg->rcu, sched_free_group_rcu);
++}
++
++/* allocate runqueue etc for a new task group */
++struct task_group *sched_create_group(struct task_group *parent)
++{
++ struct task_group *tg;
++
++ tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
++ if (!tg)
++ return ERR_PTR(-ENOMEM);
++
++ return tg;
++}
++
++void sched_online_group(struct task_group *tg, struct task_group *parent)
++{
++}
++
++/* RCU callback to free various structures associated with a task group */
++static void sched_unregister_group_rcu(struct rcu_head *rhp)
++{
++ /* Now it should be safe to free those cfs_rqs: */
++ sched_unregister_group(container_of(rhp, struct task_group, rcu));
++}
++
++void sched_destroy_group(struct task_group *tg)
++{
++	/* Wait for possible concurrent references to cfs_rqs to complete: */
++ call_rcu(&tg->rcu, sched_unregister_group_rcu);
++}
++
++void sched_release_group(struct task_group *tg)
++{
++}
++
++static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
++{
++ return css ? container_of(css, struct task_group, css) : NULL;
++}
++
++static struct cgroup_subsys_state *
++cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
++{
++ struct task_group *parent = css_tg(parent_css);
++ struct task_group *tg;
++
++ if (!parent) {
++ /* This is early initialization for the top cgroup */
++ return &root_task_group.css;
++ }
++
++ tg = sched_create_group(parent);
++ if (IS_ERR(tg))
++ return ERR_PTR(-ENOMEM);
++ return &tg->css;
++}
++
++/* Expose task group only after completing cgroup initialization */
++static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++ struct task_group *parent = css_tg(css->parent);
++
++ if (parent)
++ sched_online_group(tg, parent);
++ return 0;
++}
++
++static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++
++ sched_release_group(tg);
++}
++
++static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
++{
++ struct task_group *tg = css_tg(css);
++
++ /*
++ * Relies on the RCU grace period between css_released() and this.
++ */
++ sched_unregister_group(tg);
++}
++
++#ifdef CONFIG_RT_GROUP_SCHED
++static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
++{
++ return 0;
++}
++#endif
++
++static void cpu_cgroup_attach(struct cgroup_taskset *tset)
++{
++}
++
++static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, s64 cfs_quota_us)
++{
++ return 0;
++}
++
++static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 cfs_period_us)
++{
++ return 0;
++}
++
++static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 cfs_burst_us)
++{
++ return 0;
++}
++
++static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 val)
++{
++ return 0;
++}
++
++static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 rt_period_us)
++{
++ return 0;
++}
++
++static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
++ char *buf, size_t nbytes,
++ loff_t off)
++{
++ return nbytes;
++}
++
++static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
++ char *buf, size_t nbytes,
++ loff_t off)
++{
++ return nbytes;
++}
++
++static struct cftype cpu_legacy_files[] = {
++ {
++ .name = "cfs_quota_us",
++ .read_s64 = cpu_cfs_quota_read_s64,
++ .write_s64 = cpu_cfs_quota_write_s64,
++ },
++ {
++ .name = "cfs_period_us",
++ .read_u64 = cpu_cfs_period_read_u64,
++ .write_u64 = cpu_cfs_period_write_u64,
++ },
++ {
++ .name = "cfs_burst_us",
++ .read_u64 = cpu_cfs_burst_read_u64,
++ .write_u64 = cpu_cfs_burst_write_u64,
++ },
++ {
++ .name = "stat",
++ .seq_show = cpu_cfs_stat_show,
++ },
++ {
++ .name = "stat.local",
++ .seq_show = cpu_cfs_local_stat_show,
++ },
++ {
++ .name = "rt_runtime_us",
++ .read_s64 = cpu_rt_runtime_read,
++ .write_s64 = cpu_rt_runtime_write,
++ },
++ {
++ .name = "rt_period_us",
++ .read_u64 = cpu_rt_period_read_uint,
++ .write_u64 = cpu_rt_period_write_uint,
++ },
++ {
++ .name = "uclamp.min",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_min_show,
++ .write = cpu_uclamp_min_write,
++ },
++ {
++ .name = "uclamp.max",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_max_show,
++ .write = cpu_uclamp_max_write,
++ },
++ { } /* Terminate */
++};
++
++static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft, u64 weight)
++{
++ return 0;
++}
++
++static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 nice)
++{
++ return 0;
++}
++
++static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 idle)
++{
++ return 0;
++}
++
++static int cpu_max_show(struct seq_file *sf, void *v)
++{
++ return 0;
++}
++
++static ssize_t cpu_max_write(struct kernfs_open_file *of,
++ char *buf, size_t nbytes, loff_t off)
++{
++ return nbytes;
++}
++
++static struct cftype cpu_files[] = {
++ {
++ .name = "weight",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_u64 = cpu_weight_read_u64,
++ .write_u64 = cpu_weight_write_u64,
++ },
++ {
++ .name = "weight.nice",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_s64 = cpu_weight_nice_read_s64,
++ .write_s64 = cpu_weight_nice_write_s64,
++ },
++ {
++ .name = "idle",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_s64 = cpu_idle_read_s64,
++ .write_s64 = cpu_idle_write_s64,
++ },
++ {
++ .name = "max",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_max_show,
++ .write = cpu_max_write,
++ },
++ {
++ .name = "max.burst",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .read_u64 = cpu_cfs_burst_read_u64,
++ .write_u64 = cpu_cfs_burst_write_u64,
++ },
++ {
++ .name = "uclamp.min",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_min_show,
++ .write = cpu_uclamp_min_write,
++ },
++ {
++ .name = "uclamp.max",
++ .flags = CFTYPE_NOT_ON_ROOT,
++ .seq_show = cpu_uclamp_max_show,
++ .write = cpu_uclamp_max_write,
++ },
++ { } /* terminate */
++};
++
++static int cpu_extra_stat_show(struct seq_file *sf,
++ struct cgroup_subsys_state *css)
++{
++ return 0;
++}
++
++static int cpu_local_stat_show(struct seq_file *sf,
++ struct cgroup_subsys_state *css)
++{
++ return 0;
++}
++
++struct cgroup_subsys cpu_cgrp_subsys = {
++ .css_alloc = cpu_cgroup_css_alloc,
++ .css_online = cpu_cgroup_css_online,
++ .css_released = cpu_cgroup_css_released,
++ .css_free = cpu_cgroup_css_free,
++ .css_extra_stat_show = cpu_extra_stat_show,
++ .css_local_stat_show = cpu_local_stat_show,
++#ifdef CONFIG_RT_GROUP_SCHED
++ .can_attach = cpu_cgroup_can_attach,
++#endif
++ .attach = cpu_cgroup_attach,
++ .legacy_cftypes = cpu_legacy_files,
++ .dfl_cftypes = cpu_files,
++ .early_init = true,
++ .threaded = true,
++};
++#endif /* CONFIG_CGROUP_SCHED */
++
++#undef CREATE_TRACE_POINTS
++
++#ifdef CONFIG_SCHED_MM_CID
++
++/*
++ * @cid_lock: Guarantee forward-progress of cid allocation.
++ *
++ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
++ * is only used when contention is detected by the lock-free allocation so
++ * forward progress can be guaranteed.
++ */
++DEFINE_RAW_SPINLOCK(cid_lock);
++
++/*
++ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
++ *
++ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
++ * detected, it is set to 1 to ensure that all newly coming allocations are
++ * serialized by @cid_lock until the allocation which detected contention
++ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
++ * of a cid allocation.
++ */
++int use_cid_lock;
++
++/*
++ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
++ * concurrently with respect to the execution of the source runqueue context
++ * switch.
++ *
++ * There is one basic property we want to guarantee here:
++ *
++ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
++ * used by a task. That would lead to concurrent allocation of the cid and
++ * userspace corruption.
++ *
++ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
++ * that a pair of loads observe at least one of a pair of stores, which can be
++ * shown as:
++ *
++ * X = Y = 0
++ *
++ * w[X]=1 w[Y]=1
++ * MB MB
++ * r[Y]=y r[X]=x
++ *
++ * Which guarantees that x==0 && y==0 is impossible. But rather than using
++ * values 0 and 1, this algorithm cares about specific state transitions of the
++ * runqueue current task (as updated by the scheduler context switch), and the
++ * per-mm/cpu cid value.
++ *
++ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
++ * task->mm != mm for the rest of the discussion. There are two scheduler state
++ * transitions on context switch we care about:
++ *
++ * (TSA) Store to rq->curr with transition from (N) to (Y)
++ *
++ * (TSB) Store to rq->curr with transition from (Y) to (N)
++ *
++ * On the remote-clear side, there is one transition we care about:
++ *
++ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
++ *
++ * There is also a transition to UNSET state which can be performed from all
++ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
++ * guarantees that only a single thread will succeed:
++ *
++ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
++ *
++ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
++ * when a thread is actively using the cid (property (1)).
++ *
++ * Let's look at the relevant combinations of TSA/TSB, and TMA transitions.
++ *
++ * Scenario A) (TSA)+(TMA) (from next task perspective)
++ *
++ * CPU0 CPU1
++ *
++ * Context switch CS-1 Remote-clear
++ * - store to rq->curr: (N)->(Y) (TSA) - cmpxchg to *pcpu_id to LAZY (TMA)
++ * (implied barrier after cmpxchg)
++ * - switch_mm_cid()
++ * - memory barrier (see switch_mm_cid()
++ * comment explaining how this barrier
++ * is combined with other scheduler
++ * barriers)
++ * - mm_cid_get (next)
++ * - READ_ONCE(*pcpu_cid) - rcu_dereference(src_rq->curr)
++ *
++ * This Dekker ensures that either task (Y) is observed by the
++ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
++ * observed.
++ *
++ * If task (Y) store is observed by rcu_dereference(), it means that there is
++ * still an active task on the cpu. Remote-clear will therefore not transition
++ * to UNSET, which fulfills property (1).
++ *
++ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
++ * it will move its state to UNSET, which clears the percpu cid perhaps
++ * uselessly (which is not an issue for correctness). Because task (Y) is not
++ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
++ * state to UNSET is done with a cmpxchg expecting that the old state has the
++ * LAZY flag set, only one thread will successfully UNSET.
++ *
++ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
++ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
++ * CPU1 will observe task (Y) and do nothing more, which is fine.
++ *
++ * What we are effectively preventing with this Dekker is a scenario where
++ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
++ * because this would UNSET a cid which is actively used.
++ */
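++
++/*
++ * A self-contained sketch of the Dekker pattern above, using C11 seq_cst
++ * atomics in place of the kernel primitives (an illustrative assumption,
++ * not the code path itself):
++ *
++ *	_Atomic int X = 0, Y = 0;
++ *
++ *	// CPU0				// CPU1
++ *	atomic_store(&X, 1);		atomic_store(&Y, 1);
++ *	int y = atomic_load(&Y);	int x = atomic_load(&X);
++ *
++ * Sequential consistency supplies the MB in the diagram, so the outcome
++ * x == 0 && y == 0 is forbidden: at least one side observes the other's
++ * store, which is exactly the property used to protect an in-use cid.
++ */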
++
++void sched_mm_cid_migrate_from(struct task_struct *t)
++{
++ t->migrate_from_cpu = task_cpu(t);
++}
++
++static
++int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
++ struct task_struct *t,
++ struct mm_cid *src_pcpu_cid)
++{
++ struct mm_struct *mm = t->mm;
++ struct task_struct *src_task;
++ int src_cid, last_mm_cid;
++
++ if (!mm)
++ return -1;
++
++ last_mm_cid = t->last_mm_cid;
++ /*
++ * If the migrated task has no last cid, or if the current
++ * task on src rq uses the cid, it means the source cid does not need
++ * to be moved to the destination cpu.
++ */
++ if (last_mm_cid == -1)
++ return -1;
++ src_cid = READ_ONCE(src_pcpu_cid->cid);
++ if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
++ return -1;
++
++ /*
++ * If we observe an active task using the mm on this rq, it means we
++ * are not the last task to be migrated from this cpu for this mm, so
++ * there is no need to move src_cid to the destination cpu.
++ */
++ guard(rcu)();
++ src_task = rcu_dereference(src_rq->curr);
++ if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++ t->last_mm_cid = -1;
++ return -1;
++ }
++
++ return src_cid;
++}
++
++static
++int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
++ struct task_struct *t,
++ struct mm_cid *src_pcpu_cid,
++ int src_cid)
++{
++ struct task_struct *src_task;
++ struct mm_struct *mm = t->mm;
++ int lazy_cid;
++
++ if (src_cid == -1)
++ return -1;
++
++ /*
++ * Attempt to clear the source cpu cid to move it to the destination
++ * cpu.
++ */
++ lazy_cid = mm_cid_set_lazy_put(src_cid);
++ if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
++ return -1;
++
++ /*
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm matches the scheduler barrier in context_switch()
++ * between store to rq->curr and load of prev and next task's
++ * per-mm/cpu cid.
++ *
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm_cid_active matches the barrier in
++ * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++ * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++ * load of per-mm/cpu cid.
++ */
++
++ /*
++ * If we observe an active task using the mm on this rq after setting
++ * the lazy-put flag, this task will be responsible for transitioning
++ * from lazy-put flag set to MM_CID_UNSET.
++ */
++ scoped_guard (rcu) {
++ src_task = rcu_dereference(src_rq->curr);
++ if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
++ /*
++ * We observed an active task for this mm, there is therefore
++ * no point in moving this cid to the destination cpu.
++ */
++ t->last_mm_cid = -1;
++ return -1;
++ }
++ }
++
++ /*
++ * The src_cid is unused, so it can be unset.
++ */
++ if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++ return -1;
++ return src_cid;
++}
++
++/*
++ * Migration to dst cpu. Called with dst_rq lock held.
++ * Interrupts are disabled, which keeps the window of cid ownership without the
++ * source rq lock held small.
++ */
++void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t)
++{
++ struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
++ struct mm_struct *mm = t->mm;
++ int src_cid, dst_cid, src_cpu;
++ struct rq *src_rq;
++
++ lockdep_assert_rq_held(dst_rq);
++
++ if (!mm)
++ return;
++ src_cpu = t->migrate_from_cpu;
++ if (src_cpu == -1) {
++ t->last_mm_cid = -1;
++ return;
++ }
++ /*
++ * Move the src cid if the dst cid is unset. This keeps id
++ * allocation closest to 0 in cases where few threads migrate around
++ * many CPUs.
++ *
++ * If destination cid is already set, we may have to just clear
++ * the src cid to ensure compactness in frequent migrations
++ * scenarios.
++ *
++ * It is not useful to clear the src cid when the number of threads is
++ * greater or equal to the number of allowed CPUs, because user-space
++ * can expect that the number of allowed cids can reach the number of
++ * allowed CPUs.
++ */
++ dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
++ dst_cid = READ_ONCE(dst_pcpu_cid->cid);
++ if (!mm_cid_is_unset(dst_cid) &&
++ atomic_read(&mm->mm_users) >= t->nr_cpus_allowed)
++ return;
++ src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
++ src_rq = cpu_rq(src_cpu);
++ src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
++ if (src_cid == -1)
++ return;
++ src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
++ src_cid);
++ if (src_cid == -1)
++ return;
++ if (!mm_cid_is_unset(dst_cid)) {
++ __mm_cid_put(mm, src_cid);
++ return;
++ }
++ /* Move src_cid to dst cpu. */
++ mm_cid_snapshot_time(dst_rq, mm);
++ WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
++}
++
++static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
++ int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ struct task_struct *t;
++ int cid, lazy_cid;
++
++ cid = READ_ONCE(pcpu_cid->cid);
++ if (!mm_cid_is_valid(cid))
++ return;
++
++ /*
++ * Clear the cpu cid if it is set to keep cid allocation compact. If
++ * there happens to be other tasks left on the source cpu using this
++ * mm, the next task using this mm will reallocate its cid on context
++ * switch.
++ */
++ lazy_cid = mm_cid_set_lazy_put(cid);
++ if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
++ return;
++
++ /*
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm matches the scheduler barrier in context_switch()
++ * between store to rq->curr and load of prev and next task's
++ * per-mm/cpu cid.
++ *
++ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
++ * rq->curr->mm_cid_active matches the barrier in
++ * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
++ * sched_mm_cid_after_execve() between store to t->mm_cid_active and
++ * load of per-mm/cpu cid.
++ */
++
++ /*
++ * If we observe an active task using the mm on this rq after setting
++ * the lazy-put flag, that task will be responsible for transitioning
++ * from lazy-put flag set to MM_CID_UNSET.
++ */
++ scoped_guard (rcu) {
++ t = rcu_dereference(rq->curr);
++ if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
++ return;
++ }
++
++ /*
++ * The cid is unused, so it can be unset.
++ * Disable interrupts to keep the window of cid ownership without rq
++ * lock small.
++ */
++ scoped_guard (irqsave) {
++ if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
++ __mm_cid_put(mm, cid);
++ }
++}
++
++static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
++{
++ struct rq *rq = cpu_rq(cpu);
++ struct mm_cid *pcpu_cid;
++ struct task_struct *curr;
++ u64 rq_clock;
++
++ /*
++ * rq->clock load is racy on 32-bit but one spurious clear once in a
++ * while is irrelevant.
++ */
++ rq_clock = READ_ONCE(rq->clock);
++ pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++
++ /*
++ * In order to take care of infrequently scheduled tasks, bump the time
++ * snapshot associated with this cid if an active task using the mm is
++ * observed on this rq.
++ */
++ scoped_guard (rcu) {
++ curr = rcu_dereference(rq->curr);
++ if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
++ WRITE_ONCE(pcpu_cid->time, rq_clock);
++ return;
++ }
++ }
++
++ if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
++ return;
++ sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
++static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
++ int weight)
++{
++ struct mm_cid *pcpu_cid;
++ int cid;
++
++ pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
++ cid = READ_ONCE(pcpu_cid->cid);
++ if (!mm_cid_is_valid(cid) || cid < weight)
++ return;
++ sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
++}
++
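++/*
++ * Deferred task work, throttled to one scan per MM_CID_SCAN_DELAY:
++ * first clear per-cpu cids that have not been used for a full period,
++ * then clear cids greater than or equal to the remaining cidmask weight
++ * so the allocated id range stays compact.
++ */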
++static void task_mm_cid_work(struct callback_head *work)
++{
++ unsigned long now = jiffies, old_scan, next_scan;
++ struct task_struct *t = current;
++ struct cpumask *cidmask;
++ struct mm_struct *mm;
++ int weight, cpu;
++
++ SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
++
++ work->next = work; /* Prevent double-add */
++ if (t->flags & PF_EXITING)
++ return;
++ mm = t->mm;
++ if (!mm)
++ return;
++ old_scan = READ_ONCE(mm->mm_cid_next_scan);
++ next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++ if (!old_scan) {
++ unsigned long res;
++
++ res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
++ if (res != old_scan)
++ old_scan = res;
++ else
++ old_scan = next_scan;
++ }
++ if (time_before(now, old_scan))
++ return;
++ if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
++ return;
++ cidmask = mm_cidmask(mm);
++ /* Clear cids that were not recently used. */
++ for_each_possible_cpu(cpu)
++ sched_mm_cid_remote_clear_old(mm, cpu);
++ weight = cpumask_weight(cidmask);
++ /*
++	 * Clear cids that are greater than or equal to the cidmask weight to
++ * recompact it.
++ */
++ for_each_possible_cpu(cpu)
++ sched_mm_cid_remote_clear_weight(mm, cpu, weight);
++}
++
++void init_sched_mm_cid(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ int mm_users = 0;
++
++ if (mm) {
++ mm_users = atomic_read(&mm->mm_users);
++ if (mm_users == 1)
++ mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
++ }
++ t->cid_work.next = &t->cid_work; /* Protect against double add */
++ init_task_work(&t->cid_work, task_mm_cid_work);
++}
++
++void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
++{
++ struct callback_head *work = &curr->cid_work;
++ unsigned long now = jiffies;
++
++ if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
++ work->next != work)
++ return;
++ if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
++ return;
++
++ /* No page allocation under rq lock */
++ task_work_add(curr, work, TWA_RESUME | TWAF_NO_ALLOC);
++}
++
++void sched_mm_cid_exit_signals(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct rq *rq;
++
++ if (!mm)
++ return;
++
++ preempt_disable();
++ rq = this_rq();
++ guard(rq_lock_irqsave)(rq);
++ preempt_enable_no_resched(); /* holding spinlock */
++ WRITE_ONCE(t->mm_cid_active, 0);
++ /*
++ * Store t->mm_cid_active before loading per-mm/cpu cid.
++ * Matches barrier in sched_mm_cid_remote_clear_old().
++ */
++ smp_mb();
++ mm_cid_put(mm);
++ t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_before_execve(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct rq *rq;
++
++ if (!mm)
++ return;
++
++ preempt_disable();
++ rq = this_rq();
++ guard(rq_lock_irqsave)(rq);
++ preempt_enable_no_resched(); /* holding spinlock */
++ WRITE_ONCE(t->mm_cid_active, 0);
++ /*
++ * Store t->mm_cid_active before loading per-mm/cpu cid.
++ * Matches barrier in sched_mm_cid_remote_clear_old().
++ */
++ smp_mb();
++ mm_cid_put(mm);
++ t->last_mm_cid = t->mm_cid = -1;
++}
++
++void sched_mm_cid_after_execve(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct rq *rq;
++
++ if (!mm)
++ return;
++
++ preempt_disable();
++ rq = this_rq();
++ scoped_guard (rq_lock_irqsave, rq) {
++ preempt_enable_no_resched(); /* holding spinlock */
++ WRITE_ONCE(t->mm_cid_active, 1);
++ /*
++ * Store t->mm_cid_active before loading per-mm/cpu cid.
++ * Matches barrier in sched_mm_cid_remote_clear_old().
++ */
++ smp_mb();
++ t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
++ }
++ rseq_set_notify_resume(t);
++}
++
++void sched_mm_cid_fork(struct task_struct *t)
++{
++ WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
++ t->mm_cid_active = 1;
++}
++#endif
+diff --git a/kernel/sched/alt_core.h b/kernel/sched/alt_core.h
+new file mode 100644
+index 000000000000..12d76d9d290e
+--- /dev/null
++++ b/kernel/sched/alt_core.h
+@@ -0,0 +1,213 @@
++#ifndef _KERNEL_SCHED_ALT_CORE_H
++#define _KERNEL_SCHED_ALT_CORE_H
++
++/*
++ * Compile time debug macro
++ * #define ALT_SCHED_DEBUG
++ */
++
++/*
++ * Task related inlined functions
++ */
++static inline bool is_migration_disabled(struct task_struct *p)
++{
++#ifdef CONFIG_SMP
++ return p->migration_disabled;
++#else
++ return false;
++#endif
++}
++
++/* rt_prio(prio) defined in include/linux/sched/rt.h */
++#define rt_task(p) rt_prio((p)->prio)
++#define rt_policy(policy) ((policy) == SCHED_FIFO || (policy) == SCHED_RR)
++#define task_has_rt_policy(p) (rt_policy((p)->policy))
++
++struct affinity_context {
++ const struct cpumask *new_mask;
++ struct cpumask *user_mask;
++ unsigned int flags;
++};
++
++/* CONFIG_SCHED_CLASS_EXT is not supported */
++#define scx_switched_all() false
++
++#define SCA_CHECK 0x01
++#define SCA_MIGRATE_DISABLE 0x02
++#define SCA_MIGRATE_ENABLE 0x04
++#define SCA_USER 0x08
++
++#ifdef CONFIG_SMP
++
++extern int __set_cpus_allowed_ptr(struct task_struct *p, struct affinity_context *ctx);
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++ /*
++	 * See do_set_cpus_allowed() for the rcu_head usage.
++ */
++ int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
++
++ return kmalloc_node(size, GFP_KERNEL, node);
++}
++
++#else /* !CONFIG_SMP: */
++
++static inline int __set_cpus_allowed_ptr(struct task_struct *p,
++ struct affinity_context *ctx)
++{
++ return set_cpus_allowed_ptr(p, ctx->new_mask);
++}
++
++static inline cpumask_t *alloc_user_cpus_ptr(int node)
++{
++ return NULL;
++}
++
++#endif /* !CONFIG_SMP */
++
++#ifdef CONFIG_RT_MUTEXES
++
++static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
++{
++ if (pi_task)
++ prio = min(prio, pi_task->prio);
++
++ return prio;
++}
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++ struct task_struct *pi_task = rt_mutex_get_top_task(p);
++
++ return __rt_effective_prio(pi_task, prio);
++}
++
++#else /* !CONFIG_RT_MUTEXES: */
++
++static inline int rt_effective_prio(struct task_struct *p, int prio)
++{
++ return prio;
++}
++
++#endif /* !CONFIG_RT_MUTEXES */
++
++extern int __sched_setscheduler(struct task_struct *p, const struct sched_attr *attr, bool user, bool pi);
++extern int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
++extern void __setscheduler_prio(struct task_struct *p, int prio);
++
++/*
++ * Context API
++ */
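++/*
++ * Lock the rq @p is associated with. Spins until the association is
++ * stable: returns with rq->lock held (*plock set) while @p is on-cpu or
++ * queued, waits out a migration in progress, and returns without any
++ * lock taken (*plock == NULL) once @p is fully off the rq.
++ */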
++static inline struct rq *__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
++{
++ struct rq *rq;
++ for (;;) {
++ rq = task_rq(p);
++ if (p->on_cpu || task_on_rq_queued(p)) {
++ raw_spin_lock(&rq->lock);
++ if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
++ *plock = &rq->lock;
++ return rq;
++ }
++ raw_spin_unlock(&rq->lock);
++ } else if (task_on_rq_migrating(p)) {
++ do {
++ cpu_relax();
++ } while (unlikely(task_on_rq_migrating(p)));
++ } else {
++ *plock = NULL;
++ return rq;
++ }
++ }
++}
++
++static inline void __task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
++{
++ if (NULL != lock)
++ raw_spin_unlock(lock);
++}
++
++void check_task_changed(struct task_struct *p, struct rq *rq);
++
++/*
++ * RQ related inlined functions
++ */
++
++/*
++ * This routine assumes that the idle task is always in the queue
++ */
++static inline struct task_struct *sched_rq_first_task(struct rq *rq)
++{
++ const struct list_head *head = &rq->queue.heads[sched_rq_prio_idx(rq)];
++
++ return list_first_entry(head, struct task_struct, sq_node);
++}
++
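++/*
++ * When @p is the tail of its priority list, p->sq_node.next points back
++ * into rq->queue.heads[]; the range check below detects this and moves
++ * on to the first task of the next set bit in the queue bitmap.
++ */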
++static inline struct task_struct * sched_rq_next_task(struct task_struct *p, struct rq *rq)
++{
++ struct list_head *next = p->sq_node.next;
++
++ if (&rq->queue.heads[0] <= next && next < &rq->queue.heads[SCHED_LEVELS]) {
++ struct list_head *head;
++ unsigned long idx = next - &rq->queue.heads[0];
++
++ idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
++ sched_idx2prio(idx, rq) + 1);
++ head = &rq->queue.heads[sched_prio2idx(idx, rq)];
++
++ return list_first_entry(head, struct task_struct, sq_node);
++ }
++
++ return list_next_entry(p, sq_node);
++}
++
++extern void requeue_task(struct task_struct *p, struct rq *rq);
++
++#ifdef ALT_SCHED_DEBUG
++extern void alt_sched_debug(void);
++#else
++static inline void alt_sched_debug(void) {}
++#endif
++
++extern int sched_yield_type;
++
++#ifdef CONFIG_SMP
++extern cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
++
++DECLARE_STATIC_KEY_FALSE(sched_smt_present);
++DECLARE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
++
++extern cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
++
++extern cpumask_t *const sched_idle_mask;
++extern cpumask_t *const sched_sg_idle_mask;
++extern cpumask_t *const sched_pcore_idle_mask;
++extern cpumask_t *const sched_ecore_idle_mask;
++
++extern struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu);
++
++typedef bool (*idle_select_func_t)(struct cpumask *dstp, const struct cpumask *src1p,
++ const struct cpumask *src2p);
++
++extern idle_select_func_t idle_select_func;
++#endif
++
++/* balance callback */
++#ifdef CONFIG_SMP
++extern struct balance_callback *splice_balance_callbacks(struct rq *rq);
++extern void balance_callbacks(struct rq *rq, struct balance_callback *head);
++#else
++
++static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
++{
++ return NULL;
++}
++
++static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
++{
++}
++
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_CORE_H */
+diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
+new file mode 100644
+index 000000000000..1dbd7eb6a434
+--- /dev/null
++++ b/kernel/sched/alt_debug.c
+@@ -0,0 +1,32 @@
++/*
++ * kernel/sched/alt_debug.c
++ *
++ * Print the alt scheduler debugging details
++ *
++ * Author: Alfred Chen
++ * Date : 2020
++ */
++#include "sched.h"
++#include "linux/sched/debug.h"
++
++/*
++ * This allows printing both to /proc/sched_debug and
++ * to the console
++ */
++#define SEQ_printf(m, x...) \
++ do { \
++ if (m) \
++ seq_printf(m, x); \
++ else \
++ pr_cont(x); \
++ } while (0)
++
++void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
++ struct seq_file *m)
++{
++ SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
++ get_nr_threads(p));
++}
++
++void proc_sched_set_task(struct task_struct *p)
++{}
+diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
+new file mode 100644
+index 000000000000..09c9e9f80bf4
+--- /dev/null
++++ b/kernel/sched/alt_sched.h
+@@ -0,0 +1,971 @@
++#ifndef _KERNEL_SCHED_ALT_SCHED_H
++#define _KERNEL_SCHED_ALT_SCHED_H
++
++#include <linux/context_tracking.h>
++#include <linux/profile.h>
++#include <linux/stop_machine.h>
++#include <linux/syscalls.h>
++#include <linux/tick.h>
++
++#include <trace/events/power.h>
++#include <trace/events/sched.h>
++
++#include "../workqueue_internal.h"
++
++#include "cpupri.h"
++
++#ifdef CONFIG_CGROUP_SCHED
++/* task group related information */
++struct task_group {
++ struct cgroup_subsys_state css;
++
++ struct rcu_head rcu;
++ struct list_head list;
++
++ struct task_group *parent;
++ struct list_head siblings;
++ struct list_head children;
++};
++
++extern struct task_group *sched_create_group(struct task_group *parent);
++extern void sched_online_group(struct task_group *tg,
++ struct task_group *parent);
++extern void sched_destroy_group(struct task_group *tg);
++extern void sched_release_group(struct task_group *tg);
++#endif /* CONFIG_CGROUP_SCHED */
++
++#define MIN_SCHED_NORMAL_PRIO (32)
++/*
++ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
++ *
++ * -- BMQ --
++ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
++ * -- PDS --
++ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
++ */
++#define SCHED_LEVELS (64 + 1)
++
++#define IDLE_TASK_SCHED_PRIO (SCHED_LEVELS - 1)
++
++#ifdef CONFIG_SCHED_DEBUG
++# define SCHED_WARN_ON(x) WARN_ONCE(x, #x)
++extern void resched_latency_warn(int cpu, u64 latency);
++#else
++# define SCHED_WARN_ON(x) ({ (void)(x), 0; })
++static inline void resched_latency_warn(int cpu, u64 latency) {}
++#endif
++
++/*
++ * Increase resolution of nice-level calculations for 64-bit architectures.
++ * The extra resolution improves shares distribution and load balancing of
++ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
++ * hierarchies, especially on larger systems. This is not a user-visible change
++ * and does not change the user-interface for setting shares/weights.
++ *
++ * We increase resolution only if we have enough bits to allow this increased
++ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
++ * are pretty high and the returns do not justify the increased costs.
++ *
++ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
++ * increase coverage and consistency always enable it on 64-bit platforms.
++ */
++#ifdef CONFIG_64BIT
++# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++ unsigned long __w = (w); \
++ if (__w) \
++ __w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++ __w; \
++})
++#else
++# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT)
++# define scale_load(w) (w)
++# define scale_load_down(w) (w)
++#endif
++
++/*
++ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
++ */
++#ifdef CONFIG_SCHED_DEBUG
++# define const_debug __read_mostly
++#else
++# define const_debug const
++#endif
++
++/* task_struct::on_rq states: */
++#define TASK_ON_RQ_QUEUED 1
++#define TASK_ON_RQ_MIGRATING 2
++
++static inline int task_on_rq_queued(struct task_struct *p)
++{
++ return p->on_rq == TASK_ON_RQ_QUEUED;
++}
++
++static inline int task_on_rq_migrating(struct task_struct *p)
++{
++ return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
++}
++
++/* Wake flags. The first three directly map to some SD flag value */
++#define WF_EXEC 0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
++#define WF_FORK 0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
++#define WF_TTWU 0x08 /* Wakeup; maps to SD_BALANCE_WAKE */
++
++#define WF_SYNC 0x10 /* Waker goes to sleep after wakeup */
++#define WF_MIGRATED 0x20 /* Internal use, task got migrated */
++#define WF_CURRENT_CPU 0x40 /* Prefer to move the wakee to the current CPU. */
++
++#ifdef CONFIG_SMP
++static_assert(WF_EXEC == SD_BALANCE_EXEC);
++static_assert(WF_FORK == SD_BALANCE_FORK);
++static_assert(WF_TTWU == SD_BALANCE_WAKE);
++#endif
++
++#define SCHED_QUEUE_BITS (SCHED_LEVELS - 1)
++
++struct sched_queue {
++ DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
++ struct list_head heads[SCHED_LEVELS];
++};
++
++struct rq;
++struct cpuidle_state;
++
++struct balance_callback {
++ struct balance_callback *next;
++ void (*func)(struct rq *rq);
++};
++
++typedef void (*balance_func_t)(struct rq *rq, int cpu);
++typedef void (*set_idle_mask_func_t)(unsigned int cpu, struct cpumask *dstp);
++typedef void (*clear_idle_mask_func_t)(int cpu, struct cpumask *dstp);
++
++struct balance_arg {
++ struct task_struct *task;
++ int active;
++ cpumask_t *cpumask;
++};
++
++/*
++ * This is the main, per-CPU runqueue data structure.
++ * This data should only be modified by the local cpu.
++ */
++struct rq {
++ /* runqueue lock: */
++ raw_spinlock_t lock;
++
++ struct task_struct __rcu *curr;
++ struct task_struct *idle;
++ struct task_struct *stop;
++ struct mm_struct *prev_mm;
++
++ struct sched_queue queue ____cacheline_aligned;
++
++ int prio;
++#ifdef CONFIG_SCHED_PDS
++ int prio_idx;
++ u64 time_edge;
++#endif
++
++ /* switch count */
++ u64 nr_switches;
++
++ atomic_t nr_iowait;
++
++#ifdef CONFIG_SCHED_DEBUG
++ u64 last_seen_need_resched_ns;
++ int ticks_without_resched;
++#endif
++
++#ifdef CONFIG_MEMBARRIER
++ int membarrier_state;
++#endif
++
++ set_idle_mask_func_t set_idle_mask_func;
++ clear_idle_mask_func_t clear_idle_mask_func;
++
++#ifdef CONFIG_SMP
++ int cpu; /* cpu of this runqueue */
++ bool online;
++
++ unsigned int ttwu_pending;
++ unsigned char nohz_idle_balance;
++ unsigned char idle_balance;
++
++#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
++ struct sched_avg avg_irq;
++#endif
++
++ balance_func_t balance_func;
++ struct balance_arg active_balance_arg ____cacheline_aligned;
++ struct cpu_stop_work active_balance_work;
++
++ struct balance_callback *balance_callback;
++#ifdef CONFIG_HOTPLUG_CPU
++ struct rcuwait hotplug_wait;
++#endif
++ unsigned int nr_pinned;
++
++#endif /* CONFIG_SMP */
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++ u64 prev_irq_time;
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++#ifdef CONFIG_PARAVIRT
++ u64 prev_steal_time;
++#endif /* CONFIG_PARAVIRT */
++#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
++ u64 prev_steal_time_rq;
++#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
++
++	/* For general cpu load util */
++ s32 load_history;
++ u64 load_block;
++ u64 load_stamp;
++
++ /* calc_load related fields */
++ unsigned long calc_load_update;
++ long calc_load_active;
++
++ /* Ensure that all clocks are in the same cache line */
++ u64 clock ____cacheline_aligned;
++ u64 clock_task;
++
++ unsigned int nr_running;
++ unsigned long nr_uninterruptible;
++
++#ifdef CONFIG_SCHED_HRTICK
++#ifdef CONFIG_SMP
++ call_single_data_t hrtick_csd;
++#endif
++ struct hrtimer hrtick_timer;
++ ktime_t hrtick_time;
++#endif
++
++#ifdef CONFIG_SCHEDSTATS
++
++ /* latency stats */
++ struct sched_info rq_sched_info;
++ unsigned long long rq_cpu_time;
++ /* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
++
++ /* sys_sched_yield() stats */
++ unsigned int yld_count;
++
++ /* schedule() stats */
++ unsigned int sched_switch;
++ unsigned int sched_count;
++ unsigned int sched_goidle;
++
++ /* try_to_wake_up() stats */
++ unsigned int ttwu_count;
++ unsigned int ttwu_local;
++#endif /* CONFIG_SCHEDSTATS */
++
++#ifdef CONFIG_CPU_IDLE
++	/* Must be inspected within an RCU lock section */
++ struct cpuidle_state *idle_state;
++#endif
++
++#ifdef CONFIG_NO_HZ_COMMON
++#ifdef CONFIG_SMP
++ call_single_data_t nohz_csd;
++#endif
++ atomic_t nohz_flags;
++#endif /* CONFIG_NO_HZ_COMMON */
++
++ /* Scratch cpumask to be temporarily used under rq_lock */
++ cpumask_var_t scratch_mask;
++};
++
++extern unsigned int sysctl_sched_base_slice;
++
++extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
++
++extern unsigned long calc_load_update;
++extern atomic_long_t calc_load_tasks;
++
++extern void calc_global_load_tick(struct rq *this_rq);
++extern long calc_load_fold_active(struct rq *this_rq, long adjust);
++
++DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
++#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
++#define this_rq() this_cpu_ptr(&runqueues)
++#define task_rq(p) cpu_rq(task_cpu(p))
++#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
++#define raw_rq() raw_cpu_ptr(&runqueues)
++
++#ifdef CONFIG_SMP
++#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
++void register_sched_domain_sysctl(void);
++void unregister_sched_domain_sysctl(void);
++#else
++static inline void register_sched_domain_sysctl(void)
++{
++}
++static inline void unregister_sched_domain_sysctl(void)
++{
++}
++#endif
++
++extern bool sched_smp_initialized;
++
++enum {
++#ifdef CONFIG_SCHED_SMT
++ SMT_LEVEL_SPACE_HOLDER,
++#endif
++ COREGROUP_LEVEL_SPACE_HOLDER,
++ CORE_LEVEL_SPACE_HOLDER,
++ OTHER_LEVEL_SPACE_HOLDER,
++ NR_CPU_AFFINITY_LEVELS
++};
++
++DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
++
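++/*
++ * Walk the per-CPU topology mask array from the closest affinity level
++ * outwards and return the first cpu found in both masks.
++ */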
++static inline int
++__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
++{
++ int cpu;
++
++ while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
++ mask++;
++
++ return cpu;
++}
++
++static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
++{
++ return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
++}
++
++#endif
++
++#ifndef arch_scale_freq_tick
++static __always_inline
++void arch_scale_freq_tick(void)
++{
++}
++#endif
++
++#ifndef arch_scale_freq_capacity
++static __always_inline
++unsigned long arch_scale_freq_capacity(int cpu)
++{
++ return SCHED_CAPACITY_SCALE;
++}
++#endif
++
++static inline u64 __rq_clock_broken(struct rq *rq)
++{
++ return READ_ONCE(rq->clock);
++}
++
++static inline u64 rq_clock(struct rq *rq)
++{
++ /*
++	 * Relax lockdep_assert_held() checking as in VRQ, a call to
++	 * sched_info_xxxx() may not hold rq->lock
++ * lockdep_assert_held(&rq->lock);
++ */
++ return rq->clock;
++}
++
++static inline u64 rq_clock_task(struct rq *rq)
++{
++ /*
++	 * Relax lockdep_assert_held() checking as in VRQ, a call to
++	 * sched_info_xxxx() may not hold rq->lock
++ * lockdep_assert_held(&rq->lock);
++ */
++ return rq->clock_task;
++}
++
++/*
++ * {de,en}queue flags:
++ *
++ * DEQUEUE_SLEEP - task is no longer runnable
++ * ENQUEUE_WAKEUP - task just became runnable
++ *
++ */
++
++#define DEQUEUE_SLEEP 0x01
++
++#define ENQUEUE_WAKEUP 0x01
++
++
++/*
++ * Below are scheduler APIs which are used in other kernel code.
++ * They use the dummy rq_flags.
++ * ToDo : BMQ needs to support these APIs for compatibility with mainline
++ * scheduler code.
++ */
++struct rq_flags {
++ unsigned long flags;
++};
++
++struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(rq->lock);
++
++struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
++ __acquires(p->pi_lock)
++ __acquires(rq->lock);
++
++static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
++ __releases(rq->lock)
++ __releases(p->pi_lock)
++{
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
++}
++
++static inline void
++rq_lock(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock(&rq->lock);
++}
++
++static inline void
++rq_unlock(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock(&rq->lock);
++}
++
++static inline void
++rq_lock_irq(struct rq *rq, struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ raw_spin_lock_irq(&rq->lock);
++}
++
++static inline void
++rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
++ __releases(rq->lock)
++{
++ raw_spin_unlock_irq(&rq->lock);
++}
++
++static inline struct rq *
++this_rq_lock_irq(struct rq_flags *rf)
++ __acquires(rq->lock)
++{
++ struct rq *rq;
++
++ local_irq_disable();
++ rq = this_rq();
++ raw_spin_lock(&rq->lock);
++
++ return rq;
++}
++
++static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
++{
++ return &rq->lock;
++}
++
++static inline raw_spinlock_t *rq_lockp(struct rq *rq)
++{
++ return __rq_lockp(rq);
++}
++
++static inline void lockdep_assert_rq_held(struct rq *rq)
++{
++ lockdep_assert_held(__rq_lockp(rq));
++}
++
++extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
++extern void raw_spin_rq_unlock(struct rq *rq);
++
++static inline void raw_spin_rq_lock(struct rq *rq)
++{
++ raw_spin_rq_lock_nested(rq, 0);
++}
++
++static inline void raw_spin_rq_lock_irq(struct rq *rq)
++{
++ local_irq_disable();
++ raw_spin_rq_lock(rq);
++}
++
++static inline void raw_spin_rq_unlock_irq(struct rq *rq)
++{
++ raw_spin_rq_unlock(rq);
++ local_irq_enable();
++}
++
++static inline int task_current(struct rq *rq, struct task_struct *p)
++{
++ return rq->curr == p;
++}
++
++static inline bool task_on_cpu(struct task_struct *p)
++{
++ return p->on_cpu;
++}
++
++extern struct static_key_false sched_schedstats;
++
++#ifdef CONFIG_CPU_IDLE
++static inline void idle_set_state(struct rq *rq,
++ struct cpuidle_state *idle_state)
++{
++ rq->idle_state = idle_state;
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++ WARN_ON(!rcu_read_lock_held());
++ return rq->idle_state;
++}
++#else
++static inline void idle_set_state(struct rq *rq,
++ struct cpuidle_state *idle_state)
++{
++}
++
++static inline struct cpuidle_state *idle_get_state(struct rq *rq)
++{
++ return NULL;
++}
++#endif
++
++static inline int cpu_of(const struct rq *rq)
++{
++#ifdef CONFIG_SMP
++ return rq->cpu;
++#else
++ return 0;
++#endif
++}
++
++extern void resched_cpu(int cpu);
++
++#include "stats.h"
++
++#ifdef CONFIG_NO_HZ_COMMON
++#define NOHZ_BALANCE_KICK_BIT 0
++#define NOHZ_STATS_KICK_BIT 1
++
++#define NOHZ_BALANCE_KICK BIT(NOHZ_BALANCE_KICK_BIT)
++#define NOHZ_STATS_KICK BIT(NOHZ_STATS_KICK_BIT)
++
++#define NOHZ_KICK_MASK (NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
++
++#define nohz_flags(cpu) (&cpu_rq(cpu)->nohz_flags)
++
++/* TODO: needed?
++extern void nohz_balance_exit_idle(struct rq *rq);
++#else
++static inline void nohz_balance_exit_idle(struct rq *rq) { }
++*/
++#endif
++
++#ifdef CONFIG_IRQ_TIME_ACCOUNTING
++struct irqtime {
++ u64 total;
++ u64 tick_delta;
++ u64 irq_start_time;
++ struct u64_stats_sync sync;
++};
++
++DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
++
++/*
++ * Returns the irqtime minus the softirq time computed by ksoftirqd.
++ * Otherwise ksoftirqd's own runtime would be subtracted from its
++ * sum_exec_runtime and it would never move forward.
++ */
++static inline u64 irq_time_read(int cpu)
++{
++ struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
++ unsigned int seq;
++ u64 total;
++
++ do {
++ seq = __u64_stats_fetch_begin(&irqtime->sync);
++ total = irqtime->total;
++ } while (__u64_stats_fetch_retry(&irqtime->sync, seq));
++
++ return total;
++}
++#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
++
++#ifdef CONFIG_CPU_FREQ
++DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
++#endif /* CONFIG_CPU_FREQ */
++
++#ifdef CONFIG_NO_HZ_FULL
++extern int __init sched_tick_offload_init(void);
++#else
++static inline int sched_tick_offload_init(void) { return 0; }
++#endif
++
++#ifdef arch_scale_freq_capacity
++#ifndef arch_scale_freq_invariant
++#define arch_scale_freq_invariant() (true)
++#endif
++#else /* arch_scale_freq_capacity */
++#define arch_scale_freq_invariant() (false)
++#endif
++
++#ifdef CONFIG_SMP
++unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
++ unsigned long min,
++ unsigned long max);
++#endif /* CONFIG_SMP */
++
++extern void schedule_idle(void);
++
++#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
++
++/*
++ * !! For sched_setattr_nocheck() (kernel) only !!
++ *
++ * This is actually gross. :(
++ *
++ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
++ * tasks, but still be able to sleep. We need this on platforms that cannot
++ * atomically change clock frequency. Remove once fast switching is
++ * available on such platforms.
++ *
++ * SUGOV stands for SchedUtil GOVernor.
++ */
++#define SCHED_FLAG_SUGOV 0x10000000
++
++#ifdef CONFIG_MEMBARRIER
++/*
++ * The scheduler provides memory barriers required by membarrier between:
++ * - prior user-space memory accesses and store to rq->membarrier_state,
++ * - store to rq->membarrier_state and following user-space memory accesses.
++ * In the same way it provides those guarantees around store to rq->curr.
++ */
++static inline void membarrier_switch_mm(struct rq *rq,
++ struct mm_struct *prev_mm,
++ struct mm_struct *next_mm)
++{
++ int membarrier_state;
++
++ if (prev_mm == next_mm)
++ return;
++
++ membarrier_state = atomic_read(&next_mm->membarrier_state);
++ if (READ_ONCE(rq->membarrier_state) == membarrier_state)
++ return;
++
++ WRITE_ONCE(rq->membarrier_state, membarrier_state);
++}
++#else
++static inline void membarrier_switch_mm(struct rq *rq,
++ struct mm_struct *prev_mm,
++ struct mm_struct *next_mm)
++{
++}
++#endif
++
++#ifdef CONFIG_NUMA
++extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
++#else
++static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++ return nr_cpu_ids;
++}
++#endif
++
++extern void swake_up_all_locked(struct swait_queue_head *q);
++extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
++
++extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
++
++#ifdef CONFIG_PREEMPT_DYNAMIC
++extern int preempt_dynamic_mode;
++extern int sched_dynamic_mode(const char *str);
++extern void sched_dynamic_update(int mode);
++#endif
++
++static inline void nohz_run_idle_balance(int cpu) { }
++
++static inline
++unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
++ struct task_struct *p)
++{
++ return util;
++}
++
++static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
++
++#ifdef CONFIG_SCHED_MM_CID
++
++#define SCHED_MM_CID_PERIOD_NS (100ULL * 1000000) /* 100ms */
++#define MM_CID_SCAN_DELAY 100 /* 100ms */
++
++extern raw_spinlock_t cid_lock;
++extern int use_cid_lock;
++
++extern void sched_mm_cid_migrate_from(struct task_struct *t);
++extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
++extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
++extern void init_sched_mm_cid(struct task_struct *t);
++
++static inline void __mm_cid_put(struct mm_struct *mm, int cid)
++{
++ if (cid < 0)
++ return;
++ cpumask_clear_cpu(cid, mm_cidmask(mm));
++}
++
++/*
++ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
++ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
++ * be held to transition to other states.
++ *
++ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
++ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
++ */
++static inline void mm_cid_put_lazy(struct task_struct *t)
++{
++ struct mm_struct *mm = t->mm;
++ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++ int cid;
++
++ lockdep_assert_irqs_disabled();
++ cid = __this_cpu_read(pcpu_cid->cid);
++ if (!mm_cid_is_lazy_put(cid) ||
++ !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++ return;
++ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
++static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
++{
++ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++ int cid, res;
++
++ lockdep_assert_irqs_disabled();
++ cid = __this_cpu_read(pcpu_cid->cid);
++ for (;;) {
++ if (mm_cid_is_unset(cid))
++ return MM_CID_UNSET;
++ /*
++ * Attempt transition from valid or lazy-put to unset.
++ */
++ res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
++ if (res == cid)
++ break;
++ cid = res;
++ }
++ return cid;
++}
++
++static inline void mm_cid_put(struct mm_struct *mm)
++{
++ int cid;
++
++ lockdep_assert_irqs_disabled();
++ cid = mm_cid_pcpu_unset(mm);
++ if (cid == MM_CID_UNSET)
++ return;
++ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++}
++
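++/*
++ * Try to claim the first free cid: find the first zero bit in the mm
++ * cidmask and atomically test-and-set it. Returns -1 when another cpu
++ * wins the race for that bit.
++ */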
++static inline int __mm_cid_try_get(struct mm_struct *mm)
++{
++ struct cpumask *cpumask;
++ int cid;
++
++ cpumask = mm_cidmask(mm);
++ /*
++ * Retry finding first zero bit if the mask is temporarily
++ * filled. This only happens during concurrent remote-clear
++ * which owns a cid without holding a rq lock.
++ */
++ for (;;) {
++ cid = cpumask_first_zero(cpumask);
++ if (cid < nr_cpu_ids)
++ break;
++ cpu_relax();
++ }
++ if (cpumask_test_and_set_cpu(cid, cpumask))
++ return -1;
++ return cid;
++}
++
++/*
++ * Save a snapshot of the current runqueue time of this cpu
++ * with the per-cpu cid value, which allows estimating how recently it was used.
++ */
++static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
++{
++ struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
++
++ lockdep_assert_rq_held(rq);
++ WRITE_ONCE(pcpu_cid->time, rq->clock);
++}
++
++static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++ int cid;
++
++ /*
++ * All allocations (even those using the cid_lock) are lock-free. If
++ * use_cid_lock is set, hold the cid_lock to perform cid allocation to
++ * guarantee forward progress.
++ */
++ if (!READ_ONCE(use_cid_lock)) {
++ cid = __mm_cid_try_get(mm);
++ if (cid >= 0)
++ goto end;
++ raw_spin_lock(&cid_lock);
++ } else {
++ raw_spin_lock(&cid_lock);
++ cid = __mm_cid_try_get(mm);
++ if (cid >= 0)
++ goto unlock;
++ }
++
++ /*
++ * cid concurrently allocated. Retry while forcing following
++ * allocations to use the cid_lock to ensure forward progress.
++ */
++ WRITE_ONCE(use_cid_lock, 1);
++ /*
++ * Set use_cid_lock before allocation. Only care about program order
++ * because this is only required for forward progress.
++ */
++ barrier();
++ /*
++ * Retry until it succeeds. It is guaranteed to eventually succeed once
++	 * all incoming allocations observe the use_cid_lock flag set.
++ */
++ do {
++ cid = __mm_cid_try_get(mm);
++ cpu_relax();
++ } while (cid < 0);
++ /*
++ * Allocate before clearing use_cid_lock. Only care about
++ * program order because this is for forward progress.
++ */
++ barrier();
++ WRITE_ONCE(use_cid_lock, 0);
++unlock:
++ raw_spin_unlock(&cid_lock);
++end:
++ mm_cid_snapshot_time(rq, mm);
++ return cid;
++}
++
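++/*
++ * Fast path: reuse this cpu's cid while it is still valid. A pending
++ * lazy-put is resolved first; otherwise fall back to allocating a
++ * fresh cid via __mm_cid_get().
++ */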
++static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
++{
++ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
++ struct cpumask *cpumask;
++ int cid;
++
++ lockdep_assert_rq_held(rq);
++ cpumask = mm_cidmask(mm);
++ cid = __this_cpu_read(pcpu_cid->cid);
++ if (mm_cid_is_valid(cid)) {
++ mm_cid_snapshot_time(rq, mm);
++ return cid;
++ }
++ if (mm_cid_is_lazy_put(cid)) {
++ if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
++ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
++ }
++ cid = __mm_cid_get(rq, mm);
++ __this_cpu_write(pcpu_cid->cid, cid);
++ return cid;
++}
++
++static inline void switch_mm_cid(struct rq *rq,
++ struct task_struct *prev,
++ struct task_struct *next)
++{
++ /*
++ * Provide a memory barrier between rq->curr store and load of
++ * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
++ *
++ * Should be adapted if context_switch() is modified.
++ */
++ if (!next->mm) { // to kernel
++ /*
++ * user -> kernel transition does not guarantee a barrier, but
++ * we can use the fact that it performs an atomic operation in
++ * mmgrab().
++ */
++ if (prev->mm) // from user
++ smp_mb__after_mmgrab();
++ /*
++ * kernel -> kernel transition does not change rq->curr->mm
++ * state. It stays NULL.
++ */
++ } else { // to user
++ /*
++ * kernel -> user transition does not provide a barrier
++ * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
++ * Provide it here.
++ */
++ if (!prev->mm) // from kernel
++ smp_mb();
++ /*
++ * user -> user transition guarantees a memory barrier through
++ * switch_mm() when current->mm changes. If current->mm is
++ * unchanged, no barrier is needed.
++ */
++ }
++ if (prev->mm_cid_active) {
++ mm_cid_snapshot_time(rq, prev->mm);
++ mm_cid_put_lazy(prev);
++ prev->mm_cid = -1;
++ }
++ if (next->mm_cid_active)
++ next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next->mm);
++}
++
++#else
++static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
++static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
++static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
++static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
++static inline void init_sched_mm_cid(struct task_struct *t) { }
++#endif
++
++#ifdef CONFIG_SMP
++extern struct balance_callback balance_push_callback;
++
++static inline void
++queue_balance_callback(struct rq *rq,
++ struct balance_callback *head,
++ void (*func)(struct rq *rq))
++{
++ lockdep_assert_rq_held(rq);
++
++ /*
++ * Don't (re)queue an already queued item; nor queue anything when
++ * balance_push() is active, see the comment with
++ * balance_push_callback.
++ */
++ if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
++ return;
++
++ head->func = func;
++ head->next = rq->balance_callback;
++ rq->balance_callback = head;
++}
++#endif /* CONFIG_SMP */
++
++#ifdef CONFIG_SCHED_BMQ
++#include "bmq.h"
++#endif
++#ifdef CONFIG_SCHED_PDS
++#include "pds.h"
++#endif
++
++#endif /* _KERNEL_SCHED_ALT_SCHED_H */
+diff --git a/kernel/sched/alt_topology.c b/kernel/sched/alt_topology.c
+new file mode 100644
+index 000000000000..2266138ee783
+--- /dev/null
++++ b/kernel/sched/alt_topology.c
+@@ -0,0 +1,350 @@
++#include "alt_core.h"
++#include "alt_topology.h"
++
++#ifdef CONFIG_SMP
++
++static cpumask_t sched_pcore_mask ____cacheline_aligned_in_smp;
++
++static int __init sched_pcore_mask_setup(char *str)
++{
++ if (cpulist_parse(str, &sched_pcore_mask))
++ pr_warn("sched/alt: pcore_cpus= incorrect CPU range\n");
++
++ return 0;
++}
++__setup("pcore_cpus=", sched_pcore_mask_setup);
++
++/*
++ * set/clear idle mask functions
++ */
++#ifdef CONFIG_SCHED_SMT
++static void set_idle_mask_smt(unsigned int cpu, struct cpumask *dstp)
++{
++ cpumask_set_cpu(cpu, dstp);
++ if (cpumask_subset(cpu_smt_mask(cpu), sched_idle_mask))
++ cpumask_or(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++
++static void clear_idle_mask_smt(int cpu, struct cpumask *dstp)
++{
++ cpumask_clear_cpu(cpu, dstp);
++ cpumask_andnot(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
++}
++#endif
++
++static void set_idle_mask_pcore(unsigned int cpu, struct cpumask *dstp)
++{
++ cpumask_set_cpu(cpu, dstp);
++ cpumask_set_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void clear_idle_mask_pcore(int cpu, struct cpumask *dstp)
++{
++ cpumask_clear_cpu(cpu, dstp);
++ cpumask_clear_cpu(cpu, sched_pcore_idle_mask);
++}
++
++static void set_idle_mask_ecore(unsigned int cpu, struct cpumask *dstp)
++{
++ cpumask_set_cpu(cpu, dstp);
++ cpumask_set_cpu(cpu, sched_ecore_idle_mask);
++}
++
++static void clear_idle_mask_ecore(int cpu, struct cpumask *dstp)
++{
++ cpumask_clear_cpu(cpu, dstp);
++ cpumask_clear_cpu(cpu, sched_ecore_idle_mask);
++}
++
++/*
++ * Idle cpu/rq selection functions
++ */
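++/*
++ * Note: these appear to rely on the idle masks being laid out
++ * consecutively in memory, so that src2p + 1 (and src2p + 2) address the
++ * more-preferred masks, which are tried before src2p itself.
++ */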
++#ifdef CONFIG_SCHED_SMT
++static bool p1_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++ const struct cpumask *src2p)
++{
++ return cpumask_and(dstp, src1p, src2p + 1) ||
++ cpumask_and(dstp, src1p, src2p);
++}
++#endif
++
++static bool p1p2_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
++ const struct cpumask *src2p)
++{
++ return cpumask_and(dstp, src1p, src2p + 1) ||
++ cpumask_and(dstp, src1p, src2p + 2) ||
++ cpumask_and(dstp, src1p, src2p);
++}
++
++/* common balance functions */
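++/*
++ * cpu stopper callback: with pi_lock and rq->lock held, re-check that
++ * the task is still queued here and allowed to move, then migrate it to
++ * a cpu in the target mask, preferring one topologically close to this rq.
++ */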
++static int active_balance_cpu_stop(void *data)
++{
++ struct balance_arg *arg = data;
++ struct task_struct *p = arg->task;
++ struct rq *rq = this_rq();
++ unsigned long flags;
++ cpumask_t tmp;
++
++ local_irq_save(flags);
++
++ raw_spin_lock(&p->pi_lock);
++ raw_spin_lock(&rq->lock);
++
++ arg->active = 0;
++
++ if (task_on_rq_queued(p) && task_rq(p) == rq &&
++ cpumask_and(&tmp, p->cpus_ptr, arg->cpumask) &&
++ !is_migration_disabled(p)) {
++ int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu_of(rq)));
++ rq = move_queued_task(rq, p, dcpu);
++ }
++
++ raw_spin_unlock(&rq->lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ return 0;
++}
++
++/* trigger_active_balance - for @rq */
++static inline int
++trigger_active_balance(struct rq *src_rq, struct rq *rq, cpumask_t *target_mask)
++{
++ struct balance_arg *arg;
++ unsigned long flags;
++ struct task_struct *p;
++ int res;
++
++ if (!raw_spin_trylock_irqsave(&rq->lock, flags))
++ return 0;
++
++ arg = &rq->active_balance_arg;
++	res = (1 == rq->nr_running) &&
++	      !is_migration_disabled((p = sched_rq_first_task(rq))) &&
++	      cpumask_intersects(p->cpus_ptr, target_mask) &&
++	      !arg->active;
++ if (res) {
++ arg->task = p;
++ arg->cpumask = target_mask;
++
++ arg->active = 1;
++ }
++
++ raw_spin_unlock_irqrestore(&rq->lock, flags);
++
++ if (res) {
++ preempt_disable();
++ raw_spin_unlock(&src_rq->lock);
++
++ stop_one_cpu_nowait(cpu_of(rq), active_balance_cpu_stop, arg,
++ &rq->active_balance_work);
++
++ preempt_enable();
++ raw_spin_lock(&src_rq->lock);
++ }
++
++ return res;
++}
++
++static inline int
++ecore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++ if (cpumask_andnot(single_task_mask, single_task_mask, &sched_pcore_mask)) {
++ int i, cpu = cpu_of(rq);
++
++ for_each_cpu_wrap(i, single_task_mask, cpu)
++ if (trigger_active_balance(rq, cpu_rq(i), target_mask))
++ return 1;
++ }
++
++ return 0;
++}
++
++static DEFINE_PER_CPU(struct balance_callback, active_balance_head);
++
++#ifdef CONFIG_SCHED_SMT
++static inline int
++smt_pcore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
++{
++ cpumask_t smt_single_mask;
++
++ if (cpumask_and(&smt_single_mask, single_task_mask, &sched_smt_mask)) {
++ int i, cpu = cpu_of(rq);
++
++ for_each_cpu_wrap(i, &smt_single_mask, cpu) {
++ if (cpumask_subset(cpu_smt_mask(i), &smt_single_mask) &&
++ trigger_active_balance(rq, cpu_rq(i), target_mask))
++ return 1;
++ }
++ }
++
++ return 0;
++}
++
++/* smt p core balance functions */
++static inline void smt_pcore_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ (/* smt core group balance */
++ (static_key_count(&sched_smt_present.key) > 1 &&
++ smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)
++ ) ||
++ /* e core to idle smt core balance */
++ ecore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)))
++ return;
++}
++
++static void smt_pcore_balance_func(struct rq *rq, const int cpu)
++{
++ if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_pcore_balance);
++}
++
++/* smt balance functions */
++static inline void smt_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ static_key_count(&sched_smt_present.key) > 1 &&
++ smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask))
++ return;
++}
++
++static void smt_balance_func(struct rq *rq, const int cpu)
++{
++ if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_balance);
++}
++
++/* e core balance functions */
++static inline void ecore_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ /* smt occupied p core to idle e core balance */
++ smt_pcore_source_balance(rq, &single_task_mask, sched_ecore_idle_mask))
++ return;
++}
++
++static void ecore_balance_func(struct rq *rq, const int cpu)
++{
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), ecore_balance);
++}
++#endif /* CONFIG_SCHED_SMT */
++
++/* p core balance functions */
++static inline void pcore_balance(struct rq *rq)
++{
++ cpumask_t single_task_mask;
++
++ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
++ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
++ /* idle e core to p core balance */
++ ecore_source_balance(rq, &single_task_mask, sched_pcore_idle_mask))
++ return;
++}
++
++static void pcore_balance_func(struct rq *rq, const int cpu)
++{
++ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), pcore_balance);
++}
++
++#ifdef ALT_SCHED_DEBUG
++#define SCHED_DEBUG_INFO(...) printk(KERN_INFO __VA_ARGS__)
++#else
++#define SCHED_DEBUG_INFO(...) do { } while(0)
++#endif
++
++#define SET_IDLE_SELECT_FUNC(func) \
++{ \
++ idle_select_func = func; \
++ printk(KERN_INFO "sched: "#func); \
++}
++
++#define SET_RQ_BALANCE_FUNC(rq, cpu, func) \
++{ \
++ rq->balance_func = func; \
++ SCHED_DEBUG_INFO("sched: cpu#%02d -> "#func, cpu); \
++}
++
++#define SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_func, clear_func) \
++{ \
++ rq->set_idle_mask_func = set_func; \
++ rq->clear_idle_mask_func = clear_func; \
++ SCHED_DEBUG_INFO("sched: cpu#%02d -> "#set_func" "#clear_func, cpu); \
++}
++
++void sched_init_topology(void)
++{
++ int cpu;
++ struct rq *rq;
++ cpumask_t sched_ecore_mask = { CPU_BITS_NONE };
++ int ecore_present = 0;
++
++#ifdef CONFIG_SCHED_SMT
++ if (!cpumask_empty(&sched_smt_mask))
++ printk(KERN_INFO "sched: smt mask: 0x%08lx\n", sched_smt_mask.bits[0]);
++#endif
++
++ if (!cpumask_empty(&sched_pcore_mask)) {
++ cpumask_andnot(&sched_ecore_mask, cpu_online_mask, &sched_pcore_mask);
++ printk(KERN_INFO "sched: pcore mask: 0x%08lx, ecore mask: 0x%08lx\n",
++ sched_pcore_mask.bits[0], sched_ecore_mask.bits[0]);
++
++ ecore_present = !cpumask_empty(&sched_ecore_mask);
++ }
++
++#ifdef CONFIG_SCHED_SMT
++ /* idle select function */
++ if (cpumask_equal(&sched_smt_mask, cpu_online_mask)) {
++ SET_IDLE_SELECT_FUNC(p1_idle_select_func);
++ } else
++#endif
++ if (!cpumask_empty(&sched_pcore_mask)) {
++ SET_IDLE_SELECT_FUNC(p1p2_idle_select_func);
++ }
++
++ for_each_online_cpu(cpu) {
++ rq = cpu_rq(cpu);
++ /* take chance to reset time slice for idle tasks */
++ rq->idle->time_slice = sysctl_sched_base_slice;
++
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_weight(cpu_smt_mask(cpu)) > 1) {
++ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_smt, clear_idle_mask_smt);
++
++ if (cpumask_test_cpu(cpu, &sched_pcore_mask) &&
++ !cpumask_intersects(&sched_ecore_mask, &sched_smt_mask)) {
++ SET_RQ_BALANCE_FUNC(rq, cpu, smt_pcore_balance_func);
++ } else {
++ SET_RQ_BALANCE_FUNC(rq, cpu, smt_balance_func);
++ }
++
++ continue;
++ }
++#endif
++ /* !SMT or only one cpu in sg */
++ if (cpumask_test_cpu(cpu, &sched_pcore_mask)) {
++ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_pcore, clear_idle_mask_pcore);
++
++ if (ecore_present)
++ SET_RQ_BALANCE_FUNC(rq, cpu, pcore_balance_func);
++
++ continue;
++ }
++ if (cpumask_test_cpu(cpu, &sched_ecore_mask)) {
++ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_ecore, clear_idle_mask_ecore);
++#ifdef CONFIG_SCHED_SMT
++ if (cpumask_intersects(&sched_pcore_mask, &sched_smt_mask))
++ SET_RQ_BALANCE_FUNC(rq, cpu, ecore_balance_func);
++#endif
++ }
++ }
++}
++#endif /* CONFIG_SMP */
+diff --git a/kernel/sched/alt_topology.h b/kernel/sched/alt_topology.h
+new file mode 100644
+index 000000000000..076174cd2bc6
+--- /dev/null
++++ b/kernel/sched/alt_topology.h
+@@ -0,0 +1,6 @@
++#ifndef _KERNEL_SCHED_ALT_TOPOLOGY_H
++#define _KERNEL_SCHED_ALT_TOPOLOGY_H
++
++extern void sched_init_topology(void);
++
++#endif /* _KERNEL_SCHED_ALT_TOPOLOGY_H */
+diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
+new file mode 100644
+index 000000000000..5a7835246ec3
+--- /dev/null
++++ b/kernel/sched/bmq.h
+@@ -0,0 +1,103 @@
++#ifndef _KERNEL_SCHED_BMQ_H
++#define _KERNEL_SCHED_BMQ_H
++
++#define ALT_SCHED_NAME "BMQ"
++
++/*
++ * BMQ only routines
++ */
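++/*
++ * A lower boost_prio means a higher effective priority: boost_task()
++ * lowers it by @n, clamped per policy, while deboost_task() raises it
++ * by one step on each time slice renewal.
++ */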
++static inline void boost_task(struct task_struct *p, int n)
++{
++ int limit;
++
++ switch (p->policy) {
++ case SCHED_NORMAL:
++ limit = -MAX_PRIORITY_ADJ;
++ break;
++ case SCHED_BATCH:
++ limit = 0;
++ break;
++ default:
++ return;
++ }
++
++ p->boost_prio = max(limit, p->boost_prio - n);
++}
++
++static inline void deboost_task(struct task_struct *p)
++{
++ if (p->boost_prio < MAX_PRIORITY_ADJ)
++ p->boost_prio++;
++}
++
++/*
++ * Common interfaces
++ */
++static inline void sched_timeslice_imp(const int timeslice_ms) {}
++
++/* This API is used in task_prio(); its return value is read by human users */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++ return p->prio + p->boost_prio - MIN_NORMAL_PRIO;
++}
++
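++/*
++ * Map a task to its run queue level: RT priorities are compressed into
++ * levels 0-24 (prio >> 2); normal tasks land in the NORMAL range, with
++ * two (prio + boost) steps sharing each queue level.
++ */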
++static inline int task_sched_prio(const struct task_struct *p)
++{
++ return (p->prio < MIN_NORMAL_PRIO)? (p->prio >> 2) :
++ MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MIN_NORMAL_PRIO) / 2;
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio) \
++ prio = task_sched_prio(p); \
++ idx = prio;
++
++static inline int sched_prio2idx(int prio, struct rq *rq)
++{
++ return prio;
++}
++
++static inline int sched_idx2prio(int idx, struct rq *rq)
++{
++ return idx;
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++ return rq->prio;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++ return (p->prio + p->boost_prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq) {}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++ deboost_task(p);
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq) {}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++ p->boost_prio = MAX_PRIORITY_ADJ;
++}
++
++static inline void sched_task_ttwu(struct task_struct *p)
++{
++	s64 delta = this_rq()->clock_task - p->last_ran;
++
++ if (likely(delta > 0))
++ boost_task(p, delta >> 22);
++}
++
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
++{
++ boost_task(p, 1);
++}
++
++#endif /* _KERNEL_SCHED_BMQ_H */
+diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
+index fae1f5c921eb..1e06434b5b9b 100644
+--- a/kernel/sched/build_policy.c
++++ b/kernel/sched/build_policy.c
+@@ -49,15 +49,21 @@
+
+ #include "idle.c"
+
++#ifndef CONFIG_SCHED_ALT
+ #include "rt.c"
++#endif
+
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ # include "cpudeadline.c"
++#endif
+ # include "pelt.c"
+ #endif
+
+ #include "cputime.c"
++#ifndef CONFIG_SCHED_ALT
+ #include "deadline.c"
++#endif
+
+ #ifdef CONFIG_SCHED_CLASS_EXT
+ # include "ext.c"
+diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
+index 80a3df49ab47..58d04aa73634 100644
+--- a/kernel/sched/build_utility.c
++++ b/kernel/sched/build_utility.c
+@@ -56,6 +56,10 @@
+
+ #include "clock.c"
+
++#ifdef CONFIG_SCHED_ALT
++# include "alt_topology.c"
++#endif
++
+ #ifdef CONFIG_CGROUP_CPUACCT
+ # include "cpuacct.c"
+ #endif
+@@ -84,7 +88,9 @@
+
+ #ifdef CONFIG_SMP
+ # include "cpupri.c"
++#ifndef CONFIG_SCHED_ALT
+ # include "stop_task.c"
++#endif
+ # include "topology.c"
+ #endif
+
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index c6ba15388ea7..56590821f074 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -197,6 +197,7 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
+
+ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ {
++#ifndef CONFIG_SCHED_ALT
+ unsigned long min, max, util = scx_cpuperf_target(sg_cpu->cpu);
+
+ if (!scx_switched_all())
+@@ -205,6 +206,10 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
+ util = max(util, boost);
+ sg_cpu->bw_min = min;
+ sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
++#else /* CONFIG_SCHED_ALT */
++ sg_cpu->bw_min = 0;
++ sg_cpu->util = rq_load_util(cpu_rq(sg_cpu->cpu), arch_scale_cpu_capacity(sg_cpu->cpu));
++#endif /* CONFIG_SCHED_ALT */
+ }
+
+ /**
+@@ -364,8 +369,10 @@ static inline bool sugov_hold_freq(struct sugov_cpu *sg_cpu) { return false; }
+ */
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
++#ifndef CONFIG_SCHED_ALT
+ if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
+ sg_cpu->sg_policy->limits_changed = true;
++#endif
+ }
+
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -684,6 +691,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
+ }
+
+ ret = sched_setattr_nocheck(thread, &attr);
++
+ if (ret) {
+ kthread_stop(thread);
+ pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index 0bed0fa1acd9..031affa09446 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -126,7 +126,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
+ p->utime += cputime;
+ account_group_user_time(p, cputime);
+
+- index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
++ index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
+
+ /* Add user time to cpustat. */
+ task_group_account_field(p, index, cputime);
+@@ -150,7 +150,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
+ p->gtime += cputime;
+
+ /* Add guest time to cpustat. */
+- if (task_nice(p) > 0) {
++ if (task_running_nice(p)) {
+ task_group_account_field(p, CPUTIME_NICE, cputime);
+ cpustat[CPUTIME_GUEST_NICE] += cputime;
+ } else {
+@@ -288,7 +288,7 @@ static inline u64 account_other_time(u64 max)
+ #ifdef CONFIG_64BIT
+ static inline u64 read_sum_exec_runtime(struct task_struct *t)
+ {
+- return t->se.sum_exec_runtime;
++ return tsk_seruntime(t);
+ }
+ #else
+ static u64 read_sum_exec_runtime(struct task_struct *t)
+@@ -298,7 +298,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
+ struct rq *rq;
+
+ rq = task_rq_lock(t, &rf);
+- ns = t->se.sum_exec_runtime;
++ ns = tsk_seruntime(t);
+ task_rq_unlock(rq, t, &rf);
+
+ return ns;
+@@ -623,7 +623,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
+ void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
+ {
+ struct task_cputime cputime = {
+- .sum_exec_runtime = p->se.sum_exec_runtime,
++ .sum_exec_runtime = tsk_seruntime(p),
+ };
+
+ if (task_cputime(p, &cputime.utime, &cputime.stime))
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index f4035c7a0fa1..4df4ad88d6a9 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -7,6 +7,7 @@
+ * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
+ */
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * This allows printing both to /sys/kernel/debug/sched/debug and
+ * to the console
+@@ -215,6 +216,7 @@ static const struct file_operations sched_scaling_fops = {
+ };
+
+ #endif /* SMP */
++#endif /* !CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+
+@@ -278,6 +280,7 @@ static const struct file_operations sched_dynamic_fops = {
+
+ #endif /* CONFIG_PREEMPT_DYNAMIC */
+
++#ifndef CONFIG_SCHED_ALT
+ __read_mostly bool sched_debug_verbose;
+
+ #ifdef CONFIG_SMP
+@@ -468,9 +471,11 @@ static const struct file_operations fair_server_period_fops = {
+ .llseek = seq_lseek,
+ .release = single_release,
+ };
++#endif /* !CONFIG_SCHED_ALT */
+
+ static struct dentry *debugfs_sched;
+
++#ifndef CONFIG_SCHED_ALT
+ static void debugfs_fair_server_init(void)
+ {
+ struct dentry *d_fair;
+@@ -491,6 +496,7 @@ static void debugfs_fair_server_init(void)
+ debugfs_create_file("period", 0644, d_cpu, (void *) cpu, &fair_server_period_fops);
+ }
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ static __init int sched_init_debug(void)
+ {
+@@ -498,14 +504,17 @@ static __init int sched_init_debug(void)
+
+ debugfs_sched = debugfs_create_dir("sched", NULL);
+
++#ifndef CONFIG_SCHED_ALT
+ debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
+ debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
++#endif /* !CONFIG_SCHED_ALT */
+ #ifdef CONFIG_PREEMPT_DYNAMIC
+ debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
+ #endif
+
+ debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
+
++#ifndef CONFIG_SCHED_ALT
+ debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
+ debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
+
+@@ -530,13 +539,17 @@ static __init int sched_init_debug(void)
+ #endif
+
+ debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
++#endif /* !CONFIG_SCHED_ALT */
+
++#ifndef CONFIG_SCHED_ALT
+ debugfs_fair_server_init();
++#endif /* !CONFIG_SCHED_ALT */
+
+ return 0;
+ }
+ late_initcall(sched_init_debug);
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+
+ static cpumask_var_t sd_sysctl_cpus;
+@@ -1288,6 +1301,7 @@ void proc_sched_set_task(struct task_struct *p)
+ memset(&p->stats, 0, sizeof(p->stats));
+ #endif
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ void resched_latency_warn(int cpu, u64 latency)
+ {
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index d2f096bb274c..36071f4b7b7f 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -424,6 +424,7 @@ void cpu_startup_entry(enum cpuhp_state state)
+ do_idle();
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * idle-task scheduling class.
+ */
+@@ -538,3 +539,4 @@ DEFINE_SCHED_CLASS(idle) = {
+ .switched_to = switched_to_idle,
+ .update_curr = update_curr_idle,
+ };
++#endif
+diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
+new file mode 100644
+index 000000000000..fe3099071eb7
+--- /dev/null
++++ b/kernel/sched/pds.h
+@@ -0,0 +1,139 @@
++#ifndef _KERNEL_SCHED_PDS_H
++#define _KERNEL_SCHED_PDS_H
++
++#define ALT_SCHED_NAME "PDS"
++
++static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
++
++#define SCHED_NORMAL_PRIO_NUM (32)
++#define SCHED_EDGE_DELTA (SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
++
++/* PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2 */
++#define SCHED_NORMAL_PRIO_MOD(x) ((x) & (SCHED_NORMAL_PRIO_NUM - 1))
++
++/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
++static __read_mostly int sched_timeslice_shift = 23;
++
++/*
++ * Common interfaces
++ */
++static inline int
++task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
++{
++ u64 sched_dl = max(p->deadline, rq->time_edge);
++
++#ifdef ALT_SCHED_DEBUG
++ if (WARN_ONCE(sched_dl - rq->time_edge > NORMAL_PRIO_NUM - 1,
++ "pds: task_sched_prio_normal() delta %lld\n", sched_dl - rq->time_edge))
++ return SCHED_NORMAL_PRIO_NUM - 1;
++#endif
++
++ return sched_dl - rq->time_edge;
++}
++
++static inline int task_sched_prio(const struct task_struct *p)
++{
++ return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
++ MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
++}
++
++#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio) \
++ if (p->prio < MIN_NORMAL_PRIO) { \
++ prio = p->prio >> 2; \
++ idx = prio; \
++ } else { \
++ u64 sched_dl = max(p->deadline, rq->time_edge); \
++ prio = MIN_SCHED_NORMAL_PRIO + sched_dl - rq->time_edge; \
++ idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_dl); \
++ }
++
++static inline int sched_prio2idx(int sched_prio, struct rq *rq)
++{
++ return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
++ sched_prio :
++ MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
++}
++
++static inline int sched_idx2prio(int sched_idx, struct rq *rq)
++{
++ return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
++ sched_idx :
++ MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
++}
++
++static inline int sched_rq_prio_idx(struct rq *rq)
++{
++ return rq->prio_idx;
++}
++
++static inline int task_running_nice(struct task_struct *p)
++{
++ return (p->prio > DEFAULT_PRIO);
++}
++
++static inline void sched_update_rq_clock(struct rq *rq)
++{
++ struct list_head head;
++ u64 old = rq->time_edge;
++ u64 now = rq->clock >> sched_timeslice_shift;
++ u64 prio, delta;
++ DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
++
++ if (now == old)
++ return;
++
++ rq->time_edge = now;
++ delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
++ INIT_LIST_HEAD(&head);
++
++ prio = MIN_SCHED_NORMAL_PRIO;
++ for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
++ list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
++ SCHED_NORMAL_PRIO_MOD(prio + old), &head);
++
++ bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
++ if (!list_empty(&head)) {
++ u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
++
++ __list_splice(&head, rq->queue.heads + idx, rq->queue.heads[idx].next);
++ set_bit(MIN_SCHED_NORMAL_PRIO, normal);
++ }
++ bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
++ (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
++
++ if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
++ return;
++
++ rq->prio = max_t(u64, MIN_SCHED_NORMAL_PRIO, rq->prio - delta);
++ rq->prio_idx = sched_prio2idx(rq->prio, rq);
++}
++
++static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
++{
++ if (p->prio >= MIN_NORMAL_PRIO)
++ p->deadline = rq->time_edge + SCHED_EDGE_DELTA +
++ (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
++}
++
++static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
++{
++ u64 max_dl = rq->time_edge + SCHED_EDGE_DELTA + NICE_WIDTH / 2 - 1;
++ if (unlikely(p->deadline > max_dl))
++ p->deadline = max_dl;
++}
++
++static inline void sched_task_fork(struct task_struct *p, struct rq *rq)
++{
++ sched_task_renew(p, rq);
++}
++
++static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
++{
++ p->time_slice = sysctl_sched_base_slice;
++ sched_task_renew(p, rq);
++}
++
++static inline void sched_task_ttwu(struct task_struct *p) {}
++static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
++
++#endif /* _KERNEL_SCHED_PDS_H */
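As a quick sanity check of the pds.h arithmetic above, here is a standalone C sketch of the deadline-to-priority mapping. It assumes the mainline values MAX_PRIO = 140, NICE_WIDTH = 40 and nice-0 static_prio = 120, none of which are defined in this diff:

/* Sketch of sched_task_renew() + task_sched_prio_normal() from pds.h.
 * Assumed mainline constants: MAX_PRIO=140, NICE_WIDTH=40. */
#include <stdio.h>

#define MAX_PRIO              140
#define NICE_WIDTH            40
#define SCHED_NORMAL_PRIO_NUM 32
#define SCHED_EDGE_DELTA      (SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2) /* 12 */

int main(void)
{
	unsigned long long time_edge = 1000;  /* rq->clock >> 23, arbitrary */
	int static_prio = 120;                /* nice 0 */

	/* sched_task_renew(): place the deadline past the time edge */
	unsigned long long deadline = time_edge + SCHED_EDGE_DELTA +
		(static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;

	/* task_sched_prio_normal(): priority level = distance from edge */
	printf("deadline=%llu level=%llu\n", deadline, deadline - time_edge);
	return 0;  /* prints: deadline=1022 level=22 */
}

A nice -20 task lands 12 slots past the edge and a nice 19 task 31 slots past it, so the whole nice range fits inside the 32 normal-priority levels the bitmap provides.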
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index a9c65d97b3ca..a66431e6527c 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
+ WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * sched_entity:
+ *
+@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+
+ return 0;
+ }
++#endif
+
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ /*
+ * hardware:
+ *
+@@ -468,6 +470,7 @@ int update_irq_load_avg(struct rq *rq, u64 running)
+ }
+ #endif
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * Load avg and utilization metrics need to be updated periodically and before
+ * consumption. This function updates the metrics for all subsystems except for
+@@ -487,3 +490,4 @@ bool update_other_load_avgs(struct rq *rq)
+ update_hw_load_avg(rq_clock_task(rq), rq, hw_pressure) |
+ update_irq_load_avg(rq, 0);
+ }
++#endif /* !CONFIG_SCHED_ALT */
+diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
+index f4f6a0875c66..ee780f2b6c17 100644
+--- a/kernel/sched/pelt.h
++++ b/kernel/sched/pelt.h
+@@ -1,14 +1,16 @@
+ #ifdef CONFIG_SMP
+ #include "sched-pelt.h"
+
++#ifndef CONFIG_SCHED_ALT
+ int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
+ int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
+ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
+ int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
+ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
+ bool update_other_load_avgs(struct rq *rq);
++#endif
+
+-#ifdef CONFIG_SCHED_HW_PRESSURE
++#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
+ int update_hw_load_avg(u64 now, struct rq *rq, u64 capacity);
+
+ static inline u64 hw_load_avg(struct rq *rq)
+@@ -45,6 +47,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
+ return PELT_MIN_DIVIDER + avg->period_contrib;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ static inline void cfs_se_util_change(struct sched_avg *avg)
+ {
+ unsigned int enqueued;
+@@ -181,9 +184,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
+ return rq_clock_pelt(rq_of(cfs_rq));
+ }
+ #endif
++#endif /* CONFIG_SCHED_ALT */
+
+ #else
+
++#ifndef CONFIG_SCHED_ALT
+ static inline int
+ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
+ {
+@@ -201,6 +206,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
+ {
+ return 0;
+ }
++#endif
+
+ static inline int
+ update_hw_load_avg(u64 now, struct rq *rq, u64 capacity)
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index c03b3d7b320e..08ee4a9cd6a5 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -5,6 +5,10 @@
+ #ifndef _KERNEL_SCHED_SCHED_H
+ #define _KERNEL_SCHED_SCHED_H
+
++#ifdef CONFIG_SCHED_ALT
++#include "alt_sched.h"
++#else
++
+ #include <linux/sched/affinity.h>
+ #include <linux/sched/autogroup.h>
+ #include <linux/sched/cpufreq.h>
+@@ -3878,4 +3882,9 @@ void sched_enq_and_set_task(struct sched_enq_and_set_ctx *ctx);
+
+ #include "ext.h"
+
++static inline int task_running_nice(struct task_struct *p)
++{
++ return (task_nice(p) > 0);
++}
++#endif /* !CONFIG_SCHED_ALT */
+ #endif /* _KERNEL_SCHED_SCHED_H */
+diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
+index eb0cdcd4d921..72224ecb5cbf 100644
+--- a/kernel/sched/stats.c
++++ b/kernel/sched/stats.c
+@@ -115,8 +115,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ } else {
+ struct rq *rq;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ struct sched_domain *sd;
+ int dcount = 0;
++#endif
+ #endif
+ cpu = (unsigned long)(v - 2);
+ rq = cpu_rq(cpu);
+@@ -133,6 +135,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ seq_printf(seq, "\n");
+
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ /* domain-specific stats */
+ rcu_read_lock();
+ for_each_domain(cpu, sd) {
+@@ -160,6 +163,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
+ sd->ttwu_move_balance);
+ }
+ rcu_read_unlock();
++#endif
+ #endif
+ }
+ return 0;
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index 767e098a3bd1..4cbf4d3e611e 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart (struct rq *rq, unsigned long long delt
+
+ #endif /* CONFIG_SCHEDSTATS */
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ struct sched_entity_stats {
+ struct sched_entity se;
+@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity *se)
+ #endif
+ return &task_of(se)->stats;
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ #ifdef CONFIG_PSI
+ void psi_task_change(struct task_struct *task, int clear, int set);
+diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
+index 24f9f90b6574..9aa01e45c920 100644
+--- a/kernel/sched/syscalls.c
++++ b/kernel/sched/syscalls.c
+@@ -16,6 +16,14 @@
+ #include "sched.h"
+ #include "autogroup.h"
+
++#ifdef CONFIG_SCHED_ALT
++#include "alt_core.h"
++
++static inline int __normal_prio(int policy, int rt_prio, int static_prio)
++{
++ return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) : static_prio;
++}
++#else /* !CONFIG_SCHED_ALT */
+ static inline int __normal_prio(int policy, int rt_prio, int nice)
+ {
+ int prio;
+@@ -29,6 +37,7 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
+
+ return prio;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ /*
+ * Calculate the expected normal priority: i.e. priority
+@@ -39,7 +48,11 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
+ */
+ static inline int normal_prio(struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return __normal_prio(p->policy, p->rt_priority, p->static_prio);
++#else /* !CONFIG_SCHED_ALT */
+ return __normal_prio(p->policy, p->rt_priority, PRIO_TO_NICE(p->static_prio));
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ /*
+@@ -64,6 +77,37 @@ static int effective_prio(struct task_struct *p)
+
+ void set_user_nice(struct task_struct *p, long nice)
+ {
++#ifdef CONFIG_SCHED_ALT
++ unsigned long flags;
++ struct rq *rq;
++ raw_spinlock_t *lock;
++
++ if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
++ return;
++ /*
++ * We have to be careful, if called from sys_setpriority(),
++ * the task might be in the middle of scheduling on another CPU.
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++ rq = __task_access_lock(p, &lock);
++
++ p->static_prio = NICE_TO_PRIO(nice);
++ /*
++ * The RT priorities are set via sched_setscheduler(), but we still
++ * allow the 'normal' nice value to be set - but as expected
++ * it won't have any effect on scheduling until the task
++ * returns to SCHED_NORMAL/SCHED_BATCH:
++ */
++ if (task_has_rt_policy(p))
++ goto out_unlock;
++
++ p->prio = effective_prio(p);
++
++ check_task_changed(p, rq);
++out_unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++#else
+ bool queued, running;
+ struct rq *rq;
+ int old_prio;
+@@ -112,6 +156,7 @@ void set_user_nice(struct task_struct *p, long nice)
+ * lowered its priority, then reschedule its CPU:
+ */
+ p->sched_class->prio_changed(rq, p, old_prio);
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ EXPORT_SYMBOL(set_user_nice);
+
+@@ -190,7 +235,19 @@ SYSCALL_DEFINE1(nice, int, increment)
+ */
+ int task_prio(const struct task_struct *p)
+ {
++#ifdef CONFIG_SCHED_ALT
++/*
++ * sched policy              return value   kernel prio     user prio/nice
++ *
++ * (BMQ) normal, batch, idle [0 ... 53]     [100 ... 139]   0/[-20 ... 19]/[-7 ... 7]
++ * (PDS) normal, batch, idle [0 ... 39]     100             0/[-20 ... 19]
++ * fifo, rr                  [-1 ... -100]  [99 ... 0]      [0 ... 99]
++ */
++ return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
++ task_sched_prio_normal(p, task_rq(p));
++#else
+ return p->prio - MAX_RT_PRIO;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ /**
+@@ -300,10 +357,13 @@ static void __setscheduler_params(struct task_struct *p,
+
+ p->policy = policy;
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_policy(policy)) {
+ __setparam_dl(p, attr);
+ } else if (fair_policy(policy)) {
++#endif /* !CONFIG_SCHED_ALT */
+ p->static_prio = NICE_TO_PRIO(attr->sched_nice);
++#ifndef CONFIG_SCHED_ALT
+ if (attr->sched_runtime) {
+ p->se.custom_slice = 1;
+ p->se.slice = clamp_t(u64, attr->sched_runtime,
+@@ -322,6 +382,7 @@ static void __setscheduler_params(struct task_struct *p,
+ /* when switching back to non-rt policy, restore timerslack */
+ p->timer_slack_ns = p->default_timer_slack_ns;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ /*
+ * __sched_setscheduler() ensures attr->sched_priority == 0 when
+@@ -330,7 +391,9 @@ static void __setscheduler_params(struct task_struct *p,
+ */
+ p->rt_priority = attr->sched_priority;
+ p->normal_prio = normal_prio(p);
++#ifndef CONFIG_SCHED_ALT
+ set_load_weight(p, true);
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ /*
+@@ -346,6 +409,8 @@ static bool check_same_owner(struct task_struct *p)
+ uid_eq(cred->euid, pcred->uid));
+ }
+
++#ifndef CONFIG_SCHED_ALT
++
+ #ifdef CONFIG_UCLAMP_TASK
+
+ static int uclamp_validate(struct task_struct *p,
+@@ -459,6 +524,7 @@ static inline int uclamp_validate(struct task_struct *p,
+ static void __setscheduler_uclamp(struct task_struct *p,
+ const struct sched_attr *attr) { }
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
+
+ /*
+ * Allow unprivileged RT tasks to decrease priority.
+@@ -469,11 +535,13 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ const struct sched_attr *attr,
+ int policy, int reset_on_fork)
+ {
++#ifndef CONFIG_SCHED_ALT
+ if (fair_policy(policy)) {
+ if (attr->sched_nice < task_nice(p) &&
+ !is_nice_reduction(p, attr->sched_nice))
+ goto req_priv;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ if (rt_policy(policy)) {
+ unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
+@@ -488,6 +556,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ goto req_priv;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * Can't set/change SCHED_DEADLINE policy at all for now
+ * (safest behavior); in the future we would like to allow
+@@ -505,6 +574,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
+ if (!is_nice_reduction(p, task_nice(p)))
+ goto req_priv;
+ }
++#endif /* !CONFIG_SCHED_ALT */
+
+ /* Can't change other user's priorities: */
+ if (!check_same_owner(p))
+@@ -527,6 +597,158 @@ int __sched_setscheduler(struct task_struct *p,
+ const struct sched_attr *attr,
+ bool user, bool pi)
+ {
++#ifdef CONFIG_SCHED_ALT
++ const struct sched_attr dl_squash_attr = {
++ .size = sizeof(struct sched_attr),
++ .sched_policy = SCHED_FIFO,
++ .sched_nice = 0,
++ .sched_priority = 99,
++ };
++ int oldpolicy = -1, policy = attr->sched_policy;
++ int retval, newprio;
++ struct balance_callback *head;
++ unsigned long flags;
++ struct rq *rq;
++ int reset_on_fork;
++ raw_spinlock_t *lock;
++
++ /* The pi code expects interrupts enabled */
++ BUG_ON(pi && in_interrupt());
++
++ /*
++ * Alt schedule FW supports SCHED_DEADLINE by squashing it into a prio 0 (highest) SCHED_FIFO task
++ */
++ if (unlikely(SCHED_DEADLINE == policy)) {
++ attr = &dl_squash_attr;
++ policy = attr->sched_policy;
++ }
++recheck:
++ /* Double check policy once rq lock held */
++ if (policy < 0) {
++ reset_on_fork = p->sched_reset_on_fork;
++ policy = oldpolicy = p->policy;
++ } else {
++ reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
++
++ if (policy > SCHED_IDLE)
++ return -EINVAL;
++ }
++
++ if (attr->sched_flags & ~(SCHED_FLAG_ALL))
++ return -EINVAL;
++
++ /*
++ * Valid priorities for SCHED_FIFO and SCHED_RR are
++ * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
++ * SCHED_BATCH and SCHED_IDLE is 0.
++ */
++ if (attr->sched_priority < 0 ||
++ (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
++ (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
++ return -EINVAL;
++ if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
++ (attr->sched_priority != 0))
++ return -EINVAL;
++
++ if (user) {
++ retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
++ if (retval)
++ return retval;
++
++ retval = security_task_setscheduler(p);
++ if (retval)
++ return retval;
++ }
++
++ /*
++ * Make sure no PI-waiters arrive (or leave) while we are
++ * changing the priority of the task:
++ */
++ raw_spin_lock_irqsave(&p->pi_lock, flags);
++
++ /*
++ * To be able to change p->policy safely, task_access_lock()
++ * must be called.
++ * If task_access_lock() is used here:
++ * for a task p which is not running, reading rq->stop is
++ * racy but acceptable, as ->stop doesn't change much.
++ * An enhancement could be made to read rq->stop safely.
++ */
++ rq = __task_access_lock(p, &lock);
++
++ /*
++ * Changing the policy of the stop threads is a very bad idea
++ */
++ if (p == rq->stop) {
++ retval = -EINVAL;
++ goto unlock;
++ }
++
++ /*
++ * If not changing anything there's no need to proceed further:
++ */
++ if (unlikely(policy == p->policy)) {
++ if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
++ goto change;
++ if (!rt_policy(policy) &&
++ NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
++ goto change;
++
++ p->sched_reset_on_fork = reset_on_fork;
++ retval = 0;
++ goto unlock;
++ }
++change:
++
++ /* Re-check policy now with rq lock held */
++ if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
++ policy = oldpolicy = -1;
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++ goto recheck;
++ }
++
++ p->sched_reset_on_fork = reset_on_fork;
++
++ newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
++ if (pi) {
++ /*
++ * Take priority boosted tasks into account. If the new
++ * effective priority is unchanged, we just store the new
++ * normal parameters and do not touch the scheduler class and
++ * the runqueue. This will be done when the task deboosts
++ * itself.
++ */
++ newprio = rt_effective_prio(p, newprio);
++ }
++
++ if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
++ __setscheduler_params(p, attr);
++ __setscheduler_prio(p, newprio);
++ }
++
++ check_task_changed(p, rq);
++
++ /* Avoid rq from going away on us: */
++ preempt_disable();
++ head = splice_balance_callbacks(rq);
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++
++ if (pi)
++ rt_mutex_adjust_pi(p);
++
++ /* Run balance callbacks after we've adjusted the PI chain: */
++ balance_callbacks(rq, head);
++ preempt_enable();
++
++ return 0;
++
++unlock:
++ __task_access_unlock(p, lock);
++ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
++ return retval;
++#else /* !CONFIG_SCHED_ALT */
+ int oldpolicy = -1, policy = attr->sched_policy;
+ int retval, oldprio, newprio, queued, running;
+ const struct sched_class *prev_class, *next_class;
+@@ -764,6 +986,7 @@ int __sched_setscheduler(struct task_struct *p,
+ if (cpuset_locked)
+ cpuset_unlock();
+ return retval;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+
+ static int _sched_setscheduler(struct task_struct *p, int policy,
+@@ -775,8 +998,10 @@ static int _sched_setscheduler(struct task_struct *p, int policy,
+ .sched_nice = PRIO_TO_NICE(p->static_prio),
+ };
+
++#ifndef CONFIG_SCHED_ALT
+ if (p->se.custom_slice)
+ attr.sched_runtime = p->se.slice;
++#endif /* !CONFIG_SCHED_ALT */
+
+ /* Fixup the legacy SCHED_RESET_ON_FORK hack. */
+ if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
+@@ -944,13 +1169,18 @@ static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *a
+
+ static void get_params(struct task_struct *p, struct sched_attr *attr)
+ {
+- if (task_has_dl_policy(p)) {
++#ifndef CONFIG_SCHED_ALT
++ if (task_has_dl_policy(p))
+ __getparam_dl(p, attr);
+- } else if (task_has_rt_policy(p)) {
++ else
++#endif
++ if (task_has_rt_policy(p)) {
+ attr->sched_priority = p->rt_priority;
+ } else {
+ attr->sched_nice = task_nice(p);
++#ifndef CONFIG_SCHED_ALT
+ attr->sched_runtime = p->se.slice;
++#endif
+ }
+ }
+
+@@ -1170,6 +1400,7 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
+ #ifdef CONFIG_SMP
+ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
+ {
++#ifndef CONFIG_SCHED_ALT
+ /*
+ * If the task isn't a deadline task or admission control is
+ * disabled then we don't care about affinity changes.
+@@ -1186,6 +1417,7 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
+ guard(rcu)();
+ if (!cpumask_subset(task_rq(p)->rd->span, mask))
+ return -EBUSY;
++#endif
+
+ return 0;
+ }
+@@ -1210,9 +1442,11 @@ int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
+ ctx->new_mask = new_mask;
+ ctx->flags |= SCA_CHECK;
+
++#ifndef CONFIG_SCHED_ALT
+ retval = dl_task_check_affinity(p, new_mask);
+ if (retval)
+ goto out_free_new_mask;
++#endif
+
+ retval = __set_cpus_allowed_ptr(p, ctx);
+ if (retval)
+@@ -1392,13 +1626,34 @@ SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
+
+ static void do_sched_yield(void)
+ {
+- struct rq_flags rf;
+ struct rq *rq;
++ struct rq_flags rf;
++
++#ifdef CONFIG_SCHED_ALT
++ struct task_struct *p;
++
++ if (!sched_yield_type)
++ return;
+
+ rq = this_rq_lock_irq(&rf);
+
++ schedstat_inc(rq->yld_count);
++
++ p = current;
++ if (rt_task(p)) {
++ if (task_on_rq_queued(p))
++ requeue_task(p, rq);
++ } else if (rq->nr_running > 1) {
++ do_sched_yield_type_1(p, rq);
++ if (task_on_rq_queued(p))
++ requeue_task(p, rq);
++ }
++#else /* !CONFIG_SCHED_ALT */
++ rq = this_rq_lock_irq(&rf);
++
+ schedstat_inc(rq->yld_count);
+ current->sched_class->yield_task(rq);
++#endif /* !CONFIG_SCHED_ALT */
+
+ preempt_disable();
+ rq_unlock_irq(rq, &rf);
+@@ -1467,6 +1722,9 @@ EXPORT_SYMBOL(yield);
+ */
+ int __sched yield_to(struct task_struct *p, bool preempt)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return 0;
++#else /* !CONFIG_SCHED_ALT */
+ struct task_struct *curr = current;
+ struct rq *rq, *p_rq;
+ int yielded = 0;
+@@ -1512,6 +1770,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
+ schedule();
+
+ return yielded;
++#endif /* !CONFIG_SCHED_ALT */
+ }
+ EXPORT_SYMBOL_GPL(yield_to);
+
+@@ -1532,7 +1791,9 @@ SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
+ case SCHED_RR:
+ ret = MAX_RT_PRIO-1;
+ break;
++#ifndef CONFIG_SCHED_ALT
+ case SCHED_DEADLINE:
++#endif
+ case SCHED_NORMAL:
+ case SCHED_BATCH:
+ case SCHED_IDLE:
+@@ -1560,7 +1821,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
+ case SCHED_RR:
+ ret = 1;
+ break;
++#ifndef CONFIG_SCHED_ALT
+ case SCHED_DEADLINE:
++#endif
+ case SCHED_NORMAL:
+ case SCHED_BATCH:
+ case SCHED_IDLE:
+@@ -1572,7 +1835,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
+
+ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ {
++#ifndef CONFIG_SCHED_ALT
+ unsigned int time_slice = 0;
++#endif
+ int retval;
+
+ if (pid < 0)
+@@ -1587,6 +1852,7 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ if (retval)
+ return retval;
+
++#ifndef CONFIG_SCHED_ALT
+ scoped_guard (task_rq_lock, p) {
+ struct rq *rq = scope.rq;
+ if (p->sched_class->get_rr_interval)
+@@ -1595,6 +1861,13 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
+ }
+
+ jiffies_to_timespec64(time_slice, t);
++#else
++ }
++
++ alt_sched_debug();
++
++ *t = ns_to_timespec64(sysctl_sched_base_slice);
++#endif /* !CONFIG_SCHED_ALT */
+ return 0;
+ }
+
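To make the task_prio() comment table above concrete, a small sketch of the RT branch (assuming mainline MAX_RT_PRIO = 100, which this diff does not define):

/* Sketch of task_prio() for RT tasks under SCHED_ALT.
 * MAX_RT_PRIO = 100 is assumed from mainline headers. */
#include <stdio.h>

#define MAX_RT_PRIO 100

static int task_prio_rt(int rt_priority)
{
	/* kernel prio = MAX_RT_PRIO - 1 - rt_priority, then shifted */
	return (MAX_RT_PRIO - 1 - rt_priority) - MAX_RT_PRIO;
}

int main(void)
{
	printf("fifo 0  -> %d\n", task_prio_rt(0));   /* -1   */
	printf("fifo 99 -> %d\n", task_prio_rt(99));  /* -100 */
	/* normal tasks instead report task_sched_prio_normal(),
	 * the 0..39 (PDS) distance from the runqueue time edge */
	return 0;
}

This reproduces the [-1 ... -100] column of the table; normal tasks never go negative.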
+diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
+index 9748a4c8d668..1e2bdd70d69a 100644
+--- a/kernel/sched/topology.c
++++ b/kernel/sched/topology.c
+@@ -3,6 +3,7 @@
+ * Scheduler topology setup/handling methods
+ */
+
++#ifndef CONFIG_SCHED_ALT
+ #include <linux/bsearch.h>
+
+ DEFINE_MUTEX(sched_domains_mutex);
+@@ -1459,8 +1460,10 @@ static void asym_cpu_capacity_scan(void)
+ */
+
+ static int default_relax_domain_level = -1;
++#endif /* CONFIG_SCHED_ALT */
+ int sched_domain_level_max;
+
++#ifndef CONFIG_SCHED_ALT
+ static int __init setup_relax_domain_level(char *str)
+ {
+ if (kstrtoint(str, 0, &default_relax_domain_level))
+@@ -1695,6 +1698,7 @@ sd_init(struct sched_domain_topology_level *tl,
+
+ return sd;
+ }
++#endif /* CONFIG_SCHED_ALT */
+
+ /*
+ * Topology list, bottom-up.
+@@ -1731,6 +1735,7 @@ void __init set_sched_topology(struct sched_domain_topology_level *tl)
+ sched_domain_topology_saved = NULL;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_NUMA
+
+ static const struct cpumask *sd_numa_mask(int cpu)
+@@ -2797,3 +2802,28 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
+ partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
+ mutex_unlock(&sched_domains_mutex);
+ }
++#else /* CONFIG_SCHED_ALT */
++DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
++
++void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
++ struct sched_domain_attr *dattr_new)
++{}
++
++#ifdef CONFIG_NUMA
++int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
++{
++ return best_mask_cpu(cpu, cpus);
++}
++
++int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
++{
++ return cpumask_nth(cpu, cpus);
++}
++
++const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
++{
++ return ERR_PTR(-EOPNOTSUPP);
++}
++EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
++#endif /* CONFIG_NUMA */
++#endif
+diff --git a/kernel/sysctl.c b/kernel/sysctl.c
+index 79e6cb1d5c48..61bc0352e233 100644
+--- a/kernel/sysctl.c
++++ b/kernel/sysctl.c
+@@ -92,6 +92,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
+
+ /* Constants used for minimum and maximum */
+
++#ifdef CONFIG_SCHED_ALT
++extern int sched_yield_type;
++#endif
++
+ #ifdef CONFIG_PERF_EVENTS
+ static const int six_hundred_forty_kb = 640 * 1024;
+ #endif
+@@ -1907,6 +1911,17 @@ static struct ctl_table kern_table[] = {
+ .proc_handler = proc_dointvec,
+ },
+ #endif
++#ifdef CONFIG_SCHED_ALT
++ {
++ .procname = "yield_type",
++ .data = &sched_yield_type,
++ .maxlen = sizeof (int),
++ .mode = 0644,
++ .proc_handler = &proc_dointvec_minmax,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_TWO,
++ },
++#endif
+ #if defined(CONFIG_S390) && defined(CONFIG_SMP)
+ {
+ .procname = "spin_retry",
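Once registered, the entry above appears as /proc/sys/kernel/yield_type, and proc_dointvec_minmax clamps writes to 0..2. A short sketch of flipping it from userspace; the path is inferred from the kern_table entry rather than from any documented ABI:

/* Sketch: set the BMQ/PDS yield_type sysctl (needs root, and a kernel
 * built with CONFIG_SCHED_ALT; the path is assumed from kern_table). */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/kernel/yield_type", "w");

	if (!f) {
		perror("yield_type");  /* likely no SCHED_ALT kernel */
		return 1;
	}
	fputs("1\n", f);               /* 1 = requeue task (the default) */
	return fclose(f) ? 1 : 0;
}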
+diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
+index 6bcee4704059..cf88205fd4a2 100644
+--- a/kernel/time/posix-cpu-timers.c
++++ b/kernel/time/posix-cpu-timers.c
+@@ -223,7 +223,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
+ u64 stime, utime;
+
+ task_cputime(p, &utime, &stime);
+- store_samples(samples, stime, utime, p->se.sum_exec_runtime);
++ store_samples(samples, stime, utime, tsk_seruntime(p));
+ }
+
+ static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
+@@ -830,6 +830,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
+ }
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ static inline void check_dl_overrun(struct task_struct *tsk)
+ {
+ if (tsk->dl.dl_overrun) {
+@@ -837,6 +838,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
+ send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
+ }
+ }
++#endif
+
+ static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
+ {
+@@ -864,8 +866,10 @@ static void check_thread_timers(struct task_struct *tsk,
+ u64 samples[CPUCLOCK_MAX];
+ unsigned long soft;
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(tsk))
+ check_dl_overrun(tsk);
++#endif
+
+ if (expiry_cache_is_inactive(pct))
+ return;
+@@ -879,7 +883,7 @@ static void check_thread_timers(struct task_struct *tsk,
+ soft = task_rlimit(tsk, RLIMIT_RTTIME);
+ if (soft != RLIM_INFINITY) {
+ /* Task RT timeout is accounted in jiffies. RTTIME is usec */
+- unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
++ unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
+ unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
+
+ /* At the hard limit, send SIGKILL. No further action. */
+@@ -1115,8 +1119,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
+ return true;
+ }
+
++#ifndef CONFIG_SCHED_ALT
+ if (dl_task(tsk) && tsk->dl.dl_overrun)
+ return true;
++#endif
+
+ return false;
+ }
+diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
+index 1469dd8075fa..803527a0e48a 100644
+--- a/kernel/trace/trace_selftest.c
++++ b/kernel/trace/trace_selftest.c
+@@ -1419,10 +1419,15 @@ static int trace_wakeup_test_thread(void *data)
+ {
+ /* Make this a -deadline thread */
+ static const struct sched_attr attr = {
++#ifdef CONFIG_SCHED_ALT
++ /* No deadline on BMQ/PDS, use RR */
++ .sched_policy = SCHED_RR,
++#else
+ .sched_policy = SCHED_DEADLINE,
+ .sched_runtime = 100000ULL,
+ .sched_deadline = 10000000ULL,
+ .sched_period = 10000000ULL
++#endif
+ };
+ struct wakeup_test_data *x = data;
+
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 9949ffad8df0..90eac9d802a8 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -1247,6 +1247,7 @@ static bool kick_pool(struct worker_pool *pool)
+
+ p = worker->task;
+
++#ifndef CONFIG_SCHED_ALT
+ #ifdef CONFIG_SMP
+ /*
+ * Idle @worker is about to execute @work and waking up provides an
+@@ -1276,6 +1277,8 @@ static bool kick_pool(struct worker_pool *pool)
+ }
+ }
+ #endif
++#endif /* !CONFIG_SCHED_ALT */
++
+ wake_up_process(p);
+ return true;
+ }
+@@ -1404,7 +1407,11 @@ void wq_worker_running(struct task_struct *task)
+ * CPU intensive auto-detection cares about how long a work item hogged
+ * CPU without sleeping. Reset the starting timestamp on wakeup.
+ */
++#ifdef CONFIG_SCHED_ALT
++ worker->current_at = worker->task->sched_time;
++#else
+ worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+
+ WRITE_ONCE(worker->sleeping, 0);
+ }
+@@ -1489,7 +1496,11 @@ void wq_worker_tick(struct task_struct *task)
+ * We probably want to make this prettier in the future.
+ */
+ if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
++#ifdef CONFIG_SCHED_ALT
++ worker->task->sched_time - worker->current_at <
++#else
+ worker->task->se.sum_exec_runtime - worker->current_at <
++#endif
+ wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
+ return;
+
+@@ -3157,7 +3168,11 @@ __acquires(&pool->lock)
+ worker->current_func = work->func;
+ worker->current_pwq = pwq;
+ if (worker->task)
++#ifdef CONFIG_SCHED_ALT
++ worker->current_at = worker->task->sched_time;
++#else
+ worker->current_at = worker->task->se.sum_exec_runtime;
++#endif
+ work_data = *work_data_bits(work);
+ worker->current_color = get_work_color(work_data);
+
diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
new file mode 100644
index 00000000..7748d78c
--- /dev/null
+++ b/5021_BMQ-and-PDS-gentoo-defaults.patch
@@ -0,0 +1,13 @@
+--- a/init/Kconfig 2024-11-13 14:45:36.566335895 -0500
++++ b/init/Kconfig 2024-11-13 14:47:02.670787774 -0500
+@@ -860,8 +860,9 @@ config UCLAMP_BUCKETS_COUNT
+ If in doubt, use the default value.
+
+ menuconfig SCHED_ALT
++ depends on X86_64
+ bool "Alternative CPU Schedulers"
+- default y
++ default n
+ help
+ This feature enables the alternative CPU schedulers.
+
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-11-22 17:45 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-11-22 17:45 UTC (permalink / raw
To: gentoo-commits
commit: 84e347f66f81e2e80e29676135f13f38d88cd91e
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Nov 22 17:45:09 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Nov 22 17:45:09 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=84e347f6
Linux patch 6.12.1
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 12 ++++++----
1000_linux-6.12.1.patch | 62 +++++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 70 insertions(+), 4 deletions(-)
diff --git a/0000_README b/0000_README
index 2f20a332..4df3304e 100644
--- a/0000_README
+++ b/0000_README
@@ -43,17 +43,21 @@ EXPERIMENTAL
Individual Patch Descriptions:
--------------------------------------------------------------------------
+Patch: 1000_linux-6.12.1.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.1
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
Patch: 1700_sparc-address-warray-bound-warnings.patch
-From: https://github.com/KSPP/linux/issues/109
-Desc: Address -Warray-bounds warnings
+From: https://github.com/KSPP/linux/issues/109
+Desc: Address -Warray-bounds warnings
Patch: 1730_parisc-Disable-prctl.patch
-From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
-Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
+From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
+Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
diff --git a/1000_linux-6.12.1.patch b/1000_linux-6.12.1.patch
new file mode 100644
index 00000000..8eed7b47
--- /dev/null
+++ b/1000_linux-6.12.1.patch
@@ -0,0 +1,62 @@
+diff --git a/Makefile b/Makefile
+index 68a8faff25432a..70070e64d267c1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 0fac689c6350b2..13db0026dc1aad 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -371,7 +371,7 @@ static int uvc_parse_format(struct uvc_device *dev,
+ * Parse the frame descriptors. Only uncompressed, MJPEG and frame
+ * based formats have frame descriptors.
+ */
+- while (buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE &&
++ while (ftype && buflen > 2 && buffer[1] == USB_DT_CS_INTERFACE &&
+ buffer[2] == ftype) {
+ unsigned int maxIntervalIndex;
+
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 79d541f1502b22..4f6e566d52faa6 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -1491,7 +1491,18 @@ static unsigned long __mmap_region(struct file *file, unsigned long addr,
+ vm_flags = vma->vm_flags;
+ goto file_expanded;
+ }
+- vma_iter_config(&vmi, addr, end);
++
++ /*
++ * In the unlikely event that more memory was needed, but
++ * not available for the vma merge, the vma iterator
++ * will have no memory reserved for the write we told
++ * the driver was happening. To keep up the ruse,
++ * ensure the allocation for the store succeeds.
++ */
++ if (vmg_nomem(&vmg)) {
++ mas_preallocate(&vmi.mas, vma,
++ GFP_KERNEL|__GFP_NOFAIL);
++ }
+ }
+
+ vm_flags = vma->vm_flags;
+diff --git a/net/vmw_vsock/hyperv_transport.c b/net/vmw_vsock/hyperv_transport.c
+index e2157e38721770..56c232cf5b0f4f 100644
+--- a/net/vmw_vsock/hyperv_transport.c
++++ b/net/vmw_vsock/hyperv_transport.c
+@@ -549,6 +549,7 @@ static void hvs_destruct(struct vsock_sock *vsk)
+ vmbus_hvsock_device_unregister(chan);
+
+ kfree(hvs);
++ vsk->trans = NULL;
+ }
+
+ static int hvs_dgram_bind(struct vsock_sock *vsk, struct sockaddr_vm *addr)
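The one-line hvs_destruct() change is the usual dangling-pointer guard: the transport state is freed while the socket still holds a pointer to it, so any later cleanup pass could touch freed memory. A generic sketch of the pattern, not the hvs code itself:

/* Sketch of the dangling-pointer pattern fixed above. */
#include <stdlib.h>

struct sock_state { void *trans; };

static void destruct(struct sock_state *s)
{
	free(s->trans);
	s->trans = NULL;  /* without this, a second cleanup pass could
	                   * free() or dereference stale memory */
}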
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-11-30 17:33 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-11-30 17:33 UTC (permalink / raw
To: gentoo-commits
commit: cf5a18f21dd174f93ebf5fcc37a3e41ce8e5fdb8
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Nov 30 17:29:45 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Nov 30 17:32:03 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=cf5a18f2
Fix case for X86_USER_SHADOW_STACK
Bug: https://bugs.gentoo.org/945481
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
4567_distro-Gentoo-Kconfig.patch | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 87b8fa95..74e75c40 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -254,7 +254,7 @@
+ select RANDOMIZE_BASE
+ select RANDOMIZE_MEMORY
+ select RELOCATABLE
-+ select X86_USER_SHADOW_STACK if AS_WRUSS=Y
++ select X86_USER_SHADOW_STACK if AS_WRUSS=y
+ select VMAP_STACK
+
+
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-02 17:15 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-12-02 17:15 UTC (permalink / raw
To: gentoo-commits
commit: 353d9f32e0ba5f71b437ff4c970539e584a58e0b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 2 17:13:57 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Dec 2 17:13:57 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=353d9f32
GCC 15 defaults to -std=gnu23. Hack in CSTD_FLAG to pass -std=gnu11 everywhere
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
2980_GCC15-gnu23-to-gnu11-fix.patch | 105 ++++++++++++++++++++++++++++++++++++
2 files changed, 109 insertions(+)
diff --git a/0000_README b/0000_README
index 4df3304e..bc514d88 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 2920_sign-file-patch-for-libressl.patch
From: https://bugs.gentoo.org/717166
Desc: sign-file: full functionality with modern LibreSSL
+Patch: 2980_GCC15-gnu23-to-gnu11-fix.patch
+From: https://lore.kernel.org/linux-kbuild/20241119044724.GA2246422@thelio-3990X/
+Desc: GCC 15 defaults to -std=gnu23. Hack in CSTD_FLAG to pass -std=gnu11 everywhere.
+
Patch: 2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
From: https://lore.kernel.org/bpf/
Desc: libbpf: workaround -Wmaybe-uninitialized false positive
diff --git a/2980_GCC15-gnu23-to-gnu11-fix.patch b/2980_GCC15-gnu23-to-gnu11-fix.patch
new file mode 100644
index 00000000..c74b6180
--- /dev/null
+++ b/2980_GCC15-gnu23-to-gnu11-fix.patch
@@ -0,0 +1,105 @@
+GCC 15 defaults to -std=gnu23. While most of the kernel builds with -std=gnu11,
+some of it forgets to pass that flag. Hack in CSTD_FLAG to pass -std=gnu11
+everywhere.
+
+https://lore.kernel.org/linux-kbuild/20241119044724.GA2246422@thelio-3990X/
+--- a/Makefile
++++ b/Makefile
+@@ -416,6 +416,8 @@ export KCONFIG_CONFIG
+ # SHELL used by kbuild
+ CONFIG_SHELL := sh
+
++CSTD_FLAG := -std=gnu11
++
+ HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS 2>/dev/null)
+ HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS 2>/dev/null)
+ HOST_LFS_LIBS := $(shell getconf LFS_LIBS 2>/dev/null)
+@@ -437,7 +439,7 @@ HOSTRUSTC = rustc
+ HOSTPKG_CONFIG = pkg-config
+
+ KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
+- -O2 -fomit-frame-pointer -std=gnu11
++ -O2 -fomit-frame-pointer $(CSTD_FLAG)
+ KBUILD_USERCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS)
+ KBUILD_USERLDFLAGS := $(USERLDFLAGS)
+
+@@ -545,7 +547,7 @@ LINUXINCLUDE := \
+ KBUILD_AFLAGS := -D__ASSEMBLY__ -fno-PIE
+
+ KBUILD_CFLAGS :=
+-KBUILD_CFLAGS += -std=gnu11
++KBUILD_CFLAGS += $(CSTD_FLAG)
+ KBUILD_CFLAGS += -fshort-wchar
+ KBUILD_CFLAGS += -funsigned-char
+ KBUILD_CFLAGS += -fno-common
+@@ -589,7 +591,7 @@ export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AW
+ export PERL PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
+ export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
+ export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE
+-export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS
++export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS CSTD_FLAG
+
+ export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS
+ export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE
+--- a/arch/arm64/kernel/vdso32/Makefile
++++ b/arch/arm64/kernel/vdso32/Makefile
+@@ -65,7 +65,7 @@ VDSO_CFLAGS += -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
+ -fno-strict-aliasing -fno-common \
+ -Werror-implicit-function-declaration \
+ -Wno-format-security \
+- -std=gnu11
++ $(CSTD_FLAG)
+ VDSO_CFLAGS += -O2
+ # Some useful compiler-dependent flags from top-level Makefile
+ VDSO_CFLAGS += $(call cc32-option,-Wno-pointer-sign)
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -47,7 +47,7 @@ endif
+
+ # How to compile the 16-bit code. Note we always compile for -march=i386;
+ # that way we can complain to the user if the CPU is insufficient.
+-REALMODE_CFLAGS := -std=gnu11 -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
++REALMODE_CFLAGS := $(CSTD_FLAG) -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
+ -Wall -Wstrict-prototypes -march=i386 -mregparm=3 \
+ -fno-strict-aliasing -fomit-frame-pointer -fno-pic \
+ -mno-mmx -mno-sse $(call cc-option,-fcf-protection=none)
+--- a/drivers/firmware/efi/libstub/Makefile
++++ b/drivers/firmware/efi/libstub/Makefile
+@@ -7,7 +7,7 @@
+ #
+
+ # non-x86 reuses KBUILD_CFLAGS, x86 does not
+-cflags-y := $(KBUILD_CFLAGS)
++cflags-y := $(KBUILD_CFLAGS) $(CSTD_FLAG)
+
+ cflags-$(CONFIG_X86_32) := -march=i386
+ cflags-$(CONFIG_X86_64) := -mcmodel=small
+@@ -18,7 +18,7 @@ cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ \
+ $(call cc-disable-warning, address-of-packed-member) \
+ $(call cc-disable-warning, gnu) \
+ -fno-asynchronous-unwind-tables \
+- $(CLANG_FLAGS)
++ $(CLANG_FLAGS) $(CSTD_FLAG)
+
+ # arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
+ # disable the stackleak plugin
+@@ -42,7 +42,7 @@ KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(cflags-y)) \
+ -ffreestanding \
+ -fno-stack-protector \
+ $(call cc-option,-fno-addrsig) \
+- -D__DISABLE_EXPORTS
++ -D__DISABLE_EXPORTS $(CSTD_FLAG)
+
+ #
+ # struct randomization only makes sense for Linux internal types, which the EFI
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -24,7 +24,7 @@ targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
+ # case of cross compiling, as it has the '--target=' flag, which is needed to
+ # avoid errors with '-march=i386', and future flags may depend on the target to
+ # be valid.
+-KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS)
++KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS) $(CSTD_FLAG)
+ KBUILD_CFLAGS += -fno-strict-aliasing -fPIE
+ KBUILD_CFLAGS += -Wundef
+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
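Why pin -std=gnu11 at all? C23 (the gnu23 default) promotes several identifiers the kernel has long used as ordinary names into keywords. A minimal illustrative example of that class of breakage, not a construct quoted from the kernel:

/* Valid under -std=gnu11, rejected under -std=gnu23, because C23
 * turns bool, true and false into keywords. */
typedef int bool;      /* gnu11: fine; gnu23: error, 'bool' is a keyword */

static bool ok = 1;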
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-05 14:06 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-12-05 14:06 UTC (permalink / raw
To: gentoo-commits
commit: 667267c9cd00cf85da39630df8c81d77fda4ec4d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 5 14:06:06 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Dec 5 14:06:06 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=667267c9
Linux patch 6.12.2
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1001_linux-6.12.2.patch | 47740 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 47744 insertions(+)
diff --git a/0000_README b/0000_README
index bc514d88..ac1104a1 100644
--- a/0000_README
+++ b/0000_README
@@ -47,6 +47,10 @@ Patch: 1000_linux-6.12.1.patch
From: https://www.kernel.org
Desc: Linux 6.12.1
+Patch: 1001_linux-6.12.2.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.2
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1001_linux-6.12.2.patch b/1001_linux-6.12.2.patch
new file mode 100644
index 00000000..f10548d7
--- /dev/null
+++ b/1001_linux-6.12.2.patch
@@ -0,0 +1,47740 @@
+diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
+index fdedf1ea944ba8..513296bb6f297f 100644
+--- a/Documentation/ABI/testing/sysfs-fs-f2fs
++++ b/Documentation/ABI/testing/sysfs-fs-f2fs
+@@ -311,10 +311,13 @@ Description: Do background GC aggressively when set. Set to 0 by default.
+ GC approach and turns SSR mode on.
+ gc urgent low(2): lowers the bar of checking I/O idling in
+ order to process outstanding discard commands and GC a
+- little bit aggressively. uses cost benefit GC approach.
++ little bit aggressively. always uses cost benefit GC approach,
++ and will override age-threshold GC approach if ATGC is enabled
++ at the same time.
+ gc urgent mid(3): does GC forcibly in a period of given
+ gc_urgent_sleep_time and executes a mid level of I/O idling check.
+- uses cost benefit GC approach.
++ always uses cost benefit GC approach, and will override
++ age-threshold GC approach if ATGC is enabled at the same time.
+
+ What: /sys/fs/f2fs/<disk>/gc_urgent_sleep_time
+ Date: August 2017
+diff --git a/Documentation/RCU/stallwarn.rst b/Documentation/RCU/stallwarn.rst
+index ca7b7cd806a16c..30080ff6f4062d 100644
+--- a/Documentation/RCU/stallwarn.rst
++++ b/Documentation/RCU/stallwarn.rst
+@@ -249,7 +249,7 @@ ticks this GP)" indicates that this CPU has not taken any scheduling-clock
+ interrupts during the current stalled grace period.
+
+ The "idle=" portion of the message prints the dyntick-idle state.
+-The hex number before the first "/" is the low-order 12 bits of the
++The hex number before the first "/" is the low-order 16 bits of the
+ dynticks counter, which will have an even-numbered value if the CPU
+ is in dyntick-idle mode and an odd-numbered value otherwise. The hex
+ number between the two "/"s is the value of the nesting, which will be
+diff --git a/Documentation/admin-guide/blockdev/zram.rst b/Documentation/admin-guide/blockdev/zram.rst
+index 678d70d6e1c3ac..714a5171bfc0b8 100644
+--- a/Documentation/admin-guide/blockdev/zram.rst
++++ b/Documentation/admin-guide/blockdev/zram.rst
+@@ -47,6 +47,8 @@ The list of possible return codes:
+ -ENOMEM zram was not able to allocate enough memory to fulfil your
+ needs.
+ -EINVAL invalid input has been provided.
++-EAGAIN re-try operation later (e.g. when attempting to run recompress
++ and writeback simultaneously).
+ ======== =============================================================
+
+ If you use 'echo', the returned value is set by the 'echo' utility,
+diff --git a/Documentation/admin-guide/media/building.rst b/Documentation/admin-guide/media/building.rst
+index a0647342991637..7a413ba07f93bb 100644
+--- a/Documentation/admin-guide/media/building.rst
++++ b/Documentation/admin-guide/media/building.rst
+@@ -15,7 +15,7 @@ Please notice, however, that, if:
+
+ you should use the main media development tree ``master`` branch:
+
+- https://git.linuxtv.org/media_tree.git/
++ https://git.linuxtv.org/media.git/
+
+ In this case, you may find some useful information at the
+ `LinuxTv wiki pages <https://linuxtv.org/wiki>`_:
+diff --git a/Documentation/admin-guide/media/saa7134.rst b/Documentation/admin-guide/media/saa7134.rst
+index 51eae7eb5ab7f4..18d7cbc897db4b 100644
+--- a/Documentation/admin-guide/media/saa7134.rst
++++ b/Documentation/admin-guide/media/saa7134.rst
+@@ -67,7 +67,7 @@ Changes / Fixes
+ Please mail to linux-media AT vger.kernel.org unified diffs against
+ the linux media git tree:
+
+- https://git.linuxtv.org/media_tree.git/
++ https://git.linuxtv.org/media.git/
+
+ This is done by committing a patch at a clone of the git tree and
+ submitting the patch using ``git send-email``. Don't forget to
+diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
+index 4fd492cb49704f..ad2d8ddad27fe4 100644
+--- a/Documentation/arch/x86/boot.rst
++++ b/Documentation/arch/x86/boot.rst
+@@ -896,10 +896,19 @@ Offset/size: 0x260/4
+
+ The kernel runtime start address is determined by the following algorithm::
+
+- if (relocatable_kernel)
+- runtime_start = align_up(load_address, kernel_alignment)
+- else
+- runtime_start = pref_address
++ if (relocatable_kernel) {
++ if (load_address < pref_address)
++ load_address = pref_address;
++ runtime_start = align_up(load_address, kernel_alignment);
++ } else {
++ runtime_start = pref_address;
++ }
++
++Hence the necessary memory window location and size can be estimated by
++a boot loader as::
++
++ memory_window_start = runtime_start;
++ memory_window_size = init_size;
+
+ ============ ===============
+ Field name: handover_offset
+diff --git a/Documentation/devicetree/bindings/cache/qcom,llcc.yaml b/Documentation/devicetree/bindings/cache/qcom,llcc.yaml
+index 68ea5f70b75f03..ee7edc6f60e2b4 100644
+--- a/Documentation/devicetree/bindings/cache/qcom,llcc.yaml
++++ b/Documentation/devicetree/bindings/cache/qcom,llcc.yaml
+@@ -39,11 +39,11 @@ properties:
+
+ reg:
+ minItems: 2
+- maxItems: 9
++ maxItems: 10
+
+ reg-names:
+ minItems: 2
+- maxItems: 9
++ maxItems: 10
+
+ interrupts:
+ maxItems: 1
+@@ -134,6 +134,36 @@ allOf:
+ - qcom,qdu1000-llcc
+ - qcom,sc8180x-llcc
+ - qcom,sc8280xp-llcc
++ then:
++ properties:
++ reg:
++ items:
++ - description: LLCC0 base register region
++ - description: LLCC1 base register region
++ - description: LLCC2 base register region
++ - description: LLCC3 base register region
++ - description: LLCC4 base register region
++ - description: LLCC5 base register region
++ - description: LLCC6 base register region
++ - description: LLCC7 base register region
++ - description: LLCC broadcast base register region
++ reg-names:
++ items:
++ - const: llcc0_base
++ - const: llcc1_base
++ - const: llcc2_base
++ - const: llcc3_base
++ - const: llcc4_base
++ - const: llcc5_base
++ - const: llcc6_base
++ - const: llcc7_base
++ - const: llcc_broadcast_base
++
++ - if:
++ properties:
++ compatible:
++ contains:
++ enum:
+ - qcom,x1e80100-llcc
+ then:
+ properties:
+@@ -148,6 +178,7 @@ allOf:
+ - description: LLCC6 base register region
+ - description: LLCC7 base register region
+ - description: LLCC broadcast base register region
++ - description: LLCC broadcast AND register region
+ reg-names:
+ items:
+ - const: llcc0_base
+@@ -159,6 +190,7 @@ allOf:
+ - const: llcc6_base
+ - const: llcc7_base
+ - const: llcc_broadcast_base
++ - const: llcc_broadcast_and_base
+
+ - if:
+ properties:
+diff --git a/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml b/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml
+index 5e942bccf27787..2b2041818a0a44 100644
+--- a/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml
++++ b/Documentation/devicetree/bindings/clock/adi,axi-clkgen.yaml
+@@ -26,9 +26,21 @@ properties:
+ description:
+ Specifies the reference clock(s) from which the output frequency is
+ derived. This must either reference one clock if only the first clock
+- input is connected or two if both clock inputs are connected.
+- minItems: 1
+- maxItems: 2
++ input is connected or two if both clock inputs are connected. The last
++ clock is the AXI bus clock that needs to be enabled so we can access the
++ core registers.
++ minItems: 2
++ maxItems: 3
++
++ clock-names:
++ oneOf:
++ - items:
++ - const: clkin1
++ - const: s_axi_aclk
++ - items:
++ - const: clkin1
++ - const: clkin2
++ - const: s_axi_aclk
+
+ '#clock-cells':
+ const: 0
+@@ -40,6 +52,7 @@ required:
+ - compatible
+ - reg
+ - clocks
++ - clock-names
+ - '#clock-cells'
+
+ additionalProperties: false
+@@ -50,5 +63,6 @@ examples:
+ compatible = "adi,axi-clkgen-2.00.a";
+ #clock-cells = <0>;
+ reg = <0xff000000 0x1000>;
+- clocks = <&osc 1>;
++ clocks = <&osc 1>, <&clkc 15>;
++ clock-names = "clkin1", "s_axi_aclk";
+ };
+diff --git a/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml b/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
+index fc8b97f820775b..41fe0003474285 100644
+--- a/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
++++ b/Documentation/devicetree/bindings/iio/dac/adi,ad3552r.yaml
+@@ -30,7 +30,7 @@ properties:
+ maxItems: 1
+
+ spi-max-frequency:
+- maximum: 30000000
++ maximum: 66000000
+
+ reset-gpios:
+ maxItems: 1
+diff --git a/Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml b/Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml
+index 898c1be2d6a435..f05aab2b1addca 100644
+--- a/Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml
++++ b/Documentation/devicetree/bindings/pci/mediatek-pcie-gen3.yaml
+@@ -149,7 +149,7 @@ allOf:
+ then:
+ properties:
+ clocks:
+- minItems: 4
++ minItems: 6
+
+ clock-names:
+ items:
+@@ -178,7 +178,7 @@ allOf:
+ then:
+ properties:
+ clocks:
+- minItems: 4
++ minItems: 6
+
+ clock-names:
+ items:
+@@ -207,6 +207,7 @@ allOf:
+ properties:
+ clocks:
+ minItems: 4
++ maxItems: 4
+
+ clock-names:
+ items:
+diff --git a/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml b/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml
+index 4dfb49b0e07f73..f82a3c7e6c29e4 100644
+--- a/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml
++++ b/Documentation/devicetree/bindings/pinctrl/samsung,pinctrl-wakeup-interrupt.yaml
+@@ -91,14 +91,17 @@ allOf:
+ - if:
+ properties:
+ compatible:
+- # Match without "contains", to skip newer variants which are still
+- # compatible with samsung,exynos7-wakeup-eint
+- enum:
+- - samsung,s5pv210-wakeup-eint
+- - samsung,exynos4210-wakeup-eint
+- - samsung,exynos5433-wakeup-eint
+- - samsung,exynos7-wakeup-eint
+- - samsung,exynos7885-wakeup-eint
++ oneOf:
++ # Match without "contains", to skip newer variants which are still
++ # compatible with samsung,exynos7-wakeup-eint
++ - enum:
++ - samsung,exynos4210-wakeup-eint
++ - samsung,exynos7-wakeup-eint
++ - samsung,s5pv210-wakeup-eint
++ - contains:
++ enum:
++ - samsung,exynos5433-wakeup-eint
++ - samsung,exynos7885-wakeup-eint
+ then:
+ properties:
+ interrupts:
+diff --git a/Documentation/devicetree/bindings/serial/rs485.yaml b/Documentation/devicetree/bindings/serial/rs485.yaml
+index 9418fd66a8e95a..b93254ad2a287a 100644
+--- a/Documentation/devicetree/bindings/serial/rs485.yaml
++++ b/Documentation/devicetree/bindings/serial/rs485.yaml
+@@ -18,16 +18,15 @@ properties:
+ description: prop-encoded-array <a b>
+ $ref: /schemas/types.yaml#/definitions/uint32-array
+ items:
+- items:
+- - description: Delay between rts signal and beginning of data sent in
+- milliseconds. It corresponds to the delay before sending data.
+- default: 0
+- maximum: 100
+- - description: Delay between end of data sent and rts signal in milliseconds.
+- It corresponds to the delay after sending data and actual release
+- of the line.
+- default: 0
+- maximum: 100
++ - description: Delay between rts signal and beginning of data sent in
++ milliseconds. It corresponds to the delay before sending data.
++ default: 0
++ maximum: 100
++ - description: Delay between end of data sent and rts signal in milliseconds.
++ It corresponds to the delay after sending data and actual release
++ of the line.
++ default: 0
++ maximum: 100
+
+ rs485-rts-active-high:
+ description: drive RTS high when sending (this is the default).
+diff --git a/Documentation/devicetree/bindings/sound/mt6359.yaml b/Documentation/devicetree/bindings/sound/mt6359.yaml
+index 23d411fc4200e6..128698630c865f 100644
+--- a/Documentation/devicetree/bindings/sound/mt6359.yaml
++++ b/Documentation/devicetree/bindings/sound/mt6359.yaml
+@@ -23,8 +23,8 @@ properties:
+ Indicates how many data pins are used to transmit two channels of PDM
+ signal. 0 means two wires, 1 means one wire. Default value is 0.
+ enum:
+- - 0 # one wire
+- - 1 # two wires
++ - 0 # two wires
++ - 1 # one wire
+
+ mediatek,mic-type-0:
+ $ref: /schemas/types.yaml#/definitions/uint32
+@@ -53,9 +53,9 @@ additionalProperties: false
+
+ examples:
+ - |
+- mt6359codec: mt6359codec {
+- mediatek,dmic-mode = <0>;
+- mediatek,mic-type-0 = <2>;
++ mt6359codec: audio-codec {
++ mediatek,dmic-mode = <0>;
++ mediatek,mic-type-0 = <2>;
+ };
+
+ ...
+diff --git a/Documentation/devicetree/bindings/vendor-prefixes.yaml b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+index b320a39de7fe40..fbfce9b4ae6b8e 100644
+--- a/Documentation/devicetree/bindings/vendor-prefixes.yaml
++++ b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+@@ -1013,6 +1013,8 @@ patternProperties:
+ description: Shanghai Neardi Technology Co., Ltd.
+ "^nec,.*":
+ description: NEC LCD Technologies, Ltd.
++ "^neofidelity,.*":
++ description: Neofidelity Inc.
+ "^neonode,.*":
+ description: Neonode Inc.
+ "^netgear,.*":
+diff --git a/Documentation/filesystems/mount_api.rst b/Documentation/filesystems/mount_api.rst
+index 317934c9e8fcac..d92c276f1575af 100644
+--- a/Documentation/filesystems/mount_api.rst
++++ b/Documentation/filesystems/mount_api.rst
+@@ -770,7 +770,8 @@ process the parameters it is given.
+
+ * ::
+
+- bool fs_validate_description(const struct fs_parameter_description *desc);
++ bool fs_validate_description(const char *name,
++ const struct fs_parameter_description *desc);
+
+ This performs some validation checks on a parameter description. It
+ returns true if the description is good and false if it is not. It will
+diff --git a/Documentation/locking/seqlock.rst b/Documentation/locking/seqlock.rst
+index bfda1a5fecadc6..ec6411d02ac8f5 100644
+--- a/Documentation/locking/seqlock.rst
++++ b/Documentation/locking/seqlock.rst
+@@ -153,7 +153,7 @@ Use seqcount_latch_t when the write side sections cannot be protected
+ from interruption by readers. This is typically the case when the read
+ side can be invoked from NMI handlers.
+
+-Check `raw_write_seqcount_latch()` for more information.
++Check `write_seqcount_latch()` for more information.
+
+
+ .. _seqlock_t:
+diff --git a/MAINTAINERS b/MAINTAINERS
+index b878ddc99f94e7..6bb4ec0c162a53 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -701,7 +701,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-aimslab*
+
+ AIO
+@@ -809,7 +809,7 @@ ALLWINNER A10 CSI DRIVER
+ M: Maxime Ripard <mripard@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/allwinner,sun4i-a10-csi.yaml
+ F: drivers/media/platform/sunxi/sun4i-csi/
+
+@@ -818,7 +818,7 @@ M: Yong Deng <yong.deng@magewell.com>
+ M: Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/allwinner,sun6i-a31-csi.yaml
+ F: drivers/media/platform/sunxi/sun6i-csi/
+
+@@ -826,7 +826,7 @@ ALLWINNER A31 ISP DRIVER
+ M: Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/allwinner,sun6i-a31-isp.yaml
+ F: drivers/staging/media/sunxi/sun6i-isp/
+ F: drivers/staging/media/sunxi/sun6i-isp/uapi/sun6i-isp-config.h
+@@ -835,7 +835,7 @@ ALLWINNER A31 MIPI CSI-2 BRIDGE DRIVER
+ M: Paul Kocialkowski <paul.kocialkowski@bootlin.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/allwinner,sun6i-a31-mipi-csi2.yaml
+ F: drivers/media/platform/sunxi/sun6i-mipi-csi2/
+
+@@ -3348,7 +3348,7 @@ ASAHI KASEI AK7375 LENS VOICE COIL DRIVER
+ M: Tianshu Qiu <tian.shu.qiu@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/asahi-kasei,ak7375.yaml
+ F: drivers/media/i2c/ak7375.c
+
+@@ -3765,7 +3765,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/dvb-usb-v2/az6007.c
+
+ AZTECH FM RADIO RECEIVER DRIVER
+@@ -3773,7 +3773,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-aztech*
+
+ B43 WIRELESS DRIVER
+@@ -3857,7 +3857,7 @@ M: Fabien Dessenne <fabien.dessenne@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/platform/st/sti/bdisp
+
+ BECKHOFF CX5020 ETHERCAT MASTER DRIVER
+@@ -4865,7 +4865,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/driver-api/media/drivers/bttv*
+ F: drivers/media/pci/bt8xx/bttv*
+
+@@ -4979,13 +4979,13 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-cadet*
+
+ CAFE CMOS INTEGRATED CAMERA CONTROLLER DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/cafe_ccic*
+ F: drivers/media/platform/marvell/
+
+@@ -5169,7 +5169,7 @@ M: Hans Verkuil <hverkuil-cisco@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: http://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/ABI/testing/debugfs-cec-error-inj
+ F: Documentation/devicetree/bindings/media/cec/cec-common.yaml
+ F: Documentation/driver-api/media/cec-core.rst
+@@ -5186,7 +5186,7 @@ M: Hans Verkuil <hverkuil-cisco@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: http://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/cec/cec-gpio.yaml
+ F: drivers/media/cec/platform/cec-gpio/
+
+@@ -5393,7 +5393,7 @@ CHRONTEL CH7322 CEC DRIVER
+ M: Joe Tessler <jrt@google.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/chrontel,ch7322.yaml
+ F: drivers/media/cec/i2c/ch7322.c
+
+@@ -5582,7 +5582,7 @@ M: Hans Verkuil <hverkuil-cisco@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/cobalt/
+
+ COCCINELLE/Semantic Patches (SmPL)
+@@ -6026,7 +6026,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: http://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/cs3308.c
+
+ CS5535 Audio ALSA driver
+@@ -6057,7 +6057,7 @@ M: Andy Walls <awalls@md.metrocast.net>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/cx18/
+ F: include/uapi/linux/ivtv*
+
+@@ -6066,7 +6066,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/common/cx2341x*
+ F: include/media/drv-intf/cx2341x.h
+
+@@ -6084,7 +6084,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/driver-api/media/drivers/cx88*
+ F: drivers/media/pci/cx88/
+
+@@ -6320,7 +6320,7 @@ DEINTERLACE DRIVERS FOR ALLWINNER H3
+ M: Jernej Skrabec <jernej.skrabec@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/allwinner,sun8i-h3-deinterlace.yaml
+ F: drivers/media/platform/sunxi/sun8i-di/
+
+@@ -6447,7 +6447,7 @@ M: Hugues Fruchet <hugues.fruchet@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/platform/st/sti/delta
+
+ DENALI NAND DRIVER
+@@ -6855,7 +6855,7 @@ DONGWOON DW9714 LENS VOICE COIL DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9714.yaml
+ F: drivers/media/i2c/dw9714.c
+
+@@ -6863,13 +6863,13 @@ DONGWOON DW9719 LENS VOICE COIL DRIVER
+ M: Daniel Scally <djrscally@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/dw9719.c
+
+ DONGWOON DW9768 LENS VOICE COIL DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9768.yaml
+ F: drivers/media/i2c/dw9768.c
+
+@@ -6877,7 +6877,7 @@ DONGWOON DW9807 LENS VOICE COIL DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/dongwoon,dw9807-vcm.yaml
+ F: drivers/media/i2c/dw9807-vcm.c
+
+@@ -7860,7 +7860,7 @@ DSBR100 USB FM RADIO DRIVER
+ M: Alexey Klimov <klimov.linux@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/dsbr100.c
+
+ DT3155 MEDIA DRIVER
+@@ -7868,7 +7868,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/dt3155/
+
+ DVB_USB_AF9015 MEDIA DRIVER
+@@ -7913,7 +7913,7 @@ S: Maintained
+ W: https://linuxtv.org
+ W: http://github.com/mkrufky
+ Q: http://patchwork.linuxtv.org/project/linux-media/list/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/dvb-usb/cxusb*
+
+ DVB_USB_EC168 MEDIA DRIVER
+@@ -8282,7 +8282,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/em28xx*
+ F: drivers/media/usb/em28xx/
+
+@@ -8578,7 +8578,7 @@ EXTRON DA HD 4K PLUS CEC DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/cec/usb/extron-da-hd-4k-plus/
+
+ EXYNOS DP DRIVER
+@@ -9400,7 +9400,7 @@ GALAXYCORE GC2145 SENSOR DRIVER
+ M: Alain Volmat <alain.volmat@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/galaxycore,gc2145.yaml
+ F: drivers/media/i2c/gc2145.c
+
+@@ -9448,7 +9448,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-gemtek*
+
+ GENERIC ARCHITECTURE TOPOLOGY
+@@ -9830,56 +9830,56 @@ GS1662 VIDEO SERIALIZER
+ M: Charles-Antoine Couret <charles-antoine.couret@nexvision.fr>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/spi/gs1662.c
+
+ GSPCA FINEPIX SUBDRIVER
+ M: Frank Zago <frank@zago.net>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/finepix.c
+
+ GSPCA GL860 SUBDRIVER
+ M: Olivier Lorin <o.lorin@laposte.net>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/gl860/
+
+ GSPCA M5602 SUBDRIVER
+ M: Erik Andren <erik.andren@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/m5602/
+
+ GSPCA PAC207 SONIXB SUBDRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/pac207.c
+
+ GSPCA SN9C20X SUBDRIVER
+ M: Brian Johnson <brijohn@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/sn9c20x.c
+
+ GSPCA T613 SUBDRIVER
+ M: Leandro Costantino <lcostantino@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/t613.c
+
+ GSPCA USB WEBCAM DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/gspca/
+
+ GTP (GPRS Tunneling Protocol)
+@@ -9996,7 +9996,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/hdpvr/
+
+ HEWLETT PACKARD ENTERPRISE ILO CHIF DRIVER
+@@ -10503,7 +10503,7 @@ M: Jean-Christophe Trotin <jean-christophe.trotin@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/platform/st/sti/hva
+
+ HWPOISON MEMORY FAILURE HANDLING
+@@ -10531,7 +10531,7 @@ HYNIX HI556 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/hi556.c
+
+ HYNIX HI846 SENSOR DRIVER
+@@ -11502,7 +11502,7 @@ M: Dan Scally <djrscally@gmail.com>
+ R: Tianshu Qiu <tian.shu.qiu@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/userspace-api/media/v4l/pixfmt-srggb10-ipu3.rst
+ F: drivers/media/pci/intel/ipu3/
+
+@@ -11523,7 +11523,7 @@ M: Bingbu Cao <bingbu.cao@intel.com>
+ R: Tianshu Qiu <tian.shu.qiu@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/ipu6-isys.rst
+ F: drivers/media/pci/intel/ipu6/
+
+@@ -12036,7 +12036,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-isa*
+
+ ISAPNP
+@@ -12138,7 +12138,7 @@ M: Andy Walls <awalls@md.metrocast.net>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/ivtv*
+ F: drivers/media/pci/ivtv/
+ F: include/uapi/linux/ivtv*
+@@ -12286,7 +12286,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-keene*
+
+ KERNEL AUTOMOUNTER
+@@ -13573,7 +13573,7 @@ MA901 MASTERKIT USB FM RADIO DRIVER
+ M: Alexey Klimov <klimov.linux@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-ma901.c
+
+ MAC80211
+@@ -13868,7 +13868,7 @@ MAX2175 SDR TUNER DRIVER
+ M: Ramesh Shanmugasundaram <rashanmu@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/max2175.txt
+ F: Documentation/userspace-api/media/drivers/max2175.rst
+ F: drivers/media/i2c/max2175*
+@@ -14048,7 +14048,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-maxiradio*
+
+ MAXLINEAR ETHERNET PHY DRIVER
+@@ -14131,7 +14131,7 @@ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://www.linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/mc/
+ F: include/media/media-*.h
+ F: include/uapi/linux/media.h
+@@ -14140,7 +14140,7 @@ MEDIA DRIVER FOR FREESCALE IMX PXP
+ M: Philipp Zabel <p.zabel@pengutronix.de>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/platform/nxp/imx-pxp.[ch]
+
+ MEDIA DRIVERS FOR ASCOT2E
+@@ -14149,7 +14149,7 @@ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+ W: http://netup.tv/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/ascot2e*
+
+ MEDIA DRIVERS FOR CXD2099AR CI CONTROLLERS
+@@ -14157,7 +14157,7 @@ M: Jasmin Jessich <jasmin@anw.at>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/cxd2099*
+
+ MEDIA DRIVERS FOR CXD2841ER
+@@ -14166,7 +14166,7 @@ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+ W: http://netup.tv/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/cxd2841er*
+
+ MEDIA DRIVERS FOR CXD2880
+@@ -14174,7 +14174,7 @@ M: Yasunari Takiguchi <Yasunari.Takiguchi@sony.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+ W: http://linuxtv.org/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/cxd2880/*
+ F: drivers/media/spi/cxd2880*
+
+@@ -14182,7 +14182,7 @@ MEDIA DRIVERS FOR DIGITAL DEVICES PCIE DEVICES
+ L: linux-media@vger.kernel.org
+ S: Orphan
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/ddbridge/*
+
+ MEDIA DRIVERS FOR FREESCALE IMX
+@@ -14190,7 +14190,7 @@ M: Steve Longerbeam <slongerbeam@gmail.com>
+ M: Philipp Zabel <p.zabel@pengutronix.de>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/imx.rst
+ F: Documentation/devicetree/bindings/media/imx.txt
+ F: drivers/staging/media/imx/
+@@ -14204,7 +14204,7 @@ M: Martin Kepplinger <martin.kepplinger@puri.sm>
+ R: Purism Kernel Team <kernel@puri.sm>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/imx7.rst
+ F: Documentation/devicetree/bindings/media/nxp,imx-mipi-csi2.yaml
+ F: Documentation/devicetree/bindings/media/nxp,imx7-csi.yaml
+@@ -14219,7 +14219,7 @@ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+ W: http://netup.tv/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/helene*
+
+ MEDIA DRIVERS FOR HORUS3A
+@@ -14228,7 +14228,7 @@ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+ W: http://netup.tv/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/horus3a*
+
+ MEDIA DRIVERS FOR LNBH25
+@@ -14237,14 +14237,14 @@ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+ W: http://netup.tv/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/lnbh25*
+
+ MEDIA DRIVERS FOR MXL5XX TUNER DEMODULATORS
+ L: linux-media@vger.kernel.org
+ S: Orphan
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/mxl5xx*
+
+ MEDIA DRIVERS FOR NETUP PCI UNIVERSAL DVB devices
+@@ -14253,7 +14253,7 @@ L: linux-media@vger.kernel.org
+ S: Supported
+ W: https://linuxtv.org
+ W: http://netup.tv/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/netup_unidvb/*
+
+ MEDIA DRIVERS FOR NVIDIA TEGRA - VDE
+@@ -14261,7 +14261,7 @@ M: Dmitry Osipenko <digetx@gmail.com>
+ L: linux-media@vger.kernel.org
+ L: linux-tegra@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/nvidia,tegra-vde.yaml
+ F: drivers/media/platform/nvidia/tegra-vde/
+
+@@ -14270,7 +14270,7 @@ M: Jacopo Mondi <jacopo@jmondi.org>
+ L: linux-media@vger.kernel.org
+ L: linux-renesas-soc@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/renesas,ceu.yaml
+ F: drivers/media/platform/renesas/renesas-ceu.c
+ F: include/media/drv-intf/renesas-ceu.h
+@@ -14280,7 +14280,7 @@ M: Fabrizio Castro <fabrizio.castro.jz@renesas.com>
+ L: linux-media@vger.kernel.org
+ L: linux-renesas-soc@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/renesas,drif.yaml
+ F: drivers/media/platform/renesas/rcar_drif.c
+
+@@ -14289,7 +14289,7 @@ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ L: linux-renesas-soc@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/renesas,fcp.yaml
+ F: drivers/media/platform/renesas/rcar-fcp.c
+ F: include/media/rcar-fcp.h
+@@ -14299,7 +14299,7 @@ M: Kieran Bingham <kieran.bingham+renesas@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ L: linux-renesas-soc@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/renesas,fdp1.yaml
+ F: drivers/media/platform/renesas/rcar_fdp1.c
+
+@@ -14308,7 +14308,7 @@ M: Niklas Söderlund <niklas.soderlund@ragnatech.se>
+ L: linux-media@vger.kernel.org
+ L: linux-renesas-soc@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/renesas,csi2.yaml
+ F: Documentation/devicetree/bindings/media/renesas,isp.yaml
+ F: Documentation/devicetree/bindings/media/renesas,vin.yaml
+@@ -14322,7 +14322,7 @@ M: Kieran Bingham <kieran.bingham+renesas@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ L: linux-renesas-soc@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/renesas,vsp1.yaml
+ F: drivers/media/platform/renesas/vsp1/
+
+@@ -14330,14 +14330,14 @@ MEDIA DRIVERS FOR ST STV0910 DEMODULATOR ICs
+ L: linux-media@vger.kernel.org
+ S: Orphan
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/stv0910*
+
+ MEDIA DRIVERS FOR ST STV6111 TUNER ICs
+ L: linux-media@vger.kernel.org
+ S: Orphan
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/dvb-frontends/stv6111*
+
+ MEDIA DRIVERS FOR STM32 - DCMI / DCMIPP
+@@ -14345,7 +14345,7 @@ M: Hugues Fruchet <hugues.fruchet@foss.st.com>
+ M: Alain Volmat <alain.volmat@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/st,stm32-dcmi.yaml
+ F: Documentation/devicetree/bindings/media/st,stm32-dcmipp.yaml
+ F: drivers/media/platform/st/stm32/stm32-dcmi.c
+@@ -14357,7 +14357,7 @@ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+ Q: http://patchwork.kernel.org/project/linux-media/list/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/admin-guide/media/
+ F: Documentation/devicetree/bindings/media/
+ F: Documentation/driver-api/media/
+@@ -14933,7 +14933,7 @@ L: linux-media@vger.kernel.org
+ L: linux-amlogic@lists.infradead.org
+ S: Supported
+ W: http://linux-meson.com/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/cec/amlogic,meson-gx-ao-cec.yaml
+ F: drivers/media/cec/platform/meson/ao-cec-g12a.c
+ F: drivers/media/cec/platform/meson/ao-cec.c
+@@ -14943,7 +14943,7 @@ M: Neil Armstrong <neil.armstrong@linaro.org>
+ L: linux-media@vger.kernel.org
+ L: linux-amlogic@lists.infradead.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/amlogic,axg-ge2d.yaml
+ F: drivers/media/platform/amlogic/meson-ge2d/
+
+@@ -14959,7 +14959,7 @@ M: Neil Armstrong <neil.armstrong@linaro.org>
+ L: linux-media@vger.kernel.org
+ L: linux-amlogic@lists.infradead.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/amlogic,gx-vdec.yaml
+ F: drivers/staging/media/meson/vdec/
+
+@@ -15557,7 +15557,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-miropcm20*
+
+ MITSUMI MM8013 FG DRIVER
+@@ -15709,7 +15709,7 @@ MR800 AVERMEDIA USB FM RADIO DRIVER
+ M: Alexey Klimov <klimov.linux@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-mr800.c
+
+ MRF24J40 IEEE 802.15.4 RADIO DRIVER
+@@ -15776,7 +15776,7 @@ MT9M114 ONSEMI SENSOR DRIVER
+ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/onnn,mt9m114.yaml
+ F: drivers/media/i2c/mt9m114.c
+
+@@ -15784,7 +15784,7 @@ MT9P031 APTINA CAMERA SENSOR
+ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/aptina,mt9p031.yaml
+ F: drivers/media/i2c/mt9p031.c
+ F: include/media/i2c/mt9p031.h
+@@ -15793,7 +15793,7 @@ MT9T112 APTINA CAMERA SENSOR
+ M: Jacopo Mondi <jacopo@jmondi.org>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/mt9t112.c
+ F: include/media/i2c/mt9t112.h
+
+@@ -15801,7 +15801,7 @@ MT9V032 APTINA CAMERA SENSOR
+ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/mt9v032.txt
+ F: drivers/media/i2c/mt9v032.c
+ F: include/media/i2c/mt9v032.h
+@@ -15810,7 +15810,7 @@ MT9V111 APTINA CAMERA SENSOR
+ M: Jacopo Mondi <jacopo@jmondi.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/aptina,mt9v111.yaml
+ F: drivers/media/i2c/mt9v111.c
+
+@@ -17005,13 +17005,13 @@ OMNIVISION OV01A10 SENSOR DRIVER
+ M: Bingbu Cao <bingbu.cao@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov01a10.c
+
+ OMNIVISION OV02A10 SENSOR DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov02a10.yaml
+ F: drivers/media/i2c/ov02a10.c
+
+@@ -17019,28 +17019,28 @@ OMNIVISION OV08D10 SENSOR DRIVER
+ M: Jimmy Su <jimmy.su@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov08d10.c
+
+ OMNIVISION OV08X40 SENSOR DRIVER
+ M: Jason Chen <jason.z.chen@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov08x40.c
+
+ OMNIVISION OV13858 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov13858.c
+
+ OMNIVISION OV13B10 SENSOR DRIVER
+ M: Arec Kao <arec.kao@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov13b10.c
+
+ OMNIVISION OV2680 SENSOR DRIVER
+@@ -17048,7 +17048,7 @@ M: Rui Miguel Silva <rmfrfs@gmail.com>
+ M: Hans de Goede <hansg@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov2680.yaml
+ F: drivers/media/i2c/ov2680.c
+
+@@ -17056,7 +17056,7 @@ OMNIVISION OV2685 SENSOR DRIVER
+ M: Shunqian Zheng <zhengsq@rock-chips.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov2685.yaml
+ F: drivers/media/i2c/ov2685.c
+
+@@ -17066,14 +17066,14 @@ R: Sakari Ailus <sakari.ailus@linux.intel.com>
+ R: Bingbu Cao <bingbu.cao@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov2740.c
+
+ OMNIVISION OV4689 SENSOR DRIVER
+ M: Mikhail Rudenko <mike.rudenko@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov4689.yaml
+ F: drivers/media/i2c/ov4689.c
+
+@@ -17081,7 +17081,7 @@ OMNIVISION OV5640 SENSOR DRIVER
+ M: Steve Longerbeam <slongerbeam@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov5640.c
+
+ OMNIVISION OV5647 SENSOR DRIVER
+@@ -17089,7 +17089,7 @@ M: Dave Stevenson <dave.stevenson@raspberrypi.com>
+ M: Jacopo Mondi <jacopo@jmondi.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov5647.yaml
+ F: drivers/media/i2c/ov5647.c
+
+@@ -17097,7 +17097,7 @@ OMNIVISION OV5670 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov5670.yaml
+ F: drivers/media/i2c/ov5670.c
+
+@@ -17105,7 +17105,7 @@ OMNIVISION OV5675 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov5675.yaml
+ F: drivers/media/i2c/ov5675.c
+
+@@ -17113,7 +17113,7 @@ OMNIVISION OV5693 SENSOR DRIVER
+ M: Daniel Scally <djrscally@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov5693.yaml
+ F: drivers/media/i2c/ov5693.c
+
+@@ -17121,21 +17121,21 @@ OMNIVISION OV5695 SENSOR DRIVER
+ M: Shunqian Zheng <zhengsq@rock-chips.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov5695.c
+
+ OMNIVISION OV64A40 SENSOR DRIVER
+ M: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov64a40.yaml
+ F: drivers/media/i2c/ov64a40.c
+
+ OMNIVISION OV7670 SENSOR DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ov7670.txt
+ F: drivers/media/i2c/ov7670.c
+
+@@ -17143,7 +17143,7 @@ OMNIVISION OV772x SENSOR DRIVER
+ M: Jacopo Mondi <jacopo@jmondi.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov772x.yaml
+ F: drivers/media/i2c/ov772x.c
+ F: include/media/i2c/ov772x.h
+@@ -17151,7 +17151,7 @@ F: include/media/i2c/ov772x.h
+ OMNIVISION OV7740 SENSOR DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ov7740.txt
+ F: drivers/media/i2c/ov7740.c
+
+@@ -17159,7 +17159,7 @@ OMNIVISION OV8856 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov8856.yaml
+ F: drivers/media/i2c/ov8856.c
+
+@@ -17168,7 +17168,7 @@ M: Jacopo Mondi <jacopo.mondi@ideasonboard.com>
+ M: Nicholas Roth <nicholas@rothemail.net>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov8858.yaml
+ F: drivers/media/i2c/ov8858.c
+
+@@ -17176,7 +17176,7 @@ OMNIVISION OV9282 SENSOR DRIVER
+ M: Dave Stevenson <dave.stevenson@raspberrypi.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ovti,ov9282.yaml
+ F: drivers/media/i2c/ov9282.c
+
+@@ -17192,7 +17192,7 @@ R: Akinobu Mita <akinobu.mita@gmail.com>
+ R: Sylwester Nawrocki <s.nawrocki@samsung.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/ov9650.txt
+ F: drivers/media/i2c/ov9650.c
+
+@@ -17201,7 +17201,7 @@ M: Tianshu Qiu <tian.shu.qiu@intel.com>
+ R: Bingbu Cao <bingbu.cao@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/ov9734.c
+
+ ONBOARD USB HUB DRIVER
+@@ -18646,7 +18646,7 @@ PULSE8-CEC DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/cec/usb/pulse8/
+
+ PURELIFI PLFXLC DRIVER
+@@ -18661,7 +18661,7 @@ L: pvrusb2@isely.net (subscribers-only)
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: http://www.isely.net/pvrusb2/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/driver-api/media/drivers/pvrusb2*
+ F: drivers/media/usb/pvrusb2/
+
+@@ -18669,7 +18669,7 @@ PWC WEBCAM DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/pwc/*
+ F: include/trace/events/pwc.h
+
+@@ -19173,7 +19173,7 @@ R: Bryan O'Donoghue <bryan.odonoghue@linaro.org>
+ L: linux-media@vger.kernel.org
+ L: linux-arm-msm@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/*venus*
+ F: drivers/media/platform/qcom/venus/
+
+@@ -19218,14 +19218,14 @@ RADIOSHARK RADIO DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-shark.c
+
+ RADIOSHARK2 RADIO DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-shark2.c
+ F: drivers/media/radio/radio-tea5777.c
+
+@@ -19249,7 +19249,7 @@ RAINSHADOW-CEC DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/cec/usb/rainshadow/
+
+ RALINK MIPS ARCHITECTURE
+@@ -19333,7 +19333,7 @@ M: Sean Young <sean@mess.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: http://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/driver-api/media/rc-core.rst
+ F: Documentation/userspace-api/media/rc/
+ F: drivers/media/rc/
+@@ -20077,7 +20077,7 @@ ROTATION DRIVER FOR ALLWINNER A83T
+ M: Jernej Skrabec <jernej.skrabec@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/allwinner,sun8i-a83t-de2-rotate.yaml
+ F: drivers/media/platform/sunxi/sun8i-rotate/
+
+@@ -20331,7 +20331,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/saa6588*
+
+ SAA7134 VIDEO4LINUX DRIVER
+@@ -20339,7 +20339,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/driver-api/media/drivers/saa7134*
+ F: drivers/media/pci/saa7134/
+
+@@ -20347,7 +20347,7 @@ SAA7146 VIDEO4LINUX-2 DRIVER
+ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/common/saa7146/
+ F: drivers/media/pci/saa7146/
+ F: include/media/drv-intf/saa7146*
+@@ -20965,7 +20965,7 @@ SHARP RJ54N1CB0C SENSOR DRIVER
+ M: Jacopo Mondi <jacopo@jmondi.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/rj54n1cb0c.c
+ F: include/media/i2c/rj54n1cb0c.h
+
+@@ -21015,7 +21015,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/silabs,si470x.yaml
+ F: drivers/media/radio/si470x/radio-si470x-i2c.c
+
+@@ -21024,7 +21024,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/si470x/radio-si470x-common.c
+ F: drivers/media/radio/si470x/radio-si470x-usb.c
+ F: drivers/media/radio/si470x/radio-si470x.h
+@@ -21034,7 +21034,7 @@ M: Eduardo Valentin <edubezval@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/si4713/si4713.?
+
+ SI4713 FM RADIO TRANSMITTER PLATFORM DRIVER
+@@ -21042,7 +21042,7 @@ M: Eduardo Valentin <edubezval@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/si4713/radio-platform-si4713.c
+
+ SI4713 FM RADIO TRANSMITTER USB DRIVER
+@@ -21050,7 +21050,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/si4713/radio-usb-si4713.c
+
+ SIANO DVB DRIVER
+@@ -21058,7 +21058,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/common/siano/
+ F: drivers/media/mmc/siano/
+ F: drivers/media/usb/siano/
+@@ -21434,14 +21434,14 @@ SONY IMX208 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/imx208.c
+
+ SONY IMX214 SENSOR DRIVER
+ M: Ricardo Ribalda <ribalda@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx214.yaml
+ F: drivers/media/i2c/imx214.c
+
+@@ -21449,7 +21449,7 @@ SONY IMX219 SENSOR DRIVER
+ M: Dave Stevenson <dave.stevenson@raspberrypi.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/imx219.yaml
+ F: drivers/media/i2c/imx219.c
+
+@@ -21457,7 +21457,7 @@ SONY IMX258 SENSOR DRIVER
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx258.yaml
+ F: drivers/media/i2c/imx258.c
+
+@@ -21465,7 +21465,7 @@ SONY IMX274 SENSOR DRIVER
+ M: Leon Luo <leonl@leopardimaging.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx274.yaml
+ F: drivers/media/i2c/imx274.c
+
+@@ -21474,7 +21474,7 @@ M: Kieran Bingham <kieran.bingham@ideasonboard.com>
+ M: Umang Jain <umang.jain@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx283.yaml
+ F: drivers/media/i2c/imx283.c
+
+@@ -21482,7 +21482,7 @@ SONY IMX290 SENSOR DRIVER
+ M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx290.yaml
+ F: drivers/media/i2c/imx290.c
+
+@@ -21491,7 +21491,7 @@ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ M: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx296.yaml
+ F: drivers/media/i2c/imx296.c
+
+@@ -21499,20 +21499,20 @@ SONY IMX319 SENSOR DRIVER
+ M: Bingbu Cao <bingbu.cao@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/imx319.c
+
+ SONY IMX334 SENSOR DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx334.yaml
+ F: drivers/media/i2c/imx334.c
+
+ SONY IMX335 SENSOR DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx335.yaml
+ F: drivers/media/i2c/imx335.c
+
+@@ -21520,13 +21520,13 @@ SONY IMX355 SENSOR DRIVER
+ M: Tianshu Qiu <tian.shu.qiu@intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/imx355.c
+
+ SONY IMX412 SENSOR DRIVER
+ L: linux-media@vger.kernel.org
+ S: Orphan
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx412.yaml
+ F: drivers/media/i2c/imx412.c
+
+@@ -21534,7 +21534,7 @@ SONY IMX415 SENSOR DRIVER
+ M: Michael Riesch <michael.riesch@wolfvision.net>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/sony,imx415.yaml
+ F: drivers/media/i2c/imx415.c
+
+@@ -21823,7 +21823,7 @@ M: Benjamin Mugnier <benjamin.mugnier@foss.st.com>
+ M: Sylvain Petinot <sylvain.petinot@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/st,st-mipid02.yaml
+ F: drivers/media/i2c/st-mipid02.c
+
+@@ -21859,7 +21859,7 @@ M: Benjamin Mugnier <benjamin.mugnier@foss.st.com>
+ M: Sylvain Petinot <sylvain.petinot@foss.st.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/st,st-vgxy61.yaml
+ F: Documentation/userspace-api/media/drivers/vgxy61.rst
+ F: drivers/media/i2c/vgxy61.c
+@@ -22149,7 +22149,7 @@ STK1160 USB VIDEO CAPTURE DRIVER
+ M: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/stk1160/
+
+ STM32 AUDIO (ASoC) DRIVERS
+@@ -22586,7 +22586,7 @@ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+ Q: http://patchwork.linuxtv.org/project/linux-media/list/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/tuners/tda18250*
+
+ TDA18271 MEDIA DRIVER
+@@ -22632,7 +22632,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/tda9840*
+
+ TEA5761 TUNER DRIVER
+@@ -22640,7 +22640,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Odd fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/tuners/tea5761.*
+
+ TEA5767 TUNER DRIVER
+@@ -22648,7 +22648,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/tuners/tea5767.*
+
+ TEA6415C MEDIA DRIVER
+@@ -22656,7 +22656,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/tea6415c*
+
+ TEA6420 MEDIA DRIVER
+@@ -22664,7 +22664,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/i2c/tea6420*
+
+ TEAM DRIVER
+@@ -22952,7 +22952,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/radio/radio-raremono.c
+
+ THERMAL
+@@ -23028,7 +23028,7 @@ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ M: Paul Elder <paul.elder@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/i2c/thine,thp7312.yaml
+ F: Documentation/userspace-api/media/drivers/thp7312.rst
+ F: drivers/media/i2c/thp7312.c
+@@ -23615,7 +23615,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Odd Fixes
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/tw68/
+
+ TW686X VIDEO4LINUX DRIVER
+@@ -23623,7 +23623,7 @@ M: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: http://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/pci/tw686x/
+
+ U-BOOT ENVIRONMENT VARIABLES
+@@ -24106,7 +24106,7 @@ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: http://www.ideasonboard.org/uvc/
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/usb/uvc/
+ F: include/uapi/linux/uvcvideo.h
+
+@@ -24212,7 +24212,7 @@ V4L2 ASYNC AND FWNODE FRAMEWORKS
+ M: Sakari Ailus <sakari.ailus@linux.intel.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/v4l2-core/v4l2-async.c
+ F: drivers/media/v4l2-core/v4l2-fwnode.c
+ F: include/media/v4l2-async.h
+@@ -24378,7 +24378,7 @@ M: Hans Verkuil <hverkuil-cisco@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/test-drivers/vicodec/*
+
+ VIDEO I2C POLLING DRIVER
+@@ -24406,7 +24406,7 @@ M: Daniel W. S. Almeida <dwlsalmeida@gmail.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/test-drivers/vidtv/*
+
+ VIMC VIRTUAL MEDIA CONTROLLER DRIVER
+@@ -24415,7 +24415,7 @@ R: Kieran Bingham <kieran.bingham@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/test-drivers/vimc/*
+
+ VIRT LIB
+@@ -24663,7 +24663,7 @@ M: Hans Verkuil <hverkuil@xs4all.nl>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/test-drivers/vivid/*
+
+ VM SOCKETS (AF_VSOCK)
+@@ -25217,7 +25217,7 @@ M: Mauro Carvalho Chehab <mchehab@kernel.org>
+ L: linux-media@vger.kernel.org
+ S: Maintained
+ W: https://linuxtv.org
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: drivers/media/tuners/xc2028.*
+
+ XDP (eXpress Data Path)
+@@ -25358,8 +25358,7 @@ F: include/xen/arm/swiotlb-xen.h
+ F: include/xen/swiotlb-xen.h
+
+ XFS FILESYSTEM
+-M: Carlos Maiolino <cem@kernel.org>
+-R: Darrick J. Wong <djwong@kernel.org>
++M: Darrick J. Wong <djwong@kernel.org>
+ L: linux-xfs@vger.kernel.org
+ S: Supported
+ W: http://xfs.org/
+@@ -25441,7 +25440,7 @@ XILINX VIDEO IP CORES
+ M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+ L: linux-media@vger.kernel.org
+ S: Supported
+-T: git git://linuxtv.org/media_tree.git
++T: git git://linuxtv.org/media.git
+ F: Documentation/devicetree/bindings/media/xilinx/
+ F: drivers/media/platform/xilinx/
+ F: include/uapi/linux/xilinx-v4l2-controls.h
+diff --git a/Makefile b/Makefile
+index 70070e64d267c1..da6e99309a4da4 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arc/kernel/devtree.c b/arch/arc/kernel/devtree.c
+index 4c9e61457b2f69..cc6ac7d128aa1a 100644
+--- a/arch/arc/kernel/devtree.c
++++ b/arch/arc/kernel/devtree.c
+@@ -62,7 +62,7 @@ const struct machine_desc * __init setup_machine_fdt(void *dt)
+ const struct machine_desc *mdesc;
+ unsigned long dt_root;
+
+- if (!early_init_dt_scan(dt))
++ if (!early_init_dt_scan(dt, __pa(dt)))
+ return NULL;
+
+ mdesc = of_flat_dt_match_machine(NULL, arch_get_next_mach);
+diff --git a/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts b/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts
+index c8ca8cb7f5c94e..52ad95a2063aaf 100644
+--- a/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts
++++ b/arch/arm/boot/dts/allwinner/sun9i-a80-cubieboard4.dts
+@@ -280,8 +280,8 @@ reg_dcdc4: dcdc4 {
+
+ reg_dcdc5: dcdc5 {
+ regulator-always-on;
+- regulator-min-microvolt = <1425000>;
+- regulator-max-microvolt = <1575000>;
++ regulator-min-microvolt = <1450000>;
++ regulator-max-microvolt = <1550000>;
+ regulator-name = "vcc-dram";
+ };
+
+diff --git a/arch/arm/boot/dts/microchip/sam9x60.dtsi b/arch/arm/boot/dts/microchip/sam9x60.dtsi
+index 04a6d716ecaf8a..1e8fcb5d4700d8 100644
+--- a/arch/arm/boot/dts/microchip/sam9x60.dtsi
++++ b/arch/arm/boot/dts/microchip/sam9x60.dtsi
+@@ -186,6 +186,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 13>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -388,6 +389,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 32>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -439,6 +441,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 33>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -598,6 +601,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 9>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -649,6 +653,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 10>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -700,6 +705,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 11>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -751,6 +757,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 5>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -821,6 +828,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 6>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -891,6 +899,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 7>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -961,6 +970,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 8>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -1086,6 +1096,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 15>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+@@ -1137,6 +1148,7 @@ AT91_XDMAC_DT_PER_IF(1) |
+ dma-names = "tx", "rx";
+ clocks = <&pmc PMC_TYPE_PERIPHERAL 16>;
+ clock-names = "usart";
++ atmel,usart-mode = <AT91_USART_MODE_SERIAL>;
+ atmel,use-dma-rx;
+ atmel,use-dma-tx;
+ atmel,fifo-size = <16>;
+diff --git a/arch/arm/boot/dts/renesas/r7s72100-genmai.dts b/arch/arm/boot/dts/renesas/r7s72100-genmai.dts
+index 29ba098f5dd5e8..28e703e0f152b2 100644
+--- a/arch/arm/boot/dts/renesas/r7s72100-genmai.dts
++++ b/arch/arm/boot/dts/renesas/r7s72100-genmai.dts
+@@ -53,7 +53,7 @@ partition@0 {
+
+ partition@4000000 {
+ label = "user1";
+- reg = <0x04000000 0x40000000>;
++ reg = <0x04000000 0x04000000>;
+ };
+ };
+ };
+diff --git a/arch/arm/boot/dts/ti/omap/omap36xx.dtsi b/arch/arm/boot/dts/ti/omap/omap36xx.dtsi
+index c3d79ecd56e398..c217094b50abc9 100644
+--- a/arch/arm/boot/dts/ti/omap/omap36xx.dtsi
++++ b/arch/arm/boot/dts/ti/omap/omap36xx.dtsi
+@@ -72,6 +72,7 @@ opp-1000000000 {
+ <1375000 1375000 1375000>;
+ /* only on am/dm37x with speed-binned bit set */
+ opp-supported-hw = <0xffffffff 2>;
++ turbo-mode;
+ };
+ };
+
+diff --git a/arch/arm/kernel/devtree.c b/arch/arm/kernel/devtree.c
+index fdb74e64206a8a..3b78966e750a2d 100644
+--- a/arch/arm/kernel/devtree.c
++++ b/arch/arm/kernel/devtree.c
+@@ -200,7 +200,7 @@ const struct machine_desc * __init setup_machine_fdt(void *dt_virt)
+
+ mdesc_best = &__mach_desc_GENERIC_DT;
+
+- if (!dt_virt || !early_init_dt_verify(dt_virt))
++ if (!dt_virt || !early_init_dt_verify(dt_virt, __pa(dt_virt)))
+ return NULL;
+
+ mdesc = of_flat_dt_match_machine(mdesc_best, arch_get_next_mach);
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso b/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso
+index 96db07fc9becea..1f2a0fe70a0a26 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso
++++ b/arch/arm64/boot/dts/freescale/imx8mn-tqma8mqnl-mba8mx-usbotg.dtso
+@@ -29,12 +29,37 @@ usb_dr_connector: endpoint {
+ };
+ };
+
++/*
++ * rst_usb_hub_hog and sel_usb_hub_hog have the property 'output-high',
++ * and DT overlays do not support /delete-property/. Both 'output-low'
++ * and 'output-high' would be present under the hog nodes if the overlay
++ * set 'output-low'. The workaround is to disable these hogs and create
++ * new hogs with 'output-low'.
++ */
++
+ &rst_usb_hub_hog {
+- output-low;
++ status = "disabled";
++};
++
++&expander0 {
++ rst-usb-low-hub-hog {
++ gpio-hog;
++ gpios = <13 0>;
++ output-low;
++ line-name = "RST_USB_HUB#";
++ };
+ };
+
+ &sel_usb_hub_hog {
+- output-low;
++ status = "disabled";
++};
++
++&gpio2 {
++ sel-usb-low-hub-hog {
++ gpio-hog;
++ gpios = <1 GPIO_ACTIVE_HIGH>;
++ output-low;
++ };
+ };
+
+ &usbotg1 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt6358.dtsi b/arch/arm64/boot/dts/mediatek/mt6358.dtsi
+index 641d452fbc0830..e23672a2eea4af 100644
+--- a/arch/arm64/boot/dts/mediatek/mt6358.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt6358.dtsi
+@@ -15,12 +15,12 @@ pmic_adc: adc {
+ #io-channel-cells = <1>;
+ };
+
+- mt6358codec: mt6358codec {
++ mt6358codec: audio-codec {
+ compatible = "mediatek,mt6358-sound";
+ mediatek,dmic-mode = <0>; /* two-wires */
+ };
+
+- mt6358regulator: mt6358regulator {
++ mt6358regulator: regulators {
+ compatible = "mediatek,mt6358-regulator";
+
+ mt6358_vdram1_reg: buck_vdram1 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi b/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi
+index 8d1cbc92bce320..ae0379fd42a91c 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173-elm-hana.dtsi
+@@ -49,6 +49,14 @@ trackpad2: trackpad@2c {
+ interrupts-extended = <&pio 117 IRQ_TYPE_LEVEL_LOW>;
+ reg = <0x2c>;
+ hid-descr-addr = <0x0020>;
++ /*
++ * The trackpad needs a post-power-on delay of 100ms,
++ * but at time of writing, the power supply for it on
++ * this board is always on. The delay is therefore not
++ * added to avoid impacting the readiness of the
++ * trackpad.
++ */
++ vdd-supply = <&mt6397_vgp6_reg>;
+ wakeup-source;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts
+index 19c1e2bee494c9..20b71f2e7159ad 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-burnet.dts
+@@ -30,3 +30,6 @@ touchscreen@2c {
+ };
+ };
+
++&i2c2 {
++ i2c-scl-internal-delay-ns = <4100>;
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts
+index f34964afe39b53..83bbcfe620835a 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-cozmo.dts
+@@ -18,6 +18,8 @@ &i2c_tunnel {
+ };
+
+ &i2c2 {
++ i2c-scl-internal-delay-ns = <25000>;
++
+ trackpad@2c {
+ compatible = "hid-over-i2c";
+ reg = <0x2c>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
+index 0b45aee2e29953..65860b33c01fe8 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
+@@ -30,3 +30,6 @@ &qca_wifi {
+ qcom,ath10k-calibration-variant = "GO_DAMU";
+ };
+
++&i2c2 {
++ i2c-scl-internal-delay-ns = <20000>;
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi
+index bbe6c338f465ee..f9c1ec366b2660 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-fennel.dtsi
+@@ -25,3 +25,6 @@ trackpad@2c {
+ };
+ };
+
++&i2c2 {
++ i2c-scl-internal-delay-ns = <21500>;
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
+index 783c333107bcbf..49e053b932e76c 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
+@@ -8,28 +8,32 @@
+ #include <arm/cros-ec-keyboard.dtsi>
+
+ / {
+- pp1200_mipibrdg: pp1200-mipibrdg {
++ pp1000_mipibrdg: pp1000-mipibrdg {
+ compatible = "regulator-fixed";
+- regulator-name = "pp1200_mipibrdg";
++ regulator-name = "pp1000_mipibrdg";
++ regulator-min-microvolt = <1000000>;
++ regulator-max-microvolt = <1000000>;
+ pinctrl-names = "default";
+- pinctrl-0 = <&pp1200_mipibrdg_en>;
++ pinctrl-0 = <&pp1000_mipibrdg_en>;
+
+ enable-active-high;
+ regulator-boot-on;
+
+ gpio = <&pio 54 GPIO_ACTIVE_HIGH>;
++ vin-supply = <&pp1800_alw>;
+ };
+
+ pp1800_mipibrdg: pp1800-mipibrdg {
+ compatible = "regulator-fixed";
+ regulator-name = "pp1800_mipibrdg";
+ pinctrl-names = "default";
+- pinctrl-0 = <&pp1800_lcd_en>;
++ pinctrl-0 = <&pp1800_mipibrdg_en>;
+
+ enable-active-high;
+ regulator-boot-on;
+
+ gpio = <&pio 36 GPIO_ACTIVE_HIGH>;
++ vin-supply = <&pp1800_alw>;
+ };
+
+ pp3300_panel: pp3300-panel {
+@@ -44,18 +48,20 @@ pp3300_panel: pp3300-panel {
+ regulator-boot-on;
+
+ gpio = <&pio 35 GPIO_ACTIVE_HIGH>;
++ vin-supply = <&pp3300_alw>;
+ };
+
+- vddio_mipibrdg: vddio-mipibrdg {
++ pp3300_mipibrdg: pp3300-mipibrdg {
+ compatible = "regulator-fixed";
+- regulator-name = "vddio_mipibrdg";
++ regulator-name = "pp3300_mipibrdg";
+ pinctrl-names = "default";
+- pinctrl-0 = <&vddio_mipibrdg_en>;
++ pinctrl-0 = <&pp3300_mipibrdg_en>;
+
+ enable-active-high;
+ regulator-boot-on;
+
+ gpio = <&pio 37 GPIO_ACTIVE_HIGH>;
++ vin-supply = <&pp3300_alw>;
+ };
+
+ volume_buttons: volume-buttons {
+@@ -146,9 +152,9 @@ anx_bridge: anx7625@58 {
+ pinctrl-0 = <&anx7625_pins>;
+ enable-gpios = <&pio 45 GPIO_ACTIVE_HIGH>;
+ reset-gpios = <&pio 73 GPIO_ACTIVE_HIGH>;
+- vdd10-supply = <&pp1200_mipibrdg>;
++ vdd10-supply = <&pp1000_mipibrdg>;
+ vdd18-supply = <&pp1800_mipibrdg>;
+- vdd33-supply = <&vddio_mipibrdg>;
++ vdd33-supply = <&pp3300_mipibrdg>;
+
+ ports {
+ #address-cells = <1>;
+@@ -391,14 +397,14 @@ &pio {
+ "",
+ "";
+
+- pp1200_mipibrdg_en: pp1200-mipibrdg-en {
++ pp1000_mipibrdg_en: pp1000-mipibrdg-en {
+ pins1 {
+ pinmux = <PINMUX_GPIO54__FUNC_GPIO54>;
+ output-low;
+ };
+ };
+
+- pp1800_lcd_en: pp1800-lcd-en {
++ pp1800_mipibrdg_en: pp1800-mipibrdg-en {
+ pins1 {
+ pinmux = <PINMUX_GPIO36__FUNC_GPIO36>;
+ output-low;
+@@ -460,7 +466,7 @@ trackpad-int {
+ };
+ };
+
+- vddio_mipibrdg_en: vddio-mipibrdg-en {
++ pp3300_mipibrdg_en: pp3300-mipibrdg-en {
+ pins1 {
+ pinmux = <PINMUX_GPIO37__FUNC_GPIO37>;
+ output-low;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
+index bfb9e42c8acaa7..ff02f63bac29b2 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kakadu.dtsi
+@@ -92,9 +92,9 @@ &i2c4 {
+ clock-frequency = <400000>;
+ vbus-supply = <&mt6358_vcn18_reg>;
+
+- eeprom@54 {
++ eeprom@50 {
+ compatible = "atmel,24c32";
+- reg = <0x54>;
++ reg = <0x50>;
+ pagesize = <32>;
+ vcc-supply = <&mt6358_vcn18_reg>;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
+index 5c1bf6a1e47586..da6e767b4ceede 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-kodama.dtsi
+@@ -79,9 +79,9 @@ &i2c4 {
+ clock-frequency = <400000>;
+ vbus-supply = <&mt6358_vcn18_reg>;
+
+- eeprom@54 {
++ eeprom@50 {
+ compatible = "atmel,24c64";
+- reg = <0x54>;
++ reg = <0x50>;
+ pagesize = <32>;
+ vcc-supply = <&mt6358_vcn18_reg>;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
+index 0f5fa893a77426..8b56b8564ed7a2 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-krane.dtsi
+@@ -88,9 +88,9 @@ &i2c4 {
+ clock-frequency = <400000>;
+ vbus-supply = <&mt6358_vcn18_reg>;
+
+- eeprom@54 {
++ eeprom@50 {
+ compatible = "atmel,24c32";
+- reg = <0x54>;
++ reg = <0x50>;
+ pagesize = <32>;
+ vcc-supply = <&mt6358_vcn18_reg>;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+index 22924f61ec9ed2..07ae3c8e897b7d 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+@@ -290,6 +290,11 @@ dsi_out: endpoint {
+ };
+ };
+
++&dpi0 {
++ /* TODO Re-enable after DP to Type-C port muxing can be described */
++ status = "disabled";
++};
++
+ &gic {
+ mediatek,broken-save-restore-fw;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 266441e999f211..0a6578aacf8280 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -1845,6 +1845,10 @@ dpi0: dpi@14015000 {
+ <&mmsys CLK_MM_DPI_MM>,
+ <&apmixedsys CLK_APMIXED_TVDPLL>;
+ clock-names = "pixel", "engine", "pll";
++
++ port {
++ dpi_out: endpoint { };
++ };
+ };
+
+ mutex: mutex@14016000 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi b/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi
+index 52ec58128d5615..b495a241b4432b 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186-corsola-voltorb.dtsi
+@@ -10,12 +10,6 @@
+
+ / {
+ chassis-type = "laptop";
+-
+- max98360a: max98360a {
+- compatible = "maxim,max98360a";
+- sdmode-gpios = <&pio 150 GPIO_ACTIVE_HIGH>;
+- #sound-dai-cells = <0>;
+- };
+ };
+
+ &cpu6 {
+@@ -59,19 +53,14 @@ &cluster1_opp_15 {
+ opp-hz = /bits/ 64 <2200000000>;
+ };
+
+-&rt1019p{
+- status = "disabled";
+-};
+-
+ &sound {
+ compatible = "mediatek,mt8186-mt6366-rt5682s-max98360-sound";
+- status = "okay";
++};
+
+- spk-hdmi-playback-dai-link {
+- codec {
+- sound-dai = <&it6505dptx>, <&max98360a>;
+- };
+- };
++&speaker_codec {
++ compatible = "maxim,max98360a";
++ sdmode-gpios = <&pio 150 GPIO_ACTIVE_HIGH>;
++ /delete-property/ sdb-gpios;
+ };
+
+ &spmi {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+index 682c6ad2574d00..0c0b3ac5974525 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+@@ -259,15 +259,15 @@ spk-hdmi-playback-dai-link {
+ mediatek,clk-provider = "cpu";
+ /* RT1019P and IT6505 connected to the same I2S line */
+ codec {
+- sound-dai = <&it6505dptx>, <&rt1019p>;
++ sound-dai = <&it6505dptx>, <&speaker_codec>;
+ };
+ };
+ };
+
+- rt1019p: speaker-codec {
++ speaker_codec: speaker-codec {
+ compatible = "realtek,rt1019p";
+ pinctrl-names = "default";
+- pinctrl-0 = <&rt1019p_pins_default>;
++ pinctrl-0 = <&speaker_codec_pins_default>;
+ #sound-dai-cells = <0>;
+ sdb-gpios = <&pio 150 GPIO_ACTIVE_HIGH>;
+ };
+@@ -1179,7 +1179,7 @@ pins {
+ };
+ };
+
+- rt1019p_pins_default: rt1019p-default-pins {
++ speaker_codec_pins_default: speaker-codec-default-pins {
+ pins-sdb {
+ pinmux = <PINMUX_GPIO150__FUNC_GPIO150>;
+ output-low;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8188.dtsi b/arch/arm64/boot/dts/mediatek/mt8188.dtsi
+index cd27966d2e3c05..91beef22e0a9c6 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8188.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8188.dtsi
+@@ -956,9 +956,9 @@ mfg0: power-domain@MT8188_POWER_DOMAIN_MFG0 {
+ #size-cells = <0>;
+ #power-domain-cells = <1>;
+
+- power-domain@MT8188_POWER_DOMAIN_MFG1 {
++ mfg1: power-domain@MT8188_POWER_DOMAIN_MFG1 {
+ reg = <MT8188_POWER_DOMAIN_MFG1>;
+- clocks = <&topckgen CLK_APMIXED_MFGPLL>,
++ clocks = <&apmixedsys CLK_APMIXED_MFGPLL>,
+ <&topckgen CLK_TOP_MFG_CORE_TMP>;
+ clock-names = "mfg", "alt";
+ mediatek,infracfg = <&infracfg_ao>;
+@@ -1689,7 +1689,6 @@ u3port1: usb-phy@700 {
+ <&clk26m>;
+ clock-names = "ref", "da_ref";
+ #phy-cells = <1>;
+- status = "disabled";
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+index 75d56b2d5a3d34..2c7b2223ee76b1 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+@@ -438,7 +438,7 @@ audio_codec: codec@1a {
+ /* Realtek RT5682i or RT5682s, sharing the same configuration */
+ reg = <0x1a>;
+ interrupts-extended = <&pio 89 IRQ_TYPE_EDGE_BOTH>;
+- #sound-dai-cells = <0>;
++ #sound-dai-cells = <1>;
+ realtek,jd-src = <1>;
+
+ AVDD-supply = <&mt6359_vio18_ldo_reg>;
+@@ -1181,7 +1181,7 @@ hs-playback-dai-link {
+ link-name = "ETDM1_OUT_BE";
+ mediatek,clk-provider = "cpu";
+ codec {
+- sound-dai = <&audio_codec>;
++ sound-dai = <&audio_codec 0>;
+ };
+ };
+
+@@ -1189,7 +1189,7 @@ hs-capture-dai-link {
+ link-name = "ETDM2_IN_BE";
+ mediatek,clk-provider = "cpu";
+ codec {
+- sound-dai = <&audio_codec>;
++ sound-dai = <&audio_codec 0>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+index e89ba384c4aafc..ade685ed2190b7 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+@@ -487,7 +487,7 @@ topckgen: syscon@10000000 {
+ };
+
+ infracfg_ao: syscon@10001000 {
+- compatible = "mediatek,mt8195-infracfg_ao", "syscon", "simple-mfd";
++ compatible = "mediatek,mt8195-infracfg_ao", "syscon";
+ reg = <0 0x10001000 0 0x1000>;
+ #clock-cells = <1>;
+ #reset-cells = <1>;
+@@ -3331,11 +3331,9 @@ &larb19 &larb21 &larb24 &larb25
+ mutex1: mutex@1c101000 {
+ compatible = "mediatek,mt8195-disp-mutex";
+ reg = <0 0x1c101000 0 0x1000>;
+- reg-names = "vdo1_mutex";
+ interrupts = <GIC_SPI 494 IRQ_TYPE_LEVEL_HIGH 0>;
+ power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS1>;
+ clocks = <&vdosys1 CLK_VDO1_DISP_MUTEX>;
+- clock-names = "vdo1_mutex";
+ mediatek,gce-client-reg = <&gce0 SUBSYS_1c10XXXX 0x1000 0x1000>;
+ mediatek,gce-events = <CMDQ_EVENT_VDO1_STREAM_DONE_ENG_0>;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts b/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
+index 1ef6262b65c9ac..b4b48eb93f3c54 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
+@@ -187,7 +187,7 @@ mdio {
+ compatible = "snps,dwmac-mdio";
+ #address-cells = <1>;
+ #size-cells = <0>;
+- eth_phy0: eth-phy0@1 {
++ eth_phy0: ethernet-phy@1 {
+ compatible = "ethernet-phy-id001c.c916";
+ reg = <0x1>;
+ };
+diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
+index c00db75e391057..1c53ccc5e3cbf3 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
+@@ -351,7 +351,7 @@ mmc@700b0200 {
+ #size-cells = <0>;
+
+ wifi@1 {
+- compatible = "brcm,bcm4354-fmac";
++ compatible = "brcm,bcm4354-fmac", "brcm,bcm4329-fmac";
+ reg = <1>;
+ interrupt-parent = <&gpio>;
+ interrupts = <TEGRA_GPIO(H, 2) IRQ_TYPE_LEVEL_HIGH>;
+diff --git a/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts b/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts
+index 0d45662b8028bf..5d0167fbc70982 100644
+--- a/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts
++++ b/arch/arm64/boot/dts/qcom/qcs6490-rb3gen2.dts
+@@ -707,7 +707,7 @@ &remoteproc_cdsp {
+ };
+
+ &remoteproc_mpss {
+- firmware-name = "qcom/qcs6490/modem.mdt";
++ firmware-name = "qcom/qcs6490/modem.mbn";
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sc8180x.dtsi b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+index 0e9429684dd97b..60f71b49026153 100644
+--- a/arch/arm64/boot/dts/qcom/sc8180x.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8180x.dtsi
+@@ -3889,7 +3889,7 @@ lmh@18358800 {
+ };
+
+ cpufreq_hw: cpufreq@18323000 {
+- compatible = "qcom,cpufreq-hw";
++ compatible = "qcom,sc8180x-cpufreq-hw", "qcom,cpufreq-hw";
+ reg = <0 0x18323000 0 0x1400>, <0 0x18325800 0 0x1400>;
+ reg-names = "freq-domain0", "freq-domain1";
+
+diff --git a/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts b/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
+index 60412281ab27de..962c8aa4004401 100644
+--- a/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
++++ b/arch/arm64/boot/dts/qcom/sda660-inforce-ifc6560.dts
+@@ -104,7 +104,7 @@ vreg_l10a_1p8: vreg-l10a-regulator {
+ compatible = "regulator-fixed";
+ regulator-name = "vreg_l10a_1p8";
+ regulator-min-microvolt = <1804000>;
+- regulator-max-microvolt = <1896000>;
++ regulator-max-microvolt = <1804000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+index 7986ddb30f6e8c..4f8477de7e1b1e 100644
+--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+@@ -1376,43 +1376,43 @@ gpu_opp_table: opp-table {
+ opp-850000000 {
+ opp-hz = /bits/ 64 <850000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_TURBO_L1>;
+- opp-supported-hw = <0x02>;
++ opp-supported-hw = <0x03>;
+ };
+
+ opp-800000000 {
+ opp-hz = /bits/ 64 <800000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_TURBO>;
+- opp-supported-hw = <0x04>;
++ opp-supported-hw = <0x07>;
+ };
+
+ opp-650000000 {
+ opp-hz = /bits/ 64 <650000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_NOM_L1>;
+- opp-supported-hw = <0x08>;
++ opp-supported-hw = <0x0f>;
+ };
+
+ opp-565000000 {
+ opp-hz = /bits/ 64 <565000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_NOM>;
+- opp-supported-hw = <0x10>;
++ opp-supported-hw = <0x1f>;
+ };
+
+ opp-430000000 {
+ opp-hz = /bits/ 64 <430000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_SVS_L1>;
+- opp-supported-hw = <0xff>;
++ opp-supported-hw = <0x1f>;
+ };
+
+ opp-355000000 {
+ opp-hz = /bits/ 64 <355000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_SVS>;
+- opp-supported-hw = <0xff>;
++ opp-supported-hw = <0x1f>;
+ };
+
+ opp-253000000 {
+ opp-hz = /bits/ 64 <253000000>;
+ opp-level = <RPMH_REGULATOR_LEVEL_LOW_SVS>;
+- opp-supported-hw = <0xff>;
++ opp-supported-hw = <0x1f>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+index fb4a48a1e2a8a5..2926a1aba76873 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+@@ -594,8 +594,6 @@ &usb_1_ss0_qmpphy {
+ vdda-phy-supply = <&vreg_l3e_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+@@ -628,8 +626,6 @@ &usb_1_ss1_qmpphy {
+ vdda-phy-supply = <&vreg_l3e_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+index 0cdaff9c8cf0fc..f22e5c840a2e55 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+@@ -898,8 +898,6 @@ &usb_1_ss0_qmpphy {
+ vdda-phy-supply = <&vreg_l3e_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+@@ -932,8 +930,6 @@ &usb_1_ss1_qmpphy {
+ vdda-phy-supply = <&vreg_l3e_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index 0510abc0edf0ff..914f9cb3aca215 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -279,8 +279,8 @@ CLUSTER_C4: cpu-sleep-0 {
+ idle-state-name = "ret";
+ arm,psci-suspend-param = <0x00000004>;
+ entry-latency-us = <180>;
+- exit-latency-us = <320>;
+- min-residency-us = <1000>;
++ exit-latency-us = <500>;
++ min-residency-us = <600>;
+ };
+ };
+
+@@ -299,7 +299,7 @@ CLUSTER_CL5: cluster-sleep-1 {
+ idle-state-name = "ret-pll-off";
+ arm,psci-suspend-param = <0x01000054>;
+ entry-latency-us = <2200>;
+- exit-latency-us = <2500>;
++ exit-latency-us = <4000>;
+ min-residency-us = <7000>;
+ };
+ };
+@@ -5752,7 +5752,7 @@ apps_smmu: iommu@15000000 {
+ intc: interrupt-controller@17000000 {
+ compatible = "arm,gic-v3";
+ reg = <0 0x17000000 0 0x10000>, /* GICD */
+- <0 0x17080000 0 0x480000>; /* GICR * 12 */
++ <0 0x17080000 0 0x300000>; /* GICR * 12 */
+
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+
+diff --git a/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi b/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi
+index 8e2db1d6ca81e2..25c55b32aafe5a 100644
+--- a/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi
++++ b/arch/arm64/boot/dts/renesas/hihope-rev2.dtsi
+@@ -69,9 +69,6 @@ &rcar_sound {
+
+ status = "okay";
+
+- /* Single DAI */
+- #sound-dai-cells = <0>;
+-
+ rsnd_port: port {
+ rsnd_endpoint: endpoint {
+ remote-endpoint = <&dw_hdmi0_snd_in>;
+diff --git a/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi b/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi
+index 66f3affe046973..deb69c27277566 100644
+--- a/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi
++++ b/arch/arm64/boot/dts/renesas/hihope-rev4.dtsi
+@@ -84,9 +84,6 @@ &rcar_sound {
+ pinctrl-names = "default";
+ status = "okay";
+
+- /* Single DAI */
+- #sound-dai-cells = <0>;
+-
+ /* audio_clkout0/1/2/3 */
+ #clock-cells = <1>;
+ clock-frequency = <12288000 11289600>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso b/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso
+index ebcaeafc3800d0..fa61633aea1526 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso
++++ b/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5-io-expander.dtso
+@@ -49,7 +49,6 @@ vcc1v8_eth: vcc1v8-eth-regulator {
+
+ vcc3v3_eth: vcc3v3-eth-regulator {
+ compatible = "regulator-fixed";
+- enable-active-low;
+ gpio = <&gpio0 RK_PC0 GPIO_ACTIVE_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&vcc3v3_eth_enn>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts b/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
+index 8ba111d9283fef..d9d2bf822443bc 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588s-indiedroid-nova.dts
+@@ -62,7 +62,7 @@ sdio_pwrseq: sdio-pwrseq {
+
+ sound {
+ compatible = "audio-graph-card";
+- label = "rockchip,es8388-codec";
++ label = "rockchip,es8388";
+ widgets = "Microphone", "Mic Jack",
+ "Headphone", "Headphones";
+ routing = "LINPUT2", "Mic Jack",
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts
+index feea6b20a6bf54..6b77be64324950 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dts
+@@ -71,7 +71,6 @@ vcc5v0_sys: vcc5v0-sys-regulator {
+
+ vcc_3v3_sd_s0: vcc-3v3-sd-s0-regulator {
+ compatible = "regulator-fixed";
+- enable-active-low;
+ gpios = <&gpio4 RK_PB5 GPIO_ACTIVE_LOW>;
+ regulator-name = "vcc_3v3_sd_s0";
+ regulator-boot-on;
+diff --git a/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi b/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi
+index e4633af87eb9c5..d6ce53c6d74814 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62x-phyboard-lyra.dtsi
+@@ -433,8 +433,6 @@ &mcasp2 {
+ 0 0 0 0
+ 0 0 0 0
+ >;
+- tx-num-evt = <32>;
+- rx-num-evt = <32>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+index 6593c5da82c064..df39f2b1ff6ba6 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
++++ b/arch/arm64/boot/dts/ti/k3-j7200-common-proc-board.dts
+@@ -254,7 +254,7 @@ J721E_IOPAD(0x38, PIN_OUTPUT, 0) /* (Y21) MCAN3_TX */
+ };
+ };
+
+-&main_pmx1 {
++&main_pmx2 {
+ main_usbss0_pins_default: main-usbss0-default-pins {
+ pinctrl-single,pins = <
+ J721E_IOPAD(0x04, PIN_OUTPUT, 0) /* (T4) USB0_DRVVBUS */
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+index 9386bf3ef9f684..1d11da926a8714 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-main.dtsi
+@@ -426,10 +426,28 @@ main_pmx0: pinctrl@11c000 {
+ pinctrl-single,function-mask = <0xffffffff>;
+ };
+
+- main_pmx1: pinctrl@11c11c {
++ main_pmx1: pinctrl@11c110 {
+ compatible = "ti,j7200-padconf", "pinctrl-single";
+ /* Proxy 0 addressing */
+- reg = <0x00 0x11c11c 0x00 0xc>;
++ reg = <0x00 0x11c110 0x00 0x004>;
++ #pinctrl-cells = <1>;
++ pinctrl-single,register-width = <32>;
++ pinctrl-single,function-mask = <0xffffffff>;
++ };
++
++ main_pmx2: pinctrl@11c11c {
++ compatible = "ti,j7200-padconf", "pinctrl-single";
++ /* Proxy 0 addressing */
++ reg = <0x00 0x11c11c 0x00 0x00c>;
++ #pinctrl-cells = <1>;
++ pinctrl-single,register-width = <32>;
++ pinctrl-single,function-mask = <0xffffffff>;
++ };
++
++ main_pmx3: pinctrl@11c164 {
++ compatible = "ti,j7200-padconf", "pinctrl-single";
++ /* Proxy 0 addressing */
++ reg = <0x00 0x11c164 0x00 0x008>;
+ #pinctrl-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <0xffffffff>;
+@@ -1145,7 +1163,7 @@ main_spi0: spi@2100000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 266 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 266 1>;
++ clocks = <&k3_clks 266 4>;
+ status = "disabled";
+ };
+
+@@ -1156,7 +1174,7 @@ main_spi1: spi@2110000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 267 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 267 1>;
++ clocks = <&k3_clks 267 4>;
+ status = "disabled";
+ };
+
+@@ -1167,7 +1185,7 @@ main_spi2: spi@2120000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 268 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 268 1>;
++ clocks = <&k3_clks 268 4>;
+ status = "disabled";
+ };
+
+@@ -1178,7 +1196,7 @@ main_spi3: spi@2130000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 269 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 269 1>;
++ clocks = <&k3_clks 269 4>;
+ status = "disabled";
+ };
+
+@@ -1189,7 +1207,7 @@ main_spi4: spi@2140000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 270 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 270 1>;
++ clocks = <&k3_clks 270 2>;
+ status = "disabled";
+ };
+
+@@ -1200,7 +1218,7 @@ main_spi5: spi@2150000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 271 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 271 1>;
++ clocks = <&k3_clks 271 4>;
+ status = "disabled";
+ };
+
+@@ -1211,7 +1229,7 @@ main_spi6: spi@2160000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 272 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 272 1>;
++ clocks = <&k3_clks 272 4>;
+ status = "disabled";
+ };
+
+@@ -1222,7 +1240,7 @@ main_spi7: spi@2170000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 273 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 273 1>;
++ clocks = <&k3_clks 273 4>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+index 5097d192c2b208..b18b2f2deb969f 100644
+--- a/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j7200-mcu-wakeup.dtsi
+@@ -494,7 +494,7 @@ mcu_spi0: spi@40300000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 274 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 274 0>;
++ clocks = <&k3_clks 274 4>;
+ status = "disabled";
+ };
+
+@@ -505,7 +505,7 @@ mcu_spi1: spi@40310000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 275 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 275 0>;
++ clocks = <&k3_clks 275 4>;
+ status = "disabled";
+ };
+
+@@ -516,7 +516,7 @@ mcu_spi2: spi@40320000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 276 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 276 0>;
++ clocks = <&k3_clks 276 2>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi
+index 3731ffb4a5c963..6f5c1401ebd6a0 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721e-mcu-wakeup.dtsi
+@@ -654,7 +654,7 @@ mcu_spi0: spi@40300000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 274 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 274 0>;
++ clocks = <&k3_clks 274 1>;
+ status = "disabled";
+ };
+
+@@ -665,7 +665,7 @@ mcu_spi1: spi@40310000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 275 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 275 0>;
++ clocks = <&k3_clks 275 1>;
+ status = "disabled";
+ };
+
+@@ -676,7 +676,7 @@ mcu_spi2: spi@40320000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 276 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 276 0>;
++ clocks = <&k3_clks 276 1>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
+index 9ed6949b40e9df..fae534b5c8a43f 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721s2-main.dtsi
+@@ -1708,7 +1708,7 @@ main_spi0: spi@2100000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 339 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 339 1>;
++ clocks = <&k3_clks 339 2>;
+ status = "disabled";
+ };
+
+@@ -1719,7 +1719,7 @@ main_spi1: spi@2110000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 340 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 340 1>;
++ clocks = <&k3_clks 340 2>;
+ status = "disabled";
+ };
+
+@@ -1730,7 +1730,7 @@ main_spi2: spi@2120000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 341 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 341 1>;
++ clocks = <&k3_clks 341 2>;
+ status = "disabled";
+ };
+
+@@ -1741,7 +1741,7 @@ main_spi3: spi@2130000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 342 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 342 1>;
++ clocks = <&k3_clks 342 2>;
+ status = "disabled";
+ };
+
+@@ -1752,7 +1752,7 @@ main_spi4: spi@2140000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 343 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 343 1>;
++ clocks = <&k3_clks 343 2>;
+ status = "disabled";
+ };
+
+@@ -1763,7 +1763,7 @@ main_spi5: spi@2150000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 344 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 344 1>;
++ clocks = <&k3_clks 344 2>;
+ status = "disabled";
+ };
+
+@@ -1774,7 +1774,7 @@ main_spi6: spi@2160000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 345 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 345 1>;
++ clocks = <&k3_clks 345 2>;
+ status = "disabled";
+ };
+
+@@ -1785,7 +1785,7 @@ main_spi7: spi@2170000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 346 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 346 1>;
++ clocks = <&k3_clks 346 2>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
+index 9d96b19d0e7cf5..8232d308c23cc6 100644
+--- a/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-j721s2-mcu-wakeup.dtsi
+@@ -425,7 +425,7 @@ mcu_spi0: spi@40300000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 347 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 347 0>;
++ clocks = <&k3_clks 347 2>;
+ status = "disabled";
+ };
+
+@@ -436,7 +436,7 @@ mcu_spi1: spi@40310000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 348 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 348 0>;
++ clocks = <&k3_clks 348 2>;
+ status = "disabled";
+ };
+
+@@ -447,7 +447,7 @@ mcu_spi2: spi@40320000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ power-domains = <&k3_pds 349 TI_SCI_PD_EXCLUSIVE>;
+- clocks = <&k3_clks 349 0>;
++ clocks = <&k3_clks 349 2>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h
+index 8c0a36f72d6fcd..bc77869dbd43b2 100644
+--- a/arch/arm64/include/asm/insn.h
++++ b/arch/arm64/include/asm/insn.h
+@@ -353,6 +353,7 @@ __AARCH64_INSN_FUNCS(ldrsw_lit, 0xFF000000, 0x98000000)
+ __AARCH64_INSN_FUNCS(exclusive, 0x3F800000, 0x08000000)
+ __AARCH64_INSN_FUNCS(load_ex, 0x3F400000, 0x08400000)
+ __AARCH64_INSN_FUNCS(store_ex, 0x3F400000, 0x08000000)
++__AARCH64_INSN_FUNCS(mops, 0x3B200C00, 0x19000400)
+ __AARCH64_INSN_FUNCS(stp, 0x7FC00000, 0x29000000)
+ __AARCH64_INSN_FUNCS(ldp, 0x7FC00000, 0x29400000)
+ __AARCH64_INSN_FUNCS(stp_post, 0x7FC00000, 0x28800000)
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index bf64fed9820ea0..c315bc1a4e9adf 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -74,8 +74,6 @@ enum kvm_mode kvm_get_mode(void);
+ static inline enum kvm_mode kvm_get_mode(void) { return KVM_MODE_NONE; };
+ #endif
+
+-DECLARE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
+-
+ extern unsigned int __ro_after_init kvm_sve_max_vl;
+ extern unsigned int __ro_after_init kvm_host_sve_max_vl;
+ int __init kvm_arm_init_sve(void);
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index 718728a85430fa..db994d1fd97e70 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -228,6 +228,7 @@ static const struct arm64_ftr_bits ftr_id_aa64isar0[] = {
+ };
+
+ static const struct arm64_ftr_bits ftr_id_aa64isar1[] = {
++ ARM64_FTR_BITS(FTR_HIDDEN, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_XS_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_I8MM_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_DGH_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_VISIBLE, FTR_STRICT, FTR_LOWER_SAFE, ID_AA64ISAR1_EL1_BF16_SHIFT, 4, 0),
+diff --git a/arch/arm64/kernel/probes/decode-insn.c b/arch/arm64/kernel/probes/decode-insn.c
+index 3496d6169e59b2..42b69936cee34b 100644
+--- a/arch/arm64/kernel/probes/decode-insn.c
++++ b/arch/arm64/kernel/probes/decode-insn.c
+@@ -58,10 +58,13 @@ static bool __kprobes aarch64_insn_is_steppable(u32 insn)
+ * Instructions which load PC relative literals are not going to work
+ * when executed from an XOL slot. Instructions doing an exclusive
+ * load/store are not going to complete successfully when single-step
+- * exception handling happens in the middle of the sequence.
++ * exception handling happens in the middle of the sequence. Memory
++ * copy/set instructions require that all three instructions be placed
++ * consecutively in memory.
+ */
+ if (aarch64_insn_uses_literal(insn) ||
+- aarch64_insn_is_exclusive(insn))
++ aarch64_insn_is_exclusive(insn) ||
++ aarch64_insn_is_mops(insn))
+ return false;
+
+ return true;
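The new MOPS entry reuses the kernel's __AARCH64_INSN_FUNCS(name, mask, val) pattern: an instruction belongs to a class when (insn & mask) == val. A minimal standalone version of the check added here, with the mask/value pair taken verbatim from the hunk above:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Class test in the style of __AARCH64_INSN_FUNCS: (insn & mask) == val. */
    static bool aarch64_insn_is_mops(uint32_t insn)
    {
            return (insn & 0x3B200C00u) == 0x19000400u;
    }

    int main(void)
    {
            /* 0x19000400 is the encoding with all don't-care bits clear. */
            printf("%d\n", aarch64_insn_is_mops(0x19000400u)); /* 1 */
            printf("%d\n", aarch64_insn_is_mops(0xd503201fu)); /* 0: NOP */
            return 0;
    }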
+diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
+index 3e7c8c8195c3c9..2bbcbb11d844c9 100644
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -442,7 +442,7 @@ static void tls_thread_switch(struct task_struct *next)
+
+ if (is_compat_thread(task_thread_info(next)))
+ write_sysreg(next->thread.uw.tp_value, tpidrro_el0);
+- else if (!arm64_kernel_unmapped_at_el0())
++ else
+ write_sysreg(0, tpidrro_el0);
+
+ write_sysreg(*task_user_tls(next), tpidr_el0);
+diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
+index b22d28ec80284b..87f61fd6783c20 100644
+--- a/arch/arm64/kernel/setup.c
++++ b/arch/arm64/kernel/setup.c
+@@ -175,7 +175,11 @@ static void __init setup_machine_fdt(phys_addr_t dt_phys)
+ if (dt_virt)
+ memblock_reserve(dt_phys, size);
+
+- if (!dt_virt || !early_init_dt_scan(dt_virt)) {
++ /*
++ * dt_virt is a fixmap address, hence __pa(dt_virt) can't be used.
++ * Pass dt_phys directly.
++ */
++ if (!early_init_dt_scan(dt_virt, dt_phys)) {
+ pr_crit("\n"
+ "Error: invalid device tree blob at physical address %pa (virtual address 0x%px)\n"
+ "The dtb must be 8-byte aligned and must not exceed 2 MB in size\n"
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index 58d89d997d050f..f84c71f04d9ea9 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -287,6 +287,9 @@ SECTIONS
+ __initdata_end = .;
+ __init_end = .;
+
++ .data.rel.ro : { *(.data.rel.ro) }
++ ASSERT(SIZEOF(.data.rel.ro) == 0, "Unexpected RELRO detected!")
++
+ _data = .;
+ _sdata = .;
+ RW_DATA(L1_CACHE_BYTES, PAGE_SIZE, THREAD_ALIGN)
+@@ -343,9 +346,6 @@ SECTIONS
+ *(.plt) *(.plt.*) *(.iplt) *(.igot .igot.plt)
+ }
+ ASSERT(SIZEOF(.plt) == 0, "Unexpected run-time procedure linkages detected!")
+-
+- .data.rel.ro : { *(.data.rel.ro) }
+- ASSERT(SIZEOF(.data.rel.ro) == 0, "Unexpected RELRO detected!")
+ }
+
+ #include "image-vars.h"
+diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
+index 879982b1cc739e..1215df59041856 100644
+--- a/arch/arm64/kvm/arch_timer.c
++++ b/arch/arm64/kvm/arch_timer.c
+@@ -206,8 +206,7 @@ void get_timer_map(struct kvm_vcpu *vcpu, struct timer_map *map)
+
+ static inline bool userspace_irqchip(struct kvm *kvm)
+ {
+- return static_branch_unlikely(&userspace_irqchip_in_use) &&
+- unlikely(!irqchip_in_kernel(kvm));
++ return unlikely(!irqchip_in_kernel(kvm));
+ }
+
+ static void soft_timer_start(struct hrtimer *hrt, u64 ns)
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 48cafb65d6acff..70ff9a20ef3af3 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -69,7 +69,6 @@ DECLARE_KVM_NVHE_PER_CPU(struct kvm_cpu_context, kvm_hyp_ctxt);
+ static bool vgic_present, kvm_arm_initialised;
+
+ static DEFINE_PER_CPU(unsigned char, kvm_hyp_initialized);
+-DEFINE_STATIC_KEY_FALSE(userspace_irqchip_in_use);
+
+ bool is_kvm_arm_initialised(void)
+ {
+@@ -503,9 +502,6 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
+
+ void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
+ {
+- if (vcpu_has_run_once(vcpu) && unlikely(!irqchip_in_kernel(vcpu->kvm)))
+- static_branch_dec(&userspace_irqchip_in_use);
+-
+ kvm_mmu_free_memory_cache(&vcpu->arch.mmu_page_cache);
+ kvm_timer_vcpu_terminate(vcpu);
+ kvm_pmu_vcpu_destroy(vcpu);
+@@ -848,14 +844,6 @@ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
+ return ret;
+ }
+
+- if (!irqchip_in_kernel(kvm)) {
+- /*
+- * Tell the rest of the code that there are userspace irqchip
+- * VMs in the wild.
+- */
+- static_branch_inc(&userspace_irqchip_in_use);
+- }
+-
+ /*
+ * Initialize traps for protected VMs.
+ * NOTE: Move to run in EL2 directly, rather than via a hypercall, once
+@@ -1077,7 +1065,7 @@ static bool kvm_vcpu_exit_request(struct kvm_vcpu *vcpu, int *ret)
+ * state gets updated in kvm_timer_update_run and
+ * kvm_pmu_update_run below).
+ */
+- if (static_branch_unlikely(&userspace_irqchip_in_use)) {
++ if (unlikely(!irqchip_in_kernel(vcpu->kvm))) {
+ if (kvm_timer_should_notify_user(vcpu) ||
+ kvm_pmu_should_notify_user(vcpu)) {
+ *ret = -EINTR;
+@@ -1199,7 +1187,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ vcpu->mode = OUTSIDE_GUEST_MODE;
+ isb(); /* Ensure work in x_flush_hwstate is committed */
+ kvm_pmu_sync_hwstate(vcpu);
+- if (static_branch_unlikely(&userspace_irqchip_in_use))
++ if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
+ kvm_timer_sync_user(vcpu);
+ kvm_vgic_sync_hwstate(vcpu);
+ local_irq_enable();
+@@ -1245,7 +1233,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ * we don't want vtimer interrupts to race with syncing the
+ * timer virtual interrupt state.
+ */
+- if (static_branch_unlikely(&userspace_irqchip_in_use))
++ if (unlikely(!irqchip_in_kernel(vcpu->kvm)))
+ kvm_timer_sync_user(vcpu);
+
+ kvm_arch_vcpu_ctxsync_fp(vcpu);
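The userspace_irqchip_in_use removal trades a process-wide static branch for a per-VM test. The static key was a global optimization: once any VM without an in-kernel irqchip had run, every VM on the host took the extra checks. Testing irqchip_in_kernel(vcpu->kvm) directly keeps the decision per VM. A standalone sketch of the behavioural shape, with plain bools standing in for static keys:

    #include <stdbool.h>
    #include <stdio.h>

    struct vm { bool irqchip_in_kernel; };

    /* Old shape: one global flag flipped when any userspace-irqchip VM runs. */
    static bool any_userspace_irqchip;

    static bool old_check(const struct vm *vm)
    {
            return any_userspace_irqchip && !vm->irqchip_in_kernel;
    }

    /* New shape: decide from the VM itself, no global state to maintain. */
    static bool new_check(const struct vm *vm)
    {
            return !vm->irqchip_in_kernel;
    }

    int main(void)
    {
            struct vm kernel_vm = { .irqchip_in_kernel = true };
            struct vm user_vm = { .irqchip_in_kernel = false };

            any_userspace_irqchip = true; /* user_vm has run once */

            /* Both forms agree; the new one just drops the global branch. */
            printf("old: kernel=%d user=%d\n",
                   old_check(&kernel_vm), old_check(&user_vm));
            printf("new: kernel=%d user=%d\n",
                   new_check(&kernel_vm), new_check(&user_vm));
            return 0;
    }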
+diff --git a/arch/arm64/kvm/mmio.c b/arch/arm64/kvm/mmio.c
+index cd6b7b83e2c370..ab365e839874e5 100644
+--- a/arch/arm64/kvm/mmio.c
++++ b/arch/arm64/kvm/mmio.c
+@@ -72,6 +72,31 @@ unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len)
+ return data;
+ }
+
++static bool kvm_pending_sync_exception(struct kvm_vcpu *vcpu)
++{
++ if (!vcpu_get_flag(vcpu, PENDING_EXCEPTION))
++ return false;
++
++ if (vcpu_el1_is_32bit(vcpu)) {
++ switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) {
++ case unpack_vcpu_flag(EXCEPT_AA32_UND):
++ case unpack_vcpu_flag(EXCEPT_AA32_IABT):
++ case unpack_vcpu_flag(EXCEPT_AA32_DABT):
++ return true;
++ default:
++ return false;
++ }
++ } else {
++ switch (vcpu_get_flag(vcpu, EXCEPT_MASK)) {
++ case unpack_vcpu_flag(EXCEPT_AA64_EL1_SYNC):
++ case unpack_vcpu_flag(EXCEPT_AA64_EL2_SYNC):
++ return true;
++ default:
++ return false;
++ }
++ }
++}
++
+ /**
+ * kvm_handle_mmio_return -- Handle MMIO loads after user space emulation
+ * or in-kernel IO emulation
+@@ -84,8 +109,11 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu)
+ unsigned int len;
+ int mask;
+
+- /* Detect an already handled MMIO return */
+- if (unlikely(!vcpu->mmio_needed))
++ /*
++ * Detect if the MMIO return was already handled or if userspace aborted
++ * the MMIO access.
++ */
++ if (unlikely(!vcpu->mmio_needed || kvm_pending_sync_exception(vcpu)))
+ return 1;
+
+ vcpu->mmio_needed = 0;
+diff --git a/arch/arm64/kvm/pmu-emul.c b/arch/arm64/kvm/pmu-emul.c
+index ac36c438b8c18c..3940fe893783c8 100644
+--- a/arch/arm64/kvm/pmu-emul.c
++++ b/arch/arm64/kvm/pmu-emul.c
+@@ -342,7 +342,6 @@ static u64 kvm_pmu_overflow_status(struct kvm_vcpu *vcpu)
+
+ if ((kvm_vcpu_read_pmcr(vcpu) & ARMV8_PMU_PMCR_E)) {
+ reg = __vcpu_sys_reg(vcpu, PMOVSSET_EL0);
+- reg &= __vcpu_sys_reg(vcpu, PMCNTENSET_EL0);
+ reg &= __vcpu_sys_reg(vcpu, PMINTENSET_EL1);
+ }
+
+diff --git a/arch/arm64/kvm/vgic/vgic-its.c b/arch/arm64/kvm/vgic/vgic-its.c
+index ba945ba78cc7d7..198296933e7ebf 100644
+--- a/arch/arm64/kvm/vgic/vgic-its.c
++++ b/arch/arm64/kvm/vgic/vgic-its.c
+@@ -782,6 +782,9 @@ static int vgic_its_cmd_handle_discard(struct kvm *kvm, struct vgic_its *its,
+
+ ite = find_ite(its, device_id, event_id);
+ if (ite && its_is_collection_mapped(ite->collection)) {
++ struct its_device *device = find_its_device(its, device_id);
++ int ite_esz = vgic_its_get_abi(its)->ite_esz;
++ gpa_t gpa = device->itt_addr + ite->event_id * ite_esz;
+ /*
+ * Though the spec talks about removing the pending state, we
+ * don't bother here since we clear the ITTE anyway and the
+@@ -790,7 +793,8 @@ static int vgic_its_cmd_handle_discard(struct kvm *kvm, struct vgic_its *its,
+ vgic_its_invalidate_cache(its);
+
+ its_free_ite(kvm, ite);
+- return 0;
++
++ return vgic_its_write_entry_lock(its, gpa, 0, ite_esz);
+ }
+
+ return E_ITS_DISCARD_UNMAPPED_INTERRUPT;
+@@ -1139,9 +1143,11 @@ static int vgic_its_cmd_handle_mapd(struct kvm *kvm, struct vgic_its *its,
+ bool valid = its_cmd_get_validbit(its_cmd);
+ u8 num_eventid_bits = its_cmd_get_size(its_cmd);
+ gpa_t itt_addr = its_cmd_get_ittaddr(its_cmd);
++ int dte_esz = vgic_its_get_abi(its)->dte_esz;
+ struct its_device *device;
++ gpa_t gpa;
+
+- if (!vgic_its_check_id(its, its->baser_device_table, device_id, NULL))
++ if (!vgic_its_check_id(its, its->baser_device_table, device_id, &gpa))
+ return E_ITS_MAPD_DEVICE_OOR;
+
+ if (valid && num_eventid_bits > VITS_TYPER_IDBITS)
+@@ -1162,7 +1168,7 @@ static int vgic_its_cmd_handle_mapd(struct kvm *kvm, struct vgic_its *its,
+ * is an error, so we are done in any case.
+ */
+ if (!valid)
+- return 0;
++ return vgic_its_write_entry_lock(its, gpa, 0, dte_esz);
+
+ device = vgic_its_alloc_device(its, device_id, itt_addr,
+ num_eventid_bits);
+@@ -2086,7 +2092,6 @@ static int scan_its_table(struct vgic_its *its, gpa_t base, int size, u32 esz,
+ static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev,
+ struct its_ite *ite, gpa_t gpa, int ite_esz)
+ {
+- struct kvm *kvm = its->dev->kvm;
+ u32 next_offset;
+ u64 val;
+
+@@ -2095,7 +2100,8 @@ static int vgic_its_save_ite(struct vgic_its *its, struct its_device *dev,
+ ((u64)ite->irq->intid << KVM_ITS_ITE_PINTID_SHIFT) |
+ ite->collection->collection_id;
+ val = cpu_to_le64(val);
+- return vgic_write_guest_lock(kvm, gpa, &val, ite_esz);
++
++ return vgic_its_write_entry_lock(its, gpa, val, ite_esz);
+ }
+
+ /**
+@@ -2239,7 +2245,6 @@ static int vgic_its_restore_itt(struct vgic_its *its, struct its_device *dev)
+ static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev,
+ gpa_t ptr, int dte_esz)
+ {
+- struct kvm *kvm = its->dev->kvm;
+ u64 val, itt_addr_field;
+ u32 next_offset;
+
+@@ -2250,7 +2255,8 @@ static int vgic_its_save_dte(struct vgic_its *its, struct its_device *dev,
+ (itt_addr_field << KVM_ITS_DTE_ITTADDR_SHIFT) |
+ (dev->num_eventid_bits - 1));
+ val = cpu_to_le64(val);
+- return vgic_write_guest_lock(kvm, ptr, &val, dte_esz);
++
++ return vgic_its_write_entry_lock(its, ptr, val, dte_esz);
+ }
+
+ /**
+@@ -2437,7 +2443,8 @@ static int vgic_its_save_cte(struct vgic_its *its,
+ ((u64)collection->target_addr << KVM_ITS_CTE_RDBASE_SHIFT) |
+ collection->collection_id);
+ val = cpu_to_le64(val);
+- return vgic_write_guest_lock(its->dev->kvm, gpa, &val, esz);
++
++ return vgic_its_write_entry_lock(its, gpa, val, esz);
+ }
+
+ /*
+@@ -2453,8 +2460,7 @@ static int vgic_its_restore_cte(struct vgic_its *its, gpa_t gpa, int esz)
+ u64 val;
+ int ret;
+
+- BUG_ON(esz > sizeof(val));
+- ret = kvm_read_guest_lock(kvm, gpa, &val, esz);
++ ret = vgic_its_read_entry_lock(its, gpa, &val, esz);
+ if (ret)
+ return ret;
+ val = le64_to_cpu(val);
+@@ -2492,7 +2498,6 @@ static int vgic_its_save_collection_table(struct vgic_its *its)
+ u64 baser = its->baser_coll_table;
+ gpa_t gpa = GITS_BASER_ADDR_48_to_52(baser);
+ struct its_collection *collection;
+- u64 val;
+ size_t max_size, filled = 0;
+ int ret, cte_esz = abi->cte_esz;
+
+@@ -2516,10 +2521,7 @@ static int vgic_its_save_collection_table(struct vgic_its *its)
+ * table is not fully filled, add a last dummy element
+ * with valid bit unset
+ */
+- val = 0;
+- BUG_ON(cte_esz > sizeof(val));
+- ret = vgic_write_guest_lock(its->dev->kvm, gpa, &val, cte_esz);
+- return ret;
++ return vgic_its_write_entry_lock(its, gpa, 0, cte_esz);
+ }
+
+ /*
+diff --git a/arch/arm64/kvm/vgic/vgic-mmio-v3.c b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+index 9e50928f5d7dfd..70a44852cbafe3 100644
+--- a/arch/arm64/kvm/vgic/vgic-mmio-v3.c
++++ b/arch/arm64/kvm/vgic/vgic-mmio-v3.c
+@@ -530,6 +530,7 @@ static void vgic_mmio_write_invlpi(struct kvm_vcpu *vcpu,
+ unsigned long val)
+ {
+ struct vgic_irq *irq;
++ u32 intid;
+
+ /*
+ * If the guest wrote only to the upper 32bit part of the
+@@ -541,9 +542,13 @@ static void vgic_mmio_write_invlpi(struct kvm_vcpu *vcpu,
+ if ((addr & 4) || !vgic_lpis_enabled(vcpu))
+ return;
+
++ intid = lower_32_bits(val);
++ if (intid < VGIC_MIN_LPI)
++ return;
++
+ vgic_set_rdist_busy(vcpu, true);
+
+- irq = vgic_get_irq(vcpu->kvm, NULL, lower_32_bits(val));
++ irq = vgic_get_irq(vcpu->kvm, NULL, intid);
+ if (irq) {
+ vgic_its_inv_lpi(vcpu->kvm, irq);
+ vgic_put_irq(vcpu->kvm, irq);
+diff --git a/arch/arm64/kvm/vgic/vgic.h b/arch/arm64/kvm/vgic/vgic.h
+index f2486b4d9f9566..309295f5e1b074 100644
+--- a/arch/arm64/kvm/vgic/vgic.h
++++ b/arch/arm64/kvm/vgic/vgic.h
+@@ -146,6 +146,29 @@ static inline int vgic_write_guest_lock(struct kvm *kvm, gpa_t gpa,
+ return ret;
+ }
+
++static inline int vgic_its_read_entry_lock(struct vgic_its *its, gpa_t eaddr,
++ u64 *eval, unsigned long esize)
++{
++ struct kvm *kvm = its->dev->kvm;
++
++ if (KVM_BUG_ON(esize != sizeof(*eval), kvm))
++ return -EINVAL;
++
++ return kvm_read_guest_lock(kvm, eaddr, eval, esize);
++
++}
++
++static inline int vgic_its_write_entry_lock(struct vgic_its *its, gpa_t eaddr,
++ u64 eval, unsigned long esize)
++{
++ struct kvm *kvm = its->dev->kvm;
++
++ if (KVM_BUG_ON(esize != sizeof(eval), kvm))
++ return -EINVAL;
++
++ return vgic_write_guest_lock(kvm, eaddr, &eval, esize);
++}
++
+ /*
+ * This struct provides an intermediate representation of the fields contained
+ * in the GICH_VMCR and ICH_VMCR registers, such that code exporting the GIC
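The two vgic_its_*_entry_lock() helpers fold a recurring pattern from the callers above into one place: verify that the table entry size matches the u64 buffer before touching guest memory, then delegate. A standalone sketch of the same shape, with the guest-access and bug-report machinery stubbed out (the real callers also convert the value with cpu_to_le64() first):

    #include <errno.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    typedef uint64_t gpa_t;

    static uint8_t guest_mem[64]; /* stand-in for guest physical memory */

    static int write_guest(gpa_t gpa, const void *buf, unsigned long len)
    {
            memcpy(&guest_mem[gpa], buf, len);
            return 0;
    }

    /* Size-checked wrapper, same shape as vgic_its_write_entry_lock(). */
    static int write_entry(gpa_t gpa, uint64_t eval, unsigned long esize)
    {
            if (esize != sizeof(eval)) {
                    fprintf(stderr, "bad entry size %lu\n", esize); /* KVM_BUG_ON */
                    return -EINVAL;
            }
            return write_guest(gpa, &eval, esize);
    }

    int main(void)
    {
            printf("%d\n", write_entry(0, 0x1234, 8)); /* 0 */
            printf("%d\n", write_entry(0, 0x1234, 4)); /* -EINVAL */
            return 0;
    }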
+diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
+index 5db82bfc9dc115..27ef366363e4e2 100644
+--- a/arch/arm64/net/bpf_jit_comp.c
++++ b/arch/arm64/net/bpf_jit_comp.c
+@@ -2094,6 +2094,12 @@ static void restore_args(struct jit_ctx *ctx, int args_off, int nregs)
+ }
+ }
+
++static bool is_struct_ops_tramp(const struct bpf_tramp_links *fentry_links)
++{
++ return fentry_links->nr_links == 1 &&
++ fentry_links->links[0]->link.type == BPF_LINK_TYPE_STRUCT_OPS;
++}
++
+ /* Based on the x86's implementation of arch_prepare_bpf_trampoline().
+ *
+ * bpf prog and function entry before bpf trampoline hooked:
+@@ -2123,6 +2129,7 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
+ struct bpf_tramp_links *fmod_ret = &tlinks[BPF_TRAMP_MODIFY_RETURN];
+ bool save_ret;
+ __le32 **branches = NULL;
++ bool is_struct_ops = is_struct_ops_tramp(fentry);
+
+ /* trampoline stack layout:
+ * [ parent ip ]
+@@ -2191,11 +2198,14 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
+ */
+ emit_bti(A64_BTI_JC, ctx);
+
+- /* frame for parent function */
+- emit(A64_PUSH(A64_FP, A64_R(9), A64_SP), ctx);
+- emit(A64_MOV(1, A64_FP, A64_SP), ctx);
++ /* x9 is not set for struct_ops */
++ if (!is_struct_ops) {
++ /* frame for parent function */
++ emit(A64_PUSH(A64_FP, A64_R(9), A64_SP), ctx);
++ emit(A64_MOV(1, A64_FP, A64_SP), ctx);
++ }
+
+- /* frame for patched function */
++ /* frame for patched function for tracing, or caller for struct_ops */
+ emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx);
+ emit(A64_MOV(1, A64_FP, A64_SP), ctx);
+
+@@ -2289,19 +2299,24 @@ static int prepare_trampoline(struct jit_ctx *ctx, struct bpf_tramp_image *im,
+ /* reset SP */
+ emit(A64_MOV(1, A64_SP, A64_FP), ctx);
+
+- /* pop frames */
+- emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
+- emit(A64_POP(A64_FP, A64_R(9), A64_SP), ctx);
+-
+- if (flags & BPF_TRAMP_F_SKIP_FRAME) {
+- /* skip patched function, return to parent */
+- emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
+- emit(A64_RET(A64_R(9)), ctx);
++ if (is_struct_ops) {
++ emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
++ emit(A64_RET(A64_LR), ctx);
+ } else {
+- /* return to patched function */
+- emit(A64_MOV(1, A64_R(10), A64_LR), ctx);
+- emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
+- emit(A64_RET(A64_R(10)), ctx);
++ /* pop frames */
++ emit(A64_POP(A64_FP, A64_LR, A64_SP), ctx);
++ emit(A64_POP(A64_FP, A64_R(9), A64_SP), ctx);
++
++ if (flags & BPF_TRAMP_F_SKIP_FRAME) {
++ /* skip patched function, return to parent */
++ emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
++ emit(A64_RET(A64_R(9)), ctx);
++ } else {
++ /* return to patched function */
++ emit(A64_MOV(1, A64_R(10), A64_LR), ctx);
++ emit(A64_MOV(1, A64_LR, A64_R(9)), ctx);
++ emit(A64_RET(A64_R(10)), ctx);
++ }
+ }
+
+ kfree(branches);
+diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c
+index 51012e90780d6b..fe715b707fd0a4 100644
+--- a/arch/csky/kernel/setup.c
++++ b/arch/csky/kernel/setup.c
+@@ -112,9 +112,9 @@ asmlinkage __visible void __init csky_start(unsigned int unused,
+ pre_trap_init();
+
+ if (dtb_start == NULL)
+- early_init_dt_scan(__dtb_start);
++ early_init_dt_scan(__dtb_start, __pa(dtb_start));
+ else
+- early_init_dt_scan(dtb_start);
++ early_init_dt_scan(dtb_start, __pa(dtb_start));
+
+ start_kernel();
+
+diff --git a/arch/loongarch/Makefile b/arch/loongarch/Makefile
+index ae3f80622f4c60..567bd122a9ee47 100644
+--- a/arch/loongarch/Makefile
++++ b/arch/loongarch/Makefile
+@@ -59,7 +59,7 @@ endif
+
+ ifdef CONFIG_64BIT
+ ld-emul = $(64bit-emul)
+-cflags-y += -mabi=lp64s
++cflags-y += -mabi=lp64s -mcmodel=normal
+ endif
+
+ cflags-y += -pipe $(CC_FLAGS_NO_FPU)
+@@ -104,7 +104,7 @@ ifdef CONFIG_OBJTOOL
+ KBUILD_CFLAGS += -fno-jump-tables
+ endif
+
+-KBUILD_RUSTFLAGS += --target=loongarch64-unknown-none-softfloat
++KBUILD_RUSTFLAGS += --target=loongarch64-unknown-none-softfloat -Ccode-model=small
+ KBUILD_RUSTFLAGS_KERNEL += -Zdirect-access-external-data=yes
+ KBUILD_RUSTFLAGS_MODULE += -Zdirect-access-external-data=no
+
+diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
+index cbd3c09a93c14c..56934fe58170e0 100644
+--- a/arch/loongarch/kernel/setup.c
++++ b/arch/loongarch/kernel/setup.c
+@@ -291,7 +291,7 @@ static void __init fdt_setup(void)
+ if (!fdt_pointer || fdt_check_header(fdt_pointer))
+ return;
+
+- early_init_dt_scan(fdt_pointer);
++ early_init_dt_scan(fdt_pointer, __pa(fdt_pointer));
+ early_init_fdt_reserve_self();
+
+ max_low_pfn = PFN_PHYS(memblock_end_of_DRAM());
+diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
+index 7dbefd4ba21071..dd350cba1252f9 100644
+--- a/arch/loongarch/net/bpf_jit.c
++++ b/arch/loongarch/net/bpf_jit.c
+@@ -179,7 +179,7 @@ static void __build_epilogue(struct jit_ctx *ctx, bool is_tail_call)
+
+ if (!is_tail_call) {
+ /* Set return value */
+- move_reg(ctx, LOONGARCH_GPR_A0, regmap[BPF_REG_0]);
++ emit_insn(ctx, addiw, LOONGARCH_GPR_A0, regmap[BPF_REG_0], 0);
+ /* Return to the caller */
+ emit_insn(ctx, jirl, LOONGARCH_GPR_RA, LOONGARCH_GPR_ZERO, 0);
+ } else {
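The LoongArch JIT change swaps a plain 64-bit register move for addi.w rd, rs, 0 when materializing the return value. On LoongArch, the .w arithmetic forms write the sign-extended 32-bit result into the 64-bit destination register, which matches the expectation that a BPF program's 32-bit return value reaches the caller properly sign-extended. The equivalent C-level operation:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* addi.w rd, rs, 0: take the low 32 bits of rs, sign-extend into rd. */
    static uint64_t addi_w_zero(uint64_t rs)
    {
            return (uint64_t)(int64_t)(int32_t)(uint32_t)rs;
    }

    int main(void)
    {
            uint64_t r0 = 0x00000000ffffffffull; /* 32-bit -1, top half clear */

            /* A plain 64-bit move would leave the upper bits zero ... */
            printf("move:   %#" PRIx64 "\n", r0);
            /* ... addi.w,0 sign-extends, giving the -1 the caller expects. */
            printf("addi.w: %#" PRIx64 "\n", addi_w_zero(r0));
            return 0;
    }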
+diff --git a/arch/loongarch/vdso/Makefile b/arch/loongarch/vdso/Makefile
+index 40c1175823d61d..fdde1bcd4e2663 100644
+--- a/arch/loongarch/vdso/Makefile
++++ b/arch/loongarch/vdso/Makefile
+@@ -19,7 +19,7 @@ ccflags-vdso := \
+ cflags-vdso := $(ccflags-vdso) \
+ -isystem $(shell $(CC) -print-file-name=include) \
+ $(filter -W%,$(filter-out -Wa$(comma)%,$(KBUILD_CFLAGS))) \
+- -O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
++ -std=gnu11 -O2 -g -fno-strict-aliasing -fno-common -fno-builtin \
+ -fno-stack-protector -fno-jump-tables -DDISABLE_BRANCH_PROFILING \
+ $(call cc-option, -fno-asynchronous-unwind-tables) \
+ $(call cc-option, -fno-stack-protector)
+diff --git a/arch/m68k/coldfire/device.c b/arch/m68k/coldfire/device.c
+index 7dab46728aedaf..b6958ec2a220cf 100644
+--- a/arch/m68k/coldfire/device.c
++++ b/arch/m68k/coldfire/device.c
+@@ -93,7 +93,7 @@ static struct platform_device mcf_uart = {
+ .dev.platform_data = mcf_uart_platform_data,
+ };
+
+-#if IS_ENABLED(CONFIG_FEC)
++#ifdef MCFFEC_BASE0
+
+ #ifdef CONFIG_M5441x
+ #define FEC_NAME "enet-fec"
+@@ -145,6 +145,7 @@ static struct platform_device mcf_fec0 = {
+ .platform_data = FEC_PDATA,
+ }
+ };
++#endif /* MCFFEC_BASE0 */
+
+ #ifdef MCFFEC_BASE1
+ static struct resource mcf_fec1_resources[] = {
+@@ -182,7 +183,6 @@ static struct platform_device mcf_fec1 = {
+ }
+ };
+ #endif /* MCFFEC_BASE1 */
+-#endif /* CONFIG_FEC */
+
+ #if IS_ENABLED(CONFIG_SPI_COLDFIRE_QSPI)
+ /*
+@@ -624,12 +624,12 @@ static struct platform_device mcf_flexcan0 = {
+
+ static struct platform_device *mcf_devices[] __initdata = {
+ &mcf_uart,
+-#if IS_ENABLED(CONFIG_FEC)
++#ifdef MCFFEC_BASE0
+ &mcf_fec0,
++#endif
+ #ifdef MCFFEC_BASE1
+ &mcf_fec1,
+ #endif
+-#endif
+ #if IS_ENABLED(CONFIG_SPI_COLDFIRE_QSPI)
+ &mcf_qspi,
+ #endif
+diff --git a/arch/m68k/include/asm/mcfgpio.h b/arch/m68k/include/asm/mcfgpio.h
+index 019f244395464d..9c91ecdafc4539 100644
+--- a/arch/m68k/include/asm/mcfgpio.h
++++ b/arch/m68k/include/asm/mcfgpio.h
+@@ -136,7 +136,7 @@ static inline void gpio_free(unsigned gpio)
+ * read-modify-write as well as those controlled by the EPORT and GPIO modules.
+ */
+ #define MCFGPIO_SCR_START 40
+-#elif defined(CONFIGM5441x)
++#elif defined(CONFIG_M5441x)
+ /* The m5441x EPORT doesn't have its own GPIO port, uses PORT C */
+ #define MCFGPIO_SCR_START 0
+ #else
+diff --git a/arch/m68k/include/asm/mvme147hw.h b/arch/m68k/include/asm/mvme147hw.h
+index e28eb1c0e0bfb3..dbf88059e47a4d 100644
+--- a/arch/m68k/include/asm/mvme147hw.h
++++ b/arch/m68k/include/asm/mvme147hw.h
+@@ -93,8 +93,8 @@ struct pcc_regs {
+ #define M147_SCC_B_ADDR 0xfffe3000
+ #define M147_SCC_PCLK 5000000
+
+-#define MVME147_IRQ_SCSI_PORT (IRQ_USER+0x45)
+-#define MVME147_IRQ_SCSI_DMA (IRQ_USER+0x46)
++#define MVME147_IRQ_SCSI_PORT (IRQ_USER + 5)
++#define MVME147_IRQ_SCSI_DMA (IRQ_USER + 6)
+
+ /* SCC interrupts, for MVME147 */
+
+diff --git a/arch/m68k/kernel/early_printk.c b/arch/m68k/kernel/early_printk.c
+index 3cc944df04f65e..f11ef9f1f56fcf 100644
+--- a/arch/m68k/kernel/early_printk.c
++++ b/arch/m68k/kernel/early_printk.c
+@@ -13,6 +13,7 @@
+ #include <asm/setup.h>
+
+
++#include "../mvme147/mvme147.h"
+ #include "../mvme16x/mvme16x.h"
+
+ asmlinkage void __init debug_cons_nputs(const char *s, unsigned n);
+@@ -22,7 +23,9 @@ static void __ref debug_cons_write(struct console *c,
+ {
+ #if !(defined(CONFIG_SUN3) || defined(CONFIG_M68000) || \
+ defined(CONFIG_COLDFIRE))
+- if (MACH_IS_MVME16x)
++ if (MACH_IS_MVME147)
++ mvme147_scc_write(c, s, n);
++ else if (MACH_IS_MVME16x)
+ mvme16x_cons_write(c, s, n);
+ else
+ debug_cons_nputs(s, n);
+diff --git a/arch/m68k/mvme147/config.c b/arch/m68k/mvme147/config.c
+index 8b5dc07f0811f2..cc2fb0a83cf0b4 100644
+--- a/arch/m68k/mvme147/config.c
++++ b/arch/m68k/mvme147/config.c
+@@ -32,6 +32,7 @@
+ #include <asm/mvme147hw.h>
+ #include <asm/config.h>
+
++#include "mvme147.h"
+
+ static void mvme147_get_model(char *model);
+ extern void mvme147_sched_init(void);
+@@ -185,3 +186,32 @@ int mvme147_hwclk(int op, struct rtc_time *t)
+ }
+ return 0;
+ }
++
++static void scc_delay(void)
++{
++ __asm__ __volatile__ ("nop; nop;");
++}
++
++static void scc_write(char ch)
++{
++ do {
++ scc_delay();
++ } while (!(in_8(M147_SCC_A_ADDR) & BIT(2)));
++ scc_delay();
++ out_8(M147_SCC_A_ADDR, 8);
++ scc_delay();
++ out_8(M147_SCC_A_ADDR, ch);
++}
++
++void mvme147_scc_write(struct console *co, const char *str, unsigned int count)
++{
++ unsigned long flags;
++
++ local_irq_save(flags);
++ while (count--) {
++ if (*str == '\n')
++ scc_write('\r');
++ scc_write(*str++);
++ }
++ local_irq_restore(flags);
++}
+diff --git a/arch/m68k/mvme147/mvme147.h b/arch/m68k/mvme147/mvme147.h
+new file mode 100644
+index 00000000000000..140bc98b0102aa
+--- /dev/null
++++ b/arch/m68k/mvme147/mvme147.h
+@@ -0,0 +1,6 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++struct console;
++
++/* config.c */
++void mvme147_scc_write(struct console *co, const char *str, unsigned int count);
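The new mvme147_scc_write() is a polled writer with the standard console ->write() signature; this patch wires it into debug_cons_write() above, but the same callback could in principle back a boot console. A hedged sketch of that alternative, not part of this patch:

        static struct console mvme147_boot_console = {  /* hypothetical */
                .name   = "debug",
                .write  = mvme147_scc_write,
                .flags  = CON_PRINTBUFFER | CON_BOOT,
                .index  = -1,
        };

        /* somewhere in early setup: */
        register_console(&mvme147_boot_console);
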
+diff --git a/arch/microblaze/kernel/microblaze_ksyms.c b/arch/microblaze/kernel/microblaze_ksyms.c
+index c892e173ec990b..a8553f54152b76 100644
+--- a/arch/microblaze/kernel/microblaze_ksyms.c
++++ b/arch/microblaze/kernel/microblaze_ksyms.c
+@@ -16,6 +16,7 @@
+ #include <asm/page.h>
+ #include <linux/ftrace.h>
+ #include <linux/uaccess.h>
++#include <asm/xilinx_mb_manager.h>
+
+ #ifdef CONFIG_FUNCTION_TRACER
+ extern void _mcount(void);
+@@ -46,3 +47,12 @@ extern void __udivsi3(void);
+ EXPORT_SYMBOL(__udivsi3);
+ extern void __umodsi3(void);
+ EXPORT_SYMBOL(__umodsi3);
++
++#ifdef CONFIG_MB_MANAGER
++extern void xmb_manager_register(uintptr_t phys_baseaddr, u32 cr_val,
++ void (*callback)(void *data),
++ void *priv, void (*reset_callback)(void *data));
++EXPORT_SYMBOL(xmb_manager_register);
++extern asmlinkage void xmb_inject_err(void);
++EXPORT_SYMBOL(xmb_inject_err);
++#endif
+diff --git a/arch/microblaze/kernel/prom.c b/arch/microblaze/kernel/prom.c
+index e424c796e297c5..76ac4cfdfb42ce 100644
+--- a/arch/microblaze/kernel/prom.c
++++ b/arch/microblaze/kernel/prom.c
+@@ -18,7 +18,7 @@ void __init early_init_devtree(void *params)
+ {
+ pr_debug(" -> early_init_devtree(%p)\n", params);
+
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ if (!strlen(boot_command_line))
+ strscpy(boot_command_line, cmd_line, COMMAND_LINE_SIZE);
+
+diff --git a/arch/mips/include/asm/switch_to.h b/arch/mips/include/asm/switch_to.h
+index a4374b4cb88fd8..d6ccd534402133 100644
+--- a/arch/mips/include/asm/switch_to.h
++++ b/arch/mips/include/asm/switch_to.h
+@@ -97,7 +97,7 @@ do { \
+ } \
+ } while (0)
+ #else
+-# define __sanitize_fcr31(next)
++# define __sanitize_fcr31(next) do { (void) (next); } while (0)
+ #endif
+
+ /*
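The __sanitize_fcr31() change replaces an empty macro body with the do { (void)(next); } while (0) idiom, which consumes its argument (silencing "set but not used" warnings when the only real use is compiled out) while still behaving as a single statement. A standalone illustration, using a hypothetical macro name:

        #define consume_arg(x)  do { (void)(x); } while (0)     /* hypothetical */

        void demo(int cond, int next)
        {
                if (cond)
                        consume_arg(next);      /* expands to one statement... */
                else
                        consume_arg(next);      /* ...so the else binds correctly */
        }
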
+diff --git a/arch/mips/kernel/prom.c b/arch/mips/kernel/prom.c
+index 6062e6fa589a87..4fd6da0a06c372 100644
+--- a/arch/mips/kernel/prom.c
++++ b/arch/mips/kernel/prom.c
+@@ -41,7 +41,7 @@ char *mips_get_machine_name(void)
+
+ void __init __dt_setup_arch(void *bph)
+ {
+- if (!early_init_dt_scan(bph))
++ if (!early_init_dt_scan(bph, __pa(bph)))
+ return;
+
+ mips_set_machine_name(of_flat_dt_get_machine_name());
+diff --git a/arch/mips/kernel/relocate.c b/arch/mips/kernel/relocate.c
+index 7eeeaf1ff95d26..cda7983e7c18d4 100644
+--- a/arch/mips/kernel/relocate.c
++++ b/arch/mips/kernel/relocate.c
+@@ -337,7 +337,7 @@ void *__init relocate_kernel(void)
+ #if defined(CONFIG_USE_OF)
+ /* Deal with the device tree */
+ fdt = plat_get_fdt();
+- early_init_dt_scan(fdt);
++ early_init_dt_scan(fdt, __pa(fdt));
+ if (boot_command_line[0]) {
+ /* Boot command line was passed in device tree */
+ strscpy(arcs_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+diff --git a/arch/nios2/kernel/prom.c b/arch/nios2/kernel/prom.c
+index 9a8393e6b4a85e..db049249766fc2 100644
+--- a/arch/nios2/kernel/prom.c
++++ b/arch/nios2/kernel/prom.c
+@@ -27,7 +27,7 @@ void __init early_init_devtree(void *params)
+ if (be32_to_cpup((__be32 *)CONFIG_NIOS2_DTB_PHYS_ADDR) ==
+ OF_DT_HEADER) {
+ params = (void *)CONFIG_NIOS2_DTB_PHYS_ADDR;
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ return;
+ }
+ #endif
+@@ -37,5 +37,5 @@ void __init early_init_devtree(void *params)
+ params = (void *)__dtb_start;
+ #endif
+
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ }
+diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
+index 69c0258700b28a..3279ef457c573a 100644
+--- a/arch/openrisc/Kconfig
++++ b/arch/openrisc/Kconfig
+@@ -65,6 +65,9 @@ config STACKTRACE_SUPPORT
+ config LOCKDEP_SUPPORT
+ def_bool y
+
++config FIX_EARLYCON_MEM
++ def_bool y
++
+ menu "Processor type and features"
+
+ choice
+diff --git a/arch/openrisc/include/asm/fixmap.h b/arch/openrisc/include/asm/fixmap.h
+index ecdb98a5839f7c..aaa6a26a3e9215 100644
+--- a/arch/openrisc/include/asm/fixmap.h
++++ b/arch/openrisc/include/asm/fixmap.h
+@@ -26,29 +26,18 @@
+ #include <linux/bug.h>
+ #include <asm/page.h>
+
+-/*
+- * On OpenRISC we use these special fixed_addresses for doing ioremap
+- * early in the boot process before memory initialization is complete.
+- * This is used, in particular, by the early serial console code.
+- *
+- * It's not really 'fixmap', per se, but fits loosely into the same
+- * paradigm.
+- */
+ enum fixed_addresses {
+- /*
+- * FIX_IOREMAP entries are useful for mapping physical address
+- * space before ioremap() is useable, e.g. really early in boot
+- * before kmalloc() is working.
+- */
+-#define FIX_N_IOREMAPS 32
+- FIX_IOREMAP_BEGIN,
+- FIX_IOREMAP_END = FIX_IOREMAP_BEGIN + FIX_N_IOREMAPS - 1,
++ FIX_EARLYCON_MEM_BASE,
+ __end_of_fixed_addresses
+ };
+
+ #define FIXADDR_SIZE (__end_of_fixed_addresses << PAGE_SHIFT)
+ /* FIXADDR_BOTTOM might be a better name here... */
+ #define FIXADDR_START (FIXADDR_TOP - FIXADDR_SIZE)
++#define FIXMAP_PAGE_IO PAGE_KERNEL_NOCACHE
++
++extern void __set_fixmap(enum fixed_addresses idx,
++ phys_addr_t phys, pgprot_t flags);
+
+ #include <asm-generic/fixmap.h>
+
+diff --git a/arch/openrisc/kernel/prom.c b/arch/openrisc/kernel/prom.c
+index 19e6008bf114c6..e424e9bd12a793 100644
+--- a/arch/openrisc/kernel/prom.c
++++ b/arch/openrisc/kernel/prom.c
+@@ -22,6 +22,6 @@
+
+ void __init early_init_devtree(void *params)
+ {
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ memblock_allow_resize();
+ }
+diff --git a/arch/openrisc/mm/init.c b/arch/openrisc/mm/init.c
+index 1dcd78c8f0e99b..d0cb1a0126f95d 100644
+--- a/arch/openrisc/mm/init.c
++++ b/arch/openrisc/mm/init.c
+@@ -207,6 +207,43 @@ void __init mem_init(void)
+ return;
+ }
+
++static int __init map_page(unsigned long va, phys_addr_t pa, pgprot_t prot)
++{
++ p4d_t *p4d;
++ pud_t *pud;
++ pmd_t *pmd;
++ pte_t *pte;
++
++ p4d = p4d_offset(pgd_offset_k(va), va);
++ pud = pud_offset(p4d, va);
++ pmd = pmd_offset(pud, va);
++ pte = pte_alloc_kernel(pmd, va);
++
++ if (pte == NULL)
++ return -ENOMEM;
++
++ if (pgprot_val(prot))
++ set_pte_at(&init_mm, va, pte, pfn_pte(pa >> PAGE_SHIFT, prot));
++ else
++ pte_clear(&init_mm, va, pte);
++
++ local_flush_tlb_page(NULL, va);
++ return 0;
++}
++
++void __init __set_fixmap(enum fixed_addresses idx,
++ phys_addr_t phys, pgprot_t prot)
++{
++ unsigned long address = __fix_to_virt(idx);
++
++ if (idx >= __end_of_fixed_addresses) {
++ BUG();
++ return;
++ }
++
++ map_page(address, phys, prot);
++}
++
+ static const pgprot_t protection_map[16] = {
+ [VM_NONE] = PAGE_NONE,
+ [VM_READ] = PAGE_READONLY_X,
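With FIX_EARLYCON_MEM_BASE and __set_fixmap() in place, openrisc can use the generic earlycon path: asm-generic/fixmap.h derives set_fixmap_io() from __set_fixmap() and FIXMAP_PAGE_IO. Roughly, mapping an early console register block then reduces to the following sketch (hypothetical uart_phys):

        set_fixmap_io(FIX_EARLYCON_MEM_BASE, uart_phys & PAGE_MASK);
        base = (void __iomem *)(__fix_to_virt(FIX_EARLYCON_MEM_BASE)
                                + (uart_phys & ~PAGE_MASK));
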
+diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
+index c91f9c2e61ed25..f8d08eab7db8b0 100644
+--- a/arch/parisc/kernel/ftrace.c
++++ b/arch/parisc/kernel/ftrace.c
+@@ -87,7 +87,7 @@ int ftrace_enable_ftrace_graph_caller(void)
+
+ int ftrace_disable_ftrace_graph_caller(void)
+ {
+- static_key_enable(&ftrace_graph_enable.key);
++ static_key_disable(&ftrace_graph_enable.key);
+ return 0;
+ }
+ #endif
+diff --git a/arch/powerpc/include/asm/dtl.h b/arch/powerpc/include/asm/dtl.h
+index d6f43d149f8dcb..a5c21bc623cb00 100644
+--- a/arch/powerpc/include/asm/dtl.h
++++ b/arch/powerpc/include/asm/dtl.h
+@@ -1,8 +1,8 @@
+ #ifndef _ASM_POWERPC_DTL_H
+ #define _ASM_POWERPC_DTL_H
+
++#include <linux/rwsem.h>
+ #include <asm/lppaca.h>
+-#include <linux/spinlock_types.h>
+
+ /*
+ * Layout of entries in the hypervisor's dispatch trace log buffer.
+@@ -35,7 +35,7 @@ struct dtl_entry {
+ #define DTL_LOG_ALL (DTL_LOG_CEDE | DTL_LOG_PREEMPT | DTL_LOG_FAULT)
+
+ extern struct kmem_cache *dtl_cache;
+-extern rwlock_t dtl_access_lock;
++extern struct rw_semaphore dtl_access_lock;
+
+ extern void register_dtl_buffer(int cpu);
+ extern void alloc_dtl_buffers(unsigned long *time_limit);
+diff --git a/arch/powerpc/include/asm/fadump.h b/arch/powerpc/include/asm/fadump.h
+index ef40c9b6972a6e..a48f54dde4f656 100644
+--- a/arch/powerpc/include/asm/fadump.h
++++ b/arch/powerpc/include/asm/fadump.h
+@@ -19,6 +19,7 @@ extern int is_fadump_active(void);
+ extern int should_fadump_crash(void);
+ extern void crash_fadump(struct pt_regs *, const char *);
+ extern void fadump_cleanup(void);
++void fadump_setup_param_area(void);
+ extern void fadump_append_bootargs(void);
+
+ #else /* CONFIG_FA_DUMP */
+@@ -26,6 +27,7 @@ static inline int is_fadump_active(void) { return 0; }
+ static inline int should_fadump_crash(void) { return 0; }
+ static inline void crash_fadump(struct pt_regs *regs, const char *str) { }
+ static inline void fadump_cleanup(void) { }
++static inline void fadump_setup_param_area(void) { }
+ static inline void fadump_append_bootargs(void) { }
+ #endif /* !CONFIG_FA_DUMP */
+
+@@ -34,4 +36,11 @@ extern int early_init_dt_scan_fw_dump(unsigned long node, const char *uname,
+ int depth, void *data);
+ extern int fadump_reserve_mem(void);
+ #endif
++
++#if defined(CONFIG_FA_DUMP) && defined(CONFIG_CMA)
++void fadump_cma_init(void);
++#else
++static inline void fadump_cma_init(void) { }
++#endif
++
+ #endif /* _ASM_POWERPC_FADUMP_H */
+diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
+index 2ef9a5f4e5d14c..11065313d4c123 100644
+--- a/arch/powerpc/include/asm/kvm_book3s_64.h
++++ b/arch/powerpc/include/asm/kvm_book3s_64.h
+@@ -684,8 +684,8 @@ int kvmhv_nestedv2_set_ptbl_entry(unsigned long lpid, u64 dw0, u64 dw1);
+ int kvmhv_nestedv2_parse_output(struct kvm_vcpu *vcpu);
+ int kvmhv_nestedv2_set_vpa(struct kvm_vcpu *vcpu, unsigned long vpa);
+
+-int kmvhv_counters_tracepoint_regfunc(void);
+-void kmvhv_counters_tracepoint_unregfunc(void);
++int kvmhv_counters_tracepoint_regfunc(void);
++void kvmhv_counters_tracepoint_unregfunc(void);
+ int kvmhv_get_l2_counters_status(void);
+ void kvmhv_set_l2_counters_status(int cpu, bool status);
+
+diff --git a/arch/powerpc/include/asm/sstep.h b/arch/powerpc/include/asm/sstep.h
+index 50950deedb8734..e3d0e714ff280e 100644
+--- a/arch/powerpc/include/asm/sstep.h
++++ b/arch/powerpc/include/asm/sstep.h
+@@ -173,9 +173,4 @@ int emulate_step(struct pt_regs *regs, ppc_inst_t instr);
+ */
+ extern int emulate_loadstore(struct pt_regs *regs, struct instruction_op *op);
+
+-extern void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
+- const void *mem, bool cross_endian);
+-extern void emulate_vsx_store(struct instruction_op *op,
+- const union vsx_reg *reg, void *mem,
+- bool cross_endian);
+ extern int emulate_dcbz(unsigned long ea, struct pt_regs *regs);
+diff --git a/arch/powerpc/include/asm/vdso.h b/arch/powerpc/include/asm/vdso.h
+index 7650b6ce14c85a..8d972bc98b55fe 100644
+--- a/arch/powerpc/include/asm/vdso.h
++++ b/arch/powerpc/include/asm/vdso.h
+@@ -25,6 +25,7 @@ int vdso_getcpu_init(void);
+ #ifdef __VDSO64__
+ #define V_FUNCTION_BEGIN(name) \
+ .globl name; \
++ .type name,@function; \
+ name: \
+
+ #define V_FUNCTION_END(name) \
+diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
+index af4263594eb2c9..1bee15c013e75f 100644
+--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
++++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
+@@ -867,7 +867,7 @@ bool __init dt_cpu_ftrs_init(void *fdt)
+ using_dt_cpu_ftrs = false;
+
+ /* Setup and verify the FDT, if it fails we just bail */
+- if (!early_init_dt_verify(fdt))
++ if (!early_init_dt_verify(fdt, __pa(fdt)))
+ return false;
+
+ if (!of_scan_flat_dt(fdt_find_cpu_features, NULL))
+diff --git a/arch/powerpc/kernel/fadump.c b/arch/powerpc/kernel/fadump.c
+index a612e7513a4f8a..4641de75f7fc1e 100644
+--- a/arch/powerpc/kernel/fadump.c
++++ b/arch/powerpc/kernel/fadump.c
+@@ -78,27 +78,23 @@ static struct cma *fadump_cma;
+ * But for some reason even if it fails we still have the memory reservation
+ * with us and we can still continue doing fadump.
+ */
+-static int __init fadump_cma_init(void)
++void __init fadump_cma_init(void)
+ {
+ unsigned long long base, size;
+ int rc;
+
+- if (!fw_dump.fadump_enabled)
+- return 0;
+-
++ if (!fw_dump.fadump_supported || !fw_dump.fadump_enabled ||
++ fw_dump.dump_active)
++ return;
+ /*
+ * Do not use CMA if user has provided fadump=nocma kernel parameter.
+- * Return 1 to continue with fadump old behaviour.
+ */
+- if (fw_dump.nocma)
+- return 1;
++ if (fw_dump.nocma || !fw_dump.boot_memory_size)
++ return;
+
+ base = fw_dump.reserve_dump_area_start;
+ size = fw_dump.boot_memory_size;
+
+- if (!size)
+- return 0;
+-
+ rc = cma_init_reserved_mem(base, size, 0, "fadump_cma", &fadump_cma);
+ if (rc) {
+ pr_err("Failed to init cma area for firmware-assisted dump,%d\n", rc);
+@@ -108,7 +104,7 @@ static int __init fadump_cma_init(void)
+ * blocked from production system usage. Hence return 1,
+ * so that we can continue with fadump.
+ */
+- return 1;
++ return;
+ }
+
+ /*
+@@ -125,10 +121,7 @@ static int __init fadump_cma_init(void)
+ cma_get_size(fadump_cma),
+ (unsigned long)cma_get_base(fadump_cma) >> 20,
+ fw_dump.reserve_dump_area_size);
+- return 1;
+ }
+-#else
+-static int __init fadump_cma_init(void) { return 1; }
+ #endif /* CONFIG_CMA */
+
+ /*
+@@ -143,7 +136,7 @@ void __init fadump_append_bootargs(void)
+ if (!fw_dump.dump_active || !fw_dump.param_area_supported || !fw_dump.param_area)
+ return;
+
+- if (fw_dump.param_area >= fw_dump.boot_mem_top) {
++ if (fw_dump.param_area < fw_dump.boot_mem_top) {
+ if (memblock_reserve(fw_dump.param_area, COMMAND_LINE_SIZE)) {
+ pr_warn("WARNING: Can't use additional parameters area!\n");
+ fw_dump.param_area = 0;
+@@ -637,8 +630,6 @@ int __init fadump_reserve_mem(void)
+
+ pr_info("Reserved %lldMB of memory at %#016llx (System RAM: %lldMB)\n",
+ (size >> 20), base, (memblock_phys_mem_size() >> 20));
+-
+- ret = fadump_cma_init();
+ }
+
+ return ret;
+@@ -1586,6 +1577,12 @@ static void __init fadump_init_files(void)
+ return;
+ }
+
++ if (fw_dump.param_area) {
++ rc = sysfs_create_file(fadump_kobj, &bootargs_append_attr.attr);
++ if (rc)
++ pr_err("unable to create bootargs_append sysfs file (%d)\n", rc);
++ }
++
+ debugfs_create_file("fadump_region", 0444, arch_debugfs_dir, NULL,
+ &fadump_region_fops);
+
+@@ -1740,7 +1737,7 @@ static void __init fadump_process(void)
+ * Reserve memory to store additional parameters to be passed
+ * for fadump/capture kernel.
+ */
+-static void __init fadump_setup_param_area(void)
++void __init fadump_setup_param_area(void)
+ {
+ phys_addr_t range_start, range_end;
+
+@@ -1748,7 +1745,7 @@ static void __init fadump_setup_param_area(void)
+ return;
+
+ /* This memory can't be used by PFW or bootloader as it is shared across kernels */
+- if (radix_enabled()) {
++ if (early_radix_enabled()) {
+ /*
+ * Anywhere in the upper half should be good enough as all memory
+ * is accessible in real mode.
+@@ -1776,12 +1773,12 @@ static void __init fadump_setup_param_area(void)
+ COMMAND_LINE_SIZE,
+ range_start,
+ range_end);
+- if (!fw_dump.param_area || sysfs_create_file(fadump_kobj, &bootargs_append_attr.attr)) {
++ if (!fw_dump.param_area) {
+ pr_warn("WARNING: Could not setup area to pass additional parameters!\n");
+ return;
+ }
+
+- memset(phys_to_virt(fw_dump.param_area), 0, COMMAND_LINE_SIZE);
++ memset((void *)fw_dump.param_area, 0, COMMAND_LINE_SIZE);
+ }
+
+ /*
+@@ -1807,7 +1804,6 @@ int __init setup_fadump(void)
+ }
+ /* Initialize the kernel dump memory structure and register with f/w */
+ else if (fw_dump.reserve_dump_area_size) {
+- fadump_setup_param_area();
+ fw_dump.ops->fadump_init_mem_struct(&fw_dump);
+ register_fadump();
+ }
+diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
+index 0be07ed407c703..e0059842a1c64b 100644
+--- a/arch/powerpc/kernel/prom.c
++++ b/arch/powerpc/kernel/prom.c
+@@ -791,7 +791,7 @@ void __init early_init_devtree(void *params)
+ DBG(" -> early_init_devtree(%px)\n", params);
+
+ /* Too early to BUG_ON(), do it by hand */
+- if (!early_init_dt_verify(params))
++ if (!early_init_dt_verify(params, __pa(params)))
+ panic("BUG: Failed verifying flat device tree, bad version?");
+
+ of_scan_flat_dt(early_init_dt_scan_model, NULL);
+@@ -908,6 +908,9 @@ void __init early_init_devtree(void *params)
+
+ mmu_early_init_devtree();
+
++ /* Setup param area for passing additional parameters to fadump capture kernel. */
++ fadump_setup_param_area();
++
+ #ifdef CONFIG_PPC_POWERNV
+ /* Scan and build the list of machine check recoverable ranges */
+ of_scan_flat_dt(early_init_dt_scan_recoverable_ranges, NULL);
+diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
+index 943430077375a4..b6b01502e50472 100644
+--- a/arch/powerpc/kernel/setup-common.c
++++ b/arch/powerpc/kernel/setup-common.c
+@@ -997,9 +997,11 @@ void __init setup_arch(char **cmdline_p)
+ initmem_init();
+
+ /*
+- * Reserve large chunks of memory for use by CMA for KVM and hugetlb. These must
+- * be called after initmem_init(), so that pageblock_order is initialised.
++ * Reserve large chunks of memory for use by CMA for fadump, KVM and
++ * hugetlb. These must be called after initmem_init(), so that
++ * pageblock_order is initialised.
+ */
++ fadump_cma_init();
+ kvm_cma_reserve();
+ gigantic_hugetlb_cma_reserve();
+
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 22f83fbbc762ac..1edc7cd68c10d0 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -920,6 +920,7 @@ static int __init disable_hardlockup_detector(void)
+ hardlockup_detector_disable();
+ #else
+ if (firmware_has_feature(FW_FEATURE_LPAR)) {
++ check_kvm_guest();
+ if (is_kvm_guest())
+ hardlockup_detector_disable();
+ }
+diff --git a/arch/powerpc/kexec/file_load_64.c b/arch/powerpc/kexec/file_load_64.c
+index 9738adabeb1fee..dc65c139115772 100644
+--- a/arch/powerpc/kexec/file_load_64.c
++++ b/arch/powerpc/kexec/file_load_64.c
+@@ -736,13 +736,18 @@ int setup_purgatory_ppc64(struct kimage *image, const void *slave_code,
+ if (dn) {
+ u64 val;
+
+- of_property_read_u64(dn, "opal-base-address", &val);
++ ret = of_property_read_u64(dn, "opal-base-address", &val);
++ if (ret)
++ goto out;
++
+ ret = kexec_purgatory_get_set_symbol(image, "opal_base", &val,
+ sizeof(val), false);
+ if (ret)
+ goto out;
+
+- of_property_read_u64(dn, "opal-entry-address", &val);
++ ret = of_property_read_u64(dn, "opal-entry-address", &val);
++ if (ret)
++ goto out;
+ ret = kexec_purgatory_get_set_symbol(image, "opal_entry", &val,
+ sizeof(val), false);
+ }
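The fix above is plain error propagation: of_property_read_u64() returns non-zero for a missing or malformed property, and the output value must not be consumed in that case. The pattern, with a hypothetical property name:

        u64 val;

        ret = of_property_read_u64(dn, "example-property", &val);      /* hypothetical */
        if (ret)
                goto out;       /* never use 'val' on failure */
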
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index ad8dc4ccdaab9e..57b6c1ba84d47e 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -4154,7 +4154,7 @@ void kvmhv_set_l2_counters_status(int cpu, bool status)
+ lppaca_of(cpu).l2_counters_enable = 0;
+ }
+
+-int kmvhv_counters_tracepoint_regfunc(void)
++int kvmhv_counters_tracepoint_regfunc(void)
+ {
+ int cpu;
+
+@@ -4164,7 +4164,7 @@ int kmvhv_counters_tracepoint_regfunc(void)
+ return 0;
+ }
+
+-void kmvhv_counters_tracepoint_unregfunc(void)
++void kvmhv_counters_tracepoint_unregfunc(void)
+ {
+ int cpu;
+
+@@ -4309,6 +4309,15 @@ static int kvmhv_vcpu_entry_p9_nested(struct kvm_vcpu *vcpu, u64 time_limit, uns
+ }
+ hvregs.hdec_expiry = time_limit;
+
++ /*
++ * hvregs has the doorbell status, so zero it here which
++ * enables us to receive doorbells when H_ENTER_NESTED is
++ * in progress for this vCPU
++ */
++
++ if (vcpu->arch.doorbell_request)
++ vcpu->arch.doorbell_request = 0;
++
+ /*
+ * When setting DEC, we must always deal with irq_work_raise
+ * via NMI vs setting DEC. The problem occurs right as we
+@@ -4912,7 +4921,6 @@ int kvmhv_run_single_vcpu(struct kvm_vcpu *vcpu, u64 time_limit,
+ lpcr &= ~LPCR_MER;
+ }
+ } else if (vcpu->arch.pending_exceptions ||
+- vcpu->arch.doorbell_request ||
+ xive_interrupt_pending(vcpu)) {
+ vcpu->arch.ret = RESUME_HOST;
+ goto out;
+diff --git a/arch/powerpc/kvm/book3s_hv_nested.c b/arch/powerpc/kvm/book3s_hv_nested.c
+index 05f5220960c63b..125440a606ee3b 100644
+--- a/arch/powerpc/kvm/book3s_hv_nested.c
++++ b/arch/powerpc/kvm/book3s_hv_nested.c
+@@ -32,7 +32,7 @@ void kvmhv_save_hv_regs(struct kvm_vcpu *vcpu, struct hv_guest_state *hr)
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+
+ hr->pcr = vc->pcr | PCR_MASK;
+- hr->dpdes = vc->dpdes;
++ hr->dpdes = vcpu->arch.doorbell_request;
+ hr->hfscr = vcpu->arch.hfscr;
+ hr->tb_offset = vc->tb_offset;
+ hr->dawr0 = vcpu->arch.dawr0;
+@@ -105,7 +105,7 @@ static void save_hv_return_state(struct kvm_vcpu *vcpu,
+ {
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+
+- hr->dpdes = vc->dpdes;
++ hr->dpdes = vcpu->arch.doorbell_request;
+ hr->purr = vcpu->arch.purr;
+ hr->spurr = vcpu->arch.spurr;
+ hr->ic = vcpu->arch.ic;
+@@ -143,7 +143,7 @@ static void restore_hv_regs(struct kvm_vcpu *vcpu, const struct hv_guest_state *
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+
+ vc->pcr = hr->pcr | PCR_MASK;
+- vc->dpdes = hr->dpdes;
++ vcpu->arch.doorbell_request = hr->dpdes;
+ vcpu->arch.hfscr = hr->hfscr;
+ vcpu->arch.dawr0 = hr->dawr0;
+ vcpu->arch.dawrx0 = hr->dawrx0;
+@@ -170,7 +170,13 @@ void kvmhv_restore_hv_return_state(struct kvm_vcpu *vcpu,
+ {
+ struct kvmppc_vcore *vc = vcpu->arch.vcore;
+
+- vc->dpdes = hr->dpdes;
++ /*
++ * This L2 vCPU might have received a doorbell while H_ENTER_NESTED was being handled.
++ * Make sure we preserve the doorbell if it was either:
++ * a) Sent after H_ENTER_NESTED was called on this vCPU (arch.doorbell_request would be 1)
++ * b) Doorbell was not handled and L2 exited for some other reason (hr->dpdes would be 1)
++ */
++ vcpu->arch.doorbell_request = vcpu->arch.doorbell_request | hr->dpdes;
+ vcpu->arch.hfscr = hr->hfscr;
+ vcpu->arch.purr = hr->purr;
+ vcpu->arch.spurr = hr->spurr;
+diff --git a/arch/powerpc/kvm/trace_hv.h b/arch/powerpc/kvm/trace_hv.h
+index 77ebc724e6cdf4..35fccaa575cc15 100644
+--- a/arch/powerpc/kvm/trace_hv.h
++++ b/arch/powerpc/kvm/trace_hv.h
+@@ -538,7 +538,7 @@ TRACE_EVENT_FN_COND(kvmppc_vcpu_stats,
+ TP_printk("VCPU %d: l1_to_l2_cs_time=%llu ns l2_to_l1_cs_time=%llu ns l2_runtime=%llu ns",
+ __entry->vcpu_id, __entry->l1_to_l2_cs,
+ __entry->l2_to_l1_cs, __entry->l2_runtime),
+- kmvhv_counters_tracepoint_regfunc, kmvhv_counters_tracepoint_unregfunc
++ kvmhv_counters_tracepoint_regfunc, kvmhv_counters_tracepoint_unregfunc
+ );
+ #endif
+ #endif /* _TRACE_KVM_HV_H */
+diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
+index e65f3fb68d06ba..ac3ee19531d8ac 100644
+--- a/arch/powerpc/lib/sstep.c
++++ b/arch/powerpc/lib/sstep.c
+@@ -780,8 +780,8 @@ static nokprobe_inline int emulate_stq(struct pt_regs *regs, unsigned long ea,
+ #endif /* __powerpc64 */
+
+ #ifdef CONFIG_VSX
+-void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
+- const void *mem, bool rev)
++static nokprobe_inline void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
++ const void *mem, bool rev)
+ {
+ int size, read_size;
+ int i, j;
+@@ -863,11 +863,9 @@ void emulate_vsx_load(struct instruction_op *op, union vsx_reg *reg,
+ break;
+ }
+ }
+-EXPORT_SYMBOL_GPL(emulate_vsx_load);
+-NOKPROBE_SYMBOL(emulate_vsx_load);
+
+-void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
+- void *mem, bool rev)
++static nokprobe_inline void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
++ void *mem, bool rev)
+ {
+ int size, write_size;
+ int i, j;
+@@ -955,8 +953,6 @@ void emulate_vsx_store(struct instruction_op *op, const union vsx_reg *reg,
+ break;
+ }
+ }
+-EXPORT_SYMBOL_GPL(emulate_vsx_store);
+-NOKPROBE_SYMBOL(emulate_vsx_store);
+
+ static nokprobe_inline int do_vsx_load(struct instruction_op *op,
+ unsigned long ea, struct pt_regs *regs,
+diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
+index 81c77ddce2e30a..c156fe0d53c378 100644
+--- a/arch/powerpc/mm/fault.c
++++ b/arch/powerpc/mm/fault.c
+@@ -439,10 +439,16 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
+ /*
+ * The kernel should never take an execute fault nor should it
+ * take a page fault to a kernel address or a page fault to a user
+- * address outside of dedicated places
++ * address outside of dedicated places.
++ *
++ * Rather than kfence directly reporting false negatives, search whether
++ * the NIP belongs to the fixup table for cases where fault could come
++ * from functions like copy_from_kernel_nofault().
+ */
+ if (unlikely(!is_user && bad_kernel_fault(regs, error_code, address, is_write))) {
+- if (kfence_handle_page_fault(address, is_write, regs))
++ if (is_kfence_address((void *)address) &&
++ !search_exception_tables(instruction_pointer(regs)) &&
++ kfence_handle_page_fault(address, is_write, regs))
+ return 0;
+
+ return SIGSEGV;
+diff --git a/arch/powerpc/platforms/pseries/dtl.c b/arch/powerpc/platforms/pseries/dtl.c
+index 8cb9d36ea49159..f293588b8c7b51 100644
+--- a/arch/powerpc/platforms/pseries/dtl.c
++++ b/arch/powerpc/platforms/pseries/dtl.c
+@@ -191,7 +191,7 @@ static int dtl_enable(struct dtl *dtl)
+ return -EBUSY;
+
+ /* ensure there are no other conflicting dtl users */
+- if (!read_trylock(&dtl_access_lock))
++ if (!down_read_trylock(&dtl_access_lock))
+ return -EBUSY;
+
+ n_entries = dtl_buf_entries;
+@@ -199,7 +199,7 @@ static int dtl_enable(struct dtl *dtl)
+ if (!buf) {
+ printk(KERN_WARNING "%s: buffer alloc failed for cpu %d\n",
+ __func__, dtl->cpu);
+- read_unlock(&dtl_access_lock);
++ up_read(&dtl_access_lock);
+ return -ENOMEM;
+ }
+
+@@ -217,7 +217,7 @@ static int dtl_enable(struct dtl *dtl)
+ spin_unlock(&dtl->lock);
+
+ if (rc) {
+- read_unlock(&dtl_access_lock);
++ up_read(&dtl_access_lock);
+ kmem_cache_free(dtl_cache, buf);
+ }
+
+@@ -232,7 +232,7 @@ static void dtl_disable(struct dtl *dtl)
+ dtl->buf = NULL;
+ dtl->buf_entries = 0;
+ spin_unlock(&dtl->lock);
+- read_unlock(&dtl_access_lock);
++ up_read(&dtl_access_lock);
+ }
+
+ /* file interface */
+diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
+index c1d8bee8f7018c..bb09990eec309a 100644
+--- a/arch/powerpc/platforms/pseries/lpar.c
++++ b/arch/powerpc/platforms/pseries/lpar.c
+@@ -169,7 +169,7 @@ struct vcpu_dispatch_data {
+ */
+ #define NR_CPUS_H NR_CPUS
+
+-DEFINE_RWLOCK(dtl_access_lock);
++DECLARE_RWSEM(dtl_access_lock);
+ static DEFINE_PER_CPU(struct vcpu_dispatch_data, vcpu_disp_data);
+ static DEFINE_PER_CPU(u64, dtl_entry_ridx);
+ static DEFINE_PER_CPU(struct dtl_worker, dtl_workers);
+@@ -463,7 +463,7 @@ static int dtl_worker_enable(unsigned long *time_limit)
+ {
+ int rc = 0, state;
+
+- if (!write_trylock(&dtl_access_lock)) {
++ if (!down_write_trylock(&dtl_access_lock)) {
+ rc = -EBUSY;
+ goto out;
+ }
+@@ -479,7 +479,7 @@ static int dtl_worker_enable(unsigned long *time_limit)
+ pr_err("vcpudispatch_stats: unable to setup workqueue for DTL processing\n");
+ free_dtl_buffers(time_limit);
+ reset_global_dtl_mask();
+- write_unlock(&dtl_access_lock);
++ up_write(&dtl_access_lock);
+ rc = -EINVAL;
+ goto out;
+ }
+@@ -494,7 +494,7 @@ static void dtl_worker_disable(unsigned long *time_limit)
+ cpuhp_remove_state(dtl_worker_state);
+ free_dtl_buffers(time_limit);
+ reset_global_dtl_mask();
+- write_unlock(&dtl_access_lock);
++ up_write(&dtl_access_lock);
+ }
+
+ static ssize_t vcpudispatch_stats_write(struct file *file, const char __user *p,
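Converting dtl_access_lock from an rwlock to an rw_semaphore lets the DTL enable/disable paths sleep (e.g. in allocations or cpuhp callbacks) while still excluding the conflicting user. A minimal sketch of the reader-side pattern, with hypothetical names:

        static DECLARE_RWSEM(example_lock);             /* hypothetical */

        static int example_enable(void)
        {
                if (!down_read_trylock(&example_lock))
                        return -EBUSY;          /* exclusive user already active */
                /* may sleep here, which a rwsem (unlike a rwlock) permits */
                return 0;
        }

        static void example_disable(void)
        {
                up_read(&example_lock);
        }
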
+diff --git a/arch/powerpc/platforms/pseries/plpks.c b/arch/powerpc/platforms/pseries/plpks.c
+index 4a595493d28ae3..b1667ed05f9882 100644
+--- a/arch/powerpc/platforms/pseries/plpks.c
++++ b/arch/powerpc/platforms/pseries/plpks.c
+@@ -683,7 +683,7 @@ void __init plpks_early_init_devtree(void)
+ out:
+ fdt_nop_property(fdt, chosen_node, "ibm,plpks-pw");
+ // Since we've cleared the password, we must update the FDT checksum
+- early_init_dt_verify(fdt);
++ early_init_dt_verify(fdt, __pa(fdt));
+ }
+
+ static __init int pseries_plpks_init(void)
+diff --git a/arch/riscv/include/asm/cpufeature.h b/arch/riscv/include/asm/cpufeature.h
+index 45f9c1171a486a..dfa5cdddd3671b 100644
+--- a/arch/riscv/include/asm/cpufeature.h
++++ b/arch/riscv/include/asm/cpufeature.h
+@@ -8,6 +8,7 @@
+
+ #include <linux/bitmap.h>
+ #include <linux/jump_label.h>
++#include <linux/workqueue.h>
+ #include <asm/hwcap.h>
+ #include <asm/alternative-macros.h>
+ #include <asm/errno.h>
+@@ -60,6 +61,7 @@ void riscv_user_isa_enable(void);
+
+ #if defined(CONFIG_RISCV_MISALIGNED)
+ bool check_unaligned_access_emulated_all_cpus(void);
++void check_unaligned_access_emulated(struct work_struct *work __always_unused);
+ void unaligned_emulation_finish(void);
+ bool unaligned_ctl_available(void);
+ DECLARE_PER_CPU(long, misaligned_access_speed);
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index a2cde65b69e950..26c886db4fb3d1 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -227,7 +227,7 @@ static void __init init_resources(void)
+ static void __init parse_dtb(void)
+ {
+ /* Early scan of device tree from init memory */
+- if (early_init_dt_scan(dtb_early_va)) {
++ if (early_init_dt_scan(dtb_early_va, __pa(dtb_early_va))) {
+ const char *name = of_flat_dt_get_machine_name();
+
+ if (name) {
+diff --git a/arch/riscv/kernel/traps_misaligned.c b/arch/riscv/kernel/traps_misaligned.c
+index 1b9867136b6100..9a80a12f6b48f2 100644
+--- a/arch/riscv/kernel/traps_misaligned.c
++++ b/arch/riscv/kernel/traps_misaligned.c
+@@ -524,11 +524,11 @@ int handle_misaligned_store(struct pt_regs *regs)
+ return 0;
+ }
+
+-static bool check_unaligned_access_emulated(int cpu)
++void check_unaligned_access_emulated(struct work_struct *work __always_unused)
+ {
++ int cpu = smp_processor_id();
+ long *mas_ptr = per_cpu_ptr(&misaligned_access_speed, cpu);
+ unsigned long tmp_var, tmp_val;
+- bool misaligned_emu_detected;
+
+ *mas_ptr = RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN;
+
+@@ -536,19 +536,16 @@ static bool check_unaligned_access_emulated(int cpu)
+ " "REG_L" %[tmp], 1(%[ptr])\n"
+ : [tmp] "=r" (tmp_val) : [ptr] "r" (&tmp_var) : "memory");
+
+- misaligned_emu_detected = (*mas_ptr == RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED);
+ /*
+ * If unaligned_ctl is already set, this means that we detected that all
+ * CPUS uses emulated misaligned access at boot time. If that changed
+ * when hotplugging the new cpu, this is something we don't handle.
+ */
+- if (unlikely(unaligned_ctl && !misaligned_emu_detected)) {
++ if (unlikely(unaligned_ctl && (*mas_ptr != RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED))) {
+ pr_crit("CPU misaligned accesses non homogeneous (expected all emulated)\n");
+ while (true)
+ cpu_relax();
+ }
+-
+- return misaligned_emu_detected;
+ }
+
+ bool check_unaligned_access_emulated_all_cpus(void)
+@@ -560,8 +557,11 @@ bool check_unaligned_access_emulated_all_cpus(void)
+ * accesses emulated since tasks requesting such control can run on any
+ * CPU.
+ */
++ schedule_on_each_cpu(check_unaligned_access_emulated);
++
+ for_each_online_cpu(cpu)
+- if (!check_unaligned_access_emulated(cpu))
++ if (per_cpu(misaligned_access_speed, cpu)
++ != RISCV_HWPROBE_MISALIGNED_SCALAR_EMULATED)
+ return false;
+
+ unaligned_ctl = true;
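The rework above hinges on schedule_on_each_cpu(): the emulation probe must execute on the CPU it measures, so it becomes a work item queued to every online CPU, each recording its result in a per-CPU variable for the caller to inspect afterwards. A reduced sketch with hypothetical names:

        static DEFINE_PER_CPU(long, probe_result);      /* hypothetical */

        static void probe_func(struct work_struct *work)
        {
                int cpu = smp_processor_id();   /* runs pinned on each CPU in turn */

                per_cpu(probe_result, cpu) = do_measurement();  /* hypothetical */
        }

        /* caller side: runs probe_func on every online CPU and waits */
        schedule_on_each_cpu(probe_func);
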
+diff --git a/arch/riscv/kernel/unaligned_access_speed.c b/arch/riscv/kernel/unaligned_access_speed.c
+index 160628a2116de4..f3508cc54f91ae 100644
+--- a/arch/riscv/kernel/unaligned_access_speed.c
++++ b/arch/riscv/kernel/unaligned_access_speed.c
+@@ -191,6 +191,7 @@ static int riscv_online_cpu(unsigned int cpu)
+ if (per_cpu(misaligned_access_speed, cpu) != RISCV_HWPROBE_MISALIGNED_SCALAR_UNKNOWN)
+ goto exit;
+
++ check_unaligned_access_emulated(NULL);
+ buf = alloc_pages(GFP_KERNEL, MISALIGNED_BUFFER_ORDER);
+ if (!buf) {
+ pr_warn("Allocation failure, not measuring misaligned performance\n");
+diff --git a/arch/riscv/kvm/aia_aplic.c b/arch/riscv/kvm/aia_aplic.c
+index da6ff1bade0df5..f59d1c0c8c43a7 100644
+--- a/arch/riscv/kvm/aia_aplic.c
++++ b/arch/riscv/kvm/aia_aplic.c
+@@ -143,7 +143,7 @@ static void aplic_write_pending(struct aplic *aplic, u32 irq, bool pending)
+ if (sm == APLIC_SOURCECFG_SM_LEVEL_HIGH ||
+ sm == APLIC_SOURCECFG_SM_LEVEL_LOW) {
+ if (!pending)
+- goto skip_write_pending;
++ goto noskip_write_pending;
+ if ((irqd->state & APLIC_IRQ_STATE_INPUT) &&
+ sm == APLIC_SOURCECFG_SM_LEVEL_LOW)
+ goto skip_write_pending;
+@@ -152,6 +152,7 @@ static void aplic_write_pending(struct aplic *aplic, u32 irq, bool pending)
+ goto skip_write_pending;
+ }
+
++noskip_write_pending:
+ if (pending)
+ irqd->state |= APLIC_IRQ_STATE_PENDING;
+ else
+diff --git a/arch/riscv/kvm/vcpu_sbi.c b/arch/riscv/kvm/vcpu_sbi.c
+index 7de128be8db9bc..6e704ed86a83a9 100644
+--- a/arch/riscv/kvm/vcpu_sbi.c
++++ b/arch/riscv/kvm/vcpu_sbi.c
+@@ -486,19 +486,22 @@ void kvm_riscv_vcpu_sbi_init(struct kvm_vcpu *vcpu)
+ struct kvm_vcpu_sbi_context *scontext = &vcpu->arch.sbi_context;
+ const struct kvm_riscv_sbi_extension_entry *entry;
+ const struct kvm_vcpu_sbi_extension *ext;
+- int i;
++ int idx, i;
+
+ for (i = 0; i < ARRAY_SIZE(sbi_ext); i++) {
+ entry = &sbi_ext[i];
+ ext = entry->ext_ptr;
++ idx = entry->ext_idx;
++
++ if (idx < 0 || idx >= ARRAY_SIZE(scontext->ext_status))
++ continue;
+
+ if (ext->probe && !ext->probe(vcpu)) {
+- scontext->ext_status[entry->ext_idx] =
+- KVM_RISCV_SBI_EXT_STATUS_UNAVAILABLE;
++ scontext->ext_status[idx] = KVM_RISCV_SBI_EXT_STATUS_UNAVAILABLE;
+ continue;
+ }
+
+- scontext->ext_status[entry->ext_idx] = ext->default_disabled ?
++ scontext->ext_status[idx] = ext->default_disabled ?
+ KVM_RISCV_SBI_EXT_STATUS_DISABLED :
+ KVM_RISCV_SBI_EXT_STATUS_ENABLED;
+ }
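The change above is defensive indexing: ext_idx is validated against the bounds of scontext->ext_status[] once, cached in a local, and every later access goes through the checked local. In general form, with a hypothetical table name:

        idx = entry->ext_idx;
        if (idx < 0 || idx >= ARRAY_SIZE(table))        /* hypothetical table */
                continue;       /* skip entries that would index out of range */
        table[idx] = value;
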
+diff --git a/arch/s390/include/asm/facility.h b/arch/s390/include/asm/facility.h
+index 715bcf8fb69a51..5f5b1aa6c23312 100644
+--- a/arch/s390/include/asm/facility.h
++++ b/arch/s390/include/asm/facility.h
+@@ -88,7 +88,7 @@ static __always_inline bool test_facility(unsigned long nr)
+ return __test_facility(nr, &stfle_fac_list);
+ }
+
+-static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size)
++static inline unsigned long __stfle_asm(u64 *fac_list, int size)
+ {
+ unsigned long reg0 = size - 1;
+
+@@ -96,7 +96,7 @@ static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size)
+ " lgr 0,%[reg0]\n"
+ " .insn s,0xb2b00000,%[list]\n" /* stfle */
+ " lgr %[reg0],0\n"
+- : [reg0] "+&d" (reg0), [list] "+Q" (*stfle_fac_list)
++ : [reg0] "+&d" (reg0), [list] "+Q" (*fac_list)
+ :
+ : "memory", "cc", "0");
+ return reg0;
+@@ -104,10 +104,10 @@ static inline unsigned long __stfle_asm(u64 *stfle_fac_list, int size)
+
+ /**
+ * stfle - Store facility list extended
+- * @stfle_fac_list: array where facility list can be stored
++ * @fac_list: array where facility list can be stored
+ * @size: size of passed in array in double words
+ */
+-static inline void __stfle(u64 *stfle_fac_list, int size)
++static inline void __stfle(u64 *fac_list, int size)
+ {
+ unsigned long nr;
+ u32 stfl_fac_list;
+@@ -116,20 +116,20 @@ static inline void __stfle(u64 *stfle_fac_list, int size)
+ " stfl 0(0)\n"
+ : "=m" (get_lowcore()->stfl_fac_list));
+ stfl_fac_list = get_lowcore()->stfl_fac_list;
+- memcpy(stfle_fac_list, &stfl_fac_list, 4);
++ memcpy(fac_list, &stfl_fac_list, 4);
+ nr = 4; /* bytes stored by stfl */
+ if (stfl_fac_list & 0x01000000) {
+ /* More facility bits available with stfle */
+- nr = __stfle_asm(stfle_fac_list, size);
++ nr = __stfle_asm(fac_list, size);
+ nr = min_t(unsigned long, (nr + 1) * 8, size * 8);
+ }
+- memset((char *) stfle_fac_list + nr, 0, size * 8 - nr);
++ memset((char *)fac_list + nr, 0, size * 8 - nr);
+ }
+
+-static inline void stfle(u64 *stfle_fac_list, int size)
++static inline void stfle(u64 *fac_list, int size)
+ {
+ preempt_disable();
+- __stfle(stfle_fac_list, size);
++ __stfle(fac_list, size);
+ preempt_enable();
+ }
+
+diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
+index 9d920ced604754..30b20ce9a70033 100644
+--- a/arch/s390/include/asm/pci.h
++++ b/arch/s390/include/asm/pci.h
+@@ -96,7 +96,6 @@ struct zpci_bar_struct {
+ u8 size; /* order 2 exponent */
+ };
+
+-struct s390_domain;
+ struct kvm_zdev;
+
+ #define ZPCI_FUNCTIONS_PER_BUS 256
+@@ -181,9 +180,10 @@ struct zpci_dev {
+ struct dentry *debugfs_dev;
+
+ /* IOMMU and passthrough */
+- struct s390_domain *s390_domain; /* s390 IOMMU domain data */
++ struct iommu_domain *s390_domain; /* attached IOMMU domain */
+ struct kvm_zdev *kzdev;
+ struct mutex kzdev_lock;
++ spinlock_t dom_lock; /* protect s390_domain change */
+ };
+
+ static inline bool zdev_enabled(struct zpci_dev *zdev)
+diff --git a/arch/s390/include/asm/set_memory.h b/arch/s390/include/asm/set_memory.h
+index 06fbabe2f66c98..cb4cc0f59012f7 100644
+--- a/arch/s390/include/asm/set_memory.h
++++ b/arch/s390/include/asm/set_memory.h
+@@ -62,5 +62,6 @@ __SET_MEMORY_FUNC(set_memory_4k, SET_MEMORY_4K)
+
+ int set_direct_map_invalid_noflush(struct page *page);
+ int set_direct_map_default_noflush(struct page *page);
++bool kernel_page_present(struct page *page);
+
+ #endif
+diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
+index 5b765e3ccf0cad..3317f4878eaa70 100644
+--- a/arch/s390/kernel/perf_cpum_sf.c
++++ b/arch/s390/kernel/perf_cpum_sf.c
+@@ -759,7 +759,6 @@ static int __hw_perf_event_init(struct perf_event *event)
+ reserve_pmc_hardware();
+ refcount_set(&num_events, 1);
+ }
+- mutex_unlock(&pmc_reserve_mutex);
+ event->destroy = hw_perf_event_destroy;
+
+ /* Access per-CPU sampling information (query sampling info) */
+@@ -848,6 +847,7 @@ static int __hw_perf_event_init(struct perf_event *event)
+ if (is_default_overflow_handler(event))
+ event->overflow_handler = cpumsf_output_event_pid;
+ out:
++ mutex_unlock(&pmc_reserve_mutex);
+ return err;
+ }
+
+diff --git a/arch/s390/kernel/syscalls/Makefile b/arch/s390/kernel/syscalls/Makefile
+index 1bb78b9468e8a9..e85c14f9058b92 100644
+--- a/arch/s390/kernel/syscalls/Makefile
++++ b/arch/s390/kernel/syscalls/Makefile
+@@ -12,7 +12,7 @@ kapi-hdrs-y := $(kapi)/unistd_nr.h
+ uapi-hdrs-y := $(uapi)/unistd_32.h
+ uapi-hdrs-y += $(uapi)/unistd_64.h
+
+-targets += $(addprefix ../../../,$(gen-y) $(kapi-hdrs-y) $(uapi-hdrs-y))
++targets += $(addprefix ../../../../,$(gen-y) $(kapi-hdrs-y) $(uapi-hdrs-y))
+
+ PHONY += kapi uapi
+
+diff --git a/arch/s390/mm/pageattr.c b/arch/s390/mm/pageattr.c
+index 5f805ad42d4c3f..aec9eb16b6f7be 100644
+--- a/arch/s390/mm/pageattr.c
++++ b/arch/s390/mm/pageattr.c
+@@ -406,6 +406,21 @@ int set_direct_map_default_noflush(struct page *page)
+ return __set_memory((unsigned long)page_to_virt(page), 1, SET_MEMORY_DEF);
+ }
+
++bool kernel_page_present(struct page *page)
++{
++ unsigned long addr;
++ unsigned int cc;
++
++ addr = (unsigned long)page_address(page);
++ asm volatile(
++ " lra %[addr],0(%[addr])\n"
++ " ipm %[cc]\n"
++ : [cc] "=d" (cc), [addr] "+a" (addr)
++ :
++ : "cc");
++ return (cc >> 28) == 0;
++}
++
+ #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KFENCE)
+
+ static void ipte_range(pte_t *pte, unsigned long address, int nr)
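kernel_page_present() probes the direct map with lra (load real address), which sets a non-zero condition code when translation fails; ipm then captures that condition code for the C return value. A hedged usage sketch of the helper:

        /* e.g. before touching a page that DEBUG_PAGEALLOC may have unmapped */
        if (!kernel_page_present(page))
                return;         /* not mapped in the kernel address space */
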
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index bd9624c20b8020..635fd8f2acbaa2 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -160,6 +160,7 @@ int zpci_fmb_enable_device(struct zpci_dev *zdev)
+ u64 req = ZPCI_CREATE_REQ(zdev->fh, 0, ZPCI_MOD_FC_SET_MEASURE);
+ struct zpci_iommu_ctrs *ctrs;
+ struct zpci_fib fib = {0};
++ unsigned long flags;
+ u8 cc, status;
+
+ if (zdev->fmb || sizeof(*zdev->fmb) < zdev->fmb_length)
+@@ -171,6 +172,7 @@ int zpci_fmb_enable_device(struct zpci_dev *zdev)
+ WARN_ON((u64) zdev->fmb & 0xf);
+
+ /* reset software counters */
++ spin_lock_irqsave(&zdev->dom_lock, flags);
+ ctrs = zpci_get_iommu_ctrs(zdev);
+ if (ctrs) {
+ atomic64_set(&ctrs->mapped_pages, 0);
+@@ -179,6 +181,7 @@ int zpci_fmb_enable_device(struct zpci_dev *zdev)
+ atomic64_set(&ctrs->sync_map_rpcits, 0);
+ atomic64_set(&ctrs->sync_rpcits, 0);
+ }
++ spin_unlock_irqrestore(&zdev->dom_lock, flags);
+
+
+ fib.fmb_addr = virt_to_phys(zdev->fmb);
+@@ -914,10 +917,8 @@ void zpci_device_reserved(struct zpci_dev *zdev)
+ void zpci_release_device(struct kref *kref)
+ {
+ struct zpci_dev *zdev = container_of(kref, struct zpci_dev, kref);
+- int ret;
+
+- if (zdev->has_hp_slot)
+- zpci_exit_slot(zdev);
++ WARN_ON(zdev->state != ZPCI_FN_STATE_RESERVED);
+
+ if (zdev->zbus->bus)
+ zpci_bus_remove_device(zdev, false);
+@@ -925,28 +926,14 @@ void zpci_release_device(struct kref *kref)
+ if (zdev_enabled(zdev))
+ zpci_disable_device(zdev);
+
+- switch (zdev->state) {
+- case ZPCI_FN_STATE_CONFIGURED:
+- ret = sclp_pci_deconfigure(zdev->fid);
+- zpci_dbg(3, "deconf fid:%x, rc:%d\n", zdev->fid, ret);
+- fallthrough;
+- case ZPCI_FN_STATE_STANDBY:
+- if (zdev->has_hp_slot)
+- zpci_exit_slot(zdev);
+- spin_lock(&zpci_list_lock);
+- list_del(&zdev->entry);
+- spin_unlock(&zpci_list_lock);
+- zpci_dbg(3, "rsv fid:%x\n", zdev->fid);
+- fallthrough;
+- case ZPCI_FN_STATE_RESERVED:
+- if (zdev->has_resources)
+- zpci_cleanup_bus_resources(zdev);
+- zpci_bus_device_unregister(zdev);
+- zpci_destroy_iommu(zdev);
+- fallthrough;
+- default:
+- break;
+- }
++ if (zdev->has_hp_slot)
++ zpci_exit_slot(zdev);
++
++ if (zdev->has_resources)
++ zpci_cleanup_bus_resources(zdev);
++
++ zpci_bus_device_unregister(zdev);
++ zpci_destroy_iommu(zdev);
+ zpci_dbg(3, "rem fid:%x\n", zdev->fid);
+ kfree_rcu(zdev, rcu);
+ }
+diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
+index 2cb5043a997d53..38014206c16b96 100644
+--- a/arch/s390/pci/pci_debug.c
++++ b/arch/s390/pci/pci_debug.c
+@@ -71,17 +71,23 @@ static void pci_fmb_show(struct seq_file *m, char *name[], int length,
+
+ static void pci_sw_counter_show(struct seq_file *m)
+ {
+- struct zpci_iommu_ctrs *ctrs = zpci_get_iommu_ctrs(m->private);
++ struct zpci_dev *zdev = m->private;
++ struct zpci_iommu_ctrs *ctrs;
+ atomic64_t *counter;
++ unsigned long flags;
+ int i;
+
++ spin_lock_irqsave(&zdev->dom_lock, flags);
++ ctrs = zpci_get_iommu_ctrs(m->private);
+ if (!ctrs)
+- return;
++ goto unlock;
+
+ counter = &ctrs->mapped_pages;
+ for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
+ seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
+ atomic64_read(counter));
++unlock:
++ spin_unlock_irqrestore(&zdev->dom_lock, flags);
+ }
+
+ static int pci_perf_show(struct seq_file *m, void *v)
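This hunk and the zpci_fmb_enable_device() one above follow the same rule: zpci_get_iommu_ctrs() is only meaningful while zdev->s390_domain cannot change, so every reader takes zdev->dom_lock around both the lookup and the counter accesses. The pattern, reduced (hypothetical use() placeholder):

        unsigned long flags;

        spin_lock_irqsave(&zdev->dom_lock, flags);
        ctrs = zpci_get_iommu_ctrs(zdev);
        if (ctrs)
                use(ctrs);      /* hypothetical: read or reset the counters */
        spin_unlock_irqrestore(&zdev->dom_lock, flags);
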
+diff --git a/arch/sh/kernel/cpu/proc.c b/arch/sh/kernel/cpu/proc.c
+index a306bcd6b34130..5f6d0e827baeb0 100644
+--- a/arch/sh/kernel/cpu/proc.c
++++ b/arch/sh/kernel/cpu/proc.c
+@@ -132,7 +132,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
+
+ static void *c_start(struct seq_file *m, loff_t *pos)
+ {
+- return *pos < NR_CPUS ? cpu_data + *pos : NULL;
++ return *pos < nr_cpu_ids ? cpu_data + *pos : NULL;
+ }
+ static void *c_next(struct seq_file *m, void *v, loff_t *pos)
+ {
+diff --git a/arch/sh/kernel/setup.c b/arch/sh/kernel/setup.c
+index 620e5cf8ae1e74..f2b6f16a46b85d 100644
+--- a/arch/sh/kernel/setup.c
++++ b/arch/sh/kernel/setup.c
+@@ -255,7 +255,7 @@ void __ref sh_fdt_init(phys_addr_t dt_phys)
+ dt_virt = phys_to_virt(dt_phys);
+ #endif
+
+- if (!dt_virt || !early_init_dt_scan(dt_virt)) {
++ if (!dt_virt || !early_init_dt_scan(dt_virt, __pa(dt_virt))) {
+ pr_crit("Error: invalid device tree blob"
+ " at physical address %p\n", (void *)dt_phys);
+
+diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c
+index 77c4afb8ab9071..75d04fb4994a06 100644
+--- a/arch/um/drivers/net_kern.c
++++ b/arch/um/drivers/net_kern.c
+@@ -336,7 +336,7 @@ static struct platform_driver uml_net_driver = {
+
+ static void net_device_release(struct device *dev)
+ {
+- struct uml_net *device = dev_get_drvdata(dev);
++ struct uml_net *device = container_of(dev, struct uml_net, pdev.dev);
+ struct net_device *netdev = device->dev;
+ struct uml_net_private *lp = netdev_priv(netdev);
+
+diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
+index 7f28ec1929dc0b..2bfb17373244bb 100644
+--- a/arch/um/drivers/ubd_kern.c
++++ b/arch/um/drivers/ubd_kern.c
+@@ -779,7 +779,7 @@ static int ubd_open_dev(struct ubd *ubd_dev)
+
+ static void ubd_device_release(struct device *dev)
+ {
+- struct ubd *ubd_dev = dev_get_drvdata(dev);
++ struct ubd *ubd_dev = container_of(dev, struct ubd, pdev.dev);
+
+ blk_mq_free_tag_set(&ubd_dev->tag_set);
+ *ubd_dev = ((struct ubd) DEFAULT_UBD);
+@@ -898,6 +898,8 @@ static int ubd_add(int n, char **error_out)
+ if (err)
+ goto out_cleanup_disk;
+
++ ubd_dev->disk = disk;
++
+ return 0;
+
+ out_cleanup_disk:
+diff --git a/arch/um/drivers/vector_kern.c b/arch/um/drivers/vector_kern.c
+index c992da83268dd8..64c09db392c16a 100644
+--- a/arch/um/drivers/vector_kern.c
++++ b/arch/um/drivers/vector_kern.c
+@@ -815,7 +815,8 @@ static struct platform_driver uml_net_driver = {
+
+ static void vector_device_release(struct device *dev)
+ {
+- struct vector_device *device = dev_get_drvdata(dev);
++ struct vector_device *device =
++ container_of(dev, struct vector_device, pdev.dev);
+ struct net_device *netdev = device->dev;
+
+ list_del(&device->list);
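All three UML release callbacks (net, ubd, vector) get the same fix: by the time ->release() runs, the driver data pointer may already have been cleared, so the wrapper is recovered from the embedded struct device with container_of() instead of dev_get_drvdata(). A standalone illustration with a hypothetical wrapper type:

        struct demo_device {                            /* hypothetical */
                struct platform_device pdev;            /* embeds the struct device */
                void *payload;
        };

        static void demo_release(struct device *dev)
        {
                struct demo_device *d =
                        container_of(dev, struct demo_device, pdev.dev);

                kfree(d->payload);      /* drvdata may be gone; 'd' is not */
        }
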
+diff --git a/arch/um/kernel/dtb.c b/arch/um/kernel/dtb.c
+index 4954188a6a0908..8d78ced9e08f6d 100644
+--- a/arch/um/kernel/dtb.c
++++ b/arch/um/kernel/dtb.c
+@@ -17,7 +17,7 @@ void uml_dtb_init(void)
+
+ area = uml_load_file(dtb, &size);
+ if (area) {
+- if (!early_init_dt_scan(area)) {
++ if (!early_init_dt_scan(area, __pa(area))) {
+ pr_err("invalid DTB %s\n", dtb);
+ memblock_free(area, size);
+ return;
+diff --git a/arch/um/kernel/physmem.c b/arch/um/kernel/physmem.c
+index fb2adfb499452b..ee693e0b2b58bf 100644
+--- a/arch/um/kernel/physmem.c
++++ b/arch/um/kernel/physmem.c
+@@ -81,10 +81,10 @@ void __init setup_physmem(unsigned long start, unsigned long reserve_end,
+ unsigned long len, unsigned long long highmem)
+ {
+ unsigned long reserve = reserve_end - start;
+- long map_size = len - reserve;
++ unsigned long map_size = len - reserve;
+ int err;
+
+- if(map_size <= 0) {
++ if (len <= reserve) {
+ os_warn("Too few physical memory! Needed=%lu, given=%lu\n",
+ reserve, len);
+ exit(1);
+@@ -95,7 +95,7 @@ void __init setup_physmem(unsigned long start, unsigned long reserve_end,
+ err = os_map_memory((void *) reserve_end, physmem_fd, reserve,
+ map_size, 1, 1, 1);
+ if (err < 0) {
+- os_warn("setup_physmem - mapping %ld bytes of memory at 0x%p "
++ os_warn("setup_physmem - mapping %lu bytes of memory at 0x%p "
+ "failed - errno = %d\n", map_size,
+ (void *) reserve_end, err);
+ exit(1);
+diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
+index be2856af6d4c31..9c6cf03ed02b03 100644
+--- a/arch/um/kernel/process.c
++++ b/arch/um/kernel/process.c
+@@ -292,6 +292,6 @@ int elf_core_copy_task_fpregs(struct task_struct *t, elf_fpregset_t *fpu)
+ {
+ int cpu = current_thread_info()->cpu;
+
+- return save_i387_registers(userspace_pid[cpu], (unsigned long *) fpu);
++ return save_i387_registers(userspace_pid[cpu], (unsigned long *) fpu) == 0;
+ }
+
+diff --git a/arch/um/kernel/sysrq.c b/arch/um/kernel/sysrq.c
+index 4bb8622dc51226..e3b6a2fd75d996 100644
+--- a/arch/um/kernel/sysrq.c
++++ b/arch/um/kernel/sysrq.c
+@@ -52,5 +52,5 @@ void show_stack(struct task_struct *task, unsigned long *stack,
+ }
+
+ printk("%sCall Trace:\n", loglvl);
+- dump_trace(current, &stackops, (void *)loglvl);
++ dump_trace(task ?: current, &stackops, (void *)loglvl);
+ }
+diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
+index 327c45c5013fea..2f85ed005c42f1 100644
+--- a/arch/x86/coco/tdx/tdx.c
++++ b/arch/x86/coco/tdx/tdx.c
+@@ -78,6 +78,32 @@ static inline void tdcall(u64 fn, struct tdx_module_args *args)
+ panic("TDCALL %lld failed (Buggy TDX module!)\n", fn);
+ }
+
++/* Read TD-scoped metadata */
++static inline u64 tdg_vm_rd(u64 field, u64 *value)
++{
++ struct tdx_module_args args = {
++ .rdx = field,
++ };
++ u64 ret;
++
++ ret = __tdcall_ret(TDG_VM_RD, &args);
++ *value = args.r8;
++
++ return ret;
++}
++
++/* Write TD-scoped metadata */
++static inline u64 tdg_vm_wr(u64 field, u64 value, u64 mask)
++{
++ struct tdx_module_args args = {
++ .rdx = field,
++ .r8 = value,
++ .r9 = mask,
++ };
++
++ return __tdcall(TDG_VM_WR, &args);
++}
++
+ /**
+ * tdx_mcall_get_report0() - Wrapper to get TDREPORT0 (a.k.a. TDREPORT
+ * subtype 0) using TDG.MR.REPORT TDCALL.
+@@ -168,7 +194,61 @@ static void __noreturn tdx_panic(const char *msg)
+ __tdx_hypercall(&args);
+ }
+
+-static void tdx_parse_tdinfo(u64 *cc_mask)
++/*
++ * The kernel cannot handle #VEs when accessing normal kernel memory. Ensure
++ * that no #VE will be delivered for accesses to TD-private memory.
++ *
++ * TDX 1.0 does not allow the guest to disable SEPT #VE on its own. The VMM
++ * controls if the guest will receive such #VE with TD attribute
++ * ATTR_SEPT_VE_DISABLE.
++ *
++ * Newer TDX modules allow the guest to control if it wants to receive SEPT
++ * violation #VEs.
++ *
++ * Check if the feature is available and disable SEPT #VE if possible.
++ *
++ * If the TD is allowed to disable/enable SEPT #VEs, the ATTR_SEPT_VE_DISABLE
++ * attribute is no longer reliable. It reflects the initial state of the
++ * control for the TD, but it will not be updated if someone (e.g. bootloader)
++ * changes it before the kernel starts. Kernel must check TDCS_TD_CTLS bit to
++ * determine if SEPT #VEs are enabled or disabled.
++ */
++static void disable_sept_ve(u64 td_attr)
++{
++ const char *msg = "TD misconfiguration: SEPT #VE has to be disabled";
++ bool debug = td_attr & ATTR_DEBUG;
++ u64 config, controls;
++
++ /* Is this TD allowed to disable SEPT #VE */
++ tdg_vm_rd(TDCS_CONFIG_FLAGS, &config);
++ if (!(config & TDCS_CONFIG_FLEXIBLE_PENDING_VE)) {
++ /* No SEPT #VE controls for the guest: check the attribute */
++ if (td_attr & ATTR_SEPT_VE_DISABLE)
++ return;
++
++ /* Relax SEPT_VE_DISABLE check for debug TD for backtraces */
++ if (debug)
++ pr_warn("%s\n", msg);
++ else
++ tdx_panic(msg);
++ return;
++ }
++
++ /* Check if SEPT #VE has been disabled before us */
++ tdg_vm_rd(TDCS_TD_CTLS, &controls);
++ if (controls & TD_CTLS_PENDING_VE_DISABLE)
++ return;
++
++ /* Keep #VEs enabled for splats in debugging environments */
++ if (debug)
++ return;
++
++ /* Disable SEPT #VEs */
++ tdg_vm_wr(TDCS_TD_CTLS, TD_CTLS_PENDING_VE_DISABLE,
++ TD_CTLS_PENDING_VE_DISABLE);
++}
++
++static void tdx_setup(u64 *cc_mask)
+ {
+ struct tdx_module_args args = {};
+ unsigned int gpa_width;
+@@ -193,21 +273,12 @@ static void tdx_parse_tdinfo(u64 *cc_mask)
+ gpa_width = args.rcx & GENMASK(5, 0);
+ *cc_mask = BIT_ULL(gpa_width - 1);
+
+- /*
+- * The kernel can not handle #VE's when accessing normal kernel
+- * memory. Ensure that no #VE will be delivered for accesses to
+- * TD-private memory. Only VMM-shared memory (MMIO) will #VE.
+- */
+ td_attr = args.rdx;
+- if (!(td_attr & ATTR_SEPT_VE_DISABLE)) {
+- const char *msg = "TD misconfiguration: SEPT_VE_DISABLE attribute must be set.";
+
+- /* Relax SEPT_VE_DISABLE check for debug TD. */
+- if (td_attr & ATTR_DEBUG)
+- pr_warn("%s\n", msg);
+- else
+- tdx_panic(msg);
+- }
++ /* Kernel does not use NOTIFY_ENABLES and does not need random #VEs */
++ tdg_vm_wr(TDCS_NOTIFY_ENABLES, 0, -1ULL);
++
++ disable_sept_ve(td_attr);
+ }
+
+ /*
+@@ -929,10 +1000,6 @@ static void tdx_kexec_finish(void)
+
+ void __init tdx_early_init(void)
+ {
+- struct tdx_module_args args = {
+- .rdx = TDCS_NOTIFY_ENABLES,
+- .r9 = -1ULL,
+- };
+ u64 cc_mask;
+ u32 eax, sig[3];
+
+@@ -947,11 +1014,11 @@ void __init tdx_early_init(void)
+ setup_force_cpu_cap(X86_FEATURE_TSC_RELIABLE);
+
+ cc_vendor = CC_VENDOR_INTEL;
+- tdx_parse_tdinfo(&cc_mask);
+- cc_set_mask(cc_mask);
+
+- /* Kernel does not use NOTIFY_ENABLES and does not need random #VEs */
+- tdcall(TDG_VM_WR, &args);
++ /* Configure the TD */
++ tdx_setup(&cc_mask);
++
++ cc_set_mask(cc_mask);
+
+ /*
+ * All bits above GPA width are reserved and kernel treats shared bit
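tdg_vm_rd()/tdg_vm_wr() give disable_sept_ve() a read-check-write sequence over TD-scoped metadata: read TDCS_TD_CTLS, bail out if the bit is already set, otherwise write it with a mask limited to that one bit. Condensed from the code above:

        u64 controls;

        tdg_vm_rd(TDCS_TD_CTLS, &controls);
        if (!(controls & TD_CTLS_PENDING_VE_DISABLE))
                tdg_vm_wr(TDCS_TD_CTLS, TD_CTLS_PENDING_VE_DISABLE,
                          TD_CTLS_PENDING_VE_DISABLE);  /* mask selects one bit */
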
+diff --git a/arch/x86/crypto/aegis128-aesni-asm.S b/arch/x86/crypto/aegis128-aesni-asm.S
+index ad7f4c89162568..2de859173940eb 100644
+--- a/arch/x86/crypto/aegis128-aesni-asm.S
++++ b/arch/x86/crypto/aegis128-aesni-asm.S
+@@ -21,7 +21,7 @@
+ #define T1 %xmm7
+
+ #define STATEP %rdi
+-#define LEN %rsi
++#define LEN %esi
+ #define SRC %rdx
+ #define DST %rcx
+
+@@ -76,32 +76,32 @@ SYM_FUNC_START_LOCAL(__load_partial)
+ xor %r9d, %r9d
+ pxor MSG, MSG
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x1, %r8
+ jz .Lld_partial_1
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x1E, %r8
+ add SRC, %r8
+ mov (%r8), %r9b
+
+ .Lld_partial_1:
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x2, %r8
+ jz .Lld_partial_2
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x1C, %r8
+ add SRC, %r8
+ shl $0x10, %r9
+ mov (%r8), %r9w
+
+ .Lld_partial_2:
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x4, %r8
+ jz .Lld_partial_4
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x18, %r8
+ add SRC, %r8
+ shl $32, %r9
+@@ -111,11 +111,11 @@ SYM_FUNC_START_LOCAL(__load_partial)
+ .Lld_partial_4:
+ movq %r9, MSG
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x8, %r8
+ jz .Lld_partial_8
+
+- mov LEN, %r8
++ mov LEN, %r8d
+ and $0x10, %r8
+ add SRC, %r8
+ pslldq $8, MSG
+@@ -139,7 +139,7 @@ SYM_FUNC_END(__load_partial)
+ * %r10
+ */
+ SYM_FUNC_START_LOCAL(__store_partial)
+- mov LEN, %r8
++ mov LEN, %r8d
+ mov DST, %r9
+
+ movq T0, %r10
+@@ -677,7 +677,7 @@ SYM_TYPED_FUNC_START(crypto_aegis128_aesni_dec_tail)
+ call __store_partial
+
+ /* mask with byte count: */
+- movq LEN, T0
++ movd LEN, T0
+ punpcklbw T0, T0
+ punpcklbw T0, T0
+ punpcklbw T0, T0
+@@ -702,7 +702,8 @@ SYM_FUNC_END(crypto_aegis128_aesni_dec_tail)
+
+ /*
+ * void crypto_aegis128_aesni_final(void *state, void *tag_xor,
+- * u64 assoclen, u64 cryptlen);
++ * unsigned int assoclen,
++ * unsigned int cryptlen);
+ */
+ SYM_FUNC_START(crypto_aegis128_aesni_final)
+ FRAME_BEGIN
+@@ -715,8 +716,8 @@ SYM_FUNC_START(crypto_aegis128_aesni_final)
+ movdqu 0x40(STATEP), STATE4
+
+ /* prepare length block: */
+- movq %rdx, MSG
+- movq %rcx, T0
++ movd %edx, MSG
++ movd %ecx, T0
+ pslldq $8, T0
+ pxor T0, MSG
+ psllq $3, MSG /* multiply by 8 (to get bit count) */
+diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
+index fd4670a6694e77..a087bc0c549875 100644
+--- a/arch/x86/events/intel/pt.c
++++ b/arch/x86/events/intel/pt.c
+@@ -828,11 +828,13 @@ static void pt_buffer_advance(struct pt_buffer *buf)
+ buf->cur_idx++;
+
+ if (buf->cur_idx == buf->cur->last) {
+- if (buf->cur == buf->last)
++ if (buf->cur == buf->last) {
+ buf->cur = buf->first;
+- else
++ buf->wrapped = true;
++ } else {
+ buf->cur = list_entry(buf->cur->list.next, struct topa,
+ list);
++ }
+ buf->cur_idx = 0;
+ }
+ }
+@@ -846,8 +848,11 @@ static void pt_buffer_advance(struct pt_buffer *buf)
+ static void pt_update_head(struct pt *pt)
+ {
+ struct pt_buffer *buf = perf_get_aux(&pt->handle);
++ bool wrapped = buf->wrapped;
+ u64 topa_idx, base, old;
+
++ buf->wrapped = false;
++
+ if (buf->single) {
+ local_set(&buf->data_size, buf->output_off);
+ return;
+@@ -865,7 +870,7 @@ static void pt_update_head(struct pt *pt)
+ } else {
+ old = (local64_xchg(&buf->head, base) &
+ ((buf->nr_pages << PAGE_SHIFT) - 1));
+- if (base < old)
++ if (base < old || (base == old && wrapped))
+ base += buf->nr_pages << PAGE_SHIFT;
+
+ local_add(base - old, &buf->data_size);
+diff --git a/arch/x86/events/intel/pt.h b/arch/x86/events/intel/pt.h
+index f5e46c04c145d0..a1b6c04b7f6848 100644
+--- a/arch/x86/events/intel/pt.h
++++ b/arch/x86/events/intel/pt.h
+@@ -65,6 +65,7 @@ struct pt_pmu {
+ * @head: logical write offset inside the buffer
+ * @snapshot: if this is for a snapshot/overwrite counter
+ * @single: use Single Range Output instead of ToPA
++ * @wrapped: buffer advance wrapped back to the first topa table
+ * @stop_pos: STOP topa entry index
+ * @intr_pos: INT topa entry index
+ * @stop_te: STOP topa entry pointer
+@@ -82,6 +83,7 @@ struct pt_buffer {
+ local64_t head;
+ bool snapshot;
+ bool single;
++ bool wrapped;
+ long stop_pos, intr_pos;
+ struct topa_entry *stop_te, *intr_te;
+ void **data_pages;
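
The @wrapped flag added above exists to disambiguate one case in pt_update_head(): when the new head offset equals the old one, the buffer either made no progress or advanced exactly one full cycle. A standalone sketch of that arithmetic (buf_size stands in for nr_pages << PAGE_SHIFT; illustrative only):

/*
 * Offsets are positions within a circular buffer of buf_size bytes.
 * Without 'wrapped', base == old would always look like no progress.
 */
static unsigned long pt_head_delta(unsigned long old, unsigned long base,
				   unsigned long buf_size, bool wrapped)
{
	if (base < old || (base == old && wrapped))
		base += buf_size;

	return base - old;
}
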
+diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
+index 1f650b4dde509b..6c6e9b9f98a456 100644
+--- a/arch/x86/include/asm/atomic64_32.h
++++ b/arch/x86/include/asm/atomic64_32.h
+@@ -51,7 +51,8 @@ static __always_inline s64 arch_atomic64_read_nonatomic(const atomic64_t *v)
+ #ifdef CONFIG_X86_CMPXCHG64
+ #define __alternative_atomic64(f, g, out, in...) \
+ asm volatile("call %c[func]" \
+- : out : [func] "i" (atomic64_##g##_cx8), ## in)
++ : ALT_OUTPUT_SP(out) \
++ : [func] "i" (atomic64_##g##_cx8), ## in)
+
+ #define ATOMIC64_DECL(sym) ATOMIC64_DECL_ONE(sym##_cx8)
+ #else
+diff --git a/arch/x86/include/asm/cmpxchg_32.h b/arch/x86/include/asm/cmpxchg_32.h
+index 62cef2113ca749..fd1282a783ddbf 100644
+--- a/arch/x86/include/asm/cmpxchg_32.h
++++ b/arch/x86/include/asm/cmpxchg_32.h
+@@ -94,7 +94,7 @@ static __always_inline bool __try_cmpxchg64_local(volatile u64 *ptr, u64 *oldp,
+ asm volatile(ALTERNATIVE(_lock_loc \
+ "call cmpxchg8b_emu", \
+ _lock "cmpxchg8b %a[ptr]", X86_FEATURE_CX8) \
+- : "+a" (o.low), "+d" (o.high) \
++ : ALT_OUTPUT_SP("+a" (o.low), "+d" (o.high)) \
+ : "b" (n.low), "c" (n.high), [ptr] "S" (_ptr) \
+ : "memory"); \
+ \
+@@ -123,8 +123,8 @@ static __always_inline u64 arch_cmpxchg64_local(volatile u64 *ptr, u64 old, u64
+ "call cmpxchg8b_emu", \
+ _lock "cmpxchg8b %a[ptr]", X86_FEATURE_CX8) \
+ CC_SET(e) \
+- : CC_OUT(e) (ret), \
+- "+a" (o.low), "+d" (o.high) \
++ : ALT_OUTPUT_SP(CC_OUT(e) (ret), \
++ "+a" (o.low), "+d" (o.high)) \
+ : "b" (n.low), "c" (n.high), [ptr] "S" (_ptr) \
+ : "memory"); \
+ \
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 6d9f763a7bb9d5..427d1daf06d06a 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -26,6 +26,7 @@
+ #include <linux/irqbypass.h>
+ #include <linux/hyperv.h>
+ #include <linux/kfifo.h>
++#include <linux/sched/vhost_task.h>
+
+ #include <asm/apic.h>
+ #include <asm/pvclock-abi.h>
+@@ -1443,7 +1444,8 @@ struct kvm_arch {
+ bool sgx_provisioning_allowed;
+
+ struct kvm_x86_pmu_event_filter __rcu *pmu_event_filter;
+- struct task_struct *nx_huge_page_recovery_thread;
++ struct vhost_task *nx_huge_page_recovery_thread;
++ u64 nx_huge_page_last;
+
+ #ifdef CONFIG_X86_64
+ /* The number of TDP MMU pages across all roots. */
+diff --git a/arch/x86/include/asm/shared/tdx.h b/arch/x86/include/asm/shared/tdx.h
+index fdfd41511b0211..fecb2a6e864be1 100644
+--- a/arch/x86/include/asm/shared/tdx.h
++++ b/arch/x86/include/asm/shared/tdx.h
+@@ -16,11 +16,20 @@
+ #define TDG_VP_VEINFO_GET 3
+ #define TDG_MR_REPORT 4
+ #define TDG_MEM_PAGE_ACCEPT 6
++#define TDG_VM_RD 7
+ #define TDG_VM_WR 8
+
+-/* TDCS fields. To be used by TDG.VM.WR and TDG.VM.RD module calls */
++/* TDX TD-Scope Metadata. To be used by TDG.VM.WR and TDG.VM.RD */
++#define TDCS_CONFIG_FLAGS 0x1110000300000016
++#define TDCS_TD_CTLS 0x1110000300000017
+ #define TDCS_NOTIFY_ENABLES 0x9100000000000010
+
++/* TDCS_CONFIG_FLAGS bits */
++#define TDCS_CONFIG_FLEXIBLE_PENDING_VE BIT_ULL(1)
++
++/* TDCS_TD_CTLS bits */
++#define TD_CTLS_PENDING_VE_DISABLE BIT_ULL(0)
++
+ /* TDX hypercall Leaf IDs */
+ #define TDVMCALL_MAP_GPA 0x10001
+ #define TDVMCALL_GET_QUOTE 0x10002
+diff --git a/arch/x86/include/asm/tlb.h b/arch/x86/include/asm/tlb.h
+index 580636cdc257b7..4d3c9d00d6b6b2 100644
+--- a/arch/x86/include/asm/tlb.h
++++ b/arch/x86/include/asm/tlb.h
+@@ -34,4 +34,8 @@ static inline void __tlb_remove_table(void *table)
+ free_page_and_swap_cache(table);
+ }
+
++static inline void invlpg(unsigned long addr)
++{
++ asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
++}
+ #endif /* _ASM_X86_TLB_H */
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 823f44f7bc9465..d8408aafeed988 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -798,6 +798,7 @@ static void init_amd_bd(struct cpuinfo_x86 *c)
+ static const struct x86_cpu_desc erratum_1386_microcode[] = {
+ AMD_CPU_DESC(0x17, 0x1, 0x2, 0x0800126e),
+ AMD_CPU_DESC(0x17, 0x31, 0x0, 0x08301052),
++ {},
+ };
+
+ static void fix_erratum_1386(struct cpuinfo_x86 *c)
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index f43bb974fc66d7..b17bcf9b67eed4 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -2392,12 +2392,12 @@ void __init arch_cpu_finalize_init(void)
+ alternative_instructions();
+
+ if (IS_ENABLED(CONFIG_X86_64)) {
+- unsigned long USER_PTR_MAX = TASK_SIZE_MAX-1;
++ unsigned long USER_PTR_MAX = TASK_SIZE_MAX;
+
+ /*
+ * Enable this when LAM is gated on LASS support
+ if (cpu_feature_enabled(X86_FEATURE_LAM))
+- USER_PTR_MAX = (1ul << 63) - PAGE_SIZE - 1;
++ USER_PTR_MAX = (1ul << 63) - PAGE_SIZE;
+ */
+ runtime_const_init(ptr, USER_PTR_MAX);
+
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 31a73715d75531..fb5d0c67fbab17 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -34,6 +34,7 @@
+ #include <asm/setup.h>
+ #include <asm/cpu.h>
+ #include <asm/msr.h>
++#include <asm/tlb.h>
+
+ #include "internal.h"
+
+@@ -483,11 +484,25 @@ static void scan_containers(u8 *ucode, size_t size, struct cont_desc *desc)
+ }
+ }
+
+-static int __apply_microcode_amd(struct microcode_amd *mc)
++static int __apply_microcode_amd(struct microcode_amd *mc, unsigned int psize)
+ {
++ unsigned long p_addr = (unsigned long)&mc->hdr.data_code;
+ u32 rev, dummy;
+
+- native_wrmsrl(MSR_AMD64_PATCH_LOADER, (u64)(long)&mc->hdr.data_code);
++ native_wrmsrl(MSR_AMD64_PATCH_LOADER, p_addr);
++
++ if (x86_family(bsp_cpuid_1_eax) == 0x17) {
++ unsigned long p_addr_end = p_addr + psize - 1;
++
++ invlpg(p_addr);
++
++ /*
++ * Flush the next page too if the patch image crosses a page
++ * boundary.
++ */
++ if (p_addr >> PAGE_SHIFT != p_addr_end >> PAGE_SHIFT)
++ invlpg(p_addr_end);
++ }
+
+ /* verify patch application was successful */
+ native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+@@ -529,7 +544,7 @@ static bool early_apply_microcode(u32 old_rev, void *ucode, size_t size)
+ if (old_rev > mc->hdr.patch_id)
+ return ret;
+
+- return !__apply_microcode_amd(mc);
++ return !__apply_microcode_amd(mc, desc.psize);
+ }
+
+ static bool get_builtin_microcode(struct cpio_data *cp)
+@@ -745,7 +760,7 @@ void reload_ucode_amd(unsigned int cpu)
+ rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+
+ if (rev < mc->hdr.patch_id) {
+- if (!__apply_microcode_amd(mc))
++ if (!__apply_microcode_amd(mc, p->size))
+ pr_info_once("reload revision: 0x%08x\n", mc->hdr.patch_id);
+ }
+ }
+@@ -798,7 +813,7 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ goto out;
+ }
+
+- if (__apply_microcode_amd(mc_amd)) {
++ if (__apply_microcode_amd(mc_amd, p->size)) {
+ pr_err("CPU%d: update failed for patch_level=0x%08x\n",
+ cpu, mc_amd->hdr.patch_id);
+ return UCODE_ERROR;
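
The extra invlpg above is only needed when the patch image straddles two pages. The page-crossing test reduces to comparing the page numbers of the first and last byte; as a standalone sketch (4 KiB pages assumed, helper name hypothetical):

#define DEMO_PAGE_SHIFT	12

/* True when [addr, addr + size) spans a page boundary, i.e. the
 * first and last byte of the image live in different pages. */
static bool patch_crosses_page(unsigned long addr, unsigned int size)
{
	unsigned long end = addr + size - 1;

	return (addr >> DEMO_PAGE_SHIFT) != (end >> DEMO_PAGE_SHIFT);
}
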
+diff --git a/arch/x86/kernel/devicetree.c b/arch/x86/kernel/devicetree.c
+index 64280879c68c02..59d23cdf4ed0fa 100644
+--- a/arch/x86/kernel/devicetree.c
++++ b/arch/x86/kernel/devicetree.c
+@@ -305,7 +305,7 @@ void __init x86_flattree_get_config(void)
+ map_len = size;
+ }
+
+- early_init_dt_verify(dt);
++ early_init_dt_verify(dt, __pa(dt));
+ }
+
+ unflatten_and_copy_device_tree();
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index d00c28aaa5be45..d4705a348a8045 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -723,7 +723,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ state->sp = task->thread.sp + sizeof(*frame);
+ state->bp = READ_ONCE_NOCHECK(frame->bp);
+ state->ip = READ_ONCE_NOCHECK(frame->ret_addr);
+- state->signal = (void *)state->ip == ret_from_fork;
++ state->signal = (void *)state->ip == ret_from_fork_asm;
+ }
+
+ if (get_stack_info((unsigned long *)state->sp, state->task,
+diff --git a/arch/x86/kvm/Kconfig b/arch/x86/kvm/Kconfig
+index f09f13c01c6bbd..d7f27a3276549b 100644
+--- a/arch/x86/kvm/Kconfig
++++ b/arch/x86/kvm/Kconfig
+@@ -18,8 +18,7 @@ menuconfig VIRTUALIZATION
+ if VIRTUALIZATION
+
+ config KVM_X86
+- def_tristate KVM if KVM_INTEL || KVM_AMD
+- depends on X86_LOCAL_APIC
++ def_tristate KVM if (KVM_INTEL != n || KVM_AMD != n)
+ select KVM_COMMON
+ select KVM_GENERIC_MMU_NOTIFIER
+ select HAVE_KVM_IRQCHIP
+@@ -29,6 +28,7 @@ config KVM_X86
+ select HAVE_KVM_IRQ_BYPASS
+ select HAVE_KVM_IRQ_ROUTING
+ select HAVE_KVM_READONLY_MEM
++ select VHOST_TASK
+ select KVM_ASYNC_PF
+ select USER_RETURN_NOTIFIER
+ select KVM_MMIO
+@@ -49,6 +49,7 @@ config KVM_X86
+
+ config KVM
+ tristate "Kernel-based Virtual Machine (KVM) support"
++ depends on X86_LOCAL_APIC
+ help
+ Support hosting fully virtualized guest machines using hardware
+ virtualization extensions. You will need a fairly recent
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 8e853a5fc867b7..3e353ed1f76736 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -7281,7 +7281,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
+ kvm_mmu_zap_all_fast(kvm);
+ mutex_unlock(&kvm->slots_lock);
+
+- wake_up_process(kvm->arch.nx_huge_page_recovery_thread);
++ vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
+ }
+ mutex_unlock(&kvm_lock);
+ }
+@@ -7427,7 +7427,7 @@ static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel
+ mutex_lock(&kvm_lock);
+
+ list_for_each_entry(kvm, &vm_list, vm_list)
+- wake_up_process(kvm->arch.nx_huge_page_recovery_thread);
++ vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
+
+ mutex_unlock(&kvm_lock);
+ }
+@@ -7530,62 +7530,56 @@ static void kvm_recover_nx_huge_pages(struct kvm *kvm)
+ srcu_read_unlock(&kvm->srcu, rcu_idx);
+ }
+
+-static long get_nx_huge_page_recovery_timeout(u64 start_time)
++static void kvm_nx_huge_page_recovery_worker_kill(void *data)
+ {
+- bool enabled;
+- uint period;
+-
+- enabled = calc_nx_huge_pages_recovery_period(&period);
+-
+- return enabled ? start_time + msecs_to_jiffies(period) - get_jiffies_64()
+- : MAX_SCHEDULE_TIMEOUT;
+ }
+
+-static int kvm_nx_huge_page_recovery_worker(struct kvm *kvm, uintptr_t data)
++static bool kvm_nx_huge_page_recovery_worker(void *data)
+ {
+- u64 start_time;
++ struct kvm *kvm = data;
++ bool enabled;
++ uint period;
+ long remaining_time;
+
+- while (true) {
+- start_time = get_jiffies_64();
+- remaining_time = get_nx_huge_page_recovery_timeout(start_time);
+-
+- set_current_state(TASK_INTERRUPTIBLE);
+- while (!kthread_should_stop() && remaining_time > 0) {
+- schedule_timeout(remaining_time);
+- remaining_time = get_nx_huge_page_recovery_timeout(start_time);
+- set_current_state(TASK_INTERRUPTIBLE);
+- }
+-
+- set_current_state(TASK_RUNNING);
+-
+- if (kthread_should_stop())
+- return 0;
++ enabled = calc_nx_huge_pages_recovery_period(&period);
++ if (!enabled)
++ return false;
+
+- kvm_recover_nx_huge_pages(kvm);
++ remaining_time = kvm->arch.nx_huge_page_last + msecs_to_jiffies(period)
++ - get_jiffies_64();
++ if (remaining_time > 0) {
++ schedule_timeout(remaining_time);
++ /* check for signals and come back */
++ return true;
+ }
++
++ __set_current_state(TASK_RUNNING);
++ kvm_recover_nx_huge_pages(kvm);
++ kvm->arch.nx_huge_page_last = get_jiffies_64();
++ return true;
+ }
+
+ int kvm_mmu_post_init_vm(struct kvm *kvm)
+ {
+- int err;
+-
+ if (nx_hugepage_mitigation_hard_disabled)
+ return 0;
+
+- err = kvm_vm_create_worker_thread(kvm, kvm_nx_huge_page_recovery_worker, 0,
+- "kvm-nx-lpage-recovery",
+- &kvm->arch.nx_huge_page_recovery_thread);
+- if (!err)
+- kthread_unpark(kvm->arch.nx_huge_page_recovery_thread);
++ kvm->arch.nx_huge_page_last = get_jiffies_64();
++ kvm->arch.nx_huge_page_recovery_thread = vhost_task_create(
++ kvm_nx_huge_page_recovery_worker, kvm_nx_huge_page_recovery_worker_kill,
++ kvm, "kvm-nx-lpage-recovery");
+
+- return err;
++ if (!kvm->arch.nx_huge_page_recovery_thread)
++ return -ENOMEM;
++
++ vhost_task_start(kvm->arch.nx_huge_page_recovery_thread);
++ return 0;
+ }
+
+ void kvm_mmu_pre_destroy_vm(struct kvm *kvm)
+ {
+ if (kvm->arch.nx_huge_page_recovery_thread)
+- kthread_stop(kvm->arch.nx_huge_page_recovery_thread);
++ vhost_task_stop(kvm->arch.nx_huge_page_recovery_thread);
+ }
+
+ #ifdef CONFIG_KVM_GENERIC_MEMORY_ATTRIBUTES
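
For readers unfamiliar with the vhost_task API this hunk converts to: the worker callback is invoked repeatedly and returns true to be called again, vhost_task_wake() kicks it out of a sleep, and vhost_task_stop() tears it down. A hedged, minimal usage sketch built only from the calls visible in this patch (all demo_* names are hypothetical):

/* Illustrative only -- not part of the patch. */
static bool demo_worker(void *data)
{
	bool *stop = data;

	if (READ_ONCE(*stop))
		return false;		/* let the vhost task exit */

	/* ... do one round of work, possibly sleeping ... */
	return true;			/* ask to be invoked again */
}

static void demo_worker_kill(void *data)
{
	/* Nothing to release in this sketch. */
}

static struct vhost_task *demo_start(bool *stop)
{
	struct vhost_task *t;

	t = vhost_task_create(demo_worker, demo_worker_kill, stop,
			      "demo-worker");
	if (t)
		vhost_task_start(t);
	return t;
}
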
+diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
+index 8f7eb3ad88fcb9..5521608077ec09 100644
+--- a/arch/x86/kvm/mmu/spte.c
++++ b/arch/x86/kvm/mmu/spte.c
+@@ -226,12 +226,20 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+ spte |= PT_WRITABLE_MASK | shadow_mmu_writable_mask;
+
+ /*
+- * Optimization: for pte sync, if spte was writable the hash
+- * lookup is unnecessary (and expensive). Write protection
+- * is responsibility of kvm_mmu_get_page / kvm_mmu_sync_roots.
+- * Same reasoning can be applied to dirty page accounting.
++ * When overwriting an existing leaf SPTE, and the old SPTE was
++ * writable, skip trying to unsync shadow pages as any relevant
++ * shadow pages must already be unsync, i.e. the hash lookup is
++ * unnecessary (and expensive).
++ *
++ * The same reasoning applies to dirty page/folio accounting;
++ * KVM will mark the folio dirty using the old SPTE, thus
++ * there's no need to immediately mark the new SPTE as dirty.
++ *
++ * Note, both cases rely on KVM not changing PFNs without first
++ * zapping the old SPTE, which is guaranteed by both the shadow
++ * MMU and the TDP MMU.
+ */
+- if (is_writable_pte(old_spte))
++ if (is_last_spte(old_spte, level) && is_writable_pte(old_spte))
+ goto out;
+
+ /*
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index d28618e9277ede..92fee5e8a3c741 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -2551,28 +2551,6 @@ static bool cpu_has_sgx(void)
+ return cpuid_eax(0) >= 0x12 && (cpuid_eax(0x12) & BIT(0));
+ }
+
+-/*
+- * Some cpus support VM_{ENTRY,EXIT}_IA32_PERF_GLOBAL_CTRL but they
+- * can't be used due to errata where VM Exit may incorrectly clear
+- * IA32_PERF_GLOBAL_CTRL[34:32]. Work around the errata by using the
+- * MSR load mechanism to switch IA32_PERF_GLOBAL_CTRL.
+- */
+-static bool cpu_has_perf_global_ctrl_bug(void)
+-{
+- switch (boot_cpu_data.x86_vfm) {
+- case INTEL_NEHALEM_EP: /* AAK155 */
+- case INTEL_NEHALEM: /* AAP115 */
+- case INTEL_WESTMERE: /* AAT100 */
+- case INTEL_WESTMERE_EP: /* BC86,AAY89,BD102 */
+- case INTEL_NEHALEM_EX: /* BA97 */
+- return true;
+- default:
+- break;
+- }
+-
+- return false;
+-}
+-
+ static int adjust_vmx_controls(u32 ctl_min, u32 ctl_opt, u32 msr, u32 *result)
+ {
+ u32 vmx_msr_low, vmx_msr_high;
+@@ -2732,6 +2710,27 @@ static int setup_vmcs_config(struct vmcs_config *vmcs_conf,
+ _vmexit_control &= ~x_ctrl;
+ }
+
++ /*
++ * Some CPUs support VM_{ENTRY,EXIT}_IA32_PERF_GLOBAL_CTRL, but the
++ * controls can't be used due to an erratum where VM Exit may incorrectly
++ * clear IA32_PERF_GLOBAL_CTRL[34:32]. Work around the erratum by using the
++ * MSR load mechanism to switch IA32_PERF_GLOBAL_CTRL.
++ */
++ switch (boot_cpu_data.x86_vfm) {
++ case INTEL_NEHALEM_EP: /* AAK155 */
++ case INTEL_NEHALEM: /* AAP115 */
++ case INTEL_WESTMERE: /* AAT100 */
++ case INTEL_WESTMERE_EP: /* BC86,AAY89,BD102 */
++ case INTEL_NEHALEM_EX: /* BA97 */
++ _vmentry_control &= ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
++ _vmexit_control &= ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
++ pr_warn_once("VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL "
++ "does not work properly. Using workaround\n");
++ break;
++ default:
++ break;
++ }
++
+ rdmsrl(MSR_IA32_VMX_BASIC, basic_msr);
+
+ /* IA-32 SDM Vol 3B: VMCS size is never greater than 4kB. */
+@@ -4422,9 +4421,6 @@ static u32 vmx_vmentry_ctrl(void)
+ VM_ENTRY_LOAD_IA32_EFER |
+ VM_ENTRY_IA32E_MODE);
+
+- if (cpu_has_perf_global_ctrl_bug())
+- vmentry_ctrl &= ~VM_ENTRY_LOAD_IA32_PERF_GLOBAL_CTRL;
+-
+ return vmentry_ctrl;
+ }
+
+@@ -4442,10 +4438,6 @@ static u32 vmx_vmexit_ctrl(void)
+ if (vmx_pt_mode_is_system())
+ vmexit_ctrl &= ~(VM_EXIT_PT_CONCEAL_PIP |
+ VM_EXIT_CLEAR_IA32_RTIT_CTL);
+-
+- if (cpu_has_perf_global_ctrl_bug())
+- vmexit_ctrl &= ~VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL;
+-
+ /* Loading of EFER and PERF_GLOBAL_CTRL are toggled dynamically */
+ return vmexit_ctrl &
+ ~(VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL | VM_EXIT_LOAD_IA32_EFER);
+@@ -8400,10 +8392,6 @@ __init int vmx_hardware_setup(void)
+ if (setup_vmcs_config(&vmcs_config, &vmx_capability) < 0)
+ return -EIO;
+
+- if (cpu_has_perf_global_ctrl_bug())
+- pr_warn_once("VM_EXIT_LOAD_IA32_PERF_GLOBAL_CTRL "
+- "does not work properly. Using workaround\n");
+-
+ if (boot_cpu_has(X86_FEATURE_NX))
+ kvm_enable_efer_bits(EFER_NX);
+
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index 86593d1b787d8a..b0678d59ebdb4a 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -20,6 +20,7 @@
+ #include <asm/cacheflush.h>
+ #include <asm/apic.h>
+ #include <asm/perf_event.h>
++#include <asm/tlb.h>
+
+ #include "mm_internal.h"
+
+@@ -1140,7 +1141,7 @@ STATIC_NOPV void native_flush_tlb_one_user(unsigned long addr)
+ bool cpu_pcide;
+
+ /* Flush 'addr' from the kernel PCID: */
+- asm volatile("invlpg (%0)" ::"r" (addr) : "memory");
++ invlpg(addr);
+
+ /* If PTI is off there is no user PCID and nothing to flush. */
+ if (!static_cpu_has(X86_FEATURE_PTI))
+diff --git a/arch/x86/platform/pvh/head.S b/arch/x86/platform/pvh/head.S
+index 64fca49cd88ff9..ce4fd8d33da467 100644
+--- a/arch/x86/platform/pvh/head.S
++++ b/arch/x86/platform/pvh/head.S
+@@ -172,7 +172,14 @@ SYM_CODE_START_LOCAL(pvh_start_xen)
+ movq %rbp, %rbx
+ subq $_pa(pvh_start_xen), %rbx
+ movq %rbx, phys_base(%rip)
+- call xen_prepare_pvh
++
++ /* Call xen_prepare_pvh() via the kernel virtual mapping */
++ leaq xen_prepare_pvh(%rip), %rax
++ subq phys_base(%rip), %rax
++ addq $__START_KERNEL_map, %rax
++ ANNOTATE_RETPOLINE_SAFE
++ call *%rax
++
+ /*
+ * Clear phys_base. __startup_64 will *add* to its value,
+ * so reset to 0.
+diff --git a/arch/xtensa/kernel/setup.c b/arch/xtensa/kernel/setup.c
+index bdec4a773af098..e51f2060e83089 100644
+--- a/arch/xtensa/kernel/setup.c
++++ b/arch/xtensa/kernel/setup.c
+@@ -216,7 +216,7 @@ static int __init xtensa_dt_io_area(unsigned long node, const char *uname,
+
+ void __init early_init_devtree(void *params)
+ {
+- early_init_dt_scan(params);
++ early_init_dt_scan(params, __pa(params));
+ of_scan_flat_dt(xtensa_dt_io_area, NULL);
+
+ if (!command_line[0])
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index e831aedb464329..9fb9f353315025 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -736,6 +736,7 @@ static void bfq_sync_bfqq_move(struct bfq_data *bfqd,
+ */
+ bfq_put_cooperator(sync_bfqq);
+ bic_set_bfqq(bic, NULL, true, act_idx);
++ bfq_release_process_ref(bfqd, sync_bfqq);
+ }
+ }
+
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 0747d9d0e48c8a..95dd7b79593565 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -582,23 +582,31 @@ static struct request *bfq_choose_req(struct bfq_data *bfqd,
+ #define BFQ_LIMIT_INLINE_DEPTH 16
+
+ #ifdef CONFIG_BFQ_GROUP_IOSCHED
+-static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
++static bool bfqq_request_over_limit(struct bfq_data *bfqd,
++ struct bfq_io_cq *bic, blk_opf_t opf,
++ unsigned int act_idx, int limit)
+ {
+- struct bfq_data *bfqd = bfqq->bfqd;
+- struct bfq_entity *entity = &bfqq->entity;
+ struct bfq_entity *inline_entities[BFQ_LIMIT_INLINE_DEPTH];
+ struct bfq_entity **entities = inline_entities;
+- int depth, level, alloc_depth = BFQ_LIMIT_INLINE_DEPTH;
+- int class_idx = bfqq->ioprio_class - 1;
++ int alloc_depth = BFQ_LIMIT_INLINE_DEPTH;
+ struct bfq_sched_data *sched_data;
++ struct bfq_entity *entity;
++ struct bfq_queue *bfqq;
+ unsigned long wsum;
+ bool ret = false;
+-
+- if (!entity->on_st_or_in_serv)
+- return false;
++ int depth;
++ int level;
+
+ retry:
+ spin_lock_irq(&bfqd->lock);
++ bfqq = bic_to_bfqq(bic, op_is_sync(opf), act_idx);
++ if (!bfqq)
++ goto out;
++
++ entity = &bfqq->entity;
++ if (!entity->on_st_or_in_serv)
++ goto out;
++
+ /* +1 for bfqq entity, root cgroup not included */
+ depth = bfqg_to_blkg(bfqq_group(bfqq))->blkcg->css.cgroup->level + 1;
+ if (depth > alloc_depth) {
+@@ -643,7 +651,7 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
+ * class.
+ */
+ wsum = 0;
+- for (i = 0; i <= class_idx; i++) {
++ for (i = 0; i <= bfqq->ioprio_class - 1; i++) {
+ wsum = wsum * IOPRIO_BE_NR +
+ sched_data->service_tree[i].wsum;
+ }
+@@ -666,7 +674,9 @@ static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
+ return ret;
+ }
+ #else
+-static bool bfqq_request_over_limit(struct bfq_queue *bfqq, int limit)
++static bool bfqq_request_over_limit(struct bfq_data *bfqd,
++ struct bfq_io_cq *bic, blk_opf_t opf,
++ unsigned int act_idx, int limit)
+ {
+ return false;
+ }
+@@ -704,8 +714,9 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
+ }
+
+ for (act_idx = 0; bic && act_idx < bfqd->num_actuators; act_idx++) {
+- struct bfq_queue *bfqq =
+- bic_to_bfqq(bic, op_is_sync(opf), act_idx);
++ /* Fast path to check if bfqq is already allocated. */
++ if (!bic_to_bfqq(bic, op_is_sync(opf), act_idx))
++ continue;
+
+ /*
+ * Does queue (or any parent entity) exceed number of
+@@ -713,7 +724,7 @@ static void bfq_limit_depth(blk_opf_t opf, struct blk_mq_alloc_data *data)
+ * limit depth so that it cannot consume more
+ * available requests and thus starve other entities.
+ */
+- if (bfqq && bfqq_request_over_limit(bfqq, limit)) {
++ if (bfqq_request_over_limit(bfqd, bic, opf, act_idx, limit)) {
+ depth = 1;
+ break;
+ }
+@@ -5434,8 +5445,6 @@ void bfq_put_cooperator(struct bfq_queue *bfqq)
+ bfq_put_queue(__bfqq);
+ __bfqq = next;
+ }
+-
+- bfq_release_process_ref(bfqq->bfqd, bfqq);
+ }
+
+ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+@@ -5448,6 +5457,8 @@ static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq, bfqq->ref);
+
+ bfq_put_cooperator(bfqq);
++
++ bfq_release_process_ref(bfqd, bfqq);
+ }
+
+ static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync,
+@@ -6734,6 +6745,8 @@ bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq)
+ bic_set_bfqq(bic, NULL, true, bfqq->actuator_idx);
+
+ bfq_put_cooperator(bfqq);
++
++ bfq_release_process_ref(bfqq->bfqd, bfqq);
+ return NULL;
+ }
+
+diff --git a/block/blk-core.c b/block/blk-core.c
+index bc5e8c5eaac9ff..4f791a3114a12c 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -261,6 +261,8 @@ static void blk_free_queue(struct request_queue *q)
+ blk_mq_release(q);
+
+ ida_free(&blk_queue_ida, q->id);
++ lockdep_unregister_key(&q->io_lock_cls_key);
++ lockdep_unregister_key(&q->q_lock_cls_key);
+ call_rcu(&q->rcu_head, blk_free_queue_rcu);
+ }
+
+@@ -278,18 +280,20 @@ void blk_put_queue(struct request_queue *q)
+ }
+ EXPORT_SYMBOL(blk_put_queue);
+
+-void blk_queue_start_drain(struct request_queue *q)
++bool blk_queue_start_drain(struct request_queue *q)
+ {
+ /*
+ * When queue DYING flag is set, we need to block new req
+ * entering queue, so we call blk_freeze_queue_start() to
+ * prevent I/O from crossing blk_queue_enter().
+ */
+- blk_freeze_queue_start(q);
++ bool freeze = __blk_freeze_queue_start(q, current);
+ if (queue_is_mq(q))
+ blk_mq_wake_waiters(q);
+ /* Make blk_queue_enter() reexamine the DYING flag. */
+ wake_up_all(&q->mq_freeze_wq);
++
++ return freeze;
+ }
+
+ /**
+@@ -321,6 +325,8 @@ int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags)
+ return -ENODEV;
+ }
+
++ rwsem_acquire_read(&q->q_lockdep_map, 0, 0, _RET_IP_);
++ rwsem_release(&q->q_lockdep_map, _RET_IP_);
+ return 0;
+ }
+
+@@ -352,6 +358,8 @@ int __bio_queue_enter(struct request_queue *q, struct bio *bio)
+ goto dead;
+ }
+
++ rwsem_acquire_read(&q->io_lockdep_map, 0, 0, _RET_IP_);
++ rwsem_release(&q->io_lockdep_map, _RET_IP_);
+ return 0;
+ dead:
+ bio_io_error(bio);
+@@ -441,6 +449,12 @@ struct request_queue *blk_alloc_queue(struct queue_limits *lim, int node_id)
+ PERCPU_REF_INIT_ATOMIC, GFP_KERNEL);
+ if (error)
+ goto fail_stats;
++ lockdep_register_key(&q->io_lock_cls_key);
++ lockdep_register_key(&q->q_lock_cls_key);
++ lockdep_init_map(&q->io_lockdep_map, "&q->q_usage_counter(io)",
++ &q->io_lock_cls_key, 0);
++ lockdep_init_map(&q->q_lockdep_map, "&q->q_usage_counter(queue)",
++ &q->q_lock_cls_key, 0);
+
+ q->nr_requests = BLKDEV_DEFAULT_RQ;
+
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index ad763ec313b6ad..5baa950f34fe21 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -166,17 +166,6 @@ struct bio *bio_split_discard(struct bio *bio, const struct queue_limits *lim,
+ return bio_submit_split(bio, split_sectors);
+ }
+
+-struct bio *bio_split_write_zeroes(struct bio *bio,
+- const struct queue_limits *lim, unsigned *nsegs)
+-{
+- *nsegs = 0;
+- if (!lim->max_write_zeroes_sectors)
+- return bio;
+- if (bio_sectors(bio) <= lim->max_write_zeroes_sectors)
+- return bio;
+- return bio_submit_split(bio, lim->max_write_zeroes_sectors);
+-}
+-
+ static inline unsigned int blk_boundary_sectors(const struct queue_limits *lim,
+ bool is_atomic)
+ {
+@@ -211,7 +200,9 @@ static inline unsigned get_max_io_size(struct bio *bio,
+ * We ignore lim->max_sectors for atomic writes because it may less
+ * than the actual bio size, which we cannot tolerate.
+ */
+- if (is_atomic)
++ if (bio_op(bio) == REQ_OP_WRITE_ZEROES)
++ max_sectors = lim->max_write_zeroes_sectors;
++ else if (is_atomic)
+ max_sectors = lim->atomic_write_max_sectors;
+ else
+ max_sectors = lim->max_sectors;
+@@ -296,6 +287,14 @@ static bool bvec_split_segs(const struct queue_limits *lim,
+ return len > 0 || bv->bv_len > max_len;
+ }
+
++static unsigned int bio_split_alignment(struct bio *bio,
++ const struct queue_limits *lim)
++{
++ if (op_is_write(bio_op(bio)) && lim->zone_write_granularity)
++ return lim->zone_write_granularity;
++ return lim->logical_block_size;
++}
++
+ /**
+ * bio_split_rw_at - check if and where to split a read/write bio
+ * @bio: [in] bio to be split
+@@ -358,7 +357,7 @@ int bio_split_rw_at(struct bio *bio, const struct queue_limits *lim,
+ * split size so that each bio is properly block size aligned, even if
+ * we do not use the full hardware limits.
+ */
+- bytes = ALIGN_DOWN(bytes, lim->logical_block_size);
++ bytes = ALIGN_DOWN(bytes, bio_split_alignment(bio, lim));
+
+ /*
+ * Bio splitting may cause subtle trouble such as hang when doing sync
+@@ -398,6 +397,26 @@ struct bio *bio_split_zone_append(struct bio *bio,
+ return bio_submit_split(bio, split_sectors);
+ }
+
++struct bio *bio_split_write_zeroes(struct bio *bio,
++ const struct queue_limits *lim, unsigned *nsegs)
++{
++ unsigned int max_sectors = get_max_io_size(bio, lim);
++
++ *nsegs = 0;
++
++ /*
++ * An unset limit should normally not happen, as bio submission is keyed
++ * off having a non-zero limit. But SCSI can clear the limit in the
++ * I/O completion handler, and we can race and see this. Splitting to a
++ * zero limit obviously doesn't make sense, so band-aid it here.
++ */
++ if (!max_sectors)
++ return bio;
++ if (bio_sectors(bio) <= max_sectors)
++ return bio;
++ return bio_submit_split(bio, max_sectors);
++}
++
+ /**
+ * bio_split_to_limits - split a bio to fit the queue limits
+ * @bio: bio to be split
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index cf626e061dd774..b4fba7b398e5bc 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -120,9 +120,59 @@ void blk_mq_in_flight_rw(struct request_queue *q, struct block_device *part,
+ inflight[1] = mi.inflight[1];
+ }
+
+-void blk_freeze_queue_start(struct request_queue *q)
++#ifdef CONFIG_LOCKDEP
++static bool blk_freeze_set_owner(struct request_queue *q,
++ struct task_struct *owner)
++{
++ if (!owner)
++ return false;
++
++ if (!q->mq_freeze_depth) {
++ q->mq_freeze_owner = owner;
++ q->mq_freeze_owner_depth = 1;
++ return true;
++ }
++
++ if (owner == q->mq_freeze_owner)
++ q->mq_freeze_owner_depth += 1;
++ return false;
++}
++
++/* Verify that the final unfreeze happens in the owner's context. */
++static bool blk_unfreeze_check_owner(struct request_queue *q)
++{
++ if (!q->mq_freeze_owner)
++ return false;
++ if (q->mq_freeze_owner != current)
++ return false;
++ if (--q->mq_freeze_owner_depth == 0) {
++ q->mq_freeze_owner = NULL;
++ return true;
++ }
++ return false;
++}
++
++#else
++
++static bool blk_freeze_set_owner(struct request_queue *q,
++ struct task_struct *owner)
++{
++ return false;
++}
++
++static bool blk_unfreeze_check_owner(struct request_queue *q)
+ {
++ return false;
++}
++#endif
++
++bool __blk_freeze_queue_start(struct request_queue *q,
++ struct task_struct *owner)
++{
++ bool freeze;
++
+ mutex_lock(&q->mq_freeze_lock);
++ freeze = blk_freeze_set_owner(q, owner);
+ if (++q->mq_freeze_depth == 1) {
+ percpu_ref_kill(&q->q_usage_counter);
+ mutex_unlock(&q->mq_freeze_lock);
+@@ -131,6 +181,14 @@ void blk_freeze_queue_start(struct request_queue *q)
+ } else {
+ mutex_unlock(&q->mq_freeze_lock);
+ }
++
++ return freeze;
++}
++
++void blk_freeze_queue_start(struct request_queue *q)
++{
++ if (__blk_freeze_queue_start(q, current))
++ blk_freeze_acquire_lock(q, false, false);
+ }
+ EXPORT_SYMBOL_GPL(blk_freeze_queue_start);
+
+@@ -176,8 +234,10 @@ void blk_mq_freeze_queue(struct request_queue *q)
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_freeze_queue);
+
+-void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic)
++bool __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic)
+ {
++ bool unfreeze;
++
+ mutex_lock(&q->mq_freeze_lock);
+ if (force_atomic)
+ q->q_usage_counter.data->force_atomic = true;
+@@ -187,15 +247,39 @@ void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic)
+ percpu_ref_resurrect(&q->q_usage_counter);
+ wake_up_all(&q->mq_freeze_wq);
+ }
++ unfreeze = blk_unfreeze_check_owner(q);
+ mutex_unlock(&q->mq_freeze_lock);
++
++ return unfreeze;
+ }
+
+ void blk_mq_unfreeze_queue(struct request_queue *q)
+ {
+- __blk_mq_unfreeze_queue(q, false);
++ if (__blk_mq_unfreeze_queue(q, false))
++ blk_unfreeze_release_lock(q, false, false);
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue);
+
++/*
++ * non_owner variant of blk_freeze_queue_start
++ *
++ * Unlike blk_freeze_queue_start, the queue doesn't need to be unfrozen
++ * by the same task. This is fragile and should not be used if at all
++ * possible.
++ */
++void blk_freeze_queue_start_non_owner(struct request_queue *q)
++{
++ __blk_freeze_queue_start(q, NULL);
++}
++EXPORT_SYMBOL_GPL(blk_freeze_queue_start_non_owner);
++
++/* non_owner variant of blk_mq_unfreeze_queue */
++void blk_mq_unfreeze_queue_non_owner(struct request_queue *q)
++{
++ __blk_mq_unfreeze_queue(q, false);
++}
++EXPORT_SYMBOL_GPL(blk_mq_unfreeze_queue_non_owner);
++
+ /*
+ * FIXME: replace the scsi_internal_device_*block_nowait() calls in the
+ * mpt3sas driver such that this function can be removed.
+@@ -283,8 +367,9 @@ void blk_mq_quiesce_tagset(struct blk_mq_tag_set *set)
+ if (!blk_queue_skip_tagset_quiesce(q))
+ blk_mq_quiesce_queue_nowait(q);
+ }
+- blk_mq_wait_quiesce_done(set);
+ mutex_unlock(&set->tag_list_lock);
++
++ blk_mq_wait_quiesce_done(set);
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_quiesce_tagset);
+
+@@ -2200,6 +2285,24 @@ void blk_mq_delay_run_hw_queue(struct blk_mq_hw_ctx *hctx, unsigned long msecs)
+ }
+ EXPORT_SYMBOL(blk_mq_delay_run_hw_queue);
+
++static inline bool blk_mq_hw_queue_need_run(struct blk_mq_hw_ctx *hctx)
++{
++ bool need_run;
++
++ /*
++ * When the queue is quiesced, we may be switching the I/O scheduler,
++ * updating nr_hw_queues, or doing other work during which the queue
++ * can't be run any more; even blk_mq_hctx_has_pending() can't be
++ * called safely.
++ *
++ * The queue will be rerun in blk_mq_unquiesce_queue() if it is
++ * quiesced.
++ */
++ __blk_mq_run_dispatch_ops(hctx->queue, false,
++ need_run = !blk_queue_quiesced(hctx->queue) &&
++ blk_mq_hctx_has_pending(hctx));
++ return need_run;
++}
++
+ /**
+ * blk_mq_run_hw_queue - Start to run a hardware queue.
+ * @hctx: Pointer to the hardware queue to run.
+@@ -2220,20 +2323,23 @@ void blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
+
+ might_sleep_if(!async && hctx->flags & BLK_MQ_F_BLOCKING);
+
+- /*
+- * When queue is quiesced, we may be switching io scheduler, or
+- * updating nr_hw_queues, or other things, and we can't run queue
+- * any more, even __blk_mq_hctx_has_pending() can't be called safely.
+- *
+- * And queue will be rerun in blk_mq_unquiesce_queue() if it is
+- * quiesced.
+- */
+- __blk_mq_run_dispatch_ops(hctx->queue, false,
+- need_run = !blk_queue_quiesced(hctx->queue) &&
+- blk_mq_hctx_has_pending(hctx));
++ need_run = blk_mq_hw_queue_need_run(hctx);
++ if (!need_run) {
++ unsigned long flags;
+
+- if (!need_run)
+- return;
++ /*
++ * Synchronize with blk_mq_unquiesce_queue(): since we checked
++ * whether the hw queue is quiesced locklessly above, we need to
++ * take ->queue_lock to see the up-to-date status and avoid missing
++ * a rerun of the hw queue.
++ */
++ spin_lock_irqsave(&hctx->queue->queue_lock, flags);
++ need_run = blk_mq_hw_queue_need_run(hctx);
++ spin_unlock_irqrestore(&hctx->queue->queue_lock, flags);
++
++ if (!need_run)
++ return;
++ }
+
+ if (async || !cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask)) {
+ blk_mq_delay_run_hw_queue(hctx, 0);
+@@ -2390,6 +2496,12 @@ void blk_mq_start_stopped_hw_queue(struct blk_mq_hw_ctx *hctx, bool async)
+ return;
+
+ clear_bit(BLK_MQ_S_STOPPED, &hctx->state);
++ /*
++ * Pairs with the smp_mb() in blk_mq_hctx_stopped() to order the
++ * clearing of BLK_MQ_S_STOPPED above and the checking of dispatch
++ * list in the subsequent routine.
++ */
++ smp_mb__after_atomic();
+ blk_mq_run_hw_queue(hctx, async);
+ }
+ EXPORT_SYMBOL_GPL(blk_mq_start_stopped_hw_queue);
+@@ -2620,6 +2732,7 @@ static void blk_mq_try_issue_directly(struct blk_mq_hw_ctx *hctx,
+
+ if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
+ blk_mq_insert_request(rq, 0);
++ blk_mq_run_hw_queue(hctx, false);
+ return;
+ }
+
+@@ -2650,6 +2763,7 @@ static blk_status_t blk_mq_request_issue_directly(struct request *rq, bool last)
+
+ if (blk_mq_hctx_stopped(hctx) || blk_queue_quiesced(rq->q)) {
+ blk_mq_insert_request(rq, 0);
++ blk_mq_run_hw_queue(hctx, false);
+ return BLK_STS_OK;
+ }
+
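
The blk_freeze_set_owner()/blk_unfreeze_check_owner() pair above implements recursive ownership: only the task that performed the outermost freeze releases the lockdep map, and only on its matching final unfreeze. A stripped-down sketch of that invariant (the real code keys off q->mq_freeze_depth; this simplification is illustrative only):

struct demo_owner {
	void		*owner;	/* task that did the outermost freeze */
	unsigned int	depth;	/* nested freezes by that same task */
};

static bool demo_set_owner(struct demo_owner *f, void *task)
{
	if (!task)
		return false;
	if (!f->depth) {
		f->owner = task;
		f->depth = 1;
		return true;		/* outermost freeze: take ownership */
	}
	if (task == f->owner)
		f->depth++;
	return false;
}

static bool demo_check_owner(struct demo_owner *f, void *task)
{
	if (!f->owner || f->owner != task)
		return false;
	if (--f->depth == 0) {
		f->owner = NULL;
		return true;		/* final unfreeze by the owner */
	}
	return false;
}
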
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index 3bd43b10032f83..f4ac1af77a267e 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -230,6 +230,19 @@ static inline struct blk_mq_tags *blk_mq_tags_from_data(struct blk_mq_alloc_data
+
+ static inline bool blk_mq_hctx_stopped(struct blk_mq_hw_ctx *hctx)
+ {
++ /* Fast path: hardware queue is not stopped most of the time. */
++ if (likely(!test_bit(BLK_MQ_S_STOPPED, &hctx->state)))
++ return false;
++
++ /*
++ * This barrier orders adding requests to the dispatch list before
++ * the test of BLK_MQ_S_STOPPED below. It pairs with the memory
++ * barrier in blk_mq_start_stopped_hw_queue() so that the dispatch
++ * code sees either BLK_MQ_S_STOPPED cleared or a non-empty dispatch
++ * list, and thus never misses dispatching requests.
++ */
++ smp_mb();
++
+ return test_bit(BLK_MQ_S_STOPPED, &hctx->state);
+ }
+
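
The two barriers above form a store-buffering pattern: the submission side publishes a dispatch-list entry and then tests STOPPED, while the restart side clears STOPPED and then checks for pending work. With a full fence on each side, at least one of them must observe the other's update. A hedged user-space analogue in C11 atomics (seq_cst fences standing in for smp_mb()/smp_mb__after_atomic()):

#include <stdatomic.h>
#include <stdbool.h>

static atomic_bool stopped = true;
static atomic_int  pending;		/* stands in for the dispatch list */

/* Submission side: add work, fence, then test STOPPED. */
static bool submit_sees_running(void)
{
	atomic_fetch_add_explicit(&pending, 1, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);	/* smp_mb() */
	return !atomic_load_explicit(&stopped, memory_order_relaxed);
}

/* Restart side: clear STOPPED, fence, then check for pending work. */
static bool restart_sees_work(void)
{
	atomic_store_explicit(&stopped, false, memory_order_relaxed);
	atomic_thread_fence(memory_order_seq_cst);	/* smp_mb__after_atomic() */
	return atomic_load_explicit(&pending, memory_order_relaxed) > 0;
}

/* Run concurrently, at least one of the two returns true, so the queue
 * can never be left stopped while work sits on the dispatch list. */
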
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index a446654ddee5ef..7abf034089cd96 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -249,6 +249,13 @@ static int blk_validate_limits(struct queue_limits *lim)
+ if (lim->io_min < lim->physical_block_size)
+ lim->io_min = lim->physical_block_size;
+
++ /*
++ * The optimal I/O size may not be aligned to the physical block size
++ * (because it may be limited by DMA engines which have no clue about
++ * the block size of the disks attached to them), so round it down here.
++ */
++ lim->io_opt = round_down(lim->io_opt, lim->physical_block_size);
++
+ /*
+ * max_hw_sectors has a somewhat weird default for historical reason,
+ * but driver really should set their own instead of relying on this
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index e85941bec857b6..207577145c54f4 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -794,10 +794,8 @@ int blk_register_queue(struct gendisk *disk)
+ * faster to shut down and is made fully functional here as
+ * request_queues for non-existent devices never get registered.
+ */
+- if (!blk_queue_init_done(q)) {
+- blk_queue_flag_set(QUEUE_FLAG_INIT_DONE, q);
+- percpu_ref_switch_to_percpu(&q->q_usage_counter);
+- }
++ blk_queue_flag_set(QUEUE_FLAG_INIT_DONE, q);
++ percpu_ref_switch_to_percpu(&q->q_usage_counter);
+
+ return ret;
+
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index af19296fa50df1..95e517723db3e4 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -1541,6 +1541,7 @@ static int disk_update_zone_resources(struct gendisk *disk,
+ unsigned int nr_seq_zones, nr_conv_zones = 0;
+ unsigned int pool_size;
+ struct queue_limits lim;
++ int ret;
+
+ disk->nr_zones = args->nr_zones;
+ disk->zone_capacity = args->zone_capacity;
+@@ -1593,7 +1594,11 @@ static int disk_update_zone_resources(struct gendisk *disk,
+ }
+
+ commit:
+- return queue_limits_commit_update(q, &lim);
++ blk_mq_freeze_queue(q);
++ ret = queue_limits_commit_update(q, &lim);
++ blk_mq_unfreeze_queue(q);
++
++ return ret;
+ }
+
+ static int blk_revalidate_conv_zone(struct blk_zone *zone, unsigned int idx,
+@@ -1814,14 +1819,15 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
+ * Set the new disk zone parameters only once the queue is frozen and
+ * all I/Os are completed.
+ */
+- blk_mq_freeze_queue(q);
+ if (ret > 0)
+ ret = disk_update_zone_resources(disk, &args);
+ else
+ pr_warn("%s: failed to revalidate zones\n", disk->disk_name);
+- if (ret)
++ if (ret) {
++ blk_mq_freeze_queue(q);
+ disk_free_zone_resources(disk);
+- blk_mq_unfreeze_queue(q);
++ blk_mq_unfreeze_queue(q);
++ }
+
+ kfree(args.conv_zones_bitmap);
+
+diff --git a/block/blk.h b/block/blk.h
+index c718e4291db062..88fab6a81701ed 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -4,6 +4,7 @@
+
+ #include <linux/bio-integrity.h>
+ #include <linux/blk-crypto.h>
++#include <linux/lockdep.h>
+ #include <linux/memblock.h> /* for max_pfn/max_low_pfn */
+ #include <linux/sched/sysctl.h>
+ #include <linux/timekeeping.h>
+@@ -35,8 +36,10 @@ struct blk_flush_queue *blk_alloc_flush_queue(int node, int cmd_size,
+ void blk_free_flush_queue(struct blk_flush_queue *q);
+
+ void blk_freeze_queue(struct request_queue *q);
+-void __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
+-void blk_queue_start_drain(struct request_queue *q);
++bool __blk_mq_unfreeze_queue(struct request_queue *q, bool force_atomic);
++bool blk_queue_start_drain(struct request_queue *q);
++bool __blk_freeze_queue_start(struct request_queue *q,
++ struct task_struct *owner);
+ int __bio_queue_enter(struct request_queue *q, struct bio *bio);
+ void submit_bio_noacct_nocheck(struct bio *bio);
+ void bio_await_chain(struct bio *bio);
+@@ -69,8 +72,11 @@ static inline int bio_queue_enter(struct bio *bio)
+ {
+ struct request_queue *q = bdev_get_queue(bio->bi_bdev);
+
+- if (blk_try_enter_queue(q, false))
++ if (blk_try_enter_queue(q, false)) {
++ rwsem_acquire_read(&q->io_lockdep_map, 0, 0, _RET_IP_);
++ rwsem_release(&q->io_lockdep_map, _RET_IP_);
+ return 0;
++ }
+ return __bio_queue_enter(q, bio);
+ }
+
+@@ -734,4 +740,22 @@ void blk_integrity_verify(struct bio *bio);
+ void blk_integrity_prepare(struct request *rq);
+ void blk_integrity_complete(struct request *rq, unsigned int nr_bytes);
+
++static inline void blk_freeze_acquire_lock(struct request_queue *q, bool
++ disk_dead, bool queue_dying)
++{
++ if (!disk_dead)
++ rwsem_acquire(&q->io_lockdep_map, 0, 1, _RET_IP_);
++ if (!queue_dying)
++ rwsem_acquire(&q->q_lockdep_map, 0, 1, _RET_IP_);
++}
++
++static inline void blk_unfreeze_release_lock(struct request_queue *q, bool
++ disk_dead, bool queue_dying)
++{
++ if (!queue_dying)
++ rwsem_release(&q->q_lockdep_map, _RET_IP_);
++ if (!disk_dead)
++ rwsem_release(&q->io_lockdep_map, _RET_IP_);
++}
++
+ #endif /* BLK_INTERNAL_H */
+diff --git a/block/elevator.c b/block/elevator.c
+index 9430cde13d1a41..43ba4ab1ada7fd 100644
+--- a/block/elevator.c
++++ b/block/elevator.c
+@@ -598,13 +598,19 @@ void elevator_init_mq(struct request_queue *q)
+ * drain any dispatch activities originated from passthrough
+ * requests, then no need to quiesce queue which may add long boot
+ * latency, especially when lots of disks are involved.
++ *
++ * The disk isn't added yet, so the queue freeze lock can only be
++ * verified manually.
+ */
+- blk_mq_freeze_queue(q);
++ blk_freeze_queue_start_non_owner(q);
++ blk_freeze_acquire_lock(q, true, false);
++ blk_mq_freeze_queue_wait(q);
++
+ blk_mq_cancel_work_sync(q);
+
+ err = blk_mq_init_sched(q, e);
+
+- blk_mq_unfreeze_queue(q);
++ blk_unfreeze_release_lock(q, true, false);
++ blk_mq_unfreeze_queue_non_owner(q);
+
+ if (err) {
+ pr_warn("\"%s\" elevator initialization failed, "
+diff --git a/block/fops.c b/block/fops.c
+index e696ae53bf1e08..13a67940d0408d 100644
+--- a/block/fops.c
++++ b/block/fops.c
+@@ -35,13 +35,10 @@ static blk_opf_t dio_bio_write_op(struct kiocb *iocb)
+ return opf;
+ }
+
+-static bool blkdev_dio_invalid(struct block_device *bdev, loff_t pos,
+- struct iov_iter *iter, bool is_atomic)
++static bool blkdev_dio_invalid(struct block_device *bdev, struct kiocb *iocb,
++ struct iov_iter *iter)
+ {
+- if (is_atomic && !generic_atomic_write_valid(iter, pos))
+- return true;
+-
+- return pos & (bdev_logical_block_size(bdev) - 1) ||
++ return iocb->ki_pos & (bdev_logical_block_size(bdev) - 1) ||
+ !bdev_iter_is_aligned(bdev, iter);
+ }
+
+@@ -368,13 +365,12 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
+ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ {
+ struct block_device *bdev = I_BDEV(iocb->ki_filp->f_mapping->host);
+- bool is_atomic = iocb->ki_flags & IOCB_ATOMIC;
+ unsigned int nr_pages;
+
+ if (!iov_iter_count(iter))
+ return 0;
+
+- if (blkdev_dio_invalid(bdev, iocb->ki_pos, iter, is_atomic))
++ if (blkdev_dio_invalid(bdev, iocb, iter))
+ return -EINVAL;
+
+ nr_pages = bio_iov_vecs_to_alloc(iter, BIO_MAX_VECS + 1);
+@@ -383,7 +379,7 @@ static ssize_t blkdev_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+ return __blkdev_direct_IO_simple(iocb, iter, bdev,
+ nr_pages);
+ return __blkdev_direct_IO_async(iocb, iter, bdev, nr_pages);
+- } else if (is_atomic) {
++ } else if (iocb->ki_flags & IOCB_ATOMIC) {
+ return -EINVAL;
+ }
+ return __blkdev_direct_IO(iocb, iter, bdev, bio_max_segs(nr_pages));
+@@ -625,7 +621,7 @@ static int blkdev_open(struct inode *inode, struct file *filp)
+ if (!bdev)
+ return -ENXIO;
+
+- if (bdev_can_atomic_write(bdev) && filp->f_flags & O_DIRECT)
++ if (bdev_can_atomic_write(bdev))
+ filp->f_mode |= FMODE_CAN_ATOMIC_WRITE;
+
+ ret = bdev_open(bdev, mode, filp->private_data, NULL, filp);
+@@ -681,6 +677,7 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ struct file *file = iocb->ki_filp;
+ struct inode *bd_inode = bdev_file_inode(file);
+ struct block_device *bdev = I_BDEV(bd_inode);
++ bool atomic = iocb->ki_flags & IOCB_ATOMIC;
+ loff_t size = bdev_nr_bytes(bdev);
+ size_t shorted = 0;
+ ssize_t ret;
+@@ -700,8 +697,16 @@ static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ if ((iocb->ki_flags & (IOCB_NOWAIT | IOCB_DIRECT)) == IOCB_NOWAIT)
+ return -EOPNOTSUPP;
+
++ if (atomic) {
++ ret = generic_atomic_write_valid(iocb, from);
++ if (ret)
++ return ret;
++ }
++
+ size -= iocb->ki_pos;
+ if (iov_iter_count(from) > size) {
++ if (atomic)
++ return -EINVAL;
+ shorted = iov_iter_count(from) - size;
+ iov_iter_truncate(from, size);
+ }
+diff --git a/block/genhd.c b/block/genhd.c
+index 1c05dd4c6980b5..8645cf3b0816e4 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -581,13 +581,13 @@ static void blk_report_disk_dead(struct gendisk *disk, bool surprise)
+ rcu_read_unlock();
+ }
+
+-static void __blk_mark_disk_dead(struct gendisk *disk)
++static bool __blk_mark_disk_dead(struct gendisk *disk)
+ {
+ /*
+ * Fail any new I/O.
+ */
+ if (test_and_set_bit(GD_DEAD, &disk->state))
+- return;
++ return false;
+
+ if (test_bit(GD_OWNS_QUEUE, &disk->state))
+ blk_queue_flag_set(QUEUE_FLAG_DYING, disk->queue);
+@@ -600,7 +600,7 @@ static void __blk_mark_disk_dead(struct gendisk *disk)
+ /*
+ * Prevent new I/O from crossing bio_queue_enter().
+ */
+- blk_queue_start_drain(disk->queue);
++ return blk_queue_start_drain(disk->queue);
+ }
+
+ /**
+@@ -641,6 +641,7 @@ void del_gendisk(struct gendisk *disk)
+ struct request_queue *q = disk->queue;
+ struct block_device *part;
+ unsigned long idx;
++ bool start_drain, queue_dying;
+
+ might_sleep();
+
+@@ -668,7 +669,10 @@ void del_gendisk(struct gendisk *disk)
+ * Drop all partitions now that the disk is marked dead.
+ */
+ mutex_lock(&disk->open_mutex);
+- __blk_mark_disk_dead(disk);
++ start_drain = __blk_mark_disk_dead(disk);
++ queue_dying = blk_queue_dying(q);
++ if (start_drain)
++ blk_freeze_acquire_lock(q, true, queue_dying);
+ xa_for_each_start(&disk->part_tbl, idx, part, 1)
+ drop_partition(part);
+ mutex_unlock(&disk->open_mutex);
+@@ -718,13 +722,13 @@ void del_gendisk(struct gendisk *disk)
+ * If the disk does not own the queue, allow using passthrough requests
+ * again. Else leave the queue frozen to fail all I/O.
+ */
+- if (!test_bit(GD_OWNS_QUEUE, &disk->state)) {
+- blk_queue_flag_clear(QUEUE_FLAG_INIT_DONE, q);
++ if (!test_bit(GD_OWNS_QUEUE, &disk->state))
+ __blk_mq_unfreeze_queue(q, true);
+- } else {
+- if (queue_is_mq(q))
+- blk_mq_exit_queue(q);
+- }
++ else if (queue_is_mq(q))
++ blk_mq_exit_queue(q);
++
++ if (start_drain)
++ blk_unfreeze_release_lock(q, true, queue_dying);
+ }
+ EXPORT_SYMBOL(del_gendisk);
+
+diff --git a/crypto/pcrypt.c b/crypto/pcrypt.c
+index d0d954fe9d54f3..7fc79e7dce44a9 100644
+--- a/crypto/pcrypt.c
++++ b/crypto/pcrypt.c
+@@ -117,8 +117,10 @@ static int pcrypt_aead_encrypt(struct aead_request *req)
+ err = padata_do_parallel(ictx->psenc, padata, &ctx->cb_cpu);
+ if (!err)
+ return -EINPROGRESS;
+- if (err == -EBUSY)
+- return -EAGAIN;
++ if (err == -EBUSY) {
++ /* try non-parallel mode */
++ return crypto_aead_encrypt(creq);
++ }
+
+ return err;
+ }
+@@ -166,8 +168,10 @@ static int pcrypt_aead_decrypt(struct aead_request *req)
+ err = padata_do_parallel(ictx->psdec, padata, &ctx->cb_cpu);
+ if (!err)
+ return -EINPROGRESS;
+- if (err == -EBUSY)
+- return -EAGAIN;
++ if (err == -EBUSY) {
++ /* try non-parallel mode */
++ return crypto_aead_decrypt(creq);
++ }
+
+ return err;
+ }
+diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c
+index 78b32a8232419e..29b723039a3459 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.c
++++ b/drivers/accel/ivpu/ivpu_ipc.c
+@@ -291,15 +291,16 @@ int ivpu_ipc_receive(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ return ret;
+ }
+
+-static int
++int
+ ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+ enum vpu_ipc_msg_type expected_resp_type,
+- struct vpu_jsm_msg *resp, u32 channel,
+- unsigned long timeout_ms)
++ struct vpu_jsm_msg *resp, u32 channel, unsigned long timeout_ms)
+ {
+ struct ivpu_ipc_consumer cons;
+ int ret;
+
++ drm_WARN_ON(&vdev->drm, pm_runtime_status_suspended(vdev->drm.dev));
++
+ ivpu_ipc_consumer_add(vdev, &cons, channel, NULL);
+
+ ret = ivpu_ipc_send(vdev, &cons, req);
+@@ -325,19 +326,21 @@ ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req
+ return ret;
+ }
+
+-int ivpu_ipc_send_receive_active(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+- enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+- u32 channel, unsigned long timeout_ms)
++int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
++ enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
++ u32 channel, unsigned long timeout_ms)
+ {
+ struct vpu_jsm_msg hb_req = { .type = VPU_JSM_MSG_QUERY_ENGINE_HB };
+ struct vpu_jsm_msg hb_resp;
+ int ret, hb_ret;
+
+- drm_WARN_ON(&vdev->drm, pm_runtime_status_suspended(vdev->drm.dev));
++ ret = ivpu_rpm_get(vdev);
++ if (ret < 0)
++ return ret;
+
+ ret = ivpu_ipc_send_receive_internal(vdev, req, expected_resp, resp, channel, timeout_ms);
+ if (ret != -ETIMEDOUT)
+- return ret;
++ goto rpm_put;
+
+ hb_ret = ivpu_ipc_send_receive_internal(vdev, &hb_req, VPU_JSM_MSG_QUERY_ENGINE_HB_DONE,
+ &hb_resp, VPU_IPC_CHAN_ASYNC_CMD,
+@@ -345,21 +348,7 @@ int ivpu_ipc_send_receive_active(struct ivpu_device *vdev, struct vpu_jsm_msg *r
+ if (hb_ret == -ETIMEDOUT)
+ ivpu_pm_trigger_recovery(vdev, "IPC timeout");
+
+- return ret;
+-}
+-
+-int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+- enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+- u32 channel, unsigned long timeout_ms)
+-{
+- int ret;
+-
+- ret = ivpu_rpm_get(vdev);
+- if (ret < 0)
+- return ret;
+-
+- ret = ivpu_ipc_send_receive_active(vdev, req, expected_resp, resp, channel, timeout_ms);
+-
++rpm_put:
+ ivpu_rpm_put(vdev);
+ return ret;
+ }
+diff --git a/drivers/accel/ivpu/ivpu_ipc.h b/drivers/accel/ivpu/ivpu_ipc.h
+index 4fe38141045ea3..fb4de7fb8210ea 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.h
++++ b/drivers/accel/ivpu/ivpu_ipc.h
+@@ -101,10 +101,9 @@ int ivpu_ipc_send(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ int ivpu_ipc_receive(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ struct ivpu_ipc_hdr *ipc_buf, struct vpu_jsm_msg *jsm_msg,
+ unsigned long timeout_ms);
+-
+-int ivpu_ipc_send_receive_active(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+- enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+- u32 channel, unsigned long timeout_ms);
++int ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
++ enum vpu_ipc_msg_type expected_resp_type,
++ struct vpu_jsm_msg *resp, u32 channel, unsigned long timeout_ms);
+ int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+ enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+ u32 channel, unsigned long timeout_ms);
+diff --git a/drivers/accel/ivpu/ivpu_jsm_msg.c b/drivers/accel/ivpu/ivpu_jsm_msg.c
+index 46ef16c3c06910..88105963c1b288 100644
+--- a/drivers/accel/ivpu/ivpu_jsm_msg.c
++++ b/drivers/accel/ivpu/ivpu_jsm_msg.c
+@@ -270,9 +270,8 @@ int ivpu_jsm_pwr_d0i3_enter(struct ivpu_device *vdev)
+
+ req.payload.pwr_d0i3_enter.send_response = 1;
+
+- ret = ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_PWR_D0I3_ENTER_DONE,
+- &resp, VPU_IPC_CHAN_GEN_CMD,
+- vdev->timeout.d0i3_entry_msg);
++ ret = ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_PWR_D0I3_ENTER_DONE, &resp,
++ VPU_IPC_CHAN_GEN_CMD, vdev->timeout.d0i3_entry_msg);
+ if (ret)
+ return ret;
+
+@@ -430,8 +429,8 @@ int ivpu_jsm_hws_setup_priority_bands(struct ivpu_device *vdev)
+
+ req.payload.hws_priority_band_setup.normal_band_percentage = 10;
+
+- ret = ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP_RSP,
+- &resp, VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
++ ret = ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_SET_PRIORITY_BAND_SETUP_RSP,
++ &resp, VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+ if (ret)
+ ivpu_warn_ratelimited(vdev, "Failed to set priority bands: %d\n", ret);
+
+@@ -544,9 +543,8 @@ int ivpu_jsm_dct_enable(struct ivpu_device *vdev, u32 active_us, u32 inactive_us
+ req.payload.pwr_dct_control.dct_active_us = active_us;
+ req.payload.pwr_dct_control.dct_inactive_us = inactive_us;
+
+- return ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_DCT_ENABLE_DONE,
+- &resp, VPU_IPC_CHAN_ASYNC_CMD,
+- vdev->timeout.jsm);
++ return ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_DCT_ENABLE_DONE, &resp,
++ VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+ }
+
+ int ivpu_jsm_dct_disable(struct ivpu_device *vdev)
+@@ -554,7 +552,6 @@ int ivpu_jsm_dct_disable(struct ivpu_device *vdev)
+ struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_DCT_DISABLE };
+ struct vpu_jsm_msg resp;
+
+- return ivpu_ipc_send_receive_active(vdev, &req, VPU_JSM_MSG_DCT_DISABLE_DONE,
+- &resp, VPU_IPC_CHAN_ASYNC_CMD,
+- vdev->timeout.jsm);
++ return ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_DCT_DISABLE_DONE, &resp,
++ VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+ }
+diff --git a/drivers/acpi/arm64/gtdt.c b/drivers/acpi/arm64/gtdt.c
+index c0e77c1c8e09d6..eb6c2d3603874a 100644
+--- a/drivers/acpi/arm64/gtdt.c
++++ b/drivers/acpi/arm64/gtdt.c
+@@ -283,7 +283,7 @@ static int __init gtdt_parse_timer_block(struct acpi_gtdt_timer_block *block,
+ if (frame->virt_irq > 0)
+ acpi_unregister_gsi(gtdt_frame->virtual_timer_interrupt);
+ frame->virt_irq = 0;
+- } while (i-- >= 0 && gtdt_frame--);
++ } while (i-- > 0 && gtdt_frame--);
+
+ return -EINVAL;
+ }
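
The one-character change above fixes a classic unwind off-by-one: with a do/while, "i-- >= 0" executes the body one extra time and steps the frame pointer before the start of the array. A standalone sketch of the corrected pattern (types and teardown are illustrative):

struct demo_frame { int irq; };

static void demo_teardown(struct demo_frame *f)
{
	f->irq = 0;	/* stand-in for the real cleanup */
}

/* f points at frame i; roll back frames i, i-1, ..., 0, then stop. */
static void demo_unwind(struct demo_frame *f, int i)
{
	do {
		demo_teardown(f);
	} while (i-- > 0 && f--);
}
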
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index 5c0cc7aae8726b..e78e3754d99e1d 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -1140,7 +1140,6 @@ static int cpc_write(int cpu, struct cpc_register_resource *reg_res, u64 val)
+ return -EFAULT;
+ }
+ val = MASK_VAL_WRITE(reg, prev_val, val);
+- val |= prev_val;
+ }
+
+ switch (size) {
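
The removed "val |= prev_val;" matters because a masked field update must not re-introduce stale bits: after the mask/shift insertion, OR-ing the old register value back in resurrects any 1-bits the write was meant to clear. A hypothetical expansion of such a read-modify-write (MASK_VAL_WRITE's real definition lives elsewhere in cppc_acpi.c; the mask and shift here are illustrative):

/* Keep the bits outside the field, insert the new value into it. */
static u64 demo_field_write(u64 prev, u64 val, u64 mask, unsigned int shift)
{
	return (prev & ~mask) | ((val << shift) & mask);
}
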
+diff --git a/drivers/base/firmware_loader/main.c b/drivers/base/firmware_loader/main.c
+index 324a9a3c087aa2..c6664a78796979 100644
+--- a/drivers/base/firmware_loader/main.c
++++ b/drivers/base/firmware_loader/main.c
+@@ -829,19 +829,18 @@ static void fw_log_firmware_info(const struct firmware *fw, const char *name, st
+ shash->tfm = alg;
+
+ if (crypto_shash_digest(shash, fw->data, fw->size, sha256buf) < 0)
+- goto out_shash;
++ goto out_free;
+
+ for (int i = 0; i < SHA256_DIGEST_SIZE; i++)
+ sprintf(&outbuf[i * 2], "%02x", sha256buf[i]);
+ outbuf[SHA256_BLOCK_SIZE] = 0;
+ dev_dbg(device, "Loaded FW: %s, sha256: %s\n", name, outbuf);
+
+-out_shash:
+- crypto_free_shash(alg);
+ out_free:
+ kfree(shash);
+ kfree(outbuf);
+ kfree(sha256buf);
++ crypto_free_shash(alg);
+ }
+ #else
+ static void fw_log_firmware_info(const struct firmware *fw, const char *name,
+diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c
+index a750e48a26b87c..6981e5f974e9a4 100644
+--- a/drivers/base/regmap/regmap-irq.c
++++ b/drivers/base/regmap/regmap-irq.c
+@@ -514,12 +514,16 @@ static irqreturn_t regmap_irq_thread(int irq, void *d)
+ return IRQ_NONE;
+ }
+
++static struct lock_class_key regmap_irq_lock_class;
++static struct lock_class_key regmap_irq_request_class;
++
+ static int regmap_irq_map(struct irq_domain *h, unsigned int virq,
+ irq_hw_number_t hw)
+ {
+ struct regmap_irq_chip_data *data = h->host_data;
+
+ irq_set_chip_data(virq, data);
++ irq_set_lockdep_class(virq, &regmap_irq_lock_class, &regmap_irq_request_class);

+ irq_set_chip(virq, &data->irq_chip);
+ irq_set_nested_thread(virq, 1);
+ irq_set_parent(virq, data->irq);
+diff --git a/drivers/base/trace.h b/drivers/base/trace.h
+index e52b6eae060dde..3b83b13a57ff1e 100644
+--- a/drivers/base/trace.h
++++ b/drivers/base/trace.h
+@@ -24,18 +24,18 @@ DECLARE_EVENT_CLASS(devres,
+ __field(struct device *, dev)
+ __field(const char *, op)
+ __field(void *, node)
+- __field(const char *, name)
++ __string(name, name)
+ __field(size_t, size)
+ ),
+ TP_fast_assign(
+ __assign_str(devname);
+ __entry->op = op;
+ __entry->node = node;
+- __entry->name = name;
++ __assign_str(name);
+ __entry->size = size;
+ ),
+ TP_printk("%s %3s %p %s (%zu bytes)", __get_str(devname),
+- __entry->op, __entry->node, __entry->name, __entry->size)
++ __entry->op, __entry->node, __get_str(name), __entry->size)
+ );
+
+ DEFINE_EVENT(devres, devres_log,
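
The devres trace fix above stores a copy of the name instead of the pointer:
a trace record can be read long after the producer has freed the string, so
__string()/__assign_str() snapshot the bytes at record time. A rough
userspace analogy, with the ring-buffer record simplified to one struct:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct rec { const char *ptr; char copy[32]; };

    int main(void)
    {
        struct rec r;
        char *name = strdup("devres-node");

        if (!name)
            return 1;
        r.ptr = name;                   /* old __field(const char *, name) */
        snprintf(r.copy, sizeof(r.copy), "%s", name); /* new __string() copy */

        free(name);                     /* producer's string goes away */
        printf("copy still valid: %s\n", r.copy);
        /* r.ptr now dangles; printing it would be a use-after-free */
        return 0;
    }
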
+diff --git a/drivers/block/brd.c b/drivers/block/brd.c
+index 2fd1ed1017481b..292f127cae0abe 100644
+--- a/drivers/block/brd.c
++++ b/drivers/block/brd.c
+@@ -231,8 +231,10 @@ static void brd_do_discard(struct brd_device *brd, sector_t sector, u32 size)
+ xa_lock(&brd->brd_pages);
+ while (size >= PAGE_SIZE && aligned_sector < rd_size * 2) {
+ page = __xa_erase(&brd->brd_pages, aligned_sector >> PAGE_SECTORS_SHIFT);
+- if (page)
++ if (page) {
+ __free_page(page);
++ brd->brd_nr_pages--;
++ }
+ aligned_sector += PAGE_SECTORS;
+ size -= PAGE_SIZE;
+ }
+@@ -316,8 +318,40 @@ __setup("ramdisk_size=", ramdisk_size);
+ * (should share code eventually).
+ */
+ static LIST_HEAD(brd_devices);
++static DEFINE_MUTEX(brd_devices_mutex);
+ static struct dentry *brd_debugfs_dir;
+
++static struct brd_device *brd_find_or_alloc_device(int i)
++{
++ struct brd_device *brd;
++
++ mutex_lock(&brd_devices_mutex);
++ list_for_each_entry(brd, &brd_devices, brd_list) {
++ if (brd->brd_number == i) {
++ mutex_unlock(&brd_devices_mutex);
++ return ERR_PTR(-EEXIST);
++ }
++ }
++
++ brd = kzalloc(sizeof(*brd), GFP_KERNEL);
++ if (!brd) {
++ mutex_unlock(&brd_devices_mutex);
++ return ERR_PTR(-ENOMEM);
++ }
++ brd->brd_number = i;
++ list_add_tail(&brd->brd_list, &brd_devices);
++ mutex_unlock(&brd_devices_mutex);
++ return brd;
++}
++
++static void brd_free_device(struct brd_device *brd)
++{
++ mutex_lock(&brd_devices_mutex);
++ list_del(&brd->brd_list);
++ mutex_unlock(&brd_devices_mutex);
++ kfree(brd);
++}
++
+ static int brd_alloc(int i)
+ {
+ struct brd_device *brd;
+@@ -340,14 +374,9 @@ static int brd_alloc(int i)
+ BLK_FEAT_NOWAIT,
+ };
+
+- list_for_each_entry(brd, &brd_devices, brd_list)
+- if (brd->brd_number == i)
+- return -EEXIST;
+- brd = kzalloc(sizeof(*brd), GFP_KERNEL);
+- if (!brd)
+- return -ENOMEM;
+- brd->brd_number = i;
+- list_add_tail(&brd->brd_list, &brd_devices);
++ brd = brd_find_or_alloc_device(i);
++ if (IS_ERR(brd))
++ return PTR_ERR(brd);
+
+ xa_init(&brd->brd_pages);
+
+@@ -378,8 +407,7 @@ static int brd_alloc(int i)
+ out_cleanup_disk:
+ put_disk(disk);
+ out_free_dev:
+- list_del(&brd->brd_list);
+- kfree(brd);
++ brd_free_device(brd);
+ return err;
+ }
+
+@@ -398,8 +426,7 @@ static void brd_cleanup(void)
+ del_gendisk(brd->brd_disk);
+ put_disk(brd->brd_disk);
+ brd_free_pages(brd);
+- list_del(&brd->brd_list);
+- kfree(brd);
++ brd_free_device(brd);
+ }
+ }
+
+@@ -426,16 +453,6 @@ static int __init brd_init(void)
+ {
+ int err, i;
+
+- brd_check_and_reset_par();
+-
+- brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL);
+-
+- for (i = 0; i < rd_nr; i++) {
+- err = brd_alloc(i);
+- if (err)
+- goto out_free;
+- }
+-
+ /*
+ * brd module now has a feature to instantiate underlying device
+ * structure on-demand, provided that there is an access dev node.
+@@ -451,11 +468,18 @@ static int __init brd_init(void)
+ * dynamically.
+ */
+
++ brd_check_and_reset_par();
++
++ brd_debugfs_dir = debugfs_create_dir("ramdisk_pages", NULL);
++
+ if (__register_blkdev(RAMDISK_MAJOR, "ramdisk", brd_probe)) {
+ err = -EIO;
+ goto out_free;
+ }
+
++ for (i = 0; i < rd_nr; i++)
++ brd_alloc(i);
++
+ pr_info("brd: module loaded\n");
+ return 0;
+
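The brd rework above moves the duplicate-minor lookup and the list insertion
under one mutex, so two concurrent probes of the same index can no longer
both allocate a device. A hedged pthread sketch of that find-or-allocate
shape, with illustrative names:

    #include <pthread.h>
    #include <stdlib.h>

    struct dev { int nr; struct dev *next; };

    static struct dev *devs;
    static pthread_mutex_t devs_lock = PTHREAD_MUTEX_INITIALIZER;

    static struct dev *find_or_alloc(int nr)
    {
        struct dev *d;

        pthread_mutex_lock(&devs_lock);
        for (d = devs; d; d = d->next) {
            if (d->nr == nr) {          /* already claimed: -EEXIST */
                pthread_mutex_unlock(&devs_lock);
                return NULL;
            }
        }
        d = calloc(1, sizeof(*d));
        if (d) {                        /* publish while still locked */
            d->nr = nr;
            d->next = devs;
            devs = d;
        }
        pthread_mutex_unlock(&devs_lock);
        return d;
    }

    int main(void)
    {
        return find_or_alloc(0) ? 0 : 1;
    }
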
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 78a7bb28defe4c..86cc3b19faae86 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -173,7 +173,7 @@ static loff_t get_loop_size(struct loop_device *lo, struct file *file)
+ static bool lo_bdev_can_use_dio(struct loop_device *lo,
+ struct block_device *backing_bdev)
+ {
+- unsigned short sb_bsize = bdev_logical_block_size(backing_bdev);
++ unsigned int sb_bsize = bdev_logical_block_size(backing_bdev);
+
+ if (queue_logical_block_size(lo->lo_queue) < sb_bsize)
+ return false;
+@@ -977,7 +977,7 @@ loop_set_status_from_info(struct loop_device *lo,
+ return 0;
+ }
+
+-static unsigned short loop_default_blocksize(struct loop_device *lo,
++static unsigned int loop_default_blocksize(struct loop_device *lo,
+ struct block_device *backing_bdev)
+ {
+ /* In case of direct I/O, match underlying block size */
+@@ -986,7 +986,7 @@ static unsigned short loop_default_blocksize(struct loop_device *lo,
+ return SECTOR_SIZE;
+ }
+
+-static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
++static int loop_reconfigure_limits(struct loop_device *lo, unsigned int bsize)
+ {
+ struct file *file = lo->lo_backing_file;
+ struct inode *inode = file->f_mapping->host;
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 6ba2c1dd1d878a..90bc605ff6c299 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -664,12 +664,21 @@ static inline char *ublk_queue_cmd_buf(struct ublk_device *ub, int q_id)
+ return ublk_get_queue(ub, q_id)->io_cmd_buf;
+ }
+
++static inline int __ublk_queue_cmd_buf_size(int depth)
++{
++ return round_up(depth * sizeof(struct ublksrv_io_desc), PAGE_SIZE);
++}
++
+ static inline int ublk_queue_cmd_buf_size(struct ublk_device *ub, int q_id)
+ {
+ struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
+
+- return round_up(ubq->q_depth * sizeof(struct ublksrv_io_desc),
+- PAGE_SIZE);
++ return __ublk_queue_cmd_buf_size(ubq->q_depth);
++}
++
++static int ublk_max_cmd_buf_size(void)
++{
++ return __ublk_queue_cmd_buf_size(UBLK_MAX_QUEUE_DEPTH);
+ }
+
+ static inline bool ublk_queue_can_use_recovery_reissue(
+@@ -1322,7 +1331,7 @@ static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
+ {
+ struct ublk_device *ub = filp->private_data;
+ size_t sz = vma->vm_end - vma->vm_start;
+- unsigned max_sz = UBLK_MAX_QUEUE_DEPTH * sizeof(struct ublksrv_io_desc);
++ unsigned max_sz = ublk_max_cmd_buf_size();
+ unsigned long pfn, end, phys_off = vma->vm_pgoff << PAGE_SHIFT;
+ int q_id, ret = 0;
+
+@@ -2965,7 +2974,7 @@ static int ublk_ctrl_uring_cmd(struct io_uring_cmd *cmd,
+ ret = ublk_ctrl_end_recovery(ub, cmd);
+ break;
+ default:
+- ret = -ENOTSUPP;
++ ret = -EOPNOTSUPP;
+ break;
+ }
+
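The ublk change above routes both the command-buffer allocation and the mmap
bound through one page-rounding helper; if the mmap check kept using the raw
depth * sizeof(desc) product while the buffer itself was page-aligned,
mapping the last partial page would be rejected. The arithmetic, with
illustrative sizes:

    #include <stdio.h>

    #define PAGE_SIZE 4096UL
    #define round_up(x, a) ((((x) + (a) - 1) / (a)) * (a))

    int main(void)
    {
        unsigned long depth = 128, desc = 24;       /* illustrative sizes */
        unsigned long raw = depth * desc;
        unsigned long buf = round_up(raw, PAGE_SIZE);

        printf("raw=%lu rounded=%lu\n", raw, buf);  /* 3072 vs 4096 */
        return 0;
    }
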
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 194417abc1053c..43c96b73a7118f 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -471,18 +471,18 @@ static bool virtblk_prep_rq_batch(struct request *req)
+ return virtblk_prep_rq(req->mq_hctx, vblk, req, vbr) == BLK_STS_OK;
+ }
+
+-static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
++static void virtblk_add_req_batch(struct virtio_blk_vq *vq,
+ struct request **rqlist)
+ {
++ struct request *req;
+ unsigned long flags;
+- int err;
+ bool kick;
+
+ spin_lock_irqsave(&vq->lock, flags);
+
+- while (!rq_list_empty(*rqlist)) {
+- struct request *req = rq_list_pop(rqlist);
++ while ((req = rq_list_pop(rqlist))) {
+ struct virtblk_req *vbr = blk_mq_rq_to_pdu(req);
++ int err;
+
+ err = virtblk_add_req(vq->vq, vbr);
+ if (err) {
+@@ -495,37 +495,33 @@ static bool virtblk_add_req_batch(struct virtio_blk_vq *vq,
+ kick = virtqueue_kick_prepare(vq->vq);
+ spin_unlock_irqrestore(&vq->lock, flags);
+
+- return kick;
++ if (kick)
++ virtqueue_notify(vq->vq);
+ }
+
+ static void virtio_queue_rqs(struct request **rqlist)
+ {
+- struct request *req, *next, *prev = NULL;
++ struct request *submit_list = NULL;
+ struct request *requeue_list = NULL;
++ struct request **requeue_lastp = &requeue_list;
++ struct virtio_blk_vq *vq = NULL;
++ struct request *req;
+
+- rq_list_for_each_safe(rqlist, req, next) {
+- struct virtio_blk_vq *vq = get_virtio_blk_vq(req->mq_hctx);
+- bool kick;
+-
+- if (!virtblk_prep_rq_batch(req)) {
+- rq_list_move(rqlist, &requeue_list, req, prev);
+- req = prev;
+- if (!req)
+- continue;
+- }
++ while ((req = rq_list_pop(rqlist))) {
++ struct virtio_blk_vq *this_vq = get_virtio_blk_vq(req->mq_hctx);
+
+- if (!next || req->mq_hctx != next->mq_hctx) {
+- req->rq_next = NULL;
+- kick = virtblk_add_req_batch(vq, rqlist);
+- if (kick)
+- virtqueue_notify(vq->vq);
++ if (vq && vq != this_vq)
++ virtblk_add_req_batch(vq, &submit_list);
++ vq = this_vq;
+
+- *rqlist = next;
+- prev = NULL;
+- } else
+- prev = req;
++ if (virtblk_prep_rq_batch(req))
++ rq_list_add(&submit_list, req); /* reverse order */
++ else
++ rq_list_add_tail(&requeue_lastp, req);
+ }
+
++ if (vq)
++ virtblk_add_req_batch(vq, &submit_list);
+ *rqlist = requeue_list;
+ }
+
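The virtio_blk rewrite above replaces the prev/next list surgery with a
single pass that accumulates requests for the current virtqueue and flushes
the batch whenever the queue changes, plus once more for the tail. A hedged
sketch of that flush-on-key-change loop:

    #include <stdio.h>

    struct req { int q; struct req *next; };

    static void flush(int q, int n)
    {
        if (n)
            printf("submit %d reqs to vq %d under one lock/kick\n", n, q);
    }

    int main(void)
    {
        struct req r3 = { 1, NULL }, r2 = { 1, &r3 }, r1 = { 0, &r2 };
        struct req *req = &r1;
        int cur_q = -1, batched = 0;

        for (; req; req = req->next) {
            if (cur_q != -1 && req->q != cur_q) {   /* vq changed: flush */
                flush(cur_q, batched);
                batched = 0;
            }
            cur_q = req->q;
            batched++;
        }
        flush(cur_q, batched);                      /* tail batch */
        return 0;
    }
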
+diff --git a/drivers/block/zram/Kconfig b/drivers/block/zram/Kconfig
+index 6aea609b795c2f..402b7b17586328 100644
+--- a/drivers/block/zram/Kconfig
++++ b/drivers/block/zram/Kconfig
+@@ -94,6 +94,7 @@ endchoice
+
+ config ZRAM_DEF_COMP
+ string
++ depends on ZRAM
+ default "lzo-rle" if ZRAM_DEF_COMP_LZORLE
+ default "lzo" if ZRAM_DEF_COMP_LZO
+ default "lz4" if ZRAM_DEF_COMP_LZ4
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index ad9c9bc3ccfc5b..e682797cdee783 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -626,6 +626,12 @@ static ssize_t writeback_store(struct device *dev,
+ goto release_init_lock;
+ }
+
++ /* Do not permit concurrent post-processing actions. */
++ if (atomic_xchg(&zram->pp_in_progress, 1)) {
++ up_read(&zram->init_lock);
++ return -EAGAIN;
++ }
++
+ if (!zram->backing_dev) {
+ ret = -ENODEV;
+ goto release_init_lock;
+@@ -752,6 +758,7 @@ static ssize_t writeback_store(struct device *dev,
+ free_block_bdev(zram, blk_idx);
+ __free_page(page);
+ release_init_lock:
++ atomic_set(&zram->pp_in_progress, 0);
+ up_read(&zram->init_lock);
+
+ return ret;
+@@ -1881,6 +1888,12 @@ static ssize_t recompress_store(struct device *dev,
+ goto release_init_lock;
+ }
+
++ /* Do not permit concurrent post-processing actions. */
++ if (atomic_xchg(&zram->pp_in_progress, 1)) {
++ up_read(&zram->init_lock);
++ return -EAGAIN;
++ }
++
+ if (algo) {
+ bool found = false;
+
+@@ -1948,6 +1961,7 @@ static ssize_t recompress_store(struct device *dev,
+ __free_page(page);
+
+ release_init_lock:
++ atomic_set(&zram->pp_in_progress, 0);
+ up_read(&zram->init_lock);
+ return ret;
+ }
+@@ -2144,6 +2158,7 @@ static void zram_reset_device(struct zram *zram)
+ zram->disksize = 0;
+ zram_destroy_comps(zram);
+ memset(&zram->stats, 0, sizeof(zram->stats));
++ atomic_set(&zram->pp_in_progress, 0);
+ reset_bdev(zram);
+
+ comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);
+@@ -2381,6 +2396,9 @@ static int zram_add(void)
+ zram->disk->fops = &zram_devops;
+ zram->disk->private_data = zram;
+ snprintf(zram->disk->disk_name, 16, "zram%d", device_id);
++ atomic_set(&zram->pp_in_progress, 0);
++ zram_comp_params_reset(zram);
++ comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);
+
+ /* Actual capacity set using sysfs (/sys/block/zram<id>/disksize */
+ set_capacity(zram->disk, 0);
+@@ -2388,9 +2406,6 @@ static int zram_add(void)
+ if (ret)
+ goto out_cleanup_disk;
+
+- zram_comp_params_reset(zram);
+- comp_algorithm_set(zram, ZRAM_PRIMARY_COMP, default_compressor);
+-
+ zram_debugfs_register(zram);
+ pr_info("Added device: %s\n", zram->disk->disk_name);
+ return device_id;
+diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
+index cfc8c059db6369..8acf9d2ee42b87 100644
+--- a/drivers/block/zram/zram_drv.h
++++ b/drivers/block/zram/zram_drv.h
+@@ -139,5 +139,6 @@ struct zram {
+ #ifdef CONFIG_ZRAM_MEMORY_TRACKING
+ struct dentry *debugfs_dir;
+ #endif
++ atomic_t pp_in_progress;
+ };
+ #endif
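
The zram guard above uses atomic_xchg() as a one-word try-lock: the first
caller flips the flag from 0 to 1 and proceeds, while every concurrent
caller reads back 1 and bails out with -EAGAIN. A C11 userspace equivalent:

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int pp_in_progress;

    static int try_postprocess(void)
    {
        if (atomic_exchange(&pp_in_progress, 1))
            return -1;                       /* like -EAGAIN */

        /* ... exclusive writeback/recompress work ... */

        atomic_store(&pp_in_progress, 0);    /* release on all exit paths */
        return 0;
    }

    int main(void)
    {
        printf("first: %d\n", try_postprocess());
        return 0;
    }
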
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index eef00467905eb3..a1153ada74d206 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -541,11 +541,10 @@ static const struct bcm_subver_table bcm_usb_subver_table[] = {
+ static const char *btbcm_get_board_name(struct device *dev)
+ {
+ #ifdef CONFIG_OF
+- struct device_node *root;
++ struct device_node *root __free(device_node) = of_find_node_by_path("/");
+ char *board_type;
+ const char *tmp;
+
+- root = of_find_node_by_path("/");
+ if (!root)
+ return NULL;
+
+@@ -555,7 +554,6 @@ static const char *btbcm_get_board_name(struct device *dev)
+ /* get rid of any '/' in the compatible string */
+ board_type = devm_kstrdup(dev, tmp, GFP_KERNEL);
+ strreplace(board_type, '/', '-');
+- of_node_put(root);
+
+ return board_type;
+ #else
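
The btbcm hunk above switches the of_node reference to scope-based cleanup:
__free(device_node) is built on the compiler's cleanup attribute, so every
exit path drops the reference without an explicit of_node_put(). A minimal
standalone analogue using plain malloc/free:

    #include <stdio.h>
    #include <stdlib.h>

    static void freep(void *p) { free(*(void **)p); }
    #define __cleanup_free __attribute__((cleanup(freep)))

    static int use_node(void)
    {
        char *node __cleanup_free = malloc(16);

        if (!node)
            return -1;      /* free(NULL) in the cleanup is harmless */
        snprintf(node, 16, "root");
        return 0;           /* freep() runs here automatically */
    }

    int main(void) { return use_node(); }
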
+diff --git a/drivers/bluetooth/btintel.c b/drivers/bluetooth/btintel.c
+index 30a32ebbcc681b..645047fb92fd26 100644
+--- a/drivers/bluetooth/btintel.c
++++ b/drivers/bluetooth/btintel.c
+@@ -1841,6 +1841,37 @@ static int btintel_boot_wait(struct hci_dev *hdev, ktime_t calltime, int msec)
+ return 0;
+ }
+
++static int btintel_boot_wait_d0(struct hci_dev *hdev, ktime_t calltime,
++ int msec)
++{
++ ktime_t delta, rettime;
++ unsigned long long duration;
++ int err;
++
++ bt_dev_info(hdev, "Waiting for device transition to d0");
++
++ err = btintel_wait_on_flag_timeout(hdev, INTEL_WAIT_FOR_D0,
++ TASK_INTERRUPTIBLE,
++ msecs_to_jiffies(msec));
++ if (err == -EINTR) {
++ bt_dev_err(hdev, "Device d0 move interrupted");
++ return -EINTR;
++ }
++
++ if (err) {
++ bt_dev_err(hdev, "Device d0 move timeout");
++ return -ETIMEDOUT;
++ }
++
++ rettime = ktime_get();
++ delta = ktime_sub(rettime, calltime);
++ duration = (unsigned long long)ktime_to_ns(delta) >> 10;
++
++ bt_dev_info(hdev, "Device moved to D0 in %llu usecs", duration);
++
++ return 0;
++}
++
+ static int btintel_boot(struct hci_dev *hdev, u32 boot_addr)
+ {
+ ktime_t calltime;
+@@ -1849,6 +1880,7 @@ static int btintel_boot(struct hci_dev *hdev, u32 boot_addr)
+ calltime = ktime_get();
+
+ btintel_set_flag(hdev, INTEL_BOOTING);
++ btintel_set_flag(hdev, INTEL_WAIT_FOR_D0);
+
+ err = btintel_send_intel_reset(hdev, boot_addr);
+ if (err) {
+@@ -1861,13 +1893,28 @@ static int btintel_boot(struct hci_dev *hdev, u32 boot_addr)
+ * is done by the operational firmware sending bootup notification.
+ *
+ * Booting into operational firmware should not take longer than
+- * 1 second. However if that happens, then just fail the setup
++ * 5 seconds. However if that happens, then just fail the setup
+ * since something went wrong.
+ */
+- err = btintel_boot_wait(hdev, calltime, 1000);
+- if (err == -ETIMEDOUT)
++ err = btintel_boot_wait(hdev, calltime, 5000);
++ if (err == -ETIMEDOUT) {
+ btintel_reset_to_bootloader(hdev);
++ goto exit_error;
++ }
+
++ if (hdev->bus == HCI_PCI) {
++ /* In case of PCIe, after receiving bootup event, driver performs
++ * D0 entry by writing 0 to sleep control register (check
++ * btintel_pcie_recv_event())
++ * Firmware acks with an alive interrupt indicating the host is fully ready to
++ * perform BT operation. Let's wait here till the INTEL_WAIT_FOR_D0
++ * bit is cleared.
++ */
++ calltime = ktime_get();
++ err = btintel_boot_wait_d0(hdev, calltime, 2000);
++ }
++
++exit_error:
+ return err;
+ }
+
+@@ -3273,7 +3320,7 @@ int btintel_configure_setup(struct hci_dev *hdev, const char *driver_name)
+ }
+ EXPORT_SYMBOL_GPL(btintel_configure_setup);
+
+-static int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
++int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+ struct intel_tlv *tlv = (void *)&skb->data[5];
+
+@@ -3301,6 +3348,7 @@ static int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
+ recv_frame:
+ return hci_recv_frame(hdev, skb);
+ }
++EXPORT_SYMBOL_GPL(btintel_diagnostics);
+
+ int btintel_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+@@ -3320,7 +3368,8 @@ int btintel_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ * indicating that the bootup completed.
+ */
+ btintel_bootup(hdev, ptr, len);
+- break;
++ kfree_skb(skb);
++ return 0;
+ case 0x06:
+ /* When the firmware loading completes the
+ * device sends out a vendor specific event
+@@ -3328,7 +3377,8 @@ int btintel_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
+ * loading.
+ */
+ btintel_secure_send_result(hdev, ptr, len);
+- break;
++ kfree_skb(skb);
++ return 0;
+ }
+ }
+
+diff --git a/drivers/bluetooth/btintel.h b/drivers/bluetooth/btintel.h
+index aa70e4c2741653..b448c67e8ed94d 100644
+--- a/drivers/bluetooth/btintel.h
++++ b/drivers/bluetooth/btintel.h
+@@ -178,6 +178,7 @@ enum {
+ INTEL_ROM_LEGACY,
+ INTEL_ROM_LEGACY_NO_WBS_SUPPORT,
+ INTEL_ACPI_RESET_ACTIVE,
++ INTEL_WAIT_FOR_D0,
+
+ __INTEL_NUM_FLAGS,
+ };
+@@ -249,6 +250,7 @@ int btintel_bootloader_setup_tlv(struct hci_dev *hdev,
+ int btintel_shutdown_combined(struct hci_dev *hdev);
+ void btintel_hw_error(struct hci_dev *hdev, u8 code);
+ void btintel_print_fseq_info(struct hci_dev *hdev);
++int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb);
+ #else
+
+ static inline int btintel_check_bdaddr(struct hci_dev *hdev)
+@@ -382,4 +384,9 @@ static inline void btintel_hw_error(struct hci_dev *hdev, u8 code)
+ static inline void btintel_print_fseq_info(struct hci_dev *hdev)
+ {
+ }
++
++static inline int btintel_diagnostics(struct hci_dev *hdev, struct sk_buff *skb)
++{
++ return -EOPNOTSUPP;
++}
+ #endif
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index 5252125b003f58..8bd663f4bac1b7 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -48,6 +48,17 @@ MODULE_DEVICE_TABLE(pci, btintel_pcie_table);
+ #define BTINTEL_PCIE_HCI_EVT_PKT 0x00000004
+ #define BTINTEL_PCIE_HCI_ISO_PKT 0x00000005
+
++/* Alive interrupt context */
++enum {
++ BTINTEL_PCIE_ROM,
++ BTINTEL_PCIE_FW_DL,
++ BTINTEL_PCIE_HCI_RESET,
++ BTINTEL_PCIE_INTEL_HCI_RESET1,
++ BTINTEL_PCIE_INTEL_HCI_RESET2,
++ BTINTEL_PCIE_D0,
++ BTINTEL_PCIE_D3
++};
++
+ static inline void ipc_print_ia_ring(struct hci_dev *hdev, struct ia *ia,
+ u16 queue_num)
+ {
+@@ -290,8 +301,9 @@ static int btintel_pcie_enable_bt(struct btintel_pcie_data *data)
+ /* wait for interrupt from the device after booting up to primary
+ * bootloader.
+ */
++ data->alive_intr_ctxt = BTINTEL_PCIE_ROM;
+ err = wait_event_timeout(data->gp0_wait_q, data->gp0_received,
+- msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT));
++ msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS));
+ if (!err)
+ return -ETIME;
+
+@@ -302,12 +314,78 @@ static int btintel_pcie_enable_bt(struct btintel_pcie_data *data)
+ return 0;
+ }
+
++/* BIT(0) - ROM, BIT(1) - IML and BIT(3) - OP
++ * Sometimes during firmware image switching from ROM to IML or IML to OP image,
++ * the previous image bit is not cleared by firmware when alive interrupt is
++ * received. Driver needs to take care of these sticky bits when deciding the
++ * current image running on controller.
++ * Ex: 0x10 and 0x11 - both represents that controller is running IML
++ */
++static inline bool btintel_pcie_in_rom(struct btintel_pcie_data *data)
++{
++ return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_ROM &&
++ !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_IML) &&
++ !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW);
++}
++
++static inline bool btintel_pcie_in_op(struct btintel_pcie_data *data)
++{
++ return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW;
++}
++
++static inline bool btintel_pcie_in_iml(struct btintel_pcie_data *data)
++{
++ return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_IML &&
++ !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW);
++}
++
++static inline bool btintel_pcie_in_d3(struct btintel_pcie_data *data)
++{
++ return data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_D3_STATE_READY;
++}
++
++static inline bool btintel_pcie_in_d0(struct btintel_pcie_data *data)
++{
++ return !(data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_D3_STATE_READY);
++}
++
++static void btintel_pcie_wr_sleep_cntrl(struct btintel_pcie_data *data,
++ u32 dxstate)
++{
++ bt_dev_dbg(data->hdev, "writing sleep_ctl_reg: 0x%8.8x", dxstate);
++ btintel_pcie_wr_reg32(data, BTINTEL_PCIE_CSR_IPC_SLEEP_CTL_REG, dxstate);
++}
++
++static inline char *btintel_pcie_alivectxt_state2str(u32 alive_intr_ctxt)
++{
++ switch (alive_intr_ctxt) {
++ case BTINTEL_PCIE_ROM:
++ return "rom";
++ case BTINTEL_PCIE_FW_DL:
++ return "fw_dl";
++ case BTINTEL_PCIE_D0:
++ return "d0";
++ case BTINTEL_PCIE_D3:
++ return "d3";
++ case BTINTEL_PCIE_HCI_RESET:
++ return "hci_reset";
++ case BTINTEL_PCIE_INTEL_HCI_RESET1:
++ return "intel_reset1";
++ case BTINTEL_PCIE_INTEL_HCI_RESET2:
++ return "intel_reset2";
++ default:
++ return "unknown";
++ }
++ return "null";
++}
++
+ /* This function handles the MSI-X interrupt for gp0 cause (bit 0 in
+ * BTINTEL_PCIE_CSR_MSIX_HW_INT_CAUSES) which is sent for boot stage and image response.
+ */
+ static void btintel_pcie_msix_gp0_handler(struct btintel_pcie_data *data)
+ {
+- u32 reg;
++ bool submit_rx, signal_waitq;
++ u32 reg, old_ctxt;
+
+ /* This interrupt is for three different causes and it is not easy to
+ * know what causes the interrupt. So, it compares each register value
+@@ -317,20 +395,87 @@ static void btintel_pcie_msix_gp0_handler(struct btintel_pcie_data *data)
+ if (reg != data->boot_stage_cache)
+ data->boot_stage_cache = reg;
+
++ bt_dev_dbg(data->hdev, "Alive context: %s old_boot_stage: 0x%8.8x new_boot_stage: 0x%8.8x",
++ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt),
++ data->boot_stage_cache, reg);
+ reg = btintel_pcie_rd_reg32(data, BTINTEL_PCIE_CSR_IMG_RESPONSE_REG);
+ if (reg != data->img_resp_cache)
+ data->img_resp_cache = reg;
+
+ data->gp0_received = true;
+
+- /* If the boot stage is OP or IML, reset IA and start RX again */
+- if (data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_OPFW ||
+- data->boot_stage_cache & BTINTEL_PCIE_CSR_BOOT_STAGE_IML) {
++ old_ctxt = data->alive_intr_ctxt;
++ submit_rx = false;
++ signal_waitq = false;
++
++ switch (data->alive_intr_ctxt) {
++ case BTINTEL_PCIE_ROM:
++ data->alive_intr_ctxt = BTINTEL_PCIE_FW_DL;
++ signal_waitq = true;
++ break;
++ case BTINTEL_PCIE_FW_DL:
++ /* Error case is already handled. Ideally control shall not
++ * reach here
++ */
++ break;
++ case BTINTEL_PCIE_INTEL_HCI_RESET1:
++ if (btintel_pcie_in_op(data)) {
++ submit_rx = true;
++ break;
++ }
++
++ if (btintel_pcie_in_iml(data)) {
++ submit_rx = true;
++ data->alive_intr_ctxt = BTINTEL_PCIE_FW_DL;
++ break;
++ }
++ break;
++ case BTINTEL_PCIE_INTEL_HCI_RESET2:
++ if (btintel_test_and_clear_flag(data->hdev, INTEL_WAIT_FOR_D0)) {
++ btintel_wake_up_flag(data->hdev, INTEL_WAIT_FOR_D0);
++ data->alive_intr_ctxt = BTINTEL_PCIE_D0;
++ }
++ break;
++ case BTINTEL_PCIE_D0:
++ if (btintel_pcie_in_d3(data)) {
++ data->alive_intr_ctxt = BTINTEL_PCIE_D3;
++ signal_waitq = true;
++ break;
++ }
++ break;
++ case BTINTEL_PCIE_D3:
++ if (btintel_pcie_in_d0(data)) {
++ data->alive_intr_ctxt = BTINTEL_PCIE_D0;
++ submit_rx = true;
++ signal_waitq = true;
++ break;
++ }
++ break;
++ case BTINTEL_PCIE_HCI_RESET:
++ data->alive_intr_ctxt = BTINTEL_PCIE_D0;
++ submit_rx = true;
++ signal_waitq = true;
++ break;
++ default:
++ bt_dev_err(data->hdev, "Unknown state: 0x%2.2x",
++ data->alive_intr_ctxt);
++ break;
++ }
++
++ if (submit_rx) {
+ btintel_pcie_reset_ia(data);
+ btintel_pcie_start_rx(data);
+ }
+
+- wake_up(&data->gp0_wait_q);
++ if (signal_waitq) {
++ bt_dev_dbg(data->hdev, "wake up gp0 wait_q");
++ wake_up(&data->gp0_wait_q);
++ }
++
++ if (old_ctxt != data->alive_intr_ctxt)
++ bt_dev_dbg(data->hdev, "alive context changed: %s -> %s",
++ btintel_pcie_alivectxt_state2str(old_ctxt),
++ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
+ }
+
+ /* This function handles the MSX-X interrupt for rx queue 0 which is for TX
+@@ -364,6 +509,83 @@ static void btintel_pcie_msix_tx_handle(struct btintel_pcie_data *data)
+ }
+ }
+
++static int btintel_pcie_recv_event(struct hci_dev *hdev, struct sk_buff *skb)
++{
++ struct hci_event_hdr *hdr = (void *)skb->data;
++ const char diagnostics_hdr[] = { 0x87, 0x80, 0x03 };
++ struct btintel_pcie_data *data = hci_get_drvdata(hdev);
++
++ if (skb->len > HCI_EVENT_HDR_SIZE && hdr->evt == 0xff &&
++ hdr->plen > 0) {
++ const void *ptr = skb->data + HCI_EVENT_HDR_SIZE + 1;
++ unsigned int len = skb->len - HCI_EVENT_HDR_SIZE - 1;
++
++ if (btintel_test_flag(hdev, INTEL_BOOTLOADER)) {
++ switch (skb->data[2]) {
++ case 0x02:
++ /* When switching to the operational firmware
++ * the device sends a vendor specific event
++ * indicating that the bootup completed.
++ */
++ btintel_bootup(hdev, ptr, len);
++
++ /* If bootup event is from operational image,
++ * driver needs to write sleep control register to
++ * move into D0 state
++ */
++ if (btintel_pcie_in_op(data)) {
++ btintel_pcie_wr_sleep_cntrl(data, BTINTEL_PCIE_STATE_D0);
++ data->alive_intr_ctxt = BTINTEL_PCIE_INTEL_HCI_RESET2;
++ kfree_skb(skb);
++ return 0;
++ }
++
++ if (btintel_pcie_in_iml(data)) {
++ /* In case of IML, there is no concept
++ * of D0 transition. Just mimic as if
++ * IML moved to D0 by clearing INTEL_WAIT_FOR_D0
++ * bit and waking up the task waiting on
++ * INTEL_WAIT_FOR_D0. This is required
++ * as intel_boot() is common function for
++ * both IML and OP image loading.
++ */
++ if (btintel_test_and_clear_flag(data->hdev,
++ INTEL_WAIT_FOR_D0))
++ btintel_wake_up_flag(data->hdev,
++ INTEL_WAIT_FOR_D0);
++ }
++ kfree_skb(skb);
++ return 0;
++ case 0x06:
++ /* When the firmware loading completes the
++ * device sends out a vendor specific event
++ * indicating the result of the firmware
++ * loading.
++ */
++ btintel_secure_send_result(hdev, ptr, len);
++ kfree_skb(skb);
++ return 0;
++ }
++ }
++
++ /* Handle all diagnostics events separately. May still call
++ * hci_recv_frame.
++ */
++ if (len >= sizeof(diagnostics_hdr) &&
++ memcmp(&skb->data[2], diagnostics_hdr,
++ sizeof(diagnostics_hdr)) == 0) {
++ return btintel_diagnostics(hdev, skb);
++ }
++
++ /* This is a debug event that comes from IML and OP image when it
++ * starts execution. There is no need to pass this event to the stack.
++ */
++ if (skb->data[2] == 0x97)
++ return 0;
++ }
++
++ return hci_recv_frame(hdev, skb);
++}
+ /* Process the received rx data
+ * It check the frame header to identify the data type and create skb
+ * and calling HCI API
+@@ -465,7 +687,7 @@ static int btintel_pcie_recv_frame(struct btintel_pcie_data *data,
+ hdev->stat.byte_rx += plen;
+
+ if (pcie_pkt_type == BTINTEL_PCIE_HCI_EVT_PKT)
+- ret = btintel_recv_event(hdev, new_skb);
++ ret = btintel_pcie_recv_event(hdev, new_skb);
+ else
+ ret = hci_recv_frame(hdev, new_skb);
+
+@@ -1053,8 +1275,11 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ struct sk_buff *skb)
+ {
+ struct btintel_pcie_data *data = hci_get_drvdata(hdev);
++ struct hci_command_hdr *cmd;
++ __u16 opcode = ~0;
+ int ret;
+ u32 type;
++ u32 old_ctxt;
+
+ /* Due to the fw limitation, the type header of the packet should be
+ * 4 bytes unlike 1 byte for UART. In UART, the firmware can read
+@@ -1073,6 +1298,8 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ switch (hci_skb_pkt_type(skb)) {
+ case HCI_COMMAND_PKT:
+ type = BTINTEL_PCIE_HCI_CMD_PKT;
++ cmd = (void *)skb->data;
++ opcode = le16_to_cpu(cmd->opcode);
+ if (btintel_test_flag(hdev, INTEL_BOOTLOADER)) {
+ struct hci_command_hdr *cmd = (void *)skb->data;
+ __u16 opcode = le16_to_cpu(cmd->opcode);
+@@ -1111,6 +1338,30 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ bt_dev_err(hdev, "Failed to send frame (%d)", ret);
+ goto exit_error;
+ }
++
++ if (type == BTINTEL_PCIE_HCI_CMD_PKT &&
++ (opcode == HCI_OP_RESET || opcode == 0xfc01)) {
++ old_ctxt = data->alive_intr_ctxt;
++ data->alive_intr_ctxt =
++ (opcode == 0xfc01 ? BTINTEL_PCIE_INTEL_HCI_RESET1 :
++ BTINTEL_PCIE_HCI_RESET);
++ bt_dev_dbg(data->hdev, "sent cmd: 0x%4.4x alive context changed: %s -> %s",
++ opcode, btintel_pcie_alivectxt_state2str(old_ctxt),
++ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
++ if (opcode == HCI_OP_RESET) {
++ data->gp0_received = false;
++ ret = wait_event_timeout(data->gp0_wait_q,
++ data->gp0_received,
++ msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS));
++ if (!ret) {
++ hdev->stat.err_tx++;
++ bt_dev_err(hdev, "No alive interrupt received for %s",
++ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
++ ret = -ETIME;
++ goto exit_error;
++ }
++ }
++ }
+ hdev->stat.byte_tx += skb->len;
+ kfree_skb(skb);
+
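The alive-context machinery above depends on the sticky boot-stage decoding
described in the comment: the ROM bit may stay latched after the switch to
IML, so "running IML" must be tested as "IML bit set and OP bit clear"
rather than as an exact register value. A sketch with purely illustrative
bit positions:

    #include <stdbool.h>
    #include <stdio.h>

    #define STAGE_ROM 0x01
    #define STAGE_IML 0x10
    #define STAGE_OP  0x20

    static bool in_iml(unsigned int stage)
    {
        return (stage & STAGE_IML) && !(stage & STAGE_OP);
    }

    int main(void)
    {
        printf("0x10 -> %d, 0x11 -> %d, 0x31 -> %d\n",
               in_iml(0x10), in_iml(0x11), in_iml(0x31));
        /* 0x10 and 0x11 both report IML; 0x31 reports OP running */
        return 0;
    }
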
+diff --git a/drivers/bluetooth/btintel_pcie.h b/drivers/bluetooth/btintel_pcie.h
+index baaff70420f575..8b7824ad005a2a 100644
+--- a/drivers/bluetooth/btintel_pcie.h
++++ b/drivers/bluetooth/btintel_pcie.h
+@@ -12,6 +12,7 @@
+ #define BTINTEL_PCIE_CSR_HW_REV_REG (BTINTEL_PCIE_CSR_BASE + 0x028)
+ #define BTINTEL_PCIE_CSR_RF_ID_REG (BTINTEL_PCIE_CSR_BASE + 0x09C)
+ #define BTINTEL_PCIE_CSR_BOOT_STAGE_REG (BTINTEL_PCIE_CSR_BASE + 0x108)
++#define BTINTEL_PCIE_CSR_IPC_SLEEP_CTL_REG (BTINTEL_PCIE_CSR_BASE + 0x114)
+ #define BTINTEL_PCIE_CSR_CI_ADDR_LSB_REG (BTINTEL_PCIE_CSR_BASE + 0x118)
+ #define BTINTEL_PCIE_CSR_CI_ADDR_MSB_REG (BTINTEL_PCIE_CSR_BASE + 0x11C)
+ #define BTINTEL_PCIE_CSR_IMG_RESPONSE_REG (BTINTEL_PCIE_CSR_BASE + 0x12C)
+@@ -32,6 +33,7 @@
+ #define BTINTEL_PCIE_CSR_BOOT_STAGE_IML_LOCKDOWN (BIT(11))
+ #define BTINTEL_PCIE_CSR_BOOT_STAGE_MAC_ACCESS_ON (BIT(16))
+ #define BTINTEL_PCIE_CSR_BOOT_STAGE_ALIVE (BIT(23))
++#define BTINTEL_PCIE_CSR_BOOT_STAGE_D3_STATE_READY (BIT(24))
+
+ /* Registers for MSI-X */
+ #define BTINTEL_PCIE_CSR_MSIX_BASE (0x2000)
+@@ -55,6 +57,16 @@ enum msix_hw_int_causes {
+ BTINTEL_PCIE_MSIX_HW_INT_CAUSES_GP0 = BIT(0), /* cause 32 */
+ };
+
++/* PCIe device states
++ * Host-Device interface is active
++ * Host-Device interface is inactive(as reflected by IPC_SLEEP_CONTROL_CSR_AD)
++ * Host-Device interface is inactive(as reflected by IPC_SLEEP_CONTROL_CSR_AD)
++ */
++enum {
++ BTINTEL_PCIE_STATE_D0 = 0,
++ BTINTEL_PCIE_STATE_D3_HOT = 2,
++ BTINTEL_PCIE_STATE_D3_COLD = 3,
++};
+ #define BTINTEL_PCIE_MSIX_NON_AUTO_CLEAR_CAUSE BIT(7)
+
+ /* Minimum and Maximum number of MSI-X Vector
+@@ -67,7 +79,7 @@ enum msix_hw_int_causes {
+ #define BTINTEL_DEFAULT_MAC_ACCESS_TIMEOUT_US 200000
+
+ /* Default interrupt timeout in msec */
+-#define BTINTEL_DEFAULT_INTR_TIMEOUT 3000
++#define BTINTEL_DEFAULT_INTR_TIMEOUT_MS 3000
+
+ /* The number of descriptors in TX/RX queues */
+ #define BTINTEL_DESCS_COUNT 16
+@@ -343,6 +355,7 @@ struct rxq {
+ * @ia: Index Array struct
+ * @txq: TX Queue struct
+ * @rxq: RX Queue struct
++ * @alive_intr_ctxt: Alive interrupt context
+ */
+ struct btintel_pcie_data {
+ struct pci_dev *pdev;
+@@ -389,6 +402,7 @@ struct btintel_pcie_data {
+ struct ia ia;
+ struct txq txq;
+ struct rxq rxq;
++ u32 alive_intr_ctxt;
+ };
+
+ static inline u32 btintel_pcie_rd_reg32(struct btintel_pcie_data *data,
+diff --git a/drivers/bluetooth/btmtk.c b/drivers/bluetooth/btmtk.c
+index 9bbf205021634f..480e4adba9faa6 100644
+--- a/drivers/bluetooth/btmtk.c
++++ b/drivers/bluetooth/btmtk.c
+@@ -1215,7 +1215,6 @@ static int btmtk_usb_isointf_init(struct hci_dev *hdev)
+ struct sk_buff *skb;
+ int err;
+
+- init_usb_anchor(&btmtk_data->isopkt_anchor);
+ spin_lock_init(&btmtk_data->isorxlock);
+
+ __set_mtk_intr_interface(hdev);
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index e9534fbc92e32f..4ccaddb46ddd81 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2616,6 +2616,7 @@ static void btusb_mtk_claim_iso_intf(struct btusb_data *data)
+ }
+
+ set_bit(BTMTK_ISOPKT_OVER_INTR, &btmtk_data->flags);
++ init_usb_anchor(&btmtk_data->isopkt_anchor);
+ }
+
+ static void btusb_mtk_release_iso_intf(struct btusb_data *data)
+diff --git a/drivers/bus/mhi/host/trace.h b/drivers/bus/mhi/host/trace.h
+index 95613c8ebe0691..3e0c41777429eb 100644
+--- a/drivers/bus/mhi/host/trace.h
++++ b/drivers/bus/mhi/host/trace.h
+@@ -9,6 +9,7 @@
+ #if !defined(_TRACE_EVENT_MHI_HOST_H) || defined(TRACE_HEADER_MULTI_READ)
+ #define _TRACE_EVENT_MHI_HOST_H
+
++#include <linux/byteorder/generic.h>
+ #include <linux/tracepoint.h>
+ #include <linux/trace_seq.h>
+ #include "../common.h"
+@@ -97,18 +98,18 @@ TRACE_EVENT(mhi_gen_tre,
+ __string(name, mhi_cntrl->mhi_dev->name)
+ __field(int, ch_num)
+ __field(void *, wp)
+- __field(__le64, tre_ptr)
+- __field(__le32, dword0)
+- __field(__le32, dword1)
++ __field(uint64_t, tre_ptr)
++ __field(uint32_t, dword0)
++ __field(uint32_t, dword1)
+ ),
+
+ TP_fast_assign(
+ __assign_str(name);
+ __entry->ch_num = mhi_chan->chan;
+ __entry->wp = mhi_tre;
+- __entry->tre_ptr = mhi_tre->ptr;
+- __entry->dword0 = mhi_tre->dword[0];
+- __entry->dword1 = mhi_tre->dword[1];
++ __entry->tre_ptr = le64_to_cpu(mhi_tre->ptr);
++ __entry->dword0 = le32_to_cpu(mhi_tre->dword[0]);
++ __entry->dword1 = le32_to_cpu(mhi_tre->dword[1]);
+ ),
+
+ TP_printk("%s: Chan: %d TRE: 0x%p TRE buf: 0x%llx DWORD0: 0x%08x DWORD1: 0x%08x\n",
+@@ -176,19 +177,19 @@ DECLARE_EVENT_CLASS(mhi_process_event_ring,
+
+ TP_STRUCT__entry(
+ __string(name, mhi_cntrl->mhi_dev->name)
+- __field(__le32, dword0)
+- __field(__le32, dword1)
++ __field(uint32_t, dword0)
++ __field(uint32_t, dword1)
+ __field(int, state)
+- __field(__le64, ptr)
++ __field(uint64_t, ptr)
+ __field(void *, rp)
+ ),
+
+ TP_fast_assign(
+ __assign_str(name);
+ __entry->rp = rp;
+- __entry->ptr = rp->ptr;
+- __entry->dword0 = rp->dword[0];
+- __entry->dword1 = rp->dword[1];
++ __entry->ptr = le64_to_cpu(rp->ptr);
++ __entry->dword0 = le32_to_cpu(rp->dword[0]);
++ __entry->dword1 = le32_to_cpu(rp->dword[1]);
+ __entry->state = MHI_TRE_GET_EV_STATE(rp);
+ ),
+
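The MHI trace fix above converts the little-endian TRE words to CPU order
inside TP_fast_assign() instead of storing raw __le values, so the recorded
fields also print correctly on big-endian hosts. A userspace sketch where
glibc's le32toh()/le64toh() stand in for le32_to_cpu()/le64_to_cpu():

    #include <endian.h>     /* glibc byte-order helpers */
    #include <stdint.h>
    #include <stdio.h>

    struct tre { uint64_t ptr; uint32_t dword[2]; };    /* device-endian */

    int main(void)
    {
        struct tre t = { htole64(0x1000), { htole32(0xa), htole32(0xb) } };
        uint64_t ptr = le64toh(t.ptr);                  /* convert at assign */
        uint32_t dword0 = le32toh(t.dword[0]);

        printf("TRE buf: 0x%llx DWORD0: 0x%08x\n",
               (unsigned long long)ptr, dword0);
        return 0;
    }
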
+diff --git a/drivers/clk/.kunitconfig b/drivers/clk/.kunitconfig
+index 54ece920705525..08e26137f3d9c9 100644
+--- a/drivers/clk/.kunitconfig
++++ b/drivers/clk/.kunitconfig
+@@ -1,5 +1,6 @@
+ CONFIG_KUNIT=y
+ CONFIG_OF=y
++CONFIG_OF_OVERLAY=y
+ CONFIG_COMMON_CLK=y
+ CONFIG_CLK_KUNIT_TEST=y
+ CONFIG_CLK_FIXED_RATE_KUNIT_TEST=y
+diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig
+index 299bc678ed1b9f..0fe07a594b4e1b 100644
+--- a/drivers/clk/Kconfig
++++ b/drivers/clk/Kconfig
+@@ -517,7 +517,6 @@ config CLK_KUNIT_TEST
+ tristate "Basic Clock Framework Kunit Tests" if !KUNIT_ALL_TESTS
+ depends on KUNIT
+ default KUNIT_ALL_TESTS
+- select OF_OVERLAY if OF
+ select DTC
+ help
+ Kunit tests for the common clock framework.
+@@ -526,7 +525,6 @@ config CLK_FIXED_RATE_KUNIT_TEST
+ tristate "Basic fixed rate clk type KUnit test" if !KUNIT_ALL_TESTS
+ depends on KUNIT
+ default KUNIT_ALL_TESTS
+- select OF_OVERLAY if OF
+ select DTC
+ help
+ KUnit tests for the basic fixed rate clk type.
+diff --git a/drivers/clk/clk-apple-nco.c b/drivers/clk/clk-apple-nco.c
+index 39472a51530a34..457a48d4894128 100644
+--- a/drivers/clk/clk-apple-nco.c
++++ b/drivers/clk/clk-apple-nco.c
+@@ -297,6 +297,9 @@ static int applnco_probe(struct platform_device *pdev)
+ memset(&init, 0, sizeof(init));
+ init.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "%s-%d", np->name, i);
++ if (!init.name)
++ return -ENOMEM;
++
+ init.ops = &applnco_ops;
+ init.parent_data = &pdata;
+ init.num_parents = 1;
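
The clk-apple-nco fix above is the usual allocation-failure check:
devm_kasprintf() returns NULL on out-of-memory, and an unchecked NULL name
would be dereferenced later by the clk framework. A sketch with asprintf()
playing the same role:

    #define _GNU_SOURCE     /* for asprintf() */
    #include <stdio.h>
    #include <stdlib.h>

    static char *make_name(const char *base, int i)
    {
        char *name;

        if (asprintf(&name, "%s-%d", base, i) < 0)
            return NULL;                /* treat as -ENOMEM */
        return name;
    }

    int main(void)
    {
        char *name = make_name("nco", 0);

        if (!name)
            return 1;                   /* the newly added bail-out */
        printf("%s\n", name);
        free(name);
        return 0;
    }
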
+diff --git a/drivers/clk/clk-axi-clkgen.c b/drivers/clk/clk-axi-clkgen.c
+index bf4d8ddc93aea1..934e53a96dddac 100644
+--- a/drivers/clk/clk-axi-clkgen.c
++++ b/drivers/clk/clk-axi-clkgen.c
+@@ -7,6 +7,7 @@
+ */
+
+ #include <linux/platform_device.h>
++#include <linux/clk.h>
+ #include <linux/clk-provider.h>
+ #include <linux/slab.h>
+ #include <linux/io.h>
+@@ -512,6 +513,7 @@ static int axi_clkgen_probe(struct platform_device *pdev)
+ struct clk_init_data init;
+ const char *parent_names[2];
+ const char *clk_name;
++ struct clk *axi_clk;
+ unsigned int i;
+ int ret;
+
+@@ -528,8 +530,24 @@ static int axi_clkgen_probe(struct platform_device *pdev)
+ return PTR_ERR(axi_clkgen->base);
+
+ init.num_parents = of_clk_get_parent_count(pdev->dev.of_node);
+- if (init.num_parents < 1 || init.num_parents > 2)
+- return -EINVAL;
++
++ axi_clk = devm_clk_get_enabled(&pdev->dev, "s_axi_aclk");
++ if (!IS_ERR(axi_clk)) {
++ if (init.num_parents < 2 || init.num_parents > 3)
++ return -EINVAL;
++
++ init.num_parents -= 1;
++ } else {
++ /*
++ * Legacy... So that old DTs which do not have clock-names still
++ * work. In this case we don't explicitly enable the AXI bus
++ * clock.
++ */
++ if (PTR_ERR(axi_clk) != -ENOENT)
++ return PTR_ERR(axi_clk);
++ if (init.num_parents < 1 || init.num_parents > 2)
++ return -EINVAL;
++ }
+
+ for (i = 0; i < init.num_parents; i++) {
+ parent_names[i] = of_clk_get_parent_name(pdev->dev.of_node, i);
+diff --git a/drivers/clk/clk-en7523.c b/drivers/clk/clk-en7523.c
+index 22fbea61c3dcc0..fdd8ea989ed24a 100644
+--- a/drivers/clk/clk-en7523.c
++++ b/drivers/clk/clk-en7523.c
+@@ -3,8 +3,10 @@
+ #include <linux/delay.h>
+ #include <linux/clk-provider.h>
+ #include <linux/io.h>
++#include <linux/mfd/syscon.h>
+ #include <linux/platform_device.h>
+ #include <linux/property.h>
++#include <linux/regmap.h>
+ #include <linux/reset-controller.h>
+ #include <dt-bindings/clock/en7523-clk.h>
+ #include <dt-bindings/reset/airoha,en7581-reset.h>
+@@ -31,16 +33,11 @@
+ #define REG_RESET_CONTROL_PCIE1 BIT(27)
+ #define REG_RESET_CONTROL_PCIE2 BIT(26)
+ /* EN7581 */
+-#define REG_PCIE0_MEM 0x00
+-#define REG_PCIE0_MEM_MASK 0x04
+-#define REG_PCIE1_MEM 0x08
+-#define REG_PCIE1_MEM_MASK 0x0c
+-#define REG_PCIE2_MEM 0x10
+-#define REG_PCIE2_MEM_MASK 0x14
+ #define REG_NP_SCU_PCIC 0x88
+ #define REG_NP_SCU_SSTR 0x9c
+ #define REG_PCIE_XSI0_SEL_MASK GENMASK(14, 13)
+ #define REG_PCIE_XSI1_SEL_MASK GENMASK(12, 11)
++#define REG_CRYPTO_CLKSRC2 0x20c
+
+ #define REG_RST_CTRL2 0x00
+ #define REG_RST_CTRL1 0x04
+@@ -84,7 +81,8 @@ struct en_clk_soc_data {
+ const u16 *idx_map;
+ u16 idx_map_nr;
+ } reset;
+- int (*hw_init)(struct platform_device *pdev, void __iomem *np_base);
++ int (*hw_init)(struct platform_device *pdev,
++ struct clk_hw_onecell_data *clk_data);
+ };
+
+ static const u32 gsw_base[] = { 400000000, 500000000 };
+@@ -92,6 +90,10 @@ static const u32 emi_base[] = { 333000000, 400000000 };
+ static const u32 bus_base[] = { 500000000, 540000000 };
+ static const u32 slic_base[] = { 100000000, 3125000 };
+ static const u32 npu_base[] = { 333000000, 400000000, 500000000 };
++/* EN7581 */
++static const u32 emi7581_base[] = { 540000000, 480000000, 400000000, 300000000 };
++static const u32 npu7581_base[] = { 800000000, 750000000, 720000000, 600000000 };
++static const u32 crypto_base[] = { 540000000, 480000000 };
+
+ static const struct en_clk_desc en7523_base_clks[] = {
+ {
+@@ -189,6 +191,102 @@ static const struct en_clk_desc en7523_base_clks[] = {
+ }
+ };
+
++static const struct en_clk_desc en7581_base_clks[] = {
++ {
++ .id = EN7523_CLK_GSW,
++ .name = "gsw",
++
++ .base_reg = REG_GSW_CLK_DIV_SEL,
++ .base_bits = 1,
++ .base_shift = 8,
++ .base_values = gsw_base,
++ .n_base_values = ARRAY_SIZE(gsw_base),
++
++ .div_bits = 3,
++ .div_shift = 0,
++ .div_step = 1,
++ .div_offset = 1,
++ }, {
++ .id = EN7523_CLK_EMI,
++ .name = "emi",
++
++ .base_reg = REG_EMI_CLK_DIV_SEL,
++ .base_bits = 2,
++ .base_shift = 8,
++ .base_values = emi7581_base,
++ .n_base_values = ARRAY_SIZE(emi7581_base),
++
++ .div_bits = 3,
++ .div_shift = 0,
++ .div_step = 1,
++ .div_offset = 1,
++ }, {
++ .id = EN7523_CLK_BUS,
++ .name = "bus",
++
++ .base_reg = REG_BUS_CLK_DIV_SEL,
++ .base_bits = 1,
++ .base_shift = 8,
++ .base_values = bus_base,
++ .n_base_values = ARRAY_SIZE(bus_base),
++
++ .div_bits = 3,
++ .div_shift = 0,
++ .div_step = 1,
++ .div_offset = 1,
++ }, {
++ .id = EN7523_CLK_SLIC,
++ .name = "slic",
++
++ .base_reg = REG_SPI_CLK_FREQ_SEL,
++ .base_bits = 1,
++ .base_shift = 0,
++ .base_values = slic_base,
++ .n_base_values = ARRAY_SIZE(slic_base),
++
++ .div_reg = REG_SPI_CLK_DIV_SEL,
++ .div_bits = 5,
++ .div_shift = 24,
++ .div_val0 = 20,
++ .div_step = 2,
++ }, {
++ .id = EN7523_CLK_SPI,
++ .name = "spi",
++
++ .base_reg = REG_SPI_CLK_DIV_SEL,
++
++ .base_value = 400000000,
++
++ .div_bits = 5,
++ .div_shift = 8,
++ .div_val0 = 40,
++ .div_step = 2,
++ }, {
++ .id = EN7523_CLK_NPU,
++ .name = "npu",
++
++ .base_reg = REG_NPU_CLK_DIV_SEL,
++ .base_bits = 2,
++ .base_shift = 8,
++ .base_values = npu7581_base,
++ .n_base_values = ARRAY_SIZE(npu7581_base),
++
++ .div_bits = 3,
++ .div_shift = 0,
++ .div_step = 1,
++ .div_offset = 1,
++ }, {
++ .id = EN7523_CLK_CRYPTO,
++ .name = "crypto",
++
++ .base_reg = REG_CRYPTO_CLKSRC2,
++ .base_bits = 1,
++ .base_shift = 0,
++ .base_values = crypto_base,
++ .n_base_values = ARRAY_SIZE(crypto_base),
++ }
++};
++
+ static const u16 en7581_rst_ofs[] = {
+ REG_RST_CTRL2,
+ REG_RST_CTRL1,
+@@ -252,15 +350,11 @@ static const u16 en7581_rst_map[] = {
+ [EN7581_XPON_MAC_RST] = RST_NR_PER_BANK + 31,
+ };
+
+-static unsigned int en7523_get_base_rate(void __iomem *base, unsigned int i)
++static u32 en7523_get_base_rate(const struct en_clk_desc *desc, u32 val)
+ {
+- const struct en_clk_desc *desc = &en7523_base_clks[i];
+- u32 val;
+-
+ if (!desc->base_bits)
+ return desc->base_value;
+
+- val = readl(base + desc->base_reg);
+ val >>= desc->base_shift;
+ val &= (1 << desc->base_bits) - 1;
+
+@@ -270,16 +364,11 @@ static unsigned int en7523_get_base_rate(void __iomem *base, unsigned int i)
+ return desc->base_values[val];
+ }
+
+-static u32 en7523_get_div(void __iomem *base, int i)
++static u32 en7523_get_div(const struct en_clk_desc *desc, u32 val)
+ {
+- const struct en_clk_desc *desc = &en7523_base_clks[i];
+- u32 reg, val;
+-
+ if (!desc->div_bits)
+ return 1;
+
+- reg = desc->div_reg ? desc->div_reg : desc->base_reg;
+- val = readl(base + reg);
+ val >>= desc->div_shift;
+ val &= (1 << desc->div_bits) - 1;
+
+@@ -412,44 +501,83 @@ static void en7581_pci_disable(struct clk_hw *hw)
+ usleep_range(1000, 2000);
+ }
+
+-static int en7581_clk_hw_init(struct platform_device *pdev,
+- void __iomem *np_base)
++static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_data *clk_data,
++ void __iomem *base, void __iomem *np_base)
+ {
+- void __iomem *pb_base;
+- u32 val;
++ struct clk_hw *hw;
++ u32 rate;
++ int i;
++
++ for (i = 0; i < ARRAY_SIZE(en7523_base_clks); i++) {
++ const struct en_clk_desc *desc = &en7523_base_clks[i];
++ u32 reg = desc->div_reg ? desc->div_reg : desc->base_reg;
++ u32 val = readl(base + desc->base_reg);
+
+- pb_base = devm_platform_ioremap_resource(pdev, 3);
+- if (IS_ERR(pb_base))
+- return PTR_ERR(pb_base);
++ rate = en7523_get_base_rate(desc, val);
++ val = readl(base + reg);
++ rate /= en7523_get_div(desc, val);
+
+- val = readl(np_base + REG_NP_SCU_SSTR);
+- val &= ~(REG_PCIE_XSI0_SEL_MASK | REG_PCIE_XSI1_SEL_MASK);
+- writel(val, np_base + REG_NP_SCU_SSTR);
+- val = readl(np_base + REG_NP_SCU_PCIC);
+- writel(val | 3, np_base + REG_NP_SCU_PCIC);
++ hw = clk_hw_register_fixed_rate(dev, desc->name, NULL, 0, rate);
++ if (IS_ERR(hw)) {
++ pr_err("Failed to register clk %s: %ld\n",
++ desc->name, PTR_ERR(hw));
++ continue;
++ }
++
++ clk_data->hws[desc->id] = hw;
++ }
++
++ hw = en7523_register_pcie_clk(dev, np_base);
++ clk_data->hws[EN7523_CLK_PCIE] = hw;
++
++ clk_data->num = EN7523_NUM_CLOCKS;
++}
++
++static int en7523_clk_hw_init(struct platform_device *pdev,
++ struct clk_hw_onecell_data *clk_data)
++{
++ void __iomem *base, *np_base;
++
++ base = devm_platform_ioremap_resource(pdev, 0);
++ if (IS_ERR(base))
++ return PTR_ERR(base);
++
++ np_base = devm_platform_ioremap_resource(pdev, 1);
++ if (IS_ERR(np_base))
++ return PTR_ERR(np_base);
+
+- writel(0x20000000, pb_base + REG_PCIE0_MEM);
+- writel(0xfc000000, pb_base + REG_PCIE0_MEM_MASK);
+- writel(0x24000000, pb_base + REG_PCIE1_MEM);
+- writel(0xfc000000, pb_base + REG_PCIE1_MEM_MASK);
+- writel(0x28000000, pb_base + REG_PCIE2_MEM);
+- writel(0xfc000000, pb_base + REG_PCIE2_MEM_MASK);
++ en7523_register_clocks(&pdev->dev, clk_data, base, np_base);
+
+ return 0;
+ }
+
+-static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_data *clk_data,
+- void __iomem *base, void __iomem *np_base)
++static void en7581_register_clocks(struct device *dev, struct clk_hw_onecell_data *clk_data,
++ struct regmap *map, void __iomem *base)
+ {
+ struct clk_hw *hw;
+ u32 rate;
+ int i;
+
+- for (i = 0; i < ARRAY_SIZE(en7523_base_clks); i++) {
+- const struct en_clk_desc *desc = &en7523_base_clks[i];
++ for (i = 0; i < ARRAY_SIZE(en7581_base_clks); i++) {
++ const struct en_clk_desc *desc = &en7581_base_clks[i];
++ u32 val, reg = desc->div_reg ? desc->div_reg : desc->base_reg;
++ int err;
+
+- rate = en7523_get_base_rate(base, i);
+- rate /= en7523_get_div(base, i);
++ err = regmap_read(map, desc->base_reg, &val);
++ if (err) {
++ pr_err("Failed reading fixed clk rate %s: %d\n",
++ desc->name, err);
++ continue;
++ }
++ rate = en7523_get_base_rate(desc, val);
++
++ err = regmap_read(map, reg, &val);
++ if (err) {
++ pr_err("Failed reading fixed clk div %s: %d\n",
++ desc->name, err);
++ continue;
++ }
++ rate /= en7523_get_div(desc, val);
+
+ hw = clk_hw_register_fixed_rate(dev, desc->name, NULL, 0, rate);
+ if (IS_ERR(hw)) {
+@@ -461,12 +589,38 @@ static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_dat
+ clk_data->hws[desc->id] = hw;
+ }
+
+- hw = en7523_register_pcie_clk(dev, np_base);
++ hw = en7523_register_pcie_clk(dev, base);
+ clk_data->hws[EN7523_CLK_PCIE] = hw;
+
+ clk_data->num = EN7523_NUM_CLOCKS;
+ }
+
++static int en7581_clk_hw_init(struct platform_device *pdev,
++ struct clk_hw_onecell_data *clk_data)
++{
++ void __iomem *np_base;
++ struct regmap *map;
++ u32 val;
++
++ map = syscon_regmap_lookup_by_compatible("airoha,en7581-chip-scu");
++ if (IS_ERR(map))
++ return PTR_ERR(map);
++
++ np_base = devm_platform_ioremap_resource(pdev, 0);
++ if (IS_ERR(np_base))
++ return PTR_ERR(np_base);
++
++ en7581_register_clocks(&pdev->dev, clk_data, map, np_base);
++
++ val = readl(np_base + REG_NP_SCU_SSTR);
++ val &= ~(REG_PCIE_XSI0_SEL_MASK | REG_PCIE_XSI1_SEL_MASK);
++ writel(val, np_base + REG_NP_SCU_SSTR);
++ val = readl(np_base + REG_NP_SCU_PCIC);
++ writel(val | 3, np_base + REG_NP_SCU_PCIC);
++
++ return 0;
++}
++
+ static int en7523_reset_update(struct reset_controller_dev *rcdev,
+ unsigned long id, bool assert)
+ {
+@@ -533,7 +687,7 @@ static int en7523_reset_register(struct platform_device *pdev,
+ if (!soc_data->reset.idx_map_nr)
+ return 0;
+
+- base = devm_platform_ioremap_resource(pdev, 2);
++ base = devm_platform_ioremap_resource(pdev, 1);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+@@ -561,31 +715,18 @@ static int en7523_clk_probe(struct platform_device *pdev)
+ struct device_node *node = pdev->dev.of_node;
+ const struct en_clk_soc_data *soc_data;
+ struct clk_hw_onecell_data *clk_data;
+- void __iomem *base, *np_base;
+ int r;
+
+- base = devm_platform_ioremap_resource(pdev, 0);
+- if (IS_ERR(base))
+- return PTR_ERR(base);
+-
+- np_base = devm_platform_ioremap_resource(pdev, 1);
+- if (IS_ERR(np_base))
+- return PTR_ERR(np_base);
+-
+- soc_data = device_get_match_data(&pdev->dev);
+- if (soc_data->hw_init) {
+- r = soc_data->hw_init(pdev, np_base);
+- if (r)
+- return r;
+- }
+-
+ clk_data = devm_kzalloc(&pdev->dev,
+ struct_size(clk_data, hws, EN7523_NUM_CLOCKS),
+ GFP_KERNEL);
+ if (!clk_data)
+ return -ENOMEM;
+
+- en7523_register_clocks(&pdev->dev, clk_data, base, np_base);
++ soc_data = device_get_match_data(&pdev->dev);
++ r = soc_data->hw_init(pdev, clk_data);
++ if (r)
++ return r;
+
+ r = of_clk_add_hw_provider(node, of_clk_hw_onecell_get, clk_data);
+ if (r)
+@@ -608,6 +749,7 @@ static const struct en_clk_soc_data en7523_data = {
+ .prepare = en7523_pci_prepare,
+ .unprepare = en7523_pci_unprepare,
+ },
++ .hw_init = en7523_clk_hw_init,
+ };
+
+ static const struct en_clk_soc_data en7581_data = {
+diff --git a/drivers/clk/clk-loongson2.c b/drivers/clk/clk-loongson2.c
+index 820bb1e9e3b79a..7082b4309c6f15 100644
+--- a/drivers/clk/clk-loongson2.c
++++ b/drivers/clk/clk-loongson2.c
+@@ -29,8 +29,10 @@ enum loongson2_clk_type {
+ struct loongson2_clk_provider {
+ void __iomem *base;
+ struct device *dev;
+- struct clk_hw_onecell_data clk_data;
+ spinlock_t clk_lock; /* protect access to DIV registers */
++
++ /* Must be last --ends in a flexible-array member. */
++ struct clk_hw_onecell_data clk_data;
+ };
+
+ struct loongson2_clk_data {
+@@ -304,7 +306,7 @@ static int loongson2_clk_probe(struct platform_device *pdev)
+ return PTR_ERR(clp->base);
+
+ spin_lock_init(&clp->clk_lock);
+- clp->clk_data.num = clks_num + 1;
++ clp->clk_data.num = clks_num;
+ clp->dev = dev;
+
+ for (i = 0; i < clks_num; i++) {
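
The loongson2 change above reorders the provider struct because clk_data
ends in a flexible array (hws[]), and an object containing a flexible array
member must place it last; the clks_num + 1 off-by-one goes away with it. A
compile-clean sketch of the layout and allocation (nesting a struct that
ends in a flexible array is a common GNU extension, as in the kernel):

    #include <stdlib.h>

    struct onecell { unsigned int num; void *hws[]; }; /* flexible array */

    struct provider {
        void *base;                 /* fixed members first */
        struct onecell clk_data;    /* must be the last member */
    };

    int main(void)
    {
        unsigned int clks_num = 8;
        struct provider *p = calloc(1, sizeof(*p) +
                                       clks_num * sizeof(p->clk_data.hws[0]));
        if (!p)
            return 1;
        p->clk_data.num = clks_num;     /* not clks_num + 1 */
        free(p);
        return 0;
    }
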
+diff --git a/drivers/clk/imx/clk-fracn-gppll.c b/drivers/clk/imx/clk-fracn-gppll.c
+index 591e0364ee5c11..85771afd4698ae 100644
+--- a/drivers/clk/imx/clk-fracn-gppll.c
++++ b/drivers/clk/imx/clk-fracn-gppll.c
+@@ -254,9 +254,11 @@ static int clk_fracn_gppll_set_rate(struct clk_hw *hw, unsigned long drate,
+ pll_div = FIELD_PREP(PLL_RDIV_MASK, rate->rdiv) | rate->odiv |
+ FIELD_PREP(PLL_MFI_MASK, rate->mfi);
+ writel_relaxed(pll_div, pll->base + PLL_DIV);
++ readl(pll->base + PLL_DIV);
+ if (pll->flags & CLK_FRACN_GPPLL_FRACN) {
+ writel_relaxed(rate->mfd, pll->base + PLL_DENOMINATOR);
+ writel_relaxed(FIELD_PREP(PLL_MFN_MASK, rate->mfn), pll->base + PLL_NUMERATOR);
++ readl(pll->base + PLL_NUMERATOR);
+ }
+
+ /* Wait for 5us according to fracn mode pll doc */
+@@ -265,6 +267,7 @@ static int clk_fracn_gppll_set_rate(struct clk_hw *hw, unsigned long drate,
+ /* Enable Powerup */
+ tmp |= POWERUP_MASK;
+ writel_relaxed(tmp, pll->base + PLL_CTRL);
++ readl(pll->base + PLL_CTRL);
+
+ /* Wait Lock */
+ ret = clk_fracn_gppll_wait_lock(pll);
+@@ -302,14 +305,15 @@ static int clk_fracn_gppll_prepare(struct clk_hw *hw)
+
+ val |= POWERUP_MASK;
+ writel_relaxed(val, pll->base + PLL_CTRL);
+-
+- val |= CLKMUX_EN;
+- writel_relaxed(val, pll->base + PLL_CTRL);
++ readl(pll->base + PLL_CTRL);
+
+ ret = clk_fracn_gppll_wait_lock(pll);
+ if (ret)
+ return ret;
+
++ val |= CLKMUX_EN;
++ writel_relaxed(val, pll->base + PLL_CTRL);
++
+ val &= ~CLKMUX_BYPASS;
+ writel_relaxed(val, pll->base + PLL_CTRL);
+
+diff --git a/drivers/clk/imx/clk-imx8-acm.c b/drivers/clk/imx/clk-imx8-acm.c
+index 6c351050b82ae0..c169fe53a35f83 100644
+--- a/drivers/clk/imx/clk-imx8-acm.c
++++ b/drivers/clk/imx/clk-imx8-acm.c
+@@ -294,9 +294,9 @@ static int clk_imx_acm_attach_pm_domains(struct device *dev,
+ DL_FLAG_STATELESS |
+ DL_FLAG_PM_RUNTIME |
+ DL_FLAG_RPM_ACTIVE);
+- if (IS_ERR(dev_pm->pd_dev_link[i])) {
++ if (!dev_pm->pd_dev_link[i]) {
+ dev_pm_domain_detach(dev_pm->pd_dev[i], false);
+- ret = PTR_ERR(dev_pm->pd_dev_link[i]);
++ ret = -EINVAL;
+ goto detach_pm;
+ }
+ }
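
The imx8-acm fix above swaps an IS_ERR() test for a NULL test:
device_link_add() reports failure with NULL rather than an ERR_PTR(), so the
old check could never fire. A small sketch of why the two conventions are
not interchangeable:

    #include <stdio.h>

    #define MAX_ERRNO 4095
    #define IS_ERR(p) ((unsigned long)(p) >= (unsigned long)-MAX_ERRNO)

    int main(void)
    {
        void *link = NULL;              /* a failed device_link_add() */

        printf("IS_ERR(link)=%d  !link=%d\n", IS_ERR(link), !link);
        /* IS_ERR(NULL) is 0, so only the !link test catches the failure */
        return 0;
    }
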
+diff --git a/drivers/clk/imx/clk-lpcg-scu.c b/drivers/clk/imx/clk-lpcg-scu.c
+index dd5abd09f3e206..620afdf8dc03e9 100644
+--- a/drivers/clk/imx/clk-lpcg-scu.c
++++ b/drivers/clk/imx/clk-lpcg-scu.c
+@@ -6,10 +6,12 @@
+
+ #include <linux/bits.h>
+ #include <linux/clk-provider.h>
++#include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/io.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
++#include <linux/units.h>
+
+ #include "clk-scu.h"
+
+@@ -41,6 +43,29 @@ struct clk_lpcg_scu {
+
+ #define to_clk_lpcg_scu(_hw) container_of(_hw, struct clk_lpcg_scu, hw)
+
++/* e10858 -LPCG clock gating register synchronization errata */
++static void lpcg_e10858_writel(unsigned long rate, void __iomem *reg, u32 val)
++{
++ writel(val, reg);
++
++ if (rate >= 24 * HZ_PER_MHZ || rate == 0) {
++ /*
++ * The time taken to access the LPCG registers from the AP core
++ * through the interconnect is longer than the minimum delay
++ * of 4 clock cycles required by the errata.
++ * Adding a readl will provide sufficient delay to prevent
++ * back-to-back writes.
++ */
++ readl(reg);
++ } else {
++ /*
++ * For clocks running below 24MHz, wait a minimum of
++ * 4 clock cycles.
++ */
++ ndelay(4 * (DIV_ROUND_UP(1000 * HZ_PER_MHZ, rate)));
++ }
++}
++
+ static int clk_lpcg_scu_enable(struct clk_hw *hw)
+ {
+ struct clk_lpcg_scu *clk = to_clk_lpcg_scu(hw);
+@@ -57,7 +82,8 @@ static int clk_lpcg_scu_enable(struct clk_hw *hw)
+ val |= CLK_GATE_SCU_LPCG_HW_SEL;
+
+ reg |= val << clk->bit_idx;
+- writel(reg, clk->reg);
++
++ lpcg_e10858_writel(clk_hw_get_rate(hw), clk->reg, reg);
+
+ spin_unlock_irqrestore(&imx_lpcg_scu_lock, flags);
+
+@@ -74,7 +100,7 @@ static void clk_lpcg_scu_disable(struct clk_hw *hw)
+
+ reg = readl_relaxed(clk->reg);
+ reg &= ~(CLK_GATE_SCU_LPCG_MASK << clk->bit_idx);
+- writel(reg, clk->reg);
++ lpcg_e10858_writel(clk_hw_get_rate(hw), clk->reg, reg);
+
+ spin_unlock_irqrestore(&imx_lpcg_scu_lock, flags);
+ }
+@@ -145,13 +171,8 @@ static int __maybe_unused imx_clk_lpcg_scu_resume(struct device *dev)
+ {
+ struct clk_lpcg_scu *clk = dev_get_drvdata(dev);
+
+- /*
+- * FIXME: Sometimes writes don't work unless the CPU issues
+- * them twice
+- */
+-
+- writel(clk->state, clk->reg);
+ writel(clk->state, clk->reg);
++ lpcg_e10858_writel(0, clk->reg, clk->state);
+ dev_dbg(dev, "restore lpcg state 0x%x\n", clk->state);
+
+ return 0;
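
The e10858 workaround above needs at least four cycles of the gated clock
between back-to-back LPCG writes; for clocks at 24 MHz or more a register
read-back already takes that long, while slower clocks get an explicit delay
of 4 * (10^9 / rate) ns, rounded up. Worked arithmetic for an illustrative
32 kHz gate:

    #include <stdio.h>

    #define HZ_PER_MHZ 1000000UL

    int main(void)
    {
        unsigned long rate = 32768;     /* a slow 32 kHz gate clock */
        unsigned long ns = 4 * ((1000 * HZ_PER_MHZ + rate - 1) / rate);

        printf("min post-write delay: %lu ns\n", ns);   /* 122072 ns */
        return 0;
    }
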
+diff --git a/drivers/clk/imx/clk-scu.c b/drivers/clk/imx/clk-scu.c
+index b1dd0c08e091b6..b27186aaf2a156 100644
+--- a/drivers/clk/imx/clk-scu.c
++++ b/drivers/clk/imx/clk-scu.c
+@@ -596,7 +596,7 @@ static int __maybe_unused imx_clk_scu_suspend(struct device *dev)
+ clk->rate = clk_scu_recalc_rate(&clk->hw, 0);
+ else
+ clk->rate = clk_hw_get_rate(&clk->hw);
+- clk->is_enabled = clk_hw_is_enabled(&clk->hw);
++ clk->is_enabled = clk_hw_is_prepared(&clk->hw);
+
+ if (clk->parent)
+ dev_dbg(dev, "save parent %s idx %u\n", clk_hw_get_name(clk->parent),
+diff --git a/drivers/clk/mediatek/Kconfig b/drivers/clk/mediatek/Kconfig
+index 70a005e7e1b180..486401e1f2f19c 100644
+--- a/drivers/clk/mediatek/Kconfig
++++ b/drivers/clk/mediatek/Kconfig
+@@ -887,13 +887,6 @@ config COMMON_CLK_MT8195_APUSYS
+ help
+ This driver supports MediaTek MT8195 AI Processor Unit System clocks.
+
+-config COMMON_CLK_MT8195_AUDSYS
+- tristate "Clock driver for MediaTek MT8195 audsys"
+- depends on COMMON_CLK_MT8195
+- default COMMON_CLK_MT8195
+- help
+- This driver supports MediaTek MT8195 audsys clocks.
+-
+ config COMMON_CLK_MT8195_IMP_IIC_WRAP
+ tristate "Clock driver for MediaTek MT8195 imp_iic_wrap"
+ depends on COMMON_CLK_MT8195
+@@ -908,14 +901,6 @@ config COMMON_CLK_MT8195_MFGCFG
+ help
+ This driver supports MediaTek MT8195 mfgcfg clocks.
+
+-config COMMON_CLK_MT8195_MSDC
+- tristate "Clock driver for MediaTek MT8195 msdc"
+- depends on COMMON_CLK_MT8195
+- default COMMON_CLK_MT8195
+- help
+- This driver supports MediaTek MT8195 MMC and SD Controller's
+- msdc and msdc_top clocks.
+-
+ config COMMON_CLK_MT8195_SCP_ADSP
+ tristate "Clock driver for MediaTek MT8195 scp_adsp"
+ depends on COMMON_CLK_MT8195
+diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
+index a3e2a09e2105b2..4444dafa4e3dfa 100644
+--- a/drivers/clk/qcom/Kconfig
++++ b/drivers/clk/qcom/Kconfig
+@@ -1230,11 +1230,11 @@ config SM_VIDEOCC_8350
+ config SM_VIDEOCC_8550
+ tristate "SM8550 Video Clock Controller"
+ depends on ARM64 || COMPILE_TEST
+- select SM_GCC_8550
++ depends on SM_GCC_8550 || SM_GCC_8650
+ select QCOM_GDSC
+ help
+ Support for the video clock controller on Qualcomm Technologies, Inc.
+- SM8550 devices.
++ SM8550 or SM8650 devices.
+ Say Y if you want to support video devices and functionality such as
+ video encode/decode.
+
+diff --git a/drivers/clk/ralink/clk-mtmips.c b/drivers/clk/ralink/clk-mtmips.c
+index 50a443bf79ecd3..76285fbbdeaa2d 100644
+--- a/drivers/clk/ralink/clk-mtmips.c
++++ b/drivers/clk/ralink/clk-mtmips.c
+@@ -263,8 +263,9 @@ static int mtmips_register_pherip_clocks(struct device_node *np,
+ .rate = _rate \
+ }
+
+-static struct mtmips_clk_fixed rt305x_fixed_clocks[] = {
+- CLK_FIXED("xtal", NULL, 40000000)
++static struct mtmips_clk_fixed rt3883_fixed_clocks[] = {
++ CLK_FIXED("xtal", NULL, 40000000),
++ CLK_FIXED("periph", "xtal", 40000000)
+ };
+
+ static struct mtmips_clk_fixed rt3352_fixed_clocks[] = {
+@@ -366,6 +367,12 @@ static inline struct mtmips_clk *to_mtmips_clk(struct clk_hw *hw)
+ return container_of(hw, struct mtmips_clk, hw);
+ }
+
++static unsigned long rt2880_xtal_recalc_rate(struct clk_hw *hw,
++ unsigned long parent_rate)
++{
++ return 40000000;
++}
++
+ static unsigned long rt5350_xtal_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+ {
+@@ -677,10 +684,12 @@ static unsigned long mt76x8_cpu_recalc_rate(struct clk_hw *hw,
+ }
+
+ static struct mtmips_clk rt2880_clks_base[] = {
++ { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) },
+ { CLK_BASE("cpu", "xtal", rt2880_cpu_recalc_rate) }
+ };
+
+ static struct mtmips_clk rt305x_clks_base[] = {
++ { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) },
+ { CLK_BASE("cpu", "xtal", rt305x_cpu_recalc_rate) }
+ };
+
+@@ -690,6 +699,7 @@ static struct mtmips_clk rt3352_clks_base[] = {
+ };
+
+ static struct mtmips_clk rt3883_clks_base[] = {
++ { CLK_BASE("xtal", NULL, rt2880_xtal_recalc_rate) },
+ { CLK_BASE("cpu", "xtal", rt3883_cpu_recalc_rate) },
+ { CLK_BASE("bus", "cpu", rt3883_bus_recalc_rate) }
+ };
+@@ -746,8 +756,8 @@ static int mtmips_register_clocks(struct device_node *np,
+ static const struct mtmips_clk_data rt2880_clk_data = {
+ .clk_base = rt2880_clks_base,
+ .num_clk_base = ARRAY_SIZE(rt2880_clks_base),
+- .clk_fixed = rt305x_fixed_clocks,
+- .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks),
++ .clk_fixed = NULL,
++ .num_clk_fixed = 0,
+ .clk_factor = rt2880_factor_clocks,
+ .num_clk_factor = ARRAY_SIZE(rt2880_factor_clocks),
+ .clk_periph = rt2880_pherip_clks,
+@@ -757,8 +767,8 @@ static const struct mtmips_clk_data rt2880_clk_data = {
+ static const struct mtmips_clk_data rt305x_clk_data = {
+ .clk_base = rt305x_clks_base,
+ .num_clk_base = ARRAY_SIZE(rt305x_clks_base),
+- .clk_fixed = rt305x_fixed_clocks,
+- .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks),
++ .clk_fixed = NULL,
++ .num_clk_fixed = 0,
+ .clk_factor = rt305x_factor_clocks,
+ .num_clk_factor = ARRAY_SIZE(rt305x_factor_clocks),
+ .clk_periph = rt305x_pherip_clks,
+@@ -779,8 +789,8 @@ static const struct mtmips_clk_data rt3352_clk_data = {
+ static const struct mtmips_clk_data rt3883_clk_data = {
+ .clk_base = rt3883_clks_base,
+ .num_clk_base = ARRAY_SIZE(rt3883_clks_base),
+- .clk_fixed = rt305x_fixed_clocks,
+- .num_clk_fixed = ARRAY_SIZE(rt305x_fixed_clocks),
++ .clk_fixed = rt3883_fixed_clocks,
++ .num_clk_fixed = ARRAY_SIZE(rt3883_fixed_clocks),
+ .clk_factor = NULL,
+ .num_clk_factor = 0,
+ .clk_periph = rt5350_pherip_clks,
+diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
+index 88bf39e8c79c83..b43b763dfe186a 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.c
++++ b/drivers/clk/renesas/rzg2l-cpg.c
+@@ -548,7 +548,7 @@ static unsigned long
+ rzg2l_cpg_get_foutpostdiv_rate(struct rzg2l_pll5_param *params,
+ unsigned long rate)
+ {
+- unsigned long foutpostdiv_rate;
++ unsigned long foutpostdiv_rate, foutvco_rate;
+
+ params->pl5_intin = rate / MEGA;
+ params->pl5_fracin = div_u64(((u64)rate % MEGA) << 24, MEGA);
+@@ -557,10 +557,11 @@ rzg2l_cpg_get_foutpostdiv_rate(struct rzg2l_pll5_param *params,
+ params->pl5_postdiv2 = 1;
+ params->pl5_spread = 0x16;
+
+- foutpostdiv_rate =
+- EXTAL_FREQ_IN_MEGA_HZ * MEGA / params->pl5_refdiv *
+- ((((params->pl5_intin << 24) + params->pl5_fracin)) >> 24) /
+- (params->pl5_postdiv1 * params->pl5_postdiv2);
++ foutvco_rate = div_u64(mul_u32_u32(EXTAL_FREQ_IN_MEGA_HZ * MEGA,
++ (params->pl5_intin << 24) + params->pl5_fracin),
++ params->pl5_refdiv) >> 24;
++ foutpostdiv_rate = DIV_ROUND_CLOSEST_ULL(foutvco_rate,
++ params->pl5_postdiv1 * params->pl5_postdiv2);
+
+ return foutpostdiv_rate;
+ }
+diff --git a/drivers/clk/sophgo/clk-sg2042-pll.c b/drivers/clk/sophgo/clk-sg2042-pll.c
+index ff9deeef509b8f..1537f4f05860ea 100644
+--- a/drivers/clk/sophgo/clk-sg2042-pll.c
++++ b/drivers/clk/sophgo/clk-sg2042-pll.c
+@@ -153,7 +153,7 @@ static unsigned long sg2042_pll_recalc_rate(unsigned int reg_value,
+
+ sg2042_pll_ctrl_decode(reg_value, &ctrl_table);
+
+- numerator = parent_rate * ctrl_table.fbdiv;
++ numerator = (u64)parent_rate * ctrl_table.fbdiv;
+ denominator = ctrl_table.refdiv * ctrl_table.postdiv1 * ctrl_table.postdiv2;
+ do_div(numerator, denominator);
+ return numerator;
+diff --git a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
+index 9b5cfac2ee70cb..3f095515f54f91 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
++++ b/drivers/clk/sunxi-ng/ccu-sun20i-d1.c
+@@ -1371,7 +1371,7 @@ static int sun20i_d1_ccu_probe(struct platform_device *pdev)
+
+ /* Enforce m1 = 0, m0 = 0 for PLL_AUDIO0 */
+ val = readl(reg + SUN20I_D1_PLL_AUDIO0_REG);
+- val &= ~BIT(1) | BIT(0);
++ val &= ~(BIT(1) | BIT(0));
+ writel(val, reg + SUN20I_D1_PLL_AUDIO0_REG);
+
+ /* Force fanout-27M factor N to 0. */
+diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
+index 95dd4660b5b659..d546903dba4f3a 100644
+--- a/drivers/clocksource/Kconfig
++++ b/drivers/clocksource/Kconfig
+@@ -400,7 +400,8 @@ config ARM_GT_INITIAL_PRESCALER_VAL
+ This affects CPU_FREQ max delta from the initial frequency.
+
+ config ARM_TIMER_SP804
+- bool "Support for Dual Timer SP804 module" if COMPILE_TEST
++ bool "Support for Dual Timer SP804 module"
++ depends on ARM || ARM64 || COMPILE_TEST
+ depends on GENERIC_SCHED_CLOCK && HAVE_CLK
+ select CLKSRC_MMIO
+ select TIMER_OF if OF
+diff --git a/drivers/clocksource/timer-ti-dm-systimer.c b/drivers/clocksource/timer-ti-dm-systimer.c
+index c2dcd8d68e4587..d1c144d6f328cf 100644
+--- a/drivers/clocksource/timer-ti-dm-systimer.c
++++ b/drivers/clocksource/timer-ti-dm-systimer.c
+@@ -686,9 +686,9 @@ subsys_initcall(dmtimer_percpu_timer_startup);
+
+ static int __init dmtimer_percpu_quirk_init(struct device_node *np, u32 pa)
+ {
+- struct device_node *arm_timer;
++ struct device_node *arm_timer __free(device_node) =
++ of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
+
+- arm_timer = of_find_compatible_node(NULL, NULL, "arm,armv7-timer");
+ if (of_device_is_available(arm_timer)) {
+ pr_warn_once("ARM architected timer wrap issue i940 detected\n");
+ return 0;
+diff --git a/drivers/comedi/comedi_fops.c b/drivers/comedi/comedi_fops.c
+index 1b481731df964e..b9df9b19d4bd97 100644
+--- a/drivers/comedi/comedi_fops.c
++++ b/drivers/comedi/comedi_fops.c
+@@ -2407,6 +2407,18 @@ static int comedi_mmap(struct file *file, struct vm_area_struct *vma)
+
+ start += PAGE_SIZE;
+ }
++
++#ifdef CONFIG_MMU
++ /*
++ * Leaving behind a partial mapping of a buffer we're about to
++ * drop is unsafe, see remap_pfn_range_notrack().
++ * We need to zap the range here ourselves instead of relying
++ * on the automatic zapping in remap_pfn_range() because we call
++ * remap_pfn_range() in a loop.
++ */
++ if (retval)
++ zap_vma_ptes(vma, vma->vm_start, size);
++#endif
+ }
+
+ if (retval == 0) {
+diff --git a/drivers/counter/stm32-timer-cnt.c b/drivers/counter/stm32-timer-cnt.c
+index 186e73d6ccb455..87b6ec567b5447 100644
+--- a/drivers/counter/stm32-timer-cnt.c
++++ b/drivers/counter/stm32-timer-cnt.c
+@@ -214,11 +214,17 @@ static int stm32_count_enable_write(struct counter_device *counter,
+ {
+ struct stm32_timer_cnt *const priv = counter_priv(counter);
+ u32 cr1;
++ int ret;
+
+ if (enable) {
+ regmap_read(priv->regmap, TIM_CR1, &cr1);
+- if (!(cr1 & TIM_CR1_CEN))
+- clk_enable(priv->clk);
++ if (!(cr1 & TIM_CR1_CEN)) {
++ ret = clk_enable(priv->clk);
++ if (ret) {
++ dev_err(counter->parent, "Cannot enable clock %d\n", ret);
++ return ret;
++ }
++ }
+
+ regmap_update_bits(priv->regmap, TIM_CR1, TIM_CR1_CEN,
+ TIM_CR1_CEN);
+@@ -694,6 +700,7 @@ static int stm32_timer_cnt_probe_encoder(struct device *dev,
+ }
+
+ ret = of_property_read_u32(tnode, "reg", &idx);
++ of_node_put(tnode);
+ if (ret) {
+ dev_err(dev, "Can't get index (%d)\n", ret);
+ return ret;
+@@ -816,7 +823,11 @@ static int __maybe_unused stm32_timer_cnt_resume(struct device *dev)
+ return ret;
+
+ if (priv->enabled) {
+- clk_enable(priv->clk);
++ ret = clk_enable(priv->clk);
++ if (ret) {
++ dev_err(dev, "Cannot enable clock %d\n", ret);
++ return ret;
++ }
+
+ /* Restore registers that may have been lost */
+ regmap_write(priv->regmap, TIM_SMCR, priv->bak.smcr);
+diff --git a/drivers/counter/ti-ecap-capture.c b/drivers/counter/ti-ecap-capture.c
+index 675447315cafb8..b119aeede693ec 100644
+--- a/drivers/counter/ti-ecap-capture.c
++++ b/drivers/counter/ti-ecap-capture.c
+@@ -574,8 +574,13 @@ static int ecap_cnt_resume(struct device *dev)
+ {
+ struct counter_device *counter_dev = dev_get_drvdata(dev);
+ struct ecap_cnt_dev *ecap_dev = counter_priv(counter_dev);
++ int ret;
+
+- clk_enable(ecap_dev->clk);
++ ret = clk_enable(ecap_dev->clk);
++ if (ret) {
++ dev_err(dev, "Cannot enable clock %d\n", ret);
++ return ret;
++ }
+
+ ecap_cnt_capture_set_evmode(counter_dev, ecap_dev->pm_ctx.ev_mode);
+
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index b63863f77c6778..91d3c3b1c2d3bf 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -665,34 +665,12 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
+ static int amd_pstate_cpu_boost_update(struct cpufreq_policy *policy, bool on)
+ {
+ struct amd_cpudata *cpudata = policy->driver_data;
+- struct cppc_perf_ctrls perf_ctrls;
+- u32 highest_perf, nominal_perf, nominal_freq, max_freq;
++ u32 nominal_freq, max_freq;
+ int ret = 0;
+
+- highest_perf = READ_ONCE(cpudata->highest_perf);
+- nominal_perf = READ_ONCE(cpudata->nominal_perf);
+ nominal_freq = READ_ONCE(cpudata->nominal_freq);
+ max_freq = READ_ONCE(cpudata->max_freq);
+
+- if (boot_cpu_has(X86_FEATURE_CPPC)) {
+- u64 value = READ_ONCE(cpudata->cppc_req_cached);
+-
+- value &= ~GENMASK_ULL(7, 0);
+- value |= on ? highest_perf : nominal_perf;
+- WRITE_ONCE(cpudata->cppc_req_cached, value);
+-
+- wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+- } else {
+- perf_ctrls.max_perf = on ? highest_perf : nominal_perf;
+- ret = cppc_set_perf(cpudata->cpu, &perf_ctrls);
+- if (ret) {
+- cpufreq_cpu_release(policy);
+- pr_debug("Failed to set max perf on CPU:%d. ret:%d\n",
+- cpudata->cpu, ret);
+- return ret;
+- }
+- }
+-
+ if (on)
+ policy->cpuinfo.max_freq = max_freq;
+ else if (policy->cpuinfo.max_freq > nominal_freq * 1000)
+@@ -1535,7 +1513,7 @@ static int amd_pstate_epp_update_limit(struct cpufreq_policy *policy)
+ value = READ_ONCE(cpudata->cppc_req_cached);
+
+ if (cpudata->policy == CPUFREQ_POLICY_PERFORMANCE)
+- min_perf = max_perf;
++ min_perf = min(cpudata->nominal_perf, max_perf);
+
+ /* Initial min/max values for CPPC Performance Controls Register */
+ value &= ~AMD_CPPC_MIN_PERF(~0L);
+diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
+index 2b8708475ac776..c1cdf0f4d0ddda 100644
+--- a/drivers/cpufreq/cppc_cpufreq.c
++++ b/drivers/cpufreq/cppc_cpufreq.c
+@@ -118,6 +118,9 @@ static void cppc_scale_freq_workfn(struct kthread_work *work)
+
+ perf = cppc_perf_from_fbctrs(cpu_data, &cppc_fi->prev_perf_fb_ctrs,
+ &fb_ctrs);
++ if (!perf)
++ return;
++
+ cppc_fi->prev_perf_fb_ctrs = fb_ctrs;
+
+ perf <<= SCHED_CAPACITY_SHIFT;
+@@ -420,6 +423,9 @@ static int cppc_get_cpu_power(struct device *cpu_dev,
+ struct cppc_cpudata *cpu_data;
+
+ policy = cpufreq_cpu_get_raw(cpu_dev->id);
++ if (!policy)
++ return -EINVAL;
++
+ cpu_data = policy->driver_data;
+ perf_caps = &cpu_data->perf_caps;
+ max_cap = arch_scale_cpu_capacity(cpu_dev->id);
+@@ -487,6 +493,9 @@ static int cppc_get_cpu_cost(struct device *cpu_dev, unsigned long KHz,
+ int step;
+
+ policy = cpufreq_cpu_get_raw(cpu_dev->id);
++ if (!policy)
++ return -EINVAL;
++
+ cpu_data = policy->driver_data;
+ perf_caps = &cpu_data->perf_caps;
+ max_cap = arch_scale_cpu_capacity(cpu_dev->id);
+@@ -724,13 +733,31 @@ static int cppc_perf_from_fbctrs(struct cppc_cpudata *cpu_data,
+ delta_delivered = get_delta(fb_ctrs_t1->delivered,
+ fb_ctrs_t0->delivered);
+
+- /* Check to avoid divide-by zero and invalid delivered_perf */
++ /*
++ * Avoid divide-by zero and unchanged feedback counters.
++ * Leave it for callers to handle.
++ */
+ if (!delta_reference || !delta_delivered)
+- return cpu_data->perf_ctrls.desired_perf;
++ return 0;
+
+ return (reference_perf * delta_delivered) / delta_reference;
+ }
+
++static int cppc_get_perf_ctrs_sample(int cpu,
++ struct cppc_perf_fb_ctrs *fb_ctrs_t0,
++ struct cppc_perf_fb_ctrs *fb_ctrs_t1)
++{
++ int ret;
++
++ ret = cppc_get_perf_ctrs(cpu, fb_ctrs_t0);
++ if (ret)
++ return ret;
++
++ udelay(2); /* 2usec delay between sampling */
++
++ return cppc_get_perf_ctrs(cpu, fb_ctrs_t1);
++}
++
+ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
+ {
+ struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0};
+@@ -746,18 +773,32 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
+
+ cpufreq_cpu_put(policy);
+
+- ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t0);
+- if (ret)
+- return 0;
+-
+- udelay(2); /* 2usec delay between sampling */
+-
+- ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t1);
+- if (ret)
+- return 0;
++ ret = cppc_get_perf_ctrs_sample(cpu, &fb_ctrs_t0, &fb_ctrs_t1);
++ if (ret) {
++ if (ret == -EFAULT)
++ /* Any of the associated CPPC regs is 0. */
++ goto out_invalid_counters;
++ else
++ return 0;
++ }
+
+ delivered_perf = cppc_perf_from_fbctrs(cpu_data, &fb_ctrs_t0,
+ &fb_ctrs_t1);
++ if (!delivered_perf)
++ goto out_invalid_counters;
++
++ return cppc_perf_to_khz(&cpu_data->perf_caps, delivered_perf);
++
++out_invalid_counters:
++ /*
++ * Feedback counters could be unchanged or 0 when a cpu enters a
++ * low-power idle state, e.g. clock-gated or power-gated.
++ * Use desired perf for reflecting frequency. Get the latest register
++ * value first as some platforms may update the actual delivered perf
++ * there; if failed, resort to the cached desired perf.
++ */
++ if (cppc_get_desired_perf(cpu, &delivered_perf))
++ delivered_perf = cpu_data->perf_ctrls.desired_perf;
+
+ return cppc_perf_to_khz(&cpu_data->perf_caps, delivered_perf);
+ }
+diff --git a/drivers/cpufreq/loongson2_cpufreq.c b/drivers/cpufreq/loongson2_cpufreq.c
+index 6a8e97896d38ca..ed1a6dbad63894 100644
+--- a/drivers/cpufreq/loongson2_cpufreq.c
++++ b/drivers/cpufreq/loongson2_cpufreq.c
+@@ -148,7 +148,9 @@ static int __init cpufreq_init(void)
+
+ ret = cpufreq_register_driver(&loongson2_cpufreq_driver);
+
+- if (!ret && !nowait) {
++ if (ret) {
++ platform_driver_unregister(&platform_driver);
++ } else if (!nowait) {
+ saved_cpu_wait = cpu_wait;
+ cpu_wait = loongson2_cpu_wait;
+ }
+diff --git a/drivers/cpufreq/loongson3_cpufreq.c b/drivers/cpufreq/loongson3_cpufreq.c
+index 6b5e6798d9a283..a923e196ec86e7 100644
+--- a/drivers/cpufreq/loongson3_cpufreq.c
++++ b/drivers/cpufreq/loongson3_cpufreq.c
+@@ -346,8 +346,11 @@ static int loongson3_cpufreq_probe(struct platform_device *pdev)
+ {
+ int i, ret;
+
+- for (i = 0; i < MAX_PACKAGES; i++)
+- devm_mutex_init(&pdev->dev, &cpufreq_mutex[i]);
++ for (i = 0; i < MAX_PACKAGES; i++) {
++ ret = devm_mutex_init(&pdev->dev, &cpufreq_mutex[i]);
++ if (ret)
++ return ret;
++ }
+
+ ret = do_service_request(0, 0, CMD_GET_VERSION, 0, 0);
+ if (ret <= 0)
+diff --git a/drivers/cpufreq/mediatek-cpufreq-hw.c b/drivers/cpufreq/mediatek-cpufreq-hw.c
+index 8925e096d5b9a0..aeb5e63045421b 100644
+--- a/drivers/cpufreq/mediatek-cpufreq-hw.c
++++ b/drivers/cpufreq/mediatek-cpufreq-hw.c
+@@ -62,7 +62,7 @@ mtk_cpufreq_get_cpu_power(struct device *cpu_dev, unsigned long *uW,
+
+ policy = cpufreq_cpu_get_raw(cpu_dev->id);
+ if (!policy)
+- return 0;
++ return -EINVAL;
+
+ data = policy->driver_data;
+
+diff --git a/drivers/crypto/bcm/cipher.c b/drivers/crypto/bcm/cipher.c
+index 1a3ecd44cbaf65..20f6453670aa49 100644
+--- a/drivers/crypto/bcm/cipher.c
++++ b/drivers/crypto/bcm/cipher.c
+@@ -2415,6 +2415,7 @@ static int ahash_hmac_setkey(struct crypto_ahash *ahash, const u8 *key,
+
+ static int ahash_hmac_init(struct ahash_request *req)
+ {
++ int ret;
+ struct iproc_reqctx_s *rctx = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct iproc_ctx_s *ctx = crypto_ahash_ctx(tfm);
+@@ -2424,7 +2425,9 @@ static int ahash_hmac_init(struct ahash_request *req)
+ flow_log("ahash_hmac_init()\n");
+
+ /* init the context as a hash */
+- ahash_init(req);
++ ret = ahash_init(req);
++ if (ret)
++ return ret;
+
+ if (!spu_no_incr_hash(ctx)) {
+ /* SPU-M can do incr hashing but needs sw for outer HMAC */
+diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
+index 887a5f2fb9279b..cb001aa1de6618 100644
+--- a/drivers/crypto/caam/caampkc.c
++++ b/drivers/crypto/caam/caampkc.c
+@@ -984,7 +984,7 @@ static int caam_rsa_set_pub_key(struct crypto_akcipher *tfm, const void *key,
+ return -ENOMEM;
+ }
+
+-static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
++static int caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
+ struct rsa_key *raw_key)
+ {
+ struct caam_rsa_key *rsa_key = &ctx->key;
+@@ -994,7 +994,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
+
+ rsa_key->p = caam_read_raw_data(raw_key->p, &p_sz);
+ if (!rsa_key->p)
+- return;
++ return -ENOMEM;
+ rsa_key->p_sz = p_sz;
+
+ rsa_key->q = caam_read_raw_data(raw_key->q, &q_sz);
+@@ -1029,7 +1029,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
+
+ rsa_key->priv_form = FORM3;
+
+- return;
++ return 0;
+
+ free_dq:
+ kfree_sensitive(rsa_key->dq);
+@@ -1043,6 +1043,7 @@ static void caam_rsa_set_priv_key_form(struct caam_rsa_ctx *ctx,
+ kfree_sensitive(rsa_key->q);
+ free_p:
+ kfree_sensitive(rsa_key->p);
++ return -ENOMEM;
+ }
+
+ static int caam_rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key,
+@@ -1088,7 +1089,9 @@ static int caam_rsa_set_priv_key(struct crypto_akcipher *tfm, const void *key,
+ rsa_key->e_sz = raw_key.e_sz;
+ rsa_key->n_sz = raw_key.n_sz;
+
+- caam_rsa_set_priv_key_form(ctx, &raw_key);
++ ret = caam_rsa_set_priv_key_form(ctx, &raw_key);
++ if (ret)
++ goto err;
+
+ return 0;
+
+diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c
+index f6111ee9ed342d..8ed2bb01a619fd 100644
+--- a/drivers/crypto/caam/qi.c
++++ b/drivers/crypto/caam/qi.c
+@@ -794,7 +794,7 @@ int caam_qi_init(struct platform_device *caam_pdev)
+
+ caam_debugfs_qi_init(ctrlpriv);
+
+- err = devm_add_action_or_reset(qidev, caam_qi_shutdown, ctrlpriv);
++ err = devm_add_action_or_reset(qidev, caam_qi_shutdown, qidev);
+ if (err)
+ goto fail2;
+
+diff --git a/drivers/crypto/cavium/cpt/cptpf_main.c b/drivers/crypto/cavium/cpt/cptpf_main.c
+index 6872ac3440010f..54de869e5374c2 100644
+--- a/drivers/crypto/cavium/cpt/cptpf_main.c
++++ b/drivers/crypto/cavium/cpt/cptpf_main.c
+@@ -44,7 +44,7 @@ static void cpt_disable_cores(struct cpt_device *cpt, u64 coremask,
+ dev_err(dev, "Cores still busy %llx", coremask);
+ grp = cpt_read_csr64(cpt->reg_base,
+ CPTX_PF_EXEC_BUSY(0));
+- if (timeout--)
++ if (!timeout--)
+ break;
+
+ udelay(CSR_DELAY);
+@@ -302,6 +302,8 @@ static int cpt_ucode_load_fw(struct cpt_device *cpt, const u8 *fw, bool is_ae)
+
+ ret = do_cpt_init(cpt, mcode);
+ if (ret) {
++ dma_free_coherent(&cpt->pdev->dev, mcode->code_size,
++ mcode->code, mcode->phys_base);
+ dev_err(dev, "do_cpt_init failed with ret: %d\n", ret);
+ goto fw_release;
+ }
+@@ -394,7 +396,7 @@ static void cpt_disable_all_cores(struct cpt_device *cpt)
+ dev_err(dev, "Cores still busy");
+ grp = cpt_read_csr64(cpt->reg_base,
+ CPTX_PF_EXEC_BUSY(0));
+- if (timeout--)
++ if (!timeout--)
+ break;
+
+ udelay(CSR_DELAY);
+diff --git a/drivers/crypto/hisilicon/hpre/hpre_main.c b/drivers/crypto/hisilicon/hpre/hpre_main.c
+index 6b536ad2ada52a..34d30b78381343 100644
+--- a/drivers/crypto/hisilicon/hpre/hpre_main.c
++++ b/drivers/crypto/hisilicon/hpre/hpre_main.c
+@@ -1280,11 +1280,15 @@ static u32 hpre_get_hw_err_status(struct hisi_qm *qm)
+
+ static void hpre_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
+ {
+- u32 nfe;
+-
+ writel(err_sts, qm->io_base + HPRE_HAC_SOURCE_INT);
+- nfe = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
+- writel(nfe, qm->io_base + HPRE_RAS_NFE_ENB);
++}
++
++static void hpre_disable_error_report(struct hisi_qm *qm, u32 err_type)
++{
++ u32 nfe_mask;
++
++ nfe_mask = hisi_qm_get_hw_info(qm, hpre_basic_info, HPRE_NFE_MASK_CAP, qm->cap_ver);
++ writel(nfe_mask & (~err_type), qm->io_base + HPRE_RAS_NFE_ENB);
+ }
+
+ static void hpre_open_axi_master_ooo(struct hisi_qm *qm)
+@@ -1298,6 +1302,27 @@ static void hpre_open_axi_master_ooo(struct hisi_qm *qm)
+ qm->io_base + HPRE_AM_OOO_SHUTDOWN_ENB);
+ }
+
++static enum acc_err_result hpre_get_err_result(struct hisi_qm *qm)
++{
++ u32 err_status;
++
++ err_status = hpre_get_hw_err_status(qm);
++ if (err_status) {
++ if (err_status & qm->err_info.ecc_2bits_mask)
++ qm->err_status.is_dev_ecc_mbit = true;
++ hpre_log_hw_error(qm, err_status);
++
++ if (err_status & qm->err_info.dev_reset_mask) {
++ /* Disable the same error reporting until device is recovered. */
++ hpre_disable_error_report(qm, err_status);
++ return ACC_ERR_NEED_RESET;
++ }
++ hpre_clear_hw_err_status(qm, err_status);
++ }
++
++ return ACC_ERR_RECOVERED;
++}
++
+ static void hpre_err_info_init(struct hisi_qm *qm)
+ {
+ struct hisi_qm_err_info *err_info = &qm->err_info;
+@@ -1324,12 +1349,12 @@ static const struct hisi_qm_err_ini hpre_err_ini = {
+ .hw_err_disable = hpre_hw_error_disable,
+ .get_dev_hw_err_status = hpre_get_hw_err_status,
+ .clear_dev_hw_err_status = hpre_clear_hw_err_status,
+- .log_dev_hw_err = hpre_log_hw_error,
+ .open_axi_master_ooo = hpre_open_axi_master_ooo,
+ .open_sva_prefetch = hpre_open_sva_prefetch,
+ .close_sva_prefetch = hpre_close_sva_prefetch,
+ .show_last_dfx_regs = hpre_show_last_dfx_regs,
+ .err_info_init = hpre_err_info_init,
++ .get_err_result = hpre_get_err_result,
+ };
+
+ static int hpre_pf_probe_init(struct hpre *hpre)
+diff --git a/drivers/crypto/hisilicon/qm.c b/drivers/crypto/hisilicon/qm.c
+index 07983af9e3e229..b18692ee7fd563 100644
+--- a/drivers/crypto/hisilicon/qm.c
++++ b/drivers/crypto/hisilicon/qm.c
+@@ -271,12 +271,6 @@ enum vft_type {
+ SHAPER_VFT,
+ };
+
+-enum acc_err_result {
+- ACC_ERR_NONE,
+- ACC_ERR_NEED_RESET,
+- ACC_ERR_RECOVERED,
+-};
+-
+ enum qm_alg_type {
+ ALG_TYPE_0,
+ ALG_TYPE_1,
+@@ -1425,22 +1419,25 @@ static void qm_log_hw_error(struct hisi_qm *qm, u32 error_status)
+
+ static enum acc_err_result qm_hw_error_handle_v2(struct hisi_qm *qm)
+ {
+- u32 error_status, tmp;
+-
+- /* read err sts */
+- tmp = readl(qm->io_base + QM_ABNORMAL_INT_STATUS);
+- error_status = qm->error_mask & tmp;
++ u32 error_status;
+
+- if (error_status) {
++ error_status = qm_get_hw_error_status(qm);
++ if (error_status & qm->error_mask) {
+ if (error_status & QM_ECC_MBIT)
+ qm->err_status.is_qm_ecc_mbit = true;
+
+ qm_log_hw_error(qm, error_status);
+- if (error_status & qm->err_info.qm_reset_mask)
++ if (error_status & qm->err_info.qm_reset_mask) {
++ /* Disable the same error reporting until device is recovered. */
++ writel(qm->err_info.nfe & (~error_status),
++ qm->io_base + QM_RAS_NFE_ENABLE);
+ return ACC_ERR_NEED_RESET;
++ }
+
++ /* Clear error source if not need reset. */
+ writel(error_status, qm->io_base + QM_ABNORMAL_INT_SOURCE);
+ writel(qm->err_info.nfe, qm->io_base + QM_RAS_NFE_ENABLE);
++ writel(qm->err_info.ce, qm->io_base + QM_RAS_CE_ENABLE);
+ }
+
+ return ACC_ERR_RECOVERED;
+@@ -3861,30 +3858,12 @@ EXPORT_SYMBOL_GPL(hisi_qm_sriov_configure);
+
+ static enum acc_err_result qm_dev_err_handle(struct hisi_qm *qm)
+ {
+- u32 err_sts;
+-
+- if (!qm->err_ini->get_dev_hw_err_status) {
+- dev_err(&qm->pdev->dev, "Device doesn't support get hw error status!\n");
++ if (!qm->err_ini->get_err_result) {
++ dev_err(&qm->pdev->dev, "Device doesn't support reset!\n");
+ return ACC_ERR_NONE;
+ }
+
+- /* get device hardware error status */
+- err_sts = qm->err_ini->get_dev_hw_err_status(qm);
+- if (err_sts) {
+- if (err_sts & qm->err_info.ecc_2bits_mask)
+- qm->err_status.is_dev_ecc_mbit = true;
+-
+- if (qm->err_ini->log_dev_hw_err)
+- qm->err_ini->log_dev_hw_err(qm, err_sts);
+-
+- if (err_sts & qm->err_info.dev_reset_mask)
+- return ACC_ERR_NEED_RESET;
+-
+- if (qm->err_ini->clear_dev_hw_err_status)
+- qm->err_ini->clear_dev_hw_err_status(qm, err_sts);
+- }
+-
+- return ACC_ERR_RECOVERED;
++ return qm->err_ini->get_err_result(qm);
+ }
+
+ static enum acc_err_result qm_process_dev_error(struct hisi_qm *qm)
+diff --git a/drivers/crypto/hisilicon/sec2/sec_main.c b/drivers/crypto/hisilicon/sec2/sec_main.c
+index c35533d8930b21..75c25f0d5f2b82 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_main.c
++++ b/drivers/crypto/hisilicon/sec2/sec_main.c
+@@ -1010,11 +1010,15 @@ static u32 sec_get_hw_err_status(struct hisi_qm *qm)
+
+ static void sec_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
+ {
+- u32 nfe;
+-
+ writel(err_sts, qm->io_base + SEC_CORE_INT_SOURCE);
+- nfe = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver);
+- writel(nfe, qm->io_base + SEC_RAS_NFE_REG);
++}
++
++static void sec_disable_error_report(struct hisi_qm *qm, u32 err_type)
++{
++ u32 nfe_mask;
++
++ nfe_mask = hisi_qm_get_hw_info(qm, sec_basic_info, SEC_NFE_MASK_CAP, qm->cap_ver);
++ writel(nfe_mask & (~err_type), qm->io_base + SEC_RAS_NFE_REG);
+ }
+
+ static void sec_open_axi_master_ooo(struct hisi_qm *qm)
+@@ -1026,6 +1030,27 @@ static void sec_open_axi_master_ooo(struct hisi_qm *qm)
+ writel(val | SEC_AXI_SHUTDOWN_ENABLE, qm->io_base + SEC_CONTROL_REG);
+ }
+
++static enum acc_err_result sec_get_err_result(struct hisi_qm *qm)
++{
++ u32 err_status;
++
++ err_status = sec_get_hw_err_status(qm);
++ if (err_status) {
++ if (err_status & qm->err_info.ecc_2bits_mask)
++ qm->err_status.is_dev_ecc_mbit = true;
++ sec_log_hw_error(qm, err_status);
++
++ if (err_status & qm->err_info.dev_reset_mask) {
++ /* Disable the same error reporting until device is recovered. */
++ sec_disable_error_report(qm, err_status);
++ return ACC_ERR_NEED_RESET;
++ }
++ sec_clear_hw_err_status(qm, err_status);
++ }
++
++ return ACC_ERR_RECOVERED;
++}
++
+ static void sec_err_info_init(struct hisi_qm *qm)
+ {
+ struct hisi_qm_err_info *err_info = &qm->err_info;
+@@ -1052,12 +1077,12 @@ static const struct hisi_qm_err_ini sec_err_ini = {
+ .hw_err_disable = sec_hw_error_disable,
+ .get_dev_hw_err_status = sec_get_hw_err_status,
+ .clear_dev_hw_err_status = sec_clear_hw_err_status,
+- .log_dev_hw_err = sec_log_hw_error,
+ .open_axi_master_ooo = sec_open_axi_master_ooo,
+ .open_sva_prefetch = sec_open_sva_prefetch,
+ .close_sva_prefetch = sec_close_sva_prefetch,
+ .show_last_dfx_regs = sec_show_last_dfx_regs,
+ .err_info_init = sec_err_info_init,
++ .get_err_result = sec_get_err_result,
+ };
+
+ static int sec_pf_probe_init(struct sec_dev *sec)
+diff --git a/drivers/crypto/hisilicon/zip/zip_main.c b/drivers/crypto/hisilicon/zip/zip_main.c
+index d07e47b48be06a..80c2fcb1d26dcf 100644
+--- a/drivers/crypto/hisilicon/zip/zip_main.c
++++ b/drivers/crypto/hisilicon/zip/zip_main.c
+@@ -1059,11 +1059,15 @@ static u32 hisi_zip_get_hw_err_status(struct hisi_qm *qm)
+
+ static void hisi_zip_clear_hw_err_status(struct hisi_qm *qm, u32 err_sts)
+ {
+- u32 nfe;
+-
+ writel(err_sts, qm->io_base + HZIP_CORE_INT_SOURCE);
+- nfe = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
+- writel(nfe, qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
++}
++
++static void hisi_zip_disable_error_report(struct hisi_qm *qm, u32 err_type)
++{
++ u32 nfe_mask;
++
++ nfe_mask = hisi_qm_get_hw_info(qm, zip_basic_cap_info, ZIP_NFE_MASK_CAP, qm->cap_ver);
++ writel(nfe_mask & (~err_type), qm->io_base + HZIP_CORE_INT_RAS_NFE_ENB);
+ }
+
+ static void hisi_zip_open_axi_master_ooo(struct hisi_qm *qm)
+@@ -1093,6 +1097,27 @@ static void hisi_zip_close_axi_master_ooo(struct hisi_qm *qm)
+ qm->io_base + HZIP_CORE_INT_SET);
+ }
+
++static enum acc_err_result hisi_zip_get_err_result(struct hisi_qm *qm)
++{
++ u32 err_status;
++
++ err_status = hisi_zip_get_hw_err_status(qm);
++ if (err_status) {
++ if (err_status & qm->err_info.ecc_2bits_mask)
++ qm->err_status.is_dev_ecc_mbit = true;
++ hisi_zip_log_hw_error(qm, err_status);
++
++ if (err_status & qm->err_info.dev_reset_mask) {
++ /* Disable the same error reporting until device is recovered. */
++ hisi_zip_disable_error_report(qm, err_status);
++ return ACC_ERR_NEED_RESET;
++ }
++ hisi_zip_clear_hw_err_status(qm, err_status);
++ }
++
++ return ACC_ERR_RECOVERED;
++}
++
+ static void hisi_zip_err_info_init(struct hisi_qm *qm)
+ {
+ struct hisi_qm_err_info *err_info = &qm->err_info;
+@@ -1120,13 +1145,13 @@ static const struct hisi_qm_err_ini hisi_zip_err_ini = {
+ .hw_err_disable = hisi_zip_hw_error_disable,
+ .get_dev_hw_err_status = hisi_zip_get_hw_err_status,
+ .clear_dev_hw_err_status = hisi_zip_clear_hw_err_status,
+- .log_dev_hw_err = hisi_zip_log_hw_error,
+ .open_axi_master_ooo = hisi_zip_open_axi_master_ooo,
+ .close_axi_master_ooo = hisi_zip_close_axi_master_ooo,
+ .open_sva_prefetch = hisi_zip_open_sva_prefetch,
+ .close_sva_prefetch = hisi_zip_close_sva_prefetch,
+ .show_last_dfx_regs = hisi_zip_show_last_dfx_regs,
+ .err_info_init = hisi_zip_err_info_init,
++ .get_err_result = hisi_zip_get_err_result,
+ };
+
+ static int hisi_zip_pf_probe_init(struct hisi_zip *hisi_zip)
+diff --git a/drivers/crypto/inside-secure/safexcel_hash.c b/drivers/crypto/inside-secure/safexcel_hash.c
+index e17577b785c33a..f44c08f5f5ec4a 100644
+--- a/drivers/crypto/inside-secure/safexcel_hash.c
++++ b/drivers/crypto/inside-secure/safexcel_hash.c
+@@ -2093,7 +2093,7 @@ static int safexcel_xcbcmac_cra_init(struct crypto_tfm *tfm)
+
+ safexcel_ahash_cra_init(tfm);
+ ctx->aes = kmalloc(sizeof(*ctx->aes), GFP_KERNEL);
+- return PTR_ERR_OR_ZERO(ctx->aes);
++ return ctx->aes == NULL ? -ENOMEM : 0;
+ }
+
+ static void safexcel_xcbcmac_cra_exit(struct crypto_tfm *tfm)
+diff --git a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+index 78f0ea49254dbb..9faef33e54bd32 100644
+--- a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+@@ -375,7 +375,7 @@ static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
+ else
+ id = -EINVAL;
+
+- if (id < 0 || id > num_objs)
++ if (id < 0 || id >= num_objs)
+ return NULL;
+
+ return fw_objs[id];
+diff --git a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+index 9fd7ec53b9f3d8..bbd92c017c28ed 100644
+--- a/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_4xxx/adf_4xxx_hw_data.c
+@@ -334,7 +334,7 @@ static const char *uof_get_name(struct adf_accel_dev *accel_dev, u32 obj_num,
+ else
+ id = -EINVAL;
+
+- if (id < 0 || id > num_objs)
++ if (id < 0 || id >= num_objs)
+ return NULL;
+
+ return fw_objs[id];
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_aer.c b/drivers/crypto/intel/qat/qat_common/adf_aer.c
+index ec7913ab00a2c7..4cb8bd83f57071 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_aer.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_aer.c
+@@ -281,8 +281,11 @@ int adf_init_aer(void)
+ return -EFAULT;
+
+ device_sriov_wq = alloc_workqueue("qat_device_sriov_wq", 0, 0);
+- if (!device_sriov_wq)
++ if (!device_sriov_wq) {
++ destroy_workqueue(device_reset_wq);
++ device_reset_wq = NULL;
+ return -EFAULT;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c b/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c
+index c42f5c25aabdfa..4c11ad1ebcf0f8 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_dbgfs.c
+@@ -22,18 +22,13 @@
+ void adf_dbgfs_init(struct adf_accel_dev *accel_dev)
+ {
+ char name[ADF_DEVICE_NAME_LENGTH];
+- void *ret;
+
+ /* Create dev top level debugfs entry */
+ snprintf(name, sizeof(name), "%s%s_%s", ADF_DEVICE_NAME_PREFIX,
+ accel_dev->hw_device->dev_class->name,
+ pci_name(accel_dev->accel_pci_dev.pci_dev));
+
+- ret = debugfs_create_dir(name, NULL);
+- if (IS_ERR_OR_NULL(ret))
+- return;
+-
+- accel_dev->debugfs_dir = ret;
++ accel_dev->debugfs_dir = debugfs_create_dir(name, NULL);
+
+ adf_cfg_dev_dbgfs_add(accel_dev);
+ }
+@@ -59,9 +54,6 @@ EXPORT_SYMBOL_GPL(adf_dbgfs_exit);
+ */
+ void adf_dbgfs_add(struct adf_accel_dev *accel_dev)
+ {
+- if (!accel_dev->debugfs_dir)
+- return;
+-
+ if (!accel_dev->is_vf) {
+ adf_fw_counters_dbgfs_add(accel_dev);
+ adf_heartbeat_dbgfs_add(accel_dev);
+@@ -77,9 +69,6 @@ void adf_dbgfs_add(struct adf_accel_dev *accel_dev)
+ */
+ void adf_dbgfs_rm(struct adf_accel_dev *accel_dev)
+ {
+- if (!accel_dev->debugfs_dir)
+- return;
+-
+ if (!accel_dev->is_vf) {
+ adf_tl_dbgfs_rm(accel_dev);
+ adf_cnv_dbgfs_rm(accel_dev);
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c b/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c
+index 65bd26b25abce9..f93d9cca70cee4 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_hw_arbiter.c
+@@ -90,10 +90,6 @@ void adf_exit_arb(struct adf_accel_dev *accel_dev)
+
+ hw_data->get_arb_info(&info);
+
+- /* Reset arbiter configuration */
+- for (i = 0; i < ADF_ARB_NUM; i++)
+- WRITE_CSR_ARB_SARCONFIG(csr, arb_off, i, 0);
+-
+ /* Unmap worker threads to service arbiters */
+ for (i = 0; i < hw_data->num_engines; i++)
+ WRITE_CSR_ARB_WT2SAM(csr, arb_off, wt_off, i, 0);
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index c82775dbb557a7..77a6301f37f0af 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -225,21 +225,22 @@ static int mxs_dcp_start_dma(struct dcp_async_ctx *actx)
+ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ struct skcipher_request *req, int init)
+ {
+- dma_addr_t key_phys = 0;
+- dma_addr_t src_phys, dst_phys;
++ dma_addr_t key_phys, src_phys, dst_phys;
+ struct dcp *sdcp = global_sdcp;
+ struct dcp_dma_desc *desc = &sdcp->coh->desc[actx->chan];
+ struct dcp_aes_req_ctx *rctx = skcipher_request_ctx(req);
+ bool key_referenced = actx->key_referenced;
+ int ret;
+
+- if (!key_referenced) {
++ if (key_referenced)
++ key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key + AES_KEYSIZE_128,
++ AES_KEYSIZE_128, DMA_TO_DEVICE);
++ else
+ key_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_key,
+ 2 * AES_KEYSIZE_128, DMA_TO_DEVICE);
+- ret = dma_mapping_error(sdcp->dev, key_phys);
+- if (ret)
+- return ret;
+- }
++ ret = dma_mapping_error(sdcp->dev, key_phys);
++ if (ret)
++ return ret;
+
+ src_phys = dma_map_single(sdcp->dev, sdcp->coh->aes_in_buf,
+ DCP_BUF_SZ, DMA_TO_DEVICE);
+@@ -300,7 +301,10 @@ static int mxs_dcp_run_aes(struct dcp_async_ctx *actx,
+ err_dst:
+ dma_unmap_single(sdcp->dev, src_phys, DCP_BUF_SZ, DMA_TO_DEVICE);
+ err_src:
+- if (!key_referenced)
++ if (key_referenced)
++ dma_unmap_single(sdcp->dev, key_phys, AES_KEYSIZE_128,
++ DMA_TO_DEVICE);
++ else
+ dma_unmap_single(sdcp->dev, key_phys, 2 * AES_KEYSIZE_128,
+ DMA_TO_DEVICE);
+ return ret;
+diff --git a/drivers/dax/pmem/Makefile b/drivers/dax/pmem/Makefile
+deleted file mode 100644
+index 191c31f0d4f008..00000000000000
+--- a/drivers/dax/pmem/Makefile
++++ /dev/null
+@@ -1,7 +0,0 @@
+-# SPDX-License-Identifier: GPL-2.0-only
+-obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o
+-obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem_core.o
+-
+-dax_pmem-y := pmem.o
+-dax_pmem_core-y := core.o
+-dax_pmem_compat-y := compat.o
+diff --git a/drivers/dax/pmem/pmem.c b/drivers/dax/pmem/pmem.c
+deleted file mode 100644
+index dfe91a2990fec4..00000000000000
+--- a/drivers/dax/pmem/pmem.c
++++ /dev/null
+@@ -1,10 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/* Copyright(c) 2016 - 2018 Intel Corporation. All rights reserved. */
+-#include <linux/percpu-refcount.h>
+-#include <linux/memremap.h>
+-#include <linux/module.h>
+-#include <linux/pfn_t.h>
+-#include <linux/nd.h>
+-#include "../bus.h"
+-
+-
+diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
+index b46eb8a552d7be..fee04fdb08220c 100644
+--- a/drivers/dma-buf/Kconfig
++++ b/drivers/dma-buf/Kconfig
+@@ -36,6 +36,7 @@ config UDMABUF
+ depends on DMA_SHARED_BUFFER
+ depends on MEMFD_CREATE || COMPILE_TEST
+ depends on MMU
++ select VMAP_PFN
+ help
+ A driver to let userspace turn memfd regions into dma-bufs.
+ Qemu can use this to create host dmabufs for guest framebuffers.
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index 047c3cd2cefff6..a3638ccc15f571 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -74,21 +74,29 @@ static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
+ static int vmap_udmabuf(struct dma_buf *buf, struct iosys_map *map)
+ {
+ struct udmabuf *ubuf = buf->priv;
+- struct page **pages;
++ unsigned long *pfns;
+ void *vaddr;
+ pgoff_t pg;
+
+ dma_resv_assert_held(buf->resv);
+
+- pages = kmalloc_array(ubuf->pagecount, sizeof(*pages), GFP_KERNEL);
+- if (!pages)
++ /**
++ * HVO may free tail pages, so just use pfn to map each folio
++ * into vmalloc area.
++ */
++ pfns = kvmalloc_array(ubuf->pagecount, sizeof(*pfns), GFP_KERNEL);
++ if (!pfns)
+ return -ENOMEM;
+
+- for (pg = 0; pg < ubuf->pagecount; pg++)
+- pages[pg] = &ubuf->folios[pg]->page;
++ for (pg = 0; pg < ubuf->pagecount; pg++) {
++ unsigned long pfn = folio_pfn(ubuf->folios[pg]);
+
+- vaddr = vm_map_ram(pages, ubuf->pagecount, -1);
+- kfree(pages);
++ pfn += ubuf->offsets[pg] >> PAGE_SHIFT;
++ pfns[pg] = pfn;
++ }
++
++ vaddr = vmap_pfn(pfns, ubuf->pagecount, PAGE_KERNEL);
++ kvfree(pfns);
+ if (!vaddr)
+ return -EINVAL;
+
+@@ -196,8 +204,8 @@ static void release_udmabuf(struct dma_buf *buf)
+ put_sg_table(dev, ubuf->sg, DMA_BIDIRECTIONAL);
+
+ unpin_all_folios(&ubuf->unpin_list);
+- kfree(ubuf->offsets);
+- kfree(ubuf->folios);
++ kvfree(ubuf->offsets);
++ kvfree(ubuf->folios);
+ kfree(ubuf);
+ }
+
+@@ -322,14 +330,14 @@ static long udmabuf_create(struct miscdevice *device,
+ if (!ubuf->pagecount)
+ goto err;
+
+- ubuf->folios = kmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios),
+- GFP_KERNEL);
++ ubuf->folios = kvmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios),
++ GFP_KERNEL);
+ if (!ubuf->folios) {
+ ret = -ENOMEM;
+ goto err;
+ }
+- ubuf->offsets = kcalloc(ubuf->pagecount, sizeof(*ubuf->offsets),
+- GFP_KERNEL);
++ ubuf->offsets = kvcalloc(ubuf->pagecount, sizeof(*ubuf->offsets),
++ GFP_KERNEL);
+ if (!ubuf->offsets) {
+ ret = -ENOMEM;
+ goto err;
+@@ -343,7 +351,7 @@ static long udmabuf_create(struct miscdevice *device,
+ goto err;
+
+ pgcnt = list[i].size >> PAGE_SHIFT;
+- folios = kmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
++ folios = kvmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
+ if (!folios) {
+ ret = -ENOMEM;
+ goto err;
+@@ -353,7 +361,7 @@ static long udmabuf_create(struct miscdevice *device,
+ ret = memfd_pin_folios(memfd, list[i].offset, end,
+ folios, pgcnt, &pgoff);
+ if (ret <= 0) {
+- kfree(folios);
++ kvfree(folios);
+ if (!ret)
+ ret = -EINVAL;
+ goto err;
+@@ -382,7 +390,7 @@ static long udmabuf_create(struct miscdevice *device,
+ }
+ }
+
+- kfree(folios);
++ kvfree(folios);
+ fput(memfd);
+ memfd = NULL;
+ }
+@@ -398,8 +406,8 @@ static long udmabuf_create(struct miscdevice *device,
+ if (memfd)
+ fput(memfd);
+ unpin_all_folios(&ubuf->unpin_list);
+- kfree(ubuf->offsets);
+- kfree(ubuf->folios);
++ kvfree(ubuf->offsets);
++ kvfree(ubuf->folios);
+ kfree(ubuf);
+ return ret;
+ }
+diff --git a/drivers/edac/bluefield_edac.c b/drivers/edac/bluefield_edac.c
+index 5b3164560648ee..0e539c1073510a 100644
+--- a/drivers/edac/bluefield_edac.c
++++ b/drivers/edac/bluefield_edac.c
+@@ -180,7 +180,7 @@ static void bluefield_edac_check(struct mem_ctl_info *mci)
+ static void bluefield_edac_init_dimms(struct mem_ctl_info *mci)
+ {
+ struct bluefield_edac_priv *priv = mci->pvt_info;
+- int mem_ctrl_idx = mci->mc_idx;
++ u64 mem_ctrl_idx = mci->mc_idx;
+ struct dimm_info *dimm;
+ u64 smc_info, smc_arg;
+ int is_empty = 1, i;
+diff --git a/drivers/edac/fsl_ddr_edac.c b/drivers/edac/fsl_ddr_edac.c
+index d148d262d0d4de..339d94b3d04c7d 100644
+--- a/drivers/edac/fsl_ddr_edac.c
++++ b/drivers/edac/fsl_ddr_edac.c
+@@ -328,21 +328,25 @@ static void fsl_mc_check(struct mem_ctl_info *mci)
+ * TODO: Add support for 32-bit wide buses
+ */
+ if ((err_detect & DDR_EDE_SBE) && (bus_width == 64)) {
++ u64 cap = (u64)cap_high << 32 | cap_low;
++ u32 s = syndrome;
++
+ sbe_ecc_decode(cap_high, cap_low, syndrome,
+ &bad_data_bit, &bad_ecc_bit);
+
+- if (bad_data_bit != -1)
+- fsl_mc_printk(mci, KERN_ERR,
+- "Faulty Data bit: %d\n", bad_data_bit);
+- if (bad_ecc_bit != -1)
+- fsl_mc_printk(mci, KERN_ERR,
+- "Faulty ECC bit: %d\n", bad_ecc_bit);
++ if (bad_data_bit >= 0) {
++ fsl_mc_printk(mci, KERN_ERR, "Faulty Data bit: %d\n", bad_data_bit);
++ cap ^= 1ULL << bad_data_bit;
++ }
++
++ if (bad_ecc_bit >= 0) {
++ fsl_mc_printk(mci, KERN_ERR, "Faulty ECC bit: %d\n", bad_ecc_bit);
++ s ^= 1 << bad_ecc_bit;
++ }
+
+ fsl_mc_printk(mci, KERN_ERR,
+ "Expected Data / ECC:\t%#8.8x_%08x / %#2.2x\n",
+- cap_high ^ (1 << (bad_data_bit - 32)),
+- cap_low ^ (1 << bad_data_bit),
+- syndrome ^ (1 << bad_ecc_bit));
++ upper_32_bits(cap), lower_32_bits(cap), s);
+ }
+
+ fsl_mc_printk(mci, KERN_ERR,
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index e2a954de913b42..51556c72a96746 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -1036,6 +1036,7 @@ static int __init i10nm_init(void)
+ return -ENODEV;
+
+ cfg = (struct res_config *)id->driver_data;
++ skx_set_res_cfg(cfg);
+ res_cfg = cfg;
+
+ rc = skx_get_hi_lo(0x09a2, off, &tolm, &tohm);
+diff --git a/drivers/edac/igen6_edac.c b/drivers/edac/igen6_edac.c
+index 189a2fc29e74f5..07dacf8c10be3d 100644
+--- a/drivers/edac/igen6_edac.c
++++ b/drivers/edac/igen6_edac.c
+@@ -1245,6 +1245,7 @@ static int igen6_register_mci(int mc, u64 mchbar, struct pci_dev *pdev)
+ imc->mci = mci;
+ return 0;
+ fail3:
++ mci->pvt_info = NULL;
+ kfree(mci->ctl_name);
+ fail2:
+ edac_mc_free(mci);
+@@ -1269,6 +1270,7 @@ static void igen6_unregister_mcis(void)
+
+ edac_mc_del_mc(mci->pdev);
+ kfree(mci->ctl_name);
++ mci->pvt_info = NULL;
+ edac_mc_free(mci);
+ iounmap(imc->window);
+ }
+diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
+index 85713646957b3e..6cf17af7d9112b 100644
+--- a/drivers/edac/skx_common.c
++++ b/drivers/edac/skx_common.c
+@@ -47,6 +47,7 @@ static skx_show_retry_log_f skx_show_retry_rd_err_log;
+ static u64 skx_tolm, skx_tohm;
+ static LIST_HEAD(dev_edac_list);
+ static bool skx_mem_cfg_2lm;
++static struct res_config *skx_res_cfg;
+
+ int skx_adxl_get(void)
+ {
+@@ -119,7 +120,7 @@ void skx_adxl_put(void)
+ }
+ EXPORT_SYMBOL_GPL(skx_adxl_put);
+
+-static bool skx_adxl_decode(struct decoded_addr *res, bool error_in_1st_level_mem)
++static bool skx_adxl_decode(struct decoded_addr *res, enum error_source err_src)
+ {
+ struct skx_dev *d;
+ int i, len = 0;
+@@ -135,8 +136,24 @@ static bool skx_adxl_decode(struct decoded_addr *res, bool error_in_1st_level_me
+ return false;
+ }
+
++ /*
++ * GNR with a Flat2LM memory configuration may mistakenly classify
++ * a near-memory error(DDR5) as a far-memory error(CXL), resulting
++ * in the incorrect selection of decoded ADXL components.
++ * To address this, prefetch the decoded far-memory controller ID
++ * and adjust the error source to near-memory if the far-memory
++ * controller ID is invalid.
++ */
++ if (skx_res_cfg && skx_res_cfg->type == GNR && err_src == ERR_SRC_2LM_FM) {
++ res->imc = (int)adxl_values[component_indices[INDEX_MEMCTRL]];
++ if (res->imc == -1) {
++ err_src = ERR_SRC_2LM_NM;
++ edac_dbg(0, "Adjust the error source to near-memory.\n");
++ }
++ }
++
+ res->socket = (int)adxl_values[component_indices[INDEX_SOCKET]];
+- if (error_in_1st_level_mem) {
++ if (err_src == ERR_SRC_2LM_NM) {
+ res->imc = (adxl_nm_bitmap & BIT_NM_MEMCTRL) ?
+ (int)adxl_values[component_indices[INDEX_NM_MEMCTRL]] : -1;
+ res->channel = (adxl_nm_bitmap & BIT_NM_CHANNEL) ?
+@@ -191,6 +208,12 @@ void skx_set_mem_cfg(bool mem_cfg_2lm)
+ }
+ EXPORT_SYMBOL_GPL(skx_set_mem_cfg);
+
++void skx_set_res_cfg(struct res_config *cfg)
++{
++ skx_res_cfg = cfg;
++}
++EXPORT_SYMBOL_GPL(skx_set_res_cfg);
++
+ void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log)
+ {
+ driver_decode = decode;
+@@ -620,31 +643,27 @@ static void skx_mce_output_error(struct mem_ctl_info *mci,
+ optype, skx_msg);
+ }
+
+-static bool skx_error_in_1st_level_mem(const struct mce *m)
++static enum error_source skx_error_source(const struct mce *m)
+ {
+- u32 errcode;
++ u32 errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK;
+
+- if (!skx_mem_cfg_2lm)
+- return false;
+-
+- errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK;
+-
+- return errcode == MCACOD_EXT_MEM_ERR;
+-}
++ if (errcode != MCACOD_MEM_CTL_ERR && errcode != MCACOD_EXT_MEM_ERR)
++ return ERR_SRC_NOT_MEMORY;
+
+-static bool skx_error_in_mem(const struct mce *m)
+-{
+- u32 errcode;
++ if (!skx_mem_cfg_2lm)
++ return ERR_SRC_1LM;
+
+- errcode = GET_BITFIELD(m->status, 0, 15) & MCACOD_MEM_ERR_MASK;
++ if (errcode == MCACOD_EXT_MEM_ERR)
++ return ERR_SRC_2LM_NM;
+
+- return (errcode == MCACOD_MEM_CTL_ERR || errcode == MCACOD_EXT_MEM_ERR);
++ return ERR_SRC_2LM_FM;
+ }
+
+ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
+ void *data)
+ {
+ struct mce *mce = (struct mce *)data;
++ enum error_source err_src;
+ struct decoded_addr res;
+ struct mem_ctl_info *mci;
+ char *type;
+@@ -652,8 +671,10 @@ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
+ if (mce->kflags & MCE_HANDLED_CEC)
+ return NOTIFY_DONE;
+
++ err_src = skx_error_source(mce);
++
+ /* Ignore unless this is memory related with an address */
+- if (!skx_error_in_mem(mce) || !(mce->status & MCI_STATUS_ADDRV))
++ if (err_src == ERR_SRC_NOT_MEMORY || !(mce->status & MCI_STATUS_ADDRV))
+ return NOTIFY_DONE;
+
+ memset(&res, 0, sizeof(res));
+@@ -667,7 +688,7 @@ int skx_mce_check_error(struct notifier_block *nb, unsigned long val,
+ /* Try driver decoder first */
+ if (!(driver_decode && driver_decode(&res))) {
+ /* Then try firmware decoder (ACPI DSM methods) */
+- if (!(adxl_component_count && skx_adxl_decode(&res, skx_error_in_1st_level_mem(mce))))
++ if (!(adxl_component_count && skx_adxl_decode(&res, err_src)))
+ return NOTIFY_DONE;
+ }
+
+diff --git a/drivers/edac/skx_common.h b/drivers/edac/skx_common.h
+index f945c1bf5ca465..54bba8a62f727c 100644
+--- a/drivers/edac/skx_common.h
++++ b/drivers/edac/skx_common.h
+@@ -146,6 +146,13 @@ enum {
+ INDEX_MAX
+ };
+
++enum error_source {
++ ERR_SRC_1LM,
++ ERR_SRC_2LM_NM,
++ ERR_SRC_2LM_FM,
++ ERR_SRC_NOT_MEMORY,
++};
++
+ #define BIT_NM_MEMCTRL BIT_ULL(INDEX_NM_MEMCTRL)
+ #define BIT_NM_CHANNEL BIT_ULL(INDEX_NM_CHANNEL)
+ #define BIT_NM_DIMM BIT_ULL(INDEX_NM_DIMM)
+@@ -234,6 +241,7 @@ int skx_adxl_get(void);
+ void skx_adxl_put(void);
+ void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log);
+ void skx_set_mem_cfg(bool mem_cfg_2lm);
++void skx_set_res_cfg(struct res_config *cfg);
+
+ int skx_get_src_id(struct skx_dev *d, int off, u8 *id);
+ int skx_get_node_id(struct skx_dev *d, u8 *id);
+diff --git a/drivers/firmware/arm_scpi.c b/drivers/firmware/arm_scpi.c
+index 94a6b4e667de14..f4d47577f83ee7 100644
+--- a/drivers/firmware/arm_scpi.c
++++ b/drivers/firmware/arm_scpi.c
+@@ -630,6 +630,9 @@ static struct scpi_dvfs_info *scpi_dvfs_get_info(u8 domain)
+ if (ret)
+ return ERR_PTR(ret);
+
++ if (!buf.opp_count)
++ return ERR_PTR(-ENOENT);
++
+ info = kmalloc(sizeof(*info), GFP_KERNEL);
+ if (!info)
+ return ERR_PTR(-ENOMEM);
+diff --git a/drivers/firmware/efi/libstub/efi-stub.c b/drivers/firmware/efi/libstub/efi-stub.c
+index 958a680e0660d4..2a1b43f9e0fa2b 100644
+--- a/drivers/firmware/efi/libstub/efi-stub.c
++++ b/drivers/firmware/efi/libstub/efi-stub.c
+@@ -129,7 +129,7 @@ efi_status_t efi_handle_cmdline(efi_loaded_image_t *image, char **cmdline_ptr)
+
+ if (IS_ENABLED(CONFIG_CMDLINE_EXTEND) ||
+ IS_ENABLED(CONFIG_CMDLINE_FORCE) ||
+- cmdline_size == 0) {
++ cmdline[0] == 0) {
+ status = efi_parse_options(CONFIG_CMDLINE);
+ if (status != EFI_SUCCESS) {
+ efi_err("Failed to parse options\n");
+diff --git a/drivers/firmware/efi/tpm.c b/drivers/firmware/efi/tpm.c
+index e8d69bd548f3fe..9c3613e6af158f 100644
+--- a/drivers/firmware/efi/tpm.c
++++ b/drivers/firmware/efi/tpm.c
+@@ -40,7 +40,8 @@ int __init efi_tpm_eventlog_init(void)
+ {
+ struct linux_efi_tpm_eventlog *log_tbl;
+ struct efi_tcg2_final_events_table *final_tbl;
+- int tbl_size;
++ unsigned int tbl_size;
++ int final_tbl_size;
+ int ret = 0;
+
+ if (efi.tpm_log == EFI_INVALID_TABLE_ADDR) {
+@@ -80,26 +81,26 @@ int __init efi_tpm_eventlog_init(void)
+ goto out;
+ }
+
+- tbl_size = 0;
++ final_tbl_size = 0;
+ if (final_tbl->nr_events != 0) {
+ void *events = (void *)efi.tpm_final_log
+ + sizeof(final_tbl->version)
+ + sizeof(final_tbl->nr_events);
+
+- tbl_size = tpm2_calc_event_log_size(events,
+- final_tbl->nr_events,
+- log_tbl->log);
++ final_tbl_size = tpm2_calc_event_log_size(events,
++ final_tbl->nr_events,
++ log_tbl->log);
+ }
+
+- if (tbl_size < 0) {
++ if (final_tbl_size < 0) {
+ pr_err(FW_BUG "Failed to parse event in TPM Final Events Log\n");
+ ret = -EINVAL;
+ goto out_calc;
+ }
+
+ memblock_reserve(efi.tpm_final_log,
+- tbl_size + sizeof(*final_tbl));
+- efi_tpm_final_log_size = tbl_size;
++ final_tbl_size + sizeof(*final_tbl));
++ efi_tpm_final_log_size = final_tbl_size;
+
+ out_calc:
+ early_memunmap(final_tbl, sizeof(*final_tbl));
+diff --git a/drivers/firmware/google/gsmi.c b/drivers/firmware/google/gsmi.c
+index d304913314e494..24e666d5c3d1a2 100644
+--- a/drivers/firmware/google/gsmi.c
++++ b/drivers/firmware/google/gsmi.c
+@@ -918,7 +918,8 @@ static __init int gsmi_init(void)
+ gsmi_dev.pdev = platform_device_register_full(&gsmi_dev_info);
+ if (IS_ERR(gsmi_dev.pdev)) {
+ printk(KERN_ERR "gsmi: unable to register platform device\n");
+- return PTR_ERR(gsmi_dev.pdev);
++ ret = PTR_ERR(gsmi_dev.pdev);
++ goto out_unregister;
+ }
+
+ /* SMI access needs to be serialized */
+@@ -1056,10 +1057,11 @@ static __init int gsmi_init(void)
+ gsmi_buf_free(gsmi_dev.name_buf);
+ kmem_cache_destroy(gsmi_dev.mem_pool);
+ platform_device_unregister(gsmi_dev.pdev);
+- pr_info("gsmi: failed to load: %d\n", ret);
++out_unregister:
+ #ifdef CONFIG_PM
+ platform_driver_unregister(&gsmi_driver_info);
+ #endif
++ pr_info("gsmi: failed to load: %d\n", ret);
+ return ret;
+ }
+
+diff --git a/drivers/gpio/gpio-exar.c b/drivers/gpio/gpio-exar.c
+index 5170fe7599cdf8..d5909a4f0433c1 100644
+--- a/drivers/gpio/gpio-exar.c
++++ b/drivers/gpio/gpio-exar.c
+@@ -99,11 +99,13 @@ static void exar_set_value(struct gpio_chip *chip, unsigned int offset,
+ struct exar_gpio_chip *exar_gpio = gpiochip_get_data(chip);
+ unsigned int addr = exar_offset_to_lvl_addr(exar_gpio, offset);
+ unsigned int bit = exar_offset_to_bit(exar_gpio, offset);
++ unsigned int bit_value = value ? BIT(bit) : 0;
+
+- if (value)
+- regmap_set_bits(exar_gpio->regmap, addr, BIT(bit));
+- else
+- regmap_clear_bits(exar_gpio->regmap, addr, BIT(bit));
++ /*
++ * regmap_write_bits() forces value to be written when an external
++ * pull up/down might otherwise indicate value was already set.
++ */
++ regmap_write_bits(exar_gpio->regmap, addr, BIT(bit), bit_value);
+ }
+
+ static int exar_direction_output(struct gpio_chip *chip, unsigned int offset,
+diff --git a/drivers/gpio/gpio-zevio.c b/drivers/gpio/gpio-zevio.c
+index 2de61337ad3b54..d7230fd83f5d68 100644
+--- a/drivers/gpio/gpio-zevio.c
++++ b/drivers/gpio/gpio-zevio.c
+@@ -11,6 +11,7 @@
+ #include <linux/io.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/platform_device.h>
++#include <linux/property.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+
+@@ -169,6 +170,7 @@ static const struct gpio_chip zevio_gpio_chip = {
+ /* Initialization */
+ static int zevio_gpio_probe(struct platform_device *pdev)
+ {
++ struct device *dev = &pdev->dev;
+ struct zevio_gpio *controller;
+ int status, i;
+
+@@ -180,6 +182,10 @@ static int zevio_gpio_probe(struct platform_device *pdev)
+ controller->chip = zevio_gpio_chip;
+ controller->chip.parent = &pdev->dev;
+
++ controller->chip.label = devm_kasprintf(dev, GFP_KERNEL, "%pfw", dev_fwnode(dev));
++ if (!controller->chip.label)
++ return -ENOMEM;
++
+ controller->regs = devm_platform_ioremap_resource(pdev, 0);
+ if (IS_ERR(controller->regs))
+ return dev_err_probe(&pdev->dev, PTR_ERR(controller->regs),
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index 1cb5a4f1929335..cf5bc77e2362c4 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -152,6 +152,7 @@ config DRM_PANIC_SCREEN
+ config DRM_PANIC_SCREEN_QR_CODE
+ bool "Add a panic screen with a QR code"
+ depends on DRM_PANIC && RUST
++ select ZLIB_DEFLATE
+ help
+ This option adds a QR code generator, and a panic screen with a QR
+ code. The QR code will contain the last lines of kmsg and other debug
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
+index 2ca12717313573..9d6345146495fc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_aca.c
+@@ -158,7 +158,7 @@ static int aca_smu_get_valid_aca_banks(struct amdgpu_device *adev, enum aca_smu_
+ return -EINVAL;
+ }
+
+- if (start + count >= max_count)
++ if (start + count > max_count)
+ return -EINVAL;
+
+ count = min_t(int, count, max_count);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index 4f08b153cb66d8..e41318bfbf4575 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -834,6 +834,9 @@ int amdgpu_amdkfd_unmap_hiq(struct amdgpu_device *adev, u32 doorbell_off,
+ if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+ return -EINVAL;
+
++ if (!kiq_ring->sched.ready || adev->job_hang)
++ return 0;
++
+ ring_funcs = kzalloc(sizeof(*ring_funcs), GFP_KERNEL);
+ if (!ring_funcs)
+ return -ENOMEM;
+@@ -858,8 +861,14 @@ int amdgpu_amdkfd_unmap_hiq(struct amdgpu_device *adev, u32 doorbell_off,
+
+ kiq->pmf->kiq_unmap_queues(kiq_ring, ring, RESET_QUEUES, 0, 0);
+
+- if (kiq_ring->sched.ready && !adev->job_hang)
+- r = amdgpu_ring_test_helper(kiq_ring);
++ /* Submit unmap queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run
++ * this to ensure that unmap queues that is submitted before got
++ * processed successfully before returning.
++ */
++ r = amdgpu_ring_test_helper(kiq_ring);
+
+ spin_unlock(&kiq->ring_lock);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+index 4bd61c169ca8d4..ca8091fd3a24f4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_discovery.c
+@@ -1757,11 +1757,13 @@ int amdgpu_discovery_get_nps_info(struct amdgpu_device *adev,
+
+ switch (le16_to_cpu(nps_info->v1.header.version_major)) {
+ case 1:
++ mem_ranges = kvcalloc(nps_info->v1.count,
++ sizeof(*mem_ranges),
++ GFP_KERNEL);
++ if (!mem_ranges)
++ return -ENOMEM;
+ *nps_type = nps_info->v1.nps_type;
+ *range_cnt = nps_info->v1.count;
+- mem_ranges = kvzalloc(
+- *range_cnt * sizeof(struct amdgpu_gmc_memrange),
+- GFP_KERNEL);
+ for (i = 0; i < *range_cnt; i++) {
+ mem_ranges[i].base_address =
+ nps_info->v1.instance_info[i].base_address;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index f1ffab5a1eaed9..156abd2ba5a6c6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -525,6 +525,17 @@ int amdgpu_gfx_disable_kcq(struct amdgpu_device *adev, int xcc_id)
+ if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+ return -EINVAL;
+
++ if (!kiq_ring->sched.ready || adev->job_hang)
++ return 0;
++ /**
++ * This is workaround: only skip kiq_ring test
++ * during ras recovery in suspend stage for gfx9.4.3
++ */
++ if ((amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 3) ||
++ amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 4)) &&
++ amdgpu_ras_in_recovery(adev))
++ return 0;
++
+ spin_lock(&kiq->ring_lock);
+ if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size *
+ adev->gfx.num_compute_rings)) {
+@@ -538,20 +549,15 @@ int amdgpu_gfx_disable_kcq(struct amdgpu_device *adev, int xcc_id)
+ &adev->gfx.compute_ring[j],
+ RESET_QUEUES, 0, 0);
+ }
+-
+- /**
+- * This is workaround: only skip kiq_ring test
+- * during ras recovery in suspend stage for gfx9.4.3
++ /* Submit unmap queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run
++ * this to ensure that the unmap queue packets submitted earlier were
++ * processed successfully before returning.
+ */
+- if ((amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 3) ||
+- amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 4, 4)) &&
+- amdgpu_ras_in_recovery(adev)) {
+- spin_unlock(&kiq->ring_lock);
+- return 0;
+- }
++ r = amdgpu_ring_test_helper(kiq_ring);
+
+- if (kiq_ring->sched.ready && !adev->job_hang)
+- r = amdgpu_ring_test_helper(kiq_ring);
+ spin_unlock(&kiq->ring_lock);
+
+ return r;
+@@ -579,8 +585,11 @@ int amdgpu_gfx_disable_kgq(struct amdgpu_device *adev, int xcc_id)
+ if (!kiq->pmf || !kiq->pmf->kiq_unmap_queues)
+ return -EINVAL;
+
+- spin_lock(&kiq->ring_lock);
++ if (!adev->gfx.kiq[0].ring.sched.ready || adev->job_hang)
++ return 0;
++
+ if (amdgpu_gfx_is_master_xcc(adev, xcc_id)) {
++ spin_lock(&kiq->ring_lock);
+ if (amdgpu_ring_alloc(kiq_ring, kiq->pmf->unmap_queues_size *
+ adev->gfx.num_gfx_rings)) {
+ spin_unlock(&kiq->ring_lock);
+@@ -593,11 +602,17 @@ int amdgpu_gfx_disable_kgq(struct amdgpu_device *adev, int xcc_id)
+ &adev->gfx.gfx_ring[j],
+ PREEMPT_QUEUES, 0, 0);
+ }
+- }
++ /* Submit unmap queue packet */
++ amdgpu_ring_commit(kiq_ring);
+
+- if (adev->gfx.kiq[0].ring.sched.ready && !adev->job_hang)
++ /*
++ * Ring test will do a basic scratch register change check.
++ * Just run this to ensure that the unmap queue packets submitted
++ * earlier were processed successfully before returning.
++ */
+ r = amdgpu_ring_test_helper(kiq_ring);
+- spin_unlock(&kiq->ring_lock);
++ spin_unlock(&kiq->ring_lock);
++ }
+
+ return r;
+ }
+@@ -702,7 +717,13 @@ int amdgpu_gfx_enable_kcq(struct amdgpu_device *adev, int xcc_id)
+ kiq->pmf->kiq_map_queues(kiq_ring,
+ &adev->gfx.compute_ring[j]);
+ }
+-
++ /* Submit map queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run
++ * this to ensure that the map queue packets submitted earlier were
++ * processed successfully before returning.
++ */
+ r = amdgpu_ring_test_helper(kiq_ring);
+ spin_unlock(&kiq->ring_lock);
+ if (r)
+@@ -753,7 +774,13 @@ int amdgpu_gfx_enable_kgq(struct amdgpu_device *adev, int xcc_id)
+ &adev->gfx.gfx_ring[j]);
+ }
+ }
+-
++ /* Submit map queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run
++ * this to ensure that the map queue packets submitted earlier were
++ * processed successfully before returning.
++ */
+ r = amdgpu_ring_test_helper(kiq_ring);
+ spin_unlock(&kiq->ring_lock);
+ if (r)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+index bc8295812cc842..9d741695ca07d6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+@@ -4823,6 +4823,13 @@ static int gfx_v8_0_kcq_disable(struct amdgpu_device *adev)
+ amdgpu_ring_write(kiq_ring, 0);
+ amdgpu_ring_write(kiq_ring, 0);
+ }
++ /* Submit unmap queue packet */
++ amdgpu_ring_commit(kiq_ring);
++ /*
++ * Ring test will do a basic scratch register change check. Just run
++ * this to ensure that the unmap queue packets submitted earlier were
++ * processed successfully before returning.
++ */
+ r = amdgpu_ring_test_helper(kiq_ring);
+ if (r)
+ DRM_ERROR("KCQ disable failed\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 23f0573ae47b33..785a343a95f0ff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -2418,6 +2418,8 @@ static int gfx_v9_0_sw_fini(void *handle)
+ amdgpu_gfx_kiq_free_ring(&adev->gfx.kiq[0].ring);
+ amdgpu_gfx_kiq_fini(adev, 0);
+
++ amdgpu_gfx_cleaner_shader_sw_fini(adev);
++
+ gfx_v9_0_mec_fini(adev);
+ amdgpu_bo_free_kernel(&adev->gfx.rlc.clear_state_obj,
+ &adev->gfx.rlc.clear_state_gpu_addr,
+diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+index 86958cb2c2ab2b..aa5815bd633eba 100644
+--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v4_0_3.c
+@@ -674,11 +674,12 @@ void jpeg_v4_0_3_dec_ring_insert_start(struct amdgpu_ring *ring)
+ amdgpu_ring_write(ring, PACKETJ(regUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET,
+ 0, 0, PACKETJ_TYPE0));
+ amdgpu_ring_write(ring, 0x62a04); /* PCTL0_MMHUB_DEEPSLEEP_IB */
+- }
+
+- amdgpu_ring_write(ring, PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR,
+- 0, 0, PACKETJ_TYPE0));
+- amdgpu_ring_write(ring, 0x80004000);
++ amdgpu_ring_write(ring,
++ PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR, 0,
++ 0, PACKETJ_TYPE0));
++ amdgpu_ring_write(ring, 0x80004000);
++ }
+ }
+
+ /**
+@@ -694,11 +695,12 @@ void jpeg_v4_0_3_dec_ring_insert_end(struct amdgpu_ring *ring)
+ amdgpu_ring_write(ring, PACKETJ(regUVD_JRBC_EXTERNAL_REG_INTERNAL_OFFSET,
+ 0, 0, PACKETJ_TYPE0));
+ amdgpu_ring_write(ring, 0x62a04);
+- }
+
+- amdgpu_ring_write(ring, PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR,
+- 0, 0, PACKETJ_TYPE0));
+- amdgpu_ring_write(ring, 0x00004000);
++ amdgpu_ring_write(ring,
++ PACKETJ(JRBC_DEC_EXTERNAL_REG_WRITE_ADDR, 0,
++ 0, PACKETJ_TYPE0));
++ amdgpu_ring_write(ring, 0x00004000);
++ }
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index d4aa843aacfdd9..ff34bb1ac9db79 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -271,11 +271,9 @@ static int kfd_get_cu_occupancy(struct attribute *attr, char *buffer)
+ struct kfd_process *proc = NULL;
+ struct kfd_process_device *pdd = NULL;
+ int i;
+- struct kfd_cu_occupancy cu_occupancy[AMDGPU_MAX_QUEUES];
++ struct kfd_cu_occupancy *cu_occupancy;
+ u32 queue_format;
+
+- memset(cu_occupancy, 0x0, sizeof(cu_occupancy));
+-
+ pdd = container_of(attr, struct kfd_process_device, attr_cu_occupancy);
+ dev = pdd->dev;
+ if (dev->kfd2kgd->get_cu_occupancy == NULL)
+@@ -293,6 +291,10 @@ static int kfd_get_cu_occupancy(struct attribute *attr, char *buffer)
+ wave_cnt = 0;
+ max_waves_per_cu = 0;
+
++ cu_occupancy = kcalloc(AMDGPU_MAX_QUEUES, sizeof(*cu_occupancy), GFP_KERNEL);
++ if (!cu_occupancy)
++ return -ENOMEM;
++
+ /*
+ * For GFX 9.4.3, fetch the CU occupancy from the first XCC in the partition.
+ * For AQL queues, because of cooperative dispatch we multiply the wave count
+@@ -318,6 +320,7 @@ static int kfd_get_cu_occupancy(struct attribute *attr, char *buffer)
+
+ /* Translate wave count to number of compute units */
+ cu_cnt = (wave_cnt + (max_waves_per_cu - 1)) / max_waves_per_cu;
++ kfree(cu_occupancy);
+ return snprintf(buffer, PAGE_SIZE, "%d\n", cu_cnt);
+ }
+
+@@ -338,8 +341,8 @@ static ssize_t kfd_procfs_show(struct kobject *kobj, struct attribute *attr,
+ attr_sdma);
+ struct kfd_sdma_activity_handler_workarea sdma_activity_work_handler;
+
+- INIT_WORK(&sdma_activity_work_handler.sdma_activity_work,
+- kfd_sdma_activity_worker);
++ INIT_WORK_ONSTACK(&sdma_activity_work_handler.sdma_activity_work,
++ kfd_sdma_activity_worker);
+
+ sdma_activity_work_handler.pdd = pdd;
+ sdma_activity_work_handler.sdma_activity_counter = 0;
+@@ -347,6 +350,7 @@ static ssize_t kfd_procfs_show(struct kobject *kobj, struct attribute *attr,
+ schedule_work(&sdma_activity_work_handler.sdma_activity_work);
+
+ flush_work(&sdma_activity_work_handler.sdma_activity_work);
++ destroy_work_on_stack(&sdma_activity_work_handler.sdma_activity_work);
+
+ return snprintf(buffer, PAGE_SIZE, "%llu\n",
+ (sdma_activity_work_handler.sdma_activity_counter)/
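Both kfd_process.c hunks above are stack-related: the large cu_occupancy array moves to the heap (AMDGPU_MAX_QUEUES entries would otherwise eat a large chunk of the kernel stack), and the on-stack work item switches to the _ONSTACK initializer paired with destroy_work_on_stack(), which keeps the debug-objects machinery happy about a work_struct living in a stack frame. A sketch of the on-stack work pattern, with a hypothetical payload:

    #include <linux/workqueue.h>

    struct onstack_ctx {
            struct work_struct work;
            int result;
    };

    static void compute_result(struct work_struct *work)
    {
            struct onstack_ctx *ctx = container_of(work, struct onstack_ctx, work);

            ctx->result = 42;       /* hypothetical payload */
    }

    static int run_onstack_work(void)
    {
            struct onstack_ctx ctx = { .result = 0 };

            INIT_WORK_ONSTACK(&ctx.work, compute_result);
            schedule_work(&ctx.work);
            flush_work(&ctx.work);  /* must finish before the stack frame dies */
            destroy_work_on_stack(&ctx.work);
            return ctx.result;
    }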
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 8d97f17ffe662a..24fbde7dd1c425 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1696,6 +1696,26 @@ dm_allocate_gpu_mem(
+ return da->cpu_ptr;
+ }
+
++void
++dm_free_gpu_mem(
++ struct amdgpu_device *adev,
++ enum dc_gpu_mem_alloc_type type,
++ void *pvMem)
++{
++ struct dal_allocation *da;
++
++ /* walk the da list in DM */
++ list_for_each_entry(da, &adev->dm.da_list, list) {
++ if (pvMem == da->cpu_ptr) {
++ amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr);
++ list_del(&da->list);
++ kfree(da);
++ break;
++ }
++ }
++
++}
++
+ static enum dmub_status
+ dm_dmub_send_vbios_gpint_command(struct amdgpu_device *adev,
+ enum dmub_gpint_command command_code,
+@@ -1762,16 +1782,20 @@ static struct dml2_soc_bb *dm_dmub_get_vbios_bounding_box(struct amdgpu_device *
+ /* Send the chunk */
+ ret = dm_dmub_send_vbios_gpint_command(adev, send_addrs[i], chunk, 30000);
+ if (ret != DMUB_STATUS_OK)
+- /* No need to free bb here since it shall be done in dm_sw_fini() */
+- return NULL;
++ goto free_bb;
+ }
+
+ /* Now ask DMUB to copy the bb */
+ ret = dm_dmub_send_vbios_gpint_command(adev, DMUB_GPINT__BB_COPY, 1, 200000);
+ if (ret != DMUB_STATUS_OK)
+- return NULL;
++ goto free_bb;
+
+ return bb;
++
++free_bb:
++ dm_free_gpu_mem(adev, DC_MEM_ALLOC_TYPE_GART, (void *) bb);
++ return NULL;
++
+ }
+
+ static enum dmub_ips_disable_type dm_get_default_ips_mode(
+@@ -2541,11 +2565,11 @@ static int dm_sw_fini(void *handle)
+ amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr);
+ list_del(&da->list);
+ kfree(da);
++ adev->dm.bb_from_dmub = NULL;
+ break;
+ }
+ }
+
+- adev->dm.bb_from_dmub = NULL;
+
+ kfree(adev->dm.dmub_fb_info);
+ adev->dm.dmub_fb_info = NULL;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index 90dfffec33cf49..a0bc2c0ac04d96 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -1004,6 +1004,9 @@ void *dm_allocate_gpu_mem(struct amdgpu_device *adev,
+ enum dc_gpu_mem_alloc_type type,
+ size_t size,
+ long long *addr);
++void dm_free_gpu_mem(struct amdgpu_device *adev,
++ enum dc_gpu_mem_alloc_type type,
++ void *addr);
+
+ bool amdgpu_dm_is_headless(struct amdgpu_device *adev);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index 288be19db7c1b8..9be87b53251739 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -35,8 +35,8 @@
+ #include "amdgpu_dm_trace.h"
+ #include "amdgpu_dm_debugfs.h"
+
+-#define HPD_DETECTION_PERIOD_uS 5000000
+-#define HPD_DETECTION_TIME_uS 1000
++#define HPD_DETECTION_PERIOD_uS 2000000
++#define HPD_DETECTION_TIME_uS 100000
+
+ void amdgpu_dm_crtc_handle_vblank(struct amdgpu_crtc *acrtc)
+ {
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+index eea317dcbe8c34..9752548cc5b21d 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_helpers.c
+@@ -1055,17 +1055,8 @@ void dm_helpers_free_gpu_mem(
+ void *pvMem)
+ {
+ struct amdgpu_device *adev = ctx->driver_context;
+- struct dal_allocation *da;
+-
+- /* walk the da list in DM */
+- list_for_each_entry(da, &adev->dm.da_list, list) {
+- if (pvMem == da->cpu_ptr) {
+- amdgpu_bo_free_kernel(&da->bo, &da->gpu_addr, &da->cpu_ptr);
+- list_del(&da->list);
+- kfree(da);
+- break;
+- }
+- }
++
++ dm_free_gpu_mem(adev, type, pvMem);
+ }
+
+ bool dm_helpers_dmub_outbox_interrupt_control(struct dc_context *ctx, bool enable)
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index a08e8a0b696c60..32b025c92c63cf 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -1120,6 +1120,7 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ int i, k, ret;
+ bool debugfs_overwrite = false;
+ uint16_t fec_overhead_multiplier_x1000 = get_fec_overhead_multiplier(dc_link);
++ struct drm_connector_state *new_conn_state;
+
+ memset(params, 0, sizeof(params));
+
+@@ -1127,7 +1128,7 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ return PTR_ERR(mst_state);
+
+ /* Set up params */
+- DRM_DEBUG_DRIVER("%s: MST_DSC Set up params for %d streams\n", __func__, dc_state->stream_count);
++ DRM_DEBUG_DRIVER("%s: MST_DSC Try to set up params from %d streams\n", __func__, dc_state->stream_count);
+ for (i = 0; i < dc_state->stream_count; i++) {
+ struct dc_dsc_policy dsc_policy = {0};
+
+@@ -1143,6 +1144,14 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ if (!aconnector->mst_output_port)
+ continue;
+
++ new_conn_state = drm_atomic_get_new_connector_state(state, &aconnector->base);
++
++ if (!new_conn_state) {
++ DRM_DEBUG_DRIVER("%s:%d MST_DSC Skip the stream 0x%p with invalid new_conn_state\n",
++ __func__, __LINE__, stream);
++ continue;
++ }
++
+ stream->timing.flags.DSC = 0;
+
+ params[count].timing = &stream->timing;
+@@ -1175,6 +1184,8 @@ static int compute_mst_dsc_configs_for_link(struct drm_atomic_state *state,
+ count++;
+ }
+
++ DRM_DEBUG_DRIVER("%s: MST_DSC Params set up for %d streams\n", __func__, count);
++
+ if (count == 0) {
+ ASSERT(0);
+ return 0;
+@@ -1302,7 +1313,7 @@ static bool is_dsc_need_re_compute(
+ continue;
+
+ aconnector = (struct amdgpu_dm_connector *) stream->dm_stream_context;
+- if (!aconnector || !aconnector->dsc_aux)
++ if (!aconnector)
+ continue;
+
+ stream_on_link[new_stream_on_link_num] = aconnector;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+index 7ee2be8f82c467..bb766c2a74176a 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_hw_sequencer.c
+@@ -881,6 +881,9 @@ void hwss_setup_dpp(union block_sequence_params *params)
+ struct dpp *dpp = pipe_ctx->plane_res.dpp;
+ struct dc_plane_state *plane_state = pipe_ctx->plane_state;
+
++ if (!plane_state)
++ return;
++
+ if (dpp && dpp->funcs->dpp_setup) {
+ // program the input csc
+ dpp->funcs->dpp_setup(dpp,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index a80c0858293207..36d12db8d02256 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -1923,9 +1923,9 @@ static void dcn20_program_pipe(
+ dc->res_pool->hubbub, pipe_ctx->plane_res.hubp->inst, pipe_ctx->hubp_regs.det_size);
+ }
+
+- if (pipe_ctx->update_flags.raw ||
+- (pipe_ctx->plane_state && pipe_ctx->plane_state->update_flags.raw) ||
+- pipe_ctx->stream->update_flags.raw)
++ if (pipe_ctx->plane_state && (pipe_ctx->update_flags.raw ||
++ pipe_ctx->plane_state->update_flags.raw ||
++ pipe_ctx->stream->update_flags.raw))
+ dcn20_update_dchubp_dpp(dc, pipe_ctx, context);
+
+ if (pipe_ctx->plane_state && (pipe_ctx->update_flags.bits.enable ||
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index a2e9bb485c366e..a2675b121fe44b 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -2551,6 +2551,8 @@ static int __maybe_unused anx7625_runtime_pm_suspend(struct device *dev)
+ mutex_lock(&ctx->lock);
+
+ anx7625_stop_dp_work(ctx);
++ if (!ctx->pdata.panel_bridge)
++ anx7625_remove_edid(ctx);
+ anx7625_power_standby(ctx);
+
+ mutex_unlock(&ctx->lock);
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index 87b8545fccc0af..e3a9832c742cb1 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -3107,6 +3107,8 @@ static __maybe_unused int it6505_bridge_suspend(struct device *dev)
+ {
+ struct it6505 *it6505 = dev_get_drvdata(dev);
+
++ it6505_remove_edid(it6505);
++
+ return it6505_poweroff(it6505);
+ }
+
+diff --git a/drivers/gpu/drm/bridge/tc358767.c b/drivers/gpu/drm/bridge/tc358767.c
+index f3afdab55c113e..47189587643a15 100644
+--- a/drivers/gpu/drm/bridge/tc358767.c
++++ b/drivers/gpu/drm/bridge/tc358767.c
+@@ -1714,6 +1714,13 @@ static const struct drm_edid *tc_edid_read(struct drm_bridge *bridge,
+ struct drm_connector *connector)
+ {
+ struct tc_data *tc = bridge_to_tc(bridge);
++ int ret;
++
++ ret = tc_get_display_props(tc);
++ if (ret < 0) {
++ dev_err(tc->dev, "failed to read display props: %d\n", ret);
++ return 0;
++ }
+
+ return drm_edid_read_ddc(connector, &tc->aux.ddc);
+ }
+diff --git a/drivers/gpu/drm/drm_file.c b/drivers/gpu/drm/drm_file.c
+index ad1dc638c83bb1..ce82c9451dfe7d 100644
+--- a/drivers/gpu/drm/drm_file.c
++++ b/drivers/gpu/drm/drm_file.c
+@@ -129,7 +129,7 @@ bool drm_dev_needs_global_mutex(struct drm_device *dev)
+ */
+ struct drm_file *drm_file_alloc(struct drm_minor *minor)
+ {
+- static atomic64_t ident = ATOMIC_INIT(0);
++ static atomic64_t ident = ATOMIC64_INIT(0);
+ struct drm_device *dev = minor->dev;
+ struct drm_file *file;
+ int ret;
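ATOMIC_INIT() is the initializer for atomic_t; the matching one for atomic64_t is ATOMIC64_INIT(). In current kernels both happen to expand to the same brace initializer, so this reads as a type-correctness fix rather than a behavioural one, but it keeps the declaration honest. The corrected idiom:

    #include <linux/atomic.h>

    /* Use the initializer that matches the type; ATOMIC_INIT() is for atomic_t. */
    static atomic64_t next_ident = ATOMIC64_INIT(0);

    static u64 alloc_ident(void)
    {
            /* Unique, monotonically increasing id; the first caller gets 1. */
            return atomic64_inc_return(&next_ident);
    }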
+diff --git a/drivers/gpu/drm/drm_mm.c b/drivers/gpu/drm/drm_mm.c
+index 5ace481c190117..1ed68d3cd80bad 100644
+--- a/drivers/gpu/drm/drm_mm.c
++++ b/drivers/gpu/drm/drm_mm.c
+@@ -151,7 +151,7 @@ static void show_leaks(struct drm_mm *mm) { }
+
+ INTERVAL_TREE_DEFINE(struct drm_mm_node, rb,
+ u64, __subtree_last,
+- START, LAST, static inline, drm_mm_interval_tree)
++ START, LAST, static inline __maybe_unused, drm_mm_interval_tree)
+
+ struct drm_mm_node *
+ __drm_mm_interval_first(const struct drm_mm *mm, u64 start, u64 last)
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.c b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+index 6500f3999c5fa5..19ec67a5a918e3 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_drv.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.c
+@@ -538,6 +538,16 @@ static int etnaviv_bind(struct device *dev)
+ priv->num_gpus = 0;
+ priv->shm_gfp_mask = GFP_HIGHUSER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;
+
++ /*
++ * If the GPU is part of a system with DMA addressing limitations,
++ * request pages for our SHM backend buffers from the DMA32 zone to
++ * hopefully avoid performance killing SWIOTLB bounce buffering.
++ */
++ if (dma_addressing_limited(dev)) {
++ priv->shm_gfp_mask |= GFP_DMA32;
++ priv->shm_gfp_mask &= ~__GFP_HIGHMEM;
++ }
++
+ priv->cmdbuf_suballoc = etnaviv_cmdbuf_suballoc_new(drm->dev);
+ if (IS_ERR(priv->cmdbuf_suballoc)) {
+ dev_err(drm->dev, "Failed to create cmdbuf suballocator\n");
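Moving this into etnaviv_bind() (the matching removal from etnaviv_gpu_init() follows below) fixes up the mask once for the whole device rather than per GPU core, and the hunk also clears __GFP_HIGHMEM: GFP_DMA32 and __GFP_HIGHMEM are mutually exclusive zone modifiers, and GFP_HIGHUSER includes the latter. A sketch of the mask adjustment:

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    static gfp_t shmem_gfp_for_dev(struct device *dev)
    {
            gfp_t gfp = GFP_HIGHUSER | __GFP_RETRY_MAYFAIL | __GFP_NOWARN;

            if (dma_addressing_limited(dev)) {
                    /* Restrict to the DMA32 zone; HIGHMEM must go, since the
                     * two zone modifiers cannot be combined. */
                    gfp |= GFP_DMA32;
                    gfp &= ~__GFP_HIGHMEM;
            }
            return gfp;
    }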
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+index 7c7f97793ddd0c..df0bc828a23483 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+@@ -839,14 +839,6 @@ int etnaviv_gpu_init(struct etnaviv_gpu *gpu)
+ if (ret)
+ goto fail;
+
+- /*
+- * If the GPU is part of a system with DMA addressing limitations,
+- * request pages for our SHM backend buffers from the DMA32 zone to
+- * hopefully avoid performance killing SWIOTLB bounce buffering.
+- */
+- if (dma_addressing_limited(gpu->dev))
+- priv->shm_gfp_mask |= GFP_DMA32;
+-
+ /* Create buffer: */
+ ret = etnaviv_cmdbuf_init(priv->cmdbuf_suballoc, &gpu->buffer,
+ PAGE_SIZE);
+@@ -1330,6 +1322,8 @@ static void sync_point_perfmon_sample_pre(struct etnaviv_gpu *gpu,
+ {
+ u32 val;
+
++ mutex_lock(&gpu->lock);
++
+ /* disable clock gating */
+ val = gpu_read_power(gpu, VIVS_PM_POWER_CONTROLS);
+ val &= ~VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING;
+@@ -1341,6 +1335,8 @@ static void sync_point_perfmon_sample_pre(struct etnaviv_gpu *gpu,
+ gpu_write(gpu, VIVS_HI_CLOCK_CONTROL, val);
+
+ sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_PRE);
++
++ mutex_unlock(&gpu->lock);
+ }
+
+ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
+@@ -1350,13 +1346,9 @@ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
+ unsigned int i;
+ u32 val;
+
+- sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_POST);
+-
+- for (i = 0; i < submit->nr_pmrs; i++) {
+- const struct etnaviv_perfmon_request *pmr = submit->pmrs + i;
++ mutex_lock(&gpu->lock);
+
+- *pmr->bo_vma = pmr->sequence;
+- }
++ sync_point_perfmon_sample(gpu, event, ETNA_PM_PROCESS_POST);
+
+ /* disable debug register */
+ val = gpu_read(gpu, VIVS_HI_CLOCK_CONTROL);
+@@ -1367,6 +1359,14 @@ static void sync_point_perfmon_sample_post(struct etnaviv_gpu *gpu,
+ val = gpu_read_power(gpu, VIVS_PM_POWER_CONTROLS);
+ val |= VIVS_PM_POWER_CONTROLS_ENABLE_MODULE_CLOCK_GATING;
+ gpu_write_power(gpu, VIVS_PM_POWER_CONTROLS, val);
++
++ mutex_unlock(&gpu->lock);
++
++ for (i = 0; i < submit->nr_pmrs; i++) {
++ const struct etnaviv_perfmon_request *pmr = submit->pmrs + i;
++
++ *pmr->bo_vma = pmr->sequence;
++ }
+ }
+
+
+diff --git a/drivers/gpu/drm/fsl-dcu/Kconfig b/drivers/gpu/drm/fsl-dcu/Kconfig
+index 5ca71ef8732590..c9ee98693b48a4 100644
+--- a/drivers/gpu/drm/fsl-dcu/Kconfig
++++ b/drivers/gpu/drm/fsl-dcu/Kconfig
+@@ -8,6 +8,7 @@ config DRM_FSL_DCU
+ select DRM_PANEL
+ select REGMAP_MMIO
+ select VIDEOMODE_HELPERS
++ select MFD_SYSCON if SOC_LS1021A
+ help
+ Choose this option if you have an Freescale DCU chipset.
+ If M is selected the module will be called fsl-dcu-drm.
+diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
+index ab6c0c6cd0e2e3..c4c3d41ee53097 100644
+--- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
++++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.c
+@@ -100,6 +100,7 @@ static void fsl_dcu_irq_uninstall(struct drm_device *dev)
+ static int fsl_dcu_load(struct drm_device *dev, unsigned long flags)
+ {
+ struct fsl_dcu_drm_device *fsl_dev = dev->dev_private;
++ struct regmap *scfg;
+ int ret;
+
+ ret = fsl_dcu_drm_modeset_init(fsl_dev);
+@@ -108,6 +109,20 @@ static int fsl_dcu_load(struct drm_device *dev, unsigned long flags)
+ return ret;
+ }
+
++ scfg = syscon_regmap_lookup_by_compatible("fsl,ls1021a-scfg");
++ if (PTR_ERR(scfg) != -ENODEV) {
++ /*
++ * For simplicity, enable the PIXCLK unconditionally,
++ * resulting in increased power consumption. Disabling
++ * the clock in PM or on unload could be implemented as
++ * a future improvement.
++ */
++ ret = regmap_update_bits(scfg, SCFG_PIXCLKCR, SCFG_PIXCLKCR_PXCEN,
++ SCFG_PIXCLKCR_PXCEN);
++ if (ret < 0)
++ return dev_err_probe(dev->dev, ret, "failed to enable pixclk\n");
++ }
++
+ ret = drm_vblank_init(dev, dev->mode_config.num_crtc);
+ if (ret < 0) {
+ dev_err(dev->dev, "failed to initialize vblank\n");
+diff --git a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h
+index e2049a0e8a92a5..566396013c04a5 100644
+--- a/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h
++++ b/drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_drv.h
+@@ -160,6 +160,9 @@
+ #define FSL_DCU_ARGB4444 12
+ #define FSL_DCU_YUV422 14
+
++#define SCFG_PIXCLKCR 0x28
++#define SCFG_PIXCLKCR_PXCEN BIT(31)
++
+ #define VF610_LAYER_REG_NUM 9
+ #define LS1021A_LAYER_REG_NUM 10
+
+diff --git a/drivers/gpu/drm/imagination/pvr_ccb.c b/drivers/gpu/drm/imagination/pvr_ccb.c
+index 4deeac7ed40a4d..2bbdc05a3b9779 100644
+--- a/drivers/gpu/drm/imagination/pvr_ccb.c
++++ b/drivers/gpu/drm/imagination/pvr_ccb.c
+@@ -321,7 +321,7 @@ static int pvr_kccb_reserve_slot_sync(struct pvr_device *pvr_dev)
+ bool reserved = false;
+ u32 retries = 0;
+
+- while ((jiffies - start_timestamp) < (u32)RESERVE_SLOT_TIMEOUT ||
++ while (time_before(jiffies, start_timestamp + RESERVE_SLOT_TIMEOUT) ||
+ retries < RESERVE_SLOT_MIN_RETRIES) {
+ reserved = pvr_kccb_try_reserve_slot(pvr_dev);
+ if (reserved)
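The original condition cast the jiffies delta to u32 and compared it directly, which misbehaves once jiffies wraps; time_before() does the comparison with wrap-safe arithmetic. The canonical deadline-polling shape, as a sketch:

    #include <linux/jiffies.h>
    #include <linux/processor.h>

    /* Retry try_once() until it succeeds or the deadline passes. */
    static bool poll_until_timeout(bool (*try_once)(void *), void *arg,
                                   unsigned long timeout_jiffies)
    {
            unsigned long deadline = jiffies + timeout_jiffies;

            while (time_before(jiffies, deadline)) {
                    if (try_once(arg))
                            return true;
                    cpu_relax();
            }
            return false;
    }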
+diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
+index 7bd6ba4c6e8ab6..363f885a709826 100644
+--- a/drivers/gpu/drm/imagination/pvr_vm.c
++++ b/drivers/gpu/drm/imagination/pvr_vm.c
+@@ -654,9 +654,7 @@ pvr_vm_context_lookup(struct pvr_file *pvr_file, u32 handle)
+
+ xa_lock(&pvr_file->vm_ctx_handles);
+ vm_ctx = xa_load(&pvr_file->vm_ctx_handles, handle);
+- if (vm_ctx)
+- kref_get(&vm_ctx->ref_count);
+-
++ pvr_vm_context_get(vm_ctx);
+ xa_unlock(&pvr_file->vm_ctx_handles);
+
+ return vm_ctx;
+diff --git a/drivers/gpu/drm/imx/dcss/dcss-crtc.c b/drivers/gpu/drm/imx/dcss/dcss-crtc.c
+index 31267c00782fc1..af91e45b5d13b7 100644
+--- a/drivers/gpu/drm/imx/dcss/dcss-crtc.c
++++ b/drivers/gpu/drm/imx/dcss/dcss-crtc.c
+@@ -206,15 +206,13 @@ int dcss_crtc_init(struct dcss_crtc *crtc, struct drm_device *drm)
+ if (crtc->irq < 0)
+ return crtc->irq;
+
+- ret = request_irq(crtc->irq, dcss_crtc_irq_handler,
+- 0, "dcss_drm", crtc);
++ ret = request_irq(crtc->irq, dcss_crtc_irq_handler, IRQF_NO_AUTOEN,
++ "dcss_drm", crtc);
+ if (ret) {
+ dev_err(dcss->dev, "irq request failed with %d.\n", ret);
+ return ret;
+ }
+
+- disable_irq(crtc->irq);
+-
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c
+index ef29c9a61a4617..99db53e167bd02 100644
+--- a/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c
++++ b/drivers/gpu/drm/imx/ipuv3/ipuv3-crtc.c
+@@ -410,14 +410,12 @@ static int ipu_drm_bind(struct device *dev, struct device *master, void *data)
+ }
+
+ ipu_crtc->irq = ipu_plane_irq(ipu_crtc->plane[0]);
+- ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler, 0,
+- "imx_drm", ipu_crtc);
++ ret = devm_request_irq(ipu_crtc->dev, ipu_crtc->irq, ipu_irq_handler,
++ IRQF_NO_AUTOEN, "imx_drm", ipu_crtc);
+ if (ret < 0) {
+ dev_err(ipu_crtc->dev, "irq request failed with %d.\n", ret);
+ return ret;
+ }
+- /* Only enable IRQ when we actually need it to trigger work. */
+- disable_irq(ipu_crtc->irq);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index 37927bdd6fbed8..14db7376c712d1 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -1522,15 +1522,13 @@ static int a6xx_gmu_get_irq(struct a6xx_gmu *gmu, struct platform_device *pdev,
+
+ irq = platform_get_irq_byname(pdev, name);
+
+- ret = request_irq(irq, handler, IRQF_TRIGGER_HIGH, name, gmu);
++ ret = request_irq(irq, handler, IRQF_TRIGGER_HIGH | IRQF_NO_AUTOEN, name, gmu);
+ if (ret) {
+ DRM_DEV_ERROR(&pdev->dev, "Unable to get interrupt %s %d\n",
+ name, ret);
+ return ret;
+ }
+
+- disable_irq(irq);
+-
+ return irq;
+ }
+
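This is the same conversion as in the two i.MX hunks above: request_irq() normally enables the line immediately, so the old request-then-disable_irq() sequence left a window in which the handler could run before the driver was ready. IRQF_NO_AUTOEN registers the handler with the line still masked; the driver calls enable_irq() once it actually wants interrupts. Sketch:

    #include <linux/interrupt.h>

    static irqreturn_t demo_irq_handler(int irq, void *data)
    {
            return IRQ_HANDLED;
    }

    static int demo_request_irq(struct device *dev, int irq, void *data)
    {
            /* Handler is registered but the line stays masked: no race window. */
            int ret = devm_request_irq(dev, irq, demo_irq_handler,
                                       IRQF_NO_AUTOEN, "demo", data);
            if (ret)
                    return ret;

            /* ... later, once the hardware is fully set up: */
            enable_irq(irq);
            return 0;
    }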
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+index 1d3e9666c7411e..64c94e919a6980 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_3_0_msm8998.h
+@@ -156,18 +156,6 @@ static const struct dpu_lm_cfg msm8998_lm[] = {
+ .sblk = &msm8998_lm_sblk,
+ .lm_pair = LM_5,
+ .pingpong = PINGPONG_2,
+- }, {
+- .name = "lm_3", .id = LM_3,
+- .base = 0x47000, .len = 0x320,
+- .features = MIXER_MSM8998_MASK,
+- .sblk = &msm8998_lm_sblk,
+- .pingpong = PINGPONG_NONE,
+- }, {
+- .name = "lm_4", .id = LM_4,
+- .base = 0x48000, .len = 0x320,
+- .features = MIXER_MSM8998_MASK,
+- .sblk = &msm8998_lm_sblk,
+- .pingpong = PINGPONG_NONE,
+ }, {
+ .name = "lm_5", .id = LM_5,
+ .base = 0x49000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+index 7a23389a573272..72bd4f7e9e504c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_0_sdm845.h
+@@ -155,19 +155,6 @@ static const struct dpu_lm_cfg sdm845_lm[] = {
+ .lm_pair = LM_5,
+ .pingpong = PINGPONG_2,
+ .dspp = DSPP_2,
+- }, {
+- .name = "lm_3", .id = LM_3,
+- .base = 0x0, .len = 0x320,
+- .features = MIXER_SDM845_MASK,
+- .sblk = &sdm845_lm_sblk,
+- .pingpong = PINGPONG_NONE,
+- .dspp = DSPP_3,
+- }, {
+- .name = "lm_4", .id = LM_4,
+- .base = 0x0, .len = 0x320,
+- .features = MIXER_SDM845_MASK,
+- .sblk = &sdm845_lm_sblk,
+- .pingpong = PINGPONG_NONE,
+ }, {
+ .name = "lm_5", .id = LM_5,
+ .base = 0x49000, .len = 0x320,
+@@ -175,6 +162,7 @@ static const struct dpu_lm_cfg sdm845_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ },
+ };
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+index 68fae048a9a837..260accc151d4b4 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_core_perf.c
+@@ -80,7 +80,7 @@ static u64 _dpu_core_perf_calc_clk(const struct dpu_perf_cfg *perf_cfg,
+
+ mode = &state->adjusted_mode;
+
+- crtc_clk = mode->vtotal * mode->hdisplay * drm_mode_vrefresh(mode);
++ crtc_clk = (u64)mode->vtotal * mode->hdisplay * drm_mode_vrefresh(mode);
+
+ drm_atomic_crtc_for_each_plane(plane, crtc) {
+ pstate = to_dpu_plane_state(plane->state);
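Without the cast, the multiplication above is carried out in 32-bit int (the mode fields are int-sized) and only the possibly wrapped result is widened to u64; casting the first operand promotes the whole chain. For a hypothetical 8K high-refresh mode the difference is visible in this standalone sketch:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int vtotal = 4500, hdisplay = 7680, vrefresh = 144;

            uint64_t wrong = vtotal * hdisplay * vrefresh;            /* wraps at 2^32 */
            uint64_t right = (uint64_t)vtotal * hdisplay * vrefresh;  /* 64-bit all the way */

            printf("%llu vs %llu\n", (unsigned long long)wrong,
                   (unsigned long long)right);        /* 681672704 vs 4976640000 */
            return 0;
    }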
+diff --git a/drivers/gpu/drm/msm/msm_gpu_devfreq.c b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+index ea70c1c32d9401..6970b0f7f457c8 100644
+--- a/drivers/gpu/drm/msm/msm_gpu_devfreq.c
++++ b/drivers/gpu/drm/msm/msm_gpu_devfreq.c
+@@ -140,6 +140,7 @@ void msm_devfreq_init(struct msm_gpu *gpu)
+ {
+ struct msm_gpu_devfreq *df = &gpu->devfreq;
+ struct msm_drm_private *priv = gpu->dev->dev_private;
++ int ret;
+
+ /* We need target support to do devfreq */
+ if (!gpu->funcs->gpu_busy)
+@@ -156,8 +157,12 @@ void msm_devfreq_init(struct msm_gpu *gpu)
+
+ mutex_init(&df->lock);
+
+- dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,
+- DEV_PM_QOS_MIN_FREQUENCY, 0);
++ ret = dev_pm_qos_add_request(&gpu->pdev->dev, &df->boost_freq,
++ DEV_PM_QOS_MIN_FREQUENCY, 0);
++ if (ret < 0) {
++ DRM_DEV_ERROR(&gpu->pdev->dev, "Couldn't initialize QoS\n");
++ return;
++ }
+
+ msm_devfreq_profile.initial_freq = gpu->fast_rate;
+
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
+index 060c74a80eb14b..3ea447f6a45b51 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
+@@ -443,6 +443,7 @@ gf100_gr_chan_new(struct nvkm_gr *base, struct nvkm_chan *fifoch,
+ ret = gf100_grctx_generate(gr, chan, fifoch->inst);
+ if (ret) {
+ nvkm_error(&base->engine.subdev, "failed to construct context\n");
++ mutex_unlock(&gr->fecs.mutex);
+ return ret;
+ }
+ }
+diff --git a/drivers/gpu/drm/omapdrm/dss/base.c b/drivers/gpu/drm/omapdrm/dss/base.c
+index 5f8002f6bb7a59..a4ac113e16904b 100644
+--- a/drivers/gpu/drm/omapdrm/dss/base.c
++++ b/drivers/gpu/drm/omapdrm/dss/base.c
+@@ -139,21 +139,13 @@ static bool omapdss_device_is_connected(struct omap_dss_device *dssdev)
+ }
+
+ int omapdss_device_connect(struct dss_device *dss,
+- struct omap_dss_device *src,
+ struct omap_dss_device *dst)
+ {
+- dev_dbg(&dss->pdev->dev, "connect(%s, %s)\n",
+- src ? dev_name(src->dev) : "NULL",
++ dev_dbg(&dss->pdev->dev, "connect(%s)\n",
+ dst ? dev_name(dst->dev) : "NULL");
+
+- if (!dst) {
+- /*
+- * The destination is NULL when the source is connected to a
+- * bridge instead of a DSS device. Stop here, we will attach
+- * the bridge later when we will have a DRM encoder.
+- */
+- return src && src->bridge ? 0 : -EINVAL;
+- }
++ if (!dst)
++ return -EINVAL;
+
+ if (omapdss_device_is_connected(dst))
+ return -EBUSY;
+@@ -163,19 +155,14 @@ int omapdss_device_connect(struct dss_device *dss,
+ return 0;
+ }
+
+-void omapdss_device_disconnect(struct omap_dss_device *src,
++void omapdss_device_disconnect(struct dss_device *dss,
+ struct omap_dss_device *dst)
+ {
+- struct dss_device *dss = src ? src->dss : dst->dss;
+-
+- dev_dbg(&dss->pdev->dev, "disconnect(%s, %s)\n",
+- src ? dev_name(src->dev) : "NULL",
++ dev_dbg(&dss->pdev->dev, "disconnect(%s)\n",
+ dst ? dev_name(dst->dev) : "NULL");
+
+- if (!dst) {
+- WARN_ON(!src->bridge);
++ if (WARN_ON(!dst))
+ return;
+- }
+
+ if (!dst->id && !omapdss_device_is_connected(dst)) {
+ WARN_ON(1);
+diff --git a/drivers/gpu/drm/omapdrm/dss/omapdss.h b/drivers/gpu/drm/omapdrm/dss/omapdss.h
+index 040d5a3e33d680..4c22c09c93d523 100644
+--- a/drivers/gpu/drm/omapdrm/dss/omapdss.h
++++ b/drivers/gpu/drm/omapdrm/dss/omapdss.h
+@@ -242,9 +242,8 @@ struct omap_dss_device *omapdss_device_get(struct omap_dss_device *dssdev);
+ void omapdss_device_put(struct omap_dss_device *dssdev);
+ struct omap_dss_device *omapdss_find_device_by_node(struct device_node *node);
+ int omapdss_device_connect(struct dss_device *dss,
+- struct omap_dss_device *src,
+ struct omap_dss_device *dst);
+-void omapdss_device_disconnect(struct omap_dss_device *src,
++void omapdss_device_disconnect(struct dss_device *dss,
+ struct omap_dss_device *dst);
+
+ int omap_dss_get_num_overlay_managers(void);
+diff --git a/drivers/gpu/drm/omapdrm/omap_drv.c b/drivers/gpu/drm/omapdrm/omap_drv.c
+index d3eac4817d7687..a982378aa14119 100644
+--- a/drivers/gpu/drm/omapdrm/omap_drv.c
++++ b/drivers/gpu/drm/omapdrm/omap_drv.c
+@@ -307,7 +307,7 @@ static void omap_disconnect_pipelines(struct drm_device *ddev)
+ for (i = 0; i < priv->num_pipes; i++) {
+ struct omap_drm_pipeline *pipe = &priv->pipes[i];
+
+- omapdss_device_disconnect(NULL, pipe->output);
++ omapdss_device_disconnect(priv->dss, pipe->output);
+
+ omapdss_device_put(pipe->output);
+ pipe->output = NULL;
+@@ -325,7 +325,7 @@ static int omap_connect_pipelines(struct drm_device *ddev)
+ int r;
+
+ for_each_dss_output(output) {
+- r = omapdss_device_connect(priv->dss, NULL, output);
++ r = omapdss_device_connect(priv->dss, output);
+ if (r == -EPROBE_DEFER) {
+ omapdss_device_put(output);
+ return r;
+diff --git a/drivers/gpu/drm/omapdrm/omap_gem.c b/drivers/gpu/drm/omapdrm/omap_gem.c
+index fdae677558f3ef..b9c67e4ca36054 100644
+--- a/drivers/gpu/drm/omapdrm/omap_gem.c
++++ b/drivers/gpu/drm/omapdrm/omap_gem.c
+@@ -1402,8 +1402,6 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
+
+ omap_obj = to_omap_bo(obj);
+
+- mutex_lock(&omap_obj->lock);
+-
+ omap_obj->sgt = sgt;
+
+ if (omap_gem_sgt_is_contiguous(sgt, size)) {
+@@ -1418,21 +1416,17 @@ struct drm_gem_object *omap_gem_new_dmabuf(struct drm_device *dev, size_t size,
+ pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
+ if (!pages) {
+ omap_gem_free_object(obj);
+- obj = ERR_PTR(-ENOMEM);
+- goto done;
++ return ERR_PTR(-ENOMEM);
+ }
+
+ omap_obj->pages = pages;
+ ret = drm_prime_sg_to_page_array(sgt, pages, npages);
+ if (ret) {
+ omap_gem_free_object(obj);
+- obj = ERR_PTR(-ENOMEM);
+- goto done;
++ return ERR_PTR(-ENOMEM);
+ }
+ }
+
+-done:
+- mutex_unlock(&omap_obj->lock);
+ return obj;
+ }
+
+diff --git a/drivers/gpu/drm/panel/panel-newvision-nv3052c.c b/drivers/gpu/drm/panel/panel-newvision-nv3052c.c
+index d3baccfe6286b2..06e16a7c14a756 100644
+--- a/drivers/gpu/drm/panel/panel-newvision-nv3052c.c
++++ b/drivers/gpu/drm/panel/panel-newvision-nv3052c.c
+@@ -917,7 +917,7 @@ static const struct nv3052c_panel_info wl_355608_a8_panel_info = {
+ static const struct spi_device_id nv3052c_ids[] = {
+ { "ltk035c5444t", },
+ { "fs035vg158", },
+- { "wl-355608-a8", },
++ { "rg35xx-plus-panel", },
+ { /* sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(spi, nv3052c_ids);
+diff --git a/drivers/gpu/drm/panel/panel-novatek-nt35510.c b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+index 57686340de49fc..549b86f2cc2887 100644
+--- a/drivers/gpu/drm/panel/panel-novatek-nt35510.c
++++ b/drivers/gpu/drm/panel/panel-novatek-nt35510.c
+@@ -38,6 +38,7 @@
+
+ #define NT35510_CMD_CORRECT_GAMMA BIT(0)
+ #define NT35510_CMD_CONTROL_DISPLAY BIT(1)
++#define NT35510_CMD_SETVCMOFF BIT(2)
+
+ #define MCS_CMD_MAUCCTR 0xF0 /* Manufacturer command enable */
+ #define MCS_CMD_READ_ID1 0xDA
+@@ -721,11 +722,13 @@ static int nt35510_setup_power(struct nt35510 *nt)
+ if (ret)
+ return ret;
+
+- ret = nt35510_send_long(nt, dsi, NT35510_P1_SETVCMOFF,
+- NT35510_P1_VCMOFF_LEN,
+- nt->conf->vcmoff);
+- if (ret)
+- return ret;
++ if (nt->conf->cmds & NT35510_CMD_SETVCMOFF) {
++ ret = nt35510_send_long(nt, dsi, NT35510_P1_SETVCMOFF,
++ NT35510_P1_VCMOFF_LEN,
++ nt->conf->vcmoff);
++ if (ret)
++ return ret;
++ }
+
+ /* Typically 10 ms */
+ usleep_range(10000, 20000);
+@@ -1319,7 +1322,7 @@ static const struct nt35510_config nt35510_frida_frd400b25025 = {
+ },
+ .mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST |
+ MIPI_DSI_MODE_LPM,
+- .cmds = NT35510_CMD_CONTROL_DISPLAY,
++ .cmds = NT35510_CMD_CONTROL_DISPLAY | NT35510_CMD_SETVCMOFF,
+ /* 0x03: AVDD = 6.2V */
+ .avdd = { 0x03, 0x03, 0x03 },
+ /* 0x46: PCK = 2 x Hsync, BTP = 2.5 x VDDB */
+diff --git a/drivers/gpu/drm/panfrost/panfrost_devfreq.c b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+index 2d30da38c2c3e4..3385fd3ef41a47 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_devfreq.c
++++ b/drivers/gpu/drm/panfrost/panfrost_devfreq.c
+@@ -38,7 +38,7 @@ static int panfrost_devfreq_target(struct device *dev, unsigned long *freq,
+ return PTR_ERR(opp);
+ dev_pm_opp_put(opp);
+
+- err = dev_pm_opp_set_rate(dev, *freq);
++ err = dev_pm_opp_set_rate(dev, *freq);
+ if (!err)
+ ptdev->pfdevfreq.current_frequency = *freq;
+
+@@ -182,6 +182,7 @@ int panfrost_devfreq_init(struct panfrost_device *pfdev)
+ * if any and will avoid a switch off by regulator_late_cleanup()
+ */
+ ret = dev_pm_opp_set_opp(dev, opp);
++ dev_pm_opp_put(opp);
+ if (ret) {
+ DRM_DEV_ERROR(dev, "Couldn't set recommended OPP\n");
+ return ret;
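devfreq_recommended_opp() (like every dev_pm_opp_find_*() helper) returns its OPP with a reference held; the hunk above adds the dev_pm_opp_put() that was missing after dev_pm_opp_set_opp(), and the panthor version of the same fix appears below. The balanced shape, as a sketch:

    #include <linux/devfreq.h>
    #include <linux/err.h>
    #include <linux/pm_opp.h>

    static int set_recommended_opp(struct device *dev, unsigned long *freq)
    {
            struct dev_pm_opp *opp = devfreq_recommended_opp(dev, freq, 0);
            int ret;

            if (IS_ERR(opp))
                    return PTR_ERR(opp);

            ret = dev_pm_opp_set_opp(dev, opp);
            dev_pm_opp_put(opp);    /* drop the lookup reference whether or not it failed */
            return ret;
    }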
+diff --git a/drivers/gpu/drm/panfrost/panfrost_gpu.c b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+index fd8e44992184fa..b52dd510e0367b 100644
+--- a/drivers/gpu/drm/panfrost/panfrost_gpu.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gpu.c
+@@ -177,7 +177,6 @@ static void panfrost_gpu_init_quirks(struct panfrost_device *pfdev)
+ struct panfrost_model {
+ const char *name;
+ u32 id;
+- u32 id_mask;
+ u64 features;
+ u64 issues;
+ struct {
+diff --git a/drivers/gpu/drm/panthor/panthor_devfreq.c b/drivers/gpu/drm/panthor/panthor_devfreq.c
+index c6d3c327cc24c0..ecc7a52bd688ee 100644
+--- a/drivers/gpu/drm/panthor/panthor_devfreq.c
++++ b/drivers/gpu/drm/panthor/panthor_devfreq.c
+@@ -62,14 +62,20 @@ static void panthor_devfreq_update_utilization(struct panthor_devfreq *pdevfreq)
+ static int panthor_devfreq_target(struct device *dev, unsigned long *freq,
+ u32 flags)
+ {
++ struct panthor_device *ptdev = dev_get_drvdata(dev);
+ struct dev_pm_opp *opp;
++ int err;
+
+ opp = devfreq_recommended_opp(dev, freq, flags);
+ if (IS_ERR(opp))
+ return PTR_ERR(opp);
+ dev_pm_opp_put(opp);
+
+- return dev_pm_opp_set_rate(dev, *freq);
++ err = dev_pm_opp_set_rate(dev, *freq);
++ if (!err)
++ ptdev->current_frequency = *freq;
++
++ return err;
+ }
+
+ static void panthor_devfreq_reset(struct panthor_devfreq *pdevfreq)
+@@ -130,6 +136,7 @@ int panthor_devfreq_init(struct panthor_device *ptdev)
+ struct panthor_devfreq *pdevfreq;
+ struct dev_pm_opp *opp;
+ unsigned long cur_freq;
++ unsigned long freq = ULONG_MAX;
+ int ret;
+
+ pdevfreq = drmm_kzalloc(&ptdev->base, sizeof(*ptdev->devfreq), GFP_KERNEL);
+@@ -156,12 +163,6 @@ int panthor_devfreq_init(struct panthor_device *ptdev)
+
+ cur_freq = clk_get_rate(ptdev->clks.core);
+
+- opp = devfreq_recommended_opp(dev, &cur_freq, 0);
+- if (IS_ERR(opp))
+- return PTR_ERR(opp);
+-
+- panthor_devfreq_profile.initial_freq = cur_freq;
+-
+ /* Regulator coupling only takes care of synchronizing/balancing voltage
+ * updates, but the coupled regulator needs to be enabled manually.
+ *
+@@ -192,16 +193,30 @@ int panthor_devfreq_init(struct panthor_device *ptdev)
+ return ret;
+ }
+
++ opp = devfreq_recommended_opp(dev, &cur_freq, 0);
++ if (IS_ERR(opp))
++ return PTR_ERR(opp);
++
++ panthor_devfreq_profile.initial_freq = cur_freq;
++ ptdev->current_frequency = cur_freq;
++
+ /*
+ * Set the recommend OPP this will enable and configure the regulator
+ * if any and will avoid a switch off by regulator_late_cleanup()
+ */
+ ret = dev_pm_opp_set_opp(dev, opp);
++ dev_pm_opp_put(opp);
+ if (ret) {
+ DRM_DEV_ERROR(dev, "Couldn't set recommended OPP\n");
+ return ret;
+ }
+
++ /* Find the fastest defined rate */
++ opp = dev_pm_opp_find_freq_floor(dev, &freq);
++ if (IS_ERR(opp))
++ return PTR_ERR(opp);
++ ptdev->fast_rate = freq;
++
+ dev_pm_opp_put(opp);
+
+ /*
+diff --git a/drivers/gpu/drm/panthor/panthor_device.h b/drivers/gpu/drm/panthor/panthor_device.h
+index e388c0472ba783..2109905813e8c4 100644
+--- a/drivers/gpu/drm/panthor/panthor_device.h
++++ b/drivers/gpu/drm/panthor/panthor_device.h
+@@ -66,6 +66,25 @@ struct panthor_irq {
+ atomic_t suspended;
+ };
+
++/**
++ * enum panthor_device_profiling_mode - Profiling state
++ */
++enum panthor_device_profiling_flags {
++ /** @PANTHOR_DEVICE_PROFILING_DISABLED: Profiling is disabled. */
++ PANTHOR_DEVICE_PROFILING_DISABLED = 0,
++
++ /** @PANTHOR_DEVICE_PROFILING_CYCLES: Sampling job cycles. */
++ PANTHOR_DEVICE_PROFILING_CYCLES = BIT(0),
++
++ /** @PANTHOR_DEVICE_PROFILING_TIMESTAMP: Sampling job timestamp. */
++ PANTHOR_DEVICE_PROFILING_TIMESTAMP = BIT(1),
++
++ /** @PANTHOR_DEVICE_PROFILING_ALL: Sampling everything. */
++ PANTHOR_DEVICE_PROFILING_ALL =
++ PANTHOR_DEVICE_PROFILING_CYCLES |
++ PANTHOR_DEVICE_PROFILING_TIMESTAMP,
++};
++
+ /**
+ * struct panthor_device - Panthor device
+ */
+@@ -162,6 +181,15 @@ struct panthor_device {
+ */
+ struct page *dummy_latest_flush;
+ } pm;
++
++ /** @profile_mask: User-set profiling flags for job accounting. */
++ u32 profile_mask;
++
++ /** @current_frequency: Device clock frequency at present. Set by DVFS*/
++ unsigned long current_frequency;
++
++ /** @fast_rate: Maximum device clock frequency. Set by DVFS */
++ unsigned long fast_rate;
+ };
+
+ /**
+diff --git a/drivers/gpu/drm/panthor/panthor_sched.c b/drivers/gpu/drm/panthor/panthor_sched.c
+index 9929e22f4d8d2e..20135a9bc026ed 100644
+--- a/drivers/gpu/drm/panthor/panthor_sched.c
++++ b/drivers/gpu/drm/panthor/panthor_sched.c
+@@ -93,6 +93,9 @@
+ #define MIN_CSGS 3
+ #define MAX_CSG_PRIO 0xf
+
++#define NUM_INSTRS_PER_CACHE_LINE (64 / sizeof(u64))
++#define MAX_INSTRS_PER_JOB 24
++
+ struct panthor_group;
+
+ /**
+@@ -476,6 +479,18 @@ struct panthor_queue {
+ */
+ struct list_head in_flight_jobs;
+ } fence_ctx;
++
++ /** @profiling: Job profiling data slots and access information. */
++ struct {
++ /** @slots: Kernel BO holding the slots. */
++ struct panthor_kernel_bo *slots;
++
++ /** @slot_count: Number of jobs ringbuffer can hold at once. */
++ u32 slot_count;
++
++ /** @seqno: Index of the next available profiling information slot. */
++ u32 seqno;
++ } profiling;
+ };
+
+ /**
+@@ -662,6 +677,18 @@ struct panthor_group {
+ struct list_head wait_node;
+ };
+
++struct panthor_job_profiling_data {
++ struct {
++ u64 before;
++ u64 after;
++ } cycles;
++
++ struct {
++ u64 before;
++ u64 after;
++ } time;
++};
++
+ /**
+ * group_queue_work() - Queue a group work
+ * @group: Group to queue the work for.
+@@ -775,6 +802,15 @@ struct panthor_job {
+
+ /** @done_fence: Fence signaled when the job is finished or cancelled. */
+ struct dma_fence *done_fence;
++
++ /** @profiling: Job profiling information. */
++ struct {
++ /** @mask: Current device job profiling enablement bitmask. */
++ u32 mask;
++
++ /** @slot: Job index in the profiling slots BO. */
++ u32 slot;
++ } profiling;
+ };
+
+ static void
+@@ -839,6 +875,7 @@ static void group_free_queue(struct panthor_group *group, struct panthor_queue *
+
+ panthor_kernel_bo_destroy(queue->ringbuf);
+ panthor_kernel_bo_destroy(queue->iface.mem);
++ panthor_kernel_bo_destroy(queue->profiling.slots);
+
+ /* Release the last_fence we were holding, if any. */
+ dma_fence_put(queue->fence_ctx.last_fence);
+@@ -1989,8 +2026,6 @@ tick_ctx_init(struct panthor_scheduler *sched,
+ }
+ }
+
+-#define NUM_INSTRS_PER_SLOT 16
+-
+ static void
+ group_term_post_processing(struct panthor_group *group)
+ {
+@@ -2829,65 +2864,198 @@ static void group_sync_upd_work(struct work_struct *work)
+ group_put(group);
+ }
+
+-static struct dma_fence *
+-queue_run_job(struct drm_sched_job *sched_job)
++struct panthor_job_ringbuf_instrs {
++ u64 buffer[MAX_INSTRS_PER_JOB];
++ u32 count;
++};
++
++struct panthor_job_instr {
++ u32 profile_mask;
++ u64 instr;
++};
++
++#define JOB_INSTR(__prof, __instr) \
++ { \
++ .profile_mask = __prof, \
++ .instr = __instr, \
++ }
++
++static void
++copy_instrs_to_ringbuf(struct panthor_queue *queue,
++ struct panthor_job *job,
++ struct panthor_job_ringbuf_instrs *instrs)
++{
++ u64 ringbuf_size = panthor_kernel_bo_size(queue->ringbuf);
++ u64 start = job->ringbuf.start & (ringbuf_size - 1);
++ u64 size, written;
++
++ /*
++ * We need to write a whole slot, including any trailing zeroes
++ * that may come at the end of it. Also, because instrs.buffer has
++ * been zero-initialised, there's no need to pad it with 0's.
++ */
++ instrs->count = ALIGN(instrs->count, NUM_INSTRS_PER_CACHE_LINE);
++ size = instrs->count * sizeof(u64);
++ WARN_ON(size > ringbuf_size);
++ written = min(ringbuf_size - start, size);
++
++ memcpy(queue->ringbuf->kmap + start, instrs->buffer, written);
++
++ if (written < size)
++ memcpy(queue->ringbuf->kmap,
++ &instrs->buffer[written / sizeof(u64)],
++ size - written);
++}
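copy_instrs_to_ringbuf() above relies on the ring size being a power of two: masking the insert pointer gives the write offset, a first memcpy runs up to the end of the buffer, and any remainder wraps to offset zero. The same logic in isolation, as a small userspace sketch:

    #include <stdint.h>
    #include <string.h>

    /* ring_size must be a power of two, as the queue ringbuffer is. */
    static void ring_copy(uint8_t *ring, uint64_t ring_size, uint64_t insert,
                          const uint8_t *src, uint64_t len)
    {
            uint64_t start = insert & (ring_size - 1);
            uint64_t first = len < ring_size - start ? len : ring_size - start;

            memcpy(ring + start, src, first);                /* up to the end */
            if (first < len)
                    memcpy(ring, src + first, len - first);  /* wrapped remainder */
    }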
++
++struct panthor_job_cs_params {
++ u32 profile_mask;
++ u64 addr_reg; u64 val_reg;
++ u64 cycle_reg; u64 time_reg;
++ u64 sync_addr; u64 times_addr;
++ u64 cs_start; u64 cs_size;
++ u32 last_flush; u32 waitall_mask;
++};
++
++static void
++get_job_cs_params(struct panthor_job *job, struct panthor_job_cs_params *params)
+ {
+- struct panthor_job *job = container_of(sched_job, struct panthor_job, base);
+ struct panthor_group *group = job->group;
+ struct panthor_queue *queue = group->queues[job->queue_idx];
+ struct panthor_device *ptdev = group->ptdev;
+ struct panthor_scheduler *sched = ptdev->scheduler;
+- u32 ringbuf_size = panthor_kernel_bo_size(queue->ringbuf);
+- u32 ringbuf_insert = queue->iface.input->insert & (ringbuf_size - 1);
+- u64 addr_reg = ptdev->csif_info.cs_reg_count -
+- ptdev->csif_info.unpreserved_cs_reg_count;
+- u64 val_reg = addr_reg + 2;
+- u64 sync_addr = panthor_kernel_bo_gpuva(group->syncobjs) +
+- job->queue_idx * sizeof(struct panthor_syncobj_64b);
+- u32 waitall_mask = GENMASK(sched->sb_slot_count - 1, 0);
+- struct dma_fence *done_fence;
+- int ret;
+
+- u64 call_instrs[NUM_INSTRS_PER_SLOT] = {
+- /* MOV32 rX+2, cs.latest_flush */
+- (2ull << 56) | (val_reg << 48) | job->call_info.latest_flush,
++ params->addr_reg = ptdev->csif_info.cs_reg_count -
++ ptdev->csif_info.unpreserved_cs_reg_count;
++ params->val_reg = params->addr_reg + 2;
++ params->cycle_reg = params->addr_reg;
++ params->time_reg = params->val_reg;
+
+- /* FLUSH_CACHE2.clean_inv_all.no_wait.signal(0) rX+2 */
+- (36ull << 56) | (0ull << 48) | (val_reg << 40) | (0 << 16) | 0x233,
++ params->sync_addr = panthor_kernel_bo_gpuva(group->syncobjs) +
++ job->queue_idx * sizeof(struct panthor_syncobj_64b);
++ params->times_addr = panthor_kernel_bo_gpuva(queue->profiling.slots) +
++ (job->profiling.slot * sizeof(struct panthor_job_profiling_data));
++ params->waitall_mask = GENMASK(sched->sb_slot_count - 1, 0);
+
+- /* MOV48 rX:rX+1, cs.start */
+- (1ull << 56) | (addr_reg << 48) | job->call_info.start,
++ params->cs_start = job->call_info.start;
++ params->cs_size = job->call_info.size;
++ params->last_flush = job->call_info.latest_flush;
+
+- /* MOV32 rX+2, cs.size */
+- (2ull << 56) | (val_reg << 48) | job->call_info.size,
++ params->profile_mask = job->profiling.mask;
++}
+
+- /* WAIT(0) => waits for FLUSH_CACHE2 instruction */
+- (3ull << 56) | (1 << 16),
++#define JOB_INSTR_ALWAYS(instr) \
++ JOB_INSTR(PANTHOR_DEVICE_PROFILING_DISABLED, (instr))
++#define JOB_INSTR_TIMESTAMP(instr) \
++ JOB_INSTR(PANTHOR_DEVICE_PROFILING_TIMESTAMP, (instr))
++#define JOB_INSTR_CYCLES(instr) \
++ JOB_INSTR(PANTHOR_DEVICE_PROFILING_CYCLES, (instr))
+
++static void
++prepare_job_instrs(const struct panthor_job_cs_params *params,
++ struct panthor_job_ringbuf_instrs *instrs)
++{
++ const struct panthor_job_instr instr_seq[] = {
++ /* MOV32 rX+2, cs.latest_flush */
++ JOB_INSTR_ALWAYS((2ull << 56) | (params->val_reg << 48) | params->last_flush),
++ /* FLUSH_CACHE2.clean_inv_all.no_wait.signal(0) rX+2 */
++ JOB_INSTR_ALWAYS((36ull << 56) | (0ull << 48) | (params->val_reg << 40) |
++ (0 << 16) | 0x233),
++ /* MOV48 rX:rX+1, cycles_offset */
++ JOB_INSTR_CYCLES((1ull << 56) | (params->cycle_reg << 48) |
++ (params->times_addr +
++ offsetof(struct panthor_job_profiling_data, cycles.before))),
++ /* STORE_STATE cycles */
++ JOB_INSTR_CYCLES((40ull << 56) | (params->cycle_reg << 40) | (1ll << 32)),
++ /* MOV48 rX:rX+1, time_offset */
++ JOB_INSTR_TIMESTAMP((1ull << 56) | (params->time_reg << 48) |
++ (params->times_addr +
++ offsetof(struct panthor_job_profiling_data, time.before))),
++ /* STORE_STATE timer */
++ JOB_INSTR_TIMESTAMP((40ull << 56) | (params->time_reg << 40) | (0ll << 32)),
++ /* MOV48 rX:rX+1, cs.start */
++ JOB_INSTR_ALWAYS((1ull << 56) | (params->addr_reg << 48) | params->cs_start),
++ /* MOV32 rX+2, cs.size */
++ JOB_INSTR_ALWAYS((2ull << 56) | (params->val_reg << 48) | params->cs_size),
++ /* WAIT(0) => waits for FLUSH_CACHE2 instruction */
++ JOB_INSTR_ALWAYS((3ull << 56) | (1 << 16)),
+ /* CALL rX:rX+1, rX+2 */
+- (32ull << 56) | (addr_reg << 40) | (val_reg << 32),
+-
++ JOB_INSTR_ALWAYS((32ull << 56) | (params->addr_reg << 40) |
++ (params->val_reg << 32)),
++ /* MOV48 rX:rX+1, cycles_offset */
++ JOB_INSTR_CYCLES((1ull << 56) | (params->cycle_reg << 48) |
++ (params->times_addr +
++ offsetof(struct panthor_job_profiling_data, cycles.after))),
++ /* STORE_STATE cycles */
++ JOB_INSTR_CYCLES((40ull << 56) | (params->cycle_reg << 40) | (1ll << 32)),
++ /* MOV48 rX:rX+1, time_offset */
++ JOB_INSTR_TIMESTAMP((1ull << 56) | (params->time_reg << 48) |
++ (params->times_addr +
++ offsetof(struct panthor_job_profiling_data, time.after))),
++ /* STORE_STATE timer */
++ JOB_INSTR_TIMESTAMP((40ull << 56) | (params->time_reg << 40) | (0ll << 32)),
+ /* MOV48 rX:rX+1, sync_addr */
+- (1ull << 56) | (addr_reg << 48) | sync_addr,
+-
++ JOB_INSTR_ALWAYS((1ull << 56) | (params->addr_reg << 48) | params->sync_addr),
+ /* MOV48 rX+2, #1 */
+- (1ull << 56) | (val_reg << 48) | 1,
+-
++ JOB_INSTR_ALWAYS((1ull << 56) | (params->val_reg << 48) | 1),
+ /* WAIT(all) */
+- (3ull << 56) | (waitall_mask << 16),
+-
++ JOB_INSTR_ALWAYS((3ull << 56) | (params->waitall_mask << 16)),
+ /* SYNC_ADD64.system_scope.propage_err.nowait rX:rX+1, rX+2*/
+- (51ull << 56) | (0ull << 48) | (addr_reg << 40) | (val_reg << 32) | (0 << 16) | 1,
++ JOB_INSTR_ALWAYS((51ull << 56) | (0ull << 48) | (params->addr_reg << 40) |
++ (params->val_reg << 32) | (0 << 16) | 1),
++ /* ERROR_BARRIER, so we can recover from faults at job boundaries. */
++ JOB_INSTR_ALWAYS((47ull << 56)),
++ };
++ u32 pad;
+
+- /* ERROR_BARRIER, so we can recover from faults at job
+- * boundaries.
+- */
+- (47ull << 56),
++ instrs->count = 0;
++
++ /* Needs to be cacheline aligned to please the prefetcher. */
++ static_assert(sizeof(instrs->buffer) % 64 == 0,
++ "panthor_job_ringbuf_instrs::buffer is not aligned on a cacheline");
++
++ /* Make sure we have enough storage to store the whole sequence. */
++ static_assert(ALIGN(ARRAY_SIZE(instr_seq), NUM_INSTRS_PER_CACHE_LINE) ==
++ ARRAY_SIZE(instrs->buffer),
++ "instr_seq vs panthor_job_ringbuf_instrs::buffer size mismatch");
++
++ for (u32 i = 0; i < ARRAY_SIZE(instr_seq); i++) {
++ /* If the profile mask of this instruction is not enabled, skip it. */
++ if (instr_seq[i].profile_mask &&
++ !(instr_seq[i].profile_mask & params->profile_mask))
++ continue;
++
++ instrs->buffer[instrs->count++] = instr_seq[i].instr;
++ }
++
++ pad = ALIGN(instrs->count, NUM_INSTRS_PER_CACHE_LINE);
++ memset(&instrs->buffer[instrs->count], 0,
++ (pad - instrs->count) * sizeof(instrs->buffer[0]));
++ instrs->count = pad;
++}
++
++static u32 calc_job_credits(u32 profile_mask)
++{
++ struct panthor_job_ringbuf_instrs instrs;
++ struct panthor_job_cs_params params = {
++ .profile_mask = profile_mask,
+ };
+
+- /* Need to be cacheline aligned to please the prefetcher. */
+- static_assert(sizeof(call_instrs) % 64 == 0,
+- "call_instrs is not aligned on a cacheline");
++ prepare_job_instrs(¶ms, &instrs);
++ return instrs.count;
++}
++
++static struct dma_fence *
++queue_run_job(struct drm_sched_job *sched_job)
++{
++ struct panthor_job *job = container_of(sched_job, struct panthor_job, base);
++ struct panthor_group *group = job->group;
++ struct panthor_queue *queue = group->queues[job->queue_idx];
++ struct panthor_device *ptdev = group->ptdev;
++ struct panthor_scheduler *sched = ptdev->scheduler;
++ struct panthor_job_ringbuf_instrs instrs;
++ struct panthor_job_cs_params cs_params;
++ struct dma_fence *done_fence;
++ int ret;
+
+ /* Stream size is zero, nothing to do except making sure all previously
+ * submitted jobs are done before we signal the
+@@ -2914,17 +3082,23 @@ queue_run_job(struct drm_sched_job *sched_job)
+ queue->fence_ctx.id,
+ atomic64_inc_return(&queue->fence_ctx.seqno));
+
+- memcpy(queue->ringbuf->kmap + ringbuf_insert,
+- call_instrs, sizeof(call_instrs));
++ job->profiling.slot = queue->profiling.seqno++;
++ if (queue->profiling.seqno == queue->profiling.slot_count)
++ queue->profiling.seqno = 0;
++
++ job->ringbuf.start = queue->iface.input->insert;
++
++ get_job_cs_params(job, &cs_params);
++ prepare_job_instrs(&cs_params, &instrs);
++ copy_instrs_to_ringbuf(queue, job, &instrs);
++
++ job->ringbuf.end = job->ringbuf.start + (instrs.count * sizeof(u64));
+
+ panthor_job_get(&job->base);
+ spin_lock(&queue->fence_ctx.lock);
+ list_add_tail(&job->node, &queue->fence_ctx.in_flight_jobs);
+ spin_unlock(&queue->fence_ctx.lock);
+
+- job->ringbuf.start = queue->iface.input->insert;
+- job->ringbuf.end = job->ringbuf.start + sizeof(call_instrs);
+-
+ /* Make sure the ring buffer is updated before the INSERT
+ * register.
+ */
+@@ -3017,6 +3191,33 @@ static const struct drm_sched_backend_ops panthor_queue_sched_ops = {
+ .free_job = queue_free_job,
+ };
+
++static u32 calc_profiling_ringbuf_num_slots(struct panthor_device *ptdev,
++ u32 cs_ringbuf_size)
++{
++ u32 min_profiled_job_instrs = U32_MAX;
++ u32 last_flag = fls(PANTHOR_DEVICE_PROFILING_ALL);
++
++ /*
++ * We want to calculate the minimum size of a profiled job's CS,
++ * because profiled jobs need additional instructions for sampling
++ * performance metrics and therefore take up more slots in the
++ * queue's ringbuffer. The smaller that minimum footprint, the more
++ * profiled jobs can be resident in the ring at once, and that count
++ * is the maximum number of profiling slots we should allocate.
++ * It has to be calculated separately for every single job profiling
++ * flag, but not for the disabled case, since unprofiled jobs don't
++ * need to keep track of this at all.
++ */
++ for (u32 i = 0; i < last_flag; i++) {
++ min_profiled_job_instrs =
++ min(min_profiled_job_instrs, calc_job_credits(BIT(i)));
++ }
++
++ return DIV_ROUND_UP(cs_ringbuf_size, min_profiled_job_instrs * sizeof(u64));
++}
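The arithmetic above bounds how many profiled jobs can sit in the ringbuffer at once: the worst case is the ring packed end to end with the smallest profiled job, so the slot count is the ring size divided (rounding up) by that minimum per-job footprint in bytes. With hypothetical numbers:

    #include <stdio.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
            unsigned long ringbuf_size = 4096;     /* bytes, hypothetical */
            unsigned long min_job_instrs = 16;     /* smallest profiled job, padded */

            /* 4096 / (16 * 8) = 32 profiling slots at most. */
            printf("%lu\n", DIV_ROUND_UP(ringbuf_size,
                                         min_job_instrs * sizeof(unsigned long long)));
            return 0;
    }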
++
+ static struct panthor_queue *
+ group_create_queue(struct panthor_group *group,
+ const struct drm_panthor_queue_create *args)
+@@ -3070,9 +3271,35 @@ group_create_queue(struct panthor_group *group,
+ goto err_free_queue;
+ }
+
++ queue->profiling.slot_count =
++ calc_profiling_ringbuf_num_slots(group->ptdev, args->ringbuf_size);
++
++ queue->profiling.slots =
++ panthor_kernel_bo_create(group->ptdev, group->vm,
++ queue->profiling.slot_count *
++ sizeof(struct panthor_job_profiling_data),
++ DRM_PANTHOR_BO_NO_MMAP,
++ DRM_PANTHOR_VM_BIND_OP_MAP_NOEXEC |
++ DRM_PANTHOR_VM_BIND_OP_MAP_UNCACHED,
++ PANTHOR_VM_KERNEL_AUTO_VA);
++
++ if (IS_ERR(queue->profiling.slots)) {
++ ret = PTR_ERR(queue->profiling.slots);
++ goto err_free_queue;
++ }
++
++ ret = panthor_kernel_bo_vmap(queue->profiling.slots);
++ if (ret)
++ goto err_free_queue;
++
++ /*
++ * Credit limit argument tells us the total number of instructions
++ * across all CS slots in the ringbuffer, with some jobs requiring
++ * twice as many as others, depending on their profiling status.
++ */
+ ret = drm_sched_init(&queue->scheduler, &panthor_queue_sched_ops,
+ group->ptdev->scheduler->wq, 1,
+- args->ringbuf_size / (NUM_INSTRS_PER_SLOT * sizeof(u64)),
++ args->ringbuf_size / sizeof(u64),
+ 0, msecs_to_jiffies(JOB_TIMEOUT_MS),
+ group->ptdev->reset.wq,
+ NULL, "panthor-queue", group->ptdev->base.dev);
+@@ -3380,6 +3607,7 @@ panthor_job_create(struct panthor_file *pfile,
+ {
+ struct panthor_group_pool *gpool = pfile->groups;
+ struct panthor_job *job;
++ u32 credits;
+ int ret;
+
+ if (qsubmit->pad)
+@@ -3438,9 +3666,16 @@ panthor_job_create(struct panthor_file *pfile,
+ }
+ }
+
++ job->profiling.mask = pfile->ptdev->profile_mask;
++ credits = calc_job_credits(job->profiling.mask);
++ if (credits == 0) {
++ ret = -EINVAL;
++ goto err_put_job;
++ }
++
+ ret = drm_sched_job_init(&job->base,
+ &job->group->queues[job->queue_idx]->entity,
+- 1, job->group);
++ credits, job->group);
+ if (ret)
+ goto err_put_job;
+
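As a quick sanity check of the slot-count arithmetic in
calc_profiling_ringbuf_num_slots() above, here is a minimal user-space
sketch; the ring-buffer size and the per-job instruction count are
made-up illustrative numbers, not values taken from the driver:

    #include <stdio.h>
    #include <stdint.h>

    #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

    int main(void)
    {
        /* Hypothetical: a 64 KiB CS ring buffer where the smallest
         * profiled job needs 24 64-bit instructions. */
        uint32_t cs_ringbuf_size = 64 * 1024;
        uint32_t min_profiled_job_instrs = 24;

        /* Worst case: every slot holds the smallest profiled job, so
         * this is the most profiling slots ever needed at once. */
        uint32_t slots = DIV_ROUND_UP(cs_ringbuf_size,
                                      min_profiled_job_instrs * (uint32_t)sizeof(uint64_t));

        printf("profiling slots: %u\n", slots); /* prints 342 */
        return 0;
    }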
+diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
+index 47aa06a9a94221..5b69cc8011b42b 100644
+--- a/drivers/gpu/drm/radeon/radeon_audio.c
++++ b/drivers/gpu/drm/radeon/radeon_audio.c
+@@ -760,16 +760,20 @@ static int radeon_audio_component_get_eld(struct device *kdev, int port,
+ if (!rdev->audio.enabled || !rdev->mode_info.mode_config_initialized)
+ return 0;
+
+- list_for_each_entry(encoder, &rdev_to_drm(rdev)->mode_config.encoder_list, head) {
++ list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
++ const struct drm_connector_helper_funcs *connector_funcs =
++ connector->helper_private;
++ encoder = connector_funcs->best_encoder(connector);
++
++ if (!encoder)
++ continue;
++
+ if (!radeon_encoder_is_digital(encoder))
+ continue;
+ radeon_encoder = to_radeon_encoder(encoder);
+ dig = radeon_encoder->enc_priv;
+ if (!dig->pin || dig->pin->id != port)
+ continue;
+- connector = radeon_get_connector_for_encoder(encoder);
+- if (!connector)
+- continue;
+ *enabled = true;
+ ret = drm_eld_size(connector->eld);
+ memcpy(buf, connector->eld, min(max_bytes, ret));
+diff --git a/drivers/gpu/drm/v3d/v3d_drv.h b/drivers/gpu/drm/v3d/v3d_drv.h
+index cf4b23369dc449..75b4725d49c7e1 100644
+--- a/drivers/gpu/drm/v3d/v3d_drv.h
++++ b/drivers/gpu/drm/v3d/v3d_drv.h
+@@ -553,6 +553,7 @@ void v3d_irq_disable(struct v3d_dev *v3d);
+ void v3d_irq_reset(struct v3d_dev *v3d);
+
+ /* v3d_mmu.c */
++int v3d_mmu_flush_all(struct v3d_dev *v3d);
+ int v3d_mmu_set_page_table(struct v3d_dev *v3d);
+ void v3d_mmu_insert_ptes(struct v3d_bo *bo);
+ void v3d_mmu_remove_ptes(struct v3d_bo *bo);
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index d469bda52c1a5e..20bf33702c3c4f 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -70,6 +70,8 @@ v3d_overflow_mem_work(struct work_struct *work)
+ list_add_tail(&bo->unref_head, &v3d->bin_job->render->unref_list);
+ spin_unlock_irqrestore(&v3d->job_lock, irqflags);
+
++ v3d_mmu_flush_all(v3d);
++
+ V3D_CORE_WRITE(0, V3D_PTB_BPOA, bo->node.start << V3D_MMU_PAGE_SHIFT);
+ V3D_CORE_WRITE(0, V3D_PTB_BPOS, obj->size);
+
+diff --git a/drivers/gpu/drm/v3d/v3d_mmu.c b/drivers/gpu/drm/v3d/v3d_mmu.c
+index 14f3af40d6f6d1..5bb7821c0243c6 100644
+--- a/drivers/gpu/drm/v3d/v3d_mmu.c
++++ b/drivers/gpu/drm/v3d/v3d_mmu.c
+@@ -28,36 +28,27 @@
+ #define V3D_PTE_WRITEABLE BIT(29)
+ #define V3D_PTE_VALID BIT(28)
+
+-static int v3d_mmu_flush_all(struct v3d_dev *v3d)
++int v3d_mmu_flush_all(struct v3d_dev *v3d)
+ {
+ int ret;
+
+- /* Make sure that another flush isn't already running when we
+- * start this one.
+- */
+- ret = wait_for(!(V3D_READ(V3D_MMU_CTL) &
+- V3D_MMU_CTL_TLB_CLEARING), 100);
+- if (ret)
+- dev_err(v3d->drm.dev, "TLB clear wait idle pre-wait failed\n");
+-
+- V3D_WRITE(V3D_MMU_CTL, V3D_READ(V3D_MMU_CTL) |
+- V3D_MMU_CTL_TLB_CLEAR);
+-
+- V3D_WRITE(V3D_MMUC_CONTROL,
+- V3D_MMUC_CONTROL_FLUSH |
++ V3D_WRITE(V3D_MMUC_CONTROL, V3D_MMUC_CONTROL_FLUSH |
+ V3D_MMUC_CONTROL_ENABLE);
+
+- ret = wait_for(!(V3D_READ(V3D_MMU_CTL) &
+- V3D_MMU_CTL_TLB_CLEARING), 100);
++ ret = wait_for(!(V3D_READ(V3D_MMUC_CONTROL) &
++ V3D_MMUC_CONTROL_FLUSHING), 100);
+ if (ret) {
+- dev_err(v3d->drm.dev, "TLB clear wait idle failed\n");
++ dev_err(v3d->drm.dev, "MMUC flush wait idle failed\n");
+ return ret;
+ }
+
+- ret = wait_for(!(V3D_READ(V3D_MMUC_CONTROL) &
+- V3D_MMUC_CONTROL_FLUSHING), 100);
++ V3D_WRITE(V3D_MMU_CTL, V3D_READ(V3D_MMU_CTL) |
++ V3D_MMU_CTL_TLB_CLEAR);
++
++ ret = wait_for(!(V3D_READ(V3D_MMU_CTL) &
++ V3D_MMU_CTL_TLB_CLEARING), 100);
+ if (ret)
+- dev_err(v3d->drm.dev, "MMUC flush wait idle failed\n");
++ dev_err(v3d->drm.dev, "MMU TLB clear wait idle failed\n");
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
+index 08d2a273958287..4f935f1d50a943 100644
+--- a/drivers/gpu/drm/v3d/v3d_sched.c
++++ b/drivers/gpu/drm/v3d/v3d_sched.c
+@@ -135,8 +135,31 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue)
+ struct v3d_stats *global_stats = &v3d->queue[queue].stats;
+ struct v3d_stats *local_stats = &file->stats[queue];
+ u64 now = local_clock();
+-
+- preempt_disable();
++ unsigned long flags;
++
++ /*
++	 * We only need to disable local interrupts to appease lockdep,
++	 * which would otherwise think v3d_job_start_stats vs v3d_stats_update
++	 * has an unsafe in-irq vs no-irq-off usage problem. This is a false
++	 * positive because all the locks are per queue and stats type, and
++	 * all jobs are serialised strictly one at a time. More specifically:
++	 *
++	 * 1. Stats for GPU queues are updated from interrupt handlers under
++	 *    a spin lock and started here with preemption disabled.
++	 *
++	 * 2. Stats for CPU queues are updated from the worker with preemption
++	 *    disabled and equally started here with preemption disabled.
++	 *
++	 * Therefore both are consistent.
++	 *
++	 * 3. Because the next job can only be queued after the previous one
++	 *    has been signaled, and locks are per queue, there is also no
++	 *    scope for the start part to race with the update part.
++ */
++ if (IS_ENABLED(CONFIG_LOCKDEP))
++ local_irq_save(flags);
++ else
++ preempt_disable();
+
+ write_seqcount_begin(&local_stats->lock);
+ local_stats->start_ns = now;
+@@ -146,7 +169,10 @@ v3d_job_start_stats(struct v3d_job *job, enum v3d_queue queue)
+ global_stats->start_ns = now;
+ write_seqcount_end(&global_stats->lock);
+
+- preempt_enable();
++ if (IS_ENABLED(CONFIG_LOCKDEP))
++ local_irq_restore(flags);
++ else
++ preempt_enable();
+ }
+
+ static void
+@@ -167,11 +193,21 @@ v3d_job_update_stats(struct v3d_job *job, enum v3d_queue queue)
+ struct v3d_stats *global_stats = &v3d->queue[queue].stats;
+ struct v3d_stats *local_stats = &file->stats[queue];
+ u64 now = local_clock();
++ unsigned long flags;
++
++ /* See comment in v3d_job_start_stats() */
++ if (IS_ENABLED(CONFIG_LOCKDEP))
++ local_irq_save(flags);
++ else
++ preempt_disable();
+
+- preempt_disable();
+ v3d_stats_update(local_stats, now);
+ v3d_stats_update(global_stats, now);
+- preempt_enable();
++
++ if (IS_ENABLED(CONFIG_LOCKDEP))
++ local_irq_restore(flags);
++ else
++ preempt_enable();
+ }
+
+ static struct dma_fence *v3d_bin_job_run(struct drm_sched_job *sched_job)
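The comment above leans on the seqcount-protected stats. For context,
a hedged sketch of the seqcount writer/reader protocol those sections
implement, using the stock <linux/seqlock.h> primitives and a
hypothetical struct stats { seqcount_t lock; u64 start_ns; }:

    /* Writer side: externally serialised, as the comment above argues,
     * so only readers need to detect a concurrent write. */
    static void stats_start(struct stats *s, u64 now)
    {
        write_seqcount_begin(&s->lock);
        s->start_ns = now;
        write_seqcount_end(&s->lock);
    }

    /* Reader side: loops until it gets a snapshot no write overlapped,
     * which is why writers never have to block readers. */
    static u64 stats_read_start(struct stats *s)
    {
        unsigned int seq;
        u64 start;

        do {
            seq = read_seqcount_begin(&s->lock);
            start = s->start_ns;
        } while (read_seqcount_retry(&s->lock, seq));

        return start;
    }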
+diff --git a/drivers/gpu/drm/vc4/tests/vc4_mock.c b/drivers/gpu/drm/vc4/tests/vc4_mock.c
+index 0731a7d85d7abc..922849dd4b4787 100644
+--- a/drivers/gpu/drm/vc4/tests/vc4_mock.c
++++ b/drivers/gpu/drm/vc4/tests/vc4_mock.c
+@@ -155,11 +155,11 @@ KUNIT_DEFINE_ACTION_WRAPPER(kunit_action_drm_dev_unregister,
+ drm_dev_unregister,
+ struct drm_device *);
+
+-static struct vc4_dev *__mock_device(struct kunit *test, bool is_vc5)
++static struct vc4_dev *__mock_device(struct kunit *test, enum vc4_gen gen)
+ {
+ struct drm_device *drm;
+- const struct drm_driver *drv = is_vc5 ? &vc5_drm_driver : &vc4_drm_driver;
+- const struct vc4_mock_desc *desc = is_vc5 ? &vc5_mock : &vc4_mock;
++ const struct drm_driver *drv = (gen == VC4_GEN_5) ? &vc5_drm_driver : &vc4_drm_driver;
++ const struct vc4_mock_desc *desc = (gen == VC4_GEN_5) ? &vc5_mock : &vc4_mock;
+ struct vc4_dev *vc4;
+ struct device *dev;
+ int ret;
+@@ -173,7 +173,7 @@ static struct vc4_dev *__mock_device(struct kunit *test, bool is_vc5)
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vc4);
+
+ vc4->dev = dev;
+- vc4->is_vc5 = is_vc5;
++ vc4->gen = gen;
+
+ vc4->hvs = __vc4_hvs_alloc(vc4, NULL);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, vc4->hvs);
+@@ -198,10 +198,10 @@ static struct vc4_dev *__mock_device(struct kunit *test, bool is_vc5)
+
+ struct vc4_dev *vc4_mock_device(struct kunit *test)
+ {
+- return __mock_device(test, false);
++ return __mock_device(test, VC4_GEN_4);
+ }
+
+ struct vc4_dev *vc5_mock_device(struct kunit *test)
+ {
+- return __mock_device(test, true);
++ return __mock_device(test, VC4_GEN_5);
+ }
+diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c
+index 3f72be7490d5b7..2a85d08b19852a 100644
+--- a/drivers/gpu/drm/vc4/vc4_bo.c
++++ b/drivers/gpu/drm/vc4/vc4_bo.c
+@@ -251,7 +251,7 @@ void vc4_bo_add_to_purgeable_pool(struct vc4_bo *bo)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_lock(&vc4->purgeable.lock);
+@@ -265,7 +265,7 @@ static void vc4_bo_remove_from_purgeable_pool_locked(struct vc4_bo *bo)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ /* list_del_init() is used here because the caller might release
+@@ -396,7 +396,7 @@ struct drm_gem_object *vc4_create_object(struct drm_device *dev, size_t size)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return ERR_PTR(-ENODEV);
+
+ bo = kzalloc(sizeof(*bo), GFP_KERNEL);
+@@ -427,7 +427,7 @@ struct vc4_bo *vc4_bo_create(struct drm_device *dev, size_t unaligned_size,
+ struct drm_gem_dma_object *dma_obj;
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return ERR_PTR(-ENODEV);
+
+ if (size == 0)
+@@ -496,7 +496,7 @@ int vc4_bo_dumb_create(struct drm_file *file_priv,
+ struct vc4_bo *bo = NULL;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ ret = vc4_dumb_fixup_args(args);
+@@ -622,7 +622,7 @@ int vc4_bo_inc_usecnt(struct vc4_bo *bo)
+ struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ /* Fast path: if the BO is already retained by someone, no need to
+@@ -661,7 +661,7 @@ void vc4_bo_dec_usecnt(struct vc4_bo *bo)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(bo->base.base.dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ /* Fast path: if the BO is still retained by someone, no need to test
+@@ -783,7 +783,7 @@ int vc4_create_bo_ioctl(struct drm_device *dev, void *data,
+ struct vc4_bo *bo = NULL;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ ret = vc4_grab_bin_bo(vc4, vc4file);
+@@ -813,7 +813,7 @@ int vc4_mmap_bo_ioctl(struct drm_device *dev, void *data,
+ struct drm_vc4_mmap_bo *args = data;
+ struct drm_gem_object *gem_obj;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ gem_obj = drm_gem_object_lookup(file_priv, args->handle);
+@@ -839,7 +839,7 @@ vc4_create_shader_bo_ioctl(struct drm_device *dev, void *data,
+ struct vc4_bo *bo = NULL;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->size == 0)
+@@ -918,7 +918,7 @@ int vc4_set_tiling_ioctl(struct drm_device *dev, void *data,
+ struct vc4_bo *bo;
+ bool t_format;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->flags != 0)
+@@ -964,7 +964,7 @@ int vc4_get_tiling_ioctl(struct drm_device *dev, void *data,
+ struct drm_gem_object *gem_obj;
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->flags != 0 || args->modifier != 0)
+@@ -1007,7 +1007,7 @@ int vc4_bo_cache_init(struct drm_device *dev)
+ int ret;
+ int i;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ /* Create the initial set of BO labels that the kernel will
+@@ -1071,7 +1071,7 @@ int vc4_label_bo_ioctl(struct drm_device *dev, void *data,
+ struct drm_gem_object *gem_obj;
+ int ret = 0, label;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!args->len)
+diff --git a/drivers/gpu/drm/vc4/vc4_crtc.c b/drivers/gpu/drm/vc4/vc4_crtc.c
+index 8b5a7e5eb1466c..26a7cf7f646515 100644
+--- a/drivers/gpu/drm/vc4/vc4_crtc.c
++++ b/drivers/gpu/drm/vc4/vc4_crtc.c
+@@ -263,7 +263,7 @@ static u32 vc4_get_fifo_full_level(struct vc4_crtc *vc4_crtc, u32 format)
+ * Removing 1 from the FIFO full level however
+ * seems to completely remove that issue.
+ */
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX - 1;
+
+ return fifo_len_bytes - 3 * HVS_FIFO_LATENCY_PIX;
+@@ -428,7 +428,7 @@ static void vc4_crtc_config_pv(struct drm_crtc *crtc, struct drm_encoder *encode
+ if (is_dsi)
+ CRTC_WRITE(PV_HACT_ACT, mode->hdisplay * pixel_rep);
+
+- if (vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_5)
+ CRTC_WRITE(PV_MUX_CFG,
+ VC4_SET_FIELD(PV_MUX_CFG_RGB_PIXEL_MUX_MODE_NO_SWAP,
+ PV_MUX_CFG_RGB_PIXEL_MUX_MODE));
+@@ -913,7 +913,7 @@ static int vc4_async_set_fence_cb(struct drm_device *dev,
+ struct dma_fence *fence;
+ int ret;
+
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ struct vc4_bo *bo = to_vc4_bo(&dma_bo->base);
+
+ return vc4_queue_seqno_cb(dev, &flip_state->cb.seqno, bo->seqno,
+@@ -1000,7 +1000,7 @@ static int vc4_async_page_flip(struct drm_crtc *crtc,
+ struct vc4_bo *bo = to_vc4_bo(&dma_bo->base);
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ /*
+@@ -1043,7 +1043,7 @@ int vc4_page_flip(struct drm_crtc *crtc,
+ struct drm_device *dev = crtc->dev;
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+- if (vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_5)
+ return vc5_async_page_flip(crtc, fb, event, flags);
+ else
+ return vc4_async_page_flip(crtc, fb, event, flags);
+@@ -1338,9 +1338,8 @@ int __vc4_crtc_init(struct drm_device *drm,
+
+ drm_crtc_helper_add(crtc, crtc_helper_funcs);
+
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ drm_mode_crtc_set_gamma_size(crtc, ARRAY_SIZE(vc4_crtc->lut_r));
+-
+ drm_crtc_enable_color_mgmt(crtc, 0, false, crtc->gamma_size);
+
+ /* We support CTM, but only for one CRTC at a time. It's therefore
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.c b/drivers/gpu/drm/vc4/vc4_drv.c
+index c133e96b8aca25..550324819f37fc 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.c
++++ b/drivers/gpu/drm/vc4/vc4_drv.c
+@@ -98,7 +98,7 @@ static int vc4_get_param_ioctl(struct drm_device *dev, void *data,
+ if (args->pad != 0)
+ return -EINVAL;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d)
+@@ -147,7 +147,7 @@ static int vc4_open(struct drm_device *dev, struct drm_file *file)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_file *vc4file;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ vc4file = kzalloc(sizeof(*vc4file), GFP_KERNEL);
+@@ -165,7 +165,7 @@ static void vc4_close(struct drm_device *dev, struct drm_file *file)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_file *vc4file = file->driver_priv;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (vc4file->bin_bo_used)
+@@ -291,13 +291,17 @@ static int vc4_drm_bind(struct device *dev)
+ struct vc4_dev *vc4;
+ struct device_node *node;
+ struct drm_crtc *crtc;
+- bool is_vc5;
++ enum vc4_gen gen;
+ int ret = 0;
+
+ dev->coherent_dma_mask = DMA_BIT_MASK(32);
+
+- is_vc5 = of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5");
+- if (is_vc5)
++ if (of_device_is_compatible(dev->of_node, "brcm,bcm2711-vc5"))
++ gen = VC4_GEN_5;
++ else
++ gen = VC4_GEN_4;
++
++ if (gen == VC4_GEN_5)
+ driver = &vc5_drm_driver;
+ else
+ driver = &vc4_drm_driver;
+@@ -315,13 +319,13 @@ static int vc4_drm_bind(struct device *dev)
+ vc4 = devm_drm_dev_alloc(dev, driver, struct vc4_dev, base);
+ if (IS_ERR(vc4))
+ return PTR_ERR(vc4);
+- vc4->is_vc5 = is_vc5;
++ vc4->gen = gen;
+ vc4->dev = dev;
+
+ drm = &vc4->base;
+ platform_set_drvdata(pdev, drm);
+
+- if (!is_vc5) {
++ if (gen == VC4_GEN_4) {
+ ret = drmm_mutex_init(drm, &vc4->bin_bo_lock);
+ if (ret)
+ goto err;
+@@ -335,7 +339,7 @@ static int vc4_drm_bind(struct device *dev)
+ if (ret)
+ goto err;
+
+- if (!is_vc5) {
++ if (gen == VC4_GEN_4) {
+ ret = vc4_gem_init(drm);
+ if (ret)
+ goto err;
+diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h
+index 08e29fa825635d..dd452e6a114304 100644
+--- a/drivers/gpu/drm/vc4/vc4_drv.h
++++ b/drivers/gpu/drm/vc4/vc4_drv.h
+@@ -80,11 +80,16 @@ struct vc4_perfmon {
+ u64 counters[] __counted_by(ncounters);
+ };
+
++enum vc4_gen {
++ VC4_GEN_4,
++ VC4_GEN_5,
++};
++
+ struct vc4_dev {
+ struct drm_device base;
+ struct device *dev;
+
+- bool is_vc5;
++ enum vc4_gen gen;
+
+ unsigned int irq;
+
+@@ -315,6 +320,7 @@ struct vc4_hvs {
+ struct platform_device *pdev;
+ void __iomem *regs;
+ u32 __iomem *dlist;
++ unsigned int dlist_mem_size;
+
+ struct clk *core_clk;
+
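A side note on the is_vc5 to enum vc4_gen conversion running through
the rest of this patch: an enum keeps generation checks readable and
extensible in a way a bool cannot. A small illustrative sketch (the
future generation value is purely hypothetical):

    enum vc4_gen {
        VC4_GEN_4,
        VC4_GEN_5,
        /* hypothetically: VC4_GEN_6, ... */
    };

    /* Range checks like the mode_config.funcs one later in this patch
     * stay readable and keep working as generations are appended. */
    static bool has_vc5_style_hvs(enum vc4_gen gen)
    {
        return gen > VC4_GEN_4;
    }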
+diff --git a/drivers/gpu/drm/vc4/vc4_gem.c b/drivers/gpu/drm/vc4/vc4_gem.c
+index 24fb1b57e1dd99..be9c0b72ebe869 100644
+--- a/drivers/gpu/drm/vc4/vc4_gem.c
++++ b/drivers/gpu/drm/vc4/vc4_gem.c
+@@ -76,7 +76,7 @@ vc4_get_hang_state_ioctl(struct drm_device *dev, void *data,
+ u32 i;
+ int ret = 0;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+@@ -389,7 +389,7 @@ vc4_wait_for_seqno(struct drm_device *dev, uint64_t seqno, uint64_t timeout_ns,
+ unsigned long timeout_expire;
+ DEFINE_WAIT(wait);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (vc4->finished_seqno >= seqno)
+@@ -474,7 +474,7 @@ vc4_submit_next_bin_job(struct drm_device *dev)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_exec_info *exec;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ again:
+@@ -522,7 +522,7 @@ vc4_submit_next_render_job(struct drm_device *dev)
+ if (!exec)
+ return;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ /* A previous RCL may have written to one of our textures, and
+@@ -543,7 +543,7 @@ vc4_move_job_to_render(struct drm_device *dev, struct vc4_exec_info *exec)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ bool was_empty = list_empty(&vc4->render_job_list);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ list_move_tail(&exec->head, &vc4->render_job_list);
+@@ -970,7 +970,7 @@ vc4_job_handle_completed(struct vc4_dev *vc4)
+ unsigned long irqflags;
+ struct vc4_seqno_cb *cb, *cb_temp;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ spin_lock_irqsave(&vc4->job_lock, irqflags);
+@@ -1009,7 +1009,7 @@ int vc4_queue_seqno_cb(struct drm_device *dev,
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ unsigned long irqflags;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ cb->func = func;
+@@ -1065,7 +1065,7 @@ vc4_wait_seqno_ioctl(struct drm_device *dev, void *data,
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct drm_vc4_wait_seqno *args = data;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ return vc4_wait_for_seqno_ioctl_helper(dev, args->seqno,
+@@ -1082,7 +1082,7 @@ vc4_wait_bo_ioctl(struct drm_device *dev, void *data,
+ struct drm_gem_object *gem_obj;
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->pad != 0)
+@@ -1131,7 +1131,7 @@ vc4_submit_cl_ioctl(struct drm_device *dev, void *data,
+ args->shader_rec_size,
+ args->bo_handle_count);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+@@ -1267,7 +1267,7 @@ int vc4_gem_init(struct drm_device *dev)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ vc4->dma_fence_context = dma_fence_context_alloc(1);
+@@ -1326,7 +1326,7 @@ int vc4_gem_madvise_ioctl(struct drm_device *dev, void *data,
+ struct vc4_bo *bo;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ switch (args->madv) {
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 6611ab7c26a63c..2d7d3e90f3be44 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -147,6 +147,8 @@ static int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused)
+ if (!drm_dev_enter(drm, &idx))
+ return -ENODEV;
+
++ WARN_ON(pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev));
++
+ drm_print_regset32(&p, &vc4_hdmi->hdmi_regset);
+ drm_print_regset32(&p, &vc4_hdmi->hd_regset);
+ drm_print_regset32(&p, &vc4_hdmi->cec_regset);
+@@ -156,6 +158,8 @@ static int vc4_hdmi_debugfs_regs(struct seq_file *m, void *unused)
+ drm_print_regset32(&p, &vc4_hdmi->ram_regset);
+ drm_print_regset32(&p, &vc4_hdmi->rm_regset);
+
++ pm_runtime_put(&vc4_hdmi->pdev->dev);
++
+ drm_dev_exit(idx);
+
+ return 0;
+@@ -2047,6 +2051,7 @@ static int vc4_hdmi_audio_prepare(struct device *dev, void *data,
+ struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
+ struct drm_device *drm = vc4_hdmi->connector.dev;
+ struct drm_connector *connector = &vc4_hdmi->connector;
++ struct vc4_dev *vc4 = to_vc4_dev(drm);
+ unsigned int sample_rate = params->sample_rate;
+ unsigned int channels = params->channels;
+ unsigned long flags;
+@@ -2104,11 +2109,18 @@ static int vc4_hdmi_audio_prepare(struct device *dev, void *data,
+ VC4_HDMI_AUDIO_PACKET_CEA_MASK);
+
+ /* Set the MAI threshold */
+- HDMI_WRITE(HDMI_MAI_THR,
+- VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICHIGH) |
+- VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_PANICLOW) |
+- VC4_SET_FIELD(0x06, VC4_HD_MAI_THR_DREQHIGH) |
+- VC4_SET_FIELD(0x08, VC4_HD_MAI_THR_DREQLOW));
++ if (vc4->gen >= VC4_GEN_5)
++ HDMI_WRITE(HDMI_MAI_THR,
++ VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICHIGH) |
++ VC4_SET_FIELD(0x10, VC4_HD_MAI_THR_PANICLOW) |
++ VC4_SET_FIELD(0x1c, VC4_HD_MAI_THR_DREQHIGH) |
++ VC4_SET_FIELD(0x1c, VC4_HD_MAI_THR_DREQLOW));
++ else
++ HDMI_WRITE(HDMI_MAI_THR,
++ VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_PANICHIGH) |
++ VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_PANICLOW) |
++ VC4_SET_FIELD(0x6, VC4_HD_MAI_THR_DREQHIGH) |
++ VC4_SET_FIELD(0x8, VC4_HD_MAI_THR_DREQLOW));
+
+ HDMI_WRITE(HDMI_MAI_CONFIG,
+ VC4_HDMI_MAI_CONFIG_BIT_REVERSE |
+diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
+index 2a835a5cff9dd1..863539e1f7e04b 100644
+--- a/drivers/gpu/drm/vc4/vc4_hvs.c
++++ b/drivers/gpu/drm/vc4/vc4_hvs.c
+@@ -110,7 +110,8 @@ static int vc4_hvs_debugfs_dlist(struct seq_file *m, void *data)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct vc4_hvs *hvs = vc4->hvs;
+ struct drm_printer p = drm_seq_file_printer(m);
+- unsigned int next_entry_start = 0;
++ unsigned int dlist_mem_size = hvs->dlist_mem_size;
++ unsigned int next_entry_start;
+ unsigned int i, j;
+ u32 dlist_word, dispstat;
+
+@@ -124,8 +125,9 @@ static int vc4_hvs_debugfs_dlist(struct seq_file *m, void *data)
+ }
+
+ drm_printf(&p, "HVS chan %u:\n", i);
++ next_entry_start = 0;
+
+- for (j = HVS_READ(SCALER_DISPLISTX(i)); j < 256; j++) {
++ for (j = HVS_READ(SCALER_DISPLISTX(i)); j < dlist_mem_size; j++) {
+ dlist_word = readl((u32 __iomem *)vc4->hvs->dlist + j);
+ drm_printf(&p, "dlist: %02d: 0x%08x\n", j,
+ dlist_word);
+@@ -222,6 +224,9 @@ static void vc4_hvs_lut_load(struct vc4_hvs *hvs,
+ if (!drm_dev_enter(drm, &idx))
+ return;
+
++ if (hvs->vc4->gen != VC4_GEN_4)
++ goto exit;
++
+ /* The LUT memory is laid out with each HVS channel in order,
+ * each of which takes 256 writes for R, 256 for G, then 256
+ * for B.
+@@ -237,6 +242,7 @@ static void vc4_hvs_lut_load(struct vc4_hvs *hvs,
+ for (i = 0; i < crtc->gamma_size; i++)
+ HVS_WRITE(SCALER_GAMDATA, vc4_crtc->lut_b[i]);
+
++exit:
+ drm_dev_exit(idx);
+ }
+
+@@ -291,7 +297,7 @@ int vc4_hvs_get_fifo_from_output(struct vc4_hvs *hvs, unsigned int output)
+ u32 reg;
+ int ret;
+
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ return output;
+
+ /*
+@@ -372,7 +378,7 @@ static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
+ dispctrl = SCALER_DISPCTRLX_ENABLE;
+ dispbkgndx = HVS_READ(SCALER_DISPBKGNDX(chan));
+
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ dispctrl |= VC4_SET_FIELD(mode->hdisplay,
+ SCALER_DISPCTRLX_WIDTH) |
+ VC4_SET_FIELD(mode->vdisplay,
+@@ -394,7 +400,7 @@ static int vc4_hvs_init_channel(struct vc4_hvs *hvs, struct drm_crtc *crtc,
+ dispbkgndx &= ~SCALER_DISPBKGND_INTERLACE;
+
+ HVS_WRITE(SCALER_DISPBKGNDX(chan), dispbkgndx |
+- ((!vc4->is_vc5) ? SCALER_DISPBKGND_GAMMA : 0) |
++ ((vc4->gen == VC4_GEN_4) ? SCALER_DISPBKGND_GAMMA : 0) |
+ (interlace ? SCALER_DISPBKGND_INTERLACE : 0));
+
+ /* Reload the LUT, since the SRAMs would have been disabled if
+@@ -415,13 +421,11 @@ void vc4_hvs_stop_channel(struct vc4_hvs *hvs, unsigned int chan)
+ if (!drm_dev_enter(drm, &idx))
+ return;
+
+- if (HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_ENABLE)
++ if (!(HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_ENABLE))
+ goto out;
+
+- HVS_WRITE(SCALER_DISPCTRLX(chan),
+- HVS_READ(SCALER_DISPCTRLX(chan)) | SCALER_DISPCTRLX_RESET);
+- HVS_WRITE(SCALER_DISPCTRLX(chan),
+- HVS_READ(SCALER_DISPCTRLX(chan)) & ~SCALER_DISPCTRLX_ENABLE);
++ HVS_WRITE(SCALER_DISPCTRLX(chan), SCALER_DISPCTRLX_RESET);
++ HVS_WRITE(SCALER_DISPCTRLX(chan), 0);
+
+ /* Once we leave, the scaler should be disabled and its fifo empty. */
+ WARN_ON_ONCE(HVS_READ(SCALER_DISPCTRLX(chan)) & SCALER_DISPCTRLX_RESET);
+@@ -580,7 +584,7 @@ void vc4_hvs_atomic_flush(struct drm_crtc *crtc,
+ }
+
+ if (vc4_state->assigned_channel == VC4_HVS_CHANNEL_DISABLED)
+- return;
++ goto exit;
+
+ if (debug_dump_regs) {
+ DRM_INFO("CRTC %d HVS before:\n", drm_crtc_index(crtc));
+@@ -663,12 +667,14 @@ void vc4_hvs_atomic_flush(struct drm_crtc *crtc,
+ vc4_hvs_dump_state(hvs);
+ }
+
++exit:
+ drm_dev_exit(idx);
+ }
+
+ void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel)
+ {
+- struct drm_device *drm = &hvs->vc4->base;
++ struct vc4_dev *vc4 = hvs->vc4;
++ struct drm_device *drm = &vc4->base;
+ u32 dispctrl;
+ int idx;
+
+@@ -676,8 +682,9 @@ void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel)
+ return;
+
+ dispctrl = HVS_READ(SCALER_DISPCTRL);
+- dispctrl &= ~(hvs->vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) :
+- SCALER_DISPCTRL_DSPEISLUR(channel));
++ dispctrl &= ~((vc4->gen == VC4_GEN_5) ?
++ SCALER5_DISPCTRL_DSPEISLUR(channel) :
++ SCALER_DISPCTRL_DSPEISLUR(channel));
+
+ HVS_WRITE(SCALER_DISPCTRL, dispctrl);
+
+@@ -686,7 +693,8 @@ void vc4_hvs_mask_underrun(struct vc4_hvs *hvs, int channel)
+
+ void vc4_hvs_unmask_underrun(struct vc4_hvs *hvs, int channel)
+ {
+- struct drm_device *drm = &hvs->vc4->base;
++ struct vc4_dev *vc4 = hvs->vc4;
++ struct drm_device *drm = &vc4->base;
+ u32 dispctrl;
+ int idx;
+
+@@ -694,8 +702,9 @@ void vc4_hvs_unmask_underrun(struct vc4_hvs *hvs, int channel)
+ return;
+
+ dispctrl = HVS_READ(SCALER_DISPCTRL);
+- dispctrl |= (hvs->vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) :
+- SCALER_DISPCTRL_DSPEISLUR(channel));
++ dispctrl |= ((vc4->gen == VC4_GEN_5) ?
++ SCALER5_DISPCTRL_DSPEISLUR(channel) :
++ SCALER_DISPCTRL_DSPEISLUR(channel));
+
+ HVS_WRITE(SCALER_DISPSTAT,
+ SCALER_DISPSTAT_EUFLOW(channel));
+@@ -738,8 +747,10 @@ static irqreturn_t vc4_hvs_irq_handler(int irq, void *data)
+ control = HVS_READ(SCALER_DISPCTRL);
+
+ for (channel = 0; channel < SCALER_CHANNELS_COUNT; channel++) {
+- dspeislur = vc4->is_vc5 ? SCALER5_DISPCTRL_DSPEISLUR(channel) :
+- SCALER_DISPCTRL_DSPEISLUR(channel);
++ dspeislur = (vc4->gen == VC4_GEN_5) ?
++ SCALER5_DISPCTRL_DSPEISLUR(channel) :
++ SCALER_DISPCTRL_DSPEISLUR(channel);
++
+ /* Interrupt masking is not always honored, so check it here. */
+ if (status & SCALER_DISPSTAT_EUFLOW(channel) &&
+ control & dspeislur) {
+@@ -767,7 +778,7 @@ int vc4_hvs_debugfs_init(struct drm_minor *minor)
+ if (!vc4->hvs)
+ return -ENODEV;
+
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ debugfs_create_bool("hvs_load_tracker", S_IRUGO | S_IWUSR,
+ minor->debugfs_root,
+ &vc4->load_tracker_enabled);
+@@ -800,16 +811,17 @@ struct vc4_hvs *__vc4_hvs_alloc(struct vc4_dev *vc4, struct platform_device *pde
+ * our 16K), since we don't want to scramble the screen when
+ * transitioning from the firmware's boot setup to runtime.
+ */
++ hvs->dlist_mem_size = (SCALER_DLIST_SIZE >> 2) - HVS_BOOTLOADER_DLIST_END;
+ drm_mm_init(&hvs->dlist_mm,
+ HVS_BOOTLOADER_DLIST_END,
+- (SCALER_DLIST_SIZE >> 2) - HVS_BOOTLOADER_DLIST_END);
++ hvs->dlist_mem_size);
+
+ /* Set up the HVS LBM memory manager. We could have some more
+ * complicated data structure that allowed reuse of LBM areas
+ * between planes when they don't overlap on the screen, but
+ * for now we just allocate globally.
+ */
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ /* 48k words of 2x12-bit pixels */
+ drm_mm_init(&hvs->lbm_mm, 0, 48 * 1024);
+ else
+@@ -843,7 +855,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ hvs->regset.regs = hvs_regs;
+ hvs->regset.nregs = ARRAY_SIZE(hvs_regs);
+
+- if (vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_5) {
+ struct rpi_firmware *firmware;
+ struct device_node *node;
+ unsigned int max_rate;
+@@ -881,7 +893,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ }
+ }
+
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ hvs->dlist = hvs->regs + SCALER_DLIST_START;
+ else
+ hvs->dlist = hvs->regs + SCALER5_DLIST_START;
+@@ -922,7 +934,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ SCALER_DISPCTRL_DISPEIRQ(1) |
+ SCALER_DISPCTRL_DISPEIRQ(2);
+
+- if (!vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_4)
+ dispctrl &= ~(SCALER_DISPCTRL_DMAEIRQ |
+ SCALER_DISPCTRL_SLVWREIRQ |
+ SCALER_DISPCTRL_SLVRDEIRQ |
+@@ -966,7 +978,7 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+
+ /* Recompute Composite Output Buffer (COB) allocations for the displays
+ */
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ /* The COB is 20736 pixels, or just over 10 lines at 2048 wide.
+ * The bottom 2048 pixels are full 32bpp RGBA (intended for the
+ * TXP composing RGBA to memory), whilst the remainder are only
+diff --git a/drivers/gpu/drm/vc4/vc4_irq.c b/drivers/gpu/drm/vc4/vc4_irq.c
+index ef93d8e22a35a4..968356d1b91dfb 100644
+--- a/drivers/gpu/drm/vc4/vc4_irq.c
++++ b/drivers/gpu/drm/vc4/vc4_irq.c
+@@ -263,7 +263,7 @@ vc4_irq_enable(struct drm_device *dev)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (!vc4->v3d)
+@@ -280,7 +280,7 @@ vc4_irq_disable(struct drm_device *dev)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (!vc4->v3d)
+@@ -303,7 +303,7 @@ int vc4_irq_install(struct drm_device *dev, int irq)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (irq == IRQ_NOTCONNECTED)
+@@ -324,7 +324,7 @@ void vc4_irq_uninstall(struct drm_device *dev)
+ {
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ vc4_irq_disable(dev);
+@@ -337,7 +337,7 @@ void vc4_irq_reset(struct drm_device *dev)
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ unsigned long irqflags;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ /* Acknowledge any stale IRQs. */
+diff --git a/drivers/gpu/drm/vc4/vc4_kms.c b/drivers/gpu/drm/vc4/vc4_kms.c
+index 5495f2a94fa926..bddfcad1095013 100644
+--- a/drivers/gpu/drm/vc4/vc4_kms.c
++++ b/drivers/gpu/drm/vc4/vc4_kms.c
+@@ -369,7 +369,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
+ old_hvs_state->fifo_state[channel].pending_commit = NULL;
+ }
+
+- if (vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_5) {
+ unsigned long state_rate = max(old_hvs_state->core_clock_rate,
+ new_hvs_state->core_clock_rate);
+ unsigned long core_rate = clamp_t(unsigned long, state_rate,
+@@ -388,7 +388,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
+
+ vc4_ctm_commit(vc4, state);
+
+- if (vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_5)
+ vc5_hvs_pv_muxing_commit(vc4, state);
+ else
+ vc4_hvs_pv_muxing_commit(vc4, state);
+@@ -406,7 +406,7 @@ static void vc4_atomic_commit_tail(struct drm_atomic_state *state)
+
+ drm_atomic_helper_cleanup_planes(dev, state);
+
+- if (vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_5) {
+ unsigned long core_rate = min_t(unsigned long,
+ hvs->max_core_rate,
+ new_hvs_state->core_clock_rate);
+@@ -461,7 +461,7 @@ static struct drm_framebuffer *vc4_fb_create(struct drm_device *dev,
+ struct vc4_dev *vc4 = to_vc4_dev(dev);
+ struct drm_mode_fb_cmd2 mode_cmd_local;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return ERR_PTR(-ENODEV);
+
+ /* If the user didn't specify a modifier, use the
+@@ -1040,7 +1040,7 @@ int vc4_kms_load(struct drm_device *dev)
+ * the BCM2711, but the load tracker computations are used for
+ * the core clock rate calculation.
+ */
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ /* Start with the load tracker enabled. Can be
+ * disabled through the debugfs load_tracker file.
+ */
+@@ -1056,7 +1056,7 @@ int vc4_kms_load(struct drm_device *dev)
+ return ret;
+ }
+
+- if (vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_5) {
+ dev->mode_config.max_width = 7680;
+ dev->mode_config.max_height = 7680;
+ } else {
+@@ -1064,7 +1064,7 @@ int vc4_kms_load(struct drm_device *dev)
+ dev->mode_config.max_height = 2048;
+ }
+
+- dev->mode_config.funcs = vc4->is_vc5 ? &vc5_mode_funcs : &vc4_mode_funcs;
++ dev->mode_config.funcs = (vc4->gen > VC4_GEN_4) ? &vc5_mode_funcs : &vc4_mode_funcs;
+ dev->mode_config.helper_private = &vc4_mode_config_helpers;
+ dev->mode_config.preferred_depth = 24;
+ dev->mode_config.async_page_flip = true;
+diff --git a/drivers/gpu/drm/vc4/vc4_perfmon.c b/drivers/gpu/drm/vc4/vc4_perfmon.c
+index c00a5cc2316d20..e4fda72c19f92f 100644
+--- a/drivers/gpu/drm/vc4/vc4_perfmon.c
++++ b/drivers/gpu/drm/vc4/vc4_perfmon.c
+@@ -23,7 +23,7 @@ void vc4_perfmon_get(struct vc4_perfmon *perfmon)
+ return;
+
+ vc4 = perfmon->dev;
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ refcount_inc(&perfmon->refcnt);
+@@ -37,7 +37,7 @@ void vc4_perfmon_put(struct vc4_perfmon *perfmon)
+ return;
+
+ vc4 = perfmon->dev;
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (refcount_dec_and_test(&perfmon->refcnt))
+@@ -49,7 +49,7 @@ void vc4_perfmon_start(struct vc4_dev *vc4, struct vc4_perfmon *perfmon)
+ unsigned int i;
+ u32 mask;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (WARN_ON_ONCE(!perfmon || vc4->active_perfmon))
+@@ -69,7 +69,7 @@ void vc4_perfmon_stop(struct vc4_dev *vc4, struct vc4_perfmon *perfmon,
+ {
+ unsigned int i;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ if (WARN_ON_ONCE(!vc4->active_perfmon ||
+@@ -90,7 +90,7 @@ struct vc4_perfmon *vc4_perfmon_find(struct vc4_file *vc4file, int id)
+ struct vc4_dev *vc4 = vc4file->dev;
+ struct vc4_perfmon *perfmon;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return NULL;
+
+ mutex_lock(&vc4file->perfmon.lock);
+@@ -105,7 +105,7 @@ void vc4_perfmon_open_file(struct vc4_file *vc4file)
+ {
+ struct vc4_dev *vc4 = vc4file->dev;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_init(&vc4file->perfmon.lock);
+@@ -131,7 +131,7 @@ void vc4_perfmon_close_file(struct vc4_file *vc4file)
+ {
+ struct vc4_dev *vc4 = vc4file->dev;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_lock(&vc4file->perfmon.lock);
+@@ -151,7 +151,7 @@ int vc4_perfmon_create_ioctl(struct drm_device *dev, void *data,
+ unsigned int i;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+@@ -205,7 +205,7 @@ int vc4_perfmon_destroy_ioctl(struct drm_device *dev, void *data,
+ struct drm_vc4_perfmon_destroy *req = data;
+ struct vc4_perfmon *perfmon;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+@@ -233,7 +233,7 @@ int vc4_perfmon_get_values_ioctl(struct drm_device *dev, void *data,
+ struct vc4_perfmon *perfmon;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (!vc4->v3d) {
+diff --git a/drivers/gpu/drm/vc4/vc4_plane.c b/drivers/gpu/drm/vc4/vc4_plane.c
+index 07caf2a47c6cef..866bc46ee6d53a 100644
+--- a/drivers/gpu/drm/vc4/vc4_plane.c
++++ b/drivers/gpu/drm/vc4/vc4_plane.c
+@@ -587,10 +587,10 @@ static u32 vc4_lbm_size(struct drm_plane_state *state)
+ }
+
+ /* Align it to 64 or 128 (hvs5) bytes */
+- lbm = roundup(lbm, vc4->is_vc5 ? 128 : 64);
++ lbm = roundup(lbm, vc4->gen == VC4_GEN_5 ? 128 : 64);
+
+ /* Each "word" of the LBM memory contains 2 or 4 (hvs5) pixels */
+- lbm /= vc4->is_vc5 ? 4 : 2;
++ lbm /= vc4->gen == VC4_GEN_5 ? 4 : 2;
+
+ return lbm;
+ }
+@@ -706,7 +706,7 @@ static int vc4_plane_allocate_lbm(struct drm_plane_state *state)
+ ret = drm_mm_insert_node_generic(&vc4->hvs->lbm_mm,
+ &vc4_state->lbm,
+ lbm_size,
+- vc4->is_vc5 ? 64 : 32,
++ vc4->gen == VC4_GEN_5 ? 64 : 32,
+ 0, 0);
+ spin_unlock_irqrestore(&vc4->hvs->mm_lock, irqflags);
+
+@@ -1057,7 +1057,7 @@ static int vc4_plane_mode_set(struct drm_plane *plane,
+ mix_plane_alpha = state->alpha != DRM_BLEND_ALPHA_OPAQUE &&
+ fb->format->has_alpha;
+
+- if (!vc4->is_vc5) {
++ if (vc4->gen == VC4_GEN_4) {
+ /* Control word */
+ vc4_dlist_write(vc4_state,
+ SCALER_CTL0_VALID |
+@@ -1632,7 +1632,7 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev,
+ };
+
+ for (i = 0; i < ARRAY_SIZE(hvs_formats); i++) {
+- if (!hvs_formats[i].hvs5_only || vc4->is_vc5) {
++ if (!hvs_formats[i].hvs5_only || vc4->gen == VC4_GEN_5) {
+ formats[num_formats] = hvs_formats[i].drm;
+ num_formats++;
+ }
+@@ -1647,7 +1647,7 @@ struct drm_plane *vc4_plane_init(struct drm_device *dev,
+ return ERR_CAST(vc4_plane);
+ plane = &vc4_plane->base;
+
+- if (vc4->is_vc5)
++ if (vc4->gen == VC4_GEN_5)
+ drm_plane_helper_add(plane, &vc5_plane_helper_funcs);
+ else
+ drm_plane_helper_add(plane, &vc4_plane_helper_funcs);
+diff --git a/drivers/gpu/drm/vc4/vc4_render_cl.c b/drivers/gpu/drm/vc4/vc4_render_cl.c
+index 1bda5010f15a86..ae4ad956f04ff8 100644
+--- a/drivers/gpu/drm/vc4/vc4_render_cl.c
++++ b/drivers/gpu/drm/vc4/vc4_render_cl.c
+@@ -599,7 +599,7 @@ int vc4_get_rcl(struct drm_device *dev, struct vc4_exec_info *exec)
+ bool has_bin = args->bin_cl_size != 0;
+ int ret;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ if (args->min_x_tile > args->max_x_tile ||
+diff --git a/drivers/gpu/drm/vc4/vc4_v3d.c b/drivers/gpu/drm/vc4/vc4_v3d.c
+index bf5c4e36c94e4d..43f69d74e8761d 100644
+--- a/drivers/gpu/drm/vc4/vc4_v3d.c
++++ b/drivers/gpu/drm/vc4/vc4_v3d.c
+@@ -127,7 +127,7 @@ static int vc4_v3d_debugfs_ident(struct seq_file *m, void *unused)
+ int
+ vc4_v3d_pm_get(struct vc4_dev *vc4)
+ {
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ mutex_lock(&vc4->power_lock);
+@@ -148,7 +148,7 @@ vc4_v3d_pm_get(struct vc4_dev *vc4)
+ void
+ vc4_v3d_pm_put(struct vc4_dev *vc4)
+ {
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_lock(&vc4->power_lock);
+@@ -178,7 +178,7 @@ int vc4_v3d_get_bin_slot(struct vc4_dev *vc4)
+ uint64_t seqno = 0;
+ struct vc4_exec_info *exec;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ try_again:
+@@ -325,7 +325,7 @@ int vc4_v3d_bin_bo_get(struct vc4_dev *vc4, bool *used)
+ {
+ int ret = 0;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ mutex_lock(&vc4->bin_bo_lock);
+@@ -360,7 +360,7 @@ static void bin_bo_release(struct kref *ref)
+
+ void vc4_v3d_bin_bo_put(struct vc4_dev *vc4)
+ {
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return;
+
+ mutex_lock(&vc4->bin_bo_lock);
+diff --git a/drivers/gpu/drm/vc4/vc4_validate.c b/drivers/gpu/drm/vc4/vc4_validate.c
+index 0c17284bf6f5bb..f3d7fdbe9083c5 100644
+--- a/drivers/gpu/drm/vc4/vc4_validate.c
++++ b/drivers/gpu/drm/vc4/vc4_validate.c
+@@ -109,7 +109,7 @@ vc4_use_bo(struct vc4_exec_info *exec, uint32_t hindex)
+ struct drm_gem_dma_object *obj;
+ struct vc4_bo *bo;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return NULL;
+
+ if (hindex >= exec->bo_count) {
+@@ -169,7 +169,7 @@ vc4_check_tex_size(struct vc4_exec_info *exec, struct drm_gem_dma_object *fbo,
+ uint32_t utile_w = utile_width(cpp);
+ uint32_t utile_h = utile_height(cpp);
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return false;
+
+ /* The shaded vertex format stores signed 12.4 fixed point
+@@ -495,7 +495,7 @@ vc4_validate_bin_cl(struct drm_device *dev,
+ uint32_t dst_offset = 0;
+ uint32_t src_offset = 0;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ while (src_offset < len) {
+@@ -942,7 +942,7 @@ vc4_validate_shader_recs(struct drm_device *dev,
+ uint32_t i;
+ int ret = 0;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return -ENODEV;
+
+ for (i = 0; i < exec->shader_state_count; i++) {
+diff --git a/drivers/gpu/drm/vc4/vc4_validate_shaders.c b/drivers/gpu/drm/vc4/vc4_validate_shaders.c
+index 9745f8810eca6d..afb1a4d8268465 100644
+--- a/drivers/gpu/drm/vc4/vc4_validate_shaders.c
++++ b/drivers/gpu/drm/vc4/vc4_validate_shaders.c
+@@ -786,7 +786,7 @@ vc4_validate_shader(struct drm_gem_dma_object *shader_obj)
+ struct vc4_validated_shader_info *validated_shader = NULL;
+ struct vc4_shader_validation_state validation_state;
+
+- if (WARN_ON_ONCE(vc4->is_vc5))
++ if (WARN_ON_ONCE(vc4->gen == VC4_GEN_5))
+ return NULL;
+
+ memset(&validation_state, 0, sizeof(validation_state));
+diff --git a/drivers/gpu/drm/vkms/vkms_output.c b/drivers/gpu/drm/vkms/vkms_output.c
+index 5ce70dd946aa63..24589b947dea3d 100644
+--- a/drivers/gpu/drm/vkms/vkms_output.c
++++ b/drivers/gpu/drm/vkms/vkms_output.c
+@@ -84,7 +84,7 @@ int vkms_output_init(struct vkms_device *vkmsdev, int index)
+ DRM_MODE_CONNECTOR_VIRTUAL);
+ if (ret) {
+ DRM_ERROR("Failed to init connector\n");
+- goto err_connector;
++ return ret;
+ }
+
+ drm_connector_helper_add(connector, &vkms_conn_helper_funcs);
+@@ -119,8 +119,5 @@ int vkms_output_init(struct vkms_device *vkmsdev, int index)
+ err_encoder:
+ drm_connector_cleanup(connector);
+
+-err_connector:
+- drm_crtc_cleanup(crtc);
+-
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+index 6619a40aed1533..f4332f06b6c809 100644
+--- a/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
++++ b/drivers/gpu/drm/xe/display/xe_hdcp_gsc.c
+@@ -42,7 +42,7 @@ bool intel_hdcp_gsc_check_status(struct xe_device *xe)
+ struct xe_gsc *gsc = >->uc.gsc;
+ bool ret = true;
+
+- if (!gsc && !xe_uc_fw_is_enabled(&gsc->fw)) {
++ if (!gsc || !xe_uc_fw_is_enabled(&gsc->fw)) {
+ drm_dbg_kms(&xe->drm,
+ "GSC Components not ready for HDCP2.x\n");
+ return false;
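The one-character && to || fix above is worth spelling out: with &&, a
NULL gsc makes the first operand true, so the second operand is still
evaluated and the helper dereferences the NULL pointer. A standalone
sketch of the short-circuit difference (names are illustrative):

    #include <stdio.h>

    struct fw { int enabled; };
    struct gsc { struct fw fw; };

    static int fw_is_enabled(const struct fw *fw)
    {
        return fw->enabled;
    }

    int main(void)
    {
        struct gsc *gsc = NULL;

        /* With "!gsc && !fw_is_enabled(&gsc->fw)" the second operand
         * would still run for a NULL gsc and crash inside the helper.
         * The || form short-circuits and never touches gsc. */
        if (!gsc || !fw_is_enabled(&gsc->fw))
            printf("not ready\n");
        return 0;
    }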
+diff --git a/drivers/gpu/drm/xe/xe_sync.c b/drivers/gpu/drm/xe/xe_sync.c
+index 2e72c06fd40d07..b0684e6d2047b1 100644
+--- a/drivers/gpu/drm/xe/xe_sync.c
++++ b/drivers/gpu/drm/xe/xe_sync.c
+@@ -85,8 +85,12 @@ static void user_fence_worker(struct work_struct *w)
+ mmput(ufence->mm);
+ }
+
+- wake_up_all(&ufence->xe->ufence_wq);
++ /*
++ * Wake up waiters only after updating the ufence state, allowing the UMD
++ * to safely reuse the same ufence without encountering -EBUSY errors.
++ */
+ WRITE_ONCE(ufence->signalled, 1);
++ wake_up_all(&ufence->xe->ufence_wq);
+ user_fence_put(ufence);
+ }
+
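The reordering above is the classic "publish the state, then wake the
waiters" rule. A minimal user-space analogue with POSIX condition
variables (names are illustrative, not part of the driver):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    static bool signalled;

    /* Producer: update the state *before* waking anyone, mirroring
     * WRITE_ONCE(ufence->signalled, 1) before wake_up_all(). */
    void fence_signal(void)
    {
        pthread_mutex_lock(&lock);
        signalled = true;              /* publish first */
        pthread_cond_broadcast(&cond); /* then wake */
        pthread_mutex_unlock(&lock);
    }

    /* Consumer: re-check the predicate on every wakeup, so a waiter
     * woken before the state change simply sleeps again. */
    void fence_wait(void)
    {
        pthread_mutex_lock(&lock);
        while (!signalled)
            pthread_cond_wait(&cond, &lock);
        pthread_mutex_unlock(&lock);
    }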
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_disp.c b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+index 9368acf56eaf79..e4e0e299e8a7d5 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_disp.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_disp.c
+@@ -1200,6 +1200,9 @@ static void zynqmp_disp_layer_release_dma(struct zynqmp_disp *disp,
+ {
+ unsigned int i;
+
++ if (!layer->info)
++ return;
++
+ for (i = 0; i < layer->info->num_channels; i++) {
+ struct zynqmp_disp_layer_dma *dma = &layer->dmas[i];
+
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_kms.c b/drivers/gpu/drm/xlnx/zynqmp_kms.c
+index bd1368df787034..4556af2faa0f19 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_kms.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_kms.c
+@@ -536,7 +536,7 @@ void zynqmp_dpsub_drm_cleanup(struct zynqmp_dpsub *dpsub)
+ {
+ struct drm_device *drm = &dpsub->drm->dev;
+
+- drm_dev_unregister(drm);
++ drm_dev_unplug(drm);
+ drm_atomic_helper_shutdown(drm);
+ drm_encoder_cleanup(&dpsub->drm->encoder);
+ drm_kms_helper_poll_fini(drm);
+diff --git a/drivers/hid/hid-hyperv.c b/drivers/hid/hid-hyperv.c
+index f33485d83d24ff..0fb210e40a4127 100644
+--- a/drivers/hid/hid-hyperv.c
++++ b/drivers/hid/hid-hyperv.c
+@@ -422,6 +422,25 @@ static int mousevsc_hid_raw_request(struct hid_device *hid,
+ return 0;
+ }
+
++static int mousevsc_hid_probe(struct hid_device *hid_dev, const struct hid_device_id *id)
++{
++ int ret;
++
++ ret = hid_parse(hid_dev);
++ if (ret) {
++ hid_err(hid_dev, "parse failed\n");
++ return ret;
++ }
++
++ ret = hid_hw_start(hid_dev, HID_CONNECT_HIDINPUT | HID_CONNECT_HIDDEV);
++ if (ret) {
++ hid_err(hid_dev, "hw start failed\n");
++ return ret;
++ }
++
++ return 0;
++}
++
+ static const struct hid_ll_driver mousevsc_ll_driver = {
+ .parse = mousevsc_hid_parse,
+ .open = mousevsc_hid_open,
+@@ -431,7 +450,16 @@ static const struct hid_ll_driver mousevsc_ll_driver = {
+ .raw_request = mousevsc_hid_raw_request,
+ };
+
+-static struct hid_driver mousevsc_hid_driver;
++static const struct hid_device_id mousevsc_devices[] = {
++ { HID_DEVICE(BUS_VIRTUAL, HID_GROUP_ANY, 0x045E, 0x0621) },
++ { }
++};
++
++static struct hid_driver mousevsc_hid_driver = {
++ .name = "hid-hyperv",
++ .id_table = mousevsc_devices,
++ .probe = mousevsc_hid_probe,
++};
+
+ static int mousevsc_probe(struct hv_device *device,
+ const struct hv_vmbus_device_id *dev_id)
+@@ -473,7 +501,6 @@ static int mousevsc_probe(struct hv_device *device,
+ }
+
+ hid_dev->ll_driver = &mousevsc_ll_driver;
+- hid_dev->driver = &mousevsc_hid_driver;
+ hid_dev->bus = BUS_VIRTUAL;
+ hid_dev->vendor = input_dev->hid_dev_info.vendor;
+ hid_dev->product = input_dev->hid_dev_info.product;
+@@ -488,20 +515,6 @@ static int mousevsc_probe(struct hv_device *device,
+ if (ret)
+ goto probe_err2;
+
+-
+- ret = hid_parse(hid_dev);
+- if (ret) {
+- hid_err(hid_dev, "parse failed\n");
+- goto probe_err2;
+- }
+-
+- ret = hid_hw_start(hid_dev, HID_CONNECT_HIDINPUT | HID_CONNECT_HIDDEV);
+-
+- if (ret) {
+- hid_err(hid_dev, "hw start failed\n");
+- goto probe_err2;
+- }
+-
+ device_init_wakeup(&device->device, true);
+
+ input_dev->connected = true;
+@@ -579,12 +592,23 @@ static struct hv_driver mousevsc_drv = {
+
+ static int __init mousevsc_init(void)
+ {
+- return vmbus_driver_register(&mousevsc_drv);
++ int ret;
++
++ ret = hid_register_driver(&mousevsc_hid_driver);
++ if (ret)
++ return ret;
++
++ ret = vmbus_driver_register(&mousevsc_drv);
++ if (ret)
++ hid_unregister_driver(&mousevsc_hid_driver);
++
++ return ret;
+ }
+
+ static void __exit mousevsc_exit(void)
+ {
+ vmbus_driver_unregister(&mousevsc_drv);
++ hid_unregister_driver(&mousevsc_hid_driver);
+ }
+
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 413606bdf476df..5a599c90e7a2c7 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1353,9 +1353,9 @@ static void wacom_intuos_pro2_bt_pen(struct wacom_wac *wacom)
+ rotation -= 1800;
+
+ input_report_abs(pen_input, ABS_TILT_X,
+- (char)frame[7]);
++ (signed char)frame[7]);
+ input_report_abs(pen_input, ABS_TILT_Y,
+- (char)frame[8]);
++ (signed char)frame[8]);
+ input_report_abs(pen_input, ABS_Z, rotation);
+ input_report_abs(pen_input, ABS_WHEEL,
+ get_unaligned_le16(&frame[11]));
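The (char) to (signed char) change matters because plain char has
implementation-defined signedness; on arm and arm64 it is unsigned by
default, so negative tilt bytes were mangled there. A standalone
illustration:

    #include <stdio.h>

    int main(void)
    {
        unsigned char raw = 0xf6; /* a tilt byte encoding -10 */

        /* (char)raw is 246 on platforms where char is unsigned
         * (e.g. arm), but -10 where it is signed (e.g. x86).
         * (signed char)raw is -10 everywhere. */
        printf("%d\n", (signed char)raw);
        return 0;
    }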
+diff --git a/drivers/hwmon/aquacomputer_d5next.c b/drivers/hwmon/aquacomputer_d5next.c
+index 34cac27e4ddec3..0dcb8a3a691d69 100644
+--- a/drivers/hwmon/aquacomputer_d5next.c
++++ b/drivers/hwmon/aquacomputer_d5next.c
+@@ -597,7 +597,7 @@ struct aqc_data {
+
+ /* Sensor values */
+ s32 temp_input[20]; /* Max 4 physical and 16 virtual or 8 physical and 12 virtual */
+- s32 speed_input[8];
++ s32 speed_input[9];
+ u32 speed_input_min[1];
+ u32 speed_input_target[1];
+ u32 speed_input_max[1];
+diff --git a/drivers/hwmon/nct6775-core.c b/drivers/hwmon/nct6775-core.c
+index 934fed3dd58661..ee04795b98aabe 100644
+--- a/drivers/hwmon/nct6775-core.c
++++ b/drivers/hwmon/nct6775-core.c
+@@ -2878,8 +2878,7 @@ store_target_temp(struct device *dev, struct device_attribute *attr,
+ if (err < 0)
+ return err;
+
+- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0,
+- data->target_temp_mask);
++ val = DIV_ROUND_CLOSEST(clamp_val(val, 0, data->target_temp_mask * 1000), 1000);
+
+ mutex_lock(&data->update_lock);
+ data->target_temp[nr] = val;
+@@ -2959,7 +2958,7 @@ store_temp_tolerance(struct device *dev, struct device_attribute *attr,
+ return err;
+
+ /* Limit tolerance as needed */
+- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, data->tolerance_mask);
++ val = DIV_ROUND_CLOSEST(clamp_val(val, 0, data->tolerance_mask * 1000), 1000);
+
+ mutex_lock(&data->update_lock);
+ data->temp_tolerance[index][nr] = val;
+@@ -3085,7 +3084,7 @@ store_weight_temp(struct device *dev, struct device_attribute *attr,
+ if (err < 0)
+ return err;
+
+- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 0, 255);
++ val = DIV_ROUND_CLOSEST(clamp_val(val, 0, 255000), 1000);
+
+ mutex_lock(&data->update_lock);
+ data->weight_temp[index][nr] = val;
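All three hunks above move the clamp inside the division for the same
reason: DIV_ROUND_CLOSEST() adjusts its argument by half the divisor
before dividing, which overflows for inputs near LONG_MIN or LONG_MAX;
clamping first keeps that adjustment in range. A sketch with a
simplified stand-in for the kernel macro:

    #include <limits.h>
    #include <stdio.h>

    /* Simplified stand-in for the kernel's DIV_ROUND_CLOSEST(). */
    #define DIV_ROUND_CLOSEST(x, d) \
        (((x) > 0 ? (x) + (d) / 2 : (x) - (d) / 2) / (d))

    static long clamp_val(long v, long lo, long hi)
    {
        return v < lo ? lo : (v > hi ? hi : v);
    }

    int main(void)
    {
        long val = LONG_MIN; /* e.g. user wrote -9223372036854775808 */

        /* Old order: DIV_ROUND_CLOSEST(val, 1000) computes val - 500
         * first, which underflows LONG_MIN (undefined behaviour). */

        /* New order: clamp to 0..255000 millidegrees first, then the
         * rounding adjustment can never leave the representable range. */
        printf("%ld\n", DIV_ROUND_CLOSEST(clamp_val(val, 0, 255000), 1000));
        return 0;
    }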
+diff --git a/drivers/hwmon/pmbus/pmbus_core.c b/drivers/hwmon/pmbus/pmbus_core.c
+index ce7fd4ca9d89b0..a68b0a98e8d4db 100644
+--- a/drivers/hwmon/pmbus/pmbus_core.c
++++ b/drivers/hwmon/pmbus/pmbus_core.c
+@@ -3279,7 +3279,17 @@ static int pmbus_regulator_notify(struct pmbus_data *data, int page, int event)
+
+ static int pmbus_write_smbalert_mask(struct i2c_client *client, u8 page, u8 reg, u8 val)
+ {
+- return _pmbus_write_word_data(client, page, PMBUS_SMBALERT_MASK, reg | (val << 8));
++ int ret;
++
++ ret = _pmbus_write_word_data(client, page, PMBUS_SMBALERT_MASK, reg | (val << 8));
++
++ /*
++	 * Unconditionally clear the fault in case writing PMBUS_SMBALERT_MASK
++	 * is not supported by the chip.
++ */
++ pmbus_clear_fault_page(client, page);
++
++ return ret;
+ }
+
+ static irqreturn_t pmbus_fault_handler(int irq, void *pdata)
+diff --git a/drivers/hwmon/tps23861.c b/drivers/hwmon/tps23861.c
+index dfcfb09d9f3cdf..80fb03f30c302d 100644
+--- a/drivers/hwmon/tps23861.c
++++ b/drivers/hwmon/tps23861.c
+@@ -132,7 +132,7 @@ static int tps23861_read_temp(struct tps23861_data *data, long *val)
+ if (err < 0)
+ return err;
+
+- *val = (regval * TEMPERATURE_LSB) - 20000;
++ *val = ((long)regval * TEMPERATURE_LSB) - 20000;
+
+ return 0;
+ }
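The (long) cast above fixes an unsigned-arithmetic wraparound: regval
comes back from regmap unsigned, so for readings below 20 degC the
subtraction wrapped to a huge positive value instead of going negative.
A standalone illustration, assuming the driver's TEMPERATURE_LSB of 652
millidegrees per count (treat the exact constant as illustrative):

    #include <stdio.h>

    #define TEMPERATURE_LSB 652 /* millidegrees per count */

    int main(void)
    {
        unsigned int regval = 10; /* a reading below 20 degC */

        /* Unsigned math: 6520u - 20000 wraps, and a 64-bit long
         * stores the wrapped value 4294953816 as-is. */
        long wrong = (regval * TEMPERATURE_LSB) - 20000;

        /* Signed math: the expected -13480 millidegrees. */
        long right = ((long)regval * TEMPERATURE_LSB) - 20000;

        printf("wrong=%ld right=%ld\n", wrong, right);
        return 0;
    }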
+diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
+index 61f7c4003d2ff7..e9577f920286d0 100644
+--- a/drivers/i2c/i2c-dev.c
++++ b/drivers/i2c/i2c-dev.c
+@@ -251,10 +251,8 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client,
+ return -EOPNOTSUPP;
+
+ data_ptrs = kmalloc_array(nmsgs, sizeof(u8 __user *), GFP_KERNEL);
+- if (data_ptrs == NULL) {
+- kfree(msgs);
++ if (!data_ptrs)
+ return -ENOMEM;
+- }
+
+ res = 0;
+ for (i = 0; i < nmsgs; i++) {
+@@ -302,7 +300,6 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client,
+ for (j = 0; j < i; ++j)
+ kfree(msgs[j].buf);
+ kfree(data_ptrs);
+- kfree(msgs);
+ return res;
+ }
+
+@@ -316,7 +313,6 @@ static noinline int i2cdev_ioctl_rdwr(struct i2c_client *client,
+ kfree(msgs[i].buf);
+ }
+ kfree(data_ptrs);
+- kfree(msgs);
+ return res;
+ }
+
+@@ -446,6 +442,7 @@ static long i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ case I2C_RDWR: {
+ struct i2c_rdwr_ioctl_data rdwr_arg;
+ struct i2c_msg *rdwr_pa;
++ int res;
+
+ if (copy_from_user(&rdwr_arg,
+ (struct i2c_rdwr_ioctl_data __user *)arg,
+@@ -467,7 +464,9 @@ static long i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+ if (IS_ERR(rdwr_pa))
+ return PTR_ERR(rdwr_pa);
+
+- return i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa);
++ res = i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa);
++ kfree(rdwr_pa);
++ return res;
+ }
+
+ case I2C_SMBUS: {
+@@ -540,7 +539,7 @@ static long compat_i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned lo
+ struct i2c_rdwr_ioctl_data32 rdwr_arg;
+ struct i2c_msg32 __user *p;
+ struct i2c_msg *rdwr_pa;
+- int i;
++ int i, res;
+
+ if (copy_from_user(&rdwr_arg,
+ (struct i2c_rdwr_ioctl_data32 __user *)arg,
+@@ -573,7 +572,9 @@ static long compat_i2cdev_ioctl(struct file *file, unsigned int cmd, unsigned lo
+ };
+ }
+
+- return i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa);
++ res = i2cdev_ioctl_rdwr(client, rdwr_arg.nmsgs, rdwr_pa);
++ kfree(rdwr_pa);
++ return res;
+ }
+ case I2C_SMBUS: {
+ struct i2c_smbus_ioctl_data32 data32;
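The i2c-dev change is an ownership cleanup: i2cdev_ioctl_rdwr() used to
free the msgs array it was handed, so the two ioctl callers could not
free it themselves; now allocation and free live together in the
callers. A tiny standalone sketch of the pattern:

    #include <stdio.h>
    #include <stdlib.h>

    /* The helper only uses the buffer; it never frees it. */
    static int process(const int *msgs, int n)
    {
        int sum = 0;

        for (int i = 0; i < n; i++)
            sum += msgs[i];
        return sum;
    }

    int main(void)
    {
        int n = 4;
        int *msgs = malloc(n * sizeof(*msgs));

        if (!msgs)
            return 1;
        for (int i = 0; i < n; i++)
            msgs[i] = i + 1;

        printf("%d\n", process(msgs, n));
        free(msgs); /* single, unconditional free at the owner */
        return 0;
    }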
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 6f3eb710a75d60..ffe99f0c6acef5 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -2051,11 +2051,16 @@ int i3c_master_add_i3c_dev_locked(struct i3c_master_controller *master,
+ ibireq.max_payload_len = olddev->ibi->max_payload_len;
+ ibireq.num_slots = olddev->ibi->num_slots;
+
+- if (olddev->ibi->enabled) {
++ if (olddev->ibi->enabled)
+ enable_ibi = true;
+- i3c_dev_disable_ibi_locked(olddev);
+- }
+-
++ /*
++			 * The olddev should not receive any commands on the
++			 * i3c bus, as it no longer exists there and has been
++			 * assigned a new address; any such command would end
++			 * in a NACK or a timeout. So update the
++			 * olddev->ibi->enabled flag to false to avoid sending
++			 * DISEC to OldAddr.
++ */
++ olddev->ibi->enabled = false;
+ i3c_dev_free_ibi_locked(olddev);
+ }
+ mutex_unlock(&olddev->ibi_lock);
+diff --git a/drivers/iio/accel/adxl380.c b/drivers/iio/accel/adxl380.c
+index f80527d899be4d..b19ee37df7f12e 100644
+--- a/drivers/iio/accel/adxl380.c
++++ b/drivers/iio/accel/adxl380.c
+@@ -1181,7 +1181,7 @@ static int adxl380_read_raw(struct iio_dev *indio_dev,
+
+ ret = adxl380_read_chn(st, chan->address);
+ iio_device_release_direct_mode(indio_dev);
+- if (ret)
++ if (ret < 0)
+ return ret;
+
+ *val = sign_extend32(ret >> chan->scan_type.shift,
+diff --git a/drivers/iio/adc/ad4000.c b/drivers/iio/adc/ad4000.c
+index 6ea49124508499..b3b82535f5c14d 100644
+--- a/drivers/iio/adc/ad4000.c
++++ b/drivers/iio/adc/ad4000.c
+@@ -344,6 +344,8 @@ static int ad4000_single_conversion(struct iio_dev *indio_dev,
+
+ if (chan->scan_type.sign == 's')
+ *val = sign_extend32(sample, chan->scan_type.realbits - 1);
++ else
++ *val = sample;
+
+ return IIO_VAL_INT;
+ }
+@@ -637,7 +639,9 @@ static int ad4000_probe(struct spi_device *spi)
+ indio_dev->name = chip->dev_name;
+ indio_dev->num_channels = 1;
+
+- devm_mutex_init(dev, &st->lock);
++ ret = devm_mutex_init(dev, &st->lock);
++ if (ret)
++ return ret;
+
+ st->gain_milli = 1000;
+ if (chip->has_hardware_gain) {
+diff --git a/drivers/iio/adc/pac1921.c b/drivers/iio/adc/pac1921.c
+index 36e813d9c73f1c..fe1d9e07fce24d 100644
+--- a/drivers/iio/adc/pac1921.c
++++ b/drivers/iio/adc/pac1921.c
+@@ -1171,7 +1171,9 @@ static int pac1921_probe(struct i2c_client *client)
+ return dev_err_probe(dev, (int)PTR_ERR(priv->regmap),
+ "Cannot initialize register map\n");
+
+- devm_mutex_init(dev, &priv->lock);
++ ret = devm_mutex_init(dev, &priv->lock);
++ if (ret)
++ return ret;
+
+ priv->dv_gain = PAC1921_DEFAULT_DV_GAIN;
+ priv->di_gain = PAC1921_DEFAULT_DI_GAIN;
+diff --git a/drivers/iio/dac/adi-axi-dac.c b/drivers/iio/dac/adi-axi-dac.c
+index 0cb00f3bec0453..b8b4171b80436b 100644
+--- a/drivers/iio/dac/adi-axi-dac.c
++++ b/drivers/iio/dac/adi-axi-dac.c
+@@ -46,7 +46,7 @@
+ #define AXI_DAC_REG_CNTRL_1 0x0044
+ #define AXI_DAC_SYNC BIT(0)
+ #define AXI_DAC_REG_CNTRL_2 0x0048
+-#define ADI_DAC_R1_MODE BIT(4)
++#define ADI_DAC_R1_MODE BIT(5)
+ #define AXI_DAC_DRP_STATUS 0x0074
+ #define AXI_DAC_DRP_LOCKED BIT(17)
+ /* DAC Channel controls */
+diff --git a/drivers/iio/industrialio-backend.c b/drivers/iio/industrialio-backend.c
+index 20b3b5212da76a..fb34a8e4d04e74 100644
+--- a/drivers/iio/industrialio-backend.c
++++ b/drivers/iio/industrialio-backend.c
+@@ -737,8 +737,8 @@ static struct iio_backend *__devm_iio_backend_fwnode_get(struct device *dev, con
+ }
+
+ fwnode_back = fwnode_find_reference(fwnode, "io-backends", index);
+- if (IS_ERR(fwnode))
+- return dev_err_cast_probe(dev, fwnode,
++ if (IS_ERR(fwnode_back))
++ return dev_err_cast_probe(dev, fwnode_back,
+ "Cannot get Firmware reference\n");
+
+ guard(mutex)(&iio_back_lock);
+diff --git a/drivers/iio/industrialio-gts-helper.c b/drivers/iio/industrialio-gts-helper.c
+index 5f131bc1a01e97..4ad949672210ba 100644
+--- a/drivers/iio/industrialio-gts-helper.c
++++ b/drivers/iio/industrialio-gts-helper.c
+@@ -167,7 +167,7 @@ static int iio_gts_gain_cmp(const void *a, const void *b)
+
+ static int gain_to_scaletables(struct iio_gts *gts, int **gains, int **scales)
+ {
+- int ret, i, j, new_idx, time_idx;
++ int i, j, new_idx, time_idx, ret = 0;
+ int *all_gains;
+ size_t gain_bytes;
+
+diff --git a/drivers/iio/light/al3010.c b/drivers/iio/light/al3010.c
+index 53569587ccb7ba..7cbb8b20330090 100644
+--- a/drivers/iio/light/al3010.c
++++ b/drivers/iio/light/al3010.c
+@@ -87,7 +87,12 @@ static int al3010_init(struct al3010_data *data)
+ int ret;
+
+ ret = al3010_set_pwr(data->client, true);
++ if (ret < 0)
++ return ret;
+
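++ /* Arrange automatic power-off if anything later in probe fails. */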
++ ret = devm_add_action_or_reset(&data->client->dev,
++ al3010_set_pwr_off,
++ data);
+ if (ret < 0)
+ return ret;
+
+@@ -190,12 +195,6 @@ static int al3010_probe(struct i2c_client *client)
+ return ret;
+ }
+
+- ret = devm_add_action_or_reset(&client->dev,
+- al3010_set_pwr_off,
+- data);
+- if (ret < 0)
+- return ret;
+-
+ return devm_iio_device_register(&client->dev, indio_dev);
+ }
+
+diff --git a/drivers/infiniband/core/roce_gid_mgmt.c b/drivers/infiniband/core/roce_gid_mgmt.c
+index d5131b3ba8ab04..a9f2c6b1b29ed2 100644
+--- a/drivers/infiniband/core/roce_gid_mgmt.c
++++ b/drivers/infiniband/core/roce_gid_mgmt.c
+@@ -515,6 +515,27 @@ void rdma_roce_rescan_device(struct ib_device *ib_dev)
+ }
+ EXPORT_SYMBOL(rdma_roce_rescan_device);
+
++/**
++ * rdma_roce_rescan_port - Rescan all network devices in the system and add
++ * their GIDs, if relevant, to the given port of the RoCE device.
++ *
++ * @ib_dev: IB device
++ * @port: Port number
++ */
++void rdma_roce_rescan_port(struct ib_device *ib_dev, u32 port)
++{
++ struct net_device *ndev = NULL;
++
++ if (rdma_protocol_roce(ib_dev, port)) {
++ ndev = ib_device_get_netdev(ib_dev, port);
++ if (!ndev)
++ return;
++ enum_all_gids_of_dev_cb(ib_dev, port, ndev, ndev);
++ dev_put(ndev);
++ }
++}
++EXPORT_SYMBOL(rdma_roce_rescan_port);
++
+ static void callback_for_addr_gid_device_scan(struct ib_device *device,
+ u32 port,
+ struct net_device *rdma_ndev,
+@@ -575,16 +596,17 @@ static void handle_netdev_upper(struct ib_device *ib_dev, u32 port,
+ }
+ }
+
+-static void _roce_del_all_netdev_gids(struct ib_device *ib_dev, u32 port,
+- struct net_device *event_ndev)
++void roce_del_all_netdev_gids(struct ib_device *ib_dev,
++ u32 port, struct net_device *ndev)
+ {
+- ib_cache_gid_del_all_netdev_gids(ib_dev, port, event_ndev);
++ ib_cache_gid_del_all_netdev_gids(ib_dev, port, ndev);
+ }
++EXPORT_SYMBOL(roce_del_all_netdev_gids);
+
+ static void del_netdev_upper_ips(struct ib_device *ib_dev, u32 port,
+ struct net_device *rdma_ndev, void *cookie)
+ {
+- handle_netdev_upper(ib_dev, port, cookie, _roce_del_all_netdev_gids);
++ handle_netdev_upper(ib_dev, port, cookie, roce_del_all_netdev_gids);
+ }
+
+ static void add_netdev_upper_ips(struct ib_device *ib_dev, u32 port,
+diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h
+index 821d93c8f7123c..dfd2e5a86e6fe5 100644
+--- a/drivers/infiniband/core/uverbs.h
++++ b/drivers/infiniband/core/uverbs.h
+@@ -160,6 +160,8 @@ struct ib_uverbs_file {
+ struct page *disassociate_page;
+
+ struct xarray idr;
++
++ struct mutex disassociation_lock;
+ };
+
+ struct ib_uverbs_event {
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 94454186ed81d5..85cfc790a7bb36 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -76,6 +76,7 @@ static dev_t dynamic_uverbs_dev;
+ static DEFINE_IDA(uverbs_ida);
+ static int ib_uverbs_add_one(struct ib_device *device);
+ static void ib_uverbs_remove_one(struct ib_device *device, void *client_data);
++static struct ib_client uverbs_client;
+
+ static char *uverbs_devnode(const struct device *dev, umode_t *mode)
+ {
+@@ -217,6 +218,7 @@ void ib_uverbs_release_file(struct kref *ref)
+
+ if (file->disassociate_page)
+ __free_pages(file->disassociate_page, 0);
++ mutex_destroy(&file->disassociation_lock);
+ mutex_destroy(&file->umap_lock);
+ mutex_destroy(&file->ucontext_lock);
+ kfree(file);
+@@ -698,8 +700,13 @@ static int ib_uverbs_mmap(struct file *filp, struct vm_area_struct *vma)
+ ret = PTR_ERR(ucontext);
+ goto out;
+ }
++
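++ /* Serialize with uverbs_user_mmap_disassociate() so a new mapping cannot race HW teardown. */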
++ mutex_lock(&file->disassociation_lock);
++
+ vma->vm_ops = &rdma_umap_ops;
+ ret = ucontext->device->ops.mmap(ucontext, vma);
++
++ mutex_unlock(&file->disassociation_lock);
+ out:
+ srcu_read_unlock(&file->device->disassociate_srcu, srcu_key);
+ return ret;
+@@ -721,6 +728,8 @@ static void rdma_umap_open(struct vm_area_struct *vma)
+ /* We are racing with disassociation */
+ if (!down_read_trylock(&ufile->hw_destroy_rwsem))
+ goto out_zap;
++ mutex_lock(&ufile->disassociation_lock);
++
+ /*
+ * Disassociation already completed, the VMA should already be zapped.
+ */
+@@ -732,10 +741,12 @@ static void rdma_umap_open(struct vm_area_struct *vma)
+ goto out_unlock;
+ rdma_umap_priv_init(priv, vma, opriv->entry);
+
++ mutex_unlock(&ufile->disassociation_lock);
+ up_read(&ufile->hw_destroy_rwsem);
+ return;
+
+ out_unlock:
++ mutex_unlock(&ufile->disassociation_lock);
+ up_read(&ufile->hw_destroy_rwsem);
+ out_zap:
+ /*
+@@ -819,7 +830,7 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ {
+ struct rdma_umap_priv *priv, *next_priv;
+
+- lockdep_assert_held(&ufile->hw_destroy_rwsem);
++ mutex_lock(&ufile->disassociation_lock);
+
+ while (1) {
+ struct mm_struct *mm = NULL;
+@@ -845,8 +856,10 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ break;
+ }
+ mutex_unlock(&ufile->umap_lock);
+- if (!mm)
++ if (!mm) {
++ mutex_unlock(&ufile->disassociation_lock);
+ return;
++ }
+
+ /*
+ * The umap_lock is nested under mmap_lock since it used within
+@@ -876,7 +889,31 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ mmap_read_unlock(mm);
+ mmput(mm);
+ }
++
++ mutex_unlock(&ufile->disassociation_lock);
++}
++
++/**
++ * rdma_user_mmap_disassociate() - Revoke mmaps for a device
++ * @device: device to revoke
++ *
++ * This function should be called by drivers that need to disable mmaps for the
++ * device, for instance because it is going to be reset.
++ */
++void rdma_user_mmap_disassociate(struct ib_device *device)
++{
++ struct ib_uverbs_device *uverbs_dev =
++ ib_get_client_data(device, &uverbs_client);
++ struct ib_uverbs_file *ufile;
++
++ mutex_lock(&uverbs_dev->lists_mutex);
++ list_for_each_entry(ufile, &uverbs_dev->uverbs_file_list, list) {
++ if (ufile->ucontext)
++ uverbs_user_mmap_disassociate(ufile);
++ }
++ mutex_unlock(&uverbs_dev->lists_mutex);
+ }
++EXPORT_SYMBOL(rdma_user_mmap_disassociate);
+
+ /*
+ * ib_uverbs_open() does not need the BKL:
+@@ -947,6 +984,8 @@ static int ib_uverbs_open(struct inode *inode, struct file *filp)
+ mutex_init(&file->umap_lock);
+ INIT_LIST_HEAD(&file->umaps);
+
++ mutex_init(&file->disassociation_lock);
++
+ filp->private_data = file;
+ list_add_tail(&file->list, &dev->uverbs_file_list);
+ mutex_unlock(&dev->lists_mutex);
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index e66ae9f22c710c..160096792224b1 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -3633,7 +3633,7 @@ static void bnxt_re_process_res_shadow_qp_wc(struct bnxt_re_qp *gsi_sqp,
+ wc->byte_len = orig_cqe->length;
+ wc->qp = &gsi_qp->ib_qp;
+
+- wc->ex.imm_data = cpu_to_be32(le32_to_cpu(orig_cqe->immdata));
++ wc->ex.imm_data = cpu_to_be32(orig_cqe->immdata);
+ wc->src_qp = orig_cqe->src_qp;
+ memcpy(wc->smac, orig_cqe->smac, ETH_ALEN);
+ if (bnxt_re_is_vlan_pkt(orig_cqe, &vlan_id, &sl)) {
+@@ -3778,7 +3778,10 @@ int bnxt_re_poll_cq(struct ib_cq *ib_cq, int num_entries, struct ib_wc *wc)
+ (unsigned long)(cqe->qp_handle),
+ struct bnxt_re_qp, qplib_qp);
+ wc->qp = &qp->ib_qp;
+- wc->ex.imm_data = cpu_to_be32(le32_to_cpu(cqe->immdata));
++ if (cqe->flags & CQ_RES_RC_FLAGS_IMM)
++ wc->ex.imm_data = cpu_to_be32(cqe->immdata);
++ else
++ wc->ex.invalidate_rkey = cqe->invrkey;
+ wc->src_qp = cqe->src_qp;
+ memcpy(wc->smac, cqe->smac, ETH_ALEN);
+ wc->port_num = 1;
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 9eb290ec71a85d..2ac8ddbed576f5 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -2033,12 +2033,6 @@ static int bnxt_re_suspend(struct auxiliary_device *adev, pm_message_t state)
+ rdev = en_info->rdev;
+ en_dev = en_info->en_dev;
+ mutex_lock(&bnxt_re_mutex);
+- /* L2 driver may invoke this callback during device error/crash or device
+- * reset. Current RoCE driver doesn't recover the device in case of
+- * error. Handle the error by dispatching fatal events to all qps
+- * ie. by calling bnxt_re_dev_stop and release the MSIx vectors as
+- * L2 driver want to modify the MSIx table.
+- */
+
+ ibdev_info(&rdev->ibdev, "Handle device suspend call");
+ /* Check the current device state from bnxt_en_dev and move the
+@@ -2046,17 +2040,12 @@ static int bnxt_re_suspend(struct auxiliary_device *adev, pm_message_t state)
+ * This prevents more commands to HW during clean-up,
+ * in case the device is already in error.
+ */
+- if (test_bit(BNXT_STATE_FW_FATAL_COND, &rdev->en_dev->en_state))
++ if (test_bit(BNXT_STATE_FW_FATAL_COND, &rdev->en_dev->en_state)) {
+ set_bit(ERR_DEVICE_DETACHED, &rdev->rcfw.cmdq.flags);
+-
+- bnxt_re_dev_stop(rdev);
+- bnxt_re_stop_irq(adev);
+- /* Move the device states to detached and avoid sending any more
+- * commands to HW
+- */
+- set_bit(BNXT_RE_FLAG_ERR_DEVICE_DETACHED, &rdev->flags);
+- set_bit(ERR_DEVICE_DETACHED, &rdev->rcfw.cmdq.flags);
+- wake_up_all(&rdev->rcfw.cmdq.waitq);
++ set_bit(BNXT_RE_FLAG_ERR_DEVICE_DETACHED, &rdev->flags);
++ wake_up_all(&rdev->rcfw.cmdq.waitq);
++ bnxt_re_dev_stop(rdev);
++ }
+
+ if (rdev->pacing.dbr_pacing)
+ bnxt_re_set_pacing_dev_state(rdev);
+@@ -2075,13 +2064,6 @@ static int bnxt_re_resume(struct auxiliary_device *adev)
+ struct bnxt_re_dev *rdev;
+
+ mutex_lock(&bnxt_re_mutex);
+- /* L2 driver may invoke this callback during device recovery, resume.
+- * reset. Current RoCE driver doesn't recover the device in case of
+- * error. Handle the error by dispatching fatal events to all qps
+- * ie. by calling bnxt_re_dev_stop and release the MSIx vectors as
+- * L2 driver want to modify the MSIx table.
+- */
+-
+ bnxt_re_add_device(adev, BNXT_RE_POST_RECOVERY_INIT);
+ rdev = en_info->rdev;
+ ibdev_info(&rdev->ibdev, "Device resume completed");
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index 820611a239433a..f55958e5fddb4a 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -391,7 +391,7 @@ struct bnxt_qplib_cqe {
+ u16 cfa_meta;
+ u64 wr_id;
+ union {
+- __le32 immdata;
++ u32 immdata;
+ u32 invrkey;
+ };
+ u64 qp_handle;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
+index 4ec66611a14340..4106423a1b399d 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
+@@ -179,8 +179,8 @@ static void free_cqc(struct hns_roce_dev *hr_dev, struct hns_roce_cq *hr_cq)
+ ret = hns_roce_destroy_hw_ctx(hr_dev, HNS_ROCE_CMD_DESTROY_CQC,
+ hr_cq->cqn);
+ if (ret)
+- dev_err(dev, "DESTROY_CQ failed (%d) for CQN %06lx\n", ret,
+- hr_cq->cqn);
++ dev_err_ratelimited(dev, "DESTROY_CQ failed (%d) for CQN %06lx\n",
++ ret, hr_cq->cqn);
+
+ xa_erase_irq(&cq_table->array, hr_cq->cqn);
+
+diff --git a/drivers/infiniband/hw/hns/hns_roce_debugfs.c b/drivers/infiniband/hw/hns/hns_roce_debugfs.c
+index e8febb40f6450c..b869cdc5411893 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_debugfs.c
++++ b/drivers/infiniband/hw/hns/hns_roce_debugfs.c
+@@ -5,6 +5,7 @@
+
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
++#include <linux/pci.h>
+
+ #include "hns_roce_device.h"
+
+@@ -86,7 +87,7 @@ void hns_roce_register_debugfs(struct hns_roce_dev *hr_dev)
+ {
+ struct hns_roce_dev_debugfs *dbgfs = &hr_dev->dbgfs;
+
+- dbgfs->root = debugfs_create_dir(dev_name(&hr_dev->ib_dev.dev),
++ dbgfs->root = debugfs_create_dir(pci_name(hr_dev->pci_dev),
+ hns_roce_dbgfs_root);
+
+ create_sw_stat_debugfs(hr_dev, dbgfs->root);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_device.h b/drivers/infiniband/hw/hns/hns_roce_device.h
+index 0b1e21cb6d2d38..560a1d9de408ff 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_device.h
++++ b/drivers/infiniband/hw/hns/hns_roce_device.h
+@@ -489,12 +489,6 @@ struct hns_roce_bank {
+ u32 next; /* Next ID to allocate. */
+ };
+
+-struct hns_roce_idx_table {
+- u32 *spare_idx;
+- u32 head;
+- u32 tail;
+-};
+-
+ struct hns_roce_qp_table {
+ struct hns_roce_hem_table qp_table;
+ struct hns_roce_hem_table irrl_table;
+@@ -503,7 +497,7 @@ struct hns_roce_qp_table {
+ struct mutex scc_mutex;
+ struct hns_roce_bank bank[HNS_ROCE_QP_BANK_NUM];
+ struct mutex bank_mutex;
+- struct hns_roce_idx_table idx_table;
++ struct xarray dip_xa;
+ };
+
+ struct hns_roce_cq_table {
+@@ -593,6 +587,7 @@ struct hns_roce_dev;
+
+ enum {
+ HNS_ROCE_FLUSH_FLAG = 0,
++ HNS_ROCE_STOP_FLUSH_FLAG = 1,
+ };
+
+ struct hns_roce_work {
+@@ -656,6 +651,8 @@ struct hns_roce_qp {
+ enum hns_roce_cong_type cong_type;
+ u8 tc_mode;
+ u8 priority;
++ spinlock_t flush_lock;
++ struct hns_roce_dip *dip;
+ };
+
+ struct hns_roce_ib_iboe {
+@@ -982,8 +979,6 @@ struct hns_roce_dev {
+ enum hns_roce_device_state state;
+ struct list_head qp_list; /* list of all qps on this dev */
+ spinlock_t qp_list_lock; /* protect qp_list */
+- struct list_head dip_list; /* list of all dest ips on this dev */
+- spinlock_t dip_list_lock; /* protect dip_list */
+
+ struct list_head pgdir_list;
+ struct mutex pgdir_mutex;
+@@ -1289,6 +1284,7 @@ void hns_roce_cq_completion(struct hns_roce_dev *hr_dev, u32 cqn);
+ void hns_roce_cq_event(struct hns_roce_dev *hr_dev, u32 cqn, int event_type);
+ void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp);
+ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type);
++void hns_roce_flush_cqe(struct hns_roce_dev *hr_dev, u32 qpn);
+ void hns_roce_srq_event(struct hns_roce_dev *hr_dev, u32 srqn, int event_type);
+ void hns_roce_handle_device_err(struct hns_roce_dev *hr_dev);
+ int hns_roce_init(struct hns_roce_dev *hr_dev);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index c7c167e2a04513..f84521be3bea4a 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -300,7 +300,7 @@ static int calc_hem_config(struct hns_roce_dev *hr_dev,
+ struct hns_roce_hem_mhop *mhop,
+ struct hns_roce_hem_index *index)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
++ struct device *dev = hr_dev->dev;
+ unsigned long mhop_obj = obj;
+ u32 l0_idx, l1_idx, l2_idx;
+ u32 chunk_ba_num;
+@@ -331,14 +331,14 @@ static int calc_hem_config(struct hns_roce_dev *hr_dev,
+ index->buf = l0_idx;
+ break;
+ default:
+- ibdev_err(ibdev, "table %u not support mhop.hop_num = %u!\n",
+- table->type, mhop->hop_num);
++ dev_err(dev, "table %u not support mhop.hop_num = %u!\n",
++ table->type, mhop->hop_num);
+ return -EINVAL;
+ }
+
+ if (unlikely(index->buf >= table->num_hem)) {
+- ibdev_err(ibdev, "table %u exceed hem limt idx %llu, max %lu!\n",
+- table->type, index->buf, table->num_hem);
++ dev_err(dev, "table %u exceed hem limt idx %llu, max %lu!\n",
++ table->type, index->buf, table->num_hem);
+ return -EINVAL;
+ }
+
+@@ -448,14 +448,14 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev,
+ struct hns_roce_hem_mhop *mhop,
+ struct hns_roce_hem_index *index)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
++ struct device *dev = hr_dev->dev;
+ u32 step_idx;
+ int ret = 0;
+
+ if (index->inited & HEM_INDEX_L0) {
+ ret = hr_dev->hw->set_hem(hr_dev, table, obj, 0);
+ if (ret) {
+- ibdev_err(ibdev, "set HEM step 0 failed!\n");
++ dev_err(dev, "set HEM step 0 failed!\n");
+ goto out;
+ }
+ }
+@@ -463,7 +463,7 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev,
+ if (index->inited & HEM_INDEX_L1) {
+ ret = hr_dev->hw->set_hem(hr_dev, table, obj, 1);
+ if (ret) {
+- ibdev_err(ibdev, "set HEM step 1 failed!\n");
++ dev_err(dev, "set HEM step 1 failed!\n");
+ goto out;
+ }
+ }
+@@ -475,7 +475,7 @@ static int set_mhop_hem(struct hns_roce_dev *hr_dev,
+ step_idx = mhop->hop_num;
+ ret = hr_dev->hw->set_hem(hr_dev, table, obj, step_idx);
+ if (ret)
+- ibdev_err(ibdev, "set HEM step last failed!\n");
++ dev_err(dev, "set HEM step last failed!\n");
+ }
+ out:
+ return ret;
+@@ -485,14 +485,14 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+ struct hns_roce_hem_table *table,
+ unsigned long obj)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct hns_roce_hem_index index = {};
+ struct hns_roce_hem_mhop mhop = {};
++ struct device *dev = hr_dev->dev;
+ int ret;
+
+ ret = calc_hem_config(hr_dev, table, obj, &mhop, &index);
+ if (ret) {
+- ibdev_err(ibdev, "calc hem config failed!\n");
++ dev_err(dev, "calc hem config failed!\n");
+ return ret;
+ }
+
+@@ -504,7 +504,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+
+ ret = alloc_mhop_hem(hr_dev, table, &mhop, &index);
+ if (ret) {
+- ibdev_err(ibdev, "alloc mhop hem failed!\n");
++ dev_err(dev, "alloc mhop hem failed!\n");
+ goto out;
+ }
+
+@@ -512,7 +512,7 @@ static int hns_roce_table_mhop_get(struct hns_roce_dev *hr_dev,
+ if (table->type < HEM_TYPE_MTT) {
+ ret = set_mhop_hem(hr_dev, table, obj, &mhop, &index);
+ if (ret) {
+- ibdev_err(ibdev, "set HEM address to HW failed!\n");
++ dev_err(dev, "set HEM address to HW failed!\n");
+ goto err_alloc;
+ }
+ }
+@@ -575,7 +575,7 @@ static void clear_mhop_hem(struct hns_roce_dev *hr_dev,
+ struct hns_roce_hem_mhop *mhop,
+ struct hns_roce_hem_index *index)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
++ struct device *dev = hr_dev->dev;
+ u32 hop_num = mhop->hop_num;
+ u32 chunk_ba_num;
+ u32 step_idx;
+@@ -605,21 +605,21 @@ static void clear_mhop_hem(struct hns_roce_dev *hr_dev,
+
+ ret = hr_dev->hw->clear_hem(hr_dev, table, obj, step_idx);
+ if (ret)
+- ibdev_warn(ibdev, "failed to clear hop%u HEM, ret = %d.\n",
+- hop_num, ret);
++ dev_warn(dev, "failed to clear hop%u HEM, ret = %d.\n",
++ hop_num, ret);
+
+ if (index->inited & HEM_INDEX_L1) {
+ ret = hr_dev->hw->clear_hem(hr_dev, table, obj, 1);
+ if (ret)
+- ibdev_warn(ibdev, "failed to clear HEM step 1, ret = %d.\n",
+- ret);
++ dev_warn(dev, "failed to clear HEM step 1, ret = %d.\n",
++ ret);
+ }
+
+ if (index->inited & HEM_INDEX_L0) {
+ ret = hr_dev->hw->clear_hem(hr_dev, table, obj, 0);
+ if (ret)
+- ibdev_warn(ibdev, "failed to clear HEM step 0, ret = %d.\n",
+- ret);
++ dev_warn(dev, "failed to clear HEM step 0, ret = %d.\n",
++ ret);
+ }
+ }
+ }
+@@ -629,14 +629,14 @@ static void hns_roce_table_mhop_put(struct hns_roce_dev *hr_dev,
+ unsigned long obj,
+ int check_refcount)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct hns_roce_hem_index index = {};
+ struct hns_roce_hem_mhop mhop = {};
++ struct device *dev = hr_dev->dev;
+ int ret;
+
+ ret = calc_hem_config(hr_dev, table, obj, &mhop, &index);
+ if (ret) {
+- ibdev_err(ibdev, "calc hem config failed!\n");
++ dev_err(dev, "calc hem config failed!\n");
+ return;
+ }
+
+@@ -672,8 +672,8 @@ void hns_roce_table_put(struct hns_roce_dev *hr_dev,
+
+ ret = hr_dev->hw->clear_hem(hr_dev, table, obj, HEM_HOP_STEP_DIRECT);
+ if (ret)
+- dev_warn(dev, "failed to clear HEM base address, ret = %d.\n",
+- ret);
++ dev_warn_ratelimited(dev, "failed to clear HEM base address, ret = %d.\n",
++ ret);
+
+ hns_roce_free_hem(hr_dev, table->hem[i]);
+ table->hem[i] = NULL;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 24e906b9d3ae13..697b17cca02e71 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -373,19 +373,12 @@ static int set_rwqe_data_seg(struct ib_qp *ibqp, const struct ib_send_wr *wr,
+ static int check_send_valid(struct hns_roce_dev *hr_dev,
+ struct hns_roce_qp *hr_qp)
+ {
+- struct ib_device *ibdev = &hr_dev->ib_dev;
+-
+ if (unlikely(hr_qp->state == IB_QPS_RESET ||
+ hr_qp->state == IB_QPS_INIT ||
+- hr_qp->state == IB_QPS_RTR)) {
+- ibdev_err(ibdev, "failed to post WQE, QP state %u!\n",
+- hr_qp->state);
++ hr_qp->state == IB_QPS_RTR))
+ return -EINVAL;
+- } else if (unlikely(hr_dev->state >= HNS_ROCE_DEVICE_STATE_RST_DOWN)) {
+- ibdev_err(ibdev, "failed to post WQE, dev state %d!\n",
+- hr_dev->state);
++ else if (unlikely(hr_dev->state >= HNS_ROCE_DEVICE_STATE_RST_DOWN))
+ return -EIO;
+- }
+
+ return 0;
+ }
+@@ -582,7 +575,7 @@ static inline int set_rc_wqe(struct hns_roce_qp *qp,
+ if (WARN_ON(ret))
+ return ret;
+
+- hr_reg_write(rc_sq_wqe, RC_SEND_WQE_FENCE,
++ hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SO,
+ (wr->send_flags & IB_SEND_FENCE) ? 1 : 0);
+
+ hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SE,
+@@ -2560,20 +2553,19 @@ static void hns_roce_free_link_table(struct hns_roce_dev *hr_dev)
+ free_link_table_buf(hr_dev, &priv->ext_llm);
+ }
+
+-static void free_dip_list(struct hns_roce_dev *hr_dev)
++static void free_dip_entry(struct hns_roce_dev *hr_dev)
+ {
+ struct hns_roce_dip *hr_dip;
+- struct hns_roce_dip *tmp;
+- unsigned long flags;
++ unsigned long idx;
+
+- spin_lock_irqsave(&hr_dev->dip_list_lock, flags);
++ xa_lock(&hr_dev->qp_table.dip_xa);
+
+- list_for_each_entry_safe(hr_dip, tmp, &hr_dev->dip_list, node) {
+- list_del(&hr_dip->node);
++ xa_for_each(&hr_dev->qp_table.dip_xa, idx, hr_dip) {
++ __xa_erase(&hr_dev->qp_table.dip_xa, hr_dip->dip_idx);
+ kfree(hr_dip);
+ }
+
+- spin_unlock_irqrestore(&hr_dev->dip_list_lock, flags);
++ xa_unlock(&hr_dev->qp_table.dip_xa);
+ }
+
+ static struct ib_pd *free_mr_init_pd(struct hns_roce_dev *hr_dev)
+@@ -2775,8 +2767,8 @@ static int free_mr_modify_rsv_qp(struct hns_roce_dev *hr_dev,
+ ret = hr_dev->hw->modify_qp(&hr_qp->ibqp, attr, mask, IB_QPS_INIT,
+ IB_QPS_INIT, NULL);
+ if (ret) {
+- ibdev_err(ibdev, "failed to modify qp to init, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(ibdev, "failed to modify qp to init, ret = %d.\n",
++ ret);
+ return ret;
+ }
+
+@@ -2981,7 +2973,7 @@ static void hns_roce_v2_exit(struct hns_roce_dev *hr_dev)
+ hns_roce_free_link_table(hr_dev);
+
+ if (hr_dev->pci_dev->revision == PCI_REVISION_ID_HIP09)
+- free_dip_list(hr_dev);
++ free_dip_entry(hr_dev);
+ }
+
+ static int hns_roce_mbox_post(struct hns_roce_dev *hr_dev,
+@@ -3421,8 +3413,8 @@ static int free_mr_post_send_lp_wqe(struct hns_roce_qp *hr_qp)
+
+ ret = hns_roce_v2_post_send(&hr_qp->ibqp, send_wr, &bad_wr);
+ if (ret) {
+- ibdev_err(ibdev, "failed to post wqe for free mr, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(ibdev, "failed to post wqe for free mr, ret = %d.\n",
++ ret);
+ return ret;
+ }
+
+@@ -3461,9 +3453,9 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev)
+
+ ret = free_mr_post_send_lp_wqe(hr_qp);
+ if (ret) {
+- ibdev_err(ibdev,
+- "failed to send wqe (qp:0x%lx) for free mr, ret = %d.\n",
+- hr_qp->qpn, ret);
++ ibdev_err_ratelimited(ibdev,
++ "failed to send wqe (qp:0x%lx) for free mr, ret = %d.\n",
++ hr_qp->qpn, ret);
+ break;
+ }
+
+@@ -3474,16 +3466,16 @@ static void free_mr_send_cmd_to_hw(struct hns_roce_dev *hr_dev)
+ while (cqe_cnt) {
+ npolled = hns_roce_v2_poll_cq(&free_mr->rsv_cq->ib_cq, cqe_cnt, wc);
+ if (npolled < 0) {
+- ibdev_err(ibdev,
+- "failed to poll cqe for free mr, remain %d cqe.\n",
+- cqe_cnt);
++ ibdev_err_ratelimited(ibdev,
++ "failed to poll cqe for free mr, remain %d cqe.\n",
++ cqe_cnt);
+ goto out;
+ }
+
+ if (time_after(jiffies, end)) {
+- ibdev_err(ibdev,
+- "failed to poll cqe for free mr and timeout, remain %d cqe.\n",
+- cqe_cnt);
++ ibdev_err_ratelimited(ibdev,
++ "failed to poll cqe for free mr and timeout, remain %d cqe.\n",
++ cqe_cnt);
+ goto out;
+ }
+ cqe_cnt -= npolled;
+@@ -4701,26 +4693,49 @@ static int modify_qp_rtr_to_rts(struct ib_qp *ibqp, int attr_mask,
+ return 0;
+ }
+
++static int alloc_dip_entry(struct xarray *dip_xa, u32 qpn)
++{
++ struct hns_roce_dip *hr_dip;
++ int ret;
++
++ hr_dip = xa_load(dip_xa, qpn);
++ if (hr_dip)
++ return 0;
++
++ hr_dip = kzalloc(sizeof(*hr_dip), GFP_KERNEL);
++ if (!hr_dip)
++ return -ENOMEM;
++
++ ret = xa_err(xa_store(dip_xa, qpn, hr_dip, GFP_KERNEL));
++ if (ret)
++ kfree(hr_dip);
++
++ return ret;
++}
++
+ static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
+ u32 *dip_idx)
+ {
+ const struct ib_global_route *grh = rdma_ah_read_grh(&attr->ah_attr);
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+- u32 *spare_idx = hr_dev->qp_table.idx_table.spare_idx;
+- u32 *head = &hr_dev->qp_table.idx_table.head;
+- u32 *tail = &hr_dev->qp_table.idx_table.tail;
++ struct xarray *dip_xa = &hr_dev->qp_table.dip_xa;
++ struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
+ struct hns_roce_dip *hr_dip;
+- unsigned long flags;
++ unsigned long idx;
+ int ret = 0;
+
+- spin_lock_irqsave(&hr_dev->dip_list_lock, flags);
++ ret = alloc_dip_entry(dip_xa, ibqp->qp_num);
++ if (ret)
++ return ret;
+
+- spare_idx[*tail] = ibqp->qp_num;
+- *tail = (*tail == hr_dev->caps.num_qps - 1) ? 0 : (*tail + 1);
++ xa_lock(dip_xa);
+
+- list_for_each_entry(hr_dip, &hr_dev->dip_list, node) {
+- if (!memcmp(grh->dgid.raw, hr_dip->dgid, GID_LEN_V2)) {
++ xa_for_each(dip_xa, idx, hr_dip) {
++ if (hr_dip->qp_cnt &&
++ !memcmp(grh->dgid.raw, hr_dip->dgid, GID_LEN_V2)) {
+ *dip_idx = hr_dip->dip_idx;
++ hr_dip->qp_cnt++;
++ hr_qp->dip = hr_dip;
+ goto out;
+ }
+ }
+@@ -4728,19 +4743,24 @@ static int get_dip_ctx_idx(struct ib_qp *ibqp, const struct ib_qp_attr *attr,
+ /* If no dgid is found, a new dip and a mapping between dgid and
+ * dip_idx will be created.
+ */
+- hr_dip = kzalloc(sizeof(*hr_dip), GFP_ATOMIC);
+- if (!hr_dip) {
+- ret = -ENOMEM;
+- goto out;
++ xa_for_each(dip_xa, idx, hr_dip) {
++ if (hr_dip->qp_cnt)
++ continue;
++
++ *dip_idx = idx;
++ memcpy(hr_dip->dgid, grh->dgid.raw, sizeof(grh->dgid.raw));
++ hr_dip->dip_idx = idx;
++ hr_dip->qp_cnt++;
++ hr_qp->dip = hr_dip;
++ break;
+ }
+
+- memcpy(hr_dip->dgid, grh->dgid.raw, sizeof(grh->dgid.raw));
+- hr_dip->dip_idx = *dip_idx = spare_idx[*head];
+- *head = (*head == hr_dev->caps.num_qps - 1) ? 0 : (*head + 1);
+- list_add_tail(&hr_dip->node, &hr_dev->dip_list);
++ /* This should never happen. */
++ if (WARN_ON_ONCE(!hr_qp->dip))
++ ret = -ENOSPC;
+
+ out:
+- spin_unlock_irqrestore(&hr_dev->dip_list_lock, flags);
++ xa_unlock(dip_xa);
+ return ret;
+ }
+
+@@ -5061,10 +5081,8 @@ static int hns_roce_v2_set_abs_fields(struct ib_qp *ibqp,
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+ int ret = 0;
+
+- if (!check_qp_state(cur_state, new_state)) {
+- ibdev_err(&hr_dev->ib_dev, "Illegal state for QP!\n");
++ if (!check_qp_state(cur_state, new_state))
+ return -EINVAL;
+- }
+
+ if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
+ memset(qpc_mask, 0, hr_dev->caps.qpc_sz);
+@@ -5325,7 +5343,7 @@ static int hns_roce_v2_modify_qp(struct ib_qp *ibqp,
+ /* SW pass context to HW */
+ ret = hns_roce_v2_qp_modify(hr_dev, context, qpc_mask, hr_qp);
+ if (ret) {
+- ibdev_err(ibdev, "failed to modify QP, ret = %d.\n", ret);
++ ibdev_err_ratelimited(ibdev, "failed to modify QP, ret = %d.\n", ret);
+ goto out;
+ }
+
+@@ -5463,7 +5481,9 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+
+ ret = hns_roce_v2_query_qpc(hr_dev, hr_qp->qpn, &context);
+ if (ret) {
+- ibdev_err(ibdev, "failed to query QPC, ret = %d.\n", ret);
++ ibdev_err_ratelimited(ibdev,
++ "failed to query QPC, ret = %d.\n",
++ ret);
+ ret = -EINVAL;
+ goto out;
+ }
+@@ -5471,7 +5491,7 @@ static int hns_roce_v2_query_qp(struct ib_qp *ibqp, struct ib_qp_attr *qp_attr,
+ state = hr_reg_read(&context, QPC_QP_ST);
+ tmp_qp_state = to_ib_qp_st((enum hns_roce_v2_qp_state)state);
+ if (tmp_qp_state == -1) {
+- ibdev_err(ibdev, "Illegal ib_qp_state\n");
++ ibdev_err_ratelimited(ibdev, "Illegal ib_qp_state\n");
+ ret = -EINVAL;
+ goto out;
+ }
+@@ -5564,9 +5584,9 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
+ ret = hns_roce_v2_modify_qp(&hr_qp->ibqp, NULL, 0,
+ hr_qp->state, IB_QPS_RESET, udata);
+ if (ret)
+- ibdev_err(ibdev,
+- "failed to modify QP to RST, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(ibdev,
++ "failed to modify QP to RST, ret = %d.\n",
++ ret);
+ }
+
+ send_cq = hr_qp->ibqp.send_cq ? to_hr_cq(hr_qp->ibqp.send_cq) : NULL;
+@@ -5594,17 +5614,41 @@ static int hns_roce_v2_destroy_qp_common(struct hns_roce_dev *hr_dev,
+ return ret;
+ }
+
++static void put_dip_ctx_idx(struct hns_roce_dev *hr_dev,
++ struct hns_roce_qp *hr_qp)
++{
++ struct hns_roce_dip *hr_dip = hr_qp->dip;
++
++ xa_lock(&hr_dev->qp_table.dip_xa);
++
++ hr_dip->qp_cnt--;
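++ /* When the last user goes away, clear the dgid so the slot can be reused. */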
++ if (!hr_dip->qp_cnt)
++ memset(hr_dip->dgid, 0, GID_LEN_V2);
++
++ xa_unlock(&hr_dev->qp_table.dip_xa);
++}
++
+ int hns_roce_v2_destroy_qp(struct ib_qp *ibqp, struct ib_udata *udata)
+ {
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibqp->device);
+ struct hns_roce_qp *hr_qp = to_hr_qp(ibqp);
++ unsigned long flags;
+ int ret;
+
++ /* Make sure flush_cqe() is completed */
++ spin_lock_irqsave(&hr_qp->flush_lock, flags);
++ set_bit(HNS_ROCE_STOP_FLUSH_FLAG, &hr_qp->flush_flag);
++ spin_unlock_irqrestore(&hr_qp->flush_lock, flags);
++ flush_work(&hr_qp->flush_work.work);
++
++ if (hr_qp->cong_type == CONG_TYPE_DIP)
++ put_dip_ctx_idx(hr_dev, hr_qp);
++
+ ret = hns_roce_v2_destroy_qp_common(hr_dev, hr_qp, udata);
+ if (ret)
+- ibdev_err(&hr_dev->ib_dev,
+- "failed to destroy QP, QPN = 0x%06lx, ret = %d.\n",
+- hr_qp->qpn, ret);
++ ibdev_err_ratelimited(&hr_dev->ib_dev,
++ "failed to destroy QP, QPN = 0x%06lx, ret = %d.\n",
++ hr_qp->qpn, ret);
+
+ hns_roce_qp_destroy(hr_dev, hr_qp, udata);
+
+@@ -5898,9 +5942,9 @@ static int hns_roce_v2_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period)
+ HNS_ROCE_CMD_MODIFY_CQC, hr_cq->cqn);
+ hns_roce_free_cmd_mailbox(hr_dev, mailbox);
+ if (ret)
+- ibdev_err(&hr_dev->ib_dev,
+- "failed to process cmd when modifying CQ, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(&hr_dev->ib_dev,
++ "failed to process cmd when modifying CQ, ret = %d.\n",
++ ret);
+
+ err_out:
+ if (ret)
+@@ -5924,9 +5968,9 @@ static int hns_roce_v2_query_cqc(struct hns_roce_dev *hr_dev, u32 cqn,
+ ret = hns_roce_cmd_mbox(hr_dev, 0, mailbox->dma,
+ HNS_ROCE_CMD_QUERY_CQC, cqn);
+ if (ret) {
+- ibdev_err(&hr_dev->ib_dev,
+- "failed to process cmd when querying CQ, ret = %d.\n",
+- ret);
++ ibdev_err_ratelimited(&hr_dev->ib_dev,
++ "failed to process cmd when querying CQ, ret = %d.\n",
++ ret);
+ goto err_mailbox;
+ }
+
+@@ -5967,11 +6011,10 @@ static int hns_roce_v2_query_mpt(struct hns_roce_dev *hr_dev, u32 key,
+ return ret;
+ }
+
+-static void hns_roce_irq_work_handle(struct work_struct *work)
++static void dump_aeqe_log(struct hns_roce_work *irq_work)
+ {
+- struct hns_roce_work *irq_work =
+- container_of(work, struct hns_roce_work, work);
+- struct ib_device *ibdev = &irq_work->hr_dev->ib_dev;
++ struct hns_roce_dev *hr_dev = irq_work->hr_dev;
++ struct ib_device *ibdev = &hr_dev->ib_dev;
+
+ switch (irq_work->event_type) {
+ case HNS_ROCE_EVENT_TYPE_PATH_MIG:
+@@ -6015,6 +6058,8 @@ static void hns_roce_irq_work_handle(struct work_struct *work)
+ case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
+ ibdev_warn(ibdev, "DB overflow.\n");
+ break;
++ case HNS_ROCE_EVENT_TYPE_MB:
++ break;
+ case HNS_ROCE_EVENT_TYPE_FLR:
+ ibdev_warn(ibdev, "function level reset.\n");
+ break;
+@@ -6025,8 +6070,46 @@ static void hns_roce_irq_work_handle(struct work_struct *work)
+ ibdev_err(ibdev, "invalid xrceth error.\n");
+ break;
+ default:
++ ibdev_info(ibdev, "Undefined event %d.\n",
++ irq_work->event_type);
+ break;
+ }
++}
++
++static void hns_roce_irq_work_handle(struct work_struct *work)
++{
++ struct hns_roce_work *irq_work =
++ container_of(work, struct hns_roce_work, work);
++ struct hns_roce_dev *hr_dev = irq_work->hr_dev;
++ int event_type = irq_work->event_type;
++ u32 queue_num = irq_work->queue_num;
++
++ switch (event_type) {
++ case HNS_ROCE_EVENT_TYPE_PATH_MIG:
++ case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
++ case HNS_ROCE_EVENT_TYPE_COMM_EST:
++ case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
++ case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
++ case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
++ case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
++ case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
++ case HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION:
++ case HNS_ROCE_EVENT_TYPE_INVALID_XRCETH:
++ hns_roce_qp_event(hr_dev, queue_num, event_type);
++ break;
++ case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
++ case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
++ hns_roce_srq_event(hr_dev, queue_num, event_type);
++ break;
++ case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
++ case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
++ hns_roce_cq_event(hr_dev, queue_num, event_type);
++ break;
++ default:
++ break;
++ }
++
++ dump_aeqe_log(irq_work);
+
+ kfree(irq_work);
+ }
+@@ -6087,14 +6170,14 @@ static struct hns_roce_aeqe *next_aeqe_sw_v2(struct hns_roce_eq *eq)
+ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ struct hns_roce_eq *eq)
+ {
+- struct device *dev = hr_dev->dev;
+ struct hns_roce_aeqe *aeqe = next_aeqe_sw_v2(eq);
+ irqreturn_t aeqe_found = IRQ_NONE;
++ int num_aeqes = 0;
+ int event_type;
+ u32 queue_num;
+ int sub_type;
+
+- while (aeqe) {
++ while (aeqe && num_aeqes < HNS_AEQ_POLLING_BUDGET) {
+ /* Make sure we read AEQ entry after we have checked the
+ * ownership bit
+ */
+@@ -6105,25 +6188,12 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ queue_num = hr_reg_read(aeqe, AEQE_EVENT_QUEUE_NUM);
+
+ switch (event_type) {
+- case HNS_ROCE_EVENT_TYPE_PATH_MIG:
+- case HNS_ROCE_EVENT_TYPE_PATH_MIG_FAILED:
+- case HNS_ROCE_EVENT_TYPE_COMM_EST:
+- case HNS_ROCE_EVENT_TYPE_SQ_DRAINED:
+ case HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR:
+- case HNS_ROCE_EVENT_TYPE_SRQ_LAST_WQE_REACH:
+ case HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR:
+ case HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR:
+ case HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION:
+ case HNS_ROCE_EVENT_TYPE_INVALID_XRCETH:
+- hns_roce_qp_event(hr_dev, queue_num, event_type);
+- break;
+- case HNS_ROCE_EVENT_TYPE_SRQ_LIMIT_REACH:
+- case HNS_ROCE_EVENT_TYPE_SRQ_CATAS_ERROR:
+- hns_roce_srq_event(hr_dev, queue_num, event_type);
+- break;
+- case HNS_ROCE_EVENT_TYPE_CQ_ACCESS_ERROR:
+- case HNS_ROCE_EVENT_TYPE_CQ_OVERFLOW:
+- hns_roce_cq_event(hr_dev, queue_num, event_type);
++ hns_roce_flush_cqe(hr_dev, queue_num);
+ break;
+ case HNS_ROCE_EVENT_TYPE_MB:
+ hns_roce_cmd_event(hr_dev,
+@@ -6131,12 +6201,7 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ aeqe->event.cmd.status,
+ le64_to_cpu(aeqe->event.cmd.out_param));
+ break;
+- case HNS_ROCE_EVENT_TYPE_DB_OVERFLOW:
+- case HNS_ROCE_EVENT_TYPE_FLR:
+- break;
+ default:
+- dev_err(dev, "unhandled event %d on EQ %d at idx %u.\n",
+- event_type, eq->eqn, eq->cons_index);
+ break;
+ }
+
+@@ -6150,6 +6215,7 @@ static irqreturn_t hns_roce_v2_aeq_int(struct hns_roce_dev *hr_dev,
+ hns_roce_v2_init_irq_work(hr_dev, eq, queue_num);
+
+ aeqe = next_aeqe_sw_v2(eq);
++ ++num_aeqes;
+ }
+
+ update_eq_db(eq);
+@@ -6699,6 +6765,9 @@ static int hns_roce_v2_init_eq_table(struct hns_roce_dev *hr_dev)
+ int ret;
+ int i;
+
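++ /* The AEQ must be deep enough to hold one full polling budget. */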
++ if (hr_dev->caps.aeqe_depth < HNS_AEQ_POLLING_BUDGET)
++ return -EINVAL;
++
+ other_num = hr_dev->caps.num_other_vectors;
+ comp_num = hr_dev->caps.num_comp_vectors;
+ aeq_num = hr_dev->caps.num_aeq_vectors;
+@@ -7017,6 +7086,7 @@ static void hns_roce_hw_v2_uninit_instance(struct hnae3_handle *handle,
+
+ handle->rinfo.instance_state = HNS_ROCE_STATE_NON_INIT;
+ }
++
+ static int hns_roce_hw_v2_reset_notify_down(struct hnae3_handle *handle)
+ {
+ struct hns_roce_dev *hr_dev;
+@@ -7035,6 +7105,9 @@ static int hns_roce_hw_v2_reset_notify_down(struct hnae3_handle *handle)
+
+ hr_dev->active = false;
+ hr_dev->dis_db = true;
++
++ rdma_user_mmap_disassociate(&hr_dev->ib_dev);
++
+ hr_dev->state = HNS_ROCE_DEVICE_STATE_RST_DOWN;
+
+ return 0;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index c65f68a14a2608..cbdbc9edbce6ec 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -85,6 +85,11 @@
+
+ #define HNS_ROCE_V2_TABLE_CHUNK_SIZE (1 << 18)
+
++/* The budget must be smaller than aeqe_depth to guarantee that we update
++ * the CI before we have polled all the entries in the EQ.
++ */
++#define HNS_AEQ_POLLING_BUDGET 64
++
+ enum {
+ HNS_ROCE_CMD_FLAG_IN = BIT(0),
+ HNS_ROCE_CMD_FLAG_OUT = BIT(1),
+@@ -919,6 +924,7 @@ struct hns_roce_v2_rc_send_wqe {
+ #define RC_SEND_WQE_OWNER RC_SEND_WQE_FIELD_LOC(7, 7)
+ #define RC_SEND_WQE_CQE RC_SEND_WQE_FIELD_LOC(8, 8)
+ #define RC_SEND_WQE_FENCE RC_SEND_WQE_FIELD_LOC(9, 9)
++#define RC_SEND_WQE_SO RC_SEND_WQE_FIELD_LOC(10, 10)
+ #define RC_SEND_WQE_SE RC_SEND_WQE_FIELD_LOC(11, 11)
+ #define RC_SEND_WQE_INLINE RC_SEND_WQE_FIELD_LOC(12, 12)
+ #define RC_SEND_WQE_WQE_INDEX RC_SEND_WQE_FIELD_LOC(30, 15)
+@@ -1342,7 +1348,7 @@ struct hns_roce_v2_priv {
+ struct hns_roce_dip {
+ u8 dgid[GID_LEN_V2];
+ u32 dip_idx;
+- struct list_head node; /* all dips are on a list */
++ u32 qp_cnt;
+ };
+
+ struct fmea_ram_ecc {
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index 4cb0af73358708..ae24c81c9812d9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -466,6 +466,11 @@ static int hns_roce_mmap(struct ib_ucontext *uctx, struct vm_area_struct *vma)
+ pgprot_t prot;
+ int ret;
+
++ if (hr_dev->dis_db) {
++ atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_MMAP_ERR_CNT]);
++ return -EPERM;
++ }
++
+ rdma_entry = rdma_user_mmap_entry_get_pgoff(uctx, vma->vm_pgoff);
+ if (!rdma_entry) {
+ atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_MMAP_ERR_CNT]);
+@@ -1130,8 +1135,6 @@ int hns_roce_init(struct hns_roce_dev *hr_dev)
+
+ INIT_LIST_HEAD(&hr_dev->qp_list);
+ spin_lock_init(&hr_dev->qp_list_lock);
+- INIT_LIST_HEAD(&hr_dev->dip_list);
+- spin_lock_init(&hr_dev->dip_list_lock);
+
+ ret = hns_roce_register_device(hr_dev);
+ if (ret)
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index 846da8c78b8b72..bf30b3a65a9ba9 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -138,8 +138,8 @@ static void hns_roce_mr_free(struct hns_roce_dev *hr_dev, struct hns_roce_mr *mr
+ key_to_hw_index(mr->key) &
+ (hr_dev->caps.num_mtpts - 1));
+ if (ret)
+- ibdev_warn(ibdev, "failed to destroy mpt, ret = %d.\n",
+- ret);
++ ibdev_warn_ratelimited(ibdev, "failed to destroy mpt, ret = %d.\n",
++ ret);
+ }
+
+ free_mr_pbl(hr_dev, mr);
+@@ -435,15 +435,16 @@ static int hns_roce_set_page(struct ib_mr *ibmr, u64 addr)
+ }
+
+ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+- unsigned int *sg_offset)
++ unsigned int *sg_offset_p)
+ {
++ unsigned int sg_offset = sg_offset_p ? *sg_offset_p : 0;
+ struct hns_roce_dev *hr_dev = to_hr_dev(ibmr->device);
+ struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct hns_roce_mr *mr = to_hr_mr(ibmr);
+ struct hns_roce_mtr *mtr = &mr->pbl_mtr;
+ int ret, sg_num = 0;
+
+- if (!IS_ALIGNED(*sg_offset, HNS_ROCE_FRMR_ALIGN_SIZE) ||
++ if (!IS_ALIGNED(sg_offset, HNS_ROCE_FRMR_ALIGN_SIZE) ||
+ ibmr->page_size < HNS_HW_PAGE_SIZE ||
+ ibmr->page_size > HNS_HW_MAX_PAGE_SIZE)
+ return sg_num;
+@@ -454,7 +455,7 @@ int hns_roce_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ if (!mr->page_list)
+ return sg_num;
+
+- sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, hns_roce_set_page);
++ sg_num = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset_p, hns_roce_set_page);
+ if (sg_num < 1) {
+ ibdev_err(ibdev, "failed to store sg pages %u %u, cnt = %d.\n",
+ mr->npages, mr->pbl_mtr.hem_cfg.buf_pg_count, sg_num);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 6b03ba671ff8f3..9e2e76c5940636 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -39,6 +39,25 @@
+ #include "hns_roce_device.h"
+ #include "hns_roce_hem.h"
+
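++/* Look up a QP by number and take a reference; returns NULL if it is gone. */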
++static struct hns_roce_qp *hns_roce_qp_lookup(struct hns_roce_dev *hr_dev,
++ u32 qpn)
++{
++ struct device *dev = hr_dev->dev;
++ struct hns_roce_qp *qp;
++ unsigned long flags;
++
++ xa_lock_irqsave(&hr_dev->qp_table_xa, flags);
++ qp = __hns_roce_qp_lookup(hr_dev, qpn);
++ if (qp)
++ refcount_inc(&qp->refcount);
++ xa_unlock_irqrestore(&hr_dev->qp_table_xa, flags);
++
++ if (!qp)
++ dev_warn(dev, "async event for bogus QP %08x\n", qpn);
++
++ return qp;
++}
++
+ static void flush_work_handle(struct work_struct *work)
+ {
+ struct hns_roce_work *flush_work = container_of(work,
+@@ -71,11 +90,18 @@ static void flush_work_handle(struct work_struct *work)
+ void init_flush_work(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp)
+ {
+ struct hns_roce_work *flush_work = &hr_qp->flush_work;
++ unsigned long flags;
++
++ spin_lock_irqsave(&hr_qp->flush_lock, flags);
++ /* Exit directly after destroy_qp() */
++ if (test_bit(HNS_ROCE_STOP_FLUSH_FLAG, &hr_qp->flush_flag)) {
++ spin_unlock_irqrestore(&hr_qp->flush_lock, flags);
++ return;
++ }
+
+- flush_work->hr_dev = hr_dev;
+- INIT_WORK(&flush_work->work, flush_work_handle);
+ refcount_inc(&hr_qp->refcount);
+ queue_work(hr_dev->irq_workq, &flush_work->work);
++ spin_unlock_irqrestore(&hr_qp->flush_lock, flags);
+ }
+
+ void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp)
+@@ -95,31 +121,28 @@ void flush_cqe(struct hns_roce_dev *dev, struct hns_roce_qp *qp)
+
+ void hns_roce_qp_event(struct hns_roce_dev *hr_dev, u32 qpn, int event_type)
+ {
+- struct device *dev = hr_dev->dev;
+ struct hns_roce_qp *qp;
+
+- xa_lock(&hr_dev->qp_table_xa);
+- qp = __hns_roce_qp_lookup(hr_dev, qpn);
+- if (qp)
+- refcount_inc(&qp->refcount);
+- xa_unlock(&hr_dev->qp_table_xa);
+-
+- if (!qp) {
+- dev_warn(dev, "async event for bogus QP %08x\n", qpn);
++ qp = hns_roce_qp_lookup(hr_dev, qpn);
++ if (!qp)
+ return;
+- }
+
+- if (event_type == HNS_ROCE_EVENT_TYPE_WQ_CATAS_ERROR ||
+- event_type == HNS_ROCE_EVENT_TYPE_INV_REQ_LOCAL_WQ_ERROR ||
+- event_type == HNS_ROCE_EVENT_TYPE_LOCAL_WQ_ACCESS_ERROR ||
+- event_type == HNS_ROCE_EVENT_TYPE_XRCD_VIOLATION ||
+- event_type == HNS_ROCE_EVENT_TYPE_INVALID_XRCETH) {
+- qp->state = IB_QPS_ERR;
++ qp->event(qp, (enum hns_roce_event)event_type);
+
+- flush_cqe(hr_dev, qp);
+- }
++ if (refcount_dec_and_test(&qp->refcount))
++ complete(&qp->free);
++}
+
+- qp->event(qp, (enum hns_roce_event)event_type);
++void hns_roce_flush_cqe(struct hns_roce_dev *hr_dev, u32 qpn)
++{
++ struct hns_roce_qp *qp;
++
++ qp = hns_roce_qp_lookup(hr_dev, qpn);
++ if (!qp)
++ return;
++
++ qp->state = IB_QPS_ERR;
++ flush_cqe(hr_dev, qp);
+
+ if (refcount_dec_and_test(&qp->refcount))
+ complete(&qp->free);
+@@ -1124,6 +1147,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ struct ib_udata *udata,
+ struct hns_roce_qp *hr_qp)
+ {
++ struct hns_roce_work *flush_work = &hr_qp->flush_work;
+ struct hns_roce_ib_create_qp_resp resp = {};
+ struct ib_device *ibdev = &hr_dev->ib_dev;
+ struct hns_roce_ib_create_qp ucmd = {};
+@@ -1132,9 +1156,12 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ mutex_init(&hr_qp->mutex);
+ spin_lock_init(&hr_qp->sq.lock);
+ spin_lock_init(&hr_qp->rq.lock);
++ spin_lock_init(&hr_qp->flush_lock);
+
+ hr_qp->state = IB_QPS_RESET;
+ hr_qp->flush_flag = 0;
++ flush_work->hr_dev = hr_dev;
++ INIT_WORK(&flush_work->work, flush_work_handle);
+
+ if (init_attr->create_flags)
+ return -EOPNOTSUPP;
+@@ -1546,14 +1573,10 @@ int hns_roce_init_qp_table(struct hns_roce_dev *hr_dev)
+ unsigned int reserved_from_bot;
+ unsigned int i;
+
+- qp_table->idx_table.spare_idx = kcalloc(hr_dev->caps.num_qps,
+- sizeof(u32), GFP_KERNEL);
+- if (!qp_table->idx_table.spare_idx)
+- return -ENOMEM;
+-
+ mutex_init(&qp_table->scc_mutex);
+ mutex_init(&qp_table->bank_mutex);
+ xa_init(&hr_dev->qp_table_xa);
++ xa_init(&qp_table->dip_xa);
+
+ reserved_from_bot = hr_dev->caps.reserved_qps;
+
+@@ -1578,7 +1601,7 @@ void hns_roce_cleanup_qp_table(struct hns_roce_dev *hr_dev)
+
+ for (i = 0; i < HNS_ROCE_QP_BANK_NUM; i++)
+ ida_destroy(&hr_dev->qp_table.bank[i].ida);
++ xa_destroy(&hr_dev->qp_table.dip_xa);
+ mutex_destroy(&hr_dev->qp_table.bank_mutex);
+ mutex_destroy(&hr_dev->qp_table.scc_mutex);
+- kfree(hr_dev->qp_table.idx_table.spare_idx);
+ }
+diff --git a/drivers/infiniband/hw/hns/hns_roce_srq.c b/drivers/infiniband/hw/hns/hns_roce_srq.c
+index c9b8233f4b0577..70c06ef65603d8 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_srq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_srq.c
+@@ -151,8 +151,8 @@ static void free_srqc(struct hns_roce_dev *hr_dev, struct hns_roce_srq *srq)
+ ret = hns_roce_destroy_hw_ctx(hr_dev, HNS_ROCE_CMD_DESTROY_SRQ,
+ srq->srqn);
+ if (ret)
+- dev_err(hr_dev->dev, "DESTROY_SRQ failed (%d) for SRQN %06lx\n",
+- ret, srq->srqn);
++ dev_err_ratelimited(hr_dev->dev, "DESTROY_SRQ failed (%d) for SRQN %06lx\n",
++ ret, srq->srqn);
+
+ xa_erase_irq(&srq_table->xa, srq->srqn);
+
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index 4999239c8f4137..ac20ab3bbabf47 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2997,7 +2997,6 @@ int mlx5_ib_dev_res_srq_init(struct mlx5_ib_dev *dev)
+ static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev)
+ {
+ struct mlx5_ib_resources *devr = &dev->devr;
+- int port;
+ int ret;
+
+ if (!MLX5_CAP_GEN(dev->mdev, xrc))
+@@ -3013,10 +3012,6 @@ static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev)
+ return ret;
+ }
+
+- for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
+- INIT_WORK(&devr->ports[port].pkey_change_work,
+- pkey_change_handler);
+-
+ mutex_init(&devr->cq_lock);
+ mutex_init(&devr->srq_lock);
+
+@@ -3026,16 +3021,6 @@ static int mlx5_ib_dev_res_init(struct mlx5_ib_dev *dev)
+ static void mlx5_ib_dev_res_cleanup(struct mlx5_ib_dev *dev)
+ {
+ struct mlx5_ib_resources *devr = &dev->devr;
+- int port;
+-
+- /*
+- * Make sure no change P_Key work items are still executing.
+- *
+- * At this stage, the mlx5_ib_event should be unregistered
+- * and it ensures that no new works are added.
+- */
+- for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
+- cancel_work_sync(&devr->ports[port].pkey_change_work);
+
+ /* After s0/s1 init, they are not unset during the device lifetime. */
+ if (devr->s1) {
+@@ -3211,12 +3196,14 @@ static int lag_event(struct notifier_block *nb, unsigned long event, void *data)
+ struct mlx5_ib_dev *dev = container_of(nb, struct mlx5_ib_dev,
+ lag_events);
+ struct mlx5_core_dev *mdev = dev->mdev;
++ struct ib_device *ibdev = &dev->ib_dev;
++ struct net_device *old_ndev = NULL;
+ struct mlx5_ib_port *port;
+ struct net_device *ndev;
+- int i, err;
+- int portnum;
++ u32 portnum = 0;
++ int ret = 0;
++ int i;
+
+- portnum = 0;
+ switch (event) {
+ case MLX5_DRIVER_EVENT_ACTIVE_BACKUP_LAG_CHANGE_LOWERSTATE:
+ ndev = data;
+@@ -3232,19 +3219,24 @@ static int lag_event(struct notifier_block *nb, unsigned long event, void *data)
+ }
+ }
+ }
+- err = ib_device_set_netdev(&dev->ib_dev, ndev,
+- portnum + 1);
+- dev_put(ndev);
+- if (err)
+- return err;
+- /* Rescan gids after new netdev assignment */
+- rdma_roce_rescan_device(&dev->ib_dev);
++ old_ndev = ib_device_get_netdev(ibdev, portnum + 1);
++ ret = ib_device_set_netdev(ibdev, ndev, portnum + 1);
++ if (ret)
++ goto out;
++
++ if (old_ndev)
++ roce_del_all_netdev_gids(ibdev, portnum + 1,
++ old_ndev);
++ rdma_roce_rescan_port(ibdev, portnum + 1);
+ }
+ break;
+ default:
+ return NOTIFY_DONE;
+ }
+- return NOTIFY_OK;
++
++out:
++ dev_put(old_ndev);
++ return notifier_from_errno(ret);
+ }
+
+ static void mlx5e_lag_event_register(struct mlx5_ib_dev *dev)
+@@ -4464,6 +4456,13 @@ static void mlx5_ib_stage_delay_drop_cleanup(struct mlx5_ib_dev *dev)
+
+ static int mlx5_ib_stage_dev_notifier_init(struct mlx5_ib_dev *dev)
+ {
++ struct mlx5_ib_resources *devr = &dev->devr;
++ int port;
++
++ for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
++ INIT_WORK(&devr->ports[port].pkey_change_work,
++ pkey_change_handler);
++
+ dev->mdev_events.notifier_call = mlx5_ib_event;
+ mlx5_notifier_register(dev->mdev, &dev->mdev_events);
+
+@@ -4474,8 +4473,14 @@ static int mlx5_ib_stage_dev_notifier_init(struct mlx5_ib_dev *dev)
+
+ static void mlx5_ib_stage_dev_notifier_cleanup(struct mlx5_ib_dev *dev)
+ {
++ struct mlx5_ib_resources *devr = &dev->devr;
++ int port;
++
+ mlx5r_macsec_event_unregister(dev);
+ mlx5_notifier_unregister(dev->mdev, &dev->mdev_events);
++
++ for (port = 0; port < ARRAY_SIZE(devr->ports); ++port)
++ cancel_work_sync(&devr->ports[port].pkey_change_work);
+ }
+
+ void mlx5_ib_data_direct_bind(struct mlx5_ib_dev *ibdev,
+@@ -4565,9 +4570,6 @@ static const struct mlx5_ib_profile pf_profile = {
+ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES,
+ mlx5_ib_dev_res_init,
+ mlx5_ib_dev_res_cleanup),
+- STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
+- mlx5_ib_stage_dev_notifier_init,
+- mlx5_ib_stage_dev_notifier_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_ODP,
+ mlx5_ib_odp_init_one,
+ mlx5_ib_odp_cleanup_one),
+@@ -4592,6 +4594,9 @@ static const struct mlx5_ib_profile pf_profile = {
+ STAGE_CREATE(MLX5_IB_STAGE_IB_REG,
+ mlx5_ib_stage_ib_reg_init,
+ mlx5_ib_stage_ib_reg_cleanup),
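++ /* Register device-event notifiers only after the IB device is registered. */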
++ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
++ mlx5_ib_stage_dev_notifier_init,
++ mlx5_ib_stage_dev_notifier_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR,
+ mlx5_ib_stage_post_ib_reg_umr_init,
+ NULL),
+@@ -4628,9 +4633,6 @@ const struct mlx5_ib_profile raw_eth_profile = {
+ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_RESOURCES,
+ mlx5_ib_dev_res_init,
+ mlx5_ib_dev_res_cleanup),
+- STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
+- mlx5_ib_stage_dev_notifier_init,
+- mlx5_ib_stage_dev_notifier_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_COUNTERS,
+ mlx5_ib_counters_init,
+ mlx5_ib_counters_cleanup),
+@@ -4652,6 +4654,9 @@ const struct mlx5_ib_profile raw_eth_profile = {
+ STAGE_CREATE(MLX5_IB_STAGE_IB_REG,
+ mlx5_ib_stage_ib_reg_init,
+ mlx5_ib_stage_ib_reg_cleanup),
++ STAGE_CREATE(MLX5_IB_STAGE_DEVICE_NOTIFIER,
++ mlx5_ib_stage_dev_notifier_init,
++ mlx5_ib_stage_dev_notifier_cleanup),
+ STAGE_CREATE(MLX5_IB_STAGE_POST_IB_REG_UMR,
+ mlx5_ib_stage_post_ib_reg_umr_init,
+ NULL),
+diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+index 23fd72f7f63df9..29bde64ea1eac9 100644
+--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
++++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
+@@ -972,7 +972,6 @@ enum mlx5_ib_stages {
+ MLX5_IB_STAGE_QP,
+ MLX5_IB_STAGE_SRQ,
+ MLX5_IB_STAGE_DEVICE_RESOURCES,
+- MLX5_IB_STAGE_DEVICE_NOTIFIER,
+ MLX5_IB_STAGE_ODP,
+ MLX5_IB_STAGE_COUNTERS,
+ MLX5_IB_STAGE_CONG_DEBUGFS,
+@@ -981,6 +980,7 @@ enum mlx5_ib_stages {
+ MLX5_IB_STAGE_PRE_IB_REG_UMR,
+ MLX5_IB_STAGE_WHITELIST_UID,
+ MLX5_IB_STAGE_IB_REG,
++ MLX5_IB_STAGE_DEVICE_NOTIFIER,
+ MLX5_IB_STAGE_POST_IB_REG_UMR,
+ MLX5_IB_STAGE_DELAY_DROP,
+ MLX5_IB_STAGE_RESTRACK,
+diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c
+index d2f7b5195c19dd..91d329e903083c 100644
+--- a/drivers/infiniband/sw/rxe/rxe_qp.c
++++ b/drivers/infiniband/sw/rxe/rxe_qp.c
+@@ -775,6 +775,7 @@ int rxe_qp_to_attr(struct rxe_qp *qp, struct ib_qp_attr *attr, int mask)
+ * Yield the processor
+ */
+ spin_lock_irqsave(&qp->state_lock, flags);
++ attr->cur_qp_state = qp_state(qp);
+ if (qp->attr.sq_draining) {
+ spin_unlock_irqrestore(&qp->state_lock, flags);
+ cond_resched();
+diff --git a/drivers/infiniband/sw/rxe/rxe_req.c b/drivers/infiniband/sw/rxe/rxe_req.c
+index 479c07e6e4ed3e..87a02f0deb0001 100644
+--- a/drivers/infiniband/sw/rxe/rxe_req.c
++++ b/drivers/infiniband/sw/rxe/rxe_req.c
+@@ -663,10 +663,12 @@ int rxe_requester(struct rxe_qp *qp)
+ if (unlikely(qp_state(qp) == IB_QPS_ERR)) {
+ wqe = __req_next_wqe(qp);
+ spin_unlock_irqrestore(&qp->state_lock, flags);
+- if (wqe)
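++ /* Complete the leftover WQE with a flush error instead of a stale status. */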
++ if (wqe) {
++ wqe->status = IB_WC_WR_FLUSH_ERR;
+ goto err;
+- else
++ } else {
+ goto exit;
++ }
+ }
+
+ if (unlikely(qp_state(qp) == IB_QPS_RESET)) {
+diff --git a/drivers/input/misc/cs40l50-vibra.c b/drivers/input/misc/cs40l50-vibra.c
+index 03bdb7c26ec09f..dce3b0ec8cf368 100644
+--- a/drivers/input/misc/cs40l50-vibra.c
++++ b/drivers/input/misc/cs40l50-vibra.c
+@@ -334,11 +334,12 @@ static int cs40l50_add(struct input_dev *dev, struct ff_effect *effect,
+ work_data.custom_len = effect->u.periodic.custom_len;
+ work_data.vib = vib;
+ work_data.effect = effect;
+- INIT_WORK(&work_data.work, cs40l50_add_worker);
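++ /* work_data lives on the stack, so the ONSTACK variants are required. */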
++ INIT_WORK_ONSTACK(&work_data.work, cs40l50_add_worker);
+
+ /* Push to the workqueue to serialize with playbacks */
+ queue_work(vib->vib_wq, &work_data.work);
+ flush_work(&work_data.work);
++ destroy_work_on_stack(&work_data.work);
+
+ kfree(work_data.custom_data);
+
+@@ -467,11 +468,12 @@ static int cs40l50_erase(struct input_dev *dev, int effect_id)
+ work_data.vib = vib;
+ work_data.effect = &dev->ff->effects[effect_id];
+
+- INIT_WORK(&work_data.work, cs40l50_erase_worker);
++ INIT_WORK_ONSTACK(&work_data.work, cs40l50_erase_worker);
+
+ /* Push to workqueue to serialize with playbacks */
+ queue_work(vib->vib_wq, &work_data.work);
+ flush_work(&work_data.work);
++ destroy_work_on_stack(&work_data.work);
+
+ return work_data.error;
+ }
+diff --git a/drivers/interconnect/qcom/icc-rpmh.c b/drivers/interconnect/qcom/icc-rpmh.c
+index f49a8e0cb03c06..adacd6f7d6a8f7 100644
+--- a/drivers/interconnect/qcom/icc-rpmh.c
++++ b/drivers/interconnect/qcom/icc-rpmh.c
+@@ -311,6 +311,9 @@ int qcom_icc_rpmh_probe(struct platform_device *pdev)
+ }
+
+ qp->num_clks = devm_clk_bulk_get_all(qp->dev, &qp->clks);
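++ /* The clocks may simply not be ready yet; defer probing rather than skip QoS. */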
++ if (qp->num_clks == -EPROBE_DEFER)
++ return dev_err_probe(dev, qp->num_clks, "Failed to get QoS clocks\n");
++
+ if (qp->num_clks < 0 || (!qp->num_clks && desc->qos_clks_required)) {
+ dev_info(dev, "Skipping QoS, failed to get clk: %d\n", qp->num_clks);
+ goto skip_qos_config;
+diff --git a/drivers/iommu/amd/io_pgtable_v2.c b/drivers/iommu/amd/io_pgtable_v2.c
+index 25b9042fa45307..c616de2c5926ec 100644
+--- a/drivers/iommu/amd/io_pgtable_v2.c
++++ b/drivers/iommu/amd/io_pgtable_v2.c
+@@ -268,8 +268,11 @@ static int iommu_v2_map_pages(struct io_pgtable_ops *ops, unsigned long iova,
+ out:
+ if (updated) {
+ struct protection_domain *pdom = io_pgtable_ops_to_domain(ops);
++ unsigned long flags;
+
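++ /* Hold the domain lock across the flush to avoid racing domain updates. */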
++ spin_lock_irqsave(&pdom->lock, flags);
+ amd_iommu_domain_flush_pages(pdom, o_iova, size);
++ spin_unlock_irqrestore(&pdom->lock, flags);
+ }
+
+ if (mapped)
+diff --git a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+index fcd13d301fff68..6b479592140c47 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
++++ b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+@@ -509,7 +509,8 @@ static int tegra241_vcmdq_alloc_smmu_cmdq(struct tegra241_vcmdq *vcmdq)
+
+ snprintf(name, 16, "vcmdq%u", vcmdq->idx);
+
+- q->llq.max_n_shift = VCMDQ_LOG2SIZE_MAX;
++ /* Queue size, capped to ensure natural alignment */
++ q->llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT, VCMDQ_LOG2SIZE_MAX);
+
+ /* Use the common helper to init the VCMDQ, and then... */
+ ret = arm_smmu_init_one_queue(smmu, q, vcmdq->page0,
+@@ -800,7 +801,7 @@ static int tegra241_cmdqv_init_structures(struct arm_smmu_device *smmu)
+ return 0;
+ }
+
+-struct dentry *cmdqv_debugfs_dir;
++static struct dentry *cmdqv_debugfs_dir;
+
+ static struct arm_smmu_device *
+ __tegra241_cmdqv_probe(struct arm_smmu_device *smmu, struct resource *res,
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index e860bc9439a283..a167d59101ae2e 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -707,14 +707,15 @@ static void pgtable_walk(struct intel_iommu *iommu, unsigned long pfn,
+ while (1) {
+ offset = pfn_level_offset(pfn, level);
+ pte = &parent[offset];
+- if (!pte || (dma_pte_superpage(pte) || !dma_pte_present(pte))) {
+- pr_info("PTE not present at level %d\n", level);
+- break;
+- }
+
+ pr_info("pte level: %d, pte value: 0x%016llx\n", level, pte->val);
+
+- if (level == 1)
++ if (!dma_pte_present(pte)) {
++ pr_info("page table not present at level %d\n", level - 1);
++ break;
++ }
++
++ if (level == 1 || dma_pte_superpage(pte))
+ break;
+
+ parent = phys_to_virt(dma_pte_addr(pte));
+@@ -737,11 +738,11 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+ pr_info("Dump %s table entries for IOVA 0x%llx\n", iommu->name, addr);
+
+ /* root entry dump */
+- rt_entry = &iommu->root_entry[bus];
+- if (!rt_entry) {
+- pr_info("root table entry is not present\n");
++ if (!iommu->root_entry) {
++ pr_info("root table is not present\n");
+ return;
+ }
++ rt_entry = &iommu->root_entry[bus];
+
+ if (sm_supported(iommu))
+ pr_info("scalable mode root entry: hi 0x%016llx, low 0x%016llx\n",
+@@ -752,7 +753,7 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+ /* context entry dump */
+ ctx_entry = iommu_context_addr(iommu, bus, devfn, 0);
+ if (!ctx_entry) {
+- pr_info("context table entry is not present\n");
++ pr_info("context table is not present\n");
+ return;
+ }
+
+@@ -761,17 +762,23 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+
+ /* legacy mode does not require PASID entries */
+ if (!sm_supported(iommu)) {
++ if (!context_present(ctx_entry)) {
++ pr_info("legacy mode page table is not present\n");
++ return;
++ }
+ level = agaw_to_level(ctx_entry->hi & 7);
+ pgtable = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
+ goto pgtable_walk;
+ }
+
+- /* get the pointer to pasid directory entry */
+- dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
+- if (!dir) {
+- pr_info("pasid directory entry is not present\n");
++ if (!context_present(ctx_entry)) {
++ pr_info("pasid directory table is not present\n");
+ return;
+ }
++
++ /* get the pointer to pasid directory entry */
++ dir = phys_to_virt(ctx_entry->lo & VTD_PAGE_MASK);
++
+ /* For request-without-pasid, get the pasid from context entry */
+ if (intel_iommu_sm && pasid == IOMMU_PASID_INVALID)
+ pasid = IOMMU_NO_PASID;
+@@ -783,7 +790,7 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+ /* get the pointer to the pasid table entry */
+ entries = get_pasid_table_from_pde(pde);
+ if (!entries) {
+- pr_info("pasid table entry is not present\n");
++ pr_info("pasid table is not present\n");
+ return;
+ }
+ index = pasid & PASID_PTE_MASK;
+@@ -791,6 +798,11 @@ void dmar_fault_dump_ptes(struct intel_iommu *iommu, u16 source_id,
+ for (i = 0; i < ARRAY_SIZE(pte->val); i++)
+ pr_info("pasid table entry[%d]: 0x%016llx\n", i, pte->val[i]);
+
++ if (!pasid_pte_is_present(pte)) {
++ pr_info("scalable mode page table is not present\n");
++ return;
++ }
++
+ if (pasid_pte_get_pgtt(pte) == PASID_ENTRY_PGTT_FL_ONLY) {
+ level = pte->val[2] & BIT_ULL(2) ? 5 : 4;
+ pgtable = phys_to_virt(pte->val[2] & VTD_PAGE_MASK);
+diff --git a/drivers/iommu/s390-iommu.c b/drivers/iommu/s390-iommu.c
+index d8eaa7ea380bb0..fbdeded3d48b59 100644
+--- a/drivers/iommu/s390-iommu.c
++++ b/drivers/iommu/s390-iommu.c
+@@ -33,6 +33,8 @@ struct s390_domain {
+ struct rcu_head rcu;
+ };
+
++static struct iommu_domain blocking_domain;
++
+ static inline unsigned int calc_rtx(dma_addr_t ptr)
+ {
+ return ((unsigned long)ptr >> ZPCI_RT_SHIFT) & ZPCI_INDEX_MASK;
+@@ -369,20 +371,36 @@ static void s390_domain_free(struct iommu_domain *domain)
+ call_rcu(&s390_domain->rcu, s390_iommu_rcu_free_domain);
+ }
+
+-static void s390_iommu_detach_device(struct iommu_domain *domain,
+- struct device *dev)
++static void zdev_s390_domain_update(struct zpci_dev *zdev,
++ struct iommu_domain *domain)
++{
++ unsigned long flags;
++
++ spin_lock_irqsave(&zdev->dom_lock, flags);
++ zdev->s390_domain = domain;
++ spin_unlock_irqrestore(&zdev->dom_lock, flags);
++}
++
++static int blocking_domain_attach_device(struct iommu_domain *domain,
++ struct device *dev)
+ {
+- struct s390_domain *s390_domain = to_s390_domain(domain);
+ struct zpci_dev *zdev = to_zpci_dev(dev);
++ struct s390_domain *s390_domain;
+ unsigned long flags;
+
++ if (zdev->s390_domain->type == IOMMU_DOMAIN_BLOCKED)
++ return 0;
++
++ s390_domain = to_s390_domain(zdev->s390_domain);
+ spin_lock_irqsave(&s390_domain->list_lock, flags);
+ list_del_rcu(&zdev->iommu_list);
+ spin_unlock_irqrestore(&s390_domain->list_lock, flags);
+
+ zpci_unregister_ioat(zdev, 0);
+- zdev->s390_domain = NULL;
+ zdev->dma_table = NULL;
++ zdev_s390_domain_update(zdev, domain);
++
++ return 0;
+ }
+
+ static int s390_iommu_attach_device(struct iommu_domain *domain,
+@@ -401,20 +419,15 @@ static int s390_iommu_attach_device(struct iommu_domain *domain,
+ domain->geometry.aperture_end < zdev->start_dma))
+ return -EINVAL;
+
+- if (zdev->s390_domain)
+- s390_iommu_detach_device(&zdev->s390_domain->domain, dev);
++ blocking_domain_attach_device(&blocking_domain, dev);
+
++ /* If we fail now DMA remains blocked via blocking domain */
+ cc = zpci_register_ioat(zdev, 0, zdev->start_dma, zdev->end_dma,
+ virt_to_phys(s390_domain->dma_table), &status);
+- /*
+- * If the device is undergoing error recovery the reset code
+- * will re-establish the new domain.
+- */
+ if (cc && status != ZPCI_PCI_ST_FUNC_NOT_AVAIL)
+ return -EIO;
+-
+ zdev->dma_table = s390_domain->dma_table;
+- zdev->s390_domain = s390_domain;
++ zdev_s390_domain_update(zdev, domain);
+
+ spin_lock_irqsave(&s390_domain->list_lock, flags);
+ list_add_rcu(&zdev->iommu_list, &s390_domain->devices);
+@@ -466,19 +479,11 @@ static struct iommu_device *s390_iommu_probe_device(struct device *dev)
+ if (zdev->tlb_refresh)
+ dev->iommu->shadow_on_flush = 1;
+
+- return &zdev->iommu_dev;
+-}
++ /* Start with DMA blocked */
++ spin_lock_init(&zdev->dom_lock);
++ zdev_s390_domain_update(zdev, &blocking_domain);
+
+-static void s390_iommu_release_device(struct device *dev)
+-{
+- struct zpci_dev *zdev = to_zpci_dev(dev);
+-
+- /*
+- * release_device is expected to detach any domain currently attached
+- * to the device, but keep it attached to other devices in the group.
+- */
+- if (zdev)
+- s390_iommu_detach_device(&zdev->s390_domain->domain, dev);
++ return &zdev->iommu_dev;
+ }
+
+ static int zpci_refresh_all(struct zpci_dev *zdev)
+@@ -697,9 +702,15 @@ static size_t s390_iommu_unmap_pages(struct iommu_domain *domain,
+
+ struct zpci_iommu_ctrs *zpci_get_iommu_ctrs(struct zpci_dev *zdev)
+ {
+- if (!zdev || !zdev->s390_domain)
++ struct s390_domain *s390_domain;
++
++ lockdep_assert_held(&zdev->dom_lock);
++
++ if (zdev->s390_domain->type == IOMMU_DOMAIN_BLOCKED)
+ return NULL;
+- return &zdev->s390_domain->ctrs;
++
++ s390_domain = to_s390_domain(zdev->s390_domain);
++ return &s390_domain->ctrs;
+ }
+
+ int zpci_init_iommu(struct zpci_dev *zdev)
+@@ -776,11 +787,19 @@ static int __init s390_iommu_init(void)
+ }
+ subsys_initcall(s390_iommu_init);
+
++static struct iommu_domain blocking_domain = {
++ .type = IOMMU_DOMAIN_BLOCKED,
++ .ops = &(const struct iommu_domain_ops) {
++ .attach_dev = blocking_domain_attach_device,
++ }
++};
++
+ static const struct iommu_ops s390_iommu_ops = {
++ .blocked_domain = &blocking_domain,
++ .release_domain = &blocking_domain,
+ .capable = s390_iommu_capable,
+ .domain_alloc_paging = s390_domain_alloc_paging,
+ .probe_device = s390_iommu_probe_device,
+- .release_device = s390_iommu_release_device,
+ .device_group = generic_device_group,
+ .pgsize_bitmap = SZ_4K,
+ .get_resv_regions = s390_iommu_get_resv_regions,
+diff --git a/drivers/irqchip/irq-mvebu-sei.c b/drivers/irqchip/irq-mvebu-sei.c
+index f8c70f2d100a11..065166ab5dbc04 100644
+--- a/drivers/irqchip/irq-mvebu-sei.c
++++ b/drivers/irqchip/irq-mvebu-sei.c
+@@ -192,7 +192,6 @@ static void mvebu_sei_domain_free(struct irq_domain *domain, unsigned int virq,
+ }
+
+ static const struct irq_domain_ops mvebu_sei_domain_ops = {
+- .select = msi_lib_irq_domain_select,
+ .alloc = mvebu_sei_domain_alloc,
+ .free = mvebu_sei_domain_free,
+ };
+@@ -306,6 +305,7 @@ static void mvebu_sei_cp_domain_free(struct irq_domain *domain,
+ }
+
+ static const struct irq_domain_ops mvebu_sei_cp_domain_ops = {
++ .select = msi_lib_irq_domain_select,
+ .alloc = mvebu_sei_cp_domain_alloc,
+ .free = mvebu_sei_cp_domain_free,
+ };
+diff --git a/drivers/irqchip/irq-riscv-aplic-main.c b/drivers/irqchip/irq-riscv-aplic-main.c
+index 900e72541db9e5..93e7c51f944abe 100644
+--- a/drivers/irqchip/irq-riscv-aplic-main.c
++++ b/drivers/irqchip/irq-riscv-aplic-main.c
+@@ -207,7 +207,8 @@ static int aplic_probe(struct platform_device *pdev)
+ else
+ rc = aplic_direct_setup(dev, regs);
+ if (rc)
+- dev_err(dev, "failed to setup APLIC in %s mode\n", msi_mode ? "MSI" : "direct");
++ dev_err_probe(dev, rc, "failed to setup APLIC in %s mode\n",
++ msi_mode ? "MSI" : "direct");
+
+ #ifdef CONFIG_ACPI
+ if (!acpi_disabled)
+diff --git a/drivers/irqchip/irq-riscv-aplic-msi.c b/drivers/irqchip/irq-riscv-aplic-msi.c
+index 945bff28265cdc..fb8d1838609fb5 100644
+--- a/drivers/irqchip/irq-riscv-aplic-msi.c
++++ b/drivers/irqchip/irq-riscv-aplic-msi.c
+@@ -266,6 +266,9 @@ int aplic_msi_setup(struct device *dev, void __iomem *regs)
+ if (msi_domain)
+ dev_set_msi_domain(dev, msi_domain);
+ }
++
++ if (!dev_get_msi_domain(dev))
++ return -EPROBE_DEFER;
+ }
+
+ if (!msi_create_device_irq_domain(dev, MSI_DEFAULT_DOMAIN, &aplic_msi_template,
+diff --git a/drivers/leds/flash/leds-ktd2692.c b/drivers/leds/flash/leds-ktd2692.c
+index 16a01a200c0b75..b92adf908793e5 100644
+--- a/drivers/leds/flash/leds-ktd2692.c
++++ b/drivers/leds/flash/leds-ktd2692.c
+@@ -292,6 +292,7 @@ static int ktd2692_probe(struct platform_device *pdev)
+
+ fled_cdev = &led->fled_cdev;
+ led_cdev = &fled_cdev->led_cdev;
++ led->props.timing = ktd2692_timing;
+
+ ret = ktd2692_parse_dt(led, &pdev->dev, &led_cfg);
+ if (ret)
+diff --git a/drivers/leds/leds-max5970.c b/drivers/leds/leds-max5970.c
+index 56a584311581af..285074c53b2344 100644
+--- a/drivers/leds/leds-max5970.c
++++ b/drivers/leds/leds-max5970.c
+@@ -45,7 +45,7 @@ static int max5970_led_set_brightness(struct led_classdev *cdev,
+
+ static int max5970_led_probe(struct platform_device *pdev)
+ {
+- struct fwnode_handle *led_node, *child;
++ struct fwnode_handle *child;
+ struct device *dev = &pdev->dev;
+ struct regmap *regmap;
+ struct max5970_led *ddata;
+@@ -55,7 +55,8 @@ static int max5970_led_probe(struct platform_device *pdev)
+ if (!regmap)
+ return -ENODEV;
+
+- led_node = device_get_named_child_node(dev->parent, "leds");
++ struct fwnode_handle *led_node __free(fwnode_handle) =
++ device_get_named_child_node(dev->parent, "leds");
+ if (!led_node)
+ return -ENODEV;
+
+diff --git a/drivers/mailbox/arm_mhuv2.c b/drivers/mailbox/arm_mhuv2.c
+index 0ec21dcdbde723..cff7c343ee082a 100644
+--- a/drivers/mailbox/arm_mhuv2.c
++++ b/drivers/mailbox/arm_mhuv2.c
+@@ -500,7 +500,7 @@ static const struct mhuv2_protocol_ops mhuv2_data_transfer_ops = {
+ static struct mbox_chan *get_irq_chan_comb(struct mhuv2 *mhu, u32 __iomem *reg)
+ {
+ struct mbox_chan *chans = mhu->mbox.chans;
+- int channel = 0, i, offset = 0, windows, protocol, ch_wn;
++ int channel = 0, i, j, offset = 0, windows, protocol, ch_wn;
+ u32 stat;
+
+ for (i = 0; i < MHUV2_CMB_INT_ST_REG_CNT; i++) {
+@@ -510,9 +510,9 @@ static struct mbox_chan *get_irq_chan_comb(struct mhuv2 *mhu, u32 __iomem *reg)
+
+ ch_wn = i * MHUV2_STAT_BITS + __builtin_ctz(stat);
+
+- for (i = 0; i < mhu->length; i += 2) {
+- protocol = mhu->protocols[i];
+- windows = mhu->protocols[i + 1];
++ for (j = 0; j < mhu->length; j += 2) {
++ protocol = mhu->protocols[j];
++ windows = mhu->protocols[j + 1];
+
+ if (ch_wn >= offset + windows) {
+ if (protocol == DOORBELL)
+diff --git a/drivers/mailbox/mtk-cmdq-mailbox.c b/drivers/mailbox/mtk-cmdq-mailbox.c
+index 4bff73532085bd..9c43ed9bdd37b5 100644
+--- a/drivers/mailbox/mtk-cmdq-mailbox.c
++++ b/drivers/mailbox/mtk-cmdq-mailbox.c
+@@ -584,7 +584,7 @@ static int cmdq_get_clocks(struct device *dev, struct cmdq *cmdq)
+ struct clk_bulk_data *clks;
+
+ cmdq->clocks = devm_kcalloc(dev, cmdq->pdata->gce_num,
+- sizeof(cmdq->clocks), GFP_KERNEL);
++ sizeof(*cmdq->clocks), GFP_KERNEL);
+ if (!cmdq->clocks)
+ return -ENOMEM;
+
+diff --git a/drivers/mailbox/omap-mailbox.c b/drivers/mailbox/omap-mailbox.c
+index 6797770474a55d..680243751d625f 100644
+--- a/drivers/mailbox/omap-mailbox.c
++++ b/drivers/mailbox/omap-mailbox.c
+@@ -15,6 +15,7 @@
+ #include <linux/slab.h>
+ #include <linux/kfifo.h>
+ #include <linux/err.h>
++#include <linux/io.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
+diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
+index 272945a878b3ce..a3f4b4ad35aab9 100644
+--- a/drivers/media/i2c/adv7604.c
++++ b/drivers/media/i2c/adv7604.c
+@@ -1405,12 +1405,13 @@ static int stdi2dv_timings(struct v4l2_subdev *sd,
+ if (v4l2_detect_cvt(stdi->lcf + 1, hfreq, stdi->lcvs, 0,
+ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+- false, timings))
++ false, adv76xx_get_dv_timings_cap(sd, -1), timings))
+ return 0;
+ if (v4l2_detect_gtf(stdi->lcf + 1, hfreq, stdi->lcvs,
+ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+- false, state->aspect_ratio, timings))
++ false, state->aspect_ratio,
++ adv76xx_get_dv_timings_cap(sd, -1), timings))
+ return 0;
+
+ v4l2_dbg(2, debug, sd,
+diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
+index 014fc913225c4a..61ea7393066d77 100644
+--- a/drivers/media/i2c/adv7842.c
++++ b/drivers/media/i2c/adv7842.c
+@@ -1431,14 +1431,15 @@ static int stdi2dv_timings(struct v4l2_subdev *sd,
+ }
+
+ if (v4l2_detect_cvt(stdi->lcf + 1, hfreq, stdi->lcvs, 0,
+- (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+- (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+- false, timings))
++ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
++ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
++ false, adv7842_get_dv_timings_cap(sd), timings))
+ return 0;
+ if (v4l2_detect_gtf(stdi->lcf + 1, hfreq, stdi->lcvs,
+- (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
+- (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
+- false, state->aspect_ratio, timings))
++ (stdi->hs_pol == '+' ? V4L2_DV_HSYNC_POS_POL : 0) |
++ (stdi->vs_pol == '+' ? V4L2_DV_VSYNC_POS_POL : 0),
++ false, state->aspect_ratio,
++ adv7842_get_dv_timings_cap(sd), timings))
+ return 0;
+
+ v4l2_dbg(2, debug, sd,
+diff --git a/drivers/media/i2c/ds90ub960.c b/drivers/media/i2c/ds90ub960.c
+index ffe5f25f864762..58424d8f72af03 100644
+--- a/drivers/media/i2c/ds90ub960.c
++++ b/drivers/media/i2c/ds90ub960.c
+@@ -1286,7 +1286,7 @@ static int ub960_rxport_get_strobe_pos(struct ub960_data *priv,
+
+ clk_delay += v & UB960_IR_RX_ANA_STROBE_SET_CLK_DELAY_MASK;
+
+- ub960_rxport_read(priv, nport, UB960_RR_SFILTER_STS_1, &v);
++ ret = ub960_rxport_read(priv, nport, UB960_RR_SFILTER_STS_1, &v);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/media/i2c/max96717.c b/drivers/media/i2c/max96717.c
+index 4e85b8eb1e7767..9259d58ba734ee 100644
+--- a/drivers/media/i2c/max96717.c
++++ b/drivers/media/i2c/max96717.c
+@@ -697,8 +697,10 @@ static int max96717_subdev_init(struct max96717_priv *priv)
+ priv->pads[MAX96717_PAD_SOURCE].flags = MEDIA_PAD_FL_SOURCE;
+
+ ret = media_entity_pads_init(&priv->sd.entity, 2, priv->pads);
+- if (ret)
+- return dev_err_probe(dev, ret, "Failed to init pads\n");
++ if (ret) {
++ dev_err_probe(dev, ret, "Failed to init pads\n");
++ goto err_free_ctrl;
++ }
+
+ ret = v4l2_subdev_init_finalize(&priv->sd);
+ if (ret) {
+diff --git a/drivers/media/i2c/vgxy61.c b/drivers/media/i2c/vgxy61.c
+index 409d2d4ffb4bb2..d77468c8587bc4 100644
+--- a/drivers/media/i2c/vgxy61.c
++++ b/drivers/media/i2c/vgxy61.c
+@@ -1617,7 +1617,7 @@ static int vgxy61_detect(struct vgxy61_dev *sensor)
+
+ ret = cci_read(sensor->regmap, VGXY61_REG_NVM, &st, NULL);
+ if (ret < 0)
+- return st;
++ return ret;
+ if (st != VGXY61_NVM_OK)
+ dev_warn(&client->dev, "Bad nvm state got %u\n", (u8)st);
+
+diff --git a/drivers/media/pci/intel/ipu6/Kconfig b/drivers/media/pci/intel/ipu6/Kconfig
+index 49e4fb696573f6..a4537818a58c05 100644
+--- a/drivers/media/pci/intel/ipu6/Kconfig
++++ b/drivers/media/pci/intel/ipu6/Kconfig
+@@ -4,12 +4,6 @@ config VIDEO_INTEL_IPU6
+ depends on VIDEO_DEV
+ depends on X86 && X86_64 && HAS_DMA
+ depends on IPU_BRIDGE || !IPU_BRIDGE
+- #
+- # This driver incorrectly tries to override the dma_ops. It should
+- # never have done that, but for now keep it working on architectures
+- # that use dma ops
+- #
+- depends on ARCH_HAS_DMA_OPS
+ select AUXILIARY_BUS
+ select IOMMU_IOVA
+ select VIDEO_V4L2_SUBDEV_API
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-bus.c b/drivers/media/pci/intel/ipu6/ipu6-bus.c
+index 149ec098cdbfe1..37d88ddb6ee7cd 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-bus.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-bus.c
+@@ -94,8 +94,6 @@ ipu6_bus_initialize_device(struct pci_dev *pdev, struct device *parent,
+ if (!adev)
+ return ERR_PTR(-ENOMEM);
+
+- adev->dma_mask = DMA_BIT_MASK(isp->secure_mode ? IPU6_MMU_ADDR_BITS :
+- IPU6_MMU_ADDR_BITS_NON_SECURE);
+ adev->isp = isp;
+ adev->ctrl = ctrl;
+ adev->pdata = pdata;
+@@ -106,10 +104,6 @@ ipu6_bus_initialize_device(struct pci_dev *pdev, struct device *parent,
+
+ auxdev->dev.parent = parent;
+ auxdev->dev.release = ipu6_bus_release;
+- auxdev->dev.dma_ops = &ipu6_dma_ops;
+- auxdev->dev.dma_mask = &adev->dma_mask;
+- auxdev->dev.dma_parms = pdev->dev.dma_parms;
+- auxdev->dev.coherent_dma_mask = adev->dma_mask;
+
+ ret = auxiliary_device_init(auxdev);
+ if (ret < 0) {
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-buttress.c b/drivers/media/pci/intel/ipu6/ipu6-buttress.c
+index e47f84c30e10d6..1ee63ef4a40b22 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-buttress.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-buttress.c
+@@ -24,6 +24,7 @@
+
+ #include "ipu6.h"
+ #include "ipu6-bus.h"
++#include "ipu6-dma.h"
+ #include "ipu6-buttress.h"
+ #include "ipu6-platform-buttress-regs.h"
+
+@@ -345,12 +346,16 @@ irqreturn_t ipu6_buttress_isr(int irq, void *isp_ptr)
+ u32 disable_irqs = 0;
+ u32 irq_status;
+ u32 i, count = 0;
++ int active;
+
+- pm_runtime_get_noresume(&isp->pdev->dev);
++ active = pm_runtime_get_if_active(&isp->pdev->dev);
++ if (!active)
++ return IRQ_NONE;
+
+ irq_status = readl(isp->base + reg_irq_sts);
+- if (!irq_status) {
+- pm_runtime_put_noidle(&isp->pdev->dev);
++ if (irq_status == 0 || WARN_ON_ONCE(irq_status == 0xffffffffu)) {
++ if (active > 0)
++ pm_runtime_put_noidle(&isp->pdev->dev);
+ return IRQ_NONE;
+ }
+
+@@ -426,7 +431,8 @@ irqreturn_t ipu6_buttress_isr(int irq, void *isp_ptr)
+ writel(BUTTRESS_IRQS & ~disable_irqs,
+ isp->base + BUTTRESS_REG_ISR_ENABLE);
+
+- pm_runtime_put(&isp->pdev->dev);
++ if (active > 0)
++ pm_runtime_put(&isp->pdev->dev);
+
+ return ret;
+ }
+@@ -553,6 +559,7 @@ int ipu6_buttress_map_fw_image(struct ipu6_bus_device *sys,
+ const struct firmware *fw, struct sg_table *sgt)
+ {
+ bool is_vmalloc = is_vmalloc_addr(fw->data);
++ struct pci_dev *pdev = sys->isp->pdev;
+ struct page **pages;
+ const void *addr;
+ unsigned long n_pages;
+@@ -588,14 +595,20 @@ int ipu6_buttress_map_fw_image(struct ipu6_bus_device *sys,
+ goto out;
+ }
+
+- ret = dma_map_sgtable(&sys->auxdev.dev, sgt, DMA_TO_DEVICE, 0);
+- if (ret < 0) {
+- ret = -ENOMEM;
++ ret = dma_map_sgtable(&pdev->dev, sgt, DMA_TO_DEVICE, 0);
++ if (ret) {
+ sg_free_table(sgt);
+ goto out;
+ }
+
+- dma_sync_sgtable_for_device(&sys->auxdev.dev, sgt, DMA_TO_DEVICE);
++ ret = ipu6_dma_map_sgtable(sys, sgt, DMA_TO_DEVICE, 0);
++ if (ret) {
++ dma_unmap_sgtable(&pdev->dev, sgt, DMA_TO_DEVICE, 0);
++ sg_free_table(sgt);
++ goto out;
++ }
++
++ ipu6_dma_sync_sgtable(sys, sgt);
+
+ out:
+ kfree(pages);
+@@ -607,7 +620,10 @@ EXPORT_SYMBOL_NS_GPL(ipu6_buttress_map_fw_image, INTEL_IPU6);
+ void ipu6_buttress_unmap_fw_image(struct ipu6_bus_device *sys,
+ struct sg_table *sgt)
+ {
+- dma_unmap_sgtable(&sys->auxdev.dev, sgt, DMA_TO_DEVICE, 0);
++ struct pci_dev *pdev = sys->isp->pdev;
++
++ ipu6_dma_unmap_sgtable(sys, sgt, DMA_TO_DEVICE, 0);
++ dma_unmap_sgtable(&pdev->dev, sgt, DMA_TO_DEVICE, 0);
+ sg_free_table(sgt);
+ }
+ EXPORT_SYMBOL_NS_GPL(ipu6_buttress_unmap_fw_image, INTEL_IPU6);
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-cpd.c b/drivers/media/pci/intel/ipu6/ipu6-cpd.c
+index 715b21ab4b8e98..21c1c128a7eaa5 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-cpd.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-cpd.c
+@@ -15,6 +15,7 @@
+ #include "ipu6.h"
+ #include "ipu6-bus.h"
+ #include "ipu6-cpd.h"
++#include "ipu6-dma.h"
+
+ /* 15 entries + header*/
+ #define MAX_PKG_DIR_ENT_CNT 16
+@@ -162,7 +163,6 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src)
+ {
+ dma_addr_t dma_addr_src = sg_dma_address(adev->fw_sgt.sgl);
+ const struct ipu6_cpd_ent *ent, *man_ent, *met_ent;
+- struct device *dev = &adev->auxdev.dev;
+ struct ipu6_device *isp = adev->isp;
+ unsigned int man_sz, met_sz;
+ void *pkg_dir_pos;
+@@ -175,8 +175,8 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src)
+ met_sz = met_ent->len;
+
+ adev->pkg_dir_size = PKG_DIR_SIZE + man_sz + met_sz;
+- adev->pkg_dir = dma_alloc_attrs(dev, adev->pkg_dir_size,
+- &adev->pkg_dir_dma_addr, GFP_KERNEL, 0);
++ adev->pkg_dir = ipu6_dma_alloc(adev, adev->pkg_dir_size,
++ &adev->pkg_dir_dma_addr, GFP_KERNEL, 0);
+ if (!adev->pkg_dir)
+ return -ENOMEM;
+
+@@ -198,8 +198,8 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src)
+ met_ent->len);
+ if (ret) {
+ dev_err(&isp->pdev->dev, "Failed to parse module data\n");
+- dma_free_attrs(dev, adev->pkg_dir_size,
+- adev->pkg_dir, adev->pkg_dir_dma_addr, 0);
++ ipu6_dma_free(adev, adev->pkg_dir_size,
++ adev->pkg_dir, adev->pkg_dir_dma_addr, 0);
+ return ret;
+ }
+
+@@ -211,8 +211,8 @@ int ipu6_cpd_create_pkg_dir(struct ipu6_bus_device *adev, const void *src)
+ pkg_dir_pos += man_sz;
+ memcpy(pkg_dir_pos, src + met_ent->offset, met_sz);
+
+- dma_sync_single_range_for_device(dev, adev->pkg_dir_dma_addr,
+- 0, adev->pkg_dir_size, DMA_TO_DEVICE);
++ ipu6_dma_sync_single(adev, adev->pkg_dir_dma_addr,
++ adev->pkg_dir_size);
+
+ return 0;
+ }
+@@ -220,8 +220,8 @@ EXPORT_SYMBOL_NS_GPL(ipu6_cpd_create_pkg_dir, INTEL_IPU6);
+
+ void ipu6_cpd_free_pkg_dir(struct ipu6_bus_device *adev)
+ {
+- dma_free_attrs(&adev->auxdev.dev, adev->pkg_dir_size, adev->pkg_dir,
+- adev->pkg_dir_dma_addr, 0);
++ ipu6_dma_free(adev, adev->pkg_dir_size, adev->pkg_dir,
++ adev->pkg_dir_dma_addr, 0);
+ }
+ EXPORT_SYMBOL_NS_GPL(ipu6_cpd_free_pkg_dir, INTEL_IPU6);
+
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.c b/drivers/media/pci/intel/ipu6/ipu6-dma.c
+index 92530a1cc90f51..b71f66bd8c1fdb 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-dma.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-dma.c
+@@ -39,8 +39,7 @@ static struct vm_info *get_vm_info(struct ipu6_mmu *mmu, dma_addr_t iova)
+ return NULL;
+ }
+
+-static void __dma_clear_buffer(struct page *page, size_t size,
+- unsigned long attrs)
++static void __clear_buffer(struct page *page, size_t size, unsigned long attrs)
+ {
+ void *ptr;
+
+@@ -56,8 +55,7 @@ static void __dma_clear_buffer(struct page *page, size_t size,
+ clflush_cache_range(ptr, size);
+ }
+
+-static struct page **__dma_alloc_buffer(struct device *dev, size_t size,
+- gfp_t gfp, unsigned long attrs)
++static struct page **__alloc_buffer(size_t size, gfp_t gfp, unsigned long attrs)
+ {
+ int count = PHYS_PFN(size);
+ int array_size = count * sizeof(struct page *);
+@@ -86,7 +84,7 @@ static struct page **__dma_alloc_buffer(struct device *dev, size_t size,
+ pages[i + j] = pages[i] + j;
+ }
+
+- __dma_clear_buffer(pages[i], PAGE_SIZE << order, attrs);
++ __clear_buffer(pages[i], PAGE_SIZE << order, attrs);
+ i += 1 << order;
+ count -= 1 << order;
+ }
+@@ -100,29 +98,26 @@ static struct page **__dma_alloc_buffer(struct device *dev, size_t size,
+ return NULL;
+ }
+
+-static void __dma_free_buffer(struct device *dev, struct page **pages,
+- size_t size, unsigned long attrs)
++static void __free_buffer(struct page **pages, size_t size, unsigned long attrs)
+ {
+ int count = PHYS_PFN(size);
+ unsigned int i;
+
+ for (i = 0; i < count && pages[i]; i++) {
+- __dma_clear_buffer(pages[i], PAGE_SIZE, attrs);
++ __clear_buffer(pages[i], PAGE_SIZE, attrs);
+ __free_pages(pages[i], 0);
+ }
+
+ kvfree(pages);
+ }
+
+-static void ipu6_dma_sync_single_for_cpu(struct device *dev,
+- dma_addr_t dma_handle,
+- size_t size,
+- enum dma_data_direction dir)
++void ipu6_dma_sync_single(struct ipu6_bus_device *sys, dma_addr_t dma_handle,
++ size_t size)
+ {
+ void *vaddr;
+ u32 offset;
+ struct vm_info *info;
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
++ struct ipu6_mmu *mmu = sys->mmu;
+
+ info = get_vm_info(mmu, dma_handle);
+ if (WARN_ON(!info))
+@@ -135,10 +130,10 @@ static void ipu6_dma_sync_single_for_cpu(struct device *dev,
+ vaddr = info->vaddr + offset;
+ clflush_cache_range(vaddr, size);
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_sync_single, INTEL_IPU6);
+
+-static void ipu6_dma_sync_sg_for_cpu(struct device *dev,
+- struct scatterlist *sglist,
+- int nents, enum dma_data_direction dir)
++void ipu6_dma_sync_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents)
+ {
+ struct scatterlist *sg;
+ int i;
+@@ -146,14 +141,22 @@ static void ipu6_dma_sync_sg_for_cpu(struct device *dev,
+ for_each_sg(sglist, sg, nents, i)
+ clflush_cache_range(page_to_virt(sg_page(sg)), sg->length);
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_sync_sg, INTEL_IPU6);
+
+-static void *ipu6_dma_alloc(struct device *dev, size_t size,
+- dma_addr_t *dma_handle, gfp_t gfp,
+- unsigned long attrs)
++void ipu6_dma_sync_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
+- struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev;
++ ipu6_dma_sync_sg(sys, sgt->sgl, sgt->orig_nents);
++}
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_sync_sgtable, INTEL_IPU6);
++
++void *ipu6_dma_alloc(struct ipu6_bus_device *sys, size_t size,
++ dma_addr_t *dma_handle, gfp_t gfp,
++ unsigned long attrs)
++{
++ struct device *dev = &sys->auxdev.dev;
++ struct pci_dev *pdev = sys->isp->pdev;
+ dma_addr_t pci_dma_addr, ipu6_iova;
++ struct ipu6_mmu *mmu = sys->mmu;
+ struct vm_info *info;
+ unsigned long count;
+ struct page **pages;
+@@ -173,7 +176,7 @@ static void *ipu6_dma_alloc(struct device *dev, size_t size,
+ if (!iova)
+ goto out_kfree;
+
+- pages = __dma_alloc_buffer(dev, size, gfp, attrs);
++ pages = __alloc_buffer(size, gfp, attrs);
+ if (!pages)
+ goto out_free_iova;
+
+@@ -227,7 +230,7 @@ static void *ipu6_dma_alloc(struct device *dev, size_t size,
+ ipu6_mmu_unmap(mmu->dmap->mmu_info, ipu6_iova, PAGE_SIZE);
+ }
+
+- __dma_free_buffer(dev, pages, size, attrs);
++ __free_buffer(pages, size, attrs);
+
+ out_free_iova:
+ __free_iova(&mmu->dmap->iovad, iova);
+@@ -236,13 +239,13 @@ static void *ipu6_dma_alloc(struct device *dev, size_t size,
+
+ return NULL;
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_alloc, INTEL_IPU6);
+
+-static void ipu6_dma_free(struct device *dev, size_t size, void *vaddr,
+- dma_addr_t dma_handle,
+- unsigned long attrs)
++void ipu6_dma_free(struct ipu6_bus_device *sys, size_t size, void *vaddr,
++ dma_addr_t dma_handle, unsigned long attrs)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
+- struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev;
++ struct ipu6_mmu *mmu = sys->mmu;
++ struct pci_dev *pdev = sys->isp->pdev;
+ struct iova *iova = find_iova(&mmu->dmap->iovad, PHYS_PFN(dma_handle));
+ dma_addr_t pci_dma_addr, ipu6_iova;
+ struct vm_info *info;
+@@ -281,7 +284,7 @@ static void ipu6_dma_free(struct device *dev, size_t size, void *vaddr,
+ ipu6_mmu_unmap(mmu->dmap->mmu_info, PFN_PHYS(iova->pfn_lo),
+ PFN_PHYS(iova_size(iova)));
+
+- __dma_free_buffer(dev, pages, size, attrs);
++ __free_buffer(pages, size, attrs);
+
+ mmu->tlb_invalidate(mmu);
+
+@@ -289,13 +292,14 @@ static void ipu6_dma_free(struct device *dev, size_t size, void *vaddr,
+
+ kfree(info);
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_free, INTEL_IPU6);
+
+-static int ipu6_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+- void *addr, dma_addr_t iova, size_t size,
+- unsigned long attrs)
++int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct vm_area_struct *vma,
++ void *addr, dma_addr_t iova, size_t size,
++ unsigned long attrs)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
+- size_t count = PHYS_PFN(PAGE_ALIGN(size));
++ struct ipu6_mmu *mmu = sys->mmu;
++ size_t count = PFN_UP(size);
+ struct vm_info *info;
+ size_t i;
+ int ret;
+@@ -323,18 +327,17 @@ static int ipu6_dma_mmap(struct device *dev, struct vm_area_struct *vma,
+ return 0;
+ }
+
+-static void ipu6_dma_unmap_sg(struct device *dev,
+- struct scatterlist *sglist,
+- int nents, enum dma_data_direction dir,
+- unsigned long attrs)
++void ipu6_dma_unmap_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents, enum dma_data_direction dir,
++ unsigned long attrs)
+ {
+- struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev;
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
++ struct device *dev = &sys->auxdev.dev;
++ struct ipu6_mmu *mmu = sys->mmu;
+ struct iova *iova = find_iova(&mmu->dmap->iovad,
+ PHYS_PFN(sg_dma_address(sglist)));
+- int i, npages, count;
+ struct scatterlist *sg;
+ dma_addr_t pci_dma_addr;
++ unsigned int i;
+
+ if (!nents)
+ return;
+@@ -342,31 +345,15 @@ static void ipu6_dma_unmap_sg(struct device *dev,
+ if (WARN_ON(!iova))
+ return;
+
+- if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
+- ipu6_dma_sync_sg_for_cpu(dev, sglist, nents, DMA_BIDIRECTIONAL);
+-
+- /* get the nents as orig_nents given by caller */
+- count = 0;
+- npages = iova_size(iova);
+- for_each_sg(sglist, sg, nents, i) {
+- if (sg_dma_len(sg) == 0 ||
+- sg_dma_address(sg) == DMA_MAPPING_ERROR)
+- break;
+-
+- npages -= PHYS_PFN(PAGE_ALIGN(sg_dma_len(sg)));
+- count++;
+- if (npages <= 0)
+- break;
+- }
+-
+ /*
+ * Before IPU6 mmu unmap, return the pci dma address back to sg
+ * assume the nents is less than orig_nents as the least granule
+ * is 1 SZ_4K page
+ */
+- dev_dbg(dev, "trying to unmap concatenated %u ents\n", count);
+- for_each_sg(sglist, sg, count, i) {
+- dev_dbg(dev, "ipu unmap sg[%d] %pad\n", i, &sg_dma_address(sg));
++ dev_dbg(dev, "trying to unmap concatenated %u ents\n", nents);
++ for_each_sg(sglist, sg, nents, i) {
++ dev_dbg(dev, "unmap sg[%d] %pad size %u\n", i,
++ &sg_dma_address(sg), sg_dma_len(sg));
+ pci_dma_addr = ipu6_mmu_iova_to_phys(mmu->dmap->mmu_info,
+ sg_dma_address(sg));
+ dev_dbg(dev, "return pci_dma_addr %pad back to sg[%d]\n",
+@@ -380,23 +367,21 @@ static void ipu6_dma_unmap_sg(struct device *dev,
+ PFN_PHYS(iova_size(iova)));
+
+ mmu->tlb_invalidate(mmu);
+-
+- dma_unmap_sg_attrs(&pdev->dev, sglist, nents, dir, attrs);
+-
+ __free_iova(&mmu->dmap->iovad, iova);
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_unmap_sg, INTEL_IPU6);
+
+-static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+- int nents, enum dma_data_direction dir,
+- unsigned long attrs)
++int ipu6_dma_map_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents, enum dma_data_direction dir,
++ unsigned long attrs)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
+- struct pci_dev *pdev = to_ipu6_bus_device(dev)->isp->pdev;
++ struct device *dev = &sys->auxdev.dev;
++ struct ipu6_mmu *mmu = sys->mmu;
+ struct scatterlist *sg;
+ struct iova *iova;
+ size_t npages = 0;
+ unsigned long iova_addr;
+- int i, count;
++ int i;
+
+ for_each_sg(sglist, sg, nents, i) {
+ if (sg->offset) {
+@@ -406,18 +391,12 @@ static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+ }
+ }
+
+- dev_dbg(dev, "pci_dma_map_sg trying to map %d ents\n", nents);
+- count = dma_map_sg_attrs(&pdev->dev, sglist, nents, dir, attrs);
+- if (count <= 0) {
+- dev_err(dev, "pci_dma_map_sg %d ents failed\n", nents);
+- return 0;
+- }
+-
+- dev_dbg(dev, "pci_dma_map_sg %d ents mapped\n", count);
+-
+- for_each_sg(sglist, sg, count, i)
++ for_each_sg(sglist, sg, nents, i)
+ npages += PHYS_PFN(PAGE_ALIGN(sg_dma_len(sg)));
+
++ dev_dbg(dev, "dmamap trying to map %d ents %zu pages\n",
++ nents, npages);
++
+ iova = alloc_iova(&mmu->dmap->iovad, npages,
+ PHYS_PFN(dma_get_mask(dev)), 0);
+ if (!iova)
+@@ -427,12 +406,13 @@ static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+ iova->pfn_hi);
+
+ iova_addr = iova->pfn_lo;
+- for_each_sg(sglist, sg, count, i) {
++ for_each_sg(sglist, sg, nents, i) {
++ phys_addr_t iova_pa;
+ int ret;
+
+- dev_dbg(dev, "mapping entry %d: iova 0x%llx phy %pad size %d\n",
+- i, PFN_PHYS(iova_addr), &sg_dma_address(sg),
+- sg_dma_len(sg));
++ iova_pa = PFN_PHYS(iova_addr);
++ dev_dbg(dev, "mapping entry %d: iova %pap phy %pap size %d\n",
++ i, &iova_pa, &sg_dma_address(sg), sg_dma_len(sg));
+
+ ret = ipu6_mmu_map(mmu->dmap->mmu_info, PFN_PHYS(iova_addr),
+ sg_dma_address(sg),
+@@ -445,25 +425,48 @@ static int ipu6_dma_map_sg(struct device *dev, struct scatterlist *sglist,
+ iova_addr += PHYS_PFN(PAGE_ALIGN(sg_dma_len(sg)));
+ }
+
+- if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
+- ipu6_dma_sync_sg_for_cpu(dev, sglist, nents, DMA_BIDIRECTIONAL);
++ dev_dbg(dev, "dmamap %d ents %zu pages mapped\n", nents, npages);
+
+- return count;
++ return nents;
+
+ out_fail:
+- ipu6_dma_unmap_sg(dev, sglist, i, dir, attrs);
++ ipu6_dma_unmap_sg(sys, sglist, i, dir, attrs);
+
+ return 0;
+ }
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_map_sg, INTEL_IPU6);
++
++int ipu6_dma_map_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ enum dma_data_direction dir, unsigned long attrs)
++{
++ int nents;
++
++ nents = ipu6_dma_map_sg(sys, sgt->sgl, sgt->nents, dir, attrs);
++ if (nents < 0)
++ return nents;
++
++ sgt->nents = nents;
++
++ return 0;
++}
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_map_sgtable, INTEL_IPU6);
++
++void ipu6_dma_unmap_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ enum dma_data_direction dir, unsigned long attrs)
++{
++ ipu6_dma_unmap_sg(sys, sgt->sgl, sgt->nents, dir, attrs);
++}
++EXPORT_SYMBOL_NS_GPL(ipu6_dma_unmap_sgtable, INTEL_IPU6);
+
+ /*
+ * Create scatter-list for the already allocated DMA buffer
+ */
+-static int ipu6_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+- void *cpu_addr, dma_addr_t handle, size_t size,
+- unsigned long attrs)
++int ipu6_dma_get_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ void *cpu_addr, dma_addr_t handle, size_t size,
++ unsigned long attrs)
+ {
+- struct ipu6_mmu *mmu = to_ipu6_bus_device(dev)->mmu;
++ struct device *dev = &sys->auxdev.dev;
++ struct ipu6_mmu *mmu = sys->mmu;
+ struct vm_info *info;
+ int n_pages;
+ int ret = 0;
+@@ -483,20 +486,7 @@ static int ipu6_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
+ ret = sg_alloc_table_from_pages(sgt, info->pages, n_pages, 0, size,
+ GFP_KERNEL);
+ if (ret)
+- dev_warn(dev, "IPU6 get sgt table failed\n");
++ dev_warn(dev, "get sgt table failed\n");
+
+ return ret;
+ }
+-
+-const struct dma_map_ops ipu6_dma_ops = {
+- .alloc = ipu6_dma_alloc,
+- .free = ipu6_dma_free,
+- .mmap = ipu6_dma_mmap,
+- .map_sg = ipu6_dma_map_sg,
+- .unmap_sg = ipu6_dma_unmap_sg,
+- .sync_single_for_cpu = ipu6_dma_sync_single_for_cpu,
+- .sync_single_for_device = ipu6_dma_sync_single_for_cpu,
+- .sync_sg_for_cpu = ipu6_dma_sync_sg_for_cpu,
+- .sync_sg_for_device = ipu6_dma_sync_sg_for_cpu,
+- .get_sgtable = ipu6_dma_get_sgtable,
+-};
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-dma.h b/drivers/media/pci/intel/ipu6/ipu6-dma.h
+index 847ea5b7c925c3..b51244add9e611 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-dma.h
++++ b/drivers/media/pci/intel/ipu6/ipu6-dma.h
+@@ -5,7 +5,13 @@
+ #define IPU6_DMA_H
+
+ #include <linux/dma-map-ops.h>
++#include <linux/dma-mapping.h>
+ #include <linux/iova.h>
++#include <linux/iova.h>
++#include <linux/scatterlist.h>
++#include <linux/types.h>
++
++#include "ipu6-bus.h"
+
+ struct ipu6_mmu_info;
+
+@@ -14,6 +20,30 @@ struct ipu6_dma_mapping {
+ struct iova_domain iovad;
+ };
+
+-extern const struct dma_map_ops ipu6_dma_ops;
+-
++void ipu6_dma_sync_single(struct ipu6_bus_device *sys, dma_addr_t dma_handle,
++ size_t size);
++void ipu6_dma_sync_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents);
++void ipu6_dma_sync_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt);
++void *ipu6_dma_alloc(struct ipu6_bus_device *sys, size_t size,
++ dma_addr_t *dma_handle, gfp_t gfp,
++ unsigned long attrs);
++void ipu6_dma_free(struct ipu6_bus_device *sys, size_t size, void *vaddr,
++ dma_addr_t dma_handle, unsigned long attrs);
++int ipu6_dma_mmap(struct ipu6_bus_device *sys, struct vm_area_struct *vma,
++ void *addr, dma_addr_t iova, size_t size,
++ unsigned long attrs);
++int ipu6_dma_map_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents, enum dma_data_direction dir,
++ unsigned long attrs);
++void ipu6_dma_unmap_sg(struct ipu6_bus_device *sys, struct scatterlist *sglist,
++ int nents, enum dma_data_direction dir,
++ unsigned long attrs);
++int ipu6_dma_map_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ enum dma_data_direction dir, unsigned long attrs);
++void ipu6_dma_unmap_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ enum dma_data_direction dir, unsigned long attrs);
++int ipu6_dma_get_sgtable(struct ipu6_bus_device *sys, struct sg_table *sgt,
++ void *cpu_addr, dma_addr_t handle, size_t size,
++ unsigned long attrs);
+ #endif /* IPU6_DMA_H */
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-fw-com.c b/drivers/media/pci/intel/ipu6/ipu6-fw-com.c
+index 0b33fe9e703dcb..7d3d9314cb306b 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-fw-com.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-fw-com.c
+@@ -12,6 +12,7 @@
+ #include <linux/types.h>
+
+ #include "ipu6-bus.h"
++#include "ipu6-dma.h"
+ #include "ipu6-fw-com.h"
+
+ /*
+@@ -88,7 +89,6 @@ struct ipu6_fw_com_context {
+ void *dma_buffer;
+ dma_addr_t dma_addr;
+ unsigned int dma_size;
+- unsigned long attrs;
+
+ struct ipu6_fw_sys_queue *input_queue; /* array of host to SP queues */
+ struct ipu6_fw_sys_queue *output_queue; /* array of SP to host */
+@@ -164,7 +164,6 @@ void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg,
+ struct ipu6_fw_com_context *ctx;
+ struct device *dev = &adev->auxdev.dev;
+ size_t sizeall, offset;
+- unsigned long attrs = 0;
+ void *specific_host_addr;
+ unsigned int i;
+
+@@ -206,9 +205,8 @@ void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg,
+
+ sizeall += sizeinput + sizeoutput;
+
+- ctx->dma_buffer = dma_alloc_attrs(dev, sizeall, &ctx->dma_addr,
+- GFP_KERNEL, attrs);
+- ctx->attrs = attrs;
++ ctx->dma_buffer = ipu6_dma_alloc(adev, sizeall, &ctx->dma_addr,
++ GFP_KERNEL, 0);
+ if (!ctx->dma_buffer) {
+ dev_err(dev, "failed to allocate dma memory\n");
+ kfree(ctx);
+@@ -239,6 +237,8 @@ void *ipu6_fw_com_prepare(struct ipu6_fw_com_cfg *cfg,
+ memcpy(specific_host_addr, cfg->specific_addr,
+ cfg->specific_size);
+
++ ipu6_dma_sync_single(adev, ctx->config_vied_addr, sizeall);
++
+ /* initialize input queues */
+ offset += specific_size;
+ res.reg = SYSCOM_QPR_BASE_REG;
+@@ -315,8 +315,8 @@ int ipu6_fw_com_release(struct ipu6_fw_com_context *ctx, unsigned int force)
+ if (!force && !ctx->cell_ready(ctx->adev))
+ return -EBUSY;
+
+- dma_free_attrs(&ctx->adev->auxdev.dev, ctx->dma_size,
+- ctx->dma_buffer, ctx->dma_addr, ctx->attrs);
++ ipu6_dma_free(ctx->adev, ctx->dma_size,
++ ctx->dma_buffer, ctx->dma_addr, 0);
+ kfree(ctx);
+ return 0;
+ }
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-mmu.c b/drivers/media/pci/intel/ipu6/ipu6-mmu.c
+index c3a20507d6dbcc..57298ac73d0722 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-mmu.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-mmu.c
+@@ -97,13 +97,15 @@ static void page_table_dump(struct ipu6_mmu_info *mmu_info)
+ for (l1_idx = 0; l1_idx < ISP_L1PT_PTES; l1_idx++) {
+ u32 l2_idx;
+ u32 iova = (phys_addr_t)l1_idx << ISP_L1PT_SHIFT;
++ phys_addr_t l2_phys;
+
+ if (mmu_info->l1_pt[l1_idx] == mmu_info->dummy_l2_pteval)
+ continue;
++
++ l2_phys = TBL_PHYS_ADDR(mmu_info->l1_pt[l1_idx]);
+ dev_dbg(mmu_info->dev,
+- "l1 entry %u; iovas 0x%8.8x-0x%8.8x, at %pa\n",
+- l1_idx, iova, iova + ISP_PAGE_SIZE,
+- TBL_PHYS_ADDR(mmu_info->l1_pt[l1_idx]));
++ "l1 entry %u; iovas 0x%8.8x-0x%8.8x, at %pap\n",
++ l1_idx, iova, iova + ISP_PAGE_SIZE, &l2_phys);
+
+ for (l2_idx = 0; l2_idx < ISP_L2PT_PTES; l2_idx++) {
+ u32 *l2_pt = mmu_info->l2_pts[l1_idx];
+@@ -227,7 +229,7 @@ static u32 *alloc_l1_pt(struct ipu6_mmu_info *mmu_info)
+ }
+
+ mmu_info->l1_pt_dma = dma >> ISP_PADDR_SHIFT;
+- dev_dbg(mmu_info->dev, "l1 pt %p mapped at %llx\n", pt, dma);
++ dev_dbg(mmu_info->dev, "l1 pt %p mapped at %pad\n", pt, &dma);
+
+ return pt;
+
+@@ -330,8 +332,8 @@ static int __ipu6_mmu_map(struct ipu6_mmu_info *mmu_info, unsigned long iova,
+ u32 iova_end = ALIGN(iova + size, ISP_PAGE_SIZE);
+
+ dev_dbg(mmu_info->dev,
+- "mapping iova 0x%8.8x--0x%8.8x, size %zu at paddr 0x%10.10llx\n",
+- iova_start, iova_end, size, paddr);
++ "mapping iova 0x%8.8x--0x%8.8x, size %zu at paddr %pap\n",
++ iova_start, iova_end, size, &paddr);
+
+ return l2_map(mmu_info, iova_start, paddr, size);
+ }
+@@ -361,10 +363,13 @@ static size_t l2_unmap(struct ipu6_mmu_info *mmu_info, unsigned long iova,
+ for (l2_idx = (iova_start & ISP_L2PT_MASK) >> ISP_L2PT_SHIFT;
+ (iova_start & ISP_L1PT_MASK) + (l2_idx << ISP_PAGE_SHIFT)
+ < iova_start + size && l2_idx < ISP_L2PT_PTES; l2_idx++) {
++ phys_addr_t pteval;
++
+ l2_pt = mmu_info->l2_pts[l1_idx];
++ pteval = TBL_PHYS_ADDR(l2_pt[l2_idx]);
+ dev_dbg(mmu_info->dev,
+- "unmap l2 index %u with pteval 0x%10.10llx\n",
+- l2_idx, TBL_PHYS_ADDR(l2_pt[l2_idx]));
++ "unmap l2 index %u with pteval 0x%p\n",
++ l2_idx, &pteval);
+ l2_pt[l2_idx] = mmu_info->dummy_page_pteval;
+
+ clflush_cache_range((void *)&l2_pt[l2_idx],
+@@ -525,9 +530,10 @@ static struct ipu6_mmu_info *ipu6_mmu_alloc(struct ipu6_device *isp)
+ return NULL;
+
+ mmu_info->aperture_start = 0;
+- mmu_info->aperture_end = DMA_BIT_MASK(isp->secure_mode ?
+- IPU6_MMU_ADDR_BITS :
+- IPU6_MMU_ADDR_BITS_NON_SECURE);
++ mmu_info->aperture_end =
++ (dma_addr_t)DMA_BIT_MASK(isp->secure_mode ?
++ IPU6_MMU_ADDR_BITS :
++ IPU6_MMU_ADDR_BITS_NON_SECURE);
+ mmu_info->pgsize_bitmap = SZ_4K;
+ mmu_info->dev = &isp->pdev->dev;
+
+diff --git a/drivers/media/pci/intel/ipu6/ipu6.c b/drivers/media/pci/intel/ipu6/ipu6.c
+index 7fb707d3530967..91718eabd74e57 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6.c
++++ b/drivers/media/pci/intel/ipu6/ipu6.c
+@@ -752,6 +752,9 @@ static void ipu6_pci_reset_done(struct pci_dev *pdev)
+ */
+ static int ipu6_suspend(struct device *dev)
+ {
++ struct pci_dev *pdev = to_pci_dev(dev);
++
++ synchronize_irq(pdev->irq);
+ return 0;
+ }
+
+diff --git a/drivers/media/radio/wl128x/fmdrv_common.c b/drivers/media/radio/wl128x/fmdrv_common.c
+index 3d36f323a8f8f7..4d032436691c1b 100644
+--- a/drivers/media/radio/wl128x/fmdrv_common.c
++++ b/drivers/media/radio/wl128x/fmdrv_common.c
+@@ -466,11 +466,12 @@ int fmc_send_cmd(struct fmdev *fmdev, u8 fm_op, u16 type, void *payload,
+ jiffies_to_msecs(FM_DRV_TX_TIMEOUT) / 1000);
+ return -ETIMEDOUT;
+ }
++ spin_lock_irqsave(&fmdev->resp_skb_lock, flags);
+ if (!fmdev->resp_skb) {
++ spin_unlock_irqrestore(&fmdev->resp_skb_lock, flags);
+ fmerr("Response SKB is missing\n");
+ return -EFAULT;
+ }
+- spin_lock_irqsave(&fmdev->resp_skb_lock, flags);
+ skb = fmdev->resp_skb;
+ fmdev->resp_skb = NULL;
+ spin_unlock_irqrestore(&fmdev->resp_skb_lock, flags);
+diff --git a/drivers/media/test-drivers/vivid/vivid-vid-cap.c b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+index 6a790ac8cbe689..f25e011153642e 100644
+--- a/drivers/media/test-drivers/vivid/vivid-vid-cap.c
++++ b/drivers/media/test-drivers/vivid/vivid-vid-cap.c
+@@ -1459,12 +1459,19 @@ static bool valid_cvt_gtf_timings(struct v4l2_dv_timings *timings)
+ h_freq = (u32)bt->pixelclock / total_h_pixel;
+
+ if (bt->standards == 0 || (bt->standards & V4L2_DV_BT_STD_CVT)) {
++ struct v4l2_dv_timings cvt = {};
++
+ if (v4l2_detect_cvt(total_v_lines, h_freq, bt->vsync, bt->width,
+- bt->polarities, bt->interlaced, timings))
++ bt->polarities, bt->interlaced,
++ &vivid_dv_timings_cap, &cvt) &&
++ cvt.bt.width == bt->width && cvt.bt.height == bt->height) {
++ *timings = cvt;
+ return true;
++ }
+ }
+
+ if (bt->standards == 0 || (bt->standards & V4L2_DV_BT_STD_GTF)) {
++ struct v4l2_dv_timings gtf = {};
+ struct v4l2_fract aspect_ratio;
+
+ find_aspect_ratio(bt->width, bt->height,
+@@ -1472,8 +1479,12 @@ static bool valid_cvt_gtf_timings(struct v4l2_dv_timings *timings)
+ &aspect_ratio.denominator);
+ if (v4l2_detect_gtf(total_v_lines, h_freq, bt->vsync,
+ bt->polarities, bt->interlaced,
+- aspect_ratio, timings))
++ aspect_ratio, &vivid_dv_timings_cap,
++ &gtf) &&
++ gtf.bt.width == bt->width && gtf.bt.height == bt->height) {
++ *timings = gtf;
+ return true;
++ }
+ }
+ return false;
+ }
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index 942d0005c55e82..2cf5dcee0ce800 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -481,25 +481,28 @@ EXPORT_SYMBOL_GPL(v4l2_calc_timeperframe);
+ * @polarities - the horizontal and vertical polarities (same as struct
+ * v4l2_bt_timings polarities).
+ * @interlaced - if this flag is true, it indicates interlaced format
+- * @fmt - the resulting timings.
++ * @cap - the v4l2_dv_timings_cap capabilities.
++ * @timings - the resulting timings.
+ *
+ * This function will attempt to detect if the given values correspond to a
+ * valid CVT format. If so, then it will return true, and fmt will be filled
+ * in with the found CVT timings.
+ */
+-bool v4l2_detect_cvt(unsigned frame_height,
+- unsigned hfreq,
+- unsigned vsync,
+- unsigned active_width,
++bool v4l2_detect_cvt(unsigned int frame_height,
++ unsigned int hfreq,
++ unsigned int vsync,
++ unsigned int active_width,
+ u32 polarities,
+ bool interlaced,
+- struct v4l2_dv_timings *fmt)
++ const struct v4l2_dv_timings_cap *cap,
++ struct v4l2_dv_timings *timings)
+ {
+- int v_fp, v_bp, h_fp, h_bp, hsync;
+- int frame_width, image_height, image_width;
++ struct v4l2_dv_timings t = {};
++ int v_fp, v_bp, h_fp, h_bp, hsync;
++ int frame_width, image_height, image_width;
+ bool reduced_blanking;
+ bool rb_v2 = false;
+- unsigned pix_clk;
++ unsigned int pix_clk;
+
+ if (vsync < 4 || vsync > 8)
+ return false;
+@@ -625,36 +628,39 @@ bool v4l2_detect_cvt(unsigned frame_height,
+ h_fp = h_blank - hsync - h_bp;
+ }
+
+- fmt->type = V4L2_DV_BT_656_1120;
+- fmt->bt.polarities = polarities;
+- fmt->bt.width = image_width;
+- fmt->bt.height = image_height;
+- fmt->bt.hfrontporch = h_fp;
+- fmt->bt.vfrontporch = v_fp;
+- fmt->bt.hsync = hsync;
+- fmt->bt.vsync = vsync;
+- fmt->bt.hbackporch = frame_width - image_width - h_fp - hsync;
++ t.type = V4L2_DV_BT_656_1120;
++ t.bt.polarities = polarities;
++ t.bt.width = image_width;
++ t.bt.height = image_height;
++ t.bt.hfrontporch = h_fp;
++ t.bt.vfrontporch = v_fp;
++ t.bt.hsync = hsync;
++ t.bt.vsync = vsync;
++ t.bt.hbackporch = frame_width - image_width - h_fp - hsync;
+
+ if (!interlaced) {
+- fmt->bt.vbackporch = frame_height - image_height - v_fp - vsync;
+- fmt->bt.interlaced = V4L2_DV_PROGRESSIVE;
++ t.bt.vbackporch = frame_height - image_height - v_fp - vsync;
++ t.bt.interlaced = V4L2_DV_PROGRESSIVE;
+ } else {
+- fmt->bt.vbackporch = (frame_height - image_height - 2 * v_fp -
++ t.bt.vbackporch = (frame_height - image_height - 2 * v_fp -
+ 2 * vsync) / 2;
+- fmt->bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
+- 2 * vsync - fmt->bt.vbackporch;
+- fmt->bt.il_vfrontporch = v_fp;
+- fmt->bt.il_vsync = vsync;
+- fmt->bt.flags |= V4L2_DV_FL_HALF_LINE;
+- fmt->bt.interlaced = V4L2_DV_INTERLACED;
++ t.bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
++ 2 * vsync - t.bt.vbackporch;
++ t.bt.il_vfrontporch = v_fp;
++ t.bt.il_vsync = vsync;
++ t.bt.flags |= V4L2_DV_FL_HALF_LINE;
++ t.bt.interlaced = V4L2_DV_INTERLACED;
+ }
+
+- fmt->bt.pixelclock = pix_clk;
+- fmt->bt.standards = V4L2_DV_BT_STD_CVT;
++ t.bt.pixelclock = pix_clk;
++ t.bt.standards = V4L2_DV_BT_STD_CVT;
+
+ if (reduced_blanking)
+- fmt->bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
++ t.bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
+
++ if (!v4l2_valid_dv_timings(&t, cap, NULL, NULL))
++ return false;
++ *timings = t;
+ return true;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_detect_cvt);
+@@ -699,22 +705,25 @@ EXPORT_SYMBOL_GPL(v4l2_detect_cvt);
+ * image height, so it has to be passed explicitly. Usually
+ * the native screen aspect ratio is used for this. If it
+ * is not filled in correctly, then 16:9 will be assumed.
+- * @fmt - the resulting timings.
++ * @cap - the v4l2_dv_timings_cap capabilities.
++ * @timings - the resulting timings.
+ *
+ * This function will attempt to detect if the given values correspond to a
+ * valid GTF format. If so, then it will return true, and fmt will be filled
+ * in with the found GTF timings.
+ */
+-bool v4l2_detect_gtf(unsigned frame_height,
+- unsigned hfreq,
+- unsigned vsync,
+- u32 polarities,
+- bool interlaced,
+- struct v4l2_fract aspect,
+- struct v4l2_dv_timings *fmt)
++bool v4l2_detect_gtf(unsigned int frame_height,
++ unsigned int hfreq,
++ unsigned int vsync,
++ u32 polarities,
++ bool interlaced,
++ struct v4l2_fract aspect,
++ const struct v4l2_dv_timings_cap *cap,
++ struct v4l2_dv_timings *timings)
+ {
++ struct v4l2_dv_timings t = {};
+ int pix_clk;
+- int v_fp, v_bp, h_fp, hsync;
++ int v_fp, v_bp, h_fp, hsync;
+ int frame_width, image_height, image_width;
+ bool default_gtf;
+ int h_blank;
+@@ -783,36 +792,39 @@ bool v4l2_detect_gtf(unsigned frame_height,
+
+ h_fp = h_blank / 2 - hsync;
+
+- fmt->type = V4L2_DV_BT_656_1120;
+- fmt->bt.polarities = polarities;
+- fmt->bt.width = image_width;
+- fmt->bt.height = image_height;
+- fmt->bt.hfrontporch = h_fp;
+- fmt->bt.vfrontporch = v_fp;
+- fmt->bt.hsync = hsync;
+- fmt->bt.vsync = vsync;
+- fmt->bt.hbackporch = frame_width - image_width - h_fp - hsync;
++ t.type = V4L2_DV_BT_656_1120;
++ t.bt.polarities = polarities;
++ t.bt.width = image_width;
++ t.bt.height = image_height;
++ t.bt.hfrontporch = h_fp;
++ t.bt.vfrontporch = v_fp;
++ t.bt.hsync = hsync;
++ t.bt.vsync = vsync;
++ t.bt.hbackporch = frame_width - image_width - h_fp - hsync;
+
+ if (!interlaced) {
+- fmt->bt.vbackporch = frame_height - image_height - v_fp - vsync;
+- fmt->bt.interlaced = V4L2_DV_PROGRESSIVE;
++ t.bt.vbackporch = frame_height - image_height - v_fp - vsync;
++ t.bt.interlaced = V4L2_DV_PROGRESSIVE;
+ } else {
+- fmt->bt.vbackporch = (frame_height - image_height - 2 * v_fp -
++ t.bt.vbackporch = (frame_height - image_height - 2 * v_fp -
+ 2 * vsync) / 2;
+- fmt->bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
+- 2 * vsync - fmt->bt.vbackporch;
+- fmt->bt.il_vfrontporch = v_fp;
+- fmt->bt.il_vsync = vsync;
+- fmt->bt.flags |= V4L2_DV_FL_HALF_LINE;
+- fmt->bt.interlaced = V4L2_DV_INTERLACED;
++ t.bt.il_vbackporch = frame_height - image_height - 2 * v_fp -
++ 2 * vsync - t.bt.vbackporch;
++ t.bt.il_vfrontporch = v_fp;
++ t.bt.il_vsync = vsync;
++ t.bt.flags |= V4L2_DV_FL_HALF_LINE;
++ t.bt.interlaced = V4L2_DV_INTERLACED;
+ }
+
+- fmt->bt.pixelclock = pix_clk;
+- fmt->bt.standards = V4L2_DV_BT_STD_GTF;
++ t.bt.pixelclock = pix_clk;
++ t.bt.standards = V4L2_DV_BT_STD_GTF;
+
+ if (!default_gtf)
+- fmt->bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
++ t.bt.flags |= V4L2_DV_FL_REDUCED_BLANKING;
+
++ if (!v4l2_valid_dv_timings(&t, cap, NULL, NULL))
++ return false;
++ *timings = t;
+ return true;
+ }
+ EXPORT_SYMBOL_GPL(v4l2_detect_gtf);
+diff --git a/drivers/message/fusion/mptsas.c b/drivers/message/fusion/mptsas.c
+index a0bcb0864ecd2c..a798e26c6402d4 100644
+--- a/drivers/message/fusion/mptsas.c
++++ b/drivers/message/fusion/mptsas.c
+@@ -4231,10 +4231,8 @@ mptsas_find_phyinfo_by_phys_disk_num(MPT_ADAPTER *ioc, u8 phys_disk_num,
+ static void
+ mptsas_reprobe_lun(struct scsi_device *sdev, void *data)
+ {
+- int rc;
+-
+ sdev->no_uld_attach = data ? 1 : 0;
+- rc = scsi_device_reprobe(sdev);
++ WARN_ON(scsi_device_reprobe(sdev));
+ }
+
+ static void
+diff --git a/drivers/mfd/da9052-spi.c b/drivers/mfd/da9052-spi.c
+index be5f2b34e18aeb..80fc5c0cac2fb0 100644
+--- a/drivers/mfd/da9052-spi.c
++++ b/drivers/mfd/da9052-spi.c
+@@ -37,7 +37,7 @@ static int da9052_spi_probe(struct spi_device *spi)
+ spi_set_drvdata(spi, da9052);
+
+ config = da9052_regmap_config;
+- config.read_flag_mask = 1;
++ config.write_flag_mask = 1;
+ config.reg_bits = 7;
+ config.pad_bits = 1;
+ config.val_bits = 8;
+diff --git a/drivers/mfd/intel_soc_pmic_bxtwc.c b/drivers/mfd/intel_soc_pmic_bxtwc.c
+index ccd76800d8e49b..b7204072e93ef8 100644
+--- a/drivers/mfd/intel_soc_pmic_bxtwc.c
++++ b/drivers/mfd/intel_soc_pmic_bxtwc.c
+@@ -148,6 +148,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip = {
+
+ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_pwrbtn = {
+ .name = "bxtwc_irq_chip_pwrbtn",
++ .domain_suffix = "PWRBTN",
+ .status_base = BXTWC_PWRBTNIRQ,
+ .mask_base = BXTWC_MPWRBTNIRQ,
+ .irqs = bxtwc_regmap_irqs_pwrbtn,
+@@ -157,6 +158,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_pwrbtn = {
+
+ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_tmu = {
+ .name = "bxtwc_irq_chip_tmu",
++ .domain_suffix = "TMU",
+ .status_base = BXTWC_TMUIRQ,
+ .mask_base = BXTWC_MTMUIRQ,
+ .irqs = bxtwc_regmap_irqs_tmu,
+@@ -166,6 +168,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_tmu = {
+
+ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_bcu = {
+ .name = "bxtwc_irq_chip_bcu",
++ .domain_suffix = "BCU",
+ .status_base = BXTWC_BCUIRQ,
+ .mask_base = BXTWC_MBCUIRQ,
+ .irqs = bxtwc_regmap_irqs_bcu,
+@@ -175,6 +178,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_bcu = {
+
+ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_adc = {
+ .name = "bxtwc_irq_chip_adc",
++ .domain_suffix = "ADC",
+ .status_base = BXTWC_ADCIRQ,
+ .mask_base = BXTWC_MADCIRQ,
+ .irqs = bxtwc_regmap_irqs_adc,
+@@ -184,6 +188,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_adc = {
+
+ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_chgr = {
+ .name = "bxtwc_irq_chip_chgr",
++ .domain_suffix = "CHGR",
+ .status_base = BXTWC_CHGR0IRQ,
+ .mask_base = BXTWC_MCHGR0IRQ,
+ .irqs = bxtwc_regmap_irqs_chgr,
+@@ -193,6 +198,7 @@ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_chgr = {
+
+ static const struct regmap_irq_chip bxtwc_regmap_irq_chip_crit = {
+ .name = "bxtwc_irq_chip_crit",
++ .domain_suffix = "CRIT",
+ .status_base = BXTWC_CRITIRQ,
+ .mask_base = BXTWC_MCRITIRQ,
+ .irqs = bxtwc_regmap_irqs_crit,
+@@ -230,44 +236,55 @@ static const struct resource tmu_resources[] = {
+ };
+
+ static struct mfd_cell bxt_wc_dev[] = {
+- {
+- .name = "bxt_wcove_gpadc",
+- .num_resources = ARRAY_SIZE(adc_resources),
+- .resources = adc_resources,
+- },
+ {
+ .name = "bxt_wcove_thermal",
+ .num_resources = ARRAY_SIZE(thermal_resources),
+ .resources = thermal_resources,
+ },
+ {
+- .name = "bxt_wcove_usbc",
+- .num_resources = ARRAY_SIZE(usbc_resources),
+- .resources = usbc_resources,
++ .name = "bxt_wcove_gpio",
++ .num_resources = ARRAY_SIZE(gpio_resources),
++ .resources = gpio_resources,
+ },
+ {
+- .name = "bxt_wcove_ext_charger",
+- .num_resources = ARRAY_SIZE(charger_resources),
+- .resources = charger_resources,
++ .name = "bxt_wcove_region",
++ },
++};
++
++static const struct mfd_cell bxt_wc_tmu_dev[] = {
++ {
++ .name = "bxt_wcove_tmu",
++ .num_resources = ARRAY_SIZE(tmu_resources),
++ .resources = tmu_resources,
+ },
++};
++
++static const struct mfd_cell bxt_wc_bcu_dev[] = {
+ {
+ .name = "bxt_wcove_bcu",
+ .num_resources = ARRAY_SIZE(bcu_resources),
+ .resources = bcu_resources,
+ },
++};
++
++static const struct mfd_cell bxt_wc_adc_dev[] = {
+ {
+- .name = "bxt_wcove_tmu",
+- .num_resources = ARRAY_SIZE(tmu_resources),
+- .resources = tmu_resources,
++ .name = "bxt_wcove_gpadc",
++ .num_resources = ARRAY_SIZE(adc_resources),
++ .resources = adc_resources,
+ },
++};
+
++static struct mfd_cell bxt_wc_chgr_dev[] = {
+ {
+- .name = "bxt_wcove_gpio",
+- .num_resources = ARRAY_SIZE(gpio_resources),
+- .resources = gpio_resources,
++ .name = "bxt_wcove_usbc",
++ .num_resources = ARRAY_SIZE(usbc_resources),
++ .resources = usbc_resources,
+ },
+ {
+- .name = "bxt_wcove_region",
++ .name = "bxt_wcove_ext_charger",
++ .num_resources = ARRAY_SIZE(charger_resources),
++ .resources = charger_resources,
+ },
+ };
+
+@@ -425,6 +442,26 @@ static int bxtwc_add_chained_irq_chip(struct intel_soc_pmic *pmic,
+ 0, chip, data);
+ }
+
++static int bxtwc_add_chained_devices(struct intel_soc_pmic *pmic,
++ const struct mfd_cell *cells, int n_devs,
++ struct regmap_irq_chip_data *pdata,
++ int pirq, int irq_flags,
++ const struct regmap_irq_chip *chip,
++ struct regmap_irq_chip_data **data)
++{
++ struct device *dev = pmic->dev;
++ struct irq_domain *domain;
++ int ret;
++
++ ret = bxtwc_add_chained_irq_chip(pmic, pdata, pirq, irq_flags, chip, data);
++ if (ret)
++ return dev_err_probe(dev, ret, "Failed to add %s IRQ chip\n", chip->name);
++
++ domain = regmap_irq_get_domain(*data);
++
++ return devm_mfd_add_devices(dev, PLATFORM_DEVID_NONE, cells, n_devs, NULL, 0, domain);
++}
++
+ static int bxtwc_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -466,6 +503,15 @@ static int bxtwc_probe(struct platform_device *pdev)
+ if (ret)
+ return dev_err_probe(dev, ret, "Failed to add IRQ chip\n");
+
++ ret = bxtwc_add_chained_devices(pmic, bxt_wc_tmu_dev, ARRAY_SIZE(bxt_wc_tmu_dev),
++ pmic->irq_chip_data,
++ BXTWC_TMU_LVL1_IRQ,
++ IRQF_ONESHOT,
++ &bxtwc_regmap_irq_chip_tmu,
++ &pmic->irq_chip_data_tmu);
++ if (ret)
++ return ret;
++
+ ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+ BXTWC_PWRBTN_LVL1_IRQ,
+ IRQF_ONESHOT,
+@@ -474,40 +520,32 @@ static int bxtwc_probe(struct platform_device *pdev)
+ if (ret)
+ return dev_err_probe(dev, ret, "Failed to add PWRBTN IRQ chip\n");
+
+- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+- BXTWC_TMU_LVL1_IRQ,
+- IRQF_ONESHOT,
+- &bxtwc_regmap_irq_chip_tmu,
+- &pmic->irq_chip_data_tmu);
+- if (ret)
+- return dev_err_probe(dev, ret, "Failed to add TMU IRQ chip\n");
+-
+- /* Add chained IRQ handler for BCU IRQs */
+- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+- BXTWC_BCU_LVL1_IRQ,
+- IRQF_ONESHOT,
+- &bxtwc_regmap_irq_chip_bcu,
+- &pmic->irq_chip_data_bcu);
++ ret = bxtwc_add_chained_devices(pmic, bxt_wc_bcu_dev, ARRAY_SIZE(bxt_wc_bcu_dev),
++ pmic->irq_chip_data,
++ BXTWC_BCU_LVL1_IRQ,
++ IRQF_ONESHOT,
++ &bxtwc_regmap_irq_chip_bcu,
++ &pmic->irq_chip_data_bcu);
+ if (ret)
+- return dev_err_probe(dev, ret, "Failed to add BUC IRQ chip\n");
++ return ret;
+
+- /* Add chained IRQ handler for ADC IRQs */
+- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+- BXTWC_ADC_LVL1_IRQ,
+- IRQF_ONESHOT,
+- &bxtwc_regmap_irq_chip_adc,
+- &pmic->irq_chip_data_adc);
++ ret = bxtwc_add_chained_devices(pmic, bxt_wc_adc_dev, ARRAY_SIZE(bxt_wc_adc_dev),
++ pmic->irq_chip_data,
++ BXTWC_ADC_LVL1_IRQ,
++ IRQF_ONESHOT,
++ &bxtwc_regmap_irq_chip_adc,
++ &pmic->irq_chip_data_adc);
+ if (ret)
+- return dev_err_probe(dev, ret, "Failed to add ADC IRQ chip\n");
++ return ret;
+
+- /* Add chained IRQ handler for CHGR IRQs */
+- ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+- BXTWC_CHGR_LVL1_IRQ,
+- IRQF_ONESHOT,
+- &bxtwc_regmap_irq_chip_chgr,
+- &pmic->irq_chip_data_chgr);
++ ret = bxtwc_add_chained_devices(pmic, bxt_wc_chgr_dev, ARRAY_SIZE(bxt_wc_chgr_dev),
++ pmic->irq_chip_data,
++ BXTWC_CHGR_LVL1_IRQ,
++ IRQF_ONESHOT,
++ &bxtwc_regmap_irq_chip_chgr,
++ &pmic->irq_chip_data_chgr);
+ if (ret)
+- return dev_err_probe(dev, ret, "Failed to add CHGR IRQ chip\n");
++ return ret;
+
+ /* Add chained IRQ handler for CRIT IRQs */
+ ret = bxtwc_add_chained_irq_chip(pmic, pmic->irq_chip_data,
+diff --git a/drivers/mfd/rt5033.c b/drivers/mfd/rt5033.c
+index 7e23ab3d5842c8..84ebc96f58e48d 100644
+--- a/drivers/mfd/rt5033.c
++++ b/drivers/mfd/rt5033.c
+@@ -81,8 +81,8 @@ static int rt5033_i2c_probe(struct i2c_client *i2c)
+ chip_rev = dev_id & RT5033_CHIP_REV_MASK;
+ dev_info(&i2c->dev, "Device found (rev. %d)\n", chip_rev);
+
+- ret = regmap_add_irq_chip(rt5033->regmap, rt5033->irq,
+- IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
++ ret = devm_regmap_add_irq_chip(rt5033->dev, rt5033->regmap,
++ rt5033->irq, IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ 0, &rt5033_irq_chip, &rt5033->irq_data);
+ if (ret) {
+ dev_err(&i2c->dev, "Failed to request IRQ %d: %d\n",
+diff --git a/drivers/mfd/tps65010.c b/drivers/mfd/tps65010.c
+index 2b9105295f3012..710364435b6b9e 100644
+--- a/drivers/mfd/tps65010.c
++++ b/drivers/mfd/tps65010.c
+@@ -544,17 +544,13 @@ static int tps65010_probe(struct i2c_client *client)
+ */
+ if (client->irq > 0) {
+ status = request_irq(client->irq, tps65010_irq,
+- IRQF_TRIGGER_FALLING, DRIVER_NAME, tps);
++ IRQF_TRIGGER_FALLING | IRQF_NO_AUTOEN,
++ DRIVER_NAME, tps);
+ if (status < 0) {
+ dev_dbg(&client->dev, "can't get IRQ %d, err %d\n",
+ client->irq, status);
+ return status;
+ }
+- /* annoying race here, ideally we'd have an option
+- * to claim the irq now and enable it later.
+- * FIXME genirq IRQF_NOAUTOEN now solves that ...
+- */
+- disable_irq(client->irq);
+ set_bit(FLAG_IRQ_ENABLE, &tps->flags);
+ } else
+ dev_warn(&client->dev, "IRQ not configured!\n");
+diff --git a/drivers/misc/apds990x.c b/drivers/misc/apds990x.c
+index 6d4edd69db126a..e7d73c972f65dc 100644
+--- a/drivers/misc/apds990x.c
++++ b/drivers/misc/apds990x.c
+@@ -1147,7 +1147,7 @@ static int apds990x_probe(struct i2c_client *client)
+ err = chip->pdata->setup_resources();
+ if (err) {
+ err = -EINVAL;
+- goto fail3;
++ goto fail4;
+ }
+ }
+
+@@ -1155,7 +1155,7 @@ static int apds990x_probe(struct i2c_client *client)
+ apds990x_attribute_group);
+ if (err < 0) {
+ dev_err(&chip->client->dev, "Sysfs registration failed\n");
+- goto fail4;
++ goto fail5;
+ }
+
+ err = request_threaded_irq(client->irq, NULL,
+@@ -1166,15 +1166,17 @@ static int apds990x_probe(struct i2c_client *client)
+ if (err) {
+ dev_err(&client->dev, "could not get IRQ %d\n",
+ client->irq);
+- goto fail5;
++ goto fail6;
+ }
+ return err;
+-fail5:
++fail6:
+ sysfs_remove_group(&chip->client->dev.kobj,
+ &apds990x_attribute_group[0]);
+-fail4:
++fail5:
+ if (chip->pdata && chip->pdata->release_resources)
+ chip->pdata->release_resources();
++fail4:
++ pm_runtime_disable(&client->dev);
+ fail3:
+ regulator_bulk_disable(ARRAY_SIZE(chip->regs), chip->regs);
+ fail2:
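
The apds990x relabeling above keeps the probe error path in strict reverse
order of setup: a pm_runtime_disable() step is inserted, so the later labels
shift by one instead of reusing a neighbour. A minimal sketch of that
ordered-unwind idiom (step_*/undo_* are illustrative placeholders, not
driver symbols):

/* Error labels undo setup in the exact mirror of the setup order, so
 * adding a setup step means inserting a label, never sharing one.
 */
static int my_probe(void)
{
	int err;

	err = step_a();
	if (err)
		return err;

	err = step_b();
	if (err)
		goto undo_a;

	err = step_c();
	if (err)
		goto undo_b;

	return 0;

undo_b:
	undo_step_b();
undo_a:
	undo_step_a();
	return err;
}
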
+diff --git a/drivers/misc/lkdtm/bugs.c b/drivers/misc/lkdtm/bugs.c
+index 62ba0152547975..376047beea3d64 100644
+--- a/drivers/misc/lkdtm/bugs.c
++++ b/drivers/misc/lkdtm/bugs.c
+@@ -445,7 +445,7 @@ static void lkdtm_FAM_BOUNDS(void)
+
+ pr_err("FAIL: survived access of invalid flexible array member index!\n");
+
+- if (!__has_attribute(__counted_by__))
++ if (!IS_ENABLED(CONFIG_CC_HAS_COUNTED_BY))
+ pr_warn("This is expected since this %s was built with a compiler that does not support __counted_by\n",
+ lkdtm_kernel_info);
+ else if (IS_ENABLED(CONFIG_UBSAN_BOUNDS))
+diff --git a/drivers/mmc/host/mmc_spi.c b/drivers/mmc/host/mmc_spi.c
+index 8fee7052f2ef4f..47443fb5eb3362 100644
+--- a/drivers/mmc/host/mmc_spi.c
++++ b/drivers/mmc/host/mmc_spi.c
+@@ -222,10 +222,6 @@ static int mmc_spi_response_get(struct mmc_spi_host *host,
+ u8 leftover = 0;
+ unsigned short rotator;
+ int i;
+- char tag[32];
+-
+- snprintf(tag, sizeof(tag), " ... CMD%d response SPI_%s",
+- cmd->opcode, maptype(cmd));
+
+ /* Except for data block reads, the whole response will already
+ * be stored in the scratch buffer. It's somewhere after the
+@@ -378,8 +374,9 @@ static int mmc_spi_response_get(struct mmc_spi_host *host,
+ }
+
+ if (value < 0)
+- dev_dbg(&host->spi->dev, "%s: resp %04x %08x\n",
+- tag, cmd->resp[0], cmd->resp[1]);
++ dev_dbg(&host->spi->dev,
++ " ... CMD%d response SPI_%s: resp %04x %08x\n",
++ cmd->opcode, maptype(cmd), cmd->resp[0], cmd->resp[1]);
+
+ /* disable chipselect on errors and some success cases */
+ if (value >= 0 && cs_on)
+diff --git a/drivers/mtd/hyperbus/rpc-if.c b/drivers/mtd/hyperbus/rpc-if.c
+index b22aa57119f238..e7a28f3316c3f2 100644
+--- a/drivers/mtd/hyperbus/rpc-if.c
++++ b/drivers/mtd/hyperbus/rpc-if.c
+@@ -163,9 +163,16 @@ static void rpcif_hb_remove(struct platform_device *pdev)
+ pm_runtime_disable(hyperbus->rpc.dev);
+ }
+
++static const struct platform_device_id rpc_if_hyperflash_id_table[] = {
++ { .name = "rpc-if-hyperflash" },
++ { /* sentinel */ }
++};
++MODULE_DEVICE_TABLE(platform, rpc_if_hyperflash_id_table);
++
+ static struct platform_driver rpcif_platform_driver = {
+ .probe = rpcif_hb_probe,
+ .remove_new = rpcif_hb_remove,
++ .id_table = rpc_if_hyperflash_id_table,
+ .driver = {
+ .name = "rpc-if-hyperflash",
+ },
+diff --git a/drivers/mtd/nand/raw/atmel/pmecc.c b/drivers/mtd/nand/raw/atmel/pmecc.c
+index 4d7dc8a9c37385..a22aab4ed4e8ab 100644
+--- a/drivers/mtd/nand/raw/atmel/pmecc.c
++++ b/drivers/mtd/nand/raw/atmel/pmecc.c
+@@ -362,7 +362,7 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
+ size = ALIGN(size, sizeof(s32));
+ size += (req->ecc.strength + 1) * sizeof(s32) * 3;
+
+- user = kzalloc(size, GFP_KERNEL);
++ user = devm_kzalloc(pmecc->dev, size, GFP_KERNEL);
+ if (!user)
+ return ERR_PTR(-ENOMEM);
+
+@@ -408,12 +408,6 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
+ }
+ EXPORT_SYMBOL_GPL(atmel_pmecc_create_user);
+
+-void atmel_pmecc_destroy_user(struct atmel_pmecc_user *user)
+-{
+- kfree(user);
+-}
+-EXPORT_SYMBOL_GPL(atmel_pmecc_destroy_user);
+-
+ static int get_strength(struct atmel_pmecc_user *user)
+ {
+ const int *strengths = user->pmecc->caps->strengths;
+diff --git a/drivers/mtd/nand/raw/atmel/pmecc.h b/drivers/mtd/nand/raw/atmel/pmecc.h
+index 7851c05126cf15..cc0c5af1f4f1ab 100644
+--- a/drivers/mtd/nand/raw/atmel/pmecc.h
++++ b/drivers/mtd/nand/raw/atmel/pmecc.h
+@@ -55,8 +55,6 @@ struct atmel_pmecc *devm_atmel_pmecc_get(struct device *dev);
+ struct atmel_pmecc_user *
+ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
+ struct atmel_pmecc_user_req *req);
+-void atmel_pmecc_destroy_user(struct atmel_pmecc_user *user);
+-
+ void atmel_pmecc_reset(struct atmel_pmecc *pmecc);
+ int atmel_pmecc_enable(struct atmel_pmecc_user *user, int op);
+ void atmel_pmecc_disable(struct atmel_pmecc_user *user);
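
The PMECC conversion above works because devm_kzalloc() ties the
allocation's lifetime to the underlying struct device: the memory is freed
automatically on unbind, so the explicit atmel_pmecc_destroy_user()/kfree()
pair becomes dead code. A hedged sketch of the substitution (fragment,
mirroring the hunk):

	/* Freed automatically when pmecc->dev is unbound; no destroy
	 * helper is needed and error paths cannot leak it.
	 */
	user = devm_kzalloc(pmecc->dev, size, GFP_KERNEL);
	if (!user)
		return ERR_PTR(-ENOMEM);

The trade-off is that the memory now lives until the device goes away,
which is presumably acceptable here since a PMECC user is created once and
used for the controller's whole lifetime.
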
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index 9d6e85bf227b92..8c57df44c40fe8 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -89,7 +89,7 @@ void spi_nor_spimem_setup_op(const struct spi_nor *nor,
+ op->addr.buswidth = spi_nor_get_protocol_addr_nbits(proto);
+
+ if (op->dummy.nbytes)
+- op->dummy.buswidth = spi_nor_get_protocol_addr_nbits(proto);
++ op->dummy.buswidth = spi_nor_get_protocol_data_nbits(proto);
+
+ if (op->data.nbytes)
+ op->data.buswidth = spi_nor_get_protocol_data_nbits(proto);
+diff --git a/drivers/mtd/spi-nor/spansion.c b/drivers/mtd/spi-nor/spansion.c
+index d6c92595f6bc9b..5a88a6096ca8c9 100644
+--- a/drivers/mtd/spi-nor/spansion.c
++++ b/drivers/mtd/spi-nor/spansion.c
+@@ -106,6 +106,7 @@ static int cypress_nor_sr_ready_and_clear_reg(struct spi_nor *nor, u64 addr)
+ int ret;
+
+ if (nor->reg_proto == SNOR_PROTO_8_8_8_DTR) {
++ op.addr.nbytes = nor->addr_nbytes;
+ op.dummy.nbytes = params->rdsr_dummy;
+ op.data.nbytes = 2;
+ }
+diff --git a/drivers/mtd/ubi/attach.c b/drivers/mtd/ubi/attach.c
+index ae5abe492b52ab..adc47b87b38a5f 100644
+--- a/drivers/mtd/ubi/attach.c
++++ b/drivers/mtd/ubi/attach.c
+@@ -1447,7 +1447,7 @@ static int scan_all(struct ubi_device *ubi, struct ubi_attach_info *ai,
+ return err;
+ }
+
+-static struct ubi_attach_info *alloc_ai(void)
++static struct ubi_attach_info *alloc_ai(const char *slab_name)
+ {
+ struct ubi_attach_info *ai;
+
+@@ -1461,7 +1461,7 @@ static struct ubi_attach_info *alloc_ai(void)
+ INIT_LIST_HEAD(&ai->alien);
+ INIT_LIST_HEAD(&ai->fastmap);
+ ai->volumes = RB_ROOT;
+- ai->aeb_slab_cache = kmem_cache_create("ubi_aeb_slab_cache",
++ ai->aeb_slab_cache = kmem_cache_create(slab_name,
+ sizeof(struct ubi_ainf_peb),
+ 0, 0, NULL);
+ if (!ai->aeb_slab_cache) {
+@@ -1491,7 +1491,7 @@ static int scan_fast(struct ubi_device *ubi, struct ubi_attach_info **ai)
+
+ err = -ENOMEM;
+
+- scan_ai = alloc_ai();
++ scan_ai = alloc_ai("ubi_aeb_slab_cache_fastmap");
+ if (!scan_ai)
+ goto out;
+
+@@ -1557,7 +1557,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
+ int err;
+ struct ubi_attach_info *ai;
+
+- ai = alloc_ai();
++ ai = alloc_ai("ubi_aeb_slab_cache");
+ if (!ai)
+ return -ENOMEM;
+
+@@ -1575,7 +1575,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
+ if (err > 0 || mtd_is_eccerr(err)) {
+ if (err != UBI_NO_FASTMAP) {
+ destroy_ai(ai);
+- ai = alloc_ai();
++ ai = alloc_ai("ubi_aeb_slab_cache");
+ if (!ai)
+ return -ENOMEM;
+
+@@ -1614,7 +1614,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
+ if (ubi->fm && ubi_dbg_chk_fastmap(ubi)) {
+ struct ubi_attach_info *scan_ai;
+
+- scan_ai = alloc_ai();
++ scan_ai = alloc_ai("ubi_aeb_slab_cache_dbg_chk_fastmap");
+ if (!scan_ai) {
+ err = -ENOMEM;
+ goto out_wl;
+diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c
+index 2a9cc9413c427d..9bdb6525f1281f 100644
+--- a/drivers/mtd/ubi/fastmap-wl.c
++++ b/drivers/mtd/ubi/fastmap-wl.c
+@@ -346,14 +346,27 @@ int ubi_wl_get_peb(struct ubi_device *ubi)
+ * WL sub-system.
+ *
+ * @ubi: UBI device description object
++ * @need_fill: whether to fill wear-leveling pool when no PEBs are found
+ */
+-static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi)
++static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi,
++ bool need_fill)
+ {
+ struct ubi_fm_pool *pool = &ubi->fm_wl_pool;
+ int pnum;
+
+- if (pool->used == pool->size)
++ if (pool->used == pool->size) {
++ if (need_fill && !ubi->fm_work_scheduled) {
++ /*
++ * We cannot update the fastmap here because this
++ * function is called in atomic context.
++ * Let's fail here and refill/update it as soon as
++ * possible.
++ */
++ ubi->fm_work_scheduled = 1;
++ schedule_work(&ubi->fm_work);
++ }
+ return NULL;
++ }
+
+ pnum = pool->pebs[pool->used];
+ return ubi->lookuptbl[pnum];
+@@ -375,7 +388,7 @@ static bool need_wear_leveling(struct ubi_device *ubi)
+ if (!ubi->used.rb_node)
+ return false;
+
+- e = next_peb_for_wl(ubi);
++ e = next_peb_for_wl(ubi, false);
+ if (!e) {
+ if (!ubi->free.rb_node)
+ return false;
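
The need_fill parameter added above exists because next_peb_for_wl() can be
reached in atomic context, where the fastmap cannot be rewritten directly;
the code therefore sets a flag and defers the update to a pre-initialized
work item. The general shape of that deferral pattern, sketched with
illustrative names (my_ctx/my_work_fn are not UBI symbols):

#include <linux/workqueue.h>

struct my_ctx {
	struct work_struct work;
	int work_scheduled;		/* guards against double-scheduling */
};

static void my_work_fn(struct work_struct *work)
{
	struct my_ctx *ctx = container_of(work, struct my_ctx, work);

	/* Process context: sleeping, locking and I/O are all fine here. */
	ctx->work_scheduled = 0;
}

static void my_init(struct my_ctx *ctx)
{
	INIT_WORK(&ctx->work, my_work_fn);
}

/* May run under a spinlock: only flag and schedule, never sleep. */
static void my_kick(struct my_ctx *ctx)
{
	if (!ctx->work_scheduled) {
		ctx->work_scheduled = 1;
		schedule_work(&ctx->work);
	}
}
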
+diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
+index 5a3558bbb90356..e5cf3bdca3b012 100644
+--- a/drivers/mtd/ubi/vmt.c
++++ b/drivers/mtd/ubi/vmt.c
+@@ -143,8 +143,10 @@ static struct fwnode_handle *find_volume_fwnode(struct ubi_volume *vol)
+ vol->vol_id != volid)
+ continue;
+
++ fwnode_handle_put(fw_vols);
+ return fw_vol;
+ }
++ fwnode_handle_put(fw_vols);
+
+ return NULL;
+ }
+diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
+index a357f3d27f2f3d..fbd399cf650337 100644
+--- a/drivers/mtd/ubi/wl.c
++++ b/drivers/mtd/ubi/wl.c
+@@ -683,7 +683,7 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ ubi_assert(!ubi->move_to_put);
+
+ #ifdef CONFIG_MTD_UBI_FASTMAP
+- if (!next_peb_for_wl(ubi) ||
++ if (!next_peb_for_wl(ubi, true) ||
+ #else
+ if (!ubi->free.rb_node ||
+ #endif
+@@ -846,7 +846,14 @@ static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
+ goto out_not_moved;
+ }
+ if (err == MOVE_RETRY) {
+- scrubbing = 1;
++ /*
++ * For source PEB:
++			 * 1. If scrubbing is set (scrub-type PEB), it will
++			 *    be put back into the ubi->scrub list.
++			 * 2. A non-scrub-type PEB will be put back into the
++			 *    ubi->used list.
++ */
++ keep = 1;
+ dst_leb_clean = 1;
+ goto out_not_moved;
+ }
+diff --git a/drivers/mtd/ubi/wl.h b/drivers/mtd/ubi/wl.h
+index 7b6715ef6d4a35..a69169c35e310f 100644
+--- a/drivers/mtd/ubi/wl.h
++++ b/drivers/mtd/ubi/wl.h
+@@ -5,7 +5,8 @@
+ static void update_fastmap_work_fn(struct work_struct *wrk);
+ static struct ubi_wl_entry *find_anchor_wl_entry(struct rb_root *root);
+ static struct ubi_wl_entry *get_peb_for_wl(struct ubi_device *ubi);
+-static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi);
++static struct ubi_wl_entry *next_peb_for_wl(struct ubi_device *ubi,
++ bool need_fill);
+ static bool need_wear_leveling(struct ubi_device *ubi);
+ static void ubi_fastmap_close(struct ubi_device *ubi);
+ static inline void ubi_fastmap_init(struct ubi_device *ubi, int *count)
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 99d025b69079a8..3d9ee91e1f8be0 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -4558,7 +4558,7 @@ int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
+ struct net_device *dev = bp->dev;
+
+ if (page_mode) {
+- bp->flags &= ~BNXT_FLAG_AGG_RINGS;
++ bp->flags &= ~(BNXT_FLAG_AGG_RINGS | BNXT_FLAG_NO_AGG_RINGS);
+ bp->flags |= BNXT_FLAG_RX_PAGE_MODE;
+
+ if (bp->xdp_prog->aux->xdp_has_frags)
+@@ -9053,7 +9053,6 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
+ struct hwrm_port_mac_ptp_qcfg_output *resp;
+ struct hwrm_port_mac_ptp_qcfg_input *req;
+ struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
+- bool phc_cfg;
+ u8 flags;
+ int rc;
+
+@@ -9100,8 +9099,9 @@ static int __bnxt_hwrm_ptp_qcfg(struct bnxt *bp)
+ rc = -ENODEV;
+ goto exit;
+ }
+- phc_cfg = (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0;
+- rc = bnxt_ptp_init(bp, phc_cfg);
++ ptp->rtc_configured =
++ (flags & PORT_MAC_PTP_QCFG_RESP_FLAGS_RTC_CONFIGURED) != 0;
++ rc = bnxt_ptp_init(bp);
+ if (rc)
+ netdev_warn(bp->dev, "PTP initialization failed.\n");
+ exit:
+@@ -14494,6 +14494,14 @@ static int bnxt_change_mtu(struct net_device *dev, int new_mtu)
+ bnxt_close_nic(bp, true, false);
+
+ WRITE_ONCE(dev->mtu, new_mtu);
++
++ /* MTU change may change the AGG ring settings if an XDP multi-buffer
++	 * program is attached. We need to set the AGG ring settings and
++ * rx_skb_func accordingly.
++ */
++ if (READ_ONCE(bp->xdp_prog))
++ bnxt_set_rx_skb_mode(bp, true);
++
+ bnxt_set_ring_params(bp);
+
+ if (netif_running(dev))
+@@ -15231,6 +15239,13 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
+
+ for (i = 0; i <= BNXT_VNIC_NTUPLE; i++) {
+ vnic = &bp->vnic_info[i];
++
++ rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
++ if (rc) {
++ netdev_err(bp->dev, "hwrm vnic %d set rss failure rc: %d\n",
++ vnic->vnic_id, rc);
++ return rc;
++ }
+ vnic->mru = bp->dev->mtu + ETH_HLEN + VLAN_HLEN;
+ bnxt_hwrm_vnic_update(bp, vnic,
+ VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
+@@ -15984,6 +15999,7 @@ static void bnxt_shutdown(struct pci_dev *pdev)
+ if (netif_running(dev))
+ dev_close(dev);
+
++ bnxt_ptp_clear(bp);
+ bnxt_clear_int_mode(bp);
+ pci_disable_device(pdev);
+
+@@ -16011,6 +16027,7 @@ static int bnxt_suspend(struct device *device)
+ rc = bnxt_close(dev);
+ }
+ bnxt_hwrm_func_drv_unrgtr(bp);
++ bnxt_ptp_clear(bp);
+ pci_disable_device(bp->pdev);
+ bnxt_free_ctx_mem(bp);
+ rtnl_unlock();
+@@ -16054,6 +16071,10 @@ static int bnxt_resume(struct device *device)
+ if (bp->fw_crash_mem)
+ bnxt_hwrm_crash_dump_mem_cfg(bp);
+
++ if (bnxt_ptp_init(bp)) {
++ kfree(bp->ptp_cfg);
++ bp->ptp_cfg = NULL;
++ }
+ bnxt_get_wol_settings(bp);
+ if (netif_running(dev)) {
+ rc = bnxt_open(dev);
+@@ -16232,8 +16253,12 @@ static void bnxt_io_resume(struct pci_dev *pdev)
+ rtnl_lock();
+
+ err = bnxt_hwrm_func_qcaps(bp);
+- if (!err && netif_running(netdev))
+- err = bnxt_open(netdev);
++ if (!err) {
++ if (netif_running(netdev))
++ err = bnxt_open(netdev);
++ else
++ err = bnxt_reserve_rings(bp, true);
++ }
+
+ if (!err)
+ netif_device_attach(netdev);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index f71cc8188b4e5b..20ba14eb87e00b 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -2838,19 +2838,24 @@ static int bnxt_get_link_ksettings(struct net_device *dev,
+ }
+
+ base->port = PORT_NONE;
+- if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_TP) {
++ if (media == BNXT_MEDIA_TP) {
+ base->port = PORT_TP;
+ linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT,
+ lk_ksettings->link_modes.supported);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_TP_BIT,
+ lk_ksettings->link_modes.advertising);
++ } else if (media == BNXT_MEDIA_KR) {
++ linkmode_set_bit(ETHTOOL_LINK_MODE_Backplane_BIT,
++ lk_ksettings->link_modes.supported);
++ linkmode_set_bit(ETHTOOL_LINK_MODE_Backplane_BIT,
++ lk_ksettings->link_modes.advertising);
+ } else {
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+ lk_ksettings->link_modes.supported);
+ linkmode_set_bit(ETHTOOL_LINK_MODE_FIBRE_BIT,
+ lk_ksettings->link_modes.advertising);
+
+- if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_DAC)
++ if (media == BNXT_MEDIA_CR)
+ base->port = PORT_DA;
+ else
+ base->port = PORT_FIBRE;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+index fa514be8765028..781225d3ba8ffc 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.c
+@@ -1024,7 +1024,7 @@ static void bnxt_ptp_free(struct bnxt *bp)
+ }
+ }
+
+-int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
++int bnxt_ptp_init(struct bnxt *bp)
+ {
+ struct bnxt_ptp_cfg *ptp = bp->ptp_cfg;
+ int rc;
+@@ -1047,7 +1047,7 @@ int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg)
+
+ if (BNXT_PTP_USE_RTC(bp)) {
+ bnxt_ptp_timecounter_init(bp, false);
+- rc = bnxt_ptp_init_rtc(bp, phc_cfg);
++ rc = bnxt_ptp_init_rtc(bp, ptp->rtc_configured);
+ if (rc)
+ goto out;
+ } else {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+index f322466ecad350..61e89bb2d2690c 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ptp.h
+@@ -133,6 +133,7 @@ struct bnxt_ptp_cfg {
+ BNXT_PTP_MSG_PDELAY_REQ | \
+ BNXT_PTP_MSG_PDELAY_RESP)
+ u8 tx_tstamp_en:1;
++ u8 rtc_configured:1;
+ int rx_filter;
+ u32 tstamp_filters;
+
+@@ -180,6 +181,6 @@ void bnxt_tx_ts_cmp(struct bnxt *bp, struct bnxt_napi *bnapi,
+ struct tx_ts_cmp *tscmp);
+ void bnxt_ptp_rtc_timecounter_init(struct bnxt_ptp_cfg *ptp, u64 ns);
+ int bnxt_ptp_init_rtc(struct bnxt *bp, bool phc_cfg);
+-int bnxt_ptp_init(struct bnxt *bp, bool phc_cfg);
++int bnxt_ptp_init(struct bnxt *bp);
+ void bnxt_ptp_clear(struct bnxt *bp);
+ #endif
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index 37881591774175..d178138981a967 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -17801,6 +17801,9 @@ static int tg3_init_one(struct pci_dev *pdev,
+ } else
+ persist_dma_mask = dma_mask = DMA_BIT_MASK(64);
+
++ if (tg3_asic_rev(tp) == ASIC_REV_57766)
++ persist_dma_mask = DMA_BIT_MASK(31);
++
+ /* Configure DMA attributes. */
+ if (dma_mask > DMA_BIT_MASK(32)) {
+ err = dma_set_mask(&pdev->dev, dma_mask);
+diff --git a/drivers/net/ethernet/google/gve/gve_adminq.c b/drivers/net/ethernet/google/gve/gve_adminq.c
+index e44e8b139633fc..060e0e6749380f 100644
+--- a/drivers/net/ethernet/google/gve/gve_adminq.c
++++ b/drivers/net/ethernet/google/gve/gve_adminq.c
+@@ -1248,10 +1248,10 @@ gve_adminq_configure_flow_rule(struct gve_priv *priv,
+ sizeof(struct gve_adminq_configure_flow_rule),
+ flow_rule_cmd);
+
+- if (err) {
++ if (err == -ETIME) {
+ dev_err(&priv->pdev->dev, "Timeout to configure the flow rule, trigger reset");
+ gve_reset(priv, true);
+- } else {
++ } else if (!err) {
+ priv->flow_rules_cache.rules_cache_synced = false;
+ }
+
+diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+index f2506511bbfff4..bce5b76f1e7a58 100644
+--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
++++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+@@ -5299,7 +5299,7 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags)
+ }
+
+ flags_complete:
+- bitmap_xor(changed_flags, pf->flags, orig_flags, I40E_PF_FLAGS_NBITS);
++ bitmap_xor(changed_flags, new_flags, orig_flags, I40E_PF_FLAGS_NBITS);
+
+ if (test_bit(I40E_FLAG_FW_LLDP_DIS, changed_flags))
+ reset_needed = I40E_PF_RESET_AND_REBUILD_FLAG;
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+index 59f62306b9cb02..b6ec01f6fa73e0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+@@ -1715,8 +1715,8 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+
+ /* copy Tx queue info from VF into VSI */
+ if (qpi->txq.ring_len > 0) {
+- vsi->tx_rings[i]->dma = qpi->txq.dma_ring_addr;
+- vsi->tx_rings[i]->count = qpi->txq.ring_len;
++ vsi->tx_rings[q_idx]->dma = qpi->txq.dma_ring_addr;
++ vsi->tx_rings[q_idx]->count = qpi->txq.ring_len;
+
+ /* Disable any existing queue first */
+ if (ice_vf_vsi_dis_single_txq(vf, vsi, q_idx))
+@@ -1725,7 +1725,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+ /* Configure a queue with the requested settings */
+ if (ice_vsi_cfg_single_txq(vsi, vsi->tx_rings, q_idx)) {
+ dev_warn(ice_pf_to_dev(pf), "VF-%d failed to configure TX queue %d\n",
+- vf->vf_id, i);
++ vf->vf_id, q_idx);
+ goto error_param;
+ }
+ }
+@@ -1733,24 +1733,23 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+ /* copy Rx queue info from VF into VSI */
+ if (qpi->rxq.ring_len > 0) {
+ u16 max_frame_size = ice_vc_get_max_frame_size(vf);
++ struct ice_rx_ring *ring = vsi->rx_rings[q_idx];
+ u32 rxdid;
+
+- vsi->rx_rings[i]->dma = qpi->rxq.dma_ring_addr;
+- vsi->rx_rings[i]->count = qpi->rxq.ring_len;
++ ring->dma = qpi->rxq.dma_ring_addr;
++ ring->count = qpi->rxq.ring_len;
+
+ if (qpi->rxq.crc_disable)
+- vsi->rx_rings[q_idx]->flags |=
+- ICE_RX_FLAGS_CRC_STRIP_DIS;
++ ring->flags |= ICE_RX_FLAGS_CRC_STRIP_DIS;
+ else
+- vsi->rx_rings[q_idx]->flags &=
+- ~ICE_RX_FLAGS_CRC_STRIP_DIS;
++ ring->flags &= ~ICE_RX_FLAGS_CRC_STRIP_DIS;
+
+ if (qpi->rxq.databuffer_size != 0 &&
+ (qpi->rxq.databuffer_size > ((16 * 1024) - 128) ||
+ qpi->rxq.databuffer_size < 1024))
+ goto error_param;
+ vsi->rx_buf_len = qpi->rxq.databuffer_size;
+- vsi->rx_rings[i]->rx_buf_len = vsi->rx_buf_len;
++ ring->rx_buf_len = vsi->rx_buf_len;
+ if (qpi->rxq.max_pkt_size > max_frame_size ||
+ qpi->rxq.max_pkt_size < 64)
+ goto error_param;
+@@ -1765,7 +1764,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
+
+ if (ice_vsi_cfg_single_rxq(vsi, q_idx)) {
+ dev_warn(ice_pf_to_dev(pf), "VF-%d failed to configure RX queue %d\n",
+- vf->vf_id, i);
++ vf->vf_id, q_idx);
+ goto error_param;
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 27935c54b91bc7..8216f843a7cd5f 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -112,6 +112,11 @@ struct mac_ops *get_mac_ops(void *cgxd)
+ return ((struct cgx *)cgxd)->mac_ops;
+ }
+
++u32 cgx_get_fifo_len(void *cgxd)
++{
++ return ((struct cgx *)cgxd)->fifo_len;
++}
++
+ void cgx_write(struct cgx *cgx, u64 lmac, u64 offset, u64 val)
+ {
+ writeq(val, cgx->reg_base + (lmac << cgx->mac_ops->lmac_offset) +
+@@ -209,6 +214,24 @@ u8 cgx_lmac_get_p2x(int cgx_id, int lmac_id)
+ return (cfg & CMR_P2X_SEL_MASK) >> CMR_P2X_SEL_SHIFT;
+ }
+
++static u8 cgx_get_nix_resetbit(struct cgx *cgx)
++{
++ int first_lmac;
++ u8 p2x;
++
++	/* Non-98XX silicons support only the NIX0 block */
++ if (cgx->pdev->subsystem_device != PCI_SUBSYS_DEVID_98XX)
++ return CGX_NIX0_RESET;
++
++ first_lmac = find_first_bit(&cgx->lmac_bmap, cgx->max_lmac_per_mac);
++ p2x = cgx_lmac_get_p2x(cgx->cgx_id, first_lmac);
++
++ if (p2x == CMR_P2X_SEL_NIX1)
++ return CGX_NIX1_RESET;
++ else
++ return CGX_NIX0_RESET;
++}
++
+ /* Ensure the required lock for event queue(where asynchronous events are
+ * posted) is acquired before calling this API. Else an asynchronous event(with
+ * latest link status) can reach the destination before this function returns
+@@ -501,7 +524,7 @@ static u32 cgx_get_lmac_fifo_len(void *cgxd, int lmac_id)
+ u8 num_lmacs;
+ u32 fifo_len;
+
+- fifo_len = cgx->mac_ops->fifo_len;
++ fifo_len = cgx->fifo_len;
+ num_lmacs = cgx->mac_ops->get_nr_lmacs(cgx);
+
+ switch (num_lmacs) {
+@@ -1719,6 +1742,8 @@ static int cgx_lmac_init(struct cgx *cgx)
+ lmac->lmac_type = cgx->mac_ops->get_lmac_type(cgx, lmac->lmac_id);
+ }
+
++ /* Start X2P reset on given MAC block */
++ cgx->mac_ops->mac_x2p_reset(cgx, true);
+ return cgx_lmac_verify_fwi_version(cgx);
+
+ err_bitmap_free:
+@@ -1764,7 +1789,7 @@ static void cgx_populate_features(struct cgx *cgx)
+ u64 cfg;
+
+ cfg = cgx_read(cgx, 0, CGX_CONST);
+- cgx->mac_ops->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg);
++ cgx->fifo_len = FIELD_GET(CGX_CONST_RXFIFO_SIZE, cfg);
+ cgx->max_lmac_per_mac = FIELD_GET(CGX_CONST_MAX_LMACS, cfg);
+
+ if (is_dev_rpm(cgx))
+@@ -1784,6 +1809,45 @@ static u8 cgx_get_rxid_mapoffset(struct cgx *cgx)
+ return 0x60;
+ }
+
++static void cgx_x2p_reset(void *cgxd, bool enable)
++{
++ struct cgx *cgx = cgxd;
++ int lmac_id;
++ u64 cfg;
++
++ if (enable) {
++ for_each_set_bit(lmac_id, &cgx->lmac_bmap, cgx->max_lmac_per_mac)
++ cgx->mac_ops->mac_enadis_rx(cgx, lmac_id, false);
++
++ usleep_range(1000, 2000);
++
++ cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG);
++ cfg |= cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP;
++ cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg);
++ } else {
++ cfg = cgx_read(cgx, 0, CGXX_CMR_GLOBAL_CONFIG);
++ cfg &= ~(cgx_get_nix_resetbit(cgx) | CGX_NSCI_DROP);
++ cgx_write(cgx, 0, CGXX_CMR_GLOBAL_CONFIG, cfg);
++ }
++}
++
++static int cgx_enadis_rx(void *cgxd, int lmac_id, bool enable)
++{
++ struct cgx *cgx = cgxd;
++ u64 cfg;
++
++ if (!is_lmac_valid(cgx, lmac_id))
++ return -ENODEV;
++
++ cfg = cgx_read(cgx, lmac_id, CGXX_CMRX_CFG);
++ if (enable)
++ cfg |= DATA_PKT_RX_EN;
++ else
++ cfg &= ~DATA_PKT_RX_EN;
++ cgx_write(cgx, lmac_id, CGXX_CMRX_CFG, cfg);
++ return 0;
++}
++
+ static struct mac_ops cgx_mac_ops = {
+ .name = "cgx",
+ .csr_offset = 0,
+@@ -1815,6 +1879,8 @@ static struct mac_ops cgx_mac_ops = {
+ .mac_get_pfc_frm_cfg = cgx_lmac_get_pfc_frm_cfg,
+ .mac_reset = cgx_lmac_reset,
+ .mac_stats_reset = cgx_stats_reset,
++ .mac_x2p_reset = cgx_x2p_reset,
++ .mac_enadis_rx = cgx_enadis_rx,
+ };
+
+ static int cgx_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
+index dc9ace30554af6..1cf12e5c7da873 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.h
+@@ -32,6 +32,10 @@
+ #define CGX_LMAC_TYPE_MASK 0xF
+ #define CGXX_CMRX_INT 0x040
+ #define FW_CGX_INT BIT_ULL(1)
++#define CGXX_CMR_GLOBAL_CONFIG 0x08
++#define CGX_NIX0_RESET BIT_ULL(2)
++#define CGX_NIX1_RESET BIT_ULL(3)
++#define CGX_NSCI_DROP BIT_ULL(9)
+ #define CGXX_CMRX_INT_ENA_W1S 0x058
+ #define CGXX_CMRX_RX_ID_MAP 0x060
+ #define CGXX_CMRX_RX_STAT0 0x070
+@@ -185,4 +189,5 @@ int cgx_lmac_get_pfc_frm_cfg(void *cgxd, int lmac_id, u8 *tx_pause,
+ int verify_lmac_fc_cfg(void *cgxd, int lmac_id, u8 tx_pause, u8 rx_pause,
+ int pfvf_idx);
+ int cgx_lmac_reset(void *cgxd, int lmac_id, u8 pf_req_flr);
++u32 cgx_get_fifo_len(void *cgxd);
+ #endif /* CGX_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
+index 9ffc6790c51307..6180e68e1765a7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/lmac_common.h
+@@ -72,7 +72,6 @@ struct mac_ops {
+ u8 irq_offset;
+ u8 int_ena_bit;
+ u8 lmac_fwi;
+- u32 fifo_len;
+ bool non_contiguous_serdes_lane;
+ /* RPM & CGX differs in number of Receive/transmit stats */
+ u8 rx_stats_cnt;
+@@ -133,6 +132,8 @@ struct mac_ops {
+ int (*get_fec_stats)(void *cgxd, int lmac_id,
+ struct cgx_fec_stats_rsp *rsp);
+ int (*mac_stats_reset)(void *cgxd, int lmac_id);
++ void (*mac_x2p_reset)(void *cgxd, bool enable);
++ int (*mac_enadis_rx)(void *cgxd, int lmac_id, bool enable);
+ };
+
+ struct cgx {
+@@ -142,6 +143,10 @@ struct cgx {
+ u8 lmac_count;
+ /* number of LMACs per MAC could be 4 or 8 */
+ u8 max_lmac_per_mac;
++	/* FIFO length varies depending on the number
++	 * of LMACs
++	 */
++ u32 fifo_len;
+ #define MAX_LMAC_COUNT 8
+ struct lmac *lmac_idmap[MAX_LMAC_COUNT];
+ struct work_struct cgx_cmd_work;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+index 1b34cf9c97035a..2e9945446199ec 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.c
+@@ -39,6 +39,8 @@ static struct mac_ops rpm_mac_ops = {
+ .mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
+ .mac_reset = rpm_lmac_reset,
+ .mac_stats_reset = rpm_stats_reset,
++ .mac_x2p_reset = rpm_x2p_reset,
++ .mac_enadis_rx = rpm_enadis_rx,
+ };
+
+ static struct mac_ops rpm2_mac_ops = {
+@@ -72,6 +74,8 @@ static struct mac_ops rpm2_mac_ops = {
+ .mac_get_pfc_frm_cfg = rpm_lmac_get_pfc_frm_cfg,
+ .mac_reset = rpm_lmac_reset,
+ .mac_stats_reset = rpm_stats_reset,
++ .mac_x2p_reset = rpm_x2p_reset,
++ .mac_enadis_rx = rpm_enadis_rx,
+ };
+
+ bool is_dev_rpm2(void *rpmd)
+@@ -467,7 +471,7 @@ u8 rpm_get_lmac_type(void *rpmd, int lmac_id)
+ int err;
+
+ req = FIELD_SET(CMDREG_ID, CGX_CMD_GET_LINK_STS, req);
+- err = cgx_fwi_cmd_generic(req, &resp, rpm, 0);
++ err = cgx_fwi_cmd_generic(req, &resp, rpm, lmac_id);
+ if (!err)
+ return FIELD_GET(RESP_LINKSTAT_LMAC_TYPE, resp);
+ return err;
+@@ -480,7 +484,7 @@ u32 rpm_get_lmac_fifo_len(void *rpmd, int lmac_id)
+ u8 num_lmacs;
+ u32 fifo_len;
+
+- fifo_len = rpm->mac_ops->fifo_len;
++ fifo_len = rpm->fifo_len;
+ num_lmacs = rpm->mac_ops->get_nr_lmacs(rpm);
+
+ switch (num_lmacs) {
+@@ -533,9 +537,9 @@ u32 rpm2_get_lmac_fifo_len(void *rpmd, int lmac_id)
+ */
+ max_lmac = (rpm_read(rpm, 0, CGX_CONST) >> 24) & 0xFF;
+ if (max_lmac > 4)
+- fifo_len = rpm->mac_ops->fifo_len / 2;
++ fifo_len = rpm->fifo_len / 2;
+ else
+- fifo_len = rpm->mac_ops->fifo_len;
++ fifo_len = rpm->fifo_len;
+
+ if (lmac_id < 4) {
+ num_lmacs = hweight8(lmac_info & 0xF);
+@@ -699,46 +703,51 @@ int rpm_get_fec_stats(void *rpmd, int lmac_id, struct cgx_fec_stats_rsp *rsp)
+ if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_NONE)
+ return 0;
+
++	/* The latched registers FCFECX_CW_HI/RSFEC_STAT_FAST_DATA_HI_CDC are
++	 * common to all counters. Acquire the lock to ensure serialized reads.
++	 */
++ mutex_lock(&rpm->lock);
+ if (rpm->lmac_idmap[lmac_id]->link_info.fec == OTX2_FEC_BASER) {
+- val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_CCW_LO);
+- val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI);
++ val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_CCW_LO(lmac_id));
++ val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id));
+ rsp->fec_corr_blks = (val_hi << 16 | val_lo);
+
+- val_lo = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_VL0_NCCW_LO);
+- val_hi = rpm_read(rpm, lmac_id, RPMX_MTI_FCFECX_CW_HI);
++ val_lo = rpm_read(rpm, 0, RPMX_MTI_FCFECX_VL0_NCCW_LO(lmac_id));
++ val_hi = rpm_read(rpm, 0, RPMX_MTI_FCFECX_CW_HI(lmac_id));
+ rsp->fec_uncorr_blks = (val_hi << 16 | val_lo);
+
+ /* 50G uses 2 Physical serdes lines */
+ if (rpm->lmac_idmap[lmac_id]->link_info.lmac_type_id ==
+ LMAC_MODE_50G_R) {
+- val_lo = rpm_read(rpm, lmac_id,
+- RPMX_MTI_FCFECX_VL1_CCW_LO);
+- val_hi = rpm_read(rpm, lmac_id,
+- RPMX_MTI_FCFECX_CW_HI);
++ val_lo = rpm_read(rpm, 0,
++ RPMX_MTI_FCFECX_VL1_CCW_LO(lmac_id));
++ val_hi = rpm_read(rpm, 0,
++ RPMX_MTI_FCFECX_CW_HI(lmac_id));
+ rsp->fec_corr_blks += (val_hi << 16 | val_lo);
+
+- val_lo = rpm_read(rpm, lmac_id,
+- RPMX_MTI_FCFECX_VL1_NCCW_LO);
+- val_hi = rpm_read(rpm, lmac_id,
+- RPMX_MTI_FCFECX_CW_HI);
++ val_lo = rpm_read(rpm, 0,
++ RPMX_MTI_FCFECX_VL1_NCCW_LO(lmac_id));
++ val_hi = rpm_read(rpm, 0,
++ RPMX_MTI_FCFECX_CW_HI(lmac_id));
+ rsp->fec_uncorr_blks += (val_hi << 16 | val_lo);
+ }
+ } else {
+ /* enable RS-FEC capture */
+- cfg = rpm_read(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL);
++ cfg = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL);
+ cfg |= RPMX_RSFEC_RX_CAPTURE | BIT(lmac_id);
+- rpm_write(rpm, 0, RPMX_MTI_STAT_STATN_CONTROL, cfg);
++ rpm_write(rpm, 0, RPMX_MTI_RSFEC_STAT_STATN_CONTROL, cfg);
+
+ val_lo = rpm_read(rpm, 0,
+ RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2);
+- val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC);
++ val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC);
+ rsp->fec_corr_blks = (val_hi << 32 | val_lo);
+
+ val_lo = rpm_read(rpm, 0,
+ RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3);
+- val_hi = rpm_read(rpm, 0, RPMX_MTI_STAT_DATA_HI_CDC);
++ val_hi = rpm_read(rpm, 0, RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC);
+ rsp->fec_uncorr_blks = (val_hi << 32 | val_lo);
+ }
++ mutex_unlock(&rpm->lock);
+
+ return 0;
+ }
+@@ -763,3 +772,41 @@ int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr)
+
+ return 0;
+ }
++
++void rpm_x2p_reset(void *rpmd, bool enable)
++{
++ rpm_t *rpm = rpmd;
++ int lmac_id;
++ u64 cfg;
++
++ if (enable) {
++ for_each_set_bit(lmac_id, &rpm->lmac_bmap, rpm->max_lmac_per_mac)
++ rpm->mac_ops->mac_enadis_rx(rpm, lmac_id, false);
++
++ usleep_range(1000, 2000);
++
++ cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG);
++ rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg | RPM_NIX0_RESET);
++ } else {
++ cfg = rpm_read(rpm, 0, RPMX_CMR_GLOBAL_CFG);
++ cfg &= ~RPM_NIX0_RESET;
++ rpm_write(rpm, 0, RPMX_CMR_GLOBAL_CFG, cfg);
++ }
++}
++
++int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable)
++{
++ rpm_t *rpm = rpmd;
++ u64 cfg;
++
++ if (!is_lmac_valid(rpm, lmac_id))
++ return -ENODEV;
++
++ cfg = rpm_read(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG);
++ if (enable)
++ cfg |= RPM_RX_EN;
++ else
++ cfg &= ~RPM_RX_EN;
++ rpm_write(rpm, lmac_id, RPMX_MTI_MAC100X_COMMAND_CONFIG, cfg);
++ return 0;
++}
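
The register-access changes above are more than renames: each FEC counter
is a 64-bit value read as a latched LO/HI pair, and the HI latch
(FCFECX_CW_HI, RSFEC_STAT_FAST_DATA_HI_CDC) is shared across counters, so
unsynchronized readers can combine halves of different counters. The new
rpm->lock closes that window. A sketch of the idiom, with my_dev/my_read as
stand-ins for the rpm structures and accessors:

struct my_dev {
	struct mutex lock;
};

static u64 my_read(struct my_dev *dev, u32 reg);	/* placeholder accessor */

static u64 my_read_counter(struct my_dev *dev, u32 lo_reg, u32 hi_reg)
{
	u64 lo, hi;

	mutex_lock(&dev->lock);
	lo = my_read(dev, lo_reg);	/* latches the HI half */
	hi = my_read(dev, hi_reg);	/* shared latch register */
	mutex_unlock(&dev->lock);

	return (hi << 16) | lo;		/* FC-FEC; RS-FEC combines with << 32 */
}
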
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
+index 34b11deb0f3c1d..b8d3972e096aed 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rpm.h
+@@ -17,6 +17,8 @@
+
+ /* Registers */
+ #define RPMX_CMRX_CFG 0x00
++#define RPMX_CMR_GLOBAL_CFG 0x08
++#define RPM_NIX0_RESET BIT_ULL(3)
+ #define RPMX_RX_TS_PREPEND BIT_ULL(22)
+ #define RPMX_TX_PTP_1S_SUPPORT BIT_ULL(17)
+ #define RPMX_CMRX_RX_ID_MAP 0x80
+@@ -84,16 +86,18 @@
+ /* FEC stats */
+ #define RPMX_MTI_STAT_STATN_CONTROL 0x10018
+ #define RPMX_MTI_STAT_DATA_HI_CDC 0x10038
+-#define RPMX_RSFEC_RX_CAPTURE BIT_ULL(27)
++#define RPMX_RSFEC_RX_CAPTURE BIT_ULL(28)
+ #define RPMX_CMD_CLEAR_RX BIT_ULL(30)
+ #define RPMX_CMD_CLEAR_TX BIT_ULL(31)
++#define RPMX_MTI_RSFEC_STAT_STATN_CONTROL 0x40018
++#define RPMX_MTI_RSFEC_STAT_FAST_DATA_HI_CDC 0x40000
+ #define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_2 0x40050
+ #define RPMX_MTI_RSFEC_STAT_COUNTER_CAPTURE_3 0x40058
+-#define RPMX_MTI_FCFECX_VL0_CCW_LO 0x38618
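
The rt5033 hunk above switches to the device-managed form of regmap
IRQ-chip registration, so the chip is torn down automatically on unbind
instead of requiring regmap_del_irq_chip() in remove() and on error paths.
A minimal sketch of the call (my_* names are illustrative):

#include <linux/interrupt.h>
#include <linux/regmap.h>

static const struct regmap_irq my_irqs[] = {
	REGMAP_IRQ_REG(0, 0, BIT(0)),
};

static const struct regmap_irq_chip my_irq_chip = {
	.name		= "my-chip",
	.status_base	= 0x01,
	.mask_base	= 0x02,
	.irqs		= my_irqs,
	.num_irqs	= ARRAY_SIZE(my_irqs),
	.num_regs	= 1,
};

static int my_add_irq_chip(struct device *dev, struct regmap *map, int irq)
{
	struct regmap_irq_chip_data *irq_data;

	/* devm_* variant: the driver core handles teardown, so no
	 * matching regmap_del_irq_chip() is needed later.
	 */
	return devm_regmap_add_irq_chip(dev, map, irq,
					IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
					0, &my_irq_chip, &irq_data);
}
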
+-#define RPMX_MTI_FCFECX_VL0_NCCW_LO 0x38620
+-#define RPMX_MTI_FCFECX_VL1_CCW_LO 0x38628
+-#define RPMX_MTI_FCFECX_VL1_NCCW_LO 0x38630
+-#define RPMX_MTI_FCFECX_CW_HI 0x38638
++#define RPMX_MTI_FCFECX_VL0_CCW_LO(a) (0x38618 + ((a) * 0x40))
++#define RPMX_MTI_FCFECX_VL0_NCCW_LO(a) (0x38620 + ((a) * 0x40))
++#define RPMX_MTI_FCFECX_VL1_CCW_LO(a) (0x38628 + ((a) * 0x40))
++#define RPMX_MTI_FCFECX_VL1_NCCW_LO(a) (0x38630 + ((a) * 0x40))
++#define RPMX_MTI_FCFECX_CW_HI(a) (0x38638 + ((a) * 0x40))
+
+ /* CN10KB CSR Declaration */
+ #define RPM2_CMRX_SW_INT 0x1b0
+@@ -137,4 +141,6 @@ bool is_dev_rpm2(void *rpmd);
+ int rpm_get_fec_stats(void *cgxd, int lmac_id, struct cgx_fec_stats_rsp *rsp);
+ int rpm_lmac_reset(void *rpmd, int lmac_id, u8 pf_req_flr);
+ int rpm_stats_reset(void *rpmd, int lmac_id);
++void rpm_x2p_reset(void *rpmd, bool enable);
++int rpm_enadis_rx(void *rpmd, int lmac_id, bool enable);
+ #endif /* RPM_H */
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index 1a97fb9032fa44..cd0d7b7774f1af 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -1162,6 +1162,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
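
The tps65010 change above is the resolution of the FIXME it deletes:
IRQF_NO_AUTOEN claims the interrupt line without enabling it, so the
handler cannot fire before driver state is ready -- the race the old
request_irq()+disable_irq() sequence left open. Sketch of the pattern
(my_handler/my_dev are illustrative):

#include <linux/interrupt.h>

static irqreturn_t my_handler(int irq, void *dev_id)
{
	return IRQ_HANDLED;
}

static int my_setup_irq(int irq, void *my_dev)
{
	int ret;

	ret = request_irq(irq, my_handler,
			  IRQF_TRIGGER_FALLING | IRQF_NO_AUTOEN,
			  "my-dev", my_dev);
	if (ret)
		return ret;

	/* ... finish initializing state the handler depends on ... */

	enable_irq(irq);	/* arm the line only once ready */
	return 0;
}
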
+ }
+
+ rvu_program_channels(rvu);
++ cgx_start_linkup(rvu);
+
+ err = rvu_mcs_init(rvu);
+ if (err) {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index 5016ba82e1423a..8555edbb1c8f9a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -997,6 +997,7 @@ int rvu_cgx_prio_flow_ctrl_cfg(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_
+ int rvu_cgx_cfg_pause_frm(struct rvu *rvu, u16 pcifunc, u8 tx_pause, u8 rx_pause);
+ void rvu_mac_reset(struct rvu *rvu, u16 pcifunc);
+ u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac);
++void cgx_start_linkup(struct rvu *rvu);
+ int npc_get_nixlf_mcam_index(struct npc_mcam *mcam, u16 pcifunc, int nixlf,
+ int type);
+ bool is_mcam_entry_enabled(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr,
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index 266ecbc1b97a68..992fa0b82e8d2d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -349,6 +349,7 @@ static void rvu_cgx_wq_destroy(struct rvu *rvu)
+
+ int rvu_cgx_init(struct rvu *rvu)
+ {
++ struct mac_ops *mac_ops;
+ int cgx, err;
+ void *cgxd;
+
+@@ -375,6 +376,15 @@ int rvu_cgx_init(struct rvu *rvu)
+ if (err)
+ return err;
+
++ /* Clear X2P reset on all MAC blocks */
++ for (cgx = 0; cgx < rvu->cgx_cnt_max; cgx++) {
++ cgxd = rvu_cgx_pdata(cgx, rvu);
++ if (!cgxd)
++ continue;
++ mac_ops = get_mac_ops(cgxd);
++ mac_ops->mac_x2p_reset(cgxd, false);
++ }
++
+ /* Register for CGX events */
+ err = cgx_lmac_event_handler_init(rvu);
+ if (err)
+@@ -382,10 +392,26 @@ int rvu_cgx_init(struct rvu *rvu)
+
+ mutex_init(&rvu->cgx_cfg_lock);
+
+- /* Ensure event handler registration is completed, before
+- * we turn on the links
+- */
+- mb();
++ return 0;
++}
++
++void cgx_start_linkup(struct rvu *rvu)
++{
++ unsigned long lmac_bmap;
++ struct mac_ops *mac_ops;
++ int cgx, lmac, err;
++ void *cgxd;
++
++ /* Enable receive on all LMACS */
++ for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) {
++ cgxd = rvu_cgx_pdata(cgx, rvu);
++ if (!cgxd)
++ continue;
++ mac_ops = get_mac_ops(cgxd);
++ lmac_bmap = cgx_get_lmac_bmap(cgxd);
++ for_each_set_bit(lmac, &lmac_bmap, rvu->hw->lmac_per_cgx)
++ mac_ops->mac_enadis_rx(cgxd, lmac, true);
++ }
+
+ /* Do link up for all CGX ports */
+ for (cgx = 0; cgx <= rvu->cgx_cnt_max; cgx++) {
+@@ -398,8 +424,6 @@ int rvu_cgx_init(struct rvu *rvu)
+ "Link up process failed to start on cgx %d\n",
+ cgx);
+ }
+-
+- return 0;
+ }
+
+ int rvu_cgx_exit(struct rvu *rvu)
+@@ -923,13 +947,12 @@ int rvu_mbox_handler_cgx_features_get(struct rvu *rvu,
+
+ u32 rvu_cgx_get_fifolen(struct rvu *rvu)
+ {
+- struct mac_ops *mac_ops;
+- u32 fifo_len;
++ void *cgxd = rvu_first_cgx_pdata(rvu);
+
+- mac_ops = get_mac_ops(rvu_first_cgx_pdata(rvu));
+- fifo_len = mac_ops ? mac_ops->fifo_len : 0;
++ if (!cgxd)
++ return 0;
+
+- return fifo_len;
++ return cgx_get_fifo_len(cgxd);
+ }
+
+ u32 rvu_cgx_get_lmac_fifolen(struct rvu *rvu, int cgx, int lmac)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
+index c1c99d7054f87f..7417087b6db597 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/cn10k.c
+@@ -203,6 +203,11 @@ int cn10k_alloc_leaf_profile(struct otx2_nic *pfvf, u16 *leaf)
+
+ rsp = (struct nix_bandprof_alloc_rsp *)
+ otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ rc = PTR_ERR(rsp);
++ goto out;
++ }
++
+ if (!rsp->prof_count[BAND_PROF_LEAF_LAYER]) {
+ rc = -EIO;
+ goto out;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+index 87d5776e3b88e9..7510a918d942c0 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.c
+@@ -1837,6 +1837,10 @@ u16 otx2_get_max_mtu(struct otx2_nic *pfvf)
+ if (!rc) {
+ rsp = (struct nix_hw_info *)
+ otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ rc = PTR_ERR(rsp);
++ goto out;
++ }
+
+ /* HW counts VLAN insertion bytes (8 for double tag)
+ * irrespective of whether SQE is requesting to insert VLAN
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
+index aa01110f04a339..294fba58b67095 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dcbnl.c
+@@ -315,6 +315,11 @@ int otx2_config_priority_flow_ctrl(struct otx2_nic *pfvf)
+ if (!otx2_sync_mbox_msg(&pfvf->mbox)) {
+ rsp = (struct cgx_pfc_rsp *)
+ otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ err = PTR_ERR(rsp);
++ goto unlock;
++ }
++
+ if (req->rx_pause != rsp->rx_pause || req->tx_pause != rsp->tx_pause) {
+ dev_warn(pfvf->dev,
+ "Failed to config PFC\n");
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c
+index 80d853b343f98f..2046dd0da00d85 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_dmac_flt.c
+@@ -28,6 +28,11 @@ static int otx2_dmacflt_do_add(struct otx2_nic *pf, const u8 *mac,
+ if (!err) {
+ rsp = (struct cgx_mac_addr_add_rsp *)
+ otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ mutex_unlock(&pf->mbox.lock);
++ return PTR_ERR(rsp);
++ }
++
+ *dmac_index = rsp->index;
+ }
+
+@@ -200,6 +205,10 @@ int otx2_dmacflt_update(struct otx2_nic *pf, u8 *mac, u32 bit_pos)
+
+ rsp = (struct cgx_mac_addr_update_rsp *)
+ otx2_mbox_get_rsp(&pf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ rc = PTR_ERR(rsp);
++ goto out;
++ }
+
+ pf->flow_cfg->bmap_to_dmacindex[bit_pos] = rsp->index;
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+index 32468c663605ef..5197ce816581e3 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_ethtool.c
+@@ -343,6 +343,11 @@ static void otx2_get_pauseparam(struct net_device *netdev,
+ if (!otx2_sync_mbox_msg(&pfvf->mbox)) {
+ rsp = (struct cgx_pause_frm_cfg *)
+ otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ mutex_unlock(&pfvf->mbox.lock);
++ return;
++ }
++
+ pause->rx_pause = rsp->rx_pause;
+ pause->tx_pause = rsp->tx_pause;
+ }
+@@ -1072,6 +1077,11 @@ static int otx2_set_fecparam(struct net_device *netdev,
+
+ rsp = (struct fec_mode *)otx2_mbox_get_rsp(&pfvf->mbox.mbox,
+ 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ err = PTR_ERR(rsp);
++ goto end;
++ }
++
+ if (rsp->fec >= 0)
+ pfvf->linfo.fec = rsp->fec;
+ else
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+index 98c31a16c70b4f..58720a161ee24a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_flows.c
+@@ -119,6 +119,8 @@ int otx2_alloc_mcam_entries(struct otx2_nic *pfvf, u16 count)
+
+ rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp
+ (&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp))
++ goto exit;
+
+ for (ent = 0; ent < rsp->count; ent++)
+ flow_cfg->flow_ent[ent + allocated] = rsp->entry_list[ent];
+@@ -197,6 +199,10 @@ int otx2_mcam_entry_init(struct otx2_nic *pfvf)
+
+ rsp = (struct npc_mcam_alloc_entry_rsp *)otx2_mbox_get_rsp
+ (&pfvf->mbox.mbox, 0, &req->hdr);
++ if (IS_ERR(rsp)) {
++ mutex_unlock(&pfvf->mbox.lock);
++ return PTR_ERR(rsp);
++ }
+
+ if (rsp->count != req->count) {
+ netdev_info(pfvf->netdev,
+@@ -232,6 +238,10 @@ int otx2_mcam_entry_init(struct otx2_nic *pfvf)
+
+ frsp = (struct npc_get_field_status_rsp *)otx2_mbox_get_rsp
+ (&pfvf->mbox.mbox, 0, &freq->hdr);
++ if (IS_ERR(frsp)) {
++ mutex_unlock(&pfvf->mbox.lock);
++ return PTR_ERR(frsp);
++ }
+
+ if (frsp->enable) {
+ pfvf->flags |= OTX2_FLAG_RX_VLAN_SUPPORT;
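
The run of otx2 hunks above all install the same missing check:
otx2_mbox_get_rsp() reports failure by returning an ERR_PTR-encoded pointer
rather than NULL, so dereferencing the result unchecked can crash on a dead
mailbox. The convention, sketched as a call-site fragment with a
hypothetical response type (struct my_rsp):

	rsp = (struct my_rsp *)otx2_mbox_get_rsp(&pfvf->mbox.mbox, 0,
						 &req->hdr);
	if (IS_ERR(rsp)) {
		rc = PTR_ERR(rsp);	/* recover the negative errno */
		goto out;		/* unlock/clean up as the site requires */
	}
	/* rsp is only safe to dereference past this point */
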
+diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
+index 1a59c952aa01c1..45f115e41857ba 100644
+--- a/drivers/net/ethernet/marvell/pxa168_eth.c
++++ b/drivers/net/ethernet/marvell/pxa168_eth.c
+@@ -1394,18 +1394,15 @@ static int pxa168_eth_probe(struct platform_device *pdev)
+
+ printk(KERN_NOTICE "PXA168 10/100 Ethernet Driver\n");
+
+- clk = devm_clk_get(&pdev->dev, NULL);
++ clk = devm_clk_get_enabled(&pdev->dev, NULL);
+ if (IS_ERR(clk)) {
+- dev_err(&pdev->dev, "Fast Ethernet failed to get clock\n");
++ dev_err(&pdev->dev, "Fast Ethernet failed to get and enable clock\n");
+ return -ENODEV;
+ }
+- clk_prepare_enable(clk);
+
+ dev = alloc_etherdev(sizeof(struct pxa168_eth_private));
+- if (!dev) {
+- err = -ENOMEM;
+- goto err_clk;
+- }
++ if (!dev)
++ return -ENOMEM;
+
+ platform_set_drvdata(pdev, dev);
+ pep = netdev_priv(dev);
+@@ -1523,8 +1520,6 @@ static int pxa168_eth_probe(struct platform_device *pdev)
+ mdiobus_free(pep->smi_bus);
+ err_netdev:
+ free_netdev(dev);
+-err_clk:
+- clk_disable_unprepare(clk);
+ return err;
+ }
+
+@@ -1542,7 +1537,6 @@ static void pxa168_eth_remove(struct platform_device *pdev)
+ if (dev->phydev)
+ phy_disconnect(dev->phydev);
+
+- clk_disable_unprepare(pep->clk);
+ mdiobus_unregister(pep->smi_bus);
+ mdiobus_free(pep->smi_bus);
+ unregister_netdev(dev);
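
devm_clk_get_enabled(), used above, rolls devm_clk_get() and
clk_prepare_enable() into one call and registers the matching
disable/unprepare with the device, which is why the err_clk unwind label
and both clk_disable_unprepare() calls could be deleted. A sketch with an
illustrative probe:

#include <linux/clk.h>
#include <linux/platform_device.h>

static int my_probe(struct platform_device *pdev)
{
	struct clk *clk;

	/* Acquired, prepared and enabled in one step; disabled and
	 * unprepared automatically when the device is unbound.
	 */
	clk = devm_clk_get_enabled(&pdev->dev, NULL);
	if (IS_ERR(clk))
		return dev_err_probe(&pdev->dev, PTR_ERR(clk),
				     "failed to get and enable clock\n");

	/* ... the rest of probe needs no clock unwind path ... */
	return 0;
}
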
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+index 8577db3308cc56..7f68468c2e7598 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+@@ -516,6 +516,7 @@ void mlx5_modify_lag(struct mlx5_lag *ldev,
+ blocking_notifier_call_chain(&dev0->priv.lag_nh,
+ MLX5_DRIVER_EVENT_ACTIVE_BACKUP_LAG_CHANGE_LOWERSTATE,
+ ndev);
++ dev_put(ndev);
+ }
+ }
+
+@@ -918,6 +919,7 @@ static void mlx5_do_bond(struct mlx5_lag *ldev)
+ {
+ struct mlx5_core_dev *dev0 = ldev->pf[MLX5_LAG_P1].dev;
+ struct lag_tracker tracker = { };
++ struct net_device *ndev;
+ bool do_bond, roce_lag;
+ int err;
+ int i;
+@@ -981,6 +983,16 @@ static void mlx5_do_bond(struct mlx5_lag *ldev)
+ return;
+ }
+ }
++ if (tracker.tx_type == NETDEV_LAG_TX_TYPE_ACTIVEBACKUP) {
++ ndev = mlx5_lag_active_backup_get_netdev(dev0);
++		/* Only SR-IOV and RoCE LAG should have tracker->tx_type
++		 * set, so there is no need to check the mode.
++		 */
++ blocking_notifier_call_chain(&dev0->priv.lag_nh,
++ MLX5_DRIVER_EVENT_ACTIVE_BACKUP_LAG_CHANGE_LOWERSTATE,
++ ndev);
++ dev_put(ndev);
++ }
+ } else if (mlx5_lag_should_modify_lag(ldev, do_bond)) {
+ mlx5_modify_lag(ldev, &tracker);
+ } else if (mlx5_lag_should_disable_lag(ldev, do_bond)) {
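
Both mlx5 hunks pair the lookup with dev_put() because
mlx5_lag_active_backup_get_netdev() hands back the pointer with its
reference count already raised, and the notifier call does not consume that
reference; skipping the put leaks the netdev. The hold/use/put discipline,
sketched with an illustrative getter:

	/* my_get_netdev() stands in for any *_get_netdev() helper that
	 * returns a reference-counted pointer.
	 */
	ndev = my_get_netdev(dev);
	if (ndev) {
		notify_listeners(ndev);	/* use while the reference is held */
		dev_put(ndev);		/* balance the getter's dev_hold() */
	}
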
+diff --git a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
+index a4809fe0fc2496..268489b15616fd 100644
+--- a/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
++++ b/drivers/net/ethernet/meta/fbnic/fbnic_pci.c
+@@ -319,7 +319,6 @@ static int fbnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ free_irqs:
+ fbnic_free_irqs(fbd);
+ free_fbd:
+- pci_disable_device(pdev);
+ fbnic_devlink_free(fbd);
+
+ return err;
+@@ -349,7 +348,6 @@ static void fbnic_remove(struct pci_dev *pdev)
+ fbnic_fw_disable_mbx(fbd);
+ fbnic_free_irqs(fbd);
+
+- pci_disable_device(pdev);
+ fbnic_devlink_free(fbd);
+ }
+
+diff --git a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+index 7251121ab196e3..16eb3de60eb6df 100644
+--- a/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
++++ b/drivers/net/ethernet/microchip/vcap/vcap_api_kunit.c
+@@ -366,12 +366,13 @@ static void vcap_api_iterator_init_test(struct kunit *test)
+ struct vcap_typegroup typegroups[] = {
+ { .offset = 0, .width = 2, .value = 2, },
+ { .offset = 156, .width = 1, .value = 0, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+ struct vcap_typegroup typegroups2[] = {
+ { .offset = 0, .width = 3, .value = 4, },
+ { .offset = 49, .width = 2, .value = 0, },
+ { .offset = 98, .width = 2, .value = 0, },
++ { }
+ };
+
+ vcap_iter_init(&iter, 52, typegroups, 86);
+@@ -399,6 +400,7 @@ static void vcap_api_iterator_next_test(struct kunit *test)
+ { .offset = 147, .width = 3, .value = 0, },
+ { .offset = 196, .width = 2, .value = 0, },
+ { .offset = 245, .width = 1, .value = 0, },
++ { }
+ };
+ int idx;
+
+@@ -433,7 +435,7 @@ static void vcap_api_encode_typegroups_test(struct kunit *test)
+ { .offset = 147, .width = 3, .value = 5, },
+ { .offset = 196, .width = 2, .value = 2, },
+ { .offset = 245, .width = 5, .value = 27, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+
+ vcap_encode_typegroups(stream, 49, typegroups, false);
+@@ -463,6 +465,7 @@ static void vcap_api_encode_bit_test(struct kunit *test)
+ { .offset = 147, .width = 3, .value = 5, },
+ { .offset = 196, .width = 2, .value = 2, },
+ { .offset = 245, .width = 1, .value = 0, },
++ { }
+ };
+
+ vcap_iter_init(&iter, 49, typegroups, 44);
+@@ -489,7 +492,7 @@ static void vcap_api_encode_field_test(struct kunit *test)
+ { .offset = 147, .width = 3, .value = 5, },
+ { .offset = 196, .width = 2, .value = 2, },
+ { .offset = 245, .width = 5, .value = 27, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+ struct vcap_field rf = {
+ .type = VCAP_FIELD_U32,
+@@ -538,7 +541,7 @@ static void vcap_api_encode_short_field_test(struct kunit *test)
+ { .offset = 0, .width = 3, .value = 7, },
+ { .offset = 21, .width = 2, .value = 3, },
+ { .offset = 42, .width = 1, .value = 1, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+ struct vcap_field rf = {
+ .type = VCAP_FIELD_U32,
+@@ -608,7 +611,7 @@ static void vcap_api_encode_keyfield_test(struct kunit *test)
+ struct vcap_typegroup tgt[] = {
+ { .offset = 0, .width = 2, .value = 2, },
+ { .offset = 156, .width = 1, .value = 1, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+
+ vcap_test_api_init(&admin);
+@@ -671,7 +674,7 @@ static void vcap_api_encode_max_keyfield_test(struct kunit *test)
+ struct vcap_typegroup tgt[] = {
+ { .offset = 0, .width = 2, .value = 2, },
+ { .offset = 156, .width = 1, .value = 1, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+ u32 keyres[] = {
+ 0x928e8a84,
+@@ -732,7 +735,7 @@ static void vcap_api_encode_actionfield_test(struct kunit *test)
+ { .offset = 0, .width = 2, .value = 2, },
+ { .offset = 21, .width = 1, .value = 1, },
+ { .offset = 42, .width = 1, .value = 0, },
+- { .offset = 0, .width = 0, .value = 0, },
++ { }
+ };
+
+ vcap_encode_actionfield(&rule, &caf, &rf, tgt);
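
The kunit conversions above swap spelled-out all-zero entries for
empty-brace sentinels; both initializers zero the element, but { } reads as
"list terminator" rather than a real typegroup at offset 0, and several
hunks also add terminators that were missing entirely. A sketch of a
sentinel-terminated walk, assuming a zero-width entry marks the end
(use_typegroup is an illustrative consumer):

	static const struct vcap_typegroup tgs[] = {
		{ .offset = 0,  .width = 3, .value = 4, },
		{ .offset = 49, .width = 2, .value = 0, },
		{ }	/* sentinel: every member zero */
	};
	const struct vcap_typegroup *tg;

	for (tg = tgs; tg->width; tg++)
		use_typegroup(tg);
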
+diff --git a/drivers/net/ethernet/realtek/rtase/rtase.h b/drivers/net/ethernet/realtek/rtase/rtase.h
+index 583c33930f886f..4a4434869b10a8 100644
+--- a/drivers/net/ethernet/realtek/rtase/rtase.h
++++ b/drivers/net/ethernet/realtek/rtase/rtase.h
+@@ -9,7 +9,10 @@
+ #ifndef RTASE_H
+ #define RTASE_H
+
+-#define RTASE_HW_VER_MASK 0x7C800000
++#define RTASE_HW_VER_MASK 0x7C800000
++#define RTASE_HW_VER_906X_7XA 0x00800000
++#define RTASE_HW_VER_906X_7XC 0x04000000
++#define RTASE_HW_VER_907XD_V1 0x04800000
+
+ #define RTASE_RX_DMA_BURST_256 4
+ #define RTASE_TX_DMA_BURST_UNLIMITED 7
+@@ -327,6 +330,8 @@ struct rtase_private {
+ u16 int_nums;
+ u16 tx_int_mit;
+ u16 rx_int_mit;
++
++ u32 hw_ver;
+ };
+
+ #define RTASE_LSO_64K 64000
+diff --git a/drivers/net/ethernet/realtek/rtase/rtase_main.c b/drivers/net/ethernet/realtek/rtase/rtase_main.c
+index f8777b7663d35d..1bfe5ef40c522d 100644
+--- a/drivers/net/ethernet/realtek/rtase/rtase_main.c
++++ b/drivers/net/ethernet/realtek/rtase/rtase_main.c
+@@ -1714,10 +1714,21 @@ static int rtase_get_settings(struct net_device *dev,
+ struct ethtool_link_ksettings *cmd)
+ {
+ u32 supported = SUPPORTED_MII | SUPPORTED_Pause | SUPPORTED_Asym_Pause;
++ const struct rtase_private *tp = netdev_priv(dev);
+
+ ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported,
+ supported);
+- cmd->base.speed = SPEED_5000;
++
++ switch (tp->hw_ver) {
++ case RTASE_HW_VER_906X_7XA:
++ case RTASE_HW_VER_906X_7XC:
++ cmd->base.speed = SPEED_5000;
++ break;
++ case RTASE_HW_VER_907XD_V1:
++ cmd->base.speed = SPEED_10000;
++ break;
++ }
++
+ cmd->base.duplex = DUPLEX_FULL;
+ cmd->base.port = PORT_MII;
+ cmd->base.autoneg = AUTONEG_DISABLE;
+@@ -1972,20 +1983,21 @@ static void rtase_init_software_variable(struct pci_dev *pdev,
+ tp->dev->max_mtu = RTASE_MAX_JUMBO_SIZE;
+ }
+
+-static bool rtase_check_mac_version_valid(struct rtase_private *tp)
++static int rtase_check_mac_version_valid(struct rtase_private *tp)
+ {
+- u32 hw_ver = rtase_r32(tp, RTASE_TX_CONFIG_0) & RTASE_HW_VER_MASK;
+- bool known_ver = false;
++ int ret = -ENODEV;
++
++ tp->hw_ver = rtase_r32(tp, RTASE_TX_CONFIG_0) & RTASE_HW_VER_MASK;
+
+- switch (hw_ver) {
+- case 0x00800000:
+- case 0x04000000:
+- case 0x04800000:
+- known_ver = true;
++ switch (tp->hw_ver) {
++ case RTASE_HW_VER_906X_7XA:
++ case RTASE_HW_VER_906X_7XC:
++ case RTASE_HW_VER_907XD_V1:
++ ret = 0;
+ break;
+ }
+
+- return known_ver;
++ return ret;
+ }
+
+ static int rtase_init_board(struct pci_dev *pdev, struct net_device **dev_out,
+@@ -2105,9 +2117,13 @@ static int rtase_init_one(struct pci_dev *pdev,
+ tp->pdev = pdev;
+
+ /* identify chip attached to board */
+- if (!rtase_check_mac_version_valid(tp))
+- return dev_err_probe(&pdev->dev, -ENODEV,
+- "unknown chip version, contact rtase maintainers (see MAINTAINERS file)\n");
++ ret = rtase_check_mac_version_valid(tp);
++ if (ret != 0) {
++ dev_err(&pdev->dev,
++ "unknown chip version: 0x%08x, contact rtase maintainers (see MAINTAINERS file)\n",
++ tp->hw_ver);
++ goto err_out_release_board;
++ }
+
+ rtase_init_software_variable(pdev, tp);
+ rtase_init_hardware(tp);
+@@ -2181,6 +2197,7 @@ static int rtase_init_one(struct pci_dev *pdev,
+ netif_napi_del(&ivec->napi);
+ }
+
++err_out_release_board:
+ rtase_release_board(pdev, dev, ioaddr);
+
+ return ret;
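
The rtase hunks convert the version probe from a bool to the usual
0/-errno convention and cache the raw value in tp->hw_ver, so the probe
failure path can log the offending version while unwinding through the
new err_out_release_board label, and rtase_get_settings() can map the
version onto a link speed. A standalone sketch of that shape (the mask
and version constants below are placeholders, not the real register
layout):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define HW_VER_MASK     0x7C800000u
#define HW_VER_A        0x00800000u
#define HW_VER_B        0x04000000u
#define HW_VER_C        0x04800000u

/* Return 0 for a supported version, -ENODEV otherwise; the caller keeps
 * the masked value so a failure message can include it. */
static int check_hw_ver(uint32_t raw, uint32_t *hw_ver)
{
        *hw_ver = raw & HW_VER_MASK;

        switch (*hw_ver) {
        case HW_VER_A:
        case HW_VER_B:
        case HW_VER_C:
                return 0;
        default:
                return -ENODEV;
        }
}

int main(void)
{
        uint32_t ver;

        if (check_hw_ver(0x12345678, &ver))
                fprintf(stderr, "unknown chip version: 0x%08x\n",
                        (unsigned int)ver);
        return 0;
}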
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+index fdb4c773ec98ab..e897b49aa9e05e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+@@ -486,6 +486,8 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
+ plat_dat->pcs_exit = socfpga_dwmac_pcs_exit;
+ plat_dat->select_pcs = socfpga_dwmac_select_pcs;
+
++ plat_dat->riwt_off = 1;
++
+ ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
+ if (ret)
+ return ret;
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
+index a4cf682dca650e..0ee73a265545c3 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_irq.c
+@@ -72,14 +72,6 @@ int txgbe_request_queue_irqs(struct wx *wx)
+ return err;
+ }
+
+-static int txgbe_request_gpio_irq(struct txgbe *txgbe)
+-{
+- txgbe->gpio_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_GPIO);
+- return request_threaded_irq(txgbe->gpio_irq, NULL,
+- txgbe_gpio_irq_handler,
+- IRQF_ONESHOT, "txgbe-gpio-irq", txgbe);
+-}
+-
+ static int txgbe_request_link_irq(struct txgbe *txgbe)
+ {
+ txgbe->link_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_LINK);
+@@ -149,11 +141,6 @@ static irqreturn_t txgbe_misc_irq_thread_fn(int irq, void *data)
+ u32 eicr;
+
+ eicr = wx_misc_isb(wx, WX_ISB_MISC);
+- if (eicr & TXGBE_PX_MISC_GPIO) {
+- sub_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_GPIO);
+- handle_nested_irq(sub_irq);
+- nhandled++;
+- }
+ if (eicr & (TXGBE_PX_MISC_ETH_LK | TXGBE_PX_MISC_ETH_LKDN |
+ TXGBE_PX_MISC_ETH_AN)) {
+ sub_irq = irq_find_mapping(txgbe->misc.domain, TXGBE_IRQ_LINK);
+@@ -179,7 +166,6 @@ static void txgbe_del_irq_domain(struct txgbe *txgbe)
+
+ void txgbe_free_misc_irq(struct txgbe *txgbe)
+ {
+- free_irq(txgbe->gpio_irq, txgbe);
+ free_irq(txgbe->link_irq, txgbe);
+ free_irq(txgbe->misc.irq, txgbe);
+ txgbe_del_irq_domain(txgbe);
+@@ -191,7 +177,7 @@ int txgbe_setup_misc_irq(struct txgbe *txgbe)
+ struct wx *wx = txgbe->wx;
+ int hwirq, err;
+
+- txgbe->misc.nirqs = 2;
++ txgbe->misc.nirqs = 1;
+ txgbe->misc.domain = irq_domain_add_simple(NULL, txgbe->misc.nirqs, 0,
+ &txgbe_misc_irq_domain_ops, txgbe);
+ if (!txgbe->misc.domain)
+@@ -216,20 +202,14 @@ int txgbe_setup_misc_irq(struct txgbe *txgbe)
+ if (err)
+ goto del_misc_irq;
+
+- err = txgbe_request_gpio_irq(txgbe);
+- if (err)
+- goto free_msic_irq;
+-
+ err = txgbe_request_link_irq(txgbe);
+ if (err)
+- goto free_gpio_irq;
++ goto free_msic_irq;
+
+ wx->misc_irq_domain = true;
+
+ return 0;
+
+-free_gpio_irq:
+- free_irq(txgbe->gpio_irq, txgbe);
+ free_msic_irq:
+ free_irq(txgbe->misc.irq, txgbe);
+ del_misc_irq:
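
Dropping the GPIO IRQ also shortens the unwind ladder in
txgbe_setup_misc_irq(): each goto target releases exactly what was
acquired before the failing step, in reverse acquisition order. A
self-contained sketch of the idiom (plain heap allocations stand in for
the IRQ resources):

#include <errno.h>
#include <stdlib.h>

struct ctx {
        void *a;
        void *b;
};

/* Acquire a, then b; on failure release in reverse order so the caller
 * never sees a half-initialized context. */
static int setup(struct ctx *ctx)
{
        int err;

        ctx->a = malloc(16);
        if (!ctx->a)
                return -ENOMEM;

        ctx->b = malloc(16);
        if (!ctx->b) {
                err = -ENOMEM;
                goto free_a;
        }

        return 0;

free_a:
        free(ctx->a);
        ctx->a = NULL;
        return err;
}

int main(void)
{
        struct ctx ctx;

        if (setup(&ctx))
                return 1;

        free(ctx.b);
        free(ctx.a);
        return 0;
}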
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+index 93180225a6f14c..f7745026803643 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+@@ -82,7 +82,6 @@ static void txgbe_up_complete(struct wx *wx)
+ {
+ struct net_device *netdev = wx->netdev;
+
+- txgbe_reinit_gpio_intr(wx);
+ wx_control_hw(wx, true);
+ wx_configure_vectors(wx);
+
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
+index 67b61afdde96ce..f26946198a2fb9 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.c
+@@ -162,7 +162,7 @@ static struct phylink_pcs *txgbe_phylink_mac_select(struct phylink_config *confi
+ struct wx *wx = phylink_to_wx(config);
+ struct txgbe *txgbe = wx->priv;
+
+- if (interface == PHY_INTERFACE_MODE_10GBASER)
++ if (wx->media_type != sp_media_copper)
+ return &txgbe->xpcs->pcs;
+
+ return NULL;
+@@ -358,169 +358,8 @@ static int txgbe_gpio_direction_out(struct gpio_chip *chip, unsigned int offset,
+ return 0;
+ }
+
+-static void txgbe_gpio_irq_ack(struct irq_data *d)
+-{
+- struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t hwirq = irqd_to_hwirq(d);
+- struct wx *wx = gpiochip_get_data(gc);
+- unsigned long flags;
+-
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- wr32(wx, WX_GPIO_EOI, BIT(hwirq));
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+-}
+-
+-static void txgbe_gpio_irq_mask(struct irq_data *d)
+-{
+- struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t hwirq = irqd_to_hwirq(d);
+- struct wx *wx = gpiochip_get_data(gc);
+- unsigned long flags;
+-
+- gpiochip_disable_irq(gc, hwirq);
+-
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- wr32m(wx, WX_GPIO_INTMASK, BIT(hwirq), BIT(hwirq));
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+-}
+-
+-static void txgbe_gpio_irq_unmask(struct irq_data *d)
+-{
+- struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t hwirq = irqd_to_hwirq(d);
+- struct wx *wx = gpiochip_get_data(gc);
+- unsigned long flags;
+-
+- gpiochip_enable_irq(gc, hwirq);
+-
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- wr32m(wx, WX_GPIO_INTMASK, BIT(hwirq), 0);
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+-}
+-
+-static void txgbe_toggle_trigger(struct gpio_chip *gc, unsigned int offset)
+-{
+- struct wx *wx = gpiochip_get_data(gc);
+- u32 pol, val;
+-
+- pol = rd32(wx, WX_GPIO_POLARITY);
+- val = rd32(wx, WX_GPIO_EXT);
+-
+- if (val & BIT(offset))
+- pol &= ~BIT(offset);
+- else
+- pol |= BIT(offset);
+-
+- wr32(wx, WX_GPIO_POLARITY, pol);
+-}
+-
+-static int txgbe_gpio_set_type(struct irq_data *d, unsigned int type)
+-{
+- struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t hwirq = irqd_to_hwirq(d);
+- struct wx *wx = gpiochip_get_data(gc);
+- u32 level, polarity, mask;
+- unsigned long flags;
+-
+- mask = BIT(hwirq);
+-
+- if (type & IRQ_TYPE_LEVEL_MASK) {
+- level = 0;
+- irq_set_handler_locked(d, handle_level_irq);
+- } else {
+- level = mask;
+- irq_set_handler_locked(d, handle_edge_irq);
+- }
+-
+- if (type == IRQ_TYPE_EDGE_RISING || type == IRQ_TYPE_LEVEL_HIGH)
+- polarity = mask;
+- else
+- polarity = 0;
+-
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+-
+- wr32m(wx, WX_GPIO_INTEN, mask, mask);
+- wr32m(wx, WX_GPIO_INTTYPE_LEVEL, mask, level);
+- if (type == IRQ_TYPE_EDGE_BOTH)
+- txgbe_toggle_trigger(gc, hwirq);
+- else
+- wr32m(wx, WX_GPIO_POLARITY, mask, polarity);
+-
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+-
+- return 0;
+-}
+-
+-static const struct irq_chip txgbe_gpio_irq_chip = {
+- .name = "txgbe-gpio-irq",
+- .irq_ack = txgbe_gpio_irq_ack,
+- .irq_mask = txgbe_gpio_irq_mask,
+- .irq_unmask = txgbe_gpio_irq_unmask,
+- .irq_set_type = txgbe_gpio_set_type,
+- .flags = IRQCHIP_IMMUTABLE,
+- GPIOCHIP_IRQ_RESOURCE_HELPERS,
+-};
+-
+-irqreturn_t txgbe_gpio_irq_handler(int irq, void *data)
+-{
+- struct txgbe *txgbe = data;
+- struct wx *wx = txgbe->wx;
+- irq_hw_number_t hwirq;
+- unsigned long gpioirq;
+- struct gpio_chip *gc;
+- unsigned long flags;
+-
+- gpioirq = rd32(wx, WX_GPIO_INTSTATUS);
+-
+- gc = txgbe->gpio;
+- for_each_set_bit(hwirq, &gpioirq, gc->ngpio) {
+- int gpio = irq_find_mapping(gc->irq.domain, hwirq);
+- struct irq_data *d = irq_get_irq_data(gpio);
+- u32 irq_type = irq_get_trigger_type(gpio);
+-
+- txgbe_gpio_irq_ack(d);
+- handle_nested_irq(gpio);
+-
+- if ((irq_type & IRQ_TYPE_SENSE_MASK) == IRQ_TYPE_EDGE_BOTH) {
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- txgbe_toggle_trigger(gc, hwirq);
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+- }
+- }
+-
+- return IRQ_HANDLED;
+-}
+-
+-void txgbe_reinit_gpio_intr(struct wx *wx)
+-{
+- struct txgbe *txgbe = wx->priv;
+- irq_hw_number_t hwirq;
+- unsigned long gpioirq;
+- struct gpio_chip *gc;
+- unsigned long flags;
+-
+- /* for gpio interrupt pending before irq enable */
+- gpioirq = rd32(wx, WX_GPIO_INTSTATUS);
+-
+- gc = txgbe->gpio;
+- for_each_set_bit(hwirq, &gpioirq, gc->ngpio) {
+- int gpio = irq_find_mapping(gc->irq.domain, hwirq);
+- struct irq_data *d = irq_get_irq_data(gpio);
+- u32 irq_type = irq_get_trigger_type(gpio);
+-
+- txgbe_gpio_irq_ack(d);
+-
+- if ((irq_type & IRQ_TYPE_SENSE_MASK) == IRQ_TYPE_EDGE_BOTH) {
+- raw_spin_lock_irqsave(&wx->gpio_lock, flags);
+- txgbe_toggle_trigger(gc, hwirq);
+- raw_spin_unlock_irqrestore(&wx->gpio_lock, flags);
+- }
+- }
+-}
+-
+ static int txgbe_gpio_init(struct txgbe *txgbe)
+ {
+- struct gpio_irq_chip *girq;
+ struct gpio_chip *gc;
+ struct device *dev;
+ struct wx *wx;
+@@ -550,11 +389,6 @@ static int txgbe_gpio_init(struct txgbe *txgbe)
+ gc->direction_input = txgbe_gpio_direction_in;
+ gc->direction_output = txgbe_gpio_direction_out;
+
+- girq = &gc->irq;
+- gpio_irq_chip_set_chip(girq, &txgbe_gpio_irq_chip);
+- girq->default_type = IRQ_TYPE_NONE;
+- girq->handler = handle_bad_irq;
+-
+ ret = devm_gpiochip_add_data(dev, gc, wx);
+ if (ret)
+ return ret;
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
+index 8a026d804fe24c..3938985355ed6c 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_phy.h
+@@ -4,8 +4,6 @@
+ #ifndef _TXGBE_PHY_H_
+ #define _TXGBE_PHY_H_
+
+-irqreturn_t txgbe_gpio_irq_handler(int irq, void *data);
+-void txgbe_reinit_gpio_intr(struct wx *wx);
+ irqreturn_t txgbe_link_irq_handler(int irq, void *data);
+ int txgbe_init_phy(struct txgbe *txgbe);
+ void txgbe_remove_phy(struct txgbe *txgbe);
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+index 959102c4c3797e..8ea413a7abe9d3 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_type.h
+@@ -75,8 +75,7 @@
+ #define TXGBE_PX_MISC_IEN_MASK \
+ (TXGBE_PX_MISC_ETH_LKDN | TXGBE_PX_MISC_DEV_RST | \
+ TXGBE_PX_MISC_ETH_EVENT | TXGBE_PX_MISC_ETH_LK | \
+- TXGBE_PX_MISC_ETH_AN | TXGBE_PX_MISC_INT_ERR | \
+- TXGBE_PX_MISC_GPIO)
++ TXGBE_PX_MISC_ETH_AN | TXGBE_PX_MISC_INT_ERR)
+
+ /* Port cfg registers */
+ #define TXGBE_CFG_PORT_ST 0x14404
+@@ -313,8 +312,7 @@ struct txgbe_nodes {
+ };
+
+ enum txgbe_misc_irqs {
+- TXGBE_IRQ_GPIO = 0,
+- TXGBE_IRQ_LINK,
++ TXGBE_IRQ_LINK = 0,
+ TXGBE_IRQ_MAX
+ };
+
+@@ -335,7 +333,6 @@ struct txgbe {
+ struct clk_lookup *clock;
+ struct clk *clk;
+ struct gpio_chip *gpio;
+- unsigned int gpio_irq;
+ unsigned int link_irq;
+
+ /* flow director */
+diff --git a/drivers/net/mdio/mdio-ipq4019.c b/drivers/net/mdio/mdio-ipq4019.c
+index 9d8f43b28aac5b..ea1f64596a85cf 100644
+--- a/drivers/net/mdio/mdio-ipq4019.c
++++ b/drivers/net/mdio/mdio-ipq4019.c
+@@ -352,8 +352,11 @@ static int ipq4019_mdio_probe(struct platform_device *pdev)
+ /* The platform resource is provided on the chipset IPQ5018 */
+ /* This resource is optional */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+- if (res)
++ if (res) {
+ priv->eth_ldo_rdy = devm_ioremap_resource(&pdev->dev, res);
++ if (IS_ERR(priv->eth_ldo_rdy))
++ return PTR_ERR(priv->eth_ldo_rdy);
++ }
+
+ bus->name = "ipq4019_mdio";
+ bus->read = ipq4019_mdio_read_c22;
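
The added IS_ERR() check matters because devm_ioremap_resource() never
returns NULL on failure; it returns an errno encoded as a pointer, so a
later NULL test on priv->eth_ldo_rdy would happily dereference the error
value. A userspace re-creation of the kernel's ERR_PTR convention makes
the encoding visible (the helpers mirror linux/err.h):

#include <errno.h>
#include <stdio.h>

#define MAX_ERRNO       4095

/* Errors live in the top page of the address space, so one pointer can
 * carry either a valid address or a negative errno. */
static inline void *ERR_PTR(long error) { return (void *)error; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
        return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
        void *p = ERR_PTR(-ENOMEM);

        if (IS_ERR(p))
                printf("mapping failed: %ld\n", PTR_ERR(p));
        return 0;
}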
+diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c
+index f0d58092e7e961..3612b0633bd177 100644
+--- a/drivers/net/netdevsim/ipsec.c
++++ b/drivers/net/netdevsim/ipsec.c
+@@ -176,14 +176,13 @@ static int nsim_ipsec_add_sa(struct xfrm_state *xs,
+ return ret;
+ }
+
+- if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN) {
++ if (xs->xso.dir == XFRM_DEV_OFFLOAD_IN)
+ sa.rx = true;
+
+- if (xs->props.family == AF_INET6)
+- memcpy(sa.ipaddr, &xs->id.daddr.a6, 16);
+- else
+- memcpy(&sa.ipaddr[3], &xs->id.daddr.a4, 4);
+- }
++ if (xs->props.family == AF_INET6)
++ memcpy(sa.ipaddr, &xs->id.daddr.a6, 16);
++ else
++ memcpy(&sa.ipaddr[3], &xs->id.daddr.a4, 4);
+
+ /* the preparations worked, so save the info */
+ memcpy(&ipsec->sa[sa_idx], &sa, sizeof(sa));
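
The netdevsim fix unindents the address copy so it runs for outbound SAs
too; previously only inbound entries recorded the destination address.
The copy packs an IPv4 address into the last 32-bit word of the 16-byte
field, while IPv6 fills all four words. A tiny demo of that layout
(host byte order, for illustration only):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
        uint32_t ipaddr[4] = { 0 };
        uint32_t v4 = 0xc0a80001;       /* 192.168.0.1 */

        memcpy(&ipaddr[3], &v4, sizeof(v4));
        printf("%08x %08x %08x %08x\n",
               (unsigned int)ipaddr[0], (unsigned int)ipaddr[1],
               (unsigned int)ipaddr[2], (unsigned int)ipaddr[3]);
        return 0;
}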
+diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
+index 8adf77e3557e7a..531b1b6a37d190 100644
+--- a/drivers/net/usb/lan78xx.c
++++ b/drivers/net/usb/lan78xx.c
+@@ -1652,13 +1652,13 @@ static int lan78xx_set_wol(struct net_device *netdev,
+ struct lan78xx_priv *pdata = (struct lan78xx_priv *)(dev->data[0]);
+ int ret;
+
++ if (wol->wolopts & ~WAKE_ALL)
++ return -EINVAL;
++
+ ret = usb_autopm_get_interface(dev->intf);
+ if (ret < 0)
+ return ret;
+
+- if (wol->wolopts & ~WAKE_ALL)
+- return -EINVAL;
+-
+ pdata->wol = wol->wolopts;
+
+ device_set_wakeup_enable(&dev->udev->dev, (bool)wol->wolopts);
+@@ -2380,6 +2380,7 @@ static int lan78xx_phy_init(struct lan78xx_net *dev)
+ if (dev->chipid == ID_REV_CHIP_ID_7801_) {
+ if (phy_is_pseudo_fixed_link(phydev)) {
+ fixed_phy_unregister(phydev);
++ phy_device_free(phydev);
+ } else {
+ phy_unregister_fixup_for_uid(PHY_KSZ9031RNX,
+ 0xfffffff0);
+@@ -4246,8 +4247,10 @@ static void lan78xx_disconnect(struct usb_interface *intf)
+
+ phy_disconnect(net->phydev);
+
+- if (phy_is_pseudo_fixed_link(phydev))
++ if (phy_is_pseudo_fixed_link(phydev)) {
+ fixed_phy_unregister(phydev);
++ phy_device_free(phydev);
++ }
+
+ usb_scuttle_anchored_urbs(&dev->deferred);
+
+@@ -4414,29 +4417,30 @@ static int lan78xx_probe(struct usb_interface *intf,
+
+ period = ep_intr->desc.bInterval;
+ maxp = usb_maxpacket(dev->udev, dev->pipe_intr);
+- buf = kmalloc(maxp, GFP_KERNEL);
+- if (!buf) {
++
++ dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL);
++ if (!dev->urb_intr) {
+ ret = -ENOMEM;
+ goto out5;
+ }
+
+- dev->urb_intr = usb_alloc_urb(0, GFP_KERNEL);
+- if (!dev->urb_intr) {
++ buf = kmalloc(maxp, GFP_KERNEL);
++ if (!buf) {
+ ret = -ENOMEM;
+- goto out6;
+- } else {
+- usb_fill_int_urb(dev->urb_intr, dev->udev,
+- dev->pipe_intr, buf, maxp,
+- intr_complete, dev, period);
+- dev->urb_intr->transfer_flags |= URB_FREE_BUFFER;
++ goto free_urbs;
+ }
+
++ usb_fill_int_urb(dev->urb_intr, dev->udev,
++ dev->pipe_intr, buf, maxp,
++ intr_complete, dev, period);
++ dev->urb_intr->transfer_flags |= URB_FREE_BUFFER;
++
+ dev->maxpacket = usb_maxpacket(dev->udev, dev->pipe_out);
+
+ /* Reject broken descriptors. */
+ if (dev->maxpacket == 0) {
+ ret = -ENODEV;
+- goto out6;
++ goto free_urbs;
+ }
+
+ /* driver requires remote-wakeup capability during autosuspend. */
+@@ -4444,7 +4448,7 @@ static int lan78xx_probe(struct usb_interface *intf,
+
+ ret = lan78xx_phy_init(dev);
+ if (ret < 0)
+- goto out7;
++ goto free_urbs;
+
+ ret = register_netdev(netdev);
+ if (ret != 0) {
+@@ -4466,10 +4470,8 @@ static int lan78xx_probe(struct usb_interface *intf,
+
+ out8:
+ phy_disconnect(netdev->phydev);
+-out7:
++free_urbs:
+ usb_free_urb(dev->urb_intr);
+-out6:
+- kfree(buf);
+ out5:
+ lan78xx_unbind(dev, intf);
+ out4:
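
Two ordering rules show up in the lan78xx hunks: validate
caller-controlled input before taking a reference (the old set_wol path
returned -EINVAL while still holding the autopm reference), and allocate
the URB before its buffer so URB_FREE_BUFFER can own the buffer on every
later error path. A sketch of the first rule, assuming the usual USB
autopm pairing (register programming elided):

static int set_wol(struct lan78xx_net *dev, u32 wolopts)
{
        int ret;

        if (wolopts & ~WAKE_ALL)        /* reject before acquiring */
                return -EINVAL;

        ret = usb_autopm_get_interface(dev->intf);
        if (ret < 0)
                return ret;

        /* ... program wake-on-LAN registers ... */

        usb_autopm_put_interface(dev->intf);
        return 0;
}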
+diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
+index 646e1737d4c47c..6b467696bc982c 100644
+--- a/drivers/net/wireless/ath/ath10k/mac.c
++++ b/drivers/net/wireless/ath/ath10k/mac.c
+@@ -9121,7 +9121,7 @@ static const struct ath10k_index_vht_data_rate_type supported_vht_mcs_rate_nss1[
+ {6, {2633, 2925}, {1215, 1350}, {585, 650} },
+ {7, {2925, 3250}, {1350, 1500}, {650, 722} },
+ {8, {3510, 3900}, {1620, 1800}, {780, 867} },
+- {9, {3900, 4333}, {1800, 2000}, {780, 867} }
++ {9, {3900, 4333}, {1800, 2000}, {865, 960} }
+ };
+
+ /*MCS parameters with Nss = 2 */
+@@ -9136,7 +9136,7 @@ static const struct ath10k_index_vht_data_rate_type supported_vht_mcs_rate_nss2[
+ {6, {5265, 5850}, {2430, 2700}, {1170, 1300} },
+ {7, {5850, 6500}, {2700, 3000}, {1300, 1444} },
+ {8, {7020, 7800}, {3240, 3600}, {1560, 1733} },
+- {9, {7800, 8667}, {3600, 4000}, {1560, 1733} }
++ {9, {7800, 8667}, {3600, 4000}, {1730, 1920} }
+ };
+
+ static void ath10k_mac_get_rate_flags_ht(struct ath10k *ar, u32 rate, u8 nss, u8 mcs,
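
The corrected MCS9 rows can be sanity-checked by hand. Entries are in
units of 100 kbit/s, and for VHT MCS9 (256-QAM, coding rate 5/6) on one
spatial stream at 20 MHz the nominal throughput is 52 data subcarriers
x 8 bits x 5/6, about 346.7 bits per symbol, giving roughly 86.7 Mbit/s
at the 4.0 us long GI and 96.3 Mbit/s at the 3.6 us short GI. That is
what the new 865/960 entries encode (to the table's rounding) instead of
duplicating the MCS8 values. The same arithmetic as a quick check:

#include <stdio.h>

int main(void)
{
        double nsd = 52.0;              /* data subcarriers, VHT 20 MHz */
        double bpscs = 8.0;             /* bits per subcarrier, 256-QAM */
        double rate = 5.0 / 6.0;        /* MCS9 coding rate */
        double bits = nsd * bpscs * rate;

        printf("long GI:  %.1f Mbit/s\n", bits / 4.0);  /* ~86.7 */
        printf("short GI: %.1f Mbit/s\n", bits / 3.6);  /* ~96.3 */
        return 0;
}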
+diff --git a/drivers/net/wireless/ath/ath11k/qmi.c b/drivers/net/wireless/ath/ath11k/qmi.c
+index f477afd325deaf..7a22483b35cd98 100644
+--- a/drivers/net/wireless/ath/ath11k/qmi.c
++++ b/drivers/net/wireless/ath/ath11k/qmi.c
+@@ -2180,6 +2180,9 @@ static int ath11k_qmi_request_device_info(struct ath11k_base *ab)
+ ab->mem = bar_addr_va;
+ ab->mem_len = resp.bar_size;
+
++ if (!ab->hw_params.ce_remap)
++ ab->mem_ce = ab->mem;
++
+ return 0;
+ out:
+ return ret;
+diff --git a/drivers/net/wireless/ath/ath12k/dp.c b/drivers/net/wireless/ath/ath12k/dp.c
+index 61aa78d8bd8c8f..217eb57663f058 100644
+--- a/drivers/net/wireless/ath/ath12k/dp.c
++++ b/drivers/net/wireless/ath/ath12k/dp.c
+@@ -1202,10 +1202,16 @@ static void ath12k_dp_cc_cleanup(struct ath12k_base *ab)
+ if (!skb)
+ continue;
+
+- skb_cb = ATH12K_SKB_CB(skb);
+- ar = skb_cb->ar;
+- if (atomic_dec_and_test(&ar->dp.num_tx_pending))
+- wake_up(&ar->dp.tx_empty_waitq);
++ /* if we are unregistering, hw would've been destroyed and
++ * ar is no longer valid.
++ */
++ if (!(test_bit(ATH12K_FLAG_UNREGISTERING, &ab->dev_flags))) {
++ skb_cb = ATH12K_SKB_CB(skb);
++ ar = skb_cb->ar;
++
++ if (atomic_dec_and_test(&ar->dp.num_tx_pending))
++ wake_up(&ar->dp.tx_empty_waitq);
++ }
+
+ dma_unmap_single(ab->dev, ATH12K_SKB_CB(skb)->paddr,
+ skb->len, DMA_TO_DEVICE);
+@@ -1241,6 +1247,7 @@ static void ath12k_dp_cc_cleanup(struct ath12k_base *ab)
+ }
+
+ kfree(dp->spt_info);
++ dp->spt_info = NULL;
+ }
+
+ static void ath12k_dp_reoq_lut_cleanup(struct ath12k_base *ab)
+@@ -1276,8 +1283,10 @@ void ath12k_dp_free(struct ath12k_base *ab)
+
+ ath12k_dp_rx_reo_cmd_list_cleanup(ab);
+
+- for (i = 0; i < ab->hw_params->max_tx_ring; i++)
++ for (i = 0; i < ab->hw_params->max_tx_ring; i++) {
+ kfree(dp->tx_ring[i].tx_status);
++ dp->tx_ring[i].tx_status = NULL;
++ }
+
+ ath12k_dp_rx_free(ab);
+ /* Deinit any SOC level resource */
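
Both ath12k hunks apply the same defensive rule: reset a pointer right
after freeing it, so a second pass over an overlapping cleanup path
degrades to a no-op instead of a double free (kfree(NULL), like
free(NULL), is defined to do nothing). A standalone illustration:

#include <stdlib.h>

struct dp {
        int *tx_status;
};

/* Idempotent cleanup: safe to call repeatedly because the pointer is
 * cleared after the first free. */
static void dp_free(struct dp *dp)
{
        free(dp->tx_status);
        dp->tx_status = NULL;
}

int main(void)
{
        struct dp dp = { .tx_status = malloc(64) };

        dp_free(&dp);
        dp_free(&dp);   /* second call is harmless */
        return 0;
}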
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 137394c364603b..6d0784a21558ea 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -917,7 +917,10 @@ void ath12k_mac_peer_cleanup_all(struct ath12k *ar)
+
+ spin_lock_bh(&ab->base_lock);
+ list_for_each_entry_safe(peer, tmp, &ab->peers, list) {
+- ath12k_dp_rx_peer_tid_cleanup(ar, peer);
++ /* Skip Rx TID cleanup for self peer */
++ if (peer->sta)
++ ath12k_dp_rx_peer_tid_cleanup(ar, peer);
++
+ list_del(&peer->list);
+ kfree(peer);
+ }
+diff --git a/drivers/net/wireless/ath/ath12k/wow.c b/drivers/net/wireless/ath/ath12k/wow.c
+index 9b8684abbe40ae..3624180b25b970 100644
+--- a/drivers/net/wireless/ath/ath12k/wow.c
++++ b/drivers/net/wireless/ath/ath12k/wow.c
+@@ -191,7 +191,7 @@ ath12k_wow_convert_8023_to_80211(struct ath12k *ar,
+ memcpy(bytemask, eth_bytemask, eth_pat_len);
+
+ pat_len = eth_pat_len;
+- } else if (eth_pkt_ofs + eth_pat_len < prot_ofs) {
++ } else if (size_add(eth_pkt_ofs, eth_pat_len) < prot_ofs) {
+ memcpy(pat, eth_pat, ETH_ALEN - eth_pkt_ofs);
+ memcpy(bytemask, eth_bytemask, ETH_ALEN - eth_pkt_ofs);
+
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index eb631fd3336d8d..b5257b2b4aa527 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -294,6 +294,9 @@ int htc_connect_service(struct htc_target *target,
+ return -ETIMEDOUT;
+ }
+
++ if (target->conn_rsp_epid < 0 || target->conn_rsp_epid >= ENDPOINT_MAX)
++ return -EINVAL;
++
+ *conn_rsp_epid = target->conn_rsp_epid;
+ return 0;
+ err:
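
conn_rsp_epid is parsed out of a message the target device sends back,
so it is attacker-influenced data and must be range-checked before it
indexes per-endpoint arrays; the added test clamps it to
[0, ENDPOINT_MAX). The general shape for any device-supplied index:

#include <errno.h>

#define ENDPOINT_MAX    22      /* array bound; value is illustrative */

/* Validate before use: a negative or too-large index becomes -EINVAL
 * instead of an out-of-bounds access. */
static int validate_epid(int epid)
{
        if (epid < 0 || epid >= ENDPOINT_MAX)
                return -EINVAL;
        return 0;
}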
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
+index fe4f657561056c..af930e34c21f8a 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
+@@ -110,9 +110,8 @@ void brcmf_of_probe(struct device *dev, enum brcmf_bus_type bus_type,
+ }
+ strreplace(board_type, '/', '-');
+ settings->board_type = board_type;
+-
+- of_node_put(root);
+ }
++ of_node_put(root);
+
+ if (!np || !of_device_is_compatible(np, "brcm,bcm4329-fmac"))
+ return;
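
The brcmfmac fix hoists of_node_put() out of the conditional: the root
node reference was taken on every call but only dropped when the
board_type branch ran, leaking a refcount otherwise. Sketch of the
balancing rule (derive_board_type() stands in for the real string
handling):

/* Every of_find_node_by_path()/of_node_get() must be balanced by
 * of_node_put() on all exit paths; of_node_put(NULL) is a no-op, so
 * the unconditional put is safe even when the lookup failed. */
root = of_find_node_by_path("/");
if (root && !settings->board_type)
        settings->board_type = derive_board_type(dev, root);
of_node_put(root);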
+diff --git a/drivers/net/wireless/intel/iwlegacy/3945.c b/drivers/net/wireless/intel/iwlegacy/3945.c
+index 14d2331ee6cb97..b0656b143f77a2 100644
+--- a/drivers/net/wireless/intel/iwlegacy/3945.c
++++ b/drivers/net/wireless/intel/iwlegacy/3945.c
+@@ -566,7 +566,7 @@ il3945_hdl_rx(struct il_priv *il, struct il_rx_buf *rxb)
+ if (!(rx_end->status & RX_RES_STATUS_NO_CRC32_ERROR) ||
+ !(rx_end->status & RX_RES_STATUS_NO_RXE_OVERFLOW)) {
+ D_RX("Bad CRC or FIFO: 0x%08X.\n", rx_end->status);
+- rx_status.flag |= RX_FLAG_FAILED_FCS_CRC;
++ return;
+ }
+
+ /* Convert 3945's rssi indicator to dBm */
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-mac.c b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+index fcccde7bb65922..05c4af41bdb960 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-mac.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-mac.c
+@@ -664,7 +664,7 @@ il4965_hdl_rx(struct il_priv *il, struct il_rx_buf *rxb)
+ if (!(rx_pkt_status & RX_RES_STATUS_NO_CRC32_ERROR) ||
+ !(rx_pkt_status & RX_RES_STATUS_NO_RXE_OVERFLOW)) {
+ D_RX("Bad CRC or FIFO: 0x%08X.\n", le32_to_cpu(rx_pkt_status));
+- rx_status.flag |= RX_FLAG_FAILED_FCS_CRC;
++ return;
+ }
+
+ /* This will be used in several places later */
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+index 80b9a115245fe8..d37d83d246354e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c
+@@ -1237,6 +1237,7 @@ int __iwl_mvm_mac_start(struct iwl_mvm *mvm)
+ fast_resume = mvm->fast_resume;
+
+ if (fast_resume) {
++ iwl_mvm_mei_device_state(mvm, true);
+ ret = iwl_mvm_fast_resume(mvm);
+ if (ret) {
+ iwl_mvm_stop_device(mvm);
+@@ -1377,10 +1378,13 @@ void __iwl_mvm_mac_stop(struct iwl_mvm *mvm, bool suspend)
+ iwl_mvm_rm_aux_sta(mvm);
+
+ if (suspend &&
+- mvm->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_AX210)
++ mvm->trans->trans_cfg->device_family >= IWL_DEVICE_FAMILY_22000) {
+ iwl_mvm_fast_suspend(mvm);
+- else
++ /* From this point on, we won't touch the device */
++ iwl_mvm_mei_device_state(mvm, false);
++ } else {
+ iwl_mvm_stop_device(mvm);
++ }
+
+ iwl_mvm_async_handlers_purge(mvm);
+ /* async_handlers_list is empty and will stay empty: HW is stopped */
+diff --git a/drivers/net/wireless/intersil/p54/p54spi.c b/drivers/net/wireless/intersil/p54/p54spi.c
+index d33a994906a7bb..27f44a9f0bc1f9 100644
+--- a/drivers/net/wireless/intersil/p54/p54spi.c
++++ b/drivers/net/wireless/intersil/p54/p54spi.c
+@@ -624,7 +624,7 @@ static int p54spi_probe(struct spi_device *spi)
+ gpio_direction_input(p54spi_gpio_irq);
+
+ ret = request_irq(gpio_to_irq(p54spi_gpio_irq),
+- p54spi_interrupt, 0, "p54spi",
++ p54spi_interrupt, IRQF_NO_AUTOEN, "p54spi",
+ priv->spi);
+ if (ret < 0) {
+ dev_err(&priv->spi->dev, "request_irq() failed");
+@@ -633,8 +633,6 @@ static int p54spi_probe(struct spi_device *spi)
+
+ irq_set_irq_type(gpio_to_irq(p54spi_gpio_irq), IRQ_TYPE_EDGE_RISING);
+
+- disable_irq(gpio_to_irq(p54spi_gpio_irq));
+-
+ INIT_WORK(&priv->work, p54spi_work);
+ init_completion(&priv->fw_comp);
+ INIT_LIST_HEAD(&priv->tx_pending);
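
The p54spi change closes a probe-time race: with request_irq() followed
by a separate disable_irq(), the handler can fire in the window between
the two calls, before the state it touches exists. IRQF_NO_AUTOEN makes
request_threaded_irq() register the line already disabled, and a later
enable_irq() marks the point where interrupts are genuinely welcome.
Sketch, assuming a generic probe context (handler name and private data
are placeholders):

ret = request_threaded_irq(irq, NULL, my_irq_thread,
                           IRQF_ONESHOT | IRQF_NO_AUTOEN,
                           "mydev", priv);
if (ret < 0)
        return ret;

/* ... finish initializing everything the handler relies on ... */

enable_irq(irq);

The mwifiex hunk further down makes the same conversion for its wakeup
interrupt.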
+diff --git a/drivers/net/wireless/marvell/mwifiex/cmdevt.c b/drivers/net/wireless/marvell/mwifiex/cmdevt.c
+index 1cff001bdc5145..b30ed321c6251a 100644
+--- a/drivers/net/wireless/marvell/mwifiex/cmdevt.c
++++ b/drivers/net/wireless/marvell/mwifiex/cmdevt.c
+@@ -938,8 +938,10 @@ void mwifiex_process_assoc_resp(struct mwifiex_adapter *adapter)
+ assoc_resp.links[0].bss = priv->req_bss;
+ assoc_resp.buf = priv->assoc_rsp_buf;
+ assoc_resp.len = priv->assoc_rsp_size;
++ wiphy_lock(priv->wdev.wiphy);
+ cfg80211_rx_assoc_resp(priv->netdev,
+ &assoc_resp);
++ wiphy_unlock(priv->wdev.wiphy);
+ priv->assoc_rsp_size = 0;
+ }
+ }
+diff --git a/drivers/net/wireless/marvell/mwifiex/fw.h b/drivers/net/wireless/marvell/mwifiex/fw.h
+index d03129d5d24e3d..4a96281792cc1a 100644
+--- a/drivers/net/wireless/marvell/mwifiex/fw.h
++++ b/drivers/net/wireless/marvell/mwifiex/fw.h
+@@ -875,7 +875,7 @@ struct mwifiex_ietypes_chanstats {
+ struct mwifiex_ie_types_wildcard_ssid_params {
+ struct mwifiex_ie_types_header header;
+ u8 max_ssid_length;
+- u8 ssid[1];
++ u8 ssid[];
+ } __packed;
+
+ #define TSF_DATA_SIZE 8
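
Replacing u8 ssid[1] with a C99 flexible array member tells the compiler
and fortified memcpy() that the array's real bound comes from the
allocation, not from a fake one-element declaration. Kernel code sizes
such allocations with struct_size(); the plain-C equivalent:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct wildcard_ssid_tlv {
        unsigned char max_ssid_length;
        unsigned char ssid[];   /* flexible array member */
};

int main(void)
{
        const char *ssid = "example";
        size_t n = strlen(ssid);
        /* kernel code would write struct_size(tlv, ssid, n) here */
        struct wildcard_ssid_tlv *tlv = malloc(sizeof(*tlv) + n);

        if (!tlv)
                return 1;

        tlv->max_ssid_length = (unsigned char)n;
        memcpy(tlv->ssid, ssid, n);
        printf("stored %d-byte ssid\n", tlv->max_ssid_length);
        free(tlv);
        return 0;
}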
+diff --git a/drivers/net/wireless/marvell/mwifiex/main.c b/drivers/net/wireless/marvell/mwifiex/main.c
+index 96d1f6039fbca3..855019fe548582 100644
+--- a/drivers/net/wireless/marvell/mwifiex/main.c
++++ b/drivers/net/wireless/marvell/mwifiex/main.c
+@@ -1679,7 +1679,8 @@ static void mwifiex_probe_of(struct mwifiex_adapter *adapter)
+ }
+
+ ret = devm_request_irq(dev, adapter->irq_wakeup,
+- mwifiex_irq_wakeup_handler, IRQF_TRIGGER_LOW,
++ mwifiex_irq_wakeup_handler,
++ IRQF_TRIGGER_LOW | IRQF_NO_AUTOEN,
+ "wifi_wake", adapter);
+ if (ret) {
+ dev_err(dev, "Failed to request irq_wakeup %d (%d)\n",
+@@ -1687,7 +1688,6 @@ static void mwifiex_probe_of(struct mwifiex_adapter *adapter)
+ goto err_exit;
+ }
+
+- disable_irq(adapter->irq_wakeup);
+ if (device_init_wakeup(dev, true)) {
+ dev_err(dev, "fail to init wakeup for mwifiex\n");
+ goto err_exit;
+diff --git a/drivers/net/wireless/marvell/mwifiex/util.c b/drivers/net/wireless/marvell/mwifiex/util.c
+index 42c04bf858da37..1f1f6280a0f251 100644
+--- a/drivers/net/wireless/marvell/mwifiex/util.c
++++ b/drivers/net/wireless/marvell/mwifiex/util.c
+@@ -494,7 +494,9 @@ mwifiex_process_mgmt_packet(struct mwifiex_private *priv,
+ }
+ }
+
++ wiphy_lock(priv->wdev.wiphy);
+ cfg80211_rx_mlme_mgmt(priv->netdev, skb->data, pkt_len);
++ wiphy_unlock(priv->wdev.wiphy);
+ }
+
+ if (priv->adapter->host_mlme_enabled &&
+diff --git a/drivers/net/wireless/microchip/wilc1000/netdev.c b/drivers/net/wireless/microchip/wilc1000/netdev.c
+index 9ecf3fb29b558f..8bc127c5a538cb 100644
+--- a/drivers/net/wireless/microchip/wilc1000/netdev.c
++++ b/drivers/net/wireless/microchip/wilc1000/netdev.c
+@@ -608,6 +608,9 @@ static int wilc_mac_open(struct net_device *ndev)
+ return ret;
+ }
+
++ wilc_set_operation_mode(vif, wilc_get_vif_idx(vif), vif->iftype,
++ vif->idx);
++
+ netdev_dbg(ndev, "Mac address: %pM\n", ndev->dev_addr);
+ ret = wilc_set_mac_address(vif, ndev->dev_addr);
+ if (ret) {
+@@ -618,9 +621,6 @@ static int wilc_mac_open(struct net_device *ndev)
+ return ret;
+ }
+
+- wilc_set_operation_mode(vif, wilc_get_vif_idx(vif), vif->iftype,
+- vif->idx);
+-
+ mgmt_regs.interface_stypes = vif->mgmt_reg_stypes;
+ /* so we detect a change */
+ vif->mgmt_reg_stypes = 0;
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/core.c b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+index 7891c988dd5f03..f95898f68d68a5 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+@@ -5058,10 +5058,12 @@ rtl8xxxu_bss_info_changed(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ }
+
+ if (changed & BSS_CHANGED_BEACON_ENABLED) {
+- if (bss_conf->enable_beacon)
++ if (bss_conf->enable_beacon) {
+ rtl8xxxu_start_tx_beacon(priv);
+- else
++ schedule_delayed_work(&priv->update_beacon_work, 0);
++ } else {
+ rtl8xxxu_stop_tx_beacon(priv);
++ }
+ }
+
+ if (changed & BSS_CHANGED_BEACON)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/efuse.c b/drivers/net/wireless/realtek/rtlwifi/efuse.c
+index 82cf5fb5175fef..6518e77b89f578 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/efuse.c
++++ b/drivers/net/wireless/realtek/rtlwifi/efuse.c
+@@ -162,10 +162,19 @@ void efuse_write_1byte(struct ieee80211_hw *hw, u16 address, u8 value)
+ void read_efuse_byte(struct ieee80211_hw *hw, u16 _offset, u8 *pbuf)
+ {
+ struct rtl_priv *rtlpriv = rtl_priv(hw);
++ u16 max_attempts = 10000;
+ u32 value32;
+ u8 readbyte;
+ u16 retry;
+
++ /*
++ * In case of USB devices, transfer speeds are limited, hence
++ * efuse I/O reads could be (way) slower. So, decrease (a lot)
++ * the read attempts in case of failures.
++ */
++ if (rtlpriv->rtlhal.interface == INTF_USB)
++ max_attempts = 10;
++
+ rtl_write_byte(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL] + 1,
+ (_offset & 0xff));
+ readbyte = rtl_read_byte(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL] + 2);
+@@ -178,7 +187,7 @@ void read_efuse_byte(struct ieee80211_hw *hw, u16 _offset, u8 *pbuf)
+
+ retry = 0;
+ value32 = rtl_read_dword(rtlpriv, rtlpriv->cfg->maps[EFUSE_CTRL]);
+- while (!(((value32 >> 24) & 0xff) & 0x80) && (retry < 10000)) {
++ while (!(((value32 >> 24) & 0xff) & 0x80) && (retry < max_attempts)) {
+ value32 = rtl_read_dword(rtlpriv,
+ rtlpriv->cfg->maps[EFUSE_CTRL]);
+ retry++;
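
Context for the max_attempts change: every poll iteration is one
register read, which is a cheap MMIO access on PCIe but a full control
transfer on USB, so a 10000-iteration budget could stall an efuse dump
for a very long time on failing reads. Parameterizing the bound keeps a
single polling loop for both buses; the generic shape (struct mydev,
read_ctrl() and READY_BIT are placeholders):

/* Poll a ready bit with a caller-chosen iteration budget, so slow
 * buses can pass a small count and fast ones a generous one. */
static int poll_ready(struct mydev *dev, unsigned int max_attempts)
{
        unsigned int retry = 0;
        unsigned int val = read_ctrl(dev);

        while (!(val & READY_BIT) && retry < max_attempts) {
                val = read_ctrl(dev);
                retry++;
        }

        return (val & READY_BIT) ? 0 : -ETIMEDOUT;
}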
+diff --git a/drivers/net/wireless/realtek/rtw89/cam.c b/drivers/net/wireless/realtek/rtw89/cam.c
+index 4476fc7e53db74..8d140b94cb4403 100644
+--- a/drivers/net/wireless/realtek/rtw89/cam.c
++++ b/drivers/net/wireless/realtek/rtw89/cam.c
+@@ -211,25 +211,17 @@ static int rtw89_cam_get_addr_cam_key_idx(struct rtw89_addr_cam_entry *addr_cam,
+ return 0;
+ }
+
+-static int rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta,
+- const struct rtw89_sec_cam_entry *sec_cam,
+- bool inform_fw)
++static int __rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ const struct rtw89_sec_cam_entry *sec_cam,
++ bool inform_fw)
+ {
+- struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
+- struct rtw89_vif *rtwvif;
+ struct rtw89_addr_cam_entry *addr_cam;
+ unsigned int i;
+ int ret = 0;
+
+- if (!vif) {
+- rtw89_err(rtwdev, "No iface for deleting sec cam\n");
+- return -EINVAL;
+- }
+-
+- rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta);
++ addr_cam = rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link);
+
+ for_each_set_bit(i, addr_cam->sec_cam_map, RTW89_SEC_CAM_IN_ADDR_CAM) {
+ if (addr_cam->sec_ent[i] != sec_cam->sec_cam_idx)
+@@ -239,11 +231,11 @@ static int rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev,
+ }
+
+ if (inform_fw) {
+- ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif, rtwsta);
++ ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret)
+ rtw89_err(rtwdev,
+ "failed to update dctl cam del key: %d\n", ret);
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL);
+ if (ret)
+ rtw89_err(rtwdev, "failed to update cam del key: %d\n", ret);
+ }
+@@ -251,25 +243,17 @@ static int rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev,
+ return ret;
+ }
+
+-static int rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta,
+- struct ieee80211_key_conf *key,
+- struct rtw89_sec_cam_entry *sec_cam)
++static int __rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ struct ieee80211_key_conf *key,
++ struct rtw89_sec_cam_entry *sec_cam)
+ {
+- struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
+- struct rtw89_vif *rtwvif;
+ struct rtw89_addr_cam_entry *addr_cam;
+ u8 key_idx = 0;
+ int ret;
+
+- if (!vif) {
+- rtw89_err(rtwdev, "No iface for adding sec cam\n");
+- return -EINVAL;
+- }
+-
+- rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta);
++ addr_cam = rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link);
+
+ if (key->cipher == WLAN_CIPHER_SUITE_WEP40 ||
+ key->cipher == WLAN_CIPHER_SUITE_WEP104)
+@@ -285,13 +269,13 @@ static int rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev,
+ addr_cam->sec_ent_keyid[key_idx] = key->keyidx;
+ addr_cam->sec_ent[key_idx] = sec_cam->sec_cam_idx;
+ set_bit(key_idx, addr_cam->sec_cam_map);
+- ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif, rtwsta);
++ ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to update dctl cam sec entry: %d\n",
+ ret);
+ return ret;
+ }
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to update addr cam sec entry: %d\n",
+ ret);
+@@ -302,6 +286,92 @@ static int rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev,
+ return 0;
+ }
+
++static int rtw89_cam_detach_sec_cam(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta,
++ const struct rtw89_sec_cam_entry *sec_cam,
++ bool inform_fw)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
++ struct rtw89_sta_link *rtwsta_link;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
++ int ret;
++
++ if (!vif) {
++ rtw89_err(rtwdev, "No iface for deleting sec cam\n");
++ return -EINVAL;
++ }
++
++ rtwvif = vif_to_rtwvif(vif);
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) {
++ rtwsta_link = rtwsta ? rtwsta->links[link_id] : NULL;
++ if (rtwsta && !rtwsta_link)
++ continue;
++
++ ret = __rtw89_cam_detach_sec_cam(rtwdev, rtwvif_link, rtwsta_link,
++ sec_cam, inform_fw);
++ if (ret)
++ return ret;
++ }
++
++ return 0;
++}
++
++static int rtw89_cam_attach_sec_cam(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta,
++ struct ieee80211_key_conf *key,
++ struct rtw89_sec_cam_entry *sec_cam)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
++ struct rtw89_sta_link *rtwsta_link;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
++ int key_link_id;
++ int ret;
++
++ if (!vif) {
++ rtw89_err(rtwdev, "No iface for adding sec cam\n");
++ return -EINVAL;
++ }
++
++ rtwvif = vif_to_rtwvif(vif);
++
++ key_link_id = ieee80211_vif_is_mld(vif) ? key->link_id : 0;
++ if (key_link_id >= 0) {
++ rtwvif_link = rtwvif->links[key_link_id];
++ rtwsta_link = rtwsta ? rtwsta->links[key_link_id] : NULL;
++
++ if (!rtwvif_link || (rtwsta && !rtwsta_link)) {
++ rtw89_err(rtwdev, "No drv link for adding sec cam\n");
++ return -ENOLINK;
++ }
++
++ return __rtw89_cam_attach_sec_cam(rtwdev, rtwvif_link,
++ rtwsta_link, key, sec_cam);
++ }
++
++ /* key_link_id < 0: MLD pairwise key */
++ if (!rtwsta) {
++ rtw89_err(rtwdev, "No sta for adding MLD pairwise sec cam\n");
++ return -EINVAL;
++ }
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ ret = __rtw89_cam_attach_sec_cam(rtwdev, rtwvif_link,
++ rtwsta_link, key, sec_cam);
++ if (ret)
++ return ret;
++ }
++
++ return 0;
++}
++
+ static int rtw89_cam_sec_key_install(struct rtw89_dev *rtwdev,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+@@ -485,10 +555,10 @@ void rtw89_cam_deinit_bssid_cam(struct rtw89_dev *rtwdev,
+ clear_bit(bssid_cam->bssid_cam_idx, cam_info->bssid_cam_map);
+ }
+
+-void rtw89_cam_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++void rtw89_cam_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- struct rtw89_addr_cam_entry *addr_cam = &rtwvif->addr_cam;
+- struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif->bssid_cam;
++ struct rtw89_addr_cam_entry *addr_cam = &rtwvif_link->addr_cam;
++ struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif_link->bssid_cam;
+
+ rtw89_cam_deinit_addr_cam(rtwdev, addr_cam);
+ rtw89_cam_deinit_bssid_cam(rtwdev, bssid_cam);
+@@ -593,7 +663,7 @@ static int rtw89_cam_get_avail_bssid_cam(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_cam_init_bssid_cam(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct rtw89_bssid_cam_entry *bssid_cam,
+ const u8 *bssid)
+ {
+@@ -613,7 +683,7 @@ int rtw89_cam_init_bssid_cam(struct rtw89_dev *rtwdev,
+ }
+
+ bssid_cam->bssid_cam_idx = bssid_cam_idx;
+- bssid_cam->phy_idx = rtwvif->phy_idx;
++ bssid_cam->phy_idx = rtwvif_link->phy_idx;
+ bssid_cam->len = BSSID_CAM_ENT_SIZE;
+ bssid_cam->offset = 0;
+ bssid_cam->valid = true;
+@@ -622,20 +692,21 @@ int rtw89_cam_init_bssid_cam(struct rtw89_dev *rtwdev,
+ return 0;
+ }
+
+-void rtw89_cam_bssid_changed(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++void rtw89_cam_bssid_changed(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif->bssid_cam;
++ struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif_link->bssid_cam;
+
+- ether_addr_copy(bssid_cam->bssid, rtwvif->bssid);
++ ether_addr_copy(bssid_cam->bssid, rtwvif_link->bssid);
+ }
+
+-int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- struct rtw89_addr_cam_entry *addr_cam = &rtwvif->addr_cam;
+- struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif->bssid_cam;
++ struct rtw89_addr_cam_entry *addr_cam = &rtwvif_link->addr_cam;
++ struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif_link->bssid_cam;
+ int ret;
+
+- ret = rtw89_cam_init_bssid_cam(rtwdev, rtwvif, bssid_cam, rtwvif->bssid);
++ ret = rtw89_cam_init_bssid_cam(rtwdev, rtwvif_link, bssid_cam,
++ rtwvif_link->bssid);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to init bssid cam\n");
+ return ret;
+@@ -651,19 +722,27 @@ int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ }
+
+ int rtw89_cam_fill_bssid_cam_info(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, u8 *cmd)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link, u8 *cmd)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- struct rtw89_bssid_cam_entry *bssid_cam = rtw89_get_bssid_cam_of(rtwvif, rtwsta);
+- u8 bss_color = vif->bss_conf.he_bss_color.color;
++ struct rtw89_bssid_cam_entry *bssid_cam = rtw89_get_bssid_cam_of(rtwvif_link,
++ rtwsta_link);
++ struct ieee80211_bss_conf *bss_conf;
++ u8 bss_color;
+ u8 bss_mask;
+
+- if (vif->bss_conf.nontransmitted)
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++ bss_color = bss_conf->he_bss_color.color;
++
++ if (bss_conf->nontransmitted)
+ bss_mask = RTW89_BSSID_MATCH_5_BYTES;
+ else
+ bss_mask = RTW89_BSSID_MATCH_ALL;
+
++ rcu_read_unlock();
++
+ FWCMD_SET_ADDR_BSSID_IDX(cmd, bssid_cam->bssid_cam_idx);
+ FWCMD_SET_ADDR_BSSID_OFFSET(cmd, bssid_cam->offset);
+ FWCMD_SET_ADDR_BSSID_LEN(cmd, bssid_cam->len);
+@@ -694,19 +773,30 @@ static u8 rtw89_cam_addr_hash(u8 start, const u8 *addr)
+ }
+
+ void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ const u8 *scan_mac_addr,
+ u8 *cmd)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- struct rtw89_addr_cam_entry *addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta);
+- struct ieee80211_sta *sta = rtwsta_to_sta_safe(rtwsta);
+- const u8 *sma = scan_mac_addr ? scan_mac_addr : rtwvif->mac_addr;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct rtw89_addr_cam_entry *addr_cam =
++ rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link);
++ struct ieee80211_sta *sta = rtwsta_link_to_sta_safe(rtwsta_link);
++ struct ieee80211_link_sta *link_sta;
++ const u8 *sma = scan_mac_addr ? scan_mac_addr : rtwvif_link->mac_addr;
+ u8 sma_hash, tma_hash, addr_msk_start;
+ u8 sma_start = 0;
+ u8 tma_start = 0;
+- u8 *tma = sta ? sta->addr : rtwvif->bssid;
++ const u8 *tma;
++
++ rcu_read_lock();
++
++ if (sta) {
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ tma = link_sta->addr;
++ } else {
++ tma = rtwvif_link->bssid;
++ }
+
+ if (addr_cam->addr_mask != 0) {
+ addr_msk_start = __ffs(addr_cam->addr_mask);
+@@ -723,10 +813,10 @@ void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev,
+ FWCMD_SET_ADDR_LEN(cmd, addr_cam->len);
+
+ FWCMD_SET_ADDR_VALID(cmd, addr_cam->valid);
+- FWCMD_SET_ADDR_NET_TYPE(cmd, rtwvif->net_type);
+- FWCMD_SET_ADDR_BCN_HIT_COND(cmd, rtwvif->bcn_hit_cond);
+- FWCMD_SET_ADDR_HIT_RULE(cmd, rtwvif->hit_rule);
+- FWCMD_SET_ADDR_BB_SEL(cmd, rtwvif->phy_idx);
++ FWCMD_SET_ADDR_NET_TYPE(cmd, rtwvif_link->net_type);
++ FWCMD_SET_ADDR_BCN_HIT_COND(cmd, rtwvif_link->bcn_hit_cond);
++ FWCMD_SET_ADDR_HIT_RULE(cmd, rtwvif_link->hit_rule);
++ FWCMD_SET_ADDR_BB_SEL(cmd, rtwvif_link->phy_idx);
+ FWCMD_SET_ADDR_ADDR_MASK(cmd, addr_cam->addr_mask);
+ FWCMD_SET_ADDR_MASK_SEL(cmd, addr_cam->mask_sel);
+ FWCMD_SET_ADDR_SMA_HASH(cmd, sma_hash);
+@@ -748,20 +838,21 @@ void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev,
+ FWCMD_SET_ADDR_TMA4(cmd, tma[4]);
+ FWCMD_SET_ADDR_TMA5(cmd, tma[5]);
+
+- FWCMD_SET_ADDR_PORT_INT(cmd, rtwvif->port);
+- FWCMD_SET_ADDR_TSF_SYNC(cmd, rtwvif->port);
+- FWCMD_SET_ADDR_TF_TRS(cmd, rtwvif->trigger);
+- FWCMD_SET_ADDR_LSIG_TXOP(cmd, rtwvif->lsig_txop);
+- FWCMD_SET_ADDR_TGT_IND(cmd, rtwvif->tgt_ind);
+- FWCMD_SET_ADDR_FRM_TGT_IND(cmd, rtwvif->frm_tgt_ind);
+- FWCMD_SET_ADDR_MACID(cmd, rtwsta ? rtwsta->mac_id : rtwvif->mac_id);
+- if (rtwvif->net_type == RTW89_NET_TYPE_INFRA)
++ FWCMD_SET_ADDR_PORT_INT(cmd, rtwvif_link->port);
++ FWCMD_SET_ADDR_TSF_SYNC(cmd, rtwvif_link->port);
++ FWCMD_SET_ADDR_TF_TRS(cmd, rtwvif_link->trigger);
++ FWCMD_SET_ADDR_LSIG_TXOP(cmd, rtwvif_link->lsig_txop);
++ FWCMD_SET_ADDR_TGT_IND(cmd, rtwvif_link->tgt_ind);
++ FWCMD_SET_ADDR_FRM_TGT_IND(cmd, rtwvif_link->frm_tgt_ind);
++ FWCMD_SET_ADDR_MACID(cmd, rtwsta_link ? rtwsta_link->mac_id :
++ rtwvif_link->mac_id);
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_INFRA)
+ FWCMD_SET_ADDR_AID12(cmd, vif->cfg.aid & 0xfff);
+- else if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
++ else if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE)
+ FWCMD_SET_ADDR_AID12(cmd, sta ? sta->aid & 0xfff : 0);
+- FWCMD_SET_ADDR_WOL_PATTERN(cmd, rtwvif->wowlan_pattern);
+- FWCMD_SET_ADDR_WOL_UC(cmd, rtwvif->wowlan_uc);
+- FWCMD_SET_ADDR_WOL_MAGIC(cmd, rtwvif->wowlan_magic);
++ FWCMD_SET_ADDR_WOL_PATTERN(cmd, rtwvif_link->wowlan_pattern);
++ FWCMD_SET_ADDR_WOL_UC(cmd, rtwvif_link->wowlan_uc);
++ FWCMD_SET_ADDR_WOL_MAGIC(cmd, rtwvif_link->wowlan_magic);
+ FWCMD_SET_ADDR_WAPI(cmd, addr_cam->wapi);
+ FWCMD_SET_ADDR_SEC_ENT_MODE(cmd, addr_cam->sec_ent_mode);
+ FWCMD_SET_ADDR_SEC_ENT0_KEYID(cmd, addr_cam->sec_ent_keyid[0]);
+@@ -780,18 +871,22 @@ void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev,
+ FWCMD_SET_ADDR_SEC_ENT4(cmd, addr_cam->sec_ent[4]);
+ FWCMD_SET_ADDR_SEC_ENT5(cmd, addr_cam->sec_ent[5]);
+ FWCMD_SET_ADDR_SEC_ENT6(cmd, addr_cam->sec_ent[6]);
++
++ rcu_read_unlock();
+ }
+
+ void rtw89_cam_fill_dctl_sec_cam_info_v1(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ struct rtw89_h2c_dctlinfo_ud_v1 *h2c)
+ {
+- struct rtw89_addr_cam_entry *addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta);
++ struct rtw89_addr_cam_entry *addr_cam =
++ rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link);
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ u8 *ptk_tx_iv = rtw_wow->key_info.ptk_tx_iv;
+
+- h2c->c0 = le32_encode_bits(rtwsta ? rtwsta->mac_id : rtwvif->mac_id,
++ h2c->c0 = le32_encode_bits(rtwsta_link ? rtwsta_link->mac_id :
++ rtwvif_link->mac_id,
+ DCTLINFO_V1_C0_MACID) |
+ le32_encode_bits(1, DCTLINFO_V1_C0_OP);
+
+@@ -862,15 +957,17 @@ void rtw89_cam_fill_dctl_sec_cam_info_v1(struct rtw89_dev *rtwdev,
+ }
+
+ void rtw89_cam_fill_dctl_sec_cam_info_v2(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ struct rtw89_h2c_dctlinfo_ud_v2 *h2c)
+ {
+- struct rtw89_addr_cam_entry *addr_cam = rtw89_get_addr_cam_of(rtwvif, rtwsta);
++ struct rtw89_addr_cam_entry *addr_cam =
++ rtw89_get_addr_cam_of(rtwvif_link, rtwsta_link);
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ u8 *ptk_tx_iv = rtw_wow->key_info.ptk_tx_iv;
+
+- h2c->c0 = le32_encode_bits(rtwsta ? rtwsta->mac_id : rtwvif->mac_id,
++ h2c->c0 = le32_encode_bits(rtwsta_link ? rtwsta_link->mac_id :
++ rtwvif_link->mac_id,
+ DCTLINFO_V2_C0_MACID) |
+ le32_encode_bits(1, DCTLINFO_V2_C0_OP);
+
+diff --git a/drivers/net/wireless/realtek/rtw89/cam.h b/drivers/net/wireless/realtek/rtw89/cam.h
+index 5d7b624c2dd428..a6f72edd30fe3a 100644
+--- a/drivers/net/wireless/realtek/rtw89/cam.h
++++ b/drivers/net/wireless/realtek/rtw89/cam.h
+@@ -526,34 +526,34 @@ struct rtw89_h2c_dctlinfo_ud_v2 {
+ #define DCTLINFO_V2_W12_MLD_TA_BSSID_H_V1 GENMASK(15, 0)
+ #define DCTLINFO_V2_W12_ALL GENMASK(15, 0)
+
+-int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif *vif);
+-void rtw89_cam_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif *vif);
++int rtw89_cam_init(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif);
++void rtw89_cam_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif);
+ int rtw89_cam_init_addr_cam(struct rtw89_dev *rtwdev,
+ struct rtw89_addr_cam_entry *addr_cam,
+ const struct rtw89_bssid_cam_entry *bssid_cam);
+ void rtw89_cam_deinit_addr_cam(struct rtw89_dev *rtwdev,
+ struct rtw89_addr_cam_entry *addr_cam);
+ int rtw89_cam_init_bssid_cam(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct rtw89_bssid_cam_entry *bssid_cam,
+ const u8 *bssid);
+ void rtw89_cam_deinit_bssid_cam(struct rtw89_dev *rtwdev,
+ struct rtw89_bssid_cam_entry *bssid_cam);
+ void rtw89_cam_fill_addr_cam_info(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *vif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *vif,
++ struct rtw89_sta_link *rtwsta_link,
+ const u8 *scan_mac_addr, u8 *cmd);
+ void rtw89_cam_fill_dctl_sec_cam_info_v1(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ struct rtw89_h2c_dctlinfo_ud_v1 *h2c);
+ void rtw89_cam_fill_dctl_sec_cam_info_v2(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ struct rtw89_h2c_dctlinfo_ud_v2 *h2c);
+ int rtw89_cam_fill_bssid_cam_info(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, u8 *cmd);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link, u8 *cmd);
+ int rtw89_cam_sec_key_add(struct rtw89_dev *rtwdev,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+@@ -564,6 +564,6 @@ int rtw89_cam_sec_key_del(struct rtw89_dev *rtwdev,
+ struct ieee80211_key_conf *key,
+ bool inform_fw);
+ void rtw89_cam_bssid_changed(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
++ struct rtw89_vif_link *rtwvif_link);
+ void rtw89_cam_reset_keys(struct rtw89_dev *rtwdev);
+ #endif
+diff --git a/drivers/net/wireless/realtek/rtw89/chan.c b/drivers/net/wireless/realtek/rtw89/chan.c
+index 7070c85e2c2883..ba6332da8019c1 100644
+--- a/drivers/net/wireless/realtek/rtw89/chan.c
++++ b/drivers/net/wireless/realtek/rtw89/chan.c
+@@ -234,6 +234,18 @@ void rtw89_entity_init(struct rtw89_dev *rtwdev)
+ rtw89_config_default_chandef(rtwdev);
+ }
+
++static bool rtw89_vif_is_active_role(struct rtw89_vif *rtwvif)
++{
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ if (rtwvif_link->chanctx_assigned)
++ return true;
++
++ return false;
++}
++
+ static void rtw89_entity_calculate_weight(struct rtw89_dev *rtwdev,
+ struct rtw89_entity_weight *w)
+ {
+@@ -255,7 +267,7 @@ static void rtw89_entity_calculate_weight(struct rtw89_dev *rtwdev,
+ }
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+- if (rtwvif->chanctx_assigned)
++ if (rtw89_vif_is_active_role(rtwvif))
+ w->active_roles++;
+ }
+ }
+@@ -387,9 +399,9 @@ int rtw89_iterate_mcc_roles(struct rtw89_dev *rtwdev,
+ static u32 rtw89_mcc_get_tbtt_ofst(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *role, u64 tsf)
+ {
+- struct rtw89_vif *rtwvif = role->rtwvif;
++ struct rtw89_vif_link *rtwvif_link = role->rtwvif_link;
+ u32 bcn_intvl_us = ieee80211_tu_to_usec(role->beacon_interval);
+- u64 sync_tsf = READ_ONCE(rtwvif->sync_bcn_tsf);
++ u64 sync_tsf = READ_ONCE(rtwvif_link->sync_bcn_tsf);
+ u32 remainder;
+
+ if (tsf < sync_tsf) {
+@@ -413,8 +425,8 @@ static int __mcc_fw_req_tsf(struct rtw89_dev *rtwdev, u64 *tsf_ref, u64 *tsf_aux
+ int ret;
+
+ req.group = mcc->group;
+- req.macid_x = ref->rtwvif->mac_id;
+- req.macid_y = aux->rtwvif->mac_id;
++ req.macid_x = ref->rtwvif_link->mac_id;
++ req.macid_y = aux->rtwvif_link->mac_id;
+ ret = rtw89_fw_h2c_mcc_req_tsf(rtwdev, &req, &rpt);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+@@ -440,10 +452,10 @@ static int __mrc_fw_req_tsf(struct rtw89_dev *rtwdev, u64 *tsf_ref, u64 *tsf_aux
+ BUILD_BUG_ON(RTW89_MAC_MRC_MAX_REQ_TSF_NUM < NUM_OF_RTW89_MCC_ROLES);
+
+ arg.num = 2;
+- arg.infos[0].band = ref->rtwvif->mac_idx;
+- arg.infos[0].port = ref->rtwvif->port;
+- arg.infos[1].band = aux->rtwvif->mac_idx;
+- arg.infos[1].port = aux->rtwvif->port;
++ arg.infos[0].band = ref->rtwvif_link->mac_idx;
++ arg.infos[0].port = ref->rtwvif_link->port;
++ arg.infos[1].band = aux->rtwvif_link->mac_idx;
++ arg.infos[1].port = aux->rtwvif_link->port;
+
+ ret = rtw89_fw_h2c_mrc_req_tsf(rtwdev, &arg, &rpt);
+ if (ret) {
+@@ -522,23 +534,31 @@ u32 rtw89_mcc_role_fw_macid_bitmap_to_u32(struct rtw89_mcc_role *mcc_role)
+
+ static void rtw89_mcc_role_macid_sta_iter(void *data, struct ieee80211_sta *sta)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_mcc_role *mcc_role = data;
+- struct rtw89_vif *target = mcc_role->rtwvif;
++ struct rtw89_vif *target = mcc_role->rtwvif_link->rtwvif;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rtw89_sta_link *rtwsta_link;
+
+ if (rtwvif != target)
+ return;
+
+- rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwsta->mac_id);
++ rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0);
++ if (unlikely(!rtwsta_link)) {
++ rtw89_err(rtwdev, "mcc sta macid: find no link on HW-0\n");
++ return;
++ }
++
++ rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwsta_link->mac_id);
+ }
+
+ static void rtw89_mcc_fill_role_macid_bitmap(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role)
+ {
+- struct rtw89_vif *rtwvif = mcc_role->rtwvif;
++ struct rtw89_vif_link *rtwvif_link = mcc_role->rtwvif_link;
+
+- rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwvif->mac_id);
++ rtw89_mcc_role_fw_macid_bitmap_set_bit(mcc_role, rtwvif_link->mac_id);
+ ieee80211_iterate_stations_atomic(rtwdev->hw,
+ rtw89_mcc_role_macid_sta_iter,
+ mcc_role);
+@@ -564,8 +584,9 @@ static void rtw89_mcc_fill_role_policy(struct rtw89_dev *rtwdev,
+ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(mcc_role->rtwvif);
++ struct rtw89_vif_link *rtwvif_link = mcc_role->rtwvif_link;
+ struct ieee80211_p2p_noa_desc *noa_desc;
++ struct ieee80211_bss_conf *bss_conf;
+ u32 bcn_intvl_us = ieee80211_tu_to_usec(mcc_role->beacon_interval);
+ u32 max_toa_us, max_tob_us, max_dur_us;
+ u32 start_time, interval, duration;
+@@ -576,13 +597,18 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev,
+ if (!mcc_role->is_go && !mcc_role->is_gc)
+ return;
+
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++
+ /* find the first periodic NoA */
+ for (i = 0; i < RTW89_P2P_MAX_NOA_NUM; i++) {
+- noa_desc = &vif->bss_conf.p2p_noa_attr.desc[i];
++ noa_desc = &bss_conf->p2p_noa_attr.desc[i];
+ if (noa_desc->count == 255)
+ goto fill;
+ }
+
++ rcu_read_unlock();
+ return;
+
+ fill:
+@@ -590,6 +616,8 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev,
+ interval = le32_to_cpu(noa_desc->interval);
+ duration = le32_to_cpu(noa_desc->duration);
+
++ rcu_read_unlock();
++
+ if (interval != bcn_intvl_us) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "MCC role limit: mismatch interval: %d vs. %d\n",
+@@ -597,7 +625,7 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev,
+ return;
+ }
+
+- ret = rtw89_mac_port_get_tsf(rtwdev, mcc_role->rtwvif, &tsf);
++ ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif_link, &tsf);
+ if (ret) {
+ rtw89_warn(rtwdev, "MCC failed to get port tsf: %d\n", ret);
+ return;
+@@ -632,15 +660,21 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_mcc_fill_role(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct rtw89_mcc_role *role)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_bss_conf *bss_conf;
+ const struct rtw89_chan *chan;
+
+ memset(role, 0, sizeof(*role));
+- role->rtwvif = rtwvif;
+- role->beacon_interval = vif->bss_conf.beacon_int;
++ role->rtwvif_link = rtwvif_link;
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ role->beacon_interval = bss_conf->beacon_int;
++
++ rcu_read_unlock();
+
+ if (!role->beacon_interval) {
+ rtw89_warn(rtwdev,
+@@ -650,10 +684,10 @@ static int rtw89_mcc_fill_role(struct rtw89_dev *rtwdev,
+
+ role->duration = role->beacon_interval / 2;
+
+- chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
++ chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
+ role->is_2ghz = chan->band_type == RTW89_BAND_2G;
+- role->is_go = rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_GO;
+- role->is_gc = rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT;
++ role->is_go = rtwvif_link->wifi_role == RTW89_WIFI_ROLE_P2P_GO;
++ role->is_gc = rtwvif_link->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT;
+
+ rtw89_mcc_fill_role_macid_bitmap(rtwdev, role);
+ rtw89_mcc_fill_role_policy(rtwdev, role);
+@@ -678,7 +712,7 @@ static void rtw89_mcc_fill_bt_role(struct rtw89_dev *rtwdev)
+ }
+
+ struct rtw89_mcc_fill_role_selector {
+- struct rtw89_vif *bind_vif[NUM_OF_RTW89_CHANCTX];
++ struct rtw89_vif_link *bind_vif[NUM_OF_RTW89_CHANCTX];
+ };
+
+ static_assert((u8)NUM_OF_RTW89_CHANCTX >= NUM_OF_RTW89_MCC_ROLES);
+@@ -689,7 +723,7 @@ static int rtw89_mcc_fill_role_iterator(struct rtw89_dev *rtwdev,
+ void *data)
+ {
+ struct rtw89_mcc_fill_role_selector *sel = data;
+- struct rtw89_vif *role_vif = sel->bind_vif[ordered_idx];
++ struct rtw89_vif_link *role_vif = sel->bind_vif[ordered_idx];
+ int ret;
+
+ if (!role_vif) {
+@@ -712,21 +746,28 @@ static int rtw89_mcc_fill_role_iterator(struct rtw89_dev *rtwdev,
+ static int rtw89_mcc_fill_all_roles(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_mcc_fill_role_selector sel = {};
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
+ int ret;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+- if (!rtwvif->chanctx_assigned)
++ if (!rtw89_vif_is_active_role(rtwvif))
++ continue;
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "mcc fill roles: find no link on HW-0\n");
+ continue;
++ }
+
+- if (sel.bind_vif[rtwvif->chanctx_idx]) {
++ if (sel.bind_vif[rtwvif_link->chanctx_idx]) {
+ rtw89_warn(rtwdev,
+ "MCC skip extra vif <macid %d> on chanctx[%d]\n",
+- rtwvif->mac_id, rtwvif->chanctx_idx);
++ rtwvif_link->mac_id, rtwvif_link->chanctx_idx);
+ continue;
+ }
+
+- sel.bind_vif[rtwvif->chanctx_idx] = rtwvif;
++ sel.bind_vif[rtwvif_link->chanctx_idx] = rtwvif_link;
+ }
+
+ ret = rtw89_iterate_mcc_roles(rtwdev, rtw89_mcc_fill_role_iterator, &sel);
+@@ -754,13 +795,13 @@ static void rtw89_mcc_assign_pattern(struct rtw89_dev *rtwdev,
+ memset(&pattern->courtesy, 0, sizeof(pattern->courtesy));
+
+ if (pattern->tob_aux <= 0 || pattern->toa_aux <= 0) {
+- pattern->courtesy.macid_tgt = aux->rtwvif->mac_id;
+- pattern->courtesy.macid_src = ref->rtwvif->mac_id;
++ pattern->courtesy.macid_tgt = aux->rtwvif_link->mac_id;
++ pattern->courtesy.macid_src = ref->rtwvif_link->mac_id;
+ pattern->courtesy.slot_num = RTW89_MCC_DFLT_COURTESY_SLOT;
+ pattern->courtesy.enable = true;
+ } else if (pattern->tob_ref <= 0 || pattern->toa_ref <= 0) {
+- pattern->courtesy.macid_tgt = ref->rtwvif->mac_id;
+- pattern->courtesy.macid_src = aux->rtwvif->mac_id;
++ pattern->courtesy.macid_tgt = ref->rtwvif_link->mac_id;
++ pattern->courtesy.macid_src = aux->rtwvif_link->mac_id;
+ pattern->courtesy.slot_num = RTW89_MCC_DFLT_COURTESY_SLOT;
+ pattern->courtesy.enable = true;
+ }
+@@ -1263,7 +1304,7 @@ static void rtw89_mcc_sync_tbtt(struct rtw89_dev *rtwdev,
+ u64 tsf_src;
+ int ret;
+
+- ret = rtw89_mac_port_get_tsf(rtwdev, src->rtwvif, &tsf_src);
++ ret = rtw89_mac_port_get_tsf(rtwdev, src->rtwvif_link, &tsf_src);
+ if (ret) {
+ rtw89_warn(rtwdev, "MCC failed to get port tsf: %d\n", ret);
+ return;
+@@ -1280,12 +1321,12 @@ static void rtw89_mcc_sync_tbtt(struct rtw89_dev *rtwdev,
+ div_u64_rem(tbtt_tgt, bcn_intvl_src_us, &remainder);
+ tsf_ofst_tgt = bcn_intvl_src_us - remainder;
+
+- config->sync.macid_tgt = tgt->rtwvif->mac_id;
+- config->sync.band_tgt = tgt->rtwvif->mac_idx;
+- config->sync.port_tgt = tgt->rtwvif->port;
+- config->sync.macid_src = src->rtwvif->mac_id;
+- config->sync.band_src = src->rtwvif->mac_idx;
+- config->sync.port_src = src->rtwvif->port;
++ config->sync.macid_tgt = tgt->rtwvif_link->mac_id;
++ config->sync.band_tgt = tgt->rtwvif_link->mac_idx;
++ config->sync.port_tgt = tgt->rtwvif_link->port;
++ config->sync.macid_src = src->rtwvif_link->mac_id;
++ config->sync.band_src = src->rtwvif_link->mac_idx;
++ config->sync.port_src = src->rtwvif_link->port;
+ config->sync.offset = tsf_ofst_tgt / 1024;
+ config->sync.enable = true;
+
+@@ -1294,7 +1335,7 @@ static void rtw89_mcc_sync_tbtt(struct rtw89_dev *rtwdev,
+ config->sync.macid_tgt, config->sync.macid_src,
+ config->sync.offset);
+
+- rtw89_mac_port_tsf_sync(rtwdev, tgt->rtwvif, src->rtwvif,
++ rtw89_mac_port_tsf_sync(rtwdev, tgt->rtwvif_link, src->rtwvif_link,
+ config->sync.offset);
+ }
+
+@@ -1305,13 +1346,13 @@ static int rtw89_mcc_fill_start_tsf(struct rtw89_dev *rtwdev)
+ struct rtw89_mcc_config *config = &mcc->config;
+ u32 bcn_intvl_ref_us = ieee80211_tu_to_usec(ref->beacon_interval);
+ u32 tob_ref_us = ieee80211_tu_to_usec(config->pattern.tob_ref);
+- struct rtw89_vif *rtwvif = ref->rtwvif;
++ struct rtw89_vif_link *rtwvif_link = ref->rtwvif_link;
+ u64 tsf, start_tsf;
+ u32 cur_tbtt_ofst;
+ u64 min_time;
+ int ret;
+
+- ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif, &tsf);
++ ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif_link, &tsf);
+ if (ret) {
+ rtw89_warn(rtwdev, "MCC failed to get port tsf: %d\n", ret);
+ return ret;
+@@ -1390,13 +1431,13 @@ static int __mcc_fw_add_role(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *ro
+ const struct rtw89_chan *chan;
+ int ret;
+
+- chan = rtw89_chan_get(rtwdev, role->rtwvif->chanctx_idx);
++ chan = rtw89_chan_get(rtwdev, role->rtwvif_link->chanctx_idx);
+ req.central_ch_seg0 = chan->channel;
+ req.primary_ch = chan->primary_channel;
+ req.bandwidth = chan->band_width;
+ req.ch_band_type = chan->band_type;
+
+- req.macid = role->rtwvif->mac_id;
++ req.macid = role->rtwvif_link->mac_id;
+ req.group = mcc->group;
+ req.c2h_rpt = policy->c2h_rpt;
+ req.tx_null_early = policy->tx_null_early;
+@@ -1421,7 +1462,7 @@ static int __mcc_fw_add_role(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *ro
+ }
+
+ ret = rtw89_fw_h2c_mcc_macid_bitmap(rtwdev, mcc->group,
+- role->rtwvif->mac_id,
++ role->rtwvif_link->mac_id,
+ role->macid_bitmap);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+@@ -1448,7 +1489,7 @@ void __mrc_fw_add_role(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *role,
+ slot_arg->duration = role->duration;
+ slot_arg->role_num = 1;
+
+- chan = rtw89_chan_get(rtwdev, role->rtwvif->chanctx_idx);
++ chan = rtw89_chan_get(rtwdev, role->rtwvif_link->chanctx_idx);
+
+ slot_arg->roles[0].role_type = RTW89_H2C_MRC_ROLE_WIFI;
+ slot_arg->roles[0].is_master = role == ref;
+@@ -1458,7 +1499,7 @@ void __mrc_fw_add_role(struct rtw89_dev *rtwdev, struct rtw89_mcc_role *role,
+ slot_arg->roles[0].primary_ch = chan->primary_channel;
+ slot_arg->roles[0].en_tx_null = !policy->dis_tx_null;
+ slot_arg->roles[0].null_early = policy->tx_null_early;
+- slot_arg->roles[0].macid = role->rtwvif->mac_id;
++ slot_arg->roles[0].macid = role->rtwvif_link->mac_id;
+ slot_arg->roles[0].macid_main_bitmap =
+ rtw89_mcc_role_fw_macid_bitmap_to_u32(role);
+ }
+@@ -1569,7 +1610,7 @@ static int __mcc_fw_start(struct rtw89_dev *rtwdev, bool replace)
+ }
+ }
+
+- req.macid = ref->rtwvif->mac_id;
++ req.macid = ref->rtwvif_link->mac_id;
+ req.tsf_high = config->start_tsf >> 32;
+ req.tsf_low = config->start_tsf;
+
+@@ -1598,7 +1639,7 @@ static void __mrc_fw_add_courtesy(struct rtw89_dev *rtwdev,
+ if (!courtesy->enable)
+ return;
+
+- if (courtesy->macid_src == ref->rtwvif->mac_id) {
++ if (courtesy->macid_src == ref->rtwvif_link->mac_id) {
+ slot_arg_src = &arg->slots[ref->slot_idx];
+ slot_idx_tgt = aux->slot_idx;
+ } else {
+@@ -1717,9 +1758,9 @@ static int __mcc_fw_set_duration_no_bt(struct rtw89_dev *rtwdev, bool sync_chang
+ struct rtw89_fw_mcc_duration req = {
+ .group = mcc->group,
+ .btc_in_group = false,
+- .start_macid = ref->rtwvif->mac_id,
+- .macid_x = ref->rtwvif->mac_id,
+- .macid_y = aux->rtwvif->mac_id,
++ .start_macid = ref->rtwvif_link->mac_id,
++ .macid_x = ref->rtwvif_link->mac_id,
++ .macid_y = aux->rtwvif_link->mac_id,
+ .duration_x = ref->duration,
+ .duration_y = aux->duration,
+ .start_tsf_high = config->start_tsf >> 32,
+@@ -1813,18 +1854,18 @@ static void rtw89_mcc_handle_beacon_noa(struct rtw89_dev *rtwdev, bool enable)
+ struct ieee80211_p2p_noa_desc noa_desc = {};
+ u64 start_time = config->start_tsf;
+ u32 interval = config->mcc_interval;
+- struct rtw89_vif *rtwvif_go;
++ struct rtw89_vif_link *rtwvif_go;
+ u32 duration;
+
+ if (mcc->mode != RTW89_MCC_MODE_GO_STA)
+ return;
+
+ if (ref->is_go) {
+- rtwvif_go = ref->rtwvif;
++ rtwvif_go = ref->rtwvif_link;
+ start_time += ieee80211_tu_to_usec(ref->duration);
+ duration = config->mcc_interval - ref->duration;
+ } else if (aux->is_go) {
+- rtwvif_go = aux->rtwvif;
++ rtwvif_go = aux->rtwvif_link;
+ start_time += ieee80211_tu_to_usec(pattern->tob_ref) +
+ ieee80211_tu_to_usec(config->beacon_offset) +
+ ieee80211_tu_to_usec(pattern->toa_aux);
+@@ -1865,9 +1906,9 @@ static void rtw89_mcc_start_beacon_noa(struct rtw89_dev *rtwdev)
+ return;
+
+ if (ref->is_go)
+- rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif, true);
++ rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif_link, true);
+ else if (aux->is_go)
+- rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif, true);
++ rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif_link, true);
+
+ rtw89_mcc_handle_beacon_noa(rtwdev, true);
+ }
+@@ -1882,9 +1923,9 @@ static void rtw89_mcc_stop_beacon_noa(struct rtw89_dev *rtwdev)
+ return;
+
+ if (ref->is_go)
+- rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif, false);
++ rtw89_fw_h2c_tsf32_toggle(rtwdev, ref->rtwvif_link, false);
+ else if (aux->is_go)
+- rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif, false);
++ rtw89_fw_h2c_tsf32_toggle(rtwdev, aux->rtwvif_link, false);
+
+ rtw89_mcc_handle_beacon_noa(rtwdev, false);
+ }
+@@ -1942,7 +1983,7 @@ struct rtw89_mcc_stop_sel {
+ static void rtw89_mcc_stop_sel_fill(struct rtw89_mcc_stop_sel *sel,
+ const struct rtw89_mcc_role *mcc_role)
+ {
+- sel->mac_id = mcc_role->rtwvif->mac_id;
++ sel->mac_id = mcc_role->rtwvif_link->mac_id;
+ sel->slot_idx = mcc_role->slot_idx;
+ }
+
+@@ -1953,7 +1994,7 @@ static int rtw89_mcc_stop_sel_iterator(struct rtw89_dev *rtwdev,
+ {
+ struct rtw89_mcc_stop_sel *sel = data;
+
+- if (!mcc_role->rtwvif->chanctx_assigned)
++ if (!mcc_role->rtwvif_link->chanctx_assigned)
+ return 0;
+
+ rtw89_mcc_stop_sel_fill(sel, mcc_role);
+@@ -2081,7 +2122,7 @@ static int __mcc_fw_upd_macid_bitmap(struct rtw89_dev *rtwdev,
+ int ret;
+
+ ret = rtw89_fw_h2c_mcc_macid_bitmap(rtwdev, mcc->group,
+- upd->rtwvif->mac_id,
++ upd->rtwvif_link->mac_id,
+ upd->macid_bitmap);
+ if (ret) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+@@ -2106,7 +2147,7 @@ static int __mrc_fw_upd_macid_bitmap(struct rtw89_dev *rtwdev,
+ int i;
+
+ arg.sch_idx = mcc->group;
+- arg.macid = upd->rtwvif->mac_id;
++ arg.macid = upd->rtwvif_link->mac_id;
+
+ for (i = 0; i < 32; i++) {
+ if (add & BIT(i)) {
+@@ -2144,7 +2185,7 @@ static int rtw89_mcc_upd_map_iterator(struct rtw89_dev *rtwdev,
+ void *data)
+ {
+ struct rtw89_mcc_role upd = {
+- .rtwvif = mcc_role->rtwvif,
++ .rtwvif_link = mcc_role->rtwvif_link,
+ };
+ int ret;
+
+@@ -2370,6 +2411,24 @@ void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev)
+ rtw89_queue_chanctx_work(rtwdev);
+ }
+
++static void __rtw89_swap_chanctx(struct rtw89_vif *rtwvif,
++ enum rtw89_chanctx_idx idx1,
++ enum rtw89_chanctx_idx idx2)
++{
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) {
++ if (!rtwvif_link->chanctx_assigned)
++ continue;
++
++ if (rtwvif_link->chanctx_idx == idx1)
++ rtwvif_link->chanctx_idx = idx2;
++ else if (rtwvif_link->chanctx_idx == idx2)
++ rtwvif_link->chanctx_idx = idx1;
++ }
++}
++
+ static void rtw89_swap_chanctx(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_idx idx1,
+ enum rtw89_chanctx_idx idx2)
+@@ -2386,14 +2445,8 @@ static void rtw89_swap_chanctx(struct rtw89_dev *rtwdev,
+
+ swap(hal->chanctx[idx1], hal->chanctx[idx2]);
+
+- rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+- if (!rtwvif->chanctx_assigned)
+- continue;
+- if (rtwvif->chanctx_idx == idx1)
+- rtwvif->chanctx_idx = idx2;
+- else if (rtwvif->chanctx_idx == idx2)
+- rtwvif->chanctx_idx = idx1;
+- }
++ rtw89_for_each_rtwvif(rtwdev, rtwvif)
++ __rtw89_swap_chanctx(rtwvif, idx1, idx2);
+
+ cur = atomic_read(&hal->roc_chanctx_idx);
+ if (cur == idx1)
+@@ -2444,14 +2497,14 @@ void rtw89_chanctx_ops_change(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct ieee80211_chanctx_conf *ctx)
+ {
+ struct rtw89_chanctx_cfg *cfg = (struct rtw89_chanctx_cfg *)ctx->drv_priv;
+ struct rtw89_entity_weight w = {};
+
+- rtwvif->chanctx_idx = cfg->idx;
+- rtwvif->chanctx_assigned = true;
++ rtwvif_link->chanctx_idx = cfg->idx;
++ rtwvif_link->chanctx_assigned = true;
+ cfg->ref_count++;
+
+ if (cfg->idx == RTW89_CHANCTX_0)
+@@ -2469,7 +2522,7 @@ int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev,
+ }
+
+ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct ieee80211_chanctx_conf *ctx)
+ {
+ struct rtw89_chanctx_cfg *cfg = (struct rtw89_chanctx_cfg *)ctx->drv_priv;
+@@ -2479,8 +2532,8 @@ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev,
+ enum rtw89_entity_mode new;
+ int ret;
+
+- rtwvif->chanctx_idx = RTW89_CHANCTX_0;
+- rtwvif->chanctx_assigned = false;
++ rtwvif_link->chanctx_idx = RTW89_CHANCTX_0;
++ rtwvif_link->chanctx_assigned = false;
+ cfg->ref_count--;
+
+ if (cfg->ref_count != 0)
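
The chanctx hunks above move the channel-context bookkeeping from the vif down
to the vif link: assign/unassign now set chanctx_idx and chanctx_assigned on a
struct rtw89_vif_link, and rtw89_swap_chanctx() delegates the per-link index
fix-up to the new __rtw89_swap_chanctx() helper. A minimal stand-alone sketch
of that swap rule, using toy types rather than the driver's real structures:

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for the chanctx fields of rtw89_vif_link. */
struct toy_link {
    bool chanctx_assigned;
    int chanctx_idx;
};

/* Mirrors __rtw89_swap_chanctx(): after hal->chanctx[idx1] and
 * hal->chanctx[idx2] are swapped, every assigned link must have its
 * index exchanged too, so it keeps pointing at the same channel data.
 */
static void toy_swap_chanctx(struct toy_link *links, int n, int idx1, int idx2)
{
    for (int i = 0; i < n; i++) {
        if (!links[i].chanctx_assigned)
            continue;

        if (links[i].chanctx_idx == idx1)
            links[i].chanctx_idx = idx2;
        else if (links[i].chanctx_idx == idx2)
            links[i].chanctx_idx = idx1;
    }
}

int main(void)
{
    struct toy_link links[] = { { true, 0 }, { true, 1 }, { false, 1 } };

    toy_swap_chanctx(links, 3, 0, 1);
    for (int i = 0; i < 3; i++)
        printf("link %d -> idx %d\n", i, links[i].chanctx_idx);
    return 0;
}

Unassigned links are skipped on purpose; as the unassign hunk shows, they are
reset to RTW89_CHANCTX_0 when released, so there is nothing to fix up for them.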
+diff --git a/drivers/net/wireless/realtek/rtw89/chan.h b/drivers/net/wireless/realtek/rtw89/chan.h
+index c6d31984e57536..4ed777ea506485 100644
+--- a/drivers/net/wireless/realtek/rtw89/chan.h
++++ b/drivers/net/wireless/realtek/rtw89/chan.h
+@@ -106,10 +106,10 @@ void rtw89_chanctx_ops_change(struct rtw89_dev *rtwdev,
+ struct ieee80211_chanctx_conf *ctx,
+ u32 changed);
+ int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct ieee80211_chanctx_conf *ctx);
+ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct ieee80211_chanctx_conf *ctx);
+
+ #endif
+diff --git a/drivers/net/wireless/realtek/rtw89/coex.c b/drivers/net/wireless/realtek/rtw89/coex.c
+index 8d27374db83ca0..8d54d71fcf539e 100644
+--- a/drivers/net/wireless/realtek/rtw89/coex.c
++++ b/drivers/net/wireless/realtek/rtw89/coex.c
+@@ -2492,6 +2492,8 @@ static void btc_fw_set_monreg(struct rtw89_dev *rtwdev)
+ if (ver->fcxmreg == 7) {
+ sz = struct_size(v7, regs, n);
+ v7 = kmalloc(sz, GFP_KERNEL);
++ if (!v7)
++ return;
+ v7->type = RPT_EN_MREG;
+ v7->fver = ver->fcxmreg;
+ v7->len = n;
+@@ -2506,6 +2508,8 @@ static void btc_fw_set_monreg(struct rtw89_dev *rtwdev)
+ } else {
+ sz = struct_size(v1, regs, n);
+ v1 = kmalloc(sz, GFP_KERNEL);
++ if (!v1)
++ return;
+ v1->fver = ver->fcxmreg;
+ v1->reg_num = n;
+ memcpy(v1->regs, chip->mon_reg, flex_array_size(v1, regs, n));
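
Apart from the link refactor, the btc_fw_set_monreg() hunk above fixes a
latent bug: both kmalloc() calls for the flexible-array report structs gained
NULL checks before any field is written. A user-space sketch of the same
allocate-check-fill pattern, with plain malloc() standing in for kmalloc()
(the kernel's struct_size()/flex_array_size() helpers additionally guard
against arithmetic overflow):

#include <stdlib.h>
#include <string.h>

/* Toy model of the v1/v7 monitor-register reports. */
struct toy_report {
    unsigned char fver;
    unsigned char reg_num;
    unsigned int regs[];    /* flexible array member */
};

static struct toy_report *toy_alloc_report(const unsigned int *regs,
                                           unsigned char n)
{
    /* sizeof(*r) + n * sizeof(r->regs[0]) is what struct_size(r, regs, n)
     * computes in the kernel, with overflow checking on top.
     */
    struct toy_report *r = malloc(sizeof(*r) + n * sizeof(r->regs[0]));

    if (!r)     /* the check the hunk adds; without it, writing fver crashes */
        return NULL;

    r->fver = 1;
    r->reg_num = n;
    memcpy(r->regs, regs, n * sizeof(r->regs[0]));
    return r;
}

int main(void)
{
    const unsigned int regs[] = { 0xda00, 0xda04 };
    struct toy_report *r = toy_alloc_report(regs, 2);

    free(r);
    return 0;
}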
+@@ -4989,18 +4993,16 @@ struct rtw89_txtime_data {
+ bool reenable;
+ };
+
+-static void rtw89_tx_time_iter(void *data, struct ieee80211_sta *sta)
++static void __rtw89_tx_time_iter(struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ struct rtw89_txtime_data *iter_data)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_txtime_data *iter_data =
+- (struct rtw89_txtime_data *)data;
+ struct rtw89_dev *rtwdev = iter_data->rtwdev;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_cx *cx = &btc->cx;
+ struct rtw89_btc_wl_info *wl = &cx->wl;
+ struct rtw89_btc_wl_link_info *plink = NULL;
+- u8 port = rtwvif->port;
++ u8 port = rtwvif_link->port;
+ u32 tx_time = iter_data->tx_time;
+ u8 tx_retry = iter_data->tx_retry;
+ u16 enable = iter_data->enable;
+@@ -5023,8 +5025,8 @@ static void rtw89_tx_time_iter(void *data, struct ieee80211_sta *sta)
+
+ /* backup the original tx time before tx-limit on */
+ if (reenable) {
+- rtw89_mac_get_tx_time(rtwdev, rtwsta, &plink->tx_time);
+- rtw89_mac_get_tx_retry_limit(rtwdev, rtwsta, &plink->tx_retry);
++ rtw89_mac_get_tx_time(rtwdev, rtwsta_link, &plink->tx_time);
++ rtw89_mac_get_tx_retry_limit(rtwdev, rtwsta_link, &plink->tx_retry);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], %s(): reenable, tx_time=%d tx_retry= %d\n",
+ __func__, plink->tx_time, plink->tx_retry);
+@@ -5032,22 +5034,37 @@ static void rtw89_tx_time_iter(void *data, struct ieee80211_sta *sta)
+
+ /* restore the original tx time if no tx-limit */
+ if (!enable) {
+- rtw89_mac_set_tx_time(rtwdev, rtwsta, true, plink->tx_time);
+- rtw89_mac_set_tx_retry_limit(rtwdev, rtwsta, true,
++ rtw89_mac_set_tx_time(rtwdev, rtwsta_link, true, plink->tx_time);
++ rtw89_mac_set_tx_retry_limit(rtwdev, rtwsta_link, true,
+ plink->tx_retry);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], %s(): restore, tx_time=%d tx_retry= %d\n",
+ __func__, plink->tx_time, plink->tx_retry);
+
+ } else {
+- rtw89_mac_set_tx_time(rtwdev, rtwsta, false, tx_time);
+- rtw89_mac_set_tx_retry_limit(rtwdev, rtwsta, false, tx_retry);
++ rtw89_mac_set_tx_time(rtwdev, rtwsta_link, false, tx_time);
++ rtw89_mac_set_tx_retry_limit(rtwdev, rtwsta_link, false, tx_retry);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], %s(): set, tx_time=%d tx_retry= %d\n",
+ __func__, tx_time, tx_retry);
+ }
+ }
+
++static void rtw89_tx_time_iter(void *data, struct ieee80211_sta *sta)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_txtime_data *iter_data =
++ (struct rtw89_txtime_data *)data;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ __rtw89_tx_time_iter(rtwvif_link, rtwsta_link, iter_data);
++ }
++}
++
+ static void _set_wl_tx_limit(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_btc *btc = &rtwdev->btc;
+@@ -7481,13 +7498,16 @@ static void _update_bt_info(struct rtw89_dev *rtwdev, u8 *buf, u32 len)
+ _run_coex(rtwdev, BTC_RSN_UPDATE_BT_INFO);
+ }
+
+-void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, enum btc_role_state state)
++void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ enum btc_role_state state)
+ {
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- struct ieee80211_sta *sta = rtwsta_to_sta(rtwsta);
++ rtwvif_link->chanctx_idx);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct ieee80211_bss_conf *bss_conf;
++ struct ieee80211_link_sta *link_sta;
+ struct rtw89_btc *btc = &rtwdev->btc;
+ const struct rtw89_btc_ver *ver = btc->ver;
+ struct rtw89_btc_wl_info *wl = &btc->cx.wl;
+@@ -7495,51 +7515,59 @@ void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif
+ struct rtw89_btc_wl_link_info *wlinfo = NULL;
+ u8 mode = 0, rlink_id, link_mode_ori, pta_req_mac_ori, wa_type;
+
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], state=%d\n", state);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], role is STA=%d\n",
+ vif->type == NL80211_IFTYPE_STATION);
+- rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], port=%d\n", rtwvif->port);
++ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], port=%d\n", rtwvif_link->port);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], band=%d ch=%d bw=%d\n",
+ chan->band_type, chan->channel, chan->band_width);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], associated=%d\n",
+ state == BTC_ROLE_MSTS_STA_CONN_END);
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], bcn_period=%d dtim_period=%d\n",
+- vif->bss_conf.beacon_int, vif->bss_conf.dtim_period);
++ bss_conf->beacon_int, bss_conf->dtim_period);
++
++ if (rtwsta_link) {
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
+
+- if (rtwsta) {
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], STA mac_id=%d\n",
+- rtwsta->mac_id);
++ rtwsta_link->mac_id);
+
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+ "[BTC], STA support HE=%d VHT=%d HT=%d\n",
+- sta->deflink.he_cap.has_he,
+- sta->deflink.vht_cap.vht_supported,
+- sta->deflink.ht_cap.ht_supported);
+- if (sta->deflink.he_cap.has_he)
++ link_sta->he_cap.has_he,
++ link_sta->vht_cap.vht_supported,
++ link_sta->ht_cap.ht_supported);
++ if (link_sta->he_cap.has_he)
+ mode |= BIT(BTC_WL_MODE_HE);
+- if (sta->deflink.vht_cap.vht_supported)
++ if (link_sta->vht_cap.vht_supported)
+ mode |= BIT(BTC_WL_MODE_VHT);
+- if (sta->deflink.ht_cap.ht_supported)
++ if (link_sta->ht_cap.ht_supported)
+ mode |= BIT(BTC_WL_MODE_HT);
+
+ r.mode = mode;
+ }
+
+- if (rtwvif->wifi_role >= RTW89_WIFI_ROLE_MLME_MAX)
++ if (rtwvif_link->wifi_role >= RTW89_WIFI_ROLE_MLME_MAX) {
++ rcu_read_unlock();
+ return;
++ }
+
+ rtw89_debug(rtwdev, RTW89_DBG_BTC,
+- "[BTC], wifi_role=%d\n", rtwvif->wifi_role);
++ "[BTC], wifi_role=%d\n", rtwvif_link->wifi_role);
+
+- r.role = rtwvif->wifi_role;
+- r.phy = rtwvif->phy_idx;
+- r.pid = rtwvif->port;
++ r.role = rtwvif_link->wifi_role;
++ r.phy = rtwvif_link->phy_idx;
++ r.pid = rtwvif_link->port;
+ r.active = true;
+ r.connected = MLME_LINKED;
+- r.bcn_period = vif->bss_conf.beacon_int;
+- r.dtim_period = vif->bss_conf.dtim_period;
++ r.bcn_period = bss_conf->beacon_int;
++ r.dtim_period = bss_conf->dtim_period;
+ r.band = chan->band_type;
+ r.ch = chan->channel;
+ r.bw = chan->band_width;
+@@ -7547,10 +7575,12 @@ void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif
+ r.chdef.center_ch = chan->channel;
+ r.chdef.bw = chan->band_width;
+ r.chdef.chan = chan->primary_channel;
+- ether_addr_copy(r.mac_addr, rtwvif->mac_addr);
++ ether_addr_copy(r.mac_addr, rtwvif_link->mac_addr);
+
+- if (rtwsta && vif->type == NL80211_IFTYPE_STATION)
+- r.mac_id = rtwsta->mac_id;
++ rcu_read_unlock();
++
++ if (rtwsta_link && vif->type == NL80211_IFTYPE_STATION)
++ r.mac_id = rtwsta_link->mac_id;
+
+ btc->dm.cnt_notify[BTC_NCNT_ROLE_INFO]++;
+
+@@ -7781,26 +7811,26 @@ struct rtw89_btc_wl_sta_iter_data {
+ bool is_traffic_change;
+ };
+
+-static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta)
++static
++void __rtw89_btc_ntfy_wl_sta_iter(struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ struct rtw89_btc_wl_sta_iter_data *iter_data)
+ {
+- struct rtw89_btc_wl_sta_iter_data *iter_data =
+- (struct rtw89_btc_wl_sta_iter_data *)data;
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct rtw89_dev *rtwdev = iter_data->rtwdev;
+ struct rtw89_btc *btc = &rtwdev->btc;
+ struct rtw89_btc_dm *dm = &btc->dm;
+ const struct rtw89_btc_ver *ver = btc->ver;
+ struct rtw89_btc_wl_info *wl = &btc->cx.wl;
+ struct rtw89_btc_wl_link_info *link_info = NULL;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+ struct rtw89_traffic_stats *link_info_t = NULL;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_traffic_stats *stats = &rtwvif->stats;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_btc_wl_role_info *r;
+ struct rtw89_btc_wl_role_info_v1 *r1;
+ u32 last_tx_rate, last_rx_rate;
+ u16 last_tx_lvl, last_rx_lvl;
+- u8 port = rtwvif->port;
++ u8 port = rtwvif_link->port;
+ u8 rssi;
+ u8 busy = 0;
+ u8 dir = 0;
+@@ -7808,11 +7838,11 @@ static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta)
+ u8 i = 0;
+ bool is_sta_change = false, is_traffic_change = false;
+
+- rssi = ewma_rssi_read(&rtwsta->avg_rssi) >> RSSI_FACTOR;
++ rssi = ewma_rssi_read(&rtwsta_link->avg_rssi) >> RSSI_FACTOR;
+ rtw89_debug(rtwdev, RTW89_DBG_BTC, "[BTC], rssi=%d\n", rssi);
+
+ link_info = &wl->link_info[port];
+- link_info->stat.traffic = rtwvif->stats;
++ link_info->stat.traffic = *stats;
+ link_info_t = &link_info->stat.traffic;
+
+ if (link_info->connected == MLME_NO_LINK) {
+@@ -7860,19 +7890,19 @@ static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta)
+ iter_data->busy_all |= busy;
+ iter_data->dir_all |= BIT(dir);
+
+- if (rtwsta->rx_hw_rate <= RTW89_HW_RATE_CCK2 &&
++ if (rtwsta_link->rx_hw_rate <= RTW89_HW_RATE_CCK2 &&
+ last_rx_rate > RTW89_HW_RATE_CCK2 &&
+ link_info_t->rx_tfc_lv > RTW89_TFC_IDLE)
+ link_info->rx_rate_drop_cnt++;
+
+- if (last_tx_rate != rtwsta->ra_report.hw_rate ||
+- last_rx_rate != rtwsta->rx_hw_rate ||
++ if (last_tx_rate != rtwsta_link->ra_report.hw_rate ||
++ last_rx_rate != rtwsta_link->rx_hw_rate ||
+ last_tx_lvl != link_info_t->tx_tfc_lv ||
+ last_rx_lvl != link_info_t->rx_tfc_lv)
+ is_traffic_change = true;
+
+- link_info_t->tx_rate = rtwsta->ra_report.hw_rate;
+- link_info_t->rx_rate = rtwsta->rx_hw_rate;
++ link_info_t->tx_rate = rtwsta_link->ra_report.hw_rate;
++ link_info_t->rx_rate = rtwsta_link->rx_hw_rate;
+
+ if (link_info->role == RTW89_WIFI_ROLE_STATION ||
+ link_info->role == RTW89_WIFI_ROLE_P2P_CLIENT) {
+@@ -7884,19 +7914,19 @@ static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta)
+ r = &wl->role_info;
+ r->active_role[port].tx_lvl = stats->tx_tfc_lv;
+ r->active_role[port].rx_lvl = stats->rx_tfc_lv;
+- r->active_role[port].tx_rate = rtwsta->ra_report.hw_rate;
+- r->active_role[port].rx_rate = rtwsta->rx_hw_rate;
++ r->active_role[port].tx_rate = rtwsta_link->ra_report.hw_rate;
++ r->active_role[port].rx_rate = rtwsta_link->rx_hw_rate;
+ } else if (ver->fwlrole == 1) {
+ r1 = &wl->role_info_v1;
+ r1->active_role_v1[port].tx_lvl = stats->tx_tfc_lv;
+ r1->active_role_v1[port].rx_lvl = stats->rx_tfc_lv;
+- r1->active_role_v1[port].tx_rate = rtwsta->ra_report.hw_rate;
+- r1->active_role_v1[port].rx_rate = rtwsta->rx_hw_rate;
++ r1->active_role_v1[port].tx_rate = rtwsta_link->ra_report.hw_rate;
++ r1->active_role_v1[port].rx_rate = rtwsta_link->rx_hw_rate;
+ } else if (ver->fwlrole == 2) {
+ dm->trx_info.tx_lvl = stats->tx_tfc_lv;
+ dm->trx_info.rx_lvl = stats->rx_tfc_lv;
+- dm->trx_info.tx_rate = rtwsta->ra_report.hw_rate;
+- dm->trx_info.rx_rate = rtwsta->rx_hw_rate;
++ dm->trx_info.tx_rate = rtwsta_link->ra_report.hw_rate;
++ dm->trx_info.rx_rate = rtwsta_link->rx_hw_rate;
+ }
+
+ dm->trx_info.tx_tp = link_info_t->tx_throughput;
+@@ -7916,6 +7946,21 @@ static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta)
+ iter_data->is_traffic_change = true;
+ }
+
++static void rtw89_btc_ntfy_wl_sta_iter(void *data, struct ieee80211_sta *sta)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_btc_wl_sta_iter_data *iter_data =
++ (struct rtw89_btc_wl_sta_iter_data *)data;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ __rtw89_btc_ntfy_wl_sta_iter(rtwvif_link, rtwsta_link, iter_data);
++ }
++}
++
+ #define BTC_NHM_CHK_INTVL 20
+
+ void rtw89_btc_ntfy_wl_sta(struct rtw89_dev *rtwdev)
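
The station iterators in coex.c all follow one shape after this change: the
callback handed to ieee80211_iterate_stations_atomic() becomes a thin wrapper
that walks the station's links with rtw89_sta_for_each_link() and forwards
each rtw89_sta_link to a __-prefixed worker holding the per-link logic. A toy
model of that wrapper/worker split, with stand-in types and a fixed-size link
array in place of the driver's iteration macro:

#include <stdio.h>

#define TOY_MAX_LINKS 2

struct toy_sta_link { int mac_id; };

struct toy_sta {
    struct toy_sta_link *links[TOY_MAX_LINKS]; /* NULL if not instantiated */
};

/* Per-link worker: all the real work is keyed on one link instance. */
static void __toy_sta_iter(struct toy_sta_link *link, void *data)
{
    int *visited = data;

    printf("visit mac_id=%d\n", link->mac_id);
    (*visited)++;
}

/* Outer callback in the shape mac80211 invokes: fan out to the links. */
static void toy_sta_iter(struct toy_sta *sta, void *data)
{
    for (int i = 0; i < TOY_MAX_LINKS; i++) {
        if (!sta->links[i])
            continue;
        __toy_sta_iter(sta->links[i], data);
    }
}

int main(void)
{
    struct toy_sta_link l0 = { 5 }, l1 = { 9 };
    struct toy_sta sta = { { &l0, &l1 } };
    int visited = 0;

    toy_sta_iter(&sta, &visited);
    return visited == 2 ? 0 : 1;
}

With a single active link the wrapper degenerates to one worker call, so the
per-port bookkeeping inside the workers behaves exactly as before.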
+diff --git a/drivers/net/wireless/realtek/rtw89/coex.h b/drivers/net/wireless/realtek/rtw89/coex.h
+index de53b56632f7c6..dbdb56e063ef03 100644
+--- a/drivers/net/wireless/realtek/rtw89/coex.h
++++ b/drivers/net/wireless/realtek/rtw89/coex.h
+@@ -271,8 +271,10 @@ void rtw89_btc_ntfy_eapol_packet_work(struct work_struct *work);
+ void rtw89_btc_ntfy_arp_packet_work(struct work_struct *work);
+ void rtw89_btc_ntfy_dhcp_packet_work(struct work_struct *work);
+ void rtw89_btc_ntfy_icmp_packet_work(struct work_struct *work);
+-void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, enum btc_role_state state);
++void rtw89_btc_ntfy_role_info(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ enum btc_role_state state);
+ void rtw89_btc_ntfy_radio_state(struct rtw89_dev *rtwdev, enum btc_rfctrl rf_state);
+ void rtw89_btc_ntfy_wl_rfk(struct rtw89_dev *rtwdev, u8 phy_map,
+ enum btc_wl_rfk_type type,
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index 4553810634c66b..5b8e65f6de6a4e 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -436,15 +436,6 @@ int rtw89_set_channel(struct rtw89_dev *rtwdev)
+ return 0;
+ }
+
+-void rtw89_get_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_chan *chan)
+-{
+- const struct cfg80211_chan_def *chandef;
+-
+- chandef = rtw89_chandef_get(rtwdev, rtwvif->chanctx_idx);
+- rtw89_get_channel_params(chandef, chan);
+-}
+-
+ static enum rtw89_core_tx_type
+ rtw89_core_get_tx_type(struct rtw89_dev *rtwdev,
+ struct sk_buff *skb)
+@@ -463,8 +454,9 @@ rtw89_core_tx_update_ampdu_info(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req,
+ enum btc_pkt_type pkt_type)
+ {
+- struct ieee80211_sta *sta = tx_req->sta;
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
+ struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info;
++ struct ieee80211_link_sta *link_sta;
+ struct sk_buff *skb = tx_req->skb;
+ struct rtw89_sta *rtwsta;
+ u8 ampdu_num;
+@@ -478,21 +470,26 @@ rtw89_core_tx_update_ampdu_info(struct rtw89_dev *rtwdev,
+ if (!(IEEE80211_SKB_CB(skb)->flags & IEEE80211_TX_CTL_AMPDU))
+ return;
+
+- if (!sta) {
++ if (!rtwsta_link) {
+ rtw89_warn(rtwdev, "cannot set ampdu info without sta\n");
+ return;
+ }
+
+ tid = skb->priority & IEEE80211_QOS_CTL_TAG1D_MASK;
+- rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ rtwsta = rtwsta_link->rtwsta;
++
++ rcu_read_lock();
+
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
+ ampdu_num = (u8)((rtwsta->ampdu_params[tid].agg_num ?
+ rtwsta->ampdu_params[tid].agg_num :
+- 4 << sta->deflink.ht_cap.ampdu_factor) - 1);
++ 4 << link_sta->ht_cap.ampdu_factor) - 1);
+
+ desc_info->agg_en = true;
+- desc_info->ampdu_density = sta->deflink.ht_cap.ampdu_density;
++ desc_info->ampdu_density = link_sta->ht_cap.ampdu_density;
+ desc_info->ampdu_num = ampdu_num;
++
++ rcu_read_unlock();
+ }
+
+ static void
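
The tx hunks above bracket every read of per-link data (link_sta->ht_cap,
bss_conf->basic_rates, supp_rates, and so on) in rcu_read_lock() /
rcu_read_unlock(), taking one snapshot via rtw89_sta_rcu_dereference_link()
or rtw89_vif_rcu_dereference_link() and deriving all descriptor fields from
that snapshot. The sketch below models only the snapshot aspect with a C11
acquire load; it deliberately does not model RCU grace periods or deferred
reclamation:

#include <stdatomic.h>
#include <stdio.h>

/* Toy stand-in for the published per-link capabilities. */
struct toy_link_conf {
    int ampdu_factor;
    int ampdu_density;
};

static _Atomic(struct toy_link_conf *) active_conf;

static void toy_fill_desc(int *agg_num, int *density)
{
    /* Load the pointer once; every derived field comes from the same
     * snapshot, as in the bracketed RCU region above.
     */
    struct toy_link_conf *conf =
        atomic_load_explicit(&active_conf, memory_order_acquire);

    *agg_num = (4 << conf->ampdu_factor) - 1;
    *density = conf->ampdu_density;
}

int main(void)
{
    struct toy_link_conf conf = { .ampdu_factor = 2, .ampdu_density = 7 };
    int agg_num, density;

    atomic_store_explicit(&active_conf, &conf, memory_order_release);
    toy_fill_desc(&agg_num, &density);
    printf("agg_num=%d density=%d\n", agg_num, density);
    return 0;
}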
+@@ -569,9 +566,13 @@ static u16 rtw89_core_get_mgmt_rate(struct rtw89_dev *rtwdev,
+ const struct rtw89_chan *chan)
+ {
+ struct sk_buff *skb = tx_req->skb;
++ struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
+ struct ieee80211_tx_info *tx_info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_vif *vif = tx_info->control.vif;
++ struct ieee80211_bss_conf *bss_conf;
+ u16 lowest_rate;
++ u16 rate;
+
+ if (tx_info->flags & IEEE80211_TX_CTL_NO_CCK_RATE ||
+ (vif && vif->p2p))
+@@ -581,25 +582,35 @@ static u16 rtw89_core_get_mgmt_rate(struct rtw89_dev *rtwdev,
+ else
+ lowest_rate = RTW89_HW_RATE_OFDM6;
+
+- if (!vif || !vif->bss_conf.basic_rates || !tx_req->sta)
++ if (!rtwvif_link)
+ return lowest_rate;
+
+- return __ffs(vif->bss_conf.basic_rates) + lowest_rate;
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++ if (!bss_conf->basic_rates || !rtwsta_link) {
++ rate = lowest_rate;
++ goto out;
++ }
++
++ rate = __ffs(bss_conf->basic_rates) + lowest_rate;
++
++out:
++ rcu_read_unlock();
++
++ return rate;
+ }
+
+ static u8 rtw89_core_tx_get_mac_id(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+ {
+- struct ieee80211_vif *vif = tx_req->vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct ieee80211_sta *sta = tx_req->sta;
+- struct rtw89_sta *rtwsta;
++ struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
+
+- if (!sta)
+- return rtwvif->mac_id;
++ if (!rtwsta_link)
++ return rtwvif_link->mac_id;
+
+- rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- return rtwsta->mac_id;
++ return rtwsta_link->mac_id;
+ }
+
+ static void rtw89_core_tx_update_llc_hdr(struct rtw89_dev *rtwdev,
+@@ -618,11 +629,10 @@ rtw89_core_tx_update_mgmt_info(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+- struct ieee80211_vif *vif = tx_req->vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link;
+ struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info;
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
+ struct sk_buff *skb = tx_req->skb;
+ u8 qsel, ch_dma;
+
+@@ -631,7 +641,7 @@ rtw89_core_tx_update_mgmt_info(struct rtw89_dev *rtwdev,
+
+ desc_info->qsel = qsel;
+ desc_info->ch_dma = ch_dma;
+- desc_info->port = desc_info->hiq ? rtwvif->port : 0;
++ desc_info->port = desc_info->hiq ? rtwvif_link->port : 0;
+ desc_info->mac_id = rtw89_core_tx_get_mac_id(rtwdev, tx_req);
+ desc_info->hw_ssn_sel = RTW89_MGMT_HW_SSN_SEL;
+ desc_info->hw_seq_mode = RTW89_MGMT_HW_SEQ_MODE;
+@@ -701,26 +711,36 @@ __rtw89_core_tx_check_he_qos_htc(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req,
+ enum btc_pkt_type pkt_type)
+ {
+- struct ieee80211_sta *sta = tx_req->sta;
+- struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
+ struct sk_buff *skb = tx_req->skb;
+ struct ieee80211_hdr *hdr = (void *)skb->data;
++ struct ieee80211_link_sta *link_sta;
+ __le16 fc = hdr->frame_control;
+
+ /* AP IOT issue with EAPoL, ARP and DHCP */
+ if (pkt_type < PACKET_MAX)
+ return false;
+
+- if (!sta || !sta->deflink.he_cap.has_he)
++ if (!rtwsta_link)
+ return false;
+
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
++ if (!link_sta->he_cap.has_he) {
++ rcu_read_unlock();
++ return false;
++ }
++
++ rcu_read_unlock();
++
+ if (!ieee80211_is_data_qos(fc))
+ return false;
+
+ if (skb_headroom(skb) < IEEE80211_HT_CTL_LEN)
+ return false;
+
+- if (rtwsta && rtwsta->ra_report.might_fallback_legacy)
++ if (rtwsta_link && rtwsta_link->ra_report.might_fallback_legacy)
+ return false;
+
+ return true;
+@@ -730,8 +750,7 @@ static void
+ __rtw89_core_tx_adjust_he_qos_htc(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+ {
+- struct ieee80211_sta *sta = tx_req->sta;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
+ struct sk_buff *skb = tx_req->skb;
+ struct ieee80211_hdr *hdr = (void *)skb->data;
+ __le16 fc = hdr->frame_control;
+@@ -747,7 +766,7 @@ __rtw89_core_tx_adjust_he_qos_htc(struct rtw89_dev *rtwdev,
+ hdr = data;
+ htc = data + hdr_len;
+ hdr->frame_control |= cpu_to_le16(IEEE80211_FCTL_ORDER);
+- *htc = rtwsta->htc_template ? rtwsta->htc_template :
++ *htc = rtwsta_link->htc_template ? rtwsta_link->htc_template :
+ le32_encode_bits(RTW89_HTC_VARIANT_HE, RTW89_HTC_MASK_VARIANT) |
+ le32_encode_bits(RTW89_HTC_VARIANT_HE_CID_CAS, RTW89_HTC_MASK_CTL_ID);
+
+@@ -761,8 +780,7 @@ rtw89_core_tx_update_he_qos_htc(struct rtw89_dev *rtwdev,
+ enum btc_pkt_type pkt_type)
+ {
+ struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info;
+- struct ieee80211_vif *vif = tx_req->vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link;
+
+ if (!__rtw89_core_tx_check_he_qos_htc(rtwdev, tx_req, pkt_type))
+ goto desc_bk;
+@@ -773,23 +791,25 @@ rtw89_core_tx_update_he_qos_htc(struct rtw89_dev *rtwdev,
+ desc_info->a_ctrl_bsr = true;
+
+ desc_bk:
+- if (!rtwvif || rtwvif->last_a_ctrl == desc_info->a_ctrl_bsr)
++ if (!rtwvif_link || rtwvif_link->last_a_ctrl == desc_info->a_ctrl_bsr)
+ return;
+
+- rtwvif->last_a_ctrl = desc_info->a_ctrl_bsr;
++ rtwvif_link->last_a_ctrl = desc_info->a_ctrl_bsr;
+ desc_info->bk = true;
+ }
+
+ static u16 rtw89_core_get_data_rate(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+ {
+- struct ieee80211_vif *vif = tx_req->vif;
+- struct ieee80211_sta *sta = tx_req->sta;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif->rate_pattern;
+- enum rtw89_chanctx_idx idx = rtwvif->chanctx_idx;
++ struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif_link->rate_pattern;
++ enum rtw89_chanctx_idx idx = rtwvif_link->chanctx_idx;
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, idx);
++ struct ieee80211_link_sta *link_sta;
+ u16 lowest_rate;
++ u16 rate;
+
+ if (rate_pattern->enable)
+ return rate_pattern->rate;
+@@ -801,20 +821,31 @@ static u16 rtw89_core_get_data_rate(struct rtw89_dev *rtwdev,
+ else
+ lowest_rate = RTW89_HW_RATE_OFDM6;
+
+- if (!sta || !sta->deflink.supp_rates[chan->band_type])
++ if (!rtwsta_link)
+ return lowest_rate;
+
+- return __ffs(sta->deflink.supp_rates[chan->band_type]) + lowest_rate;
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
++ if (!link_sta->supp_rates[chan->band_type]) {
++ rate = lowest_rate;
++ goto out;
++ }
++
++ rate = __ffs(link_sta->supp_rates[chan->band_type]) + lowest_rate;
++
++out:
++ rcu_read_unlock();
++
++ return rate;
+ }
+
+ static void
+ rtw89_core_tx_update_data_info(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+ {
+- struct ieee80211_vif *vif = tx_req->vif;
+- struct ieee80211_sta *sta = tx_req->sta;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
++ struct rtw89_vif_link *rtwvif_link = tx_req->rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link = tx_req->rtwsta_link;
+ struct rtw89_tx_desc_info *desc_info = &tx_req->desc_info;
+ struct sk_buff *skb = tx_req->skb;
+ u8 tid, tid_indicate;
+@@ -829,10 +860,10 @@ rtw89_core_tx_update_data_info(struct rtw89_dev *rtwdev,
+ desc_info->tid_indicate = tid_indicate;
+ desc_info->qsel = qsel;
+ desc_info->mac_id = rtw89_core_tx_get_mac_id(rtwdev, tx_req);
+- desc_info->port = desc_info->hiq ? rtwvif->port : 0;
+- desc_info->er_cap = rtwsta ? rtwsta->er_cap : false;
+- desc_info->stbc = rtwsta ? rtwsta->ra.stbc_cap : false;
+- desc_info->ldpc = rtwsta ? rtwsta->ra.ldpc_cap : false;
++ desc_info->port = desc_info->hiq ? rtwvif_link->port : 0;
++ desc_info->er_cap = rtwsta_link ? rtwsta_link->er_cap : false;
++ desc_info->stbc = rtwsta_link ? rtwsta_link->ra.stbc_cap : false;
++ desc_info->ldpc = rtwsta_link ? rtwsta_link->ra.ldpc_cap : false;
+
+ /* enable wd_info for AMPDU */
+ desc_info->en_wd_info = true;
+@@ -1027,13 +1058,34 @@ int rtw89_h2c_tx(struct rtw89_dev *rtwdev,
+ int rtw89_core_tx_write(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta, struct sk_buff *skb, int *qsel)
+ {
++ struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
+ struct rtw89_core_tx_request tx_req = {0};
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_sta_link *rtwsta_link = NULL;
++ struct rtw89_vif_link *rtwvif_link;
+ int ret;
+
++ /* By default, driver writes tx via the link on HW-0. And then,
++ * according to links' status, HW can change tx to another link.
++ */
++
++ if (rtwsta) {
++ rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0);
++ if (unlikely(!rtwsta_link)) {
++ rtw89_err(rtwdev, "tx: find no sta link on HW-0\n");
++ return -ENOLINK;
++ }
++ }
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "tx: find no vif link on HW-0\n");
++ return -ENOLINK;
++ }
++
+ tx_req.skb = skb;
+- tx_req.sta = sta;
+- tx_req.vif = vif;
++ tx_req.rtwvif_link = rtwvif_link;
++ tx_req.rtwsta_link = rtwsta_link;
+
+ rtw89_traffic_stats_accu(rtwdev, &rtwdev->stats, skb, true);
+ rtw89_traffic_stats_accu(rtwdev, &rtwvif->stats, skb, true);
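
rtw89_core_tx_write() above encodes the driver's stated single-link default:
resolve the link instance on HW-0 up front and fail the transmit with
-ENOLINK when either the sta link or the vif link is missing, instead of
reaching through the old deflink fields. A minimal model of that resolution
rule, with toy types:

#include <errno.h>
#include <stddef.h>

struct toy_link { int mac_id; };

struct toy_vif {
    struct toy_link *links[2];  /* links[0] is the HW-0 instance */
};

/* Mirrors the tx-path rule: transmit via the HW-0 link, or bail out. */
static int toy_tx_write(struct toy_vif *vif, int *mac_id)
{
    struct toy_link *link = vif->links[0];

    if (!link)
        return -ENOLINK;    /* caller drops the frame, as in the hunk */

    *mac_id = link->mac_id;
    return 0;
}

int main(void)
{
    struct toy_link hw0 = { 3 };
    struct toy_vif vif = { { &hw0, NULL } };
    int mac_id;

    return toy_tx_write(&vif, &mac_id);  /* 0 on success */
}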
+@@ -1514,16 +1566,24 @@ static u8 rtw89_get_data_rate_nss(struct rtw89_dev *rtwdev, u16 data_rate)
+ static void rtw89_core_rx_process_phy_ppdu_iter(void *data,
+ struct ieee80211_sta *sta)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+ struct rtw89_rx_phy_ppdu *phy_ppdu = (struct rtw89_rx_phy_ppdu *)data;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
+ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
+ struct rtw89_hal *hal = &rtwdev->hal;
++ struct rtw89_sta_link *rtwsta_link;
+ u8 ant_num = hal->ant_diversity ? 2 : rtwdev->chip->rf_path_num;
+ u8 ant_pos = U8_MAX;
+ u8 evm_pos = 0;
+ int i;
+
+- if (rtwsta->mac_id != phy_ppdu->mac_id || !phy_ppdu->to_self)
++ /* FIXME: For single link, taking link on HW-0 here is okay. But, when
++ * enabling multiple active links, we should determine the right link.
++ */
++ rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0);
++ if (unlikely(!rtwsta_link))
++ return;
++
++ if (rtwsta_link->mac_id != phy_ppdu->mac_id || !phy_ppdu->to_self)
+ return;
+
+ if (hal->ant_diversity && hal->antenna_rx) {
+@@ -1531,22 +1591,24 @@ static void rtw89_core_rx_process_phy_ppdu_iter(void *data,
+ evm_pos = ant_pos;
+ }
+
+- ewma_rssi_add(&rtwsta->avg_rssi, phy_ppdu->rssi_avg);
++ ewma_rssi_add(&rtwsta_link->avg_rssi, phy_ppdu->rssi_avg);
+
+ if (ant_pos < ant_num) {
+- ewma_rssi_add(&rtwsta->rssi[ant_pos], phy_ppdu->rssi[0]);
++ ewma_rssi_add(&rtwsta_link->rssi[ant_pos], phy_ppdu->rssi[0]);
+ } else {
+ for (i = 0; i < rtwdev->chip->rf_path_num; i++)
+- ewma_rssi_add(&rtwsta->rssi[i], phy_ppdu->rssi[i]);
++ ewma_rssi_add(&rtwsta_link->rssi[i], phy_ppdu->rssi[i]);
+ }
+
+ if (phy_ppdu->ofdm.has && (phy_ppdu->has_data || phy_ppdu->has_bcn)) {
+- ewma_snr_add(&rtwsta->avg_snr, phy_ppdu->ofdm.avg_snr);
++ ewma_snr_add(&rtwsta_link->avg_snr, phy_ppdu->ofdm.avg_snr);
+ if (rtw89_get_data_rate_nss(rtwdev, phy_ppdu->rate) == 1) {
+- ewma_evm_add(&rtwsta->evm_1ss, phy_ppdu->ofdm.evm_min);
++ ewma_evm_add(&rtwsta_link->evm_1ss, phy_ppdu->ofdm.evm_min);
+ } else {
+- ewma_evm_add(&rtwsta->evm_min[evm_pos], phy_ppdu->ofdm.evm_min);
+- ewma_evm_add(&rtwsta->evm_max[evm_pos], phy_ppdu->ofdm.evm_max);
++ ewma_evm_add(&rtwsta_link->evm_min[evm_pos],
++ phy_ppdu->ofdm.evm_min);
++ ewma_evm_add(&rtwsta_link->evm_max[evm_pos],
++ phy_ppdu->ofdm.evm_max);
+ }
+ }
+ }
+@@ -1876,17 +1938,19 @@ struct rtw89_vif_rx_stats_iter_data {
+ };
+
+ static void rtw89_stats_trigger_frame(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf,
+ struct sk_buff *skb)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ struct ieee80211_trigger *tf = (struct ieee80211_trigger *)skb->data;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ u8 *pos, *end, type, tf_bw;
+ u16 aid, tf_rua;
+
+- if (!ether_addr_equal(vif->bss_conf.bssid, tf->ta) ||
+- rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION ||
+- rtwvif->net_type == RTW89_NET_TYPE_NO_LINK)
++ if (!ether_addr_equal(bss_conf->bssid, tf->ta) ||
++ rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION ||
++ rtwvif_link->net_type == RTW89_NET_TYPE_NO_LINK)
+ return;
+
+ type = le64_get_bits(tf->common_info, IEEE80211_TRIGGER_TYPE_MASK);
+@@ -1915,7 +1979,7 @@ static void rtw89_stats_trigger_frame(struct rtw89_dev *rtwdev,
+ rtwdev->stats.rx_tf_acc++;
+ if (tf_bw == IEEE80211_TRIGGER_ULBW_160_80P80MHZ &&
+ rua <= NL80211_RATE_INFO_HE_RU_ALLOC_106)
+- rtwvif->pwr_diff_en = true;
++ rtwvif_link->pwr_diff_en = true;
+ break;
+ }
+
+@@ -1986,7 +2050,7 @@ static void rtw89_core_cancel_6ghz_probe_tx(struct rtw89_dev *rtwdev,
+ ieee80211_queue_work(rtwdev->hw, &rtwdev->cancel_6ghz_probe_work);
+ }
+
+-static void rtw89_vif_sync_bcn_tsf(struct rtw89_vif *rtwvif,
++static void rtw89_vif_sync_bcn_tsf(struct rtw89_vif_link *rtwvif_link,
+ struct ieee80211_hdr *hdr, size_t len)
+ {
+ struct ieee80211_mgmt *mgmt = (typeof(mgmt))hdr;
+@@ -1994,20 +2058,22 @@ static void rtw89_vif_sync_bcn_tsf(struct rtw89_vif *rtwvif,
+ if (len < offsetof(typeof(*mgmt), u.beacon.variable))
+ return;
+
+- WRITE_ONCE(rtwvif->sync_bcn_tsf, le64_to_cpu(mgmt->u.beacon.timestamp));
++ WRITE_ONCE(rtwvif_link->sync_bcn_tsf, le64_to_cpu(mgmt->u.beacon.timestamp));
+ }
+
+ static void rtw89_vif_rx_stats_iter(void *data, u8 *mac,
+ struct ieee80211_vif *vif)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ struct rtw89_vif_rx_stats_iter_data *iter_data = data;
+ struct rtw89_dev *rtwdev = iter_data->rtwdev;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
+ struct rtw89_pkt_stat *pkt_stat = &rtwdev->phystat.cur_pkt_stat;
+ struct rtw89_rx_desc_info *desc_info = iter_data->desc_info;
+ struct sk_buff *skb = iter_data->skb;
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct rtw89_rx_phy_ppdu *phy_ppdu = iter_data->phy_ppdu;
++ struct ieee80211_bss_conf *bss_conf;
++ struct rtw89_vif_link *rtwvif_link;
+ const u8 *bssid = iter_data->bssid;
+
+ if (rtwdev->scanning &&
+@@ -2015,33 +2081,46 @@ static void rtw89_vif_rx_stats_iter(void *data, u8 *mac,
+ ieee80211_is_probe_resp(hdr->frame_control)))
+ rtw89_core_cancel_6ghz_probe_tx(rtwdev, skb);
+
+- if (!vif->bss_conf.bssid)
+- return;
++ rcu_read_lock();
++
++ /* FIXME: For single link, taking link on HW-0 here is okay. But, when
++ * enabling multiple active links, we should determine the right link.
++ */
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link))
++ goto out;
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++ if (!bss_conf->bssid)
++ goto out;
+
+ if (ieee80211_is_trigger(hdr->frame_control)) {
+- rtw89_stats_trigger_frame(rtwdev, vif, skb);
+- return;
++ rtw89_stats_trigger_frame(rtwdev, rtwvif_link, bss_conf, skb);
++ goto out;
+ }
+
+- if (!ether_addr_equal(vif->bss_conf.bssid, bssid))
+- return;
++ if (!ether_addr_equal(bss_conf->bssid, bssid))
++ goto out;
+
+ if (ieee80211_is_beacon(hdr->frame_control)) {
+ if (vif->type == NL80211_IFTYPE_STATION &&
+ !test_bit(RTW89_FLAG_WOWLAN, rtwdev->flags)) {
+- rtw89_vif_sync_bcn_tsf(rtwvif, hdr, skb->len);
++ rtw89_vif_sync_bcn_tsf(rtwvif_link, hdr, skb->len);
+ rtw89_fw_h2c_rssi_offload(rtwdev, phy_ppdu);
+ }
+ pkt_stat->beacon_nr++;
+ }
+
+- if (!ether_addr_equal(vif->addr, hdr->addr1))
+- return;
++ if (!ether_addr_equal(bss_conf->addr, hdr->addr1))
++ goto out;
+
+ if (desc_info->data_rate < RTW89_HW_RATE_NR)
+ pkt_stat->rx_rate_cnt[desc_info->data_rate]++;
+
+ rtw89_traffic_stats_accu(rtwdev, &rtwvif->stats, skb, false);
++
++out:
++ rcu_read_unlock();
+ }
+
+ static void rtw89_core_rx_stats(struct rtw89_dev *rtwdev,
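
Because rtw89_vif_rx_stats_iter() now holds the RCU read lock across its
whole body, every early return in the hunk above had to become a goto out so
the unlock always runs. A small sketch of that single-exit discipline, with a
depth counter standing in for the lock:

#include <stdbool.h>
#include <stdio.h>

static int lock_depth;

static void toy_lock(void)   { lock_depth++; }
static void toy_unlock(void) { lock_depth--; }

/* Models the rewritten iterator: once the lock is taken, every bail-out
 * path funnels through one label that releases it.
 */
static void toy_rx_iter(bool has_link, bool bssid_match)
{
    toy_lock();

    if (!has_link)
        goto out;       /* was a bare return, which would leak the lock */
    if (!bssid_match)
        goto out;

    printf("account frame\n");
out:
    toy_unlock();
}

int main(void)
{
    toy_rx_iter(false, false);
    toy_rx_iter(true, true);
    return lock_depth;  /* must be 0: every path unlocked */
}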
+@@ -2432,15 +2511,23 @@ void rtw89_core_stats_sta_rx_status_iter(void *data, struct ieee80211_sta *sta)
+ struct rtw89_core_iter_rx_status *iter_data =
+ (struct rtw89_core_iter_rx_status *)data;
+ struct ieee80211_rx_status *rx_status = iter_data->rx_status;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+ struct rtw89_rx_desc_info *desc_info = iter_data->desc_info;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_sta_link *rtwsta_link;
+ u8 mac_id = iter_data->mac_id;
+
+- if (mac_id != rtwsta->mac_id)
++ /* FIXME: For single link, taking link on HW-0 here is okay. But, when
++ * enabling multiple active links, we should determine the right link.
++ */
++ rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0);
++ if (unlikely(!rtwsta_link))
+ return;
+
+- rtwsta->rx_status = *rx_status;
+- rtwsta->rx_hw_rate = desc_info->data_rate;
++ if (mac_id != rtwsta_link->mac_id)
++ return;
++
++ rtwsta_link->rx_status = *rx_status;
++ rtwsta_link->rx_hw_rate = desc_info->data_rate;
+ }
+
+ static void rtw89_core_stats_sta_rx_status(struct rtw89_dev *rtwdev,
+@@ -2546,6 +2633,10 @@ static enum rtw89_ps_mode rtw89_update_ps_mode(struct rtw89_dev *rtwdev)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
++ /* FIXME: Fix __rtw89_enter_ps_mode() to consider MLO cases. */
++ if (rtwdev->support_mlo)
++ return RTW89_PS_MODE_NONE;
++
+ if (rtw89_disable_ps_mode || !chip->ps_mode_supported ||
+ RTW89_CHK_FW_FEATURE(NO_DEEP_PS, &rtwdev->fw))
+ return RTW89_PS_MODE_NONE;
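
The power-save hunk above is deliberately conservative: while multi-link
operation is being brought up, rtw89_update_ps_mode() reports no deep power
save whenever support_mlo is set, before any of the usual capability flags
are consulted. A guard-clause sketch of the same gating:

#include <assert.h>

enum toy_ps_mode { TOY_PS_NONE, TOY_PS_DEEP };

/* Unhandled configurations short-circuit to "no deep PS" before the
 * normal capability checks run.
 */
static enum toy_ps_mode toy_update_ps_mode(int support_mlo, int ps_supported)
{
    if (support_mlo)
        return TOY_PS_NONE;
    if (!ps_supported)
        return TOY_PS_NONE;
    return TOY_PS_DEEP;
}

int main(void)
{
    assert(toy_update_ps_mode(1, 1) == TOY_PS_NONE);
    assert(toy_update_ps_mode(0, 1) == TOY_PS_DEEP);
    return 0;
}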
+@@ -2658,7 +2749,7 @@ static void rtw89_core_ba_work(struct work_struct *work)
+ list_for_each_entry_safe(rtwtxq, tmp, &rtwdev->ba_list, list) {
+ struct ieee80211_txq *txq = rtw89_txq_to_txq(rtwtxq);
+ struct ieee80211_sta *sta = txq->sta;
+- struct rtw89_sta *rtwsta = sta ? (struct rtw89_sta *)sta->drv_priv : NULL;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
+ u8 tid = txq->tid;
+
+ if (!sta) {
+@@ -2686,8 +2777,8 @@ static void rtw89_core_ba_work(struct work_struct *work)
+ spin_unlock_bh(&rtwdev->ba_lock);
+ }
+
+-static void rtw89_core_free_sta_pending_ba(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta)
++void rtw89_core_free_sta_pending_ba(struct rtw89_dev *rtwdev,
++ struct ieee80211_sta *sta)
+ {
+ struct rtw89_txq *rtwtxq, *tmp;
+
+@@ -2701,8 +2792,8 @@ static void rtw89_core_free_sta_pending_ba(struct rtw89_dev *rtwdev,
+ spin_unlock_bh(&rtwdev->ba_lock);
+ }
+
+-static void rtw89_core_free_sta_pending_forbid_ba(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta)
++void rtw89_core_free_sta_pending_forbid_ba(struct rtw89_dev *rtwdev,
++ struct ieee80211_sta *sta)
+ {
+ struct rtw89_txq *rtwtxq, *tmp;
+
+@@ -2718,10 +2809,10 @@ static void rtw89_core_free_sta_pending_forbid_ba(struct rtw89_dev *rtwdev,
+ spin_unlock_bh(&rtwdev->ba_lock);
+ }
+
+-static void rtw89_core_free_sta_pending_roc_tx(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta)
++void rtw89_core_free_sta_pending_roc_tx(struct rtw89_dev *rtwdev,
++ struct ieee80211_sta *sta)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
+ struct sk_buff *skb, *tmp;
+
+ skb_queue_walk_safe(&rtwsta->roc_queue, skb, tmp) {
+@@ -2762,7 +2853,7 @@ static void rtw89_core_txq_check_agg(struct rtw89_dev *rtwdev,
+ struct ieee80211_hw *hw = rtwdev->hw;
+ struct ieee80211_txq *txq = rtw89_txq_to_txq(rtwtxq);
+ struct ieee80211_sta *sta = txq->sta;
+- struct rtw89_sta *rtwsta = sta ? (struct rtw89_sta *)sta->drv_priv : NULL;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
+
+ if (test_bit(RTW89_TXQ_F_FORBID_BA, &rtwtxq->flags))
+ return;
+@@ -2838,10 +2929,19 @@ static bool rtw89_core_txq_agg_wait(struct rtw89_dev *rtwdev,
+ bool *sched_txq, bool *reinvoke)
+ {
+ struct rtw89_txq *rtwtxq = (struct rtw89_txq *)txq->drv_priv;
+- struct ieee80211_sta *sta = txq->sta;
+- struct rtw89_sta *rtwsta = sta ? (struct rtw89_sta *)sta->drv_priv : NULL;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(txq->sta);
++ struct rtw89_sta_link *rtwsta_link;
+
+- if (!sta || rtwsta->max_agg_wait <= 0)
++ if (!rtwsta)
++ return false;
++
++ rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0);
++ if (unlikely(!rtwsta_link)) {
++ rtw89_err(rtwdev, "agg wait: find no link on HW-0\n");
++ return false;
++ }
++
++ if (rtwsta_link->max_agg_wait <= 0)
+ return false;
+
+ if (rtwdev->stats.tx_tfc_lv <= RTW89_TFC_MID)
+@@ -2855,7 +2955,7 @@ static bool rtw89_core_txq_agg_wait(struct rtw89_dev *rtwdev,
+ return false;
+ }
+
+- if (*frame_cnt == 1 && rtwtxq->wait_cnt < rtwsta->max_agg_wait) {
++ if (*frame_cnt == 1 && rtwtxq->wait_cnt < rtwsta_link->max_agg_wait) {
+ *reinvoke = true;
+ rtwtxq->wait_cnt++;
+ return true;
+@@ -2879,7 +2979,7 @@ static void rtw89_core_txq_schedule(struct rtw89_dev *rtwdev, u8 ac, bool *reinv
+ ieee80211_txq_schedule_start(hw, ac);
+ while ((txq = ieee80211_next_txq(hw, ac))) {
+ rtwtxq = (struct rtw89_txq *)txq->drv_priv;
+- rtwvif = (struct rtw89_vif *)txq->vif->drv_priv;
++ rtwvif = vif_to_rtwvif(txq->vif);
+
+ if (rtwvif->offchan) {
+ ieee80211_return_txq(hw, txq, true);
+@@ -2955,16 +3055,23 @@ static void rtw89_forbid_ba_work(struct work_struct *w)
+ static void rtw89_core_sta_pending_tx_iter(void *data,
+ struct ieee80211_sta *sta)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_vif *rtwvif_target = data, *rtwvif = rtwsta->rtwvif;
+- struct rtw89_dev *rtwdev = rtwvif->rtwdev;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct rtw89_vif_link *target = data;
++ struct rtw89_vif_link *rtwvif_link;
+ struct sk_buff *skb, *tmp;
++ unsigned int link_id;
+ int qsel, ret;
+
+- if (rtwvif->chanctx_idx != rtwvif_target->chanctx_idx)
+- return;
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ if (rtwvif_link->chanctx_idx == target->chanctx_idx)
++ goto bottom;
++
++ return;
+
++bottom:
+ if (skb_queue_len(&rtwsta->roc_queue) == 0)
+ return;
+
+@@ -2982,17 +3089,17 @@ static void rtw89_core_sta_pending_tx_iter(void *data,
+ }
+
+ static void rtw89_core_handle_sta_pending_tx(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ ieee80211_iterate_stations_atomic(rtwdev->hw,
+ rtw89_core_sta_pending_tx_iter,
+- rtwvif);
++ rtwvif_link);
+ }
+
+ static int rtw89_core_send_nullfunc(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool qos, bool ps)
++ struct rtw89_vif_link *rtwvif_link, bool qos, bool ps)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ struct ieee80211_sta *sta;
+ struct ieee80211_hdr *hdr;
+ struct sk_buff *skb;
+@@ -3002,7 +3109,7 @@ static int rtw89_core_send_nullfunc(struct rtw89_dev *rtwdev,
+ return 0;
+
+ rcu_read_lock();
+- sta = ieee80211_find_sta(vif, vif->bss_conf.bssid);
++ sta = ieee80211_find_sta(vif, vif->cfg.ap_addr);
+ if (!sta) {
+ ret = -EINVAL;
+ goto out;
+@@ -3040,27 +3147,43 @@ void rtw89_roc_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct ieee80211_hw *hw = rtwdev->hw;
+ struct rtw89_roc *roc = &rtwvif->roc;
++ struct rtw89_vif_link *rtwvif_link;
+ struct cfg80211_chan_def roc_chan;
+- struct rtw89_vif *tmp;
++ struct rtw89_vif *tmp_vif;
+ int ret;
+
+ lockdep_assert_held(&rtwdev->mutex);
+
+ rtw89_leave_ips_by_hwflags(rtwdev);
+ rtw89_leave_lps(rtwdev);
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "roc start: find no link on HW-0\n");
++ return;
++ }
++
+ rtw89_chanctx_pause(rtwdev, RTW89_CHANCTX_PAUSE_REASON_ROC);
+
+- ret = rtw89_core_send_nullfunc(rtwdev, rtwvif, true, true);
++ ret = rtw89_core_send_nullfunc(rtwdev, rtwvif_link, true, true);
+ if (ret)
+ rtw89_debug(rtwdev, RTW89_DBG_TXRX,
+ "roc send null-1 failed: %d\n", ret);
+
+- rtw89_for_each_rtwvif(rtwdev, tmp)
+- if (tmp->chanctx_idx == rtwvif->chanctx_idx)
+- tmp->offchan = true;
++ rtw89_for_each_rtwvif(rtwdev, tmp_vif) {
++ struct rtw89_vif_link *tmp_link;
++ unsigned int link_id;
++
++ rtw89_vif_for_each_link(tmp_vif, tmp_link, link_id) {
++ if (tmp_link->chanctx_idx == rtwvif_link->chanctx_idx) {
++ tmp_vif->offchan = true;
++ break;
++ }
++ }
++ }
+
+ cfg80211_chandef_create(&roc_chan, &roc->chan, NL80211_CHAN_NO_HT);
+- rtw89_config_roc_chandef(rtwdev, rtwvif->chanctx_idx, &roc_chan);
++ rtw89_config_roc_chandef(rtwdev, rtwvif_link->chanctx_idx, &roc_chan);
+ rtw89_set_channel(rtwdev);
+ rtw89_write32_clr(rtwdev,
+ rtw89_mac_reg_by_idx(rtwdev, mac->rx_fltr, RTW89_MAC_0),
+@@ -3077,7 +3200,8 @@ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct ieee80211_hw *hw = rtwdev->hw;
+ struct rtw89_roc *roc = &rtwvif->roc;
+- struct rtw89_vif *tmp;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_vif *tmp_vif;
+ int ret;
+
+ lockdep_assert_held(&rtwdev->mutex);
+@@ -3087,24 +3211,29 @@ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ rtw89_leave_ips_by_hwflags(rtwdev);
+ rtw89_leave_lps(rtwdev);
+
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "roc end: find no link on HW-0\n");
++ return;
++ }
++
+ rtw89_write32_mask(rtwdev,
+ rtw89_mac_reg_by_idx(rtwdev, mac->rx_fltr, RTW89_MAC_0),
+ B_AX_RX_FLTR_CFG_MASK,
+ rtwdev->hal.rx_fltr);
+
+ roc->state = RTW89_ROC_IDLE;
+- rtw89_config_roc_chandef(rtwdev, rtwvif->chanctx_idx, NULL);
++ rtw89_config_roc_chandef(rtwdev, rtwvif_link->chanctx_idx, NULL);
+ rtw89_chanctx_proceed(rtwdev);
+- ret = rtw89_core_send_nullfunc(rtwdev, rtwvif, true, false);
++ ret = rtw89_core_send_nullfunc(rtwdev, rtwvif_link, true, false);
+ if (ret)
+ rtw89_debug(rtwdev, RTW89_DBG_TXRX,
+ "roc send null-0 failed: %d\n", ret);
+
+- rtw89_for_each_rtwvif(rtwdev, tmp)
+- if (tmp->chanctx_idx == rtwvif->chanctx_idx)
+- tmp->offchan = false;
++ rtw89_for_each_rtwvif(rtwdev, tmp_vif)
++ tmp_vif->offchan = false;
+
+- rtw89_core_handle_sta_pending_tx(rtwdev, rtwvif);
++ rtw89_core_handle_sta_pending_tx(rtwdev, rtwvif_link);
+ queue_work(rtwdev->txq_wq, &rtwdev->txq_work);
+
+ if (hw->conf.flags & IEEE80211_CONF_IDLE)
+@@ -3188,39 +3317,52 @@ static bool rtw89_traffic_stats_calc(struct rtw89_dev *rtwdev,
+
+ static bool rtw89_traffic_stats_track(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+ bool tfc_changed;
+
+ tfc_changed = rtw89_traffic_stats_calc(rtwdev, &rtwdev->stats);
++
+ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+ rtw89_traffic_stats_calc(rtwdev, &rtwvif->stats);
+- rtw89_fw_h2c_tp_offload(rtwdev, rtwvif);
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_fw_h2c_tp_offload(rtwdev, rtwvif_link);
+ }
+
+ return tfc_changed;
+ }
+
+-static void rtw89_vif_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw89_vif_enter_lps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- if ((rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION &&
+- rtwvif->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT) ||
+- rtwvif->tdls_peer)
++ if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION &&
++ rtwvif_link->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT)
+ return;
+
+- if (rtwvif->offchan)
+- return;
+-
+- if (rtwvif->stats.tx_tfc_lv == RTW89_TFC_IDLE &&
+- rtwvif->stats.rx_tfc_lv == RTW89_TFC_IDLE)
+- rtw89_enter_lps(rtwdev, rtwvif, true);
++ rtw89_enter_lps(rtwdev, rtwvif_link, true);
+ }
+
+ static void rtw89_enter_lps_track(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+- rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_vif_enter_lps(rtwdev, rtwvif);
++ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
++ if (rtwvif->tdls_peer)
++ continue;
++ if (rtwvif->offchan)
++ continue;
++
++ if (rtwvif->stats.tx_tfc_lv != RTW89_TFC_IDLE ||
++ rtwvif->stats.rx_tfc_lv != RTW89_TFC_IDLE)
++ continue;
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_vif_enter_lps(rtwdev, rtwvif_link);
++ }
+ }
+
+ static void rtw89_core_rfk_track(struct rtw89_dev *rtwdev)
+@@ -3234,14 +3376,16 @@ static void rtw89_core_rfk_track(struct rtw89_dev *rtwdev)
+ rtw89_chip_rfk_track(rtwdev);
+ }
+
+-void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
++void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf)
+ {
+ enum rtw89_entity_mode mode = rtw89_get_entity_mode(rtwdev);
+
+ if (mode == RTW89_ENTITY_MODE_MCC)
+ rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_P2P_PS_CHANGE);
+ else
+- rtw89_process_p2p_ps(rtwdev, vif);
++ rtw89_process_p2p_ps(rtwdev, rtwvif_link, bss_conf);
+ }
+
+ void rtw89_traffic_stats_init(struct rtw89_dev *rtwdev,
+@@ -3326,7 +3470,8 @@ void rtw89_core_release_all_bits_map(unsigned long *addr, unsigned int nbits)
+ }
+
+ int rtw89_core_acquire_sta_ba_entry(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta, u8 tid, u8 *cam_idx)
++ struct rtw89_sta_link *rtwsta_link, u8 tid,
++ u8 *cam_idx)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_cam_info *cam_info = &rtwdev->cam_info;
+@@ -3363,7 +3508,7 @@ int rtw89_core_acquire_sta_ba_entry(struct rtw89_dev *rtwdev,
+ }
+
+ entry->tid = tid;
+- list_add_tail(&entry->list, &rtwsta->ba_cam_list);
++ list_add_tail(&entry->list, &rtwsta_link->ba_cam_list);
+
+ *cam_idx = idx;
+
+@@ -3371,7 +3516,8 @@ int rtw89_core_acquire_sta_ba_entry(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_core_release_sta_ba_entry(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta, u8 tid, u8 *cam_idx)
++ struct rtw89_sta_link *rtwsta_link, u8 tid,
++ u8 *cam_idx)
+ {
+ struct rtw89_cam_info *cam_info = &rtwdev->cam_info;
+ struct rtw89_ba_cam_entry *entry = NULL, *tmp;
+@@ -3379,7 +3525,7 @@ int rtw89_core_release_sta_ba_entry(struct rtw89_dev *rtwdev,
+
+ lockdep_assert_held(&rtwdev->mutex);
+
+- list_for_each_entry_safe(entry, tmp, &rtwsta->ba_cam_list, list) {
++ list_for_each_entry_safe(entry, tmp, &rtwsta_link->ba_cam_list, list) {
+ if (entry->tid != tid)
+ continue;
+
+@@ -3396,24 +3542,25 @@ int rtw89_core_release_sta_ba_entry(struct rtw89_dev *rtwdev,
+
+ #define RTW89_TYPE_MAPPING(_type) \
+ case NL80211_IFTYPE_ ## _type: \
+- rtwvif->wifi_role = RTW89_WIFI_ROLE_ ## _type; \
++ rtwvif_link->wifi_role = RTW89_WIFI_ROLE_ ## _type; \
+ break
+-void rtw89_vif_type_mapping(struct ieee80211_vif *vif, bool assoc)
++void rtw89_vif_type_mapping(struct rtw89_vif_link *rtwvif_link, bool assoc)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct ieee80211_bss_conf *bss_conf;
+
+ switch (vif->type) {
+ case NL80211_IFTYPE_STATION:
+ if (vif->p2p)
+- rtwvif->wifi_role = RTW89_WIFI_ROLE_P2P_CLIENT;
++ rtwvif_link->wifi_role = RTW89_WIFI_ROLE_P2P_CLIENT;
+ else
+- rtwvif->wifi_role = RTW89_WIFI_ROLE_STATION;
++ rtwvif_link->wifi_role = RTW89_WIFI_ROLE_STATION;
+ break;
+ case NL80211_IFTYPE_AP:
+ if (vif->p2p)
+- rtwvif->wifi_role = RTW89_WIFI_ROLE_P2P_GO;
++ rtwvif_link->wifi_role = RTW89_WIFI_ROLE_P2P_GO;
+ else
+- rtwvif->wifi_role = RTW89_WIFI_ROLE_AP;
++ rtwvif_link->wifi_role = RTW89_WIFI_ROLE_AP;
+ break;
+ RTW89_TYPE_MAPPING(ADHOC);
+ RTW89_TYPE_MAPPING(MONITOR);
+@@ -3426,23 +3573,27 @@ void rtw89_vif_type_mapping(struct ieee80211_vif *vif, bool assoc)
+ switch (vif->type) {
+ case NL80211_IFTYPE_AP:
+ case NL80211_IFTYPE_MESH_POINT:
+- rtwvif->net_type = RTW89_NET_TYPE_AP_MODE;
+- rtwvif->self_role = RTW89_SELF_ROLE_AP;
++ rtwvif_link->net_type = RTW89_NET_TYPE_AP_MODE;
++ rtwvif_link->self_role = RTW89_SELF_ROLE_AP;
+ break;
+ case NL80211_IFTYPE_ADHOC:
+- rtwvif->net_type = RTW89_NET_TYPE_AD_HOC;
+- rtwvif->self_role = RTW89_SELF_ROLE_CLIENT;
++ rtwvif_link->net_type = RTW89_NET_TYPE_AD_HOC;
++ rtwvif_link->self_role = RTW89_SELF_ROLE_CLIENT;
+ break;
+ case NL80211_IFTYPE_STATION:
+ if (assoc) {
+- rtwvif->net_type = RTW89_NET_TYPE_INFRA;
+- rtwvif->trigger = vif->bss_conf.he_support;
++ rtwvif_link->net_type = RTW89_NET_TYPE_INFRA;
++
++ rcu_read_lock();
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++ rtwvif_link->trigger = bss_conf->he_support;
++ rcu_read_unlock();
+ } else {
+- rtwvif->net_type = RTW89_NET_TYPE_NO_LINK;
+- rtwvif->trigger = false;
++ rtwvif_link->net_type = RTW89_NET_TYPE_NO_LINK;
++ rtwvif_link->trigger = false;
+ }
+- rtwvif->self_role = RTW89_SELF_ROLE_CLIENT;
+- rtwvif->addr_cam.sec_ent_mode = RTW89_ADDR_CAM_SEC_NORMAL;
++ rtwvif_link->self_role = RTW89_SELF_ROLE_CLIENT;
++ rtwvif_link->addr_cam.sec_ent_mode = RTW89_ADDR_CAM_SEC_NORMAL;
+ break;
+ case NL80211_IFTYPE_MONITOR:
+ break;
+@@ -3452,137 +3603,110 @@ void rtw89_vif_type_mapping(struct ieee80211_vif *vif, bool assoc)
+ }
+ }
+
+-int rtw89_core_sta_add(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++int rtw89_core_sta_link_add(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
+ struct rtw89_hal *hal = &rtwdev->hal;
+ u8 ant_num = hal->ant_diversity ? 2 : rtwdev->chip->rf_path_num;
+ int i;
+ int ret;
+
+- rtwsta->rtwdev = rtwdev;
+- rtwsta->rtwvif = rtwvif;
+- rtwsta->prev_rssi = 0;
+- INIT_LIST_HEAD(&rtwsta->ba_cam_list);
+- skb_queue_head_init(&rtwsta->roc_queue);
+-
+- for (i = 0; i < ARRAY_SIZE(sta->txq); i++)
+- rtw89_core_txq_init(rtwdev, sta->txq[i]);
+-
+- ewma_rssi_init(&rtwsta->avg_rssi);
+- ewma_snr_init(&rtwsta->avg_snr);
+- ewma_evm_init(&rtwsta->evm_1ss);
++ rtwsta_link->prev_rssi = 0;
++ INIT_LIST_HEAD(&rtwsta_link->ba_cam_list);
++ ewma_rssi_init(&rtwsta_link->avg_rssi);
++ ewma_snr_init(&rtwsta_link->avg_snr);
++ ewma_evm_init(&rtwsta_link->evm_1ss);
+ for (i = 0; i < ant_num; i++) {
+- ewma_rssi_init(&rtwsta->rssi[i]);
+- ewma_evm_init(&rtwsta->evm_min[i]);
+- ewma_evm_init(&rtwsta->evm_max[i]);
++ ewma_rssi_init(&rtwsta_link->rssi[i]);
++ ewma_evm_init(&rtwsta_link->evm_min[i]);
++ ewma_evm_init(&rtwsta_link->evm_max[i]);
+ }
+
+ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
+- /* for station mode, assign the mac_id from itself */
+- rtwsta->mac_id = rtwvif->mac_id;
+-
+ /* must do rtw89_reg_6ghz_recalc() before rfk channel */
+- ret = rtw89_reg_6ghz_recalc(rtwdev, rtwvif, true);
++ ret = rtw89_reg_6ghz_recalc(rtwdev, rtwvif_link, true);
+ if (ret)
+ return ret;
+
+- rtw89_btc_ntfy_role_info(rtwdev, rtwvif, rtwsta,
++ rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, rtwsta_link,
+ BTC_ROLE_MSTS_STA_CONN_START);
+- rtw89_chip_rfk_channel(rtwdev, rtwvif);
++ rtw89_chip_rfk_channel(rtwdev, rtwvif_link);
+ } else if (vif->type == NL80211_IFTYPE_AP || sta->tdls) {
+- rtwsta->mac_id = rtw89_acquire_mac_id(rtwdev);
+- if (rtwsta->mac_id == RTW89_MAX_MAC_ID_NUM)
+- return -ENOSPC;
+-
+- ret = rtw89_mac_set_macid_pause(rtwdev, rtwsta->mac_id, false);
++ ret = rtw89_mac_set_macid_pause(rtwdev, rtwsta_link->mac_id, false);
+ if (ret) {
+- rtw89_release_mac_id(rtwdev, rtwsta->mac_id);
+ rtw89_warn(rtwdev, "failed to send h2c macid pause\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta,
++ ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, rtwsta_link,
+ RTW89_ROLE_CREATE);
+ if (ret) {
+- rtw89_release_mac_id(rtwdev, rtwsta->mac_id);
+ rtw89_warn(rtwdev, "failed to send h2c role info\n");
+ return ret;
+ }
+
+- ret = rtw89_chip_h2c_default_cmac_tbl(rtwdev, rtwvif, rtwsta);
++ ret = rtw89_chip_h2c_default_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret)
+ return ret;
+
+- ret = rtw89_chip_h2c_default_dmac_tbl(rtwdev, rtwvif, rtwsta);
++ ret = rtw89_chip_h2c_default_dmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret)
+ return ret;
+-
+- rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE);
+ }
+
+ return 0;
+ }
+
+-int rtw89_core_sta_disassoc(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++int rtw89_core_sta_link_disassoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+
+ if (vif->type == NL80211_IFTYPE_STATION)
+- rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, false);
+-
+- rtwdev->total_sta_assoc--;
+- if (sta->tdls)
+- rtwvif->tdls_peer--;
+- rtwsta->disassoc = true;
++ rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, false);
+
+ return 0;
+ }
+
+-int rtw89_core_sta_disconnect(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++int rtw89_core_sta_link_disconnect(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
+ int ret;
+
+- rtw89_mac_bf_monitor_calc(rtwdev, sta, true);
+- rtw89_mac_bf_disassoc(rtwdev, vif, sta);
+- rtw89_core_free_sta_pending_ba(rtwdev, sta);
+- rtw89_core_free_sta_pending_forbid_ba(rtwdev, sta);
+- rtw89_core_free_sta_pending_roc_tx(rtwdev, sta);
++ rtw89_mac_bf_monitor_calc(rtwdev, rtwsta_link, true);
++ rtw89_mac_bf_disassoc(rtwdev, rtwvif_link, rtwsta_link);
+
+ if (vif->type == NL80211_IFTYPE_AP || sta->tdls)
+- rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta->addr_cam);
++ rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta_link->addr_cam);
+ if (sta->tdls)
+- rtw89_cam_deinit_bssid_cam(rtwdev, &rtwsta->bssid_cam);
++ rtw89_cam_deinit_bssid_cam(rtwdev, &rtwsta_link->bssid_cam);
+
+ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
+- rtw89_vif_type_mapping(vif, false);
+- rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif, true);
++ rtw89_vif_type_mapping(rtwvif_link, false);
++ rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif_link, true);
+ }
+
+- ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, vif, sta);
++ ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c cmac table\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, rtwsta, true);
++ ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, rtwsta_link, true);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c join info\n");
+ return ret;
+ }
+
+ /* update cam aid mac_id net_type */
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c cam\n");
+ return ret;
+@@ -3591,106 +3715,114 @@ int rtw89_core_sta_disconnect(struct rtw89_dev *rtwdev,
+ return ret;
+ }
+
+-int rtw89_core_sta_assoc(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++int rtw89_core_sta_link_assoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_bssid_cam_entry *bssid_cam = rtw89_get_bssid_cam_of(rtwvif, rtwsta);
++ const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
++ struct rtw89_bssid_cam_entry *bssid_cam = rtw89_get_bssid_cam_of(rtwvif_link,
++ rtwsta_link);
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
+ int ret;
+
+ if (vif->type == NL80211_IFTYPE_AP || sta->tdls) {
+ if (sta->tdls) {
+- ret = rtw89_cam_init_bssid_cam(rtwdev, rtwvif, bssid_cam, sta->addr);
++ struct ieee80211_link_sta *link_sta;
++
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ ret = rtw89_cam_init_bssid_cam(rtwdev, rtwvif_link, bssid_cam,
++ link_sta->addr);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c init bssid cam for TDLS\n");
++ rcu_read_unlock();
+ return ret;
+ }
++
++ rcu_read_unlock();
+ }
+
+- ret = rtw89_cam_init_addr_cam(rtwdev, &rtwsta->addr_cam, bssid_cam);
++ ret = rtw89_cam_init_addr_cam(rtwdev, &rtwsta_link->addr_cam, bssid_cam);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c init addr cam\n");
+ return ret;
+ }
+ }
+
+- ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, vif, sta);
++ ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c cmac table\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, rtwsta, false);
++ ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, rtwsta_link, false);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c join info\n");
+ return ret;
+ }
+
+ /* update cam aid mac_id net_type */
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c cam\n");
+ return ret;
+ }
+
+- rtwdev->total_sta_assoc++;
+- if (sta->tdls)
+- rtwvif->tdls_peer++;
+- rtw89_phy_ra_assoc(rtwdev, sta);
+- rtw89_mac_bf_assoc(rtwdev, vif, sta);
+- rtw89_mac_bf_monitor_calc(rtwdev, sta, false);
++ rtw89_phy_ra_assoc(rtwdev, rtwsta_link);
++ rtw89_mac_bf_assoc(rtwdev, rtwvif_link, rtwsta_link);
++ rtw89_mac_bf_monitor_calc(rtwdev, rtwsta_link, false);
+
+ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
+- struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
++ struct ieee80211_bss_conf *bss_conf;
+
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
+ if (bss_conf->he_support &&
+ !(bss_conf->he_oper.params & IEEE80211_HE_OPERATION_ER_SU_DISABLE))
+- rtwsta->er_cap = true;
++ rtwsta_link->er_cap = true;
++
++ rcu_read_unlock();
+
+- rtw89_btc_ntfy_role_info(rtwdev, rtwvif, rtwsta,
++ rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, rtwsta_link,
+ BTC_ROLE_MSTS_STA_CONN_END);
+- rtw89_core_get_no_ul_ofdma_htc(rtwdev, &rtwsta->htc_template, chan);
+- rtw89_phy_ul_tb_assoc(rtwdev, rtwvif);
++ rtw89_core_get_no_ul_ofdma_htc(rtwdev, &rtwsta_link->htc_template, chan);
++ rtw89_phy_ul_tb_assoc(rtwdev, rtwvif_link);
+
+- ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif, rtwsta->mac_id);
++ ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif_link, rtwsta_link->mac_id);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c general packet\n");
+ return ret;
+ }
+
+- rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true);
++ rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, true);
+ }
+
+ return ret;
+ }
+
+-int rtw89_core_sta_remove(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++int rtw89_core_sta_link_remove(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ const struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
+ int ret;
+
+ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
+- rtw89_reg_6ghz_recalc(rtwdev, rtwvif, false);
+- rtw89_btc_ntfy_role_info(rtwdev, rtwvif, rtwsta,
++ rtw89_reg_6ghz_recalc(rtwdev, rtwvif_link, false);
++ rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, rtwsta_link,
+ BTC_ROLE_MSTS_STA_DIS_CONN);
+ } else if (vif->type == NL80211_IFTYPE_AP || sta->tdls) {
+- rtw89_release_mac_id(rtwdev, rtwsta->mac_id);
+-
+- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta,
++ ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, rtwsta_link,
+ RTW89_ROLE_REMOVE);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c role info\n");
+ return ret;
+ }
+-
+- rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE);
+ }
+
+ return 0;
+@@ -4152,15 +4284,16 @@ static void rtw89_core_ppdu_sts_init(struct rtw89_dev *rtwdev)
+ void rtw89_core_update_beacon_work(struct work_struct *work)
+ {
+ struct rtw89_dev *rtwdev;
+- struct rtw89_vif *rtwvif = container_of(work, struct rtw89_vif,
+- update_beacon_work);
++ struct rtw89_vif_link *rtwvif_link = container_of(work, struct rtw89_vif_link,
++ update_beacon_work);
+
+- if (rtwvif->net_type != RTW89_NET_TYPE_AP_MODE)
++ if (rtwvif_link->net_type != RTW89_NET_TYPE_AP_MODE)
+ return;
+
+- rtwdev = rtwvif->rtwdev;
++ rtwdev = rtwvif_link->rtwvif->rtwdev;
++
+ mutex_lock(&rtwdev->mutex);
+- rtw89_chip_h2c_update_beacon(rtwdev, rtwvif);
++ rtw89_chip_h2c_update_beacon(rtwdev, rtwvif_link);
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -4355,6 +4488,168 @@ void rtw89_release_mac_id(struct rtw89_dev *rtwdev, u8 mac_id)
+ clear_bit(mac_id, rtwdev->mac_id_map);
+ }
+
++void rtw89_init_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ u8 mac_id, u8 port)
++{
++ const struct rtw89_chip_info *chip = rtwdev->chip;
++ u8 support_link_num = chip->support_link_num;
++ u8 support_mld_num = 0;
++ unsigned int link_id;
++ u8 index;
++
++ bitmap_zero(rtwvif->links_inst_map, __RTW89_MLD_MAX_LINK_NUM);
++ for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++)
++ rtwvif->links[link_id] = NULL;
++
++ rtwvif->rtwdev = rtwdev;
++
++ if (rtwdev->support_mlo) {
++ rtwvif->links_inst_valid_num = support_link_num;
++ support_mld_num = chip->support_macid_num / support_link_num;
++ } else {
++ rtwvif->links_inst_valid_num = 1;
++ }
++
++ for (index = 0; index < rtwvif->links_inst_valid_num; index++) {
++ struct rtw89_vif_link *inst = &rtwvif->links_inst[index];
++
++ inst->rtwvif = rtwvif;
++ inst->mac_id = mac_id + index * support_mld_num;
++ inst->mac_idx = RTW89_MAC_0 + index;
++ inst->phy_idx = RTW89_PHY_0 + index;
++
++ /* multi-link use the same port id on different HW bands */
++ inst->port = port;
++ }
++}
++
++void rtw89_init_sta(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ struct rtw89_sta *rtwsta, u8 mac_id)
++{
++ const struct rtw89_chip_info *chip = rtwdev->chip;
++ u8 support_link_num = chip->support_link_num;
++ u8 support_mld_num = 0;
++ unsigned int link_id;
++ u8 index;
++
++ bitmap_zero(rtwsta->links_inst_map, __RTW89_MLD_MAX_LINK_NUM);
++ for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++)
++ rtwsta->links[link_id] = NULL;
++
++ rtwsta->rtwdev = rtwdev;
++ rtwsta->rtwvif = rtwvif;
++
++ if (rtwdev->support_mlo) {
++ rtwsta->links_inst_valid_num = support_link_num;
++ support_mld_num = chip->support_macid_num / support_link_num;
++ } else {
++ rtwsta->links_inst_valid_num = 1;
++ }
++
++ for (index = 0; index < rtwsta->links_inst_valid_num; index++) {
++ struct rtw89_sta_link *inst = &rtwsta->links_inst[index];
++
++ inst->rtwvif_link = &rtwvif->links_inst[index];
++
++ inst->rtwsta = rtwsta;
++ inst->mac_id = mac_id + index * support_mld_num;
++ }
++}
++
++struct rtw89_vif_link *rtw89_vif_set_link(struct rtw89_vif *rtwvif,
++ unsigned int link_id)
++{
++ struct rtw89_vif_link *rtwvif_link = rtwvif->links[link_id];
++ u8 index;
++ int ret;
++
++ if (rtwvif_link)
++ return rtwvif_link;
++
++ index = find_first_zero_bit(rtwvif->links_inst_map,
++ rtwvif->links_inst_valid_num);
++ if (index == rtwvif->links_inst_valid_num) {
++ ret = -EBUSY;
++ goto err;
++ }
++
++ rtwvif_link = &rtwvif->links_inst[index];
++ rtwvif_link->link_id = link_id;
++
++ set_bit(index, rtwvif->links_inst_map);
++ rtwvif->links[link_id] = rtwvif_link;
++ return rtwvif_link;
++
++err:
++ rtw89_err(rtwvif->rtwdev, "vif (link_id %u) failed to set link: %d\n",
++ link_id, ret);
++ return NULL;
++}
++
++void rtw89_vif_unset_link(struct rtw89_vif *rtwvif, unsigned int link_id)
++{
++ struct rtw89_vif_link **container = &rtwvif->links[link_id];
++ struct rtw89_vif_link *link = *container;
++ u8 index;
++
++ if (!link)
++ return;
++
++ index = rtw89_vif_link_inst_get_index(link);
++ clear_bit(index, rtwvif->links_inst_map);
++ *container = NULL;
++}
++
++struct rtw89_sta_link *rtw89_sta_set_link(struct rtw89_sta *rtwsta,
++ unsigned int link_id)
++{
++ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
++ struct rtw89_vif_link *rtwvif_link = rtwvif->links[link_id];
++ struct rtw89_sta_link *rtwsta_link = rtwsta->links[link_id];
++ u8 index;
++ int ret;
++
++ if (rtwsta_link)
++ return rtwsta_link;
++
++ if (!rtwvif_link) {
++ ret = -ENOLINK;
++ goto err;
++ }
++
++ index = rtw89_vif_link_inst_get_index(rtwvif_link);
++ if (test_bit(index, rtwsta->links_inst_map)) {
++ ret = -EBUSY;
++ goto err;
++ }
++
++ rtwsta_link = &rtwsta->links_inst[index];
++ rtwsta_link->link_id = link_id;
++
++ set_bit(index, rtwsta->links_inst_map);
++ rtwsta->links[link_id] = rtwsta_link;
++ return rtwsta_link;
++
++err:
++ rtw89_err(rtwsta->rtwdev, "sta (link_id %u) failed to set link: %d\n",
++ link_id, ret);
++ return NULL;
++}
++
++void rtw89_sta_unset_link(struct rtw89_sta *rtwsta, unsigned int link_id)
++{
++ struct rtw89_sta_link **container = &rtwsta->links[link_id];
++ struct rtw89_sta_link *link = *container;
++ u8 index;
++
++ if (!link)
++ return;
++
++ index = rtw89_sta_link_inst_get_index(link);
++ clear_bit(index, rtwsta->links_inst_map);
++ *container = NULL;
++}
++
+ int rtw89_core_init(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_btc *btc = &rtwdev->btc;
+@@ -4444,38 +4739,44 @@ void rtw89_core_deinit(struct rtw89_dev *rtwdev)
+ }
+ EXPORT_SYMBOL(rtw89_core_deinit);
+
+-void rtw89_core_scan_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++void rtw89_core_scan_start(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ const u8 *mac_addr, bool hw_scan)
+ {
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
+
+ rtwdev->scanning = true;
+ rtw89_leave_lps(rtwdev);
+ if (hw_scan)
+ rtw89_leave_ips_by_hwflags(rtwdev);
+
+- ether_addr_copy(rtwvif->mac_addr, mac_addr);
++ ether_addr_copy(rtwvif_link->mac_addr, mac_addr);
+ rtw89_btc_ntfy_scan_start(rtwdev, RTW89_PHY_0, chan->band_type);
+- rtw89_chip_rfk_scan(rtwdev, rtwvif, true);
++ rtw89_chip_rfk_scan(rtwdev, rtwvif_link, true);
+ rtw89_hci_recalc_int_mit(rtwdev);
+ rtw89_phy_config_edcca(rtwdev, true);
+
+- rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, mac_addr);
++ rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, mac_addr);
+ }
+
+ void rtw89_core_scan_complete(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif, bool hw_scan)
++ struct rtw89_vif_link *rtwvif_link, bool hw_scan)
+ {
+- struct rtw89_vif *rtwvif = vif ? (struct rtw89_vif *)vif->drv_priv : NULL;
++ struct ieee80211_bss_conf *bss_conf;
+
+- if (!rtwvif)
++ if (!rtwvif_link)
+ return;
+
+- ether_addr_copy(rtwvif->mac_addr, vif->addr);
+- rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL);
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ ether_addr_copy(rtwvif_link->mac_addr, bss_conf->addr);
++
++ rcu_read_unlock();
++
++ rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL);
+
+- rtw89_chip_rfk_scan(rtwdev, rtwvif, false);
++ rtw89_chip_rfk_scan(rtwdev, rtwvif_link, false);
+ rtw89_btc_ntfy_scan_finish(rtwdev, RTW89_PHY_0);
+ rtw89_phy_config_edcca(rtwdev, false);
+
+@@ -4688,17 +4989,39 @@ int rtw89_chip_info_setup(struct rtw89_dev *rtwdev)
+ }
+ EXPORT_SYMBOL(rtw89_chip_info_setup);
+
++void rtw89_chip_cfg_txpwr_ul_tb_offset(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
++{
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct rtw89_chip_info *chip = rtwdev->chip;
++ struct ieee80211_bss_conf *bss_conf;
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++ if (!bss_conf->he_support || !vif->cfg.assoc) {
++ rcu_read_unlock();
++ return;
++ }
++
++ rcu_read_unlock();
++
++ if (chip->ops->set_txpwr_ul_tb_offset)
++ chip->ops->set_txpwr_ul_tb_offset(rtwdev, 0, rtwvif_link->mac_idx);
++}
++
+ static int rtw89_core_register_hw(struct rtw89_dev *rtwdev)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
++ u8 n = rtwdev->support_mlo ? chip->support_link_num : 1;
+ struct ieee80211_hw *hw = rtwdev->hw;
+ struct rtw89_efuse *efuse = &rtwdev->efuse;
+ struct rtw89_hal *hal = &rtwdev->hal;
+ int ret;
+ int tx_headroom = IEEE80211_HT_CTL_LEN;
+
+- hw->vif_data_size = sizeof(struct rtw89_vif);
+- hw->sta_data_size = sizeof(struct rtw89_sta);
++ hw->vif_data_size = struct_size_t(struct rtw89_vif, links_inst, n);
++ hw->sta_data_size = struct_size_t(struct rtw89_sta, links_inst, n);
+ hw->txq_data_size = sizeof(struct rtw89_txq);
+ hw->chanctx_data_size = sizeof(struct rtw89_chanctx_cfg);
+
+diff --git a/drivers/net/wireless/realtek/rtw89/core.h b/drivers/net/wireless/realtek/rtw89/core.h
+index 4ed9034fdb4641..de33320b1354cd 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.h
++++ b/drivers/net/wireless/realtek/rtw89/core.h
+@@ -829,6 +829,8 @@ enum rtw89_phy_idx {
+ RTW89_PHY_MAX
+ };
+
++#define __RTW89_MLD_MAX_LINK_NUM 2
++
+ enum rtw89_chanctx_idx {
+ RTW89_CHANCTX_0 = 0,
+ RTW89_CHANCTX_1 = 1,
+@@ -1166,8 +1168,8 @@ struct rtw89_core_tx_request {
+ enum rtw89_core_tx_type tx_type;
+
+ struct sk_buff *skb;
+- struct ieee80211_vif *vif;
+- struct ieee80211_sta *sta;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
+ struct rtw89_tx_desc_info desc_info;
+ };
+
+@@ -3354,12 +3356,13 @@ struct rtw89_sec_cam_entry {
+ u8 key[32];
+ };
+
+-struct rtw89_sta {
++struct rtw89_sta_link {
++ struct rtw89_sta *rtwsta;
++ unsigned int link_id;
++
+ u8 mac_id;
+- bool disassoc;
+ bool er_cap;
+- struct rtw89_dev *rtwdev;
+- struct rtw89_vif *rtwvif;
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_ra_info ra;
+ struct rtw89_ra_report ra_report;
+ int max_agg_wait;
+@@ -3370,15 +3373,12 @@ struct rtw89_sta {
+ struct ewma_evm evm_1ss;
+ struct ewma_evm evm_min[RF_PATH_MAX];
+ struct ewma_evm evm_max[RF_PATH_MAX];
+- struct rtw89_ampdu_params ampdu_params[IEEE80211_NUM_TIDS];
+- DECLARE_BITMAP(ampdu_map, IEEE80211_NUM_TIDS);
+ struct ieee80211_rx_status rx_status;
+ u16 rx_hw_rate;
+ __le32 htc_template;
+ struct rtw89_addr_cam_entry addr_cam; /* AP mode or TDLS peer only */
+ struct rtw89_bssid_cam_entry bssid_cam; /* TDLS peer only */
+ struct list_head ba_cam_list;
+- struct sk_buff_head roc_queue;
+
+ bool use_cfg_mask;
+ struct cfg80211_bitrate_mask mask;
+@@ -3460,10 +3460,10 @@ struct rtw89_p2p_noa_setter {
+ u8 noa_index;
+ };
+
+-struct rtw89_vif {
+- struct list_head list;
+- struct rtw89_dev *rtwdev;
+- struct rtw89_roc roc;
++struct rtw89_vif_link {
++ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
++
+ bool chanctx_assigned; /* only valid when running with chanctx_ops */
+ enum rtw89_chanctx_idx chanctx_idx;
+ enum rtw89_reg_6ghz_power reg_6ghz_power;
+@@ -3473,7 +3473,6 @@ struct rtw89_vif {
+ u8 port;
+ u8 mac_addr[ETH_ALEN];
+ u8 bssid[ETH_ALEN];
+- __be32 ip_addr;
+ u8 phy_idx;
+ u8 mac_idx;
+ u8 net_type;
+@@ -3484,7 +3483,6 @@ struct rtw89_vif {
+ u8 hit_rule;
+ u8 last_noa_nr;
+ u64 sync_bcn_tsf;
+- bool offchan;
+ bool trigger;
+ bool lsig_txop;
+ u8 tgt_ind;
+@@ -3498,15 +3496,11 @@ struct rtw89_vif {
+ bool pre_pwr_diff_en;
+ bool pwr_diff_en;
+ u8 def_tri_idx;
+- u32 tdls_peer;
+ struct work_struct update_beacon_work;
+ struct rtw89_addr_cam_entry addr_cam;
+ struct rtw89_bssid_cam_entry bssid_cam;
+ struct ieee80211_tx_queue_params tx_params[IEEE80211_NUM_ACS];
+- struct rtw89_traffic_stats stats;
+ struct rtw89_phy_rate_pattern rate_pattern;
+- struct cfg80211_scan_request *scan_req;
+- struct ieee80211_scan_ies *scan_ies;
+ struct list_head general_pkt_list;
+ struct rtw89_p2p_noa_setter p2p_noa;
+ };
+@@ -3599,11 +3593,11 @@ struct rtw89_chip_ops {
+ void (*rfk_hw_init)(struct rtw89_dev *rtwdev);
+ void (*rfk_init)(struct rtw89_dev *rtwdev);
+ void (*rfk_init_late)(struct rtw89_dev *rtwdev);
+- void (*rfk_channel)(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++ void (*rfk_channel)(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ void (*rfk_band_changed)(struct rtw89_dev *rtwdev,
+ enum rtw89_phy_idx phy_idx,
+ const struct rtw89_chan *chan);
+- void (*rfk_scan)(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ void (*rfk_scan)(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool start);
+ void (*rfk_track)(struct rtw89_dev *rtwdev);
+ void (*power_trim)(struct rtw89_dev *rtwdev);
+@@ -3646,23 +3640,25 @@ struct rtw89_chip_ops {
+ u32 *tx_en, enum rtw89_sch_tx_sel sel);
+ int (*resume_sch_tx)(struct rtw89_dev *rtwdev, u8 mac_idx, u32 tx_en);
+ int (*h2c_dctl_sec_cam)(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int (*h2c_default_cmac_tbl)(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int (*h2c_assoc_cmac_tbl)(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int (*h2c_ampdu_cmac_tbl)(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int (*h2c_default_dmac_tbl)(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int (*h2c_update_beacon)(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
+- int (*h2c_ba_cam)(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link);
++ int (*h2c_ba_cam)(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ bool valid, struct ieee80211_ampdu_params *params);
+
+ void (*btc_set_rfe)(struct rtw89_dev *rtwdev);
+@@ -5196,7 +5192,7 @@ struct rtw89_early_h2c {
+ };
+
+ struct rtw89_hw_scan_info {
+- struct ieee80211_vif *scanning_vif;
++ struct rtw89_vif_link *scanning_vif;
+ struct list_head pkt_list[NUM_NL80211_BANDS];
+ struct rtw89_chan op_chan;
+ bool abort;
+@@ -5371,7 +5367,7 @@ struct rtw89_wow_aoac_report {
+ };
+
+ struct rtw89_wow_param {
+- struct ieee80211_vif *wow_vif;
++ struct rtw89_vif_link *rtwvif_link;
+ DECLARE_BITMAP(flags, RTW89_WOW_FLAG_NUM);
+ struct rtw89_wow_cam_info patterns[RTW89_MAX_PATTERN_NUM];
+ struct rtw89_wow_key_info key_info;
+@@ -5408,7 +5404,7 @@ struct rtw89_mcc_policy {
+ };
+
+ struct rtw89_mcc_role {
+- struct rtw89_vif *rtwvif;
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_mcc_policy policy;
+ struct rtw89_mcc_limit limit;
+
+@@ -5608,6 +5604,121 @@ struct rtw89_dev {
+ u8 priv[] __aligned(sizeof(void *));
+ };
+
++struct rtw89_vif {
++ struct rtw89_dev *rtwdev;
++ struct list_head list;
++
++ u8 mac_addr[ETH_ALEN];
++ __be32 ip_addr;
++
++ struct rtw89_traffic_stats stats;
++ u32 tdls_peer;
++
++ struct ieee80211_scan_ies *scan_ies;
++ struct cfg80211_scan_request *scan_req;
++
++ struct rtw89_roc roc;
++ bool offchan;
++
++ u8 links_inst_valid_num;
++ DECLARE_BITMAP(links_inst_map, __RTW89_MLD_MAX_LINK_NUM);
++ struct rtw89_vif_link *links[IEEE80211_MLD_MAX_NUM_LINKS];
++ struct rtw89_vif_link links_inst[] __counted_by(links_inst_valid_num);
++};
++
++static inline bool rtw89_vif_assign_link_is_valid(struct rtw89_vif_link **rtwvif_link,
++ const struct rtw89_vif *rtwvif,
++ unsigned int link_id)
++{
++ *rtwvif_link = rtwvif->links[link_id];
++ return !!*rtwvif_link;
++}
++
++#define rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) \
++ for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++) \
++ if (rtw89_vif_assign_link_is_valid(&(rtwvif_link), rtwvif, link_id))
++
++struct rtw89_sta {
++ struct rtw89_dev *rtwdev;
++ struct rtw89_vif *rtwvif;
++
++ bool disassoc;
++
++ struct sk_buff_head roc_queue;
++
++ struct rtw89_ampdu_params ampdu_params[IEEE80211_NUM_TIDS];
++ DECLARE_BITMAP(ampdu_map, IEEE80211_NUM_TIDS);
++
++ u8 links_inst_valid_num;
++ DECLARE_BITMAP(links_inst_map, __RTW89_MLD_MAX_LINK_NUM);
++ struct rtw89_sta_link *links[IEEE80211_MLD_MAX_NUM_LINKS];
++ struct rtw89_sta_link links_inst[] __counted_by(links_inst_valid_num);
++};
++
++static inline bool rtw89_sta_assign_link_is_valid(struct rtw89_sta_link **rtwsta_link,
++ const struct rtw89_sta *rtwsta,
++ unsigned int link_id)
++{
++ *rtwsta_link = rtwsta->links[link_id];
++ return !!*rtwsta_link;
++}
++
++#define rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) \
++ for (link_id = 0; link_id < IEEE80211_MLD_MAX_NUM_LINKS; link_id++) \
++ if (rtw89_sta_assign_link_is_valid(&(rtwsta_link), rtwsta, link_id))
++
++static inline u8 rtw89_vif_get_main_macid(struct rtw89_vif *rtwvif)
++{
++ /* const after init, so no need to check if active first */
++ return rtwvif->links_inst[0].mac_id;
++}
++
++static inline u8 rtw89_vif_get_main_port(struct rtw89_vif *rtwvif)
++{
++ /* const after init, so no need to check if active first */
++ return rtwvif->links_inst[0].port;
++}
++
++static inline struct rtw89_vif_link *
++rtw89_vif_get_link_inst(struct rtw89_vif *rtwvif, u8 index)
++{
++ if (index >= rtwvif->links_inst_valid_num ||
++ !test_bit(index, rtwvif->links_inst_map))
++ return NULL;
++ return &rtwvif->links_inst[index];
++}
++
++static inline
++u8 rtw89_vif_link_inst_get_index(struct rtw89_vif_link *rtwvif_link)
++{
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
++
++ return rtwvif_link - rtwvif->links_inst;
++}
++
++static inline u8 rtw89_sta_get_main_macid(struct rtw89_sta *rtwsta)
++{
++ /* const after init, so no need to check if active first */
++ return rtwsta->links_inst[0].mac_id;
++}
++
++static inline struct rtw89_sta_link *
++rtw89_sta_get_link_inst(struct rtw89_sta *rtwsta, u8 index)
++{
++ if (index >= rtwsta->links_inst_valid_num ||
++ !test_bit(index, rtwsta->links_inst_map))
++ return NULL;
++ return &rtwsta->links_inst[index];
++}
++
++static inline
++u8 rtw89_sta_link_inst_get_index(struct rtw89_sta_link *rtwsta_link)
++{
++ struct rtw89_sta *rtwsta = rtwsta_link->rtwsta;
++
++ return rtwsta_link - rtwsta->links_inst;
++}
++
+ static inline int rtw89_hci_tx_write(struct rtw89_dev *rtwdev,
+ struct rtw89_core_tx_request *tx_req)
+ {
+@@ -5972,9 +6083,26 @@ static inline struct ieee80211_vif *rtwvif_to_vif_safe(struct rtw89_vif *rtwvif)
+ return rtwvif ? rtwvif_to_vif(rtwvif) : NULL;
+ }
+
++static inline
++struct ieee80211_vif *rtwvif_link_to_vif(struct rtw89_vif_link *rtwvif_link)
++{
++ return rtwvif_to_vif(rtwvif_link->rtwvif);
++}
++
++static inline
++struct ieee80211_vif *rtwvif_link_to_vif_safe(struct rtw89_vif_link *rtwvif_link)
++{
++ return rtwvif_link ? rtwvif_link_to_vif(rtwvif_link) : NULL;
++}
++
++static inline struct rtw89_vif *vif_to_rtwvif(struct ieee80211_vif *vif)
++{
++ return (struct rtw89_vif *)vif->drv_priv;
++}
++
+ static inline struct rtw89_vif *vif_to_rtwvif_safe(struct ieee80211_vif *vif)
+ {
+- return vif ? (struct rtw89_vif *)vif->drv_priv : NULL;
++ return vif ? vif_to_rtwvif(vif) : NULL;
+ }
+
+ static inline struct ieee80211_sta *rtwsta_to_sta(struct rtw89_sta *rtwsta)
+@@ -5989,11 +6117,88 @@ static inline struct ieee80211_sta *rtwsta_to_sta_safe(struct rtw89_sta *rtwsta)
+ return rtwsta ? rtwsta_to_sta(rtwsta) : NULL;
+ }
+
++static inline
++struct ieee80211_sta *rtwsta_link_to_sta(struct rtw89_sta_link *rtwsta_link)
++{
++ return rtwsta_to_sta(rtwsta_link->rtwsta);
++}
++
++static inline
++struct ieee80211_sta *rtwsta_link_to_sta_safe(struct rtw89_sta_link *rtwsta_link)
++{
++ return rtwsta_link ? rtwsta_link_to_sta(rtwsta_link) : NULL;
++}
++
++static inline struct rtw89_sta *sta_to_rtwsta(struct ieee80211_sta *sta)
++{
++ return (struct rtw89_sta *)sta->drv_priv;
++}
++
+ static inline struct rtw89_sta *sta_to_rtwsta_safe(struct ieee80211_sta *sta)
+ {
+- return sta ? (struct rtw89_sta *)sta->drv_priv : NULL;
++ return sta ? sta_to_rtwsta(sta) : NULL;
+ }
+
++static inline struct ieee80211_bss_conf *
++__rtw89_vif_rcu_dereference_link(struct rtw89_vif_link *rtwvif_link, bool *nolink)
++{
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct ieee80211_bss_conf *bss_conf;
++
++ bss_conf = rcu_dereference(vif->link_conf[rtwvif_link->link_id]);
++ if (unlikely(!bss_conf)) {
++ *nolink = true;
++ return &vif->bss_conf;
++ }
++
++ *nolink = false;
++ return bss_conf;
++}
++
++#define rtw89_vif_rcu_dereference_link(rtwvif_link, assert) \
++({ \
++ typeof(rtwvif_link) p = rtwvif_link; \
++ struct ieee80211_bss_conf *bss_conf; \
++ bool nolink; \
++ \
++ bss_conf = __rtw89_vif_rcu_dereference_link(p, &nolink); \
++ if (unlikely(nolink) && (assert)) \
++ rtw89_err(p->rtwvif->rtwdev, \
++ "%s: cannot find exact bss_conf for link_id %u\n",\
++ __func__, p->link_id); \
++ bss_conf; \
++})
++
++static inline struct ieee80211_link_sta *
++__rtw89_sta_rcu_dereference_link(struct rtw89_sta_link *rtwsta_link, bool *nolink)
++{
++ struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
++ struct ieee80211_link_sta *link_sta;
++
++ link_sta = rcu_dereference(sta->link[rtwsta_link->link_id]);
++ if (unlikely(!link_sta)) {
++ *nolink = true;
++ return &sta->deflink;
++ }
++
++ *nolink = false;
++ return link_sta;
++}
++
++#define rtw89_sta_rcu_dereference_link(rtwsta_link, assert) \
++({ \
++ typeof(rtwsta_link) p = rtwsta_link; \
++ struct ieee80211_link_sta *link_sta; \
++ bool nolink; \
++ \
++ link_sta = __rtw89_sta_rcu_dereference_link(p, &nolink); \
++ if (unlikely(nolink) && (assert)) \
++ rtw89_err(p->rtwsta->rtwdev, \
++ "%s: cannot find exact link_sta for link_id %u\n",\
++ __func__, p->link_id); \
++ link_sta; \
++})
++
+ static inline u8 rtw89_hw_to_rate_info_bw(enum rtw89_bandwidth hw_bw)
+ {
+ if (hw_bw == RTW89_CHANNEL_WIDTH_160)
+@@ -6078,29 +6283,29 @@ enum nl80211_he_ru_alloc rtw89_he_rua_to_ru_alloc(u16 rua)
+ }
+
+ static inline
+-struct rtw89_addr_cam_entry *rtw89_get_addr_cam_of(struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++struct rtw89_addr_cam_entry *rtw89_get_addr_cam_of(struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- if (rtwsta) {
+- struct ieee80211_sta *sta = rtwsta_to_sta(rtwsta);
++ if (rtwsta_link) {
++ struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
+
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE || sta->tdls)
+- return &rtwsta->addr_cam;
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE || sta->tdls)
++ return &rtwsta_link->addr_cam;
+ }
+- return &rtwvif->addr_cam;
++ return &rtwvif_link->addr_cam;
+ }
+
+ static inline
+-struct rtw89_bssid_cam_entry *rtw89_get_bssid_cam_of(struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++struct rtw89_bssid_cam_entry *rtw89_get_bssid_cam_of(struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- if (rtwsta) {
+- struct ieee80211_sta *sta = rtwsta_to_sta(rtwsta);
++ if (rtwsta_link) {
++ struct ieee80211_sta *sta = rtwsta_link_to_sta(rtwsta_link);
+
+ if (sta->tdls)
+- return &rtwsta->bssid_cam;
++ return &rtwsta_link->bssid_cam;
+ }
+- return &rtwvif->bssid_cam;
++ return &rtwvif_link->bssid_cam;
+ }
+
+ static inline
+@@ -6159,11 +6364,10 @@ const struct rtw89_chan_rcd *rtw89_chan_rcd_get(struct rtw89_dev *rtwdev,
+ static inline
+ const struct rtw89_chan *rtw89_scan_chan_get(struct rtw89_dev *rtwdev)
+ {
+- struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif;
+- struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
++ struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif;
+
+- if (rtwvif)
+- return rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
++ if (rtwvif_link)
++ return rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
+ else
+ return rtw89_chan_get(rtwdev, RTW89_CHANCTX_0);
+ }
+@@ -6240,12 +6444,12 @@ static inline void rtw89_chip_rfk_init_late(struct rtw89_dev *rtwdev)
+ }
+
+ static inline void rtw89_chip_rfk_channel(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+ if (chip->ops->rfk_channel)
+- chip->ops->rfk_channel(rtwdev, rtwvif);
++ chip->ops->rfk_channel(rtwdev, rtwvif_link);
+ }
+
+ static inline void rtw89_chip_rfk_band_changed(struct rtw89_dev *rtwdev,
+@@ -6259,12 +6463,12 @@ static inline void rtw89_chip_rfk_band_changed(struct rtw89_dev *rtwdev,
+ }
+
+ static inline void rtw89_chip_rfk_scan(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool start)
++ struct rtw89_vif_link *rtwvif_link, bool start)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+ if (chip->ops->rfk_scan)
+- chip->ops->rfk_scan(rtwdev, rtwvif, start);
++ chip->ops->rfk_scan(rtwdev, rtwvif_link, start);
+ }
+
+ static inline void rtw89_chip_rfk_track(struct rtw89_dev *rtwdev)
+@@ -6347,20 +6551,6 @@ static inline void rtw89_chip_cfg_txrx_path(struct rtw89_dev *rtwdev)
+ chip->ops->cfg_txrx_path(rtwdev);
+ }
+
+-static inline
+-void rtw89_chip_cfg_txpwr_ul_tb_offset(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif)
+-{
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- const struct rtw89_chip_info *chip = rtwdev->chip;
+-
+- if (!vif->bss_conf.he_support || !vif->cfg.assoc)
+- return;
+-
+- if (chip->ops->set_txpwr_ul_tb_offset)
+- chip->ops->set_txpwr_ul_tb_offset(rtwdev, 0, rtwvif->mac_idx);
+-}
+-
+ static inline void rtw89_chip_digital_pwr_comp(struct rtw89_dev *rtwdev,
+ enum rtw89_phy_idx phy_idx)
+ {
+@@ -6457,14 +6647,14 @@ int rtw89_chip_resume_sch_tx(struct rtw89_dev *rtwdev, u8 mac_idx, u32 tx_en)
+
+ static inline
+ int rtw89_chip_h2c_dctl_sec_cam(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+ if (!chip->ops->h2c_dctl_sec_cam)
+ return 0;
+- return chip->ops->h2c_dctl_sec_cam(rtwdev, rtwvif, rtwsta);
++ return chip->ops->h2c_dctl_sec_cam(rtwdev, rtwvif_link, rtwsta_link);
+ }
+
+ static inline u8 *get_hdr_bssid(struct ieee80211_hdr *hdr)
+@@ -6479,13 +6669,14 @@ static inline u8 *get_hdr_bssid(struct ieee80211_hdr *hdr)
+ return hdr->addr3;
+ }
+
+-static inline bool rtw89_sta_has_beamformer_cap(struct ieee80211_sta *sta)
++static inline
++bool rtw89_sta_has_beamformer_cap(struct ieee80211_link_sta *link_sta)
+ {
+- if ((sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) ||
+- (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE) ||
+- (sta->deflink.he_cap.he_cap_elem.phy_cap_info[3] &
++ if ((link_sta->vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) ||
++ (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE) ||
++ (link_sta->he_cap.he_cap_elem.phy_cap_info[3] &
+ IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER) ||
+- (sta->deflink.he_cap.he_cap_elem.phy_cap_info[4] &
++ (link_sta->he_cap.he_cap_elem.phy_cap_info[4] &
+ IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER))
+ return true;
+ return false;
+@@ -6605,21 +6796,21 @@ void rtw89_core_napi_start(struct rtw89_dev *rtwdev);
+ void rtw89_core_napi_stop(struct rtw89_dev *rtwdev);
+ int rtw89_core_napi_init(struct rtw89_dev *rtwdev);
+ void rtw89_core_napi_deinit(struct rtw89_dev *rtwdev);
+-int rtw89_core_sta_add(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
+-int rtw89_core_sta_assoc(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
+-int rtw89_core_sta_disassoc(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
+-int rtw89_core_sta_disconnect(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
+-int rtw89_core_sta_remove(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++int rtw89_core_sta_link_add(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
++int rtw89_core_sta_link_assoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
++int rtw89_core_sta_link_disassoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
++int rtw89_core_sta_link_disconnect(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
++int rtw89_core_sta_link_remove(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ void rtw89_core_set_tid_config(struct rtw89_dev *rtwdev,
+ struct ieee80211_sta *sta,
+ struct cfg80211_tid_config *tid_config);
+@@ -6635,22 +6826,40 @@ struct rtw89_dev *rtw89_alloc_ieee80211_hw(struct device *device,
+ void rtw89_free_ieee80211_hw(struct rtw89_dev *rtwdev);
+ u8 rtw89_acquire_mac_id(struct rtw89_dev *rtwdev);
+ void rtw89_release_mac_id(struct rtw89_dev *rtwdev, u8 mac_id);
++void rtw89_init_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ u8 mac_id, u8 port);
++void rtw89_init_sta(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ struct rtw89_sta *rtwsta, u8 mac_id);
++struct rtw89_vif_link *rtw89_vif_set_link(struct rtw89_vif *rtwvif,
++ unsigned int link_id);
++void rtw89_vif_unset_link(struct rtw89_vif *rtwvif, unsigned int link_id);
++struct rtw89_sta_link *rtw89_sta_set_link(struct rtw89_sta *rtwsta,
++ unsigned int link_id);
++void rtw89_sta_unset_link(struct rtw89_sta *rtwsta, unsigned int link_id);
+ void rtw89_core_set_chip_txpwr(struct rtw89_dev *rtwdev);
+ void rtw89_get_default_chandef(struct cfg80211_chan_def *chandef);
+ void rtw89_get_channel_params(const struct cfg80211_chan_def *chandef,
+ struct rtw89_chan *chan);
+ int rtw89_set_channel(struct rtw89_dev *rtwdev);
+-void rtw89_get_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_chan *chan);
+ u8 rtw89_core_acquire_bit_map(unsigned long *addr, unsigned long size);
+ void rtw89_core_release_bit_map(unsigned long *addr, u8 bit);
+ void rtw89_core_release_all_bits_map(unsigned long *addr, unsigned int nbits);
+ int rtw89_core_acquire_sta_ba_entry(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta, u8 tid, u8 *cam_idx);
++ struct rtw89_sta_link *rtwsta_link, u8 tid,
++ u8 *cam_idx);
+ int rtw89_core_release_sta_ba_entry(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta, u8 tid, u8 *cam_idx);
+-void rtw89_vif_type_mapping(struct ieee80211_vif *vif, bool assoc);
++ struct rtw89_sta_link *rtwsta_link, u8 tid,
++ u8 *cam_idx);
++void rtw89_core_free_sta_pending_ba(struct rtw89_dev *rtwdev,
++ struct ieee80211_sta *sta);
++void rtw89_core_free_sta_pending_forbid_ba(struct rtw89_dev *rtwdev,
++ struct ieee80211_sta *sta);
++void rtw89_core_free_sta_pending_roc_tx(struct rtw89_dev *rtwdev,
++ struct ieee80211_sta *sta);
++void rtw89_vif_type_mapping(struct rtw89_vif_link *rtwvif_link, bool assoc);
+ int rtw89_chip_info_setup(struct rtw89_dev *rtwdev);
++void rtw89_chip_cfg_txpwr_ul_tb_offset(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link);
+ bool rtw89_ra_report_to_bitrate(struct rtw89_dev *rtwdev, u8 rpt_rate, u16 *bitrate);
+ int rtw89_regd_setup(struct rtw89_dev *rtwdev);
+ int rtw89_regd_init(struct rtw89_dev *rtwdev,
+@@ -6667,13 +6876,15 @@ void rtw89_core_update_beacon_work(struct work_struct *work);
+ void rtw89_roc_work(struct work_struct *work);
+ void rtw89_roc_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
+ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
+-void rtw89_core_scan_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++void rtw89_core_scan_start(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ const u8 *mac_addr, bool hw_scan);
+ void rtw89_core_scan_complete(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif, bool hw_scan);
+-int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link, bool hw_scan);
++int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool active);
+-void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif);
++void rtw89_core_update_p2p_ps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf);
+ void rtw89_core_ntfy_btc_event(struct rtw89_dev *rtwdev, enum rtw89_btc_hmsg event);
+
+ #endif
+diff --git a/drivers/net/wireless/realtek/rtw89/debug.c b/drivers/net/wireless/realtek/rtw89/debug.c
+index 29f85210f91964..7391f131229a58 100644
+--- a/drivers/net/wireless/realtek/rtw89/debug.c
++++ b/drivers/net/wireless/realtek/rtw89/debug.c
+@@ -3506,7 +3506,9 @@ static ssize_t rtw89_debug_priv_fw_log_manual_set(struct file *filp,
+ return count;
+ }
+
+-static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
++static void rtw89_sta_link_info_get_iter(struct seq_file *m,
++ struct rtw89_dev *rtwdev,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ static const char * const he_gi_str[] = {
+ [NL80211_RATE_INFO_HE_GI_0_8] = "0.8",
+@@ -3518,20 +3520,26 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
+ [NL80211_RATE_INFO_EHT_GI_1_6] = "1.6",
+ [NL80211_RATE_INFO_EHT_GI_3_2] = "3.2",
+ };
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rate_info *rate = &rtwsta->ra_report.txrate;
+- struct ieee80211_rx_status *status = &rtwsta->rx_status;
+- struct seq_file *m = (struct seq_file *)data;
+- struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rate_info *rate = &rtwsta_link->ra_report.txrate;
++ struct ieee80211_rx_status *status = &rtwsta_link->rx_status;
+ struct rtw89_hal *hal = &rtwdev->hal;
+ u8 ant_num = hal->ant_diversity ? 2 : rtwdev->chip->rf_path_num;
+ bool ant_asterisk = hal->tx_path_diversity || hal->ant_diversity;
++ struct ieee80211_link_sta *link_sta;
+ u8 evm_min, evm_max, evm_1ss;
++ u16 max_rc_amsdu_len;
+ u8 rssi;
+ u8 snr;
+ int i;
+
+- seq_printf(m, "TX rate [%d]: ", rtwsta->mac_id);
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ max_rc_amsdu_len = link_sta->agg.max_rc_amsdu_len;
++
++ rcu_read_unlock();
++
++ seq_printf(m, "TX rate [%u, %u]: ", rtwsta_link->mac_id, rtwsta_link->link_id);
+
+ if (rate->flags & RATE_INFO_FLAGS_MCS)
+ seq_printf(m, "HT MCS-%d%s", rate->mcs,
+@@ -3549,13 +3557,13 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
+ eht_gi_str[rate->eht_gi] : "N/A");
+ else
+ seq_printf(m, "Legacy %d", rate->legacy);
+- seq_printf(m, "%s", rtwsta->ra_report.might_fallback_legacy ? " FB_G" : "");
++ seq_printf(m, "%s", rtwsta_link->ra_report.might_fallback_legacy ? " FB_G" : "");
+ seq_printf(m, " BW:%u", rtw89_rate_info_bw_to_mhz(rate->bw));
+- seq_printf(m, "\t(hw_rate=0x%x)", rtwsta->ra_report.hw_rate);
+- seq_printf(m, "\t==> agg_wait=%d (%d)\n", rtwsta->max_agg_wait,
+- sta->deflink.agg.max_rc_amsdu_len);
++ seq_printf(m, " (hw_rate=0x%x)", rtwsta_link->ra_report.hw_rate);
++ seq_printf(m, " ==> agg_wait=%d (%d)\n", rtwsta_link->max_agg_wait,
++ max_rc_amsdu_len);
+
+- seq_printf(m, "RX rate [%d]: ", rtwsta->mac_id);
++ seq_printf(m, "RX rate [%u, %u]: ", rtwsta_link->mac_id, rtwsta_link->link_id);
+
+ switch (status->encoding) {
+ case RX_ENC_LEGACY:
+@@ -3582,24 +3590,24 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
+ break;
+ }
+ seq_printf(m, " BW:%u", rtw89_rate_info_bw_to_mhz(status->bw));
+- seq_printf(m, "\t(hw_rate=0x%x)\n", rtwsta->rx_hw_rate);
++ seq_printf(m, " (hw_rate=0x%x)\n", rtwsta_link->rx_hw_rate);
+
+- rssi = ewma_rssi_read(&rtwsta->avg_rssi);
++ rssi = ewma_rssi_read(&rtwsta_link->avg_rssi);
+ seq_printf(m, "RSSI: %d dBm (raw=%d, prev=%d) [",
+- RTW89_RSSI_RAW_TO_DBM(rssi), rssi, rtwsta->prev_rssi);
++ RTW89_RSSI_RAW_TO_DBM(rssi), rssi, rtwsta_link->prev_rssi);
+ for (i = 0; i < ant_num; i++) {
+- rssi = ewma_rssi_read(&rtwsta->rssi[i]);
++ rssi = ewma_rssi_read(&rtwsta_link->rssi[i]);
+ seq_printf(m, "%d%s%s", RTW89_RSSI_RAW_TO_DBM(rssi),
+ ant_asterisk && (hal->antenna_tx & BIT(i)) ? "*" : "",
+ i + 1 == ant_num ? "" : ", ");
+ }
+ seq_puts(m, "]\n");
+
+- evm_1ss = ewma_evm_read(&rtwsta->evm_1ss);
++ evm_1ss = ewma_evm_read(&rtwsta_link->evm_1ss);
+ seq_printf(m, "EVM: [%2u.%02u, ", evm_1ss >> 2, (evm_1ss & 0x3) * 25);
+ for (i = 0; i < (hal->ant_diversity ? 2 : 1); i++) {
+- evm_min = ewma_evm_read(&rtwsta->evm_min[i]);
+- evm_max = ewma_evm_read(&rtwsta->evm_max[i]);
++ evm_min = ewma_evm_read(&rtwsta_link->evm_min[i]);
++ evm_max = ewma_evm_read(&rtwsta_link->evm_max[i]);
+
+ seq_printf(m, "%s(%2u.%02u, %2u.%02u)", i == 0 ? "" : " ",
+ evm_min >> 2, (evm_min & 0x3) * 25,
+@@ -3607,10 +3615,22 @@ static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
+ }
+ seq_puts(m, "]\t");
+
+- snr = ewma_snr_read(&rtwsta->avg_snr);
++ snr = ewma_snr_read(&rtwsta_link->avg_snr);
+ seq_printf(m, "SNR: %u\n", snr);
+ }
+
++static void rtw89_sta_info_get_iter(void *data, struct ieee80211_sta *sta)
++{
++ struct seq_file *m = (struct seq_file *)data;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id)
++ rtw89_sta_link_info_get_iter(m, rtwdev, rtwsta_link);
++}
++
+ static void
+ rtw89_debug_append_rx_rate(struct seq_file *m, struct rtw89_pkt_stat *pkt_stat,
+ enum rtw89_hw_rate first_rate, int len)
+@@ -3737,28 +3757,41 @@ static void rtw89_dump_pkt_offload(struct seq_file *m, struct list_head *pkt_lis
+ seq_puts(m, "\n");
+ }
+
++static void rtw89_vif_link_ids_get(struct seq_file *m, u8 *mac,
++ struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
++{
++ struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif_link->bssid_cam;
++
++ seq_printf(m, " [%u] %pM\n", rtwvif_link->mac_id, rtwvif_link->mac_addr);
++ seq_printf(m, "\tlink_id=%u\n", rtwvif_link->link_id);
++ seq_printf(m, "\tbssid_cam_idx=%u\n", bssid_cam->bssid_cam_idx);
++ rtw89_dump_addr_cam(m, rtwdev, &rtwvif_link->addr_cam);
++ rtw89_dump_pkt_offload(m, &rtwvif_link->general_pkt_list,
++ "\tpkt_ofld[GENERAL]: ");
++}
++
+ static
+ void rtw89_vif_ids_get_iter(void *data, u8 *mac, struct ieee80211_vif *vif)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_dev *rtwdev = rtwvif->rtwdev;
+ struct seq_file *m = (struct seq_file *)data;
+- struct rtw89_bssid_cam_entry *bssid_cam = &rtwvif->bssid_cam;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_dev *rtwdev = rtwvif->rtwdev;
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
+
+- seq_printf(m, "VIF [%d] %pM\n", rtwvif->mac_id, rtwvif->mac_addr);
+- seq_printf(m, "\tbssid_cam_idx=%u\n", bssid_cam->bssid_cam_idx);
+- rtw89_dump_addr_cam(m, rtwdev, &rtwvif->addr_cam);
+- rtw89_dump_pkt_offload(m, &rtwvif->general_pkt_list, "\tpkt_ofld[GENERAL]: ");
++ seq_printf(m, "VIF %pM\n", rtwvif->mac_addr);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_vif_link_ids_get(m, mac, rtwdev, rtwvif_link);
+ }
+
+-static void rtw89_dump_ba_cam(struct seq_file *m, struct rtw89_sta *rtwsta)
++static void rtw89_dump_ba_cam(struct seq_file *m, struct rtw89_dev *rtwdev,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+- struct rtw89_dev *rtwdev = rtwvif->rtwdev;
+ struct rtw89_ba_cam_entry *entry;
+ bool first = true;
+
+- list_for_each_entry(entry, &rtwsta->ba_cam_list, list) {
++ list_for_each_entry(entry, &rtwsta_link->ba_cam_list, list) {
+ if (first) {
+ seq_puts(m, "\tba_cam ");
+ first = false;
+@@ -3771,16 +3804,36 @@ static void rtw89_dump_ba_cam(struct seq_file *m, struct rtw89_sta *rtwsta)
+ seq_puts(m, "\n");
+ }
+
++static void rtw89_sta_link_ids_get(struct seq_file *m,
++ struct rtw89_dev *rtwdev,
++ struct rtw89_sta_link *rtwsta_link)
++{
++ struct ieee80211_link_sta *link_sta;
++
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++
++ seq_printf(m, " [%u] %pM\n", rtwsta_link->mac_id, link_sta->addr);
++
++ rcu_read_unlock();
++
++ seq_printf(m, "\tlink_id=%u\n", rtwsta_link->link_id);
++ rtw89_dump_addr_cam(m, rtwdev, &rtwsta_link->addr_cam);
++ rtw89_dump_ba_cam(m, rtwdev, rtwsta_link);
++}
++
+ static void rtw89_sta_ids_get_iter(void *data, struct ieee80211_sta *sta)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_dev *rtwdev = rtwsta->rtwdev;
+ struct seq_file *m = (struct seq_file *)data;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
+
+- seq_printf(m, "STA [%d] %pM %s\n", rtwsta->mac_id, sta->addr,
+- sta->tdls ? "(TDLS)" : "");
+- rtw89_dump_addr_cam(m, rtwdev, &rtwsta->addr_cam);
+- rtw89_dump_ba_cam(m, rtwsta);
++ seq_printf(m, "STA %pM %s\n", sta->addr, sta->tdls ? "(TDLS)" : "");
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id)
++ rtw89_sta_link_ids_get(m, rtwdev, rtwsta_link);
+ }
+
+ static int rtw89_debug_priv_stations_get(struct seq_file *m, void *v)
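As a usage note: the two iterators above, rtw89_vif_ids_get_iter() and rtw89_sta_ids_get_iter(), match mac80211's interface and station walker callback types, so a stations debugfs handler can drive them directly. A minimal sketch (not part of the patch), assuming m->private carries the rtw89_dev as is conventional for these show handlers:

	/* Sketch only: driving the per-link dump iterators above from a
	 * debugfs show handler.  Assumes m->private holds the rtw89_dev. */
	static int demo_stations_show(struct seq_file *m, void *v)
	{
		struct rtw89_dev *rtwdev = m->private;

		ieee80211_iterate_active_interfaces_atomic(rtwdev->hw,
				IEEE80211_IFACE_ITER_NORMAL,
				rtw89_vif_ids_get_iter, m);
		ieee80211_iterate_stations_atomic(rtwdev->hw,
				rtw89_sta_ids_get_iter, m);
		return 0;
	}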
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index d9b0e7ebe619a3..13a7c39ceb6f55 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -1741,8 +1741,8 @@ void rtw89_fw_log_dump(struct rtw89_dev *rtwdev, u8 *buf, u32 len)
+ }
+
+ #define H2C_CAM_LEN 60
+-int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, const u8 *scan_mac_addr)
++int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link, const u8 *scan_mac_addr)
+ {
+ struct sk_buff *skb;
+ int ret;
+@@ -1753,8 +1753,9 @@ int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ return -ENOMEM;
+ }
+ skb_put(skb, H2C_CAM_LEN);
+- rtw89_cam_fill_addr_cam_info(rtwdev, rtwvif, rtwsta, scan_mac_addr, skb->data);
+- rtw89_cam_fill_bssid_cam_info(rtwdev, rtwvif, rtwsta, skb->data);
++ rtw89_cam_fill_addr_cam_info(rtwdev, rtwvif_link, rtwsta_link, scan_mac_addr,
++ skb->data);
++ rtw89_cam_fill_bssid_cam_info(rtwdev, rtwvif_link, rtwsta_link, skb->data);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC,
+@@ -1776,8 +1777,8 @@ int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ }
+
+ int rtw89_fw_h2c_dctl_sec_cam_v1(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ struct rtw89_h2c_dctlinfo_ud_v1 *h2c;
+ u32 len = sizeof(*h2c);
+@@ -1792,7 +1793,7 @@ int rtw89_fw_h2c_dctl_sec_cam_v1(struct rtw89_dev *rtwdev,
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_dctlinfo_ud_v1 *)skb->data;
+
+- rtw89_cam_fill_dctl_sec_cam_info_v1(rtwdev, rtwvif, rtwsta, h2c);
++ rtw89_cam_fill_dctl_sec_cam_info_v1(rtwdev, rtwvif_link, rtwsta_link, h2c);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC,
+@@ -1815,8 +1816,8 @@ int rtw89_fw_h2c_dctl_sec_cam_v1(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_dctl_sec_cam_v1);
+
+ int rtw89_fw_h2c_dctl_sec_cam_v2(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ struct rtw89_h2c_dctlinfo_ud_v2 *h2c;
+ u32 len = sizeof(*h2c);
+@@ -1831,7 +1832,7 @@ int rtw89_fw_h2c_dctl_sec_cam_v2(struct rtw89_dev *rtwdev,
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_dctlinfo_ud_v2 *)skb->data;
+
+- rtw89_cam_fill_dctl_sec_cam_info_v2(rtwdev, rtwvif, rtwsta, h2c);
++ rtw89_cam_fill_dctl_sec_cam_info_v2(rtwdev, rtwvif_link, rtwsta_link, h2c);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC,
+@@ -1854,10 +1855,10 @@ int rtw89_fw_h2c_dctl_sec_cam_v2(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_dctl_sec_cam_v2);
+
+ int rtw89_fw_h2c_default_dmac_tbl_v2(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
++ u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
+ struct rtw89_h2c_dctlinfo_ud_v2 *h2c;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+@@ -1908,21 +1909,24 @@ int rtw89_fw_h2c_default_dmac_tbl_v2(struct rtw89_dev *rtwdev,
+ }
+ EXPORT_SYMBOL(rtw89_fw_h2c_default_dmac_tbl_v2);
+
+-int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ bool valid, struct ieee80211_ampdu_params *params)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_h2c_ba_cam *h2c;
+- u8 macid = rtwsta->mac_id;
++ u8 macid = rtwsta_link->mac_id;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+ u8 entry_idx;
+ int ret;
+
+ ret = valid ?
+- rtw89_core_acquire_sta_ba_entry(rtwdev, rtwsta, params->tid, &entry_idx) :
+- rtw89_core_release_sta_ba_entry(rtwdev, rtwsta, params->tid, &entry_idx);
++ rtw89_core_acquire_sta_ba_entry(rtwdev, rtwsta_link, params->tid,
++ &entry_idx) :
++ rtw89_core_release_sta_ba_entry(rtwdev, rtwsta_link, params->tid,
++ &entry_idx);
+ if (ret) {
+ /* it still works even if we don't have static BA CAM, because
+ * hardware can create dynamic BA CAM automatically.
+@@ -1960,7 +1964,8 @@ int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
+
+ if (chip->bacam_ver == RTW89_BACAM_V0_EXT) {
+ h2c->w1 |= le32_encode_bits(1, RTW89_H2C_BA_CAM_W1_STD_EN) |
+- le32_encode_bits(rtwvif->mac_idx, RTW89_H2C_BA_CAM_W1_BAND);
++ le32_encode_bits(rtwvif_link->mac_idx,
++ RTW89_H2C_BA_CAM_W1_BAND);
+ }
+
+ end:
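The w0/w1 words in these H2C builders are assembled with le32_encode_bits() from <linux/bitfield.h>, OR-ing one encoded field per mask into a little-endian word. A self-contained sketch of the idiom; the DEMO_* masks are hypothetical, not driver fields:

	#include <linux/bitfield.h>

	#define DEMO_W0_MACID	GENMASK(7, 0)
	#define DEMO_W0_ENTRY	GENMASK(11, 8)

	/* Pack a mac_id and BA CAM entry index into one H2C word. */
	static __le32 demo_pack_w0(u8 macid, u8 entry_idx)
	{
		return le32_encode_bits(macid, DEMO_W0_MACID) |
		       le32_encode_bits(entry_idx, DEMO_W0_ENTRY);
	}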
+@@ -2039,13 +2044,14 @@ void rtw89_fw_h2c_init_dynamic_ba_cam_v0_ext(struct rtw89_dev *rtwdev)
+ }
+ }
+
+-int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ bool valid, struct ieee80211_ampdu_params *params)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_h2c_ba_cam_v1 *h2c;
+- u8 macid = rtwsta->mac_id;
++ u8 macid = rtwsta_link->mac_id;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+ u8 entry_idx;
+@@ -2053,8 +2059,10 @@ int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
+ int ret;
+
+ ret = valid ?
+- rtw89_core_acquire_sta_ba_entry(rtwdev, rtwsta, params->tid, &entry_idx) :
+- rtw89_core_release_sta_ba_entry(rtwdev, rtwsta, params->tid, &entry_idx);
++ rtw89_core_acquire_sta_ba_entry(rtwdev, rtwsta_link, params->tid,
++ &entry_idx) :
++ rtw89_core_release_sta_ba_entry(rtwdev, rtwsta_link, params->tid,
++ &entry_idx);
+ if (ret) {
+ /* it still works even if we don't have static BA CAM, because
+ * hardware can create dynamic BA CAM automatically.
+@@ -2092,7 +2100,8 @@ int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
+ entry_idx += chip->bacam_dynamic_num; /* std entry right after dynamic ones */
+ h2c->w1 = le32_encode_bits(entry_idx, RTW89_H2C_BA_CAM_V1_W1_ENTRY_IDX_MASK) |
+ le32_encode_bits(1, RTW89_H2C_BA_CAM_V1_W1_STD_ENTRY_EN) |
+- le32_encode_bits(!!rtwvif->mac_idx, RTW89_H2C_BA_CAM_V1_W1_BAND_SEL);
++ le32_encode_bits(!!rtwvif_link->mac_idx,
++ RTW89_H2C_BA_CAM_V1_W1_BAND_SEL);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC,
+@@ -2197,15 +2206,14 @@ int rtw89_fw_h2c_fw_log(struct rtw89_dev *rtwdev, bool enable)
+ }
+
+ static struct sk_buff *rtw89_eapol_get(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ static const u8 gtkbody[] = {0xAA, 0xAA, 0x03, 0x00, 0x00, 0x00, 0x88,
+ 0x8E, 0x01, 0x03, 0x00, 0x5F, 0x02, 0x03};
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
+ u8 sec_hdr_len = rtw89_wow_get_sec_hdr_len(rtwdev);
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct rtw89_eapol_2_of_2 *eapol_pkt;
++ struct ieee80211_bss_conf *bss_conf;
+ struct ieee80211_hdr_3addr *hdr;
+ struct sk_buff *skb;
+ u8 key_des_ver;
+@@ -2227,10 +2235,17 @@ static struct sk_buff *rtw89_eapol_get(struct rtw89_dev *rtwdev,
+ hdr->frame_control = cpu_to_le16(IEEE80211_FTYPE_DATA |
+ IEEE80211_FCTL_TODS |
+ IEEE80211_FCTL_PROTECTED);
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++
+ ether_addr_copy(hdr->addr1, bss_conf->bssid);
+- ether_addr_copy(hdr->addr2, vif->addr);
++ ether_addr_copy(hdr->addr2, bss_conf->addr);
+ ether_addr_copy(hdr->addr3, bss_conf->bssid);
+
++ rcu_read_unlock();
++
+ skb_put_zero(skb, sec_hdr_len);
+
+ eapol_pkt = skb_put_zero(skb, sizeof(*eapol_pkt));
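The shape above recurs throughout this patch: the per-link bss_conf is only valid under rcu_read_lock(), so anything needed afterwards is copied out before unlocking. A minimal sketch using the patch's own helper:

	/* Sketch only: copy the link addresses out under RCU so the
	 * caller can use them lock-free afterwards. */
	static void demo_copy_link_addrs(struct rtw89_vif_link *rtwvif_link,
					 u8 *bssid, u8 *self)
	{
		struct ieee80211_bss_conf *bss_conf;

		rcu_read_lock();
		bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
		ether_addr_copy(bssid, bss_conf->bssid);
		ether_addr_copy(self, bss_conf->addr);
		rcu_read_unlock();
	}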
+@@ -2241,11 +2256,10 @@ static struct sk_buff *rtw89_eapol_get(struct rtw89_dev *rtwdev,
+ }
+
+ static struct sk_buff *rtw89_sa_query_get(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
+ u8 sec_hdr_len = rtw89_wow_get_sec_hdr_len(rtwdev);
++ struct ieee80211_bss_conf *bss_conf;
+ struct ieee80211_hdr_3addr *hdr;
+ struct rtw89_sa_query *sa_query;
+ struct sk_buff *skb;
+@@ -2258,10 +2272,17 @@ static struct sk_buff *rtw89_sa_query_get(struct rtw89_dev *rtwdev,
+ hdr->frame_control = cpu_to_le16(IEEE80211_FTYPE_MGMT |
+ IEEE80211_STYPE_ACTION |
+ IEEE80211_FCTL_PROTECTED);
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++
+ ether_addr_copy(hdr->addr1, bss_conf->bssid);
+- ether_addr_copy(hdr->addr2, vif->addr);
++ ether_addr_copy(hdr->addr2, bss_conf->addr);
+ ether_addr_copy(hdr->addr3, bss_conf->bssid);
+
++ rcu_read_unlock();
++
+ skb_put_zero(skb, sec_hdr_len);
+
+ sa_query = skb_put_zero(skb, sizeof(*sa_query));
+@@ -2272,8 +2293,9 @@ static struct sk_buff *rtw89_sa_query_get(struct rtw89_dev *rtwdev,
+ }
+
+ static struct sk_buff *rtw89_arp_response_get(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ u8 sec_hdr_len = rtw89_wow_get_sec_hdr_len(rtwdev);
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct ieee80211_hdr_3addr *hdr;
+@@ -2295,9 +2317,9 @@ static struct sk_buff *rtw89_arp_response_get(struct rtw89_dev *rtwdev,
+ fc = cpu_to_le16(IEEE80211_FTYPE_DATA | IEEE80211_FCTL_TODS);
+
+ hdr->frame_control = fc;
+- ether_addr_copy(hdr->addr1, rtwvif->bssid);
+- ether_addr_copy(hdr->addr2, rtwvif->mac_addr);
+- ether_addr_copy(hdr->addr3, rtwvif->bssid);
++ ether_addr_copy(hdr->addr1, rtwvif_link->bssid);
++ ether_addr_copy(hdr->addr2, rtwvif_link->mac_addr);
++ ether_addr_copy(hdr->addr3, rtwvif_link->bssid);
+
+ skb_put_zero(skb, sec_hdr_len);
+
+@@ -2312,18 +2334,18 @@ static struct sk_buff *rtw89_arp_response_get(struct rtw89_dev *rtwdev,
+ arp_hdr->ar_pln = 4;
+ arp_hdr->ar_op = htons(ARPOP_REPLY);
+
+- ether_addr_copy(arp_skb->sender_hw, rtwvif->mac_addr);
++ ether_addr_copy(arp_skb->sender_hw, rtwvif_link->mac_addr);
+ arp_skb->sender_ip = rtwvif->ip_addr;
+
+ return skb;
+ }
+
+ static int rtw89_fw_h2c_add_general_pkt(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ enum rtw89_fw_pkt_ofld_type type,
+ u8 *id)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ struct rtw89_pktofld_info *info;
+ struct sk_buff *skb;
+ int ret;
+@@ -2346,13 +2368,13 @@ static int rtw89_fw_h2c_add_general_pkt(struct rtw89_dev *rtwdev,
+ skb = ieee80211_nullfunc_get(rtwdev->hw, vif, -1, true);
+ break;
+ case RTW89_PKT_OFLD_TYPE_EAPOL_KEY:
+- skb = rtw89_eapol_get(rtwdev, rtwvif);
++ skb = rtw89_eapol_get(rtwdev, rtwvif_link);
+ break;
+ case RTW89_PKT_OFLD_TYPE_SA_QUERY:
+- skb = rtw89_sa_query_get(rtwdev, rtwvif);
++ skb = rtw89_sa_query_get(rtwdev, rtwvif_link);
+ break;
+ case RTW89_PKT_OFLD_TYPE_ARP_RSP:
+- skb = rtw89_arp_response_get(rtwdev, rtwvif);
++ skb = rtw89_arp_response_get(rtwdev, rtwvif_link);
+ break;
+ default:
+ goto err;
+@@ -2367,7 +2389,7 @@ static int rtw89_fw_h2c_add_general_pkt(struct rtw89_dev *rtwdev,
+ if (ret)
+ goto err;
+
+- list_add_tail(&info->list, &rtwvif->general_pkt_list);
++ list_add_tail(&info->list, &rtwvif_link->general_pkt_list);
+ *id = info->id;
+ return 0;
+
+@@ -2377,9 +2399,10 @@ static int rtw89_fw_h2c_add_general_pkt(struct rtw89_dev *rtwdev,
+ }
+
+ void rtw89_fw_release_general_pkt_list_vif(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool notify_fw)
++ struct rtw89_vif_link *rtwvif_link,
++ bool notify_fw)
+ {
+- struct list_head *pkt_list = &rtwvif->general_pkt_list;
++ struct list_head *pkt_list = &rtwvif_link->general_pkt_list;
+ struct rtw89_pktofld_info *info, *tmp;
+
+ list_for_each_entry_safe(info, tmp, pkt_list, list) {
+@@ -2394,16 +2417,20 @@ void rtw89_fw_release_general_pkt_list_vif(struct rtw89_dev *rtwdev,
+
+ void rtw89_fw_release_general_pkt_list(struct rtw89_dev *rtwdev, bool notify_fw)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif, notify_fw);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_fw_release_general_pkt_list_vif(rtwdev, rtwvif_link,
++ notify_fw);
+ }
+
+ #define H2C_GENERAL_PKT_LEN 6
+ #define H2C_GENERAL_PKT_ID_UND 0xff
+ int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, u8 macid)
++ struct rtw89_vif_link *rtwvif_link, u8 macid)
+ {
+ u8 pkt_id_ps_poll = H2C_GENERAL_PKT_ID_UND;
+ u8 pkt_id_null = H2C_GENERAL_PKT_ID_UND;
+@@ -2411,11 +2438,11 @@ int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev,
+ struct sk_buff *skb;
+ int ret;
+
+- rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_PS_POLL, &pkt_id_ps_poll);
+- rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_NULL_DATA, &pkt_id_null);
+- rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_QOS_NULL, &pkt_id_qos_null);
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_GENERAL_PKT_LEN);
+@@ -2494,10 +2521,10 @@ int rtw89_fw_h2c_lps_parm(struct rtw89_dev *rtwdev,
+ return ret;
+ }
+
+-int rtw89_fw_h2c_lps_ch_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_fw_h2c_lps_ch_info(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_h2c_lps_ch_info *h2c;
+ u32 len = sizeof(*h2c);
+@@ -2546,13 +2573,14 @@ int rtw89_fw_h2c_lps_ch_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ }
+
+ #define H2C_P2P_ACT_LEN 20
+-int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf,
+ struct ieee80211_p2p_noa_desc *desc,
+ u8 act, u8 noa_id)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- bool p2p_type_gc = rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT;
+- u8 ctwindow_oppps = vif->bss_conf.p2p_noa_attr.oppps_ctwindow;
++ bool p2p_type_gc = rtwvif_link->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT;
++ u8 ctwindow_oppps = bss_conf->p2p_noa_attr.oppps_ctwindow;
+ struct sk_buff *skb;
+ u8 *cmd;
+ int ret;
+@@ -2565,7 +2593,7 @@ int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ skb_put(skb, H2C_P2P_ACT_LEN);
+ cmd = skb->data;
+
+- RTW89_SET_FWCMD_P2P_MACID(cmd, rtwvif->mac_id);
++ RTW89_SET_FWCMD_P2P_MACID(cmd, rtwvif_link->mac_id);
+ RTW89_SET_FWCMD_P2P_P2PID(cmd, 0);
+ RTW89_SET_FWCMD_P2P_NOAID(cmd, noa_id);
+ RTW89_SET_FWCMD_P2P_ACT(cmd, act);
+@@ -2622,11 +2650,11 @@ static void __rtw89_fw_h2c_set_tx_path(struct rtw89_dev *rtwdev,
+
+ #define H2C_CMC_TBL_LEN 68
+ int rtw89_fw_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+- u8 macid = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
++ u8 macid = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
+ struct sk_buff *skb;
+ int ret;
+
+@@ -2648,7 +2676,7 @@ int rtw89_fw_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev,
+ }
+ SET_CMC_TBL_DOPPLER_CTRL(skb->data, 0);
+ SET_CMC_TBL_TXPWR_TOLERENCE(skb->data, 0);
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE)
+ SET_CMC_TBL_DATA_DCM(skb->data, 0);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+@@ -2671,10 +2699,10 @@ int rtw89_fw_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_default_cmac_tbl);
+
+ int rtw89_fw_h2c_default_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
++ u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
+ struct rtw89_h2c_cctlinfo_ud_g7 *h2c;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+@@ -2755,24 +2783,25 @@ int rtw89_fw_h2c_default_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_default_cmac_tbl_g7);
+
+ static void __get_sta_he_pkt_padding(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta, u8 *pads)
++ struct ieee80211_link_sta *link_sta,
++ u8 *pads)
+ {
+ bool ppe_th;
+ u8 ppe16, ppe8;
+- u8 nss = min(sta->deflink.rx_nss, rtwdev->hal.tx_nss) - 1;
+- u8 ppe_thres_hdr = sta->deflink.he_cap.ppe_thres[0];
++ u8 nss = min(link_sta->rx_nss, rtwdev->hal.tx_nss) - 1;
++ u8 ppe_thres_hdr = link_sta->he_cap.ppe_thres[0];
+ u8 ru_bitmap;
+ u8 n, idx, sh;
+ u16 ppe;
+ int i;
+
+ ppe_th = FIELD_GET(IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT,
+- sta->deflink.he_cap.he_cap_elem.phy_cap_info[6]);
++ link_sta->he_cap.he_cap_elem.phy_cap_info[6]);
+ if (!ppe_th) {
+ u8 pad;
+
+ pad = FIELD_GET(IEEE80211_HE_PHY_CAP9_NOMINAL_PKT_PADDING_MASK,
+- sta->deflink.he_cap.he_cap_elem.phy_cap_info[9]);
++ link_sta->he_cap.he_cap_elem.phy_cap_info[9]);
+
+ for (i = 0; i < RTW89_PPE_BW_NUM; i++)
+ pads[i] = pad;
+@@ -2794,7 +2823,7 @@ static void __get_sta_he_pkt_padding(struct rtw89_dev *rtwdev,
+ sh = n & 7;
+ n += IEEE80211_PPE_THRES_INFO_PPET_SIZE * 2;
+
+- ppe = le16_to_cpu(*((__le16 *)&sta->deflink.he_cap.ppe_thres[idx]));
++ ppe = le16_to_cpu(*((__le16 *)&link_sta->he_cap.ppe_thres[idx]));
+ ppe16 = (ppe >> sh) & IEEE80211_PPE_THRES_NSS_MASK;
+ sh += IEEE80211_PPE_THRES_INFO_PPET_SIZE;
+ ppe8 = (ppe >> sh) & IEEE80211_PPE_THRES_NSS_MASK;
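The idx/sh arithmetic above locates a PPE threshold field that starts n bits into the byte array: the byte index is n >> 3, the bit offset within it is n & 7, and a 16-bit load lets the field straddle a byte boundary. A minimal sketch of the same math, assuming only the kernel's get_unaligned_le16() (which the EHT variant below also uses):

	/* Read a sub-byte field that starts n bits into buf[];
	 * width_mask selects the field's bits after shifting. */
	static u8 demo_read_bitfield(const u8 *buf, unsigned int n,
				     u8 width_mask)
	{
		unsigned int idx = n >> 3;	/* byte holding the first bit */
		unsigned int sh = n & 7;	/* bit offset in that byte */
		u16 word = get_unaligned_le16(buf + idx);

		return (word >> sh) & width_mask;
	}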
+@@ -2809,23 +2838,35 @@ static void __get_sta_he_pkt_padding(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+- struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
++ struct ieee80211_link_sta *link_sta;
+ struct sk_buff *skb;
+ u8 pads[RTW89_PPE_BW_NUM];
+- u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
++ u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
+ u16 lowest_rate;
+ int ret;
+
+ memset(pads, 0, sizeof(pads));
+- if (sta && sta->deflink.he_cap.has_he)
+- __get_sta_he_pkt_padding(rtwdev, sta, pads);
++
++ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_CMC_TBL_LEN);
++ if (!skb) {
++ rtw89_err(rtwdev, "failed to alloc skb for fw dl\n");
++ return -ENOMEM;
++ }
++
++ rcu_read_lock();
++
++ if (rtwsta_link)
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++
++ if (rtwsta_link && link_sta->he_cap.has_he)
++ __get_sta_he_pkt_padding(rtwdev, link_sta, pads);
+
+ if (vif->p2p)
+ lowest_rate = RTW89_HW_RATE_OFDM6;
+@@ -2834,11 +2875,6 @@ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+ else
+ lowest_rate = RTW89_HW_RATE_OFDM6;
+
+- skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_CMC_TBL_LEN);
+- if (!skb) {
+- rtw89_err(rtwdev, "failed to alloc skb for fw dl\n");
+- return -ENOMEM;
+- }
+ skb_put(skb, H2C_CMC_TBL_LEN);
+ SET_CTRL_INFO_MACID(skb->data, mac_id);
+ SET_CTRL_INFO_OPERATION(skb->data, 1);
+@@ -2851,7 +2887,7 @@ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+ SET_CMC_TBL_ULDL(skb->data, 1);
+ else
+ SET_CMC_TBL_ULDL(skb->data, 0);
+- SET_CMC_TBL_MULTI_PORT_ID(skb->data, rtwvif->port);
++ SET_CMC_TBL_MULTI_PORT_ID(skb->data, rtwvif_link->port);
+ if (chip->h2c_cctl_func_id == H2C_FUNC_MAC_CCTLINFO_UD_V1) {
+ SET_CMC_TBL_NOMINAL_PKT_PADDING_V1(skb->data, pads[RTW89_CHANNEL_WIDTH_20]);
+ SET_CMC_TBL_NOMINAL_PKT_PADDING40_V1(skb->data, pads[RTW89_CHANNEL_WIDTH_40]);
+@@ -2863,12 +2899,14 @@ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+ SET_CMC_TBL_NOMINAL_PKT_PADDING80(skb->data, pads[RTW89_CHANNEL_WIDTH_80]);
+ SET_CMC_TBL_NOMINAL_PKT_PADDING160(skb->data, pads[RTW89_CHANNEL_WIDTH_160]);
+ }
+- if (sta)
++ if (rtwsta_link)
+ SET_CMC_TBL_BSR_QUEUE_SIZE_FORMAT(skb->data,
+- sta->deflink.he_cap.has_he);
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
++ link_sta->he_cap.has_he);
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE)
+ SET_CMC_TBL_DATA_DCM(skb->data, 0);
+
++ rcu_read_unlock();
++
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC, H2C_CL_MAC_FR_EXCHG,
+ chip->h2c_cctl_func_id, 0, 1,
+@@ -2889,9 +2927,10 @@ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_assoc_cmac_tbl);
+
+ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta, u8 *pads)
++ struct ieee80211_link_sta *link_sta,
++ u8 *pads)
+ {
+- u8 nss = min(sta->deflink.rx_nss, rtwdev->hal.tx_nss) - 1;
++ u8 nss = min(link_sta->rx_nss, rtwdev->hal.tx_nss) - 1;
+ u16 ppe_thres_hdr;
+ u8 ppe16, ppe8;
+ u8 n, idx, sh;
+@@ -2900,12 +2939,12 @@ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev,
+ u16 ppe;
+ int i;
+
+- ppe_th = !!u8_get_bits(sta->deflink.eht_cap.eht_cap_elem.phy_cap_info[5],
++ ppe_th = !!u8_get_bits(link_sta->eht_cap.eht_cap_elem.phy_cap_info[5],
+ IEEE80211_EHT_PHY_CAP5_PPE_THRESHOLD_PRESENT);
+ if (!ppe_th) {
+ u8 pad;
+
+- pad = u8_get_bits(sta->deflink.eht_cap.eht_cap_elem.phy_cap_info[5],
++ pad = u8_get_bits(link_sta->eht_cap.eht_cap_elem.phy_cap_info[5],
+ IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_MASK);
+
+ for (i = 0; i < RTW89_PPE_BW_NUM; i++)
+@@ -2914,7 +2953,7 @@ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev,
+ return;
+ }
+
+- ppe_thres_hdr = get_unaligned_le16(sta->deflink.eht_cap.eht_ppe_thres);
++ ppe_thres_hdr = get_unaligned_le16(link_sta->eht_cap.eht_ppe_thres);
+ ru_bitmap = u16_get_bits(ppe_thres_hdr,
+ IEEE80211_EHT_PPE_THRES_RU_INDEX_BITMASK_MASK);
+ n = hweight8(ru_bitmap);
+@@ -2931,7 +2970,7 @@ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev,
+ sh = n & 7;
+ n += IEEE80211_EHT_PPE_THRES_INFO_PPET_SIZE * 2;
+
+- ppe = get_unaligned_le16(sta->deflink.eht_cap.eht_ppe_thres + idx);
++ ppe = get_unaligned_le16(link_sta->eht_cap.eht_ppe_thres + idx);
+ ppe16 = (ppe >> sh) & IEEE80211_PPE_THRES_NSS_MASK;
+ sh += IEEE80211_EHT_PPE_THRES_INFO_PPET_SIZE;
+ ppe8 = (ppe >> sh) & IEEE80211_PPE_THRES_NSS_MASK;
+@@ -2946,14 +2985,15 @@ static void __get_sta_eht_pkt_padding(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- struct rtw89_sta *rtwsta = sta_to_rtwsta_safe(sta);
+- const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
+- u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
++ u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
+ struct rtw89_h2c_cctlinfo_ud_g7 *h2c;
++ struct ieee80211_bss_conf *bss_conf;
++ struct ieee80211_link_sta *link_sta;
+ u8 pads[RTW89_PPE_BW_NUM];
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+@@ -2961,11 +3001,24 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ int ret;
+
+ memset(pads, 0, sizeof(pads));
+- if (sta) {
+- if (sta->deflink.eht_cap.has_eht)
+- __get_sta_eht_pkt_padding(rtwdev, sta, pads);
+- else if (sta->deflink.he_cap.has_he)
+- __get_sta_he_pkt_padding(rtwdev, sta, pads);
++
++ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len);
++ if (!skb) {
++ rtw89_err(rtwdev, "failed to alloc skb for cmac g7\n");
++ return -ENOMEM;
++ }
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++
++ if (rtwsta_link) {
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++
++ if (link_sta->eht_cap.has_eht)
++ __get_sta_eht_pkt_padding(rtwdev, link_sta, pads);
++ else if (link_sta->he_cap.has_he)
++ __get_sta_he_pkt_padding(rtwdev, link_sta, pads);
+ }
+
+ if (vif->p2p)
+@@ -2975,11 +3028,6 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ else
+ lowest_rate = RTW89_HW_RATE_OFDM6;
+
+- skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len);
+- if (!skb) {
+- rtw89_err(rtwdev, "failed to alloc skb for cmac g7\n");
+- return -ENOMEM;
+- }
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_cctlinfo_ud_g7 *)skb->data;
+
+@@ -3000,16 +3048,16 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ h2c->w3 = le32_encode_bits(0, CCTLINFO_G7_W3_RTS_TXCNT_LMT_SEL);
+ h2c->m3 = cpu_to_le32(CCTLINFO_G7_W3_RTS_TXCNT_LMT_SEL);
+
+- h2c->w4 = le32_encode_bits(rtwvif->port, CCTLINFO_G7_W4_MULTI_PORT_ID);
++ h2c->w4 = le32_encode_bits(rtwvif_link->port, CCTLINFO_G7_W4_MULTI_PORT_ID);
+ h2c->m4 = cpu_to_le32(CCTLINFO_G7_W4_MULTI_PORT_ID);
+
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE) {
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE) {
+ h2c->w4 |= le32_encode_bits(0, CCTLINFO_G7_W4_DATA_DCM);
+ h2c->m4 |= cpu_to_le32(CCTLINFO_G7_W4_DATA_DCM);
+ }
+
+- if (vif->bss_conf.eht_support) {
+- u16 punct = vif->bss_conf.chanreq.oper.punctured;
++ if (bss_conf->eht_support) {
++ u16 punct = bss_conf->chanreq.oper.punctured;
+
+ h2c->w4 |= le32_encode_bits(~punct,
+ CCTLINFO_G7_W4_ACT_SUBCH_CBW);
+@@ -3036,12 +3084,14 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ CCTLINFO_G7_W6_ULDL);
+ h2c->m6 = cpu_to_le32(CCTLINFO_G7_W6_ULDL);
+
+- if (sta) {
+- h2c->w8 = le32_encode_bits(sta->deflink.he_cap.has_he,
++ if (rtwsta_link) {
++ h2c->w8 = le32_encode_bits(link_sta->he_cap.has_he,
+ CCTLINFO_G7_W8_BSR_QUEUE_SIZE_FORMAT);
+ h2c->m8 = cpu_to_le32(CCTLINFO_G7_W8_BSR_QUEUE_SIZE_FORMAT);
+ }
+
++ rcu_read_unlock();
++
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC, H2C_CL_MAC_FR_EXCHG,
+ H2C_FUNC_MAC_CCTLINFO_UD_G7, 0, 1,
+@@ -3062,10 +3112,10 @@ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_assoc_cmac_tbl_g7);
+
+ int rtw89_fw_h2c_ampdu_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = rtwsta_link->rtwsta;
+ struct rtw89_h2c_cctlinfo_ud_g7 *h2c;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+@@ -3102,7 +3152,7 @@ int rtw89_fw_h2c_ampdu_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ else if (agg_num > 0x200 && agg_num <= 0x400)
+ ba_bmap = 5;
+
+- h2c->c0 = le32_encode_bits(rtwsta->mac_id, CCTLINFO_G7_C0_MACID) |
++ h2c->c0 = le32_encode_bits(rtwsta_link->mac_id, CCTLINFO_G7_C0_MACID) |
+ le32_encode_bits(1, CCTLINFO_G7_C0_OP);
+
+ h2c->w3 = le32_encode_bits(ba_bmap, CCTLINFO_G7_W3_BA_BMAP);
+@@ -3128,7 +3178,7 @@ int rtw89_fw_h2c_ampdu_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_ampdu_cmac_tbl_g7);
+
+ int rtw89_fw_h2c_txtime_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct sk_buff *skb;
+@@ -3140,15 +3190,15 @@ int rtw89_fw_h2c_txtime_cmac_tbl(struct rtw89_dev *rtwdev,
+ return -ENOMEM;
+ }
+ skb_put(skb, H2C_CMC_TBL_LEN);
+- SET_CTRL_INFO_MACID(skb->data, rtwsta->mac_id);
++ SET_CTRL_INFO_MACID(skb->data, rtwsta_link->mac_id);
+ SET_CTRL_INFO_OPERATION(skb->data, 1);
+- if (rtwsta->cctl_tx_time) {
++ if (rtwsta_link->cctl_tx_time) {
+ SET_CMC_TBL_AMPDU_TIME_SEL(skb->data, 1);
+- SET_CMC_TBL_AMPDU_MAX_TIME(skb->data, rtwsta->ampdu_max_time);
++ SET_CMC_TBL_AMPDU_MAX_TIME(skb->data, rtwsta_link->ampdu_max_time);
+ }
+- if (rtwsta->cctl_tx_retry_limit) {
++ if (rtwsta_link->cctl_tx_retry_limit) {
+ SET_CMC_TBL_DATA_TXCNT_LMT_SEL(skb->data, 1);
+- SET_CMC_TBL_DATA_TX_CNT_LMT(skb->data, rtwsta->data_tx_cnt_lmt);
++ SET_CMC_TBL_DATA_TX_CNT_LMT(skb->data, rtwsta_link->data_tx_cnt_lmt);
+ }
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+@@ -3170,7 +3220,7 @@ int rtw89_fw_h2c_txtime_cmac_tbl(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_fw_h2c_txpath_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct sk_buff *skb;
+@@ -3185,7 +3235,7 @@ int rtw89_fw_h2c_txpath_cmac_tbl(struct rtw89_dev *rtwdev,
+ return -ENOMEM;
+ }
+ skb_put(skb, H2C_CMC_TBL_LEN);
+- SET_CTRL_INFO_MACID(skb->data, rtwsta->mac_id);
++ SET_CTRL_INFO_MACID(skb->data, rtwsta_link->mac_id);
+ SET_CTRL_INFO_OPERATION(skb->data, 1);
+
+ __rtw89_fw_h2c_set_tx_path(rtwdev, skb);
+@@ -3209,11 +3259,11 @@ int rtw89_fw_h2c_txpath_cmac_tbl(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ rtwvif_link->chanctx_idx);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ struct rtw89_h2c_bcn_upd *h2c;
+ struct sk_buff *skb_beacon;
+ struct ieee80211_hdr *hdr;
+@@ -3240,7 +3290,7 @@ int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev,
+ return -ENOMEM;
+ }
+
+- noa_len = rtw89_p2p_noa_fetch(rtwvif, &noa_data);
++ noa_len = rtw89_p2p_noa_fetch(rtwvif_link, &noa_data);
+ if (noa_len &&
+ (noa_len <= skb_tailroom(skb_beacon) ||
+ pskb_expand_head(skb_beacon, 0, noa_len, GFP_KERNEL) == 0)) {
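The NoA append above keeps a common skb idiom, untouched by this hunk: use the existing tailroom when the extra bytes fit, otherwise grow the tail with pskb_expand_head() first. A minimal sketch with standard skb helpers:

	/* Sketch of the tailroom-or-expand append idiom. */
	static int demo_skb_append(struct sk_buff *skb, const void *data,
				   unsigned int len)
	{
		if (len > skb_tailroom(skb) &&
		    pskb_expand_head(skb, 0, len, GFP_KERNEL) != 0)
			return -ENOMEM;

		skb_put_data(skb, data, len);
		return 0;
	}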
+@@ -3260,11 +3310,11 @@ int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev,
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_bcn_upd *)skb->data;
+
+- h2c->w0 = le32_encode_bits(rtwvif->port, RTW89_H2C_BCN_UPD_W0_PORT) |
++ h2c->w0 = le32_encode_bits(rtwvif_link->port, RTW89_H2C_BCN_UPD_W0_PORT) |
+ le32_encode_bits(0, RTW89_H2C_BCN_UPD_W0_MBSSID) |
+- le32_encode_bits(rtwvif->mac_idx, RTW89_H2C_BCN_UPD_W0_BAND) |
++ le32_encode_bits(rtwvif_link->mac_idx, RTW89_H2C_BCN_UPD_W0_BAND) |
+ le32_encode_bits(tim_offset | BIT(7), RTW89_H2C_BCN_UPD_W0_GRP_IE_OFST);
+- h2c->w1 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_BCN_UPD_W1_MACID) |
++ h2c->w1 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_BCN_UPD_W1_MACID) |
+ le32_encode_bits(RTW89_MGMT_HW_SSN_SEL, RTW89_H2C_BCN_UPD_W1_SSN_SEL) |
+ le32_encode_bits(RTW89_MGMT_HW_SEQ_MODE, RTW89_H2C_BCN_UPD_W1_SSN_MODE) |
+ le32_encode_bits(beacon_rate, RTW89_H2C_BCN_UPD_W1_RATE);
+@@ -3289,10 +3339,10 @@ int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev,
+ EXPORT_SYMBOL(rtw89_fw_h2c_update_beacon);
+
+ int rtw89_fw_h2c_update_beacon_be(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ struct rtw89_h2c_bcn_upd_be *h2c;
+ struct sk_buff *skb_beacon;
+ struct ieee80211_hdr *hdr;
+@@ -3319,7 +3369,7 @@ int rtw89_fw_h2c_update_beacon_be(struct rtw89_dev *rtwdev,
+ return -ENOMEM;
+ }
+
+- noa_len = rtw89_p2p_noa_fetch(rtwvif, &noa_data);
++ noa_len = rtw89_p2p_noa_fetch(rtwvif_link, &noa_data);
+ if (noa_len &&
+ (noa_len <= skb_tailroom(skb_beacon) ||
+ pskb_expand_head(skb_beacon, 0, noa_len, GFP_KERNEL) == 0)) {
+@@ -3339,11 +3389,11 @@ int rtw89_fw_h2c_update_beacon_be(struct rtw89_dev *rtwdev,
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_bcn_upd_be *)skb->data;
+
+- h2c->w0 = le32_encode_bits(rtwvif->port, RTW89_H2C_BCN_UPD_BE_W0_PORT) |
++ h2c->w0 = le32_encode_bits(rtwvif_link->port, RTW89_H2C_BCN_UPD_BE_W0_PORT) |
+ le32_encode_bits(0, RTW89_H2C_BCN_UPD_BE_W0_MBSSID) |
+- le32_encode_bits(rtwvif->mac_idx, RTW89_H2C_BCN_UPD_BE_W0_BAND) |
++ le32_encode_bits(rtwvif_link->mac_idx, RTW89_H2C_BCN_UPD_BE_W0_BAND) |
+ le32_encode_bits(tim_offset | BIT(7), RTW89_H2C_BCN_UPD_BE_W0_GRP_IE_OFST);
+- h2c->w1 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_BCN_UPD_BE_W1_MACID) |
++ h2c->w1 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_BCN_UPD_BE_W1_MACID) |
+ le32_encode_bits(RTW89_MGMT_HW_SSN_SEL, RTW89_H2C_BCN_UPD_BE_W1_SSN_SEL) |
+ le32_encode_bits(RTW89_MGMT_HW_SEQ_MODE, RTW89_H2C_BCN_UPD_BE_W1_SSN_MODE) |
+ le32_encode_bits(beacon_rate, RTW89_H2C_BCN_UPD_BE_W1_RATE);
+@@ -3373,22 +3423,22 @@ EXPORT_SYMBOL(rtw89_fw_h2c_update_beacon_be);
+
+ #define H2C_ROLE_MAINTAIN_LEN 4
+ int rtw89_fw_h2c_role_maintain(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ enum rtw89_upd_mode upd_mode)
+ {
+ struct sk_buff *skb;
+- u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
++ u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
+ u8 self_role;
+ int ret;
+
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE) {
+- if (rtwsta)
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE) {
++ if (rtwsta_link)
+ self_role = RTW89_SELF_ROLE_AP_CLIENT;
+ else
+- self_role = rtwvif->self_role;
++ self_role = rtwvif_link->self_role;
+ } else {
+- self_role = rtwvif->self_role;
++ self_role = rtwvif_link->self_role;
+ }
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_ROLE_MAINTAIN_LEN);
+@@ -3400,7 +3450,7 @@ int rtw89_fw_h2c_role_maintain(struct rtw89_dev *rtwdev,
+ SET_FWROLE_MAINTAIN_MACID(skb->data, mac_id);
+ SET_FWROLE_MAINTAIN_SELF_ROLE(skb->data, self_role);
+ SET_FWROLE_MAINTAIN_UPD_MODE(skb->data, upd_mode);
+- SET_FWROLE_MAINTAIN_WIFI_ROLE(skb->data, rtwvif->wifi_role);
++ SET_FWROLE_MAINTAIN_WIFI_ROLE(skb->data, rtwvif_link->wifi_role);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC, H2C_CL_MAC_MEDIA_RPT,
+@@ -3421,39 +3471,53 @@ int rtw89_fw_h2c_role_maintain(struct rtw89_dev *rtwdev,
+ }
+
+ static enum rtw89_fw_sta_type
+-rtw89_fw_get_sta_type(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++rtw89_fw_get_sta_type(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct ieee80211_sta *sta = rtwsta_to_sta_safe(rtwsta);
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_bss_conf *bss_conf;
++ struct ieee80211_link_sta *link_sta;
++ enum rtw89_fw_sta_type type;
++
++ rcu_read_lock();
+
+- if (!sta)
++ if (!rtwsta_link)
+ goto by_vif;
+
+- if (sta->deflink.eht_cap.has_eht)
+- return RTW89_FW_BE_STA;
+- else if (sta->deflink.he_cap.has_he)
+- return RTW89_FW_AX_STA;
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++
++ if (link_sta->eht_cap.has_eht)
++ type = RTW89_FW_BE_STA;
++ else if (link_sta->he_cap.has_he)
++ type = RTW89_FW_AX_STA;
+ else
+- return RTW89_FW_N_AC_STA;
++ type = RTW89_FW_N_AC_STA;
++
++ goto out;
+
+ by_vif:
+- if (vif->bss_conf.eht_support)
+- return RTW89_FW_BE_STA;
+- else if (vif->bss_conf.he_support)
+- return RTW89_FW_AX_STA;
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++
++ if (bss_conf->eht_support)
++ type = RTW89_FW_BE_STA;
++ else if (bss_conf->he_support)
++ type = RTW89_FW_AX_STA;
+ else
+- return RTW89_FW_N_AC_STA;
++ type = RTW89_FW_N_AC_STA;
++
++out:
++ rcu_read_unlock();
++
++ return type;
+ }
+
+-int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, bool dis_conn)
++int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link, bool dis_conn)
+ {
+ struct sk_buff *skb;
+- u8 mac_id = rtwsta ? rtwsta->mac_id : rtwvif->mac_id;
+- u8 self_role = rtwvif->self_role;
++ u8 mac_id = rtwsta_link ? rtwsta_link->mac_id : rtwvif_link->mac_id;
++ u8 self_role = rtwvif_link->self_role;
+ enum rtw89_fw_sta_type sta_type;
+- u8 net_type = rtwvif->net_type;
++ u8 net_type = rtwvif_link->net_type;
+ struct rtw89_h2c_join_v1 *h2c_v1;
+ struct rtw89_h2c_join *h2c;
+ u32 len = sizeof(*h2c);
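rtw89_fw_get_sta_type() above trades its early returns for a single exit label so rcu_read_unlock() is always reached on every path. Schematically, with placeholder types and classifiers (not driver code):

	struct demo_link;
	struct demo_ctx { struct demo_link *link; };

	/* Placeholders standing in for the by-vif/by-sta classification. */
	static int classify_by_vif(struct demo_ctx *ctx);
	static int classify_by_link(struct demo_link *link);

	static int demo_classify(struct demo_ctx *ctx)
	{
		int type;

		rcu_read_lock();

		if (!ctx->link) {
			type = classify_by_vif(ctx);
			goto out;
		}

		type = classify_by_link(ctx->link);

	out:
		rcu_read_unlock();	/* single unlock for all paths */
		return type;
	}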
+@@ -3465,7 +3529,7 @@ int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ format_v1 = true;
+ }
+
+- if (net_type == RTW89_NET_TYPE_AP_MODE && rtwsta) {
++ if (net_type == RTW89_NET_TYPE_AP_MODE && rtwsta_link) {
+ self_role = RTW89_SELF_ROLE_AP_CLIENT;
+ net_type = dis_conn ? RTW89_NET_TYPE_NO_LINK : net_type;
+ }
+@@ -3480,16 +3544,17 @@ int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+
+ h2c->w0 = le32_encode_bits(mac_id, RTW89_H2C_JOININFO_W0_MACID) |
+ le32_encode_bits(dis_conn, RTW89_H2C_JOININFO_W0_OP) |
+- le32_encode_bits(rtwvif->mac_idx, RTW89_H2C_JOININFO_W0_BAND) |
+- le32_encode_bits(rtwvif->wmm, RTW89_H2C_JOININFO_W0_WMM) |
+- le32_encode_bits(rtwvif->trigger, RTW89_H2C_JOININFO_W0_TGR) |
++ le32_encode_bits(rtwvif_link->mac_idx, RTW89_H2C_JOININFO_W0_BAND) |
++ le32_encode_bits(rtwvif_link->wmm, RTW89_H2C_JOININFO_W0_WMM) |
++ le32_encode_bits(rtwvif_link->trigger, RTW89_H2C_JOININFO_W0_TGR) |
+ le32_encode_bits(0, RTW89_H2C_JOININFO_W0_ISHESTA) |
+ le32_encode_bits(0, RTW89_H2C_JOININFO_W0_DLBW) |
+ le32_encode_bits(0, RTW89_H2C_JOININFO_W0_TF_MAC_PAD) |
+ le32_encode_bits(0, RTW89_H2C_JOININFO_W0_DL_T_PE) |
+- le32_encode_bits(rtwvif->port, RTW89_H2C_JOININFO_W0_PORT_ID) |
++ le32_encode_bits(rtwvif_link->port, RTW89_H2C_JOININFO_W0_PORT_ID) |
+ le32_encode_bits(net_type, RTW89_H2C_JOININFO_W0_NET_TYPE) |
+- le32_encode_bits(rtwvif->wifi_role, RTW89_H2C_JOININFO_W0_WIFI_ROLE) |
++ le32_encode_bits(rtwvif_link->wifi_role,
++ RTW89_H2C_JOININFO_W0_WIFI_ROLE) |
+ le32_encode_bits(self_role, RTW89_H2C_JOININFO_W0_SELF_ROLE);
+
+ if (!format_v1)
+@@ -3497,7 +3562,7 @@ int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+
+ h2c_v1 = (struct rtw89_h2c_join_v1 *)skb->data;
+
+- sta_type = rtw89_fw_get_sta_type(rtwdev, rtwvif, rtwsta);
++ sta_type = rtw89_fw_get_sta_type(rtwdev, rtwvif_link, rtwsta_link);
+
+ h2c_v1->w1 = le32_encode_bits(sta_type, RTW89_H2C_JOININFO_W1_STA_TYPE);
+ h2c_v1->w2 = 0;
+@@ -3618,7 +3683,7 @@ int rtw89_fw_h2c_macid_pause(struct rtw89_dev *rtwdev, u8 sh, u8 grp,
+ }
+
+ #define H2C_EDCA_LEN 12
+-int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u8 ac, u32 val)
+ {
+ struct sk_buff *skb;
+@@ -3631,7 +3696,7 @@ int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ }
+ skb_put(skb, H2C_EDCA_LEN);
+ RTW89_SET_EDCA_SEL(skb->data, 0);
+- RTW89_SET_EDCA_BAND(skb->data, rtwvif->mac_idx);
++ RTW89_SET_EDCA_BAND(skb->data, rtwvif_link->mac_idx);
+ RTW89_SET_EDCA_WMM(skb->data, 0);
+ RTW89_SET_EDCA_AC(skb->data, ac);
+ RTW89_SET_EDCA_PARAM(skb->data, val);
+@@ -3655,7 +3720,8 @@ int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ }
+
+ #define H2C_TSF32_TOGL_LEN 4
+-int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool en)
+ {
+ struct sk_buff *skb;
+@@ -3671,9 +3737,9 @@ int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif
+ skb_put(skb, H2C_TSF32_TOGL_LEN);
+ cmd = skb->data;
+
+- RTW89_SET_FWCMD_TSF32_TOGL_BAND(cmd, rtwvif->mac_idx);
++ RTW89_SET_FWCMD_TSF32_TOGL_BAND(cmd, rtwvif_link->mac_idx);
+ RTW89_SET_FWCMD_TSF32_TOGL_EN(cmd, en);
+- RTW89_SET_FWCMD_TSF32_TOGL_PORT(cmd, rtwvif->port);
++ RTW89_SET_FWCMD_TSF32_TOGL_PORT(cmd, rtwvif_link->port);
+ RTW89_SET_FWCMD_TSF32_TOGL_EARLY(cmd, early_us);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+@@ -3727,11 +3793,10 @@ int rtw89_fw_h2c_set_ofld_cfg(struct rtw89_dev *rtwdev)
+ }
+
+ int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool connect)
+ {
+- struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
+- struct ieee80211_bss_conf *bss_conf = vif ? &vif->bss_conf : NULL;
++ struct ieee80211_bss_conf *bss_conf;
+ s32 thold = RTW89_DEFAULT_CQM_THOLD;
+ u32 hyst = RTW89_DEFAULT_CQM_HYST;
+ struct rtw89_h2c_bcnfltr *h2c;
+@@ -3742,9 +3807,20 @@ int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev,
+ if (!RTW89_CHK_FW_FEATURE(BEACON_FILTER, &rtwdev->fw))
+ return -EINVAL;
+
+- if (!rtwvif || !bss_conf || rtwvif->net_type != RTW89_NET_TYPE_INFRA)
++ if (!rtwvif_link || rtwvif_link->net_type != RTW89_NET_TYPE_INFRA)
+ return -EINVAL;
+
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, false);
++
++ if (bss_conf->cqm_rssi_hyst)
++ hyst = bss_conf->cqm_rssi_hyst;
++ if (bss_conf->cqm_rssi_thold)
++ thold = bss_conf->cqm_rssi_thold;
++
++ rcu_read_unlock();
++
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len);
+ if (!skb) {
+ rtw89_err(rtwdev, "failed to alloc skb for h2c bcn filter\n");
+@@ -3754,11 +3830,6 @@ int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev,
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_bcnfltr *)skb->data;
+
+- if (bss_conf->cqm_rssi_hyst)
+- hyst = bss_conf->cqm_rssi_hyst;
+- if (bss_conf->cqm_rssi_thold)
+- thold = bss_conf->cqm_rssi_thold;
+-
+ h2c->w0 = le32_encode_bits(connect, RTW89_H2C_BCNFLTR_W0_MON_RSSI) |
+ le32_encode_bits(connect, RTW89_H2C_BCNFLTR_W0_MON_BCN) |
+ le32_encode_bits(connect, RTW89_H2C_BCNFLTR_W0_MON_EN) |
+@@ -3768,7 +3839,7 @@ int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev,
+ le32_encode_bits(hyst, RTW89_H2C_BCNFLTR_W0_RSSI_HYST) |
+ le32_encode_bits(thold + MAX_RSSI,
+ RTW89_H2C_BCNFLTR_W0_RSSI_THRESHOLD) |
+- le32_encode_bits(rtwvif->mac_id, RTW89_H2C_BCNFLTR_W0_MAC_ID);
++ le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_BCNFLTR_W0_MAC_ID);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC, H2C_CL_MAC_FW_OFLD,
+@@ -3833,15 +3904,16 @@ int rtw89_fw_h2c_rssi_offload(struct rtw89_dev *rtwdev,
+ return ret;
+ }
+
+-int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct rtw89_traffic_stats *stats = &rtwvif->stats;
+ struct rtw89_h2c_ofld *h2c;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+ int ret;
+
+- if (rtwvif->net_type != RTW89_NET_TYPE_INFRA)
++ if (rtwvif_link->net_type != RTW89_NET_TYPE_INFRA)
+ return -EINVAL;
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, len);
+@@ -3853,7 +3925,7 @@ int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_ofld *)skb->data;
+
+- h2c->w0 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_OFLD_W0_MAC_ID) |
++ h2c->w0 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_OFLD_W0_MAC_ID) |
+ le32_encode_bits(stats->tx_throughput, RTW89_H2C_OFLD_W0_TX_TP) |
+ le32_encode_bits(stats->rx_throughput, RTW89_H2C_OFLD_W0_RX_TP);
+
+@@ -4858,7 +4930,7 @@ int rtw89_fw_h2c_scan_list_offload_be(struct rtw89_dev *rtwdev, int ch_num,
+ #define RTW89_SCAN_DELAY_TSF_UNIT 104800
+ int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev,
+ struct rtw89_scan_option *option,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool wowlan)
+ {
+ struct rtw89_wait_info *wait = &rtwdev->mac.fw_ofld_wait;
+@@ -4880,7 +4952,7 @@ int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev,
+ h2c = (struct rtw89_h2c_scanofld *)skb->data;
+
+ if (option->delay) {
+- ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif, &tsf);
++ ret = rtw89_mac_port_get_tsf(rtwdev, rtwvif_link, &tsf);
+ if (ret) {
+ rtw89_warn(rtwdev, "NLO failed to get port tsf: %d\n", ret);
+ scan_mode = RTW89_SCAN_IMMEDIATE;
+@@ -4890,8 +4962,8 @@ int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev,
+ }
+ }
+
+- h2c->w0 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_SCANOFLD_W0_MACID) |
+- le32_encode_bits(rtwvif->port, RTW89_H2C_SCANOFLD_W0_PORT_ID) |
++ h2c->w0 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_SCANOFLD_W0_MACID) |
++ le32_encode_bits(rtwvif_link->port, RTW89_H2C_SCANOFLD_W0_PORT_ID) |
+ le32_encode_bits(RTW89_PHY_0, RTW89_H2C_SCANOFLD_W0_BAND) |
+ le32_encode_bits(option->enable, RTW89_H2C_SCANOFLD_W0_OPERATION);
+
+@@ -4963,9 +5035,10 @@ static void rtw89_scan_get_6g_disabled_chan(struct rtw89_dev *rtwdev,
+
+ int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev,
+ struct rtw89_scan_option *option,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool wowlan)
+ {
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
+ struct rtw89_wait_info *wait = &rtwdev->mac.fw_ofld_wait;
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+@@ -5016,8 +5089,8 @@ int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev,
+ le32_encode_bits(option->repeat, RTW89_H2C_SCANOFLD_BE_W0_REPEAT) |
+ le32_encode_bits(true, RTW89_H2C_SCANOFLD_BE_W0_NOTIFY_END) |
+ le32_encode_bits(true, RTW89_H2C_SCANOFLD_BE_W0_LEARN_CH) |
+- le32_encode_bits(rtwvif->mac_id, RTW89_H2C_SCANOFLD_BE_W0_MACID) |
+- le32_encode_bits(rtwvif->port, RTW89_H2C_SCANOFLD_BE_W0_PORT) |
++ le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_SCANOFLD_BE_W0_MACID) |
++ le32_encode_bits(rtwvif_link->port, RTW89_H2C_SCANOFLD_BE_W0_PORT) |
+ le32_encode_bits(option->band, RTW89_H2C_SCANOFLD_BE_W0_BAND);
+
+ h2c->w1 = le32_encode_bits(option->num_macc_role, RTW89_H2C_SCANOFLD_BE_W1_NUM_MACC_ROLE) |
+@@ -5082,11 +5155,11 @@ int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev,
+
+ for (i = 0; i < option->num_opch; i++) {
+ opch = ptr;
+- opch->w0 = le32_encode_bits(rtwvif->mac_id,
++ opch->w0 = le32_encode_bits(rtwvif_link->mac_id,
+ RTW89_H2C_SCANOFLD_BE_OPCH_W0_MACID) |
+ le32_encode_bits(option->band,
+ RTW89_H2C_SCANOFLD_BE_OPCH_W0_BAND) |
+- le32_encode_bits(rtwvif->port,
++ le32_encode_bits(rtwvif_link->port,
+ RTW89_H2C_SCANOFLD_BE_OPCH_W0_PORT) |
+ le32_encode_bits(RTW89_SCAN_OPMODE_INTV,
+ RTW89_H2C_SCANOFLD_BE_OPCH_W0_POLICY) |
+@@ -5871,12 +5944,10 @@ static void rtw89_release_pkt_list(struct rtw89_dev *rtwdev)
+ }
+
+ static bool rtw89_is_6ghz_wildcard_probe_req(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct cfg80211_scan_request *req,
+ struct rtw89_pktofld_info *info,
+ enum nl80211_band band, u8 ssid_idx)
+ {
+- struct cfg80211_scan_request *req = rtwvif->scan_req;
+-
+ if (band != NL80211_BAND_6GHZ)
+ return false;
+
+@@ -5892,11 +5963,13 @@ static bool rtw89_is_6ghz_wildcard_probe_req(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_append_probe_req_ie(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct sk_buff *skb, u8 ssid_idx)
+ {
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct ieee80211_scan_ies *ies = rtwvif->scan_ies;
++ struct cfg80211_scan_request *req = rtwvif->scan_req;
+ struct rtw89_pktofld_info *info;
+ struct sk_buff *new;
+ int ret = 0;
+@@ -5921,8 +5994,7 @@ static int rtw89_append_probe_req_ie(struct rtw89_dev *rtwdev,
+ goto out;
+ }
+
+- rtw89_is_6ghz_wildcard_probe_req(rtwdev, rtwvif, info, band,
+- ssid_idx);
++ rtw89_is_6ghz_wildcard_probe_req(rtwdev, req, info, band, ssid_idx);
+
+ ret = rtw89_fw_h2c_add_pkt_offload(rtwdev, &info->id, new);
+ if (ret) {
+@@ -5939,22 +6011,23 @@ static int rtw89_append_probe_req_ie(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_hw_scan_update_probe_req(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+ struct sk_buff *skb;
+ u8 num = req->n_ssids, i;
+ int ret;
+
+ for (i = 0; i < num; i++) {
+- skb = ieee80211_probereq_get(rtwdev->hw, rtwvif->mac_addr,
++ skb = ieee80211_probereq_get(rtwdev->hw, rtwvif_link->mac_addr,
+ req->ssids[i].ssid,
+ req->ssids[i].ssid_len,
+ req->ie_len);
+ if (!skb)
+ return -ENOMEM;
+
+- ret = rtw89_append_probe_req_ie(rtwdev, rtwvif, skb, i);
++ ret = rtw89_append_probe_req_ie(rtwdev, rtwvif_link, skb, i);
+ kfree_skb(skb);
+
+ if (ret)
+@@ -5965,13 +6038,12 @@ static int rtw89_hw_scan_update_probe_req(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_update_6ghz_rnr_chan(struct rtw89_dev *rtwdev,
++ struct ieee80211_scan_ies *ies,
+ struct cfg80211_scan_request *req,
+ struct rtw89_mac_chinfo *ch_info)
+ {
+- struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif;
+ struct list_head *pkt_list = rtwdev->scan_info.pkt_list;
+- struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
+- struct ieee80211_scan_ies *ies = rtwvif->scan_ies;
+ struct cfg80211_scan_6ghz_params *params;
+ struct rtw89_pktofld_info *info, *tmp;
+ struct ieee80211_hdr *hdr;
+@@ -6000,7 +6072,7 @@ static int rtw89_update_6ghz_rnr_chan(struct rtw89_dev *rtwdev,
+ if (found)
+ continue;
+
+- skb = ieee80211_probereq_get(rtwdev->hw, rtwvif->mac_addr,
++ skb = ieee80211_probereq_get(rtwdev->hw, rtwvif_link->mac_addr,
+ NULL, 0, req->ie_len);
+ skb_put_data(skb, ies->ies[NL80211_BAND_6GHZ], ies->len[NL80211_BAND_6GHZ]);
+ skb_put_data(skb, ies->common_ies, ies->common_ie_len);
+@@ -6090,8 +6162,9 @@ static void rtw89_hw_scan_add_chan(struct rtw89_dev *rtwdev, int chan_type,
+ struct rtw89_mac_chinfo *ch_info)
+ {
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
+- struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif;
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
++ struct ieee80211_scan_ies *ies = rtwvif->scan_ies;
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+ struct rtw89_chan *op = &rtwdev->scan_info.op_chan;
+ struct rtw89_pktofld_info *info;
+@@ -6117,7 +6190,7 @@ static void rtw89_hw_scan_add_chan(struct rtw89_dev *rtwdev, int chan_type,
+ }
+ }
+
+- ret = rtw89_update_6ghz_rnr_chan(rtwdev, req, ch_info);
++ ret = rtw89_update_6ghz_rnr_chan(rtwdev, ies, req, ch_info);
+ if (ret)
+ rtw89_warn(rtwdev, "RNR fails: %d\n", ret);
+
+@@ -6207,8 +6280,8 @@ static void rtw89_hw_scan_add_chan_be(struct rtw89_dev *rtwdev, int chan_type,
+ struct rtw89_mac_chinfo_be *ch_info)
+ {
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
+- struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif;
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+ struct rtw89_pktofld_info *info;
+ u8 band, probe_count = 0, i;
+@@ -6265,7 +6338,7 @@ static void rtw89_hw_scan_add_chan_be(struct rtw89_dev *rtwdev, int chan_type,
+ }
+
+ int rtw89_pno_scan_add_chan_list_ax(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct cfg80211_sched_scan_request *nd_config = rtw_wow->nd_config;
+@@ -6315,8 +6388,9 @@ int rtw89_pno_scan_add_chan_list_ax(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_hw_scan_add_chan_list_ax(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool connected)
++ struct rtw89_vif_link *rtwvif_link, bool connected)
+ {
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+ struct rtw89_mac_chinfo *ch_info, *tmp;
+ struct ieee80211_channel *channel;
+@@ -6392,7 +6466,7 @@ int rtw89_hw_scan_add_chan_list_ax(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_pno_scan_add_chan_list_be(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct cfg80211_sched_scan_request *nd_config = rtw_wow->nd_config;
+@@ -6444,8 +6518,9 @@ int rtw89_pno_scan_add_chan_list_be(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_hw_scan_add_chan_list_be(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool connected)
++ struct rtw89_vif_link *rtwvif_link, bool connected)
+ {
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct cfg80211_scan_request *req = rtwvif->scan_req;
+ struct rtw89_mac_chinfo_be *ch_info, *tmp;
+ struct ieee80211_channel *channel;
+@@ -6503,45 +6578,50 @@ int rtw89_hw_scan_add_chan_list_be(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_hw_scan_prehandle(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool connected)
++ struct rtw89_vif_link *rtwvif_link, bool connected)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ int ret;
+
+- ret = rtw89_hw_scan_update_probe_req(rtwdev, rtwvif);
++ ret = rtw89_hw_scan_update_probe_req(rtwdev, rtwvif_link);
+ if (ret) {
+ rtw89_err(rtwdev, "Update probe request failed\n");
+ goto out;
+ }
+- ret = mac->add_chan_list(rtwdev, rtwvif, connected);
++ ret = mac->add_chan_list(rtwdev, rtwvif_link, connected);
+ out:
+ return ret;
+ }
+
+-void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++void rtw89_hw_scan_start(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ struct ieee80211_scan_request *scan_req)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct cfg80211_scan_request *req = &scan_req->req;
++ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
++ rtwvif_link->chanctx_idx);
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ u32 rx_fltr = rtwdev->hal.rx_fltr;
+ u8 mac_addr[ETH_ALEN];
+
+- rtw89_get_channel(rtwdev, rtwvif, &rtwdev->scan_info.op_chan);
+- rtwdev->scan_info.scanning_vif = vif;
++ /* clone op and keep it during scan */
++ rtwdev->scan_info.op_chan = *chan;
++
++ rtwdev->scan_info.scanning_vif = rtwvif_link;
+ rtwdev->scan_info.last_chan_idx = 0;
+ rtwdev->scan_info.abort = false;
+ rtwvif->scan_ies = &scan_req->ies;
+ rtwvif->scan_req = req;
+ ieee80211_stop_queues(rtwdev->hw);
+- rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif, false);
++ rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif_link, false);
+
+ if (req->flags & NL80211_SCAN_FLAG_RANDOM_ADDR)
+ get_random_mask_addr(mac_addr, req->mac_addr,
+ req->mac_addr_mask);
+ else
+- ether_addr_copy(mac_addr, vif->addr);
+- rtw89_core_scan_start(rtwdev, rtwvif, mac_addr, true);
++ ether_addr_copy(mac_addr, rtwvif_link->mac_addr);
++ rtw89_core_scan_start(rtwdev, rtwvif_link, mac_addr, true);
+
+ rx_fltr &= ~B_AX_A_BCN_CHK_EN;
+ rx_fltr &= ~B_AX_A_BC;
+@@ -6554,28 +6634,33 @@ void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ rtw89_chanctx_pause(rtwdev, RTW89_CHANCTX_PAUSE_REASON_HW_SCAN);
+ }
+
+-void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool aborted)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
+- struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
+ struct cfg80211_scan_info info = {
+ .aborted = aborted,
+ };
++ struct rtw89_vif *rtwvif;
+
+- if (!vif)
++ if (!rtwvif_link)
+ return;
+
++ rtw89_chanctx_proceed(rtwdev);
++
++ rtwvif = rtwvif_link->rtwvif;
++
+ rtw89_write32_mask(rtwdev,
+ rtw89_mac_reg_by_idx(rtwdev, mac->rx_fltr, RTW89_MAC_0),
+ B_AX_RX_FLTR_CFG_MASK,
+ rtwdev->hal.rx_fltr);
+
+- rtw89_core_scan_complete(rtwdev, vif, true);
++ rtw89_core_scan_complete(rtwdev, rtwvif_link, true);
+ ieee80211_scan_completed(rtwdev->hw, &info);
+ ieee80211_wake_queues(rtwdev->hw);
+- rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif, true);
++ rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif_link, true);
+ rtw89_mac_enable_beacon_for_ap_vifs(rtwdev, true);
+
+ rtw89_release_pkt_list(rtwdev);
+@@ -6584,18 +6669,17 @@ void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ scan_info->last_chan_idx = 0;
+ scan_info->scanning_vif = NULL;
+ scan_info->abort = false;
+-
+- rtw89_chanctx_proceed(rtwdev);
+ }
+
+-void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
++void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
+ int ret;
+
+ scan_info->abort = true;
+
+- ret = rtw89_hw_scan_offload(rtwdev, vif, false);
++ ret = rtw89_hw_scan_offload(rtwdev, rtwvif_link, false);
+ if (ret)
+ rtw89_warn(rtwdev, "rtw89_hw_scan_offload failed ret %d\n", ret);
+
+@@ -6604,40 +6688,43 @@ void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
+ * RTW89_SCAN_END_SCAN_NOTIFY, so that ieee80211_stop() can flush scan
+ * work properly.
+ */
+- rtw89_hw_scan_complete(rtwdev, vif, true);
++ rtw89_hw_scan_complete(rtwdev, rtwvif_link, true);
+ }
+
+ static bool rtw89_is_any_vif_connected_or_connecting(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+- /* This variable implies connected or during attempt to connect */
+- if (!is_zero_ether_addr(rtwvif->bssid))
+- return true;
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) {
++ /* This variable implies connected or during attempt to connect */
++ if (!is_zero_ether_addr(rtwvif_link->bssid))
++ return true;
++ }
+ }
+
+ return false;
+ }
+
+-int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct rtw89_scan_option opt = {0};
+- struct rtw89_vif *rtwvif;
+ bool connected;
+ int ret = 0;
+
+- rtwvif = vif ? (struct rtw89_vif *)vif->drv_priv : NULL;
+- if (!rtwvif)
++ if (!rtwvif_link)
+ return -EINVAL;
+
+ connected = rtw89_is_any_vif_connected_or_connecting(rtwdev);
+ opt.enable = enable;
+ opt.target_ch_mode = connected;
+ if (enable) {
+- ret = rtw89_hw_scan_prehandle(rtwdev, rtwvif, connected);
++ ret = rtw89_hw_scan_prehandle(rtwdev, rtwvif_link, connected);
+ if (ret)
+ goto out;
+ }
+@@ -6652,7 +6739,7 @@ int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ opt.opch_end = connected ? 0 : RTW89_CHAN_INVALID;
+ }
+
+- ret = mac->scan_offload(rtwdev, &opt, rtwvif, false);
++ ret = mac->scan_offload(rtwdev, &opt, rtwvif_link, false);
+ out:
+ return ret;
+ }
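
The hunks above convert the hw-scan entry points from taking struct ieee80211_vif * (and casting vif->drv_priv) to taking the driver's per-link struct rtw89_vif_link, with shared per-vif state reached through the link's back-pointer. A minimal compilable sketch of that shape; example_vif and example_vif_link are invented stand-ins, not the driver's types.

#include <stdio.h>

/* Invented stand-ins: the real structs are rtw89_vif / rtw89_vif_link. */
struct example_vif {
	int n_channels;			/* stands in for scan_req state */
};

struct example_vif_link {
	struct example_vif *rtwvif;	/* back-pointer, as in the patch */
	int mac_id;
};

static int scan_offload(struct example_vif_link *link, int enable)
{
	struct example_vif *vif;

	if (!link)			/* mirrors the -EINVAL check above */
		return -22;		/* -EINVAL */

	vif = link->rtwvif;		/* link -> shared per-vif state */
	printf("enable=%d macid=%d channels=%d\n",
	       enable, link->mac_id, vif->n_channels);
	return 0;
}

int main(void)
{
	struct example_vif vif = { .n_channels = 11 };
	struct example_vif_link link = { .rtwvif = &vif, .mac_id = 3 };

	return scan_offload(&link, 1);
}
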
+@@ -6758,7 +6845,7 @@ int rtw89_fw_h2c_pkt_drop(struct rtw89_dev *rtwdev,
+ }
+
+ #define H2C_KEEP_ALIVE_LEN 4
+-int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct sk_buff *skb;
+@@ -6766,7 +6853,7 @@ int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ int ret;
+
+ if (enable) {
+- ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_NULL_DATA,
+ &pkt_id);
+ if (ret)
+@@ -6784,7 +6871,7 @@ int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ RTW89_SET_KEEP_ALIVE_ENABLE(skb->data, enable);
+ RTW89_SET_KEEP_ALIVE_PKT_NULL_ID(skb->data, pkt_id);
+ RTW89_SET_KEEP_ALIVE_PERIOD(skb->data, 5);
+- RTW89_SET_KEEP_ALIVE_MACID(skb->data, rtwvif->mac_id);
++ RTW89_SET_KEEP_ALIVE_MACID(skb->data, rtwvif_link->mac_id);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+ H2C_CAT_MAC,
+@@ -6806,7 +6893,7 @@ int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ return ret;
+ }
+
+-int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct rtw89_h2c_arp_offload *h2c;
+@@ -6816,7 +6903,7 @@ int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ int ret;
+
+ if (enable) {
+- ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_ARP_RSP,
+ &pkt_id);
+ if (ret)
+@@ -6834,7 +6921,7 @@ int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+
+ h2c->w0 = le32_encode_bits(enable, RTW89_H2C_ARP_OFFLOAD_W0_ENABLE) |
+ le32_encode_bits(0, RTW89_H2C_ARP_OFFLOAD_W0_ACTION) |
+- le32_encode_bits(rtwvif->mac_id, RTW89_H2C_ARP_OFFLOAD_W0_MACID) |
++ le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_ARP_OFFLOAD_W0_MACID) |
+ le32_encode_bits(pkt_id, RTW89_H2C_ARP_OFFLOAD_W0_PKT_ID);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
+@@ -6859,11 +6946,11 @@ int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+
+ #define H2C_DISCONNECT_DETECT_LEN 8
+ int rtw89_fw_h2c_disconnect_detect(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool enable)
++ struct rtw89_vif_link *rtwvif_link, bool enable)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct sk_buff *skb;
+- u8 macid = rtwvif->mac_id;
++ u8 macid = rtwvif_link->mac_id;
+ int ret;
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_DISCONNECT_DETECT_LEN);
+@@ -6902,7 +6989,7 @@ int rtw89_fw_h2c_disconnect_detect(struct rtw89_dev *rtwdev,
+ return ret;
+ }
+
+-int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+@@ -6923,7 +7010,7 @@ int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+
+ h2c->w0 = le32_encode_bits(enable, RTW89_H2C_NLO_W0_ENABLE) |
+ le32_encode_bits(enable, RTW89_H2C_NLO_W0_IGNORE_CIPHER) |
+- le32_encode_bits(rtwvif->mac_id, RTW89_H2C_NLO_W0_MACID);
++ le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_NLO_W0_MACID);
+
+ if (enable) {
+ h2c->nlo_cnt = nd_config->n_match_sets;
+@@ -6953,12 +7040,12 @@ int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ return ret;
+ }
+
+-int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct rtw89_h2c_wow_global *h2c;
+- u8 macid = rtwvif->mac_id;
++ u8 macid = rtwvif_link->mac_id;
+ u32 len = sizeof(*h2c);
+ struct sk_buff *skb;
+ int ret;
+@@ -7002,12 +7089,12 @@ int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+
+ #define H2C_WAKEUP_CTRL_LEN 4
+ int rtw89_fw_h2c_wow_wakeup_ctrl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct sk_buff *skb;
+- u8 macid = rtwvif->mac_id;
++ u8 macid = rtwvif_link->mac_id;
+ int ret;
+
+ skb = rtw89_fw_h2c_alloc_skb_with_hdr(rtwdev, H2C_WAKEUP_CTRL_LEN);
+@@ -7100,13 +7187,13 @@ int rtw89_fw_wow_cam_update(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct rtw89_wow_gtk_info *gtk_info = &rtw_wow->gtk_info;
+ struct rtw89_h2c_wow_gtk_ofld *h2c;
+- u8 macid = rtwvif->mac_id;
++ u8 macid = rtwvif_link->mac_id;
+ u32 len = sizeof(*h2c);
+ u8 pkt_id_sa_query = 0;
+ struct sk_buff *skb;
+@@ -7128,14 +7215,14 @@ int rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev,
+ if (!enable)
+ goto hdr;
+
+- ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_EAPOL_KEY,
+ &pkt_id_eapol);
+ if (ret)
+ goto fail;
+
+ if (gtk_info->igtk_keyid) {
+- ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif,
++ ret = rtw89_fw_h2c_add_general_pkt(rtwdev, rtwvif_link,
+ RTW89_PKT_OFLD_TYPE_SA_QUERY,
+ &pkt_id_sa_query);
+ if (ret)
+@@ -7173,7 +7260,7 @@ int rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev,
+ return ret;
+ }
+
+-int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable)
+ {
+ struct rtw89_wait_info *wait = &rtwdev->mac.ps_wait;
+@@ -7189,7 +7276,7 @@ int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ skb_put(skb, len);
+ h2c = (struct rtw89_h2c_fwips *)skb->data;
+
+- h2c->w0 = le32_encode_bits(rtwvif->mac_id, RTW89_H2C_FW_IPS_W0_MACID) |
++ h2c->w0 = le32_encode_bits(rtwvif_link->mac_id, RTW89_H2C_FW_IPS_W0_MACID) |
+ le32_encode_bits(enable, RTW89_H2C_FW_IPS_W0_ENABLE);
+
+ rtw89_h2c_pkt_set_hdr(rtwdev, skb, FWCMD_TYPE_H2C,
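
The keep-alive, ARP-offload, PNO and WoWLAN hunks above all follow the same H2C packing idiom: le32_encode_bits(value, FIELD_MASK) OR'd into a word, with the MAC id now sourced from rtwvif_link->mac_id. A sketch of that bitfield packing; the field masks here are invented for illustration (the real RTW89_H2C_*_W0_* masks live in fw.h), and the little-endian conversion that le32_encode_bits() performs is omitted.

#include <stdint.h>
#include <stdio.h>

#define FIELD_ENABLE	0x00000001u
#define FIELD_MACID	0x0000ff00u

static uint32_t encode_bits(uint32_t val, uint32_t mask)
{
	/* multiply by the mask's lowest set bit to shift into place */
	return (val * (mask & -mask)) & mask;
}

int main(void)
{
	uint32_t w0 = encode_bits(1, FIELD_ENABLE) |
		      encode_bits(0x2a, FIELD_MACID);

	printf("w0 = 0x%08x\n", w0);	/* prints 0x00002a01 */
	return 0;
}
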
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.h b/drivers/net/wireless/realtek/rtw89/fw.h
+index ad47e77d740b25..ccbbc43f33feed 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.h
++++ b/drivers/net/wireless/realtek/rtw89/fw.h
+@@ -4404,59 +4404,59 @@ void rtw89_h2c_pkt_set_hdr(struct rtw89_dev *rtwdev, struct sk_buff *skb,
+ u8 type, u8 cat, u8 class, u8 func,
+ bool rack, bool dack, u32 len);
+ int rtw89_fw_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_default_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_default_dmac_tbl_v2(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_assoc_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_ampdu_cmac_tbl_g7(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_txtime_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_txpath_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_update_beacon(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
++ struct rtw89_vif_link *rtwvif_link);
+ int rtw89_fw_h2c_update_beacon_be(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
+-int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *vif,
+- struct rtw89_sta *rtwsta, const u8 *scan_mac_addr);
++ struct rtw89_vif_link *rtwvif_link);
++int rtw89_fw_h2c_cam(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif,
++ struct rtw89_sta_link *rtwsta_link, const u8 *scan_mac_addr);
+ int rtw89_fw_h2c_dctl_sec_cam_v1(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ int rtw89_fw_h2c_dctl_sec_cam_v2(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta);
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ void rtw89_fw_c2h_irqsafe(struct rtw89_dev *rtwdev, struct sk_buff *c2h);
+ void rtw89_fw_c2h_work(struct work_struct *work);
+ int rtw89_fw_h2c_role_maintain(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ enum rtw89_upd_mode upd_mode);
+-int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta, bool dis_conn);
++int rtw89_fw_h2c_join_info(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link, bool dis_conn);
+ int rtw89_fw_h2c_notify_dbcc(struct rtw89_dev *rtwdev, bool en);
+ int rtw89_fw_h2c_macid_pause(struct rtw89_dev *rtwdev, u8 sh, u8 grp,
+ bool pause);
+-int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_set_edca(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u8 ac, u32 val);
+ int rtw89_fw_h2c_set_ofld_cfg(struct rtw89_dev *rtwdev);
+ int rtw89_fw_h2c_set_bcn_fltr_cfg(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool connect);
+ int rtw89_fw_h2c_rssi_offload(struct rtw89_dev *rtwdev,
+ struct rtw89_rx_phy_ppdu *phy_ppdu);
+-int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++int rtw89_fw_h2c_tp_offload(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ int rtw89_fw_h2c_ra(struct rtw89_dev *rtwdev, struct rtw89_ra_info *ra, bool csi);
+ int rtw89_fw_h2c_cxdrv_init(struct rtw89_dev *rtwdev, u8 type);
+ int rtw89_fw_h2c_cxdrv_init_v7(struct rtw89_dev *rtwdev, u8 type);
+@@ -4478,11 +4478,11 @@ int rtw89_fw_h2c_scan_list_offload_be(struct rtw89_dev *rtwdev, int ch_num,
+ struct list_head *chan_list);
+ int rtw89_fw_h2c_scan_offload_ax(struct rtw89_dev *rtwdev,
+ struct rtw89_scan_option *opt,
+- struct rtw89_vif *vif,
++ struct rtw89_vif_link *vif,
+ bool wowlan);
+ int rtw89_fw_h2c_scan_offload_be(struct rtw89_dev *rtwdev,
+ struct rtw89_scan_option *opt,
+- struct rtw89_vif *vif,
++ struct rtw89_vif_link *vif,
+ bool wowlan);
+ int rtw89_fw_h2c_rf_reg(struct rtw89_dev *rtwdev,
+ struct rtw89_fw_h2c_rf_reg_info *info,
+@@ -4508,14 +4508,19 @@ int rtw89_fw_h2c_raw_with_hdr(struct rtw89_dev *rtwdev,
+ int rtw89_fw_h2c_raw(struct rtw89_dev *rtwdev, const u8 *buf, u16 len);
+ void rtw89_fw_send_all_early_h2c(struct rtw89_dev *rtwdev);
+ void rtw89_fw_free_all_early_h2c(struct rtw89_dev *rtwdev);
+-int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_general_pkt(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u8 macid);
+ void rtw89_fw_release_general_pkt_list_vif(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool notify_fw);
++ struct rtw89_vif_link *rtwvif_link,
++ bool notify_fw);
+ void rtw89_fw_release_general_pkt_list(struct rtw89_dev *rtwdev, bool notify_fw);
+-int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_fw_h2c_ba_cam(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ bool valid, struct ieee80211_ampdu_params *params);
+-int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_fw_h2c_ba_cam_v1(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
+ bool valid, struct ieee80211_ampdu_params *params);
+ void rtw89_fw_h2c_init_dynamic_ba_cam_v0_ext(struct rtw89_dev *rtwdev);
+ int rtw89_fw_h2c_init_ba_cam_users(struct rtw89_dev *rtwdev, u8 users,
+@@ -4524,8 +4529,8 @@ int rtw89_fw_h2c_init_ba_cam_users(struct rtw89_dev *rtwdev, u8 users,
+ int rtw89_fw_h2c_lps_parm(struct rtw89_dev *rtwdev,
+ struct rtw89_lps_parm *lps_param);
+ int rtw89_fw_h2c_lps_ch_info(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
+-int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link);
++int rtw89_fw_h2c_fwips(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+ struct sk_buff *rtw89_fw_h2c_alloc_skb_with_hdr(struct rtw89_dev *rtwdev, u32 len);
+ struct sk_buff *rtw89_fw_h2c_alloc_skb_no_hdr(struct rtw89_dev *rtwdev, u32 len);
+@@ -4534,49 +4539,56 @@ int rtw89_fw_msg_reg(struct rtw89_dev *rtwdev,
+ struct rtw89_mac_c2h_info *c2h_info);
+ int rtw89_fw_h2c_fw_log(struct rtw89_dev *rtwdev, bool enable);
+ void rtw89_fw_st_dbg_dump(struct rtw89_dev *rtwdev);
+-void rtw89_hw_scan_start(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+- struct ieee80211_scan_request *req);
+-void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++void rtw89_hw_scan_start(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_scan_request *scan_req);
++void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool aborted);
+-int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++int rtw89_hw_scan_offload(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+-void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif);
++void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link);
+ int rtw89_hw_scan_add_chan_list_ax(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool connected);
++ struct rtw89_vif_link *rtwvif_link, bool connected);
+ int rtw89_pno_scan_add_chan_list_ax(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
++ struct rtw89_vif_link *rtwvif_link);
+ int rtw89_hw_scan_add_chan_list_be(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool connected);
++ struct rtw89_vif_link *rtwvif_link, bool connected);
+ int rtw89_pno_scan_add_chan_list_be(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
++ struct rtw89_vif_link *rtwvif_link);
+ int rtw89_fw_h2c_trigger_cpu_exception(struct rtw89_dev *rtwdev);
+ int rtw89_fw_h2c_pkt_drop(struct rtw89_dev *rtwdev,
+ const struct rtw89_pkt_drop_params *params);
+-int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
++int rtw89_fw_h2c_p2p_act(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf,
+ struct ieee80211_p2p_noa_desc *desc,
+ u8 act, u8 noa_id);
+-int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_tsf32_toggle(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool en);
+-int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+ int rtw89_fw_h2c_wow_wakeup_ctrl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool enable);
+-int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link, bool enable);
++int rtw89_fw_h2c_cfg_pno(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+-int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_fw_h2c_keep_alive(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+ int rtw89_fw_h2c_arp_offload(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool enable);
++ struct rtw89_vif_link *rtwvif_link, bool enable);
+ int rtw89_fw_h2c_disconnect_detect(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool enable);
+-int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link, bool enable);
++int rtw89_fw_h2c_wow_global(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+ int rtw89_fw_h2c_wow_wakeup_ctrl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool enable);
++ struct rtw89_vif_link *rtwvif_link, bool enable);
+ int rtw89_fw_wow_cam_update(struct rtw89_dev *rtwdev,
+ struct rtw89_wow_cam_info *cam_info);
+ int rtw89_fw_h2c_wow_gtk_ofld(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool enable);
+ int rtw89_fw_h2c_wow_request_aoac(struct rtw89_dev *rtwdev);
+ int rtw89_fw_h2c_add_mcc(struct rtw89_dev *rtwdev,
+@@ -4621,51 +4633,73 @@ static inline void rtw89_fw_h2c_init_ba_cam(struct rtw89_dev *rtwdev)
+ }
+
+ static inline int rtw89_chip_h2c_default_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+- return chip->ops->h2c_default_cmac_tbl(rtwdev, rtwvif, rtwsta);
++ return chip->ops->h2c_default_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ }
+
+ static inline int rtw89_chip_h2c_default_dmac_tbl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_sta *rtwsta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+ if (chip->ops->h2c_default_dmac_tbl)
+- return chip->ops->h2c_default_dmac_tbl(rtwdev, rtwvif, rtwsta);
++ return chip->ops->h2c_default_dmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+
+ return 0;
+ }
+
+ static inline int rtw89_chip_h2c_update_beacon(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+- return chip->ops->h2c_update_beacon(rtwdev, rtwvif);
++ return chip->ops->h2c_update_beacon(rtwdev, rtwvif_link);
+ }
+
+ static inline int rtw89_chip_h2c_assoc_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+- return chip->ops->h2c_assoc_cmac_tbl(rtwdev, vif, sta);
++ return chip->ops->h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ }
+
+-static inline int rtw89_chip_h2c_ampdu_cmac_tbl(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++static inline
++int rtw89_chip_h2c_ampdu_link_cmac_tbl(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+
+ if (chip->ops->h2c_ampdu_cmac_tbl)
+- return chip->ops->h2c_ampdu_cmac_tbl(rtwdev, vif, sta);
++ return chip->ops->h2c_ampdu_cmac_tbl(rtwdev, rtwvif_link,
++ rtwsta_link);
++
++ return 0;
++}
++
++static inline int rtw89_chip_h2c_ampdu_cmac_tbl(struct rtw89_dev *rtwdev,
++ struct rtw89_vif *rtwvif,
++ struct rtw89_sta *rtwsta)
++{
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ int ret;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ ret = rtw89_chip_h2c_ampdu_link_cmac_tbl(rtwdev, rtwvif_link,
++ rtwsta_link);
++ if (ret)
++ return ret;
++ }
+
+ return 0;
+ }
+@@ -4675,8 +4709,20 @@ int rtw89_chip_h2c_ba_cam(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
+ bool valid, struct ieee80211_ampdu_params *params)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ int ret;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ ret = chip->ops->h2c_ba_cam(rtwdev, rtwvif_link, rtwsta_link,
++ valid, params);
++ if (ret)
++ return ret;
++ }
+
+- return chip->ops->h2c_ba_cam(rtwdev, rtwsta, valid, params);
++ return 0;
+ }
+
+ /* must consider compatibility; don't insert new in the mid */
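
rtw89_chip_h2c_ampdu_cmac_tbl() and rtw89_chip_h2c_ba_cam() above keep their old per-station signatures but now fan out over every active link via rtw89_sta_for_each_link(), aborting on the first failure. The same control shape over a plain array, as a compilable sketch; sta, sta_link and fan_out are invented names.

#include <stdio.h>

struct sta_link { int mac_id; };
struct sta { struct sta_link links[2]; int n_links; };

/* Visit every link, stop at the first error -- as in h2c_ba_cam above. */
static int fan_out(struct sta *sta, int (*op)(struct sta_link *))
{
	for (int i = 0; i < sta->n_links; i++) {
		int ret = op(&sta->links[i]);

		if (ret)
			return ret;
	}
	return 0;
}

static int program_link(struct sta_link *link)
{
	printf("program ba_cam for macid %d\n", link->mac_id);
	return 0;
}

int main(void)
{
	struct sta s = { .links = { { 1 }, { 2 } }, .n_links = 2 };

	return fan_out(&s, program_link);
}
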
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
+index c70a23a763b0ee..4e15d539e3d1c4 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.c
++++ b/drivers/net/wireless/realtek/rtw89/mac.c
+@@ -4076,17 +4076,17 @@ static const struct rtw89_port_reg rtw89_port_base_ax = {
+ };
+
+ static void rtw89_mac_check_packet_ctrl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, u8 type)
++ struct rtw89_vif_link *rtwvif_link, u8 type)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- u8 mask = B_AX_PTCL_DBG_INFO_MASK_BY_PORT(rtwvif->port);
++ u8 mask = B_AX_PTCL_DBG_INFO_MASK_BY_PORT(rtwvif_link->port);
+ u32 reg_info, reg_ctrl;
+ u32 val;
+ int ret;
+
+- reg_info = rtw89_mac_reg_by_idx(rtwdev, p->ptcl_dbg_info, rtwvif->mac_idx);
+- reg_ctrl = rtw89_mac_reg_by_idx(rtwdev, p->ptcl_dbg, rtwvif->mac_idx);
++ reg_info = rtw89_mac_reg_by_idx(rtwdev, p->ptcl_dbg_info, rtwvif_link->mac_idx);
++ reg_ctrl = rtw89_mac_reg_by_idx(rtwdev, p->ptcl_dbg, rtwvif_link->mac_idx);
+
+ rtw89_write32_mask(rtwdev, reg_ctrl, B_AX_PTCL_DBG_SEL_MASK, type);
+ rtw89_write32_set(rtwdev, reg_ctrl, B_AX_PTCL_DBG_EN);
+@@ -4098,26 +4098,32 @@ static void rtw89_mac_check_packet_ctrl(struct rtw89_dev *rtwdev,
+ rtw89_warn(rtwdev, "Polling beacon packet empty fail\n");
+ }
+
+-static void rtw89_mac_bcn_drop(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw89_mac_bcn_drop(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write32_set(rtwdev, p->bcn_drop_all, BIT(rtwvif->port));
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_SETUP_MASK, 1);
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_area, B_AX_BCN_MSK_AREA_MASK, 0);
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_HOLD_MASK, 0);
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_early, B_AX_BCNERLY_MASK, 2);
+- rtw89_write16_port_mask(rtwdev, rtwvif, p->tbtt_early, B_AX_TBTTERLY_MASK, 1);
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_space, B_AX_BCN_SPACE_MASK, 1);
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_BCNTX_EN);
+-
+- rtw89_mac_check_packet_ctrl(rtwdev, rtwvif, AX_PTCL_DBG_BCNQ_NUM0);
+- if (rtwvif->port == RTW89_PORT_0)
+- rtw89_mac_check_packet_ctrl(rtwdev, rtwvif, AX_PTCL_DBG_BCNQ_NUM1);
+-
+- rtw89_write32_clr(rtwdev, p->bcn_drop_all, BIT(rtwvif->port));
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_TBTT_PROHIB_EN);
++ rtw89_write32_set(rtwdev, p->bcn_drop_all, BIT(rtwvif_link->port));
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib, B_AX_TBTT_SETUP_MASK,
++ 1);
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_area, B_AX_BCN_MSK_AREA_MASK,
++ 0);
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib, B_AX_TBTT_HOLD_MASK,
++ 0);
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_early, B_AX_BCNERLY_MASK, 2);
++ rtw89_write16_port_mask(rtwdev, rtwvif_link, p->tbtt_early,
++ B_AX_TBTTERLY_MASK, 1);
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_space,
++ B_AX_BCN_SPACE_MASK, 1);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, B_AX_BCNTX_EN);
++
++ rtw89_mac_check_packet_ctrl(rtwdev, rtwvif_link, AX_PTCL_DBG_BCNQ_NUM0);
++ if (rtwvif_link->port == RTW89_PORT_0)
++ rtw89_mac_check_packet_ctrl(rtwdev, rtwvif_link, AX_PTCL_DBG_BCNQ_NUM1);
++
++ rtw89_write32_clr(rtwdev, p->bcn_drop_all, BIT(rtwvif_link->port));
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, B_AX_TBTT_PROHIB_EN);
+ fsleep(2000);
+ }
+
+@@ -4131,286 +4137,329 @@ static void rtw89_mac_bcn_drop(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvi
+ #define BCN_ERLY_SET_DLY (10 * 2)
+
+ static void rtw89_mac_port_cfg_func_sw(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+ const struct rtw89_chip_info *chip = rtwdev->chip;
++ struct ieee80211_bss_conf *bss_conf;
+ bool need_backup = false;
+ u32 backup_val;
++ u16 beacon_int;
+
+- if (!rtw89_read32_port_mask(rtwdev, rtwvif, p->port_cfg, B_AX_PORT_FUNC_EN))
++ if (!rtw89_read32_port_mask(rtwdev, rtwvif_link, p->port_cfg, B_AX_PORT_FUNC_EN))
+ return;
+
+- if (chip->chip_id == RTL8852A && rtwvif->port != RTW89_PORT_0) {
++ if (chip->chip_id == RTL8852A && rtwvif_link->port != RTW89_PORT_0) {
+ need_backup = true;
+- backup_val = rtw89_read32_port(rtwdev, rtwvif, p->tbtt_prohib);
++ backup_val = rtw89_read32_port(rtwdev, rtwvif_link, p->tbtt_prohib);
+ }
+
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
+- rtw89_mac_bcn_drop(rtwdev, rtwvif);
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE)
++ rtw89_mac_bcn_drop(rtwdev, rtwvif_link);
+
+ if (chip->chip_id == RTL8852A) {
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_SETUP_MASK);
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib, B_AX_TBTT_HOLD_MASK, 1);
+- rtw89_write16_port_clr(rtwdev, rtwvif, p->tbtt_early, B_AX_TBTTERLY_MASK);
+- rtw89_write16_port_clr(rtwdev, rtwvif, p->bcn_early, B_AX_BCNERLY_MASK);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->tbtt_prohib,
++ B_AX_TBTT_SETUP_MASK);
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib,
++ B_AX_TBTT_HOLD_MASK, 1);
++ rtw89_write16_port_clr(rtwdev, rtwvif_link, p->tbtt_early,
++ B_AX_TBTTERLY_MASK);
++ rtw89_write16_port_clr(rtwdev, rtwvif_link, p->bcn_early,
++ B_AX_BCNERLY_MASK);
+ }
+
+- msleep(vif->bss_conf.beacon_int + 1);
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_PORT_FUNC_EN |
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ beacon_int = bss_conf->beacon_int;
++
++ rcu_read_unlock();
++
++ msleep(beacon_int + 1);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, B_AX_PORT_FUNC_EN |
+ B_AX_BRK_SETUP);
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_TSFTR_RST);
+- rtw89_write32_port(rtwdev, rtwvif, p->bcn_cnt_tmr, 0);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, B_AX_TSFTR_RST);
++ rtw89_write32_port(rtwdev, rtwvif_link, p->bcn_cnt_tmr, 0);
+
+ if (need_backup)
+- rtw89_write32_port(rtwdev, rtwvif, p->tbtt_prohib, backup_val);
++ rtw89_write32_port(rtwdev, rtwvif_link, p->tbtt_prohib, backup_val);
+ }
+
+ static void rtw89_mac_port_cfg_tx_rpt(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en)
++ struct rtw89_vif_link *rtwvif_link, bool en)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+ if (en)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_TXBCN_RPT_EN);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg,
++ B_AX_TXBCN_RPT_EN);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_TXBCN_RPT_EN);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg,
++ B_AX_TXBCN_RPT_EN);
+ }
+
+ static void rtw89_mac_port_cfg_rx_rpt(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en)
++ struct rtw89_vif_link *rtwvif_link, bool en)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+ if (en)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_RXBCN_RPT_EN);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg,
++ B_AX_RXBCN_RPT_EN);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_RXBCN_RPT_EN);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg,
++ B_AX_RXBCN_RPT_EN);
+ }
+
+ static void rtw89_mac_port_cfg_net_type(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->port_cfg, B_AX_NET_TYPE_MASK,
+- rtwvif->net_type);
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->port_cfg, B_AX_NET_TYPE_MASK,
++ rtwvif_link->net_type);
+ }
+
+ static void rtw89_mac_port_cfg_bcn_prct(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- bool en = rtwvif->net_type != RTW89_NET_TYPE_NO_LINK;
++ bool en = rtwvif_link->net_type != RTW89_NET_TYPE_NO_LINK;
+ u32 bits = B_AX_TBTT_PROHIB_EN | B_AX_BRK_SETUP;
+
+ if (en)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, bits);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, bits);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, bits);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, bits);
+ }
+
+ static void rtw89_mac_port_cfg_rx_sw(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- bool en = rtwvif->net_type == RTW89_NET_TYPE_INFRA ||
+- rtwvif->net_type == RTW89_NET_TYPE_AD_HOC;
++ bool en = rtwvif_link->net_type == RTW89_NET_TYPE_INFRA ||
++ rtwvif_link->net_type == RTW89_NET_TYPE_AD_HOC;
+ u32 bit = B_AX_RX_BSSID_FIT_EN;
+
+ if (en)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, bit);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, bit);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, bit);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, bit);
+ }
+
+ void rtw89_mac_port_cfg_rx_sync(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en)
++ struct rtw89_vif_link *rtwvif_link, bool en)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+ if (en)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_TSF_UDT_EN);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, B_AX_TSF_UDT_EN);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_TSF_UDT_EN);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, B_AX_TSF_UDT_EN);
+ }
+
+ static void rtw89_mac_port_cfg_rx_sync_by_nettype(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- bool en = rtwvif->net_type == RTW89_NET_TYPE_INFRA ||
+- rtwvif->net_type == RTW89_NET_TYPE_AD_HOC;
++ bool en = rtwvif_link->net_type == RTW89_NET_TYPE_INFRA ||
++ rtwvif_link->net_type == RTW89_NET_TYPE_AD_HOC;
+
+- rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif, en);
++ rtw89_mac_port_cfg_rx_sync(rtwdev, rtwvif_link, en);
+ }
+
+ static void rtw89_mac_port_cfg_tx_sw(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en)
++ struct rtw89_vif_link *rtwvif_link, bool en)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+ if (en)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg, B_AX_BCNTX_EN);
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg, B_AX_BCNTX_EN);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg, B_AX_BCNTX_EN);
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg, B_AX_BCNTX_EN);
+ }
+
+ static void rtw89_mac_port_cfg_tx_sw_by_nettype(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- bool en = rtwvif->net_type == RTW89_NET_TYPE_AP_MODE ||
+- rtwvif->net_type == RTW89_NET_TYPE_AD_HOC;
++ bool en = rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE ||
++ rtwvif_link->net_type == RTW89_NET_TYPE_AD_HOC;
+
+- rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif, en);
++ rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif_link, en);
+ }
+
+ void rtw89_mac_enable_beacon_for_ap_vifs(struct rtw89_dev *rtwdev, bool en)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
+- rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif, en);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE)
++ rtw89_mac_port_cfg_tx_sw(rtwdev, rtwvif_link, en);
+ }
+
+ static void rtw89_mac_port_cfg_bcn_intv(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- u16 bcn_int = vif->bss_conf.beacon_int ? vif->bss_conf.beacon_int : BCN_INTERVAL;
++ struct ieee80211_bss_conf *bss_conf;
++ u16 bcn_int;
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ if (bss_conf->beacon_int)
++ bcn_int = bss_conf->beacon_int;
++ else
++ bcn_int = BCN_INTERVAL;
++
++ rcu_read_unlock();
+
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_space, B_AX_BCN_SPACE_MASK,
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_space, B_AX_BCN_SPACE_MASK,
+ bcn_int);
+ }
+
+ static void rtw89_mac_port_cfg_hiq_win(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- u8 win = rtwvif->net_type == RTW89_NET_TYPE_AP_MODE ? 16 : 0;
++ u8 win = rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE ? 16 : 0;
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- u8 port = rtwvif->port;
++ u8 port = rtwvif_link->port;
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_idx(rtwdev, p->hiq_win[port], rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, p->hiq_win[port], rtwvif_link->mac_idx);
+ rtw89_write8(rtwdev, reg, win);
+ }
+
+ static void rtw89_mac_port_cfg_hiq_dtim(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_bss_conf *bss_conf;
++ u8 dtim_period;
+ u32 addr;
+
+- addr = rtw89_mac_reg_by_idx(rtwdev, p->md_tsft, rtwvif->mac_idx);
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ dtim_period = bss_conf->dtim_period;
++
++ rcu_read_unlock();
++
++ addr = rtw89_mac_reg_by_idx(rtwdev, p->md_tsft, rtwvif_link->mac_idx);
+ rtw89_write8_set(rtwdev, addr, B_AX_UPD_HGQMD | B_AX_UPD_TIMIE);
+
+- rtw89_write16_port_mask(rtwdev, rtwvif, p->dtim_ctrl, B_AX_DTIM_NUM_MASK,
+- vif->bss_conf.dtim_period);
++ rtw89_write16_port_mask(rtwdev, rtwvif_link, p->dtim_ctrl, B_AX_DTIM_NUM_MASK,
++ dtim_period);
+ }
+
+ static void rtw89_mac_port_cfg_bcn_setup_time(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib,
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib,
+ B_AX_TBTT_SETUP_MASK, BCN_SETUP_DEF);
+ }
+
+ static void rtw89_mac_port_cfg_bcn_hold_time(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->tbtt_prohib,
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->tbtt_prohib,
+ B_AX_TBTT_HOLD_MASK, BCN_HOLD_DEF);
+ }
+
+ static void rtw89_mac_port_cfg_bcn_mask_area(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_area,
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_area,
+ B_AX_BCN_MSK_AREA_MASK, BCN_MASK_DEF);
+ }
+
+ static void rtw89_mac_port_cfg_tbtt_early(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write16_port_mask(rtwdev, rtwvif, p->tbtt_early,
++ rtw89_write16_port_mask(rtwdev, rtwvif_link, p->tbtt_early,
+ B_AX_TBTTERLY_MASK, TBTT_ERLY_DEF);
+ }
+
+ static void rtw89_mac_port_cfg_bss_color(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+ static const u32 masks[RTW89_PORT_NUM] = {
+ B_AX_BSS_COLOB_AX_PORT_0_MASK, B_AX_BSS_COLOB_AX_PORT_1_MASK,
+ B_AX_BSS_COLOB_AX_PORT_2_MASK, B_AX_BSS_COLOB_AX_PORT_3_MASK,
+ B_AX_BSS_COLOB_AX_PORT_4_MASK,
+ };
+- u8 port = rtwvif->port;
++ struct ieee80211_bss_conf *bss_conf;
++ u8 port = rtwvif_link->port;
+ u32 reg_base;
+ u32 reg;
+ u8 bss_color;
+
+- bss_color = vif->bss_conf.he_bss_color.color;
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ bss_color = bss_conf->he_bss_color.color;
++
++ rcu_read_unlock();
++
+ reg_base = port >= 4 ? p->bss_color + 4 : p->bss_color;
+- reg = rtw89_mac_reg_by_idx(rtwdev, reg_base, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, reg_base, rtwvif_link->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, masks[port], bss_color);
+ }
+
+ static void rtw89_mac_port_cfg_mbssid(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- u8 port = rtwvif->port;
++ u8 port = rtwvif_link->port;
+ u32 reg;
+
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE)
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE)
+ return;
+
+ if (port == 0) {
+- reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid, rtwvif_link->mac_idx);
+ rtw89_write32_clr(rtwdev, reg, B_AX_P0MB_ALL_MASK);
+ }
+ }
+
+ static void rtw89_mac_port_cfg_hiq_drop(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+- u8 port = rtwvif->port;
++ u8 port = rtwvif_link->port;
+ u32 reg;
+ u32 val;
+
+- reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid_drop, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, p->mbssid_drop, rtwvif_link->mac_idx);
+ val = rtw89_read32(rtwdev, reg);
+ val &= ~FIELD_PREP(B_AX_PORT_DROP_4_0_MASK, BIT(port));
+ if (port == 0)
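
Throughout the port-config hunk above, direct vif->bss_conf field reads are replaced by a copy-out under rcu_read_lock(): dereference the link's bss_conf via rtw89_vif_rcu_dereference_link(), copy the scalars needed (beacon_int, dtim_period, BSS color), unlock, then program registers. A userspace sketch of the same discipline, with a pthread rwlock standing in for RCU; all names are invented.

#include <pthread.h>
#include <stdio.h>

struct bss_conf {
	unsigned int beacon_int;
	unsigned int dtim_period;
};

static struct bss_conf conf = { .beacon_int = 0, .dtim_period = 2 };
static pthread_rwlock_t conf_lock = PTHREAD_RWLOCK_INITIALIZER;

#define BCN_INTERVAL 100	/* fallback, as in the bcn_intv hunk */

static void cfg_bcn_intv(void)
{
	unsigned int bcn_int;

	pthread_rwlock_rdlock(&conf_lock);	/* ~ rcu_read_lock() */
	bcn_int = conf.beacon_int ? conf.beacon_int : BCN_INTERVAL;
	pthread_rwlock_unlock(&conf_lock);	/* ~ rcu_read_unlock() */

	/* register writes only after the critical section ends */
	printf("program beacon space = %u TU\n", bcn_int);
}

int main(void)
{
	cfg_bcn_intv();
	return 0;
}
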
+@@ -4419,31 +4468,31 @@ static void rtw89_mac_port_cfg_hiq_drop(struct rtw89_dev *rtwdev,
+ }
+
+ static void rtw89_mac_port_cfg_func_en(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool enable)
++ struct rtw89_vif_link *rtwvif_link, bool enable)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+ if (enable)
+- rtw89_write32_port_set(rtwdev, rtwvif, p->port_cfg,
++ rtw89_write32_port_set(rtwdev, rtwvif_link, p->port_cfg,
+ B_AX_PORT_FUNC_EN);
+ else
+- rtw89_write32_port_clr(rtwdev, rtwvif, p->port_cfg,
++ rtw89_write32_port_clr(rtwdev, rtwvif_link, p->port_cfg,
+ B_AX_PORT_FUNC_EN);
+ }
+
+ static void rtw89_mac_port_cfg_bcn_early(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+
+- rtw89_write32_port_mask(rtwdev, rtwvif, p->bcn_early, B_AX_BCNERLY_MASK,
++ rtw89_write32_port_mask(rtwdev, rtwvif_link, p->bcn_early, B_AX_BCNERLY_MASK,
+ BCN_ERLY_DEF);
+ }
+
+ static void rtw89_mac_port_cfg_tbtt_shift(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ const struct rtw89_port_reg *p = mac->port_base;
+@@ -4452,20 +4501,20 @@ static void rtw89_mac_port_cfg_tbtt_shift(struct rtw89_dev *rtwdev,
+ if (rtwdev->chip->chip_id != RTL8852C)
+ return;
+
+- if (rtwvif->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT &&
+- rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION)
++ if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT &&
++ rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION)
+ return;
+
+ val = FIELD_PREP(B_AX_TBTT_SHIFT_OFST_MAG, 1) |
+ B_AX_TBTT_SHIFT_OFST_SIGN;
+
+- rtw89_write16_port_mask(rtwdev, rtwvif, p->tbtt_shift,
++ rtw89_write16_port_mask(rtwdev, rtwvif_link, p->tbtt_shift,
+ B_AX_TBTT_SHIFT_OFST_MASK, val);
+ }
+
+ void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_vif *rtwvif_src,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_vif_link *rtwvif_src,
+ u16 offset_tu)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+@@ -4473,8 +4522,8 @@ void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev,
+ u32 val, reg;
+
+ val = RTW89_PORT_OFFSET_TU_TO_32US(offset_tu);
+- reg = rtw89_mac_reg_by_idx(rtwdev, p->tsf_sync + rtwvif->port * 4,
+- rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, p->tsf_sync + rtwvif_link->port * 4,
++ rtwvif_link->mac_idx);
+
+ rtw89_write32_mask(rtwdev, reg, B_AX_SYNC_PORT_SRC, rtwvif_src->port);
+ rtw89_write32_mask(rtwdev, reg, B_AX_SYNC_PORT_OFFSET_VAL, val);
+@@ -4482,16 +4531,16 @@ void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev,
+ }
+
+ static void rtw89_mac_port_tsf_sync_rand(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_vif *rtwvif_src,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_vif_link *rtwvif_src,
+ u8 offset, int *n_offset)
+ {
+- if (rtwvif->net_type != RTW89_NET_TYPE_AP_MODE || rtwvif == rtwvif_src)
++ if (rtwvif_link->net_type != RTW89_NET_TYPE_AP_MODE || rtwvif_link == rtwvif_src)
+ return;
+
+ /* adjust offset randomly to avoid beacon conflict */
+ offset = offset - offset / 4 + get_random_u32() % (offset / 2);
+- rtw89_mac_port_tsf_sync(rtwdev, rtwvif, rtwvif_src,
++ rtw89_mac_port_tsf_sync(rtwdev, rtwvif_link, rtwvif_src,
+ (*n_offset) * offset);
+
+ (*n_offset)++;
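
The TSF-sync randomization above computes offset - offset/4 + get_random_u32() % (offset/2), i.e. a value in [0.75*offset, 1.25*offset), so co-located AP beacons land near, but not exactly on, their nominal slots. A quick userspace check of that range with offset = 100:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	unsigned int offset = 100;

	/* 100 - 25 + [0, 50) -> values in [75, 125); the driver then
	 * scales the jittered offset by n_offset per AP port. */
	for (int i = 0; i < 5; i++) {
		unsigned int jittered =
			offset - offset / 4 + rand() % (offset / 2);

		printf("%u\n", jittered);
	}
	return 0;
}
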
+@@ -4499,15 +4548,19 @@ static void rtw89_mac_port_tsf_sync_rand(struct rtw89_dev *rtwdev,
+
+ static void rtw89_mac_port_tsf_resync_all(struct rtw89_dev *rtwdev)
+ {
+- struct rtw89_vif *src = NULL, *tmp;
++ struct rtw89_vif_link *src = NULL, *tmp;
+ u8 offset = 100, vif_aps = 0;
++ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+ int n_offset = 1;
+
+- rtw89_for_each_rtwvif(rtwdev, tmp) {
+- if (!src || tmp->net_type == RTW89_NET_TYPE_INFRA)
+- src = tmp;
+- if (tmp->net_type == RTW89_NET_TYPE_AP_MODE)
+- vif_aps++;
++ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
++ rtw89_vif_for_each_link(rtwvif, tmp, link_id) {
++ if (!src || tmp->net_type == RTW89_NET_TYPE_INFRA)
++ src = tmp;
++ if (tmp->net_type == RTW89_NET_TYPE_AP_MODE)
++ vif_aps++;
++ }
+ }
+
+ if (vif_aps == 0)
+@@ -4515,104 +4568,106 @@ static void rtw89_mac_port_tsf_resync_all(struct rtw89_dev *rtwdev)
+
+ offset /= (vif_aps + 1);
+
+- rtw89_for_each_rtwvif(rtwdev, tmp)
+- rtw89_mac_port_tsf_sync_rand(rtwdev, tmp, src, offset, &n_offset);
++ rtw89_for_each_rtwvif(rtwdev, rtwvif)
++ rtw89_vif_for_each_link(rtwvif, tmp, link_id)
++ rtw89_mac_port_tsf_sync_rand(rtwdev, tmp, src, offset,
++ &n_offset);
+ }
+
+-int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+ int ret;
+
+- ret = rtw89_mac_port_update(rtwdev, rtwvif);
++ ret = rtw89_mac_port_update(rtwdev, rtwvif_link);
+ if (ret)
+ return ret;
+
+- rtw89_mac_dmac_tbl_init(rtwdev, rtwvif->mac_id);
+- rtw89_mac_cmac_tbl_init(rtwdev, rtwvif->mac_id);
++ rtw89_mac_dmac_tbl_init(rtwdev, rtwvif_link->mac_id);
++ rtw89_mac_cmac_tbl_init(rtwdev, rtwvif_link->mac_id);
+
+- ret = rtw89_mac_set_macid_pause(rtwdev, rtwvif->mac_id, false);
++ ret = rtw89_mac_set_macid_pause(rtwdev, rtwvif_link->mac_id, false);
+ if (ret)
+ return ret;
+
+- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, NULL, RTW89_ROLE_CREATE);
++ ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, NULL, RTW89_ROLE_CREATE);
+ if (ret)
+ return ret;
+
+- ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, NULL, true);
++ ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, NULL, true);
+ if (ret)
+ return ret;
+
+- ret = rtw89_cam_init(rtwdev, rtwvif);
++ ret = rtw89_cam_init(rtwdev, rtwvif_link);
+ if (ret)
+ return ret;
+
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL);
+ if (ret)
+ return ret;
+
+- ret = rtw89_chip_h2c_default_cmac_tbl(rtwdev, rtwvif, NULL);
++ ret = rtw89_chip_h2c_default_cmac_tbl(rtwdev, rtwvif_link, NULL);
+ if (ret)
+ return ret;
+
+- ret = rtw89_chip_h2c_default_dmac_tbl(rtwdev, rtwvif, NULL);
++ ret = rtw89_chip_h2c_default_dmac_tbl(rtwdev, rtwvif_link, NULL);
+ if (ret)
+ return ret;
+
+ return 0;
+ }
+
+-int rtw89_mac_vif_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_mac_vif_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+ int ret;
+
+- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, NULL, RTW89_ROLE_REMOVE);
++ ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, NULL, RTW89_ROLE_REMOVE);
+ if (ret)
+ return ret;
+
+- rtw89_cam_deinit(rtwdev, rtwvif);
++ rtw89_cam_deinit(rtwdev, rtwvif_link);
+
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL);
+ if (ret)
+ return ret;
+
+ return 0;
+ }
+
+-int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- u8 port = rtwvif->port;
++ u8 port = rtwvif_link->port;
+
+ if (port >= RTW89_PORT_NUM)
+ return -EINVAL;
+
+- rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_tx_rpt(rtwdev, rtwvif, false);
+- rtw89_mac_port_cfg_rx_rpt(rtwdev, rtwvif, false);
+- rtw89_mac_port_cfg_net_type(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_bcn_prct(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_rx_sw(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_rx_sync_by_nettype(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_tx_sw_by_nettype(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_bcn_intv(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_hiq_win(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_hiq_dtim(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_hiq_drop(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_bcn_setup_time(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_bcn_hold_time(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_bcn_mask_area(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_tbtt_early(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_tbtt_shift(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_bss_color(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_mbssid(rtwdev, rtwvif);
+- rtw89_mac_port_cfg_func_en(rtwdev, rtwvif, true);
++ rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_tx_rpt(rtwdev, rtwvif_link, false);
++ rtw89_mac_port_cfg_rx_rpt(rtwdev, rtwvif_link, false);
++ rtw89_mac_port_cfg_net_type(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_bcn_prct(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_rx_sw(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_rx_sync_by_nettype(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_tx_sw_by_nettype(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_bcn_intv(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_hiq_win(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_hiq_dtim(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_hiq_drop(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_bcn_setup_time(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_bcn_hold_time(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_bcn_mask_area(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_tbtt_early(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_tbtt_shift(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_bss_color(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_mbssid(rtwdev, rtwvif_link);
++ rtw89_mac_port_cfg_func_en(rtwdev, rtwvif_link, true);
+ rtw89_mac_port_tsf_resync_all(rtwdev);
+ fsleep(BCN_ERLY_SET_DLY);
+- rtw89_mac_port_cfg_bcn_early(rtwdev, rtwvif);
++ rtw89_mac_port_cfg_bcn_early(rtwdev, rtwvif_link);
+
+ return 0;
+ }
+
+-int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u64 *tsf)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+@@ -4620,12 +4675,12 @@ int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ u32 tsf_low, tsf_high;
+ int ret;
+
+- ret = rtw89_mac_check_mac_en(rtwdev, rtwvif->mac_idx, RTW89_CMAC_SEL);
++ ret = rtw89_mac_check_mac_en(rtwdev, rtwvif_link->mac_idx, RTW89_CMAC_SEL);
+ if (ret)
+ return ret;
+
+- tsf_low = rtw89_read32_port(rtwdev, rtwvif, p->tsftr_l);
+- tsf_high = rtw89_read32_port(rtwdev, rtwvif, p->tsftr_h);
++ tsf_low = rtw89_read32_port(rtwdev, rtwvif_link, p->tsftr_l);
++ tsf_high = rtw89_read32_port(rtwdev, rtwvif_link, p->tsftr_h);
+ *tsf = (u64)tsf_high << 32 | tsf_low;
+
+ return 0;
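
rtw89_mac_port_get_tsf() above stitches the 64-bit TSF from two 32-bit port registers. Trivial, but worth seeing in isolation; fixed values here, whereas the driver reads tsftr_l/tsftr_h via rtw89_read32_port().

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t tsf_low = 0xdeadbeef, tsf_high = 0x00000001;
	uint64_t tsf = (uint64_t)tsf_high << 32 | tsf_low;

	printf("tsf = 0x%016llx\n", (unsigned long long)tsf);
	return 0;
}
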
+@@ -4651,65 +4706,57 @@ static void rtw89_mac_check_he_obss_narrow_bw_ru_iter(struct wiphy *wiphy,
+ }
+
+ void rtw89_mac_set_he_obss_narrow_bw_ru(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct ieee80211_hw *hw = rtwdev->hw;
++ struct ieee80211_bss_conf *bss_conf;
++ struct cfg80211_chan_def oper;
+ bool tolerated = true;
+ u32 reg;
+
+- if (!vif->bss_conf.he_support || vif->type != NL80211_IFTYPE_STATION)
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ if (!bss_conf->he_support || vif->type != NL80211_IFTYPE_STATION) {
++ rcu_read_unlock();
+ return;
++ }
+
+- if (!(vif->bss_conf.chanreq.oper.chan->flags & IEEE80211_CHAN_RADAR))
++ oper = bss_conf->chanreq.oper;
++ if (!(oper.chan->flags & IEEE80211_CHAN_RADAR)) {
++ rcu_read_unlock();
+ return;
++ }
++
++ rcu_read_unlock();
+
+- cfg80211_bss_iter(hw->wiphy, &vif->bss_conf.chanreq.oper,
++ cfg80211_bss_iter(hw->wiphy, &oper,
+ rtw89_mac_check_he_obss_narrow_bw_ru_iter,
+ &tolerated);
+
+ reg = rtw89_mac_reg_by_idx(rtwdev, mac->narrow_bw_ru_dis.addr,
+- rtwvif->mac_idx);
++ rtwvif_link->mac_idx);
+ if (tolerated)
+ rtw89_write32_clr(rtwdev, reg, mac->narrow_bw_ru_dis.mask);
+ else
+ rtw89_write32_set(rtwdev, reg, mac->narrow_bw_ru_dis.mask);
+ }
+
+-void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif);
++ rtw89_mac_port_cfg_func_sw(rtwdev, rtwvif_link);
+ }
+
+-int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- int ret;
+-
+- rtwvif->mac_id = rtw89_acquire_mac_id(rtwdev);
+- if (rtwvif->mac_id == RTW89_MAX_MAC_ID_NUM)
+- return -ENOSPC;
+-
+- ret = rtw89_mac_vif_init(rtwdev, rtwvif);
+- if (ret)
+- goto release_mac_id;
+-
+- return 0;
+-
+-release_mac_id:
+- rtw89_release_mac_id(rtwdev, rtwvif->mac_id);
+-
+- return ret;
++ return rtw89_mac_vif_init(rtwdev, rtwvif_link);
+ }
+
+-int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- int ret;
+-
+- ret = rtw89_mac_vif_deinit(rtwdev, rtwvif);
+- rtw89_release_mac_id(rtwdev, rtwvif->mac_id);
+-
+- return ret;
++ return rtw89_mac_vif_deinit(rtwdev, rtwvif_link);
+ }
+
+ static void
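
rtw89_mac_add_vif()/rtw89_mac_remove_vif() above shrink to thin wrappers: mac_id acquisition and its error unwind leave these helpers (presumably moving to whatever code creates the rtw89_vif_link; that caller is outside this excerpt). The removed acquire/init/release-on-failure pattern, as a generic compilable sketch with invented names:

#include <stdio.h>

#define MAX_ID 4

static int ids[MAX_ID];

static int acquire_id(void)
{
	for (int i = 0; i < MAX_ID; i++)
		if (!ids[i]) {
			ids[i] = 1;
			return i;
		}
	return -1;
}

static void release_id(int id)
{
	ids[id] = 0;
}

static int vif_init(int id)
{
	(void)id;
	return -1;	/* force the unwind path for the demo */
}

static int add_vif(void)
{
	int ret, id = acquire_id();

	if (id < 0)
		return -28;	/* -ENOSPC */

	ret = vif_init(id);
	if (ret)
		goto release_mac_id;

	return 0;

release_mac_id:
	release_id(id);	/* undo the acquire on any init failure */
	return ret;
}

int main(void)
{
	printf("add_vif -> %d\n", add_vif());
	return 0;
}
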
+@@ -4730,8 +4777,8 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb,
+ {
+ const struct rtw89_c2h_scanofld *c2h =
+ (const struct rtw89_c2h_scanofld *)skb->data;
+- struct ieee80211_vif *vif = rtwdev->scan_info.scanning_vif;
+- struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
++ struct rtw89_vif_link *rtwvif_link = rtwdev->scan_info.scanning_vif;
++ struct rtw89_vif *rtwvif;
+ struct rtw89_chan new;
+ u8 reason, status, tx_fail, band, actual_period, expect_period;
+ u32 last_chan = rtwdev->scan_info.last_chan_idx, report_tsf;
+@@ -4739,9 +4786,11 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb,
+ u16 chan;
+ int ret;
+
+- if (!rtwvif)
++ if (!rtwvif_link)
+ return;
+
++ rtwvif = rtwvif_link->rtwvif;
++
+ tx_fail = le32_get_bits(c2h->w5, RTW89_C2H_SCANOFLD_W5_TX_FAIL);
+ status = le32_get_bits(c2h->w2, RTW89_C2H_SCANOFLD_W2_STATUS);
+ chan = le32_get_bits(c2h->w2, RTW89_C2H_SCANOFLD_W2_PRI_CH);
+@@ -4781,28 +4830,28 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb,
+ if (rtwdev->scan_info.abort)
+ return;
+
+- if (rtwvif && rtwvif->scan_req &&
++ if (rtwvif_link && rtwvif->scan_req &&
+ last_chan < rtwvif->scan_req->n_channels) {
+- ret = rtw89_hw_scan_offload(rtwdev, vif, true);
++ ret = rtw89_hw_scan_offload(rtwdev, rtwvif_link, true);
+ if (ret) {
+- rtw89_hw_scan_abort(rtwdev, vif);
++ rtw89_hw_scan_abort(rtwdev, rtwvif_link);
+ rtw89_warn(rtwdev, "HW scan failed: %d\n", ret);
+ }
+ } else {
+- rtw89_hw_scan_complete(rtwdev, vif, false);
++ rtw89_hw_scan_complete(rtwdev, rtwvif_link, false);
+ }
+ break;
+ case RTW89_SCAN_ENTER_OP_NOTIFY:
+ case RTW89_SCAN_ENTER_CH_NOTIFY:
+ if (rtw89_is_op_chan(rtwdev, band, chan)) {
+- rtw89_assign_entity_chan(rtwdev, rtwvif->chanctx_idx,
++ rtw89_assign_entity_chan(rtwdev, rtwvif_link->chanctx_idx,
+ &rtwdev->scan_info.op_chan);
+ rtw89_mac_enable_beacon_for_ap_vifs(rtwdev, true);
+ ieee80211_wake_queues(rtwdev->hw);
+ } else {
+ rtw89_chan_create(&new, chan, chan, band,
+ RTW89_CHANNEL_WIDTH_20);
+- rtw89_assign_entity_chan(rtwdev, rtwvif->chanctx_idx,
++ rtw89_assign_entity_chan(rtwdev, rtwvif_link->chanctx_idx,
+ &new);
+ }
+ break;
+@@ -4812,10 +4861,11 @@ rtw89_mac_c2h_scanofld_rsp(struct rtw89_dev *rtwdev, struct sk_buff *skb,
+ }
+
+ static void
+-rtw89_mac_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_mac_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ struct sk_buff *skb)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif_safe(rtwvif);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ enum nl80211_cqm_rssi_threshold_event nl_event;
+ const struct rtw89_c2h_mac_bcnfltr_rpt *c2h =
+ (const struct rtw89_c2h_mac_bcnfltr_rpt *)skb->data;
+@@ -4827,7 +4877,7 @@ rtw89_mac_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ event = le32_get_bits(c2h->w2, RTW89_C2H_MAC_BCNFLTR_RPT_W2_EVENT);
+ mac_id = le32_get_bits(c2h->w2, RTW89_C2H_MAC_BCNFLTR_RPT_W2_MACID);
+
+- if (mac_id != rtwvif->mac_id)
++ if (mac_id != rtwvif_link->mac_id)
+ return;
+
+ rtw89_debug(rtwdev, RTW89_DBG_FW,
+@@ -4839,7 +4889,7 @@ rtw89_mac_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ if (!rtwdev->scanning && !rtwvif->offchan)
+ ieee80211_connection_loss(vif);
+ else
+- rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true);
++ rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, true);
+ return;
+ case RTW89_BCN_FLTR_NOTIFY:
+ nl_event = NL80211_CQM_RSSI_THRESHOLD_EVENT_HIGH;
+@@ -4863,10 +4913,13 @@ static void
+ rtw89_mac_c2h_bcn_fltr_rpt(struct rtw89_dev *rtwdev, struct sk_buff *c2h,
+ u32 len)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_mac_bcn_fltr_rpt(rtwdev, rtwvif, c2h);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_mac_bcn_fltr_rpt(rtwdev, rtwvif_link, c2h);
+ }
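
The fan-out above works because every active link owns a distinct mac_id: the C2H report is offered to each link in turn, and the per-link handler in rtw89_mac_bcn_fltr_rpt() discards reports addressed elsewhere. The filter reduces to:

    u8 mac_id = le32_get_bits(c2h->w2, RTW89_C2H_MAC_BCNFLTR_RPT_W2_MACID);

    if (mac_id != rtwvif_link->mac_id)
            return;  /* report belongs to some other link */
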
+
+ static void
+@@ -5931,15 +5984,15 @@ static int rtw89_mac_init_bfee_ax(struct rtw89_dev *rtwdev, u8 mac_idx)
+ }
+
+ static int rtw89_mac_set_csi_para_reg_ax(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- u8 mac_idx = rtwvif->mac_idx;
+ u8 nc = 1, nr = 3, ng = 0, cb = 1, cs = 1, ldpc_en = 1, stbc_en = 1;
+- u8 port_sel = rtwvif->port;
++ struct ieee80211_link_sta *link_sta;
++ u8 mac_idx = rtwvif_link->mac_idx;
++ u8 port_sel = rtwvif_link->port;
+ u8 sound_dim = 3, t;
+- u8 *phy_cap = sta->deflink.he_cap.he_cap_elem.phy_cap_info;
++ u8 *phy_cap;
+ u32 reg;
+ u16 val;
+ int ret;
+@@ -5948,6 +6001,11 @@ static int rtw89_mac_set_csi_para_reg_ax(struct rtw89_dev *rtwdev,
+ if (ret)
+ return ret;
+
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ phy_cap = link_sta->he_cap.he_cap_elem.phy_cap_info;
++
+ if ((phy_cap[3] & IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER) ||
+ (phy_cap[4] & IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER)) {
+ ldpc_en &= !!(phy_cap[1] & IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD);
+@@ -5956,17 +6014,19 @@ static int rtw89_mac_set_csi_para_reg_ax(struct rtw89_dev *rtwdev,
+ phy_cap[5]);
+ sound_dim = min(sound_dim, t);
+ }
+- if ((sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) ||
+- (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) {
+- ldpc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC);
+- stbc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK);
++ if ((link_sta->vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) ||
++ (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) {
++ ldpc_en &= !!(link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC);
++ stbc_en &= !!(link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK);
+ t = FIELD_GET(IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK,
+- sta->deflink.vht_cap.cap);
++ link_sta->vht_cap.cap);
+ sound_dim = min(sound_dim, t);
+ }
+ nc = min(nc, sound_dim);
+ nr = min(nr, sound_dim);
+
++ rcu_read_unlock();
++
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_TRXPTCL_RESP_CSI_CTRL_0, mac_idx);
+ rtw89_write32_set(rtwdev, reg, B_AX_BFMEE_BFPARAM_SEL);
+
+@@ -5989,34 +6049,41 @@ static int rtw89_mac_set_csi_para_reg_ax(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_mac_csi_rrsc_ax(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ u32 rrsc = BIT(RTW89_MAC_BF_RRSC_6M) | BIT(RTW89_MAC_BF_RRSC_24M);
++ struct ieee80211_link_sta *link_sta;
++ u8 mac_idx = rtwvif_link->mac_idx;
+ u32 reg;
+- u8 mac_idx = rtwvif->mac_idx;
+ int ret;
+
+ ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL);
+ if (ret)
+ return ret;
+
+- if (sta->deflink.he_cap.has_he) {
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++
++ if (link_sta->he_cap.has_he) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_HE_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_HE_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_HE_MSC5));
+ }
+- if (sta->deflink.vht_cap.vht_supported) {
++ if (link_sta->vht_cap.vht_supported) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_VHT_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_VHT_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_VHT_MSC5));
+ }
+- if (sta->deflink.ht_cap.ht_supported) {
++ if (link_sta->ht_cap.ht_supported) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_HT_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_HT_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_HT_MSC5));
+ }
++
++ rcu_read_unlock();
++
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_TRXPTCL_RESP_CSI_CTRL_0, mac_idx);
+ rtw89_write32_set(rtwdev, reg, B_AX_BFMEE_BFPARAM_SEL);
+ rtw89_write32_clr(rtwdev, reg, B_AX_BFMEE_CSI_FORCE_RETE_EN);
+@@ -6028,35 +6095,53 @@ static int rtw89_mac_csi_rrsc_ax(struct rtw89_dev *rtwdev,
+ }
+
+ static void rtw89_mac_bf_assoc_ax(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct ieee80211_link_sta *link_sta;
++ bool has_beamformer_cap;
++
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ has_beamformer_cap = rtw89_sta_has_beamformer_cap(link_sta);
++
++ rcu_read_unlock();
+
+- if (rtw89_sta_has_beamformer_cap(sta)) {
++ if (has_beamformer_cap) {
+ rtw89_debug(rtwdev, RTW89_DBG_BF,
+ "initialize bfee for new association\n");
+- rtw89_mac_init_bfee_ax(rtwdev, rtwvif->mac_idx);
+- rtw89_mac_set_csi_para_reg_ax(rtwdev, vif, sta);
+- rtw89_mac_csi_rrsc_ax(rtwdev, vif, sta);
++ rtw89_mac_init_bfee_ax(rtwdev, rtwvif_link->mac_idx);
++ rtw89_mac_set_csi_para_reg_ax(rtwdev, rtwvif_link, rtwsta_link);
++ rtw89_mac_csi_rrsc_ax(rtwdev, rtwvif_link, rtwsta_link);
+ }
+ }
+
+-void rtw89_mac_bf_disassoc(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++void rtw89_mac_bf_disassoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+-
+- rtw89_mac_bfee_ctrl(rtwdev, rtwvif->mac_idx, false);
++ rtw89_mac_bfee_ctrl(rtwdev, rtwvif_link->mac_idx, false);
+ }
+
+ void rtw89_mac_bf_set_gid_table(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *conf)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+- u8 mac_idx = rtwvif->mac_idx;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
++ u8 mac_idx;
+ __le32 *p;
+
++ rtwvif_link = rtwvif->links[conf->link_id];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, conf->link_id);
++ return;
++ }
++
++ mac_idx = rtwvif_link->mac_idx;
++
+ rtw89_debug(rtwdev, RTW89_DBG_BF, "update bf GID table\n");
+
+ p = (__le32 *)conf->mu_group.membership;
+@@ -6080,7 +6165,7 @@ void rtw89_mac_bf_set_gid_table(struct rtw89_dev *rtwdev, struct ieee80211_vif *
+
+ struct rtw89_mac_bf_monitor_iter_data {
+ struct rtw89_dev *rtwdev;
+- struct ieee80211_sta *down_sta;
++ struct rtw89_sta_link *down_rtwsta_link;
+ int count;
+ };
+
+@@ -6089,23 +6174,41 @@ void rtw89_mac_bf_monitor_calc_iter(void *data, struct ieee80211_sta *sta)
+ {
+ struct rtw89_mac_bf_monitor_iter_data *iter_data =
+ (struct rtw89_mac_bf_monitor_iter_data *)data;
+- struct ieee80211_sta *down_sta = iter_data->down_sta;
++ struct rtw89_sta_link *down_rtwsta_link = iter_data->down_rtwsta_link;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct ieee80211_link_sta *link_sta;
++ struct rtw89_sta_link *rtwsta_link;
++ bool has_beamformer_cap = false;
+ int *count = &iter_data->count;
++ unsigned int link_id;
+
+- if (down_sta == sta)
+- return;
++ rcu_read_lock();
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ if (rtwsta_link == down_rtwsta_link)
++ continue;
+
+- if (rtw89_sta_has_beamformer_cap(sta))
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
++ if (rtw89_sta_has_beamformer_cap(link_sta)) {
++ has_beamformer_cap = true;
++ break;
++ }
++ }
++
++ if (has_beamformer_cap)
+ (*count)++;
++
++ rcu_read_unlock();
+ }
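
rtw89_mac_bf_monitor_calc() below drives this iterator through ieee80211_iterate_stations_atomic(), passing in the link being torn down (if any) so it is excluded from the count. A sketch of the call site, matching the hunk that follows:

    struct rtw89_mac_bf_monitor_iter_data data = {
            .rtwdev = rtwdev,
            .down_rtwsta_link = disconnect ? rtwsta_link : NULL,
            .count = 0,
    };

    ieee80211_iterate_stations_atomic(rtwdev->hw,
                                      rtw89_mac_bf_monitor_calc_iter,
                                      &data);
    /* data.count: beamformer-capable peers still associated */
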
+
+ void rtw89_mac_bf_monitor_calc(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta, bool disconnect)
++ struct rtw89_sta_link *rtwsta_link,
++ bool disconnect)
+ {
+ struct rtw89_mac_bf_monitor_iter_data data;
+
+ data.rtwdev = rtwdev;
+- data.down_sta = disconnect ? sta : NULL;
++ data.down_rtwsta_link = disconnect ? rtwsta_link : NULL;
+ data.count = 0;
+ ieee80211_iterate_stations_atomic(rtwdev->hw,
+ rtw89_mac_bf_monitor_calc_iter,
+@@ -6121,10 +6224,12 @@ void rtw89_mac_bf_monitor_calc(struct rtw89_dev *rtwdev,
+ void _rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_traffic_stats *stats = &rtwdev->stats;
+- struct rtw89_vif *rtwvif;
++ struct rtw89_vif_link *rtwvif_link;
+ bool en = stats->tx_tfc_lv <= stats->rx_tfc_lv;
+ bool old = test_bit(RTW89_FLAG_BFEE_EN, rtwdev->flags);
++ struct rtw89_vif *rtwvif;
+ bool keep_timer = true;
++ unsigned int link_id;
+ bool old_keep_timer;
+
+ old_keep_timer = test_bit(RTW89_FLAG_BFEE_TIMER_KEEP, rtwdev->flags);
+@@ -6134,30 +6239,32 @@ void _rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev)
+
+ if (keep_timer != old_keep_timer) {
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_mac_bfee_standby_timer(rtwdev, rtwvif->mac_idx,
+- keep_timer);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_mac_bfee_standby_timer(rtwdev, rtwvif_link->mac_idx,
++ keep_timer);
+ }
+
+ if (en == old)
+ return;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_mac_bfee_ctrl(rtwdev, rtwvif->mac_idx, en);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_mac_bfee_ctrl(rtwdev, rtwvif_link->mac_idx, en);
+ }
+
+ static int
+-__rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++__rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link,
+ u32 tx_time)
+ {
+ #define MAC_AX_DFLT_TX_TIME 5280
+- u8 mac_idx = rtwsta->rtwvif->mac_idx;
++ u8 mac_idx = rtwsta_link->rtwvif_link->mac_idx;
+ u32 max_tx_time = tx_time == 0 ? MAC_AX_DFLT_TX_TIME : tx_time;
+ u32 reg;
+ int ret = 0;
+
+- if (rtwsta->cctl_tx_time) {
+- rtwsta->ampdu_max_time = (max_tx_time - 512) >> 9;
+- ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta);
++ if (rtwsta_link->cctl_tx_time) {
++ rtwsta_link->ampdu_max_time = (max_tx_time - 512) >> 9;
++ ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta_link);
+ } else {
+ ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL);
+ if (ret) {
+@@ -6173,31 +6280,31 @@ __rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
+ return ret;
+ }
+
+-int rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link,
+ bool resume, u32 tx_time)
+ {
+ int ret = 0;
+
+ if (!resume) {
+- rtwsta->cctl_tx_time = true;
+- ret = __rtw89_mac_set_tx_time(rtwdev, rtwsta, tx_time);
++ rtwsta_link->cctl_tx_time = true;
++ ret = __rtw89_mac_set_tx_time(rtwdev, rtwsta_link, tx_time);
+ } else {
+- ret = __rtw89_mac_set_tx_time(rtwdev, rtwsta, tx_time);
+- rtwsta->cctl_tx_time = false;
++ ret = __rtw89_mac_set_tx_time(rtwdev, rtwsta_link, tx_time);
++ rtwsta_link->cctl_tx_time = false;
+ }
+
+ return ret;
+ }
+
+-int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link,
+ u32 *tx_time)
+ {
+- u8 mac_idx = rtwsta->rtwvif->mac_idx;
++ u8 mac_idx = rtwsta_link->rtwvif_link->mac_idx;
+ u32 reg;
+ int ret = 0;
+
+- if (rtwsta->cctl_tx_time) {
+- *tx_time = (rtwsta->ampdu_max_time + 1) << 9;
++ if (rtwsta_link->cctl_tx_time) {
++ *tx_time = (rtwsta_link->ampdu_max_time + 1) << 9;
+ } else {
+ ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL);
+ if (ret) {
+@@ -6213,33 +6320,33 @@ int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
+ }
+
+ int rtw89_mac_set_tx_retry_limit(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_sta_link *rtwsta_link,
+ bool resume, u8 tx_retry)
+ {
+ int ret = 0;
+
+- rtwsta->data_tx_cnt_lmt = tx_retry;
++ rtwsta_link->data_tx_cnt_lmt = tx_retry;
+
+ if (!resume) {
+- rtwsta->cctl_tx_retry_limit = true;
+- ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta);
++ rtwsta_link->cctl_tx_retry_limit = true;
++ ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta_link);
+ } else {
+- ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta);
+- rtwsta->cctl_tx_retry_limit = false;
++ ret = rtw89_fw_h2c_txtime_cmac_tbl(rtwdev, rtwsta_link);
++ rtwsta_link->cctl_tx_retry_limit = false;
+ }
+
+ return ret;
+ }
+
+ int rtw89_mac_get_tx_retry_limit(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta, u8 *tx_retry)
++ struct rtw89_sta_link *rtwsta_link, u8 *tx_retry)
+ {
+- u8 mac_idx = rtwsta->rtwvif->mac_idx;
++ u8 mac_idx = rtwsta_link->rtwvif_link->mac_idx;
+ u32 reg;
+ int ret = 0;
+
+- if (rtwsta->cctl_tx_retry_limit) {
+- *tx_retry = rtwsta->data_tx_cnt_lmt;
++ if (rtwsta_link->cctl_tx_retry_limit) {
++ *tx_retry = rtwsta_link->data_tx_cnt_lmt;
+ } else {
+ ret = rtw89_mac_check_mac_en(rtwdev, mac_idx, RTW89_CMAC_SEL);
+ if (ret) {
+@@ -6255,10 +6362,10 @@ int rtw89_mac_get_tx_retry_limit(struct rtw89_dev *rtwdev,
+ }
+
+ int rtw89_mac_set_hw_muedca_ctrl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en)
++ struct rtw89_vif_link *rtwvif_link, bool en)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+- u8 mac_idx = rtwvif->mac_idx;
++ u8 mac_idx = rtwvif_link->mac_idx;
+ u16 set = mac->muedca_ctrl.mask;
+ u32 reg;
+ u32 ret;
+@@ -6326,7 +6433,9 @@ int rtw89_mac_read_xtal_si_ax(struct rtw89_dev *rtwdev, u8 offset, u8 *val)
+ }
+
+ static
+-void rtw89_mac_pkt_drop_sta(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta)
++void rtw89_mac_pkt_drop_sta(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ static const enum rtw89_pkt_drop_sel sels[] = {
+ RTW89_PKT_DROP_SEL_MACID_BE_ONCE,
+@@ -6334,15 +6443,14 @@ void rtw89_mac_pkt_drop_sta(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta)
+ RTW89_PKT_DROP_SEL_MACID_VI_ONCE,
+ RTW89_PKT_DROP_SEL_MACID_VO_ONCE,
+ };
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_pkt_drop_params params = {0};
+ int i;
+
+ params.mac_band = RTW89_MAC_0;
+- params.macid = rtwsta->mac_id;
+- params.port = rtwvif->port;
++ params.macid = rtwsta_link->mac_id;
++ params.port = rtwvif_link->port;
+ params.mbssid = 0;
+- params.tf_trs = rtwvif->trigger;
++ params.tf_trs = rtwvif_link->trigger;
+
+ for (i = 0; i < ARRAY_SIZE(sels); i++) {
+ params.sel = sels[i];
+@@ -6352,15 +6460,21 @@ void rtw89_mac_pkt_drop_sta(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta)
+
+ static void rtw89_mac_pkt_drop_vif_iter(void *data, struct ieee80211_sta *sta)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
+ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+- struct rtw89_dev *rtwdev = rtwvif->rtwdev;
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
+ struct rtw89_vif *target = data;
++ unsigned int link_id;
+
+ if (rtwvif != target)
+ return;
+
+- rtw89_mac_pkt_drop_sta(rtwdev, rtwsta);
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ rtw89_mac_pkt_drop_sta(rtwdev, rtwvif_link, rtwsta_link);
++ }
+ }
+
+ void rtw89_mac_pkt_drop_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
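
The mac.h hunks below apply the same mechanical conversion to the exported API: entry points that previously took mac80211's ieee80211_vif/ieee80211_sta now take the driver's per-link objects. For a generic callback (name illustrative) the shape of the change is:

    /* before */
    int rtw89_mac_example(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
                          struct ieee80211_sta *sta);

    /* after */
    int rtw89_mac_example(struct rtw89_dev *rtwdev,
                          struct rtw89_vif_link *rtwvif_link,
                          struct rtw89_sta_link *rtwsta_link);
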
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.h b/drivers/net/wireless/realtek/rtw89/mac.h
+index 67c2a45071244d..0c269961a57311 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.h
++++ b/drivers/net/wireless/realtek/rtw89/mac.h
+@@ -951,8 +951,9 @@ struct rtw89_mac_gen_def {
+ void (*dmac_func_pre_en)(struct rtw89_dev *rtwdev);
+ void (*dle_func_en)(struct rtw89_dev *rtwdev, bool enable);
+ void (*dle_clk_en)(struct rtw89_dev *rtwdev, bool enable);
+- void (*bf_assoc)(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++ void (*bf_assoc)(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+
+ int (*typ_fltr_opt)(struct rtw89_dev *rtwdev,
+ enum rtw89_machdr_frame_type type,
+@@ -1004,12 +1005,12 @@ struct rtw89_mac_gen_def {
+ bool (*is_txq_empty)(struct rtw89_dev *rtwdev);
+
+ int (*add_chan_list)(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool connected);
++ struct rtw89_vif_link *rtwvif_link, bool connected);
+ int (*add_chan_list_pno)(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif);
++ struct rtw89_vif_link *rtwvif_link);
+ int (*scan_offload)(struct rtw89_dev *rtwdev,
+ struct rtw89_scan_option *option,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ bool wowlan);
+
+ int (*wow_config_mac)(struct rtw89_dev *rtwdev, bool enable_wow);
+@@ -1033,81 +1034,89 @@ u32 rtw89_mac_reg_by_port(struct rtw89_dev *rtwdev, u32 base, u8 port, u8 mac_id
+ }
+
+ static inline u32
+-rtw89_read32_port(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, u32 base)
++rtw89_read32_port(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u32 base)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ return rtw89_read32(rtwdev, reg);
+ }
+
+ static inline u32
+-rtw89_read32_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_read32_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u32 base, u32 mask)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ return rtw89_read32_mask(rtwdev, reg, mask);
+ }
+
+ static inline void
+-rtw89_write32_port(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif, u32 base,
++rtw89_write32_port(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link, u32 base,
+ u32 data)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ rtw89_write32(rtwdev, reg, data);
+ }
+
+ static inline void
+-rtw89_write32_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_write32_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u32 base, u32 mask, u32 data)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, mask, data);
+ }
+
+ static inline void
+-rtw89_write16_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_write16_port_mask(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u32 base, u32 mask, u16 data)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ rtw89_write16_mask(rtwdev, reg, mask, data);
+ }
+
+ static inline void
+-rtw89_write32_port_clr(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_write32_port_clr(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u32 base, u32 bit)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ rtw89_write32_clr(rtwdev, reg, bit);
+ }
+
+ static inline void
+-rtw89_write16_port_clr(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_write16_port_clr(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u32 base, u16 bit)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ rtw89_write16_clr(rtwdev, reg, bit);
+ }
+
+ static inline void
+-rtw89_write32_port_set(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++rtw89_write32_port_set(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u32 base, u32 bit)
+ {
+ u32 reg;
+
+- reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif->port, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_port(rtwdev, base, rtwvif_link->port,
++ rtwvif_link->mac_idx);
+ rtw89_write32_set(rtwdev, reg, bit);
+ }
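
Call sites for these accessors stay one-liners, since the port and MAC index are folded into the register address internally. The TSF read earlier in this patch composes two such reads (p->tsftr_l and p->tsftr_h being the port register bases):

    u32 lo = rtw89_read32_port(rtwdev, rtwvif_link, p->tsftr_l);
    u32 hi = rtw89_read32_port(rtwdev, rtwvif_link, p->tsftr_h);
    u64 tsf = (u64)hi << 32 | lo;
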
+
+@@ -1139,21 +1148,21 @@ int rtw89_mac_dle_dfi_qempty_cfg(struct rtw89_dev *rtwdev,
+ struct rtw89_mac_dle_dfi_qempty *qempty);
+ void rtw89_mac_dump_l0_to_l1(struct rtw89_dev *rtwdev,
+ enum mac_ax_err_info err);
+-int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *vif);
+-int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++int rtw89_mac_add_vif(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif);
++int rtw89_mac_port_update(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ void rtw89_mac_port_tsf_sync(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
+- struct rtw89_vif *rtwvif_src,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_vif_link *rtwvif_src,
+ u16 offset_tu);
+-int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_mac_port_get_tsf(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ u64 *tsf);
+ void rtw89_mac_port_cfg_rx_sync(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en);
++ struct rtw89_vif_link *rtwvif_link, bool en);
+ void rtw89_mac_set_he_obss_narrow_bw_ru(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif);
+-void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++ struct rtw89_vif_link *rtwvif_link);
++void rtw89_mac_stop_ap(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ void rtw89_mac_enable_beacon_for_ap_vifs(struct rtw89_dev *rtwdev, bool en);
+-int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *vif);
++int rtw89_mac_remove_vif(struct rtw89_dev *rtwdev, struct rtw89_vif_link *vif);
+ int rtw89_mac_enable_bb_rf(struct rtw89_dev *rtwdev);
+ int rtw89_mac_disable_bb_rf(struct rtw89_dev *rtwdev);
+
+@@ -1251,27 +1260,30 @@ void rtw89_mac_power_mode_change(struct rtw89_dev *rtwdev, bool enter);
+ void rtw89_mac_notify_wake(struct rtw89_dev *rtwdev);
+
+ static inline
+-void rtw89_mac_bf_assoc(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++void rtw89_mac_bf_assoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+
+ if (mac->bf_assoc)
+- mac->bf_assoc(rtwdev, vif, sta);
++ mac->bf_assoc(rtwdev, rtwvif_link, rtwsta_link);
+ }
+
+-void rtw89_mac_bf_disassoc(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta);
++void rtw89_mac_bf_disassoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link);
+ void rtw89_mac_bf_set_gid_table(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *conf);
+ void rtw89_mac_bf_monitor_calc(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta, bool disconnect);
++ struct rtw89_sta_link *rtwsta_link,
++ bool disconnect);
+ void _rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev);
+ void rtw89_mac_bfee_ctrl(struct rtw89_dev *rtwdev, u8 mac_idx, bool en);
+-int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
+-int rtw89_mac_vif_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++int rtw89_mac_vif_init(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
++int rtw89_mac_vif_deinit(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ int rtw89_mac_set_hw_muedca_ctrl(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool en);
++ struct rtw89_vif_link *rtwvif_link, bool en);
+ int rtw89_mac_set_macid_pause(struct rtw89_dev *rtwdev, u8 macid, bool pause);
+
+ static inline void rtw89_mac_bf_monitor_track(struct rtw89_dev *rtwdev)
+@@ -1376,15 +1388,15 @@ static inline bool rtw89_mac_get_power_state(struct rtw89_dev *rtwdev)
+ return !!val;
+ }
+
+-int rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_mac_set_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link,
+ bool resume, u32 tx_time);
+-int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++int rtw89_mac_get_tx_time(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link,
+ u32 *tx_time);
+ int rtw89_mac_set_tx_retry_limit(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_sta_link *rtwsta_link,
+ bool resume, u8 tx_retry);
+ int rtw89_mac_get_tx_retry_limit(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta, u8 *tx_retry);
++ struct rtw89_sta_link *rtwsta_link, u8 *tx_retry);
+
+ enum rtw89_mac_xtal_si_offset {
+ XTAL0 = 0x0,
+diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c
+index 48ad0d0f76bff4..13fb3cac27016b 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw89/mac80211.c
+@@ -23,13 +23,13 @@ static void rtw89_ops_tx(struct ieee80211_hw *hw,
+ struct rtw89_dev *rtwdev = hw->priv;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_vif *vif = info->control.vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
+ struct ieee80211_sta *sta = control->sta;
+ u32 flags = IEEE80211_SKB_CB(skb)->flags;
+ int ret, qsel;
+
+ if (rtwvif->offchan && !(flags & IEEE80211_TX_CTL_TX_OFFCHAN) && sta) {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
+
+ rtw89_debug(rtwdev, RTW89_DBG_TXRX, "ops_tx during offchan\n");
+ skb_queue_tail(&rtwsta->roc_queue, skb);
+@@ -105,11 +105,61 @@ static int rtw89_ops_config(struct ieee80211_hw *hw, u32 changed)
+ return 0;
+ }
+
++static int __rtw89_ops_add_iface_link(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
++{
++ struct ieee80211_bss_conf *bss_conf;
++ int ret;
++
++ rtw89_leave_ps_mode(rtwdev);
++
++ rtw89_vif_type_mapping(rtwvif_link, false);
++
++ INIT_WORK(&rtwvif_link->update_beacon_work, rtw89_core_update_beacon_work);
++ INIT_LIST_HEAD(&rtwvif_link->general_pkt_list);
++
++ rtwvif_link->hit_rule = 0;
++ rtwvif_link->bcn_hit_cond = 0;
++ rtwvif_link->chanctx_assigned = false;
++ rtwvif_link->chanctx_idx = RTW89_CHANCTX_0;
++ rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT;
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ ether_addr_copy(rtwvif_link->mac_addr, bss_conf->addr);
++
++ rcu_read_unlock();
++
++ ret = rtw89_mac_add_vif(rtwdev, rtwvif_link);
++ if (ret)
++ return ret;
++
++ rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, NULL, BTC_ROLE_START);
++ return 0;
++}
++
++static void __rtw89_ops_remove_iface_link(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
++{
++ mutex_unlock(&rtwdev->mutex);
++ cancel_work_sync(&rtwvif_link->update_beacon_work);
++ mutex_lock(&rtwdev->mutex);
++
++ rtw89_leave_ps_mode(rtwdev);
++
++ rtw89_btc_ntfy_role_info(rtwdev, rtwvif_link, NULL, BTC_ROLE_STOP);
++
++ rtw89_mac_remove_vif(rtwdev, rtwvif_link);
++}
++
+ static int rtw89_ops_add_interface(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
++ u8 mac_id, port;
+ int ret = 0;
+
+ rtw89_debug(rtwdev, RTW89_DBG_STATE, "add vif %pM type %d, p2p %d\n",
+@@ -123,49 +173,56 @@ static int rtw89_ops_add_interface(struct ieee80211_hw *hw,
+ vif->driver_flags |= IEEE80211_VIF_BEACON_FILTER |
+ IEEE80211_VIF_SUPPORTS_CQM_RSSI;
+
+- rtwvif->rtwdev = rtwdev;
+- rtwvif->roc.state = RTW89_ROC_IDLE;
+- rtwvif->offchan = false;
++ mac_id = rtw89_acquire_mac_id(rtwdev);
++ if (mac_id == RTW89_MAX_MAC_ID_NUM) {
++ ret = -ENOSPC;
++ goto err;
++ }
++
++ port = rtw89_core_acquire_bit_map(rtwdev->hw_port, RTW89_PORT_NUM);
++ if (port == RTW89_PORT_NUM) {
++ ret = -ENOSPC;
++ goto release_macid;
++ }
++
++ rtw89_init_vif(rtwdev, rtwvif, mac_id, port);
++
++ rtw89_core_txq_init(rtwdev, vif->txq);
++
+ if (!rtw89_rtwvif_in_list(rtwdev, rtwvif))
+ list_add_tail(&rtwvif->list, &rtwdev->rtwvifs_list);
+
+- INIT_WORK(&rtwvif->update_beacon_work, rtw89_core_update_beacon_work);
++ ether_addr_copy(rtwvif->mac_addr, vif->addr);
++
++ rtwvif->offchan = false;
++ rtwvif->roc.state = RTW89_ROC_IDLE;
+ INIT_DELAYED_WORK(&rtwvif->roc.roc_work, rtw89_roc_work);
+- rtw89_leave_ps_mode(rtwdev);
+
+ rtw89_traffic_stats_init(rtwdev, &rtwvif->stats);
+- rtw89_vif_type_mapping(vif, false);
+- rtwvif->port = rtw89_core_acquire_bit_map(rtwdev->hw_port,
+- RTW89_PORT_NUM);
+- if (rtwvif->port == RTW89_PORT_NUM) {
+- ret = -ENOSPC;
+- list_del_init(&rtwvif->list);
+- goto out;
+- }
+-
+- rtwvif->bcn_hit_cond = 0;
+- rtwvif->mac_idx = RTW89_MAC_0;
+- rtwvif->phy_idx = RTW89_PHY_0;
+- rtwvif->chanctx_idx = RTW89_CHANCTX_0;
+- rtwvif->chanctx_assigned = false;
+- rtwvif->hit_rule = 0;
+- rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT;
+- ether_addr_copy(rtwvif->mac_addr, vif->addr);
+- INIT_LIST_HEAD(&rtwvif->general_pkt_list);
+
+- ret = rtw89_mac_add_vif(rtwdev, rtwvif);
+- if (ret) {
+- rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif->port);
+- list_del_init(&rtwvif->list);
+- goto out;
++ rtwvif_link = rtw89_vif_set_link(rtwvif, 0);
++ if (!rtwvif_link) {
++ ret = -EINVAL;
++ goto release_port;
+ }
+
+- rtw89_core_txq_init(rtwdev, vif->txq);
+-
+- rtw89_btc_ntfy_role_info(rtwdev, rtwvif, NULL, BTC_ROLE_START);
++ ret = __rtw89_ops_add_iface_link(rtwdev, rtwvif_link);
++ if (ret)
++ goto unset_link;
+
+ rtw89_recalc_lps(rtwdev);
+-out:
++
++ mutex_unlock(&rtwdev->mutex);
++ return 0;
++
++unset_link:
++ rtw89_vif_unset_link(rtwvif, 0);
++release_port:
++ list_del_init(&rtwvif->list);
++ rtw89_core_release_bit_map(rtwdev->hw_port, port);
++release_macid:
++ rtw89_release_mac_id(rtwdev, mac_id);
++err:
+ mutex_unlock(&rtwdev->mutex);
+
+ return ret;
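
rtw89_ops_add_interface() now acquires three resources in sequence (a mac_id, a hardware port, a link slot) and unwinds with the usual kernel goto ladder: each failure jumps to the label that releases everything acquired so far, in reverse order. A generic sketch of the shape, with illustrative helper names:

    int setup(struct rtw89_dev *rtwdev)
    {
            int ret;

            ret = acquire_first(rtwdev);    /* e.g. a mac_id */
            if (ret)
                    goto err;

            ret = acquire_second(rtwdev);   /* e.g. a port */
            if (ret)
                    goto release_first;

            return 0;

    release_first:
            release_first_res(rtwdev);
    err:
            return ret;
    }
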
+@@ -175,20 +232,35 @@ static void rtw89_ops_remove_interface(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ u8 macid = rtw89_vif_get_main_macid(rtwvif);
++ u8 port = rtw89_vif_get_main_port(rtwvif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ rtw89_debug(rtwdev, RTW89_DBG_STATE, "remove vif %pM type %d p2p %d\n",
+ vif->addr, vif->type, vif->p2p);
+
+- cancel_work_sync(&rtwvif->update_beacon_work);
+ cancel_delayed_work_sync(&rtwvif->roc.roc_work);
+
+ mutex_lock(&rtwdev->mutex);
+- rtw89_leave_ps_mode(rtwdev);
+- rtw89_btc_ntfy_role_info(rtwdev, rtwvif, NULL, BTC_ROLE_STOP);
+- rtw89_mac_remove_vif(rtwdev, rtwvif);
+- rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif->port);
++
++ rtwvif_link = rtwvif->links[0];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, 0);
++ goto bottom;
++ }
++
++ __rtw89_ops_remove_iface_link(rtwdev, rtwvif_link);
++
++ rtw89_vif_unset_link(rtwvif, 0);
++
++bottom:
+ list_del_init(&rtwvif->list);
++ rtw89_core_release_bit_map(rtwdev->hw_port, port);
++ rtw89_release_mac_id(rtwdev, macid);
++
+ rtw89_recalc_lps(rtwdev);
+ rtw89_enter_ips_by_hwflags(rtwdev);
+
+@@ -311,24 +383,30 @@ static const u8 ac_to_fw_idx[IEEE80211_NUM_ACS] = {
+ };
+
+ static u8 rtw89_aifsn_to_aifs(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, u8 aifsn)
++ struct rtw89_vif_link *rtwvif_link, u8 aifsn)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
++ struct ieee80211_bss_conf *bss_conf;
+ u8 slot_time;
+ u8 sifs;
+
+- slot_time = vif->bss_conf.use_short_slot ? 9 : 20;
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ slot_time = bss_conf->use_short_slot ? 9 : 20;
++
++ rcu_read_unlock();
++
+ sifs = chan->band_type == RTW89_BAND_2G ? 10 : 16;
+
+ return aifsn * slot_time + sifs;
+ }
+
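
The conversion in rtw89_aifsn_to_aifs() is plain 802.11 arithmetic, AIFS = AIFSN * slot_time + SIFS, using the constants visible above. Two worked values:

    AIFSN = 2, short slot (9 us), non-2.4 GHz (SIFS = 16 us): 2 * 9 + 16 = 34 us
    AIFSN = 2, long slot (20 us), 2.4 GHz (SIFS = 10 us):     2 * 20 + 10 = 50 us
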
+ static void ____rtw89_conf_tx_edca(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, u16 ac)
++ struct rtw89_vif_link *rtwvif_link, u16 ac)
+ {
+- struct ieee80211_tx_queue_params *params = &rtwvif->tx_params[ac];
++ struct ieee80211_tx_queue_params *params = &rtwvif_link->tx_params[ac];
+ u32 val;
+ u8 ecw_max, ecw_min;
+ u8 aifs;
+@@ -336,12 +414,12 @@ static void ____rtw89_conf_tx_edca(struct rtw89_dev *rtwdev,
+ /* 2^ecw - 1 = cw; ecw = log2(cw + 1) */
+ ecw_max = ilog2(params->cw_max + 1);
+ ecw_min = ilog2(params->cw_min + 1);
+- aifs = rtw89_aifsn_to_aifs(rtwdev, rtwvif, params->aifs);
++ aifs = rtw89_aifsn_to_aifs(rtwdev, rtwvif_link, params->aifs);
+ val = FIELD_PREP(FW_EDCA_PARAM_TXOPLMT_MSK, params->txop) |
+ FIELD_PREP(FW_EDCA_PARAM_CWMAX_MSK, ecw_max) |
+ FIELD_PREP(FW_EDCA_PARAM_CWMIN_MSK, ecw_min) |
+ FIELD_PREP(FW_EDCA_PARAM_AIFS_MSK, aifs);
+- rtw89_fw_h2c_set_edca(rtwdev, rtwvif, ac_to_fw_idx[ac], val);
++ rtw89_fw_h2c_set_edca(rtwdev, rtwvif_link, ac_to_fw_idx[ac], val);
+ }
+
+ #define R_MUEDCA_ACS_PARAM(acs) {R_AX_MUEDCA_ ## acs ## _PARAM_0, \
+@@ -355,9 +433,9 @@ static const u32 ac_to_mu_edca_param[IEEE80211_NUM_ACS][RTW89_CHIP_GEN_NUM] = {
+ };
+
+ static void ____rtw89_conf_tx_mu_edca(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, u16 ac)
++ struct rtw89_vif_link *rtwvif_link, u16 ac)
+ {
+- struct ieee80211_tx_queue_params *params = &rtwvif->tx_params[ac];
++ struct ieee80211_tx_queue_params *params = &rtwvif_link->tx_params[ac];
+ struct ieee80211_he_mu_edca_param_ac_rec *mu_edca;
+ int gen = rtwdev->chip->chip_gen;
+ u8 aifs, aifsn;
+@@ -370,32 +448,199 @@ static void ____rtw89_conf_tx_mu_edca(struct rtw89_dev *rtwdev,
+
+ mu_edca = &params->mu_edca_param_rec;
+ aifsn = FIELD_GET(GENMASK(3, 0), mu_edca->aifsn);
+- aifs = aifsn ? rtw89_aifsn_to_aifs(rtwdev, rtwvif, aifsn) : 0;
++ aifs = aifsn ? rtw89_aifsn_to_aifs(rtwdev, rtwvif_link, aifsn) : 0;
+ timer_32us = mu_edca->mu_edca_timer << 8;
+
+ val = FIELD_PREP(B_AX_MUEDCA_BE_PARAM_0_TIMER_MASK, timer_32us) |
+ FIELD_PREP(B_AX_MUEDCA_BE_PARAM_0_CW_MASK, mu_edca->ecw_min_max) |
+ FIELD_PREP(B_AX_MUEDCA_BE_PARAM_0_AIFS_MASK, aifs);
+- reg = rtw89_mac_reg_by_idx(rtwdev, ac_to_mu_edca_param[ac][gen], rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, ac_to_mu_edca_param[ac][gen],
++ rtwvif_link->mac_idx);
+ rtw89_write32(rtwdev, reg, val);
+
+- rtw89_mac_set_hw_muedca_ctrl(rtwdev, rtwvif, true);
++ rtw89_mac_set_hw_muedca_ctrl(rtwdev, rtwvif_link, true);
+ }
+
+ static void __rtw89_conf_tx(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, u16 ac)
++ struct rtw89_vif_link *rtwvif_link, u16 ac)
+ {
+- ____rtw89_conf_tx_edca(rtwdev, rtwvif, ac);
+- ____rtw89_conf_tx_mu_edca(rtwdev, rtwvif, ac);
++ ____rtw89_conf_tx_edca(rtwdev, rtwvif_link, ac);
++ ____rtw89_conf_tx_mu_edca(rtwdev, rtwvif_link, ac);
+ }
+
+ static void rtw89_conf_tx(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ u16 ac;
+
+ for (ac = 0; ac < IEEE80211_NUM_ACS; ac++)
+- __rtw89_conf_tx(rtwdev, rtwvif, ac);
++ __rtw89_conf_tx(rtwdev, rtwvif_link, ac);
++}
++
++static int __rtw89_ops_sta_add(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta)
++{
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ bool acquire_macid = false;
++ u8 macid;
++ int ret;
++ int i;
++
++ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls) {
++ /* for station mode, assign the mac_id from itself */
++ macid = rtw89_vif_get_main_macid(rtwvif);
++ } else {
++ macid = rtw89_acquire_mac_id(rtwdev);
++ if (macid == RTW89_MAX_MAC_ID_NUM)
++ return -ENOSPC;
++
++ acquire_macid = true;
++ }
++
++ rtw89_init_sta(rtwdev, rtwvif, rtwsta, macid);
++
++ for (i = 0; i < ARRAY_SIZE(sta->txq); i++)
++ rtw89_core_txq_init(rtwdev, sta->txq[i]);
++
++ skb_queue_head_init(&rtwsta->roc_queue);
++
++ rtwsta_link = rtw89_sta_set_link(rtwsta, sta->deflink.link_id);
++ if (!rtwsta_link) {
++ ret = -EINVAL;
++ goto err;
++ }
++
++ rtwvif_link = rtwsta_link->rtwvif_link;
++
++ ret = rtw89_core_sta_link_add(rtwdev, rtwvif_link, rtwsta_link);
++ if (ret)
++ goto unset_link;
++
++ if (vif->type == NL80211_IFTYPE_AP || sta->tdls)
++ rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE);
++
++ return 0;
++
++unset_link:
++ rtw89_sta_unset_link(rtwsta, sta->deflink.link_id);
++err:
++ if (acquire_macid)
++ rtw89_release_mac_id(rtwdev, macid);
++
++ return ret;
++}
++
++static int __rtw89_ops_sta_assoc(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta,
++ bool station_mode)
++{
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ int ret;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++
++ if (station_mode)
++ rtw89_vif_type_mapping(rtwvif_link, true);
++
++ ret = rtw89_core_sta_link_assoc(rtwdev, rtwvif_link, rtwsta_link);
++ if (ret)
++ return ret;
++ }
++
++ rtwdev->total_sta_assoc++;
++ if (sta->tdls)
++ rtwvif->tdls_peer++;
++
++ return 0;
++}
++
++static int __rtw89_ops_sta_disassoc(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta)
++{
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ int ret;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ ret = rtw89_core_sta_link_disassoc(rtwdev, rtwvif_link, rtwsta_link);
++ if (ret)
++ return ret;
++ }
++
++ rtwsta->disassoc = true;
++
++ rtwdev->total_sta_assoc--;
++ if (sta->tdls)
++ rtwvif->tdls_peer--;
++
++ return 0;
++}
++
++static int __rtw89_ops_sta_disconnect(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ int ret;
++
++ rtw89_core_free_sta_pending_ba(rtwdev, sta);
++ rtw89_core_free_sta_pending_forbid_ba(rtwdev, sta);
++ rtw89_core_free_sta_pending_roc_tx(rtwdev, sta);
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ ret = rtw89_core_sta_link_disconnect(rtwdev, rtwvif_link, rtwsta_link);
++ if (ret)
++ return ret;
++ }
++
++ return 0;
++}
++
++static int __rtw89_ops_sta_remove(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ struct ieee80211_sta *sta)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ u8 macid = rtw89_sta_get_main_macid(rtwsta);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ int ret;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ ret = rtw89_core_sta_link_remove(rtwdev, rtwvif_link, rtwsta_link);
++ if (ret)
++ return ret;
++
++ rtw89_sta_unset_link(rtwsta, link_id);
++ }
++
++ if (vif->type == NL80211_IFTYPE_AP || sta->tdls) {
++ rtw89_release_mac_id(rtwdev, macid);
++ rtw89_queue_chanctx_change(rtwdev, RTW89_CHANCTX_REMOTE_STA_CHANGE);
++ }
++
++ return 0;
+ }
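
__rtw89_ops_sta_add() and __rtw89_ops_sta_remove() above encode a mac_id ownership rule: a non-TDLS peer in station mode reuses the vif's own mac_id, while AP-mode and TDLS peers allocate one of their own and must release it on removal. The rule reduces to a single condition:

    bool own_macid = vif->type == NL80211_IFTYPE_AP || sta->tdls;

    if (own_macid)                  /* allocated in __rtw89_ops_sta_add() */
            rtw89_release_mac_id(rtwdev, macid);
    /* else: the macid belongs to the vif and outlives this station */
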
+
+ static void rtw89_station_mode_sta_assoc(struct rtw89_dev *rtwdev,
+@@ -412,16 +657,34 @@ static void rtw89_station_mode_sta_assoc(struct rtw89_dev *rtwdev,
+ return;
+ }
+
+- rtw89_vif_type_mapping(vif, true);
++ __rtw89_ops_sta_assoc(rtwdev, vif, sta, true);
++}
++
++static void __rtw89_ops_bss_link_assoc(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
++{
++ rtw89_phy_set_bss_color(rtwdev, rtwvif_link);
++ rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, rtwvif_link);
++ rtw89_mac_port_update(rtwdev, rtwvif_link);
++ rtw89_mac_set_he_obss_narrow_bw_ru(rtwdev, rtwvif_link);
++}
++
++static void __rtw89_ops_bss_assoc(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif)
++{
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
+
+- rtw89_core_sta_assoc(rtwdev, vif, sta);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ __rtw89_ops_bss_link_assoc(rtwdev, rtwvif_link);
+ }
+
+ static void rtw89_ops_vif_cfg_changed(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif, u64 changed)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
+
+ mutex_lock(&rtwdev->mutex);
+ rtw89_leave_ps_mode(rtwdev);
+@@ -429,10 +692,7 @@ static void rtw89_ops_vif_cfg_changed(struct ieee80211_hw *hw,
+ if (changed & BSS_CHANGED_ASSOC) {
+ if (vif->cfg.assoc) {
+ rtw89_station_mode_sta_assoc(rtwdev, vif);
+- rtw89_phy_set_bss_color(rtwdev, vif);
+- rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, vif);
+- rtw89_mac_port_update(rtwdev, rtwvif);
+- rtw89_mac_set_he_obss_narrow_bw_ru(rtwdev, vif);
++ __rtw89_ops_bss_assoc(rtwdev, vif);
+
+ rtw89_queue_chanctx_work(rtwdev);
+ } else {
+@@ -459,39 +719,49 @@ static void rtw89_ops_link_info_changed(struct ieee80211_hw *hw,
+ u64 changed)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ mutex_lock(&rtwdev->mutex);
+ rtw89_leave_ps_mode(rtwdev);
+
++ rtwvif_link = rtwvif->links[conf->link_id];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, conf->link_id);
++ goto out;
++ }
++
+ if (changed & BSS_CHANGED_BSSID) {
+- ether_addr_copy(rtwvif->bssid, conf->bssid);
+- rtw89_cam_bssid_changed(rtwdev, rtwvif);
+- rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL);
+- WRITE_ONCE(rtwvif->sync_bcn_tsf, 0);
++ ether_addr_copy(rtwvif_link->bssid, conf->bssid);
++ rtw89_cam_bssid_changed(rtwdev, rtwvif_link);
++ rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL);
++ WRITE_ONCE(rtwvif_link->sync_bcn_tsf, 0);
+ }
+
+ if (changed & BSS_CHANGED_BEACON)
+- rtw89_chip_h2c_update_beacon(rtwdev, rtwvif);
++ rtw89_chip_h2c_update_beacon(rtwdev, rtwvif_link);
+
+ if (changed & BSS_CHANGED_ERP_SLOT)
+- rtw89_conf_tx(rtwdev, rtwvif);
++ rtw89_conf_tx(rtwdev, rtwvif_link);
+
+ if (changed & BSS_CHANGED_HE_BSS_COLOR)
+- rtw89_phy_set_bss_color(rtwdev, vif);
++ rtw89_phy_set_bss_color(rtwdev, rtwvif_link);
+
+ if (changed & BSS_CHANGED_MU_GROUPS)
+ rtw89_mac_bf_set_gid_table(rtwdev, vif, conf);
+
+ if (changed & BSS_CHANGED_P2P_PS)
+- rtw89_core_update_p2p_ps(rtwdev, vif);
++ rtw89_core_update_p2p_ps(rtwdev, rtwvif_link, conf);
+
+ if (changed & BSS_CHANGED_CQM)
+- rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true);
++ rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, true);
+
+ if (changed & BSS_CHANGED_TPE)
+- rtw89_reg_6ghz_recalc(rtwdev, rtwvif, true);
++ rtw89_reg_6ghz_recalc(rtwdev, rtwvif_link, true);
+
++out:
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -500,12 +770,21 @@ static int rtw89_ops_start_ap(struct ieee80211_hw *hw,
+ struct ieee80211_bss_conf *link_conf)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+ const struct rtw89_chan *chan;
+
+ mutex_lock(&rtwdev->mutex);
+
+- chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
++ rtwvif_link = rtwvif->links[link_conf->link_id];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, link_conf->link_id);
++ goto out;
++ }
++
++ chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
+ if (chan->band_type == RTW89_BAND_6G) {
+ mutex_unlock(&rtwdev->mutex);
+ return -EOPNOTSUPP;
+@@ -514,16 +793,18 @@ static int rtw89_ops_start_ap(struct ieee80211_hw *hw,
+ if (rtwdev->scanning)
+ rtw89_hw_scan_abort(rtwdev, rtwdev->scan_info.scanning_vif);
+
+- ether_addr_copy(rtwvif->bssid, vif->bss_conf.bssid);
+- rtw89_cam_bssid_changed(rtwdev, rtwvif);
+- rtw89_mac_port_update(rtwdev, rtwvif);
+- rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, vif, NULL);
+- rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, NULL, RTW89_ROLE_TYPE_CHANGE);
+- rtw89_fw_h2c_join_info(rtwdev, rtwvif, NULL, true);
+- rtw89_fw_h2c_cam(rtwdev, rtwvif, NULL, NULL);
+- rtw89_chip_rfk_channel(rtwdev, rtwvif);
++ ether_addr_copy(rtwvif_link->bssid, link_conf->bssid);
++ rtw89_cam_bssid_changed(rtwdev, rtwvif_link);
++ rtw89_mac_port_update(rtwdev, rtwvif_link);
++ rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, NULL);
++ rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, NULL, RTW89_ROLE_TYPE_CHANGE);
++ rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, NULL, true);
++ rtw89_fw_h2c_cam(rtwdev, rtwvif_link, NULL, NULL);
++ rtw89_chip_rfk_channel(rtwdev, rtwvif_link);
+
+ rtw89_queue_chanctx_work(rtwdev);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+
+ return 0;
+@@ -534,12 +815,24 @@ void rtw89_ops_stop_ap(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_bss_conf *link_conf)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ mutex_lock(&rtwdev->mutex);
+- rtw89_mac_stop_ap(rtwdev, rtwvif);
+- rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, vif, NULL);
+- rtw89_fw_h2c_join_info(rtwdev, rtwvif, NULL, true);
++
++ rtwvif_link = rtwvif->links[link_conf->link_id];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, link_conf->link_id);
++ goto out;
++ }
++
++ rtw89_mac_stop_ap(rtwdev, rtwvif_link);
++ rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, NULL);
++ rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, NULL, true);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -547,10 +840,13 @@ static int rtw89_ops_set_tim(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
+ bool set)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
+ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
+
+- ieee80211_queue_work(rtwdev->hw, &rtwvif->update_beacon_work);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ ieee80211_queue_work(rtwdev->hw, &rtwvif_link->update_beacon_work);
+
+ return 0;
+ }
+@@ -561,15 +857,29 @@ static int rtw89_ops_conf_tx(struct ieee80211_hw *hw,
+ const struct ieee80211_tx_queue_params *params)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
++ int ret = 0;
+
+ mutex_lock(&rtwdev->mutex);
+ rtw89_leave_ps_mode(rtwdev);
+- rtwvif->tx_params[ac] = *params;
+- __rtw89_conf_tx(rtwdev, rtwvif, ac);
++
++ rtwvif_link = rtwvif->links[link_id];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, link_id);
++ ret = -ENOLINK;
++ goto out;
++ }
++
++ rtwvif_link->tx_params[ac] = *params;
++ __rtw89_conf_tx(rtwdev, rtwvif_link, ac);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+
+- return 0;
++ return ret;
+ }
+
+ static int __rtw89_ops_sta_state(struct ieee80211_hw *hw,
+@@ -582,26 +892,26 @@ static int __rtw89_ops_sta_state(struct ieee80211_hw *hw,
+
+ if (old_state == IEEE80211_STA_NOTEXIST &&
+ new_state == IEEE80211_STA_NONE)
+- return rtw89_core_sta_add(rtwdev, vif, sta);
++ return __rtw89_ops_sta_add(rtwdev, vif, sta);
+
+ if (old_state == IEEE80211_STA_AUTH &&
+ new_state == IEEE80211_STA_ASSOC) {
+ if (vif->type == NL80211_IFTYPE_STATION && !sta->tdls)
+ return 0; /* defer to bss_info_changed to have vif info */
+- return rtw89_core_sta_assoc(rtwdev, vif, sta);
++ return __rtw89_ops_sta_assoc(rtwdev, vif, sta, false);
+ }
+
+ if (old_state == IEEE80211_STA_ASSOC &&
+ new_state == IEEE80211_STA_AUTH)
+- return rtw89_core_sta_disassoc(rtwdev, vif, sta);
++ return __rtw89_ops_sta_disassoc(rtwdev, vif, sta);
+
+ if (old_state == IEEE80211_STA_AUTH &&
+ new_state == IEEE80211_STA_NONE)
+- return rtw89_core_sta_disconnect(rtwdev, vif, sta);
++ return __rtw89_ops_sta_disconnect(rtwdev, vif, sta);
+
+ if (old_state == IEEE80211_STA_NONE &&
+ new_state == IEEE80211_STA_NOTEXIST)
+- return rtw89_core_sta_remove(rtwdev, vif, sta);
++ return __rtw89_ops_sta_remove(rtwdev, vif, sta);
+
+ return 0;
+ }
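
__rtw89_ops_sta_state() is now a straight mapping from mac80211's station state transitions to the per-link helpers introduced earlier in this file:

    NOTEXIST -> NONE    : __rtw89_ops_sta_add()
    AUTH     -> ASSOC   : __rtw89_ops_sta_assoc()  (deferred for non-TDLS
                          station mode until the BSS assoc notification)
    ASSOC    -> AUTH    : __rtw89_ops_sta_disassoc()
    AUTH     -> NONE    : __rtw89_ops_sta_disconnect()
    NONE     -> NOTEXIST: __rtw89_ops_sta_remove()
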
+@@ -667,7 +977,8 @@ static int rtw89_ops_ampdu_action(struct ieee80211_hw *hw,
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+ struct ieee80211_sta *sta = params->sta;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
+ u16 tid = params->tid;
+ struct ieee80211_txq *txq = sta->txq[tid];
+ struct rtw89_txq *rtwtxq = (struct rtw89_txq *)txq->drv_priv;
+@@ -681,7 +992,7 @@ static int rtw89_ops_ampdu_action(struct ieee80211_hw *hw,
+ mutex_lock(&rtwdev->mutex);
+ clear_bit(RTW89_TXQ_F_AMPDU, &rtwtxq->flags);
+ clear_bit(tid, rtwsta->ampdu_map);
+- rtw89_chip_h2c_ampdu_cmac_tbl(rtwdev, vif, sta);
++ rtw89_chip_h2c_ampdu_cmac_tbl(rtwdev, rtwvif, rtwsta);
+ mutex_unlock(&rtwdev->mutex);
+ ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
+ break;
+@@ -692,7 +1003,7 @@ static int rtw89_ops_ampdu_action(struct ieee80211_hw *hw,
+ rtwsta->ampdu_params[tid].amsdu = params->amsdu;
+ set_bit(tid, rtwsta->ampdu_map);
+ rtw89_leave_ps_mode(rtwdev);
+- rtw89_chip_h2c_ampdu_cmac_tbl(rtwdev, vif, sta);
++ rtw89_chip_h2c_ampdu_cmac_tbl(rtwdev, rtwvif, rtwsta);
+ mutex_unlock(&rtwdev->mutex);
+ break;
+ case IEEE80211_AMPDU_RX_START:
+@@ -731,9 +1042,14 @@ static void rtw89_ops_sta_statistics(struct ieee80211_hw *hw,
+ struct ieee80211_sta *sta,
+ struct station_info *sinfo)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_sta_link *rtwsta_link;
+
+- sinfo->txrate = rtwsta->ra_report.txrate;
++ rtwsta_link = rtw89_sta_get_link_inst(rtwsta, 0);
++ if (unlikely(!rtwsta_link))
++ return;
++
++ sinfo->txrate = rtwsta_link->ra_report.txrate;
+ sinfo->filled |= BIT_ULL(NL80211_STA_INFO_TX_BITRATE);
+ }
+
+@@ -743,7 +1059,7 @@ void __rtw89_drop_packets(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
+ struct rtw89_vif *rtwvif;
+
+ if (vif) {
+- rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ rtwvif = vif_to_rtwvif(vif);
+ rtw89_mac_pkt_drop_vif(rtwdev, rtwvif);
+ } else {
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+@@ -777,14 +1093,20 @@ struct rtw89_iter_bitrate_mask_data {
+ static void rtw89_ra_mask_info_update_iter(void *data, struct ieee80211_sta *sta)
+ {
+ struct rtw89_iter_bitrate_mask_data *br_data = data;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwsta->rtwvif);
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
++ struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
+
+ if (vif != br_data->vif || vif->p2p)
+ return;
+
+- rtwsta->use_cfg_mask = true;
+- rtwsta->mask = *br_data->mask;
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwsta_link->use_cfg_mask = true;
++ rtwsta_link->mask = *br_data->mask;
++ }
++
+ rtw89_phy_ra_update_sta(br_data->rtwdev, sta, IEEE80211_RC_SUPP_RATES_CHANGED);
+ }
+
+@@ -854,10 +1176,20 @@ static void rtw89_ops_sw_scan_start(struct ieee80211_hw *hw,
+ const u8 *mac_addr)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ mutex_lock(&rtwdev->mutex);
+- rtw89_core_scan_start(rtwdev, rtwvif, mac_addr, false);
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "sw scan start: find no link on HW-0\n");
++ goto out;
++ }
++
++ rtw89_core_scan_start(rtwdev, rtwvif_link, mac_addr, false);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -865,9 +1197,20 @@ static void rtw89_ops_sw_scan_complete(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ mutex_lock(&rtwdev->mutex);
+- rtw89_core_scan_complete(rtwdev, vif, false);
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "sw scan complete: find no link on HW-0\n");
++ goto out;
++ }
++
++ rtw89_core_scan_complete(rtwdev, rtwvif_link, false);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -884,22 +1227,35 @@ static int rtw89_ops_hw_scan(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ struct ieee80211_scan_request *req)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
+- int ret = 0;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
++ int ret;
+
+ if (!RTW89_CHK_FW_FEATURE(SCAN_OFFLOAD, &rtwdev->fw))
+ return 1;
+
+- if (rtwdev->scanning || rtwvif->offchan)
+- return -EBUSY;
+-
+ mutex_lock(&rtwdev->mutex);
+- rtw89_hw_scan_start(rtwdev, vif, req);
+- ret = rtw89_hw_scan_offload(rtwdev, vif, true);
++
++ if (rtwdev->scanning || rtwvif->offchan) {
++ ret = -EBUSY;
++ goto out;
++ }
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "hw scan: find no link on HW-0\n");
++ ret = -ENOLINK;
++ goto out;
++ }
++
++ rtw89_hw_scan_start(rtwdev, rtwvif_link, req);
++ ret = rtw89_hw_scan_offload(rtwdev, rtwvif_link, true);
+ if (ret) {
+- rtw89_hw_scan_abort(rtwdev, vif);
++ rtw89_hw_scan_abort(rtwdev, rtwvif_link);
+ rtw89_err(rtwdev, "HW scan failed with status: %d\n", ret);
+ }
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+
+ return ret;
+@@ -909,6 +1265,8 @@ static void rtw89_ops_cancel_hw_scan(struct ieee80211_hw *hw,
+ struct ieee80211_vif *vif)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ if (!RTW89_CHK_FW_FEATURE(SCAN_OFFLOAD, &rtwdev->fw))
+ return;
+@@ -917,7 +1275,16 @@ static void rtw89_ops_cancel_hw_scan(struct ieee80211_hw *hw,
+ return;
+
+ mutex_lock(&rtwdev->mutex);
+- rtw89_hw_scan_abort(rtwdev, vif);
++
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev, "cancel hw scan: find no link on HW-0\n");
++ goto out;
++ }
++
++ rtw89_hw_scan_abort(rtwdev, rtwvif_link);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -970,11 +1337,24 @@ static int rtw89_ops_assign_vif_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+ int ret;
+
+ mutex_lock(&rtwdev->mutex);
+- ret = rtw89_chanctx_ops_assign_vif(rtwdev, rtwvif, ctx);
++
++ rtwvif_link = rtwvif->links[link_conf->link_id];
++ if (unlikely(!rtwvif_link)) {
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, link_conf->link_id);
++ ret = -ENOLINK;
++ goto out;
++ }
++
++ ret = rtw89_chanctx_ops_assign_vif(rtwdev, rtwvif_link, ctx);
++
++out:
+ mutex_unlock(&rtwdev->mutex);
+
+ return ret;
+@@ -986,10 +1366,21 @@ static void rtw89_ops_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ struct ieee80211_chanctx_conf *ctx)
+ {
+ struct rtw89_dev *rtwdev = hw->priv;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
+
+ mutex_lock(&rtwdev->mutex);
+- rtw89_chanctx_ops_unassign_vif(rtwdev, rtwvif, ctx);
++
++ rtwvif_link = rtwvif->links[link_conf->link_id];
++ if (unlikely(!rtwvif_link)) {
++ mutex_unlock(&rtwdev->mutex);
++ rtw89_err(rtwdev,
++ "%s: rtwvif link (link_id %u) is not active\n",
++ __func__, link_conf->link_id);
++ return;
++ }
++
++ rtw89_chanctx_ops_unassign_vif(rtwdev, rtwvif_link, ctx);
+ mutex_unlock(&rtwdev->mutex);
+ }
+
+@@ -1003,7 +1394,7 @@ static int rtw89_ops_remain_on_channel(struct ieee80211_hw *hw,
+ struct rtw89_vif *rtwvif = vif_to_rtwvif_safe(vif);
+ struct rtw89_roc *roc = &rtwvif->roc;
+
+- if (!vif)
++ if (!rtwvif)
+ return -EINVAL;
+
+ mutex_lock(&rtwdev->mutex);
+@@ -1053,8 +1444,8 @@ static int rtw89_ops_cancel_remain_on_channel(struct ieee80211_hw *hw,
+ static void rtw89_set_tid_config_iter(void *data, struct ieee80211_sta *sta)
+ {
+ struct cfg80211_tid_config *tid_config = data;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_dev *rtwdev = rtwsta->rtwvif->rtwdev;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
+
+ rtw89_core_set_tid_config(rtwdev, sta, tid_config);
+ }
+diff --git a/drivers/net/wireless/realtek/rtw89/mac_be.c b/drivers/net/wireless/realtek/rtw89/mac_be.c
+index 31f0a5225b115e..f22eaa83297fb4 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac_be.c
++++ b/drivers/net/wireless/realtek/rtw89/mac_be.c
+@@ -2091,13 +2091,13 @@ static int rtw89_mac_init_bfee_be(struct rtw89_dev *rtwdev, u8 mac_idx)
+ }
+
+ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ u8 nc = 1, nr = 3, ng = 0, cb = 1, cs = 1, ldpc_en = 1, stbc_en = 1;
+- u8 mac_idx = rtwvif->mac_idx;
+- u8 port_sel = rtwvif->port;
++ struct ieee80211_link_sta *link_sta;
++ u8 mac_idx = rtwvif_link->mac_idx;
++ u8 port_sel = rtwvif_link->port;
+ u8 sound_dim = 3, t;
+ u8 *phy_cap;
+ u32 reg;
+@@ -2108,7 +2108,10 @@ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev,
+ if (ret)
+ return ret;
+
+- phy_cap = sta->deflink.he_cap.he_cap_elem.phy_cap_info;
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ phy_cap = link_sta->he_cap.he_cap_elem.phy_cap_info;
+
+ if ((phy_cap[3] & IEEE80211_HE_PHY_CAP3_SU_BEAMFORMER) ||
+ (phy_cap[4] & IEEE80211_HE_PHY_CAP4_MU_BEAMFORMER)) {
+@@ -2119,11 +2122,11 @@ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev,
+ sound_dim = min(sound_dim, t);
+ }
+
+- if ((sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) ||
+- (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) {
+- ldpc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC);
+- stbc_en &= !!(sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK);
+- t = u32_get_bits(sta->deflink.vht_cap.cap,
++ if ((link_sta->vht_cap.cap & IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE) ||
++ (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)) {
++ ldpc_en &= !!(link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC);
++ stbc_en &= !!(link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK);
++ t = u32_get_bits(link_sta->vht_cap.cap,
+ IEEE80211_VHT_CAP_SOUNDING_DIMENSIONS_MASK);
+ sound_dim = min(sound_dim, t);
+ }
+@@ -2131,6 +2134,8 @@ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev,
+ nc = min(nc, sound_dim);
+ nr = min(nr, sound_dim);
+
++ rcu_read_unlock();
++
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_CTRL_0, mac_idx);
+ rtw89_write32_set(rtwdev, reg, B_BE_BFMEE_BFPARAM_SEL);
+
+@@ -2155,12 +2160,12 @@ static int rtw89_mac_set_csi_para_reg_be(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_mac_csi_rrsc_be(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ u32 rrsc = BIT(RTW89_MAC_BF_RRSC_6M) | BIT(RTW89_MAC_BF_RRSC_24M);
+- u8 mac_idx = rtwvif->mac_idx;
++ struct ieee80211_link_sta *link_sta;
++ u8 mac_idx = rtwvif_link->mac_idx;
+ int ret;
+ u32 reg;
+
+@@ -2168,22 +2173,28 @@ static int rtw89_mac_csi_rrsc_be(struct rtw89_dev *rtwdev,
+ if (ret)
+ return ret;
+
+- if (sta->deflink.he_cap.has_he) {
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++
++ if (link_sta->he_cap.has_he) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_HE_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_HE_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_HE_MSC5));
+ }
+- if (sta->deflink.vht_cap.vht_supported) {
++ if (link_sta->vht_cap.vht_supported) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_VHT_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_VHT_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_VHT_MSC5));
+ }
+- if (sta->deflink.ht_cap.ht_supported) {
++ if (link_sta->ht_cap.ht_supported) {
+ rrsc |= (BIT(RTW89_MAC_BF_RRSC_HT_MSC0) |
+ BIT(RTW89_MAC_BF_RRSC_HT_MSC3) |
+ BIT(RTW89_MAC_BF_RRSC_HT_MSC5));
+ }
+
++ rcu_read_unlock();
++
+ reg = rtw89_mac_reg_by_idx(rtwdev, R_BE_TRXPTCL_RESP_CSI_CTRL_0, mac_idx);
+ rtw89_write32_set(rtwdev, reg, B_BE_BFMEE_BFPARAM_SEL);
+ rtw89_write32_clr(rtwdev, reg, B_BE_BFMEE_CSI_FORCE_RETE_EN);
+@@ -2195,17 +2206,25 @@ static int rtw89_mac_csi_rrsc_be(struct rtw89_dev *rtwdev,
+ }
+
+ static void rtw89_mac_bf_assoc_be(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- struct ieee80211_sta *sta)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
++ struct ieee80211_link_sta *link_sta;
++ bool has_beamformer_cap;
++
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ has_beamformer_cap = rtw89_sta_has_beamformer_cap(link_sta);
++
++ rcu_read_unlock();
+
+- if (rtw89_sta_has_beamformer_cap(sta)) {
++ if (has_beamformer_cap) {
+ rtw89_debug(rtwdev, RTW89_DBG_BF,
+ "initialize bfee for new association\n");
+- rtw89_mac_init_bfee_be(rtwdev, rtwvif->mac_idx);
+- rtw89_mac_set_csi_para_reg_be(rtwdev, vif, sta);
+- rtw89_mac_csi_rrsc_be(rtwdev, vif, sta);
++ rtw89_mac_init_bfee_be(rtwdev, rtwvif_link->mac_idx);
++ rtw89_mac_set_csi_para_reg_be(rtwdev, rtwvif_link, rtwsta_link);
++ rtw89_mac_csi_rrsc_be(rtwdev, rtwvif_link, rtwsta_link);
+ }
+ }
+
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
+index c7165e757842be..4b47b45f897cbc 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.c
++++ b/drivers/net/wireless/realtek/rtw89/phy.c
+@@ -75,12 +75,12 @@ static u64 get_mcs_ra_mask(u16 mcs_map, u8 highest_mcs, u8 gap)
+ return ra_mask;
+ }
+
+-static u64 get_he_ra_mask(struct ieee80211_sta *sta)
++static u64 get_he_ra_mask(struct ieee80211_link_sta *link_sta)
+ {
+- struct ieee80211_sta_he_cap cap = sta->deflink.he_cap;
++ struct ieee80211_sta_he_cap cap = link_sta->he_cap;
+ u16 mcs_map;
+
+- switch (sta->deflink.bandwidth) {
++ switch (link_sta->bandwidth) {
+ case IEEE80211_STA_RX_BW_160:
+ if (cap.he_cap_elem.phy_cap_info[0] &
+ IEEE80211_HE_PHY_CAP0_CHANNEL_WIDTH_SET_80PLUS80_MHZ_IN_5G)
+@@ -118,14 +118,14 @@ static u64 get_eht_mcs_ra_mask(u8 *max_nss, u8 start_mcs, u8 n_nss)
+ return mask;
+ }
+
+-static u64 get_eht_ra_mask(struct ieee80211_sta *sta)
++static u64 get_eht_ra_mask(struct ieee80211_link_sta *link_sta)
+ {
+- struct ieee80211_sta_eht_cap *eht_cap = &sta->deflink.eht_cap;
++ struct ieee80211_sta_eht_cap *eht_cap = &link_sta->eht_cap;
+ struct ieee80211_eht_mcs_nss_supp_20mhz_only *mcs_nss_20mhz;
+ struct ieee80211_eht_mcs_nss_supp_bw *mcs_nss;
+- u8 *he_phy_cap = sta->deflink.he_cap.he_cap_elem.phy_cap_info;
++ u8 *he_phy_cap = link_sta->he_cap.he_cap_elem.phy_cap_info;
+
+- switch (sta->deflink.bandwidth) {
++ switch (link_sta->bandwidth) {
+ case IEEE80211_STA_RX_BW_320:
+ mcs_nss = &eht_cap->eht_mcs_nss_supp.bw._320;
+ /* MCS 9, 11, 13 */
+@@ -195,15 +195,16 @@ static u64 rtw89_phy_ra_mask_recover(u64 ra_mask, u64 ra_mask_bak)
+ return ra_mask;
+ }
+
+-static u64 rtw89_phy_ra_mask_cfg(struct rtw89_dev *rtwdev, struct rtw89_sta *rtwsta,
++static u64 rtw89_phy_ra_mask_cfg(struct rtw89_dev *rtwdev,
++ struct rtw89_sta_link *rtwsta_link,
++ struct ieee80211_link_sta *link_sta,
+ const struct rtw89_chan *chan)
+ {
+- struct ieee80211_sta *sta = rtwsta_to_sta(rtwsta);
+- struct cfg80211_bitrate_mask *mask = &rtwsta->mask;
++ struct cfg80211_bitrate_mask *mask = &rtwsta_link->mask;
+ enum nl80211_band band;
+ u64 cfg_mask;
+
+- if (!rtwsta->use_cfg_mask)
++ if (!rtwsta_link->use_cfg_mask)
+ return -1;
+
+ switch (chan->band_type) {
+@@ -227,17 +228,17 @@ static u64 rtw89_phy_ra_mask_cfg(struct rtw89_dev *rtwdev, struct rtw89_sta *rtw
+ return -1;
+ }
+
+- if (sta->deflink.he_cap.has_he) {
++ if (link_sta->he_cap.has_he) {
+ cfg_mask |= u64_encode_bits(mask->control[band].he_mcs[0],
+ RA_MASK_HE_1SS_RATES);
+ cfg_mask |= u64_encode_bits(mask->control[band].he_mcs[1],
+ RA_MASK_HE_2SS_RATES);
+- } else if (sta->deflink.vht_cap.vht_supported) {
++ } else if (link_sta->vht_cap.vht_supported) {
+ cfg_mask |= u64_encode_bits(mask->control[band].vht_mcs[0],
+ RA_MASK_VHT_1SS_RATES);
+ cfg_mask |= u64_encode_bits(mask->control[band].vht_mcs[1],
+ RA_MASK_VHT_2SS_RATES);
+- } else if (sta->deflink.ht_cap.ht_supported) {
++ } else if (link_sta->ht_cap.ht_supported) {
+ cfg_mask |= u64_encode_bits(mask->control[band].ht_mcs[0],
+ RA_MASK_HT_1SS_RATES);
+ cfg_mask |= u64_encode_bits(mask->control[band].ht_mcs[1],
+@@ -261,17 +262,17 @@ rtw89_ra_mask_eht_rates[4] = {RA_MASK_EHT_1SS_RATES, RA_MASK_EHT_2SS_RATES,
+ RA_MASK_EHT_3SS_RATES, RA_MASK_EHT_4SS_RATES};
+
+ static void rtw89_phy_ra_gi_ltf(struct rtw89_dev *rtwdev,
+- struct rtw89_sta *rtwsta,
++ struct rtw89_sta_link *rtwsta_link,
+ const struct rtw89_chan *chan,
+ bool *fix_giltf_en, u8 *fix_giltf)
+ {
+- struct cfg80211_bitrate_mask *mask = &rtwsta->mask;
++ struct cfg80211_bitrate_mask *mask = &rtwsta_link->mask;
+ u8 band = chan->band_type;
+ enum nl80211_band nl_band = rtw89_hw_to_nl80211_band(band);
+ u8 he_gi = mask->control[nl_band].he_gi;
+ u8 he_ltf = mask->control[nl_band].he_ltf;
+
+- if (!rtwsta->use_cfg_mask)
++ if (!rtwsta_link->use_cfg_mask)
+ return;
+
+ if (he_ltf == 2 && he_gi == 2) {
+@@ -295,17 +296,17 @@ static void rtw89_phy_ra_gi_ltf(struct rtw89_dev *rtwdev,
+ }
+
+ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
+- struct ieee80211_sta *sta, bool csi)
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ struct ieee80211_link_sta *link_sta,
++ bool p2p, bool csi)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+- struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif->rate_pattern;
+- struct rtw89_ra_info *ra = &rtwsta->ra;
++ struct rtw89_phy_rate_pattern *rate_pattern = &rtwvif_link->rate_pattern;
++ struct rtw89_ra_info *ra = &rtwsta_link->ra;
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwsta->rtwvif);
++ rtwvif_link->chanctx_idx);
+ const u64 *high_rate_masks = rtw89_ra_mask_ht_rates;
+- u8 rssi = ewma_rssi_read(&rtwsta->avg_rssi);
++ u8 rssi = ewma_rssi_read(&rtwsta_link->avg_rssi);
+ u64 ra_mask = 0;
+ u64 ra_mask_bak;
+ u8 mode = 0;
+@@ -320,65 +321,65 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
+
+ memset(ra, 0, sizeof(*ra));
+ /* Set the ra mask from sta's capability */
+- if (sta->deflink.eht_cap.has_eht) {
++ if (link_sta->eht_cap.has_eht) {
+ mode |= RTW89_RA_MODE_EHT;
+- ra_mask |= get_eht_ra_mask(sta);
++ ra_mask |= get_eht_ra_mask(link_sta);
+ high_rate_masks = rtw89_ra_mask_eht_rates;
+- } else if (sta->deflink.he_cap.has_he) {
++ } else if (link_sta->he_cap.has_he) {
+ mode |= RTW89_RA_MODE_HE;
+ csi_mode = RTW89_RA_RPT_MODE_HE;
+- ra_mask |= get_he_ra_mask(sta);
++ ra_mask |= get_he_ra_mask(link_sta);
+ high_rate_masks = rtw89_ra_mask_he_rates;
+- if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[2] &
++ if (link_sta->he_cap.he_cap_elem.phy_cap_info[2] &
+ IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ)
+ stbc_en = 1;
+- if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[1] &
++ if (link_sta->he_cap.he_cap_elem.phy_cap_info[1] &
+ IEEE80211_HE_PHY_CAP1_LDPC_CODING_IN_PAYLOAD)
+ ldpc_en = 1;
+- rtw89_phy_ra_gi_ltf(rtwdev, rtwsta, chan, &fix_giltf_en, &fix_giltf);
+- } else if (sta->deflink.vht_cap.vht_supported) {
+- u16 mcs_map = le16_to_cpu(sta->deflink.vht_cap.vht_mcs.rx_mcs_map);
++ rtw89_phy_ra_gi_ltf(rtwdev, rtwsta_link, chan, &fix_giltf_en, &fix_giltf);
++ } else if (link_sta->vht_cap.vht_supported) {
++ u16 mcs_map = le16_to_cpu(link_sta->vht_cap.vht_mcs.rx_mcs_map);
+
+ mode |= RTW89_RA_MODE_VHT;
+ csi_mode = RTW89_RA_RPT_MODE_VHT;
+ /* MCS9 (non-20MHz), MCS8, MCS7 */
+- if (sta->deflink.bandwidth == IEEE80211_STA_RX_BW_20)
++ if (link_sta->bandwidth == IEEE80211_STA_RX_BW_20)
+ ra_mask |= get_mcs_ra_mask(mcs_map, 8, 1);
+ else
+ ra_mask |= get_mcs_ra_mask(mcs_map, 9, 1);
+ high_rate_masks = rtw89_ra_mask_vht_rates;
+- if (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK)
++ if (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_MASK)
+ stbc_en = 1;
+- if (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC)
++ if (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC)
+ ldpc_en = 1;
+- } else if (sta->deflink.ht_cap.ht_supported) {
++ } else if (link_sta->ht_cap.ht_supported) {
+ mode |= RTW89_RA_MODE_HT;
+ csi_mode = RTW89_RA_RPT_MODE_HT;
+- ra_mask |= ((u64)sta->deflink.ht_cap.mcs.rx_mask[3] << 48) |
+- ((u64)sta->deflink.ht_cap.mcs.rx_mask[2] << 36) |
+- ((u64)sta->deflink.ht_cap.mcs.rx_mask[1] << 24) |
+- ((u64)sta->deflink.ht_cap.mcs.rx_mask[0] << 12);
++ ra_mask |= ((u64)link_sta->ht_cap.mcs.rx_mask[3] << 48) |
++ ((u64)link_sta->ht_cap.mcs.rx_mask[2] << 36) |
++ ((u64)link_sta->ht_cap.mcs.rx_mask[1] << 24) |
++ ((u64)link_sta->ht_cap.mcs.rx_mask[0] << 12);
+ high_rate_masks = rtw89_ra_mask_ht_rates;
+- if (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_RX_STBC)
++ if (link_sta->ht_cap.cap & IEEE80211_HT_CAP_RX_STBC)
+ stbc_en = 1;
+- if (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_LDPC_CODING)
++ if (link_sta->ht_cap.cap & IEEE80211_HT_CAP_LDPC_CODING)
+ ldpc_en = 1;
+ }
+
+ switch (chan->band_type) {
+ case RTW89_BAND_2G:
+- ra_mask |= sta->deflink.supp_rates[NL80211_BAND_2GHZ];
+- if (sta->deflink.supp_rates[NL80211_BAND_2GHZ] & 0xf)
++ ra_mask |= link_sta->supp_rates[NL80211_BAND_2GHZ];
++ if (link_sta->supp_rates[NL80211_BAND_2GHZ] & 0xf)
+ mode |= RTW89_RA_MODE_CCK;
+- if (sta->deflink.supp_rates[NL80211_BAND_2GHZ] & 0xff0)
++ if (link_sta->supp_rates[NL80211_BAND_2GHZ] & 0xff0)
+ mode |= RTW89_RA_MODE_OFDM;
+ break;
+ case RTW89_BAND_5G:
+- ra_mask |= (u64)sta->deflink.supp_rates[NL80211_BAND_5GHZ] << 4;
++ ra_mask |= (u64)link_sta->supp_rates[NL80211_BAND_5GHZ] << 4;
+ mode |= RTW89_RA_MODE_OFDM;
+ break;
+ case RTW89_BAND_6G:
+- ra_mask |= (u64)sta->deflink.supp_rates[NL80211_BAND_6GHZ] << 4;
++ ra_mask |= (u64)link_sta->supp_rates[NL80211_BAND_6GHZ] << 4;
+ mode |= RTW89_RA_MODE_OFDM;
+ break;
+ default:
+@@ -405,48 +406,48 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
+ ra_mask &= rtw89_phy_ra_mask_rssi(rtwdev, rssi, 0);
+
+ ra_mask = rtw89_phy_ra_mask_recover(ra_mask, ra_mask_bak);
+- ra_mask &= rtw89_phy_ra_mask_cfg(rtwdev, rtwsta, chan);
++ ra_mask &= rtw89_phy_ra_mask_cfg(rtwdev, rtwsta_link, link_sta, chan);
+
+- switch (sta->deflink.bandwidth) {
++ switch (link_sta->bandwidth) {
+ case IEEE80211_STA_RX_BW_160:
+ bw_mode = RTW89_CHANNEL_WIDTH_160;
+- sgi = sta->deflink.vht_cap.vht_supported &&
+- (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_160);
++ sgi = link_sta->vht_cap.vht_supported &&
++ (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_160);
+ break;
+ case IEEE80211_STA_RX_BW_80:
+ bw_mode = RTW89_CHANNEL_WIDTH_80;
+- sgi = sta->deflink.vht_cap.vht_supported &&
+- (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_80);
++ sgi = link_sta->vht_cap.vht_supported &&
++ (link_sta->vht_cap.cap & IEEE80211_VHT_CAP_SHORT_GI_80);
+ break;
+ case IEEE80211_STA_RX_BW_40:
+ bw_mode = RTW89_CHANNEL_WIDTH_40;
+- sgi = sta->deflink.ht_cap.ht_supported &&
+- (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_SGI_40);
++ sgi = link_sta->ht_cap.ht_supported &&
++ (link_sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_40);
+ break;
+ default:
+ bw_mode = RTW89_CHANNEL_WIDTH_20;
+- sgi = sta->deflink.ht_cap.ht_supported &&
+- (sta->deflink.ht_cap.cap & IEEE80211_HT_CAP_SGI_20);
++ sgi = link_sta->ht_cap.ht_supported &&
++ (link_sta->ht_cap.cap & IEEE80211_HT_CAP_SGI_20);
+ break;
+ }
+
+- if (sta->deflink.he_cap.he_cap_elem.phy_cap_info[3] &
++ if (link_sta->he_cap.he_cap_elem.phy_cap_info[3] &
+ IEEE80211_HE_PHY_CAP3_DCM_MAX_CONST_RX_16_QAM)
+ ra->dcm_cap = 1;
+
+- if (rate_pattern->enable && !vif->p2p) {
+- ra_mask = rtw89_phy_ra_mask_cfg(rtwdev, rtwsta, chan);
++ if (rate_pattern->enable && !p2p) {
++ ra_mask = rtw89_phy_ra_mask_cfg(rtwdev, rtwsta_link, link_sta, chan);
+ ra_mask &= rate_pattern->ra_mask;
+ mode = rate_pattern->ra_mode;
+ }
+
+ ra->bw_cap = bw_mode;
+- ra->er_cap = rtwsta->er_cap;
++ ra->er_cap = rtwsta_link->er_cap;
+ ra->mode_ctrl = mode;
+- ra->macid = rtwsta->mac_id;
++ ra->macid = rtwsta_link->mac_id;
+ ra->stbc_cap = stbc_en;
+ ra->ldpc_cap = ldpc_en;
+- ra->ss_num = min(sta->deflink.rx_nss, rtwdev->hal.tx_nss) - 1;
++ ra->ss_num = min(link_sta->rx_nss, rtwdev->hal.tx_nss) - 1;
+ ra->en_sgi = sgi;
+ ra->ra_mask = ra_mask;
+ ra->fix_giltf_en = fix_giltf_en;
+@@ -458,20 +459,29 @@ static void rtw89_phy_ra_sta_update(struct rtw89_dev *rtwdev,
+ ra->fixed_csi_rate_en = false;
+ ra->ra_csi_rate_en = true;
+ ra->cr_tbl_sel = false;
+- ra->band_num = rtwvif->phy_idx;
++ ra->band_num = rtwvif_link->phy_idx;
+ ra->csi_bw = bw_mode;
+ ra->csi_gi_ltf = RTW89_GILTF_LGI_4XHE32;
+ ra->csi_mcs_ss_idx = 5;
+ ra->csi_mode = csi_mode;
+ }
+
+-void rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta,
+- u32 changed)
++static void __rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct rtw89_sta_link *rtwsta_link,
++ u32 changed)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_ra_info *ra = &rtwsta->ra;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct rtw89_ra_info *ra = &rtwsta_link->ra;
++ struct ieee80211_link_sta *link_sta;
+
+- rtw89_phy_ra_sta_update(rtwdev, sta, false);
++ rcu_read_lock();
++
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
++ rtw89_phy_ra_sta_update(rtwdev, rtwvif_link, rtwsta_link,
++ link_sta, vif->p2p, false);
++
++ rcu_read_unlock();
+
+ if (changed & IEEE80211_RC_SUPP_RATES_CHANGED)
+ ra->upd_mask = 1;
+@@ -489,6 +499,20 @@ void rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta
+ rtw89_fw_h2c_ra(rtwdev, ra, false);
+ }
+
++void rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta,
++ u32 changed)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ __rtw89_phy_ra_update_sta(rtwdev, rtwvif_link, rtwsta_link, changed);
++ }
++}
++
+ static bool __check_rate_pattern(struct rtw89_phy_rate_pattern *next,
+ u16 rate_base, u64 ra_mask, u8 ra_mode,
+ u32 rate_ctrl, u32 ctrl_skip, bool force)
+@@ -523,15 +547,15 @@ static bool __check_rate_pattern(struct rtw89_phy_rate_pattern *next,
+ [RTW89_CHIP_BE] = RTW89_HW_RATE_V1_ ## rate, \
+ }
+
+-void rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif,
+- const struct cfg80211_bitrate_mask *mask)
++static
++void __rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ const struct cfg80211_bitrate_mask *mask)
+ {
+ struct ieee80211_supported_band *sband;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ struct rtw89_phy_rate_pattern next_pattern = {0};
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
+ static const u16 hw_rate_he[][RTW89_CHIP_GEN_NUM] = {
+ RTW89_HW_RATE_BY_CHIP_GEN(HE_NSS1_MCS0),
+ RTW89_HW_RATE_BY_CHIP_GEN(HE_NSS2_MCS0),
+@@ -600,7 +624,7 @@ void rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev,
+ if (!next_pattern.enable)
+ goto out;
+
+- rtwvif->rate_pattern = next_pattern;
++ rtwvif_link->rate_pattern = next_pattern;
+ rtw89_debug(rtwdev, RTW89_DBG_RA,
+ "configure pattern: rate 0x%x, mask 0x%llx, mode 0x%x\n",
+ next_pattern.rate,
+@@ -609,10 +633,22 @@ void rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev,
+ return;
+
+ out:
+- rtwvif->rate_pattern.enable = false;
++ rtwvif_link->rate_pattern.enable = false;
+ rtw89_debug(rtwdev, RTW89_DBG_RA, "unset rate pattern\n");
+ }
+
++void rtw89_phy_rate_pattern_vif(struct rtw89_dev *rtwdev,
++ struct ieee80211_vif *vif,
++ const struct cfg80211_bitrate_mask *mask)
++{
++ struct rtw89_vif *rtwvif = vif_to_rtwvif(vif);
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ __rtw89_phy_rate_pattern_vif(rtwdev, rtwvif_link, mask);
++}
++
+ static void rtw89_phy_ra_update_sta_iter(void *data, struct ieee80211_sta *sta)
+ {
+ struct rtw89_dev *rtwdev = (struct rtw89_dev *)data;
+@@ -627,14 +663,24 @@ void rtw89_phy_ra_update(struct rtw89_dev *rtwdev)
+ rtwdev);
+ }
+
+-void rtw89_phy_ra_assoc(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta)
++void rtw89_phy_ra_assoc(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_ra_info *ra = &rtwsta->ra;
+- u8 rssi = ewma_rssi_read(&rtwsta->avg_rssi) >> RSSI_FACTOR;
+- bool csi = rtw89_sta_has_beamformer_cap(sta);
++ struct rtw89_vif_link *rtwvif_link = rtwsta_link->rtwvif_link;
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
++ struct rtw89_ra_info *ra = &rtwsta_link->ra;
++ u8 rssi = ewma_rssi_read(&rtwsta_link->avg_rssi) >> RSSI_FACTOR;
++ struct ieee80211_link_sta *link_sta;
++ bool csi;
++
++ rcu_read_lock();
+
+- rtw89_phy_ra_sta_update(rtwdev, sta, csi);
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, true);
++ csi = rtw89_sta_has_beamformer_cap(link_sta);
++
++ rtw89_phy_ra_sta_update(rtwdev, rtwvif_link, rtwsta_link,
++ link_sta, vif->p2p, csi);
++
++ rcu_read_unlock();
+
+ if (rssi > 40)
+ ra->init_rate_lv = 1;
+@@ -2553,14 +2599,14 @@ struct rtw89_phy_iter_ra_data {
+ struct sk_buff *c2h;
+ };
+
+-static void rtw89_phy_c2h_ra_rpt_iter(void *data, struct ieee80211_sta *sta)
++static void __rtw89_phy_c2h_ra_rpt_iter(struct rtw89_sta_link *rtwsta_link,
++ struct ieee80211_link_sta *link_sta,
++ struct rtw89_phy_iter_ra_data *ra_data)
+ {
+- struct rtw89_phy_iter_ra_data *ra_data = (struct rtw89_phy_iter_ra_data *)data;
+ struct rtw89_dev *rtwdev = ra_data->rtwdev;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+ const struct rtw89_c2h_ra_rpt *c2h =
+ (const struct rtw89_c2h_ra_rpt *)ra_data->c2h->data;
+- struct rtw89_ra_report *ra_report = &rtwsta->ra_report;
++ struct rtw89_ra_report *ra_report = &rtwsta_link->ra_report;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ bool format_v1 = chip->chip_gen == RTW89_CHIP_BE;
+ u8 mode, rate, bw, giltf, mac_id;
+@@ -2570,7 +2616,7 @@ static void rtw89_phy_c2h_ra_rpt_iter(void *data, struct ieee80211_sta *sta)
+ u8 t;
+
+ mac_id = le32_get_bits(c2h->w2, RTW89_C2H_RA_RPT_W2_MACID);
+- if (mac_id != rtwsta->mac_id)
++ if (mac_id != rtwsta_link->mac_id)
+ return;
+
+ rate = le32_get_bits(c2h->w3, RTW89_C2H_RA_RPT_W3_MCSNSS);
+@@ -2661,8 +2707,26 @@ static void rtw89_phy_c2h_ra_rpt_iter(void *data, struct ieee80211_sta *sta)
+ u16_encode_bits(mode, RTW89_HW_RATE_MASK_MOD) |
+ u16_encode_bits(rate, RTW89_HW_RATE_MASK_VAL);
+ ra_report->might_fallback_legacy = mcs <= 2;
+- sta->deflink.agg.max_rc_amsdu_len = get_max_amsdu_len(rtwdev, ra_report);
+- rtwsta->max_agg_wait = sta->deflink.agg.max_rc_amsdu_len / 1500 - 1;
++ link_sta->agg.max_rc_amsdu_len = get_max_amsdu_len(rtwdev, ra_report);
++ rtwsta_link->max_agg_wait = link_sta->agg.max_rc_amsdu_len / 1500 - 1;
++}
++
++static void rtw89_phy_c2h_ra_rpt_iter(void *data, struct ieee80211_sta *sta)
++{
++ struct rtw89_phy_iter_ra_data *ra_data = (struct rtw89_phy_iter_ra_data *)data;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_sta_link *rtwsta_link;
++ struct ieee80211_link_sta *link_sta;
++ unsigned int link_id;
++
++ rcu_read_lock();
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ link_sta = rtw89_sta_rcu_dereference_link(rtwsta_link, false);
++ __rtw89_phy_c2h_ra_rpt_iter(rtwsta_link, link_sta, ra_data);
++ }
++
++ rcu_read_unlock();
+ }
+
+ static void
+@@ -4290,33 +4354,33 @@ void rtw89_phy_cfo_parse(struct rtw89_dev *rtwdev, s16 cfo_val,
+ cfo->packet_count++;
+ }
+
+-void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev,
+- rtwvif->chanctx_idx);
++ rtwvif_link->chanctx_idx);
+ struct rtw89_phy_ul_tb_info *ul_tb_info = &rtwdev->ul_tb_info;
+
+ if (!chip->ul_tb_waveform_ctrl)
+ return;
+
+- rtwvif->def_tri_idx =
++ rtwvif_link->def_tri_idx =
+ rtw89_phy_read32_mask(rtwdev, R_DCFO_OPT, B_TXSHAPE_TRIANGULAR_CFG);
+
+ if (chip->chip_id == RTL8852B && rtwdev->hal.cv > CHIP_CBV)
+- rtwvif->dyn_tb_bedge_en = false;
++ rtwvif_link->dyn_tb_bedge_en = false;
+ else if (chan->band_type >= RTW89_BAND_5G &&
+ chan->band_width >= RTW89_CHANNEL_WIDTH_40)
+- rtwvif->dyn_tb_bedge_en = true;
++ rtwvif_link->dyn_tb_bedge_en = true;
+ else
+- rtwvif->dyn_tb_bedge_en = false;
++ rtwvif_link->dyn_tb_bedge_en = false;
+
+ rtw89_debug(rtwdev, RTW89_DBG_UL_TB,
+ "[ULTB] def_if_bandedge=%d, def_tri_idx=%d\n",
+- ul_tb_info->def_if_bandedge, rtwvif->def_tri_idx);
++ ul_tb_info->def_if_bandedge, rtwvif_link->def_tri_idx);
+ rtw89_debug(rtwdev, RTW89_DBG_UL_TB,
+ "[ULTB] dyn_tb_begde_en=%d, dyn_tb_tri_en=%d\n",
+- rtwvif->dyn_tb_bedge_en, ul_tb_info->dyn_tb_tri_en);
++ rtwvif_link->dyn_tb_bedge_en, ul_tb_info->dyn_tb_tri_en);
+ }
+
+ struct rtw89_phy_ul_tb_check_data {
+@@ -4338,7 +4402,7 @@ struct rtw89_phy_power_diff {
+ };
+
+ static void rtw89_phy_ofdma_power_diff(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ static const struct rtw89_phy_power_diff table[2] = {
+ {0x0, 0x0, 0x0, 0x0, 0xf4, 0x3, 0x3},
+@@ -4350,13 +4414,13 @@ static void rtw89_phy_ofdma_power_diff(struct rtw89_dev *rtwdev,
+ if (!rtwdev->chip->ul_tb_pwr_diff)
+ return;
+
+- if (rtwvif->pwr_diff_en == rtwvif->pre_pwr_diff_en) {
+- rtwvif->pwr_diff_en = false;
++ if (rtwvif_link->pwr_diff_en == rtwvif_link->pre_pwr_diff_en) {
++ rtwvif_link->pwr_diff_en = false;
+ return;
+ }
+
+- rtwvif->pre_pwr_diff_en = rtwvif->pwr_diff_en;
+- param = &table[rtwvif->pwr_diff_en];
++ rtwvif_link->pre_pwr_diff_en = rtwvif_link->pwr_diff_en;
++ param = &table[rtwvif_link->pwr_diff_en];
+
+ rtw89_phy_write32_mask(rtwdev, R_Q_MATRIX_00, B_Q_MATRIX_00_REAL,
+ param->q_00);
+@@ -4365,32 +4429,32 @@ static void rtw89_phy_ofdma_power_diff(struct rtw89_dev *rtwdev,
+ rtw89_phy_write32_mask(rtwdev, R_CUSTOMIZE_Q_MATRIX,
+ B_CUSTOMIZE_Q_MATRIX_EN, param->q_matrix_en);
+
+- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_1T, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_1T, rtwvif_link->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, B_AX_PWR_UL_TB_1T_NORM_BW160,
+ param->ultb_1t_norm_160);
+
+- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_2T, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PWR_UL_TB_2T, rtwvif_link->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, B_AX_PWR_UL_TB_2T_NORM_BW160,
+ param->ultb_2t_norm_160);
+
+- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM1, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM1, rtwvif_link->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, B_AX_PATH_COM1_NORM_1STS,
+ param->com1_norm_1sts);
+
+- reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM2, rtwvif->mac_idx);
++ reg = rtw89_mac_reg_by_idx(rtwdev, R_AX_PATH_COM2, rtwvif_link->mac_idx);
+ rtw89_write32_mask(rtwdev, reg, B_AX_PATH_COM2_RESP_1STS_PATH,
+ param->com2_resp_1sts_path);
+ }
+
+ static
+ void rtw89_phy_ul_tb_ctrl_check(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct rtw89_phy_ul_tb_check_data *ul_tb_data)
+ {
+ struct rtw89_traffic_stats *stats = &rtwdev->stats;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+
+- if (rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION)
++ if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION)
+ return;
+
+ if (!vif->cfg.assoc)
+@@ -4403,11 +4467,11 @@ void rtw89_phy_ul_tb_ctrl_check(struct rtw89_dev *rtwdev,
+ ul_tb_data->low_tf_client = true;
+
+ ul_tb_data->valid = true;
+- ul_tb_data->def_tri_idx = rtwvif->def_tri_idx;
+- ul_tb_data->dyn_tb_bedge_en = rtwvif->dyn_tb_bedge_en;
++ ul_tb_data->def_tri_idx = rtwvif_link->def_tri_idx;
++ ul_tb_data->dyn_tb_bedge_en = rtwvif_link->dyn_tb_bedge_en;
+ }
+
+- rtw89_phy_ofdma_power_diff(rtwdev, rtwvif);
++ rtw89_phy_ofdma_power_diff(rtwdev, rtwvif_link);
+ }
+
+ static void rtw89_phy_ul_tb_waveform_ctrl(struct rtw89_dev *rtwdev,
+@@ -4453,7 +4517,9 @@ void rtw89_phy_ul_tb_ctrl_track(struct rtw89_dev *rtwdev)
+ {
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ struct rtw89_phy_ul_tb_check_data ul_tb_data = {};
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ if (!chip->ul_tb_waveform_ctrl && !chip->ul_tb_pwr_diff)
+ return;
+@@ -4462,7 +4528,8 @@ void rtw89_phy_ul_tb_ctrl_track(struct rtw89_dev *rtwdev)
+ return;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_phy_ul_tb_ctrl_check(rtwdev, rtwvif, &ul_tb_data);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_phy_ul_tb_ctrl_check(rtwdev, rtwvif_link, &ul_tb_data);
+
+ if (!ul_tb_data.valid)
+ return;
+@@ -4626,30 +4693,42 @@ struct rtw89_phy_iter_rssi_data {
+ bool rssi_changed;
+ };
+
+-static void rtw89_phy_stat_rssi_update_iter(void *data,
+- struct ieee80211_sta *sta)
++static
++void __rtw89_phy_stat_rssi_update_iter(struct rtw89_sta_link *rtwsta_link,
++ struct rtw89_phy_iter_rssi_data *rssi_data)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_phy_iter_rssi_data *rssi_data =
+- (struct rtw89_phy_iter_rssi_data *)data;
+ struct rtw89_phy_ch_info *ch_info = rssi_data->ch_info;
+ unsigned long rssi_curr;
+
+- rssi_curr = ewma_rssi_read(&rtwsta->avg_rssi);
++ rssi_curr = ewma_rssi_read(&rtwsta_link->avg_rssi);
+
+ if (rssi_curr < ch_info->rssi_min) {
+ ch_info->rssi_min = rssi_curr;
+- ch_info->rssi_min_macid = rtwsta->mac_id;
++ ch_info->rssi_min_macid = rtwsta_link->mac_id;
+ }
+
+- if (rtwsta->prev_rssi == 0) {
+- rtwsta->prev_rssi = rssi_curr;
+- } else if (abs((int)rtwsta->prev_rssi - (int)rssi_curr) > (3 << RSSI_FACTOR)) {
+- rtwsta->prev_rssi = rssi_curr;
++ if (rtwsta_link->prev_rssi == 0) {
++ rtwsta_link->prev_rssi = rssi_curr;
++ } else if (abs((int)rtwsta_link->prev_rssi - (int)rssi_curr) >
++ (3 << RSSI_FACTOR)) {
++ rtwsta_link->prev_rssi = rssi_curr;
+ rssi_data->rssi_changed = true;
+ }
+ }
+
++static void rtw89_phy_stat_rssi_update_iter(void *data,
++ struct ieee80211_sta *sta)
++{
++ struct rtw89_phy_iter_rssi_data *rssi_data =
++ (struct rtw89_phy_iter_rssi_data *)data;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id)
++ __rtw89_phy_stat_rssi_update_iter(rtwsta_link, rssi_data);
++}
++
+ static void rtw89_phy_stat_rssi_update(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_phy_iter_rssi_data rssi_data = {0};
+@@ -5753,26 +5832,15 @@ void rtw89_phy_dig(struct rtw89_dev *rtwdev)
+ rtw89_phy_dig_sdagc_follow_pagc_config(rtwdev, false);
+ }
+
+-static void rtw89_phy_tx_path_div_sta_iter(void *data, struct ieee80211_sta *sta)
++static void __rtw89_phy_tx_path_div_sta_iter(struct rtw89_dev *rtwdev,
++ struct rtw89_sta_link *rtwsta_link)
+ {
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
+- struct rtw89_dev *rtwdev = rtwsta->rtwdev;
+- struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_hal *hal = &rtwdev->hal;
+- bool *done = data;
+ u8 rssi_a, rssi_b;
+ u32 candidate;
+
+- if (rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION || sta->tdls)
+- return;
+-
+- if (*done)
+- return;
+-
+- *done = true;
+-
+- rssi_a = ewma_rssi_read(&rtwsta->rssi[RF_PATH_A]);
+- rssi_b = ewma_rssi_read(&rtwsta->rssi[RF_PATH_B]);
++ rssi_a = ewma_rssi_read(&rtwsta_link->rssi[RF_PATH_A]);
++ rssi_b = ewma_rssi_read(&rtwsta_link->rssi[RF_PATH_B]);
+
+ if (rssi_a > rssi_b + RTW89_TX_DIV_RSSI_RAW_TH)
+ candidate = RF_A;
+@@ -5785,7 +5853,7 @@ static void rtw89_phy_tx_path_div_sta_iter(void *data, struct ieee80211_sta *sta
+ return;
+
+ hal->antenna_tx = candidate;
+- rtw89_fw_h2c_txpath_cmac_tbl(rtwdev, rtwsta);
++ rtw89_fw_h2c_txpath_cmac_tbl(rtwdev, rtwsta_link);
+
+ if (hal->antenna_tx == RF_A) {
+ rtw89_phy_write32_mask(rtwdev, R_P0_RFMODE, B_P0_RFMODE_MUX, 0x12);
+@@ -5796,6 +5864,37 @@ static void rtw89_phy_tx_path_div_sta_iter(void *data, struct ieee80211_sta *sta
+ }
+ }
+
++static void rtw89_phy_tx_path_div_sta_iter(void *data, struct ieee80211_sta *sta)
++{
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
++ struct rtw89_dev *rtwdev = rtwsta->rtwdev;
++ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
++ struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
++ bool *done = data;
++
++ if (WARN(ieee80211_vif_is_mld(vif), "MLD mix path_div\n"))
++ return;
++
++ if (sta->tdls)
++ return;
++
++ if (*done)
++ return;
++
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
++ if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION)
++ continue;
++
++ *done = true;
++ __rtw89_phy_tx_path_div_sta_iter(rtwdev, rtwsta_link);
++ return;
++ }
++}
++
+ void rtw89_phy_tx_path_div_track(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_hal *hal = &rtwdev->hal;
+@@ -6002,17 +6101,27 @@ void rtw89_phy_dm_init(struct rtw89_dev *rtwdev)
+ rtw89_chip_cfg_txrx_path(rtwdev);
+ }
+
+-void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
++void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ const struct rtw89_reg_def *bss_clr_vld = &chip->bss_clr_vld;
+ enum rtw89_phy_idx phy_idx = RTW89_PHY_0;
++ struct ieee80211_bss_conf *bss_conf;
+ u8 bss_color;
+
+- if (!vif->bss_conf.he_support || !vif->cfg.assoc)
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ if (!bss_conf->he_support || !vif->cfg.assoc) {
++ rcu_read_unlock();
+ return;
++ }
++
++ bss_color = bss_conf->he_bss_color.color;
+
+- bss_color = vif->bss_conf.he_bss_color.color;
++ rcu_read_unlock();
+
+ rtw89_phy_write32_idx(rtwdev, bss_clr_vld->addr, bss_clr_vld->mask, 0x1,
+ phy_idx);
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.h b/drivers/net/wireless/realtek/rtw89/phy.h
+index 6dd8ec46939acd..7e335c02ee6fbf 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.h
++++ b/drivers/net/wireless/realtek/rtw89/phy.h
+@@ -892,7 +892,7 @@ void rtw89_phy_set_txpwr_limit_ru(struct rtw89_dev *rtwdev,
+ phy->set_txpwr_limit_ru(rtwdev, chan, phy_idx);
+ }
+
+-void rtw89_phy_ra_assoc(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta);
++void rtw89_phy_ra_assoc(struct rtw89_dev *rtwdev, struct rtw89_sta_link *rtwsta_link);
+ void rtw89_phy_ra_update(struct rtw89_dev *rtwdev);
+ void rtw89_phy_ra_update_sta(struct rtw89_dev *rtwdev, struct ieee80211_sta *sta,
+ u32 changed);
+@@ -953,11 +953,12 @@ void rtw89_phy_antdiv_parse(struct rtw89_dev *rtwdev,
+ struct rtw89_rx_phy_ppdu *phy_ppdu);
+ void rtw89_phy_antdiv_track(struct rtw89_dev *rtwdev);
+ void rtw89_phy_antdiv_work(struct work_struct *work);
+-void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif);
++void rtw89_phy_set_bss_color(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link);
+ void rtw89_phy_tssi_ctrl_set_bandedge_cfg(struct rtw89_dev *rtwdev,
+ enum rtw89_mac_idx mac_idx,
+ enum rtw89_tssi_bandedge_cfg bandedge_cfg);
+-void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++void rtw89_phy_ul_tb_assoc(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ void rtw89_phy_ul_tb_ctrl_track(struct rtw89_dev *rtwdev);
+ u8 rtw89_encode_chan_idx(struct rtw89_dev *rtwdev, u8 central_ch, u8 band);
+ void rtw89_decode_chan_idx(struct rtw89_dev *rtwdev, u8 chan_idx,
+diff --git a/drivers/net/wireless/realtek/rtw89/ps.c b/drivers/net/wireless/realtek/rtw89/ps.c
+index aebd6404f80250..c1c12abc2ea93a 100644
+--- a/drivers/net/wireless/realtek/rtw89/ps.c
++++ b/drivers/net/wireless/realtek/rtw89/ps.c
+@@ -62,9 +62,9 @@ static void rtw89_ps_power_mode_change(struct rtw89_dev *rtwdev, bool enter)
+ rtw89_mac_power_mode_change(rtwdev, enter);
+ }
+
+-void __rtw89_enter_ps_mode(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++void __rtw89_enter_ps_mode(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link)
+ {
+- if (rtwvif->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT)
++ if (rtwvif_link->wifi_role == RTW89_WIFI_ROLE_P2P_CLIENT)
+ return;
+
+ if (!rtwdev->ps_mode)
+@@ -85,23 +85,25 @@ void __rtw89_leave_ps_mode(struct rtw89_dev *rtwdev)
+ rtw89_ps_power_mode_change(rtwdev, false);
+ }
+
+-static void __rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void __rtw89_enter_lps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_lps_parm lps_param = {
+- .macid = rtwvif->mac_id,
++ .macid = rtwvif_link->mac_id,
+ .psmode = RTW89_MAC_AX_PS_MODE_LEGACY,
+ .lastrpwm = RTW89_LAST_RPWM_PS,
+ };
+
+ rtw89_btc_ntfy_radio_state(rtwdev, BTC_RFCTRL_FW_CTRL);
+ rtw89_fw_h2c_lps_parm(rtwdev, &lps_param);
+- rtw89_fw_h2c_lps_ch_info(rtwdev, rtwvif);
++ rtw89_fw_h2c_lps_ch_info(rtwdev, rtwvif_link);
+ }
+
+-static void __rtw89_leave_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void __rtw89_leave_lps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_lps_parm lps_param = {
+- .macid = rtwvif->mac_id,
++ .macid = rtwvif_link->mac_id,
+ .psmode = RTW89_MAC_AX_PS_MODE_ACTIVE,
+ .lastrpwm = RTW89_LAST_RPWM_ACTIVE,
+ };
+@@ -109,7 +111,7 @@ static void __rtw89_leave_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif
+ rtw89_fw_h2c_lps_parm(rtwdev, &lps_param);
+ rtw89_fw_leave_lps_check(rtwdev, 0);
+ rtw89_btc_ntfy_radio_state(rtwdev, BTC_RFCTRL_WL_ON);
+- rtw89_chip_digital_pwr_comp(rtwdev, rtwvif->phy_idx);
++ rtw89_chip_digital_pwr_comp(rtwdev, rtwvif_link->phy_idx);
+ }
+
+ void rtw89_leave_ps_mode(struct rtw89_dev *rtwdev)
+@@ -119,7 +121,7 @@ void rtw89_leave_ps_mode(struct rtw89_dev *rtwdev)
+ __rtw89_leave_ps_mode(rtwdev);
+ }
+
+-void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool ps_mode)
+ {
+ lockdep_assert_held(&rtwdev->mutex);
+@@ -127,23 +129,26 @@ void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ if (test_and_set_bit(RTW89_FLAG_LEISURE_PS, rtwdev->flags))
+ return;
+
+- __rtw89_enter_lps(rtwdev, rtwvif);
++ __rtw89_enter_lps(rtwdev, rtwvif_link);
+ if (ps_mode)
+- __rtw89_enter_ps_mode(rtwdev, rtwvif);
++ __rtw89_enter_ps_mode(rtwdev, rtwvif_link);
+ }
+
+-static void rtw89_leave_lps_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw89_leave_lps_vif(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- if (rtwvif->wifi_role != RTW89_WIFI_ROLE_STATION &&
+- rtwvif->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT)
++ if (rtwvif_link->wifi_role != RTW89_WIFI_ROLE_STATION &&
++ rtwvif_link->wifi_role != RTW89_WIFI_ROLE_P2P_CLIENT)
+ return;
+
+- __rtw89_leave_lps(rtwdev, rtwvif);
++ __rtw89_leave_lps(rtwdev, rtwvif_link);
+ }
+
+ void rtw89_leave_lps(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ lockdep_assert_held(&rtwdev->mutex);
+
+@@ -153,12 +158,15 @@ void rtw89_leave_lps(struct rtw89_dev *rtwdev)
+ __rtw89_leave_ps_mode(rtwdev);
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_leave_lps_vif(rtwdev, rtwvif);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_leave_lps_vif(rtwdev, rtwvif_link);
+ }
+
+ void rtw89_enter_ips(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+
+ set_bit(RTW89_FLAG_INACTIVE_PS, rtwdev->flags);
+
+@@ -166,14 +174,17 @@ void rtw89_enter_ips(struct rtw89_dev *rtwdev)
+ return;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_mac_vif_deinit(rtwdev, rtwvif);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_mac_vif_deinit(rtwdev, rtwvif_link);
+
+ rtw89_core_stop(rtwdev);
+ }
+
+ void rtw89_leave_ips(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+ int ret;
+
+ if (test_bit(RTW89_FLAG_POWERON, rtwdev->flags))
+@@ -186,7 +197,8 @@ void rtw89_leave_ips(struct rtw89_dev *rtwdev)
+ rtw89_set_channel(rtwdev);
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_mac_vif_init(rtwdev, rtwvif);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_mac_vif_init(rtwdev, rtwvif_link);
+
+ clear_bit(RTW89_FLAG_INACTIVE_PS, rtwdev->flags);
+ }
+@@ -197,48 +209,50 @@ void rtw89_set_coex_ctrl_lps(struct rtw89_dev *rtwdev, bool btc_ctrl)
+ rtw89_leave_lps(rtwdev);
+ }
+
+-static void rtw89_tsf32_toggle(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw89_tsf32_toggle(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ enum rtw89_p2pps_action act)
+ {
+ if (act == RTW89_P2P_ACT_UPDATE || act == RTW89_P2P_ACT_REMOVE)
+ return;
+
+ if (act == RTW89_P2P_ACT_INIT)
+- rtw89_fw_h2c_tsf32_toggle(rtwdev, rtwvif, true);
++ rtw89_fw_h2c_tsf32_toggle(rtwdev, rtwvif_link, true);
+ else if (act == RTW89_P2P_ACT_TERMINATE)
+- rtw89_fw_h2c_tsf32_toggle(rtwdev, rtwvif, false);
++ rtw89_fw_h2c_tsf32_toggle(rtwdev, rtwvif_link, false);
+ }
+
+ static void rtw89_p2p_disable_all_noa(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif)
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ enum rtw89_p2pps_action act;
+ u8 noa_id;
+
+- if (rtwvif->last_noa_nr == 0)
++ if (rtwvif_link->last_noa_nr == 0)
+ return;
+
+- for (noa_id = 0; noa_id < rtwvif->last_noa_nr; noa_id++) {
+- if (noa_id == rtwvif->last_noa_nr - 1)
++ for (noa_id = 0; noa_id < rtwvif_link->last_noa_nr; noa_id++) {
++ if (noa_id == rtwvif_link->last_noa_nr - 1)
+ act = RTW89_P2P_ACT_TERMINATE;
+ else
+ act = RTW89_P2P_ACT_REMOVE;
+- rtw89_tsf32_toggle(rtwdev, rtwvif, act);
+- rtw89_fw_h2c_p2p_act(rtwdev, vif, NULL, act, noa_id);
++ rtw89_tsf32_toggle(rtwdev, rtwvif_link, act);
++ rtw89_fw_h2c_p2p_act(rtwdev, rtwvif_link, bss_conf,
++ NULL, act, noa_id);
+ }
+ }
+
+ static void rtw89_p2p_update_noa(struct rtw89_dev *rtwdev,
+- struct ieee80211_vif *vif)
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf)
+ {
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)vif->drv_priv;
+ struct ieee80211_p2p_noa_desc *desc;
+ enum rtw89_p2pps_action act;
+ u8 noa_id;
+
+ for (noa_id = 0; noa_id < RTW89_P2P_MAX_NOA_NUM; noa_id++) {
+- desc = &vif->bss_conf.p2p_noa_attr.desc[noa_id];
++ desc = &bss_conf->p2p_noa_attr.desc[noa_id];
+ if (!desc->count || !desc->duration)
+ break;
+
+@@ -246,16 +260,19 @@ static void rtw89_p2p_update_noa(struct rtw89_dev *rtwdev,
+ act = RTW89_P2P_ACT_INIT;
+ else
+ act = RTW89_P2P_ACT_UPDATE;
+- rtw89_tsf32_toggle(rtwdev, rtwvif, act);
+- rtw89_fw_h2c_p2p_act(rtwdev, vif, desc, act, noa_id);
++ rtw89_tsf32_toggle(rtwdev, rtwvif_link, act);
++ rtw89_fw_h2c_p2p_act(rtwdev, rtwvif_link, bss_conf,
++ desc, act, noa_id);
+ }
+- rtwvif->last_noa_nr = noa_id;
++ rtwvif_link->last_noa_nr = noa_id;
+ }
+
+-void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif)
++void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf)
+ {
+- rtw89_p2p_disable_all_noa(rtwdev, vif);
+- rtw89_p2p_update_noa(rtwdev, vif);
++ rtw89_p2p_disable_all_noa(rtwdev, rtwvif_link, bss_conf);
++ rtw89_p2p_update_noa(rtwdev, rtwvif_link, bss_conf);
+ }
+
+ void rtw89_recalc_lps(struct rtw89_dev *rtwdev)
+@@ -265,6 +282,12 @@ void rtw89_recalc_lps(struct rtw89_dev *rtwdev)
+ enum rtw89_entity_mode mode;
+ int count = 0;
+
++ /* FIXME: Fix rtw89_enter_lps() and __rtw89_enter_ps_mode()
++ * to take MLO cases into account before doing the following.
++ */
++ if (rtwdev->support_mlo)
++ goto disable_lps;
++
+ mode = rtw89_get_entity_mode(rtwdev);
+ if (mode == RTW89_ENTITY_MODE_MCC)
+ goto disable_lps;
+@@ -291,9 +314,9 @@ void rtw89_recalc_lps(struct rtw89_dev *rtwdev)
+ rtwdev->lps_enabled = false;
+ }
+
+-void rtw89_p2p_noa_renew(struct rtw89_vif *rtwvif)
++void rtw89_p2p_noa_renew(struct rtw89_vif_link *rtwvif_link)
+ {
+- struct rtw89_p2p_noa_setter *setter = &rtwvif->p2p_noa;
++ struct rtw89_p2p_noa_setter *setter = &rtwvif_link->p2p_noa;
+ struct rtw89_p2p_noa_ie *ie = &setter->ie;
+ struct rtw89_p2p_ie_head *p2p_head = &ie->p2p_head;
+ struct rtw89_noa_attr_head *noa_head = &ie->noa_head;
+@@ -318,10 +341,10 @@ void rtw89_p2p_noa_renew(struct rtw89_vif *rtwvif)
+ noa_head->oppps_ctwindow = 0;
+ }
+
+-void rtw89_p2p_noa_append(struct rtw89_vif *rtwvif,
++void rtw89_p2p_noa_append(struct rtw89_vif_link *rtwvif_link,
+ const struct ieee80211_p2p_noa_desc *desc)
+ {
+- struct rtw89_p2p_noa_setter *setter = &rtwvif->p2p_noa;
++ struct rtw89_p2p_noa_setter *setter = &rtwvif_link->p2p_noa;
+ struct rtw89_p2p_noa_ie *ie = &setter->ie;
+ struct rtw89_p2p_ie_head *p2p_head = &ie->p2p_head;
+ struct rtw89_noa_attr_head *noa_head = &ie->noa_head;
+@@ -338,9 +361,9 @@ void rtw89_p2p_noa_append(struct rtw89_vif *rtwvif,
+ ie->noa_desc[setter->noa_count++] = *desc;
+ }
+
+-u8 rtw89_p2p_noa_fetch(struct rtw89_vif *rtwvif, void **data)
++u8 rtw89_p2p_noa_fetch(struct rtw89_vif_link *rtwvif_link, void **data)
+ {
+- struct rtw89_p2p_noa_setter *setter = &rtwvif->p2p_noa;
++ struct rtw89_p2p_noa_setter *setter = &rtwvif_link->p2p_noa;
+ struct rtw89_p2p_noa_ie *ie = &setter->ie;
+ void *tail;
+
+diff --git a/drivers/net/wireless/realtek/rtw89/ps.h b/drivers/net/wireless/realtek/rtw89/ps.h
+index 54486e4550b61e..cdd712966b09d9 100644
+--- a/drivers/net/wireless/realtek/rtw89/ps.h
++++ b/drivers/net/wireless/realtek/rtw89/ps.h
+@@ -5,21 +5,23 @@
+ #ifndef __RTW89_PS_H_
+ #define __RTW89_PS_H_
+
+-void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++void rtw89_enter_lps(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool ps_mode);
+ void rtw89_leave_lps(struct rtw89_dev *rtwdev);
+ void __rtw89_leave_ps_mode(struct rtw89_dev *rtwdev);
+-void __rtw89_enter_ps_mode(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif);
++void __rtw89_enter_ps_mode(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link);
+ void rtw89_leave_ps_mode(struct rtw89_dev *rtwdev);
+ void rtw89_enter_ips(struct rtw89_dev *rtwdev);
+ void rtw89_leave_ips(struct rtw89_dev *rtwdev);
+ void rtw89_set_coex_ctrl_lps(struct rtw89_dev *rtwdev, bool btc_ctrl);
+-void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev, struct ieee80211_vif *vif);
++void rtw89_process_p2p_ps(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ struct ieee80211_bss_conf *bss_conf);
+ void rtw89_recalc_lps(struct rtw89_dev *rtwdev);
+-void rtw89_p2p_noa_renew(struct rtw89_vif *rtwvif);
+-void rtw89_p2p_noa_append(struct rtw89_vif *rtwvif,
++void rtw89_p2p_noa_renew(struct rtw89_vif_link *rtwvif_link);
++void rtw89_p2p_noa_append(struct rtw89_vif_link *rtwvif_link,
+ const struct ieee80211_p2p_noa_desc *desc);
+-u8 rtw89_p2p_noa_fetch(struct rtw89_vif *rtwvif, void **data);
++u8 rtw89_p2p_noa_fetch(struct rtw89_vif_link *rtwvif_link, void **data);
+
+ static inline void rtw89_leave_ips_by_hwflags(struct rtw89_dev *rtwdev)
+ {
+diff --git a/drivers/net/wireless/realtek/rtw89/regd.c b/drivers/net/wireless/realtek/rtw89/regd.c
+index a7720a1f17a743..bb064a086970bb 100644
+--- a/drivers/net/wireless/realtek/rtw89/regd.c
++++ b/drivers/net/wireless/realtek/rtw89/regd.c
+@@ -793,22 +793,26 @@ static bool __rtw89_reg_6ghz_tpe_recalc(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_regulatory_info *regulatory = &rtwdev->regulatory;
+ struct rtw89_reg_6ghz_tpe new = {};
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+ bool changed = false;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+ const struct rtw89_reg_6ghz_tpe *tmp;
+ const struct rtw89_chan *chan;
+
+- chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
+- if (chan->band_type != RTW89_BAND_6G)
+- continue;
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) {
++ chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
++ if (chan->band_type != RTW89_BAND_6G)
++ continue;
+
+- tmp = &rtwvif->reg_6ghz_tpe;
+- if (!tmp->valid)
+- continue;
++ tmp = &rtwvif_link->reg_6ghz_tpe;
++ if (!tmp->valid)
++ continue;
+
+- tpe_intersect_constraint(&new, tmp->constraint);
++ tpe_intersect_constraint(&new, tmp->constraint);
++ }
+ }
+
+ if (memcmp(&regulatory->reg_6ghz_tpe, &new,
+@@ -831,19 +835,24 @@ static bool __rtw89_reg_6ghz_tpe_recalc(struct rtw89_dev *rtwdev)
+ }
+
+ static int rtw89_reg_6ghz_tpe_recalc(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool active,
++ struct rtw89_vif_link *rtwvif_link, bool active,
+ unsigned int *changed)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
+- struct ieee80211_bss_conf *bss_conf = &vif->bss_conf;
+- struct rtw89_reg_6ghz_tpe *tpe = &rtwvif->reg_6ghz_tpe;
++ struct rtw89_reg_6ghz_tpe *tpe = &rtwvif_link->reg_6ghz_tpe;
++ struct ieee80211_bss_conf *bss_conf;
+
+ memset(tpe, 0, sizeof(*tpe));
+
+- if (!active || rtwvif->reg_6ghz_power != RTW89_REG_6GHZ_POWER_STD)
++ if (!active || rtwvif_link->reg_6ghz_power != RTW89_REG_6GHZ_POWER_STD)
+ goto bottom;
+
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
+ rtw89_calculate_tpe(rtwdev, tpe, &bss_conf->tpe);
++
++ rcu_read_unlock();
++
+ if (!tpe->valid)
+ goto bottom;
+
+@@ -867,20 +876,24 @@ static bool __rtw89_reg_6ghz_power_recalc(struct rtw89_dev *rtwdev)
+ const struct rtw89_regd *regd = regulatory->regd;
+ enum rtw89_reg_6ghz_power sel;
+ const struct rtw89_chan *chan;
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
++ unsigned int link_id;
+ int count = 0;
+ u8 index;
+
+ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+- chan = rtw89_chan_get(rtwdev, rtwvif->chanctx_idx);
+- if (chan->band_type != RTW89_BAND_6G)
+- continue;
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) {
++ chan = rtw89_chan_get(rtwdev, rtwvif_link->chanctx_idx);
++ if (chan->band_type != RTW89_BAND_6G)
++ continue;
+
+- if (count != 0 && rtwvif->reg_6ghz_power == sel)
+- continue;
++ if (count != 0 && rtwvif_link->reg_6ghz_power == sel)
++ continue;
+
+- sel = rtwvif->reg_6ghz_power;
+- count++;
++ sel = rtwvif_link->reg_6ghz_power;
++ count++;
++ }
+ }
+
+ if (count != 1)
+@@ -908,35 +921,41 @@ static bool __rtw89_reg_6ghz_power_recalc(struct rtw89_dev *rtwdev)
+ }
+
+ static int rtw89_reg_6ghz_power_recalc(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif, bool active,
++ struct rtw89_vif_link *rtwvif_link, bool active,
+ unsigned int *changed)
+ {
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_bss_conf *bss_conf;
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
+
+ if (active) {
+- switch (vif->bss_conf.power_type) {
++ switch (bss_conf->power_type) {
+ case IEEE80211_REG_VLP_AP:
+- rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_VLP;
++ rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_VLP;
+ break;
+ case IEEE80211_REG_LPI_AP:
+- rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_LPI;
++ rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_LPI;
+ break;
+ case IEEE80211_REG_SP_AP:
+- rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_STD;
++ rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_STD;
+ break;
+ default:
+- rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT;
++ rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT;
+ break;
+ }
+ } else {
+- rtwvif->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT;
++ rtwvif_link->reg_6ghz_power = RTW89_REG_6GHZ_POWER_DFLT;
+ }
+
++ rcu_read_unlock();
++
+ *changed += __rtw89_reg_6ghz_power_recalc(rtwdev);
+ return 0;
+ }
+
+-int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif_link *rtwvif_link,
+ bool active)
+ {
+ unsigned int changed = 0;
+@@ -948,11 +967,11 @@ int rtw89_reg_6ghz_recalc(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
+ * so must do reg_6ghz_tpe_recalc() after reg_6ghz_power_recalc().
+ */
+
+- ret = rtw89_reg_6ghz_power_recalc(rtwdev, rtwvif, active, &changed);
++ ret = rtw89_reg_6ghz_power_recalc(rtwdev, rtwvif_link, active, &changed);
+ if (ret)
+ return ret;
+
+- ret = rtw89_reg_6ghz_tpe_recalc(rtwdev, rtwvif, active, &changed);
++ ret = rtw89_reg_6ghz_tpe_recalc(rtwdev, rtwvif_link, active, &changed);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8851b.c b/drivers/net/wireless/realtek/rtw89/rtw8851b.c
+index 1679bd408ef3f3..f9766bf30e71df 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8851b.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8851b.c
+@@ -1590,10 +1590,11 @@ static void rtw8851b_rfk_init(struct rtw89_dev *rtwdev)
+ rtw8851b_rx_dck(rtwdev, RTW89_PHY_0, RTW89_CHANCTX_0);
+ }
+
+-static void rtw8851b_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw8851b_rfk_channel(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx;
+- enum rtw89_phy_idx phy_idx = rtwvif->phy_idx;
++ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
++ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
+ rtw8851b_rx_dck(rtwdev, phy_idx, chanctx_idx);
+ rtw8851b_iqk(rtwdev, phy_idx, chanctx_idx);
+@@ -1608,10 +1609,12 @@ static void rtw8851b_rfk_band_changed(struct rtw89_dev *rtwdev,
+ rtw8851b_tssi_scan(rtwdev, phy_idx, chan);
+ }
+
+-static void rtw8851b_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw8851b_rfk_scan(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool start)
+ {
+- rtw8851b_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx, rtwvif->chanctx_idx);
++ rtw8851b_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx,
++ rtwvif_link->chanctx_idx);
+ }
+
+ static void rtw8851b_rfk_track(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852a.c b/drivers/net/wireless/realtek/rtw89/rtw8852a.c
+index dde96bd63021ff..42d369d2e916a6 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852a.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852a.c
+@@ -1350,10 +1350,11 @@ static void rtw8852a_rfk_init(struct rtw89_dev *rtwdev)
+ rtw8852a_rx_dck(rtwdev, RTW89_PHY_0, true, RTW89_CHANCTX_0);
+ }
+
+-static void rtw8852a_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw8852a_rfk_channel(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx;
+- enum rtw89_phy_idx phy_idx = rtwvif->phy_idx;
++ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
++ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
+ rtw8852a_rx_dck(rtwdev, phy_idx, true, chanctx_idx);
+ rtw8852a_iqk(rtwdev, phy_idx, chanctx_idx);
+@@ -1368,10 +1369,11 @@ static void rtw8852a_rfk_band_changed(struct rtw89_dev *rtwdev,
+ rtw8852a_tssi_scan(rtwdev, phy_idx, chan);
+ }
+
+-static void rtw8852a_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw8852a_rfk_scan(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool start)
+ {
+- rtw8852a_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx);
++ rtw8852a_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx);
+ }
+
+ static void rtw8852a_rfk_track(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852b.c b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+index 12be52f76427a1..364aa21cbd446f 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852b.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852b.c
+@@ -562,10 +562,11 @@ static void rtw8852b_rfk_init(struct rtw89_dev *rtwdev)
+ rtw8852b_rx_dck(rtwdev, RTW89_PHY_0, RTW89_CHANCTX_0);
+ }
+
+-static void rtw8852b_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw8852b_rfk_channel(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx;
+- enum rtw89_phy_idx phy_idx = rtwvif->phy_idx;
++ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
++ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
+ rtw8852b_rx_dck(rtwdev, phy_idx, chanctx_idx);
+ rtw8852b_iqk(rtwdev, phy_idx, chanctx_idx);
+@@ -580,10 +581,12 @@ static void rtw8852b_rfk_band_changed(struct rtw89_dev *rtwdev,
+ rtw8852b_tssi_scan(rtwdev, phy_idx, chan);
+ }
+
+-static void rtw8852b_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw8852b_rfk_scan(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool start)
+ {
+- rtw8852b_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx, rtwvif->chanctx_idx);
++ rtw8852b_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx,
++ rtwvif_link->chanctx_idx);
+ }
+
+ static void rtw8852b_rfk_track(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852bt.c b/drivers/net/wireless/realtek/rtw89/rtw8852bt.c
+index 7dfdcb5964e117..dab7e71ec6a140 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852bt.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852bt.c
+@@ -535,10 +535,11 @@ static void rtw8852bt_rfk_init(struct rtw89_dev *rtwdev)
+ rtw8852bt_rx_dck(rtwdev, RTW89_PHY_0, RTW89_CHANCTX_0);
+ }
+
+-static void rtw8852bt_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw8852bt_rfk_channel(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx;
+- enum rtw89_phy_idx phy_idx = rtwvif->phy_idx;
++ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
++ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
+ rtw8852bt_rx_dck(rtwdev, phy_idx, chanctx_idx);
+ rtw8852bt_iqk(rtwdev, phy_idx, chanctx_idx);
+@@ -553,10 +554,12 @@ static void rtw8852bt_rfk_band_changed(struct rtw89_dev *rtwdev,
+ rtw8852bt_tssi_scan(rtwdev, phy_idx, chan);
+ }
+
+-static void rtw8852bt_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw8852bt_rfk_scan(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool start)
+ {
+- rtw8852bt_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx, rtwvif->chanctx_idx);
++ rtw8852bt_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx,
++ rtwvif_link->chanctx_idx);
+ }
+
+ static void rtw8852bt_rfk_track(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8852c.c b/drivers/net/wireless/realtek/rtw89/rtw8852c.c
+index 1c6e89ab0f4bcb..dbe77abb2c488f 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8852c.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8852c.c
+@@ -1846,10 +1846,11 @@ static void rtw8852c_rfk_init(struct rtw89_dev *rtwdev)
+ rtw8852c_rx_dck(rtwdev, RTW89_PHY_0, false);
+ }
+
+-static void rtw8852c_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw8852c_rfk_channel(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx;
+- enum rtw89_phy_idx phy_idx = rtwvif->phy_idx;
++ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
++ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+
+ rtw8852c_mcc_get_ch_info(rtwdev, phy_idx);
+ rtw8852c_rx_dck(rtwdev, phy_idx, false);
+@@ -1866,10 +1867,11 @@ static void rtw8852c_rfk_band_changed(struct rtw89_dev *rtwdev,
+ rtw8852c_tssi_scan(rtwdev, phy_idx, chan);
+ }
+
+-static void rtw8852c_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw8852c_rfk_scan(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool start)
+ {
+- rtw8852c_wifi_scan_notify(rtwdev, start, rtwvif->phy_idx);
++ rtw8852c_wifi_scan_notify(rtwdev, start, rtwvif_link->phy_idx);
+ }
+
+ static void rtw8852c_rfk_track(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/realtek/rtw89/rtw8922a.c b/drivers/net/wireless/realtek/rtw89/rtw8922a.c
+index 63b1ff2f98ed31..ef7747adbcc2b8 100644
+--- a/drivers/net/wireless/realtek/rtw89/rtw8922a.c
++++ b/drivers/net/wireless/realtek/rtw89/rtw8922a.c
+@@ -2020,11 +2020,12 @@ static void _wait_rx_mode(struct rtw89_dev *rtwdev, u8 kpath)
+ }
+ }
+
+-static void rtw8922a_rfk_channel(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw8922a_rfk_channel(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+- enum rtw89_chanctx_idx chanctx_idx = rtwvif->chanctx_idx;
++ enum rtw89_chanctx_idx chanctx_idx = rtwvif_link->chanctx_idx;
+ const struct rtw89_chan *chan = rtw89_chan_get(rtwdev, chanctx_idx);
+- enum rtw89_phy_idx phy_idx = rtwvif->phy_idx;
++ enum rtw89_phy_idx phy_idx = rtwvif_link->phy_idx;
+ u8 phy_map = rtw89_btc_phymap(rtwdev, phy_idx, RF_AB, chanctx_idx);
+ u32 tx_en;
+
+@@ -2050,7 +2051,8 @@ static void rtw8922a_rfk_band_changed(struct rtw89_dev *rtwdev,
+ rtw89_phy_rfk_tssi_and_wait(rtwdev, phy_idx, chan, RTW89_TSSI_SCAN, 6);
+ }
+
+-static void rtw8922a_rfk_scan(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif,
++static void rtw8922a_rfk_scan(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
+ bool start)
+ {
+ }
+diff --git a/drivers/net/wireless/realtek/rtw89/ser.c b/drivers/net/wireless/realtek/rtw89/ser.c
+index 5fc2faa9ba5a7e..7b203bb7f151a7 100644
+--- a/drivers/net/wireless/realtek/rtw89/ser.c
++++ b/drivers/net/wireless/realtek/rtw89/ser.c
+@@ -300,37 +300,54 @@ static void drv_resume_rx(struct rtw89_ser *ser)
+
+ static void ser_reset_vif(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ {
+- rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif->port);
+- rtwvif->net_type = RTW89_NET_TYPE_NO_LINK;
+- rtwvif->trigger = false;
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
++
+ rtwvif->tdls_peer = 0;
++
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id) {
++ rtw89_core_release_bit_map(rtwdev->hw_port, rtwvif_link->port);
++ rtwvif_link->net_type = RTW89_NET_TYPE_NO_LINK;
++ rtwvif_link->trigger = false;
++ }
+ }
+
+ static void ser_sta_deinit_cam_iter(void *data, struct ieee80211_sta *sta)
+ {
+ struct rtw89_vif *target_rtwvif = (struct rtw89_vif *)data;
+- struct rtw89_sta *rtwsta = (struct rtw89_sta *)sta->drv_priv;
++ struct rtw89_sta *rtwsta = sta_to_rtwsta(sta);
+ struct rtw89_vif *rtwvif = rtwsta->rtwvif;
+ struct rtw89_dev *rtwdev = rtwvif->rtwdev;
++ struct rtw89_vif_link *rtwvif_link;
++ struct rtw89_sta_link *rtwsta_link;
++ unsigned int link_id;
+
+ if (rtwvif != target_rtwvif)
+ return;
+
+- if (rtwvif->net_type == RTW89_NET_TYPE_AP_MODE || sta->tdls)
+- rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta->addr_cam);
+- if (sta->tdls)
+- rtw89_cam_deinit_bssid_cam(rtwdev, &rtwsta->bssid_cam);
++ rtw89_sta_for_each_link(rtwsta, rtwsta_link, link_id) {
++ rtwvif_link = rtwsta_link->rtwvif_link;
+
+- INIT_LIST_HEAD(&rtwsta->ba_cam_list);
++ if (rtwvif_link->net_type == RTW89_NET_TYPE_AP_MODE || sta->tdls)
++ rtw89_cam_deinit_addr_cam(rtwdev, &rtwsta_link->addr_cam);
++ if (sta->tdls)
++ rtw89_cam_deinit_bssid_cam(rtwdev, &rtwsta_link->bssid_cam);
++
++ INIT_LIST_HEAD(&rtwsta_link->ba_cam_list);
++ }
+ }
+
+ static void ser_deinit_cam(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ {
++ struct rtw89_vif_link *rtwvif_link;
++ unsigned int link_id;
++
+ ieee80211_iterate_stations_atomic(rtwdev->hw,
+ ser_sta_deinit_cam_iter,
+ rtwvif);
+
+- rtw89_cam_deinit(rtwdev, rtwvif);
++ rtw89_vif_for_each_link(rtwvif, rtwvif_link, link_id)
++ rtw89_cam_deinit(rtwdev, rtwvif_link);
+
+ bitmap_zero(rtwdev->cam_info.ba_cam_map, RTW89_MAX_BA_CAM_NUM);
+ }
+diff --git a/drivers/net/wireless/realtek/rtw89/wow.c b/drivers/net/wireless/realtek/rtw89/wow.c
+index 86e24e07780d9b..3e81fd974ec180 100644
+--- a/drivers/net/wireless/realtek/rtw89/wow.c
++++ b/drivers/net/wireless/realtek/rtw89/wow.c
+@@ -421,7 +421,8 @@ static void rtw89_wow_construct_key_info(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct rtw89_wow_key_info *key_info = &rtw_wow->key_info;
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
++ struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link);
+ bool err = false;
+
+ rcu_read_lock();
+@@ -596,7 +597,8 @@ static int rtw89_wow_get_aoac_rpt(struct rtw89_dev *rtwdev, bool rx_ready)
+ static struct ieee80211_key_conf *rtw89_wow_gtk_rekey(struct rtw89_dev *rtwdev,
+ u32 cipher, u8 keyidx, u8 *gtk)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
++ struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link);
+ const struct rtw89_cipher_info *cipher_info;
+ struct ieee80211_key_conf *rekey_conf;
+ struct ieee80211_key_conf *key;
+@@ -632,11 +634,13 @@ static struct ieee80211_key_conf *rtw89_wow_gtk_rekey(struct rtw89_dev *rtwdev,
+
+ static void rtw89_wow_update_key_info(struct rtw89_dev *rtwdev, bool rx_ready)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
++ struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link);
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct rtw89_wow_aoac_report *aoac_rpt = &rtw_wow->aoac_rpt;
+ struct rtw89_set_key_info_iter_data data = {.error = false,
+ .rx_ready = rx_ready};
++ struct ieee80211_bss_conf *bss_conf;
+ struct ieee80211_key_conf *key;
+
+ rcu_read_lock();
+@@ -669,9 +673,15 @@ static void rtw89_wow_update_key_info(struct rtw89_dev *rtwdev, bool rx_ready)
+ return;
+
+ rtw89_rx_pn_set_pmf(rtwdev, key, aoac_rpt->igtk_ipn);
+- ieee80211_gtk_rekey_notify(wow_vif, wow_vif->bss_conf.bssid,
++
++ rcu_read_lock();
++
++ bss_conf = rtw89_vif_rcu_dereference_link(rtwvif_link, true);
++ ieee80211_gtk_rekey_notify(wow_vif, bss_conf->bssid,
+ aoac_rpt->eapol_key_replay_count,
+- GFP_KERNEL);
++ GFP_ATOMIC);
++
++ rcu_read_unlock();
+ }
+
+ static void rtw89_wow_leave_deep_ps(struct rtw89_dev *rtwdev)
+@@ -681,27 +691,24 @@ static void rtw89_wow_leave_deep_ps(struct rtw89_dev *rtwdev)
+
+ static void rtw89_wow_enter_deep_ps(struct rtw89_dev *rtwdev)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+
+- __rtw89_enter_ps_mode(rtwdev, rtwvif);
++ __rtw89_enter_ps_mode(rtwdev, rtwvif_link);
+ }
+
+ static void rtw89_wow_enter_ps(struct rtw89_dev *rtwdev)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+
+ if (rtw89_wow_mgd_linked(rtwdev))
+- rtw89_enter_lps(rtwdev, rtwvif, false);
++ rtw89_enter_lps(rtwdev, rtwvif_link, false);
+ else if (rtw89_wow_no_link(rtwdev))
+- rtw89_fw_h2c_fwips(rtwdev, rtwvif, true);
++ rtw89_fw_h2c_fwips(rtwdev, rtwvif_link, true);
+ }
+
+ static void rtw89_wow_leave_ps(struct rtw89_dev *rtwdev, bool enable_wow)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+
+ if (rtw89_wow_mgd_linked(rtwdev)) {
+ rtw89_leave_lps(rtwdev);
+@@ -709,7 +716,7 @@ static void rtw89_wow_leave_ps(struct rtw89_dev *rtwdev, bool enable_wow)
+ if (enable_wow)
+ rtw89_leave_ips(rtwdev);
+ else
+- rtw89_fw_h2c_fwips(rtwdev, rtwvif, false);
++ rtw89_fw_h2c_fwips(rtwdev, rtwvif_link, false);
+ }
+ }
+
+@@ -734,6 +741,8 @@ static void rtw89_wow_set_rx_filter(struct rtw89_dev *rtwdev, bool enable)
+
+ static void rtw89_wow_show_wakeup_reason(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
++ struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link);
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct rtw89_wow_aoac_report *aoac_rpt = &rtw_wow->aoac_rpt;
+ struct cfg80211_wowlan_nd_info nd_info;
+@@ -780,35 +789,34 @@ static void rtw89_wow_show_wakeup_reason(struct rtw89_dev *rtwdev)
+ break;
+ default:
+ rtw89_warn(rtwdev, "Unknown wakeup reason %x\n", reason);
+- ieee80211_report_wowlan_wakeup(rtwdev->wow.wow_vif, NULL,
+- GFP_KERNEL);
++ ieee80211_report_wowlan_wakeup(wow_vif, NULL, GFP_KERNEL);
+ return;
+ }
+
+- ieee80211_report_wowlan_wakeup(rtwdev->wow.wow_vif, &wakeup,
+- GFP_KERNEL);
++ ieee80211_report_wowlan_wakeup(wow_vif, &wakeup, GFP_KERNEL);
+ }
+
+-static void rtw89_wow_vif_iter(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
++static void rtw89_wow_vif_iter(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *vif = rtwvif_to_vif(rtwvif);
++ struct ieee80211_vif *vif = rtwvif_link_to_vif(rtwvif_link);
+
+ /* Current WoWLAN function support setting of only vif in
+ * infra mode or no link mode. When one suitable vif is found,
+ * stop the iteration.
+ */
+- if (rtw_wow->wow_vif || vif->type != NL80211_IFTYPE_STATION)
++ if (rtw_wow->rtwvif_link || vif->type != NL80211_IFTYPE_STATION)
+ return;
+
+- switch (rtwvif->net_type) {
++ switch (rtwvif_link->net_type) {
+ case RTW89_NET_TYPE_INFRA:
+ if (rtw_wow_has_mgd_features(rtwdev))
+- rtw_wow->wow_vif = vif;
++ rtw_wow->rtwvif_link = rtwvif_link;
+ break;
+ case RTW89_NET_TYPE_NO_LINK:
+ if (rtw_wow->pno_inited)
+- rtw_wow->wow_vif = vif;
++ rtw_wow->rtwvif_link = rtwvif_link;
+ break;
+ default:
+ break;
+@@ -865,7 +873,7 @@ static u16 rtw89_calc_crc(u8 *pdata, int length)
+ return ~crc;
+ }
+
+-static int rtw89_wow_pattern_get_type(struct rtw89_vif *rtwvif,
++static int rtw89_wow_pattern_get_type(struct rtw89_vif_link *rtwvif_link,
+ struct rtw89_wow_cam_info *rtw_pattern,
+ const u8 *pattern, u8 da_mask)
+ {
+@@ -885,7 +893,7 @@ static int rtw89_wow_pattern_get_type(struct rtw89_vif *rtwvif,
+ rtw_pattern->bc = true;
+ else if (is_multicast_ether_addr(da))
+ rtw_pattern->mc = true;
+- else if (ether_addr_equal(da, rtwvif->mac_addr) &&
++ else if (ether_addr_equal(da, rtwvif_link->mac_addr) &&
+ da_mask == GENMASK(5, 0))
+ rtw_pattern->uc = true;
+ else if (!da_mask) /*da_mask == 0 mean wildcard*/
+@@ -897,7 +905,7 @@ static int rtw89_wow_pattern_get_type(struct rtw89_vif *rtwvif,
+ }
+
+ static int rtw89_wow_pattern_generate(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ const struct cfg80211_pkt_pattern *pkt_pattern,
+ struct rtw89_wow_cam_info *rtw_pattern)
+ {
+@@ -916,7 +924,7 @@ static int rtw89_wow_pattern_generate(struct rtw89_dev *rtwdev,
+ mask_len = DIV_ROUND_UP(len, 8);
+ memset(rtw_pattern, 0, sizeof(*rtw_pattern));
+
+- ret = rtw89_wow_pattern_get_type(rtwvif, rtw_pattern, pattern,
++ ret = rtw89_wow_pattern_get_type(rtwvif_link, rtw_pattern, pattern,
+ mask[0] & GENMASK(5, 0));
+ if (ret)
+ return ret;
+@@ -970,7 +978,7 @@ static int rtw89_wow_pattern_generate(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_wow_parse_patterns(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif,
++ struct rtw89_vif_link *rtwvif_link,
+ struct cfg80211_wowlan *wowlan)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+@@ -983,7 +991,7 @@ static int rtw89_wow_parse_patterns(struct rtw89_dev *rtwdev,
+
+ for (i = 0; i < wowlan->n_patterns; i++) {
+ rtw_pattern = &rtw_wow->patterns[i];
+- ret = rtw89_wow_pattern_generate(rtwdev, rtwvif,
++ ret = rtw89_wow_pattern_generate(rtwdev, rtwvif_link,
+ &wowlan->patterns[i],
+ rtw_pattern);
+ if (ret) {
+@@ -1040,7 +1048,7 @@ static void rtw89_wow_clear_wakeups(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+
+- rtw_wow->wow_vif = NULL;
++ rtw_wow->rtwvif_link = NULL;
+ rtw89_core_release_all_bits_map(rtw_wow->flags, RTW89_WOW_FLAG_NUM);
+ rtw_wow->pattern_cnt = 0;
+ rtw_wow->pno_inited = false;
+@@ -1066,6 +1074,7 @@ static int rtw89_wow_set_wakeups(struct rtw89_dev *rtwdev,
+ struct cfg80211_wowlan *wowlan)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
++ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
+
+ if (wowlan->disconnect)
+@@ -1078,36 +1087,40 @@ static int rtw89_wow_set_wakeups(struct rtw89_dev *rtwdev,
+ if (wowlan->nd_config)
+ rtw89_wow_init_pno(rtwdev, wowlan->nd_config);
+
+- rtw89_for_each_rtwvif(rtwdev, rtwvif)
+- rtw89_wow_vif_iter(rtwdev, rtwvif);
++ rtw89_for_each_rtwvif(rtwdev, rtwvif) {
++ /* use the link on HW-0 to do wow flow */
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (!rtwvif_link)
++ continue;
++
++ rtw89_wow_vif_iter(rtwdev, rtwvif_link);
++ }
+
+- if (!rtw_wow->wow_vif)
++ rtwvif_link = rtw_wow->rtwvif_link;
++ if (!rtwvif_link)
+ return -EPERM;
+
+- rtwvif = (struct rtw89_vif *)rtw_wow->wow_vif->drv_priv;
+- return rtw89_wow_parse_patterns(rtwdev, rtwvif, wowlan);
++ return rtw89_wow_parse_patterns(rtwdev, rtwvif_link, wowlan);
+ }
+
+ static int rtw89_wow_cfg_wake_pno(struct rtw89_dev *rtwdev, bool wow)
+ {
+- struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+ int ret;
+
+- ret = rtw89_fw_h2c_cfg_pno(rtwdev, rtwvif, true);
++ ret = rtw89_fw_h2c_cfg_pno(rtwdev, rtwvif_link, true);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to config pno\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_wow_wakeup_ctrl(rtwdev, rtwvif, wow);
++ ret = rtw89_fw_h2c_wow_wakeup_ctrl(rtwdev, rtwvif_link, wow);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to fw wow wakeup ctrl\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_wow_global(rtwdev, rtwvif, wow);
++ ret = rtw89_fw_h2c_wow_global(rtwdev, rtwvif_link, wow);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to fw wow global\n");
+ return ret;
+@@ -1119,34 +1132,39 @@ static int rtw89_wow_cfg_wake_pno(struct rtw89_dev *rtwdev, bool wow)
+ static int rtw89_wow_cfg_wake(struct rtw89_dev *rtwdev, bool wow)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtw_wow->rtwvif_link;
++ struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link);
+ struct ieee80211_sta *wow_sta;
+- struct rtw89_sta *rtwsta = NULL;
++ struct rtw89_sta_link *rtwsta_link = NULL;
++ struct rtw89_sta *rtwsta;
+ int ret;
+
+- wow_sta = ieee80211_find_sta(wow_vif, rtwvif->bssid);
+- if (wow_sta)
+- rtwsta = (struct rtw89_sta *)wow_sta->drv_priv;
++ wow_sta = ieee80211_find_sta(wow_vif, wow_vif->cfg.ap_addr);
++ if (wow_sta) {
++ rtwsta = sta_to_rtwsta(wow_sta);
++ rtwsta_link = rtwsta->links[rtwvif_link->link_id];
++ if (!rtwsta_link)
++ return -ENOLINK;
++ }
+
+ if (wow) {
+ if (rtw_wow->pattern_cnt)
+- rtwvif->wowlan_pattern = true;
++ rtwvif_link->wowlan_pattern = true;
+ if (test_bit(RTW89_WOW_FLAG_EN_MAGIC_PKT, rtw_wow->flags))
+- rtwvif->wowlan_magic = true;
++ rtwvif_link->wowlan_magic = true;
+ } else {
+- rtwvif->wowlan_pattern = false;
+- rtwvif->wowlan_magic = false;
++ rtwvif_link->wowlan_pattern = false;
++ rtwvif_link->wowlan_magic = false;
+ }
+
+- ret = rtw89_fw_h2c_wow_wakeup_ctrl(rtwdev, rtwvif, wow);
++ ret = rtw89_fw_h2c_wow_wakeup_ctrl(rtwdev, rtwvif_link, wow);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to fw wow wakeup ctrl\n");
+ return ret;
+ }
+
+ if (wow) {
+- ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif, rtwsta);
++ ret = rtw89_chip_h2c_dctl_sec_cam(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to update dctl cam sec entry: %d\n",
+ ret);
+@@ -1154,13 +1172,13 @@ static int rtw89_wow_cfg_wake(struct rtw89_dev *rtwdev, bool wow)
+ }
+ }
+
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c cam\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_wow_global(rtwdev, rtwvif, wow);
++ ret = rtw89_fw_h2c_wow_global(rtwdev, rtwvif_link, wow);
+ if (ret) {
+ rtw89_err(rtwdev, "failed to fw wow global\n");
+ return ret;
+@@ -1190,25 +1208,30 @@ static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow)
+ enum rtw89_fw_type fw_type = wow ? RTW89_FW_WOWLAN : RTW89_FW_NORMAL;
+ enum rtw89_chip_gen chip_gen = rtwdev->chip->chip_gen;
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtw_wow->rtwvif_link;
++ struct ieee80211_vif *wow_vif = rtwvif_link_to_vif(rtwvif_link);
+ enum rtw89_core_chip_id chip_id = rtwdev->chip->chip_id;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ bool include_bb = !!chip->bbmcu_nr;
+ bool disable_intr_for_dlfw = false;
+ struct ieee80211_sta *wow_sta;
+- struct rtw89_sta *rtwsta = NULL;
++ struct rtw89_sta_link *rtwsta_link = NULL;
++ struct rtw89_sta *rtwsta;
+ bool is_conn = true;
+ int ret;
+
+ if (chip_id == RTL8852C || chip_id == RTL8922A)
+ disable_intr_for_dlfw = true;
+
+- wow_sta = ieee80211_find_sta(wow_vif, rtwvif->bssid);
+- if (wow_sta)
+- rtwsta = (struct rtw89_sta *)wow_sta->drv_priv;
+- else
++ wow_sta = ieee80211_find_sta(wow_vif, wow_vif->cfg.ap_addr);
++ if (wow_sta) {
++ rtwsta = sta_to_rtwsta(wow_sta);
++ rtwsta_link = rtwsta->links[rtwvif_link->link_id];
++ if (!rtwsta_link)
++ return -ENOLINK;
++ } else {
+ is_conn = false;
++ }
+
+ if (disable_intr_for_dlfw)
+ rtw89_hci_disable_intr(rtwdev);
+@@ -1224,14 +1247,14 @@ static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow)
+
+ rtw89_phy_init_rf_reg(rtwdev, true);
+
+- ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif, rtwsta,
++ ret = rtw89_fw_h2c_role_maintain(rtwdev, rtwvif_link, rtwsta_link,
+ RTW89_ROLE_FW_RESTORE);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c role maintain\n");
+ return ret;
+ }
+
+- ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, wow_vif, wow_sta);
++ ret = rtw89_chip_h2c_assoc_cmac_tbl(rtwdev, rtwvif_link, rtwsta_link);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c assoc cmac tbl\n");
+ return ret;
+@@ -1240,27 +1263,27 @@ static int rtw89_wow_swap_fw(struct rtw89_dev *rtwdev, bool wow)
+ if (!is_conn)
+ rtw89_cam_reset_keys(rtwdev);
+
+- ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif, rtwsta, !is_conn);
++ ret = rtw89_fw_h2c_join_info(rtwdev, rtwvif_link, rtwsta_link, !is_conn);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c join info\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_cam(rtwdev, rtwvif, rtwsta, NULL);
++ ret = rtw89_fw_h2c_cam(rtwdev, rtwvif_link, rtwsta_link, NULL);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c cam\n");
+ return ret;
+ }
+
+ if (is_conn) {
+- ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif, rtwsta->mac_id);
++ ret = rtw89_fw_h2c_general_pkt(rtwdev, rtwvif_link, rtwsta_link->mac_id);
+ if (ret) {
+ rtw89_warn(rtwdev, "failed to send h2c general packet\n");
+ return ret;
+ }
+- rtw89_phy_ra_assoc(rtwdev, wow_sta);
+- rtw89_phy_set_bss_color(rtwdev, wow_vif);
+- rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, wow_vif);
++ rtw89_phy_ra_assoc(rtwdev, rtwsta_link);
++ rtw89_phy_set_bss_color(rtwdev, rtwvif_link);
++ rtw89_chip_cfg_txpwr_ul_tb_offset(rtwdev, rtwvif_link);
+ }
+
+ if (chip_gen == RTW89_CHIP_BE)
+@@ -1363,21 +1386,20 @@ static int rtw89_wow_disable_trx_pre(struct rtw89_dev *rtwdev)
+
+ static int rtw89_wow_disable_trx_post(struct rtw89_dev *rtwdev)
+ {
+- struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *vif = rtw_wow->wow_vif;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+ int ret;
+
+ ret = rtw89_mac_cfg_ppdu_status(rtwdev, RTW89_MAC_0, true);
+ if (ret)
+ rtw89_err(rtwdev, "cfg ppdu status\n");
+
+- rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, vif, true);
++ rtw89_fw_h2c_set_bcn_fltr_cfg(rtwdev, rtwvif_link, true);
+
+ return ret;
+ }
+
+ static void rtw89_fw_release_pno_pkt_list(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct list_head *pkt_list = &rtw_wow->pno_pkt_list;
+@@ -1391,7 +1413,7 @@ static void rtw89_fw_release_pno_pkt_list(struct rtw89_dev *rtwdev,
+ }
+
+ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev,
+- struct rtw89_vif *rtwvif)
++ struct rtw89_vif_link *rtwvif_link)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+ struct cfg80211_sched_scan_request *nd_config = rtw_wow->nd_config;
+@@ -1401,7 +1423,7 @@ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev,
+ int ret;
+
+ for (i = 0; i < num; i++) {
+- skb = ieee80211_probereq_get(rtwdev->hw, rtwvif->mac_addr,
++ skb = ieee80211_probereq_get(rtwdev->hw, rtwvif_link->mac_addr,
+ nd_config->match_sets[i].ssid.ssid,
+ nd_config->match_sets[i].ssid.ssid_len,
+ nd_config->ie_len);
+@@ -1413,7 +1435,7 @@ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev,
+ info = kzalloc(sizeof(*info), GFP_KERNEL);
+ if (!info) {
+ kfree_skb(skb);
+- rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif);
++ rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif_link);
+ return -ENOMEM;
+ }
+
+@@ -1421,7 +1443,7 @@ static int rtw89_pno_scan_update_probe_req(struct rtw89_dev *rtwdev,
+ if (ret) {
+ kfree_skb(skb);
+ kfree(info);
+- rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif);
++ rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif_link);
+ return ret;
+ }
+
+@@ -1436,20 +1458,19 @@ static int rtw89_pno_scan_offload(struct rtw89_dev *rtwdev, bool enable)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+ int interval = rtw_wow->nd_config->scan_plans[0].interval;
+ struct rtw89_scan_option opt = {};
+ int ret;
+
+ if (enable) {
+- ret = rtw89_pno_scan_update_probe_req(rtwdev, rtwvif);
++ ret = rtw89_pno_scan_update_probe_req(rtwdev, rtwvif_link);
+ if (ret) {
+ rtw89_err(rtwdev, "Update probe request failed\n");
+ return ret;
+ }
+
+- ret = mac->add_chan_list_pno(rtwdev, rtwvif);
++ ret = mac->add_chan_list_pno(rtwdev, rtwvif_link);
+ if (ret) {
+ rtw89_err(rtwdev, "Update channel list failed\n");
+ return ret;
+@@ -1471,7 +1492,7 @@ static int rtw89_pno_scan_offload(struct rtw89_dev *rtwdev, bool enable)
+ opt.opch_end = RTW89_CHAN_INVALID;
+ }
+
+- mac->scan_offload(rtwdev, &opt, rtwvif, true);
++ mac->scan_offload(rtwdev, &opt, rtwvif_link, true);
+
+ return 0;
+ }
+@@ -1479,8 +1500,7 @@ static int rtw89_pno_scan_offload(struct rtw89_dev *rtwdev, bool enable)
+ static int rtw89_wow_fw_start(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtw_wow->rtwvif_link;
+ int ret;
+
+ if (rtw89_wow_no_link(rtwdev)) {
+@@ -1499,25 +1519,25 @@ static int rtw89_wow_fw_start(struct rtw89_dev *rtwdev)
+ rtw89_wow_pattern_write(rtwdev);
+ rtw89_wow_construct_key_info(rtwdev);
+
+- ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif, true);
++ ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif_link, true);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to enable keep alive\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif, true);
++ ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif_link, true);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to enable disconnect detect\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_wow_gtk_ofld(rtwdev, rtwvif, true);
++ ret = rtw89_fw_h2c_wow_gtk_ofld(rtwdev, rtwvif_link, true);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to enable GTK offload\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_arp_offload(rtwdev, rtwvif, true);
++ ret = rtw89_fw_h2c_arp_offload(rtwdev, rtwvif_link, true);
+ if (ret)
+ rtw89_warn(rtwdev, "wow: failed to enable arp offload\n");
+ }
+@@ -1548,8 +1568,7 @@ static int rtw89_wow_fw_start(struct rtw89_dev *rtwdev)
+ static int rtw89_wow_fw_stop(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_wow_param *rtw_wow = &rtwdev->wow;
+- struct ieee80211_vif *wow_vif = rtw_wow->wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtw_wow->rtwvif_link;
+ int ret;
+
+ if (rtw89_wow_no_link(rtwdev)) {
+@@ -1559,35 +1578,35 @@ static int rtw89_wow_fw_stop(struct rtw89_dev *rtwdev)
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_cfg_pno(rtwdev, rtwvif, false);
++ ret = rtw89_fw_h2c_cfg_pno(rtwdev, rtwvif_link, false);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to disable pno\n");
+ return ret;
+ }
+
+- rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif);
++ rtw89_fw_release_pno_pkt_list(rtwdev, rtwvif_link);
+ } else {
+ rtw89_wow_pattern_clear(rtwdev);
+
+- ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif, false);
++ ret = rtw89_fw_h2c_keep_alive(rtwdev, rtwvif_link, false);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to disable keep alive\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif, false);
++ ret = rtw89_fw_h2c_disconnect_detect(rtwdev, rtwvif_link, false);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to disable disconnect detect\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_wow_gtk_ofld(rtwdev, rtwvif, false);
++ ret = rtw89_fw_h2c_wow_gtk_ofld(rtwdev, rtwvif_link, false);
+ if (ret) {
+ rtw89_err(rtwdev, "wow: failed to disable GTK offload\n");
+ return ret;
+ }
+
+- ret = rtw89_fw_h2c_arp_offload(rtwdev, rtwvif, false);
++ ret = rtw89_fw_h2c_arp_offload(rtwdev, rtwvif_link, false);
+ if (ret)
+ rtw89_warn(rtwdev, "wow: failed to disable arp offload\n");
+
+diff --git a/drivers/net/wireless/realtek/rtw89/wow.h b/drivers/net/wireless/realtek/rtw89/wow.h
+index 3fbc2b87c058ac..f91991e8f2e30e 100644
+--- a/drivers/net/wireless/realtek/rtw89/wow.h
++++ b/drivers/net/wireless/realtek/rtw89/wow.h
+@@ -97,18 +97,16 @@ static inline int rtw89_wow_get_sec_hdr_len(struct rtw89_dev *rtwdev)
+ #ifdef CONFIG_PM
+ static inline bool rtw89_wow_mgd_linked(struct rtw89_dev *rtwdev)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+
+- return rtwvif->net_type == RTW89_NET_TYPE_INFRA;
++ return rtwvif_link->net_type == RTW89_NET_TYPE_INFRA;
+ }
+
+ static inline bool rtw89_wow_no_link(struct rtw89_dev *rtwdev)
+ {
+- struct ieee80211_vif *wow_vif = rtwdev->wow.wow_vif;
+- struct rtw89_vif *rtwvif = (struct rtw89_vif *)wow_vif->drv_priv;
++ struct rtw89_vif_link *rtwvif_link = rtwdev->wow.rtwvif_link;
+
+- return rtwvif->net_type == RTW89_NET_TYPE_NO_LINK;
++ return rtwvif_link->net_type == RTW89_NET_TYPE_NO_LINK;
+ }
+
+ static inline bool rtw_wow_has_mgd_features(struct rtw89_dev *rtwdev)
+diff --git a/drivers/net/wireless/silabs/wfx/main.c b/drivers/net/wireless/silabs/wfx/main.c
+index e7198520bdffc7..64441c8bc4606c 100644
+--- a/drivers/net/wireless/silabs/wfx/main.c
++++ b/drivers/net/wireless/silabs/wfx/main.c
+@@ -480,10 +480,23 @@ static int __init wfx_core_init(void)
+ {
+ int ret = 0;
+
+- if (IS_ENABLED(CONFIG_SPI))
++ if (IS_ENABLED(CONFIG_SPI)) {
+ ret = spi_register_driver(&wfx_spi_driver);
+- if (IS_ENABLED(CONFIG_MMC) && !ret)
++ if (ret)
++ goto out;
++ }
++ if (IS_ENABLED(CONFIG_MMC)) {
+ ret = sdio_register_driver(&wfx_sdio_driver);
++ if (ret)
++ goto unregister_spi;
++ }
++
++ return 0;
++
++unregister_spi:
++ if (IS_ENABLED(CONFIG_SPI))
++ spi_unregister_driver(&wfx_spi_driver);
++out:
+ return ret;
+ }
+ module_init(wfx_core_init);
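
The wfx hunk above replaces two chained registrations, where a failing
second call left the first one registered, with a standard goto unwind.
The same shape as a self-contained, runnable sketch with stub
register/unregister functions (hypothetical names):

    #include <stdio.h>

    static int register_a(void) { return 0; }
    static void unregister_a(void) { }
    static int register_b(void) { return -1; /* simulate failure */ }

    static int core_init(void)
    {
            int ret;

            ret = register_a();
            if (ret)
                    goto out;

            ret = register_b();
            if (ret)
                    goto unregister_a;

            return 0;

    unregister_a:
            unregister_a();
    out:
            return ret;
    }

    int main(void)
    {
            printf("core_init: %d\n", core_init());
            return 0;
    }

Each label undoes exactly the steps that succeeded before the failure, so
init never exits with a driver half-registered.
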
+diff --git a/drivers/net/wireless/st/cw1200/cw1200_spi.c b/drivers/net/wireless/st/cw1200/cw1200_spi.c
+index 4f346fb977a989..862964a8cc8761 100644
+--- a/drivers/net/wireless/st/cw1200/cw1200_spi.c
++++ b/drivers/net/wireless/st/cw1200/cw1200_spi.c
+@@ -450,7 +450,7 @@ static int __maybe_unused cw1200_spi_suspend(struct device *dev)
+ {
+ struct hwbus_priv *self = spi_get_drvdata(to_spi_device(dev));
+
+- if (!cw1200_can_suspend(self->core))
++ if (self && !cw1200_can_suspend(self->core))
+ return -EAGAIN;
+
+ /* XXX notify host that we have to keep CW1200 powered on? */
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 855b42c92284df..f0d4c6f3cb0555 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4591,6 +4591,11 @@ EXPORT_SYMBOL_GPL(nvme_alloc_admin_tag_set);
+
+ void nvme_remove_admin_tag_set(struct nvme_ctrl *ctrl)
+ {
++ /*
++ * As we're about to destroy the queue and free tagset
++ * we can not have keep-alive work running.
++ */
++ nvme_stop_keep_alive(ctrl);
+ blk_mq_destroy_queue(ctrl->admin_q);
+ blk_put_queue(ctrl->admin_q);
+ if (ctrl->ops->flags & NVME_F_FABRICS) {
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index 6a15873055b951..f25582e4d88bb0 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -165,7 +165,8 @@ void nvme_kick_requeue_lists(struct nvme_ctrl *ctrl)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu)) {
+ if (!ns->head->disk)
+ continue;
+ kblockd_schedule_work(&ns->head->requeue_work);
+@@ -209,7 +210,8 @@ void nvme_mpath_clear_ctrl_paths(struct nvme_ctrl *ctrl)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu)) {
+ nvme_mpath_clear_current_path(ns);
+ kblockd_schedule_work(&ns->head->requeue_work);
+ }
+@@ -224,7 +226,8 @@ void nvme_mpath_revalidate_paths(struct nvme_ns *ns)
+ int srcu_idx;
+
+ srcu_idx = srcu_read_lock(&head->srcu);
+- list_for_each_entry_rcu(ns, &head->list, siblings) {
++ list_for_each_entry_srcu(ns, &head->list, siblings,
++ srcu_read_lock_held(&head->srcu)) {
+ if (capacity != get_capacity(ns->disk))
+ clear_bit(NVME_NS_READY, &ns->flags);
+ }
+@@ -257,7 +260,8 @@ static struct nvme_ns *__nvme_find_path(struct nvme_ns_head *head, int node)
+ int found_distance = INT_MAX, fallback_distance = INT_MAX, distance;
+ struct nvme_ns *found = NULL, *fallback = NULL, *ns;
+
+- list_for_each_entry_rcu(ns, &head->list, siblings) {
++ list_for_each_entry_srcu(ns, &head->list, siblings,
++ srcu_read_lock_held(&head->srcu)) {
+ if (nvme_path_is_disabled(ns))
+ continue;
+
+@@ -356,7 +360,8 @@ static struct nvme_ns *nvme_queue_depth_path(struct nvme_ns_head *head)
+ unsigned int min_depth_opt = UINT_MAX, min_depth_nonopt = UINT_MAX;
+ unsigned int depth;
+
+- list_for_each_entry_rcu(ns, &head->list, siblings) {
++ list_for_each_entry_srcu(ns, &head->list, siblings,
++ srcu_read_lock_held(&head->srcu)) {
+ if (nvme_path_is_disabled(ns))
+ continue;
+
+@@ -424,7 +429,8 @@ static bool nvme_available_path(struct nvme_ns_head *head)
+ if (!test_bit(NVME_NSHEAD_DISK_LIVE, &head->flags))
+ return NULL;
+
+- list_for_each_entry_rcu(ns, &head->list, siblings) {
++ list_for_each_entry_srcu(ns, &head->list, siblings,
++ srcu_read_lock_held(&head->srcu)) {
+ if (test_bit(NVME_CTRL_FAILFAST_EXPIRED, &ns->ctrl->flags))
+ continue;
+ switch (nvme_ctrl_state(ns->ctrl)) {
+@@ -785,7 +791,8 @@ static int nvme_update_ana_state(struct nvme_ctrl *ctrl,
+ return 0;
+
+ srcu_idx = srcu_read_lock(&ctrl->srcu);
+- list_for_each_entry_rcu(ns, &ctrl->namespaces, list) {
++ list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
++ srcu_read_lock_held(&ctrl->srcu)) {
+ unsigned nsid;
+ again:
+ nsid = le32_to_cpu(desc->nsids[n]);
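
The multipath hunks above replace list_for_each_entry_rcu() with
list_for_each_entry_srcu(), whose extra argument gives lockdep a condition
proving the matching SRCU domain is held. A kernel-context sketch with
hypothetical demo_* types:

    #include <linux/rculist.h>
    #include <linux/srcu.h>

    struct demo_ns {
            struct list_head list;
    };

    struct demo_ctrl {
            struct srcu_struct srcu;
            struct list_head namespaces;
    };

    static void demo_walk(struct demo_ctrl *ctrl)
    {
            struct demo_ns *ns;
            int idx;

            idx = srcu_read_lock(&ctrl->srcu);
            /* The fourth argument lets lockdep verify the right domain. */
            list_for_each_entry_srcu(ns, &ctrl->namespaces, list,
                                     srcu_read_lock_held(&ctrl->srcu))
                    ; /* inspect ns here */
            srcu_read_unlock(&ctrl->srcu, idx);
    }
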
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 4b9fda0b1d9a33..55af3dfbc2607b 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -153,6 +153,7 @@ struct nvme_dev {
+ /* host memory buffer support: */
+ u64 host_mem_size;
+ u32 nr_host_mem_descs;
++ u32 host_mem_descs_size;
+ dma_addr_t host_mem_descs_dma;
+ struct nvme_host_mem_buf_desc *host_mem_descs;
+ void **host_mem_desc_bufs;
+@@ -904,9 +905,10 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
+
+ static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
+ {
++ struct request *req;
++
+ spin_lock(&nvmeq->sq_lock);
+- while (!rq_list_empty(*rqlist)) {
+- struct request *req = rq_list_pop(rqlist);
++ while ((req = rq_list_pop(rqlist))) {
+ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+
+ nvme_sq_copy_cmd(nvmeq, &iod->cmd);
+@@ -931,31 +933,25 @@ static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
+
+ static void nvme_queue_rqs(struct request **rqlist)
+ {
+- struct request *req, *next, *prev = NULL;
++ struct request *submit_list = NULL;
+ struct request *requeue_list = NULL;
++ struct request **requeue_lastp = &requeue_list;
++ struct nvme_queue *nvmeq = NULL;
++ struct request *req;
+
+- rq_list_for_each_safe(rqlist, req, next) {
+- struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+-
+- if (!nvme_prep_rq_batch(nvmeq, req)) {
+- /* detach 'req' and add to remainder list */
+- rq_list_move(rqlist, &requeue_list, req, prev);
+-
+- req = prev;
+- if (!req)
+- continue;
+- }
++ while ((req = rq_list_pop(rqlist))) {
++ if (nvmeq && nvmeq != req->mq_hctx->driver_data)
++ nvme_submit_cmds(nvmeq, &submit_list);
++ nvmeq = req->mq_hctx->driver_data;
+
+- if (!next || req->mq_hctx != next->mq_hctx) {
+- /* detach rest of list, and submit */
+- req->rq_next = NULL;
+- nvme_submit_cmds(nvmeq, rqlist);
+- *rqlist = next;
+- prev = NULL;
+- } else
+- prev = req;
++ if (nvme_prep_rq_batch(nvmeq, req))
++ rq_list_add(&submit_list, req); /* reverse order */
++ else
++ rq_list_add_tail(&requeue_lastp, req);
+ }
+
++ if (nvmeq)
++ nvme_submit_cmds(nvmeq, &submit_list);
+ *rqlist = requeue_list;
+ }
+
+@@ -1966,10 +1962,10 @@ static void nvme_free_host_mem(struct nvme_dev *dev)
+
+ kfree(dev->host_mem_desc_bufs);
+ dev->host_mem_desc_bufs = NULL;
+- dma_free_coherent(dev->dev,
+- dev->nr_host_mem_descs * sizeof(*dev->host_mem_descs),
++ dma_free_coherent(dev->dev, dev->host_mem_descs_size,
+ dev->host_mem_descs, dev->host_mem_descs_dma);
+ dev->host_mem_descs = NULL;
++ dev->host_mem_descs_size = 0;
+ dev->nr_host_mem_descs = 0;
+ }
+
+@@ -1977,7 +1973,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
+ u32 chunk_size)
+ {
+ struct nvme_host_mem_buf_desc *descs;
+- u32 max_entries, len;
++ u32 max_entries, len, descs_size;
+ dma_addr_t descs_dma;
+ int i = 0;
+ void **bufs;
+@@ -1990,8 +1986,9 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
+ if (dev->ctrl.hmmaxd && dev->ctrl.hmmaxd < max_entries)
+ max_entries = dev->ctrl.hmmaxd;
+
+- descs = dma_alloc_coherent(dev->dev, max_entries * sizeof(*descs),
+- &descs_dma, GFP_KERNEL);
++ descs_size = max_entries * sizeof(*descs);
++ descs = dma_alloc_coherent(dev->dev, descs_size, &descs_dma,
++ GFP_KERNEL);
+ if (!descs)
+ goto out;
+
+@@ -2020,6 +2017,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
+ dev->host_mem_size = size;
+ dev->host_mem_descs = descs;
+ dev->host_mem_descs_dma = descs_dma;
++ dev->host_mem_descs_size = descs_size;
+ dev->host_mem_desc_bufs = bufs;
+ return 0;
+
+@@ -2034,8 +2032,7 @@ static int __nvme_alloc_host_mem(struct nvme_dev *dev, u64 preferred,
+
+ kfree(bufs);
+ out_free_descs:
+- dma_free_coherent(dev->dev, max_entries * sizeof(*descs), descs,
+- descs_dma);
++ dma_free_coherent(dev->dev, descs_size, descs, descs_dma);
+ out:
+ dev->host_mem_descs = NULL;
+ return -ENOMEM;
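
The host-memory-buffer hunks above cache the descriptor allocation size in
the device structure so dma_free_coherent() always receives the exact size
that was passed to dma_alloc_coherent(), even after nr_host_mem_descs is
reduced. A minimal kernel-context sketch (hypothetical demo_dev):

    #include <linux/dma-mapping.h>
    #include <linux/errno.h>
    #include <linux/gfp.h>

    struct demo_dev {
            struct device *dev;
            void *descs;
            dma_addr_t descs_dma;
            u32 descs_size;         /* remembered for a symmetric free */
    };

    static int demo_alloc_descs(struct demo_dev *d, u32 max_entries,
                                size_t entry_size)
    {
            d->descs_size = max_entries * entry_size;
            d->descs = dma_alloc_coherent(d->dev, d->descs_size,
                                          &d->descs_dma, GFP_KERNEL);
            return d->descs ? 0 : -ENOMEM;
    }

    static void demo_free_descs(struct demo_dev *d)
    {
            dma_free_coherent(d->dev, d->descs_size, d->descs,
                              d->descs_dma);
            d->descs = NULL;
            d->descs_size = 0;
    }
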
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index 4d528c10df3a9a..546e76ac407cfd 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -457,6 +457,7 @@ int __initdata dt_root_addr_cells;
+ int __initdata dt_root_size_cells;
+
+ void *initial_boot_params __ro_after_init;
++phys_addr_t initial_boot_params_pa __ro_after_init;
+
+ #ifdef CONFIG_OF_EARLY_FLATTREE
+
+@@ -1136,17 +1137,18 @@ static void * __init early_init_dt_alloc_memory_arch(u64 size, u64 align)
+ return ptr;
+ }
+
+-bool __init early_init_dt_verify(void *params)
++bool __init early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys)
+ {
+- if (!params)
++ if (!dt_virt)
+ return false;
+
+ /* check device tree validity */
+- if (fdt_check_header(params))
++ if (fdt_check_header(dt_virt))
+ return false;
+
+ /* Setup flat device-tree pointer */
+- initial_boot_params = params;
++ initial_boot_params = dt_virt;
++ initial_boot_params_pa = dt_phys;
+ of_fdt_crc32 = crc32_be(~0, initial_boot_params,
+ fdt_totalsize(initial_boot_params));
+
+@@ -1173,11 +1175,11 @@ void __init early_init_dt_scan_nodes(void)
+ early_init_dt_check_for_usable_mem_range();
+ }
+
+-bool __init early_init_dt_scan(void *params)
++bool __init early_init_dt_scan(void *dt_virt, phys_addr_t dt_phys)
+ {
+ bool status;
+
+- status = early_init_dt_verify(params);
++ status = early_init_dt_verify(dt_virt, dt_phys);
+ if (!status)
+ return false;
+
+diff --git a/drivers/of/kexec.c b/drivers/of/kexec.c
+index 9ccde2fd77cbf5..5b924597a4debe 100644
+--- a/drivers/of/kexec.c
++++ b/drivers/of/kexec.c
+@@ -301,7 +301,7 @@ void *of_kexec_alloc_and_setup_fdt(const struct kimage *image,
+ }
+
+ /* Remove memory reservation for the current device tree. */
+- ret = fdt_find_and_del_mem_rsv(fdt, __pa(initial_boot_params),
++ ret = fdt_find_and_del_mem_rsv(fdt, initial_boot_params_pa,
+ fdt_totalsize(initial_boot_params));
+ if (ret == -EINVAL) {
+ pr_err("Error removing memory reservation.\n");
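
These of/ hunks record the blob's physical address at verification time
instead of reconstructing it later with __pa(), which is only valid for
linear-map virtual addresses. A rough kernel-context sketch of the idea
(hypothetical globals, assuming libfdt's fdt_check_header()):

    #include <linux/libfdt.h>
    #include <linux/types.h>

    static void *demo_fdt_virt;
    static phys_addr_t demo_fdt_phys;

    static bool demo_fdt_verify(void *virt, phys_addr_t phys)
    {
            if (!virt || fdt_check_header(virt))
                    return false;

            demo_fdt_virt = virt;
            /* Keep the physical address the caller already knows;
             * __pa(virt) would be wrong for fixmap/vmalloc mappings. */
            demo_fdt_phys = phys;
            return true;
    }
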
+diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
+index 284f2e0e4d2615..e091c3e55b5c6f 100644
+--- a/drivers/pci/controller/cadence/pci-j721e.c
++++ b/drivers/pci/controller/cadence/pci-j721e.c
+@@ -572,15 +572,14 @@ static int j721e_pcie_probe(struct platform_device *pdev)
+ pcie->refclk = clk;
+
+ /*
+- * The "Power Sequencing and Reset Signal Timings" table of the
+- * PCI Express Card Electromechanical Specification, Revision
+- * 5.1, Section 2.9.2, Symbol "T_PERST-CLK", indicates PERST#
+- * should be deasserted after minimum of 100us once REFCLK is
+- * stable. The REFCLK to the connector in RC mode is selected
+- * while enabling the PHY. So deassert PERST# after 100 us.
++ * Section 2.2 of the PCI Express Card Electromechanical
++ * Specification (Revision 5.1) mandates that the deassertion
++ * of the PERST# signal should be delayed by 100 ms (TPVPERL).
++ * This shall ensure that the power and the reference clock
++ * are stable.
+ */
+ if (gpiod) {
+- fsleep(PCIE_T_PERST_CLK_US);
++ msleep(PCIE_T_PVPERL_MS);
+ gpiod_set_value_cansleep(gpiod, 1);
+ }
+
+@@ -671,15 +670,14 @@ static int j721e_pcie_resume_noirq(struct device *dev)
+ return ret;
+
+ /*
+- * The "Power Sequencing and Reset Signal Timings" table of the
+- * PCI Express Card Electromechanical Specification, Revision
+- * 5.1, Section 2.9.2, Symbol "T_PERST-CLK", indicates PERST#
+- * should be deasserted after minimum of 100us once REFCLK is
+- * stable. The REFCLK to the connector in RC mode is selected
+- * while enabling the PHY. So deassert PERST# after 100 us.
++ * Section 2.2 of the PCI Express Card Electromechanical
++ * Specification (Revision 5.1) mandates that the deassertion
++ * of the PERST# signal should be delayed by 100 ms (TPVPERL).
++ * This shall ensure that the power and the reference clock
++ * are stable.
+ */
+ if (pcie->reset_gpio) {
+- fsleep(PCIE_T_PERST_CLK_US);
++ msleep(PCIE_T_PVPERL_MS);
+ gpiod_set_value_cansleep(pcie->reset_gpio, 1);
+ }
+
+diff --git a/drivers/pci/controller/dwc/pcie-qcom-ep.c b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+index e588fcc5458936..b5ca5260f9049f 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom-ep.c
++++ b/drivers/pci/controller/dwc/pcie-qcom-ep.c
+@@ -396,6 +396,10 @@ static int qcom_pcie_perst_deassert(struct dw_pcie *pci)
+ return ret;
+ }
+
++ /* Perform cleanup that requires refclk */
++ pci_epc_deinit_notify(pci->ep.epc);
++ dw_pcie_ep_cleanup(&pci->ep);
++
+ /* Assert WAKE# to RC to indicate device is ready */
+ gpiod_set_value_cansleep(pcie_ep->wake, 1);
+ usleep_range(WAKE_DELAY_US, WAKE_DELAY_US + 500);
+@@ -540,8 +544,6 @@ static void qcom_pcie_perst_assert(struct dw_pcie *pci)
+ {
+ struct qcom_pcie_ep *pcie_ep = to_pcie_ep(pci);
+
+- pci_epc_deinit_notify(pci->ep.epc);
+- dw_pcie_ep_cleanup(&pci->ep);
+ qcom_pcie_disable_resources(pcie_ep);
+ pcie_ep->link_status = QCOM_PCIE_EP_LINK_DISABLED;
+ }
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index ef44a82be058b2..2b33d03ed05416 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -133,6 +133,7 @@
+
+ /* PARF_INT_ALL_{STATUS/CLEAR/MASK} register fields */
+ #define PARF_INT_ALL_LINK_UP BIT(13)
++#define PARF_INT_MSI_DEV_0_7 GENMASK(30, 23)
+
+ /* PARF_NO_SNOOP_OVERIDE register fields */
+ #define WR_NO_SNOOP_OVERIDE_EN BIT(1)
+@@ -1716,7 +1717,8 @@ static int qcom_pcie_probe(struct platform_device *pdev)
+ goto err_host_deinit;
+ }
+
+- writel_relaxed(PARF_INT_ALL_LINK_UP, pcie->parf + PARF_INT_ALL_MASK);
++ writel_relaxed(PARF_INT_ALL_LINK_UP | PARF_INT_MSI_DEV_0_7,
++ pcie->parf + PARF_INT_ALL_MASK);
+ }
+
+ qcom_pcie_icc_opp_update(pcie);
+diff --git a/drivers/pci/controller/dwc/pcie-tegra194.c b/drivers/pci/controller/dwc/pcie-tegra194.c
+index c1394f2ab63ff1..ced3b7e7bdaded 100644
+--- a/drivers/pci/controller/dwc/pcie-tegra194.c
++++ b/drivers/pci/controller/dwc/pcie-tegra194.c
+@@ -1704,9 +1704,6 @@ static void pex_ep_event_pex_rst_assert(struct tegra_pcie_dw *pcie)
+ if (ret)
+ dev_err(pcie->dev, "Failed to go Detect state: %d\n", ret);
+
+- pci_epc_deinit_notify(pcie->pci.ep.epc);
+- dw_pcie_ep_cleanup(&pcie->pci.ep);
+-
+ reset_control_assert(pcie->core_rst);
+
+ tegra_pcie_disable_phy(pcie);
+@@ -1785,6 +1782,10 @@ static void pex_ep_event_pex_rst_deassert(struct tegra_pcie_dw *pcie)
+ goto fail_phy;
+ }
+
++ /* Perform cleanup that requires refclk */
++ pci_epc_deinit_notify(pcie->pci.ep.epc);
++ dw_pcie_ep_cleanup(&pcie->pci.ep);
++
+ /* Clear any stale interrupt statuses */
+ appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L0);
+ appl_writel(pcie, 0xFFFFFFFF, APPL_INTR_STATUS_L1_0_0);
+diff --git a/drivers/pci/endpoint/functions/pci-epf-mhi.c b/drivers/pci/endpoint/functions/pci-epf-mhi.c
+index 7d070b1def1166..54286a40bdfbf7 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-mhi.c
++++ b/drivers/pci/endpoint/functions/pci-epf-mhi.c
+@@ -867,12 +867,18 @@ static int pci_epf_mhi_bind(struct pci_epf *epf)
+ {
+ struct pci_epf_mhi *epf_mhi = epf_get_drvdata(epf);
+ struct pci_epc *epc = epf->epc;
++ struct device *dev = &epf->dev;
+ struct platform_device *pdev = to_platform_device(epc->dev.parent);
+ struct resource *res;
+ int ret;
+
+ /* Get MMIO base address from Endpoint controller */
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "mmio");
++ if (!res) {
++ dev_err(dev, "Failed to get \"mmio\" resource\n");
++ return -ENODEV;
++ }
++
+ epf_mhi->mmio_phys = res->start;
+ epf_mhi->mmio_size = resource_size(res);
+
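
The pci-epf-mhi hunk above guards against a missing "mmio" resource before
dereferencing it. The same check as a minimal kernel-context sketch
(hypothetical demo function):

    #include <linux/errno.h>
    #include <linux/ioport.h>
    #include <linux/platform_device.h>

    static int demo_get_mmio(struct platform_device *pdev,
                             phys_addr_t *phys, resource_size_t *size)
    {
            struct resource *res;

            /* platform_get_resource_byname() returns NULL if absent. */
            res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
                                               "mmio");
            if (!res)
                    return -ENODEV;

            *phys = res->start;
            *size = resource_size(res);
            return 0;
    }
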
+diff --git a/drivers/pci/hotplug/cpqphp_pci.c b/drivers/pci/hotplug/cpqphp_pci.c
+index 718bc6cf12cb3c..974c7db3265b5a 100644
+--- a/drivers/pci/hotplug/cpqphp_pci.c
++++ b/drivers/pci/hotplug/cpqphp_pci.c
+@@ -135,11 +135,13 @@ int cpqhp_unconfigure_device(struct pci_func *func)
+ static int PCI_RefinedAccessConfig(struct pci_bus *bus, unsigned int devfn, u8 offset, u32 *value)
+ {
+ u32 vendID = 0;
++ int ret;
+
+- if (pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID) == -1)
+- return -1;
++ ret = pci_bus_read_config_dword(bus, devfn, PCI_VENDOR_ID, &vendID);
++ if (ret != PCIBIOS_SUCCESSFUL)
++ return PCIBIOS_DEVICE_NOT_FOUND;
+ if (PCI_POSSIBLE_ERROR(vendID))
+- return -1;
++ return PCIBIOS_DEVICE_NOT_FOUND;
+ return pci_bus_read_config_dword(bus, devfn, offset, value);
+ }
+
+@@ -202,13 +204,15 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_
+ {
+ u16 tdevice;
+ u32 work;
++ int ret;
+ u8 tbus;
+
+ ctrl->pci_bus->number = bus_num;
+
+ for (tdevice = 0; tdevice < 0xFF; tdevice++) {
+ /* Scan for access first */
+- if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1)
++ ret = PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work);
++ if (ret)
+ continue;
+ dbg("Looking for nonbridge bus_num %d dev_num %d\n", bus_num, tdevice);
+ /* Yep we got one. Not a bridge ? */
+@@ -220,7 +224,8 @@ static int PCI_ScanBusForNonBridge(struct controller *ctrl, u8 bus_num, u8 *dev_
+ }
+ for (tdevice = 0; tdevice < 0xFF; tdevice++) {
+ /* Scan for access first */
+- if (PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work) == -1)
++ ret = PCI_RefinedAccessConfig(ctrl->pci_bus, tdevice, 0x08, &work);
++ if (ret)
+ continue;
+ dbg("Looking for bridge bus_num %d dev_num %d\n", bus_num, tdevice);
+ /* Yep we got one. bridge ? */
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 225a6cd2e9ca3b..08f170fd3efb3e 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -5248,7 +5248,7 @@ static ssize_t reset_method_store(struct device *dev,
+ const char *buf, size_t count)
+ {
+ struct pci_dev *pdev = to_pci_dev(dev);
+- char *options, *name;
++ char *options, *tmp_options, *name;
+ int m, n;
+ u8 reset_methods[PCI_NUM_RESET_METHODS] = { 0 };
+
+@@ -5268,7 +5268,8 @@ static ssize_t reset_method_store(struct device *dev,
+ return -ENOMEM;
+
+ n = 0;
+- while ((name = strsep(&options, " ")) != NULL) {
++ tmp_options = options;
++ while ((name = strsep(&tmp_options, " ")) != NULL) {
+ if (sysfs_streq(name, ""))
+ continue;
+
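
The reset_method_store() hunk above keeps the pointer the allocator
returned while letting strsep() advance a separate cursor, so the eventual
kfree() still receives the original address. The same shape in plain,
runnable C (assuming a glibc/BSD strsep()):

    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
            char *options = strdup("flr bus pm");
            char *tmp = options;    /* strsep() advances this cursor */
            char *name;

            if (!options)
                    return 1;

            while ((name = strsep(&tmp, " ")) != NULL)
                    printf("method: %s\n", name);

            free(options);  /* still the pointer strdup() returned */
            return 0;
    }
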
+diff --git a/drivers/pci/slot.c b/drivers/pci/slot.c
+index 0f87cade10f74b..ed645c7a4e4b41 100644
+--- a/drivers/pci/slot.c
++++ b/drivers/pci/slot.c
+@@ -79,6 +79,7 @@ static void pci_slot_release(struct kobject *kobj)
+ up_read(&pci_bus_sem);
+
+ list_del(&slot->list);
++ pci_bus_put(slot->bus);
+
+ kfree(slot);
+ }
+@@ -261,7 +262,7 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
+ goto err;
+ }
+
+- slot->bus = parent;
++ slot->bus = pci_bus_get(parent);
+ slot->number = slot_nr;
+
+ slot->kobj.kset = pci_slots_kset;
+@@ -269,6 +270,7 @@ struct pci_slot *pci_create_slot(struct pci_bus *parent, int slot_nr,
+ slot_name = make_slot_name(name);
+ if (!slot_name) {
+ err = -ENOMEM;
++ pci_bus_put(slot->bus);
+ kfree(slot);
+ goto err;
+ }
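
The slot.c hunks above take a bus reference for as long as the slot caches
the pointer and drop it both on release and on the new error path. The
general get/put pairing, as a kernel-context sketch (hypothetical
demo_slot):

    #include <linux/pci.h>
    #include <linux/slab.h>

    struct demo_slot {
            struct pci_bus *bus;
    };

    static struct demo_slot *demo_slot_create(struct pci_bus *parent)
    {
            struct demo_slot *slot = kzalloc(sizeof(*slot), GFP_KERNEL);

            if (!slot)
                    return NULL;

            /* Hold a reference for the lifetime of the cached pointer. */
            slot->bus = pci_bus_get(parent);
            return slot;
    }

    static void demo_slot_release(struct demo_slot *slot)
    {
            pci_bus_put(slot->bus); /* pairs with pci_bus_get() above */
            kfree(slot);
    }
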
+diff --git a/drivers/perf/arm-cmn.c b/drivers/perf/arm-cmn.c
+index 397a46410f7cb7..30506c43776f15 100644
+--- a/drivers/perf/arm-cmn.c
++++ b/drivers/perf/arm-cmn.c
+@@ -2178,8 +2178,6 @@ static int arm_cmn_init_dtcs(struct arm_cmn *cmn)
+ continue;
+
+ xp = arm_cmn_node_to_xp(cmn, dn);
+- dn->portid_bits = xp->portid_bits;
+- dn->deviceid_bits = xp->deviceid_bits;
+ dn->dtc = xp->dtc;
+ dn->dtm = xp->dtm;
+ if (cmn->multi_dtm)
+@@ -2420,6 +2418,8 @@ static int arm_cmn_discover(struct arm_cmn *cmn, unsigned int rgn_offset)
+ }
+
+ arm_cmn_init_node_info(cmn, reg & CMN_CHILD_NODE_ADDR, dn);
++ dn->portid_bits = xp->portid_bits;
++ dn->deviceid_bits = xp->deviceid_bits;
+
+ switch (dn->type) {
+ case CMN_TYPE_DTC:
+diff --git a/drivers/perf/arm_smmuv3_pmu.c b/drivers/perf/arm_smmuv3_pmu.c
+index d5fa92ba837397..dabdb9f7bb82c4 100644
+--- a/drivers/perf/arm_smmuv3_pmu.c
++++ b/drivers/perf/arm_smmuv3_pmu.c
+@@ -431,6 +431,17 @@ static int smmu_pmu_event_init(struct perf_event *event)
+ return -EINVAL;
+ }
+
++ /*
++ * Ensure all events are on the same cpu so all events are in the
++ * same cpu context, to avoid races on pmu_enable etc.
++ */
++ event->cpu = smmu_pmu->on_cpu;
++
++ hwc->idx = -1;
++
++ if (event->group_leader == event)
++ return 0;
++
+ for_each_sibling_event(sibling, event->group_leader) {
+ if (is_software_event(sibling))
+ continue;
+@@ -442,14 +453,6 @@ static int smmu_pmu_event_init(struct perf_event *event)
+ return -EINVAL;
+ }
+
+- hwc->idx = -1;
+-
+- /*
+- * Ensure all events are on the same cpu so all events are in the
+- * same cpu context, to avoid races on pmu_enable etc.
+- */
+- event->cpu = smmu_pmu->on_cpu;
+-
+ return 0;
+ }
+
+diff --git a/drivers/phy/phy-airoha-pcie-regs.h b/drivers/phy/phy-airoha-pcie-regs.h
+index bb1f679ca1dfa0..b938a7b468fee3 100644
+--- a/drivers/phy/phy-airoha-pcie-regs.h
++++ b/drivers/phy/phy-airoha-pcie-regs.h
+@@ -197,9 +197,9 @@
+ #define CSR_2L_PXP_TX1_MULTLANE_EN BIT(0)
+
+ #define REG_CSR_2L_RX0_REV0 0x00fc
+-#define CSR_2L_PXP_VOS_PNINV GENMASK(3, 2)
+-#define CSR_2L_PXP_FE_GAIN_NORMAL_MODE GENMASK(6, 4)
+-#define CSR_2L_PXP_FE_GAIN_TRAIN_MODE GENMASK(10, 8)
++#define CSR_2L_PXP_VOS_PNINV GENMASK(19, 18)
++#define CSR_2L_PXP_FE_GAIN_NORMAL_MODE GENMASK(22, 20)
++#define CSR_2L_PXP_FE_GAIN_TRAIN_MODE GENMASK(26, 24)
+
+ #define REG_CSR_2L_RX0_PHYCK_DIV 0x0100
+ #define CSR_2L_PXP_RX0_PHYCK_SEL GENMASK(9, 8)
+diff --git a/drivers/phy/phy-airoha-pcie.c b/drivers/phy/phy-airoha-pcie.c
+index 1e410eb410580c..56e9ade8a9fd3d 100644
+--- a/drivers/phy/phy-airoha-pcie.c
++++ b/drivers/phy/phy-airoha-pcie.c
+@@ -459,7 +459,7 @@ static void airoha_pcie_phy_init_clk_out(struct airoha_pcie_phy *pcie_phy)
+ airoha_phy_csr_2l_clear_bits(pcie_phy, REG_CSR_2L_CLKTX1_OFFSET,
+ CSR_2L_PXP_CLKTX1_SR);
+ airoha_phy_csr_2l_update_field(pcie_phy, REG_CSR_2L_PLL_CMN_RESERVE0,
+- CSR_2L_PXP_PLL_RESERVE_MASK, 0xdd);
++ CSR_2L_PXP_PLL_RESERVE_MASK, 0xd0d);
+ }
+
+ static void airoha_pcie_phy_init_csr_2l(struct airoha_pcie_phy *pcie_phy)
+@@ -471,9 +471,9 @@ static void airoha_pcie_phy_init_csr_2l(struct airoha_pcie_phy *pcie_phy)
+ PCIE_SW_XFI_RXPCS_RST | PCIE_SW_REF_RST |
+ PCIE_SW_RX_RST);
+ airoha_phy_pma0_set_bits(pcie_phy, REG_PCIE_PMA_TX_RESET,
+- PCIE_TX_TOP_RST | REG_PCIE_PMA_TX_RESET);
++ PCIE_TX_TOP_RST | PCIE_TX_CAL_RST);
+ airoha_phy_pma1_set_bits(pcie_phy, REG_PCIE_PMA_TX_RESET,
+- PCIE_TX_TOP_RST | REG_PCIE_PMA_TX_RESET);
++ PCIE_TX_TOP_RST | PCIE_TX_CAL_RST);
+ }
+
+ static void airoha_pcie_phy_init_rx(struct airoha_pcie_phy *pcie_phy)
+@@ -802,7 +802,7 @@ static void airoha_pcie_phy_init_ssc_jcpll(struct airoha_pcie_phy *pcie_phy)
+ airoha_phy_csr_2l_set_bits(pcie_phy, REG_CSR_2L_JCPLL_SDM_IFM,
+ CSR_2L_PXP_JCPLL_SDM_IFM);
+ airoha_phy_csr_2l_set_bits(pcie_phy, REG_CSR_2L_JCPLL_SDM_HREN,
+- REG_CSR_2L_JCPLL_SDM_HREN);
++ CSR_2L_PXP_JCPLL_SDM_HREN);
+ airoha_phy_csr_2l_clear_bits(pcie_phy, REG_CSR_2L_JCPLL_RST_DLY,
+ CSR_2L_PXP_JCPLL_SDM_DI_EN);
+ airoha_phy_csr_2l_set_bits(pcie_phy, REG_CSR_2L_JCPLL_SSC,
+diff --git a/drivers/phy/realtek/phy-rtk-usb2.c b/drivers/phy/realtek/phy-rtk-usb2.c
+index e3ad7cea510998..e8ca2ec5998fe6 100644
+--- a/drivers/phy/realtek/phy-rtk-usb2.c
++++ b/drivers/phy/realtek/phy-rtk-usb2.c
+@@ -1023,6 +1023,8 @@ static int rtk_usb2phy_probe(struct platform_device *pdev)
+
+ rtk_phy->dev = &pdev->dev;
+ rtk_phy->phy_cfg = devm_kzalloc(dev, sizeof(*phy_cfg), GFP_KERNEL);
++ if (!rtk_phy->phy_cfg)
++ return -ENOMEM;
+
+ memcpy(rtk_phy->phy_cfg, phy_cfg, sizeof(*phy_cfg));
+
+diff --git a/drivers/phy/realtek/phy-rtk-usb3.c b/drivers/phy/realtek/phy-rtk-usb3.c
+index dfcf4b921bba63..96af483e5444b9 100644
+--- a/drivers/phy/realtek/phy-rtk-usb3.c
++++ b/drivers/phy/realtek/phy-rtk-usb3.c
+@@ -577,6 +577,8 @@ static int rtk_usb3phy_probe(struct platform_device *pdev)
+
+ rtk_phy->dev = &pdev->dev;
+ rtk_phy->phy_cfg = devm_kzalloc(dev, sizeof(*phy_cfg), GFP_KERNEL);
++ if (!rtk_phy->phy_cfg)
++ return -ENOMEM;
+
+ memcpy(rtk_phy->phy_cfg, phy_cfg, sizeof(*phy_cfg));
+
+diff --git a/drivers/pinctrl/pinctrl-k210.c b/drivers/pinctrl/pinctrl-k210.c
+index 0f6b55fec31de7..a71805997b028a 100644
+--- a/drivers/pinctrl/pinctrl-k210.c
++++ b/drivers/pinctrl/pinctrl-k210.c
+@@ -183,7 +183,7 @@ static const u32 k210_pinconf_mode_id_to_mode[] = {
+ [K210_PC_DEFAULT_INT13] = K210_PC_MODE_IN | K210_PC_PU,
+ };
+
+-#undef DEFAULT
++#undef K210_PC_DEFAULT
+
+ /*
+ * Pin functions configuration information.
+diff --git a/drivers/pinctrl/pinctrl-zynqmp.c b/drivers/pinctrl/pinctrl-zynqmp.c
+index 3c6d56fdb8c964..93454d2a26bcc6 100644
+--- a/drivers/pinctrl/pinctrl-zynqmp.c
++++ b/drivers/pinctrl/pinctrl-zynqmp.c
+@@ -49,7 +49,6 @@
+ * @name: Name of the pin mux function
+ * @groups: List of pin groups for this function
+ * @ngroups: Number of entries in @groups
+- * @node: Firmware node matching with the function
+ *
+ * This structure holds information about pin control function
+ * and function group names supporting that function.
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+index d2dd66769aa891..a0eb4e01b3a755 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+@@ -667,7 +667,7 @@ static void pmic_gpio_config_dbg_show(struct pinctrl_dev *pctldev,
+ "push-pull", "open-drain", "open-source"
+ };
+ static const char *const strengths[] = {
+- "no", "high", "medium", "low"
++ "no", "low", "medium", "high"
+ };
+
+ pad = pctldev->desc->pins[pin].drv_data;
+diff --git a/drivers/pinctrl/renesas/Kconfig b/drivers/pinctrl/renesas/Kconfig
+index 14bd55d647319b..7f3f41c7fe54c8 100644
+--- a/drivers/pinctrl/renesas/Kconfig
++++ b/drivers/pinctrl/renesas/Kconfig
+@@ -41,6 +41,7 @@ config PINCTRL_RENESAS
+ select PINCTRL_PFC_R8A779H0 if ARCH_R8A779H0
+ select PINCTRL_RZG2L if ARCH_RZG2L
+ select PINCTRL_RZV2M if ARCH_R9A09G011
++ select PINCTRL_RZG2L if ARCH_R9A09G057
+ select PINCTRL_PFC_SH7203 if CPU_SUBTYPE_SH7203
+ select PINCTRL_PFC_SH7264 if CPU_SUBTYPE_SH7264
+ select PINCTRL_PFC_SH7269 if CPU_SUBTYPE_SH7269
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+index 5a403915fed2c6..3a81837b5e623b 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+@@ -2710,7 +2710,7 @@ static int rzg2l_pinctrl_register(struct rzg2l_pinctrl *pctrl)
+
+ ret = pinctrl_enable(pctrl->pctl);
+ if (ret)
+- dev_err_probe(pctrl->dev, ret, "pinctrl enable failed\n");
++ return dev_err_probe(pctrl->dev, ret, "pinctrl enable failed\n");
+
+ ret = rzg2l_gpio_register(pctrl);
+ if (ret)
+diff --git a/drivers/platform/chrome/cros_ec_typec.c b/drivers/platform/chrome/cros_ec_typec.c
+index c7781aea0b88b2..f1324466efac65 100644
+--- a/drivers/platform/chrome/cros_ec_typec.c
++++ b/drivers/platform/chrome/cros_ec_typec.c
+@@ -409,6 +409,7 @@ static int cros_typec_init_ports(struct cros_typec_data *typec)
+ return 0;
+
+ unregister_ports:
++ fwnode_handle_put(fwnode);
+ cros_unregister_ports(typec);
+ return ret;
+ }
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index abdca3f05c5c15..89f5f44857d555 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -3696,10 +3696,28 @@ static int asus_wmi_custom_fan_curve_init(struct asus_wmi *asus)
+ /* Throttle thermal policy ****************************************************/
+ static int throttle_thermal_policy_write(struct asus_wmi *asus)
+ {
+- u8 value = asus->throttle_thermal_policy_mode;
+ u32 retval;
++ u8 value;
+ int err;
+
++ if (asus->throttle_thermal_policy_dev == ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY_VIVO) {
++ switch (asus->throttle_thermal_policy_mode) {
++ case ASUS_THROTTLE_THERMAL_POLICY_DEFAULT:
++ value = ASUS_THROTTLE_THERMAL_POLICY_DEFAULT_VIVO;
++ break;
++ case ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST:
++ value = ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST_VIVO;
++ break;
++ case ASUS_THROTTLE_THERMAL_POLICY_SILENT:
++ value = ASUS_THROTTLE_THERMAL_POLICY_SILENT_VIVO;
++ break;
++ default:
++ return -EINVAL;
++ }
++ } else {
++ value = asus->throttle_thermal_policy_mode;
++ }
++
+ err = asus_wmi_set_devstate(asus->throttle_thermal_policy_dev,
+ value, &retval);
+
+@@ -3804,46 +3822,6 @@ static ssize_t throttle_thermal_policy_store(struct device *dev,
+ static DEVICE_ATTR_RW(throttle_thermal_policy);
+
+ /* Platform profile ***********************************************************/
+-static int asus_wmi_platform_profile_to_vivo(struct asus_wmi *asus, int mode)
+-{
+- bool vivo;
+-
+- vivo = asus->throttle_thermal_policy_dev == ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY_VIVO;
+-
+- if (vivo) {
+- switch (mode) {
+- case ASUS_THROTTLE_THERMAL_POLICY_DEFAULT:
+- return ASUS_THROTTLE_THERMAL_POLICY_DEFAULT_VIVO;
+- case ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST:
+- return ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST_VIVO;
+- case ASUS_THROTTLE_THERMAL_POLICY_SILENT:
+- return ASUS_THROTTLE_THERMAL_POLICY_SILENT_VIVO;
+- }
+- }
+-
+- return mode;
+-}
+-
+-static int asus_wmi_platform_profile_mode_from_vivo(struct asus_wmi *asus, int mode)
+-{
+- bool vivo;
+-
+- vivo = asus->throttle_thermal_policy_dev == ASUS_WMI_DEVID_THROTTLE_THERMAL_POLICY_VIVO;
+-
+- if (vivo) {
+- switch (mode) {
+- case ASUS_THROTTLE_THERMAL_POLICY_DEFAULT_VIVO:
+- return ASUS_THROTTLE_THERMAL_POLICY_DEFAULT;
+- case ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST_VIVO:
+- return ASUS_THROTTLE_THERMAL_POLICY_OVERBOOST;
+- case ASUS_THROTTLE_THERMAL_POLICY_SILENT_VIVO:
+- return ASUS_THROTTLE_THERMAL_POLICY_SILENT;
+- }
+- }
+-
+- return mode;
+-}
+-
+ static int asus_wmi_platform_profile_get(struct platform_profile_handler *pprof,
+ enum platform_profile_option *profile)
+ {
+@@ -3853,7 +3831,7 @@ static int asus_wmi_platform_profile_get(struct platform_profile_handler *pprof,
+ asus = container_of(pprof, struct asus_wmi, platform_profile_handler);
+ tp = asus->throttle_thermal_policy_mode;
+
+- switch (asus_wmi_platform_profile_mode_from_vivo(asus, tp)) {
++ switch (tp) {
+ case ASUS_THROTTLE_THERMAL_POLICY_DEFAULT:
+ *profile = PLATFORM_PROFILE_BALANCED;
+ break;
+@@ -3892,7 +3870,7 @@ static int asus_wmi_platform_profile_set(struct platform_profile_handler *pprof,
+ return -EOPNOTSUPP;
+ }
+
+- asus->throttle_thermal_policy_mode = asus_wmi_platform_profile_to_vivo(asus, tp);
++ asus->throttle_thermal_policy_mode = tp;
+ return throttle_thermal_policy_write(asus);
+ }
+
+diff --git a/drivers/platform/x86/intel/bxtwc_tmu.c b/drivers/platform/x86/intel/bxtwc_tmu.c
+index d0e2a3c293b0b0..9ac801b929b93c 100644
+--- a/drivers/platform/x86/intel/bxtwc_tmu.c
++++ b/drivers/platform/x86/intel/bxtwc_tmu.c
+@@ -48,9 +48,8 @@ static irqreturn_t bxt_wcove_tmu_irq_handler(int irq, void *data)
+ static int bxt_wcove_tmu_probe(struct platform_device *pdev)
+ {
+ struct intel_soc_pmic *pmic = dev_get_drvdata(pdev->dev.parent);
+- struct regmap_irq_chip_data *regmap_irq_chip;
+ struct wcove_tmu *wctmu;
+- int ret, virq, irq;
++ int ret;
+
+ wctmu = devm_kzalloc(&pdev->dev, sizeof(*wctmu), GFP_KERNEL);
+ if (!wctmu)
+@@ -59,27 +58,18 @@ static int bxt_wcove_tmu_probe(struct platform_device *pdev)
+ wctmu->dev = &pdev->dev;
+ wctmu->regmap = pmic->regmap;
+
+- irq = platform_get_irq(pdev, 0);
+- if (irq < 0)
+- return irq;
++ wctmu->irq = platform_get_irq(pdev, 0);
++ if (wctmu->irq < 0)
++ return wctmu->irq;
+
+- regmap_irq_chip = pmic->irq_chip_data_tmu;
+- virq = regmap_irq_get_virq(regmap_irq_chip, irq);
+- if (virq < 0) {
+- dev_err(&pdev->dev,
+- "failed to get virtual interrupt=%d\n", irq);
+- return virq;
+- }
+-
+- ret = devm_request_threaded_irq(&pdev->dev, virq,
++ ret = devm_request_threaded_irq(&pdev->dev, wctmu->irq,
+ NULL, bxt_wcove_tmu_irq_handler,
+ IRQF_ONESHOT, "bxt_wcove_tmu", wctmu);
+ if (ret) {
+ dev_err(&pdev->dev, "request irq failed: %d,virq: %d\n",
+- ret, virq);
++ ret, wctmu->irq);
+ return ret;
+ }
+- wctmu->irq = virq;
+
+ /* Unmask TMU second level Wake & System alarm */
+ regmap_update_bits(wctmu->regmap, BXTWC_MTMUIRQ_REG,
+diff --git a/drivers/platform/x86/intel/pmt/class.c b/drivers/platform/x86/intel/pmt/class.c
+index c04bb7f97a4db1..c3ca2ac91b0569 100644
+--- a/drivers/platform/x86/intel/pmt/class.c
++++ b/drivers/platform/x86/intel/pmt/class.c
+@@ -59,10 +59,12 @@ pmt_memcpy64_fromio(void *to, const u64 __iomem *from, size_t count)
+ }
+
+ int pmt_telem_read_mmio(struct pci_dev *pdev, struct pmt_callbacks *cb, u32 guid, void *buf,
+- void __iomem *addr, u32 count)
++ void __iomem *addr, loff_t off, u32 count)
+ {
+ if (cb && cb->read_telem)
+- return cb->read_telem(pdev, guid, buf, count);
++ return cb->read_telem(pdev, guid, buf, off, count);
++
++ addr += off;
+
+ if (guid == GUID_SPR_PUNIT)
+ /* PUNIT on SPR only supports aligned 64-bit read */
+@@ -96,7 +98,7 @@ intel_pmt_read(struct file *filp, struct kobject *kobj,
+ count = entry->size - off;
+
+ count = pmt_telem_read_mmio(entry->ep->pcidev, entry->cb, entry->header.guid, buf,
+- entry->base + off, count);
++ entry->base, off, count);
+
+ return count;
+ }
+diff --git a/drivers/platform/x86/intel/pmt/class.h b/drivers/platform/x86/intel/pmt/class.h
+index a267ac96442301..b2006d57779d66 100644
+--- a/drivers/platform/x86/intel/pmt/class.h
++++ b/drivers/platform/x86/intel/pmt/class.h
+@@ -62,7 +62,7 @@ struct intel_pmt_namespace {
+ };
+
+ int pmt_telem_read_mmio(struct pci_dev *pdev, struct pmt_callbacks *cb, u32 guid, void *buf,
+- void __iomem *addr, u32 count);
++ void __iomem *addr, loff_t off, u32 count);
+ bool intel_pmt_is_early_client_hw(struct device *dev);
+ int intel_pmt_dev_create(struct intel_pmt_entry *entry,
+ struct intel_pmt_namespace *ns,
+diff --git a/drivers/platform/x86/intel/pmt/telemetry.c b/drivers/platform/x86/intel/pmt/telemetry.c
+index c9feac859e574c..0cea617c6c2e25 100644
+--- a/drivers/platform/x86/intel/pmt/telemetry.c
++++ b/drivers/platform/x86/intel/pmt/telemetry.c
+@@ -219,7 +219,7 @@ int pmt_telem_read(struct telem_endpoint *ep, u32 id, u64 *data, u32 count)
+ if (offset + NUM_BYTES_QWORD(count) > size)
+ return -EINVAL;
+
+- pmt_telem_read_mmio(ep->pcidev, ep->cb, ep->header.guid, data, ep->base + offset,
++ pmt_telem_read_mmio(ep->pcidev, ep->cb, ep->header.guid, data, ep->base, offset,
+ NUM_BYTES_QWORD(count));
+
+ return ep->present ? 0 : -EPIPE;
+diff --git a/drivers/platform/x86/panasonic-laptop.c b/drivers/platform/x86/panasonic-laptop.c
+index 2bf94d0ab32432..22ca70eb822718 100644
+--- a/drivers/platform/x86/panasonic-laptop.c
++++ b/drivers/platform/x86/panasonic-laptop.c
+@@ -614,8 +614,7 @@ static ssize_t eco_mode_show(struct device *dev, struct device_attribute *attr,
+ result = 1;
+ break;
+ default:
+- result = -EIO;
+- break;
++ return -EIO;
+ }
+ return sysfs_emit(buf, "%u\n", result);
+ }
+@@ -761,7 +760,12 @@ static ssize_t current_brightness_store(struct device *dev, struct device_attrib
+ static ssize_t cdpower_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+ {
+- return sysfs_emit(buf, "%d\n", get_optd_power_state());
++ int state = get_optd_power_state();
++
++ if (state < 0)
++ return state;
++
++ return sysfs_emit(buf, "%d\n", state);
+ }
+
+ static ssize_t cdpower_store(struct device *dev, struct device_attribute *attr,
+diff --git a/drivers/pmdomain/ti/ti_sci_pm_domains.c b/drivers/pmdomain/ti/ti_sci_pm_domains.c
+index 1510d5ddae3dec..0df3eb7ff09a3d 100644
+--- a/drivers/pmdomain/ti/ti_sci_pm_domains.c
++++ b/drivers/pmdomain/ti/ti_sci_pm_domains.c
+@@ -161,6 +161,7 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
+ break;
+
+ if (args.args_count >= 1 && args.np == dev->of_node) {
++ of_node_put(args.np);
+ if (args.args[0] > max_id) {
+ max_id = args.args[0];
+ } else {
+@@ -192,7 +193,10 @@ static int ti_sci_pm_domain_probe(struct platform_device *pdev)
+ pm_genpd_init(&pd->pd, NULL, true);
+
+ list_add(&pd->node, &pd_provider->pd_list);
++ } else {
++ of_node_put(args.np);
+ }
++
+ index++;
+ }
+ }
+diff --git a/drivers/power/reset/Kconfig b/drivers/power/reset/Kconfig
+index 389d5a193e5dce..f5fc33a8bf4431 100644
+--- a/drivers/power/reset/Kconfig
++++ b/drivers/power/reset/Kconfig
+@@ -79,6 +79,7 @@ config POWER_RESET_EP93XX
+ bool "Cirrus EP93XX reset driver" if COMPILE_TEST
+ depends on MFD_SYSCON
+ default ARCH_EP93XX
++ select AUXILIARY_BUS
+ help
+ This driver provides restart support for Cirrus EP93XX SoC.
+
+diff --git a/drivers/power/sequencing/Kconfig b/drivers/power/sequencing/Kconfig
+index c9f1cdb6652488..ddcc42a984921c 100644
+--- a/drivers/power/sequencing/Kconfig
++++ b/drivers/power/sequencing/Kconfig
+@@ -16,6 +16,7 @@ if POWER_SEQUENCING
+ config POWER_SEQUENCING_QCOM_WCN
+ tristate "Qualcomm WCN family PMU driver"
+ default m if ARCH_QCOM
++ depends on OF
+ help
+ Say Y here to enable the power sequencing driver for Qualcomm
+ WCN Bluetooth/WLAN chipsets.
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 750fda543308c8..51fb88aca0f9fd 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -449,9 +449,29 @@ static u8
+ [BQ27XXX_REG_AP] = 0x18,
+ BQ27XXX_DM_REG_ROWS,
+ },
++ bq27426_regs[BQ27XXX_REG_MAX] = {
++ [BQ27XXX_REG_CTRL] = 0x00,
++ [BQ27XXX_REG_TEMP] = 0x02,
++ [BQ27XXX_REG_INT_TEMP] = 0x1e,
++ [BQ27XXX_REG_VOLT] = 0x04,
++ [BQ27XXX_REG_AI] = 0x10,
++ [BQ27XXX_REG_FLAGS] = 0x06,
++ [BQ27XXX_REG_TTE] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_TTF] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_TTES] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_TTECP] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_NAC] = 0x08,
++ [BQ27XXX_REG_RC] = 0x0c,
++ [BQ27XXX_REG_FCC] = 0x0e,
++ [BQ27XXX_REG_CYCT] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_AE] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_SOC] = 0x1c,
++ [BQ27XXX_REG_DCAP] = INVALID_REG_ADDR,
++ [BQ27XXX_REG_AP] = 0x18,
++ BQ27XXX_DM_REG_ROWS,
++ },
+ #define bq27411_regs bq27421_regs
+ #define bq27425_regs bq27421_regs
+-#define bq27426_regs bq27421_regs
+ #define bq27441_regs bq27421_regs
+ #define bq27621_regs bq27421_regs
+ bq27z561_regs[BQ27XXX_REG_MAX] = {
+@@ -769,10 +789,23 @@ static enum power_supply_property bq27421_props[] = {
+ };
+ #define bq27411_props bq27421_props
+ #define bq27425_props bq27421_props
+-#define bq27426_props bq27421_props
+ #define bq27441_props bq27421_props
+ #define bq27621_props bq27421_props
+
++static enum power_supply_property bq27426_props[] = {
++ POWER_SUPPLY_PROP_STATUS,
++ POWER_SUPPLY_PROP_PRESENT,
++ POWER_SUPPLY_PROP_VOLTAGE_NOW,
++ POWER_SUPPLY_PROP_CURRENT_NOW,
++ POWER_SUPPLY_PROP_CAPACITY,
++ POWER_SUPPLY_PROP_CAPACITY_LEVEL,
++ POWER_SUPPLY_PROP_TEMP,
++ POWER_SUPPLY_PROP_TECHNOLOGY,
++ POWER_SUPPLY_PROP_CHARGE_FULL,
++ POWER_SUPPLY_PROP_CHARGE_NOW,
++ POWER_SUPPLY_PROP_MANUFACTURER,
++};
++
+ static enum power_supply_property bq27z561_props[] = {
+ POWER_SUPPLY_PROP_STATUS,
+ POWER_SUPPLY_PROP_PRESENT,
+diff --git a/drivers/power/supply/power_supply_core.c b/drivers/power/supply/power_supply_core.c
+index 49534458a9f7d3..73cc9c236e8333 100644
+--- a/drivers/power/supply/power_supply_core.c
++++ b/drivers/power/supply/power_supply_core.c
+@@ -484,8 +484,6 @@ EXPORT_SYMBOL_GPL(power_supply_get_by_name);
+ */
+ void power_supply_put(struct power_supply *psy)
+ {
+- might_sleep();
+-
+ atomic_dec(&psy->use_cnt);
+ put_device(&psy->dev);
+ }
+diff --git a/drivers/power/supply/rt9471.c b/drivers/power/supply/rt9471.c
+index c04af1ee89c675..67b86ac91a21dd 100644
+--- a/drivers/power/supply/rt9471.c
++++ b/drivers/power/supply/rt9471.c
+@@ -139,6 +139,19 @@ enum {
+ RT9471_PORTSTAT_DCP,
+ };
+
++enum {
++ RT9471_ICSTAT_SLEEP = 0,
++ RT9471_ICSTAT_VBUSRDY,
++ RT9471_ICSTAT_TRICKLECHG,
++ RT9471_ICSTAT_PRECHG,
++ RT9471_ICSTAT_FASTCHG,
++ RT9471_ICSTAT_IEOC,
++ RT9471_ICSTAT_BGCHG,
++ RT9471_ICSTAT_CHGDONE,
++ RT9471_ICSTAT_CHGFAULT,
++ RT9471_ICSTAT_OTG = 15,
++};
++
+ struct rt9471_chip {
+ struct device *dev;
+ struct regmap *regmap;
+@@ -153,8 +166,8 @@ struct rt9471_chip {
+ };
+
+ static const struct reg_field rt9471_reg_fields[F_MAX_FIELDS] = {
+- [F_WDT] = REG_FIELD(RT9471_REG_TOP, 0, 0),
+- [F_WDT_RST] = REG_FIELD(RT9471_REG_TOP, 1, 1),
++ [F_WDT] = REG_FIELD(RT9471_REG_TOP, 0, 1),
++ [F_WDT_RST] = REG_FIELD(RT9471_REG_TOP, 2, 2),
+ [F_CHG_EN] = REG_FIELD(RT9471_REG_FUNC, 0, 0),
+ [F_HZ] = REG_FIELD(RT9471_REG_FUNC, 5, 5),
+ [F_BATFET_DIS] = REG_FIELD(RT9471_REG_FUNC, 7, 7),
+@@ -255,31 +268,32 @@ static int rt9471_get_ieoc(struct rt9471_chip *chip, int *microamp)
+
+ static int rt9471_get_status(struct rt9471_chip *chip, int *status)
+ {
+- unsigned int chg_ready, chg_done, fault_stat;
++ unsigned int ic_stat;
+ int ret;
+
+- ret = regmap_field_read(chip->rm_fields[F_ST_CHG_RDY], &chg_ready);
+- if (ret)
+- return ret;
+-
+- ret = regmap_field_read(chip->rm_fields[F_ST_CHG_DONE], &chg_done);
++ ret = regmap_field_read(chip->rm_fields[F_IC_STAT], &ic_stat);
+ if (ret)
+ return ret;
+
+- ret = regmap_read(chip->regmap, RT9471_REG_STAT1, &fault_stat);
+- if (ret)
+- return ret;
+-
+- fault_stat &= RT9471_CHGFAULT_MASK;
+-
+- if (chg_ready && chg_done)
+- *status = POWER_SUPPLY_STATUS_FULL;
+- else if (chg_ready && fault_stat)
++ switch (ic_stat) {
++ case RT9471_ICSTAT_VBUSRDY:
++ case RT9471_ICSTAT_CHGFAULT:
+ *status = POWER_SUPPLY_STATUS_NOT_CHARGING;
+- else if (chg_ready && !fault_stat)
++ break;
++ case RT9471_ICSTAT_TRICKLECHG ... RT9471_ICSTAT_BGCHG:
+ *status = POWER_SUPPLY_STATUS_CHARGING;
+- else
++ break;
++ case RT9471_ICSTAT_CHGDONE:
++ *status = POWER_SUPPLY_STATUS_FULL;
++ break;
++ case RT9471_ICSTAT_SLEEP:
++ case RT9471_ICSTAT_OTG:
+ *status = POWER_SUPPLY_STATUS_DISCHARGING;
++ break;
++ default:
++ *status = POWER_SUPPLY_STATUS_UNKNOWN;
++ break;
++ }
+
+ return 0;
+ }
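The rewritten rt9471_get_status() above derives the supply status from the single F_IC_STAT field instead of three separate reads; the `case RT9471_ICSTAT_TRICKLECHG ... RT9471_ICSTAT_BGCHG` arm is a GNU C case range covering every active charging state. A compact sketch of the same mapping (enum values mirror the new driver enum, names shortened for the example):

    #include <stdio.h>

    enum { SLEEP, VBUSRDY, TRICKLE, PRE, FAST, IEOC, BG, DONE, FAULT, OTG = 15 };

    static const char *status(int ic_stat)
    {
        switch (ic_stat) {
        case VBUSRDY:
        case FAULT:
            return "not-charging";
        case TRICKLE ... BG:        /* GNU C case range */
            return "charging";
        case DONE:
            return "full";
        case SLEEP:
        case OTG:
            return "discharging";
        default:
            return "unknown";
        }
    }

    int main(void)
    {
        printf("%s\n", status(FAST)); /* charging */
        return 0;
    }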
+diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
+index 6e752e148b98cc..210368099a0642 100644
+--- a/drivers/pwm/core.c
++++ b/drivers/pwm/core.c
+@@ -75,7 +75,7 @@ static void pwm_apply_debug(struct pwm_device *pwm,
+ state->duty_cycle < state->period)
+ dev_warn(pwmchip_parent(chip), ".apply ignored .polarity\n");
+
+- if (state->enabled &&
++ if (state->enabled && s2.enabled &&
+ last->polarity == state->polarity &&
+ last->period > s2.period &&
+ last->period <= state->period)
+@@ -83,7 +83,11 @@ static void pwm_apply_debug(struct pwm_device *pwm,
+ ".apply didn't pick the best available period (requested: %llu, applied: %llu, possible: %llu)\n",
+ state->period, s2.period, last->period);
+
+- if (state->enabled && state->period < s2.period)
++ /*
++	 * Rounding the period up is fine only if duty_cycle is then 0,
++	 * because a flat line doesn't have a characteristic period.
++ */
++ if (state->enabled && s2.enabled && state->period < s2.period && s2.duty_cycle)
+ dev_warn(pwmchip_parent(chip),
+ ".apply is supposed to round down period (requested: %llu, applied: %llu)\n",
+ state->period, s2.period);
+@@ -99,7 +103,7 @@ static void pwm_apply_debug(struct pwm_device *pwm,
+ s2.duty_cycle, s2.period,
+ last->duty_cycle, last->period);
+
+- if (state->enabled && state->duty_cycle < s2.duty_cycle)
++ if (state->enabled && s2.enabled && state->duty_cycle < s2.duty_cycle)
+ dev_warn(pwmchip_parent(chip),
+ ".apply is supposed to round down duty_cycle (requested: %llu/%llu, applied: %llu/%llu)\n",
+ state->duty_cycle, state->period,
+diff --git a/drivers/pwm/pwm-imx27.c b/drivers/pwm/pwm-imx27.c
+index 9e2bbf5b4a8ce7..0375987194318f 100644
+--- a/drivers/pwm/pwm-imx27.c
++++ b/drivers/pwm/pwm-imx27.c
+@@ -26,6 +26,7 @@
+ #define MX3_PWMSR 0x04 /* PWM Status Register */
+ #define MX3_PWMSAR 0x0C /* PWM Sample Register */
+ #define MX3_PWMPR 0x10 /* PWM Period Register */
++#define MX3_PWMCNR 0x14 /* PWM Counter Register */
+
+ #define MX3_PWMCR_FWM GENMASK(27, 26)
+ #define MX3_PWMCR_STOPEN BIT(25)
+@@ -219,10 +220,12 @@ static void pwm_imx27_wait_fifo_slot(struct pwm_chip *chip,
+ static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ const struct pwm_state *state)
+ {
+- unsigned long period_cycles, duty_cycles, prescale;
++ unsigned long period_cycles, duty_cycles, prescale, period_us, tmp;
+ struct pwm_imx27_chip *imx = to_pwm_imx27_chip(chip);
+ unsigned long long c;
+ unsigned long long clkrate;
++ unsigned long flags;
++ int val;
+ int ret;
+ u32 cr;
+
+@@ -263,7 +266,98 @@ static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ pwm_imx27_sw_reset(chip);
+ }
+
+- writel(duty_cycles, imx->mmio_base + MX3_PWMSAR);
++ val = readl(imx->mmio_base + MX3_PWMPR);
++ val = val >= MX3_PWMPR_MAX ? MX3_PWMPR_MAX : val;
++ cr = readl(imx->mmio_base + MX3_PWMCR);
++ tmp = NSEC_PER_SEC * (u64)(val + 2) * MX3_PWMCR_PRESCALER_GET(cr);
++ tmp = DIV_ROUND_UP_ULL(tmp, clkrate);
++ period_us = DIV_ROUND_UP_ULL(tmp, 1000);
++
++ /*
++ * ERR051198:
++ * PWM: PWM output may not function correctly if the FIFO is empty when
++ * a new SAR value is programmed
++ *
++ * Description:
++ * When the PWM FIFO is empty, a new value programmed to the PWM Sample
++ * register (PWM_PWMSAR) will be directly applied even if the current
++ * timer period has not expired.
++ *
++ * If the new SAMPLE value programmed in the PWM_PWMSAR register is
++ * less than the previous value, and the PWM counter register
++ * (PWM_PWMCNR) that contains the current COUNT value is greater than
++ * the new programmed SAMPLE value, the current period will not flip
++ * the level. This may result in an output pulse with a duty cycle of
++ * 100%.
++ *
++ * Consider a change from
++ * ________
++ * / \______/
++ * ^ * ^
++ * to
++ * ____
++ * / \__________/
++ * ^ ^
++	 * At the time marked by *, the newly written value is applied directly
++	 * to SAR even though the current period is not over, if the FIFO is empty.
++ *
++ * ________ ____________________
++ * / \______/ \__________/
++ * ^ ^ * ^ ^
++ * |<-- old SAR -->| |<-- new SAR -->|
++ *
++	 * That is, the output stays active for a whole period.
++ *
++ * Workaround:
++	 * If the new SAR is smaller than the old one and the current counter
++	 * sits inside the errata window, write an extra old SAR into the FIFO
++	 * so that the new SAR only takes effect in the next period.
++	 *
++	 * Sometimes the period is quite long, e.g. over 1 second. If the old
++	 * SAR were pushed into the FIFO unconditionally, the new SAR would
++	 * always have to wait for the next period, which may be too long.
++	 *
++	 * Disable interrupts so that no IRQ or reschedule happens during the
++	 * operations above; otherwise the counter value read from the PWM
++	 * would be stale and the wrong action could be taken.
++	 *
++	 * Add a safety margin of 1.5us because the IO write itself takes some
++	 * time to complete.
++	 *
++	 * Use writel_relaxed() to minimize the interval between the two writes
++	 * to the SAR register, which raises the fastest PWM frequency supported.
++	 *
++	 * When the PWM period is longer than 2us (i.e. below 500kHz), this
++	 * workaround solves the problem. No software workaround is possible
++	 * when the PWM period is shorter than the IO write; then the best we
++	 * can do is fill the old data into the FIFO.
++ */
++ c = clkrate * 1500;
++ do_div(c, NSEC_PER_SEC);
++
++ local_irq_save(flags);
++ val = FIELD_GET(MX3_PWMSR_FIFOAV, readl_relaxed(imx->mmio_base + MX3_PWMSR));
++
++ if (duty_cycles < imx->duty_cycle && (cr & MX3_PWMCR_EN)) {
++ if (period_us < 2) { /* 2us = 500 kHz */
++ /* Best effort attempt to fix up >500 kHz case */
++ udelay(3 * period_us);
++ writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR);
++ writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR);
++ } else if (val < MX3_PWMSR_FIFOAV_2WORDS) {
++ val = readl_relaxed(imx->mmio_base + MX3_PWMCNR);
++ /*
++			 * If the counter is close to the period, it may roll over
++			 * by the time of the next IO write.
++ */
++ if ((val + c >= duty_cycles && val < imx->duty_cycle) ||
++ val + c >= period_cycles)
++ writel_relaxed(imx->duty_cycle, imx->mmio_base + MX3_PWMSAR);
++ }
++ }
++ writel_relaxed(duty_cycles, imx->mmio_base + MX3_PWMSAR);
++ local_irq_restore(flags);
++
+ writel(period_cycles, imx->mmio_base + MX3_PWMPR);
+
+ /*
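In the code above, the window check converts the 1.5 us IO-write allowance into counter ticks via c = clkrate * 1500 / NSEC_PER_SEC. A standalone arithmetic check, assuming a hypothetical 66 MHz PWM clock (the real rate comes from the clock framework at runtime):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* c = clkrate * 1500 / NSEC_PER_SEC, as in the workaround. */
        uint64_t clkrate = 66000000;             /* assumed 66 MHz */
        uint64_t c = clkrate * 1500 / 1000000000;
        /* 99 ticks: a SAR write landing within 99 ticks of the duty-cycle
         * or period boundary is treated as racing with the rollover. */
        printf("margin = %llu ticks\n", (unsigned long long)c);
        return 0;
    }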
+diff --git a/drivers/regulator/qcom_smd-regulator.c b/drivers/regulator/qcom_smd-regulator.c
+index 28e7ce60cb617c..25ed9f713974ba 100644
+--- a/drivers/regulator/qcom_smd-regulator.c
++++ b/drivers/regulator/qcom_smd-regulator.c
+@@ -11,7 +11,7 @@
+ #include <linux/regulator/of_regulator.h>
+ #include <linux/soc/qcom/smd-rpm.h>
+
+-struct qcom_smd_rpm *smd_vreg_rpm;
++static struct qcom_smd_rpm *smd_vreg_rpm;
+
+ struct qcom_rpm_reg {
+ struct device *dev;
+diff --git a/drivers/regulator/rk808-regulator.c b/drivers/regulator/rk808-regulator.c
+index 01a8d04879184c..37476d2558fda7 100644
+--- a/drivers/regulator/rk808-regulator.c
++++ b/drivers/regulator/rk808-regulator.c
+@@ -1853,7 +1853,7 @@ static int rk808_regulator_dt_parse_pdata(struct device *dev,
+ }
+
+ if (!pdata->dvs_gpio[i]) {
+- dev_info(dev, "there is no dvs%d gpio\n", i);
++ dev_dbg(dev, "there is no dvs%d gpio\n", i);
+ continue;
+ }
+
+@@ -1889,12 +1889,6 @@ static int rk808_regulator_probe(struct platform_device *pdev)
+ if (!pdata)
+ return -ENOMEM;
+
+- ret = rk808_regulator_dt_parse_pdata(&pdev->dev, regmap, pdata);
+- if (ret < 0)
+- return ret;
+-
+- platform_set_drvdata(pdev, pdata);
+-
+ switch (rk808->variant) {
+ case RK805_ID:
+ regulators = rk805_reg;
+@@ -1905,6 +1899,11 @@ static int rk808_regulator_probe(struct platform_device *pdev)
+ nregulators = ARRAY_SIZE(rk806_reg);
+ break;
+ case RK808_ID:
++ /* DVS0/1 GPIOs are supported on the RK808 only */
++ ret = rk808_regulator_dt_parse_pdata(&pdev->dev, regmap, pdata);
++ if (ret < 0)
++ return ret;
++
+ regulators = rk808_reg;
+ nregulators = RK808_NUM_REGULATORS;
+ break;
+@@ -1930,6 +1929,8 @@ static int rk808_regulator_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+
++ platform_set_drvdata(pdev, pdata);
++
+ config.dev = &pdev->dev;
+ config.driver_data = pdata;
+ config.regmap = regmap;
+diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
+index 955e4e38477e6f..62f8548fb46a5d 100644
+--- a/drivers/remoteproc/Kconfig
++++ b/drivers/remoteproc/Kconfig
+@@ -341,9 +341,9 @@ config TI_K3_DSP_REMOTEPROC
+
+ config TI_K3_M4_REMOTEPROC
+ tristate "TI K3 M4 remoteproc support"
+- depends on ARCH_OMAP2PLUS || ARCH_K3
+- select MAILBOX
+- select OMAP2PLUS_MBOX
++ depends on ARCH_K3 || COMPILE_TEST
++ depends on TI_SCI_PROTOCOL || (COMPILE_TEST && TI_SCI_PROTOCOL=n)
++ depends on OMAP2PLUS_MBOX
+ help
+ Say m here to support TI's M4 remote processor subsystems
+ on various TI K3 family of SoCs through the remote processor
+diff --git a/drivers/remoteproc/qcom_q6v5_adsp.c b/drivers/remoteproc/qcom_q6v5_adsp.c
+index 572dcb0f055b76..223f6ca0745d3d 100644
+--- a/drivers/remoteproc/qcom_q6v5_adsp.c
++++ b/drivers/remoteproc/qcom_q6v5_adsp.c
+@@ -734,15 +734,22 @@ static int adsp_probe(struct platform_device *pdev)
+ desc->ssctl_id);
+ if (IS_ERR(adsp->sysmon)) {
+ ret = PTR_ERR(adsp->sysmon);
+- goto disable_pm;
++ goto deinit_remove_glink_pdm_ssr;
+ }
+
+ ret = rproc_add(rproc);
+ if (ret)
+- goto disable_pm;
++ goto remove_sysmon;
+
+ return 0;
+
++remove_sysmon:
++ qcom_remove_sysmon_subdev(adsp->sysmon);
++deinit_remove_glink_pdm_ssr:
++ qcom_q6v5_deinit(&adsp->q6v5);
++ qcom_remove_glink_subdev(rproc, &adsp->glink_subdev);
++ qcom_remove_pdm_subdev(rproc, &adsp->pdm_subdev);
++ qcom_remove_ssr_subdev(rproc, &adsp->ssr_subdev);
+ disable_pm:
+ qcom_rproc_pds_detach(adsp);
+
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index 2a42215ce8e07b..32c3531b20c70a 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -1162,6 +1162,9 @@ static int q6v5_mba_load(struct q6v5 *qproc)
+ goto disable_active_clks;
+ }
+
++ if (qproc->has_mba_logs)
++ qcom_pil_info_store("mba", qproc->mba_phys, MBA_LOG_SIZE);
++
+ writel(qproc->mba_phys, qproc->rmb_base + RMB_MBA_IMAGE_REG);
+ if (qproc->dp_size) {
+ writel(qproc->mba_phys + SZ_1M, qproc->rmb_base + RMB_PMI_CODE_START_REG);
+@@ -1172,9 +1175,6 @@ static int q6v5_mba_load(struct q6v5 *qproc)
+ if (ret)
+ goto reclaim_mba;
+
+- if (qproc->has_mba_logs)
+- qcom_pil_info_store("mba", qproc->mba_phys, MBA_LOG_SIZE);
+-
+ ret = q6v5_rmb_mba_wait(qproc, 0, 5000);
+ if (ret == -ETIMEDOUT) {
+ dev_err(qproc->dev, "MBA boot timed out\n");
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index ef82835e98a4ef..f4f4b3df3884ef 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -759,16 +759,16 @@ static int adsp_probe(struct platform_device *pdev)
+
+ ret = adsp_init_clock(adsp);
+ if (ret)
+- goto free_rproc;
++ goto unassign_mem;
+
+ ret = adsp_init_regulator(adsp);
+ if (ret)
+- goto free_rproc;
++ goto unassign_mem;
+
+ ret = adsp_pds_attach(&pdev->dev, adsp->proxy_pds,
+ desc->proxy_pd_names);
+ if (ret < 0)
+- goto free_rproc;
++ goto unassign_mem;
+ adsp->proxy_pd_count = ret;
+
+ ret = qcom_q6v5_init(&adsp->q6v5, pdev, rproc, desc->crash_reason_smem, desc->load_state,
+@@ -784,18 +784,28 @@ static int adsp_probe(struct platform_device *pdev)
+ desc->ssctl_id);
+ if (IS_ERR(adsp->sysmon)) {
+ ret = PTR_ERR(adsp->sysmon);
+- goto detach_proxy_pds;
++ goto deinit_remove_pdm_smd_glink;
+ }
+
+ qcom_add_ssr_subdev(rproc, &adsp->ssr_subdev, desc->ssr_name);
+ ret = rproc_add(rproc);
+ if (ret)
+- goto detach_proxy_pds;
++ goto remove_ssr_sysmon;
+
+ return 0;
+
++remove_ssr_sysmon:
++ qcom_remove_ssr_subdev(rproc, &adsp->ssr_subdev);
++ qcom_remove_sysmon_subdev(adsp->sysmon);
++deinit_remove_pdm_smd_glink:
++ qcom_remove_pdm_subdev(rproc, &adsp->pdm_subdev);
++ qcom_remove_smd_subdev(rproc, &adsp->smd_subdev);
++ qcom_remove_glink_subdev(rproc, &adsp->glink_subdev);
++ qcom_q6v5_deinit(&adsp->q6v5);
+ detach_proxy_pds:
+ adsp_pds_detach(adsp, adsp->proxy_pds, adsp->proxy_pd_count);
++unassign_mem:
++ adsp_unassign_memory_region(adsp);
+ free_rproc:
+ device_init_wakeup(adsp->dev, false);
+
+@@ -907,6 +917,7 @@ static const struct adsp_data sm8250_adsp_resource = {
+ .crash_reason_smem = 423,
+ .firmware_name = "adsp.mdt",
+ .pas_id = 1,
++ .minidump_id = 5,
+ .auto_boot = true,
+ .proxy_pd_names = (char*[]){
+ "lcx",
+@@ -1124,6 +1135,7 @@ static const struct adsp_data sm8350_cdsp_resource = {
+ .crash_reason_smem = 601,
+ .firmware_name = "cdsp.mdt",
+ .pas_id = 18,
++ .minidump_id = 7,
+ .auto_boot = true,
+ .proxy_pd_names = (char*[]){
+ "cx",
+diff --git a/drivers/rpmsg/qcom_glink_native.c b/drivers/rpmsg/qcom_glink_native.c
+index d3af1dfa3c7d71..a2f9d85c7156dc 100644
+--- a/drivers/rpmsg/qcom_glink_native.c
++++ b/drivers/rpmsg/qcom_glink_native.c
+@@ -1204,7 +1204,8 @@ void qcom_glink_native_rx(struct qcom_glink *glink)
+ ret = qcom_glink_rx_open_ack(glink, param1);
+ break;
+ case GLINK_CMD_OPEN:
+- ret = qcom_glink_rx_defer(glink, param2);
++ /* upper 16 bits of param2 are the "prio" field */
++ ret = qcom_glink_rx_defer(glink, param2 & 0xffff);
+ break;
+ case GLINK_CMD_TX_DATA:
+ case GLINK_CMD_TX_DATA_CONT:
+diff --git a/drivers/rtc/interface.c b/drivers/rtc/interface.c
+index cca650b2e0b94d..aaf76406cd7d7d 100644
+--- a/drivers/rtc/interface.c
++++ b/drivers/rtc/interface.c
+@@ -904,13 +904,18 @@ void rtc_timer_do_work(struct work_struct *work)
+ struct timerqueue_node *next;
+ ktime_t now;
+ struct rtc_time tm;
++ int err;
+
+ struct rtc_device *rtc =
+ container_of(work, struct rtc_device, irqwork);
+
+ mutex_lock(&rtc->ops_lock);
+ again:
+- __rtc_read_time(rtc, &tm);
++ err = __rtc_read_time(rtc, &tm);
++ if (err) {
++ mutex_unlock(&rtc->ops_lock);
++ return;
++ }
+ now = rtc_tm_to_ktime(tm);
+ while ((next = timerqueue_getnext(&rtc->timerqueue))) {
+ if (next->expires > now)
+diff --git a/drivers/rtc/rtc-ab-eoz9.c b/drivers/rtc/rtc-ab-eoz9.c
+index 02f7d071128772..e17bce9a27468b 100644
+--- a/drivers/rtc/rtc-ab-eoz9.c
++++ b/drivers/rtc/rtc-ab-eoz9.c
+@@ -396,13 +396,6 @@ static int abeoz9z3_temp_read(struct device *dev,
+ if (ret < 0)
+ return ret;
+
+- if ((val & ABEOZ9_REG_CTRL_STATUS_V1F) ||
+- (val & ABEOZ9_REG_CTRL_STATUS_V2F)) {
+- dev_err(dev,
+- "thermometer might be disabled due to low voltage\n");
+- return -EINVAL;
+- }
+-
+ switch (attr) {
+ case hwmon_temp_input:
+ ret = regmap_read(regmap, ABEOZ9_REG_REG_TEMP, &val);
+diff --git a/drivers/rtc/rtc-abx80x.c b/drivers/rtc/rtc-abx80x.c
+index 1298962402ff47..3fee27914ba805 100644
+--- a/drivers/rtc/rtc-abx80x.c
++++ b/drivers/rtc/rtc-abx80x.c
+@@ -39,7 +39,7 @@
+ #define ABX8XX_REG_STATUS 0x0f
+ #define ABX8XX_STATUS_AF BIT(2)
+ #define ABX8XX_STATUS_BLF BIT(4)
+-#define ABX8XX_STATUS_WDT BIT(6)
++#define ABX8XX_STATUS_WDT BIT(5)
+
+ #define ABX8XX_REG_CTRL1 0x10
+ #define ABX8XX_CTRL_WRITE BIT(0)
+diff --git a/drivers/rtc/rtc-rzn1.c b/drivers/rtc/rtc-rzn1.c
+index 56ebbd4d048147..8570c8e63d70c3 100644
+--- a/drivers/rtc/rtc-rzn1.c
++++ b/drivers/rtc/rtc-rzn1.c
+@@ -111,8 +111,8 @@ static int rzn1_rtc_read_time(struct device *dev, struct rtc_time *tm)
+ tm->tm_hour = bcd2bin(tm->tm_hour);
+ tm->tm_wday = bcd2bin(tm->tm_wday);
+ tm->tm_mday = bcd2bin(tm->tm_mday);
+- tm->tm_mon = bcd2bin(tm->tm_mon);
+- tm->tm_year = bcd2bin(tm->tm_year);
++ tm->tm_mon = bcd2bin(tm->tm_mon) - 1;
++ tm->tm_year = bcd2bin(tm->tm_year) + 100;
+
+ return 0;
+ }
+@@ -128,8 +128,8 @@ static int rzn1_rtc_set_time(struct device *dev, struct rtc_time *tm)
+ tm->tm_hour = bin2bcd(tm->tm_hour);
+ tm->tm_wday = bin2bcd(rzn1_rtc_tm_to_wday(tm));
+ tm->tm_mday = bin2bcd(tm->tm_mday);
+- tm->tm_mon = bin2bcd(tm->tm_mon);
+- tm->tm_year = bin2bcd(tm->tm_year);
++ tm->tm_mon = bin2bcd(tm->tm_mon + 1);
++ tm->tm_year = bin2bcd(tm->tm_year - 100);
+
+ val = readl(rtc->base + RZN1_RTC_CTL2);
+ if (!(val & RZN1_RTC_CTL2_STOPPED)) {
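The rzn1 fix accounts for struct rtc_time conventions: tm_mon counts months from 0 and tm_year counts years since 1900, while the hardware stores a 1-based BCD month and a two-digit BCD year. A round-trip sketch with made-up register values:

    #include <stdio.h>

    static int bcd2bin(int b) { return (b >> 4) * 10 + (b & 0x0f); }
    static int bin2bcd(int v) { return ((v / 10) << 4) | (v % 10); }

    int main(void)
    {
        /* Hardware reads 0x12 (December) and 0x24 (year '24). */
        int tm_mon  = bcd2bin(0x12) - 1;    /* 11: months run 0..11   */
        int tm_year = bcd2bin(0x24) + 100;  /* 124: 1900 + 124 = 2024 */
        printf("%d %d\n", tm_mon, tm_year);
        /* Writing back reverses both offsets. */
        printf("0x%02x 0x%02x\n", bin2bcd(tm_mon + 1), bin2bcd(tm_year - 100));
        return 0;
    }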
+diff --git a/drivers/rtc/rtc-st-lpc.c b/drivers/rtc/rtc-st-lpc.c
+index d492a2d26600c1..c6d4522411b312 100644
+--- a/drivers/rtc/rtc-st-lpc.c
++++ b/drivers/rtc/rtc-st-lpc.c
+@@ -218,15 +218,14 @@ static int st_rtc_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+
+- ret = devm_request_irq(&pdev->dev, rtc->irq, st_rtc_handler, 0,
+- pdev->name, rtc);
++ ret = devm_request_irq(&pdev->dev, rtc->irq, st_rtc_handler,
++ IRQF_NO_AUTOEN, pdev->name, rtc);
+ if (ret) {
+ dev_err(&pdev->dev, "Failed to request irq %i\n", rtc->irq);
+ return ret;
+ }
+
+ enable_irq_wake(rtc->irq);
+- disable_irq(rtc->irq);
+
+ rtc->clk = devm_clk_get_enabled(&pdev->dev, NULL);
+ if (IS_ERR(rtc->clk))
+diff --git a/drivers/s390/cio/cio.c b/drivers/s390/cio/cio.c
+index c32e818f06dbad..ad17ab0a931494 100644
+--- a/drivers/s390/cio/cio.c
++++ b/drivers/s390/cio/cio.c
+@@ -459,10 +459,14 @@ int cio_update_schib(struct subchannel *sch)
+ {
+ struct schib schib;
+
+- if (stsch(sch->schid, &schib) || !css_sch_is_valid(&schib))
++ if (stsch(sch->schid, &schib))
+ return -ENODEV;
+
+ memcpy(&sch->schib, &schib, sizeof(schib));
++
++ if (!css_sch_is_valid(&schib))
++ return -EACCES;
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(cio_update_schib);
+diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
+index b0f23242e17145..9498825d9c7a5c 100644
+--- a/drivers/s390/cio/device.c
++++ b/drivers/s390/cio/device.c
+@@ -1387,14 +1387,18 @@ enum io_sch_action {
+ IO_SCH_VERIFY,
+ IO_SCH_DISC,
+ IO_SCH_NOP,
++ IO_SCH_ORPH_CDEV,
+ };
+
+ static enum io_sch_action sch_get_action(struct subchannel *sch)
+ {
+ struct ccw_device *cdev;
++ int rc;
+
+ cdev = sch_get_cdev(sch);
+- if (cio_update_schib(sch)) {
++ rc = cio_update_schib(sch);
++
++ if (rc == -ENODEV) {
+ /* Not operational. */
+ if (!cdev)
+ return IO_SCH_UNREG;
+@@ -1402,6 +1406,16 @@ static enum io_sch_action sch_get_action(struct subchannel *sch)
+ return IO_SCH_UNREG;
+ return IO_SCH_ORPH_UNREG;
+ }
++
++	/* Avoid unregistering subchannels without a working device. */
++ if (rc == -EACCES) {
++ if (!cdev)
++ return IO_SCH_NOP;
++ if (ccw_device_notify(cdev, CIO_GONE) != NOTIFY_OK)
++ return IO_SCH_UNREG_CDEV;
++ return IO_SCH_ORPH_CDEV;
++ }
++
+ /* Operational. */
+ if (!cdev)
+ return IO_SCH_ATTACH;
+@@ -1471,6 +1485,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
+ rc = 0;
+ goto out_unlock;
+ case IO_SCH_ORPH_UNREG:
++ case IO_SCH_ORPH_CDEV:
+ case IO_SCH_ORPH_ATTACH:
+ ccw_device_set_disconnected(cdev);
+ break;
+@@ -1502,6 +1517,7 @@ static int io_subchannel_sch_event(struct subchannel *sch, int process)
+ /* Handle attached ccw device. */
+ switch (action) {
+ case IO_SCH_ORPH_UNREG:
++ case IO_SCH_ORPH_CDEV:
+ case IO_SCH_ORPH_ATTACH:
+ /* Move ccw device to orphanage. */
+ rc = ccw_device_move_to_orph(cdev);
+diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
+index 62eca9419ad76e..21fa7ac849e5c3 100644
+--- a/drivers/s390/virtio/virtio_ccw.c
++++ b/drivers/s390/virtio/virtio_ccw.c
+@@ -58,6 +58,8 @@ struct virtio_ccw_device {
+ struct virtio_device vdev;
+ __u8 config[VIRTIO_CCW_CONFIG_SIZE];
+ struct ccw_device *cdev;
++ /* we make cdev->dev.dma_parms point to this */
++ struct device_dma_parameters dma_parms;
+ __u32 curr_io;
+ int err;
+ unsigned int revision; /* Transport revision */
+@@ -1303,6 +1305,7 @@ static int virtio_ccw_offline(struct ccw_device *cdev)
+ unregister_virtio_device(&vcdev->vdev);
+ spin_lock_irqsave(get_ccwdev_lock(cdev), flags);
+ dev_set_drvdata(&cdev->dev, NULL);
++ cdev->dev.dma_parms = NULL;
+ spin_unlock_irqrestore(get_ccwdev_lock(cdev), flags);
+ return 0;
+ }
+@@ -1366,6 +1369,7 @@ static int virtio_ccw_online(struct ccw_device *cdev)
+ }
+ vcdev->vdev.dev.parent = &cdev->dev;
+ vcdev->cdev = cdev;
++ cdev->dev.dma_parms = &vcdev->dma_parms;
+ vcdev->dma_area = ccw_device_dma_zalloc(vcdev->cdev,
+ sizeof(*vcdev->dma_area),
+ &vcdev->dma_area_addr);
+diff --git a/drivers/scsi/bfa/bfad.c b/drivers/scsi/bfa/bfad.c
+index 62cb7a864fd53d..70c7515a822f52 100644
+--- a/drivers/scsi/bfa/bfad.c
++++ b/drivers/scsi/bfa/bfad.c
+@@ -1693,9 +1693,8 @@ bfad_init(void)
+
+ error = bfad_im_module_init();
+ if (error) {
+- error = -ENOMEM;
+ printk(KERN_WARNING "bfad_im_module_init failure\n");
+- goto ext;
++ return -ENOMEM;
+ }
+
+ if (strcmp(FCPI_NAME, " fcpim") == 0)
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_main.c b/drivers/scsi/hisi_sas/hisi_sas_main.c
+index 6219807ce3b9e1..ffd15fa4f9e596 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_main.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_main.c
+@@ -1545,10 +1545,16 @@ void hisi_sas_controller_reset_done(struct hisi_hba *hisi_hba)
+ /* Init and wait for PHYs to come up and all libsas event finished. */
+ for (phy_no = 0; phy_no < hisi_hba->n_phy; phy_no++) {
+ struct hisi_sas_phy *phy = &hisi_hba->phy[phy_no];
++ struct asd_sas_phy *sas_phy = &phy->sas_phy;
+
+- if (!(hisi_hba->phy_state & BIT(phy_no)))
++ if (!sas_phy->phy->enabled)
+ continue;
+
++ if (!(hisi_hba->phy_state & BIT(phy_no))) {
++ hisi_sas_phy_enable(hisi_hba, phy_no, 1);
++ continue;
++ }
++
+ async_schedule_domain(hisi_sas_async_init_wait_phyup,
+ phy, &async);
+ }
+diff --git a/drivers/scsi/qedf/qedf_main.c b/drivers/scsi/qedf/qedf_main.c
+index cf13148ba281c1..e979ec1478c184 100644
+--- a/drivers/scsi/qedf/qedf_main.c
++++ b/drivers/scsi/qedf/qedf_main.c
+@@ -2738,6 +2738,7 @@ static int qedf_alloc_and_init_sb(struct qedf_ctx *qedf,
+ sb_id, QED_SB_TYPE_STORAGE);
+
+ if (ret) {
++ dma_free_coherent(&qedf->pdev->dev, sizeof(*sb_virt), sb_virt, sb_phys);
+ QEDF_ERR(&qedf->dbg_ctx,
+ "Status block initialization failed (0x%x) for id = %d.\n",
+ ret, sb_id);
+diff --git a/drivers/scsi/qedi/qedi_main.c b/drivers/scsi/qedi/qedi_main.c
+index c5aec26019d6ab..628d59dda20cc4 100644
+--- a/drivers/scsi/qedi/qedi_main.c
++++ b/drivers/scsi/qedi/qedi_main.c
+@@ -369,6 +369,7 @@ static int qedi_alloc_and_init_sb(struct qedi_ctx *qedi,
+ ret = qedi_ops->common->sb_init(qedi->cdev, sb_info, sb_virt, sb_phys,
+ sb_id, QED_SB_TYPE_STORAGE);
+ if (ret) {
++ dma_free_coherent(&qedi->pdev->dev, sizeof(*sb_virt), sb_virt, sb_phys);
+ QEDI_ERR(&qedi->dbg_ctx,
+ "Status block initialization failed for id = %d.\n",
+ sb_id);
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index f86be197fedd04..84334ab39c8107 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -307,10 +307,6 @@ sg_open(struct inode *inode, struct file *filp)
+ if (retval)
+ goto sg_put;
+
+- retval = scsi_autopm_get_device(device);
+- if (retval)
+- goto sdp_put;
+-
+ /* scsi_block_when_processing_errors() may block so bypass
+ * check if O_NONBLOCK. Permits SCSI commands to be issued
+ * during error recovery. Tread carefully. */
+@@ -318,7 +314,7 @@ sg_open(struct inode *inode, struct file *filp)
+ scsi_block_when_processing_errors(device))) {
+ retval = -ENXIO;
+ /* we are in error recovery for this device */
+- goto error_out;
++ goto sdp_put;
+ }
+
+ mutex_lock(&sdp->open_rel_lock);
+@@ -371,8 +367,6 @@ sg_open(struct inode *inode, struct file *filp)
+ }
+ error_mutex_locked:
+ mutex_unlock(&sdp->open_rel_lock);
+-error_out:
+- scsi_autopm_put_device(device);
+ sdp_put:
+ kref_put(&sdp->d_ref, sg_device_destroy);
+ scsi_device_put(device);
+@@ -392,7 +386,6 @@ sg_release(struct inode *inode, struct file *filp)
+ SCSI_LOG_TIMEOUT(3, sg_printk(KERN_INFO, sdp, "sg_release\n"));
+
+ mutex_lock(&sdp->open_rel_lock);
+- scsi_autopm_put_device(sdp->device);
+ kref_put(&sfp->f_ref, sg_remove_sfp);
+ sdp->open_cnt--;
+
+diff --git a/drivers/sh/intc/core.c b/drivers/sh/intc/core.c
+index 74350b5871dc8e..ea571eeb307878 100644
+--- a/drivers/sh/intc/core.c
++++ b/drivers/sh/intc/core.c
+@@ -209,7 +209,6 @@ int __init register_intc_controller(struct intc_desc *desc)
+ goto err0;
+
+ INIT_LIST_HEAD(&d->list);
+- list_add_tail(&d->list, &intc_list);
+
+ raw_spin_lock_init(&d->lock);
+ INIT_RADIX_TREE(&d->tree, GFP_ATOMIC);
+@@ -369,6 +368,7 @@ int __init register_intc_controller(struct intc_desc *desc)
+
+ d->skip_suspend = desc->skip_syscore_suspend;
+
++ list_add_tail(&d->list, &intc_list);
+ nr_intc_controllers++;
+
+ return 0;
+diff --git a/drivers/soc/fsl/qe/qmc.c b/drivers/soc/fsl/qe/qmc.c
+index 19cc581b06d0c8..b3f773e135fd49 100644
+--- a/drivers/soc/fsl/qe/qmc.c
++++ b/drivers/soc/fsl/qe/qmc.c
+@@ -2004,8 +2004,10 @@ static int qmc_probe(struct platform_device *pdev)
+
+ /* Set the irq handler */
+ irq = platform_get_irq(pdev, 0);
+- if (irq < 0)
++ if (irq < 0) {
++ ret = irq;
+ goto err_exit_xcc;
++ }
+ ret = devm_request_irq(qmc->dev, irq, qmc_irq_handler, 0, "qmc", qmc);
+ if (ret < 0)
+ goto err_exit_xcc;
+diff --git a/drivers/soc/fsl/rcpm.c b/drivers/soc/fsl/rcpm.c
+index 3d0cae30c769ea..06bd94b29fb321 100644
+--- a/drivers/soc/fsl/rcpm.c
++++ b/drivers/soc/fsl/rcpm.c
+@@ -36,6 +36,7 @@ static void copy_ippdexpcr1_setting(u32 val)
+ return;
+
+ regs = of_iomap(np, 0);
++ of_node_put(np);
+ if (!regs)
+ return;
+
+diff --git a/drivers/soc/qcom/qcom-geni-se.c b/drivers/soc/qcom/qcom-geni-se.c
+index 2e8f24d5da80b6..4cb959106efa9e 100644
+--- a/drivers/soc/qcom/qcom-geni-se.c
++++ b/drivers/soc/qcom/qcom-geni-se.c
+@@ -585,7 +585,8 @@ int geni_se_clk_tbl_get(struct geni_se *se, unsigned long **tbl)
+
+ for (i = 0; i < MAX_CLK_PERF_LEVEL; i++) {
+ freq = clk_round_rate(se->clk, freq + 1);
+- if (freq <= 0 || freq == se->clk_perf_tbl[i - 1])
++ if (freq <= 0 ||
++ (i > 0 && freq == se->clk_perf_tbl[i - 1]))
+ break;
+ se->clk_perf_tbl[i] = freq;
+ }
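The added `i > 0` guard above keeps the first iteration from comparing against clk_perf_tbl[-1]. The loop's overall shape, with a stubbed clk_round_rate() standing in for the real clock framework call:

    #include <stdio.h>

    /* Stand-in for clk_round_rate(): rounds up in 19.2 MHz steps, capped. */
    static long round_rate(long req)
    {
        long step = 19200000, max = 5 * step;
        long r = ((req + step - 1) / step) * step;
        return r > max ? max : r;
    }

    int main(void)
    {
        long tbl[8], freq = 0;
        int i;

        for (i = 0; i < 8; i++) {
            freq = round_rate(freq + 1);
            if (freq <= 0 || (i > 0 && freq == tbl[i - 1]))
                break;  /* without i > 0 this read tbl[-1] when i == 0 */
            tbl[i] = freq;
        }
        printf("%d entries\n", i); /* 5 distinct rates */
        return 0;
    }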
+diff --git a/drivers/soc/ti/smartreflex.c b/drivers/soc/ti/smartreflex.c
+index d6219060b616d6..38add2ab561372 100644
+--- a/drivers/soc/ti/smartreflex.c
++++ b/drivers/soc/ti/smartreflex.c
+@@ -202,10 +202,10 @@ static int sr_late_init(struct omap_sr *sr_info)
+
+ if (sr_class->notify && sr_class->notify_flags && sr_info->irq) {
+ ret = devm_request_irq(&sr_info->pdev->dev, sr_info->irq,
+- sr_interrupt, 0, sr_info->name, sr_info);
++ sr_interrupt, IRQF_NO_AUTOEN,
++ sr_info->name, sr_info);
+ if (ret)
+ goto error;
+- disable_irq(sr_info->irq);
+ }
+
+ return ret;
+diff --git a/drivers/soc/xilinx/xlnx_event_manager.c b/drivers/soc/xilinx/xlnx_event_manager.c
+index f529e1346247cc..85df6b9c04ee69 100644
+--- a/drivers/soc/xilinx/xlnx_event_manager.c
++++ b/drivers/soc/xilinx/xlnx_event_manager.c
+@@ -188,8 +188,10 @@ static int xlnx_add_cb_for_suspend(event_cb_func_t cb_fun, void *data)
+ INIT_LIST_HEAD(&eve_data->cb_list_head);
+
+ cb_data = kmalloc(sizeof(*cb_data), GFP_KERNEL);
+- if (!cb_data)
++ if (!cb_data) {
++ kfree(eve_data);
+ return -ENOMEM;
++ }
+ cb_data->eve_cb = cb_fun;
+ cb_data->agent_data = data;
+
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index 95cdfc28361ef7..caecb2ad2a150d 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -183,7 +183,7 @@ static const char *atmel_qspi_reg_name(u32 offset, char *tmp, size_t sz)
+ case QSPI_MR:
+ return "MR";
+ case QSPI_RD:
+- return "MR";
++ return "RD";
+ case QSPI_TD:
+ return "TD";
+ case QSPI_SR:
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index 977e8b55c82b7d..9573b8fa4fbfc6 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -891,7 +891,7 @@ static int fsl_lpspi_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- ret = devm_request_irq(&pdev->dev, irq, fsl_lpspi_isr, 0,
++ ret = devm_request_irq(&pdev->dev, irq, fsl_lpspi_isr, IRQF_NO_AUTOEN,
+ dev_name(&pdev->dev), fsl_lpspi);
+ if (ret) {
+ dev_err(&pdev->dev, "can't get irq%d: %d\n", irq, ret);
+@@ -948,14 +948,10 @@ static int fsl_lpspi_probe(struct platform_device *pdev)
+ ret = fsl_lpspi_dma_init(&pdev->dev, fsl_lpspi, controller);
+ if (ret == -EPROBE_DEFER)
+ goto out_pm_get;
+- if (ret < 0)
++ if (ret < 0) {
+ dev_warn(&pdev->dev, "dma setup error %d, use pio\n", ret);
+- else
+- /*
+- * disable LPSPI module IRQ when enable DMA mode successfully,
+- * to prevent the unexpected LPSPI module IRQ events.
+- */
+- disable_irq(irq);
++ enable_irq(irq);
++ }
+
+ ret = devm_spi_register_controller(&pdev->dev, controller);
+ if (ret < 0) {
+diff --git a/drivers/spi/spi-tegra210-quad.c b/drivers/spi/spi-tegra210-quad.c
+index afbd64a217eb06..43f11b0e9e765c 100644
+--- a/drivers/spi/spi-tegra210-quad.c
++++ b/drivers/spi/spi-tegra210-quad.c
+@@ -341,7 +341,7 @@ tegra_qspi_fill_tx_fifo_from_client_txbuf(struct tegra_qspi *tqspi, struct spi_t
+ for (count = 0; count < max_n_32bit; count++) {
+ u32 x = 0;
+
+- for (i = 0; len && (i < bytes_per_word); i++, len--)
++ for (i = 0; len && (i < min(4, bytes_per_word)); i++, len--)
+ x |= (u32)(*tx_buf++) << (i * 8);
+ tegra_qspi_writel(tqspi, x, QSPI_TX_FIFO);
+ }
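The min(4, bytes_per_word) clamp above keeps the byte-packing shift inside one 32-bit FIFO word; previously a larger bytes_per_word let `i * 8` run past bit 31. A minimal demonstration of the corrected packing, with an arbitrary buffer:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint8_t tx_buf[] = { 0x11, 0x22, 0x33, 0x44, 0x55, 0x66 };
        int len = sizeof(tx_buf);
        int bytes_per_word = 8;        /* > 4: the problematic case */
        const uint8_t *p = tx_buf;
        uint32_t x = 0;

        int limit = bytes_per_word < 4 ? bytes_per_word : 4;
        for (int i = 0; len && i < limit; i++, len--)
            x |= (uint32_t)(*p++) << (i * 8);

        printf("0x%08x\n", x);         /* 0x44332211: 4 bytes at most */
        return 0;
    }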
+diff --git a/drivers/spi/spi-zynqmp-gqspi.c b/drivers/spi/spi-zynqmp-gqspi.c
+index fcd0ca99668419..b9df39e06e7cd4 100644
+--- a/drivers/spi/spi-zynqmp-gqspi.c
++++ b/drivers/spi/spi-zynqmp-gqspi.c
+@@ -1351,6 +1351,7 @@ static int zynqmp_qspi_probe(struct platform_device *pdev)
+
+ clk_dis_all:
+ pm_runtime_disable(&pdev->dev);
++ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
+ pm_runtime_set_suspended(&pdev->dev);
+ clk_disable_unprepare(xqspi->refclk);
+@@ -1379,6 +1380,7 @@ static void zynqmp_qspi_remove(struct platform_device *pdev)
+ zynqmp_gqspi_write(xqspi, GQSPI_EN_OFST, 0x0);
+
+ pm_runtime_disable(&pdev->dev);
++ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
+ pm_runtime_set_suspended(&pdev->dev);
+ clk_disable_unprepare(xqspi->refclk);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index c1dad30a4528b7..0f3e6e2c24743c 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -424,6 +424,16 @@ static int spi_probe(struct device *dev)
+ spi->irq = 0;
+ }
+
++ if (has_acpi_companion(dev) && spi->irq < 0) {
++ struct acpi_device *adev = to_acpi_device_node(dev->fwnode);
++
++ spi->irq = acpi_dev_gpio_irq_get(adev, 0);
++ if (spi->irq == -EPROBE_DEFER)
++ return -EPROBE_DEFER;
++ if (spi->irq < 0)
++ spi->irq = 0;
++ }
++
+ ret = dev_pm_domain_attach(dev, true);
+ if (ret)
+ return ret;
+@@ -2869,9 +2879,6 @@ static acpi_status acpi_register_spi_device(struct spi_controller *ctlr,
+ acpi_set_modalias(adev, acpi_device_hid(adev), spi->modalias,
+ sizeof(spi->modalias));
+
+- if (spi->irq < 0)
+- spi->irq = acpi_dev_gpio_irq_get(adev, 0);
+-
+ acpi_device_set_enumerated(adev);
+
+ adev->power.flags.ignore_parent = true;
+diff --git a/drivers/staging/media/atomisp/pci/sh_css_params.c b/drivers/staging/media/atomisp/pci/sh_css_params.c
+index 232744973ab887..b1feb6f6ebe895 100644
+--- a/drivers/staging/media/atomisp/pci/sh_css_params.c
++++ b/drivers/staging/media/atomisp/pci/sh_css_params.c
+@@ -4181,6 +4181,8 @@ ia_css_3a_statistics_allocate(const struct ia_css_3a_grid_info *grid)
+ goto err;
+ /* No weighted histogram, no structure, treat the histogram data as a byte dump in a byte array */
+ me->rgby_data = kvmalloc(sizeof_hmem(HMEM0_ID), GFP_KERNEL);
++ if (!me->rgby_data)
++ goto err;
+
+ IA_CSS_LEAVE("return=%p", me);
+ return me;
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 6c488b1e262485..5fab33adf58ed0 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -1715,7 +1715,6 @@ MODULE_DEVICE_TABLE(of, vchiq_of_match);
+
+ static int vchiq_probe(struct platform_device *pdev)
+ {
+- struct device_node *fw_node;
+ const struct vchiq_platform_info *info;
+ struct vchiq_drv_mgmt *mgmt;
+ int ret;
+@@ -1724,8 +1723,8 @@ static int vchiq_probe(struct platform_device *pdev)
+ if (!info)
+ return -EINVAL;
+
+- fw_node = of_find_compatible_node(NULL, NULL,
+- "raspberrypi,bcm2835-firmware");
++ struct device_node *fw_node __free(device_node) =
++ of_find_compatible_node(NULL, NULL, "raspberrypi,bcm2835-firmware");
+ if (!fw_node) {
+ dev_err(&pdev->dev, "Missing firmware node\n");
+ return -ENOENT;
+@@ -1736,7 +1735,6 @@ static int vchiq_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ mgmt->fw = devm_rpi_firmware_get(&pdev->dev, fw_node);
+- of_node_put(fw_node);
+ if (!mgmt->fw)
+ return -EPROBE_DEFER;
+
+diff --git a/drivers/target/target_core_pscsi.c b/drivers/target/target_core_pscsi.c
+index 440e07b1d5cdb1..287ac5b0495f9a 100644
+--- a/drivers/target/target_core_pscsi.c
++++ b/drivers/target/target_core_pscsi.c
+@@ -369,7 +369,7 @@ static int pscsi_create_type_disk(struct se_device *dev, struct scsi_device *sd)
+ bdev_file = bdev_file_open_by_path(dev->udev_path,
+ BLK_OPEN_WRITE | BLK_OPEN_READ, pdv, NULL);
+ if (IS_ERR(bdev_file)) {
+- pr_err("pSCSI: bdev_open_by_path() failed\n");
++ pr_err("pSCSI: bdev_file_open_by_path() failed\n");
+ scsi_device_put(sd);
+ return PTR_ERR(bdev_file);
+ }
+diff --git a/drivers/thermal/testing/zone.c b/drivers/thermal/testing/zone.c
+index c6d8c66f40f980..1f01f495270313 100644
+--- a/drivers/thermal/testing/zone.c
++++ b/drivers/thermal/testing/zone.c
+@@ -185,7 +185,7 @@ static void tt_add_tz_work_fn(struct work_struct *work)
+ int tt_add_tz(void)
+ {
+ struct tt_thermal_zone *tt_zone __free(kfree);
+- struct tt_work *tt_work __free(kfree);
++ struct tt_work *tt_work __free(kfree) = NULL;
+ int ret;
+
+ tt_zone = kzalloc(sizeof(*tt_zone), GFP_KERNEL);
+@@ -237,7 +237,7 @@ static void tt_zone_unregister_tz(struct tt_thermal_zone *tt_zone)
+
+ int tt_del_tz(const char *arg)
+ {
+- struct tt_work *tt_work __free(kfree);
++ struct tt_work *tt_work __free(kfree) = NULL;
+ struct tt_thermal_zone *tt_zone, *aux;
+ int ret;
+ int id;
+@@ -310,6 +310,9 @@ static void tt_put_tt_zone(struct tt_thermal_zone *tt_zone)
+ tt_zone->refcount--;
+ }
+
++DEFINE_FREE(put_tt_zone, struct tt_thermal_zone *,
++ if (!IS_ERR_OR_NULL(_T)) tt_put_tt_zone(_T))
++
+ static void tt_zone_add_trip_work_fn(struct work_struct *work)
+ {
+ struct tt_work *tt_work = tt_work_of_work(work);
+@@ -332,9 +335,9 @@ static void tt_zone_add_trip_work_fn(struct work_struct *work)
+
+ int tt_zone_add_trip(const char *arg)
+ {
++ struct tt_thermal_zone *tt_zone __free(put_tt_zone) = NULL;
++ struct tt_trip *tt_trip __free(kfree) = NULL;
+ struct tt_work *tt_work __free(kfree);
+- struct tt_trip *tt_trip __free(kfree);
+- struct tt_thermal_zone *tt_zone;
+ int id;
+
+ tt_work = kzalloc(sizeof(*tt_work), GFP_KERNEL);
+@@ -350,10 +353,8 @@ int tt_zone_add_trip(const char *arg)
+ return PTR_ERR(tt_zone);
+
+ id = ida_alloc(&tt_zone->ida, GFP_KERNEL);
+- if (id < 0) {
+- tt_put_tt_zone(tt_zone);
++ if (id < 0)
+ return id;
+- }
+
+ tt_trip->trip.type = THERMAL_TRIP_ACTIVE;
+ tt_trip->trip.temperature = THERMAL_TEMP_INVALID;
+@@ -366,7 +367,7 @@ int tt_zone_add_trip(const char *arg)
+ tt_zone->num_trips++;
+
+ INIT_WORK(&tt_work->work, tt_zone_add_trip_work_fn);
+- tt_work->tt_zone = tt_zone;
++ tt_work->tt_zone = no_free_ptr(tt_zone);
+ tt_work->tt_trip = no_free_ptr(tt_trip);
+ schedule_work(&(no_free_ptr(tt_work)->work));
+
+@@ -391,7 +392,7 @@ static struct thermal_zone_device_ops tt_zone_ops = {
+
+ static int tt_zone_register_tz(struct tt_thermal_zone *tt_zone)
+ {
+- struct thermal_trip *trips __free(kfree);
++ struct thermal_trip *trips __free(kfree) = NULL;
+ struct thermal_zone_device *tz;
+ struct tt_trip *tt_trip;
+ int i;
+@@ -425,23 +426,18 @@ static int tt_zone_register_tz(struct tt_thermal_zone *tt_zone)
+
+ int tt_zone_reg(const char *arg)
+ {
+- struct tt_thermal_zone *tt_zone;
+- int ret;
++ struct tt_thermal_zone *tt_zone __free(put_tt_zone);
+
+ tt_zone = tt_get_tt_zone(arg);
+ if (IS_ERR(tt_zone))
+ return PTR_ERR(tt_zone);
+
+- ret = tt_zone_register_tz(tt_zone);
+-
+- tt_put_tt_zone(tt_zone);
+-
+- return ret;
++ return tt_zone_register_tz(tt_zone);
+ }
+
+ int tt_zone_unreg(const char *arg)
+ {
+- struct tt_thermal_zone *tt_zone;
++ struct tt_thermal_zone *tt_zone __free(put_tt_zone);
+
+ tt_zone = tt_get_tt_zone(arg);
+ if (IS_ERR(tt_zone))
+@@ -449,8 +445,6 @@ int tt_zone_unreg(const char *arg)
+
+ tt_zone_unregister_tz(tt_zone);
+
+- tt_put_tt_zone(tt_zone);
+-
+ return 0;
+ }
+
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 8f03985f971c30..1d2f2b307bac50 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -40,6 +40,8 @@ static DEFINE_MUTEX(thermal_governor_lock);
+
+ static struct thermal_governor *def_governor;
+
++static bool thermal_pm_suspended;
++
+ /*
+ * Governor section: set of functions to handle thermal governors
+ *
+@@ -547,7 +549,7 @@ void __thermal_zone_device_update(struct thermal_zone_device *tz,
+ int low = -INT_MAX, high = INT_MAX;
+ int temp, ret;
+
+- if (tz->suspended || tz->mode != THERMAL_DEVICE_ENABLED)
++ if (tz->state != TZ_STATE_READY || tz->mode != THERMAL_DEVICE_ENABLED)
+ return;
+
+ ret = __thermal_zone_get_temp(tz, &temp);
+@@ -1332,6 +1334,24 @@ int thermal_zone_get_crit_temp(struct thermal_zone_device *tz, int *temp)
+ }
+ EXPORT_SYMBOL_GPL(thermal_zone_get_crit_temp);
+
++static void thermal_zone_init_complete(struct thermal_zone_device *tz)
++{
++ mutex_lock(&tz->lock);
++
++ tz->state &= ~TZ_STATE_FLAG_INIT;
++ /*
++ * If system suspend or resume is in progress at this point, the
++ * new thermal zone needs to be marked as suspended because
++ * thermal_pm_notify() has run already.
++ */
++ if (thermal_pm_suspended)
++ tz->state |= TZ_STATE_FLAG_SUSPENDED;
++
++ __thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED);
++
++ mutex_unlock(&tz->lock);
++}
++
+ /**
+ * thermal_zone_device_register_with_trips() - register a new thermal zone device
+ * @type: the thermal zone device type
+@@ -1451,6 +1471,8 @@ thermal_zone_device_register_with_trips(const char *type,
+ tz->passive_delay_jiffies = msecs_to_jiffies(passive_delay);
+ tz->recheck_delay_jiffies = THERMAL_RECHECK_DELAY;
+
++ tz->state = TZ_STATE_FLAG_INIT;
++
+ /* sys I/F */
+ /* Add nodes that are always present via .groups */
+ result = thermal_zone_create_device_groups(tz);
+@@ -1465,6 +1487,7 @@ thermal_zone_device_register_with_trips(const char *type,
+ thermal_zone_destroy_device_groups(tz);
+ goto remove_id;
+ }
++ thermal_zone_device_init(tz);
+ result = device_register(&tz->device);
+ if (result)
+ goto release_device;
+@@ -1501,12 +1524,9 @@ thermal_zone_device_register_with_trips(const char *type,
+ list_for_each_entry(cdev, &thermal_cdev_list, node)
+ thermal_zone_cdev_bind(tz, cdev);
+
+- mutex_unlock(&thermal_list_lock);
++ thermal_zone_init_complete(tz);
+
+- thermal_zone_device_init(tz);
+- /* Update the new thermal zone and mark it as already updated. */
+- if (atomic_cmpxchg(&tz->need_update, 1, 0))
+- thermal_zone_device_update(tz, THERMAL_EVENT_UNSPECIFIED);
++ mutex_unlock(&thermal_list_lock);
+
+ thermal_notify_tz_create(tz);
+
+@@ -1662,7 +1682,7 @@ static void thermal_zone_device_resume(struct work_struct *work)
+
+ mutex_lock(&tz->lock);
+
+- tz->suspended = false;
++ tz->state &= ~(TZ_STATE_FLAG_SUSPENDED | TZ_STATE_FLAG_RESUMING);
+
+ thermal_debug_tz_resume(tz);
+ thermal_zone_device_init(tz);
+@@ -1670,7 +1690,48 @@ static void thermal_zone_device_resume(struct work_struct *work)
+ __thermal_zone_device_update(tz, THERMAL_TZ_RESUME);
+
+ complete(&tz->resume);
+- tz->resuming = false;
++
++ mutex_unlock(&tz->lock);
++}
++
++static void thermal_zone_pm_prepare(struct thermal_zone_device *tz)
++{
++ mutex_lock(&tz->lock);
++
++ if (tz->state & TZ_STATE_FLAG_RESUMING) {
++ /*
++ * thermal_zone_device_resume() queued up for this zone has not
++ * acquired the lock yet, so release it to let the function run
++	 * and wait until it has done the work.
++ */
++ mutex_unlock(&tz->lock);
++
++ wait_for_completion(&tz->resume);
++
++ mutex_lock(&tz->lock);
++ }
++
++ tz->state |= TZ_STATE_FLAG_SUSPENDED;
++
++ mutex_unlock(&tz->lock);
++}
++
++static void thermal_zone_pm_complete(struct thermal_zone_device *tz)
++{
++ mutex_lock(&tz->lock);
++
++ cancel_delayed_work(&tz->poll_queue);
++
++ reinit_completion(&tz->resume);
++ tz->state |= TZ_STATE_FLAG_RESUMING;
++
++ /*
++ * Replace the work function with the resume one, which will restore the
++ * original work function and schedule the polling work if needed.
++ */
++ INIT_DELAYED_WORK(&tz->poll_queue, thermal_zone_device_resume);
++ /* Queue up the work without a delay. */
++ mod_delayed_work(system_freezable_power_efficient_wq, &tz->poll_queue, 0);
+
+ mutex_unlock(&tz->lock);
+ }
+@@ -1686,27 +1747,10 @@ static int thermal_pm_notify(struct notifier_block *nb,
+ case PM_SUSPEND_PREPARE:
+ mutex_lock(&thermal_list_lock);
+
+- list_for_each_entry(tz, &thermal_tz_list, node) {
+- mutex_lock(&tz->lock);
+-
+- if (tz->resuming) {
+- /*
+- * thermal_zone_device_resume() queued up for
+- * this zone has not acquired the lock yet, so
+- * release it to let the function run and wait
+- * util it has done the work.
+- */
+- mutex_unlock(&tz->lock);
+-
+- wait_for_completion(&tz->resume);
+-
+- mutex_lock(&tz->lock);
+- }
++ thermal_pm_suspended = true;
+
+- tz->suspended = true;
+-
+- mutex_unlock(&tz->lock);
+- }
++ list_for_each_entry(tz, &thermal_tz_list, node)
++ thermal_zone_pm_prepare(tz);
+
+ mutex_unlock(&thermal_list_lock);
+ break;
+@@ -1715,27 +1759,10 @@ static int thermal_pm_notify(struct notifier_block *nb,
+ case PM_POST_SUSPEND:
+ mutex_lock(&thermal_list_lock);
+
+- list_for_each_entry(tz, &thermal_tz_list, node) {
+- mutex_lock(&tz->lock);
+-
+- cancel_delayed_work(&tz->poll_queue);
++ thermal_pm_suspended = false;
+
+- reinit_completion(&tz->resume);
+- tz->resuming = true;
+-
+- /*
+- * Replace the work function with the resume one, which
+- * will restore the original work function and schedule
+- * the polling work if needed.
+- */
+- INIT_DELAYED_WORK(&tz->poll_queue,
+- thermal_zone_device_resume);
+- /* Queue up the work without a delay. */
+- mod_delayed_work(system_freezable_power_efficient_wq,
+- &tz->poll_queue, 0);
+-
+- mutex_unlock(&tz->lock);
+- }
++ list_for_each_entry(tz, &thermal_tz_list, node)
++ thermal_zone_pm_complete(tz);
+
+ mutex_unlock(&thermal_list_lock);
+ break;
+diff --git a/drivers/thermal/thermal_core.h b/drivers/thermal/thermal_core.h
+index a64d39b1c86b23..421522a2bb9d4c 100644
+--- a/drivers/thermal/thermal_core.h
++++ b/drivers/thermal/thermal_core.h
+@@ -61,6 +61,12 @@ struct thermal_governor {
+ struct list_head governor_list;
+ };
+
++#define TZ_STATE_FLAG_SUSPENDED BIT(0)
++#define TZ_STATE_FLAG_RESUMING BIT(1)
++#define TZ_STATE_FLAG_INIT BIT(2)
++
++#define TZ_STATE_READY 0
++
+ /**
+ * struct thermal_zone_device - structure for a thermal zone
+ * @id: unique id number for each thermal zone
+@@ -100,8 +106,7 @@ struct thermal_governor {
+ * @node: node in thermal_tz_list (in thermal_core.c)
+ * @poll_queue: delayed work for polling
+ * @notify_event: Last notification event
+- * @suspended: thermal zone suspend indicator
+- * @resuming: indicates whether or not thermal zone resume is in progress
++ * @state: current state of the thermal zone
+ * @trips: array of struct thermal_trip objects
+ */
+ struct thermal_zone_device {
+@@ -134,8 +139,7 @@ struct thermal_zone_device {
+ struct list_head node;
+ struct delayed_work poll_queue;
+ enum thermal_notify_event notify_event;
+- bool suspended;
+- bool resuming;
++ u8 state;
+ #ifdef CONFIG_THERMAL_DEBUGFS
+ struct thermal_debugfs *debugfs;
+ #endif
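Replacing the separate suspended/resuming booleans with a single flags byte makes "is this zone usable at all" one comparison against TZ_STATE_READY (0) and leaves room for further lifecycle bits such as INIT without growing the struct. A minimal standalone sketch of that encoding (an illustration of the idea, not the kernel header; BIT(n) is simply 1 << n):

    #include <stdint.h>
    #include <stdio.h>

    #define BIT(n)                  (1u << (n))
    #define TZ_STATE_FLAG_SUSPENDED BIT(0)
    #define TZ_STATE_FLAG_RESUMING  BIT(1)
    #define TZ_STATE_FLAG_INIT      BIT(2)
    #define TZ_STATE_READY          0

    int main(void)
    {
        uint8_t state = TZ_STATE_FLAG_INIT;     /* as set at registration */

        /* Init finished, but a system suspend raced with registration. */
        state &= ~TZ_STATE_FLAG_INIT;
        state |= TZ_STATE_FLAG_SUSPENDED;
        printf("ready? %s\n", state == TZ_STATE_READY ? "yes" : "no");

        /* The resume path clears both suspend-related bits at once. */
        state &= ~(TZ_STATE_FLAG_SUSPENDED | TZ_STATE_FLAG_RESUMING);
        printf("ready? %s\n", state == TZ_STATE_READY ? "yes" : "no");
        return 0;
    }

This is also what lets thermal_zone_init_complete() above mark a zone that registered mid-suspend as suspended before its first update ever runs.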
+diff --git a/drivers/tty/serial/8250/8250_fintek.c b/drivers/tty/serial/8250/8250_fintek.c
+index e2aa2a1a02ddf5..ecbce226b8747e 100644
+--- a/drivers/tty/serial/8250/8250_fintek.c
++++ b/drivers/tty/serial/8250/8250_fintek.c
+@@ -21,6 +21,7 @@
+ #define CHIP_ID_F81866 0x1010
+ #define CHIP_ID_F81966 0x0215
+ #define CHIP_ID_F81216AD 0x1602
++#define CHIP_ID_F81216E 0x1617
+ #define CHIP_ID_F81216H 0x0501
+ #define CHIP_ID_F81216 0x0802
+ #define VENDOR_ID1 0x23
+@@ -158,6 +159,7 @@ static int fintek_8250_check_id(struct fintek_8250 *pdata)
+ case CHIP_ID_F81866:
+ case CHIP_ID_F81966:
+ case CHIP_ID_F81216AD:
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ case CHIP_ID_F81216:
+ break;
+@@ -181,6 +183,7 @@ static int fintek_8250_get_ldn_range(struct fintek_8250 *pdata, int *min,
+ return 0;
+
+ case CHIP_ID_F81216AD:
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ case CHIP_ID_F81216:
+ *min = F81216_LDN_LOW;
+@@ -250,6 +253,7 @@ static void fintek_8250_set_irq_mode(struct fintek_8250 *pdata, bool is_level)
+ break;
+
+ case CHIP_ID_F81216AD:
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ case CHIP_ID_F81216:
+ sio_write_mask_reg(pdata, FINTEK_IRQ_MODE, IRQ_SHARE,
+@@ -263,7 +267,8 @@ static void fintek_8250_set_irq_mode(struct fintek_8250 *pdata, bool is_level)
+ static void fintek_8250_set_max_fifo(struct fintek_8250 *pdata)
+ {
+ switch (pdata->pid) {
+- case CHIP_ID_F81216H: /* 128Bytes FIFO */
++ case CHIP_ID_F81216E: /* 128Bytes FIFO */
++ case CHIP_ID_F81216H:
+ case CHIP_ID_F81966:
+ case CHIP_ID_F81866:
+ sio_write_mask_reg(pdata, FIFO_CTRL,
+@@ -297,6 +302,7 @@ static void fintek_8250_set_termios(struct uart_port *port,
+ goto exit;
+
+ switch (pdata->pid) {
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ reg = RS485;
+ break;
+@@ -346,6 +352,7 @@ static void fintek_8250_set_termios_handler(struct uart_8250_port *uart)
+ struct fintek_8250 *pdata = uart->port.private_data;
+
+ switch (pdata->pid) {
++ case CHIP_ID_F81216E:
+ case CHIP_ID_F81216H:
+ case CHIP_ID_F81966:
+ case CHIP_ID_F81866:
+@@ -438,6 +445,11 @@ static void fintek_8250_set_rs485_handler(struct uart_8250_port *uart)
+ uart->port.rs485_supported = fintek_8250_rs485_supported;
+ break;
+
++ case CHIP_ID_F81216E: /* F81216E does not support RS485 delays */
++ uart->port.rs485_config = fintek_8250_rs485_config;
++ uart->port.rs485_supported = fintek_8250_rs485_supported;
++ break;
++
+ default: /* No RS485 Auto direction functional */
+ break;
+ }
+diff --git a/drivers/tty/serial/8250/8250_omap.c b/drivers/tty/serial/8250/8250_omap.c
+index 88b58f44e4e976..0dd68bdbfbcf7c 100644
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -776,12 +776,12 @@ static void omap_8250_shutdown(struct uart_port *port)
+ struct uart_8250_port *up = up_to_u8250p(port);
+ struct omap8250_priv *priv = port->private_data;
+
++ pm_runtime_get_sync(port->dev);
++
+ flush_work(&priv->qos_work);
+ if (up->dma)
+ omap_8250_rx_dma_flush(up);
+
+- pm_runtime_get_sync(port->dev);
+-
+ serial_out(up, UART_OMAP_WER, 0);
+ if (priv->habit & UART_HAS_EFR2)
+ serial_out(up, UART_OMAP_EFR2, 0x0);
+diff --git a/drivers/tty/serial/amba-pl011.c b/drivers/tty/serial/amba-pl011.c
+index 7d0134ecd82fa5..9529a512cbd40f 100644
+--- a/drivers/tty/serial/amba-pl011.c
++++ b/drivers/tty/serial/amba-pl011.c
+@@ -1819,6 +1819,13 @@ static void pl011_unthrottle_rx(struct uart_port *port)
+
+ pl011_write(uap->im, uap, REG_IMSC);
+
++#ifdef CONFIG_DMA_ENGINE
++ if (uap->using_rx_dma) {
++ uap->dmacr |= UART011_RXDMAE;
++ pl011_write(uap->dmacr, uap, REG_DMACR);
++ }
++#endif
++
+ uart_port_unlock_irqrestore(&uap->port, flags);
+ }
+
+diff --git a/drivers/tty/tty_io.c b/drivers/tty/tty_io.c
+index 9771072da177cb..dcb1769c3625cd 100644
+--- a/drivers/tty/tty_io.c
++++ b/drivers/tty/tty_io.c
+@@ -3631,7 +3631,7 @@ static struct ctl_table tty_table[] = {
+ .data = &tty_ldisc_autoload,
+ .maxlen = sizeof(tty_ldisc_autoload),
+ .mode = 0644,
+- .proc_handler = proc_dointvec,
++ .proc_handler = proc_dointvec_minmax,
+ .extra1 = SYSCTL_ZERO,
+ .extra2 = SYSCTL_ONE,
+ },
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index eab81dfdcc3502..0b9ba338b2654c 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -915,6 +915,7 @@ struct dwc3_hwparams {
+ #define DWC3_MODE(n) ((n) & 0x7)
+
+ /* HWPARAMS1 */
++#define DWC3_SPRAM_TYPE(n) (((n) >> 23) & 1)
+ #define DWC3_NUM_INT(n) (((n) & (0x3f << 15)) >> 15)
+
+ /* HWPARAMS3 */
+@@ -925,6 +926,9 @@ struct dwc3_hwparams {
+ #define DWC3_NUM_IN_EPS(p) (((p)->hwparams3 & \
+ (DWC3_NUM_IN_EPS_MASK)) >> 18)
+
++/* HWPARAMS6 */
++#define DWC3_RAM0_DEPTH(n) (((n) & (0xffff0000)) >> 16)
++
+ /* HWPARAMS7 */
+ #define DWC3_RAM1_DEPTH(n) ((n) & 0xffff)
+
+diff --git a/drivers/usb/dwc3/ep0.c b/drivers/usb/dwc3/ep0.c
+index c9533a99e47c89..874497f86499b3 100644
+--- a/drivers/usb/dwc3/ep0.c
++++ b/drivers/usb/dwc3/ep0.c
+@@ -232,7 +232,7 @@ void dwc3_ep0_stall_and_restart(struct dwc3 *dwc)
+ /* stall is always issued on EP0 */
+ dep = dwc->eps[0];
+ __dwc3_gadget_ep_set_halt(dep, 1, false);
+- dep->flags &= DWC3_EP_RESOURCE_ALLOCATED;
++ dep->flags &= DWC3_EP_RESOURCE_ALLOCATED | DWC3_EP_TRANSFER_STARTED;
+ dep->flags |= DWC3_EP_ENABLED;
+ dwc->delayed_status = false;
+
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 4959c26d3b71b8..56744b11e67cb9 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -687,6 +687,44 @@ static int dwc3_gadget_calc_tx_fifo_size(struct dwc3 *dwc, int mult)
+ return fifo_size;
+ }
+
++/**
++ * dwc3_gadget_calc_ram_depth - calculates the ram depth for txfifo
++ * @dwc: pointer to the DWC3 context
++ */
++static int dwc3_gadget_calc_ram_depth(struct dwc3 *dwc)
++{
++ int ram_depth;
++ int fifo_0_start;
++ bool is_single_port_ram;
++
++	/* Check which RAM type the HW supports */
++ is_single_port_ram = DWC3_SPRAM_TYPE(dwc->hwparams.hwparams1);
++
++ /*
++ * If a single port RAM is utilized, then allocate TxFIFOs from
++	 * RAM0; otherwise, allocate them from RAM1.
++ */
++ ram_depth = is_single_port_ram ? DWC3_RAM0_DEPTH(dwc->hwparams.hwparams6) :
++ DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7);
++
++ /*
++ * In a single port RAM configuration, the available RAM is shared
++ * between the RX and TX FIFOs. This means that the txfifo can begin
++ * at a non-zero address.
++ */
++ if (is_single_port_ram) {
++ u32 reg;
++
++ /* Check if TXFIFOs start at non-zero addr */
++ reg = dwc3_readl(dwc->regs, DWC3_GTXFIFOSIZ(0));
++ fifo_0_start = DWC3_GTXFIFOSIZ_TXFSTADDR(reg);
++
++ ram_depth -= (fifo_0_start >> 16);
++ }
++
++ return ram_depth;
++}
++
+ /**
+ * dwc3_gadget_clear_tx_fifos - Clears txfifo allocation
+ * @dwc: pointer to the DWC3 context
+@@ -753,7 +791,7 @@ static int dwc3_gadget_resize_tx_fifos(struct dwc3_ep *dep)
+ {
+ struct dwc3 *dwc = dep->dwc;
+ int fifo_0_start;
+- int ram1_depth;
++ int ram_depth;
+ int fifo_size;
+ int min_depth;
+ int num_in_ep;
+@@ -773,7 +811,7 @@ static int dwc3_gadget_resize_tx_fifos(struct dwc3_ep *dep)
+ if (dep->flags & DWC3_EP_TXFIFO_RESIZED)
+ return 0;
+
+- ram1_depth = DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7);
++ ram_depth = dwc3_gadget_calc_ram_depth(dwc);
+
+ if ((dep->endpoint.maxburst > 1 &&
+ usb_endpoint_xfer_bulk(dep->endpoint.desc)) ||
+@@ -794,7 +832,7 @@ static int dwc3_gadget_resize_tx_fifos(struct dwc3_ep *dep)
+
+ /* Reserve at least one FIFO for the number of IN EPs */
+ min_depth = num_in_ep * (fifo + 1);
+- remaining = ram1_depth - min_depth - dwc->last_fifo_depth;
++ remaining = ram_depth - min_depth - dwc->last_fifo_depth;
+ remaining = max_t(int, 0, remaining);
+ /*
+ * We've already reserved 1 FIFO per EP, so check what we can fit in
+@@ -820,9 +858,9 @@ static int dwc3_gadget_resize_tx_fifos(struct dwc3_ep *dep)
+ dwc->last_fifo_depth += DWC31_GTXFIFOSIZ_TXFDEP(fifo_size);
+
+ /* Check fifo size allocation doesn't exceed available RAM size. */
+- if (dwc->last_fifo_depth >= ram1_depth) {
++ if (dwc->last_fifo_depth >= ram_depth) {
+ dev_err(dwc->dev, "Fifosize(%d) > RAM size(%d) %s depth:%d\n",
+- dwc->last_fifo_depth, ram1_depth,
++ dwc->last_fifo_depth, ram_depth,
+ dep->endpoint.name, fifo_size);
+ if (DWC3_IP_IS(DWC3))
+ fifo_size = DWC3_GTXFIFOSIZ_TXFDEP(fifo_size);
+@@ -1177,11 +1215,14 @@ static u32 dwc3_calc_trbs_left(struct dwc3_ep *dep)
+ * pending to be processed by the driver.
+ */
+ if (dep->trb_enqueue == dep->trb_dequeue) {
++ struct dwc3_request *req;
++
+ /*
+- * If there is any request remained in the started_list at
+- * this point, that means there is no TRB available.
++	 * If any request remains in the started_list with
++ * active TRBs at this point, then there is no TRB available.
+ */
+- if (!list_empty(&dep->started_list))
++ req = next_request(&dep->started_list);
++ if (req && req->num_trbs)
+ return 0;
+
+ return DWC3_TRB_NUM - 1;
+@@ -1414,8 +1455,8 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep,
+ struct scatterlist *s;
+ int i;
+ unsigned int length = req->request.length;
+- unsigned int remaining = req->request.num_mapped_sgs
+- - req->num_queued_sgs;
++ unsigned int remaining = req->num_pending_sgs;
++ unsigned int num_queued_sgs = req->request.num_mapped_sgs - remaining;
+ unsigned int num_trbs = req->num_trbs;
+ bool needs_extra_trb = dwc3_needs_extra_trb(dep, req);
+
+@@ -1423,7 +1464,7 @@ static int dwc3_prepare_trbs_sg(struct dwc3_ep *dep,
+ * If we resume preparing the request, then get the remaining length of
+ * the request and resume where we left off.
+ */
+- for_each_sg(req->request.sg, s, req->num_queued_sgs, i)
++ for_each_sg(req->request.sg, s, num_queued_sgs, i)
+ length -= sg_dma_len(s);
+
+ for_each_sg(sg, s, remaining, i) {
+@@ -3075,7 +3116,7 @@ static int dwc3_gadget_check_config(struct usb_gadget *g)
+ struct dwc3 *dwc = gadget_to_dwc(g);
+ struct usb_ep *ep;
+ int fifo_size = 0;
+- int ram1_depth;
++ int ram_depth;
+ int ep_num = 0;
+
+ if (!dwc->do_fifo_resize)
+@@ -3098,8 +3139,8 @@ static int dwc3_gadget_check_config(struct usb_gadget *g)
+ fifo_size += dwc->max_cfg_eps;
+
+ /* Check if we can fit a single fifo per endpoint */
+- ram1_depth = DWC3_RAM1_DEPTH(dwc->hwparams.hwparams7);
+- if (fifo_size > ram1_depth)
++ ram_depth = dwc3_gadget_calc_ram_depth(dwc);
++ if (fifo_size > ram_depth)
+ return -ENOMEM;
+
+ return 0;
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index f25dd2cb5d03b1..cec86c0c6369ca 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -2111,8 +2111,20 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ memset(buf, 0, w_length);
+ buf[5] = 0x01;
+ switch (ctrl->bRequestType & USB_RECIP_MASK) {
++ /*
++	 * The Microsoft CompatID OS Descriptor Spec (w_index = 0x4) and
++	 * Extended Prop OS Desc Spec (w_index = 0x5) state that the
++	 * HighByte of wValue is the InterfaceNumber and the LowByte is
++	 * the PageNumber. This high/low byte ordering is incorrectly
++	 * documented in the Spec. USB analyzer output on the below
++	 * request packets shows the high/low byte inverted, i.e. LowByte
++	 * is the InterfaceNumber and the HighByte is the PageNumber.
++	 * Since we don't support >64KB CompatID/ExtendedProp descriptors,
++ * PageNumber is set to 0. Hence verify that the HighByte is 0
++ * for below two cases.
++ */
+ case USB_RECIP_DEVICE:
+- if (w_index != 0x4 || (w_value & 0xff))
++ if (w_index != 0x4 || (w_value >> 8))
+ break;
+ buf[6] = w_index;
+ /* Number of ext compat interfaces */
+@@ -2128,9 +2140,9 @@ composite_setup(struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ }
+ break;
+ case USB_RECIP_INTERFACE:
+- if (w_index != 0x5 || (w_value & 0xff))
++ if (w_index != 0x5 || (w_value >> 8))
+ break;
+- interface = w_value >> 8;
++ interface = w_value & 0xFF;
+ if (interface >= MAX_CONFIG_INTERFACES ||
+ !os_desc_cfg->interface[interface])
+ break;
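The composite.c change turns on which half of wValue carries the interface number: observed bus traffic puts the interface in the low byte and the page number in the high byte, the opposite of the spec text, and only page 0 is supported. A small standalone sketch of that decode (values hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t w_value = 0x0003;              /* example: interface 3, page 0 */
        uint8_t interface = w_value & 0xFF;     /* low byte, per observed traffic */
        uint8_t page = w_value >> 8;            /* high byte; must be 0 here */

        if (page != 0) {
            fprintf(stderr, "multi-page OS descriptors unsupported\n");
            return 1;
        }
        printf("OS descriptor request for interface %u\n", interface);
        return 0;
    }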
+diff --git a/drivers/usb/gadget/function/uvc_video.c b/drivers/usb/gadget/function/uvc_video.c
+index 57a851151225de..002bf724d8025d 100644
+--- a/drivers/usb/gadget/function/uvc_video.c
++++ b/drivers/usb/gadget/function/uvc_video.c
+@@ -480,6 +480,10 @@ uvc_video_complete(struct usb_ep *ep, struct usb_request *req)
+ * up later.
+ */
+ list_add_tail(&to_queue->list, &video->req_free);
++ /*
++ * There is a new free request - wake up the pump.
++ */
++ queue_work(video->async_wq, &video->pump);
+ }
+
+ spin_unlock_irqrestore(&video->req_lock, flags);
+diff --git a/drivers/usb/host/ehci-spear.c b/drivers/usb/host/ehci-spear.c
+index d0e94e4c9fe274..11294f196ee335 100644
+--- a/drivers/usb/host/ehci-spear.c
++++ b/drivers/usb/host/ehci-spear.c
+@@ -105,7 +105,9 @@ static int spear_ehci_hcd_drv_probe(struct platform_device *pdev)
+ /* registers start at offset 0x0 */
+ hcd_to_ehci(hcd)->caps = hcd->regs;
+
+- clk_prepare_enable(sehci->clk);
++ retval = clk_prepare_enable(sehci->clk);
++ if (retval)
++ goto err_put_hcd;
+ retval = usb_add_hcd(hcd, irq, IRQF_SHARED);
+ if (retval)
+ goto err_stop_ehci;
+@@ -130,8 +132,7 @@ static void spear_ehci_hcd_drv_remove(struct platform_device *pdev)
+
+ usb_remove_hcd(hcd);
+
+- if (sehci->clk)
+- clk_disable_unprepare(sehci->clk);
++ clk_disable_unprepare(sehci->clk);
+ usb_put_hcd(hcd);
+ }
+
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index cb07cee9ed0c75..3ba9902dd2093c 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -395,14 +395,12 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ xhci->quirks |= XHCI_DEFAULT_PM_RUNTIME_ALLOW;
+
+ if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+- pdev->device == PCI_DEVICE_ID_EJ168) {
+- xhci->quirks |= XHCI_RESET_ON_RESUME;
+- xhci->quirks |= XHCI_BROKEN_STREAMS;
+- }
+- if (pdev->vendor == PCI_VENDOR_ID_ETRON &&
+- pdev->device == PCI_DEVICE_ID_EJ188) {
++ (pdev->device == PCI_DEVICE_ID_EJ168 ||
++ pdev->device == PCI_DEVICE_ID_EJ188)) {
++ xhci->quirks |= XHCI_ETRON_HOST;
+ xhci->quirks |= XHCI_RESET_ON_RESUME;
+ xhci->quirks |= XHCI_BROKEN_STREAMS;
++ xhci->quirks |= XHCI_NO_SOFT_RETRY;
+ }
+
+ if (pdev->vendor == PCI_VENDOR_ID_RENESAS &&
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 928b93ad1ee866..f318864732f2db 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -52,6 +52,7 @@
+ * endpoint rings; it generates events on the event ring for these.
+ */
+
++#include <linux/jiffies.h>
+ #include <linux/scatterlist.h>
+ #include <linux/slab.h>
+ #include <linux/dma-mapping.h>
+@@ -972,6 +973,13 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
+ unsigned int slot_id = ep->vdev->slot_id;
+ int err;
+
++ /*
++ * This is not going to work if the hardware is changing its dequeue
++ * pointers as we look at them. Completion handler will call us later.
++ */
++ if (ep->ep_state & SET_DEQ_PENDING)
++ return 0;
++
+ xhci = ep->xhci;
+
+ list_for_each_entry_safe(td, tmp_td, &ep->cancelled_td_list, cancelled_td_list) {
+@@ -1061,6 +1069,19 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
+ return 0;
+ }
+
++/*
++ * Erase queued TDs from transfer ring(s) and give back those the xHC didn't
++ * stop on. If necessary, queue commands to move the xHC off cancelled TDs it
++ * stopped on. Those will be given back later when the commands complete.
++ *
++ * Call under xhci->lock on a stopped endpoint.
++ */
++void xhci_process_cancelled_tds(struct xhci_virt_ep *ep)
++{
++ xhci_invalidate_cancelled_tds(ep);
++ xhci_giveback_invalidated_tds(ep);
++}
++
+ /*
+ * Returns the TD the endpoint ring halted on.
+ * Only call for non-running rings without streams.
+@@ -1151,16 +1172,35 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id,
+ return;
+ case EP_STATE_STOPPED:
+ /*
+- * NEC uPD720200 sometimes sets this state and fails with
+- * Context Error while continuing to process TRBs.
+- * Be conservative and trust EP_CTX_STATE on other chips.
++ * Per xHCI 4.6.9, Stop Endpoint command on a Stopped
++ * EP is a Context State Error, and EP stays Stopped.
++ *
++ * But maybe it failed on Halted, and somebody ran Reset
++ * Endpoint later. EP state is now Stopped and EP_HALTED
++ * still set because Reset EP handler will run after us.
++ */
++ if (ep->ep_state & EP_HALTED)
++ break;
++ /*
++ * On some HCs EP state remains Stopped for some tens of
++ * us to a few ms or more after a doorbell ring, and any
++ * new Stop Endpoint fails without aborting the restart.
++ * This handler may run quickly enough to still see this
++ * Stopped state, but it will soon change to Running.
++ *
++ * Assume this bug on unexpected Stop Endpoint failures.
++ * Keep retrying until the EP starts and stops again, on
++ * chips where this is known to help. Wait for 100ms.
+ */
+ if (!(xhci->quirks & XHCI_NEC_HOST))
+ break;
++ if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100)))
++ break;
+ fallthrough;
+ case EP_STATE_RUNNING:
+ /* Race, HW handled stop ep cmd before ep was running */
+- xhci_dbg(xhci, "Stop ep completion ctx error, ep is running\n");
++ xhci_dbg(xhci, "Stop ep completion ctx error, ctx_state %d\n",
++ GET_EP_CTX_STATE(ep_ctx));
+
+ command = xhci_alloc_command(xhci, false, GFP_ATOMIC);
+ if (!command) {
+@@ -1339,7 +1379,6 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
+ struct xhci_ep_ctx *ep_ctx;
+ struct xhci_slot_ctx *slot_ctx;
+ struct xhci_td *td, *tmp_td;
+- bool deferred = false;
+
+ ep_index = TRB_TO_EP_INDEX(le32_to_cpu(trb->generic.field[3]));
+ stream_id = TRB_TO_STREAM_ID(le32_to_cpu(trb->generic.field[2]));
+@@ -1440,8 +1479,6 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
+ xhci_dbg(ep->xhci, "%s: Giveback cancelled URB %p TD\n",
+ __func__, td->urb);
+ xhci_td_cleanup(ep->xhci, td, ep_ring, td->status);
+- } else if (td->cancel_status == TD_CLEARING_CACHE_DEFERRED) {
+- deferred = true;
+ } else {
+ xhci_dbg(ep->xhci, "%s: Keep cancelled URB %p TD as cancel_status is %d\n",
+ __func__, td->urb, td->cancel_status);
+@@ -1452,11 +1489,15 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
+ ep->queued_deq_seg = NULL;
+ ep->queued_deq_ptr = NULL;
+
+- if (deferred) {
+- /* We have more streams to clear */
++ /* Check for deferred or newly cancelled TDs */
++ if (!list_empty(&ep->cancelled_td_list)) {
+ xhci_dbg(ep->xhci, "%s: Pending TDs to clear, continuing with invalidation\n",
+ __func__);
+ xhci_invalidate_cancelled_tds(ep);
++ /* Try to restart the endpoint if all is done */
++ ring_doorbell_for_active_rings(xhci, slot_id, ep_index);
++ /* Start giving back any TDs invalidated above */
++ xhci_giveback_invalidated_tds(ep);
+ } else {
+ /* Restart any rings with pending URBs */
+ xhci_dbg(ep->xhci, "%s: All TDs cleared, ring doorbell\n", __func__);
+@@ -3727,6 +3768,20 @@ int xhci_queue_ctrl_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ if (!urb->setup_packet)
+ return -EINVAL;
+
++ if ((xhci->quirks & XHCI_ETRON_HOST) &&
++ urb->dev->speed >= USB_SPEED_SUPER) {
++ /*
++ * If next available TRB is the Link TRB in the ring segment then
++ * enqueue a No Op TRB, this can prevent the Setup and Data Stage
++ * TRB to be breaked by the Link TRB.
++ */
++ if (trb_is_link(ep_ring->enqueue + 1)) {
++ field = TRB_TYPE(TRB_TR_NOOP) | ep_ring->cycle_state;
++ queue_trb(xhci, ep_ring, false, 0, 0,
++ TRB_INTR_TARGET(0), field);
++ }
++ }
++
+ /* 1 TRB for setup, 1 for status */
+ num_trbs = 2;
+ /*
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index 899c0effb5d3c1..358ed674f782fb 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -8,6 +8,7 @@
+ * Some code borrowed from the Linux EHCI driver.
+ */
+
++#include <linux/jiffies.h>
+ #include <linux/pci.h>
+ #include <linux/iommu.h>
+ #include <linux/iopoll.h>
+@@ -1768,15 +1769,27 @@ static int xhci_urb_dequeue(struct usb_hcd *hcd, struct urb *urb, int status)
+ }
+ }
+
+- /* Queue a stop endpoint command, but only if this is
+- * the first cancellation to be handled.
+- */
+- if (!(ep->ep_state & EP_STOP_CMD_PENDING)) {
++ /* These completion handlers will sort out cancelled TDs for us */
++ if (ep->ep_state & (EP_STOP_CMD_PENDING | EP_HALTED | SET_DEQ_PENDING)) {
++ xhci_dbg(xhci, "Not queuing Stop Endpoint on slot %d ep %d in state 0x%x\n",
++ urb->dev->slot_id, ep_index, ep->ep_state);
++ goto done;
++ }
++
++ /* In this case no commands are pending but the endpoint is stopped */
++ if (ep->ep_state & EP_CLEARING_TT) {
++ /* and cancelled TDs can be given back right away */
++ xhci_dbg(xhci, "Invalidating TDs instantly on slot %d ep %d in state 0x%x\n",
++ urb->dev->slot_id, ep_index, ep->ep_state);
++ xhci_process_cancelled_tds(ep);
++ } else {
++ /* Otherwise, queue a new Stop Endpoint command */
+ command = xhci_alloc_command(xhci, false, GFP_ATOMIC);
+ if (!command) {
+ ret = -ENOMEM;
+ goto done;
+ }
++ ep->stop_time = jiffies;
+ ep->ep_state |= EP_STOP_CMD_PENDING;
+ xhci_queue_stop_endpoint(xhci, command, urb->dev->slot_id,
+ ep_index, 0);
+@@ -3692,6 +3705,8 @@ void xhci_free_device_endpoint_resources(struct xhci_hcd *xhci,
+ xhci->num_active_eps);
+ }
+
++static void xhci_free_dev(struct usb_hcd *hcd, struct usb_device *udev);
++
+ /*
+ * This submits a Reset Device Command, which will set the device state to 0,
+ * set the device address to 0, and disable all the endpoints except the default
+@@ -3762,6 +3777,23 @@ static int xhci_discover_or_reset_device(struct usb_hcd *hcd,
+ SLOT_STATE_DISABLED)
+ return 0;
+
++ if (xhci->quirks & XHCI_ETRON_HOST) {
++ /*
++		 * Obtain a new device slot to inform the xHCI host that
++ * the USB device has been reset.
++ */
++ ret = xhci_disable_slot(xhci, udev->slot_id);
++ xhci_free_virt_device(xhci, udev->slot_id);
++ if (!ret) {
++ ret = xhci_alloc_dev(hcd, udev);
++ if (ret == 1)
++ ret = 0;
++ else
++ ret = -EINVAL;
++ }
++ return ret;
++ }
++
+ trace_xhci_discover_or_reset_device(slot_ctx);
+
+ xhci_dbg(xhci, "Resetting device with slot ID %u\n", slot_id);
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index f0fb696d561986..673179047eb82e 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -690,6 +690,7 @@ struct xhci_virt_ep {
+ /* Bandwidth checking storage */
+ struct xhci_bw_info bw_info;
+ struct list_head bw_endpoint_list;
++ unsigned long stop_time;
+ /* Isoch Frame ID checking storage */
+ int next_frame_id;
+ /* Use new Isoch TRB layout needed for extended TBC support */
+@@ -1624,6 +1625,7 @@ struct xhci_hcd {
+ #define XHCI_ZHAOXIN_HOST BIT_ULL(46)
+ #define XHCI_WRITE_64_HI_LO BIT_ULL(47)
+ #define XHCI_CDNS_SCTX_QUIRK BIT_ULL(48)
++#define XHCI_ETRON_HOST BIT_ULL(49)
+
+ unsigned int num_active_eps;
+ unsigned int limit_active_eps;
+@@ -1913,6 +1915,7 @@ void xhci_ring_doorbell_for_active_rings(struct xhci_hcd *xhci,
+ void xhci_cleanup_command_queue(struct xhci_hcd *xhci);
+ void inc_deq(struct xhci_hcd *xhci, struct xhci_ring *ring);
+ unsigned int count_trbs(u64 addr, u64 len);
++void xhci_process_cancelled_tds(struct xhci_virt_ep *ep);
+
+ /* xHCI roothub code */
+ void xhci_set_link_state(struct xhci_hcd *xhci, struct xhci_port *port,
+diff --git a/drivers/usb/misc/chaoskey.c b/drivers/usb/misc/chaoskey.c
+index 6fb5140e29b9dd..225863321dc479 100644
+--- a/drivers/usb/misc/chaoskey.c
++++ b/drivers/usb/misc/chaoskey.c
+@@ -27,6 +27,8 @@ static struct usb_class_driver chaoskey_class;
+ static int chaoskey_rng_read(struct hwrng *rng, void *data,
+ size_t max, bool wait);
+
++static DEFINE_MUTEX(chaoskey_list_lock);
++
+ #define usb_dbg(usb_if, format, arg...) \
+ dev_dbg(&(usb_if)->dev, format, ## arg)
+
+@@ -233,6 +235,7 @@ static void chaoskey_disconnect(struct usb_interface *interface)
+ usb_deregister_dev(interface, &chaoskey_class);
+
+ usb_set_intfdata(interface, NULL);
++ mutex_lock(&chaoskey_list_lock);
+ mutex_lock(&dev->lock);
+
+ dev->present = false;
+@@ -244,6 +247,7 @@ static void chaoskey_disconnect(struct usb_interface *interface)
+ } else
+ mutex_unlock(&dev->lock);
+
++ mutex_unlock(&chaoskey_list_lock);
+ usb_dbg(interface, "disconnect done");
+ }
+
+@@ -251,6 +255,7 @@ static int chaoskey_open(struct inode *inode, struct file *file)
+ {
+ struct chaoskey *dev;
+ struct usb_interface *interface;
++ int rv = 0;
+
+ /* get the interface from minor number and driver information */
+ interface = usb_find_interface(&chaoskey_driver, iminor(inode));
+@@ -266,18 +271,23 @@ static int chaoskey_open(struct inode *inode, struct file *file)
+ }
+
+ file->private_data = dev;
++ mutex_lock(&chaoskey_list_lock);
+ mutex_lock(&dev->lock);
+- ++dev->open;
++ if (dev->present)
++ ++dev->open;
++ else
++ rv = -ENODEV;
+ mutex_unlock(&dev->lock);
++ mutex_unlock(&chaoskey_list_lock);
+
+- usb_dbg(interface, "open success");
+- return 0;
++ return rv;
+ }
+
+ static int chaoskey_release(struct inode *inode, struct file *file)
+ {
+ struct chaoskey *dev = file->private_data;
+ struct usb_interface *interface;
++ int rv = 0;
+
+ if (dev == NULL)
+ return -ENODEV;
+@@ -286,14 +296,15 @@ static int chaoskey_release(struct inode *inode, struct file *file)
+
+ usb_dbg(interface, "release");
+
++ mutex_lock(&chaoskey_list_lock);
+ mutex_lock(&dev->lock);
+
+ usb_dbg(interface, "open count at release is %d", dev->open);
+
+ if (dev->open <= 0) {
+ usb_dbg(interface, "invalid open count (%d)", dev->open);
+- mutex_unlock(&dev->lock);
+- return -ENODEV;
++ rv = -ENODEV;
++ goto bail;
+ }
+
+ --dev->open;
+@@ -302,13 +313,15 @@ static int chaoskey_release(struct inode *inode, struct file *file)
+ if (dev->open == 0) {
+ mutex_unlock(&dev->lock);
+ chaoskey_free(dev);
+- } else
+- mutex_unlock(&dev->lock);
+- } else
+- mutex_unlock(&dev->lock);
+-
++ goto destruction;
++ }
++ }
++bail:
++ mutex_unlock(&dev->lock);
++destruction:
++ mutex_unlock(&chaoskey_list_lock);
+ usb_dbg(interface, "release success");
+- return 0;
++ return rv;
+ }
+
+ static void chaos_read_callback(struct urb *urb)
+diff --git a/drivers/usb/misc/iowarrior.c b/drivers/usb/misc/iowarrior.c
+index 6d28467ce35227..365c1006934583 100644
+--- a/drivers/usb/misc/iowarrior.c
++++ b/drivers/usb/misc/iowarrior.c
+@@ -277,28 +277,45 @@ static ssize_t iowarrior_read(struct file *file, char __user *buffer,
+ struct iowarrior *dev;
+ int read_idx;
+ int offset;
++ int retval;
+
+ dev = file->private_data;
+
++ if (file->f_flags & O_NONBLOCK) {
++ retval = mutex_trylock(&dev->mutex);
++ if (!retval)
++ return -EAGAIN;
++ } else {
++ retval = mutex_lock_interruptible(&dev->mutex);
++ if (retval)
++ return -ERESTARTSYS;
++ }
++
+ /* verify that the device wasn't unplugged */
+- if (!dev || !dev->present)
+- return -ENODEV;
++ if (!dev->present) {
++ retval = -ENODEV;
++ goto exit;
++ }
+
+ dev_dbg(&dev->interface->dev, "minor %d, count = %zd\n",
+ dev->minor, count);
+
+ /* read count must be packet size (+ time stamp) */
+ if ((count != dev->report_size)
+- && (count != (dev->report_size + 1)))
+- return -EINVAL;
++ && (count != (dev->report_size + 1))) {
++ retval = -EINVAL;
++ goto exit;
++ }
+
+ /* repeat until no buffer overrun in callback handler occur */
+ do {
+ atomic_set(&dev->overflow_flag, 0);
+ if ((read_idx = read_index(dev)) == -1) {
+ /* queue empty */
+- if (file->f_flags & O_NONBLOCK)
+- return -EAGAIN;
++ if (file->f_flags & O_NONBLOCK) {
++ retval = -EAGAIN;
++ goto exit;
++ }
+ else {
+ //next line will return when there is either new data, or the device is unplugged
+ int r = wait_event_interruptible(dev->read_wait,
+@@ -309,28 +326,37 @@ static ssize_t iowarrior_read(struct file *file, char __user *buffer,
+ -1));
+ if (r) {
+ //we were interrupted by a signal
+- return -ERESTART;
++ retval = -ERESTART;
++ goto exit;
+ }
+ if (!dev->present) {
+ //The device was unplugged
+- return -ENODEV;
++ retval = -ENODEV;
++ goto exit;
+ }
+ if (read_idx == -1) {
+ // Can this happen ???
+- return 0;
++ retval = 0;
++ goto exit;
+ }
+ }
+ }
+
+ offset = read_idx * (dev->report_size + 1);
+ if (copy_to_user(buffer, dev->read_queue + offset, count)) {
+- return -EFAULT;
++ retval = -EFAULT;
++ goto exit;
+ }
+ } while (atomic_read(&dev->overflow_flag));
+
+ read_idx = ++read_idx == MAX_INTERRUPT_BUFFER ? 0 : read_idx;
+ atomic_set(&dev->read_idx, read_idx);
++ mutex_unlock(&dev->mutex);
+ return count;
++
++exit:
++ mutex_unlock(&dev->mutex);
++ return retval;
+ }
+
+ /*
+@@ -885,7 +911,6 @@ static int iowarrior_probe(struct usb_interface *interface,
+ static void iowarrior_disconnect(struct usb_interface *interface)
+ {
+ struct iowarrior *dev = usb_get_intfdata(interface);
+- int minor = dev->minor;
+
+ usb_deregister_dev(interface, &iowarrior_class);
+
+@@ -909,9 +934,6 @@ static void iowarrior_disconnect(struct usb_interface *interface)
+ mutex_unlock(&dev->mutex);
+ iowarrior_delete(dev);
+ }
+-
+- dev_info(&interface->dev, "I/O-Warror #%d now disconnected\n",
+- minor - IOWARRIOR_MINOR_BASE);
+ }
+
+ /* usb specific object needed to register this driver with the usb subsystem */
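The iowarrior_read() rework follows a common character-device pattern: under O_NONBLOCK take the lock with a trylock and return -EAGAIN on contention, otherwise block interruptibly, then funnel every exit through a single unlock. A user-space analog with POSIX threads (a sketch of the pattern, not the driver; pthreads has no interruptible lock, so a plain lock stands in for mutex_lock_interruptible()):

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t dev_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* Mirrors the driver's trylock-vs-block choice keyed off O_NONBLOCK. */
    static int read_locked(bool nonblock)
    {
        if (nonblock) {
            if (pthread_mutex_trylock(&dev_mutex))
                return -EAGAIN;           /* contended: caller may retry */
        } else {
            pthread_mutex_lock(&dev_mutex);
        }

        /* ... check device presence and copy data out under the lock ... */

        pthread_mutex_unlock(&dev_mutex); /* every path exits through here */
        return 0;
    }

    int main(void)
    {
        printf("blocking read: %d\n", read_locked(false));
        printf("nonblocking read: %d\n", read_locked(true));
        return 0;
    }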
+diff --git a/drivers/usb/misc/usb-ljca.c b/drivers/usb/misc/usb-ljca.c
+index 01ceafc4ab78ce..d9c21f7830557b 100644
+--- a/drivers/usb/misc/usb-ljca.c
++++ b/drivers/usb/misc/usb-ljca.c
+@@ -332,14 +332,11 @@ static int ljca_send(struct ljca_adapter *adap, u8 type, u8 cmd,
+
+ ret = usb_bulk_msg(adap->usb_dev, adap->tx_pipe, header,
+ msg_len, &transferred, LJCA_WRITE_TIMEOUT_MS);
+-
+- usb_autopm_put_interface(adap->intf);
+-
+ if (ret < 0)
+- goto out;
++ goto out_put;
+ if (transferred != msg_len) {
+ ret = -EIO;
+- goto out;
++ goto out_put;
+ }
+
+ if (ack) {
+@@ -347,11 +344,14 @@ static int ljca_send(struct ljca_adapter *adap, u8 type, u8 cmd,
+ timeout);
+ if (!ret) {
+ ret = -ETIMEDOUT;
+- goto out;
++ goto out_put;
+ }
+ }
+ ret = adap->actual_length;
+
++out_put:
++ usb_autopm_put_interface(adap->intf);
++
+ out:
+ spin_lock_irqsave(&adap->lock, flags);
+ adap->ex_buf = NULL;
+@@ -811,6 +811,14 @@ static int ljca_probe(struct usb_interface *interface,
+ if (ret)
+ goto err_free;
+
++ /*
++ * This works around problems with ov2740 initialization on some
++	 * Lenovo platforms. The autosuspend delay has to be smaller than
++ * the delay after setting the reset_gpio line in ov2740_resume().
++ * Otherwise the sensor randomly fails to initialize.
++ */
++ pm_runtime_set_autosuspend_delay(&usb_dev->dev, 10);
++
+ usb_enable_autosuspend(usb_dev);
+
+ return 0;
+diff --git a/drivers/usb/misc/yurex.c b/drivers/usb/misc/yurex.c
+index 6aebc736a80c66..70dff0db5354ff 100644
+--- a/drivers/usb/misc/yurex.c
++++ b/drivers/usb/misc/yurex.c
+@@ -441,7 +441,10 @@ static ssize_t yurex_write(struct file *file, const char __user *user_buffer,
+ if (count == 0)
+ goto error;
+
+- mutex_lock(&dev->io_mutex);
++ retval = mutex_lock_interruptible(&dev->io_mutex);
++ if (retval < 0)
++ return -EINTR;
++
+ if (dev->disconnected) { /* already disconnected */
+ mutex_unlock(&dev->io_mutex);
+ retval = -ENODEV;
+diff --git a/drivers/usb/musb/musb_gadget.c b/drivers/usb/musb/musb_gadget.c
+index bdf13911a1e590..c6076df0d50cc7 100644
+--- a/drivers/usb/musb/musb_gadget.c
++++ b/drivers/usb/musb/musb_gadget.c
+@@ -1161,12 +1161,19 @@ void musb_free_request(struct usb_ep *ep, struct usb_request *req)
+ */
+ void musb_ep_restart(struct musb *musb, struct musb_request *req)
+ {
++ u16 csr;
++ void __iomem *epio = req->ep->hw_ep->regs;
++
+ trace_musb_req_start(req);
+ musb_ep_select(musb->mregs, req->epnum);
+- if (req->tx)
++ if (req->tx) {
+ txstate(musb, req);
+- else
+- rxstate(musb, req);
++ } else {
++ csr = musb_readw(epio, MUSB_RXCSR);
++ csr |= MUSB_RXCSR_FLUSHFIFO | MUSB_RXCSR_P_WZC_BITS;
++ musb_writew(epio, MUSB_RXCSR, csr);
++ musb_writew(epio, MUSB_RXCSR, csr);
++ }
+ }
+
+ static int musb_ep_restart_resume_work(struct musb *musb, void *data)
+diff --git a/drivers/usb/typec/tcpm/wcove.c b/drivers/usb/typec/tcpm/wcove.c
+index cf719307b3f6b9..60b2766a69bf8a 100644
+--- a/drivers/usb/typec/tcpm/wcove.c
++++ b/drivers/usb/typec/tcpm/wcove.c
+@@ -621,10 +621,6 @@ static int wcove_typec_probe(struct platform_device *pdev)
+ if (irq < 0)
+ return irq;
+
+- irq = regmap_irq_get_virq(pmic->irq_chip_data_chgr, irq);
+- if (irq < 0)
+- return irq;
+-
+ ret = guid_parse(WCOVE_DSM_UUID, &wcove->guid);
+ if (ret)
+ return ret;
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index bccfc03b5986d7..fcb8e61136cfd7 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -644,6 +644,10 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+ uc->has_multiple_dp) {
+ con_index = (uc->last_cmd_sent >> 16) &
+ UCSI_CMD_CONNECTOR_MASK;
++ if (con_index == 0) {
++ ret = -EINVAL;
++ goto unlock;
++ }
+ con = &uc->ucsi->connector[con_index - 1];
+ ucsi_ccg_update_set_new_cam_cmd(uc, con, &command);
+ }
+@@ -651,6 +655,7 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+ ret = ucsi_sync_control_common(ucsi, command);
+
+ pm_runtime_put_sync(uc->dev);
++unlock:
+ mutex_unlock(&uc->lock);
+
+ return ret;
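ucsi_ccg_sync_control() above derives a 1-based connector index from the command word and then dereferences connector[con_index - 1]; the fix rejects index 0 before that subtraction walks off the front of the array. The shape of the guard, as a standalone sketch (mask and sizes invented for illustration):

    #include <stdint.h>
    #include <stdio.h>

    #define CMD_CONNECTOR_MASK 0x7F
    #define NUM_CONNECTORS     8

    struct connector { int id; };
    static struct connector connectors[NUM_CONNECTORS];

    static struct connector *lookup(uint64_t command)
    {
        unsigned int idx = (command >> 16) & CMD_CONNECTOR_MASK;

        /* 1-based on the wire: 0 would index connectors[-1]. */
        if (idx == 0 || idx > NUM_CONNECTORS)
            return NULL;
        return &connectors[idx - 1];
    }

    int main(void)
    {
        printf("%p\n", (void *)lookup(0));          /* NULL: rejected */
        printf("%p\n", (void *)lookup(1ULL << 16)); /* connectors[0] */
        return 0;
    }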
+diff --git a/drivers/usb/typec/ucsi/ucsi_glink.c b/drivers/usb/typec/ucsi/ucsi_glink.c
+index 03c0fa8edc8db5..f7000d383a4e62 100644
+--- a/drivers/usb/typec/ucsi/ucsi_glink.c
++++ b/drivers/usb/typec/ucsi/ucsi_glink.c
+@@ -185,7 +185,7 @@ static void pmic_glink_ucsi_connector_status(struct ucsi_connector *con)
+ struct pmic_glink_ucsi *ucsi = ucsi_get_drvdata(con->ucsi);
+ int orientation;
+
+- if (con->num >= PMIC_GLINK_MAX_PORTS ||
++ if (con->num > PMIC_GLINK_MAX_PORTS ||
+ !ucsi->port_orientation[con->num - 1])
+ return;
+
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index 7d0c83b5b07158..8455f08f5d4060 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -368,7 +368,6 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
+ unsigned long lgcd = 0;
+ int log_entity_size;
+ unsigned long size;
+- u64 start = 0;
+ int err;
+ struct page *pg;
+ unsigned int nsg;
+@@ -379,10 +378,9 @@ static int map_direct_mr(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_direct_mr
+ struct device *dma = mvdev->vdev.dma_dev;
+
+ for (map = vhost_iotlb_itree_first(iotlb, mr->start, mr->end - 1);
+- map; map = vhost_iotlb_itree_next(map, start, mr->end - 1)) {
++ map; map = vhost_iotlb_itree_next(map, mr->start, mr->end - 1)) {
+ size = maplen(map, mr);
+ lgcd = gcd(lgcd, size);
+- start += size;
+ }
+ log_entity_size = ilog2(lgcd);
+
+diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c
+index 41a4b0cf429756..7527e277c89897 100644
+--- a/drivers/vfio/pci/mlx5/cmd.c
++++ b/drivers/vfio/pci/mlx5/cmd.c
+@@ -423,6 +423,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf,
+ unsigned long filled;
+ unsigned int to_fill;
+ int ret;
++ int i;
+
+ to_fill = min_t(unsigned int, npages, PAGE_SIZE / sizeof(*page_list));
+ page_list = kvzalloc(to_fill * sizeof(*page_list), GFP_KERNEL_ACCOUNT);
+@@ -443,7 +444,7 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf,
+ GFP_KERNEL_ACCOUNT);
+
+ if (ret)
+- goto err;
++ goto err_append;
+ buf->allocated_length += filled * PAGE_SIZE;
+ /* clean input for another bulk allocation */
+ memset(page_list, 0, filled * sizeof(*page_list));
+@@ -454,6 +455,9 @@ static int mlx5vf_add_migration_pages(struct mlx5_vhca_data_buffer *buf,
+ kvfree(page_list);
+ return 0;
+
++err_append:
++ for (i = filled - 1; i >= 0; i--)
++ __free_page(page_list[i]);
+ err:
+ kvfree(page_list);
+ return ret;
+diff --git a/drivers/vfio/pci/mlx5/main.c b/drivers/vfio/pci/mlx5/main.c
+index 242c23eef452e8..8833e60d42f566 100644
+--- a/drivers/vfio/pci/mlx5/main.c
++++ b/drivers/vfio/pci/mlx5/main.c
+@@ -640,14 +640,11 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track)
+ O_RDONLY);
+ if (IS_ERR(migf->filp)) {
+ ret = PTR_ERR(migf->filp);
+- goto end;
++ kfree(migf);
++ return ERR_PTR(ret);
+ }
+
+ migf->mvdev = mvdev;
+- ret = mlx5vf_cmd_alloc_pd(migf);
+- if (ret)
+- goto out_free;
+-
+ stream_open(migf->filp->f_inode, migf->filp);
+ mutex_init(&migf->lock);
+ init_waitqueue_head(&migf->poll_wait);
+@@ -663,6 +660,11 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track)
+ INIT_LIST_HEAD(&migf->buf_list);
+ INIT_LIST_HEAD(&migf->avail_list);
+ spin_lock_init(&migf->list_lock);
++
++ ret = mlx5vf_cmd_alloc_pd(migf);
++ if (ret)
++ goto out;
++
+ ret = mlx5vf_cmd_query_vhca_migration_state(mvdev, &length, &full_size, 0);
+ if (ret)
+ goto out_pd;
+@@ -692,10 +694,8 @@ mlx5vf_pci_save_device_data(struct mlx5vf_pci_core_device *mvdev, bool track)
+ mlx5vf_free_data_buffer(buf);
+ out_pd:
+ mlx5fv_cmd_clean_migf_resources(migf);
+-out_free:
++out:
+ fput(migf->filp);
+-end:
+- kfree(migf);
+ return ERR_PTR(ret);
+ }
+
+@@ -1016,13 +1016,19 @@ mlx5vf_pci_resume_device_data(struct mlx5vf_pci_core_device *mvdev)
+ O_WRONLY);
+ if (IS_ERR(migf->filp)) {
+ ret = PTR_ERR(migf->filp);
+- goto end;
++ kfree(migf);
++ return ERR_PTR(ret);
+ }
+
++ stream_open(migf->filp->f_inode, migf->filp);
++ mutex_init(&migf->lock);
++ INIT_LIST_HEAD(&migf->buf_list);
++ INIT_LIST_HEAD(&migf->avail_list);
++ spin_lock_init(&migf->list_lock);
+ migf->mvdev = mvdev;
+ ret = mlx5vf_cmd_alloc_pd(migf);
+ if (ret)
+- goto out_free;
++ goto out;
+
+ buf = mlx5vf_alloc_data_buffer(migf, 0, DMA_TO_DEVICE);
+ if (IS_ERR(buf)) {
+@@ -1041,20 +1047,13 @@ mlx5vf_pci_resume_device_data(struct mlx5vf_pci_core_device *mvdev)
+ migf->buf_header[0] = buf;
+ migf->load_state = MLX5_VF_LOAD_STATE_READ_HEADER;
+
+- stream_open(migf->filp->f_inode, migf->filp);
+- mutex_init(&migf->lock);
+- INIT_LIST_HEAD(&migf->buf_list);
+- INIT_LIST_HEAD(&migf->avail_list);
+- spin_lock_init(&migf->list_lock);
+ return migf;
+ out_buf:
+ mlx5vf_free_data_buffer(migf->buf[0]);
+ out_pd:
+ mlx5vf_cmd_dealloc_pd(migf);
+-out_free:
++out:
+ fput(migf->filp);
+-end:
+- kfree(migf);
+ return ERR_PTR(ret);
+ }
+
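Both mlx5vf hunks above cure the same class of bug: error paths that unwound state which had never been set up, or freed migf after fput() had already dropped it. The reordering puts all infallible initialization (list heads, locks, stream_open()) before the first fallible step, so one short goto ladder can unwind exactly what happened. A generic sketch of that shape (names invented):

    #include <stdio.h>
    #include <stdlib.h>

    struct ctx { int *table; int *cache; };

    static int ctx_init(struct ctx *c)
    {
        /* Infallible bookkeeping first: nothing below needs undoing yet. */
        c->table = NULL;
        c->cache = NULL;

        c->table = malloc(64 * sizeof(*c->table));
        if (!c->table)
            return -1;

        c->cache = malloc(16 * sizeof(*c->cache));
        if (!c->cache)
            goto err_table;

        return 0;

    err_table:
        free(c->table);
        c->table = NULL;
        return -1;
    }

    int main(void)
    {
        struct ctx c;

        printf("init: %d\n", ctx_init(&c));
        free(c.cache);    /* free(NULL) is a no-op on the failure path */
        free(c.table);
        return 0;
    }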
+diff --git a/drivers/vfio/pci/vfio_pci_config.c b/drivers/vfio/pci/vfio_pci_config.c
+index 97422aafaa7b5d..ea2745c1ac5e68 100644
+--- a/drivers/vfio/pci/vfio_pci_config.c
++++ b/drivers/vfio/pci/vfio_pci_config.c
+@@ -313,6 +313,10 @@ static int vfio_virt_config_read(struct vfio_pci_core_device *vdev, int pos,
+ return count;
+ }
+
++static struct perm_bits direct_ro_perms = {
++ .readfn = vfio_direct_config_read,
++};
++
+ /* Default capability regions to read-only, no-virtualization */
+ static struct perm_bits cap_perms[PCI_CAP_ID_MAX + 1] = {
+ [0 ... PCI_CAP_ID_MAX] = { .readfn = vfio_direct_config_read }
+@@ -1897,9 +1901,17 @@ static ssize_t vfio_config_do_rw(struct vfio_pci_core_device *vdev, char __user
+ cap_start = *ppos;
+ } else {
+ if (*ppos >= PCI_CFG_SPACE_SIZE) {
+- WARN_ON(cap_id > PCI_EXT_CAP_ID_MAX);
++ /*
++ * We can get a cap_id that exceeds PCI_EXT_CAP_ID_MAX
++ * if we're hiding an unknown capability at the start
++			 * of the extended capability list. Use the default read-only
++			 * access, which will virtualize the id and next values.
++ */
++ if (cap_id > PCI_EXT_CAP_ID_MAX)
++ perm = &direct_ro_perms;
++ else
++ perm = &ecap_perms[cap_id];
+
+- perm = &ecap_perms[cap_id];
+ cap_start = vfio_find_cap_start(vdev, *ppos);
+ } else {
+ WARN_ON(cap_id > PCI_CAP_ID_MAX);
+diff --git a/drivers/video/fbdev/sh7760fb.c b/drivers/video/fbdev/sh7760fb.c
+index 3d2a27fefc874a..130adef2e46869 100644
+--- a/drivers/video/fbdev/sh7760fb.c
++++ b/drivers/video/fbdev/sh7760fb.c
+@@ -409,12 +409,11 @@ static int sh7760fb_alloc_mem(struct fb_info *info)
+ vram = PAGE_SIZE;
+
+ fbmem = dma_alloc_coherent(info->device, vram, &par->fbdma, GFP_KERNEL);
+-
+ if (!fbmem)
+ return -ENOMEM;
+
+ if ((par->fbdma & SH7760FB_DMA_MASK) != SH7760FB_DMA_MASK) {
+- sh7760fb_free_mem(info);
++ dma_free_coherent(info->device, vram, fbmem, par->fbdma);
+ dev_err(info->device, "kernel gave me memory at 0x%08lx, which is"
+ "unusable for the LCDC\n", (unsigned long)par->fbdma);
+ return -ENOMEM;
+diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
+index 684b9fe84fff5b..94c96bcfefe347 100644
+--- a/drivers/watchdog/Kconfig
++++ b/drivers/watchdog/Kconfig
+@@ -1509,7 +1509,7 @@ config 60XX_WDT
+
+ config SBC8360_WDT
+ tristate "SBC8360 Watchdog Timer"
+- depends on X86_32
++ depends on X86_32 && HAS_IOPORT
+ help
+
+ This is the driver for the hardware watchdog on the SBC8360 Single
+@@ -1522,7 +1522,7 @@ config SBC8360_WDT
+
+ config SBC7240_WDT
+ tristate "SBC Nano 7240 Watchdog Timer"
+- depends on X86_32 && !UML
++ depends on X86_32 && HAS_IOPORT
+ help
+ This is the driver for the hardware watchdog found on the IEI
+ single board computers EPIC Nano 7240 (and likely others). This
+diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
+index 9f097f1f4a4cf3..6d32ffb0113650 100644
+--- a/drivers/xen/xenbus/xenbus_probe.c
++++ b/drivers/xen/xenbus/xenbus_probe.c
+@@ -313,7 +313,7 @@ int xenbus_dev_probe(struct device *_dev)
+ if (err) {
+ dev_warn(&dev->dev, "watch_otherend on %s failed.\n",
+ dev->nodename);
+- return err;
++ goto fail_remove;
+ }
+
+ dev->spurious_threshold = 1;
+@@ -322,6 +322,12 @@ int xenbus_dev_probe(struct device *_dev)
+ dev->nodename);
+
+ return 0;
++fail_remove:
++ if (drv->remove) {
++ down(&dev->reclaim_sem);
++ drv->remove(dev);
++ up(&dev->reclaim_sem);
++ }
+ fail_put:
+ module_put(drv->driver.owner);
+ fail:
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index 06dc4a57ba78a7..0a216a078c3155 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -1251,6 +1251,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ }
+ reloc_func_desc = interp_load_addr;
+
++ allow_write_access(interpreter);
+ fput(interpreter);
+
+ kfree(interp_elf_ex);
+@@ -1347,6 +1348,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
+ kfree(interp_elf_ex);
+ kfree(interp_elf_phdata);
+ out_free_file:
++ allow_write_access(interpreter);
+ if (interpreter)
+ fput(interpreter);
+ out_free_ph:
+diff --git a/fs/binfmt_elf_fdpic.c b/fs/binfmt_elf_fdpic.c
+index 4fe5bb9f1b1f5e..7d35f0e1bc7641 100644
+--- a/fs/binfmt_elf_fdpic.c
++++ b/fs/binfmt_elf_fdpic.c
+@@ -394,6 +394,7 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm)
+ goto error;
+ }
+
++ allow_write_access(interpreter);
+ fput(interpreter);
+ interpreter = NULL;
+ }
+@@ -465,8 +466,10 @@ static int load_elf_fdpic_binary(struct linux_binprm *bprm)
+ retval = 0;
+
+ error:
+- if (interpreter)
++ if (interpreter) {
++ allow_write_access(interpreter);
+ fput(interpreter);
++ }
+ kfree(interpreter_name);
+ kfree(exec_params.phdrs);
+ kfree(exec_params.loadmap);
+diff --git a/fs/binfmt_misc.c b/fs/binfmt_misc.c
+index 31660d8cc2c610..6a3a16f910516c 100644
+--- a/fs/binfmt_misc.c
++++ b/fs/binfmt_misc.c
+@@ -247,10 +247,13 @@ static int load_misc_binary(struct linux_binprm *bprm)
+ if (retval < 0)
+ goto ret;
+
+- if (fmt->flags & MISC_FMT_OPEN_FILE)
++ if (fmt->flags & MISC_FMT_OPEN_FILE) {
+ interp_file = file_clone_open(fmt->interp_file);
+- else
++ if (!IS_ERR(interp_file))
++ deny_write_access(interp_file);
++ } else {
+ interp_file = open_exec(fmt->interpreter);
++ }
+ retval = PTR_ERR(interp_file);
+ if (IS_ERR(interp_file))
+ goto ret;
+diff --git a/fs/cachefiles/interface.c b/fs/cachefiles/interface.c
+index 35ba2117a6f652..3e63cfe1587472 100644
+--- a/fs/cachefiles/interface.c
++++ b/fs/cachefiles/interface.c
+@@ -327,6 +327,8 @@ static void cachefiles_commit_object(struct cachefiles_object *object,
+ static void cachefiles_clean_up_object(struct cachefiles_object *object,
+ struct cachefiles_cache *cache)
+ {
++ struct file *file;
++
+ if (test_bit(FSCACHE_COOKIE_RETIRED, &object->cookie->flags)) {
+ if (!test_bit(CACHEFILES_OBJECT_USING_TMPFILE, &object->flags)) {
+ cachefiles_see_object(object, cachefiles_obj_see_clean_delete);
+@@ -342,10 +344,14 @@ static void cachefiles_clean_up_object(struct cachefiles_object *object,
+ }
+
+ cachefiles_unmark_inode_in_use(object, object->file);
+- if (object->file) {
+- fput(object->file);
+- object->file = NULL;
+- }
++
++ spin_lock(&object->lock);
++ file = object->file;
++ object->file = NULL;
++ spin_unlock(&object->lock);
++
++ if (file)
++ fput(file);
+ }
+
+ /*
+diff --git a/fs/cachefiles/ondemand.c b/fs/cachefiles/ondemand.c
+index 470c9665838505..fe3de9ad57bf6d 100644
+--- a/fs/cachefiles/ondemand.c
++++ b/fs/cachefiles/ondemand.c
+@@ -60,26 +60,36 @@ static ssize_t cachefiles_ondemand_fd_write_iter(struct kiocb *kiocb,
+ {
+ struct cachefiles_object *object = kiocb->ki_filp->private_data;
+ struct cachefiles_cache *cache = object->volume->cache;
+- struct file *file = object->file;
+- size_t len = iter->count;
++ struct file *file;
++ size_t len = iter->count, aligned_len = len;
+ loff_t pos = kiocb->ki_pos;
+ const struct cred *saved_cred;
+ int ret;
+
+- if (!file)
++ spin_lock(&object->lock);
++ file = object->file;
++ if (!file) {
++ spin_unlock(&object->lock);
+ return -ENOBUFS;
++ }
++ get_file(file);
++ spin_unlock(&object->lock);
+
+ cachefiles_begin_secure(cache, &saved_cred);
+- ret = __cachefiles_prepare_write(object, file, &pos, &len, len, true);
++ ret = __cachefiles_prepare_write(object, file, &pos, &aligned_len, len, true);
+ cachefiles_end_secure(cache, saved_cred);
+ if (ret < 0)
+- return ret;
++ goto out;
+
+ trace_cachefiles_ondemand_fd_write(object, file_inode(file), pos, len);
+ ret = __cachefiles_write(object, file, pos, iter, NULL, NULL);
+- if (!ret)
++ if (!ret) {
+ ret = len;
++ kiocb->ki_pos += ret;
++ }
+
++out:
++ fput(file);
+ return ret;
+ }
+
+@@ -87,12 +97,22 @@ static loff_t cachefiles_ondemand_fd_llseek(struct file *filp, loff_t pos,
+ int whence)
+ {
+ struct cachefiles_object *object = filp->private_data;
+- struct file *file = object->file;
++ struct file *file;
++ loff_t ret;
+
+- if (!file)
++ spin_lock(&object->lock);
++ file = object->file;
++ if (!file) {
++ spin_unlock(&object->lock);
+ return -ENOBUFS;
++ }
++ get_file(file);
++ spin_unlock(&object->lock);
+
+- return vfs_llseek(file, pos, whence);
++ ret = vfs_llseek(file, pos, whence);
++ fput(file);
++
++ return ret;
+ }
+
+ static long cachefiles_ondemand_fd_ioctl(struct file *filp, unsigned int ioctl,
+diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c
+index 742b30b61c196f..0fe8d80ce5e8d3 100644
+--- a/fs/dlm/ast.c
++++ b/fs/dlm/ast.c
+@@ -30,7 +30,7 @@ static void dlm_run_callback(uint32_t ls_id, uint32_t lkb_id, int8_t mode,
+ trace_dlm_bast(ls_id, lkb_id, mode, res_name, res_length);
+ bastfn(astparam, mode);
+ } else if (flags & DLM_CB_CAST) {
+- trace_dlm_ast(ls_id, lkb_id, sb_status, sb_flags, res_name,
++ trace_dlm_ast(ls_id, lkb_id, sb_flags, sb_status, res_name,
+ res_length);
+ lksb->sb_status = sb_status;
+ lksb->sb_flags = sb_flags;
+diff --git a/fs/dlm/recoverd.c b/fs/dlm/recoverd.c
+index 34f4f9f49a6ce5..12272a8f6d75f3 100644
+--- a/fs/dlm/recoverd.c
++++ b/fs/dlm/recoverd.c
+@@ -151,7 +151,7 @@ static int ls_recover(struct dlm_ls *ls, struct dlm_recover *rv)
+ error = dlm_recover_members(ls, rv, &neg);
+ if (error) {
+ log_rinfo(ls, "dlm_recover_members error %d", error);
+- goto fail;
++ goto fail_root_list;
+ }
+
+ dlm_recover_dir_nodeid(ls, &root_list);
+diff --git a/fs/efs/super.c b/fs/efs/super.c
+index e4421c10caebe5..c59086b7eabfe9 100644
+--- a/fs/efs/super.c
++++ b/fs/efs/super.c
+@@ -15,7 +15,6 @@
+ #include <linux/vfs.h>
+ #include <linux/blkdev.h>
+ #include <linux/fs_context.h>
+-#include <linux/fs_parser.h>
+ #include "efs.h"
+ #include <linux/efs_vh.h>
+ #include <linux/efs_fs_sb.h>
+@@ -49,15 +48,6 @@ static struct pt_types sgi_pt_types[] = {
+ {0, NULL}
+ };
+
+-enum {
+- Opt_explicit_open,
+-};
+-
+-static const struct fs_parameter_spec efs_param_spec[] = {
+- fsparam_flag ("explicit-open", Opt_explicit_open),
+- {}
+-};
+-
+ /*
+ * File system definition and registration.
+ */
+@@ -67,7 +57,6 @@ static struct file_system_type efs_fs_type = {
+ .kill_sb = efs_kill_sb,
+ .fs_flags = FS_REQUIRES_DEV,
+ .init_fs_context = efs_init_fs_context,
+- .parameters = efs_param_spec,
+ };
+ MODULE_ALIAS_FS("efs");
+
+@@ -265,7 +254,8 @@ static int efs_fill_super(struct super_block *s, struct fs_context *fc)
+ if (!sb_set_blocksize(s, EFS_BLOCKSIZE)) {
+ pr_err("device does not support %d byte blocks\n",
+ EFS_BLOCKSIZE);
+- return -EINVAL;
++ return invalf(fc, "device does not support %d byte blocks\n",
++ EFS_BLOCKSIZE);
+ }
+
+ /* read the vh (volume header) block */
+@@ -327,43 +317,22 @@ static int efs_fill_super(struct super_block *s, struct fs_context *fc)
+ return 0;
+ }
+
+-static void efs_free_fc(struct fs_context *fc)
+-{
+- kfree(fc->fs_private);
+-}
+-
+ static int efs_get_tree(struct fs_context *fc)
+ {
+ return get_tree_bdev(fc, efs_fill_super);
+ }
+
+-static int efs_parse_param(struct fs_context *fc, struct fs_parameter *param)
+-{
+- int token;
+- struct fs_parse_result result;
+-
+- token = fs_parse(fc, efs_param_spec, param, &result);
+- if (token < 0)
+- return token;
+- return 0;
+-}
+-
+ static int efs_reconfigure(struct fs_context *fc)
+ {
+ sync_filesystem(fc->root->d_sb);
++ fc->sb_flags |= SB_RDONLY;
+
+ return 0;
+ }
+
+-struct efs_context {
+- unsigned long s_mount_opts;
+-};
+-
+ static const struct fs_context_operations efs_context_opts = {
+- .parse_param = efs_parse_param,
+ .get_tree = efs_get_tree,
+ .reconfigure = efs_reconfigure,
+- .free = efs_free_fc,
+ };
+
+ /*
+@@ -371,12 +340,6 @@ static const struct fs_context_operations efs_context_opts = {
+ */
+ static int efs_init_fs_context(struct fs_context *fc)
+ {
+- struct efs_context *ctx;
+-
+- ctx = kzalloc(sizeof(struct efs_context), GFP_KERNEL);
+- if (!ctx)
+- return -ENOMEM;
+- fc->fs_private = ctx;
+ fc->ops = &efs_context_opts;
+
+ return 0;
+diff --git a/fs/erofs/data.c b/fs/erofs/data.c
+index 61debd799cf904..fa51437e1d99d9 100644
+--- a/fs/erofs/data.c
++++ b/fs/erofs/data.c
+@@ -38,7 +38,7 @@ void *erofs_bread(struct erofs_buf *buf, erofs_off_t offset,
+ }
+ if (!folio || !folio_contains(folio, index)) {
+ erofs_put_metabuf(buf);
+- folio = read_mapping_folio(buf->mapping, index, NULL);
++ folio = read_mapping_folio(buf->mapping, index, buf->file);
+ if (IS_ERR(folio))
+ return folio;
+ }
+@@ -61,9 +61,11 @@ void erofs_init_metabuf(struct erofs_buf *buf, struct super_block *sb)
+ {
+ struct erofs_sb_info *sbi = EROFS_SB(sb);
+
+- if (erofs_is_fileio_mode(sbi))
+- buf->mapping = file_inode(sbi->fdev)->i_mapping;
+- else if (erofs_is_fscache_mode(sb))
++ buf->file = NULL;
++ if (erofs_is_fileio_mode(sbi)) {
++ buf->file = sbi->fdev; /* some fs like FUSE needs it */
++ buf->mapping = buf->file->f_mapping;
++ } else if (erofs_is_fscache_mode(sb))
+ buf->mapping = sbi->s_fscache->inode->i_mapping;
+ else
+ buf->mapping = sb->s_bdev->bd_mapping;
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index 4efd578d7c627b..9b03c8f323a762 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -221,6 +221,7 @@ enum erofs_kmap_type {
+
+ struct erofs_buf {
+ struct address_space *mapping;
++ struct file *file;
+ struct page *page;
+ void *base;
+ enum erofs_kmap_type kmap_type;
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index bed3dbe5b7cb8b..2dd7d819572f40 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -631,7 +631,11 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
+ errorfc(fc, "unsupported blksize for fscache mode");
+ return -EINVAL;
+ }
+- if (!sb_set_blocksize(sb, 1 << sbi->blkszbits)) {
++
++ if (erofs_is_fileio_mode(sbi)) {
++ sb->s_blocksize = 1 << sbi->blkszbits;
++ sb->s_blocksize_bits = sbi->blkszbits;
++ } else if (!sb_set_blocksize(sb, 1 << sbi->blkszbits)) {
+ errorfc(fc, "failed to set erofs blksize");
+ return -EINVAL;
+ }
+diff --git a/fs/erofs/zmap.c b/fs/erofs/zmap.c
+index a076cca1f54734..4535f2f0a0147e 100644
+--- a/fs/erofs/zmap.c
++++ b/fs/erofs/zmap.c
+@@ -219,7 +219,7 @@ static int z_erofs_load_compact_lcluster(struct z_erofs_maprecorder *m,
+ unsigned int amortizedshift;
+ erofs_off_t pos;
+
+- if (lcn >= totalidx)
++ if (lcn >= totalidx || vi->z_logical_clusterbits > 14)
+ return -EINVAL;
+
+ m->lcn = lcn;
+@@ -390,7 +390,7 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m)
+ u64 lcn = m->lcn, headlcn = map->m_la >> lclusterbits;
+ int err;
+
+- do {
++ while (1) {
+ /* handle the last EOF pcluster (no next HEAD lcluster) */
+ if ((lcn << lclusterbits) >= inode->i_size) {
+ map->m_llen = inode->i_size - map->m_la;
+@@ -402,14 +402,16 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m)
+ return err;
+
+ if (m->type == Z_EROFS_LCLUSTER_TYPE_NONHEAD) {
+- DBG_BUGON(!m->delta[1] &&
+- m->clusterofs != 1 << lclusterbits);
++ /* work around invalid d1 generated by pre-1.0 mkfs */
++ if (unlikely(!m->delta[1])) {
++ m->delta[1] = 1;
++ DBG_BUGON(1);
++ }
+ } else if (m->type == Z_EROFS_LCLUSTER_TYPE_PLAIN ||
+ m->type == Z_EROFS_LCLUSTER_TYPE_HEAD1 ||
+ m->type == Z_EROFS_LCLUSTER_TYPE_HEAD2) {
+- /* go on until the next HEAD lcluster */
+ if (lcn != headlcn)
+- break;
++ break; /* ends at the next HEAD lcluster */
+ m->delta[1] = 1;
+ } else {
+ erofs_err(inode->i_sb, "unknown type %u @ lcn %llu of nid %llu",
+@@ -418,8 +420,7 @@ static int z_erofs_get_extent_decompressedlen(struct z_erofs_maprecorder *m)
+ return -EOPNOTSUPP;
+ }
+ lcn += m->delta[1];
+- } while (m->delta[1]);
+-
++ }
+ map->m_llen = (lcn << lclusterbits) + m->clusterofs - map->m_la;
+ return 0;
+ }
+diff --git a/fs/exec.c b/fs/exec.c
+index 6c53920795c2e7..9c349a74f38589 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -883,7 +883,8 @@ EXPORT_SYMBOL(transfer_args_to_stack);
+ */
+ static struct file *do_open_execat(int fd, struct filename *name, int flags)
+ {
+- struct file *file;
++ int err;
++ struct file *file __free(fput) = NULL;
+ struct open_flags open_exec_flags = {
+ .open_flag = O_LARGEFILE | O_RDONLY | __FMODE_EXEC,
+ .acc_mode = MAY_EXEC,
+@@ -908,12 +909,14 @@ static struct file *do_open_execat(int fd, struct filename *name, int flags)
+ * an invariant that all non-regular files error out before we get here.
+ */
+ if (WARN_ON_ONCE(!S_ISREG(file_inode(file)->i_mode)) ||
+- path_noexec(&file->f_path)) {
+- fput(file);
++ path_noexec(&file->f_path))
+ return ERR_PTR(-EACCES);
+- }
+
+- return file;
++ err = deny_write_access(file);
++ if (err)
++ return ERR_PTR(err);
++
++ return no_free_ptr(file);
+ }
+
+ /**
+@@ -923,7 +926,8 @@ static struct file *do_open_execat(int fd, struct filename *name, int flags)
+ *
+ * Returns ERR_PTR on failure or allocated struct file on success.
+ *
+- * As this is a wrapper for the internal do_open_execat(). Also see
++ * As this is a wrapper for the internal do_open_execat(), callers
++ * must call allow_write_access() before fput() on release. Also see
+ * do_close_execat().
+ */
+ struct file *open_exec(const char *name)
+@@ -1475,8 +1479,10 @@ static int prepare_bprm_creds(struct linux_binprm *bprm)
+ /* Matches do_open_execat() */
+ static void do_close_execat(struct file *file)
+ {
+- if (file)
+- fput(file);
++ if (!file)
++ return;
++ allow_write_access(file);
++ fput(file);
+ }
+
+ static void free_bprm(struct linux_binprm *bprm)
+@@ -1801,6 +1807,7 @@ static int exec_binprm(struct linux_binprm *bprm)
+ bprm->file = bprm->interpreter;
+ bprm->interpreter = NULL;
+
++ allow_write_access(exec);
+ if (unlikely(bprm->have_execfd)) {
+ if (bprm->executable) {
+ fput(exec);
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index a25d7eb789f4cb..fb38769c3e39d1 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -584,6 +584,16 @@ static ssize_t exfat_file_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ if (ret < 0)
+ goto unlock;
+
++ if (iocb->ki_flags & IOCB_DIRECT) {
++ unsigned long align = pos | iov_iter_alignment(iter);
++
++ if (!IS_ALIGNED(align, i_blocksize(inode)) &&
++ !IS_ALIGNED(align, bdev_logical_block_size(inode->i_sb->s_bdev))) {
++ ret = -EINVAL;
++ goto unlock;
++ }
++ }
++
+ if (pos > valid_size) {
+ ret = exfat_extend_valid_size(file, pos);
+ if (ret < 0 && ret != -ENOSPC) {
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index 2c4c442293529b..337197ece59955 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -345,6 +345,7 @@ static int exfat_find_empty_entry(struct inode *inode,
+ if (ei->start_clu == EXFAT_EOF_CLUSTER) {
+ ei->start_clu = clu.dir;
+ p_dir->dir = clu.dir;
++ hint_femp.eidx = 0;
+ }
+
+ /* append to the FAT chain */
+@@ -637,14 +638,26 @@ static int exfat_find(struct inode *dir, struct qstr *qname,
+ info->size = le64_to_cpu(ep2->dentry.stream.valid_size);
+ info->valid_size = le64_to_cpu(ep2->dentry.stream.valid_size);
+ info->size = le64_to_cpu(ep2->dentry.stream.size);
++
++ info->start_clu = le32_to_cpu(ep2->dentry.stream.start_clu);
++ if (!is_valid_cluster(sbi, info->start_clu) && info->size) {
++ exfat_warn(sb, "start_clu is invalid cluster(0x%x)",
++ info->start_clu);
++ info->size = 0;
++ info->valid_size = 0;
++ }
++
++ if (info->valid_size > info->size) {
++ exfat_warn(sb, "valid_size(%lld) is greater than size(%lld)",
++ info->valid_size, info->size);
++ info->valid_size = info->size;
++ }
++
+ if (info->size == 0) {
+ info->flags = ALLOC_NO_FAT_CHAIN;
+ info->start_clu = EXFAT_EOF_CLUSTER;
+- } else {
++ } else
+ info->flags = ep2->dentry.stream.flags;
+- info->start_clu =
+- le32_to_cpu(ep2->dentry.stream.start_clu);
+- }
+
+ exfat_get_entry_time(sbi, &info->crtime,
+ ep->dentry.file.create_tz,
+diff --git a/fs/ext4/balloc.c b/fs/ext4/balloc.c
+index 591fb3f710be72..8042ad87380897 100644
+--- a/fs/ext4/balloc.c
++++ b/fs/ext4/balloc.c
+@@ -550,7 +550,8 @@ ext4_read_block_bitmap_nowait(struct super_block *sb, ext4_group_t block_group,
+ trace_ext4_read_block_bitmap_load(sb, block_group, ignore_locked);
+ ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO |
+ (ignore_locked ? REQ_RAHEAD : 0),
+- ext4_end_bitmap_read);
++ ext4_end_bitmap_read,
++ ext4_simulate_fail(sb, EXT4_SIM_BBITMAP_EIO));
+ return bh;
+ verify:
+ err = ext4_validate_block_bitmap(sb, desc, block_group, bh);
+@@ -577,7 +578,6 @@ int ext4_wait_block_bitmap(struct super_block *sb, ext4_group_t block_group,
+ if (!desc)
+ return -EFSCORRUPTED;
+ wait_on_buffer(bh);
+- ext4_simulate_fail_bh(sb, bh, EXT4_SIM_BBITMAP_EIO);
+ if (!buffer_uptodate(bh)) {
+ ext4_error_err(sb, EIO, "Cannot read block bitmap - "
+ "block_group = %u, block_bitmap = %llu",
+diff --git a/fs/ext4/ext4.h b/fs/ext4/ext4.h
+index 44b0d418143c2e..bbffb76d9a9049 100644
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -1865,14 +1865,6 @@ static inline bool ext4_simulate_fail(struct super_block *sb,
+ return false;
+ }
+
+-static inline void ext4_simulate_fail_bh(struct super_block *sb,
+- struct buffer_head *bh,
+- unsigned long code)
+-{
+- if (!IS_ERR(bh) && ext4_simulate_fail(sb, code))
+- clear_buffer_uptodate(bh);
+-}
+-
+ /*
+ * Error number codes for s_{first,last}_error_errno
+ *
+@@ -3100,9 +3092,9 @@ extern struct buffer_head *ext4_sb_bread(struct super_block *sb,
+ extern struct buffer_head *ext4_sb_bread_unmovable(struct super_block *sb,
+ sector_t block);
+ extern void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
+- bh_end_io_t *end_io);
++ bh_end_io_t *end_io, bool simu_fail);
+ extern int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
+- bh_end_io_t *end_io);
++ bh_end_io_t *end_io, bool simu_fail);
+ extern int ext4_read_bh_lock(struct buffer_head *bh, blk_opf_t op_flags, bool wait);
+ extern void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block);
+ extern int ext4_seq_options_show(struct seq_file *seq, void *offset);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 34e25eee65219c..88f98dc4402753 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -568,7 +568,7 @@ __read_extent_tree_block(const char *function, unsigned int line,
+
+ if (!bh_uptodate_or_lock(bh)) {
+ trace_ext4_ext_load_extent(inode, pblk, _RET_IP_);
+- err = ext4_read_bh(bh, 0, NULL);
++ err = ext4_read_bh(bh, 0, NULL, false);
+ if (err < 0)
+ goto errout;
+ }
+diff --git a/fs/ext4/fsmap.c b/fs/ext4/fsmap.c
+index df853c4d3a8c91..383c6edea6dd31 100644
+--- a/fs/ext4/fsmap.c
++++ b/fs/ext4/fsmap.c
+@@ -185,6 +185,56 @@ static inline ext4_fsblk_t ext4_fsmap_next_pblk(struct ext4_fsmap *fmr)
+ return fmr->fmr_physical + fmr->fmr_length;
+ }
+
++static int ext4_getfsmap_meta_helper(struct super_block *sb,
++ ext4_group_t agno, ext4_grpblk_t start,
++ ext4_grpblk_t len, void *priv)
++{
++ struct ext4_getfsmap_info *info = priv;
++ struct ext4_fsmap *p;
++ struct ext4_fsmap *tmp;
++ struct ext4_sb_info *sbi = EXT4_SB(sb);
++ ext4_fsblk_t fsb, fs_start, fs_end;
++ int error;
++
++ fs_start = fsb = (EXT4_C2B(sbi, start) +
++ ext4_group_first_block_no(sb, agno));
++ fs_end = fs_start + EXT4_C2B(sbi, len);
++
++ /* Return relevant extents from the meta_list */
++ list_for_each_entry_safe(p, tmp, &info->gfi_meta_list, fmr_list) {
++ if (p->fmr_physical < info->gfi_next_fsblk) {
++ list_del(&p->fmr_list);
++ kfree(p);
++ continue;
++ }
++ if (p->fmr_physical <= fs_start ||
++ p->fmr_physical + p->fmr_length <= fs_end) {
++ /* Emit the retained free extent record if present */
++ if (info->gfi_lastfree.fmr_owner) {
++ error = ext4_getfsmap_helper(sb, info,
++ &info->gfi_lastfree);
++ if (error)
++ return error;
++ info->gfi_lastfree.fmr_owner = 0;
++ }
++ error = ext4_getfsmap_helper(sb, info, p);
++ if (error)
++ return error;
++ fsb = p->fmr_physical + p->fmr_length;
++ if (info->gfi_next_fsblk < fsb)
++ info->gfi_next_fsblk = fsb;
++ list_del(&p->fmr_list);
++ kfree(p);
++ continue;
++ }
++ }
++ if (info->gfi_next_fsblk < fsb)
++ info->gfi_next_fsblk = fsb;
++
++ return 0;
++}
++
++
+ /* Transform a blockgroup's free record into a fsmap */
+ static int ext4_getfsmap_datadev_helper(struct super_block *sb,
+ ext4_group_t agno, ext4_grpblk_t start,
+@@ -539,6 +589,7 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
+ error = ext4_mballoc_query_range(sb, info->gfi_agno,
+ EXT4_B2C(sbi, info->gfi_low.fmr_physical),
+ EXT4_B2C(sbi, info->gfi_high.fmr_physical),
++ ext4_getfsmap_meta_helper,
+ ext4_getfsmap_datadev_helper, info);
+ if (error)
+ goto err;
+@@ -560,7 +611,8 @@ static int ext4_getfsmap_datadev(struct super_block *sb,
+
+ /* Report any gaps at the end of the bg */
+ info->gfi_last = true;
+- error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster, 0, info);
++ error = ext4_getfsmap_datadev_helper(sb, end_ag, last_cluster + 1,
++ 0, info);
+ if (error)
+ goto err;
+
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index 7f1a5f90dbbdff..21d228073d7954 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -193,8 +193,9 @@ ext4_read_inode_bitmap(struct super_block *sb, ext4_group_t block_group)
+ * submit the buffer_head for reading
+ */
+ trace_ext4_load_inode_bitmap(sb, block_group);
+- ext4_read_bh(bh, REQ_META | REQ_PRIO, ext4_end_bitmap_read);
+- ext4_simulate_fail_bh(sb, bh, EXT4_SIM_IBITMAP_EIO);
++ ext4_read_bh(bh, REQ_META | REQ_PRIO,
++ ext4_end_bitmap_read,
++ ext4_simulate_fail(sb, EXT4_SIM_IBITMAP_EIO));
+ if (!buffer_uptodate(bh)) {
+ put_bh(bh);
+ ext4_error_err(sb, EIO, "Cannot read inode bitmap - "
+diff --git a/fs/ext4/indirect.c b/fs/ext4/indirect.c
+index 7404f0935c9032..7de327fa7b1c51 100644
+--- a/fs/ext4/indirect.c
++++ b/fs/ext4/indirect.c
+@@ -170,7 +170,7 @@ static Indirect *ext4_get_branch(struct inode *inode, int depth,
+ }
+
+ if (!bh_uptodate_or_lock(bh)) {
+- if (ext4_read_bh(bh, 0, NULL) < 0) {
++ if (ext4_read_bh(bh, 0, NULL, false) < 0) {
+ put_bh(bh);
+ goto failure;
+ }
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 54bdd4884fe67d..99d09cd9c6a37e 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4497,10 +4497,10 @@ static int __ext4_get_inode_loc(struct super_block *sb, unsigned long ino,
+ * Read the block from disk.
+ */
+ trace_ext4_load_inode(sb, ino);
+- ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO, NULL);
++ ext4_read_bh_nowait(bh, REQ_META | REQ_PRIO, NULL,
++ ext4_simulate_fail(sb, EXT4_SIM_INODE_EIO));
+ blk_finish_plug(&plug);
+ wait_on_buffer(bh);
+- ext4_simulate_fail_bh(sb, bh, EXT4_SIM_INODE_EIO);
+ if (!buffer_uptodate(bh)) {
+ if (ret_block)
+ *ret_block = block;
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index d73e38323879ce..92f49d7eb3c001 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -6999,13 +6999,14 @@ int
+ ext4_mballoc_query_range(
+ struct super_block *sb,
+ ext4_group_t group,
+- ext4_grpblk_t start,
++ ext4_grpblk_t first,
+ ext4_grpblk_t end,
++ ext4_mballoc_query_range_fn meta_formatter,
+ ext4_mballoc_query_range_fn formatter,
+ void *priv)
+ {
+ void *bitmap;
+- ext4_grpblk_t next;
++ ext4_grpblk_t start, next;
+ struct ext4_buddy e4b;
+ int error;
+
+@@ -7016,10 +7017,19 @@ ext4_mballoc_query_range(
+
+ ext4_lock_group(sb, group);
+
+- start = max(e4b.bd_info->bb_first_free, start);
++ start = max(e4b.bd_info->bb_first_free, first);
+ if (end >= EXT4_CLUSTERS_PER_GROUP(sb))
+ end = EXT4_CLUSTERS_PER_GROUP(sb) - 1;
+-
++ if (meta_formatter && start != first) {
++ if (start > end)
++ start = end;
++ ext4_unlock_group(sb, group);
++ error = meta_formatter(sb, group, first, start - first,
++ priv);
++ if (error)
++ goto out_unload;
++ ext4_lock_group(sb, group);
++ }
+ while (start <= end) {
+ start = mb_find_next_zero_bit(bitmap, end + 1, start);
+ if (start > end)
+diff --git a/fs/ext4/mballoc.h b/fs/ext4/mballoc.h
+index d8553f1498d3cb..f8280de3e8820a 100644
+--- a/fs/ext4/mballoc.h
++++ b/fs/ext4/mballoc.h
+@@ -259,6 +259,7 @@ ext4_mballoc_query_range(
+ ext4_group_t agno,
+ ext4_grpblk_t start,
+ ext4_grpblk_t end,
++ ext4_mballoc_query_range_fn meta_formatter,
+ ext4_mballoc_query_range_fn formatter,
+ void *priv);
+
+diff --git a/fs/ext4/mmp.c b/fs/ext4/mmp.c
+index bd946d0c71b700..d64c04ed061ae9 100644
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -94,7 +94,7 @@ static int read_mmp_block(struct super_block *sb, struct buffer_head **bh,
+ }
+
+ lock_buffer(*bh);
+- ret = ext4_read_bh(*bh, REQ_META | REQ_PRIO, NULL);
++ ret = ext4_read_bh(*bh, REQ_META | REQ_PRIO, NULL, false);
+ if (ret)
+ goto warn_exit;
+
+diff --git a/fs/ext4/move_extent.c b/fs/ext4/move_extent.c
+index b64661ea6e0ed7..898443e98efc9e 100644
+--- a/fs/ext4/move_extent.c
++++ b/fs/ext4/move_extent.c
+@@ -213,7 +213,7 @@ static int mext_page_mkuptodate(struct folio *folio, size_t from, size_t to)
+ unlock_buffer(bh);
+ continue;
+ }
+- ext4_read_bh_nowait(bh, 0, NULL);
++ ext4_read_bh_nowait(bh, 0, NULL, false);
+ nr++;
+ } while (block++, (bh = bh->b_this_page) != head);
+
+diff --git a/fs/ext4/resize.c b/fs/ext4/resize.c
+index a2704f06436106..72f77f78ae8df3 100644
+--- a/fs/ext4/resize.c
++++ b/fs/ext4/resize.c
+@@ -1300,7 +1300,7 @@ static struct buffer_head *ext4_get_bitmap(struct super_block *sb, __u64 block)
+ if (unlikely(!bh))
+ return NULL;
+ if (!bh_uptodate_or_lock(bh)) {
+- if (ext4_read_bh(bh, 0, NULL) < 0) {
++ if (ext4_read_bh(bh, 0, NULL, false) < 0) {
+ brelse(bh);
+ return NULL;
+ }
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 16a4ce704460e1..940ac1a49b729e 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -161,8 +161,14 @@ MODULE_ALIAS("ext3");
+
+
+ static inline void __ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
+- bh_end_io_t *end_io)
++ bh_end_io_t *end_io, bool simu_fail)
+ {
++ if (simu_fail) {
++ clear_buffer_uptodate(bh);
++ unlock_buffer(bh);
++ return;
++ }
++
+ /*
+ * buffer's verified bit is no longer valid after reading from
+ * disk again due to write out error, clear it to make sure we
+@@ -176,7 +182,7 @@ static inline void __ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
+ }
+
+ void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
+- bh_end_io_t *end_io)
++ bh_end_io_t *end_io, bool simu_fail)
+ {
+ BUG_ON(!buffer_locked(bh));
+
+@@ -184,10 +190,11 @@ void ext4_read_bh_nowait(struct buffer_head *bh, blk_opf_t op_flags,
+ unlock_buffer(bh);
+ return;
+ }
+- __ext4_read_bh(bh, op_flags, end_io);
++ __ext4_read_bh(bh, op_flags, end_io, simu_fail);
+ }
+
+-int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, bh_end_io_t *end_io)
++int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags,
++ bh_end_io_t *end_io, bool simu_fail)
+ {
+ BUG_ON(!buffer_locked(bh));
+
+@@ -196,7 +203,7 @@ int ext4_read_bh(struct buffer_head *bh, blk_opf_t op_flags, bh_end_io_t *end_io
+ return 0;
+ }
+
+- __ext4_read_bh(bh, op_flags, end_io);
++ __ext4_read_bh(bh, op_flags, end_io, simu_fail);
+
+ wait_on_buffer(bh);
+ if (buffer_uptodate(bh))
+@@ -208,10 +215,10 @@ int ext4_read_bh_lock(struct buffer_head *bh, blk_opf_t op_flags, bool wait)
+ {
+ lock_buffer(bh);
+ if (!wait) {
+- ext4_read_bh_nowait(bh, op_flags, NULL);
++ ext4_read_bh_nowait(bh, op_flags, NULL, false);
+ return 0;
+ }
+- return ext4_read_bh(bh, op_flags, NULL);
++ return ext4_read_bh(bh, op_flags, NULL, false);
+ }
+
+ /*
+@@ -266,7 +273,7 @@ void ext4_sb_breadahead_unmovable(struct super_block *sb, sector_t block)
+
+ if (likely(bh)) {
+ if (trylock_buffer(bh))
+- ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL);
++ ext4_read_bh_nowait(bh, REQ_RAHEAD, NULL, false);
+ brelse(bh);
+ }
+ }
+@@ -346,9 +353,9 @@ __u32 ext4_free_group_clusters(struct super_block *sb,
+ __u32 ext4_free_inodes_count(struct super_block *sb,
+ struct ext4_group_desc *bg)
+ {
+- return le16_to_cpu(bg->bg_free_inodes_count_lo) |
++ return le16_to_cpu(READ_ONCE(bg->bg_free_inodes_count_lo)) |
+ (EXT4_DESC_SIZE(sb) >= EXT4_MIN_DESC_SIZE_64BIT ?
+- (__u32)le16_to_cpu(bg->bg_free_inodes_count_hi) << 16 : 0);
++ (__u32)le16_to_cpu(READ_ONCE(bg->bg_free_inodes_count_hi)) << 16 : 0);
+ }
+
+ __u32 ext4_used_dirs_count(struct super_block *sb,
+@@ -402,9 +409,9 @@ void ext4_free_group_clusters_set(struct super_block *sb,
+ void ext4_free_inodes_set(struct super_block *sb,
+ struct ext4_group_desc *bg, __u32 count)
+ {
+- bg->bg_free_inodes_count_lo = cpu_to_le16((__u16)count);
++ WRITE_ONCE(bg->bg_free_inodes_count_lo, cpu_to_le16((__u16)count));
+ if (EXT4_DESC_SIZE(sb) >= EXT4_MIN_DESC_SIZE_64BIT)
+- bg->bg_free_inodes_count_hi = cpu_to_le16(count >> 16);
++ WRITE_ONCE(bg->bg_free_inodes_count_hi, cpu_to_le16(count >> 16));
+ }
+
+ void ext4_used_dirs_set(struct super_block *sb,
+@@ -6518,9 +6525,6 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ goto restore_opts;
+ }
+
+- if (test_opt2(sb, ABORT))
+- ext4_abort(sb, ESHUTDOWN, "Abort forced by user");
+-
+ sb->s_flags = (sb->s_flags & ~SB_POSIXACL) |
+ (test_opt(sb, POSIX_ACL) ? SB_POSIXACL : 0);
+
+@@ -6689,6 +6693,14 @@ static int __ext4_remount(struct fs_context *fc, struct super_block *sb)
+ if (!ext4_has_feature_mmp(sb) || sb_rdonly(sb))
+ ext4_stop_mmpd(sbi);
+
++ /*
++ * Handle aborting the filesystem as the last thing during remount to
++ * avoid obscure errors during remount when some option changes fail to
++ * apply because the filesystem has been shut down.
++ */
++ if (test_opt2(sb, ABORT))
++ ext4_abort(sb, ESHUTDOWN, "Abort forced by user");
++
+ return 0;
+
+ restore_opts:
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 7f76460b721f2c..efda9a0229816b 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -32,7 +32,7 @@ void f2fs_stop_checkpoint(struct f2fs_sb_info *sbi, bool end_io,
+ f2fs_build_fault_attr(sbi, 0, 0);
+ if (!end_io)
+ f2fs_flush_merged_writes(sbi);
+- f2fs_handle_critical_error(sbi, reason, end_io);
++ f2fs_handle_critical_error(sbi, reason);
+ }
+
+ /*
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 94f7b084f60164..9efe4c00d75bb3 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -1676,7 +1676,8 @@ int f2fs_map_blocks(struct inode *inode, struct f2fs_map_blocks *map, int flag)
+ /* reserved delalloc block should be mapped for fiemap. */
+ if (blkaddr == NEW_ADDR)
+ map->m_flags |= F2FS_MAP_DELALLOC;
+- if (flag != F2FS_GET_BLOCK_DIO || !is_hole)
++ /* In the DIO read hole case, the blocks should not be mapped. */
++ if (!(flag == F2FS_GET_BLOCK_DIO && is_hole && !map->m_may_create))
+ map->m_flags |= F2FS_MAP_MAPPED;
+
+ map->m_pblk = blkaddr;
+@@ -1901,25 +1902,6 @@ static int f2fs_xattr_fiemap(struct inode *inode,
+ return (err < 0 ? err : 0);
+ }
+
+-static loff_t max_inode_blocks(struct inode *inode)
+-{
+- loff_t result = ADDRS_PER_INODE(inode);
+- loff_t leaf_count = ADDRS_PER_BLOCK(inode);
+-
+- /* two direct node blocks */
+- result += (leaf_count * 2);
+-
+- /* two indirect node blocks */
+- leaf_count *= NIDS_PER_BLOCK;
+- result += (leaf_count * 2);
+-
+- /* one double indirect node block */
+- leaf_count *= NIDS_PER_BLOCK;
+- result += leaf_count;
+-
+- return result;
+-}
+-
+ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ u64 start, u64 len)
+ {
+@@ -1992,8 +1974,7 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ if (!compr_cluster && !(map.m_flags & F2FS_MAP_FLAGS)) {
+ start_blk = next_pgofs;
+
+- if (blks_to_bytes(inode, start_blk) < blks_to_bytes(inode,
+- max_inode_blocks(inode)))
++ if (blks_to_bytes(inode, start_blk) < maxbytes)
+ goto prep_next;
+
+ flags |= FIEMAP_EXTENT_LAST;
+@@ -2385,10 +2366,10 @@ static int f2fs_mpage_readpages(struct inode *inode,
+ .nr_cpages = 0,
+ };
+ pgoff_t nc_cluster_idx = NULL_CLUSTER;
++ pgoff_t index;
+ #endif
+ unsigned nr_pages = rac ? readahead_count(rac) : 1;
+ unsigned max_nr_pages = nr_pages;
+- pgoff_t index;
+ int ret = 0;
+
+ map.m_pblk = 0;
+@@ -2406,9 +2387,9 @@ static int f2fs_mpage_readpages(struct inode *inode,
+ prefetchw(&folio->flags);
+ }
+
++#ifdef CONFIG_F2FS_FS_COMPRESSION
+ index = folio_index(folio);
+
+-#ifdef CONFIG_F2FS_FS_COMPRESSION
+ if (!f2fs_compressed_file(inode))
+ goto read_single_page;
+
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 33f5449dc22d50..93a5e1c24e566e 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3632,8 +3632,7 @@ int f2fs_quota_sync(struct super_block *sb, int type);
+ loff_t max_file_blocks(struct inode *inode);
+ void f2fs_quota_off_umount(struct super_block *sb);
+ void f2fs_save_errors(struct f2fs_sb_info *sbi, unsigned char flag);
+-void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
+- bool irq_context);
++void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason);
+ void f2fs_handle_error(struct f2fs_sb_info *sbi, unsigned char error);
+ void f2fs_handle_error_async(struct f2fs_sb_info *sbi, unsigned char error);
+ int f2fs_commit_super(struct f2fs_sb_info *sbi, bool recover);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 321d8ffbab6e4b..71ddecaf771f81 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -863,7 +863,11 @@ static bool f2fs_force_buffered_io(struct inode *inode, int rw)
+ return true;
+ if (f2fs_compressed_file(inode))
+ return true;
+- if (f2fs_has_inline_data(inode))
++ /*
++ * Only force direct reads to use buffered IO; direct writes
++ * expect inline data conversion before committing IO.
++ */
++ if (f2fs_has_inline_data(inode) && rw == READ)
+ return true;
+
+ /* disallow direct IO if any of devices has unaligned blksize */
+@@ -2343,9 +2347,12 @@ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
+ if (readonly)
+ goto out;
+
+- /* grab sb->s_umount to avoid racing w/ remount() */
++ /*
++ * grab sb->s_umount to avoid racing w/ remount() and other shutdown
++ * paths.
++ */
+ if (need_lock)
+- down_read(&sbi->sb->s_umount);
++ down_write(&sbi->sb->s_umount);
+
+ f2fs_stop_gc_thread(sbi);
+ f2fs_stop_discard_thread(sbi);
+@@ -2354,7 +2361,7 @@ int f2fs_do_shutdown(struct f2fs_sb_info *sbi, unsigned int flag,
+ clear_opt(sbi, DISCARD);
+
+ if (need_lock)
+- up_read(&sbi->sb->s_umount);
++ up_write(&sbi->sb->s_umount);
+
+ f2fs_update_time(sbi, REQ_TIME);
+ out:
+@@ -3792,7 +3799,7 @@ static int reserve_compress_blocks(struct dnode_of_data *dn, pgoff_t count,
+ to_reserved = cluster_size - compr_blocks - reserved;
+
+ /* for the case all blocks in cluster were reserved */
+- if (to_reserved == 1) {
++ if (reserved && to_reserved == 1) {
+ dn->ofs_in_node += cluster_size;
+ goto next;
+ }
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index 9322a7200e310d..e0469316c7cd4e 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -257,6 +257,8 @@ static int select_gc_type(struct f2fs_sb_info *sbi, int gc_type)
+
+ switch (sbi->gc_mode) {
+ case GC_IDLE_CB:
++ case GC_URGENT_LOW:
++ case GC_URGENT_MID:
+ gc_mode = GC_CB;
+ break;
+ case GC_IDLE_GREEDY:
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 59b13ff243fa80..af36c6d6542b8c 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -905,6 +905,16 @@ static int truncate_node(struct dnode_of_data *dn)
+ if (err)
+ return err;
+
++ if (ni.blk_addr != NEW_ADDR &&
++ !f2fs_is_valid_blkaddr(sbi, ni.blk_addr, DATA_GENERIC_ENHANCE)) {
++ f2fs_err_ratelimited(sbi,
++ "nat entry is corrupted, run fsck to fix it, ino:%u, "
++ "nid:%u, blkaddr:%u", ni.ino, ni.nid, ni.blk_addr);
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ f2fs_handle_error(sbi, ERROR_INCONSISTENT_NAT);
++ return -EFSCORRUPTED;
++ }
++
+ /* Deallocate node address */
+ f2fs_invalidate_blocks(sbi, ni.blk_addr);
+ dec_valid_node_count(sbi, dn->inode, dn->nid == dn->inode->i_ino);
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index 1766254279d24c..edf205093f4358 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -2926,7 +2926,8 @@ static int change_curseg(struct f2fs_sb_info *sbi, int type)
+ struct f2fs_summary_block *sum_node;
+ struct page *sum_page;
+
+- write_sum_page(sbi, curseg->sum_blk, GET_SUM_BLOCK(sbi, curseg->segno));
++ if (curseg->inited)
++ write_sum_page(sbi, curseg->sum_blk, GET_SUM_BLOCK(sbi, curseg->segno));
+
+ __set_test_and_inuse(sbi, new_segno);
+
+@@ -3977,8 +3978,8 @@ void f2fs_do_replace_block(struct f2fs_sb_info *sbi, struct f2fs_summary *sum,
+ }
+ }
+
+- f2fs_bug_on(sbi, !IS_DATASEG(type));
+ curseg = CURSEG_I(sbi, type);
++ f2fs_bug_on(sbi, !IS_DATASEG(curseg->seg_type));
+
+ mutex_lock(&curseg->curseg_mutex);
+ down_write(&sit_i->sentry_lock);
+diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h
+index 71adb4a43bec53..51b2b8c5c749c5 100644
+--- a/fs/f2fs/segment.h
++++ b/fs/f2fs/segment.h
+@@ -559,18 +559,21 @@ static inline int reserved_sections(struct f2fs_sb_info *sbi)
+ }
+
+ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
+- unsigned int node_blocks, unsigned int dent_blocks)
++ unsigned int node_blocks, unsigned int data_blocks,
++ unsigned int dent_blocks)
+ {
+
+- unsigned segno, left_blocks;
++ unsigned int segno, left_blocks, blocks;
+ int i;
+
+- /* check current node sections in the worst case. */
+- for (i = CURSEG_HOT_NODE; i <= CURSEG_COLD_NODE; i++) {
++ /* check current data/node sections in the worst case. */
++ for (i = CURSEG_HOT_DATA; i < NR_PERSISTENT_LOG; i++) {
+ segno = CURSEG_I(sbi, i)->segno;
+ left_blocks = CAP_BLKS_PER_SEC(sbi) -
+ get_ckpt_valid_blocks(sbi, segno, true);
+- if (node_blocks > left_blocks)
++
++ blocks = i <= CURSEG_COLD_DATA ? data_blocks : node_blocks;
++ if (blocks > left_blocks)
+ return false;
+ }
+
+@@ -584,8 +587,9 @@ static inline bool has_curseg_enough_space(struct f2fs_sb_info *sbi,
+ }
+
+ /*
+- * calculate needed sections for dirty node/dentry
+- * and call has_curseg_enough_space
++ * calculate needed sections for dirty node/dentry and call
++ * has_curseg_enough_space, please note that, it needs to account
++ * dirty data as well in lfs mode when checkpoint is disabled.
+ */
+ static inline void __get_secs_required(struct f2fs_sb_info *sbi,
+ unsigned int *lower_p, unsigned int *upper_p, bool *curseg_p)
+@@ -594,19 +598,30 @@ static inline void __get_secs_required(struct f2fs_sb_info *sbi,
+ get_pages(sbi, F2FS_DIRTY_DENTS) +
+ get_pages(sbi, F2FS_DIRTY_IMETA);
+ unsigned int total_dent_blocks = get_pages(sbi, F2FS_DIRTY_DENTS);
++ unsigned int total_data_blocks = 0;
+ unsigned int node_secs = total_node_blocks / CAP_BLKS_PER_SEC(sbi);
+ unsigned int dent_secs = total_dent_blocks / CAP_BLKS_PER_SEC(sbi);
++ unsigned int data_secs = 0;
+ unsigned int node_blocks = total_node_blocks % CAP_BLKS_PER_SEC(sbi);
+ unsigned int dent_blocks = total_dent_blocks % CAP_BLKS_PER_SEC(sbi);
++ unsigned int data_blocks = 0;
++
++ if (f2fs_lfs_mode(sbi) &&
++ unlikely(is_sbi_flag_set(sbi, SBI_CP_DISABLED))) {
++ total_data_blocks = get_pages(sbi, F2FS_DIRTY_DATA);
++ data_secs = total_data_blocks / CAP_BLKS_PER_SEC(sbi);
++ data_blocks = total_data_blocks % CAP_BLKS_PER_SEC(sbi);
++ }
+
+ if (lower_p)
+- *lower_p = node_secs + dent_secs;
++ *lower_p = node_secs + dent_secs + data_secs;
+ if (upper_p)
+ *upper_p = node_secs + dent_secs +
+- (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0);
++ (node_blocks ? 1 : 0) + (dent_blocks ? 1 : 0) +
++ (data_blocks ? 1 : 0);
+ if (curseg_p)
+ *curseg_p = has_curseg_enough_space(sbi,
+- node_blocks, dent_blocks);
++ node_blocks, data_blocks, dent_blocks);
+ }
+
+ static inline bool has_not_enough_free_secs(struct f2fs_sb_info *sbi,
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 87ab5696bd482c..983fdd98fc3755 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -150,6 +150,8 @@ enum {
+ Opt_mode,
+ Opt_fault_injection,
+ Opt_fault_type,
++ Opt_lazytime,
++ Opt_nolazytime,
+ Opt_quota,
+ Opt_noquota,
+ Opt_usrquota,
+@@ -226,6 +228,8 @@ static match_table_t f2fs_tokens = {
+ {Opt_mode, "mode=%s"},
+ {Opt_fault_injection, "fault_injection=%u"},
+ {Opt_fault_type, "fault_type=%u"},
++ {Opt_lazytime, "lazytime"},
++ {Opt_nolazytime, "nolazytime"},
+ {Opt_quota, "quota"},
+ {Opt_noquota, "noquota"},
+ {Opt_usrquota, "usrquota"},
+@@ -918,6 +922,12 @@ static int parse_options(struct super_block *sb, char *options, bool is_remount)
+ f2fs_info(sbi, "fault_type options not supported");
+ break;
+ #endif
++ case Opt_lazytime:
++ sb->s_flags |= SB_LAZYTIME;
++ break;
++ case Opt_nolazytime:
++ sb->s_flags &= ~SB_LAZYTIME;
++ break;
+ #ifdef CONFIG_QUOTA
+ case Opt_quota:
+ case Opt_usrquota:
+@@ -3322,7 +3332,7 @@ loff_t max_file_blocks(struct inode *inode)
+ * fit within U32_MAX + 1 data units.
+ */
+
+- result = min(result, F2FS_BYTES_TO_BLK(((loff_t)U32_MAX + 1) * 4096));
++ result = umin(result, F2FS_BYTES_TO_BLK(((loff_t)U32_MAX + 1) * 4096));
+
+ return result;
+ }
+@@ -4155,8 +4165,7 @@ static bool system_going_down(void)
+ || system_state == SYSTEM_RESTART;
+ }
+
+-void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
+- bool irq_context)
++void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason)
+ {
+ struct super_block *sb = sbi->sb;
+ bool shutdown = reason == STOP_CP_REASON_SHUTDOWN;
+@@ -4168,10 +4177,12 @@ void f2fs_handle_critical_error(struct f2fs_sb_info *sbi, unsigned char reason,
+ if (!f2fs_hw_is_readonly(sbi)) {
+ save_stop_reason(sbi, reason);
+
+- if (irq_context && !shutdown)
+- schedule_work(&sbi->s_error_work);
+- else
+- f2fs_record_stop_reason(sbi);
++ /*
++ * always create an asynchronous task to record stop_reason
++ * in order to avoid potential deadlock when running into
++ * f2fs_record_stop_reason() synchronously.
++ */
++ schedule_work(&sbi->s_error_work);
+ }
+
+ /*
+@@ -4991,9 +5002,6 @@ static int __init init_f2fs_fs(void)
+ err = f2fs_init_shrinker();
+ if (err)
+ goto free_sysfs;
+- err = register_filesystem(&f2fs_fs_type);
+- if (err)
+- goto free_shrinker;
+ f2fs_create_root_stats();
+ err = f2fs_init_post_read_processing();
+ if (err)
+@@ -5016,7 +5024,12 @@ static int __init init_f2fs_fs(void)
+ err = f2fs_create_casefold_cache();
+ if (err)
+ goto free_compress_cache;
++ err = register_filesystem(&f2fs_fs_type);
++ if (err)
++ goto free_casefold_cache;
+ return 0;
++free_casefold_cache:
++ f2fs_destroy_casefold_cache();
+ free_compress_cache:
+ f2fs_destroy_compress_cache();
+ free_compress_mempool:
+@@ -5031,8 +5044,6 @@ static int __init init_f2fs_fs(void)
+ f2fs_destroy_post_read_processing();
+ free_root_stats:
+ f2fs_destroy_root_stats();
+- unregister_filesystem(&f2fs_fs_type);
+-free_shrinker:
+ f2fs_exit_shrinker();
+ free_sysfs:
+ f2fs_exit_sysfs();
+@@ -5056,6 +5067,7 @@ static int __init init_f2fs_fs(void)
+
+ static void __exit exit_f2fs_fs(void)
+ {
++ unregister_filesystem(&f2fs_fs_type);
+ f2fs_destroy_casefold_cache();
+ f2fs_destroy_compress_cache();
+ f2fs_destroy_compress_mempool();
+@@ -5064,7 +5076,6 @@ static void __exit exit_f2fs_fs(void)
+ f2fs_destroy_iostat_processing();
+ f2fs_destroy_post_read_processing();
+ f2fs_destroy_root_stats();
+- unregister_filesystem(&f2fs_fs_type);
+ f2fs_exit_shrinker();
+ f2fs_exit_sysfs();
+ f2fs_destroy_garbage_collection_cache();
+diff --git a/fs/fcntl.c b/fs/fcntl.c
+index 22dd9dcce7ecc8..3d89de31066ae0 100644
+--- a/fs/fcntl.c
++++ b/fs/fcntl.c
+@@ -397,6 +397,9 @@ static long f_dupfd_query(int fd, struct file *filp)
+ {
+ CLASS(fd_raw, f)(fd);
+
++ if (fd_empty(f))
++ return -EBADF;
++
+ /*
+ * We can do the 'fdput()' immediately, as the only thing that
+ * matters is the pointer value which isn't changed by the fdput.
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index dafdf766b1d535..e20d91d0ae558c 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -645,7 +645,7 @@ void fuse_read_args_fill(struct fuse_io_args *ia, struct file *file, loff_t pos,
+ args->out_args[0].size = count;
+ }
+
+-static void fuse_release_user_pages(struct fuse_args_pages *ap,
++static void fuse_release_user_pages(struct fuse_args_pages *ap, ssize_t nres,
+ bool should_dirty)
+ {
+ unsigned int i;
+@@ -656,6 +656,9 @@ static void fuse_release_user_pages(struct fuse_args_pages *ap,
+ if (ap->args.is_pinned)
+ unpin_user_page(ap->pages[i]);
+ }
++
++ if (nres > 0 && ap->args.invalidate_vmap)
++ invalidate_kernel_vmap_range(ap->args.vmap_base, nres);
+ }
+
+ static void fuse_io_release(struct kref *kref)
+@@ -754,25 +757,29 @@ static void fuse_aio_complete_req(struct fuse_mount *fm, struct fuse_args *args,
+ struct fuse_io_args *ia = container_of(args, typeof(*ia), ap.args);
+ struct fuse_io_priv *io = ia->io;
+ ssize_t pos = -1;
+-
+- fuse_release_user_pages(&ia->ap, io->should_dirty);
++ size_t nres;
+
+ if (err) {
+ /* Nothing */
+ } else if (io->write) {
+ if (ia->write.out.size > ia->write.in.size) {
+ err = -EIO;
+- } else if (ia->write.in.size != ia->write.out.size) {
+- pos = ia->write.in.offset - io->offset +
+- ia->write.out.size;
++ } else {
++ nres = ia->write.out.size;
++ if (ia->write.in.size != ia->write.out.size)
++ pos = ia->write.in.offset - io->offset +
++ ia->write.out.size;
+ }
+ } else {
+ u32 outsize = args->out_args[0].size;
+
++ nres = outsize;
+ if (ia->read.in.size != outsize)
+ pos = ia->read.in.offset - io->offset + outsize;
+ }
+
++ fuse_release_user_pages(&ia->ap, err ?: nres, io->should_dirty);
++
+ fuse_aio_complete(io, err, pos);
+ fuse_io_free(ia);
+ }
+@@ -1468,24 +1475,37 @@ static inline size_t fuse_get_frag_size(const struct iov_iter *ii,
+
+ static int fuse_get_user_pages(struct fuse_args_pages *ap, struct iov_iter *ii,
+ size_t *nbytesp, int write,
+- unsigned int max_pages)
++ unsigned int max_pages,
++ bool use_pages_for_kvec_io)
+ {
++ bool flush_or_invalidate = false;
+ size_t nbytes = 0; /* # bytes already packed in req */
+ ssize_t ret = 0;
+
+- /* Special case for kernel I/O: can copy directly into the buffer */
++ /* Special case for kernel I/O: can copy directly into the buffer.
++ * However, if the implementation of fuse_conn requires pages instead of
++ * a pointer (e.g., virtio-fs), use iov_iter_extract_pages() instead.
++ */
+ if (iov_iter_is_kvec(ii)) {
+- unsigned long user_addr = fuse_get_user_addr(ii);
+- size_t frag_size = fuse_get_frag_size(ii, *nbytesp);
++ void *user_addr = (void *)fuse_get_user_addr(ii);
+
+- if (write)
+- ap->args.in_args[1].value = (void *) user_addr;
+- else
+- ap->args.out_args[0].value = (void *) user_addr;
++ if (!use_pages_for_kvec_io) {
++ size_t frag_size = fuse_get_frag_size(ii, *nbytesp);
+
+- iov_iter_advance(ii, frag_size);
+- *nbytesp = frag_size;
+- return 0;
++ if (write)
++ ap->args.in_args[1].value = user_addr;
++ else
++ ap->args.out_args[0].value = user_addr;
++
++ iov_iter_advance(ii, frag_size);
++ *nbytesp = frag_size;
++ return 0;
++ }
++
++ if (is_vmalloc_addr(user_addr)) {
++ ap->args.vmap_base = user_addr;
++ flush_or_invalidate = true;
++ }
+ }
+
+ while (nbytes < *nbytesp && ap->num_pages < max_pages) {
+@@ -1514,6 +1534,10 @@ static int fuse_get_user_pages(struct fuse_args_pages *ap, struct iov_iter *ii,
+ (PAGE_SIZE - ret) & (PAGE_SIZE - 1);
+ }
+
++ if (write && flush_or_invalidate)
++ flush_kernel_vmap_range(ap->args.vmap_base, nbytes);
++
++ ap->args.invalidate_vmap = !write && flush_or_invalidate;
+ ap->args.is_pinned = iov_iter_extract_will_pin(ii);
+ ap->args.user_pages = true;
+ if (write)
+@@ -1582,7 +1606,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
+ size_t nbytes = min(count, nmax);
+
+ err = fuse_get_user_pages(&ia->ap, iter, &nbytes, write,
+- max_pages);
++ max_pages, fc->use_pages_for_kvec_io);
+ if (err && !nbytes)
+ break;
+
+@@ -1596,7 +1620,7 @@ ssize_t fuse_direct_io(struct fuse_io_priv *io, struct iov_iter *iter,
+ }
+
+ if (!io->async || nres < 0) {
+- fuse_release_user_pages(&ia->ap, io->should_dirty);
++ fuse_release_user_pages(&ia->ap, nres, io->should_dirty);
+ fuse_io_free(ia);
+ }
+ ia = NULL;
+diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
+index e6cc3d552b1382..28cf319c1c25cf 100644
+--- a/fs/fuse/fuse_i.h
++++ b/fs/fuse/fuse_i.h
+@@ -309,9 +309,12 @@ struct fuse_args {
+ bool may_block:1;
+ bool is_ext:1;
+ bool is_pinned:1;
++ bool invalidate_vmap:1;
+ struct fuse_in_arg in_args[3];
+ struct fuse_arg out_args[2];
+ void (*end)(struct fuse_mount *fm, struct fuse_args *args, int error);
++ /* Used for kvec iter backed by vmalloc address */
++ void *vmap_base;
+ };
+
+ struct fuse_args_pages {
+@@ -857,6 +860,9 @@ struct fuse_conn {
+ /** Passthrough support for read/write IO */
+ unsigned int passthrough:1;
+
++ /* Use pages instead of pointer for kernel I/O */
++ unsigned int use_pages_for_kvec_io:1;
++
+ /** Maximum stack depth for passthrough backing files */
+ int max_stack_depth;
+
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index 6404a189e98900..d220e28e755fef 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -1691,6 +1691,7 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
+ fc->delete_stale = true;
+ fc->auto_submounts = true;
+ fc->sync_fs = true;
++ fc->use_pages_for_kvec_io = true;
+
+ /* Tell FUSE to split requests that exceed the virtqueue's size */
+ fc->max_pages_limit = min_t(unsigned int, fc->max_pages_limit,
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 269c3bc7fced71..a51fe42732c4c2 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -1013,14 +1013,15 @@ bool gfs2_queue_try_to_evict(struct gfs2_glock *gl)
+ &gl->gl_delete, 0);
+ }
+
+-static bool gfs2_queue_verify_evict(struct gfs2_glock *gl)
++bool gfs2_queue_verify_delete(struct gfs2_glock *gl, bool later)
+ {
+ struct gfs2_sbd *sdp = gl->gl_name.ln_sbd;
++ unsigned long delay;
+
+- if (test_and_set_bit(GLF_VERIFY_EVICT, &gl->gl_flags))
++ if (test_and_set_bit(GLF_VERIFY_DELETE, &gl->gl_flags))
+ return false;
+- return queue_delayed_work(sdp->sd_delete_wq,
+- &gl->gl_delete, 5 * HZ);
++ delay = later ? 5 * HZ : 0;
++ return queue_delayed_work(sdp->sd_delete_wq, &gl->gl_delete, delay);
+ }
+
+ static void delete_work_func(struct work_struct *work)
+@@ -1052,19 +1053,19 @@ static void delete_work_func(struct work_struct *work)
+ if (gfs2_try_evict(gl)) {
+ if (test_bit(SDF_KILL, &sdp->sd_flags))
+ goto out;
+- if (gfs2_queue_verify_evict(gl))
++ if (gfs2_queue_verify_delete(gl, true))
+ return;
+ }
+ goto out;
+ }
+
+- if (test_and_clear_bit(GLF_VERIFY_EVICT, &gl->gl_flags)) {
++ if (test_and_clear_bit(GLF_VERIFY_DELETE, &gl->gl_flags)) {
+ inode = gfs2_lookup_by_inum(sdp, no_addr, gl->gl_no_formal_ino,
+ GFS2_BLKST_UNLINKED);
+ if (IS_ERR(inode)) {
+ if (PTR_ERR(inode) == -EAGAIN &&
+ !test_bit(SDF_KILL, &sdp->sd_flags) &&
+- gfs2_queue_verify_evict(gl))
++ gfs2_queue_verify_delete(gl, true))
+ return;
+ } else {
+ d_prune_aliases(inode);
+@@ -2118,7 +2119,7 @@ static void glock_hash_walk(glock_examiner examiner, const struct gfs2_sbd *sdp)
+ void gfs2_cancel_delete_work(struct gfs2_glock *gl)
+ {
+ clear_bit(GLF_TRY_TO_EVICT, &gl->gl_flags);
+- clear_bit(GLF_VERIFY_EVICT, &gl->gl_flags);
++ clear_bit(GLF_VERIFY_DELETE, &gl->gl_flags);
+ if (cancel_delayed_work(&gl->gl_delete))
+ gfs2_glock_put(gl);
+ }
+@@ -2371,7 +2372,7 @@ static const char *gflags2str(char *buf, const struct gfs2_glock *gl)
+ *p++ = 'N';
+ if (test_bit(GLF_TRY_TO_EVICT, gflags))
+ *p++ = 'e';
+- if (test_bit(GLF_VERIFY_EVICT, gflags))
++ if (test_bit(GLF_VERIFY_DELETE, gflags))
+ *p++ = 'E';
+ *p = 0;
+ return buf;
+diff --git a/fs/gfs2/glock.h b/fs/gfs2/glock.h
+index adf0091cc98f95..63e101d448e961 100644
+--- a/fs/gfs2/glock.h
++++ b/fs/gfs2/glock.h
+@@ -245,6 +245,7 @@ static inline int gfs2_glock_nq_init(struct gfs2_glock *gl,
+ void gfs2_glock_cb(struct gfs2_glock *gl, unsigned int state);
+ void gfs2_glock_complete(struct gfs2_glock *gl, int ret);
+ bool gfs2_queue_try_to_evict(struct gfs2_glock *gl);
++bool gfs2_queue_verify_delete(struct gfs2_glock *gl, bool later);
+ void gfs2_cancel_delete_work(struct gfs2_glock *gl);
+ void gfs2_flush_delete_work(struct gfs2_sbd *sdp);
+ void gfs2_gl_hash_clear(struct gfs2_sbd *sdp);
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index aa4ef67a34e037..bd1348bff90ebe 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -329,7 +329,7 @@ enum {
+ GLF_BLOCKING = 15,
+ GLF_UNLOCKED = 16, /* Wait for glock to be unlocked */
+ GLF_TRY_TO_EVICT = 17, /* iopen glocks only */
+- GLF_VERIFY_EVICT = 18, /* iopen glocks only */
++ GLF_VERIFY_DELETE = 18, /* iopen glocks only */
+ };
+
+ struct gfs2_glock {
+diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
+index 29c77281676526..53930312971530 100644
+--- a/fs/gfs2/rgrp.c
++++ b/fs/gfs2/rgrp.c
+@@ -1879,7 +1879,7 @@ static void try_rgrp_unlink(struct gfs2_rgrpd *rgd, u64 *last_unlinked, u64 skip
+ */
+ ip = gl->gl_object;
+
+- if (ip || !gfs2_queue_try_to_evict(gl))
++ if (ip || !gfs2_queue_verify_delete(gl, false))
+ gfs2_glock_put(gl);
+ else
+ found++;
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 6678060ed4d2bb..e22c1edc32b39e 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1045,7 +1045,7 @@ static int gfs2_drop_inode(struct inode *inode)
+ struct gfs2_glock *gl = ip->i_iopen_gh.gh_gl;
+
+ gfs2_glock_hold(gl);
+- if (!gfs2_queue_try_to_evict(gl))
++ if (!gfs2_queue_verify_delete(gl, true))
+ gfs2_glock_put_async(gl);
+ return 0;
+ }
+diff --git a/fs/hfsplus/hfsplus_fs.h b/fs/hfsplus/hfsplus_fs.h
+index 59ce81dca73fce..5389918bbf29db 100644
+--- a/fs/hfsplus/hfsplus_fs.h
++++ b/fs/hfsplus/hfsplus_fs.h
+@@ -156,6 +156,7 @@ struct hfsplus_sb_info {
+
+ /* Runtime variables */
+ u32 blockoffset;
++ u32 min_io_size;
+ sector_t part_start;
+ sector_t sect_count;
+ int fs_shift;
+@@ -307,7 +308,7 @@ struct hfsplus_readdir_data {
+ */
+ static inline unsigned short hfsplus_min_io_size(struct super_block *sb)
+ {
+- return max_t(unsigned short, bdev_logical_block_size(sb->s_bdev),
++ return max_t(unsigned short, HFSPLUS_SB(sb)->min_io_size,
+ HFSPLUS_SECTOR_SIZE);
+ }
+
+diff --git a/fs/hfsplus/wrapper.c b/fs/hfsplus/wrapper.c
+index 9592ffcb44e5ea..74801911bc1cc4 100644
+--- a/fs/hfsplus/wrapper.c
++++ b/fs/hfsplus/wrapper.c
+@@ -172,6 +172,8 @@ int hfsplus_read_wrapper(struct super_block *sb)
+ if (!blocksize)
+ goto out;
+
++ sbi->min_io_size = blocksize;
++
+ if (hfsplus_get_last_session(sb, &part_start, &part_size))
+ goto out;
+
+diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c
+index 6d1cf2436ead68..084f6ed2dd7a69 100644
+--- a/fs/hostfs/hostfs_kern.c
++++ b/fs/hostfs/hostfs_kern.c
+@@ -471,8 +471,8 @@ static int hostfs_write_begin(struct file *file, struct address_space *mapping,
+
+ *foliop = __filemap_get_folio(mapping, index, FGP_WRITEBEGIN,
+ mapping_gfp_mask(mapping));
+- if (!*foliop)
+- return -ENOMEM;
++ if (IS_ERR(*foliop))
++ return PTR_ERR(*foliop);
+ return 0;
+ }
+
+diff --git a/fs/isofs/inode.c b/fs/isofs/inode.c
+index f50311a6b4299d..47038e6608123c 100644
+--- a/fs/isofs/inode.c
++++ b/fs/isofs/inode.c
+@@ -948,8 +948,6 @@ static int isofs_fill_super(struct super_block *s, struct fs_context *fc)
+ goto out_no_inode;
+ }
+
+- kfree(opt->iocharset);
+-
+ return 0;
+
+ /*
+@@ -987,7 +985,6 @@ static int isofs_fill_super(struct super_block *s, struct fs_context *fc)
+ brelse(bh);
+ brelse(pri_bh);
+ out_freesbi:
+- kfree(opt->iocharset);
+ kfree(sbi);
+ s->s_fs_info = NULL;
+ return error;
+@@ -1528,7 +1525,10 @@ static int isofs_get_tree(struct fs_context *fc)
+
+ static void isofs_free_fc(struct fs_context *fc)
+ {
+- kfree(fc->fs_private);
++ struct isofs_options *opt = fc->fs_private;
++
++ kfree(opt->iocharset);
++ kfree(opt);
+ }
+
+ static const struct fs_context_operations isofs_context_ops = {
+diff --git a/fs/jffs2/erase.c b/fs/jffs2/erase.c
+index acd32f05b51988..ef3a1e1b6cb065 100644
+--- a/fs/jffs2/erase.c
++++ b/fs/jffs2/erase.c
+@@ -338,10 +338,9 @@ static int jffs2_block_check_erase(struct jffs2_sb_info *c, struct jffs2_erasebl
+ } while(--retlen);
+ mtd_unpoint(c->mtd, jeb->offset, c->sector_size);
+ if (retlen) {
+- pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08tx\n",
+- *wordebuf,
+- jeb->offset +
+- c->sector_size-retlen * sizeof(*wordebuf));
++ *bad_offset = jeb->offset + c->sector_size - retlen * sizeof(*wordebuf);
++ pr_warn("Newly-erased block contained word 0x%lx at offset 0x%08x\n",
++ *wordebuf, *bad_offset);
+ return -EIO;
+ }
+ return 0;
+diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c
+index 0fb05e314edf60..24afbae87225a7 100644
+--- a/fs/jfs/xattr.c
++++ b/fs/jfs/xattr.c
+@@ -559,7 +559,7 @@ static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size)
+
+ size_check:
+ if (EALIST_SIZE(ea_buf->xattr) != ea_size) {
+- int size = min_t(int, EALIST_SIZE(ea_buf->xattr), ea_size);
++ int size = clamp_t(int, ea_size, 0, EALIST_SIZE(ea_buf->xattr));
+
+ printk(KERN_ERR "ea_get: invalid extended attribute\n");
+ print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1,
+diff --git a/fs/netfs/fscache_volume.c b/fs/netfs/fscache_volume.c
+index cb75c07b5281a5..ced14ac78cc1c2 100644
+--- a/fs/netfs/fscache_volume.c
++++ b/fs/netfs/fscache_volume.c
+@@ -322,8 +322,7 @@ void fscache_create_volume(struct fscache_volume *volume, bool wait)
+ }
+ return;
+ no_wait:
+- clear_bit_unlock(FSCACHE_VOLUME_CREATING, &volume->flags);
+- wake_up_bit(&volume->flags, FSCACHE_VOLUME_CREATING);
++ clear_and_wake_up_bit(FSCACHE_VOLUME_CREATING, &volume->flags);
+ }
+
+ /*
+diff --git a/fs/nfs/blocklayout/blocklayout.c b/fs/nfs/blocklayout/blocklayout.c
+index 0becdec129704f..47189476b5538b 100644
+--- a/fs/nfs/blocklayout/blocklayout.c
++++ b/fs/nfs/blocklayout/blocklayout.c
+@@ -571,19 +571,32 @@ bl_find_get_deviceid(struct nfs_server *server,
+ if (!node)
+ return ERR_PTR(-ENODEV);
+
++ /*
++ * Devices that are marked unavailable are left in the cache with a
++ * timeout to avoid sending GETDEVINFO after every LAYOUTGET, or
++ * constantly attempting to register the device. Once marked as
++ * unavailable they must be deleted and never reused.
++ */
+ if (test_bit(NFS_DEVICEID_UNAVAILABLE, &node->flags)) {
+ unsigned long end = jiffies;
+ unsigned long start = end - PNFS_DEVICE_RETRY_TIMEOUT;
+
+ if (!time_in_range(node->timestamp_unavailable, start, end)) {
++ /* Uncork subsequent GETDEVINFO operations for this device */
+ nfs4_delete_deviceid(node->ld, node->nfs_client, id);
+ goto retry;
+ }
+ goto out_put;
+ }
+
+- if (!bl_register_dev(container_of(node, struct pnfs_block_dev, node)))
++ if (!bl_register_dev(container_of(node, struct pnfs_block_dev, node))) {
++ /*
++ * If we cannot register, treat this device as transient:
++ * Make a negative cache entry for the device
++ */
++ nfs4_mark_deviceid_unavailable(node);
+ goto out_put;
++ }
+
+ return node;
+
+diff --git a/fs/nfs/blocklayout/dev.c b/fs/nfs/blocklayout/dev.c
+index 6252f44479457b..cab8809f0e0f48 100644
+--- a/fs/nfs/blocklayout/dev.c
++++ b/fs/nfs/blocklayout/dev.c
+@@ -20,9 +20,6 @@ static void bl_unregister_scsi(struct pnfs_block_dev *dev)
+ const struct pr_ops *ops = bdev->bd_disk->fops->pr_ops;
+ int status;
+
+- if (!test_and_clear_bit(PNFS_BDEV_REGISTERED, &dev->flags))
+- return;
+-
+ status = ops->pr_register(bdev, dev->pr_key, 0, false);
+ if (status)
+ trace_bl_pr_key_unreg_err(bdev, dev->pr_key, status);
+@@ -58,7 +55,8 @@ static void bl_unregister_dev(struct pnfs_block_dev *dev)
+ return;
+ }
+
+- if (dev->type == PNFS_BLOCK_VOLUME_SCSI)
++ if (dev->type == PNFS_BLOCK_VOLUME_SCSI &&
++ test_and_clear_bit(PNFS_BDEV_REGISTERED, &dev->flags))
+ bl_unregister_scsi(dev);
+ }
+
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 430733e3eff260..6bcc4b0e00ab72 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -12,7 +12,7 @@
+ #include <linux/nfslocalio.h>
+ #include <linux/wait_bit.h>
+
+-#define NFS_SB_MASK (SB_RDONLY|SB_NOSUID|SB_NODEV|SB_NOEXEC|SB_SYNCHRONOUS)
++#define NFS_SB_MASK (SB_NOSUID|SB_NODEV|SB_NOEXEC|SB_SYNCHRONOUS)
+
+ extern const struct export_operations nfs_export_ops;
+
+diff --git a/fs/nfs/localio.c b/fs/nfs/localio.c
+index 8f0ce82a677e15..637528e6368ef7 100644
+--- a/fs/nfs/localio.c
++++ b/fs/nfs/localio.c
+@@ -354,6 +354,12 @@ nfs_local_read_done(struct nfs_local_kiocb *iocb, long status)
+
+ nfs_local_pgio_done(hdr, status);
+
++ /*
++ * Must clear replen otherwise NFSv3 data corruption will occur
++ * if/when switching from LOCALIO back to using normal RPC.
++ */
++ hdr->res.replen = 0;
++
+ if (hdr->res.count != hdr->args.count ||
+ hdr->args.offset + hdr->res.count >= i_size_read(file_inode(filp)))
+ hdr->res.eof = true;
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 9d40319e063dea..405f17e6e0b45b 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -2603,12 +2603,14 @@ static void nfs4_open_release(void *calldata)
+ struct nfs4_opendata *data = calldata;
+ struct nfs4_state *state = NULL;
+
++ /* In case of error, no cleanup! */
++ if (data->rpc_status != 0 || !data->rpc_done) {
++ nfs_release_seqid(data->o_arg.seqid);
++ goto out_free;
++ }
+ /* If this request hasn't been cancelled, do nothing */
+ if (!data->cancelled)
+ goto out_free;
+- /* In case of error, no cleanup! */
+- if (data->rpc_status != 0 || !data->rpc_done)
+- goto out_free;
+ /* In case we need an open_confirm, no cleanup! */
+ if (data->o_res.rflags & NFS4_OPEN_RESULT_CONFIRM)
+ goto out_free;
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index ead2dc55952dba..82ae2b85d393cb 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -144,6 +144,31 @@ static void nfs_io_completion_put(struct nfs_io_completion *ioc)
+ kref_put(&ioc->refcount, nfs_io_completion_release);
+ }
+
++static void
++nfs_page_set_inode_ref(struct nfs_page *req, struct inode *inode)
++{
++ if (!test_and_set_bit(PG_INODE_REF, &req->wb_flags)) {
++ kref_get(&req->wb_kref);
++ atomic_long_inc(&NFS_I(inode)->nrequests);
++ }
++}
++
++static int
++nfs_cancel_remove_inode(struct nfs_page *req, struct inode *inode)
++{
++ int ret;
++
++ if (!test_bit(PG_REMOVE, &req->wb_flags))
++ return 0;
++ ret = nfs_page_group_lock(req);
++ if (ret)
++ return ret;
++ if (test_and_clear_bit(PG_REMOVE, &req->wb_flags))
++ nfs_page_set_inode_ref(req, inode);
++ nfs_page_group_unlock(req);
++ return 0;
++}
++
+ /**
+ * nfs_folio_find_head_request - find head request associated with a folio
+ * @folio: pointer to folio
+@@ -540,7 +565,6 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+ struct inode *inode = folio->mapping->host;
+ struct nfs_page *head, *subreq;
+ struct nfs_commit_info cinfo;
+- bool removed;
+ int ret;
+
+ /*
+@@ -565,18 +589,18 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+ goto retry;
+ }
+
+- ret = nfs_page_group_lock(head);
++ ret = nfs_cancel_remove_inode(head, inode);
+ if (ret < 0)
+ goto out_unlock;
+
+- removed = test_bit(PG_REMOVE, &head->wb_flags);
++ ret = nfs_page_group_lock(head);
++ if (ret < 0)
++ goto out_unlock;
+
+ /* lock each request in the page group */
+ for (subreq = head->wb_this_page;
+ subreq != head;
+ subreq = subreq->wb_this_page) {
+- if (test_bit(PG_REMOVE, &subreq->wb_flags))
+- removed = true;
+ ret = nfs_page_group_lock_subreq(head, subreq);
+ if (ret < 0)
+ goto out_unlock;
+@@ -584,21 +608,6 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+
+ nfs_page_group_unlock(head);
+
+- /*
+- * If PG_REMOVE is set on any request, I/O on that request has
+- * completed, but some requests were still under I/O at the time
+- * we locked the head request.
+- *
+- * In that case the above wait for all requests means that all I/O
+- * has now finished, and we can restart from a clean slate. Let the
+- * old requests go away and start from scratch instead.
+- */
+- if (removed) {
+- nfs_unroll_locks(head, head);
+- nfs_unlock_and_release_request(head);
+- goto retry;
+- }
+-
+ nfs_init_cinfo_from_inode(&cinfo, inode);
+ nfs_join_page_group(head, &cinfo, inode);
+ return head;
+diff --git a/fs/nfs_common/nfslocalio.c b/fs/nfs_common/nfslocalio.c
+index 09404d142d1ae6..a74ec08f6c96d0 100644
+--- a/fs/nfs_common/nfslocalio.c
++++ b/fs/nfs_common/nfslocalio.c
+@@ -155,11 +155,9 @@ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *uuid,
+ /* We have an implied reference to net thanks to nfsd_serv_try_get */
+ localio = nfs_to->nfsd_open_local_fh(net, uuid->dom, rpc_clnt,
+ cred, nfs_fh, fmode);
+- if (IS_ERR(localio)) {
+- rcu_read_lock();
+- nfs_to->nfsd_serv_put(net);
+- rcu_read_unlock();
+- }
++ if (IS_ERR(localio))
++ nfs_to_nfsd_net_put(net);
++
+ return localio;
+ }
+ EXPORT_SYMBOL_GPL(nfs_open_local_fh);
+diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c
+index c82d8e3e0d4f28..984f8e6379dd47 100644
+--- a/fs/nfsd/export.c
++++ b/fs/nfsd/export.c
+@@ -40,15 +40,24 @@
+ #define EXPKEY_HASHMAX (1 << EXPKEY_HASHBITS)
+ #define EXPKEY_HASHMASK (EXPKEY_HASHMAX -1)
+
+-static void expkey_put(struct kref *ref)
++static void expkey_put_work(struct work_struct *work)
+ {
+- struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);
++ struct svc_expkey *key =
++ container_of(to_rcu_work(work), struct svc_expkey, ek_rcu_work);
+
+ if (test_bit(CACHE_VALID, &key->h.flags) &&
+ !test_bit(CACHE_NEGATIVE, &key->h.flags))
+ path_put(&key->ek_path);
+ auth_domain_put(key->ek_client);
+- kfree_rcu(key, ek_rcu);
++ kfree(key);
++}
++
++static void expkey_put(struct kref *ref)
++{
++ struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);
++
++ INIT_RCU_WORK(&key->ek_rcu_work, expkey_put_work);
++ queue_rcu_work(system_wq, &key->ek_rcu_work);
+ }
+
+ static int expkey_upcall(struct cache_detail *cd, struct cache_head *h)
+@@ -355,16 +364,26 @@ static void export_stats_destroy(struct export_stats *stats)
+ EXP_STATS_COUNTERS_NUM);
+ }
+
+-static void svc_export_put(struct kref *ref)
++static void svc_export_put_work(struct work_struct *work)
+ {
+- struct svc_export *exp = container_of(ref, struct svc_export, h.ref);
++ struct svc_export *exp =
++ container_of(to_rcu_work(work), struct svc_export, ex_rcu_work);
++
+ path_put(&exp->ex_path);
+ auth_domain_put(exp->ex_client);
+ nfsd4_fslocs_free(&exp->ex_fslocs);
+ export_stats_destroy(exp->ex_stats);
+ kfree(exp->ex_stats);
+ kfree(exp->ex_uuid);
+- kfree_rcu(exp, ex_rcu);
++ kfree(exp);
++}
++
++static void svc_export_put(struct kref *ref)
++{
++ struct svc_export *exp = container_of(ref, struct svc_export, h.ref);
++
++ INIT_RCU_WORK(&exp->ex_rcu_work, svc_export_put_work);
++ queue_rcu_work(system_wq, &exp->ex_rcu_work);
+ }
+
+ static int svc_export_upcall(struct cache_detail *cd, struct cache_head *h)
+diff --git a/fs/nfsd/export.h b/fs/nfsd/export.h
+index 3794ae253a7016..081afb68681e14 100644
+--- a/fs/nfsd/export.h
++++ b/fs/nfsd/export.h
+@@ -75,7 +75,7 @@ struct svc_export {
+ u32 ex_layout_types;
+ struct nfsd4_deviceid_map *ex_devid_map;
+ struct cache_detail *cd;
+- struct rcu_head ex_rcu;
++ struct rcu_work ex_rcu_work;
+ unsigned long ex_xprtsec_modes;
+ struct export_stats *ex_stats;
+ };
+@@ -92,7 +92,7 @@ struct svc_expkey {
+ u32 ek_fsid[6];
+
+ struct path ek_path;
+- struct rcu_head ek_rcu;
++ struct rcu_work ek_rcu_work;
+ };
+
+ #define EX_ISSYNC(exp) (!((exp)->ex_flags & NFSEXP_ASYNC))
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index 2e6783f6371245..146a9463c3c230 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -391,19 +391,19 @@ nfsd_file_put(struct nfsd_file *nf)
+ }
+
+ /**
+- * nfsd_file_put_local - put the reference to nfsd_file and local nfsd_serv
+- * @nf: nfsd_file of which to put the references
++ * nfsd_file_put_local - put nfsd_file reference and arm nfsd_serv_put in caller
++ * @nf: nfsd_file of which to put the reference
+ *
+- * First put the reference of the nfsd_file and then put the
+- * reference to the associated nn->nfsd_serv.
++ * First save the associated net to return to the caller, then put
++ * the reference to the nfsd_file.
+ */
+-void
+-nfsd_file_put_local(struct nfsd_file *nf) __must_hold(rcu)
++struct net *
++nfsd_file_put_local(struct nfsd_file *nf)
+ {
+ struct net *net = nf->nf_net;
+
+ nfsd_file_put(nf);
+- nfsd_serv_put(net);
++ return net;
+ }
+
+ /**
+diff --git a/fs/nfsd/filecache.h b/fs/nfsd/filecache.h
+index cadf3c2689c44c..d5db6b34ba302c 100644
+--- a/fs/nfsd/filecache.h
++++ b/fs/nfsd/filecache.h
+@@ -55,7 +55,7 @@ void nfsd_file_cache_shutdown(void);
+ int nfsd_file_cache_start_net(struct net *net);
+ void nfsd_file_cache_shutdown_net(struct net *net);
+ void nfsd_file_put(struct nfsd_file *nf);
+-void nfsd_file_put_local(struct nfsd_file *nf);
++struct net *nfsd_file_put_local(struct nfsd_file *nf);
+ struct nfsd_file *nfsd_file_get(struct nfsd_file *nf);
+ struct file *nfsd_file_file(struct nfsd_file *nf);
+ void nfsd_file_close_inode_sync(struct inode *inode);
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index b5b3ab9d719a74..b8cbb15560040f 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -287,17 +287,17 @@ static int decode_cb_compound4res(struct xdr_stream *xdr,
+ u32 length;
+ __be32 *p;
+
+- p = xdr_inline_decode(xdr, 4 + 4);
++ p = xdr_inline_decode(xdr, XDR_UNIT);
+ if (unlikely(p == NULL))
+ goto out_overflow;
+- hdr->status = be32_to_cpup(p++);
++ hdr->status = be32_to_cpup(p);
+ /* Ignore the tag */
+- length = be32_to_cpup(p++);
+- p = xdr_inline_decode(xdr, length + 4);
+- if (unlikely(p == NULL))
++ if (xdr_stream_decode_u32(xdr, &length) < 0)
++ goto out_overflow;
++ if (xdr_inline_decode(xdr, length) == NULL)
++ goto out_overflow;
++ if (xdr_stream_decode_u32(xdr, &hdr->nops) < 0)
+ goto out_overflow;
+- p += XDR_QUADLEN(length);
+- hdr->nops = be32_to_cpup(p);
+ return 0;
+ out_overflow:
+ return -EIO;
+@@ -1461,6 +1461,8 @@ static void nfsd4_process_cb_update(struct nfsd4_callback *cb)
+ ses = c->cn_session;
+ }
+ spin_unlock(&clp->cl_lock);
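++	/* no connection was found, so there is no callback channel to set up */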
++ if (!c)
++ return;
+
+ err = setup_callback_client(clp, &conn, ses);
+ if (err) {
+diff --git a/fs/nfsd/nfs4proc.c b/fs/nfsd/nfs4proc.c
+index d32f2dfd148fe3..7a1fdafa42ea17 100644
+--- a/fs/nfsd/nfs4proc.c
++++ b/fs/nfsd/nfs4proc.c
+@@ -1292,7 +1292,7 @@ static void nfsd4_stop_copy(struct nfsd4_copy *copy)
+ nfs4_put_copy(copy);
+ }
+
+-static struct nfsd4_copy *nfsd4_get_copy(struct nfs4_client *clp)
++static struct nfsd4_copy *nfsd4_unhash_copy(struct nfs4_client *clp)
+ {
+ struct nfsd4_copy *copy = NULL;
+
+@@ -1301,6 +1301,9 @@ static struct nfsd4_copy *nfsd4_get_copy(struct nfs4_client *clp)
+ copy = list_first_entry(&clp->async_copies, struct nfsd4_copy,
+ copies);
+ refcount_inc(&copy->refcount);
++ copy->cp_clp = NULL;
++ if (!list_empty(&copy->copies))
++ list_del_init(&copy->copies);
+ }
+ spin_unlock(&clp->async_lock);
+ return copy;
+@@ -1310,7 +1313,7 @@ void nfsd4_shutdown_copy(struct nfs4_client *clp)
+ {
+ struct nfsd4_copy *copy;
+
+- while ((copy = nfsd4_get_copy(clp)) != NULL)
++ while ((copy = nfsd4_unhash_copy(clp)) != NULL)
+ nfsd4_stop_copy(copy);
+ }
+ #ifdef CONFIG_NFSD_V4_2_INTER_SSC
+diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
+index b7d61eb8afe9e1..4a765555bf8459 100644
+--- a/fs/nfsd/nfs4recover.c
++++ b/fs/nfsd/nfs4recover.c
+@@ -659,7 +659,8 @@ nfs4_reset_recoverydir(char *recdir)
+ return status;
+ status = -ENOTDIR;
+ if (d_is_dir(path.dentry)) {
+- strcpy(user_recovery_dirname, recdir);
++ strscpy(user_recovery_dirname, recdir,
++ sizeof(user_recovery_dirname));
+ status = 0;
+ }
+ path_put(&path);
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 551d2958ec2905..d3cfc647153993 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -5957,7 +5957,7 @@ nfs4_delegation_stat(struct nfs4_delegation *dp, struct svc_fh *currentfh,
+ path.dentry = file_dentry(nf->nf_file);
+
+ rc = vfs_getattr(&path, stat,
+- (STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE),
++ (STATX_MODE | STATX_SIZE | STATX_CTIME | STATX_CHANGE_COOKIE),
+ AT_STATX_SYNC_AS_STAT);
+
+ nfsd_file_put(nf);
+@@ -6041,8 +6041,7 @@ nfs4_open_delegation(struct nfsd4_open *open, struct nfs4_ol_stateid *stp,
+ }
+ open->op_delegate_type = NFS4_OPEN_DELEGATE_WRITE;
+ dp->dl_cb_fattr.ncf_cur_fsize = stat.size;
+- dp->dl_cb_fattr.ncf_initial_cinfo =
+- nfsd4_change_attribute(&stat, d_inode(currentfh->fh_dentry));
++ dp->dl_cb_fattr.ncf_initial_cinfo = nfsd4_change_attribute(&stat);
+ trace_nfsd_deleg_write(&dp->dl_stid.sc_stateid);
+ } else {
+ open->op_delegate_type = NFS4_OPEN_DELEGATE_READ;
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index f118921250c316..8d25aef51ad150 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -3040,7 +3040,7 @@ static __be32 nfsd4_encode_fattr4_change(struct xdr_stream *xdr,
+ return nfs_ok;
+ }
+
+- c = nfsd4_change_attribute(&args->stat, d_inode(args->dentry));
++ c = nfsd4_change_attribute(&args->stat);
+ return nfsd4_encode_changeid4(xdr, c);
+ }
+
+diff --git a/fs/nfsd/nfsfh.c b/fs/nfsd/nfsfh.c
+index 40ad58a6a0361e..96e19c50a5d7ee 100644
+--- a/fs/nfsd/nfsfh.c
++++ b/fs/nfsd/nfsfh.c
+@@ -667,20 +667,18 @@ fh_update(struct svc_fh *fhp)
+ __be32 __must_check fh_fill_pre_attrs(struct svc_fh *fhp)
+ {
+ bool v4 = (fhp->fh_maxsize == NFS4_FHSIZE);
+- struct inode *inode;
+ struct kstat stat;
+ __be32 err;
+
+ if (fhp->fh_no_wcc || fhp->fh_pre_saved)
+ return nfs_ok;
+
+- inode = d_inode(fhp->fh_dentry);
+ err = fh_getattr(fhp, &stat);
+ if (err)
+ return err;
+
+ if (v4)
+- fhp->fh_pre_change = nfsd4_change_attribute(&stat, inode);
++ fhp->fh_pre_change = nfsd4_change_attribute(&stat);
+
+ fhp->fh_pre_mtime = stat.mtime;
+ fhp->fh_pre_ctime = stat.ctime;
+@@ -697,7 +695,6 @@ __be32 __must_check fh_fill_pre_attrs(struct svc_fh *fhp)
+ __be32 fh_fill_post_attrs(struct svc_fh *fhp)
+ {
+ bool v4 = (fhp->fh_maxsize == NFS4_FHSIZE);
+- struct inode *inode = d_inode(fhp->fh_dentry);
+ __be32 err;
+
+ if (fhp->fh_no_wcc)
+@@ -713,7 +710,7 @@ __be32 fh_fill_post_attrs(struct svc_fh *fhp)
+ fhp->fh_post_saved = true;
+ if (v4)
+ fhp->fh_post_change =
+- nfsd4_change_attribute(&fhp->fh_post_attr, inode);
++ nfsd4_change_attribute(&fhp->fh_post_attr);
+ return nfs_ok;
+ }
+
+@@ -804,7 +801,14 @@ enum fsid_source fsid_source(const struct svc_fh *fhp)
+ return FSIDSOURCE_DEV;
+ }
+
+-/*
++/**
++ * nfsd4_change_attribute - Generate an NFSv4 change_attribute value
++ * @stat: inode attributes
++ *
++ * Caller must fill in @stat before calling, typically by invoking
++ * vfs_getattr() with STATX_MODE, STATX_CTIME, and STATX_CHANGE_COOKIE.
++ * Returns an unsigned 64-bit changeid4 value (RFC 8881 Section 3.2).
++ *
+ * We could use i_version alone as the change attribute. However, i_version
+ * can go backwards on a regular file after an unclean shutdown. On its own
+ * that doesn't necessarily cause a problem, but if i_version goes backwards
+@@ -821,13 +825,13 @@ enum fsid_source fsid_source(const struct svc_fh *fhp)
+ * assume that the new change attr is always logged to stable storage in some
+ * fashion before the results can be seen.
+ */
+-u64 nfsd4_change_attribute(const struct kstat *stat, const struct inode *inode)
++u64 nfsd4_change_attribute(const struct kstat *stat)
+ {
+ u64 chattr;
+
+ if (stat->result_mask & STATX_CHANGE_COOKIE) {
+ chattr = stat->change_cookie;
+- if (S_ISREG(inode->i_mode) &&
++ if (S_ISREG(stat->mode) &&
+ !(stat->attributes & STATX_ATTR_CHANGE_MONOTONIC)) {
+ chattr += (u64)stat->ctime.tv_sec << 30;
+ chattr += stat->ctime.tv_nsec;
+diff --git a/fs/nfsd/nfsfh.h b/fs/nfsd/nfsfh.h
+index 5b7394801dc427..876152a91f122f 100644
+--- a/fs/nfsd/nfsfh.h
++++ b/fs/nfsd/nfsfh.h
+@@ -297,8 +297,7 @@ static inline void fh_clear_pre_post_attrs(struct svc_fh *fhp)
+ fhp->fh_pre_saved = false;
+ }
+
+-u64 nfsd4_change_attribute(const struct kstat *stat,
+- const struct inode *inode);
++u64 nfsd4_change_attribute(const struct kstat *stat);
+ __be32 __must_check fh_fill_pre_attrs(struct svc_fh *fhp);
+ __be32 fh_fill_post_attrs(struct svc_fh *fhp);
+ __be32 __must_check fh_fill_both_attrs(struct svc_fh *fhp);
+diff --git a/fs/notify/fsnotify.c b/fs/notify/fsnotify.c
+index 82ae8254c068be..f976949d2634a1 100644
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -333,16 +333,19 @@ static int fsnotify_handle_event(struct fsnotify_group *group, __u32 mask,
+ if (!inode_mark)
+ return 0;
+
+- if (mask & FS_EVENT_ON_CHILD) {
+- /*
+- * Some events can be sent on both parent dir and child marks
+- * (e.g. FS_ATTRIB). If both parent dir and child are
+- * watching, report the event once to parent dir with name (if
+- * interested) and once to child without name (if interested).
+- * The child watcher is expecting an event without a file name
+- * and without the FS_EVENT_ON_CHILD flag.
+- */
+- mask &= ~FS_EVENT_ON_CHILD;
++ /*
++ * Some events can be sent on both parent dir and child marks (e.g.
++ * FS_ATTRIB). If both parent dir and child are watching, report the
++ * event once to parent dir with name (if interested) and once to child
++ * without name (if interested).
++ *
++ * In any case, regardless of whether the parent is watching or not, the
++ * child watcher is expecting an event without the FS_EVENT_ON_CHILD
++ * flag. The file name is expected if and only if this is a directory
++ * event.
++ */
++ mask &= ~FS_EVENT_ON_CHILD;
++ if (!(mask & ALL_FSNOTIFY_DIRENT_EVENTS)) {
+ dir = NULL;
+ name = NULL;
+ }
+diff --git a/fs/notify/mark.c b/fs/notify/mark.c
+index c45b222cf9c11c..4981439e62092a 100644
+--- a/fs/notify/mark.c
++++ b/fs/notify/mark.c
+@@ -138,8 +138,11 @@ static void fsnotify_get_sb_watched_objects(struct super_block *sb)
+
+ static void fsnotify_put_sb_watched_objects(struct super_block *sb)
+ {
+- if (atomic_long_dec_and_test(fsnotify_sb_watched_objects(sb)))
+- wake_up_var(fsnotify_sb_watched_objects(sb));
++ atomic_long_t *watched_objects = fsnotify_sb_watched_objects(sb);
++
++ /* the superblock can go away after this decrement */
++ if (atomic_long_dec_and_test(watched_objects))
++ wake_up_var(watched_objects);
+ }
+
+ static void fsnotify_get_inode_ref(struct inode *inode)
+@@ -150,8 +153,11 @@ static void fsnotify_get_inode_ref(struct inode *inode)
+
+ static void fsnotify_put_inode_ref(struct inode *inode)
+ {
+- fsnotify_put_sb_watched_objects(inode->i_sb);
++ /* read ->i_sb before the inode can go away */
++ struct super_block *sb = inode->i_sb;
++
+ iput(inode);
++ fsnotify_put_sb_watched_objects(sb);
+ }
+
+ /*
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index e370eaf9bfe2ed..f704ceef953948 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -222,7 +222,7 @@ static int ntfs_extend_initialized_size(struct file *file,
+ if (err)
+ goto out;
+
+- folio_zero_range(folio, zerofrom, folio_size(folio));
++ folio_zero_range(folio, zerofrom, folio_size(folio) - zerofrom);
+
+ err = ntfs_write_end(file, mapping, pos, len, len, folio, NULL);
+ if (err < 0)
+diff --git a/fs/ocfs2/aops.h b/fs/ocfs2/aops.h
+index 45db1781ea735a..1d1b4b7edba02e 100644
+--- a/fs/ocfs2/aops.h
++++ b/fs/ocfs2/aops.h
+@@ -70,6 +70,8 @@ enum ocfs2_iocb_lock_bits {
+ OCFS2_IOCB_NUM_LOCKS
+ };
+
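++/* the rw-lock state lives in iocb->private, so it must start out cleared */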
++#define ocfs2_iocb_init_rw_locked(iocb) \
++ (iocb->private = NULL)
+ #define ocfs2_iocb_clear_rw_locked(iocb) \
+ clear_bit(OCFS2_IOCB_RW_LOCK, (unsigned long *)&iocb->private)
+ #define ocfs2_iocb_rw_locked_level(iocb) \
+diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
+index 06af21982c16ab..cb09330a086119 100644
+--- a/fs/ocfs2/file.c
++++ b/fs/ocfs2/file.c
+@@ -2398,6 +2398,8 @@ static ssize_t ocfs2_file_write_iter(struct kiocb *iocb,
+ } else
+ inode_lock(inode);
+
++ ocfs2_iocb_init_rw_locked(iocb);
++
+ /*
+ * Concurrent O_DIRECT writes are allowed with
+ * mount_option "coherency=buffered".
+@@ -2544,6 +2546,8 @@ static ssize_t ocfs2_file_read_iter(struct kiocb *iocb,
+ if (!direct_io && nowait)
+ return -EOPNOTSUPP;
+
++ ocfs2_iocb_init_rw_locked(iocb);
++
+ /*
+ * buffered reads protect themselves in ->read_folio(). O_DIRECT reads
+ * need locks to protect pending reads from racing with truncate.
+diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
+index 51446c59388f10..7a85735d584f35 100644
+--- a/fs/proc/kcore.c
++++ b/fs/proc/kcore.c
+@@ -493,13 +493,13 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
+ * the previous entry, search for a matching entry.
+ */
+ if (!m || start < m->addr || start >= m->addr + m->size) {
+- struct kcore_list *iter;
++ struct kcore_list *pos;
+
+ m = NULL;
+- list_for_each_entry(iter, &kclist_head, list) {
+- if (start >= iter->addr &&
+- start < iter->addr + iter->size) {
+- m = iter;
++ list_for_each_entry(pos, &kclist_head, list) {
++ if (start >= pos->addr &&
++ start < pos->addr + pos->size) {
++ m = pos;
+ break;
+ }
+ }
+diff --git a/fs/read_write.c b/fs/read_write.c
+index 64dc24afdb3a7f..befec0b5c537a7 100644
+--- a/fs/read_write.c
++++ b/fs/read_write.c
+@@ -1830,18 +1830,21 @@ int generic_file_rw_checks(struct file *file_in, struct file *file_out)
+ return 0;
+ }
+
+-bool generic_atomic_write_valid(struct iov_iter *iter, loff_t pos)
++int generic_atomic_write_valid(struct kiocb *iocb, struct iov_iter *iter)
+ {
+ size_t len = iov_iter_count(iter);
+
+ if (!iter_is_ubuf(iter))
+- return false;
++ return -EINVAL;
+
+ if (!is_power_of_2(len))
+- return false;
++ return -EINVAL;
++
++ if (!IS_ALIGNED(iocb->ki_pos, len))
++ return -EINVAL;
+
+- if (!IS_ALIGNED(pos, len))
+- return false;
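++	/* atomic writes are only supported for direct I/O */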
++ if (!(iocb->ki_flags & IOCB_DIRECT))
++ return -EOPNOTSUPP;
+
+- return true;
++ return 0;
+ }
+diff --git a/fs/smb/client/cached_dir.c b/fs/smb/client/cached_dir.c
+index 0ff2491c311d8a..9c0ef4195b5829 100644
+--- a/fs/smb/client/cached_dir.c
++++ b/fs/smb/client/cached_dir.c
+@@ -17,6 +17,11 @@ static void free_cached_dir(struct cached_fid *cfid);
+ static void smb2_close_cached_fid(struct kref *ref);
+ static void cfids_laundromat_worker(struct work_struct *work);
+
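++/* holder used to defer dput() of a cached dentry until locks are dropped */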
++struct cached_dir_dentry {
++ struct list_head entry;
++ struct dentry *dentry;
++};
++
+ static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,
+ const char *path,
+ bool lookup_only,
+@@ -59,6 +64,16 @@ static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,
+ list_add(&cfid->entry, &cfids->entries);
+ cfid->on_list = true;
+ kref_get(&cfid->refcount);
++ /*
++ * Set @cfid->has_lease to true during construction so that the lease
++ * reference can be put in cached_dir_lease_break() due to a potential
++ * lease break right after the request is sent or while @cfid is still
++ * being cached, or if a reconnection is triggered during construction.
++ * Concurrent processes won't be able to use it yet due to @cfid->time
++ * being zero.
++ */
++ cfid->has_lease = true;
++
+ spin_unlock(&cfids->cfid_list_lock);
+ return cfid;
+ }
+@@ -176,12 +191,12 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ return -ENOENT;
+ }
+ /*
+- * Return cached fid if it has a lease. Otherwise, it is either a new
+- * entry or laundromat worker removed it from @cfids->entries. Caller
+- * will put last reference if the latter.
++ * Return cached fid if it is valid (has a lease and has a time).
++ * Otherwise, it is either a new entry or laundromat worker removed it
++ * from @cfids->entries. Caller will put last reference if the latter.
+ */
+ spin_lock(&cfids->cfid_list_lock);
+- if (cfid->has_lease) {
++ if (cfid->has_lease && cfid->time) {
+ spin_unlock(&cfids->cfid_list_lock);
+ *ret_cfid = cfid;
+ kfree(utf16_path);
+@@ -212,6 +227,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ }
+ }
+ cfid->dentry = dentry;
++ cfid->tcon = tcon;
+
+ /*
+ * We do not hold the lock for the open because in case
+@@ -267,15 +283,6 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+
+ smb2_set_related(&rqst[1]);
+
+- /*
+- * Set @cfid->has_lease to true before sending out compounded request so
+- * its lease reference can be put in cached_dir_lease_break() due to a
+- * potential lease break right after the request is sent or while @cfid
+- * is still being cached. Concurrent processes won't be to use it yet
+- * due to @cfid->time being zero.
+- */
+- cfid->has_lease = true;
+-
+ if (retries) {
+ smb2_set_replay(server, &rqst[0]);
+ smb2_set_replay(server, &rqst[1]);
+@@ -292,7 +299,6 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ }
+ goto oshr_free;
+ }
+- cfid->tcon = tcon;
+ cfid->is_open = true;
+
+ spin_lock(&cfids->cfid_list_lock);
+@@ -347,6 +353,7 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ SMB2_query_info_free(&rqst[1]);
+ free_rsp_buf(resp_buftype[0], rsp_iov[0].iov_base);
+ free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
++out:
+ if (rc) {
+ spin_lock(&cfids->cfid_list_lock);
+ if (cfid->on_list) {
+@@ -358,23 +365,14 @@ int open_cached_dir(unsigned int xid, struct cifs_tcon *tcon,
+ /*
+ * We are guaranteed to have two references at this
+ * point. One for the caller and one for a potential
+- * lease. Release the Lease-ref so that the directory
+- * will be closed when the caller closes the cached
+- * handle.
++ * lease. Release one here, and the second below.
+ */
+ cfid->has_lease = false;
+- spin_unlock(&cfids->cfid_list_lock);
+ kref_put(&cfid->refcount, smb2_close_cached_fid);
+- goto out;
+ }
+ spin_unlock(&cfids->cfid_list_lock);
+- }
+-out:
+- if (rc) {
+- if (cfid->is_open)
+- SMB2_close(0, cfid->tcon, cfid->fid.persistent_fid,
+- cfid->fid.volatile_fid);
+- free_cached_dir(cfid);
++
++ kref_put(&cfid->refcount, smb2_close_cached_fid);
+ } else {
+ *ret_cfid = cfid;
+ atomic_inc(&tcon->num_remote_opens);
+@@ -401,7 +399,7 @@ int open_cached_dir_by_dentry(struct cifs_tcon *tcon,
+ spin_lock(&cfids->cfid_list_lock);
+ list_for_each_entry(cfid, &cfids->entries, entry) {
+ if (dentry && cfid->dentry == dentry) {
+- cifs_dbg(FYI, "found a cached root file handle by dentry\n");
++ cifs_dbg(FYI, "found a cached file handle by dentry\n");
+ kref_get(&cfid->refcount);
+ *ret_cfid = cfid;
+ spin_unlock(&cfids->cfid_list_lock);
+@@ -477,7 +475,10 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb)
+ struct cifs_tcon *tcon;
+ struct tcon_link *tlink;
+ struct cached_fids *cfids;
++ struct cached_dir_dentry *tmp_list, *q;
++ LIST_HEAD(entry);
+
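++	/* collect the dentries under the spinlocks; dput() them after unlocking */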
++ spin_lock(&cifs_sb->tlink_tree_lock);
+ for (node = rb_first(root); node; node = rb_next(node)) {
+ tlink = rb_entry(node, struct tcon_link, tl_rbnode);
+ tcon = tlink_tcon(tlink);
+@@ -486,11 +487,30 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb)
+ cfids = tcon->cfids;
+ if (cfids == NULL)
+ continue;
++ spin_lock(&cfids->cfid_list_lock);
+ list_for_each_entry(cfid, &cfids->entries, entry) {
+- dput(cfid->dentry);
++ tmp_list = kmalloc(sizeof(*tmp_list), GFP_ATOMIC);
++ if (tmp_list == NULL)
++ break;
++ spin_lock(&cfid->fid_lock);
++ tmp_list->dentry = cfid->dentry;
+ cfid->dentry = NULL;
++ spin_unlock(&cfid->fid_lock);
++
++ list_add_tail(&tmp_list->entry, &entry);
+ }
++ spin_unlock(&cfids->cfid_list_lock);
++ }
++ spin_unlock(&cifs_sb->tlink_tree_lock);
++
++ list_for_each_entry_safe(tmp_list, q, &entry, entry) {
++ list_del(&tmp_list->entry);
++ dput(tmp_list->dentry);
++ kfree(tmp_list);
+ }
++
++ /* Flush any pending work that will drop dentries */
++ flush_workqueue(cfid_put_wq);
+ }
+
+ /*
+@@ -501,50 +521,71 @@ void invalidate_all_cached_dirs(struct cifs_tcon *tcon)
+ {
+ struct cached_fids *cfids = tcon->cfids;
+ struct cached_fid *cfid, *q;
+- LIST_HEAD(entry);
+
+ if (cfids == NULL)
+ return;
+
++ /*
++ * Mark all the cfids as closed, and move them to the cfids->dying list.
++ * They'll be cleaned up later by cfids_invalidation_worker. Take
++ * a reference to each cfid during this process.
++ */
+ spin_lock(&cfids->cfid_list_lock);
+ list_for_each_entry_safe(cfid, q, &cfids->entries, entry) {
+- list_move(&cfid->entry, &entry);
++ list_move(&cfid->entry, &cfids->dying);
+ cfids->num_entries--;
+ cfid->is_open = false;
+ cfid->on_list = false;
+- /* To prevent race with smb2_cached_lease_break() */
+- kref_get(&cfid->refcount);
+- }
+- spin_unlock(&cfids->cfid_list_lock);
+-
+- list_for_each_entry_safe(cfid, q, &entry, entry) {
+- list_del(&cfid->entry);
+- cancel_work_sync(&cfid->lease_break);
+ if (cfid->has_lease) {
+ /*
+- * We lease was never cancelled from the server so we
+- * need to drop the reference.
++ * The lease was never cancelled from the server,
++ * so steal that reference.
+ */
+- spin_lock(&cfids->cfid_list_lock);
+ cfid->has_lease = false;
+- spin_unlock(&cfids->cfid_list_lock);
+- kref_put(&cfid->refcount, smb2_close_cached_fid);
+- }
+- /* Drop the extra reference opened above*/
+- kref_put(&cfid->refcount, smb2_close_cached_fid);
++ } else
++ kref_get(&cfid->refcount);
+ }
++ /*
++ * Queue dropping of the dentries once locks have been dropped
++ */
++ if (!list_empty(&cfids->dying))
++ queue_work(cfid_put_wq, &cfids->invalidation_work);
++ spin_unlock(&cfids->cfid_list_lock);
+ }
+
+ static void
+-smb2_cached_lease_break(struct work_struct *work)
++cached_dir_offload_close(struct work_struct *work)
+ {
+ struct cached_fid *cfid = container_of(work,
+- struct cached_fid, lease_break);
++ struct cached_fid, close_work);
++ struct cifs_tcon *tcon = cfid->tcon;
++
++ WARN_ON(cfid->on_list);
+
+- spin_lock(&cfid->cfids->cfid_list_lock);
+- cfid->has_lease = false;
+- spin_unlock(&cfid->cfids->cfid_list_lock);
+ kref_put(&cfid->refcount, smb2_close_cached_fid);
++ cifs_put_tcon(tcon, netfs_trace_tcon_ref_put_cached_close);
++}
++
++/*
++ * Release the cached directory's dentry, and then queue work to drop the
++ * cached directory itself (closing it on the server if needed).
++ *
++ * Must be called with a reference to the cached_fid and a reference to the
++ * tcon.
++ */
++static void cached_dir_put_work(struct work_struct *work)
++{
++ struct cached_fid *cfid = container_of(work, struct cached_fid,
++ put_work);
++ struct dentry *dentry;
++
++ spin_lock(&cfid->fid_lock);
++ dentry = cfid->dentry;
++ cfid->dentry = NULL;
++ spin_unlock(&cfid->fid_lock);
++
++ dput(dentry);
++ queue_work(serverclose_wq, &cfid->close_work);
+ }
+
+ int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16])
+@@ -561,6 +602,7 @@ int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16])
+ !memcmp(lease_key,
+ cfid->fid.lease_key,
+ SMB2_LEASE_KEY_SIZE)) {
++ cfid->has_lease = false;
+ cfid->time = 0;
+ /*
+ * We found a lease remove it from the list
+@@ -570,8 +612,10 @@ int cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16])
+ cfid->on_list = false;
+ cfids->num_entries--;
+
+- queue_work(cifsiod_wq,
+- &cfid->lease_break);
++ ++tcon->tc_count;
++ trace_smb3_tcon_ref(tcon->debug_id, tcon->tc_count,
++ netfs_trace_tcon_ref_get_cached_lease_break);
++ queue_work(cfid_put_wq, &cfid->put_work);
+ spin_unlock(&cfids->cfid_list_lock);
+ return true;
+ }
+@@ -593,7 +637,8 @@ static struct cached_fid *init_cached_dir(const char *path)
+ return NULL;
+ }
+
+- INIT_WORK(&cfid->lease_break, smb2_cached_lease_break);
++ INIT_WORK(&cfid->close_work, cached_dir_offload_close);
++ INIT_WORK(&cfid->put_work, cached_dir_put_work);
+ INIT_LIST_HEAD(&cfid->entry);
+ INIT_LIST_HEAD(&cfid->dirents.entries);
+ mutex_init(&cfid->dirents.de_mutex);
+@@ -606,6 +651,9 @@ static void free_cached_dir(struct cached_fid *cfid)
+ {
+ struct cached_dirent *dirent, *q;
+
++ WARN_ON(work_pending(&cfid->close_work));
++ WARN_ON(work_pending(&cfid->put_work));
++
+ dput(cfid->dentry);
+ cfid->dentry = NULL;
+
+@@ -623,10 +671,30 @@ static void free_cached_dir(struct cached_fid *cfid)
+ kfree(cfid);
+ }
+
++static void cfids_invalidation_worker(struct work_struct *work)
++{
++ struct cached_fids *cfids = container_of(work, struct cached_fids,
++ invalidation_work);
++ struct cached_fid *cfid, *q;
++ LIST_HEAD(entry);
++
++ spin_lock(&cfids->cfid_list_lock);
++ /* move cfids->dying to the local list */
++ list_cut_before(&entry, &cfids->dying, &cfids->dying);
++ spin_unlock(&cfids->cfid_list_lock);
++
++ list_for_each_entry_safe(cfid, q, &entry, entry) {
++ list_del(&cfid->entry);
++ /* Drop the ref-count acquired in invalidate_all_cached_dirs */
++ kref_put(&cfid->refcount, smb2_close_cached_fid);
++ }
++}
++
+ static void cfids_laundromat_worker(struct work_struct *work)
+ {
+ struct cached_fids *cfids;
+ struct cached_fid *cfid, *q;
++ struct dentry *dentry;
+ LIST_HEAD(entry);
+
+ cfids = container_of(work, struct cached_fids, laundromat_work.work);
+@@ -638,33 +706,42 @@ static void cfids_laundromat_worker(struct work_struct *work)
+ cfid->on_list = false;
+ list_move(&cfid->entry, &entry);
+ cfids->num_entries--;
+- /* To prevent race with smb2_cached_lease_break() */
+- kref_get(&cfid->refcount);
++ if (cfid->has_lease) {
++ /*
++ * Our lease has not yet been cancelled from the
++ * server. Steal that reference.
++ */
++ cfid->has_lease = false;
++ } else
++ kref_get(&cfid->refcount);
+ }
+ }
+ spin_unlock(&cfids->cfid_list_lock);
+
+ list_for_each_entry_safe(cfid, q, &entry, entry) {
+ list_del(&cfid->entry);
+- /*
+- * Cancel and wait for the work to finish in case we are racing
+- * with it.
+- */
+- cancel_work_sync(&cfid->lease_break);
+- if (cfid->has_lease) {
++
++ spin_lock(&cfid->fid_lock);
++ dentry = cfid->dentry;
++ cfid->dentry = NULL;
++ spin_unlock(&cfid->fid_lock);
++
++ dput(dentry);
++ if (cfid->is_open) {
++ spin_lock(&cifs_tcp_ses_lock);
++ ++cfid->tcon->tc_count;
++ trace_smb3_tcon_ref(cfid->tcon->debug_id, cfid->tcon->tc_count,
++ netfs_trace_tcon_ref_get_cached_laundromat);
++ spin_unlock(&cifs_tcp_ses_lock);
++ queue_work(serverclose_wq, &cfid->close_work);
++ } else
+ /*
+- * Our lease has not yet been cancelled from the server
+- * so we need to drop the reference.
++ * Drop the ref-count from above, either the lease-ref (if there
++ * was one) or the extra one acquired.
+ */
+- spin_lock(&cfids->cfid_list_lock);
+- cfid->has_lease = false;
+- spin_unlock(&cfids->cfid_list_lock);
+ kref_put(&cfid->refcount, smb2_close_cached_fid);
+- }
+- /* Drop the extra reference opened above */
+- kref_put(&cfid->refcount, smb2_close_cached_fid);
+ }
+- queue_delayed_work(cifsiod_wq, &cfids->laundromat_work,
++ queue_delayed_work(cfid_put_wq, &cfids->laundromat_work,
+ dir_cache_timeout * HZ);
+ }
+
+@@ -677,9 +754,11 @@ struct cached_fids *init_cached_dirs(void)
+ return NULL;
+ spin_lock_init(&cfids->cfid_list_lock);
+ INIT_LIST_HEAD(&cfids->entries);
++ INIT_LIST_HEAD(&cfids->dying);
+
++ INIT_WORK(&cfids->invalidation_work, cfids_invalidation_worker);
+ INIT_DELAYED_WORK(&cfids->laundromat_work, cfids_laundromat_worker);
+- queue_delayed_work(cifsiod_wq, &cfids->laundromat_work,
++ queue_delayed_work(cfid_put_wq, &cfids->laundromat_work,
+ dir_cache_timeout * HZ);
+
+ return cfids;
+@@ -698,6 +777,7 @@ void free_cached_dirs(struct cached_fids *cfids)
+ return;
+
+ cancel_delayed_work_sync(&cfids->laundromat_work);
++ cancel_work_sync(&cfids->invalidation_work);
+
+ spin_lock(&cfids->cfid_list_lock);
+ list_for_each_entry_safe(cfid, q, &cfids->entries, entry) {
+@@ -705,6 +785,11 @@ void free_cached_dirs(struct cached_fids *cfids)
+ cfid->is_open = false;
+ list_move(&cfid->entry, &entry);
+ }
++ list_for_each_entry_safe(cfid, q, &cfids->dying, entry) {
++ cfid->on_list = false;
++ cfid->is_open = false;
++ list_move(&cfid->entry, &entry);
++ }
+ spin_unlock(&cfids->cfid_list_lock);
+
+ list_for_each_entry_safe(cfid, q, &entry, entry) {
+diff --git a/fs/smb/client/cached_dir.h b/fs/smb/client/cached_dir.h
+index 81ba0fd5cc16d6..1dfe79d947a62f 100644
+--- a/fs/smb/client/cached_dir.h
++++ b/fs/smb/client/cached_dir.h
+@@ -44,7 +44,8 @@ struct cached_fid {
+ spinlock_t fid_lock;
+ struct cifs_tcon *tcon;
+ struct dentry *dentry;
+- struct work_struct lease_break;
++ struct work_struct put_work;
++ struct work_struct close_work;
+ struct smb2_file_all_info file_all_info;
+ struct cached_dirents dirents;
+ };
+@@ -53,10 +54,13 @@ struct cached_fid {
+ struct cached_fids {
+ /* Must be held when:
+ * - accessing the cfids->entries list
++ * - accessing the cfids->dying list
+ */
+ spinlock_t cfid_list_lock;
+ int num_entries;
+ struct list_head entries;
++ struct list_head dying;
++ struct work_struct invalidation_work;
+ struct delayed_work laundromat_work;
+ };
+
+diff --git a/fs/smb/client/cifsacl.c b/fs/smb/client/cifsacl.c
+index 1d294d53f66247..c68ad526a4de1b 100644
+--- a/fs/smb/client/cifsacl.c
++++ b/fs/smb/client/cifsacl.c
+@@ -885,12 +885,17 @@ unsigned int setup_authusers_ACE(struct smb_ace *pntace)
+ * Fill in the special SID based on the mode. See
+ * https://technet.microsoft.com/en-us/library/hh509017(v=ws.10).aspx
+ */
+-unsigned int setup_special_mode_ACE(struct smb_ace *pntace, __u64 nmode)
++unsigned int setup_special_mode_ACE(struct smb_ace *pntace,
++ bool posix,
++ __u64 nmode)
+ {
+ int i;
+ unsigned int ace_size = 28;
+
+- pntace->type = ACCESS_DENIED_ACE_TYPE;
++ if (posix)
++ pntace->type = ACCESS_ALLOWED_ACE_TYPE;
++ else
++ pntace->type = ACCESS_DENIED_ACE_TYPE;
+ pntace->flags = 0x0;
+ pntace->access_req = 0;
+ pntace->sid.num_subauth = 3;
+@@ -933,7 +938,8 @@ static void populate_new_aces(char *nacl_base,
+ struct smb_sid *pownersid,
+ struct smb_sid *pgrpsid,
+ __u64 *pnmode, u32 *pnum_aces, u16 *pnsize,
+- bool modefromsid)
++ bool modefromsid,
++ bool posix)
+ {
+ __u64 nmode;
+ u32 num_aces = 0;
+@@ -950,13 +956,15 @@ static void populate_new_aces(char *nacl_base,
+ num_aces = *pnum_aces;
+ nsize = *pnsize;
+
+- if (modefromsid) {
+- pnntace = (struct smb_ace *) (nacl_base + nsize);
+- nsize += setup_special_mode_ACE(pnntace, nmode);
+- num_aces++;
++ if (modefromsid || posix) {
+ pnntace = (struct smb_ace *) (nacl_base + nsize);
+- nsize += setup_authusers_ACE(pnntace);
++ nsize += setup_special_mode_ACE(pnntace, posix, nmode);
+ num_aces++;
++ if (modefromsid) {
++ pnntace = (struct smb_ace *) (nacl_base + nsize);
++ nsize += setup_authusers_ACE(pnntace);
++ num_aces++;
++ }
+ goto set_size;
+ }
+
+@@ -1076,7 +1084,7 @@ static __u16 replace_sids_and_copy_aces(struct smb_acl *pdacl, struct smb_acl *p
+
+ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl,
+ struct smb_sid *pownersid, struct smb_sid *pgrpsid,
+- __u64 *pnmode, bool mode_from_sid)
++ __u64 *pnmode, bool mode_from_sid, bool posix)
+ {
+ int i;
+ u16 size = 0;
+@@ -1094,11 +1102,11 @@ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl,
+ nsize = sizeof(struct smb_acl);
+
+ /* If pdacl is NULL, we don't have a src. Simply populate new ACL. */
+- if (!pdacl) {
++ if (!pdacl || posix) {
+ populate_new_aces(nacl_base,
+ pownersid, pgrpsid,
+ pnmode, &num_aces, &nsize,
+- mode_from_sid);
++ mode_from_sid, posix);
+ goto finalize_dacl;
+ }
+
+@@ -1115,7 +1123,7 @@ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl,
+ populate_new_aces(nacl_base,
+ pownersid, pgrpsid,
+ pnmode, &num_aces, &nsize,
+- mode_from_sid);
++ mode_from_sid, posix);
+
+ new_aces_set = true;
+ }
+@@ -1144,7 +1152,7 @@ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl,
+ populate_new_aces(nacl_base,
+ pownersid, pgrpsid,
+ pnmode, &num_aces, &nsize,
+- mode_from_sid);
++ mode_from_sid, posix);
+
+ new_aces_set = true;
+ }
+@@ -1251,7 +1259,7 @@ static int parse_sec_desc(struct cifs_sb_info *cifs_sb,
+ /* Convert permission bits from mode to equivalent CIFS ACL */
+ static int build_sec_desc(struct smb_ntsd *pntsd, struct smb_ntsd *pnntsd,
+ __u32 secdesclen, __u32 *pnsecdesclen, __u64 *pnmode, kuid_t uid, kgid_t gid,
+- bool mode_from_sid, bool id_from_sid, int *aclflag)
++ bool mode_from_sid, bool id_from_sid, bool posix, int *aclflag)
+ {
+ int rc = 0;
+ __u32 dacloffset;
+@@ -1288,7 +1296,7 @@ static int build_sec_desc(struct smb_ntsd *pntsd, struct smb_ntsd *pnntsd,
+ ndacl_ptr->num_aces = cpu_to_le32(0);
+
+ rc = set_chmod_dacl(dacl_ptr, ndacl_ptr, owner_sid_ptr, group_sid_ptr,
+- pnmode, mode_from_sid);
++ pnmode, mode_from_sid, posix);
+
+ sidsoffset = ndacloffset + le16_to_cpu(ndacl_ptr->size);
+ /* copy the non-dacl portion of secdesc */
+@@ -1587,6 +1595,7 @@ id_mode_to_cifs_acl(struct inode *inode, const char *path, __u64 *pnmode,
+ struct tcon_link *tlink = cifs_sb_tlink(cifs_sb);
+ struct smb_version_operations *ops;
+ bool mode_from_sid, id_from_sid;
++ bool posix = tlink_tcon(tlink)->posix_extensions;
+ const u32 info = 0;
+
+ if (IS_ERR(tlink))
+@@ -1622,12 +1631,13 @@ id_mode_to_cifs_acl(struct inode *inode, const char *path, __u64 *pnmode,
+ id_from_sid = false;
+
+ /* Potentially, five new ACEs can be added to the ACL for U,G,O mapping */
+- nsecdesclen = secdesclen;
+ if (pnmode && *pnmode != NO_CHANGE_64) { /* chmod */
+- if (mode_from_sid)
+- nsecdesclen += 2 * sizeof(struct smb_ace);
++ if (posix)
++ nsecdesclen = 1 * sizeof(struct smb_ace);
++ else if (mode_from_sid)
++ nsecdesclen = secdesclen + (2 * sizeof(struct smb_ace));
+ else /* cifsacl */
+- nsecdesclen += 5 * sizeof(struct smb_ace);
++ nsecdesclen = secdesclen + (5 * sizeof(struct smb_ace));
+ } else { /* chown */
+ /* When ownership changes, changes new owner sid length could be different */
+ nsecdesclen = sizeof(struct smb_ntsd) + (sizeof(struct smb_sid) * 2);
+@@ -1657,7 +1667,7 @@ id_mode_to_cifs_acl(struct inode *inode, const char *path, __u64 *pnmode,
+ }
+
+ rc = build_sec_desc(pntsd, pnntsd, secdesclen, &nsecdesclen, pnmode, uid, gid,
+- mode_from_sid, id_from_sid, &aclflag);
++ mode_from_sid, id_from_sid, posix, &aclflag);
+
+ cifs_dbg(NOISY, "build_sec_desc rc: %d\n", rc);
+
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index 20cafdff508106..bf909c2f6b963b 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -157,6 +157,7 @@ struct workqueue_struct *fileinfo_put_wq;
+ struct workqueue_struct *cifsoplockd_wq;
+ struct workqueue_struct *deferredclose_wq;
+ struct workqueue_struct *serverclose_wq;
++struct workqueue_struct *cfid_put_wq;
+ __u32 cifs_lock_secret;
+
+ /*
+@@ -1895,9 +1896,16 @@ init_cifs(void)
+ goto out_destroy_deferredclose_wq;
+ }
+
++ cfid_put_wq = alloc_workqueue("cfid_put_wq",
++ WQ_FREEZABLE|WQ_MEM_RECLAIM, 0);
++ if (!cfid_put_wq) {
++ rc = -ENOMEM;
++ goto out_destroy_serverclose_wq;
++ }
++
+ rc = cifs_init_inodecache();
+ if (rc)
+- goto out_destroy_serverclose_wq;
++ goto out_destroy_cfid_put_wq;
+
+ rc = cifs_init_netfs();
+ if (rc)
+@@ -1965,6 +1973,8 @@ init_cifs(void)
+ cifs_destroy_netfs();
+ out_destroy_inodecache:
+ cifs_destroy_inodecache();
++out_destroy_cfid_put_wq:
++ destroy_workqueue(cfid_put_wq);
+ out_destroy_serverclose_wq:
+ destroy_workqueue(serverclose_wq);
+ out_destroy_deferredclose_wq:
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 5041b1ffc244b0..9a4b3608b7d6f3 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -588,6 +588,7 @@ struct smb_version_operations {
+ /* Check for STATUS_NETWORK_NAME_DELETED */
+ bool (*is_network_name_deleted)(char *buf, struct TCP_Server_Info *srv);
+ int (*parse_reparse_point)(struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ struct kvec *rsp_iov,
+ struct cifs_open_info_data *data);
+ int (*create_reparse_symlink)(const unsigned int xid,
+@@ -1983,7 +1984,7 @@ require use of the stronger protocol */
+ * cifsInodeInfo->lock_sem cifsInodeInfo->llist cifs_init_once
+ * ->can_cache_brlcks
+ * cifsInodeInfo->deferred_lock cifsInodeInfo->deferred_closes cifsInodeInfo_alloc
+- * cached_fid->fid_mutex cifs_tcon->crfid tcon_info_alloc
++ * cached_fids->cfid_list_lock cifs_tcon->cfids->entries init_cached_dirs
+ * cifsFileInfo->fh_mutex cifsFileInfo cifs_new_fileinfo
+ * cifsFileInfo->file_info_lock cifsFileInfo->count cifs_new_fileinfo
+ * ->invalidHandle initiate_cifs_search
+@@ -2071,6 +2072,7 @@ extern struct workqueue_struct *fileinfo_put_wq;
+ extern struct workqueue_struct *cifsoplockd_wq;
+ extern struct workqueue_struct *deferredclose_wq;
+ extern struct workqueue_struct *serverclose_wq;
++extern struct workqueue_struct *cfid_put_wq;
+ extern __u32 cifs_lock_secret;
+
+ extern mempool_t *cifs_sm_req_poolp;
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index 1d3470bca45edd..0c6468844c4b54 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -244,7 +244,9 @@ extern int cifs_set_acl(struct mnt_idmap *idmap,
+ extern int set_cifs_acl(struct smb_ntsd *pntsd, __u32 len, struct inode *ino,
+ const char *path, int flag);
+ extern unsigned int setup_authusers_ACE(struct smb_ace *pace);
+-extern unsigned int setup_special_mode_ACE(struct smb_ace *pace, __u64 nmode);
++extern unsigned int setup_special_mode_ACE(struct smb_ace *pace,
++ bool posix,
++ __u64 nmode);
+ extern unsigned int setup_special_user_owner_ACE(struct smb_ace *pace);
+
+ extern void dequeue_mid(struct mid_q_entry *mid, bool malformed);
+@@ -666,6 +668,7 @@ char *extract_hostname(const char *unc);
+ char *extract_sharename(const char *unc);
+ int parse_reparse_point(struct reparse_data_buffer *buf,
+ u32 plen, struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ bool unicode, struct cifs_open_info_data *data);
+ int __cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ struct dentry *dentry, struct cifs_tcon *tcon,
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 0ce2d704b1f3f8..a94c538ff86368 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -1897,11 +1897,35 @@ static int match_session(struct cifs_ses *ses,
+ CIFS_MAX_USERNAME_LEN))
+ return 0;
+ if ((ctx->username && strlen(ctx->username) != 0) &&
+- ses->password != NULL &&
+- strncmp(ses->password,
+- ctx->password ? ctx->password : "",
+- CIFS_MAX_PASSWORD_LEN))
+- return 0;
++ ses->password != NULL) {
++
++ /* A new mount can only share sessions with an existing mount if:
++ * 1. Both password and password2 match, or
++ * 2. password2 of the old mount matches password of the new mount
++ * and password of the old mount matches password2 of the new
++ * mount
++ */
++ if (ses->password2 != NULL && ctx->password2 != NULL) {
++ if (!((strncmp(ses->password, ctx->password ?
++ ctx->password : "", CIFS_MAX_PASSWORD_LEN) == 0 &&
++ strncmp(ses->password2, ctx->password2,
++ CIFS_MAX_PASSWORD_LEN) == 0) ||
++ (strncmp(ses->password, ctx->password2,
++ CIFS_MAX_PASSWORD_LEN) == 0 &&
++ strncmp(ses->password2, ctx->password ?
++ ctx->password : "", CIFS_MAX_PASSWORD_LEN) == 0)))
++ return 0;
++
++ } else if ((ses->password2 == NULL && ctx->password2 != NULL) ||
++ (ses->password2 != NULL && ctx->password2 == NULL)) {
++ return 0;
++
++ } else {
++ if (strncmp(ses->password, ctx->password ?
++ ctx->password : "", CIFS_MAX_PASSWORD_LEN))
++ return 0;
++ }
++ }
+ }
+
+ if (strcmp(ctx->local_nls->charset, ses->local_nls->charset))
+@@ -2244,6 +2268,7 @@ struct cifs_ses *
+ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ {
+ int rc = 0;
++ int retries = 0;
+ unsigned int xid;
+ struct cifs_ses *ses;
+ struct sockaddr_in *addr = (struct sockaddr_in *)&server->dstaddr;
+@@ -2262,6 +2287,8 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ cifs_dbg(FYI, "Session needs reconnect\n");
+
+ mutex_lock(&ses->session_mutex);
++
++retry_old_session:
+ rc = cifs_negotiate_protocol(xid, ses, server);
+ if (rc) {
+ mutex_unlock(&ses->session_mutex);
+@@ -2274,6 +2301,13 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ rc = cifs_setup_session(xid, ses, server,
+ ctx->local_nls);
+ if (rc) {
++ if (((rc == -EACCES) || (rc == -EKEYEXPIRED) ||
++ (rc == -EKEYREVOKED)) && !retries && ses->password2) {
++ retries++;
++ cifs_dbg(FYI, "Session reconnect failed, retrying with alternate password\n");
++ swap(ses->password, ses->password2);
++ goto retry_old_session;
++ }
+ mutex_unlock(&ses->session_mutex);
+ /* problem -- put our reference */
+ cifs_put_smb_ses(ses);
+@@ -2349,6 +2383,7 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ ses->chans_need_reconnect = 1;
+ spin_unlock(&ses->chan_lock);
+
++retry_new_session:
+ mutex_lock(&ses->session_mutex);
+ rc = cifs_negotiate_protocol(xid, ses, server);
+ if (!rc)
+@@ -2361,8 +2396,16 @@ cifs_get_smb_ses(struct TCP_Server_Info *server, struct smb3_fs_context *ctx)
+ sizeof(ses->smb3signingkey));
+ spin_unlock(&ses->chan_lock);
+
+- if (rc)
+- goto get_ses_fail;
++ if (rc) {
++ if (((rc == -EACCES) || (rc == -EKEYEXPIRED) ||
++ (rc == -EKEYREVOKED)) && !retries && ses->password2) {
++ retries++;
++ cifs_dbg(FYI, "Session setup failed, retrying with alternate password\n");
++ swap(ses->password, ses->password2);
++ goto retry_new_session;
++ } else
++ goto get_ses_fail;
++ }
+
+ /*
+ * success, put it on the list and add it as first channel
+@@ -2551,7 +2594,7 @@ cifs_get_tcon(struct cifs_ses *ses, struct smb3_fs_context *ctx)
+
+ if (ses->server->dialect >= SMB20_PROT_ID &&
+ (ses->server->capabilities & SMB2_GLOBAL_CAP_DIRECTORY_LEASING))
+- nohandlecache = ctx->nohandlecache;
++ nohandlecache = ctx->nohandlecache || !dir_cache_timeout;
+ else
+ nohandlecache = true;
+ tcon = tcon_info_alloc(!nohandlecache, netfs_trace_tcon_ref_new);
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index 5c5a52019efada..48606e2ddffdcd 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -890,12 +890,37 @@ do { \
+ cifs_sb->ctx->field = NULL; \
+ } while (0)
+
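++/* bring the mount context's password(s) back in sync with the ses copies */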
++int smb3_sync_session_ctx_passwords(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses)
++{
++ if (ses->password &&
++ cifs_sb->ctx->password &&
++ strcmp(ses->password, cifs_sb->ctx->password)) {
++ kfree_sensitive(cifs_sb->ctx->password);
++ cifs_sb->ctx->password = kstrdup(ses->password, GFP_KERNEL);
++ if (!cifs_sb->ctx->password)
++ return -ENOMEM;
++ }
++ if (ses->password2 &&
++ cifs_sb->ctx->password2 &&
++ strcmp(ses->password2, cifs_sb->ctx->password2)) {
++ kfree_sensitive(cifs_sb->ctx->password2);
++ cifs_sb->ctx->password2 = kstrdup(ses->password2, GFP_KERNEL);
++ if (!cifs_sb->ctx->password2) {
++ kfree_sensitive(cifs_sb->ctx->password);
++ cifs_sb->ctx->password = NULL;
++ return -ENOMEM;
++ }
++ }
++ return 0;
++}
++
+ static int smb3_reconfigure(struct fs_context *fc)
+ {
+ struct smb3_fs_context *ctx = smb3_fc2context(fc);
+ struct dentry *root = fc->root;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(root->d_sb);
+ struct cifs_ses *ses = cifs_sb_master_tcon(cifs_sb)->ses;
++ char *new_password = NULL, *new_password2 = NULL;
+ bool need_recon = false;
+ int rc;
+
+@@ -915,21 +940,63 @@ static int smb3_reconfigure(struct fs_context *fc)
+ STEAL_STRING(cifs_sb, ctx, UNC);
+ STEAL_STRING(cifs_sb, ctx, source);
+ STEAL_STRING(cifs_sb, ctx, username);
++
+ if (need_recon == false)
+ STEAL_STRING_SENSITIVE(cifs_sb, ctx, password);
+ else {
+- kfree_sensitive(ses->password);
+- ses->password = kstrdup(ctx->password, GFP_KERNEL);
+- if (!ses->password)
+- return -ENOMEM;
+- kfree_sensitive(ses->password2);
+- ses->password2 = kstrdup(ctx->password2, GFP_KERNEL);
+- if (!ses->password2) {
+- kfree_sensitive(ses->password);
+- ses->password = NULL;
++ if (ctx->password) {
++ new_password = kstrdup(ctx->password, GFP_KERNEL);
++ if (!new_password)
++ return -ENOMEM;
++ } else
++ STEAL_STRING_SENSITIVE(cifs_sb, ctx, password);
++ }
++
++ /*
++ * if a new password2 has been specified, then reset its value
++ * inside the ses struct
++ */
++ if (ctx->password2) {
++ new_password2 = kstrdup(ctx->password2, GFP_KERNEL);
++ if (!new_password2) {
++ kfree_sensitive(new_password);
+ return -ENOMEM;
+ }
++ } else
++ STEAL_STRING_SENSITIVE(cifs_sb, ctx, password2);
++
++ /*
++ * we may update the passwords in the ses struct below. Make sure we do
++ * not race with smb2_reconnect
++ */
++ mutex_lock(&ses->session_mutex);
++
++ /*
++ * smb2_reconnect may swap password and password2 in case session setup
++ * failed. First get ctx passwords in sync with ses passwords. It should
++ * be okay to do this even if this function were to return an error at a
++ * later stage
++ */
++ rc = smb3_sync_session_ctx_passwords(cifs_sb, ses);
++ if (rc) {
++ mutex_unlock(&ses->session_mutex);
++ return rc;
+ }
++
++ /*
++ * now that allocations for passwords are done, commit them
++ */
++ if (new_password) {
++ kfree_sensitive(ses->password);
++ ses->password = new_password;
++ }
++ if (new_password2) {
++ kfree_sensitive(ses->password2);
++ ses->password2 = new_password2;
++ }
++
++ mutex_unlock(&ses->session_mutex);
++
+ STEAL_STRING(cifs_sb, ctx, domainname);
+ STEAL_STRING(cifs_sb, ctx, nodename);
+ STEAL_STRING(cifs_sb, ctx, iocharset);
+diff --git a/fs/smb/client/fs_context.h b/fs/smb/client/fs_context.h
+index 890d6d9d4a592f..c8c8b4451b3bc7 100644
+--- a/fs/smb/client/fs_context.h
++++ b/fs/smb/client/fs_context.h
+@@ -299,6 +299,7 @@ static inline struct smb3_fs_context *smb3_fc2context(const struct fs_context *f
+ }
+
+ extern int smb3_fs_context_dup(struct smb3_fs_context *new_ctx, struct smb3_fs_context *ctx);
++extern int smb3_sync_session_ctx_passwords(struct cifs_sb_info *cifs_sb, struct cifs_ses *ses);
+ extern void smb3_update_mnt_flags(struct cifs_sb_info *cifs_sb);
+
+ /*
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index eff3f57235eef3..6d567b16998119 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1115,6 +1115,7 @@ static int reparse_info_to_fattr(struct cifs_open_info_data *data,
+ rc = 0;
+ } else if (iov && server->ops->parse_reparse_point) {
+ rc = server->ops->parse_reparse_point(cifs_sb,
++ full_path,
+ iov, data);
+ }
+ break;
+@@ -2473,13 +2474,10 @@ cifs_dentry_needs_reval(struct dentry *dentry)
+ return true;
+
+ if (!open_cached_dir_by_dentry(tcon, dentry->d_parent, &cfid)) {
+- spin_lock(&cfid->fid_lock);
+ if (cfid->time && cifs_i->time > cfid->time) {
+- spin_unlock(&cfid->fid_lock);
+ close_cached_dir(cfid);
+ return false;
+ }
+- spin_unlock(&cfid->fid_lock);
+ close_cached_dir(cfid);
+ }
+ /*
+@@ -3062,6 +3060,7 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs)
+ int rc = -EACCES;
+ __u32 dosattr = 0;
+ __u64 mode = NO_CHANGE_64;
++ bool posix = cifs_sb_master_tcon(cifs_sb)->posix_extensions;
+
+ xid = get_xid();
+
+@@ -3152,7 +3151,8 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs)
+ mode = attrs->ia_mode;
+ rc = 0;
+ if ((cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL) ||
+- (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MODE_FROM_SID)) {
++ (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_MODE_FROM_SID) ||
++ posix) {
+ rc = id_mode_to_cifs_acl(inode, full_path, &mode,
+ INVALID_UID, INVALID_GID);
+ if (rc) {
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index 74abbdf5026c73..f74d0a86f44a4e 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -35,6 +35,9 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ u16 len, plen;
+ int rc = 0;
+
++ if (strlen(symname) > REPARSE_SYM_PATH_MAX)
++ return -ENAMETOOLONG;
++
+ sym = kstrdup(symname, GFP_KERNEL);
+ if (!sym)
+ return -ENOMEM;
+@@ -64,7 +67,7 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ if (rc < 0)
+ goto out;
+
+- plen = 2 * UniStrnlen((wchar_t *)path, PATH_MAX);
++ plen = 2 * UniStrnlen((wchar_t *)path, REPARSE_SYM_PATH_MAX);
+ len = sizeof(*buf) + plen * 2;
+ buf = kzalloc(len, GFP_KERNEL);
+ if (!buf) {
+@@ -532,9 +535,76 @@ static int parse_reparse_posix(struct reparse_posix_data *buf,
+ return 0;
+ }
+
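++/* convert a native SMB symlink target into a Linux path stored in *target */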
++int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
++ bool unicode, bool relative,
++ const char *full_path,
++ struct cifs_sb_info *cifs_sb)
++{
++ char sep = CIFS_DIR_SEP(cifs_sb);
++ char *linux_target = NULL;
++ char *smb_target = NULL;
++ int levels;
++ int rc;
++ int i;
++
++ smb_target = cifs_strndup_from_utf16(buf, len, unicode, cifs_sb->local_nls);
++ if (!smb_target) {
++ rc = -ENOMEM;
++ goto out;
++ }
++
++ if (smb_target[0] == sep && relative) {
++ /*
++ * This is a relative SMB symlink from the top of the share,
++ * which is the top level directory of the Linux mount point.
++ * Linux does not support such relative symlinks, so convert
++ * it to a relative symlink from the current directory.
++ * full_path is the SMB path to the symlink (from which the
++ * current directory is extracted) and smb_target is the SMB
++ * path where the symlink points, so full_path must always
++ * be on the SMB share.
++ */
++ int smb_target_len = strlen(smb_target)+1;
++ levels = 0;
++ for (i = 1; full_path[i]; i++) { /* i=1 to skip leading sep */
++ if (full_path[i] == sep)
++ levels++;
++ }
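++		/* each parent level becomes a "../" prefix (3 bytes including sep) */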
++ linux_target = kmalloc(levels*3 + smb_target_len, GFP_KERNEL);
++ if (!linux_target) {
++ rc = -ENOMEM;
++ goto out;
++ }
++ for (i = 0; i < levels; i++) {
++ linux_target[i*3 + 0] = '.';
++ linux_target[i*3 + 1] = '.';
++ linux_target[i*3 + 2] = sep;
++ }
++ memcpy(linux_target + levels*3, smb_target+1, smb_target_len); /* +1 to skip leading sep */
++ } else {
++ linux_target = smb_target;
++ smb_target = NULL;
++ }
++
++ if (sep == '\\')
++ convert_delimiter(linux_target, '/');
++
++ rc = 0;
++ *target = linux_target;
++
++ cifs_dbg(FYI, "%s: symlink target: %s\n", __func__, *target);
++
++out:
++ if (rc != 0)
++ kfree(linux_target);
++ kfree(smb_target);
++ return rc;
++}
++
+ static int parse_reparse_symlink(struct reparse_symlink_data_buffer *sym,
+ u32 plen, bool unicode,
+ struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ struct cifs_open_info_data *data)
+ {
+ unsigned int len;
+@@ -549,20 +619,18 @@ static int parse_reparse_symlink(struct reparse_symlink_data_buffer *sym,
+ return -EIO;
+ }
+
+- data->symlink_target = cifs_strndup_from_utf16(sym->PathBuffer + offs,
+- len, unicode,
+- cifs_sb->local_nls);
+- if (!data->symlink_target)
+- return -ENOMEM;
+-
+- convert_delimiter(data->symlink_target, '/');
+- cifs_dbg(FYI, "%s: target path: %s\n", __func__, data->symlink_target);
+-
+- return 0;
++ return smb2_parse_native_symlink(&data->symlink_target,
++ sym->PathBuffer + offs,
++ len,
++ unicode,
++ le32_to_cpu(sym->Flags) & SYMLINK_FLAG_RELATIVE,
++ full_path,
++ cifs_sb);
+ }
+
+ int parse_reparse_point(struct reparse_data_buffer *buf,
+ u32 plen, struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ bool unicode, struct cifs_open_info_data *data)
+ {
+ struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
+@@ -577,7 +645,7 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
+ case IO_REPARSE_TAG_SYMLINK:
+ return parse_reparse_symlink(
+ (struct reparse_symlink_data_buffer *)buf,
+- plen, unicode, cifs_sb, data);
++ plen, unicode, cifs_sb, full_path, data);
+ case IO_REPARSE_TAG_LX_SYMLINK:
+ case IO_REPARSE_TAG_AF_UNIX:
+ case IO_REPARSE_TAG_LX_FIFO:
+@@ -593,6 +661,7 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
+ }
+
+ int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ struct kvec *rsp_iov,
+ struct cifs_open_info_data *data)
+ {
+@@ -602,7 +671,7 @@ int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
+
+ buf = (struct reparse_data_buffer *)((u8 *)io +
+ le32_to_cpu(io->OutputOffset));
+- return parse_reparse_point(buf, plen, cifs_sb, true, data);
++ return parse_reparse_point(buf, plen, cifs_sb, full_path, true, data);
+ }
+
+ static void wsl_to_fattr(struct cifs_open_info_data *data,
+diff --git a/fs/smb/client/reparse.h b/fs/smb/client/reparse.h
+index 158e7b7aae646c..ff05b0e75c9284 100644
+--- a/fs/smb/client/reparse.h
++++ b/fs/smb/client/reparse.h
+@@ -12,6 +12,8 @@
+ #include "fs_context.h"
+ #include "cifsglob.h"
+
++#define REPARSE_SYM_PATH_MAX 4060
++
+ /*
+ * Used only by cifs.ko to ignore reparse points from files when client or
+ * server doesn't support FSCTL_GET_REPARSE_POINT.
+@@ -115,7 +117,9 @@ int smb2_create_reparse_symlink(const unsigned int xid, struct inode *inode,
+ int smb2_mknod_reparse(unsigned int xid, struct inode *inode,
+ struct dentry *dentry, struct cifs_tcon *tcon,
+ const char *full_path, umode_t mode, dev_t dev);
+-int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb, struct kvec *rsp_iov,
++int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
++ const char *full_path,
++ struct kvec *rsp_iov,
+ struct cifs_open_info_data *data);
+
+ #endif /* _CIFS_REPARSE_H */
+diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c
+index 9a6ece66c4d34e..db3695eddcf9d5 100644
+--- a/fs/smb/client/smb1ops.c
++++ b/fs/smb/client/smb1ops.c
+@@ -994,17 +994,17 @@ static int cifs_query_symlink(const unsigned int xid,
+ }
+
+ static int cifs_parse_reparse_point(struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ struct kvec *rsp_iov,
+ struct cifs_open_info_data *data)
+ {
+ struct reparse_data_buffer *buf;
+ TRANSACT_IOCTL_RSP *io = rsp_iov->iov_base;
+- bool unicode = !!(io->hdr.Flags2 & SMBFLG2_UNICODE);
+ u32 plen = le16_to_cpu(io->ByteCount);
+
+ buf = (struct reparse_data_buffer *)((__u8 *)&io->hdr.Protocol +
+ le32_to_cpu(io->DataOffset));
+- return parse_reparse_point(buf, plen, cifs_sb, unicode, data);
++ return parse_reparse_point(buf, plen, cifs_sb, full_path, true, data);
+ }
+
+ static bool
+diff --git a/fs/smb/client/smb2file.c b/fs/smb/client/smb2file.c
+index e301349b0078d1..e836bc2193ddd3 100644
+--- a/fs/smb/client/smb2file.c
++++ b/fs/smb/client/smb2file.c
+@@ -63,12 +63,12 @@ static struct smb2_symlink_err_rsp *symlink_data(const struct kvec *iov)
+ return sym;
+ }
+
+-int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov, char **path)
++int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov,
++ const char *full_path, char **path)
+ {
+ struct smb2_symlink_err_rsp *sym;
+ unsigned int sub_offs, sub_len;
+ unsigned int print_offs, print_len;
+- char *s;
+
+ if (!cifs_sb || !iov || !iov->iov_base || !iov->iov_len || !path)
+ return -EINVAL;
+@@ -86,15 +86,13 @@ int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec
+ iov->iov_len < SMB2_SYMLINK_STRUCT_SIZE + print_offs + print_len)
+ return -EINVAL;
+
+- s = cifs_strndup_from_utf16((char *)sym->PathBuffer + sub_offs, sub_len, true,
+- cifs_sb->local_nls);
+- if (!s)
+- return -ENOMEM;
+- convert_delimiter(s, '/');
+- cifs_dbg(FYI, "%s: symlink target: %s\n", __func__, s);
+-
+- *path = s;
+- return 0;
++ return smb2_parse_native_symlink(path,
++ (char *)sym->PathBuffer + sub_offs,
++ sub_len,
++ true,
++ le32_to_cpu(sym->Flags) & SYMLINK_FLAG_RELATIVE,
++ full_path,
++ cifs_sb);
+ }
+
+ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock, void *buf)
+@@ -126,6 +124,7 @@ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32
+ goto out;
+ if (hdr->Status == STATUS_STOPPED_ON_SYMLINK) {
+ rc = smb2_parse_symlink_response(oparms->cifs_sb, &err_iov,
++ oparms->path,
+ &data->symlink_target);
+ if (!rc) {
+ memset(smb2_data, 0, sizeof(*smb2_data));
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index e49d0c25eb0384..a188908914fe8f 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -828,6 +828,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+
+ static int parse_create_response(struct cifs_open_info_data *data,
+ struct cifs_sb_info *cifs_sb,
++ const char *full_path,
+ const struct kvec *iov)
+ {
+ struct smb2_create_rsp *rsp = iov->iov_base;
+@@ -841,6 +842,7 @@ static int parse_create_response(struct cifs_open_info_data *data,
+ break;
+ case STATUS_STOPPED_ON_SYMLINK:
+ rc = smb2_parse_symlink_response(cifs_sb, iov,
++ full_path,
+ &data->symlink_target);
+ if (rc)
+ return rc;
+@@ -930,14 +932,14 @@ int smb2_query_path_info(const unsigned int xid,
+
+ switch (rc) {
+ case 0:
+- rc = parse_create_response(data, cifs_sb, &out_iov[0]);
++ rc = parse_create_response(data, cifs_sb, full_path, &out_iov[0]);
+ break;
+ case -EOPNOTSUPP:
+ /*
+ * BB TODO: When support for special files added to Samba
+ * re-verify this path.
+ */
+- rc = parse_create_response(data, cifs_sb, &out_iov[0]);
++ rc = parse_create_response(data, cifs_sb, full_path, &out_iov[0]);
+ if (rc || !data->reparse_point)
+ goto out;
+
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 24a2aa04a1086c..7571fefeb83aa1 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -4080,7 +4080,7 @@ map_oplock_to_lease(u8 oplock)
+ if (oplock == SMB2_OPLOCK_LEVEL_EXCLUSIVE)
+ return SMB2_LEASE_WRITE_CACHING_LE | SMB2_LEASE_READ_CACHING_LE;
+ else if (oplock == SMB2_OPLOCK_LEVEL_II)
+- return SMB2_LEASE_READ_CACHING_LE;
++ return SMB2_LEASE_READ_CACHING_LE | SMB2_LEASE_HANDLE_CACHING_LE;
+ else if (oplock == SMB2_OPLOCK_LEVEL_BATCH)
+ return SMB2_LEASE_HANDLE_CACHING_LE | SMB2_LEASE_READ_CACHING_LE |
+ SMB2_LEASE_WRITE_CACHING_LE;
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 6584b5cddc280a..d1bd69cbfe09a5 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -1231,7 +1231,9 @@ SMB2_negotiate(const unsigned int xid,
+ * SMB3.0 supports only 1 cipher and doesn't have a encryption neg context
+ * Set the cipher type manually.
+ */
+- if (server->dialect == SMB30_PROT_ID && (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
++ if ((server->dialect == SMB30_PROT_ID ||
++ server->dialect == SMB302_PROT_ID) &&
++ (server->capabilities & SMB2_GLOBAL_CAP_ENCRYPTION))
+ server->cipher_type = SMB2_ENCRYPTION_AES128_CCM;
+
+ security_blob = smb2_get_data_area_len(&blob_offset, &blob_length,
+@@ -2683,7 +2685,7 @@ create_sd_buf(umode_t mode, bool set_owner, unsigned int *len)
+ ptr += sizeof(struct smb3_acl);
+
+ /* create one ACE to hold the mode embedded in reserved special SID */
+- acelen = setup_special_mode_ACE((struct smb_ace *)ptr, (__u64)mode);
++ acelen = setup_special_mode_ACE((struct smb_ace *)ptr, false, (__u64)mode);
+ ptr += acelen;
+ acl_size = acelen + sizeof(struct smb3_acl);
+ ace_count = 1;
+diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h
+index 6f9885e4f66ca5..09349fa8da039a 100644
+--- a/fs/smb/client/smb2proto.h
++++ b/fs/smb/client/smb2proto.h
+@@ -37,8 +37,6 @@ extern struct mid_q_entry *smb2_setup_request(struct cifs_ses *ses,
+ struct smb_rqst *rqst);
+ extern struct mid_q_entry *smb2_setup_async_request(
+ struct TCP_Server_Info *server, struct smb_rqst *rqst);
+-extern struct cifs_ses *smb2_find_smb_ses(struct TCP_Server_Info *server,
+- __u64 ses_id);
+ extern struct cifs_tcon *smb2_find_smb_tcon(struct TCP_Server_Info *server,
+ __u64 ses_id, __u32 tid);
+ extern int smb2_calc_signature(struct smb_rqst *rqst,
+@@ -113,7 +111,14 @@ extern int smb3_query_mf_symlink(unsigned int xid, struct cifs_tcon *tcon,
+ struct cifs_sb_info *cifs_sb,
+ const unsigned char *path, char *pbuf,
+ unsigned int *pbytes_read);
+-int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb, const struct kvec *iov, char **path);
++int smb2_parse_native_symlink(char **target, const char *buf, unsigned int len,
++ bool unicode, bool relative,
++ const char *full_path,
++ struct cifs_sb_info *cifs_sb);
++int smb2_parse_symlink_response(struct cifs_sb_info *cifs_sb,
++ const struct kvec *iov,
++ const char *full_path,
++ char **path);
+ int smb2_open_file(const unsigned int xid, struct cifs_open_parms *oparms, __u32 *oplock,
+ void *buf);
+ extern int smb2_unlock_range(struct cifsFileInfo *cfile,
+diff --git a/fs/smb/client/smb2transport.c b/fs/smb/client/smb2transport.c
+index b486b14bb3306f..475b36c27f6543 100644
+--- a/fs/smb/client/smb2transport.c
++++ b/fs/smb/client/smb2transport.c
+@@ -74,7 +74,7 @@ smb311_crypto_shash_allocate(struct TCP_Server_Info *server)
+
+
+ static
+-int smb2_get_sign_key(__u64 ses_id, struct TCP_Server_Info *server, u8 *key)
++int smb3_get_sign_key(__u64 ses_id, struct TCP_Server_Info *server, u8 *key)
+ {
+ struct cifs_chan *chan;
+ struct TCP_Server_Info *pserver;
+@@ -168,16 +168,41 @@ smb2_find_smb_ses_unlocked(struct TCP_Server_Info *server, __u64 ses_id)
+ return NULL;
+ }
+
+-struct cifs_ses *
+-smb2_find_smb_ses(struct TCP_Server_Info *server, __u64 ses_id)
++static int smb2_get_sign_key(struct TCP_Server_Info *server,
++ __u64 ses_id, u8 *key)
+ {
+ struct cifs_ses *ses;
++ int rc = -ENOENT;
++
++ if (SERVER_IS_CHAN(server))
++ server = server->primary_server;
+
+ spin_lock(&cifs_tcp_ses_lock);
+- ses = smb2_find_smb_ses_unlocked(server, ses_id);
+- spin_unlock(&cifs_tcp_ses_lock);
++ list_for_each_entry(ses, &server->smb_ses_list, smb_ses_list) {
++ if (ses->Suid != ses_id)
++ continue;
+
+- return ses;
++ rc = 0;
++ spin_lock(&ses->ses_lock);
++ switch (ses->ses_status) {
++ case SES_EXITING: /* SMB2_LOGOFF */
++ case SES_GOOD:
++ if (likely(ses->auth_key.response)) {
++ memcpy(key, ses->auth_key.response,
++ SMB2_NTLMV2_SESSKEY_SIZE);
++ } else {
++ rc = -EIO;
++ }
++ break;
++ default:
++ rc = -EAGAIN;
++ break;
++ }
++ spin_unlock(&ses->ses_lock);
++ break;
++ }
++ spin_unlock(&cifs_tcp_ses_lock);
++ return rc;
+ }
+
+ static struct cifs_tcon *
+@@ -236,14 +261,16 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+ unsigned char *sigptr = smb2_signature;
+ struct kvec *iov = rqst->rq_iov;
+ struct smb2_hdr *shdr = (struct smb2_hdr *)iov[0].iov_base;
+- struct cifs_ses *ses;
+ struct shash_desc *shash = NULL;
+ struct smb_rqst drqst;
++ __u64 sid = le64_to_cpu(shdr->SessionId);
++ u8 key[SMB2_NTLMV2_SESSKEY_SIZE];
+
+- ses = smb2_find_smb_ses(server, le64_to_cpu(shdr->SessionId));
+- if (unlikely(!ses)) {
+- cifs_server_dbg(FYI, "%s: Could not find session\n", __func__);
+- return -ENOENT;
++ rc = smb2_get_sign_key(server, sid, key);
++ if (unlikely(rc)) {
++ cifs_server_dbg(FYI, "%s: [sesid=0x%llx] couldn't find signing key: %d\n",
++ __func__, sid, rc);
++ return rc;
+ }
+
+ memset(smb2_signature, 0x0, SMB2_HMACSHA256_SIZE);
+@@ -260,8 +287,7 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+ shash = server->secmech.hmacsha256;
+ }
+
+- rc = crypto_shash_setkey(shash->tfm, ses->auth_key.response,
+- SMB2_NTLMV2_SESSKEY_SIZE);
++ rc = crypto_shash_setkey(shash->tfm, key, sizeof(key));
+ if (rc) {
+ cifs_server_dbg(VFS,
+ "%s: Could not update with response\n",
+@@ -303,8 +329,6 @@ smb2_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+ out:
+ if (allocate_crypto)
+ cifs_free_hash(&shash);
+- if (ses)
+- cifs_put_smb_ses(ses);
+ return rc;
+ }
+
+@@ -570,7 +594,7 @@ smb3_calc_signature(struct smb_rqst *rqst, struct TCP_Server_Info *server,
+ struct smb_rqst drqst;
+ u8 key[SMB3_SIGN_KEY_SIZE];
+
+- rc = smb2_get_sign_key(le64_to_cpu(shdr->SessionId), server, key);
++ rc = smb3_get_sign_key(le64_to_cpu(shdr->SessionId), server, key);
+ if (unlikely(rc)) {
+ cifs_server_dbg(FYI, "%s: Could not get signing key\n", __func__);
+ return rc;
+diff --git a/fs/smb/client/trace.h b/fs/smb/client/trace.h
+index 0b52d22a91a0cb..12cbd3428a6da5 100644
+--- a/fs/smb/client/trace.h
++++ b/fs/smb/client/trace.h
+@@ -44,6 +44,8 @@
+ EM(netfs_trace_tcon_ref_free_ipc, "FRE Ipc ") \
+ EM(netfs_trace_tcon_ref_free_ipc_fail, "FRE Ipc-F ") \
+ EM(netfs_trace_tcon_ref_free_reconnect_server, "FRE Reconn") \
++ EM(netfs_trace_tcon_ref_get_cached_laundromat, "GET Ch-Lau") \
++ EM(netfs_trace_tcon_ref_get_cached_lease_break, "GET Ch-Lea") \
+ EM(netfs_trace_tcon_ref_get_cancelled_close, "GET Cn-Cls") \
+ EM(netfs_trace_tcon_ref_get_dfs_refer, "GET DfsRef") \
+ EM(netfs_trace_tcon_ref_get_find, "GET Find ") \
+@@ -52,6 +54,7 @@
+ EM(netfs_trace_tcon_ref_new, "NEW ") \
+ EM(netfs_trace_tcon_ref_new_ipc, "NEW Ipc ") \
+ EM(netfs_trace_tcon_ref_new_reconnect_server, "NEW Reconn") \
++ EM(netfs_trace_tcon_ref_put_cached_close, "PUT Ch-Cls") \
+ EM(netfs_trace_tcon_ref_put_cancelled_close, "PUT Cn-Cls") \
+ EM(netfs_trace_tcon_ref_put_cancelled_close_fid, "PUT Cn-Fid") \
+ EM(netfs_trace_tcon_ref_put_cancelled_mid, "PUT Cn-Mid") \
+diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c
+index e6cfedba999232..c8cc6fa6fc3ebb 100644
+--- a/fs/smb/server/server.c
++++ b/fs/smb/server/server.c
+@@ -276,8 +276,12 @@ static void handle_ksmbd_work(struct work_struct *wk)
+ * disconnection. waitqueue_active is safe because it
+ * uses atomic operation for condition.
+ */
++ atomic_inc(&conn->refcnt);
+ if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q))
+ wake_up(&conn->r_count_q);
++
++ if (atomic_dec_and_test(&conn->refcnt))
++ kfree(conn);
+ }
+
+ /**
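+
+/* A minimal sketch of the refcount ordering the ksmbd hunk above relies
+ * on, modeled in C11 atomics with illustrative names (struct conn and
+ * work_done() are stand-ins, not the ksmbd structures): pin the object
+ * before dropping the worker count, and free it only on the final
+ * reference drop, so the waker never touches freed memory. */
+#include <stdatomic.h>
+#include <stdlib.h>
+
+struct conn {
+	atomic_int refcnt;	/* object lifetime */
+	atomic_int r_count;	/* in-flight work items */
+};
+
+static void work_done(struct conn *c)
+{
+	atomic_fetch_add(&c->refcnt, 1);	/* pin across the r_count drop */
+
+	if (atomic_fetch_sub(&c->r_count, 1) == 1) {
+		/* last work item: the wake_up() equivalent goes here */
+	}
+
+	if (atomic_fetch_sub(&c->refcnt, 1) == 1)
+		free(c);	/* this path, not the waker, frees the object */
+}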
+diff --git a/fs/ubifs/super.c b/fs/ubifs/super.c
+index 291583005dd123..245a10cc1eeb4d 100644
+--- a/fs/ubifs/super.c
++++ b/fs/ubifs/super.c
+@@ -773,10 +773,10 @@ static void init_constants_master(struct ubifs_info *c)
+ * necessary to report something for the 'statfs()' call.
+ *
+ * Subtract the LEB reserved for GC, the LEB which is reserved for
+- * deletions, minimum LEBs for the index, and assume only one journal
+- * head is available.
++ * deletions, minimum LEBs for the index, and the LEBs which are
++ * reserved for each journal head.
+ */
+- tmp64 = c->main_lebs - 1 - 1 - MIN_INDEX_LEBS - c->jhead_cnt + 1;
++ tmp64 = c->main_lebs - 1 - 1 - MIN_INDEX_LEBS - c->jhead_cnt;
+ tmp64 *= (long long)c->leb_size - c->leb_overhead;
+ tmp64 = ubifs_reported_space(c, tmp64);
+ c->block_cnt = tmp64 >> UBIFS_BLOCK_SHIFT;
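+
+/* Worked example of the formula change above, with illustrative
+ * numbers (100 main LEBs, 6 minimum index LEBs, 2 journal heads):
+ *
+ *	old: 100 - 1 - 1 - 6 - 2 + 1 = 91 reportable LEBs
+ *	new: 100 - 1 - 1 - 6 - 2     = 90 reportable LEBs
+ *
+ * The old "+ 1" credited one journal-head LEB back, so statfs() used
+ * to over-report free space by one LEB's worth of bytes. */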
+diff --git a/fs/ubifs/tnc_commit.c b/fs/ubifs/tnc_commit.c
+index a55e04822d16e9..7c43e0ccf6d47d 100644
+--- a/fs/ubifs/tnc_commit.c
++++ b/fs/ubifs/tnc_commit.c
+@@ -657,6 +657,8 @@ static int get_znodes_to_commit(struct ubifs_info *c)
+ znode->alt = 0;
+ cnext = find_next_dirty(znode);
+ if (!cnext) {
++ ubifs_assert(c, !znode->parent);
++ znode->cparent = NULL;
+ znode->cnext = c->cnext;
+ break;
+ }
+diff --git a/fs/unicode/utf8-core.c b/fs/unicode/utf8-core.c
+index 8395066341a437..0400824ef4936e 100644
+--- a/fs/unicode/utf8-core.c
++++ b/fs/unicode/utf8-core.c
+@@ -198,7 +198,7 @@ struct unicode_map *utf8_load(unsigned int version)
+ return um;
+
+ out_symbol_put:
+- symbol_put(um->tables);
++ symbol_put(utf8_data_table);
+ out_free_um:
+ kfree(um);
+ return ERR_PTR(-EINVAL);
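+
+/* symbol_get()/symbol_put() are paired by symbol name; the fix above
+ * puts the symbol that was grabbed rather than the pointer derived
+ * from it. Sketch of the paired usage in utf8_load():
+ *
+ *	um->tables = symbol_get(utf8_data_table);	// takes module ref
+ *	...
+ *	symbol_put(utf8_data_table);	// put by symbol, not um->tables
+ */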
+diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
+index 4719ec90029cb7..edaf193dbd5ccc 100644
+--- a/fs/xfs/xfs_bmap_util.c
++++ b/fs/xfs/xfs_bmap_util.c
+@@ -546,10 +546,14 @@ xfs_can_free_eofblocks(
+ return false;
+
+ /*
+- * Check if there is an post-EOF extent to free.
++ * Check if there is a post-EOF extent to free. If there are any
++ * delalloc blocks attached to the inode (data fork delalloc
++ * reservations or CoW extents of any kind), we need to free them so
++ * that inactivation doesn't fail to erase them.
+ */
+ xfs_ilock(ip, XFS_ILOCK_SHARED);
+- if (xfs_iext_lookup_extent(ip, &ip->i_df, end_fsb, &icur, &imap))
++ if (ip->i_delayed_blks ||
++ xfs_iext_lookup_extent(ip, &ip->i_df, end_fsb, &icur, &imap))
+ found_blocks = true;
+ xfs_iunlock(ip, XFS_ILOCK_SHARED);
+ return found_blocks;
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index eeadbaeccf88b7..fa284b64b2de20 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -350,9 +350,9 @@
+ *(.data..decrypted) \
+ *(.ref.data) \
+ *(.data..shared_aligned) /* percpu related */ \
+- *(.data.unlikely) \
++ *(.data..unlikely) \
+ __start_once = .; \
+- *(.data.once) \
++ *(.data..once) \
+ __end_once = .; \
+ STRUCT_ALIGN(); \
+ *(__tracepoints) \
+diff --git a/include/kunit/skbuff.h b/include/kunit/skbuff.h
+index 44d12370939a90..345e1e8f031235 100644
+--- a/include/kunit/skbuff.h
++++ b/include/kunit/skbuff.h
+@@ -29,7 +29,7 @@ static void kunit_action_kfree_skb(void *p)
+ static inline struct sk_buff *kunit_zalloc_skb(struct kunit *test, int len,
+ gfp_t gfp)
+ {
+- struct sk_buff *res = alloc_skb(len, GFP_KERNEL);
++ struct sk_buff *res = alloc_skb(len, gfp);
+
+ if (!res || skb_pad(res, len))
+ return NULL;
+diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
+index 4fecf46ef681b3..c5063e0a38a058 100644
+--- a/include/linux/blk-mq.h
++++ b/include/linux/blk-mq.h
+@@ -925,6 +925,8 @@ void blk_freeze_queue_start(struct request_queue *q);
+ void blk_mq_freeze_queue_wait(struct request_queue *q);
+ int blk_mq_freeze_queue_wait_timeout(struct request_queue *q,
+ unsigned long timeout);
++void blk_mq_unfreeze_queue_non_owner(struct request_queue *q);
++void blk_freeze_queue_start_non_owner(struct request_queue *q);
+
+ void blk_mq_map_queues(struct blk_mq_queue_map *qmap);
+ void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues);
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 50c3b959da2816..e84a93c4013207 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -25,6 +25,7 @@
+ #include <linux/uuid.h>
+ #include <linux/xarray.h>
+ #include <linux/file.h>
++#include <linux/lockdep.h>
+
+ struct module;
+ struct request_queue;
+@@ -471,6 +472,11 @@ struct request_queue {
+ struct xarray hctx_table;
+
+ struct percpu_ref q_usage_counter;
++ struct lock_class_key io_lock_cls_key;
++ struct lockdep_map io_lockdep_map;
++
++ struct lock_class_key q_lock_cls_key;
++ struct lockdep_map q_lockdep_map;
+
+ struct request *last_merge;
+
+@@ -566,6 +572,10 @@ struct request_queue {
+ struct throtl_data *td;
+ #endif
+ struct rcu_head rcu_head;
++#ifdef CONFIG_LOCKDEP
++ struct task_struct *mq_freeze_owner;
++ int mq_freeze_owner_depth;
++#endif
+ wait_queue_head_t mq_freeze_wq;
+ /*
+ * Protect concurrent access to q_usage_counter by
+@@ -1247,7 +1257,7 @@ static inline unsigned int queue_io_min(const struct request_queue *q)
+ return q->limits.io_min;
+ }
+
+-static inline int bdev_io_min(struct block_device *bdev)
++static inline unsigned int bdev_io_min(struct block_device *bdev)
+ {
+ return queue_io_min(bdev_get_queue(bdev));
+ }
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index bdadb0bb6cecd1..bc2e3dab0487ea 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1373,7 +1373,8 @@ int arch_prepare_bpf_dispatcher(void *image, void *buf, s64 *funcs, int num_func
+ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
+ struct bpf_prog *to);
+ /* Called only from JIT-enabled code, so there's no need for stubs. */
+-void bpf_image_ksym_add(void *data, unsigned int size, struct bpf_ksym *ksym);
++void bpf_image_ksym_init(void *data, unsigned int size, struct bpf_ksym *ksym);
++void bpf_image_ksym_add(struct bpf_ksym *ksym);
+ void bpf_image_ksym_del(struct bpf_ksym *ksym);
+ void bpf_ksym_add(struct bpf_ksym *ksym);
+ void bpf_ksym_del(struct bpf_ksym *ksym);
+@@ -3461,4 +3462,10 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog)
+ return prog->aux->func_idx != 0;
+ }
+
++static inline bool bpf_prog_is_raw_tp(const struct bpf_prog *prog)
++{
++ return prog->type == BPF_PROG_TYPE_TRACING &&
++ prog->expected_attach_type == BPF_TRACE_RAW_TP;
++}
++
+ #endif /* _LINUX_BPF_H */
+diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
+index 038b2d523bf884..518bd1fd86fbe0 100644
+--- a/include/linux/cleanup.h
++++ b/include/linux/cleanup.h
+@@ -290,7 +290,7 @@ static inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \
+ #define DEFINE_GUARD(_name, _type, _lock, _unlock) \
+ DEFINE_CLASS(_name, _type, if (_T) { _unlock; }, ({ _lock; _T; }), _type _T); \
+ static inline void * class_##_name##_lock_ptr(class_##_name##_t *_T) \
+- { return *_T; }
++ { return (void *)(__force unsigned long)*_T; }
+
+ #define DEFINE_GUARD_COND(_name, _ext, _condlock) \
+ EXTEND_CLASS(_name, _ext, \
+@@ -347,7 +347,7 @@ static inline void class_##_name##_destructor(class_##_name##_t *_T) \
+ \
+ static inline void *class_##_name##_lock_ptr(class_##_name##_t *_T) \
+ { \
+- return _T->lock; \
++ return (void *)(__force unsigned long)_T->lock; \
+ }
+
+
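+
+/* Usage sketch for a guard created with DEFINE_GUARD(); the mutex
+ * guard below already exists in linux/mutex.h and is shown only to
+ * illustrate what class_##_name##_lock_ptr() backs (struct foo is a
+ * stand-in):
+ *
+ *	DEFINE_GUARD(mutex, struct mutex *, mutex_lock(_T), mutex_unlock(_T))
+ *
+ *	static int update_counter(struct foo *foo)
+ *	{
+ *		guard(mutex)(&foo->lock);	// dropped on every return path
+ *		return ++foo->counter;
+ *	}
+ *
+ * The __force cast added above lets the same machinery accept pointer
+ * types carrying address-space annotations that a plain void * return
+ * would otherwise make sparse warn about. */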
+diff --git a/include/linux/compiler_attributes.h b/include/linux/compiler_attributes.h
+index 32284cd26d52a7..c16d4199bf9231 100644
+--- a/include/linux/compiler_attributes.h
++++ b/include/linux/compiler_attributes.h
+@@ -94,19 +94,6 @@
+ # define __copy(symbol)
+ #endif
+
+-/*
+- * Optional: only supported since gcc >= 15
+- * Optional: only supported since clang >= 18
+- *
+- * gcc: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108896
+- * clang: https://github.com/llvm/llvm-project/pull/76348
+- */
+-#if __has_attribute(__counted_by__)
+-# define __counted_by(member) __attribute__((__counted_by__(member)))
+-#else
+-# define __counted_by(member)
+-#endif
+-
+ /*
+ * Optional: not supported by gcc
+ * Optional: only supported since clang >= 14.0
+diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
+index 1a957ea2f4fe78..639be0f30b455d 100644
+--- a/include/linux/compiler_types.h
++++ b/include/linux/compiler_types.h
+@@ -323,6 +323,25 @@ struct ftrace_likely_data {
+ #define __no_sanitize_or_inline __always_inline
+ #endif
+
++/*
++ * Optional: only supported since gcc >= 15
++ * Optional: only supported since clang >= 18
++ *
++ * gcc: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108896
++ * clang: https://github.com/llvm/llvm-project/pull/76348
++ *
++ * __bdos on clang < 19.1.2 can erroneously return 0:
++ * https://github.com/llvm/llvm-project/pull/110497
++ *
++ * __bdos on clang < 19.1.3 can be off by 4:
++ * https://github.com/llvm/llvm-project/pull/112636
++ */
++#ifdef CONFIG_CC_HAS_COUNTED_BY
++# define __counted_by(member) __attribute__((__counted_by__(member)))
++#else
++# define __counted_by(member)
++#endif
++
+ /*
+ * Apply __counted_by() when the Endianness matches to increase test coverage.
+ */
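+
+/* Typical use of the annotation now gated by CC_HAS_COUNTED_BY: tie a
+ * flexible array's bounds to its count member so __bdos()/FORTIFY can
+ * check accesses (names below are illustrative):
+ *
+ *	struct packet {
+ *		size_t count;
+ *		u32 data[] __counted_by(count);
+ *	};
+ *
+ *	p = kzalloc(struct_size(p, data, n), GFP_KERNEL);
+ *	if (p)
+ *		p->count = n;	// must be set before data[] is indexed
+ */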
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index b0b821edfd97d1..3b2ad444c002ee 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -24,10 +24,10 @@
+ #define NEW_ADDR ((block_t)-1) /* used as block_t addresses */
+ #define COMPRESS_ADDR ((block_t)-2) /* used as compressed data flag */
+
+-#define F2FS_BYTES_TO_BLK(bytes) ((bytes) >> F2FS_BLKSIZE_BITS)
+-#define F2FS_BLK_TO_BYTES(blk) ((blk) << F2FS_BLKSIZE_BITS)
++#define F2FS_BYTES_TO_BLK(bytes) ((unsigned long long)(bytes) >> F2FS_BLKSIZE_BITS)
++#define F2FS_BLK_TO_BYTES(blk) ((unsigned long long)(blk) << F2FS_BLKSIZE_BITS)
+ #define F2FS_BLK_END_BYTES(blk) (F2FS_BLK_TO_BYTES(blk + 1) - 1)
+-#define F2FS_BLK_ALIGN(x) (F2FS_BYTES_TO_BLK((x) + F2FS_BLKSIZE - 1))
++#define F2FS_BLK_ALIGN(x) (F2FS_BYTES_TO_BLK((x) + F2FS_BLKSIZE - 1))
+
+ /* 0, 1(node nid), 2(meta nid) are reserved node id */
+ #define F2FS_RESERVED_NODE_NUM 3
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 3559446279c152..4b5cad44a12683 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -3726,6 +3726,6 @@ static inline bool vfs_empty_path(int dfd, const char __user *path)
+ return !c;
+ }
+
+-bool generic_atomic_write_valid(struct iov_iter *iter, loff_t pos);
++int generic_atomic_write_valid(struct kiocb *iocb, struct iov_iter *iter);
+
+ #endif /* _LINUX_FS_H */
+diff --git a/include/linux/hisi_acc_qm.h b/include/linux/hisi_acc_qm.h
+index 9d7754ad5e9b08..43ad280935e360 100644
+--- a/include/linux/hisi_acc_qm.h
++++ b/include/linux/hisi_acc_qm.h
+@@ -229,6 +229,12 @@ struct hisi_qm_status {
+
+ struct hisi_qm;
+
++enum acc_err_result {
++ ACC_ERR_NONE,
++ ACC_ERR_NEED_RESET,
++ ACC_ERR_RECOVERED,
++};
++
+ struct hisi_qm_err_info {
+ char *acpi_rst;
+ u32 msi_wr_port;
+@@ -257,9 +263,9 @@ struct hisi_qm_err_ini {
+ void (*close_axi_master_ooo)(struct hisi_qm *qm);
+ void (*open_sva_prefetch)(struct hisi_qm *qm);
+ void (*close_sva_prefetch)(struct hisi_qm *qm);
+- void (*log_dev_hw_err)(struct hisi_qm *qm, u32 err_sts);
+ void (*show_last_dfx_regs)(struct hisi_qm *qm);
+ void (*err_info_init)(struct hisi_qm *qm);
++ enum acc_err_result (*get_err_result)(struct hisi_qm *qm);
+ };
+
+ struct hisi_qm_cap_info {
+diff --git a/include/linux/intel_vsec.h b/include/linux/intel_vsec.h
+index 11ee185566c31c..b94beab64610b9 100644
+--- a/include/linux/intel_vsec.h
++++ b/include/linux/intel_vsec.h
+@@ -74,10 +74,11 @@ enum intel_vsec_quirks {
+ * @pdev: PCI device reference for the callback's use
+ * @guid: ID of data to acccss
+ * @data: buffer for the data to be copied
++ * @off: offset into the requested buffer
+ * @count: size of buffer
+ */
+ struct pmt_callbacks {
+- int (*read_telem)(struct pci_dev *pdev, u32 guid, u64 *data, u32 count);
++ int (*read_telem)(struct pci_dev *pdev, u32 guid, u64 *data, loff_t off, u32 count);
+ };
+
+ /**
+diff --git a/include/linux/jiffies.h b/include/linux/jiffies.h
+index 1220f0fbe5bf9f..5d21dacd62bc7d 100644
+--- a/include/linux/jiffies.h
++++ b/include/linux/jiffies.h
+@@ -502,7 +502,7 @@ static inline unsigned long _msecs_to_jiffies(const unsigned int m)
+ * - all other values are converted to jiffies by either multiplying
+ * the input value by a factor or dividing it with a factor and
+ * handling any 32-bit overflows.
+- * for the details see __msecs_to_jiffies()
++ * for the details see _msecs_to_jiffies()
+ *
+ * msecs_to_jiffies() checks for the passed in value being a constant
+ * via __builtin_constant_p() allowing gcc to eliminate most of the
+diff --git a/include/linux/kfifo.h b/include/linux/kfifo.h
+index 564868bdce898b..fd743d4c4b4bdc 100644
+--- a/include/linux/kfifo.h
++++ b/include/linux/kfifo.h
+@@ -37,7 +37,6 @@
+ */
+
+ #include <linux/array_size.h>
+-#include <linux/dma-mapping.h>
+ #include <linux/spinlock.h>
+ #include <linux/stddef.h>
+ #include <linux/types.h>
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 45be36e5285ffb..85fe9d0ebb9152 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -2382,12 +2382,6 @@ static inline int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu)
+ }
+ #endif /* CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE */
+
+-typedef int (*kvm_vm_thread_fn_t)(struct kvm *kvm, uintptr_t data);
+-
+-int kvm_vm_create_worker_thread(struct kvm *kvm, kvm_vm_thread_fn_t thread_fn,
+- uintptr_t data, const char *name,
+- struct task_struct **thread_ptr);
+-
+ #ifdef CONFIG_KVM_XFER_TO_GUEST_WORK
+ static inline void kvm_handle_signal_exit(struct kvm_vcpu *vcpu)
+ {
+diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
+index 217f7abf2cbfab..67964dc4db952e 100644
+--- a/include/linux/lockdep.h
++++ b/include/linux/lockdep.h
+@@ -173,7 +173,7 @@ static inline void lockdep_init_map(struct lockdep_map *lock, const char *name,
+ (lock)->dep_map.lock_type)
+
+ #define lockdep_set_subclass(lock, sub) \
+- lockdep_init_map_type(&(lock)->dep_map, #lock, (lock)->dep_map.key, sub,\
++ lockdep_init_map_type(&(lock)->dep_map, (lock)->dep_map.name, (lock)->dep_map.key, sub,\
+ (lock)->dep_map.wait_type_inner, \
+ (lock)->dep_map.wait_type_outer, \
+ (lock)->dep_map.lock_type)
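+
+/* Effect of the change above, sketched with illustrative names: a
+ * lock name installed earlier now survives a later subclass change.
+ *
+ *	lockdep_set_class_and_name(&foo->lock, &foo_key, "foo_lock");
+ *	lockdep_set_subclass(&foo->lock, SINGLE_DEPTH_NESTING);
+ *	// lockdep keeps reporting "foo_lock" instead of reverting
+ *	// to the stringified "&foo->lock"
+ */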
+diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
+index 39a7714605a796..d7cb1e5ecbda9d 100644
+--- a/include/linux/mmdebug.h
++++ b/include/linux/mmdebug.h
+@@ -46,7 +46,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
+ } \
+ } while (0)
+ #define VM_WARN_ON_ONCE_PAGE(cond, page) ({ \
+- static bool __section(".data.once") __warned; \
++ static bool __section(".data..once") __warned; \
+ int __ret_warn_once = !!(cond); \
+ \
+ if (unlikely(__ret_warn_once && !__warned)) { \
+@@ -66,7 +66,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
+ unlikely(__ret_warn); \
+ })
+ #define VM_WARN_ON_ONCE_FOLIO(cond, folio) ({ \
+- static bool __section(".data.once") __warned; \
++ static bool __section(".data..once") __warned; \
+ int __ret_warn_once = !!(cond); \
+ \
+ if (unlikely(__ret_warn_once && !__warned)) { \
+@@ -77,7 +77,7 @@ void vma_iter_dump_tree(const struct vma_iterator *vmi);
+ unlikely(__ret_warn_once); \
+ })
+ #define VM_WARN_ON_ONCE_MM(cond, mm) ({ \
+- static bool __section(".data.once") __warned; \
++ static bool __section(".data..once") __warned; \
+ int __ret_warn_once = !!(cond); \
+ \
+ if (unlikely(__ret_warn_once && !__warned)) { \
+diff --git a/include/linux/netpoll.h b/include/linux/netpoll.h
+index cd4e28db0cbd77..959a4daacea1f2 100644
+--- a/include/linux/netpoll.h
++++ b/include/linux/netpoll.h
+@@ -72,7 +72,7 @@ static inline void *netpoll_poll_lock(struct napi_struct *napi)
+ {
+ struct net_device *dev = napi->dev;
+
+- if (dev && dev->npinfo) {
++ if (dev && rcu_access_pointer(dev->npinfo)) {
+ int owner = smp_processor_id();
+
+ while (cmpxchg(&napi->poll_owner, -1, owner) != -1)
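+
+/* dev->npinfo is RCU-managed; the hunk above only tests it against
+ * NULL, which is exactly what rcu_access_pointer() permits without a
+ * read-side critical section. An actual dereference would instead
+ * need the full pattern (use() is a stand-in):
+ *
+ *	rcu_read_lock();
+ *	npinfo = rcu_dereference(dev->npinfo);
+ *	if (npinfo)
+ *		use(npinfo);
+ *	rcu_read_unlock();
+ */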
+diff --git a/include/linux/nfslocalio.h b/include/linux/nfslocalio.h
+index 3982fea799195e..9202f4b24343d7 100644
+--- a/include/linux/nfslocalio.h
++++ b/include/linux/nfslocalio.h
+@@ -55,7 +55,7 @@ struct nfsd_localio_operations {
+ const struct cred *,
+ const struct nfs_fh *,
+ const fmode_t);
+- void (*nfsd_file_put_local)(struct nfsd_file *);
++ struct net *(*nfsd_file_put_local)(struct nfsd_file *);
+ struct file *(*nfsd_file_file)(struct nfsd_file *);
+ } ____cacheline_aligned;
+
+@@ -66,7 +66,7 @@ struct nfsd_file *nfs_open_local_fh(nfs_uuid_t *,
+ struct rpc_clnt *, const struct cred *,
+ const struct nfs_fh *, const fmode_t);
+
+-static inline void nfs_to_nfsd_file_put_local(struct nfsd_file *localio)
++static inline void nfs_to_nfsd_net_put(struct net *net)
+ {
+ /*
+ * Once reference to nfsd_serv is dropped, NFSD could be
+@@ -74,10 +74,22 @@ static inline void nfs_to_nfsd_file_put_local(struct nfsd_file *localio)
+ * by always taking RCU.
+ */
+ rcu_read_lock();
+- nfs_to->nfsd_file_put_local(localio);
++ nfs_to->nfsd_serv_put(net);
+ rcu_read_unlock();
+ }
+
++static inline void nfs_to_nfsd_file_put_local(struct nfsd_file *localio)
++{
++ /*
++ * Must not hold RCU, otherwise nfsd_file_put() can easily trigger:
++ * "Voluntary context switch within RCU read-side critical section!"
++ * by scheduling deep in the underlying filesystem (e.g. XFS).
++ */
++ struct net *net = nfs_to->nfsd_file_put_local(localio);
++
++ nfs_to_nfsd_net_put(net);
++}
++
+ #else /* CONFIG_NFS_LOCALIO */
+ static inline void nfsd_localio_ops_init(void)
+ {
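+
+/* The split above encodes an ordering rule: nfsd_file_put_local() may
+ * sleep deep in the underlying filesystem, so it must run outside RCU
+ * and only hands back the net; the short, non-sleeping nfsd_serv put
+ * is then done under rcu_read_lock() to pin NFSD against shutdown.
+ * The resulting caller-side sequence:
+ *
+ *	struct net *net = nfs_to->nfsd_file_put_local(localio); // may sleep
+ *
+ *	rcu_read_lock();
+ *	nfs_to->nfsd_serv_put(net);
+ *	rcu_read_unlock();
+ */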
+diff --git a/include/linux/of_fdt.h b/include/linux/of_fdt.h
+index d69ad5bb1eb1e6..b8d6c0c208760a 100644
+--- a/include/linux/of_fdt.h
++++ b/include/linux/of_fdt.h
+@@ -31,6 +31,7 @@ extern void *of_fdt_unflatten_tree(const unsigned long *blob,
+ extern int __initdata dt_root_addr_cells;
+ extern int __initdata dt_root_size_cells;
+ extern void *initial_boot_params;
++extern phys_addr_t initial_boot_params_pa;
+
+ extern char __dtb_start[];
+ extern char __dtb_end[];
+@@ -70,8 +71,8 @@ extern u64 dt_mem_next_cell(int s, const __be32 **cellp);
+ /* Early flat tree scan hooks */
+ extern int early_init_dt_scan_root(void);
+
+-extern bool early_init_dt_scan(void *params);
+-extern bool early_init_dt_verify(void *params);
++extern bool early_init_dt_scan(void *dt_virt, phys_addr_t dt_phys);
++extern bool early_init_dt_verify(void *dt_virt, phys_addr_t dt_phys);
+ extern void early_init_dt_scan_nodes(void);
+
+ extern const char *of_flat_dt_get_machine_name(void);
+diff --git a/include/linux/once.h b/include/linux/once.h
+index bc714d414448a7..30346fcdc7995d 100644
+--- a/include/linux/once.h
++++ b/include/linux/once.h
+@@ -46,7 +46,7 @@ void __do_once_sleepable_done(bool *done, struct static_key_true *once_key,
+ #define DO_ONCE(func, ...) \
+ ({ \
+ bool ___ret = false; \
+- static bool __section(".data.once") ___done = false; \
++ static bool __section(".data..once") ___done = false; \
+ static DEFINE_STATIC_KEY_TRUE(___once_key); \
+ if (static_branch_unlikely(&___once_key)) { \
+ unsigned long ___flags; \
+@@ -64,7 +64,7 @@ void __do_once_sleepable_done(bool *done, struct static_key_true *once_key,
+ #define DO_ONCE_SLEEPABLE(func, ...) \
+ ({ \
+ bool ___ret = false; \
+- static bool __section(".data.once") ___done = false; \
++ static bool __section(".data..once") ___done = false; \
+ static DEFINE_STATIC_KEY_TRUE(___once_key); \
+ if (static_branch_unlikely(&___once_key)) { \
+ ___ret = __do_once_sleepable_start(&___done); \
+diff --git a/include/linux/once_lite.h b/include/linux/once_lite.h
+index b7bce4983638f8..27de7bc32a0610 100644
+--- a/include/linux/once_lite.h
++++ b/include/linux/once_lite.h
+@@ -12,7 +12,7 @@
+
+ #define __ONCE_LITE_IF(condition) \
+ ({ \
+- static bool __section(".data.once") __already_done; \
++ static bool __section(".data..once") __already_done; \
+ bool __ret_cond = !!(condition); \
+ bool __ret_once = false; \
+ \
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 58d84c59f3ddae..48e5c03df1dd83 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -401,7 +401,7 @@ static inline int debug_lockdep_rcu_enabled(void)
+ */
+ #define RCU_LOCKDEP_WARN(c, s) \
+ do { \
+- static bool __section(".data.unlikely") __warned; \
++ static bool __section(".data..unlikely") __warned; \
+ if (debug_lockdep_rcu_enabled() && (c) && \
+ debug_lockdep_rcu_enabled() && !__warned) { \
+ __warned = true; \
+diff --git a/include/linux/rwlock_rt.h b/include/linux/rwlock_rt.h
+index 8544ff05e594d7..7d81fc6918ee86 100644
+--- a/include/linux/rwlock_rt.h
++++ b/include/linux/rwlock_rt.h
+@@ -24,13 +24,13 @@ do { \
+ __rt_rwlock_init(rwl, #rwl, &__key); \
+ } while (0)
+
+-extern void rt_read_lock(rwlock_t *rwlock);
++extern void rt_read_lock(rwlock_t *rwlock) __acquires(rwlock);
+ extern int rt_read_trylock(rwlock_t *rwlock);
+-extern void rt_read_unlock(rwlock_t *rwlock);
+-extern void rt_write_lock(rwlock_t *rwlock);
+-extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass);
++extern void rt_read_unlock(rwlock_t *rwlock) __releases(rwlock);
++extern void rt_write_lock(rwlock_t *rwlock) __acquires(rwlock);
++extern void rt_write_lock_nested(rwlock_t *rwlock, int subclass) __acquires(rwlock);
+ extern int rt_write_trylock(rwlock_t *rwlock);
+-extern void rt_write_unlock(rwlock_t *rwlock);
++extern void rt_write_unlock(rwlock_t *rwlock) __releases(rwlock);
+
+ static __always_inline void read_lock(rwlock_t *rwlock)
+ {
+diff --git a/include/linux/sched/ext.h b/include/linux/sched/ext.h
+index 1ddbde64a31b4a..2799e7284fff72 100644
+--- a/include/linux/sched/ext.h
++++ b/include/linux/sched/ext.h
+@@ -199,7 +199,6 @@ struct sched_ext_entity {
+ #ifdef CONFIG_EXT_GROUP_SCHED
+ struct cgroup *cgrp_moving_from;
+ #endif
+- /* must be the last field, see init_scx_entity() */
+ struct list_head tasks_node;
+ };
+
+diff --git a/include/linux/seqlock.h b/include/linux/seqlock.h
+index fffeb754880fca..5298765d6ca482 100644
+--- a/include/linux/seqlock.h
++++ b/include/linux/seqlock.h
+@@ -621,6 +621,23 @@ static __always_inline unsigned raw_read_seqcount_latch(const seqcount_latch_t *
+ return READ_ONCE(s->seqcount.sequence);
+ }
+
++/**
++ * read_seqcount_latch() - pick even/odd latch data copy
++ * @s: Pointer to seqcount_latch_t
++ *
++ * See write_seqcount_latch() for details and a full reader/writer usage
++ * example.
++ *
++ * Return: sequence counter raw value. Use the lowest bit as an index for
++ * picking which data copy to read. The full counter must then be checked
++ * with read_seqcount_latch_retry().
++ */
++static __always_inline unsigned read_seqcount_latch(const seqcount_latch_t *s)
++{
++ kcsan_atomic_next(KCSAN_SEQLOCK_REGION_MAX);
++ return raw_read_seqcount_latch(s);
++}
++
+ /**
+ * raw_read_seqcount_latch_retry() - end a seqcount_latch_t read section
+ * @s: Pointer to seqcount_latch_t
+@@ -635,9 +652,34 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+ return unlikely(READ_ONCE(s->seqcount.sequence) != start);
+ }
+
++/**
++ * read_seqcount_latch_retry() - end a seqcount_latch_t read section
++ * @s: Pointer to seqcount_latch_t
++ * @start: count, from read_seqcount_latch()
++ *
++ * Return: true if a read section retry is required, else false
++ */
++static __always_inline int
++read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
++{
++ kcsan_atomic_next(0);
++ return raw_read_seqcount_latch_retry(s, start);
++}
++
+ /**
+ * raw_write_seqcount_latch() - redirect latch readers to even/odd copy
+ * @s: Pointer to seqcount_latch_t
++ */
++static __always_inline void raw_write_seqcount_latch(seqcount_latch_t *s)
++{
++ smp_wmb(); /* prior stores before incrementing "sequence" */
++ s->seqcount.sequence++;
++ smp_wmb(); /* increment "sequence" before following stores */
++}
++
++/**
++ * write_seqcount_latch_begin() - redirect latch readers to odd copy
++ * @s: Pointer to seqcount_latch_t
+ *
+ * The latch technique is a multiversion concurrency control method that allows
+ * queries during non-atomic modifications. If you can guarantee queries never
+@@ -665,17 +707,11 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+ *
+ * void latch_modify(struct latch_struct *latch, ...)
+ * {
+- * smp_wmb(); // Ensure that the last data[1] update is visible
+- * latch->seq.sequence++;
+- * smp_wmb(); // Ensure that the seqcount update is visible
+- *
++ * write_seqcount_latch_begin(&latch->seq);
+ * modify(latch->data[0], ...);
+- *
+- * smp_wmb(); // Ensure that the data[0] update is visible
+- * latch->seq.sequence++;
+- * smp_wmb(); // Ensure that the seqcount update is visible
+- *
++ * write_seqcount_latch(&latch->seq);
+ * modify(latch->data[1], ...);
++ * write_seqcount_latch_end(&latch->seq);
+ * }
+ *
+ * The query will have a form like::
+@@ -686,13 +722,13 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+ * unsigned seq, idx;
+ *
+ * do {
+- * seq = raw_read_seqcount_latch(&latch->seq);
++ * seq = read_seqcount_latch(&latch->seq);
+ *
+ * idx = seq & 0x01;
+ * entry = data_query(latch->data[idx], ...);
+ *
+ * // This includes needed smp_rmb()
+- * } while (raw_read_seqcount_latch_retry(&latch->seq, seq));
++ * } while (read_seqcount_latch_retry(&latch->seq, seq));
+ *
+ * return entry;
+ * }
+@@ -716,11 +752,31 @@ raw_read_seqcount_latch_retry(const seqcount_latch_t *s, unsigned start)
+ * When data is a dynamic data structure; one should use regular RCU
+ * patterns to manage the lifetimes of the objects within.
+ */
+-static inline void raw_write_seqcount_latch(seqcount_latch_t *s)
++static __always_inline void write_seqcount_latch_begin(seqcount_latch_t *s)
+ {
+- smp_wmb(); /* prior stores before incrementing "sequence" */
+- s->seqcount.sequence++;
+- smp_wmb(); /* increment "sequence" before following stores */
++ kcsan_nestable_atomic_begin();
++ raw_write_seqcount_latch(s);
++}
++
++/**
++ * write_seqcount_latch() - redirect latch readers to even copy
++ * @s: Pointer to seqcount_latch_t
++ */
++static __always_inline void write_seqcount_latch(seqcount_latch_t *s)
++{
++ raw_write_seqcount_latch(s);
++}
++
++/**
++ * write_seqcount_latch_end() - end a seqcount_latch_t write section
++ * @s: Pointer to seqcount_latch_t
++ *
++ * Marks the end of a seqcount_latch_t writer section, after all copies of the
++ * latch-protected data have been updated.
++ */
++static __always_inline void write_seqcount_latch_end(seqcount_latch_t *s)
++{
++ kcsan_nestable_atomic_end();
+ }
+
+ #define __SEQLOCK_UNLOCKED(lockname) \
+@@ -754,11 +810,7 @@ static inline void raw_write_seqcount_latch(seqcount_latch_t *s)
+ */
+ static inline unsigned read_seqbegin(const seqlock_t *sl)
+ {
+- unsigned ret = read_seqcount_begin(&sl->seqcount);
+-
+- kcsan_atomic_next(0); /* non-raw usage, assume closing read_seqretry() */
+- kcsan_flat_atomic_begin();
+- return ret;
++ return read_seqcount_begin(&sl->seqcount);
+ }
+
+ /**
+@@ -774,12 +826,6 @@ static inline unsigned read_seqbegin(const seqlock_t *sl)
+ */
+ static inline unsigned read_seqretry(const seqlock_t *sl, unsigned start)
+ {
+- /*
+- * Assume not nested: read_seqretry() may be called multiple times when
+- * completing read critical section.
+- */
+- kcsan_flat_atomic_end();
+-
+ return read_seqcount_retry(&sl->seqcount, start);
+ }
+
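+
+/* A complete minimal latch user built on the non-raw API added above;
+ * the writer flips readers between the two copies so a torn read is
+ * always retried (the latch variable and helpers are illustrative):
+ *
+ *	static struct {
+ *		seqcount_latch_t seq;
+ *		u64 data[2];
+ *	} latch;
+ *
+ *	static void latch_write(u64 val)
+ *	{
+ *		write_seqcount_latch_begin(&latch.seq);	// readers -> data[1]
+ *		latch.data[0] = val;
+ *		write_seqcount_latch(&latch.seq);	// readers -> data[0]
+ *		latch.data[1] = val;
+ *		write_seqcount_latch_end(&latch.seq);
+ *	}
+ *
+ *	static u64 latch_read(void)
+ *	{
+ *		unsigned int seq;
+ *		u64 val;
+ *
+ *		do {
+ *			seq = read_seqcount_latch(&latch.seq);
+ *			val = latch.data[seq & 1];
+ *		} while (read_seqcount_latch_retry(&latch.seq, seq));
+ *
+ *		return val;
+ *	}
+ */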
+diff --git a/include/linux/spinlock_rt.h b/include/linux/spinlock_rt.h
+index 61c49b16f69ab0..6175cd682ca0d8 100644
+--- a/include/linux/spinlock_rt.h
++++ b/include/linux/spinlock_rt.h
+@@ -16,26 +16,25 @@ static inline void __rt_spin_lock_init(spinlock_t *lock, const char *name,
+ }
+ #endif
+
+-#define spin_lock_init(slock) \
++#define __spin_lock_init(slock, name, key, percpu) \
+ do { \
+- static struct lock_class_key __key; \
+- \
+ rt_mutex_base_init(&(slock)->lock); \
+- __rt_spin_lock_init(slock, #slock, &__key, false); \
++ __rt_spin_lock_init(slock, name, key, percpu); \
+ } while (0)
+
+-#define local_spin_lock_init(slock) \
++#define _spin_lock_init(slock, percpu) \
+ do { \
+ static struct lock_class_key __key; \
+- \
+- rt_mutex_base_init(&(slock)->lock); \
+- __rt_spin_lock_init(slock, #slock, &__key, true); \
++ __spin_lock_init(slock, #slock, &__key, percpu); \
+ } while (0)
+
+-extern void rt_spin_lock(spinlock_t *lock);
+-extern void rt_spin_lock_nested(spinlock_t *lock, int subclass);
+-extern void rt_spin_lock_nest_lock(spinlock_t *lock, struct lockdep_map *nest_lock);
+-extern void rt_spin_unlock(spinlock_t *lock);
++#define spin_lock_init(slock) _spin_lock_init(slock, false)
++#define local_spin_lock_init(slock) _spin_lock_init(slock, true)
++
++extern void rt_spin_lock(spinlock_t *lock) __acquires(lock);
++extern void rt_spin_lock_nested(spinlock_t *lock, int subclass) __acquires(lock);
++extern void rt_spin_lock_nest_lock(spinlock_t *lock, struct lockdep_map *nest_lock) __acquires(lock);
++extern void rt_spin_unlock(spinlock_t *lock) __releases(lock);
+ extern void rt_spin_lock_unlock(spinlock_t *lock);
+ extern int rt_spin_trylock_bh(spinlock_t *lock);
+ extern int rt_spin_trylock(spinlock_t *lock);
+diff --git a/include/media/v4l2-dv-timings.h b/include/media/v4l2-dv-timings.h
+index 8fa963326bf6a2..c64096b5c78215 100644
+--- a/include/media/v4l2-dv-timings.h
++++ b/include/media/v4l2-dv-timings.h
+@@ -146,15 +146,18 @@ void v4l2_print_dv_timings(const char *dev_prefix, const char *prefix,
+ * @polarities: the horizontal and vertical polarities (same as struct
+ * v4l2_bt_timings polarities).
+ * @interlaced: if this flag is true, it indicates interlaced format
++ * @cap: the v4l2_dv_timings_cap capabilities.
+ * @fmt: the resulting timings.
+ *
+ * This function will attempt to detect if the given values correspond to a
+ * valid CVT format. If so, then it will return true, and fmt will be filled
+ * in with the found CVT timings.
+ */
+-bool v4l2_detect_cvt(unsigned frame_height, unsigned hfreq, unsigned vsync,
+- unsigned active_width, u32 polarities, bool interlaced,
+- struct v4l2_dv_timings *fmt);
++bool v4l2_detect_cvt(unsigned int frame_height, unsigned int hfreq,
++ unsigned int vsync, unsigned int active_width,
++ u32 polarities, bool interlaced,
++ const struct v4l2_dv_timings_cap *cap,
++ struct v4l2_dv_timings *fmt);
+
+ /**
+ * v4l2_detect_gtf - detect if the given timings follow the GTF standard
+@@ -170,15 +173,18 @@ bool v4l2_detect_cvt(unsigned frame_height, unsigned hfreq, unsigned vsync,
+ * image height, so it has to be passed explicitly. Usually
+ * the native screen aspect ratio is used for this. If it
+ * is not filled in correctly, then 16:9 will be assumed.
++ * @cap: the v4l2_dv_timings_cap capabilities.
+ * @fmt: the resulting timings.
+ *
+ * This function will attempt to detect if the given values correspond to a
+ * valid GTF format. If so, then it will return true, and fmt will be filled
+ * in with the found GTF timings.
+ */
+-bool v4l2_detect_gtf(unsigned frame_height, unsigned hfreq, unsigned vsync,
+- u32 polarities, bool interlaced, struct v4l2_fract aspect,
+- struct v4l2_dv_timings *fmt);
++bool v4l2_detect_gtf(unsigned int frame_height, unsigned int hfreq,
++ unsigned int vsync, u32 polarities, bool interlaced,
++ struct v4l2_fract aspect,
++ const struct v4l2_dv_timings_cap *cap,
++ struct v4l2_dv_timings *fmt);
+
+ /**
+ * v4l2_calc_aspect_ratio - calculate the aspect ratio based on bytes
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index bab1e3d7452a2c..a1864cff616aee 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -1,7 +1,7 @@
+ /*
+ BlueZ - Bluetooth protocol stack for Linux
+ Copyright (C) 2000-2001 Qualcomm Incorporated
+- Copyright 2023 NXP
++ Copyright 2023-2024 NXP
+
+ Written 2000,2001 by Maxim Krasnyansky <maxk@qualcomm.com>
+
+@@ -29,6 +29,7 @@
+ #define HCI_MAX_ACL_SIZE 1024
+ #define HCI_MAX_SCO_SIZE 255
+ #define HCI_MAX_ISO_SIZE 251
++#define HCI_MAX_ISO_BIS 31
+ #define HCI_MAX_EVENT_SIZE 260
+ #define HCI_MAX_FRAME_SIZE (HCI_MAX_ACL_SIZE + 4)
+
+@@ -683,6 +684,7 @@ enum {
+ #define HCI_RSSI_INVALID 127
+
+ #define HCI_SYNC_HANDLE_INVALID 0xffff
++#define HCI_SID_INVALID 0xff
+
+ #define HCI_ROLE_MASTER 0x00
+ #define HCI_ROLE_SLAVE 0x01
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 88265d37aa72e3..4c185a08c3a3af 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -668,6 +668,7 @@ struct hci_conn {
+ __u8 adv_instance;
+ __u16 handle;
+ __u16 sync_handle;
++ __u8 sid;
+ __u16 state;
+ __u16 mtu;
+ __u8 mode;
+@@ -710,6 +711,9 @@ struct hci_conn {
+ __s8 tx_power;
+ __s8 max_tx_power;
+ struct bt_iso_qos iso_qos;
++ __u8 num_bis;
++ __u8 bis[HCI_MAX_ISO_BIS];
++
+ unsigned long flags;
+
+ enum conn_reasons conn_reason;
+@@ -945,8 +949,10 @@ enum {
+ HCI_CONN_PER_ADV,
+ HCI_CONN_BIG_CREATED,
+ HCI_CONN_CREATE_CIS,
++ HCI_CONN_CREATE_BIG_SYNC,
+ HCI_CONN_BIG_SYNC,
+ HCI_CONN_BIG_SYNC_FAILED,
++ HCI_CONN_CREATE_PA_SYNC,
+ HCI_CONN_PA_SYNC,
+ HCI_CONN_PA_SYNC_FAILED,
+ };
+@@ -1099,6 +1105,30 @@ static inline struct hci_conn *hci_conn_hash_lookup_bis(struct hci_dev *hdev,
+ return NULL;
+ }
+
++static inline struct hci_conn *hci_conn_hash_lookup_sid(struct hci_dev *hdev,
++ __u8 sid,
++ bdaddr_t *dst,
++ __u8 dst_type)
++{
++ struct hci_conn_hash *h = &hdev->conn_hash;
++ struct hci_conn *c;
++
++ rcu_read_lock();
++
++ list_for_each_entry_rcu(c, &h->list, list) {
++ if (c->type != ISO_LINK || bacmp(&c->dst, dst) ||
++ c->dst_type != dst_type || c->sid != sid)
++ continue;
++
++ rcu_read_unlock();
++ return c;
++ }
++
++ rcu_read_unlock();
++
++ return NULL;
++}
++
+ static inline struct hci_conn *
+ hci_conn_hash_lookup_per_adv_bis(struct hci_dev *hdev,
+ bdaddr_t *ba,
+@@ -1269,6 +1299,30 @@ static inline struct hci_conn *hci_conn_hash_lookup_big(struct hci_dev *hdev,
+ return NULL;
+ }
+
++static inline struct hci_conn *
++hci_conn_hash_lookup_big_sync_pend(struct hci_dev *hdev,
++ __u8 handle, __u8 num_bis)
++{
++ struct hci_conn_hash *h = &hdev->conn_hash;
++ struct hci_conn *c;
++
++ rcu_read_lock();
++
++ list_for_each_entry_rcu(c, &h->list, list) {
++ if (c->type != ISO_LINK)
++ continue;
++
++ if (handle == c->iso_qos.bcast.big && num_bis == c->num_bis) {
++ rcu_read_unlock();
++ return c;
++ }
++ }
++
++ rcu_read_unlock();
++
++ return NULL;
++}
++
+ static inline struct hci_conn *
+ hci_conn_hash_lookup_big_state(struct hci_dev *hdev, __u8 handle, __u16 state)
+ {
+@@ -1328,6 +1382,13 @@ hci_conn_hash_lookup_pa_sync_handle(struct hci_dev *hdev, __u16 sync_handle)
+ if (c->type != ISO_LINK)
+ continue;
+
++ /* Ignore the listen hcon; we are looking
++ * for the child hcon that was created as
++ * a result of the PA sync established event.
++ */
++ if (c->state == BT_LISTEN)
++ continue;
++
+ if (c->sync_handle == sync_handle) {
+ rcu_read_unlock();
+ return c;
+@@ -1445,6 +1506,8 @@ bool hci_setup_sync(struct hci_conn *conn, __u16 handle);
+ void hci_sco_setup(struct hci_conn *conn, __u8 status);
+ bool hci_iso_setup_path(struct hci_conn *conn);
+ int hci_le_create_cis_pending(struct hci_dev *hdev);
++int hci_pa_create_sync_pending(struct hci_dev *hdev);
++int hci_le_big_create_sync_pending(struct hci_dev *hdev);
+ int hci_conn_check_create_cis(struct hci_conn *conn);
+
+ struct hci_conn *hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t *dst,
+diff --git a/include/net/net_debug.h b/include/net/net_debug.h
+index 1e74684cbbdbcd..4a79204c8d306e 100644
+--- a/include/net/net_debug.h
++++ b/include/net/net_debug.h
+@@ -27,7 +27,7 @@ void netdev_info(const struct net_device *dev, const char *format, ...);
+
+ #define netdev_level_once(level, dev, fmt, ...) \
+ do { \
+- static bool __section(".data.once") __print_once; \
++ static bool __section(".data..once") __print_once; \
+ \
+ if (!__print_once) { \
+ __print_once = true; \
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index aa8ede439905cb..67551133b5228e 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -2948,6 +2948,14 @@ int rdma_user_mmap_entry_insert_range(struct ib_ucontext *ucontext,
+ size_t length, u32 min_pgoff,
+ u32 max_pgoff);
+
++#if IS_ENABLED(CONFIG_INFINIBAND_USER_ACCESS)
++void rdma_user_mmap_disassociate(struct ib_device *device);
++#else
++static inline void rdma_user_mmap_disassociate(struct ib_device *device)
++{
++}
++#endif
++
+ static inline int
+ rdma_user_mmap_entry_insert_exact(struct ib_ucontext *ucontext,
+ struct rdma_user_mmap_entry *entry,
+@@ -4726,6 +4734,9 @@ ib_get_vector_affinity(struct ib_device *device, int comp_vector)
+ * @device: the rdma device
+ */
+ void rdma_roce_rescan_device(struct ib_device *ibdev);
++void rdma_roce_rescan_port(struct ib_device *ib_dev, u32 port);
++void roce_del_all_netdev_gids(struct ib_device *ib_dev,
++ u32 port, struct net_device *ndev);
+
+ struct ib_ucontext *ib_uverbs_get_ucontext_file(struct ib_uverbs_file *ufile);
+
+diff --git a/include/uapi/linux/rtnetlink.h b/include/uapi/linux/rtnetlink.h
+index 3b687d20c9ed34..db7254d52d9355 100644
+--- a/include/uapi/linux/rtnetlink.h
++++ b/include/uapi/linux/rtnetlink.h
+@@ -174,7 +174,7 @@ enum {
+ #define RTM_GETLINKPROP RTM_GETLINKPROP
+
+ RTM_NEWVLAN = 112,
+-#define RTM_NEWNVLAN RTM_NEWVLAN
++#define RTM_NEWVLAN RTM_NEWVLAN
+ RTM_DELVLAN,
+ #define RTM_DELVLAN RTM_DELVLAN
+ RTM_GETVLAN,
+diff --git a/init/Kconfig b/init/Kconfig
+index c521e1421ad4ab..7256fa127530ff 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -120,6 +120,15 @@ config CC_HAS_ASM_INLINE
+ config CC_HAS_NO_PROFILE_FN_ATTR
+ def_bool $(success,echo '__attribute__((no_profile_instrument_function)) int x();' | $(CC) -x c - -c -o /dev/null -Werror)
+
++config CC_HAS_COUNTED_BY
++ # TODO: when gcc 15 is released remove the build test and add
++ # a gcc version check
++ def_bool $(success,echo 'struct flex { int count; int array[] __attribute__((__counted_by__(count))); };' | $(CC) $(CLANG_FLAGS) -x c - -c -o /dev/null -Werror)
++ # clang needs to be at least 19.1.3 to avoid __bdos miscalculations
++ # https://github.com/llvm/llvm-project/pull/110497
++ # https://github.com/llvm/llvm-project/pull/112636
++ depends on !(CC_IS_CLANG && CLANG_VERSION < 190103)
++
+ config PAHOLE_VERSION
+ int
+ default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
+diff --git a/init/initramfs.c b/init/initramfs.c
+index bc911e466d5bbb..b2f7583bb1f5c2 100644
+--- a/init/initramfs.c
++++ b/init/initramfs.c
+@@ -360,6 +360,15 @@ static int __init do_name(void)
+ {
+ state = SkipIt;
+ next_state = Reset;
++
++ /* name_len > 0 && name_len <= PATH_MAX checked in do_header */
++ if (collected[name_len - 1] != '\0') {
++ pr_err("initramfs name without nulterm: %.*s\n",
++ (int)name_len, collected);
++ error("malformed archive");
++ return 1;
++ }
++
+ if (strcmp(collected, "TRAILER!!!") == 0) {
+ free_hash();
+ return 0;
+@@ -424,6 +433,12 @@ static int __init do_copy(void)
+
+ static int __init do_symlink(void)
+ {
++ if (collected[name_len - 1] != '\0') {
++ pr_err("initramfs symlink without nulterm: %.*s\n",
++ (int)name_len, collected);
++ error("malformed archive");
++ return 1;
++ }
+ collected[N_ALIGN(name_len) + body_len] = '\0';
+ clean_path(collected, 0);
+ init_symlink(collected + N_ALIGN(name_len), collected);
+diff --git a/io_uring/memmap.c b/io_uring/memmap.c
+index a0f32a255fd1e1..6d151e46f3d69e 100644
+--- a/io_uring/memmap.c
++++ b/io_uring/memmap.c
+@@ -72,6 +72,8 @@ void *io_pages_map(struct page ***out_pages, unsigned short *npages,
+ ret = io_mem_alloc_compound(pages, nr_pages, size, gfp);
+ if (!IS_ERR(ret))
+ goto done;
++ if (nr_pages == 1)
++ goto fail;
+
+ ret = io_mem_alloc_single(pages, nr_pages, size, gfp);
+ if (!IS_ERR(ret)) {
+@@ -80,7 +82,7 @@ void *io_pages_map(struct page ***out_pages, unsigned short *npages,
+ *npages = nr_pages;
+ return ret;
+ }
+-
++fail:
+ kvfree(pages);
+ *out_pages = NULL;
+ *npages = 0;
+@@ -135,7 +137,12 @@ struct page **io_pin_pages(unsigned long uaddr, unsigned long len, int *npages)
+ struct page **pages;
+ int ret;
+
+- end = (uaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
++ if (check_add_overflow(uaddr, len, &end))
++ return ERR_PTR(-EOVERFLOW);
++ if (check_add_overflow(end, PAGE_SIZE - 1, &end))
++ return ERR_PTR(-EOVERFLOW);
++
++ end = end >> PAGE_SHIFT;
+ start = uaddr >> PAGE_SHIFT;
+ nr_pages = end - start;
+ if (WARN_ON_ONCE(!nr_pages))
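+
+/* The replaced "(uaddr + len + PAGE_SIZE - 1) >> PAGE_SHIFT" silently
+ * wraps for hostile uaddr/len; check_add_overflow() (a wrapper around
+ * __builtin_add_overflow) turns the wrap into -EOVERFLOW. The same
+ * computation, sketched:
+ *
+ *	if (check_add_overflow(uaddr, len, &end) ||
+ *	    check_add_overflow(end, PAGE_SIZE - 1, &end))
+ *		return ERR_PTR(-EOVERFLOW);	// would have wrapped
+ *
+ *	nr_pages = (end >> PAGE_SHIFT) - (uaddr >> PAGE_SHIFT);
+ */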
+diff --git a/ipc/namespace.c b/ipc/namespace.c
+index 6ecc30effd3ec6..4df91ceeeafe9f 100644
+--- a/ipc/namespace.c
++++ b/ipc/namespace.c
+@@ -83,13 +83,15 @@ static struct ipc_namespace *create_ipc_ns(struct user_namespace *user_ns,
+
+ err = msg_init_ns(ns);
+ if (err)
+- goto fail_put;
++ goto fail_ipc;
+
+ sem_init_ns(ns);
+ shm_init_ns(ns);
+
+ return ns;
+
++fail_ipc:
++ retire_ipc_sysctls(ns);
+ fail_mq:
+ retire_mq_sysctls(ns);
+
+diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
+index fda3dd2ee9844f..b3a2ce1e5e22ec 100644
+--- a/kernel/bpf/bpf_struct_ops.c
++++ b/kernel/bpf/bpf_struct_ops.c
+@@ -32,7 +32,9 @@ struct bpf_struct_ops_map {
+ * (in kvalue.data).
+ */
+ struct bpf_link **links;
+- u32 links_cnt;
++ /* ksyms for bpf trampolines */
++ struct bpf_ksym **ksyms;
++ u32 funcs_cnt;
+ u32 image_pages_cnt;
+ /* image_pages is an array of pages that has all the trampolines
+ * that stores the func args before calling the bpf_prog.
+@@ -481,11 +483,11 @@ static void bpf_struct_ops_map_put_progs(struct bpf_struct_ops_map *st_map)
+ {
+ u32 i;
+
+- for (i = 0; i < st_map->links_cnt; i++) {
+- if (st_map->links[i]) {
+- bpf_link_put(st_map->links[i]);
+- st_map->links[i] = NULL;
+- }
++ for (i = 0; i < st_map->funcs_cnt; i++) {
++ if (!st_map->links[i])
++ break;
++ bpf_link_put(st_map->links[i]);
++ st_map->links[i] = NULL;
+ }
+ }
+
+@@ -586,6 +588,49 @@ int bpf_struct_ops_prepare_trampoline(struct bpf_tramp_links *tlinks,
+ return 0;
+ }
+
++static void bpf_struct_ops_ksym_init(const char *tname, const char *mname,
++ void *image, unsigned int size,
++ struct bpf_ksym *ksym)
++{
++ snprintf(ksym->name, KSYM_NAME_LEN, "bpf__%s_%s", tname, mname);
++ INIT_LIST_HEAD_RCU(&ksym->lnode);
++ bpf_image_ksym_init(image, size, ksym);
++}
++
++static void bpf_struct_ops_map_add_ksyms(struct bpf_struct_ops_map *st_map)
++{
++ u32 i;
++
++ for (i = 0; i < st_map->funcs_cnt; i++) {
++ if (!st_map->ksyms[i])
++ break;
++ bpf_image_ksym_add(st_map->ksyms[i]);
++ }
++}
++
++static void bpf_struct_ops_map_del_ksyms(struct bpf_struct_ops_map *st_map)
++{
++ u32 i;
++
++ for (i = 0; i < st_map->funcs_cnt; i++) {
++ if (!st_map->ksyms[i])
++ break;
++ bpf_image_ksym_del(st_map->ksyms[i]);
++ }
++}
++
++static void bpf_struct_ops_map_free_ksyms(struct bpf_struct_ops_map *st_map)
++{
++ u32 i;
++
++ for (i = 0; i < st_map->funcs_cnt; i++) {
++ if (!st_map->ksyms[i])
++ break;
++ kfree(st_map->ksyms[i]);
++ st_map->ksyms[i] = NULL;
++ }
++}
++
+ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ void *value, u64 flags)
+ {
+@@ -601,6 +646,9 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ int prog_fd, err;
+ u32 i, trampoline_start, image_off = 0;
+ void *cur_image = NULL, *image = NULL;
++ struct bpf_link **plink;
++ struct bpf_ksym **pksym;
++ const char *tname, *mname;
+
+ if (flags)
+ return -EINVAL;
+@@ -639,14 +687,19 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ udata = &uvalue->data;
+ kdata = &kvalue->data;
+
++ plink = st_map->links;
++ pksym = st_map->ksyms;
++ tname = btf_name_by_offset(st_map->btf, t->name_off);
+ module_type = btf_type_by_id(btf_vmlinux, st_ops_ids[IDX_MODULE_ID]);
+ for_each_member(i, t, member) {
+ const struct btf_type *mtype, *ptype;
+ struct bpf_prog *prog;
+ struct bpf_tramp_link *link;
++ struct bpf_ksym *ksym;
+ u32 moff;
+
+ moff = __btf_member_bit_offset(t, member) / 8;
++ mname = btf_name_by_offset(st_map->btf, member->name_off);
+ ptype = btf_type_resolve_ptr(st_map->btf, member->type, NULL);
+ if (ptype == module_type) {
+ if (*(void **)(udata + moff))
+@@ -714,7 +767,14 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ }
+ bpf_link_init(&link->link, BPF_LINK_TYPE_STRUCT_OPS,
+ &bpf_struct_ops_link_lops, prog);
+- st_map->links[i] = &link->link;
++ *plink++ = &link->link;
++
++ ksym = kzalloc(sizeof(*ksym), GFP_USER);
++ if (!ksym) {
++ err = -ENOMEM;
++ goto reset_unlock;
++ }
++ *pksym++ = ksym;
+
+ trampoline_start = image_off;
+ err = bpf_struct_ops_prepare_trampoline(tlinks, link,
+@@ -735,6 +795,12 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+
+ /* put prog_id to udata */
+ *(unsigned long *)(udata + moff) = prog->aux->id;
++
++ /* init ksym for this trampoline */
++ bpf_struct_ops_ksym_init(tname, mname,
++ image + trampoline_start,
++ image_off - trampoline_start,
++ ksym);
+ }
+
+ if (st_ops->validate) {
+@@ -783,6 +849,7 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ */
+
+ reset_unlock:
++ bpf_struct_ops_map_free_ksyms(st_map);
+ bpf_struct_ops_map_free_image(st_map);
+ bpf_struct_ops_map_put_progs(st_map);
+ memset(uvalue, 0, map->value_size);
+@@ -790,6 +857,8 @@ static long bpf_struct_ops_map_update_elem(struct bpf_map *map, void *key,
+ unlock:
+ kfree(tlinks);
+ mutex_unlock(&st_map->lock);
++ if (!err)
++ bpf_struct_ops_map_add_ksyms(st_map);
+ return err;
+ }
+
+@@ -849,7 +918,10 @@ static void __bpf_struct_ops_map_free(struct bpf_map *map)
+
+ if (st_map->links)
+ bpf_struct_ops_map_put_progs(st_map);
++ if (st_map->ksyms)
++ bpf_struct_ops_map_free_ksyms(st_map);
+ bpf_map_area_free(st_map->links);
++ bpf_map_area_free(st_map->ksyms);
+ bpf_struct_ops_map_free_image(st_map);
+ bpf_map_area_free(st_map->uvalue);
+ bpf_map_area_free(st_map);
+@@ -866,6 +938,8 @@ static void bpf_struct_ops_map_free(struct bpf_map *map)
+ if (btf_is_module(st_map->btf))
+ module_put(st_map->st_ops_desc->st_ops->owner);
+
++ bpf_struct_ops_map_del_ksyms(st_map);
++
+ /* The struct_ops's function may switch to another struct_ops.
+ *
+ * For example, bpf_tcp_cc_x->init() may switch to
+@@ -895,6 +969,19 @@ static int bpf_struct_ops_map_alloc_check(union bpf_attr *attr)
+ return 0;
+ }
+
++static u32 count_func_ptrs(const struct btf *btf, const struct btf_type *t)
++{
++ int i;
++ u32 count;
++ const struct btf_member *member;
++
++ count = 0;
++ for_each_member(i, t, member)
++ if (btf_type_resolve_func_ptr(btf, member->type, NULL))
++ count++;
++ return count;
++}
++
+ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
+ {
+ const struct bpf_struct_ops_desc *st_ops_desc;
+@@ -961,11 +1048,15 @@ static struct bpf_map *bpf_struct_ops_map_alloc(union bpf_attr *attr)
+ map = &st_map->map;
+
+ st_map->uvalue = bpf_map_area_alloc(vt->size, NUMA_NO_NODE);
+- st_map->links_cnt = btf_type_vlen(t);
++ st_map->funcs_cnt = count_func_ptrs(btf, t);
+ st_map->links =
+- bpf_map_area_alloc(st_map->links_cnt * sizeof(struct bpf_links *),
++ bpf_map_area_alloc(st_map->funcs_cnt * sizeof(struct bpf_link *),
++ NUMA_NO_NODE);
++
++ st_map->ksyms =
++ bpf_map_area_alloc(st_map->funcs_cnt * sizeof(struct bpf_ksym *),
+ NUMA_NO_NODE);
+- if (!st_map->uvalue || !st_map->links) {
++ if (!st_map->uvalue || !st_map->links || !st_map->ksyms) {
+ ret = -ENOMEM;
+ goto errout_free;
+ }
+@@ -994,7 +1085,8 @@ static u64 bpf_struct_ops_map_mem_usage(const struct bpf_map *map)
+ usage = sizeof(*st_map) +
+ vt->size - sizeof(struct bpf_struct_ops_value);
+ usage += vt->size;
+- usage += btf_type_vlen(vt) * sizeof(struct bpf_links *);
++ usage += st_map->funcs_cnt * sizeof(struct bpf_link *);
++ usage += st_map->funcs_cnt * sizeof(struct bpf_ksym *);
+ usage += PAGE_SIZE;
+ return usage;
+ }
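+
+/* With the hunks above, each struct_ops trampoline gains its own
+ * kallsyms entry named "bpf__<type>_<member>" (see the snprintf() in
+ * bpf_struct_ops_ksym_init()). For instance, a tcp_congestion_ops map
+ * implementing ssthresh would be visible as:
+ *
+ *	bpf__tcp_congestion_ops_ssthresh
+ *
+ * making per-member trampolines attributable in perf and stack
+ * traces. */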
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 5cd1c7a23848cc..346826e3c933da 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -6564,7 +6564,10 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ if (prog_args_trusted(prog))
+ info->reg_type |= PTR_TRUSTED;
+
+- if (btf_param_match_suffix(btf, &args[arg], "__nullable"))
++ /* Raw tracepoint arguments always get marked as maybe NULL */
++ if (bpf_prog_is_raw_tp(prog))
++ info->reg_type |= PTR_MAYBE_NULL;
++ else if (btf_param_match_suffix(btf, &args[arg], "__nullable"))
+ info->reg_type |= PTR_MAYBE_NULL;
+
+ if (tgt_prog) {
+diff --git a/kernel/bpf/dispatcher.c b/kernel/bpf/dispatcher.c
+index 70fb82bf16370e..b77db7413f8c70 100644
+--- a/kernel/bpf/dispatcher.c
++++ b/kernel/bpf/dispatcher.c
+@@ -154,7 +154,8 @@ void bpf_dispatcher_change_prog(struct bpf_dispatcher *d, struct bpf_prog *from,
+ d->image = NULL;
+ goto out;
+ }
+- bpf_image_ksym_add(d->image, PAGE_SIZE, &d->ksym);
++ bpf_image_ksym_init(d->image, PAGE_SIZE, &d->ksym);
++ bpf_image_ksym_add(&d->ksym);
+ }
+
+ prev_num_progs = d->num_progs;
+diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
+index f8302a5ca400da..1166d9dd3e8b5d 100644
+--- a/kernel/bpf/trampoline.c
++++ b/kernel/bpf/trampoline.c
+@@ -115,10 +115,14 @@ bool bpf_prog_has_trampoline(const struct bpf_prog *prog)
+ (ptype == BPF_PROG_TYPE_LSM && eatype == BPF_LSM_MAC);
+ }
+
+-void bpf_image_ksym_add(void *data, unsigned int size, struct bpf_ksym *ksym)
++void bpf_image_ksym_init(void *data, unsigned int size, struct bpf_ksym *ksym)
+ {
+ ksym->start = (unsigned long) data;
+ ksym->end = ksym->start + size;
++}
++
++void bpf_image_ksym_add(struct bpf_ksym *ksym)
++{
+ bpf_ksym_add(ksym);
+ perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF, ksym->start,
+ PAGE_SIZE, false, ksym->name);
+@@ -377,7 +381,8 @@ static struct bpf_tramp_image *bpf_tramp_image_alloc(u64 key, int size)
+ ksym = &im->ksym;
+ INIT_LIST_HEAD_RCU(&ksym->lnode);
+ snprintf(ksym->name, KSYM_NAME_LEN, "bpf_trampoline_%llu", key);
+- bpf_image_ksym_add(image, size, ksym);
++ bpf_image_ksym_init(image, size, ksym);
++ bpf_image_ksym_add(ksym);
+ return im;
+
+ out_free_image:
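+
Note the shape of the refactor above: bpf_image_ksym_add() is split into
an init step that records the symbol bounds and an add step that publishes
it, so a caller can prepare all its symbols first and publish only once
the surrounding operation succeeds. A rough standalone sketch of that
split (names hypothetical):

    #include <stdio.h>

    struct ksym { unsigned long start, end; const char *name; };

    static void ksym_init(void *data, unsigned int size, struct ksym *k)
    {
        k->start = (unsigned long)data;    /* record bounds only */
        k->end = k->start + size;
    }

    static void ksym_add(struct ksym *k)
    {
        /* publishing kept separate so it can be deferred until success */
        printf("publish %s [%#lx-%#lx]\n", k->name, k->start, k->end);
    }

    int main(void)
    {
        static char image[4096];
        struct ksym k = { .name = "bpf_trampoline_demo" };

        ksym_init(image, sizeof(image), &k);
        ksym_add(&k);
        return 0;
    }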
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index bb99bada7e2ed2..91317857ea3ee5 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -418,6 +418,25 @@ static struct btf_record *reg_btf_record(const struct bpf_reg_state *reg)
+ return rec;
+ }
+
++static bool mask_raw_tp_reg_cond(const struct bpf_verifier_env *env, struct bpf_reg_state *reg) {
++ return reg->type == (PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL) &&
++ bpf_prog_is_raw_tp(env->prog) && !reg->ref_obj_id;
++}
++
++static bool mask_raw_tp_reg(const struct bpf_verifier_env *env, struct bpf_reg_state *reg)
++{
++ if (!mask_raw_tp_reg_cond(env, reg))
++ return false;
++ reg->type &= ~PTR_MAYBE_NULL;
++ return true;
++}
++
++static void unmask_raw_tp_reg(struct bpf_reg_state *reg, bool result)
++{
++ if (result)
++ reg->type |= PTR_MAYBE_NULL;
++}
++
+ static bool subprog_is_global(const struct bpf_verifier_env *env, int subprog)
+ {
+ struct bpf_func_info_aux *aux = env->prog->aux->func_info_aux;
+@@ -6595,6 +6614,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+ const char *field_name = NULL;
+ enum bpf_type_flag flag = 0;
+ u32 btf_id = 0;
++ bool mask;
+ int ret;
+
+ if (!env->allow_ptr_leaks) {
+@@ -6666,7 +6686,21 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+
+ if (ret < 0)
+ return ret;
+-
++ /* For raw_tp progs, we allow dereference of PTR_MAYBE_NULL
++ * trusted PTR_TO_BTF_ID, these are the ones that are possibly
++ * arguments to the raw_tp. Since internal checks for trusted
++ * reg in check_ptr_to_btf_access would consider PTR_MAYBE_NULL
++ * modifier as problematic, mask it out temporarily for the
++ * check. Don't apply this to pointers with ref_obj_id > 0, as
++ * those won't be raw_tp args.
++ *
++ * We may end up applying this relaxation to other trusted
++ * PTR_TO_BTF_ID with maybe null flag, since we cannot
++ * distinguish PTR_MAYBE_NULL tagged for arguments vs normal
++ * tagging, but that should expand allowed behavior, and not
++ * cause regression for existing behavior.
++ */
++ mask = mask_raw_tp_reg(env, reg);
+ if (ret != PTR_TO_BTF_ID) {
+ /* just mark; */
+
+@@ -6727,8 +6761,13 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+ clear_trusted_flags(&flag);
+ }
+
+- if (atype == BPF_READ && value_regno >= 0)
++ if (atype == BPF_READ && value_regno >= 0) {
+ mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);
++ /* We've assigned a new type to regno, so don't undo masking. */
++ if (regno == value_regno)
++ mask = false;
++ }
++ unmask_raw_tp_reg(reg, mask);
+
+ return 0;
+ }
+@@ -7103,7 +7142,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ if (!err && t == BPF_READ && value_regno >= 0)
+ mark_reg_unknown(env, regs, value_regno);
+ } else if (base_type(reg->type) == PTR_TO_BTF_ID &&
+- !type_may_be_null(reg->type)) {
++ (mask_raw_tp_reg_cond(env, reg) || !type_may_be_null(reg->type))) {
+ err = check_ptr_to_btf_access(env, regs, regno, off, size, t,
+ value_regno);
+ } else if (reg->type == CONST_PTR_TO_MAP) {
+@@ -8796,6 +8835,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ enum bpf_reg_type type = reg->type;
+ u32 *arg_btf_id = NULL;
+ int err = 0;
++ bool mask;
+
+ if (arg_type == ARG_DONTCARE)
+ return 0;
+@@ -8836,11 +8876,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ base_type(arg_type) == ARG_PTR_TO_SPIN_LOCK)
+ arg_btf_id = fn->arg_btf_id[arg];
+
++ mask = mask_raw_tp_reg(env, reg);
+ err = check_reg_type(env, regno, arg_type, arg_btf_id, meta);
+- if (err)
+- return err;
+
+- err = check_func_arg_reg_off(env, reg, regno, arg_type);
++ err = err ?: check_func_arg_reg_off(env, reg, regno, arg_type);
++ unmask_raw_tp_reg(reg, mask);
+ if (err)
+ return err;
+
+@@ -9635,14 +9675,17 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
+ return ret;
+ } else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
+ struct bpf_call_arg_meta meta;
++ bool mask;
+ int err;
+
+ if (register_is_null(reg) && type_may_be_null(arg->arg_type))
+ continue;
+
+ memset(&meta, 0, sizeof(meta)); /* leave func_id as zero */
++ mask = mask_raw_tp_reg(env, reg);
+ err = check_reg_type(env, regno, arg->arg_type, &arg->btf_id, &meta);
+ err = err ?: check_func_arg_reg_off(env, reg, regno, arg->arg_type);
++ unmask_raw_tp_reg(reg, mask);
+ if (err)
+ return err;
+ } else {
+@@ -10583,11 +10626,26 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+
+ switch (func_id) {
+ case BPF_FUNC_tail_call:
++ if (env->cur_state->active_lock.ptr) {
++ verbose(env, "tail_call cannot be used inside bpf_spin_lock-ed region\n");
++ return -EINVAL;
++ }
++
+ err = check_reference_leak(env, false);
+ if (err) {
+ verbose(env, "tail_call would lead to reference leak\n");
+ return err;
+ }
++
++ if (env->cur_state->active_rcu_lock) {
++ verbose(env, "tail_call cannot be used inside bpf_rcu_read_lock-ed region\n");
++ return -EINVAL;
++ }
++
++ if (env->cur_state->active_preempt_lock) {
++ verbose(env, "tail_call cannot be used inside bpf_preempt_disable-ed region\n");
++ return -EINVAL;
++ }
+ break;
+ case BPF_FUNC_get_local_storage:
+ /* check that flags argument in get_local_storage(map, flags) is 0,
+@@ -11942,6 +12000,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ enum bpf_arg_type arg_type = ARG_DONTCARE;
+ u32 regno = i + 1, ref_id, type_size;
+ bool is_ret_buf_sz = false;
++ bool mask = false;
+ int kf_arg_type;
+
+ t = btf_type_skip_modifiers(btf, args[i].type, NULL);
+@@ -12000,12 +12059,15 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ return -EINVAL;
+ }
+
++ mask = mask_raw_tp_reg(env, reg);
+ if ((is_kfunc_trusted_args(meta) || is_kfunc_rcu(meta)) &&
+ (register_is_null(reg) || type_may_be_null(reg->type)) &&
+ !is_kfunc_arg_nullable(meta->btf, &args[i])) {
+ verbose(env, "Possibly NULL pointer passed to trusted arg%d\n", i);
++ unmask_raw_tp_reg(reg, mask);
+ return -EACCES;
+ }
++ unmask_raw_tp_reg(reg, mask);
+
+ if (reg->ref_obj_id) {
+ if (is_kfunc_release(meta) && meta->ref_obj_id) {
+@@ -12063,16 +12125,24 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ if (!is_kfunc_trusted_args(meta) && !is_kfunc_rcu(meta))
+ break;
+
++ /* Allow passing maybe NULL raw_tp arguments to
++ * kfuncs for compatibility. Don't apply this to
++ * arguments with ref_obj_id > 0.
++ */
++ mask = mask_raw_tp_reg(env, reg);
+ if (!is_trusted_reg(reg)) {
+ if (!is_kfunc_rcu(meta)) {
+ verbose(env, "R%d must be referenced or trusted\n", regno);
++ unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ if (!is_rcu_reg(reg)) {
+ verbose(env, "R%d must be a rcu pointer\n", regno);
++ unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ }
++ unmask_raw_tp_reg(reg, mask);
+ fallthrough;
+ case KF_ARG_PTR_TO_CTX:
+ case KF_ARG_PTR_TO_DYNPTR:
+@@ -12095,7 +12165,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+
+ if (is_kfunc_release(meta) && reg->ref_obj_id)
+ arg_type |= OBJ_RELEASE;
++ mask = mask_raw_tp_reg(env, reg);
+ ret = check_func_arg_reg_off(env, reg, regno, arg_type);
++ unmask_raw_tp_reg(reg, mask);
+ if (ret < 0)
+ return ret;
+
+@@ -12272,6 +12344,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ ref_tname = btf_name_by_offset(btf, ref_t->name_off);
+ fallthrough;
+ case KF_ARG_PTR_TO_BTF_ID:
++ mask = mask_raw_tp_reg(env, reg);
+ /* Only base_type is checked, further checks are done here */
+ if ((base_type(reg->type) != PTR_TO_BTF_ID ||
+ (bpf_type_has_unsafe_modifiers(reg->type) && !is_rcu_reg(reg))) &&
+@@ -12280,9 +12353,11 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ verbose(env, "expected %s or socket\n",
+ reg_type_str(env, base_type(reg->type) |
+ (type_flag(reg->type) & BPF_REG_TRUSTED_MODIFIERS)));
++ unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ ret = process_kf_arg_ptr_to_btf_id(env, reg, ref_t, ref_tname, ref_id, meta, i);
++ unmask_raw_tp_reg(reg, mask);
+ if (ret < 0)
+ return ret;
+ break;
+@@ -13252,7 +13327,7 @@ static int sanitize_check_bounds(struct bpf_verifier_env *env,
+ */
+ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ struct bpf_insn *insn,
+- const struct bpf_reg_state *ptr_reg,
++ struct bpf_reg_state *ptr_reg,
+ const struct bpf_reg_state *off_reg)
+ {
+ struct bpf_verifier_state *vstate = env->cur_state;
+@@ -13266,6 +13341,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ struct bpf_sanitize_info info = {};
+ u8 opcode = BPF_OP(insn->code);
+ u32 dst = insn->dst_reg;
++ bool mask;
+ int ret;
+
+ dst_reg = &regs[dst];
+@@ -13292,11 +13368,14 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ return -EACCES;
+ }
+
++ mask = mask_raw_tp_reg(env, ptr_reg);
+ if (ptr_reg->type & PTR_MAYBE_NULL) {
+ verbose(env, "R%d pointer arithmetic on %s prohibited, null-check it first\n",
+ dst, reg_type_str(env, ptr_reg->type));
++ unmask_raw_tp_reg(ptr_reg, mask);
+ return -EACCES;
+ }
++ unmask_raw_tp_reg(ptr_reg, mask);
+
+ switch (base_type(ptr_reg->type)) {
+ case PTR_TO_CTX:
+@@ -15909,6 +15988,15 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
+ return -ENOTSUPP;
+ }
+ break;
++ case BPF_PROG_TYPE_KPROBE:
++ switch (env->prog->expected_attach_type) {
++ case BPF_TRACE_KPROBE_SESSION:
++ range = retval_range(0, 1);
++ break;
++ default:
++ return 0;
++ }
++ break;
+ case BPF_PROG_TYPE_SK_LOOKUP:
+ range = retval_range(SK_DROP, SK_PASS);
+ break;
+@@ -19837,6 +19925,7 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ * for this case.
+ */
+ case PTR_TO_BTF_ID | MEM_ALLOC | PTR_UNTRUSTED:
++ case PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL:
+ if (type == BPF_READ) {
+ if (BPF_MODE(insn->code) == BPF_MEM)
+ insn->code = BPF_LDX | BPF_PROBE_MEM |
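+
All of the verifier hunks above repeat one pattern: temporarily clear
PTR_MAYBE_NULL from a trusted raw_tp argument register, run the
pre-existing check, and restore the flag on every exit path. A minimal
userspace sketch of that mask/unmask discipline (flag values and names
are hypothetical, not kernel code):

    #include <stdbool.h>
    #include <stdio.h>

    #define PTR_MAYBE_NULL (1u << 0)   /* hypothetical flag values */
    #define PTR_TRUSTED    (1u << 1)

    struct reg { unsigned int type; };

    static bool mask_maybe_null(struct reg *r)
    {
        if (!(r->type & PTR_MAYBE_NULL))
            return false;
        r->type &= ~PTR_MAYBE_NULL;    /* hide the flag for the check */
        return true;
    }

    static void unmask_maybe_null(struct reg *r, bool masked)
    {
        if (masked)
            r->type |= PTR_MAYBE_NULL; /* restore on every exit path */
    }

    int main(void)
    {
        struct reg r = { .type = PTR_TRUSTED | PTR_MAYBE_NULL };
        bool mask = mask_maybe_null(&r);

        /* ... run a pre-existing check that rejects PTR_MAYBE_NULL ... */
        printf("during check: 0x%x\n", r.type);

        unmask_maybe_null(&r, mask);
        printf("after check:  0x%x\n", r.type);
        return 0;
    }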
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 044c7ba1cc482b..e275eaf2de7f8f 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -2140,8 +2140,10 @@ int cgroup_setup_root(struct cgroup_root *root, u16 ss_mask)
+ if (ret)
+ goto exit_stats;
+
+- ret = cgroup_bpf_inherit(root_cgrp);
+- WARN_ON_ONCE(ret);
++ if (root == &cgrp_dfl_root) {
++ ret = cgroup_bpf_inherit(root_cgrp);
++ WARN_ON_ONCE(ret);
++ }
+
+ trace_cgroup_setup_root(root);
+
+@@ -2314,10 +2316,8 @@ static void cgroup_kill_sb(struct super_block *sb)
+ * And don't kill the default root.
+ */
+ if (list_empty(&root->cgrp.self.children) && root != &cgrp_dfl_root &&
+- !percpu_ref_is_dying(&root->cgrp.self.refcnt)) {
+- cgroup_bpf_offline(&root->cgrp);
++ !percpu_ref_is_dying(&root->cgrp.self.refcnt))
+ percpu_ref_kill(&root->cgrp.self.refcnt);
+- }
+ cgroup_put(&root->cgrp);
+ kernfs_kill_sb(sb);
+ }
+@@ -5710,9 +5710,11 @@ static struct cgroup *cgroup_create(struct cgroup *parent, const char *name,
+ if (ret)
+ goto out_kernfs_remove;
+
+- ret = cgroup_bpf_inherit(cgrp);
+- if (ret)
+- goto out_psi_free;
++ if (cgrp->root == &cgrp_dfl_root) {
++ ret = cgroup_bpf_inherit(cgrp);
++ if (ret)
++ goto out_psi_free;
++ }
+
+ /*
+ * New cgroup inherits effective freeze counter, and
+@@ -6026,7 +6028,8 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
+
+ cgroup1_check_for_release(parent);
+
+- cgroup_bpf_offline(cgrp);
++ if (cgrp->root == &cgrp_dfl_root)
++ cgroup_bpf_offline(cgrp);
+
+ /* put the base reference */
+ percpu_ref_kill(&cgrp->self.refcnt);
+diff --git a/kernel/fork.c b/kernel/fork.c
+index 22f43721d031d4..ce8be55e5e04b3 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -622,6 +622,12 @@ static void dup_mm_exe_file(struct mm_struct *mm, struct mm_struct *oldmm)
+
+ exe_file = get_mm_exe_file(oldmm);
+ RCU_INIT_POINTER(mm->exe_file, exe_file);
++ /*
++ * We depend on the oldmm having properly denied write access to the
++ * exe_file already.
++ */
++ if (exe_file && deny_write_access(exe_file))
++ pr_warn_once("deny_write_access() failed in %s\n", __func__);
+ }
+
+ #ifdef CONFIG_MMU
+@@ -1414,11 +1420,20 @@ int set_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
+ */
+ old_exe_file = rcu_dereference_raw(mm->exe_file);
+
+- if (new_exe_file)
++ if (new_exe_file) {
++ /*
++ * We expect the caller (i.e., sys_execve) to have already denied
++ * write access, so this is unlikely to fail.
++ */
++ if (unlikely(deny_write_access(new_exe_file)))
++ return -EACCES;
+ get_file(new_exe_file);
++ }
+ rcu_assign_pointer(mm->exe_file, new_exe_file);
+- if (old_exe_file)
++ if (old_exe_file) {
++ allow_write_access(old_exe_file);
+ fput(old_exe_file);
++ }
+ return 0;
+ }
+
+@@ -1457,6 +1472,9 @@ int replace_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
+ return ret;
+ }
+
++ ret = deny_write_access(new_exe_file);
++ if (ret)
++ return -EACCES;
+ get_file(new_exe_file);
+
+ /* set the new file */
+@@ -1465,8 +1483,10 @@ int replace_mm_exe_file(struct mm_struct *mm, struct file *new_exe_file)
+ rcu_assign_pointer(mm->exe_file, new_exe_file);
+ mmap_write_unlock(mm);
+
+- if (old_exe_file)
++ if (old_exe_file) {
++ allow_write_access(old_exe_file);
+ fput(old_exe_file);
++ }
+ return 0;
+ }
+
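+
The fork.c hunks reinstate the exe_file write-block protocol: deny write
access on the new file before publishing it, and release the old file's
block only after the pointer swap. A simplified userspace model of the
pairing (all names hypothetical; the real helpers adjust i_writecount
and can fail):

    #include <stdio.h>

    struct file { int write_deny; const char *name; };

    /* the real helper bumps i_writecount and can fail; modeled as success */
    static int deny_write_access(struct file *f)
    {
        f->write_deny++;
        return 0;
    }

    static void allow_write_access(struct file *f)
    {
        if (f)
            f->write_deny--;
    }

    static struct file *exe_file;      /* stand-in for mm->exe_file */

    static int set_exe_file(struct file *new)
    {
        struct file *old = exe_file;

        if (new && deny_write_access(new))
            return -1;                 /* -EACCES in the kernel */
        exe_file = new;                /* publish the new file */
        allow_write_access(old);       /* release the old block afterwards */
        return 0;
    }

    int main(void)
    {
        struct file a = { .name = "a" }, b = { .name = "b" };

        set_exe_file(&a);
        set_exe_file(&b);
        printf("a deny=%d b deny=%d\n", a.write_deny, b.write_deny); /* 0 1 */
        return 0;
    }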
+diff --git a/kernel/rcu/rcuscale.c b/kernel/rcu/rcuscale.c
+index 6d37596deb1f12..d360fa44b234db 100644
+--- a/kernel/rcu/rcuscale.c
++++ b/kernel/rcu/rcuscale.c
+@@ -890,13 +890,15 @@ kfree_scale_init(void)
+ if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start < 2 * HZ)) {
+ pr_alert("ERROR: call_rcu() CBs are not being lazy as expected!\n");
+ WARN_ON_ONCE(1);
+- return -1;
++ firsterr = -1;
++ goto unwind;
+ }
+
+ if (WARN_ON_ONCE(jiffies_at_lazy_cb - jif_start > 3 * HZ)) {
+ pr_alert("ERROR: call_rcu() CBs are being too lazy!\n");
+ WARN_ON_ONCE(1);
+- return -1;
++ firsterr = -1;
++ goto unwind;
+ }
+ }
+
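+
The rcuscale change converts two early returns into the file's
record-and-unwind convention, so resources set up earlier in
kfree_scale_init() are torn down on failure. A tiny standalone
illustration of the convention (hypothetical function):

    #include <stdio.h>
    #include <stdlib.h>

    static int init_thing(void)
    {
        char *buf = malloc(16);
        int firsterr = 0;

        if (!buf)
            return -12;                /* -ENOMEM, nothing to unwind yet */

        if (1 /* simulated consistency-check failure */) {
            firsterr = -1;
            goto unwind;               /* was: bare return -1 (leaks buf) */
        }

    unwind:
        free(buf);                     /* single teardown path */
        return firsterr;
    }

    int main(void)
    {
        printf("init_thing() = %d\n", init_thing());
        return 0;
    }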
+diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
+index 549c03336ee971..4dcbf8aa80ff73 100644
+--- a/kernel/rcu/srcutiny.c
++++ b/kernel/rcu/srcutiny.c
+@@ -122,8 +122,8 @@ void srcu_drive_gp(struct work_struct *wp)
+ ssp = container_of(wp, struct srcu_struct, srcu_work);
+ preempt_disable(); // Needed for PREEMPT_AUTO
+ if (ssp->srcu_gp_running || ULONG_CMP_GE(ssp->srcu_idx, READ_ONCE(ssp->srcu_idx_max))) {
+- return; /* Already running or nothing to do. */
+ preempt_enable();
++ return; /* Already running or nothing to do. */
+ }
+
+ /* Remove recently arrived callbacks and wait for readers. */
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index b1f883fcd9185a..3e486ccaa4ca34 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -3511,7 +3511,7 @@ static int krc_count(struct kfree_rcu_cpu *krcp)
+ }
+
+ static void
+-schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
++__schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
+ {
+ long delay, delay_left;
+
+@@ -3525,6 +3525,16 @@ schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
+ queue_delayed_work(system_unbound_wq, &krcp->monitor_work, delay);
+ }
+
++static void
++schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
++{
++ unsigned long flags;
++
++ raw_spin_lock_irqsave(&krcp->lock, flags);
++ __schedule_delayed_monitor_work(krcp);
++ raw_spin_unlock_irqrestore(&krcp->lock, flags);
++}
++
+ static void
+ kvfree_rcu_drain_ready(struct kfree_rcu_cpu *krcp)
+ {
+@@ -3836,7 +3846,7 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
+
+ // Set timer to drain after KFREE_DRAIN_JIFFIES.
+ if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING)
+- schedule_delayed_monitor_work(krcp);
++ __schedule_delayed_monitor_work(krcp);
+
+ unlock_return:
+ krc_this_cpu_unlock(krcp, flags);
+diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
+index 16865475120ba3..2605dd234a13c8 100644
+--- a/kernel/rcu/tree_nocb.h
++++ b/kernel/rcu/tree_nocb.h
+@@ -891,7 +891,18 @@ static void nocb_cb_wait(struct rcu_data *rdp)
+ swait_event_interruptible_exclusive(rdp->nocb_cb_wq,
+ nocb_cb_wait_cond(rdp));
+ if (kthread_should_park()) {
+- kthread_parkme();
++ /*
++ * kthread_park() must be preceded by an rcu_barrier().
++ * But yet another rcu_barrier() might have sneaked in between
++ * the barrier callback execution and the callbacks counter
++ * decrement.
++ */
++ if (rdp->nocb_cb_sleep) {
++ rcu_nocb_lock_irqsave(rdp, flags);
++ WARN_ON_ONCE(rcu_segcblist_n_cbs(&rdp->cblist));
++ rcu_nocb_unlock_irqrestore(rdp, flags);
++ kthread_parkme();
++ }
+ } else if (READ_ONCE(rdp->nocb_cb_sleep)) {
+ WARN_ON(signal_pending(current));
+ trace_rcu_nocb_wake(rcu_state.name, rdp->cpu, TPS("WokeEmpty"));
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index c6ba15388ea706..28c77904ea749f 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -783,9 +783,8 @@ static int sugov_init(struct cpufreq_policy *policy)
+ if (ret)
+ goto fail;
+
+- sugov_eas_rebuild_sd();
+-
+ out:
++ sugov_eas_rebuild_sd();
+ mutex_unlock(&global_tunables_lock);
+ return 0;
+
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 751d73d500e51d..16613631543f18 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -3567,12 +3567,7 @@ static void scx_ops_exit_task(struct task_struct *p)
+
+ void init_scx_entity(struct sched_ext_entity *scx)
+ {
+- /*
+- * init_idle() calls this function again after fork sequence is
+- * complete. Don't touch ->tasks_node as it's already linked.
+- */
+- memset(scx, 0, offsetof(struct sched_ext_entity, tasks_node));
+-
++ memset(scx, 0, sizeof(*scx));
+ INIT_LIST_HEAD(&scx->dsq_list.node);
+ RB_CLEAR_NODE(&scx->dsq_priq);
+ scx->sticky_cpu = -1;
+@@ -6478,6 +6473,8 @@ __bpf_kfunc_end_defs();
+
+ BTF_KFUNCS_START(scx_kfunc_ids_unlocked)
+ BTF_ID_FLAGS(func, scx_bpf_create_dsq, KF_SLEEPABLE)
++BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq_set_slice)
++BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq_set_vtime)
+ BTF_ID_FLAGS(func, scx_bpf_dispatch_from_dsq, KF_RCU)
+ BTF_ID_FLAGS(func, scx_bpf_dispatch_vtime_from_dsq, KF_RCU)
+ BTF_KFUNCS_END(scx_kfunc_ids_unlocked)
+diff --git a/kernel/time/time.c b/kernel/time/time.c
+index 642647f5046be0..1ad88e97b4ebcf 100644
+--- a/kernel/time/time.c
++++ b/kernel/time/time.c
+@@ -556,9 +556,9 @@ EXPORT_SYMBOL(ns_to_timespec64);
+ * - all other values are converted to jiffies by either multiplying
+ * the input value by a factor or dividing it with a factor and
+ * handling any 32-bit overflows.
+- * for the details see __msecs_to_jiffies()
++ * for the details see _msecs_to_jiffies()
+ *
+- * __msecs_to_jiffies() checks for the passed in value being a constant
++ * msecs_to_jiffies() checks for the passed in value being a constant
+ * via __builtin_constant_p() allowing gcc to eliminate most of the
+ * code, __msecs_to_jiffies() is called if the value passed does not
+ * allow constant folding and the actual conversion must be done at
+diff --git a/kernel/time/timer.c b/kernel/time/timer.c
+index 0fc9d066a7be46..7835f9b376e76a 100644
+--- a/kernel/time/timer.c
++++ b/kernel/time/timer.c
+@@ -2422,7 +2422,8 @@ static inline void __run_timers(struct timer_base *base)
+
+ static void __run_timer_base(struct timer_base *base)
+ {
+- if (time_before(jiffies, base->next_expiry))
++ /* Can race against a remote CPU updating next_expiry under the lock */
++ if (time_before(jiffies, READ_ONCE(base->next_expiry)))
+ return;
+
+ timer_base_lock_expiry(base);
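+
The one-line timer fix is an instance of a general rule: a field written
under a lock but read locklessly needs READ_ONCE() so the compiler cannot
tear or re-load the access. A small pthread sketch of the same shape (the
macro definitions mirror the kernel's GCC-style ones; everything else is
hypothetical):

    #include <pthread.h>
    #include <stdio.h>

    #define READ_ONCE(x)     (*(volatile __typeof__(x) *)&(x))
    #define WRITE_ONCE(x, v) (*(volatile __typeof__(x) *)&(x) = (v))

    static unsigned long next_expiry;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *remote_cpu(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        WRITE_ONCE(next_expiry, 42);   /* writer updates under the lock */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_create(&t, NULL, remote_cpu, NULL);
        /* lockless early-out: READ_ONCE() required, as in the fix above;
         * the write may or may not be visible yet -- both are fine here */
        if (READ_ONCE(next_expiry) > 0)
            printf("expiry pending\n");
        pthread_join(&t, NULL);
        return 0;
    }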
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 630b763e52402f..792dc35414a3c3 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -3205,7 +3205,6 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
+ struct bpf_prog *prog = link->link.prog;
+ bool sleepable = prog->sleepable;
+ struct bpf_run_ctx *old_run_ctx;
+- int err = 0;
+
+ if (link->task && !same_thread_group(current, link->task))
+ return 0;
+@@ -3218,7 +3217,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
+ migrate_disable();
+
+ old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
+- err = bpf_prog_run(link->link.prog, regs);
++ bpf_prog_run(link->link.prog, regs);
+ bpf_reset_run_ctx(old_run_ctx);
+
+ migrate_enable();
+@@ -3227,7 +3226,7 @@ static int uprobe_prog_run(struct bpf_uprobe *uprobe,
+ rcu_read_unlock_trace();
+ else
+ rcu_read_unlock();
+- return err;
++ return 0;
+ }
+
+ static bool
+diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c
+index 05e7912418126b..3ff9caa4a71bbd 100644
+--- a/kernel/trace/trace_event_perf.c
++++ b/kernel/trace/trace_event_perf.c
+@@ -352,10 +352,16 @@ void perf_uprobe_destroy(struct perf_event *p_event)
+ int perf_trace_add(struct perf_event *p_event, int flags)
+ {
+ struct trace_event_call *tp_event = p_event->tp_event;
++ struct hw_perf_event *hwc = &p_event->hw;
+
+ if (!(flags & PERF_EF_START))
+ p_event->hw.state = PERF_HES_STOPPED;
+
++ if (is_sampling_event(p_event)) {
++ hwc->last_period = hwc->sample_period;
++ perf_swevent_set_period(p_event);
++ }
++
+ /*
+ * If TRACE_REG_PERF_ADD returns false; no custom action was performed
+ * and we need to take the default action of enqueueing our event on
+diff --git a/lib/overflow_kunit.c b/lib/overflow_kunit.c
+index 2abc78367dd110..5222c6393f1168 100644
+--- a/lib/overflow_kunit.c
++++ b/lib/overflow_kunit.c
+@@ -1187,7 +1187,7 @@ static void DEFINE_FLEX_test(struct kunit *test)
+ {
+ /* Using _RAW_ on a __counted_by struct will initialize "counter" to zero */
+ DEFINE_RAW_FLEX(struct foo, two_but_zero, array, 2);
+-#if __has_attribute(__counted_by__)
++#ifdef CONFIG_CC_HAS_COUNTED_BY
+ int expected_raw_size = sizeof(struct foo);
+ #else
+ int expected_raw_size = sizeof(struct foo) + 2 * sizeof(s16);
+diff --git a/lib/string_helpers.c b/lib/string_helpers.c
+index 4f887aa62fa0cd..91fa37b5c510a7 100644
+--- a/lib/string_helpers.c
++++ b/lib/string_helpers.c
+@@ -57,7 +57,7 @@ int string_get_size(u64 size, u64 blk_size, const enum string_size_units units,
+ static const unsigned int rounding[] = { 500, 50, 5 };
+ int i = 0, j;
+ u32 remainder = 0, sf_cap;
+- char tmp[8];
++ char tmp[12];
+ const char *unit;
+
+ tmp[0] = '\0';
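+
The tmp[] resize above covers the worst-case fractional print: '.' plus
up to ten decimal digits of a u32 remainder plus the NUL needs 12 bytes,
which tmp[8] could truncate. A quick standalone check of the arithmetic
(illustrative only; the exact format string upstream may differ):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        char tmp[12];
        uint32_t remainder = 4294967295u;             /* worst-case u32 */
        int n = snprintf(tmp, sizeof(tmp), ".%u", remainder);

        printf("needed %d chars + NUL: '%s'\n", n, tmp);  /* 11 + 1 = 12 */
        return 0;
    }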
+diff --git a/lib/strncpy_from_user.c b/lib/strncpy_from_user.c
+index 989a12a6787214..6dc234913dd58e 100644
+--- a/lib/strncpy_from_user.c
++++ b/lib/strncpy_from_user.c
+@@ -120,6 +120,9 @@ long strncpy_from_user(char *dst, const char __user *src, long count)
+ if (unlikely(count <= 0))
+ return 0;
+
++ kasan_check_write(dst, count);
++ check_object_size(dst, count, false);
++
+ if (can_do_masked_user_access()) {
+ long retval;
+
+@@ -142,8 +145,6 @@ long strncpy_from_user(char *dst, const char __user *src, long count)
+ if (max > count)
+ max = count;
+
+- kasan_check_write(dst, count);
+- check_object_size(dst, count, false);
+ if (user_read_access_begin(src, max)) {
+ retval = do_strncpy_from_user(dst, src, count, max);
+ user_read_access_end();
+diff --git a/mm/internal.h b/mm/internal.h
+index 64c2eb0b160e16..9bb098e78f1556 100644
+--- a/mm/internal.h
++++ b/mm/internal.h
+@@ -48,7 +48,7 @@ struct folio_batch;
+ * when we specify __GFP_NOWARN.
+ */
+ #define WARN_ON_ONCE_GFP(cond, gfp) ({ \
+- static bool __section(".data.once") __warned; \
++ static bool __section(".data..once") __warned; \
+ int __ret_warn_once = !!(cond); \
+ \
+ if (unlikely(!(gfp & __GFP_NOWARN) && __ret_warn_once && !__warned)) { \
+diff --git a/net/9p/trans_usbg.c b/net/9p/trans_usbg.c
+index 975b76839dca1a..6b694f117aef29 100644
+--- a/net/9p/trans_usbg.c
++++ b/net/9p/trans_usbg.c
+@@ -909,9 +909,9 @@ static struct usb_function_instance *usb9pfs_alloc_instance(void)
+ usb9pfs_opts->buflen = DEFAULT_BUFLEN;
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+- if (IS_ERR(dev)) {
++ if (!dev) {
+ kfree(usb9pfs_opts);
+- return ERR_CAST(dev);
++ return ERR_PTR(-ENOMEM);
+ }
+
+ usb9pfs_opts->dev = dev;
+diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
+index dfdbe1ca533872..b9ff69c7522a19 100644
+--- a/net/9p/trans_xen.c
++++ b/net/9p/trans_xen.c
+@@ -286,7 +286,7 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
+ if (!priv->rings[i].intf)
+ break;
+ if (priv->rings[i].irq > 0)
+- unbind_from_irqhandler(priv->rings[i].irq, priv->dev);
++ unbind_from_irqhandler(priv->rings[i].irq, ring);
+ if (priv->rings[i].data.in) {
+ for (j = 0;
+ j < (1 << priv->rings[i].intf->ring_order);
+@@ -465,6 +465,7 @@ static int xen_9pfs_front_init(struct xenbus_device *dev)
+ goto error;
+ }
+
++ xenbus_switch_state(dev, XenbusStateInitialised);
+ return 0;
+
+ error_xenbus:
+@@ -512,8 +513,10 @@ static void xen_9pfs_front_changed(struct xenbus_device *dev,
+ break;
+
+ case XenbusStateInitWait:
+- if (!xen_9pfs_front_init(dev))
+- xenbus_switch_state(dev, XenbusStateInitialised);
++ if (dev->state != XenbusStateInitialising)
++ break;
++
++ xen_9pfs_front_init(dev);
+ break;
+
+ case XenbusStateConnected:
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index c4c74b82ed211c..6354cdf9c2b372 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -952,6 +952,7 @@ static struct hci_conn *__hci_conn_add(struct hci_dev *hdev, int type, bdaddr_t
+ conn->tx_power = HCI_TX_POWER_INVALID;
+ conn->max_tx_power = HCI_TX_POWER_INVALID;
+ conn->sync_handle = HCI_SYNC_HANDLE_INVALID;
++ conn->sid = HCI_SID_INVALID;
+
+ set_bit(HCI_CONN_POWER_SAVE, &conn->flags);
+ conn->disc_timeout = HCI_DISCONN_TIMEOUT;
+@@ -2062,105 +2063,225 @@ static int create_big_sync(struct hci_dev *hdev, void *data)
+
+ static void create_pa_complete(struct hci_dev *hdev, void *data, int err)
+ {
+- struct hci_cp_le_pa_create_sync *cp = data;
+-
+ bt_dev_dbg(hdev, "");
+
+ if (err)
+ bt_dev_err(hdev, "Unable to create PA: %d", err);
++}
+
+- kfree(cp);
++static bool hci_conn_check_create_pa_sync(struct hci_conn *conn)
++{
++ if (conn->type != ISO_LINK || conn->sid == HCI_SID_INVALID)
++ return false;
++
++ return true;
+ }
+
+ static int create_pa_sync(struct hci_dev *hdev, void *data)
+ {
+- struct hci_cp_le_pa_create_sync *cp = data;
+- int err;
++ struct hci_cp_le_pa_create_sync *cp = NULL;
++ struct hci_conn *conn;
++ int err = 0;
+
+- err = __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC,
+- sizeof(*cp), cp, HCI_CMD_TIMEOUT);
+- if (err) {
+- hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+- return err;
++ hci_dev_lock(hdev);
++
++ rcu_read_lock();
++
++ /* The spec allows only one pending LE Periodic Advertising Create
++ * Sync command at a time. If the command is pending now, don't do
++ * anything. We check for pending connections after each PA Sync
++ * Established event.
++ *
++ * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
++ * page 2493:
++ *
++ * If the Host issues this command when another HCI_LE_Periodic_
++ * Advertising_Create_Sync command is pending, the Controller shall
++ * return the error code Command Disallowed (0x0C).
++ */
++ list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++ if (test_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags))
++ goto unlock;
++ }
++
++ list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++ if (hci_conn_check_create_pa_sync(conn)) {
++ struct bt_iso_qos *qos = &conn->iso_qos;
++
++ cp = kzalloc(sizeof(*cp), GFP_KERNEL);
++ if (!cp) {
++ err = -ENOMEM;
++ goto unlock;
++ }
++
++ cp->options = qos->bcast.options;
++ cp->sid = conn->sid;
++ cp->addr_type = conn->dst_type;
++ bacpy(&cp->addr, &conn->dst);
++ cp->skip = cpu_to_le16(qos->bcast.skip);
++ cp->sync_timeout = cpu_to_le16(qos->bcast.sync_timeout);
++ cp->sync_cte_type = qos->bcast.sync_cte_type;
++
++ break;
++ }
+ }
+
+- return hci_update_passive_scan_sync(hdev);
++unlock:
++ rcu_read_unlock();
++
++ hci_dev_unlock(hdev);
++
++ if (cp) {
++ hci_dev_set_flag(hdev, HCI_PA_SYNC);
++ set_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
++
++ err = __hci_cmd_sync_status(hdev, HCI_OP_LE_PA_CREATE_SYNC,
++ sizeof(*cp), cp, HCI_CMD_TIMEOUT);
++ if (!err)
++ err = hci_update_passive_scan_sync(hdev);
++
++ kfree(cp);
++
++ if (err) {
++ hci_dev_clear_flag(hdev, HCI_PA_SYNC);
++ clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
++ }
++ }
++
++ return err;
++}
++
++int hci_pa_create_sync_pending(struct hci_dev *hdev)
++{
++ /* Queue start pa_create_sync and scan */
++ return hci_cmd_sync_queue(hdev, create_pa_sync,
++ NULL, create_pa_complete);
+ }
+
+ struct hci_conn *hci_pa_create_sync(struct hci_dev *hdev, bdaddr_t *dst,
+ __u8 dst_type, __u8 sid,
+ struct bt_iso_qos *qos)
+ {
+- struct hci_cp_le_pa_create_sync *cp;
+ struct hci_conn *conn;
+- int err;
+-
+- if (hci_dev_test_and_set_flag(hdev, HCI_PA_SYNC))
+- return ERR_PTR(-EBUSY);
+
+ conn = hci_conn_add_unset(hdev, ISO_LINK, dst, HCI_ROLE_SLAVE);
+ if (IS_ERR(conn))
+ return conn;
+
+ conn->iso_qos = *qos;
++ conn->dst_type = dst_type;
++ conn->sid = sid;
+ conn->state = BT_LISTEN;
+
+ hci_conn_hold(conn);
+
+- cp = kzalloc(sizeof(*cp), GFP_KERNEL);
+- if (!cp) {
+- hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+- hci_conn_drop(conn);
+- return ERR_PTR(-ENOMEM);
++ hci_pa_create_sync_pending(hdev);
++
++ return conn;
++}
++
++static bool hci_conn_check_create_big_sync(struct hci_conn *conn)
++{
++ if (!conn->num_bis)
++ return false;
++
++ return true;
++}
++
++static void big_create_sync_complete(struct hci_dev *hdev, void *data, int err)
++{
++ bt_dev_dbg(hdev, "");
++
++ if (err)
++ bt_dev_err(hdev, "Unable to create BIG sync: %d", err);
++}
++
++static int big_create_sync(struct hci_dev *hdev, void *data)
++{
++ DEFINE_FLEX(struct hci_cp_le_big_create_sync, pdu, bis, num_bis, 0x11);
++ struct hci_conn *conn;
++
++ rcu_read_lock();
++
++ pdu->num_bis = 0;
++
++ /* The spec allows only one pending LE BIG Create Sync command at
++ * a time. If the command is pending now, don't do anything. We
++ * check for pending connections after each BIG Sync Established
++ * event.
++ *
++ * BLUETOOTH CORE SPECIFICATION Version 5.3 | Vol 4, Part E
++ * page 2586:
++ *
++ * If the Host sends this command when the Controller is in the
++ * process of synchronizing to any BIG, i.e. the HCI_LE_BIG_Sync_
++ * Established event has not been generated, the Controller shall
++ * return the error code Command Disallowed (0x0C).
++ */
++ list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++ if (test_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags))
++ goto unlock;
+ }
+
+- cp->options = qos->bcast.options;
+- cp->sid = sid;
+- cp->addr_type = dst_type;
+- bacpy(&cp->addr, dst);
+- cp->skip = cpu_to_le16(qos->bcast.skip);
+- cp->sync_timeout = cpu_to_le16(qos->bcast.sync_timeout);
+- cp->sync_cte_type = qos->bcast.sync_cte_type;
++ list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
++ if (hci_conn_check_create_big_sync(conn)) {
++ struct bt_iso_qos *qos = &conn->iso_qos;
+
+- /* Queue start pa_create_sync and scan */
+- err = hci_cmd_sync_queue(hdev, create_pa_sync, cp, create_pa_complete);
+- if (err < 0) {
+- hci_conn_drop(conn);
+- kfree(cp);
+- return ERR_PTR(err);
++ set_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags);
++
++ pdu->handle = qos->bcast.big;
++ pdu->sync_handle = cpu_to_le16(conn->sync_handle);
++ pdu->encryption = qos->bcast.encryption;
++ memcpy(pdu->bcode, qos->bcast.bcode,
++ sizeof(pdu->bcode));
++ pdu->mse = qos->bcast.mse;
++ pdu->timeout = cpu_to_le16(qos->bcast.timeout);
++ pdu->num_bis = conn->num_bis;
++ memcpy(pdu->bis, conn->bis, conn->num_bis);
++
++ break;
++ }
+ }
+
+- return conn;
++unlock:
++ rcu_read_unlock();
++
++ if (!pdu->num_bis)
++ return 0;
++
++ return hci_send_cmd(hdev, HCI_OP_LE_BIG_CREATE_SYNC,
++ struct_size(pdu, bis, pdu->num_bis), pdu);
++}
++
++int hci_le_big_create_sync_pending(struct hci_dev *hdev)
++{
++ /* Queue big_create_sync */
++ return hci_cmd_sync_queue_once(hdev, big_create_sync,
++ NULL, big_create_sync_complete);
+ }
+
+ int hci_le_big_create_sync(struct hci_dev *hdev, struct hci_conn *hcon,
+ struct bt_iso_qos *qos,
+ __u16 sync_handle, __u8 num_bis, __u8 bis[])
+ {
+- DEFINE_FLEX(struct hci_cp_le_big_create_sync, pdu, bis, num_bis, 0x11);
+ int err;
+
+- if (num_bis < 0x01 || num_bis > pdu->num_bis)
++ if (num_bis < 0x01 || num_bis > ISO_MAX_NUM_BIS)
+ return -EINVAL;
+
+ err = qos_set_big(hdev, qos);
+ if (err)
+ return err;
+
+- if (hcon)
+- hcon->iso_qos.bcast.big = qos->bcast.big;
++ if (hcon) {
++ /* Update hcon QoS */
++ hcon->iso_qos = *qos;
+
+- pdu->handle = qos->bcast.big;
+- pdu->sync_handle = cpu_to_le16(sync_handle);
+- pdu->encryption = qos->bcast.encryption;
+- memcpy(pdu->bcode, qos->bcast.bcode, sizeof(pdu->bcode));
+- pdu->mse = qos->bcast.mse;
+- pdu->timeout = cpu_to_le16(qos->bcast.timeout);
+- pdu->num_bis = num_bis;
+- memcpy(pdu->bis, bis, num_bis);
++ hcon->num_bis = num_bis;
++ memcpy(hcon->bis, bis, num_bis);
++ }
+
+- return hci_send_cmd(hdev, HCI_OP_LE_BIG_CREATE_SYNC,
+- struct_size(pdu, bis, num_bis), pdu);
++ return hci_le_big_create_sync_pending(hdev);
+ }
+
+ static void create_big_complete(struct hci_dev *hdev, void *data, int err)
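+
The rework above serializes PA/BIG create-sync: the queued work first
bails if any connection already has a command pending, then issues at
most one command, and the *established* event handler re-queues the work
for the next waiter. A plain C model of that scheme (hypothetical names
and state):

    #include <stdbool.h>
    #include <stdio.h>

    #define NCONN 3

    struct conn { bool wants_sync; bool pending; };

    static struct conn conns[NCONN] = {
        { .wants_sync = true }, { .wants_sync = true }, { .wants_sync = false },
    };

    static void create_sync_work(void)
    {
        int i;

        for (i = 0; i < NCONN; i++)
            if (conns[i].pending)
                return;                /* spec: one pending command at a time */

        for (i = 0; i < NCONN; i++) {
            if (conns[i].wants_sync) {
                conns[i].pending = true;
                printf("issue create-sync for conn %d\n", i);
                return;                /* at most one command in flight */
            }
        }
    }

    static void sync_established_evt(int i)
    {
        conns[i].pending = false;
        conns[i].wants_sync = false;
        create_sync_work();            /* pick up the next waiter */
    }

    int main(void)
    {
        create_sync_work();            /* issues for conn 0 */
        sync_established_evt(0);       /* issues for conn 1 */
        sync_established_evt(1);       /* nothing left pending */
        return 0;
    }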
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 0bbad90ddd6f87..2e4bd3e961ce09 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6345,7 +6345,7 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ struct hci_ev_le_pa_sync_established *ev = data;
+ int mask = hdev->link_mode;
+ __u8 flags = 0;
+- struct hci_conn *pa_sync;
++ struct hci_conn *pa_sync, *conn;
+
+ bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
+
+@@ -6353,6 +6353,20 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+
+ hci_dev_clear_flag(hdev, HCI_PA_SYNC);
+
++ conn = hci_conn_hash_lookup_sid(hdev, ev->sid, &ev->bdaddr,
++ ev->bdaddr_type);
++ if (!conn) {
++ bt_dev_err(hdev,
++ "Unable to find connection for dst %pMR sid 0x%2.2x",
++ &ev->bdaddr, ev->sid);
++ goto unlock;
++ }
++
++ clear_bit(HCI_CONN_CREATE_PA_SYNC, &conn->flags);
++
++ conn->sync_handle = le16_to_cpu(ev->handle);
++ conn->sid = HCI_SID_INVALID;
++
+ mask |= hci_proto_connect_ind(hdev, &ev->bdaddr, ISO_LINK, &flags);
+ if (!(mask & HCI_LM_ACCEPT)) {
+ hci_le_pa_term_sync(hdev, ev->handle);
+@@ -6379,6 +6393,9 @@ static void hci_le_pa_sync_estabilished_evt(struct hci_dev *hdev, void *data,
+ }
+
+ unlock:
++ /* Handle any other pending PA sync command */
++ hci_pa_create_sync_pending(hdev);
++
+ hci_dev_unlock(hdev);
+ }
+
+@@ -6896,7 +6913,7 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ struct sk_buff *skb)
+ {
+ struct hci_evt_le_big_sync_estabilished *ev = data;
+- struct hci_conn *bis;
++ struct hci_conn *bis, *conn;
+ int i;
+
+ bt_dev_dbg(hdev, "status 0x%2.2x", ev->status);
+@@ -6907,6 +6924,20 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+
+ hci_dev_lock(hdev);
+
++ conn = hci_conn_hash_lookup_big_sync_pend(hdev, ev->handle,
++ ev->num_bis);
++ if (!conn) {
++ bt_dev_err(hdev,
++ "Unable to find connection for big 0x%2.2x",
++ ev->handle);
++ goto unlock;
++ }
++
++ clear_bit(HCI_CONN_CREATE_BIG_SYNC, &conn->flags);
++
++ conn->num_bis = 0;
++ memset(conn->bis, 0, sizeof(conn->num_bis));
++
+ for (i = 0; i < ev->num_bis; i++) {
+ u16 handle = le16_to_cpu(ev->bis[i]);
+ __le32 interval;
+@@ -6956,6 +6987,10 @@ static void hci_le_big_sync_established_evt(struct hci_dev *hdev, void *data,
+ hci_connect_cfm(bis, ev->status);
+ }
+
++unlock:
++ /* Handle any other pending BIG sync command */
++ hci_le_big_create_sync_pending(hdev);
++
+ hci_dev_unlock(hdev);
+ }
+
+diff --git a/net/bluetooth/hci_sysfs.c b/net/bluetooth/hci_sysfs.c
+index 367e32fe30eb84..4b54dbbf0729a3 100644
+--- a/net/bluetooth/hci_sysfs.c
++++ b/net/bluetooth/hci_sysfs.c
+@@ -21,16 +21,6 @@ static const struct device_type bt_link = {
+ .release = bt_link_release,
+ };
+
+-/*
+- * The rfcomm tty device will possibly retain even when conn
+- * is down, and sysfs doesn't support move zombie device,
+- * so we should move the device before conn device is destroyed.
+- */
+-static int __match_tty(struct device *dev, void *data)
+-{
+- return !strncmp(dev_name(dev), "rfcomm", 6);
+-}
+-
+ void hci_conn_init_sysfs(struct hci_conn *conn)
+ {
+ struct hci_dev *hdev = conn->hdev;
+@@ -73,10 +63,13 @@ void hci_conn_del_sysfs(struct hci_conn *conn)
+ return;
+ }
+
++ /* If there are devices using the connection as parent, reset it to NULL
++ * before unregistering the device.
++ */
+ while (1) {
+ struct device *dev;
+
+- dev = device_find_child(&conn->dev, NULL, __match_tty);
++ dev = device_find_any_child(&conn->dev);
+ if (!dev)
+ break;
+ device_move(dev, NULL, DPM_ORDER_DEV_LAST);
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 7a83e400ac77a0..5e2d9758bd3c1c 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -35,6 +35,7 @@ struct iso_conn {
+ struct sk_buff *rx_skb;
+ __u32 rx_len;
+ __u16 tx_sn;
++ struct kref ref;
+ };
+
+ #define iso_conn_lock(c) spin_lock(&(c)->lock)
+@@ -93,6 +94,49 @@ static struct sock *iso_get_sock(bdaddr_t *src, bdaddr_t *dst,
+ #define ISO_CONN_TIMEOUT (HZ * 40)
+ #define ISO_DISCONN_TIMEOUT (HZ * 2)
+
++static void iso_conn_free(struct kref *ref)
++{
++ struct iso_conn *conn = container_of(ref, struct iso_conn, ref);
++
++ BT_DBG("conn %p", conn);
++
++ if (conn->sk)
++ iso_pi(conn->sk)->conn = NULL;
++
++ if (conn->hcon) {
++ conn->hcon->iso_data = NULL;
++ hci_conn_drop(conn->hcon);
++ }
++
++ /* Ensure no more work items will run since hci_conn has been dropped */
++ disable_delayed_work_sync(&conn->timeout_work);
++
++ kfree(conn);
++}
++
++static void iso_conn_put(struct iso_conn *conn)
++{
++ if (!conn)
++ return;
++
++ BT_DBG("conn %p refcnt %d", conn, kref_read(&conn->ref));
++
++ kref_put(&conn->ref, iso_conn_free);
++}
++
++static struct iso_conn *iso_conn_hold_unless_zero(struct iso_conn *conn)
++{
++ if (!conn)
++ return NULL;
++
++ BT_DBG("conn %p refcnt %u", conn, kref_read(&conn->ref));
++
++ if (!kref_get_unless_zero(&conn->ref))
++ return NULL;
++
++ return conn;
++}
++
+ static struct sock *iso_sock_hold(struct iso_conn *conn)
+ {
+ if (!conn || !bt_sock_linked(&iso_sk_list, conn->sk))
+@@ -109,9 +153,14 @@ static void iso_sock_timeout(struct work_struct *work)
+ timeout_work.work);
+ struct sock *sk;
+
++ conn = iso_conn_hold_unless_zero(conn);
++ if (!conn)
++ return;
++
+ iso_conn_lock(conn);
+ sk = iso_sock_hold(conn);
+ iso_conn_unlock(conn);
++ iso_conn_put(conn);
+
+ if (!sk)
+ return;
+@@ -149,9 +198,14 @@ static struct iso_conn *iso_conn_add(struct hci_conn *hcon)
+ {
+ struct iso_conn *conn = hcon->iso_data;
+
++ conn = iso_conn_hold_unless_zero(conn);
+ if (conn) {
+- if (!conn->hcon)
++ if (!conn->hcon) {
++ iso_conn_lock(conn);
+ conn->hcon = hcon;
++ iso_conn_unlock(conn);
++ }
++ iso_conn_put(conn);
+ return conn;
+ }
+
+@@ -159,6 +213,7 @@ static struct iso_conn *iso_conn_add(struct hci_conn *hcon)
+ if (!conn)
+ return NULL;
+
++ kref_init(&conn->ref);
+ spin_lock_init(&conn->lock);
+ INIT_DELAYED_WORK(&conn->timeout_work, iso_sock_timeout);
+
+@@ -178,17 +233,15 @@ static void iso_chan_del(struct sock *sk, int err)
+ struct sock *parent;
+
+ conn = iso_pi(sk)->conn;
++ iso_pi(sk)->conn = NULL;
+
+ BT_DBG("sk %p, conn %p, err %d", sk, conn, err);
+
+ if (conn) {
+ iso_conn_lock(conn);
+ conn->sk = NULL;
+- iso_pi(sk)->conn = NULL;
+ iso_conn_unlock(conn);
+-
+- if (conn->hcon)
+- hci_conn_drop(conn->hcon);
++ iso_conn_put(conn);
+ }
+
+ sk->sk_state = BT_CLOSED;
+@@ -210,6 +263,7 @@ static void iso_conn_del(struct hci_conn *hcon, int err)
+ struct iso_conn *conn = hcon->iso_data;
+ struct sock *sk;
+
++ conn = iso_conn_hold_unless_zero(conn);
+ if (!conn)
+ return;
+
+@@ -219,20 +273,18 @@ static void iso_conn_del(struct hci_conn *hcon, int err)
+ iso_conn_lock(conn);
+ sk = iso_sock_hold(conn);
+ iso_conn_unlock(conn);
++ iso_conn_put(conn);
+
+- if (sk) {
+- lock_sock(sk);
+- iso_sock_clear_timer(sk);
+- iso_chan_del(sk, err);
+- release_sock(sk);
+- sock_put(sk);
++ if (!sk) {
++ iso_conn_put(conn);
++ return;
+ }
+
+- /* Ensure no more work items will run before freeing conn. */
+- cancel_delayed_work_sync(&conn->timeout_work);
+-
+- hcon->iso_data = NULL;
+- kfree(conn);
++ lock_sock(sk);
++ iso_sock_clear_timer(sk);
++ iso_chan_del(sk, err);
++ release_sock(sk);
++ sock_put(sk);
+ }
+
+ static int __iso_chan_add(struct iso_conn *conn, struct sock *sk,
+@@ -652,6 +704,8 @@ static void iso_sock_destruct(struct sock *sk)
+ {
+ BT_DBG("sk %p", sk);
+
++ iso_conn_put(iso_pi(sk)->conn);
++
+ skb_queue_purge(&sk->sk_receive_queue);
+ skb_queue_purge(&sk->sk_write_queue);
+ }
+@@ -711,6 +765,7 @@ static void iso_sock_disconn(struct sock *sk)
+ */
+ if (bis_sk) {
+ hcon->state = BT_OPEN;
++ hcon->iso_data = NULL;
+ iso_pi(sk)->conn->hcon = NULL;
+ iso_sock_clear_timer(sk);
+ iso_chan_del(sk, bt_to_errno(hcon->abort_reason));
+@@ -720,7 +775,6 @@ static void iso_sock_disconn(struct sock *sk)
+ }
+
+ sk->sk_state = BT_DISCONN;
+- iso_sock_set_timer(sk, ISO_DISCONN_TIMEOUT);
+ iso_conn_lock(iso_pi(sk)->conn);
+ hci_conn_drop(iso_pi(sk)->conn->hcon);
+ iso_pi(sk)->conn->hcon = NULL;
+@@ -1338,6 +1392,13 @@ static void iso_conn_big_sync(struct sock *sk)
+ if (!hdev)
+ return;
+
++ /* hci_le_big_create_sync requires hdev lock to be held, since
++ * it enqueues the HCI LE BIG Create Sync command via
++ * hci_cmd_sync_queue_once, which checks hdev flags that might
++ * change.
++ */
++ hci_dev_lock(hdev);
++
+ if (!test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) {
+ err = hci_le_big_create_sync(hdev, iso_pi(sk)->conn->hcon,
+ &iso_pi(sk)->qos,
+@@ -1348,6 +1409,8 @@ static void iso_conn_big_sync(struct sock *sk)
+ bt_dev_err(hdev, "hci_le_big_create_sync: %d",
+ err);
+ }
++
++ hci_dev_unlock(hdev);
+ }
+
+ static int iso_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+@@ -1942,6 +2005,7 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+
+ if (sk) {
+ int err;
++ struct hci_conn *hcon = iso_pi(sk)->conn->hcon;
+
+ iso_pi(sk)->qos.bcast.encryption = ev2->encryption;
+
+@@ -1950,7 +2014,8 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+
+ if (!test_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags) &&
+ !test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) {
+- err = hci_le_big_create_sync(hdev, NULL,
++ err = hci_le_big_create_sync(hdev,
++ hcon,
+ &iso_pi(sk)->qos,
+ iso_pi(sk)->sync_handle,
+ iso_pi(sk)->bc_num_bis,
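+
The iso_conn changes hang together around one idiom: every user grabs a
temporary reference with a hold-unless-zero helper before touching the
object, and a failed grab means the object is already being torn down.
A userspace model of that refcount pattern (the kernel uses kref; names
here are hypothetical):

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct obj { atomic_int ref; };

    static struct obj *obj_hold_unless_zero(struct obj *o)
    {
        int v = atomic_load(&o->ref);

        while (v > 0)
            if (atomic_compare_exchange_weak(&o->ref, &v, v + 1))
                return o;              /* grabbed a temporary reference */
        return NULL;                   /* already on its way to free */
    }

    static void obj_put(struct obj *o)
    {
        if (atomic_fetch_sub(&o->ref, 1) == 1) {
            printf("freeing\n");
            free(o);
        }
    }

    int main(void)
    {
        struct obj *o = malloc(sizeof(*o));

        if (!o)
            return 1;
        atomic_init(&o->ref, 1);
        if (obj_hold_unless_zero(o))
            obj_put(o);                /* drop the temporary reference */
        obj_put(o);                    /* drop the last reference: frees */
        return 0;
    }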
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index a429661b676a83..2343e15f8938ec 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -1317,7 +1317,8 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
+ struct mgmt_mode *cp;
+
+ /* Make sure cmd still outstanding. */
+- if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
+ return;
+
+ cp = cmd->param;
+@@ -1350,7 +1351,13 @@ static void mgmt_set_powered_complete(struct hci_dev *hdev, void *data, int err)
+ static int set_powered_sync(struct hci_dev *hdev, void *data)
+ {
+ struct mgmt_pending_cmd *cmd = data;
+- struct mgmt_mode *cp = cmd->param;
++ struct mgmt_mode *cp;
++
++ /* Make sure cmd still outstanding. */
++ if (cmd != pending_find(MGMT_OP_SET_POWERED, hdev))
++ return -ECANCELED;
++
++ cp = cmd->param;
+
+ BT_DBG("%s", hdev->name);
+
+@@ -1510,7 +1517,8 @@ static void mgmt_set_discoverable_complete(struct hci_dev *hdev, void *data,
+ bt_dev_dbg(hdev, "err %d", err);
+
+ /* Make sure cmd still outstanding. */
+- if (cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_DISCOVERABLE, hdev))
+ return;
+
+ hci_dev_lock(hdev);
+@@ -1684,7 +1692,8 @@ static void mgmt_set_connectable_complete(struct hci_dev *hdev, void *data,
+ bt_dev_dbg(hdev, "err %d", err);
+
+ /* Make sure cmd still outstanding. */
+- if (cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_CONNECTABLE, hdev))
+ return;
+
+ hci_dev_lock(hdev);
+@@ -1916,7 +1925,7 @@ static void set_ssp_complete(struct hci_dev *hdev, void *data, int err)
+ bool changed;
+
+ /* Make sure cmd still outstanding. */
+- if (cmd != pending_find(MGMT_OP_SET_SSP, hdev))
++ if (err == -ECANCELED || cmd != pending_find(MGMT_OP_SET_SSP, hdev))
+ return;
+
+ if (err) {
+@@ -3782,7 +3791,8 @@ static void set_name_complete(struct hci_dev *hdev, void *data, int err)
+
+ bt_dev_dbg(hdev, "err %d", err);
+
+- if (cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_LOCAL_NAME, hdev))
+ return;
+
+ if (status) {
+@@ -3957,7 +3967,8 @@ static void set_default_phy_complete(struct hci_dev *hdev, void *data, int err)
+ struct sk_buff *skb = cmd->skb;
+ u8 status = mgmt_status(err);
+
+- if (cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_SET_PHY_CONFIGURATION, hdev))
+ return;
+
+ if (!status) {
+@@ -5848,13 +5859,16 @@ static void start_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ struct mgmt_pending_cmd *cmd = data;
+
++ bt_dev_dbg(hdev, "err %d", err);
++
++ if (err == -ECANCELED)
++ return;
++
+ if (cmd != pending_find(MGMT_OP_START_DISCOVERY, hdev) &&
+ cmd != pending_find(MGMT_OP_START_LIMITED_DISCOVERY, hdev) &&
+ cmd != pending_find(MGMT_OP_START_SERVICE_DISCOVERY, hdev))
+ return;
+
+- bt_dev_dbg(hdev, "err %d", err);
+-
+ mgmt_cmd_complete(cmd->sk, cmd->index, cmd->opcode, mgmt_status(err),
+ cmd->param, 1);
+ mgmt_pending_remove(cmd);
+@@ -6087,7 +6101,8 @@ static void stop_discovery_complete(struct hci_dev *hdev, void *data, int err)
+ {
+ struct mgmt_pending_cmd *cmd = data;
+
+- if (cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_STOP_DISCOVERY, hdev))
+ return;
+
+ bt_dev_dbg(hdev, "err %d", err);
+@@ -8078,7 +8093,8 @@ static void read_local_oob_ext_data_complete(struct hci_dev *hdev, void *data,
+ u8 status = mgmt_status(err);
+ u16 eir_len;
+
+- if (cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
++ if (err == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_READ_LOCAL_OOB_EXT_DATA, hdev))
+ return;
+
+ if (!status) {
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index f48250e3f2e103..8af1bf518321fd 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -729,7 +729,8 @@ static int rfcomm_sock_getsockopt_old(struct socket *sock, int optname, char __u
+ struct sock *l2cap_sk;
+ struct l2cap_conn *conn;
+ struct rfcomm_conninfo cinfo;
+- int len, err = 0;
++ int err = 0;
++ size_t len;
+ u32 opt;
+
+ BT_DBG("sk %p", sk);
+@@ -783,7 +784,7 @@ static int rfcomm_sock_getsockopt_old(struct socket *sock, int optname, char __u
+ cinfo.hci_handle = conn->hcon->handle;
+ memcpy(cinfo.dev_class, conn->hcon->dev_class, 3);
+
+- len = min_t(unsigned int, len, sizeof(cinfo));
++ len = min(len, sizeof(cinfo));
+ if (copy_to_user(optval, (char *) &cinfo, len))
+ err = -EFAULT;
+
+@@ -802,7 +803,8 @@ static int rfcomm_sock_getsockopt(struct socket *sock, int level, int optname, c
+ {
+ struct sock *sk = sock->sk;
+ struct bt_security sec;
+- int len, err = 0;
++ int err = 0;
++ size_t len;
+
+ BT_DBG("sk %p", sk);
+
+@@ -827,7 +829,7 @@ static int rfcomm_sock_getsockopt(struct socket *sock, int level, int optname, c
+ sec.level = rfcomm_pi(sk)->sec_level;
+ sec.key_size = 0;
+
+- len = min_t(unsigned int, len, sizeof(sec));
++ len = min(len, sizeof(sec));
+ if (copy_to_user(optval, (char *) &sec, len))
+ err = -EFAULT;
+
+diff --git a/net/core/filter.c b/net/core/filter.c
+index fb56567c551ed6..9a459213d283f1 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2621,18 +2621,16 @@ BPF_CALL_2(bpf_msg_cork_bytes, struct sk_msg *, msg, u32, bytes)
+
+ static void sk_msg_reset_curr(struct sk_msg *msg)
+ {
+- u32 i = msg->sg.start;
+- u32 len = 0;
+-
+- do {
+- len += sk_msg_elem(msg, i)->length;
+- sk_msg_iter_var_next(i);
+- if (len >= msg->sg.size)
+- break;
+- } while (i != msg->sg.end);
++ if (!msg->sg.size) {
++ msg->sg.curr = msg->sg.start;
++ msg->sg.copybreak = 0;
++ } else {
++ u32 i = msg->sg.end;
+
+- msg->sg.curr = i;
+- msg->sg.copybreak = 0;
++ sk_msg_iter_var_prev(i);
++ msg->sg.curr = i;
++ msg->sg.copybreak = msg->sg.data[i].length;
++ }
+ }
+
+ static const struct bpf_func_proto bpf_msg_cork_bytes_proto = {
+@@ -2795,7 +2793,7 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+ sk_msg_iter_var_next(i);
+ } while (i != msg->sg.end);
+
+- if (start >= offset + l)
++ if (start > offset + l)
+ return -EINVAL;
+
+ space = MAX_MSG_FRAGS - sk_msg_elem_used(msg);
+@@ -2820,6 +2818,8 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+
+ raw = page_address(page);
+
++ if (i == msg->sg.end)
++ sk_msg_iter_var_prev(i);
+ psge = sk_msg_elem(msg, i);
+ front = start - offset;
+ back = psge->length - front;
+@@ -2836,7 +2836,13 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+ }
+
+ put_page(sg_page(psge));
+- } else if (start - offset) {
++ new = i;
++ goto place_new;
++ }
++
++ if (start - offset) {
++ if (i == msg->sg.end)
++ sk_msg_iter_var_prev(i);
+ psge = sk_msg_elem(msg, i);
+ rsge = sk_msg_elem_cpy(msg, i);
+
+@@ -2847,39 +2853,44 @@ BPF_CALL_4(bpf_msg_push_data, struct sk_msg *, msg, u32, start,
+ sk_msg_iter_var_next(i);
+ sg_unmark_end(psge);
+ sg_unmark_end(&rsge);
+- sk_msg_iter_next(msg, end);
+ }
+
+ /* Slot(s) to place newly allocated data */
++ sk_msg_iter_next(msg, end);
+ new = i;
++ sk_msg_iter_var_next(i);
++
++ if (i == msg->sg.end) {
++ if (!rsge.length)
++ goto place_new;
++ sk_msg_iter_next(msg, end);
++ goto place_new;
++ }
+
+ /* Shift one or two slots as needed */
+- if (!copy) {
+- sge = sk_msg_elem_cpy(msg, i);
++ sge = sk_msg_elem_cpy(msg, new);
++ sg_unmark_end(&sge);
+
++ nsge = sk_msg_elem_cpy(msg, i);
++ if (rsge.length) {
+ sk_msg_iter_var_next(i);
+- sg_unmark_end(&sge);
++ nnsge = sk_msg_elem_cpy(msg, i);
+ sk_msg_iter_next(msg, end);
++ }
+
+- nsge = sk_msg_elem_cpy(msg, i);
++ while (i != msg->sg.end) {
++ msg->sg.data[i] = sge;
++ sge = nsge;
++ sk_msg_iter_var_next(i);
+ if (rsge.length) {
+- sk_msg_iter_var_next(i);
++ nsge = nnsge;
+ nnsge = sk_msg_elem_cpy(msg, i);
+- }
+-
+- while (i != msg->sg.end) {
+- msg->sg.data[i] = sge;
+- sge = nsge;
+- sk_msg_iter_var_next(i);
+- if (rsge.length) {
+- nsge = nnsge;
+- nnsge = sk_msg_elem_cpy(msg, i);
+- } else {
+- nsge = sk_msg_elem_cpy(msg, i);
+- }
++ } else {
++ nsge = sk_msg_elem_cpy(msg, i);
+ }
+ }
+
++place_new:
+ /* Place newly allocated data buffer */
+ sk_mem_charge(msg->sk, len);
+ msg->sg.size += len;
+@@ -2908,8 +2919,10 @@ static const struct bpf_func_proto bpf_msg_push_data_proto = {
+
+ static void sk_msg_shift_left(struct sk_msg *msg, int i)
+ {
++ struct scatterlist *sge = sk_msg_elem(msg, i);
+ int prev;
+
++ put_page(sg_page(sge));
+ do {
+ prev = i;
+ sk_msg_iter_var_next(i);
+@@ -2946,6 +2959,9 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ if (unlikely(flags))
+ return -EINVAL;
+
++ if (unlikely(len == 0))
++ return 0;
++
+ /* First find the starting scatterlist element */
+ i = msg->sg.start;
+ do {
+@@ -2958,7 +2974,7 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ } while (i != msg->sg.end);
+
+ /* Bounds checks: start and pop must be inside message */
+- if (start >= offset + l || last >= msg->sg.size)
++ if (start >= offset + l || last > msg->sg.size)
+ return -EINVAL;
+
+ space = MAX_MSG_FRAGS - sk_msg_elem_used(msg);
+@@ -2987,12 +3003,12 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ */
+ if (start != offset) {
+ struct scatterlist *nsge, *sge = sk_msg_elem(msg, i);
+- int a = start;
++ int a = start - offset;
+ int b = sge->length - pop - a;
+
+ sk_msg_iter_var_next(i);
+
+- if (pop < sge->length - a) {
++ if (b > 0) {
+ if (space) {
+ sge->length = a;
+ sk_msg_shift_right(msg, i);
+@@ -3011,7 +3027,6 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ if (unlikely(!page))
+ return -ENOMEM;
+
+- sge->length = a;
+ orig = sg_page(sge);
+ from = sg_virt(sge);
+ to = page_address(page);
+@@ -3021,7 +3036,7 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ put_page(orig);
+ }
+ pop = 0;
+- } else if (pop >= sge->length - a) {
++ } else {
+ pop -= (sge->length - a);
+ sge->length = a;
+ }
+@@ -3055,7 +3070,6 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ pop -= sge->length;
+ sk_msg_shift_left(msg, i);
+ }
+- sk_msg_iter_var_next(i);
+ }
+
+ sk_mem_uncharge(msg->sk, len - pop);
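+
Among the sk_msg fixes, the simplest is the boundary test in
bpf_msg_push_data(): inserting at start == offset + l, i.e. exactly at
the end of the scanned data, is legitimate, so the rejection must use
'>' rather than '>='. A trivial standalone demonstration (hypothetical
helper):

    #include <stdio.h>

    /* was: if (start >= offset + l) return -EINVAL; */
    static int check_push(unsigned int start, unsigned int offset,
                          unsigned int l)
    {
        if (start > offset + l)
            return -22;                /* -EINVAL */
        return 0;
    }

    int main(void)
    {
        /* scanned element covers bytes [offset, offset + l) */
        printf("push at end:  %d\n", check_push(10, 4, 6)); /* now allowed: 0 */
        printf("push past it: %d\n", check_push(11, 4, 6)); /* rejected: -22 */
        return 0;
    }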
+diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
+index 1cb954f2d39e82..d2baa1af9df09e 100644
+--- a/net/core/netdev-genl.c
++++ b/net/core/netdev-genl.c
+@@ -215,6 +215,7 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
+ return -ENOMEM;
+
+ rtnl_lock();
++ rcu_read_lock();
+
+ napi = napi_by_id(napi_id);
+ if (napi) {
+@@ -224,6 +225,7 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
+ err = -ENOENT;
+ }
+
++ rcu_read_unlock();
+ rtnl_unlock();
+
+ if (err)
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index b1dcbd3be89e10..e90fbab703b2db 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -1117,9 +1117,9 @@ static void sk_psock_strp_data_ready(struct sock *sk)
+ if (tls_sw_has_ctx_rx(sk)) {
+ psock->saved_data_ready(sk);
+ } else {
+- write_lock_bh(&sk->sk_callback_lock);
++ read_lock_bh(&sk->sk_callback_lock);
+ strp_data_ready(&psock->strp);
+- write_unlock_bh(&sk->sk_callback_lock);
++ read_unlock_bh(&sk->sk_callback_lock);
+ }
+ }
+ rcu_read_unlock();
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index ebdfd5b64e17a2..f630d6645636dd 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -268,6 +268,8 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master)
+ skb->dev = master->dev;
+ skb->priority = TC_PRIO_CONTROL;
+
++ skb_reset_network_header(skb);
++ skb_reset_transport_header(skb);
+ if (dev_hard_header(skb, skb->dev, ETH_P_PRP,
+ hsr->sup_multicast_addr,
+ skb->dev->dev_addr, skb->len) <= 0)
+@@ -275,8 +277,6 @@ static struct sk_buff *hsr_init_skb(struct hsr_port *master)
+
+ skb_reset_mac_header(skb);
+ skb_reset_mac_len(skb);
+- skb_reset_network_header(skb);
+- skb_reset_transport_header(skb);
+
+ return skb;
+ out:
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index 2b698f8419fe2b..fe7947f7740623 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -1189,7 +1189,7 @@ static void reqsk_timer_handler(struct timer_list *t)
+
+ drop:
+ __inet_csk_reqsk_queue_drop(sk_listener, oreq, true);
+- reqsk_put(req);
++ reqsk_put(oreq);
+ }
+
+ static bool reqsk_queue_hash_req(struct request_sock *req,
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 089864c6a35eec..449a2ac40bdc00 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -137,7 +137,7 @@ static struct mr_table *ipmr_mr_table_iter(struct net *net,
+ return ret;
+ }
+
+-static struct mr_table *ipmr_get_table(struct net *net, u32 id)
++static struct mr_table *__ipmr_get_table(struct net *net, u32 id)
+ {
+ struct mr_table *mrt;
+
+@@ -148,6 +148,16 @@ static struct mr_table *ipmr_get_table(struct net *net, u32 id)
+ return NULL;
+ }
+
++static struct mr_table *ipmr_get_table(struct net *net, u32 id)
++{
++ struct mr_table *mrt;
++
++ rcu_read_lock();
++ mrt = __ipmr_get_table(net, id);
++ rcu_read_unlock();
++ return mrt;
++}
++
+ static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4,
+ struct mr_table **mrt)
+ {
+@@ -189,7 +199,7 @@ static int ipmr_rule_action(struct fib_rule *rule, struct flowi *flp,
+
+ arg->table = fib_rule_get_table(rule, arg);
+
+- mrt = ipmr_get_table(rule->fr_net, arg->table);
++ mrt = __ipmr_get_table(rule->fr_net, arg->table);
+ if (!mrt)
+ return -EAGAIN;
+ res->mrt = mrt;
+@@ -315,6 +325,8 @@ static struct mr_table *ipmr_get_table(struct net *net, u32 id)
+ return net->ipv4.mrt;
+ }
+
++#define __ipmr_get_table ipmr_get_table
++
+ static int ipmr_fib_lookup(struct net *net, struct flowi4 *flp4,
+ struct mr_table **mrt)
+ {
+@@ -403,7 +415,7 @@ static struct mr_table *ipmr_new_table(struct net *net, u32 id)
+ if (id != RT_TABLE_DEFAULT && id >= 1000000000)
+ return ERR_PTR(-EINVAL);
+
+- mrt = ipmr_get_table(net, id);
++ mrt = __ipmr_get_table(net, id);
+ if (mrt)
+ return mrt;
+
+@@ -1374,7 +1386,7 @@ int ip_mroute_setsockopt(struct sock *sk, int optname, sockptr_t optval,
+ goto out_unlock;
+ }
+
+- mrt = ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
++ mrt = __ipmr_get_table(net, raw_sk(sk)->ipmr_table ? : RT_TABLE_DEFAULT);
+ if (!mrt) {
+ ret = -ENOENT;
+ goto out_unlock;
+@@ -2262,11 +2274,13 @@ int ipmr_get_route(struct net *net, struct sk_buff *skb,
+ struct mr_table *mrt;
+ int err;
+
+- mrt = ipmr_get_table(net, RT_TABLE_DEFAULT);
+- if (!mrt)
++ rcu_read_lock();
++ mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT);
++ if (!mrt) {
++ rcu_read_unlock();
+ return -ENOENT;
++ }
+
+- rcu_read_lock();
+ cache = ipmr_cache_find(mrt, saddr, daddr);
+ if (!cache && skb->dev) {
+ int vif = ipmr_find_vif(mrt, skb->dev);
+@@ -2550,7 +2564,7 @@ static int ipmr_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ grp = tb[RTA_DST] ? nla_get_in_addr(tb[RTA_DST]) : 0;
+ tableid = tb[RTA_TABLE] ? nla_get_u32(tb[RTA_TABLE]) : 0;
+
+- mrt = ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT);
++ mrt = __ipmr_get_table(net, tableid ? tableid : RT_TABLE_DEFAULT);
+ if (!mrt) {
+ err = -ENOENT;
+ goto errout_free;
+@@ -2604,7 +2618,7 @@ static int ipmr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb)
+ if (filter.table_id) {
+ struct mr_table *mrt;
+
+- mrt = ipmr_get_table(sock_net(skb->sk), filter.table_id);
++ mrt = __ipmr_get_table(sock_net(skb->sk), filter.table_id);
+ if (!mrt) {
+ if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IPMR)
+ return skb->len;
+@@ -2712,7 +2726,7 @@ static int rtm_to_ipmr_mfcc(struct net *net, struct nlmsghdr *nlh,
+ break;
+ }
+ }
+- mrt = ipmr_get_table(net, tblid);
++ mrt = __ipmr_get_table(net, tblid);
+ if (!mrt) {
+ ret = -ENOENT;
+ goto out;
+@@ -2920,13 +2934,15 @@ static void *ipmr_vif_seq_start(struct seq_file *seq, loff_t *pos)
+ struct net *net = seq_file_net(seq);
+ struct mr_table *mrt;
+
+- mrt = ipmr_get_table(net, RT_TABLE_DEFAULT);
+- if (!mrt)
++ rcu_read_lock();
++ mrt = __ipmr_get_table(net, RT_TABLE_DEFAULT);
++ if (!mrt) {
++ rcu_read_unlock();
+ return ERR_PTR(-ENOENT);
++ }
+
+ iter->mrt = mrt;
+
+- rcu_read_lock();
+ return mr_vif_seq_start(seq, pos);
+ }
+
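The ipmr hunks above split the table lookup in two: __ipmr_get_table()
requires the caller to already be inside an RCU read-side section, while the
plain ipmr_get_table() wrapper takes and drops the lock for unlocked callers
(and a #define aliases the two in the single-table build). A sketch of the
locked/unlocked split, with a mutex standing in for RCU and all names
invented:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t tbl_lock = PTHREAD_MUTEX_INITIALIZER;

    struct table { unsigned int id; };
    static struct table tables[2] = { { 0 }, { 254 } };

    /* Caller must hold tbl_lock (rcu_read_lock() in the kernel). */
    static struct table *__get_table(unsigned int id)
    {
        for (unsigned int i = 0; i < 2; i++)
            if (tables[i].id == id)
                return &tables[i];
        return NULL;
    }

    /* Convenience wrapper for callers holding no lock. Returning the
     * pointer after unlock is fine here only because the array is
     * static, just as the kernel tables live as long as the netns.
     */
    static struct table *get_table(unsigned int id)
    {
        struct table *t;

        pthread_mutex_lock(&tbl_lock);
        t = __get_table(id);
        pthread_mutex_unlock(&tbl_lock);
        return t;
    }

    int main(void)
    {
        printf("table 254 %sfound\n", get_table(254) ? "" : "not ");
        return 0;
    }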
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 94dceac528842c..01115e1a34cb66 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -2570,6 +2570,24 @@ static struct inet6_dev *addrconf_add_dev(struct net_device *dev)
+ return idev;
+ }
+
++static void delete_tempaddrs(struct inet6_dev *idev,
++ struct inet6_ifaddr *ifp)
++{
++ struct inet6_ifaddr *ift, *tmp;
++
++ write_lock_bh(&idev->lock);
++ list_for_each_entry_safe(ift, tmp, &idev->tempaddr_list, tmp_list) {
++ if (ift->ifpub != ifp)
++ continue;
++
++ in6_ifa_hold(ift);
++ write_unlock_bh(&idev->lock);
++ ipv6_del_addr(ift);
++ write_lock_bh(&idev->lock);
++ }
++ write_unlock_bh(&idev->lock);
++}
++
+ static void manage_tempaddrs(struct inet6_dev *idev,
+ struct inet6_ifaddr *ifp,
+ __u32 valid_lft, __u32 prefered_lft,
+@@ -3124,11 +3142,12 @@ static int inet6_addr_del(struct net *net, int ifindex, u32 ifa_flags,
+ in6_ifa_hold(ifp);
+ read_unlock_bh(&idev->lock);
+
+- if (!(ifp->flags & IFA_F_TEMPORARY) &&
+- (ifa_flags & IFA_F_MANAGETEMPADDR))
+- manage_tempaddrs(idev, ifp, 0, 0, false,
+- jiffies);
+ ipv6_del_addr(ifp);
++
++ if (!(ifp->flags & IFA_F_TEMPORARY) &&
++ (ifp->flags & IFA_F_MANAGETEMPADDR))
++ delete_tempaddrs(idev, ifp);
++
+ addrconf_verify_rtnl(net);
+ if (ipv6_addr_is_multicast(pfx)) {
+ ipv6_mc_config(net->ipv6.mc_autojoin_sk,
+@@ -4952,14 +4971,12 @@ static int inet6_addr_modify(struct net *net, struct inet6_ifaddr *ifp,
+ }
+
+ if (was_managetempaddr || ifp->flags & IFA_F_MANAGETEMPADDR) {
+- if (was_managetempaddr &&
+- !(ifp->flags & IFA_F_MANAGETEMPADDR)) {
+- cfg->valid_lft = 0;
+- cfg->preferred_lft = 0;
+- }
+- manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft,
+- cfg->preferred_lft, !was_managetempaddr,
+- jiffies);
++ if (was_managetempaddr && !(ifp->flags & IFA_F_MANAGETEMPADDR))
++ delete_tempaddrs(ifp->idev, ifp);
++ else
++ manage_tempaddrs(ifp->idev, ifp, cfg->valid_lft,
++ cfg->preferred_lft, !was_managetempaddr,
++ jiffies);
+ }
+
+ addrconf_verify_rtnl(net);
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index eb111d20615c62..9a1c59275a1099 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -1190,8 +1190,8 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ while (sibling) {
+ if (sibling->fib6_metric == rt->fib6_metric &&
+ rt6_qualify_for_ecmp(sibling)) {
+- list_add_tail(&rt->fib6_siblings,
+- &sibling->fib6_siblings);
++ list_add_tail_rcu(&rt->fib6_siblings,
++ &sibling->fib6_siblings);
+ break;
+ }
+ sibling = rcu_dereference_protected(sibling->fib6_next,
+@@ -1252,7 +1252,7 @@ static int fib6_add_rt2node(struct fib6_node *fn, struct fib6_info *rt,
+ fib6_siblings)
+ sibling->fib6_nsiblings--;
+ rt->fib6_nsiblings = 0;
+- list_del_init(&rt->fib6_siblings);
++ list_del_rcu(&rt->fib6_siblings);
+ rt6_multipath_rebalance(next_sibling);
+ return err;
+ }
+@@ -1970,7 +1970,7 @@ static void fib6_del_route(struct fib6_table *table, struct fib6_node *fn,
+ &rt->fib6_siblings, fib6_siblings)
+ sibling->fib6_nsiblings--;
+ rt->fib6_nsiblings = 0;
+- list_del_init(&rt->fib6_siblings);
++ list_del_rcu(&rt->fib6_siblings);
+ rt6_multipath_rebalance(next_sibling);
+ }
+
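The ip6_fib hunks switch the fib6_siblings list from list_del_init() to
list_del_rcu() (and the insert to list_add_tail_rcu()) because the sibling
walks elsewhere in this series become lockless. list_del_init() makes the
removed entry point at itself, so a concurrent reader parked on it never
escapes; RCU deletion leaves the entry's forward pointer aimed into the
list. A minimal hand-rolled circular-list model, not the kernel's
<linux/list.h>:

    #include <stdio.h>

    struct node { struct node *next, *prev; int val; };

    static void del_rcu(struct node *n)   /* unlink, keep n->next valid */
    {
        n->next->prev = n->prev;
        n->prev->next = n->next;
        /* n->next deliberately left pointing into the list */
    }

    static void del_init(struct node *n)  /* unlink and self-point */
    {
        del_rcu(n);
        n->next = n->prev = n;            /* a reader parked here loops */
    }

    int main(void)
    {
        struct node head = { &head, &head, 0 };
        struct node a = { 0 }, b = { 0 };

        /* build head -> a -> b -> head */
        a.val = 1; b.val = 2;
        head.next = &a; a.prev = &head;
        a.next = &b;    b.prev = &a;
        b.next = &head; head.prev = &b;

        del_rcu(&a);    /* a reader standing on 'a' still reaches 'b' */
        printf("after del_rcu, a.next->val = %d\n", a.next->val);

        del_init(&b);   /* a reader standing on 'b' would see 'b' forever */
        printf("after del_init, b.next == &b: %d\n", b.next == &b);
        return 0;
    }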
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index 2ce4ae0d8dc3b4..d5057401701c1a 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -125,7 +125,7 @@ static struct mr_table *ip6mr_mr_table_iter(struct net *net,
+ return ret;
+ }
+
+-static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
++static struct mr_table *__ip6mr_get_table(struct net *net, u32 id)
+ {
+ struct mr_table *mrt;
+
+@@ -136,6 +136,16 @@ static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
+ return NULL;
+ }
+
++static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
++{
++ struct mr_table *mrt;
++
++ rcu_read_lock();
++ mrt = __ip6mr_get_table(net, id);
++ rcu_read_unlock();
++ return mrt;
++}
++
+ static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6,
+ struct mr_table **mrt)
+ {
+@@ -177,7 +187,7 @@ static int ip6mr_rule_action(struct fib_rule *rule, struct flowi *flp,
+
+ arg->table = fib_rule_get_table(rule, arg);
+
+- mrt = ip6mr_get_table(rule->fr_net, arg->table);
++ mrt = __ip6mr_get_table(rule->fr_net, arg->table);
+ if (!mrt)
+ return -EAGAIN;
+ res->mrt = mrt;
+@@ -304,6 +314,8 @@ static struct mr_table *ip6mr_get_table(struct net *net, u32 id)
+ return net->ipv6.mrt6;
+ }
+
++#define __ip6mr_get_table ip6mr_get_table
++
+ static int ip6mr_fib_lookup(struct net *net, struct flowi6 *flp6,
+ struct mr_table **mrt)
+ {
+@@ -382,7 +394,7 @@ static struct mr_table *ip6mr_new_table(struct net *net, u32 id)
+ {
+ struct mr_table *mrt;
+
+- mrt = ip6mr_get_table(net, id);
++ mrt = __ip6mr_get_table(net, id);
+ if (mrt)
+ return mrt;
+
+@@ -411,13 +423,15 @@ static void *ip6mr_vif_seq_start(struct seq_file *seq, loff_t *pos)
+ struct net *net = seq_file_net(seq);
+ struct mr_table *mrt;
+
+- mrt = ip6mr_get_table(net, RT6_TABLE_DFLT);
+- if (!mrt)
++ rcu_read_lock();
++ mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT);
++ if (!mrt) {
++ rcu_read_unlock();
+ return ERR_PTR(-ENOENT);
++ }
+
+ iter->mrt = mrt;
+
+- rcu_read_lock();
+ return mr_vif_seq_start(seq, pos);
+ }
+
+@@ -2275,11 +2289,13 @@ int ip6mr_get_route(struct net *net, struct sk_buff *skb, struct rtmsg *rtm,
+ struct mfc6_cache *cache;
+ struct rt6_info *rt = dst_rt6_info(skb_dst(skb));
+
+- mrt = ip6mr_get_table(net, RT6_TABLE_DFLT);
+- if (!mrt)
++ rcu_read_lock();
++ mrt = __ip6mr_get_table(net, RT6_TABLE_DFLT);
++ if (!mrt) {
++ rcu_read_unlock();
+ return -ENOENT;
++ }
+
+- rcu_read_lock();
+ cache = ip6mr_cache_find(mrt, &rt->rt6i_src.addr, &rt->rt6i_dst.addr);
+ if (!cache && skb->dev) {
+ int vif = ip6mr_find_vif(mrt, skb->dev);
+@@ -2559,7 +2575,7 @@ static int ip6mr_rtm_getroute(struct sk_buff *in_skb, struct nlmsghdr *nlh,
+ grp = nla_get_in6_addr(tb[RTA_DST]);
+ tableid = tb[RTA_TABLE] ? nla_get_u32(tb[RTA_TABLE]) : 0;
+
+- mrt = ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT);
++ mrt = __ip6mr_get_table(net, tableid ?: RT_TABLE_DEFAULT);
+ if (!mrt) {
+ NL_SET_ERR_MSG_MOD(extack, "MR table does not exist");
+ return -ENOENT;
+@@ -2606,7 +2622,7 @@ static int ip6mr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb)
+ if (filter.table_id) {
+ struct mr_table *mrt;
+
+- mrt = ip6mr_get_table(sock_net(skb->sk), filter.table_id);
++ mrt = __ip6mr_get_table(sock_net(skb->sk), filter.table_id);
+ if (!mrt) {
+ if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IP6MR)
+ return skb->len;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index b4251915585f75..cff4fbbc66efb2 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -374,6 +374,7 @@ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev)
+ {
+ struct rt6_info *rt = dst_rt6_info(dst);
+ struct inet6_dev *idev = rt->rt6i_idev;
++ struct fib6_info *from;
+
+ if (idev && idev->dev != blackhole_netdev) {
+ struct inet6_dev *blackhole_idev = in6_dev_get(blackhole_netdev);
+@@ -383,6 +384,8 @@ static void ip6_dst_ifdown(struct dst_entry *dst, struct net_device *dev)
+ in6_dev_put(idev);
+ }
+ }
++ from = unrcu_pointer(xchg(&rt->from, NULL));
++ fib6_info_release(from);
+ }
+
+ static bool __rt6_check_expired(const struct rt6_info *rt)
+@@ -413,8 +416,8 @@ void fib6_select_path(const struct net *net, struct fib6_result *res,
+ struct flowi6 *fl6, int oif, bool have_oif_match,
+ const struct sk_buff *skb, int strict)
+ {
+- struct fib6_info *sibling, *next_sibling;
+ struct fib6_info *match = res->f6i;
++ struct fib6_info *sibling;
+
+ if (!match->nh && (!match->fib6_nsiblings || have_oif_match))
+ goto out;
+@@ -440,8 +443,8 @@ void fib6_select_path(const struct net *net, struct fib6_result *res,
+ if (fl6->mp_hash <= atomic_read(&match->fib6_nh->fib_nh_upper_bound))
+ goto out;
+
+- list_for_each_entry_safe(sibling, next_sibling, &match->fib6_siblings,
+- fib6_siblings) {
++ list_for_each_entry_rcu(sibling, &match->fib6_siblings,
++ fib6_siblings) {
+ const struct fib6_nh *nh = sibling->fib6_nh;
+ int nh_upper_bound;
+
+@@ -1455,7 +1458,6 @@ static DEFINE_SPINLOCK(rt6_exception_lock);
+ static void rt6_remove_exception(struct rt6_exception_bucket *bucket,
+ struct rt6_exception *rt6_ex)
+ {
+- struct fib6_info *from;
+ struct net *net;
+
+ if (!bucket || !rt6_ex)
+@@ -1467,8 +1469,6 @@ static void rt6_remove_exception(struct rt6_exception_bucket *bucket,
+ /* purge completely the exception to allow releasing the held resources:
+ * some [sk] cache may keep the dst around for unlimited time
+ */
+- from = unrcu_pointer(xchg(&rt6_ex->rt6i->from, NULL));
+- fib6_info_release(from);
+ dst_dev_put(&rt6_ex->rt6i->dst);
+
+ hlist_del_rcu(&rt6_ex->hlist);
+@@ -5195,14 +5195,18 @@ static void ip6_route_mpath_notify(struct fib6_info *rt,
+ * nexthop. Since sibling routes are always added at the end of
+ * the list, find the first sibling of the last route appended
+ */
++ rcu_read_lock();
++
+ if ((nlflags & NLM_F_APPEND) && rt_last && rt_last->fib6_nsiblings) {
+- rt = list_first_entry(&rt_last->fib6_siblings,
+- struct fib6_info,
+- fib6_siblings);
++ rt = list_first_or_null_rcu(&rt_last->fib6_siblings,
++ struct fib6_info,
++ fib6_siblings);
+ }
+
+ if (rt)
+ inet6_rt_notify(RTM_NEWROUTE, rt, info, nlflags);
++
++ rcu_read_unlock();
+ }
+
+ static bool ip6_route_mpath_should_notify(const struct fib6_info *rt)
+@@ -5547,17 +5551,21 @@ static size_t rt6_nlmsg_size(struct fib6_info *f6i)
+ nexthop_for_each_fib6_nh(f6i->nh, rt6_nh_nlmsg_size,
+ &nexthop_len);
+ } else {
+- struct fib6_info *sibling, *next_sibling;
+ struct fib6_nh *nh = f6i->fib6_nh;
++ struct fib6_info *sibling;
+
+ nexthop_len = 0;
+ if (f6i->fib6_nsiblings) {
+ rt6_nh_nlmsg_size(nh, &nexthop_len);
+
+- list_for_each_entry_safe(sibling, next_sibling,
+- &f6i->fib6_siblings, fib6_siblings) {
++ rcu_read_lock();
++
++ list_for_each_entry_rcu(sibling, &f6i->fib6_siblings,
++ fib6_siblings) {
+ rt6_nh_nlmsg_size(sibling->fib6_nh, &nexthop_len);
+ }
++
++ rcu_read_unlock();
+ }
+ nexthop_len += lwtunnel_get_encap_size(nh->fib_nh_lws);
+ }
+@@ -5721,7 +5729,7 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ lwtunnel_fill_encap(skb, dst->lwtstate, RTA_ENCAP, RTA_ENCAP_TYPE) < 0)
+ goto nla_put_failure;
+ } else if (rt->fib6_nsiblings) {
+- struct fib6_info *sibling, *next_sibling;
++ struct fib6_info *sibling;
+ struct nlattr *mp;
+
+ mp = nla_nest_start_noflag(skb, RTA_MULTIPATH);
+@@ -5733,14 +5741,21 @@ static int rt6_fill_node(struct net *net, struct sk_buff *skb,
+ 0) < 0)
+ goto nla_put_failure;
+
+- list_for_each_entry_safe(sibling, next_sibling,
+- &rt->fib6_siblings, fib6_siblings) {
++ rcu_read_lock();
++
++ list_for_each_entry_rcu(sibling, &rt->fib6_siblings,
++ fib6_siblings) {
+ if (fib_add_nexthop(skb, &sibling->fib6_nh->nh_common,
+ sibling->fib6_nh->fib_nh_weight,
+- AF_INET6, 0) < 0)
++ AF_INET6, 0) < 0) {
++ rcu_read_unlock();
++
+ goto nla_put_failure;
++ }
+ }
+
++ rcu_read_unlock();
++
+ nla_nest_end(skb, mp);
+ } else if (rt->nh) {
+ if (nla_put_u32(skb, RTA_NH_ID, rt->nh->id))
+@@ -6177,7 +6192,7 @@ void inet6_rt_notify(int event, struct fib6_info *rt, struct nl_info *info,
+ err = -ENOBUFS;
+ seq = info->nlh ? info->nlh->nlmsg_seq : 0;
+
+- skb = nlmsg_new(rt6_nlmsg_size(rt), gfp_any());
++ skb = nlmsg_new(rt6_nlmsg_size(rt), GFP_ATOMIC);
+ if (!skb)
+ goto errout;
+
+@@ -6190,7 +6205,7 @@ void inet6_rt_notify(int event, struct fib6_info *rt, struct nl_info *info,
+ goto errout;
+ }
+ rtnl_notify(skb, net, info->portid, RTNLGRP_IPV6_ROUTE,
+- info->nlh, gfp_any());
++ info->nlh, GFP_ATOMIC);
+ return;
+ errout:
+ rtnl_set_sk_err(net, RTNLGRP_IPV6_ROUTE, err);
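The route.c hunks convert the sibling walks to list_for_each_entry_rcu()
under an explicit rcu_read_lock(), and switch inet6_rt_notify() to
GFP_ATOMIC since it can now run inside that non-sleeping context. Note how
rt6_fill_node() unlocks on the nla_put_failure path too: once the read-side
section is open, every exit from the loop must close it. A sketch of that
rule with a pthread rwlock and an invented emit() callback:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

    static int emit(int v) { return v == 3 ? -1 : 0; }  /* fails on 3 */

    static int walk(const int *vals, int n)
    {
        pthread_rwlock_rdlock(&lock);            /* rcu_read_lock() */
        for (int i = 0; i < n; i++) {
            if (emit(vals[i]) < 0) {
                pthread_rwlock_unlock(&lock);    /* unlock on the error path */
                return -1;
            }
        }
        pthread_rwlock_unlock(&lock);            /* and on the normal path */
        return 0;
    }

    int main(void)
    {
        int v[] = { 1, 2, 3, 4 };

        printf("walk -> %d\n", walk(v, 4));
        return 0;
    }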
+diff --git a/net/iucv/af_iucv.c b/net/iucv/af_iucv.c
+index c00323fa9eb66e..7929df08d4e023 100644
+--- a/net/iucv/af_iucv.c
++++ b/net/iucv/af_iucv.c
+@@ -1236,7 +1236,9 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+ return -EOPNOTSUPP;
+
+ /* receive/dequeue next skb:
+- * the function understands MSG_PEEK and, thus, does not dequeue skb */
++ * the function understands MSG_PEEK and, thus, does not dequeue skb
++ * only refcount is increased.
++ */
+ skb = skb_recv_datagram(sk, flags, &err);
+ if (!skb) {
+ if (sk->sk_shutdown & RCV_SHUTDOWN)
+@@ -1252,9 +1254,8 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+
+ cskb = skb;
+ if (skb_copy_datagram_msg(cskb, offset, msg, copied)) {
+- if (!(flags & MSG_PEEK))
+- skb_queue_head(&sk->sk_receive_queue, skb);
+- return -EFAULT;
++ err = -EFAULT;
++ goto err_out;
+ }
+
+ /* SOCK_SEQPACKET: set MSG_TRUNC if recv buf size is too small */
+@@ -1271,11 +1272,8 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+ err = put_cmsg(msg, SOL_IUCV, SCM_IUCV_TRGCLS,
+ sizeof(IUCV_SKB_CB(skb)->class),
+ (void *)&IUCV_SKB_CB(skb)->class);
+- if (err) {
+- if (!(flags & MSG_PEEK))
+- skb_queue_head(&sk->sk_receive_queue, skb);
+- return err;
+- }
++ if (err)
++ goto err_out;
+
+ /* Mark read part of skb as used */
+ if (!(flags & MSG_PEEK)) {
+@@ -1331,8 +1329,18 @@ static int iucv_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+ /* SOCK_SEQPACKET: return real length if MSG_TRUNC is set */
+ if (sk->sk_type == SOCK_SEQPACKET && (flags & MSG_TRUNC))
+ copied = rlen;
++ if (flags & MSG_PEEK)
++ skb_unref(skb);
+
+ return copied;
++
++err_out:
++ if (!(flags & MSG_PEEK))
++ skb_queue_head(&sk->sk_receive_queue, skb);
++ else
++ skb_unref(skb);
++
++ return err;
+ }
+
+ static inline __poll_t iucv_accept_poll(struct sock *parent)
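The af_iucv hunks fold two copies of the requeue-on-error logic into one
err_out label and add the skb_unref() the MSG_PEEK path needs, because with
MSG_PEEK the skb stays queued and only its refcount was raised. The shape of
the consolidated error path, with invented pkt/requeue names:

    #include <stdio.h>

    struct pkt { int refs; };

    static void pkt_unref(struct pkt *p) { p->refs--; }
    static void requeue(struct pkt *p)   { (void)p; /* put back on queue */ }

    static int recv_one(struct pkt *p, int peek, int fail_copy)
    {
        int err = 0;

        if (fail_copy) {             /* copy-to-user failed */
            err = -14;               /* -EFAULT */
            goto err_out;
        }
        if (peek)
            pkt_unref(p);            /* peek: drop the extra reference */
        return 0;

    err_out:
        if (!peek)
            requeue(p);              /* non-peek: give the packet back */
        else
            pkt_unref(p);            /* peek: still drop the reference */
        return err;
    }

    int main(void)
    {
        struct pkt p = { .refs = 2 };

        printf("peek+fail -> %d, refs now %d\n", recv_one(&p, 1, 1), p.refs);
        return 0;
    }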
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index 3eec23ac5ab10e..369a2f2e459cdb 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -1870,15 +1870,31 @@ static __net_exit void l2tp_pre_exit_net(struct net *net)
+ }
+ }
+
++static int l2tp_idr_item_unexpected(int id, void *p, void *data)
++{
++ const char *idr_name = data;
++
++ pr_err("l2tp: %s IDR not empty at net %d exit\n", idr_name, id);
++ WARN_ON_ONCE(1);
++ return 1;
++}
++
+ static __net_exit void l2tp_exit_net(struct net *net)
+ {
+ struct l2tp_net *pn = l2tp_pernet(net);
+
+- WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_v2_session_idr));
++ /* Our per-net IDRs should be empty. Check that this is so, to
++ * help catch cleanup races or refcnt leaks.
++ */
++ idr_for_each(&pn->l2tp_v2_session_idr, l2tp_idr_item_unexpected,
++ "v2_session");
++ idr_for_each(&pn->l2tp_v3_session_idr, l2tp_idr_item_unexpected,
++ "v3_session");
++ idr_for_each(&pn->l2tp_tunnel_idr, l2tp_idr_item_unexpected,
++ "tunnel");
++
+ idr_destroy(&pn->l2tp_v2_session_idr);
+- WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_v3_session_idr));
+ idr_destroy(&pn->l2tp_v3_session_idr);
+- WARN_ON_ONCE(!idr_is_empty(&pn->l2tp_tunnel_idr));
+ idr_destroy(&pn->l2tp_tunnel_idr);
+ }
+
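The l2tp hunk replaces each opaque WARN_ON_ONCE(!idr_is_empty(...)) with an
idr_for_each() walk that names every leftover entry, so a leak report says
which session or tunnel survived net exit. A userspace model of the callback
contract (a nonzero return stops the walk), with an array standing in for
the IDR:

    #include <stdio.h>

    static int item_unexpected(int id, void *p, void *data)
    {
        (void)p;
        fprintf(stderr, "%s IDR not empty: id %d\n", (const char *)data, id);
        return 1;                    /* nonzero stops the iteration */
    }

    /* Stand-in for idr_for_each(): call fn for every live slot. */
    static int for_each(void **slots, int n,
                        int (*fn)(int, void *, void *), void *data)
    {
        for (int i = 0; i < n; i++)
            if (slots[i]) {
                int ret = fn(i, slots[i], data);

                if (ret)
                    return ret;
            }
        return 0;
    }

    int main(void)
    {
        int leaked = 42;
        void *slots[4] = { 0, &leaked, 0, 0 };

        for_each(slots, 4, item_unexpected, "v2_session");
        return 0;
    }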
+diff --git a/net/llc/af_llc.c b/net/llc/af_llc.c
+index 4eb52add7103b0..0259cde394ba09 100644
+--- a/net/llc/af_llc.c
++++ b/net/llc/af_llc.c
+@@ -1098,7 +1098,7 @@ static int llc_ui_setsockopt(struct socket *sock, int level, int optname,
+ lock_sock(sk);
+ if (unlikely(level != SOL_LLC || optlen != sizeof(int)))
+ goto out;
+- rc = copy_from_sockptr(&opt, optval, sizeof(opt));
++ rc = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (rc)
+ goto out;
+ rc = -EINVAL;
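copy_safe_from_sockptr() differs from the plain copy_from_sockptr() call it
replaces in that it validates the user-supplied optlen first, so a short
setsockopt() buffer yields -EINVAL instead of a read past the end of user
memory. A userspace model of the check; copy_safe() is an invented stand-in:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static int copy_safe(void *dst, size_t ksize, const void *optval,
                         unsigned int optlen)
    {
        if (optlen < ksize)
            return -EINVAL;          /* reject short user buffers */
        memcpy(dst, optval, ksize);  /* copy_from_user() in the kernel */
        return 0;
    }

    int main(void)
    {
        int opt = 0, user_val = 7;

        printf("full buffer  -> %d\n", copy_safe(&opt, sizeof(opt), &user_val, 4));
        printf("short buffer -> %d\n", copy_safe(&opt, sizeof(opt), &user_val, 2));
        return 0;
    }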
+diff --git a/net/netfilter/ipset/ip_set_bitmap_ip.c b/net/netfilter/ipset/ip_set_bitmap_ip.c
+index e4fa00abde6a2a..5988b9bb9029dc 100644
+--- a/net/netfilter/ipset/ip_set_bitmap_ip.c
++++ b/net/netfilter/ipset/ip_set_bitmap_ip.c
+@@ -163,11 +163,8 @@ bitmap_ip_uadt(struct ip_set *set, struct nlattr *tb[],
+ ret = ip_set_get_hostipaddr4(tb[IPSET_ATTR_IP_TO], &ip_to);
+ if (ret)
+ return ret;
+- if (ip > ip_to) {
++ if (ip > ip_to)
+ swap(ip, ip_to);
+- if (ip < map->first_ip)
+- return -IPSET_ERR_BITMAP_RANGE;
+- }
+ } else if (tb[IPSET_ATTR_CIDR]) {
+ u8 cidr = nla_get_u8(tb[IPSET_ATTR_CIDR]);
+
+@@ -178,7 +175,7 @@ bitmap_ip_uadt(struct ip_set *set, struct nlattr *tb[],
+ ip_to = ip;
+ }
+
+- if (ip_to > map->last_ip)
++ if (ip < map->first_ip || ip_to > map->last_ip)
+ return -IPSET_ERR_BITMAP_RANGE;
+
+ for (; !before(ip_to, ip); ip += map->hosts) {
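The bitmap_ip hunk hoists the bounds check so it runs after the (ip, ip_to)
pair has been produced by any of the three input forms (single address,
possibly-swapped IP_TO range, or CIDR), instead of only in the swapped-range
branch. A sketch of the normalized check, with invented names and -1
standing in for -IPSET_ERR_BITMAP_RANGE:

    #include <stdio.h>

    static int check_range(unsigned int ip, unsigned int ip_to,
                           unsigned int first, unsigned int last)
    {
        if (ip > ip_to) {            /* normalize a reversed range */
            unsigned int t = ip;

            ip = ip_to;
            ip_to = t;
        }
        /* one check now guards every input form */
        if (ip < first || ip_to > last)
            return -1;
        return 0;
    }

    int main(void)
    {
        printf("in range  -> %d\n", check_range(10, 20, 3, 100));
        printf("below set -> %d\n", check_range(5, 2, 3, 100));
        return 0;
    }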
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 588a2757986c1d..4a137afaf0b87e 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3295,25 +3295,37 @@ int nft_expr_inner_parse(const struct nft_ctx *ctx, const struct nlattr *nla,
+ if (!tb[NFTA_EXPR_DATA] || !tb[NFTA_EXPR_NAME])
+ return -EINVAL;
+
++ rcu_read_lock();
++
+ type = __nft_expr_type_get(ctx->family, tb[NFTA_EXPR_NAME]);
+- if (!type)
+- return -ENOENT;
++ if (!type) {
++ err = -ENOENT;
++ goto out_unlock;
++ }
+
+- if (!type->inner_ops)
+- return -EOPNOTSUPP;
++ if (!type->inner_ops) {
++ err = -EOPNOTSUPP;
++ goto out_unlock;
++ }
+
+ err = nla_parse_nested_deprecated(info->tb, type->maxattr,
+ tb[NFTA_EXPR_DATA],
+ type->policy, NULL);
+ if (err < 0)
+- goto err_nla_parse;
++ goto out_unlock;
+
+ info->attr = nla;
+ info->ops = type->inner_ops;
+
++ /* No module reference will be taken on type->owner.
++ * Presence of type->inner_ops implies that the expression
++ * is builtin, so it cannot go away.
++ */
++ rcu_read_unlock();
+ return 0;
+
+-err_nla_parse:
++out_unlock:
++ rcu_read_unlock();
+ return err;
+ }
+
+@@ -3412,13 +3424,15 @@ void nft_expr_destroy(const struct nft_ctx *ctx, struct nft_expr *expr)
+ * Rules
+ */
+
+-static struct nft_rule *__nft_rule_lookup(const struct nft_chain *chain,
++static struct nft_rule *__nft_rule_lookup(const struct net *net,
++ const struct nft_chain *chain,
+ u64 handle)
+ {
+ struct nft_rule *rule;
+
+ // FIXME: this sucks
+- list_for_each_entry_rcu(rule, &chain->rules, list) {
++ list_for_each_entry_rcu(rule, &chain->rules, list,
++ lockdep_commit_lock_is_held(net)) {
+ if (handle == rule->handle)
+ return rule;
+ }
+@@ -3426,13 +3440,14 @@ static struct nft_rule *__nft_rule_lookup(const struct nft_chain *chain,
+ return ERR_PTR(-ENOENT);
+ }
+
+-static struct nft_rule *nft_rule_lookup(const struct nft_chain *chain,
++static struct nft_rule *nft_rule_lookup(const struct net *net,
++ const struct nft_chain *chain,
+ const struct nlattr *nla)
+ {
+ if (nla == NULL)
+ return ERR_PTR(-EINVAL);
+
+- return __nft_rule_lookup(chain, be64_to_cpu(nla_get_be64(nla)));
++ return __nft_rule_lookup(net, chain, be64_to_cpu(nla_get_be64(nla)));
+ }
+
+ static const struct nla_policy nft_rule_policy[NFTA_RULE_MAX + 1] = {
+@@ -3733,7 +3748,7 @@ static int nf_tables_dump_rules_done(struct netlink_callback *cb)
+ return 0;
+ }
+
+-/* called with rcu_read_lock held */
++/* Caller must hold rcu read lock or transaction mutex */
+ static struct sk_buff *
+ nf_tables_getrule_single(u32 portid, const struct nfnl_info *info,
+ const struct nlattr * const nla[], bool reset)
+@@ -3760,7 +3775,7 @@ nf_tables_getrule_single(u32 portid, const struct nfnl_info *info,
+ return ERR_CAST(chain);
+ }
+
+- rule = nft_rule_lookup(chain, nla[NFTA_RULE_HANDLE]);
++ rule = nft_rule_lookup(net, chain, nla[NFTA_RULE_HANDLE]);
+ if (IS_ERR(rule)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_HANDLE]);
+ return ERR_CAST(rule);
+@@ -4058,7 +4073,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+
+ if (nla[NFTA_RULE_HANDLE]) {
+ handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_HANDLE]));
+- rule = __nft_rule_lookup(chain, handle);
++ rule = __nft_rule_lookup(net, chain, handle);
+ if (IS_ERR(rule)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_HANDLE]);
+ return PTR_ERR(rule);
+@@ -4080,7 +4095,7 @@ static int nf_tables_newrule(struct sk_buff *skb, const struct nfnl_info *info,
+
+ if (nla[NFTA_RULE_POSITION]) {
+ pos_handle = be64_to_cpu(nla_get_be64(nla[NFTA_RULE_POSITION]));
+- old_rule = __nft_rule_lookup(chain, pos_handle);
++ old_rule = __nft_rule_lookup(net, chain, pos_handle);
+ if (IS_ERR(old_rule)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_RULE_POSITION]);
+ return PTR_ERR(old_rule);
+@@ -4297,7 +4312,7 @@ static int nf_tables_delrule(struct sk_buff *skb, const struct nfnl_info *info,
+
+ if (chain) {
+ if (nla[NFTA_RULE_HANDLE]) {
+- rule = nft_rule_lookup(chain, nla[NFTA_RULE_HANDLE]);
++ rule = nft_rule_lookup(info->net, chain, nla[NFTA_RULE_HANDLE]);
+ if (IS_ERR(rule)) {
+ if (PTR_ERR(rule) == -ENOENT &&
+ NFNL_MSG_TYPE(info->nlh->nlmsg_type) == NFT_MSG_DESTROYRULE)
+@@ -7790,9 +7805,7 @@ static int nf_tables_updobj(const struct nft_ctx *ctx,
+ struct nft_trans *trans;
+ int err = -ENOMEM;
+
+- if (!try_module_get(type->owner))
+- return -ENOENT;
+-
++ /* caller must have obtained type->owner reference. */
+ trans = nft_trans_alloc(ctx, NFT_MSG_NEWOBJ,
+ sizeof(struct nft_trans_obj));
+ if (!trans)
+@@ -7860,15 +7873,16 @@ static int nf_tables_newobj(struct sk_buff *skb, const struct nfnl_info *info,
+ if (info->nlh->nlmsg_flags & NLM_F_REPLACE)
+ return -EOPNOTSUPP;
+
+- type = __nft_obj_type_get(objtype, family);
+- if (WARN_ON_ONCE(!type))
+- return -ENOENT;
+-
+ if (!obj->ops->update)
+ return 0;
+
++ type = nft_obj_type_get(net, objtype, family);
++ if (WARN_ON_ONCE(IS_ERR(type)))
++ return PTR_ERR(type);
++
+ nft_ctx_init(&ctx, net, skb, info->nlh, family, table, NULL, nla);
+
++ /* type->owner reference is put when transaction object is released. */
+ return nf_tables_updobj(&ctx, type, nla[NFTA_OBJ_DATA], obj);
+ }
+
+@@ -8104,7 +8118,7 @@ static int nf_tables_dump_obj_done(struct netlink_callback *cb)
+ return 0;
+ }
+
+-/* called with rcu_read_lock held */
++/* Caller must hold rcu read lock or transaction mutex */
+ static struct sk_buff *
+ nf_tables_getobj_single(u32 portid, const struct nfnl_info *info,
+ const struct nlattr * const nla[], bool reset)
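The nf_tables hunks thread the net pointer into __nft_rule_lookup() so the
RCU list walk can pass lockdep_commit_lock_is_held(net) as its lockdep
condition: the walk is legal either inside an RCU read-side section (netlink
dumps) or with the commit mutex held (transactions). A minimal model of that
either-lock assertion, with plain ints standing in for the lockdep
predicates:

    #include <assert.h>
    #include <stdio.h>

    static int rcu_held;     /* rcu_read_lock_held() stand-in */
    static int mutex_held;   /* lockdep_commit_lock_is_held() stand-in */

    static int lookup(int handle)
    {
        assert(rcu_held || mutex_held);  /* one of the two must hold */
        return handle == 7;
    }

    int main(void)
    {
        mutex_held = 1;                  /* transaction path */
        printf("found: %d\n", lookup(7));
        return 0;
    }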
+diff --git a/net/netlink/af_netlink.c b/net/netlink/af_netlink.c
+index f84aad420d4464..775d707ec708a7 100644
+--- a/net/netlink/af_netlink.c
++++ b/net/netlink/af_netlink.c
+@@ -2176,9 +2176,14 @@ netlink_ack_tlv_len(struct netlink_sock *nlk, int err,
+ return tlvlen;
+ }
+
++static bool nlmsg_check_in_payload(const struct nlmsghdr *nlh, const void *addr)
++{
++ return !WARN_ON(addr < nlmsg_data(nlh) ||
++ addr - (const void *) nlh >= nlh->nlmsg_len);
++}
++
+ static void
+-netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb,
+- const struct nlmsghdr *nlh, int err,
++netlink_ack_tlv_fill(struct sk_buff *skb, const struct nlmsghdr *nlh, int err,
+ const struct netlink_ext_ack *extack)
+ {
+ if (extack->_msg)
+@@ -2190,9 +2195,7 @@ netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb,
+ if (!err)
+ return;
+
+- if (extack->bad_attr &&
+- !WARN_ON((u8 *)extack->bad_attr < in_skb->data ||
+- (u8 *)extack->bad_attr >= in_skb->data + in_skb->len))
++ if (extack->bad_attr && nlmsg_check_in_payload(nlh, extack->bad_attr))
+ WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_OFFS,
+ (u8 *)extack->bad_attr - (const u8 *)nlh));
+ if (extack->policy)
+@@ -2201,9 +2204,7 @@ netlink_ack_tlv_fill(struct sk_buff *in_skb, struct sk_buff *skb,
+ if (extack->miss_type)
+ WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_MISS_TYPE,
+ extack->miss_type));
+- if (extack->miss_nest &&
+- !WARN_ON((u8 *)extack->miss_nest < in_skb->data ||
+- (u8 *)extack->miss_nest > in_skb->data + in_skb->len))
++ if (extack->miss_nest && nlmsg_check_in_payload(nlh, extack->miss_nest))
+ WARN_ON(nla_put_u32(skb, NLMSGERR_ATTR_MISS_NEST,
+ (u8 *)extack->miss_nest - (const u8 *)nlh));
+ }
+@@ -2232,7 +2233,7 @@ static int netlink_dump_done(struct netlink_sock *nlk, struct sk_buff *skb,
+ if (extack_len) {
+ nlh->nlmsg_flags |= NLM_F_ACK_TLVS;
+ if (skb_tailroom(skb) >= extack_len) {
+- netlink_ack_tlv_fill(cb->skb, skb, cb->nlh,
++ netlink_ack_tlv_fill(skb, cb->nlh,
+ nlk->dump_done_errno, extack);
+ nlmsg_end(skb, nlh);
+ }
+@@ -2491,7 +2492,7 @@ void netlink_ack(struct sk_buff *in_skb, struct nlmsghdr *nlh, int err,
+ }
+
+ if (tlvlen)
+- netlink_ack_tlv_fill(in_skb, skb, nlh, err, extack);
++ netlink_ack_tlv_fill(skb, nlh, err, extack);
+
+ nlmsg_end(skb, rep);
+
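nlmsg_check_in_payload() tightens the old open-coded test: the bad_attr and
miss_nest cookies must point inside the offending message's payload, not
merely anywhere in the skb, since one skb can batch several netlink
messages. A userspace model of the pointer-within-payload test, with an
invented struct msg:

    #include <stdio.h>

    struct msg { unsigned int len; unsigned char payload[32]; };

    static int in_payload(const struct msg *m, const void *addr)
    {
        const unsigned char *p = addr;

        return p >= m->payload &&
               (unsigned int)(p - (const unsigned char *)m) < m->len;
    }

    int main(void)
    {
        struct msg m = { .len = sizeof(struct msg) };

        printf("inside payload: %d\n", in_payload(&m, &m.payload[4]));
        printf("in header     : %d\n", in_payload(&m, &m));
        return 0;
    }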
+diff --git a/net/rfkill/rfkill-gpio.c b/net/rfkill/rfkill-gpio.c
+index c268c2b011f428..a8e21060112ffd 100644
+--- a/net/rfkill/rfkill-gpio.c
++++ b/net/rfkill/rfkill-gpio.c
+@@ -32,8 +32,12 @@ static int rfkill_gpio_set_power(void *data, bool blocked)
+ {
+ struct rfkill_gpio_data *rfkill = data;
+
+- if (!blocked && !IS_ERR(rfkill->clk) && !rfkill->clk_enabled)
+- clk_enable(rfkill->clk);
++ if (!blocked && !IS_ERR(rfkill->clk) && !rfkill->clk_enabled) {
++ int ret = clk_enable(rfkill->clk);
++
++ if (ret)
++ return ret;
++ }
+
+ gpiod_set_value_cansleep(rfkill->shutdown_gpio, !blocked);
+ gpiod_set_value_cansleep(rfkill->reset_gpio, !blocked);
+diff --git a/net/rxrpc/af_rxrpc.c b/net/rxrpc/af_rxrpc.c
+index f4844683e12039..9d8bd0b37e41da 100644
+--- a/net/rxrpc/af_rxrpc.c
++++ b/net/rxrpc/af_rxrpc.c
+@@ -707,9 +707,10 @@ static int rxrpc_setsockopt(struct socket *sock, int level, int optname,
+ ret = -EISCONN;
+ if (rx->sk.sk_state != RXRPC_UNBOUND)
+ goto error;
+- ret = copy_from_sockptr(&min_sec_level, optval,
+- sizeof(unsigned int));
+- if (ret < 0)
++ ret = copy_safe_from_sockptr(&min_sec_level,
++ sizeof(min_sec_level),
++ optval, optlen);
++ if (ret)
+ goto error;
+ ret = -EINVAL;
+ if (min_sec_level > RXRPC_SECURITY_MAX)
+diff --git a/net/sched/sch_fq.c b/net/sched/sch_fq.c
+index 19a49af5a9e527..afefe124d9039e 100644
+--- a/net/sched/sch_fq.c
++++ b/net/sched/sch_fq.c
+@@ -331,6 +331,12 @@ static bool fq_fastpath_check(const struct Qdisc *sch, struct sk_buff *skb,
+ */
+ if (q->internal.qlen >= 8)
+ return false;
++
++ /* Ordering invariants fall apart if some delayed flows
++ * are ready but we haven't serviced them yet.
++ */
++ if (q->time_next_delayed_flow <= now)
++ return false;
+ }
+
+ sk = skb->sk;
+diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
+index 1bd3e531b0e090..059f6ef1ad1898 100644
+--- a/net/sunrpc/cache.c
++++ b/net/sunrpc/cache.c
+@@ -1427,7 +1427,9 @@ static int c_show(struct seq_file *m, void *p)
+ seq_printf(m, "# expiry=%lld refcnt=%d flags=%lx\n",
+ convert_to_wallclock(cp->expiry_time),
+ kref_read(&cp->ref), cp->flags);
+- cache_get(cp);
++ if (!cache_get_rcu(cp))
++ return 0;
++
+ if (cache_check(cd, cp, NULL))
+ /* cache_check does a cache_put on failure */
+ seq_puts(m, "# ");
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 825ec53576912a..59e2c46240f5c1 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1551,6 +1551,10 @@ static struct svc_xprt *svc_create_socket(struct svc_serv *serv,
+ newlen = error;
+
+ if (protocol == IPPROTO_TCP) {
++ __netns_tracker_free(net, &sock->sk->ns_tracker, false);
++ sock->sk->sk_net_refcnt = 1;
++ get_net_track(net, &sock->sk->ns_tracker, GFP_KERNEL);
++ sock_inuse_add(net, 1);
+ if ((error = kernel_listen(sock, 64)) < 0)
+ goto bummer;
+ }
+diff --git a/net/sunrpc/xprtrdma/svc_rdma.c b/net/sunrpc/xprtrdma/svc_rdma.c
+index 58ae6ec4f25b4f..415c0310101f0d 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma.c
++++ b/net/sunrpc/xprtrdma/svc_rdma.c
+@@ -233,25 +233,34 @@ static int svc_rdma_proc_init(void)
+
+ rc = percpu_counter_init(&svcrdma_stat_read, 0, GFP_KERNEL);
+ if (rc)
+- goto out_err;
++ goto err;
+ rc = percpu_counter_init(&svcrdma_stat_recv, 0, GFP_KERNEL);
+ if (rc)
+- goto out_err;
++ goto err_read;
+ rc = percpu_counter_init(&svcrdma_stat_sq_starve, 0, GFP_KERNEL);
+ if (rc)
+- goto out_err;
++ goto err_recv;
+ rc = percpu_counter_init(&svcrdma_stat_write, 0, GFP_KERNEL);
+ if (rc)
+- goto out_err;
++ goto err_sq;
+
+ svcrdma_table_header = register_sysctl("sunrpc/svc_rdma",
+ svcrdma_parm_table);
++ if (!svcrdma_table_header)
++ goto err_write;
++
+ return 0;
+
+-out_err:
++err_write:
++ rc = -ENOMEM;
++ percpu_counter_destroy(&svcrdma_stat_write);
++err_sq:
+ percpu_counter_destroy(&svcrdma_stat_sq_starve);
++err_recv:
+ percpu_counter_destroy(&svcrdma_stat_recv);
++err_read:
+ percpu_counter_destroy(&svcrdma_stat_read);
++err:
+ return rc;
+ }
+
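The svc_rdma_proc_init() hunk converts a single out_err label, which leaked
whichever counters had already initialized, into the canonical unwind
ladder: each failure jumps to a label that tears down exactly the completed
steps, in reverse order. The generic shape of the pattern, with invented
step()/undo() helpers:

    #include <stdio.h>

    static int step(int n, int fail_at) { return n == fail_at ? -1 : 0; }
    static void undo(int n)             { printf("undo %d\n", n); }

    static int init_all(int fail_at)
    {
        if (step(1, fail_at)) goto err;
        if (step(2, fail_at)) goto err1;
        if (step(3, fail_at)) goto err2;
        return 0;

    err2: undo(2);        /* unwind in reverse order of setup */
    err1: undo(1);
    err:  return -1;
    }

    int main(void)
    {
        printf("fail at 3 -> %d\n", init_all(3));
        return 0;
    }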
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+index ae3fb9bc8a2168..292022f0976e17 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+@@ -493,7 +493,13 @@ static bool xdr_check_write_chunk(struct svc_rdma_recv_ctxt *rctxt)
+ if (xdr_stream_decode_u32(&rctxt->rc_stream, &segcount))
+ return false;
+
+- /* A bogus segcount causes this buffer overflow check to fail. */
++ /* Before trusting the segcount value enough to use it in
++ * a computation, perform a simple range check. This is an
++ * arbitrary but sensible limit (ie, not architectural).
++ */
++ if (unlikely(segcount > RPCSVC_MAXPAGES))
++ return false;
++
+ p = xdr_inline_decode(&rctxt->rc_stream,
+ segcount * rpcrdma_segment_maxsz * sizeof(*p));
+ return p != NULL;
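The xdr_check_write_chunk() hunk range-checks segcount before it feeds the
multiplication segcount * rpcrdma_segment_maxsz * sizeof(*p); an
attacker-chosen 32-bit count could otherwise wrap the byte total and defeat
the decode-buffer check. A model of guarding the multiply, with invented
limits standing in for RPCSVC_MAXPAGES and the segment sizes:

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_SEGS   256u      /* stand-in for RPCSVC_MAXPAGES */
    #define SEG_WORDS  4u
    #define WORD_BYTES 4u

    static int decode_len(uint32_t segcount, uint32_t *bytes)
    {
        if (segcount > MAX_SEGS) /* reject before multiplying */
            return -1;
        *bytes = segcount * SEG_WORDS * WORD_BYTES;
        return 0;
    }

    int main(void)
    {
        uint32_t n;

        printf("sane  -> %d\n", decode_len(8, &n));
        printf("bogus -> %d\n", decode_len(0x10000000u, &n));
        return 0;
    }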
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index 1326fbf45a3479..b69e6290acfabe 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -1198,6 +1198,7 @@ static void xs_sock_reset_state_flags(struct rpc_xprt *xprt)
+ clear_bit(XPRT_SOCK_WAKE_WRITE, &transport->sock_state);
+ clear_bit(XPRT_SOCK_WAKE_DISCONNECT, &transport->sock_state);
+ clear_bit(XPRT_SOCK_NOSPACE, &transport->sock_state);
++ clear_bit(XPRT_SOCK_UPD_TIMEOUT, &transport->sock_state);
+ }
+
+ static void xs_run_error_worker(struct sock_xprt *transport, unsigned int nr)
+@@ -1939,6 +1940,13 @@ static struct socket *xs_create_sock(struct rpc_xprt *xprt,
+ goto out;
+ }
+
++ if (protocol == IPPROTO_TCP) {
++ __netns_tracker_free(xprt->xprt_net, &sock->sk->ns_tracker, false);
++ sock->sk->sk_net_refcnt = 1;
++ get_net_track(xprt->xprt_net, &sock->sk->ns_tracker, GFP_KERNEL);
++ sock_inuse_add(xprt->xprt_net, 1);
++ }
++
+ filp = sock_alloc_file(sock, O_NONBLOCK, NULL);
+ if (IS_ERR(filp))
+ return ERR_CAST(filp);
+@@ -2614,11 +2622,10 @@ static int xs_tls_handshake_sync(struct rpc_xprt *lower_xprt, struct xprtsec_par
+ rc = wait_for_completion_interruptible_timeout(&lower_transport->handshake_done,
+ XS_TLS_HANDSHAKE_TO);
+ if (rc <= 0) {
+- if (!tls_handshake_cancel(sk)) {
+- if (rc == 0)
+- rc = -ETIMEDOUT;
+- goto out_put_xprt;
+- }
++ tls_handshake_cancel(sk);
++ if (rc == 0)
++ rc = -ETIMEDOUT;
++ goto out_put_xprt;
+ }
+
+ rc = lower_transport->xprt_err;
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index 74ca18833df172..7d313fb66d76ba 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -603,16 +603,20 @@ struct wiphy *wiphy_new_nm(const struct cfg80211_ops *ops, int sizeof_priv,
+ }
+ EXPORT_SYMBOL(wiphy_new_nm);
+
+-static int wiphy_verify_combinations(struct wiphy *wiphy)
++static
++int wiphy_verify_iface_combinations(struct wiphy *wiphy,
++ const struct ieee80211_iface_combination *iface_comb,
++ int n_iface_comb,
++ bool combined_radio)
+ {
+ const struct ieee80211_iface_combination *c;
+ int i, j;
+
+- for (i = 0; i < wiphy->n_iface_combinations; i++) {
++ for (i = 0; i < n_iface_comb; i++) {
+ u32 cnt = 0;
+ u16 all_iftypes = 0;
+
+- c = &wiphy->iface_combinations[i];
++ c = &iface_comb[i];
+
+ /*
+ * Combinations with just one interface aren't real,
+@@ -625,9 +629,13 @@ static int wiphy_verify_combinations(struct wiphy *wiphy)
+ if (WARN_ON(!c->num_different_channels))
+ return -EINVAL;
+
+- /* DFS only works on one channel. */
+- if (WARN_ON(c->radar_detect_widths &&
+- (c->num_different_channels > 1)))
++ /* DFS only works on one channel. Avoid this check
++ * for the multi-radio global combination, since it holds
++ * the capabilities of all radio combinations.
++ */
++ if (!combined_radio &&
++ WARN_ON(c->radar_detect_widths &&
++ c->num_different_channels > 1))
+ return -EINVAL;
+
+ if (WARN_ON(!c->n_limits))
+@@ -648,13 +656,21 @@ static int wiphy_verify_combinations(struct wiphy *wiphy)
+ if (WARN_ON(wiphy->software_iftypes & types))
+ return -EINVAL;
+
+- /* Only a single P2P_DEVICE can be allowed */
+- if (WARN_ON(types & BIT(NL80211_IFTYPE_P2P_DEVICE) &&
++ /* Only a single P2P_DEVICE can be allowed, avoid this
++ * check for the multi-radio global combination, since it
++ * holds the capabilities of all radio combinations.
++ */
++ if (!combined_radio &&
++ WARN_ON(types & BIT(NL80211_IFTYPE_P2P_DEVICE) &&
+ c->limits[j].max > 1))
+ return -EINVAL;
+
+- /* Only a single NAN can be allowed */
+- if (WARN_ON(types & BIT(NL80211_IFTYPE_NAN) &&
++ /* Only a single NAN can be allowed, avoid this
++ * check for the multi-radio global combination, since it
++ * holds the capabilities of all radio combinations.
++ */
++ if (!combined_radio &&
++ WARN_ON(types & BIT(NL80211_IFTYPE_NAN) &&
+ c->limits[j].max > 1))
+ return -EINVAL;
+
+@@ -693,6 +709,34 @@ static int wiphy_verify_combinations(struct wiphy *wiphy)
+ return 0;
+ }
+
++static int wiphy_verify_combinations(struct wiphy *wiphy)
++{
++ int i, ret;
++ bool combined_radio = false;
++
++ if (wiphy->n_radio) {
++ for (i = 0; i < wiphy->n_radio; i++) {
++ const struct wiphy_radio *radio = &wiphy->radio[i];
++
++ ret = wiphy_verify_iface_combinations(wiphy,
++ radio->iface_combinations,
++ radio->n_iface_combinations,
++ false);
++ if (ret)
++ return ret;
++ }
++
++ combined_radio = true;
++ }
++
++ ret = wiphy_verify_iface_combinations(wiphy,
++ wiphy->iface_combinations,
++ wiphy->n_iface_combinations,
++ combined_radio);
++
++ return ret;
++}
++
+ int wiphy_register(struct wiphy *wiphy)
+ {
+ struct cfg80211_registered_device *rdev = wiphy_to_rdev(wiphy);
+diff --git a/net/wireless/mlme.c b/net/wireless/mlme.c
+index 4dac8185472100..a5eb92d93074e6 100644
+--- a/net/wireless/mlme.c
++++ b/net/wireless/mlme.c
+@@ -340,12 +340,6 @@ cfg80211_mlme_check_mlo_compat(const struct ieee80211_multi_link_elem *mle_a,
+ return -EINVAL;
+ }
+
+- if (ieee80211_mle_get_eml_med_sync_delay((const u8 *)mle_a) !=
+- ieee80211_mle_get_eml_med_sync_delay((const u8 *)mle_b)) {
+- NL_SET_ERR_MSG(extack, "link EML medium sync delay mismatch");
+- return -EINVAL;
+- }
+-
+ if (ieee80211_mle_get_eml_cap((const u8 *)mle_a) !=
+ ieee80211_mle_get_eml_cap((const u8 *)mle_b)) {
+ NL_SET_ERR_MSG(extack, "link EML capabilities mismatch");
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index d7d099f7118ab5..9b1b9dc5a7eb2a 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -9776,6 +9776,7 @@ nl80211_parse_sched_scan(struct wiphy *wiphy, struct wireless_dev *wdev,
+ request = kzalloc(size, GFP_KERNEL);
+ if (!request)
+ return ERR_PTR(-ENOMEM);
++ request->n_channels = n_channels;
+
+ if (n_ssids)
+ request->ssids = (void *)request +
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 1140b2a120caec..b57d5d2904eb46 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -675,6 +675,8 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ len = desc->len;
+
+ if (!skb) {
++ first_frag = true;
++
+ hr = max(NET_SKB_PAD, L1_CACHE_ALIGN(dev->needed_headroom));
+ tr = dev->needed_tailroom;
+ skb = sock_alloc_send_skb(&xs->sk, hr + len + tr, 1, &err);
+@@ -685,12 +687,8 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ skb_put(skb, len);
+
+ err = skb_store_bits(skb, 0, buffer, len);
+- if (unlikely(err)) {
+- kfree_skb(skb);
++ if (unlikely(err))
+ goto free_err;
+- }
+-
+- first_frag = true;
+ } else {
+ int nr_frags = skb_shinfo(skb)->nr_frags;
+ struct page *page;
+@@ -758,6 +756,9 @@ static struct sk_buff *xsk_build_skb(struct xdp_sock *xs,
+ return skb;
+
+ free_err:
++ if (first_frag && skb)
++ kfree_skb(skb);
++
+ if (err == -EOVERFLOW) {
+ /* Drop the packet */
+ xsk_set_destructor_arg(xs->skb);
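The xsk_build_skb() hunks set first_frag = true as soon as this call has
allocated the skb, so the shared free_err path can distinguish "free it
here" from "a previous call still owns it", and the per-branch kfree_skb()
disappears. An ownership-flag sketch with malloc standing in for the skb
allocation; all names are illustrative:

    #include <stdio.h>
    #include <stdlib.h>

    static int build(char **bufp, int fail)
    {
        char *buf = *bufp;
        int first = 0;

        if (!buf) {
            first = 1;             /* we own it from this point on */
            buf = malloc(64);
            if (!buf)
                return -1;
            *bufp = buf;
        }
        if (fail)
            goto free_err;
        return 0;

    free_err:
        if (first) {               /* free only what this call created */
            free(buf);
            *bufp = NULL;
        }
        return -1;
    }

    int main(void)
    {
        char *buf = NULL;

        printf("fail on first frag -> %d, buf=%p\n",
               build(&buf, 1), (void *)buf);
        return 0;
    }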
+diff --git a/rust/helpers/spinlock.c b/rust/helpers/spinlock.c
+index acc1376b833c78..92f7fc41842531 100644
+--- a/rust/helpers/spinlock.c
++++ b/rust/helpers/spinlock.c
+@@ -7,10 +7,14 @@ void rust_helper___spin_lock_init(spinlock_t *lock, const char *name,
+ struct lock_class_key *key)
+ {
+ #ifdef CONFIG_DEBUG_SPINLOCK
++# if defined(CONFIG_PREEMPT_RT)
++ __spin_lock_init(lock, name, key, false);
++# else /*!CONFIG_PREEMPT_RT */
+ __raw_spin_lock_init(spinlock_check(lock), name, key, LD_WAIT_CONFIG);
+-#else
++# endif /* CONFIG_PREEMPT_RT */
++#else /* !CONFIG_DEBUG_SPINLOCK */
+ spin_lock_init(lock);
+-#endif
++#endif /* CONFIG_DEBUG_SPINLOCK */
+ }
+
+ void rust_helper_spin_lock(spinlock_t *lock)
+diff --git a/rust/kernel/block/mq/request.rs b/rust/kernel/block/mq/request.rs
+index a0e22827f3f4ec..7943f43b957532 100644
+--- a/rust/kernel/block/mq/request.rs
++++ b/rust/kernel/block/mq/request.rs
+@@ -16,50 +16,55 @@
+ sync::atomic::{AtomicU64, Ordering},
+ };
+
+-/// A wrapper around a blk-mq `struct request`. This represents an IO request.
++/// A wrapper around a blk-mq [`struct request`]. This represents an IO request.
+ ///
+ /// # Implementation details
+ ///
+ /// There are four states for a request that the Rust bindings care about:
+ ///
+-/// A) Request is owned by block layer (refcount 0)
+-/// B) Request is owned by driver but with zero `ARef`s in existence
+-/// (refcount 1)
+-/// C) Request is owned by driver with exactly one `ARef` in existence
+-/// (refcount 2)
+-/// D) Request is owned by driver with more than one `ARef` in existence
+-/// (refcount > 2)
++/// 1. Request is owned by block layer (refcount 0).
++/// 2. Request is owned by driver but with zero [`ARef`]s in existence
++/// (refcount 1).
++/// 3. Request is owned by driver with exactly one [`ARef`] in existence
++/// (refcount 2).
++/// 4. Request is owned by driver with more than one [`ARef`] in existence
++/// (refcount > 2).
+ ///
+ ///
+-/// We need to track A and B to ensure we fail tag to request conversions for
++/// We need to track 1 and 2 to ensure we fail tag to request conversions for
+ /// requests that are not owned by the driver.
+ ///
+-/// We need to track C and D to ensure that it is safe to end the request and hand
++/// We need to track 3 and 4 to ensure that it is safe to end the request and hand
+ /// back ownership to the block layer.
+ ///
+ /// The states are tracked through the private `refcount` field of
+ /// `RequestDataWrapper`. This structure lives in the private data area of the C
+-/// `struct request`.
++/// [`struct request`].
+ ///
+ /// # Invariants
+ ///
+-/// * `self.0` is a valid `struct request` created by the C portion of the kernel.
++/// * `self.0` is a valid [`struct request`] created by the C portion of the
++/// kernel.
+ /// * The private data area associated with this request must be an initialized
+ /// and valid `RequestDataWrapper<T>`.
+ /// * `self` is reference counted by atomic modification of
+-/// self.wrapper_ref().refcount().
++/// `self.wrapper_ref().refcount()`.
++///
++/// [`struct request`]: srctree/include/linux/blk-mq.h
+ ///
+ #[repr(transparent)]
+ pub struct Request<T: Operations>(Opaque<bindings::request>, PhantomData<T>);
+
+ impl<T: Operations> Request<T> {
+- /// Create an `ARef<Request>` from a `struct request` pointer.
++ /// Create an [`ARef<Request>`] from a [`struct request`] pointer.
+ ///
+ /// # Safety
+ ///
+ /// * The caller must own a refcount on `ptr` that is transferred to the
+- /// returned `ARef`.
+- /// * The type invariants for `Request` must hold for the pointee of `ptr`.
++ /// returned [`ARef`].
++ /// * The type invariants for [`Request`] must hold for the pointee of `ptr`.
++ ///
++ /// [`struct request`]: srctree/include/linux/blk-mq.h
+ pub(crate) unsafe fn aref_from_raw(ptr: *mut bindings::request) -> ARef<Self> {
+ // INVARIANT: By the safety requirements of this function, invariants are upheld.
+ // SAFETY: By the safety requirement of this function, we own a
+@@ -84,12 +89,14 @@ pub(crate) unsafe fn start_unchecked(this: &ARef<Self>) {
+ }
+
+ /// Try to take exclusive ownership of `this` by dropping the refcount to 0.
+- /// This fails if `this` is not the only `ARef` pointing to the underlying
+- /// `Request`.
++ /// This fails if `this` is not the only [`ARef`] pointing to the underlying
++ /// [`Request`].
+ ///
+- /// If the operation is successful, `Ok` is returned with a pointer to the
+- /// C `struct request`. If the operation fails, `this` is returned in the
+- /// `Err` variant.
++ /// If the operation is successful, [`Ok`] is returned with a pointer to the
++ /// C [`struct request`]. If the operation fails, `this` is returned in the
++ /// [`Err`] variant.
++ ///
++ /// [`struct request`]: srctree/include/linux/blk-mq.h
+ fn try_set_end(this: ARef<Self>) -> Result<*mut bindings::request, ARef<Self>> {
+ // We can race with `TagSet::tag_to_rq`
+ if let Err(_old) = this.wrapper_ref().refcount().compare_exchange(
+@@ -109,7 +116,7 @@ fn try_set_end(this: ARef<Self>) -> Result<*mut bindings::request, ARef<Self>> {
+
+ /// Notify the block layer that the request has been completed without errors.
+ ///
+- /// This function will return `Err` if `this` is not the only `ARef`
++ /// This function will return [`Err`] if `this` is not the only [`ARef`]
+ /// referencing the request.
+ pub fn end_ok(this: ARef<Self>) -> Result<(), ARef<Self>> {
+ let request_ptr = Self::try_set_end(this)?;
+@@ -123,13 +130,13 @@ pub fn end_ok(this: ARef<Self>) -> Result<(), ARef<Self>> {
+ Ok(())
+ }
+
+- /// Return a pointer to the `RequestDataWrapper` stored in the private area
++ /// Return a pointer to the [`RequestDataWrapper`] stored in the private area
+ /// of the request structure.
+ ///
+ /// # Safety
+ ///
+ /// - `this` must point to a valid allocation of size at least size of
+- /// `Self` plus size of `RequestDataWrapper`.
++ /// [`Self`] plus size of [`RequestDataWrapper`].
+ pub(crate) unsafe fn wrapper_ptr(this: *mut Self) -> NonNull<RequestDataWrapper> {
+ let request_ptr = this.cast::<bindings::request>();
+ // SAFETY: By safety requirements for this function, `this` is a
+@@ -141,7 +148,7 @@ pub(crate) unsafe fn wrapper_ptr(this: *mut Self) -> NonNull<RequestDataWrapper>
+ unsafe { NonNull::new_unchecked(wrapper_ptr) }
+ }
+
+- /// Return a reference to the `RequestDataWrapper` stored in the private
++ /// Return a reference to the [`RequestDataWrapper`] stored in the private
+ /// area of the request structure.
+ pub(crate) fn wrapper_ref(&self) -> &RequestDataWrapper {
+ // SAFETY: By type invariant, `self.0` is a valid allocation. Further,
+@@ -152,13 +159,15 @@ pub(crate) fn wrapper_ref(&self) -> &RequestDataWrapper {
+ }
+ }
+
+-/// A wrapper around data stored in the private area of the C `struct request`.
++/// A wrapper around data stored in the private area of the C [`struct request`].
++///
++/// [`struct request`]: srctree/include/linux/blk-mq.h
+ pub(crate) struct RequestDataWrapper {
+ /// The Rust request refcount has the following states:
+ ///
+ /// - 0: The request is owned by C block layer.
+- /// - 1: The request is owned by Rust abstractions but there are no ARef references to it.
+- /// - 2+: There are `ARef` references to the request.
++ /// - 1: The request is owned by Rust abstractions but there are no [`ARef`] references to it.
++ /// - 2+: There are [`ARef`] references to the request.
+ refcount: AtomicU64,
+ }
+
+@@ -204,7 +213,7 @@ fn atomic_relaxed_op_return(target: &AtomicU64, op: impl Fn(u64) -> u64) -> u64
+ }
+
+ /// Store the result of `op(target.load)` in `target` if `target.load() !=
+-/// pred`, returning true if the target was updated.
++/// pred`, returning [`true`] if the target was updated.
+ fn atomic_relaxed_op_unless(target: &AtomicU64, op: impl Fn(u64) -> u64, pred: u64) -> bool {
+ target
+ .fetch_update(Ordering::Relaxed, Ordering::Relaxed, |x| {
+diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
+index b5f4b3ce6b4820..032c9089e6862d 100644
+--- a/rust/kernel/lib.rs
++++ b/rust/kernel/lib.rs
+@@ -83,7 +83,7 @@ pub trait Module: Sized + Sync + Send {
+
+ /// Equivalent to `THIS_MODULE` in the C API.
+ ///
+-/// C header: [`include/linux/export.h`](srctree/include/linux/export.h)
++/// C header: [`include/linux/init.h`](srctree/include/linux/init.h)
+ pub struct ThisModule(*mut bindings::module);
+
+ // SAFETY: `THIS_MODULE` may be used from all threads within a module.
+diff --git a/rust/kernel/rbtree.rs b/rust/kernel/rbtree.rs
+index 25eb36fd1cdceb..d03e4aa1f4812b 100644
+--- a/rust/kernel/rbtree.rs
++++ b/rust/kernel/rbtree.rs
+@@ -884,7 +884,8 @@ fn get_neighbor_raw(&self, direction: Direction) -> Option<NonNull<bindings::rb_
+ NonNull::new(neighbor)
+ }
+
+- /// SAFETY:
++ /// # Safety
++ ///
+ /// - `node` must be a valid pointer to a node in an [`RBTree`].
+ /// - The caller has immutable access to `node` for the duration of 'b.
+ unsafe fn to_key_value<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b V) {
+@@ -894,7 +895,8 @@ unsafe fn to_key_value<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b V) {
+ (k, unsafe { &*v })
+ }
+
+- /// SAFETY:
++ /// # Safety
++ ///
+ /// - `node` must be a valid pointer to a node in an [`RBTree`].
+ /// - The caller has mutable access to `node` for the duration of 'b.
+ unsafe fn to_key_value_mut<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b mut V) {
+@@ -904,7 +906,8 @@ unsafe fn to_key_value_mut<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, &'b
+ (k, unsafe { &mut *v })
+ }
+
+- /// SAFETY:
++ /// # Safety
++ ///
+ /// - `node` must be a valid pointer to a node in an [`RBTree`].
+ /// - The caller has immutable access to the key for the duration of 'b.
+ unsafe fn to_key_value_raw<'b>(node: NonNull<bindings::rb_node>) -> (&'b K, *mut V) {
+diff --git a/rust/macros/lib.rs b/rust/macros/lib.rs
+index a626b1145e5c4f..90e2202ba4d5a0 100644
+--- a/rust/macros/lib.rs
++++ b/rust/macros/lib.rs
+@@ -359,7 +359,7 @@ pub fn pinned_drop(args: TokenStream, input: TokenStream) -> TokenStream {
+ /// macro_rules! pub_no_prefix {
+ /// ($prefix:ident, $($newname:ident),+) => {
+ /// kernel::macros::paste! {
+-/// $(pub(crate) const fn [<$newname:lower:span>]: u32 = [<$prefix $newname:span>];)+
++/// $(pub(crate) const fn [<$newname:lower:span>]() -> u32 { [<$prefix $newname:span>] })+
+ /// }
+ /// };
+ /// }
+diff --git a/samples/bpf/xdp_adjust_tail_kern.c b/samples/bpf/xdp_adjust_tail_kern.c
+index ffdd548627f0a4..da67bcad1c6381 100644
+--- a/samples/bpf/xdp_adjust_tail_kern.c
++++ b/samples/bpf/xdp_adjust_tail_kern.c
+@@ -57,6 +57,7 @@ static __always_inline void swap_mac(void *data, struct ethhdr *orig_eth)
+
+ static __always_inline __u16 csum_fold_helper(__u32 csum)
+ {
++ csum = (csum & 0xffff) + (csum >> 16);
+ return ~((csum & 0xffff) + (csum >> 16));
+ }
+
+diff --git a/samples/kfifo/dma-example.c b/samples/kfifo/dma-example.c
+index 48df719dac8c6d..8076ac410161a3 100644
+--- a/samples/kfifo/dma-example.c
++++ b/samples/kfifo/dma-example.c
+@@ -9,6 +9,7 @@
+ #include <linux/kfifo.h>
+ #include <linux/module.h>
+ #include <linux/scatterlist.h>
++#include <linux/dma-mapping.h>
+
+ /*
+ * This module shows how to handle fifo dma operations.
+diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl
+index 4427572b24771d..b03d526e4c454a 100755
+--- a/scripts/checkpatch.pl
++++ b/scripts/checkpatch.pl
+@@ -3209,36 +3209,31 @@ sub process {
+
+ # Check Fixes: styles is correct
+ if (!$in_header_lines &&
+- $line =~ /^\s*fixes:?\s*(?:commit\s*)?[0-9a-f]{5,}\b/i) {
+- my $orig_commit = "";
+- my $id = "0123456789ab";
+- my $title = "commit title";
+- my $tag_case = 1;
+- my $tag_space = 1;
+- my $id_length = 1;
+- my $id_case = 1;
++ $line =~ /^\s*(fixes:?)\s*(?:commit\s*)?([0-9a-f]{5,40})(?:\s*($balanced_parens))?/i) {
++ my $tag = $1;
++ my $orig_commit = $2;
++ my $title;
+ my $title_has_quotes = 0;
+ $fixes_tag = 1;
+-
+- if ($line =~ /(\s*fixes:?)\s+([0-9a-f]{5,})\s+($balanced_parens)/i) {
+- my $tag = $1;
+- $orig_commit = $2;
+- $title = $3;
+-
+- $tag_case = 0 if $tag eq "Fixes:";
+- $tag_space = 0 if ($line =~ /^fixes:? [0-9a-f]{5,} ($balanced_parens)/i);
+-
+- $id_length = 0 if ($orig_commit =~ /^[0-9a-f]{12}$/i);
+- $id_case = 0 if ($orig_commit !~ /[A-F]/);
+-
++ if (defined $3) {
+ # Always strip leading/trailing parens then double quotes if existing
+- $title = substr($title, 1, -1);
++ $title = substr($3, 1, -1);
+ if ($title =~ /^".*"$/) {
+ $title = substr($title, 1, -1);
+ $title_has_quotes = 1;
+ }
++ } else {
++ $title = "commit title"
+ }
+
++
++ my $tag_case = not ($tag eq "Fixes:");
++ my $tag_space = not ($line =~ /^fixes:? [0-9a-f]{5,40} ($balanced_parens)/i);
++
++ my $id_length = not ($orig_commit =~ /^[0-9a-f]{12}$/i);
++ my $id_case = not ($orig_commit !~ /[A-F]/);
++
++ my $id = "0123456789ab";
+ my ($cid, $ctitle) = git_commit_info($orig_commit, $id,
+ $title);
+
+diff --git a/scripts/faddr2line b/scripts/faddr2line
+index fe0cc45f03be11..1fa6beef9f978e 100755
+--- a/scripts/faddr2line
++++ b/scripts/faddr2line
+@@ -252,7 +252,7 @@ __faddr2line() {
+ found=2
+ break
+ fi
+- done < <(echo "${ELF_SYMS}" | sed 's/\[.*\]//' | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2 | ${GREP} -A1 --no-group-separator " ${sym_name}$")
++ done < <(echo "${ELF_SYMS}" | sed 's/\[.*\]//' | ${AWK} -v sec=$sym_sec '$7 == sec' | sort --key=2)
+
+ if [[ $found = 0 ]]; then
+ warn "can't find symbol: sym_name: $sym_name sym_sec: $sym_sec sym_addr: $sym_addr sym_elf_size: $sym_elf_size"
+diff --git a/scripts/kernel-doc b/scripts/kernel-doc
+index 2791f819520387..320544321ecba5 100755
+--- a/scripts/kernel-doc
++++ b/scripts/kernel-doc
+@@ -569,6 +569,8 @@ sub output_function_man(%) {
+ my %args = %{$_[0]};
+ my ($parameter, $section);
+ my $count;
++ my $func_macro = $args{'func_macro'};
++ my $paramcount = $#{$args{'parameterlist'}}; # -1 is empty
+
+ print ".TH \"$args{'function'}\" 9 \"$args{'function'}\" \"$man_date\" \"Kernel Hacker's Manual\" LINUX\n";
+
+@@ -600,7 +602,10 @@ sub output_function_man(%) {
+ $parenth = "";
+ }
+
+- print ".SH ARGUMENTS\n";
++ $paramcount = $#{$args{'parameterlist'}}; # -1 is empty
++ if ($paramcount >= 0) {
++ print ".SH ARGUMENTS\n";
++ }
+ foreach $parameter (@{$args{'parameterlist'}}) {
+ my $parameter_name = $parameter;
+ $parameter_name =~ s/\[.*//;
+@@ -822,10 +827,16 @@ sub output_function_rst(%) {
+ my $oldprefix = $lineprefix;
+
+ my $signature = "";
+- if ($args{'functiontype'} ne "") {
+- $signature = $args{'functiontype'} . " " . $args{'function'} . " (";
+- } else {
+- $signature = $args{'function'} . " (";
++ my $func_macro = $args{'func_macro'};
++ my $paramcount = $#{$args{'parameterlist'}}; # -1 is empty
++
++ if ($func_macro) {
++ $signature = $args{'function'};
++ } else {
++ if ($args{'functiontype'}) {
++ $signature = $args{'functiontype'} . " ";
++ }
++ $signature .= $args{'function'} . " (";
+ }
+
+ my $count = 0;
+@@ -844,7 +855,9 @@ sub output_function_rst(%) {
+ }
+ }
+
+- $signature .= ")";
++ if (!$func_macro) {
++ $signature .= ")";
++ }
+
+ if ($sphinx_major < 3) {
+ if ($args{'typedef'}) {
+@@ -888,9 +901,11 @@ sub output_function_rst(%) {
+ # Put our descriptive text into a container (thus an HTML <div>) to help
+ # set the function prototypes apart.
+ #
+- print ".. container:: kernelindent\n\n";
+ $lineprefix = " ";
+- print $lineprefix . "**Parameters**\n\n";
++ if ($paramcount >= 0) {
++ print ".. container:: kernelindent\n\n";
++ print $lineprefix . "**Parameters**\n\n";
++ }
+ foreach $parameter (@{$args{'parameterlist'}}) {
+ my $parameter_name = $parameter;
+ $parameter_name =~ s/\[.*//;
+@@ -1704,7 +1719,7 @@ sub check_return_section {
+ sub dump_function($$) {
+ my $prototype = shift;
+ my $file = shift;
+- my $noret = 0;
++ my $func_macro = 0;
+
+ print_lineno($new_start_line);
+
+@@ -1769,7 +1784,7 @@ sub dump_function($$) {
+ # declaration_name and opening parenthesis (notice the \s+).
+ $return_type = $1;
+ $declaration_name = $2;
+- $noret = 1;
++ $func_macro = 1;
+ } elsif ($prototype =~ m/^()($name)\s*$prototype_end/ ||
+ $prototype =~ m/^($type1)\s+($name)\s*$prototype_end/ ||
+ $prototype =~ m/^($type2+)\s*($name)\s*$prototype_end/) {
+@@ -1796,7 +1811,7 @@ sub dump_function($$) {
+ # of warnings goes sufficiently down, the check is only performed in
+ # -Wreturn mode.
+ # TODO: always perform the check.
+- if ($Wreturn && !$noret) {
++ if ($Wreturn && !$func_macro) {
+ check_return_section($file, $declaration_name, $return_type);
+ }
+
+@@ -1814,7 +1829,8 @@ sub dump_function($$) {
+ 'parametertypes' => \%parametertypes,
+ 'sectionlist' => \@sectionlist,
+ 'sections' => \%sections,
+- 'purpose' => $declaration_purpose
++ 'purpose' => $declaration_purpose,
++ 'func_macro' => $func_macro
+ });
+ } else {
+ output_declaration($declaration_name,
+@@ -1827,7 +1843,8 @@ sub dump_function($$) {
+ 'parametertypes' => \%parametertypes,
+ 'sectionlist' => \@sectionlist,
+ 'sections' => \%sections,
+- 'purpose' => $declaration_purpose
++ 'purpose' => $declaration_purpose,
++ 'func_macro' => $func_macro
+ });
+ }
+ }
+@@ -2322,7 +2339,6 @@ sub process_inline($$) {
+
+ sub process_file($) {
+ my $file;
+- my $initial_section_counter = $section_counter;
+ my ($orig_file) = @_;
+
+ $file = map_filename($orig_file);
+@@ -2360,8 +2376,7 @@ sub process_file($) {
+ }
+
+ # Make sure we got something interesting.
+- if ($initial_section_counter == $section_counter && $
+- output_mode ne "none") {
++ if (!$section_counter && $output_mode ne "none") {
+ if ($output_selection == OUTPUT_INCLUDE) {
+ emit_warning("${file}:1", "'$_' not found\n")
+ for keys %function_table;
+diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
+index c4cc11aa558f5f..634e40748287c0 100644
+--- a/scripts/mod/file2alias.c
++++ b/scripts/mod/file2alias.c
+@@ -809,10 +809,7 @@ static int do_eisa_entry(const char *filename, void *symval,
+ char *alias)
+ {
+ DEF_FIELD_ADDR(symval, eisa_device_id, sig);
+- if (sig[0])
+- sprintf(alias, EISA_DEVICE_MODALIAS_FMT "*", *sig);
+- else
+- strcat(alias, "*");
++ sprintf(alias, EISA_DEVICE_MODALIAS_FMT "*", *sig);
+ return 1;
+ }
+
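The do_eisa_entry() hunk above drops the empty-signature branch and always emits the fully formatted alias with its trailing wildcard in one bounded write. A minimal userspace sketch of that pattern (EISA_FMT here is an illustrative stand-in for the real EISA_DEVICE_MODALIAS_FMT macro, and the signature string is made up):

    #include <stdio.h>

    #define EISA_FMT "eisa:s%s"   /* stand-in for EISA_DEVICE_MODALIAS_FMT */

    int main(void)
    {
        char alias[64];
        const char *sig = "PNP0A03";

        /* one unconditional, bounded write: alias is always well-formed */
        snprintf(alias, sizeof(alias), EISA_FMT "*", sig);
        printf("%s\n", alias);
        return 0;
    }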
+diff --git a/scripts/package/builddeb b/scripts/package/builddeb
+index 441b0bb66e0d0c..fb686fd3266f01 100755
+--- a/scripts/package/builddeb
++++ b/scripts/package/builddeb
+@@ -96,16 +96,18 @@ install_linux_image_dbg () {
+
+ # Parse modules.order directly because 'make modules_install' may sign,
+ # compress modules, and then run unneeded depmod.
+- while read -r mod; do
+- mod="${mod%.o}.ko"
+- dbg="${pdir}/usr/lib/debug/lib/modules/${KERNELRELEASE}/kernel/${mod}"
+- buildid=$("${READELF}" -n "${mod}" | sed -n 's@^.*Build ID: \(..\)\(.*\)@\1/\2@p')
+- link="${pdir}/usr/lib/debug/.build-id/${buildid}.debug"
+-
+- mkdir -p "${dbg%/*}" "${link%/*}"
+- "${OBJCOPY}" --only-keep-debug "${mod}" "${dbg}"
+- ln -sf --relative "${dbg}" "${link}"
+- done < modules.order
++ if is_enabled CONFIG_MODULES; then
++ while read -r mod; do
++ mod="${mod%.o}.ko"
++ dbg="${pdir}/usr/lib/debug/lib/modules/${KERNELRELEASE}/kernel/${mod}"
++ buildid=$("${READELF}" -n "${mod}" | sed -n 's@^.*Build ID: \(..\)\(.*\)@\1/\2@p')
++ link="${pdir}/usr/lib/debug/.build-id/${buildid}.debug"
++
++ mkdir -p "${dbg%/*}" "${link%/*}"
++ "${OBJCOPY}" --only-keep-debug "${mod}" "${dbg}"
++ ln -sf --relative "${dbg}" "${link}"
++ done < modules.order
++ fi
+
+ # Build debug package
+ # Different tools want the image in different locations
+diff --git a/security/apparmor/capability.c b/security/apparmor/capability.c
+index 9934df16c8431d..bf7df60868308d 100644
+--- a/security/apparmor/capability.c
++++ b/security/apparmor/capability.c
+@@ -96,6 +96,8 @@ static int audit_caps(struct apparmor_audit_data *ad, struct aa_profile *profile
+ return error;
+ } else {
+ aa_put_profile(ent->profile);
++ if (profile != ent->profile)
++ cap_clear(ent->caps);
+ ent->profile = aa_get_profile(profile);
+ cap_raise(ent->caps, cap);
+ }
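The audit_caps() hunk above clears the cached capability set whenever the entry is rebound to a different profile, so bits raised for the old profile cannot leak into decisions made for the new one. A plain-C sketch of that rebind-then-reset idea (struct names and fields are illustrative, not the AppArmor types):

    #include <stdio.h>
    #include <stdint.h>

    struct cache_ent {
        int owner;        /* stand-in for the profile pointer */
        uint64_t seen;    /* stand-in for the capability bitmap */
    };

    static void cache_note(struct cache_ent *e, int owner, int cap)
    {
        if (e->owner != owner) {
            e->seen = 0;          /* the fix: drop stale bits on rebind */
            e->owner = owner;
        }
        e->seen |= 1ULL << cap;
    }

    int main(void)
    {
        struct cache_ent e = { .owner = 1, .seen = 1ULL << 3 };

        cache_note(&e, 2, 5);     /* rebinding must not keep owner 1's bit 3 */
        printf("owner=%d seen=%#llx\n", e.owner, (unsigned long long)e.seen);
        return 0;
    }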
+diff --git a/security/apparmor/policy_unpack_test.c b/security/apparmor/policy_unpack_test.c
+index c64733d6c98fbb..f070902da8fcce 100644
+--- a/security/apparmor/policy_unpack_test.c
++++ b/security/apparmor/policy_unpack_test.c
+@@ -281,6 +281,8 @@ static void policy_unpack_test_unpack_strdup_with_null_name(struct kunit *test)
+ ((uintptr_t)puf->e->start <= (uintptr_t)string)
+ && ((uintptr_t)string <= (uintptr_t)puf->e->end));
+ KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA);
++
++ kfree(string);
+ }
+
+ static void policy_unpack_test_unpack_strdup_with_name(struct kunit *test)
+@@ -296,6 +298,8 @@ static void policy_unpack_test_unpack_strdup_with_name(struct kunit *test)
+ ((uintptr_t)puf->e->start <= (uintptr_t)string)
+ && ((uintptr_t)string <= (uintptr_t)puf->e->end));
+ KUNIT_EXPECT_STREQ(test, string, TEST_STRING_DATA);
++
++ kfree(string);
+ }
+
+ static void policy_unpack_test_unpack_strdup_out_of_bounds(struct kunit *test)
+@@ -313,6 +317,8 @@ static void policy_unpack_test_unpack_strdup_out_of_bounds(struct kunit *test)
+ KUNIT_EXPECT_EQ(test, size, 0);
+ KUNIT_EXPECT_NULL(test, string);
+ KUNIT_EXPECT_PTR_EQ(test, puf->e->pos, start);
++
++ kfree(string);
+ }
+
+ static void policy_unpack_test_unpack_nameX_with_null_name(struct kunit *test)
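The three kfree() additions above plug leaks of the strings the unpack_strdup tests allocate. Note that the out-of-bounds case frees unconditionally even though its string is expected to be NULL: kfree(NULL) is a no-op in the kernel, just as free(NULL) is in standard C, which keeps teardown branch-free. A runnable illustration:

    #include <stdlib.h>
    #include <string.h>
    #include <stdio.h>

    static char *dup_or_null(const char *s, int fail)
    {
        return fail ? NULL : strdup(s);
    }

    int main(void)
    {
        char *ok  = dup_or_null("data", 0);
        char *bad = dup_or_null("data", 1);

        printf("ok=%s bad=%p\n", ok, (void *)bad);
        free(ok);
        free(bad);   /* NULL: harmless, so the failure path needs no branch */
        return 0;
    }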
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index b465fb6e1f5f0d..0790b5fd917e12 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -3793,9 +3793,11 @@ static vm_fault_t snd_pcm_mmap_data_fault(struct vm_fault *vmf)
+ return VM_FAULT_SIGBUS;
+ if (substream->ops->page)
+ page = substream->ops->page(substream, offset);
+- else if (!snd_pcm_get_dma_buf(substream))
++ else if (!snd_pcm_get_dma_buf(substream)) {
++ if (WARN_ON_ONCE(!runtime->dma_area))
++ return VM_FAULT_SIGBUS;
+ page = virt_to_page(runtime->dma_area + offset);
+- else
++ } else
+ page = snd_sgbuf_get_page(snd_pcm_get_dma_buf(substream), offset);
+ if (!page)
+ return VM_FAULT_SIGBUS;
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index 03306be5fa0245..348ce1b7725ea2 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -724,8 +724,9 @@ static int resize_runtime_buffer(struct snd_rawmidi_substream *substream,
+ newbuf = kvzalloc(params->buffer_size, GFP_KERNEL);
+ if (!newbuf)
+ return -ENOMEM;
+- guard(spinlock_irq)(&substream->lock);
++ spin_lock_irq(&substream->lock);
+ if (runtime->buffer_ref) {
++ spin_unlock_irq(&substream->lock);
+ kvfree(newbuf);
+ return -EBUSY;
+ }
+@@ -733,6 +734,7 @@ static int resize_runtime_buffer(struct snd_rawmidi_substream *substream,
+ runtime->buffer = newbuf;
+ runtime->buffer_size = params->buffer_size;
+ __reset_runtime_ptrs(runtime, is_input);
++ spin_unlock_irq(&substream->lock);
+ kvfree(oldbuf);
+ }
+ runtime->avail_min = params->avail_min;
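The rawmidi hunk above replaces the scoped guard(spinlock_irq) with explicit lock/unlock so that kvfree() never runs while the lock is held; kvfree() may end up in vfree(), which is not safe in atomic context. The resulting shape is the classic swap-then-free pattern, sketched here with a pthread mutex standing in for the substream spinlock (the ordering, not the primitive, is the point):

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static char *buffer;

    static int resize(size_t n)
    {
        char *newbuf = calloc(1, n), *oldbuf;

        if (!newbuf)
            return -1;
        pthread_mutex_lock(&lock);
        oldbuf = buffer;
        buffer = newbuf;          /* publish the new buffer under the lock */
        pthread_mutex_unlock(&lock);
        free(oldbuf);             /* never free while holding the lock */
        return 0;
    }

    int main(void)
    {
        return resize(4096) || resize(8192);
    }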
+diff --git a/sound/core/sound_kunit.c b/sound/core/sound_kunit.c
+index bfed1a25fc8f74..84e337ecbddd0a 100644
+--- a/sound/core/sound_kunit.c
++++ b/sound/core/sound_kunit.c
+@@ -172,6 +172,7 @@ static void test_format_fill_silence(struct kunit *test)
+ u32 i, j;
+
+ buffer = kunit_kzalloc(test, SILENCE_BUFFER_SIZE, GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buffer);
+
+ for (i = 0; i < ARRAY_SIZE(buf_samples); i++) {
+ for (j = 0; j < ARRAY_SIZE(valid_fmt); j++)
+@@ -208,8 +209,12 @@ static void test_playback_avail(struct kunit *test)
+ struct snd_pcm_runtime *r = kunit_kzalloc(test, sizeof(*r), GFP_KERNEL);
+ u32 i;
+
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r);
++
+ r->status = kunit_kzalloc(test, sizeof(*r->status), GFP_KERNEL);
+ r->control = kunit_kzalloc(test, sizeof(*r->control), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->status);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->control);
+
+ for (i = 0; i < ARRAY_SIZE(p_avail_data); i++) {
+ r->buffer_size = p_avail_data[i].buffer_size;
+@@ -232,8 +237,12 @@ static void test_capture_avail(struct kunit *test)
+ struct snd_pcm_runtime *r = kunit_kzalloc(test, sizeof(*r), GFP_KERNEL);
+ u32 i;
+
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r);
++
+ r->status = kunit_kzalloc(test, sizeof(*r->status), GFP_KERNEL);
+ r->control = kunit_kzalloc(test, sizeof(*r->control), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->status);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, r->control);
+
+ for (i = 0; i < ARRAY_SIZE(c_avail_data); i++) {
+ r->buffer_size = c_avail_data[i].buffer_size;
+@@ -247,6 +256,7 @@ static void test_capture_avail(struct kunit *test)
+ static void test_card_set_id(struct kunit *test)
+ {
+ struct snd_card *card = kunit_kzalloc(test, sizeof(*card), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, card);
+
+ snd_card_set_id(card, VALID_NAME);
+ KUNIT_EXPECT_STREQ(test, card->id, VALID_NAME);
+@@ -280,6 +290,7 @@ static void test_pcm_format_name(struct kunit *test)
+ static void test_card_add_component(struct kunit *test)
+ {
+ struct snd_card *card = kunit_kzalloc(test, sizeof(*card), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, card);
+
+ snd_component_add(card, TEST_FIRST_COMPONENT);
+ KUNIT_ASSERT_STREQ(test, card->components, TEST_FIRST_COMPONENT);
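The KUNIT_ASSERT_NOT_ERR_OR_NULL() additions above make each test fail cleanly on allocation errors instead of dereferencing NULL and crashing the test kernel. The same fail-fast shape in plain C, with assert() playing the KUnit macro's role (types are illustrative):

    #include <assert.h>
    #include <stdlib.h>

    struct runtime { long *status; long *control; };

    int main(void)
    {
        struct runtime *r = calloc(1, sizeof(*r));

        assert(r);                      /* abort the test, not the machine */
        r->status  = calloc(1, sizeof(*r->status));
        r->control = calloc(1, sizeof(*r->control));
        assert(r->status && r->control);

        *r->status = 42;
        free(r->status);
        free(r->control);
        free(r);
        return 0;
    }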
+diff --git a/sound/core/ump.c b/sound/core/ump.c
+index 7d59a0a9b037ad..8d37f237f83b2e 100644
+--- a/sound/core/ump.c
++++ b/sound/core/ump.c
+@@ -788,7 +788,10 @@ static void fill_fb_info(struct snd_ump_endpoint *ump,
+ info->ui_hint = buf->fb_info.ui_hint;
+ info->first_group = buf->fb_info.first_group;
+ info->num_groups = buf->fb_info.num_groups;
+- info->flags = buf->fb_info.midi_10;
++ if (buf->fb_info.midi_10 < 2)
++ info->flags = buf->fb_info.midi_10;
++ else
++ info->flags = SNDRV_UMP_BLOCK_IS_MIDI1 | SNDRV_UMP_BLOCK_IS_LOWSPEED;
+ info->active = buf->fb_info.active;
+ info->midi_ci_version = buf->fb_info.midi_ci_version;
+ info->sysex8_streams = buf->fb_info.sysex8_streams;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 24b4fe99304a40..18e6779a83be2f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -473,6 +473,8 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ break;
+ case 0x10ec0234:
+ case 0x10ec0274:
++ alc_write_coef_idx(codec, 0x6e, 0x0c25);
++ fallthrough;
+ case 0x10ec0294:
+ case 0x10ec0700:
+ case 0x10ec0701:
+@@ -3613,25 +3615,22 @@ static void alc256_init(struct hda_codec *codec)
+
+ hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+
+- if (hp_pin_sense)
++ if (hp_pin_sense) {
+ msleep(2);
++ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+
+- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+-
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+-
+- if (hp_pin_sense || spec->ultra_low_power)
+- msleep(85);
+-
+- snd_hda_codec_write(codec, hp_pin, 0,
++ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
+
+- if (hp_pin_sense || spec->ultra_low_power)
+- msleep(100);
++ msleep(75);
++
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+
++ msleep(75);
++ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
++ }
+ alc_update_coef_idx(codec, 0x46, 3 << 12, 0);
+- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
+ alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */
+ alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15);
+ /*
+@@ -3655,29 +3654,28 @@ static void alc256_shutup(struct hda_codec *codec)
+ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+ hp_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+
+- if (hp_pin_sense)
++ if (hp_pin_sense) {
+ msleep(2);
+
+- snd_hda_codec_write(codec, hp_pin, 0,
++ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+
+- if (hp_pin_sense || spec->ultra_low_power)
+- msleep(85);
++ msleep(75);
+
+ /* 3k pull low control for Headset jack. */
+ /* NOTE: call this before clearing the pin, otherwise codec stalls */
+ /* If disable 3k pulldown control for alc257, the Mic detection will not work correctly
+ * when booting with headset plugged. So skip setting it for the codec alc257
+ */
+- if (spec->en_3kpull_low)
+- alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
++ if (spec->en_3kpull_low)
++ alc_update_coef_idx(codec, 0x46, 0, 3 << 12);
+
+- if (!spec->no_shutup_pins)
+- snd_hda_codec_write(codec, hp_pin, 0,
++ if (!spec->no_shutup_pins)
++ snd_hda_codec_write(codec, hp_pin, 0,
+ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+
+- if (hp_pin_sense || spec->ultra_low_power)
+- msleep(100);
++ msleep(75);
++ }
+
+ alc_auto_setup_eapd(codec, false);
+ alc_shutup_pins(codec);
+@@ -3772,33 +3770,28 @@ static void alc225_init(struct hda_codec *codec)
+ hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+ hp2_pin_sense = snd_hda_jack_detect(codec, 0x16);
+
+- if (hp1_pin_sense || hp2_pin_sense)
++ if (hp1_pin_sense || hp2_pin_sense) {
+ msleep(2);
++ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+
+- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x1); /* Low power */
+-
+- if (hp1_pin_sense || spec->ultra_low_power)
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+- if (hp2_pin_sense)
+- snd_hda_codec_write(codec, 0x16, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+-
+- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
+- msleep(85);
+-
+- if (hp1_pin_sense || spec->ultra_low_power)
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
+- if (hp2_pin_sense)
+- snd_hda_codec_write(codec, 0x16, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x16, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ msleep(75);
+
+- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
+- msleep(100);
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x16, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+
+- alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
+- alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
++ msleep(75);
++ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* Hight power */
++ }
+ }
+
+ static void alc225_shutup(struct hda_codec *codec)
+@@ -3810,36 +3803,35 @@ static void alc225_shutup(struct hda_codec *codec)
+ if (!hp_pin)
+ hp_pin = 0x21;
+
+- alc_disable_headset_jack_key(codec);
+- /* 3k pull low control for Headset jack. */
+- alc_update_coef_idx(codec, 0x4a, 0, 3 << 10);
+-
+ hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin);
+ hp2_pin_sense = snd_hda_jack_detect(codec, 0x16);
+
+- if (hp1_pin_sense || hp2_pin_sense)
++ if (hp1_pin_sense || hp2_pin_sense) {
++ alc_disable_headset_jack_key(codec);
++ /* 3k pull low control for Headset jack. */
++ alc_update_coef_idx(codec, 0x4a, 0, 3 << 10);
+ msleep(2);
+
+- if (hp1_pin_sense || spec->ultra_low_power)
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+- if (hp2_pin_sense)
+- snd_hda_codec_write(codec, 0x16, 0,
+- AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+-
+- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
+- msleep(85);
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x16, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
+
+- if (hp1_pin_sense || spec->ultra_low_power)
+- snd_hda_codec_write(codec, hp_pin, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+- if (hp2_pin_sense)
+- snd_hda_codec_write(codec, 0x16, 0,
+- AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++ msleep(75);
+
+- if (hp1_pin_sense || hp2_pin_sense || spec->ultra_low_power)
+- msleep(100);
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x16, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
+
++ msleep(75);
++ alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
++ alc_enable_headset_jack_key(codec);
++ }
+ alc_auto_setup_eapd(codec, false);
+ alc_shutup_pins(codec);
+ if (spec->ultra_low_power) {
+@@ -3850,9 +3842,6 @@ static void alc225_shutup(struct hda_codec *codec)
+ alc_update_coef_idx(codec, 0x4a, 3<<4, 2<<4);
+ msleep(30);
+ }
+-
+- alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
+- alc_enable_headset_jack_key(codec);
+ }
+
+ static void alc_default_init(struct hda_codec *codec)
+@@ -7559,6 +7548,7 @@ enum {
+ ALC269_FIXUP_THINKPAD_ACPI,
+ ALC269_FIXUP_DMIC_THINKPAD_ACPI,
+ ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13,
++ ALC269VC_FIXUP_INFINIX_Y4_MAX,
+ ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO,
+ ALC255_FIXUP_ACER_MIC_NO_PRESENCE,
+ ALC255_FIXUP_ASUS_MIC_NO_PRESENCE,
+@@ -7786,6 +7776,7 @@ enum {
+ ALC287_FIXUP_LENOVO_SSID_17AA3820,
+ ALC245_FIXUP_CLEVO_NOISY_MIC,
+ ALC269_FIXUP_VAIO_VJFH52_MIC_NO_PRESENCE,
++ ALC233_FIXUP_MEDION_MTL_SPK,
+ };
+
+ /* A special fixup for Lenovo C940 and Yoga Duet 7;
+@@ -8015,6 +8006,15 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
+ },
++ [ALC269VC_FIXUP_INFINIX_Y4_MAX] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x1b, 0x90170150 }, /* use as internal speaker */
++ { }
++ },
++ .chained = true,
++ .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
++ },
+ [ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -10160,6 +10160,13 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC269_FIXUP_LIMIT_INT_MIC_BOOST
+ },
++ [ALC233_FIXUP_MEDION_MTL_SPK] = {
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x1b, 0x90170110 },
++ { }
++ },
++ },
+ };
+
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -10585,6 +10592,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8cdf, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8ce0, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d84, "HP EliteBook X G1i", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -11025,7 +11033,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13),
+ SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
++ SND_PCI_QUIRK(0x2782, 0x1701, "Infinix Y4 Max", ALC269VC_FIXUP_INFINIX_Y4_MAX),
++ SND_PCI_QUIRK(0x2782, 0x1705, "MEDION E15433", ALC269VC_FIXUP_INFINIX_Y4_MAX),
+ SND_PCI_QUIRK(0x2782, 0x1707, "Vaio VJFE-ADL", ALC298_FIXUP_SPK_VOLUME),
++ SND_PCI_QUIRK(0x2782, 0x4900, "MEDION E15443", ALC233_FIXUP_MEDION_MTL_SPK),
+ SND_PCI_QUIRK(0x8086, 0x2074, "Intel NUC 8", ALC233_FIXUP_INTEL_NUC8_DMIC),
+ SND_PCI_QUIRK(0x8086, 0x2080, "Intel NUC 8 Rugged", ALC256_FIXUP_INTEL_NUC8_RUGGED),
+ SND_PCI_QUIRK(0x8086, 0x2081, "Intel NUC 10", ALC256_FIXUP_INTEL_NUC10),
+diff --git a/sound/soc/amd/acp/acp-sdw-sof-mach.c b/sound/soc/amd/acp/acp-sdw-sof-mach.c
+index 306854fb08e3d7..3be401c7227040 100644
+--- a/sound/soc/amd/acp/acp-sdw-sof-mach.c
++++ b/sound/soc/amd/acp/acp-sdw-sof-mach.c
+@@ -154,7 +154,7 @@ static int create_sdw_dailink(struct snd_soc_card *card,
+ int num_cpus = hweight32(sof_dai->link_mask[stream]);
+ int num_codecs = sof_dai->num_devs[stream];
+ int playback, capture;
+- int i = 0, j = 0;
++ int j = 0;
+ char *name;
+
+ if (!sof_dai->num_devs[stream])
+@@ -213,14 +213,14 @@ static int create_sdw_dailink(struct snd_soc_card *card,
+
+ int link_num = ffs(sof_end->link_mask) - 1;
+
+- cpus[i].dai_name = devm_kasprintf(dev, GFP_KERNEL,
+- "SDW%d Pin%d",
+- link_num, cpu_pin_id);
+- dev_dbg(dev, "cpu[%d].dai_name:%s\n", i, cpus[i].dai_name);
+- if (!cpus[i].dai_name)
++ cpus->dai_name = devm_kasprintf(dev, GFP_KERNEL,
++ "SDW%d Pin%d",
++ link_num, cpu_pin_id);
++ dev_dbg(dev, "cpu->dai_name:%s\n", cpus->dai_name);
++ if (!cpus->dai_name)
+ return -ENOMEM;
+
+- codec_maps[j].cpu = i;
++ codec_maps[j].cpu = 0;
+ codec_maps[j].codec = j;
+
+ codecs[j].name = sof_end->codec_name;
+@@ -362,7 +362,7 @@ static int sof_card_dai_links_create(struct snd_soc_card *card)
+ dai_links = devm_kcalloc(dev, num_links, sizeof(*dai_links), GFP_KERNEL);
+ if (!dai_links) {
+ ret = -ENOMEM;
+- goto err_end;
++ goto err_end;
+ }
+
+ card->codec_conf = codec_conf;
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 2436e8deb2be48..5153a68d8c0795 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -241,6 +241,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "21M5"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "21ME"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+@@ -537,8 +544,14 @@ static int acp6x_probe(struct platform_device *pdev)
+ struct acp6x_pdm *machine = NULL;
+ struct snd_soc_card *card;
+ struct acpi_device *adev;
++ acpi_handle handle;
++ acpi_integer dmic_status;
+ int ret;
++ bool is_dmic_enable, wov_en;
+
++ /* IF WOV entry not found, enable dmic based on AcpDmicConnected entry*/
++ is_dmic_enable = false;
++ wov_en = true;
+ /* check the parent device's firmware node has _DSD or not */
+ adev = ACPI_COMPANION(pdev->dev.parent);
+ if (adev) {
+@@ -546,9 +559,19 @@ static int acp6x_probe(struct platform_device *pdev)
+
+ if (!acpi_dev_get_property(adev, "AcpDmicConnected", ACPI_TYPE_INTEGER, &obj) &&
+ obj->integer.value == 1)
+- platform_set_drvdata(pdev, &acp6x_card);
++ is_dmic_enable = true;
+ }
+
++ handle = ACPI_HANDLE(pdev->dev.parent);
++ ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status);
++ if (!ACPI_FAILURE(ret))
++ wov_en = dmic_status;
++
++ if (is_dmic_enable && wov_en)
++ platform_set_drvdata(pdev, &acp6x_card);
++ else
++ return 0;
++
+ /* check for any DMI overrides */
+ dmi_id = dmi_first_match(yc_acp_quirk_table);
+ if (dmi_id)
+diff --git a/sound/soc/codecs/da7213.c b/sound/soc/codecs/da7213.c
+index f3ef6fb5530471..486db60bf2dd14 100644
+--- a/sound/soc/codecs/da7213.c
++++ b/sound/soc/codecs/da7213.c
+@@ -2136,6 +2136,7 @@ static const struct regmap_config da7213_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+
++ .max_register = DA7213_TONE_GEN_OFF_PER,
+ .reg_defaults = da7213_reg_defaults,
+ .num_reg_defaults = ARRAY_SIZE(da7213_reg_defaults),
+ .volatile_reg = da7213_volatile_register,
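The da7213 hunk adds an explicit .max_register bound to the regmap_config, which lets regmap range-check accesses against the device's actual register space rather than trusting callers. A toy version of that bound (MAX_REGISTER is an illustrative value, not the real DA7213_TONE_GEN_OFF_PER):

    #include <stdio.h>

    #define MAX_REGISTER 0xB4     /* illustrative upper bound */

    static unsigned char regs[MAX_REGISTER + 1];

    static int reg_write(unsigned int reg, unsigned char val)
    {
        if (reg > MAX_REGISTER)
            return -1;            /* out-of-range access refused */
        regs[reg] = val;
        return 0;
    }

    int main(void)
    {
        printf("%d %d\n", reg_write(0x10, 0xff), reg_write(0x200, 0xff));
        return 0;
    }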
+diff --git a/sound/soc/codecs/da7219.c b/sound/soc/codecs/da7219.c
+index 311ea7918b3124..e2da3e317b5a3e 100644
+--- a/sound/soc/codecs/da7219.c
++++ b/sound/soc/codecs/da7219.c
+@@ -1167,17 +1167,20 @@ static int da7219_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+ struct da7219_priv *da7219 = snd_soc_component_get_drvdata(component);
+ int ret = 0;
+
+- if ((da7219->clk_src == clk_id) && (da7219->mclk_rate == freq))
++ mutex_lock(&da7219->pll_lock);
++
++ if ((da7219->clk_src == clk_id) && (da7219->mclk_rate == freq)) {
++ mutex_unlock(&da7219->pll_lock);
+ return 0;
++ }
+
+ if ((freq < 2000000) || (freq > 54000000)) {
++ mutex_unlock(&da7219->pll_lock);
+ dev_err(codec_dai->dev, "Unsupported MCLK value %d\n",
+ freq);
+ return -EINVAL;
+ }
+
+- mutex_lock(&da7219->pll_lock);
+-
+ switch (clk_id) {
+ case DA7219_CLKSRC_MCLK_SQR:
+ snd_soc_component_update_bits(component, DA7219_PLL_CTRL,
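The da7219 hunk above moves mutex_lock(&da7219->pll_lock) ahead of the early-return checks: a "same clock source and rate, nothing to do" test that runs before taking the lock can race a concurrent caller, letting both pass the check and then fight over the PLL. A runnable pthread sketch of the check-under-lock fix (values and names are illustrative, not the codec driver):

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t pll_lock = PTHREAD_MUTEX_INITIALIZER;
    static int cur_src = -1;
    static long cur_rate = -1;

    static int set_sysclk(int src, long rate)
    {
        pthread_mutex_lock(&pll_lock);
        if (cur_src == src && cur_rate == rate) {
            pthread_mutex_unlock(&pll_lock);  /* fast path, still serialized */
            return 0;
        }
        if (rate < 2000000 || rate > 54000000) {
            pthread_mutex_unlock(&pll_lock);
            return -1;
        }
        cur_src = src;                        /* slow path: reconfigure */
        cur_rate = rate;
        pthread_mutex_unlock(&pll_lock);
        return 0;
    }

    int main(void)
    {
        printf("%d %d\n", set_sysclk(1, 12288000), set_sysclk(1, 12288000));
        return 0;
    }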
+diff --git a/sound/soc/codecs/rt722-sdca.c b/sound/soc/codecs/rt722-sdca.c
+index e5bd9ef812de13..f9f7512ca36087 100644
+--- a/sound/soc/codecs/rt722-sdca.c
++++ b/sound/soc/codecs/rt722-sdca.c
+@@ -607,12 +607,8 @@ static int rt722_sdca_dmic_set_gain_get(struct snd_kcontrol *kcontrol,
+
+ if (!adc_vol_flag) /* boost gain */
+ ctl = regvalue / boost_step;
+- else { /* ADC gain */
+- if (adc_vol_flag)
+- ctl = p->max - (((vol_max - regvalue) & 0xffff) / interval_offset);
+- else
+- ctl = p->max - (((0 - regvalue) & 0xffff) / interval_offset);
+- }
++ else /* ADC gain */
++ ctl = p->max - (((vol_max - regvalue) & 0xffff) / interval_offset);
+
+ ucontrol->value.integer.value[i] = ctl;
+ }
+diff --git a/sound/soc/fsl/fsl-asoc-card.c b/sound/soc/fsl/fsl-asoc-card.c
+index f6c3aeff0d8eaf..a0c2ce84c32b1d 100644
+--- a/sound/soc/fsl/fsl-asoc-card.c
++++ b/sound/soc/fsl/fsl-asoc-card.c
+@@ -1033,14 +1033,15 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ }
+
+ /*
+- * Properties "hp-det-gpio" and "mic-det-gpio" are optional, and
++ * Properties "hp-det-gpios" and "mic-det-gpios" are optional, and
+ * simple_util_init_jack() uses these properties for creating
+ * Headphone Jack and Microphone Jack.
+ *
+ * The notifier is initialized in snd_soc_card_jack_new(), then
+ * snd_soc_jack_notifier_register can be called.
+ */
+- if (of_property_read_bool(np, "hp-det-gpio")) {
++ if (of_property_read_bool(np, "hp-det-gpios") ||
++ of_property_read_bool(np, "hp-det-gpio") /* deprecated */) {
+ ret = simple_util_init_jack(&priv->card, &priv->hp_jack,
+ 1, NULL, "Headphone Jack");
+ if (ret)
+@@ -1049,7 +1050,8 @@ static int fsl_asoc_card_probe(struct platform_device *pdev)
+ snd_soc_jack_notifier_register(&priv->hp_jack.jack, &hp_jack_nb);
+ }
+
+- if (of_property_read_bool(np, "mic-det-gpio")) {
++ if (of_property_read_bool(np, "mic-det-gpios") ||
++ of_property_read_bool(np, "mic-det-gpio") /* deprecated */) {
+ ret = simple_util_init_jack(&priv->card, &priv->mic_jack,
+ 0, NULL, "Mic Jack");
+ if (ret)
+diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
+index 0c71a73476dfa6..67c2d4cb0dea21 100644
+--- a/sound/soc/fsl/fsl_micfil.c
++++ b/sound/soc/fsl/fsl_micfil.c
+@@ -1061,7 +1061,7 @@ static irqreturn_t micfil_isr(int irq, void *devid)
+ regmap_write_bits(micfil->regmap,
+ REG_MICFIL_STAT,
+ MICFIL_STAT_CHXF(i),
+- 1);
++ MICFIL_STAT_CHXF(i));
+ }
+
+ for (i = 0; i < MICFIL_FIFO_NUM; i++) {
+@@ -1096,7 +1096,7 @@ static irqreturn_t micfil_err_isr(int irq, void *devid)
+ if (stat_reg & MICFIL_STAT_LOWFREQF) {
+ dev_dbg(&pdev->dev, "isr: ipg_clk_app is too low\n");
+ regmap_write_bits(micfil->regmap, REG_MICFIL_STAT,
+- MICFIL_STAT_LOWFREQF, 1);
++ MICFIL_STAT_LOWFREQF, MICFIL_STAT_LOWFREQF);
+ }
+
+ return IRQ_HANDLED;
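The micfil hunks above fix a subtle regmap_write_bits() misuse: the value argument is ANDed with the mask before being applied, so passing a literal 1 only ever asserts bit 0 and acking MICFIL_STAT_CHXF(i) for i > 0 silently did nothing. Passing the mask itself as the value asserts exactly the bits the mask selects. A small demonstration of the update semantics:

    #include <stdio.h>
    #include <stdint.h>

    /* regmap-style masked update: new = (old & ~mask) | (val & mask) */
    static uint32_t write_bits(uint32_t reg, uint32_t mask, uint32_t val)
    {
        return (reg & ~mask) | (val & mask);
    }

    int main(void)
    {
        uint32_t mask = 1u << 3;              /* e.g. a per-channel status bit */

        printf("val=1:    %#x\n", write_bits(0, mask, 1));     /* 0: no-op */
        printf("val=mask: %#x\n", write_bits(0, mask, mask));  /* 0x8: bit set */
        return 0;
    }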
+diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
+index 6fbcf33fd0dea6..8e7b75cf64db42 100644
+--- a/sound/soc/fsl/imx-audmix.c
++++ b/sound/soc/fsl/imx-audmix.c
+@@ -275,6 +275,9 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ /* Add AUDMIX Backend */
+ be_name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "audmix-%d", i);
++ if (!be_name)
++ return -ENOMEM;
++
+ priv->dai[num_dai + i].cpus = &dlc[1];
+ priv->dai[num_dai + i].codecs = &snd_soc_dummy_dlc;
+
+diff --git a/sound/soc/mediatek/mt8188/mt8188-mt6359.c b/sound/soc/mediatek/mt8188/mt8188-mt6359.c
+index 08ae962afeb929..4eed90d13a5326 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-mt6359.c
++++ b/sound/soc/mediatek/mt8188/mt8188-mt6359.c
+@@ -1279,10 +1279,12 @@ static int mt8188_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+
+ for_each_card_prelinks(card, i, dai_link) {
+ if (strcmp(dai_link->name, "DPTX_BE") == 0) {
+- if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
++ if (dai_link->num_codecs &&
++ strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
+ dai_link->init = mt8188_dptx_codec_init;
+ } else if (strcmp(dai_link->name, "ETDM3_OUT_BE") == 0) {
+- if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
++ if (dai_link->num_codecs &&
++ strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
+ dai_link->init = mt8188_hdmi_codec_init;
+ } else if (strcmp(dai_link->name, "DL_SRC_BE") == 0 ||
+ strcmp(dai_link->name, "UL_SRC_BE") == 0) {
+@@ -1294,6 +1296,9 @@ static int mt8188_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+ strcmp(dai_link->name, "ETDM2_OUT_BE") == 0 ||
+ strcmp(dai_link->name, "ETDM1_IN_BE") == 0 ||
+ strcmp(dai_link->name, "ETDM2_IN_BE") == 0) {
++ if (!dai_link->num_codecs)
++ continue;
++
+ if (!strcmp(dai_link->codecs->dai_name, MAX98390_CODEC_DAI)) {
+ /*
+ * The TDM protocol settings with fixed 4 slots are defined in
+diff --git a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
+index db00704e206d6d..943f8116840373 100644
+--- a/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
++++ b/sound/soc/mediatek/mt8192/mt8192-mt6359-rt1015-rt5682.c
+@@ -1099,7 +1099,7 @@ static int mt8192_mt6359_legacy_probe(struct mtk_soc_card_data *soc_card_data)
+ dai_link->ignore = 0;
+ }
+
+- if (dai_link->num_codecs && dai_link->codecs[0].dai_name &&
++ if (dai_link->num_codecs &&
+ strcmp(dai_link->codecs[0].dai_name, RT1015_CODEC_DAI) == 0)
+ dai_link->ops = &mt8192_rt1015_i2s_ops;
+ }
+@@ -1127,7 +1127,7 @@ static int mt8192_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+ int i;
+
+ for_each_card_prelinks(card, i, dai_link)
+- if (dai_link->num_codecs && dai_link->codecs[0].dai_name &&
++ if (dai_link->num_codecs &&
+ strcmp(dai_link->codecs[0].dai_name, RT1015_CODEC_DAI) == 0)
+ dai_link->ops = &mt8192_rt1015_i2s_ops;
+ }
+diff --git a/sound/soc/mediatek/mt8195/mt8195-mt6359.c b/sound/soc/mediatek/mt8195/mt8195-mt6359.c
+index 2832ef78eaed72..8ebf6c7502aa3d 100644
+--- a/sound/soc/mediatek/mt8195/mt8195-mt6359.c
++++ b/sound/soc/mediatek/mt8195/mt8195-mt6359.c
+@@ -1380,10 +1380,12 @@ static int mt8195_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+
+ for_each_card_prelinks(card, i, dai_link) {
+ if (strcmp(dai_link->name, "DPTX_BE") == 0) {
+- if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
++ if (dai_link->num_codecs &&
++ strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
+ dai_link->init = mt8195_dptx_codec_init;
+ } else if (strcmp(dai_link->name, "ETDM3_OUT_BE") == 0) {
+- if (strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
++ if (dai_link->num_codecs &&
++ strcmp(dai_link->codecs->dai_name, "snd-soc-dummy-dai"))
+ dai_link->init = mt8195_hdmi_codec_init;
+ } else if (strcmp(dai_link->name, "DL_SRC_BE") == 0 ||
+ strcmp(dai_link->name, "UL_SRC1_BE") == 0 ||
+@@ -1396,6 +1398,9 @@ static int mt8195_mt6359_soc_card_probe(struct mtk_soc_card_data *soc_card_data,
+ strcmp(dai_link->name, "ETDM2_OUT_BE") == 0 ||
+ strcmp(dai_link->name, "ETDM1_IN_BE") == 0 ||
+ strcmp(dai_link->name, "ETDM2_IN_BE") == 0) {
++ if (!dai_link->num_codecs)
++ continue;
++
+ if (!strcmp(dai_link->codecs->dai_name, MAX98390_CODEC_DAI)) {
+ if (!(codec_init & MAX98390_CODEC_INIT)) {
+ dai_link->init = mt8195_max98390_init;
+diff --git a/sound/usb/6fire/chip.c b/sound/usb/6fire/chip.c
+index 33e962178c9363..d562a30b087f01 100644
+--- a/sound/usb/6fire/chip.c
++++ b/sound/usb/6fire/chip.c
+@@ -61,8 +61,10 @@ static void usb6fire_chip_abort(struct sfire_chip *chip)
+ }
+ }
+
+-static void usb6fire_chip_destroy(struct sfire_chip *chip)
++static void usb6fire_card_free(struct snd_card *card)
+ {
++ struct sfire_chip *chip = card->private_data;
++
+ if (chip) {
+ if (chip->pcm)
+ usb6fire_pcm_destroy(chip);
+@@ -72,8 +74,6 @@ static void usb6fire_chip_destroy(struct sfire_chip *chip)
+ usb6fire_comm_destroy(chip);
+ if (chip->control)
+ usb6fire_control_destroy(chip);
+- if (chip->card)
+- snd_card_free(chip->card);
+ }
+ }
+
+@@ -136,6 +136,7 @@ static int usb6fire_chip_probe(struct usb_interface *intf,
+ chip->regidx = regidx;
+ chip->intf_count = 1;
+ chip->card = card;
++ card->private_free = usb6fire_card_free;
+
+ ret = usb6fire_comm_init(chip);
+ if (ret < 0)
+@@ -162,7 +163,7 @@ static int usb6fire_chip_probe(struct usb_interface *intf,
+ return 0;
+
+ destroy_chip:
+- usb6fire_chip_destroy(chip);
++ snd_card_free(card);
+ return ret;
+ }
+
+@@ -181,7 +182,6 @@ static void usb6fire_chip_disconnect(struct usb_interface *intf)
+
+ chip->shutdown = true;
+ usb6fire_chip_abort(chip);
+- usb6fire_chip_destroy(chip);
+ }
+ }
+ }
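The 6fire hunks above convert chip teardown into the card's private_free callback: once the destructor is registered on the owning snd_card, every exit path, whether the probe error path or disconnect, just drops the card, and the resources are released exactly once through the owner. A plain-C sketch of that owner-driven destructor pattern (types are illustrative, not the ALSA core):

    #include <stdio.h>
    #include <stdlib.h>

    struct card {
        void (*private_free)(struct card *);
        void *private_data;
    };

    static void chip_free(struct card *card)
    {
        printf("freeing chip %p\n", card->private_data);
        free(card->private_data);
    }

    static void card_put(struct card *card)
    {
        if (card->private_free)
            card->private_free(card);  /* runs exactly once, via the owner */
        free(card);
    }

    int main(void)
    {
        struct card *card = calloc(1, sizeof(*card));

        card->private_data = malloc(16);
        card->private_free = chip_free;  /* from here on, card_put cleans up */
        card_put(card);
        return 0;
    }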
+diff --git a/sound/usb/caiaq/audio.c b/sound/usb/caiaq/audio.c
+index 772c0ecb707738..05f964347ed6c2 100644
+--- a/sound/usb/caiaq/audio.c
++++ b/sound/usb/caiaq/audio.c
+@@ -858,14 +858,20 @@ int snd_usb_caiaq_audio_init(struct snd_usb_caiaqdev *cdev)
+ return 0;
+ }
+
+-void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev)
++void snd_usb_caiaq_audio_disconnect(struct snd_usb_caiaqdev *cdev)
+ {
+ struct device *dev = caiaqdev_to_dev(cdev);
+
+ dev_dbg(dev, "%s(%p)\n", __func__, cdev);
+ stream_stop(cdev);
++}
++
++void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev)
++{
++ struct device *dev = caiaqdev_to_dev(cdev);
++
++ dev_dbg(dev, "%s(%p)\n", __func__, cdev);
+ free_urbs(cdev->data_urbs_in);
+ free_urbs(cdev->data_urbs_out);
+ kfree(cdev->data_cb_info);
+ }
+-
+diff --git a/sound/usb/caiaq/audio.h b/sound/usb/caiaq/audio.h
+index 869bf6264d6a09..07f5d064456cf7 100644
+--- a/sound/usb/caiaq/audio.h
++++ b/sound/usb/caiaq/audio.h
+@@ -3,6 +3,7 @@
+ #define CAIAQ_AUDIO_H
+
+ int snd_usb_caiaq_audio_init(struct snd_usb_caiaqdev *cdev);
++void snd_usb_caiaq_audio_disconnect(struct snd_usb_caiaqdev *cdev);
+ void snd_usb_caiaq_audio_free(struct snd_usb_caiaqdev *cdev);
+
+ #endif /* CAIAQ_AUDIO_H */
+diff --git a/sound/usb/caiaq/device.c b/sound/usb/caiaq/device.c
+index b5cbf1f195c48c..dfd820483849eb 100644
+--- a/sound/usb/caiaq/device.c
++++ b/sound/usb/caiaq/device.c
+@@ -376,6 +376,17 @@ static void setup_card(struct snd_usb_caiaqdev *cdev)
+ dev_err(dev, "Unable to set up control system (ret=%d)\n", ret);
+ }
+
++static void card_free(struct snd_card *card)
++{
++ struct snd_usb_caiaqdev *cdev = caiaqdev(card);
++
++#ifdef CONFIG_SND_USB_CAIAQ_INPUT
++ snd_usb_caiaq_input_free(cdev);
++#endif
++ snd_usb_caiaq_audio_free(cdev);
++ usb_reset_device(cdev->chip.dev);
++}
++
+ static int create_card(struct usb_device *usb_dev,
+ struct usb_interface *intf,
+ struct snd_card **cardp)
+@@ -489,6 +500,7 @@ static int init_card(struct snd_usb_caiaqdev *cdev)
+ cdev->vendor_name, cdev->product_name, usbpath);
+
+ setup_card(cdev);
++ card->private_free = card_free;
+ return 0;
+
+ err_kill_urb:
+@@ -534,15 +546,14 @@ static void snd_disconnect(struct usb_interface *intf)
+ snd_card_disconnect(card);
+
+ #ifdef CONFIG_SND_USB_CAIAQ_INPUT
+- snd_usb_caiaq_input_free(cdev);
++ snd_usb_caiaq_input_disconnect(cdev);
+ #endif
+- snd_usb_caiaq_audio_free(cdev);
++ snd_usb_caiaq_audio_disconnect(cdev);
+
+ usb_kill_urb(&cdev->ep1_in_urb);
+ usb_kill_urb(&cdev->midi_out_urb);
+
+- snd_card_free(card);
+- usb_reset_device(interface_to_usbdev(intf));
++ snd_card_free_when_closed(card);
+ }
+
+
+diff --git a/sound/usb/caiaq/input.c b/sound/usb/caiaq/input.c
+index 84f26dce7f5d03..a9130891bb696d 100644
+--- a/sound/usb/caiaq/input.c
++++ b/sound/usb/caiaq/input.c
+@@ -829,15 +829,21 @@ int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev)
+ return ret;
+ }
+
+-void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev)
++void snd_usb_caiaq_input_disconnect(struct snd_usb_caiaqdev *cdev)
+ {
+ if (!cdev || !cdev->input_dev)
+ return;
+
+ usb_kill_urb(cdev->ep4_in_urb);
++ input_unregister_device(cdev->input_dev);
++}
++
++void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev)
++{
++ if (!cdev || !cdev->input_dev)
++ return;
++
+ usb_free_urb(cdev->ep4_in_urb);
+ cdev->ep4_in_urb = NULL;
+-
+- input_unregister_device(cdev->input_dev);
+ cdev->input_dev = NULL;
+ }
+diff --git a/sound/usb/caiaq/input.h b/sound/usb/caiaq/input.h
+index c42891e7be884d..fbe267f85d025f 100644
+--- a/sound/usb/caiaq/input.h
++++ b/sound/usb/caiaq/input.h
+@@ -4,6 +4,7 @@
+
+ void snd_usb_caiaq_input_dispatch(struct snd_usb_caiaqdev *cdev, char *buf, unsigned int len);
+ int snd_usb_caiaq_input_init(struct snd_usb_caiaqdev *cdev);
++void snd_usb_caiaq_input_disconnect(struct snd_usb_caiaqdev *cdev);
+ void snd_usb_caiaq_input_free(struct snd_usb_caiaqdev *cdev);
+
+ #endif
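The caiaq hunks above (audio.c, device.c, input.c, input.h) split teardown into two phases: disconnect stops I/O and unregisters user-visible interfaces immediately, while freeing URBs and buffers is deferred, via snd_card_free_when_closed() and the card's private_free, until the last open handle goes away. A refcount sketch of the two-phase idea, where the counter stands in for open ALSA file handles:

    #include <stdio.h>
    #include <stdlib.h>

    struct dev {
        int users;
        int live;          /* cleared at disconnect */
        char *dma_buf;     /* freed only in the free phase */
    };

    static void dev_disconnect(struct dev *d)
    {
        d->live = 0;                       /* phase 1: stop I/O, unregister */
        printf("disconnected, %d users remain\n", d->users);
    }

    static void dev_put(struct dev *d)
    {
        if (--d->users == 0) {             /* phase 2: last close frees */
            free(d->dma_buf);
            free(d);
            printf("freed\n");
        }
    }

    int main(void)
    {
        struct dev *d = calloc(1, sizeof(*d));

        d->dma_buf = malloc(256);
        d->users = 2;                      /* driver ref + one open handle */
        dev_disconnect(d);
        dev_put(d);                        /* driver drops its reference */
        dev_put(d);                        /* user closes, free runs now */
        return 0;
    }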
+diff --git a/sound/usb/clock.c b/sound/usb/clock.c
+index 8f85200292f3ff..842ba5b801eae8 100644
+--- a/sound/usb/clock.c
++++ b/sound/usb/clock.c
+@@ -36,6 +36,12 @@ union uac23_clock_multiplier_desc {
+ struct uac_clock_multiplier_descriptor v3;
+ };
+
++/* check whether the descriptor bLength has the minimal length */
++#define DESC_LENGTH_CHECK(p, proto) \
++ ((proto) == UAC_VERSION_3 ? \
++ ((p)->v3.bLength >= sizeof((p)->v3)) : \
++ ((p)->v2.bLength >= sizeof((p)->v2)))
++
+ #define GET_VAL(p, proto, field) \
+ ((proto) == UAC_VERSION_3 ? (p)->v3.field : (p)->v2.field)
+
+@@ -58,6 +64,8 @@ static bool validate_clock_source(void *p, int id, int proto)
+ {
+ union uac23_clock_source_desc *cs = p;
+
++ if (!DESC_LENGTH_CHECK(cs, proto))
++ return false;
+ return GET_VAL(cs, proto, bClockID) == id;
+ }
+
+@@ -65,13 +73,27 @@ static bool validate_clock_selector(void *p, int id, int proto)
+ {
+ union uac23_clock_selector_desc *cs = p;
+
+- return GET_VAL(cs, proto, bClockID) == id;
++ if (!DESC_LENGTH_CHECK(cs, proto))
++ return false;
++ if (GET_VAL(cs, proto, bClockID) != id)
++ return false;
++ /* additional length check for baCSourceID array (in bNrInPins size)
++ * and two more fields (which sizes depend on the protocol)
++ */
++ if (proto == UAC_VERSION_3)
++ return cs->v3.bLength >= sizeof(cs->v3) + cs->v3.bNrInPins +
++ 4 /* bmControls */ + 2 /* wCSelectorDescrStr */;
++ else
++ return cs->v2.bLength >= sizeof(cs->v2) + cs->v2.bNrInPins +
++ 1 /* bmControls */ + 1 /* iClockSelector */;
+ }
+
+ static bool validate_clock_multiplier(void *p, int id, int proto)
+ {
+ union uac23_clock_multiplier_desc *cs = p;
+
++ if (!DESC_LENGTH_CHECK(cs, proto))
++ return false;
+ return GET_VAL(cs, proto, bClockID) == id;
+ }
+
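The clock.c hunks above validate bLength before any field of a device-supplied clock descriptor is read, and the selector case additionally accounts for the bNrInPins-sized array plus the trailing fields. A bounds-checked parse in the same spirit (the layout here is illustrative, not the actual UAC descriptor):

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    struct selector {
        uint8_t bLength;
        uint8_t bNrInPins;
        /* uint8_t baSourceID[bNrInPins]; then 2 trailing bytes */
    };

    static int selector_ok(const uint8_t *p, size_t avail)
    {
        const struct selector *s = (const void *)p;

        if (avail < sizeof(*s) || s->bLength < sizeof(*s))
            return 0;                      /* fixed header itself truncated */
        if (s->bLength > avail)
            return 0;                      /* claims more than was received */
        return s->bLength >= sizeof(*s) + s->bNrInPins + 2;
    }

    int main(void)
    {
        uint8_t good[] = { 5, 1, 0x07, 0x00, 0x00 };
        uint8_t bad[]  = { 5, 9, 0x07, 0x00, 0x00 };  /* array overruns bLength */

        printf("good=%d bad=%d\n", selector_ok(good, sizeof(good)),
               selector_ok(bad, sizeof(bad)));
        return 0;
    }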
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index c5fd180357d1e8..8538fdfce3535b 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -555,6 +555,7 @@ int snd_usb_create_quirk(struct snd_usb_audio *chip,
+ static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interface *intf)
+ {
+ struct usb_host_config *config = dev->actconfig;
++ struct usb_device_descriptor new_device_descriptor;
+ int err;
+
+ if (le16_to_cpu(get_cfg_desc(config)->wTotalLength) == EXTIGY_FIRMWARE_SIZE_OLD ||
+@@ -566,10 +567,14 @@ static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interfac
+ if (err < 0)
+ dev_dbg(&dev->dev, "error sending boot message: %d\n", err);
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &dev->descriptor, sizeof(dev->descriptor));
+- config = dev->actconfig;
++ &new_device_descriptor, sizeof(new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err);
++ if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n",
++ new_device_descriptor.bNumConfigurations);
++ else
++ memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
+ err = usb_reset_configuration(dev);
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_reset_configuration: %d\n", err);
+@@ -901,6 +906,7 @@ static void mbox2_setup_48_24_magic(struct usb_device *dev)
+ static int snd_usb_mbox2_boot_quirk(struct usb_device *dev)
+ {
+ struct usb_host_config *config = dev->actconfig;
++ struct usb_device_descriptor new_device_descriptor;
+ int err;
+ u8 bootresponse[0x12];
+ int fwsize;
+@@ -936,10 +942,14 @@ static int snd_usb_mbox2_boot_quirk(struct usb_device *dev)
+ dev_dbg(&dev->dev, "device initialised!\n");
+
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &dev->descriptor, sizeof(dev->descriptor));
+- config = dev->actconfig;
++ &new_device_descriptor, sizeof(new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err);
++ if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n",
++ new_device_descriptor.bNumConfigurations);
++ else
++ memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
+
+ err = usb_reset_configuration(dev);
+ if (err < 0)
+@@ -1249,6 +1259,7 @@ static void mbox3_setup_defaults(struct usb_device *dev)
+ static int snd_usb_mbox3_boot_quirk(struct usb_device *dev)
+ {
+ struct usb_host_config *config = dev->actconfig;
++ struct usb_device_descriptor new_device_descriptor;
+ int err;
+ int descriptor_size;
+
+@@ -1262,10 +1273,14 @@ static int snd_usb_mbox3_boot_quirk(struct usb_device *dev)
+ dev_dbg(&dev->dev, "MBOX3: device initialised!\n");
+
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &dev->descriptor, sizeof(dev->descriptor));
+- config = dev->actconfig;
++ &new_device_descriptor, sizeof(new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "MBOX3: error usb_get_descriptor: %d\n", err);
++ if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ dev_dbg(&dev->dev, "MBOX3: error too large bNumConfigurations: %d\n",
++ new_device_descriptor.bNumConfigurations);
++ else
++ memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
+
+ err = usb_reset_configuration(dev);
+ if (err < 0)
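The three boot-quirk hunks above share one change: the post-boot descriptor is fetched into a stack copy and its bNumConfigurations is sanity-checked before the cached dev->descriptor is overwritten, so a misbehaving device cannot make the core index past its configuration array. The validate-then-commit shape in miniature (stand-in types, not the USB core structures):

    #include <stdio.h>
    #include <string.h>

    struct desc { unsigned char bNumConfigurations; };

    static struct desc cached = { .bNumConfigurations = 1 };

    static void refresh(const struct desc *fresh)
    {
        if (fresh->bNumConfigurations > cached.bNumConfigurations)
            fprintf(stderr, "rejecting bNumConfigurations=%u\n",
                    fresh->bNumConfigurations);
        else
            memcpy(&cached, fresh, sizeof(cached));
    }

    int main(void)
    {
        struct desc evil = { .bNumConfigurations = 200 };
        struct desc sane = { .bNumConfigurations = 1 };

        refresh(&evil);   /* rejected: config array was sized for 1 */
        refresh(&sane);   /* accepted */
        printf("cached=%u\n", cached.bNumConfigurations);
        return 0;
    }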
+diff --git a/sound/usb/usx2y/us122l.c b/sound/usb/usx2y/us122l.c
+index 1be0e980feb958..ca5fac03ec798d 100644
+--- a/sound/usb/usx2y/us122l.c
++++ b/sound/usb/usx2y/us122l.c
+@@ -606,10 +606,7 @@ static void snd_us122l_disconnect(struct usb_interface *intf)
+ usb_put_intf(usb_ifnum_to_if(us122l->dev, 1));
+ usb_put_dev(us122l->dev);
+
+- while (atomic_read(&us122l->mmap_count))
+- msleep(500);
+-
+- snd_card_free(card);
++ snd_card_free_when_closed(card);
+ }
+
+ static int snd_us122l_suspend(struct usb_interface *intf, pm_message_t message)
+diff --git a/sound/usb/usx2y/usbusx2y.c b/sound/usb/usx2y/usbusx2y.c
+index 2f9cede242b3a9..5f81c68fd42b68 100644
+--- a/sound/usb/usx2y/usbusx2y.c
++++ b/sound/usb/usx2y/usbusx2y.c
+@@ -422,7 +422,7 @@ static void snd_usx2y_disconnect(struct usb_interface *intf)
+ }
+ if (usx2y->us428ctls_sharedmem)
+ wake_up(&usx2y->us428ctls_wait_queue_head);
+- snd_card_free(card);
++ snd_card_free_when_closed(card);
+ }
+
+ static int snd_usx2y_probe(struct usb_interface *intf,
+diff --git a/tools/bpf/bpftool/jit_disasm.c b/tools/bpf/bpftool/jit_disasm.c
+index 7b8d9ec89ebd35..c032d2c6ab6d55 100644
+--- a/tools/bpf/bpftool/jit_disasm.c
++++ b/tools/bpf/bpftool/jit_disasm.c
+@@ -80,7 +80,8 @@ symbol_lookup_callback(__maybe_unused void *disasm_info,
+ static int
+ init_context(disasm_ctx_t *ctx, const char *arch,
+ __maybe_unused const char *disassembler_options,
+- __maybe_unused unsigned char *image, __maybe_unused ssize_t len)
++ __maybe_unused unsigned char *image, __maybe_unused ssize_t len,
++ __maybe_unused __u64 func_ksym)
+ {
+ char *triple;
+
+@@ -109,12 +110,13 @@ static void destroy_context(disasm_ctx_t *ctx)
+ }
+
+ static int
+-disassemble_insn(disasm_ctx_t *ctx, unsigned char *image, ssize_t len, int pc)
++disassemble_insn(disasm_ctx_t *ctx, unsigned char *image, ssize_t len, int pc,
++ __u64 func_ksym)
+ {
+ char buf[256];
+ int count;
+
+- count = LLVMDisasmInstruction(*ctx, image + pc, len - pc, pc,
++ count = LLVMDisasmInstruction(*ctx, image + pc, len - pc, func_ksym + pc,
+ buf, sizeof(buf));
+ if (json_output)
+ printf_json(buf);
+@@ -136,8 +138,21 @@ int disasm_init(void)
+ #ifdef HAVE_LIBBFD_SUPPORT
+ #define DISASM_SPACER "\t"
+
++struct disasm_info {
++ struct disassemble_info info;
++ __u64 func_ksym;
++};
++
++static void disasm_print_addr(bfd_vma addr, struct disassemble_info *info)
++{
++ struct disasm_info *dinfo = container_of(info, struct disasm_info, info);
++
++ addr += dinfo->func_ksym;
++ generic_print_address(addr, info);
++}
++
+ typedef struct {
+- struct disassemble_info *info;
++ struct disasm_info *info;
+ disassembler_ftype disassemble;
+ bfd *bfdf;
+ } disasm_ctx_t;
+@@ -215,7 +230,7 @@ static int fprintf_json_styled(void *out,
+
+ static int init_context(disasm_ctx_t *ctx, const char *arch,
+ const char *disassembler_options,
+- unsigned char *image, ssize_t len)
++ unsigned char *image, ssize_t len, __u64 func_ksym)
+ {
+ struct disassemble_info *info;
+ char tpath[PATH_MAX];
+@@ -238,12 +253,13 @@ static int init_context(disasm_ctx_t *ctx, const char *arch,
+ }
+ bfdf = ctx->bfdf;
+
+- ctx->info = malloc(sizeof(struct disassemble_info));
++ ctx->info = malloc(sizeof(struct disasm_info));
+ if (!ctx->info) {
+ p_err("mem alloc failed");
+ goto err_close;
+ }
+- info = ctx->info;
++ ctx->info->func_ksym = func_ksym;
++ info = &ctx->info->info;
+
+ if (json_output)
+ init_disassemble_info_compat(info, stdout,
+@@ -272,6 +288,7 @@ static int init_context(disasm_ctx_t *ctx, const char *arch,
+ info->disassembler_options = disassembler_options;
+ info->buffer = image;
+ info->buffer_length = len;
++ info->print_address_func = disasm_print_addr;
+
+ disassemble_init_for_target(info);
+
+@@ -304,9 +321,10 @@ static void destroy_context(disasm_ctx_t *ctx)
+
+ static int
+ disassemble_insn(disasm_ctx_t *ctx, __maybe_unused unsigned char *image,
+- __maybe_unused ssize_t len, int pc)
++ __maybe_unused ssize_t len, int pc,
++ __maybe_unused __u64 func_ksym)
+ {
+- return ctx->disassemble(pc, ctx->info);
++ return ctx->disassemble(pc, &ctx->info->info);
+ }
+
+ int disasm_init(void)
+@@ -331,7 +349,7 @@ int disasm_print_insn(unsigned char *image, ssize_t len, int opcodes,
+ if (!len)
+ return -1;
+
+- if (init_context(&ctx, arch, disassembler_options, image, len))
++ if (init_context(&ctx, arch, disassembler_options, image, len, func_ksym))
+ return -1;
+
+ if (json_output)
+@@ -360,7 +378,7 @@ int disasm_print_insn(unsigned char *image, ssize_t len, int opcodes,
+ printf("%4x:" DISASM_SPACER, pc);
+ }
+
+- count = disassemble_insn(&ctx, image, len, pc);
++ count = disassemble_insn(&ctx, image, len, pc, func_ksym);
+
+ if (json_output) {
+ /* Operand array, was started in fprintf_json. Before
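The jit_disasm hunk threads the function's kernel address (func_ksym) through an API that only hands callbacks a struct disassemble_info pointer, by embedding that struct in a wrapper and recovering the wrapper with container_of() inside the print-address hook. A runnable userspace version of the trick:

    #include <stdio.h>
    #include <stddef.h>

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct info { int dummy; };                 /* the library-visible part */
    struct disasm_info {
        struct info info;                       /* embedded member */
        unsigned long long func_ksym;           /* our extra context */
    };

    static void print_addr(unsigned long long addr, struct info *info)
    {
        struct disasm_info *d = container_of(info, struct disasm_info, info);

        printf("%#llx\n", addr + d->func_ksym); /* rebase to kernel address */
    }

    int main(void)
    {
        struct disasm_info d = { .func_ksym = 0xffffffffc0000000ULL };

        print_addr(0x10, &d.info);
        return 0;
    }

The same container_of() idiom is used throughout the kernel whenever a callback context must carry more state than the library's struct allows.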
+diff --git a/tools/gpio/gpio-sloppy-logic-analyzer.sh b/tools/gpio/gpio-sloppy-logic-analyzer.sh
+index ed21a110df5e5d..3ef2278e49f916 100755
+--- a/tools/gpio/gpio-sloppy-logic-analyzer.sh
++++ b/tools/gpio/gpio-sloppy-logic-analyzer.sh
+@@ -113,7 +113,7 @@ init_cpu()
+ taskset -p "$newmask" "$p" || continue
+ done 2>/dev/null >/dev/null
+
+- # Big hammer! Working with 'rcu_momentary_dyntick_idle()' for a more fine-grained solution
++ # Big hammer! Working with 'rcu_momentary_eqs()' for a more fine-grained solution
+ # still printed warnings. Same for re-enabling the stall detector after sampling.
+ echo 1 > /sys/module/rcupdate/parameters/rcu_cpu_stall_suppress
+
+diff --git a/tools/include/nolibc/arch-s390.h b/tools/include/nolibc/arch-s390.h
+index 2ec13d8b9a2db8..f9ab83a219b8a2 100644
+--- a/tools/include/nolibc/arch-s390.h
++++ b/tools/include/nolibc/arch-s390.h
+@@ -10,6 +10,7 @@
+
+ #include "compiler.h"
+ #include "crt.h"
++#include "std.h"
+
+ /* Syscalls for s390:
+ * - registers are 64-bit
+diff --git a/tools/lib/bpf/Makefile b/tools/lib/bpf/Makefile
+index 1b22f0f372880e..857a5f7b413d6d 100644
+--- a/tools/lib/bpf/Makefile
++++ b/tools/lib/bpf/Makefile
+@@ -61,7 +61,8 @@ ifndef VERBOSE
+ endif
+
+ INCLUDES = -I$(or $(OUTPUT),.) \
+- -I$(srctree)/tools/include -I$(srctree)/tools/include/uapi
++ -I$(srctree)/tools/include -I$(srctree)/tools/include/uapi \
++ -I$(srctree)/tools/arch/$(SRCARCH)/include
+
+ export prefix libdir src obj
+
+diff --git a/tools/lib/bpf/libbpf.c b/tools/lib/bpf/libbpf.c
+index 219facd0e66e8b..5ff643e60d09ca 100644
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -3985,7 +3985,7 @@ static bool sym_is_subprog(const Elf64_Sym *sym, int text_shndx)
+ return true;
+
+ /* global function */
+- return bind == STB_GLOBAL && type == STT_FUNC;
++ return (bind == STB_GLOBAL || bind == STB_WEAK) && type == STT_FUNC;
+ }
+
+ static int find_extern_btf_id(const struct btf *btf, const char *ext_name)
+@@ -4389,7 +4389,7 @@ static int bpf_object__collect_externs(struct bpf_object *obj)
+
+ static bool prog_is_subprog(const struct bpf_object *obj, const struct bpf_program *prog)
+ {
+- return prog->sec_idx == obj->efile.text_shndx && obj->nr_programs > 1;
++ return prog->sec_idx == obj->efile.text_shndx;
+ }
+
+ struct bpf_program *
+@@ -5094,6 +5094,7 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+ enum libbpf_map_type map_type = map->libbpf_type;
+ char *cp, errmsg[STRERR_BUFSIZE];
+ int err, zero = 0;
++ size_t mmap_sz;
+
+ if (obj->gen_loader) {
+ bpf_gen__map_update_elem(obj->gen_loader, map - obj->maps,
+@@ -5107,8 +5108,8 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+ if (err) {
+ err = -errno;
+ cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
+- pr_warn("Error setting initial map(%s) contents: %s\n",
+- map->name, cp);
++ pr_warn("map '%s': failed to set initial contents: %s\n",
++ bpf_map__name(map), cp);
+ return err;
+ }
+
+@@ -5118,11 +5119,43 @@ bpf_object__populate_internal_map(struct bpf_object *obj, struct bpf_map *map)
+ if (err) {
+ err = -errno;
+ cp = libbpf_strerror_r(err, errmsg, sizeof(errmsg));
+- pr_warn("Error freezing map(%s) as read-only: %s\n",
+- map->name, cp);
++ pr_warn("map '%s': failed to freeze as read-only: %s\n",
++ bpf_map__name(map), cp);
+ return err;
+ }
+ }
++
++ /* Remap anonymous mmap()-ed "map initialization image" as
++ * a BPF map-backed mmap()-ed memory, but preserving the same
++ * memory address. This will cause kernel to change process'
++ * page table to point to a different piece of kernel memory,
++ * but from userspace point of view memory address (and its
++ * contents, being identical at this point) will stay the
++ * same. This mapping will be released by bpf_object__close()
++ * as per normal clean up procedure.
++ */
++ mmap_sz = bpf_map_mmap_sz(map);
++ if (map->def.map_flags & BPF_F_MMAPABLE) {
++ void *mmaped;
++ int prot;
++
++ if (map->def.map_flags & BPF_F_RDONLY_PROG)
++ prot = PROT_READ;
++ else
++ prot = PROT_READ | PROT_WRITE;
++ mmaped = mmap(map->mmaped, mmap_sz, prot, MAP_SHARED | MAP_FIXED, map->fd, 0);
++ if (mmaped == MAP_FAILED) {
++ err = -errno;
++ pr_warn("map '%s': failed to re-mmap() contents: %d\n",
++ bpf_map__name(map), err);
++ return err;
++ }
++ map->mmaped = mmaped;
++ } else if (map->mmaped) {
++ munmap(map->mmaped, mmap_sz);
++ map->mmaped = NULL;
++ }
++
+ return 0;
+ }
+
+@@ -5439,8 +5472,7 @@ bpf_object__create_maps(struct bpf_object *obj)
+ err = bpf_object__populate_internal_map(obj, map);
+ if (err < 0)
+ goto err_out;
+- }
+- if (map->def.type == BPF_MAP_TYPE_ARENA) {
++ } else if (map->def.type == BPF_MAP_TYPE_ARENA) {
+ map->mmaped = mmap((void *)(long)map->map_extra,
+ bpf_map_mmap_sz(map), PROT_READ | PROT_WRITE,
+ map->map_extra ? MAP_SHARED | MAP_FIXED : MAP_SHARED,
+@@ -7352,8 +7384,14 @@ static int libbpf_prepare_prog_load(struct bpf_program *prog,
+ opts->prog_flags |= BPF_F_XDP_HAS_FRAGS;
+
+ /* special check for usdt to use uprobe_multi link */
+- if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK))
++ if ((def & SEC_USDT) && kernel_supports(prog->obj, FEAT_UPROBE_MULTI_LINK)) {
++ /* for BPF_TRACE_UPROBE_MULTI, user might want to query expected_attach_type
++ * in prog, and expected_attach_type we set in kernel is from opts, so we
++ * update both.
++ */
+ prog->expected_attach_type = BPF_TRACE_UPROBE_MULTI;
++ opts->expected_attach_type = BPF_TRACE_UPROBE_MULTI;
++ }
+
+ if ((def & SEC_ATTACH_BTF) && !prog->attach_btf_id) {
+ int btf_obj_fd = 0, btf_type_id = 0, err;
+@@ -7443,6 +7481,7 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
+ load_attr.attach_btf_id = prog->attach_btf_id;
+ load_attr.kern_version = kern_version;
+ load_attr.prog_ifindex = prog->prog_ifindex;
++ load_attr.expected_attach_type = prog->expected_attach_type;
+
+ /* specify func_info/line_info only if kernel supports them */
+ if (obj->btf && btf__fd(obj->btf) >= 0 && kernel_supports(obj, FEAT_BTF_FUNC)) {
+@@ -7474,9 +7513,6 @@ static int bpf_object_load_prog(struct bpf_object *obj, struct bpf_program *prog
+ insns_cnt = prog->insns_cnt;
+ }
+
+- /* allow prog_prepare_load_fn to change expected_attach_type */
+- load_attr.expected_attach_type = prog->expected_attach_type;
+-
+ if (obj->gen_loader) {
+ bpf_gen__prog_load(obj->gen_loader, prog->type, prog->name,
+ license, insns, insns_cnt, &load_attr,
+@@ -13877,46 +13913,11 @@ int bpf_object__load_skeleton(struct bpf_object_skeleton *s)
+ for (i = 0; i < s->map_cnt; i++) {
+ struct bpf_map_skeleton *map_skel = (void *)s->maps + i * s->map_skel_sz;
+ struct bpf_map *map = *map_skel->map;
+- size_t mmap_sz = bpf_map_mmap_sz(map);
+- int prot, map_fd = map->fd;
+- void **mmaped = map_skel->mmaped;
+-
+- if (!mmaped)
+- continue;
+
+- if (!(map->def.map_flags & BPF_F_MMAPABLE)) {
+- *mmaped = NULL;
++ if (!map_skel->mmaped)
+ continue;
+- }
+-
+- if (map->def.type == BPF_MAP_TYPE_ARENA) {
+- *mmaped = map->mmaped;
+- continue;
+- }
+
+- if (map->def.map_flags & BPF_F_RDONLY_PROG)
+- prot = PROT_READ;
+- else
+- prot = PROT_READ | PROT_WRITE;
+-
+- /* Remap anonymous mmap()-ed "map initialization image" as
+- * a BPF map-backed mmap()-ed memory, but preserving the same
+- * memory address. This will cause kernel to change process'
+- * page table to point to a different piece of kernel memory,
+- * but from userspace point of view memory address (and its
+- * contents, being identical at this point) will stay the
+- * same. This mapping will be released by bpf_object__close()
+- * as per normal clean up procedure, so we don't need to worry
+- * about it from skeleton's clean up perspective.
+- */
+- *mmaped = mmap(map->mmaped, mmap_sz, prot, MAP_SHARED | MAP_FIXED, map_fd, 0);
+- if (*mmaped == MAP_FAILED) {
+- err = -errno;
+- *mmaped = NULL;
+- pr_warn("failed to re-mmap() map '%s': %d\n",
+- bpf_map__name(map), err);
+- return libbpf_err(err);
+- }
++ *map_skel->mmaped = map->mmaped;
+ }
+
+ return 0;
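The libbpf hunk above moves the MAP_FIXED remap of the "map initialization image" into map population, leaving the skeleton loader to simply reuse map->mmaped. The core trick is replacing an anonymous mapping with an fd-backed one at the same virtual address, so userspace pointers into the region stay valid. A runnable sketch with a memfd standing in for the BPF map fd:

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        size_t sz = 4096;
        void *addr = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        int fd = memfd_create("map", 0);

        if (addr == MAP_FAILED || fd < 0 || ftruncate(fd, sz) < 0)
            return 1;
        /* same address, new backing object; in libbpf the kernel side
         * already holds identical contents, so the swap is invisible */
        if (mmap(addr, sz, PROT_READ | PROT_WRITE,
                 MAP_SHARED | MAP_FIXED, fd, 0) == MAP_FAILED)
            return 1;
        strcpy(addr, "backed by fd now");
        printf("%p: %s\n", addr, (char *)addr);
        return 0;
    }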
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index e0005c6ade88a2..6985ab0f1ca9e8 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -396,6 +396,8 @@ static int init_output_elf(struct bpf_linker *linker, const char *file)
+ pr_warn_elf("failed to create SYMTAB data");
+ return -EINVAL;
+ }
++ /* Ensure libelf translates byte-order of symbol records */
++ sec->data->d_type = ELF_T_SYM;
+
+ str_off = strset__add_str(linker->strtab_strs, sec->sec_name);
+ if (str_off < 0)
+diff --git a/tools/lib/thermal/commands.c b/tools/lib/thermal/commands.c
+index 73d4d4e8d6ec0b..27b4442f0e347a 100644
+--- a/tools/lib/thermal/commands.c
++++ b/tools/lib/thermal/commands.c
+@@ -261,9 +261,25 @@ static struct genl_ops thermal_cmd_ops = {
+ .o_ncmds = ARRAY_SIZE(thermal_cmds),
+ };
+
+-static thermal_error_t thermal_genl_auto(struct thermal_handler *th, int id, int cmd,
+- int flags, void *arg)
++struct cmd_param {
++ int tz_id;
++};
++
++typedef int (*cmd_cb_t)(struct nl_msg *, struct cmd_param *);
++
++static int thermal_genl_tz_id_encode(struct nl_msg *msg, struct cmd_param *p)
++{
++ if (p->tz_id >= 0 && nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_ID, p->tz_id))
++ return -1;
++
++ return 0;
++}
++
++static thermal_error_t thermal_genl_auto(struct thermal_handler *th, cmd_cb_t cmd_cb,
++ struct cmd_param *param,
++ int cmd, int flags, void *arg)
+ {
++ thermal_error_t ret = THERMAL_ERROR;
+ struct nl_msg *msg;
+ void *hdr;
+
+@@ -274,45 +290,55 @@ static thermal_error_t thermal_genl_auto(struct thermal_handler *th, int id, int
+ hdr = genlmsg_put(msg, NL_AUTO_PORT, NL_AUTO_SEQ, thermal_cmd_ops.o_id,
+ 0, flags, cmd, THERMAL_GENL_VERSION);
+ if (!hdr)
+- return THERMAL_ERROR;
++ goto out;
+
+- if (id >= 0 && nla_put_u32(msg, THERMAL_GENL_ATTR_TZ_ID, id))
+- return THERMAL_ERROR;
++ if (cmd_cb && cmd_cb(msg, param))
++ goto out;
+
+ if (nl_send_msg(th->sk_cmd, th->cb_cmd, msg, genl_handle_msg, arg))
+- return THERMAL_ERROR;
++ goto out;
+
++ ret = THERMAL_SUCCESS;
++out:
+ nlmsg_free(msg);
+
+- return THERMAL_SUCCESS;
++ return ret;
+ }
+
+ thermal_error_t thermal_cmd_get_tz(struct thermal_handler *th, struct thermal_zone **tz)
+ {
+- return thermal_genl_auto(th, -1, THERMAL_GENL_CMD_TZ_GET_ID,
++ return thermal_genl_auto(th, NULL, NULL, THERMAL_GENL_CMD_TZ_GET_ID,
+ NLM_F_DUMP | NLM_F_ACK, tz);
+ }
+
+ thermal_error_t thermal_cmd_get_cdev(struct thermal_handler *th, struct thermal_cdev **tc)
+ {
+- return thermal_genl_auto(th, -1, THERMAL_GENL_CMD_CDEV_GET,
++ return thermal_genl_auto(th, NULL, NULL, THERMAL_GENL_CMD_CDEV_GET,
+ NLM_F_DUMP | NLM_F_ACK, tc);
+ }
+
+ thermal_error_t thermal_cmd_get_trip(struct thermal_handler *th, struct thermal_zone *tz)
+ {
+- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_TRIP,
+- 0, tz);
++ struct cmd_param p = { .tz_id = tz->id };
++
++ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
++ THERMAL_GENL_CMD_TZ_GET_TRIP, 0, tz);
+ }
+
+ thermal_error_t thermal_cmd_get_governor(struct thermal_handler *th, struct thermal_zone *tz)
+ {
+- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_GOV, 0, tz);
++ struct cmd_param p = { .tz_id = tz->id };
++
++ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
++ THERMAL_GENL_CMD_TZ_GET_GOV, 0, tz);
+ }
+
+ thermal_error_t thermal_cmd_get_temp(struct thermal_handler *th, struct thermal_zone *tz)
+ {
+- return thermal_genl_auto(th, tz->id, THERMAL_GENL_CMD_TZ_GET_TEMP, 0, tz);
++ struct cmd_param p = { .tz_id = tz->id };
++
++ return thermal_genl_auto(th, thermal_genl_tz_id_encode, &p,
++ THERMAL_GENL_CMD_TZ_GET_TEMP, 0, tz);
+ }
+
+ thermal_error_t thermal_cmd_exit(struct thermal_handler *th)
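The libthermal hunk above makes two changes: the single "tz id or -1" parameter becomes an encode callback plus a parameter struct, so thermal_genl_auto() stays generic as commands grow, and the error handling is reworked into a single exit path that always frees the message. A compact stand-alone version of both ideas (plain-C stand-ins for the netlink calls):

    #include <stdio.h>
    #include <stdlib.h>

    struct msg { int tz_id; };
    struct cmd_param { int tz_id; };

    typedef int (*cmd_cb_t)(struct msg *, struct cmd_param *);

    static int encode_tz_id(struct msg *m, struct cmd_param *p)
    {
        if (p->tz_id < 0)
            return -1;
        m->tz_id = p->tz_id;
        return 0;
    }

    static int send_cmd(cmd_cb_t cb, struct cmd_param *p)
    {
        int ret = -1;
        struct msg *m = calloc(1, sizeof(*m));

        if (!m)
            return -1;
        if (cb && cb(m, p))
            goto out;                 /* every failure still frees m */
        printf("sending tz_id=%d\n", m->tz_id);
        ret = 0;
    out:
        free(m);
        return ret;
    }

    int main(void)
    {
        struct cmd_param p = { .tz_id = 3 };

        return send_cmd(encode_tz_id, &p) || send_cmd(NULL, NULL);
    }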
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index d4332675babb74..2ce71d2e5fae05 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -1194,7 +1194,7 @@ endif
+ ifneq ($(NO_LIBTRACEEVENT),1)
+ $(call feature_check,libtraceevent)
+ ifeq ($(feature-libtraceevent), 1)
+- CFLAGS += -DHAVE_LIBTRACEEVENT
++ CFLAGS += -DHAVE_LIBTRACEEVENT $(shell $(PKG_CONFIG) --cflags libtraceevent)
+ LDFLAGS += $(shell $(PKG_CONFIG) --libs-only-L libtraceevent)
+ EXTLIBS += $(shell $(PKG_CONFIG) --libs-only-l libtraceevent)
+ LIBTRACEEVENT_VERSION := $(shell $(PKG_CONFIG) --modversion libtraceevent).0.0
+diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c
+index abcdc49b7a987f..272d3c70810e7d 100644
+--- a/tools/perf/builtin-ftrace.c
++++ b/tools/perf/builtin-ftrace.c
+@@ -815,7 +815,7 @@ static void display_histogram(int buckets[], bool use_nsec)
+
+ bar_len = buckets[0] * bar_total / total;
+ printf(" %4d - %-4d %s | %10d | %.*s%*s |\n",
+- 0, 1, "us", buckets[0], bar_len, bar, bar_total - bar_len, "");
++ 0, 1, use_nsec ? "ns" : "us", buckets[0], bar_len, bar, bar_total - bar_len, "");
+
+ for (i = 1; i < NUM_BUCKET - 1; i++) {
+ int start = (1 << (i - 1));
+diff --git a/tools/perf/builtin-list.c b/tools/perf/builtin-list.c
+index 65b8cba324be4b..c5331721dfee98 100644
+--- a/tools/perf/builtin-list.c
++++ b/tools/perf/builtin-list.c
+@@ -112,7 +112,7 @@ static void wordwrap(FILE *fp, const char *s, int start, int max, int corr)
+ }
+ }
+
+-static void default_print_event(void *ps, const char *pmu_name, const char *topic,
++static void default_print_event(void *ps, const char *topic, const char *pmu_name,
+ const char *event_name, const char *event_alias,
+ const char *scale_unit __maybe_unused,
+ bool deprecated, const char *event_type_desc,
+@@ -353,7 +353,7 @@ static void fix_escape_fprintf(FILE *fp, struct strbuf *buf, const char *fmt, ..
+ fputs(buf->buf, fp);
+ }
+
+-static void json_print_event(void *ps, const char *pmu_name, const char *topic,
++static void json_print_event(void *ps, const char *topic, const char *pmu_name,
+ const char *event_name, const char *event_alias,
+ const char *scale_unit,
+ bool deprecated, const char *event_type_desc,
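[Annotation: the builtin-list.c hunk corrects transposed topic/pmu_name arguments; the same transposition is corrected in the pfm.c and pmus.c hunks later in this patch. Because both parameters are const char *, the compiler accepted the swap silently, and reordering the definitions to match the callback typedef is the whole fix. A hedged standalone sketch of one defensive alternative, wrapping each string in a distinct single-field struct so a future transposition fails to compile (hypothetical types, not the perf API):

    #include <stdio.h>

    struct topic    { const char *s; };
    struct pmu_name { const char *s; };

    static void print_event(struct topic t, struct pmu_name p)
    {
            printf("topic=%s pmu=%s\n", t.s, p.s);
    }

    int main(void)
    {
            struct topic t = { "cache" };
            struct pmu_name p = { "cpu" };

            print_event(t, p);          /* OK */
            /* print_event(p, t); */    /* would now be a compile error */
            return 0;
    }

End annotation.]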
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 689a3d43c2584f..4933efdfee76fb 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -716,15 +716,19 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ }
+
+ if (!cpu_map__is_dummy(evsel_list->core.user_requested_cpus)) {
+- if (affinity__setup(&saved_affinity) < 0)
+- return -1;
++ if (affinity__setup(&saved_affinity) < 0) {
++ err = -1;
++ goto err_out;
++ }
+ affinity = &saved_affinity;
+ }
+
+ evlist__for_each_entry(evsel_list, counter) {
+ counter->reset_group = false;
+- if (bpf_counter__load(counter, &target))
+- return -1;
++ if (bpf_counter__load(counter, &target)) {
++ err = -1;
++ goto err_out;
++ }
+ if (!(evsel__is_bperf(counter)))
+ all_counters_use_bpf = false;
+ }
+@@ -767,7 +771,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+
+ switch (stat_handle_error(counter)) {
+ case COUNTER_FATAL:
+- return -1;
++ err = -1;
++ goto err_out;
+ case COUNTER_RETRY:
+ goto try_again;
+ case COUNTER_SKIP:
+@@ -808,7 +813,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+
+ switch (stat_handle_error(counter)) {
+ case COUNTER_FATAL:
+- return -1;
++ err = -1;
++ goto err_out;
+ case COUNTER_RETRY:
+ goto try_again_reset;
+ case COUNTER_SKIP:
+@@ -821,6 +827,7 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ }
+ }
+ affinity__cleanup(affinity);
++ affinity = NULL;
+
+ evlist__for_each_entry(evsel_list, counter) {
+ if (!counter->supported) {
+@@ -833,8 +840,10 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ stat_config.unit_width = l;
+
+ if (evsel__should_store_id(counter) &&
+- evsel__store_ids(counter, evsel_list))
+- return -1;
++ evsel__store_ids(counter, evsel_list)) {
++ err = -1;
++ goto err_out;
++ }
+ }
+
+ if (evlist__apply_filters(evsel_list, &counter, &target)) {
+@@ -855,20 +864,23 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ }
+
+ if (err < 0)
+- return err;
++ goto err_out;
+
+ err = perf_event__synthesize_stat_events(&stat_config, NULL, evsel_list,
+ process_synthesized_event, is_pipe);
+ if (err < 0)
+- return err;
++ goto err_out;
++
+ }
+
+ if (target.initial_delay) {
+ pr_info(EVLIST_DISABLED_MSG);
+ } else {
+ err = enable_counters();
+- if (err)
+- return -1;
++ if (err) {
++ err = -1;
++ goto err_out;
++ }
+ }
+
+ /* Exec the command, if any */
+@@ -878,8 +890,10 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ if (target.initial_delay > 0) {
+ usleep(target.initial_delay * USEC_PER_MSEC);
+ err = enable_counters();
+- if (err)
+- return -1;
++ if (err) {
++ err = -1;
++ goto err_out;
++ }
+
+ pr_info(EVLIST_ENABLED_MSG);
+ }
+@@ -899,7 +913,8 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ if (workload_exec_errno) {
+ const char *emsg = str_error_r(workload_exec_errno, msg, sizeof(msg));
+ pr_err("Workload failed: %s\n", emsg);
+- return -1;
++ err = -1;
++ goto err_out;
+ }
+
+ if (WIFSIGNALED(status))
+@@ -946,6 +961,13 @@ static int __run_perf_stat(int argc, const char **argv, int run_idx)
+ evlist__close(evsel_list);
+
+ return WEXITSTATUS(status);
++
++err_out:
++ if (forks)
++ evlist__cancel_workload(evsel_list);
++
++ affinity__cleanup(affinity);
++ return err;
+ }
+
+ static int run_perf_stat(int argc, const char **argv, int run_idx)
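[Annotation: the __run_perf_stat() hunks convert a dozen bare "return -1" statements into jumps to a single err_out label that cancels the forked workload and releases the saved affinity; note the patch also NULLs "affinity" after the early cleanup so err_out cannot free it twice. A minimal sketch of that pattern under the same assumptions, with all names illustrative:

    #include <stdio.h>
    #include <stdlib.h>

    static int setup_affinity(void)    { return 0; }
    static void cleanup_affinity(void) { puts("affinity cleaned"); }
    static void cancel_workload(void)  { puts("workload cancelled"); }

    static int run(int fail_step)
    {
            int err = 0;
            int have_affinity = 0;      /* plays the role of affinity != NULL */
            int forks = 1;

            if (setup_affinity() < 0) { err = -1; goto err_out; }
            have_affinity = 1;

            if (fail_step == 1) { err = -1; goto err_out; }  /* e.g. counter load */
            if (fail_step == 2) { err = -1; goto err_out; }  /* e.g. store ids */

            cleanup_affinity();         /* normal path releases it early ... */
            have_affinity = 0;          /* ... so err_out must not double-free */
            return 0;

    err_out:
            if (forks)
                    cancel_workload();
            if (have_affinity)
                    cleanup_affinity();
            return err;
    }

    int main(void) { return run(2) ? EXIT_FAILURE : EXIT_SUCCESS; }

End annotation.]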
+diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
+index d3f11b90d0255c..ffa1295273099e 100644
+--- a/tools/perf/builtin-trace.c
++++ b/tools/perf/builtin-trace.c
+@@ -2702,6 +2702,7 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel,
+ char msg[1024];
+ void *args, *augmented_args = NULL;
+ int augmented_args_size;
++ size_t printed = 0;
+
+ if (sc == NULL)
+ return -1;
+@@ -2717,8 +2718,8 @@ static int trace__fprintf_sys_enter(struct trace *trace, struct evsel *evsel,
+
+ args = perf_evsel__sc_tp_ptr(evsel, args, sample);
+ augmented_args = syscall__augmented_args(sc, sample, &augmented_args_size, trace->raw_augmented_syscalls_args_size);
+- syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread);
+- fprintf(trace->output, "%s", msg);
++ printed += syscall__scnprintf_args(sc, msg, sizeof(msg), args, augmented_args, augmented_args_size, trace, thread);
++ fprintf(trace->output, "%.*s", (int)printed, msg);
+ err = 0;
+ out_put:
+ thread__put(thread);
+@@ -3087,7 +3088,7 @@ static size_t trace__fprintf_tp_fields(struct trace *trace, struct evsel *evsel,
+ printed += syscall_arg_fmt__scnprintf_val(arg, bf + printed, size - printed, &syscall_arg, val);
+ }
+
+- return printed + fprintf(trace->output, "%s", bf);
++ return printed + fprintf(trace->output, "%.*s", (int)printed, bf);
+ }
+
+ static int trace__event_handler(struct trace *trace, struct evsel *evsel,
+@@ -3096,13 +3097,8 @@ static int trace__event_handler(struct trace *trace, struct evsel *evsel,
+ {
+ struct thread *thread;
+ int callchain_ret = 0;
+- /*
+- * Check if we called perf_evsel__disable(evsel) due to, for instance,
+- * this event's max_events having been hit and this is an entry coming
+- * from the ring buffer that we should discard, since the max events
+- * have already been considered/printed.
+- */
+- if (evsel->disabled)
++
++ if (evsel->nr_events_printed >= evsel->max_events)
+ return 0;
+
+ thread = machine__findnew_thread(trace->host, sample->pid, sample->tid);
+@@ -4326,6 +4322,9 @@ static int trace__run(struct trace *trace, int argc, const char **argv)
+ sizeof(__u32), BPF_ANY);
+ }
+ }
++
++ if (trace->skel)
++ trace->filter_pids.map = trace->skel->maps.pids_filtered;
+ #endif
+ err = trace__set_filter_pids(trace);
+ if (err < 0)
+@@ -5449,6 +5448,10 @@ int cmd_trace(int argc, const char **argv)
+ if (trace.summary_only)
+ trace.summary = trace.summary_only;
+
++ /* Keep exited threads, otherwise information might be lost for summary */
++ if (trace.summary)
++ symbol_conf.keep_exited_threads = true;
++
+ if (output_name != NULL) {
+ err = trace__open_output(&trace, output_name);
+ if (err < 0) {
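[Annotation: both builtin-trace.c output hunks switch from "%s" to "%.*s" with the byte count returned by the scnprintf() formatters, so nothing beyond what was actually formatted can reach the output; in particular, when zero bytes were formatted the buffer may be entirely uninitialized. A tiny standalone illustration of why the counted form is safe in that case:

    #include <stdio.h>

    int main(void)
    {
            char buf[64];            /* deliberately uninitialized, like msg[] */
            int printed = 0;         /* nothing was formatted into buf */

            /* printf("%s", buf);      may print stack garbage: buf was never
             *                         written, so there is no guaranteed NUL */
            printf("%.*s\n", printed, buf);   /* prints exactly 0 bytes: safe */
            return 0;
    }

End annotation.]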
+diff --git a/tools/perf/pmu-events/empty-pmu-events.c b/tools/perf/pmu-events/empty-pmu-events.c
+index c592079982fbd8..873e9fb2041f02 100644
+--- a/tools/perf/pmu-events/empty-pmu-events.c
++++ b/tools/perf/pmu-events/empty-pmu-events.c
+@@ -380,7 +380,7 @@ int pmu_events_table__for_each_event(const struct pmu_events_table *table,
+ continue;
+
+ ret = pmu_events_table__for_each_event_pmu(table, table_pmu, fn, data);
+- if (pmu || ret)
++ if (ret)
+ return ret;
+ }
+ return 0;
+diff --git a/tools/perf/pmu-events/jevents.py b/tools/perf/pmu-events/jevents.py
+index bb0a5d92df4a15..d46a22fb5573de 100755
+--- a/tools/perf/pmu-events/jevents.py
++++ b/tools/perf/pmu-events/jevents.py
+@@ -930,7 +930,7 @@ int pmu_events_table__for_each_event(const struct pmu_events_table *table,
+ continue;
+
+ ret = pmu_events_table__for_each_event_pmu(table, table_pmu, fn, data);
+- if (pmu || ret)
++ if (ret)
+ return ret;
+ }
+ return 0;
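[Annotation: the identical one-line change in empty-pmu-events.c and in jevents.py (which generates that file) fixes a table walk that returned as soon as a PMU filter was set, even when the callback succeeded, so later tables matching the same PMU were never visited. A standalone sketch of the before/after control flow:

    #include <stdio.h>

    static int visit(int table) { printf("table %d\n", table); return 0; }

    int main(void)
    {
            int tables[] = { 1, 2, 3 };
            int pmu_filter = 1;       /* stands in for a non-NULL pmu argument */

            for (int i = 0; i < 3; i++) {
                    int ret = visit(tables[i]);

                    /* old: if (pmu_filter || ret) return ret;  stops at table 1 */
                    (void)pmu_filter;
                    if (ret)          /* new: stop only on error/early-stop */
                            return ret;
            }
            return 0;
    }

End annotation.]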
+diff --git a/tools/perf/tests/attr/test-stat-default b/tools/perf/tests/attr/test-stat-default
+index a1e2da0a9a6ddb..e47fb49446799b 100644
+--- a/tools/perf/tests/attr/test-stat-default
++++ b/tools/perf/tests/attr/test-stat-default
+@@ -88,98 +88,142 @@ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
++# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+ [event13:base-stat]
+ fd=13
+ group_fd=11
+ type=4
+-config=33280
++config=33024
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
++# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+ [event14:base-stat]
+ fd=14
+ group_fd=11
+ type=4
+-config=33536
++config=33280
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
++# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+ [event15:base-stat]
+ fd=15
+ group_fd=11
+ type=4
+-config=33024
++config=33536
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
++# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+ [event16:base-stat]
+ fd=16
++group_fd=11
+ type=4
+-config=4109
++config=33792
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
++# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+ [event17:base-stat]
+ fd=17
++group_fd=11
+ type=4
+-config=17039629
++config=34048
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
++# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+ [event18:base-stat]
+ fd=18
++group_fd=11
+ type=4
+-config=60
++config=34304
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
++# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+ [event19:base-stat]
+ fd=19
++group_fd=11
+ type=4
+-config=2097421
++config=34560
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
++# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+ [event20:base-stat]
+ fd=20
+ type=4
+-config=316
++config=4109
+ optional=1
+
+-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+ [event21:base-stat]
+ fd=21
+ type=4
+-config=412
++config=17039629
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+ [event22:base-stat]
+ fd=22
+ type=4
+-config=572
++config=60
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+ [event23:base-stat]
+ fd=23
+ type=4
+-config=706
++config=2097421
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+ [event24:base-stat]
+ fd=24
+ type=4
++config=316
++optional=1
++
++# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++[event25:base-stat]
++fd=25
++type=4
++config=412
++optional=1
++
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++[event26:base-stat]
++fd=26
++type=4
++config=572
++optional=1
++
++# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++[event27:base-stat]
++fd=27
++type=4
++config=706
++optional=1
++
++# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++[event28:base-stat]
++fd=28
++type=4
+ config=270
+ optional=1
+diff --git a/tools/perf/tests/attr/test-stat-detailed-1 b/tools/perf/tests/attr/test-stat-detailed-1
+index 1c52cb05c900d7..3d500d3e0c5c8a 100644
+--- a/tools/perf/tests/attr/test-stat-detailed-1
++++ b/tools/perf/tests/attr/test-stat-detailed-1
+@@ -90,99 +90,143 @@ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
++# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+ [event13:base-stat]
+ fd=13
+ group_fd=11
+ type=4
+-config=33280
++config=33024
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
++# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+ [event14:base-stat]
+ fd=14
+ group_fd=11
+ type=4
+-config=33536
++config=33280
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
++# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+ [event15:base-stat]
+ fd=15
+ group_fd=11
+ type=4
+-config=33024
++config=33536
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
++# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+ [event16:base-stat]
+ fd=16
++group_fd=11
+ type=4
+-config=4109
++config=33792
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
++# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+ [event17:base-stat]
+ fd=17
++group_fd=11
+ type=4
+-config=17039629
++config=34048
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
++# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+ [event18:base-stat]
+ fd=18
++group_fd=11
+ type=4
+-config=60
++config=34304
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
++# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+ [event19:base-stat]
+ fd=19
++group_fd=11
+ type=4
+-config=2097421
++config=34560
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
++# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+ [event20:base-stat]
+ fd=20
+ type=4
+-config=316
++config=4109
+ optional=1
+
+-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+ [event21:base-stat]
+ fd=21
+ type=4
+-config=412
++config=17039629
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+ [event22:base-stat]
+ fd=22
+ type=4
+-config=572
++config=60
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+ [event23:base-stat]
+ fd=23
+ type=4
+-config=706
++config=2097421
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+ [event24:base-stat]
+ fd=24
+ type=4
++config=316
++optional=1
++
++# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++[event25:base-stat]
++fd=25
++type=4
++config=412
++optional=1
++
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++[event26:base-stat]
++fd=26
++type=4
++config=572
++optional=1
++
++# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++[event27:base-stat]
++fd=27
++type=4
++config=706
++optional=1
++
++# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++[event28:base-stat]
++fd=28
++type=4
+ config=270
+ optional=1
+
+@@ -190,8 +234,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event25:base-stat]
+-fd=25
++[event29:base-stat]
++fd=29
+ type=3
+ config=0
+ optional=1
+@@ -200,8 +244,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event26:base-stat]
+-fd=26
++[event30:base-stat]
++fd=30
+ type=3
+ config=65536
+ optional=1
+@@ -210,8 +254,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event27:base-stat]
+-fd=27
++[event31:base-stat]
++fd=31
+ type=3
+ config=2
+ optional=1
+@@ -220,8 +264,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event28:base-stat]
+-fd=28
++[event32:base-stat]
++fd=32
+ type=3
+ config=65538
+ optional=1
+diff --git a/tools/perf/tests/attr/test-stat-detailed-2 b/tools/perf/tests/attr/test-stat-detailed-2
+index 7e961d24a885a7..01777a63752fe6 100644
+--- a/tools/perf/tests/attr/test-stat-detailed-2
++++ b/tools/perf/tests/attr/test-stat-detailed-2
+@@ -90,99 +90,143 @@ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
++# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+ [event13:base-stat]
+ fd=13
+ group_fd=11
+ type=4
+-config=33280
++config=33024
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
++# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+ [event14:base-stat]
+ fd=14
+ group_fd=11
+ type=4
+-config=33536
++config=33280
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
++# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+ [event15:base-stat]
+ fd=15
+ group_fd=11
+ type=4
+-config=33024
++config=33536
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
++# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+ [event16:base-stat]
+ fd=16
++group_fd=11
+ type=4
+-config=4109
++config=33792
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
++# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+ [event17:base-stat]
+ fd=17
++group_fd=11
+ type=4
+-config=17039629
++config=34048
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
++# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+ [event18:base-stat]
+ fd=18
++group_fd=11
+ type=4
+-config=60
++config=34304
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
++# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+ [event19:base-stat]
+ fd=19
++group_fd=11
+ type=4
+-config=2097421
++config=34560
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
++# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+ [event20:base-stat]
+ fd=20
+ type=4
+-config=316
++config=4109
+ optional=1
+
+-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+ [event21:base-stat]
+ fd=21
+ type=4
+-config=412
++config=17039629
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+ [event22:base-stat]
+ fd=22
+ type=4
+-config=572
++config=60
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+ [event23:base-stat]
+ fd=23
+ type=4
+-config=706
++config=2097421
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+ [event24:base-stat]
+ fd=24
+ type=4
++config=316
++optional=1
++
++# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++[event25:base-stat]
++fd=25
++type=4
++config=412
++optional=1
++
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++[event26:base-stat]
++fd=26
++type=4
++config=572
++optional=1
++
++# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++[event27:base-stat]
++fd=27
++type=4
++config=706
++optional=1
++
++# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++[event28:base-stat]
++fd=28
++type=4
+ config=270
+ optional=1
+
+@@ -190,8 +234,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event25:base-stat]
+-fd=25
++[event29:base-stat]
++fd=29
+ type=3
+ config=0
+ optional=1
+@@ -200,8 +244,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event26:base-stat]
+-fd=26
++[event30:base-stat]
++fd=30
+ type=3
+ config=65536
+ optional=1
+@@ -210,8 +254,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event27:base-stat]
+-fd=27
++[event31:base-stat]
++fd=31
+ type=3
+ config=2
+ optional=1
+@@ -220,8 +264,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event28:base-stat]
+-fd=28
++[event32:base-stat]
++fd=32
+ type=3
+ config=65538
+ optional=1
+@@ -230,8 +274,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1I << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event29:base-stat]
+-fd=29
++[event33:base-stat]
++fd=33
+ type=3
+ config=1
+ optional=1
+@@ -240,8 +284,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1I << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event30:base-stat]
+-fd=30
++[event34:base-stat]
++fd=34
+ type=3
+ config=65537
+ optional=1
+@@ -250,8 +294,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_DTLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event31:base-stat]
+-fd=31
++[event35:base-stat]
++fd=35
+ type=3
+ config=3
+ optional=1
+@@ -260,8 +304,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_DTLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event32:base-stat]
+-fd=32
++[event36:base-stat]
++fd=36
+ type=3
+ config=65539
+ optional=1
+@@ -270,8 +314,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_ITLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event33:base-stat]
+-fd=33
++[event37:base-stat]
++fd=37
+ type=3
+ config=4
+ optional=1
+@@ -280,8 +324,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_ITLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event34:base-stat]
+-fd=34
++[event38:base-stat]
++fd=38
+ type=3
+ config=65540
+ optional=1
+diff --git a/tools/perf/tests/attr/test-stat-detailed-3 b/tools/perf/tests/attr/test-stat-detailed-3
+index e50535f45977c6..8400abd7e1e488 100644
+--- a/tools/perf/tests/attr/test-stat-detailed-3
++++ b/tools/perf/tests/attr/test-stat-detailed-3
+@@ -90,99 +90,143 @@ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
++# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
+ [event13:base-stat]
+ fd=13
+ group_fd=11
+ type=4
+-config=33280
++config=33024
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-be-bound (0x8300)
++# PERF_TYPE_RAW / topdown-fe-bound (0x8200)
+ [event14:base-stat]
+ fd=14
+ group_fd=11
+ type=4
+-config=33536
++config=33280
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / topdown-bad-spec (0x8100)
++# PERF_TYPE_RAW / topdown-be-bound (0x8300)
+ [event15:base-stat]
+ fd=15
+ group_fd=11
+ type=4
+-config=33024
++config=33536
+ disabled=0
+ enable_on_exec=0
+ read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
++# PERF_TYPE_RAW / topdown-heavy-ops (0x8400)
+ [event16:base-stat]
+ fd=16
++group_fd=11
+ type=4
+-config=4109
++config=33792
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
++# PERF_TYPE_RAW / topdown-br-mispredict (0x8500)
+ [event17:base-stat]
+ fd=17
++group_fd=11
+ type=4
+-config=17039629
++config=34048
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
++# PERF_TYPE_RAW / topdown-fetch-lat (0x8600)
+ [event18:base-stat]
+ fd=18
++group_fd=11
+ type=4
+-config=60
++config=34304
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
++# PERF_TYPE_RAW / topdown-mem-bound (0x8700)
+ [event19:base-stat]
+ fd=19
++group_fd=11
+ type=4
+-config=2097421
++config=34560
++disabled=0
++enable_on_exec=0
++read_format=15
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
++# PERF_TYPE_RAW / INT_MISC.UOP_DROPPING
+ [event20:base-stat]
+ fd=20
+ type=4
+-config=316
++config=4109
+ optional=1
+
+-# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++# PERF_TYPE_RAW / cpu/INT_MISC.RECOVERY_CYCLES,cmask=1,edge/
+ [event21:base-stat]
+ fd=21
+ type=4
+-config=412
++config=17039629
+ optional=1
+
+-# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.THREAD
+ [event22:base-stat]
+ fd=22
+ type=4
+-config=572
++config=60
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++# PERF_TYPE_RAW / INT_MISC.RECOVERY_CYCLES_ANY
+ [event23:base-stat]
+ fd=23
+ type=4
+-config=706
++config=2097421
+ optional=1
+
+-# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.REF_XCLK
+ [event24:base-stat]
+ fd=24
+ type=4
++config=316
++optional=1
++
++# PERF_TYPE_RAW / IDQ_UOPS_NOT_DELIVERED.CORE
++[event25:base-stat]
++fd=25
++type=4
++config=412
++optional=1
++
++# PERF_TYPE_RAW / CPU_CLK_UNHALTED.ONE_THREAD_ACTIVE
++[event26:base-stat]
++fd=26
++type=4
++config=572
++optional=1
++
++# PERF_TYPE_RAW / UOPS_RETIRED.RETIRE_SLOTS
++[event27:base-stat]
++fd=27
++type=4
++config=706
++optional=1
++
++# PERF_TYPE_RAW / UOPS_ISSUED.ANY
++[event28:base-stat]
++fd=28
++type=4
+ config=270
+ optional=1
+
+@@ -190,8 +234,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event25:base-stat]
+-fd=25
++[event29:base-stat]
++fd=29
+ type=3
+ config=0
+ optional=1
+@@ -200,8 +244,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event26:base-stat]
+-fd=26
++[event30:base-stat]
++fd=30
+ type=3
+ config=65536
+ optional=1
+@@ -210,8 +254,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event27:base-stat]
+-fd=27
++[event31:base-stat]
++fd=31
+ type=3
+ config=2
+ optional=1
+@@ -220,8 +264,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_LL << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event28:base-stat]
+-fd=28
++[event32:base-stat]
++fd=32
+ type=3
+ config=65538
+ optional=1
+@@ -230,8 +274,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1I << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event29:base-stat]
+-fd=29
++[event33:base-stat]
++fd=33
+ type=3
+ config=1
+ optional=1
+@@ -240,8 +284,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1I << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event30:base-stat]
+-fd=30
++[event34:base-stat]
++fd=34
+ type=3
+ config=65537
+ optional=1
+@@ -250,8 +294,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_DTLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event31:base-stat]
+-fd=31
++[event35:base-stat]
++fd=35
+ type=3
+ config=3
+ optional=1
+@@ -260,8 +304,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_DTLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event32:base-stat]
+-fd=32
++[event36:base-stat]
++fd=36
+ type=3
+ config=65539
+ optional=1
+@@ -270,8 +314,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_ITLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event33:base-stat]
+-fd=33
++[event37:base-stat]
++fd=37
+ type=3
+ config=4
+ optional=1
+@@ -280,8 +324,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_ITLB << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_READ << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event34:base-stat]
+-fd=34
++[event38:base-stat]
++fd=38
+ type=3
+ config=65540
+ optional=1
+@@ -290,8 +334,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_ACCESS << 16)
+-[event35:base-stat]
+-fd=35
++[event39:base-stat]
++fd=39
+ type=3
+ config=512
+ optional=1
+@@ -300,8 +344,8 @@ optional=1
+ # PERF_COUNT_HW_CACHE_L1D << 0 |
+ # (PERF_COUNT_HW_CACHE_OP_PREFETCH << 8) |
+ # (PERF_COUNT_HW_CACHE_RESULT_MISS << 16)
+-[event36:base-stat]
+-fd=36
++[event40:base-stat]
++fd=40
+ type=3
+ config=66048
+ optional=1
+diff --git a/tools/perf/util/bpf-filter.c b/tools/perf/util/bpf-filter.c
+index e87b6789eb9ef3..a4fdf6911ec1c3 100644
+--- a/tools/perf/util/bpf-filter.c
++++ b/tools/perf/util/bpf-filter.c
+@@ -375,7 +375,7 @@ static int create_idx_hash(struct evsel *evsel, struct perf_bpf_filter_entry *en
+ pfi = zalloc(sizeof(*pfi));
+ if (pfi == NULL) {
+ pr_err("Cannot save pinned filter index\n");
+- goto err;
++ return -ENOMEM;
+ }
+
+ pfi->evsel = evsel;
+diff --git a/tools/perf/util/cs-etm.c b/tools/perf/util/cs-etm.c
+index 40f047baef8100..0bf9e5c27b599b 100644
+--- a/tools/perf/util/cs-etm.c
++++ b/tools/perf/util/cs-etm.c
+@@ -2490,12 +2490,6 @@ static void cs_etm__clear_all_traceid_queues(struct cs_etm_queue *etmq)
+
+ /* Ignore return value */
+ cs_etm__process_traceid_queue(etmq, tidq);
+-
+- /*
+- * Generate an instruction sample with the remaining
+- * branchstack entries.
+- */
+- cs_etm__flush(etmq, tidq);
+ }
+ }
+
+@@ -2638,7 +2632,7 @@ static int cs_etm__process_timestamped_queues(struct cs_etm_auxtrace *etm)
+
+ while (1) {
+ if (!etm->heap.heap_cnt)
+- goto out;
++ break;
+
+ /* Take the entry at the top of the min heap */
+ cs_queue_nr = etm->heap.heap_array[0].queue_nr;
+@@ -2721,6 +2715,23 @@ static int cs_etm__process_timestamped_queues(struct cs_etm_auxtrace *etm)
+ ret = auxtrace_heap__add(&etm->heap, cs_queue_nr, cs_timestamp);
+ }
+
++ for (i = 0; i < etm->queues.nr_queues; i++) {
++ struct int_node *inode;
++
++ etmq = etm->queues.queue_array[i].priv;
++ if (!etmq)
++ continue;
++
++ intlist__for_each_entry(inode, etmq->traceid_queues_list) {
++ int idx = (int)(intptr_t)inode->priv;
++
++ /* Flush any remaining branch stack entries */
++ tidq = etmq->traceid_queues[idx];
++ ret = cs_etm__end_block(etmq, tidq);
++ if (ret)
++ return ret;
++ }
++ }
+ out:
+ return ret;
+ }
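[Annotation: the cs-etm.c hunks move the branch-stack flush out of the per-queue clear path and into a final pass over every traceid queue once the timestamp-ordered heap has drained, so samples buffered at end of trace are emitted exactly once. Schematically, with hypothetical names rather than the cs-etm structures:

    #include <stdio.h>

    static int end_block(int queue) { printf("flush queue %d\n", queue); return 0; }

    int main(void)
    {
            int nr_queues = 3;

            /* ... timestamp-ordered processing loop has emptied the heap ... */
            for (int i = 0; i < nr_queues; i++) {
                    int ret = end_block(i);   /* flush remaining entries */
                    if (ret)
                            return ret;
            }
            return 0;
    }

End annotation.]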
+diff --git a/tools/perf/util/disasm.c b/tools/perf/util/disasm.c
+index f05ba7739c1e91..648e8d87ef1945 100644
+--- a/tools/perf/util/disasm.c
++++ b/tools/perf/util/disasm.c
+@@ -1627,12 +1627,12 @@ static int symbol__disassemble_capstone(char *filename, struct symbol *sym,
+ u64 start = map__rip_2objdump(map, sym->start);
+ u64 len;
+ u64 offset;
+- int i, count;
++ int i, count, free_count;
+ bool is_64bit = false;
+ bool needs_cs_close = false;
+ u8 *buf = NULL;
+ csh handle;
+- cs_insn *insn;
++ cs_insn *insn = NULL;
+ char disasm_buf[512];
+ struct disasm_line *dl;
+
+@@ -1664,7 +1664,7 @@ static int symbol__disassemble_capstone(char *filename, struct symbol *sym,
+
+ needs_cs_close = true;
+
+- count = cs_disasm(handle, buf, len, start, len, &insn);
++ free_count = count = cs_disasm(handle, buf, len, start, len, &insn);
+ for (i = 0, offset = 0; i < count; i++) {
+ int printed;
+
+@@ -1702,8 +1702,11 @@ static int symbol__disassemble_capstone(char *filename, struct symbol *sym,
+ }
+
+ out:
+- if (needs_cs_close)
++ if (needs_cs_close) {
+ cs_close(&handle);
++ if (free_count > 0)
++ cs_free(insn, free_count);
++ }
+ free(buf);
+ return count < 0 ? count : 0;
+
+@@ -1717,7 +1720,7 @@ static int symbol__disassemble_capstone(char *filename, struct symbol *sym,
+ */
+ list_for_each_entry_safe(dl, tmp, &notes->src->source, al.node) {
+ list_del(&dl->al.node);
+- free(dl);
++ disasm_line__free(dl);
+ }
+ }
+ count = -1;
+@@ -1782,7 +1785,7 @@ static int symbol__disassemble_raw(char *filename, struct symbol *sym,
+ sprintf(args->line, "%x", line[i]);
+ dl = disasm_line__new(args);
+ if (dl == NULL)
+- goto err;
++ break;
+
+ annotation_line__add(&dl->al, &notes->src->source);
+ offset += 4;
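[Annotation: the disasm.c hunk plugs a leak in the capstone path: cs_disasm() heap-allocates the instruction array, and it must be released with cs_free() using the count cs_disasm() returned, which the code now stashes in free_count before the loop can clobber count. A hedged standalone example of that pairing; it assumes libcapstone is installed, and the x86-64 bytes are chosen arbitrarily:

    #include <capstone/capstone.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
            csh handle;
            cs_insn *insn = NULL;
            const uint8_t code[] = { 0x55, 0x48, 0x89, 0xe5 }; /* push %rbp; mov %rsp,%rbp */
            size_t count, free_count;

            if (cs_open(CS_ARCH_X86, CS_MODE_64, &handle) != CS_ERR_OK)
                    return 1;

            /* keep a copy of the allocation count before anything reuses it */
            free_count = count = cs_disasm(handle, code, sizeof(code), 0x1000, 0, &insn);
            for (size_t i = 0; i < count; i++)
                    printf("0x%" PRIx64 ":\t%s\t%s\n", insn[i].address,
                           insn[i].mnemonic, insn[i].op_str);

            if (free_count > 0)
                    cs_free(insn, free_count);   /* must pair with cs_disasm() */
            cs_close(&handle);
            return 0;
    }

End annotation.]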
+diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
+index f14b7e6ff1dcc2..a9df84692d4a88 100644
+--- a/tools/perf/util/evlist.c
++++ b/tools/perf/util/evlist.c
+@@ -48,6 +48,7 @@
+ #include <sys/mman.h>
+ #include <sys/prctl.h>
+ #include <sys/timerfd.h>
++#include <sys/wait.h>
+
+ #include <linux/bitops.h>
+ #include <linux/hash.h>
+@@ -1484,6 +1485,8 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
+ int child_ready_pipe[2], go_pipe[2];
+ char bf;
+
++ evlist->workload.cork_fd = -1;
++
+ if (pipe(child_ready_pipe) < 0) {
+ perror("failed to create 'ready' pipe");
+ return -1;
+@@ -1536,7 +1539,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
+ * For cancelling the workload without actually running it,
+ * the parent will just close workload.cork_fd, without writing
+ * anything, i.e. read will return zero and we just exit()
+- * here.
++ * here (See evlist__cancel_workload()).
+ */
+ if (ret != 1) {
+ if (ret == -1)
+@@ -1600,7 +1603,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target, const
+
+ int evlist__start_workload(struct evlist *evlist)
+ {
+- if (evlist->workload.cork_fd > 0) {
++ if (evlist->workload.cork_fd >= 0) {
+ char bf = 0;
+ int ret;
+ /*
+@@ -1611,12 +1614,24 @@ int evlist__start_workload(struct evlist *evlist)
+ perror("unable to write to pipe");
+
+ close(evlist->workload.cork_fd);
++ evlist->workload.cork_fd = -1;
+ return ret;
+ }
+
+ return 0;
+ }
+
++void evlist__cancel_workload(struct evlist *evlist)
++{
++ int status;
++
++ if (evlist->workload.cork_fd >= 0) {
++ close(evlist->workload.cork_fd);
++ evlist->workload.cork_fd = -1;
++ waitpid(evlist->workload.pid, &status, WNOHANG);
++ }
++}
++
+ int evlist__parse_sample(struct evlist *evlist, union perf_event *event, struct perf_sample *sample)
+ {
+ struct evsel *evsel = evlist__event2evsel(evlist, event);
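[Annotation: the evlist.c/evlist.h hunks harden the workload "cork" protocol: cork_fd now starts at -1 (fd 0 is a valid descriptor), starting the workload writes one byte, and the new evlist__cancel_workload() closes the pipe without writing so the child's blocking read() returns 0 and it exits, after which the parent reaps it. A standalone sketch of the protocol:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
            int go_pipe[2];
            pid_t pid;

            if (pipe(go_pipe) < 0)
                    return 1;

            pid = fork();
            if (pid == 0) {                   /* workload child */
                    char bf;
                    int ret;

                    close(go_pipe[1]);
                    ret = read(go_pipe[0], &bf, 1);
                    if (ret != 1)
                            _exit(1);         /* cancelled: pipe closed, read 0 */
                    puts("workload started");
                    _exit(0);
            }

            close(go_pipe[0]);
            /* cancel instead of start: close without writing, then reap */
            close(go_pipe[1]);
            waitpid(pid, NULL, 0);
            puts("workload cancelled");
            return 0;
    }

End annotation.]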
+diff --git a/tools/perf/util/evlist.h b/tools/perf/util/evlist.h
+index bcc1c6984bb58a..888fda751e1a6e 100644
+--- a/tools/perf/util/evlist.h
++++ b/tools/perf/util/evlist.h
+@@ -186,6 +186,7 @@ int evlist__prepare_workload(struct evlist *evlist, struct target *target,
+ const char *argv[], bool pipe_output,
+ void (*exec_error)(int signo, siginfo_t *info, void *ucontext));
+ int evlist__start_workload(struct evlist *evlist);
++void evlist__cancel_workload(struct evlist *evlist);
+
+ struct option;
+
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index fad227b625d155..4f0ac998b0ccfd 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -1343,7 +1343,7 @@ static int maps__set_module_path(struct maps *maps, const char *path, struct kmo
+ * we need to update the symtab_type if needed.
+ */
+ if (m->comp && is_kmod_dso(dso)) {
+- dso__set_symtab_type(dso, dso__symtab_type(dso));
++ dso__set_symtab_type(dso, dso__symtab_type(dso)+1);
+ dso__set_comp(dso, m->comp);
+ }
+ map__put(map);
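[Annotation: the one-liner in machine.c restores behavior an accessor refactor appears to have dropped: the symtab type enum keeps each compressed-module variant adjacent to its plain variant, and the original code stepped the type with a post-increment, which dso__set_symtab_type(dso, dso__symtab_type(dso)) turned into a self-assignment. Illustrative sketch with a simplified enum and hypothetical accessors:

    enum symtab_type {
            KMODULE,            /* plain module on disk */
            KMODULE_COMP,       /* same module, compressed */
    };

    struct dso_s { enum symtab_type symtab_type; };

    static enum symtab_type dso_symtab_type(struct dso_s *d) { return d->symtab_type; }
    static void dso_set_symtab_type(struct dso_s *d, enum symtab_type t) { d->symtab_type = t; }

    static void mark_compressed(struct dso_s *d)
    {
            /* was: d->symtab_type++;  the refactor lost the increment */
            dso_set_symtab_type(d, dso_symtab_type(d) + 1);
    }

End annotation.]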
+diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
+index 051feb93ed8d40..bf5090f5220bbd 100644
+--- a/tools/perf/util/mem-events.c
++++ b/tools/perf/util/mem-events.c
+@@ -366,6 +366,12 @@ static const char * const mem_lvl[] = {
+ };
+
+ static const char * const mem_lvlnum[] = {
++ [PERF_MEM_LVLNUM_L1] = "L1",
++ [PERF_MEM_LVLNUM_L2] = "L2",
++ [PERF_MEM_LVLNUM_L3] = "L3",
++ [PERF_MEM_LVLNUM_L4] = "L4",
++ [PERF_MEM_LVLNUM_L2_MHB] = "L2 MHB",
++ [PERF_MEM_LVLNUM_MSC] = "Memory-side Cache",
+ [PERF_MEM_LVLNUM_UNC] = "Uncached",
+ [PERF_MEM_LVLNUM_CXL] = "CXL",
+ [PERF_MEM_LVLNUM_IO] = "I/O",
+@@ -448,7 +454,7 @@ int perf_mem__lvl_scnprintf(char *out, size_t sz, const struct mem_info *mem_inf
+ if (mem_lvlnum[lvl])
+ l += scnprintf(out + l, sz - l, mem_lvlnum[lvl]);
+ else
+- l += scnprintf(out + l, sz - l, "L%d", lvl);
++ l += scnprintf(out + l, sz - l, "Unknown level %d", lvl);
+
+ l += scnprintf(out + l, sz - l, " %s", hit_miss);
+ return l;
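[Annotation: the mem-events.c hunk names the known PERF_MEM_LVLNUM_* encodings and stops printing unrecognized values as "L%d", which mislabeled non-cache sources as cache levels. A standalone sketch of the lookup-with-fallback shape (values illustrative):

    #include <stdio.h>

    static const char * const lvlnum[] = {
            [1] = "L1", [2] = "L2", [3] = "L3", [4] = "L4",
    };

    static void print_lvl(unsigned lvl)
    {
            if (lvl < sizeof(lvlnum) / sizeof(lvlnum[0]) && lvlnum[lvl])
                    printf("%s\n", lvlnum[lvl]);
            else
                    printf("Unknown level %u\n", lvl);
    }

    int main(void)
    {
            print_lvl(2);    /* L2 */
            print_lvl(9);    /* Unknown level 9 */
            return 0;
    }

End annotation.]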
+diff --git a/tools/perf/util/pfm.c b/tools/perf/util/pfm.c
+index 5ccfe4b64cdfe4..0dacc133ed3960 100644
+--- a/tools/perf/util/pfm.c
++++ b/tools/perf/util/pfm.c
+@@ -233,7 +233,7 @@ print_libpfm_event(const struct print_callbacks *print_cb, void *print_state,
+ }
+
+ if (is_libpfm_event_supported(name, cpus, threads)) {
+- print_cb->print_event(print_state, pinfo->name, topic,
++ print_cb->print_event(print_state, topic, pinfo->name,
+ name, info->equiv,
+ /*scale_unit=*/NULL,
+ /*deprecated=*/NULL, "PFM event",
+@@ -267,8 +267,8 @@ print_libpfm_event(const struct print_callbacks *print_cb, void *print_state,
+ continue;
+
+ print_cb->print_event(print_state,
+- pinfo->name,
+ topic,
++ pinfo->name,
+ name, /*alias=*/NULL,
+ /*scale_unit=*/NULL,
+ /*deprecated=*/NULL, "PFM event",
+diff --git a/tools/perf/util/pmus.c b/tools/perf/util/pmus.c
+index 52109af5f2f129..d7d67e09d759bb 100644
+--- a/tools/perf/util/pmus.c
++++ b/tools/perf/util/pmus.c
+@@ -494,8 +494,8 @@ void perf_pmus__print_pmu_events(const struct print_callbacks *print_cb, void *p
+ goto free;
+
+ print_cb->print_event(print_state,
+- aliases[j].pmu_name,
+ aliases[j].topic,
++ aliases[j].pmu_name,
+ aliases[j].name,
+ aliases[j].alias,
+ aliases[j].scale_unit,
+diff --git a/tools/perf/util/probe-finder.c b/tools/perf/util/probe-finder.c
+index 630e16c54ed5cb..a30f88ed030044 100644
+--- a/tools/perf/util/probe-finder.c
++++ b/tools/perf/util/probe-finder.c
+@@ -1379,6 +1379,10 @@ int debuginfo__find_trace_events(struct debuginfo *dbg,
+ if (ret >= 0 && tf.pf.skip_empty_arg)
+ ret = fill_empty_trace_arg(pev, tf.tevs, tf.ntevs);
+
++#if _ELFUTILS_PREREQ(0, 142)
++ dwarf_cfi_end(tf.pf.cfi_eh);
++#endif
++
+ if (ret < 0 || tf.ntevs == 0) {
+ for (i = 0; i < tf.ntevs; i++)
+ clear_probe_trace_event(&tf.tevs[i]);
+@@ -1583,8 +1587,21 @@ int debuginfo__find_probe_point(struct debuginfo *dbg, u64 addr,
+
+ /* Find a corresponding function (name, baseline and baseaddr) */
+ if (die_find_realfunc(&cudie, (Dwarf_Addr)addr, &spdie)) {
+- /* Get function entry information */
+- func = basefunc = dwarf_diename(&spdie);
++ /*
++ * Get function entry information.
++ *
++ * As described in the document DWARF Debugging Information
++ * Format Version 5, section 2.22 Linkage Names, "mangled names,
++ * are used in various ways, ... to distinguish multiple
++ * entities that have the same name".
++ *
++ * Firstly try to get distinct linkage name, if fail then
++ * rollback to get associated name in DIE.
++ */
++ func = basefunc = die_get_linkage_name(&spdie);
++ if (!func)
++ func = basefunc = dwarf_diename(&spdie);
++
+ if (!func ||
+ die_entrypc(&spdie, &baseaddr) != 0 ||
+ dwarf_decl_line(&spdie, &baseline) != 0) {
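[Annotation: the probe-finder.c hunk prefers the DWARF linkage name (the mangled symbol, which disambiguates same-named entities) and falls back to the plain DIE name. A hedged fragment showing the kind of elfutils lookup involved; this is not perf's die_get_linkage_name() itself (that helper lives in dwarf-aux.c), just a compilable illustration against libdw:

    #include <dwarf.h>
    #include <elfutils/libdw.h>

    static const char *die_linkage_or_name(Dwarf_Die *die)
    {
            Dwarf_Attribute attr;
            const char *name = NULL;

            if (dwarf_attr_integrate(die, DW_AT_linkage_name, &attr))
                    name = dwarf_formstring(&attr);
            if (!name)
                    name = dwarf_diename(die);  /* fallback: plain DIE name */
            return name;
    }

End annotation.]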
+diff --git a/tools/perf/util/probe-finder.h b/tools/perf/util/probe-finder.h
+index 3add5ff516e12d..724db829b49e02 100644
+--- a/tools/perf/util/probe-finder.h
++++ b/tools/perf/util/probe-finder.h
+@@ -64,9 +64,9 @@ struct probe_finder {
+
+ /* For variable searching */
+ #if _ELFUTILS_PREREQ(0, 142)
+- /* Call Frame Information from .eh_frame */
++ /* Call Frame Information from .eh_frame. Owned by this struct. */
+ Dwarf_CFI *cfi_eh;
+- /* Call Frame Information from .debug_frame */
++ /* Call Frame Information from .debug_frame. Not owned. */
+ Dwarf_CFI *cfi_dbg;
+ #endif
+ Dwarf_Op *fb_ops; /* Frame base attribute */
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 089220aaa5c929..a5ebee8b23bbe3 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -5385,6 +5385,9 @@ static int parse_cpu_str(char *cpu_str, cpu_set_t *cpu_set, int cpu_set_size)
+ if (*next == '-') /* no negative cpu numbers */
+ return 1;
+
++ if (*next == '\0' || *next == '\n')
++ break;
++
+ start = strtoul(next, &next, 10);
+
+ if (start >= CPU_SUBSET_MAXCPUS)
+@@ -9781,7 +9784,7 @@ void cmdline(int argc, char **argv)
+ * Parse some options early, because they may make other options invalid,
+ * like adding the MSR counter with --add and at the same time using --no-msr.
+ */
+- while ((opt = getopt_long_only(argc, argv, "MPn:", long_options, &option_index)) != -1) {
++ while ((opt = getopt_long_only(argc, argv, "+MPn:", long_options, &option_index)) != -1) {
+ switch (opt) {
+ case 'M':
+ no_msr = 1;
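[Annotation: the turbostat optstring change prepends '+', which with glibc makes getopt stop at the first non-option argument instead of permuting argv, so options belonging to a forwarded child command are no longer swallowed by turbostat's own parser (the other turbostat hunk separately stops parse_cpu_str() at end of string). Standalone demo of the '+' behavior:

    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
            int opt;

            /* try: ./a.out -M stress -t 10
             * with "+M" parsing stops at "stress"; with plain "M" glibc would
             * permute argv and also try to consume -t. */
            while ((opt = getopt(argc, argv, "+M")) != -1) {
                    if (opt == 'M')
                            puts("got -M");
            }
            printf("first non-option: %s\n", optind < argc ? argv[optind] : "(none)");
            return 0;
    }

End annotation.]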
+diff --git a/tools/testing/selftests/arm64/abi/hwcap.c b/tools/testing/selftests/arm64/abi/hwcap.c
+index f2d6007a2b983e..265654ec48b9fc 100644
+--- a/tools/testing/selftests/arm64/abi/hwcap.c
++++ b/tools/testing/selftests/arm64/abi/hwcap.c
+@@ -361,8 +361,8 @@ static void sveaes_sigill(void)
+
+ static void sveb16b16_sigill(void)
+ {
+- /* BFADD ZA.H[W0, 0], {Z0.H-Z1.H} */
+- asm volatile(".inst 0xC1E41C00" : : : );
++ /* BFADD Z0.H, Z0.H, Z0.H */
++ asm volatile(".inst 0x65000000" : : : );
+ }
+
+ static void svepmull_sigill(void)
+@@ -490,7 +490,7 @@ static const struct hwcap_data {
+ .name = "F8DP2",
+ .at_hwcap = AT_HWCAP2,
+ .hwcap_bit = HWCAP2_F8DP2,
+- .cpuinfo = "f8dp4",
++ .cpuinfo = "f8dp2",
+ .sigill_fn = f8dp2_sigill,
+ },
+ {
+diff --git a/tools/testing/selftests/arm64/mte/check_tags_inclusion.c b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
+index 2b1425b92b6991..a3d1e23fe02aff 100644
+--- a/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
++++ b/tools/testing/selftests/arm64/mte/check_tags_inclusion.c
+@@ -65,7 +65,7 @@ static int check_single_included_tags(int mem_type, int mode)
+ ptr = mte_insert_tags(ptr, BUFFER_SIZE);
+ /* Check tag value */
+ if (MT_FETCH_TAG((uintptr_t)ptr) == tag) {
+- ksft_print_msg("FAIL: wrong tag = 0x%x with include mask=0x%x\n",
++ ksft_print_msg("FAIL: wrong tag = 0x%lx with include mask=0x%x\n",
+ MT_FETCH_TAG((uintptr_t)ptr),
+ MT_INCLUDE_VALID_TAG(tag));
+ result = KSFT_FAIL;
+@@ -97,7 +97,7 @@ static int check_multiple_included_tags(int mem_type, int mode)
+ ptr = mte_insert_tags(ptr, BUFFER_SIZE);
+ /* Check tag value */
+ if (MT_FETCH_TAG((uintptr_t)ptr) < tag) {
+- ksft_print_msg("FAIL: wrong tag = 0x%x with include mask=0x%x\n",
++ ksft_print_msg("FAIL: wrong tag = 0x%lx with include mask=0x%lx\n",
+ MT_FETCH_TAG((uintptr_t)ptr),
+ MT_INCLUDE_VALID_TAGS(excl_mask));
+ result = KSFT_FAIL;
+diff --git a/tools/testing/selftests/arm64/mte/mte_common_util.c b/tools/testing/selftests/arm64/mte/mte_common_util.c
+index 00ffd34c66d301..1120f5aa76550f 100644
+--- a/tools/testing/selftests/arm64/mte/mte_common_util.c
++++ b/tools/testing/selftests/arm64/mte/mte_common_util.c
+@@ -38,7 +38,7 @@ void mte_default_handler(int signum, siginfo_t *si, void *uc)
+ if (cur_mte_cxt.trig_si_code == si->si_code)
+ cur_mte_cxt.fault_valid = true;
+ else
+- ksft_print_msg("Got unexpected SEGV_MTEAERR at pc=$lx, fault addr=%lx\n",
++ ksft_print_msg("Got unexpected SEGV_MTEAERR at pc=%llx, fault addr=%lx\n",
+ ((ucontext_t *)uc)->uc_mcontext.pc,
+ addr);
+ return;
+@@ -64,7 +64,7 @@ void mte_default_handler(int signum, siginfo_t *si, void *uc)
+ exit(1);
+ }
+ } else if (signum == SIGBUS) {
+- ksft_print_msg("INFO: SIGBUS signal at pc=%lx, fault addr=%lx, si_code=%lx\n",
++ ksft_print_msg("INFO: SIGBUS signal at pc=%llx, fault addr=%lx, si_code=%x\n",
+ ((ucontext_t *)uc)->uc_mcontext.pc, addr, si->si_code);
+ if ((cur_mte_cxt.trig_range >= 0 &&
+ addr >= MT_CLEAR_TAG(cur_mte_cxt.trig_addr) &&
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 75016962f79563..43a02931847854 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -10,6 +10,7 @@ TOOLSDIR := $(abspath ../../..)
+ LIBDIR := $(TOOLSDIR)/lib
+ BPFDIR := $(LIBDIR)/bpf
+ TOOLSINCDIR := $(TOOLSDIR)/include
++TOOLSARCHINCDIR := $(TOOLSDIR)/arch/$(SRCARCH)/include
+ BPFTOOLDIR := $(TOOLSDIR)/bpf/bpftool
+ APIDIR := $(TOOLSINCDIR)/uapi
+ ifneq ($(O),)
+@@ -44,7 +45,7 @@ CFLAGS += -g $(OPT_FLAGS) -rdynamic \
+ -Wall -Werror -fno-omit-frame-pointer \
+ $(GENFLAGS) $(SAN_CFLAGS) $(LIBELF_CFLAGS) \
+ -I$(CURDIR) -I$(INCLUDE_DIR) -I$(GENDIR) -I$(LIBDIR) \
+- -I$(TOOLSINCDIR) -I$(APIDIR) -I$(OUTPUT)
++ -I$(TOOLSINCDIR) -I$(TOOLSARCHINCDIR) -I$(APIDIR) -I$(OUTPUT)
+ LDFLAGS += $(SAN_LDFLAGS)
+ LDLIBS += $(LIBELF_LIBS) -lz -lrt -lpthread
+
+diff --git a/tools/testing/selftests/bpf/network_helpers.h b/tools/testing/selftests/bpf/network_helpers.h
+index c72c16e1aff825..5764155b6d2518 100644
+--- a/tools/testing/selftests/bpf/network_helpers.h
++++ b/tools/testing/selftests/bpf/network_helpers.h
+@@ -1,6 +1,7 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ #ifndef __NETWORK_HELPERS_H
+ #define __NETWORK_HELPERS_H
++#include <arpa/inet.h>
+ #include <sys/socket.h>
+ #include <sys/types.h>
+ #include <linux/types.h>
+diff --git a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
+index 871d16cb95cfde..1a2f99596916fb 100644
+--- a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
++++ b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c
+@@ -5,6 +5,7 @@
+ #include <test_progs.h>
+ #include <pthread.h>
+ #include <network_helpers.h>
++#include <sys/sysinfo.h>
+
+ #include "timer_lockup.skel.h"
+
+@@ -52,6 +53,11 @@ void test_timer_lockup(void)
+ pthread_t thrds[2];
+ void *ret;
+
++ if (get_nprocs() < 2) {
++ test__skip();
++ return;
++ }
++
+ skel = timer_lockup__open_and_load();
+ if (!ASSERT_OK_PTR(skel, "timer_lockup__open_and_load"))
+ return;
+diff --git a/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
+index 43f40c4fe241ac..1c8b678e2e9a39 100644
+--- a/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
++++ b/tools/testing/selftests/bpf/progs/test_spin_lock_fail.c
+@@ -28,8 +28,8 @@ struct {
+ },
+ };
+
+-SEC(".data.A") struct bpf_spin_lock lockA;
+-SEC(".data.B") struct bpf_spin_lock lockB;
++static struct bpf_spin_lock lockA SEC(".data.A");
++static struct bpf_spin_lock lockB SEC(".data.B");
+
+ SEC("?tc")
+ int lock_id_kptr_preserve(void *ctx)
+diff --git a/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c b/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
+index bba3e37f749b86..5aaf2b065f86c2 100644
+--- a/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
++++ b/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
+@@ -7,7 +7,11 @@
+ #include "bpf_misc.h"
+
+ SEC("tp_btf/bpf_testmod_test_nullable_bare")
+-__failure __msg("R1 invalid mem access 'trusted_ptr_or_null_'")
++/* This used to be a failure test, but raw_tp nullable arguments can now
++ * directly be dereferenced, whether they have nullable annotation or not,
++ * and don't need to be explicitly checked.
++ */
++__success
+ int BPF_PROG(handle_tp_btf_nullable_bare1, struct bpf_testmod_test_read_ctx *nullable_ctx)
+ {
+ return nullable_ctx->len;
+diff --git a/tools/testing/selftests/bpf/test_progs.c b/tools/testing/selftests/bpf/test_progs.c
+index c7a70e1a1085a5..fa829a7854f24c 100644
+--- a/tools/testing/selftests/bpf/test_progs.c
++++ b/tools/testing/selftests/bpf/test_progs.c
+@@ -20,11 +20,13 @@
+
+ #include "network_helpers.h"
+
++/* backtrace() and backtrace_symbols_fd() are glibc specific,
++ * use header file when glibc is available and provide stub
++ * implementations when another libc implementation is used.
++ */
+ #ifdef __GLIBC__
+ #include <execinfo.h> /* backtrace */
+-#endif
+-
+-/* Default backtrace funcs if missing at link */
++#else
+ __weak int backtrace(void **buffer, int size)
+ {
+ return 0;
+@@ -34,6 +36,7 @@ __weak void backtrace_symbols_fd(void *const *buffer, int size, int fd)
+ {
+ dprintf(fd, "<backtrace not supported>\n");
+ }
++#endif /*__GLIBC__ */
+
+ int env_verbosity = 0;
+
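[Annotation: the test_progs.c hunk restructures the portability shim: glibc provides backtrace() in <execinfo.h>, and on musl or other libcs the test now compiles no-op stubs outright instead of relying on weak symbols resolving at link time. A standalone sketch of the guard:

    #include <stdio.h>

    #ifdef __GLIBC__
    #include <execinfo.h>
    #else
    static int backtrace(void **buffer, int size)
    {
            (void)buffer; (void)size;
            return 0;                         /* no frames available */
    }

    static void backtrace_symbols_fd(void *const *buffer, int size, int fd)
    {
            (void)buffer; (void)size;
            dprintf(fd, "<backtrace not supported>\n");
    }
    #endif

    int main(void)
    {
            void *frames[8];
            int n = backtrace(frames, 8);

            backtrace_symbols_fd(frames, n, 2 /* stderr */);
            return 0;
    }

End annotation.]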
+diff --git a/tools/testing/selftests/bpf/test_sockmap.c b/tools/testing/selftests/bpf/test_sockmap.c
+index 3e02d7267de8bb..61a747afcd05fb 100644
+--- a/tools/testing/selftests/bpf/test_sockmap.c
++++ b/tools/testing/selftests/bpf/test_sockmap.c
+@@ -56,6 +56,8 @@ static void running_handler(int a);
+ #define BPF_SOCKHASH_FILENAME "test_sockhash_kern.bpf.o"
+ #define CG_PATH "/sockmap"
+
++#define EDATAINTEGRITY 2001
++
+ /* global sockets */
+ int s1, s2, c1, c2, p1, p2;
+ int test_cnt;
+@@ -86,6 +88,10 @@ int ktls;
+ int peek_flag;
+ int skb_use_parser;
+ int txmsg_omit_skb_parser;
++int verify_push_start;
++int verify_push_len;
++int verify_pop_start;
++int verify_pop_len;
+
+ static const struct option long_options[] = {
+ {"help", no_argument, NULL, 'h' },
+@@ -418,16 +424,18 @@ static int msg_loop_sendpage(int fd, int iov_length, int cnt,
+ {
+ bool drop = opt->drop_expected;
+ unsigned char k = 0;
++ int i, j, fp;
+ FILE *file;
+- int i, fp;
+
+ file = tmpfile();
+ if (!file) {
+ perror("create file for sendpage");
+ return 1;
+ }
+- for (i = 0; i < iov_length * cnt; i++, k++)
+- fwrite(&k, sizeof(char), 1, file);
++ for (i = 0; i < cnt; i++, k = 0) {
++ for (j = 0; j < iov_length; j++, k++)
++ fwrite(&k, sizeof(char), 1, file);
++ }
+ fflush(file);
+ fseek(file, 0, SEEK_SET);
+
+@@ -510,42 +518,111 @@ static int msg_alloc_iov(struct msghdr *msg,
+ return -ENOMEM;
+ }
+
+-static int msg_verify_data(struct msghdr *msg, int size, int chunk_sz)
++/* In push or pop test, we need to do some calculations for msg_verify_data */
++static void msg_verify_date_prep(void)
+ {
+- int i, j = 0, bytes_cnt = 0;
+- unsigned char k = 0;
++ int push_range_end = txmsg_start_push + txmsg_end_push - 1;
++ int pop_range_end = txmsg_start_pop + txmsg_pop - 1;
++
++ if (txmsg_end_push && txmsg_pop &&
++ txmsg_start_push <= pop_range_end && txmsg_start_pop <= push_range_end) {
++ /* The push range and the pop range overlap */
++ int overlap_len;
++
++ verify_push_start = txmsg_start_push;
++ verify_pop_start = txmsg_start_pop;
++ if (txmsg_start_push < txmsg_start_pop)
++ overlap_len = min(push_range_end - txmsg_start_pop + 1, txmsg_pop);
++ else
++ overlap_len = min(pop_range_end - txmsg_start_push + 1, txmsg_end_push);
++ verify_push_len = max(txmsg_end_push - overlap_len, 0);
++ verify_pop_len = max(txmsg_pop - overlap_len, 0);
++ } else {
++ /* Otherwise */
++ verify_push_start = txmsg_start_push;
++ verify_pop_start = txmsg_start_pop;
++ verify_push_len = txmsg_end_push;
++ verify_pop_len = txmsg_pop;
++ }
++}
++
++static int msg_verify_data(struct msghdr *msg, int size, int chunk_sz,
++ unsigned char *k_p, int *bytes_cnt_p,
++ int *check_cnt_p, int *push_p)
++{
++ int bytes_cnt = *bytes_cnt_p, check_cnt = *check_cnt_p, push = *push_p;
++ unsigned char k = *k_p;
++ int i, j;
+
+- for (i = 0; i < msg->msg_iovlen; i++) {
++ for (i = 0, j = 0; i < msg->msg_iovlen && size; i++, j = 0) {
+ unsigned char *d = msg->msg_iov[i].iov_base;
+
+ /* Special case test for skb ingress + ktls */
+ if (i == 0 && txmsg_ktls_skb) {
+ if (msg->msg_iov[i].iov_len < 4)
+- return -EIO;
++ return -EDATAINTEGRITY;
+ if (memcmp(d, "PASS", 4) != 0) {
+ fprintf(stderr,
+ "detected skb data error with skb ingress update @iov[%i]:%i \"%02x %02x %02x %02x\" != \"PASS\"\n",
+ i, 0, d[0], d[1], d[2], d[3]);
+- return -EIO;
++ return -EDATAINTEGRITY;
+ }
+ j = 4; /* advance index past PASS header */
+ }
+
+ for (; j < msg->msg_iov[i].iov_len && size; j++) {
++ if (push > 0 &&
++ check_cnt == verify_push_start + verify_push_len - push) {
++ int skipped;
++revisit_push:
++ skipped = push;
++ if (j + push >= msg->msg_iov[i].iov_len)
++ skipped = msg->msg_iov[i].iov_len - j;
++ push -= skipped;
++ size -= skipped;
++ j += skipped - 1;
++ check_cnt += skipped;
++ continue;
++ }
++
++ if (verify_pop_len > 0 && check_cnt == verify_pop_start) {
++ bytes_cnt += verify_pop_len;
++ check_cnt += verify_pop_len;
++ k += verify_pop_len;
++
++ if (bytes_cnt == chunk_sz) {
++ k = 0;
++ bytes_cnt = 0;
++ check_cnt = 0;
++ push = verify_push_len;
++ }
++
++ if (push > 0 &&
++ check_cnt == verify_push_start + verify_push_len - push)
++ goto revisit_push;
++ }
++
+ if (d[j] != k++) {
+ fprintf(stderr,
+ "detected data corruption @iov[%i]:%i %02x != %02x, %02x ?= %02x\n",
+ i, j, d[j], k - 1, d[j+1], k);
+- return -EIO;
++ return -EDATAINTEGRITY;
+ }
+ bytes_cnt++;
++ check_cnt++;
+ if (bytes_cnt == chunk_sz) {
+ k = 0;
+ bytes_cnt = 0;
++ check_cnt = 0;
++ push = verify_push_len;
+ }
+ size--;
+ }
+ }
++ *k_p = k;
++ *bytes_cnt_p = bytes_cnt;
++ *check_cnt_p = check_cnt;
++ *push_p = push;
+ return 0;
+ }
+
+@@ -598,10 +675,14 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ }
+ clock_gettime(CLOCK_MONOTONIC, &s->end);
+ } else {
++ float total_bytes, txmsg_pop_total, txmsg_push_total;
+ int slct, recvp = 0, recv, max_fd = fd;
+- float total_bytes, txmsg_pop_total;
+ int fd_flags = O_NONBLOCK;
+ struct timeval timeout;
++ unsigned char k = 0;
++ int bytes_cnt = 0;
++ int check_cnt = 0;
++ int push = 0;
+ fd_set w;
+
+ fcntl(fd, fd_flags);
+@@ -615,12 +696,22 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ * This is really only useful for testing edge cases in code
+ * paths.
+ */
+- total_bytes = (float)iov_count * (float)iov_length * (float)cnt;
+- if (txmsg_apply)
++ total_bytes = (float)iov_length * (float)cnt;
++ if (!opt->sendpage)
++ total_bytes *= (float)iov_count;
++ if (txmsg_apply) {
++ txmsg_push_total = txmsg_end_push * (total_bytes / txmsg_apply);
+ txmsg_pop_total = txmsg_pop * (total_bytes / txmsg_apply);
+- else
++ } else {
++ txmsg_push_total = txmsg_end_push * cnt;
+ txmsg_pop_total = txmsg_pop * cnt;
++ }
++ total_bytes += txmsg_push_total;
+ total_bytes -= txmsg_pop_total;
++ if (data) {
++ msg_verify_date_prep();
++ push = verify_push_len;
++ }
+ err = clock_gettime(CLOCK_MONOTONIC, &s->start);
+ if (err < 0)
+ perror("recv start time");
+@@ -693,10 +784,11 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+
+ if (data) {
+ int chunk_sz = opt->sendpage ?
+- iov_length * cnt :
++ iov_length :
+ iov_length * iov_count;
+
+- errno = msg_verify_data(&msg, recv, chunk_sz);
++ errno = msg_verify_data(&msg, recv, chunk_sz, &k, &bytes_cnt,
++ &check_cnt, &push);
+ if (errno) {
+ perror("data verify msg failed");
+ goto out_errno;
+@@ -704,7 +796,11 @@ static int msg_loop(int fd, int iov_count, int iov_length, int cnt,
+ if (recvp) {
+ errno = msg_verify_data(&msg_peek,
+ recvp,
+- chunk_sz);
++ chunk_sz,
++ &k,
++ &bytes_cnt,
++ &check_cnt,
++ &push);
+ if (errno) {
+ perror("data verify msg_peek failed");
+ goto out_errno;
+@@ -786,8 +882,6 @@ static int sendmsg_test(struct sockmap_options *opt)
+
+ rxpid = fork();
+ if (rxpid == 0) {
+- if (txmsg_pop || txmsg_start_pop)
+- iov_buf -= (txmsg_pop - txmsg_start_pop + 1);
+ if (opt->drop_expected || txmsg_ktls_skb_drop)
+ _exit(0);
+
+@@ -812,7 +906,7 @@ static int sendmsg_test(struct sockmap_options *opt)
+ s.bytes_sent, sent_Bps, sent_Bps/giga,
+ s.bytes_recvd, recvd_Bps, recvd_Bps/giga,
+ peek_flag ? "(peek_msg)" : "");
+- if (err && txmsg_cork)
++ if (err && err != -EDATAINTEGRITY && txmsg_cork)
+ err = 0;
+ exit(err ? 1 : 0);
+ } else if (rxpid == -1) {
+@@ -1456,8 +1550,8 @@ static void test_send_many(struct sockmap_options *opt, int cgrp)
+
+ static void test_send_large(struct sockmap_options *opt, int cgrp)
+ {
+- opt->iov_length = 256;
+- opt->iov_count = 1024;
++ opt->iov_length = 8192;
++ opt->iov_count = 32;
+ opt->rate = 2;
+ test_exec(cgrp, opt);
+ }
+@@ -1586,17 +1680,19 @@ static void test_txmsg_cork_hangs(int cgrp, struct sockmap_options *opt)
+ static void test_txmsg_pull(int cgrp, struct sockmap_options *opt)
+ {
+ /* Test basic start/end */
++ txmsg_pass = 1;
+ txmsg_start = 1;
+ txmsg_end = 2;
+ test_send(opt, cgrp);
+
+ /* Test >4k pull */
++ txmsg_pass = 1;
+ txmsg_start = 4096;
+ txmsg_end = 9182;
+ test_send_large(opt, cgrp);
+
+ /* Test pull + redirect */
+- txmsg_redir = 0;
++ txmsg_redir = 1;
+ txmsg_start = 1;
+ txmsg_end = 2;
+ test_send(opt, cgrp);
+@@ -1618,12 +1714,16 @@ static void test_txmsg_pull(int cgrp, struct sockmap_options *opt)
+
+ static void test_txmsg_pop(int cgrp, struct sockmap_options *opt)
+ {
++ bool data = opt->data_test;
++
+ /* Test basic pop */
++ txmsg_pass = 1;
+ txmsg_start_pop = 1;
+ txmsg_pop = 2;
+ test_send_many(opt, cgrp);
+
+ /* Test pop with >4k */
++ txmsg_pass = 1;
+ txmsg_start_pop = 4096;
+ txmsg_pop = 4096;
+ test_send_large(opt, cgrp);
+@@ -1634,6 +1734,12 @@ static void test_txmsg_pop(int cgrp, struct sockmap_options *opt)
+ txmsg_pop = 2;
+ test_send_many(opt, cgrp);
+
++ /* TODO: Test for pop + cork should be different,
++ * - It makes the layout of the received data difficult
++ * - It makes it hard to calculate the total_bytes in the recvmsg
++ * Temporarily skip the data integrity test for this case now.
++ */
++ opt->data_test = false;
+ /* Test pop + cork */
+ txmsg_redir = 0;
+ txmsg_cork = 512;
+@@ -1647,16 +1753,21 @@ static void test_txmsg_pop(int cgrp, struct sockmap_options *opt)
+ txmsg_start_pop = 1;
+ txmsg_pop = 2;
+ test_send_many(opt, cgrp);
++ opt->data_test = data;
+ }
+
+ static void test_txmsg_push(int cgrp, struct sockmap_options *opt)
+ {
++ bool data = opt->data_test;
++
+ /* Test basic push */
++ txmsg_pass = 1;
+ txmsg_start_push = 1;
+ txmsg_end_push = 1;
+ test_send(opt, cgrp);
+
+ /* Test push 4kB >4k */
++ txmsg_pass = 1;
+ txmsg_start_push = 4096;
+ txmsg_end_push = 4096;
+ test_send_large(opt, cgrp);
+@@ -1667,16 +1778,24 @@ static void test_txmsg_push(int cgrp, struct sockmap_options *opt)
+ txmsg_end_push = 2;
+ test_send_many(opt, cgrp);
+
++	/* TODO: push + cork needs a dedicated test:
++	 * - it complicates the layout of the received data
++	 * - it makes total_bytes hard to calculate on the recvmsg side
++	 * Temporarily skip the data integrity test for this case.
++	 */
++ opt->data_test = false;
+ /* Test push + cork */
+ txmsg_redir = 0;
+ txmsg_cork = 512;
+ txmsg_start_push = 1;
+ txmsg_end_push = 2;
+ test_send_many(opt, cgrp);
++ opt->data_test = data;
+ }
+
+ static void test_txmsg_push_pop(int cgrp, struct sockmap_options *opt)
+ {
++ txmsg_pass = 1;
+ txmsg_start_push = 1;
+ txmsg_end_push = 10;
+ txmsg_start_pop = 5;
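
For reference, the expected-byte accounting that the msg_loop() hunk near the
start of this test_sockmap.c diff modifies can be paraphrased as standalone C.
This is a simplified sketch with renamed parameters, not the exact selftest
code: pushed bytes inflate the expected receive total, popped bytes shrink it,
and the multiplier depends on whether txmsg_apply batches the payload.

    #include <stdio.h>

    /* Simplified paraphrase of the selftest's expected-byte accounting.
     * apply == 0 means the push/pop program runs once per send, otherwise
     * it runs once per "apply" bytes of payload. Names are invented. */
    static long expected_bytes(long total_bytes, int cnt, int apply,
                               int push_len, int pop_len)
    {
        long push_total, pop_total;

        if (apply) {
            push_total = (long)push_len * (total_bytes / apply);
            pop_total  = (long)pop_len  * (total_bytes / apply);
        } else {
            push_total = (long)push_len * cnt;
            pop_total  = (long)pop_len  * cnt;
        }
        return total_bytes + push_total - pop_total;
    }

    int main(void)
    {
        /* e.g. 1024 sends of 1 byte each, push 2 bytes and pop 1 per msg */
        printf("%ld\n", expected_bytes(1024, 1024, 0, 2, 1));
        return 0;
    }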
+diff --git a/tools/testing/selftests/bpf/uprobe_multi.c b/tools/testing/selftests/bpf/uprobe_multi.c
+index c7828b13e5ffd8..dd38dc68f63592 100644
+--- a/tools/testing/selftests/bpf/uprobe_multi.c
++++ b/tools/testing/selftests/bpf/uprobe_multi.c
+@@ -12,6 +12,10 @@
+ #define MADV_POPULATE_READ 22
+ #endif
+
++#ifndef MADV_PAGEOUT
++#define MADV_PAGEOUT 21
++#endif
++
+ int __attribute__((weak)) uprobe(void)
+ {
+ return 0;
+diff --git a/tools/testing/selftests/mount_setattr/mount_setattr_test.c b/tools/testing/selftests/mount_setattr/mount_setattr_test.c
+index 68801e1a9ec2d1..70f65eb320a7a7 100644
+--- a/tools/testing/selftests/mount_setattr/mount_setattr_test.c
++++ b/tools/testing/selftests/mount_setattr/mount_setattr_test.c
+@@ -1026,7 +1026,7 @@ FIXTURE_SETUP(mount_setattr_idmapped)
+ "size=100000,mode=700"), 0);
+
+ ASSERT_EQ(mount("testing", "/mnt", "tmpfs", MS_NOATIME | MS_NODEV,
+- "size=100000,mode=700"), 0);
++ "size=2m,mode=700"), 0);
+
+ ASSERT_EQ(mkdir("/mnt/A", 0777), 0);
+
+diff --git a/tools/testing/selftests/net/Makefile b/tools/testing/selftests/net/Makefile
+index 5e86f7a51b43c5..2c4b6e404a7c7f 100644
+--- a/tools/testing/selftests/net/Makefile
++++ b/tools/testing/selftests/net/Makefile
+@@ -97,6 +97,7 @@ TEST_PROGS += fdb_flush.sh
+ TEST_PROGS += fq_band_pktlimit.sh
+ TEST_PROGS += vlan_hw_filter.sh
+ TEST_PROGS += bpf_offload.py
++TEST_PROGS += ipv6_route_update_soft_lockup.sh
+
+ # YNL files, must be before "include ..lib.mk"
+ EXTRA_CLEAN += $(OUTPUT)/libynl.a
+diff --git a/tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh b/tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh
+new file mode 100755
+index 00000000000000..a6b2b1f9c641c9
+--- /dev/null
++++ b/tools/testing/selftests/net/ipv6_route_update_soft_lockup.sh
+@@ -0,0 +1,262 @@
++#!/bin/bash
++# SPDX-License-Identifier: GPL-2.0
++#
++# Testing for potential kernel soft lockup during IPv6 routing table
++# refresh under heavy outgoing IPv6 traffic. If a kernel soft lockup
++# occurs, a kernel panic will be triggered to prevent associated issues.
++#
++#
++# Test Environment Layout
++#
++# ┌----------------┐ ┌----------------┐
++# | SOURCE_NS | | SINK_NS |
++# | NAMESPACE | | NAMESPACE |
++# |(iperf3 clients)| |(iperf3 servers)|
++# | | | |
++# | | | |
++# | ┌-----------| nexthops |---------┐ |
++# | |veth_source|<--------------------------------------->|veth_sink|<┐ |
++# | └-----------|2001:0DB8:1::0:1/96 2001:0DB8:1::1:1/96 |---------┘ | |
++# | | ^ 2001:0DB8:1::1:2/96 | | |
++# | | . . | fwd | |
++# | ┌---------┐ | . . | | |
++# | | IPv6 | | . . | V |
++# | | routing | | . 2001:0DB8:1::1:80/96| ┌-----┐ |
++# | | table | | . | | lo | |
++# | | nexthop | | . └--------┴-----┴-┘
++# | | update | | ............................> 2001:0DB8:2::1:1/128
++# | └-------- ┘ |
++# └----------------┘
++#
++# The test script sets up two network namespaces, source_ns and sink_ns,
++# connected via a veth link. Within source_ns, it continuously updates the
++# IPv6 routing table by flushing and inserting IPV6_NEXTHOP_ADDR_COUNT nexthop
++# IPs destined for SINK_LOOPBACK_IP_ADDR in sink_ns. This refresh occurs at a
++# rate of 1/ROUTING_TABLE_REFRESH_PERIOD per second for TEST_DURATION seconds.
++#
++# Simultaneously, multiple iperf3 clients within source_ns generate heavy
++# outgoing IPv6 traffic. Each client is assigned a unique port number starting
++# at 5000 and incrementing sequentially. Each client targets a unique iperf3
++# server running in sink_ns, connected to the SINK_LOOPBACK_IFACE interface
++# using the same port number.
++#
++# The number of iperf3 servers and clients is set to half of the total
++# available cores on each machine.
++#
++# NOTE: We have tested this script on machines with various CPU specifications,
++# ranging from lower to higher performance as listed below. The test script
++# effectively triggered a kernel soft lockup on machines running an unpatched
++# kernel in under a minute:
++#
++# - 1x Intel Xeon E-2278G 8-Core Processor @ 3.40GHz
++# - 1x Intel Xeon E-2378G Processor 8-Core @ 2.80GHz
++# - 1x AMD EPYC 7401P 24-Core Processor @ 2.00GHz
++# - 1x AMD EPYC 7402P 24-Core Processor @ 2.80GHz
++# - 2x Intel Xeon Gold 5120 14-Core Processor @ 2.20GHz
++# - 1x Ampere Altra Q80-30 80-Core Processor @ 3.00GHz
++# - 2x Intel Xeon Gold 5120 14-Core Processor @ 2.20GHz
++# - 2x Intel Xeon Silver 4214 24-Core Processor @ 2.20GHz
++# - 1x AMD EPYC 7502P 32-Core @ 2.50GHz
++# - 1x Intel Xeon Gold 6314U 32-Core Processor @ 2.30GHz
++# - 2x Intel Xeon Gold 6338 32-Core Processor @ 2.00GHz
++#
++# On less performant machines, you may need to increase the TEST_DURATION
++# parameter to enhance the likelihood of encountering a race condition leading
++# to a kernel soft lockup and avoid a false negative result.
++#
++# NOTE: The test may not produce the expected result in virtualized
++# environments (e.g., qemu) due to differences in timing and CPU handling,
++# which can affect the conditions needed to trigger a soft lockup.
++
++source lib.sh
++source net_helper.sh
++
++TEST_DURATION=300
++ROUTING_TABLE_REFRESH_PERIOD=0.01
++
++IPERF3_BITRATE="300m"
++
++
++IPV6_NEXTHOP_ADDR_COUNT="128"
++IPV6_NEXTHOP_ADDR_MASK="96"
++IPV6_NEXTHOP_PREFIX="2001:0DB8:1"
++
++
++SOURCE_TEST_IFACE="veth_source"
++SOURCE_TEST_IP_ADDR="2001:0DB8:1::0:1/96"
++
++SINK_TEST_IFACE="veth_sink"
++# ${SINK_TEST_IFACE} is populated with the following range of IPv6 addresses:
++# 2001:0DB8:1::1:1 to 2001:0DB8:1::1:${IPV6_NEXTHOP_ADDR_COUNT}
++SINK_LOOPBACK_IFACE="lo"
++SINK_LOOPBACK_IP_MASK="128"
++SINK_LOOPBACK_IP_ADDR="2001:0DB8:2::1:1"
++
++nexthop_ip_list=""
++termination_signal=""
++kernel_softlockup_panic_prev_val=""
++
++terminate_ns_processes_by_pattern() {
++ local ns=$1
++ local pattern=$2
++
++ for pid in $(ip netns pids ${ns}); do
++ [ -e /proc/$pid/cmdline ] && grep -qe "${pattern}" /proc/$pid/cmdline && kill -9 $pid
++ done
++}
++
++cleanup() {
++ echo "info: cleaning up namespaces and terminating all processes within them..."
++
++
++ # Terminate iperf3 instances running in the source_ns. To avoid race
++ # conditions, first iterate over the PIDs and terminate those
++ # associated with the bash shells running the
++ # `while true; do iperf3 -c ...; done` loops. In a second iteration,
++ # terminate the individual `iperf3 -c ...` instances.
++ terminate_ns_processes_by_pattern ${source_ns} while
++ terminate_ns_processes_by_pattern ${source_ns} iperf3
++
++ # Repeat the same process for sink_ns
++ terminate_ns_processes_by_pattern ${sink_ns} while
++ terminate_ns_processes_by_pattern ${sink_ns} iperf3
++
++ # Check if any iperf3 instances are still running. This could happen
++ # if a core has entered an infinite loop and the timeout for detecting
++ # the soft lockup has not expired, but either the test interval has
++ # already elapsed or the test was terminated manually (e.g., with ^C)
++ for pid in $(ip netns pids ${source_ns}); do
++ if [ -e /proc/$pid/cmdline ] && grep -qe 'iperf3' /proc/$pid/cmdline; then
++ echo "FAIL: unable to terminate some iperf3 instances. Soft lockup is underway. A kernel panic is on the way!"
++ exit ${ksft_fail}
++ fi
++ done
++
++ if [ "$termination_signal" == "SIGINT" ]; then
++ echo "SKIP: Termination due to ^C (SIGINT)"
++ elif [ "$termination_signal" == "SIGALRM" ]; then
++ echo "PASS: No kernel soft lockup occurred during this ${TEST_DURATION} second test"
++ fi
++
++ cleanup_ns ${source_ns} ${sink_ns}
++
++	sysctl -qw kernel.softlockup_panic=${kernel_softlockup_panic_prev_val}
++}
++
++setup_prepare() {
++ setup_ns source_ns sink_ns
++
++ ip -n ${source_ns} link add name ${SOURCE_TEST_IFACE} type veth peer name ${SINK_TEST_IFACE} netns ${sink_ns}
++
++ # Setting up the Source namespace
++ ip -n ${source_ns} addr add ${SOURCE_TEST_IP_ADDR} dev ${SOURCE_TEST_IFACE}
++ ip -n ${source_ns} link set dev ${SOURCE_TEST_IFACE} qlen 10000
++ ip -n ${source_ns} link set dev ${SOURCE_TEST_IFACE} up
++ ip netns exec ${source_ns} sysctl -qw net.ipv6.fib_multipath_hash_policy=1
++
++ # Setting up the Sink namespace
++ ip -n ${sink_ns} addr add ${SINK_LOOPBACK_IP_ADDR}/${SINK_LOOPBACK_IP_MASK} dev ${SINK_LOOPBACK_IFACE}
++ ip -n ${sink_ns} link set dev ${SINK_LOOPBACK_IFACE} up
++ ip netns exec ${sink_ns} sysctl -qw net.ipv6.conf.${SINK_LOOPBACK_IFACE}.forwarding=1
++
++ ip -n ${sink_ns} link set ${SINK_TEST_IFACE} up
++ ip netns exec ${sink_ns} sysctl -qw net.ipv6.conf.${SINK_TEST_IFACE}.forwarding=1
++
++
++ # Populate nexthop IPv6 addresses on the test interface in the sink_ns
++ echo "info: populating ${IPV6_NEXTHOP_ADDR_COUNT} IPv6 addresses on the ${SINK_TEST_IFACE} interface ..."
++ for IP in $(seq 1 ${IPV6_NEXTHOP_ADDR_COUNT}); do
++ ip -n ${sink_ns} addr add ${IPV6_NEXTHOP_PREFIX}::$(printf "1:%x" "${IP}")/${IPV6_NEXTHOP_ADDR_MASK} dev ${SINK_TEST_IFACE};
++ done
++
++ # Preparing list of nexthops
++ for IP in $(seq 1 ${IPV6_NEXTHOP_ADDR_COUNT}); do
++ nexthop_ip_list=$nexthop_ip_list" nexthop via ${IPV6_NEXTHOP_PREFIX}::$(printf "1:%x" $IP) dev ${SOURCE_TEST_IFACE} weight 1"
++ done
++}
++
++
++test_soft_lockup_during_routing_table_refresh() {
++ # Start num_of_iperf_servers iperf3 servers in the sink_ns namespace,
++ # each listening on ports starting at 5001 and incrementing
++ # sequentially. Since iperf3 instances may terminate unexpectedly, a
++ # while loop is used to automatically restart them in such cases.
++ echo "info: starting ${num_of_iperf_servers} iperf3 servers in the sink_ns namespace ..."
++ for i in $(seq 1 ${num_of_iperf_servers}); do
++ cmd="iperf3 --bind ${SINK_LOOPBACK_IP_ADDR} -s -p $(printf '5%03d' ${i}) --rcv-timeout 200 &>/dev/null"
++ ip netns exec ${sink_ns} bash -c "while true; do ${cmd}; done &" &>/dev/null
++ done
++
++ # Wait for the iperf3 servers to be ready
++ for i in $(seq ${num_of_iperf_servers}); do
++ port=$(printf '5%03d' ${i});
++ wait_local_port_listen ${sink_ns} ${port} tcp
++ done
++
++ # Continuously refresh the routing table in the background within
++ # the source_ns namespace
++ ip netns exec ${source_ns} bash -c "
++ while \$(ip netns list | grep -q ${source_ns}); do
++ ip -6 route add ${SINK_LOOPBACK_IP_ADDR}/${SINK_LOOPBACK_IP_MASK} ${nexthop_ip_list};
++ sleep ${ROUTING_TABLE_REFRESH_PERIOD};
++ ip -6 route delete ${SINK_LOOPBACK_IP_ADDR}/${SINK_LOOPBACK_IP_MASK};
++ done &"
++
++ # Start num_of_iperf_servers iperf3 clients in the source_ns namespace,
++ # each sending TCP traffic on sequential ports starting at 5001.
++ # Since iperf3 instances may terminate unexpectedly (e.g., if the route
++ # to the server is deleted in the background during a route refresh), a
++ # while loop is used to automatically restart them in such cases.
++ echo "info: starting ${num_of_iperf_servers} iperf3 clients in the source_ns namespace ..."
++ for i in $(seq 1 ${num_of_iperf_servers}); do
++ cmd="iperf3 -c ${SINK_LOOPBACK_IP_ADDR} -p $(printf '5%03d' ${i}) --length 64 --bitrate ${IPERF3_BITRATE} -t 0 --connect-timeout 150 &>/dev/null"
++ ip netns exec ${source_ns} bash -c "while true; do ${cmd}; done &" &>/dev/null
++ done
++
++ echo "info: IPv6 routing table is being updated at the rate of $(echo "1/${ROUTING_TABLE_REFRESH_PERIOD}" | bc)/s for ${TEST_DURATION} seconds ..."
++ echo "info: A kernel soft lockup, if detected, results in a kernel panic!"
++
++ wait
++}
++
++# Make sure 'iperf3' is installed, skip the test otherwise
++if [ ! -x "$(command -v "iperf3")" ]; then
++ echo "SKIP: 'iperf3' is not installed. Skipping the test."
++ exit ${ksft_skip}
++fi
++
++# Use half of the available cores for the iperf3 server/client pairs
++num_of_iperf_servers=$(( $(nproc)/2 ))
++
++# Check if we are running on a multi-core machine, skip the test otherwise
++if [ "${num_of_iperf_servers}" -eq 0 ]; then
++ echo "SKIP: This test is not valid on a single core machine!"
++ exit ${ksft_skip}
++fi
++
++# Since the kernel soft lockup we're testing causes at least one core to enter
++# an infinite loop, destabilizing the host and likely affecting subsequent
++# tests, we trigger a kernel panic instead of reporting a failure and
++# continuing
++kernel_softlockup_panic_prev_val=$(sysctl -n kernel.softlockup_panic)
++sysctl -qw kernel.softlockup_panic=1
++
++handle_sigint() {
++ termination_signal="SIGINT"
++ cleanup
++ exit ${ksft_skip}
++}
++
++handle_sigalrm() {
++ termination_signal="SIGALRM"
++ cleanup
++ exit ${ksft_pass}
++}
++
++trap handle_sigint SIGINT
++trap handle_sigalrm SIGALRM
++
++(sleep ${TEST_DURATION} && kill -s SIGALRM $$)&
++
++setup_prepare
++test_soft_lockup_during_routing_table_refresh
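
The save/set/restore dance the script performs around kernel.softlockup_panic
is a generally useful pattern whenever a test has to flip a global knob. A
minimal C sketch of the same idea, assuming procfs is mounted and the caller
has root; this is an illustration, not part of the patch:

    #include <stdio.h>

    static int read_int(const char *path)
    {
        FILE *f = fopen(path, "r");
        int val = -1;

        if (f) {
            if (fscanf(f, "%d", &val) != 1)
                val = -1;
            fclose(f);
        }
        return val;
    }

    static int write_int(const char *path, int val)
    {
        FILE *f = fopen(path, "w");

        if (!f)
            return -1;
        fprintf(f, "%d\n", val);
        return fclose(f);
    }

    int main(void)
    {
        const char *knob = "/proc/sys/kernel/softlockup_panic";
        int prev = read_int(knob);    /* save the previous value */

        if (prev < 0)
            return 1;
        write_int(knob, 1);           /* panic instead of warn on soft lockup */
        /* ... run the reproducer here ... */
        write_int(knob, prev);        /* restore the saved value on cleanup */
        return 0;
    }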
+diff --git a/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c b/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
+index 254ff03297f06c..5f827e10717d19 100644
+--- a/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
++++ b/tools/testing/selftests/net/netfilter/conntrack_dump_flush.c
+@@ -43,6 +43,8 @@ static int build_cta_tuple_v4(struct nlmsghdr *nlh, int type,
+ mnl_attr_nest_end(nlh, nest_proto);
+
+ mnl_attr_nest_end(nlh, nest);
++
++ return 0;
+ }
+
+ static int build_cta_tuple_v6(struct nlmsghdr *nlh, int type,
+@@ -71,6 +73,8 @@ static int build_cta_tuple_v6(struct nlmsghdr *nlh, int type,
+ mnl_attr_nest_end(nlh, nest_proto);
+
+ mnl_attr_nest_end(nlh, nest);
++
++ return 0;
+ }
+
+ static int build_cta_proto(struct nlmsghdr *nlh)
+@@ -90,6 +94,8 @@ static int build_cta_proto(struct nlmsghdr *nlh)
+ mnl_attr_nest_end(nlh, nest_proto);
+
+ mnl_attr_nest_end(nlh, nest);
++
++ return 0;
+ }
+
+ static int conntrack_data_insert(struct mnl_socket *sock, struct nlmsghdr *nlh,
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index 569bce8b6383ee..6c651c880fe83d 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -2056,7 +2056,7 @@ check_running() {
+ pid=${1}
+ cmd=${2}
+
+- [ "$(cat /proc/${pid}/cmdline 2>/dev/null | tr -d '\0')" = "{cmd}" ]
++ [ "$(cat /proc/${pid}/cmdline 2>/dev/null | tr -d '\0')" = "${cmd}" ]
+ }
+
+ test_cleanup_vxlanX_exception() {
+diff --git a/tools/testing/selftests/resctrl/fill_buf.c b/tools/testing/selftests/resctrl/fill_buf.c
+index ae120f1735c0bc..34e5df721430ee 100644
+--- a/tools/testing/selftests/resctrl/fill_buf.c
++++ b/tools/testing/selftests/resctrl/fill_buf.c
+@@ -127,7 +127,7 @@ unsigned char *alloc_buffer(size_t buf_size, int memflush)
+ {
+ void *buf = NULL;
+ uint64_t *p64;
+- size_t s64;
++ ssize_t s64;
+ int ret;
+
+ ret = posix_memalign(&buf, PAGE_SIZE, buf_size);
+diff --git a/tools/testing/selftests/resctrl/mbm_test.c b/tools/testing/selftests/resctrl/mbm_test.c
+index 6b5a3b52d861b8..cf08ba5e314e2a 100644
+--- a/tools/testing/selftests/resctrl/mbm_test.c
++++ b/tools/testing/selftests/resctrl/mbm_test.c
+@@ -40,7 +40,8 @@ show_bw_info(unsigned long *bw_imc, unsigned long *bw_resc, size_t span)
+ ksft_print_msg("%s Check MBM diff within %d%%\n",
+ ret ? "Fail:" : "Pass:", MAX_DIFF_PERCENT);
+ ksft_print_msg("avg_diff_per: %d%%\n", avg_diff_per);
+- ksft_print_msg("Span (MB): %zu\n", span / MB);
++ if (span)
++ ksft_print_msg("Span (MB): %zu\n", span / MB);
+ ksft_print_msg("avg_bw_imc: %lu\n", avg_bw_imc);
+ ksft_print_msg("avg_bw_resc: %lu\n", avg_bw_resc);
+
+@@ -138,15 +139,26 @@ static int mbm_run_test(const struct resctrl_test *test, const struct user_param
+ .setup = mbm_setup,
+ .measure = mbm_measure,
+ };
++ char *endptr = NULL;
++ size_t span = 0;
+ int ret;
+
+ remove(RESULT_FILE_NAME);
+
++ if (uparams->benchmark_cmd[0] && strcmp(uparams->benchmark_cmd[0], "fill_buf") == 0) {
++ if (uparams->benchmark_cmd[1] && *uparams->benchmark_cmd[1] != '\0') {
++ errno = 0;
++ span = strtoul(uparams->benchmark_cmd[1], &endptr, 10);
++ if (errno || *endptr != '\0')
++ return -EINVAL;
++ }
++ }
++
+ ret = resctrl_val(test, uparams, uparams->benchmark_cmd, ¶m);
+ if (ret)
+ return ret;
+
+- ret = check_results(DEFAULT_SPAN);
++ ret = check_results(span);
+ if (ret && (get_vendor() == ARCH_INTEL))
+ ksft_print_msg("Intel MBM may be inaccurate when Sub-NUMA Clustering is enabled. Check BIOS configuration.\n");
+
+diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
+index 8c275f6b4dd777..f118f659e89600 100644
+--- a/tools/testing/selftests/resctrl/resctrl_val.c
++++ b/tools/testing/selftests/resctrl/resctrl_val.c
+@@ -83,13 +83,12 @@ void get_event_and_umask(char *cas_count_cfg, int count, bool op)
+ char *token[MAX_TOKENS];
+ int i = 0;
+
+- strcat(cas_count_cfg, ",");
+ token[0] = strtok(cas_count_cfg, "=,");
+
+ for (i = 1; i < MAX_TOKENS; i++)
+ token[i] = strtok(NULL, "=,");
+
+- for (i = 0; i < MAX_TOKENS; i++) {
++ for (i = 0; i < MAX_TOKENS - 1; i++) {
+ if (!token[i])
+ break;
+ if (strcmp(token[i], "event") == 0) {
+diff --git a/tools/testing/selftests/vDSO/parse_vdso.c b/tools/testing/selftests/vDSO/parse_vdso.c
+index 7dd5668ea8a6e3..28f35620c49919 100644
+--- a/tools/testing/selftests/vDSO/parse_vdso.c
++++ b/tools/testing/selftests/vDSO/parse_vdso.c
+@@ -222,8 +222,7 @@ void *vdso_sym(const char *version, const char *name)
+ ELF(Sym) *sym = &vdso_info.symtab[chain];
+
+ /* Check for a defined global or weak function w/ right name. */
+- if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC &&
+- ELF64_ST_TYPE(sym->st_info) != STT_NOTYPE)
++ if (ELF64_ST_TYPE(sym->st_info) != STT_FUNC)
+ continue;
+ if (ELF64_ST_BIND(sym->st_info) != STB_GLOBAL &&
+ ELF64_ST_BIND(sym->st_info) != STB_WEAK)
+diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh
+index 405ff262ca93d4..55500f901fbc36 100755
+--- a/tools/testing/selftests/wireguard/netns.sh
++++ b/tools/testing/selftests/wireguard/netns.sh
+@@ -332,6 +332,7 @@ waitiface $netns1 vethc
+ waitiface $netns2 veths
+
+ n0 bash -c 'printf 1 > /proc/sys/net/ipv4/ip_forward'
++[[ -e /proc/sys/net/netfilter/nf_conntrack_udp_timeout ]] || modprobe nf_conntrack
+ n0 bash -c 'printf 2 > /proc/sys/net/netfilter/nf_conntrack_udp_timeout'
+ n0 bash -c 'printf 2 > /proc/sys/net/netfilter/nf_conntrack_udp_timeout_stream'
+ n0 iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -d 10.0.0.0/24 -j SNAT --to 10.0.0.1
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index a3907c390d67a5..829511a712224f 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -1064,7 +1064,7 @@ timerlat_hist_apply_config(struct osnoise_tool *tool, struct timerlat_hist_param
+ * If the user did not specify a type of thread, try user-threads first.
+ * Fall back to kernel threads otherwise.
+ */
+- if (!params->kernel_workload && !params->user_workload) {
++ if (!params->kernel_workload && !params->user_hist) {
+ retval = tracefs_file_exists(NULL, "osnoise/per_cpu/cpu0/timerlat_fd");
+ if (retval) {
+ debug_msg("User-space interface detected, setting user-threads\n");
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index 210b0f533534ab..3b62519a412fc9 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -830,7 +830,7 @@ timerlat_top_apply_config(struct osnoise_tool *top, struct timerlat_top_params *
+ * If the user did not specify a type of thread, try user-threads first.
+ * Fall back to kernel threads otherwise.
+ */
+- if (!params->kernel_workload && !params->user_workload) {
++ if (!params->kernel_workload && !params->user_top) {
+ retval = tracefs_file_exists(NULL, "osnoise/per_cpu/cpu0/timerlat_fd");
+ if (retval) {
+ debug_msg("User-space interface detected, setting user-threads\n");
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 6ca7a1045bbb75..279e03029ce149 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -6561,106 +6561,3 @@ void kvm_exit(void)
+ kvm_irqfd_exit();
+ }
+ EXPORT_SYMBOL_GPL(kvm_exit);
+-
+-struct kvm_vm_worker_thread_context {
+- struct kvm *kvm;
+- struct task_struct *parent;
+- struct completion init_done;
+- kvm_vm_thread_fn_t thread_fn;
+- uintptr_t data;
+- int err;
+-};
+-
+-static int kvm_vm_worker_thread(void *context)
+-{
+- /*
+- * The init_context is allocated on the stack of the parent thread, so
+- * we have to locally copy anything that is needed beyond initialization
+- */
+- struct kvm_vm_worker_thread_context *init_context = context;
+- struct task_struct *parent;
+- struct kvm *kvm = init_context->kvm;
+- kvm_vm_thread_fn_t thread_fn = init_context->thread_fn;
+- uintptr_t data = init_context->data;
+- int err;
+-
+- err = kthread_park(current);
+- /* kthread_park(current) is never supposed to return an error */
+- WARN_ON(err != 0);
+- if (err)
+- goto init_complete;
+-
+- err = cgroup_attach_task_all(init_context->parent, current);
+- if (err) {
+- kvm_err("%s: cgroup_attach_task_all failed with err %d\n",
+- __func__, err);
+- goto init_complete;
+- }
+-
+- set_user_nice(current, task_nice(init_context->parent));
+-
+-init_complete:
+- init_context->err = err;
+- complete(&init_context->init_done);
+- init_context = NULL;
+-
+- if (err)
+- goto out;
+-
+- /* Wait to be woken up by the spawner before proceeding. */
+- kthread_parkme();
+-
+- if (!kthread_should_stop())
+- err = thread_fn(kvm, data);
+-
+-out:
+- /*
+- * Move kthread back to its original cgroup to prevent it lingering in
+- * the cgroup of the VM process, after the latter finishes its
+- * execution.
+- *
+- * kthread_stop() waits on the 'exited' completion condition which is
+- * set in exit_mm(), via mm_release(), in do_exit(). However, the
+- * kthread is removed from the cgroup in the cgroup_exit() which is
+- * called after the exit_mm(). This causes the kthread_stop() to return
+- * before the kthread actually quits the cgroup.
+- */
+- rcu_read_lock();
+- parent = rcu_dereference(current->real_parent);
+- get_task_struct(parent);
+- rcu_read_unlock();
+- cgroup_attach_task_all(parent, current);
+- put_task_struct(parent);
+-
+- return err;
+-}
+-
+-int kvm_vm_create_worker_thread(struct kvm *kvm, kvm_vm_thread_fn_t thread_fn,
+- uintptr_t data, const char *name,
+- struct task_struct **thread_ptr)
+-{
+- struct kvm_vm_worker_thread_context init_context = {};
+- struct task_struct *thread;
+-
+- *thread_ptr = NULL;
+- init_context.kvm = kvm;
+- init_context.parent = current;
+- init_context.thread_fn = thread_fn;
+- init_context.data = data;
+- init_completion(&init_context.init_done);
+-
+- thread = kthread_run(kvm_vm_worker_thread, &init_context,
+- "%s-%d", name, task_pid_nr(current));
+- if (IS_ERR(thread))
+- return PTR_ERR(thread);
+-
+- /* kthread_run is never supposed to return NULL */
+- WARN_ON(thread == NULL);
+-
+- wait_for_completion(&init_context.init_done);
+-
+- if (!init_context.err)
+- *thread_ptr = thread;
+-
+- return init_context.err;
+-}
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-05 20:05 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-12-05 20:05 UTC (permalink / raw
To: gentoo-commits
commit: 2fcc7a615b8b2de79d0b1b3ce13cb5430b8c80d4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 5 20:05:18 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Dec 5 20:05:18 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2fcc7a61
sched: Initialize idle tasks only once
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
1800_sched-init-idle-tasks-only-once.patch | 80 ++++++++++++++++++++++++++++++
2 files changed, 84 insertions(+)
diff --git a/0000_README b/0000_README
index ac1104a1..f7334645 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1730_parisc-Disable-prctl.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
+Patch: 1800_sched-init-idle-tasks-only-once.patch
+From: https://git.kernel.org/
+Desc: sched: Initialize idle tasks only once
+
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1800_sched-init-idle-tasks-only-once.patch b/1800_sched-init-idle-tasks-only-once.patch
new file mode 100644
index 00000000..013a45fc
--- /dev/null
+++ b/1800_sched-init-idle-tasks-only-once.patch
@@ -0,0 +1,80 @@
+From b23decf8ac9102fc52c4de5196f4dc0a5f3eb80b Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Mon, 28 Oct 2024 11:43:42 +0100
+Subject: sched: Initialize idle tasks only once
+
+Idle tasks are initialized via __sched_fork() twice:
+
+ fork_idle()
+ copy_process()
+ sched_fork()
+ __sched_fork()
+ init_idle()
+ __sched_fork()
+
+Instead of cleaning this up, sched_ext hacked around it. Even when analysis
+and a solution were provided in a discussion, nobody cared to clean this up.
+
+init_idle() is also invoked from sched_init() to initialize the boot CPU's
+idle task, which requires the __sched_fork() invocation. But this can be
+trivially solved by invoking __sched_fork() before init_idle() in
+sched_init() and removing the __sched_fork() invocation from init_idle().
+
+Do so and clean up the comments explaining this historical leftover.
+
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Link: https://lore.kernel.org/r/20241028103142.359584747@linutronix.de
+---
+ kernel/sched/core.c | 12 +++++-------
+ 1 file changed, 5 insertions(+), 7 deletions(-)
+
+(limited to 'kernel/sched/core.c')
+
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index c57a79e3491103..aad48850c1ef0d 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -4423,7 +4423,8 @@ int wake_up_state(struct task_struct *p, unsigned int state)
+ * Perform scheduler related setup for a newly forked process p.
+ * p is forked by current.
+ *
+- * __sched_fork() is basic setup used by init_idle() too:
++ * __sched_fork() is basic setup which is also used by sched_init() to
++ * initialize the boot CPU's idle task.
+ */
+ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
+ {
+@@ -7697,8 +7698,6 @@ void __init init_idle(struct task_struct *idle, int cpu)
+ struct rq *rq = cpu_rq(cpu);
+ unsigned long flags;
+
+- __sched_fork(0, idle);
+-
+ raw_spin_lock_irqsave(&idle->pi_lock, flags);
+ raw_spin_rq_lock(rq);
+
+@@ -7713,10 +7712,8 @@ void __init init_idle(struct task_struct *idle, int cpu)
+
+ #ifdef CONFIG_SMP
+ /*
+- * It's possible that init_idle() gets called multiple times on a task,
+- * in that case do_set_cpus_allowed() will not do the right thing.
+- *
+- * And since this is boot we can forgo the serialization.
++ * No validation and serialization required at boot time and for
++ * setting up the idle tasks of not yet online CPUs.
+ */
+ set_cpus_allowed_common(idle, &ac);
+ #endif
+@@ -8561,6 +8558,7 @@ void __init sched_init(void)
+ * but because we are the idle thread, we just pick up running again
+ * when this runqueue becomes "idle".
+ */
++ __sched_fork(0, current);
+ init_idle(current, smp_processor_id());
+
+ calc_load_update = jiffies + LOAD_FREQ;
+--
+cgit 1.2.3-korg
+
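
To make the restructuring concrete: the commit moves the common setup out of
init_idle() so each idle task runs it exactly once. A toy C schematic of the
before/after shape, with invented names standing in for the kernel functions;
this is not kernel code:

    #include <stdio.h>

    struct toy_task {
        int common_init_count;  /* how many times common setup ran */
    };

    /* Stands in for __sched_fork(): setup every task needs exactly once. */
    static void toy_sched_fork(struct toy_task *t)
    {
        t->common_init_count++;
    }

    /* Stands in for init_idle(): idle-specific setup only. Before the fix
     * it also ran toy_sched_fork(), so idle tasks created via fork_idle()
     * (which had already run the fork path) were initialized twice. */
    static void toy_init_idle(struct toy_task *t)
    {
        (void)t;  /* idle-specific setup would go here */
    }

    int main(void)
    {
        struct toy_task boot_idle = { 0 };

        /* sched_init() analogue: common setup once, then the idle part. */
        toy_sched_fork(&boot_idle);
        toy_init_idle(&boot_idle);
        printf("common setup ran %d time(s)\n", boot_idle.common_init_count);
        return 0;
    }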
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-06 12:44 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-12-06 12:44 UTC (permalink / raw
To: gentoo-commits
commit: 7ff281e950aa65bc7416b43f48eeb7cabcbb7195
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 6 12:43:43 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Dec 6 12:43:43 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=7ff281e9
Linux patch 6.12.3, remove redundant patch
Removed:
1800_sched-init-idle-tasks-only-once.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 +--
1002_linux-6.12.3.patch | 57 +++++++++++++++++++++
1800_sched-init-idle-tasks-only-once.patch | 80 ------------------------------
3 files changed, 61 insertions(+), 84 deletions(-)
diff --git a/0000_README b/0000_README
index f7334645..c7f77bd5 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch: 1001_linux-6.12.2.patch
From: https://www.kernel.org
Desc: Linux 6.12.2
+Patch: 1002_linux-6.12.3.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.3
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
@@ -63,10 +67,6 @@ Patch: 1730_parisc-Disable-prctl.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
-Patch: 1800_sched-init-idle-tasks-only-once.patch
-From: https://git.kernel.org/
-Desc: sched: Initialize idle tasks only once
-
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1002_linux-6.12.3.patch b/1002_linux-6.12.3.patch
new file mode 100644
index 00000000..2e07970b
--- /dev/null
+++ b/1002_linux-6.12.3.patch
@@ -0,0 +1,57 @@
+diff --git a/Makefile b/Makefile
+index da6e99309a4da4..e81030ec683143 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index a1c353a62c5684..76b27b2a9c56ad 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -4424,7 +4424,8 @@ int wake_up_state(struct task_struct *p, unsigned int state)
+ * Perform scheduler related setup for a newly forked process p.
+ * p is forked by current.
+ *
+- * __sched_fork() is basic setup used by init_idle() too:
++ * __sched_fork() is basic setup which is also used by sched_init() to
++ * initialize the boot CPU's idle task.
+ */
+ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
+ {
+@@ -7683,8 +7684,6 @@ void __init init_idle(struct task_struct *idle, int cpu)
+ struct rq *rq = cpu_rq(cpu);
+ unsigned long flags;
+
+- __sched_fork(0, idle);
+-
+ raw_spin_lock_irqsave(&idle->pi_lock, flags);
+ raw_spin_rq_lock(rq);
+
+@@ -7699,10 +7698,8 @@ void __init init_idle(struct task_struct *idle, int cpu)
+
+ #ifdef CONFIG_SMP
+ /*
+- * It's possible that init_idle() gets called multiple times on a task,
+- * in that case do_set_cpus_allowed() will not do the right thing.
+- *
+- * And since this is boot we can forgo the serialization.
++ * No validation and serialization required at boot time and for
++ * setting up the idle tasks of not yet online CPUs.
+ */
+ set_cpus_allowed_common(idle, &ac);
+ #endif
+@@ -8546,6 +8543,7 @@ void __init sched_init(void)
+ * but because we are the idle thread, we just pick up running again
+ * when this runqueue becomes "idle".
+ */
++ __sched_fork(0, current);
+ init_idle(current, smp_processor_id());
+
+ calc_load_update = jiffies + LOAD_FREQ;
diff --git a/1800_sched-init-idle-tasks-only-once.patch b/1800_sched-init-idle-tasks-only-once.patch
deleted file mode 100644
index 013a45fc..00000000
--- a/1800_sched-init-idle-tasks-only-once.patch
+++ /dev/null
@@ -1,80 +0,0 @@
-From b23decf8ac9102fc52c4de5196f4dc0a5f3eb80b Mon Sep 17 00:00:00 2001
-From: Thomas Gleixner <tglx@linutronix.de>
-Date: Mon, 28 Oct 2024 11:43:42 +0100
-Subject: sched: Initialize idle tasks only once
-
-Idle tasks are initialized via __sched_fork() twice:
-
- fork_idle()
- copy_process()
- sched_fork()
- __sched_fork()
- init_idle()
- __sched_fork()
-
-Instead of cleaning this up, sched_ext hacked around it. Even when analysis
-and a solution were provided in a discussion, nobody cared to clean this up.
-
-init_idle() is also invoked from sched_init() to initialize the boot CPU's
-idle task, which requires the __sched_fork() invocation. But this can be
-trivially solved by invoking __sched_fork() before init_idle() in
-sched_init() and removing the __sched_fork() invocation from init_idle().
-
-Do so and clean up the comments explaining this historical leftover.
-
-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
-Link: https://lore.kernel.org/r/20241028103142.359584747@linutronix.de
----
- kernel/sched/core.c | 12 +++++-------
- 1 file changed, 5 insertions(+), 7 deletions(-)
-
-(limited to 'kernel/sched/core.c')
-
-diff --git a/kernel/sched/core.c b/kernel/sched/core.c
-index c57a79e3491103..aad48850c1ef0d 100644
---- a/kernel/sched/core.c
-+++ b/kernel/sched/core.c
-@@ -4423,7 +4423,8 @@ int wake_up_state(struct task_struct *p, unsigned int state)
- * Perform scheduler related setup for a newly forked process p.
- * p is forked by current.
- *
-- * __sched_fork() is basic setup used by init_idle() too:
-+ * __sched_fork() is basic setup which is also used by sched_init() to
-+ * initialize the boot CPU's idle task.
- */
- static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
- {
-@@ -7697,8 +7698,6 @@ void __init init_idle(struct task_struct *idle, int cpu)
- struct rq *rq = cpu_rq(cpu);
- unsigned long flags;
-
-- __sched_fork(0, idle);
--
- raw_spin_lock_irqsave(&idle->pi_lock, flags);
- raw_spin_rq_lock(rq);
-
-@@ -7713,10 +7712,8 @@ void __init init_idle(struct task_struct *idle, int cpu)
-
- #ifdef CONFIG_SMP
- /*
-- * It's possible that init_idle() gets called multiple times on a task,
-- * in that case do_set_cpus_allowed() will not do the right thing.
-- *
-- * And since this is boot we can forgo the serialization.
-+ * No validation and serialization required at boot time and for
-+ * setting up the idle tasks of not yet online CPUs.
- */
- set_cpus_allowed_common(idle, &ac);
- #endif
-@@ -8561,6 +8558,7 @@ void __init sched_init(void)
- * but because we are the idle thread, we just pick up running again
- * when this runqueue becomes "idle".
- */
-+ __sched_fork(0, current);
- init_idle(current, smp_processor_id());
-
- calc_load_update = jiffies + LOAD_FREQ;
---
-cgit 1.2.3-korg
-
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-09 11:35 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-12-09 11:35 UTC (permalink / raw
To: gentoo-commits
commit: a86bef4a2fd2b250f44dcf0c300bf7d7b26f05e5
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 9 11:34:48 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Dec 9 11:34:48 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a86bef4a
Linux patch 6.12.4
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1003_linux-6.12.4.patch | 4189 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4193 insertions(+)
diff --git a/0000_README b/0000_README
index c7f77bd5..87f43cf7 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch: 1002_linux-6.12.3.patch
From: https://www.kernel.org
Desc: Linux 6.12.3
+Patch: 1003_linux-6.12.4.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.4
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1003_linux-6.12.4.patch b/1003_linux-6.12.4.patch
new file mode 100644
index 00000000..42f90cf9
--- /dev/null
+++ b/1003_linux-6.12.4.patch
@@ -0,0 +1,4189 @@
+diff --git a/Documentation/devicetree/bindings/net/fsl,fec.yaml b/Documentation/devicetree/bindings/net/fsl,fec.yaml
+index 5536c06139cae5..24e863fdbdab08 100644
+--- a/Documentation/devicetree/bindings/net/fsl,fec.yaml
++++ b/Documentation/devicetree/bindings/net/fsl,fec.yaml
+@@ -183,6 +183,13 @@ properties:
+ description:
+ Register bits of stop mode control, the format is <&gpr req_gpr req_bit>.
+
++ fsl,pps-channel:
++ $ref: /schemas/types.yaml#/definitions/uint32
++ default: 0
++ description:
++ Specifies to which timer instance the PPS signal is routed.
++ enum: [0, 1, 2, 3]
++
+ mdio:
+ $ref: mdio.yaml#
+ unevaluatedProperties: false
+diff --git a/Makefile b/Makefile
+index e81030ec683143..87dc2f81086021 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
+index 1dfae1af8e31b0..ef6a657c8d1306 100644
+--- a/arch/arm/kernel/entry-armv.S
++++ b/arch/arm/kernel/entry-armv.S
+@@ -25,6 +25,7 @@
+ #include <asm/tls.h>
+ #include <asm/system_info.h>
+ #include <asm/uaccess-asm.h>
++#include <asm/kasan_def.h>
+
+ #include "entry-header.S"
+ #include <asm/probes.h>
+@@ -561,6 +562,13 @@ ENTRY(__switch_to)
+ @ entries covering the vmalloc region.
+ @
+ ldr r2, [ip]
++#ifdef CONFIG_KASAN_VMALLOC
++ @ Also dummy read from the KASAN shadow memory for the new stack if we
++ @ are using KASAN
++ mov_l r2, KASAN_SHADOW_OFFSET
++ add r2, r2, ip, lsr #KASAN_SHADOW_SCALE_SHIFT
++ ldr r2, [r2]
++#endif
+ #endif
+
+ @ When CONFIG_THREAD_INFO_IN_TASK=n, the update of SP itself is what
+diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
+index 794cfea9f9d4c8..89f1c97f3079c1 100644
+--- a/arch/arm/mm/ioremap.c
++++ b/arch/arm/mm/ioremap.c
+@@ -23,6 +23,7 @@
+ */
+ #include <linux/module.h>
+ #include <linux/errno.h>
++#include <linux/kasan.h>
+ #include <linux/mm.h>
+ #include <linux/vmalloc.h>
+ #include <linux/io.h>
+@@ -115,16 +116,40 @@ int ioremap_page(unsigned long virt, unsigned long phys,
+ }
+ EXPORT_SYMBOL(ioremap_page);
+
++#ifdef CONFIG_KASAN
++static unsigned long arm_kasan_mem_to_shadow(unsigned long addr)
++{
++ return (unsigned long)kasan_mem_to_shadow((void *)addr);
++}
++#else
++static unsigned long arm_kasan_mem_to_shadow(unsigned long addr)
++{
++ return 0;
++}
++#endif
++
++static void memcpy_pgd(struct mm_struct *mm, unsigned long start,
++ unsigned long end)
++{
++ end = ALIGN(end, PGDIR_SIZE);
++ memcpy(pgd_offset(mm, start), pgd_offset_k(start),
++ sizeof(pgd_t) * (pgd_index(end) - pgd_index(start)));
++}
++
+ void __check_vmalloc_seq(struct mm_struct *mm)
+ {
+ int seq;
+
+ do {
+- seq = atomic_read(&init_mm.context.vmalloc_seq);
+- memcpy(pgd_offset(mm, VMALLOC_START),
+- pgd_offset_k(VMALLOC_START),
+- sizeof(pgd_t) * (pgd_index(VMALLOC_END) -
+- pgd_index(VMALLOC_START)));
++ seq = atomic_read_acquire(&init_mm.context.vmalloc_seq);
++ memcpy_pgd(mm, VMALLOC_START, VMALLOC_END);
++ if (IS_ENABLED(CONFIG_KASAN_VMALLOC)) {
++ unsigned long start =
++ arm_kasan_mem_to_shadow(VMALLOC_START);
++ unsigned long end =
++ arm_kasan_mem_to_shadow(VMALLOC_END);
++ memcpy_pgd(mm, start, end);
++ }
+ /*
+ * Use a store-release so that other CPUs that observe the
+ * counter's new value are guaranteed to see the results of the
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
+index 6eab61a12cd8f8..b844759f52c0d8 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinephone.dtsi
+@@ -212,6 +212,9 @@ accelerometer@68 {
+ interrupts = <7 5 IRQ_TYPE_EDGE_RISING>; /* PH5 */
+ vdd-supply = <®_dldo1>;
+ vddio-supply = <®_dldo1>;
++ mount-matrix = "0", "1", "0",
++ "-1", "0", "0",
++ "0", "0", "1";
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
+index 5fa39591419115..aee79a50d0e26a 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-verdin.dtsi
+@@ -162,7 +162,7 @@ reg_usdhc2_vmmc: regulator-usdhc2 {
+ regulator-max-microvolt = <3300000>;
+ regulator-min-microvolt = <3300000>;
+ regulator-name = "+V3.3_SD";
+- startup-delay-us = <2000>;
++ startup-delay-us = <20000>;
+ };
+
+ reserved-memory {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
+index a19ad5ee7f792b..1689fe44099396 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-verdin.dtsi
+@@ -175,7 +175,7 @@ reg_usdhc2_vmmc: regulator-usdhc2 {
+ regulator-max-microvolt = <3300000>;
+ regulator-min-microvolt = <3300000>;
+ regulator-name = "+V3.3_SD";
+- startup-delay-us = <2000>;
++ startup-delay-us = <20000>;
+ };
+
+ reserved-memory {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+index 0c0b3ac5974525..cfcc7909dfe68d 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186-corsola.dtsi
+@@ -423,7 +423,7 @@ it6505dptx: dp-bridge@5c {
+ #sound-dai-cells = <0>;
+ ovdd-supply = <&mt6366_vsim2_reg>;
+ pwr18-supply = <&pp1800_dpbrdg_dx>;
+- reset-gpios = <&pio 177 GPIO_ACTIVE_HIGH>;
++ reset-gpios = <&pio 177 GPIO_ACTIVE_LOW>;
+
+ ports {
+ #address-cells = <1>;
+@@ -1336,7 +1336,7 @@ mt6366_vgpu_reg: vgpu {
+ regulator-allowed-modes = <MT6397_BUCK_MODE_AUTO
+ MT6397_BUCK_MODE_FORCE_PWM>;
+ regulator-coupled-with = <&mt6366_vsram_gpu_reg>;
+- regulator-coupled-max-spread = <10000>;
++ regulator-coupled-max-spread = <100000>;
+ };
+
+ mt6366_vproc11_reg: vproc11 {
+@@ -1545,7 +1545,7 @@ mt6366_vsram_gpu_reg: vsram-gpu {
+ regulator-ramp-delay = <6250>;
+ regulator-enable-ramp-delay = <240>;
+ regulator-coupled-with = <&mt6366_vgpu_reg>;
+- regulator-coupled-max-spread = <10000>;
++ regulator-coupled-max-spread = <100000>;
+ };
+
+ mt6366_vsram_others_reg: vsram-others {
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
+index 5bef31b8577be5..f0eac05f7483ea 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-verdin.dtsi
+@@ -160,7 +160,7 @@ reg_sdhc1_vmmc: regulator-sdhci1 {
+ regulator-max-microvolt = <3300000>;
+ regulator-min-microvolt = <3300000>;
+ regulator-name = "+V3.3_SD";
+- startup-delay-us = <2000>;
++ startup-delay-us = <20000>;
+ };
+
+ reg_sdhc1_vqmmc: regulator-sdhci1-vqmmc {
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 1a2ff0276365b4..c7b420d6787ca1 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -275,8 +275,8 @@ config PPC
+ select HAVE_RSEQ
+ select HAVE_SETUP_PER_CPU_AREA if PPC64
+ select HAVE_SOFTIRQ_ON_OWN_STACK
+- select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r2)
+- select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,-mstack-protector-guard=tls -mstack-protector-guard-reg=r13)
++ select HAVE_STACKPROTECTOR if PPC32 && $(cc-option,$(m32-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 -mstack-protector-guard-offset=0)
++ select HAVE_STACKPROTECTOR if PPC64 && $(cc-option,$(m64-flag) -mstack-protector-guard=tls -mstack-protector-guard-reg=r13 -mstack-protector-guard-offset=0)
+ select HAVE_STATIC_CALL if PPC32
+ select HAVE_SYSCALL_TRACEPOINTS
+ select HAVE_VIRT_CPU_ACCOUNTING
+diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
+index bbfe4a1f06ef9d..cbb353ddacb7ad 100644
+--- a/arch/powerpc/Makefile
++++ b/arch/powerpc/Makefile
+@@ -100,13 +100,6 @@ KBUILD_AFLAGS += -m$(BITS)
+ KBUILD_LDFLAGS += -m elf$(BITS)$(LDEMULATION)
+ endif
+
+-cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard=tls
+-ifdef CONFIG_PPC64
+-cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard-reg=r13
+-else
+-cflags-$(CONFIG_STACKPROTECTOR) += -mstack-protector-guard-reg=r2
+-endif
+-
+ LDFLAGS_vmlinux-y := -Bstatic
+ LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) := -pie
+ LDFLAGS_vmlinux-$(CONFIG_RELOCATABLE) += -z notext
+@@ -402,9 +395,11 @@ prepare: stack_protector_prepare
+ PHONY += stack_protector_prepare
+ stack_protector_prepare: prepare0
+ ifdef CONFIG_PPC64
+- $(eval KBUILD_CFLAGS += -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "PACA_CANARY") print $$3;}' include/generated/asm-offsets.h))
++ $(eval KBUILD_CFLAGS += -mstack-protector-guard=tls -mstack-protector-guard-reg=r13 \
++ -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "PACA_CANARY") print $$3;}' include/generated/asm-offsets.h))
+ else
+- $(eval KBUILD_CFLAGS += -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "TASK_CANARY") print $$3;}' include/generated/asm-offsets.h))
++ $(eval KBUILD_CFLAGS += -mstack-protector-guard=tls -mstack-protector-guard-reg=r2 \
++ -mstack-protector-guard-offset=$(shell awk '{if ($$2 == "TASK_CANARY") print $$3;}' include/generated/asm-offsets.h))
+ endif
+ endif
+
+diff --git a/arch/powerpc/kernel/vdso/Makefile b/arch/powerpc/kernel/vdso/Makefile
+index 31ca5a5470047e..c568cad6a22e6b 100644
+--- a/arch/powerpc/kernel/vdso/Makefile
++++ b/arch/powerpc/kernel/vdso/Makefile
+@@ -54,10 +54,14 @@ ldflags-y += $(filter-out $(CC_AUTO_VAR_INIT_ZERO_ENABLER) $(CC_FLAGS_FTRACE) -W
+
+ CC32FLAGS := -m32
+ CC32FLAGSREMOVE := -mcmodel=medium -mabi=elfv1 -mabi=elfv2 -mcall-aixdesc
+- # This flag is supported by clang for 64-bit but not 32-bit so it will cause
+- # an unused command line flag warning for this file.
+ ifdef CONFIG_CC_IS_CLANG
++# This flag is supported by clang for 64-bit but not 32-bit so it will cause
++# an unused command line flag warning for this file.
+ CC32FLAGSREMOVE += -fno-stack-clash-protection
++# -mstack-protector-guard values from the 64-bit build are not valid for the
++# 32-bit one. clang validates the values passed to these arguments during
++# parsing, even when -fno-stack-protector is passed afterwards.
++CC32FLAGSREMOVE += -mstack-protector-guard%
+ endif
+ LD32FLAGS := -Wl,-soname=linux-vdso32.so.1
+ AS32FLAGS := -D__VDSO32__
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index d6d5317f768e82..594da4cba707a6 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -450,9 +450,13 @@ SYM_CODE_START(\name)
+ SYM_CODE_END(\name)
+ .endm
+
++ .section .irqentry.text, "ax"
++
+ INT_HANDLER ext_int_handler,__LC_EXT_OLD_PSW,do_ext_irq
+ INT_HANDLER io_int_handler,__LC_IO_OLD_PSW,do_io_irq
+
++ .section .kprobes.text, "ax"
++
+ /*
+ * Machine check handler routines
+ */
+diff --git a/arch/s390/kernel/kprobes.c b/arch/s390/kernel/kprobes.c
+index 6295faf0987d86..8b80ea57125f3c 100644
+--- a/arch/s390/kernel/kprobes.c
++++ b/arch/s390/kernel/kprobes.c
+@@ -489,6 +489,12 @@ int __init arch_init_kprobes(void)
+ return 0;
+ }
+
++int __init arch_populate_kprobe_blacklist(void)
++{
++ return kprobe_add_area_blacklist((unsigned long)__irqentry_text_start,
++ (unsigned long)__irqentry_text_end);
++}
++
+ int arch_trampoline_kprobe(struct kprobe *p)
+ {
+ return 0;
+diff --git a/arch/s390/kernel/stacktrace.c b/arch/s390/kernel/stacktrace.c
+index 9f59837d159e0c..40edfde25f5b97 100644
+--- a/arch/s390/kernel/stacktrace.c
++++ b/arch/s390/kernel/stacktrace.c
+@@ -151,7 +151,7 @@ void arch_stack_walk_user_common(stack_trace_consume_fn consume_entry, void *coo
+ break;
+ }
+ if (!store_ip(consume_entry, cookie, entry, perf, ip))
+- return;
++ break;
+ first = false;
+ }
+ pagefault_enable();
+diff --git a/drivers/android/binder.c b/drivers/android/binder.c
+index 978740537a1aac..ef353ca13c356a 100644
+--- a/drivers/android/binder.c
++++ b/drivers/android/binder.c
+@@ -1225,6 +1225,12 @@ static void binder_cleanup_ref_olocked(struct binder_ref *ref)
+ binder_dequeue_work(ref->proc, &ref->death->work);
+ binder_stats_deleted(BINDER_STAT_DEATH);
+ }
++
++ if (ref->freeze) {
++ binder_dequeue_work(ref->proc, &ref->freeze->work);
++ binder_stats_deleted(BINDER_STAT_FREEZE);
++ }
++
+ binder_stats_deleted(BINDER_STAT_REF);
+ }
+
+@@ -3850,7 +3856,6 @@ binder_request_freeze_notification(struct binder_proc *proc,
+ {
+ struct binder_ref_freeze *freeze;
+ struct binder_ref *ref;
+- bool is_frozen;
+
+ freeze = kzalloc(sizeof(*freeze), GFP_KERNEL);
+ if (!freeze)
+@@ -3866,32 +3871,31 @@ binder_request_freeze_notification(struct binder_proc *proc,
+ }
+
+ binder_node_lock(ref->node);
+-
+- if (ref->freeze || !ref->node->proc) {
+- binder_user_error("%d:%d invalid BC_REQUEST_FREEZE_NOTIFICATION %s\n",
+- proc->pid, thread->pid,
+- ref->freeze ? "already set" : "dead node");
++ if (ref->freeze) {
++ binder_user_error("%d:%d BC_REQUEST_FREEZE_NOTIFICATION already set\n",
++ proc->pid, thread->pid);
+ binder_node_unlock(ref->node);
+ binder_proc_unlock(proc);
+ kfree(freeze);
+ return -EINVAL;
+ }
+- binder_inner_proc_lock(ref->node->proc);
+- is_frozen = ref->node->proc->is_frozen;
+- binder_inner_proc_unlock(ref->node->proc);
+
+ binder_stats_created(BINDER_STAT_FREEZE);
+ INIT_LIST_HEAD(&freeze->work.entry);
+ freeze->cookie = handle_cookie->cookie;
+ freeze->work.type = BINDER_WORK_FROZEN_BINDER;
+- freeze->is_frozen = is_frozen;
+-
+ ref->freeze = freeze;
+
+- binder_inner_proc_lock(proc);
+- binder_enqueue_work_ilocked(&ref->freeze->work, &proc->todo);
+- binder_wakeup_proc_ilocked(proc);
+- binder_inner_proc_unlock(proc);
++ if (ref->node->proc) {
++ binder_inner_proc_lock(ref->node->proc);
++ freeze->is_frozen = ref->node->proc->is_frozen;
++ binder_inner_proc_unlock(ref->node->proc);
++
++ binder_inner_proc_lock(proc);
++ binder_enqueue_work_ilocked(&freeze->work, &proc->todo);
++ binder_wakeup_proc_ilocked(proc);
++ binder_inner_proc_unlock(proc);
++ }
+
+ binder_node_unlock(ref->node);
+ binder_proc_unlock(proc);
+@@ -5151,6 +5155,16 @@ static void binder_release_work(struct binder_proc *proc,
+ } break;
+ case BINDER_WORK_NODE:
+ break;
++ case BINDER_WORK_CLEAR_FREEZE_NOTIFICATION: {
++ struct binder_ref_freeze *freeze;
++
++ freeze = container_of(w, struct binder_ref_freeze, work);
++ binder_debug(BINDER_DEBUG_DEAD_TRANSACTION,
++ "undelivered freeze notification, %016llx\n",
++ (u64)freeze->cookie);
++ kfree(freeze);
++ binder_stats_deleted(BINDER_STAT_FREEZE);
++ } break;
+ default:
+ pr_err("unexpected work type, %d, not freed\n",
+ wtype);
+@@ -5552,6 +5566,7 @@ static bool binder_txns_pending_ilocked(struct binder_proc *proc)
+
+ static void binder_add_freeze_work(struct binder_proc *proc, bool is_frozen)
+ {
++ struct binder_node *prev = NULL;
+ struct rb_node *n;
+ struct binder_ref *ref;
+
+@@ -5560,7 +5575,10 @@ static void binder_add_freeze_work(struct binder_proc *proc, bool is_frozen)
+ struct binder_node *node;
+
+ node = rb_entry(n, struct binder_node, rb_node);
++ binder_inc_node_tmpref_ilocked(node);
+ binder_inner_proc_unlock(proc);
++ if (prev)
++ binder_put_node(prev);
+ binder_node_lock(node);
+ hlist_for_each_entry(ref, &node->refs, node_entry) {
+ /*
+@@ -5586,10 +5604,15 @@ static void binder_add_freeze_work(struct binder_proc *proc, bool is_frozen)
+ }
+ binder_inner_proc_unlock(ref->proc);
+ }
++ prev = node;
+ binder_node_unlock(node);
+ binder_inner_proc_lock(proc);
++ if (proc->is_dead)
++ break;
+ }
+ binder_inner_proc_unlock(proc);
++ if (prev)
++ binder_put_node(prev);
+ }
+
+ static int binder_ioctl_freeze(struct binder_freeze_info *info,
+@@ -6260,6 +6283,7 @@ static void binder_deferred_release(struct binder_proc *proc)
+
+ binder_release_work(proc, &proc->todo);
+ binder_release_work(proc, &proc->delivered_death);
++ binder_release_work(proc, &proc->delivered_freeze);
+
+ binder_debug(BINDER_DEBUG_OPEN_CLOSE,
+ "%s: %d threads %d, nodes %d (ref %d), refs %d, active transactions %d\n",
+@@ -6393,6 +6417,12 @@ static void print_binder_work_ilocked(struct seq_file *m,
+ case BINDER_WORK_CLEAR_DEATH_NOTIFICATION:
+ seq_printf(m, "%shas cleared death notification\n", prefix);
+ break;
++ case BINDER_WORK_FROZEN_BINDER:
++ seq_printf(m, "%shas frozen binder\n", prefix);
++ break;
++ case BINDER_WORK_CLEAR_FREEZE_NOTIFICATION:
++ seq_printf(m, "%shas cleared freeze notification\n", prefix);
++ break;
+ default:
+ seq_printf(m, "%sunknown work: type %d\n", prefix, w->type);
+ break;
+@@ -6539,6 +6569,10 @@ static void print_binder_proc(struct seq_file *m,
+ seq_puts(m, " has delivered dead binder\n");
+ break;
+ }
++ list_for_each_entry(w, &proc->delivered_freeze, entry) {
++ seq_puts(m, " has delivered freeze binder\n");
++ break;
++ }
+ binder_inner_proc_unlock(proc);
+ if (!print_all && m->count == header_pos)
+ m->count = start_pos;
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 048ff98dbdfd84..d922cefc1e6625 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -1989,10 +1989,10 @@ static struct device *fwnode_get_next_parent_dev(const struct fwnode_handle *fwn
+ *
+ * Return true if one or more cycles were found. Otherwise, return false.
+ */
+-static bool __fw_devlink_relax_cycles(struct device *con,
++static bool __fw_devlink_relax_cycles(struct fwnode_handle *con_handle,
+ struct fwnode_handle *sup_handle)
+ {
+- struct device *sup_dev = NULL, *par_dev = NULL;
++ struct device *sup_dev = NULL, *par_dev = NULL, *con_dev = NULL;
+ struct fwnode_link *link;
+ struct device_link *dev_link;
+ bool ret = false;
+@@ -2009,22 +2009,22 @@ static bool __fw_devlink_relax_cycles(struct device *con,
+
+ sup_handle->flags |= FWNODE_FLAG_VISITED;
+
+- sup_dev = get_dev_from_fwnode(sup_handle);
+-
+ /* Termination condition. */
+- if (sup_dev == con) {
++ if (sup_handle == con_handle) {
+ pr_debug("----- cycle: start -----\n");
+ ret = true;
+ goto out;
+ }
+
++ sup_dev = get_dev_from_fwnode(sup_handle);
++ con_dev = get_dev_from_fwnode(con_handle);
+ /*
+ * If sup_dev is bound to a driver and @con hasn't started binding to a
+ * driver, sup_dev can't be a consumer of @con. So, no need to check
+ * further.
+ */
+ if (sup_dev && sup_dev->links.status == DL_DEV_DRIVER_BOUND &&
+- con->links.status == DL_DEV_NO_DRIVER) {
++ con_dev && con_dev->links.status == DL_DEV_NO_DRIVER) {
+ ret = false;
+ goto out;
+ }
+@@ -2033,7 +2033,7 @@ static bool __fw_devlink_relax_cycles(struct device *con,
+ if (link->flags & FWLINK_FLAG_IGNORE)
+ continue;
+
+- if (__fw_devlink_relax_cycles(con, link->supplier)) {
++ if (__fw_devlink_relax_cycles(con_handle, link->supplier)) {
+ __fwnode_link_cycle(link);
+ ret = true;
+ }
+@@ -2048,7 +2048,7 @@ static bool __fw_devlink_relax_cycles(struct device *con,
+ else
+ par_dev = fwnode_get_next_parent_dev(sup_handle);
+
+- if (par_dev && __fw_devlink_relax_cycles(con, par_dev->fwnode)) {
++ if (par_dev && __fw_devlink_relax_cycles(con_handle, par_dev->fwnode)) {
+ pr_debug("%pfwf: cycle: child of %pfwf\n", sup_handle,
+ par_dev->fwnode);
+ ret = true;
+@@ -2066,7 +2066,7 @@ static bool __fw_devlink_relax_cycles(struct device *con,
+ !(dev_link->flags & DL_FLAG_CYCLE))
+ continue;
+
+- if (__fw_devlink_relax_cycles(con,
++ if (__fw_devlink_relax_cycles(con_handle,
+ dev_link->supplier->fwnode)) {
+ pr_debug("%pfwf: cycle: depends on %pfwf\n", sup_handle,
+ dev_link->supplier->fwnode);
+@@ -2114,11 +2114,6 @@ static int fw_devlink_create_devlink(struct device *con,
+ if (link->flags & FWLINK_FLAG_IGNORE)
+ return 0;
+
+- if (con->fwnode == link->consumer)
+- flags = fw_devlink_get_flags(link->flags);
+- else
+- flags = FW_DEVLINK_FLAGS_PERMISSIVE;
+-
+ /*
+ * In some cases, a device P might also be a supplier to its child node
+ * C. However, this would defer the probe of C until the probe of P
+@@ -2139,25 +2134,23 @@ static int fw_devlink_create_devlink(struct device *con,
+ return -EINVAL;
+
+ /*
+- * SYNC_STATE_ONLY device links don't block probing and supports cycles.
+- * So, one might expect that cycle detection isn't necessary for them.
+- * However, if the device link was marked as SYNC_STATE_ONLY because
+- * it's part of a cycle, then we still need to do cycle detection. This
+- * is because the consumer and supplier might be part of multiple cycles
+- * and we need to detect all those cycles.
++ * Don't try to optimize by not calling the cycle detection logic under
++ * certain conditions. There's always some corner case that won't get
++ * detected.
+ */
+- if (!device_link_flag_is_sync_state_only(flags) ||
+- flags & DL_FLAG_CYCLE) {
+- device_links_write_lock();
+- if (__fw_devlink_relax_cycles(con, sup_handle)) {
+- __fwnode_link_cycle(link);
+- flags = fw_devlink_get_flags(link->flags);
+- pr_debug("----- cycle: end -----\n");
+- dev_info(con, "Fixed dependency cycle(s) with %pfwf\n",
+- sup_handle);
+- }
+- device_links_write_unlock();
++ device_links_write_lock();
++ if (__fw_devlink_relax_cycles(link->consumer, sup_handle)) {
++ __fwnode_link_cycle(link);
++ pr_debug("----- cycle: end -----\n");
++ pr_info("%pfwf: Fixed dependency cycle(s) with %pfwf\n",
++ link->consumer, sup_handle);
+ }
++ device_links_write_unlock();
++
++ if (con->fwnode == link->consumer)
++ flags = fw_devlink_get_flags(link->flags);
++ else
++ flags = FW_DEVLINK_FLAGS_PERMISSIVE;
+
+ if (sup_handle->flags & FWNODE_FLAG_NOT_DEVICE)
+ sup_dev = fwnode_get_next_parent_dev(sup_handle);
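The fw_devlink rework above changes the cycle-walk termination test from comparing struct device pointers to comparing fwnode handles, so a cycle is detected even before the consumer has a struct device. A minimal user-space sketch of that handle-based walk, using a toy node type in place of the real driver-core structures:

#include <stdbool.h>
#include <stdio.h>

#define VISITED 0x1

struct node {
	const char *name;
	unsigned int flags;
	struct node *suppliers[4];	/* NULL-terminated dependency list */
};

/* Return true if a dependency path from @sup leads back to @con. */
static bool relax_cycles(struct node *con, struct node *sup)
{
	bool ret = false;
	int i;

	if (!sup || (sup->flags & VISITED))
		return false;
	sup->flags |= VISITED;

	/* Termination condition: compare handles, not bound devices. */
	if (sup == con) {
		ret = true;
		goto out;
	}

	for (i = 0; sup->suppliers[i]; i++)
		if (relax_cycles(con, sup->suppliers[i]))
			ret = true;
out:
	sup->flags &= ~VISITED;	/* toy only: the real code batches walks */
	return ret;
}

int main(void)
{
	struct node a = { .name = "a" }, b = { .name = "b" };

	a.suppliers[0] = &b;
	b.suppliers[0] = &a;	/* a -> b -> a: a dependency cycle */
	printf("cycle: %s\n", relax_cycles(&a, a.suppliers[0]) ? "yes" : "no");
	return 0;
}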
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index e682797cdee783..d6a1ba969266a4 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -1692,6 +1692,13 @@ static int zram_recompress(struct zram *zram, u32 index, struct page *page,
+ if (ret)
+ return ret;
+
++ /*
++ * We touched this entry so mark it as non-IDLE. This makes sure that
++ * we don't preserve IDLE flag and don't incorrectly pick this entry
++ * for different post-processing type (e.g. writeback).
++ */
++ zram_clear_flag(zram, index, ZRAM_IDLE);
++
+ class_index_old = zs_lookup_class_index(zram->mem_pool, comp_len_old);
+ /*
+ * Iterate the secondary comp algorithms list (in order of priority)
+diff --git a/drivers/clk/qcom/gcc-qcs404.c b/drivers/clk/qcom/gcc-qcs404.c
+index c3cfd572e7c1e0..5ca003c9bfba89 100644
+--- a/drivers/clk/qcom/gcc-qcs404.c
++++ b/drivers/clk/qcom/gcc-qcs404.c
+@@ -131,6 +131,7 @@ static struct clk_alpha_pll gpll1_out_main = {
+ /* 930MHz configuration */
+ static const struct alpha_pll_config gpll3_config = {
+ .l = 48,
++ .alpha_hi = 0x70,
+ .alpha = 0x0,
+ .alpha_en_mask = BIT(24),
+ .post_div_mask = 0xf << 8,
+diff --git a/drivers/cpufreq/scmi-cpufreq.c b/drivers/cpufreq/scmi-cpufreq.c
+index 5892c73e129d2b..07d6f9a9b7c820 100644
+--- a/drivers/cpufreq/scmi-cpufreq.c
++++ b/drivers/cpufreq/scmi-cpufreq.c
+@@ -287,7 +287,7 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
+ ret = cpufreq_enable_boost_support();
+ if (ret) {
+ dev_warn(cpu_dev, "failed to enable boost: %d\n", ret);
+- goto out_free_opp;
++ goto out_free_table;
+ } else {
+ scmi_cpufreq_hw_attr[1] = &cpufreq_freq_attr_scaling_boost_freqs;
+ scmi_cpufreq_driver.boost_enabled = true;
+@@ -296,6 +296,8 @@ static int scmi_cpufreq_init(struct cpufreq_policy *policy)
+
+ return 0;
+
++out_free_table:
++ dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table);
+ out_free_opp:
+ dev_pm_opp_remove_all_dynamic(cpu_dev);
+
+diff --git a/drivers/firmware/efi/libstub/efi-stub.c b/drivers/firmware/efi/libstub/efi-stub.c
+index 2a1b43f9e0fa2b..df5ffe23644298 100644
+--- a/drivers/firmware/efi/libstub/efi-stub.c
++++ b/drivers/firmware/efi/libstub/efi-stub.c
+@@ -149,7 +149,7 @@ efi_status_t efi_handle_cmdline(efi_loaded_image_t *image, char **cmdline_ptr)
+ return EFI_SUCCESS;
+
+ fail_free_cmdline:
+- efi_bs_call(free_pool, cmdline_ptr);
++ efi_bs_call(free_pool, cmdline);
+ return status;
+ }
+
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index cf5bc77e2362c4..610e159d362ad6 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -327,7 +327,7 @@ config DRM_TTM_HELPER
+ config DRM_GEM_DMA_HELPER
+ tristate
+ depends on DRM
+- select FB_DMAMEM_HELPERS if DRM_FBDEV_EMULATION
++ select FB_DMAMEM_HELPERS_DEFERRED if DRM_FBDEV_EMULATION
+ help
+ Choose this if you need the GEM DMA helper functions
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index c2394c8b4d6b21..1f08cb88d51be5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -4584,8 +4584,8 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
+ int idx;
+ bool px;
+
+- amdgpu_fence_driver_sw_fini(adev);
+ amdgpu_device_ip_fini(adev);
++ amdgpu_fence_driver_sw_fini(adev);
+ amdgpu_ucode_release(&adev->firmware.gpu_info_fw);
+ adev->accel_working = false;
+ dma_fence_put(rcu_dereference_protected(adev->gang_submit, true));
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+index 74fdbf71d95b74..599d3ca4e0ef9e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c
+@@ -214,15 +214,15 @@ int amdgpu_vce_sw_fini(struct amdgpu_device *adev)
+
+ drm_sched_entity_destroy(&adev->vce.entity);
+
+- amdgpu_bo_free_kernel(&adev->vce.vcpu_bo, &adev->vce.gpu_addr,
+- (void **)&adev->vce.cpu_addr);
+-
+ for (i = 0; i < adev->vce.num_rings; i++)
+ amdgpu_ring_fini(&adev->vce.ring[i]);
+
+ amdgpu_ucode_release(&adev->vce.fw);
+ mutex_destroy(&adev->vce.idle_mutex);
+
++ amdgpu_bo_free_kernel(&adev->vce.vcpu_bo, &adev->vce.gpu_addr,
++ (void **)&adev->vce.cpu_addr);
++
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
+index 7a9adfda5814a6..814ab59fdd4a3a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
+@@ -275,6 +275,15 @@ static void nbio_v7_11_init_registers(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_SOC15(NBIO, 0, regBIF_BIF256_CI256_RC3X4_USB4_PCIE_MST_CTRL_3, data);
+
++ switch (adev->ip_versions[NBIO_HWIP][0]) {
++ case IP_VERSION(7, 11, 0):
++ case IP_VERSION(7, 11, 1):
++ case IP_VERSION(7, 11, 2):
++ case IP_VERSION(7, 11, 3):
++ data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4) & ~BIT(23);
++ WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4, data);
++ break;
++ }
+ }
+
+ static void nbio_v7_11_update_medium_grain_clock_gating(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
+index 4843dcb9a5f796..d6037577c53278 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_kernel_queue.c
+@@ -125,7 +125,7 @@ static bool kq_initialize(struct kernel_queue *kq, struct kfd_node *dev,
+
+ memset(kq->pq_kernel_addr, 0, queue_size);
+ memset(kq->rptr_kernel, 0, sizeof(*kq->rptr_kernel));
+- memset(kq->wptr_kernel, 0, sizeof(*kq->wptr_kernel));
++ memset(kq->wptr_kernel, 0, dev->kfd->device_info.doorbell_size);
+
+ prop.queue_size = queue_size;
+ prop.is_interop = false;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index a88f1b6ea64cfa..a6911bb2cf0c6c 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -3066,7 +3066,10 @@ static void restore_planes_and_stream_state(
+ return;
+
+ for (i = 0; i < status->plane_count; i++) {
++ /* refcount will always be valid, restore everything else */
++ struct kref refcount = status->plane_states[i]->refcount;
+ *status->plane_states[i] = scratch->plane_states[i];
++ status->plane_states[i]->refcount = refcount;
+ }
+ *stream = scratch->stream_state;
+ }
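The kref save/restore added above exists because *status->plane_states[i] is a whole-struct assignment that would otherwise clobber the live reference count. A self-contained sketch of the pattern, with toy stand-ins for the plane state and its kref:

#include <stdio.h>

struct kref_toy { int refcount; };

struct plane_state {
	struct kref_toy refcount;
	int format, zpos;	/* stand-ins for the real plane fields */
};

/* Restore saved state without clobbering the live reference count. */
static void restore_plane(struct plane_state *live,
			  const struct plane_state *scratch)
{
	struct kref_toy saved = live->refcount;	/* refcount stays valid */

	*live = *scratch;
	live->refcount = saved;
}

int main(void)
{
	struct plane_state live = { .refcount = { 3 }, .format = 1, .zpos = 0 };
	struct plane_state scratch = { .refcount = { 1 }, .format = 7, .zpos = 2 };

	restore_plane(&live, &scratch);
	printf("format=%d zpos=%d refs=%d\n",
	       live.format, live.zpos, live.refcount.refcount);
	return 0;
}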
+diff --git a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+index 838d72eaa87fbd..b363f5360818d8 100644
+--- a/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
++++ b/drivers/gpu/drm/amd/display/dc/dccg/dcn35/dcn35_dccg.c
+@@ -1392,10 +1392,10 @@ static void dccg35_set_dtbclk_dto(
+
+ /* The recommended programming sequence to enable DTBCLK DTO to generate
+ * valid pixel HPO DPSTREAM ENCODER, specifies that DTO source select should
+- * be set only after DTO is enabled
++ * be set only after DTO is enabled.
++ * PIPEx_DTO_SRC_SEL should not be programmed during DTBCLK update since OTG may still be on, and the
++ * programming is handled in program_pix_clk() regardless, so it can be removed from here.
+ */
+- REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+- PIPE_DTO_SRC_SEL[params->otg_inst], 2);
+ } else {
+ switch (params->otg_inst) {
+ case 0:
+@@ -1412,9 +1412,12 @@ static void dccg35_set_dtbclk_dto(
+ break;
+ }
+
+- REG_UPDATE_2(OTG_PIXEL_RATE_CNTL[params->otg_inst],
+- DTBCLK_DTO_ENABLE[params->otg_inst], 0,
+- PIPE_DTO_SRC_SEL[params->otg_inst], params->is_hdmi ? 0 : 1);
++ /**
++ * PIPEx_DTO_SRC_SEL should not be programmed during DTBCLK update since OTG may still be on, and the
++ * programming is handled in program_pix_clk() regardless, so it can be removed from here.
++ */
++ REG_UPDATE(OTG_PIXEL_RATE_CNTL[params->otg_inst],
++ DTBCLK_DTO_ENABLE[params->otg_inst], 0);
+
+ REG_WRITE(DTBCLK_DTO_MODULO[params->otg_inst], 0);
+ REG_WRITE(DTBCLK_DTO_PHASE[params->otg_inst], 0);
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c
+index 6eccf0241d857d..1ed21c1b86a5bb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c
+@@ -258,12 +258,25 @@ static unsigned int find_preferred_pipe_candidates(const struct dc_state *existi
+ * However this condition comes with a caveat. We need to ignore pipes that will
+ * require a change in OPP but still have the same stream id. For example during
+ * an MPC to ODM transition.
++ *
++ * Add a check to avoid selecting the head pipe, using the dc resource
++ * helper resource_get_primary_dpp_pipe() and comparing the pipe index.
+ */
+ if (existing_state) {
+ for (i = 0; i < pipe_count; i++) {
+ if (existing_state->res_ctx.pipe_ctx[i].stream && existing_state->res_ctx.pipe_ctx[i].stream->stream_id == stream_id) {
++ struct pipe_ctx *head_pipe =
++ resource_is_pipe_type(&existing_state->res_ctx.pipe_ctx[i], DPP_PIPE) ?
++ resource_get_primary_dpp_pipe(&existing_state->res_ctx.pipe_ctx[i]) :
++ NULL;
++
++ // we should always respect the head pipe from selection
++ if (head_pipe && head_pipe->pipe_idx == i)
++ continue;
+ if (existing_state->res_ctx.pipe_ctx[i].plane_res.hubp &&
+- existing_state->res_ctx.pipe_ctx[i].plane_res.hubp->opp_id != i)
++ existing_state->res_ctx.pipe_ctx[i].plane_res.hubp->opp_id != i &&
++ (existing_state->res_ctx.pipe_ctx[i].prev_odm_pipe ||
++ existing_state->res_ctx.pipe_ctx[i].next_odm_pipe))
+ continue;
+
+ preferred_pipe_candidates[num_preferred_candidates++] = i;
+@@ -292,6 +305,14 @@ static unsigned int find_last_resort_pipe_candidates(const struct dc_state *exis
+ */
+ if (existing_state) {
+ for (i = 0; i < pipe_count; i++) {
++ struct pipe_ctx *head_pipe =
++ resource_is_pipe_type(&existing_state->res_ctx.pipe_ctx[i], DPP_PIPE) ?
++ resource_get_primary_dpp_pipe(&existing_state->res_ctx.pipe_ctx[i]) :
++ NULL;
++
++ // we should always respect the head pipe from selection
++ if (head_pipe && head_pipe->pipe_idx == i)
++ continue;
+ if ((existing_state->res_ctx.pipe_ctx[i].plane_res.hubp &&
+ existing_state->res_ctx.pipe_ctx[i].plane_res.hubp->opp_id != i) ||
+ existing_state->res_ctx.pipe_ctx[i].stream_res.tg)
+diff --git a/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_offset.h b/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_offset.h
+index 5ebe4cb40f9db6..c38a01742d6f0e 100644
+--- a/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_offset.h
++++ b/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_offset.h
+@@ -7571,6 +7571,8 @@
+ // base address: 0x10100000
+ #define regRCC_STRAP0_RCC_DEV0_EPF0_STRAP0 0xd000
+ #define regRCC_STRAP0_RCC_DEV0_EPF0_STRAP0_BASE_IDX 5
++#define regRCC_DEV0_EPF5_STRAP4 0xd284
++#define regRCC_DEV0_EPF5_STRAP4_BASE_IDX 5
+
+
+ // addressBlock: nbio_nbif0_bif_rst_bif_rst_regblk
+diff --git a/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_sh_mask.h b/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_sh_mask.h
+index eb8c556d9c9300..3b96f1e5a1802c 100644
+--- a/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_sh_mask.h
++++ b/drivers/gpu/drm/amd/include/asic_reg/nbio/nbio_7_11_0_sh_mask.h
+@@ -50665,6 +50665,19 @@
+ #define RCC_STRAP0_RCC_DEV0_EPF0_STRAP0__STRAP_D1_SUPPORT_DEV0_F0_MASK 0x40000000L
+ #define RCC_STRAP0_RCC_DEV0_EPF0_STRAP0__STRAP_D2_SUPPORT_DEV0_F0_MASK 0x80000000L
+
++//RCC_DEV0_EPF5_STRAP4
++#define RCC_DEV0_EPF5_STRAP4__STRAP_ATOMIC_64BIT_EN_DEV0_F5__SHIFT 0x14
++#define RCC_DEV0_EPF5_STRAP4__STRAP_ATOMIC_EN_DEV0_F5__SHIFT 0x15
++#define RCC_DEV0_EPF5_STRAP4__STRAP_FLR_EN_DEV0_F5__SHIFT 0x16
++#define RCC_DEV0_EPF5_STRAP4__STRAP_PME_SUPPORT_DEV0_F5__SHIFT 0x17
++#define RCC_DEV0_EPF5_STRAP4__STRAP_INTERRUPT_PIN_DEV0_F5__SHIFT 0x1c
++#define RCC_DEV0_EPF5_STRAP4__STRAP_AUXPWR_SUPPORT_DEV0_F5__SHIFT 0x1f
++#define RCC_DEV0_EPF5_STRAP4__STRAP_ATOMIC_64BIT_EN_DEV0_F5_MASK 0x00100000L
++#define RCC_DEV0_EPF5_STRAP4__STRAP_ATOMIC_EN_DEV0_F5_MASK 0x00200000L
++#define RCC_DEV0_EPF5_STRAP4__STRAP_FLR_EN_DEV0_F5_MASK 0x00400000L
++#define RCC_DEV0_EPF5_STRAP4__STRAP_PME_SUPPORT_DEV0_F5_MASK 0x0F800000L
++#define RCC_DEV0_EPF5_STRAP4__STRAP_INTERRUPT_PIN_DEV0_F5_MASK 0x70000000L
++#define RCC_DEV0_EPF5_STRAP4__STRAP_AUXPWR_SUPPORT_DEV0_F5_MASK 0x80000000L
+
+ // addressBlock: nbio_nbif0_bif_rst_bif_rst_regblk
+ //HARD_RST_CTRL
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 80e60ea2d11e3c..32bdeac2676b5c 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -1695,7 +1695,9 @@ static int smu_smc_hw_setup(struct smu_context *smu)
+ return ret;
+ }
+
+- if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
++ if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN5)
++ pcie_gen = 4;
++ else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN4)
+ pcie_gen = 3;
+ else if (adev->pm.pcie_gen_mask & CAIL_PCIE_LINK_SPEED_SUPPORT_GEN3)
+ pcie_gen = 2;
+@@ -1708,7 +1710,9 @@ static int smu_smc_hw_setup(struct smu_context *smu)
+ * Bit 15:8: PCIE GEN, 0 to 3 corresponds to GEN1 to GEN4
+ * Bit 7:0: PCIE lane width, 1 to 7 corresponds to x1 to x32
+ */
+- if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16)
++ if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X32)
++ pcie_width = 7;
++ else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X16)
+ pcie_width = 6;
+ else if (adev->pm.pcie_mlw_mask & CAIL_PCIE_LINK_WIDTH_SUPPORT_X12)
+ pcie_width = 5;
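Per the comment above (bits 15:8 carry the PCIe gen code, bits 7:0 the lane-width code, and the smu_v14 code later in this patch puts the link level in the upper half), the message argument packs as a simple shift-or. A worked example:

#include <stdint.h>
#include <stdio.h>

/* Bits 31:16 link level, 15:8 PCIe gen code (0 => GEN1), 7:0 width code. */
static uint32_t pack_pcie_arg(uint32_t level, uint32_t gen, uint32_t width)
{
	return (level << 16) | (gen << 8) | width;
}

int main(void)
{
	/* Gen5 x16: gen code 4, width code 6, per the tables in this patch. */
	uint32_t arg = pack_pcie_arg(0, 4, 6);

	printf("smu_pcie_arg = 0x%08x\n", arg);	/* 0x00000406 */
	return 0;
}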
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v14_0.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v14_0.h
+index 727d5b405435d0..3c1b4aa4a68d7e 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v14_0.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v14_0.h
+@@ -53,7 +53,7 @@
+ #define CTF_OFFSET_MEM 5
+
+ extern const int decoded_link_speed[5];
+-extern const int decoded_link_width[7];
++extern const int decoded_link_width[8];
+
+ #define DECODE_GEN_SPEED(gen_speed_idx) (decoded_link_speed[gen_speed_idx])
+ #define DECODE_LANE_WIDTH(lane_width_idx) (decoded_link_width[lane_width_idx])
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index c0f6b59369b7c4..d52512f5f1bd9d 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -1344,8 +1344,12 @@ static int arcturus_get_power_limit(struct smu_context *smu,
+ *default_power_limit = power_limit;
+ if (max_power_limit)
+ *max_power_limit = power_limit;
++ /**
++ * No lower bound is imposed on the limit. Any unreasonable limit set
++ * will result in frequent throttling.
++ */
+ if (min_power_limit)
+- *min_power_limit = power_limit;
++ *min_power_limit = 0;
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index b891a5e0a3969a..ceaf4572db2527 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2061,6 +2061,8 @@ static ssize_t smu_v13_0_7_get_gpu_metrics(struct smu_context *smu,
+ gpu_metrics->average_dclk1_frequency = metrics->AverageDclk1Frequency;
+
+ gpu_metrics->current_gfxclk = metrics->CurrClock[PPCLK_GFXCLK];
++ gpu_metrics->current_socclk = metrics->CurrClock[PPCLK_SOCCLK];
++ gpu_metrics->current_uclk = metrics->CurrClock[PPCLK_UCLK];
+ gpu_metrics->current_vclk0 = metrics->CurrClock[PPCLK_VCLK_0];
+ gpu_metrics->current_dclk0 = metrics->CurrClock[PPCLK_DCLK_0];
+ gpu_metrics->current_vclk1 = metrics->CurrClock[PPCLK_VCLK_1];
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
+index 865e916fc42544..452589adaf0468 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
+@@ -49,7 +49,7 @@
+ #define regMP1_SMN_IH_SW_INT_CTRL_mp1_14_0_0_BASE_IDX 0
+
+ const int decoded_link_speed[5] = {1, 2, 3, 4, 5};
+-const int decoded_link_width[7] = {0, 1, 2, 4, 8, 12, 16};
++const int decoded_link_width[8] = {0, 1, 2, 4, 8, 12, 16, 32};
+ /*
+ * DO NOT use these for err/warn/info/debug messages.
+ * Use dev_err, dev_warn, dev_info and dev_dbg instead.
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index 1e16a281f2dcde..82aef8626afa97 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -1186,13 +1186,15 @@ static int smu_v14_0_2_print_clk_levels(struct smu_context *smu,
+ (pcie_table->pcie_gen[i] == 0) ? "2.5GT/s," :
+ (pcie_table->pcie_gen[i] == 1) ? "5.0GT/s," :
+ (pcie_table->pcie_gen[i] == 2) ? "8.0GT/s," :
+- (pcie_table->pcie_gen[i] == 3) ? "16.0GT/s," : "",
++ (pcie_table->pcie_gen[i] == 3) ? "16.0GT/s," :
++ (pcie_table->pcie_gen[i] == 4) ? "32.0GT/s," : "",
+ (pcie_table->pcie_lane[i] == 1) ? "x1" :
+ (pcie_table->pcie_lane[i] == 2) ? "x2" :
+ (pcie_table->pcie_lane[i] == 3) ? "x4" :
+ (pcie_table->pcie_lane[i] == 4) ? "x8" :
+ (pcie_table->pcie_lane[i] == 5) ? "x12" :
+- (pcie_table->pcie_lane[i] == 6) ? "x16" : "",
++ (pcie_table->pcie_lane[i] == 6) ? "x16" :
++ (pcie_table->pcie_lane[i] == 7) ? "x32" : "",
+ pcie_table->clk_freq[i],
+ (gen_speed == DECODE_GEN_SPEED(pcie_table->pcie_gen[i])) &&
+ (lane_width == DECODE_LANE_WIDTH(pcie_table->pcie_lane[i])) ?
+@@ -1475,15 +1477,35 @@ static int smu_v14_0_2_update_pcie_parameters(struct smu_context *smu,
+ struct smu_14_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context;
+ struct smu_14_0_pcie_table *pcie_table =
+ &dpm_context->dpm_tables.pcie_table;
++ int num_of_levels = pcie_table->num_of_link_levels;
+ uint32_t smu_pcie_arg;
+ int ret, i;
+
+- for (i = 0; i < pcie_table->num_of_link_levels; i++) {
+- if (pcie_table->pcie_gen[i] > pcie_gen_cap)
++ if (!num_of_levels)
++ return 0;
++
++ if (!(smu->adev->pm.pp_feature & PP_PCIE_DPM_MASK)) {
++ if (pcie_table->pcie_gen[num_of_levels - 1] < pcie_gen_cap)
++ pcie_gen_cap = pcie_table->pcie_gen[num_of_levels - 1];
++
++ if (pcie_table->pcie_lane[num_of_levels - 1] < pcie_width_cap)
++ pcie_width_cap = pcie_table->pcie_lane[num_of_levels - 1];
++
++ /* Force all levels to use the same settings */
++ for (i = 0; i < num_of_levels; i++) {
+ pcie_table->pcie_gen[i] = pcie_gen_cap;
+- if (pcie_table->pcie_lane[i] > pcie_width_cap)
+ pcie_table->pcie_lane[i] = pcie_width_cap;
++ }
++ } else {
++ for (i = 0; i < num_of_levels; i++) {
++ if (pcie_table->pcie_gen[i] > pcie_gen_cap)
++ pcie_table->pcie_gen[i] = pcie_gen_cap;
++ if (pcie_table->pcie_lane[i] > pcie_width_cap)
++ pcie_table->pcie_lane[i] = pcie_width_cap;
++ }
++ }
+
++ for (i = 0; i < num_of_levels; i++) {
+ smu_pcie_arg = i << 16;
+ smu_pcie_arg |= pcie_table->pcie_gen[i] << 8;
+ smu_pcie_arg |= pcie_table->pcie_lane[i];
+@@ -2767,7 +2789,6 @@ static const struct pptable_funcs smu_v14_0_2_ppt_funcs = {
+ .get_unique_id = smu_v14_0_2_get_unique_id,
+ .get_power_limit = smu_v14_0_2_get_power_limit,
+ .set_power_limit = smu_v14_0_2_set_power_limit,
+- .set_power_source = smu_v14_0_set_power_source,
+ .get_power_profile_mode = smu_v14_0_2_get_power_profile_mode,
+ .set_power_profile_mode = smu_v14_0_2_set_power_profile_mode,
+ .run_btc = smu_v14_0_run_btc,
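The reworked smu_v14_0_2_update_pcie_parameters() above has two regimes: with PCIe DPM masked off, every level is forced to a single (capped) gen/width so the link never retrains; with DPM on, each level is merely clamped to the caps. A compact sketch of that decision, assuming plain int arrays in place of the dpm tables:

#include <stdio.h>

#define LEVELS 3

/* Clamp every DPM level to the platform caps; if PCIe DPM is disabled,
 * force all levels to one setting so the link never switches. */
static void clamp_pcie(int *gen, int *lane, int n, int gen_cap, int lane_cap,
		       int dpm_enabled)
{
	int i;

	if (!n)
		return;
	if (!dpm_enabled) {
		if (gen[n - 1] < gen_cap)
			gen_cap = gen[n - 1];
		if (lane[n - 1] < lane_cap)
			lane_cap = lane[n - 1];
		for (i = 0; i < n; i++) {
			gen[i] = gen_cap;
			lane[i] = lane_cap;
		}
		return;
	}
	for (i = 0; i < n; i++) {
		if (gen[i] > gen_cap)
			gen[i] = gen_cap;
		if (lane[i] > lane_cap)
			lane[i] = lane_cap;
	}
}

int main(void)
{
	int gen[LEVELS] = { 1, 3, 4 }, lane[LEVELS] = { 6, 6, 7 };
	int i;

	clamp_pcie(gen, lane, LEVELS, 3, 6, 0);	/* DPM off, caps gen3 x16 */
	for (i = 0; i < LEVELS; i++)
		printf("level %d: gen %d lane %d\n", i, gen[i], lane[i]);
	return 0;
}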
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index e3a9832c742cb1..65b57de20203f5 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -2614,9 +2614,9 @@ static int it6505_poweron(struct it6505 *it6505)
+ /* time interval between OVDD and SYSRSTN at least be 10ms */
+ if (pdata->gpiod_reset) {
+ usleep_range(10000, 20000);
+- gpiod_set_value_cansleep(pdata->gpiod_reset, 0);
+- usleep_range(1000, 2000);
+ gpiod_set_value_cansleep(pdata->gpiod_reset, 1);
++ usleep_range(1000, 2000);
++ gpiod_set_value_cansleep(pdata->gpiod_reset, 0);
+ usleep_range(25000, 35000);
+ }
+
+@@ -2647,7 +2647,7 @@ static int it6505_poweroff(struct it6505 *it6505)
+ disable_irq_nosync(it6505->irq);
+
+ if (pdata->gpiod_reset)
+- gpiod_set_value_cansleep(pdata->gpiod_reset, 0);
++ gpiod_set_value_cansleep(pdata->gpiod_reset, 1);
+
+ if (pdata->pwr18) {
+ err = regulator_disable(pdata->pwr18);
+@@ -3135,7 +3135,7 @@ static int it6505_init_pdata(struct it6505 *it6505)
+ return PTR_ERR(pdata->ovdd);
+ }
+
+- pdata->gpiod_reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
++ pdata->gpiod_reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
+ if (IS_ERR(pdata->gpiod_reset)) {
+ dev_err(dev, "gpiod_reset gpio not found");
+ return PTR_ERR(pdata->gpiod_reset);
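The polarity flip through this driver (0 and 1 swapped on every gpiod_set_value_cansleep() call, plus GPIOD_OUT_HIGH at request time) makes sense once you recall that gpiod values are logical, not physical: writing 1 asserts the line, and the GPIO core inverts it for a reset pin that firmware describes as active-low. A toy model of that mapping, assuming nothing about the real GPIO core internals:

#include <stdbool.h>
#include <stdio.h>

struct gpio_desc_toy {
	bool active_low;	/* polarity flag taken from firmware */
};

/* Logical 1 means "asserted"; the core applies the polarity inversion. */
static int physical_level(const struct gpio_desc_toy *d, int logical)
{
	return d->active_low ? !logical : logical;
}

int main(void)
{
	struct gpio_desc_toy reset = { .active_low = true };

	/* logical assert -> pin driven low: chip held in reset */
	printf("assert  -> pin %d\n", physical_level(&reset, 1));
	printf("release -> pin %d\n", physical_level(&reset, 0));
	return 0;
}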
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 43cdf39019a445..5186d2114a5037 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -3015,7 +3015,7 @@ int drm_atomic_helper_swap_state(struct drm_atomic_state *state,
+ bool stall)
+ {
+ int i, ret;
+- unsigned long flags;
++ unsigned long flags = 0;
+ struct drm_connector *connector;
+ struct drm_connector_state *old_conn_state, *new_conn_state;
+ struct drm_crtc *crtc;
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
+index 384df1659be60d..b13a17276d07cd 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_buffer.c
+@@ -482,7 +482,8 @@ void etnaviv_buffer_queue(struct etnaviv_gpu *gpu, u32 exec_state,
+ } else {
+ CMD_LOAD_STATE(buffer, VIVS_GL_FLUSH_CACHE,
+ VIVS_GL_FLUSH_CACHE_DEPTH |
+- VIVS_GL_FLUSH_CACHE_COLOR);
++ VIVS_GL_FLUSH_CACHE_COLOR |
++ VIVS_GL_FLUSH_CACHE_SHADER_L1);
+ if (has_blt) {
+ CMD_LOAD_STATE(buffer, VIVS_BLT_ENABLE, 0x1);
+ CMD_LOAD_STATE(buffer, VIVS_BLT_SET_COMMAND, 0x1);
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 3e807195a0d03a..2c1cb335d8623f 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -405,8 +405,10 @@ static bool mtk_drm_get_all_drm_priv(struct device *dev)
+ if (temp_drm_priv->mtk_drm_bound)
+ cnt++;
+
+- if (cnt == MAX_CRTC)
++ if (cnt == MAX_CRTC) {
++ of_node_put(node);
+ break;
++ }
+ }
+
+ if (drm_priv->data->mmsys_dev_num == cnt) {
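The mediatek fix above is the standard OF-iterator rule: for_each-style node loops take a reference on the node they hand out and drop it on the next iteration, so breaking out early leaks a reference unless the caller drops it with of_node_put(). A generic refcounted-iterator sketch of the same rule (toy types, not the real OF API):

#include <stdio.h>

struct refnode { int refs; const char *name; };

static void node_get(struct refnode *n) { n->refs++; }
static void node_put(struct refnode *n) { n->refs--; }

int main(void)
{
	struct refnode nodes[3] = { { 1, "a" }, { 1, "b" }, { 1, "c" } };
	struct refnode *n = NULL;
	int i;

	/* Iterator pattern: take a ref on the current node, drop the previous. */
	for (i = 0; i < 3; i++) {
		if (n)
			node_put(n);
		n = &nodes[i];
		node_get(n);
		if (i == 1) {
			node_put(n);	/* early break: release the held ref */
			break;
		}
	}
	for (i = 0; i < 3; i++)
		printf("%s refs=%d\n", nodes[i].name, nodes[i].refs);
	return 0;
}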
+diff --git a/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c b/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
+index 44897e5218a69f..45d09e6fa667fd 100644
+--- a/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
++++ b/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
+@@ -26,7 +26,6 @@ struct jadard_panel_desc {
+ unsigned int lanes;
+ enum mipi_dsi_pixel_format format;
+ int (*init)(struct jadard *jadard);
+- u32 num_init_cmds;
+ bool lp11_before_reset;
+ bool reset_before_power_off_vcioo;
+ unsigned int vcioo_to_lp11_delay_ms;
+diff --git a/drivers/gpu/drm/radeon/radeon_connectors.c b/drivers/gpu/drm/radeon/radeon_connectors.c
+index f9c73c55f04f76..f9996304d94313 100644
+--- a/drivers/gpu/drm/radeon/radeon_connectors.c
++++ b/drivers/gpu/drm/radeon/radeon_connectors.c
+@@ -1255,16 +1255,6 @@ radeon_dvi_detect(struct drm_connector *connector, bool force)
+ goto exit;
+ }
+ }
+-
+- if (dret && radeon_connector->hpd.hpd != RADEON_HPD_NONE &&
+- !radeon_hpd_sense(rdev, radeon_connector->hpd.hpd) &&
+- connector->connector_type == DRM_MODE_CONNECTOR_HDMIA) {
+- DRM_DEBUG_KMS("EDID is readable when HPD disconnected\n");
+- schedule_delayed_work(&rdev->hotplug_work, msecs_to_jiffies(1000));
+- ret = connector_status_disconnected;
+- goto exit;
+- }
+-
+ if (dret) {
+ radeon_connector->detected_by_load = false;
+ radeon_connector_free_edid(connector);
+diff --git a/drivers/gpu/drm/sti/sti_cursor.c b/drivers/gpu/drm/sti/sti_cursor.c
+index db0a1eb535328f..c59fcb4dca3249 100644
+--- a/drivers/gpu/drm/sti/sti_cursor.c
++++ b/drivers/gpu/drm/sti/sti_cursor.c
+@@ -200,6 +200,9 @@ static int sti_cursor_atomic_check(struct drm_plane *drm_plane,
+ return 0;
+
+ crtc_state = drm_atomic_get_crtc_state(state, crtc);
++ if (IS_ERR(crtc_state))
++ return PTR_ERR(crtc_state);
++
+ mode = &crtc_state->mode;
+ dst_x = new_plane_state->crtc_x;
+ dst_y = new_plane_state->crtc_y;
+diff --git a/drivers/gpu/drm/sti/sti_gdp.c b/drivers/gpu/drm/sti/sti_gdp.c
+index 43c72c2604a0cd..f046f5f7ad259d 100644
+--- a/drivers/gpu/drm/sti/sti_gdp.c
++++ b/drivers/gpu/drm/sti/sti_gdp.c
+@@ -638,6 +638,9 @@ static int sti_gdp_atomic_check(struct drm_plane *drm_plane,
+
+ mixer = to_sti_mixer(crtc);
+ crtc_state = drm_atomic_get_crtc_state(state, crtc);
++ if (IS_ERR(crtc_state))
++ return PTR_ERR(crtc_state);
++
+ mode = &crtc_state->mode;
+ dst_x = new_plane_state->crtc_x;
+ dst_y = new_plane_state->crtc_y;
+diff --git a/drivers/gpu/drm/sti/sti_hqvdp.c b/drivers/gpu/drm/sti/sti_hqvdp.c
+index acbf70b95aeb97..5793cf2cb8972c 100644
+--- a/drivers/gpu/drm/sti/sti_hqvdp.c
++++ b/drivers/gpu/drm/sti/sti_hqvdp.c
+@@ -1037,6 +1037,9 @@ static int sti_hqvdp_atomic_check(struct drm_plane *drm_plane,
+ return 0;
+
+ crtc_state = drm_atomic_get_crtc_state(state, crtc);
++ if (IS_ERR(crtc_state))
++ return PTR_ERR(crtc_state);
++
+ mode = &crtc_state->mode;
+ dst_x = new_plane_state->crtc_x;
+ dst_y = new_plane_state->crtc_y;
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 4f5d00aea7168b..2927745d689549 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -1846,16 +1846,29 @@ static void handle_sched_done(struct xe_guc *guc, struct xe_exec_queue *q,
+ xe_gt_assert(guc_to_gt(guc), runnable_state == 0);
+ xe_gt_assert(guc_to_gt(guc), exec_queue_pending_disable(q));
+
+- clear_exec_queue_pending_disable(q);
+ if (q->guc->suspend_pending) {
+ suspend_fence_signal(q);
++ clear_exec_queue_pending_disable(q);
+ } else {
+ if (exec_queue_banned(q) || check_timeout) {
+ smp_wmb();
+ wake_up_all(&guc->ct.wq);
+ }
+- if (!check_timeout)
++ if (!check_timeout && exec_queue_destroyed(q)) {
++ /*
++ * Make sure to clear the pending_disable only
++ * after sampling the destroyed state. We want
++ * to ensure we don't trigger the unregister too
++ * early with something intending to only
++ * disable scheduling. The caller doing the
++ * destroy must wait for an ongoing
++ * pending_disable before marking as destroyed.
++ */
++ clear_exec_queue_pending_disable(q);
+ deregister_exec_queue(guc, q);
++ } else {
++ clear_exec_queue_pending_disable(q);
++ }
+ }
+ }
+ }
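The xe_guc_submit hunk is entirely about ordering: pending_disable must remain set until after the destroyed state has been sampled, so a destroyer that waits on pending_disable cannot slip in between and cause the deregistration to be missed. A single-threaded sketch of the resulting decision tree (the real code additionally relies on locks and barriers for the concurrent case):

#include <stdbool.h>
#include <stdio.h>

struct queue { bool pending_disable, destroyed, suspend_pending; };

static void handle_sched_done(struct queue *q, bool check_timeout)
{
	if (q->suspend_pending) {
		/* signal the suspend fence, then drop the flag */
		q->pending_disable = false;
	} else if (!check_timeout && q->destroyed) {
		/*
		 * Sample destroyed *before* dropping pending_disable so a
		 * destroyer waiting on pending_disable can't slip in between.
		 */
		q->pending_disable = false;
		printf("deregister queue\n");
	} else {
		q->pending_disable = false;
	}
}

int main(void)
{
	struct queue q = { .pending_disable = true, .destroyed = true };

	handle_sched_done(&q, false);
	printf("pending_disable=%d\n", q.pending_disable);
	return 0;
}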
+diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
+index cfd31ae49cc1f7..1b97d90aaddaf4 100644
+--- a/drivers/gpu/drm/xe/xe_migrate.c
++++ b/drivers/gpu/drm/xe/xe_migrate.c
+@@ -209,7 +209,8 @@ static int xe_migrate_prepare_vm(struct xe_tile *tile, struct xe_migrate *m,
+ num_entries * XE_PAGE_SIZE,
+ ttm_bo_type_kernel,
+ XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+- XE_BO_FLAG_PINNED);
++ XE_BO_FLAG_PINNED |
++ XE_BO_FLAG_PAGETABLE);
+ if (IS_ERR(bo))
+ return PTR_ERR(bo);
+
+@@ -1350,6 +1351,7 @@ __xe_migrate_update_pgtables(struct xe_migrate *m,
+
+ /* For sysmem PTE's, need to map them in our hole.. */
+ if (!IS_DGFX(xe)) {
++ u16 pat_index = xe->pat.idx[XE_CACHE_WB];
+ u32 ptes, ofs;
+
+ ppgtt_ofs = NUM_KERNEL_PDE - 1;
+@@ -1409,7 +1411,7 @@ __xe_migrate_update_pgtables(struct xe_migrate *m,
+ pt_bo->update_index = current_update;
+
+ addr = vm->pt_ops->pte_encode_bo(pt_bo, 0,
+- XE_CACHE_WB, 0);
++ pat_index, 0);
+ bb->cs[bb->len++] = lower_32_bits(addr);
+ bb->cs[bb->len++] = upper_32_bits(addr);
+ }
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_kms.c b/drivers/gpu/drm/xlnx/zynqmp_kms.c
+index 4556af2faa0f19..1565a7dd4f04d0 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_kms.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_kms.c
+@@ -509,12 +509,12 @@ int zynqmp_dpsub_drm_init(struct zynqmp_dpsub *dpsub)
+ if (ret)
+ return ret;
+
+- drm_kms_helper_poll_init(drm);
+-
+ ret = zynqmp_dpsub_kms_init(dpsub);
+ if (ret < 0)
+ goto err_poll_fini;
+
++ drm_kms_helper_poll_init(drm);
++
+ /* Reset all components and register the DRM device. */
+ drm_mode_config_reset(drm);
+
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index ffe99f0c6acef5..da83c49223b33e 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -1417,7 +1417,7 @@ static void i3c_master_put_i3c_addrs(struct i3c_dev_desc *dev)
+ I3C_ADDR_SLOT_FREE);
+
+ if (dev->boardinfo && dev->boardinfo->init_dyn_addr)
+- i3c_bus_set_addr_slot_status(&master->bus, dev->info.dyn_addr,
++ i3c_bus_set_addr_slot_status(&master->bus, dev->boardinfo->init_dyn_addr,
+ I3C_ADDR_SLOT_FREE);
+ }
+
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index a7bfc678153e6c..565af3759813bd 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -130,8 +130,8 @@
+ #define SVC_I3C_PPBAUD_MAX 15
+ #define SVC_I3C_QUICK_I2C_CLK 4170000
+
+-#define SVC_I3C_EVENT_IBI BIT(0)
+-#define SVC_I3C_EVENT_HOTJOIN BIT(1)
++#define SVC_I3C_EVENT_IBI GENMASK(7, 0)
++#define SVC_I3C_EVENT_HOTJOIN BIT(31)
+
+ struct svc_i3c_cmd {
+ u8 addr;
+@@ -214,7 +214,7 @@ struct svc_i3c_master {
+ spinlock_t lock;
+ } ibi;
+ struct mutex lock;
+- int enabled_events;
++ u32 enabled_events;
+ u32 mctrl_config;
+ };
+
+@@ -1056,12 +1056,27 @@ static int svc_i3c_master_do_daa(struct i3c_master_controller *m)
+ if (ret)
+ goto rpm_out;
+
+- /* Register all devices who participated to the core */
+- for (i = 0; i < dev_nb; i++) {
+- ret = i3c_master_add_i3c_dev_locked(m, addrs[i]);
+- if (ret)
+- goto rpm_out;
+- }
++ /*
++	 * Register with the I3C core all devices that took part in DAA
++ *
++ * If two devices (A and B) are detected in DAA and address 0xa is assigned to
++ * device A and 0xb to device B, a failure in i3c_master_add_i3c_dev_locked()
++ * for device A (addr: 0xa) could prevent device B (addr: 0xb) from being
++ * registered on the bus. The I3C stack might still consider 0xb a free
++ * address. If a subsequent Hotjoin occurs, 0xb might be assigned to Device A,
++ * causing both devices A and B to use the same address 0xb, violating the I3C
++ * specification.
++ *
++ * The return value for i3c_master_add_i3c_dev_locked() should not be checked
++ * because subsequent steps will scan the entire I3C bus, independent of
++ * whether i3c_master_add_i3c_dev_locked() returns success.
++ *
++ * If device A registration fails, there is still a chance to register device
++ * B. i3c_master_add_i3c_dev_locked() can reset DAA if a failure occurs while
++ * retrieving device information.
++ */
++ for (i = 0; i < dev_nb; i++)
++ i3c_master_add_i3c_dev_locked(m, addrs[i]);
+
+ /* Configure IBI auto-rules */
+ ret = svc_i3c_update_ibirules(master);
+@@ -1624,7 +1639,7 @@ static int svc_i3c_master_enable_ibi(struct i3c_dev_desc *dev)
+ return ret;
+ }
+
+- master->enabled_events |= SVC_I3C_EVENT_IBI;
++ master->enabled_events++;
+ svc_i3c_master_enable_interrupts(master, SVC_I3C_MINT_SLVSTART);
+
+ return i3c_master_enec_locked(m, dev->info.dyn_addr, I3C_CCC_EVENT_SIR);
+@@ -1636,7 +1651,7 @@ static int svc_i3c_master_disable_ibi(struct i3c_dev_desc *dev)
+ struct svc_i3c_master *master = to_svc_i3c_master(m);
+ int ret;
+
+- master->enabled_events &= ~SVC_I3C_EVENT_IBI;
++ master->enabled_events--;
+ if (!master->enabled_events)
+ svc_i3c_master_disable_interrupts(master);
+
+@@ -1827,8 +1842,8 @@ static int svc_i3c_master_probe(struct platform_device *pdev)
+ rpm_disable:
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
+- pm_runtime_set_suspended(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
++ pm_runtime_set_suspended(&pdev->dev);
+
+ err_disable_clks:
+ svc_i3c_master_unprepare_clks(master);
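The enabled_events change in this driver turns the per-feature IBI bit into a counter (hotjoin keeps a dedicated high bit outside the counter range), so the interrupt is masked only when the last IBI user goes away rather than on the first disable. A sketch of the counting discipline:

#include <stdbool.h>
#include <stdio.h>

static unsigned int enabled_events;
static bool irqs_on;

static void enable_ibi(void)
{
	enabled_events++;
	irqs_on = true;
}

static void disable_ibi(void)
{
	enabled_events--;
	if (!enabled_events)
		irqs_on = false;	/* only the last disable masks the IRQ */
}

int main(void)
{
	enable_ibi();
	enable_ibi();
	disable_ibi();
	printf("after one disable: irqs_on=%d\n", irqs_on);	/* still on */
	disable_ibi();
	printf("after last disable: irqs_on=%d\n", irqs_on);	/* off */
	return 0;
}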
+diff --git a/drivers/iio/accel/kionix-kx022a.c b/drivers/iio/accel/kionix-kx022a.c
+index 53d59a04ae15e9..b6a828a6df934f 100644
+--- a/drivers/iio/accel/kionix-kx022a.c
++++ b/drivers/iio/accel/kionix-kx022a.c
+@@ -594,7 +594,7 @@ static int kx022a_get_axis(struct kx022a_data *data,
+ if (ret)
+ return ret;
+
+- *val = le16_to_cpu(data->buffer[0]);
++ *val = (s16)le16_to_cpu(data->buffer[0]);
+
+ return IIO_VAL_INT;
+ }
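The kx022a one-liner is a sign-extension fix: le16_to_cpu() yields an unsigned 16-bit value, so without the (s16) cast a negative acceleration sample such as 0xff38 is reported as 65336 rather than -200. A minimal demonstration:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t raw = 0xff38;		/* sample already in cpu byte order */
	int no_cast   = raw;		/* 65336: wrong */
	int with_cast = (int16_t)raw;	/* -200: sign-extended */

	printf("no cast: %d, with cast: %d\n", no_cast, with_cast);
	return 0;
}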
+diff --git a/drivers/iio/adc/ad7780.c b/drivers/iio/adc/ad7780.c
+index e9b0c577c9cca4..8ccb74f470309f 100644
+--- a/drivers/iio/adc/ad7780.c
++++ b/drivers/iio/adc/ad7780.c
+@@ -152,7 +152,7 @@ static int ad7780_write_raw(struct iio_dev *indio_dev,
+
+ switch (m) {
+ case IIO_CHAN_INFO_SCALE:
+- if (val != 0)
++ if (val != 0 || val2 == 0)
+ return -EINVAL;
+
+ vref = st->int_vref_mv * 1000000LL;
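The ad7780 guard tightens the scale write: the value arrives as an integer/fractional pair and the driver goes on to divide the reference voltage by the fractional part, so val2 == 0 must now be rejected alongside a nonzero integer part. A sketch of the guard, assuming a simplified fixed-point layout:

#include <stdio.h>

/* Accept only 0.val2 scales with a nonzero fractional part. */
static int set_scale(long long vref_nv, int val, int val2, long long *out)
{
	if (val != 0 || val2 == 0)
		return -1;		/* -EINVAL in the driver */
	*out = vref_nv / val2;		/* safe: val2 != 0 */
	return 0;
}

int main(void)
{
	long long gain;

	printf("val2=0 -> %d\n", set_scale(2500000000LL, 0, 0, &gain));
	if (!set_scale(2500000000LL, 0, 76294, &gain))
		printf("gain=%lld\n", gain);
	return 0;
}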
+diff --git a/drivers/iio/adc/ad7923.c b/drivers/iio/adc/ad7923.c
+index 09680015a7ab54..acc44cb34f8245 100644
+--- a/drivers/iio/adc/ad7923.c
++++ b/drivers/iio/adc/ad7923.c
+@@ -48,7 +48,7 @@
+
+ struct ad7923_state {
+ struct spi_device *spi;
+- struct spi_transfer ring_xfer[5];
++ struct spi_transfer ring_xfer[9];
+ struct spi_transfer scan_single_xfer[2];
+ struct spi_message ring_msg;
+ struct spi_message scan_single_msg;
+@@ -64,7 +64,7 @@ struct ad7923_state {
+ * Length = 8 channels + 4 extra for 8 byte timestamp
+ */
+ __be16 rx_buf[12] __aligned(IIO_DMA_MINALIGN);
+- __be16 tx_buf[4];
++ __be16 tx_buf[8];
+ };
+
+ struct ad7923_chip_info {
+diff --git a/drivers/iio/common/inv_sensors/inv_sensors_timestamp.c b/drivers/iio/common/inv_sensors/inv_sensors_timestamp.c
+index f44458c380d928..37d0bdaa8d824f 100644
+--- a/drivers/iio/common/inv_sensors/inv_sensors_timestamp.c
++++ b/drivers/iio/common/inv_sensors/inv_sensors_timestamp.c
+@@ -70,6 +70,10 @@ int inv_sensors_timestamp_update_odr(struct inv_sensors_timestamp *ts,
+ if (mult != ts->mult)
+ ts->new_mult = mult;
+
++ /* When FIFO is off, directly apply the new ODR */
++ if (!fifo)
++ inv_sensors_timestamp_apply_odr(ts, 0, 0, 0);
++
+ return 0;
+ }
+ EXPORT_SYMBOL_NS_GPL(inv_sensors_timestamp_update_odr, IIO_INV_SENSORS_TIMESTAMP);
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c
+index 56ac198142500a..7968aa27f9fd79 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_accel.c
+@@ -200,7 +200,6 @@ static int inv_icm42600_accel_update_scan_mode(struct iio_dev *indio_dev,
+ {
+ struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev);
+ struct inv_icm42600_sensor_state *accel_st = iio_priv(indio_dev);
+- struct inv_sensors_timestamp *ts = &accel_st->ts;
+ struct inv_icm42600_sensor_conf conf = INV_ICM42600_SENSOR_CONF_INIT;
+ unsigned int fifo_en = 0;
+ unsigned int sleep_temp = 0;
+@@ -229,7 +228,6 @@ static int inv_icm42600_accel_update_scan_mode(struct iio_dev *indio_dev,
+ }
+
+ /* update data FIFO write */
+- inv_sensors_timestamp_apply_odr(ts, 0, 0, 0);
+ ret = inv_icm42600_buffer_set_fifo_en(st, fifo_en | st->fifo.en);
+
+ out_unlock:
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c
+index 938af5b640b00f..c6bb68bf5e1449 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_gyro.c
+@@ -99,8 +99,6 @@ static int inv_icm42600_gyro_update_scan_mode(struct iio_dev *indio_dev,
+ const unsigned long *scan_mask)
+ {
+ struct inv_icm42600_state *st = iio_device_get_drvdata(indio_dev);
+- struct inv_icm42600_sensor_state *gyro_st = iio_priv(indio_dev);
+- struct inv_sensors_timestamp *ts = &gyro_st->ts;
+ struct inv_icm42600_sensor_conf conf = INV_ICM42600_SENSOR_CONF_INIT;
+ unsigned int fifo_en = 0;
+ unsigned int sleep_gyro = 0;
+@@ -128,7 +126,6 @@ static int inv_icm42600_gyro_update_scan_mode(struct iio_dev *indio_dev,
+ }
+
+ /* update data FIFO write */
+- inv_sensors_timestamp_apply_odr(ts, 0, 0, 0);
+ ret = inv_icm42600_buffer_set_fifo_en(st, fifo_en | st->fifo.en);
+
+ out_unlock:
+diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
+index 3bfeabab0ec4f6..5b1088cc3704f1 100644
+--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
++++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_trigger.c
+@@ -112,7 +112,6 @@ int inv_mpu6050_prepare_fifo(struct inv_mpu6050_state *st, bool enable)
+ if (enable) {
+ /* reset timestamping */
+ inv_sensors_timestamp_reset(&st->timestamp);
+- inv_sensors_timestamp_apply_odr(&st->timestamp, 0, 0, 0);
+ /* reset FIFO */
+ d = st->chip_config.user_ctrl | INV_MPU6050_BIT_FIFO_RST;
+ ret = regmap_write(st->map, st->reg->user_ctrl, d);
+diff --git a/drivers/iio/industrialio-gts-helper.c b/drivers/iio/industrialio-gts-helper.c
+index 4ad949672210ba..291c0fc332c978 100644
+--- a/drivers/iio/industrialio-gts-helper.c
++++ b/drivers/iio/industrialio-gts-helper.c
+@@ -205,7 +205,7 @@ static int gain_to_scaletables(struct iio_gts *gts, int **gains, int **scales)
+ memcpy(all_gains, gains[time_idx], gain_bytes);
+ new_idx = gts->num_hwgain;
+
+- while (time_idx--) {
++ while (time_idx-- > 0) {
+ for (j = 0; j < gts->num_hwgain; j++) {
+ int candidate = gains[time_idx][j];
+ int chk;
+diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
+index 151099be2863c6..3305ebbdbc0787 100644
+--- a/drivers/iio/inkern.c
++++ b/drivers/iio/inkern.c
+@@ -269,7 +269,7 @@ struct iio_channel *fwnode_iio_channel_get_by_name(struct fwnode_handle *fwnode,
+ return ERR_PTR(-ENODEV);
+ }
+
+- chan = __fwnode_iio_channel_get_by_name(fwnode, name);
++ chan = __fwnode_iio_channel_get_by_name(parent, name);
+ if (!IS_ERR(chan) || PTR_ERR(chan) != -ENODEV) {
+ fwnode_handle_put(parent);
+ return chan;
+diff --git a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+index 6b479592140c47..c8ec74f089f3d6 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
++++ b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+@@ -801,7 +801,9 @@ static int tegra241_cmdqv_init_structures(struct arm_smmu_device *smmu)
+ return 0;
+ }
+
++#ifdef CONFIG_IOMMU_DEBUGFS
+ static struct dentry *cmdqv_debugfs_dir;
++#endif
+
+ static struct arm_smmu_device *
+ __tegra241_cmdqv_probe(struct arm_smmu_device *smmu, struct resource *res,
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu.c b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+index 8321962b37148b..14618772a3d6e4 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu.c
+@@ -1437,6 +1437,17 @@ static struct iommu_device *arm_smmu_probe_device(struct device *dev)
+ goto out_free;
+ } else {
+ smmu = arm_smmu_get_by_fwnode(fwspec->iommu_fwnode);
++
++ /*
++ * Defer probe if the relevant SMMU instance hasn't finished
++ * probing yet. This is a fragile hack and we'd ideally
++ * avoid this race in the core code. Until that's ironed
++ * out, however, this is the most pragmatic option on the
++ * table.
++ */
++ if (!smmu)
++ return ERR_PTR(dev_err_probe(dev, -EPROBE_DEFER,
++ "smmu dev has not bound yet\n"));
+ }
+
+ ret = -EINVAL;
+diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
+index 0e67f1721a3d98..a286c5404ea701 100644
+--- a/drivers/iommu/io-pgtable-arm.c
++++ b/drivers/iommu/io-pgtable-arm.c
+@@ -199,6 +199,18 @@ static phys_addr_t iopte_to_paddr(arm_lpae_iopte pte,
+ return (paddr | (paddr << (48 - 12))) & (ARM_LPAE_PTE_ADDR_MASK << 4);
+ }
+
++/*
++ * Convert an index returned by ARM_LPAE_PGD_IDX(), which can point into
++ * a concatenated PGD, into the maximum number of entries that can be
++ * mapped in the same table page.
++ */
++static inline int arm_lpae_max_entries(int i, struct arm_lpae_io_pgtable *data)
++{
++ int ptes_per_table = ARM_LPAE_PTES_PER_TABLE(data);
++
++ return ptes_per_table - (i & (ptes_per_table - 1));
++}
++
+ static bool selftest_running = false;
+
+ static dma_addr_t __arm_lpae_dma_addr(void *pages)
+@@ -390,7 +402,7 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
+
+ /* If we can install a leaf entry at this level, then do so */
+ if (size == block_size) {
+- max_entries = ARM_LPAE_PTES_PER_TABLE(data) - map_idx_start;
++ max_entries = arm_lpae_max_entries(map_idx_start, data);
+ num_entries = min_t(int, pgcount, max_entries);
+ ret = arm_lpae_init_pte(data, iova, paddr, prot, lvl, num_entries, ptep);
+ if (!ret)
+@@ -592,7 +604,7 @@ static size_t arm_lpae_split_blk_unmap(struct arm_lpae_io_pgtable *data,
+
+ if (size == split_sz) {
+ unmap_idx_start = ARM_LPAE_LVL_IDX(iova, lvl, data);
+- max_entries = ptes_per_table - unmap_idx_start;
++ max_entries = arm_lpae_max_entries(unmap_idx_start, data);
+ num_entries = min_t(int, pgcount, max_entries);
+ }
+
+@@ -650,7 +662,7 @@ static size_t __arm_lpae_unmap(struct arm_lpae_io_pgtable *data,
+
+ /* If the size matches this level, we're in the right place */
+ if (size == ARM_LPAE_BLOCK_SIZE(lvl, data)) {
+- max_entries = ARM_LPAE_PTES_PER_TABLE(data) - unmap_idx_start;
++ max_entries = arm_lpae_max_entries(unmap_idx_start, data);
+ num_entries = min_t(int, pgcount, max_entries);
+
+ /* Find and handle non-leaf entries */
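arm_lpae_max_entries() exists because with concatenated PGDs the index returned by ARM_LPAE_PGD_IDX() can run past a single table page, so the remaining-entry count has to be taken modulo the per-table size rather than subtracted from it. Worked numbers, assuming 512 PTEs per table:

#include <stdio.h>

static int max_entries(int i, int ptes_per_table)
{
	return ptes_per_table - (i & (ptes_per_table - 1));
}

int main(void)
{
	/* index 700 lands at slot 188 of the second table page:
	 * 512 - (700 & 511) = 512 - 188 = 324 entries remain */
	printf("%d\n", max_entries(700, 512));
	/* the old subtraction would have computed 512 - 700 = -188 */
	return 0;
}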
+diff --git a/drivers/leds/flash/leds-mt6360.c b/drivers/leds/flash/leds-mt6360.c
+index 4c74f1cf01f00d..676236c19ec415 100644
+--- a/drivers/leds/flash/leds-mt6360.c
++++ b/drivers/leds/flash/leds-mt6360.c
+@@ -784,7 +784,6 @@ static void mt6360_v4l2_flash_release(struct mt6360_priv *priv)
+ static int mt6360_led_probe(struct platform_device *pdev)
+ {
+ struct mt6360_priv *priv;
+- struct fwnode_handle *child;
+ size_t count;
+ int i = 0, ret;
+
+@@ -811,7 +810,7 @@ static int mt6360_led_probe(struct platform_device *pdev)
+ return -ENODEV;
+ }
+
+- device_for_each_child_node(&pdev->dev, child) {
++ device_for_each_child_node_scoped(&pdev->dev, child) {
+ struct mt6360_led *led = priv->leds + i;
+ struct led_init_data init_data = { .fwnode = child, };
+ u32 reg, led_color;
+diff --git a/drivers/leds/leds-lp55xx-common.c b/drivers/leds/leds-lp55xx-common.c
+index 5a2e259679cfdf..e71456a56ab8da 100644
+--- a/drivers/leds/leds-lp55xx-common.c
++++ b/drivers/leds/leds-lp55xx-common.c
+@@ -1132,9 +1132,6 @@ static int lp55xx_parse_common_child(struct device_node *np,
+ if (ret)
+ return ret;
+
+- if (*chan_nr < 0 || *chan_nr > cfg->max_channel)
+- return -EINVAL;
+-
+ return 0;
+ }
+
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index 89632ce9776056..c9f47d0cccf9bb 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -2484,6 +2484,7 @@ static void pool_work_wait(struct pool_work *pw, struct pool *pool,
+ init_completion(&pw->complete);
+ queue_work(pool->wq, &pw->worker);
+ wait_for_completion(&pw->complete);
++ destroy_work_on_stack(&pw->worker);
+ }
+
+ /*----------------------------------------------------------------*/
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 29da10e6f703e2..c3a42dd66ce551 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -1285,6 +1285,7 @@ static void bitmap_unplug_async(struct bitmap *bitmap)
+
+ queue_work(md_bitmap_wq, &unplug_work.work);
+ wait_for_completion(&done);
++ destroy_work_on_stack(&unplug_work.work);
+ }
+
+ static void bitmap_unplug(struct mddev *mddev, bool sync)
+diff --git a/drivers/md/persistent-data/dm-space-map-common.c b/drivers/md/persistent-data/dm-space-map-common.c
+index 3a19124ee27932..22a551c407da49 100644
+--- a/drivers/md/persistent-data/dm-space-map-common.c
++++ b/drivers/md/persistent-data/dm-space-map-common.c
+@@ -51,7 +51,7 @@ static int index_check(const struct dm_block_validator *v,
+ block_size - sizeof(__le32),
+ INDEX_CSUM_XOR));
+ if (csum_disk != mi_le->csum) {
+- DMERR_LIMIT("i%s failed: csum %u != wanted %u", __func__,
++ DMERR_LIMIT("%s failed: csum %u != wanted %u", __func__,
+ le32_to_cpu(csum_disk), le32_to_cpu(mi_le->csum));
+ return -EILSEQ;
+ }
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index dc2ea636d17342..2fa1f270fb1d3c 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -7177,6 +7177,8 @@ raid5_store_group_thread_cnt(struct mddev *mddev, const char *page, size_t len)
+ err = mddev_suspend_and_lock(mddev);
+ if (err)
+ return err;
++ raid5_quiesce(mddev, true);
++
+ conf = mddev->private;
+ if (!conf)
+ err = -ENODEV;
+@@ -7198,6 +7200,8 @@ raid5_store_group_thread_cnt(struct mddev *mddev, const char *page, size_t len)
+ kfree(old_groups);
+ }
+ }
++
++ raid5_quiesce(mddev, false);
+ mddev_unlock_and_resume(mddev);
+
+ return err ?: len;
+diff --git a/drivers/media/dvb-frontends/ts2020.c b/drivers/media/dvb-frontends/ts2020.c
+index a5baca2449c76d..e25add6cc38e94 100644
+--- a/drivers/media/dvb-frontends/ts2020.c
++++ b/drivers/media/dvb-frontends/ts2020.c
+@@ -553,13 +553,19 @@ static void ts2020_regmap_unlock(void *__dev)
+ static int ts2020_probe(struct i2c_client *client)
+ {
+ struct ts2020_config *pdata = client->dev.platform_data;
+- struct dvb_frontend *fe = pdata->fe;
++ struct dvb_frontend *fe;
+ struct ts2020_priv *dev;
+ int ret;
+ u8 u8tmp;
+ unsigned int utmp;
+ char *chip_str;
+
++ if (!pdata) {
++ dev_err(&client->dev, "platform data is mandatory\n");
++ return -EINVAL;
++ }
++
++ fe = pdata->fe;
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev) {
+ ret = -ENOMEM;
+diff --git a/drivers/media/i2c/dw9768.c b/drivers/media/i2c/dw9768.c
+index 18ef2b35c9aa3d..87a7c3ceeb119e 100644
+--- a/drivers/media/i2c/dw9768.c
++++ b/drivers/media/i2c/dw9768.c
+@@ -471,10 +471,9 @@ static int dw9768_probe(struct i2c_client *client)
+ * to be powered on in an ACPI system. Similarly for power off in
+ * remove.
+ */
+- pm_runtime_enable(dev);
+ full_power = (is_acpi_node(dev_fwnode(dev)) &&
+ acpi_dev_state_d0(dev)) ||
+- (is_of_node(dev_fwnode(dev)) && !pm_runtime_enabled(dev));
++ (is_of_node(dev_fwnode(dev)) && !IS_ENABLED(CONFIG_PM));
+ if (full_power) {
+ ret = dw9768_runtime_resume(dev);
+ if (ret < 0) {
+@@ -484,6 +483,7 @@ static int dw9768_probe(struct i2c_client *client)
+ pm_runtime_set_active(dev);
+ }
+
++ pm_runtime_enable(dev);
+ ret = v4l2_async_register_subdev(&dw9768->sd);
+ if (ret < 0) {
+ dev_err(dev, "failed to register V4L2 subdev: %d", ret);
+@@ -495,12 +495,12 @@ static int dw9768_probe(struct i2c_client *client)
+ return 0;
+
+ err_power_off:
++ pm_runtime_disable(dev);
+ if (full_power) {
+ dw9768_runtime_suspend(dev);
+ pm_runtime_set_suspended(dev);
+ }
+ err_clean_entity:
+- pm_runtime_disable(dev);
+ media_entity_cleanup(&dw9768->sd.entity);
+ err_free_handler:
+ v4l2_ctrl_handler_free(&dw9768->ctrls);
+@@ -517,12 +517,12 @@ static void dw9768_remove(struct i2c_client *client)
+ v4l2_async_unregister_subdev(&dw9768->sd);
+ v4l2_ctrl_handler_free(&dw9768->ctrls);
+ media_entity_cleanup(&dw9768->sd.entity);
++ pm_runtime_disable(dev);
+ if ((is_acpi_node(dev_fwnode(dev)) && acpi_dev_state_d0(dev)) ||
+- (is_of_node(dev_fwnode(dev)) && !pm_runtime_enabled(dev))) {
++ (is_of_node(dev_fwnode(dev)) && !IS_ENABLED(CONFIG_PM))) {
+ dw9768_runtime_suspend(dev);
+ pm_runtime_set_suspended(dev);
+ }
+- pm_runtime_disable(dev);
+ }
+
+ static const struct of_device_id dw9768_of_table[] = {
+diff --git a/drivers/media/i2c/ov08x40.c b/drivers/media/i2c/ov08x40.c
+index 7ead3c720e0e11..67b86dabc67eb1 100644
+--- a/drivers/media/i2c/ov08x40.c
++++ b/drivers/media/i2c/ov08x40.c
+@@ -1339,15 +1339,13 @@ static int ov08x40_read_reg(struct ov08x40 *ov08x,
+ return 0;
+ }
+
+-static int ov08x40_burst_fill_regs(struct ov08x40 *ov08x, u16 first_reg,
+- u16 last_reg, u8 val)
++static int __ov08x40_burst_fill_regs(struct i2c_client *client, u16 first_reg,
++ u16 last_reg, size_t num_regs, u8 val)
+ {
+- struct i2c_client *client = v4l2_get_subdevdata(&ov08x->sd);
+ struct i2c_msg msgs;
+- size_t i, num_regs;
++ size_t i;
+ int ret;
+
+- num_regs = last_reg - first_reg + 1;
+ msgs.addr = client->addr;
+ msgs.flags = 0;
+ msgs.len = 2 + num_regs;
+@@ -1373,6 +1371,31 @@ static int ov08x40_burst_fill_regs(struct ov08x40 *ov08x, u16 first_reg,
+ return 0;
+ }
+
++static int ov08x40_burst_fill_regs(struct ov08x40 *ov08x, u16 first_reg,
++ u16 last_reg, u8 val)
++{
++ struct i2c_client *client = v4l2_get_subdevdata(&ov08x->sd);
++ size_t num_regs, num_write_regs;
++ int ret;
++
++ num_regs = last_reg - first_reg + 1;
++ num_write_regs = num_regs;
++
++ if (client->adapter->quirks && client->adapter->quirks->max_write_len)
++ num_write_regs = client->adapter->quirks->max_write_len - 2;
++
++ while (first_reg < last_reg) {
++ ret = __ov08x40_burst_fill_regs(client, first_reg, last_reg,
++ num_write_regs, val);
++ if (ret)
++ return ret;
++
++ first_reg += num_write_regs;
++ }
++
++ return 0;
++}
++
+ /* Write registers up to 4 at a time */
+ static int ov08x40_write_reg(struct ov08x40 *ov08x,
+ u16 reg, u32 len, u32 __val)
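The ov08x40 rework splits one huge register fill into chunks that respect the I2C adapter's max_write_len quirk, minus the two bytes consumed by the 16-bit register address in each message. A sketch of the chunking loop, with a toy transfer primitive and the tail chunk clamped for clarity:

#include <stdio.h>

/* Stand-in for one I2C transfer of @n register values starting at @reg. */
static int write_chunk(int reg, int n, unsigned char val)
{
	printf("write %4d regs at 0x%04x = 0x%02x\n", n, reg, val);
	return 0;
}

static int burst_fill(int first, int last, int max_write_len,
		      unsigned char val)
{
	int chunk = last - first + 1;

	/* two bytes of each message carry the 16-bit register address */
	if (max_write_len && chunk > max_write_len - 2)
		chunk = max_write_len - 2;

	while (first <= last) {
		int n = chunk;

		if (first + n - 1 > last)
			n = last - first + 1;	/* clamp the tail chunk */
		if (write_chunk(first, n, val))
			return -1;
		first += n;
	}
	return 0;
}

int main(void)
{
	return burst_fill(0x6000, 0x6fff, 1024, 0x00);
}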
+diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
+index 65d58ddf02870d..344a670e732fa5 100644
+--- a/drivers/media/i2c/tc358743.c
++++ b/drivers/media/i2c/tc358743.c
+@@ -2168,8 +2168,10 @@ static int tc358743_probe(struct i2c_client *client)
+
+ err_work_queues:
+ cec_unregister_adapter(state->cec_adap);
+- if (!state->i2c_client->irq)
++ if (!state->i2c_client->irq) {
++ del_timer(&state->timer);
+ flush_work(&state->work_i2c_poll);
++ }
+ cancel_delayed_work(&state->delayed_work_enable_hotplug);
+ mutex_destroy(&state->confctl_mutex);
+ err_hdl:
+diff --git a/drivers/media/platform/allegro-dvt/allegro-core.c b/drivers/media/platform/allegro-dvt/allegro-core.c
+index 73606cee586ede..88c36eb6174ad6 100644
+--- a/drivers/media/platform/allegro-dvt/allegro-core.c
++++ b/drivers/media/platform/allegro-dvt/allegro-core.c
+@@ -1509,8 +1509,10 @@ static int allocate_buffers_internal(struct allegro_channel *channel,
+ INIT_LIST_HEAD(&buffer->head);
+
+ err = allegro_alloc_buffer(dev, buffer, size);
+- if (err)
++ if (err) {
++ kfree(buffer);
+ goto err;
++ }
+ list_add(&buffer->head, list);
+ }
+
+diff --git a/drivers/media/platform/amphion/vpu_drv.c b/drivers/media/platform/amphion/vpu_drv.c
+index 2bf70aafd2baab..51d5234869f57d 100644
+--- a/drivers/media/platform/amphion/vpu_drv.c
++++ b/drivers/media/platform/amphion/vpu_drv.c
+@@ -151,8 +151,8 @@ static int vpu_probe(struct platform_device *pdev)
+ media_device_cleanup(&vpu->mdev);
+ v4l2_device_unregister(&vpu->v4l2_dev);
+ err_vpu_deinit:
+- pm_runtime_set_suspended(dev);
+ pm_runtime_disable(dev);
++ pm_runtime_set_suspended(dev);
+
+ return ret;
+ }
+diff --git a/drivers/media/platform/amphion/vpu_v4l2.c b/drivers/media/platform/amphion/vpu_v4l2.c
+index 83db57bc80b70f..f0b1ec79d2961c 100644
+--- a/drivers/media/platform/amphion/vpu_v4l2.c
++++ b/drivers/media/platform/amphion/vpu_v4l2.c
+@@ -841,6 +841,7 @@ int vpu_add_func(struct vpu_dev *vpu, struct vpu_func *func)
+ vfd->fops = vdec_get_fops();
+ vfd->ioctl_ops = vdec_get_ioctl_ops();
+ }
++ video_set_drvdata(vfd, vpu);
+
+ ret = video_register_device(vfd, VFL_TYPE_VIDEO, -1);
+ if (ret) {
+@@ -848,7 +849,6 @@ int vpu_add_func(struct vpu_dev *vpu, struct vpu_func *func)
+ v4l2_m2m_release(func->m2m_dev);
+ return ret;
+ }
+- video_set_drvdata(vfd, vpu);
+ func->vfd = vfd;
+
+ ret = v4l2_m2m_register_media_controller(func->m2m_dev, func->vfd, func->function);
+diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+index ac48658e2de403..ff269467635561 100644
+--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
++++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_core.c
+@@ -1293,6 +1293,11 @@ static int mtk_jpeg_single_core_init(struct platform_device *pdev,
+ return 0;
+ }
+
++static void mtk_jpeg_destroy_workqueue(void *data)
++{
++ destroy_workqueue(data);
++}
++
+ static int mtk_jpeg_probe(struct platform_device *pdev)
+ {
+ struct mtk_jpeg_dev *jpeg;
+@@ -1337,6 +1342,11 @@ static int mtk_jpeg_probe(struct platform_device *pdev)
+ | WQ_FREEZABLE);
+ if (!jpeg->workqueue)
+ return -EINVAL;
++ ret = devm_add_action_or_reset(&pdev->dev,
++ mtk_jpeg_destroy_workqueue,
++ jpeg->workqueue);
++ if (ret)
++ return ret;
+ }
+
+ ret = v4l2_device_register(&pdev->dev, &jpeg->v4l2_dev);
+diff --git a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c
+index 4a6ee211e18f97..2c5d74939d0a92 100644
+--- a/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c
++++ b/drivers/media/platform/mediatek/jpeg/mtk_jpeg_dec_hw.c
+@@ -578,11 +578,6 @@ static int mtk_jpegdec_hw_init_irq(struct mtk_jpegdec_comp_dev *dev)
+ return 0;
+ }
+
+-static void mtk_jpegdec_destroy_workqueue(void *data)
+-{
+- destroy_workqueue(data);
+-}
+-
+ static int mtk_jpegdec_hw_probe(struct platform_device *pdev)
+ {
+ struct mtk_jpegdec_clk *jpegdec_clk;
+@@ -606,12 +601,6 @@ static int mtk_jpegdec_hw_probe(struct platform_device *pdev)
+ dev->plat_dev = pdev;
+ dev->dev = &pdev->dev;
+
+- ret = devm_add_action_or_reset(&pdev->dev,
+- mtk_jpegdec_destroy_workqueue,
+- master_dev->workqueue);
+- if (ret)
+- return ret;
+-
+ spin_lock_init(&dev->hw_lock);
+ dev->hw_state = MTK_JPEG_HW_IDLE;
+
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+index 1d891381303722..1bf85c1cf96435 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+@@ -2679,6 +2679,8 @@ static void mxc_jpeg_detach_pm_domains(struct mxc_jpeg_dev *jpeg)
+ int i;
+
+ for (i = 0; i < jpeg->num_domains; i++) {
++ if (jpeg->pd_dev[i] && !pm_runtime_suspended(jpeg->pd_dev[i]))
++ pm_runtime_force_suspend(jpeg->pd_dev[i]);
+ if (jpeg->pd_link[i] && !IS_ERR(jpeg->pd_link[i]))
+ device_link_del(jpeg->pd_link[i]);
+ if (jpeg->pd_dev[i] && !IS_ERR(jpeg->pd_dev[i]))
+@@ -2842,6 +2844,7 @@ static int mxc_jpeg_probe(struct platform_device *pdev)
+ jpeg->dec_vdev->vfl_dir = VFL_DIR_M2M;
+ jpeg->dec_vdev->device_caps = V4L2_CAP_STREAMING |
+ V4L2_CAP_VIDEO_M2M_MPLANE;
++ video_set_drvdata(jpeg->dec_vdev, jpeg);
+ if (mode == MXC_JPEG_ENCODE) {
+ v4l2_disable_ioctl(jpeg->dec_vdev, VIDIOC_DECODER_CMD);
+ v4l2_disable_ioctl(jpeg->dec_vdev, VIDIOC_TRY_DECODER_CMD);
+@@ -2854,7 +2857,6 @@ static int mxc_jpeg_probe(struct platform_device *pdev)
+ dev_err(dev, "failed to register video device\n");
+ goto err_vdev_register;
+ }
+- video_set_drvdata(jpeg->dec_vdev, jpeg);
+ if (mode == MXC_JPEG_ENCODE)
+ v4l2_info(&jpeg->v4l2_dev,
+ "encoder device registered as /dev/video%d (%d,%d)\n",
+diff --git a/drivers/media/platform/qcom/camss/camss.c b/drivers/media/platform/qcom/camss/camss.c
+index d64985ca6e884f..8c3bce738f2a8f 100644
+--- a/drivers/media/platform/qcom/camss/camss.c
++++ b/drivers/media/platform/qcom/camss/camss.c
+@@ -2130,10 +2130,8 @@ static int camss_configure_pd(struct camss *camss)
+ if (camss->res->pd_name) {
+ camss->genpd = dev_pm_domain_attach_by_name(camss->dev,
+ camss->res->pd_name);
+- if (IS_ERR(camss->genpd)) {
+- ret = PTR_ERR(camss->genpd);
+- goto fail_pm;
+- }
++ if (IS_ERR(camss->genpd))
++ return PTR_ERR(camss->genpd);
+ }
+
+ if (!camss->genpd) {
+@@ -2143,14 +2141,13 @@ static int camss_configure_pd(struct camss *camss)
+ */
+ camss->genpd = dev_pm_domain_attach_by_id(camss->dev,
+ camss->genpd_num - 1);
++ if (IS_ERR(camss->genpd))
++ return PTR_ERR(camss->genpd);
+ }
+- if (IS_ERR_OR_NULL(camss->genpd)) {
+- if (!camss->genpd)
+- ret = -ENODEV;
+- else
+- ret = PTR_ERR(camss->genpd);
+- goto fail_pm;
+- }
++
++ if (!camss->genpd)
++ return -ENODEV;
++
+ camss->genpd_link = device_link_add(camss->dev, camss->genpd,
+ DL_FLAG_STATELESS | DL_FLAG_PM_RUNTIME |
+ DL_FLAG_RPM_ACTIVE);
+diff --git a/drivers/media/platform/qcom/venus/core.c b/drivers/media/platform/qcom/venus/core.c
+index 84e95a46dfc983..cabcf710c0462a 100644
+--- a/drivers/media/platform/qcom/venus/core.c
++++ b/drivers/media/platform/qcom/venus/core.c
+@@ -412,8 +412,8 @@ static int venus_probe(struct platform_device *pdev)
+ of_platform_depopulate(dev);
+ err_runtime_disable:
+ pm_runtime_put_noidle(dev);
+- pm_runtime_set_suspended(dev);
+ pm_runtime_disable(dev);
++ pm_runtime_set_suspended(dev);
+ hfi_destroy(core);
+ err_core_deinit:
+ hfi_core_deinit(core, false);
+diff --git a/drivers/media/platform/rockchip/rga/rga.c b/drivers/media/platform/rockchip/rga/rga.c
+index 0e768f3e9edab4..de532b7ecd74c1 100644
+--- a/drivers/media/platform/rockchip/rga/rga.c
++++ b/drivers/media/platform/rockchip/rga/rga.c
+@@ -102,7 +102,7 @@ queue_init(void *priv, struct vb2_queue *src_vq, struct vb2_queue *dst_vq)
+ src_vq->drv_priv = ctx;
+ src_vq->ops = &rga_qops;
+ src_vq->mem_ops = &vb2_dma_sg_memops;
+- dst_vq->gfp_flags = __GFP_DMA32;
++ src_vq->gfp_flags = __GFP_DMA32;
+ src_vq->buf_struct_size = sizeof(struct rga_vb_buffer);
+ src_vq->timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_COPY;
+ src_vq->lock = &ctx->rga->mutex;
+diff --git a/drivers/media/platform/samsung/exynos4-is/media-dev.h b/drivers/media/platform/samsung/exynos4-is/media-dev.h
+index 786264cf79dc14..a50e58ab7ef773 100644
+--- a/drivers/media/platform/samsung/exynos4-is/media-dev.h
++++ b/drivers/media/platform/samsung/exynos4-is/media-dev.h
+@@ -178,8 +178,9 @@ int fimc_md_set_camclk(struct v4l2_subdev *sd, bool on);
+ #ifdef CONFIG_OF
+ static inline bool fimc_md_is_isp_available(struct device_node *node)
+ {
+- node = of_get_child_by_name(node, FIMC_IS_OF_NODE_NAME);
+- return node ? of_device_is_available(node) : false;
++ struct device_node *child __free(device_node) =
++ of_get_child_by_name(node, FIMC_IS_OF_NODE_NAME);
++ return child ? of_device_is_available(child) : false;
+ }
+ #else
+ #define fimc_md_is_isp_available(node) (false)
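The fimc_md_is_isp_available() hunk above switches to scope-based cleanup: the __free(device_node) annotation drops the reference taken by of_get_child_by_name() automatically when 'child' goes out of scope, closing the node-reference leak in the old one-liner, which never called of_node_put(). A rough user-space analogue of the underlying cleanup attribute (a GCC/Clang extension; free_buf and __cleanup_free are hypothetical names used only for this sketch):

#include <stdio.h>
#include <stdlib.h>

static void free_buf(char **p)
{
	free(*p);	/* runs when the annotated variable leaves scope */
}
#define __cleanup_free __attribute__((cleanup(free_buf)))

int main(void)
{
	__cleanup_free char *buf = malloc(16);

	if (!buf)
		return 1;
	/* Freed automatically on every return path, mirroring how the
	 * child node reference above is put without an explicit
	 * of_node_put(). */
	snprintf(buf, 16, "hello");
	puts(buf);
	return 0;
}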
+diff --git a/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c b/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c
+index 65e8f2d074005c..e54f5fac325bd6 100644
+--- a/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c
++++ b/drivers/media/platform/verisilicon/rockchip_vpu981_hw_av1_dec.c
+@@ -161,8 +161,7 @@ static int rockchip_vpu981_av1_dec_frame_ref(struct hantro_ctx *ctx,
+ av1_dec->frame_refs[i].timestamp = timestamp;
+ av1_dec->frame_refs[i].frame_type = frame->frame_type;
+ av1_dec->frame_refs[i].order_hint = frame->order_hint;
+- if (!av1_dec->frame_refs[i].vb2_ref)
+- av1_dec->frame_refs[i].vb2_ref = hantro_get_dst_buf(ctx);
++ av1_dec->frame_refs[i].vb2_ref = hantro_get_dst_buf(ctx);
+
+ for (j = 0; j < V4L2_AV1_TOTAL_REFS_PER_FRAME; j++)
+ av1_dec->frame_refs[i].order_hints[j] = frame->order_hints[j];
+diff --git a/drivers/media/usb/gspca/ov534.c b/drivers/media/usb/gspca/ov534.c
+index 8b6a57f170d0dd..bdff64a29a33a2 100644
+--- a/drivers/media/usb/gspca/ov534.c
++++ b/drivers/media/usb/gspca/ov534.c
+@@ -847,7 +847,7 @@ static void set_frame_rate(struct gspca_dev *gspca_dev)
+ r = rate_1;
+ i = ARRAY_SIZE(rate_1);
+ }
+- while (--i > 0) {
++ while (--i >= 0) {
+ if (sd->frame_rate >= r->fps)
+ break;
+ r++;
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 13db0026dc1aad..675be4858366f0 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -775,14 +775,27 @@ static const u8 uvc_media_transport_input_guid[16] =
+ UVC_GUID_UVC_MEDIA_TRANSPORT_INPUT;
+ static const u8 uvc_processing_guid[16] = UVC_GUID_UVC_PROCESSING;
+
+-static struct uvc_entity *uvc_alloc_entity(u16 type, u16 id,
+- unsigned int num_pads, unsigned int extra_size)
++static struct uvc_entity *uvc_alloc_new_entity(struct uvc_device *dev, u16 type,
++ u16 id, unsigned int num_pads,
++ unsigned int extra_size)
+ {
+ struct uvc_entity *entity;
+ unsigned int num_inputs;
+ unsigned int size;
+ unsigned int i;
+
++ /* Per UVC 1.1+ spec 3.7.2, the ID should be non-zero. */
++ if (id == 0) {
++ dev_err(&dev->udev->dev, "Found Unit with invalid ID 0.\n");
++ return ERR_PTR(-EINVAL);
++ }
++
++ /* Per UVC 1.1+ spec 3.7.2, the ID is unique. */
++ if (uvc_entity_by_id(dev, id)) {
++ dev_err(&dev->udev->dev, "Found multiple Units with ID %u\n", id);
++ return ERR_PTR(-EINVAL);
++ }
++
+ extra_size = roundup(extra_size, sizeof(*entity->pads));
+ if (num_pads)
+ num_inputs = type & UVC_TERM_OUTPUT ? num_pads : num_pads - 1;
+@@ -792,7 +805,7 @@ static struct uvc_entity *uvc_alloc_entity(u16 type, u16 id,
+ + num_inputs;
+ entity = kzalloc(size, GFP_KERNEL);
+ if (entity == NULL)
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+
+ entity->id = id;
+ entity->type = type;
+@@ -904,10 +917,10 @@ static int uvc_parse_vendor_control(struct uvc_device *dev,
+ break;
+ }
+
+- unit = uvc_alloc_entity(UVC_VC_EXTENSION_UNIT, buffer[3],
+- p + 1, 2*n);
+- if (unit == NULL)
+- return -ENOMEM;
++ unit = uvc_alloc_new_entity(dev, UVC_VC_EXTENSION_UNIT,
++ buffer[3], p + 1, 2 * n);
++ if (IS_ERR(unit))
++ return PTR_ERR(unit);
+
+ memcpy(unit->guid, &buffer[4], 16);
+ unit->extension.bNumControls = buffer[20];
+@@ -1016,10 +1029,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- term = uvc_alloc_entity(type | UVC_TERM_INPUT, buffer[3],
+- 1, n + p);
+- if (term == NULL)
+- return -ENOMEM;
++ term = uvc_alloc_new_entity(dev, type | UVC_TERM_INPUT,
++ buffer[3], 1, n + p);
++ if (IS_ERR(term))
++ return PTR_ERR(term);
+
+ if (UVC_ENTITY_TYPE(term) == UVC_ITT_CAMERA) {
+ term->camera.bControlSize = n;
+@@ -1075,10 +1088,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return 0;
+ }
+
+- term = uvc_alloc_entity(type | UVC_TERM_OUTPUT, buffer[3],
+- 1, 0);
+- if (term == NULL)
+- return -ENOMEM;
++ term = uvc_alloc_new_entity(dev, type | UVC_TERM_OUTPUT,
++ buffer[3], 1, 0);
++ if (IS_ERR(term))
++ return PTR_ERR(term);
+
+ memcpy(term->baSourceID, &buffer[7], 1);
+
+@@ -1097,9 +1110,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- unit = uvc_alloc_entity(buffer[2], buffer[3], p + 1, 0);
+- if (unit == NULL)
+- return -ENOMEM;
++ unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3],
++ p + 1, 0);
++ if (IS_ERR(unit))
++ return PTR_ERR(unit);
+
+ memcpy(unit->baSourceID, &buffer[5], p);
+
+@@ -1119,9 +1133,9 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- unit = uvc_alloc_entity(buffer[2], buffer[3], 2, n);
+- if (unit == NULL)
+- return -ENOMEM;
++ unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3], 2, n);
++ if (IS_ERR(unit))
++ return PTR_ERR(unit);
+
+ memcpy(unit->baSourceID, &buffer[4], 1);
+ unit->processing.wMaxMultiplier =
+@@ -1148,9 +1162,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- unit = uvc_alloc_entity(buffer[2], buffer[3], p + 1, n);
+- if (unit == NULL)
+- return -ENOMEM;
++ unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3],
++ p + 1, n);
++ if (IS_ERR(unit))
++ return PTR_ERR(unit);
+
+ memcpy(unit->guid, &buffer[4], 16);
+ unit->extension.bNumControls = buffer[20];
+@@ -1290,9 +1305,10 @@ static int uvc_gpio_parse(struct uvc_device *dev)
+ return dev_err_probe(&dev->udev->dev, irq,
+ "No IRQ for privacy GPIO\n");
+
+- unit = uvc_alloc_entity(UVC_EXT_GPIO_UNIT, UVC_EXT_GPIO_UNIT_ID, 0, 1);
+- if (!unit)
+- return -ENOMEM;
++ unit = uvc_alloc_new_entity(dev, UVC_EXT_GPIO_UNIT,
++ UVC_EXT_GPIO_UNIT_ID, 0, 1);
++ if (IS_ERR(unit))
++ return PTR_ERR(unit);
+
+ unit->gpio.gpio_privacy = gpio_privacy;
+ unit->gpio.irq = irq;
+@@ -1919,11 +1935,41 @@ static void uvc_unregister_video(struct uvc_device *dev)
+ struct uvc_streaming *stream;
+
+ list_for_each_entry(stream, &dev->streams, list) {
++ /* Nothing to do here, continue. */
+ if (!video_is_registered(&stream->vdev))
+ continue;
+
++ /*
++ * For stream->vdev we follow the same logic as
++ * vb2_video_unregister_device().
++ */
++
++ /* 1. Take a reference to vdev */
++ get_device(&stream->vdev.dev);
++
++ /* 2. Ensure that no new ioctls can be called. */
+ video_unregister_device(&stream->vdev);
+- video_unregister_device(&stream->meta.vdev);
++
++ /* 3. Wait for old ioctls to finish. */
++ mutex_lock(&stream->mutex);
++
++ /* 4. Stop streaming. */
++ uvc_queue_release(&stream->queue);
++
++ mutex_unlock(&stream->mutex);
++
++ put_device(&stream->vdev.dev);
++
++ /*
++ * For stream->meta.vdev we can directly call
++ * vb2_video_unregister_device().
++ */
++ vb2_video_unregister_device(&stream->meta.vdev);
++
++ /*
++ * Now both vdevs are not streaming and all the ioctls will
++ * return -ENODEV.
++ */
+
+ uvc_debugfs_cleanup_stream(stream);
+ }
+diff --git a/drivers/mtd/nand/spi/winbond.c b/drivers/mtd/nand/spi/winbond.c
+index f3bb81d7e46045..a33ad04e99cc8e 100644
+--- a/drivers/mtd/nand/spi/winbond.c
++++ b/drivers/mtd/nand/spi/winbond.c
+@@ -201,30 +201,30 @@ static const struct spinand_info winbond_spinand_table[] = {
+ SPINAND_INFO("W25N01JW",
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xbc, 0x21),
+ NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
+- NAND_ECCREQ(4, 512),
++ NAND_ECCREQ(1, 512),
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+ &write_cache_variants,
+ &update_cache_variants),
+ 0,
+- SPINAND_ECCINFO(&w25m02gv_ooblayout, w25n02kv_ecc_get_status)),
++ SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
+ SPINAND_INFO("W25N02JWZEIF",
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xbf, 0x22),
+ NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 2, 1),
+- NAND_ECCREQ(4, 512),
++ NAND_ECCREQ(1, 512),
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+ &write_cache_variants,
+ &update_cache_variants),
+ 0,
+- SPINAND_ECCINFO(&w25n02kv_ooblayout, w25n02kv_ecc_get_status)),
++ SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
+ SPINAND_INFO("W25N512GW",
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xba, 0x20),
+ NAND_MEMORG(1, 2048, 64, 64, 512, 10, 1, 1, 1),
+- NAND_ECCREQ(4, 512),
++ NAND_ECCREQ(1, 512),
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+ &write_cache_variants,
+ &update_cache_variants),
+ 0,
+- SPINAND_ECCINFO(&w25n02kv_ooblayout, w25n02kv_ecc_get_status)),
++ SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
+ SPINAND_INFO("W25N02KWZEIR",
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xba, 0x22),
+ NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
+@@ -237,12 +237,12 @@ static const struct spinand_info winbond_spinand_table[] = {
+ SPINAND_INFO("W25N01GWZEIG",
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xba, 0x21),
+ NAND_MEMORG(1, 2048, 64, 64, 1024, 20, 1, 1, 1),
+- NAND_ECCREQ(4, 512),
++ NAND_ECCREQ(1, 512),
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+ &write_cache_variants,
+ &update_cache_variants),
+ 0,
+- SPINAND_ECCINFO(&w25m02gv_ooblayout, w25n02kv_ecc_get_status)),
++ SPINAND_ECCINFO(&w25m02gv_ooblayout, NULL)),
+ SPINAND_INFO("W25N04KV",
+ SPINAND_ID(SPINAND_READID_METHOD_OPCODE_DUMMY, 0xaa, 0x23),
+ NAND_MEMORG(1, 2048, 128, 64, 4096, 40, 2, 1, 1),
+diff --git a/drivers/net/ethernet/freescale/fec_ptp.c b/drivers/net/ethernet/freescale/fec_ptp.c
+index a4eb6edb850add..7f6b5743207166 100644
+--- a/drivers/net/ethernet/freescale/fec_ptp.c
++++ b/drivers/net/ethernet/freescale/fec_ptp.c
+@@ -84,8 +84,7 @@
+ #define FEC_CC_MULT (1 << 31)
+ #define FEC_COUNTER_PERIOD (1 << 31)
+ #define PPS_OUPUT_RELOAD_PERIOD NSEC_PER_SEC
+-#define FEC_CHANNLE_0 0
+-#define DEFAULT_PPS_CHANNEL FEC_CHANNLE_0
++#define DEFAULT_PPS_CHANNEL 0
+
+ #define FEC_PTP_MAX_NSEC_PERIOD 4000000000ULL
+ #define FEC_PTP_MAX_NSEC_COUNTER 0x80000000ULL
+@@ -525,7 +524,6 @@ static int fec_ptp_enable(struct ptp_clock_info *ptp,
+ int ret = 0;
+
+ if (rq->type == PTP_CLK_REQ_PPS) {
+- fep->pps_channel = DEFAULT_PPS_CHANNEL;
+ fep->reload_period = PPS_OUPUT_RELOAD_PERIOD;
+
+ ret = fec_ptp_enable_pps(fep, on);
+@@ -536,10 +534,9 @@ static int fec_ptp_enable(struct ptp_clock_info *ptp,
+ if (rq->perout.flags)
+ return -EOPNOTSUPP;
+
+- if (rq->perout.index != DEFAULT_PPS_CHANNEL)
++ if (rq->perout.index != fep->pps_channel)
+ return -EOPNOTSUPP;
+
+- fep->pps_channel = DEFAULT_PPS_CHANNEL;
+ period.tv_sec = rq->perout.period.sec;
+ period.tv_nsec = rq->perout.period.nsec;
+ period_ns = timespec64_to_ns(&period);
+@@ -707,12 +704,16 @@ void fec_ptp_init(struct platform_device *pdev, int irq_idx)
+ {
+ struct net_device *ndev = platform_get_drvdata(pdev);
+ struct fec_enet_private *fep = netdev_priv(ndev);
++ struct device_node *np = fep->pdev->dev.of_node;
+ int irq;
+ int ret;
+
+ fep->ptp_caps.owner = THIS_MODULE;
+ strscpy(fep->ptp_caps.name, "fec ptp", sizeof(fep->ptp_caps.name));
+
++ fep->pps_channel = DEFAULT_PPS_CHANNEL;
++ of_property_read_u32(np, "fsl,pps-channel", &fep->pps_channel);
++
+ fep->ptp_caps.max_adj = 250000000;
+ fep->ptp_caps.n_alarm = 0;
+ fep->ptp_caps.n_ext_ts = 0;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 7bf275f127c9d7..766213ee82c16e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -1205,6 +1205,9 @@ static int stmmac_init_phy(struct net_device *dev)
+ return -ENODEV;
+ }
+
++ if (priv->dma_cap.eee)
++ phy_support_eee(phydev);
++
+ ret = phylink_connect_phy(priv->phylink, phydev);
+ } else {
+ fwnode_handle_put(phy_fwnode);
+diff --git a/drivers/net/netkit.c b/drivers/net/netkit.c
+index 059269557d9264..fba2c734f0ec7f 100644
+--- a/drivers/net/netkit.c
++++ b/drivers/net/netkit.c
+@@ -20,6 +20,7 @@ struct netkit {
+ struct net_device __rcu *peer;
+ struct bpf_mprog_entry __rcu *active;
+ enum netkit_action policy;
++ enum netkit_scrub scrub;
+ struct bpf_mprog_bundle bundle;
+
+ /* Needed in slow-path */
+@@ -50,12 +51,24 @@ netkit_run(const struct bpf_mprog_entry *entry, struct sk_buff *skb,
+ return ret;
+ }
+
+-static void netkit_prep_forward(struct sk_buff *skb, bool xnet)
++static void netkit_xnet(struct sk_buff *skb)
+ {
+- skb_scrub_packet(skb, xnet);
+ skb->priority = 0;
++ skb->mark = 0;
++}
++
++static void netkit_prep_forward(struct sk_buff *skb,
++ bool xnet, bool xnet_scrub)
++{
++ skb_scrub_packet(skb, false);
+ nf_skip_egress(skb, true);
+ skb_reset_mac_header(skb);
++ if (!xnet)
++ return;
++ ipvs_reset(skb);
++ skb_clear_tstamp(skb);
++ if (xnet_scrub)
++ netkit_xnet(skb);
+ }
+
+ static struct netkit *netkit_priv(const struct net_device *dev)
+@@ -80,7 +93,8 @@ static netdev_tx_t netkit_xmit(struct sk_buff *skb, struct net_device *dev)
+ !pskb_may_pull(skb, ETH_HLEN) ||
+ skb_orphan_frags(skb, GFP_ATOMIC)))
+ goto drop;
+- netkit_prep_forward(skb, !net_eq(dev_net(dev), dev_net(peer)));
++ netkit_prep_forward(skb, !net_eq(dev_net(dev), dev_net(peer)),
++ nk->scrub);
+ eth_skb_pkt_type(skb, peer);
+ skb->dev = peer;
+ entry = rcu_dereference(nk->active);
+@@ -332,8 +346,10 @@ static int netkit_new_link(struct net *src_net, struct net_device *dev,
+ struct netlink_ext_ack *extack)
+ {
+ struct nlattr *peer_tb[IFLA_MAX + 1], **tbp = tb, *attr;
+- enum netkit_action default_prim = NETKIT_PASS;
+- enum netkit_action default_peer = NETKIT_PASS;
++ enum netkit_action policy_prim = NETKIT_PASS;
++ enum netkit_action policy_peer = NETKIT_PASS;
++ enum netkit_scrub scrub_prim = NETKIT_SCRUB_DEFAULT;
++ enum netkit_scrub scrub_peer = NETKIT_SCRUB_DEFAULT;
+ enum netkit_mode mode = NETKIT_L3;
+ unsigned char ifname_assign_type;
+ struct ifinfomsg *ifmp = NULL;
+@@ -362,17 +378,21 @@ static int netkit_new_link(struct net *src_net, struct net_device *dev,
+ return err;
+ tbp = peer_tb;
+ }
++ if (data[IFLA_NETKIT_SCRUB])
++ scrub_prim = nla_get_u32(data[IFLA_NETKIT_SCRUB]);
++ if (data[IFLA_NETKIT_PEER_SCRUB])
++ scrub_peer = nla_get_u32(data[IFLA_NETKIT_PEER_SCRUB]);
+ if (data[IFLA_NETKIT_POLICY]) {
+ attr = data[IFLA_NETKIT_POLICY];
+- default_prim = nla_get_u32(attr);
+- err = netkit_check_policy(default_prim, attr, extack);
++ policy_prim = nla_get_u32(attr);
++ err = netkit_check_policy(policy_prim, attr, extack);
+ if (err < 0)
+ return err;
+ }
+ if (data[IFLA_NETKIT_PEER_POLICY]) {
+ attr = data[IFLA_NETKIT_PEER_POLICY];
+- default_peer = nla_get_u32(attr);
+- err = netkit_check_policy(default_peer, attr, extack);
++ policy_peer = nla_get_u32(attr);
++ err = netkit_check_policy(policy_peer, attr, extack);
+ if (err < 0)
+ return err;
+ }
+@@ -409,7 +429,8 @@ static int netkit_new_link(struct net *src_net, struct net_device *dev,
+
+ nk = netkit_priv(peer);
+ nk->primary = false;
+- nk->policy = default_peer;
++ nk->policy = policy_peer;
++ nk->scrub = scrub_peer;
+ nk->mode = mode;
+ bpf_mprog_bundle_init(&nk->bundle);
+
+@@ -434,7 +455,8 @@ static int netkit_new_link(struct net *src_net, struct net_device *dev,
+
+ nk = netkit_priv(dev);
+ nk->primary = true;
+- nk->policy = default_prim;
++ nk->policy = policy_prim;
++ nk->scrub = scrub_prim;
+ nk->mode = mode;
+ bpf_mprog_bundle_init(&nk->bundle);
+
+@@ -874,6 +896,18 @@ static int netkit_change_link(struct net_device *dev, struct nlattr *tb[],
+ return -EACCES;
+ }
+
++ if (data[IFLA_NETKIT_SCRUB]) {
++ NL_SET_ERR_MSG_ATTR(extack, data[IFLA_NETKIT_SCRUB],
++ "netkit scrubbing cannot be changed after device creation");
++ return -EACCES;
++ }
++
++ if (data[IFLA_NETKIT_PEER_SCRUB]) {
++ NL_SET_ERR_MSG_ATTR(extack, data[IFLA_NETKIT_PEER_SCRUB],
++ "netkit scrubbing cannot be changed after device creation");
++ return -EACCES;
++ }
++
+ if (data[IFLA_NETKIT_PEER_INFO]) {
+ NL_SET_ERR_MSG_ATTR(extack, data[IFLA_NETKIT_PEER_INFO],
+ "netkit peer info cannot be changed after device creation");
+@@ -908,8 +942,10 @@ static size_t netkit_get_size(const struct net_device *dev)
+ {
+ return nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_POLICY */
+ nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_PEER_POLICY */
+- nla_total_size(sizeof(u8)) + /* IFLA_NETKIT_PRIMARY */
++ nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_SCRUB */
++ nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_PEER_SCRUB */
+ nla_total_size(sizeof(u32)) + /* IFLA_NETKIT_MODE */
++ nla_total_size(sizeof(u8)) + /* IFLA_NETKIT_PRIMARY */
+ 0;
+ }
+
+@@ -924,11 +960,15 @@ static int netkit_fill_info(struct sk_buff *skb, const struct net_device *dev)
+ return -EMSGSIZE;
+ if (nla_put_u32(skb, IFLA_NETKIT_MODE, nk->mode))
+ return -EMSGSIZE;
++ if (nla_put_u32(skb, IFLA_NETKIT_SCRUB, nk->scrub))
++ return -EMSGSIZE;
+
+ if (peer) {
+ nk = netkit_priv(peer);
+ if (nla_put_u32(skb, IFLA_NETKIT_PEER_POLICY, nk->policy))
+ return -EMSGSIZE;
++ if (nla_put_u32(skb, IFLA_NETKIT_PEER_SCRUB, nk->scrub))
++ return -EMSGSIZE;
+ }
+
+ return 0;
+@@ -936,9 +976,11 @@ static int netkit_fill_info(struct sk_buff *skb, const struct net_device *dev)
+
+ static const struct nla_policy netkit_policy[IFLA_NETKIT_MAX + 1] = {
+ [IFLA_NETKIT_PEER_INFO] = { .len = sizeof(struct ifinfomsg) },
+- [IFLA_NETKIT_POLICY] = { .type = NLA_U32 },
+ [IFLA_NETKIT_MODE] = { .type = NLA_U32 },
++ [IFLA_NETKIT_POLICY] = { .type = NLA_U32 },
+ [IFLA_NETKIT_PEER_POLICY] = { .type = NLA_U32 },
++ [IFLA_NETKIT_SCRUB] = NLA_POLICY_MAX(NLA_U32, NETKIT_SCRUB_DEFAULT),
++ [IFLA_NETKIT_PEER_SCRUB] = NLA_POLICY_MAX(NLA_U32, NETKIT_SCRUB_DEFAULT),
+ [IFLA_NETKIT_PRIMARY] = { .type = NLA_REJECT,
+ .reject_message = "Primary attribute is read-only" },
+ };
+diff --git a/drivers/net/phy/dp83869.c b/drivers/net/phy/dp83869.c
+index 5f056d7db83eed..b6b38caf9c0ed0 100644
+--- a/drivers/net/phy/dp83869.c
++++ b/drivers/net/phy/dp83869.c
+@@ -153,19 +153,32 @@ struct dp83869_private {
+ int mode;
+ };
+
++static int dp83869_config_aneg(struct phy_device *phydev)
++{
++ struct dp83869_private *dp83869 = phydev->priv;
++
++ if (dp83869->mode != DP83869_RGMII_1000_BASE)
++ return genphy_config_aneg(phydev);
++
++ return genphy_c37_config_aneg(phydev);
++}
++
+ static int dp83869_read_status(struct phy_device *phydev)
+ {
+ struct dp83869_private *dp83869 = phydev->priv;
++ bool changed;
+ int ret;
+
++ if (dp83869->mode == DP83869_RGMII_1000_BASE)
++ return genphy_c37_read_status(phydev, &changed);
++
+ ret = genphy_read_status(phydev);
+ if (ret)
+ return ret;
+
+- if (linkmode_test_bit(ETHTOOL_LINK_MODE_FIBRE_BIT, phydev->supported)) {
++ if (dp83869->mode == DP83869_RGMII_100_BASE) {
+ if (phydev->link) {
+- if (dp83869->mode == DP83869_RGMII_100_BASE)
+- phydev->speed = SPEED_100;
++ phydev->speed = SPEED_100;
+ } else {
+ phydev->speed = SPEED_UNKNOWN;
+ phydev->duplex = DUPLEX_UNKNOWN;
+@@ -898,6 +911,7 @@ static int dp83869_phy_reset(struct phy_device *phydev)
+ .soft_reset = dp83869_phy_reset, \
+ .config_intr = dp83869_config_intr, \
+ .handle_interrupt = dp83869_handle_interrupt, \
++ .config_aneg = dp83869_config_aneg, \
+ .read_status = dp83869_read_status, \
+ .get_tunable = dp83869_get_tunable, \
+ .set_tunable = dp83869_set_tunable, \
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index 33ffa2aa4c1152..e1a15fbc6ad025 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -267,7 +267,7 @@ static ssize_t bin_attr_nvmem_write(struct file *filp, struct kobject *kobj,
+
+ count = round_down(count, nvmem->word_size);
+
+- if (!nvmem->reg_write)
++ if (!nvmem->reg_write || nvmem->read_only)
+ return -EPERM;
+
+ rc = nvmem_reg_write(nvmem, pos, buf, count);
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index 808d1f10541733..c8d5c90aa4d45b 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -82,6 +82,11 @@ enum imx_pcie_variants {
+ #define IMX_PCIE_FLAG_HAS_SERDES BIT(6)
+ #define IMX_PCIE_FLAG_SUPPORT_64BIT BIT(7)
+ #define IMX_PCIE_FLAG_CPU_ADDR_FIXUP BIT(8)
++/*
++ * Because of ERR005723 (PCIe does not support L2 power down), we need to
++ * work around suspend/resume on the devices affected by this erratum.
++ */
++#define IMX_PCIE_FLAG_BROKEN_SUSPEND BIT(9)
+
+ #define imx_check_flag(pci, val) (pci->drvdata->flags & val)
+
+@@ -1237,9 +1242,19 @@ static int imx_pcie_suspend_noirq(struct device *dev)
+ return 0;
+
+ imx_pcie_msi_save_restore(imx_pcie, true);
+- imx_pcie_pm_turnoff(imx_pcie);
+- imx_pcie_stop_link(imx_pcie->pci);
+- imx_pcie_host_exit(pp);
++ if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_BROKEN_SUSPEND)) {
++ /*
++ * The minimum for a workaround would be to set PERST# and to
++ * set the PCIE_TEST_PD flag. However, we can also disable the
++ * clock, which saves some power.
++ */
++ imx_pcie_assert_core_reset(imx_pcie);
++ imx_pcie->drvdata->enable_ref_clk(imx_pcie, false);
++ } else {
++ imx_pcie_pm_turnoff(imx_pcie);
++ imx_pcie_stop_link(imx_pcie->pci);
++ imx_pcie_host_exit(pp);
++ }
+
+ return 0;
+ }
+@@ -1253,14 +1268,32 @@ static int imx_pcie_resume_noirq(struct device *dev)
+ if (!(imx_pcie->drvdata->flags & IMX_PCIE_FLAG_SUPPORTS_SUSPEND))
+ return 0;
+
+- ret = imx_pcie_host_init(pp);
+- if (ret)
+- return ret;
+- imx_pcie_msi_save_restore(imx_pcie, false);
+- dw_pcie_setup_rc(pp);
++ if (imx_check_flag(imx_pcie, IMX_PCIE_FLAG_BROKEN_SUSPEND)) {
++ ret = imx_pcie->drvdata->enable_ref_clk(imx_pcie, true);
++ if (ret)
++ return ret;
++ ret = imx_pcie_deassert_core_reset(imx_pcie);
++ if (ret)
++ return ret;
++ /*
++ * Using PCIE_TEST_PD seems to disable MSI and power down the
++ * root complex. This is why we have to set up the RC again and
++ * restore the MSI register.
++ */
++ ret = dw_pcie_setup_rc(&imx_pcie->pci->pp);
++ if (ret)
++ return ret;
++ imx_pcie_msi_save_restore(imx_pcie, false);
++ } else {
++ ret = imx_pcie_host_init(pp);
++ if (ret)
++ return ret;
++ imx_pcie_msi_save_restore(imx_pcie, false);
++ dw_pcie_setup_rc(pp);
+
+- if (imx_pcie->link_is_up)
+- imx_pcie_start_link(imx_pcie->pci);
++ if (imx_pcie->link_is_up)
++ imx_pcie_start_link(imx_pcie->pci);
++ }
+
+ return 0;
+ }
+@@ -1485,7 +1518,9 @@ static const struct imx_pcie_drvdata drvdata[] = {
+ [IMX6Q] = {
+ .variant = IMX6Q,
+ .flags = IMX_PCIE_FLAG_IMX_PHY |
+- IMX_PCIE_FLAG_IMX_SPEED_CHANGE,
++ IMX_PCIE_FLAG_IMX_SPEED_CHANGE |
++ IMX_PCIE_FLAG_BROKEN_SUSPEND |
++ IMX_PCIE_FLAG_SUPPORTS_SUSPEND,
+ .dbi_length = 0x200,
+ .gpr = "fsl,imx6q-iomuxc-gpr",
+ .clk_names = imx6q_clks,
+diff --git a/drivers/pci/controller/dwc/pci-keystone.c b/drivers/pci/controller/dwc/pci-keystone.c
+index 2219b1a866faf2..44b34559de1ac5 100644
+--- a/drivers/pci/controller/dwc/pci-keystone.c
++++ b/drivers/pci/controller/dwc/pci-keystone.c
+@@ -455,6 +455,17 @@ static void __iomem *ks_pcie_other_map_bus(struct pci_bus *bus,
+ struct keystone_pcie *ks_pcie = to_keystone_pcie(pci);
+ u32 reg;
+
++ /*
++ * Checking whether the link is up here is a last line of defense
++ * against platforms that forward errors on the system bus as an
++ * SError upon PCI configuration transactions issued when the link
++ * is down. This check is racy by definition and does not stop
++ * the system from triggering an SError if the link goes down
++ * after this check is performed.
++ */
++ if (!dw_pcie_link_up(pci))
++ return NULL;
++
+ reg = CFG_BUS(bus->number) | CFG_DEVICE(PCI_SLOT(devfn)) |
+ CFG_FUNC(PCI_FUNC(devfn));
+ if (!pci_is_root_bus(bus->parent))
+@@ -1093,6 +1104,7 @@ static int ks_pcie_am654_set_mode(struct device *dev,
+
+ static const struct ks_pcie_of_data ks_pcie_rc_of_data = {
+ .host_ops = &ks_pcie_host_ops,
++ .mode = DW_PCIE_RC_TYPE,
+ .version = DW_PCIE_VER_365A,
+ };
+
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index 43ba5c6738df1a..cc8ff4a014368c 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -689,7 +689,7 @@ static void dw_pcie_ep_init_non_sticky_registers(struct dw_pcie *pci)
+ * for 1 MB BAR size only.
+ */
+ for (i = 0; i < nbars; i++, offset += PCI_REBAR_CTRL)
+- dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, 0x0);
++ dw_pcie_writel_dbi(pci, offset + PCI_REBAR_CAP, BIT(4));
+ }
+
+ dw_pcie_setup(pci);
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 2b33d03ed05416..b5447228696dc4 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1845,7 +1845,7 @@ static const struct of_device_id qcom_pcie_match[] = {
+ { .compatible = "qcom,pcie-sm8450-pcie0", .data = &cfg_1_9_0 },
+ { .compatible = "qcom,pcie-sm8450-pcie1", .data = &cfg_1_9_0 },
+ { .compatible = "qcom,pcie-sm8550", .data = &cfg_1_9_0 },
+- { .compatible = "qcom,pcie-x1e80100", .data = &cfg_1_9_0 },
++ { .compatible = "qcom,pcie-x1e80100", .data = &cfg_sc8280xp },
+ { }
+ };
+
+diff --git a/drivers/pci/controller/pcie-rockchip-ep.c b/drivers/pci/controller/pcie-rockchip-ep.c
+index 1362745336568e..a6805b005798c3 100644
+--- a/drivers/pci/controller/pcie-rockchip-ep.c
++++ b/drivers/pci/controller/pcie-rockchip-ep.c
+@@ -63,15 +63,25 @@ static void rockchip_pcie_clear_ep_ob_atu(struct rockchip_pcie *rockchip,
+ ROCKCHIP_PCIE_AT_OB_REGION_DESC1(region));
+ }
+
++static int rockchip_pcie_ep_ob_atu_num_bits(struct rockchip_pcie *rockchip,
++ u64 pci_addr, size_t size)
++{
++ int num_pass_bits = fls64(pci_addr ^ (pci_addr + size - 1));
++
++ return clamp(num_pass_bits,
++ ROCKCHIP_PCIE_AT_MIN_NUM_BITS,
++ ROCKCHIP_PCIE_AT_MAX_NUM_BITS);
++}
++
+ static void rockchip_pcie_prog_ep_ob_atu(struct rockchip_pcie *rockchip, u8 fn,
+ u32 r, u64 cpu_addr, u64 pci_addr,
+ size_t size)
+ {
+- int num_pass_bits = fls64(size - 1);
++ int num_pass_bits;
+ u32 addr0, addr1, desc0;
+
+- if (num_pass_bits < 8)
+- num_pass_bits = 8;
++ num_pass_bits = rockchip_pcie_ep_ob_atu_num_bits(rockchip,
++ pci_addr, size);
+
+ addr0 = ((num_pass_bits - 1) & PCIE_CORE_OB_REGION_ADDR0_NUM_BITS) |
+ (lower_32_bits(pci_addr) & PCIE_CORE_OB_REGION_ADDR0_LO_ADDR);
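The new rockchip_pcie_ep_ob_atu_num_bits() helper above sizes the outbound ATU window from the actual PCI address range rather than from the size alone: XOR-ing the first and last address and taking fls64() yields the highest bit position in which they differ, which is then clamped to the hardware limits. A minimal user-space sketch of that computation (fls64_sketch() stands in for the kernel's fls64(), which returns the 1-based index of the most significant set bit; the clamp bounds mirror ROCKCHIP_PCIE_AT_MIN_NUM_BITS/ROCKCHIP_PCIE_AT_MAX_NUM_BITS):

#include <stdio.h>
#include <stdint.h>

static int fls64_sketch(uint64_t x)
{
	int r = 0;

	while (x) {
		r++;
		x >>= 1;
	}
	return r;
}

int main(void)
{
	uint64_t pci_addr = 0x10007000, size = 0x2000;
	int bits = fls64_sketch(pci_addr ^ (pci_addr + size - 1));

	if (bits < 8)		/* ROCKCHIP_PCIE_AT_MIN_NUM_BITS */
		bits = 8;
	if (bits > 20)		/* ROCKCHIP_PCIE_AT_MAX_NUM_BITS */
		bits = 20;
	/* 0x10007000..0x10008fff differ in bits 0..15, so this prints 16;
	 * the old fls64(size - 1) computation gave only 13 and undersized
	 * the window whenever the range crossed a power-of-two boundary. */
	printf("num_pass_bits = %d\n", bits);
	return 0;
}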
+diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h
+index 6111de35f84ca2..15ee949f2485e3 100644
+--- a/drivers/pci/controller/pcie-rockchip.h
++++ b/drivers/pci/controller/pcie-rockchip.h
+@@ -245,6 +245,10 @@
+ (PCIE_EP_PF_CONFIG_REGS_BASE + (((fn) << 12) & GENMASK(19, 12)))
+ #define ROCKCHIP_PCIE_EP_VIRT_FUNC_BASE(fn) \
+ (PCIE_EP_PF_CONFIG_REGS_BASE + 0x10000 + (((fn) << 12) & GENMASK(19, 12)))
++
++#define ROCKCHIP_PCIE_AT_MIN_NUM_BITS 8
++#define ROCKCHIP_PCIE_AT_MAX_NUM_BITS 20
++
+ #define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR0(fn, bar) \
+ (PCIE_CORE_AXI_CONF_BASE + 0x0828 + (fn) * 0x0040 + (bar) * 0x0008)
+ #define ROCKCHIP_PCIE_AT_IB_EP_FUNC_BAR_ADDR1(fn, bar) \
+diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c
+index 17f00710925508..62f7dff437309f 100644
+--- a/drivers/pci/endpoint/pci-epc-core.c
++++ b/drivers/pci/endpoint/pci-epc-core.c
+@@ -660,18 +660,18 @@ void pci_epc_remove_epf(struct pci_epc *epc, struct pci_epf *epf,
+ if (IS_ERR_OR_NULL(epc) || !epf)
+ return;
+
++ mutex_lock(&epc->list_lock);
+ if (type == PRIMARY_INTERFACE) {
+ func_no = epf->func_no;
+ list = &epf->list;
++ epf->epc = NULL;
+ } else {
+ func_no = epf->sec_epc_func_no;
+ list = &epf->sec_epc_list;
++ epf->sec_epc = NULL;
+ }
+-
+- mutex_lock(&epc->list_lock);
+ clear_bit(func_no, &epc->function_num_map);
+ list_del(list);
+- epf->epc = NULL;
+ mutex_unlock(&epc->list_lock);
+ }
+ EXPORT_SYMBOL_GPL(pci_epc_remove_epf);
+@@ -837,11 +837,10 @@ EXPORT_SYMBOL_GPL(pci_epc_bus_master_enable_notify);
+ void pci_epc_destroy(struct pci_epc *epc)
+ {
+ pci_ep_cfs_remove_epc_group(epc->group);
+- device_unregister(&epc->dev);
+-
+ #ifdef CONFIG_PCI_DOMAINS_GENERIC
+- pci_bus_release_domain_nr(&epc->dev, epc->domain_nr);
++ pci_bus_release_domain_nr(epc->dev.parent, epc->domain_nr);
+ #endif
++ device_unregister(&epc->dev);
+ }
+ EXPORT_SYMBOL_GPL(pci_epc_destroy);
+
+diff --git a/drivers/pci/of_property.c b/drivers/pci/of_property.c
+index 5a0b98e697954a..886c236e5de6e6 100644
+--- a/drivers/pci/of_property.c
++++ b/drivers/pci/of_property.c
+@@ -126,7 +126,7 @@ static int of_pci_prop_ranges(struct pci_dev *pdev, struct of_changeset *ocs,
+ if (of_pci_get_addr_flags(&res[j], &flags))
+ continue;
+
+- val64 = res[j].start;
++ val64 = pci_bus_address(pdev, &res[j] - pdev->resource);
+ of_pci_set_address(pdev, rp[i].parent_addr, val64, 0, flags,
+ false);
+ if (pci_is_bridge(pdev)) {
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index f4f4b3df3884ef..793b1d274be33a 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -1356,7 +1356,7 @@ static const struct adsp_data sc7280_wpss_resource = {
+ .crash_reason_smem = 626,
+ .firmware_name = "wpss.mdt",
+ .pas_id = 6,
+- .auto_boot = true,
++ .auto_boot = false,
+ .proxy_pd_names = (char*[]){
+ "cx",
+ "mx",
+diff --git a/drivers/spmi/spmi-pmic-arb.c b/drivers/spmi/spmi-pmic-arb.c
+index 9ba9495fcc4bae..ea843159b745d5 100644
+--- a/drivers/spmi/spmi-pmic-arb.c
++++ b/drivers/spmi/spmi-pmic-arb.c
+@@ -1763,14 +1763,13 @@ static int spmi_pmic_arb_register_buses(struct spmi_pmic_arb *pmic_arb,
+ {
+ struct device *dev = &pdev->dev;
+ struct device_node *node = dev->of_node;
+- struct device_node *child;
+ int ret;
+
+ /* legacy mode doesn't provide child node for the bus */
+ if (of_device_is_compatible(node, "qcom,spmi-pmic-arb"))
+ return spmi_pmic_arb_bus_init(pdev, node, pmic_arb);
+
+- for_each_available_child_of_node(node, child) {
++ for_each_available_child_of_node_scoped(node, child) {
+ if (of_node_name_eq(child, "spmi")) {
+ ret = spmi_pmic_arb_bus_init(pdev, child, pmic_arb);
+ if (ret)
+diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+index b0c0f0ffdcb046..f547d386ae805b 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+@@ -137,7 +137,7 @@ static ssize_t current_uuid_show(struct device *dev,
+ struct int3400_thermal_priv *priv = dev_get_drvdata(dev);
+ int i, length = 0;
+
+- if (priv->current_uuid_index > 0)
++ if (priv->current_uuid_index >= 0)
+ return sprintf(buf, "%s\n",
+ int3400_thermal_uuids[priv->current_uuid_index]);
+
+diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
+index 5867e633856233..fb550a7c16b34b 100644
+--- a/drivers/ufs/host/ufs-exynos.c
++++ b/drivers/ufs/host/ufs-exynos.c
+@@ -724,6 +724,9 @@ static void exynos_ufs_config_smu(struct exynos_ufs *ufs)
+ {
+ u32 reg, val;
+
++ if (ufs->opts & EXYNOS_UFS_OPT_UFSPR_SECURE)
++ return;
++
+ exynos_ufs_disable_auto_ctrl_hcc_save(ufs, &val);
+
+ /* make encryption disabled by default */
+@@ -1440,8 +1443,8 @@ static int exynos_ufs_init(struct ufs_hba *hba)
+ if (ret)
+ goto out;
+ exynos_ufs_specify_phy_time_attr(ufs);
+- if (!(ufs->opts & EXYNOS_UFS_OPT_UFSPR_SECURE))
+- exynos_ufs_config_smu(ufs);
++
++ exynos_ufs_config_smu(ufs);
+
+ hba->host->dma_alignment = DATA_UNIT_SIZE - 1;
+ return 0;
+@@ -1484,12 +1487,12 @@ static void exynos_ufs_dev_hw_reset(struct ufs_hba *hba)
+ hci_writel(ufs, 1 << 0, HCI_GPIO_OUT);
+ }
+
+-static void exynos_ufs_pre_hibern8(struct ufs_hba *hba, u8 enter)
++static void exynos_ufs_pre_hibern8(struct ufs_hba *hba, enum uic_cmd_dme cmd)
+ {
+ struct exynos_ufs *ufs = ufshcd_get_variant(hba);
+ struct exynos_ufs_uic_attr *attr = ufs->drv_data->uic_attr;
+
+- if (!enter) {
++ if (cmd == UIC_CMD_DME_HIBER_EXIT) {
+ if (ufs->opts & EXYNOS_UFS_OPT_BROKEN_AUTO_CLK_CTRL)
+ exynos_ufs_disable_auto_ctrl_hcc(ufs);
+ exynos_ufs_ungate_clks(ufs);
+@@ -1517,11 +1520,11 @@ static void exynos_ufs_pre_hibern8(struct ufs_hba *hba, u8 enter)
+ }
+ }
+
+-static void exynos_ufs_post_hibern8(struct ufs_hba *hba, u8 enter)
++static void exynos_ufs_post_hibern8(struct ufs_hba *hba, enum uic_cmd_dme cmd)
+ {
+ struct exynos_ufs *ufs = ufshcd_get_variant(hba);
+
+- if (!enter) {
++ if (cmd == UIC_CMD_DME_HIBER_EXIT) {
+ u32 cur_mode = 0;
+ u32 pwrmode;
+
+@@ -1540,7 +1543,7 @@ static void exynos_ufs_post_hibern8(struct ufs_hba *hba, u8 enter)
+
+ if (!(ufs->opts & EXYNOS_UFS_OPT_SKIP_CONNECTION_ESTAB))
+ exynos_ufs_establish_connt(ufs);
+- } else {
++ } else if (cmd == UIC_CMD_DME_HIBER_ENTER) {
+ ufs->entry_hibern8_t = ktime_get();
+ exynos_ufs_gate_clks(ufs);
+ if (ufs->opts & EXYNOS_UFS_OPT_BROKEN_AUTO_CLK_CTRL)
+@@ -1627,15 +1630,15 @@ static int exynos_ufs_pwr_change_notify(struct ufs_hba *hba,
+ }
+
+ static void exynos_ufs_hibern8_notify(struct ufs_hba *hba,
+- enum uic_cmd_dme enter,
++ enum uic_cmd_dme cmd,
+ enum ufs_notify_change_status notify)
+ {
+ switch ((u8)notify) {
+ case PRE_CHANGE:
+- exynos_ufs_pre_hibern8(hba, enter);
++ exynos_ufs_pre_hibern8(hba, cmd);
+ break;
+ case POST_CHANGE:
+- exynos_ufs_post_hibern8(hba, enter);
++ exynos_ufs_post_hibern8(hba, cmd);
+ break;
+ }
+ }
+diff --git a/drivers/vfio/pci/qat/main.c b/drivers/vfio/pci/qat/main.c
+index be3644ced17be4..c78cb6de93906c 100644
+--- a/drivers/vfio/pci/qat/main.c
++++ b/drivers/vfio/pci/qat/main.c
+@@ -304,7 +304,7 @@ static ssize_t qat_vf_resume_write(struct file *filp, const char __user *buf,
+ offs = &filp->f_pos;
+
+ if (*offs < 0 ||
+- check_add_overflow((loff_t)len, *offs, &end))
++ check_add_overflow(len, *offs, &end))
+ return -EOVERFLOW;
+
+ if (end > mig_dev->state_size)
+diff --git a/fs/btrfs/btrfs_inode.h b/fs/btrfs/btrfs_inode.h
+index e152fde888fc9a..db53a3263fbd05 100644
+--- a/fs/btrfs/btrfs_inode.h
++++ b/fs/btrfs/btrfs_inode.h
+@@ -613,11 +613,17 @@ int btrfs_writepage_cow_fixup(struct folio *folio);
+ int btrfs_encoded_io_compression_from_extent(struct btrfs_fs_info *fs_info,
+ int compress_type);
+ int btrfs_encoded_read_regular_fill_pages(struct btrfs_inode *inode,
+- u64 file_offset, u64 disk_bytenr,
+- u64 disk_io_size,
++ u64 disk_bytenr, u64 disk_io_size,
+ struct page **pages);
+ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+- struct btrfs_ioctl_encoded_io_args *encoded);
++ struct btrfs_ioctl_encoded_io_args *encoded,
++ struct extent_state **cached_state,
++ u64 *disk_bytenr, u64 *disk_io_size);
++ssize_t btrfs_encoded_read_regular(struct kiocb *iocb, struct iov_iter *iter,
++ u64 start, u64 lockend,
++ struct extent_state **cached_state,
++ u64 disk_bytenr, u64 disk_io_size,
++ size_t count, bool compressed, bool *unlocked);
+ ssize_t btrfs_do_encoded_write(struct kiocb *iocb, struct iov_iter *from,
+ const struct btrfs_ioctl_encoded_io_args *encoded);
+
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 0cc919d15b1441..9c05cab473f577 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -2010,7 +2010,7 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ const struct btrfs_key *key, struct btrfs_path *p,
+ int ins_len, int cow)
+ {
+- struct btrfs_fs_info *fs_info = root->fs_info;
++ struct btrfs_fs_info *fs_info;
+ struct extent_buffer *b;
+ int slot;
+ int ret;
+@@ -2023,6 +2023,10 @@ int btrfs_search_slot(struct btrfs_trans_handle *trans, struct btrfs_root *root,
+ int min_write_lock_level;
+ int prev_cmp;
+
++ if (!root)
++ return -EINVAL;
++
++ fs_info = root->fs_info;
+ might_sleep();
+
+ lowest_level = p->lowest_level;
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index d9f511babd89ab..b43a8611aca5c6 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -2446,7 +2446,7 @@ int btrfs_cross_ref_exist(struct btrfs_root *root, u64 objectid, u64 offset,
+ goto out;
+
+ ret = check_delayed_ref(root, path, objectid, offset, bytenr);
+- } while (ret == -EAGAIN);
++ } while (ret == -EAGAIN && !path->nowait);
+
+ out:
+ btrfs_release_path(path);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 1e4ca1e7d2e58d..d067db2619713f 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -9126,26 +9126,31 @@ static void btrfs_encoded_read_endio(struct btrfs_bio *bbio)
+ */
+ WRITE_ONCE(priv->status, bbio->bio.bi_status);
+ }
+- if (!atomic_dec_return(&priv->pending))
++ if (atomic_dec_and_test(&priv->pending))
+ wake_up(&priv->wait);
+ bio_put(&bbio->bio);
+ }
+
+ int btrfs_encoded_read_regular_fill_pages(struct btrfs_inode *inode,
+- u64 file_offset, u64 disk_bytenr,
+- u64 disk_io_size, struct page **pages)
++ u64 disk_bytenr, u64 disk_io_size,
++ struct page **pages)
+ {
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+- struct btrfs_encoded_read_private priv = {
+- .pending = ATOMIC_INIT(1),
+- };
++ struct btrfs_encoded_read_private *priv;
+ unsigned long i = 0;
+ struct btrfs_bio *bbio;
++ int ret;
++
++ priv = kmalloc(sizeof(struct btrfs_encoded_read_private), GFP_NOFS);
++ if (!priv)
++ return -ENOMEM;
+
+- init_waitqueue_head(&priv.wait);
++ init_waitqueue_head(&priv->wait);
++ atomic_set(&priv->pending, 1);
++ priv->status = 0;
+
+ bbio = btrfs_bio_alloc(BIO_MAX_VECS, REQ_OP_READ, fs_info,
+- btrfs_encoded_read_endio, &priv);
++ btrfs_encoded_read_endio, priv);
+ bbio->bio.bi_iter.bi_sector = disk_bytenr >> SECTOR_SHIFT;
+ bbio->inode = inode;
+
+@@ -9153,11 +9158,11 @@ int btrfs_encoded_read_regular_fill_pages(struct btrfs_inode *inode,
+ size_t bytes = min_t(u64, disk_io_size, PAGE_SIZE);
+
+ if (bio_add_page(&bbio->bio, pages[i], bytes, 0) < bytes) {
+- atomic_inc(&priv.pending);
++ atomic_inc(&priv->pending);
+ btrfs_submit_bbio(bbio, 0);
+
+ bbio = btrfs_bio_alloc(BIO_MAX_VECS, REQ_OP_READ, fs_info,
+- btrfs_encoded_read_endio, &priv);
++ btrfs_encoded_read_endio, priv);
+ bbio->bio.bi_iter.bi_sector = disk_bytenr >> SECTOR_SHIFT;
+ bbio->inode = inode;
+ continue;
+@@ -9168,22 +9173,22 @@ int btrfs_encoded_read_regular_fill_pages(struct btrfs_inode *inode,
+ disk_io_size -= bytes;
+ } while (disk_io_size);
+
+- atomic_inc(&priv.pending);
++ atomic_inc(&priv->pending);
+ btrfs_submit_bbio(bbio, 0);
+
+- if (atomic_dec_return(&priv.pending))
+- io_wait_event(priv.wait, !atomic_read(&priv.pending));
++ if (atomic_dec_return(&priv->pending))
++ io_wait_event(priv->wait, !atomic_read(&priv->pending));
+ /* See btrfs_encoded_read_endio() for ordering. */
+- return blk_status_to_errno(READ_ONCE(priv.status));
++ ret = blk_status_to_errno(READ_ONCE(priv->status));
++ kfree(priv);
++ return ret;
+ }
+
+-static ssize_t btrfs_encoded_read_regular(struct kiocb *iocb,
+- struct iov_iter *iter,
+- u64 start, u64 lockend,
+- struct extent_state **cached_state,
+- u64 disk_bytenr, u64 disk_io_size,
+- size_t count, bool compressed,
+- bool *unlocked)
++ssize_t btrfs_encoded_read_regular(struct kiocb *iocb, struct iov_iter *iter,
++ u64 start, u64 lockend,
++ struct extent_state **cached_state,
++ u64 disk_bytenr, u64 disk_io_size,
++ size_t count, bool compressed, bool *unlocked)
+ {
+ struct btrfs_inode *inode = BTRFS_I(file_inode(iocb->ki_filp));
+ struct extent_io_tree *io_tree = &inode->io_tree;
+@@ -9203,7 +9208,7 @@ static ssize_t btrfs_encoded_read_regular(struct kiocb *iocb,
+ goto out;
+ }
+
+- ret = btrfs_encoded_read_regular_fill_pages(inode, start, disk_bytenr,
++ ret = btrfs_encoded_read_regular_fill_pages(inode, disk_bytenr,
+ disk_io_size, pages);
+ if (ret)
+ goto out;
+@@ -9244,15 +9249,16 @@ static ssize_t btrfs_encoded_read_regular(struct kiocb *iocb,
+ }
+
+ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+- struct btrfs_ioctl_encoded_io_args *encoded)
++ struct btrfs_ioctl_encoded_io_args *encoded,
++ struct extent_state **cached_state,
++ u64 *disk_bytenr, u64 *disk_io_size)
+ {
+ struct btrfs_inode *inode = BTRFS_I(file_inode(iocb->ki_filp));
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ struct extent_io_tree *io_tree = &inode->io_tree;
+ ssize_t ret;
+ size_t count = iov_iter_count(iter);
+- u64 start, lockend, disk_bytenr, disk_io_size;
+- struct extent_state *cached_state = NULL;
++ u64 start, lockend;
+ struct extent_map *em;
+ bool unlocked = false;
+
+@@ -9278,13 +9284,13 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+ lockend - start + 1);
+ if (ret)
+ goto out_unlock_inode;
+- lock_extent(io_tree, start, lockend, &cached_state);
++ lock_extent(io_tree, start, lockend, cached_state);
+ ordered = btrfs_lookup_ordered_range(inode, start,
+ lockend - start + 1);
+ if (!ordered)
+ break;
+ btrfs_put_ordered_extent(ordered);
+- unlock_extent(io_tree, start, lockend, &cached_state);
++ unlock_extent(io_tree, start, lockend, cached_state);
+ cond_resched();
+ }
+
+@@ -9304,7 +9310,7 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+ free_extent_map(em);
+ em = NULL;
+ ret = btrfs_encoded_read_inline(iocb, iter, start, lockend,
+- &cached_state, extent_start,
++ cached_state, extent_start,
+ count, encoded, &unlocked);
+ goto out;
+ }
+@@ -9317,12 +9323,12 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+ inode->vfs_inode.i_size) - iocb->ki_pos;
+ if (em->disk_bytenr == EXTENT_MAP_HOLE ||
+ (em->flags & EXTENT_FLAG_PREALLOC)) {
+- disk_bytenr = EXTENT_MAP_HOLE;
++ *disk_bytenr = EXTENT_MAP_HOLE;
+ count = min_t(u64, count, encoded->len);
+ encoded->len = count;
+ encoded->unencoded_len = count;
+ } else if (extent_map_is_compressed(em)) {
+- disk_bytenr = em->disk_bytenr;
++ *disk_bytenr = em->disk_bytenr;
+ /*
+ * Bail if the buffer isn't large enough to return the whole
+ * compressed extent.
+@@ -9331,7 +9337,7 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+ ret = -ENOBUFS;
+ goto out_em;
+ }
+- disk_io_size = em->disk_num_bytes;
++ *disk_io_size = em->disk_num_bytes;
+ count = em->disk_num_bytes;
+ encoded->unencoded_len = em->ram_bytes;
+ encoded->unencoded_offset = iocb->ki_pos - (em->start - em->offset);
+@@ -9341,35 +9347,32 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+ goto out_em;
+ encoded->compression = ret;
+ } else {
+- disk_bytenr = extent_map_block_start(em) + (start - em->start);
++ *disk_bytenr = extent_map_block_start(em) + (start - em->start);
+ if (encoded->len > count)
+ encoded->len = count;
+ /*
+ * Don't read beyond what we locked. This also limits the page
+ * allocations that we'll do.
+ */
+- disk_io_size = min(lockend + 1, iocb->ki_pos + encoded->len) - start;
+- count = start + disk_io_size - iocb->ki_pos;
++ *disk_io_size = min(lockend + 1, iocb->ki_pos + encoded->len) - start;
++ count = start + *disk_io_size - iocb->ki_pos;
+ encoded->len = count;
+ encoded->unencoded_len = count;
+- disk_io_size = ALIGN(disk_io_size, fs_info->sectorsize);
++ *disk_io_size = ALIGN(*disk_io_size, fs_info->sectorsize);
+ }
+ free_extent_map(em);
+ em = NULL;
+
+- if (disk_bytenr == EXTENT_MAP_HOLE) {
+- unlock_extent(io_tree, start, lockend, &cached_state);
++ if (*disk_bytenr == EXTENT_MAP_HOLE) {
++ unlock_extent(io_tree, start, lockend, cached_state);
+ btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED);
+ unlocked = true;
+ ret = iov_iter_zero(count, iter);
+ if (ret != count)
+ ret = -EFAULT;
+ } else {
+- ret = btrfs_encoded_read_regular(iocb, iter, start, lockend,
+- &cached_state, disk_bytenr,
+- disk_io_size, count,
+- encoded->compression,
+- &unlocked);
++ ret = -EIOCBQUEUED;
++ goto out_em;
+ }
+
+ out:
+@@ -9378,10 +9381,11 @@ ssize_t btrfs_encoded_read(struct kiocb *iocb, struct iov_iter *iter,
+ out_em:
+ free_extent_map(em);
+ out_unlock_extent:
+- if (!unlocked)
+- unlock_extent(io_tree, start, lockend, &cached_state);
++ /* Leave inode and extent locked if we need to do a read. */
++ if (!unlocked && ret != -EIOCBQUEUED)
++ unlock_extent(io_tree, start, lockend, cached_state);
+ out_unlock_inode:
+- if (!unlocked)
++ if (!unlocked && ret != -EIOCBQUEUED)
+ btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED);
+ return ret;
+ }
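Two details in the btrfs_encoded_read_regular_fill_pages() hunks above are worth noting: the endio handler now uses atomic_dec_and_test(), the canonical "did my decrement reach zero" idiom, and the private completion state moves from the stack to the heap, preparing for callers that no longer wait in the same context (see the new -EIOCBQUEUED flow in btrfs_encoded_read()). A small C11-atomics sketch of the biased pending-counter handoff (user-space stand-ins only; the kernel uses atomic_t plus a waitqueue):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int pending = 1;	/* bias held by the submitter */

static void complete_one(void)
{
	/* atomic_fetch_sub() returns the old value, so "== 1" means this
	 * decrement took the count to zero -- the same test that
	 * atomic_dec_and_test() performs in the patch. */
	if (atomic_fetch_sub(&pending, 1) == 1)
		puts("last completion: wake the waiter");
}

int main(void)
{
	for (int i = 0; i < 3; i++)
		atomic_fetch_add(&pending, 1);	/* one per submitted bio */
	for (int i = 0; i < 3; i++)
		complete_one();			/* bio completions arrive */
	complete_one();				/* submitter drops its bias */
	return 0;
}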
+diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
+index 226c91fe31a707..3e3722a7323936 100644
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -4514,12 +4514,17 @@ static int btrfs_ioctl_encoded_read(struct file *file, void __user *argp,
+ size_t copy_end_kernel = offsetofend(struct btrfs_ioctl_encoded_io_args,
+ flags);
+ size_t copy_end;
++ struct btrfs_inode *inode = BTRFS_I(file_inode(file));
++ struct btrfs_fs_info *fs_info = inode->root->fs_info;
++ struct extent_io_tree *io_tree = &inode->io_tree;
+ struct iovec iovstack[UIO_FASTIOV];
+ struct iovec *iov = iovstack;
+ struct iov_iter iter;
+ loff_t pos;
+ struct kiocb kiocb;
+ ssize_t ret;
++ u64 disk_bytenr, disk_io_size;
++ struct extent_state *cached_state = NULL;
+
+ if (!capable(CAP_SYS_ADMIN)) {
+ ret = -EPERM;
+@@ -4572,7 +4577,32 @@ static int btrfs_ioctl_encoded_read(struct file *file, void __user *argp,
+ init_sync_kiocb(&kiocb, file);
+ kiocb.ki_pos = pos;
+
+- ret = btrfs_encoded_read(&kiocb, &iter, &args);
++ ret = btrfs_encoded_read(&kiocb, &iter, &args, &cached_state,
++ &disk_bytenr, &disk_io_size);
++
++ if (ret == -EIOCBQUEUED) {
++ bool unlocked = false;
++ u64 start, lockend, count;
++
++ start = ALIGN_DOWN(kiocb.ki_pos, fs_info->sectorsize);
++ lockend = start + BTRFS_MAX_UNCOMPRESSED - 1;
++
++ if (args.compression)
++ count = disk_io_size;
++ else
++ count = args.len;
++
++ ret = btrfs_encoded_read_regular(&kiocb, &iter, start, lockend,
++ &cached_state, disk_bytenr,
++ disk_io_size, count,
++ args.compression, &unlocked);
++
++ if (!unlocked) {
++ unlock_extent(io_tree, start, lockend, &cached_state);
++ btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED);
++ }
++ }
++
+ if (ret >= 0) {
+ fsnotify_access(file);
+ if (copy_to_user(argp + copy_end,
+diff --git a/fs/btrfs/ref-verify.c b/fs/btrfs/ref-verify.c
+index 9522a8b79d22b5..2928abf7eb8271 100644
+--- a/fs/btrfs/ref-verify.c
++++ b/fs/btrfs/ref-verify.c
+@@ -857,6 +857,7 @@ int btrfs_ref_tree_mod(struct btrfs_fs_info *fs_info,
+ "dropping a ref for a root that doesn't have a ref on the block");
+ dump_block_entry(fs_info, be);
+ dump_ref_action(fs_info, ra);
++ rb_erase(&ref->node, &be->refs);
+ kfree(ref);
+ kfree(ra);
+ goto out_unlock;
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index b068469871f8e5..0cb11dcd10cd4b 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -5677,7 +5677,7 @@ static int send_encoded_extent(struct send_ctx *sctx, struct btrfs_path *path,
+ * Note that send_buf is a mapping of send_buf_pages, so this is really
+ * reading into send_buf.
+ */
+- ret = btrfs_encoded_read_regular_fill_pages(BTRFS_I(inode), offset,
++ ret = btrfs_encoded_read_regular_fill_pages(BTRFS_I(inode),
+ disk_bytenr, disk_num_bytes,
+ sctx->send_buf_pages +
+ (data_offset >> PAGE_SHIFT));
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index c4a5fd94bbbb3b..cf92b75745e2a5 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -5609,9 +5609,9 @@ void send_flush_mdlog(struct ceph_mds_session *s)
+
+ static int ceph_mds_auth_match(struct ceph_mds_client *mdsc,
+ struct ceph_mds_cap_auth *auth,
++ const struct cred *cred,
+ char *tpath)
+ {
+- const struct cred *cred = get_current_cred();
+ u32 caller_uid = from_kuid(&init_user_ns, cred->fsuid);
+ u32 caller_gid = from_kgid(&init_user_ns, cred->fsgid);
+ struct ceph_client *cl = mdsc->fsc->client;
+@@ -5734,8 +5734,9 @@ int ceph_mds_check_access(struct ceph_mds_client *mdsc, char *tpath, int mask)
+ for (i = 0; i < mdsc->s_cap_auths_num; i++) {
+ struct ceph_mds_cap_auth *s = &mdsc->s_cap_auths[i];
+
+- err = ceph_mds_auth_match(mdsc, s, tpath);
++ err = ceph_mds_auth_match(mdsc, s, cred, tpath);
+ if (err < 0) {
++ put_cred(cred);
+ return err;
+ } else if (err > 0) {
+ /* always follow the last auth caps' permission */
+@@ -5751,6 +5752,8 @@ int ceph_mds_check_access(struct ceph_mds_client *mdsc, char *tpath, int mask)
+ }
+ }
+
++ put_cred(cred);
++
+ doutc(cl, "root_squash_perms %d, rw_perms_s %p\n", root_squash_perms,
+ rw_perms_s);
+ if (root_squash_perms && rw_perms_s == NULL) {
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index 73f321b52895ea..86480e5a215e51 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -285,7 +285,9 @@ static int ceph_parse_new_source(const char *dev_name, const char *dev_name_end,
+ size_t len;
+ struct ceph_fsid fsid;
+ struct ceph_parse_opts_ctx *pctx = fc->fs_private;
++ struct ceph_options *opts = pctx->copts;
+ struct ceph_mount_options *fsopt = pctx->opts;
++ const char *name_start = dev_name;
+ char *fsid_start, *fs_name_start;
+
+ if (*dev_name_end != '=') {
+@@ -296,8 +298,14 @@ static int ceph_parse_new_source(const char *dev_name, const char *dev_name_end,
+ fsid_start = strchr(dev_name, '@');
+ if (!fsid_start)
+ return invalfc(fc, "missing cluster fsid");
+- ++fsid_start; /* start of cluster fsid */
++ len = fsid_start - name_start;
++ kfree(opts->name);
++ opts->name = kstrndup(name_start, len, GFP_KERNEL);
++ if (!opts->name)
++ return -ENOMEM;
++ dout("using %s entity name", opts->name);
+
++ ++fsid_start; /* start of cluster fsid */
+ fs_name_start = strchr(fsid_start, '.');
+ if (!fs_name_start)
+ return invalfc(fc, "missing file system name");
+diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c
+index edf205093f4358..b9ffb2ee9548ae 100644
+--- a/fs/f2fs/segment.c
++++ b/fs/f2fs/segment.c
+@@ -1290,16 +1290,18 @@ static int __submit_discard_cmd(struct f2fs_sb_info *sbi,
+ wait_list, issued);
+ return 0;
+ }
+-
+- /*
+- * Issue discard for conventional zones only if the device
+- * supports discard.
+- */
+- if (!bdev_max_discard_sectors(bdev))
+- return -EOPNOTSUPP;
+ }
+ #endif
+
++ /*
++ * Stop issuing discard in any of the below cases:
++ * 1. the device is a conventional zone, but it doesn't support discard.
++ * 2. the device is a regular device, but after a snapshot it doesn't
++ * support discard.
++ */
++ if (!bdev_max_discard_sectors(bdev))
++ return -EOPNOTSUPP;
++
+ trace_f2fs_issue_discard(bdev, dc->di.start, dc->di.len);
+
+ lstart = dc->di.lstart;
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 983fdd98fc3755..a622056f27f3a2 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1748,6 +1748,18 @@ static int f2fs_freeze(struct super_block *sb)
+
+ static int f2fs_unfreeze(struct super_block *sb)
+ {
++ struct f2fs_sb_info *sbi = F2FS_SB(sb);
++
++ /*
++ * Creating a snapshot on an LVM device updates its discard_max_bytes
++ * to zero, so drop all remaining discards here.
++ * We don't need to disable real-time discard because discard_max_bytes
++ * will recover once the snapshot is removed.
++ */
++ if (test_opt(sbi, DISCARD) && !f2fs_hw_support_discard(sbi))
++ f2fs_issue_discard_timeout(sbi);
++
+ clear_sbi_flag(F2FS_SB(sb), SBI_IS_FREEZING);
+ return 0;
+ }
+diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c
+index 984f8e6379dd47..6d0455973d641e 100644
+--- a/fs/nfsd/export.c
++++ b/fs/nfsd/export.c
+@@ -1425,9 +1425,12 @@ static int e_show(struct seq_file *m, void *p)
+ return 0;
+ }
+
+- exp_get(exp);
++ if (!cache_get_rcu(&exp->h))
++ return 0;
++
+ if (cache_check(cd, &exp->h, NULL))
+ return 0;
++
+ exp_put(exp);
+ return svc_export_show(m, cd, cp);
+ }
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index d3cfc647153993..57f8818aa47c5f 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1660,6 +1660,14 @@ static void release_open_stateid(struct nfs4_ol_stateid *stp)
+ free_ol_stateid_reaplist(&reaplist);
+ }
+
++static bool nfs4_openowner_unhashed(struct nfs4_openowner *oo)
++{
++ lockdep_assert_held(&oo->oo_owner.so_client->cl_lock);
++
++ return list_empty(&oo->oo_owner.so_strhash) &&
++ list_empty(&oo->oo_perclient);
++}
++
+ static void unhash_openowner_locked(struct nfs4_openowner *oo)
+ {
+ struct nfs4_client *clp = oo->oo_owner.so_client;
+@@ -4975,6 +4983,12 @@ init_open_stateid(struct nfs4_file *fp, struct nfsd4_open *open)
+ spin_lock(&oo->oo_owner.so_client->cl_lock);
+ spin_lock(&fp->fi_lock);
+
++ if (nfs4_openowner_unhashed(oo)) {
++ mutex_unlock(&stp->st_mutex);
++ stp = NULL;
++ goto out_unlock;
++ }
++
+ retstp = nfsd4_find_existing_open(fp, open);
+ if (retstp)
+ goto out_unlock;
+@@ -6126,6 +6140,11 @@ nfsd4_process_open2(struct svc_rqst *rqstp, struct svc_fh *current_fh, struct nf
+
+ if (!stp) {
+ stp = init_open_stateid(fp, open);
++ if (!stp) {
++ status = nfserr_jukebox;
++ goto out;
++ }
++
+ if (!open->op_stp)
+ new_stp = true;
+ }
+diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
+index 35fd3e3e177807..baa54c718bd722 100644
+--- a/fs/overlayfs/inode.c
++++ b/fs/overlayfs/inode.c
+@@ -616,8 +616,13 @@ static int ovl_security_fileattr(const struct path *realpath, struct fileattr *f
+ struct file *file;
+ unsigned int cmd;
+ int err;
++ unsigned int flags;
++
++ flags = O_RDONLY;
++ if (force_o_largefile())
++ flags |= O_LARGEFILE;
+
+- file = dentry_open(realpath, O_RDONLY, current_cred());
++ file = dentry_open(realpath, flags, current_cred());
+ if (IS_ERR(file))
+ return PTR_ERR(file);
+
+diff --git a/fs/overlayfs/util.c b/fs/overlayfs/util.c
+index edc9216f6e27ad..8f080046c59d9a 100644
+--- a/fs/overlayfs/util.c
++++ b/fs/overlayfs/util.c
+@@ -197,6 +197,9 @@ void ovl_dentry_init_flags(struct dentry *dentry, struct dentry *upperdentry,
+
+ bool ovl_dentry_weird(struct dentry *dentry)
+ {
++ if (!d_can_lookup(dentry) && !d_is_file(dentry) && !d_is_symlink(dentry))
++ return true;
++
+ return dentry->d_flags & (DCACHE_NEED_AUTOMOUNT |
+ DCACHE_MANAGE_TRANSIT |
+ DCACHE_OP_HASH |
+diff --git a/fs/proc/kcore.c b/fs/proc/kcore.c
+index 7a85735d584f35..e376f48c4b8bf4 100644
+--- a/fs/proc/kcore.c
++++ b/fs/proc/kcore.c
+@@ -600,6 +600,7 @@ static ssize_t read_kcore_iter(struct kiocb *iocb, struct iov_iter *iter)
+ ret = -EFAULT;
+ goto out;
+ }
++ ret = 0;
+ /*
+ * We know the bounce buffer is safe to copy from, so
+ * use _copy_to_iter() directly.
+diff --git a/fs/quota/dquot.c b/fs/quota/dquot.c
+index b40410cd39af42..71c0ce31a4c4db 100644
+--- a/fs/quota/dquot.c
++++ b/fs/quota/dquot.c
+@@ -689,6 +689,8 @@ int dquot_writeback_dquots(struct super_block *sb, int type)
+
+ WARN_ON_ONCE(!rwsem_is_locked(&sb->s_umount));
+
++ flush_delayed_work("a_release_work);
++
+ for (cnt = 0; cnt < MAXQUOTAS; cnt++) {
+ if (type != -1 && cnt != type)
+ continue;
+diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
+index d95409f3cba667..02ebcbc4882f5b 100644
+--- a/fs/xfs/libxfs/xfs_sb.c
++++ b/fs/xfs/libxfs/xfs_sb.c
+@@ -297,13 +297,6 @@ xfs_validate_sb_write(
+ * the kernel cannot support since we checked for unsupported bits in
+ * the read verifier, which means that memory is corrupt.
+ */
+- if (xfs_sb_has_compat_feature(sbp, XFS_SB_FEAT_COMPAT_UNKNOWN)) {
+- xfs_warn(mp,
+-"Corruption detected in superblock compatible features (0x%x)!",
+- (sbp->sb_features_compat & XFS_SB_FEAT_COMPAT_UNKNOWN));
+- return -EFSCORRUPTED;
+- }
+-
+ if (!xfs_is_readonly(mp) &&
+ xfs_sb_has_ro_compat_feature(sbp, XFS_SB_FEAT_RO_COMPAT_UNKNOWN)) {
+ xfs_alert(mp,
+diff --git a/include/drm/drm_panic.h b/include/drm/drm_panic.h
+index 54085d5d05c345..f4e1fa9ae607a8 100644
+--- a/include/drm/drm_panic.h
++++ b/include/drm/drm_panic.h
+@@ -64,6 +64,8 @@ struct drm_scanout_buffer {
+
+ };
+
++#ifdef CONFIG_DRM_PANIC
++
+ /**
+ * drm_panic_trylock - try to enter the panic printing critical section
+ * @dev: struct drm_device
+@@ -149,4 +151,16 @@ struct drm_scanout_buffer {
+ #define drm_panic_unlock(dev, flags) \
+ raw_spin_unlock_irqrestore(&(dev)->mode_config.panic_lock, flags)
+
++#else
++
++static inline bool drm_panic_trylock(struct drm_device *dev, unsigned long flags)
++{
++ return true;
++}
++
++static inline void drm_panic_lock(struct drm_device *dev, unsigned long flags) {}
++static inline void drm_panic_unlock(struct drm_device *dev, unsigned long flags) {}
++
++#endif
++
+ #endif /* __DRM_PANIC_H__ */
+diff --git a/include/linux/kasan.h b/include/linux/kasan.h
+index 00a3bf7c0d8f0e..6bbfc8aa42e8f4 100644
+--- a/include/linux/kasan.h
++++ b/include/linux/kasan.h
+@@ -29,6 +29,9 @@ typedef unsigned int __bitwise kasan_vmalloc_flags_t;
+ #define KASAN_VMALLOC_VM_ALLOC ((__force kasan_vmalloc_flags_t)0x02u)
+ #define KASAN_VMALLOC_PROT_NORMAL ((__force kasan_vmalloc_flags_t)0x04u)
+
++#define KASAN_VMALLOC_PAGE_RANGE 0x1 /* Apply existing page range */
++#define KASAN_VMALLOC_TLB_FLUSH 0x2 /* TLB flush */
++
+ #if defined(CONFIG_KASAN_GENERIC) || defined(CONFIG_KASAN_SW_TAGS)
+
+ #include <linux/pgtable.h>
+@@ -564,7 +567,8 @@ void kasan_populate_early_vm_area_shadow(void *start, unsigned long size);
+ int kasan_populate_vmalloc(unsigned long addr, unsigned long size);
+ void kasan_release_vmalloc(unsigned long start, unsigned long end,
+ unsigned long free_region_start,
+- unsigned long free_region_end);
++ unsigned long free_region_end,
++ unsigned long flags);
+
+ #else /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
+
+@@ -579,7 +583,8 @@ static inline int kasan_populate_vmalloc(unsigned long start,
+ static inline void kasan_release_vmalloc(unsigned long start,
+ unsigned long end,
+ unsigned long free_region_start,
+- unsigned long free_region_end) { }
++ unsigned long free_region_end,
++ unsigned long flags) { }
+
+ #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
+
+@@ -614,7 +619,8 @@ static inline int kasan_populate_vmalloc(unsigned long start,
+ static inline void kasan_release_vmalloc(unsigned long start,
+ unsigned long end,
+ unsigned long free_region_start,
+- unsigned long free_region_end) { }
++ unsigned long free_region_end,
++ unsigned long flags) { }
+
+ static inline void *kasan_unpoison_vmalloc(const void *start,
+ unsigned long size,
+diff --git a/include/linux/util_macros.h b/include/linux/util_macros.h
+index 6bb460c3e818b3..825487fb66faf9 100644
+--- a/include/linux/util_macros.h
++++ b/include/linux/util_macros.h
+@@ -4,19 +4,6 @@
+
+ #include <linux/math.h>
+
+-#define __find_closest(x, a, as, op) \
+-({ \
+- typeof(as) __fc_i, __fc_as = (as) - 1; \
+- typeof(x) __fc_x = (x); \
+- typeof(*a) const *__fc_a = (a); \
+- for (__fc_i = 0; __fc_i < __fc_as; __fc_i++) { \
+- if (__fc_x op DIV_ROUND_CLOSEST(__fc_a[__fc_i] + \
+- __fc_a[__fc_i + 1], 2)) \
+- break; \
+- } \
+- (__fc_i); \
+-})
+-
+ /**
+ * find_closest - locate the closest element in a sorted array
+ * @x: The reference value.
+@@ -25,8 +12,27 @@
+ * @as: Size of 'a'.
+ *
+ * Returns the index of the element closest to 'x'.
++ * Note: If using an array of negative numbers (or mixed positive and negative
++ * numbers), then be sure that 'x' is of a signed type to get good results.
+ */
+-#define find_closest(x, a, as) __find_closest(x, a, as, <=)
++#define find_closest(x, a, as) \
++({ \
++ typeof(as) __fc_i, __fc_as = (as) - 1; \
++ long __fc_mid_x, __fc_x = (x); \
++ long __fc_left, __fc_right; \
++ typeof(*a) const *__fc_a = (a); \
++ for (__fc_i = 0; __fc_i < __fc_as; __fc_i++) { \
++ __fc_mid_x = (__fc_a[__fc_i] + __fc_a[__fc_i + 1]) / 2; \
++ if (__fc_x <= __fc_mid_x) { \
++ __fc_left = __fc_x - __fc_a[__fc_i]; \
++ __fc_right = __fc_a[__fc_i + 1] - __fc_x; \
++ if (__fc_right < __fc_left) \
++ __fc_i++; \
++ break; \
++ } \
++ } \
++ (__fc_i); \
++})
+
+ /**
+ * find_closest_descending - locate the closest element in a sorted array
+@@ -36,9 +42,27 @@
+ * @as: Size of 'a'.
+ *
+ * Similar to find_closest() but 'a' is expected to be sorted in descending
+- * order.
++ * order. The iteration is done in reverse order, so that the comparison
++ * of '__fc_right' & '__fc_left' also works for unsigned numbers.
+ */
+-#define find_closest_descending(x, a, as) __find_closest(x, a, as, >=)
++#define find_closest_descending(x, a, as) \
++({ \
++ typeof(as) __fc_i, __fc_as = (as) - 1; \
++ long __fc_mid_x, __fc_x = (x); \
++ long __fc_left, __fc_right; \
++ typeof(*a) const *__fc_a = (a); \
++ for (__fc_i = __fc_as; __fc_i >= 1; __fc_i--) { \
++ __fc_mid_x = (__fc_a[__fc_i] + __fc_a[__fc_i - 1]) / 2; \
++ if (__fc_x <= __fc_mid_x) { \
++ __fc_left = __fc_x - __fc_a[__fc_i]; \
++ __fc_right = __fc_a[__fc_i - 1] - __fc_x; \
++ if (__fc_right < __fc_left) \
++ __fc_i--; \
++ break; \
++ } \
++ } \
++ (__fc_i); \
++})
+
+ /**
+ * is_insidevar - check if the @ptr points inside the @var memory range.
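The util_macros.h rework above drops the shared __find_closest() helper in
favour of per-direction macros that compute the midpoint explicitly and then
compare the distances to both neighbours, so a truncated negative midpoint can
no longer select the wrong element. A minimal user-space model of the
ascending variant - a sketch assuming a plain C rebuild, where
find_closest_model() and the sample arrays are illustrative rather than
kernel code:

#include <stdio.h>

/* Mirrors the new find_closest(): walk the midpoints, then tie-break by
 * comparing the actual distances to the two neighbouring elements. */
static int find_closest_model(long x, const int *a, int as)
{
	int i;

	for (i = 0; i < as - 1; i++) {
		long mid = ((long)a[i] + a[i + 1]) / 2;

		if (x <= mid) {
			if ((long)a[i + 1] - x < x - a[i])
				i++;	/* right neighbour is nearer */
			break;
		}
	}
	return i;
}

int main(void)
{
	const int pos[] = { 10, 20, 30 };
	const int mix[] = { -21, -10, 0 };

	printf("%d\n", find_closest_model(24, pos, 3));		/* 1 -> 20 */
	printf("%d\n", find_closest_model(26, pos, 3));		/* 2 -> 30 */
	printf("%d\n", find_closest_model(-15, mix, 3));	/* 1 -> -10 */
	return 0;
}

The -15 case is the one the old macro got wrong: C division truncates
(-21 + -10) / 2 toward zero, so only the explicit distance comparison picks
the genuinely closer element.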
+diff --git a/include/uapi/linux/if_link.h b/include/uapi/linux/if_link.h
+index 6dc258993b1770..2acc7687e017a9 100644
+--- a/include/uapi/linux/if_link.h
++++ b/include/uapi/linux/if_link.h
+@@ -1292,6 +1292,19 @@ enum netkit_mode {
+ NETKIT_L3,
+ };
+
++/* NETKIT_SCRUB_NONE leaves clearing skb->{mark,priority} up to
++ * the BPF program if attached. This also means the latter can
++ * consume the two fields if they were populated earlier.
++ *
++ * NETKIT_SCRUB_DEFAULT zeroes skb->{mark,priority} fields before
++ * invoking the attached BPF program when the peer device resides
++ * in a different network namespace. This is the default behavior.
++ */
++enum netkit_scrub {
++ NETKIT_SCRUB_NONE,
++ NETKIT_SCRUB_DEFAULT,
++};
++
+ enum {
+ IFLA_NETKIT_UNSPEC,
+ IFLA_NETKIT_PEER_INFO,
+@@ -1299,6 +1312,8 @@ enum {
+ IFLA_NETKIT_POLICY,
+ IFLA_NETKIT_PEER_POLICY,
+ IFLA_NETKIT_MODE,
++ IFLA_NETKIT_SCRUB,
++ IFLA_NETKIT_PEER_SCRUB,
+ __IFLA_NETKIT_MAX,
+ };
+ #define IFLA_NETKIT_MAX (__IFLA_NETKIT_MAX - 1)
+diff --git a/kernel/signal.c b/kernel/signal.c
+index cbabb2d05e0ac8..2ae45e6eb6bb8e 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1986,14 +1986,15 @@ int send_sigqueue(struct sigqueue *q, struct pid *pid, enum pid_type type)
+ * into t->pending).
+ *
+ * Where type is not PIDTYPE_PID, signals must be delivered to the
+- * process. In this case, prefer to deliver to current if it is in
+- * the same thread group as the target process, which avoids
+- * unnecessarily waking up a potentially idle task.
++ * process. In this case, prefer to deliver to current if it is in the
++ * same thread group as the target process and its sighand is stable,
++ * which avoids unnecessarily waking up a potentially idle task.
+ */
+ t = pid_task(pid, type);
+ if (!t)
+ goto ret;
+- if (type != PIDTYPE_PID && same_thread_group(t, current))
++ if (type != PIDTYPE_PID &&
++ same_thread_group(t, current) && !current->exit_state)
+ t = current;
+ if (!likely(lock_task_sighand(t, &flags)))
+ goto ret;
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 4c28dd177ca650..3dd3b97d8049ae 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -883,6 +883,10 @@ static void profile_graph_return(struct ftrace_graph_ret *trace,
+ }
+
+ static struct fgraph_ops fprofiler_ops = {
++ .ops = {
++ .flags = FTRACE_OPS_FL_INITIALIZED,
++ INIT_OPS_HASH(fprofiler_ops.ops)
++ },
+ .entryfunc = &profile_graph_entry,
+ .retfunc = &profile_graph_return,
+ };
+@@ -5076,6 +5080,9 @@ ftrace_mod_callback(struct trace_array *tr, struct ftrace_hash *hash,
+ char *func;
+ int ret;
+
++ if (!tr)
++ return -ENODEV;
++
+ /* match_records() modifies func, and we need the original */
+ func = kstrdup(func_orig, GFP_KERNEL);
+ if (!func)
+diff --git a/lib/kunit/debugfs.c b/lib/kunit/debugfs.c
+index d548750a325ace..b25d214b93e161 100644
+--- a/lib/kunit/debugfs.c
++++ b/lib/kunit/debugfs.c
+@@ -212,8 +212,11 @@ void kunit_debugfs_create_suite(struct kunit_suite *suite)
+
+ err:
+ string_stream_destroy(suite->log);
+- kunit_suite_for_each_test_case(suite, test_case)
++ suite->log = NULL;
++ kunit_suite_for_each_test_case(suite, test_case) {
+ string_stream_destroy(test_case->log);
++ test_case->log = NULL;
++ }
+ }
+
+ void kunit_debugfs_destroy_suite(struct kunit_suite *suite)
+diff --git a/lib/kunit/kunit-test.c b/lib/kunit/kunit-test.c
+index 37e02be1e71015..d9c781c859fde1 100644
+--- a/lib/kunit/kunit-test.c
++++ b/lib/kunit/kunit-test.c
+@@ -805,6 +805,8 @@ static void kunit_device_driver_test(struct kunit *test)
+ struct device *test_device;
+ struct driver_test_state *test_state = kunit_kzalloc(test, sizeof(*test_state), GFP_KERNEL);
+
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, test_state);
++
+ test->priv = test_state;
+ test_driver = kunit_driver_create(test, "my_driver");
+
+diff --git a/lib/maple_tree.c b/lib/maple_tree.c
+index 3619301dda2ebe..8d83e217271967 100644
+--- a/lib/maple_tree.c
++++ b/lib/maple_tree.c
+@@ -3439,9 +3439,20 @@ static inline int mas_root_expand(struct ma_state *mas, void *entry)
+ return slot;
+ }
+
++/*
++ * mas_store_root() - Storing value into root.
++ * @mas: The maple state
++ * @entry: The entry to store.
++ *
++ * There is no root node now and we are storing a value into the root - this
++ * function either assigns the pointer or expands into a node.
++ */
+ static inline void mas_store_root(struct ma_state *mas, void *entry)
+ {
+- if (likely((mas->last != 0) || (mas->index != 0)))
++ if (!entry) {
++ if (!mas->index)
++ rcu_assign_pointer(mas->tree->ma_root, NULL);
++ } else if (likely((mas->last != 0) || (mas->index != 0)))
+ mas_root_expand(mas, entry);
+ else if (((unsigned long) (entry) & 3) == 2)
+ mas_root_expand(mas, entry);
+diff --git a/mm/damon/tests/vaddr-kunit.h b/mm/damon/tests/vaddr-kunit.h
+index a339d117150fba..a149e354bb2689 100644
+--- a/mm/damon/tests/vaddr-kunit.h
++++ b/mm/damon/tests/vaddr-kunit.h
+@@ -300,6 +300,7 @@ static void damon_test_split_evenly(struct kunit *test)
+ damon_test_split_evenly_fail(test, 0, 100, 0);
+ damon_test_split_evenly_succ(test, 0, 100, 10);
+ damon_test_split_evenly_succ(test, 5, 59, 5);
++ damon_test_split_evenly_succ(test, 0, 3, 2);
+ damon_test_split_evenly_fail(test, 5, 6, 2);
+ }
+
+diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
+index 08cfd22b524925..dba3b2f4d75813 100644
+--- a/mm/damon/vaddr.c
++++ b/mm/damon/vaddr.c
+@@ -67,6 +67,7 @@ static int damon_va_evenly_split_region(struct damon_target *t,
+ unsigned long sz_orig, sz_piece, orig_end;
+ struct damon_region *n = NULL, *next;
+ unsigned long start;
++ unsigned int i;
+
+ if (!r || !nr_pieces)
+ return -EINVAL;
+@@ -80,8 +81,7 @@ static int damon_va_evenly_split_region(struct damon_target *t,
+
+ r->ar.end = r->ar.start + sz_piece;
+ next = damon_next_region(r);
+- for (start = r->ar.end; start + sz_piece <= orig_end;
+- start += sz_piece) {
++ for (start = r->ar.end, i = 1; i < nr_pieces; start += sz_piece, i++) {
+ n = damon_new_region(start, start + sz_piece);
+ if (!n)
+ return -ENOMEM;
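The vaddr.c change above switches damon_va_evenly_split_region() from a
range-driven loop to a piece-counting one: with (start 0, len 3, nr_pieces 2)
the old "start + sz_piece <= orig_end" condition kept producing sz_piece-sized
regions until the range ran out, giving three pieces instead of the two that
were asked for. A toy model of the counted split - a sketch using plain
numbers in place of struct damon_region, with split_evenly() purely
illustrative:

#include <stdio.h>

/* Exactly nr_pieces pieces; the last one absorbs any remainder, as the
 * fixed loop does when the final region is extended to orig_end. */
static void split_evenly(unsigned long start, unsigned long end,
			 unsigned int nr_pieces)
{
	unsigned long sz_piece = (end - start) / nr_pieces;
	unsigned long s = start;
	unsigned int i;

	for (i = 0; i < nr_pieces; i++) {
		unsigned long e = (i == nr_pieces - 1) ? end : s + sz_piece;

		printf("[%lu, %lu)\n", s, e);
		s = e;
	}
}

int main(void)
{
	split_evenly(0, 3, 2);	/* [0, 1) and [1, 3): the new kunit case */
	return 0;
}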
+diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
+index d6210ca48ddab9..88d1c9dcb50721 100644
+--- a/mm/kasan/shadow.c
++++ b/mm/kasan/shadow.c
+@@ -489,7 +489,8 @@ static int kasan_depopulate_vmalloc_pte(pte_t *ptep, unsigned long addr,
+ */
+ void kasan_release_vmalloc(unsigned long start, unsigned long end,
+ unsigned long free_region_start,
+- unsigned long free_region_end)
++ unsigned long free_region_end,
++ unsigned long flags)
+ {
+ void *shadow_start, *shadow_end;
+ unsigned long region_start, region_end;
+@@ -522,12 +523,17 @@ void kasan_release_vmalloc(unsigned long start, unsigned long end,
+ __memset(shadow_start, KASAN_SHADOW_INIT, shadow_end - shadow_start);
+ return;
+ }
+- apply_to_existing_page_range(&init_mm,
++
++
++ if (flags & KASAN_VMALLOC_PAGE_RANGE)
++ apply_to_existing_page_range(&init_mm,
+ (unsigned long)shadow_start,
+ size, kasan_depopulate_vmalloc_pte,
+ NULL);
+- flush_tlb_kernel_range((unsigned long)shadow_start,
+- (unsigned long)shadow_end);
++
++ if (flags & KASAN_VMALLOC_TLB_FLUSH)
++ flush_tlb_kernel_range((unsigned long)shadow_start,
++ (unsigned long)shadow_end);
+ }
+ }
+
+diff --git a/mm/slab.h b/mm/slab.h
+index 6c6fe6d630ce3d..92ca5ff2037534 100644
+--- a/mm/slab.h
++++ b/mm/slab.h
+@@ -73,6 +73,11 @@ struct slab {
+ struct {
+ unsigned inuse:16;
+ unsigned objects:15;
++ /*
++ * If slab debugging is enabled then the
++ * frozen bit can be reused to indicate
++ * that the slab was corrupted
++ */
+ unsigned frozen:1;
+ };
+ };
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 893d3205991518..477fa471da1859 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -230,7 +230,7 @@ static struct kmem_cache *create_cache(const char *name,
+ if (args->use_freeptr_offset &&
+ (args->freeptr_offset >= object_size ||
+ !(flags & SLAB_TYPESAFE_BY_RCU) ||
+- !IS_ALIGNED(args->freeptr_offset, sizeof(freeptr_t))))
++ !IS_ALIGNED(args->freeptr_offset, __alignof__(freeptr_t))))
+ goto out;
+
+ err = -ENOMEM;
+diff --git a/mm/slub.c b/mm/slub.c
+index 5b832512044e3e..15ba89fef89a1f 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -1423,6 +1423,11 @@ static int check_slab(struct kmem_cache *s, struct slab *slab)
+ slab->inuse, slab->objects);
+ return 0;
+ }
++ if (slab->frozen) {
++ slab_err(s, slab, "Slab disabled since SLUB metadata consistency check failed");
++ return 0;
++ }
++
+ /* Slab_pad_check fixes things up after itself */
+ slab_pad_check(s, slab);
+ return 1;
+@@ -1603,6 +1608,7 @@ static noinline bool alloc_debug_processing(struct kmem_cache *s,
+ slab_fix(s, "Marking all objects used");
+ slab->inuse = slab->objects;
+ slab->freelist = NULL;
++ slab->frozen = 1; /* mark consistency-failed slab as frozen */
+ }
+ return false;
+ }
+@@ -2744,7 +2750,8 @@ static void *alloc_single_from_partial(struct kmem_cache *s,
+ slab->inuse++;
+
+ if (!alloc_debug_processing(s, slab, object, orig_size)) {
+- remove_partial(n, slab);
++ if (folio_test_slab(slab_folio(slab)))
++ remove_partial(n, slab);
+ return NULL;
+ }
+
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 634162271c0045..5480b77f4167d7 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -2182,6 +2182,25 @@ decay_va_pool_node(struct vmap_node *vn, bool full_decay)
+ reclaim_list_global(&decay_list);
+ }
+
++static void
++kasan_release_vmalloc_node(struct vmap_node *vn)
++{
++ struct vmap_area *va;
++ unsigned long start, end;
++
++ start = list_first_entry(&vn->purge_list, struct vmap_area, list)->va_start;
++ end = list_last_entry(&vn->purge_list, struct vmap_area, list)->va_end;
++
++ list_for_each_entry(va, &vn->purge_list, list) {
++ if (is_vmalloc_or_module_addr((void *) va->va_start))
++ kasan_release_vmalloc(va->va_start, va->va_end,
++ va->va_start, va->va_end,
++ KASAN_VMALLOC_PAGE_RANGE);
++ }
++
++ kasan_release_vmalloc(start, end, start, end, KASAN_VMALLOC_TLB_FLUSH);
++}
++
+ static void purge_vmap_node(struct work_struct *work)
+ {
+ struct vmap_node *vn = container_of(work,
+@@ -2190,20 +2209,17 @@ static void purge_vmap_node(struct work_struct *work)
+ struct vmap_area *va, *n_va;
+ LIST_HEAD(local_list);
+
++ if (IS_ENABLED(CONFIG_KASAN_VMALLOC))
++ kasan_release_vmalloc_node(vn);
++
+ vn->nr_purged = 0;
+
+ list_for_each_entry_safe(va, n_va, &vn->purge_list, list) {
+ unsigned long nr = va_size(va) >> PAGE_SHIFT;
+- unsigned long orig_start = va->va_start;
+- unsigned long orig_end = va->va_end;
+ unsigned int vn_id = decode_vn_id(va->flags);
+
+ list_del_init(&va->list);
+
+- if (is_vmalloc_or_module_addr((void *)orig_start))
+- kasan_release_vmalloc(orig_start, orig_end,
+- va->va_start, va->va_end);
+-
+ nr_purged_pages += nr;
+ vn->nr_purged++;
+
+@@ -4784,7 +4800,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
+ &free_vmap_area_list);
+ if (va)
+ kasan_release_vmalloc(orig_start, orig_end,
+- va->va_start, va->va_end);
++ va->va_start, va->va_end,
++ KASAN_VMALLOC_PAGE_RANGE | KASAN_VMALLOC_TLB_FLUSH);
+ vas[area] = NULL;
+ }
+
+@@ -4834,7 +4851,8 @@ struct vm_struct **pcpu_get_vm_areas(const unsigned long *offsets,
+ &free_vmap_area_list);
+ if (va)
+ kasan_release_vmalloc(orig_start, orig_end,
+- va->va_start, va->va_end);
++ va->va_start, va->va_end,
++ KASAN_VMALLOC_PAGE_RANGE | KASAN_VMALLOC_TLB_FLUSH);
+ vas[area] = NULL;
+ kfree(vms[area]);
+ }
+diff --git a/mm/vmstat.c b/mm/vmstat.c
+index ac6a5aa34eabba..3f41344239126b 100644
+--- a/mm/vmstat.c
++++ b/mm/vmstat.c
+@@ -1780,6 +1780,7 @@ static void zoneinfo_show_print(struct seq_file *m, pg_data_t *pgdat,
+ zone_page_state(zone, i));
+
+ #ifdef CONFIG_NUMA
++ fold_vm_zone_numa_events(zone);
+ for (i = 0; i < NR_VM_NUMA_EVENT_ITEMS; i++)
+ seq_printf(m, "\n %-12s %lu", numa_stat_name(i),
+ zone_numa_event_state(zone, i));
+diff --git a/tools/perf/pmu-events/empty-pmu-events.c b/tools/perf/pmu-events/empty-pmu-events.c
+index 873e9fb2041f02..a9263bd948c41d 100644
+--- a/tools/perf/pmu-events/empty-pmu-events.c
++++ b/tools/perf/pmu-events/empty-pmu-events.c
+@@ -539,17 +539,7 @@ const struct pmu_metrics_table *perf_pmu__find_metrics_table(struct perf_pmu *pm
+ if (!map)
+ return NULL;
+
+- if (!pmu)
+- return &map->metric_table;
+-
+- for (size_t i = 0; i < map->metric_table.num_pmus; i++) {
+- const struct pmu_table_entry *table_pmu = &map->metric_table.pmus[i];
+- const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset];
+-
+- if (pmu__name_match(pmu, pmu_name))
+- return &map->metric_table;
+- }
+- return NULL;
++ return &map->metric_table;
+ }
+
+ const struct pmu_events_table *find_core_events_table(const char *arch, const char *cpuid)
+diff --git a/tools/perf/pmu-events/jevents.py b/tools/perf/pmu-events/jevents.py
+index d46a22fb5573de..4145e027775316 100755
+--- a/tools/perf/pmu-events/jevents.py
++++ b/tools/perf/pmu-events/jevents.py
+@@ -1089,17 +1089,7 @@ const struct pmu_metrics_table *perf_pmu__find_metrics_table(struct perf_pmu *pm
+ if (!map)
+ return NULL;
+
+- if (!pmu)
+- return &map->metric_table;
+-
+- for (size_t i = 0; i < map->metric_table.num_pmus; i++) {
+- const struct pmu_table_entry *table_pmu = &map->metric_table.pmus[i];
+- const char *pmu_name = &big_c_string[table_pmu->pmu_name.offset];
+-
+- if (pmu__name_match(pmu, pmu_name))
+- return &map->metric_table;
+- }
+- return NULL;
++ return &map->metric_table;
+ }
+
+ const struct pmu_events_table *find_core_events_table(const char *arch, const char *cpuid)
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-09 23:13 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-12-09 23:13 UTC (permalink / raw
To: gentoo-commits
commit: 42337dcbb74c47c507f2628074a83f937cd1cf1a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Dec 9 23:12:52 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Dec 9 23:12:52 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=42337dcb
drm/display: Fix building with GCC 15
Bug: https://bugs.gentoo.org/946130
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++++
2700_drm-display-GCC15.patch | 52 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 56 insertions(+)
diff --git a/0000_README b/0000_README
index 87f43cf7..b2e6beb3 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+Patch: 2700_drm-display-GCC15.patch
+From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
+Desc: drm/display: Fix building with GCC 15
+
Patch: 2901_tools-lib-subcmd-compile-fix.patch
From: https://lore.kernel.org/all/20240731085217.94928-1-michael.weiss@aisec.fraunhofer.de/
Desc: tools lib subcmd: Fixed uninitialized use of variable in parse-options
diff --git a/2700_drm-display-GCC15.patch b/2700_drm-display-GCC15.patch
new file mode 100644
index 00000000..0be775ea
--- /dev/null
+++ b/2700_drm-display-GCC15.patch
@@ -0,0 +1,52 @@
+From a500f3751d3c861be7e4463c933cf467240cca5d Mon Sep 17 00:00:00 2001
+From: Brahmajit Das <brahmajit.xyz@gmail.com>
+Date: Wed, 2 Oct 2024 14:53:11 +0530
+Subject: drm/display: Fix building with GCC 15
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+GCC 15 enables -Werror=unterminated-string-initialization by default.
+This results in the following build error
+
+drivers/gpu/drm/display/drm_dp_dual_mode_helper.c: In function ‘is_hdmi_adaptor’:
+drivers/gpu/drm/display/drm_dp_dual_mode_helper.c:164:17: error: initializer-string for array of
+ ‘char’ is too long [-Werror=unterminated-string-initialization]
+ 164 | "DP-HDMI ADAPTOR\x04";
+ | ^~~~~~~~~~~~~~~~~~~~~
+
+After discussion with Ville, the fix was to increase the size of
+dp_dual_mode_hdmi_id array by one, so that it can accommodate the NUL
+terminating character. This should let us build the kernel with GCC 15.
+
+Signed-off-by: Brahmajit Das <brahmajit.xyz@gmail.com>
+Reviewed-by: Jani Nikula <jani.nikula@intel.com>
+Link: https://patchwork.freedesktop.org/patch/msgid/20241002092311.942822-1-brahmajit.xyz@gmail.com
+Signed-off-by: Jani Nikula <jani.nikula@intel.com>
+---
+ drivers/gpu/drm/display/drm_dp_dual_mode_helper.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+(limited to 'drivers/gpu/drm/display/drm_dp_dual_mode_helper.c')
+
+diff --git a/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c b/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
+index 14a2a8473682b0..c491e3203bf11c 100644
+--- a/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
+@@ -160,11 +160,11 @@ EXPORT_SYMBOL(drm_dp_dual_mode_write);
+
+ static bool is_hdmi_adaptor(const char hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN])
+ {
+- static const char dp_dual_mode_hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN] =
++ static const char dp_dual_mode_hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN + 1] =
+ "DP-HDMI ADAPTOR\x04";
+
+ return memcmp(hdmi_id, dp_dual_mode_hdmi_id,
+- sizeof(dp_dual_mode_hdmi_id)) == 0;
++ DP_DUAL_MODE_HDMI_ID_LEN) == 0;
+ }
+
+ static bool is_type1_adaptor(uint8_t adaptor_id)
+--
+cgit 1.2.3-korg
+
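The warning this commit works around is easy to reproduce outside the kernel.
A small sketch, assuming GCC 15 where -Wunterminated-string-initialization is
enabled by default (HDMI_ID_LEN and the identifiers mirror the patch but are
illustrative):

#include <string.h>

#define HDMI_ID_LEN 16	/* stands in for DP_DUAL_MODE_HDMI_ID_LEN */

/* "DP-HDMI ADAPTOR\x04" is exactly 16 characters, so the string literal's
 * terminating NUL does not fit in a 16-byte array: GCC 15 warns, and
 * -Werror turns that into the build failure quoted above. One extra byte
 * keeps the NUL. */
static const char bad_id[HDMI_ID_LEN] = "DP-HDMI ADAPTOR\x04";	/* warns */
static const char good_id[HDMI_ID_LEN + 1] = "DP-HDMI ADAPTOR\x04";	/* ok */

/* The comparison still covers HDMI_ID_LEN bytes only, as the patched
 * is_hdmi_adaptor() does, so behaviour is unchanged. */
static int is_hdmi(const char id[HDMI_ID_LEN])
{
	return memcmp(id, good_id, HDMI_ID_LEN) == 0;
}

int main(void)
{
	return !is_hdmi(bad_id);	/* first 16 bytes match: exits 0 */
}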
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-11 21:01 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-12-11 21:01 UTC (permalink / raw
To: gentoo-commits
commit: 3cf228ef3b389e949f1242512c85121af823b397
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Dec 11 21:01:01 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Dec 11 21:01:01 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=3cf228ef
Add x86/pkeys fixes
Bug: https://bugs.gentoo.org/946182
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 ++
...-change-caller-of-update_pkru_in_sigframe.patch | 107 +++++++++++++++++++++
...eys-ensure-updated-pkru-value-is-xrstor-d.patch | 96 ++++++++++++++++++
3 files changed, 211 insertions(+)
diff --git a/0000_README b/0000_README
index b2e6beb3..81375872 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,14 @@ Patch: 1730_parisc-Disable-prctl.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
+Patch: 1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
+From: https://git.kernel.org/
+Desc: x86/pkeys: Change caller of update_pkru_in_sigframe()
+
+Patch: 1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
+From: https://git.kernel.org/
+Desc: x86/pkeys: Ensure updated PKRU value is XRSTOR'd
+
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch b/1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
new file mode 100644
index 00000000..3a1fbd82
--- /dev/null
+++ b/1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
@@ -0,0 +1,107 @@
+From 5683d0ce8fb46f36315a2b508f90ec6221cda018 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 19 Nov 2024 17:45:19 +0000
+Subject: x86/pkeys: Change caller of update_pkru_in_sigframe()
+
+From: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
+
+[ Upstream commit 6a1853bdf17874392476b552398df261f75503e0 ]
+
+update_pkru_in_sigframe() will shortly need some information which
+is only available inside xsave_to_user_sigframe(). Move
+update_pkru_in_sigframe() inside the other function to make it
+easier to provide it that information.
+
+No functional changes.
+
+Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Link: https://lore.kernel.org/all/20241119174520.3987538-2-aruna.ramakrishna%40oracle.com
+Stable-dep-of: ae6012d72fa6 ("x86/pkeys: Ensure updated PKRU value is XRSTOR'd")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ arch/x86/kernel/fpu/signal.c | 20 ++------------------
+ arch/x86/kernel/fpu/xstate.h | 15 ++++++++++++++-
+ 2 files changed, 16 insertions(+), 19 deletions(-)
+
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 1065ab995305c..8f62e0666dea5 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -63,16 +63,6 @@ static inline bool check_xstate_in_sigframe(struct fxregs_state __user *fxbuf,
+ return true;
+ }
+
+-/*
+- * Update the value of PKRU register that was already pushed onto the signal frame.
+- */
+-static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
+-{
+- if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
+- return 0;
+- return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
+-}
+-
+ /*
+ * Signal frame handlers.
+ */
+@@ -168,14 +158,8 @@ static inline bool save_xstate_epilog(void __user *buf, int ia32_frame,
+
+ static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf, u32 pkru)
+ {
+- int err = 0;
+-
+- if (use_xsave()) {
+- err = xsave_to_user_sigframe(buf);
+- if (!err)
+- err = update_pkru_in_sigframe(buf, pkru);
+- return err;
+- }
++ if (use_xsave())
++ return xsave_to_user_sigframe(buf, pkru);
+
+ if (use_fxsr())
+ return fxsave_to_user_sigframe((struct fxregs_state __user *) buf);
+diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
+index 0b86a5002c846..6b2924fbe5b8d 100644
+--- a/arch/x86/kernel/fpu/xstate.h
++++ b/arch/x86/kernel/fpu/xstate.h
+@@ -69,6 +69,16 @@ static inline u64 xfeatures_mask_independent(void)
+ return fpu_kernel_cfg.independent_features;
+ }
+
++/*
++ * Update the value of PKRU register that was already pushed onto the signal frame.
++ */
++static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
++{
++ if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
++ return 0;
++ return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
++}
++
+ /* XSAVE/XRSTOR wrapper functions */
+
+ #ifdef CONFIG_X86_64
+@@ -256,7 +266,7 @@ static inline u64 xfeatures_need_sigframe_write(void)
+ * The caller has to zero buf::header before calling this because XSAVE*
+ * does not touch the reserved fields in the header.
+ */
+-static inline int xsave_to_user_sigframe(struct xregs_state __user *buf)
++static inline int xsave_to_user_sigframe(struct xregs_state __user *buf, u32 pkru)
+ {
+ /*
+ * Include the features which are not xsaved/rstored by the kernel
+@@ -281,6 +291,9 @@ static inline int xsave_to_user_sigframe(struct xregs_state __user *buf)
+ XSTATE_OP(XSAVE, buf, lmask, hmask, err);
+ clac();
+
++ if (!err)
++ err = update_pkru_in_sigframe(buf, pkru);
++
+ return err;
+ }
+
+--
+2.43.0
+
diff --git a/1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch b/1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
new file mode 100644
index 00000000..11b1f768
--- /dev/null
+++ b/1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
@@ -0,0 +1,96 @@
+From 24fedf2768fd57e0d767137044c4f7493357b325 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 19 Nov 2024 17:45:20 +0000
+Subject: x86/pkeys: Ensure updated PKRU value is XRSTOR'd
+
+From: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
+
+[ Upstream commit ae6012d72fa60c9ff92de5bac7a8021a47458e5b ]
+
+When XSTATE_BV[i] is 0, and XRSTOR attempts to restore state component
+'i' it ignores any value in the XSAVE buffer and instead restores the
+state component's init value.
+
+This means that if XSAVE writes XSTATE_BV[PKRU]=0 then XRSTOR will
+ignore the value that update_pkru_in_sigframe() writes to the XSAVE buffer.
+
+XSTATE_BV[PKRU] only gets written as 0 if PKRU is in its init state. On
+Intel CPUs, basically never happens because the kernel usually
+overwrites the init value (aside: this is why we didn't notice this bug
+until now). But on AMD, the init tracker is more aggressive and will
+track PKRU as being in its init state upon any wrpkru(0x0).
+Unfortunately, sig_prepare_pkru() does just that: wrpkru(0x0).
+
+This writes XSTATE_BV[PKRU]=0 which makes XRSTOR ignore the PKRU value
+in the sigframe.
+
+To fix this, always overwrite the sigframe XSTATE_BV with a value that
+has XSTATE_BV[PKRU]==1. This ensures that XRSTOR will not ignore what
+update_pkru_in_sigframe() wrote.
+
+The problematic sequence of events is something like this:
+
+Userspace does:
+ * wrpkru(0xffff0000) (or whatever)
+ * Hardware sets: XINUSE[PKRU]=1
+Signal happens, kernel is entered:
+ * sig_prepare_pkru() => wrpkru(0x00000000)
+ * Hardware sets: XINUSE[PKRU]=0 (aggressive AMD init tracker)
+ * XSAVE writes most of XSAVE buffer, including
+ XSTATE_BV[PKRU]=XINUSE[PKRU]=0
+ * update_pkru_in_sigframe() overwrites PKRU in XSAVE buffer
+... signal handling
+ * XRSTOR sees XSTATE_BV[PKRU]==0, ignores just-written value
+ from update_pkru_in_sigframe()
+
+Fixes: 70044df250d0 ("x86/pkeys: Update PKRU to enable all pkeys before XSAVE")
+Suggested-by: Rudi Horn <rudi.horn@oracle.com>
+Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
+Link: https://lore.kernel.org/all/20241119174520.3987538-3-aruna.ramakrishna%40oracle.com
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ arch/x86/kernel/fpu/xstate.h | 16 ++++++++++++++--
+ 1 file changed, 14 insertions(+), 2 deletions(-)
+
+diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
+index 6b2924fbe5b8d..aa16f1a1bbcf1 100644
+--- a/arch/x86/kernel/fpu/xstate.h
++++ b/arch/x86/kernel/fpu/xstate.h
+@@ -72,10 +72,22 @@ static inline u64 xfeatures_mask_independent(void)
+ /*
+ * Update the value of PKRU register that was already pushed onto the signal frame.
+ */
+-static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
++static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u64 mask, u32 pkru)
+ {
++ u64 xstate_bv;
++ int err;
++
+ if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
+ return 0;
++
++ /* Mark PKRU as in-use so that it is restored correctly. */
++ xstate_bv = (mask & xfeatures_in_use()) | XFEATURE_MASK_PKRU;
++
++ err = __put_user(xstate_bv, &buf->header.xfeatures);
++ if (err)
++ return err;
++
++ /* Update PKRU value in the userspace xsave buffer. */
+ return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
+ }
+
+@@ -292,7 +304,7 @@ static inline int xsave_to_user_sigframe(struct xregs_state __user *buf, u32 pkr
+ clac();
+
+ if (!err)
+- err = update_pkru_in_sigframe(buf, pkru);
++ err = update_pkru_in_sigframe(buf, mask, pkru);
+
+ return err;
+ }
+--
+2.43.0
+
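The failure sequence in the commit message can be modelled in a few lines of
plain C. A schematic sketch, assuming a toy xsave buffer with just a header
bitmap and a PKRU slot - xrstor_pkru() models only the documented "clear bit
means init value" rule, not real XRSTOR:

#include <stdio.h>
#include <stdint.h>

#define XFEATURE_MASK_PKRU (1ull << 9)	/* PKRU is xstate component 9 */

struct xsave_model {
	uint64_t xstate_bv;	/* XSTATE_BV: which components are valid */
	uint32_t pkru;		/* the PKRU slot in the buffer */
};

/* If XSTATE_BV[PKRU] is 0, XRSTOR ignores the buffer contents and loads
 * the init value (0 for PKRU). */
static uint32_t xrstor_pkru(const struct xsave_model *buf)
{
	return (buf->xstate_bv & XFEATURE_MASK_PKRU) ? buf->pkru : 0;
}

int main(void)
{
	/* AMD's init tracker saw wrpkru(0), so XSAVE stored XSTATE_BV[PKRU]=0. */
	struct xsave_model buf = { .xstate_bv = 0, .pkru = 0 };

	buf.pkru = 0xffff0000;	/* update_pkru_in_sigframe() writes the slot */
	printf("before fix: %#x\n", xrstor_pkru(&buf));	/* 0 - value ignored */

	buf.xstate_bv |= XFEATURE_MASK_PKRU;	/* the fix: mark PKRU in-use */
	printf("after fix:  %#x\n", xrstor_pkru(&buf));	/* 0xffff0000 */
	return 0;
}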
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-14 23:47 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-12-14 23:47 UTC (permalink / raw
To: gentoo-commits
commit: 4c68b8a5598beeb003b0f59ae33deba3b220de9a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Dec 14 23:47:32 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Dec 14 23:47:32 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4c68b8a5
Linux patch 6.12.5
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1004_linux-6.12.5.patch | 33366 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 33370 insertions(+)
diff --git a/0000_README b/0000_README
index 81375872..6429d035 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch: 1003_linux-6.12.4.patch
From: https://www.kernel.org
Desc: Linux 6.12.4
+Patch: 1004_linux-6.12.5.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.5
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1004_linux-6.12.5.patch b/1004_linux-6.12.5.patch
new file mode 100644
index 00000000..6347cd6c
--- /dev/null
+++ b/1004_linux-6.12.5.patch
@@ -0,0 +1,33366 @@
+diff --git a/Documentation/ABI/testing/sysfs-bus-pci b/Documentation/ABI/testing/sysfs-bus-pci
+index 7f63c7e9777358..5da6a14dc326bd 100644
+--- a/Documentation/ABI/testing/sysfs-bus-pci
++++ b/Documentation/ABI/testing/sysfs-bus-pci
+@@ -163,6 +163,17 @@ Description:
+ will be present in sysfs. Writing 1 to this file
+ will perform reset.
+
++What: /sys/bus/pci/devices/.../reset_subordinate
++Date: October 2024
++Contact: linux-pci@vger.kernel.org
++Description:
++ This is visible only for bridge devices. If you want to reset
++ all devices attached through the subordinate bus of a specific
++ bridge device, writing 1 to this will try to do it. This will
++ affect all devices attached to the system through this bridge
++ similiar to writing 1 to their individual "reset" file, so use
++ with caution.
++
+ What: /sys/bus/pci/devices/.../vpd
+ Date: February 2008
+ Contact: Ben Hutchings <bwh@kernel.org>
+diff --git a/Documentation/ABI/testing/sysfs-fs-f2fs b/Documentation/ABI/testing/sysfs-fs-f2fs
+index 513296bb6f297f..3e1630c70d8ae7 100644
+--- a/Documentation/ABI/testing/sysfs-fs-f2fs
++++ b/Documentation/ABI/testing/sysfs-fs-f2fs
+@@ -822,3 +822,9 @@ Description: It controls the valid block ratio threshold not to trigger excessiv
+ for zoned devices. The initial value of it is 95(%). F2FS will stop the
+ background GC thread from initiating GC for sections having valid blocks
+ exceeding the ratio.
++
++What: /sys/fs/f2fs/<disk>/max_read_extent_count
++Date: November 2024
++Contact: "Chao Yu" <chao@kernel.org>
++Description: It controls max read extent count for per-inode, the value of threshold
++ is 10240 by default.
+diff --git a/Documentation/accel/qaic/aic080.rst b/Documentation/accel/qaic/aic080.rst
+new file mode 100644
+index 00000000000000..d563771ea6ce48
+--- /dev/null
++++ b/Documentation/accel/qaic/aic080.rst
+@@ -0,0 +1,14 @@
++.. SPDX-License-Identifier: GPL-2.0-only
++
++===============================
++ Qualcomm Cloud AI 80 (AIC080)
++===============================
++
++Overview
++========
++
++The Qualcomm Cloud AI 80/AIC080 family of products is a derivative of AIC100.
++The number of NSPs and clock rates are reduced to fit within resource
++constrained solutions. The PCIe Product ID is 0xa080.
++
++As a derivative product, all AIC100 documentation applies.
+diff --git a/Documentation/accel/qaic/index.rst b/Documentation/accel/qaic/index.rst
+index ad19b88d1a669e..967b9dd8baceac 100644
+--- a/Documentation/accel/qaic/index.rst
++++ b/Documentation/accel/qaic/index.rst
+@@ -10,4 +10,5 @@ accelerator cards.
+ .. toctree::
+
+ qaic
++ aic080
+ aic100
+diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
+index 65bfab1b186146..77db10e944f039 100644
+--- a/Documentation/arch/arm64/silicon-errata.rst
++++ b/Documentation/arch/arm64/silicon-errata.rst
+@@ -258,6 +258,8 @@ stable kernels.
+ | Hisilicon | Hip{08,09,10,10C| #162001900 | N/A |
+ | | ,11} SMMU PMCG | | |
+ +----------------+-----------------+-----------------+-----------------------------+
++| Hisilicon | Hip09 | #162100801 | HISILICON_ERRATUM_162100801 |
+++----------------+-----------------+-----------------+-----------------------------+
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Qualcomm Tech. | Kryo/Falkor v1 | E1003 | QCOM_FALKOR_ERRATUM_1003 |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/i2c/busses/i2c-i801.rst b/Documentation/i2c/busses/i2c-i801.rst
+index c840b597912c87..47e8ac5b7099f7 100644
+--- a/Documentation/i2c/busses/i2c-i801.rst
++++ b/Documentation/i2c/busses/i2c-i801.rst
+@@ -49,6 +49,7 @@ Supported adapters:
+ * Intel Meteor Lake (SOC and PCH)
+ * Intel Birch Stream (SOC)
+ * Intel Arrow Lake (SOC)
++ * Intel Panther Lake (SOC)
+
+ Datasheets: Publicly available at the Intel website
+
+diff --git a/Documentation/netlink/specs/ethtool.yaml b/Documentation/netlink/specs/ethtool.yaml
+index 6a050d755b9cb4..f6c5d8214c7e98 100644
+--- a/Documentation/netlink/specs/ethtool.yaml
++++ b/Documentation/netlink/specs/ethtool.yaml
+@@ -96,7 +96,12 @@ attribute-sets:
+ name: bits
+ type: nest
+ nested-attributes: bitset-bits
+-
++ -
++ name: value
++ type: binary
++ -
++ name: mask
++ type: binary
+ -
+ name: string
+ attributes:
+diff --git a/Makefile b/Makefile
+index 87dc2f81086021..f158bfe6407ac9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -456,6 +456,7 @@ export rust_common_flags := --edition=2021 \
+ -Wclippy::mut_mut \
+ -Wclippy::needless_bitwise_bool \
+ -Wclippy::needless_continue \
++ -Aclippy::needless_lifetimes \
+ -Wclippy::no_mangle_with_rust_abi \
+ -Wclippy::dbg_macro
+
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 22f8a7bca6d21c..a11a7a42edbfb5 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1232,6 +1232,17 @@ config HISILICON_ERRATUM_161600802
+
+ If unsure, say Y.
+
++config HISILICON_ERRATUM_162100801
++ bool "Hip09 162100801 erratum support"
++ default y
++ help
++ When enabling GICv4.1 in hip09, VMAPP will fail to clear some caches
++ during unmapping operation, which will cause some vSGIs lost.
++ To fix the issue, invalidate related vPE cache through GICR_INVALLR
++ after VMOVP.
++
++ If unsure, say Y.
++
+ config QCOM_FALKOR_ERRATUM_1003
+ bool "Falkor E1003: Incorrect translation due to ASID change"
+ default y
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index b756578aeaeea1..1559a239137f32 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -719,6 +719,8 @@ static int fpmr_set(struct task_struct *target, const struct user_regset *regset
+ if (!system_supports_fpmr())
+ return -EINVAL;
+
++ fpmr = target->thread.uw.fpmr;
++
+ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &fpmr, 0, count);
+ if (ret)
+ return ret;
+@@ -1418,7 +1420,7 @@ static int tagged_addr_ctrl_get(struct task_struct *target,
+ {
+ long ctrl = get_tagged_addr_ctrl(target);
+
+- if (IS_ERR_VALUE(ctrl))
++ if (WARN_ON_ONCE(IS_ERR_VALUE(ctrl)))
+ return ctrl;
+
+ return membuf_write(&to, &ctrl, sizeof(ctrl));
+@@ -1432,6 +1434,10 @@ static int tagged_addr_ctrl_set(struct task_struct *target, const struct
+ int ret;
+ long ctrl;
+
++ ctrl = get_tagged_addr_ctrl(target);
++ if (WARN_ON_ONCE(IS_ERR_VALUE(ctrl)))
++ return ctrl;
++
+ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &ctrl, 0, -1);
+ if (ret)
+ return ret;
+@@ -1463,6 +1469,8 @@ static int poe_set(struct task_struct *target, const struct
+ if (!system_supports_poe())
+ return -EINVAL;
+
++ ctrl = target->thread.por_el0;
++
+ ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf, &ctrl, 0, -1);
+ if (ret)
+ return ret;
+diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
+index 188197590fc9ce..b2ac062463273f 100644
+--- a/arch/arm64/mm/context.c
++++ b/arch/arm64/mm/context.c
+@@ -32,9 +32,9 @@ static unsigned long nr_pinned_asids;
+ static unsigned long *pinned_asid_map;
+
+ #define ASID_MASK (~GENMASK(asid_bits - 1, 0))
+-#define ASID_FIRST_VERSION (1UL << asid_bits)
++#define ASID_FIRST_VERSION (1UL << 16)
+
+-#define NUM_USER_ASIDS ASID_FIRST_VERSION
++#define NUM_USER_ASIDS (1UL << asid_bits)
+ #define ctxid2asid(asid) ((asid) & ~ASID_MASK)
+ #define asid2ctxid(asid, genid) ((asid) | (genid))
+
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 27a32ff15412aa..93ba66de160ce4 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -116,15 +116,6 @@ static void __init arch_reserve_crashkernel(void)
+
+ static phys_addr_t __init max_zone_phys(phys_addr_t zone_limit)
+ {
+- /**
+- * Information we get from firmware (e.g. DT dma-ranges) describe DMA
+- * bus constraints. Devices using DMA might have their own limitations.
+- * Some of them rely on DMA zone in low 32-bit memory. Keep low RAM
+- * DMA zone on platforms that have RAM there.
+- */
+- if (memblock_start_of_DRAM() < U32_MAX)
+- zone_limit = min(zone_limit, U32_MAX);
+-
+ return min(zone_limit, memblock_end_of_DRAM() - 1) + 1;
+ }
+
+@@ -140,6 +131,14 @@ static void __init zone_sizes_init(void)
+ acpi_zone_dma_limit = acpi_iort_dma_get_max_cpu_address();
+ dt_zone_dma_limit = of_dma_get_max_cpu_address(NULL);
+ zone_dma_limit = min(dt_zone_dma_limit, acpi_zone_dma_limit);
++ /*
++ * Information we get from firmware (e.g. DT dma-ranges) describe DMA
++ * bus constraints. Devices using DMA might have their own limitations.
++ * Some of them rely on DMA zone in low 32-bit memory. Keep low RAM
++ * DMA zone on platforms that have RAM there.
++ */
++ if (memblock_start_of_DRAM() < U32_MAX)
++ zone_dma_limit = min(zone_dma_limit, U32_MAX);
+ arm64_dma_phys_limit = max_zone_phys(zone_dma_limit);
+ max_zone_pfns[ZONE_DMA] = PFN_DOWN(arm64_dma_phys_limit);
+ #endif
+diff --git a/arch/loongarch/include/asm/hugetlb.h b/arch/loongarch/include/asm/hugetlb.h
+index 5da32c00d483fb..376c0708e2979b 100644
+--- a/arch/loongarch/include/asm/hugetlb.h
++++ b/arch/loongarch/include/asm/hugetlb.h
+@@ -29,6 +29,16 @@ static inline int prepare_hugepage_range(struct file *file,
+ return 0;
+ }
+
++#define __HAVE_ARCH_HUGE_PTE_CLEAR
++static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep, unsigned long sz)
++{
++ pte_t clear;
++
++ pte_val(clear) = (unsigned long)invalid_pte_table;
++ set_pte_at(mm, addr, ptep, clear);
++}
++
+ #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
+ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
+diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
+index 174734a23d0ac8..9d53eca66fcc70 100644
+--- a/arch/loongarch/kvm/vcpu.c
++++ b/arch/loongarch/kvm/vcpu.c
+@@ -240,7 +240,7 @@ static void kvm_late_check_requests(struct kvm_vcpu *vcpu)
+ */
+ static int kvm_enter_guest_check(struct kvm_vcpu *vcpu)
+ {
+- int ret;
++ int idx, ret;
+
+ /*
+ * Check conditions before entering the guest
+@@ -249,7 +249,9 @@ static int kvm_enter_guest_check(struct kvm_vcpu *vcpu)
+ if (ret < 0)
+ return ret;
+
++ idx = srcu_read_lock(&vcpu->kvm->srcu);
+ ret = kvm_check_requests(vcpu);
++ srcu_read_unlock(&vcpu->kvm->srcu, idx);
+
+ return ret;
+ }
+diff --git a/arch/loongarch/mm/tlb.c b/arch/loongarch/mm/tlb.c
+index 5ac9beb5f0935e..3b427b319db21d 100644
+--- a/arch/loongarch/mm/tlb.c
++++ b/arch/loongarch/mm/tlb.c
+@@ -289,7 +289,7 @@ static void setup_tlb_handler(int cpu)
+ /* Avoid lockdep warning */
+ rcutree_report_cpu_starting(cpu);
+
+-#ifdef CONFIG_NUMA
++#if defined(CONFIG_NUMA) && !defined(CONFIG_PREEMPT_RT)
+ vec_sz = sizeof(exception_handlers);
+
+ if (pcpu_handlers[cpu])
+diff --git a/arch/mips/boot/dts/loongson/ls7a-pch.dtsi b/arch/mips/boot/dts/loongson/ls7a-pch.dtsi
+index cce9428afc41fc..ee71045883e7e7 100644
+--- a/arch/mips/boot/dts/loongson/ls7a-pch.dtsi
++++ b/arch/mips/boot/dts/loongson/ls7a-pch.dtsi
+@@ -70,7 +70,6 @@ pci@1a000000 {
+ device_type = "pci";
+ #address-cells = <3>;
+ #size-cells = <2>;
+- #interrupt-cells = <2>;
+ msi-parent = <&msi>;
+
+ reg = <0 0x1a000000 0 0x02000000>,
+@@ -234,7 +233,7 @@ phy1: ethernet-phy@1 {
+ };
+ };
+
+- pci_bridge@9,0 {
++ pcie@9,0 {
+ compatible = "pci0014,7a19.1",
+ "pci0014,7a19",
+ "pciclass060400",
+@@ -244,12 +243,16 @@ pci_bridge@9,0 {
+ interrupts = <32 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 32 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@a,0 {
++ pcie@a,0 {
+ compatible = "pci0014,7a09.1",
+ "pci0014,7a09",
+ "pciclass060400",
+@@ -259,12 +262,16 @@ pci_bridge@a,0 {
+ interrupts = <33 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 33 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@b,0 {
++ pcie@b,0 {
+ compatible = "pci0014,7a09.1",
+ "pci0014,7a09",
+ "pciclass060400",
+@@ -274,12 +281,16 @@ pci_bridge@b,0 {
+ interrupts = <34 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 34 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@c,0 {
++ pcie@c,0 {
+ compatible = "pci0014,7a09.1",
+ "pci0014,7a09",
+ "pciclass060400",
+@@ -289,12 +300,16 @@ pci_bridge@c,0 {
+ interrupts = <35 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 35 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@d,0 {
++ pcie@d,0 {
+ compatible = "pci0014,7a19.1",
+ "pci0014,7a19",
+ "pciclass060400",
+@@ -304,12 +319,16 @@ pci_bridge@d,0 {
+ interrupts = <36 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 36 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@e,0 {
++ pcie@e,0 {
+ compatible = "pci0014,7a09.1",
+ "pci0014,7a09",
+ "pciclass060400",
+@@ -319,12 +338,16 @@ pci_bridge@e,0 {
+ interrupts = <37 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 37 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@f,0 {
++ pcie@f,0 {
+ compatible = "pci0014,7a29.1",
+ "pci0014,7a29",
+ "pciclass060400",
+@@ -334,12 +357,16 @@ pci_bridge@f,0 {
+ interrupts = <40 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 40 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@10,0 {
++ pcie@10,0 {
+ compatible = "pci0014,7a19.1",
+ "pci0014,7a19",
+ "pciclass060400",
+@@ -349,12 +376,16 @@ pci_bridge@10,0 {
+ interrupts = <41 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 41 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@11,0 {
++ pcie@11,0 {
+ compatible = "pci0014,7a29.1",
+ "pci0014,7a29",
+ "pciclass060400",
+@@ -364,12 +395,16 @@ pci_bridge@11,0 {
+ interrupts = <42 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 42 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@12,0 {
++ pcie@12,0 {
+ compatible = "pci0014,7a19.1",
+ "pci0014,7a19",
+ "pciclass060400",
+@@ -379,12 +414,16 @@ pci_bridge@12,0 {
+ interrupts = <43 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 43 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@13,0 {
++ pcie@13,0 {
+ compatible = "pci0014,7a29.1",
+ "pci0014,7a29",
+ "pciclass060400",
+@@ -394,12 +433,16 @@ pci_bridge@13,0 {
+ interrupts = <38 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 38 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+
+- pci_bridge@14,0 {
++ pcie@14,0 {
+ compatible = "pci0014,7a19.1",
+ "pci0014,7a19",
+ "pciclass060400",
+@@ -409,9 +452,13 @@ pci_bridge@14,0 {
+ interrupts = <39 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&pic>;
+
++ #address-cells = <3>;
++ #size-cells = <2>;
++ device_type = "pci";
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &pic 39 IRQ_TYPE_LEVEL_HIGH>;
++ ranges;
+ };
+ };
+
+diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
+index fbb68fc28ed3a5..935568d68196d0 100644
+--- a/arch/powerpc/kernel/prom_init.c
++++ b/arch/powerpc/kernel/prom_init.c
+@@ -2932,7 +2932,7 @@ static void __init fixup_device_tree_chrp(void)
+ #endif
+
+ #if defined(CONFIG_PPC64) && defined(CONFIG_PPC_PMAC)
+-static void __init fixup_device_tree_pmac(void)
++static void __init fixup_device_tree_pmac64(void)
+ {
+ phandle u3, i2c, mpic;
+ u32 u3_rev;
+@@ -2972,7 +2972,31 @@ static void __init fixup_device_tree_pmac(void)
+ &parent, sizeof(parent));
+ }
+ #else
+-#define fixup_device_tree_pmac()
++#define fixup_device_tree_pmac64()
++#endif
++
++#ifdef CONFIG_PPC_PMAC
++static void __init fixup_device_tree_pmac(void)
++{
++ __be32 val = 1;
++ char type[8];
++ phandle node;
++
++ // Some pmacs are missing #size-cells on escc nodes
++ for (node = 0; prom_next_node(&node); ) {
++ type[0] = '\0';
++ prom_getprop(node, "device_type", type, sizeof(type));
++ if (prom_strcmp(type, "escc"))
++ continue;
++
++ if (prom_getproplen(node, "#size-cells") != PROM_ERROR)
++ continue;
++
++ prom_setprop(node, NULL, "#size-cells", &val, sizeof(val));
++ }
++}
++#else
++static inline void fixup_device_tree_pmac(void) { }
+ #endif
+
+ #ifdef CONFIG_PPC_EFIKA
+@@ -3197,6 +3221,7 @@ static void __init fixup_device_tree(void)
+ fixup_device_tree_maple_memory_controller();
+ fixup_device_tree_chrp();
+ fixup_device_tree_pmac();
++ fixup_device_tree_pmac64();
+ fixup_device_tree_efika();
+ fixup_device_tree_pasemi();
+ }
+diff --git a/arch/riscv/configs/defconfig b/arch/riscv/configs/defconfig
+index 2341393cfac1ae..26c01b9e3434c4 100644
+--- a/arch/riscv/configs/defconfig
++++ b/arch/riscv/configs/defconfig
+@@ -301,7 +301,6 @@ CONFIG_DEBUG_MEMORY_INIT=y
+ CONFIG_DEBUG_PER_CPU_MAPS=y
+ CONFIG_SOFTLOCKUP_DETECTOR=y
+ CONFIG_WQ_WATCHDOG=y
+-CONFIG_DEBUG_TIMEKEEPING=y
+ CONFIG_DEBUG_RT_MUTEXES=y
+ CONFIG_DEBUG_SPINLOCK=y
+ CONFIG_DEBUG_MUTEXES=y
+diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
+index 30b20ce9a70033..83789e39d1d5e5 100644
+--- a/arch/s390/include/asm/pci.h
++++ b/arch/s390/include/asm/pci.h
+@@ -106,9 +106,10 @@ struct zpci_bus {
+ struct list_head resources;
+ struct list_head bus_next;
+ struct resource bus_resource;
+- int pchid;
++ int topo; /* TID if topo_is_tid, PCHID otherwise */
+ int domain_nr;
+- bool multifunction;
++ u8 multifunction : 1;
++ u8 topo_is_tid : 1;
+ enum pci_bus_speed max_bus_speed;
+ };
+
+@@ -129,6 +130,8 @@ struct zpci_dev {
+ u16 vfn; /* virtual function number */
+ u16 pchid; /* physical channel ID */
+ u16 maxstbl; /* Maximum store block size */
++ u16 rid; /* RID as supplied by firmware */
++ u16 tid; /* Topology for which RID is valid */
+ u8 pfgid; /* function group ID */
+ u8 pft; /* pci function type */
+ u8 port;
+@@ -139,7 +142,8 @@ struct zpci_dev {
+ u8 is_physfn : 1;
+ u8 util_str_avail : 1;
+ u8 irqs_registered : 1;
+- u8 reserved : 2;
++ u8 tid_avail : 1;
++ u8 reserved : 1;
+ unsigned int devfn; /* DEVFN part of the RID*/
+
+ u8 pfip[CLP_PFIP_NR_SEGMENTS]; /* pci function internal path */
+@@ -210,12 +214,14 @@ extern struct airq_iv *zpci_aif_sbv;
+ ----------------------------------------------------------------------------- */
+ /* Base stuff */
+ struct zpci_dev *zpci_create_device(u32 fid, u32 fh, enum zpci_state state);
++int zpci_add_device(struct zpci_dev *zdev);
+ int zpci_enable_device(struct zpci_dev *);
+ int zpci_disable_device(struct zpci_dev *);
+ int zpci_scan_configured_device(struct zpci_dev *zdev, u32 fh);
+ int zpci_deconfigure_device(struct zpci_dev *zdev);
+ void zpci_device_reserved(struct zpci_dev *zdev);
+ bool zpci_is_device_configured(struct zpci_dev *zdev);
++int zpci_scan_devices(void);
+
+ int zpci_hot_reset_device(struct zpci_dev *zdev);
+ int zpci_register_ioat(struct zpci_dev *, u8, u64, u64, u64, u8 *);
+@@ -225,7 +231,7 @@ void zpci_update_fh(struct zpci_dev *zdev, u32 fh);
+
+ /* CLP */
+ int clp_setup_writeback_mio(void);
+-int clp_scan_pci_devices(void);
++int clp_scan_pci_devices(struct list_head *scan_list);
+ int clp_query_pci_fn(struct zpci_dev *zdev);
+ int clp_enable_fh(struct zpci_dev *zdev, u32 *fh, u8 nr_dma_as);
+ int clp_disable_fh(struct zpci_dev *zdev, u32 *fh);
+diff --git a/arch/s390/include/asm/pci_clp.h b/arch/s390/include/asm/pci_clp.h
+index f0c677ddd27060..14afb9ce91f3bc 100644
+--- a/arch/s390/include/asm/pci_clp.h
++++ b/arch/s390/include/asm/pci_clp.h
+@@ -110,7 +110,8 @@ struct clp_req_query_pci {
+ struct clp_rsp_query_pci {
+ struct clp_rsp_hdr hdr;
+ u16 vfn; /* virtual fn number */
+- u16 : 3;
++ u16 : 2;
++ u16 tid_avail : 1;
+ u16 rid_avail : 1;
+ u16 is_physfn : 1;
+ u16 reserved1 : 1;
+@@ -130,8 +131,9 @@ struct clp_rsp_query_pci {
+ u64 edma; /* end dma as */
+ #define ZPCI_RID_MASK_DEVFN 0x00ff
+ u16 rid; /* BUS/DEVFN PCI address */
+- u16 reserved0;
+- u32 reserved[10];
++ u32 reserved0;
++ u16 tid;
++ u32 reserved[9];
+ u32 uid; /* user defined id */
+ u8 util_str[CLP_UTIL_STR_LEN]; /* utility string */
+ u32 reserved2[16];
+diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
+index 3317f4878eaa70..331e0654d61d78 100644
+--- a/arch/s390/kernel/perf_cpum_sf.c
++++ b/arch/s390/kernel/perf_cpum_sf.c
+@@ -1780,7 +1780,9 @@ static void cpumsf_pmu_stop(struct perf_event *event, int flags)
+ event->hw.state |= PERF_HES_STOPPED;
+
+ if ((flags & PERF_EF_UPDATE) && !(event->hw.state & PERF_HES_UPTODATE)) {
+- hw_perf_event_update(event, 1);
++ /* CPU hotplug off removes SDBs. No samples to extract. */
++ if (cpuhw->flags & PMU_F_RESERVED)
++ hw_perf_event_update(event, 1);
+ event->hw.state |= PERF_HES_UPTODATE;
+ }
+ perf_pmu_enable(event->pmu);
+diff --git a/arch/s390/pci/pci.c b/arch/s390/pci/pci.c
+index 635fd8f2acbaa2..88f72745fa59e1 100644
+--- a/arch/s390/pci/pci.c
++++ b/arch/s390/pci/pci.c
+@@ -29,6 +29,7 @@
+ #include <linux/pci.h>
+ #include <linux/printk.h>
+ #include <linux/lockdep.h>
++#include <linux/list_sort.h>
+
+ #include <asm/isc.h>
+ #include <asm/airq.h>
+@@ -778,8 +779,9 @@ int zpci_hot_reset_device(struct zpci_dev *zdev)
+ * @fh: Current Function Handle of the device to be created
+ * @state: Initial state after creation either Standby or Configured
+ *
+- * Creates a new zpci device and adds it to its, possibly newly created, zbus
+- * as well as zpci_list.
++ * Allocates a new struct zpci_dev and queries the platform for its details.
++ * If successful the device can subsequently be added to the zPCI subsystem
++ * using zpci_add_device().
+ *
+ * Returns: the zdev on success or an error pointer otherwise
+ */
+@@ -788,7 +790,6 @@ struct zpci_dev *zpci_create_device(u32 fid, u32 fh, enum zpci_state state)
+ struct zpci_dev *zdev;
+ int rc;
+
+- zpci_dbg(1, "add fid:%x, fh:%x, c:%d\n", fid, fh, state);
+ zdev = kzalloc(sizeof(*zdev), GFP_KERNEL);
+ if (!zdev)
+ return ERR_PTR(-ENOMEM);
+@@ -803,11 +804,34 @@ struct zpci_dev *zpci_create_device(u32 fid, u32 fh, enum zpci_state state)
+ goto error;
+ zdev->state = state;
+
+- kref_init(&zdev->kref);
+ mutex_init(&zdev->state_lock);
+ mutex_init(&zdev->fmb_lock);
+ mutex_init(&zdev->kzdev_lock);
+
++ return zdev;
++
++error:
++ zpci_dbg(0, "crt fid:%x, rc:%d\n", fid, rc);
++ kfree(zdev);
++ return ERR_PTR(rc);
++}
++
++/**
++ * zpci_add_device() - Add a previously created zPCI device to the zPCI subsystem
++ * @zdev: The zPCI device to be added
++ *
++ * A struct zpci_dev is added to the zPCI subsystem and to a virtual PCI bus, creating
++ * a new one as necessary. A hotplug slot is created and events start to be handled.
++ * If successful, from this point on zpci_zdev_get() and zpci_zdev_put() must be used.
++ * If adding the struct zpci_dev fails, the device was not added and should be freed.
++ *
++ * Return: 0 on success, or an error code otherwise
++ */
++int zpci_add_device(struct zpci_dev *zdev)
++{
++ int rc;
++
++ zpci_dbg(1, "add fid:%x, fh:%x, c:%d\n", zdev->fid, zdev->fh, zdev->state);
+ rc = zpci_init_iommu(zdev);
+ if (rc)
+ goto error;
+@@ -816,18 +840,17 @@ struct zpci_dev *zpci_create_device(u32 fid, u32 fh, enum zpci_state state)
+ if (rc)
+ goto error_destroy_iommu;
+
++ kref_init(&zdev->kref);
+ spin_lock(&zpci_list_lock);
+ list_add_tail(&zdev->entry, &zpci_list);
+ spin_unlock(&zpci_list_lock);
+-
+- return zdev;
++ return 0;
+
+ error_destroy_iommu:
+ zpci_destroy_iommu(zdev);
+ error:
+- zpci_dbg(0, "add fid:%x, rc:%d\n", fid, rc);
+- kfree(zdev);
+- return ERR_PTR(rc);
++ zpci_dbg(0, "add fid:%x, rc:%d\n", zdev->fid, rc);
++ return rc;
+ }
+
+ bool zpci_is_device_configured(struct zpci_dev *zdev)
+@@ -1069,6 +1092,50 @@ bool zpci_is_enabled(void)
+ return s390_pci_initialized;
+ }
+
++static int zpci_cmp_rid(void *priv, const struct list_head *a,
++ const struct list_head *b)
++{
++ struct zpci_dev *za = container_of(a, struct zpci_dev, entry);
++ struct zpci_dev *zb = container_of(b, struct zpci_dev, entry);
++
++ /*
++ * PCI functions without RID available maintain original order
++ * between themselves but sort before those with RID.
++ */
++ if (za->rid == zb->rid)
++ return za->rid_available > zb->rid_available;
++ /*
++ * PCI functions with RID sort by RID ascending.
++ */
++ return za->rid > zb->rid;
++}
++
++static void zpci_add_devices(struct list_head *scan_list)
++{
++ struct zpci_dev *zdev, *tmp;
++
++ list_sort(NULL, scan_list, &zpci_cmp_rid);
++ list_for_each_entry_safe(zdev, tmp, scan_list, entry) {
++ list_del_init(&zdev->entry);
++ if (zpci_add_device(zdev))
++ kfree(zdev);
++ }
++}
++
++int zpci_scan_devices(void)
++{
++ LIST_HEAD(scan_list);
++ int rc;
++
++ rc = clp_scan_pci_devices(&scan_list);
++ if (rc)
++ return rc;
++
++ zpci_add_devices(&scan_list);
++ zpci_bus_scan_busses();
++ return 0;
++}
++
+ static int __init pci_base_init(void)
+ {
+ int rc;
+@@ -1098,10 +1165,9 @@ static int __init pci_base_init(void)
+ if (rc)
+ goto out_irq;
+
+- rc = clp_scan_pci_devices();
++ rc = zpci_scan_devices();
+ if (rc)
+ goto out_find;
+- zpci_bus_scan_busses();
+
+ s390_pci_initialized = 1;
+ return 0;
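Taken together, the pci.c changes split bring-up into a create phase (zpci_create_device(), which only allocates and queries the platform) and an add phase (zpci_add_device(), which registers with the IOMMU, the zbus and zpci_list), with zpci_scan_devices() collecting every function on a list and sorting it by RID before adding. The standalone C sketch below mirrors the zpci_cmp_rid() ordering with userspace qsort(); note that the kernel's list_sort() is stable, which is what actually preserves the original order among RID-less functions, while qsort() only approximates that property. All names here are illustrative, not kernel API.

/* Illustrative userspace analogue of the zpci_cmp_rid() ordering above;
 * not kernel code. qsort() is unstable, unlike the kernel's list_sort(). */
#include <stdio.h>
#include <stdlib.h>

struct fn {
	unsigned int rid;
	int rid_available;
};

static int cmp_rid(const void *a, const void *b)
{
	const struct fn *za = a, *zb = b;

	/* Functions without a RID sort before those with one ... */
	if (za->rid == zb->rid)
		return za->rid_available - zb->rid_available;
	/* ... and functions with a RID sort by RID, ascending. */
	return za->rid > zb->rid ? 1 : -1;
}

int main(void)
{
	struct fn fns[] = { { 0x18, 1 }, { 0, 0 }, { 0x08, 1 } };
	size_t i;

	qsort(fns, 3, sizeof(fns[0]), cmp_rid);
	for (i = 0; i < 3; i++)
		printf("rid=%#04x rid_available=%d\n",
		       fns[i].rid, fns[i].rid_available);
	return 0;
}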
+diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c
+index daa5d7450c7d38..1b74a000ff6459 100644
+--- a/arch/s390/pci/pci_bus.c
++++ b/arch/s390/pci/pci_bus.c
+@@ -168,9 +168,16 @@ void zpci_bus_scan_busses(void)
+ mutex_unlock(&zbus_list_lock);
+ }
+
++static bool zpci_bus_is_multifunction_root(struct zpci_dev *zdev)
++{
++ return !s390_pci_no_rid && zdev->rid_available &&
++ zpci_is_device_configured(zdev) &&
++ !zdev->vfn;
++}
++
+ /* zpci_bus_create_pci_bus - Create the PCI bus associated with this zbus
+ * @zbus: the zbus holding the zdevices
+- * @fr: PCI root function that will determine the bus's domain, and bus speeed
++ * @fr: PCI root function that will determine the bus's domain, and bus speed
+ * @ops: the pci operations
+ *
+ * The PCI function @fr determines the domain (its UID), multifunction property
+@@ -188,7 +195,7 @@ static int zpci_bus_create_pci_bus(struct zpci_bus *zbus, struct zpci_dev *fr, s
+ return domain;
+
+ zbus->domain_nr = domain;
+- zbus->multifunction = fr->rid_available;
++ zbus->multifunction = zpci_bus_is_multifunction_root(fr);
+ zbus->max_bus_speed = fr->max_bus_speed;
+
+ /*
+@@ -232,13 +239,15 @@ static void zpci_bus_put(struct zpci_bus *zbus)
+ kref_put(&zbus->kref, zpci_bus_release);
+ }
+
+-static struct zpci_bus *zpci_bus_get(int pchid)
++static struct zpci_bus *zpci_bus_get(int topo, bool topo_is_tid)
+ {
+ struct zpci_bus *zbus;
+
+ mutex_lock(&zbus_list_lock);
+ list_for_each_entry(zbus, &zbus_list, bus_next) {
+- if (pchid == zbus->pchid) {
++ if (!zbus->multifunction)
++ continue;
++ if (topo_is_tid == zbus->topo_is_tid && topo == zbus->topo) {
+ kref_get(&zbus->kref);
+ goto out_unlock;
+ }
+@@ -249,7 +258,7 @@ static struct zpci_bus *zpci_bus_get(int pchid)
+ return zbus;
+ }
+
+-static struct zpci_bus *zpci_bus_alloc(int pchid)
++static struct zpci_bus *zpci_bus_alloc(int topo, bool topo_is_tid)
+ {
+ struct zpci_bus *zbus;
+
+@@ -257,7 +266,8 @@ static struct zpci_bus *zpci_bus_alloc(int pchid)
+ if (!zbus)
+ return NULL;
+
+- zbus->pchid = pchid;
++ zbus->topo = topo;
++ zbus->topo_is_tid = topo_is_tid;
+ INIT_LIST_HEAD(&zbus->bus_next);
+ mutex_lock(&zbus_list_lock);
+ list_add_tail(&zbus->bus_next, &zbus_list);
+@@ -292,19 +302,22 @@ static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev)
+ {
+ int rc = -EINVAL;
+
++ if (zbus->multifunction) {
++ if (!zdev->rid_available) {
++ WARN_ONCE(1, "rid_available not set for multifunction\n");
++ return rc;
++ }
++ zdev->devfn = zdev->rid & ZPCI_RID_MASK_DEVFN;
++ }
++
+ if (zbus->function[zdev->devfn]) {
+ pr_err("devfn %04x is already assigned\n", zdev->devfn);
+ return rc;
+ }
+-
+ zdev->zbus = zbus;
+ zbus->function[zdev->devfn] = zdev;
+ zpci_nb_devices++;
+
+- if (zbus->multifunction && !zdev->rid_available) {
+- WARN_ONCE(1, "rid_available not set for multifunction\n");
+- goto error;
+- }
+ rc = zpci_init_slot(zdev);
+ if (rc)
+ goto error;
+@@ -321,8 +334,9 @@ static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev)
+
+ int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops)
+ {
++ bool topo_is_tid = zdev->tid_avail;
+ struct zpci_bus *zbus = NULL;
+- int rc = -EBADF;
++ int topo, rc = -EBADF;
+
+ if (zpci_nb_devices == ZPCI_NR_DEVICES) {
+ pr_warn("Adding PCI function %08x failed because the configured limit of %d is reached\n",
+@@ -330,14 +344,10 @@ int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops)
+ return -ENOSPC;
+ }
+
+- if (zdev->devfn >= ZPCI_FUNCTIONS_PER_BUS)
+- return -EINVAL;
+-
+- if (!s390_pci_no_rid && zdev->rid_available)
+- zbus = zpci_bus_get(zdev->pchid);
+-
++ topo = topo_is_tid ? zdev->tid : zdev->pchid;
++ zbus = zpci_bus_get(topo, topo_is_tid);
+ if (!zbus) {
+- zbus = zpci_bus_alloc(zdev->pchid);
++ zbus = zpci_bus_alloc(topo, topo_is_tid);
+ if (!zbus)
+ return -ENOMEM;
+ }
+diff --git a/arch/s390/pci/pci_clp.c b/arch/s390/pci/pci_clp.c
+index 6f55a59a087115..74dac6da03d5bb 100644
+--- a/arch/s390/pci/pci_clp.c
++++ b/arch/s390/pci/pci_clp.c
+@@ -164,10 +164,13 @@ static int clp_store_query_pci_fn(struct zpci_dev *zdev,
+ zdev->port = response->port;
+ zdev->uid = response->uid;
+ zdev->fmb_length = sizeof(u32) * response->fmb_len;
+- zdev->rid_available = response->rid_avail;
+ zdev->is_physfn = response->is_physfn;
+- if (!s390_pci_no_rid && zdev->rid_available)
+- zdev->devfn = response->rid & ZPCI_RID_MASK_DEVFN;
++ zdev->rid_available = response->rid_avail;
++ if (zdev->rid_available)
++ zdev->rid = response->rid;
++ zdev->tid_avail = response->tid_avail;
++ if (zdev->tid_avail)
++ zdev->tid = response->tid;
+
+ memcpy(zdev->pfip, response->pfip, sizeof(zdev->pfip));
+ if (response->util_str_avail) {
+@@ -407,6 +410,7 @@ static int clp_find_pci(struct clp_req_rsp_list_pci *rrb, u32 fid,
+
+ static void __clp_add(struct clp_fh_list_entry *entry, void *data)
+ {
++ struct list_head *scan_list = data;
+ struct zpci_dev *zdev;
+
+ if (!entry->vendor_id)
+@@ -417,10 +421,13 @@ static void __clp_add(struct clp_fh_list_entry *entry, void *data)
+ zpci_zdev_put(zdev);
+ return;
+ }
+- zpci_create_device(entry->fid, entry->fh, entry->config_state);
++	zdev = zpci_create_device(entry->fid, entry->fh, entry->config_state);
++	if (IS_ERR(zdev))
++		return;
++	list_add_tail(&zdev->entry, scan_list);
+ }
+ }
+
+-int clp_scan_pci_devices(void)
++int clp_scan_pci_devices(struct list_head *scan_list)
+ {
+ struct clp_req_rsp_list_pci *rrb;
+ int rc;
+@@ -429,7 +434,7 @@ int clp_scan_pci_devices(void)
+ if (!rrb)
+ return -ENOMEM;
+
+- rc = clp_list_pci(rrb, NULL, __clp_add);
++ rc = clp_list_pci(rrb, scan_list, __clp_add);
+
+ clp_free_block(rrb);
+ return rc;
+diff --git a/arch/s390/pci/pci_event.c b/arch/s390/pci/pci_event.c
+index d4f19d33914cbc..7f7b732b3f3efa 100644
+--- a/arch/s390/pci/pci_event.c
++++ b/arch/s390/pci/pci_event.c
+@@ -340,6 +340,10 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ zdev = zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_CONFIGURED);
+ if (IS_ERR(zdev))
+ break;
++ if (zpci_add_device(zdev)) {
++ kfree(zdev);
++ break;
++ }
+ } else {
+ /* the configuration request may be stale */
+ if (zdev->state != ZPCI_FN_STATE_STANDBY)
+@@ -349,10 +353,17 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ zpci_scan_configured_device(zdev, ccdf->fh);
+ break;
+ case 0x0302: /* Reserved -> Standby */
+- if (!zdev)
+- zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_STANDBY);
+- else
++ if (!zdev) {
++ zdev = zpci_create_device(ccdf->fid, ccdf->fh, ZPCI_FN_STATE_STANDBY);
++ if (IS_ERR(zdev))
++ break;
++ if (zpci_add_device(zdev)) {
++ kfree(zdev);
++ break;
++ }
++ } else {
+ zpci_update_fh(zdev, ccdf->fh);
++ }
+ break;
+ case 0x0303: /* Deconfiguration requested */
+ if (zdev) {
+@@ -381,7 +392,7 @@ static void __zpci_event_availability(struct zpci_ccdf_avail *ccdf)
+ break;
+ case 0x0306: /* 0x308 or 0x302 for multiple devices */
+ zpci_remove_reserved_devices();
+- clp_scan_pci_devices();
++ zpci_scan_devices();
+ break;
+ case 0x0308: /* Standby -> Reserved */
+ if (!zdev)
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 7b9a7e8f39acc8..171be04eca1f5d 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -145,7 +145,6 @@ config X86
+ select ARCH_HAS_PARANOID_L1D_FLUSH
+ select BUILDTIME_TABLE_SORT
+ select CLKEVT_I8253
+- select CLOCKSOURCE_VALIDATE_LAST_CYCLE
+ select CLOCKSOURCE_WATCHDOG
+ # Word-size accesses may read uninitialized data past the trailing \0
+ # in strings and cause false KMSAN reports.
+diff --git a/arch/x86/events/amd/core.c b/arch/x86/events/amd/core.c
+index 920e3a640caddd..b4a1a2576510e0 100644
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -943,11 +943,12 @@ static int amd_pmu_v2_snapshot_branch_stack(struct perf_branch_entry *entries, u
+ static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
+ {
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++ static atomic64_t status_warned = ATOMIC64_INIT(0);
++ u64 reserved, status, mask, new_bits, prev_bits;
+ struct perf_sample_data data;
+ struct hw_perf_event *hwc;
+ struct perf_event *event;
+ int handled = 0, idx;
+- u64 reserved, status, mask;
+ bool pmu_enabled;
+
+ /*
+@@ -1012,7 +1013,12 @@ static int amd_pmu_v2_handle_irq(struct pt_regs *regs)
+ * the corresponding PMCs are expected to be inactive according to the
+ * active_mask
+ */
+- WARN_ON(status > 0);
++ if (status > 0) {
++ prev_bits = atomic64_fetch_or(status, &status_warned);
++ // A new bit was set for the very first time.
++		/* A new bit was set for the very first time. */
++ WARN(new_bits, "New overflows for inactive PMCs: %llx\n", new_bits);
++ }
+
+ /* Clear overflow and freeze bits */
+ amd_pmu_ack_global_status(~status);
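The rewritten handler downgrades the unconditional WARN_ON(status > 0) to a warn-once-per-bit: every overflow bit ever observed is accumulated in the static status_warned via atomic64_fetch_or(), and only bits seen for the first time trigger a WARN(). A minimal userspace sketch of the same pattern, using C11 atomics in place of the kernel's atomic64 API:

/* Userspace sketch of "warn once per bit": accumulate every bit ever
 * seen in an atomic and report only bits that were not set before. */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uint64_t status_warned;

static void report_new_overflows(uint64_t status)
{
	uint64_t prev_bits, new_bits;

	if (!status)
		return;
	prev_bits = atomic_fetch_or(&status_warned, status);
	new_bits = status & ~prev_bits;	/* bits set for the very first time */
	if (new_bits)
		fprintf(stderr, "New overflows for inactive PMCs: %llx\n",
			(unsigned long long)new_bits);
}

int main(void)
{
	report_new_overflows(0x5);	/* warns about 0x5 */
	report_new_overflows(0x4);	/* silent: bit 2 already seen */
	report_new_overflows(0xc);	/* warns about 0x8 only */
	return 0;
}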
+diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
+index 6f82e75b61494e..4b804531b03c3c 100644
+--- a/arch/x86/include/asm/pgtable_types.h
++++ b/arch/x86/include/asm/pgtable_types.h
+@@ -36,10 +36,12 @@
+ #define _PAGE_BIT_DEVMAP _PAGE_BIT_SOFTW4
+
+ #ifdef CONFIG_X86_64
+-#define _PAGE_BIT_SAVED_DIRTY _PAGE_BIT_SOFTW5 /* Saved Dirty bit */
++#define _PAGE_BIT_SAVED_DIRTY _PAGE_BIT_SOFTW5 /* Saved Dirty bit (leaf) */
++#define _PAGE_BIT_NOPTISHADOW _PAGE_BIT_SOFTW5 /* No PTI shadow (root PGD) */
+ #else
+ /* Shared with _PAGE_BIT_UFFD_WP which is not supported on 32 bit */
+-#define _PAGE_BIT_SAVED_DIRTY _PAGE_BIT_SOFTW2 /* Saved Dirty bit */
++#define _PAGE_BIT_SAVED_DIRTY _PAGE_BIT_SOFTW2 /* Saved Dirty bit (leaf) */
++#define _PAGE_BIT_NOPTISHADOW _PAGE_BIT_SOFTW2 /* No PTI shadow (root PGD) */
+ #endif
+
+ /* If _PAGE_BIT_PRESENT is clear, we use these: */
+@@ -139,6 +141,8 @@
+
+ #define _PAGE_PROTNONE (_AT(pteval_t, 1) << _PAGE_BIT_PROTNONE)
+
++#define _PAGE_NOPTISHADOW (_AT(pteval_t, 1) << _PAGE_BIT_NOPTISHADOW)
++
+ /*
+ * Set of bits not changed in pte_modify. The pte's
+ * protection key is treated like _PAGE_RW, for
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index d8408aafeed988..79d2e17f6582e9 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1065,7 +1065,7 @@ static void init_amd(struct cpuinfo_x86 *c)
+ */
+ if (spectre_v2_in_eibrs_mode(spectre_v2_enabled) &&
+ cpu_has(c, X86_FEATURE_AUTOIBRS))
+- WARN_ON_ONCE(msr_set_bit(MSR_EFER, _EFER_AUTOIBRS));
++ WARN_ON_ONCE(msr_set_bit(MSR_EFER, _EFER_AUTOIBRS) < 0);
+
+ /* AMD CPUs don't need fencing after x2APIC/TSC_DEADLINE MSR writes. */
+ clear_cpu_cap(c, X86_FEATURE_APIC_MSRS_FENCE);
+diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
+index 392d09c936d60c..e6fa03ed9172c0 100644
+--- a/arch/x86/kernel/cpu/cacheinfo.c
++++ b/arch/x86/kernel/cpu/cacheinfo.c
+@@ -178,8 +178,6 @@ struct _cpuid4_info_regs {
+ struct amd_northbridge *nb;
+ };
+
+-static unsigned short num_cache_leaves;
+-
+ /* AMD doesn't have CPUID4. Emulate it here to report the same
+ information to the user. This makes some assumptions about the machine:
+ L2 not shared, no SMT etc. that is currently true on AMD CPUs.
+@@ -717,20 +715,23 @@ void cacheinfo_hygon_init_llc_id(struct cpuinfo_x86 *c)
+
+ void init_amd_cacheinfo(struct cpuinfo_x86 *c)
+ {
++ struct cpu_cacheinfo *ci = get_cpu_cacheinfo(c->cpu_index);
+
+ if (boot_cpu_has(X86_FEATURE_TOPOEXT)) {
+- num_cache_leaves = find_num_cache_leaves(c);
++ ci->num_leaves = find_num_cache_leaves(c);
+ } else if (c->extended_cpuid_level >= 0x80000006) {
+ if (cpuid_edx(0x80000006) & 0xf000)
+- num_cache_leaves = 4;
++ ci->num_leaves = 4;
+ else
+- num_cache_leaves = 3;
++ ci->num_leaves = 3;
+ }
+ }
+
+ void init_hygon_cacheinfo(struct cpuinfo_x86 *c)
+ {
+- num_cache_leaves = find_num_cache_leaves(c);
++ struct cpu_cacheinfo *ci = get_cpu_cacheinfo(c->cpu_index);
++
++ ci->num_leaves = find_num_cache_leaves(c);
+ }
+
+ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
+@@ -740,21 +741,21 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
+ unsigned int new_l1d = 0, new_l1i = 0; /* Cache sizes from cpuid(4) */
+ unsigned int new_l2 = 0, new_l3 = 0, i; /* Cache sizes from cpuid(4) */
+ unsigned int l2_id = 0, l3_id = 0, num_threads_sharing, index_msb;
++ struct cpu_cacheinfo *ci = get_cpu_cacheinfo(c->cpu_index);
+
+ if (c->cpuid_level > 3) {
+- static int is_initialized;
+-
+- if (is_initialized == 0) {
+- /* Init num_cache_leaves from boot CPU */
+- num_cache_leaves = find_num_cache_leaves(c);
+- is_initialized++;
+- }
++ /*
++ * There should be at least one leaf. A non-zero value means
++ * that the number of leaves has been initialized.
++ */
++ if (!ci->num_leaves)
++ ci->num_leaves = find_num_cache_leaves(c);
+
+ /*
+ * Whenever possible use cpuid(4), deterministic cache
+ * parameters cpuid leaf to find the cache details
+ */
+- for (i = 0; i < num_cache_leaves; i++) {
++ for (i = 0; i < ci->num_leaves; i++) {
+ struct _cpuid4_info_regs this_leaf = {};
+ int retval;
+
+@@ -790,14 +791,14 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
+ * Don't use cpuid2 if cpuid4 is supported. For P4, we use cpuid2 for
+ * trace cache
+ */
+- if ((num_cache_leaves == 0 || c->x86 == 15) && c->cpuid_level > 1) {
++ if ((!ci->num_leaves || c->x86 == 15) && c->cpuid_level > 1) {
+ /* supports eax=2 call */
+ int j, n;
+ unsigned int regs[4];
+ unsigned char *dp = (unsigned char *)regs;
+ int only_trace = 0;
+
+- if (num_cache_leaves != 0 && c->x86 == 15)
++ if (ci->num_leaves && c->x86 == 15)
+ only_trace = 1;
+
+ /* Number of times to iterate */
+@@ -991,14 +992,12 @@ static void ci_leaf_init(struct cacheinfo *this_leaf,
+
+ int init_cache_level(unsigned int cpu)
+ {
+- struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
++ struct cpu_cacheinfo *ci = get_cpu_cacheinfo(cpu);
+
+- if (!num_cache_leaves)
++ /* There should be at least one leaf. */
++ if (!ci->num_leaves)
+ return -ENOENT;
+- if (!this_cpu_ci)
+- return -EINVAL;
+- this_cpu_ci->num_levels = 3;
+- this_cpu_ci->num_leaves = num_cache_leaves;
++
+ return 0;
+ }
+
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index e7656cbef68d54..4b5f3d0521517a 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -586,7 +586,9 @@ static void init_intel(struct cpuinfo_x86 *c)
+ c->x86_vfm == INTEL_WESTMERE_EX))
+ set_cpu_bug(c, X86_BUG_CLFLUSH_MONITOR);
+
+- if (boot_cpu_has(X86_FEATURE_MWAIT) && c->x86_vfm == INTEL_ATOM_GOLDMONT)
++ if (boot_cpu_has(X86_FEATURE_MWAIT) &&
++ (c->x86_vfm == INTEL_ATOM_GOLDMONT ||
++ c->x86_vfm == INTEL_LUNARLAKE_M))
+ set_cpu_bug(c, X86_BUG_MONITOR);
+
+ #ifdef CONFIG_X86_64
+diff --git a/arch/x86/kernel/cpu/topology.c b/arch/x86/kernel/cpu/topology.c
+index 621a151ccf7d0a..b2e313ea17bf6f 100644
+--- a/arch/x86/kernel/cpu/topology.c
++++ b/arch/x86/kernel/cpu/topology.c
+@@ -428,8 +428,8 @@ void __init topology_apply_cmdline_limits_early(void)
+ {
+ unsigned int possible = nr_cpu_ids;
+
+- /* 'maxcpus=0' 'nosmp' 'nolapic' 'disableapic' 'noapic' */
+- if (!setup_max_cpus || ioapic_is_disabled || apic_is_disabled)
++ /* 'maxcpus=0' 'nosmp' 'nolapic' 'disableapic' */
++ if (!setup_max_cpus || apic_is_disabled)
+ possible = 1;
+
+ /* 'possible_cpus=N' */
+@@ -443,7 +443,7 @@ void __init topology_apply_cmdline_limits_early(void)
+
+ static __init bool restrict_to_up(void)
+ {
+- if (!smp_found_config || ioapic_is_disabled)
++ if (!smp_found_config)
+ return true;
+ /*
+ * XEN PV is special as it does not advertise the local APIC
+diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
+index 1065ab995305cd..8f62e0666dea51 100644
+--- a/arch/x86/kernel/fpu/signal.c
++++ b/arch/x86/kernel/fpu/signal.c
+@@ -63,16 +63,6 @@ static inline bool check_xstate_in_sigframe(struct fxregs_state __user *fxbuf,
+ return true;
+ }
+
+-/*
+- * Update the value of PKRU register that was already pushed onto the signal frame.
+- */
+-static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
+-{
+- if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
+- return 0;
+- return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
+-}
+-
+ /*
+ * Signal frame handlers.
+ */
+@@ -168,14 +158,8 @@ static inline bool save_xstate_epilog(void __user *buf, int ia32_frame,
+
+ static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf, u32 pkru)
+ {
+- int err = 0;
+-
+- if (use_xsave()) {
+- err = xsave_to_user_sigframe(buf);
+- if (!err)
+- err = update_pkru_in_sigframe(buf, pkru);
+- return err;
+- }
++ if (use_xsave())
++ return xsave_to_user_sigframe(buf, pkru);
+
+ if (use_fxsr())
+ return fxsave_to_user_sigframe((struct fxregs_state __user *) buf);
+diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
+index 0b86a5002c846d..aa16f1a1bbcf17 100644
+--- a/arch/x86/kernel/fpu/xstate.h
++++ b/arch/x86/kernel/fpu/xstate.h
+@@ -69,6 +69,28 @@ static inline u64 xfeatures_mask_independent(void)
+ return fpu_kernel_cfg.independent_features;
+ }
+
++/*
++ * Update the value of PKRU register that was already pushed onto the signal frame.
++ */
++static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u64 mask, u32 pkru)
++{
++ u64 xstate_bv;
++ int err;
++
++ if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
++ return 0;
++
++ /* Mark PKRU as in-use so that it is restored correctly. */
++ xstate_bv = (mask & xfeatures_in_use()) | XFEATURE_MASK_PKRU;
++
++ err = __put_user(xstate_bv, &buf->header.xfeatures);
++ if (err)
++ return err;
++
++ /* Update PKRU value in the userspace xsave buffer. */
++ return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
++}
++
+ /* XSAVE/XRSTOR wrapper functions */
+
+ #ifdef CONFIG_X86_64
+@@ -256,7 +278,7 @@ static inline u64 xfeatures_need_sigframe_write(void)
+ * The caller has to zero buf::header before calling this because XSAVE*
+ * does not touch the reserved fields in the header.
+ */
+-static inline int xsave_to_user_sigframe(struct xregs_state __user *buf)
++static inline int xsave_to_user_sigframe(struct xregs_state __user *buf, u32 pkru)
+ {
+ /*
+ * Include the features which are not xsaved/rstored by the kernel
+@@ -281,6 +303,9 @@ static inline int xsave_to_user_sigframe(struct xregs_state __user *buf)
+ XSTATE_OP(XSAVE, buf, lmask, hmask, err);
+ clac();
+
++ if (!err)
++ err = update_pkru_in_sigframe(buf, mask, pkru);
++
+ return err;
+ }
+
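The two FPU hunks move the PKRU fixup from copy_fpregs_to_sigframe() into xsave_to_user_sigframe() itself, and additionally force the PKRU bit on in the header's xfeatures so that a later XRSTOR restores the written value instead of treating PKRU as being in its init state. A simplified userspace sketch of the resulting order of operations; the layout and field names below are stand-ins, not the real XSAVE format:

/* Sketch only: XSAVE stand-in first, then patch the header and the
 * PKRU slot, mirroring the flow of xsave_to_user_sigframe() above. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define XFEATURE_MASK_PKRU	(1ULL << 9)

struct xsave_header { uint64_t xfeatures; };
struct xregs_state  { struct xsave_header header; uint32_t pkru_area; };

static int save_to_sigframe(struct xregs_state *buf, uint64_t mask,
			    uint32_t pkru, uint64_t xfeatures_in_use)
{
	memset(buf, 0, sizeof(*buf));		/* stand-in for XSAVE */

	/* Mark PKRU as in-use so a later restore actually loads it. */
	buf->header.xfeatures = (mask & xfeatures_in_use) | XFEATURE_MASK_PKRU;
	buf->pkru_area = pkru;			/* stand-in for __put_user() */
	return 0;
}

int main(void)
{
	struct xregs_state buf;

	save_to_sigframe(&buf, ~0ULL, 0x55555554u, 0x7);
	printf("xfeatures=%#llx pkru=%#x\n",
	       (unsigned long long)buf.header.xfeatures, buf.pkru_area);
	return 0;
}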
+diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
+index e9e88c342f752e..540443d699e3c2 100644
+--- a/arch/x86/kernel/relocate_kernel_64.S
++++ b/arch/x86/kernel/relocate_kernel_64.S
+@@ -13,6 +13,7 @@
+ #include <asm/pgtable_types.h>
+ #include <asm/nospec-branch.h>
+ #include <asm/unwind_hints.h>
++#include <asm/asm-offsets.h>
+
+ /*
+ * Must be relocatable PIC code callable as a C function, in particular
+@@ -242,6 +243,13 @@ SYM_CODE_START_LOCAL_NOALIGN(virtual_mapped)
+ movq CR0(%r8), %r8
+ movq %rax, %cr3
+ movq %r8, %cr0
++
++#ifdef CONFIG_KEXEC_JUMP
++ /* Saved in save_processor_state. */
++ movq $saved_context, %rax
++ lgdt saved_context_gdt_desc(%rax)
++#endif
++
+ movq %rbp, %rax
+
+ popf
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 3e353ed1f76736..1b4438e24814b4 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -4580,6 +4580,7 @@ static bool is_page_fault_stale(struct kvm_vcpu *vcpu,
+
+ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+ {
++ kvm_pfn_t orig_pfn;
+ int r;
+
+ /* Dummy roots are used only for shadowing bad guest roots. */
+@@ -4601,6 +4602,8 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
+ if (r != RET_PF_CONTINUE)
+ return r;
+
++ orig_pfn = fault->pfn;
++
+ r = RET_PF_RETRY;
+ write_lock(&vcpu->kvm->mmu_lock);
+
+@@ -4615,7 +4618,7 @@ static int direct_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
+
+ out_unlock:
+ write_unlock(&vcpu->kvm->mmu_lock);
+- kvm_release_pfn_clean(fault->pfn);
++ kvm_release_pfn_clean(orig_pfn);
+ return r;
+ }
+
+@@ -4675,6 +4678,7 @@ EXPORT_SYMBOL_GPL(kvm_handle_page_fault);
+ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
+ struct kvm_page_fault *fault)
+ {
++ kvm_pfn_t orig_pfn;
+ int r;
+
+ if (page_fault_handle_page_track(vcpu, fault))
+@@ -4692,6 +4696,8 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
+ if (r != RET_PF_CONTINUE)
+ return r;
+
++ orig_pfn = fault->pfn;
++
+ r = RET_PF_RETRY;
+ read_lock(&vcpu->kvm->mmu_lock);
+
+@@ -4702,7 +4708,7 @@ static int kvm_tdp_mmu_page_fault(struct kvm_vcpu *vcpu,
+
+ out_unlock:
+ read_unlock(&vcpu->kvm->mmu_lock);
+- kvm_release_pfn_clean(fault->pfn);
++ kvm_release_pfn_clean(orig_pfn);
+ return r;
+ }
+ #endif
+diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
+index ae7d39ff2d07f0..b08017683920f0 100644
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -778,6 +778,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault,
+ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+ {
+ struct guest_walker walker;
++ kvm_pfn_t orig_pfn;
+ int r;
+
+ WARN_ON_ONCE(fault->is_tdp);
+@@ -836,6 +837,8 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
+ walker.pte_access &= ~ACC_EXEC_MASK;
+ }
+
++ orig_pfn = fault->pfn;
++
+ r = RET_PF_RETRY;
+ write_lock(&vcpu->kvm->mmu_lock);
+
+@@ -849,7 +852,7 @@ static int FNAME(page_fault)(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault
+
+ out_unlock:
+ write_unlock(&vcpu->kvm->mmu_lock);
+- kvm_release_pfn_clean(fault->pfn);
++ kvm_release_pfn_clean(orig_pfn);
+ return r;
+ }
+
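All three KVM fault paths gain the same fix: fault->pfn is captured into orig_pfn before taking mmu_lock, because a retried mapping attempt may overwrite fault->pfn, while the reference that must be dropped is the one taken on the originally acquired pfn. A minimal sketch of the snapshot-before-retry pattern, with hypothetical names:

/* Snapshot a handle before code that may retry and clobber it, and
 * release the snapshot rather than the live field. Illustrative only. */
#include <stdio.h>

struct fault { unsigned long pfn; };

static void release_pfn(unsigned long pfn)
{
	printf("releasing pfn %#lx\n", pfn);
}

static int handle_fault(struct fault *fault)
{
	unsigned long orig_pfn = fault->pfn;	/* snapshot before retry */

	/* ... mapping code under mmu_lock may update fault->pfn ... */
	fault->pfn = 0xdead;			/* simulate a retry rewrite */

	release_pfn(orig_pfn);			/* pairs with the original grab */
	return 0;
}

int main(void)
{
	struct fault f = { .pfn = 0x1000 };

	return handle_fault(&f);
}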
+diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
+index 437e96fb497734..5ab7bd2f1983c1 100644
+--- a/arch/x86/mm/ident_map.c
++++ b/arch/x86/mm/ident_map.c
+@@ -174,7 +174,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
+ if (result)
+ return result;
+
+- set_p4d(p4d, __p4d(__pa(pud) | info->kernpg_flag));
++ set_p4d(p4d, __p4d(__pa(pud) | info->kernpg_flag | _PAGE_NOPTISHADOW));
+ }
+
+ return 0;
+@@ -218,14 +218,14 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
+ if (result)
+ return result;
+ if (pgtable_l5_enabled()) {
+- set_pgd(pgd, __pgd(__pa(p4d) | info->kernpg_flag));
++ set_pgd(pgd, __pgd(__pa(p4d) | info->kernpg_flag | _PAGE_NOPTISHADOW));
+ } else {
+ /*
+ * With p4d folded, pgd is equal to p4d.
+ * The pgd entry has to point to the pud page table in this case.
+ */
+ pud_t *pud = pud_offset(p4d, 0);
+- set_pgd(pgd, __pgd(__pa(pud) | info->kernpg_flag));
++ set_pgd(pgd, __pgd(__pa(pud) | info->kernpg_flag | _PAGE_NOPTISHADOW));
+ }
+ }
+
+diff --git a/arch/x86/mm/pti.c b/arch/x86/mm/pti.c
+index 851ec8f1363a8b..5f0d579932c688 100644
+--- a/arch/x86/mm/pti.c
++++ b/arch/x86/mm/pti.c
+@@ -132,7 +132,7 @@ pgd_t __pti_set_user_pgtbl(pgd_t *pgdp, pgd_t pgd)
+ * Top-level entries added to init_mm's usermode pgd after boot
+ * will not be automatically propagated to other mms.
+ */
+- if (!pgdp_maps_userspace(pgdp))
++ if (!pgdp_maps_userspace(pgdp) || (pgd.pgd & _PAGE_NOPTISHADOW))
+ return pgd;
+
+ /*
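The three mm hunks cooperate: a spare software bit is defined as _PAGE_NOPTISHADOW, kernel_ident_mapping_init() tags the top-level entries it creates with it, and __pti_set_user_pgtbl() then refuses to mirror such entries into the PTI user-space shadow PGD. A toy model of the resulting predicate; the bit value below is made up, not the real x86 assignment:

/* Toy predicate mirroring __pti_set_user_pgtbl()'s new check. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define _PAGE_NOPTISHADOW	(1ULL << 58)	/* hypothetical bit */

static bool pgd_maps_userspace(uint64_t addr_hint) { return addr_hint == 0; }

static bool propagate_to_user_shadow(uint64_t pgd_val, uint64_t addr_hint)
{
	/* Kernel-only or explicitly tagged entries stay out of the shadow. */
	if (!pgd_maps_userspace(addr_hint) || (pgd_val & _PAGE_NOPTISHADOW))
		return false;
	return true;
}

int main(void)
{
	printf("%d\n", propagate_to_user_shadow(0x1003, 0));			  /* 1 */
	printf("%d\n", propagate_to_user_shadow(0x1003 | _PAGE_NOPTISHADOW, 0)); /* 0 */
	return 0;
}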
+diff --git a/arch/x86/pci/acpi.c b/arch/x86/pci/acpi.c
+index 55c4b07ec1f631..0c316bae1726ee 100644
+--- a/arch/x86/pci/acpi.c
++++ b/arch/x86/pci/acpi.c
+@@ -250,6 +250,125 @@ void __init pci_acpi_crs_quirks(void)
+ pr_info("Please notify linux-pci@vger.kernel.org so future kernels can do this automatically\n");
+ }
+
++/*
++ * Check if pdev is part of a PCIe switch that is directly below the
++ * specified bridge.
++ */
++static bool pcie_switch_directly_under(struct pci_dev *bridge,
++ struct pci_dev *pdev)
++{
++ struct pci_dev *parent = pci_upstream_bridge(pdev);
++
++ /* If the device doesn't have a parent, it's not under anything */
++ if (!parent)
++ return false;
++
++ /*
++ * If the device has a PCIe type, check if it is below the
++ * corresponding PCIe switch components (if applicable). Then check
++ * if its upstream port is directly beneath the specified bridge.
++ */
++ switch (pci_pcie_type(pdev)) {
++ case PCI_EXP_TYPE_UPSTREAM:
++ return parent == bridge;
++
++ case PCI_EXP_TYPE_DOWNSTREAM:
++ if (pci_pcie_type(parent) != PCI_EXP_TYPE_UPSTREAM)
++ return false;
++ parent = pci_upstream_bridge(parent);
++ return parent == bridge;
++
++ case PCI_EXP_TYPE_ENDPOINT:
++ if (pci_pcie_type(parent) != PCI_EXP_TYPE_DOWNSTREAM)
++ return false;
++ parent = pci_upstream_bridge(parent);
++ if (!parent || pci_pcie_type(parent) != PCI_EXP_TYPE_UPSTREAM)
++ return false;
++ parent = pci_upstream_bridge(parent);
++ return parent == bridge;
++ }
++
++ return false;
++}
++
++static bool pcie_has_usb4_host_interface(struct pci_dev *pdev)
++{
++ struct fwnode_handle *fwnode;
++
++ /*
++ * For USB4, the tunneled PCIe Root or Downstream Ports are marked
++ * with the "usb4-host-interface" ACPI property, so we look for
++ * that first. This should cover most cases.
++ */
++ fwnode = fwnode_find_reference(dev_fwnode(&pdev->dev),
++ "usb4-host-interface", 0);
++ if (!IS_ERR(fwnode)) {
++ fwnode_handle_put(fwnode);
++ return true;
++ }
++
++ /*
++ * Any integrated Thunderbolt 3/4 PCIe Root Ports from Intel
++ * before Alder Lake do not have the "usb4-host-interface"
++ * property so we use their PCI IDs instead. All these are
++ * tunneled. This list is not expected to grow.
++ */
++ if (pdev->vendor == PCI_VENDOR_ID_INTEL) {
++ switch (pdev->device) {
++ /* Ice Lake Thunderbolt 3 PCIe Root Ports */
++ case 0x8a1d:
++ case 0x8a1f:
++ case 0x8a21:
++ case 0x8a23:
++ /* Tiger Lake-LP Thunderbolt 4 PCIe Root Ports */
++ case 0x9a23:
++ case 0x9a25:
++ case 0x9a27:
++ case 0x9a29:
++ /* Tiger Lake-H Thunderbolt 4 PCIe Root Ports */
++ case 0x9a2b:
++ case 0x9a2d:
++ case 0x9a2f:
++ case 0x9a31:
++ return true;
++ }
++ }
++
++ return false;
++}
++
++bool arch_pci_dev_is_removable(struct pci_dev *pdev)
++{
++ struct pci_dev *parent, *root;
++
++ /* pdev without a parent or Root Port is never tunneled */
++ parent = pci_upstream_bridge(pdev);
++ if (!parent)
++ return false;
++ root = pcie_find_root_port(pdev);
++ if (!root)
++ return false;
++
++ /* Internal PCIe devices are not tunneled */
++ if (!root->external_facing)
++ return false;
++
++ /* Anything directly behind a "usb4-host-interface" is tunneled */
++ if (pcie_has_usb4_host_interface(parent))
++ return true;
++
++ /*
++ * Check if this is a discrete Thunderbolt/USB4 controller that is
++ * directly behind the non-USB4 PCIe Root Port marked as
++ * "ExternalFacingPort". Those are not behind a PCIe tunnel.
++ */
++ if (pcie_switch_directly_under(root, pdev))
++ return false;
++
++ /* PCIe devices after the discrete chip are tunneled */
++ return true;
++}
++
+ #ifdef CONFIG_PCI_MMCONFIG
+ static int check_segment(u16 seg, struct device *dev, char *estr)
+ {
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 95e517723db3e4..0b1184176ce77a 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -350,9 +350,15 @@ int blkdev_zone_mgmt_ioctl(struct block_device *bdev, blk_mode_t mode,
+
+ static inline bool disk_zone_is_conv(struct gendisk *disk, sector_t sector)
+ {
+- if (!disk->conv_zones_bitmap)
+- return false;
+- return test_bit(disk_zone_no(disk, sector), disk->conv_zones_bitmap);
++ unsigned long *bitmap;
++ bool is_conv;
++
++ rcu_read_lock();
++ bitmap = rcu_dereference(disk->conv_zones_bitmap);
++ is_conv = bitmap && test_bit(disk_zone_no(disk, sector), bitmap);
++ rcu_read_unlock();
++
++ return is_conv;
+ }
+
+ static bool disk_zone_is_last(struct gendisk *disk, struct blk_zone *zone)
+@@ -1455,6 +1461,24 @@ static void disk_destroy_zone_wplugs_hash_table(struct gendisk *disk)
+ disk->zone_wplugs_hash_bits = 0;
+ }
+
++static unsigned int disk_set_conv_zones_bitmap(struct gendisk *disk,
++ unsigned long *bitmap)
++{
++ unsigned int nr_conv_zones = 0;
++ unsigned long flags;
++
++ spin_lock_irqsave(&disk->zone_wplugs_lock, flags);
++ if (bitmap)
++ nr_conv_zones = bitmap_weight(bitmap, disk->nr_zones);
++ bitmap = rcu_replace_pointer(disk->conv_zones_bitmap, bitmap,
++ lockdep_is_held(&disk->zone_wplugs_lock));
++ spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
++
++ kfree_rcu_mightsleep(bitmap);
++
++ return nr_conv_zones;
++}
++
+ void disk_free_zone_resources(struct gendisk *disk)
+ {
+ if (!disk->zone_wplugs_pool)
+@@ -1478,8 +1502,7 @@ void disk_free_zone_resources(struct gendisk *disk)
+ mempool_destroy(disk->zone_wplugs_pool);
+ disk->zone_wplugs_pool = NULL;
+
+- bitmap_free(disk->conv_zones_bitmap);
+- disk->conv_zones_bitmap = NULL;
++ disk_set_conv_zones_bitmap(disk, NULL);
+ disk->zone_capacity = 0;
+ disk->last_zone_capacity = 0;
+ disk->nr_zones = 0;
+@@ -1538,7 +1561,7 @@ static int disk_update_zone_resources(struct gendisk *disk,
+ struct blk_revalidate_zone_args *args)
+ {
+ struct request_queue *q = disk->queue;
+- unsigned int nr_seq_zones, nr_conv_zones = 0;
++ unsigned int nr_seq_zones, nr_conv_zones;
+ unsigned int pool_size;
+ struct queue_limits lim;
+ int ret;
+@@ -1546,10 +1569,8 @@ static int disk_update_zone_resources(struct gendisk *disk,
+ disk->nr_zones = args->nr_zones;
+ disk->zone_capacity = args->zone_capacity;
+ disk->last_zone_capacity = args->last_zone_capacity;
+- swap(disk->conv_zones_bitmap, args->conv_zones_bitmap);
+- if (disk->conv_zones_bitmap)
+- nr_conv_zones = bitmap_weight(disk->conv_zones_bitmap,
+- disk->nr_zones);
++ nr_conv_zones =
++ disk_set_conv_zones_bitmap(disk, args->conv_zones_bitmap);
+ if (nr_conv_zones >= disk->nr_zones) {
+ pr_warn("%s: Invalid number of conventional zones %u / %u\n",
+ disk->disk_name, nr_conv_zones, disk->nr_zones);
+@@ -1829,8 +1850,6 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
+ blk_mq_unfreeze_queue(q);
+ }
+
+- kfree(args.conv_zones_bitmap);
+-
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(blk_revalidate_disk_zones);
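The conventional-zone bitmap is now published via RCU: readers in disk_zone_is_conv() dereference it under rcu_read_lock(), while disk_set_conv_zones_bitmap() swaps the pointer under zone_wplugs_lock and retires the old bitmap with kfree_rcu_mightsleep(). The stand-in below uses C11 atomics just to show the publish/swap shape; it deliberately lacks the grace-period deferral that makes the kernel version safe against in-flight readers:

/* Simplified stand-in for the RCU pointer swap; the kernel defers the
 * free with kfree_rcu_mightsleep() instead of freeing immediately. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

static _Atomic(unsigned long *) conv_zones_bitmap;

static bool zone_is_conv(unsigned int zno)
{
	unsigned long *bitmap = atomic_load(&conv_zones_bitmap);

	/* rcu_dereference() + test_bit() in the kernel version */
	return bitmap &&
	       (bitmap[zno / (8 * sizeof(long))] >> (zno % (8 * sizeof(long)))) & 1;
}

static void set_bitmap(unsigned long *new_bitmap)
{
	unsigned long *old = atomic_exchange(&conv_zones_bitmap, new_bitmap);

	free(old);	/* kernel: freed only after an RCU grace period */
}

int main(void)
{
	unsigned long *bm = calloc(1, sizeof(unsigned long));

	bm[0] = 0x2;			/* zone 1 is conventional */
	set_bitmap(bm);
	return zone_is_conv(1) ? 0 : 1;
}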
+diff --git a/crypto/ecdsa.c b/crypto/ecdsa.c
+index d5a10959ec281c..80ef16ae6a40b4 100644
+--- a/crypto/ecdsa.c
++++ b/crypto/ecdsa.c
+@@ -36,29 +36,24 @@ static int ecdsa_get_signature_rs(u64 *dest, size_t hdrlen, unsigned char tag,
+ const void *value, size_t vlen, unsigned int ndigits)
+ {
+ size_t bufsize = ndigits * sizeof(u64);
+- ssize_t diff = vlen - bufsize;
+ const char *d = value;
+
+- if (!value || !vlen)
++ if (!value || !vlen || vlen > bufsize + 1)
+ return -EINVAL;
+
+-	/* diff = 0: 'value' has exactly the right size
+- * diff > 0: 'value' has too many bytes; one leading zero is allowed that
+- * makes the value a positive integer; error on more
+- * diff < 0: 'value' is missing leading zeros
++ /*
++ * vlen may be 1 byte larger than bufsize due to a leading zero byte
++ * (necessary if the most significant bit of the integer is set).
+ */
+- if (diff > 0) {
++ if (vlen > bufsize) {
+ /* skip over leading zeros that make 'value' a positive int */
+ if (*d == 0) {
+ vlen -= 1;
+- diff--;
+ d++;
+- }
+- if (diff)
++ } else {
+ return -EINVAL;
++ }
+ }
+- if (-diff >= bufsize)
+- return -EINVAL;
+
+ ecc_digits_from_bytes(d, vlen, dest, ndigits);
+
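The rewritten check replaces the signed diff arithmetic with a single up-front bound: a DER INTEGER carrying an unsigned value of at most bufsize bytes may be at most one byte longer, and only when that extra byte is a 0x00 sign pad. A userspace sketch of the accepted shapes, using a hypothetical helper rather than the kernel function:

/* Sketch of the tightened DER INTEGER length check above. */
#include <stddef.h>
#include <stdio.h>

static int check_integer(const unsigned char *d, size_t vlen, size_t bufsize)
{
	if (!d || !vlen || vlen > bufsize + 1)
		return -1;
	if (vlen > bufsize) {		/* one leading zero is allowed */
		if (*d != 0)
			return -1;
		d++;
		vlen--;
	}
	/* d/vlen now fit in bufsize; hand off to digit conversion */
	return 0;
}

int main(void)
{
	unsigned char with_pad[] = { 0x00, 0x80, 0x01 };
	unsigned char too_long[] = { 0x01, 0x80, 0x01 };

	printf("%d\n", check_integer(with_pad, 3, 2));	/* 0: accepted */
	printf("%d\n", check_integer(too_long, 3, 2));	/* -1: rejected */
	return 0;
}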
+diff --git a/drivers/accel/qaic/qaic_drv.c b/drivers/accel/qaic/qaic_drv.c
+index bf10156c334e71..f139c564eadf9f 100644
+--- a/drivers/accel/qaic/qaic_drv.c
++++ b/drivers/accel/qaic/qaic_drv.c
+@@ -34,6 +34,7 @@
+
+ MODULE_IMPORT_NS(DMA_BUF);
+
++#define PCI_DEV_AIC080 0xa080
+ #define PCI_DEV_AIC100 0xa100
+ #define QAIC_NAME "qaic"
+ #define QAIC_DESC "Qualcomm Cloud AI Accelerators"
+@@ -365,7 +366,7 @@ static struct qaic_device *create_qdev(struct pci_dev *pdev, const struct pci_de
+ return NULL;
+
+ qdev->dev_state = QAIC_OFFLINE;
+- if (id->device == PCI_DEV_AIC100) {
++ if (id->device == PCI_DEV_AIC080 || id->device == PCI_DEV_AIC100) {
+ qdev->num_dbc = 16;
+ qdev->dbc = devm_kcalloc(dev, qdev->num_dbc, sizeof(*qdev->dbc), GFP_KERNEL);
+ if (!qdev->dbc)
+@@ -607,6 +608,7 @@ static struct mhi_driver qaic_mhi_driver = {
+ };
+
+ static const struct pci_device_id qaic_ids[] = {
++ { PCI_DEVICE(PCI_VENDOR_ID_QCOM, PCI_DEV_AIC080), },
+ { PCI_DEVICE(PCI_VENDOR_ID_QCOM, PCI_DEV_AIC100), },
+ { }
+ };
+diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
+index 015bd8e66c1cf8..d507d5e084354b 100644
+--- a/drivers/acpi/video_detect.c
++++ b/drivers/acpi/video_detect.c
+@@ -549,6 +549,14 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "iMac12,2"),
+ },
+ },
++ {
++ .callback = video_detect_force_native,
++ /* Apple MacBook Air 7,2 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "MacBookAir7,2"),
++ },
++ },
+ {
+ .callback = video_detect_force_native,
+ /* Apple MacBook Air 9,1 */
+@@ -565,6 +573,14 @@ static const struct dmi_system_id video_detect_dmi_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro9,2"),
+ },
+ },
++ {
++ .callback = video_detect_force_native,
++ /* Apple MacBook Pro 11,2 */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "MacBookPro11,2"),
++ },
++ },
+ {
+ /* https://bugzilla.redhat.com/show_bug.cgi?id=1217249 */
+ .callback = video_detect_force_native,
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index 6af546b21574f9..cb45ef5240dab6 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -12,6 +12,7 @@
+
+ #include <linux/acpi.h>
+ #include <linux/dmi.h>
++#include <linux/pci.h>
+ #include <linux/platform_device.h>
+ #include <asm/cpu_device_id.h>
+ #include <asm/intel-family.h>
+@@ -295,6 +296,7 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ /*
+ * 2. Devices which also have the skip i2c/serdev quirks and which
+ * need the x86-android-tablets module to properly work.
++ * Sorted alphabetically.
+ */
+ #if IS_ENABLED(CONFIG_X86_ANDROID_TABLETS)
+ {
+@@ -308,6 +310,19 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
+ },
+ {
++ /* Acer Iconia One 8 A1-840 (non FHD version) */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Insyde"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "BayTrail"),
++			/* Above strings are too generic, also match on BIOS date */
++ DMI_MATCH(DMI_BIOS_DATE, "04/01/2014"),
++ },
++ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
++ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY |
++ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
++ },
++ {
++ /* Asus ME176C tablet */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ME176C"),
+@@ -318,23 +333,24 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
+ },
+ {
+- /* Lenovo Yoga Book X90F/L */
++ /* Asus TF103C transformer 2-in-1 */
+ .matches = {
+- DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "TF103C"),
+ },
+ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
+- ACPI_QUIRK_UART1_SKIP |
+ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY |
+ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
+ },
+ {
++ /* Lenovo Yoga Book X90F/L */
+ .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "TF103C"),
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "CHERRYVIEW D1 PLATFORM"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_VERSION, "YETI-11"),
+ },
+ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
++ ACPI_QUIRK_UART1_SKIP |
+ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY |
+ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
+ },
+@@ -391,6 +407,19 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
+ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
+ },
++ {
++ /* Vexia Edu Atla 10 tablet 9V version */
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
++ DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
++ /* Above strings are too generic, also match on BIOS date */
++ DMI_MATCH(DMI_BIOS_DATE, "08/25/2014"),
++ },
++ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
++ ACPI_QUIRK_UART1_SKIP |
++ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY |
++ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
++ },
+ {
+ /* Whitelabel (sold as various brands) TM800A550L */
+ .matches = {
+@@ -411,6 +440,7 @@ static const struct acpi_device_id i2c_acpi_known_good_ids[] = {
+ { "10EC5640", 0 }, /* RealTek ALC5640 audio codec */
+ { "10EC5651", 0 }, /* RealTek ALC5651 audio codec */
+ { "INT33F4", 0 }, /* X-Powers AXP288 PMIC */
++ { "INT33F5", 0 }, /* TI Dollar Cove PMIC */
+ { "INT33FD", 0 }, /* Intel Crystal Cove PMIC */
+ { "INT34D3", 0 }, /* Intel Whiskey Cove PMIC */
+ { "NPCE69A", 0 }, /* Asus Transformer keyboard dock */
+@@ -439,18 +469,35 @@ static int acpi_dmi_skip_serdev_enumeration(struct device *controller_parent, bo
+ struct acpi_device *adev = ACPI_COMPANION(controller_parent);
+ const struct dmi_system_id *dmi_id;
+ long quirks = 0;
+- u64 uid;
+- int ret;
++ u64 uid = 0;
+
+- ret = acpi_dev_uid_to_integer(adev, &uid);
+- if (ret)
++ dmi_id = dmi_first_match(acpi_quirk_skip_dmi_ids);
++ if (!dmi_id)
+ return 0;
+
+- dmi_id = dmi_first_match(acpi_quirk_skip_dmi_ids);
+- if (dmi_id)
+- quirks = (unsigned long)dmi_id->driver_data;
++ quirks = (unsigned long)dmi_id->driver_data;
++
++ /* uid is left at 0 on errors and 0 is not a valid UART UID */
++ acpi_dev_uid_to_integer(adev, &uid);
++
++	/* For PCI UARTs without a UID */
++ if (!uid && dev_is_pci(controller_parent)) {
++ struct pci_dev *pdev = to_pci_dev(controller_parent);
++
++ /*
++ * Devfn values for PCI UARTs on Bay Trail SoCs, which are
++ * the only devices where this fallback is necessary.
++ */
++ if (pdev->devfn == PCI_DEVFN(0x1e, 3))
++ uid = 1;
++ else if (pdev->devfn == PCI_DEVFN(0x1e, 4))
++ uid = 2;
++ }
++
++ if (!uid)
++ return 0;
+
+- if (!dev_is_platform(controller_parent)) {
++ if (!dev_is_platform(controller_parent) && !dev_is_pci(controller_parent)) {
+ /* PNP enumerated UARTs */
+ if ((quirks & ACPI_QUIRK_PNP_UART1_SKIP) && uid == 1)
+ *skip = true;
+@@ -505,7 +552,7 @@ int acpi_quirk_skip_serdev_enumeration(struct device *controller_parent, bool *s
+ * Set skip to true so that the tty core creates a serdev ctrl device.
+ * The backlight driver will manually create the serdev client device.
+ */
+- if (acpi_dev_hid_match(adev, "DELL0501")) {
++ if (adev && acpi_dev_hid_match(adev, "DELL0501")) {
+ *skip = true;
+ /*
+ * Create a platform dev for dell-uart-backlight to bind to.
+diff --git a/drivers/base/arch_numa.c b/drivers/base/arch_numa.c
+index e1870167642658..c99f2ab105e5b7 100644
+--- a/drivers/base/arch_numa.c
++++ b/drivers/base/arch_numa.c
+@@ -208,6 +208,10 @@ static int __init numa_register_nodes(void)
+ {
+ int nid;
+
++ /* Check the validity of the memblock/node mapping */
++ if (!memblock_validate_numa_coverage(0))
++ return -EINVAL;
++
+ /* Finally register nodes. */
+ for_each_node_mask(nid, numa_nodes_parsed) {
+ unsigned long start_pfn, end_pfn;
+diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
+index 7a7609298e18bd..89410127089b93 100644
+--- a/drivers/base/cacheinfo.c
++++ b/drivers/base/cacheinfo.c
+@@ -58,7 +58,7 @@ bool last_level_cache_is_valid(unsigned int cpu)
+ {
+ struct cacheinfo *llc;
+
+- if (!cache_leaves(cpu))
++ if (!cache_leaves(cpu) || !per_cpu_cacheinfo(cpu))
+ return false;
+
+ llc = per_cpu_cacheinfo_idx(cpu, cache_leaves(cpu) - 1);
+@@ -463,11 +463,9 @@ int __weak populate_cache_leaves(unsigned int cpu)
+ return -ENOENT;
+ }
+
+-static inline
+-int allocate_cache_info(int cpu)
++static inline int allocate_cache_info(int cpu)
+ {
+- per_cpu_cacheinfo(cpu) = kcalloc(cache_leaves(cpu),
+- sizeof(struct cacheinfo), GFP_ATOMIC);
++ per_cpu_cacheinfo(cpu) = kcalloc(cache_leaves(cpu), sizeof(struct cacheinfo), GFP_ATOMIC);
+ if (!per_cpu_cacheinfo(cpu)) {
+ cache_leaves(cpu) = 0;
+ return -ENOMEM;
+@@ -539,7 +537,11 @@ static inline int init_level_allocate_ci(unsigned int cpu)
+ */
+ ci_cacheinfo(cpu)->early_ci_levels = false;
+
+- if (cache_leaves(cpu) <= early_leaves)
++ /*
++ * Some architectures (e.g., x86) do not use early initialization.
++ * Allocate memory now in such case.
++	 * Allocate memory now in that case.
++ if (cache_leaves(cpu) <= early_leaves && per_cpu_cacheinfo(cpu))
+ return 0;
+
+ kfree(per_cpu_cacheinfo(cpu));
+diff --git a/drivers/base/regmap/internal.h b/drivers/base/regmap/internal.h
+index 83acccdc100897..bdb450436cbc53 100644
+--- a/drivers/base/regmap/internal.h
++++ b/drivers/base/regmap/internal.h
+@@ -59,6 +59,7 @@ struct regmap {
+ unsigned long raw_spinlock_flags;
+ };
+ };
++ struct lock_class_key *lock_key;
+ regmap_lock lock;
+ regmap_unlock unlock;
+ void *lock_arg; /* This is passed to lock/unlock functions */
+diff --git a/drivers/base/regmap/regcache-maple.c b/drivers/base/regmap/regcache-maple.c
+index 8d27d3653ea3e7..23da7b31d71534 100644
+--- a/drivers/base/regmap/regcache-maple.c
++++ b/drivers/base/regmap/regcache-maple.c
+@@ -355,6 +355,9 @@ static int regcache_maple_init(struct regmap *map)
+
+ mt_init(mt);
+
++ if (!mt_external_lock(mt) && map->lock_key)
++ lockdep_set_class_and_subclass(&mt->ma_lock, map->lock_key, 1);
++
+ if (!map->num_reg_defaults)
+ return 0;
+
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index 4ded93687c1f0a..e3e2afc2c83c6b 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -598,6 +598,17 @@ int regmap_attach_dev(struct device *dev, struct regmap *map,
+ }
+ EXPORT_SYMBOL_GPL(regmap_attach_dev);
+
++static int dev_get_regmap_match(struct device *dev, void *res, void *data);
++
++static int regmap_detach_dev(struct device *dev, struct regmap *map)
++{
++ if (!dev)
++ return 0;
++
++ return devres_release(dev, dev_get_regmap_release,
++ dev_get_regmap_match, (void *)map->name);
++}
++
+ static enum regmap_endian regmap_get_reg_endian(const struct regmap_bus *bus,
+ const struct regmap_config *config)
+ {
+@@ -745,6 +756,7 @@ struct regmap *__regmap_init(struct device *dev,
+ lock_key, lock_name);
+ }
+ map->lock_arg = map;
++ map->lock_key = lock_key;
+ }
+
+ /*
+@@ -1444,6 +1456,7 @@ void regmap_exit(struct regmap *map)
+ {
+ struct regmap_async *async;
+
++ regmap_detach_dev(map->dev, map);
+ regcache_exit(map);
+
+ regmap_debugfs_exit(map);
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index d6a1ba969266a4..d0432b1707ceb6 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -298,17 +298,30 @@ static void mark_idle(struct zram *zram, ktime_t cutoff)
+ /*
+ * Do not mark ZRAM_UNDER_WB slot as ZRAM_IDLE to close race.
+ * See the comment in writeback_store.
++ *
++ * Also do not mark ZRAM_SAME slots as ZRAM_IDLE, because no
++ * post-processing (recompress, writeback) happens to the
++ * ZRAM_SAME slot.
++ *
++ * And ZRAM_WB slots simply cannot be ZRAM_IDLE.
+ */
+ zram_slot_lock(zram, index);
+- if (zram_allocated(zram, index) &&
+- !zram_test_flag(zram, index, ZRAM_UNDER_WB)) {
++ if (!zram_allocated(zram, index) ||
++ zram_test_flag(zram, index, ZRAM_WB) ||
++ zram_test_flag(zram, index, ZRAM_UNDER_WB) ||
++ zram_test_flag(zram, index, ZRAM_SAME)) {
++ zram_slot_unlock(zram, index);
++ continue;
++ }
++
+ #ifdef CONFIG_ZRAM_TRACK_ENTRY_ACTIME
+- is_idle = !cutoff || ktime_after(cutoff,
+- zram->table[index].ac_time);
++ is_idle = !cutoff ||
++ ktime_after(cutoff, zram->table[index].ac_time);
+ #endif
+- if (is_idle)
+- zram_set_flag(zram, index, ZRAM_IDLE);
+- }
++ if (is_idle)
++ zram_set_flag(zram, index, ZRAM_IDLE);
++ else
++ zram_clear_flag(zram, index, ZRAM_IDLE);
+ zram_slot_unlock(zram, index);
+ }
+ }
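After this change mark_idle() filters slots up front: only an allocated slot that is none of ZRAM_WB, ZRAM_UNDER_WB or ZRAM_SAME can gain ZRAM_IDLE, and a slot that fails the cutoff now has any stale ZRAM_IDLE cleared rather than left behind. The candidacy test, reduced to a standalone boolean sketch; the flag values are invented for illustration, and the kernel tests per-slot flag bits under zram_slot_lock():

/* Boolean sketch of the new mark_idle() slot filter. */
#include <stdbool.h>
#include <stdio.h>

enum { ZRAM_ALLOCATED = 1, ZRAM_WB = 2, ZRAM_UNDER_WB = 4, ZRAM_SAME = 8 };

static bool idle_candidate(unsigned int flags)
{
	/* Post-processing never touches WB, UNDER_WB or SAME slots. */
	return (flags & ZRAM_ALLOCATED) &&
	       !(flags & (ZRAM_WB | ZRAM_UNDER_WB | ZRAM_SAME));
}

int main(void)
{
	printf("%d\n", idle_candidate(ZRAM_ALLOCATED));			/* 1 */
	printf("%d\n", idle_candidate(ZRAM_ALLOCATED | ZRAM_SAME));	/* 0 */
	return 0;
}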
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 4ccaddb46ddd81..11755cb1eb1635 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -524,6 +524,8 @@ static const struct usb_device_id quirks_table[] = {
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x13d3, 0x3591), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe123), .driver_info = BTUSB_REALTEK |
++ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0489, 0xe125), .driver_info = BTUSB_REALTEK |
+ BTUSB_WIDEBAND_SPEECH },
+
+@@ -563,6 +565,16 @@ static const struct usb_device_id quirks_table[] = {
+ { USB_DEVICE(0x043e, 0x3109), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
+
++ /* Additional MediaTek MT7920 Bluetooth devices */
++ { USB_DEVICE(0x0489, 0xe134), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x13d3, 0x3620), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x13d3, 0x3621), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x13d3, 0x3622), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++
+ /* Additional MediaTek MT7921 Bluetooth devices */
+ { USB_DEVICE(0x0489, 0xe0c8), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
+@@ -630,12 +642,24 @@ static const struct usb_device_id quirks_table[] = {
+ BTUSB_WIDEBAND_SPEECH },
+
+ /* Additional MediaTek MT7925 Bluetooth devices */
++ { USB_DEVICE(0x0489, 0xe111), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0489, 0xe113), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0489, 0xe118), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0489, 0xe11e), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe124), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe139), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe14f), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe150), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe151), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x13d3, 0x3602), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x13d3, 0x3603), .driver_info = BTUSB_MEDIATEK |
+@@ -3897,6 +3921,8 @@ static int btusb_probe(struct usb_interface *intf,
+ set_bit(HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT, &hdev->quirks);
+ set_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &hdev->quirks);
+ set_bit(HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE, &hdev->quirks);
++ set_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, &hdev->quirks);
++ set_bit(HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT, &hdev->quirks);
+ }
+
+ if (!reset)
+diff --git a/drivers/clk/clk-en7523.c b/drivers/clk/clk-en7523.c
+index fdd8ea989ed24a..bc21b292144926 100644
+--- a/drivers/clk/clk-en7523.c
++++ b/drivers/clk/clk-en7523.c
+@@ -508,6 +508,8 @@ static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_dat
+ u32 rate;
+ int i;
+
++ clk_data->num = EN7523_NUM_CLOCKS;
++
+ for (i = 0; i < ARRAY_SIZE(en7523_base_clks); i++) {
+ const struct en_clk_desc *desc = &en7523_base_clks[i];
+ u32 reg = desc->div_reg ? desc->div_reg : desc->base_reg;
+@@ -529,8 +531,6 @@ static void en7523_register_clocks(struct device *dev, struct clk_hw_onecell_dat
+
+ hw = en7523_register_pcie_clk(dev, np_base);
+ clk_data->hws[EN7523_CLK_PCIE] = hw;
+-
+- clk_data->num = EN7523_NUM_CLOCKS;
+ }
+
+ static int en7523_clk_hw_init(struct platform_device *pdev,
+diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
+index 4444dafa4e3dfa..9ba675f229b144 100644
+--- a/drivers/clk/qcom/Kconfig
++++ b/drivers/clk/qcom/Kconfig
+@@ -959,10 +959,10 @@ config SM_DISPCC_8450
+ config SM_DISPCC_8550
+ tristate "SM8550 Display Clock Controller"
+ depends on ARM64 || COMPILE_TEST
+- depends on SM_GCC_8550 || SM_GCC_8650
++ depends on SM_GCC_8550 || SM_GCC_8650 || SAR_GCC_2130P
+ help
+ Support for the display clock controller on Qualcomm Technologies, Inc
+- SM8550 or SM8650 devices.
++ SAR2130P, SM8550 or SM8650 devices.
+ Say Y if you want to support display devices and functionality such as
+ splash screen.
+
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index be9bee6ab65f6e..49687512184b92 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -267,6 +267,17 @@ const u8 clk_alpha_pll_regs[][PLL_OFF_MAX_REGS] = {
+ [PLL_OFF_OPMODE] = 0x30,
+ [PLL_OFF_STATUS] = 0x3c,
+ },
++ [CLK_ALPHA_PLL_TYPE_NSS_HUAYRA] = {
++ [PLL_OFF_L_VAL] = 0x04,
++ [PLL_OFF_ALPHA_VAL] = 0x08,
++ [PLL_OFF_TEST_CTL] = 0x0c,
++ [PLL_OFF_TEST_CTL_U] = 0x10,
++ [PLL_OFF_USER_CTL] = 0x14,
++ [PLL_OFF_CONFIG_CTL] = 0x18,
++ [PLL_OFF_CONFIG_CTL_U] = 0x1c,
++ [PLL_OFF_STATUS] = 0x20,
++ },
++
+ };
+ EXPORT_SYMBOL_GPL(clk_alpha_pll_regs);
+
+diff --git a/drivers/clk/qcom/clk-alpha-pll.h b/drivers/clk/qcom/clk-alpha-pll.h
+index 55eca04b23a1fc..c6d1b8429f951a 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.h
++++ b/drivers/clk/qcom/clk-alpha-pll.h
+@@ -32,6 +32,7 @@ enum {
+ CLK_ALPHA_PLL_TYPE_BRAMMO_EVO,
+ CLK_ALPHA_PLL_TYPE_STROMER,
+ CLK_ALPHA_PLL_TYPE_STROMER_PLUS,
++ CLK_ALPHA_PLL_TYPE_NSS_HUAYRA,
+ CLK_ALPHA_PLL_TYPE_MAX,
+ };
+
+diff --git a/drivers/clk/qcom/clk-rcg.h b/drivers/clk/qcom/clk-rcg.h
+index 8e0f3372dc7a83..80f1f4fcd52a68 100644
+--- a/drivers/clk/qcom/clk-rcg.h
++++ b/drivers/clk/qcom/clk-rcg.h
+@@ -198,6 +198,7 @@ extern const struct clk_ops clk_byte2_ops;
+ extern const struct clk_ops clk_pixel_ops;
+ extern const struct clk_ops clk_gfx3d_ops;
+ extern const struct clk_ops clk_rcg2_shared_ops;
++extern const struct clk_ops clk_rcg2_shared_floor_ops;
+ extern const struct clk_ops clk_rcg2_shared_no_init_park_ops;
+ extern const struct clk_ops clk_dp_ops;
+
+diff --git a/drivers/clk/qcom/clk-rcg2.c b/drivers/clk/qcom/clk-rcg2.c
+index bf26c5448f0067..bf6406f5279a4c 100644
+--- a/drivers/clk/qcom/clk-rcg2.c
++++ b/drivers/clk/qcom/clk-rcg2.c
+@@ -1186,15 +1186,23 @@ clk_rcg2_shared_force_enable_clear(struct clk_hw *hw, const struct freq_tbl *f)
+ return clk_rcg2_clear_force_enable(hw);
+ }
+
+-static int clk_rcg2_shared_set_rate(struct clk_hw *hw, unsigned long rate,
+- unsigned long parent_rate)
++static int __clk_rcg2_shared_set_rate(struct clk_hw *hw, unsigned long rate,
++ unsigned long parent_rate,
++ enum freq_policy policy)
+ {
+ struct clk_rcg2 *rcg = to_clk_rcg2(hw);
+ const struct freq_tbl *f;
+
+- f = qcom_find_freq(rcg->freq_tbl, rate);
+- if (!f)
++ switch (policy) {
++ case FLOOR:
++ f = qcom_find_freq_floor(rcg->freq_tbl, rate);
++ break;
++ case CEIL:
++ f = qcom_find_freq(rcg->freq_tbl, rate);
++ break;
++ default:
+ return -EINVAL;
++ }
+
+ /*
+ * In case clock is disabled, update the M, N and D registers, cache
+@@ -1207,10 +1215,28 @@ static int clk_rcg2_shared_set_rate(struct clk_hw *hw, unsigned long rate,
+ return clk_rcg2_shared_force_enable_clear(hw, f);
+ }
+
++static int clk_rcg2_shared_set_rate(struct clk_hw *hw, unsigned long rate,
++ unsigned long parent_rate)
++{
++ return __clk_rcg2_shared_set_rate(hw, rate, parent_rate, CEIL);
++}
++
+ static int clk_rcg2_shared_set_rate_and_parent(struct clk_hw *hw,
+ unsigned long rate, unsigned long parent_rate, u8 index)
+ {
+- return clk_rcg2_shared_set_rate(hw, rate, parent_rate);
++ return __clk_rcg2_shared_set_rate(hw, rate, parent_rate, CEIL);
++}
++
++static int clk_rcg2_shared_set_floor_rate(struct clk_hw *hw, unsigned long rate,
++ unsigned long parent_rate)
++{
++ return __clk_rcg2_shared_set_rate(hw, rate, parent_rate, FLOOR);
++}
++
++static int clk_rcg2_shared_set_floor_rate_and_parent(struct clk_hw *hw,
++ unsigned long rate, unsigned long parent_rate, u8 index)
++{
++ return __clk_rcg2_shared_set_rate(hw, rate, parent_rate, FLOOR);
+ }
+
+ static int clk_rcg2_shared_enable(struct clk_hw *hw)
+@@ -1348,6 +1374,18 @@ const struct clk_ops clk_rcg2_shared_ops = {
+ };
+ EXPORT_SYMBOL_GPL(clk_rcg2_shared_ops);
+
++const struct clk_ops clk_rcg2_shared_floor_ops = {
++ .enable = clk_rcg2_shared_enable,
++ .disable = clk_rcg2_shared_disable,
++ .get_parent = clk_rcg2_shared_get_parent,
++ .set_parent = clk_rcg2_shared_set_parent,
++ .recalc_rate = clk_rcg2_shared_recalc_rate,
++ .determine_rate = clk_rcg2_determine_floor_rate,
++ .set_rate = clk_rcg2_shared_set_floor_rate,
++ .set_rate_and_parent = clk_rcg2_shared_set_floor_rate_and_parent,
++};
++EXPORT_SYMBOL_GPL(clk_rcg2_shared_floor_ops);
++
+ static int clk_rcg2_shared_no_init_park(struct clk_hw *hw)
+ {
+ struct clk_rcg2 *rcg = to_clk_rcg2(hw);
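clk_rcg2_shared_floor_ops differs from clk_rcg2_shared_ops only in rounding policy: __clk_rcg2_shared_set_rate() looks the frequency up with qcom_find_freq_floor() (the highest table entry at or below the request) instead of qcom_find_freq() (the lowest entry at or above it). A self-contained illustration of the two lookups, with hypothetical helpers over the SAR2130P MDP table above:

/* CEIL vs FLOOR lookup over a sorted frequency table; illustrative
 * analogues of qcom_find_freq() and qcom_find_freq_floor(). */
#include <stdio.h>

static const unsigned long freq_tbl[] = { 200000000, 325000000, 514000000 };
#define N (sizeof(freq_tbl) / sizeof(freq_tbl[0]))

static unsigned long find_freq_ceil(unsigned long rate)
{
	for (size_t i = 0; i < N; i++)
		if (freq_tbl[i] >= rate)
			return freq_tbl[i];
	return freq_tbl[N - 1];
}

static unsigned long find_freq_floor(unsigned long rate)
{
	for (size_t i = N; i-- > 0; )
		if (freq_tbl[i] <= rate)
			return freq_tbl[i];
	return freq_tbl[0];
}

int main(void)
{
	printf("ceil(300MHz)  = %lu\n", find_freq_ceil(300000000));  /* 325 MHz */
	printf("floor(300MHz) = %lu\n", find_freq_floor(300000000)); /* 200 MHz */
	return 0;
}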
+diff --git a/drivers/clk/qcom/clk-rpmh.c b/drivers/clk/qcom/clk-rpmh.c
+index 4acde937114af3..eefc322ce36798 100644
+--- a/drivers/clk/qcom/clk-rpmh.c
++++ b/drivers/clk/qcom/clk-rpmh.c
+@@ -389,6 +389,18 @@ DEFINE_CLK_RPMH_BCM(ipa, "IP0");
+ DEFINE_CLK_RPMH_BCM(pka, "PKA0");
+ DEFINE_CLK_RPMH_BCM(qpic_clk, "QP0");
+
++static struct clk_hw *sar2130p_rpmh_clocks[] = {
++ [RPMH_CXO_CLK] = &clk_rpmh_bi_tcxo_div1.hw,
++ [RPMH_CXO_CLK_A] = &clk_rpmh_bi_tcxo_div1_ao.hw,
++ [RPMH_RF_CLK1] = &clk_rpmh_rf_clk1_a.hw,
++ [RPMH_RF_CLK1_A] = &clk_rpmh_rf_clk1_a_ao.hw,
++};
++
++static const struct clk_rpmh_desc clk_rpmh_sar2130p = {
++ .clks = sar2130p_rpmh_clocks,
++ .num_clks = ARRAY_SIZE(sar2130p_rpmh_clocks),
++};
++
+ static struct clk_hw *sdm845_rpmh_clocks[] = {
+ [RPMH_CXO_CLK] = &clk_rpmh_bi_tcxo_div2.hw,
+ [RPMH_CXO_CLK_A] = &clk_rpmh_bi_tcxo_div2_ao.hw,
+@@ -880,6 +892,7 @@ static int clk_rpmh_probe(struct platform_device *pdev)
+ static const struct of_device_id clk_rpmh_match_table[] = {
+ { .compatible = "qcom,qdu1000-rpmh-clk", .data = &clk_rpmh_qdu1000},
+ { .compatible = "qcom,sa8775p-rpmh-clk", .data = &clk_rpmh_sa8775p},
++ { .compatible = "qcom,sar2130p-rpmh-clk", .data = &clk_rpmh_sar2130p},
+ { .compatible = "qcom,sc7180-rpmh-clk", .data = &clk_rpmh_sc7180},
+ { .compatible = "qcom,sc8180x-rpmh-clk", .data = &clk_rpmh_sc8180x},
+ { .compatible = "qcom,sc8280xp-rpmh-clk", .data = &clk_rpmh_sc8280xp},
+diff --git a/drivers/clk/qcom/dispcc-sm8550.c b/drivers/clk/qcom/dispcc-sm8550.c
+index 7f9021ca0ecb0e..e41d4104d77021 100644
+--- a/drivers/clk/qcom/dispcc-sm8550.c
++++ b/drivers/clk/qcom/dispcc-sm8550.c
+@@ -75,7 +75,7 @@ static struct pll_vco lucid_ole_vco[] = {
+ { 249600000, 2000000000, 0 },
+ };
+
+-static const struct alpha_pll_config disp_cc_pll0_config = {
++static struct alpha_pll_config disp_cc_pll0_config = {
+ .l = 0xd,
+ .alpha = 0x6492,
+ .config_ctl_val = 0x20485699,
+@@ -106,7 +106,7 @@ static struct clk_alpha_pll disp_cc_pll0 = {
+ },
+ };
+
+-static const struct alpha_pll_config disp_cc_pll1_config = {
++static struct alpha_pll_config disp_cc_pll1_config = {
+ .l = 0x1f,
+ .alpha = 0x4000,
+ .config_ctl_val = 0x20485699,
+@@ -594,6 +594,13 @@ static const struct freq_tbl ftbl_disp_cc_mdss_mdp_clk_src[] = {
+ { }
+ };
+
++static const struct freq_tbl ftbl_disp_cc_mdss_mdp_clk_src_sar2130p[] = {
++ F(200000000, P_DISP_CC_PLL0_OUT_MAIN, 3, 0, 0),
++ F(325000000, P_DISP_CC_PLL0_OUT_MAIN, 3, 0, 0),
++ F(514000000, P_DISP_CC_PLL0_OUT_MAIN, 3, 0, 0),
++ { }
++};
++
+ static const struct freq_tbl ftbl_disp_cc_mdss_mdp_clk_src_sm8650[] = {
+ F(19200000, P_BI_TCXO, 1, 0, 0),
+ F(85714286, P_DISP_CC_PLL0_OUT_MAIN, 3, 0, 0),
+@@ -1750,6 +1757,7 @@ static struct qcom_cc_desc disp_cc_sm8550_desc = {
+ };
+
+ static const struct of_device_id disp_cc_sm8550_match_table[] = {
++ { .compatible = "qcom,sar2130p-dispcc" },
+ { .compatible = "qcom,sm8550-dispcc" },
+ { .compatible = "qcom,sm8650-dispcc" },
+ { }
+@@ -1780,6 +1788,12 @@ static int disp_cc_sm8550_probe(struct platform_device *pdev)
+ disp_cc_mdss_mdp_clk_src.freq_tbl = ftbl_disp_cc_mdss_mdp_clk_src_sm8650;
+ disp_cc_mdss_dptx1_usb_router_link_intf_clk.clkr.hw.init->parent_hws[0] =
+ &disp_cc_mdss_dptx1_link_div_clk_src.clkr.hw;
++ } else if (of_device_is_compatible(pdev->dev.of_node, "qcom,sar2130p-dispcc")) {
++ disp_cc_pll0_config.l = 0x1f;
++ disp_cc_pll0_config.alpha = 0x4000;
++ disp_cc_pll0_config.user_ctl_val = 0x1;
++ disp_cc_pll1_config.user_ctl_val = 0x1;
++ disp_cc_mdss_mdp_clk_src.freq_tbl = ftbl_disp_cc_mdss_mdp_clk_src_sar2130p;
+ }
+
+ clk_lucid_ole_pll_configure(&disp_cc_pll0, regmap, &disp_cc_pll0_config);
+diff --git a/drivers/clk/qcom/tcsrcc-sm8550.c b/drivers/clk/qcom/tcsrcc-sm8550.c
+index e5e8f2e82b949d..41d73f92a000ab 100644
+--- a/drivers/clk/qcom/tcsrcc-sm8550.c
++++ b/drivers/clk/qcom/tcsrcc-sm8550.c
+@@ -129,6 +129,13 @@ static struct clk_branch tcsr_usb3_clkref_en = {
+ },
+ };
+
++static struct clk_regmap *tcsr_cc_sar2130p_clocks[] = {
++ [TCSR_PCIE_0_CLKREF_EN] = &tcsr_pcie_0_clkref_en.clkr,
++ [TCSR_PCIE_1_CLKREF_EN] = &tcsr_pcie_1_clkref_en.clkr,
++ [TCSR_USB2_CLKREF_EN] = &tcsr_usb2_clkref_en.clkr,
++ [TCSR_USB3_CLKREF_EN] = &tcsr_usb3_clkref_en.clkr,
++};
++
+ static struct clk_regmap *tcsr_cc_sm8550_clocks[] = {
+ [TCSR_PCIE_0_CLKREF_EN] = &tcsr_pcie_0_clkref_en.clkr,
+ [TCSR_PCIE_1_CLKREF_EN] = &tcsr_pcie_1_clkref_en.clkr,
+@@ -146,6 +153,12 @@ static const struct regmap_config tcsr_cc_sm8550_regmap_config = {
+ .fast_io = true,
+ };
+
++static const struct qcom_cc_desc tcsr_cc_sar2130p_desc = {
++ .config = &tcsr_cc_sm8550_regmap_config,
++ .clks = tcsr_cc_sar2130p_clocks,
++ .num_clks = ARRAY_SIZE(tcsr_cc_sar2130p_clocks),
++};
++
+ static const struct qcom_cc_desc tcsr_cc_sm8550_desc = {
+ .config = &tcsr_cc_sm8550_regmap_config,
+ .clks = tcsr_cc_sm8550_clocks,
+@@ -153,7 +166,8 @@ static const struct qcom_cc_desc tcsr_cc_sm8550_desc = {
+ };
+
+ static const struct of_device_id tcsr_cc_sm8550_match_table[] = {
+- { .compatible = "qcom,sm8550-tcsr" },
++ { .compatible = "qcom,sar2130p-tcsr", .data = &tcsr_cc_sar2130p_desc },
++ { .compatible = "qcom,sm8550-tcsr", .data = &tcsr_cc_sm8550_desc },
+ { }
+ };
+ MODULE_DEVICE_TABLE(of, tcsr_cc_sm8550_match_table);
+@@ -162,7 +176,7 @@ static int tcsr_cc_sm8550_probe(struct platform_device *pdev)
+ {
+ struct regmap *regmap;
+
+- regmap = qcom_cc_map(pdev, &tcsr_cc_sm8550_desc);
++ regmap = qcom_cc_map(pdev, of_device_get_match_data(&pdev->dev));
+ if (IS_ERR(regmap))
+ return PTR_ERR(regmap);
+
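The tcsr probe now resolves its descriptor through the match table instead of hardcoding the sm8550 one, which is what lets the new sar2130p entry carry a smaller clock list. A self-contained C model of the compatible-string-to-data lookup that of_device_get_match_data() performs — types and table contents here are illustrative stand-ins:

    #include <stdio.h>
    #include <string.h>

    struct desc { const char *name; int num_clks; };

    static const struct desc sar2130p_desc = { "sar2130p", 4 };
    static const struct desc sm8550_desc   = { "sm8550",   7 };

    struct of_match { const char *compatible; const void *data; };

    static const struct of_match match_table[] = {
        { "qcom,sar2130p-tcsr", &sar2130p_desc },
        { "qcom,sm8550-tcsr",   &sm8550_desc },
        { NULL, NULL }
    };

    /* stand-in for of_device_get_match_data(): look up by compatible */
    static const void *get_match_data(const char *compatible)
    {
        const struct of_match *m;

        for (m = match_table; m->compatible; m++)
            if (!strcmp(m->compatible, compatible))
                return m->data;
        return NULL;
    }

    int main(void)
    {
        const struct desc *d = get_match_data("qcom,sar2130p-tcsr");

        if (d)
            printf("%s: %d clocks\n", d->name, d->num_clks);
        return 0;
    }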
+diff --git a/drivers/dma-buf/dma-fence-array.c b/drivers/dma-buf/dma-fence-array.c
+index 8a08ffde31e758..6657d4b30af9dc 100644
+--- a/drivers/dma-buf/dma-fence-array.c
++++ b/drivers/dma-buf/dma-fence-array.c
+@@ -103,10 +103,36 @@ static bool dma_fence_array_enable_signaling(struct dma_fence *fence)
+ static bool dma_fence_array_signaled(struct dma_fence *fence)
+ {
+ struct dma_fence_array *array = to_dma_fence_array(fence);
++ int num_pending;
++ unsigned int i;
+
+- if (atomic_read(&array->num_pending) > 0)
++ /*
++ * We need to read num_pending before checking the enable_signal bit
++ * to avoid racing with the enable_signaling() implementation, which
++ * might decrement the counter, and cause a partial check.
++ * atomic_read_acquire() pairs with atomic_dec_and_test() in
++ * dma_fence_array_enable_signaling()
++ *
++ * The !--num_pending check is here to account for the any_signaled case:
++ * if we race with enable_signaling(), the !num_pending check in the
++ * signaling-enabled branch might be outdated (num_pending might have
++ * been decremented), but that's fine. The user will get the right
++ * value when testing again later.
++ */
++ num_pending = atomic_read_acquire(&array->num_pending);
++ if (test_bit(DMA_FENCE_FLAG_ENABLE_SIGNAL_BIT, &array->base.flags)) {
++ if (num_pending <= 0)
++ goto signal;
+ return false;
++ }
++
++ for (i = 0; i < array->num_fences; ++i) {
++ if (dma_fence_is_signaled(array->fences[i]) && !--num_pending)
++ goto signal;
++ }
++ return false;
+
++signal:
+ dma_fence_array_clear_pending_error(array);
+ return true;
+ }
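The comment block above codifies a subtle pairing: the acquire-load of num_pending must complete before the enable-signal bit is tested, matching the decrement on the callback side. A standalone C11 model of that shape — the struct, names, and explicit memory orders below are illustrative stand-ins, not the kernel's atomics API (atomic_dec_and_test() is fully ordered in the kernel):

    #include <stdatomic.h>
    #include <stdbool.h>

    struct array_state {
        atomic_int num_pending;
        atomic_bool signaling_enabled;
    };

    /* consumer side: mirrors the shape of dma_fence_array_signaled() */
    static bool array_signaled(struct array_state *a)
    {
        /* acquire-load pairs with the release decrement below, so the
         * flag test that follows cannot pair a fresh flag with a counter
         * read from before signaling was enabled */
        int pending = atomic_load_explicit(&a->num_pending,
                                           memory_order_acquire);

        if (atomic_load_explicit(&a->signaling_enabled,
                                 memory_order_relaxed))
            return pending <= 0;
        return false; /* the kernel falls back to scanning the fences here */
    }

    /* producer side: mirrors the callback path in enable_signaling */
    static bool fence_signaled_cb(struct array_state *a)
    {
        /* release-decrement publishes this fence's completion before any
         * later store becomes visible to the acquire reader */
        return atomic_fetch_sub_explicit(&a->num_pending, 1,
                                         memory_order_release) == 1;
    }

    int main(void)
    {
        struct array_state a = { 2, false };

        fence_signaled_cb(&a);
        fence_signaled_cb(&a);
        atomic_store(&a.signaling_enabled, true);
        return array_signaled(&a) ? 0 : 1;
    }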
+diff --git a/drivers/dma-buf/dma-fence-unwrap.c b/drivers/dma-buf/dma-fence-unwrap.c
+index 628af51c81af3d..6345062731f153 100644
+--- a/drivers/dma-buf/dma-fence-unwrap.c
++++ b/drivers/dma-buf/dma-fence-unwrap.c
+@@ -12,6 +12,7 @@
+ #include <linux/dma-fence-chain.h>
+ #include <linux/dma-fence-unwrap.h>
+ #include <linux/slab.h>
++#include <linux/sort.h>
+
+ /* Internal helper to start new array iteration, don't use directly */
+ static struct dma_fence *
+@@ -59,6 +60,25 @@ struct dma_fence *dma_fence_unwrap_next(struct dma_fence_unwrap *cursor)
+ }
+ EXPORT_SYMBOL_GPL(dma_fence_unwrap_next);
+
++
++static int fence_cmp(const void *_a, const void *_b)
++{
++ struct dma_fence *a = *(struct dma_fence **)_a;
++ struct dma_fence *b = *(struct dma_fence **)_b;
++
++ if (a->context < b->context)
++ return -1;
++ else if (a->context > b->context)
++ return 1;
++
++ if (dma_fence_is_later(b, a))
++ return 1;
++ else if (dma_fence_is_later(a, b))
++ return -1;
++
++ return 0;
++}
++
+ /* Implementation for the dma_fence_merge() macro, don't use directly */
+ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
+ struct dma_fence **fences,
+@@ -67,8 +87,7 @@ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
+ struct dma_fence_array *result;
+ struct dma_fence *tmp, **array;
+ ktime_t timestamp;
+- unsigned int i;
+- size_t count;
++ int i, j, count;
+
+ count = 0;
+ timestamp = ns_to_ktime(0);
+@@ -96,78 +115,55 @@ struct dma_fence *__dma_fence_unwrap_merge(unsigned int num_fences,
+ if (!array)
+ return NULL;
+
+- /*
+- * This trashes the input fence array and uses it as position for the
+- * following merge loop. This works because the dma_fence_merge()
+- * wrapper macro is creating this temporary array on the stack together
+- * with the iterators.
+- */
+- for (i = 0; i < num_fences; ++i)
+- fences[i] = dma_fence_unwrap_first(fences[i], &iter[i]);
+-
+ count = 0;
+- do {
+- unsigned int sel;
+-
+-restart:
+- tmp = NULL;
+- for (i = 0; i < num_fences; ++i) {
+- struct dma_fence *next;
+-
+- while (fences[i] && dma_fence_is_signaled(fences[i]))
+- fences[i] = dma_fence_unwrap_next(&iter[i]);
+-
+- next = fences[i];
+- if (!next)
+- continue;
+-
+- /*
+- * We can't guarantee that inpute fences are ordered by
+- * context, but it is still quite likely when this
+- * function is used multiple times. So attempt to order
+- * the fences by context as we pass over them and merge
+- * fences with the same context.
+- */
+- if (!tmp || tmp->context > next->context) {
+- tmp = next;
+- sel = i;
+-
+- } else if (tmp->context < next->context) {
+- continue;
+-
+- } else if (dma_fence_is_later(tmp, next)) {
+- fences[i] = dma_fence_unwrap_next(&iter[i]);
+- goto restart;
++ for (i = 0; i < num_fences; ++i) {
++ dma_fence_unwrap_for_each(tmp, &iter[i], fences[i]) {
++ if (!dma_fence_is_signaled(tmp)) {
++ array[count++] = dma_fence_get(tmp);
+ } else {
+- fences[sel] = dma_fence_unwrap_next(&iter[sel]);
+- goto restart;
++ ktime_t t = dma_fence_timestamp(tmp);
++
++ if (ktime_after(t, timestamp))
++ timestamp = t;
+ }
+ }
++ }
+
+- if (tmp) {
+- array[count++] = dma_fence_get(tmp);
+- fences[sel] = dma_fence_unwrap_next(&iter[sel]);
+- }
+- } while (tmp);
++ if (count == 0 || count == 1)
++ goto return_fastpath;
+
+- if (count == 0) {
+- tmp = dma_fence_allocate_private_stub(ktime_get());
+- goto return_tmp;
+- }
++ sort(array, count, sizeof(*array), fence_cmp, NULL);
+
+- if (count == 1) {
+- tmp = array[0];
+- goto return_tmp;
++ /*
++ * Only keep the most recent fence for each context.
++ */
++ j = 0;
++ for (i = 1; i < count; i++) {
++ if (array[i]->context == array[j]->context)
++ dma_fence_put(array[i]);
++ else
++ array[++j] = array[i];
+ }
+-
+- result = dma_fence_array_create(count, array,
+- dma_fence_context_alloc(1),
+- 1, false);
+- if (!result) {
+- tmp = NULL;
+- goto return_tmp;
++ count = ++j;
++
++ if (count > 1) {
++ result = dma_fence_array_create(count, array,
++ dma_fence_context_alloc(1),
++ 1, false);
++ if (!result) {
++ for (i = 0; i < count; i++)
++ dma_fence_put(array[i]);
++ tmp = NULL;
++ goto return_tmp;
++ }
++ return &result->base;
+ }
+- return &result->base;
++
++return_fastpath:
++ if (count == 0)
++ tmp = dma_fence_allocate_private_stub(timestamp);
++ else
++ tmp = array[0];
+
+ return_tmp:
+ kfree(array);
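The rewrite replaces the order-sensitive merge loop with a collect-sort-dedup pass: gather every unsignaled fence, sort by context with the most recent fence first inside each context, then keep exactly one fence per context. A standalone C sketch of that pass using qsort() — the struct layout and plain seqno comparison are simplified stand-ins for dma_fence_is_later():

    #include <stdio.h>
    #include <stdlib.h>

    struct fence { unsigned long context; unsigned long seqno; };

    /* order by context; newest (highest seqno) first within a context */
    static int fence_cmp(const void *pa, const void *pb)
    {
        const struct fence *a = pa, *b = pb;

        if (a->context != b->context)
            return a->context < b->context ? -1 : 1;
        return a->seqno > b->seqno ? -1 : (a->seqno < b->seqno);
    }

    int main(void)
    {
        struct fence f[] = { {2, 5}, {1, 9}, {2, 7}, {1, 3} };
        int count = 4, i, j = 0;

        qsort(f, count, sizeof(*f), fence_cmp);

        /* keep only the first (most recent) fence per context */
        for (i = 1; i < count; i++)
            if (f[i].context != f[j].context)
                f[++j] = f[i];
        count = j + 1;

        for (i = 0; i < count; i++)
            printf("ctx %lu seq %lu\n", f[i].context, f[i].seqno);
        return 0; /* prints ctx 1 seq 9, then ctx 2 seq 7 */
    }

Sorting makes the merge O(n log n) and independent of the input ordering the old goto-restart loop depended on.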
+diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c
+index 2e4260ba5f793c..14afd68664a911 100644
+--- a/drivers/firmware/qcom/qcom_scm.c
++++ b/drivers/firmware/qcom/qcom_scm.c
+@@ -1742,9 +1742,11 @@ EXPORT_SYMBOL_GPL(qcom_scm_qseecom_app_send);
+ * any potential issues with this, only allow validated machines for now.
+ */
+ static const struct of_device_id qcom_scm_qseecom_allowlist[] __maybe_unused = {
++ { .compatible = "dell,xps13-9345" },
+ { .compatible = "lenovo,flex-5g" },
+ { .compatible = "lenovo,thinkpad-t14s" },
+ { .compatible = "lenovo,thinkpad-x13s", },
++ { .compatible = "lenovo,yoga-slim7x" },
+ { .compatible = "microsoft,romulus13", },
+ { .compatible = "microsoft,romulus15", },
+ { .compatible = "qcom,sc8180x-primus" },
+diff --git a/drivers/gpio/gpio-grgpio.c b/drivers/gpio/gpio-grgpio.c
+index 017c7170eb57c4..620793740c6681 100644
+--- a/drivers/gpio/gpio-grgpio.c
++++ b/drivers/gpio/gpio-grgpio.c
+@@ -328,6 +328,7 @@ static const struct irq_domain_ops grgpio_irq_domain_ops = {
+ static int grgpio_probe(struct platform_device *ofdev)
+ {
+ struct device_node *np = ofdev->dev.of_node;
++ struct device *dev = &ofdev->dev;
+ void __iomem *regs;
+ struct gpio_chip *gc;
+ struct grgpio_priv *priv;
+@@ -337,7 +338,7 @@ static int grgpio_probe(struct platform_device *ofdev)
+ int size;
+ int i;
+
+- priv = devm_kzalloc(&ofdev->dev, sizeof(*priv), GFP_KERNEL);
++ priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return -ENOMEM;
+
+@@ -346,28 +347,31 @@ static int grgpio_probe(struct platform_device *ofdev)
+ return PTR_ERR(regs);
+
+ gc = &priv->gc;
+- err = bgpio_init(gc, &ofdev->dev, 4, regs + GRGPIO_DATA,
++ err = bgpio_init(gc, dev, 4, regs + GRGPIO_DATA,
+ regs + GRGPIO_OUTPUT, NULL, regs + GRGPIO_DIR, NULL,
+ BGPIOF_BIG_ENDIAN_BYTE_ORDER);
+ if (err) {
+- dev_err(&ofdev->dev, "bgpio_init() failed\n");
++ dev_err(dev, "bgpio_init() failed\n");
+ return err;
+ }
+
+ priv->regs = regs;
+ priv->imask = gc->read_reg(regs + GRGPIO_IMASK);
+- priv->dev = &ofdev->dev;
++ priv->dev = dev;
+
+ gc->owner = THIS_MODULE;
+ gc->to_irq = grgpio_to_irq;
+- gc->label = devm_kasprintf(&ofdev->dev, GFP_KERNEL, "%pOF", np);
++ gc->label = devm_kasprintf(dev, GFP_KERNEL, "%pOF", np);
++ if (!gc->label)
++ return -ENOMEM;
++
+ gc->base = -1;
+
+ err = of_property_read_u32(np, "nbits", &prop);
+ if (err || prop <= 0 || prop > GRGPIO_MAX_NGPIO) {
+ gc->ngpio = GRGPIO_MAX_NGPIO;
+- dev_dbg(&ofdev->dev,
+- "No or invalid nbits property: assume %d\n", gc->ngpio);
++ dev_dbg(dev, "No or invalid nbits property: assume %d\n",
++ gc->ngpio);
+ } else {
+ gc->ngpio = prop;
+ }
+@@ -379,7 +383,7 @@ static int grgpio_probe(struct platform_device *ofdev)
+ irqmap = (s32 *)of_get_property(np, "irqmap", &size);
+ if (irqmap) {
+ if (size < gc->ngpio) {
+- dev_err(&ofdev->dev,
++ dev_err(dev,
+ "irqmap shorter than ngpio (%d < %d)\n",
+ size, gc->ngpio);
+ return -EINVAL;
+@@ -389,7 +393,7 @@ static int grgpio_probe(struct platform_device *ofdev)
+ &grgpio_irq_domain_ops,
+ priv);
+ if (!priv->domain) {
+- dev_err(&ofdev->dev, "Could not add irq domain\n");
++ dev_err(dev, "Could not add irq domain\n");
+ return -EINVAL;
+ }
+
+@@ -419,13 +423,13 @@ static int grgpio_probe(struct platform_device *ofdev)
+
+ err = gpiochip_add_data(gc, priv);
+ if (err) {
+- dev_err(&ofdev->dev, "Could not add gpiochip\n");
++ dev_err(dev, "Could not add gpiochip\n");
+ if (priv->domain)
+ irq_domain_remove(priv->domain);
+ return err;
+ }
+
+- dev_info(&ofdev->dev, "regs=0x%p, base=%d, ngpio=%d, irqs=%s\n",
++ dev_info(dev, "regs=0x%p, base=%d, ngpio=%d, irqs=%s\n",
+ priv->regs, gc->base, gc->ngpio, priv->domain ? "on" : "off");
+
+ return 0;
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 2b02655abb56ea..44372f8647d51a 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -14,6 +14,7 @@
+ #include <linux/idr.h>
+ #include <linux/interrupt.h>
+ #include <linux/irq.h>
++#include <linux/irqdesc.h>
+ #include <linux/kernel.h>
+ #include <linux/list.h>
+ #include <linux/lockdep.h>
+@@ -713,6 +714,45 @@ bool gpiochip_line_is_valid(const struct gpio_chip *gc,
+ }
+ EXPORT_SYMBOL_GPL(gpiochip_line_is_valid);
+
++static void gpiod_free_irqs(struct gpio_desc *desc)
++{
++ int irq = gpiod_to_irq(desc);
++ struct irq_desc *irqd = irq_to_desc(irq);
++ void *cookie;
++
++ for (;;) {
++ /*
++ * Make sure the action doesn't go away while we're
++ * dereferencing it. Retrieve and store the cookie value.
++ * If the irq is freed after we release the lock, that's
++ * alright - the underlying maple tree lookup will return NULL
++ * and nothing will happen in free_irq().
++ */
++ scoped_guard(mutex, &irqd->request_mutex) {
++ if (!irq_desc_has_action(irqd))
++ return;
++
++ cookie = irqd->action->dev_id;
++ }
++
++ free_irq(irq, cookie);
++ }
++}
++
++/*
++ * The chip is going away but there may be users who had requested interrupts
++ * on its GPIO lines who have no idea about its removal and have no way of
++ * being notified about it. We need to free any interrupts still in use here or
++ * we'll leak memory and resources (like procfs files).
++ */
++static void gpiochip_free_remaining_irqs(struct gpio_chip *gc)
++{
++ struct gpio_desc *desc;
++
++ for_each_gpio_desc_with_flag(gc, desc, FLAG_USED_AS_IRQ)
++ gpiod_free_irqs(desc);
++}
++
+ static void gpiodev_release(struct device *dev)
+ {
+ struct gpio_device *gdev = to_gpio_device(dev);
+@@ -1125,6 +1165,7 @@ void gpiochip_remove(struct gpio_chip *gc)
+ /* FIXME: should the legacy sysfs handling be moved to gpio_device? */
+ gpiochip_sysfs_unregister(gdev);
+ gpiochip_free_hogs(gc);
++ gpiochip_free_remaining_irqs(gc);
+
+ scoped_guard(mutex, &gpio_devices_lock)
+ list_del_rcu(&gdev->list);
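The cleanup above loops because free_irq() itself takes request_mutex: each pass snapshots one action's cookie under the lock, releases it, frees that IRQ, and retries until no action remains. A self-contained C model of that snapshot-then-free loop — the list and locking here are userspace stand-ins, not the irq_desc internals:

    #include <pthread.h>
    #include <stddef.h>

    struct irq_action { void *dev_id; struct irq_action *next; };

    struct irq_desc {
        pthread_mutex_t request_mutex;
        struct irq_action *action;
    };

    /* stand-in for free_irq(): a stale cookie is just a lookup miss */
    static void free_irq_by_cookie(struct irq_desc *d, void *cookie)
    {
        struct irq_action **p;

        pthread_mutex_lock(&d->request_mutex);
        for (p = &d->action; *p; p = &(*p)->next) {
            if ((*p)->dev_id == cookie) {
                *p = (*p)->next;
                break;
            }
        }
        pthread_mutex_unlock(&d->request_mutex);
    }

    /* mirrors gpiod_free_irqs(): snapshot the cookie under the lock,
     * free outside it, loop until no action remains */
    static void free_remaining(struct irq_desc *d)
    {
        void *cookie;

        for (;;) {
            pthread_mutex_lock(&d->request_mutex);
            if (!d->action) {
                pthread_mutex_unlock(&d->request_mutex);
                return;
            }
            cookie = d->action->dev_id;
            pthread_mutex_unlock(&d->request_mutex);

            free_irq_by_cookie(d, cookie);
        }
    }

    int main(void)
    {
        int c1, c2;
        struct irq_action a2 = { &c2, NULL }, a1 = { &c1, &a2 };
        struct irq_desc d = { PTHREAD_MUTEX_INITIALIZER, &a1 };

        free_remaining(&d);
        return d.action != NULL;
    }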
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+index 7dd55ed57c1d97..b8d4e07d2043ed 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c
+@@ -800,6 +800,7 @@ int amdgpu_acpi_power_shift_control(struct amdgpu_device *adev,
+ return -EIO;
+ }
+
++ kfree(info);
+ return 0;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 1f08cb88d51be5..51904906545e59 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3666,7 +3666,7 @@ static int amdgpu_device_ip_resume_phase1(struct amdgpu_device *adev)
+ *
+ * @adev: amdgpu_device pointer
+ *
+- * First resume function for hardware IPs. The list of all the hardware
++ * Second resume function for hardware IPs. The list of all the hardware
+ * IPs that make up the asic is walked and the resume callbacks are run for
+ * all blocks except COMMON, GMC, and IH. resume puts the hardware into a
+ * functional state after a suspend and updates the software state as
+@@ -3684,6 +3684,7 @@ static int amdgpu_device_ip_resume_phase2(struct amdgpu_device *adev)
+ if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_COMMON ||
+ adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_GMC ||
+ adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_IH ||
++ adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_DCE ||
+ adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_PSP)
+ continue;
+ r = adev->ip_blocks[i].version->funcs->resume(adev);
+@@ -3698,6 +3699,36 @@ static int amdgpu_device_ip_resume_phase2(struct amdgpu_device *adev)
+ return 0;
+ }
+
++/**
++ * amdgpu_device_ip_resume_phase3 - run resume for hardware IPs
++ *
++ * @adev: amdgpu_device pointer
++ *
++ * Third resume function for hardware IPs. The list of all the hardware
++ * IPs that make up the asic is walked and the resume callbacks are run for
++ * all DCE. resume puts the hardware into a functional state after a suspend
++ * and updates the software state as necessary. This function is also used
++ * for restoring the GPU after a GPU reset.
++ *
++ * Returns 0 on success, negative error code on failure.
++ */
++static int amdgpu_device_ip_resume_phase3(struct amdgpu_device *adev)
++{
++ int i, r;
++
++ for (i = 0; i < adev->num_ip_blocks; i++) {
++ if (!adev->ip_blocks[i].status.valid || adev->ip_blocks[i].status.hw)
++ continue;
++ if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_DCE) {
++ r = adev->ip_blocks[i].version->funcs->resume(adev);
++ if (r)
++ return r;
++ }
++ }
++
++ return 0;
++}
++
+ /**
+ * amdgpu_device_ip_resume - run resume for hardware IPs
+ *
+@@ -3727,6 +3758,13 @@ static int amdgpu_device_ip_resume(struct amdgpu_device *adev)
+ if (adev->mman.buffer_funcs_ring->sched.ready)
+ amdgpu_ttm_set_buffer_funcs_status(adev, true);
+
++ if (r)
++ return r;
++
++ amdgpu_fence_driver_hw_init(adev);
++
++ r = amdgpu_device_ip_resume_phase3(adev);
++
+ return r;
+ }
+
+@@ -4809,7 +4847,6 @@ int amdgpu_device_resume(struct drm_device *dev, bool fbcon)
+ dev_err(adev->dev, "amdgpu_device_ip_resume failed (%d).\n", r);
+ goto exit;
+ }
+- amdgpu_fence_driver_hw_init(adev);
+
+ if (!adev->in_s0ix) {
+ r = amdgpu_amdkfd_resume(adev, adev->in_runpm);
+@@ -5431,6 +5468,10 @@ int amdgpu_do_asic_reset(struct list_head *device_list_handle,
+ if (tmp_adev->mman.buffer_funcs_ring->sched.ready)
+ amdgpu_ttm_set_buffer_funcs_status(tmp_adev, true);
+
++ r = amdgpu_device_ip_resume_phase3(tmp_adev);
++ if (r)
++ goto out;
++
+ if (vram_lost)
+ amdgpu_device_fill_reset_magic(tmp_adev);
+
+@@ -6344,6 +6385,9 @@ bool amdgpu_device_cache_pci_state(struct pci_dev *pdev)
+ struct amdgpu_device *adev = drm_to_adev(dev);
+ int r;
+
++ if (amdgpu_sriov_vf(adev))
++ return false;
++
+ r = pci_save_state(pdev);
+ if (!r) {
+ kfree(adev->pci_state);
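The new phase 3 defers DCE (display) resume until after amdgpu_fence_driver_hw_init(), both in the normal resume path and in the ASIC-reset path. A toy C model of the phase-filtered walk over IP blocks — the block types and bookkeeping below are illustrative, not the amdgpu structures:

    #include <stdio.h>

    enum ip_type { IP_COMMON, IP_GMC, IP_IH, IP_PSP, IP_DCE, IP_GFX };

    struct ip_block { enum ip_type type; int hw_up; };

    static int resume_block(struct ip_block *b) { b->hw_up = 1; return 0; }

    /* phase 2: everything except the early blocks and the display block */
    static int resume_phase2(struct ip_block *b, int n)
    {
        for (int i = 0; i < n; i++) {
            if (b[i].hw_up || b[i].type == IP_COMMON || b[i].type == IP_GMC ||
                b[i].type == IP_IH || b[i].type == IP_PSP || b[i].type == IP_DCE)
                continue;
            if (resume_block(&b[i]))
                return -1;
        }
        return 0;
    }

    /* phase 3: only the display block, after fences are live again */
    static int resume_phase3(struct ip_block *b, int n)
    {
        for (int i = 0; i < n; i++)
            if (!b[i].hw_up && b[i].type == IP_DCE && resume_block(&b[i]))
                return -1;
        return 0;
    }

    int main(void)
    {
        struct ip_block blocks[] = { { IP_GMC, 1 }, { IP_GFX, 0 }, { IP_DCE, 0 } };

        resume_phase2(blocks, 3);   /* brings up GFX, skips DCE */
        /* fence driver hw init would run between the phases here */
        resume_phase3(blocks, 3);   /* now bring up DCE */
        printf("dce up: %d\n", blocks[2].hw_up);
        return 0;
    }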
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index 74adb983ab03e0..9f922ec50ea2dc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -812,7 +812,7 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_device *bdev,
+ /* Map SG to device */
+ r = dma_map_sgtable(adev->dev, ttm->sg, direction, 0);
+ if (r)
+- goto release_sg;
++ goto release_sg_table;
+
+ /* convert SG to linear array of pages and dma addresses */
+ drm_prime_sg_to_dma_addr_array(ttm->sg, gtt->ttm.dma_address,
+@@ -820,6 +820,8 @@ static int amdgpu_ttm_tt_pin_userptr(struct ttm_device *bdev,
+
+ return 0;
+
++release_sg_table:
++ sg_free_table(ttm->sg);
+ release_sg:
+ kfree(ttm->sg);
+ ttm->sg = NULL;
+@@ -1849,6 +1851,7 @@ int amdgpu_ttm_init(struct amdgpu_device *adev)
+
+ mutex_init(&adev->mman.gtt_window_lock);
+
++ dma_set_max_seg_size(adev->dev, UINT_MAX);
+ /* No others user of address space so set it to 0 */
+ r = ttm_device_init(&adev->mman.bdev, &amdgpu_bo_driver, adev->dev,
+ adev_to_drm(adev)->anon_inode->i_mapping,
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 785a343a95f0ff..e7cd51c95141e1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -2223,6 +2223,18 @@ static int gfx_v9_0_sw_init(void *handle)
+ }
+
+ switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
++ case IP_VERSION(9, 4, 2):
++ adev->gfx.cleaner_shader_ptr = gfx_9_4_2_cleaner_shader_hex;
++ adev->gfx.cleaner_shader_size = sizeof(gfx_9_4_2_cleaner_shader_hex);
++ if (adev->gfx.mec_fw_version >= 88) {
++ adev->gfx.enable_cleaner_shader = true;
++ r = amdgpu_gfx_cleaner_shader_sw_init(adev, adev->gfx.cleaner_shader_size);
++ if (r) {
++ adev->gfx.enable_cleaner_shader = false;
++ dev_err(adev->dev, "Failed to initialize cleaner shader\n");
++ }
++ }
++ break;
+ default:
+ adev->gfx.enable_cleaner_shader = false;
+ break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0_cleaner_shader.h b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0_cleaner_shader.h
+index 36c0292b511067..0b6bd09b752993 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0_cleaner_shader.h
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0_cleaner_shader.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: MIT */
+ /*
+- * Copyright 2018 Advanced Micro Devices, Inc.
++ * Copyright 2024 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+@@ -24,3 +24,45 @@
+ static const u32 __maybe_unused gfx_9_0_cleaner_shader_hex[] = {
+ /* Add the cleaner shader code here */
+ };
++
++/* Define the cleaner shader gfx_9_4_2 */
++static const u32 gfx_9_4_2_cleaner_shader_hex[] = {
++ 0xbf068100, 0xbf84003b,
++ 0xbf8a0000, 0xb07c0000,
++ 0xbe8200ff, 0x00000078,
++ 0xbf110802, 0x7e000280,
++ 0x7e020280, 0x7e040280,
++ 0x7e060280, 0x7e080280,
++ 0x7e0a0280, 0x7e0c0280,
++ 0x7e0e0280, 0x80828802,
++ 0xbe803202, 0xbf84fff5,
++ 0xbf9c0000, 0xbe8200ff,
++ 0x80000000, 0x86020102,
++ 0xbf840011, 0xbefe00c1,
++ 0xbeff00c1, 0xd28c0001,
++ 0x0001007f, 0xd28d0001,
++ 0x0002027e, 0x10020288,
++ 0xbe8200bf, 0xbefc00c1,
++ 0xd89c2000, 0x00020201,
++ 0xd89c6040, 0x00040401,
++ 0x320202ff, 0x00000400,
++ 0x80828102, 0xbf84fff8,
++ 0xbefc00ff, 0x0000005c,
++ 0xbf800000, 0xbe802c80,
++ 0xbe812c80, 0xbe822c80,
++ 0xbe832c80, 0x80fc847c,
++ 0xbf84fffa, 0xbee60080,
++ 0xbee70080, 0xbeea0180,
++ 0xbeec0180, 0xbeee0180,
++ 0xbef00180, 0xbef20180,
++ 0xbef40180, 0xbef60180,
++ 0xbef80180, 0xbefa0180,
++ 0xbf810000, 0xbf8d0001,
++ 0xbefc00ff, 0x0000005c,
++ 0xbf800000, 0xbe802c80,
++ 0xbe812c80, 0xbe822c80,
++ 0xbe832c80, 0x80fc847c,
++ 0xbf84fffa, 0xbee60080,
++ 0xbee70080, 0xbeea01ff,
++ 0x000000ee, 0xbf810000,
++};
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2_cleaner_shader.asm b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2_cleaner_shader.asm
+new file mode 100644
+index 00000000000000..35b8cf9070bd98
+--- /dev/null
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_2_cleaner_shader.asm
+@@ -0,0 +1,153 @@
++/* SPDX-License-Identifier: MIT */
++/*
++ * Copyright 2024 Advanced Micro Devices, Inc.
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be included in
++ * all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
++ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
++ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
++ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
++ * OTHER DEALINGS IN THE SOFTWARE.
++ */
++
++// This shader cleans LDS, SGPRs and VGPRs. It is the first 64 Dwords (256 bytes) of the 192-Dword cleaner shader.
++// To turn this shader program on for compilation, change this to main and lower shader main to main_1
++
++// MI200 : Clear SGPRs, VGPRs and LDS
++// Uses two kernels launched separately:
++// 1. Clean VGPRs, LDS, and lower SGPRs
++// Launches one workgroup per CU, each workgroup with 4x wave64 per SIMD in the CU
++// Waves are "wave64" and have 128 VGPRs each, which uses all 512 VGPRs per SIMD
++// Waves in the workgroup share the 64KB of LDS
++// Each wave clears SGPRs 0 - 95. Because there are 4 waves/SIMD, this is physical SGPRs 0-383
++// Each wave clears 128 VGPRs, so all 512 in the SIMD
++// The first wave of the workgroup clears its 64KB of LDS
++// The shader starts with "S_BARRIER" to ensure SPI has launched all waves of the workgroup
++// before any wave in the workgroup could end. Without this, it is possible not all SGPRs get cleared.
++// 2. Clean remaining SGPRs
++// Launches a workgroup with 24 waves per workgroup, yielding 6 waves per SIMD in each CU
++// Waves are allocating 96 SGPRs
++// CP sets up SPI_RESOURCE_RESERVE_* registers to prevent these waves from allocating SGPRs 0-223.
++// As such, these 6 waves per SIMD are allocated physical SGPRs 224-799
++// Barriers do not work for >16 waves per workgroup, so we cannot start with S_BARRIER
++// Instead, the shader starts with an S_SETHALT 1. Once all waves are launched CP will send unhalt command
++// The shader then clears all SGPRs allocated to it, cleaning out physical SGPRs 224-799
++
++shader main
++ asic(MI200)
++ type(CS)
++ wave_size(64)
++// Note: original source code from SQ team
++
++// (theoretical fastest = ~512clks vgpr + 1536 lds + ~128 sgpr = 2176 clks)
++
++ s_cmp_eq_u32 s0, 1 // If bit0 of sgpr0 is set, clear VGPRs and LDS; FW sets it via COMPUTE_USER_DATA_3
++ s_cbranch_scc0 label_0023 // Clean VGPRs and LDS if sgpr0 of the wave is set, scc = (s0 == 1)
++ S_BARRIER
++
++ s_movk_i32 m0, 0x0000
++ s_mov_b32 s2, 0x00000078 // Loop 128/8=16 times (loop unrolled for performance)
++ //
++ // CLEAR VGPRs
++ //
++ s_set_gpr_idx_on s2, 0x8 // enable Dest VGPR indexing
++label_0005:
++ v_mov_b32 v0, 0
++ v_mov_b32 v1, 0
++ v_mov_b32 v2, 0
++ v_mov_b32 v3, 0
++ v_mov_b32 v4, 0
++ v_mov_b32 v5, 0
++ v_mov_b32 v6, 0
++ v_mov_b32 v7, 0
++ s_sub_u32 s2, s2, 8
++ s_set_gpr_idx_idx s2
++ s_cbranch_scc0 label_0005
++ s_set_gpr_idx_off
++
++ //
++ //
++
++ s_mov_b32 s2, 0x80000000 // Bit31 is first_wave
++ s_and_b32 s2, s2, s1 // sgpr0 has tg_size (first_wave) term as in ucode only COMPUTE_PGM_RSRC2.tg_size_en is set
++ s_cbranch_scc0 label_clean_sgpr_1 // Clean LDS if its first wave of ThreadGroup/WorkGroup
++ // CLEAR LDS
++ //
++ s_mov_b32 exec_lo, 0xffffffff
++ s_mov_b32 exec_hi, 0xffffffff
++ v_mbcnt_lo_u32_b32 v1, exec_hi, 0 // Set V1 to thread-ID (0..63)
++ v_mbcnt_hi_u32_b32 v1, exec_lo, v1 // Set V1 to thread-ID (0..63)
++ v_mul_u32_u24 v1, 0x00000008, v1 // * 8, so each thread is a double-dword address (8byte)
++ s_mov_b32 s2, 0x00000003f // 64 loop iterations
++ s_mov_b32 m0, 0xffffffff
++ // Clear all of LDS space
++ // Each FirstWave of WorkGroup clears 64kbyte block
++
++label_001F:
++ ds_write2_b64 v1, v[2:3], v[2:3] offset1:32
++ ds_write2_b64 v1, v[4:5], v[4:5] offset0:64 offset1:96
++ v_add_co_u32 v1, vcc, 0x00000400, v1
++ s_sub_u32 s2, s2, 1
++ s_cbranch_scc0 label_001F
++ //
++ // CLEAR SGPRs
++ //
++label_clean_sgpr_1:
++ s_mov_b32 m0, 0x0000005c // Loop 96/4=24 times (loop unrolled for performance)
++ s_nop 0
++label_sgpr_loop:
++ s_movreld_b32 s0, 0
++ s_movreld_b32 s1, 0
++ s_movreld_b32 s2, 0
++ s_movreld_b32 s3, 0
++ s_sub_u32 m0, m0, 4
++ s_cbranch_scc0 label_sgpr_loop
++
++ //clear vcc, flat scratch
++ s_mov_b32 flat_scratch_lo, 0 //clear flat scratch lo SGPR
++ s_mov_b32 flat_scratch_hi, 0 //clear flat scratch hi SGPR
++ s_mov_b64 vcc, 0 //clear vcc
++ s_mov_b64 ttmp0, 0 //Clear ttmp0 and ttmp1
++ s_mov_b64 ttmp2, 0 //Clear ttmp2 and ttmp3
++ s_mov_b64 ttmp4, 0 //Clear ttmp4 and ttmp5
++ s_mov_b64 ttmp6, 0 //Clear ttmp6 and ttmp7
++ s_mov_b64 ttmp8, 0 //Clear ttmp8 and ttmp9
++ s_mov_b64 ttmp10, 0 //Clear ttmp10 and ttmp11
++ s_mov_b64 ttmp12, 0 //Clear ttmp12 and ttmp13
++ s_mov_b64 ttmp14, 0 //Clear ttmp14 and ttmp15
++s_endpgm
++
++label_0023:
++
++ s_sethalt 1
++
++ s_mov_b32 m0, 0x0000005c // Loop 96/4=24 times (loop unrolled for performance)
++ s_nop 0
++label_sgpr_loop1:
++
++ s_movreld_b32 s0, 0
++ s_movreld_b32 s1, 0
++ s_movreld_b32 s2, 0
++ s_movreld_b32 s3, 0
++ s_sub_u32 m0, m0, 4
++ s_cbranch_scc0 label_sgpr_loop1
++
++ //clear vcc, flat scratch
++ s_mov_b32 flat_scratch_lo, 0 //clear flat scratch lo SGPR
++ s_mov_b32 flat_scratch_hi, 0 //clear flat scratch hi SGPR
++ s_mov_b64 vcc, 0xee //clear vcc
++
++s_endpgm
++end
++
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v4_0.c b/drivers/gpu/drm/amd/amdgpu/hdp_v4_0.c
+index e019249883fb2f..194026e9be3331 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v4_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v4_0.c
+@@ -40,10 +40,12 @@
+ static void hdp_v4_0_flush_hdp(struct amdgpu_device *adev,
+ struct amdgpu_ring *ring)
+ {
+- if (!ring || !ring->funcs->emit_wreg)
++ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+- else
++ RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ } else {
+ amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
++ }
+ }
+
+ static void hdp_v4_0_invalidate_hdp(struct amdgpu_device *adev,
+@@ -54,11 +56,13 @@ static void hdp_v4_0_invalidate_hdp(struct amdgpu_device *adev,
+ amdgpu_ip_version(adev, HDP_HWIP, 0) == IP_VERSION(4, 4, 5))
+ return;
+
+- if (!ring || !ring->funcs->emit_wreg)
++ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32_SOC15_NO_KIQ(HDP, 0, mmHDP_READ_CACHE_INVALIDATE, 1);
+- else
++ RREG32_SOC15_NO_KIQ(HDP, 0, mmHDP_READ_CACHE_INVALIDATE);
++ } else {
+ amdgpu_ring_emit_wreg(ring, SOC15_REG_OFFSET(
+ HDP, 0, mmHDP_READ_CACHE_INVALIDATE), 1);
++ }
+ }
+
+ static void hdp_v4_0_query_ras_error_count(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v5_0.c b/drivers/gpu/drm/amd/amdgpu/hdp_v5_0.c
+index ed7facacf2fe30..d3962d46908811 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v5_0.c
+@@ -31,10 +31,12 @@
+ static void hdp_v5_0_flush_hdp(struct amdgpu_device *adev,
+ struct amdgpu_ring *ring)
+ {
+- if (!ring || !ring->funcs->emit_wreg)
++ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+- else
++ RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ } else {
+ amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
++ }
+ }
+
+ static void hdp_v5_0_invalidate_hdp(struct amdgpu_device *adev,
+@@ -42,6 +44,7 @@ static void hdp_v5_0_invalidate_hdp(struct amdgpu_device *adev,
+ {
+ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32_SOC15_NO_KIQ(HDP, 0, mmHDP_READ_CACHE_INVALIDATE, 1);
++ RREG32_SOC15_NO_KIQ(HDP, 0, mmHDP_READ_CACHE_INVALIDATE);
+ } else {
+ amdgpu_ring_emit_wreg(ring, SOC15_REG_OFFSET(
+ HDP, 0, mmHDP_READ_CACHE_INVALIDATE), 1);
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v5_2.c b/drivers/gpu/drm/amd/amdgpu/hdp_v5_2.c
+index 29c3484ae1f166..f52552c5fa27b6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v5_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v5_2.c
+@@ -31,13 +31,15 @@
+ static void hdp_v5_2_flush_hdp(struct amdgpu_device *adev,
+ struct amdgpu_ring *ring)
+ {
+- if (!ring || !ring->funcs->emit_wreg)
++ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32_NO_KIQ((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2,
+ 0);
+- else
++ RREG32_NO_KIQ((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ } else {
+ amdgpu_ring_emit_wreg(ring,
+ (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2,
+ 0);
++ }
+ }
+
+ static void hdp_v5_2_update_mem_power_gating(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v6_0.c b/drivers/gpu/drm/amd/amdgpu/hdp_v6_0.c
+index 33736d361dd0bc..6948fe9956ce47 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v6_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v6_0.c
+@@ -34,10 +34,12 @@
+ static void hdp_v6_0_flush_hdp(struct amdgpu_device *adev,
+ struct amdgpu_ring *ring)
+ {
+- if (!ring || !ring->funcs->emit_wreg)
++ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+- else
++ RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ } else {
+ amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
++ }
+ }
+
+ static void hdp_v6_0_update_clock_gating(struct amdgpu_device *adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/hdp_v7_0.c b/drivers/gpu/drm/amd/amdgpu/hdp_v7_0.c
+index 1c99bb09e2a129..63820329f67eb6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/hdp_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/hdp_v7_0.c
+@@ -31,10 +31,12 @@
+ static void hdp_v7_0_flush_hdp(struct amdgpu_device *adev,
+ struct amdgpu_ring *ring)
+ {
+- if (!ring || !ring->funcs->emit_wreg)
++ if (!ring || !ring->funcs->emit_wreg) {
+ WREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
+- else
++ RREG32((adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2);
++ } else {
+ amdgpu_ring_emit_wreg(ring, (adev->rmmio_remap.reg_offset + KFD_MMIO_REMAP_HDP_MEM_FLUSH_CNTL) >> 2, 0);
++ }
+ }
+
+ static void hdp_v7_0_update_clock_gating(struct amdgpu_device *adev,
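All five HDP variants above gain the same fix: a register read immediately after the flush write. PCIe writes are posted, so reading back from the same region is the conventional way to force the write out of any intermediate write buffer before execution continues. A minimal C illustration of the pattern — the MMIO plumbing is faked with a static variable instead of a real mapping:

    #include <stdint.h>

    static volatile uint32_t fake_reg;   /* stands in for the mapped MMIO register */

    static void flush_hdp(volatile uint32_t *flush_cntl)
    {
        *flush_cntl = 0;          /* WREG32(): posted, may sit in a write buffer */
        (void)*flush_cntl;        /* RREG32(): the read forces the write to land */
    }

    int main(void)
    {
        flush_hdp(&fake_reg);
        return 0;
    }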
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+index 0fda703363004f..6fca2915ea8fd5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+@@ -116,6 +116,20 @@ static int vcn_v4_0_3_early_init(void *handle)
+ return amdgpu_vcn_early_init(adev);
+ }
+
++static int vcn_v4_0_3_fw_shared_init(struct amdgpu_device *adev, int inst_idx)
++{
++ struct amdgpu_vcn4_fw_shared *fw_shared;
++
++ fw_shared = adev->vcn.inst[inst_idx].fw_shared.cpu_addr;
++ fw_shared->present_flag_0 = cpu_to_le32(AMDGPU_FW_SHARED_FLAG_0_UNIFIED_QUEUE);
++ fw_shared->sq.is_enabled = 1;
++
++ if (amdgpu_vcnfw_log)
++ amdgpu_vcn_fwlog_init(&adev->vcn.inst[inst_idx]);
++
++ return 0;
++}
++
+ /**
+ * vcn_v4_0_3_sw_init - sw init for VCN block
+ *
+@@ -148,8 +162,6 @@ static int vcn_v4_0_3_sw_init(void *handle)
+ return r;
+
+ for (i = 0; i < adev->vcn.num_vcn_inst; i++) {
+- volatile struct amdgpu_vcn4_fw_shared *fw_shared;
+-
+ vcn_inst = GET_INST(VCN, i);
+
+ ring = &adev->vcn.inst[i].ring_enc[0];
+@@ -172,12 +184,7 @@ static int vcn_v4_0_3_sw_init(void *handle)
+ if (r)
+ return r;
+
+- fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
+- fw_shared->present_flag_0 = cpu_to_le32(AMDGPU_FW_SHARED_FLAG_0_UNIFIED_QUEUE);
+- fw_shared->sq.is_enabled = true;
+-
+- if (amdgpu_vcnfw_log)
+- amdgpu_vcn_fwlog_init(&adev->vcn.inst[i]);
++ vcn_v4_0_3_fw_shared_init(adev, i);
+ }
+
+ if (amdgpu_sriov_vf(adev)) {
+@@ -273,6 +280,8 @@ static int vcn_v4_0_3_hw_init(void *handle)
+ }
+ } else {
+ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
++ struct amdgpu_vcn4_fw_shared *fw_shared;
++
+ vcn_inst = GET_INST(VCN, i);
+ ring = &adev->vcn.inst[i].ring_enc[0];
+
+@@ -296,6 +305,11 @@ static int vcn_v4_0_3_hw_init(void *handle)
+ regVCN_RB1_DB_CTRL);
+ }
+
++ /* Re-init fw_shared when RAS fatal error occurred */
++ fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr;
++ if (!fw_shared->sq.is_enabled)
++ vcn_v4_0_3_fw_shared_init(adev, i);
++
+ r = amdgpu_ring_test_helper(ring);
+ if (r)
+ return r;
+diff --git a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
+index ac439f0565e357..16f5561fb86ec5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
++++ b/drivers/gpu/drm/amd/amdgpu/vega20_ih.c
+@@ -114,6 +114,33 @@ static int vega20_ih_toggle_ring_interrupts(struct amdgpu_device *adev,
+ tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, RB_ENABLE, (enable ? 1 : 0));
+ tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, RB_GPU_TS_ENABLE, 1);
+
++ if (enable) {
++ /* Unset the CLEAR_OVERFLOW bit to make sure the next step
++ * is switching the bit from 0 to 1
++ */
++ tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++ if (amdgpu_sriov_vf(adev) && amdgpu_sriov_reg_indirect_ih(adev)) {
++ if (psp_reg_program(&adev->psp, ih_regs->psp_reg_id, tmp))
++ return -ETIMEDOUT;
++ } else {
++ WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
++ }
++
++ /* Clear RB_OVERFLOW bit */
++ tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 1);
++ if (amdgpu_sriov_vf(adev) && amdgpu_sriov_reg_indirect_ih(adev)) {
++ if (psp_reg_program(&adev->psp, ih_regs->psp_reg_id, tmp))
++ return -ETIMEDOUT;
++ } else {
++ WREG32_NO_KIQ(ih_regs->ih_rb_cntl, tmp);
++ }
++
++ /* Unset the CLEAR_OVERFLOW bit immediately so new overflows
++ * can be detected.
++ */
++ tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, WPTR_OVERFLOW_CLEAR, 0);
++ }
++
+ /* enable_intr field is only valid in ring0 */
+ if (ih == &adev->irq.ih)
+ tmp = REG_SET_FIELD(tmp, IH_RB_CNTL, ENABLE_INTR, (enable ? 1 : 0));
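The comments in the hunk above spell out why the bit is written three times: WPTR_OVERFLOW_CLEAR latches on a 0 -> 1 transition, so the sequence must lower the bit, raise it to perform the clear, then lower it again so the next overflow can latch. A tiny standalone C model of the edge-triggered clear — the bit position and register plumbing are made up for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define WPTR_OVERFLOW_CLEAR (1u << 31)   /* illustrative bit position */

    static uint32_t rb_cntl_shadow;

    static void write_rb_cntl(uint32_t val)   /* stand-in for WREG32_NO_KIQ() */
    {
        rb_cntl_shadow = val;
    }

    /* force the bit low, raise it to clear the overflow, then drop it
     * again so future overflows stay detectable */
    static void clear_rb_overflow(uint32_t tmp)
    {
        write_rb_cntl(tmp & ~WPTR_OVERFLOW_CLEAR);
        write_rb_cntl(tmp | WPTR_OVERFLOW_CLEAR);
        write_rb_cntl(tmp & ~WPTR_OVERFLOW_CLEAR);
    }

    int main(void)
    {
        clear_rb_overflow(0);
        printf("0x%08x\n", rb_cntl_shadow);
        return 0;
    }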
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+index 48caecf7e72ed1..8de61cc524c943 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+@@ -1509,6 +1509,8 @@ static int kfd_fill_gpu_cache_info_from_gfx_config_v2(struct kfd_dev *kdev,
+ if (adev->gfx.config.gc_tcp_size_per_cu) {
+ pcache_info[i].cache_size = adev->gfx.config.gc_tcp_size_per_cu;
+ pcache_info[i].cache_level = 1;
++ /* Cacheline size not available in IP discovery for gc943, gc944 */
++ pcache_info[i].cache_line_size = 128;
+ pcache_info[i].flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+@@ -1520,6 +1522,7 @@ static int kfd_fill_gpu_cache_info_from_gfx_config_v2(struct kfd_dev *kdev,
+ pcache_info[i].cache_size =
+ adev->gfx.config.gc_l1_instruction_cache_size_per_sqc;
+ pcache_info[i].cache_level = 1;
++ pcache_info[i].cache_line_size = 64;
+ pcache_info[i].flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_INST_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+@@ -1530,6 +1533,7 @@ static int kfd_fill_gpu_cache_info_from_gfx_config_v2(struct kfd_dev *kdev,
+ if (adev->gfx.config.gc_l1_data_cache_size_per_sqc) {
+ pcache_info[i].cache_size = adev->gfx.config.gc_l1_data_cache_size_per_sqc;
+ pcache_info[i].cache_level = 1;
++ pcache_info[i].cache_line_size = 64;
+ pcache_info[i].flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+@@ -1540,6 +1544,7 @@ static int kfd_fill_gpu_cache_info_from_gfx_config_v2(struct kfd_dev *kdev,
+ if (adev->gfx.config.gc_tcc_size) {
+ pcache_info[i].cache_size = adev->gfx.config.gc_tcc_size;
+ pcache_info[i].cache_level = 2;
++ pcache_info[i].cache_line_size = 128;
+ pcache_info[i].flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+@@ -1550,6 +1555,7 @@ static int kfd_fill_gpu_cache_info_from_gfx_config_v2(struct kfd_dev *kdev,
+ if (adev->gmc.mall_size) {
+ pcache_info[i].cache_size = adev->gmc.mall_size / 1024;
+ pcache_info[i].cache_level = 3;
++ pcache_info[i].cache_line_size = 64;
+ pcache_info[i].flags = (CRAT_CACHE_FLAGS_ENABLED |
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index fad1c8f2bc8334..b05be24531e187 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -235,6 +235,9 @@ static void kfd_device_info_init(struct kfd_dev *kfd,
+ */
+ kfd->device_info.needs_pci_atomics = true;
+ kfd->device_info.no_atomic_fw_version = kfd->adev->gfx.rs64_enable ? 509 : 0;
++ } else if (gc_version < IP_VERSION(13, 0, 0)) {
++ kfd->device_info.needs_pci_atomics = true;
++ kfd->device_info.no_atomic_fw_version = 2090;
+ } else {
+ kfd->device_info.needs_pci_atomics = true;
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 24fbde7dd1c425..ad3a3aa72b51f3 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1910,7 +1910,11 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ else
+ init_data.flags.gpu_vm_support = (amdgpu_sg_display != 0);
+ } else {
+- init_data.flags.gpu_vm_support = (amdgpu_sg_display != 0) && (adev->flags & AMD_IS_APU);
++ if (amdgpu_ip_version(adev, DCE_HWIP, 0) == IP_VERSION(2, 0, 3))
++ init_data.flags.gpu_vm_support = (amdgpu_sg_display == 1);
++ else
++ init_data.flags.gpu_vm_support =
++ (amdgpu_sg_display != 0) && (adev->flags & AMD_IS_APU);
+ }
+
+ adev->mode_info.gpu_vm_support = init_data.flags.gpu_vm_support;
+@@ -7337,10 +7341,15 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ const struct drm_connector_state *drm_state = dm_state ? &dm_state->base : NULL;
+ int requested_bpc = drm_state ? drm_state->max_requested_bpc : 8;
+ enum dc_status dc_result = DC_OK;
++ uint8_t bpc_limit = 6;
+
+ if (!dm_state)
+ return NULL;
+
++ if (aconnector->dc_link->connector_signal == SIGNAL_TYPE_HDMI_TYPE_A ||
++ aconnector->dc_link->dpcd_caps.dongle_type == DISPLAY_DONGLE_DP_HDMI_CONVERTER)
++ bpc_limit = 8;
++
+ do {
+ stream = create_stream_for_sink(connector, drm_mode,
+ dm_state, old_stream,
+@@ -7361,11 +7370,12 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ dc_result = dm_validate_stream_and_context(adev->dm.dc, stream);
+
+ if (dc_result != DC_OK) {
+- DRM_DEBUG_KMS("Mode %dx%d (clk %d) failed DC validation with error %d (%s)\n",
++ DRM_DEBUG_KMS("Mode %dx%d (clk %d) pixel_encoding:%s color_depth:%s failed validation -- %s\n",
+ drm_mode->hdisplay,
+ drm_mode->vdisplay,
+ drm_mode->clock,
+- dc_result,
++ dc_pixel_encoding_to_str(stream->timing.pixel_encoding),
++ dc_color_depth_to_str(stream->timing.display_color_depth),
+ dc_status_to_str(dc_result));
+
+ dc_stream_release(stream);
+@@ -7373,10 +7383,13 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector,
+ requested_bpc -= 2; /* lower bpc to retry validation */
+ }
+
+- } while (stream == NULL && requested_bpc >= 6);
++ } while (stream == NULL && requested_bpc >= bpc_limit);
+
+- if (dc_result == DC_FAIL_ENC_VALIDATE && !aconnector->force_yuv420_output) {
+- DRM_DEBUG_KMS("Retry forcing YCbCr420 encoding\n");
++ if ((dc_result == DC_FAIL_ENC_VALIDATE ||
++ dc_result == DC_EXCEED_DONGLE_CAP) &&
++ !aconnector->force_yuv420_output) {
++ DRM_DEBUG_KMS("%s:%d Retry forcing yuv420 encoding\n",
++ __func__, __LINE__);
+
+ aconnector->force_yuv420_output = true;
+ stream = create_validate_stream_for_sink(aconnector, drm_mode,
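The validation loop now bottoms out at 8 bpc for HDMI sinks and DP-to-HDMI dongles instead of 6, and a dongle-capacity failure (DC_EXCEED_DONGLE_CAP) also triggers the forced-YUV420 retry. A compact C model of the retry-with-degraded-bpc loop — the validate() stub stands in for DC's stream validation:

    #include <stdbool.h>
    #include <stdio.h>

    static bool validate(int bpc) { return bpc <= 8; } /* pretend the sink tops out at 8 bpc */

    int main(void)
    {
        int requested_bpc = 12;
        int bpc_limit = 8;        /* 8 for HDMI / DP-to-HDMI dongles, 6 otherwise */
        bool ok = false;

        while (!ok && requested_bpc >= bpc_limit) {
            ok = validate(requested_bpc);
            if (!ok)
                requested_bpc -= 2;  /* lower bpc and retry, as the loop above does */
        }
        printf(ok ? "validated at %d bpc\n" : "fall back to YUV420 (limit %d)\n",
               ok ? requested_bpc : bpc_limit);
        return 0;
    }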
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+index b46a3afe48ca7c..3bd0d46c170109 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+@@ -132,6 +132,8 @@ static void dcn35_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *
+ for (i = 0; i < dc->res_pool->pipe_count; ++i) {
+ struct pipe_ctx *old_pipe = &dc->current_state->res_ctx.pipe_ctx[i];
+ struct pipe_ctx *new_pipe = &context->res_ctx.pipe_ctx[i];
++ struct clk_mgr_internal *clk_mgr_internal = TO_CLK_MGR_INTERNAL(clk_mgr_base);
++ struct dccg *dccg = clk_mgr_internal->dccg;
+ struct pipe_ctx *pipe = safe_to_lower
+ ? &context->res_ctx.pipe_ctx[i]
+ : &dc->current_state->res_ctx.pipe_ctx[i];
+@@ -148,8 +150,13 @@ static void dcn35_disable_otg_wa(struct clk_mgr *clk_mgr_base, struct dc_state *
+ new_pipe->stream_res.stream_enc &&
+ new_pipe->stream_res.stream_enc->funcs->is_fifo_enabled &&
+ new_pipe->stream_res.stream_enc->funcs->is_fifo_enabled(new_pipe->stream_res.stream_enc);
+- if (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal) ||
+- !pipe->stream->link_enc) && !stream_changed_otg_dig_on) {
++ bool has_active_hpo = dccg->ctx->dc->link_srv->dp_is_128b_132b_signal(old_pipe) && dccg->ctx->dc->link_srv->dp_is_128b_132b_signal(new_pipe);
++
++ if (!has_active_hpo && !dccg->ctx->dc->link_srv->dp_is_128b_132b_signal(pipe) &&
++ (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal) ||
++ !pipe->stream->link_enc) && !stream_changed_otg_dig_on)) {
++
++
+ /* This w/a should not trigger when we have a dig active */
+ if (disable) {
+ if (pipe->stream_res.tg && pipe->stream_res.tg->funcs->immediate_disable_crtc)
+@@ -257,11 +264,11 @@ static void dcn35_notify_host_router_bw(struct clk_mgr *clk_mgr_base, struct dc_
+ struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+ uint32_t host_router_bw_kbps[MAX_HOST_ROUTERS_NUM] = { 0 };
+ int i;
+-
+ for (i = 0; i < context->stream_count; ++i) {
+ const struct dc_stream_state *stream = context->streams[i];
+ const struct dc_link *link = stream->link;
+- uint8_t lowest_dpia_index = 0, hr_index = 0;
++ uint8_t lowest_dpia_index = 0;
++ unsigned int hr_index = 0;
+
+ if (!link)
+ continue;
+@@ -271,6 +278,8 @@ static void dcn35_notify_host_router_bw(struct clk_mgr *clk_mgr_base, struct dc_
+ continue;
+
+ hr_index = (link->link_index - lowest_dpia_index) / 2;
++ if (hr_index >= MAX_HOST_ROUTERS_NUM)
++ continue;
+ host_router_bw_kbps[hr_index] += dc_bandwidth_in_kbps_from_timing(
+ &stream->timing, dc_link_get_highest_encoding_format(link));
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index a6911bb2cf0c6c..9f570d447c2099 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -6006,3 +6006,21 @@ struct dc_power_profile dc_get_power_profile_for_dc_state(const struct dc_state
+
+ return profile;
+ }
++
++/*
++ **********************************************************************************
++ * dc_get_det_buffer_size_from_state() - extracts detile buffer size from dc state
++ *
++ * Called when DM wants to log detile buffer size from dc_state
++ *
++ **********************************************************************************
++ */
++unsigned int dc_get_det_buffer_size_from_state(const struct dc_state *context)
++{
++ struct dc *dc = context->clk_mgr->ctx->dc;
++
++ if (dc->res_pool->funcs->get_det_buffer_size)
++ return dc->res_pool->funcs->get_det_buffer_size(context);
++ else
++ return 0;
++}
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_debug.c b/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
+index 801cdbc8117d9b..e255c204b7e855 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_debug.c
+@@ -434,3 +434,43 @@ char *dc_status_to_str(enum dc_status status)
+
+ return "Unexpected status error";
+ }
++
++char *dc_pixel_encoding_to_str(enum dc_pixel_encoding pixel_encoding)
++{
++ switch (pixel_encoding) {
++ case PIXEL_ENCODING_RGB:
++ return "RGB";
++ case PIXEL_ENCODING_YCBCR422:
++ return "YUV422";
++ case PIXEL_ENCODING_YCBCR444:
++ return "YUV444";
++ case PIXEL_ENCODING_YCBCR420:
++ return "YUV420";
++ default:
++ return "Unknown";
++ }
++}
++
++char *dc_color_depth_to_str(enum dc_color_depth color_depth)
++{
++ switch (color_depth) {
++ case COLOR_DEPTH_666:
++ return "6-bpc";
++ case COLOR_DEPTH_888:
++ return "8-bpc";
++ case COLOR_DEPTH_101010:
++ return "10-bpc";
++ case COLOR_DEPTH_121212:
++ return "12-bpc";
++ case COLOR_DEPTH_141414:
++ return "14-bpc";
++ case COLOR_DEPTH_161616:
++ return "16-bpc";
++ case COLOR_DEPTH_999:
++ return "9-bpc";
++ case COLOR_DEPTH_111111:
++ return "11-bpc";
++ default:
++ return "Unknown";
++ }
++}
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index c7599c40d4be38..d915020a429582 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -765,25 +765,6 @@ static inline void get_vp_scan_direction(
+ *flip_horz_scan_dir = !*flip_horz_scan_dir;
+ }
+
+-/*
+- * This is a preliminary vp size calculation to allow us to check taps support.
+- * The result is completely overridden afterwards.
+- */
+-static void calculate_viewport_size(struct pipe_ctx *pipe_ctx)
+-{
+- struct scaler_data *data = &pipe_ctx->plane_res.scl_data;
+-
+- data->viewport.width = dc_fixpt_ceil(dc_fixpt_mul_int(data->ratios.horz, data->recout.width));
+- data->viewport.height = dc_fixpt_ceil(dc_fixpt_mul_int(data->ratios.vert, data->recout.height));
+- data->viewport_c.width = dc_fixpt_ceil(dc_fixpt_mul_int(data->ratios.horz_c, data->recout.width));
+- data->viewport_c.height = dc_fixpt_ceil(dc_fixpt_mul_int(data->ratios.vert_c, data->recout.height));
+- if (pipe_ctx->plane_state->rotation == ROTATION_ANGLE_90 ||
+- pipe_ctx->plane_state->rotation == ROTATION_ANGLE_270) {
+- swap(data->viewport.width, data->viewport.height);
+- swap(data->viewport_c.width, data->viewport_c.height);
+- }
+-}
+-
+ static struct rect intersect_rec(const struct rect *r0, const struct rect *r1)
+ {
+ struct rect rec;
+@@ -1468,6 +1449,7 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+ const struct dc_plane_state *plane_state = pipe_ctx->plane_state;
+ struct dc_crtc_timing *timing = &pipe_ctx->stream->timing;
+ const struct rect odm_slice_src = resource_get_odm_slice_src_rect(pipe_ctx);
++ struct scaling_taps temp = {0};
+ bool res = false;
+
+ DC_LOGGER_INIT(pipe_ctx->stream->ctx->logger);
+@@ -1519,14 +1501,16 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+ res = spl_calculate_scaler_params(spl_in, spl_out);
+ // Convert respective out params from SPL to scaler data
+ translate_SPL_out_params_to_pipe_ctx(pipe_ctx, spl_out);
++
++ /* Ignore scaler failure if pipe context plane is phantom plane */
++ if (!res && plane_state->is_phantom)
++ res = true;
+ } else {
+ #endif
+ /* depends on h_active */
+ calculate_recout(pipe_ctx);
+ /* depends on pixel format */
+ calculate_scaling_ratios(pipe_ctx);
+- /* depends on scaling ratios and recout, does not calculate offset yet */
+- calculate_viewport_size(pipe_ctx);
+
+ /*
+ * LB calculations depend on vp size, h/v_active and scaling ratios
+@@ -1547,6 +1531,24 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+
+ pipe_ctx->plane_res.scl_data.lb_params.alpha_en = plane_state->per_pixel_alpha;
+
++ // get TAP value with 100x100 dummy data for max scaling quality; override
++ // if a new scaling quality is required
++ pipe_ctx->plane_res.scl_data.viewport.width = 100;
++ pipe_ctx->plane_res.scl_data.viewport.height = 100;
++ pipe_ctx->plane_res.scl_data.viewport_c.width = 100;
++ pipe_ctx->plane_res.scl_data.viewport_c.height = 100;
++ if (pipe_ctx->plane_res.xfm != NULL)
++ res = pipe_ctx->plane_res.xfm->funcs->transform_get_optimal_number_of_taps(
++ pipe_ctx->plane_res.xfm, &pipe_ctx->plane_res.scl_data, &plane_state->scaling_quality);
++
++ if (pipe_ctx->plane_res.dpp != NULL)
++ res = pipe_ctx->plane_res.dpp->funcs->dpp_get_optimal_number_of_taps(
++ pipe_ctx->plane_res.dpp, &pipe_ctx->plane_res.scl_data, &plane_state->scaling_quality);
++
++ temp = pipe_ctx->plane_res.scl_data.taps;
++
++ calculate_inits_and_viewports(pipe_ctx);
++
+ if (pipe_ctx->plane_res.xfm != NULL)
+ res = pipe_ctx->plane_res.xfm->funcs->transform_get_optimal_number_of_taps(
+ pipe_ctx->plane_res.xfm, &pipe_ctx->plane_res.scl_data, &plane_state->scaling_quality);
+@@ -1573,11 +1575,14 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+ &plane_state->scaling_quality);
+ }
+
+- /*
+- * Depends on recout, scaling ratios, h_active and taps
+- * May need to re-check lb size after this in some obscure scenario
+- */
+- if (res)
++ /* Ignore scaler failure if pipe context plane is phantom plane */
++ if (!res && plane_state->is_phantom)
++ res = true;
++
++ if (res && (pipe_ctx->plane_res.scl_data.taps.v_taps != temp.v_taps ||
++ pipe_ctx->plane_res.scl_data.taps.h_taps != temp.h_taps ||
++ pipe_ctx->plane_res.scl_data.taps.v_taps_c != temp.v_taps_c ||
++ pipe_ctx->plane_res.scl_data.taps.h_taps_c != temp.h_taps_c))
+ calculate_inits_and_viewports(pipe_ctx);
+
+ /*
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+index 9a406d74c0dd76..3d93efdc1026dd 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+@@ -819,12 +819,12 @@ void dc_stream_log(const struct dc *dc, const struct dc_stream_state *stream)
+ stream->dst.height,
+ stream->output_color_space);
+ DC_LOG_DC(
+- "\tpix_clk_khz: %d, h_total: %d, v_total: %d, pixelencoder:%d, displaycolorDepth:%d\n",
++ "\tpix_clk_khz: %d, h_total: %d, v_total: %d, pixel_encoding:%s, color_depth:%s\n",
+ stream->timing.pix_clk_100hz / 10,
+ stream->timing.h_total,
+ stream->timing.v_total,
+- stream->timing.pixel_encoding,
+- stream->timing.display_color_depth);
++ dc_pixel_encoding_to_str(stream->timing.pixel_encoding),
++ dc_color_depth_to_str(stream->timing.display_color_depth));
+ DC_LOG_DC(
+ "\tlink: %d\n",
+ stream->link->link_index);
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 3992ad73165bc6..7c163aa7e8bd2d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -285,6 +285,7 @@ struct dc_caps {
+ uint16_t subvp_vertical_int_margin_us;
+ bool seamless_odm;
+ uint32_t max_v_total;
++ bool vtotal_limited_by_fp2;
+ uint32_t max_disp_clock_khz_at_vmin;
+ uint8_t subvp_drr_vblank_start_margin_us;
+ bool cursor_not_scaled;
+@@ -2543,6 +2544,8 @@ struct dc_power_profile {
+
+ struct dc_power_profile dc_get_power_profile_for_dc_state(const struct dc_state *context);
+
++unsigned int dc_get_det_buffer_size_from_state(const struct dc_state *context);
++
+ /* DSC Interfaces */
+ #include "dc_dsc.h"
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+index 1e7de0f03290a3..ec5009f411eb0d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
++++ b/drivers/gpu/drm/amd/display/dc/dc_dmub_srv.c
+@@ -1294,6 +1294,8 @@ static void dc_dmub_srv_notify_idle(const struct dc *dc, bool allow_idle)
+
+ memset(&new_signals, 0, sizeof(new_signals));
+
++ new_signals.bits.allow_idle = 1; /* always set */
++
+ if (dc->config.disable_ips == DMUB_IPS_ENABLE ||
+ dc->config.disable_ips == DMUB_IPS_DISABLE_DYNAMIC) {
+ new_signals.bits.allow_pg = 1;
+@@ -1389,7 +1391,7 @@ static void dc_dmub_srv_exit_low_power_state(const struct dc *dc)
+ */
+ dc_dmub_srv->needs_idle_wake = false;
+
+- if (prev_driver_signals.bits.allow_ips2 &&
++ if ((prev_driver_signals.bits.allow_ips2 || prev_driver_signals.all == 0) &&
+ (!dc->debug.optimize_ips_handshake ||
+ ips_fw->signals.bits.ips2_commit || !ips_fw->signals.bits.in_idle)) {
+ DC_LOG_IPS(
+@@ -1450,7 +1452,7 @@ static void dc_dmub_srv_exit_low_power_state(const struct dc *dc)
+ }
+
+ dc_dmub_srv_notify_idle(dc, false);
+- if (prev_driver_signals.bits.allow_ips1) {
++ if (prev_driver_signals.bits.allow_ips1 || prev_driver_signals.all == 0) {
+ DC_LOG_IPS(
+ "wait for IPS1 commit clear (ips1_commit=%u ips2_commit=%u)",
+ ips_fw->signals.bits.ips1_commit,
+diff --git a/drivers/gpu/drm/amd/display/dc/dio/dcn314/dcn314_dio_stream_encoder.c b/drivers/gpu/drm/amd/display/dc/dio/dcn314/dcn314_dio_stream_encoder.c
+index 5b343f745cf333..ae81451a3a725c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dio/dcn314/dcn314_dio_stream_encoder.c
++++ b/drivers/gpu/drm/amd/display/dc/dio/dcn314/dcn314_dio_stream_encoder.c
+@@ -83,6 +83,15 @@ void enc314_disable_fifo(struct stream_encoder *enc)
+ REG_UPDATE(DIG_FIFO_CTRL0, DIG_FIFO_ENABLE, 0);
+ }
+
++static bool enc314_is_fifo_enabled(struct stream_encoder *enc)
++{
++ struct dcn10_stream_encoder *enc1 = DCN10STRENC_FROM_STRENC(enc);
++ uint32_t reset_val;
++
++ REG_GET(DIG_FIFO_CTRL0, DIG_FIFO_ENABLE, &reset_val);
++ return (reset_val != 0);
++}
++
+ void enc314_dp_set_odm_combine(
+ struct stream_encoder *enc,
+ bool odm_combine)
+@@ -468,6 +477,7 @@ static const struct stream_encoder_funcs dcn314_str_enc_funcs = {
+
+ .enable_fifo = enc314_enable_fifo,
+ .disable_fifo = enc314_disable_fifo,
++ .is_fifo_enabled = enc314_is_fifo_enabled,
+ .set_input_mode = enc314_set_dig_input_mode,
+ };
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+index d851c081e3768a..8dabb1ac0b684d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+@@ -1222,6 +1222,7 @@ static dml_bool_t CalculatePrefetchSchedule(struct display_mode_lib_scratch_st *
+ s->dst_y_prefetch_oto = s->Tvm_oto_lines + 2 * s->Tr0_oto_lines + s->Lsw_oto;
+
+ s->dst_y_prefetch_equ = p->VStartup - (*p->TSetup + dml_max(p->TWait + p->TCalc, *p->Tdmdl)) / s->LineTime - (*p->DSTYAfterScaler + (dml_float_t) *p->DSTXAfterScaler / (dml_float_t)p->myPipe->HTotal);
++ s->dst_y_prefetch_equ = dml_min(s->dst_y_prefetch_equ, 63.75); // limit to the reg limit of U6.2 for DST_Y_PREFETCH
+
+ #ifdef __DML_VBA_DEBUG__
+ dml_print("DML::%s: HTotal = %u\n", __func__, p->myPipe->HTotal);
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+index 8697eac1e1f7e1..8dee0d397e0322 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+@@ -339,11 +339,22 @@ void dml21_apply_soc_bb_overrides(struct dml2_initialize_instance_in_out *dml_in
+ // }
+ }
+
++static unsigned int calc_max_hardware_v_total(const struct dc_stream_state *stream)
++{
++ unsigned int max_hw_v_total = stream->ctx->dc->caps.max_v_total;
++
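++	/* On ASICs where FP2 limits VTOTAL, reserve the front porch plus one line */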
++ if (stream->ctx->dc->caps.vtotal_limited_by_fp2) {
++ max_hw_v_total -= stream->timing.v_front_porch + 1;
++ }
++
++ return max_hw_v_total;
++}
++
+ static void populate_dml21_timing_config_from_stream_state(struct dml2_timing_cfg *timing,
+ struct dc_stream_state *stream,
+ struct dml2_context *dml_ctx)
+ {
+- unsigned int hblank_start, vblank_start;
++ unsigned int hblank_start, vblank_start, min_hardware_refresh_in_uhz;
+
+ timing->h_active = stream->timing.h_addressable + stream->timing.h_border_left + stream->timing.h_border_right;
+ timing->v_active = stream->timing.v_addressable + stream->timing.v_border_bottom + stream->timing.v_border_top;
+@@ -371,11 +382,23 @@ static void populate_dml21_timing_config_from_stream_state(struct dml2_timing_cf
+ - stream->timing.v_border_top - stream->timing.v_border_bottom;
+
+ timing->drr_config.enabled = stream->ignore_msa_timing_param;
+- timing->drr_config.min_refresh_uhz = stream->timing.min_refresh_in_uhz;
+ timing->drr_config.drr_active_variable = stream->vrr_active_variable;
+ timing->drr_config.drr_active_fixed = stream->vrr_active_fixed;
+ timing->drr_config.disallowed = !stream->allow_freesync;
+
++ /* limit min refresh rate to DC cap */
++ min_hardware_refresh_in_uhz = stream->timing.min_refresh_in_uhz;
++ if (stream->ctx->dc->caps.max_v_total != 0) {
++ min_hardware_refresh_in_uhz = div64_u64((stream->timing.pix_clk_100hz * 100000000ULL),
++ (stream->timing.h_total * (long long)calc_max_hardware_v_total(stream)));
++ }
++
++ if (stream->timing.min_refresh_in_uhz > min_hardware_refresh_in_uhz) {
++ timing->drr_config.min_refresh_uhz = stream->timing.min_refresh_in_uhz;
++ } else {
++ timing->drr_config.min_refresh_uhz = min_hardware_refresh_in_uhz;
++ }
++
+ if (dml_ctx->config.callbacks.get_max_flickerless_instant_vtotal_increase &&
+ stream->ctx->dc->config.enable_fpo_flicker_detection == 1)
+ timing->drr_config.max_instant_vtotal_delta = dml_ctx->config.callbacks.get_max_flickerless_instant_vtotal_increase(stream, false);
+@@ -859,7 +882,7 @@ static void populate_dml21_plane_config_from_plane_state(struct dml2_context *dm
+ plane->immediate_flip = plane_state->flip_immediate;
+
+ plane->composition.rect_out_height_spans_vactive =
+- plane_state->dst_rect.height >= stream->timing.v_addressable &&
++ plane_state->dst_rect.height >= stream->src.height &&
+ stream->dst.height >= stream->timing.v_addressable;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
+index 4e93eeedfc1bbd..efcc1a6b364c27 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c
+@@ -355,6 +355,20 @@ void dcn314_calculate_pix_rate_divider(
+ }
+ }
+
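++/* True if this pipe's OTG drives an enabled DIG whose FIFO is on */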
++static bool dcn314_is_pipe_dig_fifo_on(struct pipe_ctx *pipe)
++{
++ return pipe && pipe->stream
++ // Check dig's otg instance.
++ && pipe->stream_res.stream_enc
++ && pipe->stream_res.stream_enc->funcs->dig_source_otg
++ && pipe->stream_res.tg->inst == pipe->stream_res.stream_enc->funcs->dig_source_otg(pipe->stream_res.stream_enc)
++ && pipe->stream->link && pipe->stream->link->link_enc
++ && pipe->stream->link->link_enc->funcs->is_dig_enabled
++ && pipe->stream->link->link_enc->funcs->is_dig_enabled(pipe->stream->link->link_enc)
++ && pipe->stream_res.stream_enc->funcs->is_fifo_enabled
++ && pipe->stream_res.stream_enc->funcs->is_fifo_enabled(pipe->stream_res.stream_enc);
++}
++
+ void dcn314_resync_fifo_dccg_dio(struct dce_hwseq *hws, struct dc *dc, struct dc_state *context, unsigned int current_pipe_idx)
+ {
+ unsigned int i;
+@@ -371,7 +385,11 @@ void dcn314_resync_fifo_dccg_dio(struct dce_hwseq *hws, struct dc *dc, struct dc
+ if (pipe->top_pipe || pipe->prev_odm_pipe)
+ continue;
+
+- if (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal))) {
++ if (pipe->stream && (pipe->stream->dpms_off || dc_is_virtual_signal(pipe->stream->signal)) &&
++ !pipe->stream->apply_seamless_boot_optimization &&
++ !pipe->stream->apply_edp_fast_boot_optimization) {
++ if (dcn314_is_pipe_dig_fifo_on(pipe))
++ continue;
+ pipe->stream_res.tg->funcs->disable_crtc(pipe->stream_res.tg);
+ reset_sync_context_for_pipe(dc, context, i);
+ otg_disabled[i] = true;
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/core_status.h b/drivers/gpu/drm/amd/display/dc/inc/core_status.h
+index fa5edd03d00439..b5afd8c3103dba 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/core_status.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/core_status.h
+@@ -60,5 +60,7 @@ enum dc_status {
+ };
+
+ char *dc_status_to_str(enum dc_status status);
++char *dc_pixel_encoding_to_str(enum dc_pixel_encoding pixel_encoding);
++char *dc_color_depth_to_str(enum dc_color_depth color_depth);
+
+ #endif /* _CORE_STATUS_H_ */
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/core_types.h b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
+index bfb8b8502d2026..e1e3142cdc00ac 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/core_types.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/core_types.h
+@@ -215,6 +215,7 @@ struct resource_funcs {
+
+ void (*get_panel_config_defaults)(struct dc_panel_config *panel_config);
+ void (*build_pipe_pix_clk_params)(struct pipe_ctx *pipe_ctx);
++ unsigned int (*get_det_buffer_size)(const struct dc_state *context);
+ };
+
+ struct audio_support{
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
+index eea2b3b307cd5f..45e4de8d5cff8d 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn20/dcn20_resource.c
+@@ -1511,6 +1511,7 @@ bool dcn20_split_stream_for_odm(
+
+ if (prev_odm_pipe->plane_state) {
+ struct scaler_data *sd = &prev_odm_pipe->plane_res.scl_data;
++ struct output_pixel_processor *opp = next_odm_pipe->stream_res.opp;
+ int new_width;
+
+ /* HACTIVE halved for odm combine */
+@@ -1544,7 +1545,28 @@ bool dcn20_split_stream_for_odm(
+ sd->viewport_c.x += dc_fixpt_floor(dc_fixpt_mul_int(
+ sd->ratios.horz_c, sd->h_active - sd->recout.x));
+ sd->recout.x = 0;
++
++ /*
++ * When odm is used in YcbCr422 or 420 colour space, a split screen
++ * will be seen with the previous calculations since the extra left
++ * edge pixel is accounted for in fmt but not in viewport.
++ *
++ * Below are calculations which fix the split by fixing the calculations
++ * if there is an extra left edge pixel.
++ */
++ if (opp && opp->funcs->opp_get_left_edge_extra_pixel_count
++ && opp->funcs->opp_get_left_edge_extra_pixel_count(
++ opp, next_odm_pipe->stream->timing.pixel_encoding,
++ resource_is_pipe_type(next_odm_pipe, OTG_MASTER)) == 1) {
++ sd->h_active += 1;
++ sd->recout.width += 1;
++ sd->viewport.x -= dc_fixpt_ceil(dc_fixpt_mul_int(sd->ratios.horz, 1));
++ sd->viewport_c.x -= dc_fixpt_ceil(dc_fixpt_mul_int(sd->ratios.horz, 1));
++ sd->viewport_c.width += dc_fixpt_ceil(dc_fixpt_mul_int(sd->ratios.horz, 1));
++ sd->viewport.width += dc_fixpt_ceil(dc_fixpt_mul_int(sd->ratios.horz, 1));
++ }
+ }
++
+ if (!next_odm_pipe->top_pipe)
+ next_odm_pipe->stream_res.opp = pool->opps[next_odm_pipe->pipe_idx];
+ else
+@@ -2133,6 +2155,7 @@ bool dcn20_fast_validate_bw(
+ ASSERT(0);
+ }
+ }
++
+ /* Actual dsc count per stream dsc validation*/
+ if (!dcn20_validate_dsc(dc, context)) {
+ context->bw_ctx.dml.vba.ValidationStatus[context->bw_ctx.dml.vba.soc.num_states] =
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
+index 347e6aaea582fb..14b28841657d21 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn21/dcn21_resource.c
+@@ -1298,7 +1298,7 @@ static struct link_encoder *dcn21_link_encoder_create(
+ kzalloc(sizeof(struct dcn21_link_encoder), GFP_KERNEL);
+ int link_regs_id;
+
+- if (!enc21)
++ if (!enc21 || enc_init_data->hpd_source >= ARRAY_SIZE(link_enc_hpd_regs))
+ return NULL;
+
+ link_regs_id =
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c
+index 5040a4c6ed1862..75cc84473a577e 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn30/dcn30_resource.c
+@@ -2354,6 +2354,7 @@ static bool dcn30_resource_construct(
+
+ dc->caps.dp_hdmi21_pcon_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* read VBIOS LTTPR caps */
+ {
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn302/dcn302_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn302/dcn302_resource.c
+index 5791b5cc287529..320b040d591d1e 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn302/dcn302_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn302/dcn302_resource.c
+@@ -1234,6 +1234,7 @@ static bool dcn302_resource_construct(
+ dc->caps.extended_aux_timeout_support = true;
+ dc->caps.dmcub_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* Color pipeline capabilities */
+ dc->caps.color.dpp.dcn_arch = 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn303/dcn303_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn303/dcn303_resource.c
+index 63f0f882c8610c..297cf4b5600dae 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn303/dcn303_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn303/dcn303_resource.c
+@@ -1179,6 +1179,7 @@ static bool dcn303_resource_construct(
+ dc->caps.extended_aux_timeout_support = true;
+ dc->caps.dmcub_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* Color pipeline capabilities */
+ dc->caps.color.dpp.dcn_arch = 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+index ac8cb20e2e3b64..80386f698ae4de 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+@@ -1721,6 +1721,12 @@ int dcn31_populate_dml_pipes_from_context(
+ return pipe_cnt;
+ }
+
++unsigned int dcn31_get_det_buffer_size(
++ const struct dc_state *context)
++{
++ return context->bw_ctx.dml.ip.det_buffer_size_kbytes;
++}
++
+ void dcn31_calculate_wm_and_dlg(
+ struct dc *dc, struct dc_state *context,
+ display_e2e_pipe_params_st *pipes,
+@@ -1843,6 +1849,7 @@ static struct resource_funcs dcn31_res_pool_funcs = {
+ .update_bw_bounding_box = dcn31_update_bw_bounding_box,
+ .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+ .get_panel_config_defaults = dcn31_get_panel_config_defaults,
++ .get_det_buffer_size = dcn31_get_det_buffer_size,
+ };
+
+ static struct clock_source *dcn30_clock_source_create(
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.h b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.h
+index 901436591ed45c..551ad912f7bea8 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.h
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.h
+@@ -63,6 +63,9 @@ struct resource_pool *dcn31_create_resource_pool(
+ const struct dc_init_data *init_data,
+ struct dc *dc);
+
++unsigned int dcn31_get_det_buffer_size(
++ const struct dc_state *context);
++
+ /*temp: B0 specific before switch to dcn313 headers*/
+ #ifndef regPHYPLLF_PIXCLK_RESYNC_CNTL
+ #define regPHYPLLF_PIXCLK_RESYNC_CNTL 0x007e
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
+index 169924d0a8393e..01d95108ce662b 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn314/dcn314_resource.c
+@@ -1778,6 +1778,7 @@ static struct resource_funcs dcn314_res_pool_funcs = {
+ .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+ .get_panel_config_defaults = dcn314_get_panel_config_defaults,
+ .get_preferred_eng_id_dpia = dcn314_get_preferred_eng_id_dpia,
++ .get_det_buffer_size = dcn31_get_det_buffer_size,
+ };
+
+ static struct clock_source *dcn30_clock_source_create(
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
+index 3f4b9dba411244..f2ce687c0e03ca 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn315/dcn315_resource.c
+@@ -1840,6 +1840,7 @@ static struct resource_funcs dcn315_res_pool_funcs = {
+ .update_bw_bounding_box = dcn315_update_bw_bounding_box,
+ .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+ .get_panel_config_defaults = dcn315_get_panel_config_defaults,
++ .get_det_buffer_size = dcn31_get_det_buffer_size,
+ };
+
+ static bool dcn315_resource_construct(
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
+index 5fd52c5fcee458..af82e13029c9e4 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn316/dcn316_resource.c
+@@ -1720,6 +1720,7 @@ static struct resource_funcs dcn316_res_pool_funcs = {
+ .update_bw_bounding_box = dcn316_update_bw_bounding_box,
+ .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+ .get_panel_config_defaults = dcn316_get_panel_config_defaults,
++ .get_det_buffer_size = dcn31_get_det_buffer_size,
+ };
+
+ static bool dcn316_resource_construct(
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
+index a124ad9bd108c8..6b889c8be0ca3f 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn32/dcn32_resource.c
+@@ -2186,6 +2186,7 @@ static bool dcn32_resource_construct(
+ dc->caps.dmcub_support = true;
+ dc->caps.seamless_odm = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* Color pipeline capabilities */
+ dc->caps.color.dpp.dcn_arch = 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
+index 827a94f84f1001..74113c578bac40 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn321/dcn321_resource.c
+@@ -1743,6 +1743,7 @@ static bool dcn321_resource_construct(
+ dc->caps.extended_aux_timeout_support = true;
+ dc->caps.dmcub_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* Color pipeline capabilities */
+ dc->caps.color.dpp.dcn_arch = 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+index 893a9d9ee870df..d0c4693c12241b 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn35/dcn35_resource.c
+@@ -1779,6 +1779,7 @@ static struct resource_funcs dcn35_res_pool_funcs = {
+ .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+ .get_panel_config_defaults = dcn35_get_panel_config_defaults,
+ .get_preferred_eng_id_dpia = dcn35_get_preferred_eng_id_dpia,
++ .get_det_buffer_size = dcn31_get_det_buffer_size,
+ };
+
+ static bool dcn35_resource_construct(
+@@ -1850,6 +1851,7 @@ static bool dcn35_resource_construct(
+ dc->caps.zstate_support = true;
+ dc->caps.ips_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* Color pipeline capabilities */
+ dc->caps.color.dpp.dcn_arch = 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+index 70abd32ce2ad18..575c0aa12229cf 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn351/dcn351_resource.c
+@@ -1758,6 +1758,7 @@ static struct resource_funcs dcn351_res_pool_funcs = {
+ .patch_unknown_plane_state = dcn20_patch_unknown_plane_state,
+ .get_panel_config_defaults = dcn35_get_panel_config_defaults,
+ .get_preferred_eng_id_dpia = dcn351_get_preferred_eng_id_dpia,
++ .get_det_buffer_size = dcn31_get_det_buffer_size,
+ };
+
+ static bool dcn351_resource_construct(
+@@ -1829,6 +1830,7 @@ static bool dcn351_resource_construct(
+ dc->caps.zstate_support = true;
+ dc->caps.ips_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ /* Color pipeline capabilities */
+ dc->caps.color.dpp.dcn_arch = 1;
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
+index 9d56fbdcd06afd..4aa975418fb18d 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn401/dcn401_resource.c
+@@ -1826,6 +1826,7 @@ static bool dcn401_resource_construct(
+ dc->caps.extended_aux_timeout_support = true;
+ dc->caps.dmcub_support = true;
+ dc->caps.max_v_total = (1 << 15) - 1;
++ dc->caps.vtotal_limited_by_fp2 = true;
+
+ if (ASICREV_IS_GC_12_0_1_A0(dc->ctx->asic_id.hw_internal_rev))
+ dc->caps.dcc_plane_width_limit = 7680;
+diff --git a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+index ebcf68bfae2b32..7835100b37c41e 100644
+--- a/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
++++ b/drivers/gpu/drm/amd/display/dmub/inc/dmub_cmd.h
+@@ -747,7 +747,8 @@ union dmub_shared_state_ips_driver_signals {
+ uint32_t allow_ips1 : 1; /**< 1 is IPS1 is allowed */
+ uint32_t allow_ips2 : 1; /**< 1 is IPS1 is allowed */
+ uint32_t allow_z10 : 1; /**< 1 if Z10 is allowed */
+- uint32_t reserved_bits : 28; /**< Reversed bits */
++ uint32_t allow_idle : 1; /**< 1 if driver is allowing idle */
++ uint32_t reserved_bits : 27; /**< Reserved bits */
+ } bits;
+ uint32_t all;
+ };
+diff --git a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+index bbd259cea4f4f6..ab62a76d48cf76 100644
+--- a/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
++++ b/drivers/gpu/drm/amd/display/modules/freesync/freesync.c
+@@ -121,6 +121,17 @@ static unsigned int calc_duration_in_us_from_v_total(
+ return duration_in_us;
+ }
+
++static unsigned int calc_max_hardware_v_total(const struct dc_stream_state *stream)
++{
++ unsigned int max_hw_v_total = stream->ctx->dc->caps.max_v_total;
++
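++	/* On ASICs where FP2 limits VTOTAL, reserve the front porch plus one line */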
++ if (stream->ctx->dc->caps.vtotal_limited_by_fp2) {
++ max_hw_v_total -= stream->timing.v_front_porch + 1;
++ }
++
++ return max_hw_v_total;
++}
++
+ unsigned int mod_freesync_calc_v_total_from_refresh(
+ const struct dc_stream_state *stream,
+ unsigned int refresh_in_uhz)
+@@ -1002,7 +1013,7 @@ void mod_freesync_build_vrr_params(struct mod_freesync *mod_freesync,
+
+ if (stream->ctx->dc->caps.max_v_total != 0 && stream->timing.h_total != 0) {
+ min_hardware_refresh_in_uhz = div64_u64((stream->timing.pix_clk_100hz * 100000000ULL),
+- (stream->timing.h_total * (long long)stream->ctx->dc->caps.max_v_total));
++ (stream->timing.h_total * (long long)calc_max_hardware_v_total(stream)));
+ }
+ /* Limit minimum refresh rate to what can be supported by hardware */
+ min_refresh_in_uhz = min_hardware_refresh_in_uhz > in_config->min_refresh_in_uhz ?
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index d5d6ab484e5add..0fa6fbee197899 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -1409,7 +1409,11 @@ static ssize_t amdgpu_set_pp_mclk_od(struct device *dev,
+ * create a custom set of heuristics, write a string of numbers to the file
+ * starting with the number of the custom profile along with a setting
+ * for each heuristic parameter. Due to differences across asic families
+- * the heuristic parameters vary from family to family.
++ * the heuristic parameters vary from family to family. Additionally,
++ * you can apply the custom heuristics to different clock domains. Each
++ * clock domain is considered a distinct operation so if you modify the
++ * gfxclk heuristics and then the memclk heuristics, the all of the
++ * custom heuristics will be retained until you switch to another profile.
+ *
+ */
+
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 32bdeac2676b5c..0c0b9aa44dfa3a 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -72,6 +72,10 @@ static int smu_set_power_limit(void *handle, uint32_t limit);
+ static int smu_set_fan_speed_rpm(void *handle, uint32_t speed);
+ static int smu_set_gfx_cgpg(struct smu_context *smu, bool enabled);
+ static int smu_set_mp1_state(void *handle, enum pp_mp1_state mp1_state);
++static void smu_power_profile_mode_get(struct smu_context *smu,
++ enum PP_SMC_POWER_PROFILE profile_mode);
++static void smu_power_profile_mode_put(struct smu_context *smu,
++ enum PP_SMC_POWER_PROFILE profile_mode);
+
+ static int smu_sys_get_pp_feature_mask(void *handle,
+ char *buf)
+@@ -1257,35 +1261,19 @@ static int smu_sw_init(void *handle)
+ INIT_WORK(&smu->interrupt_work, smu_interrupt_work_fn);
+ atomic64_set(&smu->throttle_int_counter, 0);
+ smu->watermarks_bitmap = 0;
+- smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+- smu->default_power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+
+ atomic_set(&smu->smu_power.power_gate.vcn_gated, 1);
+ atomic_set(&smu->smu_power.power_gate.jpeg_gated, 1);
+ atomic_set(&smu->smu_power.power_gate.vpe_gated, 1);
+ atomic_set(&smu->smu_power.power_gate.umsch_mm_gated, 1);
+
+- smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT] = 0;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D] = 1;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_POWERSAVING] = 2;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_VIDEO] = 3;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_VR] = 4;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_COMPUTE] = 5;
+- smu->workload_prority[PP_SMC_POWER_PROFILE_CUSTOM] = 6;
+-
+ if (smu->is_apu ||
+ !smu_is_workload_profile_available(smu, PP_SMC_POWER_PROFILE_FULLSCREEN3D))
+- smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT];
++ smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+ else
+- smu->workload_mask = 1 << smu->workload_prority[PP_SMC_POWER_PROFILE_FULLSCREEN3D];
+-
+- smu->workload_setting[0] = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+- smu->workload_setting[1] = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
+- smu->workload_setting[2] = PP_SMC_POWER_PROFILE_POWERSAVING;
+- smu->workload_setting[3] = PP_SMC_POWER_PROFILE_VIDEO;
+- smu->workload_setting[4] = PP_SMC_POWER_PROFILE_VR;
+- smu->workload_setting[5] = PP_SMC_POWER_PROFILE_COMPUTE;
+- smu->workload_setting[6] = PP_SMC_POWER_PROFILE_CUSTOM;
++ smu->power_profile_mode = PP_SMC_POWER_PROFILE_FULLSCREEN3D;
++ smu_power_profile_mode_get(smu, smu->power_profile_mode);
++
+ smu->display_config = &adev->pm.pm_display_cfg;
+
+ smu->smu_dpm.dpm_level = AMD_DPM_FORCED_LEVEL_AUTO;
+@@ -1338,6 +1326,11 @@ static int smu_sw_fini(void *handle)
+ return ret;
+ }
+
++ if (smu->custom_profile_params) {
++ kfree(smu->custom_profile_params);
++ smu->custom_profile_params = NULL;
++ }
++
+ smu_fini_microcode(smu);
+
+ return 0;
+@@ -2117,6 +2110,9 @@ static int smu_suspend(void *handle)
+ if (!ret)
+ adev->gfx.gfx_off_entrycount = count;
+
++ /* clear this on suspend so it will get reprogrammed on resume */
++ smu->workload_mask = 0;
++
+ return 0;
+ }
+
+@@ -2229,25 +2225,49 @@ static int smu_enable_umd_pstate(void *handle,
+ }
+
+ static int smu_bump_power_profile_mode(struct smu_context *smu,
+- long *param,
+- uint32_t param_size)
++ long *custom_params,
++ u32 custom_params_max_idx)
+ {
+- int ret = 0;
++ u32 workload_mask = 0;
++ int i, ret = 0;
++
++ for (i = 0; i < PP_SMC_POWER_PROFILE_COUNT; i++) {
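++	/* Build the aggregate workload mask from every profile holding a refcount */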
++ if (smu->workload_refcount[i])
++ workload_mask |= 1 << i;
++ }
++
++ if (smu->workload_mask == workload_mask)
++ return 0;
+
+ if (smu->ppt_funcs->set_power_profile_mode)
+- ret = smu->ppt_funcs->set_power_profile_mode(smu, param, param_size);
++ ret = smu->ppt_funcs->set_power_profile_mode(smu, workload_mask,
++ custom_params,
++ custom_params_max_idx);
++
++ if (!ret)
++ smu->workload_mask = workload_mask;
+
+ return ret;
+ }
+
++static void smu_power_profile_mode_get(struct smu_context *smu,
++ enum PP_SMC_POWER_PROFILE profile_mode)
++{
++ smu->workload_refcount[profile_mode]++;
++}
++
++static void smu_power_profile_mode_put(struct smu_context *smu,
++ enum PP_SMC_POWER_PROFILE profile_mode)
++{
++ if (smu->workload_refcount[profile_mode])
++ smu->workload_refcount[profile_mode]--;
++}
++
+ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
+ enum amd_dpm_forced_level level,
+- bool skip_display_settings,
+- bool init)
++ bool skip_display_settings)
+ {
+ int ret = 0;
+- int index = 0;
+- long workload[1];
+ struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
+
+ if (!skip_display_settings) {
+@@ -2284,14 +2304,8 @@ static int smu_adjust_power_state_dynamic(struct smu_context *smu,
+ }
+
+ if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
+- smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) {
+- index = fls(smu->workload_mask);
+- index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+- workload[0] = smu->workload_setting[index];
+-
+- if (init || smu->power_profile_mode != workload[0])
+- smu_bump_power_profile_mode(smu, workload, 0);
+- }
++ smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
++ smu_bump_power_profile_mode(smu, NULL, 0);
+
+ return ret;
+ }
+@@ -2310,13 +2324,13 @@ static int smu_handle_task(struct smu_context *smu,
+ ret = smu_pre_display_config_changed(smu);
+ if (ret)
+ return ret;
+- ret = smu_adjust_power_state_dynamic(smu, level, false, false);
++ ret = smu_adjust_power_state_dynamic(smu, level, false);
+ break;
+ case AMD_PP_TASK_COMPLETE_INIT:
+- ret = smu_adjust_power_state_dynamic(smu, level, true, true);
++ ret = smu_adjust_power_state_dynamic(smu, level, true);
+ break;
+ case AMD_PP_TASK_READJUST_POWER_STATE:
+- ret = smu_adjust_power_state_dynamic(smu, level, true, false);
++ ret = smu_adjust_power_state_dynamic(smu, level, true);
+ break;
+ default:
+ break;
+@@ -2338,12 +2352,11 @@ static int smu_handle_dpm_task(void *handle,
+
+ static int smu_switch_power_profile(void *handle,
+ enum PP_SMC_POWER_PROFILE type,
+- bool en)
++ bool enable)
+ {
+ struct smu_context *smu = handle;
+ struct smu_dpm_context *smu_dpm_ctx = &(smu->smu_dpm);
+- long workload[1];
+- uint32_t index;
++ int ret;
+
+ if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled)
+ return -EOPNOTSUPP;
+@@ -2351,21 +2364,21 @@ static int smu_switch_power_profile(void *handle,
+ if (!(type < PP_SMC_POWER_PROFILE_CUSTOM))
+ return -EINVAL;
+
+- if (!en) {
+- smu->workload_mask &= ~(1 << smu->workload_prority[type]);
+- index = fls(smu->workload_mask);
+- index = index > 0 && index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+- workload[0] = smu->workload_setting[index];
+- } else {
+- smu->workload_mask |= (1 << smu->workload_prority[type]);
+- index = fls(smu->workload_mask);
+- index = index <= WORKLOAD_POLICY_MAX ? index - 1 : 0;
+- workload[0] = smu->workload_setting[index];
+- }
+-
+ if (smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_MANUAL &&
+- smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM)
+- smu_bump_power_profile_mode(smu, workload, 0);
++ smu_dpm_ctx->dpm_level != AMD_DPM_FORCED_LEVEL_PERF_DETERMINISM) {
++ if (enable)
++ smu_power_profile_mode_get(smu, type);
++ else
++ smu_power_profile_mode_put(smu, type);
++ ret = smu_bump_power_profile_mode(smu, NULL, 0);
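++		/* Undo the refcount adjustment if the SMU update failed */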
++ if (ret) {
++ if (enable)
++ smu_power_profile_mode_put(smu, type);
++ else
++ smu_power_profile_mode_get(smu, type);
++ return ret;
++ }
++ }
+
+ return 0;
+ }
+@@ -3053,12 +3066,35 @@ static int smu_set_power_profile_mode(void *handle,
+ uint32_t param_size)
+ {
+ struct smu_context *smu = handle;
++ bool custom = false;
++ int ret = 0;
+
+ if (!smu->pm_enabled || !smu->adev->pm.dpm_enabled ||
+ !smu->ppt_funcs->set_power_profile_mode)
+ return -EOPNOTSUPP;
+
+- return smu_bump_power_profile_mode(smu, param, param_size);
++ if (param[param_size] == PP_SMC_POWER_PROFILE_CUSTOM) {
++ custom = true;
++ /* clear frontend mask so custom changes propagate */
++ smu->workload_mask = 0;
++ }
++
++ if ((param[param_size] != smu->power_profile_mode) || custom) {
++ /* clear the old user preference */
++ smu_power_profile_mode_put(smu, smu->power_profile_mode);
++ /* set the new user preference */
++ smu_power_profile_mode_get(smu, param[param_size]);
++ ret = smu_bump_power_profile_mode(smu,
++ custom ? param : NULL,
++ custom ? param_size : 0);
++ if (ret)
++ smu_power_profile_mode_put(smu, param[param_size]);
++ else
++ /* store the user's preference */
++ smu->power_profile_mode = param[param_size];
++ }
++
++ return ret;
+ }
+
+ static int smu_get_fan_control_mode(void *handle, u32 *fan_mode)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
+index b44a185d07e84c..2b8a18ce25d943 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
+@@ -556,11 +556,13 @@ struct smu_context {
+ uint32_t hard_min_uclk_req_from_dal;
+ bool disable_uclk_switch;
+
++ /* asic agnostic workload mask */
+ uint32_t workload_mask;
+- uint32_t workload_prority[WORKLOAD_POLICY_MAX];
+- uint32_t workload_setting[WORKLOAD_POLICY_MAX];
++ /* default/user workload preference */
+ uint32_t power_profile_mode;
+- uint32_t default_power_profile_mode;
++ uint32_t workload_refcount[PP_SMC_POWER_PROFILE_COUNT];
++ /* backend specific custom workload settings */
++ long *custom_profile_params;
+ bool pm_enabled;
+ bool is_apu;
+
+@@ -731,9 +733,12 @@ struct pptable_funcs {
+ * @set_power_profile_mode: Set a power profile mode. Also used to
+ * create/set custom power profile modes.
+ * &input: Power profile mode parameters.
+- * &size: Size of &input.
++ * &workload_mask: mask of workloads to enable
++ * &custom_params: custom profile parameters
++ * &custom_params_max_idx: max valid idx into custom_params
+ */
+- int (*set_power_profile_mode)(struct smu_context *smu, long *input, uint32_t size);
++ int (*set_power_profile_mode)(struct smu_context *smu, u32 workload_mask,
++ long *custom_params, u32 custom_params_max_idx);
+
+ /**
+ * @dpm_set_vcn_enable: Enable/disable VCN engine dynamic power
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index d52512f5f1bd9d..fc1297fecc62e0 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -1445,98 +1445,120 @@ static int arcturus_get_power_profile_mode(struct smu_context *smu,
+ return size;
+ }
+
+-static int arcturus_set_power_profile_mode(struct smu_context *smu,
+- long *input,
+- uint32_t size)
++#define ARCTURUS_CUSTOM_PARAMS_COUNT 10
++#define ARCTURUS_CUSTOM_PARAMS_CLOCK_COUNT 2
++#define ARCTURUS_CUSTOM_PARAMS_SIZE (ARCTURUS_CUSTOM_PARAMS_CLOCK_COUNT * ARCTURUS_CUSTOM_PARAMS_COUNT * sizeof(long))
++
++static int arcturus_set_power_profile_mode_coeff(struct smu_context *smu,
++ long *input)
+ {
+ DpmActivityMonitorCoeffInt_t activity_monitor;
+- int workload_type = 0;
+- uint32_t profile_mode = input[size];
+- int ret = 0;
++ int ret, idx;
+
+- if (profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", profile_mode);
+- return -EINVAL;
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++ WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor),
++ false);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
++ return ret;
+ }
+
++ idx = 0 * ARCTURUS_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Gfxclk */
++ activity_monitor.Gfx_FPS = input[idx + 1];
++ activity_monitor.Gfx_UseRlcBusy = input[idx + 2];
++ activity_monitor.Gfx_MinActiveFreqType = input[idx + 3];
++ activity_monitor.Gfx_MinActiveFreq = input[idx + 4];
++ activity_monitor.Gfx_BoosterFreqType = input[idx + 5];
++ activity_monitor.Gfx_BoosterFreq = input[idx + 6];
++ activity_monitor.Gfx_PD_Data_limit_c = input[idx + 7];
++ activity_monitor.Gfx_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor.Gfx_PD_Data_error_rate_coeff = input[idx + 9];
++ }
++ idx = 1 * ARCTURUS_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Uclk */
++ activity_monitor.Mem_FPS = input[idx + 1];
++ activity_monitor.Mem_UseRlcBusy = input[idx + 2];
++ activity_monitor.Mem_MinActiveFreqType = input[idx + 3];
++ activity_monitor.Mem_MinActiveFreq = input[idx + 4];
++ activity_monitor.Mem_BoosterFreqType = input[idx + 5];
++ activity_monitor.Mem_BoosterFreq = input[idx + 6];
++ activity_monitor.Mem_PD_Data_limit_c = input[idx + 7];
++ activity_monitor.Mem_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor.Mem_PD_Data_error_rate_coeff = input[idx + 9];
++ }
+
+- if ((profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) &&
+- (smu->smc_fw_version >= 0x360d00)) {
+- if (size != 10)
+- return -EINVAL;
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++ WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor),
++ true);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ return ret;
++ }
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+- WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor),
+- false);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
+- return ret;
+- }
++ return ret;
++}
+
+- switch (input[0]) {
+- case 0: /* Gfxclk */
+- activity_monitor.Gfx_FPS = input[1];
+- activity_monitor.Gfx_UseRlcBusy = input[2];
+- activity_monitor.Gfx_MinActiveFreqType = input[3];
+- activity_monitor.Gfx_MinActiveFreq = input[4];
+- activity_monitor.Gfx_BoosterFreqType = input[5];
+- activity_monitor.Gfx_BoosterFreq = input[6];
+- activity_monitor.Gfx_PD_Data_limit_c = input[7];
+- activity_monitor.Gfx_PD_Data_error_coeff = input[8];
+- activity_monitor.Gfx_PD_Data_error_rate_coeff = input[9];
+- break;
+- case 1: /* Uclk */
+- activity_monitor.Mem_FPS = input[1];
+- activity_monitor.Mem_UseRlcBusy = input[2];
+- activity_monitor.Mem_MinActiveFreqType = input[3];
+- activity_monitor.Mem_MinActiveFreq = input[4];
+- activity_monitor.Mem_BoosterFreqType = input[5];
+- activity_monitor.Mem_BoosterFreq = input[6];
+- activity_monitor.Mem_PD_Data_limit_c = input[7];
+- activity_monitor.Mem_PD_Data_error_coeff = input[8];
+- activity_monitor.Mem_PD_Data_error_rate_coeff = input[9];
+- break;
+- default:
++static int arcturus_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
++{
++ u32 backend_workload_mask = 0;
++ int ret, idx = -1, i;
++
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
++
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_CUSTOM)) {
++ if (smu->smc_fw_version < 0x360d00)
+ return -EINVAL;
++ if (!smu->custom_profile_params) {
++ smu->custom_profile_params =
++ kzalloc(ARCTURUS_CUSTOM_PARAMS_SIZE, GFP_KERNEL);
++ if (!smu->custom_profile_params)
++ return -ENOMEM;
+ }
+-
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+- WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor),
+- true);
++ if (custom_params && custom_params_max_idx) {
++ if (custom_params_max_idx != ARCTURUS_CUSTOM_PARAMS_COUNT)
++ return -EINVAL;
++ if (custom_params[0] >= ARCTURUS_CUSTOM_PARAMS_CLOCK_COUNT)
++ return -EINVAL;
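++		/* custom_params[0] selects the clock domain: 0 = gfxclk, 1 = uclk */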
++ idx = custom_params[0] * ARCTURUS_CUSTOM_PARAMS_COUNT;
++ smu->custom_profile_params[idx] = 1;
++ for (i = 1; i < custom_params_max_idx; i++)
++ smu->custom_profile_params[idx + i] = custom_params[i];
++ }
++ ret = arcturus_set_power_profile_mode_coeff(smu,
++ smu->custom_profile_params);
+ if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
+ return ret;
+ }
+- }
+-
+- /*
+- * Conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT
+- * Not all profile modes are supported on arcturus.
+- */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- profile_mode);
+- if (workload_type < 0) {
+- dev_dbg(smu->adev->dev, "Unsupported power profile mode %d on arcturus\n", profile_mode);
+- return -EINVAL;
++ } else if (smu->custom_profile_params) {
++ memset(smu->custom_profile_params, 0, ARCTURUS_CUSTOM_PARAMS_SIZE);
+ }
+
+ ret = smu_cmn_send_smc_msg_with_param(smu,
+- SMU_MSG_SetWorkloadMask,
+- 1 << workload_type,
+- NULL);
++ SMU_MSG_SetWorkloadMask,
++ backend_workload_mask,
++ NULL);
+ if (ret) {
+- dev_err(smu->adev->dev, "Fail to set workload type %d\n", workload_type);
++ dev_err(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
+ return ret;
+ }
+
+- smu->power_profile_mode = profile_mode;
+-
+- return 0;
++ return ret;
+ }
+
+ static int arcturus_set_performance_level(struct smu_context *smu,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+index 16af1a329621f1..27c1892b2c7493 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/navi10_ppt.c
+@@ -2004,87 +2004,122 @@ static int navi10_get_power_profile_mode(struct smu_context *smu, char *buf)
+ return size;
+ }
+
+-static int navi10_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
++#define NAVI10_CUSTOM_PARAMS_COUNT 10
++#define NAVI10_CUSTOM_PARAMS_CLOCKS_COUNT 3
++#define NAVI10_CUSTOM_PARAMS_SIZE (NAVI10_CUSTOM_PARAMS_CLOCKS_COUNT * NAVI10_CUSTOM_PARAMS_COUNT * sizeof(long))
++
++static int navi10_set_power_profile_mode_coeff(struct smu_context *smu,
++ long *input)
+ {
+ DpmActivityMonitorCoeffInt_t activity_monitor;
+- int workload_type, ret = 0;
++ int ret, idx;
+
+- smu->power_profile_mode = input[size];
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor), false);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
++ return ret;
++ }
+
+- if (smu->power_profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", smu->power_profile_mode);
+- return -EINVAL;
++ idx = 0 * NAVI10_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Gfxclk */
++ activity_monitor.Gfx_FPS = input[idx + 1];
++ activity_monitor.Gfx_MinFreqStep = input[idx + 2];
++ activity_monitor.Gfx_MinActiveFreqType = input[idx + 3];
++ activity_monitor.Gfx_MinActiveFreq = input[idx + 4];
++ activity_monitor.Gfx_BoosterFreqType = input[idx + 5];
++ activity_monitor.Gfx_BoosterFreq = input[idx + 6];
++ activity_monitor.Gfx_PD_Data_limit_c = input[idx + 7];
++ activity_monitor.Gfx_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor.Gfx_PD_Data_error_rate_coeff = input[idx + 9];
++ }
++ idx = 1 * NAVI10_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Socclk */
++ activity_monitor.Soc_FPS = input[idx + 1];
++ activity_monitor.Soc_MinFreqStep = input[idx + 2];
++ activity_monitor.Soc_MinActiveFreqType = input[idx + 3];
++ activity_monitor.Soc_MinActiveFreq = input[idx + 4];
++ activity_monitor.Soc_BoosterFreqType = input[idx + 5];
++ activity_monitor.Soc_BoosterFreq = input[idx + 6];
++ activity_monitor.Soc_PD_Data_limit_c = input[idx + 7];
++ activity_monitor.Soc_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor.Soc_PD_Data_error_rate_coeff = input[idx + 9];
++ }
++ idx = 2 * NAVI10_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Memclk */
++ activity_monitor.Mem_FPS = input[idx + 1];
++ activity_monitor.Mem_MinFreqStep = input[idx + 2];
++ activity_monitor.Mem_MinActiveFreqType = input[idx + 3];
++ activity_monitor.Mem_MinActiveFreq = input[idx + 4];
++ activity_monitor.Mem_BoosterFreqType = input[idx + 5];
++ activity_monitor.Mem_BoosterFreq = input[idx + 6];
++ activity_monitor.Mem_PD_Data_limit_c = input[idx + 7];
++ activity_monitor.Mem_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor.Mem_PD_Data_error_rate_coeff = input[idx + 9];
++ }
++
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor), true);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ return ret;
+ }
+
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
+- if (size != 10)
+- return -EINVAL;
++ return ret;
++}
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor), false);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
+- return ret;
+- }
++static int navi10_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
++{
++ u32 backend_workload_mask = 0;
++ int ret, idx = -1, i;
+
+- switch (input[0]) {
+- case 0: /* Gfxclk */
+- activity_monitor.Gfx_FPS = input[1];
+- activity_monitor.Gfx_MinFreqStep = input[2];
+- activity_monitor.Gfx_MinActiveFreqType = input[3];
+- activity_monitor.Gfx_MinActiveFreq = input[4];
+- activity_monitor.Gfx_BoosterFreqType = input[5];
+- activity_monitor.Gfx_BoosterFreq = input[6];
+- activity_monitor.Gfx_PD_Data_limit_c = input[7];
+- activity_monitor.Gfx_PD_Data_error_coeff = input[8];
+- activity_monitor.Gfx_PD_Data_error_rate_coeff = input[9];
+- break;
+- case 1: /* Socclk */
+- activity_monitor.Soc_FPS = input[1];
+- activity_monitor.Soc_MinFreqStep = input[2];
+- activity_monitor.Soc_MinActiveFreqType = input[3];
+- activity_monitor.Soc_MinActiveFreq = input[4];
+- activity_monitor.Soc_BoosterFreqType = input[5];
+- activity_monitor.Soc_BoosterFreq = input[6];
+- activity_monitor.Soc_PD_Data_limit_c = input[7];
+- activity_monitor.Soc_PD_Data_error_coeff = input[8];
+- activity_monitor.Soc_PD_Data_error_rate_coeff = input[9];
+- break;
+- case 2: /* Memclk */
+- activity_monitor.Mem_FPS = input[1];
+- activity_monitor.Mem_MinFreqStep = input[2];
+- activity_monitor.Mem_MinActiveFreqType = input[3];
+- activity_monitor.Mem_MinActiveFreq = input[4];
+- activity_monitor.Mem_BoosterFreqType = input[5];
+- activity_monitor.Mem_BoosterFreq = input[6];
+- activity_monitor.Mem_PD_Data_limit_c = input[7];
+- activity_monitor.Mem_PD_Data_error_coeff = input[8];
+- activity_monitor.Mem_PD_Data_error_rate_coeff = input[9];
+- break;
+- default:
+- return -EINVAL;
+- }
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor), true);
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_CUSTOM)) {
++ if (!smu->custom_profile_params) {
++ smu->custom_profile_params = kzalloc(NAVI10_CUSTOM_PARAMS_SIZE, GFP_KERNEL);
++ if (!smu->custom_profile_params)
++ return -ENOMEM;
++ }
++ if (custom_params && custom_params_max_idx) {
++ if (custom_params_max_idx != NAVI10_CUSTOM_PARAMS_COUNT)
++ return -EINVAL;
++ if (custom_params[0] >= NAVI10_CUSTOM_PARAMS_CLOCKS_COUNT)
++ return -EINVAL;
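++		/* custom_params[0] selects the clock domain: 0 = gfxclk, 1 = socclk, 2 = memclk */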
++ idx = custom_params[0] * NAVI10_CUSTOM_PARAMS_COUNT;
++ smu->custom_profile_params[idx] = 1;
++ for (i = 1; i < custom_params_max_idx; i++)
++ smu->custom_profile_params[idx + i] = custom_params[i];
++ }
++ ret = navi10_set_power_profile_mode_coeff(smu,
++ smu->custom_profile_params);
+ if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
+ return ret;
+ }
++ } else if (smu->custom_profile_params) {
++ memset(smu->custom_profile_params, 0, NAVI10_CUSTOM_PARAMS_SIZE);
+ }
+
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- smu->power_profile_mode);
+- if (workload_type < 0)
+- return -EINVAL;
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+- 1 << workload_type, NULL);
+- if (ret)
+- dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
++ backend_workload_mask, NULL);
++ if (ret) {
++ dev_err(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+index 9c3c48297cba03..1af90990d05c8f 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+@@ -1706,90 +1706,126 @@ static int sienna_cichlid_get_power_profile_mode(struct smu_context *smu, char *
+ return size;
+ }
+
+-static int sienna_cichlid_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
++#define SIENNA_CICHLID_CUSTOM_PARAMS_COUNT 10
++#define SIENNA_CICHLID_CUSTOM_PARAMS_CLOCK_COUNT 3
++#define SIENNA_CICHLID_CUSTOM_PARAMS_SIZE (SIENNA_CICHLID_CUSTOM_PARAMS_CLOCK_COUNT * SIENNA_CICHLID_CUSTOM_PARAMS_COUNT * sizeof(long))
++
++static int sienna_cichlid_set_power_profile_mode_coeff(struct smu_context *smu,
++ long *input)
+ {
+
+ DpmActivityMonitorCoeffIntExternal_t activity_monitor_external;
+ DpmActivityMonitorCoeffInt_t *activity_monitor =
+ &(activity_monitor_external.DpmActivityMonitorCoeffInt);
+- int workload_type, ret = 0;
++ int ret, idx;
+
+- smu->power_profile_mode = input[size];
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external), false);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
++ return ret;
++ }
+
+- if (smu->power_profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", smu->power_profile_mode);
+- return -EINVAL;
++ idx = 0 * SIENNA_CICHLID_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Gfxclk */
++ activity_monitor->Gfx_FPS = input[idx + 1];
++ activity_monitor->Gfx_MinFreqStep = input[idx + 2];
++ activity_monitor->Gfx_MinActiveFreqType = input[idx + 3];
++ activity_monitor->Gfx_MinActiveFreq = input[idx + 4];
++ activity_monitor->Gfx_BoosterFreqType = input[idx + 5];
++ activity_monitor->Gfx_BoosterFreq = input[idx + 6];
++ activity_monitor->Gfx_PD_Data_limit_c = input[idx + 7];
++ activity_monitor->Gfx_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor->Gfx_PD_Data_error_rate_coeff = input[idx + 9];
++ }
++ idx = 1 * SIENNA_CICHLID_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Socclk */
++ activity_monitor->Fclk_FPS = input[idx + 1];
++ activity_monitor->Fclk_MinFreqStep = input[idx + 2];
++ activity_monitor->Fclk_MinActiveFreqType = input[idx + 3];
++ activity_monitor->Fclk_MinActiveFreq = input[idx + 4];
++ activity_monitor->Fclk_BoosterFreqType = input[idx + 5];
++ activity_monitor->Fclk_BoosterFreq = input[idx + 6];
++ activity_monitor->Fclk_PD_Data_limit_c = input[idx + 7];
++ activity_monitor->Fclk_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor->Fclk_PD_Data_error_rate_coeff = input[idx + 9];
++ }
++ idx = 2 * SIENNA_CICHLID_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Memclk */
++ activity_monitor->Mem_FPS = input[idx + 1];
++ activity_monitor->Mem_MinFreqStep = input[idx + 2];
++ activity_monitor->Mem_MinActiveFreqType = input[idx + 3];
++ activity_monitor->Mem_MinActiveFreq = input[idx + 4];
++ activity_monitor->Mem_BoosterFreqType = input[idx + 5];
++ activity_monitor->Mem_BoosterFreq = input[idx + 6];
++ activity_monitor->Mem_PD_Data_limit_c = input[idx + 7];
++ activity_monitor->Mem_PD_Data_error_coeff = input[idx + 8];
++ activity_monitor->Mem_PD_Data_error_rate_coeff = input[idx + 9];
+ }
+
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
+- if (size != 10)
+- return -EINVAL;
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external), true);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ return ret;
++ }
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external), false);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
+- return ret;
+- }
++ return ret;
++}
+
+- switch (input[0]) {
+- case 0: /* Gfxclk */
+- activity_monitor->Gfx_FPS = input[1];
+- activity_monitor->Gfx_MinFreqStep = input[2];
+- activity_monitor->Gfx_MinActiveFreqType = input[3];
+- activity_monitor->Gfx_MinActiveFreq = input[4];
+- activity_monitor->Gfx_BoosterFreqType = input[5];
+- activity_monitor->Gfx_BoosterFreq = input[6];
+- activity_monitor->Gfx_PD_Data_limit_c = input[7];
+- activity_monitor->Gfx_PD_Data_error_coeff = input[8];
+- activity_monitor->Gfx_PD_Data_error_rate_coeff = input[9];
+- break;
+- case 1: /* Socclk */
+- activity_monitor->Fclk_FPS = input[1];
+- activity_monitor->Fclk_MinFreqStep = input[2];
+- activity_monitor->Fclk_MinActiveFreqType = input[3];
+- activity_monitor->Fclk_MinActiveFreq = input[4];
+- activity_monitor->Fclk_BoosterFreqType = input[5];
+- activity_monitor->Fclk_BoosterFreq = input[6];
+- activity_monitor->Fclk_PD_Data_limit_c = input[7];
+- activity_monitor->Fclk_PD_Data_error_coeff = input[8];
+- activity_monitor->Fclk_PD_Data_error_rate_coeff = input[9];
+- break;
+- case 2: /* Memclk */
+- activity_monitor->Mem_FPS = input[1];
+- activity_monitor->Mem_MinFreqStep = input[2];
+- activity_monitor->Mem_MinActiveFreqType = input[3];
+- activity_monitor->Mem_MinActiveFreq = input[4];
+- activity_monitor->Mem_BoosterFreqType = input[5];
+- activity_monitor->Mem_BoosterFreq = input[6];
+- activity_monitor->Mem_PD_Data_limit_c = input[7];
+- activity_monitor->Mem_PD_Data_error_coeff = input[8];
+- activity_monitor->Mem_PD_Data_error_rate_coeff = input[9];
+- break;
+- default:
+- return -EINVAL;
++static int sienna_cichlid_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
++{
++ u32 backend_workload_mask = 0;
++ int ret, idx = -1, i;
++
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
++
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_CUSTOM)) {
++ if (!smu->custom_profile_params) {
++ smu->custom_profile_params =
++ kzalloc(SIENNA_CICHLID_CUSTOM_PARAMS_SIZE, GFP_KERNEL);
++ if (!smu->custom_profile_params)
++ return -ENOMEM;
+ }
+-
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external), true);
++ if (custom_params && custom_params_max_idx) {
++ if (custom_params_max_idx != SIENNA_CICHLID_CUSTOM_PARAMS_COUNT)
++ return -EINVAL;
++ if (custom_params[0] >= SIENNA_CICHLID_CUSTOM_PARAMS_CLOCK_COUNT)
++ return -EINVAL;
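++		/* custom_params[0] selects the clock domain: 0 = gfxclk, 1 = fclk, 2 = memclk */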
++ idx = custom_params[0] * SIENNA_CICHLID_CUSTOM_PARAMS_COUNT;
++ smu->custom_profile_params[idx] = 1;
++ for (i = 1; i < custom_params_max_idx; i++)
++ smu->custom_profile_params[idx + i] = custom_params[i];
++ }
++ ret = sienna_cichlid_set_power_profile_mode_coeff(smu,
++ smu->custom_profile_params);
+ if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
+ return ret;
+ }
++ } else if (smu->custom_profile_params) {
++ memset(smu->custom_profile_params, 0, SIENNA_CICHLID_CUSTOM_PARAMS_SIZE);
+ }
+
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- smu->power_profile_mode);
+- if (workload_type < 0)
+- return -EINVAL;
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+- 1 << workload_type, NULL);
+- if (ret)
+- dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
++ backend_workload_mask, NULL);
++ if (ret) {
++ dev_err(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+index 1fe020f1f4dbe2..9bca748ac2e947 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/vangogh_ppt.c
+@@ -1054,42 +1054,27 @@ static int vangogh_get_power_profile_mode(struct smu_context *smu,
+ return size;
+ }
+
+-static int vangogh_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
++static int vangogh_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
+ {
+- int workload_type, ret;
+- uint32_t profile_mode = input[size];
++ u32 backend_workload_mask = 0;
++ int ret;
+
+- if (profile_mode >= PP_SMC_POWER_PROFILE_COUNT) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", profile_mode);
+- return -EINVAL;
+- }
+-
+- if (profile_mode == PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT ||
+- profile_mode == PP_SMC_POWER_PROFILE_POWERSAVING)
+- return 0;
+-
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- profile_mode);
+- if (workload_type < 0) {
+- dev_dbg(smu->adev->dev, "Unsupported power profile mode %d on VANGOGH\n",
+- profile_mode);
+- return -EINVAL;
+- }
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
+
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify,
+- 1 << workload_type,
+- NULL);
++ backend_workload_mask,
++ NULL);
+ if (ret) {
+- dev_err_once(smu->adev->dev, "Fail to set workload type %d\n",
+- workload_type);
++ dev_err_once(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
+ return ret;
+ }
+
+- smu->power_profile_mode = profile_mode;
+-
+- return 0;
++ return ret;
+ }
+
+ static int vangogh_set_soft_freq_limited_range(struct smu_context *smu,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+index cc0504b063fa3a..1a8a42b176e520 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu12/renoir_ppt.c
+@@ -862,44 +862,27 @@ static int renoir_force_clk_levels(struct smu_context *smu,
+ return ret;
+ }
+
+-static int renoir_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
++static int renoir_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
+ {
+- int workload_type, ret;
+- uint32_t profile_mode = input[size];
++ int ret;
++ u32 backend_workload_mask = 0;
+
+- if (profile_mode > PP_SMC_POWER_PROFILE_CUSTOM) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", profile_mode);
+- return -EINVAL;
+- }
+-
+- if (profile_mode == PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT ||
+- profile_mode == PP_SMC_POWER_PROFILE_POWERSAVING)
+- return 0;
+-
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- profile_mode);
+- if (workload_type < 0) {
+- /*
+- * TODO: If some case need switch to powersave/default power mode
+- * then can consider enter WORKLOAD_COMPUTE/WORKLOAD_CUSTOM for power saving.
+- */
+- dev_dbg(smu->adev->dev, "Unsupported power profile mode %d on RENOIR\n", profile_mode);
+- return -EINVAL;
+- }
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
+
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_ActiveProcessNotify,
+- 1 << workload_type,
+- NULL);
++ backend_workload_mask,
++ NULL);
+ if (ret) {
+- dev_err_once(smu->adev->dev, "Fail to set workload type %d\n", workload_type);
++ dev_err_once(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
+ return ret;
+ }
+
+- smu->power_profile_mode = profile_mode;
+-
+- return 0;
++ return ret;
+ }
+
+ static int renoir_set_peak_clock_by_device(struct smu_context *smu)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index d53e162dcd8de2..a9373968807164 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2477,82 +2477,76 @@ static int smu_v13_0_0_get_power_profile_mode(struct smu_context *smu,
+ return size;
+ }
+
+-static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+- long *input,
+- uint32_t size)
++#define SMU_13_0_0_CUSTOM_PARAMS_COUNT 9
++#define SMU_13_0_0_CUSTOM_PARAMS_CLOCK_COUNT 2
++#define SMU_13_0_0_CUSTOM_PARAMS_SIZE (SMU_13_0_0_CUSTOM_PARAMS_CLOCK_COUNT * SMU_13_0_0_CUSTOM_PARAMS_COUNT * sizeof(long))
++
++static int smu_v13_0_0_set_power_profile_mode_coeff(struct smu_context *smu,
++ long *input)
+ {
+ DpmActivityMonitorCoeffIntExternal_t activity_monitor_external;
+ DpmActivityMonitorCoeffInt_t *activity_monitor =
+ &(activity_monitor_external.DpmActivityMonitorCoeffInt);
+- int workload_type, ret = 0;
+- u32 workload_mask, selected_workload_mask;
+-
+- smu->power_profile_mode = input[size];
++ int ret, idx;
+
+- if (smu->power_profile_mode >= PP_SMC_POWER_PROFILE_COUNT) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", smu->power_profile_mode);
+- return -EINVAL;
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++ WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external),
++ false);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
++ return ret;
+ }
+
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
+- if (size != 9)
+- return -EINVAL;
+-
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+- WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external),
+- false);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
+- return ret;
+- }
+-
+- switch (input[0]) {
+- case 0: /* Gfxclk */
+- activity_monitor->Gfx_FPS = input[1];
+- activity_monitor->Gfx_MinActiveFreqType = input[2];
+- activity_monitor->Gfx_MinActiveFreq = input[3];
+- activity_monitor->Gfx_BoosterFreqType = input[4];
+- activity_monitor->Gfx_BoosterFreq = input[5];
+- activity_monitor->Gfx_PD_Data_limit_c = input[6];
+- activity_monitor->Gfx_PD_Data_error_coeff = input[7];
+- activity_monitor->Gfx_PD_Data_error_rate_coeff = input[8];
+- break;
+- case 1: /* Fclk */
+- activity_monitor->Fclk_FPS = input[1];
+- activity_monitor->Fclk_MinActiveFreqType = input[2];
+- activity_monitor->Fclk_MinActiveFreq = input[3];
+- activity_monitor->Fclk_BoosterFreqType = input[4];
+- activity_monitor->Fclk_BoosterFreq = input[5];
+- activity_monitor->Fclk_PD_Data_limit_c = input[6];
+- activity_monitor->Fclk_PD_Data_error_coeff = input[7];
+- activity_monitor->Fclk_PD_Data_error_rate_coeff = input[8];
+- break;
+- default:
+- return -EINVAL;
+- }
++ idx = 0 * SMU_13_0_0_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Gfxclk */
++ activity_monitor->Gfx_FPS = input[idx + 1];
++ activity_monitor->Gfx_MinActiveFreqType = input[idx + 2];
++ activity_monitor->Gfx_MinActiveFreq = input[idx + 3];
++ activity_monitor->Gfx_BoosterFreqType = input[idx + 4];
++ activity_monitor->Gfx_BoosterFreq = input[idx + 5];
++ activity_monitor->Gfx_PD_Data_limit_c = input[idx + 6];
++ activity_monitor->Gfx_PD_Data_error_coeff = input[idx + 7];
++ activity_monitor->Gfx_PD_Data_error_rate_coeff = input[idx + 8];
++ }
++ idx = 1 * SMU_13_0_0_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Fclk */
++ activity_monitor->Fclk_FPS = input[idx + 1];
++ activity_monitor->Fclk_MinActiveFreqType = input[idx + 2];
++ activity_monitor->Fclk_MinActiveFreq = input[idx + 3];
++ activity_monitor->Fclk_BoosterFreqType = input[idx + 4];
++ activity_monitor->Fclk_BoosterFreq = input[idx + 5];
++ activity_monitor->Fclk_PD_Data_limit_c = input[idx + 6];
++ activity_monitor->Fclk_PD_Data_error_coeff = input[idx + 7];
++ activity_monitor->Fclk_PD_Data_error_rate_coeff = input[idx + 8];
++ }
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+- WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external),
+- true);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
+- return ret;
+- }
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++ WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external),
++ true);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ return ret;
+ }
+
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- smu->power_profile_mode);
++ return ret;
++}
+
+- if (workload_type < 0)
+- return -EINVAL;
++static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
++{
++ u32 backend_workload_mask = 0;
++ int workload_type, ret, idx = -1, i;
+
+- selected_workload_mask = workload_mask = 1 << workload_type;
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
+
+ /* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */
+ if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
+@@ -2564,15 +2558,48 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ CMN2ASIC_MAPPING_WORKLOAD,
+ PP_SMC_POWER_PROFILE_POWERSAVING);
+ if (workload_type >= 0)
+- workload_mask |= 1 << workload_type;
++ backend_workload_mask |= 1 << workload_type;
++ }
++
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_CUSTOM)) {
++ if (!smu->custom_profile_params) {
++ smu->custom_profile_params =
++ kzalloc(SMU_13_0_0_CUSTOM_PARAMS_SIZE, GFP_KERNEL);
++ if (!smu->custom_profile_params)
++ return -ENOMEM;
++ }
++ if (custom_params && custom_params_max_idx) {
++ if (custom_params_max_idx != SMU_13_0_0_CUSTOM_PARAMS_COUNT)
++ return -EINVAL;
++ if (custom_params[0] >= SMU_13_0_0_CUSTOM_PARAMS_CLOCK_COUNT)
++ return -EINVAL;
++ idx = custom_params[0] * SMU_13_0_0_CUSTOM_PARAMS_COUNT;
++ smu->custom_profile_params[idx] = 1;
++ for (i = 1; i < custom_params_max_idx; i++)
++ smu->custom_profile_params[idx + i] = custom_params[i];
++ }
++ ret = smu_v13_0_0_set_power_profile_mode_coeff(smu,
++ smu->custom_profile_params);
++ if (ret) {
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
++ } else if (smu->custom_profile_params) {
++ memset(smu->custom_profile_params, 0, SMU_13_0_0_CUSTOM_PARAMS_SIZE);
+ }
+
+ ret = smu_cmn_send_smc_msg_with_param(smu,
+- SMU_MSG_SetWorkloadMask,
+- workload_mask,
+- NULL);
+- if (!ret)
+- smu->workload_mask = selected_workload_mask;
++ SMU_MSG_SetWorkloadMask,
++ backend_workload_mask,
++ NULL);
++ if (ret) {
++ dev_err(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index ceaf4572db2527..d0e6d051e9cf9f 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2436,78 +2436,110 @@ do { \
+ return result;
+ }
+
+-static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu, long *input, uint32_t size)
++#define SMU_13_0_7_CUSTOM_PARAMS_COUNT 8
++#define SMU_13_0_7_CUSTOM_PARAMS_CLOCK_COUNT 2
++#define SMU_13_0_7_CUSTOM_PARAMS_SIZE (SMU_13_0_7_CUSTOM_PARAMS_CLOCK_COUNT * SMU_13_0_7_CUSTOM_PARAMS_COUNT * sizeof(long))
++
++static int smu_v13_0_7_set_power_profile_mode_coeff(struct smu_context *smu,
++ long *input)
+ {
+
+ DpmActivityMonitorCoeffIntExternal_t activity_monitor_external;
+ DpmActivityMonitorCoeffInt_t *activity_monitor =
+ &(activity_monitor_external.DpmActivityMonitorCoeffInt);
+- int workload_type, ret = 0;
++ int ret, idx;
+
+- smu->power_profile_mode = input[size];
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external), false);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
++ return ret;
++ }
+
+- if (smu->power_profile_mode > PP_SMC_POWER_PROFILE_WINDOW3D) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", smu->power_profile_mode);
+- return -EINVAL;
++ idx = 0 * SMU_13_0_7_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Gfxclk */
++ activity_monitor->Gfx_ActiveHystLimit = input[idx + 1];
++ activity_monitor->Gfx_IdleHystLimit = input[idx + 2];
++ activity_monitor->Gfx_FPS = input[idx + 3];
++ activity_monitor->Gfx_MinActiveFreqType = input[idx + 4];
++ activity_monitor->Gfx_BoosterFreqType = input[idx + 5];
++ activity_monitor->Gfx_MinActiveFreq = input[idx + 6];
++ activity_monitor->Gfx_BoosterFreq = input[idx + 7];
++ }
++ idx = 1 * SMU_13_0_7_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Fclk */
++ activity_monitor->Fclk_ActiveHystLimit = input[idx + 1];
++ activity_monitor->Fclk_IdleHystLimit = input[idx + 2];
++ activity_monitor->Fclk_FPS = input[idx + 3];
++ activity_monitor->Fclk_MinActiveFreqType = input[idx + 4];
++ activity_monitor->Fclk_BoosterFreqType = input[idx + 5];
++ activity_monitor->Fclk_MinActiveFreq = input[idx + 6];
++ activity_monitor->Fclk_BoosterFreq = input[idx + 7];
+ }
+
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
+- if (size != 8)
+- return -EINVAL;
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external), true);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ return ret;
++ }
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external), false);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
+- return ret;
+- }
++ return ret;
++}
+
+- switch (input[0]) {
+- case 0: /* Gfxclk */
+- activity_monitor->Gfx_ActiveHystLimit = input[1];
+- activity_monitor->Gfx_IdleHystLimit = input[2];
+- activity_monitor->Gfx_FPS = input[3];
+- activity_monitor->Gfx_MinActiveFreqType = input[4];
+- activity_monitor->Gfx_BoosterFreqType = input[5];
+- activity_monitor->Gfx_MinActiveFreq = input[6];
+- activity_monitor->Gfx_BoosterFreq = input[7];
+- break;
+- case 1: /* Fclk */
+- activity_monitor->Fclk_ActiveHystLimit = input[1];
+- activity_monitor->Fclk_IdleHystLimit = input[2];
+- activity_monitor->Fclk_FPS = input[3];
+- activity_monitor->Fclk_MinActiveFreqType = input[4];
+- activity_monitor->Fclk_BoosterFreqType = input[5];
+- activity_monitor->Fclk_MinActiveFreq = input[6];
+- activity_monitor->Fclk_BoosterFreq = input[7];
+- break;
+- default:
+- return -EINVAL;
++static int smu_v13_0_7_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
++{
++ u32 backend_workload_mask = 0;
++ int ret, idx = -1, i;
++
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
++
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_CUSTOM)) {
++ if (!smu->custom_profile_params) {
++ smu->custom_profile_params =
++ kzalloc(SMU_13_0_7_CUSTOM_PARAMS_SIZE, GFP_KERNEL);
++ if (!smu->custom_profile_params)
++ return -ENOMEM;
+ }
+-
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF, WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external), true);
++ if (custom_params && custom_params_max_idx) {
++ if (custom_params_max_idx != SMU_13_0_7_CUSTOM_PARAMS_COUNT)
++ return -EINVAL;
++ if (custom_params[0] >= SMU_13_0_7_CUSTOM_PARAMS_CLOCK_COUNT)
++ return -EINVAL;
++ idx = custom_params[0] * SMU_13_0_7_CUSTOM_PARAMS_COUNT;
++ smu->custom_profile_params[idx] = 1;
++ for (i = 1; i < custom_params_max_idx; i++)
++ smu->custom_profile_params[idx + i] = custom_params[i];
++ }
++ ret = smu_v13_0_7_set_power_profile_mode_coeff(smu,
++ smu->custom_profile_params);
+ if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
+ return ret;
+ }
++ } else if (smu->custom_profile_params) {
++ memset(smu->custom_profile_params, 0, SMU_13_0_7_CUSTOM_PARAMS_SIZE);
+ }
+
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- smu->power_profile_mode);
+- if (workload_type < 0)
+- return -EINVAL;
+ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
+- 1 << workload_type, NULL);
++ backend_workload_mask, NULL);
+
+- if (ret)
+- dev_err(smu->adev->dev, "[%s] Failed to set work load mask!", __func__);
+- else
+- smu->workload_mask = (1 << workload_type);
++ if (ret) {
++ dev_err(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index 82aef8626afa97..b22fb7eafcd3f2 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -1751,90 +1751,120 @@ static int smu_v14_0_2_get_power_profile_mode(struct smu_context *smu,
+ return size;
+ }
+
+-static int smu_v14_0_2_set_power_profile_mode(struct smu_context *smu,
+- long *input,
+- uint32_t size)
++#define SMU_14_0_2_CUSTOM_PARAMS_COUNT 9
++#define SMU_14_0_2_CUSTOM_PARAMS_CLOCK_COUNT 2
++#define SMU_14_0_2_CUSTOM_PARAMS_SIZE (SMU_14_0_2_CUSTOM_PARAMS_CLOCK_COUNT * SMU_14_0_2_CUSTOM_PARAMS_COUNT * sizeof(long))
++
++static int smu_v14_0_2_set_power_profile_mode_coeff(struct smu_context *smu,
++ long *input)
+ {
+ DpmActivityMonitorCoeffIntExternal_t activity_monitor_external;
+ DpmActivityMonitorCoeffInt_t *activity_monitor =
+ &(activity_monitor_external.DpmActivityMonitorCoeffInt);
+- int workload_type, ret = 0;
+- uint32_t current_profile_mode = smu->power_profile_mode;
+- smu->power_profile_mode = input[size];
++ int ret, idx;
+
+- if (smu->power_profile_mode >= PP_SMC_POWER_PROFILE_COUNT) {
+- dev_err(smu->adev->dev, "Invalid power profile mode %d\n", smu->power_profile_mode);
+- return -EINVAL;
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++ WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external),
++ false);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
++ return ret;
+ }
+
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_CUSTOM) {
+- if (size != 9)
+- return -EINVAL;
++ idx = 0 * SMU_14_0_2_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Gfxclk */
++ activity_monitor->Gfx_FPS = input[idx + 1];
++ activity_monitor->Gfx_MinActiveFreqType = input[idx + 2];
++ activity_monitor->Gfx_MinActiveFreq = input[idx + 3];
++ activity_monitor->Gfx_BoosterFreqType = input[idx + 4];
++ activity_monitor->Gfx_BoosterFreq = input[idx + 5];
++ activity_monitor->Gfx_PD_Data_limit_c = input[idx + 6];
++ activity_monitor->Gfx_PD_Data_error_coeff = input[idx + 7];
++ activity_monitor->Gfx_PD_Data_error_rate_coeff = input[idx + 8];
++ }
++ idx = 1 * SMU_14_0_2_CUSTOM_PARAMS_COUNT;
++ if (input[idx]) {
++ /* Fclk */
++ activity_monitor->Fclk_FPS = input[idx + 1];
++ activity_monitor->Fclk_MinActiveFreqType = input[idx + 2];
++ activity_monitor->Fclk_MinActiveFreq = input[idx + 3];
++ activity_monitor->Fclk_BoosterFreqType = input[idx + 4];
++ activity_monitor->Fclk_BoosterFreq = input[idx + 5];
++ activity_monitor->Fclk_PD_Data_limit_c = input[idx + 6];
++ activity_monitor->Fclk_PD_Data_error_coeff = input[idx + 7];
++ activity_monitor->Fclk_PD_Data_error_rate_coeff = input[idx + 8];
++ }
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+- WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external),
+- false);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to get activity monitor!", __func__);
+- return ret;
+- }
++ ret = smu_cmn_update_table(smu,
++ SMU_TABLE_ACTIVITY_MONITOR_COEFF,
++ WORKLOAD_PPLIB_CUSTOM_BIT,
++ (void *)(&activity_monitor_external),
++ true);
++ if (ret) {
++ dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
++ return ret;
++ }
+
+- switch (input[0]) {
+- case 0: /* Gfxclk */
+- activity_monitor->Gfx_FPS = input[1];
+- activity_monitor->Gfx_MinActiveFreqType = input[2];
+- activity_monitor->Gfx_MinActiveFreq = input[3];
+- activity_monitor->Gfx_BoosterFreqType = input[4];
+- activity_monitor->Gfx_BoosterFreq = input[5];
+- activity_monitor->Gfx_PD_Data_limit_c = input[6];
+- activity_monitor->Gfx_PD_Data_error_coeff = input[7];
+- activity_monitor->Gfx_PD_Data_error_rate_coeff = input[8];
+- break;
+- case 1: /* Fclk */
+- activity_monitor->Fclk_FPS = input[1];
+- activity_monitor->Fclk_MinActiveFreqType = input[2];
+- activity_monitor->Fclk_MinActiveFreq = input[3];
+- activity_monitor->Fclk_BoosterFreqType = input[4];
+- activity_monitor->Fclk_BoosterFreq = input[5];
+- activity_monitor->Fclk_PD_Data_limit_c = input[6];
+- activity_monitor->Fclk_PD_Data_error_coeff = input[7];
+- activity_monitor->Fclk_PD_Data_error_rate_coeff = input[8];
+- break;
+- default:
+- return -EINVAL;
+- }
++ return ret;
++}
+
+- ret = smu_cmn_update_table(smu,
+- SMU_TABLE_ACTIVITY_MONITOR_COEFF,
+- WORKLOAD_PPLIB_CUSTOM_BIT,
+- (void *)(&activity_monitor_external),
+- true);
+- if (ret) {
+- dev_err(smu->adev->dev, "[%s] Failed to set activity monitor!", __func__);
+- return ret;
+- }
+- }
++static int smu_v14_0_2_set_power_profile_mode(struct smu_context *smu,
++ u32 workload_mask,
++ long *custom_params,
++ u32 custom_params_max_idx)
++{
++ u32 backend_workload_mask = 0;
++ int ret, idx = -1, i;
++
++ smu_cmn_get_backend_workload_mask(smu, workload_mask,
++ &backend_workload_mask);
+
+- if (smu->power_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE)
++ /* disable deep sleep if compute is enabled */
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_COMPUTE))
+ smu_v14_0_deep_sleep_control(smu, false);
+- else if (current_profile_mode == PP_SMC_POWER_PROFILE_COMPUTE)
++ else
+ smu_v14_0_deep_sleep_control(smu, true);
+
+- /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
+- workload_type = smu_cmn_to_asic_specific_index(smu,
+- CMN2ASIC_MAPPING_WORKLOAD,
+- smu->power_profile_mode);
+- if (workload_type < 0)
+- return -EINVAL;
++ if (workload_mask & (1 << PP_SMC_POWER_PROFILE_CUSTOM)) {
++ if (!smu->custom_profile_params) {
++ smu->custom_profile_params =
++ kzalloc(SMU_14_0_2_CUSTOM_PARAMS_SIZE, GFP_KERNEL);
++ if (!smu->custom_profile_params)
++ return -ENOMEM;
++ }
++ if (custom_params && custom_params_max_idx) {
++ if (custom_params_max_idx != SMU_14_0_2_CUSTOM_PARAMS_COUNT)
++ return -EINVAL;
++ if (custom_params[0] >= SMU_14_0_2_CUSTOM_PARAMS_CLOCK_COUNT)
++ return -EINVAL;
++ idx = custom_params[0] * SMU_14_0_2_CUSTOM_PARAMS_COUNT;
++ smu->custom_profile_params[idx] = 1;
++ for (i = 1; i < custom_params_max_idx; i++)
++ smu->custom_profile_params[idx + i] = custom_params[i];
++ }
++ ret = smu_v14_0_2_set_power_profile_mode_coeff(smu,
++ smu->custom_profile_params);
++ if (ret) {
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
++ } else if (smu->custom_profile_params) {
++ memset(smu->custom_profile_params, 0, SMU_14_0_2_CUSTOM_PARAMS_SIZE);
++ }
+
+- ret = smu_cmn_send_smc_msg_with_param(smu,
+- SMU_MSG_SetWorkloadMask,
+- 1 << workload_type,
+- NULL);
+- if (!ret)
+- smu->workload_mask = 1 << workload_type;
++ ret = smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_SetWorkloadMask,
++ backend_workload_mask, NULL);
++ if (ret) {
++ dev_err(smu->adev->dev, "Failed to set workload mask 0x%08x\n",
++ workload_mask);
++ if (idx != -1)
++ smu->custom_profile_params[idx] = 0;
++ return ret;
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+index 91ad434bcdaeb4..0d71db7be325da 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+@@ -1215,3 +1215,28 @@ void smu_cmn_generic_plpd_policy_desc(struct smu_dpm_policy *policy)
+ {
+ policy->desc = &xgmi_plpd_policy_desc;
+ }
++
++void smu_cmn_get_backend_workload_mask(struct smu_context *smu,
++ u32 workload_mask,
++ u32 *backend_workload_mask)
++{
++ int workload_type;
++ u32 profile_mode;
++
++ *backend_workload_mask = 0;
++
++ for (profile_mode = 0; profile_mode < PP_SMC_POWER_PROFILE_COUNT; profile_mode++) {
++ if (!(workload_mask & (1 << profile_mode)))
++ continue;
++
++ /* conv PP_SMC_POWER_PROFILE* to WORKLOAD_PPLIB_*_BIT */
++ workload_type = smu_cmn_to_asic_specific_index(smu,
++ CMN2ASIC_MAPPING_WORKLOAD,
++ profile_mode);
++
++ if (workload_type < 0)
++ continue;
++
++ *backend_workload_mask |= 1 << workload_type;
++ }
++}
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+index 1de685defe85b1..a020277dec3e96 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.h
+@@ -147,5 +147,9 @@ bool smu_cmn_is_audio_func_enabled(struct amdgpu_device *adev);
+ void smu_cmn_generic_soc_policy_desc(struct smu_dpm_policy *policy);
+ void smu_cmn_generic_plpd_policy_desc(struct smu_dpm_policy *policy);
+
++void smu_cmn_get_backend_workload_mask(struct smu_context *smu,
++ u32 workload_mask,
++ u32 *backend_workload_mask);
++
+ #endif
+ #endif
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index 65b57de20203f5..008d86cc562af7 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -3507,6 +3507,7 @@ static const struct of_device_id it6505_of_match[] = {
+ { .compatible = "ite,it6505" },
+ { }
+ };
++MODULE_DEVICE_TABLE(of, it6505_of_match);
+
+ static struct i2c_driver it6505_i2c_driver = {
+ .driver = {
+diff --git a/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c b/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
+index 14a2a8473682b0..c491e3203bf11c 100644
+--- a/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
+@@ -160,11 +160,11 @@ EXPORT_SYMBOL(drm_dp_dual_mode_write);
+
+ static bool is_hdmi_adaptor(const char hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN])
+ {
+- static const char dp_dual_mode_hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN] =
++ static const char dp_dual_mode_hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN + 1] =
+ "DP-HDMI ADAPTOR\x04";
+
+ return memcmp(hdmi_id, dp_dual_mode_hdmi_id,
+- sizeof(dp_dual_mode_hdmi_id)) == 0;
++ DP_DUAL_MODE_HDMI_ID_LEN) == 0;
+ }
+
+ static bool is_type1_adaptor(uint8_t adaptor_id)
+diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+index ac90118b9e7a81..bcf3a33123be1c 100644
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -320,6 +320,9 @@ static bool drm_dp_decode_sideband_msg_hdr(const struct drm_dp_mst_topology_mgr
+ hdr->broadcast = (buf[idx] >> 7) & 0x1;
+ hdr->path_msg = (buf[idx] >> 6) & 0x1;
+ hdr->msg_len = buf[idx] & 0x3f;
++ if (hdr->msg_len < 1) /* min space for body CRC */
++ return false;
++
+ idx++;
+ hdr->somt = (buf[idx] >> 7) & 0x1;
+ hdr->eomt = (buf[idx] >> 6) & 0x1;
+@@ -3697,8 +3700,7 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
+ ret = 0;
+ mgr->payload_id_table_cleared = false;
+
+- memset(&mgr->down_rep_recv, 0, sizeof(mgr->down_rep_recv));
+- memset(&mgr->up_req_recv, 0, sizeof(mgr->up_req_recv));
++ mgr->reset_rx_state = true;
+ }
+
+ out_unlock:
+@@ -3856,6 +3858,11 @@ int drm_dp_mst_topology_mgr_resume(struct drm_dp_mst_topology_mgr *mgr,
+ }
+ EXPORT_SYMBOL(drm_dp_mst_topology_mgr_resume);
+
++static void reset_msg_rx_state(struct drm_dp_sideband_msg_rx *msg)
++{
++ memset(msg, 0, sizeof(*msg));
++}
++
+ static bool
+ drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up,
+ struct drm_dp_mst_branch **mstb)
+@@ -3934,6 +3941,34 @@ drm_dp_get_one_sb_msg(struct drm_dp_mst_topology_mgr *mgr, bool up,
+ return true;
+ }
+
++static int get_msg_request_type(u8 data)
++{
++ return data & 0x7f;
++}
++
++static bool verify_rx_request_type(struct drm_dp_mst_topology_mgr *mgr,
++ const struct drm_dp_sideband_msg_tx *txmsg,
++ const struct drm_dp_sideband_msg_rx *rxmsg)
++{
++ const struct drm_dp_sideband_msg_hdr *hdr = &rxmsg->initial_hdr;
++ const struct drm_dp_mst_branch *mstb = txmsg->dst;
++ int tx_req_type = get_msg_request_type(txmsg->msg[0]);
++ int rx_req_type = get_msg_request_type(rxmsg->msg[0]);
++ char rad_str[64];
++
++ if (tx_req_type == rx_req_type)
++ return true;
++
++ drm_dp_mst_rad_to_str(mstb->rad, mstb->lct, rad_str, sizeof(rad_str));
++ drm_dbg_kms(mgr->dev,
++ "Got unexpected MST reply, mstb: %p seqno: %d lct: %d rad: %s rx_req_type: %s (%02x) != tx_req_type: %s (%02x)\n",
++ mstb, hdr->seqno, mstb->lct, rad_str,
++ drm_dp_mst_req_type_str(rx_req_type), rx_req_type,
++ drm_dp_mst_req_type_str(tx_req_type), tx_req_type);
++
++ return false;
++}
++
+ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
+ {
+ struct drm_dp_sideband_msg_tx *txmsg;
+@@ -3963,6 +3998,9 @@ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
+ goto out_clear_reply;
+ }
+
++ if (!verify_rx_request_type(mgr, txmsg, msg))
++ goto out_clear_reply;
++
+ drm_dp_sideband_parse_reply(mgr, msg, &txmsg->reply);
+
+ if (txmsg->reply.reply_type == DP_SIDEBAND_REPLY_NAK) {
+@@ -4138,6 +4176,17 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ return 0;
+ }
+
++static void update_msg_rx_state(struct drm_dp_mst_topology_mgr *mgr)
++{
++ mutex_lock(&mgr->lock);
++ if (mgr->reset_rx_state) {
++ mgr->reset_rx_state = false;
++ reset_msg_rx_state(&mgr->down_rep_recv);
++ reset_msg_rx_state(&mgr->up_req_recv);
++ }
++ mutex_unlock(&mgr->lock);
++}
++
+ /**
+ * drm_dp_mst_hpd_irq_handle_event() - MST hotplug IRQ handle MST event
+ * @mgr: manager to notify irq for.
+@@ -4172,6 +4221,8 @@ int drm_dp_mst_hpd_irq_handle_event(struct drm_dp_mst_topology_mgr *mgr, const u
+ *handled = true;
+ }
+
++ update_msg_rx_state(mgr);
++
+ if (esi[1] & DP_DOWN_REP_MSG_RDY) {
+ ret = drm_dp_mst_handle_down_rep(mgr);
+ *handled = true;
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 2d84d7ea1ab7a0..4a73821b81f6fd 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -184,6 +184,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T103HAF"),
+ },
+ .driver_data = (void *)&lcd800x1280_rightside_up,
++ }, { /* AYA NEO AYANEO 2 */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "AYANEO 2"),
++ },
++ .driver_data = (void *)&lcd1200x1920_rightside_up,
+ }, { /* AYA NEO 2021 */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYADEVICE"),
+@@ -196,6 +202,18 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "AIR"),
+ },
+ .driver_data = (void *)&lcd1080x1920_leftside_up,
++ }, { /* AYA NEO Founder */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYA NEO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "AYA NEO Founder"),
++ },
++ .driver_data = (void *)&lcd800x1280_rightside_up,
++ }, { /* AYA NEO GEEK */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "GEEK"),
++ },
++ .driver_data = (void *)&lcd800x1280_rightside_up,
+ }, { /* AYA NEO NEXT */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_BOARD_VENDOR, "AYANEO"),
+diff --git a/drivers/gpu/drm/drm_panic.c b/drivers/gpu/drm/drm_panic.c
+index 74412b7bf936c2..0a9ecc1380d2a4 100644
+--- a/drivers/gpu/drm/drm_panic.c
++++ b/drivers/gpu/drm/drm_panic.c
+@@ -209,6 +209,14 @@ static u32 convert_xrgb8888_to_argb2101010(u32 pix)
+ return GENMASK(31, 30) /* set alpha bits */ | pix | ((pix >> 8) & 0x00300C03);
+ }
+
++static u32 convert_xrgb8888_to_abgr2101010(u32 pix)
++{
++ pix = ((pix & 0x00FF0000) >> 14) |
++ ((pix & 0x0000FF00) << 4) |
++ ((pix & 0x000000FF) << 22);
++ return GENMASK(31, 30) /* set alpha bits */ | pix | ((pix >> 8) & 0x00300C03);
++}
++
+ /*
+ * convert_from_xrgb8888 - convert one pixel from xrgb8888 to the desired format
+ * @color: input color, in xrgb8888 format
+@@ -242,6 +250,8 @@ static u32 convert_from_xrgb8888(u32 color, u32 format)
+ return convert_xrgb8888_to_xrgb2101010(color);
+ case DRM_FORMAT_ARGB2101010:
+ return convert_xrgb8888_to_argb2101010(color);
++ case DRM_FORMAT_ABGR2101010:
++ return convert_xrgb8888_to_abgr2101010(color);
+ default:
+ WARN_ONCE(1, "Can't convert to %p4cc\n", &format);
+ return 0;
+diff --git a/drivers/gpu/drm/mcde/mcde_drv.c b/drivers/gpu/drm/mcde/mcde_drv.c
+index 10c06440c7e73e..f1bb38f4e67349 100644
+--- a/drivers/gpu/drm/mcde/mcde_drv.c
++++ b/drivers/gpu/drm/mcde/mcde_drv.c
+@@ -473,6 +473,7 @@ static const struct of_device_id mcde_of_match[] = {
+ },
+ {},
+ };
++MODULE_DEVICE_TABLE(of, mcde_of_match);
+
+ static struct platform_driver mcde_driver = {
+ .driver = {
+diff --git a/drivers/gpu/drm/panel/panel-simple.c b/drivers/gpu/drm/panel/panel-simple.c
+index 86735430462fa6..06381c62820975 100644
+--- a/drivers/gpu/drm/panel/panel-simple.c
++++ b/drivers/gpu/drm/panel/panel-simple.c
+@@ -4565,6 +4565,31 @@ static const struct panel_desc yes_optoelectronics_ytc700tlag_05_201c = {
+ .connector_type = DRM_MODE_CONNECTOR_LVDS,
+ };
+
++static const struct drm_display_mode mchp_ac69t88a_mode = {
++ .clock = 25000,
++ .hdisplay = 800,
++ .hsync_start = 800 + 88,
++ .hsync_end = 800 + 88 + 5,
++ .htotal = 800 + 88 + 5 + 40,
++ .vdisplay = 480,
++ .vsync_start = 480 + 23,
++ .vsync_end = 480 + 23 + 5,
++ .vtotal = 480 + 23 + 5 + 1,
++};
++
++static const struct panel_desc mchp_ac69t88a = {
++ .modes = &mchp_ac69t88a_mode,
++ .num_modes = 1,
++ .bpc = 8,
++ .size = {
++ .width = 108,
++ .height = 65,
++ },
++ .bus_flags = DRM_BUS_FLAG_DE_HIGH,
++ .bus_format = MEDIA_BUS_FMT_RGB888_1X7X4_JEIDA,
++ .connector_type = DRM_MODE_CONNECTOR_LVDS,
++};
++
+ static const struct drm_display_mode arm_rtsm_mode[] = {
+ {
+ .clock = 65000,
+@@ -5048,6 +5073,9 @@ static const struct of_device_id platform_of_match[] = {
+ }, {
+ .compatible = "yes-optoelectronics,ytc700tlag-05-201c",
+ .data = &yes_optoelectronics_ytc700tlag_05_201c,
++ }, {
++ .compatible = "microchip,ac69t88a",
++ .data = &mchp_ac69t88a,
+ }, {
+ /* Must be the last entry */
+ .compatible = "panel-dpi",
+diff --git a/drivers/gpu/drm/radeon/r600_cs.c b/drivers/gpu/drm/radeon/r600_cs.c
+index 1b2d31c4d77caa..ac77d1246b9453 100644
+--- a/drivers/gpu/drm/radeon/r600_cs.c
++++ b/drivers/gpu/drm/radeon/r600_cs.c
+@@ -2104,7 +2104,7 @@ static int r600_packet3_check(struct radeon_cs_parser *p,
+ return -EINVAL;
+ }
+
+- offset = radeon_get_ib_value(p, idx+1) << 8;
++ offset = (u64)radeon_get_ib_value(p, idx+1) << 8;
+ if (offset != track->vgt_strmout_bo_offset[idx_value]) {
+ DRM_ERROR("bad STRMOUT_BASE_UPDATE, bo offset does not match: 0x%llx, 0x%x\n",
+ offset, track->vgt_strmout_bo_offset[idx_value]);
+diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
+index e97c6c60bc96ef..416590ea0dc3d6 100644
+--- a/drivers/gpu/drm/scheduler/sched_main.c
++++ b/drivers/gpu/drm/scheduler/sched_main.c
+@@ -803,6 +803,14 @@ int drm_sched_job_init(struct drm_sched_job *job,
+ return -EINVAL;
+ }
+
++ /*
++ * We don't know for sure how the user has allocated this struct. Thus,
++ * zero the struct so that disallowed (i.e., too early) usage of pointers
++ * that this function does not set is guaranteed to lead to a NULL pointer
++ * exception instead of UB.
++ */
++ memset(job, 0, sizeof(*job));
++
+ job->entity = entity;
+ job->credits = credits;
+ job->s_fence = drm_sched_fence_alloc(entity, owner);
+diff --git a/drivers/gpu/drm/sti/sti_mixer.c b/drivers/gpu/drm/sti/sti_mixer.c
+index 7e5f14646625b4..06c1b81912f79f 100644
+--- a/drivers/gpu/drm/sti/sti_mixer.c
++++ b/drivers/gpu/drm/sti/sti_mixer.c
+@@ -137,7 +137,7 @@ static void mixer_dbg_crb(struct seq_file *s, int val)
+ }
+ }
+
+-static void mixer_dbg_mxn(struct seq_file *s, void *addr)
++static void mixer_dbg_mxn(struct seq_file *s, void __iomem *addr)
+ {
+ int i;
+
+diff --git a/drivers/gpu/drm/v3d/v3d_perfmon.c b/drivers/gpu/drm/v3d/v3d_perfmon.c
+index 00cd081d787327..6ee56cbd3f1bfc 100644
+--- a/drivers/gpu/drm/v3d/v3d_perfmon.c
++++ b/drivers/gpu/drm/v3d/v3d_perfmon.c
+@@ -254,9 +254,9 @@ void v3d_perfmon_start(struct v3d_dev *v3d, struct v3d_perfmon *perfmon)
+ V3D_CORE_WRITE(0, V3D_V4_PCTR_0_SRC_X(source), channel);
+ }
+
++ V3D_CORE_WRITE(0, V3D_V4_PCTR_0_EN, mask);
+ V3D_CORE_WRITE(0, V3D_V4_PCTR_0_CLR, mask);
+ V3D_CORE_WRITE(0, V3D_PCTR_0_OVERFLOW, mask);
+- V3D_CORE_WRITE(0, V3D_V4_PCTR_0_EN, mask);
+
+ v3d->active_perfmon = perfmon;
+ }
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 2d7d3e90f3be44..7e0a5ea7ab859a 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -1924,7 +1924,7 @@ static int vc4_hdmi_audio_startup(struct device *dev, void *data)
+ }
+
+ if (!vc4_hdmi_audio_can_stream(vc4_hdmi)) {
+- ret = -ENODEV;
++ ret = -ENOTSUPP;
+ goto out_dev_exit;
+ }
+
+diff --git a/drivers/gpu/drm/vc4/vc4_hvs.c b/drivers/gpu/drm/vc4/vc4_hvs.c
+index 863539e1f7e04b..c389e82463bfdb 100644
+--- a/drivers/gpu/drm/vc4/vc4_hvs.c
++++ b/drivers/gpu/drm/vc4/vc4_hvs.c
+@@ -963,6 +963,17 @@ static int vc4_hvs_bind(struct device *dev, struct device *master, void *data)
+ SCALER_DISPCTRL_SCLEIRQ);
+
+
++ /* Set AXI panic mode.
++ * VC4 panics when < 2 lines in FIFO.
++ * VC5 panics when less than 1 line in the FIFO.
++ */
++ dispctrl &= ~(SCALER_DISPCTRL_PANIC0_MASK |
++ SCALER_DISPCTRL_PANIC1_MASK |
++ SCALER_DISPCTRL_PANIC2_MASK);
++ dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC0);
++ dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC1);
++ dispctrl |= VC4_SET_FIELD(2, SCALER_DISPCTRL_PANIC2);
++
+ /* Set AXI panic mode.
+ * VC4 panics when < 2 lines in FIFO.
+ * VC5 panics when less than 1 line in the FIFO.
+diff --git a/drivers/gpu/drm/xe/regs/xe_engine_regs.h b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+index 81b71903675e0d..7c78496e6213cc 100644
+--- a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
++++ b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+@@ -186,6 +186,7 @@
+
+ #define VDBOX_CGCTL3F10(base) XE_REG((base) + 0x3f10)
+ #define IECPUNIT_CLKGATE_DIS REG_BIT(22)
++#define RAMDFTUNIT_CLKGATE_DIS REG_BIT(9)
+
+ #define VDBOX_CGCTL3F18(base) XE_REG((base) + 0x3f18)
+ #define ALNUNIT_CLKGATE_DIS REG_BIT(13)
+diff --git a/drivers/gpu/drm/xe/regs/xe_gt_regs.h b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+index bd604b9f08e4fa..5404de2aea5457 100644
+--- a/drivers/gpu/drm/xe/regs/xe_gt_regs.h
++++ b/drivers/gpu/drm/xe/regs/xe_gt_regs.h
+@@ -286,6 +286,9 @@
+ #define GAMTLBVEBOX0_CLKGATE_DIS REG_BIT(16)
+ #define LTCDD_CLKGATE_DIS REG_BIT(10)
+
++#define UNSLCGCTL9454 XE_REG(0x9454)
++#define LSCFE_CLKGATE_DIS REG_BIT(4)
++
+ #define XEHP_SLICE_UNIT_LEVEL_CLKGATE XE_REG_MCR(0x94d4)
+ #define L3_CR2X_CLKGATE_DIS REG_BIT(17)
+ #define L3_CLKGATE_DIS REG_BIT(16)
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
+index bdb76e834e4c36..5221ee3f12149b 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump.c
++++ b/drivers/gpu/drm/xe/xe_devcoredump.c
+@@ -6,6 +6,7 @@
+ #include "xe_devcoredump.h"
+ #include "xe_devcoredump_types.h"
+
++#include <linux/ascii85.h>
+ #include <linux/devcoredump.h>
+ #include <generated/utsrelease.h>
+
+@@ -85,9 +86,9 @@ static ssize_t __xe_devcoredump_read(char *buffer, size_t count,
+
+ p = drm_coredump_printer(&iter);
+
+- drm_printf(&p, "**** Xe Device Coredump ****\n");
+- drm_printf(&p, "kernel: " UTS_RELEASE "\n");
+- drm_printf(&p, "module: " KBUILD_MODNAME "\n");
++ drm_puts(&p, "**** Xe Device Coredump ****\n");
++ drm_puts(&p, "kernel: " UTS_RELEASE "\n");
++ drm_puts(&p, "module: " KBUILD_MODNAME "\n");
+
+ ts = ktime_to_timespec64(ss->snapshot_time);
+ drm_printf(&p, "Snapshot time: %lld.%09ld\n", ts.tv_sec, ts.tv_nsec);
+@@ -96,20 +97,25 @@ static ssize_t __xe_devcoredump_read(char *buffer, size_t count,
+ drm_printf(&p, "Process: %s\n", ss->process_name);
+ xe_device_snapshot_print(xe, &p);
+
+- drm_printf(&p, "\n**** GuC CT ****\n");
+- xe_guc_ct_snapshot_print(coredump->snapshot.ct, &p);
+- xe_guc_exec_queue_snapshot_print(coredump->snapshot.ge, &p);
++ drm_printf(&p, "\n**** GT #%d ****\n", ss->gt->info.id);
++ drm_printf(&p, "\tTile: %d\n", ss->gt->tile->id);
+
+- drm_printf(&p, "\n**** Job ****\n");
+- xe_sched_job_snapshot_print(coredump->snapshot.job, &p);
++ drm_puts(&p, "\n**** GuC CT ****\n");
++ xe_guc_ct_snapshot_print(ss->ct, &p);
+
+- drm_printf(&p, "\n**** HW Engines ****\n");
++ drm_puts(&p, "\n**** Contexts ****\n");
++ xe_guc_exec_queue_snapshot_print(ss->ge, &p);
++
++ drm_puts(&p, "\n**** Job ****\n");
++ xe_sched_job_snapshot_print(ss->job, &p);
++
++ drm_puts(&p, "\n**** HW Engines ****\n");
+ for (i = 0; i < XE_NUM_HW_ENGINES; i++)
+- if (coredump->snapshot.hwe[i])
+- xe_hw_engine_snapshot_print(coredump->snapshot.hwe[i],
+- &p);
+- drm_printf(&p, "\n**** VM state ****\n");
+- xe_vm_snapshot_print(coredump->snapshot.vm, &p);
++ if (ss->hwe[i])
++ xe_hw_engine_snapshot_print(ss->hwe[i], &p);
++
++ drm_puts(&p, "\n**** VM state ****\n");
++ xe_vm_snapshot_print(ss->vm, &p);
+
+ return count - iter.remain;
+ }
+@@ -141,13 +147,15 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
+ {
+ struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
+ struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
++ unsigned int fw_ref;
+
+ /* keep going if fw fails as we still want to save the memory and SW data */
+- if (xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL))
++ fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
++ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
+ xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
+ xe_vm_snapshot_capture_delayed(ss->vm);
+ xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
+- xe_force_wake_put(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
++ xe_force_wake_put(gt_to_fw(ss->gt), fw_ref);
+
+ /* Calculate devcoredump size */
+ ss->read.size = __xe_devcoredump_read(NULL, INT_MAX, coredump);
+@@ -220,8 +228,9 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
+ u32 width_mask = (0x1 << q->width) - 1;
+ const char *process_name = "no process";
+
+- int i;
++ unsigned int fw_ref;
+ bool cookie;
++ int i;
+
+ ss->snapshot_time = ktime_get_real();
+ ss->boot_time = ktime_get_boottime();
+@@ -244,26 +253,25 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
+ }
+
+ /* keep going if fw fails as we still want to save the memory and SW data */
+- if (xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL))
+- xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
++ fw_ref = xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
+
+- coredump->snapshot.ct = xe_guc_ct_snapshot_capture(&guc->ct, true);
+- coredump->snapshot.ge = xe_guc_exec_queue_snapshot_capture(q);
+- coredump->snapshot.job = xe_sched_job_snapshot_capture(job);
+- coredump->snapshot.vm = xe_vm_snapshot_capture(q->vm);
++ ss->ct = xe_guc_ct_snapshot_capture(&guc->ct, true);
++ ss->ge = xe_guc_exec_queue_snapshot_capture(q);
++ ss->job = xe_sched_job_snapshot_capture(job);
++ ss->vm = xe_vm_snapshot_capture(q->vm);
+
+ for_each_hw_engine(hwe, q->gt, id) {
+ if (hwe->class != q->hwe->class ||
+ !(BIT(hwe->logical_instance) & adj_logical_mask)) {
+- coredump->snapshot.hwe[id] = NULL;
++ ss->hwe[id] = NULL;
+ continue;
+ }
+- coredump->snapshot.hwe[id] = xe_hw_engine_snapshot_capture(hwe);
++ ss->hwe[id] = xe_hw_engine_snapshot_capture(hwe);
+ }
+
+ queue_work(system_unbound_wq, &ss->work);
+
+- xe_force_wake_put(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
++ xe_force_wake_put(gt_to_fw(q->gt), fw_ref);
+ dma_fence_end_signalling(cookie);
+ }
+
+@@ -310,3 +318,89 @@ int xe_devcoredump_init(struct xe_device *xe)
+ }
+
+ #endif
++
++/**
++ * xe_print_blob_ascii85 - print a BLOB to some useful location in ASCII85
++ *
++ * The output is split into multiple lines because some print targets, e.g. dmesg,
++ * cannot handle arbitrarily long lines. Note also that printing to dmesg in
++ * piecemeal fashion is not possible: each separate call to drm_puts() has a
++ * line-feed automatically added!
++ * constructed in a local buffer first, then printed in one atomic output call.
++ *
++ * There is also a scheduler yield call to prevent the 'task has been stuck for
++ * 120s' kernel hang check feature from firing when printing to a slow target
++ * such as dmesg over a serial port.
++ *
++ * TODO: Add compression prior to the ASCII85 encoding to shrink huge buffers down.
++ *
++ * @p: the printer object to output to
++ * @prefix: optional prefix to add to output string
++ * @blob: the Binary Large OBject to dump out
++ * @offset: offset in bytes to skip from the front of the BLOB, must be a multiple of sizeof(u32)
++ * @size: the size in bytes of the BLOB, must be a multiple of sizeof(u32)
++ */
++void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
++ const void *blob, size_t offset, size_t size)
++{
++ const u32 *blob32 = (const u32 *)blob;
++ char buff[ASCII85_BUFSZ], *line_buff;
++ size_t line_pos = 0;
++
++#define DMESG_MAX_LINE_LEN 800
++#define MIN_SPACE (ASCII85_BUFSZ + 2) /* 85 + "\n\0" */
++
++ if (size & 3)
++ drm_printf(p, "Size not word aligned: %zu", size);
++ if (offset & 3)
++ drm_printf(p, "Offset not word aligned: %zu", size);
++
++ line_buff = kzalloc(DMESG_MAX_LINE_LEN, GFP_KERNEL);
++ if (IS_ERR_OR_NULL(line_buff)) {
++ drm_printf(p, "Failed to allocate line buffer: %pe", line_buff);
++ return;
++ }
++
++ blob32 += offset / sizeof(*blob32);
++ size /= sizeof(*blob32);
++
++ if (prefix) {
++ strscpy(line_buff, prefix, DMESG_MAX_LINE_LEN - MIN_SPACE - 2);
++ line_pos = strlen(line_buff);
++
++ line_buff[line_pos++] = ':';
++ line_buff[line_pos++] = ' ';
++ }
++
++ while (size--) {
++ u32 val = *(blob32++);
++
++ strscpy(line_buff + line_pos, ascii85_encode(val, buff),
++ DMESG_MAX_LINE_LEN - line_pos);
++ line_pos += strlen(line_buff + line_pos);
++
++ if ((line_pos + MIN_SPACE) >= DMESG_MAX_LINE_LEN) {
++ line_buff[line_pos++] = '\n';
++ line_buff[line_pos++] = 0;
++
++ drm_puts(p, line_buff);
++
++ line_pos = 0;
++
++ /* Prevent 'stuck thread' time out errors */
++ cond_resched();
++ }
++ }
++
++ if (line_pos) {
++ line_buff[line_pos++] = '\n';
++ line_buff[line_pos++] = 0;
++
++ drm_puts(p, line_buff);
++ }
++
++ kfree(line_buff);
++
++#undef MIN_SPACE
++#undef DMESG_MAX_LINE_LEN
++}
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump.h b/drivers/gpu/drm/xe/xe_devcoredump.h
+index e2fa65ce093226..a4eebc285fc837 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump.h
++++ b/drivers/gpu/drm/xe/xe_devcoredump.h
+@@ -6,6 +6,9 @@
+ #ifndef _XE_DEVCOREDUMP_H_
+ #define _XE_DEVCOREDUMP_H_
+
++#include <linux/types.h>
++
++struct drm_printer;
+ struct xe_device;
+ struct xe_sched_job;
+
+@@ -23,4 +26,7 @@ static inline int xe_devcoredump_init(struct xe_device *xe)
+ }
+ #endif
+
++void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
++ const void *blob, size_t offset, size_t size);
++
+ #endif
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump_types.h b/drivers/gpu/drm/xe/xe_devcoredump_types.h
+index 440d05d77a5af8..3cc2f095fdfbd1 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump_types.h
++++ b/drivers/gpu/drm/xe/xe_devcoredump_types.h
+@@ -37,7 +37,8 @@ struct xe_devcoredump_snapshot {
+ /* GuC snapshots */
+ /** @ct: GuC CT snapshot */
+ struct xe_guc_ct_snapshot *ct;
+- /** @ge: Guc Engine snapshot */
++
++ /** @ge: GuC Submission Engine snapshot */
+ struct xe_guc_submit_exec_queue_snapshot *ge;
+
+ /** @hwe: HW Engine snapshot array */
+diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
+index a1987b554a8d2a..bb85208cf1a94c 100644
+--- a/drivers/gpu/drm/xe/xe_device.c
++++ b/drivers/gpu/drm/xe/xe_device.c
+@@ -919,6 +919,7 @@ void xe_device_snapshot_print(struct xe_device *xe, struct drm_printer *p)
+
+ for_each_gt(gt, xe, id) {
+ drm_printf(p, "GT id: %u\n", id);
++ drm_printf(p, "\tTile: %u\n", gt->tile->id);
+ drm_printf(p, "\tType: %s\n",
+ gt->info.type == XE_GT_TYPE_MAIN ? "main" : "media");
+ drm_printf(p, "\tIP ver: %u.%u.%u\n",
+diff --git a/drivers/gpu/drm/xe/xe_force_wake.h b/drivers/gpu/drm/xe/xe_force_wake.h
+index a2577672f4e3e6..1608a55edc846e 100644
+--- a/drivers/gpu/drm/xe/xe_force_wake.h
++++ b/drivers/gpu/drm/xe/xe_force_wake.h
+@@ -46,4 +46,20 @@ xe_force_wake_assert_held(struct xe_force_wake *fw,
+ xe_gt_assert(fw->gt, fw->awake_domains & domain);
+ }
+
++/**
++ * xe_force_wake_ref_has_domain - verifies if the domain is in fw_ref
++ * @fw_ref : the force_wake reference
++ * @domain : forcewake domain to verify
++ *
++ * This function confirms whether the @fw_ref includes a reference to the
++ * specified @domain.
++ *
++ * Return: true if domain is refcounted.
++ */
++static inline bool
++xe_force_wake_ref_has_domain(unsigned int fw_ref, enum xe_force_wake_domains domain)
++{
++ return fw_ref & domain;
++}
++
+ #endif
+diff --git a/drivers/gpu/drm/xe/xe_gt_topology.c b/drivers/gpu/drm/xe/xe_gt_topology.c
+index 0662f71c6ede78..3e113422b88de2 100644
+--- a/drivers/gpu/drm/xe/xe_gt_topology.c
++++ b/drivers/gpu/drm/xe/xe_gt_topology.c
+@@ -5,6 +5,7 @@
+
+ #include "xe_gt_topology.h"
+
++#include <generated/xe_wa_oob.h>
+ #include <linux/bitmap.h>
+ #include <linux/compiler.h>
+
+@@ -12,6 +13,7 @@
+ #include "xe_assert.h"
+ #include "xe_gt.h"
+ #include "xe_mmio.h"
++#include "xe_wa.h"
+
+ static void
+ load_dss_mask(struct xe_gt *gt, xe_dss_mask_t mask, int numregs, ...)
+@@ -129,6 +131,18 @@ load_l3_bank_mask(struct xe_gt *gt, xe_l3_bank_mask_t l3_bank_mask)
+ struct xe_device *xe = gt_to_xe(gt);
+ u32 fuse3 = xe_mmio_read32(gt, MIRROR_FUSE3);
+
++ /*
++ * PTL platforms with media version 30.00 do not provide proper values
++ * for the media GT's L3 bank registers. Skip the readout since we
++ * don't have any way to obtain real values.
++ *
++ * This may get re-described as an official workaround in the future,
++ * but there's no tracking number assigned yet so we use a custom
++ * OOB workaround descriptor.
++ */
++ if (XE_WA(gt, no_media_l3))
++ return;
++
+ if (GRAPHICS_VER(xe) >= 20) {
+ xe_l3_bank_mask_t per_node = {};
+ u32 meml3_en = REG_FIELD_GET(XE2_NODE_ENABLE_MASK, fuse3);
+diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c
+index 9c505d3517cd1a..cd6a5f09d631e4 100644
+--- a/drivers/gpu/drm/xe/xe_guc_ct.c
++++ b/drivers/gpu/drm/xe/xe_guc_ct.c
+@@ -906,6 +906,24 @@ static int guc_ct_send_recv(struct xe_guc_ct *ct, const u32 *action, u32 len,
+ }
+ }
+
++ /*
++ * Occasionally it is seen that the G2H worker starts running after a delay of more than
++ * a second even after being queued and activated by the Linux workqueue subsystem. This
++ * leads to a G2H timeout error. The root cause of the issue lies in the
++ * scheduling latency of the Lunarlake hybrid CPU. The issue disappears if the
++ * Lunarlake atom cores are disabled in the BIOS, but this is beyond the xe KMD.
++ *
++ * TODO: Drop this change once workqueue scheduling delay issue is fixed on LNL Hybrid CPU.
++ */
++ if (!ret) {
++ flush_work(&ct->g2h_worker);
++ if (g2h_fence.done) {
++ xe_gt_warn(gt, "G2H fence %u, action %04x, done\n",
++ g2h_fence.seqno, action[0]);
++ ret = 1;
++ }
++ }
++
+ /*
+ * Ensure we serialize with completion side to prevent UAF with fence going out of scope on
+ * the stack, since we have no clue if it will fire after the timeout before we can erase
+diff --git a/drivers/gpu/drm/xe/xe_guc_log.c b/drivers/gpu/drm/xe/xe_guc_log.c
+index a37ee341942844..be47780ec2a7e7 100644
+--- a/drivers/gpu/drm/xe/xe_guc_log.c
++++ b/drivers/gpu/drm/xe/xe_guc_log.c
+@@ -6,9 +6,12 @@
+ #include "xe_guc_log.h"
+
+ #include <drm/drm_managed.h>
++#include <linux/vmalloc.h>
+
+ #include "xe_bo.h"
++#include "xe_devcoredump.h"
+ #include "xe_gt.h"
++#include "xe_gt_printk.h"
+ #include "xe_map.h"
+ #include "xe_module.h"
+
+@@ -49,32 +52,35 @@ static size_t guc_log_size(void)
+ CAPTURE_BUFFER_SIZE;
+ }
+
++/**
++ * xe_guc_log_print - dump a copy of the GuC log to some useful location
++ * @log: GuC log structure
++ * @p: the printer object to output to
++ */
+ void xe_guc_log_print(struct xe_guc_log *log, struct drm_printer *p)
+ {
+ struct xe_device *xe = log_to_xe(log);
+ size_t size;
+- int i, j;
++ void *copy;
+
+- xe_assert(xe, log->bo);
++ if (!log->bo) {
++ drm_puts(p, "GuC log buffer not allocated");
++ return;
++ }
+
+ size = log->bo->size;
+
+-#define DW_PER_READ 128
+- xe_assert(xe, !(size % (DW_PER_READ * sizeof(u32))));
+- for (i = 0; i < size / sizeof(u32); i += DW_PER_READ) {
+- u32 read[DW_PER_READ];
+-
+- xe_map_memcpy_from(xe, read, &log->bo->vmap, i * sizeof(u32),
+- DW_PER_READ * sizeof(u32));
+-#define DW_PER_PRINT 4
+- for (j = 0; j < DW_PER_READ / DW_PER_PRINT; ++j) {
+- u32 *print = read + j * DW_PER_PRINT;
+-
+- drm_printf(p, "0x%08x 0x%08x 0x%08x 0x%08x\n",
+- *(print + 0), *(print + 1),
+- *(print + 2), *(print + 3));
+- }
++ copy = vmalloc(size);
++ if (!copy) {
++ drm_printf(p, "Failed to allocate %zu", size);
++ return;
+ }
++
++ xe_map_memcpy_from(xe, copy, &log->bo->vmap, 0, size);
++
++ xe_print_blob_ascii85(p, "Log data", copy, 0, size);
++
++ vfree(copy);
+ }
+
+ int xe_guc_log_init(struct xe_guc_log *log)
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index 2927745d689549..fed23304e4da58 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -2193,7 +2193,7 @@ xe_guc_exec_queue_snapshot_print(struct xe_guc_submit_exec_queue_snapshot *snaps
+ if (!snapshot)
+ return;
+
+- drm_printf(p, "\nGuC ID: %d\n", snapshot->guc.id);
++ drm_printf(p, "GuC ID: %d\n", snapshot->guc.id);
+ drm_printf(p, "\tName: %s\n", snapshot->name);
+ drm_printf(p, "\tClass: %d\n", snapshot->class);
+ drm_printf(p, "\tLogical mask: 0x%x\n", snapshot->logical_mask);
+diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
+index c9c3beb3ce8d06..547919e8ce9e45 100644
+--- a/drivers/gpu/drm/xe/xe_hw_engine.c
++++ b/drivers/gpu/drm/xe/xe_hw_engine.c
+@@ -1053,7 +1053,6 @@ void xe_hw_engine_snapshot_print(struct xe_hw_engine_snapshot *snapshot,
+ if (snapshot->hwe->class == XE_ENGINE_CLASS_COMPUTE)
+ drm_printf(p, "\tRCU_MODE: 0x%08x\n",
+ snapshot->reg.rcu_mode);
+- drm_puts(p, "\n");
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/xe/xe_pci.c b/drivers/gpu/drm/xe/xe_pci.c
+index 5e962e72c97ea6..025d649434673d 100644
+--- a/drivers/gpu/drm/xe/xe_pci.c
++++ b/drivers/gpu/drm/xe/xe_pci.c
+@@ -383,10 +383,12 @@ static const struct pci_device_id pciidlist[] = {
+ XE_ADLS_IDS(INTEL_VGA_DEVICE, &adl_s_desc),
+ XE_ADLP_IDS(INTEL_VGA_DEVICE, &adl_p_desc),
+ XE_ADLN_IDS(INTEL_VGA_DEVICE, &adl_n_desc),
++ XE_RPLU_IDS(INTEL_VGA_DEVICE, &adl_p_desc),
+ XE_RPLP_IDS(INTEL_VGA_DEVICE, &adl_p_desc),
+ XE_RPLS_IDS(INTEL_VGA_DEVICE, &adl_s_desc),
+ XE_DG1_IDS(INTEL_VGA_DEVICE, &dg1_desc),
+ XE_ATS_M_IDS(INTEL_VGA_DEVICE, &ats_m_desc),
++ XE_ARL_IDS(INTEL_VGA_DEVICE, &mtl_desc),
+ XE_DG2_IDS(INTEL_VGA_DEVICE, &dg2_desc),
+ XE_MTL_IDS(INTEL_VGA_DEVICE, &mtl_desc),
+ XE_LNL_IDS(INTEL_VGA_DEVICE, &lnl_desc),
+diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
+index 848da8e68c7a83..1c96375bd7df75 100644
+--- a/drivers/gpu/drm/xe/xe_query.c
++++ b/drivers/gpu/drm/xe/xe_query.c
+@@ -9,6 +9,7 @@
+ #include <linux/sched/clock.h>
+
+ #include <drm/ttm/ttm_placement.h>
++#include <generated/xe_wa_oob.h>
+ #include <uapi/drm/xe_drm.h>
+
+ #include "regs/xe_engine_regs.h"
+@@ -23,6 +24,7 @@
+ #include "xe_macros.h"
+ #include "xe_mmio.h"
+ #include "xe_ttm_vram_mgr.h"
++#include "xe_wa.h"
+
+ static const u16 xe_to_user_engine_class[] = {
+ [XE_ENGINE_CLASS_RENDER] = DRM_XE_ENGINE_CLASS_RENDER,
+@@ -458,12 +460,23 @@ static int query_hwconfig(struct xe_device *xe,
+
+ static size_t calc_topo_query_size(struct xe_device *xe)
+ {
+- return xe->info.gt_count *
+- (4 * sizeof(struct drm_xe_query_topology_mask) +
+- sizeof_field(struct xe_gt, fuse_topo.g_dss_mask) +
+- sizeof_field(struct xe_gt, fuse_topo.c_dss_mask) +
+- sizeof_field(struct xe_gt, fuse_topo.l3_bank_mask) +
+- sizeof_field(struct xe_gt, fuse_topo.eu_mask_per_dss));
++ struct xe_gt *gt;
++ size_t query_size = 0;
++ int id;
++
++ for_each_gt(gt, xe, id) {
++ query_size += 3 * sizeof(struct drm_xe_query_topology_mask) +
++ sizeof_field(struct xe_gt, fuse_topo.g_dss_mask) +
++ sizeof_field(struct xe_gt, fuse_topo.c_dss_mask) +
++ sizeof_field(struct xe_gt, fuse_topo.eu_mask_per_dss);
++
++ /* L3bank mask may not be available for some GTs */
++ if (!XE_WA(gt, no_media_l3))
++ query_size += sizeof(struct drm_xe_query_topology_mask) +
++ sizeof_field(struct xe_gt, fuse_topo.l3_bank_mask);
++ }
++
++ return query_size;
+ }
+
+ static int copy_mask(void __user **ptr,
+@@ -516,11 +529,18 @@ static int query_gt_topology(struct xe_device *xe,
+ if (err)
+ return err;
+
+- topo.type = DRM_XE_TOPO_L3_BANK;
+- err = copy_mask(&query_ptr, &topo, gt->fuse_topo.l3_bank_mask,
+- sizeof(gt->fuse_topo.l3_bank_mask));
+- if (err)
+- return err;
++ /*
++ * If the kernel doesn't have a way to obtain a correct L3bank
++ * mask, then it's better to omit L3 from the query rather than
++ * reporting bogus or zeroed information to userspace.
++ */
++ if (!XE_WA(gt, no_media_l3)) {
++ topo.type = DRM_XE_TOPO_L3_BANK;
++ err = copy_mask(&query_ptr, &topo, gt->fuse_topo.l3_bank_mask,
++ sizeof(gt->fuse_topo.l3_bank_mask));
++ if (err)
++ return err;
++ }
+
+ topo.type = gt->fuse_topo.eu_type == XE_GT_EU_TYPE_SIMD16 ?
+ DRM_XE_TOPO_SIMD16_EU_PER_DSS :
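
The xe_query.c change above works because the sizing pass and the copy-out pass gate the optional L3-bank section on the same XE_WA(gt, no_media_l3) predicate; if they ever disagreed, the ioctl's check of the user-supplied buffer size would break. A minimal standalone sketch of that pattern, with illustrative stand-in types (not the driver's real structures):

        #include <stddef.h>
        #include <stdbool.h>

        struct topo_hdr { int type; };              /* stand-in for drm_xe_query_topology_mask */
        struct l3_mask  { unsigned long bits[4]; }; /* stand-in for fuse_topo.l3_bank_mask */

        /* Sizing pass: must use the exact same predicate as the copy-out
         * pass, otherwise the size check against the user buffer breaks.
         * Headers of the three always-present sections only; their mask
         * payloads are omitted here for brevity. */
        static size_t calc_size(bool has_l3_mask)
        {
                size_t sz = 3 * sizeof(struct topo_hdr);

                if (has_l3_mask)                    /* optional L3 section */
                        sz += sizeof(struct topo_hdr) + sizeof(struct l3_mask);
                return sz;
        }
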
+diff --git a/drivers/gpu/drm/xe/xe_wa.c b/drivers/gpu/drm/xe/xe_wa.c
+index 353936a0f877de..37e592b2bf062a 100644
+--- a/drivers/gpu/drm/xe/xe_wa.c
++++ b/drivers/gpu/drm/xe/xe_wa.c
+@@ -251,6 +251,34 @@ static const struct xe_rtp_entry_sr gt_was[] = {
+ XE_RTP_ENTRY_FLAG(FOREACH_ENGINE),
+ },
+
++ /* Xe3_LPG */
++
++ { XE_RTP_NAME("14021871409"),
++ XE_RTP_RULES(GRAPHICS_VERSION(3000), GRAPHICS_STEP(A0, B0)),
++ XE_RTP_ACTIONS(SET(UNSLCGCTL9454, LSCFE_CLKGATE_DIS))
++ },
++
++ /* Xe3_LPM */
++
++ { XE_RTP_NAME("16021867713"),
++ XE_RTP_RULES(MEDIA_VERSION(3000),
++ ENGINE_CLASS(VIDEO_DECODE)),
++ XE_RTP_ACTIONS(SET(VDBOX_CGCTL3F1C(0), MFXPIPE_CLKGATE_DIS)),
++ XE_RTP_ENTRY_FLAG(FOREACH_ENGINE),
++ },
++ { XE_RTP_NAME("16021865536"),
++ XE_RTP_RULES(MEDIA_VERSION(3000),
++ ENGINE_CLASS(VIDEO_DECODE)),
++ XE_RTP_ACTIONS(SET(VDBOX_CGCTL3F10(0), IECPUNIT_CLKGATE_DIS)),
++ XE_RTP_ENTRY_FLAG(FOREACH_ENGINE),
++ },
++ { XE_RTP_NAME("14021486841"),
++ XE_RTP_RULES(MEDIA_VERSION(3000), MEDIA_STEP(A0, B0),
++ ENGINE_CLASS(VIDEO_DECODE)),
++ XE_RTP_ACTIONS(SET(VDBOX_CGCTL3F10(0), RAMDFTUNIT_CLKGATE_DIS)),
++ XE_RTP_ENTRY_FLAG(FOREACH_ENGINE),
++ },
++
+ {}
+ };
+
+@@ -567,6 +595,13 @@ static const struct xe_rtp_entry_sr engine_was[] = {
+ XE_RTP_ACTION_FLAG(ENGINE_BASE)))
+ },
+
++ /* Xe3_LPG */
++
++ { XE_RTP_NAME("14021402888"),
++ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(3000, 3001), FUNC(xe_rtp_match_first_render_or_compute)),
++ XE_RTP_ACTIONS(SET(HALF_SLICE_CHICKEN7, CLEAR_OPTIMIZATION_DISABLE))
++ },
++
+ {}
+ };
+
+@@ -742,6 +777,18 @@ static const struct xe_rtp_entry_sr lrc_was[] = {
+ XE_RTP_ACTIONS(SET(CHICKEN_RASTER_1, DIS_CLIP_NEGATIVE_BOUNDING_BOX))
+ },
+
++ /* Xe3_LPG */
++ { XE_RTP_NAME("14021490052"),
++ XE_RTP_RULES(GRAPHICS_VERSION(3000), GRAPHICS_STEP(A0, B0),
++ ENGINE_CLASS(RENDER)),
++ XE_RTP_ACTIONS(SET(FF_MODE,
++ DIS_MESH_PARTIAL_AUTOSTRIP |
++ DIS_MESH_AUTOSTRIP),
++ SET(VFLSKPD,
++ DIS_PARTIAL_AUTOSTRIP |
++ DIS_AUTOSTRIP))
++ },
++
+ {}
+ };
+
+diff --git a/drivers/gpu/drm/xe/xe_wa_oob.rules b/drivers/gpu/drm/xe/xe_wa_oob.rules
+index 920ca506014661..264d6e116499ce 100644
+--- a/drivers/gpu/drm/xe/xe_wa_oob.rules
++++ b/drivers/gpu/drm/xe/xe_wa_oob.rules
+@@ -33,7 +33,9 @@
+ GRAPHICS_VERSION(2004)
+ 22019338487 MEDIA_VERSION(2000)
+ GRAPHICS_VERSION(2001)
++ MEDIA_VERSION(3000), MEDIA_STEP(A0, B0)
+ 22019338487_display PLATFORM(LUNARLAKE)
+ 16023588340 GRAPHICS_VERSION(2001)
+ 14019789679 GRAPHICS_VERSION(1255)
+ GRAPHICS_VERSION_RANGE(1270, 2004)
++no_media_l3 MEDIA_VERSION(3000)
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 582fd234eec789..935ccc38d12958 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -2674,9 +2674,10 @@ static bool hid_check_device_match(struct hid_device *hdev,
+ /*
+ * hid-generic implements .match(), so we must be dealing with a
+ * different HID driver here, and can simply check if
+- * hid_ignore_special_drivers is set or not.
++ * hid_ignore_special_drivers or HID_QUIRK_IGNORE_SPECIAL_DRIVER
++ * are set or not.
+ */
+- return !hid_ignore_special_drivers;
++ return !hid_ignore_special_drivers && !(hdev->quirks & HID_QUIRK_IGNORE_SPECIAL_DRIVER);
+ }
+
+ static int __hid_device_probe(struct hid_device *hdev, struct hid_driver *hdrv)
+diff --git a/drivers/hid/hid-generic.c b/drivers/hid/hid-generic.c
+index d2439399fb357a..9e04c6d0fcc874 100644
+--- a/drivers/hid/hid-generic.c
++++ b/drivers/hid/hid-generic.c
+@@ -40,6 +40,9 @@ static bool hid_generic_match(struct hid_device *hdev,
+ if (ignore_special_driver)
+ return true;
+
++ if (hdev->quirks & HID_QUIRK_IGNORE_SPECIAL_DRIVER)
++ return true;
++
+ if (hdev->quirks & HID_QUIRK_HAVE_SPECIAL_DRIVER)
+ return false;
+
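
For illustration, the new quirk gives per-device granularity to what the module-wide hid_ignore_special_drivers parameter does globally. A hypothetical fragment (not from the patch) showing how a quirk table entry or a probe path could request it:

        /* Hypothetical use: force hid-generic to take this one device even
         * though a special driver matches it, without flipping the global
         * hid_ignore_special_drivers toggle for every device. */
        hdev->quirks |= HID_QUIRK_IGNORE_SPECIAL_DRIVER;
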
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 92cff3f2658cf5..0f23be98c56e22 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -94,6 +94,7 @@
+ #define USB_DEVICE_ID_APPLE_MAGICMOUSE2 0x0269
+ #define USB_DEVICE_ID_APPLE_MAGICTRACKPAD 0x030e
+ #define USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 0x0265
++#define USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC 0x0324
+ #define USB_DEVICE_ID_APPLE_FOUNTAIN_ANSI 0x020e
+ #define USB_DEVICE_ID_APPLE_FOUNTAIN_ISO 0x020f
+ #define USB_DEVICE_ID_APPLE_GEYSER_ANSI 0x0214
+diff --git a/drivers/hid/hid-magicmouse.c b/drivers/hid/hid-magicmouse.c
+index 8a73b59e0827b9..ec110dea87726d 100644
+--- a/drivers/hid/hid-magicmouse.c
++++ b/drivers/hid/hid-magicmouse.c
+@@ -227,7 +227,9 @@ static void magicmouse_emit_touch(struct magicmouse_sc *msc, int raw_id, u8 *tda
+ touch_minor = tdata[4];
+ state = tdata[7] & TOUCH_STATE_MASK;
+ down = state != TOUCH_STATE_NONE;
+- } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ input->id.product ==
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ id = tdata[8] & 0xf;
+ x = (tdata[1] << 27 | tdata[0] << 19) >> 19;
+ y = -((tdata[3] << 30 | tdata[2] << 22 | tdata[1] << 14) >> 19);
+@@ -259,8 +261,9 @@ static void magicmouse_emit_touch(struct magicmouse_sc *msc, int raw_id, u8 *tda
+ /* If requested, emulate a scroll wheel by detecting small
+ * vertical touch motions.
+ */
+- if (emulate_scroll_wheel && (input->id.product !=
+- USB_DEVICE_ID_APPLE_MAGICTRACKPAD2)) {
++ if (emulate_scroll_wheel &&
++ input->id.product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 &&
++ input->id.product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ unsigned long now = jiffies;
+ int step_x = msc->touches[id].scroll_x - x;
+ int step_y = msc->touches[id].scroll_y - y;
+@@ -359,7 +362,9 @@ static void magicmouse_emit_touch(struct magicmouse_sc *msc, int raw_id, u8 *tda
+ input_report_abs(input, ABS_MT_POSITION_X, x);
+ input_report_abs(input, ABS_MT_POSITION_Y, y);
+
+- if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2)
++ if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ input->id.product ==
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC)
+ input_report_abs(input, ABS_MT_PRESSURE, pressure);
+
+ if (report_undeciphered) {
+@@ -367,7 +372,9 @@ static void magicmouse_emit_touch(struct magicmouse_sc *msc, int raw_id, u8 *tda
+ input->id.product == USB_DEVICE_ID_APPLE_MAGICMOUSE2)
+ input_event(input, EV_MSC, MSC_RAW, tdata[7]);
+ else if (input->id.product !=
+- USB_DEVICE_ID_APPLE_MAGICTRACKPAD2)
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 &&
++ input->id.product !=
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC)
+ input_event(input, EV_MSC, MSC_RAW, tdata[8]);
+ }
+ }
+@@ -493,7 +500,9 @@ static int magicmouse_raw_event(struct hid_device *hdev,
+ magicmouse_emit_buttons(msc, clicks & 3);
+ input_report_rel(input, REL_X, x);
+ input_report_rel(input, REL_Y, y);
+- } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ input->id.product ==
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ input_mt_sync_frame(input);
+ input_report_key(input, BTN_MOUSE, clicks & 1);
+ } else { /* USB_DEVICE_ID_APPLE_MAGICTRACKPAD */
+@@ -545,7 +554,9 @@ static int magicmouse_setup_input(struct input_dev *input, struct hid_device *hd
+ __set_bit(REL_WHEEL_HI_RES, input->relbit);
+ __set_bit(REL_HWHEEL_HI_RES, input->relbit);
+ }
+- } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ input->id.product ==
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ /* If the trackpad has been connected to a Mac, the name is
+ * automatically personalized, e.g., "José Expósito's Trackpad".
+ * When connected through Bluetooth, the personalized name is
+@@ -621,7 +632,9 @@ static int magicmouse_setup_input(struct input_dev *input, struct hid_device *hd
+ MOUSE_RES_X);
+ input_abs_set_res(input, ABS_MT_POSITION_Y,
+ MOUSE_RES_Y);
+- } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ } else if (input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ input->id.product ==
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ input_set_abs_params(input, ABS_MT_PRESSURE, 0, 253, 0, 0);
+ input_set_abs_params(input, ABS_PRESSURE, 0, 253, 0, 0);
+ input_set_abs_params(input, ABS_MT_ORIENTATION, -3, 4, 0, 0);
+@@ -660,7 +673,8 @@ static int magicmouse_setup_input(struct input_dev *input, struct hid_device *hd
+ input_set_events_per_packet(input, 60);
+
+ if (report_undeciphered &&
+- input->id.product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ input->id.product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 &&
++ input->id.product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ __set_bit(EV_MSC, input->evbit);
+ __set_bit(MSC_RAW, input->mscbit);
+ }
+@@ -685,7 +699,9 @@ static int magicmouse_input_mapping(struct hid_device *hdev,
+
+ /* Magic Trackpad does not give relative data after switching to MT */
+ if ((hi->input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD ||
+- hi->input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) &&
++ hi->input->id.product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ hi->input->id.product ==
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) &&
+ field->flags & HID_MAIN_ITEM_RELATIVE)
+ return -1;
+
+@@ -721,7 +737,8 @@ static int magicmouse_enable_multitouch(struct hid_device *hdev)
+ int ret;
+ int feature_size;
+
+- if (hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ if (hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ if (hdev->vendor == BT_VENDOR_ID_APPLE) {
+ feature_size = sizeof(feature_mt_trackpad2_bt);
+ feature = feature_mt_trackpad2_bt;
+@@ -766,7 +783,8 @@ static int magicmouse_fetch_battery(struct hid_device *hdev)
+
+ if (!hdev->battery || hdev->vendor != USB_VENDOR_ID_APPLE ||
+ (hdev->product != USB_DEVICE_ID_APPLE_MAGICMOUSE2 &&
+- hdev->product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2))
++ hdev->product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 &&
++ hdev->product != USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC))
+ return -1;
+
+ report_enum = &hdev->report_enum[hdev->battery_report_type];
+@@ -835,7 +853,9 @@ static int magicmouse_probe(struct hid_device *hdev,
+
+ if (id->vendor == USB_VENDOR_ID_APPLE &&
+ (id->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2 ||
+- (id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 && hdev->type != HID_TYPE_USBMOUSE)))
++ ((id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) &&
++ hdev->type != HID_TYPE_USBMOUSE)))
+ return 0;
+
+ if (!msc->input) {
+@@ -850,7 +870,8 @@ static int magicmouse_probe(struct hid_device *hdev,
+ else if (id->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2)
+ report = hid_register_report(hdev, HID_INPUT_REPORT,
+ MOUSE2_REPORT_ID, 0);
+- else if (id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) {
++ else if (id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ id->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) {
+ if (id->vendor == BT_VENDOR_ID_APPLE)
+ report = hid_register_report(hdev, HID_INPUT_REPORT,
+ TRACKPAD2_BT_REPORT_ID, 0);
+@@ -920,7 +941,8 @@ static const __u8 *magicmouse_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ */
+ if (hdev->vendor == USB_VENDOR_ID_APPLE &&
+ (hdev->product == USB_DEVICE_ID_APPLE_MAGICMOUSE2 ||
+- hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2) &&
++ hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
++ hdev->product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC) &&
+ *rsize == 83 && rdesc[46] == 0x84 && rdesc[58] == 0x85) {
+ hid_info(hdev,
+ "fixing up magicmouse battery report descriptor\n");
+@@ -951,6 +973,10 @@ static const struct hid_device_id magic_mice[] = {
+ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2), .driver_data = 0 },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE,
+ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2), .driver_data = 0 },
++ { HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE,
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC), .driver_data = 0 },
++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE,
++ USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC), .driver_data = 0 },
+ { }
+ };
+ MODULE_DEVICE_TABLE(hid, magic_mice);
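
The USBC variant check above is repeated at every product comparison in the driver. As a design note, a small predicate could collapse the two-way checks; this helper is hypothetical and not part of the patch:

        /* Hypothetical helper collapsing the repeated product checks. */
        static bool is_magic_trackpad2(u16 product)
        {
                return product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2 ||
                       product == USB_DEVICE_ID_APPLE_MAGICTRACKPAD2_USBC;
        }
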
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 43664a24176fca..4e87380d3edd6b 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -414,7 +414,19 @@ static int i2c_hid_set_power(struct i2c_hid *ihid, int power_state)
+
+ i2c_hid_dbg(ihid, "%s\n", __func__);
+
++ /*
++ * Some STM-based devices need 400µs after a rising clock edge to wake
++ * from deep sleep, in which case the first request will fail due to
++ * the address not being acknowledged. Try after a short sleep to see
++ * if the device came alive on the bus. Certain Weida Tech devices also
++ * need this.
++ */
+ ret = i2c_hid_set_power_command(ihid, power_state);
++ if (ret && power_state == I2C_HID_PWR_ON) {
++ usleep_range(400, 500);
++ ret = i2c_hid_set_power_command(ihid, I2C_HID_PWR_ON);
++ }
++
+ if (ret)
+ dev_err(&ihid->client->dev,
+ "failed to change power setting.\n");
+@@ -976,14 +988,6 @@ static int i2c_hid_core_resume(struct i2c_hid *ihid)
+
+ enable_irq(client->irq);
+
+- /* Make sure the device is awake on the bus */
+- ret = i2c_hid_probe_address(ihid);
+- if (ret < 0) {
+- dev_err(&client->dev, "nothing at address after resume: %d\n",
+- ret);
+- return -ENXIO;
+- }
+-
+ /* On Goodix 27c6:0d42 wait extra time before device wakeup.
+ * It's not clear why but if we send wakeup too early, the device will
+ * never trigger input interrupts.
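
Isolated from the surrounding code, the i2c-hid change amounts to a single-retry-after-delay pattern. A minimal sketch within the driver context, using the same calls the hunk above uses:

        /* Issue the power-on once; if the device NAKs because it is still
         * waking from deep sleep, wait 400-500us and retry a single time. */
        static int power_on_with_wake_retry(struct i2c_hid *ihid)
        {
                int ret = i2c_hid_set_power_command(ihid, I2C_HID_PWR_ON);

                if (ret) {                      /* likely still waking: address NAKed */
                        usleep_range(400, 500); /* per the STM/Weida wake time above */
                        ret = i2c_hid_set_power_command(ihid, I2C_HID_PWR_ON);
                }
                return ret;
        }
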
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 2bc45b24075c3f..9843b52bd017a0 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -2241,7 +2241,8 @@ static void wacom_update_name(struct wacom *wacom, const char *suffix)
+ if (hid_is_usb(wacom->hdev)) {
+ struct usb_interface *intf = to_usb_interface(wacom->hdev->dev.parent);
+ struct usb_device *dev = interface_to_usbdev(intf);
+- product_name = dev->product;
++ if (dev->product != NULL)
++ product_name = dev->product;
+ }
+
+ if (wacom->hdev->bus == BUS_I2C) {
+diff --git a/drivers/hwmon/nct6775-platform.c b/drivers/hwmon/nct6775-platform.c
+index 096f1daa8f2bcf..1218a3b449a801 100644
+--- a/drivers/hwmon/nct6775-platform.c
++++ b/drivers/hwmon/nct6775-platform.c
+@@ -1350,6 +1350,8 @@ static const char * const asus_msi_boards[] = {
+ "Pro H610M-CT D4",
+ "Pro H610T D4",
+ "Pro Q670M-C",
++ "Pro WS 600M-CL",
++ "Pro WS 665-ACE",
+ "Pro WS W680-ACE",
+ "Pro WS W680-ACE IPMI",
+ "Pro WS W790-ACE",
+diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig
+index 6b3ba7e5723aa1..2254abda5c46c9 100644
+--- a/drivers/i2c/busses/Kconfig
++++ b/drivers/i2c/busses/Kconfig
+@@ -160,6 +160,7 @@ config I2C_I801
+ Meteor Lake (SOC and PCH)
+ Birch Stream (SOC)
+ Arrow Lake (SOC)
++ Panther Lake (SOC)
+
+ This driver can also be built as a module. If so, the module
+ will be called i2c-i801.
+diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
+index 299fe9d3afab0a..75dab01d43a750 100644
+--- a/drivers/i2c/busses/i2c-i801.c
++++ b/drivers/i2c/busses/i2c-i801.c
+@@ -81,6 +81,8 @@
+ * Meteor Lake PCH-S (PCH) 0x7f23 32 hard yes yes yes
+ * Birch Stream (SOC) 0x5796 32 hard yes yes yes
+ * Arrow Lake-H (SOC) 0x7722 32 hard yes yes yes
++ * Panther Lake-H (SOC) 0xe322 32 hard yes yes yes
++ * Panther Lake-P (SOC) 0xe422 32 hard yes yes yes
+ *
+ * Features supported by this driver:
+ * Software PEC no
+@@ -261,6 +263,8 @@
+ #define PCI_DEVICE_ID_INTEL_CANNONLAKE_H_SMBUS 0xa323
+ #define PCI_DEVICE_ID_INTEL_COMETLAKE_V_SMBUS 0xa3a3
+ #define PCI_DEVICE_ID_INTEL_METEOR_LAKE_SOC_S_SMBUS 0xae22
++#define PCI_DEVICE_ID_INTEL_PANTHER_LAKE_H_SMBUS 0xe322
++#define PCI_DEVICE_ID_INTEL_PANTHER_LAKE_P_SMBUS 0xe422
+
+ struct i801_mux_config {
+ char *gpio_chip;
+@@ -1055,6 +1059,8 @@ static const struct pci_device_id i801_ids[] = {
+ { PCI_DEVICE_DATA(INTEL, METEOR_LAKE_PCH_S_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ { PCI_DEVICE_DATA(INTEL, BIRCH_STREAM_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ { PCI_DEVICE_DATA(INTEL, ARROW_LAKE_H_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
++ { PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_H_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
++ { PCI_DEVICE_DATA(INTEL, PANTHER_LAKE_P_SMBUS, FEATURES_ICH5 | FEATURE_TCO_CNL) },
+ { 0, }
+ };
+
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index da83c49223b33e..42310c9a00c2d1 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -282,7 +282,8 @@ static int i3c_device_uevent(const struct device *dev, struct kobj_uevent_env *e
+ struct i3c_device_info devinfo;
+ u16 manuf, part, ext;
+
+- i3c_device_get_info(i3cdev, &devinfo);
++ if (i3cdev->desc)
++ devinfo = i3cdev->desc->info;
+ manuf = I3C_PID_MANUF_ID(devinfo.pid);
+ part = I3C_PID_PART_ID(devinfo.pid);
+ ext = I3C_PID_EXTRA_INFO(devinfo.pid);
+@@ -345,10 +346,10 @@ const struct bus_type i3c_bus_type = {
+ EXPORT_SYMBOL_GPL(i3c_bus_type);
+
+ static enum i3c_addr_slot_status
+-i3c_bus_get_addr_slot_status(struct i3c_bus *bus, u16 addr)
++i3c_bus_get_addr_slot_status_mask(struct i3c_bus *bus, u16 addr, u32 mask)
+ {
+ unsigned long status;
+- int bitpos = addr * 2;
++ int bitpos = addr * I3C_ADDR_SLOT_STATUS_BITS;
+
+ if (addr > I2C_MAX_ADDR)
+ return I3C_ADDR_SLOT_RSVD;
+@@ -356,22 +357,33 @@ i3c_bus_get_addr_slot_status(struct i3c_bus *bus, u16 addr)
+ status = bus->addrslots[bitpos / BITS_PER_LONG];
+ status >>= bitpos % BITS_PER_LONG;
+
+- return status & I3C_ADDR_SLOT_STATUS_MASK;
++ return status & mask;
+ }
+
+-static void i3c_bus_set_addr_slot_status(struct i3c_bus *bus, u16 addr,
+- enum i3c_addr_slot_status status)
++static enum i3c_addr_slot_status
++i3c_bus_get_addr_slot_status(struct i3c_bus *bus, u16 addr)
++{
++ return i3c_bus_get_addr_slot_status_mask(bus, addr, I3C_ADDR_SLOT_STATUS_MASK);
++}
++
++static void i3c_bus_set_addr_slot_status_mask(struct i3c_bus *bus, u16 addr,
++ enum i3c_addr_slot_status status, u32 mask)
+ {
+- int bitpos = addr * 2;
++ int bitpos = addr * I3C_ADDR_SLOT_STATUS_BITS;
+ unsigned long *ptr;
+
+ if (addr > I2C_MAX_ADDR)
+ return;
+
+ ptr = bus->addrslots + (bitpos / BITS_PER_LONG);
+- *ptr &= ~((unsigned long)I3C_ADDR_SLOT_STATUS_MASK <<
+- (bitpos % BITS_PER_LONG));
+- *ptr |= (unsigned long)status << (bitpos % BITS_PER_LONG);
++ *ptr &= ~((unsigned long)mask << (bitpos % BITS_PER_LONG));
++ *ptr |= ((unsigned long)status & mask) << (bitpos % BITS_PER_LONG);
++}
++
++static void i3c_bus_set_addr_slot_status(struct i3c_bus *bus, u16 addr,
++ enum i3c_addr_slot_status status)
++{
++ i3c_bus_set_addr_slot_status_mask(bus, addr, status, I3C_ADDR_SLOT_STATUS_MASK);
+ }
+
+ static bool i3c_bus_dev_addr_is_avail(struct i3c_bus *bus, u8 addr)
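
The hunk above widens each address slot from a hard-coded 2 bits to I3C_ADDR_SLOT_STATUS_BITS, making room for an extra "desired" flag next to the FREE/RSVD/I2C/I3C status. A standalone sketch of the packing, assuming 4 bits per slot (the constant's definition is not shown in this excerpt):

        #include <limits.h>

        #define BITS_PER_LONG   (sizeof(unsigned long) * CHAR_BIT)
        #define SLOT_BITS       4       /* assumed: 2 status bits plus room
                                           for the "desired" flag */
        #define SLOT_MASK       ((1ul << SLOT_BITS) - 1)

        /* Mirror of i3c_bus_get_addr_slot_status_mask(): pull the field for
         * @addr out of a packed array of unsigned longs. With 4 bits per
         * slot and power-of-two word sizes, a field never straddles a word
         * boundary. */
        static unsigned long get_slot(const unsigned long *slots,
                                      unsigned int addr, unsigned long mask)
        {
                unsigned int bitpos = addr * SLOT_BITS;

                return (slots[bitpos / BITS_PER_LONG] >>
                        (bitpos % BITS_PER_LONG)) & mask;
        }
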
+@@ -383,13 +395,44 @@ static bool i3c_bus_dev_addr_is_avail(struct i3c_bus *bus, u8 addr)
+ return status == I3C_ADDR_SLOT_FREE;
+ }
+
++/*
++ * ┌────┬─────────────┬───┬─────────┬───┐
++ * │S/Sr│ 7'h7E RnW=0 │ACK│ ENTDAA │ T ├────┐
++ * └────┴─────────────┴───┴─────────┴───┘ │
++ * ┌─────────────────────────────────────────┘
++ * │ ┌──┬─────────────┬───┬─────────────────┬────────────────┬───┬─────────┐
++ * └─►│Sr│7'h7E RnW=1 │ACK│48bit UID BCR DCR│Assign 7bit Addr│PAR│ ACK/NACK│
++ * └──┴─────────────┴───┴─────────────────┴────────────────┴───┴─────────┘
++ * Some master controllers (such as HCI) need to prepare the entire above transaction before
++ * sending it out to the I3C bus. This means that a 7-bit dynamic address needs to be allocated
++ * before knowing the target device's UID information.
++ *
++ * However, some I3C targets may request specific addresses ("init_dyn_addr"),
++ * typically via the DT "assigned-address" property; lower addresses have
++ * higher IBI priority. When possible, i3c_bus_get_free_addr() prefers to
++ * return a free address that is not on the list of desired addresses, so
++ * that a device with an "init_dyn_addr" can still switch to it when it
++ * hot-joins the I3C bus. Otherwise, if the "init_dyn_addr" were already in
++ * use by another I3C device, the target device could never switch to its
++ * desired address.
++ *
++ * If that first pass fails, fall back to returning one of the remaining
++ * unassigned addresses, regardless of its state in the desired list.
++ */
+ static int i3c_bus_get_free_addr(struct i3c_bus *bus, u8 start_addr)
+ {
+ enum i3c_addr_slot_status status;
+ u8 addr;
+
+ for (addr = start_addr; addr < I3C_MAX_ADDR; addr++) {
+- status = i3c_bus_get_addr_slot_status(bus, addr);
++ status = i3c_bus_get_addr_slot_status_mask(bus, addr,
++ I3C_ADDR_SLOT_EXT_STATUS_MASK);
++ if (status == I3C_ADDR_SLOT_FREE)
++ return addr;
++ }
++
++ for (addr = start_addr; addr < I3C_MAX_ADDR; addr++) {
++ status = i3c_bus_get_addr_slot_status_mask(bus, addr,
++ I3C_ADDR_SLOT_STATUS_MASK);
+ if (status == I3C_ADDR_SLOT_FREE)
+ return addr;
+ }
+@@ -1506,16 +1549,9 @@ static int i3c_master_reattach_i3c_dev(struct i3c_dev_desc *dev,
+ u8 old_dyn_addr)
+ {
+ struct i3c_master_controller *master = i3c_dev_get_master(dev);
+- enum i3c_addr_slot_status status;
+ int ret;
+
+- if (dev->info.dyn_addr != old_dyn_addr &&
+- (!dev->boardinfo ||
+- dev->info.dyn_addr != dev->boardinfo->init_dyn_addr)) {
+- status = i3c_bus_get_addr_slot_status(&master->bus,
+- dev->info.dyn_addr);
+- if (status != I3C_ADDR_SLOT_FREE)
+- return -EBUSY;
++ if (dev->info.dyn_addr != old_dyn_addr) {
+ i3c_bus_set_addr_slot_status(&master->bus,
+ dev->info.dyn_addr,
+ I3C_ADDR_SLOT_I3C_DEV);
+@@ -1918,9 +1954,11 @@ static int i3c_master_bus_init(struct i3c_master_controller *master)
+ goto err_rstdaa;
+ }
+
+- i3c_bus_set_addr_slot_status(&master->bus,
+- i3cboardinfo->init_dyn_addr,
+- I3C_ADDR_SLOT_I3C_DEV);
++			/* Do not mark the address as occupied until a real device exists on the bus */
++ i3c_bus_set_addr_slot_status_mask(&master->bus,
++ i3cboardinfo->init_dyn_addr,
++ I3C_ADDR_SLOT_EXT_DESIRED,
++ I3C_ADDR_SLOT_EXT_STATUS_MASK);
+
+ /*
+ * Only try to create/attach devices that have a static
+@@ -2088,7 +2126,8 @@ int i3c_master_add_i3c_dev_locked(struct i3c_master_controller *master,
+ else
+ expected_dyn_addr = newdev->info.dyn_addr;
+
+- if (newdev->info.dyn_addr != expected_dyn_addr) {
++ if (newdev->info.dyn_addr != expected_dyn_addr &&
++ i3c_bus_get_addr_slot_status(&master->bus, expected_dyn_addr) == I3C_ADDR_SLOT_FREE) {
+ /*
+ * Try to apply the expected dynamic address. If it fails, keep
+ * the address assigned by the master.
+diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
+index a918e96b21fddc..13adc584009429 100644
+--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
++++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
+@@ -159,10 +159,10 @@ static void hci_dma_cleanup(struct i3c_hci *hci)
+ for (i = 0; i < rings->total; i++) {
+ rh = &rings->headers[i];
+
++ rh_reg_write(INTR_SIGNAL_ENABLE, 0);
+ rh_reg_write(RING_CONTROL, 0);
+ rh_reg_write(CR_SETUP, 0);
+ rh_reg_write(IBI_SETUP, 0);
+- rh_reg_write(INTR_SIGNAL_ENABLE, 0);
+
+ if (rh->xfer)
+ dma_free_coherent(&hci->master.dev,
+diff --git a/drivers/iio/adc/ad7192.c b/drivers/iio/adc/ad7192.c
+index 7042ddfdfc03ee..955e9eff0099e5 100644
+--- a/drivers/iio/adc/ad7192.c
++++ b/drivers/iio/adc/ad7192.c
+@@ -1394,6 +1394,9 @@ static int ad7192_probe(struct spi_device *spi)
+ st->int_vref_mv = ret == -ENODEV ? avdd_mv : ret / MILLI;
+
+ st->chip_info = spi_get_device_match_data(spi);
++ if (!st->chip_info)
++ return -ENODEV;
++
+ indio_dev->name = st->chip_info->name;
+ indio_dev->modes = INDIO_DIRECT_MODE;
+ indio_dev->info = st->chip_info->info;
+diff --git a/drivers/iio/light/ltr501.c b/drivers/iio/light/ltr501.c
+index 8c516ede911619..640a5d3aa2c6e7 100644
+--- a/drivers/iio/light/ltr501.c
++++ b/drivers/iio/light/ltr501.c
+@@ -1613,6 +1613,8 @@ static const struct acpi_device_id ltr_acpi_match[] = {
+ { "LTER0501", ltr501 },
+ { "LTER0559", ltr559 },
+ { "LTER0301", ltr301 },
++ /* https://www.catalog.update.microsoft.com/Search.aspx?q=lter0303 */
++ { "LTER0303", ltr303 },
+ { },
+ };
+ MODULE_DEVICE_TABLE(acpi, ltr_acpi_match);
+diff --git a/drivers/iio/magnetometer/af8133j.c b/drivers/iio/magnetometer/af8133j.c
+index d81d89af6283b7..acd291f3e7924c 100644
+--- a/drivers/iio/magnetometer/af8133j.c
++++ b/drivers/iio/magnetometer/af8133j.c
+@@ -312,10 +312,11 @@ static int af8133j_set_scale(struct af8133j_data *data,
+ * When suspended, just store the new range to data->range to be
+ * applied later during power up.
+ */
+- if (!pm_runtime_status_suspended(dev))
++ if (!pm_runtime_status_suspended(dev)) {
+ scoped_guard(mutex, &data->mutex)
+ ret = regmap_write(data->regmap,
+ AF8133J_REG_RANGE, range);
++ }
+
+ pm_runtime_enable(dev);
+
+diff --git a/drivers/iio/magnetometer/yamaha-yas530.c b/drivers/iio/magnetometer/yamaha-yas530.c
+index 65011a8598d332..c55a38650c0d47 100644
+--- a/drivers/iio/magnetometer/yamaha-yas530.c
++++ b/drivers/iio/magnetometer/yamaha-yas530.c
+@@ -372,6 +372,7 @@ static int yas537_measure(struct yas5xx *yas5xx, u16 *t, u16 *x, u16 *y1, u16 *y
+ u8 data[8];
+ u16 xy1y2[3];
+ s32 h[3], s[3];
++ int half_range = BIT(13);
+ int i, ret;
+
+ mutex_lock(&yas5xx->lock);
+@@ -406,13 +407,13 @@ static int yas537_measure(struct yas5xx *yas5xx, u16 *t, u16 *x, u16 *y1, u16 *y
+ /* The second version of YAS537 needs to include calibration coefficients */
+ if (yas5xx->version == YAS537_VERSION_1) {
+ for (i = 0; i < 3; i++)
+- s[i] = xy1y2[i] - BIT(13);
+- h[0] = (c->k * (128 * s[0] + c->a2 * s[1] + c->a3 * s[2])) / BIT(13);
+- h[1] = (c->k * (c->a4 * s[0] + c->a5 * s[1] + c->a6 * s[2])) / BIT(13);
+- h[2] = (c->k * (c->a7 * s[0] + c->a8 * s[1] + c->a9 * s[2])) / BIT(13);
++ s[i] = xy1y2[i] - half_range;
++ h[0] = (c->k * (128 * s[0] + c->a2 * s[1] + c->a3 * s[2])) / half_range;
++ h[1] = (c->k * (c->a4 * s[0] + c->a5 * s[1] + c->a6 * s[2])) / half_range;
++ h[2] = (c->k * (c->a7 * s[0] + c->a8 * s[1] + c->a9 * s[2])) / half_range;
+ for (i = 0; i < 3; i++) {
+- clamp_val(h[i], -BIT(13), BIT(13) - 1);
+- xy1y2[i] = h[i] + BIT(13);
++ h[i] = clamp(h[i], -half_range, half_range - 1);
++ xy1y2[i] = h[i] + half_range;
+ }
+ }
+
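
Note what the yas537 hunk actually fixes: the old code called clamp_val() and discarded its result, so h[i] was never clamped at all; the replacement assigns clamp()'s return value back. The version-1 correction itself is a 3x3 linear map around mid-scale. One axis as a standalone sketch, with the 2^13 constant inlined:

        /* Readings are centered on mid-scale (2^13 = 8192), run through one
         * row of the calibration matrix scaled by k/8192, clamped into
         * [-8192, 8191], then re-biased into the unsigned 14-bit range. */
        static int correct_axis(int k, int a1, int a2, int a3, const int s[3])
        {
                int h = (k * (a1 * s[0] + a2 * s[1] + a3 * s[2])) / 8192;

                if (h < -8192)
                        h = -8192;
                else if (h > 8191)
                        h = 8191;
                return h + 8192;
        }
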
+diff --git a/drivers/iommu/amd/io_pgtable.c b/drivers/iommu/amd/io_pgtable.c
+index 804b788f3f167d..f3399087859fd1 100644
+--- a/drivers/iommu/amd/io_pgtable.c
++++ b/drivers/iommu/amd/io_pgtable.c
+@@ -118,6 +118,7 @@ static void free_sub_pt(u64 *root, int mode, struct list_head *freelist)
+ */
+ static bool increase_address_space(struct amd_io_pgtable *pgtable,
+ unsigned long address,
++ unsigned int page_size_level,
+ gfp_t gfp)
+ {
+ struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
+@@ -133,7 +134,8 @@ static bool increase_address_space(struct amd_io_pgtable *pgtable,
+
+ spin_lock_irqsave(&domain->lock, flags);
+
+- if (address <= PM_LEVEL_SIZE(pgtable->mode))
++ if (address <= PM_LEVEL_SIZE(pgtable->mode) &&
++ pgtable->mode - 1 >= page_size_level)
+ goto out;
+
+ ret = false;
+@@ -163,18 +165,21 @@ static u64 *alloc_pte(struct amd_io_pgtable *pgtable,
+ gfp_t gfp,
+ bool *updated)
+ {
++ unsigned long last_addr = address + (page_size - 1);
+ struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
+ int level, end_lvl;
+ u64 *pte, *page;
+
+ BUG_ON(!is_power_of_2(page_size));
+
+- while (address > PM_LEVEL_SIZE(pgtable->mode)) {
++ while (last_addr > PM_LEVEL_SIZE(pgtable->mode) ||
++ pgtable->mode - 1 < PAGE_SIZE_LEVEL(page_size)) {
+ /*
+ * Return an error if there is no memory to update the
+ * page-table.
+ */
+- if (!increase_address_space(pgtable, address, gfp))
++ if (!increase_address_space(pgtable, last_addr,
++ PAGE_SIZE_LEVEL(page_size), gfp))
+ return NULL;
+ }
+
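
The AMD IOMMU change makes the grow-the-table loop consider the *end* of the mapping rather than just its start address, and additionally grows when a huge page's level has no parent level yet. A simplified standalone sketch of that check; LEVEL_SIZE is a rough stand-in for PM_LEVEL_SIZE(), assuming the v1 geometry of 4 KiB pages and 9 bits per level:

        #include <stdbool.h>

        /* Level L covers addresses up to 2^(12 + 9*L) - 1 under the assumed
         * geometry. */
        #define LEVEL_SIZE(l)   ((1ULL << (12 + 9 * (l))) - 1)

        static bool needs_more_levels(unsigned long long addr,
                                      unsigned long long page_size,
                                      int mode, int page_size_level)
        {
                unsigned long long last = addr + page_size - 1;

                /* Grow while the end of the mapping is out of reach, or the
                 * huge-page PTE would sit at the top level with no parent. */
                return last > LEVEL_SIZE(mode) || mode - 1 < page_size_level;
        }
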
+diff --git a/drivers/iommu/iommufd/fault.c b/drivers/iommu/iommufd/fault.c
+index e590973ce5cfa2..b8393a8c075396 100644
+--- a/drivers/iommu/iommufd/fault.c
++++ b/drivers/iommu/iommufd/fault.c
+@@ -415,8 +415,6 @@ int iommufd_fault_alloc(struct iommufd_ucmd *ucmd)
+ put_unused_fd(fdno);
+ out_fput:
+ fput(filep);
+- refcount_dec(&fault->obj.users);
+- iommufd_ctx_put(fault->ictx);
+ out_abort:
+ iommufd_object_abort_and_destroy(ucmd->ictx, &fault->obj);
+
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index d82bcab233a1b0..66ce15027f28d7 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -407,7 +407,7 @@ config PARTITION_PERCPU
+ config STM32MP_EXTI
+ tristate "STM32MP extended interrupts and event controller"
+ depends on (ARCH_STM32 && !ARM_SINGLE_ARMV7M) || COMPILE_TEST
+- default y
++ default ARCH_STM32 && !ARM_SINGLE_ARMV7M
+ select IRQ_DOMAIN_HIERARCHY
+ select GENERIC_IRQ_CHIP
+ help
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 52f625e07658cb..d9b6ec844cdda0 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -44,6 +44,7 @@
+ #define ITS_FLAGS_WORKAROUND_CAVIUM_22375 (1ULL << 1)
+ #define ITS_FLAGS_WORKAROUND_CAVIUM_23144 (1ULL << 2)
+ #define ITS_FLAGS_FORCE_NON_SHAREABLE (1ULL << 3)
++#define ITS_FLAGS_WORKAROUND_HISILICON_162100801 (1ULL << 4)
+
+ #define RD_LOCAL_LPI_ENABLED BIT(0)
+ #define RD_LOCAL_PENDTABLE_PREALLOCATED BIT(1)
+@@ -61,6 +62,7 @@ static u32 lpi_id_bits;
+ #define LPI_PENDBASE_SZ ALIGN(BIT(LPI_NRBITS) / 8, SZ_64K)
+
+ static u8 __ro_after_init lpi_prop_prio;
++static struct its_node *find_4_1_its(void);
+
+ /*
+ * Collection structure - just an ID, and a redistributor address to
+@@ -3797,6 +3799,20 @@ static void its_vpe_db_proxy_move(struct its_vpe *vpe, int from, int to)
+ raw_spin_unlock_irqrestore(&vpe_proxy.lock, flags);
+ }
+
++static void its_vpe_4_1_invall_locked(int cpu, struct its_vpe *vpe)
++{
++ void __iomem *rdbase;
++ u64 val;
++
++ val = GICR_INVALLR_V;
++ val |= FIELD_PREP(GICR_INVALLR_VPEID, vpe->vpe_id);
++
++ guard(raw_spinlock)(&gic_data_rdist_cpu(cpu)->rd_lock);
++ rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
++ gic_write_lpir(val, rdbase + GICR_INVALLR);
++ wait_for_syncr(rdbase);
++}
++
+ static int its_vpe_set_affinity(struct irq_data *d,
+ const struct cpumask *mask_val,
+ bool force)
+@@ -3804,6 +3820,7 @@ static int its_vpe_set_affinity(struct irq_data *d,
+ struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+ unsigned int from, cpu = nr_cpu_ids;
+ struct cpumask *table_mask;
++ struct its_node *its;
+ unsigned long flags;
+
+ /*
+@@ -3866,6 +3883,11 @@ static int its_vpe_set_affinity(struct irq_data *d,
+ vpe->col_idx = cpu;
+
+ its_send_vmovp(vpe);
++
++ its = find_4_1_its();
++ if (its && its->flags & ITS_FLAGS_WORKAROUND_HISILICON_162100801)
++ its_vpe_4_1_invall_locked(cpu, vpe);
++
+ its_vpe_db_proxy_move(vpe, from, cpu);
+
+ out:
+@@ -4173,22 +4195,12 @@ static void its_vpe_4_1_deschedule(struct its_vpe *vpe,
+
+ static void its_vpe_4_1_invall(struct its_vpe *vpe)
+ {
+- void __iomem *rdbase;
+ unsigned long flags;
+- u64 val;
+ int cpu;
+
+- val = GICR_INVALLR_V;
+- val |= FIELD_PREP(GICR_INVALLR_VPEID, vpe->vpe_id);
+-
+ /* Target the redistributor this vPE is currently known on */
+ cpu = vpe_to_cpuid_lock(vpe, &flags);
+- raw_spin_lock(&gic_data_rdist_cpu(cpu)->rd_lock);
+- rdbase = per_cpu_ptr(gic_rdists->rdist, cpu)->rd_base;
+- gic_write_lpir(val, rdbase + GICR_INVALLR);
+-
+- wait_for_syncr(rdbase);
+- raw_spin_unlock(&gic_data_rdist_cpu(cpu)->rd_lock);
++ its_vpe_4_1_invall_locked(cpu, vpe);
+ vpe_to_cpuid_unlock(vpe, flags);
+ }
+
+@@ -4781,6 +4793,14 @@ static bool its_set_non_coherent(void *data)
+ return true;
+ }
+
++static bool __maybe_unused its_enable_quirk_hip09_162100801(void *data)
++{
++ struct its_node *its = data;
++
++ its->flags |= ITS_FLAGS_WORKAROUND_HISILICON_162100801;
++ return true;
++}
++
+ static const struct gic_quirk its_quirks[] = {
+ #ifdef CONFIG_CAVIUM_ERRATUM_22375
+ {
+@@ -4827,6 +4847,14 @@ static const struct gic_quirk its_quirks[] = {
+ .init = its_enable_quirk_hip07_161600802,
+ },
+ #endif
++#ifdef CONFIG_HISILICON_ERRATUM_162100801
++ {
++ .desc = "ITS: Hip09 erratum 162100801",
++ .iidr = 0x00051736,
++ .mask = 0xffffffff,
++ .init = its_enable_quirk_hip09_162100801,
++ },
++#endif
+ #ifdef CONFIG_ROCKCHIP_ERRATUM_3588001
+ {
+ .desc = "ITS: Rockchip erratum RK3588001",
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index 06b97fd49ad9a2..f69f4e928d6143 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -29,11 +29,14 @@ static ssize_t brightness_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct led_classdev *led_cdev = dev_get_drvdata(dev);
++ unsigned int brightness;
+
+- /* no lock needed for this */
++ mutex_lock(&led_cdev->led_access);
+ led_update_brightness(led_cdev);
++ brightness = led_cdev->brightness;
++ mutex_unlock(&led_cdev->led_access);
+
+- return sprintf(buf, "%u\n", led_cdev->brightness);
++ return sprintf(buf, "%u\n", brightness);
+ }
+
+ static ssize_t brightness_store(struct device *dev,
+@@ -70,8 +73,13 @@ static ssize_t max_brightness_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct led_classdev *led_cdev = dev_get_drvdata(dev);
++ unsigned int max_brightness;
++
++ mutex_lock(&led_cdev->led_access);
++ max_brightness = led_cdev->max_brightness;
++ mutex_unlock(&led_cdev->led_access);
+
+- return sprintf(buf, "%u\n", led_cdev->max_brightness);
++ return sprintf(buf, "%u\n", max_brightness);
+ }
+ static DEVICE_ATTR_RO(max_brightness);
+
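
The led-class hunks replace a lock-free double dereference with a snapshot taken under led_access, so the printed value cannot be rewritten between the refresh and the read. The generic shape of the pattern, as a sketch with an illustrative struct and a hypothetical refresh() helper:

        struct obj { struct mutex lock; unsigned int val; };

        static ssize_t show_val(struct obj *o, char *buf)
        {
                unsigned int v;

                mutex_lock(&o->lock);
                refresh(o);             /* hypothetical; may rewrite o->val */
                v = o->val;             /* coherent snapshot */
                mutex_unlock(&o->lock);
                return sprintf(buf, "%u\n", v);
        }
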
+diff --git a/drivers/mailbox/pcc.c b/drivers/mailbox/pcc.c
+index 94885e411085ad..82102a4c5d6883 100644
+--- a/drivers/mailbox/pcc.c
++++ b/drivers/mailbox/pcc.c
+@@ -269,6 +269,35 @@ static bool pcc_mbox_cmd_complete_check(struct pcc_chan_info *pchan)
+ return !!val;
+ }
+
++static void check_and_ack(struct pcc_chan_info *pchan, struct mbox_chan *chan)
++{
++ struct acpi_pcct_ext_pcc_shared_memory pcc_hdr;
++
++ if (pchan->type != ACPI_PCCT_TYPE_EXT_PCC_SLAVE_SUBSPACE)
++ return;
++	/*
++	 * If the memory region has not been mapped, we cannot determine
++	 * whether we need to send the message, but we still need to set
++	 * the cmd_update flag before returning.
++	 */
++ if (pchan->chan.shmem == NULL) {
++ pcc_chan_reg_read_modify_write(&pchan->cmd_update);
++ return;
++ }
++ memcpy_fromio(&pcc_hdr, pchan->chan.shmem,
++ sizeof(struct acpi_pcct_ext_pcc_shared_memory));
++ /*
++ * The PCC slave subspace channel needs to set the command complete bit
++ * after processing message. If the PCC_ACK_FLAG is set, it should also
++ * ring the doorbell.
++ *
++ * The PCC master subspace channel clears chan_in_use to free channel.
++ */
++ if (le32_to_cpup(&pcc_hdr.flags) & PCC_ACK_FLAG_MASK)
++ pcc_send_data(chan, NULL);
++ else
++ pcc_chan_reg_read_modify_write(&pchan->cmd_update);
++}
++
+ /**
+ * pcc_mbox_irq - PCC mailbox interrupt handler
+ * @irq: interrupt number
+@@ -306,14 +335,7 @@ static irqreturn_t pcc_mbox_irq(int irq, void *p)
+
+ mbox_chan_received_data(chan, NULL);
+
+- /*
+- * The PCC slave subspace channel needs to set the command complete bit
+- * and ring doorbell after processing message.
+- *
+- * The PCC master subspace channel clears chan_in_use to free channel.
+- */
+- if (pchan->type == ACPI_PCCT_TYPE_EXT_PCC_SLAVE_SUBSPACE)
+- pcc_send_data(chan, NULL);
++ check_and_ack(pchan, chan);
+ pchan->chan_in_use = false;
+
+ return IRQ_HANDLED;
+@@ -365,14 +387,37 @@ EXPORT_SYMBOL_GPL(pcc_mbox_request_channel);
+ void pcc_mbox_free_channel(struct pcc_mbox_chan *pchan)
+ {
+ struct mbox_chan *chan = pchan->mchan;
++ struct pcc_chan_info *pchan_info;
++ struct pcc_mbox_chan *pcc_mbox_chan;
+
+ if (!chan || !chan->cl)
+ return;
++ pchan_info = chan->con_priv;
++ pcc_mbox_chan = &pchan_info->chan;
++ if (pcc_mbox_chan->shmem) {
++ iounmap(pcc_mbox_chan->shmem);
++ pcc_mbox_chan->shmem = NULL;
++ }
+
+ mbox_free_channel(chan);
+ }
+ EXPORT_SYMBOL_GPL(pcc_mbox_free_channel);
+
++int pcc_mbox_ioremap(struct mbox_chan *chan)
++{
++ struct pcc_chan_info *pchan_info;
++ struct pcc_mbox_chan *pcc_mbox_chan;
++
++ if (!chan || !chan->cl)
++ return -1;
++ pchan_info = chan->con_priv;
++ pcc_mbox_chan = &pchan_info->chan;
++ pcc_mbox_chan->shmem = ioremap(pcc_mbox_chan->shmem_base_addr,
++ pcc_mbox_chan->shmem_size);
++ return 0;
++}
++EXPORT_SYMBOL_GPL(pcc_mbox_ioremap);
++
+ /**
+ * pcc_send_data - Called from Mailbox Controller code. Used
+ * here only to ring the channel doorbell. The PCC client
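
With the new pcc_mbox_ioremap() export, shared-memory mapping moves to the client's control and unmapping happens in pcc_mbox_free_channel(). A hypothetical client-side sequence, assuming the client already holds a struct mbox_client cl and a subspace_id:

        struct pcc_mbox_chan *pchan = pcc_mbox_request_channel(&cl, subspace_id);

        if (IS_ERR(pchan))
                return PTR_ERR(pchan);

        if (pcc_mbox_ioremap(pchan->mchan))     /* returns -1 on a bad channel */
                return -ENXIO;

        /* pchan->shmem is now valid for memcpy_fromio()/memcpy_toio();
         * it is unmapped for us by pcc_mbox_free_channel(). */
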
+diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
+index e7abfdd77c3b66..e42f1400cea9d7 100644
+--- a/drivers/md/bcache/super.c
++++ b/drivers/md/bcache/super.c
+@@ -1718,7 +1718,7 @@ static CLOSURE_CALLBACK(cache_set_flush)
+ if (!IS_ERR_OR_NULL(c->gc_thread))
+ kthread_stop(c->gc_thread);
+
+- if (!IS_ERR(c->root))
++ if (!IS_ERR_OR_NULL(c->root))
+ list_add(&c->root->list, &c->btree_cache);
+
+ /*
+diff --git a/drivers/media/pci/intel/ipu6/Kconfig b/drivers/media/pci/intel/ipu6/Kconfig
+index a4537818a58c05..cd1c545293574a 100644
+--- a/drivers/media/pci/intel/ipu6/Kconfig
++++ b/drivers/media/pci/intel/ipu6/Kconfig
+@@ -8,7 +8,7 @@ config VIDEO_INTEL_IPU6
+ select IOMMU_IOVA
+ select VIDEO_V4L2_SUBDEV_API
+ select MEDIA_CONTROLLER
+- select VIDEOBUF2_DMA_CONTIG
++ select VIDEOBUF2_DMA_SG
+ select V4L2_FWNODE
+ help
+ This is the 6th Gen Intel Image Processing Unit, found in Intel SoCs
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c
+index 03dbb0e0ea7957..bbb66b56ee88c9 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.c
+@@ -13,17 +13,48 @@
+
+ #include <media/media-entity.h>
+ #include <media/v4l2-subdev.h>
+-#include <media/videobuf2-dma-contig.h>
++#include <media/videobuf2-dma-sg.h>
+ #include <media/videobuf2-v4l2.h>
+
+ #include "ipu6-bus.h"
++#include "ipu6-dma.h"
+ #include "ipu6-fw-isys.h"
+ #include "ipu6-isys.h"
+ #include "ipu6-isys-video.h"
+
+-static int queue_setup(struct vb2_queue *q, unsigned int *num_buffers,
+- unsigned int *num_planes, unsigned int sizes[],
+- struct device *alloc_devs[])
++static int ipu6_isys_buf_init(struct vb2_buffer *vb)
++{
++ struct ipu6_isys *isys = vb2_get_drv_priv(vb->vb2_queue);
++ struct sg_table *sg = vb2_dma_sg_plane_desc(vb, 0);
++ struct vb2_v4l2_buffer *vvb = to_vb2_v4l2_buffer(vb);
++ struct ipu6_isys_video_buffer *ivb =
++ vb2_buffer_to_ipu6_isys_video_buffer(vvb);
++ int ret;
++
++ ret = ipu6_dma_map_sgtable(isys->adev, sg, DMA_TO_DEVICE, 0);
++ if (ret)
++ return ret;
++
++ ivb->dma_addr = sg_dma_address(sg->sgl);
++
++ return 0;
++}
++
++static void ipu6_isys_buf_cleanup(struct vb2_buffer *vb)
++{
++ struct ipu6_isys *isys = vb2_get_drv_priv(vb->vb2_queue);
++ struct sg_table *sg = vb2_dma_sg_plane_desc(vb, 0);
++ struct vb2_v4l2_buffer *vvb = to_vb2_v4l2_buffer(vb);
++ struct ipu6_isys_video_buffer *ivb =
++ vb2_buffer_to_ipu6_isys_video_buffer(vvb);
++
++ ivb->dma_addr = 0;
++ ipu6_dma_unmap_sgtable(isys->adev, sg, DMA_TO_DEVICE, 0);
++}
++
++static int ipu6_isys_queue_setup(struct vb2_queue *q, unsigned int *num_buffers,
++ unsigned int *num_planes, unsigned int sizes[],
++ struct device *alloc_devs[])
+ {
+ struct ipu6_isys_queue *aq = vb2_queue_to_isys_queue(q);
+ struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq);
+@@ -207,9 +238,11 @@ ipu6_isys_buf_to_fw_frame_buf_pin(struct vb2_buffer *vb,
+ struct ipu6_fw_isys_frame_buff_set_abi *set)
+ {
+ struct ipu6_isys_queue *aq = vb2_queue_to_isys_queue(vb->vb2_queue);
++ struct vb2_v4l2_buffer *vvb = to_vb2_v4l2_buffer(vb);
++ struct ipu6_isys_video_buffer *ivb =
++ vb2_buffer_to_ipu6_isys_video_buffer(vvb);
+
+- set->output_pins[aq->fw_output].addr =
+- vb2_dma_contig_plane_dma_addr(vb, 0);
++ set->output_pins[aq->fw_output].addr = ivb->dma_addr;
+ set->output_pins[aq->fw_output].out_buf_id = vb->index + 1;
+ }
+
+@@ -332,7 +365,7 @@ static void buf_queue(struct vb2_buffer *vb)
+
+ dev_dbg(dev, "queue buffer %u for %s\n", vb->index, av->vdev.name);
+
+- dma = vb2_dma_contig_plane_dma_addr(vb, 0);
++ dma = ivb->dma_addr;
+ dev_dbg(dev, "iova: iova %pad\n", &dma);
+
+ spin_lock_irqsave(&aq->lock, flags);
+@@ -724,10 +757,14 @@ void ipu6_isys_queue_buf_ready(struct ipu6_isys_stream *stream,
+ }
+
+ list_for_each_entry_reverse(ib, &aq->active, head) {
++ struct ipu6_isys_video_buffer *ivb;
++ struct vb2_v4l2_buffer *vvb;
+ dma_addr_t addr;
+
+ vb = ipu6_isys_buffer_to_vb2_buffer(ib);
+- addr = vb2_dma_contig_plane_dma_addr(vb, 0);
++ vvb = to_vb2_v4l2_buffer(vb);
++ ivb = vb2_buffer_to_ipu6_isys_video_buffer(vvb);
++ addr = ivb->dma_addr;
+
+ if (info->pin.addr != addr) {
+ if (first)
+@@ -766,10 +803,12 @@ void ipu6_isys_queue_buf_ready(struct ipu6_isys_stream *stream,
+ }
+
+ static const struct vb2_ops ipu6_isys_queue_ops = {
+- .queue_setup = queue_setup,
++ .queue_setup = ipu6_isys_queue_setup,
+ .wait_prepare = vb2_ops_wait_prepare,
+ .wait_finish = vb2_ops_wait_finish,
++ .buf_init = ipu6_isys_buf_init,
+ .buf_prepare = ipu6_isys_buf_prepare,
++ .buf_cleanup = ipu6_isys_buf_cleanup,
+ .start_streaming = start_streaming,
+ .stop_streaming = stop_streaming,
+ .buf_queue = buf_queue,
+@@ -779,16 +818,17 @@ int ipu6_isys_queue_init(struct ipu6_isys_queue *aq)
+ {
+ struct ipu6_isys *isys = ipu6_isys_queue_to_video(aq)->isys;
+ struct ipu6_isys_video *av = ipu6_isys_queue_to_video(aq);
++ struct ipu6_bus_device *adev = isys->adev;
+ int ret;
+
+ /* no support for userptr */
+ if (!aq->vbq.io_modes)
+ aq->vbq.io_modes = VB2_MMAP | VB2_DMABUF;
+
+- aq->vbq.drv_priv = aq;
++ aq->vbq.drv_priv = isys;
+ aq->vbq.ops = &ipu6_isys_queue_ops;
+ aq->vbq.lock = &av->mutex;
+- aq->vbq.mem_ops = &vb2_dma_contig_memops;
++ aq->vbq.mem_ops = &vb2_dma_sg_memops;
+ aq->vbq.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ aq->vbq.min_queued_buffers = 1;
+ aq->vbq.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+@@ -797,8 +837,8 @@ int ipu6_isys_queue_init(struct ipu6_isys_queue *aq)
+ if (ret)
+ return ret;
+
+- aq->dev = &isys->adev->auxdev.dev;
+- aq->vbq.dev = &isys->adev->auxdev.dev;
++ aq->dev = &adev->auxdev.dev;
++ aq->vbq.dev = &adev->isp->pdev->dev;
+ spin_lock_init(&aq->lock);
+ INIT_LIST_HEAD(&aq->active);
+ INIT_LIST_HEAD(&aq->incoming);
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-queue.h b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.h
+index 95cfd4869d9356..fe8fc796a58f5d 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-isys-queue.h
++++ b/drivers/media/pci/intel/ipu6/ipu6-isys-queue.h
+@@ -38,6 +38,7 @@ struct ipu6_isys_buffer {
+ struct ipu6_isys_video_buffer {
+ struct vb2_v4l2_buffer vb_v4l2;
+ struct ipu6_isys_buffer ib;
++ dma_addr_t dma_addr;
+ };
+
+ #define IPU6_ISYS_BUFFER_LIST_FL_INCOMING BIT(0)
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys.c b/drivers/media/pci/intel/ipu6/ipu6-isys.c
+index c4aff2e2009bab..c85e056cb904b2 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-isys.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-isys.c
+@@ -34,6 +34,7 @@
+
+ #include "ipu6-bus.h"
+ #include "ipu6-cpd.h"
++#include "ipu6-dma.h"
+ #include "ipu6-isys.h"
+ #include "ipu6-isys-csi2.h"
+ #include "ipu6-mmu.h"
+@@ -933,29 +934,27 @@ static const struct dev_pm_ops isys_pm_ops = {
+
+ static void free_fw_msg_bufs(struct ipu6_isys *isys)
+ {
+- struct device *dev = &isys->adev->auxdev.dev;
+ struct isys_fw_msgs *fwmsg, *safe;
+
+ list_for_each_entry_safe(fwmsg, safe, &isys->framebuflist, head)
+- dma_free_attrs(dev, sizeof(struct isys_fw_msgs), fwmsg,
+- fwmsg->dma_addr, 0);
++ ipu6_dma_free(isys->adev, sizeof(struct isys_fw_msgs), fwmsg,
++ fwmsg->dma_addr, 0);
+
+ list_for_each_entry_safe(fwmsg, safe, &isys->framebuflist_fw, head)
+- dma_free_attrs(dev, sizeof(struct isys_fw_msgs), fwmsg,
+- fwmsg->dma_addr, 0);
++ ipu6_dma_free(isys->adev, sizeof(struct isys_fw_msgs), fwmsg,
++ fwmsg->dma_addr, 0);
+ }
+
+ static int alloc_fw_msg_bufs(struct ipu6_isys *isys, int amount)
+ {
+- struct device *dev = &isys->adev->auxdev.dev;
+ struct isys_fw_msgs *addr;
+ dma_addr_t dma_addr;
+ unsigned long flags;
+ unsigned int i;
+
+ for (i = 0; i < amount; i++) {
+- addr = dma_alloc_attrs(dev, sizeof(struct isys_fw_msgs),
+- &dma_addr, GFP_KERNEL, 0);
++ addr = ipu6_dma_alloc(isys->adev, sizeof(*addr),
++ &dma_addr, GFP_KERNEL, 0);
+ if (!addr)
+ break;
+ addr->dma_addr = dma_addr;
+@@ -974,8 +973,8 @@ static int alloc_fw_msg_bufs(struct ipu6_isys *isys, int amount)
+ struct isys_fw_msgs, head);
+ list_del(&addr->head);
+ spin_unlock_irqrestore(&isys->listlock, flags);
+- dma_free_attrs(dev, sizeof(struct isys_fw_msgs), addr,
+- addr->dma_addr, 0);
++ ipu6_dma_free(isys->adev, sizeof(struct isys_fw_msgs), addr,
++ addr->dma_addr, 0);
+ spin_lock_irqsave(&isys->listlock, flags);
+ }
+ spin_unlock_irqrestore(&isys->listlock, flags);
+diff --git a/drivers/media/usb/cx231xx/cx231xx-cards.c b/drivers/media/usb/cx231xx/cx231xx-cards.c
+index 92efe6c1f47bae..bda729b42d05fe 100644
+--- a/drivers/media/usb/cx231xx/cx231xx-cards.c
++++ b/drivers/media/usb/cx231xx/cx231xx-cards.c
+@@ -994,6 +994,8 @@ const unsigned int cx231xx_bcount = ARRAY_SIZE(cx231xx_boards);
+
+ /* table of devices that work with this driver */
+ struct usb_device_id cx231xx_id_table[] = {
++ {USB_DEVICE(0x1D19, 0x6108),
++ .driver_info = CX231XX_BOARD_PV_XCAPTURE_USB},
+ {USB_DEVICE(0x1D19, 0x6109),
+ .driver_info = CX231XX_BOARD_PV_XCAPTURE_USB},
+ {USB_DEVICE(0x0572, 0x5A3C),
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 675be4858366f0..9f38a9b23c0181 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -2474,9 +2474,22 @@ static const struct uvc_device_info uvc_quirk_force_y8 = {
+ * The Logitech cameras listed below have their interface class set to
+ * VENDOR_SPEC because they don't announce themselves as UVC devices, even
+ * though they are compliant.
++ *
++ * Sort these by vendor/product ID.
+ */
+ static const struct usb_device_id uvc_ids[] = {
+ /* Quanta ACER HD User Facing */
++ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
++ | USB_DEVICE_ID_MATCH_INT_INFO,
++ .idVendor = 0x0408,
++ .idProduct = 0x4033,
++ .bInterfaceClass = USB_CLASS_VIDEO,
++ .bInterfaceSubClass = 1,
++ .bInterfaceProtocol = UVC_PC_PROTOCOL_15,
++ .driver_info = (kernel_ulong_t)&(const struct uvc_device_info){
++ .uvc_version = 0x010a,
++ } },
++ /* Quanta ACER HD User Facing */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+ .idVendor = 0x0408,
+@@ -3010,6 +3023,15 @@ static const struct usb_device_id uvc_ids[] = {
+ .bInterfaceProtocol = 0,
+ .driver_info = UVC_INFO_QUIRK(UVC_QUIRK_PROBE_MINMAX
+ | UVC_QUIRK_IGNORE_SELECTOR_UNIT) },
++ /* NXP Semiconductors IR VIDEO */
++ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
++ | USB_DEVICE_ID_MATCH_INT_INFO,
++ .idVendor = 0x1fc9,
++ .idProduct = 0x009b,
++ .bInterfaceClass = USB_CLASS_VIDEO,
++ .bInterfaceSubClass = 1,
++ .bInterfaceProtocol = 0,
++ .driver_info = (kernel_ulong_t)&uvc_quirk_probe_minmax },
+ /* Oculus VR Positional Tracker DK2 */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+@@ -3118,6 +3140,15 @@ static const struct usb_device_id uvc_ids[] = {
+ .bInterfaceSubClass = 1,
+ .bInterfaceProtocol = 0,
+ .driver_info = UVC_INFO_META(V4L2_META_FMT_D4XX) },
++ /* Intel D421 Depth Module */
++ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
++ | USB_DEVICE_ID_MATCH_INT_INFO,
++ .idVendor = 0x8086,
++ .idProduct = 0x1155,
++ .bInterfaceClass = USB_CLASS_VIDEO,
++ .bInterfaceSubClass = 1,
++ .bInterfaceProtocol = 0,
++ .driver_info = UVC_INFO_META(V4L2_META_FMT_D4XX) },
+ /* Generic USB Video Class */
+ { USB_INTERFACE_INFO(USB_CLASS_VIDEO, 1, UVC_PC_PROTOCOL_UNDEFINED) },
+ { USB_INTERFACE_INFO(USB_CLASS_VIDEO, 1, UVC_PC_PROTOCOL_15) },
+diff --git a/drivers/misc/eeprom/eeprom_93cx6.c b/drivers/misc/eeprom/eeprom_93cx6.c
+index 9627294fe3e951..4c9827fe921731 100644
+--- a/drivers/misc/eeprom/eeprom_93cx6.c
++++ b/drivers/misc/eeprom/eeprom_93cx6.c
+@@ -186,6 +186,11 @@ void eeprom_93cx6_read(struct eeprom_93cx6 *eeprom, const u8 word,
+ eeprom_93cx6_write_bits(eeprom, command,
+ PCI_EEPROM_WIDTH_OPCODE + eeprom->width);
+
++ if (has_quirk_extra_read_cycle(eeprom)) {
++ eeprom_93cx6_pulse_high(eeprom);
++ eeprom_93cx6_pulse_low(eeprom);
++ }
++
+ /*
+ * Read the requested 16 bits.
+ */
+@@ -252,6 +257,11 @@ void eeprom_93cx6_readb(struct eeprom_93cx6 *eeprom, const u8 byte,
+ eeprom_93cx6_write_bits(eeprom, command,
+ PCI_EEPROM_WIDTH_OPCODE + eeprom->width + 1);
+
++ if (has_quirk_extra_read_cycle(eeprom)) {
++ eeprom_93cx6_pulse_high(eeprom);
++ eeprom_93cx6_pulse_low(eeprom);
++ }
++
+ /*
+ * Read the requested 8 bits.
+ */
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index ef06a4d5d65bb2..1d08009f2bd83f 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -50,6 +50,7 @@
+ #include <linux/mmc/sd.h>
+
+ #include <linux/uaccess.h>
++#include <linux/unaligned.h>
+
+ #include "queue.h"
+ #include "block.h"
+@@ -993,11 +994,12 @@ static int mmc_sd_num_wr_blocks(struct mmc_card *card, u32 *written_blocks)
+ int err;
+ u32 result;
+ __be32 *blocks;
++ u8 resp_sz = mmc_card_ult_capacity(card) ? 8 : 4;
++ unsigned int noio_flag;
+
+ struct mmc_request mrq = {};
+ struct mmc_command cmd = {};
+ struct mmc_data data = {};
+-
+ struct scatterlist sg;
+
+ err = mmc_app_cmd(card->host, card);
+@@ -1008,7 +1010,7 @@ static int mmc_sd_num_wr_blocks(struct mmc_card *card, u32 *written_blocks)
+ cmd.arg = 0;
+ cmd.flags = MMC_RSP_SPI_R1 | MMC_RSP_R1 | MMC_CMD_ADTC;
+
+- data.blksz = 4;
++ data.blksz = resp_sz;
+ data.blocks = 1;
+ data.flags = MMC_DATA_READ;
+ data.sg = &sg;
+@@ -1018,15 +1020,29 @@ static int mmc_sd_num_wr_blocks(struct mmc_card *card, u32 *written_blocks)
+ mrq.cmd = &cmd;
+ mrq.data = &data;
+
+- blocks = kmalloc(4, GFP_KERNEL);
++ noio_flag = memalloc_noio_save();
++ blocks = kmalloc(resp_sz, GFP_KERNEL);
++ memalloc_noio_restore(noio_flag);
+ if (!blocks)
+ return -ENOMEM;
+
+- sg_init_one(&sg, blocks, 4);
++ sg_init_one(&sg, blocks, resp_sz);
+
+ mmc_wait_for_req(card->host, &mrq);
+
+- result = ntohl(*blocks);
++ if (mmc_card_ult_capacity(card)) {
++ /*
++ * Normally, ACMD22 returns the number of written sectors as
++ * u32. SDUC, however, returns it as u64. This is not a
++ * superfluous requirement, because SDUC writes may exceed 2TB.
++		 * For Linux mmc, however, the preceding write could not have
++		 * exceeded the block layer limits, so just make room for a u64
++		 * and clamp the response back down to u32.
++ */
++ result = clamp_val(get_unaligned_be64(blocks), 0, UINT_MAX);
++ } else {
++ result = ntohl(*blocks);
++ }
+ kfree(blocks);
+
+ if (cmd.error || data.error)
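
The SDUC-aware response handling above, reduced to a standalone sketch: ACMD22 returns the written-sector count as a big-endian u64 on SDUC instead of the classic u32, and the value is clamped back into the u32 the caller's bookkeeping expects:

        #include <linux/types.h>
        #include <linux/unaligned.h>
        #include <linux/minmax.h>
        #include <linux/limits.h>

        static u32 parse_acmd22_blocks(const u8 *resp, bool is_sduc)
        {
                if (is_sduc)    /* 8-byte response; may exceed UINT_MAX */
                        return clamp_val(get_unaligned_be64(resp), 0, UINT_MAX);
                return get_unaligned_be32(resp);  /* classic 4-byte response */
        }
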
+diff --git a/drivers/mmc/core/bus.c b/drivers/mmc/core/bus.c
+index 0ddaee0eae54f0..4f3a26676ccb86 100644
+--- a/drivers/mmc/core/bus.c
++++ b/drivers/mmc/core/bus.c
+@@ -149,6 +149,8 @@ static void mmc_bus_shutdown(struct device *dev)
+ if (dev->driver && drv->shutdown)
+ drv->shutdown(card);
+
++ __mmc_stop_host(host);
++
+ if (host->bus_ops->shutdown) {
+ ret = host->bus_ops->shutdown(host);
+ if (ret)
+@@ -321,7 +323,9 @@ int mmc_add_card(struct mmc_card *card)
+ case MMC_TYPE_SD:
+ type = "SD";
+ if (mmc_card_blockaddr(card)) {
+- if (mmc_card_ext_capacity(card))
++ if (mmc_card_ult_capacity(card))
++ type = "SDUC";
++ else if (mmc_card_ext_capacity(card))
+ type = "SDXC";
+ else
+ type = "SDHC";
+diff --git a/drivers/mmc/core/card.h b/drivers/mmc/core/card.h
+index b7754a1b8d9788..3205feb1e8ff6a 100644
+--- a/drivers/mmc/core/card.h
++++ b/drivers/mmc/core/card.h
+@@ -23,6 +23,7 @@
+ #define MMC_CARD_SDXC (1<<3) /* card is SDXC */
+ #define MMC_CARD_REMOVED (1<<4) /* card has been removed */
+ #define MMC_STATE_SUSPENDED (1<<5) /* card is suspended */
++#define MMC_CARD_SDUC (1<<6) /* card is SDUC */
+
+ #define mmc_card_present(c) ((c)->state & MMC_STATE_PRESENT)
+ #define mmc_card_readonly(c) ((c)->state & MMC_STATE_READONLY)
+@@ -30,11 +31,13 @@
+ #define mmc_card_ext_capacity(c) ((c)->state & MMC_CARD_SDXC)
+ #define mmc_card_removed(c) ((c) && ((c)->state & MMC_CARD_REMOVED))
+ #define mmc_card_suspended(c) ((c)->state & MMC_STATE_SUSPENDED)
++#define mmc_card_ult_capacity(c) ((c)->state & MMC_CARD_SDUC)
+
+ #define mmc_card_set_present(c) ((c)->state |= MMC_STATE_PRESENT)
+ #define mmc_card_set_readonly(c) ((c)->state |= MMC_STATE_READONLY)
+ #define mmc_card_set_blockaddr(c) ((c)->state |= MMC_STATE_BLOCKADDR)
+ #define mmc_card_set_ext_capacity(c) ((c)->state |= MMC_CARD_SDXC)
++#define mmc_card_set_ult_capacity(c) ((c)->state |= MMC_CARD_SDUC)
+ #define mmc_card_set_removed(c) ((c)->state |= MMC_CARD_REMOVED)
+ #define mmc_card_set_suspended(c) ((c)->state |= MMC_STATE_SUSPENDED)
+ #define mmc_card_clr_suspended(c) ((c)->state &= ~MMC_STATE_SUSPENDED)
+@@ -82,6 +85,7 @@ struct mmc_fixup {
+ #define CID_MANFID_SANDISK_SD 0x3
+ #define CID_MANFID_ATP 0x9
+ #define CID_MANFID_TOSHIBA 0x11
++#define CID_MANFID_GIGASTONE 0x12
+ #define CID_MANFID_MICRON 0x13
+ #define CID_MANFID_SAMSUNG 0x15
+ #define CID_MANFID_APACER 0x27
+@@ -284,4 +288,10 @@ static inline int mmc_card_broken_cache_flush(const struct mmc_card *c)
+ {
+ return c->quirks & MMC_QUIRK_BROKEN_CACHE_FLUSH;
+ }
++
++static inline int mmc_card_broken_sd_poweroff_notify(const struct mmc_card *c)
++{
++ return c->quirks & MMC_QUIRK_BROKEN_SD_POWEROFF_NOTIFY;
++}
++
+ #endif
+diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
+index d6c819dd68ed47..327029f5c59b79 100644
+--- a/drivers/mmc/core/core.c
++++ b/drivers/mmc/core/core.c
+@@ -2296,6 +2296,9 @@ void mmc_start_host(struct mmc_host *host)
+
+ void __mmc_stop_host(struct mmc_host *host)
+ {
++ if (host->rescan_disable)
++ return;
++
+ if (host->slot.cd_irq >= 0) {
+ mmc_gpio_set_cd_wake(host, false);
+ disable_irq(host->slot.cd_irq);
+diff --git a/drivers/mmc/core/quirks.h b/drivers/mmc/core/quirks.h
+index 92905fc46436dd..89b512905be140 100644
+--- a/drivers/mmc/core/quirks.h
++++ b/drivers/mmc/core/quirks.h
+@@ -25,6 +25,15 @@ static const struct mmc_fixup __maybe_unused mmc_sd_fixups[] = {
+ 0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd,
+ MMC_QUIRK_BROKEN_SD_CACHE, EXT_CSD_REV_ANY),
+
++ /*
++	 * GIGASTONE Gaming Plus microSD cards manufactured in 02/2022 never
++	 * clear the Flush Cache bit and never set the Poweroff Notification
++	 * Ready bit.
++ */
++ _FIXUP_EXT("ASTC", CID_MANFID_GIGASTONE, 0x3456, 2022, 2,
++ 0, -1ull, SDIO_ANY_ID, SDIO_ANY_ID, add_quirk_sd,
++ MMC_QUIRK_BROKEN_SD_CACHE | MMC_QUIRK_BROKEN_SD_POWEROFF_NOTIFY,
++ EXT_CSD_REV_ANY),
++
+ END_FIXUP
+ };
+
+diff --git a/drivers/mmc/core/sd.c b/drivers/mmc/core/sd.c
+index 12fe282bea77ef..63915541c0e494 100644
+--- a/drivers/mmc/core/sd.c
++++ b/drivers/mmc/core/sd.c
+@@ -100,7 +100,7 @@ void mmc_decode_cid(struct mmc_card *card)
+ /*
+ * Given a 128-bit response, decode to our card CSD structure.
+ */
+-static int mmc_decode_csd(struct mmc_card *card)
++static int mmc_decode_csd(struct mmc_card *card, bool is_sduc)
+ {
+ struct mmc_csd *csd = &card->csd;
+ unsigned int e, m, csd_struct;
+@@ -144,9 +144,10 @@ static int mmc_decode_csd(struct mmc_card *card)
+ mmc_card_set_readonly(card);
+ break;
+ case 1:
++ case 2:
+ /*
+- * This is a block-addressed SDHC or SDXC card. Most
+- * interesting fields are unused and have fixed
++ * This is a block-addressed SDHC, SDXC or SDUC card.
++ * Most interesting fields are unused and have fixed
+ * values. To avoid getting tripped by buggy cards,
+ * we assume those fixed values ourselves.
+ */
+@@ -159,14 +160,19 @@ static int mmc_decode_csd(struct mmc_card *card)
+ e = unstuff_bits(resp, 96, 3);
+ csd->max_dtr = tran_exp[e] * tran_mant[m];
+ csd->cmdclass = unstuff_bits(resp, 84, 12);
+- csd->c_size = unstuff_bits(resp, 48, 22);
+
+- /* SDXC cards have a minimum C_SIZE of 0x00FFFF */
+- if (csd->c_size >= 0xFFFF)
++ if (csd_struct == 1)
++ m = unstuff_bits(resp, 48, 22);
++ else
++ m = unstuff_bits(resp, 48, 28);
++ csd->c_size = m;
++
++ if (csd->c_size >= 0x400000 && is_sduc)
++ mmc_card_set_ult_capacity(card);
++ else if (csd->c_size >= 0xFFFF)
+ mmc_card_set_ext_capacity(card);
+
+- m = unstuff_bits(resp, 48, 22);
+- csd->capacity = (1 + m) << 10;
++ csd->capacity = (1 + (typeof(sector_t))m) << 10;
+
+ csd->read_blkbits = 9;
+ csd->read_partial = 0;
+@@ -876,7 +882,7 @@ int mmc_sd_get_cid(struct mmc_host *host, u32 ocr, u32 *cid, u32 *rocr)
+ return err;
+ }
+
+-int mmc_sd_get_csd(struct mmc_card *card)
++int mmc_sd_get_csd(struct mmc_card *card, bool is_sduc)
+ {
+ int err;
+
+@@ -887,7 +893,7 @@ int mmc_sd_get_csd(struct mmc_card *card)
+ if (err)
+ return err;
+
+- err = mmc_decode_csd(card);
++ err = mmc_decode_csd(card, is_sduc);
+ if (err)
+ return err;
+
+@@ -1107,7 +1113,7 @@ static int sd_parse_ext_reg_power(struct mmc_card *card, u8 fno, u8 page,
+ card->ext_power.rev = reg_buf[0] & 0xf;
+
+ /* Power Off Notification support at bit 4. */
+- if (reg_buf[1] & BIT(4))
++ if ((reg_buf[1] & BIT(4)) && !mmc_card_broken_sd_poweroff_notify(card))
+ card->ext_power.feature_support |= SD_EXT_POWER_OFF_NOTIFY;
+
+ /* Power Sustenance support at bit 5. */
+@@ -1442,7 +1448,7 @@ static int mmc_sd_init_card(struct mmc_host *host, u32 ocr,
+ }
+
+ if (!oldcard) {
+- err = mmc_sd_get_csd(card);
++ err = mmc_sd_get_csd(card, false);
+ if (err)
+ goto free_card;
+
+diff --git a/drivers/mmc/core/sd.h b/drivers/mmc/core/sd.h
+index fe6dd46927a423..7e8beface2ca61 100644
+--- a/drivers/mmc/core/sd.h
++++ b/drivers/mmc/core/sd.h
+@@ -10,7 +10,7 @@ struct mmc_host;
+ struct mmc_card;
+
+ int mmc_sd_get_cid(struct mmc_host *host, u32 ocr, u32 *cid, u32 *rocr);
+-int mmc_sd_get_csd(struct mmc_card *card);
++int mmc_sd_get_csd(struct mmc_card *card, bool is_sduc);
+ void mmc_decode_cid(struct mmc_card *card);
+ int mmc_sd_setup_card(struct mmc_host *host, struct mmc_card *card,
+ bool reinit);
+diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
+index 4fb247fde5c080..9566837c9848e6 100644
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -769,7 +769,7 @@ static int mmc_sdio_init_card(struct mmc_host *host, u32 ocr,
+ * Read CSD, before selecting the card
+ */
+ if (!oldcard && mmc_card_sd_combo(card)) {
+- err = mmc_sd_get_csd(card);
++ err = mmc_sd_get_csd(card, false);
+ if (err)
+ goto remove;
+
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 89018b6c97b9a7..813bc20cfb5a6c 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -2736,20 +2736,18 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ }
+
+ /* Allocate MMC host for this device */
+- mmc = mmc_alloc_host(sizeof(struct msdc_host), &pdev->dev);
++ mmc = devm_mmc_alloc_host(&pdev->dev, sizeof(struct msdc_host));
+ if (!mmc)
+ return -ENOMEM;
+
+ host = mmc_priv(mmc);
+ ret = mmc_of_parse(mmc);
+ if (ret)
+- goto host_free;
++ return ret;
+
+ host->base = devm_platform_ioremap_resource(pdev, 0);
+- if (IS_ERR(host->base)) {
+- ret = PTR_ERR(host->base);
+- goto host_free;
+- }
++ if (IS_ERR(host->base))
++ return PTR_ERR(host->base);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ if (res) {
+@@ -2760,53 +2758,45 @@ static int msdc_drv_probe(struct platform_device *pdev)
+
+ ret = mmc_regulator_get_supply(mmc);
+ if (ret)
+- goto host_free;
++ return ret;
+
+ ret = msdc_of_clock_parse(pdev, host);
+ if (ret)
+- goto host_free;
++ return ret;
+
+ host->reset = devm_reset_control_get_optional_exclusive(&pdev->dev,
+ "hrst");
+- if (IS_ERR(host->reset)) {
+- ret = PTR_ERR(host->reset);
+- goto host_free;
+- }
++ if (IS_ERR(host->reset))
++ return PTR_ERR(host->reset);
+
+ /* only eMMC has crypto property */
+ if (!(mmc->caps2 & MMC_CAP2_NO_MMC)) {
+ host->crypto_clk = devm_clk_get_optional(&pdev->dev, "crypto");
+ if (IS_ERR(host->crypto_clk))
+- host->crypto_clk = NULL;
+- else
++ return PTR_ERR(host->crypto_clk);
++ else if (host->crypto_clk)
+ mmc->caps2 |= MMC_CAP2_CRYPTO;
+ }
+
+ host->irq = platform_get_irq(pdev, 0);
+- if (host->irq < 0) {
+- ret = host->irq;
+- goto host_free;
+- }
++ if (host->irq < 0)
++ return host->irq;
+
+ host->pinctrl = devm_pinctrl_get(&pdev->dev);
+- if (IS_ERR(host->pinctrl)) {
+- ret = PTR_ERR(host->pinctrl);
+- dev_err(&pdev->dev, "Cannot find pinctrl!\n");
+- goto host_free;
+- }
++ if (IS_ERR(host->pinctrl))
++ return dev_err_probe(&pdev->dev, PTR_ERR(host->pinctrl),
++ "Cannot find pinctrl");
+
+ host->pins_default = pinctrl_lookup_state(host->pinctrl, "default");
+ if (IS_ERR(host->pins_default)) {
+- ret = PTR_ERR(host->pins_default);
+ dev_err(&pdev->dev, "Cannot find pinctrl default!\n");
+- goto host_free;
++ return PTR_ERR(host->pins_default);
+ }
+
+ host->pins_uhs = pinctrl_lookup_state(host->pinctrl, "state_uhs");
+ if (IS_ERR(host->pins_uhs)) {
+- ret = PTR_ERR(host->pins_uhs);
+ dev_err(&pdev->dev, "Cannot find pinctrl uhs!\n");
+- goto host_free;
++ return PTR_ERR(host->pins_uhs);
+ }
+
+ /* Support for SDIO eint irq ? */
+@@ -2885,7 +2875,7 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ ret = msdc_ungate_clock(host);
+ if (ret) {
+ dev_err(&pdev->dev, "Cannot ungate clocks!\n");
+- goto release_mem;
++ goto release_clk;
+ }
+ msdc_init_hw(host);
+
+@@ -2895,14 +2885,14 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ GFP_KERNEL);
+ if (!host->cq_host) {
+ ret = -ENOMEM;
+- goto host_free;
++ goto release;
+ }
+ host->cq_host->caps |= CQHCI_TASK_DESC_SZ_128;
+ host->cq_host->mmio = host->base + 0x800;
+ host->cq_host->ops = &msdc_cmdq_ops;
+ ret = cqhci_init(host->cq_host, mmc, true);
+ if (ret)
+- goto host_free;
++ goto release;
+ mmc->max_segs = 128;
+ /* cqhci 16bit length */
+ /* 0 size, means 65536 so we don't have to -1 here */
+@@ -2929,9 +2919,10 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ end:
+ pm_runtime_disable(host->dev);
+ release:
+- platform_set_drvdata(pdev, NULL);
+ msdc_deinit_hw(host);
++release_clk:
+ msdc_gate_clock(host);
++ platform_set_drvdata(pdev, NULL);
+ release_mem:
+ if (host->dma.gpd)
+ dma_free_coherent(&pdev->dev,
+@@ -2939,11 +2930,8 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ host->dma.gpd, host->dma.gpd_addr);
+ if (host->dma.bd)
+ dma_free_coherent(&pdev->dev,
+- MAX_BD_NUM * sizeof(struct mt_bdma_desc),
+- host->dma.bd, host->dma.bd_addr);
+-host_free:
+- mmc_free_host(mmc);
+-
++ MAX_BD_NUM * sizeof(struct mt_bdma_desc),
++ host->dma.bd, host->dma.bd_addr);
+ return ret;
+ }
+
+@@ -2968,9 +2956,7 @@ static void msdc_drv_remove(struct platform_device *pdev)
+ 2 * sizeof(struct mt_gpdma_desc),
+ host->dma.gpd, host->dma.gpd_addr);
+ dma_free_coherent(&pdev->dev, MAX_BD_NUM * sizeof(struct mt_bdma_desc),
+- host->dma.bd, host->dma.bd_addr);
+-
+- mmc_free_host(mmc);
++ host->dma.bd, host->dma.bd_addr);
+ }
+
+ static void msdc_save_reg(struct msdc_host *host)
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index 8f0bc6dca2b040..ef3a44f2dff16d 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -238,6 +238,7 @@ struct esdhc_platform_data {
+
+ struct esdhc_soc_data {
+ u32 flags;
++ u32 quirks;
+ };
+
+ static const struct esdhc_soc_data esdhc_imx25_data = {
+@@ -309,10 +310,12 @@ static struct esdhc_soc_data usdhc_imx7ulp_data = {
+ | ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200
+ | ESDHC_FLAG_PMQOS | ESDHC_FLAG_HS400
+ | ESDHC_FLAG_STATE_LOST_IN_LPMODE,
++ .quirks = SDHCI_QUIRK_NO_LED,
+ };
+ static struct esdhc_soc_data usdhc_imxrt1050_data = {
+ .flags = ESDHC_FLAG_USDHC | ESDHC_FLAG_STD_TUNING
+ | ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200,
++ .quirks = SDHCI_QUIRK_NO_LED,
+ };
+
+ static struct esdhc_soc_data usdhc_imx8qxp_data = {
+@@ -321,6 +324,7 @@ static struct esdhc_soc_data usdhc_imx8qxp_data = {
+ | ESDHC_FLAG_HS400 | ESDHC_FLAG_HS400_ES
+ | ESDHC_FLAG_STATE_LOST_IN_LPMODE
+ | ESDHC_FLAG_CLK_RATE_LOST_IN_PM_RUNTIME,
++ .quirks = SDHCI_QUIRK_NO_LED,
+ };
+
+ static struct esdhc_soc_data usdhc_imx8mm_data = {
+@@ -328,6 +332,7 @@ static struct esdhc_soc_data usdhc_imx8mm_data = {
+ | ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200
+ | ESDHC_FLAG_HS400 | ESDHC_FLAG_HS400_ES
+ | ESDHC_FLAG_STATE_LOST_IN_LPMODE,
++ .quirks = SDHCI_QUIRK_NO_LED,
+ };
+
+ struct pltfm_imx_data {
+@@ -1687,6 +1692,7 @@ static int sdhci_esdhc_imx_probe(struct platform_device *pdev)
+
+ imx_data->socdata = device_get_match_data(&pdev->dev);
+
++ host->quirks |= imx_data->socdata->quirks;
+ if (imx_data->socdata->flags & ESDHC_FLAG_PMQOS)
+ cpu_latency_qos_add_request(&imx_data->pm_qos_req, 0);
+
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index ed45ed0bdafd96..2e2e15e2d8fb8b 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -21,6 +21,7 @@
+ #include <linux/io.h>
+ #include <linux/iopoll.h>
+ #include <linux/gpio.h>
++#include <linux/gpio/machine.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/pm_qos.h>
+ #include <linux/debugfs.h>
+@@ -1235,6 +1236,29 @@ static const struct sdhci_pci_fixes sdhci_intel_byt_sdio = {
+ .priv_size = sizeof(struct intel_host),
+ };
+
++/* DMI quirks for devices with missing or broken CD GPIO info */
++static const struct gpiod_lookup_table vexia_edu_atla10_cd_gpios = {
++ .dev_id = "0000:00:12.0",
++ .table = {
++ GPIO_LOOKUP("INT33FC:00", 38, "cd", GPIO_ACTIVE_HIGH),
++ { }
++ },
++};
++
++static const struct dmi_system_id sdhci_intel_byt_cd_gpio_override[] = {
++ {
++ /* Vexia Edu Atla 10 tablet 9V version */
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
++ DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
++ /* The above strings are too generic; also match on the BIOS date */
++ DMI_MATCH(DMI_BIOS_DATE, "08/25/2014"),
++ },
++ .driver_data = (void *)&vexia_edu_atla10_cd_gpios,
++ },
++ { }
++};
++
+ static const struct sdhci_pci_fixes sdhci_intel_byt_sd = {
+ #ifdef CONFIG_PM_SLEEP
+ .resume = byt_resume,
+@@ -1253,6 +1277,7 @@ static const struct sdhci_pci_fixes sdhci_intel_byt_sd = {
+ .add_host = byt_add_host,
+ .remove_slot = byt_remove_slot,
+ .ops = &sdhci_intel_byt_ops,
++ .cd_gpio_override = sdhci_intel_byt_cd_gpio_override,
+ .priv_size = sizeof(struct intel_host),
+ };
+
+@@ -2054,6 +2079,42 @@ static const struct dev_pm_ops sdhci_pci_pm_ops = {
+ * *
+ \*****************************************************************************/
+
++static struct gpiod_lookup_table *sdhci_pci_add_gpio_lookup_table(
++ struct sdhci_pci_chip *chip)
++{
++ struct gpiod_lookup_table *cd_gpio_lookup_table;
++ const struct dmi_system_id *dmi_id = NULL;
++ size_t count;
++
++ if (chip->fixes && chip->fixes->cd_gpio_override)
++ dmi_id = dmi_first_match(chip->fixes->cd_gpio_override);
++
++ if (!dmi_id)
++ return NULL;
++
++ cd_gpio_lookup_table = dmi_id->driver_data;
++ for (count = 0; cd_gpio_lookup_table->table[count].key; count++)
++ ;
++
++ cd_gpio_lookup_table = kmemdup(dmi_id->driver_data,
++ /* count + 1 to include the terminating entry */
++ struct_size(cd_gpio_lookup_table, table, count + 1),
++ GFP_KERNEL);
++ if (!cd_gpio_lookup_table)
++ return ERR_PTR(-ENOMEM);
++
++ gpiod_add_lookup_table(cd_gpio_lookup_table);
++ return cd_gpio_lookup_table;
++}
++
++static void sdhci_pci_remove_gpio_lookup_table(struct gpiod_lookup_table *lookup_table)
++{
++ if (lookup_table) {
++ gpiod_remove_lookup_table(lookup_table);
++ kfree(lookup_table);
++ }
++}
++
+ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
+ struct pci_dev *pdev, struct sdhci_pci_chip *chip, int first_bar,
+ int slotno)
+@@ -2129,8 +2190,19 @@ static struct sdhci_pci_slot *sdhci_pci_probe_slot(
+ device_init_wakeup(&pdev->dev, true);
+
+ if (slot->cd_idx >= 0) {
++ struct gpiod_lookup_table *cd_gpio_lookup_table;
++
++ cd_gpio_lookup_table = sdhci_pci_add_gpio_lookup_table(chip);
++ if (IS_ERR(cd_gpio_lookup_table)) {
++ ret = PTR_ERR(cd_gpio_lookup_table);
++ goto remove;
++ }
++
+ ret = mmc_gpiod_request_cd(host->mmc, "cd", slot->cd_idx,
+ slot->cd_override_level, 0);
++
++ sdhci_pci_remove_gpio_lookup_table(cd_gpio_lookup_table);
++
+ if (ret && ret != -EPROBE_DEFER)
+ ret = mmc_gpiod_request_cd(host->mmc, NULL,
+ slot->cd_idx,
+diff --git a/drivers/mmc/host/sdhci-pci.h b/drivers/mmc/host/sdhci-pci.h
+index 153704f812edc8..4973fa8592175e 100644
+--- a/drivers/mmc/host/sdhci-pci.h
++++ b/drivers/mmc/host/sdhci-pci.h
+@@ -156,6 +156,7 @@ struct sdhci_pci_fixes {
+ #endif
+
+ const struct sdhci_ops *ops;
++ const struct dmi_system_id *cd_gpio_override;
+ size_t priv_size;
+ };
+
+diff --git a/drivers/net/can/c_can/c_can_main.c b/drivers/net/can/c_can/c_can_main.c
+index 511615dc334196..cc371d0c9f3c76 100644
+--- a/drivers/net/can/c_can/c_can_main.c
++++ b/drivers/net/can/c_can/c_can_main.c
+@@ -1014,49 +1014,57 @@ static int c_can_handle_bus_err(struct net_device *dev,
+
+ /* propagate the error condition to the CAN stack */
+ skb = alloc_can_err_skb(dev, &cf);
+- if (unlikely(!skb))
+- return 0;
+
+ /* check for 'last error code' which tells us the
+ * type of the last error to occur on the CAN bus
+ */
+- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
++ if (likely(skb))
++ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+ switch (lec_type) {
+ case LEC_STUFF_ERROR:
+ netdev_dbg(dev, "stuff error\n");
+- cf->data[2] |= CAN_ERR_PROT_STUFF;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_STUFF;
+ stats->rx_errors++;
+ break;
+ case LEC_FORM_ERROR:
+ netdev_dbg(dev, "form error\n");
+- cf->data[2] |= CAN_ERR_PROT_FORM;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_FORM;
+ stats->rx_errors++;
+ break;
+ case LEC_ACK_ERROR:
+ netdev_dbg(dev, "ack error\n");
+- cf->data[3] = CAN_ERR_PROT_LOC_ACK;
++ if (likely(skb))
++ cf->data[3] = CAN_ERR_PROT_LOC_ACK;
+ stats->tx_errors++;
+ break;
+ case LEC_BIT1_ERROR:
+ netdev_dbg(dev, "bit1 error\n");
+- cf->data[2] |= CAN_ERR_PROT_BIT1;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_BIT1;
+ stats->tx_errors++;
+ break;
+ case LEC_BIT0_ERROR:
+ netdev_dbg(dev, "bit0 error\n");
+- cf->data[2] |= CAN_ERR_PROT_BIT0;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_BIT0;
+ stats->tx_errors++;
+ break;
+ case LEC_CRC_ERROR:
+ netdev_dbg(dev, "CRC error\n");
+- cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
++ if (likely(skb))
++ cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
+ stats->rx_errors++;
+ break;
+ default:
+ break;
+ }
+
++ if (unlikely(!skb))
++ return 0;
++
+ netif_receive_skb(skb);
+ return 1;
+ }
+diff --git a/drivers/net/can/dev/dev.c b/drivers/net/can/dev/dev.c
+index 6792c14fd7eb00..681643ab37804e 100644
+--- a/drivers/net/can/dev/dev.c
++++ b/drivers/net/can/dev/dev.c
+@@ -468,7 +468,7 @@ static int can_set_termination(struct net_device *ndev, u16 term)
+ else
+ set = 0;
+
+- gpiod_set_value(priv->termination_gpio, set);
++ gpiod_set_value_cansleep(priv->termination_gpio, set);
+
+ return 0;
+ }
+diff --git a/drivers/net/can/ifi_canfd/ifi_canfd.c b/drivers/net/can/ifi_canfd/ifi_canfd.c
+index d32b10900d2f62..c86b57d47085fd 100644
+--- a/drivers/net/can/ifi_canfd/ifi_canfd.c
++++ b/drivers/net/can/ifi_canfd/ifi_canfd.c
+@@ -390,36 +390,55 @@ static int ifi_canfd_handle_lec_err(struct net_device *ndev)
+ return 0;
+
+ priv->can.can_stats.bus_error++;
+- stats->rx_errors++;
+
+ /* Propagate the error condition to the CAN stack. */
+ skb = alloc_can_err_skb(ndev, &cf);
+- if (unlikely(!skb))
+- return 0;
+
+ /* Read the error counter register and check for new errors. */
+- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
++ if (likely(skb))
++ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+- if (errctr & IFI_CANFD_ERROR_CTR_OVERLOAD_FIRST)
+- cf->data[2] |= CAN_ERR_PROT_OVERLOAD;
++ if (errctr & IFI_CANFD_ERROR_CTR_OVERLOAD_FIRST) {
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_OVERLOAD;
++ }
+
+- if (errctr & IFI_CANFD_ERROR_CTR_ACK_ERROR_FIRST)
+- cf->data[3] = CAN_ERR_PROT_LOC_ACK;
++ if (errctr & IFI_CANFD_ERROR_CTR_ACK_ERROR_FIRST) {
++ stats->tx_errors++;
++ if (likely(skb))
++ cf->data[3] = CAN_ERR_PROT_LOC_ACK;
++ }
+
+- if (errctr & IFI_CANFD_ERROR_CTR_BIT0_ERROR_FIRST)
+- cf->data[2] |= CAN_ERR_PROT_BIT0;
++ if (errctr & IFI_CANFD_ERROR_CTR_BIT0_ERROR_FIRST) {
++ stats->tx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_BIT0;
++ }
+
+- if (errctr & IFI_CANFD_ERROR_CTR_BIT1_ERROR_FIRST)
+- cf->data[2] |= CAN_ERR_PROT_BIT1;
++ if (errctr & IFI_CANFD_ERROR_CTR_BIT1_ERROR_FIRST) {
++ stats->tx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_BIT1;
++ }
+
+- if (errctr & IFI_CANFD_ERROR_CTR_STUFF_ERROR_FIRST)
+- cf->data[2] |= CAN_ERR_PROT_STUFF;
++ if (errctr & IFI_CANFD_ERROR_CTR_STUFF_ERROR_FIRST) {
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_STUFF;
++ }
+
+- if (errctr & IFI_CANFD_ERROR_CTR_CRC_ERROR_FIRST)
+- cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
++ if (errctr & IFI_CANFD_ERROR_CTR_CRC_ERROR_FIRST) {
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
++ }
+
+- if (errctr & IFI_CANFD_ERROR_CTR_FORM_ERROR_FIRST)
+- cf->data[2] |= CAN_ERR_PROT_FORM;
++ if (errctr & IFI_CANFD_ERROR_CTR_FORM_ERROR_FIRST) {
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_FORM;
++ }
+
+ /* Reset the error counter, ack the IRQ and re-enable the counter. */
+ writel(IFI_CANFD_ERROR_CTR_ER_RESET, priv->base + IFI_CANFD_ERROR_CTR);
+@@ -427,6 +446,9 @@ static int ifi_canfd_handle_lec_err(struct net_device *ndev)
+ priv->base + IFI_CANFD_INTERRUPT);
+ writel(IFI_CANFD_ERROR_CTR_ER_ENABLE, priv->base + IFI_CANFD_ERROR_CTR);
+
++ if (unlikely(!skb))
++ return 0;
++
+ netif_receive_skb(skb);
+
+ return 1;
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 16e9e7d7527d97..533bcb77c9f934 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -695,47 +695,60 @@ static int m_can_handle_lec_err(struct net_device *dev,
+ u32 timestamp = 0;
+
+ cdev->can.can_stats.bus_error++;
+- stats->rx_errors++;
+
+ /* propagate the error condition to the CAN stack */
+ skb = alloc_can_err_skb(dev, &cf);
+- if (unlikely(!skb))
+- return 0;
+
+ /* check for 'last error code' which tells us the
+ * type of the last error to occur on the CAN bus
+ */
+- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
++ if (likely(skb))
++ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+ switch (lec_type) {
+ case LEC_STUFF_ERROR:
+ netdev_dbg(dev, "stuff error\n");
+- cf->data[2] |= CAN_ERR_PROT_STUFF;
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_STUFF;
+ break;
+ case LEC_FORM_ERROR:
+ netdev_dbg(dev, "form error\n");
+- cf->data[2] |= CAN_ERR_PROT_FORM;
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_FORM;
+ break;
+ case LEC_ACK_ERROR:
+ netdev_dbg(dev, "ack error\n");
+- cf->data[3] = CAN_ERR_PROT_LOC_ACK;
++ stats->tx_errors++;
++ if (likely(skb))
++ cf->data[3] = CAN_ERR_PROT_LOC_ACK;
+ break;
+ case LEC_BIT1_ERROR:
+ netdev_dbg(dev, "bit1 error\n");
+- cf->data[2] |= CAN_ERR_PROT_BIT1;
++ stats->tx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_BIT1;
+ break;
+ case LEC_BIT0_ERROR:
+ netdev_dbg(dev, "bit0 error\n");
+- cf->data[2] |= CAN_ERR_PROT_BIT0;
++ stats->tx_errors++;
++ if (likely(skb))
++ cf->data[2] |= CAN_ERR_PROT_BIT0;
+ break;
+ case LEC_CRC_ERROR:
+ netdev_dbg(dev, "CRC error\n");
+- cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
++ stats->rx_errors++;
++ if (likely(skb))
++ cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
+ break;
+ default:
+ break;
+ }
+
++ if (unlikely(!skb))
++ return 0;
++
+ if (cdev->is_peripheral)
+ timestamp = m_can_get_timestamp(cdev);
+
+diff --git a/drivers/net/can/sja1000/sja1000.c b/drivers/net/can/sja1000/sja1000.c
+index ddb3247948ad2f..4d245857ef1cec 100644
+--- a/drivers/net/can/sja1000/sja1000.c
++++ b/drivers/net/can/sja1000/sja1000.c
+@@ -416,8 +416,6 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ int ret = 0;
+
+ skb = alloc_can_err_skb(dev, &cf);
+- if (skb == NULL)
+- return -ENOMEM;
+
+ txerr = priv->read_reg(priv, SJA1000_TXERR);
+ rxerr = priv->read_reg(priv, SJA1000_RXERR);
+@@ -425,8 +423,11 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ if (isrc & IRQ_DOI) {
+ /* data overrun interrupt */
+ netdev_dbg(dev, "data overrun interrupt\n");
+- cf->can_id |= CAN_ERR_CRTL;
+- cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
++ if (skb) {
++ cf->can_id |= CAN_ERR_CRTL;
++ cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
++ }
++
+ stats->rx_over_errors++;
+ stats->rx_errors++;
+ sja1000_write_cmdreg(priv, CMD_CDO); /* clear bit */
+@@ -452,7 +453,7 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ else
+ state = CAN_STATE_ERROR_ACTIVE;
+ }
+- if (state != CAN_STATE_BUS_OFF) {
++ if (state != CAN_STATE_BUS_OFF && skb) {
+ cf->can_id |= CAN_ERR_CNT;
+ cf->data[6] = txerr;
+ cf->data[7] = rxerr;
+@@ -460,33 +461,38 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ if (isrc & IRQ_BEI) {
+ /* bus error interrupt */
+ priv->can.can_stats.bus_error++;
+- stats->rx_errors++;
+
+ ecc = priv->read_reg(priv, SJA1000_ECC);
++ if (skb) {
++ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+-
+- /* set error type */
+- switch (ecc & ECC_MASK) {
+- case ECC_BIT:
+- cf->data[2] |= CAN_ERR_PROT_BIT;
+- break;
+- case ECC_FORM:
+- cf->data[2] |= CAN_ERR_PROT_FORM;
+- break;
+- case ECC_STUFF:
+- cf->data[2] |= CAN_ERR_PROT_STUFF;
+- break;
+- default:
+- break;
+- }
++ /* set error type */
++ switch (ecc & ECC_MASK) {
++ case ECC_BIT:
++ cf->data[2] |= CAN_ERR_PROT_BIT;
++ break;
++ case ECC_FORM:
++ cf->data[2] |= CAN_ERR_PROT_FORM;
++ break;
++ case ECC_STUFF:
++ cf->data[2] |= CAN_ERR_PROT_STUFF;
++ break;
++ default:
++ break;
++ }
+
+- /* set error location */
+- cf->data[3] = ecc & ECC_SEG;
++ /* set error location */
++ cf->data[3] = ecc & ECC_SEG;
++ }
+
+ /* Error occurred during transmission? */
+- if ((ecc & ECC_DIR) == 0)
+- cf->data[2] |= CAN_ERR_PROT_TX;
++ if ((ecc & ECC_DIR) == 0) {
++ stats->tx_errors++;
++ if (skb)
++ cf->data[2] |= CAN_ERR_PROT_TX;
++ } else {
++ stats->rx_errors++;
++ }
+ }
+ if (isrc & IRQ_EPI) {
+ /* error passive interrupt */
+@@ -502,8 +508,10 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ netdev_dbg(dev, "arbitration lost interrupt\n");
+ alc = priv->read_reg(priv, SJA1000_ALC);
+ priv->can.can_stats.arbitration_lost++;
+- cf->can_id |= CAN_ERR_LOSTARB;
+- cf->data[0] = alc & 0x1f;
++ if (skb) {
++ cf->can_id |= CAN_ERR_LOSTARB;
++ cf->data[0] = alc & 0x1f;
++ }
+ }
+
+ if (state != priv->can.state) {
+@@ -516,6 +524,9 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
+ can_bus_off(dev);
+ }
+
++ if (!skb)
++ return -ENOMEM;
++
+ netif_rx(skb);
+
+ return ret;
+diff --git a/drivers/net/can/spi/hi311x.c b/drivers/net/can/spi/hi311x.c
+index 148d974ebb2107..1b9501ee10deb5 100644
+--- a/drivers/net/can/spi/hi311x.c
++++ b/drivers/net/can/spi/hi311x.c
+@@ -671,9 +671,9 @@ static irqreturn_t hi3110_can_ist(int irq, void *dev_id)
+ tx_state = txerr >= rxerr ? new_state : 0;
+ rx_state = txerr <= rxerr ? new_state : 0;
+ can_change_state(net, cf, tx_state, rx_state);
+- netif_rx(skb);
+
+ if (new_state == CAN_STATE_BUS_OFF) {
++ netif_rx(skb);
+ can_bus_off(net);
+ if (priv->can.restart_ms == 0) {
+ priv->force_quit = 1;
+@@ -684,6 +684,7 @@ static irqreturn_t hi3110_can_ist(int irq, void *dev_id)
+ cf->can_id |= CAN_ERR_CNT;
+ cf->data[6] = txerr;
+ cf->data[7] = rxerr;
++ netif_rx(skb);
+ }
+ }
+
+@@ -696,27 +697,38 @@ static irqreturn_t hi3110_can_ist(int irq, void *dev_id)
+ /* Check for protocol errors */
+ if (eflag & HI3110_ERR_PROTOCOL_MASK) {
+ skb = alloc_can_err_skb(net, &cf);
+- if (!skb)
+- break;
++ if (skb)
++ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+ priv->can.can_stats.bus_error++;
+- priv->net->stats.rx_errors++;
+- if (eflag & HI3110_ERR_BITERR)
+- cf->data[2] |= CAN_ERR_PROT_BIT;
+- else if (eflag & HI3110_ERR_FRMERR)
+- cf->data[2] |= CAN_ERR_PROT_FORM;
+- else if (eflag & HI3110_ERR_STUFERR)
+- cf->data[2] |= CAN_ERR_PROT_STUFF;
+- else if (eflag & HI3110_ERR_CRCERR)
+- cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ;
+- else if (eflag & HI3110_ERR_ACKERR)
+- cf->data[3] |= CAN_ERR_PROT_LOC_ACK;
+-
+- cf->data[6] = hi3110_read(spi, HI3110_READ_TEC);
+- cf->data[7] = hi3110_read(spi, HI3110_READ_REC);
++ if (eflag & HI3110_ERR_BITERR) {
++ priv->net->stats.tx_errors++;
++ if (skb)
++ cf->data[2] |= CAN_ERR_PROT_BIT;
++ } else if (eflag & HI3110_ERR_FRMERR) {
++ priv->net->stats.rx_errors++;
++ if (skb)
++ cf->data[2] |= CAN_ERR_PROT_FORM;
++ } else if (eflag & HI3110_ERR_STUFERR) {
++ priv->net->stats.rx_errors++;
++ if (skb)
++ cf->data[2] |= CAN_ERR_PROT_STUFF;
++ } else if (eflag & HI3110_ERR_CRCERR) {
++ priv->net->stats.rx_errors++;
++ if (skb)
++ cf->data[3] |= CAN_ERR_PROT_LOC_CRC_SEQ;
++ } else if (eflag & HI3110_ERR_ACKERR) {
++ priv->net->stats.tx_errors++;
++ if (skb)
++ cf->data[3] |= CAN_ERR_PROT_LOC_ACK;
++ }
++
+ netdev_dbg(priv->net, "Bus Error\n");
+- netif_rx(skb);
++ if (skb) {
++ cf->data[6] = hi3110_read(spi, HI3110_READ_TEC);
++ cf->data[7] = hi3110_read(spi, HI3110_READ_REC);
++ netif_rx(skb);
++ }
+ }
+ }
+
+diff --git a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
+index d3ac865933fdf6..e94321849fd7e6 100644
+--- a/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
++++ b/drivers/net/can/spi/mcp251xfd/mcp251xfd-tef.c
+@@ -21,6 +21,11 @@ static inline bool mcp251xfd_tx_fifo_sta_empty(u32 fifo_sta)
+ return fifo_sta & MCP251XFD_REG_FIFOSTA_TFERFFIF;
+ }
+
++static inline bool mcp251xfd_tx_fifo_sta_less_than_half_full(u32 fifo_sta)
++{
++ return fifo_sta & MCP251XFD_REG_FIFOSTA_TFHRFHIF;
++}
++
+ static inline int
+ mcp251xfd_tef_tail_get_from_chip(const struct mcp251xfd_priv *priv,
+ u8 *tef_tail)
+@@ -147,7 +152,29 @@ mcp251xfd_get_tef_len(struct mcp251xfd_priv *priv, u8 *len_p)
+ BUILD_BUG_ON(sizeof(tx_ring->obj_num) != sizeof(len));
+
+ len = (chip_tx_tail << shift) - (tail << shift);
+- *len_p = len >> shift;
++ len >>= shift;
++
++ /* According to mcp2518fd erratum DS80000789E 6., the FIFOCI
++ * bits of a FIFOSTA register (here: the TX-FIFO tail index)
++ * might be corrupted.
++ *
++ * However, here it seems the bit indicating that the TX-FIFO
++ * is empty (MCP251XFD_REG_FIFOSTA_TFERFFIF) is not correct,
++ * while the TX-FIFO tail index is.
++ *
++ * We assume the TX-FIFO is empty, i.e. all pending CAN frames
++ * have been sent, if:
++ * - Chip's head and tail index are equal (len == 0).
++ * - The TX-FIFO is less than half full.
++ * (The TX-FIFO empty case has already been checked at the
++ * beginning of this function.)
++ * - No free buffers in the TX ring.
++ */
++ if (len == 0 && mcp251xfd_tx_fifo_sta_less_than_half_full(fifo_sta) &&
++ mcp251xfd_get_tx_free(tx_ring) == 0)
++ len = tx_ring->obj_num;
++
++ *len_p = len;
+
+ return 0;
+ }
+diff --git a/drivers/net/can/sun4i_can.c b/drivers/net/can/sun4i_can.c
+index 360158c295d348..4311c1f0eafd8d 100644
+--- a/drivers/net/can/sun4i_can.c
++++ b/drivers/net/can/sun4i_can.c
+@@ -579,11 +579,9 @@ static int sun4i_can_err(struct net_device *dev, u8 isrc, u8 status)
+ /* bus error interrupt */
+ netdev_dbg(dev, "bus error interrupt\n");
+ priv->can.can_stats.bus_error++;
+- stats->rx_errors++;
++ ecc = readl(priv->base + SUN4I_REG_STA_ADDR);
+
+ if (likely(skb)) {
+- ecc = readl(priv->base + SUN4I_REG_STA_ADDR);
+-
+ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+ switch (ecc & SUN4I_STA_MASK_ERR) {
+@@ -601,9 +599,15 @@ static int sun4i_can_err(struct net_device *dev, u8 isrc, u8 status)
+ >> 16;
+ break;
+ }
+- /* error occurred during transmission? */
+- if ((ecc & SUN4I_STA_ERR_DIR) == 0)
++ }
++
++ /* error occurred during transmission? */
++ if ((ecc & SUN4I_STA_ERR_DIR) == 0) {
++ if (likely(skb))
+ cf->data[2] |= CAN_ERR_PROT_TX;
++ stats->tx_errors++;
++ } else {
++ stats->rx_errors++;
+ }
+ }
+ if (isrc & SUN4I_INT_ERR_PASSIVE) {
+@@ -629,10 +633,10 @@ static int sun4i_can_err(struct net_device *dev, u8 isrc, u8 status)
+ tx_state = txerr >= rxerr ? state : 0;
+ rx_state = txerr <= rxerr ? state : 0;
+
+- if (likely(skb))
+- can_change_state(dev, cf, tx_state, rx_state);
+- else
+- priv->can.state = state;
++ /* The skb allocation might fail, but can_change_state()
++ * handles cf == NULL.
++ */
++ can_change_state(dev, cf, tx_state, rx_state);
+ if (state == CAN_STATE_BUS_OFF)
+ can_bus_off(dev);
+ }
+diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
+index 050c0b49938a42..5355bac4dccbe0 100644
+--- a/drivers/net/can/usb/ems_usb.c
++++ b/drivers/net/can/usb/ems_usb.c
+@@ -335,15 +335,14 @@ static void ems_usb_rx_err(struct ems_usb *dev, struct ems_cpc_msg *msg)
+ struct net_device_stats *stats = &dev->netdev->stats;
+
+ skb = alloc_can_err_skb(dev->netdev, &cf);
+- if (skb == NULL)
+- return;
+
+ if (msg->type == CPC_MSG_TYPE_CAN_STATE) {
+ u8 state = msg->msg.can_state;
+
+ if (state & SJA1000_SR_BS) {
+ dev->can.state = CAN_STATE_BUS_OFF;
+- cf->can_id |= CAN_ERR_BUSOFF;
++ if (skb)
++ cf->can_id |= CAN_ERR_BUSOFF;
+
+ dev->can.can_stats.bus_off++;
+ can_bus_off(dev->netdev);
+@@ -361,44 +360,53 @@ static void ems_usb_rx_err(struct ems_usb *dev, struct ems_cpc_msg *msg)
+
+ /* bus error interrupt */
+ dev->can.can_stats.bus_error++;
+- stats->rx_errors++;
+
+- cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
++ if (skb) {
++ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+- switch (ecc & SJA1000_ECC_MASK) {
+- case SJA1000_ECC_BIT:
+- cf->data[2] |= CAN_ERR_PROT_BIT;
+- break;
+- case SJA1000_ECC_FORM:
+- cf->data[2] |= CAN_ERR_PROT_FORM;
+- break;
+- case SJA1000_ECC_STUFF:
+- cf->data[2] |= CAN_ERR_PROT_STUFF;
+- break;
+- default:
+- cf->data[3] = ecc & SJA1000_ECC_SEG;
+- break;
++ switch (ecc & SJA1000_ECC_MASK) {
++ case SJA1000_ECC_BIT:
++ cf->data[2] |= CAN_ERR_PROT_BIT;
++ break;
++ case SJA1000_ECC_FORM:
++ cf->data[2] |= CAN_ERR_PROT_FORM;
++ break;
++ case SJA1000_ECC_STUFF:
++ cf->data[2] |= CAN_ERR_PROT_STUFF;
++ break;
++ default:
++ cf->data[3] = ecc & SJA1000_ECC_SEG;
++ break;
++ }
+ }
+
+ /* Error occurred during transmission? */
+- if ((ecc & SJA1000_ECC_DIR) == 0)
+- cf->data[2] |= CAN_ERR_PROT_TX;
++ if ((ecc & SJA1000_ECC_DIR) == 0) {
++ stats->tx_errors++;
++ if (skb)
++ cf->data[2] |= CAN_ERR_PROT_TX;
++ } else {
++ stats->rx_errors++;
++ }
+
+- if (dev->can.state == CAN_STATE_ERROR_WARNING ||
+- dev->can.state == CAN_STATE_ERROR_PASSIVE) {
++ if (skb && (dev->can.state == CAN_STATE_ERROR_WARNING ||
++ dev->can.state == CAN_STATE_ERROR_PASSIVE)) {
+ cf->can_id |= CAN_ERR_CRTL;
+ cf->data[1] = (txerr > rxerr) ?
+ CAN_ERR_CRTL_TX_PASSIVE : CAN_ERR_CRTL_RX_PASSIVE;
+ }
+ } else if (msg->type == CPC_MSG_TYPE_OVERRUN) {
+- cf->can_id |= CAN_ERR_CRTL;
+- cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
++ if (skb) {
++ cf->can_id |= CAN_ERR_CRTL;
++ cf->data[1] = CAN_ERR_CRTL_RX_OVERFLOW;
++ }
+
+ stats->rx_over_errors++;
+ stats->rx_errors++;
+ }
+
+- netif_rx(skb);
++ if (skb)
++ netif_rx(skb);
+ }
+
+ /*
+diff --git a/drivers/net/can/usb/f81604.c b/drivers/net/can/usb/f81604.c
+index bc0c8903fe7794..e0cfa1460b0b83 100644
+--- a/drivers/net/can/usb/f81604.c
++++ b/drivers/net/can/usb/f81604.c
+@@ -526,7 +526,6 @@ static void f81604_handle_can_bus_errors(struct f81604_port_priv *priv,
+ netdev_dbg(netdev, "bus error interrupt\n");
+
+ priv->can.can_stats.bus_error++;
+- stats->rx_errors++;
+
+ if (skb) {
+ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+@@ -548,10 +547,15 @@ static void f81604_handle_can_bus_errors(struct f81604_port_priv *priv,
+
+ /* set error location */
+ cf->data[3] = data->ecc & F81604_SJA1000_ECC_SEG;
++ }
+
+- /* Error occurred during transmission? */
+- if ((data->ecc & F81604_SJA1000_ECC_DIR) == 0)
++ /* Error occurred during transmission? */
++ if ((data->ecc & F81604_SJA1000_ECC_DIR) == 0) {
++ stats->tx_errors++;
++ if (skb)
+ cf->data[2] |= CAN_ERR_PROT_TX;
++ } else {
++ stats->rx_errors++;
+ }
+
+ set_bit(F81604_CLEAR_ECC, &priv->clear_flags);
+diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
+index bc86e9b329fd10..b6f4de375df75d 100644
+--- a/drivers/net/can/usb/gs_usb.c
++++ b/drivers/net/can/usb/gs_usb.c
+@@ -43,9 +43,6 @@
+ #define USB_XYLANTA_SAINT3_VENDOR_ID 0x16d0
+ #define USB_XYLANTA_SAINT3_PRODUCT_ID 0x0f30
+
+-#define GS_USB_ENDPOINT_IN 1
+-#define GS_USB_ENDPOINT_OUT 2
+-
+ /* Timestamp 32 bit timer runs at 1 MHz (1 µs tick). Worker accounts
+ * for timer overflow (will be after ~71 minutes)
+ */
+@@ -336,6 +333,9 @@ struct gs_usb {
+
+ unsigned int hf_size_rx;
+ u8 active_channels;
++
++ unsigned int pipe_in;
++ unsigned int pipe_out;
+ };
+
+ /* 'allocate' a tx context.
+@@ -687,7 +687,7 @@ static void gs_usb_receive_bulk_callback(struct urb *urb)
+
+ resubmit_urb:
+ usb_fill_bulk_urb(urb, parent->udev,
+- usb_rcvbulkpipe(parent->udev, GS_USB_ENDPOINT_IN),
++ parent->pipe_in,
+ hf, dev->parent->hf_size_rx,
+ gs_usb_receive_bulk_callback, parent);
+
+@@ -819,7 +819,7 @@ static netdev_tx_t gs_can_start_xmit(struct sk_buff *skb,
+ }
+
+ usb_fill_bulk_urb(urb, dev->udev,
+- usb_sndbulkpipe(dev->udev, GS_USB_ENDPOINT_OUT),
++ dev->parent->pipe_out,
+ hf, dev->hf_size_tx,
+ gs_usb_xmit_callback, txc);
+
+@@ -925,8 +925,7 @@ static int gs_can_open(struct net_device *netdev)
+ /* fill, anchor, and submit rx urb */
+ usb_fill_bulk_urb(urb,
+ dev->udev,
+- usb_rcvbulkpipe(dev->udev,
+- GS_USB_ENDPOINT_IN),
++ dev->parent->pipe_in,
+ buf,
+ dev->parent->hf_size_rx,
+ gs_usb_receive_bulk_callback, parent);
+@@ -1413,6 +1412,7 @@ static int gs_usb_probe(struct usb_interface *intf,
+ const struct usb_device_id *id)
+ {
+ struct usb_device *udev = interface_to_usbdev(intf);
++ struct usb_endpoint_descriptor *ep_in, *ep_out;
+ struct gs_host_frame *hf;
+ struct gs_usb *parent;
+ struct gs_host_config hconf = {
+@@ -1422,6 +1422,13 @@ static int gs_usb_probe(struct usb_interface *intf,
+ unsigned int icount, i;
+ int rc;
+
++ rc = usb_find_common_endpoints(intf->cur_altsetting,
++ &ep_in, &ep_out, NULL, NULL);
++ if (rc) {
++ dev_err(&intf->dev, "Required endpoints not found\n");
++ return rc;
++ }
++
+ /* send host config */
+ rc = usb_control_msg_send(udev, 0,
+ GS_USB_BREQ_HOST_FORMAT,
+@@ -1466,6 +1473,10 @@ static int gs_usb_probe(struct usb_interface *intf,
+ usb_set_intfdata(intf, parent);
+ parent->udev = udev;
+
++ /* store the detected endpoints */
++ parent->pipe_in = usb_rcvbulkpipe(parent->udev, ep_in->bEndpointAddress);
++ parent->pipe_out = usb_sndbulkpipe(parent->udev, ep_out->bEndpointAddress);
++
+ for (i = 0; i < icount; i++) {
+ unsigned int hf_size_rx = 0;
+
+diff --git a/drivers/net/dsa/qca/qca8k-8xxx.c b/drivers/net/dsa/qca/qca8k-8xxx.c
+index f8d8c70642c4ff..59b4a7240b5832 100644
+--- a/drivers/net/dsa/qca/qca8k-8xxx.c
++++ b/drivers/net/dsa/qca/qca8k-8xxx.c
+@@ -673,7 +673,7 @@ qca8k_phy_eth_command(struct qca8k_priv *priv, bool read, int phy,
+ * We therefore need to lock the MDIO bus onto which the switch is
+ * connected.
+ */
+- mutex_lock(&priv->bus->mdio_lock);
++ mutex_lock_nested(&priv->bus->mdio_lock, MDIO_MUTEX_NESTED);
+
+ /* Actually start the request:
+ * 1. Send mdio master packet
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+index 20ba14eb87e00b..b901ecb57f2552 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+@@ -1193,10 +1193,14 @@ static int bnxt_grxclsrule(struct bnxt *bp, struct ethtool_rxnfc *cmd)
+ }
+ }
+
+- if (fltr->base.flags & BNXT_ACT_DROP)
++ if (fltr->base.flags & BNXT_ACT_DROP) {
+ fs->ring_cookie = RX_CLS_FLOW_DISC;
+- else
++ } else if (fltr->base.flags & BNXT_ACT_RSS_CTX) {
++ fs->flow_type |= FLOW_RSS;
++ cmd->rss_context = fltr->base.fw_vnic_id;
++ } else {
+ fs->ring_cookie = fltr->base.rxq;
++ }
+ rc = 0;
+
+ fltr_err:
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index c09370eab319b2..16a7908c79f703 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -28,6 +28,9 @@ EXPORT_SYMBOL_GPL(enetc_port_mac_wr);
+ static void enetc_change_preemptible_tcs(struct enetc_ndev_priv *priv,
+ u8 preemptible_tcs)
+ {
++ if (!(priv->si->hw_features & ENETC_SI_F_QBU))
++ return;
++
+ priv->preemptible_tcs = preemptible_tcs;
+ enetc_mm_commit_preemptible_tcs(priv);
+ }
+diff --git a/drivers/net/ethernet/freescale/fec_mpc52xx_phy.c b/drivers/net/ethernet/freescale/fec_mpc52xx_phy.c
+index 39689826cc8ffc..ce253aac5344cc 100644
+--- a/drivers/net/ethernet/freescale/fec_mpc52xx_phy.c
++++ b/drivers/net/ethernet/freescale/fec_mpc52xx_phy.c
+@@ -94,7 +94,7 @@ static int mpc52xx_fec_mdio_probe(struct platform_device *of)
+ goto out_free;
+ }
+
+- snprintf(bus->id, MII_BUS_ID_SIZE, "%x", res.start);
++ snprintf(bus->id, MII_BUS_ID_SIZE, "%pa", &res.start);
+ bus->priv = priv;
+
+ bus->parent = dev;
+diff --git a/drivers/net/ethernet/freescale/fman/fman.c b/drivers/net/ethernet/freescale/fman/fman.c
+index d96028f01770cf..fb416d60dcd727 100644
+--- a/drivers/net/ethernet/freescale/fman/fman.c
++++ b/drivers/net/ethernet/freescale/fman/fman.c
+@@ -24,7 +24,6 @@
+
+ /* General defines */
+ #define FMAN_LIODN_TBL 64 /* size of LIODN table */
+-#define MAX_NUM_OF_MACS 10
+ #define FM_NUM_OF_FMAN_CTRL_EVENT_REGS 4
+ #define BASE_RX_PORTID 0x08
+ #define BASE_TX_PORTID 0x28
+diff --git a/drivers/net/ethernet/freescale/fman/fman.h b/drivers/net/ethernet/freescale/fman/fman.h
+index 2ea575a46675b0..74eb62eba0d7ff 100644
+--- a/drivers/net/ethernet/freescale/fman/fman.h
++++ b/drivers/net/ethernet/freescale/fman/fman.h
+@@ -74,6 +74,9 @@
+ #define BM_MAX_NUM_OF_POOLS 64 /* Buffers pools */
+ #define FMAN_PORT_MAX_EXT_POOLS_NUM 8 /* External BM pools per Rx port */
+
++/* General defines */
++#define MAX_NUM_OF_MACS 10
++
+ struct fman; /* FMan data */
+
+ /* Enum for defining port types */
+diff --git a/drivers/net/ethernet/freescale/fman/mac.c b/drivers/net/ethernet/freescale/fman/mac.c
+index 11da139082e1bf..1916a2ac48b9f1 100644
+--- a/drivers/net/ethernet/freescale/fman/mac.c
++++ b/drivers/net/ethernet/freescale/fman/mac.c
+@@ -259,6 +259,11 @@ static int mac_probe(struct platform_device *_of_dev)
+ err = -EINVAL;
+ goto _return_dev_put;
+ }
++ if (val >= MAX_NUM_OF_MACS) {
++ dev_err(dev, "cell-index value is too big for %pOF\n", mac_node);
++ err = -EINVAL;
++ goto _return_dev_put;
++ }
+ priv->cell_index = (u8)val;
+
+ /* Get the MAC address */
+diff --git a/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c b/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c
+index 2e210a00355843..249b482e32d3bd 100644
+--- a/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c
++++ b/drivers/net/ethernet/freescale/fs_enet/mii-bitbang.c
+@@ -123,7 +123,7 @@ static int fs_mii_bitbang_init(struct mii_bus *bus, struct device_node *np)
+ * we get is an int, and the odds of multiple bitbang mdio buses
+ * is low enough that it's not worth going too crazy.
+ */
+- snprintf(bus->id, MII_BUS_ID_SIZE, "%x", res.start);
++ snprintf(bus->id, MII_BUS_ID_SIZE, "%pa", &res.start);
+
+ data = of_get_property(np, "fsl,mdio-pin", &len);
+ if (!data || len != 4)
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index 009716a12a26af..f1324e25b2af1c 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -542,7 +542,8 @@ ice_aq_get_netlist_node(struct ice_hw *hw, struct ice_aqc_get_link_topo *cmd,
+ /**
+ * ice_find_netlist_node
+ * @hw: pointer to the hw struct
+- * @node_type_ctx: type of netlist node to look for
++ * @node_type: type of netlist node to look for
++ * @ctx: context of the search
+ * @node_part_number: node part number to look for
+ * @node_handle: output parameter if node found - optional
+ *
+@@ -552,10 +553,12 @@ ice_aq_get_netlist_node(struct ice_hw *hw, struct ice_aqc_get_link_topo *cmd,
+ * valid if the function returns zero, and should be ignored on any non-zero
+ * return value.
+ *
+- * Returns: 0 if the node is found, -ENOENT if no handle was found, and
+- * a negative error code on failure to access the AQ.
++ * Return:
++ * * 0 if the node is found,
++ * * -ENOENT if no handle was found,
++ * * negative error code on failure to access the AQ.
+ */
+-static int ice_find_netlist_node(struct ice_hw *hw, u8 node_type_ctx,
++static int ice_find_netlist_node(struct ice_hw *hw, u8 node_type, u8 ctx,
+ u8 node_part_number, u16 *node_handle)
+ {
+ u8 idx;
+@@ -566,8 +569,8 @@ static int ice_find_netlist_node(struct ice_hw *hw, u8 node_type_ctx,
+ int status;
+
+ cmd.addr.topo_params.node_type_ctx =
+- FIELD_PREP(ICE_AQC_LINK_TOPO_NODE_TYPE_M,
+- node_type_ctx);
++ FIELD_PREP(ICE_AQC_LINK_TOPO_NODE_TYPE_M, node_type) |
++ FIELD_PREP(ICE_AQC_LINK_TOPO_NODE_CTX_M, ctx);
+ cmd.addr.topo_params.index = idx;
+
+ status = ice_aq_get_netlist_node(hw, &cmd,
+@@ -2726,9 +2729,11 @@ bool ice_is_pf_c827(struct ice_hw *hw)
+ */
+ bool ice_is_phy_rclk_in_netlist(struct ice_hw *hw)
+ {
+- if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL,
++ if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_PHY,
++ ICE_AQC_LINK_TOPO_NODE_CTX_PORT,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_C827, NULL) &&
+- ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL,
++ ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_PHY,
++ ICE_AQC_LINK_TOPO_NODE_CTX_PORT,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_E822_PHY, NULL))
+ return false;
+
+@@ -2744,6 +2749,7 @@ bool ice_is_phy_rclk_in_netlist(struct ice_hw *hw)
+ bool ice_is_clock_mux_in_netlist(struct ice_hw *hw)
+ {
+ if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_MUX,
++ ICE_AQC_LINK_TOPO_NODE_CTX_GLOBAL,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_GEN_CLK_MUX,
+ NULL))
+ return false;
+@@ -2764,12 +2770,14 @@ bool ice_is_clock_mux_in_netlist(struct ice_hw *hw)
+ bool ice_is_cgu_in_netlist(struct ice_hw *hw)
+ {
+ if (!ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL,
++ ICE_AQC_LINK_TOPO_NODE_CTX_GLOBAL,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_ZL30632_80032,
+ NULL)) {
+ hw->cgu_part_number = ICE_AQC_GET_LINK_TOPO_NODE_NR_ZL30632_80032;
+ return true;
+ } else if (!ice_find_netlist_node(hw,
+ ICE_AQC_LINK_TOPO_NODE_TYPE_CLK_CTRL,
++ ICE_AQC_LINK_TOPO_NODE_CTX_GLOBAL,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_SI5383_5384,
+ NULL)) {
+ hw->cgu_part_number = ICE_AQC_GET_LINK_TOPO_NODE_NR_SI5383_5384;
+@@ -2788,6 +2796,7 @@ bool ice_is_cgu_in_netlist(struct ice_hw *hw)
+ bool ice_is_gps_in_netlist(struct ice_hw *hw)
+ {
+ if (ice_find_netlist_node(hw, ICE_AQC_LINK_TOPO_NODE_TYPE_GPS,
++ ICE_AQC_LINK_TOPO_NODE_CTX_GLOBAL,
+ ICE_AQC_GET_LINK_TOPO_NODE_NR_GEN_GPS, NULL))
+ return false;
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index b1e7727b8677f9..8f2e758c394277 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -6361,10 +6361,12 @@ ice_set_vlan_filtering_features(struct ice_vsi *vsi, netdev_features_t features)
+ int err = 0;
+
+ /* support Single VLAN Mode (SVM) and Double VLAN Mode (DVM) by checking
+- * if either bit is set
++ * if either bit is set. In switchdev mode Rx filtering should never be
++ * enabled.
+ */
+- if (features &
+- (NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_STAG_FILTER))
++ if ((features &
++ (NETIF_F_HW_VLAN_CTAG_FILTER | NETIF_F_HW_VLAN_STAG_FILTER)) &&
++ !ice_is_eswitch_mode_switchdev(vsi->back))
+ err = vlan_ops->ena_rx_filtering(vsi);
+ else
+ err = vlan_ops->dis_rx_filtering(vsi);
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+index ec8db830ac73ae..3816e45b6ab44a 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+@@ -1495,7 +1495,8 @@ static int ice_read_ptp_tstamp_eth56g(struct ice_hw *hw, u8 port, u8 idx,
+ * lower 8 bits in the low register, and the upper 32 bits in the high
+ * register.
+ */
+- *tstamp = ((u64)hi) << TS_PHY_HIGH_S | ((u64)lo & TS_PHY_LOW_M);
++ *tstamp = FIELD_PREP(TS_PHY_HIGH_M, hi) |
++ FIELD_PREP(TS_PHY_LOW_M, lo);
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+index 6cedc1a906afb6..4c8b8457134427 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+@@ -663,9 +663,8 @@ static inline u64 ice_get_base_incval(struct ice_hw *hw)
+ #define TS_HIGH_M 0xFF
+ #define TS_HIGH_S 32
+
+-#define TS_PHY_LOW_M 0xFF
+-#define TS_PHY_HIGH_M 0xFFFFFFFF
+-#define TS_PHY_HIGH_S 8
++#define TS_PHY_LOW_M GENMASK(7, 0)
++#define TS_PHY_HIGH_M GENMASK_ULL(39, 8)
+
+ #define BYTES_PER_IDX_ADDR_L_U 8
+ #define BYTES_PER_IDX_ADDR_L 4
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index d4e6f0e104872d..60d15b3e6e2faa 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -2448,6 +2448,7 @@ static void idpf_tx_splitq_map(struct idpf_tx_queue *tx_q,
+ * rest of the packet.
+ */
+ tx_buf->type = LIBETH_SQE_EMPTY;
++ idpf_tx_buf_compl_tag(tx_buf) = params->compl_tag;
+
+ /* Adjust the DMA offset and the remaining size of the
+ * fragment. On the first iteration of this loop,
+diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
+index f1d0881687233e..18284a838e2424 100644
+--- a/drivers/net/ethernet/intel/igb/igb_main.c
++++ b/drivers/net/ethernet/intel/igb/igb_main.c
+@@ -637,6 +637,10 @@ static int __init igb_init_module(void)
+ dca_register_notify(&dca_notifier);
+ #endif
+ ret = pci_register_driver(&igb_driver);
++#ifdef CONFIG_IGB_DCA
++ if (ret)
++ dca_unregister_notify(&dca_notifier);
++#endif
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h
+index 6493abf189de5e..6639069ad52834 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h
+@@ -194,6 +194,8 @@ u32 ixgbe_read_reg(struct ixgbe_hw *hw, u32 reg);
+ dev_err(&adapter->pdev->dev, format, ## arg)
+ #define e_dev_notice(format, arg...) \
+ dev_notice(&adapter->pdev->dev, format, ## arg)
++#define e_dbg(msglvl, format, arg...) \
++ netif_dbg(adapter, msglvl, adapter->netdev, format, ## arg)
+ #define e_info(msglvl, format, arg...) \
+ netif_info(adapter, msglvl, adapter->netdev, format, ## arg)
+ #define e_err(msglvl, format, arg...) \
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
+index 14aa2ca51f70ec..81179c60af4e01 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
+@@ -40,7 +40,7 @@
+ #define IXGBE_SFF_1GBASESX_CAPABLE 0x1
+ #define IXGBE_SFF_1GBASELX_CAPABLE 0x2
+ #define IXGBE_SFF_1GBASET_CAPABLE 0x8
+-#define IXGBE_SFF_BASEBX10_CAPABLE 0x64
++#define IXGBE_SFF_BASEBX10_CAPABLE 0x40
+ #define IXGBE_SFF_10GBASESR_CAPABLE 0x10
+ #define IXGBE_SFF_10GBASELR_CAPABLE 0x20
+ #define IXGBE_SFF_SOFT_RS_SELECT_MASK 0x8
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index e71715f5da2287..20415c1238ef8d 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -1047,7 +1047,7 @@ static int ixgbe_negotiate_vf_api(struct ixgbe_adapter *adapter,
+ break;
+ }
+
+- e_info(drv, "VF %d requested invalid api version %u\n", vf, api);
++ e_dbg(drv, "VF %d requested unsupported api version %u\n", vf, api);
+
+ return -1;
+ }
+diff --git a/drivers/net/ethernet/intel/ixgbevf/ipsec.c b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+index 66cf17f1940820..f804b35d79c726 100644
+--- a/drivers/net/ethernet/intel/ixgbevf/ipsec.c
++++ b/drivers/net/ethernet/intel/ixgbevf/ipsec.c
+@@ -629,7 +629,6 @@ void ixgbevf_init_ipsec_offload(struct ixgbevf_adapter *adapter)
+
+ switch (adapter->hw.api_version) {
+ case ixgbe_mbox_api_14:
+- case ixgbe_mbox_api_15:
+ break;
+ default:
+ return;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+index 878cbdbf5ec8b4..e7e01f3298efb0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/tc_tun_encap.c
+@@ -5,6 +5,7 @@
+ #include <net/nexthop.h>
+ #include <net/ip_tunnels.h>
+ #include "tc_tun_encap.h"
++#include "fs_core.h"
+ #include "en_tc.h"
+ #include "tc_tun.h"
+ #include "rep/tc.h"
+@@ -24,10 +25,18 @@ static int mlx5e_set_int_port_tunnel(struct mlx5e_priv *priv,
+
+ route_dev = dev_get_by_index(dev_net(e->out_dev), e->route_dev_ifindex);
+
+- if (!route_dev || !netif_is_ovs_master(route_dev) ||
+- attr->parse_attr->filter_dev == e->out_dev)
++ if (!route_dev || !netif_is_ovs_master(route_dev))
+ goto out;
+
++ if (priv->mdev->priv.steering->mode == MLX5_FLOW_STEERING_MODE_DMFS &&
++ mlx5e_eswitch_uplink_rep(attr->parse_attr->filter_dev) &&
++ (attr->esw_attr->dests[out_index].flags & MLX5_ESW_DEST_ENCAP)) {
++ mlx5_core_warn(priv->mdev,
++ "Matching on external port with encap + fwd to table actions is not allowed for firmware steering\n");
++ err = -EINVAL;
++ goto out;
++ }
++
+ err = mlx5e_set_fwd_to_int_port_actions(priv, attr, e->route_dev_ifindex,
+ MLX5E_TC_INT_PORT_EGRESS,
+ &attr->action, out_index);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 13a3fa8dc0cb09..c14bef83d84d0f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -2652,11 +2652,11 @@ void mlx5e_trigger_napi_sched(struct napi_struct *napi)
+
+ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
+ struct mlx5e_params *params,
+- struct mlx5e_channel_param *cparam,
+ struct xsk_buff_pool *xsk_pool,
+ struct mlx5e_channel **cp)
+ {
+ struct net_device *netdev = priv->netdev;
++ struct mlx5e_channel_param *cparam;
+ struct mlx5_core_dev *mdev;
+ struct mlx5e_xsk_param xsk;
+ struct mlx5e_channel *c;
+@@ -2678,8 +2678,15 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
+ return err;
+
+ c = kvzalloc_node(sizeof(*c), GFP_KERNEL, cpu_to_node(cpu));
+- if (!c)
+- return -ENOMEM;
++ cparam = kvzalloc(sizeof(*cparam), GFP_KERNEL);
++ if (!c || !cparam) {
++ err = -ENOMEM;
++ goto err_free;
++ }
++
++ err = mlx5e_build_channel_param(mdev, params, cparam);
++ if (err)
++ goto err_free;
+
+ c->priv = priv;
+ c->mdev = mdev;
+@@ -2713,6 +2720,7 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
+
+ *cp = c;
+
++ kvfree(cparam);
+ return 0;
+
+ err_close_queues:
+@@ -2721,6 +2729,8 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
+ err_napi_del:
+ netif_napi_del(&c->napi);
+
++err_free:
++ kvfree(cparam);
+ kvfree(c);
+
+ return err;
+@@ -2779,20 +2789,14 @@ static void mlx5e_close_channel(struct mlx5e_channel *c)
+ int mlx5e_open_channels(struct mlx5e_priv *priv,
+ struct mlx5e_channels *chs)
+ {
+- struct mlx5e_channel_param *cparam;
+ int err = -ENOMEM;
+ int i;
+
+ chs->num = chs->params.num_channels;
+
+ chs->c = kcalloc(chs->num, sizeof(struct mlx5e_channel *), GFP_KERNEL);
+- cparam = kvzalloc(sizeof(struct mlx5e_channel_param), GFP_KERNEL);
+- if (!chs->c || !cparam)
+- goto err_free;
+-
+- err = mlx5e_build_channel_param(priv->mdev, &chs->params, cparam);
+- if (err)
+- goto err_free;
++ if (!chs->c)
++ goto err_out;
+
+ for (i = 0; i < chs->num; i++) {
+ struct xsk_buff_pool *xsk_pool = NULL;
+@@ -2800,7 +2804,7 @@ int mlx5e_open_channels(struct mlx5e_priv *priv,
+ if (chs->params.xdp_prog)
+ xsk_pool = mlx5e_xsk_get_pool(&chs->params, chs->params.xsk, i);
+
+- err = mlx5e_open_channel(priv, i, &chs->params, cparam, xsk_pool, &chs->c[i]);
++ err = mlx5e_open_channel(priv, i, &chs->params, xsk_pool, &chs->c[i]);
+ if (err)
+ goto err_close_channels;
+ }
+@@ -2818,7 +2822,6 @@ int mlx5e_open_channels(struct mlx5e_priv *priv,
+ }
+
+ mlx5e_health_channels_update(priv);
+- kvfree(cparam);
+ return 0;
+
+ err_close_ptp:
+@@ -2829,9 +2832,8 @@ int mlx5e_open_channels(struct mlx5e_priv *priv,
+ for (i--; i >= 0; i--)
+ mlx5e_close_channel(chs->c[i]);
+
+-err_free:
+ kfree(chs->c);
+- kvfree(cparam);
++err_out:
+ chs->num = 0;
+ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 6e4f8aaf8d2f21..2eabfcc247c6ae 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -3698,6 +3698,7 @@ void mlx5_fs_core_free(struct mlx5_core_dev *dev)
+ int mlx5_fs_core_alloc(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_flow_steering *steering;
++ char name[80];
+ int err = 0;
+
+ err = mlx5_init_fc_stats(dev);
+@@ -3722,10 +3723,12 @@ int mlx5_fs_core_alloc(struct mlx5_core_dev *dev)
+ else
+ steering->mode = MLX5_FLOW_STEERING_MODE_DMFS;
+
+- steering->fgs_cache = kmem_cache_create("mlx5_fs_fgs",
++ snprintf(name, sizeof(name), "%s-mlx5_fs_fgs", dev_name(dev->device));
++ steering->fgs_cache = kmem_cache_create(name,
+ sizeof(struct mlx5_flow_group), 0,
+ 0, NULL);
+- steering->ftes_cache = kmem_cache_create("mlx5_fs_ftes", sizeof(struct fs_fte), 0,
++ snprintf(name, sizeof(name), "%s-mlx5_fs_ftes", dev_name(dev->device));
++ steering->ftes_cache = kmem_cache_create(name, sizeof(struct fs_fte), 0,
+ 0, NULL);
+ if (!steering->ftes_cache || !steering->fgs_cache) {
+ err = -ENOMEM;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_bwc_complex.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_bwc_complex.c
+index 601fad5fc54a39..ee4058bafe119b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_bwc_complex.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_bwc_complex.c
+@@ -39,6 +39,8 @@ bool mlx5hws_bwc_match_params_is_complex(struct mlx5hws_context *ctx,
+ } else {
+ mlx5hws_err(ctx, "Failed to calculate matcher definer layout\n");
+ }
++ } else {
++ kfree(mt->fc);
+ }
+
+ mlx5hws_match_template_destroy(mt);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_send.c
+index 6d443e6ee8d9e9..08be034bd1e16d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_send.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_send.c
+@@ -990,6 +990,7 @@ static int hws_bwc_send_queues_init(struct mlx5hws_context *ctx)
+ for (i = 0; i < bwc_queues; i++) {
+ mutex_init(&ctx->bwc_send_queue_locks[i]);
+ lockdep_register_key(ctx->bwc_lock_class_keys + i);
++ lockdep_set_class(ctx->bwc_send_queue_locks + i, ctx->bwc_lock_class_keys + i);
+ }
+
+ return 0;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c
+index 947500f8ed7142..7aa1a462a1035b 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.c
+@@ -67,7 +67,7 @@ static bool mlxsw_afk_blocks_check(struct mlxsw_afk *mlxsw_afk)
+
+ for (j = 0; j < block->instances_count; j++) {
+ const struct mlxsw_afk_element_info *elinfo;
+- struct mlxsw_afk_element_inst *elinst;
++ const struct mlxsw_afk_element_inst *elinst;
+
+ elinst = &block->instances[j];
+ elinfo = &mlxsw_afk_element_infos[elinst->element];
+@@ -154,7 +154,7 @@ static void mlxsw_afk_picker_count_hits(struct mlxsw_afk *mlxsw_afk,
+ const struct mlxsw_afk_block *block = &mlxsw_afk->blocks[i];
+
+ for (j = 0; j < block->instances_count; j++) {
+- struct mlxsw_afk_element_inst *elinst;
++ const struct mlxsw_afk_element_inst *elinst;
+
+ elinst = &block->instances[j];
+ if (elinst->element == element) {
+@@ -386,7 +386,7 @@ mlxsw_afk_block_elinst_get(const struct mlxsw_afk_block *block,
+ int i;
+
+ for (i = 0; i < block->instances_count; i++) {
+- struct mlxsw_afk_element_inst *elinst;
++ const struct mlxsw_afk_element_inst *elinst;
+
+ elinst = &block->instances[i];
+ if (elinst->element == element)
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h
+index 98a05598178b3b..5aa1afb3f2ca81 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_keys.h
+@@ -117,7 +117,7 @@ struct mlxsw_afk_element_inst { /* element instance in actual block */
+
+ struct mlxsw_afk_block {
+ u16 encoding; /* block ID */
+- struct mlxsw_afk_element_inst *instances;
++ const struct mlxsw_afk_element_inst *instances;
+ unsigned int instances_count;
+ bool high_entropy;
+ };
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c
+index eaad7860560271..1850a975b38044 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_flex_keys.c
+@@ -7,7 +7,7 @@
+ #include "item.h"
+ #include "core_acl_flex_keys.h"
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_dmac[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_dmac[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(DMAC_32_47, 0x00, 2),
+ MLXSW_AFK_ELEMENT_INST_BUF(DMAC_0_31, 0x02, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(PCP, 0x08, 13, 3),
+@@ -15,7 +15,7 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_dmac[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SMAC_32_47, 0x00, 2),
+ MLXSW_AFK_ELEMENT_INST_BUF(SMAC_0_31, 0x02, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(PCP, 0x08, 13, 3),
+@@ -23,27 +23,27 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac_ex[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l2_smac_ex[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SMAC_32_47, 0x02, 2),
+ MLXSW_AFK_ELEMENT_INST_BUF(SMAC_0_31, 0x04, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(ETHERTYPE, 0x0C, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_sip[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_sip[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_0_31, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(L4_PORT_RANGE, 0x04, 16, 16),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_PROTO, 0x08, 0, 8),
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_dip[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_dip[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_0_31, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(L4_PORT_RANGE, 0x04, 16, 16),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_PROTO, 0x08, 0, 8),
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_SYS_PORT, 0x0C, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_0_31, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_ECN, 0x04, 4, 2),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_TTL_, 0x04, 24, 8),
+@@ -51,35 +51,35 @@ static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(TCP_FLAGS, 0x08, 8, 9), /* TCP_CONTROL+TCP_ECN */
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_ex[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_ex[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VID, 0x00, 0, 12),
+ MLXSW_AFK_ELEMENT_INST_U32(PCP, 0x08, 29, 3),
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_L4_PORT, 0x08, 0, 16),
+ MLXSW_AFK_ELEMENT_INST_U32(DST_L4_PORT, 0x0C, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_dip[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_dip[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_32_63, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_0_31, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_ex1[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_ex1[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_96_127, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_64_95, 0x04, 4),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_PROTO, 0x08, 0, 8),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_sip[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_sip[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_32_63, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_0_31, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_sip_ex[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_sip_ex[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_96_127, 0x00, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_64_95, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_packet_type[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_packet_type[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(ETHERTYPE, 0x00, 0, 16),
+ };
+
+@@ -124,90 +124,90 @@ const struct mlxsw_afk_ops mlxsw_sp1_afk_ops = {
+ .clear_block = mlxsw_sp1_afk_clear_block,
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_0[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_0[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(FDB_MISS, 0x00, 3, 1),
+ MLXSW_AFK_ELEMENT_INST_BUF(DMAC_0_31, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_1[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_1[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(FDB_MISS, 0x00, 3, 1),
+ MLXSW_AFK_ELEMENT_INST_BUF(SMAC_0_31, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_2[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_2[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SMAC_32_47, 0x04, 2),
+ MLXSW_AFK_ELEMENT_INST_BUF(DMAC_32_47, 0x06, 2),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_3[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_3[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(PCP, 0x00, 0, 3),
+ MLXSW_AFK_ELEMENT_INST_U32(VID, 0x04, 16, 12),
+ MLXSW_AFK_ELEMENT_INST_BUF(DMAC_32_47, 0x06, 2),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_4[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_4[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(PCP, 0x00, 0, 3),
+ MLXSW_AFK_ELEMENT_INST_U32(VID, 0x04, 16, 12),
+ MLXSW_AFK_ELEMENT_INST_U32(ETHERTYPE, 0x04, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_5[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_5[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VID, 0x04, 16, 12),
+ MLXSW_AFK_ELEMENT_INST_EXT_U32(SRC_SYS_PORT, 0x04, 0, 8, -1, true), /* RX_ACL_SYSTEM_PORT */
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_0[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_0[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_0_31, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_1[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_1[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_0_31, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_2[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_2[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(IP_DSCP, 0x04, 0, 6),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_ECN, 0x04, 6, 2),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_TTL_, 0x04, 8, 8),
+ MLXSW_AFK_ELEMENT_INST_U32(IP_PROTO, 0x04, 16, 8),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_5[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_5[] = {
+ MLXSW_AFK_ELEMENT_INST_EXT_U32(VIRT_ROUTER, 0x04, 20, 11, 0, true),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_0[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_0[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_0_3, 0x00, 0, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_32_63, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_1[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_1[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_4_7, 0x00, 0, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_64_95, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_2[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_2[] = {
+ MLXSW_AFK_ELEMENT_INST_EXT_U32(VIRT_ROUTER_MSB, 0x00, 0, 3, 0, true),
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_96_127, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_3[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_3[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_32_63, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_4[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_4[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_64_95, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_5[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_5[] = {
+ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_96_127, 0x04, 4),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l4_0[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l4_0[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(SRC_L4_PORT, 0x04, 16, 16),
+ MLXSW_AFK_ELEMENT_INST_U32(DST_L4_PORT, 0x04, 0, 16),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l4_2[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_l4_2[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(TCP_FLAGS, 0x04, 16, 9), /* TCP_CONTROL + TCP_ECN */
+ MLXSW_AFK_ELEMENT_INST_U32(L4_PORT_RANGE, 0x04, 0, 16),
+ };
+@@ -319,16 +319,20 @@ const struct mlxsw_afk_ops mlxsw_sp2_afk_ops = {
+ .clear_block = mlxsw_sp2_afk_clear_block,
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_5b[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_mac_5b[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VID, 0x04, 18, 12),
+ MLXSW_AFK_ELEMENT_INST_EXT_U32(SRC_SYS_PORT, 0x04, 0, 9, -1, true), /* RX_ACL_SYSTEM_PORT */
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_5b[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_1b[] = {
++ MLXSW_AFK_ELEMENT_INST_BUF(SRC_IP_0_31, 0x04, 4),
++};
++
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv4_5b[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER, 0x04, 20, 12),
+ };
+
+-static struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_2b[] = {
++static const struct mlxsw_afk_element_inst mlxsw_sp_afk_element_info_ipv6_2b[] = {
+ MLXSW_AFK_ELEMENT_INST_U32(VIRT_ROUTER_MSB, 0x00, 0, 4),
+ MLXSW_AFK_ELEMENT_INST_BUF(DST_IP_96_127, 0x04, 4),
+ };
+@@ -341,7 +345,7 @@ static const struct mlxsw_afk_block mlxsw_sp4_afk_blocks[] = {
+ MLXSW_AFK_BLOCK(0x14, mlxsw_sp_afk_element_info_mac_4),
+ MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x1A, mlxsw_sp_afk_element_info_mac_5b),
+ MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x38, mlxsw_sp_afk_element_info_ipv4_0),
+- MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x39, mlxsw_sp_afk_element_info_ipv4_1),
++ MLXSW_AFK_BLOCK_HIGH_ENTROPY(0x3F, mlxsw_sp_afk_element_info_ipv4_1b),
+ MLXSW_AFK_BLOCK(0x3A, mlxsw_sp_afk_element_info_ipv4_2),
+ MLXSW_AFK_BLOCK(0x36, mlxsw_sp_afk_element_info_ipv4_5b),
+ MLXSW_AFK_BLOCK(0x40, mlxsw_sp_afk_element_info_ipv6_0),
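+
The mlxsw changes above are pure const-propagation: the flex-key element tables are read-only lookup data, so const-qualifying both the arrays and the instances pointer that references them moves the tables into .rodata and lets the compiler reject accidental writes. Roughly, with illustrative names rather than the mlxsw ones:

	struct elem_inst { int element; int offset; };

	static const struct elem_inst l2_table[] = {
		{ .element = 1, .offset = 0x00 },
		{ .element = 2, .offset = 0x04 },
	};

	struct block {
		/* must be const too, or assigning l2_table needs a cast */
		const struct elem_inst *instances;
		unsigned int instances_count;
	};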
+diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
+index c47266d1c7c279..b2d206dec70c8a 100644
+--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
++++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
+@@ -2439,6 +2439,7 @@ void mana_query_gf_stats(struct mana_port_context *apc)
+
+ mana_gd_init_req_hdr(&req.hdr, MANA_QUERY_GF_STAT,
+ sizeof(req), sizeof(resp));
++ req.hdr.resp.msg_version = GDMA_MESSAGE_V2;
+ req.req_stats = STATISTICS_FLAGS_RX_DISCARDS_NO_WQE |
+ STATISTICS_FLAGS_RX_ERRORS_VPORT_DISABLED |
+ STATISTICS_FLAGS_HC_RX_BYTES |
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+index 16e6bd4661433f..6218d9c2685546 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+@@ -3314,7 +3314,9 @@ int qed_mcp_bist_nvm_get_num_images(struct qed_hwfn *p_hwfn,
+ if (rc)
+ return rc;
+
+- if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK))
++ if (((rsp & FW_MSG_CODE_MASK) == FW_MSG_CODE_UNSUPPORTED))
++ rc = -EOPNOTSUPP;
++ else if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK))
+ rc = -EINVAL;
+
+ return rc;
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 713a89bb21e93b..5ed2818bac257c 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -4233,8 +4233,8 @@ static unsigned int rtl8125_quirk_udp_padto(struct rtl8169_private *tp,
+ {
+ unsigned int padto = 0, len = skb->len;
+
+- if (rtl_is_8125(tp) && len < 128 + RTL_MIN_PATCH_LEN &&
+- rtl_skb_is_udp(skb) && skb_transport_header_was_set(skb)) {
++ if (len < 128 + RTL_MIN_PATCH_LEN && rtl_skb_is_udp(skb) &&
++ skb_transport_header_was_set(skb)) {
+ unsigned int trans_data_len = skb_tail_pointer(skb) -
+ skb_transport_header(skb);
+
+@@ -4258,9 +4258,15 @@ static unsigned int rtl8125_quirk_udp_padto(struct rtl8169_private *tp,
+ static unsigned int rtl_quirk_packet_padto(struct rtl8169_private *tp,
+ struct sk_buff *skb)
+ {
+- unsigned int padto;
++ unsigned int padto = 0;
+
+- padto = rtl8125_quirk_udp_padto(tp, skb);
++ switch (tp->mac_version) {
++ case RTL_GIGA_MAC_VER_61 ... RTL_GIGA_MAC_VER_63:
++ padto = rtl8125_quirk_udp_padto(tp, skb);
++ break;
++ default:
++ break;
++ }
+
+ switch (tp->mac_version) {
+ case RTL_GIGA_MAC_VER_34:
+diff --git a/drivers/net/ethernet/rocker/rocker_main.c b/drivers/net/ethernet/rocker/rocker_main.c
+index 84fa911c78db55..fe0bf1d3217af2 100644
+--- a/drivers/net/ethernet/rocker/rocker_main.c
++++ b/drivers/net/ethernet/rocker/rocker_main.c
+@@ -2502,7 +2502,7 @@ static void rocker_carrier_init(const struct rocker_port *rocker_port)
+ u64 link_status = rocker_read64(rocker, PORT_PHYS_LINK_STATUS);
+ bool link_up;
+
+- link_up = link_status & (1 << rocker_port->pport);
++ link_up = link_status & (1ULL << rocker_port->pport);
+ if (link_up)
+ netif_carrier_on(rocker_port->dev);
+ else
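+
The rocker fix above is the classic shifted-int-constant bug: link_status is a u64, but `1 << rocker_port->pport` shifts a 32-bit int, which is undefined for port numbers of 31 and above, so high ports always read as link-down. A userspace demonstration, assuming port 40:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t link_status = UINT64_C(1) << 40; /* port 40 is up */
		unsigned int pport = 40;

		/* A plain (1 << 40) is undefined; on x86 the shift count
		 * wraps, so the int mask tests bit 8 instead of bit 40. */
		printf("int mask : %d\n", !!(link_status & (1u << (pport % 32))));
		printf("u64 mask : %d\n", !!(link_status & (UINT64_C(1) << pport)));
		return 0;
	}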
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4.h b/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
+index 93a78fd0737b6c..28fff6cab812e4 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
+@@ -44,6 +44,7 @@
+ #define GMAC_MDIO_DATA 0x00000204
+ #define GMAC_GPIO_STATUS 0x0000020C
+ #define GMAC_ARP_ADDR 0x00000210
++#define GMAC_EXT_CFG1 0x00000238
+ #define GMAC_ADDR_HIGH(reg) (0x300 + reg * 8)
+ #define GMAC_ADDR_LOW(reg) (0x304 + reg * 8)
+ #define GMAC_L3L4_CTRL(reg) (0x900 + (reg) * 0x30)
+@@ -284,6 +285,10 @@ enum power_event {
+ #define GMAC_HW_FEAT_DVLAN BIT(5)
+ #define GMAC_HW_FEAT_NRVF GENMASK(2, 0)
+
++/* MAC extended config 1 */
++#define GMAC_CONFIG1_SAVE_EN BIT(24)
++#define GMAC_CONFIG1_SPLM(v) FIELD_PREP(GENMASK(9, 8), v)
++
+ /* GMAC GPIO Status reg */
+ #define GMAC_GPO0 BIT(16)
+ #define GMAC_GPO1 BIT(17)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+index 77b35abc6f6fa4..22a044d93e172f 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
+@@ -534,6 +534,11 @@ static void dwmac4_enable_sph(struct stmmac_priv *priv, void __iomem *ioaddr,
+ value |= GMAC_CONFIG_HDSMS_256; /* Segment max 256 bytes */
+ writel(value, ioaddr + GMAC_EXT_CONFIG);
+
++ value = readl(ioaddr + GMAC_EXT_CFG1);
++ value |= GMAC_CONFIG1_SPLM(1); /* Split mode set to L2OFST */
++ value |= GMAC_CONFIG1_SAVE_EN; /* Enable Split AV mode */
++ writel(value, ioaddr + GMAC_EXT_CFG1);
++
+ value = readl(ioaddr + DMA_CHAN_CONTROL(dwmac4_addrs, chan));
+ if (en)
+ value |= DMA_CONTROL_SPH;
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 7f611c74eb629b..ba15a0a4ce629e 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -895,7 +895,7 @@ static int geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
+ if (geneve->cfg.df == GENEVE_DF_SET) {
+ df = htons(IP_DF);
+ } else if (geneve->cfg.df == GENEVE_DF_INHERIT) {
+- struct ethhdr *eth = eth_hdr(skb);
++ struct ethhdr *eth = skb_eth_hdr(skb);
+
+ if (ntohs(eth->h_proto) == ETH_P_IPV6) {
+ df = htons(IP_DF);
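+
The geneve change matters because the two helpers read different offsets: eth_hdr() dereferences skb->mac_header, which on a tunnel's transmit path may be unset or still point at outer headers, while skb_eth_hdr() reads the inner frame at skb->data, which is what is about to be encapsulated. Roughly as defined in include/linux/if_ether.h:

	static inline struct ethhdr *eth_hdr(const struct sk_buff *skb)
	{
		return (struct ethhdr *)skb_mac_header(skb); /* needs mac_header */
	}

	static inline struct ethhdr *skb_eth_hdr(const struct sk_buff *skb)
	{
		return (struct ethhdr *)skb->data;	/* start of the frame */
	}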
+diff --git a/drivers/net/phy/microchip.c b/drivers/net/phy/microchip.c
+index d3273bc0da4a1f..691969a4910f2b 100644
+--- a/drivers/net/phy/microchip.c
++++ b/drivers/net/phy/microchip.c
+@@ -351,6 +351,22 @@ static int lan88xx_config_aneg(struct phy_device *phydev)
+ static void lan88xx_link_change_notify(struct phy_device *phydev)
+ {
+ int temp;
++ int ret;
++
++ /* Reset PHY to ensure MII_LPA provides up-to-date information. This
++ * issue is reproducible only after parallel detection, as described
++ * in IEEE 802.3-2022, Section 28.2.3.1 ("Parallel detection function"),
++ * where the link partner does not support auto-negotiation.
++ */
++ if (phydev->state == PHY_NOLINK) {
++ ret = phy_init_hw(phydev);
++ if (ret < 0)
++ goto link_change_notify_failed;
++
++ ret = _phy_start_aneg(phydev);
++ if (ret < 0)
++ goto link_change_notify_failed;
++ }
+
+ /* At forced 100 F/H mode, chip may fail to set mode correctly
+ * when cable is switched between long(~50+m) and short one.
+@@ -377,6 +393,11 @@ static void lan88xx_link_change_notify(struct phy_device *phydev)
+ temp |= LAN88XX_INT_MASK_MDINTPIN_EN_;
+ phy_write(phydev, LAN88XX_INT_MASK, temp);
+ }
++
++ return;
++
++link_change_notify_failed:
++ phydev_err(phydev, "Link change process failed %pe\n", ERR_PTR(ret));
+ }
+
+ /**
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index a5684ef5884bda..dcec92625cf651 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -466,7 +466,8 @@ static void sfp_quirk_ubnt_uf_instant(const struct sfp_eeprom_id *id,
+ static const struct sfp_quirk sfp_quirks[] = {
+ // Alcatel Lucent G-010S-P can operate at 2500base-X, but incorrectly
+ // report 2500MBd NRZ in their EEPROM
+- SFP_QUIRK_M("ALCATELLUCENT", "G010SP", sfp_quirk_2500basex),
++ SFP_QUIRK("ALCATELLUCENT", "G010SP", sfp_quirk_2500basex,
++ sfp_fixup_ignore_tx_fault),
+
+ // Alcatel Lucent G-010S-A can operate at 2500base-X, but report 3.2GBd
+ // NRZ in their EEPROM
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 53a038fcbe991d..c897afef0b414c 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -946,9 +946,6 @@ static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
+ void *buf, *head;
+ dma_addr_t addr;
+
+- if (unlikely(!skb_page_frag_refill(size, alloc_frag, gfp)))
+- return NULL;
+-
+ head = page_address(alloc_frag->page);
+
+ if (rq->do_dma) {
+@@ -2443,6 +2440,9 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
+ len = SKB_DATA_ALIGN(len) +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
++ if (unlikely(!skb_page_frag_refill(len, &rq->alloc_frag, gfp)))
++ return -ENOMEM;
++
+ buf = virtnet_rq_alloc(rq, len, gfp);
+ if (unlikely(!buf))
+ return -ENOMEM;
+@@ -2545,6 +2545,12 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
+ */
+ len = get_mergeable_buf_len(rq, &rq->mrg_avg_pkt_len, room);
+
++ if (unlikely(!skb_page_frag_refill(len + room, alloc_frag, gfp)))
++ return -ENOMEM;
++
++ if (!alloc_frag->offset && len + room + sizeof(struct virtnet_rq_dma) > alloc_frag->size)
++ len -= sizeof(struct virtnet_rq_dma);
++
+ buf = virtnet_rq_alloc(rq, len + room, gfp);
+ if (unlikely(!buf))
+ return -ENOMEM;
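+
The three virtio_net hunks move skb_page_frag_refill() out of virtnet_rq_alloc() and into its callers. The point is visible in add_recvbuf_mergeable(): once the caller owns the refill, it can trim `len` when the per-buffer DMA bookkeeping would not fit in the freshly refilled fragment, before anything is carved out. The shape of the pattern, with hypothetical carve() and my_dma_meta standing in for the driver's internals:

	/* caller: guarantee space first, then adjust, then allocate */
	if (unlikely(!skb_page_frag_refill(len + room, frag, gfp)))
		return -ENOMEM;
	if (!frag->offset && len + room + sizeof(struct my_dma_meta) > frag->size)
		len -= sizeof(struct my_dma_meta); /* leave room for metadata */
	buf = carve(frag, len + room);		   /* alloc no longer refills */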
+diff --git a/drivers/net/wireless/ath/ath10k/sdio.c b/drivers/net/wireless/ath/ath10k/sdio.c
+index 08a6f36a6be9cb..6805357ee29e6d 100644
+--- a/drivers/net/wireless/ath/ath10k/sdio.c
++++ b/drivers/net/wireless/ath/ath10k/sdio.c
+@@ -3,7 +3,7 @@
+ * Copyright (c) 2004-2011 Atheros Communications Inc.
+ * Copyright (c) 2011-2012,2017 Qualcomm Atheros, Inc.
+ * Copyright (c) 2016-2017 Erik Stromdahl <erik.stromdahl@gmail.com>
+- * Copyright (c) 2022-2023 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include <linux/module.h>
+@@ -2648,9 +2648,9 @@ static void ath10k_sdio_remove(struct sdio_func *func)
+
+ netif_napi_del(&ar->napi);
+
+- ath10k_core_destroy(ar);
+-
+ destroy_workqueue(ar_sdio->workqueue);
++
++ ath10k_core_destroy(ar);
+ }
+
+ static const struct sdio_device_id ath10k_sdio_devices[] = {
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 6d0784a21558ea..8946141aa0dce6 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -8186,9 +8186,9 @@ ath12k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
+ arvif->vdev_id, ret);
+ goto out;
+ }
+- ieee80211_iterate_stations_atomic(hw,
+- ath12k_mac_disable_peer_fixed_rate,
+- arvif);
++ ieee80211_iterate_stations_mtx(hw,
++ ath12k_mac_disable_peer_fixed_rate,
++ arvif);
+ } else if (ath12k_mac_bitrate_mask_get_single_nss(ar, band, mask,
+ &single_nss)) {
+ rate = WMI_FIXED_RATE_NONE;
+@@ -8233,16 +8233,16 @@ ath12k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
+ goto out;
+ }
+
+- ieee80211_iterate_stations_atomic(hw,
+- ath12k_mac_disable_peer_fixed_rate,
+- arvif);
++ ieee80211_iterate_stations_mtx(hw,
++ ath12k_mac_disable_peer_fixed_rate,
++ arvif);
+
+ mutex_lock(&ar->conf_mutex);
+
+ arvif->bitrate_mask = *mask;
+- ieee80211_iterate_stations_atomic(hw,
+- ath12k_mac_set_bitrate_mask_iter,
+- arvif);
++ ieee80211_iterate_stations_mtx(hw,
++ ath12k_mac_set_bitrate_mask_iter,
++ arvif);
+
+ mutex_unlock(&ar->conf_mutex);
+ }
+diff --git a/drivers/net/wireless/ath/ath5k/pci.c b/drivers/net/wireless/ath/ath5k/pci.c
+index b51fce5ae26020..f5ca2fe0d07490 100644
+--- a/drivers/net/wireless/ath/ath5k/pci.c
++++ b/drivers/net/wireless/ath/ath5k/pci.c
+@@ -46,6 +46,8 @@ static const struct pci_device_id ath5k_pci_id_table[] = {
+ { PCI_VDEVICE(ATHEROS, 0x001b) }, /* 5413 Eagle */
+ { PCI_VDEVICE(ATHEROS, 0x001c) }, /* PCI-E cards */
+ { PCI_VDEVICE(ATHEROS, 0x001d) }, /* 2417 Nala */
++ { PCI_VDEVICE(ATHEROS, 0xff16) }, /* Gigaset SX76[23] AR241[34]A */
++ { PCI_VDEVICE(ATHEROS, 0xff1a) }, /* Arcadyan ARV45XX AR2417 */
+ { PCI_VDEVICE(ATHEROS, 0xff1b) }, /* AR5BXB63 */
+ { 0 }
+ };
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+index d35262335eaf79..8a1e3376424487 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+@@ -770,7 +770,7 @@ void brcmf_sdiod_sgtable_alloc(struct brcmf_sdio_dev *sdiodev)
+
+ nents = max_t(uint, BRCMF_DEFAULT_RXGLOM_SIZE,
+ sdiodev->settings->bus.sdio.txglomsz);
+- nents += (nents >> 4) + 1;
++ nents *= 2;
+
+ WARN_ON(nents > sdiodev->max_segment_count);
+
+diff --git a/drivers/net/wireless/intel/ipw2x00/libipw_rx.c b/drivers/net/wireless/intel/ipw2x00/libipw_rx.c
+index 48d6870bbf4e25..9a97ab9b89ae8b 100644
+--- a/drivers/net/wireless/intel/ipw2x00/libipw_rx.c
++++ b/drivers/net/wireless/intel/ipw2x00/libipw_rx.c
+@@ -870,8 +870,8 @@ void libipw_rx_any(struct libipw_device *ieee,
+ switch (ieee->iw_mode) {
+ case IW_MODE_ADHOC:
+ /* our BSS and not from/to DS */
+- if (ether_addr_equal(hdr->addr3, ieee->bssid))
+- if ((fc & (IEEE80211_FCTL_TODS+IEEE80211_FCTL_FROMDS)) == 0) {
++ if (ether_addr_equal(hdr->addr3, ieee->bssid) &&
++ ((fc & (IEEE80211_FCTL_TODS + IEEE80211_FCTL_FROMDS)) == 0)) {
+ /* promisc: get all */
+ if (ieee->dev->flags & IFF_PROMISC)
+ is_packet_for_us = 1;
+@@ -885,8 +885,8 @@ void libipw_rx_any(struct libipw_device *ieee,
+ break;
+ case IW_MODE_INFRA:
+ /* our BSS (== from our AP) and from DS */
+- if (ether_addr_equal(hdr->addr2, ieee->bssid))
+- if ((fc & (IEEE80211_FCTL_TODS+IEEE80211_FCTL_FROMDS)) == IEEE80211_FCTL_FROMDS) {
++ if (ether_addr_equal(hdr->addr2, ieee->bssid) &&
++ ((fc & (IEEE80211_FCTL_TODS + IEEE80211_FCTL_FROMDS)) == IEEE80211_FCTL_FROMDS)) {
+ /* promisc: get all */
+ if (ieee->dev->flags & IFF_PROMISC)
+ is_packet_for_us = 1;
+diff --git a/drivers/net/wireless/realtek/rtw88/sdio.c b/drivers/net/wireless/realtek/rtw88/sdio.c
+index 21d0754dd7f6ac..b67e551fcee3ef 100644
+--- a/drivers/net/wireless/realtek/rtw88/sdio.c
++++ b/drivers/net/wireless/realtek/rtw88/sdio.c
+@@ -1297,12 +1297,12 @@ static void rtw_sdio_deinit_tx(struct rtw_dev *rtwdev)
+ struct rtw_sdio *rtwsdio = (struct rtw_sdio *)rtwdev->priv;
+ int i;
+
+- for (i = 0; i < RTK_MAX_TX_QUEUE_NUM; i++)
+- skb_queue_purge(&rtwsdio->tx_queue[i]);
+-
+ flush_workqueue(rtwsdio->txwq);
+ destroy_workqueue(rtwsdio->txwq);
+ kfree(rtwsdio->tx_handler_data);
++
++ for (i = 0; i < RTK_MAX_TX_QUEUE_NUM; i++)
++ ieee80211_purge_tx_queue(rtwdev->hw, &rtwsdio->tx_queue[i]);
+ }
+
+ int rtw_sdio_probe(struct sdio_func *sdio_func,
+diff --git a/drivers/net/wireless/realtek/rtw88/usb.c b/drivers/net/wireless/realtek/rtw88/usb.c
+index b17a429bcd2994..07695294767acb 100644
+--- a/drivers/net/wireless/realtek/rtw88/usb.c
++++ b/drivers/net/wireless/realtek/rtw88/usb.c
+@@ -423,10 +423,11 @@ static void rtw_usb_tx_handler(struct work_struct *work)
+
+ static void rtw_usb_tx_queue_purge(struct rtw_usb *rtwusb)
+ {
++ struct rtw_dev *rtwdev = rtwusb->rtwdev;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(rtwusb->tx_queue); i++)
+- skb_queue_purge(&rtwusb->tx_queue[i]);
++ ieee80211_purge_tx_queue(rtwdev->hw, &rtwusb->tx_queue[i]);
+ }
+
+ static void rtw_usb_write_port_complete(struct urb *urb)
+@@ -888,9 +889,9 @@ static void rtw_usb_deinit_tx(struct rtw_dev *rtwdev)
+ {
+ struct rtw_usb *rtwusb = rtw_get_usb_priv(rtwdev);
+
+- rtw_usb_tx_queue_purge(rtwusb);
+ flush_workqueue(rtwusb->txwq);
+ destroy_workqueue(rtwusb->txwq);
++ rtw_usb_tx_queue_purge(rtwusb);
+ }
+
+ static int rtw_usb_intf_init(struct rtw_dev *rtwdev,
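+
Both rtw88 hunks (SDIO and USB) fix the same teardown race in the same way: the tx workqueue can still touch tx_queue[], so the queues may only be purged after flush_workqueue()/destroy_workqueue() have quiesced it, and since the queued skbs carry mac80211 tx status, ieee80211_purge_tx_queue() replaces skb_queue_purge() so mac80211 frees them properly. The resulting ordering, sketched with hypothetical names around the real APIs:

	static void my_deinit_tx(struct my_dev *d)
	{
		int i;

		flush_workqueue(d->txwq);	/* drain in-flight tx work */
		destroy_workqueue(d->txwq);	/* no producer/consumer left */

		for (i = 0; i < MY_NUM_TXQ; i++)
			ieee80211_purge_tx_queue(d->hw, &d->tx_queue[i]);
	}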
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index 13a7c39ceb6f55..e6bceef691e9be 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -6074,6 +6074,9 @@ static int rtw89_update_6ghz_rnr_chan(struct rtw89_dev *rtwdev,
+
+ skb = ieee80211_probereq_get(rtwdev->hw, rtwvif_link->mac_addr,
+ NULL, 0, req->ie_len);
++ if (!skb)
++ return -ENOMEM;
++
+ skb_put_data(skb, ies->ies[NL80211_BAND_6GHZ], ies->len[NL80211_BAND_6GHZ]);
+ skb_put_data(skb, ies->common_ies, ies->common_ie_len);
+ hdr = (struct ieee80211_hdr *)skb->data;
+diff --git a/drivers/nvdimm/dax_devs.c b/drivers/nvdimm/dax_devs.c
+index 6b4922de30477e..37b743acbb7bad 100644
+--- a/drivers/nvdimm/dax_devs.c
++++ b/drivers/nvdimm/dax_devs.c
+@@ -106,12 +106,12 @@ int nd_dax_probe(struct device *dev, struct nd_namespace_common *ndns)
+
+ nvdimm_bus_lock(&ndns->dev);
+ nd_dax = nd_dax_alloc(nd_region);
+- nd_pfn = &nd_dax->nd_pfn;
+- dax_dev = nd_pfn_devinit(nd_pfn, ndns);
++ dax_dev = nd_dax_devinit(nd_dax, ndns);
+ nvdimm_bus_unlock(&ndns->dev);
+ if (!dax_dev)
+ return -ENOMEM;
+ pfn_sb = devm_kmalloc(dev, sizeof(*pfn_sb), GFP_KERNEL);
++ nd_pfn = &nd_dax->nd_pfn;
+ nd_pfn->pfn_sb = pfn_sb;
+ rc = nd_pfn_validate(nd_pfn, DAX_SIG);
+ dev_dbg(dev, "dax: %s\n", rc == 0 ? dev_name(dax_dev) : "<none>");
+diff --git a/drivers/nvdimm/nd.h b/drivers/nvdimm/nd.h
+index 2dbb1dca17b534..5ca06e9a2d2925 100644
+--- a/drivers/nvdimm/nd.h
++++ b/drivers/nvdimm/nd.h
+@@ -600,6 +600,13 @@ struct nd_dax *to_nd_dax(struct device *dev);
+ int nd_dax_probe(struct device *dev, struct nd_namespace_common *ndns);
+ bool is_nd_dax(const struct device *dev);
+ struct device *nd_dax_create(struct nd_region *nd_region);
++static inline struct device *nd_dax_devinit(struct nd_dax *nd_dax,
++ struct nd_namespace_common *ndns)
++{
++ if (!nd_dax)
++ return NULL;
++ return nd_pfn_devinit(&nd_dax->nd_pfn, ndns);
++}
+ #else
+ static inline int nd_dax_probe(struct device *dev,
+ struct nd_namespace_common *ndns)
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index f0d4c6f3cb0555..249914b90dbfa7 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1303,9 +1303,10 @@ static void nvme_queue_keep_alive_work(struct nvme_ctrl *ctrl)
+ queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
+ }
+
+-static void nvme_keep_alive_finish(struct request *rq,
+- blk_status_t status, struct nvme_ctrl *ctrl)
++static enum rq_end_io_ret nvme_keep_alive_end_io(struct request *rq,
++ blk_status_t status)
+ {
++ struct nvme_ctrl *ctrl = rq->end_io_data;
+ unsigned long rtt = jiffies - (rq->deadline - rq->timeout);
+ unsigned long delay = nvme_keep_alive_work_period(ctrl);
+ enum nvme_ctrl_state state = nvme_ctrl_state(ctrl);
+@@ -1322,17 +1323,20 @@ static void nvme_keep_alive_finish(struct request *rq,
+ delay = 0;
+ }
+
++ blk_mq_free_request(rq);
++
+ if (status) {
+ dev_err(ctrl->device,
+ "failed nvme_keep_alive_end_io error=%d\n",
+ status);
+- return;
++ return RQ_END_IO_NONE;
+ }
+
+ ctrl->ka_last_check_time = jiffies;
+ ctrl->comp_seen = false;
+ if (state == NVME_CTRL_LIVE || state == NVME_CTRL_CONNECTING)
+ queue_delayed_work(nvme_wq, &ctrl->ka_work, delay);
++ return RQ_END_IO_NONE;
+ }
+
+ static void nvme_keep_alive_work(struct work_struct *work)
+@@ -1341,7 +1345,6 @@ static void nvme_keep_alive_work(struct work_struct *work)
+ struct nvme_ctrl, ka_work);
+ bool comp_seen = ctrl->comp_seen;
+ struct request *rq;
+- blk_status_t status;
+
+ ctrl->ka_last_check_time = jiffies;
+
+@@ -1364,9 +1367,9 @@ static void nvme_keep_alive_work(struct work_struct *work)
+ nvme_init_request(rq, &ctrl->ka_cmd);
+
+ rq->timeout = ctrl->kato * HZ;
+- status = blk_execute_rq(rq, false);
+- nvme_keep_alive_finish(rq, status, ctrl);
+- blk_mq_free_request(rq);
++ rq->end_io = nvme_keep_alive_end_io;
++ rq->end_io_data = ctrl;
++ blk_execute_rq_nowait(rq, false);
+ }
+
+ static void nvme_start_keep_alive(struct nvme_ctrl *ctrl)
+@@ -2064,7 +2067,8 @@ static bool nvme_update_disk_info(struct nvme_ns *ns, struct nvme_id_ns *id,
+ lim->physical_block_size = min(phys_bs, atomic_bs);
+ lim->io_min = phys_bs;
+ lim->io_opt = io_opt;
+- if (ns->ctrl->quirks & NVME_QUIRK_DEALLOCATE_ZEROES)
++ if ((ns->ctrl->quirks & NVME_QUIRK_DEALLOCATE_ZEROES) &&
++ (ns->ctrl->oncs & NVME_CTRL_ONCS_DSM))
+ lim->max_write_zeroes_sectors = UINT_MAX;
+ else
+ lim->max_write_zeroes_sectors = ns->ctrl->max_zeroes_sectors;
+@@ -3250,8 +3254,9 @@ static int nvme_check_ctrl_fabric_info(struct nvme_ctrl *ctrl, struct nvme_id_ct
+ }
+
+ if (!ctrl->maxcmd) {
+- dev_err(ctrl->device, "Maximum outstanding commands is 0\n");
+- return -EINVAL;
++ dev_warn(ctrl->device,
++ "Firmware bug: maximum outstanding commands is 0\n");
++ ctrl->maxcmd = ctrl->sqsize + 1;
+ }
+
+ return 0;
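+
The keep-alive rework converts a synchronous blk_execute_rq() into a fire-and-forget submission: completion logic moves into rq->end_io, the callback takes ownership of freeing the request (note the blk_mq_free_request() before the status check above), and it must return RQ_END_IO_NONE so blk-mq does not complete the request a second time. The skeleton of the pattern, with my_ctrl/my_reschedule as hypothetical stand-ins:

	static enum rq_end_io_ret my_end_io(struct request *rq,
					    blk_status_t status)
	{
		struct my_ctrl *ctrl = rq->end_io_data;	/* hypothetical owner */

		blk_mq_free_request(rq);	/* callback owns the rq */
		if (!status)
			my_reschedule(ctrl);	/* e.g. queue_delayed_work() */
		return RQ_END_IO_NONE;
	}

		/* submit side: no blocking, no status returned here */
		rq->end_io = my_end_io;
		rq->end_io_data = ctrl;
		blk_execute_rq_nowait(rq, false);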
+diff --git a/drivers/nvme/host/rdma.c b/drivers/nvme/host/rdma.c
+index 24a2759798d01e..913e6e5a80705f 100644
+--- a/drivers/nvme/host/rdma.c
++++ b/drivers/nvme/host/rdma.c
+@@ -1091,13 +1091,7 @@ static int nvme_rdma_setup_ctrl(struct nvme_rdma_ctrl *ctrl, bool new)
+ }
+ destroy_admin:
+ nvme_stop_keep_alive(&ctrl->ctrl);
+- nvme_quiesce_admin_queue(&ctrl->ctrl);
+- blk_sync_queue(ctrl->ctrl.admin_q);
+- nvme_rdma_stop_queue(&ctrl->queues[0]);
+- nvme_cancel_admin_tagset(&ctrl->ctrl);
+- if (new)
+- nvme_remove_admin_tag_set(&ctrl->ctrl);
+- nvme_rdma_destroy_admin_queue(ctrl);
++ nvme_rdma_teardown_admin_queue(ctrl, new);
+ return ret;
+ }
+
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 3e416af2659f19..55abfe5e1d2548 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -2278,7 +2278,7 @@ static int nvme_tcp_setup_ctrl(struct nvme_ctrl *ctrl, bool new)
+ }
+ destroy_admin:
+ nvme_stop_keep_alive(ctrl);
+- nvme_tcp_teardown_admin_queue(ctrl, false);
++ nvme_tcp_teardown_admin_queue(ctrl, new);
+ return ret;
+ }
+
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index b5447228696dc4..6483e1874477ef 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1830,6 +1830,7 @@ static const struct of_device_id qcom_pcie_match[] = {
+ { .compatible = "qcom,pcie-ipq8064-v2", .data = &cfg_2_1_0 },
+ { .compatible = "qcom,pcie-ipq8074", .data = &cfg_2_3_3 },
+ { .compatible = "qcom,pcie-ipq8074-gen3", .data = &cfg_2_9_0 },
++ { .compatible = "qcom,pcie-ipq9574", .data = &cfg_2_9_0 },
+ { .compatible = "qcom,pcie-msm8996", .data = &cfg_2_3_2 },
+ { .compatible = "qcom,pcie-qcs404", .data = &cfg_2_4_0 },
+ { .compatible = "qcom,pcie-sa8540p", .data = &cfg_sc8280xp },
+diff --git a/drivers/pci/controller/plda/pcie-starfive.c b/drivers/pci/controller/plda/pcie-starfive.c
+index c9933ecf683382..0564fdce47c2a3 100644
+--- a/drivers/pci/controller/plda/pcie-starfive.c
++++ b/drivers/pci/controller/plda/pcie-starfive.c
+@@ -404,6 +404,9 @@ static int starfive_pcie_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
++ pm_runtime_enable(&pdev->dev);
++ pm_runtime_get_sync(&pdev->dev);
++
+ plda->host_ops = &sf_host_ops;
+ plda->num_events = PLDA_MAX_EVENT_NUM;
+ /* mask doorbell event */
+@@ -413,11 +416,12 @@ static int starfive_pcie_probe(struct platform_device *pdev)
+ plda->events_bitmap <<= PLDA_NUM_DMA_EVENTS;
+ ret = plda_pcie_host_init(&pcie->plda, &starfive_pcie_ops,
+ &stf_pcie_event);
+- if (ret)
++ if (ret) {
++ pm_runtime_put_sync(&pdev->dev);
++ pm_runtime_disable(&pdev->dev);
+ return ret;
++ }
+
+- pm_runtime_enable(&pdev->dev);
+- pm_runtime_get_sync(&pdev->dev);
+ platform_set_drvdata(pdev, pcie);
+
+ return 0;
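+
The starfive reordering enforces the usual probe rule: runtime PM must be enabled and the device powered (pm_runtime_enable() + pm_runtime_get_sync()) before plda_pcie_host_init() touches controller registers, and a failing init must unwind both in reverse order. Condensed, with my_host_init() as a hypothetical stand-in:

	pm_runtime_enable(dev);
	pm_runtime_get_sync(dev);	/* powered before any MMIO */

	ret = my_host_init(priv);	/* hypothetical register setup */
	if (ret) {
		pm_runtime_put_sync(dev);
		pm_runtime_disable(dev);
		return ret;
	}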
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index 264a180403a0ec..9d9596947350f5 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -740,11 +740,9 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
+ if (!(features & VMD_FEAT_BIOS_PM_QUIRK))
+ return 0;
+
+- pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
+-
+ pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_LTR);
+ if (!pos)
+- return 0;
++ goto out_state_change;
+
+ /*
+ * Skip if the max snoop LTR is non-zero, indicating BIOS has set it
+@@ -752,7 +750,7 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
+ */
+ pci_read_config_dword(pdev, pos + PCI_LTR_MAX_SNOOP_LAT, &ltr_reg);

+ if (!!(ltr_reg & (PCI_LTR_VALUE_MASK | PCI_LTR_SCALE_MASK)))
+- return 0;
++ goto out_state_change;
+
+ /*
+ * Set the default values to the maximum required by the platform to
+@@ -764,6 +762,13 @@ static int vmd_pm_enable_quirk(struct pci_dev *pdev, void *userdata)
+ pci_write_config_dword(pdev, pos + PCI_LTR_MAX_SNOOP_LAT, ltr_reg);
+ pci_info(pdev, "VMD: Default LTR value set by driver\n");
+
++out_state_change:
++ /*
++ * Ensure devices are in D0 before enabling PCI-PM L1 PM Substates, per
++ * PCIe r6.0, sec 5.5.4.
++ */
++ pci_set_power_state_locked(pdev, PCI_D0);
++ pci_enable_link_state_locked(pdev, PCIE_LINK_STATE_ALL);
+ return 0;
+ }
+
+@@ -1100,6 +1105,10 @@ static const struct pci_device_id vmd_ids[] = {
+ .driver_data = VMD_FEATS_CLIENT,},
+ {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_VMD_9A0B),
+ .driver_data = VMD_FEATS_CLIENT,},
++ {PCI_VDEVICE(INTEL, 0xb60b),
++ .driver_data = VMD_FEATS_CLIENT,},
++ {PCI_VDEVICE(INTEL, 0xb06f),
++ .driver_data = VMD_FEATS_CLIENT,},
+ {0,}
+ };
+ MODULE_DEVICE_TABLE(pci, vmd_ids);
+diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
+index 5d0f4db1cab786..3e5a117f5b5d60 100644
+--- a/drivers/pci/pci-sysfs.c
++++ b/drivers/pci/pci-sysfs.c
+@@ -521,6 +521,31 @@ static ssize_t bus_rescan_store(struct device *dev,
+ static struct device_attribute dev_attr_bus_rescan = __ATTR(rescan, 0200, NULL,
+ bus_rescan_store);
+
++static ssize_t reset_subordinate_store(struct device *dev,
++ struct device_attribute *attr,
++ const char *buf, size_t count)
++{
++ struct pci_dev *pdev = to_pci_dev(dev);
++ struct pci_bus *bus = pdev->subordinate;
++ unsigned long val;
++
++ if (!capable(CAP_SYS_ADMIN))
++ return -EPERM;
++
++ if (kstrtoul(buf, 0, &val) < 0)
++ return -EINVAL;
++
++ if (val) {
++ int ret = __pci_reset_bus(bus);
++
++ if (ret)
++ return ret;
++ }
++
++ return count;
++}
++static DEVICE_ATTR_WO(reset_subordinate);
++
+ #if defined(CONFIG_PM) && defined(CONFIG_ACPI)
+ static ssize_t d3cold_allowed_store(struct device *dev,
+ struct device_attribute *attr,
+@@ -625,6 +650,7 @@ static struct attribute *pci_dev_attrs[] = {
+ static struct attribute *pci_bridge_attrs[] = {
+ &dev_attr_subordinate_bus_number.attr,
+ &dev_attr_secondary_bus_number.attr,
++ &dev_attr_reset_subordinate.attr,
+ NULL,
+ };
+
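+
reset_subordinate is a textbook write-only sysfs attribute: DEVICE_ATTR_WO() generates dev_attr_reset_subordinate for the pci_bridge_attrs[] table, and the store handler gates on CAP_SYS_ADMIN, parses with kstrtoul(), acts only on a non-zero value, and returns count to consume the write. A generic skeleton of the same shape (my_knob/my_action are hypothetical):

	static ssize_t my_knob_store(struct device *dev,
				     struct device_attribute *attr,
				     const char *buf, size_t count)
	{
		unsigned long val;

		if (!capable(CAP_SYS_ADMIN))
			return -EPERM;
		if (kstrtoul(buf, 0, &val) < 0)
			return -EINVAL;
		if (val) {
			int ret = my_action(dev);

			if (ret)
				return ret;
		}
		return count;			/* whole write consumed */
	}
	static DEVICE_ATTR_WO(my_knob);		/* emits dev_attr_my_knob */

From userspace the bridge-level reset would then be triggered with something like `echo 1 > /sys/bus/pci/devices/<bridge>/reset_subordinate`.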
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 08f170fd3efb3e..dd3c6dcb47ae4a 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -5885,7 +5885,7 @@ EXPORT_SYMBOL_GPL(pci_probe_reset_bus);
+ *
+ * Same as above except return -EAGAIN if the bus cannot be locked
+ */
+-static int __pci_reset_bus(struct pci_bus *bus)
++int __pci_reset_bus(struct pci_bus *bus)
+ {
+ int rc;
+
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 14d00ce45bfa95..1cdc2c9547a7e1 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -104,6 +104,7 @@ bool pci_reset_supported(struct pci_dev *dev);
+ void pci_init_reset_methods(struct pci_dev *dev);
+ int pci_bridge_secondary_bus_reset(struct pci_dev *dev);
+ int pci_bus_error_reset(struct pci_dev *dev);
++int __pci_reset_bus(struct pci_bus *bus);
+
+ struct pci_cap_saved_data {
+ u16 cap_nr;
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index f1615805f5b078..ebb0c1d5cae255 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -1633,23 +1633,33 @@ static void set_pcie_thunderbolt(struct pci_dev *dev)
+
+ static void set_pcie_untrusted(struct pci_dev *dev)
+ {
+- struct pci_dev *parent;
++ struct pci_dev *parent = pci_upstream_bridge(dev);
+
++ if (!parent)
++ return;
+ /*
+- * If the upstream bridge is untrusted we treat this device
++ * If the upstream bridge is untrusted we treat this device as
+ * untrusted as well.
+ */
+- parent = pci_upstream_bridge(dev);
+- if (parent && (parent->untrusted || parent->external_facing))
++ if (parent->untrusted) {
++ dev->untrusted = true;
++ return;
++ }
++
++ if (arch_pci_dev_is_removable(dev)) {
++ pci_dbg(dev, "marking as untrusted\n");
+ dev->untrusted = true;
++ }
+ }
+
+ static void pci_set_removable(struct pci_dev *dev)
+ {
+ struct pci_dev *parent = pci_upstream_bridge(dev);
+
++ if (!parent)
++ return;
+ /*
+- * We (only) consider everything downstream from an external_facing
++ * We (only) consider everything tunneled below an external_facing
+ * device to be removable by the user. We're mainly concerned with
+ * consumer platforms with user accessible thunderbolt ports that are
+ * vulnerable to DMA attacks, and we expect those ports to be marked by
+@@ -1659,9 +1669,15 @@ static void pci_set_removable(struct pci_dev *dev)
+ * accessible to user / may not be removed by end user, and thus not
+ * exposed as "removable" to userspace.
+ */
+- if (parent &&
+- (parent->external_facing || dev_is_removable(&parent->dev)))
++ if (dev_is_removable(&parent->dev)) {
++ dev_set_removable(&dev->dev, DEVICE_REMOVABLE);
++ return;
++ }
++
++ if (arch_pci_dev_is_removable(dev)) {
++ pci_dbg(dev, "marking as removable\n");
+ dev_set_removable(&dev->dev, DEVICE_REMOVABLE);
++ }
+ }
+
+ /**
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index dccb60c1d9cc3d..8103bc24a54ea4 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4996,18 +4996,21 @@ static int pci_quirk_brcm_acs(struct pci_dev *dev, u16 acs_flags)
+ }
+
+ /*
+- * Wangxun 10G/1G NICs have no ACS capability, and on multi-function
+- * devices, peer-to-peer transactions are not be used between the functions.
+- * So add an ACS quirk for below devices to isolate functions.
++ * Wangxun 40G/25G/10G/1G NICs have no ACS capability, but on
++ * multi-function devices, the hardware isolates the functions by
++ * directing all peer-to-peer traffic upstream as though PCI_ACS_RR and
++ * PCI_ACS_CR were set.
+ * SFxxx 1G NICs(em).
+ * RP1000/RP2000 10G NICs(sp).
++ * FF5xxx 40G/25G/10G NICs(aml).
+ */
+ static int pci_quirk_wangxun_nic_acs(struct pci_dev *dev, u16 acs_flags)
+ {
+ switch (dev->device) {
+- case 0x0100 ... 0x010F:
+- case 0x1001:
+- case 0x2001:
++ case 0x0100 ... 0x010F: /* EM */
++ case 0x1001: case 0x2001: /* SP */
++ case 0x5010: case 0x5025: case 0x5040: /* AML */
++ case 0x5110: case 0x5125: case 0x5140: /* AML */
+ return pci_acs_ctrl_enabled(acs_flags,
+ PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
+ }
+diff --git a/drivers/pinctrl/core.c b/drivers/pinctrl/core.c
+index 4061890a174835..b3eec63c00ba04 100644
+--- a/drivers/pinctrl/core.c
++++ b/drivers/pinctrl/core.c
+@@ -220,6 +220,9 @@ static int pinctrl_register_one_pin(struct pinctrl_dev *pctldev,
+
+ /* Set owner */
+ pindesc->pctldev = pctldev;
++#ifdef CONFIG_PINMUX
++ mutex_init(&pindesc->mux_lock);
++#endif
+
+ /* Copy basic pin info */
+ if (pin->name) {
+diff --git a/drivers/pinctrl/core.h b/drivers/pinctrl/core.h
+index 4e07707d2435bd..d6c24978e7081a 100644
+--- a/drivers/pinctrl/core.h
++++ b/drivers/pinctrl/core.h
+@@ -177,6 +177,7 @@ struct pin_desc {
+ const char *mux_owner;
+ const struct pinctrl_setting_mux *mux_setting;
+ const char *gpio_owner;
++ struct mutex mux_lock;
+ #endif
+ };
+
+diff --git a/drivers/pinctrl/freescale/Kconfig b/drivers/pinctrl/freescale/Kconfig
+index 3b59d71890045b..139bc0fb8a9dbf 100644
+--- a/drivers/pinctrl/freescale/Kconfig
++++ b/drivers/pinctrl/freescale/Kconfig
+@@ -20,7 +20,7 @@ config PINCTRL_IMX_SCMI
+
+ config PINCTRL_IMX_SCU
+ tristate
+- depends on IMX_SCU
++ depends on IMX_SCU || COMPILE_TEST
+ select PINCTRL_IMX
+
+ config PINCTRL_IMX1_CORE
+diff --git a/drivers/pinctrl/pinmux.c b/drivers/pinctrl/pinmux.c
+index 02033ea1c64384..0743190da59e81 100644
+--- a/drivers/pinctrl/pinmux.c
++++ b/drivers/pinctrl/pinmux.c
+@@ -14,6 +14,7 @@
+
+ #include <linux/array_size.h>
+ #include <linux/ctype.h>
++#include <linux/cleanup.h>
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
+ #include <linux/err.h>
+@@ -93,6 +94,7 @@ bool pinmux_can_be_used_for_gpio(struct pinctrl_dev *pctldev, unsigned int pin)
+ if (!desc || !ops)
+ return true;
+
++ guard(mutex)(&desc->mux_lock);
+ if (ops->strict && desc->mux_usecount)
+ return false;
+
+@@ -127,29 +129,31 @@ static int pin_request(struct pinctrl_dev *pctldev,
+ dev_dbg(pctldev->dev, "request pin %d (%s) for %s\n",
+ pin, desc->name, owner);
+
+- if ((!gpio_range || ops->strict) &&
+- desc->mux_usecount && strcmp(desc->mux_owner, owner)) {
+- dev_err(pctldev->dev,
+- "pin %s already requested by %s; cannot claim for %s\n",
+- desc->name, desc->mux_owner, owner);
+- goto out;
+- }
++ scoped_guard(mutex, &desc->mux_lock) {
++ if ((!gpio_range || ops->strict) &&
++ desc->mux_usecount && strcmp(desc->mux_owner, owner)) {
++ dev_err(pctldev->dev,
++ "pin %s already requested by %s; cannot claim for %s\n",
++ desc->name, desc->mux_owner, owner);
++ goto out;
++ }
+
+- if ((gpio_range || ops->strict) && desc->gpio_owner) {
+- dev_err(pctldev->dev,
+- "pin %s already requested by %s; cannot claim for %s\n",
+- desc->name, desc->gpio_owner, owner);
+- goto out;
+- }
++ if ((gpio_range || ops->strict) && desc->gpio_owner) {
++ dev_err(pctldev->dev,
++ "pin %s already requested by %s; cannot claim for %s\n",
++ desc->name, desc->gpio_owner, owner);
++ goto out;
++ }
+
+- if (gpio_range) {
+- desc->gpio_owner = owner;
+- } else {
+- desc->mux_usecount++;
+- if (desc->mux_usecount > 1)
+- return 0;
++ if (gpio_range) {
++ desc->gpio_owner = owner;
++ } else {
++ desc->mux_usecount++;
++ if (desc->mux_usecount > 1)
++ return 0;
+
+- desc->mux_owner = owner;
++ desc->mux_owner = owner;
++ }
+ }
+
+ /* Let each pin increase references to this module */
+@@ -178,12 +182,14 @@ static int pin_request(struct pinctrl_dev *pctldev,
+
+ out_free_pin:
+ if (status) {
+- if (gpio_range) {
+- desc->gpio_owner = NULL;
+- } else {
+- desc->mux_usecount--;
+- if (!desc->mux_usecount)
+- desc->mux_owner = NULL;
++ scoped_guard(mutex, &desc->mux_lock) {
++ if (gpio_range) {
++ desc->gpio_owner = NULL;
++ } else {
++ desc->mux_usecount--;
++ if (!desc->mux_usecount)
++ desc->mux_owner = NULL;
++ }
+ }
+ }
+ out:
+@@ -219,15 +225,17 @@ static const char *pin_free(struct pinctrl_dev *pctldev, int pin,
+ return NULL;
+ }
+
+- if (!gpio_range) {
+- /*
+- * A pin should not be freed more times than allocated.
+- */
+- if (WARN_ON(!desc->mux_usecount))
+- return NULL;
+- desc->mux_usecount--;
+- if (desc->mux_usecount)
+- return NULL;
++ scoped_guard(mutex, &desc->mux_lock) {
++ if (!gpio_range) {
++ /*
++ * A pin should not be freed more times than allocated.
++ */
++ if (WARN_ON(!desc->mux_usecount))
++ return NULL;
++ desc->mux_usecount--;
++ if (desc->mux_usecount)
++ return NULL;
++ }
+ }
+
+ /*
+@@ -239,13 +247,15 @@ static const char *pin_free(struct pinctrl_dev *pctldev, int pin,
+ else if (ops->free)
+ ops->free(pctldev, pin);
+
+- if (gpio_range) {
+- owner = desc->gpio_owner;
+- desc->gpio_owner = NULL;
+- } else {
+- owner = desc->mux_owner;
+- desc->mux_owner = NULL;
+- desc->mux_setting = NULL;
++ scoped_guard(mutex, &desc->mux_lock) {
++ if (gpio_range) {
++ owner = desc->gpio_owner;
++ desc->gpio_owner = NULL;
++ } else {
++ owner = desc->mux_owner;
++ desc->mux_owner = NULL;
++ desc->mux_setting = NULL;
++ }
+ }
+
+ module_put(pctldev->owner);
+@@ -458,7 +468,8 @@ int pinmux_enable_setting(const struct pinctrl_setting *setting)
+ pins[i]);
+ continue;
+ }
+- desc->mux_setting = &(setting->data.mux);
++ scoped_guard(mutex, &desc->mux_lock)
++ desc->mux_setting = &(setting->data.mux);
+ }
+
+ ret = ops->set_mux(pctldev, setting->data.mux.func,
+@@ -472,8 +483,10 @@ int pinmux_enable_setting(const struct pinctrl_setting *setting)
+ err_set_mux:
+ for (i = 0; i < num_pins; i++) {
+ desc = pin_desc_get(pctldev, pins[i]);
+- if (desc)
+- desc->mux_setting = NULL;
++ if (desc) {
++ scoped_guard(mutex, &desc->mux_lock)
++ desc->mux_setting = NULL;
++ }
+ }
+ err_pin_request:
+ /* On error release all taken pins */
+@@ -492,6 +505,7 @@ void pinmux_disable_setting(const struct pinctrl_setting *setting)
+ unsigned int num_pins = 0;
+ int i;
+ struct pin_desc *desc;
++ bool is_equal;
+
+ if (pctlops->get_group_pins)
+ ret = pctlops->get_group_pins(pctldev, setting->data.mux.group,
+@@ -517,7 +531,10 @@ void pinmux_disable_setting(const struct pinctrl_setting *setting)
+ pins[i]);
+ continue;
+ }
+- if (desc->mux_setting == &(setting->data.mux)) {
++ scoped_guard(mutex, &desc->mux_lock)
++ is_equal = (desc->mux_setting == &(setting->data.mux));
++
++ if (is_equal) {
+ pin_free(pctldev, pins[i], NULL);
+ } else {
+ const char *gname;
+@@ -608,40 +625,42 @@ static int pinmux_pins_show(struct seq_file *s, void *what)
+ if (desc == NULL)
+ continue;
+
+- if (desc->mux_owner &&
+- !strcmp(desc->mux_owner, pinctrl_dev_get_name(pctldev)))
+- is_hog = true;
+-
+- if (pmxops->strict) {
+- if (desc->mux_owner)
+- seq_printf(s, "pin %d (%s): device %s%s",
+- pin, desc->name, desc->mux_owner,
++ scoped_guard(mutex, &desc->mux_lock) {
++ if (desc->mux_owner &&
++ !strcmp(desc->mux_owner, pinctrl_dev_get_name(pctldev)))
++ is_hog = true;
++
++ if (pmxops->strict) {
++ if (desc->mux_owner)
++ seq_printf(s, "pin %d (%s): device %s%s",
++ pin, desc->name, desc->mux_owner,
++ is_hog ? " (HOG)" : "");
++ else if (desc->gpio_owner)
++ seq_printf(s, "pin %d (%s): GPIO %s",
++ pin, desc->name, desc->gpio_owner);
++ else
++ seq_printf(s, "pin %d (%s): UNCLAIMED",
++ pin, desc->name);
++ } else {
++ /* For non-strict controllers */
++ seq_printf(s, "pin %d (%s): %s %s%s", pin, desc->name,
++ desc->mux_owner ? desc->mux_owner
++ : "(MUX UNCLAIMED)",
++ desc->gpio_owner ? desc->gpio_owner
++ : "(GPIO UNCLAIMED)",
+ is_hog ? " (HOG)" : "");
+- else if (desc->gpio_owner)
+- seq_printf(s, "pin %d (%s): GPIO %s",
+- pin, desc->name, desc->gpio_owner);
++ }
++
++ /* If mux: print function+group claiming the pin */
++ if (desc->mux_setting)
++ seq_printf(s, " function %s group %s\n",
++ pmxops->get_function_name(pctldev,
++ desc->mux_setting->func),
++ pctlops->get_group_name(pctldev,
++ desc->mux_setting->group));
+ else
+- seq_printf(s, "pin %d (%s): UNCLAIMED",
+- pin, desc->name);
+- } else {
+- /* For non-strict controllers */
+- seq_printf(s, "pin %d (%s): %s %s%s", pin, desc->name,
+- desc->mux_owner ? desc->mux_owner
+- : "(MUX UNCLAIMED)",
+- desc->gpio_owner ? desc->gpio_owner
+- : "(GPIO UNCLAIMED)",
+- is_hog ? " (HOG)" : "");
++ seq_putc(s, '\n');
+ }
+-
+- /* If mux: print function+group claiming the pin */
+- if (desc->mux_setting)
+- seq_printf(s, " function %s group %s\n",
+- pmxops->get_function_name(pctldev,
+- desc->mux_setting->func),
+- pctlops->get_group_name(pctldev,
+- desc->mux_setting->group));
+- else
+- seq_putc(s, '\n');
+ }
+
+ mutex_unlock(&pctldev->mutex);
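+
The pinmux conversion leans on the scope-based lock guards from <linux/cleanup.h>: guard(mutex)(&m) holds the mutex until the enclosing scope ends, while scoped_guard(mutex, &m) { ... } holds it only for the attached block, and either way every early return still unlocks; that is why pin_request() can keep `return 0` inside the guarded block above. A minimal sketch:

	#include <linux/cleanup.h>
	#include <linux/mutex.h>

	static int claim(struct pin_desc *desc, const char *owner)
	{
		scoped_guard(mutex, &desc->mux_lock) {
			if (desc->mux_owner)
				return -EBUSY;	/* unlocked on scope exit */
			desc->mux_owner = owner;
		}
		/* mux_lock already dropped here */
		return 0;
	}

	static bool is_claimed(struct pin_desc *desc)
	{
		guard(mutex)(&desc->mux_lock);	/* held until return */
		return desc->mux_owner || desc->gpio_owner;
	}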
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+index a0eb4e01b3a755..1b7eecff3ffa43 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-gpio.c
+@@ -1226,6 +1226,8 @@ static const struct of_device_id pmic_gpio_of_match[] = {
+ { .compatible = "qcom,pm8550ve-gpio", .data = (void *) 8 },
+ { .compatible = "qcom,pm8550vs-gpio", .data = (void *) 6 },
+ { .compatible = "qcom,pm8916-gpio", .data = (void *) 4 },
++ /* pm8937 has 8 GPIOs with holes on 3, 4 and 6 */
++ { .compatible = "qcom,pm8937-gpio", .data = (void *) 8 },
+ { .compatible = "qcom,pm8941-gpio", .data = (void *) 36 },
+ /* pm8950 has 8 GPIOs with holes on 3 */
+ { .compatible = "qcom,pm8950-gpio", .data = (void *) 8 },
+diff --git a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
+index d16ece90d926cf..5fa04e7c1d5c4d 100644
+--- a/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
++++ b/drivers/pinctrl/qcom/pinctrl-spmi-mpp.c
+@@ -983,6 +983,7 @@ static const struct of_device_id pmic_mpp_of_match[] = {
+ { .compatible = "qcom,pm8226-mpp", .data = (void *) 8 },
+ { .compatible = "qcom,pm8841-mpp", .data = (void *) 4 },
+ { .compatible = "qcom,pm8916-mpp", .data = (void *) 4 },
++ { .compatible = "qcom,pm8937-mpp", .data = (void *) 4 },
+ { .compatible = "qcom,pm8941-mpp", .data = (void *) 8 },
+ { .compatible = "qcom,pm8950-mpp", .data = (void *) 4 },
+ { .compatible = "qcom,pmi8950-mpp", .data = (void *) 4 },
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index 89f5f44857d555..1101e5b2488e52 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -3696,7 +3696,6 @@ static int asus_wmi_custom_fan_curve_init(struct asus_wmi *asus)
+ /* Throttle thermal policy ****************************************************/
+ static int throttle_thermal_policy_write(struct asus_wmi *asus)
+ {
+- u32 retval;
+ u8 value;
+ int err;
+
+@@ -3718,8 +3717,8 @@ static int throttle_thermal_policy_write(struct asus_wmi *asus)
+ value = asus->throttle_thermal_policy_mode;
+ }
+
+- err = asus_wmi_set_devstate(asus->throttle_thermal_policy_dev,
+- value, &retval);
++ /* Some machines do not return an error code as a result, so we ignore it */
++ err = asus_wmi_set_devstate(asus->throttle_thermal_policy_dev, value, NULL);
+
+ sysfs_notify(&asus->platform_device->dev.kobj, NULL,
+ "throttle_thermal_policy");
+@@ -3729,12 +3728,6 @@ static int throttle_thermal_policy_write(struct asus_wmi *asus)
+ return err;
+ }
+
+- if (retval != 1) {
+- pr_warn("Failed to set throttle thermal policy (retval): 0x%x\n",
+- retval);
+- return -EIO;
+- }
+-
+ /* Must set to disabled if mode is toggled */
+ if (asus->cpu_fan_curve_available)
+ asus->custom_fan_curves[FAN_CURVE_DEV_CPU].enabled = false;
+diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
+index 29ad510e881c39..778ff187ac59e6 100644
+--- a/drivers/pmdomain/core.c
++++ b/drivers/pmdomain/core.c
+@@ -2171,8 +2171,24 @@ static int genpd_alloc_data(struct generic_pm_domain *genpd)
+ }
+
+ genpd->gd = gd;
+- return 0;
++ device_initialize(&genpd->dev);
++
++ if (!genpd_is_dev_name_fw(genpd)) {
++ dev_set_name(&genpd->dev, "%s", genpd->name);
++ } else {
++ ret = ida_alloc(&genpd_ida, GFP_KERNEL);
++ if (ret < 0)
++ goto put;
+
++ genpd->device_id = ret;
++ dev_set_name(&genpd->dev, "%s_%u", genpd->name, genpd->device_id);
++ }
++
++ return 0;
++put:
++ put_device(&genpd->dev);
++ if (genpd->free_states == genpd_free_default_power_state)
++ kfree(genpd->states);
+ free:
+ if (genpd_is_cpu_domain(genpd))
+ free_cpumask_var(genpd->cpus);
+@@ -2182,6 +2198,9 @@ static int genpd_alloc_data(struct generic_pm_domain *genpd)
+
+ static void genpd_free_data(struct generic_pm_domain *genpd)
+ {
++ put_device(&genpd->dev);
++ if (genpd->device_id != -ENXIO)
++ ida_free(&genpd_ida, genpd->device_id);
+ if (genpd_is_cpu_domain(genpd))
+ free_cpumask_var(genpd->cpus);
+ if (genpd->free_states)
+@@ -2270,20 +2289,6 @@ int pm_genpd_init(struct generic_pm_domain *genpd,
+ if (ret)
+ return ret;
+
+- device_initialize(&genpd->dev);
+-
+- if (!genpd_is_dev_name_fw(genpd)) {
+- dev_set_name(&genpd->dev, "%s", genpd->name);
+- } else {
+- ret = ida_alloc(&genpd_ida, GFP_KERNEL);
+- if (ret < 0) {
+- put_device(&genpd->dev);
+- return ret;
+- }
+- genpd->device_id = ret;
+- dev_set_name(&genpd->dev, "%s_%u", genpd->name, genpd->device_id);
+- }
+-
+ mutex_lock(&gpd_list_lock);
+ list_add(&genpd->gpd_list_node, &gpd_list);
+ mutex_unlock(&gpd_list_lock);
+@@ -2324,8 +2329,6 @@ static int genpd_remove(struct generic_pm_domain *genpd)
+ genpd_unlock(genpd);
+ genpd_debug_remove(genpd);
+ cancel_work_sync(&genpd->power_off_work);
+- if (genpd->device_id != -ENXIO)
+- ida_free(&genpd_ida, genpd->device_id);
+ genpd_free_data(genpd);
+
+ pr_debug("%s: removed %s\n", __func__, dev_name(&genpd->dev));
+diff --git a/drivers/pmdomain/imx/gpcv2.c b/drivers/pmdomain/imx/gpcv2.c
+index 963d61c5af6d5e..3f0e6960f47fc2 100644
+--- a/drivers/pmdomain/imx/gpcv2.c
++++ b/drivers/pmdomain/imx/gpcv2.c
+@@ -403,7 +403,7 @@ static int imx_pgc_power_up(struct generic_pm_domain *genpd)
+ * already reaches target before udelay()
+ */
+ regmap_read_bypassed(domain->regmap, domain->regs->hsk, &reg_val);
+- udelay(5);
++ udelay(10);
+ }
+
+ /* Disable reset clocks for all devices in the domain */
+diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
+index c56cd0f63909a2..77a36e7bddd54e 100644
+--- a/drivers/ptp/ptp_clock.c
++++ b/drivers/ptp/ptp_clock.c
+@@ -150,7 +150,8 @@ static int ptp_clock_adjtime(struct posix_clock *pc, struct __kernel_timex *tx)
+ if (ppb > ops->max_adj || ppb < -ops->max_adj)
+ return -ERANGE;
+ err = ops->adjfine(ops, tx->freq);
+- ptp->dialed_frequency = tx->freq;
++ if (!err)
++ ptp->dialed_frequency = tx->freq;
+ } else if (tx->modes & ADJ_OFFSET) {
+ if (ops->adjphase) {
+ s32 max_phase_adj = ops->getmaxphase(ops);
+diff --git a/drivers/regulator/qcom-rpmh-regulator.c b/drivers/regulator/qcom-rpmh-regulator.c
+index 6c343b4b9d15a8..7870722b6ee21c 100644
+--- a/drivers/regulator/qcom-rpmh-regulator.c
++++ b/drivers/regulator/qcom-rpmh-regulator.c
+@@ -843,26 +843,15 @@ static const struct rpmh_vreg_hw_data pmic5_ftsmps520 = {
+ .of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode,
+ };
+
+-static const struct rpmh_vreg_hw_data pmic5_ftsmps525_lv = {
++static const struct rpmh_vreg_hw_data pmic5_ftsmps525 = {
+ .regulator_type = VRM,
+ .ops = &rpmh_regulator_vrm_ops,
+ .voltage_ranges = (struct linear_range[]) {
+ REGULATOR_LINEAR_RANGE(300000, 0, 267, 4000),
++ REGULATOR_LINEAR_RANGE(1376000, 268, 438, 8000),
+ },
+- .n_linear_ranges = 1,
+- .n_voltages = 268,
+- .pmic_mode_map = pmic_mode_map_pmic5_smps,
+- .of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode,
+-};
+-
+-static const struct rpmh_vreg_hw_data pmic5_ftsmps525_mv = {
+- .regulator_type = VRM,
+- .ops = &rpmh_regulator_vrm_ops,
+- .voltage_ranges = (struct linear_range[]) {
+- REGULATOR_LINEAR_RANGE(600000, 0, 267, 8000),
+- },
+- .n_linear_ranges = 1,
+- .n_voltages = 268,
++ .n_linear_ranges = 2,
++ .n_voltages = 439,
+ .pmic_mode_map = pmic_mode_map_pmic5_smps,
+ .of_map_mode = rpmh_regulator_pmic4_smps_of_map_mode,
+ };
+@@ -1190,12 +1179,12 @@ static const struct rpmh_vreg_init_data pm8550_vreg_data[] = {
+ };
+
+ static const struct rpmh_vreg_init_data pm8550vs_vreg_data[] = {
+- RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525_lv, "vdd-s1"),
+- RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525_lv, "vdd-s2"),
+- RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525_lv, "vdd-s3"),
+- RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525_lv, "vdd-s4"),
+- RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525_lv, "vdd-s5"),
+- RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525_mv, "vdd-s6"),
++ RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525, "vdd-s1"),
++ RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525, "vdd-s2"),
++ RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525, "vdd-s3"),
++ RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525, "vdd-s4"),
++ RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525, "vdd-s5"),
++ RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525, "vdd-s6"),
+ RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1"),
+ RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo515, "vdd-l2"),
+ RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"),
+@@ -1203,14 +1192,14 @@ static const struct rpmh_vreg_init_data pm8550vs_vreg_data[] = {
+ };
+
+ static const struct rpmh_vreg_init_data pm8550ve_vreg_data[] = {
+- RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525_lv, "vdd-s1"),
+- RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525_lv, "vdd-s2"),
+- RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525_lv, "vdd-s3"),
+- RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525_mv, "vdd-s4"),
+- RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525_lv, "vdd-s5"),
+- RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525_lv, "vdd-s6"),
+- RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525_lv, "vdd-s7"),
+- RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525_lv, "vdd-s8"),
++ RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525, "vdd-s1"),
++ RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525, "vdd-s2"),
++ RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525, "vdd-s3"),
++ RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525, "vdd-s4"),
++ RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525, "vdd-s5"),
++ RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525, "vdd-s6"),
++ RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525, "vdd-s7"),
++ RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525, "vdd-s8"),
+ RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1"),
+ RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo515, "vdd-l2"),
+ RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"),
+@@ -1218,14 +1207,14 @@ static const struct rpmh_vreg_init_data pm8550ve_vreg_data[] = {
+ };
+
+ static const struct rpmh_vreg_init_data pmc8380_vreg_data[] = {
+- RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525_lv, "vdd-s1"),
+- RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525_lv, "vdd-s2"),
+- RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525_lv, "vdd-s3"),
+- RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525_mv, "vdd-s4"),
+- RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525_lv, "vdd-s5"),
+- RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525_lv, "vdd-s6"),
+- RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525_lv, "vdd-s7"),
+- RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525_lv, "vdd-s8"),
++ RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525, "vdd-s1"),
++ RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525, "vdd-s2"),
++ RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525, "vdd-s3"),
++ RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525, "vdd-s4"),
++ RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525, "vdd-s5"),
++ RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525, "vdd-s6"),
++ RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525, "vdd-s7"),
++ RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525, "vdd-s8"),
+ RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1"),
+ RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo515, "vdd-l2"),
+ RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"),
+@@ -1409,16 +1398,16 @@ static const struct rpmh_vreg_init_data pmx65_vreg_data[] = {
+ };
+
+ static const struct rpmh_vreg_init_data pmx75_vreg_data[] = {
+- RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525_lv, "vdd-s1"),
+- RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525_lv, "vdd-s2"),
+- RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525_lv, "vdd-s3"),
+- RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525_mv, "vdd-s4"),
+- RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525_lv, "vdd-s5"),
+- RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525_lv, "vdd-s6"),
+- RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525_lv, "vdd-s7"),
+- RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525_lv, "vdd-s8"),
+- RPMH_VREG("smps9", "smp%s9", &pmic5_ftsmps525_lv, "vdd-s9"),
+- RPMH_VREG("smps10", "smp%s10", &pmic5_ftsmps525_lv, "vdd-s10"),
++ RPMH_VREG("smps1", "smp%s1", &pmic5_ftsmps525, "vdd-s1"),
++ RPMH_VREG("smps2", "smp%s2", &pmic5_ftsmps525, "vdd-s2"),
++ RPMH_VREG("smps3", "smp%s3", &pmic5_ftsmps525, "vdd-s3"),
++ RPMH_VREG("smps4", "smp%s4", &pmic5_ftsmps525, "vdd-s4"),
++ RPMH_VREG("smps5", "smp%s5", &pmic5_ftsmps525, "vdd-s5"),
++ RPMH_VREG("smps6", "smp%s6", &pmic5_ftsmps525, "vdd-s6"),
++ RPMH_VREG("smps7", "smp%s7", &pmic5_ftsmps525, "vdd-s7"),
++ RPMH_VREG("smps8", "smp%s8", &pmic5_ftsmps525, "vdd-s8"),
++ RPMH_VREG("smps9", "smp%s9", &pmic5_ftsmps525, "vdd-s9"),
++ RPMH_VREG("smps10", "smp%s10", &pmic5_ftsmps525, "vdd-s10"),
+ RPMH_VREG("ldo1", "ldo%s1", &pmic5_nldo515, "vdd-l1"),
+ RPMH_VREG("ldo2", "ldo%s2", &pmic5_nldo515, "vdd-l2-18"),
+ RPMH_VREG("ldo3", "ldo%s3", &pmic5_nldo515, "vdd-l3"),
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index 793b1d274be33a..1a2d08ec9de9ef 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -1433,6 +1433,7 @@ static const struct of_device_id adsp_of_match[] = {
+ { .compatible = "qcom,sa8775p-cdsp1-pas", .data = &sa8775p_cdsp1_resource},
+ { .compatible = "qcom,sa8775p-gpdsp0-pas", .data = &sa8775p_gpdsp0_resource},
+ { .compatible = "qcom,sa8775p-gpdsp1-pas", .data = &sa8775p_gpdsp1_resource},
++ { .compatible = "qcom,sar2130p-adsp-pas", .data = &sm8350_adsp_resource},
+ { .compatible = "qcom,sc7180-adsp-pas", .data = &sm8250_adsp_resource},
+ { .compatible = "qcom,sc7180-mpss-pas", .data = &mpss_resource_init},
+ { .compatible = "qcom,sc7280-adsp-pas", .data = &sm8350_adsp_resource},
+diff --git a/drivers/rtc/rtc-cmos.c b/drivers/rtc/rtc-cmos.c
+index 35dca2accbb8df..5849d2970bba45 100644
+--- a/drivers/rtc/rtc-cmos.c
++++ b/drivers/rtc/rtc-cmos.c
+@@ -645,18 +645,17 @@ static int cmos_nvram_read(void *priv, unsigned int off, void *val,
+ unsigned char *buf = val;
+
+ off += NVRAM_OFFSET;
+- spin_lock_irq(&rtc_lock);
+- for (; count; count--, off++) {
++ for (; count; count--, off++, buf++) {
++ guard(spinlock_irq)(&rtc_lock);
+ if (off < 128)
+- *buf++ = CMOS_READ(off);
++ *buf = CMOS_READ(off);
+ else if (can_bank2)
+- *buf++ = cmos_read_bank2(off);
++ *buf = cmos_read_bank2(off);
+ else
+- break;
++ return -EIO;
+ }
+- spin_unlock_irq(&rtc_lock);
+
+- return count ? -EIO : 0;
++ return 0;
+ }
+
+ static int cmos_nvram_write(void *priv, unsigned int off, void *val,
+@@ -671,23 +670,23 @@ static int cmos_nvram_write(void *priv, unsigned int off, void *val,
+ * NVRAM to update, updating checksums is also part of its job.
+ */
+ off += NVRAM_OFFSET;
+- spin_lock_irq(&rtc_lock);
+- for (; count; count--, off++) {
++ for (; count; count--, off++, buf++) {
+ /* don't trash RTC registers */
+ if (off == cmos->day_alrm
+ || off == cmos->mon_alrm
+ || off == cmos->century)
+- buf++;
+- else if (off < 128)
+- CMOS_WRITE(*buf++, off);
++ continue;
++
++ guard(spinlock_irq)(&rtc_lock);
++ if (off < 128)
++ CMOS_WRITE(*buf, off);
+ else if (can_bank2)
+- cmos_write_bank2(*buf++, off);
++ cmos_write_bank2(*buf, off);
+ else
+- break;
++ return -EIO;
+ }
+- spin_unlock_irq(&rtc_lock);
+
+- return count ? -EIO : 0;
++ return 0;
+ }
+
+ /*----------------------------------------------------------------*/
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index 4cd3a3eab6f1c4..cd394d8c9f07f0 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -2493,6 +2493,7 @@ static int complete_v3_hw(struct hisi_sas_cq *cq)
+ /* update rd_point */
+ cq->rd_point = rd_point;
+ hisi_sas_write32(hisi_hba, COMPL_Q_0_RD_PTR + (0x14 * queue), rd_point);
++ cond_resched();
+
+ return completed;
+ }
+@@ -3550,6 +3551,11 @@ debugfs_to_reg_name_v3_hw(int off, int base_off,
+ return NULL;
+ }
+
++static bool debugfs_dump_is_generated_v3_hw(void *p)
++{
++ return p ? true : false;
++}
++
+ static void debugfs_print_reg_v3_hw(u32 *regs_val, struct seq_file *s,
+ const struct hisi_sas_debugfs_reg *reg)
+ {
+@@ -3575,6 +3581,9 @@ static int debugfs_global_v3_hw_show(struct seq_file *s, void *p)
+ {
+ struct hisi_sas_debugfs_regs *global = s->private;
+
++ if (!debugfs_dump_is_generated_v3_hw(global->data))
++ return -EPERM;
++
+ debugfs_print_reg_v3_hw(global->data, s,
+ &debugfs_global_reg);
+
+@@ -3586,6 +3595,9 @@ static int debugfs_axi_v3_hw_show(struct seq_file *s, void *p)
+ {
+ struct hisi_sas_debugfs_regs *axi = s->private;
+
++ if (!debugfs_dump_is_generated_v3_hw(axi->data))
++ return -EPERM;
++
+ debugfs_print_reg_v3_hw(axi->data, s,
+ &debugfs_axi_reg);
+
+@@ -3597,6 +3609,9 @@ static int debugfs_ras_v3_hw_show(struct seq_file *s, void *p)
+ {
+ struct hisi_sas_debugfs_regs *ras = s->private;
+
++ if (!debugfs_dump_is_generated_v3_hw(ras->data))
++ return -EPERM;
++
+ debugfs_print_reg_v3_hw(ras->data, s,
+ &debugfs_ras_reg);
+
+@@ -3609,6 +3624,9 @@ static int debugfs_port_v3_hw_show(struct seq_file *s, void *p)
+ struct hisi_sas_debugfs_port *port = s->private;
+ const struct hisi_sas_debugfs_reg *reg_port = &debugfs_port_reg;
+
++ if (!debugfs_dump_is_generated_v3_hw(port->data))
++ return -EPERM;
++
+ debugfs_print_reg_v3_hw(port->data, s, reg_port);
+
+ return 0;
+@@ -3664,6 +3682,9 @@ static int debugfs_cq_v3_hw_show(struct seq_file *s, void *p)
+ struct hisi_sas_debugfs_cq *debugfs_cq = s->private;
+ int slot;
+
++ if (!debugfs_dump_is_generated_v3_hw(debugfs_cq->complete_hdr))
++ return -EPERM;
++
+ for (slot = 0; slot < HISI_SAS_QUEUE_SLOTS; slot++)
+ debugfs_cq_show_slot_v3_hw(s, slot, debugfs_cq);
+
+@@ -3685,8 +3706,12 @@ static void debugfs_dq_show_slot_v3_hw(struct seq_file *s, int slot,
+
+ static int debugfs_dq_v3_hw_show(struct seq_file *s, void *p)
+ {
++ struct hisi_sas_debugfs_dq *debugfs_dq = s->private;
+ int slot;
+
++ if (!debugfs_dump_is_generated_v3_hw(debugfs_dq->hdr))
++ return -EPERM;
++
+ for (slot = 0; slot < HISI_SAS_QUEUE_SLOTS; slot++)
+ debugfs_dq_show_slot_v3_hw(s, slot, s->private);
+
+@@ -3700,6 +3725,9 @@ static int debugfs_iost_v3_hw_show(struct seq_file *s, void *p)
+ struct hisi_sas_iost *iost = debugfs_iost->iost;
+ int i, max_command_entries = HISI_SAS_MAX_COMMANDS;
+
++ if (!debugfs_dump_is_generated_v3_hw(iost))
++ return -EPERM;
++
+ for (i = 0; i < max_command_entries; i++, iost++) {
+ __le64 *data = &iost->qw0;
+
+@@ -3719,6 +3747,9 @@ static int debugfs_iost_cache_v3_hw_show(struct seq_file *s, void *p)
+ int i, tab_idx;
+ __le64 *iost;
+
++ if (!debugfs_dump_is_generated_v3_hw(iost_cache))
++ return -EPERM;
++
+ for (i = 0; i < HISI_SAS_IOST_ITCT_CACHE_NUM; i++, iost_cache++) {
+ /*
+ * Data struct of IOST cache:
+@@ -3742,6 +3773,9 @@ static int debugfs_itct_v3_hw_show(struct seq_file *s, void *p)
+ struct hisi_sas_debugfs_itct *debugfs_itct = s->private;
+ struct hisi_sas_itct *itct = debugfs_itct->itct;
+
++ if (!debugfs_dump_is_generated_v3_hw(itct))
++ return -EPERM;
++
+ for (i = 0; i < HISI_SAS_MAX_ITCT_ENTRIES; i++, itct++) {
+ __le64 *data = &itct->qw0;
+
+@@ -3761,6 +3795,9 @@ static int debugfs_itct_cache_v3_hw_show(struct seq_file *s, void *p)
+ int i, tab_idx;
+ __le64 *itct;
+
++ if (!debugfs_dump_is_generated_v3_hw(itct_cache))
++ return -EPERM;
++
+ for (i = 0; i < HISI_SAS_IOST_ITCT_CACHE_NUM; i++, itct_cache++) {
+ /*
+ * Data struct of ITCT cache:
+@@ -3778,10 +3815,9 @@ static int debugfs_itct_cache_v3_hw_show(struct seq_file *s, void *p)
+ }
+ DEFINE_SHOW_ATTRIBUTE(debugfs_itct_cache_v3_hw);
+
+-static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba)
++static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba, int index)
+ {
+ u64 *debugfs_timestamp;
+- int dump_index = hisi_hba->debugfs_dump_index;
+ struct dentry *dump_dentry;
+ struct dentry *dentry;
+ char name[256];
+@@ -3789,17 +3825,17 @@ static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba)
+ int c;
+ int d;
+
+- snprintf(name, 256, "%d", dump_index);
++ snprintf(name, 256, "%d", index);
+
+ dump_dentry = debugfs_create_dir(name, hisi_hba->debugfs_dump_dentry);
+
+- debugfs_timestamp = &hisi_hba->debugfs_timestamp[dump_index];
++ debugfs_timestamp = &hisi_hba->debugfs_timestamp[index];
+
+ debugfs_create_u64("timestamp", 0400, dump_dentry,
+ debugfs_timestamp);
+
+ debugfs_create_file("global", 0400, dump_dentry,
+- &hisi_hba->debugfs_regs[dump_index][DEBUGFS_GLOBAL],
++ &hisi_hba->debugfs_regs[index][DEBUGFS_GLOBAL],
+ &debugfs_global_v3_hw_fops);
+
+ /* Create port dir and files */
+@@ -3808,7 +3844,7 @@ static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba)
+ snprintf(name, 256, "%d", p);
+
+ debugfs_create_file(name, 0400, dentry,
+- &hisi_hba->debugfs_port_reg[dump_index][p],
++ &hisi_hba->debugfs_port_reg[index][p],
+ &debugfs_port_v3_hw_fops);
+ }
+
+@@ -3818,7 +3854,7 @@ static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba)
+ snprintf(name, 256, "%d", c);
+
+ debugfs_create_file(name, 0400, dentry,
+- &hisi_hba->debugfs_cq[dump_index][c],
++ &hisi_hba->debugfs_cq[index][c],
+ &debugfs_cq_v3_hw_fops);
+ }
+
+@@ -3828,32 +3864,32 @@ static void debugfs_create_files_v3_hw(struct hisi_hba *hisi_hba)
+ snprintf(name, 256, "%d", d);
+
+ debugfs_create_file(name, 0400, dentry,
+- &hisi_hba->debugfs_dq[dump_index][d],
++ &hisi_hba->debugfs_dq[index][d],
+ &debugfs_dq_v3_hw_fops);
+ }
+
+ debugfs_create_file("iost", 0400, dump_dentry,
+- &hisi_hba->debugfs_iost[dump_index],
++ &hisi_hba->debugfs_iost[index],
+ &debugfs_iost_v3_hw_fops);
+
+ debugfs_create_file("iost_cache", 0400, dump_dentry,
+- &hisi_hba->debugfs_iost_cache[dump_index],
++ &hisi_hba->debugfs_iost_cache[index],
+ &debugfs_iost_cache_v3_hw_fops);
+
+ debugfs_create_file("itct", 0400, dump_dentry,
+- &hisi_hba->debugfs_itct[dump_index],
++ &hisi_hba->debugfs_itct[index],
+ &debugfs_itct_v3_hw_fops);
+
+ debugfs_create_file("itct_cache", 0400, dump_dentry,
+- &hisi_hba->debugfs_itct_cache[dump_index],
++ &hisi_hba->debugfs_itct_cache[index],
+ &debugfs_itct_cache_v3_hw_fops);
+
+ debugfs_create_file("axi", 0400, dump_dentry,
+- &hisi_hba->debugfs_regs[dump_index][DEBUGFS_AXI],
++ &hisi_hba->debugfs_regs[index][DEBUGFS_AXI],
+ &debugfs_axi_v3_hw_fops);
+
+ debugfs_create_file("ras", 0400, dump_dentry,
+- &hisi_hba->debugfs_regs[dump_index][DEBUGFS_RAS],
++ &hisi_hba->debugfs_regs[index][DEBUGFS_RAS],
+ &debugfs_ras_v3_hw_fops);
+ }
+
+@@ -4516,22 +4552,34 @@ static void debugfs_release_v3_hw(struct hisi_hba *hisi_hba, int dump_index)
+ int i;
+
+ devm_kfree(dev, hisi_hba->debugfs_iost_cache[dump_index].cache);
++ hisi_hba->debugfs_iost_cache[dump_index].cache = NULL;
+ devm_kfree(dev, hisi_hba->debugfs_itct_cache[dump_index].cache);
++ hisi_hba->debugfs_itct_cache[dump_index].cache = NULL;
+ devm_kfree(dev, hisi_hba->debugfs_iost[dump_index].iost);
++ hisi_hba->debugfs_iost[dump_index].iost = NULL;
+ devm_kfree(dev, hisi_hba->debugfs_itct[dump_index].itct);
++ hisi_hba->debugfs_itct[dump_index].itct = NULL;
+
+- for (i = 0; i < hisi_hba->queue_count; i++)
++ for (i = 0; i < hisi_hba->queue_count; i++) {
+ devm_kfree(dev, hisi_hba->debugfs_dq[dump_index][i].hdr);
++ hisi_hba->debugfs_dq[dump_index][i].hdr = NULL;
++ }
+
+- for (i = 0; i < hisi_hba->queue_count; i++)
++ for (i = 0; i < hisi_hba->queue_count; i++) {
+ devm_kfree(dev,
+ hisi_hba->debugfs_cq[dump_index][i].complete_hdr);
++ hisi_hba->debugfs_cq[dump_index][i].complete_hdr = NULL;
++ }
+
+- for (i = 0; i < DEBUGFS_REGS_NUM; i++)
++ for (i = 0; i < DEBUGFS_REGS_NUM; i++) {
+ devm_kfree(dev, hisi_hba->debugfs_regs[dump_index][i].data);
++ hisi_hba->debugfs_regs[dump_index][i].data = NULL;
++ }
+
+- for (i = 0; i < hisi_hba->n_phy; i++)
++ for (i = 0; i < hisi_hba->n_phy; i++) {
+ devm_kfree(dev, hisi_hba->debugfs_port_reg[dump_index][i].data);
++ hisi_hba->debugfs_port_reg[dump_index][i].data = NULL;
++ }
+ }
+
+ static const struct hisi_sas_debugfs_reg *debugfs_reg_array_v3_hw[DEBUGFS_REGS_NUM] = {
+@@ -4658,8 +4706,6 @@ static int debugfs_snapshot_regs_v3_hw(struct hisi_hba *hisi_hba)
+ debugfs_snapshot_itct_reg_v3_hw(hisi_hba);
+ debugfs_snapshot_iost_reg_v3_hw(hisi_hba);
+
+- debugfs_create_files_v3_hw(hisi_hba);
+-
+ debugfs_snapshot_restore_v3_hw(hisi_hba);
+ hisi_hba->debugfs_dump_index++;
+
+@@ -4743,6 +4789,17 @@ static void debugfs_bist_init_v3_hw(struct hisi_hba *hisi_hba)
+ hisi_hba->debugfs_bist_linkrate = SAS_LINK_RATE_1_5_GBPS;
+ }
+
++static void debugfs_dump_init_v3_hw(struct hisi_hba *hisi_hba)
++{
++ int i;
++
++ hisi_hba->debugfs_dump_dentry =
++ debugfs_create_dir("dump", hisi_hba->debugfs_dir);
++
++ for (i = 0; i < hisi_sas_debugfs_dump_count; i++)
++ debugfs_create_files_v3_hw(hisi_hba, i);
++}
++
+ static void debugfs_exit_v3_hw(struct hisi_hba *hisi_hba)
+ {
+ debugfs_remove_recursive(hisi_hba->debugfs_dir);
+@@ -4763,8 +4820,7 @@ static void debugfs_init_v3_hw(struct hisi_hba *hisi_hba)
+ /* create bist structures */
+ debugfs_bist_init_v3_hw(hisi_hba);
+
+- hisi_hba->debugfs_dump_dentry =
+- debugfs_create_dir("dump", hisi_hba->debugfs_dir);
++ debugfs_dump_init_v3_hw(hisi_hba);
+
+ debugfs_phy_down_cnt_init_v3_hw(hisi_hba);
+ debugfs_fifo_init_v3_hw(hisi_hba);
+diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
+index 134bc96dd13400..ce3a1f42713dd8 100644
+--- a/drivers/scsi/lpfc/lpfc_ct.c
++++ b/drivers/scsi/lpfc/lpfc_ct.c
+@@ -2226,6 +2226,11 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ ulp_status, ulp_word4, latt);
+
+ if (latt || ulp_status) {
++ lpfc_printf_vlog(vport, KERN_WARNING, LOG_DISCOVERY,
++ "0229 FDMI cmd %04x failed, latt = %d "
++ "ulp_status: (x%x/x%x), sli_flag x%x\n",
++ be16_to_cpu(fdmi_cmd), latt, ulp_status,
++ ulp_word4, phba->sli.sli_flag);
+
+ /* Look for a retryable error */
+ if (ulp_status == IOSTAT_LOCAL_REJECT) {
+@@ -2234,8 +2239,16 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ case IOERR_SLI_DOWN:
+ /* Driver aborted this IO. No retry as error
+ * is likely Offline->Online or some adapter
+- * error. Recovery will try again.
++ * error. Recovery will try again, but if port
++ * is not active there's no point to continue
++ * issuing follow up FDMI commands.
+ */
++ if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE)) {
++ free_ndlp = cmdiocb->ndlp;
++ lpfc_ct_free_iocb(phba, cmdiocb);
++ lpfc_nlp_put(free_ndlp);
++ return;
++ }
+ break;
+ case IOERR_ABORT_IN_PROGRESS:
+ case IOERR_SEQUENCE_TIMEOUT:
+@@ -2256,12 +2269,6 @@ lpfc_cmpl_ct_disc_fdmi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb,
+ break;
+ }
+ }
+-
+- lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY,
+- "0229 FDMI cmd %04x latt = %d "
+- "ulp_status: x%x, rid x%x\n",
+- be16_to_cpu(fdmi_cmd), latt, ulp_status,
+- ulp_word4);
+ }
+
+ free_ndlp = cmdiocb->ndlp;
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 9241075f72fa4b..6e8d8a96c54fb3 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -155,6 +155,7 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ struct lpfc_hba *phba;
+ struct lpfc_work_evt *evtp;
+ unsigned long iflags;
++ bool nvme_reg = false;
+
+ ndlp = ((struct lpfc_rport_data *)rport->dd_data)->pnode;
+ if (!ndlp)
+@@ -177,38 +178,49 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ /* Don't schedule a worker thread event if the vport is going down. */
+ if (test_bit(FC_UNLOADING, &vport->load_flag) ||
+ !test_bit(HBA_SETUP, &phba->hba_flag)) {
++
+ spin_lock_irqsave(&ndlp->lock, iflags);
+ ndlp->rport = NULL;
+
++ if (ndlp->fc4_xpt_flags & NVME_XPT_REGD)
++ nvme_reg = true;
++
+ /* The scsi_transport is done with the rport so lpfc cannot
+- * call to unregister. Remove the scsi transport reference
+- * and clean up the SCSI transport node details.
++ * call to unregister.
+ */
+- if (ndlp->fc4_xpt_flags & (NLP_XPT_REGD | SCSI_XPT_REGD)) {
++ if (ndlp->fc4_xpt_flags & SCSI_XPT_REGD) {
+ ndlp->fc4_xpt_flags &= ~SCSI_XPT_REGD;
+
+- /* NVME transport-registered rports need the
+- * NLP_XPT_REGD flag to complete an unregister.
++ /* If NLP_XPT_REGD was cleared in lpfc_nlp_unreg_node,
++ * unregister calls were made to the scsi and nvme
++ * transports and refcnt was already decremented. Clear
++ * the NLP_XPT_REGD flag only if the NVME Rport is
++ * confirmed unregistered.
+ */
+- if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD))
++ if (!nvme_reg && ndlp->fc4_xpt_flags & NLP_XPT_REGD) {
+ ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD;
++ spin_unlock_irqrestore(&ndlp->lock, iflags);
++ lpfc_nlp_put(ndlp); /* may free ndlp */
++ } else {
++ spin_unlock_irqrestore(&ndlp->lock, iflags);
++ }
++ } else {
+ spin_unlock_irqrestore(&ndlp->lock, iflags);
+- lpfc_nlp_put(ndlp);
+- spin_lock_irqsave(&ndlp->lock, iflags);
+ }
+
++ spin_lock_irqsave(&ndlp->lock, iflags);
++
+ /* Only 1 thread can drop the initial node reference. If
+ * another thread has set NLP_DROPPED, this thread is done.
+ */
+- if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD) &&
+- !(ndlp->nlp_flag & NLP_DROPPED)) {
+- ndlp->nlp_flag |= NLP_DROPPED;
++ if (nvme_reg || (ndlp->nlp_flag & NLP_DROPPED)) {
+ spin_unlock_irqrestore(&ndlp->lock, iflags);
+- lpfc_nlp_put(ndlp);
+ return;
+ }
+
++ ndlp->nlp_flag |= NLP_DROPPED;
+ spin_unlock_irqrestore(&ndlp->lock, iflags);
++ lpfc_nlp_put(ndlp);
+ return;
+ }
+
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 0dd451009b0791..a3658ef1141b26 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -13518,6 +13518,8 @@ lpfc_sli4_hba_unset(struct lpfc_hba *phba)
+ /* Disable FW logging to host memory */
+ lpfc_ras_stop_fwlog(phba);
+
++ lpfc_sli4_queue_unset(phba);
++
+ /* Reset SLI4 HBA FCoE function */
+ lpfc_pci_function_reset(phba);
+
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 2ec6e55771b45a..6748fba48a07ed 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -5291,6 +5291,8 @@ lpfc_sli_brdrestart_s4(struct lpfc_hba *phba)
+ "0296 Restart HBA Data: x%x x%x\n",
+ phba->pport->port_state, psli->sli_flag);
+
++ lpfc_sli4_queue_unset(phba);
++
+ rc = lpfc_sli4_brdreset(phba);
+ if (rc) {
+ phba->link_state = LPFC_HBA_ERROR;
+@@ -17625,6 +17627,9 @@ lpfc_eq_destroy(struct lpfc_hba *phba, struct lpfc_queue *eq)
+ if (!eq)
+ return -ENODEV;
+
++ if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE))
++ goto list_remove;
++
+ mbox = mempool_alloc(eq->phba->mbox_mem_pool, GFP_KERNEL);
+ if (!mbox)
+ return -ENOMEM;
+@@ -17651,10 +17656,12 @@ lpfc_eq_destroy(struct lpfc_hba *phba, struct lpfc_queue *eq)
+ shdr_status, shdr_add_status, rc);
+ status = -ENXIO;
+ }
++ mempool_free(mbox, eq->phba->mbox_mem_pool);
+
++list_remove:
+ /* Remove eq from any list */
+ list_del_init(&eq->list);
+- mempool_free(mbox, eq->phba->mbox_mem_pool);
++
+ return status;
+ }
+
+@@ -17682,6 +17689,10 @@ lpfc_cq_destroy(struct lpfc_hba *phba, struct lpfc_queue *cq)
+ /* sanity check on queue memory */
+ if (!cq)
+ return -ENODEV;
++
++ if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE))
++ goto list_remove;
++
+ mbox = mempool_alloc(cq->phba->mbox_mem_pool, GFP_KERNEL);
+ if (!mbox)
+ return -ENOMEM;
+@@ -17707,9 +17718,11 @@ lpfc_cq_destroy(struct lpfc_hba *phba, struct lpfc_queue *cq)
+ shdr_status, shdr_add_status, rc);
+ status = -ENXIO;
+ }
++ mempool_free(mbox, cq->phba->mbox_mem_pool);
++
++list_remove:
+ /* Remove cq from any list */
+ list_del_init(&cq->list);
+- mempool_free(mbox, cq->phba->mbox_mem_pool);
+ return status;
+ }
+
+@@ -17737,6 +17750,10 @@ lpfc_mq_destroy(struct lpfc_hba *phba, struct lpfc_queue *mq)
+ /* sanity check on queue memory */
+ if (!mq)
+ return -ENODEV;
++
++ if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE))
++ goto list_remove;
++
+ mbox = mempool_alloc(mq->phba->mbox_mem_pool, GFP_KERNEL);
+ if (!mbox)
+ return -ENOMEM;
+@@ -17762,9 +17779,11 @@ lpfc_mq_destroy(struct lpfc_hba *phba, struct lpfc_queue *mq)
+ shdr_status, shdr_add_status, rc);
+ status = -ENXIO;
+ }
++ mempool_free(mbox, mq->phba->mbox_mem_pool);
++
++list_remove:
+ /* Remove mq from any list */
+ list_del_init(&mq->list);
+- mempool_free(mbox, mq->phba->mbox_mem_pool);
+ return status;
+ }
+
+@@ -17792,6 +17811,10 @@ lpfc_wq_destroy(struct lpfc_hba *phba, struct lpfc_queue *wq)
+ /* sanity check on queue memory */
+ if (!wq)
+ return -ENODEV;
++
++ if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE))
++ goto list_remove;
++
+ mbox = mempool_alloc(wq->phba->mbox_mem_pool, GFP_KERNEL);
+ if (!mbox)
+ return -ENOMEM;
+@@ -17816,11 +17839,13 @@ lpfc_wq_destroy(struct lpfc_hba *phba, struct lpfc_queue *wq)
+ shdr_status, shdr_add_status, rc);
+ status = -ENXIO;
+ }
++ mempool_free(mbox, wq->phba->mbox_mem_pool);
++
++list_remove:
+ /* Remove wq from any list */
+ list_del_init(&wq->list);
+ kfree(wq->pring);
+ wq->pring = NULL;
+- mempool_free(mbox, wq->phba->mbox_mem_pool);
+ return status;
+ }
+
+@@ -17850,6 +17875,10 @@ lpfc_rq_destroy(struct lpfc_hba *phba, struct lpfc_queue *hrq,
+ /* sanity check on queue memory */
+ if (!hrq || !drq)
+ return -ENODEV;
++
++ if (!(phba->sli.sli_flag & LPFC_SLI_ACTIVE))
++ goto list_remove;
++
+ mbox = mempool_alloc(hrq->phba->mbox_mem_pool, GFP_KERNEL);
+ if (!mbox)
+ return -ENOMEM;
+@@ -17890,9 +17919,11 @@ lpfc_rq_destroy(struct lpfc_hba *phba, struct lpfc_queue *hrq,
+ shdr_status, shdr_add_status, rc);
+ status = -ENXIO;
+ }
++ mempool_free(mbox, hrq->phba->mbox_mem_pool);
++
++list_remove:
+ list_del_init(&hrq->list);
+ list_del_init(&drq->list);
+- mempool_free(mbox, hrq->phba->mbox_mem_pool);
+ return status;
+ }
+
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index 2810608acd963a..e6ece30c43486c 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -3304,6 +3304,7 @@ struct fc_function_template qla2xxx_transport_vport_functions = {
+ .show_host_node_name = 1,
+ .show_host_port_name = 1,
+ .show_host_supported_classes = 1,
++ .show_host_supported_speeds = 1,
+
+ .get_host_port_id = qla2x00_get_host_port_id,
+ .show_host_port_id = 1,
+diff --git a/drivers/scsi/qla2xxx/qla_bsg.c b/drivers/scsi/qla2xxx/qla_bsg.c
+index 52dc9604f56746..10431a67d202bb 100644
+--- a/drivers/scsi/qla2xxx/qla_bsg.c
++++ b/drivers/scsi/qla2xxx/qla_bsg.c
+@@ -24,6 +24,7 @@ void qla2x00_bsg_job_done(srb_t *sp, int res)
+ {
+ struct bsg_job *bsg_job = sp->u.bsg_job;
+ struct fc_bsg_reply *bsg_reply = bsg_job->reply;
++ struct completion *comp = sp->comp;
+
+ ql_dbg(ql_dbg_user, sp->vha, 0x7009,
+ "%s: sp hdl %x, result=%x bsg ptr %p\n",
+@@ -35,6 +36,9 @@ void qla2x00_bsg_job_done(srb_t *sp, int res)
+ bsg_reply->result = res;
+ bsg_job_done(bsg_job, bsg_reply->result,
+ bsg_reply->reply_payload_rcv_len);
++
++ if (comp)
++ complete(comp);
+ }
+
+ void qla2x00_bsg_sp_free(srb_t *sp)
+@@ -490,16 +494,6 @@ qla2x00_process_ct(struct bsg_job *bsg_job)
+ goto done;
+ }
+
+- if ((req_sg_cnt != bsg_job->request_payload.sg_cnt) ||
+- (rsp_sg_cnt != bsg_job->reply_payload.sg_cnt)) {
+- ql_log(ql_log_warn, vha, 0x7011,
+- "request_sg_cnt: %x dma_request_sg_cnt: %x reply_sg_cnt:%x "
+- "dma_reply_sg_cnt: %x\n", bsg_job->request_payload.sg_cnt,
+- req_sg_cnt, bsg_job->reply_payload.sg_cnt, rsp_sg_cnt);
+- rval = -EAGAIN;
+- goto done_unmap_sg;
+- }
+-
+ if (!vha->flags.online) {
+ ql_log(ql_log_warn, vha, 0x7012,
+ "Host is not online.\n");
+@@ -3061,7 +3055,7 @@ qla24xx_bsg_request(struct bsg_job *bsg_job)
+
+ static bool qla_bsg_found(struct qla_qpair *qpair, struct bsg_job *bsg_job)
+ {
+- bool found = false;
++ bool found, do_bsg_done;
+ struct fc_bsg_reply *bsg_reply = bsg_job->reply;
+ scsi_qla_host_t *vha = shost_priv(fc_bsg_to_shost(bsg_job));
+ struct qla_hw_data *ha = vha->hw;
+@@ -3069,6 +3063,11 @@ static bool qla_bsg_found(struct qla_qpair *qpair, struct bsg_job *bsg_job)
+ int cnt;
+ unsigned long flags;
+ struct req_que *req;
++ int rval;
++ DECLARE_COMPLETION_ONSTACK(comp);
++ uint32_t ratov_j;
++
++ found = do_bsg_done = false;
+
+ spin_lock_irqsave(qpair->qp_lock_ptr, flags);
+ req = qpair->req;
+@@ -3080,42 +3079,104 @@ static bool qla_bsg_found(struct qla_qpair *qpair, struct bsg_job *bsg_job)
+ sp->type == SRB_ELS_CMD_HST ||
+ sp->type == SRB_ELS_CMD_HST_NOLOGIN) &&
+ sp->u.bsg_job == bsg_job) {
+- req->outstanding_cmds[cnt] = NULL;
+- spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+-
+- if (!ha->flags.eeh_busy && ha->isp_ops->abort_command(sp)) {
+- ql_log(ql_log_warn, vha, 0x7089,
+- "mbx abort_command failed.\n");
+- bsg_reply->result = -EIO;
+- } else {
+- ql_dbg(ql_dbg_user, vha, 0x708a,
+- "mbx abort_command success.\n");
+- bsg_reply->result = 0;
+- }
+- /* ref: INIT */
+- kref_put(&sp->cmd_kref, qla2x00_sp_release);
+
+ found = true;
+- goto done;
++ sp->comp = &comp;
++ break;
+ }
+ }
+ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
+
+-done:
+- return found;
++ if (!found)
++ return false;
++
++ if (ha->flags.eeh_busy) {
++ /* skip over abort. EEH handling will return the bsg. Wait for it */
++ rval = QLA_SUCCESS;
++ ql_dbg(ql_dbg_user, vha, 0x802c,
++ "eeh encounter. bsg %p sp=%p handle=%x \n",
++ bsg_job, sp, sp->handle);
++ } else {
++ rval = ha->isp_ops->abort_command(sp);
++ ql_dbg(ql_dbg_user, vha, 0x802c,
++ "Aborting bsg %p sp=%p handle=%x rval=%x\n",
++ bsg_job, sp, sp->handle, rval);
++ }
++
++ switch (rval) {
++ case QLA_SUCCESS:
++ /* Wait for the command completion. */
++ ratov_j = ha->r_a_tov / 10 * 4 * 1000;
++ ratov_j = msecs_to_jiffies(ratov_j);
++
++ if (!wait_for_completion_timeout(&comp, ratov_j)) {
++ ql_log(ql_log_info, vha, 0x7089,
++ "bsg abort timeout. bsg=%p sp=%p handle %#x .\n",
++ bsg_job, sp, sp->handle);
++
++ do_bsg_done = true;
++ } else {
++ /* fw had returned the bsg */
++ ql_dbg(ql_dbg_user, vha, 0x708a,
++ "bsg abort success. bsg %p sp=%p handle=%#x\n",
++ bsg_job, sp, sp->handle);
++ do_bsg_done = false;
++ }
++ break;
++ default:
++ ql_log(ql_log_info, vha, 0x704f,
++ "bsg abort fail. bsg=%p sp=%p rval=%x.\n",
++ bsg_job, sp, rval);
++
++ do_bsg_done = true;
++ break;
++ }
++
++ if (!do_bsg_done)
++ return true;
++
++ spin_lock_irqsave(qpair->qp_lock_ptr, flags);
++ /*
++ * recheck to make sure it's still the same bsg_job due to
++ * qp_lock_ptr was released earlier.
++ */
++ if (req->outstanding_cmds[cnt] &&
++ req->outstanding_cmds[cnt]->u.bsg_job != bsg_job) {
++ /* fw had returned the bsg */
++ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
++ return true;
++ }
++ req->outstanding_cmds[cnt] = NULL;
++ spin_unlock_irqrestore(qpair->qp_lock_ptr, flags);
++
++ /* ref: INIT */
++ sp->comp = NULL;
++ kref_put(&sp->cmd_kref, qla2x00_sp_release);
++ bsg_reply->result = -ENXIO;
++ bsg_reply->reply_payload_rcv_len = 0;
++
++ ql_dbg(ql_dbg_user, vha, 0x7051,
++ "%s bsg_job_done : bsg %p result %#x sp %p.\n",
++ __func__, bsg_job, bsg_reply->result, sp);
++
++ bsg_job_done(bsg_job, bsg_reply->result, bsg_reply->reply_payload_rcv_len);
++
++ return true;
+ }
+
+ int
+ qla24xx_bsg_timeout(struct bsg_job *bsg_job)
+ {
+- struct fc_bsg_reply *bsg_reply = bsg_job->reply;
++ struct fc_bsg_request *bsg_request = bsg_job->request;
+ scsi_qla_host_t *vha = shost_priv(fc_bsg_to_shost(bsg_job));
+ struct qla_hw_data *ha = vha->hw;
+ int i;
+ struct qla_qpair *qpair;
+
+- ql_log(ql_log_info, vha, 0x708b, "%s CMD timeout. bsg ptr %p.\n",
+- __func__, bsg_job);
++ ql_log(ql_log_info, vha, 0x708b,
++ "%s CMD timeout. bsg ptr %p msgcode %x vendor cmd %x\n",
++ __func__, bsg_job, bsg_request->msgcode,
++ bsg_request->rqst_data.h_vendor.vendor_cmd[0]);
+
+ if (qla2x00_isp_reg_stat(ha)) {
+ ql_log(ql_log_info, vha, 0x9007,
+@@ -3136,7 +3197,6 @@ qla24xx_bsg_timeout(struct bsg_job *bsg_job)
+ }
+
+ ql_log(ql_log_info, vha, 0x708b, "SRB not found to abort.\n");
+- bsg_reply->result = -ENXIO;
+
+ done:
+ return 0;
+diff --git a/drivers/scsi/qla2xxx/qla_mid.c b/drivers/scsi/qla2xxx/qla_mid.c
+index 76703f2706b8e3..79879c4743e6dc 100644
+--- a/drivers/scsi/qla2xxx/qla_mid.c
++++ b/drivers/scsi/qla2xxx/qla_mid.c
+@@ -506,6 +506,7 @@ qla24xx_create_vhost(struct fc_vport *fc_vport)
+ return(NULL);
+ }
+
++ vha->irq_offset = QLA_BASE_VECTORS;
+ host = vha->host;
+ fc_vport->dd_data = vha;
+ /* New host info */
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 7f980e6141c282..7ab717ed72327e 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -6902,12 +6902,15 @@ qla2x00_do_dpc(void *data)
+ set_user_nice(current, MIN_NICE);
+
+ set_current_state(TASK_INTERRUPTIBLE);
+- while (!kthread_should_stop()) {
++ while (1) {
+ ql_dbg(ql_dbg_dpc, base_vha, 0x4000,
+ "DPC handler sleeping.\n");
+
+ schedule();
+
++ if (kthread_should_stop())
++ break;
++
+ if (test_and_clear_bit(DO_EEH_RECOVERY, &base_vha->dpc_flags))
+ qla_pci_set_eeh_busy(base_vha);
+
+@@ -6920,15 +6923,16 @@ qla2x00_do_dpc(void *data)
+ goto end_loop;
+ }
+
++ if (test_bit(UNLOADING, &base_vha->dpc_flags))
++ /* don't do any work. Wait to be terminated by kthread_stop */
++ goto end_loop;
++
+ ha->dpc_active = 1;
+
+ ql_dbg(ql_dbg_dpc + ql_dbg_verbose, base_vha, 0x4001,
+ "DPC handler waking up, dpc_flags=0x%lx.\n",
+ base_vha->dpc_flags);
+
+- if (test_bit(UNLOADING, &base_vha->dpc_flags))
+- break;
+-
+ if (IS_P3P_TYPE(ha)) {
+ if (IS_QLA8044(ha)) {
+ if (test_and_clear_bit(ISP_UNRECOVERABLE,
+@@ -7241,9 +7245,6 @@ qla2x00_do_dpc(void *data)
+ */
+ ha->dpc_active = 0;
+
+- /* Cleanup any residual CTX SRBs. */
+- qla2x00_abort_all_cmds(base_vha, DID_NO_CONNECT << 16);
+-
+ return 0;
+ }
+
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index b52513eeeafa75..680ba180a67252 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -6447,7 +6447,7 @@ static int schedule_resp(struct scsi_cmnd *cmnd, struct sdebug_dev_info *devip,
+ }
+ sd_dp = &sqcp->sd_dp;
+
+- if (polled)
++ if (polled || (ndelay > 0 && ndelay < INCLUSIVE_TIMING_MAX_NS))
+ ns_from_boot = ktime_get_boottime_ns();
+
+ /* one of the resp_*() response functions is called here */
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index 84334ab39c8107..94127868bedf8a 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -386,7 +386,6 @@ sg_release(struct inode *inode, struct file *filp)
+ SCSI_LOG_TIMEOUT(3, sg_printk(KERN_INFO, sdp, "sg_release\n"));
+
+ mutex_lock(&sdp->open_rel_lock);
+- kref_put(&sfp->f_ref, sg_remove_sfp);
+ sdp->open_cnt--;
+
+ /* possibly many open()s waiting on exlude clearing, start many;
+@@ -398,6 +397,7 @@ sg_release(struct inode *inode, struct file *filp)
+ wake_up_interruptible(&sdp->open_wait);
+ }
+ mutex_unlock(&sdp->open_rel_lock);
++ kref_put(&sfp->f_ref, sg_remove_sfp);
+ return 0;
+ }
+
+diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
+index beb88f25dbb993..c9038284bc893d 100644
+--- a/drivers/scsi/st.c
++++ b/drivers/scsi/st.c
+@@ -3506,6 +3506,7 @@ static long st_ioctl(struct file *file, unsigned int cmd_in, unsigned long arg)
+ int i, cmd_nr, cmd_type, bt;
+ int retval = 0;
+ unsigned int blk;
++ bool cmd_mtiocget;
+ struct scsi_tape *STp = file->private_data;
+ struct st_modedef *STm;
+ struct st_partstat *STps;
+@@ -3619,6 +3620,7 @@ static long st_ioctl(struct file *file, unsigned int cmd_in, unsigned long arg)
+ */
+ if (mtc.mt_op != MTREW &&
+ mtc.mt_op != MTOFFL &&
++ mtc.mt_op != MTLOAD &&
+ mtc.mt_op != MTRETEN &&
+ mtc.mt_op != MTERASE &&
+ mtc.mt_op != MTSEEK &&
+@@ -3732,17 +3734,28 @@ static long st_ioctl(struct file *file, unsigned int cmd_in, unsigned long arg)
+ goto out;
+ }
+
++ cmd_mtiocget = cmd_type == _IOC_TYPE(MTIOCGET) && cmd_nr == _IOC_NR(MTIOCGET);
++
+ if ((i = flush_buffer(STp, 0)) < 0) {
+- retval = i;
+- goto out;
+- }
+- if (STp->can_partitions &&
+- (i = switch_partition(STp)) < 0) {
+- retval = i;
+- goto out;
++ if (cmd_mtiocget && STp->pos_unknown) {
++ /* flush fails -> modify status accordingly */
++ reset_state(STp);
++ STp->pos_unknown = 1;
++ } else { /* return error */
++ retval = i;
++ goto out;
++ }
++ } else { /* flush_buffer succeeds */
++ if (STp->can_partitions) {
++ i = switch_partition(STp);
++ if (i < 0) {
++ retval = i;
++ goto out;
++ }
++ }
+ }
+
+- if (cmd_type == _IOC_TYPE(MTIOCGET) && cmd_nr == _IOC_NR(MTIOCGET)) {
++ if (cmd_mtiocget) {
+ struct mtget mt_status;
+
+ if (_IOC_SIZE(cmd_in) != sizeof(struct mtget)) {
+@@ -3756,7 +3769,7 @@ static long st_ioctl(struct file *file, unsigned int cmd_in, unsigned long arg)
+ ((STp->density << MT_ST_DENSITY_SHIFT) & MT_ST_DENSITY_MASK);
+ mt_status.mt_blkno = STps->drv_block;
+ mt_status.mt_fileno = STps->drv_file;
+- if (STp->block_size != 0) {
++ if (STp->block_size != 0 && mt_status.mt_blkno >= 0) {
+ if (STps->rw == ST_WRITING)
+ mt_status.mt_blkno +=
+ (STp->buffer)->buffer_bytes / STp->block_size;
+diff --git a/drivers/soc/imx/soc-imx8m.c b/drivers/soc/imx/soc-imx8m.c
+index fe111bae38c8e1..5ea8887828c064 100644
+--- a/drivers/soc/imx/soc-imx8m.c
++++ b/drivers/soc/imx/soc-imx8m.c
+@@ -30,7 +30,7 @@
+
+ struct imx8_soc_data {
+ char *name;
+- u32 (*soc_revision)(void);
++ int (*soc_revision)(u32 *socrev);
+ };
+
+ static u64 soc_uid;
+@@ -51,24 +51,29 @@ static u32 imx8mq_soc_revision_from_atf(void)
+ static inline u32 imx8mq_soc_revision_from_atf(void) { return 0; };
+ #endif
+
+-static u32 __init imx8mq_soc_revision(void)
++static int imx8mq_soc_revision(u32 *socrev)
+ {
+ struct device_node *np;
+ void __iomem *ocotp_base;
+ u32 magic;
+ u32 rev;
+ struct clk *clk;
++ int ret;
+
+ np = of_find_compatible_node(NULL, NULL, "fsl,imx8mq-ocotp");
+ if (!np)
+- return 0;
++ return -EINVAL;
+
+ ocotp_base = of_iomap(np, 0);
+- WARN_ON(!ocotp_base);
++ if (!ocotp_base) {
++ ret = -EINVAL;
++ goto err_iomap;
++ }
++
+ clk = of_clk_get_by_name(np, NULL);
+ if (IS_ERR(clk)) {
+- WARN_ON(IS_ERR(clk));
+- return 0;
++ ret = PTR_ERR(clk);
++ goto err_clk;
+ }
+
+ clk_prepare_enable(clk);
+@@ -88,32 +93,45 @@ static u32 __init imx8mq_soc_revision(void)
+ soc_uid <<= 32;
+ soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW);
+
++ *socrev = rev;
++
+ clk_disable_unprepare(clk);
+ clk_put(clk);
+ iounmap(ocotp_base);
+ of_node_put(np);
+
+- return rev;
++ return 0;
++
++err_clk:
++ iounmap(ocotp_base);
++err_iomap:
++ of_node_put(np);
++ return ret;
+ }
+
+-static void __init imx8mm_soc_uid(void)
++static int imx8mm_soc_uid(void)
+ {
+ void __iomem *ocotp_base;
+ struct device_node *np;
+ struct clk *clk;
++ int ret = 0;
+ u32 offset = of_machine_is_compatible("fsl,imx8mp") ?
+ IMX8MP_OCOTP_UID_OFFSET : 0;
+
+ np = of_find_compatible_node(NULL, NULL, "fsl,imx8mm-ocotp");
+ if (!np)
+- return;
++ return -EINVAL;
+
+ ocotp_base = of_iomap(np, 0);
+- WARN_ON(!ocotp_base);
++ if (!ocotp_base) {
++ ret = -EINVAL;
++ goto err_iomap;
++ }
++
+ clk = of_clk_get_by_name(np, NULL);
+ if (IS_ERR(clk)) {
+- WARN_ON(IS_ERR(clk));
+- return;
++ ret = PTR_ERR(clk);
++ goto err_clk;
+ }
+
+ clk_prepare_enable(clk);
+@@ -124,31 +142,41 @@ static void __init imx8mm_soc_uid(void)
+
+ clk_disable_unprepare(clk);
+ clk_put(clk);
++
++err_clk:
+ iounmap(ocotp_base);
++err_iomap:
+ of_node_put(np);
++
++ return ret;
+ }
+
+-static u32 __init imx8mm_soc_revision(void)
++static int imx8mm_soc_revision(u32 *socrev)
+ {
+ struct device_node *np;
+ void __iomem *anatop_base;
+- u32 rev;
++ int ret;
+
+ np = of_find_compatible_node(NULL, NULL, "fsl,imx8mm-anatop");
+ if (!np)
+- return 0;
++ return -EINVAL;
+
+ anatop_base = of_iomap(np, 0);
+- WARN_ON(!anatop_base);
++ if (!anatop_base) {
++ ret = -EINVAL;
++ goto err_iomap;
++ }
+
+- rev = readl_relaxed(anatop_base + ANADIG_DIGPROG_IMX8MM);
++ *socrev = readl_relaxed(anatop_base + ANADIG_DIGPROG_IMX8MM);
+
+ iounmap(anatop_base);
+ of_node_put(np);
+
+- imx8mm_soc_uid();
++ return imx8mm_soc_uid();
+
+- return rev;
++err_iomap:
++ of_node_put(np);
++ return ret;
+ }
+
+ static const struct imx8_soc_data imx8mq_soc_data = {
+@@ -184,7 +212,7 @@ static __maybe_unused const struct of_device_id imx8_soc_match[] = {
+ kasprintf(GFP_KERNEL, "%d.%d", (soc_rev >> 4) & 0xf, soc_rev & 0xf) : \
+ "unknown"
+
+-static int __init imx8_soc_init(void)
++static int imx8m_soc_probe(struct platform_device *pdev)
+ {
+ struct soc_device_attribute *soc_dev_attr;
+ struct soc_device *soc_dev;
+@@ -212,8 +240,11 @@ static int __init imx8_soc_init(void)
+ data = id->data;
+ if (data) {
+ soc_dev_attr->soc_id = data->name;
+- if (data->soc_revision)
+- soc_rev = data->soc_revision();
++ if (data->soc_revision) {
++ ret = data->soc_revision(&soc_rev);
++ if (ret)
++ goto free_soc;
++ }
+ }
+
+ soc_dev_attr->revision = imx8_revision(soc_rev);
+@@ -251,6 +282,38 @@ static int __init imx8_soc_init(void)
+ kfree(soc_dev_attr);
+ return ret;
+ }
++
++static struct platform_driver imx8m_soc_driver = {
++ .probe = imx8m_soc_probe,
++ .driver = {
++ .name = "imx8m-soc",
++ },
++};
++
++static int __init imx8_soc_init(void)
++{
++ struct platform_device *pdev;
++ int ret;
++
++ /* No match means this is non-i.MX8M hardware, do nothing. */
++ if (!of_match_node(imx8_soc_match, of_root))
++ return 0;
++
++ ret = platform_driver_register(&imx8m_soc_driver);
++ if (ret) {
++ pr_err("Failed to register imx8m-soc platform driver: %d\n", ret);
++ return ret;
++ }
++
++ pdev = platform_device_register_simple("imx8m-soc", -1, NULL, 0);
++ if (IS_ERR(pdev)) {
++ pr_err("Failed to register imx8m-soc platform device: %ld\n", PTR_ERR(pdev));
++ platform_driver_unregister(&imx8m_soc_driver);
++ return PTR_ERR(pdev);
++ }
++
++ return 0;
++}
+ device_initcall(imx8_soc_init);
+ MODULE_DESCRIPTION("NXP i.MX8M SoC driver");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index 28bcc65e91beb3..a470285f54a875 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -153,325 +153,2431 @@ enum llcc_reg_offset {
+ };
+
+ static const struct llcc_slice_config sa8775p_data[] = {
+- {LLCC_CPUSS, 1, 2048, 1, 0, 0x00FF, 0x0, 0, 0, 0, 1, 1, 0, 0},
+- {LLCC_VIDSC0, 2, 512, 3, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_CPUSS1, 3, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_CPUHWT, 5, 512, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_AUDIO, 6, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CMPT, 10, 4096, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_GPUHTW, 11, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_GPU, 12, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 1, 0},
+- {LLCC_MMUHWT, 13, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 0, 1, 0, 0},
+- {LLCC_CMPTDMA, 15, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_DISP, 16, 4096, 2, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_VIDFW, 17, 3072, 1, 0, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_AUDHW, 22, 1024, 1, 1, 0x00FF, 0x0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CVP, 28, 256, 3, 1, 0x00FF, 0x0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0xF0, 1, 0, 0, 1, 0, 0, 0},
+- {LLCC_WRCACHE, 31, 512, 1, 1, 0x00FF, 0x0, 0, 0, 0, 0, 1, 0, 0},
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 2048,
++ .priority = 1,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CPUSS1,
++ .slice_id = 3,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CPUHWT,
++ .slice_id = 5,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 4096,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CMPTDMA,
++ .slice_id = 15,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 4096,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDFW,
++ .slice_id = 17,
++ .max_cap = 3072,
++ .priority = 1,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 28,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0xf0,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+ static const struct llcc_slice_config sc7180_data[] = {
+- { LLCC_CPUSS, 1, 256, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 1 },
+- { LLCC_MDM, 8, 128, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPUHTW, 11, 128, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPU, 12, 128, 1, 0, 0xf, 0x0, 0, 0, 0, 1, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 256,
++ .priority = 1,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MDM,
++ .slice_id = 8,
++ .max_cap = 128,
++ .priority = 1,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 128,
++ .priority = 1,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 128,
++ .priority = 1,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ },
+ };
+
+ static const struct llcc_slice_config sc7280_data[] = {
+- { LLCC_CPUSS, 1, 768, 1, 0, 0x3f, 0x0, 0, 0, 0, 1, 1, 0},
+- { LLCC_MDMHPGRW, 7, 512, 2, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+- { LLCC_CMPT, 10, 768, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+- { LLCC_GPUHTW, 11, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+- { LLCC_GPU, 12, 512, 1, 0, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+- { LLCC_MMUHWT, 13, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 0, 1, 0},
+- { LLCC_MDMPNG, 21, 768, 0, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+- { LLCC_WLHW, 24, 256, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
+- { LLCC_MODPE, 29, 64, 1, 1, 0x3f, 0x0, 0, 0, 0, 1, 0, 0},
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 768,
++ .priority = 1,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 768,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 768,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WLHW,
++ .slice_id = 24,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 64,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3f,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ },
+ };
+
+ static const struct llcc_slice_config sc8180x_data[] = {
+- { LLCC_CPUSS, 1, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 1 },
+- { LLCC_VIDSC0, 2, 512, 2, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_VIDSC1, 3, 512, 2, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_AUDIO, 6, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMHPGRW, 7, 3072, 1, 1, 0x3ff, 0xc00, 0, 0, 0, 1, 0 },
+- { LLCC_MDM, 8, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MODHW, 9, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_CMPT, 10, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPUHTW, 11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPU, 12, 5120, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1 },
+- { LLCC_CMPTDMA, 15, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_DISP, 16, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_VIDFW, 17, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMHPFX, 20, 1024, 2, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMPNG, 21, 1024, 0, 1, 0xc, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_AUDHW, 22, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_NPU, 23, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_WLHW, 24, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MODPE, 29, 512, 1, 1, 0xc, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_APTCM, 30, 512, 3, 1, 0x0, 0x1, 1, 0, 0, 1, 0 },
+- { LLCC_WRCACHE, 31, 128, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDSC1,
++ .slice_id = 3,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3ff,
++ .res_ways = 0xc00,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDM,
++ .slice_id = 8,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 5120,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CMPTDMA,
++ .slice_id = 15,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDFW,
++ .slice_id = 17,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPFX,
++ .slice_id = 20,
++ .max_cap = 1024,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0xc,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_NPU,
++ .slice_id = 23,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WLHW,
++ .slice_id = 24,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xc,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0x1,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 128,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ },
+ };
+
+ static const struct llcc_slice_config sc8280xp_data[] = {
+- { LLCC_CPUSS, 1, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 1, 0 },
+- { LLCC_VIDSC0, 2, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_AUDIO, 6, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 },
+- { LLCC_CMPT, 10, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 },
+- { LLCC_GPUHTW, 11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_GPU, 12, 4096, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 1 },
+- { LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_DISP, 16, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_AUDHW, 22, 2048, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_ECC, 26, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_CVP, 28, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0x1, 1, 0, 0, 1, 0, 0 },
+- { LLCC_WRCACHE, 31, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_CVPFW, 17, 512, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_CPUSS1, 3, 2048, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_CPUHWT, 5, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 4096,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 2048,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_ECC,
++ .slice_id = 26,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 28,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0x1,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CVPFW,
++ .slice_id = 17,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CPUSS1,
++ .slice_id = 3,
++ .max_cap = 2048,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CPUHWT,
++ .slice_id = 5,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+-static const struct llcc_slice_config sdm845_data[] = {
+- { LLCC_CPUSS, 1, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 1 },
+- { LLCC_VIDSC0, 2, 512, 2, 1, 0x0, 0x0f0, 0, 0, 1, 1, 0 },
+- { LLCC_VIDSC1, 3, 512, 2, 1, 0x0, 0x0f0, 0, 0, 1, 1, 0 },
+- { LLCC_ROTATOR, 4, 563, 2, 1, 0x0, 0x00e, 2, 0, 1, 1, 0 },
+- { LLCC_VOICE, 5, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_AUDIO, 6, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_MDMHPGRW, 7, 1024, 2, 0, 0xfc, 0xf00, 0, 0, 1, 1, 0 },
+- { LLCC_MDM, 8, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_CMPT, 10, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_GPUHTW, 11, 512, 1, 1, 0xc, 0x0, 0, 0, 1, 1, 0 },
+- { LLCC_GPU, 12, 2304, 1, 0, 0xff0, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_MMUHWT, 13, 256, 2, 0, 0x0, 0x1, 0, 0, 1, 0, 1 },
+- { LLCC_CMPTDMA, 15, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_DISP, 16, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_VIDFW, 17, 2816, 1, 0, 0xffc, 0x2, 0, 0, 1, 1, 0 },
+- { LLCC_MDMHPFX, 20, 1024, 2, 1, 0x0, 0xf00, 0, 0, 1, 1, 0 },
+- { LLCC_MDMPNG, 21, 1024, 0, 1, 0x1e, 0x0, 0, 0, 1, 1, 0 },
+- { LLCC_AUDHW, 22, 1024, 1, 1, 0xffc, 0x2, 0, 0, 1, 1, 0 },
++static const struct llcc_slice_config sdm845_data[] = {{
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .res_ways = 0xf0,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDSC1,
++ .slice_id = 3,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .res_ways = 0xf0,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_ROTATOR,
++ .slice_id = 4,
++ .max_cap = 563,
++ .priority = 2,
++ .fixed_size = true,
++ .res_ways = 0xe,
++ .cache_mode = 2,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VOICE,
++ .slice_id = 5,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 1024,
++ .priority = 2,
++ .bonus_ways = 0xfc,
++ .res_ways = 0xf00,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDM,
++ .slice_id = 8,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xc,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 2304,
++ .priority = 1,
++ .bonus_ways = 0xff0,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 256,
++ .priority = 2,
++ .res_ways = 0x1,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CMPTDMA,
++ .slice_id = 15,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDFW,
++ .slice_id = 17,
++ .max_cap = 2816,
++ .priority = 1,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPFX,
++ .slice_id = 20,
++ .max_cap = 1024,
++ .priority = 2,
++ .fixed_size = true,
++ .res_ways = 0xf00,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0x1e,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .res_ways = 0x2,
++ .cache_mode = 0,
++ .dis_cap_alloc = true,
++ .retain_on_pc = true,
++ },
+ };
+
+ static const struct llcc_slice_config sm6350_data[] = {
+- { LLCC_CPUSS, 1, 768, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 1 },
+- { LLCC_MDM, 8, 512, 2, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_GPUHTW, 11, 256, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_GPU, 12, 512, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMPNG, 21, 768, 0, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_NPU, 23, 768, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_MODPE, 29, 64, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 768,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_MDM,
++ .slice_id = 8,
++ .max_cap = 512,
++ .priority = 2,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 256,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 768,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_NPU,
++ .slice_id = 23,
++ .max_cap = 768,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 64,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+ static const struct llcc_slice_config sm7150_data[] = {
+- { LLCC_CPUSS, 1, 512, 1, 0, 0xF, 0x0, 0, 0, 0, 1, 1 },
+- { LLCC_MDM, 8, 128, 2, 0, 0xF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPUHTW, 11, 256, 1, 1, 0xF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPU, 12, 256, 1, 1, 0xF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_NPU, 23, 512, 1, 0, 0xF, 0x0, 0, 0, 0, 1, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MDM,
++ .slice_id = 8,
++ .max_cap = 128,
++ .priority = 2,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_NPU,
++ .slice_id = 23,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ },
+ };
+
+ static const struct llcc_slice_config sm8150_data[] = {
+- { LLCC_CPUSS, 1, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 1 },
+- { LLCC_VIDSC0, 2, 512, 2, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_VIDSC1, 3, 512, 2, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_AUDIO, 6, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMHPGRW, 7, 3072, 1, 0, 0xFF, 0xF00, 0, 0, 0, 1, 0 },
+- { LLCC_MDM, 8, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MODHW, 9, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_CMPT, 10, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPUHTW , 11, 512, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_GPU, 12, 2560, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MMUHWT, 13, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1 },
+- { LLCC_CMPTDMA, 15, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_DISP, 16, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMHPFX, 20, 1024, 2, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMHPFX, 21, 1024, 0, 1, 0xF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_AUDHW, 22, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_NPU, 23, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_WLHW, 24, 3072, 1, 1, 0xFFF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_MODPE, 29, 256, 1, 1, 0xF, 0x0, 0, 0, 0, 1, 0 },
+- { LLCC_APTCM, 30, 256, 3, 1, 0x0, 0x1, 1, 0, 0, 1, 0 },
+- { LLCC_WRCACHE, 31, 128, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDSC1,
++ .slice_id = 3,
++ .max_cap = 512,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 3072,
++ .priority = 1,
++ .bonus_ways = 0xff,
++ .res_ways = 0xf00,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDM,
++ .slice_id = 8,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 2560,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CMPTDMA,
++ .slice_id = 15,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPFX,
++ .slice_id = 20,
++ .max_cap = 1024,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMHPFX,
++ .slice_id = 21,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_NPU,
++ .slice_id = 23,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WLHW,
++ .slice_id = 24,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0x1,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 128,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ },
+ };
+
+ static const struct llcc_slice_config sm8250_data[] = {
+- { LLCC_CPUSS, 1, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 1, 0 },
+- { LLCC_VIDSC0, 2, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_AUDIO, 6, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 },
+- { LLCC_CMPT, 10, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 },
+- { LLCC_GPUHTW, 11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_GPU, 12, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 1 },
+- { LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_CMPTDMA, 15, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_DISP, 16, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_VIDFW, 17, 512, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_AUDHW, 22, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_NPU, 23, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_WLHW, 24, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_CVP, 28, 256, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_APTCM, 30, 128, 3, 0, 0x0, 0x3, 1, 0, 0, 1, 0, 0 },
+- { LLCC_WRCACHE, 31, 256, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 1024,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 1024,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CMPTDMA,
++ .slice_id = 15,
++ .max_cap = 1024,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_VIDFW,
++ .slice_id = 17,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_NPU,
++ .slice_id = 23,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WLHW,
++ .slice_id = 24,
++ .max_cap = 1024,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 28,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 128,
++ .priority = 3,
++ .res_ways = 0x3,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+ static const struct llcc_slice_config sm8350_data[] = {
+- { LLCC_CPUSS, 1, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 1 },
+- { LLCC_VIDSC0, 2, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_AUDIO, 6, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 0 },
+- { LLCC_MDMHPGRW, 7, 1024, 3, 0, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_MODHW, 9, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_CMPT, 10, 3072, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_GPUHTW, 11, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_GPU, 12, 1024, 1, 0, 0xfff, 0x0, 0, 0, 0, 1, 1, 0 },
+- { LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 1 },
+- { LLCC_DISP, 16, 3072, 2, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_MDMPNG, 21, 1024, 0, 1, 0xf, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_AUDHW, 22, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_CVP, 28, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_MODPE, 29, 256, 1, 1, 0xf, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0x1, 1, 0, 0, 0, 1, 0 },
+- { LLCC_WRCACHE, 31, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 1 },
+- { LLCC_CVPFW, 17, 512, 1, 0, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_CPUSS1, 3, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
+- { LLCC_CPUHWT, 5, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 0, 1 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 1024,
++ .priority = 3,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 1024,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 3072,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 28,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0x1,
++ .cache_mode = 1,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_CVPFW,
++ .slice_id = 17,
++ .max_cap = 512,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CPUSS1,
++ .slice_id = 3,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CPUHWT,
++ .slice_id = 5,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .write_scid_en = true,
++ },
+ };
+
+ static const struct llcc_slice_config sm8450_data[] = {
+- {LLCC_CPUSS, 1, 3072, 1, 0, 0xFFFF, 0x0, 0, 0, 0, 1, 1, 0, 0 },
+- {LLCC_VIDSC0, 2, 512, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_AUDIO, 6, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0 },
+- {LLCC_MDMHPGRW, 7, 1024, 3, 0, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_MODHW, 9, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_CMPT, 10, 4096, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_GPUHTW, 11, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_GPU, 12, 2048, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 1, 0 },
+- {LLCC_MMUHWT, 13, 768, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+- {LLCC_DISP, 16, 4096, 2, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_MDMPNG, 21, 1024, 1, 1, 0xF000, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_AUDHW, 22, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0 },
+- {LLCC_CVP, 28, 256, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_MODPE, 29, 64, 1, 1, 0xF000, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0xF0, 1, 0, 0, 1, 0, 0, 0 },
+- {LLCC_WRCACHE, 31, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+- {LLCC_CVPFW, 17, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_CPUSS1, 3, 1024, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_CAMEXP0, 4, 256, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_CPUMTE, 23, 256, 1, 1, 0x0FFF, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+- {LLCC_CPUHWT, 5, 512, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 1, 0, 0 },
+- {LLCC_CAMEXP1, 27, 256, 3, 1, 0xFFFF, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- {LLCC_AENPU, 8, 2048, 1, 1, 0xFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0 },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 3072,
++ .priority = 1,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 1024,
++ .priority = 3,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 4096,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 12,
++ .max_cap = 2048,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 13,
++ .max_cap = 768,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 4096,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf000,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 28,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 64,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf000,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0xf0,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CVPFW,
++ .slice_id = 17,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CPUSS1,
++ .slice_id = 3,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CAMEXP0,
++ .slice_id = 4,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_CPUMTE,
++ .slice_id = 23,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CPUHWT,
++ .slice_id = 5,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CAMEXP1,
++ .slice_id = 27,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_AENPU,
++ .slice_id = 8,
++ .max_cap = 2048,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffff,
++ .cache_mode = 0,
++ },
+ };
+
+ static const struct llcc_slice_config sm8550_data[] = {
+- {LLCC_CPUSS, 1, 5120, 1, 0, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_VIDSC0, 2, 512, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_AUDIO, 6, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_MDMHPGRW, 25, 1024, 4, 0, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_MODHW, 26, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CMPT, 10, 4096, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_GPUHTW, 11, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_GPU, 9, 3096, 1, 0, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_MMUHWT, 18, 768, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_DISP, 16, 6144, 1, 1, 0xFFFFFF, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_MDMPNG, 27, 1024, 0, 1, 0xF00000, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_AUDHW, 22, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CVP, 8, 256, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_MODPE, 29, 64, 1, 1, 0xF00000, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, },
+- {LLCC_WRCACHE, 31, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CAMEXP0, 4, 256, 4, 1, 0xF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CPUHWT, 5, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CAMEXP1, 7, 3200, 3, 1, 0xFFFFF0, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CMPTHCP, 17, 256, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_LCPDARE, 30, 128, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, },
+- {LLCC_AENPU, 3, 3072, 1, 1, 0xFE01FF, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_ISLAND1, 12, 1792, 7, 1, 0xFE00, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_ISLAND4, 15, 256, 7, 1, 0x10000, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CAMEXP2, 19, 3200, 3, 1, 0xFFFFF0, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CAMEXP3, 20, 3200, 2, 1, 0xFFFFF0, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_CAMEXP4, 21, 3200, 2, 1, 0xFFFFF0, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_DISP_WB, 23, 1024, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_DISP_1, 24, 6144, 1, 1, 0xFFFFFF, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
+- {LLCC_VIDVSP, 28, 256, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 5120,
++ .priority = 1,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .write_scid_en = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 25,
++ .max_cap = 1024,
++ .priority = 4,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 26,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 4096,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 9,
++ .max_cap = 3096,
++ .priority = 1,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .write_scid_en = true,
++ .write_scid_cacheable_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 18,
++ .max_cap = 768,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 27,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0xf00000,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 8,
++ .max_cap = 256,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 64,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf00000,
++ .cache_mode = 0,
++ .alloc_oneway_en = true,
++ .vict_prio = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CAMEXP0,
++ .slice_id = 4,
++ .max_cap = 256,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CPUHWT,
++ .slice_id = 5,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CAMEXP1,
++ .slice_id = 7,
++ .max_cap = 3200,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfffff0,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_CMPTHCP,
++ .slice_id = 17,
++ .max_cap = 256,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_LCPDARE,
++ .slice_id = 30,
++ .max_cap = 128,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .alloc_oneway_en = true,
++ .vict_prio = true,
++ }, {
++ .usecase_id = LLCC_AENPU,
++ .slice_id = 3,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfe01ff,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_ISLAND1,
++ .slice_id = 12,
++ .max_cap = 1792,
++ .priority = 7,
++ .fixed_size = true,
++ .bonus_ways = 0xfe00,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_ISLAND4,
++ .slice_id = 15,
++ .max_cap = 256,
++ .priority = 7,
++ .fixed_size = true,
++ .bonus_ways = 0x10000,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CAMEXP2,
++ .slice_id = 19,
++ .max_cap = 3200,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfffff0,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_CAMEXP3,
++ .slice_id = 20,
++ .max_cap = 3200,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfffff0,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_CAMEXP4,
++ .slice_id = 21,
++ .max_cap = 3200,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xfffff0,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_DISP_WB,
++ .slice_id = 23,
++ .max_cap = 1024,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_DISP_1,
++ .slice_id = 24,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_VIDVSP,
++ .slice_id = 28,
++ .max_cap = 256,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ },
+ };
+
+ static const struct llcc_slice_config sm8650_data[] = {
+- {LLCC_CPUSS, 1, 5120, 1, 0, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_VIDSC0, 2, 512, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_AUDIO, 6, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MDMHPGRW, 25, 1024, 3, 0, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MODHW, 26, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CMPT, 10, 4096, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_GPUHTW, 11, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_GPU, 9, 3096, 1, 0, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MMUHWT, 18, 768, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_DISP, 16, 6144, 1, 1, 0xFFFFFF, 0x0, 2, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MDMHPFX, 24, 1024, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MDMPNG, 27, 1024, 0, 1, 0x000000, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_AUDHW, 22, 1024, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CVP, 8, 256, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MODPE, 29, 128, 1, 1, 0xF00000, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_WRCACHE, 31, 512, 1, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP0, 4, 256, 3, 1, 0xF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP1, 7, 3200, 3, 1, 0xFFFFF0, 0x0, 2, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CMPTHCP, 17, 256, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_LCPDARE, 30, 128, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_AENPU, 3, 3072, 1, 1, 0xFFFFFF, 0x0, 2, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_ISLAND1, 12, 5888, 7, 1, 0x0, 0x7FFFFF, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_DISP_WB, 23, 1024, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_VIDVSP, 28, 256, 3, 1, 0xFFFFFF, 0x0, 0, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 5120,
++ .priority = 1,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .stale_en = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 25,
++ .max_cap = 1024,
++ .priority = 3,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 26,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 4096,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 9,
++ .max_cap = 3096,
++ .priority = 1,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .write_scid_en = true,
++ .write_scid_cacheable_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 18,
++ .max_cap = 768,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_DISP,
++ .slice_id = 16,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_MDMHPFX,
++ .slice_id = 24,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 27,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 8,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 128,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xf00000,
++ .cache_mode = 0,
++ .alloc_oneway_en = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_CAMEXP0,
++ .slice_id = 4,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xf,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CAMEXP1,
++ .slice_id = 7,
++ .max_cap = 3200,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfffff0,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_CMPTHCP,
++ .slice_id = 17,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_LCPDARE,
++ .slice_id = 30,
++ .max_cap = 128,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .alloc_oneway_en = true,
++ }, {
++ .usecase_id = LLCC_AENPU,
++ .slice_id = 3,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_ISLAND1,
++ .slice_id = 12,
++ .max_cap = 5888,
++ .priority = 7,
++ .fixed_size = true,
++ .res_ways = 0x7fffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_DISP_WB,
++ .slice_id = 23,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_VIDVSP,
++ .slice_id = 28,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffffff,
++ .cache_mode = 0,
++ },
+ };
+
+ static const struct llcc_slice_config qdu1000_data_2ch[] = {
+- { LLCC_MDMHPGRW, 7, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_MODHW, 9, 256, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_MDMPNG, 21, 256, 0, 1, 0x3, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_ECC, 26, 512, 3, 1, 0xffc, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_MODPE, 29, 256, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_APTCM, 30, 256, 3, 1, 0x0, 0xc, 1, 0, 0, 1, 0, 0, 0 },
+- { LLCC_WRCACHE, 31, 128, 1, 1, 0x3, 0x0, 0, 0, 0, 0, 1, 0, 0 },
++ {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 256,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_ECC,
++ .slice_id = 26,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 256,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0xc,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 128,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+ static const struct llcc_slice_config qdu1000_data_4ch[] = {
+- { LLCC_MDMHPGRW, 7, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_MODHW, 9, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_MDMPNG, 21, 512, 0, 1, 0x3, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_ECC, 26, 1024, 3, 1, 0xffc, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_MODPE, 29, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_APTCM, 30, 512, 3, 1, 0x0, 0xc, 1, 0, 0, 1, 0, 0, 0 },
+- { LLCC_WRCACHE, 31, 256, 1, 1, 0x3, 0x0, 0, 0, 0, 0, 1, 0, 0 },
++ {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 512,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_ECC,
++ .slice_id = 26,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0xc,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 256,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+ static const struct llcc_slice_config qdu1000_data_8ch[] = {
+- { LLCC_MDMHPGRW, 7, 2048, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_MODHW, 9, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_MDMPNG, 21, 1024, 0, 1, 0x3, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_ECC, 26, 2048, 3, 1, 0xffc, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+- { LLCC_MODPE, 29, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+- { LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0xc, 1, 0, 0, 1, 0, 0, 0 },
+- { LLCC_WRCACHE, 31, 512, 1, 1, 0x3, 0x0, 0, 0, 0, 0, 1, 0, 0 },
++ {
++ .usecase_id = LLCC_MDMHPGRW,
++ .slice_id = 7,
++ .max_cap = 2048,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MODHW,
++ .slice_id = 9,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_MDMPNG,
++ .slice_id = 21,
++ .max_cap = 1024,
++ .priority = 0,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_ECC,
++ .slice_id = 26,
++ .max_cap = 2048,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_MODPE,
++ .slice_id = 29,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_APTCM,
++ .slice_id = 30,
++ .max_cap = 1024,
++ .priority = 3,
++ .fixed_size = true,
++ .res_ways = 0xc,
++ .cache_mode = 1,
++ .retain_on_pc = true,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ },
+ };
+
+ static const struct llcc_slice_config x1e80100_data[] = {
+- {LLCC_CPUSS, 1, 6144, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_VIDSC0, 2, 512, 4, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_AUDIO, 6, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CMPT, 10, 6144, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_GPUHTW, 11, 512, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_GPU, 9, 4608, 1, 0, 0xFFF, 0x0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_MMUHWT, 18, 512, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_AUDHW, 22, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CVP, 8, 512, 4, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_WRCACHE, 31, 1024, 1, 1, 0xFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP0, 4, 256, 4, 1, 0x3, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP1, 7, 3072, 3, 1, 0xFFC, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_LCPDARE, 30, 512, 3, 1, 0xFFF, 0x0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0},
+- {LLCC_AENPU, 3, 3072, 1, 1, 0xFFF, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_ISLAND1, 12, 2048, 7, 1, 0x0, 0xF, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP2, 19, 3072, 3, 1, 0xFFC, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP3, 20, 3072, 2, 1, 0xFFC, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
+- {LLCC_CAMEXP4, 21, 3072, 2, 1, 0xFFC, 0x0, 2, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0},
++ {
++ .usecase_id = LLCC_CPUSS,
++ .slice_id = 1,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_VIDSC0,
++ .slice_id = 2,
++ .max_cap = 512,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_AUDIO,
++ .slice_id = 6,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CMPT,
++ .slice_id = 10,
++ .max_cap = 6144,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPUHTW,
++ .slice_id = 11,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_GPU,
++ .slice_id = 9,
++ .max_cap = 4608,
++ .priority = 1,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .write_scid_en = true,
++ .write_scid_cacheable_en = true,
++ .stale_en = true,
++ }, {
++ .usecase_id = LLCC_MMUHWT,
++ .slice_id = 18,
++ .max_cap = 512,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ }, {
++ .usecase_id = LLCC_AUDHW,
++ .slice_id = 22,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CVP,
++ .slice_id = 8,
++ .max_cap = 512,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_WRCACHE,
++ .slice_id = 31,
++ .max_cap = 1024,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CAMEXP0,
++ .slice_id = 4,
++ .max_cap = 256,
++ .priority = 4,
++ .fixed_size = true,
++ .bonus_ways = 0x3,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CAMEXP1,
++ .slice_id = 7,
++ .max_cap = 3072,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_LCPDARE,
++ .slice_id = 30,
++ .max_cap = 512,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 0,
++ .activate_on_init = true,
++ .alloc_oneway_en = true,
++ }, {
++ .usecase_id = LLCC_AENPU,
++ .slice_id = 3,
++ .max_cap = 3072,
++ .priority = 1,
++ .fixed_size = true,
++ .bonus_ways = 0xfff,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_ISLAND1,
++ .slice_id = 12,
++ .max_cap = 2048,
++ .priority = 7,
++ .fixed_size = true,
++ .res_ways = 0xf,
++ .cache_mode = 0,
++ }, {
++ .usecase_id = LLCC_CAMEXP2,
++ .slice_id = 19,
++ .max_cap = 3072,
++ .priority = 3,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_CAMEXP3,
++ .slice_id = 20,
++ .max_cap = 3072,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 2,
++ }, {
++ .usecase_id = LLCC_CAMEXP4,
++ .slice_id = 21,
++ .max_cap = 3072,
++ .priority = 2,
++ .fixed_size = true,
++ .bonus_ways = 0xffc,
++ .cache_mode = 2,
++ },
+ };
+
+ static const struct llcc_edac_reg_offset llcc_v1_edac_reg_offset = {
+diff --git a/drivers/soc/qcom/qcom_pd_mapper.c b/drivers/soc/qcom/qcom_pd_mapper.c
+index c940f4da28ed5c..6e30f08761aa43 100644
+--- a/drivers/soc/qcom/qcom_pd_mapper.c
++++ b/drivers/soc/qcom/qcom_pd_mapper.c
+@@ -540,6 +540,7 @@ static const struct of_device_id qcom_pdm_domains[] __maybe_unused = {
+ { .compatible = "qcom,msm8996", .data = msm8996_domains, },
+ { .compatible = "qcom,msm8998", .data = msm8998_domains, },
+ { .compatible = "qcom,qcm2290", .data = qcm2290_domains, },
++ { .compatible = "qcom,qcm6490", .data = sc7280_domains, },
+ { .compatible = "qcom,qcs404", .data = qcs404_domains, },
+ { .compatible = "qcom,sc7180", .data = sc7180_domains, },
+ { .compatible = "qcom,sc7280", .data = sc7280_domains, },
+diff --git a/drivers/spi/spi-fsl-lpspi.c b/drivers/spi/spi-fsl-lpspi.c
+index 9573b8fa4fbfc6..29b9676fe43d89 100644
+--- a/drivers/spi/spi-fsl-lpspi.c
++++ b/drivers/spi/spi-fsl-lpspi.c
+@@ -315,9 +315,10 @@ static void fsl_lpspi_set_watermark(struct fsl_lpspi_data *fsl_lpspi)
+ static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi)
+ {
+ struct lpspi_config config = fsl_lpspi->config;
+- unsigned int perclk_rate, scldiv, div;
++ unsigned int perclk_rate, div;
+ u8 prescale_max;
+ u8 prescale;
++ int scldiv;
+
+ perclk_rate = clk_get_rate(fsl_lpspi->clk_per);
+ prescale_max = fsl_lpspi->devtype_data->prescale_max;
+@@ -338,13 +339,13 @@ static int fsl_lpspi_set_bitrate(struct fsl_lpspi_data *fsl_lpspi)
+
+ for (prescale = 0; prescale <= prescale_max; prescale++) {
+ scldiv = div / (1 << prescale) - 2;
+- if (scldiv < 256) {
++ if (scldiv >= 0 && scldiv < 256) {
+ fsl_lpspi->config.prescale = prescale;
+ break;
+ }
+ }
+
+- if (scldiv >= 256)
++ if (scldiv < 0 || scldiv >= 256)
+ return -EINVAL;
+
+ writel(scldiv | (scldiv << 8) | ((scldiv >> 1) << 16),
+diff --git a/drivers/spi/spi-mpc52xx.c b/drivers/spi/spi-mpc52xx.c
+index d5ac60c135c20a..159f359d7501aa 100644
+--- a/drivers/spi/spi-mpc52xx.c
++++ b/drivers/spi/spi-mpc52xx.c
+@@ -520,6 +520,7 @@ static void mpc52xx_spi_remove(struct platform_device *op)
+ struct mpc52xx_spi *ms = spi_controller_get_devdata(host);
+ int i;
+
++ cancel_work_sync(&ms->work);
+ free_irq(ms->irq0, ms);
+ free_irq(ms->irq1, ms);
+
+diff --git a/drivers/thermal/qcom/tsens-v1.c b/drivers/thermal/qcom/tsens-v1.c
+index dc1c4ae2d8b01b..1a7874676f68e4 100644
+--- a/drivers/thermal/qcom/tsens-v1.c
++++ b/drivers/thermal/qcom/tsens-v1.c
+@@ -162,28 +162,35 @@ struct tsens_plat_data data_tsens_v1 = {
+ .fields = tsens_v1_regfields,
+ };
+
+-static const struct tsens_ops ops_8956 = {
+- .init = init_8956,
++static const struct tsens_ops ops_common = {
++ .init = init_common,
+ .calibrate = tsens_calibrate_common,
+ .get_temp = get_temp_tsens_valid,
+ };
+
+-struct tsens_plat_data data_8956 = {
++struct tsens_plat_data data_8937 = {
+ .num_sensors = 11,
+- .ops = &ops_8956,
++ .ops = &ops_common,
+ .feat = &tsens_v1_feat,
+ .fields = tsens_v1_regfields,
+ };
+
+-static const struct tsens_ops ops_8976 = {
+- .init = init_common,
++static const struct tsens_ops ops_8956 = {
++ .init = init_8956,
+ .calibrate = tsens_calibrate_common,
+ .get_temp = get_temp_tsens_valid,
+ };
+
++struct tsens_plat_data data_8956 = {
++ .num_sensors = 11,
++ .ops = &ops_8956,
++ .feat = &tsens_v1_feat,
++ .fields = tsens_v1_regfields,
++};
++
+ struct tsens_plat_data data_8976 = {
+ .num_sensors = 11,
+- .ops = &ops_8976,
++ .ops = &ops_common,
+ .feat = &tsens_v1_feat,
+ .fields = tsens_v1_regfields,
+ };
+diff --git a/drivers/thermal/qcom/tsens.c b/drivers/thermal/qcom/tsens.c
+index 0b4421bf478544..d2db804692f01d 100644
+--- a/drivers/thermal/qcom/tsens.c
++++ b/drivers/thermal/qcom/tsens.c
+@@ -1119,6 +1119,9 @@ static const struct of_device_id tsens_table[] = {
+ }, {
+ .compatible = "qcom,msm8916-tsens",
+ .data = &data_8916,
++ }, {
++ .compatible = "qcom,msm8937-tsens",
++ .data = &data_8937,
+ }, {
+ .compatible = "qcom,msm8939-tsens",
+ .data = &data_8939,
+diff --git a/drivers/thermal/qcom/tsens.h b/drivers/thermal/qcom/tsens.h
+index cab39de045b100..7b36a0318fa6a0 100644
+--- a/drivers/thermal/qcom/tsens.h
++++ b/drivers/thermal/qcom/tsens.h
+@@ -647,7 +647,7 @@ extern struct tsens_plat_data data_8960;
+ extern struct tsens_plat_data data_8226, data_8909, data_8916, data_8939, data_8974, data_9607;
+
+ /* TSENS v1 targets */
+-extern struct tsens_plat_data data_tsens_v1, data_8976, data_8956;
++extern struct tsens_plat_data data_tsens_v1, data_8937, data_8976, data_8956;
+
+ /* TSENS v2 targets */
+ extern struct tsens_plat_data data_8996, data_ipq8074, data_tsens_v2;
+diff --git a/drivers/tty/serial/8250/8250_dw.c b/drivers/tty/serial/8250/8250_dw.c
+index ab9e7f20426025..51894c93c8a313 100644
+--- a/drivers/tty/serial/8250/8250_dw.c
++++ b/drivers/tty/serial/8250/8250_dw.c
+@@ -750,7 +750,7 @@ static const struct dw8250_platform_data dw8250_renesas_rzn1_data = {
+ .quirks = DW_UART_QUIRK_CPR_VALUE | DW_UART_QUIRK_IS_DMA_FC,
+ };
+
+-static const struct dw8250_platform_data dw8250_starfive_jh7100_data = {
++static const struct dw8250_platform_data dw8250_skip_set_rate_data = {
+ .usr_reg = DW_UART_USR,
+ .quirks = DW_UART_QUIRK_SKIP_SET_RATE,
+ };
+@@ -760,7 +760,8 @@ static const struct of_device_id dw8250_of_match[] = {
+ { .compatible = "cavium,octeon-3860-uart", .data = &dw8250_octeon_3860_data },
+ { .compatible = "marvell,armada-38x-uart", .data = &dw8250_armada_38x_data },
+ { .compatible = "renesas,rzn1-uart", .data = &dw8250_renesas_rzn1_data },
+- { .compatible = "starfive,jh7100-uart", .data = &dw8250_starfive_jh7100_data },
++ { .compatible = "sophgo,sg2044-uart", .data = &dw8250_skip_set_rate_data },
++ { .compatible = "starfive,jh7100-uart", .data = &dw8250_skip_set_rate_data },
+ { /* Sentinel */ }
+ };
+ MODULE_DEVICE_TABLE(of, dw8250_of_match);
+diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c
+index 265f21133b633e..796e37a1d859f2 100644
+--- a/drivers/ufs/core/ufs-sysfs.c
++++ b/drivers/ufs/core/ufs-sysfs.c
+@@ -670,6 +670,9 @@ static ssize_t read_req_latency_avg_show(struct device *dev,
+ struct ufs_hba *hba = dev_get_drvdata(dev);
+ struct ufs_hba_monitor *m = &hba->monitor;
+
++ if (!m->nr_req[READ])
++ return sysfs_emit(buf, "0\n");
++
+ return sysfs_emit(buf, "%llu\n", div_u64(ktime_to_us(m->lat_sum[READ]),
+ m->nr_req[READ]));
+ }
+@@ -737,6 +740,9 @@ static ssize_t write_req_latency_avg_show(struct device *dev,
+ struct ufs_hba *hba = dev_get_drvdata(dev);
+ struct ufs_hba_monitor *m = &hba->monitor;
+
++ if (!m->nr_req[WRITE])
++ return sysfs_emit(buf, "0\n");
++
+ return sysfs_emit(buf, "%llu\n", div_u64(ktime_to_us(m->lat_sum[WRITE]),
+ m->nr_req[WRITE]));
+ }
+diff --git a/drivers/ufs/core/ufs_bsg.c b/drivers/ufs/core/ufs_bsg.c
+index 433d0480391ea6..6c09d97ae00658 100644
+--- a/drivers/ufs/core/ufs_bsg.c
++++ b/drivers/ufs/core/ufs_bsg.c
+@@ -170,7 +170,7 @@ static int ufs_bsg_request(struct bsg_job *job)
+ break;
+ case UPIU_TRANSACTION_UIC_CMD:
+ memcpy(&uc, &bsg_request->upiu_req.uc, UIC_CMD_SIZE);
+- ret = ufshcd_send_uic_cmd(hba, &uc);
++ ret = ufshcd_send_bsg_uic_cmd(hba, &uc);
+ if (ret)
+ dev_err(hba->dev, "send uic cmd: error code %d\n", ret);
+
+diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h
+index 7aea8fbaeee882..9ffd94ddf8c7ce 100644
+--- a/drivers/ufs/core/ufshcd-priv.h
++++ b/drivers/ufs/core/ufshcd-priv.h
+@@ -84,6 +84,7 @@ int ufshcd_read_string_desc(struct ufs_hba *hba, u8 desc_index,
+ u8 **buf, bool ascii);
+
+ int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd);
++int ufshcd_send_bsg_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd);
+
+ int ufshcd_exec_raw_upiu_cmd(struct ufs_hba *hba,
+ struct utp_upiu_req *req_upiu,
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index abbe7135a97787..cfebe4a1af9e84 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -2411,8 +2411,6 @@ static inline int ufshcd_hba_capabilities(struct ufs_hba *hba)
+ int err;
+
+ hba->capabilities = ufshcd_readl(hba, REG_CONTROLLER_CAPABILITIES);
+- if (hba->quirks & UFSHCD_QUIRK_BROKEN_64BIT_ADDRESS)
+- hba->capabilities &= ~MASK_64_ADDRESSING_SUPPORT;
+
+ /* nutrs and nutmrs are 0 based values */
+ hba->nutrs = (hba->capabilities & MASK_TRANSFER_REQUESTS_SLOTS_SDB) + 1;
+@@ -2551,13 +2549,11 @@ ufshcd_wait_for_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd)
+ * __ufshcd_send_uic_cmd - Send UIC commands and retrieve the result
+ * @hba: per adapter instance
+ * @uic_cmd: UIC command
+- * @completion: initialize the completion only if this is set to true
+ *
+ * Return: 0 only if success.
+ */
+ static int
+-__ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd,
+- bool completion)
++__ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd)
+ {
+ lockdep_assert_held(&hba->uic_cmd_mutex);
+
+@@ -2567,8 +2563,7 @@ __ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd,
+ return -EIO;
+ }
+
+- if (completion)
+- init_completion(&uic_cmd->done);
++ init_completion(&uic_cmd->done);
+
+ uic_cmd->cmd_active = 1;
+ ufshcd_dispatch_uic_cmd(hba, uic_cmd);
+@@ -2594,7 +2589,7 @@ int ufshcd_send_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd)
+ mutex_lock(&hba->uic_cmd_mutex);
+ ufshcd_add_delay_before_dme_cmd(hba);
+
+- ret = __ufshcd_send_uic_cmd(hba, uic_cmd, true);
++ ret = __ufshcd_send_uic_cmd(hba, uic_cmd);
+ if (!ret)
+ ret = ufshcd_wait_for_uic_cmd(hba, uic_cmd);
+
+@@ -4288,7 +4283,7 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ reenable_intr = true;
+ }
+ spin_unlock_irqrestore(hba->host->host_lock, flags);
+- ret = __ufshcd_send_uic_cmd(hba, cmd, false);
++ ret = __ufshcd_send_uic_cmd(hba, cmd);
+ if (ret) {
+ dev_err(hba->dev,
+ "pwr ctrl cmd 0x%x with mode 0x%x uic error %d\n",
+@@ -4343,6 +4338,42 @@ static int ufshcd_uic_pwr_ctrl(struct ufs_hba *hba, struct uic_command *cmd)
+ return ret;
+ }
+
++/**
++ * ufshcd_send_bsg_uic_cmd - Send UIC commands requested via BSG layer and retrieve the result
++ * @hba: per adapter instance
++ * @uic_cmd: UIC command
++ *
++ * Return: 0 only if success.
++ */
++int ufshcd_send_bsg_uic_cmd(struct ufs_hba *hba, struct uic_command *uic_cmd)
++{
++ int ret;
++
++ if (hba->quirks & UFSHCD_QUIRK_BROKEN_UIC_CMD)
++ return 0;
++
++ ufshcd_hold(hba);
++
++ if (uic_cmd->argument1 == UIC_ARG_MIB(PA_PWRMODE) &&
++ uic_cmd->command == UIC_CMD_DME_SET) {
++ ret = ufshcd_uic_pwr_ctrl(hba, uic_cmd);
++ goto out;
++ }
++
++ mutex_lock(&hba->uic_cmd_mutex);
++ ufshcd_add_delay_before_dme_cmd(hba);
++
++ ret = __ufshcd_send_uic_cmd(hba, uic_cmd);
++ if (!ret)
++ ret = ufshcd_wait_for_uic_cmd(hba, uic_cmd);
++
++ mutex_unlock(&hba->uic_cmd_mutex);
++
++out:
++ ufshcd_release(hba);
++ return ret;
++}
++
+ /**
+ * ufshcd_uic_change_pwr_mode - Perform the UIC power mode change
+ * using DME_SET primitives.
+@@ -4651,9 +4682,6 @@ static int ufshcd_change_power_mode(struct ufs_hba *hba,
+ dev_err(hba->dev,
+ "%s: power mode change failed %d\n", __func__, ret);
+ } else {
+- ufshcd_vops_pwr_change_notify(hba, POST_CHANGE, NULL,
+- pwr_mode);
+-
+ memcpy(&hba->pwr_info, pwr_mode,
+ sizeof(struct ufs_pa_layer_attr));
+ }
+@@ -4682,6 +4710,10 @@ int ufshcd_config_pwr_mode(struct ufs_hba *hba,
+
+ ret = ufshcd_change_power_mode(hba, &final_params);
+
++ if (!ret)
++ ufshcd_vops_pwr_change_notify(hba, POST_CHANGE, NULL,
++ &final_params);
++
+ return ret;
+ }
+ EXPORT_SYMBOL_GPL(ufshcd_config_pwr_mode);
+@@ -10231,6 +10263,7 @@ void ufshcd_remove(struct ufs_hba *hba)
+ ufs_hwmon_remove(hba);
+ ufs_bsg_remove(hba);
+ ufs_sysfs_remove_nodes(hba->dev);
++ cancel_delayed_work_sync(&hba->ufs_rtc_update_work);
+ blk_mq_destroy_queue(hba->tmf_queue);
+ blk_put_queue(hba->tmf_queue);
+ blk_mq_free_tag_set(&hba->tmf_tag_set);
+@@ -10309,6 +10342,8 @@ EXPORT_SYMBOL_GPL(ufshcd_dealloc_host);
+ */
+ static int ufshcd_set_dma_mask(struct ufs_hba *hba)
+ {
++ if (hba->vops && hba->vops->set_dma_mask)
++ return hba->vops->set_dma_mask(hba);
+ if (hba->capabilities & MASK_64_ADDRESSING_SUPPORT) {
+ if (!dma_set_mask_and_coherent(hba->dev, DMA_BIT_MASK(64)))
+ return 0;
+diff --git a/drivers/ufs/host/cdns-pltfrm.c b/drivers/ufs/host/cdns-pltfrm.c
+index 66811d8d1929c1..b31aa84111511b 100644
+--- a/drivers/ufs/host/cdns-pltfrm.c
++++ b/drivers/ufs/host/cdns-pltfrm.c
+@@ -307,9 +307,7 @@ static int cdns_ufs_pltfrm_probe(struct platform_device *pdev)
+ */
+ static void cdns_ufs_pltfrm_remove(struct platform_device *pdev)
+ {
+- struct ufs_hba *hba = platform_get_drvdata(pdev);
+-
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+ }
+
+ static const struct dev_pm_ops cdns_ufs_dev_pm_ops = {
+diff --git a/drivers/ufs/host/tc-dwc-g210-pltfrm.c b/drivers/ufs/host/tc-dwc-g210-pltfrm.c
+index a3877592604d5d..c6f8565ede21a1 100644
+--- a/drivers/ufs/host/tc-dwc-g210-pltfrm.c
++++ b/drivers/ufs/host/tc-dwc-g210-pltfrm.c
+@@ -76,10 +76,7 @@ static int tc_dwc_g210_pltfm_probe(struct platform_device *pdev)
+ */
+ static void tc_dwc_g210_pltfm_remove(struct platform_device *pdev)
+ {
+- struct ufs_hba *hba = platform_get_drvdata(pdev);
+-
+- pm_runtime_get_sync(&(pdev)->dev);
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+ }
+
+ static const struct dev_pm_ops tc_dwc_g210_pltfm_pm_ops = {
+diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
+index fb550a7c16b34b..98505c68103d0e 100644
+--- a/drivers/ufs/host/ufs-exynos.c
++++ b/drivers/ufs/host/ufs-exynos.c
+@@ -1963,8 +1963,7 @@ static void exynos_ufs_remove(struct platform_device *pdev)
+ struct ufs_hba *hba = platform_get_drvdata(pdev);
+ struct exynos_ufs *ufs = ufshcd_get_variant(hba);
+
+- pm_runtime_get_sync(&(pdev)->dev);
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+
+ phy_power_off(ufs->phy);
+ phy_exit(ufs->phy);
+diff --git a/drivers/ufs/host/ufs-hisi.c b/drivers/ufs/host/ufs-hisi.c
+index 5ee73ff052512b..501609521b2609 100644
+--- a/drivers/ufs/host/ufs-hisi.c
++++ b/drivers/ufs/host/ufs-hisi.c
+@@ -576,9 +576,7 @@ static int ufs_hisi_probe(struct platform_device *pdev)
+
+ static void ufs_hisi_remove(struct platform_device *pdev)
+ {
+- struct ufs_hba *hba = platform_get_drvdata(pdev);
+-
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+ }
+
+ static const struct dev_pm_ops ufs_hisi_pm_ops = {
+diff --git a/drivers/ufs/host/ufs-mediatek.c b/drivers/ufs/host/ufs-mediatek.c
+index 9a5919434c4e0d..c834d38921b6cb 100644
+--- a/drivers/ufs/host/ufs-mediatek.c
++++ b/drivers/ufs/host/ufs-mediatek.c
+@@ -1869,10 +1869,7 @@ static int ufs_mtk_probe(struct platform_device *pdev)
+ */
+ static void ufs_mtk_remove(struct platform_device *pdev)
+ {
+- struct ufs_hba *hba = platform_get_drvdata(pdev);
+-
+- pm_runtime_get_sync(&(pdev)->dev);
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+ }
+
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index ecdfff2456e31d..91127fb171864f 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -1843,10 +1843,11 @@ static int ufs_qcom_probe(struct platform_device *pdev)
+ static void ufs_qcom_remove(struct platform_device *pdev)
+ {
+ struct ufs_hba *hba = platform_get_drvdata(pdev);
++ struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+
+- pm_runtime_get_sync(&(pdev)->dev);
+- ufshcd_remove(hba);
+- platform_device_msi_free_irqs_all(hba->dev);
++ ufshcd_pltfrm_remove(pdev);
++ if (host->esi_enabled)
++ platform_device_msi_free_irqs_all(hba->dev);
+ }
+
+ static const struct of_device_id ufs_qcom_of_match[] __maybe_unused = {
+diff --git a/drivers/ufs/host/ufs-renesas.c b/drivers/ufs/host/ufs-renesas.c
+index 8711e5cbc9680a..21a64b34397d8c 100644
+--- a/drivers/ufs/host/ufs-renesas.c
++++ b/drivers/ufs/host/ufs-renesas.c
+@@ -7,6 +7,7 @@
+
+ #include <linux/clk.h>
+ #include <linux/delay.h>
++#include <linux/dma-mapping.h>
+ #include <linux/err.h>
+ #include <linux/iopoll.h>
+ #include <linux/kernel.h>
+@@ -364,14 +365,20 @@ static int ufs_renesas_init(struct ufs_hba *hba)
+ return -ENOMEM;
+ ufshcd_set_variant(hba, priv);
+
+- hba->quirks |= UFSHCD_QUIRK_BROKEN_64BIT_ADDRESS | UFSHCD_QUIRK_HIBERN_FASTAUTO;
++ hba->quirks |= UFSHCD_QUIRK_HIBERN_FASTAUTO;
+
+ return 0;
+ }
+
++static int ufs_renesas_set_dma_mask(struct ufs_hba *hba)
++{
++ return dma_set_mask_and_coherent(hba->dev, DMA_BIT_MASK(32));
++}
++
+ static const struct ufs_hba_variant_ops ufs_renesas_vops = {
+ .name = "renesas",
+ .init = ufs_renesas_init,
++ .set_dma_mask = ufs_renesas_set_dma_mask,
+ .setup_clocks = ufs_renesas_setup_clocks,
+ .hce_enable_notify = ufs_renesas_hce_enable_notify,
+ .dbg_register_dump = ufs_renesas_dbg_register_dump,
+@@ -390,9 +397,7 @@ static int ufs_renesas_probe(struct platform_device *pdev)
+
+ static void ufs_renesas_remove(struct platform_device *pdev)
+ {
+- struct ufs_hba *hba = platform_get_drvdata(pdev);
+-
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+ }
+
+ static struct platform_driver ufs_renesas_platform = {
+diff --git a/drivers/ufs/host/ufs-sprd.c b/drivers/ufs/host/ufs-sprd.c
+index d8b165908809d6..d220978c2d8c8a 100644
+--- a/drivers/ufs/host/ufs-sprd.c
++++ b/drivers/ufs/host/ufs-sprd.c
+@@ -427,10 +427,7 @@ static int ufs_sprd_probe(struct platform_device *pdev)
+
+ static void ufs_sprd_remove(struct platform_device *pdev)
+ {
+- struct ufs_hba *hba = platform_get_drvdata(pdev);
+-
+- pm_runtime_get_sync(&(pdev)->dev);
+- ufshcd_remove(hba);
++ ufshcd_pltfrm_remove(pdev);
+ }
+
+ static const struct dev_pm_ops ufs_sprd_pm_ops = {
+diff --git a/drivers/ufs/host/ufshcd-pltfrm.c b/drivers/ufs/host/ufshcd-pltfrm.c
+index 1f4f30d6cb4234..505572d4fa878c 100644
+--- a/drivers/ufs/host/ufshcd-pltfrm.c
++++ b/drivers/ufs/host/ufshcd-pltfrm.c
+@@ -524,6 +524,22 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ }
+ EXPORT_SYMBOL_GPL(ufshcd_pltfrm_init);
+
++/**
++ * ufshcd_pltfrm_remove - Remove ufshcd platform
++ * @pdev: pointer to Platform device handle
++ */
++void ufshcd_pltfrm_remove(struct platform_device *pdev)
++{
++ struct ufs_hba *hba = platform_get_drvdata(pdev);
++
++ pm_runtime_get_sync(&pdev->dev);
++ ufshcd_remove(hba);
++ ufshcd_dealloc_host(hba);
++ pm_runtime_disable(&pdev->dev);
++ pm_runtime_put_noidle(&pdev->dev);
++}
++EXPORT_SYMBOL_GPL(ufshcd_pltfrm_remove);
++
+ MODULE_AUTHOR("Santosh Yaragnavi <santosh.sy@samsung.com>");
+ MODULE_AUTHOR("Vinayak Holikatti <h.vinayak@samsung.com>");
+ MODULE_DESCRIPTION("UFS host controller Platform bus based glue driver");
+diff --git a/drivers/ufs/host/ufshcd-pltfrm.h b/drivers/ufs/host/ufshcd-pltfrm.h
+index df387be5216bd4..3017f8e8f93c67 100644
+--- a/drivers/ufs/host/ufshcd-pltfrm.h
++++ b/drivers/ufs/host/ufshcd-pltfrm.h
+@@ -31,6 +31,7 @@ int ufshcd_negotiate_pwr_params(const struct ufs_host_params *host_params,
+ void ufshcd_init_host_params(struct ufs_host_params *host_params);
+ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ const struct ufs_hba_variant_ops *vops);
++void ufshcd_pltfrm_remove(struct platform_device *pdev);
+ int ufshcd_populate_vreg(struct device *dev, const char *name,
+ struct ufs_vreg **out_vreg, bool skip_current);
+
+diff --git a/drivers/usb/chipidea/ci.h b/drivers/usb/chipidea/ci.h
+index 2a38e1eb65466c..97437de52ef681 100644
+--- a/drivers/usb/chipidea/ci.h
++++ b/drivers/usb/chipidea/ci.h
+@@ -25,6 +25,7 @@
+ #define TD_PAGE_COUNT 5
+ #define CI_HDRC_PAGE_SIZE 4096ul /* page size for TD's */
+ #define ENDPT_MAX 32
++#define CI_MAX_REQ_SIZE (4 * CI_HDRC_PAGE_SIZE)
+ #define CI_MAX_BUF_SIZE (TD_PAGE_COUNT * CI_HDRC_PAGE_SIZE)
+
+ /******************************************************************************
+@@ -260,6 +261,7 @@ struct ci_hdrc {
+ bool b_sess_valid_event;
+ bool imx28_write_fix;
+ bool has_portsc_pec_bug;
++ bool has_short_pkt_limit;
+ bool supports_runtime_pm;
+ bool in_lpm;
+ bool wakeup_int;
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index c64ab0e07ea030..17b3ac2ac8a1e8 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -342,6 +342,7 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ struct ci_hdrc_platform_data pdata = {
+ .name = dev_name(&pdev->dev),
+ .capoffset = DEF_CAPOFFSET,
++ .flags = CI_HDRC_HAS_SHORT_PKT_LIMIT,
+ .notify_event = ci_hdrc_imx_notify_event,
+ };
+ int ret;
+diff --git a/drivers/usb/chipidea/core.c b/drivers/usb/chipidea/core.c
+index 835bf2428dc6ec..5aa16dbfc289ce 100644
+--- a/drivers/usb/chipidea/core.c
++++ b/drivers/usb/chipidea/core.c
+@@ -1076,6 +1076,8 @@ static int ci_hdrc_probe(struct platform_device *pdev)
+ CI_HDRC_SUPPORTS_RUNTIME_PM);
+ ci->has_portsc_pec_bug = !!(ci->platdata->flags &
+ CI_HDRC_HAS_PORTSC_PEC_MISSED);
++ ci->has_short_pkt_limit = !!(ci->platdata->flags &
++ CI_HDRC_HAS_SHORT_PKT_LIMIT);
+ platform_set_drvdata(pdev, ci);
+
+ ret = hw_device_init(ci, base);
+diff --git a/drivers/usb/chipidea/udc.c b/drivers/usb/chipidea/udc.c
+index 69ef3cd8d4f836..fd6032874bf33a 100644
+--- a/drivers/usb/chipidea/udc.c
++++ b/drivers/usb/chipidea/udc.c
+@@ -10,6 +10,7 @@
+ #include <linux/delay.h>
+ #include <linux/device.h>
+ #include <linux/dmapool.h>
++#include <linux/dma-direct.h>
+ #include <linux/err.h>
+ #include <linux/irqreturn.h>
+ #include <linux/kernel.h>
+@@ -540,6 +541,126 @@ static int prepare_td_for_sg(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq)
+ return ret;
+ }
+
++/*
++ * Verify that the scatterlist is valid by iterating over each sg entry.
++ * Returns the first invalid entry's index (< num_sgs), or num_sgs if valid.
++ */
++static int sglist_get_invalid_entry(struct device *dma_dev, u8 dir,
++ struct usb_request *req)
++{
++ int i;
++ struct scatterlist *s = req->sg;
++
++ if (req->num_sgs == 1)
++ return 1;
++
++ dir = dir ? DMA_TO_DEVICE : DMA_FROM_DEVICE;
++
++ for (i = 0; i < req->num_sgs; i++, s = sg_next(s)) {
++ /* Only a small sg (generally the last sg) may be bounced. If
++ * that happens, we can't ensure the addr is page-aligned
++ * after the DMA mapping.
++ */
++ if (dma_kmalloc_needs_bounce(dma_dev, s->length, dir))
++ break;
++
++ /* Make sure each sg's start address (except the first sg) is
++ * page-aligned, and that each end address (except the last sg)
++ * is page-aligned as well.
++ */
++ if (i == 0) {
++ if (!IS_ALIGNED(s->offset + s->length,
++ CI_HDRC_PAGE_SIZE))
++ break;
++ } else {
++ if (s->offset)
++ break;
++ if (!sg_is_last(s) && !IS_ALIGNED(s->length,
++ CI_HDRC_PAGE_SIZE))
++ break;
++ }
++ }
++
++ return i;
++}
++
++static int sglist_do_bounce(struct ci_hw_req *hwreq, int index,
++ bool copy, unsigned int *bounced)
++{
++ void *buf;
++ int i, ret, nents, num_sgs;
++ unsigned int rest, rounded;
++ struct scatterlist *sg, *src, *dst;
++
++ nents = index + 1;
++ ret = sg_alloc_table(&hwreq->sgt, nents, GFP_KERNEL);
++ if (ret)
++ return ret;
++
++ sg = src = hwreq->req.sg;
++ num_sgs = hwreq->req.num_sgs;
++ rest = hwreq->req.length;
++ dst = hwreq->sgt.sgl;
++
++ for (i = 0; i < index; i++) {
++ memcpy(dst, src, sizeof(*src));
++ rest -= src->length;
++ src = sg_next(src);
++ dst = sg_next(dst);
++ }
++
++ /* create one bounce buffer */
++ rounded = round_up(rest, CI_HDRC_PAGE_SIZE);
++ buf = kmalloc(rounded, GFP_KERNEL);
++ if (!buf) {
++ sg_free_table(&hwreq->sgt);
++ return -ENOMEM;
++ }
++
++ sg_set_buf(dst, buf, rounded);
++
++ hwreq->req.sg = hwreq->sgt.sgl;
++ hwreq->req.num_sgs = nents;
++ hwreq->sgt.sgl = sg;
++ hwreq->sgt.nents = num_sgs;
++
++ if (copy)
++ sg_copy_to_buffer(src, num_sgs - index, buf, rest);
++
++ *bounced = rest;
++
++ return 0;
++}
++
++static void sglist_do_debounce(struct ci_hw_req *hwreq, bool copy)
++{
++ void *buf;
++ int i, nents, num_sgs;
++ struct scatterlist *sg, *src, *dst;
++
++ sg = hwreq->req.sg;
++ num_sgs = hwreq->req.num_sgs;
++ src = sg_last(sg, num_sgs);
++ buf = sg_virt(src);
++
++ if (copy) {
++ dst = hwreq->sgt.sgl;
++ for (i = 0; i < num_sgs - 1; i++)
++ dst = sg_next(dst);
++
++ nents = hwreq->sgt.nents - num_sgs + 1;
++ sg_copy_from_buffer(dst, nents, buf, sg_dma_len(src));
++ }
++
++ hwreq->req.sg = hwreq->sgt.sgl;
++ hwreq->req.num_sgs = hwreq->sgt.nents;
++ hwreq->sgt.sgl = sg;
++ hwreq->sgt.nents = num_sgs;
++
++ kfree(buf);
++ sg_free_table(&hwreq->sgt);
++}
++
+ /**
+ * _hardware_enqueue: configures a request at hardware level
+ * @hwep: endpoint
+@@ -552,6 +673,8 @@ static int _hardware_enqueue(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq)
+ struct ci_hdrc *ci = hwep->ci;
+ int ret = 0;
+ struct td_node *firstnode, *lastnode;
++ unsigned int bounced_size;
++ struct scatterlist *sg;
+
+ /* don't queue twice */
+ if (hwreq->req.status == -EALREADY)
+@@ -559,11 +682,29 @@ static int _hardware_enqueue(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq)
+
+ hwreq->req.status = -EALREADY;
+
++ if (hwreq->req.num_sgs && hwreq->req.length &&
++ ci->has_short_pkt_limit) {
++ ret = sglist_get_invalid_entry(ci->dev->parent, hwep->dir,
++ &hwreq->req);
++ if (ret < hwreq->req.num_sgs) {
++ ret = sglist_do_bounce(hwreq, ret, hwep->dir == TX,
++ &bounced_size);
++ if (ret)
++ return ret;
++ }
++ }
++
+ ret = usb_gadget_map_request_by_dev(ci->dev->parent,
+ &hwreq->req, hwep->dir);
+ if (ret)
+ return ret;
+
++ if (hwreq->sgt.sgl) {
++ /* We've mapped a bigger buffer, now recover the actual size */
++ sg = sg_last(hwreq->req.sg, hwreq->req.num_sgs);
++ sg_dma_len(sg) = min(sg_dma_len(sg), bounced_size);
++ }
++
+ if (hwreq->req.num_mapped_sgs)
+ ret = prepare_td_for_sg(hwep, hwreq);
+ else
+@@ -733,6 +874,10 @@ static int _hardware_dequeue(struct ci_hw_ep *hwep, struct ci_hw_req *hwreq)
+ usb_gadget_unmap_request_by_dev(hwep->ci->dev->parent,
+ &hwreq->req, hwep->dir);
+
++ /* sglist bounced */
++ if (hwreq->sgt.sgl)
++ sglist_do_debounce(hwreq, hwep->dir == RX);
++
+ hwreq->req.actual += actual;
+
+ if (hwreq->req.status)
+@@ -960,6 +1105,12 @@ static int _ep_queue(struct usb_ep *ep, struct usb_request *req,
+ return -EMSGSIZE;
+ }
+
++ if (ci->has_short_pkt_limit &&
++ hwreq->req.length > CI_MAX_REQ_SIZE) {
++ dev_err(hwep->ci->dev, "request length too big (max 16KB)\n");
++ return -EMSGSIZE;
++ }
++
+ /* first nuke then test link, e.g. previous status has not sent */
+ if (!list_empty(&hwreq->queue)) {
+ dev_err(hwep->ci->dev, "request already in queue\n");
+@@ -1574,6 +1725,9 @@ static int ep_dequeue(struct usb_ep *ep, struct usb_request *req)
+
+ usb_gadget_unmap_request(&hwep->ci->gadget, req, hwep->dir);
+
++ if (hwreq->sgt.sgl)
++ sglist_do_debounce(hwreq, false);
++
+ req->status = -ECONNRESET;
+
+ if (hwreq->req.complete != NULL) {
+@@ -2063,7 +2217,7 @@ static irqreturn_t udc_irq(struct ci_hdrc *ci)
+ }
+ }
+
+- if (USBi_UI & intr)
++ if ((USBi_UI | USBi_UEI) & intr)
+ isr_tr_complete_handler(ci);
+
+ if ((USBi_SLI & intr) && !(ci->suspended)) {
+diff --git a/drivers/usb/chipidea/udc.h b/drivers/usb/chipidea/udc.h
+index 5193df1e18c75b..c8a47389a46bbb 100644
+--- a/drivers/usb/chipidea/udc.h
++++ b/drivers/usb/chipidea/udc.h
+@@ -69,11 +69,13 @@ struct td_node {
+ * @req: request structure for gadget drivers
+ * @queue: link to QH list
+ * @tds: link to TD list
++ * @sgt: holds the original sglist while the sglist is bounced
+ */
+ struct ci_hw_req {
+ struct usb_request req;
+ struct list_head queue;
+ struct list_head tds;
++ struct sg_table sgt;
+ };
+
+ #ifdef CONFIG_USB_CHIPIDEA_UDC
+diff --git a/drivers/usb/typec/ucsi/ucsi_acpi.c b/drivers/usb/typec/ucsi/ucsi_acpi.c
+index 7a5dff8d9cc6c3..accf15ff1306a2 100644
+--- a/drivers/usb/typec/ucsi/ucsi_acpi.c
++++ b/drivers/usb/typec/ucsi/ucsi_acpi.c
+@@ -61,9 +61,11 @@ static int ucsi_acpi_read_cci(struct ucsi *ucsi, u32 *cci)
+ struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
+ int ret;
+
+- ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
+- if (ret)
+- return ret;
++ if (UCSI_COMMAND(ua->cmd) == UCSI_PPM_RESET) {
++ ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
++ if (ret)
++ return ret;
++ }
+
+ memcpy(cci, ua->base + UCSI_CCI, sizeof(*cci));
+
+@@ -73,11 +75,6 @@ static int ucsi_acpi_read_cci(struct ucsi *ucsi, u32 *cci)
+ static int ucsi_acpi_read_message_in(struct ucsi *ucsi, void *val, size_t val_len)
+ {
+ struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
+- int ret;
+-
+- ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
+- if (ret)
+- return ret;
+
+ memcpy(val, ua->base + UCSI_MESSAGE_IN, val_len);
+
+@@ -102,42 +99,6 @@ static const struct ucsi_operations ucsi_acpi_ops = {
+ .async_control = ucsi_acpi_async_control
+ };
+
+-static int
+-ucsi_zenbook_read_cci(struct ucsi *ucsi, u32 *cci)
+-{
+- struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
+- int ret;
+-
+- if (UCSI_COMMAND(ua->cmd) == UCSI_PPM_RESET) {
+- ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
+- if (ret)
+- return ret;
+- }
+-
+- memcpy(cci, ua->base + UCSI_CCI, sizeof(*cci));
+-
+- return 0;
+-}
+-
+-static int
+-ucsi_zenbook_read_message_in(struct ucsi *ucsi, void *val, size_t val_len)
+-{
+- struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
+-
+- /* UCSI_MESSAGE_IN is never read for PPM_RESET, return stored data */
+- memcpy(val, ua->base + UCSI_MESSAGE_IN, val_len);
+-
+- return 0;
+-}
+-
+-static const struct ucsi_operations ucsi_zenbook_ops = {
+- .read_version = ucsi_acpi_read_version,
+- .read_cci = ucsi_zenbook_read_cci,
+- .read_message_in = ucsi_zenbook_read_message_in,
+- .sync_control = ucsi_sync_control_common,
+- .async_control = ucsi_acpi_async_control
+-};
+-
+ static int ucsi_gram_read_message_in(struct ucsi *ucsi, void *val, size_t val_len)
+ {
+ u16 bogus_change = UCSI_CONSTAT_POWER_LEVEL_CHANGE |
+@@ -190,13 +151,6 @@ static const struct ucsi_operations ucsi_gram_ops = {
+ };
+
+ static const struct dmi_system_id ucsi_acpi_quirks[] = {
+- {
+- .matches = {
+- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
+- DMI_MATCH(DMI_PRODUCT_NAME, "ZenBook UX325UA_UM325UA"),
+- },
+- .driver_data = (void *)&ucsi_zenbook_ops,
+- },
+ {
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LG Electronics"),
+diff --git a/drivers/usb/typec/ucsi/ucsi_glink.c b/drivers/usb/typec/ucsi/ucsi_glink.c
+index f7000d383a4e62..9b6cb76e632807 100644
+--- a/drivers/usb/typec/ucsi/ucsi_glink.c
++++ b/drivers/usb/typec/ucsi/ucsi_glink.c
+@@ -172,12 +172,12 @@ static int pmic_glink_ucsi_async_control(struct ucsi *__ucsi, u64 command)
+ static void pmic_glink_ucsi_update_connector(struct ucsi_connector *con)
+ {
+ struct pmic_glink_ucsi *ucsi = ucsi_get_drvdata(con->ucsi);
+- int i;
+
+- for (i = 0; i < PMIC_GLINK_MAX_PORTS; i++) {
+- if (ucsi->port_orientation[i])
+- con->typec_cap.orientation_aware = true;
+- }
++ if (con->num > PMIC_GLINK_MAX_PORTS ||
++ !ucsi->port_orientation[con->num - 1])
++ return;
++
++ con->typec_cap.orientation_aware = true;
+ }
+
+ static void pmic_glink_ucsi_connector_status(struct ucsi_connector *con)
+diff --git a/drivers/vfio/pci/mlx5/cmd.c b/drivers/vfio/pci/mlx5/cmd.c
+index 7527e277c89897..eb7387ee6ebd10 100644
+--- a/drivers/vfio/pci/mlx5/cmd.c
++++ b/drivers/vfio/pci/mlx5/cmd.c
+@@ -1517,7 +1517,8 @@ int mlx5vf_start_page_tracker(struct vfio_device *vdev,
+ struct mlx5_vhca_qp *host_qp;
+ struct mlx5_vhca_qp *fw_qp;
+ struct mlx5_core_dev *mdev;
+- u32 max_msg_size = PAGE_SIZE;
++ u32 log_max_msg_size;
++ u32 max_msg_size;
+ u64 rq_size = SZ_2M;
+ u32 max_recv_wr;
+ int err;
+@@ -1534,6 +1535,12 @@ int mlx5vf_start_page_tracker(struct vfio_device *vdev,
+ }
+
+ mdev = mvdev->mdev;
++ log_max_msg_size = MLX5_CAP_ADV_VIRTUALIZATION(mdev, pg_track_log_max_msg_size);
++ max_msg_size = (1ULL << log_max_msg_size);
++ /* The RQ must hold at least 4 WQEs/messages for successful QP creation */
++ if (rq_size < 4 * max_msg_size)
++ rq_size = 4 * max_msg_size;
++
+ memset(tracker, 0, sizeof(*tracker));
+ tracker->uar = mlx5_get_uars_page(mdev);
+ if (IS_ERR(tracker->uar)) {
+@@ -1623,25 +1630,41 @@ set_report_output(u32 size, int index, struct mlx5_vhca_qp *qp,
+ {
+ u32 entry_size = MLX5_ST_SZ_BYTES(page_track_report_entry);
+ u32 nent = size / entry_size;
++ u32 nent_in_page;
++ u32 nent_to_set;
+ struct page *page;
++ u32 page_offset;
++ u32 page_index;
++ u32 buf_offset;
++ void *kaddr;
+ u64 addr;
+ u64 *buf;
+ int i;
+
+- if (WARN_ON(index >= qp->recv_buf.npages ||
++ buf_offset = index * qp->max_msg_size;
++ if (WARN_ON(buf_offset + size >= qp->recv_buf.npages * PAGE_SIZE ||
+ (nent > qp->max_msg_size / entry_size)))
+ return;
+
+- page = qp->recv_buf.page_list[index];
+- buf = kmap_local_page(page);
+- for (i = 0; i < nent; i++) {
+- addr = MLX5_GET(page_track_report_entry, buf + i,
+- dirty_address_low);
+- addr |= (u64)MLX5_GET(page_track_report_entry, buf + i,
+- dirty_address_high) << 32;
+- iova_bitmap_set(dirty, addr, qp->tracked_page_size);
+- }
+- kunmap_local(buf);
++ do {
++ page_index = buf_offset / PAGE_SIZE;
++ page_offset = buf_offset % PAGE_SIZE;
++ nent_in_page = (PAGE_SIZE - page_offset) / entry_size;
++ page = qp->recv_buf.page_list[page_index];
++ kaddr = kmap_local_page(page);
++ buf = kaddr + page_offset;
++ nent_to_set = min(nent, nent_in_page);
++ for (i = 0; i < nent_to_set; i++) {
++ addr = MLX5_GET(page_track_report_entry, buf + i,
++ dirty_address_low);
++ addr |= (u64)MLX5_GET(page_track_report_entry, buf + i,
++ dirty_address_high) << 32;
++ iova_bitmap_set(dirty, addr, qp->tracked_page_size);
++ }
++ kunmap_local(kaddr);
++ buf_offset += (nent_to_set * entry_size);
++ nent -= nent_to_set;
++ } while (nent);
+ }
+
+ static void
+diff --git a/drivers/virt/coco/pkvm-guest/arm-pkvm-guest.c b/drivers/virt/coco/pkvm-guest/arm-pkvm-guest.c
+index 56a3859dda8a15..4230b817a80bd8 100644
+--- a/drivers/virt/coco/pkvm-guest/arm-pkvm-guest.c
++++ b/drivers/virt/coco/pkvm-guest/arm-pkvm-guest.c
+@@ -87,12 +87,8 @@ static int mmio_guard_ioremap_hook(phys_addr_t phys, size_t size,
+
+ while (phys < end) {
+ const int func_id = ARM_SMCCC_VENDOR_HYP_KVM_MMIO_GUARD_FUNC_ID;
+- int err;
+-
+- err = arm_smccc_do_one_page(func_id, phys);
+- if (err)
+- return err;
+
++ WARN_ON_ONCE(arm_smccc_do_one_page(func_id, phys));
+ phys += PAGE_SIZE;
+ }
+
+diff --git a/drivers/watchdog/apple_wdt.c b/drivers/watchdog/apple_wdt.c
+index d4f739932f0be8..62dabf223d9096 100644
+--- a/drivers/watchdog/apple_wdt.c
++++ b/drivers/watchdog/apple_wdt.c
+@@ -130,7 +130,7 @@ static int apple_wdt_restart(struct watchdog_device *wdd, unsigned long mode,
+ * can take up to ~20-25ms until the SoC is actually reset. Just wait
+ * 50ms here to be safe.
+ */
+- (void)readl_relaxed(wdt->regs + APPLE_WDT_WD1_CUR_TIME);
++ (void)readl(wdt->regs + APPLE_WDT_WD1_CUR_TIME);
+ mdelay(50);
+
+ return 0;
+diff --git a/drivers/watchdog/iTCO_wdt.c b/drivers/watchdog/iTCO_wdt.c
+index 35b358bcf94ce6..f01ed38aba6751 100644
+--- a/drivers/watchdog/iTCO_wdt.c
++++ b/drivers/watchdog/iTCO_wdt.c
+@@ -82,6 +82,13 @@
+ #define TCO2_CNT(p) (TCOBASE(p) + 0x0a) /* TCO2 Control Register */
+ #define TCOv2_TMR(p) (TCOBASE(p) + 0x12) /* TCOv2 Timer Initial Value*/
+
++/*
++ * NMI_NOW is bit 8 of the TCO1_CNT register.
++ * Read/Write.
++ * The bit is implemented as RW but has no effect on the HW.
++ */
++#define NMI_NOW BIT(8)
++
+ /* internal variables */
+ struct iTCO_wdt_private {
+ struct watchdog_device wddev;
+@@ -219,13 +226,23 @@ static int update_no_reboot_bit_cnt(void *priv, bool set)
+ struct iTCO_wdt_private *p = priv;
+ u16 val, newval;
+
+- val = inw(TCO1_CNT(p));
++ /*
++ * Writing 1b1 back to NMI_NOW in the TCO1_CNT register
++ * inverts the NMI_NOW bit, which prevents a proper
++ * comparison of the register value before and after
++ * the write.
++ *
++ * Masking NMI_NOW out of the TCO1_CNT values avoids
++ * such spurious bit inversions on the subsequent
++ * write operation.
++ */
++ val = inw(TCO1_CNT(p)) & ~NMI_NOW;
+ if (set)
+ val |= BIT(0);
+ else
+ val &= ~BIT(0);
+ outw(val, TCO1_CNT(p));
+- newval = inw(TCO1_CNT(p));
++ newval = inw(TCO1_CNT(p)) & ~NMI_NOW;
+
+ /* make sure the update is successful */
+ return val != newval ? -EIO : 0;
+diff --git a/drivers/watchdog/mtk_wdt.c b/drivers/watchdog/mtk_wdt.c
+index c35f85ce8d69cc..e2d7a57d6ea2e7 100644
+--- a/drivers/watchdog/mtk_wdt.c
++++ b/drivers/watchdog/mtk_wdt.c
+@@ -225,9 +225,15 @@ static int mtk_wdt_restart(struct watchdog_device *wdt_dev,
+ {
+ struct mtk_wdt_dev *mtk_wdt = watchdog_get_drvdata(wdt_dev);
+ void __iomem *wdt_base;
++ u32 reg;
+
+ wdt_base = mtk_wdt->wdt_base;
+
++ /* Enable reset in order to issue a system reset instead of an IRQ */
++ reg = readl(wdt_base + WDT_MODE);
++ reg &= ~WDT_MODE_IRQ_EN;
++ writel(reg | WDT_MODE_KEY, wdt_base + WDT_MODE);
++
+ while (1) {
+ writel(WDT_SWRST_KEY, wdt_base + WDT_SWRST);
+ mdelay(5);
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index 4895a69015a8ea..563d842014dfba 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -61,7 +61,7 @@
+
+ #define MAX_HW_ERROR 250
+
+-static int heartbeat = DEFAULT_HEARTBEAT;
++static int heartbeat;
+
+ /*
+ * struct to hold data for each WDT device
+@@ -252,6 +252,7 @@ static int rti_wdt_probe(struct platform_device *pdev)
+ wdd->min_timeout = 1;
+ wdd->max_hw_heartbeat_ms = (WDT_PRELOAD_MAX << WDT_PRELOAD_SHIFT) /
+ wdt->freq * 1000;
++ wdd->timeout = DEFAULT_HEARTBEAT;
+ wdd->parent = dev;
+
+ watchdog_set_drvdata(wdd, wdt);
+diff --git a/drivers/watchdog/xilinx_wwdt.c b/drivers/watchdog/xilinx_wwdt.c
+index d271e2e8d6e271..3d2a156f718009 100644
+--- a/drivers/watchdog/xilinx_wwdt.c
++++ b/drivers/watchdog/xilinx_wwdt.c
+@@ -2,7 +2,7 @@
+ /*
+ * Window watchdog device driver for Xilinx Versal WWDT
+ *
+- * Copyright (C) 2022 - 2023, Advanced Micro Devices, Inc.
++ * Copyright (C) 2022 - 2024, Advanced Micro Devices, Inc.
+ */
+
+ #include <linux/clk.h>
+@@ -36,6 +36,12 @@
+
+ #define XWWDT_CLOSE_WINDOW_PERCENT 50
+
++/* Maximum count value of each 32 bit window */
++#define XWWDT_MAX_COUNT_WINDOW GENMASK(31, 0)
++
++/* Maximum count value of closed and open window combined */
++#define XWWDT_MAX_COUNT_WINDOW_COMBINED GENMASK_ULL(32, 1)
++
+ static int wwdt_timeout;
+ static int closed_window_percent;
+
+@@ -54,6 +60,8 @@ MODULE_PARM_DESC(closed_window_percent,
+ * @xilinx_wwdt_wdd: watchdog device structure
+ * @freq: source clock frequency of WWDT
+ * @close_percent: Closed window percent
++ * @closed_timeout: Closed window timeout in ticks
++ * @open_timeout: Open window timeout in ticks
+ */
+ struct xwwdt_device {
+ void __iomem *base;
+@@ -61,27 +69,22 @@ struct xwwdt_device {
+ struct watchdog_device xilinx_wwdt_wdd;
+ unsigned long freq;
+ u32 close_percent;
++ u64 closed_timeout;
++ u64 open_timeout;
+ };
+
+ static int xilinx_wwdt_start(struct watchdog_device *wdd)
+ {
+ struct xwwdt_device *xdev = watchdog_get_drvdata(wdd);
+ struct watchdog_device *xilinx_wwdt_wdd = &xdev->xilinx_wwdt_wdd;
+- u64 time_out, closed_timeout, open_timeout;
+ u32 control_status_reg;
+
+- /* Calculate timeout count */
+- time_out = xdev->freq * wdd->timeout;
+- closed_timeout = div_u64(time_out * xdev->close_percent, 100);
+- open_timeout = time_out - closed_timeout;
+- wdd->min_hw_heartbeat_ms = xdev->close_percent * 10 * wdd->timeout;
+-
+ spin_lock(&xdev->spinlock);
+
+ iowrite32(XWWDT_MWR_MASK, xdev->base + XWWDT_MWR_OFFSET);
+ iowrite32(~(u32)XWWDT_ESR_WEN_MASK, xdev->base + XWWDT_ESR_OFFSET);
+- iowrite32((u32)closed_timeout, xdev->base + XWWDT_FWR_OFFSET);
+- iowrite32((u32)open_timeout, xdev->base + XWWDT_SWR_OFFSET);
++ iowrite32((u32)xdev->closed_timeout, xdev->base + XWWDT_FWR_OFFSET);
++ iowrite32((u32)xdev->open_timeout, xdev->base + XWWDT_SWR_OFFSET);
+
+ /* Enable the window watchdog timer */
+ control_status_reg = ioread32(xdev->base + XWWDT_ESR_OFFSET);
+@@ -133,7 +136,12 @@ static int xwwdt_probe(struct platform_device *pdev)
+ struct watchdog_device *xilinx_wwdt_wdd;
+ struct device *dev = &pdev->dev;
+ struct xwwdt_device *xdev;
++ u64 max_per_window_ms;
++ u64 min_per_window_ms;
++ u64 timeout_count;
+ struct clk *clk;
++ u32 timeout_ms;
++ u64 ms_count;
+ int ret;
+
+ xdev = devm_kzalloc(dev, sizeof(*xdev), GFP_KERNEL);
+@@ -154,12 +162,13 @@ static int xwwdt_probe(struct platform_device *pdev)
+ return PTR_ERR(clk);
+
+ xdev->freq = clk_get_rate(clk);
+- if (!xdev->freq)
++ if (xdev->freq < 1000000)
+ return -EINVAL;
+
+ xilinx_wwdt_wdd->min_timeout = XWWDT_MIN_TIMEOUT;
+ xilinx_wwdt_wdd->timeout = XWWDT_DEFAULT_TIMEOUT;
+- xilinx_wwdt_wdd->max_hw_heartbeat_ms = 1000 * xilinx_wwdt_wdd->timeout;
++ xilinx_wwdt_wdd->max_hw_heartbeat_ms =
++ div64_u64(XWWDT_MAX_COUNT_WINDOW_COMBINED, xdev->freq) * 1000;
+
+ if (closed_window_percent == 0 || closed_window_percent >= 100)
+ xdev->close_percent = XWWDT_CLOSE_WINDOW_PERCENT;
+@@ -167,6 +176,48 @@ static int xwwdt_probe(struct platform_device *pdev)
+ xdev->close_percent = closed_window_percent;
+
+ watchdog_init_timeout(xilinx_wwdt_wdd, wwdt_timeout, &pdev->dev);
++
++ /* Calculate ticks for 1 millisecond */
++ ms_count = div_u64(xdev->freq, 1000);
++ timeout_ms = xilinx_wwdt_wdd->timeout * 1000;
++ timeout_count = timeout_ms * ms_count;
++
++ if (timeout_ms > xilinx_wwdt_wdd->max_hw_heartbeat_ms) {
++ /*
++ * To avoid ping restrictions until the minimum hardware heartbeat,
++ * we will solely rely on the open window and
++ * adjust the minimum hardware heartbeat to 0.
++ */
++ xdev->closed_timeout = 0;
++ xdev->open_timeout = XWWDT_MAX_COUNT_WINDOW;
++ xilinx_wwdt_wdd->min_hw_heartbeat_ms = 0;
++ xilinx_wwdt_wdd->max_hw_heartbeat_ms = xilinx_wwdt_wdd->max_hw_heartbeat_ms / 2;
++ } else {
++ xdev->closed_timeout = div64_u64(timeout_count * xdev->close_percent, 100);
++ xilinx_wwdt_wdd->min_hw_heartbeat_ms =
++ div64_u64(timeout_ms * xdev->close_percent, 100);
++
++ if (timeout_ms > xilinx_wwdt_wdd->max_hw_heartbeat_ms / 2) {
++ max_per_window_ms = xilinx_wwdt_wdd->max_hw_heartbeat_ms / 2;
++ min_per_window_ms = timeout_ms - max_per_window_ms;
++
++ if (xilinx_wwdt_wdd->min_hw_heartbeat_ms > max_per_window_ms) {
++ dev_info(xilinx_wwdt_wdd->parent,
++ "Closed window cannot be set to %d%%. Using maximum supported value.\n",
++ xdev->close_percent);
++ xdev->closed_timeout = max_per_window_ms * ms_count;
++ xilinx_wwdt_wdd->min_hw_heartbeat_ms = max_per_window_ms;
++ } else if (xilinx_wwdt_wdd->min_hw_heartbeat_ms < min_per_window_ms) {
++ dev_info(xilinx_wwdt_wdd->parent,
++ "Closed window cannot be set to %d%%. Using minimum supported value.\n",
++ xdev->close_percent);
++ xdev->closed_timeout = min_per_window_ms * ms_count;
++ xilinx_wwdt_wdd->min_hw_heartbeat_ms = min_per_window_ms;
++ }
++ }
++ xdev->open_timeout = timeout_count - xdev->closed_timeout;
++ }
++
+ spin_lock_init(&xdev->spinlock);
+ watchdog_set_drvdata(xilinx_wwdt_wdd, xdev);
+ watchdog_set_nowayout(xilinx_wwdt_wdd, 1);
+diff --git a/fs/btrfs/dev-replace.c b/fs/btrfs/dev-replace.c
+index 83d5cdd77f293e..604399e59a3d10 100644
+--- a/fs/btrfs/dev-replace.c
++++ b/fs/btrfs/dev-replace.c
+@@ -641,6 +641,7 @@ static int btrfs_dev_replace_start(struct btrfs_fs_info *fs_info,
+ return ret;
+
+ down_write(&dev_replace->rwsem);
++ dev_replace->replace_task = current;
+ switch (dev_replace->replace_state) {
+ case BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED:
+ case BTRFS_IOCTL_DEV_REPLACE_STATE_FINISHED:
+@@ -994,6 +995,7 @@ static int btrfs_dev_replace_finishing(struct btrfs_fs_info *fs_info,
+ list_add(&tgt_device->dev_alloc_list, &fs_devices->alloc_list);
+ fs_devices->rw_devices++;
+
++ dev_replace->replace_task = NULL;
+ up_write(&dev_replace->rwsem);
+ btrfs_rm_dev_replace_blocked(fs_info);
+
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index b11bfe68dd65fb..43b7b331b2da36 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3202,8 +3202,7 @@ int btrfs_check_features(struct btrfs_fs_info *fs_info, bool is_rw_mount)
+ return 0;
+ }
+
+-int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_devices,
+- const char *options)
++int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_devices)
+ {
+ u32 sectorsize;
+ u32 nodesize;
+diff --git a/fs/btrfs/disk-io.h b/fs/btrfs/disk-io.h
+index 99af64d3f27781..127e31e0834709 100644
+--- a/fs/btrfs/disk-io.h
++++ b/fs/btrfs/disk-io.h
+@@ -52,8 +52,7 @@ struct extent_buffer *btrfs_find_create_tree_block(
+ int btrfs_start_pre_rw_mount(struct btrfs_fs_info *fs_info);
+ int btrfs_check_super_csum(struct btrfs_fs_info *fs_info,
+ const struct btrfs_super_block *disk_sb);
+-int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_devices,
+- const char *options);
++int __cold open_ctree(struct super_block *sb, struct btrfs_fs_devices *fs_devices);
+ void __cold close_ctree(struct btrfs_fs_info *fs_info);
+ int btrfs_validate_super(const struct btrfs_fs_info *fs_info,
+ const struct btrfs_super_block *sb, int mirror_num);
+diff --git a/fs/btrfs/fs.h b/fs/btrfs/fs.h
+index 79f64e383eddf8..cbfb225858a59f 100644
+--- a/fs/btrfs/fs.h
++++ b/fs/btrfs/fs.h
+@@ -317,6 +317,8 @@ struct btrfs_dev_replace {
+
+ struct percpu_counter bio_counter;
+ wait_queue_head_t replace_wait;
++
++ struct task_struct *replace_task;
+ };
+
+ /*
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index d067db2619713f..58ffe78132d9d6 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -9857,6 +9857,7 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ if (btrfs_root_dead(root)) {
+ spin_unlock(&root->root_item_lock);
+
++ btrfs_drew_write_unlock(&root->snapshot_lock);
+ btrfs_exclop_finish(fs_info);
+ btrfs_warn(fs_info,
+ "cannot activate swapfile because subvolume %llu is being deleted",
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index c64d0713412231..8292e488d3d777 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -946,8 +946,7 @@ static int get_default_subvol_objectid(struct btrfs_fs_info *fs_info, u64 *objec
+ }
+
+ static int btrfs_fill_super(struct super_block *sb,
+- struct btrfs_fs_devices *fs_devices,
+- void *data)
++ struct btrfs_fs_devices *fs_devices)
+ {
+ struct inode *inode;
+ struct btrfs_fs_info *fs_info = btrfs_sb(sb);
+@@ -971,7 +970,7 @@ static int btrfs_fill_super(struct super_block *sb,
+ return err;
+ }
+
+- err = open_ctree(sb, fs_devices, (char *)data);
++ err = open_ctree(sb, fs_devices);
+ if (err) {
+ btrfs_err(fs_info, "open_ctree failed");
+ return err;
+@@ -1887,18 +1886,21 @@ static int btrfs_get_tree_super(struct fs_context *fc)
+
+ if (sb->s_root) {
+ btrfs_close_devices(fs_devices);
+- if ((fc->sb_flags ^ sb->s_flags) & SB_RDONLY)
+- ret = -EBUSY;
++ /*
++ * At this stage we may have an RO flag mismatch between
++ * fc->sb_flags and sb->s_flags. The caller should detect
++ * such a mismatch and reconfigure with the sb->s_umount
++ * rwsem held if needed.
++ */
+ } else {
+ snprintf(sb->s_id, sizeof(sb->s_id), "%pg", bdev);
+ shrinker_debugfs_rename(sb->s_shrink, "sb-btrfs:%s", sb->s_id);
+ btrfs_sb(sb)->bdev_holder = &btrfs_fs_type;
+- ret = btrfs_fill_super(sb, fs_devices, NULL);
+- }
+-
+- if (ret) {
+- deactivate_locked_super(sb);
+- return ret;
++ ret = btrfs_fill_super(sb, fs_devices);
++ if (ret) {
++ deactivate_locked_super(sb);
++ return ret;
++ }
+ }
+
+ btrfs_clear_oneshot_options(fs_info);
+@@ -1984,39 +1986,18 @@ static int btrfs_get_tree_super(struct fs_context *fc)
+ * btrfs or not, setting the whole super block RO. To make per-subvolume mounting
+ * work with different options work we need to keep backward compatibility.
+ */
+-static struct vfsmount *btrfs_reconfigure_for_mount(struct fs_context *fc)
++static int btrfs_reconfigure_for_mount(struct fs_context *fc, struct vfsmount *mnt)
+ {
+- struct vfsmount *mnt;
+- int ret;
+- const bool ro2rw = !(fc->sb_flags & SB_RDONLY);
+-
+- /*
+- * We got an EBUSY because our SB_RDONLY flag didn't match the existing
+- * super block, so invert our setting here and retry the mount so we
+- * can get our vfsmount.
+- */
+- if (ro2rw)
+- fc->sb_flags |= SB_RDONLY;
+- else
+- fc->sb_flags &= ~SB_RDONLY;
+-
+- mnt = fc_mount(fc);
+- if (IS_ERR(mnt))
+- return mnt;
++ int ret = 0;
+
+- if (!ro2rw)
+- return mnt;
++ if (fc->sb_flags & SB_RDONLY)
++ return ret;
+
+- /* We need to convert to rw, call reconfigure. */
+- fc->sb_flags &= ~SB_RDONLY;
+ down_write(&mnt->mnt_sb->s_umount);
+- ret = btrfs_reconfigure(fc);
++ if (!(fc->sb_flags & SB_RDONLY) && (mnt->mnt_sb->s_flags & SB_RDONLY))
++ ret = btrfs_reconfigure(fc);
+ up_write(&mnt->mnt_sb->s_umount);
+- if (ret) {
+- mntput(mnt);
+- return ERR_PTR(ret);
+- }
+- return mnt;
++ return ret;
+ }
+
+ static int btrfs_get_tree_subvol(struct fs_context *fc)
+@@ -2026,6 +2007,7 @@ static int btrfs_get_tree_subvol(struct fs_context *fc)
+ struct fs_context *dup_fc;
+ struct dentry *dentry;
+ struct vfsmount *mnt;
++ int ret = 0;
+
+ /*
+ * Setup a dummy root and fs_info for test/set super. This is because
+@@ -2068,11 +2050,16 @@ static int btrfs_get_tree_subvol(struct fs_context *fc)
+ fc->security = NULL;
+
+ mnt = fc_mount(dup_fc);
+- if (PTR_ERR_OR_ZERO(mnt) == -EBUSY)
+- mnt = btrfs_reconfigure_for_mount(dup_fc);
+- put_fs_context(dup_fc);
+- if (IS_ERR(mnt))
++ if (IS_ERR(mnt)) {
++ put_fs_context(dup_fc);
+ return PTR_ERR(mnt);
++ }
++ ret = btrfs_reconfigure_for_mount(dup_fc, mnt);
++ put_fs_context(dup_fc);
++ if (ret) {
++ mntput(mnt);
++ return ret;
++ }
+
+ /*
+ * This frees ->subvol_name, because if it isn't set we have to
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index eb51b609190fb5..0c4d14c59ebec5 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -732,6 +732,114 @@ const u8 *btrfs_sb_fsid_ptr(const struct btrfs_super_block *sb)
+ return has_metadata_uuid ? sb->metadata_uuid : sb->fsid;
+ }
+
++/*
++ * Very unusual soft links can be passed in.
++ * One example is "/proc/self/fd/<fd>", which can be a soft link to
++ * a block device.
++ *
++ * But it is never a good idea to rely on such names.
++ * Here we check whether the path (without following symlinks) is a
++ * proper one inside "/dev/".
++ */
++static bool is_good_dev_path(const char *dev_path)
++{
++ struct path path = { .mnt = NULL, .dentry = NULL };
++ char *path_buf = NULL;
++ char *resolved_path;
++ bool is_good = false;
++ int ret;
++
++ if (!dev_path)
++ goto out;
++
++ path_buf = kmalloc(PATH_MAX, GFP_KERNEL);
++ if (!path_buf)
++ goto out;
++
++ /*
++ * Do not follow soft links; just check if the original path is inside
++ * "/dev/".
++ */
++ ret = kern_path(dev_path, 0, &path);
++ if (ret)
++ goto out;
++ resolved_path = d_path(&path, path_buf, PATH_MAX);
++ if (IS_ERR(resolved_path))
++ goto out;
++ if (strncmp(resolved_path, "/dev/", strlen("/dev/")))
++ goto out;
++ is_good = true;
++out:
++ kfree(path_buf);
++ path_put(&path);
++ return is_good;
++}
++
++static int get_canonical_dev_path(const char *dev_path, char *canonical)
++{
++ struct path path = { .mnt = NULL, .dentry = NULL };
++ char *path_buf = NULL;
++ char *resolved_path;
++ int ret;
++
++ if (!dev_path) {
++ ret = -EINVAL;
++ goto out;
++ }
++
++ path_buf = kmalloc(PATH_MAX, GFP_KERNEL);
++ if (!path_buf) {
++ ret = -ENOMEM;
++ goto out;
++ }
++
++ ret = kern_path(dev_path, LOOKUP_FOLLOW, &path);
++ if (ret)
++ goto out;
++ resolved_path = d_path(&path, path_buf, PATH_MAX);
++ ret = strscpy(canonical, resolved_path, PATH_MAX);
++out:
++ kfree(path_buf);
++ path_put(&path);
++ return ret;
++}
++
++static bool is_same_device(struct btrfs_device *device, const char *new_path)
++{
++ struct path old = { .mnt = NULL, .dentry = NULL };
++ struct path new = { .mnt = NULL, .dentry = NULL };
++ char *old_path = NULL;
++ bool is_same = false;
++ int ret;
++
++ if (!device->name)
++ goto out;
++
++ old_path = kzalloc(PATH_MAX, GFP_NOFS);
++ if (!old_path)
++ goto out;
++
++ rcu_read_lock();
++ ret = strscpy(old_path, rcu_str_deref(device->name), PATH_MAX);
++ rcu_read_unlock();
++ if (ret < 0)
++ goto out;
++
++ ret = kern_path(old_path, LOOKUP_FOLLOW, &old);
++ if (ret)
++ goto out;
++ ret = kern_path(new_path, LOOKUP_FOLLOW, &new);
++ if (ret)
++ goto out;
++ if (path_equal(&old, &new))
++ is_same = true;
++out:
++ kfree(old_path);
++ path_put(&old);
++ path_put(&new);
++ return is_same;
++}
++
+ /*
+ * Add new device to list of registered devices
+ *
+@@ -852,7 +960,7 @@ static noinline struct btrfs_device *device_list_add(const char *path,
+ MAJOR(path_devt), MINOR(path_devt),
+ current->comm, task_pid_nr(current));
+
+- } else if (!device->name || strcmp(device->name->str, path)) {
++ } else if (!device->name || !is_same_device(device, path)) {
+ /*
+ * When FS is already mounted.
+ * 1. If you are here and if the device->name is NULL that
+@@ -1383,12 +1491,23 @@ struct btrfs_device *btrfs_scan_one_device(const char *path, blk_mode_t flags,
+ bool new_device_added = false;
+ struct btrfs_device *device = NULL;
+ struct file *bdev_file;
++ char *canonical_path = NULL;
+ u64 bytenr;
+ dev_t devt;
+ int ret;
+
+ lockdep_assert_held(&uuid_mutex);
+
++ if (!is_good_dev_path(path)) {
++ canonical_path = kmalloc(PATH_MAX, GFP_KERNEL);
++ if (canonical_path) {
++ ret = get_canonical_dev_path(path, canonical_path);
++ if (ret < 0) {
++ kfree(canonical_path);
++ canonical_path = NULL;
++ }
++ }
++ }
+ /*
+ * Avoid an exclusive open here, as the systemd-udev may initiate the
+ * device scan which may race with the user's mount or mkfs command,
+@@ -1433,7 +1552,8 @@ struct btrfs_device *btrfs_scan_one_device(const char *path, blk_mode_t flags,
+ goto free_disk_super;
+ }
+
+- device = device_list_add(path, disk_super, &new_device_added);
++ device = device_list_add(canonical_path ? : path, disk_super,
++ &new_device_added);
+ if (!IS_ERR(device) && new_device_added)
+ btrfs_free_stale_devices(device->devt, device);
+
+@@ -1442,6 +1562,7 @@ struct btrfs_device *btrfs_scan_one_device(const char *path, blk_mode_t flags,
+
+ error_bdev_put:
+ fput(bdev_file);
++ kfree(canonical_path);
+
+ return device;
+ }
+@@ -2721,8 +2842,6 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ set_blocksize(device->bdev_file, BTRFS_BDEV_BLOCKSIZE);
+
+ if (seeding_dev) {
+- btrfs_clear_sb_rdonly(sb);
+-
+ /* GFP_KERNEL allocation must not be under device_list_mutex */
+ seed_devices = btrfs_init_sprout(fs_info);
+ if (IS_ERR(seed_devices)) {
+@@ -2865,8 +2984,6 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
+ mutex_unlock(&fs_info->chunk_mutex);
+ mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ error_trans:
+- if (seeding_dev)
+- btrfs_set_sb_rdonly(sb);
+ if (trans)
+ btrfs_end_transaction(trans);
+ error_free_zone:
+@@ -6481,13 +6598,15 @@ int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
+ max_len = btrfs_max_io_len(map, map_offset, &io_geom);
+ *length = min_t(u64, map->chunk_len - map_offset, max_len);
+
+- down_read(&dev_replace->rwsem);
++ if (dev_replace->replace_task != current)
++ down_read(&dev_replace->rwsem);
++
+ dev_replace_is_ongoing = btrfs_dev_replace_is_ongoing(dev_replace);
+ /*
+ * Hold the semaphore for read during the whole operation, write is
+ * requested at commit time but must wait.
+ */
+- if (!dev_replace_is_ongoing)
++ if (!dev_replace_is_ongoing && dev_replace->replace_task != current)
+ up_read(&dev_replace->rwsem);
+
+ switch (map->type & BTRFS_BLOCK_GROUP_PROFILE_MASK) {
+@@ -6627,7 +6746,7 @@ int btrfs_map_block(struct btrfs_fs_info *fs_info, enum btrfs_map_op op,
+ bioc->mirror_num = io_geom.mirror_num;
+
+ out:
+- if (dev_replace_is_ongoing) {
++ if (dev_replace_is_ongoing && dev_replace->replace_task != current) {
+ lockdep_assert_held(&dev_replace->rwsem);
+ /* Unlock and let waiting writers proceed */
+ up_read(&dev_replace->rwsem);
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index 865dc70a9dfc47..dddedaef5e93dd 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -2861,16 +2861,14 @@ static int validate_lock_args(struct dlm_ls *ls, struct dlm_lkb *lkb,
+ case -EINVAL:
+ /* annoy the user because dlm usage is wrong */
+ WARN_ON(1);
+- log_error(ls, "%s %d %x %x %x %d %d %s", __func__,
++ log_error(ls, "%s %d %x %x %x %d %d", __func__,
+ rv, lkb->lkb_id, dlm_iflags_val(lkb), args->flags,
+- lkb->lkb_status, lkb->lkb_wait_type,
+- lkb->lkb_resource->res_name);
++ lkb->lkb_status, lkb->lkb_wait_type);
+ break;
+ default:
+- log_debug(ls, "%s %d %x %x %x %d %d %s", __func__,
++ log_debug(ls, "%s %d %x %x %x %d %d", __func__,
+ rv, lkb->lkb_id, dlm_iflags_val(lkb), args->flags,
+- lkb->lkb_status, lkb->lkb_wait_type,
+- lkb->lkb_resource->res_name);
++ lkb->lkb_status, lkb->lkb_wait_type);
+ break;
+ }
+
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 1ae4542f0bd88b..90fbab6b6f0363 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -823,7 +823,8 @@ static bool __ep_remove(struct eventpoll *ep, struct epitem *epi, bool force)
+ to_free = NULL;
+ head = file->f_ep;
+ if (head->first == &epi->fllink && !epi->fllink.next) {
+- file->f_ep = NULL;
++ /* See eventpoll_release() for details. */
++ WRITE_ONCE(file->f_ep, NULL);
+ if (!is_file_epoll(file)) {
+ struct epitems_head *v;
+ v = container_of(head, struct epitems_head, epitems);
+@@ -1603,7 +1604,8 @@ static int attach_epitem(struct file *file, struct epitem *epi)
+ spin_unlock(&file->f_lock);
+ goto allocate;
+ }
+- file->f_ep = head;
++ /* See eventpoll_release() for details. */
++ WRITE_ONCE(file->f_ep, head);
+ to_free = NULL;
+ }
+ hlist_add_head_rcu(&epi->fllink, file->f_ep);
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 88f98dc4402753..60909af2d4a537 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -4482,7 +4482,7 @@ static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
+ int depth = 0;
+ struct ext4_map_blocks map;
+ unsigned int credits;
+- loff_t epos;
++ loff_t epos, old_size = i_size_read(inode);
+
+ BUG_ON(!ext4_test_inode_flag(inode, EXT4_INODE_EXTENTS));
+ map.m_lblk = offset;
+@@ -4541,6 +4541,11 @@ static int ext4_alloc_file_blocks(struct file *file, ext4_lblk_t offset,
+ if (ext4_update_inode_size(inode, epos) & 0x1)
+ inode_set_mtime_to_ts(inode,
+ inode_get_ctime(inode));
++ if (epos > old_size) {
++ pagecache_isize_extended(inode, old_size, epos);
++ ext4_zero_partial_blocks(handle, inode,
++ old_size, epos - old_size);
++ }
+ }
+ ret2 = ext4_mark_inode_dirty(handle, inode);
+ ext4_update_inode_fsync_trans(handle, inode, 1);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 99d09cd9c6a37e..67a5b937f5a92d 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1307,8 +1307,10 @@ static int ext4_write_end(struct file *file,
+ folio_unlock(folio);
+ folio_put(folio);
+
+- if (old_size < pos && !verity)
++ if (old_size < pos && !verity) {
+ pagecache_isize_extended(inode, old_size, pos);
++ ext4_zero_partial_blocks(handle, inode, old_size, pos - old_size);
++ }
+ /*
+ * Don't mark the inode dirty under folio lock. First, it unnecessarily
+ * makes the holding time of folio lock longer. Second, it forces lock
+@@ -1423,8 +1425,10 @@ static int ext4_journalled_write_end(struct file *file,
+ folio_unlock(folio);
+ folio_put(folio);
+
+- if (old_size < pos && !verity)
++ if (old_size < pos && !verity) {
+ pagecache_isize_extended(inode, old_size, pos);
++ ext4_zero_partial_blocks(handle, inode, old_size, pos - old_size);
++ }
+
+ if (size_changed) {
+ ret2 = ext4_mark_inode_dirty(handle, inode);
+@@ -2985,7 +2989,8 @@ static int ext4_da_do_write_end(struct address_space *mapping,
+ struct inode *inode = mapping->host;
+ loff_t old_size = inode->i_size;
+ bool disksize_changed = false;
+- loff_t new_i_size;
++ loff_t new_i_size, zero_len = 0;
++ handle_t *handle;
+
+ if (unlikely(!folio_buffers(folio))) {
+ folio_unlock(folio);
+@@ -3029,18 +3034,21 @@ static int ext4_da_do_write_end(struct address_space *mapping,
+ folio_unlock(folio);
+ folio_put(folio);
+
+- if (old_size < pos)
++ if (pos > old_size) {
+ pagecache_isize_extended(inode, old_size, pos);
++ zero_len = pos - old_size;
++ }
+
+- if (disksize_changed) {
+- handle_t *handle;
++ if (!disksize_changed && !zero_len)
++ return copied;
+
+- handle = ext4_journal_start(inode, EXT4_HT_INODE, 2);
+- if (IS_ERR(handle))
+- return PTR_ERR(handle);
+- ext4_mark_inode_dirty(handle, inode);
+- ext4_journal_stop(handle);
+- }
++ handle = ext4_journal_start(inode, EXT4_HT_INODE, 2);
++ if (IS_ERR(handle))
++ return PTR_ERR(handle);
++ if (zero_len)
++ ext4_zero_partial_blocks(handle, inode, old_size, zero_len);
++ ext4_mark_inode_dirty(handle, inode);
++ ext4_journal_stop(handle);
+
+ return copied;
+ }
+@@ -5426,6 +5434,14 @@ int ext4_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ }
+
+ if (attr->ia_size != inode->i_size) {
++ /* attach jbd2 jinode for EOF folio tail zeroing */
++ if (attr->ia_size & (inode->i_sb->s_blocksize - 1) ||
++ oldsize & (inode->i_sb->s_blocksize - 1)) {
++ error = ext4_inode_attach_jinode(inode);
++ if (error)
++ goto err_out;
++ }
++
+ handle = ext4_journal_start(inode, EXT4_HT_INODE, 3);
+ if (IS_ERR(handle)) {
+ error = PTR_ERR(handle);
+@@ -5436,12 +5452,17 @@ int ext4_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ orphan = 1;
+ }
+ /*
+- * Update c/mtime on truncate up, ext4_truncate() will
+- * update c/mtime in shrink case below
++ * Update c/mtime and tail zero the EOF folio on
++ * truncate up. ext4_truncate() handles the shrink case
++ * below.
+ */
+- if (!shrink)
++ if (!shrink) {
+ inode_set_mtime_to_ts(inode,
+ inode_set_ctime_current(inode));
++ if (oldsize & (inode->i_sb->s_blocksize - 1))
++ ext4_block_truncate_page(handle,
++ inode->i_mapping, oldsize);
++ }
+
+ if (shrink)
+ ext4_fc_track_range(handle, inode,
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index 9efe4c00d75bb3..da0960d496ae09 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -1819,16 +1819,6 @@ bool f2fs_overwrite_io(struct inode *inode, loff_t pos, size_t len)
+ return true;
+ }
+
+-static inline u64 bytes_to_blks(struct inode *inode, u64 bytes)
+-{
+- return (bytes >> inode->i_blkbits);
+-}
+-
+-static inline u64 blks_to_bytes(struct inode *inode, u64 blks)
+-{
+- return (blks << inode->i_blkbits);
+-}
+-
+ static int f2fs_xattr_fiemap(struct inode *inode,
+ struct fiemap_extent_info *fieinfo)
+ {
+@@ -1854,7 +1844,7 @@ static int f2fs_xattr_fiemap(struct inode *inode,
+ return err;
+ }
+
+- phys = blks_to_bytes(inode, ni.blk_addr);
++ phys = F2FS_BLK_TO_BYTES(ni.blk_addr);
+ offset = offsetof(struct f2fs_inode, i_addr) +
+ sizeof(__le32) * (DEF_ADDRS_PER_INODE -
+ get_inline_xattr_addrs(inode));
+@@ -1886,7 +1876,7 @@ static int f2fs_xattr_fiemap(struct inode *inode,
+ return err;
+ }
+
+- phys = blks_to_bytes(inode, ni.blk_addr);
++ phys = F2FS_BLK_TO_BYTES(ni.blk_addr);
+ len = inode->i_sb->s_blocksize;
+
+ f2fs_put_page(page, 1);
+@@ -1906,7 +1896,7 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ u64 start, u64 len)
+ {
+ struct f2fs_map_blocks map;
+- sector_t start_blk, last_blk;
++ sector_t start_blk, last_blk, blk_len, max_len;
+ pgoff_t next_pgofs;
+ u64 logical = 0, phys = 0, size = 0;
+ u32 flags = 0;
+@@ -1948,16 +1938,15 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ goto out;
+ }
+
+- if (bytes_to_blks(inode, len) == 0)
+- len = blks_to_bytes(inode, 1);
+-
+- start_blk = bytes_to_blks(inode, start);
+- last_blk = bytes_to_blks(inode, start + len - 1);
++ start_blk = F2FS_BYTES_TO_BLK(start);
++ last_blk = F2FS_BYTES_TO_BLK(start + len - 1);
++ blk_len = last_blk - start_blk + 1;
++ max_len = F2FS_BYTES_TO_BLK(maxbytes) - start_blk;
+
+ next:
+ memset(&map, 0, sizeof(map));
+ map.m_lblk = start_blk;
+- map.m_len = bytes_to_blks(inode, len);
++ map.m_len = blk_len;
+ map.m_next_pgofs = &next_pgofs;
+ map.m_seg_type = NO_CHECK_TYPE;
+
+@@ -1974,12 +1963,23 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ if (!compr_cluster && !(map.m_flags & F2FS_MAP_FLAGS)) {
+ start_blk = next_pgofs;
+
+- if (blks_to_bytes(inode, start_blk) < maxbytes)
++ if (F2FS_BLK_TO_BYTES(start_blk) < maxbytes)
+ goto prep_next;
+
+ flags |= FIEMAP_EXTENT_LAST;
+ }
+
++ /*
++ * current extent may cross boundary of inquiry, increase len to
++ * requery.
++ */
++ if (!compr_cluster && (map.m_flags & F2FS_MAP_MAPPED) &&
++ map.m_lblk + map.m_len - 1 == last_blk &&
++ blk_len != max_len) {
++ blk_len = max_len;
++ goto next;
++ }
++
+ compr_appended = false;
+ /* In a case of compressed cluster, append this to the last extent */
+ if (compr_cluster && ((map.m_flags & F2FS_MAP_DELALLOC) ||
+@@ -2011,14 +2011,14 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ } else if (compr_appended) {
+ unsigned int appended_blks = cluster_size -
+ count_in_cluster + 1;
+- size += blks_to_bytes(inode, appended_blks);
++ size += F2FS_BLK_TO_BYTES(appended_blks);
+ start_blk += appended_blks;
+ compr_cluster = false;
+ } else {
+- logical = blks_to_bytes(inode, start_blk);
++ logical = F2FS_BLK_TO_BYTES(start_blk);
+ phys = __is_valid_data_blkaddr(map.m_pblk) ?
+- blks_to_bytes(inode, map.m_pblk) : 0;
+- size = blks_to_bytes(inode, map.m_len);
++ F2FS_BLK_TO_BYTES(map.m_pblk) : 0;
++ size = F2FS_BLK_TO_BYTES(map.m_len);
+ flags = 0;
+
+ if (compr_cluster) {
+@@ -2026,13 +2026,13 @@ int f2fs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ count_in_cluster += map.m_len;
+ if (count_in_cluster == cluster_size) {
+ compr_cluster = false;
+- size += blks_to_bytes(inode, 1);
++ size += F2FS_BLKSIZE;
+ }
+ } else if (map.m_flags & F2FS_MAP_DELALLOC) {
+ flags = FIEMAP_EXTENT_UNWRITTEN;
+ }
+
+- start_blk += bytes_to_blks(inode, size);
++ start_blk += F2FS_BYTES_TO_BLK(size);
+ }
+
+ prep_next:
+@@ -2070,7 +2070,7 @@ static int f2fs_read_single_page(struct inode *inode, struct folio *folio,
+ struct readahead_control *rac)
+ {
+ struct bio *bio = *bio_ret;
+- const unsigned blocksize = blks_to_bytes(inode, 1);
++ const unsigned int blocksize = F2FS_BLKSIZE;
+ sector_t block_in_file;
+ sector_t last_block;
+ sector_t last_block_in_file;
+@@ -2080,8 +2080,8 @@ static int f2fs_read_single_page(struct inode *inode, struct folio *folio,
+
+ block_in_file = (sector_t)index;
+ last_block = block_in_file + nr_pages;
+- last_block_in_file = bytes_to_blks(inode,
+- f2fs_readpage_limit(inode) + blocksize - 1);
++ last_block_in_file = F2FS_BYTES_TO_BLK(f2fs_readpage_limit(inode) +
++ blocksize - 1);
+ if (last_block > last_block_in_file)
+ last_block = last_block_in_file;
+
+@@ -2181,7 +2181,7 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+ struct bio *bio = *bio_ret;
+ unsigned int start_idx = cc->cluster_idx << cc->log_cluster_size;
+ sector_t last_block_in_file;
+- const unsigned blocksize = blks_to_bytes(inode, 1);
++ const unsigned int blocksize = F2FS_BLKSIZE;
+ struct decompress_io_ctx *dic = NULL;
+ struct extent_info ei = {};
+ bool from_dnode = true;
+@@ -2190,8 +2190,8 @@ int f2fs_read_multi_pages(struct compress_ctx *cc, struct bio **bio_ret,
+
+ f2fs_bug_on(sbi, f2fs_cluster_is_empty(cc));
+
+- last_block_in_file = bytes_to_blks(inode,
+- f2fs_readpage_limit(inode) + blocksize - 1);
++ last_block_in_file = F2FS_BYTES_TO_BLK(f2fs_readpage_limit(inode) +
++ blocksize - 1);
+
+ /* get rid of pages beyond EOF */
+ for (i = 0; i < cc->cluster_size; i++) {
+@@ -3952,7 +3952,7 @@ static int check_swap_activate(struct swap_info_struct *sis,
+ * to be very smart.
+ */
+ cur_lblock = 0;
+- last_lblock = bytes_to_blks(inode, i_size_read(inode));
++ last_lblock = F2FS_BYTES_TO_BLK(i_size_read(inode));
+
+ while (cur_lblock < last_lblock && cur_lblock < sis->max) {
+ struct f2fs_map_blocks map;
+@@ -4195,8 +4195,8 @@ static int f2fs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ pgoff_t next_pgofs = 0;
+ int err;
+
+- map.m_lblk = bytes_to_blks(inode, offset);
+- map.m_len = bytes_to_blks(inode, offset + length - 1) - map.m_lblk + 1;
++ map.m_lblk = F2FS_BYTES_TO_BLK(offset);
++ map.m_len = F2FS_BYTES_TO_BLK(offset + length - 1) - map.m_lblk + 1;
+ map.m_next_pgofs = &next_pgofs;
+ map.m_seg_type = f2fs_rw_hint_to_seg_type(F2FS_I_SB(inode),
+ inode->i_write_hint);
+@@ -4207,7 +4207,7 @@ static int f2fs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ if (err)
+ return err;
+
+- iomap->offset = blks_to_bytes(inode, map.m_lblk);
++ iomap->offset = F2FS_BLK_TO_BYTES(map.m_lblk);
+
+ /*
+ * When inline encryption is enabled, sometimes I/O to an encrypted file
+@@ -4227,21 +4227,21 @@ static int f2fs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ if (WARN_ON_ONCE(map.m_pblk == NEW_ADDR))
+ return -EINVAL;
+
+- iomap->length = blks_to_bytes(inode, map.m_len);
++ iomap->length = F2FS_BLK_TO_BYTES(map.m_len);
+ iomap->type = IOMAP_MAPPED;
+ iomap->flags |= IOMAP_F_MERGED;
+ iomap->bdev = map.m_bdev;
+- iomap->addr = blks_to_bytes(inode, map.m_pblk);
++ iomap->addr = F2FS_BLK_TO_BYTES(map.m_pblk);
+ } else {
+ if (flags & IOMAP_WRITE)
+ return -ENOTBLK;
+
+ if (map.m_pblk == NULL_ADDR) {
+- iomap->length = blks_to_bytes(inode, next_pgofs) -
+- iomap->offset;
++ iomap->length = F2FS_BLK_TO_BYTES(next_pgofs) -
++ iomap->offset;
+ iomap->type = IOMAP_HOLE;
+ } else if (map.m_pblk == NEW_ADDR) {
+- iomap->length = blks_to_bytes(inode, map.m_len);
++ iomap->length = F2FS_BLK_TO_BYTES(map.m_len);
+ iomap->type = IOMAP_UNWRITTEN;
+ } else {
+ f2fs_bug_on(F2FS_I_SB(inode), 1);
+diff --git a/fs/f2fs/extent_cache.c b/fs/f2fs/extent_cache.c
+index 62ac440d94168a..fb09c8e9bc5732 100644
+--- a/fs/f2fs/extent_cache.c
++++ b/fs/f2fs/extent_cache.c
+@@ -346,21 +346,22 @@ static struct extent_tree *__grab_extent_tree(struct inode *inode,
+ }
+
+ static unsigned int __free_extent_tree(struct f2fs_sb_info *sbi,
+- struct extent_tree *et)
++ struct extent_tree *et, unsigned int nr_shrink)
+ {
+ struct rb_node *node, *next;
+ struct extent_node *en;
+- unsigned int count = atomic_read(&et->node_cnt);
++ unsigned int count;
+
+ node = rb_first_cached(&et->root);
+- while (node) {
++
++ for (count = 0; node && count < nr_shrink; count++) {
+ next = rb_next(node);
+ en = rb_entry(node, struct extent_node, rb_node);
+ __release_extent_node(sbi, et, en);
+ node = next;
+ }
+
+- return count - atomic_read(&et->node_cnt);
++ return count;
+ }
+
+ static void __drop_largest_extent(struct extent_tree *et,
+@@ -579,6 +580,30 @@ static struct extent_node *__insert_extent_tree(struct f2fs_sb_info *sbi,
+ return en;
+ }
+
++static unsigned int __destroy_extent_node(struct inode *inode,
++ enum extent_type type)
++{
++ struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
++ struct extent_tree *et = F2FS_I(inode)->extent_tree[type];
++ unsigned int nr_shrink = type == EX_READ ?
++ READ_EXTENT_CACHE_SHRINK_NUMBER :
++ AGE_EXTENT_CACHE_SHRINK_NUMBER;
++ unsigned int node_cnt = 0;
++
++ if (!et || !atomic_read(&et->node_cnt))
++ return 0;
++
++ while (atomic_read(&et->node_cnt)) {
++ write_lock(&et->lock);
++ node_cnt += __free_extent_tree(sbi, et, nr_shrink);
++ write_unlock(&et->lock);
++ }
++
++ f2fs_bug_on(sbi, atomic_read(&et->node_cnt));
++
++ return node_cnt;
++}
++
+ static void __update_extent_tree_range(struct inode *inode,
+ struct extent_info *tei, enum extent_type type)
+ {
+@@ -649,7 +674,9 @@ static void __update_extent_tree_range(struct inode *inode,
+ }
+
+ if (end < org_end && (type != EX_READ ||
+- org_end - end >= F2FS_MIN_EXTENT_LEN)) {
++ (org_end - end >= F2FS_MIN_EXTENT_LEN &&
++ atomic_read(&et->node_cnt) <
++ sbi->max_read_extent_count))) {
+ if (parts) {
+ __set_extent_info(&ei,
+ end, org_end - end,
+@@ -717,9 +744,6 @@ static void __update_extent_tree_range(struct inode *inode,
+ }
+ }
+
+- if (is_inode_flag_set(inode, FI_NO_EXTENT))
+- __free_extent_tree(sbi, et);
+-
+ if (et->largest_updated) {
+ et->largest_updated = false;
+ updated = true;
+@@ -737,6 +761,9 @@ static void __update_extent_tree_range(struct inode *inode,
+ out_read_extent_cache:
+ write_unlock(&et->lock);
+
++ if (is_inode_flag_set(inode, FI_NO_EXTENT))
++ __destroy_extent_node(inode, EX_READ);
++
+ if (updated)
+ f2fs_mark_inode_dirty_sync(inode, true);
+ }
+@@ -899,10 +926,14 @@ static unsigned int __shrink_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink
+ list_for_each_entry_safe(et, next, &eti->zombie_list, list) {
+ if (atomic_read(&et->node_cnt)) {
+ write_lock(&et->lock);
+- node_cnt += __free_extent_tree(sbi, et);
++ node_cnt += __free_extent_tree(sbi, et,
++ nr_shrink - node_cnt - tree_cnt);
+ write_unlock(&et->lock);
+ }
+- f2fs_bug_on(sbi, atomic_read(&et->node_cnt));
++
++ if (atomic_read(&et->node_cnt))
++ goto unlock_out;
++
+ list_del_init(&et->list);
+ radix_tree_delete(&eti->extent_tree_root, et->ino);
+ kmem_cache_free(extent_tree_slab, et);
+@@ -1041,23 +1072,6 @@ unsigned int f2fs_shrink_age_extent_tree(struct f2fs_sb_info *sbi, int nr_shrink
+ return __shrink_extent_tree(sbi, nr_shrink, EX_BLOCK_AGE);
+ }
+
+-static unsigned int __destroy_extent_node(struct inode *inode,
+- enum extent_type type)
+-{
+- struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+- struct extent_tree *et = F2FS_I(inode)->extent_tree[type];
+- unsigned int node_cnt = 0;
+-
+- if (!et || !atomic_read(&et->node_cnt))
+- return 0;
+-
+- write_lock(&et->lock);
+- node_cnt = __free_extent_tree(sbi, et);
+- write_unlock(&et->lock);
+-
+- return node_cnt;
+-}
+-
+ void f2fs_destroy_extent_node(struct inode *inode)
+ {
+ __destroy_extent_node(inode, EX_READ);
+@@ -1066,7 +1080,6 @@ void f2fs_destroy_extent_node(struct inode *inode)
+
+ static void __drop_extent_tree(struct inode *inode, enum extent_type type)
+ {
+- struct f2fs_sb_info *sbi = F2FS_I_SB(inode);
+ struct extent_tree *et = F2FS_I(inode)->extent_tree[type];
+ bool updated = false;
+
+@@ -1074,7 +1087,6 @@ static void __drop_extent_tree(struct inode *inode, enum extent_type type)
+ return;
+
+ write_lock(&et->lock);
+- __free_extent_tree(sbi, et);
+ if (type == EX_READ) {
+ set_inode_flag(inode, FI_NO_EXTENT);
+ if (et->largest.len) {
+@@ -1083,6 +1095,9 @@ static void __drop_extent_tree(struct inode *inode, enum extent_type type)
+ }
+ }
+ write_unlock(&et->lock);
++
++ __destroy_extent_node(inode, type);
++
+ if (updated)
+ f2fs_mark_inode_dirty_sync(inode, true);
+ }
+@@ -1156,6 +1171,7 @@ void f2fs_init_extent_cache_info(struct f2fs_sb_info *sbi)
+ sbi->hot_data_age_threshold = DEF_HOT_DATA_AGE_THRESHOLD;
+ sbi->warm_data_age_threshold = DEF_WARM_DATA_AGE_THRESHOLD;
+ sbi->last_age_weight = LAST_AGE_WEIGHT;
++ sbi->max_read_extent_count = DEF_MAX_READ_EXTENT_COUNT;
+ }
+
+ int __init f2fs_create_extent_cache(void)
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 93a5e1c24e566e..cec3dd205b3df8 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -634,6 +634,9 @@ enum {
+ #define DEF_HOT_DATA_AGE_THRESHOLD 262144
+ #define DEF_WARM_DATA_AGE_THRESHOLD 2621440
+
++/* default max read extent count per inode */
++#define DEF_MAX_READ_EXTENT_COUNT 10240
++
+ /* extent cache type */
+ enum extent_type {
+ EX_READ,
+@@ -1619,6 +1622,7 @@ struct f2fs_sb_info {
+ /* for extent tree cache */
+ struct extent_tree_info extent_tree[NR_EXTENT_CACHES];
+ atomic64_t allocated_data_blocks; /* for block age extent_cache */
++ unsigned int max_read_extent_count; /* max read extent count per inode */
+
+ /* The threshold used for hot and warm data seperation*/
+ unsigned int hot_data_age_threshold;
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 1ed86df343a5d1..10780e37fc7b68 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -775,8 +775,10 @@ int f2fs_write_inode(struct inode *inode, struct writeback_control *wbc)
+ !is_inode_flag_set(inode, FI_DIRTY_INODE))
+ return 0;
+
+- if (!f2fs_is_checkpoint_ready(sbi))
++ if (!f2fs_is_checkpoint_ready(sbi)) {
++ f2fs_mark_inode_dirty_sync(inode, true);
+ return -ENOSPC;
++ }
+
+ /*
+ * We need to balance fs here to prevent from producing dirty node pages
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index af36c6d6542b8c..4d7b9fd6ef31ab 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1341,7 +1341,12 @@ struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs)
+ err = -EFSCORRUPTED;
+ dec_valid_node_count(sbi, dn->inode, !ofs);
+ set_sbi_flag(sbi, SBI_NEED_FSCK);
+- f2fs_handle_error(sbi, ERROR_INVALID_BLKADDR);
++ f2fs_warn_ratelimited(sbi,
++ "f2fs_new_node_page: inconsistent nat entry, "
++ "ino:%u, nid:%u, blkaddr:%u, ver:%u, flag:%u",
++ new_ni.ino, new_ni.nid, new_ni.blk_addr,
++ new_ni.version, new_ni.flag);
++ f2fs_handle_error(sbi, ERROR_INCONSISTENT_NAT);
+ goto fail;
+ }
+ #endif
+diff --git a/fs/f2fs/sysfs.c b/fs/f2fs/sysfs.c
+index c56e8c87393523..d9a44f03e558bf 100644
+--- a/fs/f2fs/sysfs.c
++++ b/fs/f2fs/sysfs.c
+@@ -789,6 +789,13 @@ static ssize_t __sbi_store(struct f2fs_attr *a,
+ return count;
+ }
+
++ if (!strcmp(a->attr.name, "max_read_extent_count")) {
++ if (t > UINT_MAX)
++ return -EINVAL;
++ *ui = (unsigned int)t;
++ return count;
++ }
++
+ if (!strcmp(a->attr.name, "ipu_policy")) {
+ if (t >= BIT(F2FS_IPU_MAX))
+ return -EINVAL;
+@@ -1054,6 +1061,8 @@ F2FS_SBI_GENERAL_RW_ATTR(revoked_atomic_block);
+ F2FS_SBI_GENERAL_RW_ATTR(hot_data_age_threshold);
+ F2FS_SBI_GENERAL_RW_ATTR(warm_data_age_threshold);
+ F2FS_SBI_GENERAL_RW_ATTR(last_age_weight);
++/* read extent cache */
++F2FS_SBI_GENERAL_RW_ATTR(max_read_extent_count);
+ #ifdef CONFIG_BLK_DEV_ZONED
+ F2FS_SBI_GENERAL_RO_ATTR(unusable_blocks_per_sec);
+ F2FS_SBI_GENERAL_RW_ATTR(blkzone_alloc_policy);
+@@ -1244,6 +1253,7 @@ static struct attribute *f2fs_attrs[] = {
+ ATTR_LIST(hot_data_age_threshold),
+ ATTR_LIST(warm_data_age_threshold),
+ ATTR_LIST(last_age_weight),
++ ATTR_LIST(max_read_extent_count),
+ NULL,
+ };
+ ATTRIBUTE_GROUPS(f2fs);
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index e22c1edc32b39e..b9cef63c78717f 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1537,11 +1537,13 @@ static struct inode *gfs2_alloc_inode(struct super_block *sb)
+ if (!ip)
+ return NULL;
+ ip->i_no_addr = 0;
++ ip->i_no_formal_ino = 0;
+ ip->i_flags = 0;
+ ip->i_gl = NULL;
+ gfs2_holder_mark_uninitialized(&ip->i_iopen_gh);
+ memset(&ip->i_res, 0, sizeof(ip->i_res));
+ RB_CLEAR_NODE(&ip->i_res.rs_node);
++ ip->i_diskflags = 0;
+ ip->i_rahead = 0;
+ return &ip->i_inode;
+ }
+diff --git a/fs/jffs2/compr_rtime.c b/fs/jffs2/compr_rtime.c
+index 79e771ab624f47..3bd9d2f3bece20 100644
+--- a/fs/jffs2/compr_rtime.c
++++ b/fs/jffs2/compr_rtime.c
+@@ -95,6 +95,9 @@ static int jffs2_rtime_decompress(unsigned char *data_in,
+
+ positions[value]=outpos;
+ if (repeat) {
++ if ((outpos + repeat) > destlen) {
++ return 1;
++ }
+ if (backoffs + repeat >= outpos) {
+ while(repeat) {
+ cpage_out[outpos++] = cpage_out[backoffs++];
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index 3ab410059dc202..f9009e4f9ffd89 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -1820,6 +1820,9 @@ dbAllocCtl(struct bmap * bmp, s64 nblocks, int l2nb, s64 blkno, s64 * results)
+ return -EIO;
+ dp = (struct dmap *) mp->data;
+
++ if (dp->tree.budmin < 0)
++ return -EIO;
++
+ /* try to allocate the blocks.
+ */
+ rc = dbAllocDmapLev(bmp, dp, (int) nblocks, l2nb, results);
+@@ -2888,6 +2891,9 @@ static void dbAdjTree(dmtree_t *tp, int leafno, int newval, bool is_ctl)
+ /* bubble the new value up the tree as required.
+ */
+ for (k = 0; k < le32_to_cpu(tp->dmt_height); k++) {
++ if (lp == 0)
++ break;
++
+ /* get the index of the first leaf of the 4 leaf
+ * group containing the specified leaf (leafno).
+ */
+diff --git a/fs/jfs/jfs_dtree.c b/fs/jfs/jfs_dtree.c
+index 5d3127ca68a42d..8f85177f284b5a 100644
+--- a/fs/jfs/jfs_dtree.c
++++ b/fs/jfs/jfs_dtree.c
+@@ -2891,6 +2891,14 @@ int jfs_readdir(struct file *file, struct dir_context *ctx)
+ stbl = DT_GETSTBL(p);
+
+ for (i = index; i < p->header.nextindex; i++) {
++ if (stbl[i] < 0 || stbl[i] > 127) {
++ jfs_err("JFS: Invalid stbl[%d] = %d for inode %ld, block = %lld",
++ i, stbl[i], (long)ip->i_ino, (long long)bn);
++ free_page(dirent_buf);
++ DT_PUTPAGE(mp);
++ return -EIO;
++ }
++
+ d = (struct ldtentry *) & p->slot[stbl[i]];
+
+ if (((long) jfs_dirent + d->namlen + 1) >
+@@ -3086,6 +3094,13 @@ static int dtReadFirst(struct inode *ip, struct btstack * btstack)
+
+ /* get the leftmost entry */
+ stbl = DT_GETSTBL(p);
++
++ if (stbl[0] < 0 || stbl[0] > 127) {
++ DT_PUTPAGE(mp);
++ jfs_error(ip->i_sb, "stbl[0] out of bound\n");
++ return -EIO;
++ }
++
+ xd = (pxd_t *) & p->slot[stbl[0]];
+
+ /* get the child page block address */
+diff --git a/fs/nilfs2/dir.c b/fs/nilfs2/dir.c
+index a8602729586ab7..f61c58fbf117d3 100644
+--- a/fs/nilfs2/dir.c
++++ b/fs/nilfs2/dir.c
+@@ -70,7 +70,7 @@ static inline unsigned int nilfs_chunk_size(struct inode *inode)
+ */
+ static unsigned int nilfs_last_byte(struct inode *inode, unsigned long page_nr)
+ {
+- unsigned int last_byte = inode->i_size;
++ u64 last_byte = inode->i_size;
+
+ last_byte -= page_nr << PAGE_SHIFT;
+ if (last_byte > PAGE_SIZE)
+diff --git a/fs/notify/fanotify/fanotify_user.c b/fs/notify/fanotify/fanotify_user.c
+index 9644bc72e4573b..8e2d43fc6f7c1f 100644
+--- a/fs/notify/fanotify/fanotify_user.c
++++ b/fs/notify/fanotify/fanotify_user.c
+@@ -266,13 +266,6 @@ static int create_fd(struct fsnotify_group *group, const struct path *path,
+ group->fanotify_data.f_flags | __FMODE_NONOTIFY,
+ current_cred());
+ if (IS_ERR(new_file)) {
+- /*
+- * we still send an event even if we can't open the file. this
+- * can happen when say tasks are gone and we try to open their
+- * /proc files or we try to open a WRONLY file like in sysfs
+- * we just send the errno to userspace since there isn't much
+- * else we can do.
+- */
+ put_unused_fd(client_fd);
+ client_fd = PTR_ERR(new_file);
+ } else {
+@@ -663,7 +656,7 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ unsigned int info_mode = FAN_GROUP_FLAG(group, FANOTIFY_INFO_MODES);
+ unsigned int pidfd_mode = info_mode & FAN_REPORT_PIDFD;
+ struct file *f = NULL, *pidfd_file = NULL;
+- int ret, pidfd = FAN_NOPIDFD, fd = FAN_NOFD;
++ int ret, pidfd = -ESRCH, fd = -EBADF;
+
+ pr_debug("%s: group=%p event=%p\n", __func__, group, event);
+
+@@ -691,10 +684,39 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ if (!FAN_GROUP_FLAG(group, FANOTIFY_UNPRIV) &&
+ path && path->mnt && path->dentry) {
+ fd = create_fd(group, path, &f);
+- if (fd < 0)
+- return fd;
++ /*
++ * Opening an fd from dentry can fail for several reasons.
++ * For example, when tasks are gone and we try to open their
++ * /proc files or we try to open a WRONLY file like in sysfs
++ * or when trying to open a file that was deleted on the
++ * remote network server.
++ *
++ * For a group with FAN_REPORT_FD_ERROR, we will send the
++ * event with the error instead of the open fd, otherwise
++ * Userspace may not get the error at all.
++ * In any case, userspace will not know which file failed to
++ * open, so add a debug print for further investigation.
++ */
++ if (fd < 0) {
++ pr_debug("fanotify: create_fd(%pd2) failed err=%d\n",
++ path->dentry, fd);
++ if (!FAN_GROUP_FLAG(group, FAN_REPORT_FD_ERROR)) {
++ /*
++ * Historically, we've handled EOPENSTALE in a
++ * special way and silently dropped such
++ * events. Now we have to keep it to maintain
++ * backward compatibility...
++ */
++ if (fd == -EOPENSTALE)
++ fd = 0;
++ return fd;
++ }
++ }
+ }
+- metadata.fd = fd;
++ if (FAN_GROUP_FLAG(group, FAN_REPORT_FD_ERROR))
++ metadata.fd = fd;
++ else
++ metadata.fd = fd >= 0 ? fd : FAN_NOFD;
+
+ if (pidfd_mode) {
+ /*
+@@ -709,18 +731,16 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ * The PIDTYPE_TGID check for an event->pid is performed
+ * preemptively in an attempt to catch out cases where the event
+ * listener reads events after the event generating process has
+- * already terminated. Report FAN_NOPIDFD to the event listener
+- * in those cases, with all other pidfd creation errors being
+- * reported as FAN_EPIDFD.
++ * already terminated. Depending on flag FAN_REPORT_FD_ERROR,
++ * report either -ESRCH or FAN_NOPIDFD to the event listener in
++ * those cases with all other pidfd creation errors reported as
++ * the error code itself or as FAN_EPIDFD.
+ */
+- if (metadata.pid == 0 ||
+- !pid_has_task(event->pid, PIDTYPE_TGID)) {
+- pidfd = FAN_NOPIDFD;
+- } else {
++ if (metadata.pid && pid_has_task(event->pid, PIDTYPE_TGID))
+ pidfd = pidfd_prepare(event->pid, 0, &pidfd_file);
+- if (pidfd < 0)
+- pidfd = FAN_EPIDFD;
+- }
++
++ if (!FAN_GROUP_FLAG(group, FAN_REPORT_FD_ERROR) && pidfd < 0)
++ pidfd = pidfd == -ESRCH ? FAN_NOPIDFD : FAN_EPIDFD;
+ }
+
+ ret = -EFAULT;
+@@ -737,9 +757,6 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ buf += FAN_EVENT_METADATA_LEN;
+ count -= FAN_EVENT_METADATA_LEN;
+
+- if (fanotify_is_perm_event(event->mask))
+- FANOTIFY_PERM(event)->fd = fd;
+-
+ if (info_mode) {
+ ret = copy_info_records_to_user(event, info, info_mode, pidfd,
+ buf, count);
+@@ -753,15 +770,18 @@ static ssize_t copy_event_to_user(struct fsnotify_group *group,
+ if (pidfd_file)
+ fd_install(pidfd, pidfd_file);
+
++ if (fanotify_is_perm_event(event->mask))
++ FANOTIFY_PERM(event)->fd = fd;
++
+ return metadata.event_len;
+
+ out_close_fd:
+- if (fd != FAN_NOFD) {
++ if (f) {
+ put_unused_fd(fd);
+ fput(f);
+ }
+
+- if (pidfd >= 0) {
++ if (pidfd_file) {
+ put_unused_fd(pidfd);
+ fput(pidfd_file);
+ }
+@@ -828,15 +848,6 @@ static ssize_t fanotify_read(struct file *file, char __user *buf,
+ }
+
+ ret = copy_event_to_user(group, event, buf, count);
+- if (unlikely(ret == -EOPENSTALE)) {
+- /*
+- * We cannot report events with stale fd so drop it.
+- * Setting ret to 0 will continue the event loop and
+- * do the right thing if there are no more events to
+- * read (i.e. return bytes read, -EAGAIN or wait).
+- */
+- ret = 0;
+- }
+
+ /*
+ * Permission events get queued to wait for response. Other
+@@ -845,7 +856,7 @@ static ssize_t fanotify_read(struct file *file, char __user *buf,
+ if (!fanotify_is_perm_event(event->mask)) {
+ fsnotify_destroy_event(group, &event->fse);
+ } else {
+- if (ret <= 0) {
++ if (ret <= 0 || FANOTIFY_PERM(event)->fd < 0) {
+ spin_lock(&group->notification_lock);
+ finish_permission_event(group,
+ FANOTIFY_PERM(event), FAN_DENY, NULL);
+@@ -1954,7 +1965,7 @@ static int __init fanotify_user_setup(void)
+ FANOTIFY_DEFAULT_MAX_USER_MARKS);
+
+ BUILD_BUG_ON(FANOTIFY_INIT_FLAGS & FANOTIFY_INTERNAL_GROUP_FLAGS);
+- BUILD_BUG_ON(HWEIGHT32(FANOTIFY_INIT_FLAGS) != 12);
++ BUILD_BUG_ON(HWEIGHT32(FANOTIFY_INIT_FLAGS) != 13);
+ BUILD_BUG_ON(HWEIGHT32(FANOTIFY_MARK_FLAGS) != 11);
+
+ fanotify_mark_cache = KMEM_CACHE(fanotify_mark,
+diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
+index 0763202d00c992..8d789b017fa9b6 100644
+--- a/fs/ntfs3/attrib.c
++++ b/fs/ntfs3/attrib.c
+@@ -977,7 +977,7 @@ int attr_data_get_block(struct ntfs_inode *ni, CLST vcn, CLST clen, CLST *lcn,
+
+ /* Check for compressed frame. */
+ err = attr_is_frame_compressed(ni, attr_b, vcn >> NTFS_LZNT_CUNIT,
+- &hint);
++ &hint, run);
+ if (err)
+ goto out;
+
+@@ -1521,16 +1521,16 @@ int attr_wof_frame_info(struct ntfs_inode *ni, struct ATTRIB *attr,
+ * attr_is_frame_compressed - Used to detect compressed frame.
+ *
+ * attr - base (primary) attribute segment.
++ * run - run to use, usually == &ni->file.run.
+ * Only base segments contains valid 'attr->nres.c_unit'
+ */
+ int attr_is_frame_compressed(struct ntfs_inode *ni, struct ATTRIB *attr,
+- CLST frame, CLST *clst_data)
++ CLST frame, CLST *clst_data, struct runs_tree *run)
+ {
+ int err;
+ u32 clst_frame;
+ CLST clen, lcn, vcn, alen, slen, vcn_next;
+ size_t idx;
+- struct runs_tree *run;
+
+ *clst_data = 0;
+
+@@ -1542,7 +1542,6 @@ int attr_is_frame_compressed(struct ntfs_inode *ni, struct ATTRIB *attr,
+
+ clst_frame = 1u << attr->nres.c_unit;
+ vcn = frame * clst_frame;
+- run = &ni->file.run;
+
+ if (!run_lookup_entry(run, vcn, &lcn, &clen, &idx)) {
+ err = attr_load_runs_vcn(ni, attr->type, attr_name(attr),
+@@ -1678,7 +1677,7 @@ int attr_allocate_frame(struct ntfs_inode *ni, CLST frame, size_t compr_size,
+ if (err)
+ goto out;
+
+- err = attr_is_frame_compressed(ni, attr_b, frame, &clst_data);
++ err = attr_is_frame_compressed(ni, attr_b, frame, &clst_data, run);
+ if (err)
+ goto out;
+
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index 41c7ffad279016..c33e818b3164cd 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -1900,46 +1900,6 @@ enum REPARSE_SIGN ni_parse_reparse(struct ntfs_inode *ni, struct ATTRIB *attr,
+ return REPARSE_LINK;
+ }
+
+-/*
+- * fiemap_fill_next_extent_k - a copy of fiemap_fill_next_extent
+- * but it uses 'fe_k' instead of fieinfo->fi_extents_start
+- */
+-static int fiemap_fill_next_extent_k(struct fiemap_extent_info *fieinfo,
+- struct fiemap_extent *fe_k, u64 logical,
+- u64 phys, u64 len, u32 flags)
+-{
+- struct fiemap_extent extent;
+-
+- /* only count the extents */
+- if (fieinfo->fi_extents_max == 0) {
+- fieinfo->fi_extents_mapped++;
+- return (flags & FIEMAP_EXTENT_LAST) ? 1 : 0;
+- }
+-
+- if (fieinfo->fi_extents_mapped >= fieinfo->fi_extents_max)
+- return 1;
+-
+- if (flags & FIEMAP_EXTENT_DELALLOC)
+- flags |= FIEMAP_EXTENT_UNKNOWN;
+- if (flags & FIEMAP_EXTENT_DATA_ENCRYPTED)
+- flags |= FIEMAP_EXTENT_ENCODED;
+- if (flags & (FIEMAP_EXTENT_DATA_TAIL | FIEMAP_EXTENT_DATA_INLINE))
+- flags |= FIEMAP_EXTENT_NOT_ALIGNED;
+-
+- memset(&extent, 0, sizeof(extent));
+- extent.fe_logical = logical;
+- extent.fe_physical = phys;
+- extent.fe_length = len;
+- extent.fe_flags = flags;
+-
+- memcpy(fe_k + fieinfo->fi_extents_mapped, &extent, sizeof(extent));
+-
+- fieinfo->fi_extents_mapped++;
+- if (fieinfo->fi_extents_mapped == fieinfo->fi_extents_max)
+- return 1;
+- return (flags & FIEMAP_EXTENT_LAST) ? 1 : 0;
+-}
+-
+ /*
+ * ni_fiemap - Helper for file_fiemap().
+ *
+@@ -1950,11 +1910,9 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ __u64 vbo, __u64 len)
+ {
+ int err = 0;
+- struct fiemap_extent *fe_k = NULL;
+ struct ntfs_sb_info *sbi = ni->mi.sbi;
+ u8 cluster_bits = sbi->cluster_bits;
+- struct runs_tree *run;
+- struct rw_semaphore *run_lock;
++ struct runs_tree run;
+ struct ATTRIB *attr;
+ CLST vcn = vbo >> cluster_bits;
+ CLST lcn, clen;
+@@ -1965,13 +1923,11 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ u32 flags;
+ bool ok;
+
++ run_init(&run);
+ if (S_ISDIR(ni->vfs_inode.i_mode)) {
+- run = &ni->dir.alloc_run;
+ attr = ni_find_attr(ni, NULL, NULL, ATTR_ALLOC, I30_NAME,
+ ARRAY_SIZE(I30_NAME), NULL, NULL);
+- run_lock = &ni->dir.run_lock;
+ } else {
+- run = &ni->file.run;
+ attr = ni_find_attr(ni, NULL, NULL, ATTR_DATA, NULL, 0, NULL,
+ NULL);
+ if (!attr) {
+@@ -1986,7 +1942,6 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ "fiemap is not supported for compressed file (cp -r)");
+ goto out;
+ }
+- run_lock = &ni->file.run_lock;
+ }
+
+ if (!attr || !attr->non_res) {
+@@ -1998,51 +1953,33 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ goto out;
+ }
+
+- /*
+- * To avoid lock problems replace pointer to user memory by pointer to kernel memory.
+- */
+- fe_k = kmalloc_array(fieinfo->fi_extents_max,
+- sizeof(struct fiemap_extent),
+- GFP_NOFS | __GFP_ZERO);
+- if (!fe_k) {
+- err = -ENOMEM;
+- goto out;
+- }
+-
+ end = vbo + len;
+ alloc_size = le64_to_cpu(attr->nres.alloc_size);
+ if (end > alloc_size)
+ end = alloc_size;
+
+- down_read(run_lock);
+
+ while (vbo < end) {
+ if (idx == -1) {
+- ok = run_lookup_entry(run, vcn, &lcn, &clen, &idx);
++ ok = run_lookup_entry(&run, vcn, &lcn, &clen, &idx);
+ } else {
+ CLST vcn_next = vcn;
+
+- ok = run_get_entry(run, ++idx, &vcn, &lcn, &clen) &&
++ ok = run_get_entry(&run, ++idx, &vcn, &lcn, &clen) &&
+ vcn == vcn_next;
+ if (!ok)
+ vcn = vcn_next;
+ }
+
+ if (!ok) {
+- up_read(run_lock);
+- down_write(run_lock);
+-
+ err = attr_load_runs_vcn(ni, attr->type,
+ attr_name(attr),
+- attr->name_len, run, vcn);
+-
+- up_write(run_lock);
+- down_read(run_lock);
++ attr->name_len, &run, vcn);
+
+ if (err)
+ break;
+
+- ok = run_lookup_entry(run, vcn, &lcn, &clen, &idx);
++ ok = run_lookup_entry(&run, vcn, &lcn, &clen, &idx);
+
+ if (!ok) {
+ err = -EINVAL;
+@@ -2067,8 +2004,9 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ } else if (is_attr_compressed(attr)) {
+ CLST clst_data;
+
+- err = attr_is_frame_compressed(
+- ni, attr, vcn >> attr->nres.c_unit, &clst_data);
++ err = attr_is_frame_compressed(ni, attr,
++ vcn >> attr->nres.c_unit,
++ &clst_data, &run);
+ if (err)
+ break;
+ if (clst_data < NTFS_LZNT_CLUSTERS)
+@@ -2097,8 +2035,8 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ if (vbo + dlen >= end)
+ flags |= FIEMAP_EXTENT_LAST;
+
+- err = fiemap_fill_next_extent_k(fieinfo, fe_k, vbo, lbo,
+- dlen, flags);
++ err = fiemap_fill_next_extent(fieinfo, vbo, lbo, dlen,
++ flags);
+
+ if (err < 0)
+ break;
+@@ -2119,8 +2057,7 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ if (vbo + bytes >= end)
+ flags |= FIEMAP_EXTENT_LAST;
+
+- err = fiemap_fill_next_extent_k(fieinfo, fe_k, vbo, lbo, bytes,
+- flags);
++ err = fiemap_fill_next_extent(fieinfo, vbo, lbo, bytes, flags);
+ if (err < 0)
+ break;
+ if (err == 1) {
+@@ -2131,19 +2068,8 @@ int ni_fiemap(struct ntfs_inode *ni, struct fiemap_extent_info *fieinfo,
+ vbo += bytes;
+ }
+
+- up_read(run_lock);
+-
+- /*
+- * Copy to user memory out of lock
+- */
+- if (copy_to_user(fieinfo->fi_extents_start, fe_k,
+- fieinfo->fi_extents_max *
+- sizeof(struct fiemap_extent))) {
+- err = -EFAULT;
+- }
+-
+ out:
+- kfree(fe_k);
++ run_close(&run);
+ return err;
+ }
+
+@@ -2672,7 +2598,8 @@ int ni_read_frame(struct ntfs_inode *ni, u64 frame_vbo, struct page **pages,
+ down_write(&ni->file.run_lock);
+ run_truncate_around(run, le64_to_cpu(attr->nres.svcn));
+ frame = frame_vbo >> (cluster_bits + NTFS_LZNT_CUNIT);
+- err = attr_is_frame_compressed(ni, attr, frame, &clst_data);
++ err = attr_is_frame_compressed(ni, attr, frame, &clst_data,
++ run);
+ up_write(&ni->file.run_lock);
+ if (err)
+ goto out1;
+diff --git a/fs/ntfs3/ntfs_fs.h b/fs/ntfs3/ntfs_fs.h
+index 26e1e1379c04e2..cd8e8374bb5a0a 100644
+--- a/fs/ntfs3/ntfs_fs.h
++++ b/fs/ntfs3/ntfs_fs.h
+@@ -446,7 +446,8 @@ int attr_wof_frame_info(struct ntfs_inode *ni, struct ATTRIB *attr,
+ struct runs_tree *run, u64 frame, u64 frames,
+ u8 frame_bits, u32 *ondisk_size, u64 *vbo_data);
+ int attr_is_frame_compressed(struct ntfs_inode *ni, struct ATTRIB *attr,
+- CLST frame, CLST *clst_data);
++ CLST frame, CLST *clst_data,
++ struct runs_tree *run);
+ int attr_allocate_frame(struct ntfs_inode *ni, CLST frame, size_t compr_size,
+ u64 new_valid);
+ int attr_collapse_range(struct ntfs_inode *ni, u64 vbo, u64 bytes);
+diff --git a/fs/ntfs3/run.c b/fs/ntfs3/run.c
+index 58e988cd80490d..48566dff0dc92b 100644
+--- a/fs/ntfs3/run.c
++++ b/fs/ntfs3/run.c
+@@ -1055,8 +1055,8 @@ int run_unpack_ex(struct runs_tree *run, struct ntfs_sb_info *sbi, CLST ino,
+ {
+ int ret, err;
+ CLST next_vcn, lcn, len;
+- size_t index;
+- bool ok;
++ size_t index, done;
++ bool ok, zone;
+ struct wnd_bitmap *wnd;
+
+ ret = run_unpack(run, sbi, ino, svcn, evcn, vcn, run_buf, run_buf_size);
+@@ -1087,8 +1087,9 @@ int run_unpack_ex(struct runs_tree *run, struct ntfs_sb_info *sbi, CLST ino,
+ continue;
+
+ down_read_nested(&wnd->rw_lock, BITMAP_MUTEX_CLUSTERS);
++ zone = max(wnd->zone_bit, lcn) < min(wnd->zone_end, lcn + len);
+ /* Check for free blocks. */
+- ok = wnd_is_used(wnd, lcn, len);
++ ok = !zone && wnd_is_used(wnd, lcn, len);
+ up_read(&wnd->rw_lock);
+ if (ok)
+ continue;
+@@ -1096,14 +1097,33 @@ int run_unpack_ex(struct runs_tree *run, struct ntfs_sb_info *sbi, CLST ino,
+ /* Looks like volume is corrupted. */
+ ntfs_set_state(sbi, NTFS_DIRTY_ERROR);
+
+- if (down_write_trylock(&wnd->rw_lock)) {
+- /* Mark all zero bits as used in range [lcn, lcn+len). */
+- size_t done;
+- err = wnd_set_used_safe(wnd, lcn, len, &done);
+- up_write(&wnd->rw_lock);
+- if (err)
+- return err;
++ if (!down_write_trylock(&wnd->rw_lock))
++ continue;
++
++ if (zone) {
++ /*
++ * Range [lcn, lcn + len) intersects with zone.
++ * To avoid complex with zone just turn it off.
++ */
++ wnd_zone_set(wnd, 0, 0);
++ }
++
++ /* Mark all zero bits as used in range [lcn, lcn+len). */
++ err = wnd_set_used_safe(wnd, lcn, len, &done);
++ if (zone) {
++ /* Restore zone. Lock mft run. */
++ struct rw_semaphore *lock;
++ lock = is_mounted(sbi) ? &sbi->mft.ni->file.run_lock :
++ NULL;
++ if (lock)
++ down_read(lock);
++ ntfs_refresh_zone(sbi);
++ if (lock)
++ up_read(lock);
+ }
++ up_write(&wnd->rw_lock);
++ if (err)
++ return err;
+ }
+
+ return ret;
+diff --git a/fs/ocfs2/dlmglue.c b/fs/ocfs2/dlmglue.c
+index 60df52e4c1f878..764ecbd5ad41dd 100644
+--- a/fs/ocfs2/dlmglue.c
++++ b/fs/ocfs2/dlmglue.c
+@@ -3110,6 +3110,7 @@ static void *ocfs2_dlm_seq_next(struct seq_file *m, void *v, loff_t *pos)
+ struct ocfs2_lock_res *iter = v;
+ struct ocfs2_lock_res *dummy = &priv->p_iter_res;
+
++ (*pos)++;
+ spin_lock(&ocfs2_dlm_tracking_lock);
+ iter = ocfs2_dlm_next_res(iter, priv);
+ list_del_init(&dummy->l_debug_list);
+diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c
+index 8ac42ea81a17bd..5df34561c551c6 100644
+--- a/fs/ocfs2/localalloc.c
++++ b/fs/ocfs2/localalloc.c
+@@ -1002,25 +1002,6 @@ static int ocfs2_sync_local_to_main(struct ocfs2_super *osb,
+ start = bit_off + 1;
+ }
+
+- /* clear the contiguous bits until the end boundary */
+- if (count) {
+- blkno = la_start_blk +
+- ocfs2_clusters_to_blocks(osb->sb,
+- start - count);
+-
+- trace_ocfs2_sync_local_to_main_free(
+- count, start - count,
+- (unsigned long long)la_start_blk,
+- (unsigned long long)blkno);
+-
+- status = ocfs2_release_clusters(handle,
+- main_bm_inode,
+- main_bm_bh, blkno,
+- count);
+- if (status < 0)
+- mlog_errno(status);
+- }
+-
+ bail:
+ if (status)
+ mlog_errno(status);
+diff --git a/fs/ocfs2/namei.c b/fs/ocfs2/namei.c
+index 59c92353151a85..5550f8afa43802 100644
+--- a/fs/ocfs2/namei.c
++++ b/fs/ocfs2/namei.c
+@@ -200,8 +200,10 @@ static struct inode *ocfs2_get_init_inode(struct inode *dir, umode_t mode)
+ mode = mode_strip_sgid(&nop_mnt_idmap, dir, mode);
+ inode_init_owner(&nop_mnt_idmap, inode, dir, mode);
+ status = dquot_initialize(inode);
+- if (status)
++ if (status) {
++ iput(inode);
+ return ERR_PTR(status);
++ }
+
+ return inode;
+ }
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index 0c6468844c4b54..a697e53ccee2be 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -677,6 +677,7 @@ int __cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ int cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ struct dentry *dentry, struct cifs_tcon *tcon,
+ const char *full_path, umode_t mode, dev_t dev);
++umode_t wire_mode_to_posix(u32 wire, bool is_dir);
+
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+ static inline int get_dfs_path(const unsigned int xid, struct cifs_ses *ses,
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index c6f15dbe860a41..0eae60731c20c0 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -5406,7 +5406,7 @@ CIFSSMBSetPathInfo(const unsigned int xid, struct cifs_tcon *tcon,
+ param_offset = offsetof(struct smb_com_transaction2_spi_req,
+ InformationLevel) - 4;
+ offset = param_offset + params;
+- data_offset = (char *) (&pSMB->hdr.Protocol) + offset;
++ data_offset = (char *)pSMB + offsetof(typeof(*pSMB), hdr.Protocol) + offset;
+ pSMB->ParameterOffset = cpu_to_le16(param_offset);
+ pSMB->DataOffset = cpu_to_le16(offset);
+ pSMB->SetupCount = 1;
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index a94c538ff86368..feff3324d39c6d 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -2512,9 +2512,6 @@ cifs_put_tcon(struct cifs_tcon *tcon, enum smb3_tcon_ref_trace trace)
+
+ list_del_init(&tcon->tcon_list);
+ tcon->status = TID_EXITING;
+-#ifdef CONFIG_CIFS_DFS_UPCALL
+- list_replace_init(&tcon->dfs_ses_list, &ses_list);
+-#endif
+ spin_unlock(&tcon->tc_lock);
+ spin_unlock(&cifs_tcp_ses_lock);
+
+@@ -2522,6 +2519,7 @@ cifs_put_tcon(struct cifs_tcon *tcon, enum smb3_tcon_ref_trace trace)
+ cancel_delayed_work_sync(&tcon->query_interfaces);
+ #ifdef CONFIG_CIFS_DFS_UPCALL
+ cancel_delayed_work_sync(&tcon->dfs_cache_work);
++ list_replace_init(&tcon->dfs_ses_list, &ses_list);
+ #endif
+
+ if (tcon->use_witness) {
+diff --git a/fs/smb/client/dfs.c b/fs/smb/client/dfs.c
+index 3f6077c68d68aa..c35953843373ea 100644
+--- a/fs/smb/client/dfs.c
++++ b/fs/smb/client/dfs.c
+@@ -321,49 +321,6 @@ int dfs_mount_share(struct cifs_mount_ctx *mnt_ctx)
+ return rc;
+ }
+
+-/* Update dfs referral path of superblock */
+-static int update_server_fullpath(struct TCP_Server_Info *server, struct cifs_sb_info *cifs_sb,
+- const char *target)
+-{
+- int rc = 0;
+- size_t len = strlen(target);
+- char *refpath, *npath;
+-
+- if (unlikely(len < 2 || *target != '\\'))
+- return -EINVAL;
+-
+- if (target[1] == '\\') {
+- len += 1;
+- refpath = kmalloc(len, GFP_KERNEL);
+- if (!refpath)
+- return -ENOMEM;
+-
+- scnprintf(refpath, len, "%s", target);
+- } else {
+- len += sizeof("\\");
+- refpath = kmalloc(len, GFP_KERNEL);
+- if (!refpath)
+- return -ENOMEM;
+-
+- scnprintf(refpath, len, "\\%s", target);
+- }
+-
+- npath = dfs_cache_canonical_path(refpath, cifs_sb->local_nls, cifs_remap(cifs_sb));
+- kfree(refpath);
+-
+- if (IS_ERR(npath)) {
+- rc = PTR_ERR(npath);
+- } else {
+- mutex_lock(&server->refpath_lock);
+- spin_lock(&server->srv_lock);
+- kfree(server->leaf_fullpath);
+- server->leaf_fullpath = npath;
+- spin_unlock(&server->srv_lock);
+- mutex_unlock(&server->refpath_lock);
+- }
+- return rc;
+-}
+-
+ static int target_share_matches_server(struct TCP_Server_Info *server, char *share,
+ bool *target_match)
+ {
+@@ -388,77 +345,22 @@ static int target_share_matches_server(struct TCP_Server_Info *server, char *sha
+ return rc;
+ }
+
+-static void __tree_connect_ipc(const unsigned int xid, char *tree,
+- struct cifs_sb_info *cifs_sb,
+- struct cifs_ses *ses)
+-{
+- struct TCP_Server_Info *server = ses->server;
+- struct cifs_tcon *tcon = ses->tcon_ipc;
+- int rc;
+-
+- spin_lock(&ses->ses_lock);
+- spin_lock(&ses->chan_lock);
+- if (cifs_chan_needs_reconnect(ses, server) ||
+- ses->ses_status != SES_GOOD) {
+- spin_unlock(&ses->chan_lock);
+- spin_unlock(&ses->ses_lock);
+- cifs_server_dbg(FYI, "%s: skipping ipc reconnect due to disconnected ses\n",
+- __func__);
+- return;
+- }
+- spin_unlock(&ses->chan_lock);
+- spin_unlock(&ses->ses_lock);
+-
+- cifs_server_lock(server);
+- scnprintf(tree, MAX_TREE_SIZE, "\\\\%s\\IPC$", server->hostname);
+- cifs_server_unlock(server);
+-
+- rc = server->ops->tree_connect(xid, ses, tree, tcon,
+- cifs_sb->local_nls);
+- cifs_server_dbg(FYI, "%s: tree_reconnect %s: %d\n", __func__, tree, rc);
+- spin_lock(&tcon->tc_lock);
+- if (rc) {
+- tcon->status = TID_NEED_TCON;
+- } else {
+- tcon->status = TID_GOOD;
+- tcon->need_reconnect = false;
+- }
+- spin_unlock(&tcon->tc_lock);
+-}
+-
+-static void tree_connect_ipc(const unsigned int xid, char *tree,
+- struct cifs_sb_info *cifs_sb,
+- struct cifs_tcon *tcon)
+-{
+- struct cifs_ses *ses = tcon->ses;
+-
+- __tree_connect_ipc(xid, tree, cifs_sb, ses);
+- __tree_connect_ipc(xid, tree, cifs_sb, CIFS_DFS_ROOT_SES(ses));
+-}
+-
+-static int __tree_connect_dfs_target(const unsigned int xid, struct cifs_tcon *tcon,
+- struct cifs_sb_info *cifs_sb, char *tree, bool islink,
+- struct dfs_cache_tgt_list *tl)
++static int tree_connect_dfs_target(const unsigned int xid,
++ struct cifs_tcon *tcon,
++ struct cifs_sb_info *cifs_sb,
++ char *tree, bool islink,
++ struct dfs_cache_tgt_list *tl)
+ {
+- int rc;
++ const struct smb_version_operations *ops = tcon->ses->server->ops;
+ struct TCP_Server_Info *server = tcon->ses->server;
+- const struct smb_version_operations *ops = server->ops;
+- struct cifs_ses *root_ses = CIFS_DFS_ROOT_SES(tcon->ses);
+- char *share = NULL, *prefix = NULL;
+ struct dfs_cache_tgt_iterator *tit;
++ char *share = NULL, *prefix = NULL;
+ bool target_match;
+-
+- tit = dfs_cache_get_tgt_iterator(tl);
+- if (!tit) {
+- rc = -ENOENT;
+- goto out;
+- }
++ int rc = -ENOENT;
+
+ /* Try to tree connect to all dfs targets */
+- for (; tit; tit = dfs_cache_get_next_tgt(tl, tit)) {
+- const char *target = dfs_cache_get_tgt_name(tit);
+- DFS_CACHE_TGT_LIST(ntl);
+-
++ for (tit = dfs_cache_get_tgt_iterator(tl);
++ tit; tit = dfs_cache_get_next_tgt(tl, tit)) {
+ kfree(share);
+ kfree(prefix);
+ share = prefix = NULL;
+@@ -479,69 +381,16 @@ static int __tree_connect_dfs_target(const unsigned int xid, struct cifs_tcon *t
+ }
+
+ dfs_cache_noreq_update_tgthint(server->leaf_fullpath + 1, tit);
+- tree_connect_ipc(xid, tree, cifs_sb, tcon);
+-
+ scnprintf(tree, MAX_TREE_SIZE, "\\%s", share);
+- if (!islink) {
+- rc = ops->tree_connect(xid, tcon->ses, tree, tcon, cifs_sb->local_nls);
+- break;
+- }
+-
+- /*
+- * If no dfs referrals were returned from link target, then just do a TREE_CONNECT
+- * to it. Otherwise, cache the dfs referral and then mark current tcp ses for
+- * reconnect so either the demultiplex thread or the echo worker will reconnect to
+- * newly resolved target.
+- */
+- if (dfs_cache_find(xid, root_ses, cifs_sb->local_nls, cifs_remap(cifs_sb), target,
+- NULL, &ntl)) {
+- rc = ops->tree_connect(xid, tcon->ses, tree, tcon, cifs_sb->local_nls);
+- if (rc)
+- continue;
+-
++ rc = ops->tree_connect(xid, tcon->ses, tree,
++ tcon, tcon->ses->local_nls);
++ if (islink && !rc && cifs_sb)
+ rc = cifs_update_super_prepath(cifs_sb, prefix);
+- } else {
+- /* Target is another dfs share */
+- rc = update_server_fullpath(server, cifs_sb, target);
+- dfs_cache_free_tgts(tl);
+-
+- if (!rc) {
+- rc = -EREMOTE;
+- list_replace_init(&ntl.tl_list, &tl->tl_list);
+- } else
+- dfs_cache_free_tgts(&ntl);
+- }
+ break;
+ }
+
+-out:
+ kfree(share);
+ kfree(prefix);
+-
+- return rc;
+-}
+-
+-static int tree_connect_dfs_target(const unsigned int xid, struct cifs_tcon *tcon,
+- struct cifs_sb_info *cifs_sb, char *tree, bool islink,
+- struct dfs_cache_tgt_list *tl)
+-{
+- int rc;
+- int num_links = 0;
+- struct TCP_Server_Info *server = tcon->ses->server;
+- char *old_fullpath = server->leaf_fullpath;
+-
+- do {
+- rc = __tree_connect_dfs_target(xid, tcon, cifs_sb, tree, islink, tl);
+- if (!rc || rc != -EREMOTE)
+- break;
+- } while (rc = -ELOOP, ++num_links < MAX_NESTED_LINKS);
+- /*
+- * If we couldn't tree connect to any targets from last referral path, then
+- * retry it from newly resolved dfs referral.
+- */
+- if (rc && server->leaf_fullpath != old_fullpath)
+- cifs_signal_cifsd_for_reconnect(server, true);
+-
+ dfs_cache_free_tgts(tl);
+ return rc;
+ }
+@@ -596,14 +445,11 @@ int cifs_tree_connect(const unsigned int xid, struct cifs_tcon *tcon, const stru
+ if (!IS_ERR(sb))
+ cifs_sb = CIFS_SB(sb);
+
+- /*
+- * Tree connect to last share in @tcon->tree_name whether dfs super or
+- * cached dfs referral was not found.
+- */
+- if (!cifs_sb || !server->leaf_fullpath ||
++ /* Tree connect to last share in @tcon->tree_name if no DFS referral */
++ if (!server->leaf_fullpath ||
+ dfs_cache_noreq_find(server->leaf_fullpath + 1, &ref, &tl)) {
+- rc = ops->tree_connect(xid, tcon->ses, tcon->tree_name, tcon,
+- cifs_sb ? cifs_sb->local_nls : nlsc);
++ rc = ops->tree_connect(xid, tcon->ses, tcon->tree_name,
++ tcon, tcon->ses->local_nls);
+ goto out;
+ }
+
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index 6d567b16998119..b35fe1075503e1 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -724,6 +724,88 @@ static int cifs_sfu_mode(struct cifs_fattr *fattr, const unsigned char *path,
+ #endif
+ }
+
++#define POSIX_TYPE_FILE 0
++#define POSIX_TYPE_DIR 1
++#define POSIX_TYPE_SYMLINK 2
++#define POSIX_TYPE_CHARDEV 3
++#define POSIX_TYPE_BLKDEV 4
++#define POSIX_TYPE_FIFO 5
++#define POSIX_TYPE_SOCKET 6
++
++#define POSIX_X_OTH 0000001
++#define POSIX_W_OTH 0000002
++#define POSIX_R_OTH 0000004
++#define POSIX_X_GRP 0000010
++#define POSIX_W_GRP 0000020
++#define POSIX_R_GRP 0000040
++#define POSIX_X_USR 0000100
++#define POSIX_W_USR 0000200
++#define POSIX_R_USR 0000400
++#define POSIX_STICKY 0001000
++#define POSIX_SET_GID 0002000
++#define POSIX_SET_UID 0004000
++
++#define POSIX_OTH_MASK 0000007
++#define POSIX_GRP_MASK 0000070
++#define POSIX_USR_MASK 0000700
++#define POSIX_PERM_MASK 0000777
++#define POSIX_FILETYPE_MASK 0070000
++
++#define POSIX_FILETYPE_SHIFT 12
++
++static u32 wire_perms_to_posix(u32 wire)
++{
++ u32 mode = 0;
++
++ mode |= (wire & POSIX_X_OTH) ? S_IXOTH : 0;
++ mode |= (wire & POSIX_W_OTH) ? S_IWOTH : 0;
++ mode |= (wire & POSIX_R_OTH) ? S_IROTH : 0;
++ mode |= (wire & POSIX_X_GRP) ? S_IXGRP : 0;
++ mode |= (wire & POSIX_W_GRP) ? S_IWGRP : 0;
++ mode |= (wire & POSIX_R_GRP) ? S_IRGRP : 0;
++ mode |= (wire & POSIX_X_USR) ? S_IXUSR : 0;
++ mode |= (wire & POSIX_W_USR) ? S_IWUSR : 0;
++ mode |= (wire & POSIX_R_USR) ? S_IRUSR : 0;
++ mode |= (wire & POSIX_STICKY) ? S_ISVTX : 0;
++ mode |= (wire & POSIX_SET_GID) ? S_ISGID : 0;
++ mode |= (wire & POSIX_SET_UID) ? S_ISUID : 0;
++
++ return mode;
++}
++
++static u32 posix_filetypes[] = {
++ S_IFREG,
++ S_IFDIR,
++ S_IFLNK,
++ S_IFCHR,
++ S_IFBLK,
++ S_IFIFO,
++ S_IFSOCK
++};
++
++static u32 wire_filetype_to_posix(u32 wire_type)
++{
++ if (wire_type >= ARRAY_SIZE(posix_filetypes)) {
++ pr_warn("Unexpected type %u", wire_type);
++ return 0;
++ }
++ return posix_filetypes[wire_type];
++}
++
++umode_t wire_mode_to_posix(u32 wire, bool is_dir)
++{
++ u32 wire_type;
++ u32 mode;
++
++ wire_type = (wire & POSIX_FILETYPE_MASK) >> POSIX_FILETYPE_SHIFT;
++ /* older servers do not set POSIX file type in the mode field in the response */
++ if ((wire_type == 0) && is_dir)
++ mode = wire_perms_to_posix(wire) | S_IFDIR;
++ else
++ mode = (wire_perms_to_posix(wire) | wire_filetype_to_posix(wire_type));
++ return (umode_t)mode;
++}
++
+ /* Fill a cifs_fattr struct with info from POSIX info struct */
+ static void smb311_posix_info_to_fattr(struct cifs_fattr *fattr,
+ struct cifs_open_info_data *data,
+@@ -760,20 +842,14 @@ static void smb311_posix_info_to_fattr(struct cifs_fattr *fattr,
+ fattr->cf_bytes = le64_to_cpu(info->AllocationSize);
+ fattr->cf_createtime = le64_to_cpu(info->CreationTime);
+ fattr->cf_nlink = le32_to_cpu(info->HardLinks);
+- fattr->cf_mode = (umode_t) le32_to_cpu(info->Mode);
++ fattr->cf_mode = wire_mode_to_posix(le32_to_cpu(info->Mode),
++ fattr->cf_cifsattrs & ATTR_DIRECTORY);
+
+ if (cifs_open_data_reparse(data) &&
+ cifs_reparse_point_to_fattr(cifs_sb, fattr, data))
+ goto out_reparse;
+
+- fattr->cf_mode &= ~S_IFMT;
+- if (fattr->cf_cifsattrs & ATTR_DIRECTORY) {
+- fattr->cf_mode |= S_IFDIR;
+- fattr->cf_dtype = DT_DIR;
+- } else { /* file */
+- fattr->cf_mode |= S_IFREG;
+- fattr->cf_dtype = DT_REG;
+- }
++ fattr->cf_dtype = S_DT(fattr->cf_mode);
+
+ out_reparse:
+ if (S_ISLNK(fattr->cf_mode)) {
+diff --git a/fs/smb/client/readdir.c b/fs/smb/client/readdir.c
+index b3a8f9c6fcff6f..273358d20a46c9 100644
+--- a/fs/smb/client/readdir.c
++++ b/fs/smb/client/readdir.c
+@@ -71,6 +71,8 @@ cifs_prime_dcache(struct dentry *parent, struct qstr *name,
+ struct inode *inode;
+ struct super_block *sb = parent->d_sb;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
++ bool posix = cifs_sb_master_tcon(cifs_sb)->posix_extensions;
++ bool reparse_need_reval = false;
+ DECLARE_WAIT_QUEUE_HEAD_ONSTACK(wq);
+ int rc;
+
+@@ -85,7 +87,21 @@ cifs_prime_dcache(struct dentry *parent, struct qstr *name,
+ * this spares us an invalidation.
+ */
+ retry:
+- if ((fattr->cf_cifsattrs & ATTR_REPARSE) ||
++ if (posix) {
++ switch (fattr->cf_mode & S_IFMT) {
++ case S_IFLNK:
++ case S_IFBLK:
++ case S_IFCHR:
++ reparse_need_reval = true;
++ break;
++ default:
++ break;
++ }
++ } else if (fattr->cf_cifsattrs & ATTR_REPARSE) {
++ reparse_need_reval = true;
++ }
++
++ if (reparse_need_reval ||
+ (fattr->cf_flags & CIFS_FATTR_NEED_REVAL))
+ return;
+
+@@ -241,31 +257,29 @@ cifs_posix_to_fattr(struct cifs_fattr *fattr, struct smb2_posix_info *info,
+ fattr->cf_nlink = le32_to_cpu(info->HardLinks);
+ fattr->cf_cifsattrs = le32_to_cpu(info->DosAttributes);
+
+- /*
+- * Since we set the inode type below we need to mask off
+- * to avoid strange results if bits set above.
+- * XXX: why not make server&client use the type bits?
+- */
+- fattr->cf_mode = le32_to_cpu(info->Mode) & ~S_IFMT;
++ if (fattr->cf_cifsattrs & ATTR_REPARSE)
++ fattr->cf_cifstag = le32_to_cpu(info->ReparseTag);
++
++ /* The Mode field in the response can now include the file type as well */
++ fattr->cf_mode = wire_mode_to_posix(le32_to_cpu(info->Mode),
++ fattr->cf_cifsattrs & ATTR_DIRECTORY);
++ fattr->cf_dtype = S_DT(le32_to_cpu(info->Mode));
++
++ switch (fattr->cf_mode & S_IFMT) {
++ case S_IFLNK:
++ case S_IFBLK:
++ case S_IFCHR:
++ fattr->cf_flags |= CIFS_FATTR_NEED_REVAL;
++ break;
++ default:
++ break;
++ }
+
+ cifs_dbg(FYI, "posix fattr: dev %d, reparse %d, mode %o\n",
+ le32_to_cpu(info->DeviceId),
+ le32_to_cpu(info->ReparseTag),
+ le32_to_cpu(info->Mode));
+
+- if (fattr->cf_cifsattrs & ATTR_DIRECTORY) {
+- fattr->cf_mode |= S_IFDIR;
+- fattr->cf_dtype = DT_DIR;
+- } else {
+- /*
+- * mark anything that is not a dir as regular
+- * file. special files should have the REPARSE
+- * attribute and will be marked as needing revaluation
+- */
+- fattr->cf_mode |= S_IFREG;
+- fattr->cf_dtype = DT_REG;
+- }
+-
+ sid_to_id(cifs_sb, &parsed.owner, fattr, SIDOWNER);
+ sid_to_id(cifs_sb, &parsed.group, fattr, SIDGROUP);
+ }
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index f74d0a86f44a4e..d3abb99cc99094 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -730,44 +730,60 @@ static void wsl_to_fattr(struct cifs_open_info_data *data,
+ fattr->cf_dtype = S_DT(fattr->cf_mode);
+ }
+
+-bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb,
+- struct cifs_fattr *fattr,
+- struct cifs_open_info_data *data)
++static bool posix_reparse_to_fattr(struct cifs_sb_info *cifs_sb,
++ struct cifs_fattr *fattr,
++ struct cifs_open_info_data *data)
+ {
+ struct reparse_posix_data *buf = data->reparse.posix;
+- u32 tag = data->reparse.tag;
+
+- if (tag == IO_REPARSE_TAG_NFS && buf) {
+- if (le16_to_cpu(buf->ReparseDataLength) < sizeof(buf->InodeType))
++
++ if (buf == NULL)
++ return true;
++
++ if (le16_to_cpu(buf->ReparseDataLength) < sizeof(buf->InodeType)) {
++ WARN_ON_ONCE(1);
++ return false;
++ }
++
++ switch (le64_to_cpu(buf->InodeType)) {
++ case NFS_SPECFILE_CHR:
++ if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8) {
++ WARN_ON_ONCE(1);
+ return false;
+- switch (le64_to_cpu(buf->InodeType)) {
+- case NFS_SPECFILE_CHR:
+- if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8)
+- return false;
+- fattr->cf_mode |= S_IFCHR;
+- fattr->cf_rdev = reparse_mkdev(buf->DataBuffer);
+- break;
+- case NFS_SPECFILE_BLK:
+- if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8)
+- return false;
+- fattr->cf_mode |= S_IFBLK;
+- fattr->cf_rdev = reparse_mkdev(buf->DataBuffer);
+- break;
+- case NFS_SPECFILE_FIFO:
+- fattr->cf_mode |= S_IFIFO;
+- break;
+- case NFS_SPECFILE_SOCK:
+- fattr->cf_mode |= S_IFSOCK;
+- break;
+- case NFS_SPECFILE_LNK:
+- fattr->cf_mode |= S_IFLNK;
+- break;
+- default:
++ }
++ fattr->cf_mode |= S_IFCHR;
++ fattr->cf_rdev = reparse_mkdev(buf->DataBuffer);
++ break;
++ case NFS_SPECFILE_BLK:
++ if (le16_to_cpu(buf->ReparseDataLength) != sizeof(buf->InodeType) + 8) {
+ WARN_ON_ONCE(1);
+ return false;
+ }
+- goto out;
++ fattr->cf_mode |= S_IFBLK;
++ fattr->cf_rdev = reparse_mkdev(buf->DataBuffer);
++ break;
++ case NFS_SPECFILE_FIFO:
++ fattr->cf_mode |= S_IFIFO;
++ break;
++ case NFS_SPECFILE_SOCK:
++ fattr->cf_mode |= S_IFSOCK;
++ break;
++ case NFS_SPECFILE_LNK:
++ fattr->cf_mode |= S_IFLNK;
++ break;
++ default:
++ WARN_ON_ONCE(1);
++ return false;
+ }
++ return true;
++}
++
++bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb,
++ struct cifs_fattr *fattr,
++ struct cifs_open_info_data *data)
++{
++ u32 tag = data->reparse.tag;
++ bool ok;
+
+ switch (tag) {
+ case IO_REPARSE_TAG_INTERNAL:
+@@ -787,15 +803,19 @@ bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb,
+ case IO_REPARSE_TAG_LX_BLK:
+ wsl_to_fattr(data, cifs_sb, tag, fattr);
+ break;
++ case IO_REPARSE_TAG_NFS:
++ ok = posix_reparse_to_fattr(cifs_sb, fattr, data);
++ if (!ok)
++ return false;
++ break;
+ case 0: /* SMB1 symlink */
+ case IO_REPARSE_TAG_SYMLINK:
+- case IO_REPARSE_TAG_NFS:
+ fattr->cf_mode |= S_IFLNK;
+ break;
+ default:
+ return false;
+ }
+-out:
++
+ fattr->cf_dtype = S_DT(fattr->cf_mode);
+ return true;
+ }
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index a188908914fe8f..a55f0044d30bde 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -943,7 +943,8 @@ int smb2_query_path_info(const unsigned int xid,
+ if (rc || !data->reparse_point)
+ goto out;
+
+- cmds[num_cmds++] = SMB2_OP_QUERY_WSL_EA;
++ if (!tcon->posix_extensions)
++ cmds[num_cmds++] = SMB2_OP_QUERY_WSL_EA;
+ /*
+ * Skip SMB2_OP_GET_REPARSE if symlink already parsed in create
+ * response.
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 599118aed20539..d0836d710f1814 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -6651,6 +6651,10 @@ int smb2_read(struct ksmbd_work *work)
+ }
+
+ offset = le64_to_cpu(req->Offset);
++ if (offset < 0) {
++ err = -EINVAL;
++ goto out;
++ }
+ length = le32_to_cpu(req->Length);
+ mincount = le32_to_cpu(req->MinimumCount);
+
+@@ -6864,6 +6868,8 @@ int smb2_write(struct ksmbd_work *work)
+ }
+
+ offset = le64_to_cpu(req->Offset);
++ if (offset < 0)
++ return -EINVAL;
+ length = le32_to_cpu(req->Length);
+
+ if (req->Channel == SMB2_CHANNEL_RDMA_V1 ||
+diff --git a/fs/unicode/mkutf8data.c b/fs/unicode/mkutf8data.c
+index b2bd08250c7a09..77b685db827511 100644
+--- a/fs/unicode/mkutf8data.c
++++ b/fs/unicode/mkutf8data.c
+@@ -2230,6 +2230,75 @@ static void nfdicf_init(void)
+ file_fail(fold_name);
+ }
+
++static void ignore_init(void)
++{
++ FILE *file;
++ unsigned int unichar;
++ unsigned int first;
++ unsigned int last;
++ unsigned int *um;
++ int count;
++ int ret;
++
++ if (verbose > 0)
++ printf("Parsing %s\n", prop_name);
++ file = fopen(prop_name, "r");
++ if (!file)
++ open_fail(prop_name, errno);
++ assert(file);
++ count = 0;
++ while (fgets(line, LINESIZE, file)) {
++ ret = sscanf(line, "%X..%X ; %s # ", &first, &last, buf0);
++ if (ret == 3) {
++ if (strcmp(buf0, "Default_Ignorable_Code_Point"))
++ continue;
++ if (!utf32valid(first) || !utf32valid(last))
++ line_fail(prop_name, line);
++ for (unichar = first; unichar <= last; unichar++) {
++ free(unicode_data[unichar].utf32nfdi);
++ um = malloc(sizeof(unsigned int));
++ *um = 0;
++ unicode_data[unichar].utf32nfdi = um;
++ free(unicode_data[unichar].utf32nfdicf);
++ um = malloc(sizeof(unsigned int));
++ *um = 0;
++ unicode_data[unichar].utf32nfdicf = um;
++ count++;
++ }
++ if (verbose > 1)
++ printf(" %X..%X Default_Ignorable_Code_Point\n",
++ first, last);
++ continue;
++ }
++ ret = sscanf(line, "%X ; %s # ", &unichar, buf0);
++ if (ret == 2) {
++ if (strcmp(buf0, "Default_Ignorable_Code_Point"))
++ continue;
++ if (!utf32valid(unichar))
++ line_fail(prop_name, line);
++ free(unicode_data[unichar].utf32nfdi);
++ um = malloc(sizeof(unsigned int));
++ *um = 0;
++ unicode_data[unichar].utf32nfdi = um;
++ free(unicode_data[unichar].utf32nfdicf);
++ um = malloc(sizeof(unsigned int));
++ *um = 0;
++ unicode_data[unichar].utf32nfdicf = um;
++ if (verbose > 1)
++ printf(" %X Default_Ignorable_Code_Point\n",
++ unichar);
++ count++;
++ continue;
++ }
++ }
++ fclose(file);
++
++ if (verbose > 0)
++ printf("Found %d entries\n", count);
++ if (count == 0)
++ file_fail(prop_name);
++}
++
+ static void corrections_init(void)
+ {
+ FILE *file;
+@@ -3342,6 +3411,7 @@ int main(int argc, char *argv[])
+ ccc_init();
+ nfdi_init();
+ nfdicf_init();
++ ignore_init();
+ corrections_init();
+ hangul_decompose();
+ nfdi_decompose();
+diff --git a/fs/unicode/utf8data.c_shipped b/fs/unicode/utf8data.c_shipped
+index ac2da4ba2dc0f9..dafa5fed761d83 100644
+--- a/fs/unicode/utf8data.c_shipped
++++ b/fs/unicode/utf8data.c_shipped
+@@ -82,58 +82,58 @@ static const struct utf8data utf8nfdidata[] = {
+ { 0xc0100, 20736 }
+ };
+
+-static const unsigned char utf8data[64080] = {
++static const unsigned char utf8data[64256] = {
+ /* nfdicf_30100 */
+- 0xd7,0x07,0x66,0x84,0x0c,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x96,0x1a,0xe3,0x60,0x15,
+- 0xe2,0x49,0x0e,0xc1,0xe0,0x4b,0x0d,0xcf,0x86,0x65,0x2d,0x0d,0x01,0x00,0xd4,0xb8,
+- 0xd3,0x27,0xe2,0x03,0xa3,0xe1,0xcb,0x35,0xe0,0x29,0x22,0xcf,0x86,0xc5,0xe4,0xfa,
+- 0x6c,0xe3,0x45,0x68,0xe2,0xdb,0x65,0xe1,0x0e,0x65,0xe0,0xd3,0x64,0xcf,0x86,0xe5,
+- 0x98,0x64,0x64,0x7b,0x64,0x0b,0x00,0xd2,0x0e,0xe1,0xb3,0x3c,0xe0,0x34,0xa3,0xcf,
+- 0x86,0xcf,0x06,0x01,0x00,0xd1,0x0c,0xe0,0x98,0xa8,0xcf,0x86,0xcf,0x06,0x02,0xff,
++ 0xd7,0x07,0x66,0x84,0x0c,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x99,0x1a,0xe3,0x63,0x15,
++ 0xe2,0x4c,0x0e,0xc1,0xe0,0x4e,0x0d,0xcf,0x86,0x65,0x2d,0x0d,0x01,0x00,0xd4,0xb8,
++ 0xd3,0x27,0xe2,0x89,0xa3,0xe1,0xce,0x35,0xe0,0x2c,0x22,0xcf,0x86,0xc5,0xe4,0x15,
++ 0x6d,0xe3,0x60,0x68,0xe2,0xf6,0x65,0xe1,0x29,0x65,0xe0,0xee,0x64,0xcf,0x86,0xe5,
++ 0xb3,0x64,0x64,0x96,0x64,0x0b,0x00,0xd2,0x0e,0xe1,0xb5,0x3c,0xe0,0xba,0xa3,0xcf,
++ 0x86,0xcf,0x06,0x01,0x00,0xd1,0x0c,0xe0,0x1e,0xa9,0xcf,0x86,0xcf,0x06,0x02,0xff,
+ 0xff,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,
+- 0x00,0xe4,0xdf,0x45,0xe3,0x39,0x45,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x01,0xad,
+- 0xd0,0x21,0xcf,0x86,0xe5,0xfb,0xa9,0xe4,0x7a,0xa9,0xe3,0x39,0xa9,0xe2,0x18,0xa9,
+- 0xe1,0x07,0xa9,0x10,0x08,0x01,0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,
+- 0x00,0xcf,0x86,0xe5,0xdd,0xab,0xd4,0x19,0xe3,0x1c,0xab,0xe2,0xfb,0xaa,0xe1,0xea,
+- 0xaa,0x10,0x08,0x01,0xff,0xe9,0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0xe3,
+- 0x83,0xab,0xe2,0x62,0xab,0xe1,0x51,0xab,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,
+- 0x01,0xff,0xe9,0x9b,0xbb,0x00,0x83,0xe2,0x68,0xf9,0xe1,0x52,0xf6,0xe0,0xcf,0xf4,
+- 0xcf,0x86,0xd5,0x31,0xc4,0xe3,0x51,0x4e,0xe2,0xf2,0x4c,0xe1,0x09,0xcc,0xe0,0x99,
+- 0x4b,0xcf,0x86,0xe5,0x8b,0x49,0xe4,0xac,0x46,0xe3,0x76,0xbc,0xe2,0xcd,0xbb,0xe1,
+- 0xa8,0xbb,0xe0,0x81,0xbb,0xcf,0x86,0xe5,0x4e,0xbb,0x94,0x07,0x63,0x39,0xbb,0x07,
+- 0x00,0x07,0x00,0xe4,0x3b,0xf4,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,
+- 0xe1,0x4a,0xe1,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0x39,0xe2,0xcf,0x86,
+- 0xe5,0xfe,0xe1,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0x39,0xe2,0xcf,0x06,
+- 0x13,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0xd4,0xf3,0xe3,0xbd,0xf2,
+- 0xd2,0xa0,0xe1,0x73,0xe6,0xd0,0x21,0xcf,0x86,0xe5,0x74,0xe3,0xe4,0xf0,0xe2,0xe3,
+- 0xae,0xe2,0xe2,0x8d,0xe2,0xe1,0x7b,0xe2,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,
+- 0x05,0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0xd0,0xe4,0xe3,0x8f,0xe4,
+- 0xe2,0x6e,0xe4,0xe1,0x5d,0xe4,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,
+- 0xe5,0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0x57,0xe5,0xe1,0x46,0xe5,0x10,0x09,
+- 0x05,0xff,0xf0,0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0x77,
+- 0xe5,0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,
+- 0x88,0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0xbd,0xe5,0xd2,0x14,0xe1,0x8c,0xe5,
++ 0x00,0xe4,0xe1,0x45,0xe3,0x3b,0x45,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x87,0xad,
++ 0xd0,0x21,0xcf,0x86,0xe5,0x81,0xaa,0xe4,0x00,0xaa,0xe3,0xbf,0xa9,0xe2,0x9e,0xa9,
++ 0xe1,0x8d,0xa9,0x10,0x08,0x01,0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,
++ 0x00,0xcf,0x86,0xe5,0x63,0xac,0xd4,0x19,0xe3,0xa2,0xab,0xe2,0x81,0xab,0xe1,0x70,
++ 0xab,0x10,0x08,0x01,0xff,0xe9,0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0xe3,
++ 0x09,0xac,0xe2,0xe8,0xab,0xe1,0xd7,0xab,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,
++ 0x01,0xff,0xe9,0x9b,0xbb,0x00,0x83,0xe2,0x19,0xfa,0xe1,0xf2,0xf6,0xe0,0x6f,0xf5,
++ 0xcf,0x86,0xd5,0x31,0xc4,0xe3,0x54,0x4e,0xe2,0xf5,0x4c,0xe1,0xa4,0xcc,0xe0,0x9c,
++ 0x4b,0xcf,0x86,0xe5,0x8e,0x49,0xe4,0xaf,0x46,0xe3,0x11,0xbd,0xe2,0x68,0xbc,0xe1,
++ 0x43,0xbc,0xe0,0x1c,0xbc,0xcf,0x86,0xe5,0xe9,0xbb,0x94,0x07,0x63,0xd4,0xbb,0x07,
++ 0x00,0x07,0x00,0xe4,0xdb,0xf4,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,
++ 0xe1,0xea,0xe1,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0xd9,0xe2,0xcf,0x86,
++ 0xe5,0x9e,0xe2,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0xd9,0xe2,0xcf,0x06,
++ 0x13,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x74,0xf4,0xe3,0x5d,0xf3,
++ 0xd2,0xa0,0xe1,0x13,0xe7,0xd0,0x21,0xcf,0x86,0xe5,0x14,0xe4,0xe4,0x90,0xe3,0xe3,
++ 0x4e,0xe3,0xe2,0x2d,0xe3,0xe1,0x1b,0xe3,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,
++ 0x05,0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0x70,0xe5,0xe3,0x2f,0xe5,
++ 0xe2,0x0e,0xe5,0xe1,0xfd,0xe4,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,
++ 0xe5,0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0xf7,0xe5,0xe1,0xe6,0xe5,0x10,0x09,
++ 0x05,0xff,0xf0,0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0x17,
++ 0xe6,0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,
++ 0x88,0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0x5d,0xe6,0xd2,0x14,0xe1,0x2c,0xe6,
+ 0x10,0x08,0x05,0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,
+- 0x98,0xe5,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,
+- 0xd1,0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0xed,0xea,0xd4,0x19,0xe3,0x26,0xea,0xe2,0x04,
+- 0xea,0xe1,0xf3,0xe9,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,
+- 0xb7,0x00,0xd3,0x18,0xe2,0x70,0xea,0xe1,0x5f,0xea,0x10,0x09,0x05,0xff,0xf0,0xa3,
+- 0xbd,0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x88,0xea,0x10,
++ 0x38,0xe6,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,
++ 0xd1,0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0x8d,0xeb,0xd4,0x19,0xe3,0xc6,0xea,0xe2,0xa4,
++ 0xea,0xe1,0x93,0xea,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,
++ 0xb7,0x00,0xd3,0x18,0xe2,0x10,0xeb,0xe1,0xff,0xea,0x10,0x09,0x05,0xff,0xf0,0xa3,
++ 0xbd,0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x28,0xeb,0x10,
+ 0x08,0x05,0xff,0xe7,0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,
+ 0x08,0x05,0xff,0xe7,0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,
+- 0x05,0xff,0xe7,0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0x8a,
+- 0xec,0xd4,0x1a,0xe3,0xc2,0xeb,0xe2,0xa8,0xeb,0xe1,0x95,0xeb,0x10,0x08,0x05,0xff,
+- 0xe7,0x9b,0xb4,0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0x0a,0xec,
+- 0xe1,0xf8,0xeb,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,
+- 0x00,0xd2,0x13,0xe1,0x26,0xec,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,
++ 0x05,0xff,0xe7,0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0x2a,
++ 0xed,0xd4,0x1a,0xe3,0x62,0xec,0xe2,0x48,0xec,0xe1,0x35,0xec,0x10,0x08,0x05,0xff,
++ 0xe7,0x9b,0xb4,0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0xaa,0xec,
++ 0xe1,0x98,0xec,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,
++ 0x00,0xd2,0x13,0xe1,0xc6,0xec,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,
+ 0xe7,0xa9,0x80,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,
+ 0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,
+- 0xff,0xe7,0xaa,0xae,0x00,0xe0,0x3c,0xef,0xcf,0x86,0xd5,0x1d,0xe4,0xb1,0xed,0xe3,
+- 0x6d,0xed,0xe2,0x4b,0xed,0xe1,0x3a,0xed,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,
+- 0x00,0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0x58,0xee,0xe2,0x34,0xee,0xe1,
+- 0x23,0xee,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,
+- 0xd3,0x18,0xe2,0xa3,0xee,0xe1,0x92,0xee,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,
+- 0x00,0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0xbb,0xee,0x10,0x08,0x05,
++ 0xff,0xe7,0xaa,0xae,0x00,0xe0,0xdc,0xef,0xcf,0x86,0xd5,0x1d,0xe4,0x51,0xee,0xe3,
++ 0x0d,0xee,0xe2,0xeb,0xed,0xe1,0xda,0xed,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,
++ 0x00,0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0xf8,0xee,0xe2,0xd4,0xee,0xe1,
++ 0xc3,0xee,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,
++ 0xd3,0x18,0xe2,0x43,0xef,0xe1,0x32,0xef,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,
++ 0x00,0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0x5b,0xef,0x10,0x08,0x05,
+ 0xff,0xe8,0x9a,0x88,0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,
+ 0xff,0xe8,0x9c,0xa8,0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,
+ 0x9e,0x86,0x00,0x05,0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+@@ -141,152 +141,152 @@ static const unsigned char utf8data[64080] = {
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* nfdi_30100 */
+- 0x57,0x04,0x01,0x00,0xc6,0xd5,0x13,0xe4,0xa8,0x59,0xe3,0xe2,0x54,0xe2,0x5b,0x4f,
+- 0xc1,0xe0,0x87,0x4d,0xcf,0x06,0x01,0x00,0xd4,0xb8,0xd3,0x27,0xe2,0x89,0x9f,0xe1,
+- 0x91,0x8d,0xe0,0x21,0x71,0xcf,0x86,0xc5,0xe4,0x80,0x69,0xe3,0xcb,0x64,0xe2,0x61,
+- 0x62,0xe1,0x94,0x61,0xe0,0x59,0x61,0xcf,0x86,0xe5,0x1e,0x61,0x64,0x01,0x61,0x0b,
+- 0x00,0xd2,0x0e,0xe1,0x3f,0xa0,0xe0,0xba,0x9f,0xcf,0x86,0xcf,0x06,0x01,0x00,0xd1,
+- 0x0c,0xe0,0x1e,0xa5,0xcf,0x86,0xcf,0x06,0x02,0xff,0xff,0xd0,0x08,0xcf,0x86,0xcf,
+- 0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xe4,0x1b,0xb6,0xe3,0x95,
+- 0xad,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x87,0xa9,0xd0,0x21,0xcf,0x86,0xe5,0x81,
+- 0xa6,0xe4,0x00,0xa6,0xe3,0xbf,0xa5,0xe2,0x9e,0xa5,0xe1,0x8d,0xa5,0x10,0x08,0x01,
+- 0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,0x00,0xcf,0x86,0xe5,0x63,0xa8,
+- 0xd4,0x19,0xe3,0xa2,0xa7,0xe2,0x81,0xa7,0xe1,0x70,0xa7,0x10,0x08,0x01,0xff,0xe9,
+- 0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0xe3,0x09,0xa8,0xe2,0xe8,0xa7,0xe1,
+- 0xd7,0xa7,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,0x01,0xff,0xe9,0x9b,0xbb,0x00,
+- 0x83,0xe2,0xee,0xf5,0xe1,0xd8,0xf2,0xe0,0x55,0xf1,0xcf,0x86,0xd5,0x31,0xc4,0xe3,
+- 0xd5,0xcb,0xe2,0xae,0xc9,0xe1,0x8f,0xc8,0xe0,0x1f,0xbf,0xcf,0x86,0xe5,0x12,0xbb,
+- 0xe4,0x0b,0xba,0xe3,0xfc,0xb8,0xe2,0x53,0xb8,0xe1,0x2e,0xb8,0xe0,0x07,0xb8,0xcf,
+- 0x86,0xe5,0xd4,0xb7,0x94,0x07,0x63,0xbf,0xb7,0x07,0x00,0x07,0x00,0xe4,0xc1,0xf0,
+- 0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,0xe1,0xd0,0xdd,0xcf,0x86,0xcf,
+- 0x06,0x05,0x00,0xd1,0x0e,0xe0,0xbf,0xde,0xcf,0x86,0xe5,0x84,0xde,0xcf,0x06,0x11,
+- 0x00,0xd0,0x0b,0xcf,0x86,0xe5,0xbf,0xde,0xcf,0x06,0x13,0x00,0xcf,0x86,0xd5,0x06,
+- 0xcf,0x06,0x00,0x00,0xe4,0x5a,0xf0,0xe3,0x43,0xef,0xd2,0xa0,0xe1,0xf9,0xe2,0xd0,
+- 0x21,0xcf,0x86,0xe5,0xfa,0xdf,0xe4,0x76,0xdf,0xe3,0x34,0xdf,0xe2,0x13,0xdf,0xe1,
+- 0x01,0xdf,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,0x05,0xff,0xe4,0xb8,0xb8,0x00,
+- 0xcf,0x86,0xd5,0x1c,0xe4,0x56,0xe1,0xe3,0x15,0xe1,0xe2,0xf4,0xe0,0xe1,0xe3,0xe0,
+- 0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,0xe5,0x93,0xb6,0x00,0xd4,0x34,
+- 0xd3,0x18,0xe2,0xdd,0xe1,0xe1,0xcc,0xe1,0x10,0x09,0x05,0xff,0xf0,0xa1,0x9a,0xa8,
+- 0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0xfd,0xe1,0x91,0x11,0x10,0x09,0x05,
+- 0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,0x88,0x00,0x05,0xff,0xe5,0xac,
+- 0xbe,0x00,0xe3,0x43,0xe2,0xd2,0x14,0xe1,0x12,0xe2,0x10,0x08,0x05,0xff,0xe5,0xaf,
+- 0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,0x1e,0xe2,0x10,0x08,0x05,0xff,
+- 0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,0xd1,0xd5,0xd0,0x6a,0xcf,0x86,
+- 0xe5,0x73,0xe7,0xd4,0x19,0xe3,0xac,0xe6,0xe2,0x8a,0xe6,0xe1,0x79,0xe6,0x10,0x08,
+- 0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,0xb7,0x00,0xd3,0x18,0xe2,0xf6,
+- 0xe6,0xe1,0xe5,0xe6,0x10,0x09,0x05,0xff,0xf0,0xa3,0xbd,0x9e,0x00,0x05,0xff,0xf0,
+- 0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x0e,0xe7,0x10,0x08,0x05,0xff,0xe7,0x81,0xbd,
+- 0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,0x08,0x05,0xff,0xe7,0x85,0x85,
+- 0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,0x05,0xff,0xe7,0x86,0x9c,0x00,
+- 0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0x10,0xe9,0xd4,0x1a,0xe3,0x48,0xe8,
+- 0xe2,0x2e,0xe8,0xe1,0x1b,0xe8,0x10,0x08,0x05,0xff,0xe7,0x9b,0xb4,0x00,0x05,0xff,
+- 0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0x90,0xe8,0xe1,0x7e,0xe8,0x10,0x08,0x05,
+- 0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,0x00,0xd2,0x13,0xe1,0xac,0xe8,
+- 0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,0xe7,0xa9,0x80,0x00,0xd1,0x12,
+- 0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,
+- 0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,0xff,0xe7,0xaa,0xae,0x00,0xe0,
+- 0xc2,0xeb,0xcf,0x86,0xd5,0x1d,0xe4,0x37,0xea,0xe3,0xf3,0xe9,0xe2,0xd1,0xe9,0xe1,
+- 0xc0,0xe9,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,0x00,0x05,0xff,0xe4,0x8f,0x95,
+- 0x00,0xd4,0x19,0xe3,0xde,0xea,0xe2,0xba,0xea,0xe1,0xa9,0xea,0x10,0x08,0x05,0xff,
+- 0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,0xd3,0x18,0xe2,0x29,0xeb,0xe1,
+- 0x18,0xeb,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,0x00,0x05,0xff,0xf0,0xa7,0x83,
+- 0x92,0x00,0xd2,0x13,0xe1,0x41,0xeb,0x10,0x08,0x05,0xff,0xe8,0x9a,0x88,0x00,0x05,
+- 0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,0xff,0xe8,0x9c,0xa8,0x00,0x05,
+- 0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,0x9e,0x86,0x00,0x05,0xff,0xe4,
+- 0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x57,0x04,0x01,0x00,0xc6,0xd5,0x16,0xe4,0xc2,0x59,0xe3,0xfb,0x54,0xe2,0x74,0x4f,
++ 0xc1,0xe0,0xa0,0x4d,0xcf,0x86,0x65,0x84,0x4d,0x01,0x00,0xd4,0xb8,0xd3,0x27,0xe2,
++ 0x0c,0xa0,0xe1,0xdf,0x8d,0xe0,0x39,0x71,0xcf,0x86,0xc5,0xe4,0x98,0x69,0xe3,0xe3,
++ 0x64,0xe2,0x79,0x62,0xe1,0xac,0x61,0xe0,0x71,0x61,0xcf,0x86,0xe5,0x36,0x61,0x64,
++ 0x19,0x61,0x0b,0x00,0xd2,0x0e,0xe1,0xc2,0xa0,0xe0,0x3d,0xa0,0xcf,0x86,0xcf,0x06,
++ 0x01,0x00,0xd1,0x0c,0xe0,0xa1,0xa5,0xcf,0x86,0xcf,0x06,0x02,0xff,0xff,0xd0,0x08,
++ 0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xe4,0x9e,
++ 0xb6,0xe3,0x18,0xae,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x0a,0xaa,0xd0,0x21,0xcf,
++ 0x86,0xe5,0x04,0xa7,0xe4,0x83,0xa6,0xe3,0x42,0xa6,0xe2,0x21,0xa6,0xe1,0x10,0xa6,
++ 0x10,0x08,0x01,0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,0x00,0xcf,0x86,
++ 0xe5,0xe6,0xa8,0xd4,0x19,0xe3,0x25,0xa8,0xe2,0x04,0xa8,0xe1,0xf3,0xa7,0x10,0x08,
++ 0x01,0xff,0xe9,0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0xe3,0x8c,0xa8,0xe2,
++ 0x6b,0xa8,0xe1,0x5a,0xa8,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,0x01,0xff,0xe9,
++ 0x9b,0xbb,0x00,0x83,0xe2,0x9c,0xf6,0xe1,0x75,0xf3,0xe0,0xf2,0xf1,0xcf,0x86,0xd5,
++ 0x31,0xc4,0xe3,0x6d,0xcc,0xe2,0x46,0xca,0xe1,0x27,0xc9,0xe0,0xb7,0xbf,0xcf,0x86,
++ 0xe5,0xaa,0xbb,0xe4,0xa3,0xba,0xe3,0x94,0xb9,0xe2,0xeb,0xb8,0xe1,0xc6,0xb8,0xe0,
++ 0x9f,0xb8,0xcf,0x86,0xe5,0x6c,0xb8,0x94,0x07,0x63,0x57,0xb8,0x07,0x00,0x07,0x00,
++ 0xe4,0x5e,0xf1,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,0xe1,0x6d,0xde,
++ 0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0x5c,0xdf,0xcf,0x86,0xe5,0x21,0xdf,
++ 0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0x5c,0xdf,0xcf,0x06,0x13,0x00,0xcf,
++ 0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0xf7,0xf0,0xe3,0xe0,0xef,0xd2,0xa0,0xe1,
++ 0x96,0xe3,0xd0,0x21,0xcf,0x86,0xe5,0x97,0xe0,0xe4,0x13,0xe0,0xe3,0xd1,0xdf,0xe2,
++ 0xb0,0xdf,0xe1,0x9e,0xdf,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,0x05,0xff,0xe4,
++ 0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0xf3,0xe1,0xe3,0xb2,0xe1,0xe2,0x91,0xe1,
++ 0xe1,0x80,0xe1,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,0xe5,0x93,0xb6,
++ 0x00,0xd4,0x34,0xd3,0x18,0xe2,0x7a,0xe2,0xe1,0x69,0xe2,0x10,0x09,0x05,0xff,0xf0,
++ 0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0x9a,0xe2,0x91,0x11,
++ 0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,0x88,0x00,0x05,
++ 0xff,0xe5,0xac,0xbe,0x00,0xe3,0xe0,0xe2,0xd2,0x14,0xe1,0xaf,0xe2,0x10,0x08,0x05,
++ 0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,0xbb,0xe2,0x10,
++ 0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,0xd1,0xd5,0xd0,
++ 0x6a,0xcf,0x86,0xe5,0x10,0xe8,0xd4,0x19,0xe3,0x49,0xe7,0xe2,0x27,0xe7,0xe1,0x16,
++ 0xe7,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,0xb7,0x00,0xd3,
++ 0x18,0xe2,0x93,0xe7,0xe1,0x82,0xe7,0x10,0x09,0x05,0xff,0xf0,0xa3,0xbd,0x9e,0x00,
++ 0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0xab,0xe7,0x10,0x08,0x05,0xff,
++ 0xe7,0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,0x08,0x05,0xff,
++ 0xe7,0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,0x05,0xff,0xe7,
++ 0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0xad,0xe9,0xd4,0x1a,
++ 0xe3,0xe5,0xe8,0xe2,0xcb,0xe8,0xe1,0xb8,0xe8,0x10,0x08,0x05,0xff,0xe7,0x9b,0xb4,
++ 0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0x2d,0xe9,0xe1,0x1b,0xe9,
++ 0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,0x00,0xd2,0x13,
++ 0xe1,0x49,0xe9,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,0xe7,0xa9,0x80,
++ 0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,0xff,0xf0,0xa5,
++ 0xaa,0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,0xff,0xe7,0xaa,
++ 0xae,0x00,0xe0,0x5f,0xec,0xcf,0x86,0xd5,0x1d,0xe4,0xd4,0xea,0xe3,0x90,0xea,0xe2,
++ 0x6e,0xea,0xe1,0x5d,0xea,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,0x00,0x05,0xff,
++ 0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0x7b,0xeb,0xe2,0x57,0xeb,0xe1,0x46,0xeb,0x10,
++ 0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,0xd3,0x18,0xe2,
++ 0xc6,0xeb,0xe1,0xb5,0xeb,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,0x00,0x05,0xff,
++ 0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0xde,0xeb,0x10,0x08,0x05,0xff,0xe8,0x9a,
++ 0x88,0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,0xff,0xe8,0x9c,
++ 0xa8,0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,0x9e,0x86,0x00,
++ 0x05,0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* nfdicf_30200 */
+- 0xd7,0x07,0x66,0x84,0x05,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x96,0x13,0xe3,0x60,0x0e,
+- 0xe2,0x49,0x07,0xc1,0xe0,0x4b,0x06,0xcf,0x86,0x65,0x2d,0x06,0x01,0x00,0xd4,0x2a,
+- 0xe3,0xce,0x35,0xe2,0x02,0x9c,0xe1,0xca,0x2e,0xe0,0x28,0x1b,0xcf,0x86,0xc5,0xe4,
+- 0xf9,0x65,0xe3,0x44,0x61,0xe2,0xda,0x5e,0xe1,0x0d,0x5e,0xe0,0xd2,0x5d,0xcf,0x86,
+- 0xe5,0x97,0x5d,0x64,0x7a,0x5d,0x0b,0x00,0x83,0xe2,0xf6,0xf2,0xe1,0xe0,0xef,0xe0,
+- 0x5d,0xee,0xcf,0x86,0xd5,0x31,0xc4,0xe3,0xdf,0x47,0xe2,0x80,0x46,0xe1,0x97,0xc5,
+- 0xe0,0x27,0x45,0xcf,0x86,0xe5,0x19,0x43,0xe4,0x3a,0x40,0xe3,0x04,0xb6,0xe2,0x5b,
+- 0xb5,0xe1,0x36,0xb5,0xe0,0x0f,0xb5,0xcf,0x86,0xe5,0xdc,0xb4,0x94,0x07,0x63,0xc7,
+- 0xb4,0x07,0x00,0x07,0x00,0xe4,0xc9,0xed,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,
+- 0xd2,0x0b,0xe1,0xd8,0xda,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0xc7,0xdb,
+- 0xcf,0x86,0xe5,0x8c,0xdb,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0xc7,0xdb,
+- 0xcf,0x06,0x13,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x62,0xed,0xe3,
+- 0x4b,0xec,0xd2,0xa0,0xe1,0x01,0xe0,0xd0,0x21,0xcf,0x86,0xe5,0x02,0xdd,0xe4,0x7e,
+- 0xdc,0xe3,0x3c,0xdc,0xe2,0x1b,0xdc,0xe1,0x09,0xdc,0x10,0x08,0x05,0xff,0xe4,0xb8,
+- 0xbd,0x00,0x05,0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0x5e,0xde,0xe3,
+- 0x1d,0xde,0xe2,0xfc,0xdd,0xe1,0xeb,0xdd,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,
+- 0x05,0xff,0xe5,0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0xe5,0xde,0xe1,0xd4,0xde,
++ 0xd7,0x07,0x66,0x84,0x05,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x99,0x13,0xe3,0x63,0x0e,
++ 0xe2,0x4c,0x07,0xc1,0xe0,0x4e,0x06,0xcf,0x86,0x65,0x2d,0x06,0x01,0x00,0xd4,0x2a,
++ 0xe3,0xd0,0x35,0xe2,0x88,0x9c,0xe1,0xcd,0x2e,0xe0,0x2b,0x1b,0xcf,0x86,0xc5,0xe4,
++ 0x14,0x66,0xe3,0x5f,0x61,0xe2,0xf5,0x5e,0xe1,0x28,0x5e,0xe0,0xed,0x5d,0xcf,0x86,
++ 0xe5,0xb2,0x5d,0x64,0x95,0x5d,0x0b,0x00,0x83,0xe2,0xa7,0xf3,0xe1,0x80,0xf0,0xe0,
++ 0xfd,0xee,0xcf,0x86,0xd5,0x31,0xc4,0xe3,0xe2,0x47,0xe2,0x83,0x46,0xe1,0x32,0xc6,
++ 0xe0,0x2a,0x45,0xcf,0x86,0xe5,0x1c,0x43,0xe4,0x3d,0x40,0xe3,0x9f,0xb6,0xe2,0xf6,
++ 0xb5,0xe1,0xd1,0xb5,0xe0,0xaa,0xb5,0xcf,0x86,0xe5,0x77,0xb5,0x94,0x07,0x63,0x62,
++ 0xb5,0x07,0x00,0x07,0x00,0xe4,0x69,0xee,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,
++ 0xd2,0x0b,0xe1,0x78,0xdb,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0x67,0xdc,
++ 0xcf,0x86,0xe5,0x2c,0xdc,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0x67,0xdc,
++ 0xcf,0x06,0x13,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x02,0xee,0xe3,
++ 0xeb,0xec,0xd2,0xa0,0xe1,0xa1,0xe0,0xd0,0x21,0xcf,0x86,0xe5,0xa2,0xdd,0xe4,0x1e,
++ 0xdd,0xe3,0xdc,0xdc,0xe2,0xbb,0xdc,0xe1,0xa9,0xdc,0x10,0x08,0x05,0xff,0xe4,0xb8,
++ 0xbd,0x00,0x05,0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0xfe,0xde,0xe3,
++ 0xbd,0xde,0xe2,0x9c,0xde,0xe1,0x8b,0xde,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,
++ 0x05,0xff,0xe5,0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0x85,0xdf,0xe1,0x74,0xdf,
+ 0x10,0x09,0x05,0xff,0xf0,0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,
+- 0xe2,0x05,0xdf,0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,
+- 0xe5,0xac,0x88,0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0x4b,0xdf,0xd2,0x14,0xe1,
+- 0x1a,0xdf,0x10,0x08,0x05,0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,
+- 0x00,0xe1,0x26,0xdf,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,
+- 0xa2,0x00,0xd1,0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0x7b,0xe4,0xd4,0x19,0xe3,0xb4,0xe3,
+- 0xe2,0x92,0xe3,0xe1,0x81,0xe3,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,
+- 0xe6,0xb5,0xb7,0x00,0xd3,0x18,0xe2,0xfe,0xe3,0xe1,0xed,0xe3,0x10,0x09,0x05,0xff,
+- 0xf0,0xa3,0xbd,0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x16,
++ 0xe2,0xa5,0xdf,0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,
++ 0xe5,0xac,0x88,0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0xeb,0xdf,0xd2,0x14,0xe1,
++ 0xba,0xdf,0x10,0x08,0x05,0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,
++ 0x00,0xe1,0xc6,0xdf,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,
++ 0xa2,0x00,0xd1,0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0x1b,0xe5,0xd4,0x19,0xe3,0x54,0xe4,
++ 0xe2,0x32,0xe4,0xe1,0x21,0xe4,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,
++ 0xe6,0xb5,0xb7,0x00,0xd3,0x18,0xe2,0x9e,0xe4,0xe1,0x8d,0xe4,0x10,0x09,0x05,0xff,
++ 0xf0,0xa3,0xbd,0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0xb6,
+ 0xe4,0x10,0x08,0x05,0xff,0xe7,0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,
+ 0x11,0x10,0x08,0x05,0xff,0xe7,0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,
+ 0x10,0x08,0x05,0xff,0xe7,0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,
+- 0xe5,0x18,0xe6,0xd4,0x1a,0xe3,0x50,0xe5,0xe2,0x36,0xe5,0xe1,0x23,0xe5,0x10,0x08,
++ 0xe5,0xb8,0xe6,0xd4,0x1a,0xe3,0xf0,0xe5,0xe2,0xd6,0xe5,0xe1,0xc3,0xe5,0x10,0x08,
+ 0x05,0xff,0xe7,0x9b,0xb4,0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,
+- 0x98,0xe5,0xe1,0x86,0xe5,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,
+- 0x83,0xa3,0x00,0xd2,0x13,0xe1,0xb4,0xe5,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,
++ 0x38,0xe6,0xe1,0x26,0xe6,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,
++ 0x83,0xa3,0x00,0xd2,0x13,0xe1,0x54,0xe6,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,
+ 0x05,0xff,0xe7,0xa9,0x80,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,
+ 0x00,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,
+- 0x00,0x05,0xff,0xe7,0xaa,0xae,0x00,0xe0,0xca,0xe8,0xcf,0x86,0xd5,0x1d,0xe4,0x3f,
+- 0xe7,0xe3,0xfb,0xe6,0xe2,0xd9,0xe6,0xe1,0xc8,0xe6,0x10,0x09,0x05,0xff,0xf0,0xa3,
+- 0x8d,0x9f,0x00,0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0xe6,0xe7,0xe2,0xc2,
+- 0xe7,0xe1,0xb1,0xe7,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,
+- 0x8a,0x00,0xd3,0x18,0xe2,0x31,0xe8,0xe1,0x20,0xe8,0x10,0x09,0x05,0xff,0xf0,0xa6,
+- 0xbe,0xb1,0x00,0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0x49,0xe8,0x10,
++ 0x00,0x05,0xff,0xe7,0xaa,0xae,0x00,0xe0,0x6a,0xe9,0xcf,0x86,0xd5,0x1d,0xe4,0xdf,
++ 0xe7,0xe3,0x9b,0xe7,0xe2,0x79,0xe7,0xe1,0x68,0xe7,0x10,0x09,0x05,0xff,0xf0,0xa3,
++ 0x8d,0x9f,0x00,0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0x86,0xe8,0xe2,0x62,
++ 0xe8,0xe1,0x51,0xe8,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,
++ 0x8a,0x00,0xd3,0x18,0xe2,0xd1,0xe8,0xe1,0xc0,0xe8,0x10,0x09,0x05,0xff,0xf0,0xa6,
++ 0xbe,0xb1,0x00,0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0xe9,0xe8,0x10,
+ 0x08,0x05,0xff,0xe8,0x9a,0x88,0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,
+ 0x08,0x05,0xff,0xe8,0x9c,0xa8,0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,
+ 0xff,0xe8,0x9e,0x86,0x00,0x05,0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* nfdi_30200 */
+- 0x57,0x04,0x01,0x00,0xc6,0xd5,0x13,0xe4,0x68,0x53,0xe3,0xa2,0x4e,0xe2,0x1b,0x49,
+- 0xc1,0xe0,0x47,0x47,0xcf,0x06,0x01,0x00,0xd4,0x2a,0xe3,0x99,0x99,0xe2,0x48,0x99,
+- 0xe1,0x50,0x87,0xe0,0xe0,0x6a,0xcf,0x86,0xc5,0xe4,0x3f,0x63,0xe3,0x8a,0x5e,0xe2,
+- 0x20,0x5c,0xe1,0x53,0x5b,0xe0,0x18,0x5b,0xcf,0x86,0xe5,0xdd,0x5a,0x64,0xc0,0x5a,
+- 0x0b,0x00,0x83,0xe2,0x3c,0xf0,0xe1,0x26,0xed,0xe0,0xa3,0xeb,0xcf,0x86,0xd5,0x31,
+- 0xc4,0xe3,0x23,0xc6,0xe2,0xfc,0xc3,0xe1,0xdd,0xc2,0xe0,0x6d,0xb9,0xcf,0x86,0xe5,
+- 0x60,0xb5,0xe4,0x59,0xb4,0xe3,0x4a,0xb3,0xe2,0xa1,0xb2,0xe1,0x7c,0xb2,0xe0,0x55,
+- 0xb2,0xcf,0x86,0xe5,0x22,0xb2,0x94,0x07,0x63,0x0d,0xb2,0x07,0x00,0x07,0x00,0xe4,
+- 0x0f,0xeb,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,0xe1,0x1e,0xd8,0xcf,
+- 0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0x0d,0xd9,0xcf,0x86,0xe5,0xd2,0xd8,0xcf,
+- 0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0x0d,0xd9,0xcf,0x06,0x13,0x00,0xcf,0x86,
+- 0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0xa8,0xea,0xe3,0x91,0xe9,0xd2,0xa0,0xe1,0x47,
+- 0xdd,0xd0,0x21,0xcf,0x86,0xe5,0x48,0xda,0xe4,0xc4,0xd9,0xe3,0x82,0xd9,0xe2,0x61,
+- 0xd9,0xe1,0x4f,0xd9,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,0x05,0xff,0xe4,0xb8,
+- 0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0xa4,0xdb,0xe3,0x63,0xdb,0xe2,0x42,0xdb,0xe1,
+- 0x31,0xdb,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,0xe5,0x93,0xb6,0x00,
+- 0xd4,0x34,0xd3,0x18,0xe2,0x2b,0xdc,0xe1,0x1a,0xdc,0x10,0x09,0x05,0xff,0xf0,0xa1,
+- 0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0x4b,0xdc,0x91,0x11,0x10,
+- 0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,0x88,0x00,0x05,0xff,
+- 0xe5,0xac,0xbe,0x00,0xe3,0x91,0xdc,0xd2,0x14,0xe1,0x60,0xdc,0x10,0x08,0x05,0xff,
+- 0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,0x6c,0xdc,0x10,0x08,
+- 0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,0xd1,0xd5,0xd0,0x6a,
+- 0xcf,0x86,0xe5,0xc1,0xe1,0xd4,0x19,0xe3,0xfa,0xe0,0xe2,0xd8,0xe0,0xe1,0xc7,0xe0,
+- 0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,0xb7,0x00,0xd3,0x18,
+- 0xe2,0x44,0xe1,0xe1,0x33,0xe1,0x10,0x09,0x05,0xff,0xf0,0xa3,0xbd,0x9e,0x00,0x05,
+- 0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0x5c,0xe1,0x10,0x08,0x05,0xff,0xe7,
+- 0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,0x08,0x05,0xff,0xe7,
+- 0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,0x05,0xff,0xe7,0x86,
+- 0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0x5e,0xe3,0xd4,0x1a,0xe3,
+- 0x96,0xe2,0xe2,0x7c,0xe2,0xe1,0x69,0xe2,0x10,0x08,0x05,0xff,0xe7,0x9b,0xb4,0x00,
+- 0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0xde,0xe2,0xe1,0xcc,0xe2,0x10,
+- 0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,0x00,0xd2,0x13,0xe1,
+- 0xfa,0xe2,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,0xe7,0xa9,0x80,0x00,
+- 0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,0xff,0xf0,0xa5,0xaa,
+- 0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,0xff,0xe7,0xaa,0xae,
+- 0x00,0xe0,0x10,0xe6,0xcf,0x86,0xd5,0x1d,0xe4,0x85,0xe4,0xe3,0x41,0xe4,0xe2,0x1f,
+- 0xe4,0xe1,0x0e,0xe4,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,0x00,0x05,0xff,0xe4,
+- 0x8f,0x95,0x00,0xd4,0x19,0xe3,0x2c,0xe5,0xe2,0x08,0xe5,0xe1,0xf7,0xe4,0x10,0x08,
+- 0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,0xd3,0x18,0xe2,0x77,
+- 0xe5,0xe1,0x66,0xe5,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,0x00,0x05,0xff,0xf0,
+- 0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0x8f,0xe5,0x10,0x08,0x05,0xff,0xe8,0x9a,0x88,
+- 0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,0xff,0xe8,0x9c,0xa8,
+- 0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,0x9e,0x86,0x00,0x05,
+- 0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x57,0x04,0x01,0x00,0xc6,0xd5,0x16,0xe4,0x82,0x53,0xe3,0xbb,0x4e,0xe2,0x34,0x49,
++ 0xc1,0xe0,0x60,0x47,0xcf,0x86,0x65,0x44,0x47,0x01,0x00,0xd4,0x2a,0xe3,0x1c,0x9a,
++ 0xe2,0xcb,0x99,0xe1,0x9e,0x87,0xe0,0xf8,0x6a,0xcf,0x86,0xc5,0xe4,0x57,0x63,0xe3,
++ 0xa2,0x5e,0xe2,0x38,0x5c,0xe1,0x6b,0x5b,0xe0,0x30,0x5b,0xcf,0x86,0xe5,0xf5,0x5a,
++ 0x64,0xd8,0x5a,0x0b,0x00,0x83,0xe2,0xea,0xf0,0xe1,0xc3,0xed,0xe0,0x40,0xec,0xcf,
++ 0x86,0xd5,0x31,0xc4,0xe3,0xbb,0xc6,0xe2,0x94,0xc4,0xe1,0x75,0xc3,0xe0,0x05,0xba,
++ 0xcf,0x86,0xe5,0xf8,0xb5,0xe4,0xf1,0xb4,0xe3,0xe2,0xb3,0xe2,0x39,0xb3,0xe1,0x14,
++ 0xb3,0xe0,0xed,0xb2,0xcf,0x86,0xe5,0xba,0xb2,0x94,0x07,0x63,0xa5,0xb2,0x07,0x00,
++ 0x07,0x00,0xe4,0xac,0xeb,0xd3,0x08,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd2,0x0b,0xe1,
++ 0xbb,0xd8,0xcf,0x86,0xcf,0x06,0x05,0x00,0xd1,0x0e,0xe0,0xaa,0xd9,0xcf,0x86,0xe5,
++ 0x6f,0xd9,0xcf,0x06,0x11,0x00,0xd0,0x0b,0xcf,0x86,0xe5,0xaa,0xd9,0xcf,0x06,0x13,
++ 0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x45,0xeb,0xe3,0x2e,0xea,0xd2,
++ 0xa0,0xe1,0xe4,0xdd,0xd0,0x21,0xcf,0x86,0xe5,0xe5,0xda,0xe4,0x61,0xda,0xe3,0x1f,
++ 0xda,0xe2,0xfe,0xd9,0xe1,0xec,0xd9,0x10,0x08,0x05,0xff,0xe4,0xb8,0xbd,0x00,0x05,
++ 0xff,0xe4,0xb8,0xb8,0x00,0xcf,0x86,0xd5,0x1c,0xe4,0x41,0xdc,0xe3,0x00,0xdc,0xe2,
++ 0xdf,0xdb,0xe1,0xce,0xdb,0x10,0x08,0x05,0xff,0xe5,0x92,0xa2,0x00,0x05,0xff,0xe5,
++ 0x93,0xb6,0x00,0xd4,0x34,0xd3,0x18,0xe2,0xc8,0xdc,0xe1,0xb7,0xdc,0x10,0x09,0x05,
++ 0xff,0xf0,0xa1,0x9a,0xa8,0x00,0x05,0xff,0xf0,0xa1,0x9b,0xaa,0x00,0xe2,0xe8,0xdc,
++ 0x91,0x11,0x10,0x09,0x05,0xff,0xf0,0xa1,0x8d,0xaa,0x00,0x05,0xff,0xe5,0xac,0x88,
++ 0x00,0x05,0xff,0xe5,0xac,0xbe,0x00,0xe3,0x2e,0xdd,0xd2,0x14,0xe1,0xfd,0xdc,0x10,
++ 0x08,0x05,0xff,0xe5,0xaf,0xb3,0x00,0x05,0xff,0xf0,0xa1,0xac,0x98,0x00,0xe1,0x09,
++ 0xdd,0x10,0x08,0x05,0xff,0xe5,0xbc,0xb3,0x00,0x05,0xff,0xe5,0xb0,0xa2,0x00,0xd1,
++ 0xd5,0xd0,0x6a,0xcf,0x86,0xe5,0x5e,0xe2,0xd4,0x19,0xe3,0x97,0xe1,0xe2,0x75,0xe1,
++ 0xe1,0x64,0xe1,0x10,0x08,0x05,0xff,0xe6,0xb4,0xbe,0x00,0x05,0xff,0xe6,0xb5,0xb7,
++ 0x00,0xd3,0x18,0xe2,0xe1,0xe1,0xe1,0xd0,0xe1,0x10,0x09,0x05,0xff,0xf0,0xa3,0xbd,
++ 0x9e,0x00,0x05,0xff,0xf0,0xa3,0xbe,0x8e,0x00,0xd2,0x13,0xe1,0xf9,0xe1,0x10,0x08,
++ 0x05,0xff,0xe7,0x81,0xbd,0x00,0x05,0xff,0xe7,0x81,0xb7,0x00,0xd1,0x11,0x10,0x08,
++ 0x05,0xff,0xe7,0x85,0x85,0x00,0x05,0xff,0xf0,0xa4,0x89,0xa3,0x00,0x10,0x08,0x05,
++ 0xff,0xe7,0x86,0x9c,0x00,0x05,0xff,0xe4,0x8e,0xab,0x00,0xcf,0x86,0xe5,0xfb,0xe3,
++ 0xd4,0x1a,0xe3,0x33,0xe3,0xe2,0x19,0xe3,0xe1,0x06,0xe3,0x10,0x08,0x05,0xff,0xe7,
++ 0x9b,0xb4,0x00,0x05,0xff,0xf0,0xa5,0x83,0xb3,0x00,0xd3,0x16,0xe2,0x7b,0xe3,0xe1,
++ 0x69,0xe3,0x10,0x08,0x05,0xff,0xe7,0xa3,0x8c,0x00,0x05,0xff,0xe4,0x83,0xa3,0x00,
++ 0xd2,0x13,0xe1,0x97,0xe3,0x10,0x08,0x05,0xff,0xe4,0x84,0xaf,0x00,0x05,0xff,0xe7,
++ 0xa9,0x80,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0xa5,0xa5,0xbc,0x00,0x05,0xff,
++ 0xf0,0xa5,0xaa,0xa7,0x00,0x10,0x09,0x05,0xff,0xf0,0xa5,0xaa,0xa7,0x00,0x05,0xff,
++ 0xe7,0xaa,0xae,0x00,0xe0,0xad,0xe6,0xcf,0x86,0xd5,0x1d,0xe4,0x22,0xe5,0xe3,0xde,
++ 0xe4,0xe2,0xbc,0xe4,0xe1,0xab,0xe4,0x10,0x09,0x05,0xff,0xf0,0xa3,0x8d,0x9f,0x00,
++ 0x05,0xff,0xe4,0x8f,0x95,0x00,0xd4,0x19,0xe3,0xc9,0xe5,0xe2,0xa5,0xe5,0xe1,0x94,
++ 0xe5,0x10,0x08,0x05,0xff,0xe8,0x8d,0x93,0x00,0x05,0xff,0xe8,0x8f,0x8a,0x00,0xd3,
++ 0x18,0xe2,0x14,0xe6,0xe1,0x03,0xe6,0x10,0x09,0x05,0xff,0xf0,0xa6,0xbe,0xb1,0x00,
++ 0x05,0xff,0xf0,0xa7,0x83,0x92,0x00,0xd2,0x13,0xe1,0x2c,0xe6,0x10,0x08,0x05,0xff,
++ 0xe8,0x9a,0x88,0x00,0x05,0xff,0xe8,0x9c,0x8e,0x00,0xd1,0x10,0x10,0x08,0x05,0xff,
++ 0xe8,0x9c,0xa8,0x00,0x05,0xff,0xe8,0x9d,0xab,0x00,0x10,0x08,0x05,0xff,0xe8,0x9e,
++ 0x86,0x00,0x05,0xff,0xe4,0xb5,0x97,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+ /* nfdicf_c0100 */
+ 0xd7,0xb0,0x56,0x04,0x01,0x00,0x95,0xa8,0xd4,0x5e,0xd3,0x2e,0xd2,0x16,0xd1,0x0a,
+ 0x10,0x04,0x01,0x00,0x01,0xff,0x61,0x00,0x10,0x06,0x01,0xff,0x62,0x00,0x01,0xff,
+@@ -299,3174 +299,3184 @@ static const unsigned char utf8data[64080] = {
+ 0xd1,0x0c,0x10,0x06,0x01,0xff,0x74,0x00,0x01,0xff,0x75,0x00,0x10,0x06,0x01,0xff,
+ 0x76,0x00,0x01,0xff,0x77,0x00,0x92,0x16,0xd1,0x0c,0x10,0x06,0x01,0xff,0x78,0x00,
+ 0x01,0xff,0x79,0x00,0x10,0x06,0x01,0xff,0x7a,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0xc6,0xe5,0xf6,0x14,0xe4,0x6c,0x0d,0xe3,0x36,0x08,0xe2,0x1f,0x01,0xc1,0xd0,0x21,
+- 0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x93,0x13,0x52,0x04,0x01,0x00,
+- 0x91,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xce,0xbc,0x00,0x01,0x00,0x01,0x00,0xcf,
+- 0x86,0xe5,0x9d,0x44,0xd4,0x7f,0xd3,0x3f,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x61,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,
+- 0x82,0x00,0x01,0xff,0x61,0xcc,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,
+- 0x88,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x10,0x07,0x01,0xff,0xc3,0xa6,0x00,0x01,
+- 0xff,0x63,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x80,
+- 0x00,0x01,0xff,0x65,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x82,0x00,0x01,
+- 0xff,0x65,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x80,0x00,0x01,
+- 0xff,0x69,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0x82,0x00,0x01,0xff,0x69,
+- 0xcc,0x88,0x00,0xd3,0x3b,0xd2,0x1f,0xd1,0x0f,0x10,0x07,0x01,0xff,0xc3,0xb0,0x00,
+- 0x01,0xff,0x6e,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x80,0x00,0x01,0xff,
+- 0x6f,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x82,0x00,0x01,0xff,
+- 0x6f,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x88,0x00,0x01,0x00,0xd2,0x1f,
+- 0xd1,0x0f,0x10,0x07,0x01,0xff,0xc3,0xb8,0x00,0x01,0xff,0x75,0xcc,0x80,0x00,0x10,
+- 0x08,0x01,0xff,0x75,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x82,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x75,0xcc,0x88,0x00,0x01,0xff,0x79,0xcc,0x81,0x00,0x10,0x07,0x01,
+- 0xff,0xc3,0xbe,0x00,0x01,0xff,0x73,0x73,0x00,0xe1,0xd4,0x03,0xe0,0xeb,0x01,0xcf,
+- 0x86,0xd5,0xfb,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,
+- 0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x86,
+- 0x00,0x01,0xff,0x61,0xcc,0x86,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa8,
+- 0x00,0x01,0xff,0x61,0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x63,0xcc,0x81,0x00,0x01,
+- 0xff,0x63,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x63,0xcc,0x82,
+- 0x00,0x01,0xff,0x63,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x63,0xcc,0x87,0x00,0x01,
+- 0xff,0x63,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x63,0xcc,0x8c,0x00,0x01,
+- 0xff,0x63,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0x8c,0x00,0x01,0xff,0x64,
+- 0xcc,0x8c,0x00,0xd3,0x3b,0xd2,0x1b,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc4,0x91,0x00,
+- 0x01,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x84,0x00,0x01,0xff,0x65,0xcc,0x84,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0x86,0x00,
+- 0x10,0x08,0x01,0xff,0x65,0xcc,0x87,0x00,0x01,0xff,0x65,0xcc,0x87,0x00,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0xa8,0x00,0x01,0xff,0x65,0xcc,0xa8,0x00,
+- 0x10,0x08,0x01,0xff,0x65,0xcc,0x8c,0x00,0x01,0xff,0x65,0xcc,0x8c,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x67,0xcc,0x82,0x00,0x01,0xff,0x67,0xcc,0x82,0x00,0x10,0x08,
+- 0x01,0xff,0x67,0xcc,0x86,0x00,0x01,0xff,0x67,0xcc,0x86,0x00,0xd4,0x7b,0xd3,0x3b,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x67,0xcc,0x87,0x00,0x01,0xff,0x67,0xcc,
+- 0x87,0x00,0x10,0x08,0x01,0xff,0x67,0xcc,0xa7,0x00,0x01,0xff,0x67,0xcc,0xa7,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x68,0xcc,0x82,0x00,0x01,0xff,0x68,0xcc,0x82,0x00,
+- 0x10,0x07,0x01,0xff,0xc4,0xa7,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x69,0xcc,0x83,0x00,0x01,0xff,0x69,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x69,
+- 0xcc,0x84,0x00,0x01,0xff,0x69,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,
+- 0xcc,0x86,0x00,0x01,0xff,0x69,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0xa8,
+- 0x00,0x01,0xff,0x69,0xcc,0xa8,0x00,0xd3,0x37,0xd2,0x17,0xd1,0x0c,0x10,0x08,0x01,
+- 0xff,0x69,0xcc,0x87,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc4,0xb3,0x00,0x01,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x6a,0xcc,0x82,0x00,0x01,0xff,0x6a,0xcc,0x82,0x00,
+- 0x10,0x08,0x01,0xff,0x6b,0xcc,0xa7,0x00,0x01,0xff,0x6b,0xcc,0xa7,0x00,0xd2,0x1c,
+- 0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x6c,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,
+- 0x6c,0xcc,0x81,0x00,0x01,0xff,0x6c,0xcc,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x6c,0xcc,0xa7,0x00,0x01,0xff,0x6c,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,
+- 0x8c,0x00,0x01,0xff,0xc5,0x80,0x00,0xcf,0x86,0xd5,0xed,0xd4,0x72,0xd3,0x37,0xd2,
+- 0x17,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc5,0x82,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0x6e,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x81,0x00,
+- 0x01,0xff,0x6e,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa7,0x00,0x01,0xff,
+- 0x6e,0xcc,0x8c,0x00,0xd2,0x1b,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x8c,0x00,
+- 0x01,0xff,0xca,0xbc,0x6e,0x00,0x10,0x07,0x01,0xff,0xc5,0x8b,0x00,0x01,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0x84,0x00,0x10,
+- 0x08,0x01,0xff,0x6f,0xcc,0x86,0x00,0x01,0xff,0x6f,0xcc,0x86,0x00,0xd3,0x3b,0xd2,
+- 0x1b,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8b,0x00,0x01,0xff,0x6f,0xcc,0x8b,
+- 0x00,0x10,0x07,0x01,0xff,0xc5,0x93,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x72,0xcc,0x81,0x00,0x01,0xff,0x72,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,
+- 0xa7,0x00,0x01,0xff,0x72,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x72,0xcc,0x8c,0x00,0x01,0xff,0x72,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x73,0xcc,
+- 0x81,0x00,0x01,0xff,0x73,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x73,0xcc,
+- 0x82,0x00,0x01,0xff,0x73,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x73,0xcc,0xa7,0x00,
+- 0x01,0xff,0x73,0xcc,0xa7,0x00,0xd4,0x7b,0xd3,0x3b,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x73,0xcc,0x8c,0x00,0x01,0xff,0x73,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,
+- 0x74,0xcc,0xa7,0x00,0x01,0xff,0x74,0xcc,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x74,0xcc,0x8c,0x00,0x01,0xff,0x74,0xcc,0x8c,0x00,0x10,0x07,0x01,0xff,0xc5,0xa7,
+- 0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x83,0x00,0x01,
+- 0xff,0x75,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x84,0x00,0x01,0xff,0x75,
+- 0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x86,0x00,0x01,0xff,0x75,
+- 0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x8a,0x00,0x01,0xff,0x75,0xcc,0x8a,
+- 0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x8b,0x00,0x01,
+- 0xff,0x75,0xcc,0x8b,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xa8,0x00,0x01,0xff,0x75,
+- 0xcc,0xa8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x82,0x00,0x01,0xff,0x77,
+- 0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,0x82,0x00,0x01,0xff,0x79,0xcc,0x82,
+- 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x79,0xcc,0x88,0x00,0x01,0xff,0x7a,
+- 0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x81,0x00,0x01,0xff,0x7a,0xcc,0x87,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x87,0x00,0x01,0xff,0x7a,0xcc,0x8c,
+- 0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x8c,0x00,0x01,0xff,0x73,0x00,0xe0,0x65,0x01,
+- 0xcf,0x86,0xd5,0xb4,0xd4,0x5a,0xd3,0x2f,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xc9,0x93,0x00,0x10,0x07,0x01,0xff,0xc6,0x83,0x00,0x01,0x00,0xd1,0x0b,
+- 0x10,0x07,0x01,0xff,0xc6,0x85,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc9,0x94,0x00,
+- 0x01,0xff,0xc6,0x88,0x00,0xd2,0x19,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc9,
+- 0x96,0x00,0x10,0x07,0x01,0xff,0xc9,0x97,0x00,0x01,0xff,0xc6,0x8c,0x00,0x51,0x04,
+- 0x01,0x00,0x10,0x07,0x01,0xff,0xc7,0x9d,0x00,0x01,0xff,0xc9,0x99,0x00,0xd3,0x32,
+- 0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,0xff,0xc9,0x9b,0x00,0x01,0xff,0xc6,0x92,0x00,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xc9,0xa0,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc9,
+- 0xa3,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc9,0xa9,0x00,0x01,0xff,0xc9,0xa8,0x00,
+- 0xd2,0x0f,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0x99,0x00,0x01,0x00,0x01,0x00,0xd1,
+- 0x0e,0x10,0x07,0x01,0xff,0xc9,0xaf,0x00,0x01,0xff,0xc9,0xb2,0x00,0x10,0x04,0x01,
+- 0x00,0x01,0xff,0xc9,0xb5,0x00,0xd4,0x5d,0xd3,0x34,0xd2,0x1b,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x6f,0xcc,0x9b,0x00,0x01,0xff,0x6f,0xcc,0x9b,0x00,0x10,0x07,0x01,0xff,
+- 0xc6,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc6,0xa5,0x00,0x01,0x00,
+- 0x10,0x07,0x01,0xff,0xca,0x80,0x00,0x01,0xff,0xc6,0xa8,0x00,0xd2,0x0f,0x91,0x0b,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xca,0x83,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,
+- 0xff,0xc6,0xad,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xca,0x88,0x00,0x01,0xff,0x75,
+- 0xcc,0x9b,0x00,0xd3,0x33,0xd2,0x1d,0xd1,0x0f,0x10,0x08,0x01,0xff,0x75,0xcc,0x9b,
+- 0x00,0x01,0xff,0xca,0x8a,0x00,0x10,0x07,0x01,0xff,0xca,0x8b,0x00,0x01,0xff,0xc6,
+- 0xb4,0x00,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc6,0xb6,0x00,0x10,0x04,0x01,
+- 0x00,0x01,0xff,0xca,0x92,0x00,0xd2,0x0f,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0xb9,
+- 0x00,0x01,0x00,0x01,0x00,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0xbd,0x00,0x01,0x00,
+- 0x01,0x00,0xcf,0x86,0xd5,0xd4,0xd4,0x44,0xd3,0x16,0x52,0x04,0x01,0x00,0x51,0x07,
+- 0x01,0xff,0xc7,0x86,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xc7,0x89,0x00,0xd2,0x12,
+- 0x91,0x0b,0x10,0x07,0x01,0xff,0xc7,0x89,0x00,0x01,0x00,0x01,0xff,0xc7,0x8c,0x00,
+- 0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x61,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,
+- 0x61,0xcc,0x8c,0x00,0x01,0xff,0x69,0xcc,0x8c,0x00,0xd3,0x46,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x69,0xcc,0x8c,0x00,0x01,0xff,0x6f,0xcc,0x8c,0x00,0x10,0x08,
+- 0x01,0xff,0x6f,0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x8c,0x00,0xd1,0x12,0x10,0x08,
+- 0x01,0xff,0x75,0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,0x00,0x10,0x0a,
+- 0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,0x00,
+- 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,
+- 0x75,0xcc,0x88,0xcc,0x8c,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x8c,0x00,
+- 0x01,0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0xd1,0x0e,0x10,0x0a,0x01,0xff,0x75,0xcc,
+- 0x88,0xcc,0x80,0x00,0x01,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x88,0xcc,0x84,0x00,
+- 0x01,0xff,0x61,0xcc,0x88,0xcc,0x84,0x00,0xd4,0x87,0xd3,0x41,0xd2,0x26,0xd1,0x14,
+- 0x10,0x0a,0x01,0xff,0x61,0xcc,0x87,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x87,0xcc,
+- 0x84,0x00,0x10,0x09,0x01,0xff,0xc3,0xa6,0xcc,0x84,0x00,0x01,0xff,0xc3,0xa6,0xcc,
+- 0x84,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc7,0xa5,0x00,0x01,0x00,0x10,0x08,0x01,
+- 0xff,0x67,0xcc,0x8c,0x00,0x01,0xff,0x67,0xcc,0x8c,0x00,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x6b,0xcc,0x8c,0x00,0x01,0xff,0x6b,0xcc,0x8c,0x00,0x10,0x08,0x01,
+- 0xff,0x6f,0xcc,0xa8,0x00,0x01,0xff,0x6f,0xcc,0xa8,0x00,0xd1,0x14,0x10,0x0a,0x01,
+- 0xff,0x6f,0xcc,0xa8,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0xa8,0xcc,0x84,0x00,0x10,
+- 0x09,0x01,0xff,0xca,0x92,0xcc,0x8c,0x00,0x01,0xff,0xca,0x92,0xcc,0x8c,0x00,0xd3,
+- 0x38,0xd2,0x1a,0xd1,0x0f,0x10,0x08,0x01,0xff,0x6a,0xcc,0x8c,0x00,0x01,0xff,0xc7,
+- 0xb3,0x00,0x10,0x07,0x01,0xff,0xc7,0xb3,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x67,0xcc,0x81,0x00,0x01,0xff,0x67,0xcc,0x81,0x00,0x10,0x07,0x04,0xff,0xc6,
+- 0x95,0x00,0x04,0xff,0xc6,0xbf,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x6e,
+- 0xcc,0x80,0x00,0x04,0xff,0x6e,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x8a,
+- 0xcc,0x81,0x00,0x01,0xff,0x61,0xcc,0x8a,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,
+- 0xff,0xc3,0xa6,0xcc,0x81,0x00,0x01,0xff,0xc3,0xa6,0xcc,0x81,0x00,0x10,0x09,0x01,
+- 0xff,0xc3,0xb8,0xcc,0x81,0x00,0x01,0xff,0xc3,0xb8,0xcc,0x81,0x00,0xe2,0x31,0x02,
+- 0xe1,0xad,0x44,0xe0,0xc8,0x01,0xcf,0x86,0xd5,0xfb,0xd4,0x80,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0x8f,0x00,0x01,0xff,0x61,0xcc,0x8f,0x00,
+- 0x10,0x08,0x01,0xff,0x61,0xcc,0x91,0x00,0x01,0xff,0x61,0xcc,0x91,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x65,0xcc,0x8f,0x00,0x01,0xff,0x65,0xcc,0x8f,0x00,0x10,0x08,
+- 0x01,0xff,0x65,0xcc,0x91,0x00,0x01,0xff,0x65,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x69,0xcc,0x8f,0x00,0x01,0xff,0x69,0xcc,0x8f,0x00,0x10,0x08,
+- 0x01,0xff,0x69,0xcc,0x91,0x00,0x01,0xff,0x69,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x6f,0xcc,0x8f,0x00,0x01,0xff,0x6f,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,
+- 0x6f,0xcc,0x91,0x00,0x01,0xff,0x6f,0xcc,0x91,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x72,0xcc,0x8f,0x00,0x01,0xff,0x72,0xcc,0x8f,0x00,0x10,0x08,
+- 0x01,0xff,0x72,0xcc,0x91,0x00,0x01,0xff,0x72,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x75,0xcc,0x8f,0x00,0x01,0xff,0x75,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,
+- 0x75,0xcc,0x91,0x00,0x01,0xff,0x75,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x04,0xff,0x73,0xcc,0xa6,0x00,0x04,0xff,0x73,0xcc,0xa6,0x00,0x10,0x08,0x04,0xff,
+- 0x74,0xcc,0xa6,0x00,0x04,0xff,0x74,0xcc,0xa6,0x00,0xd1,0x0b,0x10,0x07,0x04,0xff,
+- 0xc8,0x9d,0x00,0x04,0x00,0x10,0x08,0x04,0xff,0x68,0xcc,0x8c,0x00,0x04,0xff,0x68,
+- 0xcc,0x8c,0x00,0xd4,0x79,0xd3,0x31,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,0xff,0xc6,
+- 0x9e,0x00,0x07,0x00,0x10,0x07,0x04,0xff,0xc8,0xa3,0x00,0x04,0x00,0xd1,0x0b,0x10,
+- 0x07,0x04,0xff,0xc8,0xa5,0x00,0x04,0x00,0x10,0x08,0x04,0xff,0x61,0xcc,0x87,0x00,
+- 0x04,0xff,0x61,0xcc,0x87,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x65,0xcc,
+- 0xa7,0x00,0x04,0xff,0x65,0xcc,0xa7,0x00,0x10,0x0a,0x04,0xff,0x6f,0xcc,0x88,0xcc,
+- 0x84,0x00,0x04,0xff,0x6f,0xcc,0x88,0xcc,0x84,0x00,0xd1,0x14,0x10,0x0a,0x04,0xff,
+- 0x6f,0xcc,0x83,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x83,0xcc,0x84,0x00,0x10,0x08,
+- 0x04,0xff,0x6f,0xcc,0x87,0x00,0x04,0xff,0x6f,0xcc,0x87,0x00,0xd3,0x27,0xe2,0x0b,
+- 0x43,0xd1,0x14,0x10,0x0a,0x04,0xff,0x6f,0xcc,0x87,0xcc,0x84,0x00,0x04,0xff,0x6f,
+- 0xcc,0x87,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x79,0xcc,0x84,0x00,0x04,0xff,0x79,
+- 0xcc,0x84,0x00,0xd2,0x13,0x51,0x04,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0xa5,
+- 0x00,0x08,0xff,0xc8,0xbc,0x00,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,0xff,0xc6,0x9a,
+- 0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0xa6,0x00,0x08,0x00,0xcf,0x86,0x95,0x5f,0x94,
+- 0x5b,0xd3,0x2f,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,0xff,0xc9,0x82,0x00,
+- 0x10,0x04,0x09,0x00,0x09,0xff,0xc6,0x80,0x00,0xd1,0x0e,0x10,0x07,0x09,0xff,0xca,
+- 0x89,0x00,0x09,0xff,0xca,0x8c,0x00,0x10,0x07,0x09,0xff,0xc9,0x87,0x00,0x09,0x00,
+- 0xd2,0x16,0xd1,0x0b,0x10,0x07,0x09,0xff,0xc9,0x89,0x00,0x09,0x00,0x10,0x07,0x09,
+- 0xff,0xc9,0x8b,0x00,0x09,0x00,0xd1,0x0b,0x10,0x07,0x09,0xff,0xc9,0x8d,0x00,0x09,
+- 0x00,0x10,0x07,0x09,0xff,0xc9,0x8f,0x00,0x09,0x00,0x01,0x00,0x01,0x00,0xd1,0x8b,
+- 0xd0,0x0c,0xcf,0x86,0xe5,0xfa,0x42,0x64,0xd9,0x42,0x01,0xe6,0xcf,0x86,0xd5,0x2a,
+- 0xe4,0x82,0x43,0xe3,0x69,0x43,0xd2,0x11,0xe1,0x48,0x43,0x10,0x07,0x01,0xff,0xcc,
+- 0x80,0x00,0x01,0xff,0xcc,0x81,0x00,0xe1,0x4f,0x43,0x10,0x09,0x01,0xff,0xcc,0x88,
+- 0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0x00,0xd4,0x0f,0x93,0x0b,0x92,0x07,0x61,0x94,
+- 0x43,0x01,0xea,0x06,0xe6,0x06,0xe6,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x0a,
+- 0xff,0xcd,0xb1,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xcd,0xb3,0x00,0x0a,0x00,0xd1,
+- 0x0b,0x10,0x07,0x01,0xff,0xca,0xb9,0x00,0x01,0x00,0x10,0x07,0x0a,0xff,0xcd,0xb7,
+- 0x00,0x0a,0x00,0xd2,0x07,0x61,0x80,0x43,0x00,0x00,0x51,0x04,0x09,0x00,0x10,0x06,
+- 0x01,0xff,0x3b,0x00,0x10,0xff,0xcf,0xb3,0x00,0xe0,0x31,0x01,0xcf,0x86,0xd5,0xd3,
+- 0xd4,0x5f,0xd3,0x21,0x52,0x04,0x00,0x00,0xd1,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,
+- 0xc2,0xa8,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x81,0x00,0x01,0xff,
+- 0xc2,0xb7,0x00,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x81,0x00,
+- 0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,
+- 0x00,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x00,0x00,0x10,
+- 0x09,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0xd3,
+- 0x3c,0xd2,0x20,0xd1,0x12,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,
+- 0x01,0xff,0xce,0xb1,0x00,0x10,0x07,0x01,0xff,0xce,0xb2,0x00,0x01,0xff,0xce,0xb3,
+- 0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xce,0xb4,0x00,0x01,0xff,0xce,0xb5,0x00,0x10,
+- 0x07,0x01,0xff,0xce,0xb6,0x00,0x01,0xff,0xce,0xb7,0x00,0xd2,0x1c,0xd1,0x0e,0x10,
+- 0x07,0x01,0xff,0xce,0xb8,0x00,0x01,0xff,0xce,0xb9,0x00,0x10,0x07,0x01,0xff,0xce,
+- 0xba,0x00,0x01,0xff,0xce,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xce,0xbc,0x00,
+- 0x01,0xff,0xce,0xbd,0x00,0x10,0x07,0x01,0xff,0xce,0xbe,0x00,0x01,0xff,0xce,0xbf,
+- 0x00,0xe4,0x6e,0x43,0xd3,0x35,0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,0xff,0xcf,0x80,
+- 0x00,0x01,0xff,0xcf,0x81,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x83,0x00,0xd1,
+- 0x0e,0x10,0x07,0x01,0xff,0xcf,0x84,0x00,0x01,0xff,0xcf,0x85,0x00,0x10,0x07,0x01,
+- 0xff,0xcf,0x86,0x00,0x01,0xff,0xcf,0x87,0x00,0xe2,0x14,0x43,0xd1,0x0e,0x10,0x07,
+- 0x01,0xff,0xcf,0x88,0x00,0x01,0xff,0xcf,0x89,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,
+- 0xcc,0x88,0x00,0x01,0xff,0xcf,0x85,0xcc,0x88,0x00,0xcf,0x86,0xd5,0x94,0xd4,0x3c,
+- 0xd3,0x13,0x92,0x0f,0x51,0x04,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,0x83,0x00,0x01,
+- 0x00,0x01,0x00,0xd2,0x07,0x61,0x23,0x43,0x01,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,
+- 0xce,0xbf,0xcc,0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,
+- 0xcf,0x89,0xcc,0x81,0x00,0x0a,0xff,0xcf,0x97,0x00,0xd3,0x2c,0xd2,0x11,0xe1,0x2f,
+- 0x43,0x10,0x07,0x01,0xff,0xce,0xb2,0x00,0x01,0xff,0xce,0xb8,0x00,0xd1,0x10,0x10,
+- 0x09,0x01,0xff,0xcf,0x92,0xcc,0x88,0x00,0x01,0xff,0xcf,0x86,0x00,0x10,0x07,0x01,
+- 0xff,0xcf,0x80,0x00,0x04,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,0xff,0xcf,0x99,
+- 0x00,0x06,0x00,0x10,0x07,0x01,0xff,0xcf,0x9b,0x00,0x04,0x00,0xd1,0x0b,0x10,0x07,
+- 0x01,0xff,0xcf,0x9d,0x00,0x04,0x00,0x10,0x07,0x01,0xff,0xcf,0x9f,0x00,0x04,0x00,
+- 0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xa1,0x00,0x04,
+- 0x00,0x10,0x07,0x01,0xff,0xcf,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,
+- 0xcf,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,0xa7,0x00,0x01,0x00,0xd2,0x16,
+- 0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xa9,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,
+- 0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xad,0x00,0x01,0x00,0x10,
+- 0x07,0x01,0xff,0xcf,0xaf,0x00,0x01,0x00,0xd3,0x2b,0xd2,0x12,0x91,0x0e,0x10,0x07,
+- 0x01,0xff,0xce,0xba,0x00,0x01,0xff,0xcf,0x81,0x00,0x01,0x00,0xd1,0x0e,0x10,0x07,
+- 0x05,0xff,0xce,0xb8,0x00,0x05,0xff,0xce,0xb5,0x00,0x10,0x04,0x06,0x00,0x07,0xff,
+- 0xcf,0xb8,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x07,0x00,0x07,0xff,0xcf,0xb2,0x00,
+- 0x10,0x07,0x07,0xff,0xcf,0xbb,0x00,0x07,0x00,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,
+- 0xff,0xcd,0xbb,0x00,0x10,0x07,0x08,0xff,0xcd,0xbc,0x00,0x08,0xff,0xcd,0xbd,0x00,
+- 0xe3,0xd6,0x46,0xe2,0x3d,0x05,0xe1,0x27,0x02,0xe0,0x66,0x01,0xcf,0x86,0xd5,0xf0,
+- 0xd4,0x7e,0xd3,0x40,0xd2,0x22,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,0xb5,0xcc,0x80,
+- 0x00,0x01,0xff,0xd0,0xb5,0xcc,0x88,0x00,0x10,0x07,0x01,0xff,0xd1,0x92,0x00,0x01,
+- 0xff,0xd0,0xb3,0xcc,0x81,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x94,0x00,0x01,
+- 0xff,0xd1,0x95,0x00,0x10,0x07,0x01,0xff,0xd1,0x96,0x00,0x01,0xff,0xd1,0x96,0xcc,
+- 0x88,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x98,0x00,0x01,0xff,0xd1,
+- 0x99,0x00,0x10,0x07,0x01,0xff,0xd1,0x9a,0x00,0x01,0xff,0xd1,0x9b,0x00,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xd0,0xba,0xcc,0x81,0x00,0x04,0xff,0xd0,0xb8,0xcc,0x80,0x00,
+- 0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x86,0x00,0x01,0xff,0xd1,0x9f,0x00,0xd3,0x38,
+- 0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd0,0xb0,0x00,0x01,0xff,0xd0,0xb1,0x00,
+- 0x10,0x07,0x01,0xff,0xd0,0xb2,0x00,0x01,0xff,0xd0,0xb3,0x00,0xd1,0x0e,0x10,0x07,
+- 0x01,0xff,0xd0,0xb4,0x00,0x01,0xff,0xd0,0xb5,0x00,0x10,0x07,0x01,0xff,0xd0,0xb6,
+- 0x00,0x01,0xff,0xd0,0xb7,0x00,0xd2,0x1e,0xd1,0x10,0x10,0x07,0x01,0xff,0xd0,0xb8,
+- 0x00,0x01,0xff,0xd0,0xb8,0xcc,0x86,0x00,0x10,0x07,0x01,0xff,0xd0,0xba,0x00,0x01,
+- 0xff,0xd0,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd0,0xbc,0x00,0x01,0xff,0xd0,
+- 0xbd,0x00,0x10,0x07,0x01,0xff,0xd0,0xbe,0x00,0x01,0xff,0xd0,0xbf,0x00,0xe4,0x0e,
+- 0x42,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x80,0x00,0x01,0xff,
+- 0xd1,0x81,0x00,0x10,0x07,0x01,0xff,0xd1,0x82,0x00,0x01,0xff,0xd1,0x83,0x00,0xd1,
+- 0x0e,0x10,0x07,0x01,0xff,0xd1,0x84,0x00,0x01,0xff,0xd1,0x85,0x00,0x10,0x07,0x01,
+- 0xff,0xd1,0x86,0x00,0x01,0xff,0xd1,0x87,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,
+- 0xff,0xd1,0x88,0x00,0x01,0xff,0xd1,0x89,0x00,0x10,0x07,0x01,0xff,0xd1,0x8a,0x00,
+- 0x01,0xff,0xd1,0x8b,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x8c,0x00,0x01,0xff,
+- 0xd1,0x8d,0x00,0x10,0x07,0x01,0xff,0xd1,0x8e,0x00,0x01,0xff,0xd1,0x8f,0x00,0xcf,
+- 0x86,0xd5,0x07,0x64,0xb8,0x41,0x01,0x00,0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,
+- 0x10,0x07,0x01,0xff,0xd1,0xa1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xa3,0x00,
+- 0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,
+- 0xff,0xd1,0xa7,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xa9,
+- 0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,
+- 0x01,0xff,0xd1,0xad,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xaf,0x00,0x01,0x00,
+- 0xd3,0x33,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb1,0x00,0x01,0x00,0x10,
+- 0x07,0x01,0xff,0xd1,0xb3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb5,
+- 0x00,0x01,0x00,0x10,0x09,0x01,0xff,0xd1,0xb5,0xcc,0x8f,0x00,0x01,0xff,0xd1,0xb5,
+- 0xcc,0x8f,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb9,0x00,0x01,0x00,
+- 0x10,0x07,0x01,0xff,0xd1,0xbb,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,
+- 0xbd,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xbf,0x00,0x01,0x00,0xe0,0x41,0x01,
+- 0xcf,0x86,0xd5,0x8e,0xd4,0x36,0xd3,0x11,0xe2,0x7a,0x41,0xe1,0x71,0x41,0x10,0x07,
+- 0x01,0xff,0xd2,0x81,0x00,0x01,0x00,0xd2,0x0f,0x51,0x04,0x04,0x00,0x10,0x07,0x06,
+- 0xff,0xd2,0x8b,0x00,0x06,0x00,0xd1,0x0b,0x10,0x07,0x04,0xff,0xd2,0x8d,0x00,0x04,
+- 0x00,0x10,0x07,0x04,0xff,0xd2,0x8f,0x00,0x04,0x00,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,
+- 0x10,0x07,0x01,0xff,0xd2,0x91,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x93,0x00,
+- 0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0x95,0x00,0x01,0x00,0x10,0x07,0x01,
+- 0xff,0xd2,0x97,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0x99,
+- 0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x9b,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,
+- 0x01,0xff,0xd2,0x9d,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x9f,0x00,0x01,0x00,
+- 0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xa1,0x00,0x01,
+- 0x00,0x10,0x07,0x01,0xff,0xd2,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,
+- 0xd2,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xa7,0x00,0x01,0x00,0xd2,0x16,
+- 0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xa9,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,
+- 0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xad,0x00,0x01,0x00,0x10,
+- 0x07,0x01,0xff,0xd2,0xaf,0x00,0x01,0x00,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,
+- 0x01,0xff,0xd2,0xb1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xb3,0x00,0x01,0x00,
+- 0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xb5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,
+- 0xb7,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xb9,0x00,0x01,
+- 0x00,0x10,0x07,0x01,0xff,0xd2,0xbb,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,
+- 0xd2,0xbd,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xbf,0x00,0x01,0x00,0xcf,0x86,
+- 0xd5,0xdc,0xd4,0x5a,0xd3,0x36,0xd2,0x20,0xd1,0x10,0x10,0x07,0x01,0xff,0xd3,0x8f,
+- 0x00,0x01,0xff,0xd0,0xb6,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,0xb6,0xcc,0x86,
+- 0x00,0x01,0xff,0xd3,0x84,0x00,0xd1,0x0b,0x10,0x04,0x01,0x00,0x06,0xff,0xd3,0x86,
+- 0x00,0x10,0x04,0x06,0x00,0x01,0xff,0xd3,0x88,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x04,
+- 0x01,0x00,0x06,0xff,0xd3,0x8a,0x00,0x10,0x04,0x06,0x00,0x01,0xff,0xd3,0x8c,0x00,
+- 0xe1,0x52,0x40,0x10,0x04,0x01,0x00,0x06,0xff,0xd3,0x8e,0x00,0xd3,0x41,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xb0,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb0,0xcc,
+- 0x86,0x00,0x10,0x09,0x01,0xff,0xd0,0xb0,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb0,0xcc,
+- 0x88,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0x95,0x00,0x01,0x00,0x10,0x09,0x01,
+- 0xff,0xd0,0xb5,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x86,0x00,0xd2,0x1d,0xd1,
+- 0x0b,0x10,0x07,0x01,0xff,0xd3,0x99,0x00,0x01,0x00,0x10,0x09,0x01,0xff,0xd3,0x99,
+- 0xcc,0x88,0x00,0x01,0xff,0xd3,0x99,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,
+- 0xd0,0xb6,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb6,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,
+- 0xd0,0xb7,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb7,0xcc,0x88,0x00,0xd4,0x82,0xd3,0x41,
+- 0xd2,0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0xa1,0x00,0x01,0x00,0x10,0x09,0x01,
+- 0xff,0xd0,0xb8,0xcc,0x84,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x84,0x00,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xd0,0xb8,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x88,0x00,0x10,
+- 0x09,0x01,0xff,0xd0,0xbe,0xcc,0x88,0x00,0x01,0xff,0xd0,0xbe,0xcc,0x88,0x00,0xd2,
+- 0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0xa9,0x00,0x01,0x00,0x10,0x09,0x01,0xff,
+- 0xd3,0xa9,0xcc,0x88,0x00,0x01,0xff,0xd3,0xa9,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,
+- 0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x10,0x09,
+- 0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0xd3,0x41,
+- 0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x88,0x00,0x01,0xff,0xd1,
+- 0x83,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x8b,0x00,0x01,0xff,0xd1,
+- 0x83,0xcc,0x8b,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x87,0xcc,0x88,0x00,0x01,
+- 0xff,0xd1,0x87,0xcc,0x88,0x00,0x10,0x07,0x08,0xff,0xd3,0xb7,0x00,0x08,0x00,0xd2,
+- 0x1d,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x8b,0xcc,0x88,0x00,0x01,0xff,0xd1,0x8b,
+- 0xcc,0x88,0x00,0x10,0x07,0x09,0xff,0xd3,0xbb,0x00,0x09,0x00,0xd1,0x0b,0x10,0x07,
+- 0x09,0xff,0xd3,0xbd,0x00,0x09,0x00,0x10,0x07,0x09,0xff,0xd3,0xbf,0x00,0x09,0x00,
+- 0xe1,0x26,0x02,0xe0,0x78,0x01,0xcf,0x86,0xd5,0xb0,0xd4,0x58,0xd3,0x2c,0xd2,0x16,
+- 0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x81,0x00,0x06,0x00,0x10,0x07,0x06,0xff,0xd4,
+- 0x83,0x00,0x06,0x00,0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x85,0x00,0x06,0x00,0x10,
+- 0x07,0x06,0xff,0xd4,0x87,0x00,0x06,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,0xff,
+- 0xd4,0x89,0x00,0x06,0x00,0x10,0x07,0x06,0xff,0xd4,0x8b,0x00,0x06,0x00,0xd1,0x0b,
+- 0x10,0x07,0x06,0xff,0xd4,0x8d,0x00,0x06,0x00,0x10,0x07,0x06,0xff,0xd4,0x8f,0x00,
+- 0x06,0x00,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x09,0xff,0xd4,0x91,0x00,0x09,
+- 0x00,0x10,0x07,0x09,0xff,0xd4,0x93,0x00,0x09,0x00,0xd1,0x0b,0x10,0x07,0x0a,0xff,
+- 0xd4,0x95,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,0x97,0x00,0x0a,0x00,0xd2,0x16,
+- 0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0x99,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,
+- 0x9b,0x00,0x0a,0x00,0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0x9d,0x00,0x0a,0x00,0x10,
+- 0x07,0x0a,0xff,0xd4,0x9f,0x00,0x0a,0x00,0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,
+- 0x10,0x07,0x0a,0xff,0xd4,0xa1,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,0xa3,0x00,
+- 0x0a,0x00,0xd1,0x0b,0x10,0x07,0x0b,0xff,0xd4,0xa5,0x00,0x0b,0x00,0x10,0x07,0x0c,
+- 0xff,0xd4,0xa7,0x00,0x0c,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x10,0xff,0xd4,0xa9,
+- 0x00,0x10,0x00,0x10,0x07,0x10,0xff,0xd4,0xab,0x00,0x10,0x00,0xd1,0x0b,0x10,0x07,
+- 0x10,0xff,0xd4,0xad,0x00,0x10,0x00,0x10,0x07,0x10,0xff,0xd4,0xaf,0x00,0x10,0x00,
+- 0xd3,0x35,0xd2,0x19,0xd1,0x0b,0x10,0x04,0x00,0x00,0x01,0xff,0xd5,0xa1,0x00,0x10,
+- 0x07,0x01,0xff,0xd5,0xa2,0x00,0x01,0xff,0xd5,0xa3,0x00,0xd1,0x0e,0x10,0x07,0x01,
+- 0xff,0xd5,0xa4,0x00,0x01,0xff,0xd5,0xa5,0x00,0x10,0x07,0x01,0xff,0xd5,0xa6,0x00,
+- 0x01,0xff,0xd5,0xa7,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xa8,0x00,
+- 0x01,0xff,0xd5,0xa9,0x00,0x10,0x07,0x01,0xff,0xd5,0xaa,0x00,0x01,0xff,0xd5,0xab,
+- 0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xac,0x00,0x01,0xff,0xd5,0xad,0x00,0x10,
+- 0x07,0x01,0xff,0xd5,0xae,0x00,0x01,0xff,0xd5,0xaf,0x00,0xcf,0x86,0xe5,0xf1,0x3e,
+- 0xd4,0x70,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xb0,0x00,0x01,
+- 0xff,0xd5,0xb1,0x00,0x10,0x07,0x01,0xff,0xd5,0xb2,0x00,0x01,0xff,0xd5,0xb3,0x00,
+- 0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xb4,0x00,0x01,0xff,0xd5,0xb5,0x00,0x10,0x07,
+- 0x01,0xff,0xd5,0xb6,0x00,0x01,0xff,0xd5,0xb7,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,
+- 0x01,0xff,0xd5,0xb8,0x00,0x01,0xff,0xd5,0xb9,0x00,0x10,0x07,0x01,0xff,0xd5,0xba,
+- 0x00,0x01,0xff,0xd5,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xbc,0x00,0x01,
+- 0xff,0xd5,0xbd,0x00,0x10,0x07,0x01,0xff,0xd5,0xbe,0x00,0x01,0xff,0xd5,0xbf,0x00,
+- 0xe3,0x70,0x3e,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd6,0x80,0x00,0x01,0xff,
+- 0xd6,0x81,0x00,0x10,0x07,0x01,0xff,0xd6,0x82,0x00,0x01,0xff,0xd6,0x83,0x00,0xd1,
+- 0x0e,0x10,0x07,0x01,0xff,0xd6,0x84,0x00,0x01,0xff,0xd6,0x85,0x00,0x10,0x07,0x01,
+- 0xff,0xd6,0x86,0x00,0x00,0x00,0xe0,0x18,0x3f,0xcf,0x86,0xe5,0xa9,0x3e,0xe4,0x80,
+- 0x3e,0xe3,0x5f,0x3e,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xd5,0xa5,0xd6,0x82,0x00,0xe4,0x3e,0x25,0xe3,0xc4,0x1a,0xe2,0xf8,0x80,
+- 0xe1,0xc0,0x13,0xd0,0x1e,0xcf,0x86,0xc5,0xe4,0xf0,0x4a,0xe3,0x3b,0x46,0xe2,0xd1,
+- 0x43,0xe1,0x04,0x43,0xe0,0xc9,0x42,0xcf,0x86,0xe5,0x8e,0x42,0x64,0x71,0x42,0x0b,
+- 0x00,0xcf,0x86,0xe5,0xfa,0x01,0xe4,0xd5,0x55,0xe3,0x76,0x01,0xe2,0x76,0x53,0xd1,
+- 0x0c,0xe0,0xd7,0x52,0xcf,0x86,0x65,0x75,0x52,0x04,0x00,0xe0,0x0d,0x01,0xcf,0x86,
+- 0xd5,0x0a,0xe4,0xf8,0x52,0x63,0xe7,0x52,0x0a,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x80,0x00,0x01,0xff,0xe2,0xb4,0x81,0x00,
+- 0x10,0x08,0x01,0xff,0xe2,0xb4,0x82,0x00,0x01,0xff,0xe2,0xb4,0x83,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe2,0xb4,0x84,0x00,0x01,0xff,0xe2,0xb4,0x85,0x00,0x10,0x08,
+- 0x01,0xff,0xe2,0xb4,0x86,0x00,0x01,0xff,0xe2,0xb4,0x87,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe2,0xb4,0x88,0x00,0x01,0xff,0xe2,0xb4,0x89,0x00,0x10,0x08,
+- 0x01,0xff,0xe2,0xb4,0x8a,0x00,0x01,0xff,0xe2,0xb4,0x8b,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe2,0xb4,0x8c,0x00,0x01,0xff,0xe2,0xb4,0x8d,0x00,0x10,0x08,0x01,0xff,
+- 0xe2,0xb4,0x8e,0x00,0x01,0xff,0xe2,0xb4,0x8f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0xe2,0xb4,0x90,0x00,0x01,0xff,0xe2,0xb4,0x91,0x00,0x10,0x08,
+- 0x01,0xff,0xe2,0xb4,0x92,0x00,0x01,0xff,0xe2,0xb4,0x93,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe2,0xb4,0x94,0x00,0x01,0xff,0xe2,0xb4,0x95,0x00,0x10,0x08,0x01,0xff,
+- 0xe2,0xb4,0x96,0x00,0x01,0xff,0xe2,0xb4,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe2,0xb4,0x98,0x00,0x01,0xff,0xe2,0xb4,0x99,0x00,0x10,0x08,0x01,0xff,
+- 0xe2,0xb4,0x9a,0x00,0x01,0xff,0xe2,0xb4,0x9b,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe2,0xb4,0x9c,0x00,0x01,0xff,0xe2,0xb4,0x9d,0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,
+- 0x9e,0x00,0x01,0xff,0xe2,0xb4,0x9f,0x00,0xcf,0x86,0xe5,0x2a,0x52,0x94,0x50,0xd3,
+- 0x3c,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa0,0x00,0x01,0xff,0xe2,
+- 0xb4,0xa1,0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa2,0x00,0x01,0xff,0xe2,0xb4,0xa3,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa4,0x00,0x01,0xff,0xe2,0xb4,0xa5,
+- 0x00,0x10,0x04,0x00,0x00,0x0d,0xff,0xe2,0xb4,0xa7,0x00,0x52,0x04,0x00,0x00,0x91,
+- 0x0c,0x10,0x04,0x00,0x00,0x0d,0xff,0xe2,0xb4,0xad,0x00,0x00,0x00,0x01,0x00,0xd2,
+- 0x1b,0xe1,0xce,0x52,0xe0,0x7f,0x52,0xcf,0x86,0x95,0x0f,0x94,0x0b,0x93,0x07,0x62,
+- 0x64,0x52,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xd1,0x13,0xe0,0xa5,0x53,0xcf,
+- 0x86,0x95,0x0a,0xe4,0x7a,0x53,0x63,0x69,0x53,0x04,0x00,0x04,0x00,0xd0,0x0d,0xcf,
+- 0x86,0x95,0x07,0x64,0xf4,0x53,0x08,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,
+- 0x54,0x04,0x04,0x00,0xd3,0x07,0x62,0x01,0x54,0x04,0x00,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x11,0xff,0xe1,0x8f,0xb0,0x00,0x11,0xff,0xe1,0x8f,0xb1,0x00,0x10,0x08,0x11,
+- 0xff,0xe1,0x8f,0xb2,0x00,0x11,0xff,0xe1,0x8f,0xb3,0x00,0x91,0x10,0x10,0x08,0x11,
+- 0xff,0xe1,0x8f,0xb4,0x00,0x11,0xff,0xe1,0x8f,0xb5,0x00,0x00,0x00,0xd4,0x1c,0xe3,
+- 0x92,0x56,0xe2,0xc9,0x55,0xe1,0x8c,0x55,0xe0,0x6d,0x55,0xcf,0x86,0x95,0x0a,0xe4,
+- 0x56,0x55,0x63,0x45,0x55,0x04,0x00,0x04,0x00,0xe3,0xd2,0x01,0xe2,0xdd,0x59,0xd1,
+- 0x0c,0xe0,0xfe,0x58,0xcf,0x86,0x65,0xd7,0x58,0x0a,0x00,0xe0,0x4e,0x59,0xcf,0x86,
+- 0xd5,0xc5,0xd4,0x45,0xd3,0x31,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x12,0xff,0xd0,0xb2,
+- 0x00,0x12,0xff,0xd0,0xb4,0x00,0x10,0x07,0x12,0xff,0xd0,0xbe,0x00,0x12,0xff,0xd1,
+- 0x81,0x00,0x51,0x07,0x12,0xff,0xd1,0x82,0x00,0x10,0x07,0x12,0xff,0xd1,0x8a,0x00,
+- 0x12,0xff,0xd1,0xa3,0x00,0x92,0x10,0x91,0x0c,0x10,0x08,0x12,0xff,0xea,0x99,0x8b,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,
+- 0xff,0xe1,0x83,0x90,0x00,0x14,0xff,0xe1,0x83,0x91,0x00,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0x92,0x00,0x14,0xff,0xe1,0x83,0x93,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0x94,0x00,0x14,0xff,0xe1,0x83,0x95,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0x96,
+- 0x00,0x14,0xff,0xe1,0x83,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0x98,0x00,0x14,0xff,0xe1,0x83,0x99,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0x9a,
+- 0x00,0x14,0xff,0xe1,0x83,0x9b,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,0x83,0x9c,
+- 0x00,0x14,0xff,0xe1,0x83,0x9d,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0x9e,0x00,0x14,
+- 0xff,0xe1,0x83,0x9f,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,
+- 0xff,0xe1,0x83,0xa0,0x00,0x14,0xff,0xe1,0x83,0xa1,0x00,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0xa2,0x00,0x14,0xff,0xe1,0x83,0xa3,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0xa4,0x00,0x14,0xff,0xe1,0x83,0xa5,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xa6,
+- 0x00,0x14,0xff,0xe1,0x83,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0xa8,0x00,0x14,0xff,0xe1,0x83,0xa9,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xaa,
+- 0x00,0x14,0xff,0xe1,0x83,0xab,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,0x83,0xac,
+- 0x00,0x14,0xff,0xe1,0x83,0xad,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xae,0x00,0x14,
+- 0xff,0xe1,0x83,0xaf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,
+- 0x83,0xb0,0x00,0x14,0xff,0xe1,0x83,0xb1,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xb2,
+- 0x00,0x14,0xff,0xe1,0x83,0xb3,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,0x83,0xb4,
+- 0x00,0x14,0xff,0xe1,0x83,0xb5,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xb6,0x00,0x14,
+- 0xff,0xe1,0x83,0xb7,0x00,0xd2,0x1c,0xd1,0x10,0x10,0x08,0x14,0xff,0xe1,0x83,0xb8,
+- 0x00,0x14,0xff,0xe1,0x83,0xb9,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,0xba,0x00,0x00,
+- 0x00,0xd1,0x0c,0x10,0x04,0x00,0x00,0x14,0xff,0xe1,0x83,0xbd,0x00,0x10,0x08,0x14,
+- 0xff,0xe1,0x83,0xbe,0x00,0x14,0xff,0xe1,0x83,0xbf,0x00,0xe2,0x9d,0x08,0xe1,0x48,
+- 0x04,0xe0,0x1c,0x02,0xcf,0x86,0xe5,0x11,0x01,0xd4,0x84,0xd3,0x40,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa5,0x00,0x01,0xff,0x61,0xcc,0xa5,0x00,0x10,
+- 0x08,0x01,0xff,0x62,0xcc,0x87,0x00,0x01,0xff,0x62,0xcc,0x87,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x62,0xcc,0xa3,0x00,0x01,0xff,0x62,0xcc,0xa3,0x00,0x10,0x08,0x01,
+- 0xff,0x62,0xcc,0xb1,0x00,0x01,0xff,0x62,0xcc,0xb1,0x00,0xd2,0x24,0xd1,0x14,0x10,
+- 0x0a,0x01,0xff,0x63,0xcc,0xa7,0xcc,0x81,0x00,0x01,0xff,0x63,0xcc,0xa7,0xcc,0x81,
+- 0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0x87,0x00,0x01,0xff,0x64,0xcc,0x87,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x64,0xcc,0xa3,0x00,0x01,0xff,0x64,0xcc,0xa3,0x00,0x10,
+- 0x08,0x01,0xff,0x64,0xcc,0xb1,0x00,0x01,0xff,0x64,0xcc,0xb1,0x00,0xd3,0x48,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x64,0xcc,0xa7,0x00,0x01,0xff,0x64,0xcc,0xa7,
+- 0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0xad,0x00,0x01,0xff,0x64,0xcc,0xad,0x00,0xd1,
+- 0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,0x65,0xcc,0x84,
+- 0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,0x01,0xff,0x65,
+- 0xcc,0x84,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0xad,
+- 0x00,0x01,0xff,0x65,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0xb0,0x00,0x01,
+- 0xff,0x65,0xcc,0xb0,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,
+- 0x00,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x66,0xcc,0x87,
+- 0x00,0x01,0xff,0x66,0xcc,0x87,0x00,0xd4,0x84,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x67,0xcc,0x84,0x00,0x01,0xff,0x67,0xcc,0x84,0x00,0x10,0x08,0x01,
+- 0xff,0x68,0xcc,0x87,0x00,0x01,0xff,0x68,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x68,0xcc,0xa3,0x00,0x01,0xff,0x68,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x68,
+- 0xcc,0x88,0x00,0x01,0xff,0x68,0xcc,0x88,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x68,0xcc,0xa7,0x00,0x01,0xff,0x68,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x68,
+- 0xcc,0xae,0x00,0x01,0xff,0x68,0xcc,0xae,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,
+- 0xcc,0xb0,0x00,0x01,0xff,0x69,0xcc,0xb0,0x00,0x10,0x0a,0x01,0xff,0x69,0xcc,0x88,
+- 0xcc,0x81,0x00,0x01,0xff,0x69,0xcc,0x88,0xcc,0x81,0x00,0xd3,0x40,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x6b,0xcc,0x81,0x00,0x01,0xff,0x6b,0xcc,0x81,0x00,0x10,
+- 0x08,0x01,0xff,0x6b,0xcc,0xa3,0x00,0x01,0xff,0x6b,0xcc,0xa3,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x6b,0xcc,0xb1,0x00,0x01,0xff,0x6b,0xcc,0xb1,0x00,0x10,0x08,0x01,
+- 0xff,0x6c,0xcc,0xa3,0x00,0x01,0xff,0x6c,0xcc,0xa3,0x00,0xd2,0x24,0xd1,0x14,0x10,
+- 0x0a,0x01,0xff,0x6c,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,0x6c,0xcc,0xa3,0xcc,0x84,
+- 0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,0xb1,0x00,0x01,0xff,0x6c,0xcc,0xb1,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x6c,0xcc,0xad,0x00,0x01,0xff,0x6c,0xcc,0xad,0x00,0x10,
+- 0x08,0x01,0xff,0x6d,0xcc,0x81,0x00,0x01,0xff,0x6d,0xcc,0x81,0x00,0xcf,0x86,0xe5,
+- 0x15,0x01,0xd4,0x88,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x6d,0xcc,
+- 0x87,0x00,0x01,0xff,0x6d,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x6d,0xcc,0xa3,0x00,
+- 0x01,0xff,0x6d,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x87,0x00,
+- 0x01,0xff,0x6e,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa3,0x00,0x01,0xff,
+- 0x6e,0xcc,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0xb1,0x00,
+- 0x01,0xff,0x6e,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xad,0x00,0x01,0xff,
+- 0x6e,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,
+- 0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x83,0xcc,
+- 0x88,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x88,0x00,0xd3,0x48,0xd2,0x28,0xd1,0x14,
+- 0x10,0x0a,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x84,0xcc,
+- 0x80,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,
+- 0x84,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x70,0xcc,0x81,0x00,0x01,0xff,
+- 0x70,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x70,0xcc,0x87,0x00,0x01,0xff,0x70,0xcc,
+- 0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x72,0xcc,0x87,0x00,0x01,0xff,
+- 0x72,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0xa3,0x00,0x01,0xff,0x72,0xcc,
+- 0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,
+- 0x72,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0xb1,0x00,0x01,0xff,
+- 0x72,0xcc,0xb1,0x00,0xd4,0x8c,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x73,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x73,0xcc,
+- 0xa3,0x00,0x01,0xff,0x73,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x73,0xcc,
+- 0x81,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x81,0xcc,0x87,0x00,0x10,0x0a,0x01,0xff,
+- 0x73,0xcc,0x8c,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x8c,0xcc,0x87,0x00,0xd2,0x24,
+- 0xd1,0x14,0x10,0x0a,0x01,0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,
+- 0xa3,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x74,0xcc,0x87,0x00,0x01,0xff,0x74,0xcc,
+- 0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x74,0xcc,0xa3,0x00,0x01,0xff,0x74,0xcc,
+- 0xa3,0x00,0x10,0x08,0x01,0xff,0x74,0xcc,0xb1,0x00,0x01,0xff,0x74,0xcc,0xb1,0x00,
+- 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x74,0xcc,0xad,0x00,0x01,0xff,
+- 0x74,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xa4,0x00,0x01,0xff,0x75,0xcc,
+- 0xa4,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0xb0,0x00,0x01,0xff,0x75,0xcc,
+- 0xb0,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xad,0x00,0x01,0xff,0x75,0xcc,0xad,0x00,
+- 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,0x00,0x01,0xff,
+- 0x75,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,
+- 0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x76,0xcc,
+- 0x83,0x00,0x01,0xff,0x76,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x76,0xcc,0xa3,0x00,
+- 0x01,0xff,0x76,0xcc,0xa3,0x00,0xe0,0x11,0x02,0xcf,0x86,0xd5,0xe2,0xd4,0x80,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x80,0x00,0x01,0xff,0x77,
+- 0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x77,0xcc,0x81,0x00,0x01,0xff,0x77,0xcc,0x81,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x88,0x00,0x01,0xff,0x77,0xcc,0x88,
+- 0x00,0x10,0x08,0x01,0xff,0x77,0xcc,0x87,0x00,0x01,0xff,0x77,0xcc,0x87,0x00,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0xa3,0x00,0x01,0xff,0x77,0xcc,0xa3,
+- 0x00,0x10,0x08,0x01,0xff,0x78,0xcc,0x87,0x00,0x01,0xff,0x78,0xcc,0x87,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x78,0xcc,0x88,0x00,0x01,0xff,0x78,0xcc,0x88,0x00,0x10,
+- 0x08,0x01,0xff,0x79,0xcc,0x87,0x00,0x01,0xff,0x79,0xcc,0x87,0x00,0xd3,0x33,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x82,0x00,0x01,0xff,0x7a,0xcc,0x82,
+- 0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0xa3,0x00,0x01,0xff,0x7a,0xcc,0xa3,0x00,0xe1,
+- 0xc4,0x58,0x10,0x08,0x01,0xff,0x7a,0xcc,0xb1,0x00,0x01,0xff,0x7a,0xcc,0xb1,0x00,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x8a,0x00,0x01,0xff,0x79,0xcc,
+- 0x8a,0x00,0x10,0x08,0x01,0xff,0x61,0xca,0xbe,0x00,0x02,0xff,0x73,0xcc,0x87,0x00,
+- 0x51,0x04,0x0a,0x00,0x10,0x07,0x0a,0xff,0x73,0x73,0x00,0x0a,0x00,0xd4,0x98,0xd3,
+- 0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa3,0x00,0x01,0xff,0x61,
+- 0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x89,
+- 0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,0x01,0xff,0x61,
+- 0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,0x80,0x00,0x01,
+- 0xff,0x61,0xcc,0x82,0xcc,0x80,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,
+- 0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,0x00,0x10,0x0a,0x01,
+- 0xff,0x61,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x83,0x00,0xd1,
+- 0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x61,0xcc,0xa3,
+- 0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x81,0x00,0x01,0xff,0x61,
+- 0xcc,0x86,0xcc,0x81,0x00,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,
+- 0xcc,0x86,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,0x00,0x10,0x0a,0x01,
+- 0xff,0x61,0xcc,0x86,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x89,0x00,0xd1,
+- 0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x86,
+- 0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x86,0x00,0x01,0xff,0x61,
+- 0xcc,0xa3,0xcc,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0xa3,
+- 0x00,0x01,0xff,0x65,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x89,0x00,0x01,
+- 0xff,0x65,0xcc,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x83,0x00,0x01,
+- 0xff,0x65,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,0x81,0x00,0x01,
+- 0xff,0x65,0xcc,0x82,0xcc,0x81,0x00,0xcf,0x86,0xe5,0x31,0x01,0xd4,0x90,0xd3,0x50,
+- 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,0x00,0x01,0xff,
+- 0x65,0xcc,0x82,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,0x89,0x00,
+- 0x01,0xff,0x65,0xcc,0x82,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,
+- 0x82,0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,
+- 0x65,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0xa3,0xcc,0x82,0x00,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x89,0x00,0x01,0xff,0x69,0xcc,0x89,0x00,
+- 0x10,0x08,0x01,0xff,0x69,0xcc,0xa3,0x00,0x01,0xff,0x69,0xcc,0xa3,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x6f,0xcc,0xa3,0x00,0x01,0xff,0x6f,0xcc,0xa3,0x00,0x10,0x08,
+- 0x01,0xff,0x6f,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x89,0x00,0xd3,0x50,0xd2,0x28,
+- 0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,
+- 0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,0x00,0x01,0xff,
+- 0x6f,0xcc,0x82,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,
+- 0x89,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x89,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,
+- 0x82,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,0xd2,0x28,0xd1,0x14,
+- 0x10,0x0a,0x01,0xff,0x6f,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x6f,0xcc,0xa3,0xcc,
+- 0x82,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,
+- 0x9b,0xcc,0x81,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,
+- 0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,
+- 0x89,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x89,0x00,0xd4,0x98,0xd3,0x48,0xd2,0x28,
+- 0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,
+- 0x9b,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0x01,0xff,
+- 0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0xa3,0x00,
+- 0x01,0xff,0x75,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x89,0x00,0x01,0xff,
+- 0x75,0xcc,0x89,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,
+- 0x81,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,
+- 0x9b,0xcc,0x80,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,
+- 0x01,0xff,0x75,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x89,0x00,
+- 0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,
+- 0x83,0x00,0xd3,0x44,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,
+- 0xa3,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,
+- 0x80,0x00,0x01,0xff,0x79,0xcc,0x80,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x79,0xcc,
+- 0xa3,0x00,0x01,0xff,0x79,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,0x89,0x00,
+- 0x01,0xff,0x79,0xcc,0x89,0x00,0xd2,0x1c,0xd1,0x10,0x10,0x08,0x01,0xff,0x79,0xcc,
+- 0x83,0x00,0x01,0xff,0x79,0xcc,0x83,0x00,0x10,0x08,0x0a,0xff,0xe1,0xbb,0xbb,0x00,
+- 0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xe1,0xbb,0xbd,0x00,0x0a,0x00,0x10,0x08,
+- 0x0a,0xff,0xe1,0xbb,0xbf,0x00,0x0a,0x00,0xe1,0xbf,0x02,0xe0,0xa1,0x01,0xcf,0x86,
+- 0xd5,0xc6,0xd4,0x6c,0xd3,0x18,0xe2,0xc0,0x58,0xe1,0xa9,0x58,0x10,0x09,0x01,0xff,
+- 0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0x00,
+- 0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb1,0xcc,
+- 0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x81,
+- 0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,
+- 0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0x00,0xd3,0x18,
+- 0xe2,0xfc,0x58,0xe1,0xe5,0x58,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x93,0x00,0x01,
+- 0xff,0xce,0xb5,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,
+- 0xcc,0x93,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb5,
+- 0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,
+- 0x10,0x0b,0x01,0xff,0xce,0xb5,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,
+- 0x94,0xcc,0x81,0x00,0x00,0x00,0xd4,0x6c,0xd3,0x18,0xe2,0x26,0x59,0xe1,0x0f,0x59,
+- 0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0x00,
+- 0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,
+- 0xb7,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0x00,0x01,
+- 0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,
+- 0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,
+- 0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,
+- 0x82,0x00,0xd3,0x18,0xe2,0x62,0x59,0xe1,0x4b,0x59,0x10,0x09,0x01,0xff,0xce,0xb9,
+- 0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0x00,0x10,0x0b,
+- 0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcc,
+- 0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x81,0x00,0x01,
+- 0xff,0xce,0xb9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,
+- 0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcd,0x82,0x00,0xcf,0x86,0xd5,0xac,
+- 0xd4,0x5a,0xd3,0x18,0xe2,0x9f,0x59,0xe1,0x88,0x59,0x10,0x09,0x01,0xff,0xce,0xbf,
+- 0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,0x10,0x0b,
+- 0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0xcc,
+- 0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x81,0x00,0x01,
+- 0xff,0xce,0xbf,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd3,0x18,0xe2,0xc9,0x59,0xe1,
+- 0xb2,0x59,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x93,0x00,0x01,0xff,0xcf,0x85,0xcc,
+- 0x94,0x00,0xd2,0x1c,0xd1,0x0d,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,
+- 0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x0f,
+- 0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x81,0x00,0x10,0x04,0x00,
+- 0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcd,0x82,0x00,0xe4,0x85,0x5a,0xd3,0x18,0xe2,
+- 0x04,0x5a,0xe1,0xed,0x59,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x93,0x00,0x01,0xff,
+- 0xcf,0x89,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,
+- 0x93,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,
+- 0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,
+- 0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,
+- 0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,
+- 0xcf,0x89,0xcc,0x94,0xcd,0x82,0x00,0xe0,0xd9,0x02,0xcf,0x86,0xe5,0x91,0x01,0xd4,
+- 0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xce,
+- 0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,
+- 0xb1,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x80,
+- 0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x81,0xce,
+- 0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,
+- 0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,
+- 0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,
+- 0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,
+- 0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,
+- 0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,
+- 0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,
+- 0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,
+- 0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,
+- 0xff,0xce,0xb7,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xce,0xb9,
+- 0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,
+- 0xce,0xb7,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,
+- 0xb7,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,
+- 0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,
+- 0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,
+- 0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,
+- 0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,
+- 0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,
+- 0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,
+- 0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,0xce,
+- 0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd4,0xc8,0xd3,
+- 0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xce,0xb9,0x00,
+- 0x01,0xff,0xcf,0x89,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,
+- 0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,0xce,0xb9,
+- 0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,
+- 0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xcf,
+- 0x89,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,
+- 0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xce,
+- 0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xcf,
+- 0x89,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,
+- 0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0xce,
+- 0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,
+- 0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,
+- 0xcd,0x82,0xce,0xb9,0x00,0xd3,0x49,0xd2,0x26,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
+- 0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,0xb1,0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,
+- 0xb1,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xce,0xb9,0x00,0xd1,0x0f,0x10,
+- 0x0b,0x01,0xff,0xce,0xb1,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,
+- 0xce,0xb1,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,0xb1,0xcc,
+- 0x84,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x80,0x00,0x01,0xff,0xce,0xb1,0xcc,
+- 0x81,0x00,0xe1,0xa5,0x5a,0x10,0x09,0x01,0xff,0xce,0xb1,0xce,0xb9,0x00,0x01,0x00,
+- 0xcf,0x86,0xd5,0xbd,0xd4,0x7e,0xd3,0x44,0xd2,0x21,0xd1,0x0d,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xc2,0xa8,0xcd,0x82,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x80,0xce,
+- 0xb9,0x00,0x01,0xff,0xce,0xb7,0xce,0xb9,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xce,
+- 0xb7,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb7,0xcd,0x82,
+- 0x00,0x01,0xff,0xce,0xb7,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xce,0xb5,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x81,0x00,0x10,0x09,
+- 0x01,0xff,0xce,0xb7,0xcc,0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0xe1,0xb4,
+- 0x5a,0x10,0x09,0x01,0xff,0xce,0xb7,0xce,0xb9,0x00,0x01,0xff,0xe1,0xbe,0xbf,0xcc,
+- 0x80,0x00,0xd3,0x18,0xe2,0xda,0x5a,0xe1,0xc3,0x5a,0x10,0x09,0x01,0xff,0xce,0xb9,
+- 0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0xe2,0xfe,0x5a,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0xb9,0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0x10,
+- 0x09,0x01,0xff,0xce,0xb9,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0xd4,
+- 0x51,0xd3,0x18,0xe2,0x21,0x5b,0xe1,0x0a,0x5b,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,
+- 0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,
+- 0xff,0xcf,0x85,0xcc,0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,0x00,0x10,0x09,0x01,
+- 0xff,0xcf,0x85,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0xe1,0x41,0x5b,
+- 0x10,0x09,0x01,0xff,0xcf,0x81,0xcc,0x94,0x00,0x01,0xff,0xc2,0xa8,0xcc,0x80,0x00,
+- 0xd3,0x3b,0xd2,0x18,0x51,0x04,0x00,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x80,
+- 0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xce,0xb9,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,
+- 0xcf,0x89,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcd,
+- 0x82,0x00,0x01,0xff,0xcf,0x89,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0xbf,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x10,
+- 0x09,0x01,0xff,0xcf,0x89,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0xe1,
+- 0x4b,0x5b,0x10,0x09,0x01,0xff,0xcf,0x89,0xce,0xb9,0x00,0x01,0xff,0xc2,0xb4,0x00,
+- 0xe0,0xa2,0x67,0xcf,0x86,0xe5,0x24,0x02,0xe4,0x26,0x01,0xe3,0x1b,0x5e,0xd2,0x2b,
+- 0xe1,0xf5,0x5b,0xe0,0x7a,0x5b,0xcf,0x86,0xe5,0x5f,0x5b,0x94,0x1c,0x93,0x18,0x92,
+- 0x14,0x91,0x10,0x10,0x08,0x01,0xff,0xe2,0x80,0x82,0x00,0x01,0xff,0xe2,0x80,0x83,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd1,0xd6,0xd0,0x46,0xcf,0x86,0x55,
+- 0x04,0x01,0x00,0xd4,0x29,0xd3,0x13,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
+- 0x07,0x01,0xff,0xcf,0x89,0x00,0x01,0x00,0x92,0x12,0x51,0x04,0x01,0x00,0x10,0x06,
+- 0x01,0xff,0x6b,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x01,0x00,0xe3,0xba,0x5c,0x92,
+- 0x10,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0x8e,0x00,0x01,0x00,0x01,
+- 0x00,0xcf,0x86,0xd5,0x0a,0xe4,0xd7,0x5c,0x63,0xc2,0x5c,0x06,0x00,0x94,0x80,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb0,0x00,0x01,0xff,0xe2,
+- 0x85,0xb1,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xb2,0x00,0x01,0xff,0xe2,0x85,0xb3,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb4,0x00,0x01,0xff,0xe2,0x85,0xb5,
+- 0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xb6,0x00,0x01,0xff,0xe2,0x85,0xb7,0x00,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb8,0x00,0x01,0xff,0xe2,0x85,0xb9,
+- 0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xba,0x00,0x01,0xff,0xe2,0x85,0xbb,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xbc,0x00,0x01,0xff,0xe2,0x85,0xbd,0x00,0x10,
+- 0x08,0x01,0xff,0xe2,0x85,0xbe,0x00,0x01,0xff,0xe2,0x85,0xbf,0x00,0x01,0x00,0xe0,
+- 0xc9,0x5c,0xcf,0x86,0xe5,0xa8,0x5c,0xe4,0x87,0x5c,0xe3,0x76,0x5c,0xe2,0x69,0x5c,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0xff,0xe2,0x86,0x84,0x00,0xe3,0xb8,
+- 0x60,0xe2,0x85,0x60,0xd1,0x0c,0xe0,0x32,0x60,0xcf,0x86,0x65,0x13,0x60,0x01,0x00,
+- 0xd0,0x62,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x18,0x52,0x04,
+- 0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x90,0x00,0x01,0xff,
+- 0xe2,0x93,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x93,0x92,0x00,
+- 0x01,0xff,0xe2,0x93,0x93,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x94,0x00,0x01,0xff,
+- 0xe2,0x93,0x95,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x93,0x96,0x00,0x01,0xff,
+- 0xe2,0x93,0x97,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x98,0x00,0x01,0xff,0xe2,0x93,
+- 0x99,0x00,0xcf,0x86,0xe5,0xec,0x5f,0x94,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0xe2,0x93,0x9a,0x00,0x01,0xff,0xe2,0x93,0x9b,0x00,0x10,0x08,0x01,
+- 0xff,0xe2,0x93,0x9c,0x00,0x01,0xff,0xe2,0x93,0x9d,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe2,0x93,0x9e,0x00,0x01,0xff,0xe2,0x93,0x9f,0x00,0x10,0x08,0x01,0xff,0xe2,
+- 0x93,0xa0,0x00,0x01,0xff,0xe2,0x93,0xa1,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe2,0x93,0xa2,0x00,0x01,0xff,0xe2,0x93,0xa3,0x00,0x10,0x08,0x01,0xff,0xe2,
+- 0x93,0xa4,0x00,0x01,0xff,0xe2,0x93,0xa5,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,
+- 0x93,0xa6,0x00,0x01,0xff,0xe2,0x93,0xa7,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0xa8,
+- 0x00,0x01,0xff,0xe2,0x93,0xa9,0x00,0x01,0x00,0xd4,0x0c,0xe3,0xc8,0x61,0xe2,0xc1,
+- 0x61,0xcf,0x06,0x04,0x00,0xe3,0xa1,0x64,0xe2,0x94,0x63,0xe1,0x2e,0x02,0xe0,0x84,
+- 0x01,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe2,0xb0,0xb0,0x00,0x08,0xff,0xe2,0xb0,0xb1,0x00,0x10,0x08,0x08,0xff,
+- 0xe2,0xb0,0xb2,0x00,0x08,0xff,0xe2,0xb0,0xb3,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe2,0xb0,0xb4,0x00,0x08,0xff,0xe2,0xb0,0xb5,0x00,0x10,0x08,0x08,0xff,0xe2,0xb0,
+- 0xb6,0x00,0x08,0xff,0xe2,0xb0,0xb7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe2,0xb0,0xb8,0x00,0x08,0xff,0xe2,0xb0,0xb9,0x00,0x10,0x08,0x08,0xff,0xe2,0xb0,
+- 0xba,0x00,0x08,0xff,0xe2,0xb0,0xbb,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb0,
+- 0xbc,0x00,0x08,0xff,0xe2,0xb0,0xbd,0x00,0x10,0x08,0x08,0xff,0xe2,0xb0,0xbe,0x00,
+- 0x08,0xff,0xe2,0xb0,0xbf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe2,0xb1,0x80,0x00,0x08,0xff,0xe2,0xb1,0x81,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x82,0x00,0x08,0xff,0xe2,0xb1,0x83,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x84,0x00,0x08,0xff,0xe2,0xb1,0x85,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x86,0x00,
+- 0x08,0xff,0xe2,0xb1,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x88,0x00,0x08,0xff,0xe2,0xb1,0x89,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x8a,0x00,
+- 0x08,0xff,0xe2,0xb1,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,0x8c,0x00,
+- 0x08,0xff,0xe2,0xb1,0x8d,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x8e,0x00,0x08,0xff,
+- 0xe2,0xb1,0x8f,0x00,0x94,0x7c,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
+- 0xe2,0xb1,0x90,0x00,0x08,0xff,0xe2,0xb1,0x91,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x92,0x00,0x08,0xff,0xe2,0xb1,0x93,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x94,0x00,0x08,0xff,0xe2,0xb1,0x95,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x96,0x00,
+- 0x08,0xff,0xe2,0xb1,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
+- 0x98,0x00,0x08,0xff,0xe2,0xb1,0x99,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x9a,0x00,
+- 0x08,0xff,0xe2,0xb1,0x9b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,0x9c,0x00,
+- 0x08,0xff,0xe2,0xb1,0x9d,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x9e,0x00,0x00,0x00,
+- 0x08,0x00,0xcf,0x86,0xd5,0x07,0x64,0x84,0x61,0x08,0x00,0xd4,0x63,0xd3,0x32,0xd2,
+- 0x1b,0xd1,0x0c,0x10,0x08,0x09,0xff,0xe2,0xb1,0xa1,0x00,0x09,0x00,0x10,0x07,0x09,
+- 0xff,0xc9,0xab,0x00,0x09,0xff,0xe1,0xb5,0xbd,0x00,0xd1,0x0b,0x10,0x07,0x09,0xff,
+- 0xc9,0xbd,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xa8,0x00,0xd2,
+- 0x18,0xd1,0x0c,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xaa,0x00,0x10,0x04,0x09,
+- 0x00,0x09,0xff,0xe2,0xb1,0xac,0x00,0xd1,0x0b,0x10,0x04,0x09,0x00,0x0a,0xff,0xc9,
+- 0x91,0x00,0x10,0x07,0x0a,0xff,0xc9,0xb1,0x00,0x0a,0xff,0xc9,0x90,0x00,0xd3,0x27,
+- 0xd2,0x17,0xd1,0x0b,0x10,0x07,0x0b,0xff,0xc9,0x92,0x00,0x0a,0x00,0x10,0x08,0x0a,
+- 0xff,0xe2,0xb1,0xb3,0x00,0x0a,0x00,0x91,0x0c,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,
+- 0xb1,0xb6,0x00,0x09,0x00,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x07,0x0b,
+- 0xff,0xc8,0xbf,0x00,0x0b,0xff,0xc9,0x80,0x00,0xe0,0x83,0x01,0xcf,0x86,0xd5,0xc0,
+- 0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x81,0x00,
+- 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x83,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
+- 0x08,0xff,0xe2,0xb2,0x85,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x87,0x00,
+- 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x89,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb2,0x8b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb2,0x8d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x8f,0x00,0x08,0x00,
+- 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x91,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb2,0x93,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb2,0x95,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x97,0x00,0x08,0x00,
+- 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x99,0x00,0x08,0x00,0x10,0x08,
+- 0x08,0xff,0xe2,0xb2,0x9b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
+- 0x9d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x9f,0x00,0x08,0x00,0xd4,0x60,
+- 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa1,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb2,0xa3,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb2,0xa5,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa7,0x00,0x08,0x00,
+- 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa9,0x00,0x08,0x00,0x10,0x08,
+- 0x08,0xff,0xe2,0xb2,0xab,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
+- 0xad,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xaf,0x00,0x08,0x00,0xd3,0x30,
+- 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb1,0x00,0x08,0x00,0x10,0x08,
+- 0x08,0xff,0xe2,0xb2,0xb3,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
+- 0xb5,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb7,0x00,0x08,0x00,0xd2,0x18,
+- 0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb9,0x00,0x08,0x00,0x10,0x08,0x08,0xff,
+- 0xe2,0xb2,0xbb,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xbd,0x00,
+- 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xbf,0x00,0x08,0x00,0xcf,0x86,0xd5,0xc0,
+- 0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x81,0x00,
+- 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x83,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
+- 0x08,0xff,0xe2,0xb3,0x85,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x87,0x00,
+- 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x89,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb3,0x8b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb3,0x8d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x8f,0x00,0x08,0x00,
+- 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x91,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb3,0x93,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
+- 0xe2,0xb3,0x95,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x97,0x00,0x08,0x00,
+- 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x99,0x00,0x08,0x00,0x10,0x08,
+- 0x08,0xff,0xe2,0xb3,0x9b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,
+- 0x9d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x9f,0x00,0x08,0x00,0xd4,0x3b,
+- 0xd3,0x1c,0x92,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0xa1,0x00,0x08,0x00,
+- 0x10,0x08,0x08,0xff,0xe2,0xb3,0xa3,0x00,0x08,0x00,0x08,0x00,0xd2,0x10,0x51,0x04,
+- 0x08,0x00,0x10,0x04,0x08,0x00,0x0b,0xff,0xe2,0xb3,0xac,0x00,0xe1,0xd0,0x5e,0x10,
+- 0x04,0x0b,0x00,0x0b,0xff,0xe2,0xb3,0xae,0x00,0xe3,0xd5,0x5e,0x92,0x10,0x51,0x04,
+- 0x0b,0xe6,0x10,0x08,0x0d,0xff,0xe2,0xb3,0xb3,0x00,0x0d,0x00,0x00,0x00,0xe2,0x98,
+- 0x08,0xd1,0x0b,0xe0,0x8d,0x66,0xcf,0x86,0xcf,0x06,0x01,0x00,0xe0,0xe1,0x6b,0xcf,
+- 0x86,0xe5,0xa7,0x05,0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x0c,0xe2,0x74,0x67,0xe1,
+- 0x0b,0x67,0xcf,0x06,0x04,0x00,0xe2,0xdb,0x01,0xe1,0x26,0x01,0xd0,0x09,0xcf,0x86,
+- 0x65,0x70,0x67,0x0a,0x00,0xcf,0x86,0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,
+- 0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,
+- 0x99,0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x85,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,
+- 0x08,0x0a,0xff,0xea,0x99,0x89,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x8b,
+- 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x8d,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x99,0x8f,0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,
+- 0x08,0x0a,0xff,0xea,0x99,0x91,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x93,
+- 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x95,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x99,0x97,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,
+- 0xff,0xea,0x99,0x99,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x9b,0x00,0x0a,
+- 0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x9d,0x00,0x0a,0x00,0x10,0x08,0x0a,
+- 0xff,0xea,0x99,0x9f,0x00,0x0a,0x00,0xe4,0xd9,0x66,0xd3,0x30,0xd2,0x18,0xd1,0x0c,
+- 0x10,0x08,0x0c,0xff,0xea,0x99,0xa1,0x00,0x0c,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,
+- 0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0xa5,0x00,0x0a,0x00,
+- 0x10,0x08,0x0a,0xff,0xea,0x99,0xa7,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,
+- 0x0a,0xff,0xea,0x99,0xa9,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0xab,0x00,
+- 0x0a,0x00,0xe1,0x88,0x66,0x10,0x08,0x0a,0xff,0xea,0x99,0xad,0x00,0x0a,0x00,0xe0,
+- 0xb1,0x66,0xcf,0x86,0x95,0xab,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,
+- 0x0a,0xff,0xea,0x9a,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x83,0x00,
+- 0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x85,0x00,0x0a,0x00,0x10,0x08,
+- 0x0a,0xff,0xea,0x9a,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,
+- 0xea,0x9a,0x89,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x8b,0x00,0x0a,0x00,
+- 0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x8d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,
+- 0xea,0x9a,0x8f,0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,
+- 0xea,0x9a,0x91,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x93,0x00,0x0a,0x00,
+- 0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x95,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,
+- 0xea,0x9a,0x97,0x00,0x0a,0x00,0xe2,0x0e,0x66,0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,
+- 0x9a,0x99,0x00,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9a,0x9b,0x00,0x10,0x00,0x0b,
+- 0x00,0xe1,0x10,0x02,0xd0,0xb9,0xcf,0x86,0xd5,0x07,0x64,0x1a,0x66,0x08,0x00,0xd4,
+- 0x58,0xd3,0x28,0xd2,0x10,0x51,0x04,0x09,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xa3,
+- 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xa5,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x9c,0xa7,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,
+- 0xff,0xea,0x9c,0xa9,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xab,0x00,0x0a,
+- 0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xad,0x00,0x0a,0x00,0x10,0x08,0x0a,
+- 0xff,0xea,0x9c,0xaf,0x00,0x0a,0x00,0xd3,0x28,0xd2,0x10,0x51,0x04,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x9c,0xb3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
+- 0x9c,0xb5,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb7,0x00,0x0a,0x00,0xd2,
+- 0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb9,0x00,0x0a,0x00,0x10,0x08,0x0a,
+- 0xff,0xea,0x9c,0xbb,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xbd,
+- 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xbf,0x00,0x0a,0x00,0xcf,0x86,0xd5,
+- 0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x81,
+- 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,
+- 0x08,0x0a,0xff,0xea,0x9d,0x85,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x87,
+- 0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x89,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x8b,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
+- 0xff,0xea,0x9d,0x8d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x8f,0x00,0x0a,
+- 0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x91,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x93,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
+- 0xff,0xea,0x9d,0x95,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x97,0x00,0x0a,
+- 0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x99,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x9d,0x9b,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
+- 0x9d,0x9d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x9f,0x00,0x0a,0x00,0xd4,
+- 0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa1,0x00,0x0a,
+- 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
+- 0xff,0xea,0x9d,0xa5,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa7,0x00,0x0a,
+- 0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa9,0x00,0x0a,0x00,0x10,
+- 0x08,0x0a,0xff,0xea,0x9d,0xab,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
+- 0x9d,0xad,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xaf,0x00,0x0a,0x00,0x53,
+- 0x04,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9d,0xba,
+- 0x00,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9d,0xbc,0x00,0xd1,0x0c,0x10,0x04,0x0a,
+- 0x00,0x0a,0xff,0xe1,0xb5,0xb9,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xbf,0x00,0x0a,
+- 0x00,0xe0,0x71,0x01,0xcf,0x86,0xd5,0xa6,0xd4,0x4e,0xd3,0x30,0xd2,0x18,0xd1,0x0c,
+- 0x10,0x08,0x0a,0xff,0xea,0x9e,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9e,
+- 0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9e,0x85,0x00,0x0a,0x00,
+- 0x10,0x08,0x0a,0xff,0xea,0x9e,0x87,0x00,0x0a,0x00,0xd2,0x10,0x51,0x04,0x0a,0x00,
+- 0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9e,0x8c,0x00,0xe1,0x16,0x64,0x10,0x04,0x0a,
+- 0x00,0x0c,0xff,0xc9,0xa5,0x00,0xd3,0x28,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0c,0xff,
+- 0xea,0x9e,0x91,0x00,0x0c,0x00,0x10,0x08,0x0d,0xff,0xea,0x9e,0x93,0x00,0x0d,0x00,
+- 0x51,0x04,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9e,0x97,0x00,0x10,0x00,0xd2,0x18,
+- 0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,0x9e,0x99,0x00,0x10,0x00,0x10,0x08,0x10,0xff,
+- 0xea,0x9e,0x9b,0x00,0x10,0x00,0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,0x9e,0x9d,0x00,
+- 0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9e,0x9f,0x00,0x10,0x00,0xd4,0x63,0xd3,0x30,
+- 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa1,0x00,0x0c,0x00,0x10,0x08,
+- 0x0c,0xff,0xea,0x9e,0xa3,0x00,0x0c,0x00,0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,
+- 0xa5,0x00,0x0c,0x00,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa7,0x00,0x0c,0x00,0xd2,0x1a,
+- 0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa9,0x00,0x0c,0x00,0x10,0x07,0x0d,0xff,
+- 0xc9,0xa6,0x00,0x10,0xff,0xc9,0x9c,0x00,0xd1,0x0e,0x10,0x07,0x10,0xff,0xc9,0xa1,
+- 0x00,0x10,0xff,0xc9,0xac,0x00,0x10,0x07,0x12,0xff,0xc9,0xaa,0x00,0x14,0x00,0xd3,
+- 0x35,0xd2,0x1d,0xd1,0x0e,0x10,0x07,0x10,0xff,0xca,0x9e,0x00,0x10,0xff,0xca,0x87,
+- 0x00,0x10,0x07,0x11,0xff,0xca,0x9d,0x00,0x11,0xff,0xea,0xad,0x93,0x00,0xd1,0x0c,
+- 0x10,0x08,0x11,0xff,0xea,0x9e,0xb5,0x00,0x11,0x00,0x10,0x08,0x11,0xff,0xea,0x9e,
+- 0xb7,0x00,0x11,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x14,0xff,0xea,0x9e,0xb9,0x00,
+- 0x14,0x00,0x10,0x08,0x15,0xff,0xea,0x9e,0xbb,0x00,0x15,0x00,0xd1,0x0c,0x10,0x08,
+- 0x15,0xff,0xea,0x9e,0xbd,0x00,0x15,0x00,0x10,0x08,0x15,0xff,0xea,0x9e,0xbf,0x00,
+- 0x15,0x00,0xcf,0x86,0xe5,0x50,0x63,0x94,0x2f,0x93,0x2b,0xd2,0x10,0x51,0x04,0x00,
+- 0x00,0x10,0x08,0x15,0xff,0xea,0x9f,0x83,0x00,0x15,0x00,0xd1,0x0f,0x10,0x08,0x15,
+- 0xff,0xea,0x9e,0x94,0x00,0x15,0xff,0xca,0x82,0x00,0x10,0x08,0x15,0xff,0xe1,0xb6,
+- 0x8e,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xe4,0x30,0x66,0xd3,0x1d,0xe2,0xd7,0x63,
+- 0xe1,0x86,0x63,0xe0,0x73,0x63,0xcf,0x86,0xe5,0x54,0x63,0x94,0x0b,0x93,0x07,0x62,
+- 0x3f,0x63,0x08,0x00,0x08,0x00,0x08,0x00,0xd2,0x0f,0xe1,0xd6,0x64,0xe0,0xa3,0x64,
+- 0xcf,0x86,0x65,0x88,0x64,0x0a,0x00,0xd1,0xab,0xd0,0x1a,0xcf,0x86,0xe5,0x93,0x65,
+- 0xe4,0x76,0x65,0xe3,0x5d,0x65,0xe2,0x50,0x65,0x91,0x08,0x10,0x04,0x00,0x00,0x0c,
+- 0x00,0x0c,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x0b,0x93,0x07,0x62,0xa3,0x65,
+- 0x11,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,
+- 0xa0,0x00,0x11,0xff,0xe1,0x8e,0xa1,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa2,0x00,
+- 0x11,0xff,0xe1,0x8e,0xa3,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa4,0x00,
+- 0x11,0xff,0xe1,0x8e,0xa5,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa6,0x00,0x11,0xff,
+- 0xe1,0x8e,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa8,0x00,
+- 0x11,0xff,0xe1,0x8e,0xa9,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xaa,0x00,0x11,0xff,
+- 0xe1,0x8e,0xab,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xac,0x00,0x11,0xff,
+- 0xe1,0x8e,0xad,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xae,0x00,0x11,0xff,0xe1,0x8e,
+- 0xaf,0x00,0xe0,0x2e,0x65,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb0,0x00,0x11,0xff,0xe1,0x8e,0xb1,0x00,
+- 0x10,0x08,0x11,0xff,0xe1,0x8e,0xb2,0x00,0x11,0xff,0xe1,0x8e,0xb3,0x00,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8e,0xb4,0x00,0x11,0xff,0xe1,0x8e,0xb5,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8e,0xb6,0x00,0x11,0xff,0xe1,0x8e,0xb7,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8e,0xb8,0x00,0x11,0xff,0xe1,0x8e,0xb9,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8e,0xba,0x00,0x11,0xff,0xe1,0x8e,0xbb,0x00,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8e,0xbc,0x00,0x11,0xff,0xe1,0x8e,0xbd,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8e,0xbe,0x00,0x11,0xff,0xe1,0x8e,0xbf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8f,0x80,0x00,0x11,0xff,0xe1,0x8f,0x81,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x82,0x00,0x11,0xff,0xe1,0x8f,0x83,0x00,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x84,0x00,0x11,0xff,0xe1,0x8f,0x85,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x86,0x00,0x11,0xff,0xe1,0x8f,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x88,0x00,0x11,0xff,0xe1,0x8f,0x89,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x8a,0x00,0x11,0xff,0xe1,0x8f,0x8b,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x8c,0x00,0x11,0xff,0xe1,0x8f,0x8d,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
+- 0x8e,0x00,0x11,0xff,0xe1,0x8f,0x8f,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x11,0xff,0xe1,0x8f,0x90,0x00,0x11,0xff,0xe1,0x8f,0x91,0x00,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x92,0x00,0x11,0xff,0xe1,0x8f,0x93,0x00,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x94,0x00,0x11,0xff,0xe1,0x8f,0x95,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x96,0x00,0x11,0xff,0xe1,0x8f,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0x98,0x00,0x11,0xff,0xe1,0x8f,0x99,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x9a,0x00,0x11,0xff,0xe1,0x8f,0x9b,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0x9c,0x00,0x11,0xff,0xe1,0x8f,0x9d,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
+- 0x9e,0x00,0x11,0xff,0xe1,0x8f,0x9f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x11,0xff,0xe1,0x8f,0xa0,0x00,0x11,0xff,0xe1,0x8f,0xa1,0x00,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0xa2,0x00,0x11,0xff,0xe1,0x8f,0xa3,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0xa4,0x00,0x11,0xff,0xe1,0x8f,0xa5,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
+- 0xa6,0x00,0x11,0xff,0xe1,0x8f,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,
+- 0xe1,0x8f,0xa8,0x00,0x11,0xff,0xe1,0x8f,0xa9,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
+- 0xaa,0x00,0x11,0xff,0xe1,0x8f,0xab,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8f,
+- 0xac,0x00,0x11,0xff,0xe1,0x8f,0xad,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,0xae,0x00,
+- 0x11,0xff,0xe1,0x8f,0xaf,0x00,0xd1,0x0c,0xe0,0x67,0x63,0xcf,0x86,0xcf,0x06,0x02,
+- 0xff,0xff,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,
+- 0x01,0x00,0xd4,0xae,0xd3,0x09,0xe2,0xd0,0x63,0xcf,0x06,0x01,0x00,0xd2,0x27,0xe1,
+- 0x9b,0x6f,0xe0,0xa2,0x6d,0xcf,0x86,0xe5,0xbb,0x6c,0xe4,0x4a,0x6c,0xe3,0x15,0x6c,
+- 0xe2,0xf4,0x6b,0xe1,0xe3,0x6b,0x10,0x08,0x01,0xff,0xe5,0x88,0x87,0x00,0x01,0xff,
+- 0xe5,0xba,0xa6,0x00,0xe1,0xf0,0x73,0xe0,0x64,0x73,0xcf,0x86,0xe5,0x9e,0x72,0xd4,
+- 0x3b,0x93,0x37,0xd2,0x1d,0xd1,0x0e,0x10,0x07,0x01,0xff,0x66,0x66,0x00,0x01,0xff,
+- 0x66,0x69,0x00,0x10,0x07,0x01,0xff,0x66,0x6c,0x00,0x01,0xff,0x66,0x66,0x69,0x00,
+- 0xd1,0x0f,0x10,0x08,0x01,0xff,0x66,0x66,0x6c,0x00,0x01,0xff,0x73,0x74,0x00,0x10,
+- 0x07,0x01,0xff,0x73,0x74,0x00,0x00,0x00,0x00,0x00,0xe3,0x44,0x72,0xd2,0x11,0x51,
+- 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xd5,0xb4,0xd5,0xb6,0x00,0xd1,0x12,
+- 0x10,0x09,0x01,0xff,0xd5,0xb4,0xd5,0xa5,0x00,0x01,0xff,0xd5,0xb4,0xd5,0xab,0x00,
+- 0x10,0x09,0x01,0xff,0xd5,0xbe,0xd5,0xb6,0x00,0x01,0xff,0xd5,0xb4,0xd5,0xad,0x00,
+- 0xd3,0x09,0xe2,0xbc,0x73,0xcf,0x06,0x01,0x00,0xd2,0x12,0xe1,0xab,0x74,0xe0,0x3c,
+- 0x74,0xcf,0x86,0xe5,0x19,0x74,0x64,0x08,0x74,0x06,0x00,0xe1,0x11,0x75,0xe0,0xde,
+- 0x74,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x7c,0xd3,0x3c,0xd2,
+- 0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xef,0xbd,0x81,0x00,0x10,0x08,0x01,
+- 0xff,0xef,0xbd,0x82,0x00,0x01,0xff,0xef,0xbd,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xef,0xbd,0x84,0x00,0x01,0xff,0xef,0xbd,0x85,0x00,0x10,0x08,0x01,0xff,0xef,
+- 0xbd,0x86,0x00,0x01,0xff,0xef,0xbd,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xef,0xbd,0x88,0x00,0x01,0xff,0xef,0xbd,0x89,0x00,0x10,0x08,0x01,0xff,0xef,
+- 0xbd,0x8a,0x00,0x01,0xff,0xef,0xbd,0x8b,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xef,
+- 0xbd,0x8c,0x00,0x01,0xff,0xef,0xbd,0x8d,0x00,0x10,0x08,0x01,0xff,0xef,0xbd,0x8e,
+- 0x00,0x01,0xff,0xef,0xbd,0x8f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xef,0xbd,0x90,0x00,0x01,0xff,0xef,0xbd,0x91,0x00,0x10,0x08,0x01,0xff,0xef,
+- 0xbd,0x92,0x00,0x01,0xff,0xef,0xbd,0x93,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xef,
+- 0xbd,0x94,0x00,0x01,0xff,0xef,0xbd,0x95,0x00,0x10,0x08,0x01,0xff,0xef,0xbd,0x96,
+- 0x00,0x01,0xff,0xef,0xbd,0x97,0x00,0x92,0x1c,0xd1,0x10,0x10,0x08,0x01,0xff,0xef,
+- 0xbd,0x98,0x00,0x01,0xff,0xef,0xbd,0x99,0x00,0x10,0x08,0x01,0xff,0xef,0xbd,0x9a,
+- 0x00,0x01,0x00,0x01,0x00,0x83,0xe2,0xd9,0xb2,0xe1,0xc3,0xaf,0xe0,0x40,0xae,0xcf,
+- 0x86,0xe5,0xe4,0x9a,0xc4,0xe3,0xc1,0x07,0xe2,0x62,0x06,0xe1,0x79,0x85,0xe0,0x09,
+- 0x05,0xcf,0x86,0xe5,0xfb,0x02,0xd4,0x1c,0xe3,0xe7,0x75,0xe2,0x3e,0x75,0xe1,0x19,
+- 0x75,0xe0,0xf2,0x74,0xcf,0x86,0xe5,0xbf,0x74,0x94,0x07,0x63,0xaa,0x74,0x07,0x00,
+- 0x07,0x00,0xe3,0x93,0x77,0xe2,0x58,0x77,0xe1,0x77,0x01,0xe0,0xf0,0x76,0xcf,0x86,
+- 0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,
+- 0x90,0x90,0xa8,0x00,0x05,0xff,0xf0,0x90,0x90,0xa9,0x00,0x10,0x09,0x05,0xff,0xf0,
+- 0x90,0x90,0xaa,0x00,0x05,0xff,0xf0,0x90,0x90,0xab,0x00,0xd1,0x12,0x10,0x09,0x05,
+- 0xff,0xf0,0x90,0x90,0xac,0x00,0x05,0xff,0xf0,0x90,0x90,0xad,0x00,0x10,0x09,0x05,
+- 0xff,0xf0,0x90,0x90,0xae,0x00,0x05,0xff,0xf0,0x90,0x90,0xaf,0x00,0xd2,0x24,0xd1,
+- 0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb0,0x00,0x05,0xff,0xf0,0x90,0x90,0xb1,
+- 0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb2,0x00,0x05,0xff,0xf0,0x90,0x90,0xb3,
+- 0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb4,0x00,0x05,0xff,0xf0,0x90,
+- 0x90,0xb5,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb6,0x00,0x05,0xff,0xf0,0x90,
+- 0x90,0xb7,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,
+- 0xb8,0x00,0x05,0xff,0xf0,0x90,0x90,0xb9,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,
+- 0xba,0x00,0x05,0xff,0xf0,0x90,0x90,0xbb,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,
+- 0x90,0x90,0xbc,0x00,0x05,0xff,0xf0,0x90,0x90,0xbd,0x00,0x10,0x09,0x05,0xff,0xf0,
+- 0x90,0x90,0xbe,0x00,0x05,0xff,0xf0,0x90,0x90,0xbf,0x00,0xd2,0x24,0xd1,0x12,0x10,
+- 0x09,0x05,0xff,0xf0,0x90,0x91,0x80,0x00,0x05,0xff,0xf0,0x90,0x91,0x81,0x00,0x10,
+- 0x09,0x05,0xff,0xf0,0x90,0x91,0x82,0x00,0x05,0xff,0xf0,0x90,0x91,0x83,0x00,0xd1,
+- 0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x84,0x00,0x05,0xff,0xf0,0x90,0x91,0x85,
+- 0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x86,0x00,0x05,0xff,0xf0,0x90,0x91,0x87,
+- 0x00,0x94,0x4c,0x93,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,
+- 0x88,0x00,0x05,0xff,0xf0,0x90,0x91,0x89,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,
+- 0x8a,0x00,0x05,0xff,0xf0,0x90,0x91,0x8b,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,
+- 0x90,0x91,0x8c,0x00,0x05,0xff,0xf0,0x90,0x91,0x8d,0x00,0x10,0x09,0x07,0xff,0xf0,
+- 0x90,0x91,0x8e,0x00,0x07,0xff,0xf0,0x90,0x91,0x8f,0x00,0x05,0x00,0x05,0x00,0xd0,
+- 0xa0,0xcf,0x86,0xd5,0x07,0x64,0x98,0x75,0x07,0x00,0xd4,0x07,0x63,0xa5,0x75,0x07,
+- 0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0x98,0x00,
+- 0x12,0xff,0xf0,0x90,0x93,0x99,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0x9a,0x00,
+- 0x12,0xff,0xf0,0x90,0x93,0x9b,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,
+- 0x9c,0x00,0x12,0xff,0xf0,0x90,0x93,0x9d,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,
+- 0x9e,0x00,0x12,0xff,0xf0,0x90,0x93,0x9f,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,
+- 0xff,0xf0,0x90,0x93,0xa0,0x00,0x12,0xff,0xf0,0x90,0x93,0xa1,0x00,0x10,0x09,0x12,
+- 0xff,0xf0,0x90,0x93,0xa2,0x00,0x12,0xff,0xf0,0x90,0x93,0xa3,0x00,0xd1,0x12,0x10,
+- 0x09,0x12,0xff,0xf0,0x90,0x93,0xa4,0x00,0x12,0xff,0xf0,0x90,0x93,0xa5,0x00,0x10,
+- 0x09,0x12,0xff,0xf0,0x90,0x93,0xa6,0x00,0x12,0xff,0xf0,0x90,0x93,0xa7,0x00,0xcf,
+- 0x86,0xe5,0x2e,0x75,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,
+- 0xf0,0x90,0x93,0xa8,0x00,0x12,0xff,0xf0,0x90,0x93,0xa9,0x00,0x10,0x09,0x12,0xff,
+- 0xf0,0x90,0x93,0xaa,0x00,0x12,0xff,0xf0,0x90,0x93,0xab,0x00,0xd1,0x12,0x10,0x09,
+- 0x12,0xff,0xf0,0x90,0x93,0xac,0x00,0x12,0xff,0xf0,0x90,0x93,0xad,0x00,0x10,0x09,
+- 0x12,0xff,0xf0,0x90,0x93,0xae,0x00,0x12,0xff,0xf0,0x90,0x93,0xaf,0x00,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb0,0x00,0x12,0xff,0xf0,0x90,0x93,
+- 0xb1,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb2,0x00,0x12,0xff,0xf0,0x90,0x93,
+- 0xb3,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb4,0x00,0x12,0xff,0xf0,
+- 0x90,0x93,0xb5,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb6,0x00,0x12,0xff,0xf0,
+- 0x90,0x93,0xb7,0x00,0x93,0x28,0x92,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,
+- 0x93,0xb8,0x00,0x12,0xff,0xf0,0x90,0x93,0xb9,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,
+- 0x93,0xba,0x00,0x12,0xff,0xf0,0x90,0x93,0xbb,0x00,0x00,0x00,0x12,0x00,0xd4,0x1f,
+- 0xe3,0x47,0x76,0xe2,0xd2,0x75,0xe1,0x71,0x75,0xe0,0x52,0x75,0xcf,0x86,0xe5,0x1f,
+- 0x75,0x94,0x0a,0xe3,0x0a,0x75,0x62,0x01,0x75,0x07,0x00,0x07,0x00,0xe3,0x46,0x78,
+- 0xe2,0x17,0x78,0xd1,0x09,0xe0,0xb4,0x77,0xcf,0x06,0x0b,0x00,0xe0,0xe7,0x77,0xcf,
+- 0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,0xff,
+- 0xf0,0x90,0xb3,0x80,0x00,0x11,0xff,0xf0,0x90,0xb3,0x81,0x00,0x10,0x09,0x11,0xff,
+- 0xf0,0x90,0xb3,0x82,0x00,0x11,0xff,0xf0,0x90,0xb3,0x83,0x00,0xd1,0x12,0x10,0x09,
+- 0x11,0xff,0xf0,0x90,0xb3,0x84,0x00,0x11,0xff,0xf0,0x90,0xb3,0x85,0x00,0x10,0x09,
+- 0x11,0xff,0xf0,0x90,0xb3,0x86,0x00,0x11,0xff,0xf0,0x90,0xb3,0x87,0x00,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x88,0x00,0x11,0xff,0xf0,0x90,0xb3,
+- 0x89,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8a,0x00,0x11,0xff,0xf0,0x90,0xb3,
+- 0x8b,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8c,0x00,0x11,0xff,0xf0,
+- 0x90,0xb3,0x8d,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8e,0x00,0x11,0xff,0xf0,
+- 0x90,0xb3,0x8f,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,
+- 0xb3,0x90,0x00,0x11,0xff,0xf0,0x90,0xb3,0x91,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,
+- 0xb3,0x92,0x00,0x11,0xff,0xf0,0x90,0xb3,0x93,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,
+- 0xf0,0x90,0xb3,0x94,0x00,0x11,0xff,0xf0,0x90,0xb3,0x95,0x00,0x10,0x09,0x11,0xff,
+- 0xf0,0x90,0xb3,0x96,0x00,0x11,0xff,0xf0,0x90,0xb3,0x97,0x00,0xd2,0x24,0xd1,0x12,
+- 0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x98,0x00,0x11,0xff,0xf0,0x90,0xb3,0x99,0x00,
+- 0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9a,0x00,0x11,0xff,0xf0,0x90,0xb3,0x9b,0x00,
+- 0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9c,0x00,0x11,0xff,0xf0,0x90,0xb3,
+- 0x9d,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9e,0x00,0x11,0xff,0xf0,0x90,0xb3,
+- 0x9f,0x00,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,
+- 0xb3,0xa0,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa1,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,
+- 0xb3,0xa2,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa3,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,
+- 0xf0,0x90,0xb3,0xa4,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa5,0x00,0x10,0x09,0x11,0xff,
+- 0xf0,0x90,0xb3,0xa6,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa7,0x00,0xd2,0x24,0xd1,0x12,
+- 0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xa8,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa9,0x00,
+- 0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xaa,0x00,0x11,0xff,0xf0,0x90,0xb3,0xab,0x00,
+- 0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xac,0x00,0x11,0xff,0xf0,0x90,0xb3,
+- 0xad,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xae,0x00,0x11,0xff,0xf0,0x90,0xb3,
+- 0xaf,0x00,0x93,0x23,0x92,0x1f,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xb0,
+- 0x00,0x11,0xff,0xf0,0x90,0xb3,0xb1,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xb2,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x15,0xe4,0xf9,0x7a,0xe3,0x03,
+- 0x79,0xe2,0xfc,0x77,0xe1,0x4c,0x77,0xe0,0x05,0x77,0xcf,0x06,0x0c,0x00,0xe4,0x53,
+- 0x7e,0xe3,0xac,0x7d,0xe2,0x55,0x7d,0xd1,0x0c,0xe0,0x1a,0x7d,0xcf,0x86,0x65,0xfb,
+- 0x7c,0x14,0x00,0xe0,0x1e,0x7d,0xcf,0x86,0x55,0x04,0x00,0x00,0xd4,0x90,0xd3,0x48,
+- 0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x80,0x00,0x10,0xff,0xf0,
+- 0x91,0xa3,0x81,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x82,0x00,0x10,0xff,0xf0,
+- 0x91,0xa3,0x83,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x84,0x00,0x10,
+- 0xff,0xf0,0x91,0xa3,0x85,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x86,0x00,0x10,
+- 0xff,0xf0,0x91,0xa3,0x87,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,
+- 0xa3,0x88,0x00,0x10,0xff,0xf0,0x91,0xa3,0x89,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,
+- 0xa3,0x8a,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8b,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,
+- 0xf0,0x91,0xa3,0x8c,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8d,0x00,0x10,0x09,0x10,0xff,
+- 0xf0,0x91,0xa3,0x8e,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8f,0x00,0xd3,0x48,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x90,0x00,0x10,0xff,0xf0,0x91,0xa3,
+- 0x91,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x92,0x00,0x10,0xff,0xf0,0x91,0xa3,
+- 0x93,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x94,0x00,0x10,0xff,0xf0,
+- 0x91,0xa3,0x95,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x96,0x00,0x10,0xff,0xf0,
+- 0x91,0xa3,0x97,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x98,
+- 0x00,0x10,0xff,0xf0,0x91,0xa3,0x99,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x9a,
+- 0x00,0x10,0xff,0xf0,0x91,0xa3,0x9b,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,
+- 0xa3,0x9c,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9d,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,
+- 0xa3,0x9e,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9f,0x00,0xd1,0x11,0xe0,0x7a,0x80,0xcf,
+- 0x86,0xe5,0x71,0x80,0xe4,0x3a,0x80,0xcf,0x06,0x00,0x00,0xe0,0x43,0x82,0xcf,0x86,
+- 0xd5,0x06,0xcf,0x06,0x00,0x00,0xd4,0x09,0xe3,0x78,0x80,0xcf,0x06,0x0c,0x00,0xd3,
+- 0x06,0xcf,0x06,0x00,0x00,0xe2,0xa3,0x81,0xe1,0x7e,0x81,0xd0,0x06,0xcf,0x06,0x00,
+- 0x00,0xcf,0x86,0xa5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,
+- 0x14,0xff,0xf0,0x96,0xb9,0xa0,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa1,0x00,0x10,0x09,
+- 0x14,0xff,0xf0,0x96,0xb9,0xa2,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa3,0x00,0xd1,0x12,
+- 0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa4,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa5,0x00,
+- 0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa6,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa7,0x00,
+- 0xd2,0x24,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa8,0x00,0x14,0xff,0xf0,
+- 0x96,0xb9,0xa9,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xaa,0x00,0x14,0xff,0xf0,
+- 0x96,0xb9,0xab,0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xac,0x00,0x14,
+- 0xff,0xf0,0x96,0xb9,0xad,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xae,0x00,0x14,
+- 0xff,0xf0,0x96,0xb9,0xaf,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x14,0xff,
+- 0xf0,0x96,0xb9,0xb0,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb1,0x00,0x10,0x09,0x14,0xff,
+- 0xf0,0x96,0xb9,0xb2,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb3,0x00,0xd1,0x12,0x10,0x09,
+- 0x14,0xff,0xf0,0x96,0xb9,0xb4,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb5,0x00,0x10,0x09,
+- 0x14,0xff,0xf0,0x96,0xb9,0xb6,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb7,0x00,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xb8,0x00,0x14,0xff,0xf0,0x96,0xb9,
+- 0xb9,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xba,0x00,0x14,0xff,0xf0,0x96,0xb9,
+- 0xbb,0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xbc,0x00,0x14,0xff,0xf0,
+- 0x96,0xb9,0xbd,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xbe,0x00,0x14,0xff,0xf0,
+- 0x96,0xb9,0xbf,0x00,0x14,0x00,0xd2,0x14,0xe1,0x8d,0x81,0xe0,0x84,0x81,0xcf,0x86,
+- 0xe5,0x45,0x81,0xe4,0x02,0x81,0xcf,0x06,0x12,0x00,0xd1,0x0b,0xe0,0xb8,0x82,0xcf,
+- 0x86,0xcf,0x06,0x00,0x00,0xe0,0xf8,0x8a,0xcf,0x86,0xd5,0x22,0xe4,0x33,0x88,0xe3,
+- 0xf6,0x87,0xe2,0x9b,0x87,0xe1,0x94,0x87,0xe0,0x8d,0x87,0xcf,0x86,0xe5,0x5e,0x87,
+- 0xe4,0x45,0x87,0x93,0x07,0x62,0x34,0x87,0x12,0xe6,0x12,0xe6,0xe4,0x99,0x88,0xe3,
+- 0x92,0x88,0xd2,0x09,0xe1,0x1b,0x88,0xcf,0x06,0x10,0x00,0xe1,0x82,0x88,0xe0,0x4f,
+- 0x88,0xcf,0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,
+- 0x12,0xff,0xf0,0x9e,0xa4,0xa2,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa3,0x00,0x10,0x09,
+- 0x12,0xff,0xf0,0x9e,0xa4,0xa4,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa5,0x00,0xd1,0x12,
+- 0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa6,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa7,0x00,
+- 0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa8,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa9,0x00,
+- 0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xaa,0x00,0x12,0xff,0xf0,
+- 0x9e,0xa4,0xab,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xac,0x00,0x12,0xff,0xf0,
+- 0x9e,0xa4,0xad,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xae,0x00,0x12,
+- 0xff,0xf0,0x9e,0xa4,0xaf,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb0,0x00,0x12,
+- 0xff,0xf0,0x9e,0xa4,0xb1,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,
+- 0xf0,0x9e,0xa4,0xb2,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb3,0x00,0x10,0x09,0x12,0xff,
+- 0xf0,0x9e,0xa4,0xb4,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb5,0x00,0xd1,0x12,0x10,0x09,
+- 0x12,0xff,0xf0,0x9e,0xa4,0xb6,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb7,0x00,0x10,0x09,
+- 0x12,0xff,0xf0,0x9e,0xa4,0xb8,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb9,0x00,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xba,0x00,0x12,0xff,0xf0,0x9e,0xa4,
+- 0xbb,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xbc,0x00,0x12,0xff,0xf0,0x9e,0xa4,
+- 0xbd,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xbe,0x00,0x12,0xff,0xf0,
+- 0x9e,0xa4,0xbf,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa5,0x80,0x00,0x12,0xff,0xf0,
+- 0x9e,0xa5,0x81,0x00,0x94,0x1e,0x93,0x1a,0x92,0x16,0x91,0x12,0x10,0x09,0x12,0xff,
+- 0xf0,0x9e,0xa5,0x82,0x00,0x12,0xff,0xf0,0x9e,0xa5,0x83,0x00,0x12,0x00,0x12,0x00,
+- 0x12,0x00,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- /* nfdi_c0100 */
+- 0x57,0x04,0x01,0x00,0xc6,0xe5,0x91,0x13,0xe4,0x27,0x0c,0xe3,0x61,0x07,0xe2,0xda,
+- 0x01,0xc1,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0xe4,0xd4,0x7c,0xd3,0x3c,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0x80,0x00,0x01,0xff,0x41,0xcc,
+- 0x81,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x82,0x00,0x01,0xff,0x41,0xcc,0x83,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0x88,0x00,0x01,0xff,0x41,0xcc,0x8a,0x00,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0x43,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x45,0xcc,0x80,0x00,0x01,0xff,0x45,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,
+- 0x45,0xcc,0x82,0x00,0x01,0xff,0x45,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x49,0xcc,0x80,0x00,0x01,0xff,0x49,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,
+- 0x82,0x00,0x01,0xff,0x49,0xcc,0x88,0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0c,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0x4e,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x80,0x00,
+- 0x01,0xff,0x4f,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x82,0x00,
+- 0x01,0xff,0x4f,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x88,0x00,0x01,0x00,
+- 0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x55,0xcc,0x80,0x00,0x10,0x08,
+- 0x01,0xff,0x55,0xcc,0x81,0x00,0x01,0xff,0x55,0xcc,0x82,0x00,0x91,0x10,0x10,0x08,
+- 0x01,0xff,0x55,0xcc,0x88,0x00,0x01,0xff,0x59,0xcc,0x81,0x00,0x01,0x00,0xd4,0x7c,
+- 0xd3,0x3c,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0x80,0x00,0x01,0xff,
+- 0x61,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x82,0x00,0x01,0xff,0x61,0xcc,
+- 0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0x88,0x00,0x01,0xff,0x61,0xcc,
+- 0x8a,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x63,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x65,0xcc,0x80,0x00,0x01,0xff,0x65,0xcc,0x81,0x00,0x10,0x08,
+- 0x01,0xff,0x65,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x69,0xcc,0x80,0x00,0x01,0xff,0x69,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,
+- 0x69,0xcc,0x82,0x00,0x01,0xff,0x69,0xcc,0x88,0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0c,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0x6e,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,
+- 0x80,0x00,0x01,0xff,0x6f,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,
+- 0x82,0x00,0x01,0xff,0x6f,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x88,0x00,
+- 0x01,0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x75,0xcc,0x80,0x00,
+- 0x10,0x08,0x01,0xff,0x75,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x82,0x00,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x75,0xcc,0x88,0x00,0x01,0xff,0x79,0xcc,0x81,0x00,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0x79,0xcc,0x88,0x00,0xe1,0x9a,0x03,0xe0,0xd3,0x01,0xcf,0x86,
+- 0xd5,0xf4,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,
+- 0x84,0x00,0x01,0xff,0x61,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x86,0x00,
+- 0x01,0xff,0x61,0xcc,0x86,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa8,0x00,
+- 0x01,0xff,0x61,0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x43,0xcc,0x81,0x00,0x01,0xff,
+- 0x63,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x43,0xcc,0x82,0x00,
+- 0x01,0xff,0x63,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x43,0xcc,0x87,0x00,0x01,0xff,
+- 0x63,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x43,0xcc,0x8c,0x00,0x01,0xff,
+- 0x63,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0x8c,0x00,0x01,0xff,0x64,0xcc,
+- 0x8c,0x00,0xd3,0x34,0xd2,0x14,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,
+- 0x84,0x00,0x01,0xff,0x65,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,
+- 0x86,0x00,0x01,0xff,0x65,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x87,0x00,
+- 0x01,0xff,0x65,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,
+- 0xa8,0x00,0x01,0xff,0x65,0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x8c,0x00,
+- 0x01,0xff,0x65,0xcc,0x8c,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x82,0x00,
+- 0x01,0xff,0x67,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x47,0xcc,0x86,0x00,0x01,0xff,
+- 0x67,0xcc,0x86,0x00,0xd4,0x74,0xd3,0x34,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x47,0xcc,0x87,0x00,0x01,0xff,0x67,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x47,0xcc,
+- 0xa7,0x00,0x01,0xff,0x67,0xcc,0xa7,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x48,0xcc,
+- 0x82,0x00,0x01,0xff,0x68,0xcc,0x82,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x49,0xcc,0x83,0x00,0x01,0xff,0x69,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,
+- 0x49,0xcc,0x84,0x00,0x01,0xff,0x69,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x49,0xcc,0x86,0x00,0x01,0xff,0x69,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,
+- 0xa8,0x00,0x01,0xff,0x69,0xcc,0xa8,0x00,0xd3,0x30,0xd2,0x10,0x91,0x0c,0x10,0x08,
+- 0x01,0xff,0x49,0xcc,0x87,0x00,0x01,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x4a,0xcc,0x82,0x00,0x01,0xff,0x6a,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x4b,0xcc,
+- 0xa7,0x00,0x01,0xff,0x6b,0xcc,0xa7,0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0x4c,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,0x81,0x00,0x01,0xff,
+- 0x4c,0xcc,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6c,0xcc,0xa7,0x00,0x01,0xff,
+- 0x4c,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,0x8c,0x00,0x01,0x00,0xcf,0x86,
+- 0xd5,0xd4,0xd4,0x60,0xd3,0x30,0xd2,0x10,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0x4e,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x81,0x00,
+- 0x01,0xff,0x4e,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa7,0x00,0x01,0xff,
+- 0x4e,0xcc,0x8c,0x00,0xd2,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,0x6e,0xcc,0x8c,0x00,
+- 0x01,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x84,0x00,0x01,0xff,
+- 0x6f,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x86,0x00,0x01,0xff,0x6f,0xcc,
+- 0x86,0x00,0xd3,0x34,0xd2,0x14,0x91,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x8b,0x00,
+- 0x01,0xff,0x6f,0xcc,0x8b,0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,
+- 0x81,0x00,0x01,0xff,0x72,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0xa7,0x00,
+- 0x01,0xff,0x72,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,
+- 0x8c,0x00,0x01,0xff,0x72,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x53,0xcc,0x81,0x00,
+- 0x01,0xff,0x73,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x53,0xcc,0x82,0x00,
+- 0x01,0xff,0x73,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x53,0xcc,0xa7,0x00,0x01,0xff,
+- 0x73,0xcc,0xa7,0x00,0xd4,0x74,0xd3,0x34,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x53,0xcc,0x8c,0x00,0x01,0xff,0x73,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,
+- 0xa7,0x00,0x01,0xff,0x74,0xcc,0xa7,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,
+- 0x8c,0x00,0x01,0xff,0x74,0xcc,0x8c,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x55,0xcc,0x83,0x00,0x01,0xff,0x75,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,
+- 0x55,0xcc,0x84,0x00,0x01,0xff,0x75,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x55,0xcc,0x86,0x00,0x01,0xff,0x75,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,
+- 0x8a,0x00,0x01,0xff,0x75,0xcc,0x8a,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x55,0xcc,0x8b,0x00,0x01,0xff,0x75,0xcc,0x8b,0x00,0x10,0x08,0x01,0xff,
+- 0x55,0xcc,0xa8,0x00,0x01,0xff,0x75,0xcc,0xa8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x57,0xcc,0x82,0x00,0x01,0xff,0x77,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x59,0xcc,
+- 0x82,0x00,0x01,0xff,0x79,0xcc,0x82,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x59,0xcc,0x88,0x00,0x01,0xff,0x5a,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,
+- 0x81,0x00,0x01,0xff,0x5a,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,
+- 0x87,0x00,0x01,0xff,0x5a,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x8c,0x00,
+- 0x01,0x00,0xd0,0x4a,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x2c,0xd3,0x18,0x92,0x14,
+- 0x91,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x9b,0x00,0x01,0xff,0x6f,0xcc,0x9b,0x00,
+- 0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0x55,0xcc,0x9b,0x00,0x93,0x14,0x92,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,
+- 0x75,0xcc,0x9b,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xb4,
+- 0xd4,0x24,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0x41,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x8c,0x00,0x01,0xff,
+- 0x49,0xcc,0x8c,0x00,0xd3,0x46,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,
+- 0x8c,0x00,0x01,0xff,0x4f,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8c,0x00,
+- 0x01,0xff,0x55,0xcc,0x8c,0x00,0xd1,0x12,0x10,0x08,0x01,0xff,0x75,0xcc,0x8c,0x00,
+- 0x01,0xff,0x55,0xcc,0x88,0xcc,0x84,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,
+- 0x84,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x81,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,
+- 0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x8c,0x00,
+- 0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x8c,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,
+- 0x80,0x00,0xd1,0x0e,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0x01,0x00,
+- 0x10,0x0a,0x01,0xff,0x41,0xcc,0x88,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x88,0xcc,
+- 0x84,0x00,0xd4,0x80,0xd3,0x3a,0xd2,0x26,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,
+- 0x87,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x87,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,
+- 0xc3,0x86,0xcc,0x84,0x00,0x01,0xff,0xc3,0xa6,0xcc,0x84,0x00,0x51,0x04,0x01,0x00,
+- 0x10,0x08,0x01,0xff,0x47,0xcc,0x8c,0x00,0x01,0xff,0x67,0xcc,0x8c,0x00,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0x8c,0x00,0x01,0xff,0x6b,0xcc,0x8c,0x00,
+- 0x10,0x08,0x01,0xff,0x4f,0xcc,0xa8,0x00,0x01,0xff,0x6f,0xcc,0xa8,0x00,0xd1,0x14,
+- 0x10,0x0a,0x01,0xff,0x4f,0xcc,0xa8,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0xa8,0xcc,
+- 0x84,0x00,0x10,0x09,0x01,0xff,0xc6,0xb7,0xcc,0x8c,0x00,0x01,0xff,0xca,0x92,0xcc,
+- 0x8c,0x00,0xd3,0x24,0xd2,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,0x6a,0xcc,0x8c,0x00,
+- 0x01,0x00,0x01,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x81,0x00,0x01,0xff,
+- 0x67,0xcc,0x81,0x00,0x04,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x4e,0xcc,
+- 0x80,0x00,0x04,0xff,0x6e,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x8a,0xcc,
+- 0x81,0x00,0x01,0xff,0x61,0xcc,0x8a,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,
+- 0xc3,0x86,0xcc,0x81,0x00,0x01,0xff,0xc3,0xa6,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,
+- 0xc3,0x98,0xcc,0x81,0x00,0x01,0xff,0xc3,0xb8,0xcc,0x81,0x00,0xe2,0x07,0x02,0xe1,
+- 0xae,0x01,0xe0,0x93,0x01,0xcf,0x86,0xd5,0xf4,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0x8f,0x00,0x01,0xff,0x61,0xcc,0x8f,0x00,0x10,
+- 0x08,0x01,0xff,0x41,0xcc,0x91,0x00,0x01,0xff,0x61,0xcc,0x91,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x45,0xcc,0x8f,0x00,0x01,0xff,0x65,0xcc,0x8f,0x00,0x10,0x08,0x01,
+- 0xff,0x45,0xcc,0x91,0x00,0x01,0xff,0x65,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x49,0xcc,0x8f,0x00,0x01,0xff,0x69,0xcc,0x8f,0x00,0x10,0x08,0x01,
+- 0xff,0x49,0xcc,0x91,0x00,0x01,0xff,0x69,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x4f,0xcc,0x8f,0x00,0x01,0xff,0x6f,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x4f,
+- 0xcc,0x91,0x00,0x01,0xff,0x6f,0xcc,0x91,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x52,0xcc,0x8f,0x00,0x01,0xff,0x72,0xcc,0x8f,0x00,0x10,0x08,0x01,
+- 0xff,0x52,0xcc,0x91,0x00,0x01,0xff,0x72,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x55,0xcc,0x8f,0x00,0x01,0xff,0x75,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x55,
+- 0xcc,0x91,0x00,0x01,0xff,0x75,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x04,
+- 0xff,0x53,0xcc,0xa6,0x00,0x04,0xff,0x73,0xcc,0xa6,0x00,0x10,0x08,0x04,0xff,0x54,
+- 0xcc,0xa6,0x00,0x04,0xff,0x74,0xcc,0xa6,0x00,0x51,0x04,0x04,0x00,0x10,0x08,0x04,
+- 0xff,0x48,0xcc,0x8c,0x00,0x04,0xff,0x68,0xcc,0x8c,0x00,0xd4,0x68,0xd3,0x20,0xd2,
+- 0x0c,0x91,0x08,0x10,0x04,0x06,0x00,0x07,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,
+- 0x08,0x04,0xff,0x41,0xcc,0x87,0x00,0x04,0xff,0x61,0xcc,0x87,0x00,0xd2,0x24,0xd1,
+- 0x10,0x10,0x08,0x04,0xff,0x45,0xcc,0xa7,0x00,0x04,0xff,0x65,0xcc,0xa7,0x00,0x10,
+- 0x0a,0x04,0xff,0x4f,0xcc,0x88,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x88,0xcc,0x84,
+- 0x00,0xd1,0x14,0x10,0x0a,0x04,0xff,0x4f,0xcc,0x83,0xcc,0x84,0x00,0x04,0xff,0x6f,
+- 0xcc,0x83,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x4f,0xcc,0x87,0x00,0x04,0xff,0x6f,
+- 0xcc,0x87,0x00,0x93,0x30,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x04,0xff,0x4f,0xcc,0x87,
+- 0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x87,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x59,
+- 0xcc,0x84,0x00,0x04,0xff,0x79,0xcc,0x84,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x07,
+- 0x00,0x08,0x00,0x08,0x00,0xcf,0x86,0x95,0x14,0x94,0x10,0x93,0x0c,0x92,0x08,0x11,
+- 0x04,0x08,0x00,0x09,0x00,0x09,0x00,0x09,0x00,0x01,0x00,0x01,0x00,0xd0,0x22,0xcf,
+- 0x86,0x55,0x04,0x01,0x00,0x94,0x18,0x53,0x04,0x01,0x00,0xd2,0x0c,0x91,0x08,0x10,
+- 0x04,0x01,0x00,0x04,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x07,0x00,0x01,0x00,0xcf,
+- 0x86,0xd5,0x18,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,
+- 0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x94,0x18,0x53,0x04,0x01,0x00,0xd2,
+- 0x08,0x11,0x04,0x01,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x07,
+- 0x00,0x07,0x00,0xe1,0x34,0x01,0xd0,0x72,0xcf,0x86,0xd5,0x24,0x54,0x04,0x01,0xe6,
+- 0xd3,0x10,0x52,0x04,0x01,0xe6,0x91,0x08,0x10,0x04,0x01,0xe6,0x01,0xe8,0x01,0xdc,
+- 0x92,0x0c,0x51,0x04,0x01,0xdc,0x10,0x04,0x01,0xe8,0x01,0xd8,0x01,0xdc,0xd4,0x2c,
+- 0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0xdc,0x01,0xca,0x10,0x04,0x01,0xca,
+- 0x01,0xdc,0x51,0x04,0x01,0xdc,0x10,0x04,0x01,0xdc,0x01,0xca,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x01,0xca,0x01,0xdc,0x01,0xdc,0x01,0xdc,0xd3,0x08,0x12,0x04,0x01,0xdc,
+- 0x01,0x01,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x01,0x01,0xdc,0x01,0xdc,0x91,0x08,
+- 0x10,0x04,0x01,0xdc,0x01,0xe6,0x01,0xe6,0xcf,0x86,0xd5,0x7e,0xd4,0x46,0xd3,0x2e,
+- 0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,0xff,0xcc,0x80,0x00,0x01,0xff,0xcc,0x81,0x00,
+- 0x10,0x04,0x01,0xe6,0x01,0xff,0xcc,0x93,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xcc,
+- 0x88,0xcc,0x81,0x00,0x01,0xf0,0x10,0x04,0x04,0xe6,0x04,0xdc,0xd2,0x08,0x11,0x04,
+- 0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,0xe6,0x04,0xdc,0x10,0x04,0x04,0xdc,
+- 0x06,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x07,0xe6,0x10,0x04,0x07,0xe6,0x07,0xdc,
+- 0x51,0x04,0x07,0xdc,0x10,0x04,0x07,0xdc,0x07,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,
+- 0x08,0xe8,0x08,0xdc,0x10,0x04,0x08,0xdc,0x08,0xe6,0xd1,0x08,0x10,0x04,0x08,0xe9,
+- 0x07,0xea,0x10,0x04,0x07,0xea,0x07,0xe9,0xd4,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,
+- 0x01,0xea,0x10,0x04,0x04,0xe9,0x06,0xe6,0x06,0xe6,0x06,0xe6,0xd3,0x13,0x52,0x04,
+- 0x0a,0x00,0x91,0x0b,0x10,0x07,0x01,0xff,0xca,0xb9,0x00,0x01,0x00,0x0a,0x00,0xd2,
+- 0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x01,0x00,0x09,0x00,0x51,0x04,0x09,0x00,0x10,
+- 0x06,0x01,0xff,0x3b,0x00,0x10,0x00,0xd0,0xe1,0xcf,0x86,0xd5,0x7a,0xd4,0x5f,0xd3,
+- 0x21,0x52,0x04,0x00,0x00,0xd1,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xc2,0xa8,0xcc,
+- 0x81,0x00,0x10,0x09,0x01,0xff,0xce,0x91,0xcc,0x81,0x00,0x01,0xff,0xc2,0xb7,0x00,
+- 0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x95,0xcc,0x81,0x00,0x01,0xff,0xce,
+- 0x97,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x81,0x00,0x00,0x00,0xd1,
+- 0x0d,0x10,0x09,0x01,0xff,0xce,0x9f,0xcc,0x81,0x00,0x00,0x00,0x10,0x09,0x01,0xff,
+- 0xce,0xa5,0xcc,0x81,0x00,0x01,0xff,0xce,0xa9,0xcc,0x81,0x00,0x93,0x17,0x92,0x13,
+- 0x91,0x0f,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,0x01,0x00,0x01,
+- 0x00,0x01,0x00,0x01,0x00,0xd4,0x4a,0xd3,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,
+- 0xff,0xce,0x99,0xcc,0x88,0x00,0x01,0xff,0xce,0xa5,0xcc,0x88,0x00,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0xb1,0xcc,0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,0x81,0x00,0x10,
+- 0x09,0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0x93,
+- 0x17,0x92,0x13,0x91,0x0f,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x88,0xcc,0x81,0x00,
+- 0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x7b,0xd4,0x39,0x53,0x04,
+- 0x01,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x88,
+- 0x00,0x01,0xff,0xcf,0x85,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xbf,
+- 0xcc,0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,
+- 0xcc,0x81,0x00,0x0a,0x00,0xd3,0x26,0xd2,0x11,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
+- 0x00,0x01,0xff,0xcf,0x92,0xcc,0x81,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xcf,0x92,
+- 0xcc,0x88,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0xd2,0x0c,0x51,0x04,0x06,
+- 0x00,0x10,0x04,0x01,0x00,0x04,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,
+- 0x04,0x01,0x00,0x04,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,
+- 0x00,0x04,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,
+- 0x04,0x05,0x00,0x10,0x04,0x06,0x00,0x07,0x00,0x12,0x04,0x07,0x00,0x08,0x00,0xe3,
+- 0x47,0x04,0xe2,0xbe,0x02,0xe1,0x07,0x01,0xd0,0x8b,0xcf,0x86,0xd5,0x6c,0xd4,0x53,
+- 0xd3,0x30,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,0x95,0xcc,0x80,0x00,0x01,
+- 0xff,0xd0,0x95,0xcc,0x88,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x93,0xcc,0x81,
+- 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x86,0xcc,0x88,0x00,
+- 0x52,0x04,0x01,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x9a,0xcc,0x81,0x00,0x04,
+- 0xff,0xd0,0x98,0xcc,0x80,0x00,0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x86,0x00,0x01,
+- 0x00,0x53,0x04,0x01,0x00,0x92,0x11,0x91,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,
+- 0x98,0xcc,0x86,0x00,0x01,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,
+- 0x92,0x11,0x91,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x86,0x00,0x01,
+- 0x00,0x01,0x00,0xcf,0x86,0xd5,0x57,0x54,0x04,0x01,0x00,0xd3,0x30,0xd2,0x1f,0xd1,
+- 0x12,0x10,0x09,0x04,0xff,0xd0,0xb5,0xcc,0x80,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x88,
+- 0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0xb3,0xcc,0x81,0x00,0x51,0x04,0x01,0x00,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xd1,0x96,0xcc,0x88,0x00,0x52,0x04,0x01,0x00,0xd1,
+- 0x12,0x10,0x09,0x01,0xff,0xd0,0xba,0xcc,0x81,0x00,0x04,0xff,0xd0,0xb8,0xcc,0x80,
+- 0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x86,0x00,0x01,0x00,0x54,0x04,0x01,0x00,
+- 0x93,0x1a,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd1,0xb4,
+- 0xcc,0x8f,0x00,0x01,0xff,0xd1,0xb5,0xcc,0x8f,0x00,0x01,0x00,0xd0,0x2e,0xcf,0x86,
+- 0x95,0x28,0x94,0x24,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xe6,0x51,0x04,0x01,0xe6,0x10,0x04,0x01,0xe6,0x0a,0xe6,0x92,0x08,0x11,0x04,
+- 0x04,0x00,0x06,0x00,0x04,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xbe,0xd4,0x4a,
+- 0xd3,0x2a,0xd2,0x1a,0xd1,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x96,0xcc,0x86,
+- 0x00,0x10,0x09,0x01,0xff,0xd0,0xb6,0xcc,0x86,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,
+- 0x01,0x00,0x06,0x00,0x10,0x04,0x06,0x00,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
+- 0x01,0x00,0x06,0x00,0x10,0x04,0x06,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,
+- 0x06,0x00,0x10,0x04,0x06,0x00,0x09,0x00,0xd3,0x3a,0xd2,0x24,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xd0,0x90,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb0,0xcc,0x86,0x00,0x10,0x09,
+- 0x01,0xff,0xd0,0x90,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb0,0xcc,0x88,0x00,0x51,0x04,
+- 0x01,0x00,0x10,0x09,0x01,0xff,0xd0,0x95,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb5,0xcc,
+- 0x86,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd3,0x98,0xcc,0x88,
+- 0x00,0x01,0xff,0xd3,0x99,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x96,
+- 0xcc,0x88,0x00,0x01,0xff,0xd0,0xb6,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0x97,
+- 0xcc,0x88,0x00,0x01,0xff,0xd0,0xb7,0xcc,0x88,0x00,0xd4,0x74,0xd3,0x3a,0xd2,0x16,
+- 0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd0,0x98,0xcc,0x84,0x00,0x01,0xff,0xd0,
+- 0xb8,0xcc,0x84,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x98,0xcc,0x88,0x00,0x01,
+- 0xff,0xd0,0xb8,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0x9e,0xcc,0x88,0x00,0x01,
+- 0xff,0xd0,0xbe,0xcc,0x88,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,
+- 0xd3,0xa8,0xcc,0x88,0x00,0x01,0xff,0xd3,0xa9,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,
+- 0x04,0xff,0xd0,0xad,0xcc,0x88,0x00,0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x10,0x09,
+- 0x01,0xff,0xd0,0xa3,0xcc,0x84,0x00,0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0xd3,0x3a,
+- 0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x88,0x00,0x01,0xff,0xd1,
+- 0x83,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x8b,0x00,0x01,0xff,0xd1,
+- 0x83,0xcc,0x8b,0x00,0x91,0x12,0x10,0x09,0x01,0xff,0xd0,0xa7,0xcc,0x88,0x00,0x01,
+- 0xff,0xd1,0x87,0xcc,0x88,0x00,0x08,0x00,0x92,0x16,0x91,0x12,0x10,0x09,0x01,0xff,
+- 0xd0,0xab,0xcc,0x88,0x00,0x01,0xff,0xd1,0x8b,0xcc,0x88,0x00,0x09,0x00,0x09,0x00,
+- 0xd1,0x74,0xd0,0x36,0xcf,0x86,0xd5,0x10,0x54,0x04,0x06,0x00,0x93,0x08,0x12,0x04,
+- 0x09,0x00,0x0a,0x00,0x0a,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x0a,0x00,0x11,0x04,
+- 0x0b,0x00,0x0c,0x00,0x10,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x24,0x54,0x04,0x01,0x00,
+- 0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x14,
+- 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0xd0,0xba,0xcf,0x86,0xd5,0x4c,0xd4,0x24,0x53,0x04,0x01,0x00,
+- 0xd2,0x10,0xd1,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x10,0x04,0x04,0x00,0x00,0x00,
+- 0xd1,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x04,0x10,0x00,0x0d,0x00,0xd3,0x18,
+- 0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x02,0xdc,0x02,0xe6,0x51,0x04,0x02,0xe6,
+- 0x10,0x04,0x02,0xdc,0x02,0xe6,0x92,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xde,
+- 0x02,0xdc,0x02,0xe6,0xd4,0x2c,0xd3,0x10,0x92,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,
+- 0x08,0xdc,0x02,0xdc,0x02,0xdc,0xd2,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xdc,
+- 0x02,0xe6,0xd1,0x08,0x10,0x04,0x02,0xe6,0x02,0xde,0x10,0x04,0x02,0xe4,0x02,0xe6,
+- 0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x0a,0x01,0x0b,0x10,0x04,0x01,0x0c,
+- 0x01,0x0d,0xd1,0x08,0x10,0x04,0x01,0x0e,0x01,0x0f,0x10,0x04,0x01,0x10,0x01,0x11,
+- 0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x12,0x01,0x13,0x10,0x04,0x09,0x13,0x01,0x14,
+- 0xd1,0x08,0x10,0x04,0x01,0x15,0x01,0x16,0x10,0x04,0x01,0x00,0x01,0x17,0xcf,0x86,
+- 0xd5,0x28,0x94,0x24,0x93,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0x18,
+- 0x10,0x04,0x01,0x19,0x01,0x00,0xd1,0x08,0x10,0x04,0x02,0xe6,0x08,0xdc,0x10,0x04,
+- 0x08,0x00,0x08,0x12,0x00,0x00,0x01,0x00,0xd4,0x1c,0x53,0x04,0x01,0x00,0xd2,0x0c,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x14,0x00,0x93,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0xe2,0xfa,0x01,0xe1,0x2a,0x01,0xd0,0xa7,0xcf,0x86,
+- 0xd5,0x54,0xd4,0x28,0xd3,0x10,0x52,0x04,0x07,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,
+- 0x10,0x00,0x0a,0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x08,0x00,
+- 0x91,0x08,0x10,0x04,0x01,0x00,0x07,0x00,0x07,0x00,0xd3,0x0c,0x52,0x04,0x07,0xe6,
+- 0x11,0x04,0x07,0xe6,0x0a,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0a,0x1e,0x0a,0x1f,
+- 0x10,0x04,0x0a,0x20,0x01,0x00,0xd1,0x08,0x10,0x04,0x0f,0x00,0x00,0x00,0x10,0x04,
+- 0x08,0x00,0x01,0x00,0xd4,0x3d,0x93,0x39,0xd2,0x1a,0xd1,0x08,0x10,0x04,0x0c,0x00,
+- 0x01,0x00,0x10,0x09,0x01,0xff,0xd8,0xa7,0xd9,0x93,0x00,0x01,0xff,0xd8,0xa7,0xd9,
+- 0x94,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd9,0x88,0xd9,0x94,0x00,0x01,0xff,0xd8,
+- 0xa7,0xd9,0x95,0x00,0x10,0x09,0x01,0xff,0xd9,0x8a,0xd9,0x94,0x00,0x01,0x00,0x01,
+- 0x00,0x53,0x04,0x01,0x00,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0a,
+- 0x00,0x0a,0x00,0xcf,0x86,0xd5,0x5c,0xd4,0x20,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,
+- 0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0x1b,0xd1,0x08,0x10,0x04,0x01,0x1c,0x01,
+- 0x1d,0x10,0x04,0x01,0x1e,0x01,0x1f,0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,
+- 0x20,0x01,0x21,0x10,0x04,0x01,0x22,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,0xe6,0x04,
+- 0xdc,0x10,0x04,0x07,0xdc,0x07,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x07,0xe6,0x08,
+- 0xe6,0x08,0xe6,0xd1,0x08,0x10,0x04,0x08,0xdc,0x08,0xe6,0x10,0x04,0x08,0xe6,0x0c,
+- 0xdc,0xd4,0x10,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x06,
+- 0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x23,0x01,0x00,0x01,0x00,0x01,
+- 0x00,0x01,0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,
+- 0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x04,0x00,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x01,0x00,0x04,0x00,0xcf,0x86,0xd5,0x5b,0xd4,0x2e,0xd3,0x1e,0x92,0x1a,0xd1,
+- 0x0d,0x10,0x09,0x01,0xff,0xdb,0x95,0xd9,0x94,0x00,0x01,0x00,0x10,0x09,0x01,0xff,
+- 0xdb,0x81,0xd9,0x94,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,
+- 0x00,0x10,0x04,0x01,0x00,0x04,0x00,0xd3,0x19,0xd2,0x11,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x01,0x00,0x01,0xff,0xdb,0x92,0xd9,0x94,0x00,0x11,0x04,0x01,0x00,0x01,0xe6,
+- 0x52,0x04,0x01,0xe6,0xd1,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xe6,0xd4,0x38,0xd3,0x1c,0xd2,0x0c,0x51,0x04,0x01,0xe6,0x10,0x04,0x01,0xe6,
+- 0x01,0xdc,0xd1,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xe6,
+- 0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0xdc,0x01,0xe6,
+- 0x91,0x08,0x10,0x04,0x01,0xe6,0x01,0xdc,0x07,0x00,0x53,0x04,0x01,0x00,0xd2,0x08,
+- 0x11,0x04,0x01,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x07,0x00,
+- 0xd1,0xc8,0xd0,0x76,0xcf,0x86,0xd5,0x28,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,0x04,
+- 0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x00,0x00,0x04,0x00,0x93,0x10,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x04,0x00,0x04,0x24,0x04,0x00,0x04,0x00,0x04,0x00,0xd4,0x14,
+- 0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x07,0x00,
+- 0x07,0x00,0xd3,0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0xe6,0x04,0xdc,0x04,0xe6,
+- 0xd1,0x08,0x10,0x04,0x04,0xdc,0x04,0xe6,0x10,0x04,0x04,0xe6,0x04,0xdc,0xd2,0x0c,
+- 0x51,0x04,0x04,0xdc,0x10,0x04,0x04,0xe6,0x04,0xdc,0xd1,0x08,0x10,0x04,0x04,0xdc,
+- 0x04,0xe6,0x10,0x04,0x04,0xdc,0x04,0xe6,0xcf,0x86,0xd5,0x3c,0x94,0x38,0xd3,0x1c,
+- 0xd2,0x0c,0x51,0x04,0x04,0xe6,0x10,0x04,0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,0x04,
+- 0x04,0xdc,0x04,0xe6,0x10,0x04,0x04,0xdc,0x04,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,
+- 0x04,0xdc,0x04,0xe6,0x10,0x04,0x04,0xe6,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x07,0x00,0x07,0x00,0x08,0x00,0x94,0x10,0x53,0x04,0x08,0x00,0x52,0x04,0x08,0x00,
+- 0x11,0x04,0x08,0x00,0x0a,0x00,0x0a,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x04,0x00,
+- 0x54,0x04,0x04,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x06,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x09,0x00,0xd4,0x14,0x53,0x04,
+- 0x09,0x00,0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xe6,0x09,0xe6,
+- 0xd3,0x10,0x92,0x0c,0x51,0x04,0x09,0xe6,0x10,0x04,0x09,0xdc,0x09,0xe6,0x09,0x00,
+- 0xd2,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x00,0x00,0x91,0x08,0x10,0x04,
+- 0x00,0x00,0x14,0xdc,0x14,0x00,0xe4,0x78,0x57,0xe3,0xda,0x3e,0xe2,0x89,0x3e,0xe1,
+- 0x91,0x2c,0xe0,0x21,0x10,0xcf,0x86,0xc5,0xe4,0x80,0x08,0xe3,0xcb,0x03,0xe2,0x61,
+- 0x01,0xd1,0x94,0xd0,0x5a,0xcf,0x86,0xd5,0x20,0x54,0x04,0x0b,0x00,0xd3,0x0c,0x52,
+- 0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x0b,0xe6,0x92,0x0c,0x51,0x04,0x0b,0xe6,0x10,
+- 0x04,0x0b,0x00,0x0b,0xe6,0x0b,0xe6,0xd4,0x24,0xd3,0x10,0x52,0x04,0x0b,0xe6,0x91,
+- 0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,0x0b,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,
+- 0x00,0x0b,0xe6,0x0b,0xe6,0x11,0x04,0x0b,0xe6,0x00,0x00,0x53,0x04,0x0b,0x00,0x52,
+- 0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xcf,0x86,0xd5,
+- 0x20,0x54,0x04,0x0c,0x00,0x53,0x04,0x0c,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0c,
+- 0x00,0x0c,0xdc,0x0c,0xdc,0x51,0x04,0x00,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x94,
+- 0x14,0x53,0x04,0x13,0x00,0x92,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0xd0,0x4a,0xcf,0x86,0x55,0x04,0x00,0x00,0xd4,0x20,0xd3,
+- 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0d,0x00,0x10,0x00,0x0d,0x00,0x0d,0x00,0x52,
+- 0x04,0x0d,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,0x10,0x00,0x10,0x00,0xd3,0x18,0xd2,
+- 0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x11,0x00,0x91,0x08,0x10,0x04,0x11,
+- 0x00,0x00,0x00,0x12,0x00,0x52,0x04,0x12,0x00,0x11,0x04,0x12,0x00,0x00,0x00,0xcf,
+- 0x86,0xd5,0x18,0x54,0x04,0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,
+- 0x04,0x00,0x00,0x14,0xdc,0x12,0xe6,0x12,0xe6,0xd4,0x30,0xd3,0x18,0xd2,0x0c,0x51,
+- 0x04,0x12,0xe6,0x10,0x04,0x12,0x00,0x11,0xdc,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,
+- 0xdc,0x0d,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x0d,0xe6,0x91,
+- 0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x0d,0xdc,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,
+- 0x04,0x0d,0x1b,0x0d,0x1c,0x10,0x04,0x0d,0x1d,0x0d,0xe6,0x51,0x04,0x0d,0xe6,0x10,
+- 0x04,0x0d,0xdc,0x0d,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x10,
+- 0x04,0x0d,0xdc,0x0d,0xe6,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xe6,0x10,0xe6,0xe1,
+- 0x3a,0x01,0xd0,0x77,0xcf,0x86,0xd5,0x20,0x94,0x1c,0x93,0x18,0xd2,0x0c,0x91,0x08,
+- 0x10,0x04,0x0b,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x07,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x1b,0x53,0x04,0x01,0x00,0x92,0x13,0x91,0x0f,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xa4,0xa8,0xe0,0xa4,0xbc,0x00,0x01,0x00,0x01,
+- 0x00,0xd3,0x26,0xd2,0x13,0x91,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xa4,0xb0,
+- 0xe0,0xa4,0xbc,0x00,0x01,0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xa4,0xb3,0xe0,
+- 0xa4,0xbc,0x00,0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x91,
+- 0x08,0x10,0x04,0x01,0x07,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x8c,0xd4,0x18,0x53,
+- 0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x10,
+- 0x04,0x0b,0x00,0x0c,0x00,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,
+- 0xe6,0x10,0x04,0x01,0xdc,0x01,0xe6,0x91,0x08,0x10,0x04,0x01,0xe6,0x0b,0x00,0x0c,
+- 0x00,0xd2,0x2c,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,0xa4,0x95,0xe0,0xa4,0xbc,0x00,
+- 0x01,0xff,0xe0,0xa4,0x96,0xe0,0xa4,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa4,0x97,
+- 0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0x9c,0xe0,0xa4,0xbc,0x00,0xd1,0x16,0x10,
+- 0x0b,0x01,0xff,0xe0,0xa4,0xa1,0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0xa2,0xe0,
+- 0xa4,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa4,0xab,0xe0,0xa4,0xbc,0x00,0x01,0xff,
+- 0xe0,0xa4,0xaf,0xe0,0xa4,0xbc,0x00,0x54,0x04,0x01,0x00,0xd3,0x14,0x92,0x10,0xd1,
+- 0x08,0x10,0x04,0x01,0x00,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0c,0x00,0x0c,0x00,0xd2,
+- 0x10,0xd1,0x08,0x10,0x04,0x10,0x00,0x0b,0x00,0x10,0x04,0x0b,0x00,0x09,0x00,0x91,
+- 0x08,0x10,0x04,0x09,0x00,0x08,0x00,0x09,0x00,0xd0,0x86,0xcf,0x86,0xd5,0x44,0xd4,
+- 0x2c,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,0x00,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,
+- 0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,
+- 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,
+- 0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0xd3,0x18,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd2,0x08,0x11,
+- 0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,0x07,0x07,0x00,0x01,0x00,0xcf,
+- 0x86,0xd5,0x7b,0xd4,0x42,0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,
+- 0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x17,0xd1,0x08,0x10,0x04,0x01,
+- 0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xe0,0xa7,0x87,0xe0,0xa6,0xbe,0x00,
+- 0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xa7,0x87,0xe0,0xa7,0x97,0x00,0x01,0x09,0x10,
+- 0x04,0x08,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,
+- 0x04,0x00,0x00,0x01,0x00,0x52,0x04,0x00,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,
+- 0xa6,0xa1,0xe0,0xa6,0xbc,0x00,0x01,0xff,0xe0,0xa6,0xa2,0xe0,0xa6,0xbc,0x00,0x10,
+- 0x04,0x00,0x00,0x01,0xff,0xe0,0xa6,0xaf,0xe0,0xa6,0xbc,0x00,0xd4,0x10,0x93,0x0c,
+- 0x52,0x04,0x01,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,
+- 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0b,0x00,0x51,0x04,0x13,0x00,
+- 0x10,0x04,0x14,0xe6,0x00,0x00,0xe2,0x48,0x02,0xe1,0x4f,0x01,0xd0,0xa4,0xcf,0x86,
+- 0xd5,0x4c,0xd4,0x34,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x07,0x00,
+- 0x10,0x04,0x01,0x00,0x07,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,
+- 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,
+- 0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,
+- 0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,
+- 0xd3,0x2e,0xd2,0x17,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xe0,0xa8,0xb2,0xe0,0xa8,0xbc,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,
+- 0x00,0x10,0x0b,0x01,0xff,0xe0,0xa8,0xb8,0xe0,0xa8,0xbc,0x00,0x00,0x00,0xd2,0x08,
+- 0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,0x07,0x00,0x00,0x01,0x00,
+- 0xcf,0x86,0xd5,0x80,0xd4,0x34,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,
+- 0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x10,
+- 0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,
+- 0x10,0x04,0x01,0x00,0x01,0x09,0x00,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x00,0x00,0x0a,0x00,0x00,0x00,0x00,0x00,0xd2,0x25,0xd1,0x0f,0x10,0x04,0x00,0x00,
+- 0x01,0xff,0xe0,0xa8,0x96,0xe0,0xa8,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa8,0x97,
+- 0xe0,0xa8,0xbc,0x00,0x01,0xff,0xe0,0xa8,0x9c,0xe0,0xa8,0xbc,0x00,0xd1,0x08,0x10,
+- 0x04,0x01,0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa8,0xab,0xe0,0xa8,0xbc,0x00,
+- 0x00,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,
+- 0x01,0x00,0x93,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x0a,0x00,
+- 0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0xd0,0x82,0xcf,0x86,0xd5,0x40,0xd4,0x2c,
+- 0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x91,0x08,
+- 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,
+- 0x07,0x00,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,
+- 0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,
+- 0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,
+- 0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x08,
+- 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,
+- 0x91,0x08,0x10,0x04,0x01,0x07,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x3c,0xd4,0x28,
+- 0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
+- 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,
+- 0x01,0x00,0x01,0x09,0x00,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd4,0x18,0x93,0x14,0xd2,0x0c,0x91,0x08,
+- 0x10,0x04,0x01,0x00,0x07,0x00,0x07,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,
+- 0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0d,0x00,0x07,0x00,0x00,0x00,0x00,0x00,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x11,0x00,0x13,0x00,0x13,0x00,0xe1,0x24,
+- 0x01,0xd0,0x86,0xcf,0x86,0xd5,0x44,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,
+- 0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,
+- 0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,
+- 0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x18,0xd2,
+- 0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x07,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,
+- 0x04,0x01,0x07,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x73,0xd4,0x45,0xd3,0x14,0x52,
+- 0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,
+- 0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xad,0x87,0xe0,0xad,0x96,0x00,
+- 0x00,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xe0,0xad,0x87,0xe0,0xac,0xbe,0x00,0x91,
+- 0x0f,0x10,0x0b,0x01,0xff,0xe0,0xad,0x87,0xe0,0xad,0x97,0x00,0x01,0x09,0x00,0x00,
+- 0xd3,0x0c,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x52,0x04,0x00,0x00,
+- 0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,0xac,0xa1,0xe0,0xac,0xbc,0x00,0x01,0xff,0xe0,
+- 0xac,0xa2,0xe0,0xac,0xbc,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd4,0x14,0x93,0x10,
+- 0xd2,0x08,0x11,0x04,0x01,0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,
+- 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x07,0x00,0x0c,0x00,0x0c,0x00,
+- 0x00,0x00,0xd0,0xb1,0xcf,0x86,0xd5,0x63,0xd4,0x28,0xd3,0x14,0xd2,0x08,0x11,0x04,
+- 0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x0c,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,
+- 0xd3,0x1f,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x0f,
+- 0x10,0x0b,0x01,0xff,0xe0,0xae,0x92,0xe0,0xaf,0x97,0x00,0x01,0x00,0x00,0x00,0xd2,
+- 0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x91,
+- 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x51,
+- 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0x00,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,
+- 0x04,0x00,0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
+- 0x04,0x08,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,
+- 0x00,0x01,0x00,0xcf,0x86,0xd5,0x61,0xd4,0x45,0xd3,0x14,0xd2,0x0c,0x51,0x04,0x01,
+- 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,
+- 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xaf,0x86,0xe0,0xae,
+- 0xbe,0x00,0x01,0xff,0xe0,0xaf,0x87,0xe0,0xae,0xbe,0x00,0x91,0x0f,0x10,0x0b,0x01,
+- 0xff,0xe0,0xaf,0x86,0xe0,0xaf,0x97,0x00,0x01,0x09,0x00,0x00,0x93,0x18,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x01,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x00,0x00,0x51,0x04,
+- 0x00,0x00,0x10,0x04,0x08,0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,
+- 0x01,0x00,0x10,0x04,0x01,0x00,0x07,0x00,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,0x00,
+- 0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xe3,0x1c,0x04,0xe2,0x1a,0x02,0xd1,0xf3,
+- 0xd0,0x76,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
+- 0x10,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,
+- 0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x93,0x10,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,
+- 0x01,0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x10,0x00,
+- 0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,
+- 0x00,0x00,0x0a,0x00,0x01,0x00,0xcf,0x86,0xd5,0x53,0xd4,0x2f,0xd3,0x10,0x52,0x04,
+- 0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0xd2,0x13,0x91,0x0f,
+- 0x10,0x0b,0x01,0xff,0xe0,0xb1,0x86,0xe0,0xb1,0x96,0x00,0x00,0x00,0x01,0x00,0x91,
+- 0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x00,0x00,0xd3,0x14,0x52,0x04,0x00,0x00,0xd1,
+- 0x08,0x10,0x04,0x00,0x00,0x01,0x54,0x10,0x04,0x01,0x5b,0x00,0x00,0x92,0x0c,0x51,
+- 0x04,0x0a,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0xd2,
+- 0x08,0x11,0x04,0x01,0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,
+- 0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x15,0x00,0x0a,
+- 0x00,0xd0,0x76,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,
+- 0x04,0x12,0x00,0x10,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,
+- 0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x93,
+- 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0x01,
+- 0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,
+- 0x04,0x07,0x07,0x07,0x00,0x01,0x00,0xcf,0x86,0xd5,0x82,0xd4,0x5e,0xd3,0x2a,0xd2,
+- 0x13,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xb2,0xbf,0xe0,0xb3,0x95,0x00,0x01,0x00,
+- 0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
+- 0xe0,0xb3,0x86,0xe0,0xb3,0x95,0x00,0xd2,0x28,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,
+- 0xb3,0x86,0xe0,0xb3,0x96,0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xb3,0x86,0xe0,
+- 0xb3,0x82,0x00,0x01,0xff,0xe0,0xb3,0x86,0xe0,0xb3,0x82,0xe0,0xb3,0x95,0x00,0x91,
+- 0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x00,0x00,0xd3,0x14,0x52,0x04,0x00,0x00,0xd1,
+- 0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x52,0x04,0x00,
+- 0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0xd2,
+- 0x08,0x11,0x04,0x01,0x00,0x09,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,
+- 0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0xe1,0x06,0x01,0xd0,0x6e,0xcf,0x86,0xd5,0x3c,0xd4,0x28,
+- 0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x13,0x00,0x10,0x00,0x01,0x00,0x91,0x08,
+- 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,
+- 0x01,0x00,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x01,0x00,0x0c,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,
+- 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x0c,0x00,0x13,0x09,0x91,0x08,0x10,0x04,
+- 0x13,0x09,0x0a,0x00,0x01,0x00,0xcf,0x86,0xd5,0x65,0xd4,0x45,0xd3,0x10,0x52,0x04,
+- 0x01,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x08,
+- 0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xb5,0x86,0xe0,0xb4,0xbe,
+- 0x00,0x01,0xff,0xe0,0xb5,0x87,0xe0,0xb4,0xbe,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,
+- 0xe0,0xb5,0x86,0xe0,0xb5,0x97,0x00,0x01,0x09,0x10,0x04,0x0c,0x00,0x12,0x00,0xd3,
+- 0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x01,0x00,0x52,
+- 0x04,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x11,0x00,0xd4,0x14,0x93,
+- 0x10,0xd2,0x08,0x11,0x04,0x01,0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,
+- 0x00,0xd3,0x0c,0x52,0x04,0x0a,0x00,0x11,0x04,0x0a,0x00,0x12,0x00,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x12,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xd0,0x5a,0xcf,0x86,0xd5,
+- 0x34,0xd4,0x18,0x93,0x14,0xd2,0x08,0x11,0x04,0x00,0x00,0x04,0x00,0x91,0x08,0x10,
+- 0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x04,
+- 0x00,0x04,0x00,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x04,0x00,0x10,
+- 0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x04,0x00,0x00,0x00,0xcf,0x86,0xd5,0x77,0xd4,0x28,0xd3,0x10,0x52,0x04,0x04,
+- 0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd2,0x0c,0x51,0x04,0x00,
+- 0x00,0x10,0x04,0x04,0x09,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x04,
+- 0x00,0xd3,0x14,0x52,0x04,0x04,0x00,0xd1,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x10,
+- 0x04,0x04,0x00,0x00,0x00,0xd2,0x13,0x51,0x04,0x04,0x00,0x10,0x0b,0x04,0xff,0xe0,
+- 0xb7,0x99,0xe0,0xb7,0x8a,0x00,0x04,0x00,0xd1,0x19,0x10,0x0b,0x04,0xff,0xe0,0xb7,
+- 0x99,0xe0,0xb7,0x8f,0x00,0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x8f,0xe0,0xb7,0x8a,
+- 0x00,0x10,0x0b,0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x9f,0x00,0x04,0x00,0xd4,0x10,
+- 0x93,0x0c,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x93,0x14,
+- 0xd2,0x08,0x11,0x04,0x00,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0xe2,0x31,0x01,0xd1,0x58,0xd0,0x3a,0xcf,0x86,0xd5,0x18,0x94,
+- 0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,
+- 0x00,0x01,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,
+- 0x04,0x01,0x67,0x10,0x04,0x01,0x09,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0xcf,0x86,0x95,0x18,0xd4,0x0c,0x53,0x04,0x01,0x00,0x12,0x04,0x01,
+- 0x6b,0x01,0x00,0x53,0x04,0x01,0x00,0x12,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd0,
+- 0x9e,0xcf,0x86,0xd5,0x54,0xd4,0x3c,0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0x10,0x04,0x15,0x00,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x15,
+- 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,0x15,
+- 0x00,0xd3,0x08,0x12,0x04,0x15,0x00,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x15,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x30,0xd3,0x1c,0xd2,0x0c,0x91,0x08,0x10,
+- 0x04,0x15,0x00,0x01,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,
+- 0x04,0x00,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x15,0x00,0x01,0x00,0x91,0x08,0x10,
+- 0x04,0x15,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
+- 0x76,0x10,0x04,0x15,0x09,0x01,0x00,0x11,0x04,0x01,0x00,0x00,0x00,0xcf,0x86,0x95,
+- 0x34,0xd4,0x20,0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,
+- 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x52,0x04,0x01,0x7a,0x11,0x04,0x01,0x00,0x00,
+- 0x00,0x53,0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x01,
+- 0x00,0x0d,0x00,0x00,0x00,0xe1,0x2b,0x01,0xd0,0x3e,0xcf,0x86,0xd5,0x14,0x54,0x04,
+- 0x02,0x00,0x53,0x04,0x02,0x00,0x92,0x08,0x11,0x04,0x02,0xdc,0x02,0x00,0x02,0x00,
+- 0x54,0x04,0x02,0x00,0xd3,0x14,0x52,0x04,0x02,0x00,0xd1,0x08,0x10,0x04,0x02,0x00,
+- 0x02,0xdc,0x10,0x04,0x02,0x00,0x02,0xdc,0x92,0x0c,0x91,0x08,0x10,0x04,0x02,0x00,
+- 0x02,0xd8,0x02,0x00,0x02,0x00,0xcf,0x86,0xd5,0x73,0xd4,0x36,0xd3,0x17,0x92,0x13,
+- 0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x82,0xe0,0xbe,0xb7,
+- 0x00,0x02,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x02,0x00,0x02,0x00,0x91,
+- 0x0f,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x8c,0xe0,0xbe,0xb7,0x00,0x02,0x00,
+- 0xd3,0x26,0xd2,0x13,0x51,0x04,0x02,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbd,0x91,0xe0,
+- 0xbe,0xb7,0x00,0x02,0x00,0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,
+- 0xbd,0x96,0xe0,0xbe,0xb7,0x00,0x52,0x04,0x02,0x00,0x91,0x0f,0x10,0x0b,0x02,0xff,
+- 0xe0,0xbd,0x9b,0xe0,0xbe,0xb7,0x00,0x02,0x00,0x02,0x00,0xd4,0x27,0x53,0x04,0x02,
+- 0x00,0xd2,0x17,0xd1,0x0f,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x80,0xe0,0xbe,
+- 0xb5,0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,
+- 0x00,0x00,0xd3,0x35,0xd2,0x17,0xd1,0x08,0x10,0x04,0x00,0x00,0x02,0x81,0x10,0x04,
+- 0x02,0x82,0x02,0xff,0xe0,0xbd,0xb1,0xe0,0xbd,0xb2,0x00,0xd1,0x0f,0x10,0x04,0x02,
+- 0x84,0x02,0xff,0xe0,0xbd,0xb1,0xe0,0xbd,0xb4,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbe,
+- 0xb2,0xe0,0xbe,0x80,0x00,0x02,0x00,0xd2,0x13,0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,
+- 0xbe,0xb3,0xe0,0xbe,0x80,0x00,0x02,0x00,0x02,0x82,0x11,0x04,0x02,0x82,0x02,0x00,
+- 0xd0,0xd3,0xcf,0x86,0xd5,0x65,0xd4,0x27,0xd3,0x1f,0xd2,0x13,0x91,0x0f,0x10,0x04,
+- 0x02,0x82,0x02,0xff,0xe0,0xbd,0xb1,0xe0,0xbe,0x80,0x00,0x02,0xe6,0x91,0x08,0x10,
+- 0x04,0x02,0x09,0x02,0x00,0x02,0xe6,0x12,0x04,0x02,0x00,0x0c,0x00,0xd3,0x1f,0xd2,
+- 0x13,0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0x92,0xe0,0xbe,
+- 0xb7,0x00,0x51,0x04,0x02,0x00,0x10,0x04,0x04,0x00,0x02,0x00,0xd2,0x0c,0x91,0x08,
+- 0x10,0x04,0x00,0x00,0x02,0x00,0x02,0x00,0x91,0x0f,0x10,0x04,0x02,0x00,0x02,0xff,
+- 0xe0,0xbe,0x9c,0xe0,0xbe,0xb7,0x00,0x02,0x00,0xd4,0x3d,0xd3,0x26,0xd2,0x13,0x51,
+- 0x04,0x02,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xa1,0xe0,0xbe,0xb7,0x00,0x02,0x00,
+- 0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0xa6,0xe0,0xbe,0xb7,
+- 0x00,0x52,0x04,0x02,0x00,0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xab,0xe0,0xbe,
+- 0xb7,0x00,0x02,0x00,0x04,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,
+- 0x02,0x00,0x02,0x00,0x02,0x00,0xd2,0x13,0x91,0x0f,0x10,0x04,0x04,0x00,0x02,0xff,
+- 0xe0,0xbe,0x90,0xe0,0xbe,0xb5,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,
+- 0x00,0x04,0x00,0xcf,0x86,0x95,0x4c,0xd4,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0xdc,0x04,0x00,0x52,0x04,0x04,0x00,0xd1,0x08,0x10,
+- 0x04,0x04,0x00,0x00,0x00,0x10,0x04,0x0a,0x00,0x04,0x00,0xd3,0x14,0xd2,0x08,0x11,
+- 0x04,0x08,0x00,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0x92,
+- 0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0xcf,0x86,0xe5,0xcc,0x04,0xe4,0x63,0x03,0xe3,0x65,0x01,0xe2,0x04,
+- 0x01,0xd1,0x7f,0xd0,0x65,0xcf,0x86,0x55,0x04,0x04,0x00,0xd4,0x33,0xd3,0x1f,0xd2,
+- 0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x0a,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,
+- 0x0b,0x04,0xff,0xe1,0x80,0xa5,0xe1,0x80,0xae,0x00,0x04,0x00,0x92,0x10,0xd1,0x08,
+- 0x10,0x04,0x0a,0x00,0x04,0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x04,0x00,0xd3,0x18,
+- 0xd2,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x51,0x04,0x0a,0x00,
+- 0x10,0x04,0x04,0x00,0x04,0x07,0x92,0x10,0xd1,0x08,0x10,0x04,0x04,0x00,0x04,0x09,
+- 0x10,0x04,0x0a,0x09,0x0a,0x00,0x0a,0x00,0xcf,0x86,0x95,0x14,0x54,0x04,0x04,0x00,
+- 0x53,0x04,0x04,0x00,0x92,0x08,0x11,0x04,0x04,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,
+- 0xd0,0x2e,0xcf,0x86,0x95,0x28,0xd4,0x14,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,
+- 0x91,0x08,0x10,0x04,0x0a,0x00,0x0a,0xdc,0x0a,0x00,0x53,0x04,0x0a,0x00,0xd2,0x08,
+- 0x11,0x04,0x0a,0x00,0x0b,0x00,0x11,0x04,0x0b,0x00,0x0a,0x00,0x01,0x00,0xcf,0x86,
+- 0xd5,0x24,0x94,0x20,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,
+- 0x00,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0d,0x00,
+- 0x00,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,
+- 0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x06,0x00,
+- 0x08,0x00,0x10,0x04,0x08,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0d,0x00,
+- 0x0d,0x00,0xd1,0x28,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x95,0x1c,0x54,0x04,
+- 0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x0b,0x00,0x51,0x04,
+- 0x0b,0x00,0x10,0x04,0x0b,0x00,0x01,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,
+- 0x01,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x0b,0x00,0x0b,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,
+- 0x01,0x00,0x53,0x04,0x01,0x00,0x92,0x08,0x11,0x04,0x01,0x00,0x0b,0x00,0x0b,0x00,
+- 0xe2,0x21,0x01,0xd1,0x6c,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x52,
+- 0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,0x00,0x04,
+- 0x00,0x04,0x00,0xcf,0x86,0x95,0x48,0xd4,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,
+- 0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x04,
+- 0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,
+- 0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0xd0,
+- 0x62,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,
+- 0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,
+- 0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0xd4,0x14,0x53,0x04,0x04,
+- 0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd3,
+- 0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,
+- 0x00,0x00,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,
+- 0x00,0xcf,0x86,0xd5,0x38,0xd4,0x24,0xd3,0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,
+- 0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x52,0x04,0x04,0x00,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x93,0x10,0x52,0x04,0x04,0x00,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x94,0x14,0x53,0x04,0x04,
+- 0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,
+- 0x00,0xd1,0x9c,0xd0,0x3e,0xcf,0x86,0x95,0x38,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,
+- 0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd3,0x14,0xd2,
+- 0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,
+- 0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,
+- 0x00,0xcf,0x86,0xd5,0x34,0xd4,0x14,0x93,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,
+- 0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,0x00,0x53,0x04,0x04,0x00,0xd2,0x0c,0x51,
+- 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x0c,
+- 0xe6,0x10,0x04,0x0c,0xe6,0x08,0xe6,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x08,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x53,0x04,0x04,0x00,0x52,
+- 0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0xd0,0x1a,0xcf,
+- 0x86,0x95,0x14,0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,0x08,
+- 0x00,0x00,0x00,0x00,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,
+- 0x00,0xd3,0x10,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x11,0x00,0x00,
+- 0x00,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x00,0x00,0xd3,0x30,0xd2,0x2a,0xd1,
+- 0x24,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x0b,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,
+- 0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xd2,0x6c,0xd1,0x24,0xd0,
+- 0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x93,
+- 0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x0b,0x00,0x0b,
+- 0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x52,
+- 0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0xcf,
+- 0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x04,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x80,0xd0,0x46,0xcf,0x86,0xd5,0x28,0xd4,
+- 0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x00,
+- 0x00,0x06,0x00,0x93,0x10,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,0x09,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x54,0x04,0x06,0x00,0x93,0x14,0x52,0x04,0x06,0x00,0xd1,
+- 0x08,0x10,0x04,0x06,0x09,0x06,0x00,0x10,0x04,0x06,0x00,0x00,0x00,0x00,0x00,0xcf,
+- 0x86,0xd5,0x10,0x54,0x04,0x06,0x00,0x93,0x08,0x12,0x04,0x06,0x00,0x00,0x00,0x00,
+- 0x00,0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,
+- 0x00,0x00,0x00,0x06,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,0x00,
+- 0x00,0x06,0x00,0x00,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,0xd5,
+- 0x24,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x04,
+- 0x09,0x04,0x00,0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x07,
+- 0xe6,0x00,0x00,0xd4,0x10,0x53,0x04,0x04,0x00,0x92,0x08,0x11,0x04,0x04,0x00,0x00,
+- 0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x92,0x08,0x11,0x04,0x07,0x00,0x00,0x00,0x00,
+- 0x00,0xe4,0xac,0x03,0xe3,0x4d,0x01,0xd2,0x84,0xd1,0x48,0xd0,0x2a,0xcf,0x86,0x95,
+- 0x24,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,
+- 0x04,0x04,0x00,0x00,0x00,0x53,0x04,0x04,0x00,0x92,0x08,0x11,0x04,0x04,0x00,0x00,
+- 0x00,0x00,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x53,
+- 0x04,0x04,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x04,0x00,0x94,0x18,0x53,0x04,0x04,0x00,0x92,
+- 0x10,0xd1,0x08,0x10,0x04,0x04,0x00,0x04,0xe4,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,
+- 0x00,0x0b,0x00,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0x93,0x0c,0x52,
+- 0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd1,0x80,0xd0,0x42,0xcf,
+- 0x86,0xd5,0x1c,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0xd1,
+- 0x08,0x10,0x04,0x07,0x00,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0xd4,0x0c,0x53,
+- 0x04,0x07,0x00,0x12,0x04,0x07,0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x92,0x10,0xd1,
+- 0x08,0x10,0x04,0x07,0x00,0x07,0xde,0x10,0x04,0x07,0xe6,0x07,0xdc,0x00,0x00,0xcf,
+- 0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x00,
+- 0x00,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0xd4,0x10,0x53,0x04,0x07,0x00,0x52,
+- 0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x93,0x10,0x52,0x04,0x07,0x00,0x91,
+- 0x08,0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd0,0x1a,0xcf,0x86,0x55,
+- 0x04,0x08,0x00,0x94,0x10,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,0x08,0x00,0x0b,
+- 0x00,0x00,0x00,0x08,0x00,0xcf,0x86,0x95,0x28,0xd4,0x10,0x53,0x04,0x08,0x00,0x92,
+- 0x08,0x11,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x08,0x00,0xd2,0x0c,0x51,
+- 0x04,0x08,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x08,0x00,0x07,
+- 0x00,0xd2,0xe4,0xd1,0x80,0xd0,0x2e,0xcf,0x86,0x95,0x28,0x54,0x04,0x08,0x00,0xd3,
+- 0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x08,0xe6,0xd2,
+- 0x0c,0x91,0x08,0x10,0x04,0x08,0xdc,0x08,0x00,0x08,0x00,0x11,0x04,0x00,0x00,0x08,
+- 0x00,0x0b,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,
+- 0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xd4,0x14,0x93,
+- 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x09,0x0b,0x00,0x0b,0x00,0x0b,0x00,0x0b,
+- 0x00,0xd3,0x10,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,0x0b,
+- 0xe6,0x52,0x04,0x0b,0xe6,0xd1,0x08,0x10,0x04,0x0b,0xe6,0x00,0x00,0x10,0x04,0x00,
+- 0x00,0x0b,0xdc,0xd0,0x5e,0xcf,0x86,0xd5,0x20,0xd4,0x10,0x53,0x04,0x0b,0x00,0x92,
+- 0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x0b,0x00,0x92,0x08,0x11,
+- 0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd4,0x10,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,
+- 0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x10,0xe6,0x91,0x08,0x10,
+- 0x04,0x10,0xe6,0x10,0xdc,0x10,0xdc,0xd2,0x0c,0x51,0x04,0x10,0xdc,0x10,0x04,0x10,
+- 0xdc,0x10,0xe6,0xd1,0x08,0x10,0x04,0x10,0xe6,0x10,0xdc,0x10,0x04,0x10,0x00,0x00,
+- 0x00,0xcf,0x06,0x00,0x00,0xe1,0x1e,0x01,0xd0,0xaa,0xcf,0x86,0xd5,0x6e,0xd4,0x53,
+- 0xd3,0x17,0x52,0x04,0x09,0x00,0x51,0x04,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,
+- 0x85,0xe1,0xac,0xb5,0x00,0x09,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x09,0xff,0xe1,
+- 0xac,0x87,0xe1,0xac,0xb5,0x00,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x89,0xe1,
+- 0xac,0xb5,0x00,0x09,0x00,0xd1,0x0f,0x10,0x0b,0x09,0xff,0xe1,0xac,0x8b,0xe1,0xac,
+- 0xb5,0x00,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x8d,0xe1,0xac,0xb5,0x00,0x09,
+- 0x00,0x93,0x17,0x92,0x13,0x51,0x04,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x91,
+- 0xe1,0xac,0xb5,0x00,0x09,0x00,0x09,0x00,0x09,0x00,0x54,0x04,0x09,0x00,0xd3,0x10,
+- 0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x07,0x09,0x00,0x09,0x00,0xd2,0x13,
+- 0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xac,0xba,0xe1,0xac,0xb5,
+- 0x00,0x91,0x0f,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xac,0xbc,0xe1,0xac,0xb5,0x00,
+- 0x09,0x00,0xcf,0x86,0xd5,0x3d,0x94,0x39,0xd3,0x31,0xd2,0x25,0xd1,0x16,0x10,0x0b,
+- 0x09,0xff,0xe1,0xac,0xbe,0xe1,0xac,0xb5,0x00,0x09,0xff,0xe1,0xac,0xbf,0xe1,0xac,
+- 0xb5,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xad,0x82,0xe1,0xac,0xb5,0x00,0x91,
+- 0x08,0x10,0x04,0x09,0x09,0x09,0x00,0x09,0x00,0x12,0x04,0x09,0x00,0x00,0x00,0x09,
+- 0x00,0xd4,0x1c,0x53,0x04,0x09,0x00,0xd2,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,
+- 0x00,0x09,0xe6,0x91,0x08,0x10,0x04,0x09,0xdc,0x09,0xe6,0x09,0xe6,0xd3,0x08,0x12,
+- 0x04,0x09,0xe6,0x09,0x00,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x00,
+- 0x00,0x00,0x00,0xd0,0x2e,0xcf,0x86,0x55,0x04,0x0a,0x00,0xd4,0x18,0x53,0x04,0x0a,
+- 0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x09,0x0d,0x09,0x11,0x04,0x0d,
+- 0x00,0x0a,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,0x11,0x04,0x0a,0x00,0x0d,0x00,0x0d,
+- 0x00,0xcf,0x86,0x55,0x04,0x0c,0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x0c,0x00,0x51,
+- 0x04,0x0c,0x00,0x10,0x04,0x0c,0x07,0x0c,0x00,0x0c,0x00,0xd3,0x0c,0x92,0x08,0x11,
+- 0x04,0x0c,0x00,0x0c,0x09,0x00,0x00,0x12,0x04,0x00,0x00,0x0c,0x00,0xe3,0xb2,0x01,
+- 0xe2,0x09,0x01,0xd1,0x4c,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x0a,0x00,0x54,0x04,0x0a,
+- 0x00,0xd3,0x10,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,
+- 0x07,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,0x00,0x0a,0x00,0xcf,
+- 0x86,0x95,0x1c,0x94,0x18,0x53,0x04,0x0a,0x00,0xd2,0x08,0x11,0x04,0x0a,0x00,0x00,
+- 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xd0,
+- 0x3a,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x12,0x00,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x14,0x00,0x54,0x04,0x14,0x00,0x53,
+- 0x04,0x14,0x00,0xd2,0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x14,0x00,0x14,0x00,0xcf,0x86,0xd5,0x2c,0xd4,0x08,0x13,
+- 0x04,0x0d,0x00,0x00,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x0b,0xe6,0x10,0x04,0x0b,
+- 0xe6,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x01,0x0b,0xdc,0x0b,0xdc,0x92,0x08,0x11,
+- 0x04,0x0b,0xdc,0x0b,0xe6,0x0b,0xdc,0xd4,0x28,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x01,0x0b,0x01,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,
+- 0x01,0x0b,0x00,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xdc,0x0b,0x00,0xd3,
+- 0x1c,0xd2,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0d,0x00,0xd1,0x08,0x10,
+- 0x04,0x0d,0xe6,0x0d,0x00,0x10,0x04,0x0d,0x00,0x13,0x00,0x92,0x0c,0x51,0x04,0x10,
+- 0xe6,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0xd1,0x1c,0xd0,0x06,0xcf,0x06,0x07,
+- 0x00,0xcf,0x86,0x55,0x04,0x07,0x00,0x94,0x0c,0x53,0x04,0x07,0x00,0x12,0x04,0x07,
+- 0x00,0x08,0x00,0x08,0x00,0xd0,0x06,0xcf,0x06,0x08,0x00,0xcf,0x86,0xd5,0x40,0xd4,
+- 0x2c,0xd3,0x10,0x92,0x0c,0x51,0x04,0x08,0xe6,0x10,0x04,0x08,0xdc,0x08,0xe6,0x09,
+- 0xe6,0xd2,0x0c,0x51,0x04,0x09,0xe6,0x10,0x04,0x09,0xdc,0x0a,0xe6,0xd1,0x08,0x10,
+- 0x04,0x0a,0xe6,0x0a,0xea,0x10,0x04,0x0a,0xd6,0x0a,0xdc,0x93,0x10,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x0a,0xca,0x0a,0xe6,0x0a,0xe6,0x0a,0xe6,0x0a,0xe6,0xd4,0x14,0x93,
+- 0x10,0x52,0x04,0x0a,0xe6,0x51,0x04,0x0a,0xe6,0x10,0x04,0x0a,0xe6,0x10,0xe6,0x10,
+- 0xe6,0xd3,0x10,0x52,0x04,0x10,0xe6,0x51,0x04,0x10,0xe6,0x10,0x04,0x13,0xe8,0x13,
+- 0xe4,0xd2,0x10,0xd1,0x08,0x10,0x04,0x13,0xe4,0x13,0xdc,0x10,0x04,0x00,0x00,0x12,
+- 0xe6,0xd1,0x08,0x10,0x04,0x0c,0xe9,0x0b,0xdc,0x10,0x04,0x09,0xe6,0x09,0xdc,0xe2,
+- 0x80,0x08,0xe1,0x48,0x04,0xe0,0x1c,0x02,0xcf,0x86,0xe5,0x11,0x01,0xd4,0x84,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa5,0x00,0x01,0xff,0x61,
+- 0xcc,0xa5,0x00,0x10,0x08,0x01,0xff,0x42,0xcc,0x87,0x00,0x01,0xff,0x62,0xcc,0x87,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x42,0xcc,0xa3,0x00,0x01,0xff,0x62,0xcc,0xa3,
+- 0x00,0x10,0x08,0x01,0xff,0x42,0xcc,0xb1,0x00,0x01,0xff,0x62,0xcc,0xb1,0x00,0xd2,
+- 0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x43,0xcc,0xa7,0xcc,0x81,0x00,0x01,0xff,0x63,
+- 0xcc,0xa7,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0x87,0x00,0x01,0xff,0x64,
+- 0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x44,0xcc,0xa3,0x00,0x01,0xff,0x64,
+- 0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0xb1,0x00,0x01,0xff,0x64,0xcc,0xb1,
+- 0x00,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x44,0xcc,0xa7,0x00,0x01,
+- 0xff,0x64,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0xad,0x00,0x01,0xff,0x64,
+- 0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,0xcc,0x84,0xcc,0x80,0x00,0x01,
+- 0xff,0x65,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x84,0xcc,0x81,
+- 0x00,0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x45,0xcc,0xad,0x00,0x01,0xff,0x65,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x45,
+- 0xcc,0xb0,0x00,0x01,0xff,0x65,0xcc,0xb0,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,
+- 0xcc,0xa7,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,0x00,0x10,0x08,0x01,
+- 0xff,0x46,0xcc,0x87,0x00,0x01,0xff,0x66,0xcc,0x87,0x00,0xd4,0x84,0xd3,0x40,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x84,0x00,0x01,0xff,0x67,0xcc,0x84,
+- 0x00,0x10,0x08,0x01,0xff,0x48,0xcc,0x87,0x00,0x01,0xff,0x68,0xcc,0x87,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0xa3,0x00,0x01,0xff,0x68,0xcc,0xa3,0x00,0x10,
+- 0x08,0x01,0xff,0x48,0xcc,0x88,0x00,0x01,0xff,0x68,0xcc,0x88,0x00,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0xa7,0x00,0x01,0xff,0x68,0xcc,0xa7,0x00,0x10,
+- 0x08,0x01,0xff,0x48,0xcc,0xae,0x00,0x01,0xff,0x68,0xcc,0xae,0x00,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0x49,0xcc,0xb0,0x00,0x01,0xff,0x69,0xcc,0xb0,0x00,0x10,0x0a,0x01,
+- 0xff,0x49,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0x69,0xcc,0x88,0xcc,0x81,0x00,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0x81,0x00,0x01,0xff,0x6b,
+- 0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x4b,0xcc,0xa3,0x00,0x01,0xff,0x6b,0xcc,0xa3,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0xb1,0x00,0x01,0xff,0x6b,0xcc,0xb1,
+- 0x00,0x10,0x08,0x01,0xff,0x4c,0xcc,0xa3,0x00,0x01,0xff,0x6c,0xcc,0xa3,0x00,0xd2,
+- 0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4c,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,0x6c,
+- 0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x4c,0xcc,0xb1,0x00,0x01,0xff,0x6c,
+- 0xcc,0xb1,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4c,0xcc,0xad,0x00,0x01,0xff,0x6c,
+- 0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x4d,0xcc,0x81,0x00,0x01,0xff,0x6d,0xcc,0x81,
+- 0x00,0xcf,0x86,0xe5,0x15,0x01,0xd4,0x88,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x4d,0xcc,0x87,0x00,0x01,0xff,0x6d,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,
+- 0x4d,0xcc,0xa3,0x00,0x01,0xff,0x6d,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x4e,0xcc,0x87,0x00,0x01,0xff,0x6e,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x4e,0xcc,
+- 0xa3,0x00,0x01,0xff,0x6e,0xcc,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x4e,0xcc,0xb1,0x00,0x01,0xff,0x6e,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x4e,0xcc,
+- 0xad,0x00,0x01,0xff,0x6e,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,
+- 0x83,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,
+- 0x4f,0xcc,0x83,0xcc,0x88,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x88,0x00,0xd3,0x48,
+- 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,
+- 0x6f,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x84,0xcc,0x81,0x00,
+- 0x01,0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x50,0xcc,
+- 0x81,0x00,0x01,0xff,0x70,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x50,0xcc,0x87,0x00,
+- 0x01,0xff,0x70,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,
+- 0x87,0x00,0x01,0xff,0x72,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0xa3,0x00,
+- 0x01,0xff,0x72,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x52,0xcc,0xa3,0xcc,
+- 0x84,0x00,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,
+- 0xb1,0x00,0x01,0xff,0x72,0xcc,0xb1,0x00,0xd4,0x8c,0xd3,0x48,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x01,0xff,0x53,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x87,0x00,0x10,0x08,
+- 0x01,0xff,0x53,0xcc,0xa3,0x00,0x01,0xff,0x73,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,
+- 0x01,0xff,0x53,0xcc,0x81,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x81,0xcc,0x87,0x00,
+- 0x10,0x0a,0x01,0xff,0x53,0xcc,0x8c,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x8c,0xcc,
+- 0x87,0x00,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x53,0xcc,0xa3,0xcc,0x87,0x00,
+- 0x01,0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0x87,0x00,
+- 0x01,0xff,0x74,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,0xa3,0x00,
+- 0x01,0xff,0x74,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0xb1,0x00,0x01,0xff,
+- 0x74,0xcc,0xb1,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,
+- 0xad,0x00,0x01,0xff,0x74,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xa4,0x00,
+- 0x01,0xff,0x75,0xcc,0xa4,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0xb0,0x00,
+- 0x01,0xff,0x75,0xcc,0xb0,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xad,0x00,0x01,0xff,
+- 0x75,0xcc,0xad,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,0xcc,0x83,0xcc,
+- 0x81,0x00,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x55,0xcc,
+- 0x84,0xcc,0x88,0x00,0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0x56,0xcc,0x83,0x00,0x01,0xff,0x76,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,
+- 0x56,0xcc,0xa3,0x00,0x01,0xff,0x76,0xcc,0xa3,0x00,0xe0,0x10,0x02,0xcf,0x86,0xd5,
+- 0xe1,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0x80,
+- 0x00,0x01,0xff,0x77,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x57,0xcc,0x81,0x00,0x01,
+- 0xff,0x77,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0x88,0x00,0x01,
+- 0xff,0x77,0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x57,0xcc,0x87,0x00,0x01,0xff,0x77,
+- 0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0xa3,0x00,0x01,
+- 0xff,0x77,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x58,0xcc,0x87,0x00,0x01,0xff,0x78,
+- 0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x58,0xcc,0x88,0x00,0x01,0xff,0x78,
+- 0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x59,0xcc,0x87,0x00,0x01,0xff,0x79,0xcc,0x87,
+- 0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x5a,0xcc,0x82,0x00,0x01,
+- 0xff,0x7a,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x5a,0xcc,0xa3,0x00,0x01,0xff,0x7a,
+- 0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x5a,0xcc,0xb1,0x00,0x01,0xff,0x7a,
+- 0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x68,0xcc,0xb1,0x00,0x01,0xff,0x74,0xcc,0x88,
+- 0x00,0x92,0x1d,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x8a,0x00,0x01,0xff,0x79,
+- 0xcc,0x8a,0x00,0x10,0x04,0x01,0x00,0x02,0xff,0xc5,0xbf,0xcc,0x87,0x00,0x0a,0x00,
+- 0xd4,0x98,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa3,0x00,
+- 0x01,0xff,0x61,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x89,0x00,0x01,0xff,
+- 0x61,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x81,0x00,
+- 0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,
++ 0xc6,0xe5,0xf9,0x14,0xe4,0x6f,0x0d,0xe3,0x39,0x08,0xe2,0x22,0x01,0xc1,0xd0,0x24,
++ 0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x07,0x63,0xd8,0x43,0x01,0x00,0x93,0x13,0x52,
++ 0x04,0x01,0x00,0x91,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xce,0xbc,0x00,0x01,0x00,
++ 0x01,0x00,0xcf,0x86,0xe5,0xb3,0x44,0xd4,0x7f,0xd3,0x3f,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x61,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x81,0x00,0x10,0x08,0x01,
++ 0xff,0x61,0xcc,0x82,0x00,0x01,0xff,0x61,0xcc,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x61,0xcc,0x88,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x10,0x07,0x01,0xff,0xc3,
++ 0xa6,0x00,0x01,0xff,0x63,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x65,0xcc,0x80,0x00,0x01,0xff,0x65,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,
++ 0x82,0x00,0x01,0xff,0x65,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,
++ 0x80,0x00,0x01,0xff,0x69,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0x82,0x00,
++ 0x01,0xff,0x69,0xcc,0x88,0x00,0xd3,0x3b,0xd2,0x1f,0xd1,0x0f,0x10,0x07,0x01,0xff,
++ 0xc3,0xb0,0x00,0x01,0xff,0x6e,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x80,
++ 0x00,0x01,0xff,0x6f,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x82,
++ 0x00,0x01,0xff,0x6f,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x88,0x00,0x01,
++ 0x00,0xd2,0x1f,0xd1,0x0f,0x10,0x07,0x01,0xff,0xc3,0xb8,0x00,0x01,0xff,0x75,0xcc,
++ 0x80,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x82,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x88,0x00,0x01,0xff,0x79,0xcc,0x81,0x00,
++ 0x10,0x07,0x01,0xff,0xc3,0xbe,0x00,0x01,0xff,0x73,0x73,0x00,0xe1,0xd4,0x03,0xe0,
++ 0xeb,0x01,0xcf,0x86,0xd5,0xfb,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x61,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,
++ 0x61,0xcc,0x86,0x00,0x01,0xff,0x61,0xcc,0x86,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x61,0xcc,0xa8,0x00,0x01,0xff,0x61,0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x63,0xcc,
++ 0x81,0x00,0x01,0xff,0x63,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x63,0xcc,0x82,0x00,0x01,0xff,0x63,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x63,0xcc,
++ 0x87,0x00,0x01,0xff,0x63,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x63,0xcc,
++ 0x8c,0x00,0x01,0xff,0x63,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0x8c,0x00,
++ 0x01,0xff,0x64,0xcc,0x8c,0x00,0xd3,0x3b,0xd2,0x1b,0xd1,0x0b,0x10,0x07,0x01,0xff,
++ 0xc4,0x91,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x84,0x00,0x01,0xff,0x65,
++ 0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x86,0x00,0x01,0xff,0x65,
++ 0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x87,0x00,0x01,0xff,0x65,0xcc,0x87,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0xa8,0x00,0x01,0xff,0x65,
++ 0xcc,0xa8,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x8c,0x00,0x01,0xff,0x65,0xcc,0x8c,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x67,0xcc,0x82,0x00,0x01,0xff,0x67,0xcc,0x82,
++ 0x00,0x10,0x08,0x01,0xff,0x67,0xcc,0x86,0x00,0x01,0xff,0x67,0xcc,0x86,0x00,0xd4,
++ 0x7b,0xd3,0x3b,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x67,0xcc,0x87,0x00,0x01,
++ 0xff,0x67,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x67,0xcc,0xa7,0x00,0x01,0xff,0x67,
++ 0xcc,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x68,0xcc,0x82,0x00,0x01,0xff,0x68,
++ 0xcc,0x82,0x00,0x10,0x07,0x01,0xff,0xc4,0xa7,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x69,0xcc,0x83,0x00,0x01,0xff,0x69,0xcc,0x83,0x00,0x10,0x08,
++ 0x01,0xff,0x69,0xcc,0x84,0x00,0x01,0xff,0x69,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x69,0xcc,0x86,0x00,0x01,0xff,0x69,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,
++ 0x69,0xcc,0xa8,0x00,0x01,0xff,0x69,0xcc,0xa8,0x00,0xd3,0x37,0xd2,0x17,0xd1,0x0c,
++ 0x10,0x08,0x01,0xff,0x69,0xcc,0x87,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc4,0xb3,
++ 0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6a,0xcc,0x82,0x00,0x01,0xff,0x6a,
++ 0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x6b,0xcc,0xa7,0x00,0x01,0xff,0x6b,0xcc,0xa7,
++ 0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x6c,0xcc,0x81,0x00,0x10,
++ 0x08,0x01,0xff,0x6c,0xcc,0x81,0x00,0x01,0xff,0x6c,0xcc,0xa7,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x6c,0xcc,0xa7,0x00,0x01,0xff,0x6c,0xcc,0x8c,0x00,0x10,0x08,0x01,
++ 0xff,0x6c,0xcc,0x8c,0x00,0x01,0xff,0xc5,0x80,0x00,0xcf,0x86,0xd5,0xed,0xd4,0x72,
++ 0xd3,0x37,0xd2,0x17,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc5,0x82,0x00,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0x6e,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,
++ 0xcc,0x81,0x00,0x01,0xff,0x6e,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa7,
++ 0x00,0x01,0xff,0x6e,0xcc,0x8c,0x00,0xd2,0x1b,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,
++ 0xcc,0x8c,0x00,0x01,0xff,0xca,0xbc,0x6e,0x00,0x10,0x07,0x01,0xff,0xc5,0x8b,0x00,
++ 0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,
++ 0x84,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x86,0x00,0x01,0xff,0x6f,0xcc,0x86,0x00,
++ 0xd3,0x3b,0xd2,0x1b,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8b,0x00,0x01,0xff,
++ 0x6f,0xcc,0x8b,0x00,0x10,0x07,0x01,0xff,0xc5,0x93,0x00,0x01,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x72,0xcc,0x81,0x00,0x01,0xff,0x72,0xcc,0x81,0x00,0x10,0x08,0x01,
++ 0xff,0x72,0xcc,0xa7,0x00,0x01,0xff,0x72,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x72,0xcc,0x8c,0x00,0x01,0xff,0x72,0xcc,0x8c,0x00,0x10,0x08,0x01,
++ 0xff,0x73,0xcc,0x81,0x00,0x01,0xff,0x73,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x73,0xcc,0x82,0x00,0x01,0xff,0x73,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x73,
++ 0xcc,0xa7,0x00,0x01,0xff,0x73,0xcc,0xa7,0x00,0xd4,0x7b,0xd3,0x3b,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x73,0xcc,0x8c,0x00,0x01,0xff,0x73,0xcc,0x8c,0x00,0x10,
++ 0x08,0x01,0xff,0x74,0xcc,0xa7,0x00,0x01,0xff,0x74,0xcc,0xa7,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x74,0xcc,0x8c,0x00,0x01,0xff,0x74,0xcc,0x8c,0x00,0x10,0x07,0x01,
++ 0xff,0xc5,0xa7,0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,
++ 0x83,0x00,0x01,0xff,0x75,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x84,0x00,
++ 0x01,0xff,0x75,0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x86,0x00,
++ 0x01,0xff,0x75,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x8a,0x00,0x01,0xff,
++ 0x75,0xcc,0x8a,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,
++ 0x8b,0x00,0x01,0xff,0x75,0xcc,0x8b,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xa8,0x00,
++ 0x01,0xff,0x75,0xcc,0xa8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x82,0x00,
++ 0x01,0xff,0x77,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,0x82,0x00,0x01,0xff,
++ 0x79,0xcc,0x82,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x79,0xcc,0x88,0x00,
++ 0x01,0xff,0x7a,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x81,0x00,0x01,0xff,
++ 0x7a,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x87,0x00,0x01,0xff,
++ 0x7a,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x8c,0x00,0x01,0xff,0x73,0x00,
++ 0xe0,0x65,0x01,0xcf,0x86,0xd5,0xb4,0xd4,0x5a,0xd3,0x2f,0xd2,0x16,0xd1,0x0b,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0xc9,0x93,0x00,0x10,0x07,0x01,0xff,0xc6,0x83,0x00,0x01,
++ 0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc6,0x85,0x00,0x01,0x00,0x10,0x07,0x01,0xff,
++ 0xc9,0x94,0x00,0x01,0xff,0xc6,0x88,0x00,0xd2,0x19,0xd1,0x0b,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0xc9,0x96,0x00,0x10,0x07,0x01,0xff,0xc9,0x97,0x00,0x01,0xff,0xc6,0x8c,
++ 0x00,0x51,0x04,0x01,0x00,0x10,0x07,0x01,0xff,0xc7,0x9d,0x00,0x01,0xff,0xc9,0x99,
++ 0x00,0xd3,0x32,0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,0xff,0xc9,0x9b,0x00,0x01,0xff,
++ 0xc6,0x92,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xc9,0xa0,0x00,0xd1,0x0b,0x10,0x07,
++ 0x01,0xff,0xc9,0xa3,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xc9,0xa9,0x00,0x01,0xff,
++ 0xc9,0xa8,0x00,0xd2,0x0f,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0x99,0x00,0x01,0x00,
++ 0x01,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xc9,0xaf,0x00,0x01,0xff,0xc9,0xb2,0x00,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xc9,0xb5,0x00,0xd4,0x5d,0xd3,0x34,0xd2,0x1b,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x9b,0x00,0x01,0xff,0x6f,0xcc,0x9b,0x00,0x10,
++ 0x07,0x01,0xff,0xc6,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc6,0xa5,
++ 0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xca,0x80,0x00,0x01,0xff,0xc6,0xa8,0x00,0xd2,
++ 0x0f,0x91,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xca,0x83,0x00,0x01,0x00,0xd1,0x0b,
++ 0x10,0x07,0x01,0xff,0xc6,0xad,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xca,0x88,0x00,
++ 0x01,0xff,0x75,0xcc,0x9b,0x00,0xd3,0x33,0xd2,0x1d,0xd1,0x0f,0x10,0x08,0x01,0xff,
++ 0x75,0xcc,0x9b,0x00,0x01,0xff,0xca,0x8a,0x00,0x10,0x07,0x01,0xff,0xca,0x8b,0x00,
++ 0x01,0xff,0xc6,0xb4,0x00,0xd1,0x0b,0x10,0x04,0x01,0x00,0x01,0xff,0xc6,0xb6,0x00,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xca,0x92,0x00,0xd2,0x0f,0x91,0x0b,0x10,0x07,0x01,
++ 0xff,0xc6,0xb9,0x00,0x01,0x00,0x01,0x00,0x91,0x0b,0x10,0x07,0x01,0xff,0xc6,0xbd,
++ 0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xd4,0xd4,0x44,0xd3,0x16,0x52,0x04,0x01,
++ 0x00,0x51,0x07,0x01,0xff,0xc7,0x86,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xc7,0x89,
++ 0x00,0xd2,0x12,0x91,0x0b,0x10,0x07,0x01,0xff,0xc7,0x89,0x00,0x01,0x00,0x01,0xff,
++ 0xc7,0x8c,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x61,0xcc,0x8c,0x00,0x10,
++ 0x08,0x01,0xff,0x61,0xcc,0x8c,0x00,0x01,0xff,0x69,0xcc,0x8c,0x00,0xd3,0x46,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x8c,0x00,0x01,0xff,0x6f,0xcc,0x8c,
++ 0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x8c,0x00,0xd1,
++ 0x12,0x10,0x08,0x01,0xff,0x75,0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,
++ 0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,0x00,0x01,0xff,0x75,0xcc,0x88,
++ 0xcc,0x81,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,
++ 0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x8c,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,
++ 0xcc,0x8c,0x00,0x01,0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0xd1,0x0e,0x10,0x0a,0x01,
++ 0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0x01,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x88,
++ 0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x88,0xcc,0x84,0x00,0xd4,0x87,0xd3,0x41,0xd2,
++ 0x26,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x87,0xcc,0x84,0x00,0x01,0xff,0x61,
++ 0xcc,0x87,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xc3,0xa6,0xcc,0x84,0x00,0x01,0xff,
++ 0xc3,0xa6,0xcc,0x84,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xc7,0xa5,0x00,0x01,0x00,
++ 0x10,0x08,0x01,0xff,0x67,0xcc,0x8c,0x00,0x01,0xff,0x67,0xcc,0x8c,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x6b,0xcc,0x8c,0x00,0x01,0xff,0x6b,0xcc,0x8c,0x00,
++ 0x10,0x08,0x01,0xff,0x6f,0xcc,0xa8,0x00,0x01,0xff,0x6f,0xcc,0xa8,0x00,0xd1,0x14,
++ 0x10,0x0a,0x01,0xff,0x6f,0xcc,0xa8,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0xa8,0xcc,
++ 0x84,0x00,0x10,0x09,0x01,0xff,0xca,0x92,0xcc,0x8c,0x00,0x01,0xff,0xca,0x92,0xcc,
++ 0x8c,0x00,0xd3,0x38,0xd2,0x1a,0xd1,0x0f,0x10,0x08,0x01,0xff,0x6a,0xcc,0x8c,0x00,
++ 0x01,0xff,0xc7,0xb3,0x00,0x10,0x07,0x01,0xff,0xc7,0xb3,0x00,0x01,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x67,0xcc,0x81,0x00,0x01,0xff,0x67,0xcc,0x81,0x00,0x10,0x07,
++ 0x04,0xff,0xc6,0x95,0x00,0x04,0xff,0xc6,0xbf,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,
++ 0x04,0xff,0x6e,0xcc,0x80,0x00,0x04,0xff,0x6e,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,
++ 0x61,0xcc,0x8a,0xcc,0x81,0x00,0x01,0xff,0x61,0xcc,0x8a,0xcc,0x81,0x00,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xc3,0xa6,0xcc,0x81,0x00,0x01,0xff,0xc3,0xa6,0xcc,0x81,0x00,
++ 0x10,0x09,0x01,0xff,0xc3,0xb8,0xcc,0x81,0x00,0x01,0xff,0xc3,0xb8,0xcc,0x81,0x00,
++ 0xe2,0x31,0x02,0xe1,0xc3,0x44,0xe0,0xc8,0x01,0xcf,0x86,0xd5,0xfb,0xd4,0x80,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0x8f,0x00,0x01,0xff,0x61,
++ 0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x91,0x00,0x01,0xff,0x61,0xcc,0x91,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x8f,0x00,0x01,0xff,0x65,0xcc,0x8f,
++ 0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x91,0x00,0x01,0xff,0x65,0xcc,0x91,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x8f,0x00,0x01,0xff,0x69,0xcc,0x8f,
++ 0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0x91,0x00,0x01,0xff,0x69,0xcc,0x91,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8f,0x00,0x01,0xff,0x6f,0xcc,0x8f,0x00,0x10,
++ 0x08,0x01,0xff,0x6f,0xcc,0x91,0x00,0x01,0xff,0x6f,0xcc,0x91,0x00,0xd3,0x40,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x72,0xcc,0x8f,0x00,0x01,0xff,0x72,0xcc,0x8f,
++ 0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0x91,0x00,0x01,0xff,0x72,0xcc,0x91,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x8f,0x00,0x01,0xff,0x75,0xcc,0x8f,0x00,0x10,
++ 0x08,0x01,0xff,0x75,0xcc,0x91,0x00,0x01,0xff,0x75,0xcc,0x91,0x00,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x04,0xff,0x73,0xcc,0xa6,0x00,0x04,0xff,0x73,0xcc,0xa6,0x00,0x10,
++ 0x08,0x04,0xff,0x74,0xcc,0xa6,0x00,0x04,0xff,0x74,0xcc,0xa6,0x00,0xd1,0x0b,0x10,
++ 0x07,0x04,0xff,0xc8,0x9d,0x00,0x04,0x00,0x10,0x08,0x04,0xff,0x68,0xcc,0x8c,0x00,
++ 0x04,0xff,0x68,0xcc,0x8c,0x00,0xd4,0x79,0xd3,0x31,0xd2,0x16,0xd1,0x0b,0x10,0x07,
++ 0x06,0xff,0xc6,0x9e,0x00,0x07,0x00,0x10,0x07,0x04,0xff,0xc8,0xa3,0x00,0x04,0x00,
++ 0xd1,0x0b,0x10,0x07,0x04,0xff,0xc8,0xa5,0x00,0x04,0x00,0x10,0x08,0x04,0xff,0x61,
++ 0xcc,0x87,0x00,0x04,0xff,0x61,0xcc,0x87,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,
++ 0xff,0x65,0xcc,0xa7,0x00,0x04,0xff,0x65,0xcc,0xa7,0x00,0x10,0x0a,0x04,0xff,0x6f,
++ 0xcc,0x88,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x88,0xcc,0x84,0x00,0xd1,0x14,0x10,
++ 0x0a,0x04,0xff,0x6f,0xcc,0x83,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x83,0xcc,0x84,
++ 0x00,0x10,0x08,0x04,0xff,0x6f,0xcc,0x87,0x00,0x04,0xff,0x6f,0xcc,0x87,0x00,0xd3,
++ 0x27,0xe2,0x21,0x43,0xd1,0x14,0x10,0x0a,0x04,0xff,0x6f,0xcc,0x87,0xcc,0x84,0x00,
++ 0x04,0xff,0x6f,0xcc,0x87,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x79,0xcc,0x84,0x00,
++ 0x04,0xff,0x79,0xcc,0x84,0x00,0xd2,0x13,0x51,0x04,0x08,0x00,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0xa5,0x00,0x08,0xff,0xc8,0xbc,0x00,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,
++ 0xff,0xc6,0x9a,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0xa6,0x00,0x08,0x00,0xcf,0x86,
++ 0x95,0x5f,0x94,0x5b,0xd3,0x2f,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x08,0x00,0x08,0xff,
++ 0xc9,0x82,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xc6,0x80,0x00,0xd1,0x0e,0x10,0x07,
++ 0x09,0xff,0xca,0x89,0x00,0x09,0xff,0xca,0x8c,0x00,0x10,0x07,0x09,0xff,0xc9,0x87,
++ 0x00,0x09,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x09,0xff,0xc9,0x89,0x00,0x09,0x00,
++ 0x10,0x07,0x09,0xff,0xc9,0x8b,0x00,0x09,0x00,0xd1,0x0b,0x10,0x07,0x09,0xff,0xc9,
++ 0x8d,0x00,0x09,0x00,0x10,0x07,0x09,0xff,0xc9,0x8f,0x00,0x09,0x00,0x01,0x00,0x01,
++ 0x00,0xd1,0x8b,0xd0,0x0c,0xcf,0x86,0xe5,0x10,0x43,0x64,0xef,0x42,0x01,0xe6,0xcf,
++ 0x86,0xd5,0x2a,0xe4,0x99,0x43,0xe3,0x7f,0x43,0xd2,0x11,0xe1,0x5e,0x43,0x10,0x07,
++ 0x01,0xff,0xcc,0x80,0x00,0x01,0xff,0xcc,0x81,0x00,0xe1,0x65,0x43,0x10,0x09,0x01,
++ 0xff,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0x00,0xd4,0x0f,0x93,0x0b,0x92,
++ 0x07,0x61,0xab,0x43,0x01,0xea,0x06,0xe6,0x06,0xe6,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,
++ 0x10,0x07,0x0a,0xff,0xcd,0xb1,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xcd,0xb3,0x00,
++ 0x0a,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xca,0xb9,0x00,0x01,0x00,0x10,0x07,0x0a,
++ 0xff,0xcd,0xb7,0x00,0x0a,0x00,0xd2,0x07,0x61,0x97,0x43,0x00,0x00,0x51,0x04,0x09,
++ 0x00,0x10,0x06,0x01,0xff,0x3b,0x00,0x10,0xff,0xcf,0xb3,0x00,0xe0,0x31,0x01,0xcf,
++ 0x86,0xd5,0xd3,0xd4,0x5f,0xd3,0x21,0x52,0x04,0x00,0x00,0xd1,0x0d,0x10,0x04,0x01,
++ 0x00,0x01,0xff,0xc2,0xa8,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x81,
++ 0x00,0x01,0xff,0xc2,0xb7,0x00,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,
++ 0xcc,0x81,0x00,0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,
++ 0xcc,0x81,0x00,0x00,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,
++ 0x00,0x00,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x01,0xff,0xcf,0x89,0xcc,
++ 0x81,0x00,0xd3,0x3c,0xd2,0x20,0xd1,0x12,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x88,
++ 0xcc,0x81,0x00,0x01,0xff,0xce,0xb1,0x00,0x10,0x07,0x01,0xff,0xce,0xb2,0x00,0x01,
++ 0xff,0xce,0xb3,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xce,0xb4,0x00,0x01,0xff,0xce,
++ 0xb5,0x00,0x10,0x07,0x01,0xff,0xce,0xb6,0x00,0x01,0xff,0xce,0xb7,0x00,0xd2,0x1c,
++ 0xd1,0x0e,0x10,0x07,0x01,0xff,0xce,0xb8,0x00,0x01,0xff,0xce,0xb9,0x00,0x10,0x07,
++ 0x01,0xff,0xce,0xba,0x00,0x01,0xff,0xce,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,
++ 0xce,0xbc,0x00,0x01,0xff,0xce,0xbd,0x00,0x10,0x07,0x01,0xff,0xce,0xbe,0x00,0x01,
++ 0xff,0xce,0xbf,0x00,0xe4,0x85,0x43,0xd3,0x35,0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,
++ 0xff,0xcf,0x80,0x00,0x01,0xff,0xcf,0x81,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,
++ 0x83,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xcf,0x84,0x00,0x01,0xff,0xcf,0x85,0x00,
++ 0x10,0x07,0x01,0xff,0xcf,0x86,0x00,0x01,0xff,0xcf,0x87,0x00,0xe2,0x2b,0x43,0xd1,
++ 0x0e,0x10,0x07,0x01,0xff,0xcf,0x88,0x00,0x01,0xff,0xcf,0x89,0x00,0x10,0x09,0x01,
++ 0xff,0xce,0xb9,0xcc,0x88,0x00,0x01,0xff,0xcf,0x85,0xcc,0x88,0x00,0xcf,0x86,0xd5,
++ 0x94,0xd4,0x3c,0xd3,0x13,0x92,0x0f,0x51,0x04,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,
++ 0x83,0x00,0x01,0x00,0x01,0x00,0xd2,0x07,0x61,0x3a,0x43,0x01,0x00,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x10,
++ 0x09,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0x0a,0xff,0xcf,0x97,0x00,0xd3,0x2c,0xd2,
++ 0x11,0xe1,0x46,0x43,0x10,0x07,0x01,0xff,0xce,0xb2,0x00,0x01,0xff,0xce,0xb8,0x00,
++ 0xd1,0x10,0x10,0x09,0x01,0xff,0xcf,0x92,0xcc,0x88,0x00,0x01,0xff,0xcf,0x86,0x00,
++ 0x10,0x07,0x01,0xff,0xcf,0x80,0x00,0x04,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,
++ 0xff,0xcf,0x99,0x00,0x06,0x00,0x10,0x07,0x01,0xff,0xcf,0x9b,0x00,0x04,0x00,0xd1,
++ 0x0b,0x10,0x07,0x01,0xff,0xcf,0x9d,0x00,0x04,0x00,0x10,0x07,0x01,0xff,0xcf,0x9f,
++ 0x00,0x04,0x00,0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,
++ 0xa1,0x00,0x04,0x00,0x10,0x07,0x01,0xff,0xcf,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,
++ 0x07,0x01,0xff,0xcf,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xcf,0xa7,0x00,0x01,
++ 0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xa9,0x00,0x01,0x00,0x10,0x07,
++ 0x01,0xff,0xcf,0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xcf,0xad,0x00,
++ 0x01,0x00,0x10,0x07,0x01,0xff,0xcf,0xaf,0x00,0x01,0x00,0xd3,0x2b,0xd2,0x12,0x91,
++ 0x0e,0x10,0x07,0x01,0xff,0xce,0xba,0x00,0x01,0xff,0xcf,0x81,0x00,0x01,0x00,0xd1,
++ 0x0e,0x10,0x07,0x05,0xff,0xce,0xb8,0x00,0x05,0xff,0xce,0xb5,0x00,0x10,0x04,0x06,
++ 0x00,0x07,0xff,0xcf,0xb8,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x04,0x07,0x00,0x07,0xff,
++ 0xcf,0xb2,0x00,0x10,0x07,0x07,0xff,0xcf,0xbb,0x00,0x07,0x00,0xd1,0x0b,0x10,0x04,
++ 0x08,0x00,0x08,0xff,0xcd,0xbb,0x00,0x10,0x07,0x08,0xff,0xcd,0xbc,0x00,0x08,0xff,
++ 0xcd,0xbd,0x00,0xe3,0xed,0x46,0xe2,0x3d,0x05,0xe1,0x27,0x02,0xe0,0x66,0x01,0xcf,
++ 0x86,0xd5,0xf0,0xd4,0x7e,0xd3,0x40,0xd2,0x22,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,
++ 0xb5,0xcc,0x80,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x88,0x00,0x10,0x07,0x01,0xff,0xd1,
++ 0x92,0x00,0x01,0xff,0xd0,0xb3,0xcc,0x81,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,
++ 0x94,0x00,0x01,0xff,0xd1,0x95,0x00,0x10,0x07,0x01,0xff,0xd1,0x96,0x00,0x01,0xff,
++ 0xd1,0x96,0xcc,0x88,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x98,0x00,
++ 0x01,0xff,0xd1,0x99,0x00,0x10,0x07,0x01,0xff,0xd1,0x9a,0x00,0x01,0xff,0xd1,0x9b,
++ 0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xba,0xcc,0x81,0x00,0x04,0xff,0xd0,0xb8,
++ 0xcc,0x80,0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x86,0x00,0x01,0xff,0xd1,0x9f,
++ 0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd0,0xb0,0x00,0x01,0xff,
++ 0xd0,0xb1,0x00,0x10,0x07,0x01,0xff,0xd0,0xb2,0x00,0x01,0xff,0xd0,0xb3,0x00,0xd1,
++ 0x0e,0x10,0x07,0x01,0xff,0xd0,0xb4,0x00,0x01,0xff,0xd0,0xb5,0x00,0x10,0x07,0x01,
++ 0xff,0xd0,0xb6,0x00,0x01,0xff,0xd0,0xb7,0x00,0xd2,0x1e,0xd1,0x10,0x10,0x07,0x01,
++ 0xff,0xd0,0xb8,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x86,0x00,0x10,0x07,0x01,0xff,0xd0,
++ 0xba,0x00,0x01,0xff,0xd0,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd0,0xbc,0x00,
++ 0x01,0xff,0xd0,0xbd,0x00,0x10,0x07,0x01,0xff,0xd0,0xbe,0x00,0x01,0xff,0xd0,0xbf,
++ 0x00,0xe4,0x25,0x42,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x80,
++ 0x00,0x01,0xff,0xd1,0x81,0x00,0x10,0x07,0x01,0xff,0xd1,0x82,0x00,0x01,0xff,0xd1,
++ 0x83,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x84,0x00,0x01,0xff,0xd1,0x85,0x00,
++ 0x10,0x07,0x01,0xff,0xd1,0x86,0x00,0x01,0xff,0xd1,0x87,0x00,0xd2,0x1c,0xd1,0x0e,
++ 0x10,0x07,0x01,0xff,0xd1,0x88,0x00,0x01,0xff,0xd1,0x89,0x00,0x10,0x07,0x01,0xff,
++ 0xd1,0x8a,0x00,0x01,0xff,0xd1,0x8b,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd1,0x8c,
++ 0x00,0x01,0xff,0xd1,0x8d,0x00,0x10,0x07,0x01,0xff,0xd1,0x8e,0x00,0x01,0xff,0xd1,
++ 0x8f,0x00,0xcf,0x86,0xd5,0x07,0x64,0xcf,0x41,0x01,0x00,0xd4,0x58,0xd3,0x2c,0xd2,
++ 0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xa1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,
++ 0xd1,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xa5,0x00,0x01,0x00,
++ 0x10,0x07,0x01,0xff,0xd1,0xa7,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,
++ 0xff,0xd1,0xa9,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xab,0x00,0x01,0x00,0xd1,
++ 0x0b,0x10,0x07,0x01,0xff,0xd1,0xad,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xaf,
++ 0x00,0x01,0x00,0xd3,0x33,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb1,0x00,
++ 0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xb3,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,
++ 0xff,0xd1,0xb5,0x00,0x01,0x00,0x10,0x09,0x01,0xff,0xd1,0xb5,0xcc,0x8f,0x00,0x01,
++ 0xff,0xd1,0xb5,0xcc,0x8f,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd1,0xb9,
++ 0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xbb,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,
++ 0x01,0xff,0xd1,0xbd,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd1,0xbf,0x00,0x01,0x00,
++ 0xe0,0x41,0x01,0xcf,0x86,0xd5,0x8e,0xd4,0x36,0xd3,0x11,0xe2,0x91,0x41,0xe1,0x88,
++ 0x41,0x10,0x07,0x01,0xff,0xd2,0x81,0x00,0x01,0x00,0xd2,0x0f,0x51,0x04,0x04,0x00,
++ 0x10,0x07,0x06,0xff,0xd2,0x8b,0x00,0x06,0x00,0xd1,0x0b,0x10,0x07,0x04,0xff,0xd2,
++ 0x8d,0x00,0x04,0x00,0x10,0x07,0x04,0xff,0xd2,0x8f,0x00,0x04,0x00,0xd3,0x2c,0xd2,
++ 0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0x91,0x00,0x01,0x00,0x10,0x07,0x01,0xff,
++ 0xd2,0x93,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0x95,0x00,0x01,0x00,
++ 0x10,0x07,0x01,0xff,0xd2,0x97,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,
++ 0xff,0xd2,0x99,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x9b,0x00,0x01,0x00,0xd1,
++ 0x0b,0x10,0x07,0x01,0xff,0xd2,0x9d,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0x9f,
++ 0x00,0x01,0x00,0xd4,0x58,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,
++ 0xa1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xa3,0x00,0x01,0x00,0xd1,0x0b,0x10,
++ 0x07,0x01,0xff,0xd2,0xa5,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xa7,0x00,0x01,
++ 0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xa9,0x00,0x01,0x00,0x10,0x07,
++ 0x01,0xff,0xd2,0xab,0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xad,0x00,
++ 0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xaf,0x00,0x01,0x00,0xd3,0x2c,0xd2,0x16,0xd1,
++ 0x0b,0x10,0x07,0x01,0xff,0xd2,0xb1,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xb3,
++ 0x00,0x01,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,0xb5,0x00,0x01,0x00,0x10,0x07,
++ 0x01,0xff,0xd2,0xb7,0x00,0x01,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd2,
++ 0xb9,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xbb,0x00,0x01,0x00,0xd1,0x0b,0x10,
++ 0x07,0x01,0xff,0xd2,0xbd,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xd2,0xbf,0x00,0x01,
++ 0x00,0xcf,0x86,0xd5,0xdc,0xd4,0x5a,0xd3,0x36,0xd2,0x20,0xd1,0x10,0x10,0x07,0x01,
++ 0xff,0xd3,0x8f,0x00,0x01,0xff,0xd0,0xb6,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,
++ 0xb6,0xcc,0x86,0x00,0x01,0xff,0xd3,0x84,0x00,0xd1,0x0b,0x10,0x04,0x01,0x00,0x06,
++ 0xff,0xd3,0x86,0x00,0x10,0x04,0x06,0x00,0x01,0xff,0xd3,0x88,0x00,0xd2,0x16,0xd1,
++ 0x0b,0x10,0x04,0x01,0x00,0x06,0xff,0xd3,0x8a,0x00,0x10,0x04,0x06,0x00,0x01,0xff,
++ 0xd3,0x8c,0x00,0xe1,0x69,0x40,0x10,0x04,0x01,0x00,0x06,0xff,0xd3,0x8e,0x00,0xd3,
++ 0x41,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xb0,0xcc,0x86,0x00,0x01,0xff,
++ 0xd0,0xb0,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,0xb0,0xcc,0x88,0x00,0x01,0xff,
++ 0xd0,0xb0,0xcc,0x88,0x00,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0x95,0x00,0x01,0x00,
++ 0x10,0x09,0x01,0xff,0xd0,0xb5,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x86,0x00,
++ 0xd2,0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0x99,0x00,0x01,0x00,0x10,0x09,0x01,
++ 0xff,0xd3,0x99,0xcc,0x88,0x00,0x01,0xff,0xd3,0x99,0xcc,0x88,0x00,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xd0,0xb6,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb6,0xcc,0x88,0x00,0x10,
++ 0x09,0x01,0xff,0xd0,0xb7,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb7,0xcc,0x88,0x00,0xd4,
++ 0x82,0xd3,0x41,0xd2,0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0xa1,0x00,0x01,0x00,
++ 0x10,0x09,0x01,0xff,0xd0,0xb8,0xcc,0x84,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x84,0x00,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0xb8,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb8,0xcc,
++ 0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0xbe,0xcc,0x88,0x00,0x01,0xff,0xd0,0xbe,0xcc,
++ 0x88,0x00,0xd2,0x1d,0xd1,0x0b,0x10,0x07,0x01,0xff,0xd3,0xa9,0x00,0x01,0x00,0x10,
++ 0x09,0x01,0xff,0xd3,0xa9,0xcc,0x88,0x00,0x01,0xff,0xd3,0xa9,0xcc,0x88,0x00,0xd1,
++ 0x12,0x10,0x09,0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x04,0xff,0xd1,0x8d,0xcc,0x88,
++ 0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0x01,0xff,0xd1,0x83,0xcc,0x84,
++ 0x00,0xd3,0x41,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x88,0x00,
++ 0x01,0xff,0xd1,0x83,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd1,0x83,0xcc,0x8b,0x00,
++ 0x01,0xff,0xd1,0x83,0xcc,0x8b,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x87,0xcc,
++ 0x88,0x00,0x01,0xff,0xd1,0x87,0xcc,0x88,0x00,0x10,0x07,0x08,0xff,0xd3,0xb7,0x00,
++ 0x08,0x00,0xd2,0x1d,0xd1,0x12,0x10,0x09,0x01,0xff,0xd1,0x8b,0xcc,0x88,0x00,0x01,
++ 0xff,0xd1,0x8b,0xcc,0x88,0x00,0x10,0x07,0x09,0xff,0xd3,0xbb,0x00,0x09,0x00,0xd1,
++ 0x0b,0x10,0x07,0x09,0xff,0xd3,0xbd,0x00,0x09,0x00,0x10,0x07,0x09,0xff,0xd3,0xbf,
++ 0x00,0x09,0x00,0xe1,0x26,0x02,0xe0,0x78,0x01,0xcf,0x86,0xd5,0xb0,0xd4,0x58,0xd3,
++ 0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x81,0x00,0x06,0x00,0x10,0x07,
++ 0x06,0xff,0xd4,0x83,0x00,0x06,0x00,0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x85,0x00,
++ 0x06,0x00,0x10,0x07,0x06,0xff,0xd4,0x87,0x00,0x06,0x00,0xd2,0x16,0xd1,0x0b,0x10,
++ 0x07,0x06,0xff,0xd4,0x89,0x00,0x06,0x00,0x10,0x07,0x06,0xff,0xd4,0x8b,0x00,0x06,
++ 0x00,0xd1,0x0b,0x10,0x07,0x06,0xff,0xd4,0x8d,0x00,0x06,0x00,0x10,0x07,0x06,0xff,
++ 0xd4,0x8f,0x00,0x06,0x00,0xd3,0x2c,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x09,0xff,0xd4,
++ 0x91,0x00,0x09,0x00,0x10,0x07,0x09,0xff,0xd4,0x93,0x00,0x09,0x00,0xd1,0x0b,0x10,
++ 0x07,0x0a,0xff,0xd4,0x95,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,0x97,0x00,0x0a,
++ 0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0x99,0x00,0x0a,0x00,0x10,0x07,
++ 0x0a,0xff,0xd4,0x9b,0x00,0x0a,0x00,0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0x9d,0x00,
++ 0x0a,0x00,0x10,0x07,0x0a,0xff,0xd4,0x9f,0x00,0x0a,0x00,0xd4,0x58,0xd3,0x2c,0xd2,
++ 0x16,0xd1,0x0b,0x10,0x07,0x0a,0xff,0xd4,0xa1,0x00,0x0a,0x00,0x10,0x07,0x0a,0xff,
++ 0xd4,0xa3,0x00,0x0a,0x00,0xd1,0x0b,0x10,0x07,0x0b,0xff,0xd4,0xa5,0x00,0x0b,0x00,
++ 0x10,0x07,0x0c,0xff,0xd4,0xa7,0x00,0x0c,0x00,0xd2,0x16,0xd1,0x0b,0x10,0x07,0x10,
++ 0xff,0xd4,0xa9,0x00,0x10,0x00,0x10,0x07,0x10,0xff,0xd4,0xab,0x00,0x10,0x00,0xd1,
++ 0x0b,0x10,0x07,0x10,0xff,0xd4,0xad,0x00,0x10,0x00,0x10,0x07,0x10,0xff,0xd4,0xaf,
++ 0x00,0x10,0x00,0xd3,0x35,0xd2,0x19,0xd1,0x0b,0x10,0x04,0x00,0x00,0x01,0xff,0xd5,
++ 0xa1,0x00,0x10,0x07,0x01,0xff,0xd5,0xa2,0x00,0x01,0xff,0xd5,0xa3,0x00,0xd1,0x0e,
++ 0x10,0x07,0x01,0xff,0xd5,0xa4,0x00,0x01,0xff,0xd5,0xa5,0x00,0x10,0x07,0x01,0xff,
++ 0xd5,0xa6,0x00,0x01,0xff,0xd5,0xa7,0x00,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,
++ 0xd5,0xa8,0x00,0x01,0xff,0xd5,0xa9,0x00,0x10,0x07,0x01,0xff,0xd5,0xaa,0x00,0x01,
++ 0xff,0xd5,0xab,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xac,0x00,0x01,0xff,0xd5,
++ 0xad,0x00,0x10,0x07,0x01,0xff,0xd5,0xae,0x00,0x01,0xff,0xd5,0xaf,0x00,0xcf,0x86,
++ 0xe5,0x08,0x3f,0xd4,0x70,0xd3,0x38,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,
++ 0xb0,0x00,0x01,0xff,0xd5,0xb1,0x00,0x10,0x07,0x01,0xff,0xd5,0xb2,0x00,0x01,0xff,
++ 0xd5,0xb3,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,0xb4,0x00,0x01,0xff,0xd5,0xb5,
++ 0x00,0x10,0x07,0x01,0xff,0xd5,0xb6,0x00,0x01,0xff,0xd5,0xb7,0x00,0xd2,0x1c,0xd1,
++ 0x0e,0x10,0x07,0x01,0xff,0xd5,0xb8,0x00,0x01,0xff,0xd5,0xb9,0x00,0x10,0x07,0x01,
++ 0xff,0xd5,0xba,0x00,0x01,0xff,0xd5,0xbb,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd5,
++ 0xbc,0x00,0x01,0xff,0xd5,0xbd,0x00,0x10,0x07,0x01,0xff,0xd5,0xbe,0x00,0x01,0xff,
++ 0xd5,0xbf,0x00,0xe3,0x87,0x3e,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd6,0x80,
++ 0x00,0x01,0xff,0xd6,0x81,0x00,0x10,0x07,0x01,0xff,0xd6,0x82,0x00,0x01,0xff,0xd6,
++ 0x83,0x00,0xd1,0x0e,0x10,0x07,0x01,0xff,0xd6,0x84,0x00,0x01,0xff,0xd6,0x85,0x00,
++ 0x10,0x07,0x01,0xff,0xd6,0x86,0x00,0x00,0x00,0xe0,0x2f,0x3f,0xcf,0x86,0xe5,0xc0,
++ 0x3e,0xe4,0x97,0x3e,0xe3,0x76,0x3e,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0xd5,0xa5,0xd6,0x82,0x00,0xe4,0x3e,0x25,0xe3,0xc3,0x1a,
++ 0xe2,0x7b,0x81,0xe1,0xc0,0x13,0xd0,0x1e,0xcf,0x86,0xc5,0xe4,0x08,0x4b,0xe3,0x53,
++ 0x46,0xe2,0xe9,0x43,0xe1,0x1c,0x43,0xe0,0xe1,0x42,0xcf,0x86,0xe5,0xa6,0x42,0x64,
++ 0x89,0x42,0x0b,0x00,0xcf,0x86,0xe5,0xfa,0x01,0xe4,0x03,0x56,0xe3,0x76,0x01,0xe2,
++ 0x8e,0x53,0xd1,0x0c,0xe0,0xef,0x52,0xcf,0x86,0x65,0x8d,0x52,0x04,0x00,0xe0,0x0d,
++ 0x01,0xcf,0x86,0xd5,0x0a,0xe4,0x10,0x53,0x63,0xff,0x52,0x0a,0x00,0xd4,0x80,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x80,0x00,0x01,0xff,0xe2,
++ 0xb4,0x81,0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0x82,0x00,0x01,0xff,0xe2,0xb4,0x83,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x84,0x00,0x01,0xff,0xe2,0xb4,0x85,
++ 0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0x86,0x00,0x01,0xff,0xe2,0xb4,0x87,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x88,0x00,0x01,0xff,0xe2,0xb4,0x89,
++ 0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0x8a,0x00,0x01,0xff,0xe2,0xb4,0x8b,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x8c,0x00,0x01,0xff,0xe2,0xb4,0x8d,0x00,0x10,
++ 0x08,0x01,0xff,0xe2,0xb4,0x8e,0x00,0x01,0xff,0xe2,0xb4,0x8f,0x00,0xd3,0x40,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x90,0x00,0x01,0xff,0xe2,0xb4,0x91,
++ 0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0x92,0x00,0x01,0xff,0xe2,0xb4,0x93,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x94,0x00,0x01,0xff,0xe2,0xb4,0x95,0x00,0x10,
++ 0x08,0x01,0xff,0xe2,0xb4,0x96,0x00,0x01,0xff,0xe2,0xb4,0x97,0x00,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0x98,0x00,0x01,0xff,0xe2,0xb4,0x99,0x00,0x10,
++ 0x08,0x01,0xff,0xe2,0xb4,0x9a,0x00,0x01,0xff,0xe2,0xb4,0x9b,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe2,0xb4,0x9c,0x00,0x01,0xff,0xe2,0xb4,0x9d,0x00,0x10,0x08,0x01,
++ 0xff,0xe2,0xb4,0x9e,0x00,0x01,0xff,0xe2,0xb4,0x9f,0x00,0xcf,0x86,0xe5,0x42,0x52,
++ 0x94,0x50,0xd3,0x3c,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa0,0x00,
++ 0x01,0xff,0xe2,0xb4,0xa1,0x00,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa2,0x00,0x01,0xff,
++ 0xe2,0xb4,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0xb4,0xa4,0x00,0x01,0xff,
++ 0xe2,0xb4,0xa5,0x00,0x10,0x04,0x00,0x00,0x0d,0xff,0xe2,0xb4,0xa7,0x00,0x52,0x04,
++ 0x00,0x00,0x91,0x0c,0x10,0x04,0x00,0x00,0x0d,0xff,0xe2,0xb4,0xad,0x00,0x00,0x00,
++ 0x01,0x00,0xd2,0x1b,0xe1,0xfc,0x52,0xe0,0xad,0x52,0xcf,0x86,0x95,0x0f,0x94,0x0b,
++ 0x93,0x07,0x62,0x92,0x52,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xd1,0x13,0xe0,
++ 0xd3,0x53,0xcf,0x86,0x95,0x0a,0xe4,0xa8,0x53,0x63,0x97,0x53,0x04,0x00,0x04,0x00,
++ 0xd0,0x0d,0xcf,0x86,0x95,0x07,0x64,0x22,0x54,0x08,0x00,0x04,0x00,0xcf,0x86,0x55,
++ 0x04,0x04,0x00,0x54,0x04,0x04,0x00,0xd3,0x07,0x62,0x2f,0x54,0x04,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8f,0xb0,0x00,0x11,0xff,0xe1,0x8f,0xb1,0x00,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0xb2,0x00,0x11,0xff,0xe1,0x8f,0xb3,0x00,0x91,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0xb4,0x00,0x11,0xff,0xe1,0x8f,0xb5,0x00,0x00,0x00,
++ 0xd4,0x1c,0xe3,0xe0,0x56,0xe2,0x17,0x56,0xe1,0xda,0x55,0xe0,0xbb,0x55,0xcf,0x86,
++ 0x95,0x0a,0xe4,0xa4,0x55,0x63,0x88,0x55,0x04,0x00,0x04,0x00,0xe3,0xd2,0x01,0xe2,
++ 0x2b,0x5a,0xd1,0x0c,0xe0,0x4c,0x59,0xcf,0x86,0x65,0x25,0x59,0x0a,0x00,0xe0,0x9c,
++ 0x59,0xcf,0x86,0xd5,0xc5,0xd4,0x45,0xd3,0x31,0xd2,0x1c,0xd1,0x0e,0x10,0x07,0x12,
++ 0xff,0xd0,0xb2,0x00,0x12,0xff,0xd0,0xb4,0x00,0x10,0x07,0x12,0xff,0xd0,0xbe,0x00,
++ 0x12,0xff,0xd1,0x81,0x00,0x51,0x07,0x12,0xff,0xd1,0x82,0x00,0x10,0x07,0x12,0xff,
++ 0xd1,0x8a,0x00,0x12,0xff,0xd1,0xa3,0x00,0x92,0x10,0x91,0x0c,0x10,0x08,0x12,0xff,
++ 0xea,0x99,0x8b,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x14,0xff,0xe1,0x83,0x90,0x00,0x14,0xff,0xe1,0x83,0x91,0x00,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0x92,0x00,0x14,0xff,0xe1,0x83,0x93,0x00,0xd1,0x10,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0x94,0x00,0x14,0xff,0xe1,0x83,0x95,0x00,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0x96,0x00,0x14,0xff,0xe1,0x83,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0x98,0x00,0x14,0xff,0xe1,0x83,0x99,0x00,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0x9a,0x00,0x14,0xff,0xe1,0x83,0x9b,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0x9c,0x00,0x14,0xff,0xe1,0x83,0x9d,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,
++ 0x9e,0x00,0x14,0xff,0xe1,0x83,0x9f,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x14,0xff,0xe1,0x83,0xa0,0x00,0x14,0xff,0xe1,0x83,0xa1,0x00,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0xa2,0x00,0x14,0xff,0xe1,0x83,0xa3,0x00,0xd1,0x10,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0xa4,0x00,0x14,0xff,0xe1,0x83,0xa5,0x00,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0xa6,0x00,0x14,0xff,0xe1,0x83,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0xa8,0x00,0x14,0xff,0xe1,0x83,0xa9,0x00,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0xaa,0x00,0x14,0xff,0xe1,0x83,0xab,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0xac,0x00,0x14,0xff,0xe1,0x83,0xad,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,
++ 0xae,0x00,0x14,0xff,0xe1,0x83,0xaf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x14,0xff,0xe1,0x83,0xb0,0x00,0x14,0xff,0xe1,0x83,0xb1,0x00,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0xb2,0x00,0x14,0xff,0xe1,0x83,0xb3,0x00,0xd1,0x10,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0xb4,0x00,0x14,0xff,0xe1,0x83,0xb5,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,
++ 0xb6,0x00,0x14,0xff,0xe1,0x83,0xb7,0x00,0xd2,0x1c,0xd1,0x10,0x10,0x08,0x14,0xff,
++ 0xe1,0x83,0xb8,0x00,0x14,0xff,0xe1,0x83,0xb9,0x00,0x10,0x08,0x14,0xff,0xe1,0x83,
++ 0xba,0x00,0x00,0x00,0xd1,0x0c,0x10,0x04,0x00,0x00,0x14,0xff,0xe1,0x83,0xbd,0x00,
++ 0x10,0x08,0x14,0xff,0xe1,0x83,0xbe,0x00,0x14,0xff,0xe1,0x83,0xbf,0x00,0xe2,0x9d,
++ 0x08,0xe1,0x48,0x04,0xe0,0x1c,0x02,0xcf,0x86,0xe5,0x11,0x01,0xd4,0x84,0xd3,0x40,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa5,0x00,0x01,0xff,0x61,0xcc,
++ 0xa5,0x00,0x10,0x08,0x01,0xff,0x62,0xcc,0x87,0x00,0x01,0xff,0x62,0xcc,0x87,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x62,0xcc,0xa3,0x00,0x01,0xff,0x62,0xcc,0xa3,0x00,
++ 0x10,0x08,0x01,0xff,0x62,0xcc,0xb1,0x00,0x01,0xff,0x62,0xcc,0xb1,0x00,0xd2,0x24,
++ 0xd1,0x14,0x10,0x0a,0x01,0xff,0x63,0xcc,0xa7,0xcc,0x81,0x00,0x01,0xff,0x63,0xcc,
++ 0xa7,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0x87,0x00,0x01,0xff,0x64,0xcc,
++ 0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x64,0xcc,0xa3,0x00,0x01,0xff,0x64,0xcc,
++ 0xa3,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0xb1,0x00,0x01,0xff,0x64,0xcc,0xb1,0x00,
++ 0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x64,0xcc,0xa7,0x00,0x01,0xff,
++ 0x64,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x64,0xcc,0xad,0x00,0x01,0xff,0x64,0xcc,
++ 0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,
++ 0x65,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,
++ 0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x65,0xcc,0xad,0x00,0x01,0xff,0x65,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,
++ 0xb0,0x00,0x01,0xff,0x65,0xcc,0xb0,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,
++ 0xa7,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,0x00,0x10,0x08,0x01,0xff,
++ 0x66,0xcc,0x87,0x00,0x01,0xff,0x66,0xcc,0x87,0x00,0xd4,0x84,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x67,0xcc,0x84,0x00,0x01,0xff,0x67,0xcc,0x84,0x00,
++ 0x10,0x08,0x01,0xff,0x68,0xcc,0x87,0x00,0x01,0xff,0x68,0xcc,0x87,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x68,0xcc,0xa3,0x00,0x01,0xff,0x68,0xcc,0xa3,0x00,0x10,0x08,
++ 0x01,0xff,0x68,0xcc,0x88,0x00,0x01,0xff,0x68,0xcc,0x88,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x68,0xcc,0xa7,0x00,0x01,0xff,0x68,0xcc,0xa7,0x00,0x10,0x08,
++ 0x01,0xff,0x68,0xcc,0xae,0x00,0x01,0xff,0x68,0xcc,0xae,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x69,0xcc,0xb0,0x00,0x01,0xff,0x69,0xcc,0xb0,0x00,0x10,0x0a,0x01,0xff,
++ 0x69,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0x69,0xcc,0x88,0xcc,0x81,0x00,0xd3,0x40,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x6b,0xcc,0x81,0x00,0x01,0xff,0x6b,0xcc,
++ 0x81,0x00,0x10,0x08,0x01,0xff,0x6b,0xcc,0xa3,0x00,0x01,0xff,0x6b,0xcc,0xa3,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x6b,0xcc,0xb1,0x00,0x01,0xff,0x6b,0xcc,0xb1,0x00,
++ 0x10,0x08,0x01,0xff,0x6c,0xcc,0xa3,0x00,0x01,0xff,0x6c,0xcc,0xa3,0x00,0xd2,0x24,
++ 0xd1,0x14,0x10,0x0a,0x01,0xff,0x6c,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,0x6c,0xcc,
++ 0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x6c,0xcc,0xb1,0x00,0x01,0xff,0x6c,0xcc,
++ 0xb1,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6c,0xcc,0xad,0x00,0x01,0xff,0x6c,0xcc,
++ 0xad,0x00,0x10,0x08,0x01,0xff,0x6d,0xcc,0x81,0x00,0x01,0xff,0x6d,0xcc,0x81,0x00,
++ 0xcf,0x86,0xe5,0x15,0x01,0xd4,0x88,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x6d,0xcc,0x87,0x00,0x01,0xff,0x6d,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x6d,
++ 0xcc,0xa3,0x00,0x01,0xff,0x6d,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,
++ 0xcc,0x87,0x00,0x01,0xff,0x6e,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xa3,
++ 0x00,0x01,0xff,0x6e,0xcc,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x6e,
++ 0xcc,0xb1,0x00,0x01,0xff,0x6e,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x6e,0xcc,0xad,
++ 0x00,0x01,0xff,0x6e,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x83,
++ 0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x6f,
++ 0xcc,0x83,0xcc,0x88,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x88,0x00,0xd3,0x48,0xd2,
++ 0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x80,0x00,0x01,0xff,0x6f,
++ 0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0x01,
++ 0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x70,0xcc,0x81,
++ 0x00,0x01,0xff,0x70,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x70,0xcc,0x87,0x00,0x01,
++ 0xff,0x70,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x72,0xcc,0x87,
++ 0x00,0x01,0xff,0x72,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0xa3,0x00,0x01,
++ 0xff,0x72,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,
++ 0x00,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x72,0xcc,0xb1,
++ 0x00,0x01,0xff,0x72,0xcc,0xb1,0x00,0xd4,0x8c,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x73,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x87,0x00,0x10,0x08,0x01,
++ 0xff,0x73,0xcc,0xa3,0x00,0x01,0xff,0x73,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,
++ 0xff,0x73,0xcc,0x81,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x81,0xcc,0x87,0x00,0x10,
++ 0x0a,0x01,0xff,0x73,0xcc,0x8c,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x8c,0xcc,0x87,
++ 0x00,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x01,
++ 0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x74,0xcc,0x87,0x00,0x01,
++ 0xff,0x74,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x74,0xcc,0xa3,0x00,0x01,
++ 0xff,0x74,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x74,0xcc,0xb1,0x00,0x01,0xff,0x74,
++ 0xcc,0xb1,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x74,0xcc,0xad,
++ 0x00,0x01,0xff,0x74,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xa4,0x00,0x01,
++ 0xff,0x75,0xcc,0xa4,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0xb0,0x00,0x01,
++ 0xff,0x75,0xcc,0xb0,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0xad,0x00,0x01,0xff,0x75,
++ 0xcc,0xad,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,
++ 0x00,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x84,
++ 0xcc,0x88,0x00,0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x76,0xcc,0x83,0x00,0x01,0xff,0x76,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x76,
++ 0xcc,0xa3,0x00,0x01,0xff,0x76,0xcc,0xa3,0x00,0xe0,0x11,0x02,0xcf,0x86,0xd5,0xe2,
++ 0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x80,0x00,
++ 0x01,0xff,0x77,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x77,0xcc,0x81,0x00,0x01,0xff,
++ 0x77,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x88,0x00,0x01,0xff,
++ 0x77,0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x77,0xcc,0x87,0x00,0x01,0xff,0x77,0xcc,
++ 0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0xa3,0x00,0x01,0xff,
++ 0x77,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x78,0xcc,0x87,0x00,0x01,0xff,0x78,0xcc,
++ 0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x78,0xcc,0x88,0x00,0x01,0xff,0x78,0xcc,
++ 0x88,0x00,0x10,0x08,0x01,0xff,0x79,0xcc,0x87,0x00,0x01,0xff,0x79,0xcc,0x87,0x00,
++ 0xd3,0x33,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x82,0x00,0x01,0xff,
++ 0x7a,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0xa3,0x00,0x01,0xff,0x7a,0xcc,
++ 0xa3,0x00,0xe1,0x12,0x59,0x10,0x08,0x01,0xff,0x7a,0xcc,0xb1,0x00,0x01,0xff,0x7a,
++ 0xcc,0xb1,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x8a,0x00,0x01,
++ 0xff,0x79,0xcc,0x8a,0x00,0x10,0x08,0x01,0xff,0x61,0xca,0xbe,0x00,0x02,0xff,0x73,
++ 0xcc,0x87,0x00,0x51,0x04,0x0a,0x00,0x10,0x07,0x0a,0xff,0x73,0x73,0x00,0x0a,0x00,
++ 0xd4,0x98,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x61,0xcc,0xa3,0x00,
++ 0x01,0xff,0x61,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x61,0xcc,0x89,0x00,0x01,0xff,
++ 0x61,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,
++ 0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,
+ 0x80,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x80,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,
+- 0x01,0xff,0x41,0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,0x00,
+- 0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,
+- 0x83,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,
+- 0x61,0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x81,0x00,
++ 0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,0x00,
++ 0x10,0x0a,0x01,0xff,0x61,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,
++ 0x83,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,
++ 0x61,0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x81,0x00,
+ 0x01,0xff,0x61,0xcc,0x86,0xcc,0x81,0x00,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,
+- 0x01,0xff,0x41,0xcc,0x86,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,0x00,
+- 0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,
+- 0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x83,0x00,0x01,0xff,
+- 0x61,0xcc,0x86,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0xa3,0xcc,0x86,0x00,
++ 0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,0x00,
++ 0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,
++ 0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x61,0xcc,0x86,0xcc,0x83,0x00,0x01,0xff,
++ 0x61,0xcc,0x86,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x86,0x00,
+ 0x01,0xff,0x61,0xcc,0xa3,0xcc,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0x45,0xcc,0xa3,0x00,0x01,0xff,0x65,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,
+- 0x89,0x00,0x01,0xff,0x65,0xcc,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,
+- 0x83,0x00,0x01,0xff,0x65,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,0xcc,
++ 0x65,0xcc,0xa3,0x00,0x01,0xff,0x65,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,
++ 0x89,0x00,0x01,0xff,0x65,0xcc,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,
++ 0x83,0x00,0x01,0xff,0x65,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,
+ 0x81,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x81,0x00,0xcf,0x86,0xe5,0x31,0x01,0xd4,
+- 0x90,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,0xcc,0x80,
+- 0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,
++ 0x90,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,
++ 0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x65,0xcc,0x82,
+ 0xcc,0x89,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,
+- 0xff,0x45,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,0x10,
+- 0x0a,0x01,0xff,0x45,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0xa3,0xcc,0x82,
+- 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x89,0x00,0x01,0xff,0x69,
+- 0xcc,0x89,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0xa3,0x00,0x01,0xff,0x69,0xcc,0xa3,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0xa3,0x00,0x01,0xff,0x6f,0xcc,0xa3,
+- 0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x89,0x00,0xd3,
+- 0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x82,0xcc,0x81,0x00,0x01,
+- 0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x82,0xcc,0x80,
+- 0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,
++ 0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,0x10,
++ 0x0a,0x01,0xff,0x65,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0xa3,0xcc,0x82,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x89,0x00,0x01,0xff,0x69,
++ 0xcc,0x89,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0xa3,0x00,0x01,0xff,0x69,0xcc,0xa3,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0xa3,0x00,0x01,0xff,0x6f,0xcc,0xa3,
++ 0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x89,0x00,0xd3,
++ 0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x01,
++ 0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,
++ 0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,
+ 0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x89,0x00,0x10,0x0a,0x01,
+- 0xff,0x4f,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,0xd2,
+- 0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x6f,
+- 0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0x81,0x00,0x01,
+- 0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,
+- 0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x4f,
++ 0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,0xd2,
++ 0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x6f,
++ 0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0x01,
++ 0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,
++ 0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x6f,
+ 0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x89,0x00,0xd4,0x98,0xd3,
+- 0x48,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0x83,0x00,0x01,
+- 0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0xa3,
+- 0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,
+- 0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x89,
+- 0x00,0x01,0xff,0x75,0xcc,0x89,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,
++ 0x48,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x01,
++ 0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,
++ 0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,
++ 0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x89,
++ 0x00,0x01,0xff,0x75,0xcc,0x89,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,
+ 0xcc,0x9b,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x81,0x00,0x10,0x0a,0x01,
+- 0xff,0x55,0xcc,0x9b,0xcc,0x80,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,0xd1,
+- 0x14,0x10,0x0a,0x01,0xff,0x55,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x75,0xcc,0x9b,
+- 0xcc,0x89,0x00,0x10,0x0a,0x01,0xff,0x55,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,0x75,
+- 0xcc,0x9b,0xcc,0x83,0x00,0xd3,0x44,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,
++ 0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,0xd1,
++ 0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x75,0xcc,0x9b,
++ 0xcc,0x89,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,0x75,
++ 0xcc,0x9b,0xcc,0x83,0x00,0xd3,0x44,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,
+ 0xcc,0x9b,0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0xa3,0x00,0x10,0x08,0x01,
+- 0xff,0x59,0xcc,0x80,0x00,0x01,0xff,0x79,0xcc,0x80,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0x59,0xcc,0xa3,0x00,0x01,0xff,0x79,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x59,
+- 0xcc,0x89,0x00,0x01,0xff,0x79,0xcc,0x89,0x00,0x92,0x14,0x91,0x10,0x10,0x08,0x01,
+- 0xff,0x59,0xcc,0x83,0x00,0x01,0xff,0x79,0xcc,0x83,0x00,0x0a,0x00,0x0a,0x00,0xe1,
+- 0xc0,0x04,0xe0,0x80,0x02,0xcf,0x86,0xe5,0x2d,0x01,0xd4,0xa8,0xd3,0x54,0xd2,0x28,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,0xcc,
+- 0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,
+- 0xb1,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,
+- 0xcc,0x81,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,
+- 0xce,0xb1,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0x00,
+- 0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x91,0xcc,0x93,0x00,0x01,0xff,0xce,
+- 0x91,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x80,0x00,0x01,
+- 0xff,0xce,0x91,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x91,
+- 0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,
+- 0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcd,
+- 0x82,0x00,0xd3,0x42,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x93,
+- 0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb5,0xcc,0x93,
+- 0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,
+- 0x01,0xff,0xce,0xb5,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,
+- 0x81,0x00,0x00,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x95,0xcc,0x93,
+- 0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x95,0xcc,0x93,
+- 0xcc,0x80,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,
+- 0x01,0xff,0xce,0x95,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0xcc,
+- 0x81,0x00,0x00,0x00,0xd4,0xa8,0xd3,0x54,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,
+- 0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,
+- 0xce,0xb7,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0x00,
+- 0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,
+- 0xb7,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,
+- 0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xce,0x97,0xcc,0x93,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0x00,0x10,0x0b,
+- 0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,
+- 0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x81,0x00,0x01,
+- 0xff,0xce,0x97,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,
+- 0xcd,0x82,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcd,0x82,0x00,0xd3,0x54,0xd2,0x28,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,
+- 0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,
+- 0xb9,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,
+- 0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,
+- 0xce,0xb9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcd,0x82,0x00,
+- 0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x93,0x00,0x01,0xff,0xce,
+- 0x99,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x99,0xcc,0x93,0xcc,0x80,0x00,0x01,
+- 0xff,0xce,0x99,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x99,
+- 0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x99,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,
+- 0x01,0xff,0xce,0x99,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x99,0xcc,0x94,0xcd,
+- 0x82,0x00,0xcf,0x86,0xe5,0x13,0x01,0xd4,0x84,0xd3,0x42,0xd2,0x28,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,0x10,
+- 0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,
+- 0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x81,0x00,
+- 0x01,0xff,0xce,0xbf,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd2,0x28,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0x9f,0xcc,0x93,0x00,0x01,0xff,0xce,0x9f,0xcc,0x94,0x00,0x10,
+- 0x0b,0x01,0xff,0xce,0x9f,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x9f,0xcc,0x94,
+- 0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0x9f,0xcc,0x93,0xcc,0x81,0x00,
+- 0x01,0xff,0xce,0x9f,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd3,0x54,0xd2,0x28,0xd1,
+- 0x12,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x93,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,
+- 0x00,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,
+- 0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x93,0xcc,
+- 0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,
+- 0x85,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcd,0x82,0x00,0xd2,
+- 0x1c,0xd1,0x0d,0x10,0x04,0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0x00,0x10,0x04,
+- 0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x0f,0x10,0x04,0x00,
+- 0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0xcc,0x81,0x00,0x10,0x04,0x00,0x00,0x01,0xff,
+- 0xce,0xa5,0xcc,0x94,0xcd,0x82,0x00,0xd4,0xa8,0xd3,0x54,0xd2,0x28,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xcf,0x89,0xcc,0x93,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,0x10,
+- 0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,
+- 0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0x00,
+- 0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,
+- 0x93,0xcd,0x82,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0x00,0xd2,0x28,0xd1,
+- 0x12,0x10,0x09,0x01,0xff,0xce,0xa9,0xcc,0x93,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,
+- 0x00,0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xa9,
+- 0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,
+- 0x81,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,
+- 0xa9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcd,0x82,0x00,0xd3,
+- 0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x80,0x00,0x01,0xff,
+- 0xce,0xb1,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x80,0x00,0x01,0xff,
+- 0xce,0xb5,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x80,0x00,
+- 0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x80,0x00,
+- 0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
+- 0xbf,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xcf,
+- 0x85,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x91,0x12,0x10,0x09,0x01,
+- 0xff,0xcf,0x89,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0x00,0x00,0xe0,
+- 0xe1,0x02,0xcf,0x86,0xe5,0x91,0x01,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,
+- 0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,
+- 0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,
+- 0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,
+- 0xff,0xce,0xb1,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,
+- 0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xcd,
+- 0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x30,0xd1,
+- 0x16,0x10,0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0x91,
+- 0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x80,0xcd,
+- 0x85,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,
+- 0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0x91,
+- 0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,
+- 0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd3,
+- 0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x85,0x00,
+- 0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,
+- 0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xcd,0x85,
+- 0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,
+- 0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,
+- 0xb7,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,
+- 0xcd,0x85,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,0xcd,
+- 0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,
+- 0x97,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,0x80,
+- 0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x81,0xcd,
+- 0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,
+- 0xff,0xce,0x97,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,
+- 0xcd,0x82,0xcd,0x85,0x00,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,
+- 0xff,0xcf,0x89,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x85,
+- 0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,
+- 0xcf,0x89,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,
+- 0x89,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,
+- 0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,
+- 0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x30,0xd1,0x16,0x10,
+- 0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,
+- 0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,
+- 0x01,0xff,0xce,0xa9,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,
+- 0xff,0xce,0xa9,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,
+- 0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcd,0x82,0xcd,
+- 0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd3,0x49,0xd2,
+- 0x26,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,0xb1,
+- 0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,
+- 0xce,0xb1,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x81,0xcd,
+- 0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcd,0x82,0x00,0x01,0xff,0xce,
+- 0xb1,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x91,
+- 0xcc,0x86,0x00,0x01,0xff,0xce,0x91,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,0x91,
+- 0xcc,0x80,0x00,0x01,0xff,0xce,0x91,0xcc,0x81,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,
+- 0xce,0x91,0xcd,0x85,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xce,0xb9,0x00,0x01,0x00,
+- 0xcf,0x86,0xe5,0x16,0x01,0xd4,0x8f,0xd3,0x44,0xd2,0x21,0xd1,0x0d,0x10,0x04,0x01,
+- 0x00,0x01,0xff,0xc2,0xa8,0xcd,0x82,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x80,
+- 0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,
+- 0xce,0xb7,0xcc,0x81,0xcd,0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb7,0xcd,
+- 0x82,0x00,0x01,0xff,0xce,0xb7,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,
+- 0x09,0x01,0xff,0xce,0x95,0xcc,0x80,0x00,0x01,0xff,0xce,0x95,0xcc,0x81,0x00,0x10,
+- 0x09,0x01,0xff,0xce,0x97,0xcc,0x80,0x00,0x01,0xff,0xce,0x97,0xcc,0x81,0x00,0xd1,
+- 0x13,0x10,0x09,0x01,0xff,0xce,0x97,0xcd,0x85,0x00,0x01,0xff,0xe1,0xbe,0xbf,0xcc,
+- 0x80,0x00,0x10,0x0a,0x01,0xff,0xe1,0xbe,0xbf,0xcc,0x81,0x00,0x01,0xff,0xe1,0xbe,
+- 0xbf,0xcd,0x82,0x00,0xd3,0x40,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb9,
+- 0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,
+- 0xcc,0x88,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,0x51,0x04,
+- 0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,
+- 0x88,0xcd,0x82,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x86,
+- 0x00,0x01,0xff,0xce,0x99,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x80,
+- 0x00,0x01,0xff,0xce,0x99,0xcc,0x81,0x00,0xd1,0x0e,0x10,0x04,0x00,0x00,0x01,0xff,
+- 0xe1,0xbf,0xbe,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0xe1,0xbf,0xbe,0xcc,0x81,0x00,
+- 0x01,0xff,0xe1,0xbf,0xbe,0xcd,0x82,0x00,0xd4,0x93,0xd3,0x4e,0xd2,0x28,0xd1,0x12,
++ 0xff,0x79,0xcc,0x80,0x00,0x01,0xff,0x79,0xcc,0x80,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x79,0xcc,0xa3,0x00,0x01,0xff,0x79,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x79,
++ 0xcc,0x89,0x00,0x01,0xff,0x79,0xcc,0x89,0x00,0xd2,0x1c,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x79,0xcc,0x83,0x00,0x01,0xff,0x79,0xcc,0x83,0x00,0x10,0x08,0x0a,0xff,0xe1,
++ 0xbb,0xbb,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xe1,0xbb,0xbd,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xe1,0xbb,0xbf,0x00,0x0a,0x00,0xe1,0xbf,0x02,0xe0,0xa1,
++ 0x01,0xcf,0x86,0xd5,0xc6,0xd4,0x6c,0xd3,0x18,0xe2,0x0e,0x59,0xe1,0xf7,0x58,0x10,
++ 0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0x00,0xd2,
++ 0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,
++ 0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,
++ 0xce,0xb1,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,
++ 0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,
++ 0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,
++ 0x00,0xd3,0x18,0xe2,0x4a,0x59,0xe1,0x33,0x59,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,
++ 0x93,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,
++ 0xff,0xce,0xb5,0xcc,0x93,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0x10,0x0b,0x01,
++ 0xff,0xce,0xb5,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,0x80,
++ 0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xb5,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,
++ 0xce,0xb5,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd4,0x6c,0xd3,0x18,0xe2,0x74,0x59,
++ 0xe1,0x5d,0x59,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,0xb7,
++ 0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x93,0x00,
++ 0x01,0xff,0xce,0xb7,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,
++ 0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,
++ 0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,
++ 0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb7,
++ 0xcc,0x94,0xcd,0x82,0x00,0xd3,0x18,0xe2,0xb0,0x59,0xe1,0x99,0x59,0x10,0x09,0x01,
++ 0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0x00,0xd2,0x28,0xd1,
++ 0x12,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,
++ 0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,
++ 0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,
++ 0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,
++ 0xb9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcd,0x82,0x00,0xcf,
++ 0x86,0xd5,0xac,0xd4,0x5a,0xd3,0x18,0xe2,0xed,0x59,0xe1,0xd6,0x59,0x10,0x09,0x01,
++ 0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,0xd2,0x28,0xd1,
++ 0x12,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,
++ 0x00,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,
++ 0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,
++ 0x81,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd3,0x18,0xe2,
++ 0x17,0x5a,0xe1,0x00,0x5a,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x93,0x00,0x01,0xff,
++ 0xcf,0x85,0xcc,0x94,0x00,0xd2,0x1c,0xd1,0x0d,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,
++ 0x85,0xcc,0x94,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x80,
++ 0x00,0xd1,0x0f,0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x81,0x00,
++ 0x10,0x04,0x00,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcd,0x82,0x00,0xe4,0xd3,0x5a,
++ 0xd3,0x18,0xe2,0x52,0x5a,0xe1,0x3b,0x5a,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x93,
++ 0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,
++ 0xcf,0x89,0xcc,0x93,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,
++ 0xcf,0x89,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,0x00,
++ 0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xcf,
++ 0x89,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,
++ 0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0x00,0xe0,0xd9,0x02,0xcf,0x86,0xe5,
++ 0x91,0x01,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,
++ 0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,
++ 0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,
++ 0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,
++ 0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,
++ 0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xce,
++ 0xb1,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,
++ 0xce,0xb1,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xce,0xb9,0x00,
++ 0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,
++ 0xb1,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb1,
++ 0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0xce,
++ 0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,
++ 0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd3,0x64,0xd2,0x30,0xd1,0x16,
++ 0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,
++ 0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0xce,0xb9,
++ 0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,
++ 0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,
++ 0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x82,
++ 0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,
++ 0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xce,
++ 0xb7,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,
++ 0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,
++ 0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xce,
++ 0xb7,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,
++ 0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,
++ 0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,
++ 0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,0x01,0xff,
++ 0xcf,0x89,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,
++ 0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,
++ 0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,0x10,0x0d,
++ 0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,
++ 0x94,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,
++ 0xcc,0x93,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xce,0xb9,0x00,0x10,0x0d,
++ 0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,
++ 0x94,0xcc,0x80,0xce,0xb9,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,
++ 0xcc,0x81,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0xce,0xb9,0x00,
++ 0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xce,0xb9,0x00,0x01,0xff,0xcf,
++ 0x89,0xcc,0x94,0xcd,0x82,0xce,0xb9,0x00,0xd3,0x49,0xd2,0x26,0xd1,0x12,0x10,0x09,
++ 0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,0xb1,0xcc,0x84,0x00,0x10,0x0b,
++ 0x01,0xff,0xce,0xb1,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb1,0xce,0xb9,0x00,
++ 0xd1,0x0f,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,
++ 0x09,0x01,0xff,0xce,0xb1,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcd,0x82,0xce,0xb9,
++ 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,
++ 0xce,0xb1,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x80,0x00,0x01,0xff,
++ 0xce,0xb1,0xcc,0x81,0x00,0xe1,0xf3,0x5a,0x10,0x09,0x01,0xff,0xce,0xb1,0xce,0xb9,
++ 0x00,0x01,0x00,0xcf,0x86,0xd5,0xbd,0xd4,0x7e,0xd3,0x44,0xd2,0x21,0xd1,0x0d,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0xc2,0xa8,0xcd,0x82,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,
++ 0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xce,0xb7,0xce,0xb9,0x00,0xd1,0x0f,0x10,0x0b,
++ 0x01,0xff,0xce,0xb7,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,
++ 0xb7,0xcd,0x82,0x00,0x01,0xff,0xce,0xb7,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,0xd1,
++ 0x12,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x81,
++ 0x00,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x81,
++ 0x00,0xe1,0x02,0x5b,0x10,0x09,0x01,0xff,0xce,0xb7,0xce,0xb9,0x00,0x01,0xff,0xe1,
++ 0xbe,0xbf,0xcc,0x80,0x00,0xd3,0x18,0xe2,0x28,0x5b,0xe1,0x11,0x5b,0x10,0x09,0x01,
++ 0xff,0xce,0xb9,0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0xe2,0x4c,0x5b,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,
++ 0x84,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,
++ 0x81,0x00,0xd4,0x51,0xd3,0x18,0xe2,0x6f,0x5b,0xe1,0x58,0x5b,0x10,0x09,0x01,0xff,
++ 0xcf,0x85,0xcc,0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,0x00,0xd2,0x24,0xd1,0x12,
+ 0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,0x00,
+- 0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x88,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,
+- 0x88,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xcf,0x81,0xcc,0x93,0x00,0x01,
+- 0xff,0xcf,0x81,0xcc,0x94,0x00,0x10,0x09,0x01,0xff,0xcf,0x85,0xcd,0x82,0x00,0x01,
+- 0xff,0xcf,0x85,0xcc,0x88,0xcd,0x82,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,
+- 0xce,0xa5,0xcc,0x86,0x00,0x01,0xff,0xce,0xa5,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,
+- 0xce,0xa5,0xcc,0x80,0x00,0x01,0xff,0xce,0xa5,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,
+- 0x01,0xff,0xce,0xa1,0xcc,0x94,0x00,0x01,0xff,0xc2,0xa8,0xcc,0x80,0x00,0x10,0x09,
+- 0x01,0xff,0xc2,0xa8,0xcc,0x81,0x00,0x01,0xff,0x60,0x00,0xd3,0x3b,0xd2,0x18,0x51,
+- 0x04,0x00,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,
+- 0xcf,0x89,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x81,0xcd,
+- 0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcd,0x82,0x00,0x01,0xff,0xcf,
+- 0x89,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x9f,
+- 0xcc,0x80,0x00,0x01,0xff,0xce,0x9f,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xa9,
+- 0xcc,0x80,0x00,0x01,0xff,0xce,0xa9,0xcc,0x81,0x00,0xd1,0x10,0x10,0x09,0x01,0xff,
+- 0xce,0xa9,0xcd,0x85,0x00,0x01,0xff,0xc2,0xb4,0x00,0x10,0x04,0x01,0x00,0x00,0x00,
+- 0xe0,0x62,0x0c,0xcf,0x86,0xe5,0x9f,0x08,0xe4,0xf8,0x05,0xe3,0xdb,0x02,0xe2,0xa1,
+- 0x01,0xd1,0xb4,0xd0,0x3a,0xcf,0x86,0xd5,0x20,0x94,0x1c,0x93,0x18,0x92,0x14,0x91,
+- 0x10,0x10,0x08,0x01,0xff,0xe2,0x80,0x82,0x00,0x01,0xff,0xe2,0x80,0x83,0x00,0x01,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x14,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
+- 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x01,0x00,0xcf,0x86,0xd5,
+- 0x48,0xd4,0x1c,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
+- 0x00,0x06,0x00,0x52,0x04,0x04,0x00,0x11,0x04,0x04,0x00,0x06,0x00,0xd3,0x1c,0xd2,
+- 0x0c,0x51,0x04,0x06,0x00,0x10,0x04,0x06,0x00,0x07,0x00,0xd1,0x08,0x10,0x04,0x07,
+- 0x00,0x08,0x00,0x10,0x04,0x08,0x00,0x06,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,
+- 0x00,0x10,0x04,0x08,0x00,0x06,0x00,0xd4,0x1c,0xd3,0x10,0x52,0x04,0x06,0x00,0x91,
+- 0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x0f,0x00,0x92,0x08,0x11,0x04,0x0f,0x00,0x01,
+- 0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0xd0,0x7e,0xcf,0x86,0xd5,0x34,0xd4,0x14,0x53,0x04,0x01,
+- 0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xd3,
+- 0x10,0x52,0x04,0x08,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0c,0x00,0x0c,0x00,0x52,
+- 0x04,0x0c,0x00,0x91,0x08,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0xd4,0x1c,0x53,
+- 0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x02,0x00,0x91,
+- 0x08,0x10,0x04,0x03,0x00,0x04,0x00,0x04,0x00,0xd3,0x10,0xd2,0x08,0x11,0x04,0x06,
+- 0x00,0x08,0x00,0x11,0x04,0x08,0x00,0x0b,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0b,
+- 0x00,0x0c,0x00,0x10,0x04,0x0e,0x00,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x11,
+- 0x00,0x13,0x00,0xcf,0x86,0xd5,0x28,0x54,0x04,0x00,0x00,0xd3,0x0c,0x92,0x08,0x11,
+- 0x04,0x01,0xe6,0x01,0x01,0x01,0xe6,0xd2,0x0c,0x51,0x04,0x01,0x01,0x10,0x04,0x01,
+- 0x01,0x01,0xe6,0x91,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x01,0x00,0xd4,0x30,0xd3,
+- 0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x01,0xe6,0x04,0x00,0xd1,0x08,0x10,
+- 0x04,0x06,0x00,0x06,0x01,0x10,0x04,0x06,0x01,0x06,0xe6,0x92,0x10,0xd1,0x08,0x10,
+- 0x04,0x06,0xdc,0x06,0xe6,0x10,0x04,0x06,0x01,0x08,0x01,0x09,0xdc,0x93,0x10,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x0a,0xe6,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,
+- 0x81,0xd0,0x4f,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x29,0xd3,0x13,0x52,0x04,0x01,
+- 0x00,0x51,0x04,0x01,0x00,0x10,0x07,0x01,0xff,0xce,0xa9,0x00,0x01,0x00,0x92,0x12,
+- 0x51,0x04,0x01,0x00,0x10,0x06,0x01,0xff,0x4b,0x00,0x01,0xff,0x41,0xcc,0x8a,0x00,
+- 0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,
+- 0x10,0x04,0x04,0x00,0x07,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x06,0x00,0x06,0x00,
+- 0xcf,0x86,0x95,0x2c,0xd4,0x18,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0xd1,0x08,
+- 0x10,0x04,0x08,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x93,0x10,0x92,0x0c,
+- 0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0xd0,0x68,0xcf,0x86,0xd5,0x48,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,
+- 0x10,0x04,0x01,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x11,0x00,0x00,0x00,0x53,0x04,
+- 0x01,0x00,0x92,0x18,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x86,0x90,0xcc,
+- 0xb8,0x00,0x01,0xff,0xe2,0x86,0x92,0xcc,0xb8,0x00,0x01,0x00,0x94,0x1a,0x53,0x04,
+- 0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x86,
+- 0x94,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x2e,0x94,0x2a,0x53,0x04,
+- 0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x87,
+- 0x90,0xcc,0xb8,0x00,0x10,0x0a,0x01,0xff,0xe2,0x87,0x94,0xcc,0xb8,0x00,0x01,0xff,
+- 0xe2,0x87,0x92,0xcc,0xb8,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x04,0x00,0x93,0x08,0x12,0x04,
+- 0x04,0x00,0x06,0x00,0x06,0x00,0xe2,0x38,0x02,0xe1,0x3f,0x01,0xd0,0x68,0xcf,0x86,
+- 0xd5,0x3e,0x94,0x3a,0xd3,0x16,0x52,0x04,0x01,0x00,0x91,0x0e,0x10,0x0a,0x01,0xff,
+- 0xe2,0x88,0x83,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0xd2,0x12,0x91,0x0e,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0xe2,0x88,0x88,0xcc,0xb8,0x00,0x01,0x00,0x91,0x0e,0x10,0x0a,
+- 0x01,0xff,0xe2,0x88,0x8b,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x24,
+- 0x93,0x20,0x52,0x04,0x01,0x00,0xd1,0x0e,0x10,0x0a,0x01,0xff,0xe2,0x88,0xa3,0xcc,
+- 0xb8,0x00,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x88,0xa5,0xcc,0xb8,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x48,0x94,0x44,0xd3,0x2e,0xd2,0x12,0x91,0x0e,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x88,0xbc,0xcc,0xb8,0x00,0x01,0x00,0xd1,0x0e,
+- 0x10,0x0a,0x01,0xff,0xe2,0x89,0x83,0xcc,0xb8,0x00,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xe2,0x89,0x85,0xcc,0xb8,0x00,0x92,0x12,0x91,0x0e,0x10,0x04,0x01,0x00,
+- 0x01,0xff,0xe2,0x89,0x88,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x40,
+- 0xd3,0x1e,0x92,0x1a,0xd1,0x0c,0x10,0x08,0x01,0xff,0x3d,0xcc,0xb8,0x00,0x01,0x00,
+- 0x10,0x0a,0x01,0xff,0xe2,0x89,0xa1,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x52,0x04,
+- 0x01,0x00,0xd1,0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x89,0x8d,0xcc,0xb8,0x00,
+- 0x10,0x08,0x01,0xff,0x3c,0xcc,0xb8,0x00,0x01,0xff,0x3e,0xcc,0xb8,0x00,0xd3,0x30,
+- 0xd2,0x18,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xa4,0xcc,0xb8,0x00,0x01,0xff,
+- 0xe2,0x89,0xa5,0xcc,0xb8,0x00,0x01,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,
+- 0xb2,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xb3,0xcc,0xb8,0x00,0x01,0x00,0x92,0x18,
+- 0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xb6,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,
+- 0xb7,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0xd0,0x86,0xcf,0x86,0xd5,0x50,0x94,0x4c,
+- 0xd3,0x30,0xd2,0x18,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xba,0xcc,0xb8,0x00,
+- 0x01,0xff,0xe2,0x89,0xbb,0xcc,0xb8,0x00,0x01,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,
+- 0xe2,0x8a,0x82,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x83,0xcc,0xb8,0x00,0x01,0x00,
+- 0x92,0x18,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0x86,0xcc,0xb8,0x00,0x01,0xff,
+- 0xe2,0x8a,0x87,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x30,0x53,0x04,
+- 0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xa2,0xcc,
+- 0xb8,0x00,0x01,0xff,0xe2,0x8a,0xa8,0xcc,0xb8,0x00,0x10,0x0a,0x01,0xff,0xe2,0x8a,
+- 0xa9,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0xab,0xcc,0xb8,0x00,0x01,0x00,0xcf,0x86,
+- 0x55,0x04,0x01,0x00,0xd4,0x5c,0xd3,0x2c,0x92,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,
+- 0xe2,0x89,0xbc,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xbd,0xcc,0xb8,0x00,0x10,0x0a,
+- 0x01,0xff,0xe2,0x8a,0x91,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x92,0xcc,0xb8,0x00,
+- 0x01,0x00,0xd2,0x18,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xb2,0xcc,
+- 0xb8,0x00,0x01,0xff,0xe2,0x8a,0xb3,0xcc,0xb8,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,
+- 0xe2,0x8a,0xb4,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0xb5,0xcc,0xb8,0x00,0x01,0x00,
+- 0x93,0x0c,0x92,0x08,0x11,0x04,0x01,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0xd1,0x64,
+- 0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x01,0x00,0x04,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x20,0x53,0x04,
+- 0x01,0x00,0x92,0x18,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x80,0x88,0x00,
+- 0x10,0x08,0x01,0xff,0xe3,0x80,0x89,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,
+- 0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,
+- 0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x04,0x00,
+- 0x04,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,
+- 0x92,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x06,0x00,0x06,0x00,0x06,0x00,
+- 0xcf,0x86,0xd5,0x2c,0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x51,0x04,
+- 0x06,0x00,0x10,0x04,0x06,0x00,0x07,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x07,0x00,0x08,0x00,0x08,0x00,0x08,0x00,0x12,0x04,0x08,0x00,0x09,0x00,0xd4,0x14,
+- 0x53,0x04,0x09,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0c,0x00,0x0c,0x00,
+- 0x0c,0x00,0xd3,0x08,0x12,0x04,0x0c,0x00,0x10,0x00,0xd2,0x0c,0x51,0x04,0x10,0x00,
+- 0x10,0x04,0x10,0x00,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x13,0x00,
+- 0xd3,0xa6,0xd2,0x74,0xd1,0x40,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x18,
+- 0x93,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,0x04,
+- 0x04,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,
+- 0x01,0x00,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x01,0x00,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,
+- 0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
+- 0x06,0x00,0x06,0x00,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x51,0x04,0x06,0x00,
+- 0x10,0x04,0x06,0x00,0x07,0x00,0xd1,0x06,0xcf,0x06,0x01,0x00,0xd0,0x1a,0xcf,0x86,
+- 0x95,0x14,0x54,0x04,0x01,0x00,0x93,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,
+- 0x06,0x00,0x06,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,
+- 0x13,0x04,0x04,0x00,0x06,0x00,0xd2,0xdc,0xd1,0x48,0xd0,0x26,0xcf,0x86,0x95,0x20,
+- 0x54,0x04,0x01,0x00,0xd3,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x07,0x00,0x06,0x00,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x08,0x00,0x04,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
+- 0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,
+- 0x04,0x00,0x06,0x00,0x06,0x00,0x52,0x04,0x06,0x00,0x11,0x04,0x06,0x00,0x08,0x00,
+- 0xd0,0x5e,0xcf,0x86,0xd5,0x2c,0xd4,0x10,0x53,0x04,0x06,0x00,0x92,0x08,0x11,0x04,
+- 0x06,0x00,0x07,0x00,0x07,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x07,0x00,0x08,0x00,
+- 0x08,0x00,0x52,0x04,0x08,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0a,0x00,0x0b,0x00,
+- 0xd4,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x07,0x00,0x08,0x00,0x08,0x00,0x08,0x00,
+- 0xd3,0x10,0x92,0x0c,0x51,0x04,0x08,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
+- 0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x86,
+- 0xd5,0x1c,0x94,0x18,0xd3,0x08,0x12,0x04,0x0a,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,
+- 0x51,0x04,0x0b,0x00,0x10,0x04,0x0c,0x00,0x0b,0x00,0x0b,0x00,0x94,0x14,0x93,0x10,
+- 0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0c,0x00,0x0b,0x00,0x0c,0x00,0x0b,0x00,
+- 0x0b,0x00,0xd1,0xa8,0xd0,0x42,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x18,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x0c,0x00,0x01,0x00,0x92,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x01,0x00,0x01,0x00,
+- 0x94,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x40,0xd4,0x18,0x53,0x04,0x01,0x00,
+- 0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,0x10,0x04,0x0c,0x00,
+- 0x01,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0c,0x00,
+- 0x51,0x04,0x0c,0x00,0x10,0x04,0x01,0x00,0x0b,0x00,0x52,0x04,0x01,0x00,0x51,0x04,
+- 0x01,0x00,0x10,0x04,0x01,0x00,0x0c,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x0c,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x06,0x00,0x93,0x0c,0x52,0x04,
+- 0x06,0x00,0x11,0x04,0x06,0x00,0x01,0x00,0x01,0x00,0xd0,0x3e,0xcf,0x86,0xd5,0x18,
+- 0x54,0x04,0x01,0x00,0x93,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x0c,0x00,0x0c,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x0c,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,
+- 0x01,0x00,0x10,0x04,0x01,0x00,0x0c,0x00,0xcf,0x86,0xd5,0x2c,0x94,0x28,0xd3,0x10,
+- 0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x09,0x00,0xd2,0x0c,
+- 0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x0d,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,
+- 0x0d,0x00,0x0c,0x00,0x06,0x00,0x94,0x0c,0x53,0x04,0x06,0x00,0x12,0x04,0x06,0x00,
+- 0x0a,0x00,0x06,0x00,0xe4,0x39,0x01,0xd3,0x0c,0xd2,0x06,0xcf,0x06,0x04,0x00,0xcf,
+- 0x06,0x06,0x00,0xd2,0x30,0xd1,0x06,0xcf,0x06,0x06,0x00,0xd0,0x06,0xcf,0x06,0x06,
+- 0x00,0xcf,0x86,0x95,0x1e,0x54,0x04,0x06,0x00,0x53,0x04,0x06,0x00,0x52,0x04,0x06,
+- 0x00,0x91,0x0e,0x10,0x0a,0x06,0xff,0xe2,0xab,0x9d,0xcc,0xb8,0x00,0x06,0x00,0x06,
+- 0x00,0x06,0x00,0xd1,0x80,0xd0,0x3a,0xcf,0x86,0xd5,0x28,0xd4,0x10,0x53,0x04,0x07,
+- 0x00,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x08,0x00,0xd3,0x08,0x12,0x04,0x08,
+- 0x00,0x09,0x00,0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,
+- 0x00,0x94,0x0c,0x93,0x08,0x12,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xcf,
+- 0x86,0xd5,0x30,0xd4,0x14,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,
+- 0x04,0x0a,0x00,0x10,0x00,0x10,0x00,0xd3,0x10,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,
+- 0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0x92,0x08,0x11,0x04,0x0b,0x00,0x10,0x00,0x10,
+- 0x00,0x54,0x04,0x10,0x00,0x93,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x00,0x00,0x10,
+- 0x00,0x10,0x00,0xd0,0x32,0xcf,0x86,0xd5,0x14,0x54,0x04,0x10,0x00,0x93,0x0c,0x52,
+- 0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0x10,0x00,0x54,0x04,0x10,0x00,0x53,
+- 0x04,0x10,0x00,0xd2,0x08,0x11,0x04,0x10,0x00,0x14,0x00,0x91,0x08,0x10,0x04,0x14,
+- 0x00,0x10,0x00,0x10,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x15,0x00,0x10,0x00,0x10,0x00,0x93,0x10,0x92,
+- 0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x13,0x00,0x14,0x00,0x14,0x00,0x14,0x00,0xd4,
+- 0x0c,0x53,0x04,0x14,0x00,0x12,0x04,0x14,0x00,0x11,0x00,0x53,0x04,0x14,0x00,0x52,
+- 0x04,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x15,0x00,0xe3,0xb9,0x01,
+- 0xd2,0xac,0xd1,0x68,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x08,0x00,0x94,0x14,0x53,0x04,
+- 0x08,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,
+- 0x08,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x52,0x04,
+- 0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0xd4,0x14,0x53,0x04,
+- 0x09,0x00,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
+- 0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0a,0x00,0x0a,0x00,0x09,0x00,
+- 0x52,0x04,0x0a,0x00,0x11,0x04,0x0a,0x00,0x0b,0x00,0xd0,0x06,0xcf,0x06,0x08,0x00,
+- 0xcf,0x86,0x55,0x04,0x08,0x00,0xd4,0x1c,0x53,0x04,0x08,0x00,0xd2,0x0c,0x51,0x04,
+- 0x08,0x00,0x10,0x04,0x08,0x00,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,
+- 0x0b,0xe6,0xd3,0x0c,0x92,0x08,0x11,0x04,0x0b,0xe6,0x0d,0x00,0x00,0x00,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x00,0x00,0x08,0x00,0x08,0x00,0x08,0x00,0xd1,0x6c,0xd0,0x2a,
+- 0xcf,0x86,0x55,0x04,0x08,0x00,0x94,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,
+- 0x08,0x00,0x10,0x04,0x00,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,
+- 0x00,0x00,0x0d,0x00,0x00,0x00,0x08,0x00,0xcf,0x86,0x55,0x04,0x08,0x00,0xd4,0x1c,
+- 0xd3,0x0c,0x52,0x04,0x08,0x00,0x11,0x04,0x08,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,
+- 0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x08,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,
+- 0x00,0x00,0x10,0x04,0x00,0x00,0x0c,0x09,0xd0,0x5a,0xcf,0x86,0xd5,0x18,0x54,0x04,
+- 0x08,0x00,0x93,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,
+- 0x00,0x00,0x00,0x00,0xd4,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,
+- 0x10,0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,
+- 0x08,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,
+- 0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,
+- 0x00,0x00,0xcf,0x86,0x95,0x40,0xd4,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,
+- 0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,
+- 0x10,0x04,0x08,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,
+- 0x10,0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,
+- 0x08,0x00,0x00,0x00,0x0a,0xe6,0xd2,0x9c,0xd1,0x68,0xd0,0x32,0xcf,0x86,0xd5,0x14,
+- 0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x52,0x04,0x0a,0x00,0x11,0x04,0x08,0x00,
+- 0x0a,0x00,0x54,0x04,0x0a,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0a,0x00,
+- 0x0b,0x00,0x0d,0x00,0x0d,0x00,0x12,0x04,0x0d,0x00,0x10,0x00,0xcf,0x86,0x95,0x30,
+- 0x94,0x2c,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x12,0x00,
+- 0x91,0x08,0x10,0x04,0x12,0x00,0x13,0x00,0x13,0x00,0xd2,0x08,0x11,0x04,0x13,0x00,
+- 0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x15,0x00,0x00,0x00,0x00,0x00,
+- 0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,0x0c,
+- 0x51,0x04,0x04,0x00,0x10,0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,0x86,
+- 0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x93,0x08,0x12,0x04,0x04,0x00,0x00,0x00,
+- 0x00,0x00,0xd1,0x06,0xcf,0x06,0x04,0x00,0xd0,0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,
+- 0xd5,0x14,0x54,0x04,0x04,0x00,0x93,0x0c,0x52,0x04,0x04,0x00,0x11,0x04,0x04,0x00,
+- 0x00,0x00,0x00,0x00,0x54,0x04,0x00,0x00,0x53,0x04,0x04,0x00,0x12,0x04,0x04,0x00,
+- 0x00,0x00,0xcf,0x86,0xe5,0x8d,0x05,0xe4,0x86,0x05,0xe3,0x7d,0x04,0xe2,0xe4,0x03,
+- 0xe1,0xc0,0x01,0xd0,0x3e,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x1c,0x53,0x04,0x01,
+- 0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0xda,0x01,0xe4,0x91,0x08,0x10,
+- 0x04,0x01,0xe8,0x01,0xde,0x01,0xe0,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x04,
+- 0x00,0x10,0x04,0x04,0x00,0x06,0x00,0x51,0x04,0x06,0x00,0x10,0x04,0x04,0x00,0x01,
+- 0x00,0xcf,0x86,0xd5,0xaa,0xd4,0x32,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,
+- 0xff,0xe3,0x81,0x8b,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,
+- 0x8d,0xe3,0x82,0x99,0x00,0x01,0x00,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,
+- 0xff,0xe3,0x81,0x8f,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,
+- 0x91,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x93,
+- 0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x95,0xe3,0x82,0x99,
+- 0x00,0x01,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x97,0xe3,0x82,
+- 0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x99,0xe3,0x82,0x99,0x00,0x01,
+- 0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x9b,0xe3,0x82,0x99,0x00,0x01,0x00,
+- 0x10,0x0b,0x01,0xff,0xe3,0x81,0x9d,0xe3,0x82,0x99,0x00,0x01,0x00,0xd4,0x53,0xd3,
+- 0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x9f,0xe3,0x82,0x99,0x00,
+- 0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0xa1,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,
+- 0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x81,0xa4,0xe3,0x82,0x99,0x00,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0xe3,0x81,0xa6,0xe3,0x82,0x99,0x00,0x92,0x13,0x91,0x0f,0x10,
+- 0x04,0x01,0x00,0x01,0xff,0xe3,0x81,0xa8,0xe3,0x82,0x99,0x00,0x01,0x00,0x01,0x00,
+- 0xd3,0x4a,0xd2,0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe3,0x81,0xaf,0xe3,0x82,0x99,
+- 0x00,0x01,0xff,0xe3,0x81,0xaf,0xe3,0x82,0x9a,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
+- 0xe3,0x81,0xb2,0xe3,0x82,0x99,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb2,
+- 0xe3,0x82,0x9a,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb5,0xe3,0x82,0x99,
+- 0x00,0x01,0xff,0xe3,0x81,0xb5,0xe3,0x82,0x9a,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0xe3,0x81,0xb8,0xe3,0x82,0x99,0x00,0x10,0x0b,0x01,0xff,0xe3,
+- 0x81,0xb8,0xe3,0x82,0x9a,0x00,0x01,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xe3,0x81,
+- 0xbb,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x81,0xbb,0xe3,0x82,0x9a,0x00,0x01,0x00,
+- 0xd0,0xee,0xcf,0x86,0xd5,0x42,0x54,0x04,0x01,0x00,0xd3,0x1b,0x52,0x04,0x01,0x00,
+- 0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x86,0xe3,0x82,0x99,0x00,0x06,0x00,0x10,
+- 0x04,0x06,0x00,0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x08,0x10,
+- 0x04,0x01,0x08,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0x9d,
+- 0xe3,0x82,0x99,0x00,0x06,0x00,0xd4,0x32,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x06,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,
+- 0x01,0xff,0xe3,0x82,0xab,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,
+- 0x82,0xad,0xe3,0x82,0x99,0x00,0x01,0x00,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,
+- 0x01,0xff,0xe3,0x82,0xaf,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,
+- 0x82,0xb1,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,
+- 0xb3,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb5,0xe3,0x82,
+- 0x99,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb7,0xe3,
+- 0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb9,0xe3,0x82,0x99,0x00,
+- 0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xbb,0xe3,0x82,0x99,0x00,0x01,
+- 0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xbd,0xe3,0x82,0x99,0x00,0x01,0x00,0xcf,0x86,
+- 0xd5,0xd5,0xd4,0x53,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,
+- 0xbf,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0x81,0xe3,0x82,
+- 0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x84,0xe3,
+- 0x82,0x99,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x86,0xe3,0x82,0x99,0x00,
+- 0x92,0x13,0x91,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x88,0xe3,0x82,0x99,
+- 0x00,0x01,0x00,0x01,0x00,0xd3,0x4a,0xd2,0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe3,
+- 0x83,0x8f,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x8f,0xe3,0x82,0x9a,0x00,0x10,
+- 0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x92,0xe3,0x82,0x99,0x00,0xd1,0x0f,0x10,0x0b,
+- 0x01,0xff,0xe3,0x83,0x92,0xe3,0x82,0x9a,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,
+- 0x83,0x95,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x95,0xe3,0x82,0x9a,0x00,0xd2,
+- 0x1e,0xd1,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x98,0xe3,0x82,0x99,0x00,
+- 0x10,0x0b,0x01,0xff,0xe3,0x83,0x98,0xe3,0x82,0x9a,0x00,0x01,0x00,0x91,0x16,0x10,
+- 0x0b,0x01,0xff,0xe3,0x83,0x9b,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x9b,0xe3,
+- 0x82,0x9a,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x22,0x52,0x04,0x01,0x00,0xd1,
+- 0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xa6,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x04,
+- 0x01,0x00,0x01,0xff,0xe3,0x83,0xaf,0xe3,0x82,0x99,0x00,0xd2,0x25,0xd1,0x16,0x10,
+- 0x0b,0x01,0xff,0xe3,0x83,0xb0,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0xb1,0xe3,
+- 0x82,0x99,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0xb2,0xe3,0x82,0x99,0x00,0x01,0x00,
+- 0x51,0x04,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0xbd,0xe3,0x82,0x99,0x00,0x06,
+- 0x00,0xd1,0x4c,0xd0,0x46,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x00,
+- 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,
+- 0x18,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x0a,
+- 0x00,0x10,0x04,0x13,0x00,0x14,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x06,0x01,0x00,0xd0,0x32,0xcf,
+- 0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,
+- 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x54,0x04,0x04,0x00,0x53,0x04,0x04,
+- 0x00,0x92,0x0c,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0xcf,
+- 0x86,0xd5,0x08,0x14,0x04,0x08,0x00,0x0a,0x00,0x94,0x0c,0x93,0x08,0x12,0x04,0x0a,
+- 0x00,0x00,0x00,0x00,0x00,0x06,0x00,0xd2,0xa4,0xd1,0x5c,0xd0,0x22,0xcf,0x86,0x95,
+- 0x1c,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,
+- 0x04,0x01,0x00,0x07,0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x01,0x00,0xcf,0x86,0xd5,
+- 0x20,0xd4,0x0c,0x93,0x08,0x12,0x04,0x01,0x00,0x0b,0x00,0x0b,0x00,0x93,0x10,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0x54,
+- 0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x07,0x00,0x10,
+- 0x04,0x08,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,
+- 0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x06,0x00,0x06,
+- 0x00,0x06,0x00,0xcf,0x86,0xd5,0x10,0x94,0x0c,0x53,0x04,0x01,0x00,0x12,0x04,0x01,
+- 0x00,0x07,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
+- 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x16,0x00,0xd1,0x30,0xd0,0x06,0xcf,
+- 0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x10,0x52,
+- 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x07,0x00,0x92,0x0c,0x51,
+- 0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x01,0x00,0x01,0x00,0xd0,0x06,0xcf,0x06,0x01,
+- 0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
+- 0x00,0x11,0x04,0x01,0x00,0x07,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,
+- 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x07,0x00,0xcf,0x06,0x04,
+- 0x00,0xcf,0x06,0x04,0x00,0xd1,0x48,0xd0,0x40,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x04,
+- 0x00,0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x2c,0xd2,0x06,0xcf,0x06,0x04,0x00,0xd1,
+- 0x06,0xcf,0x06,0x04,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,
+- 0x00,0x93,0x0c,0x52,0x04,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0xcf,
+- 0x06,0x07,0x00,0xcf,0x06,0x01,0x00,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xcf,
+- 0x06,0x01,0x00,0xe2,0x71,0x05,0xd1,0x8c,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,
+- 0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xd4,0x06,0xcf,0x06,0x01,0x00,0xd3,0x06,
+- 0xcf,0x06,0x01,0x00,0xd2,0x06,0xcf,0x06,0x01,0x00,0xd1,0x06,0xcf,0x06,0x01,0x00,
+- 0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x01,0x00,
+- 0x11,0x04,0x01,0x00,0x08,0x00,0x08,0x00,0x53,0x04,0x08,0x00,0x12,0x04,0x08,0x00,
+- 0x0a,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x18,0xd3,0x08,0x12,0x04,0x0a,0x00,0x0b,0x00,
+- 0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,0x11,0x00,0x11,0x00,0x93,0x0c,
+- 0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x13,0x00,0x13,0x00,0x94,0x14,0x53,0x04,
+- 0x13,0x00,0x92,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x14,0x00,0x14,0x00,
+- 0x00,0x00,0xe0,0xdb,0x04,0xcf,0x86,0xe5,0xdf,0x01,0xd4,0x06,0xcf,0x06,0x04,0x00,
+- 0xd3,0x74,0xd2,0x6e,0xd1,0x06,0xcf,0x06,0x04,0x00,0xd0,0x3e,0xcf,0x86,0xd5,0x18,
+- 0x94,0x14,0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,
+- 0x00,0x00,0x00,0x00,0x04,0x00,0xd4,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x04,0x00,
+- 0x06,0x00,0x04,0x00,0x04,0x00,0x93,0x10,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,
+- 0x06,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,0x86,0x95,0x24,0x94,0x20,0x93,0x1c,
+- 0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x06,0x00,0x04,0x00,0xd1,0x08,0x10,0x04,
+- 0x04,0x00,0x06,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x0b,0x00,0x0b,0x00,
+- 0xcf,0x06,0x0a,0x00,0xd2,0x84,0xd1,0x4c,0xd0,0x16,0xcf,0x86,0x55,0x04,0x0a,0x00,
+- 0x94,0x0c,0x53,0x04,0x0a,0x00,0x12,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,
+- 0x55,0x04,0x0a,0x00,0xd4,0x1c,0xd3,0x0c,0x92,0x08,0x11,0x04,0x0c,0x00,0x0a,0x00,
+- 0x0a,0x00,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,0xe6,
+- 0xd3,0x08,0x12,0x04,0x0a,0x00,0x0d,0xe6,0x52,0x04,0x0d,0xe6,0x11,0x04,0x0a,0xe6,
+- 0x0a,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,
+- 0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x11,0xe6,0x0d,0xe6,0x0b,0x00,
+- 0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,
+- 0x0b,0xe6,0x0b,0x00,0x0b,0x00,0x00,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x24,
+- 0x54,0x04,0x08,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,
+- 0x08,0x00,0x09,0x00,0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,
+- 0x0a,0x00,0x94,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
+- 0x0a,0x00,0x0a,0x00,0xcf,0x06,0x0a,0x00,0xd0,0x5e,0xcf,0x86,0xd5,0x28,0xd4,0x18,
+- 0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0xd1,0x08,0x10,0x04,0x0a,0x00,0x0c,0x00,
+- 0x10,0x04,0x0c,0x00,0x11,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x0c,0x00,0x0d,0x00,
+- 0x10,0x00,0x10,0x00,0xd4,0x1c,0x53,0x04,0x0c,0x00,0xd2,0x0c,0x51,0x04,0x0c,0x00,
+- 0x10,0x04,0x0d,0x00,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x12,0x00,0x14,0x00,
+- 0xd3,0x0c,0x92,0x08,0x11,0x04,0x10,0x00,0x11,0x00,0x11,0x00,0x92,0x08,0x11,0x04,
+- 0x14,0x00,0x15,0x00,0x15,0x00,0xcf,0x86,0xd5,0x1c,0x94,0x18,0x93,0x14,0xd2,0x08,
+- 0x11,0x04,0x00,0x00,0x15,0x00,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x54,0x04,0x00,0x00,0xd3,0x10,0x52,0x04,0x00,0x00,0x51,0x04,
+- 0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x92,0x0c,0x51,0x04,0x0d,0x00,0x10,0x04,
+- 0x0c,0x00,0x0a,0x00,0x0a,0x00,0xe4,0xf2,0x02,0xe3,0x65,0x01,0xd2,0x98,0xd1,0x48,
+- 0xd0,0x36,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x08,0x00,0x51,0x04,
+- 0x08,0x00,0x10,0x04,0x08,0x09,0x08,0x00,0x08,0x00,0x08,0x00,0xd4,0x0c,0x53,0x04,
+- 0x08,0x00,0x12,0x04,0x08,0x00,0x00,0x00,0x53,0x04,0x0b,0x00,0x92,0x08,0x11,0x04,
+- 0x0b,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x09,0x00,0x54,0x04,0x09,0x00,
+- 0x13,0x04,0x09,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x0a,0x00,0xcf,0x86,0xd5,0x2c,
+- 0xd4,0x1c,0xd3,0x10,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x09,0x12,0x00,
+- 0x00,0x00,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x0a,0x00,0x53,0x04,0x0a,0x00,
+- 0x92,0x08,0x11,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x54,0x04,0x0b,0xe6,0xd3,0x0c,
+- 0x92,0x08,0x11,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,0x11,0x04,
+- 0x11,0x00,0x14,0x00,0xd1,0x60,0xd0,0x22,0xcf,0x86,0x55,0x04,0x0a,0x00,0x94,0x18,
+- 0x53,0x04,0x0a,0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,0xdc,
+- 0x11,0x04,0x0a,0xdc,0x0a,0x00,0x0a,0x00,0xcf,0x86,0xd5,0x24,0x54,0x04,0x0a,0x00,
+- 0xd3,0x10,0x92,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,0x09,0x00,0x00,
+- 0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,0x00,0x54,0x04,
+- 0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,
+- 0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,
+- 0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0x07,0x0b,0x00,
+- 0x0b,0x00,0xcf,0x86,0xd5,0x34,0xd4,0x20,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x0b,0x09,0x0b,0x00,0x0b,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,
+- 0x10,0x04,0x00,0x00,0x0b,0x00,0x53,0x04,0x0b,0x00,0xd2,0x08,0x11,0x04,0x0b,0x00,
+- 0x00,0x00,0x11,0x04,0x00,0x00,0x0b,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,
+- 0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0xd2,0xd0,
+- 0xd1,0x50,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x0a,0x00,0x54,0x04,0x0a,0x00,0x93,0x10,
+- 0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,
+- 0xcf,0x86,0xd5,0x20,0xd4,0x10,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x11,0x04,
+- 0x0a,0x00,0x00,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,0x11,0x04,0x0a,0x00,0x00,0x00,
+- 0x0a,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x12,0x04,0x0b,0x00,0x10,0x00,
+- 0xd0,0x3a,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0xd3,0x1c,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0xe6,0xd1,0x08,0x10,0x04,0x0b,0xdc,
+- 0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,0xe6,
+- 0x0b,0x00,0x0b,0x00,0x11,0x04,0x0b,0x00,0x0b,0xe6,0xcf,0x86,0xd5,0x2c,0xd4,0x18,
+- 0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,0x10,0x04,0x0b,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,
+- 0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,0x54,0x04,0x0d,0x00,0x93,0x10,0x52,0x04,
+- 0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x09,0x00,0x00,0x00,0x00,0xd1,0x8c,
+- 0xd0,0x72,0xcf,0x86,0xd5,0x4c,0xd4,0x30,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
+- 0x00,0x00,0x0c,0x00,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,
++ 0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,
++ 0xe1,0x8f,0x5b,0x10,0x09,0x01,0xff,0xcf,0x81,0xcc,0x94,0x00,0x01,0xff,0xc2,0xa8,
++ 0xcc,0x80,0x00,0xd3,0x3b,0xd2,0x18,0x51,0x04,0x00,0x00,0x10,0x0b,0x01,0xff,0xcf,
++ 0x89,0xcc,0x80,0xce,0xb9,0x00,0x01,0xff,0xcf,0x89,0xce,0xb9,0x00,0xd1,0x0f,0x10,
++ 0x0b,0x01,0xff,0xcf,0x89,0xcc,0x81,0xce,0xb9,0x00,0x00,0x00,0x10,0x09,0x01,0xff,
++ 0xcf,0x89,0xcd,0x82,0x00,0x01,0xff,0xcf,0x89,0xcd,0x82,0xce,0xb9,0x00,0xd2,0x24,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,
++ 0x81,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,
++ 0x81,0x00,0xe1,0x99,0x5b,0x10,0x09,0x01,0xff,0xcf,0x89,0xce,0xb9,0x00,0x01,0xff,
++ 0xc2,0xb4,0x00,0xe0,0x0c,0x68,0xcf,0x86,0xe5,0x23,0x02,0xe4,0x25,0x01,0xe3,0x85,
++ 0x5e,0xd2,0x2a,0xe1,0x5f,0x5c,0xe0,0xdd,0x5b,0xcf,0x86,0xe5,0xbb,0x5b,0x94,0x1b,
++ 0xe3,0xa4,0x5b,0x92,0x14,0x91,0x10,0x10,0x08,0x01,0xff,0xe2,0x80,0x82,0x00,0x01,
++ 0xff,0xe2,0x80,0x83,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd1,0xd6,0xd0,0x46,0xcf,
++ 0x86,0x55,0x04,0x01,0x00,0xd4,0x29,0xd3,0x13,0x52,0x04,0x01,0x00,0x51,0x04,0x01,
++ 0x00,0x10,0x07,0x01,0xff,0xcf,0x89,0x00,0x01,0x00,0x92,0x12,0x51,0x04,0x01,0x00,
++ 0x10,0x06,0x01,0xff,0x6b,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x01,0x00,0xe3,0x25,
++ 0x5d,0x92,0x10,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0x8e,0x00,0x01,
++ 0x00,0x01,0x00,0xcf,0x86,0xd5,0x0a,0xe4,0x42,0x5d,0x63,0x2d,0x5d,0x06,0x00,0x94,
++ 0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb0,0x00,0x01,
++ 0xff,0xe2,0x85,0xb1,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xb2,0x00,0x01,0xff,0xe2,
++ 0x85,0xb3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb4,0x00,0x01,0xff,0xe2,
++ 0x85,0xb5,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xb6,0x00,0x01,0xff,0xe2,0x85,0xb7,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xb8,0x00,0x01,0xff,0xe2,
++ 0x85,0xb9,0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xba,0x00,0x01,0xff,0xe2,0x85,0xbb,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x85,0xbc,0x00,0x01,0xff,0xe2,0x85,0xbd,
++ 0x00,0x10,0x08,0x01,0xff,0xe2,0x85,0xbe,0x00,0x01,0xff,0xe2,0x85,0xbf,0x00,0x01,
++ 0x00,0xe0,0x34,0x5d,0xcf,0x86,0xe5,0x13,0x5d,0xe4,0xf2,0x5c,0xe3,0xe1,0x5c,0xe2,
++ 0xd4,0x5c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0xff,0xe2,0x86,0x84,0x00,
++ 0xe3,0x23,0x61,0xe2,0xf0,0x60,0xd1,0x0c,0xe0,0x9d,0x60,0xcf,0x86,0x65,0x7e,0x60,
++ 0x01,0x00,0xd0,0x62,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x18,
++ 0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x90,0x00,
++ 0x01,0xff,0xe2,0x93,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x93,
++ 0x92,0x00,0x01,0xff,0xe2,0x93,0x93,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x94,0x00,
++ 0x01,0xff,0xe2,0x93,0x95,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe2,0x93,0x96,0x00,
++ 0x01,0xff,0xe2,0x93,0x97,0x00,0x10,0x08,0x01,0xff,0xe2,0x93,0x98,0x00,0x01,0xff,
++ 0xe2,0x93,0x99,0x00,0xcf,0x86,0xe5,0x57,0x60,0x94,0x80,0xd3,0x40,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe2,0x93,0x9a,0x00,0x01,0xff,0xe2,0x93,0x9b,0x00,0x10,
++ 0x08,0x01,0xff,0xe2,0x93,0x9c,0x00,0x01,0xff,0xe2,0x93,0x9d,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe2,0x93,0x9e,0x00,0x01,0xff,0xe2,0x93,0x9f,0x00,0x10,0x08,0x01,
++ 0xff,0xe2,0x93,0xa0,0x00,0x01,0xff,0xe2,0x93,0xa1,0x00,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe2,0x93,0xa2,0x00,0x01,0xff,0xe2,0x93,0xa3,0x00,0x10,0x08,0x01,
++ 0xff,0xe2,0x93,0xa4,0x00,0x01,0xff,0xe2,0x93,0xa5,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe2,0x93,0xa6,0x00,0x01,0xff,0xe2,0x93,0xa7,0x00,0x10,0x08,0x01,0xff,0xe2,
++ 0x93,0xa8,0x00,0x01,0xff,0xe2,0x93,0xa9,0x00,0x01,0x00,0xd4,0x0c,0xe3,0x33,0x62,
++ 0xe2,0x2c,0x62,0xcf,0x06,0x04,0x00,0xe3,0x0c,0x65,0xe2,0xff,0x63,0xe1,0x2e,0x02,
++ 0xe0,0x84,0x01,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x08,0xff,0xe2,0xb0,0xb0,0x00,0x08,0xff,0xe2,0xb0,0xb1,0x00,0x10,0x08,
++ 0x08,0xff,0xe2,0xb0,0xb2,0x00,0x08,0xff,0xe2,0xb0,0xb3,0x00,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe2,0xb0,0xb4,0x00,0x08,0xff,0xe2,0xb0,0xb5,0x00,0x10,0x08,0x08,0xff,
++ 0xe2,0xb0,0xb6,0x00,0x08,0xff,0xe2,0xb0,0xb7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe2,0xb0,0xb8,0x00,0x08,0xff,0xe2,0xb0,0xb9,0x00,0x10,0x08,0x08,0xff,
++ 0xe2,0xb0,0xba,0x00,0x08,0xff,0xe2,0xb0,0xbb,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb0,0xbc,0x00,0x08,0xff,0xe2,0xb0,0xbd,0x00,0x10,0x08,0x08,0xff,0xe2,0xb0,
++ 0xbe,0x00,0x08,0xff,0xe2,0xb0,0xbf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe2,0xb1,0x80,0x00,0x08,0xff,0xe2,0xb1,0x81,0x00,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x82,0x00,0x08,0xff,0xe2,0xb1,0x83,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x84,0x00,0x08,0xff,0xe2,0xb1,0x85,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x86,0x00,0x08,0xff,0xe2,0xb1,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x88,0x00,0x08,0xff,0xe2,0xb1,0x89,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x8a,0x00,0x08,0xff,0xe2,0xb1,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x8c,0x00,0x08,0xff,0xe2,0xb1,0x8d,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x8e,0x00,
++ 0x08,0xff,0xe2,0xb1,0x8f,0x00,0x94,0x7c,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe2,0xb1,0x90,0x00,0x08,0xff,0xe2,0xb1,0x91,0x00,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x92,0x00,0x08,0xff,0xe2,0xb1,0x93,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x94,0x00,0x08,0xff,0xe2,0xb1,0x95,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x96,0x00,0x08,0xff,0xe2,0xb1,0x97,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe2,0xb1,0x98,0x00,0x08,0xff,0xe2,0xb1,0x99,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x9a,0x00,0x08,0xff,0xe2,0xb1,0x9b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe2,0xb1,
++ 0x9c,0x00,0x08,0xff,0xe2,0xb1,0x9d,0x00,0x10,0x08,0x08,0xff,0xe2,0xb1,0x9e,0x00,
++ 0x00,0x00,0x08,0x00,0xcf,0x86,0xd5,0x07,0x64,0xef,0x61,0x08,0x00,0xd4,0x63,0xd3,
++ 0x32,0xd2,0x1b,0xd1,0x0c,0x10,0x08,0x09,0xff,0xe2,0xb1,0xa1,0x00,0x09,0x00,0x10,
++ 0x07,0x09,0xff,0xc9,0xab,0x00,0x09,0xff,0xe1,0xb5,0xbd,0x00,0xd1,0x0b,0x10,0x07,
++ 0x09,0xff,0xc9,0xbd,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xa8,
++ 0x00,0xd2,0x18,0xd1,0x0c,0x10,0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xaa,0x00,0x10,
++ 0x04,0x09,0x00,0x09,0xff,0xe2,0xb1,0xac,0x00,0xd1,0x0b,0x10,0x04,0x09,0x00,0x0a,
++ 0xff,0xc9,0x91,0x00,0x10,0x07,0x0a,0xff,0xc9,0xb1,0x00,0x0a,0xff,0xc9,0x90,0x00,
++ 0xd3,0x27,0xd2,0x17,0xd1,0x0b,0x10,0x07,0x0b,0xff,0xc9,0x92,0x00,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xe2,0xb1,0xb3,0x00,0x0a,0x00,0x91,0x0c,0x10,0x04,0x09,0x00,0x09,
++ 0xff,0xe2,0xb1,0xb6,0x00,0x09,0x00,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,
++ 0x07,0x0b,0xff,0xc8,0xbf,0x00,0x0b,0xff,0xc9,0x80,0x00,0xe0,0x83,0x01,0xcf,0x86,
++ 0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
++ 0x81,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x83,0x00,0x08,0x00,0xd1,0x0c,
++ 0x10,0x08,0x08,0xff,0xe2,0xb2,0x85,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,
++ 0x87,0x00,0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x89,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x8b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
++ 0x08,0xff,0xe2,0xb2,0x8d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x8f,0x00,
++ 0x08,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x91,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x93,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
++ 0x08,0xff,0xe2,0xb2,0x95,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x97,0x00,
++ 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0x99,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb2,0x9b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
++ 0xe2,0xb2,0x9d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0x9f,0x00,0x08,0x00,
++ 0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa1,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa3,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
++ 0x08,0xff,0xe2,0xb2,0xa5,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa7,0x00,
++ 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xa9,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb2,0xab,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
++ 0xe2,0xb2,0xad,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xaf,0x00,0x08,0x00,
++ 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb1,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb2,0xb3,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
++ 0xe2,0xb2,0xb5,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb7,0x00,0x08,0x00,
++ 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,0xb9,0x00,0x08,0x00,0x10,0x08,
++ 0x08,0xff,0xe2,0xb2,0xbb,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb2,
++ 0xbd,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb2,0xbf,0x00,0x08,0x00,0xcf,0x86,
++ 0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,
++ 0x81,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x83,0x00,0x08,0x00,0xd1,0x0c,
++ 0x10,0x08,0x08,0xff,0xe2,0xb3,0x85,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,
++ 0x87,0x00,0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x89,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x8b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
++ 0x08,0xff,0xe2,0xb3,0x8d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x8f,0x00,
++ 0x08,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x91,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x93,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,
++ 0x08,0xff,0xe2,0xb3,0x95,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x97,0x00,
++ 0x08,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0x99,0x00,0x08,0x00,
++ 0x10,0x08,0x08,0xff,0xe2,0xb3,0x9b,0x00,0x08,0x00,0xd1,0x0c,0x10,0x08,0x08,0xff,
++ 0xe2,0xb3,0x9d,0x00,0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0x9f,0x00,0x08,0x00,
++ 0xd4,0x3b,0xd3,0x1c,0x92,0x18,0xd1,0x0c,0x10,0x08,0x08,0xff,0xe2,0xb3,0xa1,0x00,
++ 0x08,0x00,0x10,0x08,0x08,0xff,0xe2,0xb3,0xa3,0x00,0x08,0x00,0x08,0x00,0xd2,0x10,
++ 0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x0b,0xff,0xe2,0xb3,0xac,0x00,0xe1,0x3b,
++ 0x5f,0x10,0x04,0x0b,0x00,0x0b,0xff,0xe2,0xb3,0xae,0x00,0xe3,0x40,0x5f,0x92,0x10,
++ 0x51,0x04,0x0b,0xe6,0x10,0x08,0x0d,0xff,0xe2,0xb3,0xb3,0x00,0x0d,0x00,0x00,0x00,
++ 0xe2,0x98,0x08,0xd1,0x0b,0xe0,0x11,0x67,0xcf,0x86,0xcf,0x06,0x01,0x00,0xe0,0x65,
++ 0x6c,0xcf,0x86,0xe5,0xa7,0x05,0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x0c,0xe2,0xf8,
++ 0x67,0xe1,0x8f,0x67,0xcf,0x06,0x04,0x00,0xe2,0xdb,0x01,0xe1,0x26,0x01,0xd0,0x09,
++ 0xcf,0x86,0x65,0xf4,0x67,0x0a,0x00,0xcf,0x86,0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,
++ 0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,
++ 0xff,0xea,0x99,0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x85,
++ 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,
++ 0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x89,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,
++ 0x99,0x8b,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x8d,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x8f,0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,
++ 0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x91,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,
++ 0x99,0x93,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x95,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x97,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,
++ 0x08,0x0a,0xff,0xea,0x99,0x99,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0x9b,
++ 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0x9d,0x00,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xea,0x99,0x9f,0x00,0x0a,0x00,0xe4,0x5d,0x67,0xd3,0x30,0xd2,0x18,
++ 0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x99,0xa1,0x00,0x0c,0x00,0x10,0x08,0x0a,0xff,
++ 0xea,0x99,0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x99,0xa5,0x00,
++ 0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,0xa7,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,
++ 0x10,0x08,0x0a,0xff,0xea,0x99,0xa9,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x99,
++ 0xab,0x00,0x0a,0x00,0xe1,0x0c,0x67,0x10,0x08,0x0a,0xff,0xea,0x99,0xad,0x00,0x0a,
++ 0x00,0xe0,0x35,0x67,0xcf,0x86,0x95,0xab,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,
++ 0x10,0x08,0x0a,0xff,0xea,0x9a,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,
++ 0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x85,0x00,0x0a,0x00,
++ 0x10,0x08,0x0a,0xff,0xea,0x9a,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,
++ 0x0a,0xff,0xea,0x9a,0x89,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x8b,0x00,
++ 0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x8d,0x00,0x0a,0x00,0x10,0x08,
++ 0x0a,0xff,0xea,0x9a,0x8f,0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,
++ 0x0a,0xff,0xea,0x9a,0x91,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9a,0x93,0x00,
++ 0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9a,0x95,0x00,0x0a,0x00,0x10,0x08,
++ 0x0a,0xff,0xea,0x9a,0x97,0x00,0x0a,0x00,0xe2,0x92,0x66,0xd1,0x0c,0x10,0x08,0x10,
++ 0xff,0xea,0x9a,0x99,0x00,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9a,0x9b,0x00,0x10,
++ 0x00,0x0b,0x00,0xe1,0x10,0x02,0xd0,0xb9,0xcf,0x86,0xd5,0x07,0x64,0x9e,0x66,0x08,
++ 0x00,0xd4,0x58,0xd3,0x28,0xd2,0x10,0x51,0x04,0x09,0x00,0x10,0x08,0x0a,0xff,0xea,
++ 0x9c,0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xa5,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xa7,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,
++ 0x08,0x0a,0xff,0xea,0x9c,0xa9,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xab,
++ 0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xad,0x00,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xea,0x9c,0xaf,0x00,0x0a,0x00,0xd3,0x28,0xd2,0x10,0x51,0x04,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb3,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
++ 0xff,0xea,0x9c,0xb5,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb7,0x00,0x0a,
++ 0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9c,0xb9,0x00,0x0a,0x00,0x10,
++ 0x08,0x0a,0xff,0xea,0x9c,0xbb,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
++ 0x9c,0xbd,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9c,0xbf,0x00,0x0a,0x00,0xcf,
++ 0x86,0xd5,0xc0,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,
++ 0x9d,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x83,0x00,0x0a,0x00,0xd1,
++ 0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x85,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,
++ 0x9d,0x87,0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x89,
++ 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x8b,0x00,0x0a,0x00,0xd1,0x0c,0x10,
++ 0x08,0x0a,0xff,0xea,0x9d,0x8d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x8f,
++ 0x00,0x0a,0x00,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x91,
++ 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x93,0x00,0x0a,0x00,0xd1,0x0c,0x10,
++ 0x08,0x0a,0xff,0xea,0x9d,0x95,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x97,
++ 0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0x99,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x9b,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
++ 0xff,0xea,0x9d,0x9d,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0x9f,0x00,0x0a,
++ 0x00,0xd4,0x60,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa1,
++ 0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa3,0x00,0x0a,0x00,0xd1,0x0c,0x10,
++ 0x08,0x0a,0xff,0xea,0x9d,0xa5,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa7,
++ 0x00,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9d,0xa9,0x00,0x0a,
++ 0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xab,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,
++ 0xff,0xea,0x9d,0xad,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xaf,0x00,0x0a,
++ 0x00,0x53,0x04,0x0a,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,
++ 0x9d,0xba,0x00,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9d,0xbc,0x00,0xd1,0x0c,0x10,
++ 0x04,0x0a,0x00,0x0a,0xff,0xe1,0xb5,0xb9,0x00,0x10,0x08,0x0a,0xff,0xea,0x9d,0xbf,
++ 0x00,0x0a,0x00,0xe0,0x71,0x01,0xcf,0x86,0xd5,0xa6,0xd4,0x4e,0xd3,0x30,0xd2,0x18,
++ 0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9e,0x81,0x00,0x0a,0x00,0x10,0x08,0x0a,0xff,
++ 0xea,0x9e,0x83,0x00,0x0a,0x00,0xd1,0x0c,0x10,0x08,0x0a,0xff,0xea,0x9e,0x85,0x00,
++ 0x0a,0x00,0x10,0x08,0x0a,0xff,0xea,0x9e,0x87,0x00,0x0a,0x00,0xd2,0x10,0x51,0x04,
++ 0x0a,0x00,0x10,0x04,0x0a,0x00,0x0a,0xff,0xea,0x9e,0x8c,0x00,0xe1,0x9a,0x64,0x10,
++ 0x04,0x0a,0x00,0x0c,0xff,0xc9,0xa5,0x00,0xd3,0x28,0xd2,0x18,0xd1,0x0c,0x10,0x08,
++ 0x0c,0xff,0xea,0x9e,0x91,0x00,0x0c,0x00,0x10,0x08,0x0d,0xff,0xea,0x9e,0x93,0x00,
++ 0x0d,0x00,0x51,0x04,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9e,0x97,0x00,0x10,0x00,
++ 0xd2,0x18,0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,0x9e,0x99,0x00,0x10,0x00,0x10,0x08,
++ 0x10,0xff,0xea,0x9e,0x9b,0x00,0x10,0x00,0xd1,0x0c,0x10,0x08,0x10,0xff,0xea,0x9e,
++ 0x9d,0x00,0x10,0x00,0x10,0x08,0x10,0xff,0xea,0x9e,0x9f,0x00,0x10,0x00,0xd4,0x63,
++ 0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa1,0x00,0x0c,0x00,
++ 0x10,0x08,0x0c,0xff,0xea,0x9e,0xa3,0x00,0x0c,0x00,0xd1,0x0c,0x10,0x08,0x0c,0xff,
++ 0xea,0x9e,0xa5,0x00,0x0c,0x00,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa7,0x00,0x0c,0x00,
++ 0xd2,0x1a,0xd1,0x0c,0x10,0x08,0x0c,0xff,0xea,0x9e,0xa9,0x00,0x0c,0x00,0x10,0x07,
++ 0x0d,0xff,0xc9,0xa6,0x00,0x10,0xff,0xc9,0x9c,0x00,0xd1,0x0e,0x10,0x07,0x10,0xff,
++ 0xc9,0xa1,0x00,0x10,0xff,0xc9,0xac,0x00,0x10,0x07,0x12,0xff,0xc9,0xaa,0x00,0x14,
++ 0x00,0xd3,0x35,0xd2,0x1d,0xd1,0x0e,0x10,0x07,0x10,0xff,0xca,0x9e,0x00,0x10,0xff,
++ 0xca,0x87,0x00,0x10,0x07,0x11,0xff,0xca,0x9d,0x00,0x11,0xff,0xea,0xad,0x93,0x00,
++ 0xd1,0x0c,0x10,0x08,0x11,0xff,0xea,0x9e,0xb5,0x00,0x11,0x00,0x10,0x08,0x11,0xff,
++ 0xea,0x9e,0xb7,0x00,0x11,0x00,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x14,0xff,0xea,0x9e,
++ 0xb9,0x00,0x14,0x00,0x10,0x08,0x15,0xff,0xea,0x9e,0xbb,0x00,0x15,0x00,0xd1,0x0c,
++ 0x10,0x08,0x15,0xff,0xea,0x9e,0xbd,0x00,0x15,0x00,0x10,0x08,0x15,0xff,0xea,0x9e,
++ 0xbf,0x00,0x15,0x00,0xcf,0x86,0xe5,0xd4,0x63,0x94,0x2f,0x93,0x2b,0xd2,0x10,0x51,
++ 0x04,0x00,0x00,0x10,0x08,0x15,0xff,0xea,0x9f,0x83,0x00,0x15,0x00,0xd1,0x0f,0x10,
++ 0x08,0x15,0xff,0xea,0x9e,0x94,0x00,0x15,0xff,0xca,0x82,0x00,0x10,0x08,0x15,0xff,
++ 0xe1,0xb6,0x8e,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xe4,0xb4,0x66,0xd3,0x1d,0xe2,
++ 0x5b,0x64,0xe1,0x0a,0x64,0xe0,0xf7,0x63,0xcf,0x86,0xe5,0xd8,0x63,0x94,0x0b,0x93,
++ 0x07,0x62,0xc3,0x63,0x08,0x00,0x08,0x00,0x08,0x00,0xd2,0x0f,0xe1,0x5a,0x65,0xe0,
++ 0x27,0x65,0xcf,0x86,0x65,0x0c,0x65,0x0a,0x00,0xd1,0xab,0xd0,0x1a,0xcf,0x86,0xe5,
++ 0x17,0x66,0xe4,0xfa,0x65,0xe3,0xe1,0x65,0xe2,0xd4,0x65,0x91,0x08,0x10,0x04,0x00,
++ 0x00,0x0c,0x00,0x0c,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x0b,0x93,0x07,0x62,
++ 0x27,0x66,0x11,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,
++ 0xe1,0x8e,0xa0,0x00,0x11,0xff,0xe1,0x8e,0xa1,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,
++ 0xa2,0x00,0x11,0xff,0xe1,0x8e,0xa3,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,
++ 0xa4,0x00,0x11,0xff,0xe1,0x8e,0xa5,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xa6,0x00,
++ 0x11,0xff,0xe1,0x8e,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,
++ 0xa8,0x00,0x11,0xff,0xe1,0x8e,0xa9,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xaa,0x00,
++ 0x11,0xff,0xe1,0x8e,0xab,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xac,0x00,
++ 0x11,0xff,0xe1,0x8e,0xad,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xae,0x00,0x11,0xff,
++ 0xe1,0x8e,0xaf,0x00,0xe0,0xb2,0x65,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb0,0x00,0x11,0xff,0xe1,0x8e,
++ 0xb1,0x00,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb2,0x00,0x11,0xff,0xe1,0x8e,0xb3,0x00,
++ 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb4,0x00,0x11,0xff,0xe1,0x8e,0xb5,0x00,
++ 0x10,0x08,0x11,0xff,0xe1,0x8e,0xb6,0x00,0x11,0xff,0xe1,0x8e,0xb7,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8e,0xb8,0x00,0x11,0xff,0xe1,0x8e,0xb9,0x00,
++ 0x10,0x08,0x11,0xff,0xe1,0x8e,0xba,0x00,0x11,0xff,0xe1,0x8e,0xbb,0x00,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8e,0xbc,0x00,0x11,0xff,0xe1,0x8e,0xbd,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8e,0xbe,0x00,0x11,0xff,0xe1,0x8e,0xbf,0x00,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8f,0x80,0x00,0x11,0xff,0xe1,0x8f,0x81,0x00,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x82,0x00,0x11,0xff,0xe1,0x8f,0x83,0x00,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x84,0x00,0x11,0xff,0xe1,0x8f,0x85,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x86,0x00,0x11,0xff,0xe1,0x8f,0x87,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x88,0x00,0x11,0xff,0xe1,0x8f,0x89,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x8a,0x00,0x11,0xff,0xe1,0x8f,0x8b,0x00,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x8c,0x00,0x11,0xff,0xe1,0x8f,0x8d,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0x8e,0x00,0x11,0xff,0xe1,0x8f,0x8f,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x11,0xff,0xe1,0x8f,0x90,0x00,0x11,0xff,0xe1,0x8f,0x91,0x00,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x92,0x00,0x11,0xff,0xe1,0x8f,0x93,0x00,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x94,0x00,0x11,0xff,0xe1,0x8f,0x95,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x96,0x00,0x11,0xff,0xe1,0x8f,0x97,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0x98,0x00,0x11,0xff,0xe1,0x8f,0x99,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x9a,0x00,0x11,0xff,0xe1,0x8f,0x9b,0x00,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0x9c,0x00,0x11,0xff,0xe1,0x8f,0x9d,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0x9e,0x00,0x11,0xff,0xe1,0x8f,0x9f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x11,0xff,0xe1,0x8f,0xa0,0x00,0x11,0xff,0xe1,0x8f,0xa1,0x00,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0xa2,0x00,0x11,0xff,0xe1,0x8f,0xa3,0x00,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0xa4,0x00,0x11,0xff,0xe1,0x8f,0xa5,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0xa6,0x00,0x11,0xff,0xe1,0x8f,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x11,0xff,0xe1,0x8f,0xa8,0x00,0x11,0xff,0xe1,0x8f,0xa9,0x00,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0xaa,0x00,0x11,0xff,0xe1,0x8f,0xab,0x00,0xd1,0x10,0x10,0x08,0x11,0xff,
++ 0xe1,0x8f,0xac,0x00,0x11,0xff,0xe1,0x8f,0xad,0x00,0x10,0x08,0x11,0xff,0xe1,0x8f,
++ 0xae,0x00,0x11,0xff,0xe1,0x8f,0xaf,0x00,0xd1,0x0c,0xe0,0xeb,0x63,0xcf,0x86,0xcf,
++ 0x06,0x02,0xff,0xff,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,
++ 0xcf,0x06,0x01,0x00,0xd4,0xae,0xd3,0x09,0xe2,0x54,0x64,0xcf,0x06,0x01,0x00,0xd2,
++ 0x27,0xe1,0x1f,0x70,0xe0,0x26,0x6e,0xcf,0x86,0xe5,0x3f,0x6d,0xe4,0xce,0x6c,0xe3,
++ 0x99,0x6c,0xe2,0x78,0x6c,0xe1,0x67,0x6c,0x10,0x08,0x01,0xff,0xe5,0x88,0x87,0x00,
++ 0x01,0xff,0xe5,0xba,0xa6,0x00,0xe1,0x74,0x74,0xe0,0xe8,0x73,0xcf,0x86,0xe5,0x22,
++ 0x73,0xd4,0x3b,0x93,0x37,0xd2,0x1d,0xd1,0x0e,0x10,0x07,0x01,0xff,0x66,0x66,0x00,
++ 0x01,0xff,0x66,0x69,0x00,0x10,0x07,0x01,0xff,0x66,0x6c,0x00,0x01,0xff,0x66,0x66,
++ 0x69,0x00,0xd1,0x0f,0x10,0x08,0x01,0xff,0x66,0x66,0x6c,0x00,0x01,0xff,0x73,0x74,
++ 0x00,0x10,0x07,0x01,0xff,0x73,0x74,0x00,0x00,0x00,0x00,0x00,0xe3,0xc8,0x72,0xd2,
++ 0x11,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xd5,0xb4,0xd5,0xb6,0x00,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xd5,0xb4,0xd5,0xa5,0x00,0x01,0xff,0xd5,0xb4,0xd5,
++ 0xab,0x00,0x10,0x09,0x01,0xff,0xd5,0xbe,0xd5,0xb6,0x00,0x01,0xff,0xd5,0xb4,0xd5,
++ 0xad,0x00,0xd3,0x09,0xe2,0x40,0x74,0xcf,0x06,0x01,0x00,0xd2,0x13,0xe1,0x30,0x75,
++ 0xe0,0xc1,0x74,0xcf,0x86,0xe5,0x9e,0x74,0x64,0x8d,0x74,0x06,0xff,0x00,0xe1,0x96,
++ 0x75,0xe0,0x63,0x75,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x7c,
++ 0xd3,0x3c,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xef,0xbd,0x81,0x00,
++ 0x10,0x08,0x01,0xff,0xef,0xbd,0x82,0x00,0x01,0xff,0xef,0xbd,0x83,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xef,0xbd,0x84,0x00,0x01,0xff,0xef,0xbd,0x85,0x00,0x10,0x08,
++ 0x01,0xff,0xef,0xbd,0x86,0x00,0x01,0xff,0xef,0xbd,0x87,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xef,0xbd,0x88,0x00,0x01,0xff,0xef,0xbd,0x89,0x00,0x10,0x08,
++ 0x01,0xff,0xef,0xbd,0x8a,0x00,0x01,0xff,0xef,0xbd,0x8b,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xef,0xbd,0x8c,0x00,0x01,0xff,0xef,0xbd,0x8d,0x00,0x10,0x08,0x01,0xff,
++ 0xef,0xbd,0x8e,0x00,0x01,0xff,0xef,0xbd,0x8f,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xef,0xbd,0x90,0x00,0x01,0xff,0xef,0xbd,0x91,0x00,0x10,0x08,
++ 0x01,0xff,0xef,0xbd,0x92,0x00,0x01,0xff,0xef,0xbd,0x93,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xef,0xbd,0x94,0x00,0x01,0xff,0xef,0xbd,0x95,0x00,0x10,0x08,0x01,0xff,
++ 0xef,0xbd,0x96,0x00,0x01,0xff,0xef,0xbd,0x97,0x00,0x92,0x1c,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xef,0xbd,0x98,0x00,0x01,0xff,0xef,0xbd,0x99,0x00,0x10,0x08,0x01,0xff,
++ 0xef,0xbd,0x9a,0x00,0x01,0x00,0x01,0x00,0x83,0xe2,0x87,0xb3,0xe1,0x60,0xb0,0xe0,
++ 0xdd,0xae,0xcf,0x86,0xe5,0x81,0x9b,0xc4,0xe3,0xc1,0x07,0xe2,0x62,0x06,0xe1,0x11,
++ 0x86,0xe0,0x09,0x05,0xcf,0x86,0xe5,0xfb,0x02,0xd4,0x1c,0xe3,0x7f,0x76,0xe2,0xd6,
++ 0x75,0xe1,0xb1,0x75,0xe0,0x8a,0x75,0xcf,0x86,0xe5,0x57,0x75,0x94,0x07,0x63,0x42,
++ 0x75,0x07,0x00,0x07,0x00,0xe3,0x2b,0x78,0xe2,0xf0,0x77,0xe1,0x77,0x01,0xe0,0x88,
++ 0x77,0xcf,0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,
++ 0x05,0xff,0xf0,0x90,0x90,0xa8,0x00,0x05,0xff,0xf0,0x90,0x90,0xa9,0x00,0x10,0x09,
++ 0x05,0xff,0xf0,0x90,0x90,0xaa,0x00,0x05,0xff,0xf0,0x90,0x90,0xab,0x00,0xd1,0x12,
++ 0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xac,0x00,0x05,0xff,0xf0,0x90,0x90,0xad,0x00,
++ 0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xae,0x00,0x05,0xff,0xf0,0x90,0x90,0xaf,0x00,
++ 0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb0,0x00,0x05,0xff,0xf0,
++ 0x90,0x90,0xb1,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb2,0x00,0x05,0xff,0xf0,
++ 0x90,0x90,0xb3,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb4,0x00,0x05,
++ 0xff,0xf0,0x90,0x90,0xb5,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x90,0xb6,0x00,0x05,
++ 0xff,0xf0,0x90,0x90,0xb7,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,
++ 0xf0,0x90,0x90,0xb8,0x00,0x05,0xff,0xf0,0x90,0x90,0xb9,0x00,0x10,0x09,0x05,0xff,
++ 0xf0,0x90,0x90,0xba,0x00,0x05,0xff,0xf0,0x90,0x90,0xbb,0x00,0xd1,0x12,0x10,0x09,
++ 0x05,0xff,0xf0,0x90,0x90,0xbc,0x00,0x05,0xff,0xf0,0x90,0x90,0xbd,0x00,0x10,0x09,
++ 0x05,0xff,0xf0,0x90,0x90,0xbe,0x00,0x05,0xff,0xf0,0x90,0x90,0xbf,0x00,0xd2,0x24,
++ 0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x80,0x00,0x05,0xff,0xf0,0x90,0x91,
++ 0x81,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x82,0x00,0x05,0xff,0xf0,0x90,0x91,
++ 0x83,0x00,0xd1,0x12,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x84,0x00,0x05,0xff,0xf0,
++ 0x90,0x91,0x85,0x00,0x10,0x09,0x05,0xff,0xf0,0x90,0x91,0x86,0x00,0x05,0xff,0xf0,
++ 0x90,0x91,0x87,0x00,0x94,0x4c,0x93,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x05,0xff,
++ 0xf0,0x90,0x91,0x88,0x00,0x05,0xff,0xf0,0x90,0x91,0x89,0x00,0x10,0x09,0x05,0xff,
++ 0xf0,0x90,0x91,0x8a,0x00,0x05,0xff,0xf0,0x90,0x91,0x8b,0x00,0xd1,0x12,0x10,0x09,
++ 0x05,0xff,0xf0,0x90,0x91,0x8c,0x00,0x05,0xff,0xf0,0x90,0x91,0x8d,0x00,0x10,0x09,
++ 0x07,0xff,0xf0,0x90,0x91,0x8e,0x00,0x07,0xff,0xf0,0x90,0x91,0x8f,0x00,0x05,0x00,
++ 0x05,0x00,0xd0,0xa0,0xcf,0x86,0xd5,0x07,0x64,0x30,0x76,0x07,0x00,0xd4,0x07,0x63,
++ 0x3d,0x76,0x07,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,
++ 0x93,0x98,0x00,0x12,0xff,0xf0,0x90,0x93,0x99,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,
++ 0x93,0x9a,0x00,0x12,0xff,0xf0,0x90,0x93,0x9b,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,
++ 0xf0,0x90,0x93,0x9c,0x00,0x12,0xff,0xf0,0x90,0x93,0x9d,0x00,0x10,0x09,0x12,0xff,
++ 0xf0,0x90,0x93,0x9e,0x00,0x12,0xff,0xf0,0x90,0x93,0x9f,0x00,0xd2,0x24,0xd1,0x12,
++ 0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xa0,0x00,0x12,0xff,0xf0,0x90,0x93,0xa1,0x00,
++ 0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xa2,0x00,0x12,0xff,0xf0,0x90,0x93,0xa3,0x00,
++ 0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xa4,0x00,0x12,0xff,0xf0,0x90,0x93,
++ 0xa5,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xa6,0x00,0x12,0xff,0xf0,0x90,0x93,
++ 0xa7,0x00,0xcf,0x86,0xe5,0xc6,0x75,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,
++ 0x09,0x12,0xff,0xf0,0x90,0x93,0xa8,0x00,0x12,0xff,0xf0,0x90,0x93,0xa9,0x00,0x10,
++ 0x09,0x12,0xff,0xf0,0x90,0x93,0xaa,0x00,0x12,0xff,0xf0,0x90,0x93,0xab,0x00,0xd1,
++ 0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xac,0x00,0x12,0xff,0xf0,0x90,0x93,0xad,
++ 0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xae,0x00,0x12,0xff,0xf0,0x90,0x93,0xaf,
++ 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb0,0x00,0x12,0xff,
++ 0xf0,0x90,0x93,0xb1,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb2,0x00,0x12,0xff,
++ 0xf0,0x90,0x93,0xb3,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb4,0x00,
++ 0x12,0xff,0xf0,0x90,0x93,0xb5,0x00,0x10,0x09,0x12,0xff,0xf0,0x90,0x93,0xb6,0x00,
++ 0x12,0xff,0xf0,0x90,0x93,0xb7,0x00,0x93,0x28,0x92,0x24,0xd1,0x12,0x10,0x09,0x12,
++ 0xff,0xf0,0x90,0x93,0xb8,0x00,0x12,0xff,0xf0,0x90,0x93,0xb9,0x00,0x10,0x09,0x12,
++ 0xff,0xf0,0x90,0x93,0xba,0x00,0x12,0xff,0xf0,0x90,0x93,0xbb,0x00,0x00,0x00,0x12,
++ 0x00,0xd4,0x1f,0xe3,0xdf,0x76,0xe2,0x6a,0x76,0xe1,0x09,0x76,0xe0,0xea,0x75,0xcf,
++ 0x86,0xe5,0xb7,0x75,0x94,0x0a,0xe3,0xa2,0x75,0x62,0x99,0x75,0x07,0x00,0x07,0x00,
++ 0xe3,0xde,0x78,0xe2,0xaf,0x78,0xd1,0x09,0xe0,0x4c,0x78,0xcf,0x06,0x0b,0x00,0xe0,
++ 0x7f,0x78,0xcf,0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,
++ 0x09,0x11,0xff,0xf0,0x90,0xb3,0x80,0x00,0x11,0xff,0xf0,0x90,0xb3,0x81,0x00,0x10,
++ 0x09,0x11,0xff,0xf0,0x90,0xb3,0x82,0x00,0x11,0xff,0xf0,0x90,0xb3,0x83,0x00,0xd1,
++ 0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x84,0x00,0x11,0xff,0xf0,0x90,0xb3,0x85,
++ 0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x86,0x00,0x11,0xff,0xf0,0x90,0xb3,0x87,
++ 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x88,0x00,0x11,0xff,
++ 0xf0,0x90,0xb3,0x89,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8a,0x00,0x11,0xff,
++ 0xf0,0x90,0xb3,0x8b,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8c,0x00,
++ 0x11,0xff,0xf0,0x90,0xb3,0x8d,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x8e,0x00,
++ 0x11,0xff,0xf0,0x90,0xb3,0x8f,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,
++ 0xff,0xf0,0x90,0xb3,0x90,0x00,0x11,0xff,0xf0,0x90,0xb3,0x91,0x00,0x10,0x09,0x11,
++ 0xff,0xf0,0x90,0xb3,0x92,0x00,0x11,0xff,0xf0,0x90,0xb3,0x93,0x00,0xd1,0x12,0x10,
++ 0x09,0x11,0xff,0xf0,0x90,0xb3,0x94,0x00,0x11,0xff,0xf0,0x90,0xb3,0x95,0x00,0x10,
++ 0x09,0x11,0xff,0xf0,0x90,0xb3,0x96,0x00,0x11,0xff,0xf0,0x90,0xb3,0x97,0x00,0xd2,
++ 0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x98,0x00,0x11,0xff,0xf0,0x90,
++ 0xb3,0x99,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9a,0x00,0x11,0xff,0xf0,0x90,
++ 0xb3,0x9b,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9c,0x00,0x11,0xff,
++ 0xf0,0x90,0xb3,0x9d,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0x9e,0x00,0x11,0xff,
++ 0xf0,0x90,0xb3,0x9f,0x00,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x11,
++ 0xff,0xf0,0x90,0xb3,0xa0,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa1,0x00,0x10,0x09,0x11,
++ 0xff,0xf0,0x90,0xb3,0xa2,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa3,0x00,0xd1,0x12,0x10,
++ 0x09,0x11,0xff,0xf0,0x90,0xb3,0xa4,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa5,0x00,0x10,
++ 0x09,0x11,0xff,0xf0,0x90,0xb3,0xa6,0x00,0x11,0xff,0xf0,0x90,0xb3,0xa7,0x00,0xd2,
++ 0x24,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xa8,0x00,0x11,0xff,0xf0,0x90,
++ 0xb3,0xa9,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xaa,0x00,0x11,0xff,0xf0,0x90,
++ 0xb3,0xab,0x00,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xac,0x00,0x11,0xff,
++ 0xf0,0x90,0xb3,0xad,0x00,0x10,0x09,0x11,0xff,0xf0,0x90,0xb3,0xae,0x00,0x11,0xff,
++ 0xf0,0x90,0xb3,0xaf,0x00,0x93,0x23,0x92,0x1f,0xd1,0x12,0x10,0x09,0x11,0xff,0xf0,
++ 0x90,0xb3,0xb0,0x00,0x11,0xff,0xf0,0x90,0xb3,0xb1,0x00,0x10,0x09,0x11,0xff,0xf0,
++ 0x90,0xb3,0xb2,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x15,0xe4,0x91,
++ 0x7b,0xe3,0x9b,0x79,0xe2,0x94,0x78,0xe1,0xe4,0x77,0xe0,0x9d,0x77,0xcf,0x06,0x0c,
++ 0x00,0xe4,0xeb,0x7e,0xe3,0x44,0x7e,0xe2,0xed,0x7d,0xd1,0x0c,0xe0,0xb2,0x7d,0xcf,
++ 0x86,0x65,0x93,0x7d,0x14,0x00,0xe0,0xb6,0x7d,0xcf,0x86,0x55,0x04,0x00,0x00,0xd4,
++ 0x90,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x80,0x00,
++ 0x10,0xff,0xf0,0x91,0xa3,0x81,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x82,0x00,
++ 0x10,0xff,0xf0,0x91,0xa3,0x83,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,
++ 0x84,0x00,0x10,0xff,0xf0,0x91,0xa3,0x85,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,
++ 0x86,0x00,0x10,0xff,0xf0,0x91,0xa3,0x87,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,
++ 0xff,0xf0,0x91,0xa3,0x88,0x00,0x10,0xff,0xf0,0x91,0xa3,0x89,0x00,0x10,0x09,0x10,
++ 0xff,0xf0,0x91,0xa3,0x8a,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8b,0x00,0xd1,0x12,0x10,
++ 0x09,0x10,0xff,0xf0,0x91,0xa3,0x8c,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8d,0x00,0x10,
++ 0x09,0x10,0xff,0xf0,0x91,0xa3,0x8e,0x00,0x10,0xff,0xf0,0x91,0xa3,0x8f,0x00,0xd3,
++ 0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x90,0x00,0x10,0xff,
++ 0xf0,0x91,0xa3,0x91,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x92,0x00,0x10,0xff,
++ 0xf0,0x91,0xa3,0x93,0x00,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x94,0x00,
++ 0x10,0xff,0xf0,0x91,0xa3,0x95,0x00,0x10,0x09,0x10,0xff,0xf0,0x91,0xa3,0x96,0x00,
++ 0x10,0xff,0xf0,0x91,0xa3,0x97,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x10,0xff,0xf0,
++ 0x91,0xa3,0x98,0x00,0x10,0xff,0xf0,0x91,0xa3,0x99,0x00,0x10,0x09,0x10,0xff,0xf0,
++ 0x91,0xa3,0x9a,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9b,0x00,0xd1,0x12,0x10,0x09,0x10,
++ 0xff,0xf0,0x91,0xa3,0x9c,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9d,0x00,0x10,0x09,0x10,
++ 0xff,0xf0,0x91,0xa3,0x9e,0x00,0x10,0xff,0xf0,0x91,0xa3,0x9f,0x00,0xd1,0x11,0xe0,
++ 0x12,0x81,0xcf,0x86,0xe5,0x09,0x81,0xe4,0xd2,0x80,0xcf,0x06,0x00,0x00,0xe0,0xdb,
++ 0x82,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xd4,0x09,0xe3,0x10,0x81,0xcf,0x06,
++ 0x0c,0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,0xe2,0x3b,0x82,0xe1,0x16,0x82,0xd0,0x06,
++ 0xcf,0x06,0x00,0x00,0xcf,0x86,0xa5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,
++ 0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa0,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa1,
++ 0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa2,0x00,0x14,0xff,0xf0,0x96,0xb9,0xa3,
++ 0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa4,0x00,0x14,0xff,0xf0,0x96,
++ 0xb9,0xa5,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa6,0x00,0x14,0xff,0xf0,0x96,
++ 0xb9,0xa7,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xa8,0x00,
++ 0x14,0xff,0xf0,0x96,0xb9,0xa9,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xaa,0x00,
++ 0x14,0xff,0xf0,0x96,0xb9,0xab,0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,
++ 0xac,0x00,0x14,0xff,0xf0,0x96,0xb9,0xad,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,
++ 0xae,0x00,0x14,0xff,0xf0,0x96,0xb9,0xaf,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,
++ 0x09,0x14,0xff,0xf0,0x96,0xb9,0xb0,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb1,0x00,0x10,
++ 0x09,0x14,0xff,0xf0,0x96,0xb9,0xb2,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb3,0x00,0xd1,
++ 0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xb4,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb5,
++ 0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xb6,0x00,0x14,0xff,0xf0,0x96,0xb9,0xb7,
++ 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xb8,0x00,0x14,0xff,
++ 0xf0,0x96,0xb9,0xb9,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xba,0x00,0x14,0xff,
++ 0xf0,0x96,0xb9,0xbb,0x00,0xd1,0x12,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xbc,0x00,
++ 0x14,0xff,0xf0,0x96,0xb9,0xbd,0x00,0x10,0x09,0x14,0xff,0xf0,0x96,0xb9,0xbe,0x00,
++ 0x14,0xff,0xf0,0x96,0xb9,0xbf,0x00,0x14,0x00,0xd2,0x14,0xe1,0x25,0x82,0xe0,0x1c,
++ 0x82,0xcf,0x86,0xe5,0xdd,0x81,0xe4,0x9a,0x81,0xcf,0x06,0x12,0x00,0xd1,0x0b,0xe0,
++ 0x51,0x83,0xcf,0x86,0xcf,0x06,0x00,0x00,0xe0,0x95,0x8b,0xcf,0x86,0xd5,0x22,0xe4,
++ 0xd0,0x88,0xe3,0x93,0x88,0xe2,0x38,0x88,0xe1,0x31,0x88,0xe0,0x2a,0x88,0xcf,0x86,
++ 0xe5,0xfb,0x87,0xe4,0xe2,0x87,0x93,0x07,0x62,0xd1,0x87,0x12,0xe6,0x12,0xe6,0xe4,
++ 0x36,0x89,0xe3,0x2f,0x89,0xd2,0x09,0xe1,0xb8,0x88,0xcf,0x06,0x10,0x00,0xe1,0x1f,
++ 0x89,0xe0,0xec,0x88,0xcf,0x86,0xe5,0x21,0x01,0xd4,0x90,0xd3,0x48,0xd2,0x24,0xd1,
++ 0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa2,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa3,
++ 0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa4,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xa5,
++ 0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa6,0x00,0x12,0xff,0xf0,0x9e,
++ 0xa4,0xa7,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xa8,0x00,0x12,0xff,0xf0,0x9e,
++ 0xa4,0xa9,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xaa,0x00,
++ 0x12,0xff,0xf0,0x9e,0xa4,0xab,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xac,0x00,
++ 0x12,0xff,0xf0,0x9e,0xa4,0xad,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,
++ 0xae,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xaf,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,
++ 0xb0,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb1,0x00,0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,
++ 0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb2,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb3,0x00,0x10,
++ 0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb4,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb5,0x00,0xd1,
++ 0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb6,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb7,
++ 0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xb8,0x00,0x12,0xff,0xf0,0x9e,0xa4,0xb9,
++ 0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xba,0x00,0x12,0xff,
++ 0xf0,0x9e,0xa4,0xbb,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xbc,0x00,0x12,0xff,
++ 0xf0,0x9e,0xa4,0xbd,0x00,0xd1,0x12,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa4,0xbe,0x00,
++ 0x12,0xff,0xf0,0x9e,0xa4,0xbf,0x00,0x10,0x09,0x12,0xff,0xf0,0x9e,0xa5,0x80,0x00,
++ 0x12,0xff,0xf0,0x9e,0xa5,0x81,0x00,0x94,0x1e,0x93,0x1a,0x92,0x16,0x91,0x12,0x10,
++ 0x09,0x12,0xff,0xf0,0x9e,0xa5,0x82,0x00,0x12,0xff,0xf0,0x9e,0xa5,0x83,0x00,0x12,
++ 0x00,0x12,0x00,0x12,0x00,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ /* nfdi_c0100 */
++ 0x57,0x04,0x01,0x00,0xc6,0xe5,0xac,0x13,0xe4,0x41,0x0c,0xe3,0x7a,0x07,0xe2,0xf3,
++ 0x01,0xc1,0xd0,0x1f,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x15,0x53,0x04,0x01,0x00,
++ 0x52,0x04,0x01,0x00,0x91,0x09,0x10,0x04,0x01,0x00,0x01,0xff,0x00,0x01,0x00,0x01,
++ 0x00,0xcf,0x86,0xd5,0xe4,0xd4,0x7c,0xd3,0x3c,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x41,0xcc,0x80,0x00,0x01,0xff,0x41,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x41,
++ 0xcc,0x82,0x00,0x01,0xff,0x41,0xcc,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,
++ 0xcc,0x88,0x00,0x01,0xff,0x41,0xcc,0x8a,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x43,
++ 0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,0x80,0x00,0x01,
++ 0xff,0x45,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x82,0x00,0x01,0xff,0x45,
++ 0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x80,0x00,0x01,0xff,0x49,
++ 0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0x82,0x00,0x01,0xff,0x49,0xcc,0x88,
++ 0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x4e,0xcc,0x83,
++ 0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x80,0x00,0x01,0xff,0x4f,0xcc,0x81,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x82,0x00,0x01,0xff,0x4f,0xcc,0x83,0x00,0x10,
++ 0x08,0x01,0xff,0x4f,0xcc,0x88,0x00,0x01,0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,
++ 0x00,0x01,0xff,0x55,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x81,0x00,0x01,
++ 0xff,0x55,0xcc,0x82,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x88,0x00,0x01,
++ 0xff,0x59,0xcc,0x81,0x00,0x01,0x00,0xd4,0x7c,0xd3,0x3c,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x61,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x81,0x00,0x10,0x08,0x01,
++ 0xff,0x61,0xcc,0x82,0x00,0x01,0xff,0x61,0xcc,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x61,0xcc,0x88,0x00,0x01,0xff,0x61,0xcc,0x8a,0x00,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0x63,0xcc,0xa7,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x65,0xcc,0x80,
++ 0x00,0x01,0xff,0x65,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x65,0xcc,0x82,0x00,0x01,
++ 0xff,0x65,0xcc,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x80,0x00,0x01,
++ 0xff,0x69,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x69,0xcc,0x82,0x00,0x01,0xff,0x69,
++ 0xcc,0x88,0x00,0xd3,0x38,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x6e,
++ 0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x81,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x6f,0xcc,0x82,0x00,0x01,0xff,0x6f,0xcc,0x83,
++ 0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x88,0x00,0x01,0x00,0xd2,0x1c,0xd1,0x0c,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0x75,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x75,0xcc,0x81,
++ 0x00,0x01,0xff,0x75,0xcc,0x82,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x75,0xcc,0x88,
++ 0x00,0x01,0xff,0x79,0xcc,0x81,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x79,0xcc,0x88,
++ 0x00,0xe1,0x9a,0x03,0xe0,0xd3,0x01,0xcf,0x86,0xd5,0xf4,0xd4,0x80,0xd3,0x40,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x84,
++ 0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x86,0x00,0x01,0xff,0x61,0xcc,0x86,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa8,0x00,0x01,0xff,0x61,0xcc,0xa8,0x00,0x10,
++ 0x08,0x01,0xff,0x43,0xcc,0x81,0x00,0x01,0xff,0x63,0xcc,0x81,0x00,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x43,0xcc,0x82,0x00,0x01,0xff,0x63,0xcc,0x82,0x00,0x10,
++ 0x08,0x01,0xff,0x43,0xcc,0x87,0x00,0x01,0xff,0x63,0xcc,0x87,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x43,0xcc,0x8c,0x00,0x01,0xff,0x63,0xcc,0x8c,0x00,0x10,0x08,0x01,
++ 0xff,0x44,0xcc,0x8c,0x00,0x01,0xff,0x64,0xcc,0x8c,0x00,0xd3,0x34,0xd2,0x14,0x51,
++ 0x04,0x01,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x84,0x00,0x01,0xff,0x65,0xcc,0x84,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0x86,
++ 0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x87,0x00,0x01,0xff,0x65,0xcc,0x87,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,0xa8,0x00,0x01,0xff,0x65,0xcc,0xa8,
++ 0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x8c,0x00,0x01,0xff,0x65,0xcc,0x8c,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x82,0x00,0x01,0xff,0x67,0xcc,0x82,0x00,0x10,
++ 0x08,0x01,0xff,0x47,0xcc,0x86,0x00,0x01,0xff,0x67,0xcc,0x86,0x00,0xd4,0x74,0xd3,
++ 0x34,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x87,0x00,0x01,0xff,0x67,
++ 0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x47,0xcc,0xa7,0x00,0x01,0xff,0x67,0xcc,0xa7,
++ 0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0x82,0x00,0x01,0xff,0x68,0xcc,0x82,
++ 0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x83,0x00,0x01,
++ 0xff,0x69,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0x84,0x00,0x01,0xff,0x69,
++ 0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x86,0x00,0x01,0xff,0x69,
++ 0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0xa8,0x00,0x01,0xff,0x69,0xcc,0xa8,
++ 0x00,0xd3,0x30,0xd2,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,0x49,0xcc,0x87,0x00,0x01,
++ 0x00,0x01,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4a,0xcc,0x82,0x00,0x01,0xff,0x6a,
++ 0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x4b,0xcc,0xa7,0x00,0x01,0xff,0x6b,0xcc,0xa7,
++ 0x00,0xd2,0x1c,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x4c,0xcc,0x81,0x00,0x10,
++ 0x08,0x01,0xff,0x6c,0xcc,0x81,0x00,0x01,0xff,0x4c,0xcc,0xa7,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x6c,0xcc,0xa7,0x00,0x01,0xff,0x4c,0xcc,0x8c,0x00,0x10,0x08,0x01,
++ 0xff,0x6c,0xcc,0x8c,0x00,0x01,0x00,0xcf,0x86,0xd5,0xd4,0xd4,0x60,0xd3,0x30,0xd2,
++ 0x10,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x4e,0xcc,0x81,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x6e,0xcc,0x81,0x00,0x01,0xff,0x4e,0xcc,0xa7,0x00,0x10,
++ 0x08,0x01,0xff,0x6e,0xcc,0xa7,0x00,0x01,0xff,0x4e,0xcc,0x8c,0x00,0xd2,0x10,0x91,
++ 0x0c,0x10,0x08,0x01,0xff,0x6e,0xcc,0x8c,0x00,0x01,0x00,0x01,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x4f,0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0x84,0x00,0x10,0x08,0x01,
++ 0xff,0x4f,0xcc,0x86,0x00,0x01,0xff,0x6f,0xcc,0x86,0x00,0xd3,0x34,0xd2,0x14,0x91,
++ 0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x8b,0x00,0x01,0xff,0x6f,0xcc,0x8b,0x00,0x01,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,0x81,0x00,0x01,0xff,0x72,0xcc,0x81,
++ 0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0xa7,0x00,0x01,0xff,0x72,0xcc,0xa7,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,0x8c,0x00,0x01,0xff,0x72,0xcc,0x8c,
++ 0x00,0x10,0x08,0x01,0xff,0x53,0xcc,0x81,0x00,0x01,0xff,0x73,0xcc,0x81,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x53,0xcc,0x82,0x00,0x01,0xff,0x73,0xcc,0x82,0x00,0x10,
++ 0x08,0x01,0xff,0x53,0xcc,0xa7,0x00,0x01,0xff,0x73,0xcc,0xa7,0x00,0xd4,0x74,0xd3,
++ 0x34,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x53,0xcc,0x8c,0x00,0x01,0xff,0x73,
++ 0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0xa7,0x00,0x01,0xff,0x74,0xcc,0xa7,
++ 0x00,0x91,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,0x8c,0x00,0x01,0xff,0x74,0xcc,0x8c,
++ 0x00,0x01,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x83,0x00,0x01,
++ 0xff,0x75,0xcc,0x83,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x84,0x00,0x01,0xff,0x75,
++ 0xcc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x86,0x00,0x01,0xff,0x75,
++ 0xcc,0x86,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x8a,0x00,0x01,0xff,0x75,0xcc,0x8a,
++ 0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x8b,0x00,0x01,
++ 0xff,0x75,0xcc,0x8b,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xa8,0x00,0x01,0xff,0x75,
++ 0xcc,0xa8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0x82,0x00,0x01,0xff,0x77,
++ 0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x59,0xcc,0x82,0x00,0x01,0xff,0x79,0xcc,0x82,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x59,0xcc,0x88,0x00,0x01,0xff,0x5a,
++ 0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x81,0x00,0x01,0xff,0x5a,0xcc,0x87,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x7a,0xcc,0x87,0x00,0x01,0xff,0x5a,0xcc,0x8c,
++ 0x00,0x10,0x08,0x01,0xff,0x7a,0xcc,0x8c,0x00,0x01,0x00,0xd0,0x4a,0xcf,0x86,0x55,
++ 0x04,0x01,0x00,0xd4,0x2c,0xd3,0x18,0x92,0x14,0x91,0x10,0x10,0x08,0x01,0xff,0x4f,
++ 0xcc,0x9b,0x00,0x01,0xff,0x6f,0xcc,0x9b,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,
++ 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0x55,0xcc,0x9b,0x00,0x93,
++ 0x14,0x92,0x10,0x91,0x0c,0x10,0x08,0x01,0xff,0x75,0xcc,0x9b,0x00,0x01,0x00,0x01,
++ 0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xb4,0xd4,0x24,0x53,0x04,0x01,0x00,0x52,
++ 0x04,0x01,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0x41,0xcc,0x8c,0x00,0x10,
++ 0x08,0x01,0xff,0x61,0xcc,0x8c,0x00,0x01,0xff,0x49,0xcc,0x8c,0x00,0xd3,0x46,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x69,0xcc,0x8c,0x00,0x01,0xff,0x4f,0xcc,0x8c,
++ 0x00,0x10,0x08,0x01,0xff,0x6f,0xcc,0x8c,0x00,0x01,0xff,0x55,0xcc,0x8c,0x00,0xd1,
++ 0x12,0x10,0x08,0x01,0xff,0x75,0xcc,0x8c,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x84,
++ 0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x84,0x00,0x01,0xff,0x55,0xcc,0x88,
++ 0xcc,0x81,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,0xcc,0x81,
++ 0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x8c,0x00,0x10,0x0a,0x01,0xff,0x75,0xcc,0x88,
++ 0xcc,0x8c,0x00,0x01,0xff,0x55,0xcc,0x88,0xcc,0x80,0x00,0xd1,0x0e,0x10,0x0a,0x01,
++ 0xff,0x75,0xcc,0x88,0xcc,0x80,0x00,0x01,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x88,
++ 0xcc,0x84,0x00,0x01,0xff,0x61,0xcc,0x88,0xcc,0x84,0x00,0xd4,0x80,0xd3,0x3a,0xd2,
++ 0x26,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x87,0xcc,0x84,0x00,0x01,0xff,0x61,
++ 0xcc,0x87,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xc3,0x86,0xcc,0x84,0x00,0x01,0xff,
++ 0xc3,0xa6,0xcc,0x84,0x00,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0x47,0xcc,0x8c,
++ 0x00,0x01,0xff,0x67,0xcc,0x8c,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,
++ 0xcc,0x8c,0x00,0x01,0xff,0x6b,0xcc,0x8c,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0xa8,
++ 0x00,0x01,0xff,0x6f,0xcc,0xa8,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0xa8,
++ 0xcc,0x84,0x00,0x01,0xff,0x6f,0xcc,0xa8,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xc6,
++ 0xb7,0xcc,0x8c,0x00,0x01,0xff,0xca,0x92,0xcc,0x8c,0x00,0xd3,0x24,0xd2,0x10,0x91,
++ 0x0c,0x10,0x08,0x01,0xff,0x6a,0xcc,0x8c,0x00,0x01,0x00,0x01,0x00,0x91,0x10,0x10,
++ 0x08,0x01,0xff,0x47,0xcc,0x81,0x00,0x01,0xff,0x67,0xcc,0x81,0x00,0x04,0x00,0xd2,
++ 0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x4e,0xcc,0x80,0x00,0x04,0xff,0x6e,0xcc,0x80,
++ 0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x8a,0xcc,0x81,0x00,0x01,0xff,0x61,0xcc,0x8a,
++ 0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xc3,0x86,0xcc,0x81,0x00,0x01,0xff,
++ 0xc3,0xa6,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xc3,0x98,0xcc,0x81,0x00,0x01,0xff,
++ 0xc3,0xb8,0xcc,0x81,0x00,0xe2,0x07,0x02,0xe1,0xae,0x01,0xe0,0x93,0x01,0xcf,0x86,
++ 0xd5,0xf4,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,
++ 0x8f,0x00,0x01,0xff,0x61,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x91,0x00,
++ 0x01,0xff,0x61,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,0xcc,0x8f,0x00,
++ 0x01,0xff,0x65,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x45,0xcc,0x91,0x00,0x01,0xff,
++ 0x65,0xcc,0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x8f,0x00,
++ 0x01,0xff,0x69,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0x91,0x00,0x01,0xff,
++ 0x69,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0x8f,0x00,0x01,0xff,
++ 0x6f,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x91,0x00,0x01,0xff,0x6f,0xcc,
++ 0x91,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,0xcc,0x8f,0x00,
++ 0x01,0xff,0x72,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0x91,0x00,0x01,0xff,
++ 0x72,0xcc,0x91,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0x8f,0x00,0x01,0xff,
++ 0x75,0xcc,0x8f,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0x91,0x00,0x01,0xff,0x75,0xcc,
++ 0x91,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x04,0xff,0x53,0xcc,0xa6,0x00,0x04,0xff,
++ 0x73,0xcc,0xa6,0x00,0x10,0x08,0x04,0xff,0x54,0xcc,0xa6,0x00,0x04,0xff,0x74,0xcc,
++ 0xa6,0x00,0x51,0x04,0x04,0x00,0x10,0x08,0x04,0xff,0x48,0xcc,0x8c,0x00,0x04,0xff,
++ 0x68,0xcc,0x8c,0x00,0xd4,0x68,0xd3,0x20,0xd2,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,
++ 0x07,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x08,0x04,0xff,0x41,0xcc,0x87,0x00,
++ 0x04,0xff,0x61,0xcc,0x87,0x00,0xd2,0x24,0xd1,0x10,0x10,0x08,0x04,0xff,0x45,0xcc,
++ 0xa7,0x00,0x04,0xff,0x65,0xcc,0xa7,0x00,0x10,0x0a,0x04,0xff,0x4f,0xcc,0x88,0xcc,
++ 0x84,0x00,0x04,0xff,0x6f,0xcc,0x88,0xcc,0x84,0x00,0xd1,0x14,0x10,0x0a,0x04,0xff,
++ 0x4f,0xcc,0x83,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,0x83,0xcc,0x84,0x00,0x10,0x08,
++ 0x04,0xff,0x4f,0xcc,0x87,0x00,0x04,0xff,0x6f,0xcc,0x87,0x00,0x93,0x30,0xd2,0x24,
++ 0xd1,0x14,0x10,0x0a,0x04,0xff,0x4f,0xcc,0x87,0xcc,0x84,0x00,0x04,0xff,0x6f,0xcc,
++ 0x87,0xcc,0x84,0x00,0x10,0x08,0x04,0xff,0x59,0xcc,0x84,0x00,0x04,0xff,0x79,0xcc,
++ 0x84,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x08,0x00,0x08,0x00,0xcf,0x86,
++ 0x95,0x14,0x94,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x08,0x00,0x09,0x00,0x09,0x00,
++ 0x09,0x00,0x01,0x00,0x01,0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x18,
++ 0x53,0x04,0x01,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x04,0x00,
++ 0x11,0x04,0x04,0x00,0x07,0x00,0x01,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x01,0x00,
++ 0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x04,0x00,0x94,0x18,0x53,0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x04,0x00,
++ 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x07,0x00,0x07,0x00,0xe1,0x35,0x01,0xd0,
++ 0x72,0xcf,0x86,0xd5,0x24,0x54,0x04,0x01,0xe6,0xd3,0x10,0x52,0x04,0x01,0xe6,0x91,
++ 0x08,0x10,0x04,0x01,0xe6,0x01,0xe8,0x01,0xdc,0x92,0x0c,0x51,0x04,0x01,0xdc,0x10,
++ 0x04,0x01,0xe8,0x01,0xd8,0x01,0xdc,0xd4,0x2c,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,
++ 0x04,0x01,0xdc,0x01,0xca,0x10,0x04,0x01,0xca,0x01,0xdc,0x51,0x04,0x01,0xdc,0x10,
++ 0x04,0x01,0xdc,0x01,0xca,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0xca,0x01,0xdc,0x01,
++ 0xdc,0x01,0xdc,0xd3,0x08,0x12,0x04,0x01,0xdc,0x01,0x01,0xd2,0x0c,0x91,0x08,0x10,
++ 0x04,0x01,0x01,0x01,0xdc,0x01,0xdc,0x91,0x08,0x10,0x04,0x01,0xdc,0x01,0xe6,0x01,
++ 0xe6,0xcf,0x86,0xd5,0x7f,0xd4,0x47,0xd3,0x2e,0xd2,0x19,0xd1,0x0e,0x10,0x07,0x01,
++ 0xff,0xcc,0x80,0x00,0x01,0xff,0xcc,0x81,0x00,0x10,0x04,0x01,0xe6,0x01,0xff,0xcc,
++ 0x93,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xcc,0x88,0xcc,0x81,0x00,0x01,0xf0,0x10,
++ 0x04,0x04,0xe6,0x04,0xdc,0xd2,0x08,0x11,0x04,0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,
++ 0x04,0x04,0xe6,0x04,0xdc,0x10,0x04,0x04,0xdc,0x06,0xff,0x00,0xd3,0x18,0xd2,0x0c,
++ 0x51,0x04,0x07,0xe6,0x10,0x04,0x07,0xe6,0x07,0xdc,0x51,0x04,0x07,0xdc,0x10,0x04,
++ 0x07,0xdc,0x07,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,0x08,0xe8,0x08,0xdc,0x10,0x04,
++ 0x08,0xdc,0x08,0xe6,0xd1,0x08,0x10,0x04,0x08,0xe9,0x07,0xea,0x10,0x04,0x07,0xea,
++ 0x07,0xe9,0xd4,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0xea,0x10,0x04,0x04,0xe9,
++ 0x06,0xe6,0x06,0xe6,0x06,0xe6,0xd3,0x13,0x52,0x04,0x0a,0x00,0x91,0x0b,0x10,0x07,
++ 0x01,0xff,0xca,0xb9,0x00,0x01,0x00,0x0a,0x00,0xd2,0x0c,0x51,0x04,0x00,0x00,0x10,
++ 0x04,0x01,0x00,0x09,0x00,0x51,0x04,0x09,0x00,0x10,0x06,0x01,0xff,0x3b,0x00,0x10,
++ 0x00,0xd0,0xe1,0xcf,0x86,0xd5,0x7a,0xd4,0x5f,0xd3,0x21,0x52,0x04,0x00,0x00,0xd1,
++ 0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xc2,0xa8,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,
++ 0xce,0x91,0xcc,0x81,0x00,0x01,0xff,0xc2,0xb7,0x00,0xd2,0x1f,0xd1,0x12,0x10,0x09,
++ 0x01,0xff,0xce,0x95,0xcc,0x81,0x00,0x01,0xff,0xce,0x97,0xcc,0x81,0x00,0x10,0x09,
++ 0x01,0xff,0xce,0x99,0xcc,0x81,0x00,0x00,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xce,
++ 0x9f,0xcc,0x81,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xa5,0xcc,0x81,0x00,0x01,
++ 0xff,0xce,0xa9,0xcc,0x81,0x00,0x93,0x17,0x92,0x13,0x91,0x0f,0x10,0x0b,0x01,0xff,
++ 0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,
++ 0x4a,0xd3,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,
++ 0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x88,0x00,
++ 0x01,0xff,0xce,0xa5,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,
++ 0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,
++ 0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0x93,0x17,0x92,0x13,0x91,0x0f,0x10,
++ 0x0b,0x01,0xff,0xcf,0x85,0xcc,0x88,0xcc,0x81,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0xcf,0x86,0xd5,0x7b,0xd4,0x39,0x53,0x04,0x01,0x00,0xd2,0x16,0x51,0x04,
++ 0x01,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x88,0x00,0x01,0xff,0xcf,0x85,0xcc,
++ 0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x01,0xff,0xcf,
++ 0x85,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0x0a,0x00,0xd3,
++ 0x26,0xd2,0x11,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xcf,0x92,0xcc,
++ 0x81,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xcf,0x92,0xcc,0x88,0x00,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x04,0x00,0xd2,0x0c,0x51,0x04,0x06,0x00,0x10,0x04,0x01,0x00,0x04,
++ 0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,0x04,0x01,0x00,0x04,0x00,0xd4,
++ 0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x01,0x00,0x01,
++ 0x00,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x05,0x00,0x10,0x04,0x06,
++ 0x00,0x07,0x00,0x12,0x04,0x07,0x00,0x08,0x00,0xe3,0x47,0x04,0xe2,0xbe,0x02,0xe1,
++ 0x07,0x01,0xd0,0x8b,0xcf,0x86,0xd5,0x6c,0xd4,0x53,0xd3,0x30,0xd2,0x1f,0xd1,0x12,
++ 0x10,0x09,0x04,0xff,0xd0,0x95,0xcc,0x80,0x00,0x01,0xff,0xd0,0x95,0xcc,0x88,0x00,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x93,0xcc,0x81,0x00,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x01,0xff,0xd0,0x86,0xcc,0x88,0x00,0x52,0x04,0x01,0x00,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xd0,0x9a,0xcc,0x81,0x00,0x04,0xff,0xd0,0x98,0xcc,0x80,0x00,
++ 0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x86,0x00,0x01,0x00,0x53,0x04,0x01,0x00,0x92,
++ 0x11,0x91,0x0d,0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x98,0xcc,0x86,0x00,0x01,0x00,
++ 0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x92,0x11,0x91,0x0d,0x10,0x04,
++ 0x01,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x86,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,
++ 0x57,0x54,0x04,0x01,0x00,0xd3,0x30,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,
++ 0xb5,0xcc,0x80,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x88,0x00,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0xd0,0xb3,0xcc,0x81,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
++ 0xd1,0x96,0xcc,0x88,0x00,0x52,0x04,0x01,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,
++ 0xba,0xcc,0x81,0x00,0x04,0xff,0xd0,0xb8,0xcc,0x80,0x00,0x10,0x09,0x01,0xff,0xd1,
++ 0x83,0xcc,0x86,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x93,0x1a,0x52,0x04,0x01,0x00,
++ 0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd1,0xb4,0xcc,0x8f,0x00,0x01,0xff,0xd1,
++ 0xb5,0xcc,0x8f,0x00,0x01,0x00,0xd0,0x2e,0xcf,0x86,0x95,0x28,0x94,0x24,0xd3,0x18,
++ 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xe6,0x51,0x04,0x01,0xe6,
++ 0x10,0x04,0x01,0xe6,0x0a,0xe6,0x92,0x08,0x11,0x04,0x04,0x00,0x06,0x00,0x04,0x00,
++ 0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0xbe,0xd4,0x4a,0xd3,0x2a,0xd2,0x1a,0xd1,0x0d,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xd0,0x96,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,
++ 0xb6,0xcc,0x86,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x10,0x04,
++ 0x06,0x00,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x10,0x04,
++ 0x06,0x00,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x10,0x04,0x06,0x00,
++ 0x09,0x00,0xd3,0x3a,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x90,0xcc,0x86,
++ 0x00,0x01,0xff,0xd0,0xb0,0xcc,0x86,0x00,0x10,0x09,0x01,0xff,0xd0,0x90,0xcc,0x88,
++ 0x00,0x01,0xff,0xd0,0xb0,0xcc,0x88,0x00,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,
++ 0xd0,0x95,0xcc,0x86,0x00,0x01,0xff,0xd0,0xb5,0xcc,0x86,0x00,0xd2,0x16,0x51,0x04,
++ 0x01,0x00,0x10,0x09,0x01,0xff,0xd3,0x98,0xcc,0x88,0x00,0x01,0xff,0xd3,0x99,0xcc,
++ 0x88,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd0,0x96,0xcc,0x88,0x00,0x01,0xff,0xd0,
++ 0xb6,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0x97,0xcc,0x88,0x00,0x01,0xff,0xd0,
++ 0xb7,0xcc,0x88,0x00,0xd4,0x74,0xd3,0x3a,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,
++ 0x01,0xff,0xd0,0x98,0xcc,0x84,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x84,0x00,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xd0,0x98,0xcc,0x88,0x00,0x01,0xff,0xd0,0xb8,0xcc,0x88,0x00,
++ 0x10,0x09,0x01,0xff,0xd0,0x9e,0xcc,0x88,0x00,0x01,0xff,0xd0,0xbe,0xcc,0x88,0x00,
++ 0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd3,0xa8,0xcc,0x88,0x00,0x01,
++ 0xff,0xd3,0xa9,0xcc,0x88,0x00,0xd1,0x12,0x10,0x09,0x04,0xff,0xd0,0xad,0xcc,0x88,
++ 0x00,0x04,0xff,0xd1,0x8d,0xcc,0x88,0x00,0x10,0x09,0x01,0xff,0xd0,0xa3,0xcc,0x84,
++ 0x00,0x01,0xff,0xd1,0x83,0xcc,0x84,0x00,0xd3,0x3a,0xd2,0x24,0xd1,0x12,0x10,0x09,
++ 0x01,0xff,0xd0,0xa3,0xcc,0x88,0x00,0x01,0xff,0xd1,0x83,0xcc,0x88,0x00,0x10,0x09,
++ 0x01,0xff,0xd0,0xa3,0xcc,0x8b,0x00,0x01,0xff,0xd1,0x83,0xcc,0x8b,0x00,0x91,0x12,
++ 0x10,0x09,0x01,0xff,0xd0,0xa7,0xcc,0x88,0x00,0x01,0xff,0xd1,0x87,0xcc,0x88,0x00,
++ 0x08,0x00,0x92,0x16,0x91,0x12,0x10,0x09,0x01,0xff,0xd0,0xab,0xcc,0x88,0x00,0x01,
++ 0xff,0xd1,0x8b,0xcc,0x88,0x00,0x09,0x00,0x09,0x00,0xd1,0x74,0xd0,0x36,0xcf,0x86,
++ 0xd5,0x10,0x54,0x04,0x06,0x00,0x93,0x08,0x12,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,
++ 0xd4,0x10,0x93,0x0c,0x52,0x04,0x0a,0x00,0x11,0x04,0x0b,0x00,0x0c,0x00,0x10,0x00,
++ 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0xcf,0x86,0xd5,0x24,0x54,0x04,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,
++ 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0xba,
++ 0xcf,0x86,0xd5,0x4c,0xd4,0x24,0x53,0x04,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
++ 0x14,0x00,0x01,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,
++ 0x10,0x00,0x10,0x04,0x10,0x00,0x0d,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
++ 0x00,0x00,0x02,0xdc,0x02,0xe6,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xdc,0x02,0xe6,
++ 0x92,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xde,0x02,0xdc,0x02,0xe6,0xd4,0x2c,
++ 0xd3,0x10,0x92,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x08,0xdc,0x02,0xdc,0x02,0xdc,
++ 0xd2,0x0c,0x51,0x04,0x02,0xe6,0x10,0x04,0x02,0xdc,0x02,0xe6,0xd1,0x08,0x10,0x04,
++ 0x02,0xe6,0x02,0xde,0x10,0x04,0x02,0xe4,0x02,0xe6,0xd3,0x20,0xd2,0x10,0xd1,0x08,
++ 0x10,0x04,0x01,0x0a,0x01,0x0b,0x10,0x04,0x01,0x0c,0x01,0x0d,0xd1,0x08,0x10,0x04,
++ 0x01,0x0e,0x01,0x0f,0x10,0x04,0x01,0x10,0x01,0x11,0xd2,0x10,0xd1,0x08,0x10,0x04,
++ 0x01,0x12,0x01,0x13,0x10,0x04,0x09,0x13,0x01,0x14,0xd1,0x08,0x10,0x04,0x01,0x15,
++ 0x01,0x16,0x10,0x04,0x01,0x00,0x01,0x17,0xcf,0x86,0xd5,0x28,0x94,0x24,0x93,0x20,
++ 0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0x18,0x10,0x04,0x01,0x19,0x01,0x00,
++ 0xd1,0x08,0x10,0x04,0x02,0xe6,0x08,0xdc,0x10,0x04,0x08,0x00,0x08,0x12,0x00,0x00,
++ 0x01,0x00,0xd4,0x1c,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,
++ 0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x14,0x00,0x93,0x10,
++ 0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0xe2,0xfb,0x01,0xe1,0x2b,0x01,0xd0,0xa8,0xcf,0x86,0xd5,0x55,0xd4,0x28,0xd3,0x10,
++ 0x52,0x04,0x07,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,0x10,0x00,0x0a,0x00,0xd2,0x0c,
++ 0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,0x08,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x07,0x00,0x07,0x00,0xd3,0x0c,0x52,0x04,0x07,0xe6,0x11,0x04,0x07,0xe6,0x0a,0xe6,
++ 0xd2,0x10,0xd1,0x08,0x10,0x04,0x0a,0x1e,0x0a,0x1f,0x10,0x04,0x0a,0x20,0x01,0x00,
++ 0xd1,0x09,0x10,0x05,0x0f,0xff,0x00,0x00,0x00,0x10,0x04,0x08,0x00,0x01,0x00,0xd4,
++ 0x3d,0x93,0x39,0xd2,0x1a,0xd1,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,0x10,0x09,0x01,
++ 0xff,0xd8,0xa7,0xd9,0x93,0x00,0x01,0xff,0xd8,0xa7,0xd9,0x94,0x00,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xd9,0x88,0xd9,0x94,0x00,0x01,0xff,0xd8,0xa7,0xd9,0x95,0x00,0x10,
++ 0x09,0x01,0xff,0xd9,0x8a,0xd9,0x94,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,
++ 0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0a,0x00,0x0a,0x00,0xcf,0x86,
++ 0xd5,0x5c,0xd4,0x20,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,
++ 0x01,0x00,0x01,0x1b,0xd1,0x08,0x10,0x04,0x01,0x1c,0x01,0x1d,0x10,0x04,0x01,0x1e,
++ 0x01,0x1f,0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x20,0x01,0x21,0x10,0x04,
++ 0x01,0x22,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,0xe6,0x04,0xdc,0x10,0x04,0x07,0xdc,
++ 0x07,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x07,0xe6,0x08,0xe6,0x08,0xe6,0xd1,0x08,
++ 0x10,0x04,0x08,0xdc,0x08,0xe6,0x10,0x04,0x08,0xe6,0x0c,0xdc,0xd4,0x10,0x53,0x04,
++ 0x01,0x00,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x06,0x00,0x93,0x10,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x01,0x23,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x22,
++ 0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x08,
++ 0x11,0x04,0x04,0x00,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x04,0x00,
++ 0xcf,0x86,0xd5,0x5b,0xd4,0x2e,0xd3,0x1e,0x92,0x1a,0xd1,0x0d,0x10,0x09,0x01,0xff,
++ 0xdb,0x95,0xd9,0x94,0x00,0x01,0x00,0x10,0x09,0x01,0xff,0xdb,0x81,0xd9,0x94,0x00,
++ 0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x04,0x00,0xd3,0x19,0xd2,0x11,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
++ 0xdb,0x92,0xd9,0x94,0x00,0x11,0x04,0x01,0x00,0x01,0xe6,0x52,0x04,0x01,0xe6,0xd1,
++ 0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xe6,0xd4,0x38,0xd3,
++ 0x1c,0xd2,0x0c,0x51,0x04,0x01,0xe6,0x10,0x04,0x01,0xe6,0x01,0xdc,0xd1,0x08,0x10,
++ 0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xe6,0xd2,0x10,0xd1,0x08,0x10,
++ 0x04,0x01,0xe6,0x01,0x00,0x10,0x04,0x01,0xdc,0x01,0xe6,0x91,0x08,0x10,0x04,0x01,
++ 0xe6,0x01,0xdc,0x07,0x00,0x53,0x04,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x04,
++ 0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x07,0x00,0xd1,0xc8,0xd0,0x76,0xcf,
++ 0x86,0xd5,0x28,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,
++ 0x00,0x10,0x04,0x00,0x00,0x04,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,
++ 0x00,0x04,0x24,0x04,0x00,0x04,0x00,0x04,0x00,0xd4,0x14,0x53,0x04,0x04,0x00,0x52,
++ 0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x07,0x00,0x07,0x00,0xd3,0x1c,0xd2,
++ 0x0c,0x91,0x08,0x10,0x04,0x04,0xe6,0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,
++ 0xdc,0x04,0xe6,0x10,0x04,0x04,0xe6,0x04,0xdc,0xd2,0x0c,0x51,0x04,0x04,0xdc,0x10,
++ 0x04,0x04,0xe6,0x04,0xdc,0xd1,0x08,0x10,0x04,0x04,0xdc,0x04,0xe6,0x10,0x04,0x04,
++ 0xdc,0x04,0xe6,0xcf,0x86,0xd5,0x3c,0x94,0x38,0xd3,0x1c,0xd2,0x0c,0x51,0x04,0x04,
++ 0xe6,0x10,0x04,0x04,0xdc,0x04,0xe6,0xd1,0x08,0x10,0x04,0x04,0xdc,0x04,0xe6,0x10,
++ 0x04,0x04,0xdc,0x04,0xe6,0xd2,0x10,0xd1,0x08,0x10,0x04,0x04,0xdc,0x04,0xe6,0x10,
++ 0x04,0x04,0xe6,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,0x08,
++ 0x00,0x94,0x10,0x53,0x04,0x08,0x00,0x52,0x04,0x08,0x00,0x11,0x04,0x08,0x00,0x0a,
++ 0x00,0x0a,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x93,
++ 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x06,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0xcf,0x86,0x55,0x04,0x09,0x00,0xd4,0x14,0x53,0x04,0x09,0x00,0x92,0x0c,0x51,
++ 0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xe6,0x09,0xe6,0xd3,0x10,0x92,0x0c,0x51,
++ 0x04,0x09,0xe6,0x10,0x04,0x09,0xdc,0x09,0xe6,0x09,0x00,0xd2,0x0c,0x51,0x04,0x09,
++ 0x00,0x10,0x04,0x09,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x14,0xdc,0x14,
++ 0x00,0xe4,0xf8,0x57,0xe3,0x45,0x3f,0xe2,0xf4,0x3e,0xe1,0xc7,0x2c,0xe0,0x21,0x10,
++ 0xcf,0x86,0xc5,0xe4,0x80,0x08,0xe3,0xcb,0x03,0xe2,0x61,0x01,0xd1,0x94,0xd0,0x5a,
++ 0xcf,0x86,0xd5,0x20,0x54,0x04,0x0b,0x00,0xd3,0x0c,0x52,0x04,0x0b,0x00,0x11,0x04,
++ 0x0b,0x00,0x0b,0xe6,0x92,0x0c,0x51,0x04,0x0b,0xe6,0x10,0x04,0x0b,0x00,0x0b,0xe6,
++ 0x0b,0xe6,0xd4,0x24,0xd3,0x10,0x52,0x04,0x0b,0xe6,0x91,0x08,0x10,0x04,0x0b,0x00,
++ 0x0b,0xe6,0x0b,0xe6,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,0x0b,0xe6,
++ 0x11,0x04,0x0b,0xe6,0x00,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,
++ 0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xcf,0x86,0xd5,0x20,0x54,0x04,0x0c,0x00,
++ 0x53,0x04,0x0c,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x0c,0xdc,0x0c,0xdc,
++ 0x51,0x04,0x00,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x94,0x14,0x53,0x04,0x13,0x00,
++ 0x92,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0xd0,0x4a,0xcf,0x86,0x55,0x04,0x00,0x00,0xd4,0x20,0xd3,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0d,0x00,0x10,0x00,0x0d,0x00,0x0d,0x00,0x52,0x04,0x0d,0x00,0x91,0x08,
++ 0x10,0x04,0x0d,0x00,0x10,0x00,0x10,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x10,0x00,
++ 0x10,0x04,0x10,0x00,0x11,0x00,0x91,0x08,0x10,0x04,0x11,0x00,0x00,0x00,0x12,0x00,
++ 0x52,0x04,0x12,0x00,0x11,0x04,0x12,0x00,0x00,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,
++ 0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x14,0xdc,
++ 0x12,0xe6,0x12,0xe6,0xd4,0x30,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x12,0xe6,0x10,0x04,
++ 0x12,0x00,0x11,0xdc,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xdc,0x0d,0xe6,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x0d,0xe6,0x91,0x08,0x10,0x04,0x0d,0xe6,
++ 0x0d,0xdc,0x0d,0xdc,0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0d,0x1b,0x0d,0x1c,
++ 0x10,0x04,0x0d,0x1d,0x0d,0xe6,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xdc,0x0d,0xe6,
++ 0xd2,0x10,0xd1,0x08,0x10,0x04,0x0d,0xe6,0x0d,0xdc,0x10,0x04,0x0d,0xdc,0x0d,0xe6,
++ 0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xe6,0x10,0xe6,0xe1,0x3a,0x01,0xd0,0x77,0xcf,
++ 0x86,0xd5,0x20,0x94,0x1c,0x93,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x01,
++ 0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x07,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0xd4,0x1b,0x53,0x04,0x01,0x00,0x92,0x13,0x91,0x0f,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0xe0,0xa4,0xa8,0xe0,0xa4,0xbc,0x00,0x01,0x00,0x01,0x00,0xd3,0x26,0xd2,0x13,
++ 0x91,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xa4,0xb0,0xe0,0xa4,0xbc,0x00,0x01,
++ 0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xa4,0xb3,0xe0,0xa4,0xbc,0x00,0x01,0x00,
++ 0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x91,0x08,0x10,0x04,0x01,0x07,
++ 0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x8c,0xd4,0x18,0x53,0x04,0x01,0x00,0x52,0x04,
++ 0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x10,0x04,0x0b,0x00,0x0c,0x00,
++ 0xd3,0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x01,0xe6,0x10,0x04,0x01,0xdc,
++ 0x01,0xe6,0x91,0x08,0x10,0x04,0x01,0xe6,0x0b,0x00,0x0c,0x00,0xd2,0x2c,0xd1,0x16,
++ 0x10,0x0b,0x01,0xff,0xe0,0xa4,0x95,0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0x96,
++ 0xe0,0xa4,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa4,0x97,0xe0,0xa4,0xbc,0x00,0x01,
++ 0xff,0xe0,0xa4,0x9c,0xe0,0xa4,0xbc,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,0xa4,
++ 0xa1,0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0xa2,0xe0,0xa4,0xbc,0x00,0x10,0x0b,
++ 0x01,0xff,0xe0,0xa4,0xab,0xe0,0xa4,0xbc,0x00,0x01,0xff,0xe0,0xa4,0xaf,0xe0,0xa4,
++ 0xbc,0x00,0x54,0x04,0x01,0x00,0xd3,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,
++ 0x0a,0x00,0x10,0x04,0x0a,0x00,0x0c,0x00,0x0c,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
++ 0x10,0x00,0x0b,0x00,0x10,0x04,0x0b,0x00,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,
++ 0x08,0x00,0x09,0x00,0xd0,0x86,0xcf,0x86,0xd5,0x44,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,
++ 0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,
++ 0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,
++ 0xd3,0x18,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,
++ 0x91,0x08,0x10,0x04,0x01,0x07,0x07,0x00,0x01,0x00,0xcf,0x86,0xd5,0x7b,0xd4,0x42,
++ 0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x01,0x00,0xd2,0x17,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x01,0xff,0xe0,0xa7,0x87,0xe0,0xa6,0xbe,0x00,0xd1,0x0f,0x10,0x0b,0x01,
++ 0xff,0xe0,0xa7,0x87,0xe0,0xa7,0x97,0x00,0x01,0x09,0x10,0x04,0x08,0x00,0x00,0x00,
++ 0xd3,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0x52,0x04,0x00,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe0,0xa6,0xa1,0xe0,0xa6,0xbc,
++ 0x00,0x01,0xff,0xe0,0xa6,0xa2,0xe0,0xa6,0xbc,0x00,0x10,0x04,0x00,0x00,0x01,0xff,
++ 0xe0,0xa6,0xaf,0xe0,0xa6,0xbc,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x01,0x00,0x11,
++ 0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x01,0x00,0x0b,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x14,0xe6,0x00,
++ 0x00,0xe2,0x48,0x02,0xe1,0x4f,0x01,0xd0,0xa4,0xcf,0x86,0xd5,0x4c,0xd4,0x34,0xd3,
++ 0x1c,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x07,0x00,0x10,0x04,0x01,0x00,0x07,
++ 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,
++ 0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x2e,0xd2,0x17,0xd1,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xa8,0xb2,
++ 0xe0,0xa8,0xbc,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,
++ 0xe0,0xa8,0xb8,0xe0,0xa8,0xbc,0x00,0x00,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,
++ 0x00,0x91,0x08,0x10,0x04,0x01,0x07,0x00,0x00,0x01,0x00,0xcf,0x86,0xd5,0x80,0xd4,
++ 0x34,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x51,
++ 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,
++ 0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x01,
++ 0x09,0x00,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x00,
++ 0x00,0x00,0x00,0xd2,0x25,0xd1,0x0f,0x10,0x04,0x00,0x00,0x01,0xff,0xe0,0xa8,0x96,
++ 0xe0,0xa8,0xbc,0x00,0x10,0x0b,0x01,0xff,0xe0,0xa8,0x97,0xe0,0xa8,0xbc,0x00,0x01,
++ 0xff,0xe0,0xa8,0x9c,0xe0,0xa8,0xbc,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,
++ 0x10,0x0b,0x01,0xff,0xe0,0xa8,0xab,0xe0,0xa8,0xbc,0x00,0x00,0x00,0xd4,0x10,0x93,
++ 0x0c,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,0x14,0x52,
++ 0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x0a,0x00,0x10,0x04,0x14,0x00,0x00,
++ 0x00,0x00,0x00,0xd0,0x82,0xcf,0x86,0xd5,0x40,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x91,
++ 0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,
++ 0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x07,0x00,0x01,0x00,0x10,
++ 0x04,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x00,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x18,0xd2,0x0c,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,
++ 0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,
++ 0x07,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x10,0x52,0x04,0x01,
++ 0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x01,0x09,0x00,
++ 0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0xd4,0x18,0x93,0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x07,
++ 0x00,0x07,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x0d,0x00,0x07,0x00,0x00,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x00,0x00,0x11,0x00,0x13,0x00,0x13,0x00,0xe1,0x24,0x01,0xd0,0x86,0xcf,0x86,
++ 0xd5,0x44,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,
++ 0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x93,0x14,
++ 0x92,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
++ 0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x07,0x00,0x01,0x00,
++ 0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,0x07,0x01,0x00,
++ 0x01,0x00,0xcf,0x86,0xd5,0x73,0xd4,0x45,0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,
++ 0x10,0x04,0x0a,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x0f,
++ 0x10,0x0b,0x01,0xff,0xe0,0xad,0x87,0xe0,0xad,0x96,0x00,0x00,0x00,0x10,0x04,0x00,
++ 0x00,0x01,0xff,0xe0,0xad,0x87,0xe0,0xac,0xbe,0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,
++ 0xe0,0xad,0x87,0xe0,0xad,0x97,0x00,0x01,0x09,0x00,0x00,0xd3,0x0c,0x52,0x04,0x00,
++ 0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x52,0x04,0x00,0x00,0xd1,0x16,0x10,0x0b,0x01,
++ 0xff,0xe0,0xac,0xa1,0xe0,0xac,0xbc,0x00,0x01,0xff,0xe0,0xac,0xa2,0xe0,0xac,0xbc,
++ 0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd4,0x14,0x93,0x10,0xd2,0x08,0x11,0x04,0x01,
++ 0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x07,0x00,0x0c,0x00,0x0c,0x00,0x00,0x00,0xd0,0xb1,0xcf,
++ 0x86,0xd5,0x63,0xd4,0x28,0xd3,0x14,0xd2,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x91,
++ 0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd3,0x1f,0xd2,0x0c,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,
++ 0xae,0x92,0xe0,0xaf,0x97,0x00,0x01,0x00,0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
++ 0x00,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x01,0x00,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd2,0x0c,
++ 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,
++ 0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x08,0x00,0x01,0x00,
++ 0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xcf,0x86,
++ 0xd5,0x61,0xd4,0x45,0xd3,0x14,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x08,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xaf,0x86,0xe0,0xae,0xbe,0x00,0x01,0xff,0xe0,
++ 0xaf,0x87,0xe0,0xae,0xbe,0x00,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xaf,0x86,0xe0,
++ 0xaf,0x97,0x00,0x01,0x09,0x00,0x00,0x93,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0a,
++ 0x00,0x00,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x00,
++ 0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x08,
++ 0x00,0x01,0x00,0x01,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x07,0x00,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x00,
++ 0x00,0x00,0x00,0xe3,0x1c,0x04,0xe2,0x1a,0x02,0xd1,0xf3,0xd0,0x76,0xcf,0x86,0xd5,
++ 0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,
++ 0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x91,
++ 0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,
++ 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,
++ 0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x01,0x00,0x01,0x00,0xd2,
++ 0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x01,
++ 0x00,0xcf,0x86,0xd5,0x53,0xd4,0x2f,0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,
++ 0x04,0x01,0x00,0x00,0x00,0x01,0x00,0xd2,0x13,0x91,0x0f,0x10,0x0b,0x01,0xff,0xe0,
++ 0xb1,0x86,0xe0,0xb1,0x96,0x00,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x01,0x09,0x00,0x00,0xd3,0x14,0x52,0x04,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,
++ 0x01,0x54,0x10,0x04,0x01,0x5b,0x00,0x00,0x92,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,
++ 0x11,0x00,0x00,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0xd2,0x08,0x11,0x04,0x01,0x00,
++ 0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,0x10,0x52,0x04,0x00,0x00,
++ 0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x15,0x00,0x0a,0x00,0xd0,0x76,0xcf,0x86,
++ 0xd5,0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x12,0x00,0x10,0x00,
++ 0x01,0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,
++ 0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,
++ 0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,
++ 0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,
++ 0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x07,0x07,0x07,0x00,
++ 0x01,0x00,0xcf,0x86,0xd5,0x82,0xd4,0x5e,0xd3,0x2a,0xd2,0x13,0x91,0x0f,0x10,0x0b,
++ 0x01,0xff,0xe0,0xb2,0xbf,0xe0,0xb3,0x95,0x00,0x01,0x00,0x01,0x00,0xd1,0x08,0x10,
++ 0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe0,0xb3,0x86,0xe0,0xb3,
++ 0x95,0x00,0xd2,0x28,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xb3,0x86,0xe0,0xb3,0x96,
++ 0x00,0x00,0x00,0x10,0x0b,0x01,0xff,0xe0,0xb3,0x86,0xe0,0xb3,0x82,0x00,0x01,0xff,
++ 0xe0,0xb3,0x86,0xe0,0xb3,0x82,0xe0,0xb3,0x95,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
++ 0x01,0x09,0x00,0x00,0xd3,0x14,0x52,0x04,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,
++ 0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,
++ 0x10,0x04,0x01,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0xd2,0x08,0x11,0x04,0x01,0x00,
++ 0x09,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,
++ 0x10,0x04,0x00,0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0xe1,0x06,0x01,0xd0,0x6e,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x91,
++ 0x08,0x10,0x04,0x13,0x00,0x10,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x01,
++ 0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,
++ 0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,
++ 0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,
++ 0x00,0x0c,0x00,0x01,0x00,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x0c,0x00,0x13,0x09,0x91,0x08,0x10,0x04,0x13,0x09,0x0a,0x00,0x01,
++ 0x00,0xcf,0x86,0xd5,0x65,0xd4,0x45,0xd3,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,
++ 0x04,0x0a,0x00,0x00,0x00,0x01,0x00,0xd2,0x1e,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,
++ 0x00,0x10,0x0b,0x01,0xff,0xe0,0xb5,0x86,0xe0,0xb4,0xbe,0x00,0x01,0xff,0xe0,0xb5,
++ 0x87,0xe0,0xb4,0xbe,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe0,0xb5,0x86,0xe0,0xb5,
++ 0x97,0x00,0x01,0x09,0x10,0x04,0x0c,0x00,0x12,0x00,0xd3,0x10,0x52,0x04,0x00,0x00,
++ 0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x01,0x00,0x52,0x04,0x12,0x00,0x51,0x04,
++ 0x12,0x00,0x10,0x04,0x12,0x00,0x11,0x00,0xd4,0x14,0x93,0x10,0xd2,0x08,0x11,0x04,
++ 0x01,0x00,0x0a,0x00,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x0c,0x52,0x04,
++ 0x0a,0x00,0x11,0x04,0x0a,0x00,0x12,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x12,0x00,
++ 0x0a,0x00,0x0a,0x00,0x0a,0x00,0xd0,0x5a,0xcf,0x86,0xd5,0x34,0xd4,0x18,0x93,0x14,
++ 0xd2,0x08,0x11,0x04,0x00,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x04,0x00,
++ 0x04,0x00,0x04,0x00,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,
++ 0x04,0x00,0x00,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x54,0x04,
++ 0x04,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,0x00,0x00,0x04,0x00,
++ 0x04,0x00,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x04,0x00,0x00,0x00,
++ 0xcf,0x86,0xd5,0x77,0xd4,0x28,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,
++ 0x10,0x04,0x04,0x00,0x00,0x00,0xd2,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x04,0x09,
++ 0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x04,0x00,0xd3,0x14,0x52,0x04,
++ 0x04,0x00,0xd1,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x10,0x04,0x04,0x00,0x00,0x00,
++ 0xd2,0x13,0x51,0x04,0x04,0x00,0x10,0x0b,0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x8a,
++ 0x00,0x04,0x00,0xd1,0x19,0x10,0x0b,0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x8f,0x00,
++ 0x04,0xff,0xe0,0xb7,0x99,0xe0,0xb7,0x8f,0xe0,0xb7,0x8a,0x00,0x10,0x0b,0x04,0xff,
++ 0xe0,0xb7,0x99,0xe0,0xb7,0x9f,0x00,0x04,0x00,0xd4,0x10,0x93,0x0c,0x52,0x04,0x00,
++ 0x00,0x11,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x93,0x14,0xd2,0x08,0x11,0x04,0x00,
++ 0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xe2,
++ 0x31,0x01,0xd1,0x58,0xd0,0x3a,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x67,0x10,0x04,
++ 0x01,0x09,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xcf,0x86,
++ 0x95,0x18,0xd4,0x0c,0x53,0x04,0x01,0x00,0x12,0x04,0x01,0x6b,0x01,0x00,0x53,0x04,
++ 0x01,0x00,0x12,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd0,0x9e,0xcf,0x86,0xd5,0x54,
++ 0xd4,0x3c,0xd3,0x20,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x04,
++ 0x01,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x15,0x00,
++ 0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x15,0x00,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,0x15,0x00,0xd3,0x08,0x12,0x04,
++ 0x15,0x00,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0xd4,0x30,0xd3,0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,
++ 0x01,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0xd2,0x08,0x11,0x04,0x15,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x15,0x00,0x01,0x00,
++ 0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x76,0x10,0x04,0x15,0x09,
++ 0x01,0x00,0x11,0x04,0x01,0x00,0x00,0x00,0xcf,0x86,0x95,0x34,0xd4,0x20,0xd3,0x14,
++ 0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x52,0x04,0x01,0x7a,0x11,0x04,0x01,0x00,0x00,0x00,0x53,0x04,0x01,0x00,
++ 0xd2,0x08,0x11,0x04,0x01,0x00,0x00,0x00,0x11,0x04,0x01,0x00,0x0d,0x00,0x00,0x00,
++ 0xe1,0x2b,0x01,0xd0,0x3e,0xcf,0x86,0xd5,0x14,0x54,0x04,0x02,0x00,0x53,0x04,0x02,
++ 0x00,0x92,0x08,0x11,0x04,0x02,0xdc,0x02,0x00,0x02,0x00,0x54,0x04,0x02,0x00,0xd3,
++ 0x14,0x52,0x04,0x02,0x00,0xd1,0x08,0x10,0x04,0x02,0x00,0x02,0xdc,0x10,0x04,0x02,
++ 0x00,0x02,0xdc,0x92,0x0c,0x91,0x08,0x10,0x04,0x02,0x00,0x02,0xd8,0x02,0x00,0x02,
++ 0x00,0xcf,0x86,0xd5,0x73,0xd4,0x36,0xd3,0x17,0x92,0x13,0x51,0x04,0x02,0x00,0x10,
++ 0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x82,0xe0,0xbe,0xb7,0x00,0x02,0x00,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x00,0x00,0x02,0x00,0x02,0x00,0x91,0x0f,0x10,0x04,0x02,0x00,
++ 0x02,0xff,0xe0,0xbd,0x8c,0xe0,0xbe,0xb7,0x00,0x02,0x00,0xd3,0x26,0xd2,0x13,0x51,
++ 0x04,0x02,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbd,0x91,0xe0,0xbe,0xb7,0x00,0x02,0x00,
++ 0x51,0x04,0x02,0x00,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x96,0xe0,0xbe,0xb7,
++ 0x00,0x52,0x04,0x02,0x00,0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,0xbd,0x9b,0xe0,0xbe,
++ 0xb7,0x00,0x02,0x00,0x02,0x00,0xd4,0x27,0x53,0x04,0x02,0x00,0xd2,0x17,0xd1,0x0f,
++ 0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbd,0x80,0xe0,0xbe,0xb5,0x00,0x10,0x04,0x04,
++ 0x00,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0xd3,0x35,0xd2,
++ 0x17,0xd1,0x08,0x10,0x04,0x00,0x00,0x02,0x81,0x10,0x04,0x02,0x82,0x02,0xff,0xe0,
++ 0xbd,0xb1,0xe0,0xbd,0xb2,0x00,0xd1,0x0f,0x10,0x04,0x02,0x84,0x02,0xff,0xe0,0xbd,
++ 0xb1,0xe0,0xbd,0xb4,0x00,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xb2,0xe0,0xbe,0x80,0x00,
++ 0x02,0x00,0xd2,0x13,0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xb3,0xe0,0xbe,0x80,
++ 0x00,0x02,0x00,0x02,0x82,0x11,0x04,0x02,0x82,0x02,0x00,0xd0,0xd3,0xcf,0x86,0xd5,
++ 0x65,0xd4,0x27,0xd3,0x1f,0xd2,0x13,0x91,0x0f,0x10,0x04,0x02,0x82,0x02,0xff,0xe0,
++ 0xbd,0xb1,0xe0,0xbe,0x80,0x00,0x02,0xe6,0x91,0x08,0x10,0x04,0x02,0x09,0x02,0x00,
++ 0x02,0xe6,0x12,0x04,0x02,0x00,0x0c,0x00,0xd3,0x1f,0xd2,0x13,0x51,0x04,0x02,0x00,
++ 0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0x92,0xe0,0xbe,0xb7,0x00,0x51,0x04,0x02,
++ 0x00,0x10,0x04,0x04,0x00,0x02,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x02,
++ 0x00,0x02,0x00,0x91,0x0f,0x10,0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0x9c,0xe0,0xbe,
++ 0xb7,0x00,0x02,0x00,0xd4,0x3d,0xd3,0x26,0xd2,0x13,0x51,0x04,0x02,0x00,0x10,0x0b,
++ 0x02,0xff,0xe0,0xbe,0xa1,0xe0,0xbe,0xb7,0x00,0x02,0x00,0x51,0x04,0x02,0x00,0x10,
++ 0x04,0x02,0x00,0x02,0xff,0xe0,0xbe,0xa6,0xe0,0xbe,0xb7,0x00,0x52,0x04,0x02,0x00,
++ 0x91,0x0f,0x10,0x0b,0x02,0xff,0xe0,0xbe,0xab,0xe0,0xbe,0xb7,0x00,0x02,0x00,0x04,
++ 0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x02,0x00,0x02,0x00,0x02,
++ 0x00,0xd2,0x13,0x91,0x0f,0x10,0x04,0x04,0x00,0x02,0xff,0xe0,0xbe,0x90,0xe0,0xbe,
++ 0xb5,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0xcf,0x86,
++ 0x95,0x4c,0xd4,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,
++ 0x04,0xdc,0x04,0x00,0x52,0x04,0x04,0x00,0xd1,0x08,0x10,0x04,0x04,0x00,0x00,0x00,
++ 0x10,0x04,0x0a,0x00,0x04,0x00,0xd3,0x14,0xd2,0x08,0x11,0x04,0x08,0x00,0x0a,0x00,
++ 0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,
++ 0x0b,0x00,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,
++ 0xe5,0xf7,0x04,0xe4,0x79,0x03,0xe3,0x7b,0x01,0xe2,0x04,0x01,0xd1,0x7f,0xd0,0x65,
++ 0xcf,0x86,0x55,0x04,0x04,0x00,0xd4,0x33,0xd3,0x1f,0xd2,0x0c,0x51,0x04,0x04,0x00,
++ 0x10,0x04,0x0a,0x00,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x0b,0x04,0xff,0xe1,0x80,
++ 0xa5,0xe1,0x80,0xae,0x00,0x04,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x0a,0x00,0x04,
++ 0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x04,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x04,
++ 0x00,0x10,0x04,0x04,0x00,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x04,0x00,0x04,
++ 0x07,0x92,0x10,0xd1,0x08,0x10,0x04,0x04,0x00,0x04,0x09,0x10,0x04,0x0a,0x09,0x0a,
++ 0x00,0x0a,0x00,0xcf,0x86,0x95,0x14,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,
++ 0x08,0x11,0x04,0x04,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xd0,0x2e,0xcf,0x86,0x95,
++ 0x28,0xd4,0x14,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,
++ 0x00,0x0a,0xdc,0x0a,0x00,0x53,0x04,0x0a,0x00,0xd2,0x08,0x11,0x04,0x0a,0x00,0x0b,
++ 0x00,0x11,0x04,0x0b,0x00,0x0a,0x00,0x01,0x00,0xcf,0x86,0xd5,0x24,0x94,0x20,0xd3,
++ 0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x00,0x00,0x0d,0x00,0x52,
++ 0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0d,0x00,0x00,0x00,0x01,0x00,0x54,
++ 0x04,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x06,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x06,0x00,0x08,0x00,0x10,0x04,0x08,
++ 0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0d,0x00,0x0d,0x00,0xd1,0x3e,0xd0,
++ 0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x1d,0x54,0x04,0x01,0x00,0x53,0x04,0x01,
++ 0x00,0xd2,0x08,0x11,0x04,0x01,0x00,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,
++ 0x00,0x01,0xff,0x00,0x94,0x15,0x93,0x11,0x92,0x0d,0x91,0x09,0x10,0x05,0x01,0xff,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,0x55,
++ 0x04,0x01,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x0b,0x00,0x0b,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,
++ 0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x92,0x08,0x11,0x04,0x01,0x00,0x0b,0x00,0x0b,
++ 0x00,0xe2,0x21,0x01,0xd1,0x6c,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,
++ 0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,0x00,
++ 0x04,0x00,0x04,0x00,0xcf,0x86,0x95,0x48,0xd4,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,
++ 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,
++ 0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,
++ 0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,
++ 0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x04,0x00,
++ 0xd0,0x62,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x10,0x52,0x04,0x04,0x00,0x51,0x04,
++ 0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,
++ 0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0xd4,0x14,0x53,0x04,
++ 0x04,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,
++ 0xd3,0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,
++ 0x04,0x00,0x00,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,
++ 0x00,0x00,0xcf,0x86,0xd5,0x38,0xd4,0x24,0xd3,0x14,0xd2,0x0c,0x91,0x08,0x10,0x04,
++ 0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x52,0x04,0x04,0x00,
++ 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x93,0x10,0x52,0x04,0x04,0x00,
++ 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x94,0x14,0x53,0x04,
++ 0x04,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,
++ 0x04,0x00,0xd1,0x9c,0xd0,0x3e,0xcf,0x86,0x95,0x38,0xd4,0x14,0x53,0x04,0x04,0x00,
++ 0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0xd3,0x14,
++ 0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x04,0x00,0x11,0x04,0x04,0x00,
++ 0x00,0x00,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,
++ 0x04,0x00,0xcf,0x86,0xd5,0x34,0xd4,0x14,0x93,0x10,0x52,0x04,0x04,0x00,0x51,0x04,
++ 0x04,0x00,0x10,0x04,0x04,0x00,0x08,0x00,0x04,0x00,0x53,0x04,0x04,0x00,0xd2,0x0c,
++ 0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,
++ 0x0c,0xe6,0x10,0x04,0x0c,0xe6,0x08,0xe6,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x08,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x53,0x04,0x04,0x00,
++ 0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0xd0,0x1a,
++ 0xcf,0x86,0x95,0x14,0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,
++ 0x08,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,
++ 0x04,0x00,0xd3,0x10,0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x11,0x00,
++ 0x00,0x00,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x00,0x00,0xd3,0x30,0xd2,0x2a,
++ 0xd1,0x24,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0b,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0x04,0x00,
++ 0xcf,0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xd2,0x6c,0xd1,0x24,
++ 0xd0,0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,
++ 0x93,0x10,0x52,0x04,0x04,0x00,0x51,0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x0b,0x00,
++ 0x0b,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,
++ 0x52,0x04,0x04,0x00,0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x04,0x00,
++ 0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x04,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x80,0xd0,0x46,0xcf,0x86,0xd5,0x28,
++ 0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,0x00,
++ 0x00,0x00,0x06,0x00,0x93,0x10,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,0x06,0x09,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x54,0x04,0x06,0x00,0x93,0x14,0x52,0x04,0x06,0x00,
++ 0xd1,0x08,0x10,0x04,0x06,0x09,0x06,0x00,0x10,0x04,0x06,0x00,0x00,0x00,0x00,0x00,
++ 0xcf,0x86,0xd5,0x10,0x54,0x04,0x06,0x00,0x93,0x08,0x12,0x04,0x06,0x00,0x00,0x00,
++ 0x00,0x00,0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x08,0x10,0x04,
++ 0x06,0x00,0x00,0x00,0x06,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,
++ 0x00,0x00,0x06,0x00,0x00,0x00,0x00,0x00,0xd0,0x1b,0xcf,0x86,0x55,0x04,0x04,0x00,
++ 0x54,0x04,0x04,0x00,0x93,0x0d,0x52,0x04,0x04,0x00,0x11,0x05,0x04,0xff,0x00,0x04,
++ 0x00,0x04,0x00,0xcf,0x86,0xd5,0x24,0x54,0x04,0x04,0x00,0xd3,0x10,0x92,0x0c,0x51,
++ 0x04,0x04,0x00,0x10,0x04,0x04,0x09,0x04,0x00,0x04,0x00,0x52,0x04,0x04,0x00,0x91,
++ 0x08,0x10,0x04,0x04,0x00,0x07,0xe6,0x00,0x00,0xd4,0x10,0x53,0x04,0x04,0x00,0x92,
++ 0x08,0x11,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x92,0x08,0x11,
++ 0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xe4,0xb7,0x03,0xe3,0x58,0x01,0xd2,0x8f,0xd1,
++ 0x53,0xd0,0x35,0xcf,0x86,0x95,0x2f,0xd4,0x1f,0x53,0x04,0x04,0x00,0xd2,0x0d,0x51,
++ 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x04,0xff,0x00,0x51,0x05,0x04,0xff,0x00,0x10,
++ 0x05,0x04,0xff,0x00,0x00,0x00,0x53,0x04,0x04,0x00,0x92,0x08,0x11,0x04,0x04,0x00,
++ 0x00,0x00,0x00,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,0x00,0x54,0x04,0x04,0x00,
++ 0x53,0x04,0x04,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x04,0x00,0x94,0x18,0x53,0x04,0x04,0x00,
++ 0x92,0x10,0xd1,0x08,0x10,0x04,0x04,0x00,0x04,0xe4,0x10,0x04,0x0a,0x00,0x00,0x00,
++ 0x00,0x00,0x0b,0x00,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0x93,0x0c,
++ 0x52,0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd1,0x80,0xd0,0x42,
++ 0xcf,0x86,0xd5,0x1c,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,
++ 0xd1,0x08,0x10,0x04,0x07,0x00,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0xd4,0x0c,
++ 0x53,0x04,0x07,0x00,0x12,0x04,0x07,0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x92,0x10,
++ 0xd1,0x08,0x10,0x04,0x07,0x00,0x07,0xde,0x10,0x04,0x07,0xe6,0x07,0xdc,0x00,0x00,
++ 0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,
++ 0x00,0x00,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0xd4,0x10,0x53,0x04,0x07,0x00,
++ 0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x93,0x10,0x52,0x04,0x07,0x00,
++ 0x91,0x08,0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd0,0x1a,0xcf,0x86,
++ 0x55,0x04,0x08,0x00,0x94,0x10,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,0x08,0x00,
++ 0x0b,0x00,0x00,0x00,0x08,0x00,0xcf,0x86,0x95,0x28,0xd4,0x10,0x53,0x04,0x08,0x00,
++ 0x92,0x08,0x11,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x08,0x00,0xd2,0x0c,
++ 0x51,0x04,0x08,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x08,0x00,
++ 0x07,0x00,0xd2,0xe4,0xd1,0x80,0xd0,0x2e,0xcf,0x86,0x95,0x28,0x54,0x04,0x08,0x00,
++ 0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x08,0xe6,
++ 0xd2,0x0c,0x91,0x08,0x10,0x04,0x08,0xdc,0x08,0x00,0x08,0x00,0x11,0x04,0x00,0x00,
++ 0x08,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,
++ 0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xd4,0x14,
++ 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x09,0x0b,0x00,0x0b,0x00,0x0b,0x00,
++ 0x0b,0x00,0xd3,0x10,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xe6,
++ 0x0b,0xe6,0x52,0x04,0x0b,0xe6,0xd1,0x08,0x10,0x04,0x0b,0xe6,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x0b,0xdc,0xd0,0x5e,0xcf,0x86,0xd5,0x20,0xd4,0x10,0x53,0x04,0x0b,0x00,
++ 0x92,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x0b,0x00,0x92,0x08,
++ 0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd4,0x10,0x53,0x04,0x0b,0x00,0x52,0x04,
++ 0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x10,0xe6,0x91,0x08,
++ 0x10,0x04,0x10,0xe6,0x10,0xdc,0x10,0xdc,0xd2,0x0c,0x51,0x04,0x10,0xdc,0x10,0x04,
++ 0x10,0xdc,0x10,0xe6,0xd1,0x08,0x10,0x04,0x10,0xe6,0x10,0xdc,0x10,0x04,0x10,0x00,
++ 0x00,0x00,0xcf,0x06,0x00,0x00,0xe1,0x1e,0x01,0xd0,0xaa,0xcf,0x86,0xd5,0x6e,0xd4,
++ 0x53,0xd3,0x17,0x52,0x04,0x09,0x00,0x51,0x04,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,
++ 0xac,0x85,0xe1,0xac,0xb5,0x00,0x09,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x09,0xff,
++ 0xe1,0xac,0x87,0xe1,0xac,0xb5,0x00,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x89,
++ 0xe1,0xac,0xb5,0x00,0x09,0x00,0xd1,0x0f,0x10,0x0b,0x09,0xff,0xe1,0xac,0x8b,0xe1,
++ 0xac,0xb5,0x00,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,0x8d,0xe1,0xac,0xb5,0x00,
++ 0x09,0x00,0x93,0x17,0x92,0x13,0x51,0x04,0x09,0x00,0x10,0x0b,0x09,0xff,0xe1,0xac,
++ 0x91,0xe1,0xac,0xb5,0x00,0x09,0x00,0x09,0x00,0x09,0x00,0x54,0x04,0x09,0x00,0xd3,
++ 0x10,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x07,0x09,0x00,0x09,0x00,0xd2,
++ 0x13,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xac,0xba,0xe1,0xac,
++ 0xb5,0x00,0x91,0x0f,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xac,0xbc,0xe1,0xac,0xb5,
++ 0x00,0x09,0x00,0xcf,0x86,0xd5,0x3d,0x94,0x39,0xd3,0x31,0xd2,0x25,0xd1,0x16,0x10,
++ 0x0b,0x09,0xff,0xe1,0xac,0xbe,0xe1,0xac,0xb5,0x00,0x09,0xff,0xe1,0xac,0xbf,0xe1,
++ 0xac,0xb5,0x00,0x10,0x04,0x09,0x00,0x09,0xff,0xe1,0xad,0x82,0xe1,0xac,0xb5,0x00,
++ 0x91,0x08,0x10,0x04,0x09,0x09,0x09,0x00,0x09,0x00,0x12,0x04,0x09,0x00,0x00,0x00,
++ 0x09,0x00,0xd4,0x1c,0x53,0x04,0x09,0x00,0xd2,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,
++ 0x09,0x00,0x09,0xe6,0x91,0x08,0x10,0x04,0x09,0xdc,0x09,0xe6,0x09,0xe6,0xd3,0x08,
++ 0x12,0x04,0x09,0xe6,0x09,0x00,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,
++ 0x00,0x00,0x00,0x00,0xd0,0x2e,0xcf,0x86,0x55,0x04,0x0a,0x00,0xd4,0x18,0x53,0x04,
++ 0x0a,0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x09,0x0d,0x09,0x11,0x04,
++ 0x0d,0x00,0x0a,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,0x11,0x04,0x0a,0x00,0x0d,0x00,
++ 0x0d,0x00,0xcf,0x86,0x55,0x04,0x0c,0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x0c,0x00,
++ 0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x07,0x0c,0x00,0x0c,0x00,0xd3,0x0c,0x92,0x08,
++ 0x11,0x04,0x0c,0x00,0x0c,0x09,0x00,0x00,0x12,0x04,0x00,0x00,0x0c,0x00,0xe3,0xb2,
++ 0x01,0xe2,0x09,0x01,0xd1,0x4c,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x0a,0x00,0x54,0x04,
++ 0x0a,0x00,0xd3,0x10,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,0x0a,0x00,
++ 0x0a,0x07,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,0x00,0x0a,0x00,
++ 0xcf,0x86,0x95,0x1c,0x94,0x18,0x53,0x04,0x0a,0x00,0xd2,0x08,0x11,0x04,0x0a,0x00,
++ 0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,
++ 0xd0,0x3a,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x12,0x00,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x14,0x00,0x54,0x04,0x14,0x00,
++ 0x53,0x04,0x14,0x00,0xd2,0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,
++ 0x91,0x08,0x10,0x04,0x00,0x00,0x14,0x00,0x14,0x00,0xcf,0x86,0xd5,0x2c,0xd4,0x08,
++ 0x13,0x04,0x0d,0x00,0x00,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x0b,0xe6,0x10,0x04,
++ 0x0b,0xe6,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x01,0x0b,0xdc,0x0b,0xdc,0x92,0x08,
++ 0x11,0x04,0x0b,0xdc,0x0b,0xe6,0x0b,0xdc,0xd4,0x28,0xd3,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x01,0x0b,0x01,0xd2,0x0c,0x91,0x08,0x10,0x04,
++ 0x0b,0x01,0x0b,0x00,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0xdc,0x0b,0x00,
++ 0xd3,0x1c,0xd2,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0d,0x00,0xd1,0x08,
++ 0x10,0x04,0x0d,0xe6,0x0d,0x00,0x10,0x04,0x0d,0x00,0x13,0x00,0x92,0x0c,0x51,0x04,
++ 0x10,0xe6,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0xd1,0x1c,0xd0,0x06,0xcf,0x06,
++ 0x07,0x00,0xcf,0x86,0x55,0x04,0x07,0x00,0x94,0x0c,0x53,0x04,0x07,0x00,0x12,0x04,
++ 0x07,0x00,0x08,0x00,0x08,0x00,0xd0,0x06,0xcf,0x06,0x08,0x00,0xcf,0x86,0xd5,0x40,
++ 0xd4,0x2c,0xd3,0x10,0x92,0x0c,0x51,0x04,0x08,0xe6,0x10,0x04,0x08,0xdc,0x08,0xe6,
++ 0x09,0xe6,0xd2,0x0c,0x51,0x04,0x09,0xe6,0x10,0x04,0x09,0xdc,0x0a,0xe6,0xd1,0x08,
++ 0x10,0x04,0x0a,0xe6,0x0a,0xea,0x10,0x04,0x0a,0xd6,0x0a,0xdc,0x93,0x10,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x0a,0xca,0x0a,0xe6,0x0a,0xe6,0x0a,0xe6,0x0a,0xe6,0xd4,0x14,
++ 0x93,0x10,0x52,0x04,0x0a,0xe6,0x51,0x04,0x0a,0xe6,0x10,0x04,0x0a,0xe6,0x10,0xe6,
++ 0x10,0xe6,0xd3,0x10,0x52,0x04,0x10,0xe6,0x51,0x04,0x10,0xe6,0x10,0x04,0x13,0xe8,
++ 0x13,0xe4,0xd2,0x10,0xd1,0x08,0x10,0x04,0x13,0xe4,0x13,0xdc,0x10,0x04,0x00,0x00,
++ 0x12,0xe6,0xd1,0x08,0x10,0x04,0x0c,0xe9,0x0b,0xdc,0x10,0x04,0x09,0xe6,0x09,0xdc,
++ 0xe2,0x80,0x08,0xe1,0x48,0x04,0xe0,0x1c,0x02,0xcf,0x86,0xe5,0x11,0x01,0xd4,0x84,
++ 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa5,0x00,0x01,0xff,
++ 0x61,0xcc,0xa5,0x00,0x10,0x08,0x01,0xff,0x42,0xcc,0x87,0x00,0x01,0xff,0x62,0xcc,
++ 0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x42,0xcc,0xa3,0x00,0x01,0xff,0x62,0xcc,
++ 0xa3,0x00,0x10,0x08,0x01,0xff,0x42,0xcc,0xb1,0x00,0x01,0xff,0x62,0xcc,0xb1,0x00,
++ 0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x43,0xcc,0xa7,0xcc,0x81,0x00,0x01,0xff,
++ 0x63,0xcc,0xa7,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0x87,0x00,0x01,0xff,
++ 0x64,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x44,0xcc,0xa3,0x00,0x01,0xff,
++ 0x64,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0xb1,0x00,0x01,0xff,0x64,0xcc,
++ 0xb1,0x00,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x44,0xcc,0xa7,0x00,
++ 0x01,0xff,0x64,0xcc,0xa7,0x00,0x10,0x08,0x01,0xff,0x44,0xcc,0xad,0x00,0x01,0xff,
++ 0x64,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,0xcc,0x84,0xcc,0x80,0x00,
++ 0x01,0xff,0x65,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x84,0xcc,
++ 0x81,0x00,0x01,0xff,0x65,0xcc,0x84,0xcc,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x45,0xcc,0xad,0x00,0x01,0xff,0x65,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,
++ 0x45,0xcc,0xb0,0x00,0x01,0xff,0x65,0xcc,0xb0,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,
++ 0x45,0xcc,0xa7,0xcc,0x86,0x00,0x01,0xff,0x65,0xcc,0xa7,0xcc,0x86,0x00,0x10,0x08,
++ 0x01,0xff,0x46,0xcc,0x87,0x00,0x01,0xff,0x66,0xcc,0x87,0x00,0xd4,0x84,0xd3,0x40,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x47,0xcc,0x84,0x00,0x01,0xff,0x67,0xcc,
++ 0x84,0x00,0x10,0x08,0x01,0xff,0x48,0xcc,0x87,0x00,0x01,0xff,0x68,0xcc,0x87,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0xa3,0x00,0x01,0xff,0x68,0xcc,0xa3,0x00,
++ 0x10,0x08,0x01,0xff,0x48,0xcc,0x88,0x00,0x01,0xff,0x68,0xcc,0x88,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0x48,0xcc,0xa7,0x00,0x01,0xff,0x68,0xcc,0xa7,0x00,
++ 0x10,0x08,0x01,0xff,0x48,0xcc,0xae,0x00,0x01,0xff,0x68,0xcc,0xae,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0x49,0xcc,0xb0,0x00,0x01,0xff,0x69,0xcc,0xb0,0x00,0x10,0x0a,
++ 0x01,0xff,0x49,0xcc,0x88,0xcc,0x81,0x00,0x01,0xff,0x69,0xcc,0x88,0xcc,0x81,0x00,
++ 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0x81,0x00,0x01,0xff,
++ 0x6b,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x4b,0xcc,0xa3,0x00,0x01,0xff,0x6b,0xcc,
++ 0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4b,0xcc,0xb1,0x00,0x01,0xff,0x6b,0xcc,
++ 0xb1,0x00,0x10,0x08,0x01,0xff,0x4c,0xcc,0xa3,0x00,0x01,0xff,0x6c,0xcc,0xa3,0x00,
++ 0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4c,0xcc,0xa3,0xcc,0x84,0x00,0x01,0xff,
++ 0x6c,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x4c,0xcc,0xb1,0x00,0x01,0xff,
++ 0x6c,0xcc,0xb1,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4c,0xcc,0xad,0x00,0x01,0xff,
++ 0x6c,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x4d,0xcc,0x81,0x00,0x01,0xff,0x6d,0xcc,
++ 0x81,0x00,0xcf,0x86,0xe5,0x15,0x01,0xd4,0x88,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x4d,0xcc,0x87,0x00,0x01,0xff,0x6d,0xcc,0x87,0x00,0x10,0x08,0x01,
++ 0xff,0x4d,0xcc,0xa3,0x00,0x01,0xff,0x6d,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x4e,0xcc,0x87,0x00,0x01,0xff,0x6e,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x4e,
++ 0xcc,0xa3,0x00,0x01,0xff,0x6e,0xcc,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x4e,0xcc,0xb1,0x00,0x01,0xff,0x6e,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x4e,
++ 0xcc,0xad,0x00,0x01,0xff,0x6e,0xcc,0xad,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,
++ 0xcc,0x83,0xcc,0x81,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,
++ 0xff,0x4f,0xcc,0x83,0xcc,0x88,0x00,0x01,0xff,0x6f,0xcc,0x83,0xcc,0x88,0x00,0xd3,
++ 0x48,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x84,0xcc,0x80,0x00,0x01,
++ 0xff,0x6f,0xcc,0x84,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x84,0xcc,0x81,
++ 0x00,0x01,0xff,0x6f,0xcc,0x84,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x50,
++ 0xcc,0x81,0x00,0x01,0xff,0x70,0xcc,0x81,0x00,0x10,0x08,0x01,0xff,0x50,0xcc,0x87,
++ 0x00,0x01,0xff,0x70,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x52,
++ 0xcc,0x87,0x00,0x01,0xff,0x72,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x52,0xcc,0xa3,
++ 0x00,0x01,0xff,0x72,0xcc,0xa3,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x52,0xcc,0xa3,
++ 0xcc,0x84,0x00,0x01,0xff,0x72,0xcc,0xa3,0xcc,0x84,0x00,0x10,0x08,0x01,0xff,0x52,
++ 0xcc,0xb1,0x00,0x01,0xff,0x72,0xcc,0xb1,0x00,0xd4,0x8c,0xd3,0x48,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0x53,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x87,0x00,0x10,
++ 0x08,0x01,0xff,0x53,0xcc,0xa3,0x00,0x01,0xff,0x73,0xcc,0xa3,0x00,0xd1,0x14,0x10,
++ 0x0a,0x01,0xff,0x53,0xcc,0x81,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x81,0xcc,0x87,
++ 0x00,0x10,0x0a,0x01,0xff,0x53,0xcc,0x8c,0xcc,0x87,0x00,0x01,0xff,0x73,0xcc,0x8c,
++ 0xcc,0x87,0x00,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,0x53,0xcc,0xa3,0xcc,0x87,
++ 0x00,0x01,0xff,0x73,0xcc,0xa3,0xcc,0x87,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0x87,
++ 0x00,0x01,0xff,0x74,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x54,0xcc,0xa3,
++ 0x00,0x01,0xff,0x74,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x54,0xcc,0xb1,0x00,0x01,
++ 0xff,0x74,0xcc,0xb1,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x54,
++ 0xcc,0xad,0x00,0x01,0xff,0x74,0xcc,0xad,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xa4,
++ 0x00,0x01,0xff,0x75,0xcc,0xa4,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x55,0xcc,0xb0,
++ 0x00,0x01,0xff,0x75,0xcc,0xb0,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,0xad,0x00,0x01,
++ 0xff,0x75,0xcc,0xad,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,0xcc,0x83,
++ 0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x83,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x55,
++ 0xcc,0x84,0xcc,0x88,0x00,0x01,0xff,0x75,0xcc,0x84,0xcc,0x88,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0x56,0xcc,0x83,0x00,0x01,0xff,0x76,0xcc,0x83,0x00,0x10,0x08,0x01,
++ 0xff,0x56,0xcc,0xa3,0x00,0x01,0xff,0x76,0xcc,0xa3,0x00,0xe0,0x10,0x02,0xcf,0x86,
++ 0xd5,0xe1,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,
++ 0x80,0x00,0x01,0xff,0x77,0xcc,0x80,0x00,0x10,0x08,0x01,0xff,0x57,0xcc,0x81,0x00,
++ 0x01,0xff,0x77,0xcc,0x81,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0x88,0x00,
++ 0x01,0xff,0x77,0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x57,0xcc,0x87,0x00,0x01,0xff,
++ 0x77,0xcc,0x87,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x57,0xcc,0xa3,0x00,
++ 0x01,0xff,0x77,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x58,0xcc,0x87,0x00,0x01,0xff,
++ 0x78,0xcc,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x58,0xcc,0x88,0x00,0x01,0xff,
++ 0x78,0xcc,0x88,0x00,0x10,0x08,0x01,0xff,0x59,0xcc,0x87,0x00,0x01,0xff,0x79,0xcc,
++ 0x87,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x5a,0xcc,0x82,0x00,
++ 0x01,0xff,0x7a,0xcc,0x82,0x00,0x10,0x08,0x01,0xff,0x5a,0xcc,0xa3,0x00,0x01,0xff,
++ 0x7a,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x5a,0xcc,0xb1,0x00,0x01,0xff,
++ 0x7a,0xcc,0xb1,0x00,0x10,0x08,0x01,0xff,0x68,0xcc,0xb1,0x00,0x01,0xff,0x74,0xcc,
++ 0x88,0x00,0x92,0x1d,0xd1,0x10,0x10,0x08,0x01,0xff,0x77,0xcc,0x8a,0x00,0x01,0xff,
++ 0x79,0xcc,0x8a,0x00,0x10,0x04,0x01,0x00,0x02,0xff,0xc5,0xbf,0xcc,0x87,0x00,0x0a,
++ 0x00,0xd4,0x98,0xd3,0x48,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x41,0xcc,0xa3,
++ 0x00,0x01,0xff,0x61,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x41,0xcc,0x89,0x00,0x01,
++ 0xff,0x61,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x81,
++ 0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,
++ 0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x80,0x00,0xd2,0x28,0xd1,0x14,0x10,
++ 0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x82,0xcc,0x89,
++ 0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x61,0xcc,0x82,
++ 0xcc,0x83,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0xa3,0xcc,0x82,0x00,0x01,
++ 0xff,0x61,0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x81,
++ 0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x81,0x00,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,
++ 0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x80,0x00,0x01,0xff,0x61,0xcc,0x86,0xcc,0x80,
++ 0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x89,0x00,0x01,0xff,0x61,0xcc,0x86,
++ 0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x41,0xcc,0x86,0xcc,0x83,0x00,0x01,
++ 0xff,0x61,0xcc,0x86,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x41,0xcc,0xa3,0xcc,0x86,
++ 0x00,0x01,0xff,0x61,0xcc,0xa3,0xcc,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0x45,0xcc,0xa3,0x00,0x01,0xff,0x65,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x45,
++ 0xcc,0x89,0x00,0x01,0xff,0x65,0xcc,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x45,
++ 0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,
++ 0xcc,0x81,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x81,0x00,0xcf,0x86,0xe5,0x31,0x01,
++ 0xd4,0x90,0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x45,0xcc,0x82,0xcc,
++ 0x80,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0x45,0xcc,
++ 0x82,0xcc,0x89,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x89,0x00,0xd1,0x14,0x10,0x0a,
++ 0x01,0xff,0x45,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x65,0xcc,0x82,0xcc,0x83,0x00,
++ 0x10,0x0a,0x01,0xff,0x45,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,0x65,0xcc,0xa3,0xcc,
++ 0x82,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0x49,0xcc,0x89,0x00,0x01,0xff,
++ 0x69,0xcc,0x89,0x00,0x10,0x08,0x01,0xff,0x49,0xcc,0xa3,0x00,0x01,0xff,0x69,0xcc,
++ 0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0x4f,0xcc,0xa3,0x00,0x01,0xff,0x6f,0xcc,
++ 0xa3,0x00,0x10,0x08,0x01,0xff,0x4f,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x89,0x00,
++ 0xd3,0x50,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x82,0xcc,0x81,0x00,
++ 0x01,0xff,0x6f,0xcc,0x82,0xcc,0x81,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x82,0xcc,
++ 0x80,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x80,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,
++ 0x4f,0xcc,0x82,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x89,0x00,0x10,0x0a,
++ 0x01,0xff,0x4f,0xcc,0x82,0xcc,0x83,0x00,0x01,0xff,0x6f,0xcc,0x82,0xcc,0x83,0x00,
++ 0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0xa3,0xcc,0x82,0x00,0x01,0xff,
++ 0x6f,0xcc,0xa3,0xcc,0x82,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0x81,0x00,
++ 0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x81,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,
++ 0x9b,0xcc,0x80,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,
++ 0x4f,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x89,0x00,0xd4,0x98,
++ 0xd3,0x48,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,0x83,0x00,
++ 0x01,0xff,0x6f,0xcc,0x9b,0xcc,0x83,0x00,0x10,0x0a,0x01,0xff,0x4f,0xcc,0x9b,0xcc,
++ 0xa3,0x00,0x01,0xff,0x6f,0xcc,0x9b,0xcc,0xa3,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0x55,0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,0x55,0xcc,
++ 0x89,0x00,0x01,0xff,0x75,0xcc,0x89,0x00,0xd2,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,
++ 0x55,0xcc,0x9b,0xcc,0x81,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x81,0x00,0x10,0x0a,
++ 0x01,0xff,0x55,0xcc,0x9b,0xcc,0x80,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0x80,0x00,
++ 0xd1,0x14,0x10,0x0a,0x01,0xff,0x55,0xcc,0x9b,0xcc,0x89,0x00,0x01,0xff,0x75,0xcc,
++ 0x9b,0xcc,0x89,0x00,0x10,0x0a,0x01,0xff,0x55,0xcc,0x9b,0xcc,0x83,0x00,0x01,0xff,
++ 0x75,0xcc,0x9b,0xcc,0x83,0x00,0xd3,0x44,0xd2,0x24,0xd1,0x14,0x10,0x0a,0x01,0xff,
++ 0x55,0xcc,0x9b,0xcc,0xa3,0x00,0x01,0xff,0x75,0xcc,0x9b,0xcc,0xa3,0x00,0x10,0x08,
++ 0x01,0xff,0x59,0xcc,0x80,0x00,0x01,0xff,0x79,0xcc,0x80,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0x59,0xcc,0xa3,0x00,0x01,0xff,0x79,0xcc,0xa3,0x00,0x10,0x08,0x01,0xff,
++ 0x59,0xcc,0x89,0x00,0x01,0xff,0x79,0xcc,0x89,0x00,0x92,0x14,0x91,0x10,0x10,0x08,
++ 0x01,0xff,0x59,0xcc,0x83,0x00,0x01,0xff,0x79,0xcc,0x83,0x00,0x0a,0x00,0x0a,0x00,
++ 0xe1,0xc0,0x04,0xe0,0x80,0x02,0xcf,0x86,0xe5,0x2d,0x01,0xd4,0xa8,0xd3,0x54,0xd2,
++ 0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x93,0x00,0x01,0xff,0xce,0xb1,
++ 0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,
++ 0xce,0xb1,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,
++ 0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,
++ 0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,
++ 0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x91,0xcc,0x93,0x00,0x01,0xff,
++ 0xce,0x91,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x80,0x00,
++ 0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,
++ 0x91,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x81,0x00,0x10,
++ 0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,
++ 0xcd,0x82,0x00,0xd3,0x42,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,
++ 0x93,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb5,0xcc,
++ 0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,
++ 0x0b,0x01,0xff,0xce,0xb5,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb5,0xcc,0x94,
++ 0xcc,0x81,0x00,0x00,0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x95,0xcc,
++ 0x93,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x95,0xcc,
++ 0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,0xcc,0x80,0x00,0x91,0x16,0x10,
++ 0x0b,0x01,0xff,0xce,0x95,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x95,0xcc,0x94,
++ 0xcc,0x81,0x00,0x00,0x00,0xd4,0xa8,0xd3,0x54,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,
++ 0xff,0xce,0xb7,0xcc,0x93,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0x00,0x10,0x0b,0x01,
++ 0xff,0xce,0xb7,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,
++ 0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,
++ 0xce,0xb7,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,
++ 0x82,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x82,0x00,0xd2,0x28,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xce,0x97,0xcc,0x93,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0x00,0x10,
++ 0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,
++ 0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x81,0x00,
++ 0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,
++ 0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcd,0x82,0x00,0xd3,0x54,0xd2,
++ 0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x93,0x00,0x01,0xff,0xce,0xb9,
++ 0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,
++ 0xce,0xb9,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb9,0xcc,
++ 0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,
++ 0xff,0xce,0xb9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,0xcc,0x94,0xcd,0x82,
++ 0x00,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,0x93,0x00,0x01,0xff,
++ 0xce,0x99,0xcc,0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0x99,0xcc,0x93,0xcc,0x80,0x00,
++ 0x01,0xff,0xce,0x99,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,
++ 0x99,0xcc,0x93,0xcc,0x81,0x00,0x01,0xff,0xce,0x99,0xcc,0x94,0xcc,0x81,0x00,0x10,
++ 0x0b,0x01,0xff,0xce,0x99,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0x99,0xcc,0x94,
++ 0xcd,0x82,0x00,0xcf,0x86,0xe5,0x13,0x01,0xd4,0x84,0xd3,0x42,0xd2,0x28,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xce,0xbf,0xcc,0x93,0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0x00,
++ 0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,
++ 0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0xbf,0xcc,0x93,0xcc,0x81,
++ 0x00,0x01,0xff,0xce,0xbf,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd2,0x28,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xce,0x9f,0xcc,0x93,0x00,0x01,0xff,0xce,0x9f,0xcc,0x94,0x00,
++ 0x10,0x0b,0x01,0xff,0xce,0x9f,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,0x9f,0xcc,
++ 0x94,0xcc,0x80,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xce,0x9f,0xcc,0x93,0xcc,0x81,
++ 0x00,0x01,0xff,0xce,0x9f,0xcc,0x94,0xcc,0x81,0x00,0x00,0x00,0xd3,0x54,0xd2,0x28,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x93,0x00,0x01,0xff,0xcf,0x85,0xcc,
++ 0x94,0x00,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,
++ 0x85,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x93,
++ 0xcc,0x81,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,
++ 0xcf,0x85,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xcf,0x85,0xcc,0x94,0xcd,0x82,0x00,
++ 0xd2,0x1c,0xd1,0x0d,0x10,0x04,0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0x00,0x10,
++ 0x04,0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x0f,0x10,0x04,
++ 0x00,0x00,0x01,0xff,0xce,0xa5,0xcc,0x94,0xcc,0x81,0x00,0x10,0x04,0x00,0x00,0x01,
++ 0xff,0xce,0xa5,0xcc,0x94,0xcd,0x82,0x00,0xd4,0xa8,0xd3,0x54,0xd2,0x28,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xcf,0x89,0xcc,0x93,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0x00,
++ 0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,
++ 0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x81,
++ 0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,
++ 0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0x00,0xd2,0x28,
++ 0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xa9,0xcc,0x93,0x00,0x01,0xff,0xce,0xa9,0xcc,
++ 0x94,0x00,0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x80,0x00,0x01,0xff,0xce,
++ 0xa9,0xcc,0x94,0xcc,0x80,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,
++ 0xcc,0x81,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcc,0x81,0x00,0x10,0x0b,0x01,0xff,
++ 0xce,0xa9,0xcc,0x93,0xcd,0x82,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcd,0x82,0x00,
++ 0xd3,0x48,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x80,0x00,0x01,
++ 0xff,0xce,0xb1,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb5,0xcc,0x80,0x00,0x01,
++ 0xff,0xce,0xb5,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb7,0xcc,0x80,
++ 0x00,0x01,0xff,0xce,0xb7,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcc,0x80,
++ 0x00,0x01,0xff,0xce,0xb9,0xcc,0x81,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,
++ 0xce,0xbf,0xcc,0x80,0x00,0x01,0xff,0xce,0xbf,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,
++ 0xcf,0x85,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,0xcc,0x81,0x00,0x91,0x12,0x10,0x09,
++ 0x01,0xff,0xcf,0x89,0xcc,0x80,0x00,0x01,0xff,0xcf,0x89,0xcc,0x81,0x00,0x00,0x00,
++ 0xe0,0xe1,0x02,0xcf,0x86,0xe5,0x91,0x01,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,
++ 0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,
++ 0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x80,0xcd,0x85,
++ 0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,
++ 0x01,0xff,0xce,0xb1,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,
++ 0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb1,0xcc,0x93,0xcd,0x82,
++ 0xcd,0x85,0x00,0x01,0xff,0xce,0xb1,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x30,
++ 0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x91,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,
++ 0x91,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x80,
++ 0xcd,0x85,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,
++ 0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,
++ 0x91,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0x91,0xcc,0x93,
++ 0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0x91,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,
++ 0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcd,0x85,
++ 0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xb7,
++ 0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x80,0xcd,
++ 0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0xb7,0xcc,0x93,0xcc,0x81,0xcd,0x85,
++ 0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,
++ 0xce,0xb7,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcc,0x94,0xcd,
++ 0x82,0xcd,0x85,0x00,0xd2,0x30,0xd1,0x16,0x10,0x0b,0x01,0xff,0xce,0x97,0xcc,0x93,
++ 0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,
++ 0xce,0x97,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,
++ 0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,0xce,0x97,0xcc,0x93,0xcc,0x81,
++ 0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,
++ 0x01,0xff,0xce,0x97,0xcc,0x93,0xcd,0x82,0xcd,0x85,0x00,0x01,0xff,0xce,0x97,0xcc,
++ 0x94,0xcd,0x82,0xcd,0x85,0x00,0xd4,0xc8,0xd3,0x64,0xd2,0x30,0xd1,0x16,0x10,0x0b,
++ 0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,
++ 0x85,0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcc,0x80,0xcd,0x85,0x00,0x01,
++ 0xff,0xcf,0x89,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,0x01,0xff,
++ 0xcf,0x89,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcc,
++ 0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xcf,0x89,0xcc,0x93,0xcd,0x82,0xcd,0x85,
++ 0x00,0x01,0xff,0xcf,0x89,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x30,0xd1,0x16,
++ 0x10,0x0b,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,
++ 0x94,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x80,0xcd,0x85,
++ 0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcc,0x80,0xcd,0x85,0x00,0xd1,0x1a,0x10,0x0d,
++ 0x01,0xff,0xce,0xa9,0xcc,0x93,0xcc,0x81,0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,
++ 0x94,0xcc,0x81,0xcd,0x85,0x00,0x10,0x0d,0x01,0xff,0xce,0xa9,0xcc,0x93,0xcd,0x82,
++ 0xcd,0x85,0x00,0x01,0xff,0xce,0xa9,0xcc,0x94,0xcd,0x82,0xcd,0x85,0x00,0xd3,0x49,
++ 0xd2,0x26,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0xb1,0xcc,0x86,0x00,0x01,0xff,0xce,
++ 0xb1,0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x80,0xcd,0x85,0x00,0x01,
++ 0xff,0xce,0xb1,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xce,0xb1,0xcc,0x81,
++ 0xcd,0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb1,0xcd,0x82,0x00,0x01,0xff,
++ 0xce,0xb1,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
++ 0x91,0xcc,0x86,0x00,0x01,0xff,0xce,0x91,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,
++ 0x91,0xcc,0x80,0x00,0x01,0xff,0xce,0x91,0xcc,0x81,0x00,0xd1,0x0d,0x10,0x09,0x01,
++ 0xff,0xce,0x91,0xcd,0x85,0x00,0x01,0x00,0x10,0x07,0x01,0xff,0xce,0xb9,0x00,0x01,
++ 0x00,0xcf,0x86,0xe5,0x16,0x01,0xd4,0x8f,0xd3,0x44,0xd2,0x21,0xd1,0x0d,0x10,0x04,
++ 0x01,0x00,0x01,0xff,0xc2,0xa8,0xcd,0x82,0x00,0x10,0x0b,0x01,0xff,0xce,0xb7,0xcc,
++ 0x80,0xcd,0x85,0x00,0x01,0xff,0xce,0xb7,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,
++ 0xff,0xce,0xb7,0xcc,0x81,0xcd,0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb7,
++ 0xcd,0x82,0x00,0x01,0xff,0xce,0xb7,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,
++ 0x10,0x09,0x01,0xff,0xce,0x95,0xcc,0x80,0x00,0x01,0xff,0xce,0x95,0xcc,0x81,0x00,
++ 0x10,0x09,0x01,0xff,0xce,0x97,0xcc,0x80,0x00,0x01,0xff,0xce,0x97,0xcc,0x81,0x00,
++ 0xd1,0x13,0x10,0x09,0x01,0xff,0xce,0x97,0xcd,0x85,0x00,0x01,0xff,0xe1,0xbe,0xbf,
++ 0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0xe1,0xbe,0xbf,0xcc,0x81,0x00,0x01,0xff,0xe1,
++ 0xbe,0xbf,0xcd,0x82,0x00,0xd3,0x40,0xd2,0x28,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
++ 0xb9,0xcc,0x86,0x00,0x01,0xff,0xce,0xb9,0xcc,0x84,0x00,0x10,0x0b,0x01,0xff,0xce,
++ 0xb9,0xcc,0x88,0xcc,0x80,0x00,0x01,0xff,0xce,0xb9,0xcc,0x88,0xcc,0x81,0x00,0x51,
++ 0x04,0x00,0x00,0x10,0x09,0x01,0xff,0xce,0xb9,0xcd,0x82,0x00,0x01,0xff,0xce,0xb9,
++ 0xcc,0x88,0xcd,0x82,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,
++ 0x86,0x00,0x01,0xff,0xce,0x99,0xcc,0x84,0x00,0x10,0x09,0x01,0xff,0xce,0x99,0xcc,
++ 0x80,0x00,0x01,0xff,0xce,0x99,0xcc,0x81,0x00,0xd1,0x0e,0x10,0x04,0x00,0x00,0x01,
++ 0xff,0xe1,0xbf,0xbe,0xcc,0x80,0x00,0x10,0x0a,0x01,0xff,0xe1,0xbf,0xbe,0xcc,0x81,
++ 0x00,0x01,0xff,0xe1,0xbf,0xbe,0xcd,0x82,0x00,0xd4,0x93,0xd3,0x4e,0xd2,0x28,0xd1,
++ 0x12,0x10,0x09,0x01,0xff,0xcf,0x85,0xcc,0x86,0x00,0x01,0xff,0xcf,0x85,0xcc,0x84,
++ 0x00,0x10,0x0b,0x01,0xff,0xcf,0x85,0xcc,0x88,0xcc,0x80,0x00,0x01,0xff,0xcf,0x85,
++ 0xcc,0x88,0xcc,0x81,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xcf,0x81,0xcc,0x93,0x00,
++ 0x01,0xff,0xcf,0x81,0xcc,0x94,0x00,0x10,0x09,0x01,0xff,0xcf,0x85,0xcd,0x82,0x00,
++ 0x01,0xff,0xcf,0x85,0xcc,0x88,0xcd,0x82,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,
++ 0xff,0xce,0xa5,0xcc,0x86,0x00,0x01,0xff,0xce,0xa5,0xcc,0x84,0x00,0x10,0x09,0x01,
++ 0xff,0xce,0xa5,0xcc,0x80,0x00,0x01,0xff,0xce,0xa5,0xcc,0x81,0x00,0xd1,0x12,0x10,
++ 0x09,0x01,0xff,0xce,0xa1,0xcc,0x94,0x00,0x01,0xff,0xc2,0xa8,0xcc,0x80,0x00,0x10,
++ 0x09,0x01,0xff,0xc2,0xa8,0xcc,0x81,0x00,0x01,0xff,0x60,0x00,0xd3,0x3b,0xd2,0x18,
++ 0x51,0x04,0x00,0x00,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x80,0xcd,0x85,0x00,0x01,
++ 0xff,0xcf,0x89,0xcd,0x85,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xcf,0x89,0xcc,0x81,
++ 0xcd,0x85,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xcf,0x89,0xcd,0x82,0x00,0x01,0xff,
++ 0xcf,0x89,0xcd,0x82,0xcd,0x85,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xce,
++ 0x9f,0xcc,0x80,0x00,0x01,0xff,0xce,0x9f,0xcc,0x81,0x00,0x10,0x09,0x01,0xff,0xce,
++ 0xa9,0xcc,0x80,0x00,0x01,0xff,0xce,0xa9,0xcc,0x81,0x00,0xd1,0x10,0x10,0x09,0x01,
++ 0xff,0xce,0xa9,0xcd,0x85,0x00,0x01,0xff,0xc2,0xb4,0x00,0x10,0x04,0x01,0x00,0x00,
++ 0x00,0xe0,0x7e,0x0c,0xcf,0x86,0xe5,0xbb,0x08,0xe4,0x14,0x06,0xe3,0xf7,0x02,0xe2,
++ 0xbd,0x01,0xd1,0xd0,0xd0,0x4f,0xcf,0x86,0xd5,0x2e,0x94,0x2a,0xd3,0x18,0x92,0x14,
++ 0x91,0x10,0x10,0x08,0x01,0xff,0xe2,0x80,0x82,0x00,0x01,0xff,0xe2,0x80,0x83,0x00,
++ 0x01,0x00,0x01,0x00,0x92,0x0d,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,
++ 0x00,0x01,0xff,0x00,0x01,0x00,0x94,0x1b,0x53,0x04,0x01,0x00,0xd2,0x09,0x11,0x04,
++ 0x01,0x00,0x01,0xff,0x00,0x51,0x05,0x01,0xff,0x00,0x10,0x05,0x01,0xff,0x00,0x04,
++ 0x00,0x01,0x00,0xcf,0x86,0xd5,0x48,0xd4,0x1c,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,
++ 0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0x52,0x04,0x04,0x00,0x11,0x04,0x04,
++ 0x00,0x06,0x00,0xd3,0x1c,0xd2,0x0c,0x51,0x04,0x06,0x00,0x10,0x04,0x06,0x00,0x07,
++ 0x00,0xd1,0x08,0x10,0x04,0x07,0x00,0x08,0x00,0x10,0x04,0x08,0x00,0x06,0x00,0x52,
++ 0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x06,0x00,0xd4,0x23,0xd3,
++ 0x14,0x52,0x05,0x06,0xff,0x00,0x91,0x0a,0x10,0x05,0x0a,0xff,0x00,0x00,0xff,0x00,
++ 0x0f,0xff,0x00,0x92,0x0a,0x11,0x05,0x0f,0xff,0x00,0x01,0xff,0x00,0x01,0xff,0x00,
++ 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0xd0,0x7e,0xcf,0x86,0xd5,0x34,0xd4,0x14,0x53,0x04,0x01,0x00,0x52,0x04,
++ 0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xd3,0x10,0x52,0x04,
++ 0x08,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0c,0x00,0x0c,0x00,0x52,0x04,0x0c,0x00,
++ 0x91,0x08,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0xd4,0x1c,0x53,0x04,0x01,0x00,
++ 0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x02,0x00,0x91,0x08,0x10,0x04,
++ 0x03,0x00,0x04,0x00,0x04,0x00,0xd3,0x10,0xd2,0x08,0x11,0x04,0x06,0x00,0x08,0x00,
++ 0x11,0x04,0x08,0x00,0x0b,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0c,0x00,
++ 0x10,0x04,0x0e,0x00,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x11,0x00,0x13,0x00,
++ 0xcf,0x86,0xd5,0x28,0x54,0x04,0x00,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x01,0xe6,
++ 0x01,0x01,0x01,0xe6,0xd2,0x0c,0x51,0x04,0x01,0x01,0x10,0x04,0x01,0x01,0x01,0xe6,
++ 0x91,0x08,0x10,0x04,0x01,0xe6,0x01,0x00,0x01,0x00,0xd4,0x30,0xd3,0x1c,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x01,0x00,0x01,0xe6,0x04,0x00,0xd1,0x08,0x10,0x04,0x06,0x00,
++ 0x06,0x01,0x10,0x04,0x06,0x01,0x06,0xe6,0x92,0x10,0xd1,0x08,0x10,0x04,0x06,0xdc,
++ 0x06,0xe6,0x10,0x04,0x06,0x01,0x08,0x01,0x09,0xdc,0x93,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0a,0xe6,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x81,0xd0,0x4f,
++ 0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x29,0xd3,0x13,0x52,0x04,0x01,0x00,0x51,0x04,
++ 0x01,0x00,0x10,0x07,0x01,0xff,0xce,0xa9,0x00,0x01,0x00,0x92,0x12,0x51,0x04,0x01,
++ 0x00,0x10,0x06,0x01,0xff,0x4b,0x00,0x01,0xff,0x41,0xcc,0x8a,0x00,0x01,0x00,0x53,
++ 0x04,0x01,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,0x04,0x04,
++ 0x00,0x07,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x06,0x00,0x06,0x00,0xcf,0x86,0x95,
++ 0x2c,0xd4,0x18,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0xd1,0x08,0x10,0x04,0x08,
++ 0x00,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,
++ 0x00,0x10,0x04,0x0b,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x68,0xcf,
++ 0x86,0xd5,0x48,0xd4,0x28,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x11,0x00,0x00,0x00,0x53,0x04,0x01,0x00,0x92,
++ 0x18,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x86,0x90,0xcc,0xb8,0x00,0x01,
++ 0xff,0xe2,0x86,0x92,0xcc,0xb8,0x00,0x01,0x00,0x94,0x1a,0x53,0x04,0x01,0x00,0x52,
++ 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x86,0x94,0xcc,0xb8,
++ 0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x2e,0x94,0x2a,0x53,0x04,0x01,0x00,0x52,
++ 0x04,0x01,0x00,0xd1,0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x87,0x90,0xcc,0xb8,
++ 0x00,0x10,0x0a,0x01,0xff,0xe2,0x87,0x94,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x87,0x92,
++ 0xcc,0xb8,0x00,0x01,0x00,0xd4,0x14,0x53,0x04,0x01,0x00,0x92,0x0c,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x01,0x00,0x04,0x00,0x04,0x00,0x93,0x08,0x12,0x04,0x04,0x00,0x06,
++ 0x00,0x06,0x00,0xe2,0x38,0x02,0xe1,0x3f,0x01,0xd0,0x68,0xcf,0x86,0xd5,0x3e,0x94,
++ 0x3a,0xd3,0x16,0x52,0x04,0x01,0x00,0x91,0x0e,0x10,0x0a,0x01,0xff,0xe2,0x88,0x83,
++ 0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0xd2,0x12,0x91,0x0e,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0xe2,0x88,0x88,0xcc,0xb8,0x00,0x01,0x00,0x91,0x0e,0x10,0x0a,0x01,0xff,0xe2,
++ 0x88,0x8b,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x24,0x93,0x20,0x52,
++ 0x04,0x01,0x00,0xd1,0x0e,0x10,0x0a,0x01,0xff,0xe2,0x88,0xa3,0xcc,0xb8,0x00,0x01,
++ 0x00,0x10,0x0a,0x01,0xff,0xe2,0x88,0xa5,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0xcf,0x86,0xd5,0x48,0x94,0x44,0xd3,0x2e,0xd2,0x12,0x91,0x0e,0x10,0x04,0x01,
++ 0x00,0x01,0xff,0xe2,0x88,0xbc,0xcc,0xb8,0x00,0x01,0x00,0xd1,0x0e,0x10,0x0a,0x01,
++ 0xff,0xe2,0x89,0x83,0xcc,0xb8,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,
++ 0x89,0x85,0xcc,0xb8,0x00,0x92,0x12,0x91,0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,
++ 0x89,0x88,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x40,0xd3,0x1e,0x92,
++ 0x1a,0xd1,0x0c,0x10,0x08,0x01,0xff,0x3d,0xcc,0xb8,0x00,0x01,0x00,0x10,0x0a,0x01,
++ 0xff,0xe2,0x89,0xa1,0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,
++ 0x0e,0x10,0x04,0x01,0x00,0x01,0xff,0xe2,0x89,0x8d,0xcc,0xb8,0x00,0x10,0x08,0x01,
++ 0xff,0x3c,0xcc,0xb8,0x00,0x01,0xff,0x3e,0xcc,0xb8,0x00,0xd3,0x30,0xd2,0x18,0x91,
++ 0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xa4,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xa5,
++ 0xcc,0xb8,0x00,0x01,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xb2,0xcc,0xb8,
++ 0x00,0x01,0xff,0xe2,0x89,0xb3,0xcc,0xb8,0x00,0x01,0x00,0x92,0x18,0x91,0x14,0x10,
++ 0x0a,0x01,0xff,0xe2,0x89,0xb6,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xb7,0xcc,0xb8,
++ 0x00,0x01,0x00,0x01,0x00,0xd0,0x86,0xcf,0x86,0xd5,0x50,0x94,0x4c,0xd3,0x30,0xd2,
++ 0x18,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xba,0xcc,0xb8,0x00,0x01,0xff,0xe2,
++ 0x89,0xbb,0xcc,0xb8,0x00,0x01,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0x82,
++ 0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x83,0xcc,0xb8,0x00,0x01,0x00,0x92,0x18,0x91,
++ 0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0x86,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x87,
++ 0xcc,0xb8,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x30,0x53,0x04,0x01,0x00,0x52,
++ 0x04,0x01,0x00,0xd1,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xa2,0xcc,0xb8,0x00,0x01,
++ 0xff,0xe2,0x8a,0xa8,0xcc,0xb8,0x00,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xa9,0xcc,0xb8,
++ 0x00,0x01,0xff,0xe2,0x8a,0xab,0xcc,0xb8,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,
++ 0x00,0xd4,0x5c,0xd3,0x2c,0x92,0x28,0xd1,0x14,0x10,0x0a,0x01,0xff,0xe2,0x89,0xbc,
++ 0xcc,0xb8,0x00,0x01,0xff,0xe2,0x89,0xbd,0xcc,0xb8,0x00,0x10,0x0a,0x01,0xff,0xe2,
++ 0x8a,0x91,0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0x92,0xcc,0xb8,0x00,0x01,0x00,0xd2,
++ 0x18,0x51,0x04,0x01,0x00,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xb2,0xcc,0xb8,0x00,0x01,
++ 0xff,0xe2,0x8a,0xb3,0xcc,0xb8,0x00,0x91,0x14,0x10,0x0a,0x01,0xff,0xe2,0x8a,0xb4,
++ 0xcc,0xb8,0x00,0x01,0xff,0xe2,0x8a,0xb5,0xcc,0xb8,0x00,0x01,0x00,0x93,0x0c,0x92,
++ 0x08,0x11,0x04,0x01,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0xd1,0x64,0xd0,0x3e,0xcf,
++ 0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,0x00,0x04,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x94,0x20,0x53,0x04,0x01,0x00,0x92,
++ 0x18,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x80,0x88,0x00,0x10,0x08,0x01,
++ 0xff,0xe3,0x80,0x89,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,
++ 0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x04,0x00,0x04,0x00,0xd0,
++ 0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,0x0c,0x51,
++ 0x04,0x04,0x00,0x10,0x04,0x04,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0xcf,0x86,0xd5,
++ 0x2c,0xd4,0x14,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x51,0x04,0x06,0x00,0x10,
++ 0x04,0x06,0x00,0x07,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x08,
++ 0x00,0x08,0x00,0x08,0x00,0x12,0x04,0x08,0x00,0x09,0x00,0xd4,0x14,0x53,0x04,0x09,
++ 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0c,0x00,0x0c,0x00,0x0c,0x00,0xd3,
++ 0x08,0x12,0x04,0x0c,0x00,0x10,0x00,0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,
++ 0x00,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x13,0x00,0xd3,0xa6,0xd2,
++ 0x74,0xd1,0x40,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x18,0x93,0x14,0x52,
++ 0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x04,0x00,0x10,0x04,0x04,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,0x01,0x00,0x92,
++ 0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x01,
++ 0x00,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x14,0x53,
++ 0x04,0x01,0x00,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0x06,
++ 0x00,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x51,0x04,0x06,0x00,0x10,0x04,0x06,
++ 0x00,0x07,0x00,0xd1,0x06,0xcf,0x06,0x01,0x00,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x54,
++ 0x04,0x01,0x00,0x93,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x06,0x00,0x06,
++ 0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x13,0x04,0x04,
++ 0x00,0x06,0x00,0xd2,0xdc,0xd1,0x48,0xd0,0x26,0xcf,0x86,0x95,0x20,0x54,0x04,0x01,
++ 0x00,0xd3,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x07,0x00,0x06,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x08,0x00,0x04,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,
++ 0x04,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x04,0x00,0x06,
++ 0x00,0x06,0x00,0x52,0x04,0x06,0x00,0x11,0x04,0x06,0x00,0x08,0x00,0xd0,0x5e,0xcf,
++ 0x86,0xd5,0x2c,0xd4,0x10,0x53,0x04,0x06,0x00,0x92,0x08,0x11,0x04,0x06,0x00,0x07,
++ 0x00,0x07,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x07,0x00,0x08,0x00,0x08,0x00,0x52,
++ 0x04,0x08,0x00,0x91,0x08,0x10,0x04,0x08,0x00,0x0a,0x00,0x0b,0x00,0xd4,0x10,0x93,
++ 0x0c,0x92,0x08,0x11,0x04,0x07,0x00,0x08,0x00,0x08,0x00,0x08,0x00,0xd3,0x10,0x92,
++ 0x0c,0x51,0x04,0x08,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x52,0x04,0x0a,
++ 0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x1c,0x94,
++ 0x18,0xd3,0x08,0x12,0x04,0x0a,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,
++ 0x00,0x10,0x04,0x0c,0x00,0x0b,0x00,0x0b,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,
++ 0x04,0x0b,0x00,0x10,0x04,0x0c,0x00,0x0b,0x00,0x0c,0x00,0x0b,0x00,0x0b,0x00,0xd1,
++ 0xa8,0xd0,0x42,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,
++ 0x04,0x10,0x00,0x01,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x0c,0x00,0x01,
++ 0x00,0x92,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x01,0x00,0x01,0x00,0x94,0x14,0x53,
++ 0x04,0x01,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0x01,0x00,0xcf,0x86,0xd5,0x40,0xd4,0x18,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
++ 0x00,0xd1,0x08,0x10,0x04,0x0c,0x00,0x01,0x00,0x10,0x04,0x0c,0x00,0x01,0x00,0xd3,
++ 0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x0c,0x00,0x51,0x04,0x0c,
++ 0x00,0x10,0x04,0x01,0x00,0x0b,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x0c,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x06,0x00,0x93,0x0c,0x52,0x04,0x06,0x00,0x11,
++ 0x04,0x06,0x00,0x01,0x00,0x01,0x00,0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x54,0x04,0x01,
++ 0x00,0x93,0x10,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x0c,0x00,0x0c,
++ 0x00,0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,
++ 0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,
++ 0x04,0x01,0x00,0x0c,0x00,0xcf,0x86,0xd5,0x2c,0x94,0x28,0xd3,0x10,0x52,0x04,0x08,
++ 0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x09,0x00,0xd2,0x0c,0x51,0x04,0x09,
++ 0x00,0x10,0x04,0x09,0x00,0x0d,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0d,0x00,0x0c,
++ 0x00,0x06,0x00,0x94,0x0c,0x53,0x04,0x06,0x00,0x12,0x04,0x06,0x00,0x0a,0x00,0x06,
++ 0x00,0xe4,0x39,0x01,0xd3,0x0c,0xd2,0x06,0xcf,0x06,0x04,0x00,0xcf,0x06,0x06,0x00,
++ 0xd2,0x30,0xd1,0x06,0xcf,0x06,0x06,0x00,0xd0,0x06,0xcf,0x06,0x06,0x00,0xcf,0x86,
++ 0x95,0x1e,0x54,0x04,0x06,0x00,0x53,0x04,0x06,0x00,0x52,0x04,0x06,0x00,0x91,0x0e,
++ 0x10,0x0a,0x06,0xff,0xe2,0xab,0x9d,0xcc,0xb8,0x00,0x06,0x00,0x06,0x00,0x06,0x00,
++ 0xd1,0x80,0xd0,0x3a,0xcf,0x86,0xd5,0x28,0xd4,0x10,0x53,0x04,0x07,0x00,0x52,0x04,
++ 0x07,0x00,0x11,0x04,0x07,0x00,0x08,0x00,0xd3,0x08,0x12,0x04,0x08,0x00,0x09,0x00,
++ 0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x94,0x0c,
++ 0x93,0x08,0x12,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xcf,0x86,0xd5,0x30,
++ 0xd4,0x14,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,
++ 0x10,0x00,0x10,0x00,0xd3,0x10,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,
++ 0x0b,0x00,0x0b,0x00,0x92,0x08,0x11,0x04,0x0b,0x00,0x10,0x00,0x10,0x00,0x54,0x04,
++ 0x10,0x00,0x93,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x00,0x00,0x10,0x00,0x10,0x00,
++ 0xd0,0x32,0xcf,0x86,0xd5,0x14,0x54,0x04,0x10,0x00,0x93,0x0c,0x52,0x04,0x10,0x00,
++ 0x11,0x04,0x10,0x00,0x00,0x00,0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,
++ 0xd2,0x08,0x11,0x04,0x10,0x00,0x14,0x00,0x91,0x08,0x10,0x04,0x14,0x00,0x10,0x00,
++ 0x10,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x10,0x00,0x15,0x00,0x10,0x00,0x10,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,
++ 0x10,0x00,0x10,0x04,0x13,0x00,0x14,0x00,0x14,0x00,0x14,0x00,0xd4,0x0c,0x53,0x04,
++ 0x14,0x00,0x12,0x04,0x14,0x00,0x11,0x00,0x53,0x04,0x14,0x00,0x52,0x04,0x14,0x00,
++ 0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x15,0x00,0xe3,0xb9,0x01,0xd2,0xac,0xd1,
++ 0x68,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x08,0x00,0x94,0x14,0x53,0x04,0x08,0x00,0x52,
++ 0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0x08,0x00,0xcf,
++ 0x86,0xd5,0x18,0x54,0x04,0x08,0x00,0x53,0x04,0x08,0x00,0x52,0x04,0x08,0x00,0x51,
++ 0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0xd4,0x14,0x53,0x04,0x09,0x00,0x52,
++ 0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0xd3,0x10,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x0a,0x00,0x0a,0x00,0x09,0x00,0x52,0x04,0x0a,
++ 0x00,0x11,0x04,0x0a,0x00,0x0b,0x00,0xd0,0x06,0xcf,0x06,0x08,0x00,0xcf,0x86,0x55,
++ 0x04,0x08,0x00,0xd4,0x1c,0x53,0x04,0x08,0x00,0xd2,0x0c,0x51,0x04,0x08,0x00,0x10,
++ 0x04,0x08,0x00,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0xe6,0xd3,
++ 0x0c,0x92,0x08,0x11,0x04,0x0b,0xe6,0x0d,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x00,0x00,0x08,0x00,0x08,0x00,0x08,0x00,0xd1,0x6c,0xd0,0x2a,0xcf,0x86,0x55,
++ 0x04,0x08,0x00,0x94,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,
++ 0x04,0x00,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x0d,
++ 0x00,0x00,0x00,0x08,0x00,0xcf,0x86,0x55,0x04,0x08,0x00,0xd4,0x1c,0xd3,0x0c,0x52,
++ 0x04,0x08,0x00,0x11,0x04,0x08,0x00,0x0d,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,
++ 0x00,0x10,0x04,0x00,0x00,0x08,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,
++ 0x04,0x00,0x00,0x0c,0x09,0xd0,0x5a,0xcf,0x86,0xd5,0x18,0x54,0x04,0x08,0x00,0x93,
++ 0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0x00,
++ 0x00,0xd4,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,
++ 0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,
++ 0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,
++ 0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,0x00,0xcf,
++ 0x86,0x95,0x40,0xd4,0x20,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,
++ 0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,
++ 0x00,0x00,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,
++ 0x00,0x00,0x00,0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x00,
++ 0x00,0x0a,0xe6,0xd2,0x9c,0xd1,0x68,0xd0,0x32,0xcf,0x86,0xd5,0x14,0x54,0x04,0x08,
++ 0x00,0x53,0x04,0x08,0x00,0x52,0x04,0x0a,0x00,0x11,0x04,0x08,0x00,0x0a,0x00,0x54,
++ 0x04,0x0a,0x00,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0a,0x00,0x0b,0x00,0x0d,
++ 0x00,0x0d,0x00,0x12,0x04,0x0d,0x00,0x10,0x00,0xcf,0x86,0x95,0x30,0x94,0x2c,0xd3,
++ 0x18,0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x12,0x00,0x91,0x08,0x10,
++ 0x04,0x12,0x00,0x13,0x00,0x13,0x00,0xd2,0x08,0x11,0x04,0x13,0x00,0x14,0x00,0x51,
++ 0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x15,0x00,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,
++ 0x86,0x95,0x18,0x54,0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,0x0c,0x51,0x04,0x04,
++ 0x00,0x10,0x04,0x00,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,0x86,0x55,0x04,0x04,
++ 0x00,0x54,0x04,0x04,0x00,0x93,0x08,0x12,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0xd1,
++ 0x06,0xcf,0x06,0x04,0x00,0xd0,0x06,0xcf,0x06,0x04,0x00,0xcf,0x86,0xd5,0x14,0x54,
++ 0x04,0x04,0x00,0x93,0x0c,0x52,0x04,0x04,0x00,0x11,0x04,0x04,0x00,0x00,0x00,0x00,
++ 0x00,0x54,0x04,0x00,0x00,0x53,0x04,0x04,0x00,0x12,0x04,0x04,0x00,0x00,0x00,0xcf,
++ 0x86,0xe5,0xa6,0x05,0xe4,0x9f,0x05,0xe3,0x96,0x04,0xe2,0xe4,0x03,0xe1,0xc0,0x01,
++ 0xd0,0x3e,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x1c,0x53,0x04,0x01,0x00,0xd2,0x0c,
++ 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0xda,0x01,0xe4,0x91,0x08,0x10,0x04,0x01,0xe8,
++ 0x01,0xde,0x01,0xe0,0x53,0x04,0x01,0x00,0xd2,0x0c,0x51,0x04,0x04,0x00,0x10,0x04,
++ 0x04,0x00,0x06,0x00,0x51,0x04,0x06,0x00,0x10,0x04,0x04,0x00,0x01,0x00,0xcf,0x86,
++ 0xd5,0xaa,0xd4,0x32,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,
++ 0x8b,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x8d,0xe3,0x82,
++ 0x99,0x00,0x01,0x00,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,
++ 0x8f,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x91,0xe3,0x82,
++ 0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x93,0xe3,0x82,0x99,
++ 0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x95,0xe3,0x82,0x99,0x00,0x01,0x00,
++ 0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x97,0xe3,0x82,0x99,0x00,0x01,
++ 0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0x99,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,
++ 0x10,0x0b,0x01,0xff,0xe3,0x81,0x9b,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,
++ 0xff,0xe3,0x81,0x9d,0xe3,0x82,0x99,0x00,0x01,0x00,0xd4,0x53,0xd3,0x3c,0xd2,0x1e,
++ 0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0x9f,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,
++ 0x0b,0x01,0xff,0xe3,0x81,0xa1,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x04,
++ 0x01,0x00,0x01,0xff,0xe3,0x81,0xa4,0xe3,0x82,0x99,0x00,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0xe3,0x81,0xa6,0xe3,0x82,0x99,0x00,0x92,0x13,0x91,0x0f,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0xe3,0x81,0xa8,0xe3,0x82,0x99,0x00,0x01,0x00,0x01,0x00,0xd3,0x4a,0xd2,
++ 0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe3,0x81,0xaf,0xe3,0x82,0x99,0x00,0x01,0xff,
++ 0xe3,0x81,0xaf,0xe3,0x82,0x9a,0x00,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x81,0xb2,
++ 0xe3,0x82,0x99,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb2,0xe3,0x82,0x9a,
++ 0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb5,0xe3,0x82,0x99,0x00,0x01,0xff,
++ 0xe3,0x81,0xb5,0xe3,0x82,0x9a,0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0xe3,0x81,0xb8,0xe3,0x82,0x99,0x00,0x10,0x0b,0x01,0xff,0xe3,0x81,0xb8,0xe3,
++ 0x82,0x9a,0x00,0x01,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,0xe3,0x81,0xbb,0xe3,0x82,
++ 0x99,0x00,0x01,0xff,0xe3,0x81,0xbb,0xe3,0x82,0x9a,0x00,0x01,0x00,0xd0,0xee,0xcf,
++ 0x86,0xd5,0x42,0x54,0x04,0x01,0x00,0xd3,0x1b,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,
++ 0x0b,0x01,0xff,0xe3,0x81,0x86,0xe3,0x82,0x99,0x00,0x06,0x00,0x10,0x04,0x06,0x00,
++ 0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x00,0x00,0x01,0x08,0x10,0x04,0x01,0x08,
++ 0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0x9d,0xe3,0x82,0x99,
++ 0x00,0x06,0x00,0xd4,0x32,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,0x01,
++ 0x00,0x01,0x00,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,
++ 0x82,0xab,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xad,0xe3,
++ 0x82,0x99,0x00,0x01,0x00,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,
++ 0x82,0xaf,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb1,0xe3,
++ 0x82,0x99,0x00,0x01,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb3,0xe3,0x82,
++ 0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb5,0xe3,0x82,0x99,0x00,0x01,
++ 0x00,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb7,0xe3,0x82,0x99,0x00,
++ 0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x82,0xb9,0xe3,0x82,0x99,0x00,0x01,0x00,0xd1,
++ 0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xbb,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x0b,
++ 0x01,0xff,0xe3,0x82,0xbd,0xe3,0x82,0x99,0x00,0x01,0x00,0xcf,0x86,0xd5,0xd5,0xd4,
++ 0x53,0xd3,0x3c,0xd2,0x1e,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,0x82,0xbf,0xe3,0x82,
++ 0x99,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0x81,0xe3,0x82,0x99,0x00,0x01,
++ 0x00,0xd1,0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x84,0xe3,0x82,0x99,0x00,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x86,0xe3,0x82,0x99,0x00,0x92,0x13,0x91,
++ 0x0f,0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x88,0xe3,0x82,0x99,0x00,0x01,0x00,
++ 0x01,0x00,0xd3,0x4a,0xd2,0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,0xe3,0x83,0x8f,0xe3,
++ 0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x8f,0xe3,0x82,0x9a,0x00,0x10,0x04,0x01,0x00,
++ 0x01,0xff,0xe3,0x83,0x92,0xe3,0x82,0x99,0x00,0xd1,0x0f,0x10,0x0b,0x01,0xff,0xe3,
++ 0x83,0x92,0xe3,0x82,0x9a,0x00,0x01,0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0x95,0xe3,
++ 0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x95,0xe3,0x82,0x9a,0x00,0xd2,0x1e,0xd1,0x0f,
++ 0x10,0x04,0x01,0x00,0x01,0xff,0xe3,0x83,0x98,0xe3,0x82,0x99,0x00,0x10,0x0b,0x01,
++ 0xff,0xe3,0x83,0x98,0xe3,0x82,0x9a,0x00,0x01,0x00,0x91,0x16,0x10,0x0b,0x01,0xff,
++ 0xe3,0x83,0x9b,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0x9b,0xe3,0x82,0x9a,0x00,
++ 0x01,0x00,0x54,0x04,0x01,0x00,0xd3,0x22,0x52,0x04,0x01,0x00,0xd1,0x0f,0x10,0x0b,
++ 0x01,0xff,0xe3,0x82,0xa6,0xe3,0x82,0x99,0x00,0x01,0x00,0x10,0x04,0x01,0x00,0x01,
++ 0xff,0xe3,0x83,0xaf,0xe3,0x82,0x99,0x00,0xd2,0x25,0xd1,0x16,0x10,0x0b,0x01,0xff,
++ 0xe3,0x83,0xb0,0xe3,0x82,0x99,0x00,0x01,0xff,0xe3,0x83,0xb1,0xe3,0x82,0x99,0x00,
++ 0x10,0x0b,0x01,0xff,0xe3,0x83,0xb2,0xe3,0x82,0x99,0x00,0x01,0x00,0x51,0x04,0x01,
++ 0x00,0x10,0x0b,0x01,0xff,0xe3,0x83,0xbd,0xe3,0x82,0x99,0x00,0x06,0x00,0xd1,0x65,
++ 0xd0,0x46,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x00,0x00,0x91,0x08,
++ 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd4,0x18,0x53,0x04,
++ 0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x0a,0x00,0x10,0x04,
++ 0x13,0x00,0x14,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x94,0x15,0x93,0x11,
++ 0x52,0x04,0x01,0x00,0x91,0x09,0x10,0x05,0x01,0xff,0x00,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0x01,0x00,0xd0,0x32,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x01,0x00,0x52,
++ 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x54,
++ 0x04,0x04,0x00,0x53,0x04,0x04,0x00,0x92,0x0c,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,
++ 0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x08,0x14,0x04,0x08,0x00,0x0a,0x00,0x94,
++ 0x0c,0x93,0x08,0x12,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x06,0x00,0xd2,0xa4,0xd1,
++ 0x5c,0xd0,0x22,0xcf,0x86,0x95,0x1c,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,
++ 0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x07,0x00,0x10,0x04,0x07,0x00,0x00,
++ 0x00,0x01,0x00,0xcf,0x86,0xd5,0x20,0xd4,0x0c,0x93,0x08,0x12,0x04,0x01,0x00,0x0b,
++ 0x00,0x0b,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x06,0x00,0x06,
++ 0x00,0x06,0x00,0x06,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,
++ 0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x08,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,0x55,
++ 0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x01,
++ 0x00,0x06,0x00,0x06,0x00,0x06,0x00,0x06,0x00,0xcf,0x86,0xd5,0x10,0x94,0x0c,0x53,
++ 0x04,0x01,0x00,0x12,0x04,0x01,0x00,0x07,0x00,0x01,0x00,0x54,0x04,0x01,0x00,0x53,
++ 0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x16,
++ 0x00,0xd1,0x30,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,
++ 0x04,0x01,0x00,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x01,0x00,0x01,
++ 0x00,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x01,0x00,0x53,
++ 0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x07,0x00,0x54,0x04,0x01,
++ 0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x07,0x00,0xcf,0x06,0x04,0x00,0xcf,0x06,0x04,0x00,0xd1,0x48,0xd0,0x40,0xcf,
++ 0x86,0xd5,0x06,0xcf,0x06,0x04,0x00,0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x2c,0xd2,
++ 0x06,0xcf,0x06,0x04,0x00,0xd1,0x06,0xcf,0x06,0x04,0x00,0xd0,0x1a,0xcf,0x86,0x55,
++ 0x04,0x04,0x00,0x54,0x04,0x04,0x00,0x93,0x0c,0x52,0x04,0x04,0x00,0x11,0x04,0x04,
++ 0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x07,0x00,0xcf,0x06,0x01,0x00,0xcf,0x86,0xcf,
++ 0x06,0x01,0x00,0xcf,0x86,0xcf,0x06,0x01,0x00,0xe2,0x71,0x05,0xd1,0x8c,0xd0,0x08,
++ 0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xd4,0x06,
++ 0xcf,0x06,0x01,0x00,0xd3,0x06,0xcf,0x06,0x01,0x00,0xd2,0x06,0xcf,0x06,0x01,0x00,
++ 0xd1,0x06,0xcf,0x06,0x01,0x00,0xd0,0x22,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x10,
++ 0x93,0x0c,0x52,0x04,0x01,0x00,0x11,0x04,0x01,0x00,0x08,0x00,0x08,0x00,0x53,0x04,
++ 0x08,0x00,0x12,0x04,0x08,0x00,0x0a,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x18,0xd3,0x08,
++ 0x12,0x04,0x0a,0x00,0x0b,0x00,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,
++ 0x11,0x00,0x11,0x00,0x93,0x0c,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x13,0x00,
++ 0x13,0x00,0x94,0x14,0x53,0x04,0x13,0x00,0x92,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,
++ 0x13,0x00,0x14,0x00,0x14,0x00,0x00,0x00,0xe0,0xdb,0x04,0xcf,0x86,0xe5,0xdf,0x01,
++ 0xd4,0x06,0xcf,0x06,0x04,0x00,0xd3,0x74,0xd2,0x6e,0xd1,0x06,0xcf,0x06,0x04,0x00,
++ 0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x04,0x00,0x52,0x04,0x04,0x00,
++ 0x91,0x08,0x10,0x04,0x04,0x00,0x00,0x00,0x00,0x00,0x04,0x00,0xd4,0x10,0x93,0x0c,
++ 0x92,0x08,0x11,0x04,0x04,0x00,0x06,0x00,0x04,0x00,0x04,0x00,0x93,0x10,0x52,0x04,
++ 0x04,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x04,0x00,0x04,0x00,0x04,0x00,0xcf,0x86,
++ 0x95,0x24,0x94,0x20,0x93,0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x04,0x00,0x06,0x00,
++ 0x04,0x00,0xd1,0x08,0x10,0x04,0x04,0x00,0x06,0x00,0x10,0x04,0x04,0x00,0x00,0x00,
++ 0x00,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x06,0x0a,0x00,0xd2,0x84,0xd1,0x4c,0xd0,0x16,
++ 0xcf,0x86,0x55,0x04,0x0a,0x00,0x94,0x0c,0x53,0x04,0x0a,0x00,0x12,0x04,0x0a,0x00,
++ 0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x0a,0x00,0xd4,0x1c,0xd3,0x0c,0x92,0x08,
++ 0x11,0x04,0x0c,0x00,0x0a,0x00,0x0a,0x00,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,
++ 0x10,0x04,0x0a,0x00,0x0a,0xe6,0xd3,0x08,0x12,0x04,0x0a,0x00,0x0d,0xe6,0x52,0x04,
++ 0x0d,0xe6,0x11,0x04,0x0a,0xe6,0x0a,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,
++ 0x0a,0x00,0x53,0x04,0x0a,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,
++ 0x11,0xe6,0x0d,0xe6,0x0b,0x00,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,
++ 0x93,0x0c,0x92,0x08,0x11,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x00,0x00,0x00,0xd1,0x40,
++ 0xd0,0x3a,0xcf,0x86,0xd5,0x24,0x54,0x04,0x08,0x00,0xd3,0x10,0x52,0x04,0x08,0x00,
++ 0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x09,0x00,0x92,0x0c,0x51,0x04,0x09,0x00,
++ 0x10,0x04,0x09,0x00,0x0a,0x00,0x0a,0x00,0x94,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,
++ 0x09,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0x0a,0x00,0xcf,0x06,0x0a,0x00,0xd0,0x5e,
++ 0xcf,0x86,0xd5,0x28,0xd4,0x18,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0xd1,0x08,
++ 0x10,0x04,0x0a,0x00,0x0c,0x00,0x10,0x04,0x0c,0x00,0x11,0x00,0x93,0x0c,0x92,0x08,
++ 0x11,0x04,0x0c,0x00,0x0d,0x00,0x10,0x00,0x10,0x00,0xd4,0x1c,0x53,0x04,0x0c,0x00,
++ 0xd2,0x0c,0x51,0x04,0x0c,0x00,0x10,0x04,0x0d,0x00,0x10,0x00,0x51,0x04,0x10,0x00,
++ 0x10,0x04,0x12,0x00,0x14,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x10,0x00,0x11,0x00,
++ 0x11,0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x15,0x00,0x15,0x00,0xcf,0x86,0xd5,0x1c,
++ 0x94,0x18,0x93,0x14,0xd2,0x08,0x11,0x04,0x00,0x00,0x15,0x00,0x51,0x04,0x15,0x00,
++ 0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x54,0x04,0x00,0x00,0xd3,0x10,
++ 0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x92,0x0c,
++ 0x51,0x04,0x0d,0x00,0x10,0x04,0x0c,0x00,0x0a,0x00,0x0a,0x00,0xe4,0xf2,0x02,0xe3,
++ 0x65,0x01,0xd2,0x98,0xd1,0x48,0xd0,0x36,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,
++ 0x52,0x04,0x08,0x00,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x09,0x08,0x00,0x08,0x00,
++ 0x08,0x00,0xd4,0x0c,0x53,0x04,0x08,0x00,0x12,0x04,0x08,0x00,0x00,0x00,0x53,0x04,
++ 0x0b,0x00,0x92,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,
++ 0x09,0x00,0x54,0x04,0x09,0x00,0x13,0x04,0x09,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,
++ 0x0a,0x00,0xcf,0x86,0xd5,0x2c,0xd4,0x1c,0xd3,0x10,0x52,0x04,0x0a,0x00,0x91,0x08,
++ 0x10,0x04,0x0a,0x09,0x12,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,
++ 0x0a,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,0x11,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,
++ 0x54,0x04,0x0b,0xe6,0xd3,0x0c,0x92,0x08,0x11,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x00,
++ 0x52,0x04,0x0b,0x00,0x11,0x04,0x11,0x00,0x14,0x00,0xd1,0x60,0xd0,0x22,0xcf,0x86,
++ 0x55,0x04,0x0a,0x00,0x94,0x18,0x53,0x04,0x0a,0x00,0xd2,0x0c,0x51,0x04,0x0a,0x00,
++ 0x10,0x04,0x0a,0x00,0x0a,0xdc,0x11,0x04,0x0a,0xdc,0x0a,0x00,0x0a,0x00,0xcf,0x86,
++ 0xd5,0x24,0x54,0x04,0x0a,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x0a,0x00,0x10,0x04,
++ 0x0a,0x00,0x0a,0x09,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x0a,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,
++ 0x91,0x08,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,
++ 0x0b,0x00,0x54,0x04,0x0b,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,
++ 0x0b,0x00,0x0b,0x07,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x34,0xd4,0x20,0xd3,0x10,
++ 0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x09,0x0b,0x00,0x0b,0x00,0x0b,0x00,0x52,0x04,
++ 0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x53,0x04,0x0b,0x00,
++ 0xd2,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x0b,0x00,0x54,0x04,
++ 0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,
++ 0x10,0x00,0x00,0x00,0xd2,0xd0,0xd1,0x50,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x0a,0x00,
++ 0x54,0x04,0x0a,0x00,0x93,0x10,0x52,0x04,0x0a,0x00,0x51,0x04,0x0a,0x00,0x10,0x04,
++ 0x0a,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x20,0xd4,0x10,0x53,0x04,0x0a,0x00,
++ 0x52,0x04,0x0a,0x00,0x11,0x04,0x0a,0x00,0x00,0x00,0x53,0x04,0x0a,0x00,0x92,0x08,
++ 0x11,0x04,0x0a,0x00,0x00,0x00,0x0a,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,
++ 0x12,0x04,0x0b,0x00,0x10,0x00,0xd0,0x3a,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,
++ 0x0b,0x00,0xd3,0x1c,0xd2,0x0c,0x91,0x08,0x10,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0xe6,
++ 0xd1,0x08,0x10,0x04,0x0b,0xdc,0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0xe6,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x0b,0xe6,0x0b,0x00,0x0b,0x00,0x11,0x04,0x0b,0x00,0x0b,0xe6,
++ 0xcf,0x86,0xd5,0x2c,0xd4,0x18,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,
++ 0x0b,0xe6,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x00,0x00,
++ 0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,0x54,0x04,
++ 0x0d,0x00,0x93,0x10,0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x09,
++ 0x00,0x00,0x00,0x00,0xd1,0x8c,0xd0,0x72,0xcf,0x86,0xd5,0x4c,0xd4,0x30,0xd3,0x18,
+ 0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x0c,0x00,0x0c,0x00,0x51,0x04,0x0c,0x00,
+- 0x10,0x04,0x0c,0x00,0x00,0x00,0x93,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x0c,0x00,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,
+- 0x94,0x20,0xd3,0x10,0x52,0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,
+- 0x00,0x00,0x52,0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,
+- 0x10,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x94,0x10,0x93,0x0c,0x52,0x04,0x11,0x00,
+- 0x11,0x04,0x10,0x00,0x15,0x00,0x00,0x00,0x11,0x00,0xd0,0x06,0xcf,0x06,0x11,0x00,
+- 0xcf,0x86,0x55,0x04,0x0b,0x00,0xd4,0x14,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,
+- 0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0x09,0x00,0x00,0x53,0x04,0x0b,0x00,0x92,0x08,
+- 0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x02,0xff,0xff,0xcf,0x86,0xcf,
+- 0x06,0x02,0xff,0xff,0xd1,0x76,0xd0,0x09,0xcf,0x86,0xcf,0x06,0x02,0xff,0xff,0xcf,
+- 0x86,0x85,0xd4,0x07,0xcf,0x06,0x02,0xff,0xff,0xd3,0x07,0xcf,0x06,0x02,0xff,0xff,
+- 0xd2,0x07,0xcf,0x06,0x02,0xff,0xff,0xd1,0x07,0xcf,0x06,0x02,0xff,0xff,0xd0,0x18,
+- 0xcf,0x86,0x55,0x05,0x02,0xff,0xff,0x94,0x0d,0x93,0x09,0x12,0x05,0x02,0xff,0xff,
+- 0x00,0x00,0x00,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x24,0x94,0x20,0xd3,0x10,0x52,0x04,
+- 0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x92,0x0c,0x51,0x04,
+- 0x00,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,0x0b,0x00,0x54,0x04,0x0b,0x00,
+- 0x53,0x04,0x0b,0x00,0x12,0x04,0x0b,0x00,0x00,0x00,0xd0,0x08,0xcf,0x86,0xcf,0x06,
+- 0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,0xe4,0x9c,0x10,0xe3,0x16,0x08,
+- 0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x08,0x04,0xe0,0x04,0x02,0xcf,0x86,0xe5,0x01,
+- 0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xb1,0x88,
+- 0x00,0x01,0xff,0xe6,0x9b,0xb4,0x00,0x10,0x08,0x01,0xff,0xe8,0xbb,0x8a,0x00,0x01,
+- 0xff,0xe8,0xb3,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xbb,0x91,0x00,0x01,
+- 0xff,0xe4,0xb8,0xb2,0x00,0x10,0x08,0x01,0xff,0xe5,0x8f,0xa5,0x00,0x01,0xff,0xe9,
+- 0xbe,0x9c,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xbe,0x9c,0x00,0x01,
+- 0xff,0xe5,0xa5,0x91,0x00,0x10,0x08,0x01,0xff,0xe9,0x87,0x91,0x00,0x01,0xff,0xe5,
+- 0x96,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xa5,0x88,0x00,0x01,0xff,0xe6,
+- 0x87,0xb6,0x00,0x10,0x08,0x01,0xff,0xe7,0x99,0xa9,0x00,0x01,0xff,0xe7,0xbe,0x85,
+- 0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x98,0xbf,0x00,0x01,
+- 0xff,0xe8,0x9e,0xba,0x00,0x10,0x08,0x01,0xff,0xe8,0xa3,0xb8,0x00,0x01,0xff,0xe9,
+- 0x82,0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xa8,0x82,0x00,0x01,0xff,0xe6,
+- 0xb4,0x9b,0x00,0x10,0x08,0x01,0xff,0xe7,0x83,0x99,0x00,0x01,0xff,0xe7,0x8f,0x9e,
+- 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x90,0xbd,0x00,0x01,0xff,0xe9,
+- 0x85,0xaa,0x00,0x10,0x08,0x01,0xff,0xe9,0xa7,0xb1,0x00,0x01,0xff,0xe4,0xba,0x82,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x8d,0xb5,0x00,0x01,0xff,0xe6,0xac,0x84,
+- 0x00,0x10,0x08,0x01,0xff,0xe7,0x88,0x9b,0x00,0x01,0xff,0xe8,0x98,0xad,0x00,0xd4,
+- 0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xb8,0x9e,0x00,0x01,
+- 0xff,0xe5,0xb5,0x90,0x00,0x10,0x08,0x01,0xff,0xe6,0xbf,0xab,0x00,0x01,0xff,0xe8,
+- 0x97,0x8d,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xa5,0xa4,0x00,0x01,0xff,0xe6,
+- 0x8b,0x89,0x00,0x10,0x08,0x01,0xff,0xe8,0x87,0x98,0x00,0x01,0xff,0xe8,0xa0,0x9f,
+- 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xbb,0x8a,0x00,0x01,0xff,0xe6,
+- 0x9c,0x97,0x00,0x10,0x08,0x01,0xff,0xe6,0xb5,0xaa,0x00,0x01,0xff,0xe7,0x8b,0xbc,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x83,0x8e,0x00,0x01,0xff,0xe4,0xbe,0x86,
+- 0x00,0x10,0x08,0x01,0xff,0xe5,0x86,0xb7,0x00,0x01,0xff,0xe5,0x8b,0x9e,0x00,0xd3,
+- 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x93,0x84,0x00,0x01,0xff,0xe6,
+- 0xab,0x93,0x00,0x10,0x08,0x01,0xff,0xe7,0x88,0x90,0x00,0x01,0xff,0xe7,0x9b,0xa7,
+- 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x80,0x81,0x00,0x01,0xff,0xe8,0x98,0x86,
+- 0x00,0x10,0x08,0x01,0xff,0xe8,0x99,0x9c,0x00,0x01,0xff,0xe8,0xb7,0xaf,0x00,0xd2,
+- 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9c,0xb2,0x00,0x01,0xff,0xe9,0xad,0xaf,
+- 0x00,0x10,0x08,0x01,0xff,0xe9,0xb7,0xba,0x00,0x01,0xff,0xe7,0xa2,0x8c,0x00,0xd1,
+- 0x10,0x10,0x08,0x01,0xff,0xe7,0xa5,0xbf,0x00,0x01,0xff,0xe7,0xb6,0xa0,0x00,0x10,
+- 0x08,0x01,0xff,0xe8,0x8f,0x89,0x00,0x01,0xff,0xe9,0x8c,0x84,0x00,0xcf,0x86,0xe5,
+- 0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xb9,
+- 0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0x10,0x08,0x01,0xff,0xe5,0xa3,0x9f,0x00,
+- 0x01,0xff,0xe5,0xbc,0x84,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xb1,0xa0,0x00,
+- 0x01,0xff,0xe8,0x81,0xbe,0x00,0x10,0x08,0x01,0xff,0xe7,0x89,0xa2,0x00,0x01,0xff,
+- 0xe7,0xa3,0x8a,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xb3,0x82,0x00,
+- 0x01,0xff,0xe9,0x9b,0xb7,0x00,0x10,0x08,0x01,0xff,0xe5,0xa3,0x98,0x00,0x01,0xff,
+- 0xe5,0xb1,0xa2,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xa8,0x93,0x00,0x01,0xff,
+- 0xe6,0xb7,0x9a,0x00,0x10,0x08,0x01,0xff,0xe6,0xbc,0x8f,0x00,0x01,0xff,0xe7,0xb4,
+- 0xaf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xb8,0xb7,0x00,
+- 0x01,0xff,0xe9,0x99,0x8b,0x00,0x10,0x08,0x01,0xff,0xe5,0x8b,0x92,0x00,0x01,0xff,
+- 0xe8,0x82,0x8b,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x87,0x9c,0x00,0x01,0xff,
+- 0xe5,0x87,0x8c,0x00,0x10,0x08,0x01,0xff,0xe7,0xa8,0x9c,0x00,0x01,0xff,0xe7,0xb6,
+- 0xbe,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x8f,0xb1,0x00,0x01,0xff,
+- 0xe9,0x99,0xb5,0x00,0x10,0x08,0x01,0xff,0xe8,0xae,0x80,0x00,0x01,0xff,0xe6,0x8b,
+- 0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xa8,0x82,0x00,0x01,0xff,0xe8,0xab,
+- 0xbe,0x00,0x10,0x08,0x01,0xff,0xe4,0xb8,0xb9,0x00,0x01,0xff,0xe5,0xaf,0xa7,0x00,
+- 0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x80,0x92,0x00,
+- 0x01,0xff,0xe7,0x8e,0x87,0x00,0x10,0x08,0x01,0xff,0xe7,0x95,0xb0,0x00,0x01,0xff,
+- 0xe5,0x8c,0x97,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xa3,0xbb,0x00,0x01,0xff,
+- 0xe4,0xbe,0xbf,0x00,0x10,0x08,0x01,0xff,0xe5,0xbe,0xa9,0x00,0x01,0xff,0xe4,0xb8,
+- 0x8d,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xb3,0x8c,0x00,0x01,0xff,
+- 0xe6,0x95,0xb8,0x00,0x10,0x08,0x01,0xff,0xe7,0xb4,0xa2,0x00,0x01,0xff,0xe5,0x8f,
+- 0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xa1,0x9e,0x00,0x01,0xff,0xe7,0x9c,
+- 0x81,0x00,0x10,0x08,0x01,0xff,0xe8,0x91,0x89,0x00,0x01,0xff,0xe8,0xaa,0xaa,0x00,
+- 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xae,0xba,0x00,0x01,0xff,
+- 0xe8,0xbe,0xb0,0x00,0x10,0x08,0x01,0xff,0xe6,0xb2,0x88,0x00,0x01,0xff,0xe6,0x8b,
+- 0xbe,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x8b,0xa5,0x00,0x01,0xff,0xe6,0x8e,
+- 0xa0,0x00,0x10,0x08,0x01,0xff,0xe7,0x95,0xa5,0x00,0x01,0xff,0xe4,0xba,0xae,0x00,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x85,0xa9,0x00,0x01,0xff,0xe5,0x87,
+- 0x89,0x00,0x10,0x08,0x01,0xff,0xe6,0xa2,0x81,0x00,0x01,0xff,0xe7,0xb3,0xa7,0x00,
+- 0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x89,0xaf,0x00,0x01,0xff,0xe8,0xab,0x92,0x00,
+- 0x10,0x08,0x01,0xff,0xe9,0x87,0x8f,0x00,0x01,0xff,0xe5,0x8b,0xb5,0x00,0xe0,0x04,
+- 0x02,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
+- 0x01,0xff,0xe5,0x91,0x82,0x00,0x01,0xff,0xe5,0xa5,0xb3,0x00,0x10,0x08,0x01,0xff,
+- 0xe5,0xbb,0xac,0x00,0x01,0xff,0xe6,0x97,0x85,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe6,0xbf,0xbe,0x00,0x01,0xff,0xe7,0xa4,0xaa,0x00,0x10,0x08,0x01,0xff,0xe9,0x96,
+- 0xad,0x00,0x01,0xff,0xe9,0xa9,0xaa,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe9,0xba,0x97,0x00,0x01,0xff,0xe9,0xbb,0x8e,0x00,0x10,0x08,0x01,0xff,0xe5,0x8a,
+- 0x9b,0x00,0x01,0xff,0xe6,0x9b,0x86,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xad,
+- 0xb7,0x00,0x01,0xff,0xe8,0xbd,0xa2,0x00,0x10,0x08,0x01,0xff,0xe5,0xb9,0xb4,0x00,
+- 0x01,0xff,0xe6,0x86,0x90,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe6,0x88,0x80,0x00,0x01,0xff,0xe6,0x92,0x9a,0x00,0x10,0x08,0x01,0xff,0xe6,0xbc,
+- 0xa3,0x00,0x01,0xff,0xe7,0x85,0x89,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0x92,
+- 0x89,0x00,0x01,0xff,0xe7,0xa7,0x8a,0x00,0x10,0x08,0x01,0xff,0xe7,0xb7,0xb4,0x00,
+- 0x01,0xff,0xe8,0x81,0xaf,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xbc,
+- 0xa6,0x00,0x01,0xff,0xe8,0x93,0xae,0x00,0x10,0x08,0x01,0xff,0xe9,0x80,0xa3,0x00,
+- 0x01,0xff,0xe9,0x8d,0x8a,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x88,0x97,0x00,
+- 0x01,0xff,0xe5,0x8a,0xa3,0x00,0x10,0x08,0x01,0xff,0xe5,0x92,0xbd,0x00,0x01,0xff,
+- 0xe7,0x83,0x88,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe8,0xa3,0x82,0x00,0x01,0xff,0xe8,0xaa,0xaa,0x00,0x10,0x08,0x01,0xff,0xe5,0xbb,
+- 0x89,0x00,0x01,0xff,0xe5,0xbf,0xb5,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x8d,
+- 0xbb,0x00,0x01,0xff,0xe6,0xae,0xae,0x00,0x10,0x08,0x01,0xff,0xe7,0xb0,0xbe,0x00,
+- 0x01,0xff,0xe7,0x8d,0xb5,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe4,0xbb,
+- 0xa4,0x00,0x01,0xff,0xe5,0x9b,0xb9,0x00,0x10,0x08,0x01,0xff,0xe5,0xaf,0xa7,0x00,
+- 0x01,0xff,0xe5,0xb6,0xba,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x80,0x9c,0x00,
+- 0x01,0xff,0xe7,0x8e,0xb2,0x00,0x10,0x08,0x01,0xff,0xe7,0x91,0xa9,0x00,0x01,0xff,
+- 0xe7,0xbe,0x9a,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x81,
+- 0x86,0x00,0x01,0xff,0xe9,0x88,0xb4,0x00,0x10,0x08,0x01,0xff,0xe9,0x9b,0xb6,0x00,
+- 0x01,0xff,0xe9,0x9d,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xa0,0x98,0x00,
+- 0x01,0xff,0xe4,0xbe,0x8b,0x00,0x10,0x08,0x01,0xff,0xe7,0xa6,0xae,0x00,0x01,0xff,
+- 0xe9,0x86,0xb4,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9a,0xb8,0x00,
+- 0x01,0xff,0xe6,0x83,0xa1,0x00,0x10,0x08,0x01,0xff,0xe4,0xba,0x86,0x00,0x01,0xff,
+- 0xe5,0x83,0x9a,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xaf,0xae,0x00,0x01,0xff,
+- 0xe5,0xb0,0xbf,0x00,0x10,0x08,0x01,0xff,0xe6,0x96,0x99,0x00,0x01,0xff,0xe6,0xa8,
+- 0x82,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x01,0xff,0xe7,0x87,0x8e,0x00,0x01,0xff,0xe7,0x99,0x82,0x00,0x10,0x08,0x01,
+- 0xff,0xe8,0x93,0xbc,0x00,0x01,0xff,0xe9,0x81,0xbc,0x00,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe9,0xbe,0x8d,0x00,0x01,0xff,0xe6,0x9a,0x88,0x00,0x10,0x08,0x01,0xff,0xe9,
+- 0x98,0xae,0x00,0x01,0xff,0xe5,0x8a,0x89,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe6,0x9d,0xbb,0x00,0x01,0xff,0xe6,0x9f,0xb3,0x00,0x10,0x08,0x01,0xff,0xe6,
+- 0xb5,0x81,0x00,0x01,0xff,0xe6,0xba,0x9c,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,
+- 0x90,0x89,0x00,0x01,0xff,0xe7,0x95,0x99,0x00,0x10,0x08,0x01,0xff,0xe7,0xa1,0xab,
+- 0x00,0x01,0xff,0xe7,0xb4,0x90,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe9,0xa1,0x9e,0x00,0x01,0xff,0xe5,0x85,0xad,0x00,0x10,0x08,0x01,0xff,0xe6,
+- 0x88,0xae,0x00,0x01,0xff,0xe9,0x99,0xb8,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,
+- 0x80,0xab,0x00,0x01,0xff,0xe5,0xb4,0x99,0x00,0x10,0x08,0x01,0xff,0xe6,0xb7,0xaa,
+- 0x00,0x01,0xff,0xe8,0xbc,0xaa,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,
+- 0xbe,0x8b,0x00,0x01,0xff,0xe6,0x85,0x84,0x00,0x10,0x08,0x01,0xff,0xe6,0xa0,0x97,
+- 0x00,0x01,0xff,0xe7,0x8e,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9a,0x86,
+- 0x00,0x01,0xff,0xe5,0x88,0xa9,0x00,0x10,0x08,0x01,0xff,0xe5,0x90,0x8f,0x00,0x01,
+- 0xff,0xe5,0xb1,0xa5,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
+- 0xff,0xe6,0x98,0x93,0x00,0x01,0xff,0xe6,0x9d,0x8e,0x00,0x10,0x08,0x01,0xff,0xe6,
+- 0xa2,0xa8,0x00,0x01,0xff,0xe6,0xb3,0xa5,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,
+- 0x90,0x86,0x00,0x01,0xff,0xe7,0x97,0xa2,0x00,0x10,0x08,0x01,0xff,0xe7,0xbd,0xb9,
+- 0x00,0x01,0xff,0xe8,0xa3,0x8f,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,
+- 0xa3,0xa1,0x00,0x01,0xff,0xe9,0x87,0x8c,0x00,0x10,0x08,0x01,0xff,0xe9,0x9b,0xa2,
+- 0x00,0x01,0xff,0xe5,0x8c,0xbf,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xba,0xba,
+- 0x00,0x01,0xff,0xe5,0x90,0x9d,0x00,0x10,0x08,0x01,0xff,0xe7,0x87,0x90,0x00,0x01,
+- 0xff,0xe7,0x92,0x98,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,
+- 0x97,0xba,0x00,0x01,0xff,0xe9,0x9a,0xa3,0x00,0x10,0x08,0x01,0xff,0xe9,0xb1,0x97,
+- 0x00,0x01,0xff,0xe9,0xba,0x9f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x9e,0x97,
+- 0x00,0x01,0xff,0xe6,0xb7,0x8b,0x00,0x10,0x08,0x01,0xff,0xe8,0x87,0xa8,0x00,0x01,
+- 0xff,0xe7,0xab,0x8b,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xac,0xa0,
+- 0x00,0x01,0xff,0xe7,0xb2,0x92,0x00,0x10,0x08,0x01,0xff,0xe7,0x8b,0x80,0x00,0x01,
+- 0xff,0xe7,0x82,0x99,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xad,0x98,0x00,0x01,
+- 0xff,0xe4,0xbb,0x80,0x00,0x10,0x08,0x01,0xff,0xe8,0x8c,0xb6,0x00,0x01,0xff,0xe5,
+- 0x88,0xba,0x00,0xe2,0xad,0x06,0xe1,0xc4,0x03,0xe0,0xcb,0x01,0xcf,0x86,0xd5,0xe4,
+- 0xd4,0x74,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x88,0x87,0x00,
+- 0x01,0xff,0xe5,0xba,0xa6,0x00,0x10,0x08,0x01,0xff,0xe6,0x8b,0x93,0x00,0x01,0xff,
+- 0xe7,0xb3,0x96,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xae,0x85,0x00,0x01,0xff,
+- 0xe6,0xb4,0x9e,0x00,0x10,0x08,0x01,0xff,0xe6,0x9a,0xb4,0x00,0x01,0xff,0xe8,0xbc,
+- 0xbb,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xa1,0x8c,0x00,0x01,0xff,
+- 0xe9,0x99,0x8d,0x00,0x10,0x08,0x01,0xff,0xe8,0xa6,0x8b,0x00,0x01,0xff,0xe5,0xbb,
+- 0x93,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0xe5,0x85,0x80,0x00,0x01,0xff,0xe5,0x97,
+- 0x80,0x00,0x01,0x00,0xd3,0x34,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x01,0xff,0xe5,0xa1,
+- 0x9a,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0xe6,0x99,0xb4,0x00,0x01,0x00,0xd1,0x0c,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xe5,0x87,0x9e,0x00,0x10,0x08,0x01,0xff,0xe7,0x8c,
+- 0xaa,0x00,0x01,0xff,0xe7,0x9b,0x8a,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
+- 0xe7,0xa4,0xbc,0x00,0x01,0xff,0xe7,0xa5,0x9e,0x00,0x10,0x08,0x01,0xff,0xe7,0xa5,
+- 0xa5,0x00,0x01,0xff,0xe7,0xa6,0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9d,
+- 0x96,0x00,0x01,0xff,0xe7,0xb2,0xbe,0x00,0x10,0x08,0x01,0xff,0xe7,0xbe,0xbd,0x00,
+- 0x01,0x00,0xd4,0x64,0xd3,0x30,0xd2,0x18,0xd1,0x0c,0x10,0x08,0x01,0xff,0xe8,0x98,
+- 0x92,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0xe8,0xab,0xb8,0x00,0x01,0x00,0xd1,0x0c,
+- 0x10,0x04,0x01,0x00,0x01,0xff,0xe9,0x80,0xb8,0x00,0x10,0x08,0x01,0xff,0xe9,0x83,
+- 0xbd,0x00,0x01,0x00,0xd2,0x14,0x51,0x04,0x01,0x00,0x10,0x08,0x01,0xff,0xe9,0xa3,
+- 0xaf,0x00,0x01,0xff,0xe9,0xa3,0xbc,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xa4,
+- 0xa8,0x00,0x01,0xff,0xe9,0xb6,0xb4,0x00,0x10,0x08,0x0d,0xff,0xe9,0x83,0x9e,0x00,
+- 0x0d,0xff,0xe9,0x9a,0xb7,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,
+- 0xe4,0xbe,0xae,0x00,0x06,0xff,0xe5,0x83,0xa7,0x00,0x10,0x08,0x06,0xff,0xe5,0x85,
+- 0x8d,0x00,0x06,0xff,0xe5,0x8b,0x89,0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe5,0x8b,
+- 0xa4,0x00,0x06,0xff,0xe5,0x8d,0x91,0x00,0x10,0x08,0x06,0xff,0xe5,0x96,0x9d,0x00,
+- 0x06,0xff,0xe5,0x98,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe5,0x99,
+- 0xa8,0x00,0x06,0xff,0xe5,0xa1,0x80,0x00,0x10,0x08,0x06,0xff,0xe5,0xa2,0xa8,0x00,
+- 0x06,0xff,0xe5,0xb1,0xa4,0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe5,0xb1,0xae,0x00,
+- 0x06,0xff,0xe6,0x82,0x94,0x00,0x10,0x08,0x06,0xff,0xe6,0x85,0xa8,0x00,0x06,0xff,
+- 0xe6,0x86,0x8e,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,
+- 0x10,0x10,0x08,0x06,0xff,0xe6,0x87,0xb2,0x00,0x06,0xff,0xe6,0x95,0x8f,0x00,0x10,
+- 0x08,0x06,0xff,0xe6,0x97,0xa2,0x00,0x06,0xff,0xe6,0x9a,0x91,0x00,0xd1,0x10,0x10,
+- 0x08,0x06,0xff,0xe6,0xa2,0x85,0x00,0x06,0xff,0xe6,0xb5,0xb7,0x00,0x10,0x08,0x06,
+- 0xff,0xe6,0xb8,0x9a,0x00,0x06,0xff,0xe6,0xbc,0xa2,0x00,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x06,0xff,0xe7,0x85,0xae,0x00,0x06,0xff,0xe7,0x88,0xab,0x00,0x10,0x08,0x06,
+- 0xff,0xe7,0x90,0xa2,0x00,0x06,0xff,0xe7,0xa2,0x91,0x00,0xd1,0x10,0x10,0x08,0x06,
+- 0xff,0xe7,0xa4,0xbe,0x00,0x06,0xff,0xe7,0xa5,0x89,0x00,0x10,0x08,0x06,0xff,0xe7,
+- 0xa5,0x88,0x00,0x06,0xff,0xe7,0xa5,0x90,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x06,0xff,0xe7,0xa5,0x96,0x00,0x06,0xff,0xe7,0xa5,0x9d,0x00,0x10,0x08,0x06,
+- 0xff,0xe7,0xa6,0x8d,0x00,0x06,0xff,0xe7,0xa6,0x8e,0x00,0xd1,0x10,0x10,0x08,0x06,
+- 0xff,0xe7,0xa9,0x80,0x00,0x06,0xff,0xe7,0xaa,0x81,0x00,0x10,0x08,0x06,0xff,0xe7,
+- 0xaf,0x80,0x00,0x06,0xff,0xe7,0xb7,0xb4,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,
+- 0xff,0xe7,0xb8,0x89,0x00,0x06,0xff,0xe7,0xb9,0x81,0x00,0x10,0x08,0x06,0xff,0xe7,
+- 0xbd,0xb2,0x00,0x06,0xff,0xe8,0x80,0x85,0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe8,
+- 0x87,0xad,0x00,0x06,0xff,0xe8,0x89,0xb9,0x00,0x10,0x08,0x06,0xff,0xe8,0x89,0xb9,
+- 0x00,0x06,0xff,0xe8,0x91,0x97,0x00,0xd4,0x75,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
+- 0x08,0x06,0xff,0xe8,0xa4,0x90,0x00,0x06,0xff,0xe8,0xa6,0x96,0x00,0x10,0x08,0x06,
+- 0xff,0xe8,0xac,0x81,0x00,0x06,0xff,0xe8,0xac,0xb9,0x00,0xd1,0x10,0x10,0x08,0x06,
+- 0xff,0xe8,0xb3,0x93,0x00,0x06,0xff,0xe8,0xb4,0x88,0x00,0x10,0x08,0x06,0xff,0xe8,
+- 0xbe,0xb6,0x00,0x06,0xff,0xe9,0x80,0xb8,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,
+- 0xff,0xe9,0x9b,0xa3,0x00,0x06,0xff,0xe9,0x9f,0xbf,0x00,0x10,0x08,0x06,0xff,0xe9,
+- 0xa0,0xbb,0x00,0x0b,0xff,0xe6,0x81,0xb5,0x00,0x91,0x11,0x10,0x09,0x0b,0xff,0xf0,
+- 0xa4,0x8b,0xae,0x00,0x0b,0xff,0xe8,0x88,0x98,0x00,0x00,0x00,0xd3,0x40,0xd2,0x20,
+- 0xd1,0x10,0x10,0x08,0x08,0xff,0xe4,0xb8,0xa6,0x00,0x08,0xff,0xe5,0x86,0xb5,0x00,
+- 0x10,0x08,0x08,0xff,0xe5,0x85,0xa8,0x00,0x08,0xff,0xe4,0xbe,0x80,0x00,0xd1,0x10,
+- 0x10,0x08,0x08,0xff,0xe5,0x85,0x85,0x00,0x08,0xff,0xe5,0x86,0x80,0x00,0x10,0x08,
+- 0x08,0xff,0xe5,0x8b,0x87,0x00,0x08,0xff,0xe5,0x8b,0xba,0x00,0xd2,0x20,0xd1,0x10,
+- 0x10,0x08,0x08,0xff,0xe5,0x96,0x9d,0x00,0x08,0xff,0xe5,0x95,0x95,0x00,0x10,0x08,
+- 0x08,0xff,0xe5,0x96,0x99,0x00,0x08,0xff,0xe5,0x97,0xa2,0x00,0xd1,0x10,0x10,0x08,
+- 0x08,0xff,0xe5,0xa1,0x9a,0x00,0x08,0xff,0xe5,0xa2,0xb3,0x00,0x10,0x08,0x08,0xff,
+- 0xe5,0xa5,0x84,0x00,0x08,0xff,0xe5,0xa5,0x94,0x00,0xe0,0x04,0x02,0xcf,0x86,0xe5,
+- 0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0xa9,
+- 0xa2,0x00,0x08,0xff,0xe5,0xac,0xa8,0x00,0x10,0x08,0x08,0xff,0xe5,0xbb,0x92,0x00,
+- 0x08,0xff,0xe5,0xbb,0x99,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0xbd,0xa9,0x00,
+- 0x08,0xff,0xe5,0xbe,0xad,0x00,0x10,0x08,0x08,0xff,0xe6,0x83,0x98,0x00,0x08,0xff,
+- 0xe6,0x85,0x8e,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0x84,0x88,0x00,
+- 0x08,0xff,0xe6,0x86,0x8e,0x00,0x10,0x08,0x08,0xff,0xe6,0x85,0xa0,0x00,0x08,0xff,
+- 0xe6,0x87,0xb2,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0x88,0xb4,0x00,0x08,0xff,
+- 0xe6,0x8f,0x84,0x00,0x10,0x08,0x08,0xff,0xe6,0x90,0x9c,0x00,0x08,0xff,0xe6,0x91,
+- 0x92,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0x95,0x96,0x00,
+- 0x08,0xff,0xe6,0x99,0xb4,0x00,0x10,0x08,0x08,0xff,0xe6,0x9c,0x97,0x00,0x08,0xff,
+- 0xe6,0x9c,0x9b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0x9d,0x96,0x00,0x08,0xff,
+- 0xe6,0xad,0xb9,0x00,0x10,0x08,0x08,0xff,0xe6,0xae,0xba,0x00,0x08,0xff,0xe6,0xb5,
+- 0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe6,0xbb,0x9b,0x00,0x08,0xff,
+- 0xe6,0xbb,0x8b,0x00,0x10,0x08,0x08,0xff,0xe6,0xbc,0xa2,0x00,0x08,0xff,0xe7,0x80,
+- 0x9e,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x85,0xae,0x00,0x08,0xff,0xe7,0x9e,
+- 0xa7,0x00,0x10,0x08,0x08,0xff,0xe7,0x88,0xb5,0x00,0x08,0xff,0xe7,0x8a,0xaf,0x00,
+- 0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x8c,0xaa,0x00,
+- 0x08,0xff,0xe7,0x91,0xb1,0x00,0x10,0x08,0x08,0xff,0xe7,0x94,0x86,0x00,0x08,0xff,
+- 0xe7,0x94,0xbb,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x98,0x9d,0x00,0x08,0xff,
+- 0xe7,0x98,0x9f,0x00,0x10,0x08,0x08,0xff,0xe7,0x9b,0x8a,0x00,0x08,0xff,0xe7,0x9b,
+- 0x9b,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x9b,0xb4,0x00,0x08,0xff,
+- 0xe7,0x9d,0x8a,0x00,0x10,0x08,0x08,0xff,0xe7,0x9d,0x80,0x00,0x08,0xff,0xe7,0xa3,
+- 0x8c,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0xaa,0xb1,0x00,0x08,0xff,0xe7,0xaf,
+- 0x80,0x00,0x10,0x08,0x08,0xff,0xe7,0xb1,0xbb,0x00,0x08,0xff,0xe7,0xb5,0x9b,0x00,
+- 0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0xb7,0xb4,0x00,0x08,0xff,
+- 0xe7,0xbc,0xbe,0x00,0x10,0x08,0x08,0xff,0xe8,0x80,0x85,0x00,0x08,0xff,0xe8,0x8d,
+- 0x92,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0x8f,0xaf,0x00,0x08,0xff,0xe8,0x9d,
+- 0xb9,0x00,0x10,0x08,0x08,0xff,0xe8,0xa5,0x81,0x00,0x08,0xff,0xe8,0xa6,0x86,0x00,
+- 0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xa6,0x96,0x00,0x08,0xff,0xe8,0xaa,
+- 0xbf,0x00,0x10,0x08,0x08,0xff,0xe8,0xab,0xb8,0x00,0x08,0xff,0xe8,0xab,0x8b,0x00,
+- 0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xac,0x81,0x00,0x08,0xff,0xe8,0xab,0xbe,0x00,
+- 0x10,0x08,0x08,0xff,0xe8,0xab,0xad,0x00,0x08,0xff,0xe8,0xac,0xb9,0x00,0xcf,0x86,
+- 0x95,0xde,0xd4,0x81,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xae,
+- 0x8a,0x00,0x08,0xff,0xe8,0xb4,0x88,0x00,0x10,0x08,0x08,0xff,0xe8,0xbc,0xb8,0x00,
+- 0x08,0xff,0xe9,0x81,0xb2,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe9,0x86,0x99,0x00,
+- 0x08,0xff,0xe9,0x89,0xb6,0x00,0x10,0x08,0x08,0xff,0xe9,0x99,0xbc,0x00,0x08,0xff,
+- 0xe9,0x9b,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe9,0x9d,0x96,0x00,
+- 0x08,0xff,0xe9,0x9f,0x9b,0x00,0x10,0x08,0x08,0xff,0xe9,0x9f,0xbf,0x00,0x08,0xff,
+- 0xe9,0xa0,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe9,0xa0,0xbb,0x00,0x08,0xff,
+- 0xe9,0xac,0x92,0x00,0x10,0x08,0x08,0xff,0xe9,0xbe,0x9c,0x00,0x08,0xff,0xf0,0xa2,
+- 0xa1,0x8a,0x00,0xd3,0x45,0xd2,0x22,0xd1,0x12,0x10,0x09,0x08,0xff,0xf0,0xa2,0xa1,
+- 0x84,0x00,0x08,0xff,0xf0,0xa3,0x8f,0x95,0x00,0x10,0x08,0x08,0xff,0xe3,0xae,0x9d,
+- 0x00,0x08,0xff,0xe4,0x80,0x98,0x00,0xd1,0x11,0x10,0x08,0x08,0xff,0xe4,0x80,0xb9,
+- 0x00,0x08,0xff,0xf0,0xa5,0x89,0x89,0x00,0x10,0x09,0x08,0xff,0xf0,0xa5,0xb3,0x90,
+- 0x00,0x08,0xff,0xf0,0xa7,0xbb,0x93,0x00,0x92,0x14,0x91,0x10,0x10,0x08,0x08,0xff,
+- 0xe9,0xbd,0x83,0x00,0x08,0xff,0xe9,0xbe,0x8e,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0xe1,0x94,0x01,0xe0,0x08,0x01,0xcf,0x86,0xd5,0x42,0xd4,0x14,0x93,0x10,0x52,0x04,
+- 0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd3,0x10,
+- 0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x52,0x04,
+- 0x00,0x00,0xd1,0x0d,0x10,0x04,0x00,0x00,0x04,0xff,0xd7,0x99,0xd6,0xb4,0x00,0x10,
+- 0x04,0x01,0x1a,0x01,0xff,0xd7,0xb2,0xd6,0xb7,0x00,0xd4,0x42,0x53,0x04,0x01,0x00,
+- 0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,0xd7,0xa9,0xd7,0x81,0x00,0x01,
+- 0xff,0xd7,0xa9,0xd7,0x82,0x00,0xd1,0x16,0x10,0x0b,0x01,0xff,0xd7,0xa9,0xd6,0xbc,
+- 0xd7,0x81,0x00,0x01,0xff,0xd7,0xa9,0xd6,0xbc,0xd7,0x82,0x00,0x10,0x09,0x01,0xff,
+- 0xd7,0x90,0xd6,0xb7,0x00,0x01,0xff,0xd7,0x90,0xd6,0xb8,0x00,0xd3,0x43,0xd2,0x24,
+- 0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x90,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x91,0xd6,
+- 0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x92,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x93,0xd6,
+- 0xbc,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x94,0xd6,0xbc,0x00,0x01,0xff,0xd7,
+- 0x95,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x96,0xd6,0xbc,0x00,0x00,0x00,0xd2,
+- 0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x98,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x99,
+- 0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x9a,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x9b,
+- 0xd6,0xbc,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xd7,0x9c,0xd6,0xbc,0x00,0x00,0x00,
+- 0x10,0x09,0x01,0xff,0xd7,0x9e,0xd6,0xbc,0x00,0x00,0x00,0xcf,0x86,0x95,0x85,0x94,
+- 0x81,0xd3,0x3e,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0xa0,0xd6,0xbc,0x00,
+- 0x01,0xff,0xd7,0xa1,0xd6,0xbc,0x00,0x10,0x04,0x00,0x00,0x01,0xff,0xd7,0xa3,0xd6,
+- 0xbc,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xd7,0xa4,0xd6,0xbc,0x00,0x00,0x00,0x10,
+- 0x09,0x01,0xff,0xd7,0xa6,0xd6,0xbc,0x00,0x01,0xff,0xd7,0xa7,0xd6,0xbc,0x00,0xd2,
+- 0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0xa8,0xd6,0xbc,0x00,0x01,0xff,0xd7,0xa9,
+- 0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0xaa,0xd6,0xbc,0x00,0x01,0xff,0xd7,0x95,
+- 0xd6,0xb9,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x91,0xd6,0xbf,0x00,0x01,0xff,
+- 0xd7,0x9b,0xd6,0xbf,0x00,0x10,0x09,0x01,0xff,0xd7,0xa4,0xd6,0xbf,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,
+- 0x93,0x0c,0x92,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,0x0c,0x00,0x0c,0x00,0xcf,0x86,
+- 0x95,0x24,0xd4,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,
+- 0x01,0x00,0x01,0x00,0x01,0x00,0xd3,0x5a,0xd2,0x06,0xcf,0x06,0x01,0x00,0xd1,0x14,
+- 0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x95,0x08,0x14,0x04,0x00,0x00,0x01,0x00,
+- 0x01,0x00,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x54,0x04,0x01,0x00,0x93,0x0c,0x92,0x08,
+- 0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x0c,
+- 0x94,0x08,0x13,0x04,0x01,0x00,0x00,0x00,0x05,0x00,0x54,0x04,0x05,0x00,0x53,0x04,
+- 0x01,0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,0x06,0x00,0x07,0x00,0x00,0x00,
+- 0xd2,0xcc,0xd1,0xa4,0xd0,0x36,0xcf,0x86,0xd5,0x14,0x54,0x04,0x06,0x00,0x53,0x04,
+- 0x08,0x00,0x92,0x08,0x11,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x94,0x1c,0xd3,0x10,
+- 0x52,0x04,0x01,0xe6,0x51,0x04,0x0a,0xe6,0x10,0x04,0x0a,0xe6,0x10,0xdc,0x52,0x04,
+- 0x10,0xdc,0x11,0x04,0x10,0xdc,0x11,0xe6,0x01,0x00,0xcf,0x86,0xd5,0x38,0xd4,0x24,
+- 0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x06,0x00,0x10,0x04,
+- 0x06,0x00,0x07,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x01,0x00,0x01,0x00,
+- 0x01,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,
+- 0x01,0x00,0x01,0x00,0xd4,0x18,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,
+- 0x10,0x04,0x01,0x00,0x00,0x00,0x12,0x04,0x01,0x00,0x00,0x00,0x93,0x18,0xd2,0x0c,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x01,0x00,0x01,0x00,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x55,0x04,
+- 0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0xd1,0x08,
+- 0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x01,0x00,0xd1,0x50,0xd0,0x1e,
+- 0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x18,
+- 0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,
+- 0x10,0x04,0x01,0x00,0x06,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x06,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x1e,0xcf,0x86,
+- 0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,
+- 0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0xcf,0x86,0xd5,0x38,0xd4,0x18,
+- 0xd3,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x92,0x08,0x11,0x04,
+- 0x00,0x00,0x01,0x00,0x01,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,
+- 0x01,0x00,0xd2,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,
+- 0x00,0x00,0x00,0x00,0xd4,0x20,0xd3,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,
+- 0x10,0x04,0x01,0x00,0x00,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,
+- 0x01,0x00,0x00,0x00,0x53,0x04,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
+- 0x04,0x00,0x04,0x00,0x91,0x08,0x10,0x04,0x03,0x00,0x01,0x00,0x01,0x00,0x83,0xe2,
+- 0x30,0x3e,0xe1,0x1a,0x3b,0xe0,0x97,0x39,0xcf,0x86,0xe5,0x3b,0x26,0xc4,0xe3,0x16,
+- 0x14,0xe2,0xef,0x11,0xe1,0xd0,0x10,0xe0,0x60,0x07,0xcf,0x86,0xe5,0x53,0x03,0xe4,
+- 0x4c,0x02,0xe3,0x3d,0x01,0xd2,0x94,0xd1,0x70,0xd0,0x4a,0xcf,0x86,0xd5,0x18,0x94,
+- 0x14,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x07,
+- 0x00,0x07,0x00,0x07,0x00,0xd4,0x14,0x93,0x10,0x52,0x04,0x07,0x00,0x51,0x04,0x07,
+- 0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x07,0x00,0x53,0x04,0x07,0x00,0xd2,0x0c,0x51,
+- 0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x00,
+- 0x00,0x07,0x00,0xcf,0x86,0x95,0x20,0xd4,0x10,0x53,0x04,0x07,0x00,0x52,0x04,0x07,
+- 0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x11,
+- 0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x07,0x00,0xcf,0x86,0x55,
+- 0x04,0x07,0x00,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,
+- 0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,
+- 0x20,0x94,0x1c,0x93,0x18,0xd2,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x00,
+- 0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0x54,
+- 0x04,0x07,0x00,0x93,0x10,0x52,0x04,0x07,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,
+- 0x00,0x07,0x00,0x07,0x00,0xcf,0x06,0x08,0x00,0xd0,0x46,0xcf,0x86,0xd5,0x2c,0xd4,
+- 0x20,0x53,0x04,0x08,0x00,0xd2,0x0c,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x10,
+- 0x00,0xd1,0x08,0x10,0x04,0x10,0x00,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x53,
+- 0x04,0x0a,0x00,0x12,0x04,0x0a,0x00,0x00,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,
+- 0x86,0xd5,0x08,0x14,0x04,0x00,0x00,0x0a,0x00,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,
+- 0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x0a,0xdc,0x00,0x00,0xd2,
+- 0x5e,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x0a,
+- 0x00,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,
+- 0x00,0x00,0x00,0x0a,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x0a,0x00,0x93,0x10,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd4,
+- 0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0xdc,0x10,0x00,0x10,0x00,0x10,
+- 0x00,0x10,0x00,0x53,0x04,0x10,0x00,0x12,0x04,0x10,0x00,0x00,0x00,0xd1,0x70,0xd0,
+- 0x36,0xcf,0x86,0xd5,0x18,0x54,0x04,0x05,0x00,0x53,0x04,0x05,0x00,0x52,0x04,0x05,
+- 0x00,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x10,0x00,0x94,0x18,0xd3,0x08,0x12,
+- 0x04,0x05,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x13,
+- 0x00,0x13,0x00,0x05,0x00,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x05,0x00,0x92,
+- 0x0c,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x00,0x00,0x00,0x00,0x10,0x00,0x54,
+- 0x04,0x10,0x00,0xd3,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x10,0xe6,0x92,
+- 0x0c,0x51,0x04,0x10,0xe6,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,
+- 0x86,0x95,0x18,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x51,
+- 0x04,0x07,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0x08,0x00,0xcf,0x86,0x95,0x1c,0xd4,
+- 0x0c,0x93,0x08,0x12,0x04,0x08,0x00,0x00,0x00,0x08,0x00,0x93,0x0c,0x52,0x04,0x08,
+- 0x00,0x11,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd3,0xba,0xd2,0x80,0xd1,
+- 0x34,0xd0,0x1a,0xcf,0x86,0x55,0x04,0x05,0x00,0x94,0x10,0x93,0x0c,0x52,0x04,0x05,
+- 0x00,0x11,0x04,0x05,0x00,0x07,0x00,0x05,0x00,0x05,0x00,0xcf,0x86,0x95,0x14,0x94,
+- 0x10,0x53,0x04,0x05,0x00,0x52,0x04,0x05,0x00,0x11,0x04,0x05,0x00,0x07,0x00,0x07,
+- 0x00,0x07,0x00,0xd0,0x2a,0xcf,0x86,0xd5,0x14,0x54,0x04,0x07,0x00,0x53,0x04,0x07,
+- 0x00,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x94,0x10,0x53,0x04,0x07,
+- 0x00,0x92,0x08,0x11,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0x12,0x00,0xcf,0x86,0xd5,
+- 0x10,0x54,0x04,0x12,0x00,0x93,0x08,0x12,0x04,0x12,0x00,0x00,0x00,0x12,0x00,0x54,
+- 0x04,0x12,0x00,0x53,0x04,0x12,0x00,0x12,0x04,0x12,0x00,0x00,0x00,0xd1,0x34,0xd0,
+- 0x12,0xcf,0x86,0x55,0x04,0x10,0x00,0x94,0x08,0x13,0x04,0x10,0x00,0x00,0x00,0x10,
+- 0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x94,0x18,0xd3,0x08,0x12,0x04,0x10,0x00,0x00,
+- 0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x00,
+- 0x00,0xcf,0x06,0x00,0x00,0xd2,0x06,0xcf,0x06,0x10,0x00,0xd1,0x40,0xd0,0x1e,0xcf,
+- 0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x93,0x10,0x52,0x04,0x10,0x00,0x51,
+- 0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x14,0x54,
+- 0x04,0x10,0x00,0x93,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0x00,
+- 0x00,0x94,0x08,0x13,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xe4,
+- 0xce,0x02,0xe3,0x45,0x01,0xd2,0xd0,0xd1,0x70,0xd0,0x52,0xcf,0x86,0xd5,0x20,0x94,
+- 0x1c,0xd3,0x0c,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x07,0x00,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0x54,0x04,0x07,
+- 0x00,0xd3,0x10,0x52,0x04,0x07,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x00,0x00,0x07,
+- 0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xd1,0x08,0x10,
+- 0x04,0x07,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0xcf,0x86,0x95,0x18,0x54,
+- 0x04,0x0b,0x00,0x93,0x10,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x00,
+- 0x00,0x0b,0x00,0x0b,0x00,0x10,0x00,0xd0,0x32,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,
+- 0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,
+- 0x00,0x00,0x00,0x94,0x14,0x93,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,
+- 0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,
+- 0x04,0x11,0x00,0xd3,0x14,0xd2,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,
+- 0x00,0x11,0x04,0x11,0x00,0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,
+- 0x00,0x11,0x00,0x11,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x1c,0x54,0x04,0x09,
+- 0x00,0x53,0x04,0x09,0x00,0xd2,0x08,0x11,0x04,0x09,0x00,0x0b,0x00,0x51,0x04,0x00,
+- 0x00,0x10,0x04,0x00,0x00,0x09,0x00,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,0xd2,
+- 0x08,0x11,0x04,0x0a,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,
+- 0x00,0xcf,0x06,0x00,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,0x0d,0x00,0x54,0x04,0x0d,
+- 0x00,0x53,0x04,0x0d,0x00,0x52,0x04,0x00,0x00,0x11,0x04,0x11,0x00,0x0d,0x00,0xcf,
+- 0x86,0x95,0x14,0x54,0x04,0x11,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x11,
+- 0x00,0x11,0x00,0x11,0x00,0x11,0x00,0xd2,0xec,0xd1,0xa4,0xd0,0x76,0xcf,0x86,0xd5,
+- 0x48,0xd4,0x28,0xd3,0x14,0x52,0x04,0x08,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x08,
+- 0x00,0x10,0x04,0x08,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0xd1,0x08,0x10,0x04,0x08,
+- 0x00,0x08,0xdc,0x10,0x04,0x08,0x00,0x08,0xe6,0xd3,0x10,0x52,0x04,0x08,0x00,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x08,0x00,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x08,0x00,0x08,0x00,0x08,0x00,0x54,0x04,0x08,0x00,0xd3,0x0c,0x52,0x04,0x08,
+- 0x00,0x11,0x04,0x14,0x00,0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x08,0xe6,0x08,
+- 0x01,0x10,0x04,0x08,0xdc,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x08,
+- 0x09,0xcf,0x86,0x95,0x28,0xd4,0x14,0x53,0x04,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x08,0x00,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x08,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x0b,0x00,0xd0,0x0a,0xcf,
+- 0x86,0x15,0x04,0x10,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x24,0xd3,
+- 0x14,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,0x04,0x10,0x00,0x10,0xe6,0x10,0x04,0x10,
+- 0xdc,0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,
+- 0x00,0x93,0x10,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,
+- 0x00,0x00,0x00,0xd1,0x54,0xd0,0x26,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,
+- 0x00,0xd3,0x0c,0x52,0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x92,0x0c,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x14,0x54,
+- 0x04,0x0b,0x00,0x93,0x0c,0x52,0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x0b,
+- 0x00,0x54,0x04,0x0b,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,
+- 0x00,0x00,0x00,0x00,0x00,0x0b,0x00,0xd0,0x42,0xcf,0x86,0xd5,0x28,0x54,0x04,0x10,
+- 0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd2,0x0c,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,
+- 0x00,0x00,0x00,0x94,0x14,0x53,0x04,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,
+- 0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x96,0xd2,
+- 0x68,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x0b,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,
+- 0x04,0x0b,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x11,0x00,0x54,0x04,0x11,
+- 0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x11,0x00,0x54,0x04,0x11,0x00,0xd3,0x10,0x92,
+- 0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x92,0x08,0x11,
+- 0x04,0x00,0x00,0x11,0x00,0x11,0x00,0xd1,0x28,0xd0,0x22,0xcf,0x86,0x55,0x04,0x14,
+- 0x00,0xd4,0x0c,0x93,0x08,0x12,0x04,0x14,0x00,0x14,0xe6,0x00,0x00,0x53,0x04,0x14,
+- 0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,
+- 0x06,0x00,0x00,0xd2,0x2a,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,
+- 0x04,0x00,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,
+- 0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,0x58,0xd0,
+- 0x12,0xcf,0x86,0x55,0x04,0x14,0x00,0x94,0x08,0x13,0x04,0x14,0x00,0x00,0x00,0x14,
+- 0x00,0xcf,0x86,0x95,0x40,0xd4,0x24,0xd3,0x0c,0x52,0x04,0x14,0x00,0x11,0x04,0x14,
+- 0x00,0x14,0xdc,0xd2,0x0c,0x51,0x04,0x14,0xe6,0x10,0x04,0x14,0xe6,0x14,0xdc,0x91,
+- 0x08,0x10,0x04,0x14,0xe6,0x14,0xdc,0x14,0xdc,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x14,0xdc,0x14,0x00,0x14,0x00,0x14,0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,
+- 0x00,0x54,0x04,0x15,0x00,0x93,0x10,0x52,0x04,0x15,0x00,0x51,0x04,0x15,0x00,0x10,
+- 0x04,0x15,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xe5,0x0f,0x06,0xe4,0xf8,0x03,0xe3,
+- 0x02,0x02,0xd2,0xfb,0xd1,0x4c,0xd0,0x06,0xcf,0x06,0x0c,0x00,0xcf,0x86,0xd5,0x2c,
+- 0xd4,0x1c,0xd3,0x10,0x52,0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x09,
+- 0x0c,0x00,0x52,0x04,0x0c,0x00,0x11,0x04,0x0c,0x00,0x00,0x00,0x93,0x0c,0x92,0x08,
+- 0x11,0x04,0x00,0x00,0x0c,0x00,0x0c,0x00,0x0c,0x00,0x54,0x04,0x0c,0x00,0x53,0x04,
+- 0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x09,
+- 0xd0,0x69,0xcf,0x86,0xd5,0x32,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0xd2,0x15,
+- 0x51,0x04,0x0b,0x00,0x10,0x0d,0x0b,0xff,0xf0,0x91,0x82,0x99,0xf0,0x91,0x82,0xba,
+- 0x00,0x0b,0x00,0x91,0x11,0x10,0x0d,0x0b,0xff,0xf0,0x91,0x82,0x9b,0xf0,0x91,0x82,
+- 0xba,0x00,0x0b,0x00,0x0b,0x00,0xd4,0x1d,0x53,0x04,0x0b,0x00,0x92,0x15,0x51,0x04,
+- 0x0b,0x00,0x10,0x04,0x0b,0x00,0x0b,0xff,0xf0,0x91,0x82,0xa5,0xf0,0x91,0x82,0xba,
+- 0x00,0x0b,0x00,0x53,0x04,0x0b,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0b,
+- 0x09,0x10,0x04,0x0b,0x07,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x20,0x94,0x1c,0xd3,
+- 0x0c,0x92,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x91,
+- 0x08,0x10,0x04,0x00,0x00,0x14,0x00,0x00,0x00,0x0d,0x00,0xd4,0x14,0x53,0x04,0x0d,
+- 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,
+- 0x04,0x0d,0x00,0x92,0x08,0x11,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0xd1,0x96,0xd0,
+- 0x5c,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x0d,0xe6,0x10,
+- 0x04,0x0d,0xe6,0x0d,0x00,0x0d,0x00,0x0d,0x00,0x0d,0x00,0xd4,0x26,0x53,0x04,0x0d,
+- 0x00,0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x0d,0x0d,0xff,0xf0,0x91,0x84,
+- 0xb1,0xf0,0x91,0x84,0xa7,0x00,0x0d,0xff,0xf0,0x91,0x84,0xb2,0xf0,0x91,0x84,0xa7,
+- 0x00,0x93,0x18,0xd2,0x0c,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x00,0x0d,0x09,0x91,
+- 0x08,0x10,0x04,0x0d,0x09,0x00,0x00,0x0d,0x00,0x0d,0x00,0xcf,0x86,0xd5,0x18,0x94,
+- 0x14,0x93,0x10,0x52,0x04,0x0d,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,
+- 0x00,0x00,0x00,0x10,0x00,0x54,0x04,0x10,0x00,0x93,0x18,0xd2,0x0c,0x51,0x04,0x10,
+- 0x00,0x10,0x04,0x10,0x00,0x10,0x07,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,
+- 0x00,0x00,0x00,0xd0,0x06,0xcf,0x06,0x0d,0x00,0xcf,0x86,0xd5,0x40,0xd4,0x2c,0xd3,
+- 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x0d,0x09,0x0d,0x00,0x0d,0x00,0x0d,0x00,0xd2,
+- 0x10,0xd1,0x08,0x10,0x04,0x0d,0x00,0x11,0x00,0x10,0x04,0x11,0x07,0x11,0x00,0x91,
+- 0x08,0x10,0x04,0x11,0x00,0x10,0x00,0x00,0x00,0x53,0x04,0x0d,0x00,0x92,0x0c,0x51,
+- 0x04,0x0d,0x00,0x10,0x04,0x10,0x00,0x11,0x00,0x11,0x00,0xd4,0x14,0x93,0x10,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x93,
+- 0x10,0x52,0x04,0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0xd2,0xc8,0xd1,0x48,0xd0,0x42,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,0x00,0x93,
+- 0x10,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,
+- 0x00,0x54,0x04,0x10,0x00,0xd3,0x14,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,0x04,0x10,
+- 0x00,0x10,0x09,0x10,0x04,0x10,0x07,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,
+- 0x00,0x10,0x04,0x12,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd0,0x52,0xcf,0x86,0xd5,
+- 0x3c,0xd4,0x28,0xd3,0x10,0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,0x11,
+- 0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x11,0x00,0x00,0x00,0x11,0x00,0x51,
+- 0x04,0x11,0x00,0x10,0x04,0x00,0x00,0x11,0x00,0x53,0x04,0x11,0x00,0x52,0x04,0x11,
+- 0x00,0x51,0x04,0x11,0x00,0x10,0x04,0x00,0x00,0x11,0x00,0x94,0x10,0x53,0x04,0x11,
+- 0x00,0x92,0x08,0x11,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,
+- 0x04,0x10,0x00,0xd4,0x18,0x53,0x04,0x10,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x10,
+- 0x00,0x10,0x07,0x10,0x04,0x10,0x09,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,
+- 0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xe1,0x27,0x01,0xd0,0x8a,0xcf,0x86,
+- 0xd5,0x44,0xd4,0x2c,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x11,0x00,0x10,0x00,
+- 0x10,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x52,0x04,0x10,0x00,
+- 0xd1,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x93,0x14,
+- 0x92,0x10,0xd1,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,
+- 0x10,0x00,0x10,0x00,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x10,0x00,0x00,0x00,0x10,0x00,0x10,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,
+- 0x10,0x00,0x00,0x00,0x10,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,
+- 0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x00,0x00,0x14,0x07,0x91,0x08,0x10,0x04,
+- 0x10,0x07,0x10,0x00,0x10,0x00,0xcf,0x86,0xd5,0x6a,0xd4,0x42,0xd3,0x14,0x52,0x04,
+- 0x10,0x00,0xd1,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,
+- 0xd2,0x19,0xd1,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0xff,
+- 0xf0,0x91,0x8d,0x87,0xf0,0x91,0x8c,0xbe,0x00,0x91,0x11,0x10,0x0d,0x10,0xff,0xf0,
+- 0x91,0x8d,0x87,0xf0,0x91,0x8d,0x97,0x00,0x10,0x09,0x00,0x00,0xd3,0x18,0xd2,0x0c,
+- 0x91,0x08,0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x10,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,
+- 0x10,0x00,0xd4,0x1c,0xd3,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x00,0x00,0x10,0xe6,
+- 0x52,0x04,0x10,0xe6,0x91,0x08,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0x93,0x10,
+- 0x52,0x04,0x10,0xe6,0x91,0x08,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0xcf,0x06,0x00,0x00,0xe3,0x30,0x01,0xd2,0xb7,0xd1,0x48,0xd0,0x06,0xcf,0x06,0x12,
+- 0x00,0xcf,0x86,0x95,0x3c,0xd4,0x1c,0x93,0x18,0xd2,0x0c,0x51,0x04,0x12,0x00,0x10,
+- 0x04,0x12,0x09,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x07,0x12,0x00,0x12,
+- 0x00,0x53,0x04,0x12,0x00,0xd2,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x00,0x00,0x12,
+- 0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x12,0x00,0x10,0x04,0x14,0xe6,0x15,0x00,0x00,
+- 0x00,0xd0,0x45,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,
+- 0x00,0xd2,0x15,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x10,0xff,0xf0,0x91,0x92,
+- 0xb9,0xf0,0x91,0x92,0xba,0x00,0xd1,0x11,0x10,0x0d,0x10,0xff,0xf0,0x91,0x92,0xb9,
+- 0xf0,0x91,0x92,0xb0,0x00,0x10,0x00,0x10,0x0d,0x10,0xff,0xf0,0x91,0x92,0xb9,0xf0,
+- 0x91,0x92,0xbd,0x00,0x10,0x00,0xcf,0x86,0x95,0x24,0xd4,0x14,0x93,0x10,0x92,0x0c,
+- 0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x09,0x10,0x07,0x10,0x00,0x00,0x00,0x53,0x04,
+- 0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x06,
+- 0xcf,0x06,0x00,0x00,0xd0,0x40,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,
+- 0xd3,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0xd2,0x1e,0x51,0x04,
+- 0x10,0x00,0x10,0x0d,0x10,0xff,0xf0,0x91,0x96,0xb8,0xf0,0x91,0x96,0xaf,0x00,0x10,
+- 0xff,0xf0,0x91,0x96,0xb9,0xf0,0x91,0x96,0xaf,0x00,0x51,0x04,0x10,0x00,0x10,0x04,
+- 0x10,0x00,0x10,0x09,0xcf,0x86,0x95,0x2c,0xd4,0x1c,0xd3,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x10,0x07,0x10,0x00,0x10,0x00,0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,
+- 0x11,0x00,0x11,0x00,0x53,0x04,0x11,0x00,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,
+- 0x00,0x00,0x00,0x00,0xd2,0xa0,0xd1,0x5c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,
+- 0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,
+- 0x10,0x04,0x10,0x00,0x10,0x09,0xcf,0x86,0xd5,0x24,0xd4,0x14,0x93,0x10,0x52,0x04,
+- 0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,
+- 0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x94,0x14,0x53,0x04,
+- 0x12,0x00,0x52,0x04,0x12,0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x0d,0x00,0x54,0x04,0x0d,0x00,0xd3,0x10,
+- 0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x09,0x0d,0x07,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x95,0x14,
+- 0x94,0x10,0x53,0x04,0x0d,0x00,0x92,0x08,0x11,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x20,0x54,0x04,0x11,0x00,
+- 0x53,0x04,0x11,0x00,0xd2,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x14,0x00,0x00,0x00,
+- 0x91,0x08,0x10,0x04,0x00,0x00,0x11,0x00,0x11,0x00,0x94,0x14,0x53,0x04,0x11,0x00,
+- 0x92,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x11,0x09,0x00,0x00,0x11,0x00,
+- 0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xe4,0x59,0x01,0xd3,0xb2,0xd2,0x5c,0xd1,
+- 0x28,0xd0,0x22,0xcf,0x86,0x55,0x04,0x14,0x00,0x54,0x04,0x14,0x00,0x53,0x04,0x14,
+- 0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x14,0x00,0x14,0x09,0x10,0x04,0x14,0x07,0x14,
+- 0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd0,0x0a,0xcf,0x86,0x15,0x04,0x00,0x00,0x10,
+- 0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0xd3,0x10,0x92,0x0c,0x51,
+- 0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,
+- 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,
+- 0x1a,0xcf,0x86,0x55,0x04,0x00,0x00,0x94,0x10,0x53,0x04,0x15,0x00,0x92,0x08,0x11,
+- 0x04,0x00,0x00,0x15,0x00,0x15,0x00,0x15,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x15,
+- 0x00,0x53,0x04,0x15,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x15,0x00,0x15,0x00,0x94,
+- 0x1c,0x93,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x15,0x09,0x15,0x00,0x15,0x00,0x91,
+- 0x08,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd2,0xa0,0xd1,
+- 0x3c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x13,0x00,0x54,0x04,0x13,0x00,0x93,0x10,0x52,
+- 0x04,0x13,0x00,0x91,0x08,0x10,0x04,0x13,0x09,0x13,0x00,0x13,0x00,0x13,0x00,0xcf,
+- 0x86,0x95,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,
+- 0x04,0x13,0x00,0x13,0x09,0x00,0x00,0x13,0x00,0x13,0x00,0xd0,0x46,0xcf,0x86,0xd5,
+- 0x2c,0xd4,0x10,0x93,0x0c,0x52,0x04,0x13,0x00,0x11,0x04,0x15,0x00,0x13,0x00,0x13,
+- 0x00,0x53,0x04,0x13,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x13,0x00,0x13,0x09,0x13,
+- 0x00,0x91,0x08,0x10,0x04,0x13,0x00,0x14,0x00,0x13,0x00,0x94,0x14,0x93,0x10,0x92,
+- 0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x92,
+- 0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,
+- 0x00,0xe3,0xa9,0x01,0xd2,0xb0,0xd1,0x6c,0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x94,0x14,
+- 0x53,0x04,0x12,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x12,0x00,
+- 0x12,0x00,0x12,0x00,0x54,0x04,0x12,0x00,0xd3,0x10,0x52,0x04,0x12,0x00,0x51,0x04,
+- 0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x52,0x04,0x12,0x00,0x51,0x04,0x12,0x00,
+- 0x10,0x04,0x12,0x00,0x12,0x09,0xcf,0x86,0xd5,0x14,0x94,0x10,0x93,0x0c,0x52,0x04,
+- 0x12,0x00,0x11,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x12,0x00,0x94,0x14,0x53,0x04,
+- 0x12,0x00,0x52,0x04,0x12,0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,
+- 0x12,0x00,0xd0,0x3e,0xcf,0x86,0xd5,0x14,0x54,0x04,0x12,0x00,0x93,0x0c,0x92,0x08,
+- 0x11,0x04,0x00,0x00,0x12,0x00,0x12,0x00,0x12,0x00,0xd4,0x14,0x53,0x04,0x12,0x00,
+- 0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x12,0x00,0x12,0x00,0x12,0x00,0x93,0x10,
+- 0x52,0x04,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,
+- 0xcf,0x06,0x00,0x00,0xd1,0xa0,0xd0,0x52,0xcf,0x86,0xd5,0x24,0x94,0x20,0xd3,0x10,
+- 0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x92,0x0c,
+- 0x51,0x04,0x13,0x00,0x10,0x04,0x00,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x54,0x04,
+- 0x13,0x00,0xd3,0x10,0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,
+- 0x00,0x00,0xd2,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x51,0x04,
+- 0x13,0x00,0x10,0x04,0x00,0x00,0x13,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x18,0x93,0x14,
+- 0xd2,0x0c,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x07,0x13,0x00,0x11,0x04,0x13,0x09,
+- 0x13,0x00,0x00,0x00,0x53,0x04,0x13,0x00,0x92,0x08,0x11,0x04,0x13,0x00,0x00,0x00,
+- 0x00,0x00,0x94,0x20,0xd3,0x10,0x52,0x04,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,
+- 0x00,0x00,0x14,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x14,0x00,
+- 0x14,0x00,0x14,0x00,0xd0,0x52,0xcf,0x86,0xd5,0x3c,0xd4,0x14,0x53,0x04,0x14,0x00,
+- 0x52,0x04,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0xd3,0x18,
+- 0xd2,0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x00,0x00,0x14,0x00,0x51,0x04,0x14,0x00,
+- 0x10,0x04,0x14,0x00,0x14,0x09,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x94,0x10,0x53,0x04,0x14,0x00,0x92,0x08,0x11,0x04,0x14,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd2,0x2a,0xd1,0x06,0xcf,0x06,
+- 0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,
+- 0x14,0x00,0x53,0x04,0x14,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,
+- 0xcf,0x86,0x55,0x04,0x15,0x00,0x54,0x04,0x15,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,
+- 0x15,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
+- 0x00,0x00,0x15,0x00,0xd0,0xca,0xcf,0x86,0xd5,0xc2,0xd4,0x54,0xd3,0x06,0xcf,0x06,
+- 0x09,0x00,0xd2,0x06,0xcf,0x06,0x09,0x00,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x09,0x00,
+- 0xcf,0x86,0x55,0x04,0x09,0x00,0x94,0x14,0x53,0x04,0x09,0x00,0x52,0x04,0x09,0x00,
+- 0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x10,0x00,0x10,0x00,0xd0,0x1e,0xcf,0x86,
+- 0x95,0x18,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
+- 0x10,0x00,0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x68,
+- 0xd2,0x46,0xd1,0x40,0xd0,0x06,0xcf,0x06,0x09,0x00,0xcf,0x86,0x55,0x04,0x09,0x00,
+- 0xd4,0x20,0xd3,0x10,0x92,0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x10,0x00,
+- 0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,
+- 0x93,0x10,0x52,0x04,0x09,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0xcf,0x06,0x11,0x00,0xd1,0x1c,0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,
+- 0x95,0x10,0x94,0x0c,0x93,0x08,0x12,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x86,
+- 0xd5,0x4c,0xd4,0x06,0xcf,0x06,0x0b,0x00,0xd3,0x40,0xd2,0x3a,0xd1,0x34,0xd0,0x2e,
+- 0xcf,0x86,0x55,0x04,0x0b,0x00,0xd4,0x14,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,
+- 0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x53,0x04,0x15,0x00,0x92,0x0c,
+- 0x91,0x08,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,
+- 0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,
+- 0xd1,0x4c,0xd0,0x44,0xcf,0x86,0xd5,0x3c,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x06,
+- 0xcf,0x06,0x11,0x00,0xd2,0x2a,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,
+- 0x95,0x18,0x94,0x14,0x93,0x10,0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,
+- 0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,
+- 0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xe0,0xd2,0x01,0xcf,
+- 0x86,0xd5,0x06,0xcf,0x06,0x00,0x00,0xe4,0x0b,0x01,0xd3,0x06,0xcf,0x06,0x0c,0x00,
+- 0xd2,0x84,0xd1,0x50,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x0c,0x00,0x54,0x04,0x0c,0x00,
+- 0x53,0x04,0x0c,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,
+- 0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x94,0x14,0x53,0x04,
+- 0x10,0x00,0xd2,0x08,0x11,0x04,0x10,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x10,0x00,
+- 0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0x08,0x14,0x04,0x00,0x00,
+- 0x10,0x00,0xd4,0x10,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,
+- 0x00,0x00,0x93,0x10,0x52,0x04,0x10,0x01,0x91,0x08,0x10,0x04,0x10,0x01,0x10,0x00,
+- 0x00,0x00,0x00,0x00,0xd1,0x6c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,
+- 0x10,0x00,0x93,0x10,0x52,0x04,0x10,0xe6,0x51,0x04,0x10,0xe6,0x10,0x04,0x10,0xe6,
+- 0x10,0x00,0x10,0x00,0xcf,0x86,0xd5,0x24,0xd4,0x10,0x93,0x0c,0x52,0x04,0x10,0x00,
+- 0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x51,0x04,
+- 0x10,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,
+- 0x51,0x04,0x10,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x53,0x04,
+- 0x10,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,
+- 0xd0,0x0e,0xcf,0x86,0x95,0x08,0x14,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,
+- 0x00,0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,0xd2,0x30,0xd1,0x0c,0xd0,0x06,0xcf,0x06,
+- 0x00,0x00,0xcf,0x06,0x14,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x14,0x00,
+- 0x53,0x04,0x14,0x00,0x92,0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,0x4c,0xd0,0x06,0xcf,0x06,0x0d,0x00,
+- 0xcf,0x86,0xd5,0x2c,0x94,0x28,0xd3,0x10,0x52,0x04,0x0d,0x00,0x91,0x08,0x10,0x04,
+- 0x0d,0x00,0x15,0x00,0x15,0x00,0xd2,0x0c,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,
+- 0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x15,0x00,0x0d,0x00,0x54,0x04,
+- 0x0d,0x00,0x53,0x04,0x0d,0x00,0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,
+- 0x0d,0x00,0x15,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,0x15,0x00,
+- 0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0d,0x00,0x0d,0x00,
+- 0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,
+- 0x10,0x04,0x12,0x00,0x13,0x00,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,
+- 0xcf,0x06,0x12,0x00,0xe2,0xc5,0x01,0xd1,0x8e,0xd0,0x86,0xcf,0x86,0xd5,0x48,0xd4,
+- 0x06,0xcf,0x06,0x12,0x00,0xd3,0x06,0xcf,0x06,0x12,0x00,0xd2,0x06,0xcf,0x06,0x12,
+- 0x00,0xd1,0x06,0xcf,0x06,0x12,0x00,0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x55,
+- 0x04,0x12,0x00,0xd4,0x14,0x53,0x04,0x12,0x00,0x52,0x04,0x12,0x00,0x91,0x08,0x10,
+- 0x04,0x12,0x00,0x14,0x00,0x14,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x14,0x00,0x15,
+- 0x00,0x15,0x00,0x00,0x00,0xd4,0x36,0xd3,0x06,0xcf,0x06,0x12,0x00,0xd2,0x2a,0xd1,
+- 0x06,0xcf,0x06,0x12,0x00,0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x55,0x04,0x12,
+- 0x00,0x54,0x04,0x12,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x12,
++ 0x10,0x04,0x0c,0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x0c,0x00,
++ 0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x93,0x18,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x00,0x00,0x0c,0x00,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,
++ 0x0c,0x00,0x00,0x00,0x00,0x00,0x94,0x20,0xd3,0x10,0x52,0x04,0x0c,0x00,0x51,0x04,
++ 0x0c,0x00,0x10,0x04,0x0c,0x00,0x00,0x00,0x52,0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,
++ 0x10,0x04,0x0c,0x00,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0x94,0x10,
++ 0x93,0x0c,0x52,0x04,0x11,0x00,0x11,0x04,0x10,0x00,0x15,0x00,0x00,0x00,0x11,0x00,
++ 0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,0x55,0x04,0x0b,0x00,0xd4,0x14,0x53,0x04,
++ 0x0b,0x00,0x52,0x04,0x0b,0x00,0x91,0x08,0x10,0x04,0x0b,0x00,0x0b,0x09,0x00,0x00,
++ 0x53,0x04,0x0b,0x00,0x92,0x08,0x11,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,
++ 0x02,0xff,0xff,0xcf,0x86,0xcf,0x06,0x02,0xff,0xff,0xd1,0x76,0xd0,0x09,0xcf,0x86,
++ 0xcf,0x06,0x02,0xff,0xff,0xcf,0x86,0x85,0xd4,0x07,0xcf,0x06,0x02,0xff,0xff,0xd3,
++ 0x07,0xcf,0x06,0x02,0xff,0xff,0xd2,0x07,0xcf,0x06,0x02,0xff,0xff,0xd1,0x07,0xcf,
++ 0x06,0x02,0xff,0xff,0xd0,0x18,0xcf,0x86,0x55,0x05,0x02,0xff,0xff,0x94,0x0d,0x93,
++ 0x09,0x12,0x05,0x02,0xff,0xff,0x00,0x00,0x00,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x24,
++ 0x94,0x20,0xd3,0x10,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,
++ 0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,
++ 0x0b,0x00,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0x12,0x04,0x0b,0x00,0x00,0x00,
++ 0xd0,0x08,0xcf,0x86,0xcf,0x06,0x01,0x00,0xcf,0x86,0xd5,0x06,0xcf,0x06,0x01,0x00,
++ 0xe4,0x9c,0x10,0xe3,0x16,0x08,0xd2,0x06,0xcf,0x06,0x01,0x00,0xe1,0x08,0x04,0xe0,
++ 0x04,0x02,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe8,0xb1,0x88,0x00,0x01,0xff,0xe6,0x9b,0xb4,0x00,0x10,0x08,0x01,
++ 0xff,0xe8,0xbb,0x8a,0x00,0x01,0xff,0xe8,0xb3,0x88,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe6,0xbb,0x91,0x00,0x01,0xff,0xe4,0xb8,0xb2,0x00,0x10,0x08,0x01,0xff,0xe5,
++ 0x8f,0xa5,0x00,0x01,0xff,0xe9,0xbe,0x9c,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe9,0xbe,0x9c,0x00,0x01,0xff,0xe5,0xa5,0x91,0x00,0x10,0x08,0x01,0xff,0xe9,
++ 0x87,0x91,0x00,0x01,0xff,0xe5,0x96,0x87,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,
++ 0xa5,0x88,0x00,0x01,0xff,0xe6,0x87,0xb6,0x00,0x10,0x08,0x01,0xff,0xe7,0x99,0xa9,
++ 0x00,0x01,0xff,0xe7,0xbe,0x85,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe8,0x98,0xbf,0x00,0x01,0xff,0xe8,0x9e,0xba,0x00,0x10,0x08,0x01,0xff,0xe8,
++ 0xa3,0xb8,0x00,0x01,0xff,0xe9,0x82,0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,
++ 0xa8,0x82,0x00,0x01,0xff,0xe6,0xb4,0x9b,0x00,0x10,0x08,0x01,0xff,0xe7,0x83,0x99,
++ 0x00,0x01,0xff,0xe7,0x8f,0x9e,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,
++ 0x90,0xbd,0x00,0x01,0xff,0xe9,0x85,0xaa,0x00,0x10,0x08,0x01,0xff,0xe9,0xa7,0xb1,
++ 0x00,0x01,0xff,0xe4,0xba,0x82,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x8d,0xb5,
++ 0x00,0x01,0xff,0xe6,0xac,0x84,0x00,0x10,0x08,0x01,0xff,0xe7,0x88,0x9b,0x00,0x01,
++ 0xff,0xe8,0x98,0xad,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe9,0xb8,0x9e,0x00,0x01,0xff,0xe5,0xb5,0x90,0x00,0x10,0x08,0x01,0xff,0xe6,
++ 0xbf,0xab,0x00,0x01,0xff,0xe8,0x97,0x8d,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,
++ 0xa5,0xa4,0x00,0x01,0xff,0xe6,0x8b,0x89,0x00,0x10,0x08,0x01,0xff,0xe8,0x87,0x98,
++ 0x00,0x01,0xff,0xe8,0xa0,0x9f,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,
++ 0xbb,0x8a,0x00,0x01,0xff,0xe6,0x9c,0x97,0x00,0x10,0x08,0x01,0xff,0xe6,0xb5,0xaa,
++ 0x00,0x01,0xff,0xe7,0x8b,0xbc,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x83,0x8e,
++ 0x00,0x01,0xff,0xe4,0xbe,0x86,0x00,0x10,0x08,0x01,0xff,0xe5,0x86,0xb7,0x00,0x01,
++ 0xff,0xe5,0x8b,0x9e,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,
++ 0x93,0x84,0x00,0x01,0xff,0xe6,0xab,0x93,0x00,0x10,0x08,0x01,0xff,0xe7,0x88,0x90,
++ 0x00,0x01,0xff,0xe7,0x9b,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x80,0x81,
++ 0x00,0x01,0xff,0xe8,0x98,0x86,0x00,0x10,0x08,0x01,0xff,0xe8,0x99,0x9c,0x00,0x01,
++ 0xff,0xe8,0xb7,0xaf,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0x9c,0xb2,
++ 0x00,0x01,0xff,0xe9,0xad,0xaf,0x00,0x10,0x08,0x01,0xff,0xe9,0xb7,0xba,0x00,0x01,
++ 0xff,0xe7,0xa2,0x8c,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xa5,0xbf,0x00,0x01,
++ 0xff,0xe7,0xb6,0xa0,0x00,0x10,0x08,0x01,0xff,0xe8,0x8f,0x89,0x00,0x01,0xff,0xe9,
++ 0x8c,0x84,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe9,0xb9,0xbf,0x00,0x01,0xff,0xe8,0xab,0x96,0x00,0x10,0x08,
++ 0x01,0xff,0xe5,0xa3,0x9f,0x00,0x01,0xff,0xe5,0xbc,0x84,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe7,0xb1,0xa0,0x00,0x01,0xff,0xe8,0x81,0xbe,0x00,0x10,0x08,0x01,0xff,
++ 0xe7,0x89,0xa2,0x00,0x01,0xff,0xe7,0xa3,0x8a,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe8,0xb3,0x82,0x00,0x01,0xff,0xe9,0x9b,0xb7,0x00,0x10,0x08,0x01,0xff,
++ 0xe5,0xa3,0x98,0x00,0x01,0xff,0xe5,0xb1,0xa2,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe6,0xa8,0x93,0x00,0x01,0xff,0xe6,0xb7,0x9a,0x00,0x10,0x08,0x01,0xff,0xe6,0xbc,
++ 0x8f,0x00,0x01,0xff,0xe7,0xb4,0xaf,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe7,0xb8,0xb7,0x00,0x01,0xff,0xe9,0x99,0x8b,0x00,0x10,0x08,0x01,0xff,
++ 0xe5,0x8b,0x92,0x00,0x01,0xff,0xe8,0x82,0x8b,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe5,0x87,0x9c,0x00,0x01,0xff,0xe5,0x87,0x8c,0x00,0x10,0x08,0x01,0xff,0xe7,0xa8,
++ 0x9c,0x00,0x01,0xff,0xe7,0xb6,0xbe,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe8,0x8f,0xb1,0x00,0x01,0xff,0xe9,0x99,0xb5,0x00,0x10,0x08,0x01,0xff,0xe8,0xae,
++ 0x80,0x00,0x01,0xff,0xe6,0x8b,0x8f,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xa8,
++ 0x82,0x00,0x01,0xff,0xe8,0xab,0xbe,0x00,0x10,0x08,0x01,0xff,0xe4,0xb8,0xb9,0x00,
++ 0x01,0xff,0xe5,0xaf,0xa7,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe6,0x80,0x92,0x00,0x01,0xff,0xe7,0x8e,0x87,0x00,0x10,0x08,0x01,0xff,
++ 0xe7,0x95,0xb0,0x00,0x01,0xff,0xe5,0x8c,0x97,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe7,0xa3,0xbb,0x00,0x01,0xff,0xe4,0xbe,0xbf,0x00,0x10,0x08,0x01,0xff,0xe5,0xbe,
++ 0xa9,0x00,0x01,0xff,0xe4,0xb8,0x8d,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe6,0xb3,0x8c,0x00,0x01,0xff,0xe6,0x95,0xb8,0x00,0x10,0x08,0x01,0xff,0xe7,0xb4,
++ 0xa2,0x00,0x01,0xff,0xe5,0x8f,0x83,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0xa1,
++ 0x9e,0x00,0x01,0xff,0xe7,0x9c,0x81,0x00,0x10,0x08,0x01,0xff,0xe8,0x91,0x89,0x00,
++ 0x01,0xff,0xe8,0xaa,0xaa,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe6,0xae,0xba,0x00,0x01,0xff,0xe8,0xbe,0xb0,0x00,0x10,0x08,0x01,0xff,0xe6,0xb2,
++ 0x88,0x00,0x01,0xff,0xe6,0x8b,0xbe,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x8b,
++ 0xa5,0x00,0x01,0xff,0xe6,0x8e,0xa0,0x00,0x10,0x08,0x01,0xff,0xe7,0x95,0xa5,0x00,
++ 0x01,0xff,0xe4,0xba,0xae,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x85,
++ 0xa9,0x00,0x01,0xff,0xe5,0x87,0x89,0x00,0x10,0x08,0x01,0xff,0xe6,0xa2,0x81,0x00,
++ 0x01,0xff,0xe7,0xb3,0xa7,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0x89,0xaf,0x00,
++ 0x01,0xff,0xe8,0xab,0x92,0x00,0x10,0x08,0x01,0xff,0xe9,0x87,0x8f,0x00,0x01,0xff,
++ 0xe5,0x8b,0xb5,0x00,0xe0,0x04,0x02,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,
++ 0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe5,0x91,0x82,0x00,0x01,0xff,0xe5,0xa5,
++ 0xb3,0x00,0x10,0x08,0x01,0xff,0xe5,0xbb,0xac,0x00,0x01,0xff,0xe6,0x97,0x85,0x00,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0xbf,0xbe,0x00,0x01,0xff,0xe7,0xa4,0xaa,0x00,
++ 0x10,0x08,0x01,0xff,0xe9,0x96,0xad,0x00,0x01,0xff,0xe9,0xa9,0xaa,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xba,0x97,0x00,0x01,0xff,0xe9,0xbb,0x8e,0x00,
++ 0x10,0x08,0x01,0xff,0xe5,0x8a,0x9b,0x00,0x01,0xff,0xe6,0x9b,0x86,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe6,0xad,0xb7,0x00,0x01,0xff,0xe8,0xbd,0xa2,0x00,0x10,0x08,
++ 0x01,0xff,0xe5,0xb9,0xb4,0x00,0x01,0xff,0xe6,0x86,0x90,0x00,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x88,0x80,0x00,0x01,0xff,0xe6,0x92,0x9a,0x00,
++ 0x10,0x08,0x01,0xff,0xe6,0xbc,0xa3,0x00,0x01,0xff,0xe7,0x85,0x89,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe7,0x92,0x89,0x00,0x01,0xff,0xe7,0xa7,0x8a,0x00,0x10,0x08,
++ 0x01,0xff,0xe7,0xb7,0xb4,0x00,0x01,0xff,0xe8,0x81,0xaf,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe8,0xbc,0xa6,0x00,0x01,0xff,0xe8,0x93,0xae,0x00,0x10,0x08,
++ 0x01,0xff,0xe9,0x80,0xa3,0x00,0x01,0xff,0xe9,0x8d,0x8a,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe5,0x88,0x97,0x00,0x01,0xff,0xe5,0x8a,0xa3,0x00,0x10,0x08,0x01,0xff,
++ 0xe5,0x92,0xbd,0x00,0x01,0xff,0xe7,0x83,0x88,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0xe8,0xa3,0x82,0x00,0x01,0xff,0xe8,0xaa,0xaa,0x00,
++ 0x10,0x08,0x01,0xff,0xe5,0xbb,0x89,0x00,0x01,0xff,0xe5,0xbf,0xb5,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe6,0x8d,0xbb,0x00,0x01,0xff,0xe6,0xae,0xae,0x00,0x10,0x08,
++ 0x01,0xff,0xe7,0xb0,0xbe,0x00,0x01,0xff,0xe7,0x8d,0xb5,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe4,0xbb,0xa4,0x00,0x01,0xff,0xe5,0x9b,0xb9,0x00,0x10,0x08,
++ 0x01,0xff,0xe5,0xaf,0xa7,0x00,0x01,0xff,0xe5,0xb6,0xba,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe6,0x80,0x9c,0x00,0x01,0xff,0xe7,0x8e,0xb2,0x00,0x10,0x08,0x01,0xff,
++ 0xe7,0x91,0xa9,0x00,0x01,0xff,0xe7,0xbe,0x9a,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe8,0x81,0x86,0x00,0x01,0xff,0xe9,0x88,0xb4,0x00,0x10,0x08,
++ 0x01,0xff,0xe9,0x9b,0xb6,0x00,0x01,0xff,0xe9,0x9d,0x88,0x00,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe9,0xa0,0x98,0x00,0x01,0xff,0xe4,0xbe,0x8b,0x00,0x10,0x08,0x01,0xff,
++ 0xe7,0xa6,0xae,0x00,0x01,0xff,0xe9,0x86,0xb4,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe9,0x9a,0xb8,0x00,0x01,0xff,0xe6,0x83,0xa1,0x00,0x10,0x08,0x01,0xff,
++ 0xe4,0xba,0x86,0x00,0x01,0xff,0xe5,0x83,0x9a,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe5,0xaf,0xae,0x00,0x01,0xff,0xe5,0xb0,0xbf,0x00,0x10,0x08,0x01,0xff,0xe6,0x96,
++ 0x99,0x00,0x01,0xff,0xe6,0xa8,0x82,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0x87,0x8e,0x00,0x01,0xff,0xe7,
++ 0x99,0x82,0x00,0x10,0x08,0x01,0xff,0xe8,0x93,0xbc,0x00,0x01,0xff,0xe9,0x81,0xbc,
++ 0x00,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xbe,0x8d,0x00,0x01,0xff,0xe6,0x9a,0x88,
++ 0x00,0x10,0x08,0x01,0xff,0xe9,0x98,0xae,0x00,0x01,0xff,0xe5,0x8a,0x89,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x9d,0xbb,0x00,0x01,0xff,0xe6,0x9f,0xb3,
++ 0x00,0x10,0x08,0x01,0xff,0xe6,0xb5,0x81,0x00,0x01,0xff,0xe6,0xba,0x9c,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe7,0x90,0x89,0x00,0x01,0xff,0xe7,0x95,0x99,0x00,0x10,
++ 0x08,0x01,0xff,0xe7,0xa1,0xab,0x00,0x01,0xff,0xe7,0xb4,0x90,0x00,0xd3,0x40,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe9,0xa1,0x9e,0x00,0x01,0xff,0xe5,0x85,0xad,
++ 0x00,0x10,0x08,0x01,0xff,0xe6,0x88,0xae,0x00,0x01,0xff,0xe9,0x99,0xb8,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe5,0x80,0xab,0x00,0x01,0xff,0xe5,0xb4,0x99,0x00,0x10,
++ 0x08,0x01,0xff,0xe6,0xb7,0xaa,0x00,0x01,0xff,0xe8,0xbc,0xaa,0x00,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe5,0xbe,0x8b,0x00,0x01,0xff,0xe6,0x85,0x84,0x00,0x10,
++ 0x08,0x01,0xff,0xe6,0xa0,0x97,0x00,0x01,0xff,0xe7,0x8e,0x87,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe9,0x9a,0x86,0x00,0x01,0xff,0xe5,0x88,0xa9,0x00,0x10,0x08,0x01,
++ 0xff,0xe5,0x90,0x8f,0x00,0x01,0xff,0xe5,0xb1,0xa5,0x00,0xd4,0x80,0xd3,0x40,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x01,0xff,0xe6,0x98,0x93,0x00,0x01,0xff,0xe6,0x9d,0x8e,
++ 0x00,0x10,0x08,0x01,0xff,0xe6,0xa2,0xa8,0x00,0x01,0xff,0xe6,0xb3,0xa5,0x00,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe7,0x90,0x86,0x00,0x01,0xff,0xe7,0x97,0xa2,0x00,0x10,
++ 0x08,0x01,0xff,0xe7,0xbd,0xb9,0x00,0x01,0xff,0xe8,0xa3,0x8f,0x00,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe8,0xa3,0xa1,0x00,0x01,0xff,0xe9,0x87,0x8c,0x00,0x10,
++ 0x08,0x01,0xff,0xe9,0x9b,0xa2,0x00,0x01,0xff,0xe5,0x8c,0xbf,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe6,0xba,0xba,0x00,0x01,0xff,0xe5,0x90,0x9d,0x00,0x10,0x08,0x01,
++ 0xff,0xe7,0x87,0x90,0x00,0x01,0xff,0xe7,0x92,0x98,0x00,0xd3,0x40,0xd2,0x20,0xd1,
++ 0x10,0x10,0x08,0x01,0xff,0xe8,0x97,0xba,0x00,0x01,0xff,0xe9,0x9a,0xa3,0x00,0x10,
++ 0x08,0x01,0xff,0xe9,0xb1,0x97,0x00,0x01,0xff,0xe9,0xba,0x9f,0x00,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe6,0x9e,0x97,0x00,0x01,0xff,0xe6,0xb7,0x8b,0x00,0x10,0x08,0x01,
++ 0xff,0xe8,0x87,0xa8,0x00,0x01,0xff,0xe7,0xab,0x8b,0x00,0xd2,0x20,0xd1,0x10,0x10,
++ 0x08,0x01,0xff,0xe7,0xac,0xa0,0x00,0x01,0xff,0xe7,0xb2,0x92,0x00,0x10,0x08,0x01,
++ 0xff,0xe7,0x8b,0x80,0x00,0x01,0xff,0xe7,0x82,0x99,0x00,0xd1,0x10,0x10,0x08,0x01,
++ 0xff,0xe8,0xad,0x98,0x00,0x01,0xff,0xe4,0xbb,0x80,0x00,0x10,0x08,0x01,0xff,0xe8,
++ 0x8c,0xb6,0x00,0x01,0xff,0xe5,0x88,0xba,0x00,0xe2,0xad,0x06,0xe1,0xc4,0x03,0xe0,
++ 0xcb,0x01,0xcf,0x86,0xd5,0xe4,0xd4,0x74,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x01,0xff,0xe5,0x88,0x87,0x00,0x01,0xff,0xe5,0xba,0xa6,0x00,0x10,0x08,0x01,0xff,
++ 0xe6,0x8b,0x93,0x00,0x01,0xff,0xe7,0xb3,0x96,0x00,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe5,0xae,0x85,0x00,0x01,0xff,0xe6,0xb4,0x9e,0x00,0x10,0x08,0x01,0xff,0xe6,0x9a,
++ 0xb4,0x00,0x01,0xff,0xe8,0xbc,0xbb,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x01,0xff,
++ 0xe8,0xa1,0x8c,0x00,0x01,0xff,0xe9,0x99,0x8d,0x00,0x10,0x08,0x01,0xff,0xe8,0xa6,
++ 0x8b,0x00,0x01,0xff,0xe5,0xbb,0x93,0x00,0x91,0x10,0x10,0x08,0x01,0xff,0xe5,0x85,
++ 0x80,0x00,0x01,0xff,0xe5,0x97,0x80,0x00,0x01,0x00,0xd3,0x34,0xd2,0x18,0xd1,0x0c,
++ 0x10,0x08,0x01,0xff,0xe5,0xa1,0x9a,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0xe6,0x99,
++ 0xb4,0x00,0x01,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xe5,0x87,0x9e,0x00,
++ 0x10,0x08,0x01,0xff,0xe7,0x8c,0xaa,0x00,0x01,0xff,0xe7,0x9b,0x8a,0x00,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x01,0xff,0xe7,0xa4,0xbc,0x00,0x01,0xff,0xe7,0xa5,0x9e,0x00,
++ 0x10,0x08,0x01,0xff,0xe7,0xa5,0xa5,0x00,0x01,0xff,0xe7,0xa6,0x8f,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe9,0x9d,0x96,0x00,0x01,0xff,0xe7,0xb2,0xbe,0x00,0x10,0x08,
++ 0x01,0xff,0xe7,0xbe,0xbd,0x00,0x01,0x00,0xd4,0x64,0xd3,0x30,0xd2,0x18,0xd1,0x0c,
++ 0x10,0x08,0x01,0xff,0xe8,0x98,0x92,0x00,0x01,0x00,0x10,0x08,0x01,0xff,0xe8,0xab,
++ 0xb8,0x00,0x01,0x00,0xd1,0x0c,0x10,0x04,0x01,0x00,0x01,0xff,0xe9,0x80,0xb8,0x00,
++ 0x10,0x08,0x01,0xff,0xe9,0x83,0xbd,0x00,0x01,0x00,0xd2,0x14,0x51,0x04,0x01,0x00,
++ 0x10,0x08,0x01,0xff,0xe9,0xa3,0xaf,0x00,0x01,0xff,0xe9,0xa3,0xbc,0x00,0xd1,0x10,
++ 0x10,0x08,0x01,0xff,0xe9,0xa4,0xa8,0x00,0x01,0xff,0xe9,0xb6,0xb4,0x00,0x10,0x08,
++ 0x0d,0xff,0xe9,0x83,0x9e,0x00,0x0d,0xff,0xe9,0x9a,0xb7,0x00,0xd3,0x40,0xd2,0x20,
++ 0xd1,0x10,0x10,0x08,0x06,0xff,0xe4,0xbe,0xae,0x00,0x06,0xff,0xe5,0x83,0xa7,0x00,
++ 0x10,0x08,0x06,0xff,0xe5,0x85,0x8d,0x00,0x06,0xff,0xe5,0x8b,0x89,0x00,0xd1,0x10,
++ 0x10,0x08,0x06,0xff,0xe5,0x8b,0xa4,0x00,0x06,0xff,0xe5,0x8d,0x91,0x00,0x10,0x08,
++ 0x06,0xff,0xe5,0x96,0x9d,0x00,0x06,0xff,0xe5,0x98,0x86,0x00,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x06,0xff,0xe5,0x99,0xa8,0x00,0x06,0xff,0xe5,0xa1,0x80,0x00,0x10,0x08,
++ 0x06,0xff,0xe5,0xa2,0xa8,0x00,0x06,0xff,0xe5,0xb1,0xa4,0x00,0xd1,0x10,0x10,0x08,
++ 0x06,0xff,0xe5,0xb1,0xae,0x00,0x06,0xff,0xe6,0x82,0x94,0x00,0x10,0x08,0x06,0xff,
++ 0xe6,0x85,0xa8,0x00,0x06,0xff,0xe6,0x86,0x8e,0x00,0xcf,0x86,0xe5,0x01,0x01,0xd4,
++ 0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe6,0x87,0xb2,0x00,0x06,
++ 0xff,0xe6,0x95,0x8f,0x00,0x10,0x08,0x06,0xff,0xe6,0x97,0xa2,0x00,0x06,0xff,0xe6,
++ 0x9a,0x91,0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe6,0xa2,0x85,0x00,0x06,0xff,0xe6,
++ 0xb5,0xb7,0x00,0x10,0x08,0x06,0xff,0xe6,0xb8,0x9a,0x00,0x06,0xff,0xe6,0xbc,0xa2,
++ 0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0x85,0xae,0x00,0x06,0xff,0xe7,
++ 0x88,0xab,0x00,0x10,0x08,0x06,0xff,0xe7,0x90,0xa2,0x00,0x06,0xff,0xe7,0xa2,0x91,
++ 0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0xa4,0xbe,0x00,0x06,0xff,0xe7,0xa5,0x89,
++ 0x00,0x10,0x08,0x06,0xff,0xe7,0xa5,0x88,0x00,0x06,0xff,0xe7,0xa5,0x90,0x00,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0xa5,0x96,0x00,0x06,0xff,0xe7,
++ 0xa5,0x9d,0x00,0x10,0x08,0x06,0xff,0xe7,0xa6,0x8d,0x00,0x06,0xff,0xe7,0xa6,0x8e,
++ 0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0xa9,0x80,0x00,0x06,0xff,0xe7,0xaa,0x81,
++ 0x00,0x10,0x08,0x06,0xff,0xe7,0xaf,0x80,0x00,0x06,0xff,0xe7,0xb7,0xb4,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe7,0xb8,0x89,0x00,0x06,0xff,0xe7,0xb9,0x81,
++ 0x00,0x10,0x08,0x06,0xff,0xe7,0xbd,0xb2,0x00,0x06,0xff,0xe8,0x80,0x85,0x00,0xd1,
++ 0x10,0x10,0x08,0x06,0xff,0xe8,0x87,0xad,0x00,0x06,0xff,0xe8,0x89,0xb9,0x00,0x10,
++ 0x08,0x06,0xff,0xe8,0x89,0xb9,0x00,0x06,0xff,0xe8,0x91,0x97,0x00,0xd4,0x75,0xd3,
++ 0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe8,0xa4,0x90,0x00,0x06,0xff,0xe8,
++ 0xa6,0x96,0x00,0x10,0x08,0x06,0xff,0xe8,0xac,0x81,0x00,0x06,0xff,0xe8,0xac,0xb9,
++ 0x00,0xd1,0x10,0x10,0x08,0x06,0xff,0xe8,0xb3,0x93,0x00,0x06,0xff,0xe8,0xb4,0x88,
++ 0x00,0x10,0x08,0x06,0xff,0xe8,0xbe,0xb6,0x00,0x06,0xff,0xe9,0x80,0xb8,0x00,0xd2,
++ 0x20,0xd1,0x10,0x10,0x08,0x06,0xff,0xe9,0x9b,0xa3,0x00,0x06,0xff,0xe9,0x9f,0xbf,
++ 0x00,0x10,0x08,0x06,0xff,0xe9,0xa0,0xbb,0x00,0x0b,0xff,0xe6,0x81,0xb5,0x00,0x91,
++ 0x11,0x10,0x09,0x0b,0xff,0xf0,0xa4,0x8b,0xae,0x00,0x0b,0xff,0xe8,0x88,0x98,0x00,
++ 0x00,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe4,0xb8,0xa6,0x00,
++ 0x08,0xff,0xe5,0x86,0xb5,0x00,0x10,0x08,0x08,0xff,0xe5,0x85,0xa8,0x00,0x08,0xff,
++ 0xe4,0xbe,0x80,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0x85,0x85,0x00,0x08,0xff,
++ 0xe5,0x86,0x80,0x00,0x10,0x08,0x08,0xff,0xe5,0x8b,0x87,0x00,0x08,0xff,0xe5,0x8b,
++ 0xba,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0x96,0x9d,0x00,0x08,0xff,
++ 0xe5,0x95,0x95,0x00,0x10,0x08,0x08,0xff,0xe5,0x96,0x99,0x00,0x08,0xff,0xe5,0x97,
++ 0xa2,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe5,0xa1,0x9a,0x00,0x08,0xff,0xe5,0xa2,
++ 0xb3,0x00,0x10,0x08,0x08,0xff,0xe5,0xa5,0x84,0x00,0x08,0xff,0xe5,0xa5,0x94,0x00,
++ 0xe0,0x04,0x02,0xcf,0x86,0xe5,0x01,0x01,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x08,0xff,0xe5,0xa9,0xa2,0x00,0x08,0xff,0xe5,0xac,0xa8,0x00,0x10,0x08,
++ 0x08,0xff,0xe5,0xbb,0x92,0x00,0x08,0xff,0xe5,0xbb,0x99,0x00,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe5,0xbd,0xa9,0x00,0x08,0xff,0xe5,0xbe,0xad,0x00,0x10,0x08,0x08,0xff,
++ 0xe6,0x83,0x98,0x00,0x08,0xff,0xe6,0x85,0x8e,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe6,0x84,0x88,0x00,0x08,0xff,0xe6,0x86,0x8e,0x00,0x10,0x08,0x08,0xff,
++ 0xe6,0x85,0xa0,0x00,0x08,0xff,0xe6,0x87,0xb2,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe6,0x88,0xb4,0x00,0x08,0xff,0xe6,0x8f,0x84,0x00,0x10,0x08,0x08,0xff,0xe6,0x90,
++ 0x9c,0x00,0x08,0xff,0xe6,0x91,0x92,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe6,0x95,0x96,0x00,0x08,0xff,0xe6,0x99,0xb4,0x00,0x10,0x08,0x08,0xff,
++ 0xe6,0x9c,0x97,0x00,0x08,0xff,0xe6,0x9c,0x9b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe6,0x9d,0x96,0x00,0x08,0xff,0xe6,0xad,0xb9,0x00,0x10,0x08,0x08,0xff,0xe6,0xae,
++ 0xba,0x00,0x08,0xff,0xe6,0xb5,0x81,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe6,0xbb,0x9b,0x00,0x08,0xff,0xe6,0xbb,0x8b,0x00,0x10,0x08,0x08,0xff,0xe6,0xbc,
++ 0xa2,0x00,0x08,0xff,0xe7,0x80,0x9e,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0x85,
++ 0xae,0x00,0x08,0xff,0xe7,0x9e,0xa7,0x00,0x10,0x08,0x08,0xff,0xe7,0x88,0xb5,0x00,
++ 0x08,0xff,0xe7,0x8a,0xaf,0x00,0xd4,0x80,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe7,0x8c,0xaa,0x00,0x08,0xff,0xe7,0x91,0xb1,0x00,0x10,0x08,0x08,0xff,
++ 0xe7,0x94,0x86,0x00,0x08,0xff,0xe7,0x94,0xbb,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe7,0x98,0x9d,0x00,0x08,0xff,0xe7,0x98,0x9f,0x00,0x10,0x08,0x08,0xff,0xe7,0x9b,
++ 0x8a,0x00,0x08,0xff,0xe7,0x9b,0x9b,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe7,0x9b,0xb4,0x00,0x08,0xff,0xe7,0x9d,0x8a,0x00,0x10,0x08,0x08,0xff,0xe7,0x9d,
++ 0x80,0x00,0x08,0xff,0xe7,0xa3,0x8c,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe7,0xaa,
++ 0xb1,0x00,0x08,0xff,0xe7,0xaf,0x80,0x00,0x10,0x08,0x08,0xff,0xe7,0xb1,0xbb,0x00,
++ 0x08,0xff,0xe7,0xb5,0x9b,0x00,0xd3,0x40,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe7,0xb7,0xb4,0x00,0x08,0xff,0xe7,0xbc,0xbe,0x00,0x10,0x08,0x08,0xff,0xe8,0x80,
++ 0x85,0x00,0x08,0xff,0xe8,0x8d,0x92,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0x8f,
++ 0xaf,0x00,0x08,0xff,0xe8,0x9d,0xb9,0x00,0x10,0x08,0x08,0xff,0xe8,0xa5,0x81,0x00,
++ 0x08,0xff,0xe8,0xa6,0x86,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xa6,
++ 0x96,0x00,0x08,0xff,0xe8,0xaa,0xbf,0x00,0x10,0x08,0x08,0xff,0xe8,0xab,0xb8,0x00,
++ 0x08,0xff,0xe8,0xab,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,0xe8,0xac,0x81,0x00,
++ 0x08,0xff,0xe8,0xab,0xbe,0x00,0x10,0x08,0x08,0xff,0xe8,0xab,0xad,0x00,0x08,0xff,
++ 0xe8,0xac,0xb9,0x00,0xcf,0x86,0x95,0xde,0xd4,0x81,0xd3,0x40,0xd2,0x20,0xd1,0x10,
++ 0x10,0x08,0x08,0xff,0xe8,0xae,0x8a,0x00,0x08,0xff,0xe8,0xb4,0x88,0x00,0x10,0x08,
++ 0x08,0xff,0xe8,0xbc,0xb8,0x00,0x08,0xff,0xe9,0x81,0xb2,0x00,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe9,0x86,0x99,0x00,0x08,0xff,0xe9,0x89,0xb6,0x00,0x10,0x08,0x08,0xff,
++ 0xe9,0x99,0xbc,0x00,0x08,0xff,0xe9,0x9b,0xa3,0x00,0xd2,0x20,0xd1,0x10,0x10,0x08,
++ 0x08,0xff,0xe9,0x9d,0x96,0x00,0x08,0xff,0xe9,0x9f,0x9b,0x00,0x10,0x08,0x08,0xff,
++ 0xe9,0x9f,0xbf,0x00,0x08,0xff,0xe9,0xa0,0x8b,0x00,0xd1,0x10,0x10,0x08,0x08,0xff,
++ 0xe9,0xa0,0xbb,0x00,0x08,0xff,0xe9,0xac,0x92,0x00,0x10,0x08,0x08,0xff,0xe9,0xbe,
++ 0x9c,0x00,0x08,0xff,0xf0,0xa2,0xa1,0x8a,0x00,0xd3,0x45,0xd2,0x22,0xd1,0x12,0x10,
++ 0x09,0x08,0xff,0xf0,0xa2,0xa1,0x84,0x00,0x08,0xff,0xf0,0xa3,0x8f,0x95,0x00,0x10,
++ 0x08,0x08,0xff,0xe3,0xae,0x9d,0x00,0x08,0xff,0xe4,0x80,0x98,0x00,0xd1,0x11,0x10,
++ 0x08,0x08,0xff,0xe4,0x80,0xb9,0x00,0x08,0xff,0xf0,0xa5,0x89,0x89,0x00,0x10,0x09,
++ 0x08,0xff,0xf0,0xa5,0xb3,0x90,0x00,0x08,0xff,0xf0,0xa7,0xbb,0x93,0x00,0x92,0x14,
++ 0x91,0x10,0x10,0x08,0x08,0xff,0xe9,0xbd,0x83,0x00,0x08,0xff,0xe9,0xbe,0x8e,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0xe1,0x94,0x01,0xe0,0x08,0x01,0xcf,0x86,0xd5,0x42,
++ 0xd4,0x14,0x93,0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,
++ 0x00,0x00,0x00,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,
++ 0x01,0x00,0x01,0x00,0x52,0x04,0x00,0x00,0xd1,0x0d,0x10,0x04,0x00,0x00,0x04,0xff,
++ 0xd7,0x99,0xd6,0xb4,0x00,0x10,0x04,0x01,0x1a,0x01,0xff,0xd7,0xb2,0xd6,0xb7,0x00,
++ 0xd4,0x42,0x53,0x04,0x01,0x00,0xd2,0x16,0x51,0x04,0x01,0x00,0x10,0x09,0x01,0xff,
++ 0xd7,0xa9,0xd7,0x81,0x00,0x01,0xff,0xd7,0xa9,0xd7,0x82,0x00,0xd1,0x16,0x10,0x0b,
++ 0x01,0xff,0xd7,0xa9,0xd6,0xbc,0xd7,0x81,0x00,0x01,0xff,0xd7,0xa9,0xd6,0xbc,0xd7,
++ 0x82,0x00,0x10,0x09,0x01,0xff,0xd7,0x90,0xd6,0xb7,0x00,0x01,0xff,0xd7,0x90,0xd6,
++ 0xb8,0x00,0xd3,0x43,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x90,0xd6,0xbc,
++ 0x00,0x01,0xff,0xd7,0x91,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x92,0xd6,0xbc,
++ 0x00,0x01,0xff,0xd7,0x93,0xd6,0xbc,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x94,
++ 0xd6,0xbc,0x00,0x01,0xff,0xd7,0x95,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x96,
++ 0xd6,0xbc,0x00,0x00,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0x98,0xd6,
++ 0xbc,0x00,0x01,0xff,0xd7,0x99,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0x9a,0xd6,
++ 0xbc,0x00,0x01,0xff,0xd7,0x9b,0xd6,0xbc,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xd7,
++ 0x9c,0xd6,0xbc,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xd7,0x9e,0xd6,0xbc,0x00,0x00,
++ 0x00,0xcf,0x86,0x95,0x85,0x94,0x81,0xd3,0x3e,0xd2,0x1f,0xd1,0x12,0x10,0x09,0x01,
++ 0xff,0xd7,0xa0,0xd6,0xbc,0x00,0x01,0xff,0xd7,0xa1,0xd6,0xbc,0x00,0x10,0x04,0x00,
++ 0x00,0x01,0xff,0xd7,0xa3,0xd6,0xbc,0x00,0xd1,0x0d,0x10,0x09,0x01,0xff,0xd7,0xa4,
++ 0xd6,0xbc,0x00,0x00,0x00,0x10,0x09,0x01,0xff,0xd7,0xa6,0xd6,0xbc,0x00,0x01,0xff,
++ 0xd7,0xa7,0xd6,0xbc,0x00,0xd2,0x24,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,0xa8,0xd6,
++ 0xbc,0x00,0x01,0xff,0xd7,0xa9,0xd6,0xbc,0x00,0x10,0x09,0x01,0xff,0xd7,0xaa,0xd6,
++ 0xbc,0x00,0x01,0xff,0xd7,0x95,0xd6,0xb9,0x00,0xd1,0x12,0x10,0x09,0x01,0xff,0xd7,
++ 0x91,0xd6,0xbf,0x00,0x01,0xff,0xd7,0x9b,0xd6,0xbf,0x00,0x10,0x09,0x01,0xff,0xd7,
++ 0xa4,0xd6,0xbf,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,
++ 0x01,0x00,0x54,0x04,0x01,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x01,0x00,0x0c,0x00,
++ 0x0c,0x00,0x0c,0x00,0xcf,0x86,0x95,0x24,0xd4,0x10,0x93,0x0c,0x92,0x08,0x11,0x04,
++ 0x0c,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x00,0x00,
++ 0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0xd3,0x5a,0xd2,0x06,
++ 0xcf,0x06,0x01,0x00,0xd1,0x14,0xd0,0x06,0xcf,0x06,0x01,0x00,0xcf,0x86,0x95,0x08,
++ 0x14,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x54,0x04,
++ 0x01,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0xcf,0x86,0xd5,0x0c,0x94,0x08,0x13,0x04,0x01,0x00,0x00,0x00,0x05,0x00,
++ 0x54,0x04,0x05,0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x91,0x08,0x10,0x04,
++ 0x06,0x00,0x07,0x00,0x00,0x00,0xd2,0xce,0xd1,0xa5,0xd0,0x37,0xcf,0x86,0xd5,0x15,
++ 0x54,0x05,0x06,0xff,0x00,0x53,0x04,0x08,0x00,0x92,0x08,0x11,0x04,0x08,0x00,0x00,
++ 0x00,0x00,0x00,0x94,0x1c,0xd3,0x10,0x52,0x04,0x01,0xe6,0x51,0x04,0x0a,0xe6,0x10,
++ 0x04,0x0a,0xe6,0x10,0xdc,0x52,0x04,0x10,0xdc,0x11,0x04,0x10,0xdc,0x11,0xe6,0x01,
++ 0x00,0xcf,0x86,0xd5,0x38,0xd4,0x24,0xd3,0x14,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,
++ 0x04,0x01,0x00,0x06,0x00,0x10,0x04,0x06,0x00,0x07,0x00,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x07,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x01,
++ 0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd4,0x18,0xd3,0x10,0x52,
++ 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x12,0x04,0x01,
++ 0x00,0x00,0x00,0x93,0x18,0xd2,0x0c,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,
++ 0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x01,0x00,0x01,0x00,0xd0,0x06,0xcf,
++ 0x06,0x01,0x00,0xcf,0x86,0x55,0x04,0x01,0x00,0x54,0x04,0x01,0x00,0x53,0x04,0x01,
++ 0x00,0x52,0x04,0x01,0x00,0xd1,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x10,0x04,0x00,
++ 0x00,0x01,0xff,0x00,0xd1,0x50,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,0x10,
++ 0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x01,0x00,0x53,0x04,0x01,0x00,
++ 0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x06,0x00,0x94,0x14,
++ 0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x06,0x00,0x01,0x00,0x01,0x00,0x01,0x00,
++ 0x01,0x00,0x01,0x00,0xd0,0x2f,0xcf,0x86,0x55,0x04,0x01,0x00,0xd4,0x15,0x93,0x11,
++ 0x92,0x0d,0x91,0x09,0x10,0x05,0x01,0xff,0x00,0x01,0x00,0x01,0x00,0x01,0x00,0x01,
++ 0x00,0x53,0x04,0x01,0x00,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,
++ 0x00,0x00,0x00,0xcf,0x86,0xd5,0x38,0xd4,0x18,0xd3,0x0c,0x92,0x08,0x11,0x04,0x00,
++ 0x00,0x01,0x00,0x01,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd3,
++ 0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x01,0x00,0x01,0x00,0xd2,0x08,0x11,0x04,0x00,
++ 0x00,0x01,0x00,0x91,0x08,0x10,0x04,0x01,0x00,0x00,0x00,0x00,0x00,0xd4,0x20,0xd3,
++ 0x10,0x52,0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x52,
++ 0x04,0x01,0x00,0x51,0x04,0x01,0x00,0x10,0x04,0x01,0x00,0x00,0x00,0x53,0x05,0x00,
++ 0xff,0x00,0xd2,0x0d,0x91,0x09,0x10,0x05,0x00,0xff,0x00,0x04,0x00,0x04,0x00,0x91,
++ 0x08,0x10,0x04,0x03,0x00,0x01,0x00,0x01,0x00,0x83,0xe2,0x46,0x3e,0xe1,0x1f,0x3b,
++ 0xe0,0x9c,0x39,0xcf,0x86,0xe5,0x40,0x26,0xc4,0xe3,0x16,0x14,0xe2,0xef,0x11,0xe1,
++ 0xd0,0x10,0xe0,0x60,0x07,0xcf,0x86,0xe5,0x53,0x03,0xe4,0x4c,0x02,0xe3,0x3d,0x01,
++ 0xd2,0x94,0xd1,0x70,0xd0,0x4a,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x07,0x00,
++ 0x52,0x04,0x07,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,
++ 0xd4,0x14,0x93,0x10,0x52,0x04,0x07,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,
++ 0x00,0x00,0x07,0x00,0x53,0x04,0x07,0x00,0xd2,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,
++ 0x07,0x00,0x00,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0xcf,0x86,
++ 0x95,0x20,0xd4,0x10,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,
++ 0x00,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,
++ 0x00,0x00,0xd0,0x06,0xcf,0x06,0x07,0x00,0xcf,0x86,0x55,0x04,0x07,0x00,0x54,0x04,
++ 0x07,0x00,0x53,0x04,0x07,0x00,0x92,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,
++ 0x00,0x00,0x00,0x00,0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x20,0x94,0x1c,0x93,0x18,
++ 0xd2,0x0c,0x51,0x04,0x07,0x00,0x10,0x04,0x07,0x00,0x00,0x00,0x51,0x04,0x00,0x00,
++ 0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0x54,0x04,0x07,0x00,0x93,0x10,
++ 0x52,0x04,0x07,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0x07,0x00,
++ 0xcf,0x06,0x08,0x00,0xd0,0x46,0xcf,0x86,0xd5,0x2c,0xd4,0x20,0x53,0x04,0x08,0x00,
++ 0xd2,0x0c,0x51,0x04,0x08,0x00,0x10,0x04,0x08,0x00,0x10,0x00,0xd1,0x08,0x10,0x04,
++ 0x10,0x00,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x53,0x04,0x0a,0x00,0x12,0x04,
++ 0x0a,0x00,0x00,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x08,0x14,0x04,
++ 0x00,0x00,0x0a,0x00,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,0x52,0x04,0x0a,0x00,
++ 0x91,0x08,0x10,0x04,0x0a,0x00,0x0a,0xdc,0x00,0x00,0xd2,0x5e,0xd1,0x06,0xcf,0x06,
++ 0x00,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,
++ 0x52,0x04,0x0a,0x00,0x91,0x08,0x10,0x04,0x0a,0x00,0x00,0x00,0x00,0x00,0x0a,0x00,
++ 0xcf,0x86,0xd5,0x18,0x54,0x04,0x0a,0x00,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x0a,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x10,0xdc,0x10,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x53,0x04,
++ 0x10,0x00,0x12,0x04,0x10,0x00,0x00,0x00,0xd1,0x70,0xd0,0x36,0xcf,0x86,0xd5,0x18,
++ 0x54,0x04,0x05,0x00,0x53,0x04,0x05,0x00,0x52,0x04,0x05,0x00,0x51,0x04,0x05,0x00,
++ 0x10,0x04,0x05,0x00,0x10,0x00,0x94,0x18,0xd3,0x08,0x12,0x04,0x05,0x00,0x00,0x00,
++ 0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x13,0x00,0x13,0x00,0x05,0x00,
++ 0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x05,0x00,0x92,0x0c,0x51,0x04,0x05,0x00,
++ 0x10,0x04,0x05,0x00,0x00,0x00,0x00,0x00,0x10,0x00,0x54,0x04,0x10,0x00,0xd3,0x0c,
++ 0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x10,0xe6,0x92,0x0c,0x51,0x04,0x10,0xe6,
++ 0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,
++ 0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,0x51,0x04,0x07,0x00,0x10,0x04,
++ 0x00,0x00,0x07,0x00,0x08,0x00,0xcf,0x86,0x95,0x1c,0xd4,0x0c,0x93,0x08,0x12,0x04,
++ 0x08,0x00,0x00,0x00,0x08,0x00,0x93,0x0c,0x52,0x04,0x08,0x00,0x11,0x04,0x08,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0xd3,0xba,0xd2,0x80,0xd1,0x34,0xd0,0x1a,0xcf,0x86,
++ 0x55,0x04,0x05,0x00,0x94,0x10,0x93,0x0c,0x52,0x04,0x05,0x00,0x11,0x04,0x05,0x00,
++ 0x07,0x00,0x05,0x00,0x05,0x00,0xcf,0x86,0x95,0x14,0x94,0x10,0x53,0x04,0x05,0x00,
++ 0x52,0x04,0x05,0x00,0x11,0x04,0x05,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0xd0,0x2a,
++ 0xcf,0x86,0xd5,0x14,0x54,0x04,0x07,0x00,0x53,0x04,0x07,0x00,0x52,0x04,0x07,0x00,
++ 0x11,0x04,0x07,0x00,0x00,0x00,0x94,0x10,0x53,0x04,0x07,0x00,0x92,0x08,0x11,0x04,
++ 0x07,0x00,0x00,0x00,0x00,0x00,0x12,0x00,0xcf,0x86,0xd5,0x10,0x54,0x04,0x12,0x00,
++ 0x93,0x08,0x12,0x04,0x12,0x00,0x00,0x00,0x12,0x00,0x54,0x04,0x12,0x00,0x53,0x04,
++ 0x12,0x00,0x12,0x04,0x12,0x00,0x00,0x00,0xd1,0x34,0xd0,0x12,0xcf,0x86,0x55,0x04,
++ 0x10,0x00,0x94,0x08,0x13,0x04,0x10,0x00,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,0x04,
++ 0x10,0x00,0x94,0x18,0xd3,0x08,0x12,0x04,0x10,0x00,0x00,0x00,0x52,0x04,0x00,0x00,
++ 0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,
++ 0xd2,0x06,0xcf,0x06,0x10,0x00,0xd1,0x40,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,
++ 0x54,0x04,0x10,0x00,0x93,0x10,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,
++ 0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x10,0x00,0x93,0x0c,
++ 0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x94,0x08,0x13,0x04,
++ 0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xe4,0xce,0x02,0xe3,0x45,0x01,
++ 0xd2,0xd0,0xd1,0x70,0xd0,0x52,0xcf,0x86,0xd5,0x20,0x94,0x1c,0xd3,0x0c,0x52,0x04,
++ 0x07,0x00,0x11,0x04,0x07,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x07,0x00,
++ 0x00,0x00,0x07,0x00,0x07,0x00,0x07,0x00,0x54,0x04,0x07,0x00,0xd3,0x10,0x52,0x04,
++ 0x07,0x00,0x51,0x04,0x07,0x00,0x10,0x04,0x00,0x00,0x07,0x00,0xd2,0x0c,0x91,0x08,
++ 0x10,0x04,0x07,0x00,0x00,0x00,0x00,0x00,0xd1,0x08,0x10,0x04,0x07,0x00,0x00,0x00,
++ 0x10,0x04,0x00,0x00,0x07,0x00,0xcf,0x86,0x95,0x18,0x54,0x04,0x0b,0x00,0x93,0x10,
++ 0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,0x00,0x00,0x0b,0x00,0x0b,0x00,
++ 0x10,0x00,0xd0,0x32,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,
++ 0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x94,0x14,
++ 0x93,0x10,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,
++ 0x10,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x11,0x00,0xd3,0x14,
++ 0xd2,0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x11,0x04,0x11,0x00,
++ 0x00,0x00,0x92,0x0c,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x11,0x00,0x11,0x00,
++ 0xd1,0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x1c,0x54,0x04,0x09,0x00,0x53,0x04,0x09,0x00,
++ 0xd2,0x08,0x11,0x04,0x09,0x00,0x0b,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,
++ 0x09,0x00,0x54,0x04,0x0a,0x00,0x53,0x04,0x0a,0x00,0xd2,0x08,0x11,0x04,0x0a,0x00,
++ 0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0a,0x00,0xcf,0x06,0x00,0x00,
++ 0xd0,0x1a,0xcf,0x86,0x55,0x04,0x0d,0x00,0x54,0x04,0x0d,0x00,0x53,0x04,0x0d,0x00,
++ 0x52,0x04,0x00,0x00,0x11,0x04,0x11,0x00,0x0d,0x00,0xcf,0x86,0x95,0x14,0x54,0x04,
++ 0x11,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x11,0x00,0x11,0x00,0x11,0x00,
++ 0x11,0x00,0xd2,0xec,0xd1,0xa4,0xd0,0x76,0xcf,0x86,0xd5,0x48,0xd4,0x28,0xd3,0x14,
++ 0x52,0x04,0x08,0x00,0xd1,0x08,0x10,0x04,0x00,0x00,0x08,0x00,0x10,0x04,0x08,0x00,
++ 0x00,0x00,0x52,0x04,0x00,0x00,0xd1,0x08,0x10,0x04,0x08,0x00,0x08,0xdc,0x10,0x04,
++ 0x08,0x00,0x08,0xe6,0xd3,0x10,0x52,0x04,0x08,0x00,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x08,0x00,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x08,0x00,0x08,0x00,
++ 0x08,0x00,0x54,0x04,0x08,0x00,0xd3,0x0c,0x52,0x04,0x08,0x00,0x11,0x04,0x14,0x00,
++ 0x00,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,0x08,0xe6,0x08,0x01,0x10,0x04,0x08,0xdc,
++ 0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x08,0x09,0xcf,0x86,0x95,0x28,
++ 0xd4,0x14,0x53,0x04,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x53,0x04,0x08,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x08,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x0b,0x00,0xd0,0x0a,0xcf,0x86,0x15,0x04,0x10,0x00,
++ 0x00,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x24,0xd3,0x14,0x52,0x04,0x10,0x00,
++ 0xd1,0x08,0x10,0x04,0x10,0x00,0x10,0xe6,0x10,0x04,0x10,0xdc,0x00,0x00,0x92,0x0c,
++ 0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x93,0x10,0x52,0x04,
++ 0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd1,0x54,
++ 0xd0,0x26,0xcf,0x86,0x55,0x04,0x0b,0x00,0x54,0x04,0x0b,0x00,0xd3,0x0c,0x52,0x04,
++ 0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x0b,0x00,0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x0b,0x00,0x93,0x0c,
++ 0x52,0x04,0x0b,0x00,0x11,0x04,0x0b,0x00,0x00,0x00,0x0b,0x00,0x54,0x04,0x0b,0x00,
++ 0x93,0x10,0x92,0x0c,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,
++ 0x0b,0x00,0xd0,0x42,0xcf,0x86,0xd5,0x28,0x54,0x04,0x10,0x00,0xd3,0x0c,0x92,0x08,
++ 0x11,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd2,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x10,0x00,0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x94,0x14,
++ 0x53,0x04,0x00,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,
++ 0x10,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x96,0xd2,0x68,0xd1,0x24,0xd0,0x06,
++ 0xcf,0x06,0x0b,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,0x0b,0x00,0x92,0x0c,
++ 0x91,0x08,0x10,0x04,0x0b,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0xd0,0x1e,0xcf,0x86,0x55,0x04,0x11,0x00,0x54,0x04,0x11,0x00,0x93,0x10,0x92,0x0c,
++ 0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,
++ 0x55,0x04,0x11,0x00,0x54,0x04,0x11,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x11,0x00,
++ 0x10,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x11,0x00,
++ 0x11,0x00,0xd1,0x28,0xd0,0x22,0xcf,0x86,0x55,0x04,0x14,0x00,0xd4,0x0c,0x93,0x08,
++ 0x12,0x04,0x14,0x00,0x14,0xe6,0x00,0x00,0x53,0x04,0x14,0x00,0x92,0x08,0x11,0x04,
++ 0x14,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xd2,0x2a,
++ 0xd1,0x24,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,
++ 0x0b,0x00,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,0x04,
++ 0x0b,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,0x58,0xd0,0x12,0xcf,0x86,0x55,0x04,
++ 0x14,0x00,0x94,0x08,0x13,0x04,0x14,0x00,0x00,0x00,0x14,0x00,0xcf,0x86,0x95,0x40,
++ 0xd4,0x24,0xd3,0x0c,0x52,0x04,0x14,0x00,0x11,0x04,0x14,0x00,0x14,0xdc,0xd2,0x0c,
++ 0x51,0x04,0x14,0xe6,0x10,0x04,0x14,0xe6,0x14,0xdc,0x91,0x08,0x10,0x04,0x14,0xe6,
++ 0x14,0xdc,0x14,0xdc,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0xdc,0x14,0x00,
++ 0x14,0x00,0x14,0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x15,0x00,
++ 0x93,0x10,0x52,0x04,0x15,0x00,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,0x00,
++ 0x00,0x00,0xcf,0x86,0xe5,0x0f,0x06,0xe4,0xf8,0x03,0xe3,0x02,0x02,0xd2,0xfb,0xd1,
++ 0x4c,0xd0,0x06,0xcf,0x06,0x0c,0x00,0xcf,0x86,0xd5,0x2c,0xd4,0x1c,0xd3,0x10,0x52,
++ 0x04,0x0c,0x00,0x51,0x04,0x0c,0x00,0x10,0x04,0x0c,0x09,0x0c,0x00,0x52,0x04,0x0c,
++ 0x00,0x11,0x04,0x0c,0x00,0x00,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x0c,
++ 0x00,0x0c,0x00,0x0c,0x00,0x54,0x04,0x0c,0x00,0x53,0x04,0x00,0x00,0x52,0x04,0x00,
++ 0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x09,0xd0,0x69,0xcf,0x86,0xd5,
++ 0x32,0x54,0x04,0x0b,0x00,0x53,0x04,0x0b,0x00,0xd2,0x15,0x51,0x04,0x0b,0x00,0x10,
++ 0x0d,0x0b,0xff,0xf0,0x91,0x82,0x99,0xf0,0x91,0x82,0xba,0x00,0x0b,0x00,0x91,0x11,
++ 0x10,0x0d,0x0b,0xff,0xf0,0x91,0x82,0x9b,0xf0,0x91,0x82,0xba,0x00,0x0b,0x00,0x0b,
++ 0x00,0xd4,0x1d,0x53,0x04,0x0b,0x00,0x92,0x15,0x51,0x04,0x0b,0x00,0x10,0x04,0x0b,
++ 0x00,0x0b,0xff,0xf0,0x91,0x82,0xa5,0xf0,0x91,0x82,0xba,0x00,0x0b,0x00,0x53,0x04,
++ 0x0b,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x0b,0x00,0x0b,0x09,0x10,0x04,0x0b,0x07,
++ 0x0b,0x00,0x0b,0x00,0xcf,0x86,0xd5,0x20,0x94,0x1c,0xd3,0x0c,0x92,0x08,0x11,0x04,
++ 0x0b,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,
++ 0x14,0x00,0x00,0x00,0x0d,0x00,0xd4,0x14,0x53,0x04,0x0d,0x00,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x0d,0x00,0x92,0x08,
++ 0x11,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0xd1,0x96,0xd0,0x5c,0xcf,0x86,0xd5,0x18,
++ 0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x0d,0xe6,0x10,0x04,0x0d,0xe6,0x0d,0x00,
++ 0x0d,0x00,0x0d,0x00,0x0d,0x00,0xd4,0x26,0x53,0x04,0x0d,0x00,0x52,0x04,0x0d,0x00,
++ 0x51,0x04,0x0d,0x00,0x10,0x0d,0x0d,0xff,0xf0,0x91,0x84,0xb1,0xf0,0x91,0x84,0xa7,
++ 0x00,0x0d,0xff,0xf0,0x91,0x84,0xb2,0xf0,0x91,0x84,0xa7,0x00,0x93,0x18,0xd2,0x0c,
++ 0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x00,0x0d,0x09,0x91,0x08,0x10,0x04,0x0d,0x09,
++ 0x00,0x00,0x0d,0x00,0x0d,0x00,0xcf,0x86,0xd5,0x18,0x94,0x14,0x93,0x10,0x52,0x04,
++ 0x0d,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x10,0x00,
++ 0x54,0x04,0x10,0x00,0x93,0x18,0xd2,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,
++ 0x10,0x07,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd0,0x06,
++ 0xcf,0x06,0x0d,0x00,0xcf,0x86,0xd5,0x40,0xd4,0x2c,0xd3,0x10,0x92,0x0c,0x91,0x08,
++ 0x10,0x04,0x0d,0x09,0x0d,0x00,0x0d,0x00,0x0d,0x00,0xd2,0x10,0xd1,0x08,0x10,0x04,
++ 0x0d,0x00,0x11,0x00,0x10,0x04,0x11,0x07,0x11,0x00,0x91,0x08,0x10,0x04,0x11,0x00,
++ 0x10,0x00,0x00,0x00,0x53,0x04,0x0d,0x00,0x92,0x0c,0x51,0x04,0x0d,0x00,0x10,0x04,
++ 0x10,0x00,0x11,0x00,0x11,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x93,0x10,0x52,0x04,0x10,0x00,
++ 0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd2,0xc8,0xd1,0x48,
++ 0xd0,0x42,0xcf,0x86,0xd5,0x18,0x54,0x04,0x10,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,
++ 0x10,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x54,0x04,0x10,0x00,
++ 0xd3,0x14,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,0x04,0x10,0x00,0x10,0x09,0x10,0x04,
++ 0x10,0x07,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x12,0x00,
++ 0x00,0x00,0xcf,0x06,0x00,0x00,0xd0,0x52,0xcf,0x86,0xd5,0x3c,0xd4,0x28,0xd3,0x10,
++ 0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x11,0x00,0x00,0x00,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,
++ 0x00,0x00,0x11,0x00,0x53,0x04,0x11,0x00,0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,
++ 0x10,0x04,0x00,0x00,0x11,0x00,0x94,0x10,0x53,0x04,0x11,0x00,0x92,0x08,0x11,0x04,
++ 0x11,0x00,0x00,0x00,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x18,
++ 0x53,0x04,0x10,0x00,0x92,0x10,0xd1,0x08,0x10,0x04,0x10,0x00,0x10,0x07,0x10,0x04,
++ 0x10,0x09,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,
++ 0x00,0x00,0x00,0x00,0xe1,0x27,0x01,0xd0,0x8a,0xcf,0x86,0xd5,0x44,0xd4,0x2c,0xd3,
++ 0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x11,0x00,0x10,0x00,0x10,0x00,0x91,0x08,0x10,
++ 0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,0x04,0x10,
++ 0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x93,0x14,0x92,0x10,0xd1,0x08,0x10,
++ 0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0xd4,
++ 0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,
++ 0x00,0x10,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x10,
++ 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0xd2,0x0c,0x51,0x04,0x10,
++ 0x00,0x10,0x04,0x00,0x00,0x14,0x07,0x91,0x08,0x10,0x04,0x10,0x07,0x10,0x00,0x10,
++ 0x00,0xcf,0x86,0xd5,0x6a,0xd4,0x42,0xd3,0x14,0x52,0x04,0x10,0x00,0xd1,0x08,0x10,
++ 0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0xd2,0x19,0xd1,0x08,0x10,
++ 0x04,0x10,0x00,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0xff,0xf0,0x91,0x8d,0x87,0xf0,
++ 0x91,0x8c,0xbe,0x00,0x91,0x11,0x10,0x0d,0x10,0xff,0xf0,0x91,0x8d,0x87,0xf0,0x91,
++ 0x8d,0x97,0x00,0x10,0x09,0x00,0x00,0xd3,0x18,0xd2,0x0c,0x91,0x08,0x10,0x04,0x11,
++ 0x00,0x00,0x00,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x10,0x00,0x52,
++ 0x04,0x00,0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0xd4,0x1c,0xd3,
++ 0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x00,0x00,0x10,0xe6,0x52,0x04,0x10,0xe6,0x91,
++ 0x08,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0x93,0x10,0x52,0x04,0x10,0xe6,0x91,
++ 0x08,0x10,0x04,0x10,0xe6,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xe3,
++ 0x30,0x01,0xd2,0xb7,0xd1,0x48,0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x95,0x3c,
++ 0xd4,0x1c,0x93,0x18,0xd2,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x09,0x12,0x00,
++ 0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x07,0x12,0x00,0x12,0x00,0x53,0x04,0x12,0x00,
++ 0xd2,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x00,0x00,0x12,0x00,0xd1,0x08,0x10,0x04,
++ 0x00,0x00,0x12,0x00,0x10,0x04,0x14,0xe6,0x15,0x00,0x00,0x00,0xd0,0x45,0xcf,0x86,
++ 0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0xd2,0x15,0x51,0x04,
++ 0x10,0x00,0x10,0x04,0x10,0x00,0x10,0xff,0xf0,0x91,0x92,0xb9,0xf0,0x91,0x92,0xba,
++ 0x00,0xd1,0x11,0x10,0x0d,0x10,0xff,0xf0,0x91,0x92,0xb9,0xf0,0x91,0x92,0xb0,0x00,
++ 0x10,0x00,0x10,0x0d,0x10,0xff,0xf0,0x91,0x92,0xb9,0xf0,0x91,0x92,0xbd,0x00,0x10,
++ 0x00,0xcf,0x86,0x95,0x24,0xd4,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,
++ 0x04,0x10,0x09,0x10,0x07,0x10,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x08,0x11,
++ 0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,
++ 0x40,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0xd3,0x0c,0x52,0x04,0x10,
++ 0x00,0x11,0x04,0x10,0x00,0x00,0x00,0xd2,0x1e,0x51,0x04,0x10,0x00,0x10,0x0d,0x10,
++ 0xff,0xf0,0x91,0x96,0xb8,0xf0,0x91,0x96,0xaf,0x00,0x10,0xff,0xf0,0x91,0x96,0xb9,
++ 0xf0,0x91,0x96,0xaf,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x10,0x09,0xcf,
++ 0x86,0x95,0x2c,0xd4,0x1c,0xd3,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x07,0x10,
++ 0x00,0x10,0x00,0x10,0x00,0x92,0x08,0x11,0x04,0x10,0x00,0x11,0x00,0x11,0x00,0x53,
++ 0x04,0x11,0x00,0x52,0x04,0x11,0x00,0x11,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0xd2,
++ 0xa0,0xd1,0x5c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x53,
++ 0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x10,
++ 0x09,0xcf,0x86,0xd5,0x24,0xd4,0x14,0x93,0x10,0x52,0x04,0x10,0x00,0x91,0x08,0x10,
++ 0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x08,0x11,
++ 0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x94,0x14,0x53,0x04,0x12,0x00,0x52,0x04,0x12,
++ 0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd0,0x2a,0xcf,
++ 0x86,0x55,0x04,0x0d,0x00,0x54,0x04,0x0d,0x00,0xd3,0x10,0x52,0x04,0x0d,0x00,0x51,
++ 0x04,0x0d,0x00,0x10,0x04,0x0d,0x09,0x0d,0x07,0x92,0x0c,0x91,0x08,0x10,0x04,0x15,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x95,0x14,0x94,0x10,0x53,0x04,0x0d,
++ 0x00,0x92,0x08,0x11,0x04,0x0d,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,
++ 0x40,0xd0,0x3a,0xcf,0x86,0xd5,0x20,0x54,0x04,0x11,0x00,0x53,0x04,0x11,0x00,0xd2,
++ 0x0c,0x51,0x04,0x11,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0x91,0x08,0x10,0x04,0x00,
++ 0x00,0x11,0x00,0x11,0x00,0x94,0x14,0x53,0x04,0x11,0x00,0x92,0x0c,0x51,0x04,0x11,
++ 0x00,0x10,0x04,0x11,0x00,0x11,0x09,0x00,0x00,0x11,0x00,0xcf,0x06,0x00,0x00,0xcf,
++ 0x06,0x00,0x00,0xe4,0x59,0x01,0xd3,0xb2,0xd2,0x5c,0xd1,0x28,0xd0,0x22,0xcf,0x86,
++ 0x55,0x04,0x14,0x00,0x54,0x04,0x14,0x00,0x53,0x04,0x14,0x00,0x92,0x10,0xd1,0x08,
++ 0x10,0x04,0x14,0x00,0x14,0x09,0x10,0x04,0x14,0x07,0x14,0x00,0x00,0x00,0xcf,0x06,
++ 0x00,0x00,0xd0,0x0a,0xcf,0x86,0x15,0x04,0x00,0x00,0x10,0x00,0xcf,0x86,0x55,0x04,
++ 0x10,0x00,0x54,0x04,0x10,0x00,0xd3,0x10,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,
++ 0x10,0x00,0x00,0x00,0x00,0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,
++ 0x00,0x00,0x10,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x1a,0xcf,0x86,0x55,0x04,
++ 0x00,0x00,0x94,0x10,0x53,0x04,0x15,0x00,0x92,0x08,0x11,0x04,0x00,0x00,0x15,0x00,
++ 0x15,0x00,0x15,0x00,0xcf,0x86,0xd5,0x14,0x54,0x04,0x15,0x00,0x53,0x04,0x15,0x00,
++ 0x92,0x08,0x11,0x04,0x00,0x00,0x15,0x00,0x15,0x00,0x94,0x1c,0x93,0x18,0xd2,0x0c,
++ 0x91,0x08,0x10,0x04,0x15,0x09,0x15,0x00,0x15,0x00,0x91,0x08,0x10,0x04,0x15,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd2,0xa0,0xd1,0x3c,0xd0,0x1e,0xcf,0x86,
++ 0x55,0x04,0x13,0x00,0x54,0x04,0x13,0x00,0x93,0x10,0x52,0x04,0x13,0x00,0x91,0x08,
++ 0x10,0x04,0x13,0x09,0x13,0x00,0x13,0x00,0x13,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,
++ 0x93,0x10,0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x13,0x09,
++ 0x00,0x00,0x13,0x00,0x13,0x00,0xd0,0x46,0xcf,0x86,0xd5,0x2c,0xd4,0x10,0x93,0x0c,
++ 0x52,0x04,0x13,0x00,0x11,0x04,0x15,0x00,0x13,0x00,0x13,0x00,0x53,0x04,0x13,0x00,
++ 0xd2,0x0c,0x91,0x08,0x10,0x04,0x13,0x00,0x13,0x09,0x13,0x00,0x91,0x08,0x10,0x04,
++ 0x13,0x00,0x14,0x00,0x13,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x13,0x00,
++ 0x10,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0x55,0x04,
++ 0x10,0x00,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,
++ 0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xe3,0xa9,0x01,0xd2,
++ 0xb0,0xd1,0x6c,0xd0,0x3e,0xcf,0x86,0xd5,0x18,0x94,0x14,0x53,0x04,0x12,0x00,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x12,0x00,0x12,0x00,0x12,0x00,0x54,
++ 0x04,0x12,0x00,0xd3,0x10,0x52,0x04,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,
++ 0x00,0x00,0x00,0x52,0x04,0x12,0x00,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x12,
++ 0x09,0xcf,0x86,0xd5,0x14,0x94,0x10,0x93,0x0c,0x52,0x04,0x12,0x00,0x11,0x04,0x12,
++ 0x00,0x00,0x00,0x00,0x00,0x12,0x00,0x94,0x14,0x53,0x04,0x12,0x00,0x52,0x04,0x12,
++ 0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0x12,0x00,0xd0,0x3e,0xcf,
++ 0x86,0xd5,0x14,0x54,0x04,0x12,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x00,0x00,0x12,
++ 0x00,0x12,0x00,0x12,0x00,0xd4,0x14,0x53,0x04,0x12,0x00,0x92,0x0c,0x91,0x08,0x10,
++ 0x04,0x00,0x00,0x12,0x00,0x12,0x00,0x12,0x00,0x93,0x10,0x52,0x04,0x12,0x00,0x51,
++ 0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,
++ 0xa0,0xd0,0x52,0xcf,0x86,0xd5,0x24,0x94,0x20,0xd3,0x10,0x52,0x04,0x13,0x00,0x51,
++ 0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x92,0x0c,0x51,0x04,0x13,0x00,0x10,
++ 0x04,0x00,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x54,0x04,0x13,0x00,0xd3,0x10,0x52,
++ 0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0xd2,0x0c,0x51,
++ 0x04,0x00,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x51,0x04,0x13,0x00,0x10,0x04,0x00,
++ 0x00,0x13,0x00,0xcf,0x86,0xd5,0x28,0xd4,0x18,0x93,0x14,0xd2,0x0c,0x51,0x04,0x13,
++ 0x00,0x10,0x04,0x13,0x07,0x13,0x00,0x11,0x04,0x13,0x09,0x13,0x00,0x00,0x00,0x53,
++ 0x04,0x13,0x00,0x92,0x08,0x11,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0x94,0x20,0xd3,
++ 0x10,0x52,0x04,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x00,0x00,0x14,0x00,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x14,0x00,0x14,0x00,0x14,0x00,0xd0,
++ 0x52,0xcf,0x86,0xd5,0x3c,0xd4,0x14,0x53,0x04,0x14,0x00,0x52,0x04,0x14,0x00,0x51,
++ 0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x14,
++ 0x00,0x10,0x04,0x00,0x00,0x14,0x00,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x14,
++ 0x09,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x94,
++ 0x10,0x53,0x04,0x14,0x00,0x92,0x08,0x11,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0xcf,0x06,0x00,0x00,0xd2,0x2a,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,
++ 0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x14,0x00,0x53,0x04,0x14,
++ 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xd1,
++ 0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x15,
++ 0x00,0x54,0x04,0x15,0x00,0xd3,0x0c,0x92,0x08,0x11,0x04,0x15,0x00,0x00,0x00,0x00,
++ 0x00,0x52,0x04,0x00,0x00,0x51,0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x15,0x00,0xd0,
++ 0xca,0xcf,0x86,0xd5,0xc2,0xd4,0x54,0xd3,0x06,0xcf,0x06,0x09,0x00,0xd2,0x06,0xcf,
++ 0x06,0x09,0x00,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x09,0x00,0xcf,0x86,0x55,0x04,0x09,
++ 0x00,0x94,0x14,0x53,0x04,0x09,0x00,0x52,0x04,0x09,0x00,0x51,0x04,0x09,0x00,0x10,
++ 0x04,0x09,0x00,0x10,0x00,0x10,0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x10,
++ 0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x11,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x68,0xd2,0x46,0xd1,0x40,0xd0,
++ 0x06,0xcf,0x06,0x09,0x00,0xcf,0x86,0x55,0x04,0x09,0x00,0xd4,0x20,0xd3,0x10,0x92,
++ 0x0c,0x51,0x04,0x09,0x00,0x10,0x04,0x09,0x00,0x10,0x00,0x10,0x00,0x52,0x04,0x10,
++ 0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x93,0x10,0x52,0x04,0x09,
++ 0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x11,
++ 0x00,0xd1,0x1c,0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,0x95,0x10,0x94,0x0c,0x93,
++ 0x08,0x12,0x04,0x11,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,
++ 0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0x4c,0xd4,0x06,0xcf,
++ 0x06,0x0b,0x00,0xd3,0x40,0xd2,0x3a,0xd1,0x34,0xd0,0x2e,0xcf,0x86,0x55,0x04,0x0b,
++ 0x00,0xd4,0x14,0x53,0x04,0x0b,0x00,0x52,0x04,0x0b,0x00,0x51,0x04,0x0b,0x00,0x10,
++ 0x04,0x0b,0x00,0x00,0x00,0x53,0x04,0x15,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x15,
+ 0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,
+- 0x86,0xcf,0x06,0x00,0x00,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,
+- 0xa2,0xd4,0x9c,0xd3,0x74,0xd2,0x26,0xd1,0x20,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x94,
+- 0x10,0x93,0x0c,0x92,0x08,0x11,0x04,0x0c,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x13,
+- 0x00,0x13,0x00,0xcf,0x06,0x13,0x00,0xcf,0x06,0x13,0x00,0xd1,0x48,0xd0,0x1e,0xcf,
+- 0x86,0x95,0x18,0x54,0x04,0x13,0x00,0x53,0x04,0x13,0x00,0x52,0x04,0x13,0x00,0x51,
+- 0x04,0x13,0x00,0x10,0x04,0x13,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x18,0x54,
+- 0x04,0x00,0x00,0x93,0x10,0x92,0x0c,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,
+- 0x00,0x00,0x00,0x00,0x00,0x94,0x0c,0x93,0x08,0x12,0x04,0x00,0x00,0x15,0x00,0x00,
+- 0x00,0x13,0x00,0xcf,0x06,0x13,0x00,0xd2,0x22,0xd1,0x06,0xcf,0x06,0x13,0x00,0xd0,
+- 0x06,0xcf,0x06,0x13,0x00,0xcf,0x86,0x55,0x04,0x13,0x00,0x54,0x04,0x13,0x00,0x53,
+- 0x04,0x13,0x00,0x12,0x04,0x13,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,
+- 0x00,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x7e,0xd2,0x78,0xd1,0x34,0xd0,0x06,0xcf,
+- 0x06,0x10,0x00,0xcf,0x86,0x55,0x04,0x10,0x00,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,
+- 0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,
+- 0x00,0x52,0x04,0x10,0x00,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd0,
+- 0x3e,0xcf,0x86,0xd5,0x2c,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,
+- 0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0xd2,0x08,0x11,
+- 0x04,0x10,0x00,0x00,0x00,0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x01,0x10,0x00,0x94,
+- 0x0c,0x93,0x08,0x12,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,
+- 0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xe1,0x92,0x04,0xd0,0x08,0xcf,0x86,
+- 0xcf,0x06,0x00,0x00,0xcf,0x86,0xe5,0x2f,0x04,0xe4,0x7f,0x02,0xe3,0xf4,0x01,0xd2,
+- 0x26,0xd1,0x06,0xcf,0x06,0x05,0x00,0xd0,0x06,0xcf,0x06,0x05,0x00,0xcf,0x86,0x55,
+- 0x04,0x05,0x00,0x54,0x04,0x05,0x00,0x93,0x0c,0x52,0x04,0x05,0x00,0x11,0x04,0x05,
+- 0x00,0x00,0x00,0x00,0x00,0xd1,0xeb,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x05,0x00,0x94,
+- 0x20,0xd3,0x10,0x52,0x04,0x05,0x00,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x00,
+- 0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x05,0x00,0x05,0x00,0x05,
+- 0x00,0xcf,0x86,0xd5,0x2a,0x54,0x04,0x05,0x00,0x53,0x04,0x05,0x00,0x52,0x04,0x05,
+- 0x00,0x51,0x04,0x05,0x00,0x10,0x0d,0x05,0xff,0xf0,0x9d,0x85,0x97,0xf0,0x9d,0x85,
+- 0xa5,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0x00,0xd4,0x75,0xd3,
+- 0x61,0xd2,0x44,0xd1,0x22,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,
+- 0xa5,0xf0,0x9d,0x85,0xae,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,
+- 0xf0,0x9d,0x85,0xaf,0x00,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,
+- 0xa5,0xf0,0x9d,0x85,0xb0,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,
+- 0xf0,0x9d,0x85,0xb1,0x00,0xd1,0x15,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,
+- 0x9d,0x85,0xa5,0xf0,0x9d,0x85,0xb2,0x00,0x05,0xd8,0x10,0x04,0x05,0xd8,0x05,0x01,
+- 0xd2,0x08,0x11,0x04,0x05,0x01,0x05,0x00,0x91,0x08,0x10,0x04,0x05,0x00,0x05,0xe2,
+- 0x05,0xd8,0xd3,0x10,0x92,0x0c,0x51,0x04,0x05,0xd8,0x10,0x04,0x05,0xd8,0x05,0x00,
+- 0x05,0x00,0x92,0x0c,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x05,0xdc,0x05,0xdc,
++ 0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xd1,0x4c,0xd0,0x44,0xcf,
++ 0x86,0xd5,0x3c,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x06,0xcf,0x06,0x11,0x00,0xd2,
++ 0x2a,0xd1,0x24,0xd0,0x06,0xcf,0x06,0x11,0x00,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,
++ 0x10,0x52,0x04,0x11,0x00,0x51,0x04,0x11,0x00,0x10,0x04,0x11,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,
++ 0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xe0,0xd2,0x01,0xcf,0x86,0xd5,0x06,0xcf,0x06,
++ 0x00,0x00,0xe4,0x0b,0x01,0xd3,0x06,0xcf,0x06,0x0c,0x00,0xd2,0x84,0xd1,0x50,0xd0,
++ 0x1e,0xcf,0x86,0x55,0x04,0x0c,0x00,0x54,0x04,0x0c,0x00,0x53,0x04,0x0c,0x00,0x92,
++ 0x0c,0x91,0x08,0x10,0x04,0x0c,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,
++ 0x18,0x54,0x04,0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x51,0x04,0x10,
++ 0x00,0x10,0x04,0x10,0x00,0x00,0x00,0x94,0x14,0x53,0x04,0x10,0x00,0xd2,0x08,0x11,
++ 0x04,0x10,0x00,0x00,0x00,0x11,0x04,0x00,0x00,0x10,0x00,0x00,0x00,0xd0,0x06,0xcf,
++ 0x06,0x00,0x00,0xcf,0x86,0xd5,0x08,0x14,0x04,0x00,0x00,0x10,0x00,0xd4,0x10,0x53,
++ 0x04,0x10,0x00,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,0x00,0x93,0x10,0x52,
++ 0x04,0x10,0x01,0x91,0x08,0x10,0x04,0x10,0x01,0x10,0x00,0x00,0x00,0x00,0x00,0xd1,
++ 0x6c,0xd0,0x1e,0xcf,0x86,0x55,0x04,0x10,0x00,0x54,0x04,0x10,0x00,0x93,0x10,0x52,
++ 0x04,0x10,0xe6,0x51,0x04,0x10,0xe6,0x10,0x04,0x10,0xe6,0x10,0x00,0x10,0x00,0xcf,
++ 0x86,0xd5,0x24,0xd4,0x10,0x93,0x0c,0x52,0x04,0x10,0x00,0x11,0x04,0x10,0x00,0x00,
++ 0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,0x04,0x00,
++ 0x00,0x10,0x00,0x10,0x00,0xd4,0x14,0x93,0x10,0x92,0x0c,0x51,0x04,0x10,0x00,0x10,
++ 0x04,0x00,0x00,0x10,0x00,0x10,0x00,0x10,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x00,
++ 0x00,0x91,0x08,0x10,0x04,0x00,0x00,0x10,0x00,0x10,0x00,0xd0,0x0e,0xcf,0x86,0x95,
++ 0x08,0x14,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xd3,0x06,0xcf,
++ 0x06,0x00,0x00,0xd2,0x30,0xd1,0x0c,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x06,0x14,
++ 0x00,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,0x14,0x00,0x53,0x04,0x14,0x00,0x92,
++ 0x0c,0x51,0x04,0x14,0x00,0x10,0x04,0x14,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,
++ 0x06,0x00,0x00,0xd1,0x4c,0xd0,0x06,0xcf,0x06,0x0d,0x00,0xcf,0x86,0xd5,0x2c,0x94,
++ 0x28,0xd3,0x10,0x52,0x04,0x0d,0x00,0x91,0x08,0x10,0x04,0x0d,0x00,0x15,0x00,0x15,
++ 0x00,0xd2,0x0c,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,0x00,0x51,0x04,0x00,
++ 0x00,0x10,0x04,0x00,0x00,0x15,0x00,0x0d,0x00,0x54,0x04,0x0d,0x00,0x53,0x04,0x0d,
++ 0x00,0x52,0x04,0x0d,0x00,0x51,0x04,0x0d,0x00,0x10,0x04,0x0d,0x00,0x15,0x00,0xd0,
++ 0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x53,0x04,0x15,0x00,0x52,0x04,0x00,0x00,0x51,
++ 0x04,0x00,0x00,0x10,0x04,0x00,0x00,0x0d,0x00,0x0d,0x00,0x00,0x00,0xcf,0x86,0x55,
++ 0x04,0x00,0x00,0x94,0x14,0x93,0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x12,0x00,0x13,
++ 0x00,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xcf,0x06,0x12,0x00,0xe2,
++ 0xc6,0x01,0xd1,0x8e,0xd0,0x86,0xcf,0x86,0xd5,0x48,0xd4,0x06,0xcf,0x06,0x12,0x00,
++ 0xd3,0x06,0xcf,0x06,0x12,0x00,0xd2,0x06,0xcf,0x06,0x12,0x00,0xd1,0x06,0xcf,0x06,
++ 0x12,0x00,0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x55,0x04,0x12,0x00,0xd4,0x14,
++ 0x53,0x04,0x12,0x00,0x52,0x04,0x12,0x00,0x91,0x08,0x10,0x04,0x12,0x00,0x14,0x00,
++ 0x14,0x00,0x93,0x0c,0x92,0x08,0x11,0x04,0x14,0x00,0x15,0x00,0x15,0x00,0x00,0x00,
++ 0xd4,0x36,0xd3,0x06,0xcf,0x06,0x12,0x00,0xd2,0x2a,0xd1,0x06,0xcf,0x06,0x12,0x00,
++ 0xd0,0x06,0xcf,0x06,0x12,0x00,0xcf,0x86,0x55,0x04,0x12,0x00,0x54,0x04,0x12,0x00,
++ 0x93,0x10,0x92,0x0c,0x51,0x04,0x12,0x00,0x10,0x04,0x12,0x00,0x00,0x00,0x00,0x00,
++ 0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,
++ 0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0xa2,0xd4,0x9c,0xd3,0x74,
++ 0xd2,0x26,0xd1,0x20,0xd0,0x1a,0xcf,0x86,0x95,0x14,0x94,0x10,0x93,0x0c,0x92,0x08,
++ 0x11,0x04,0x0c,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0x13,0x00,0xcf,0x06,
++ 0x13,0x00,0xcf,0x06,0x13,0x00,0xd1,0x48,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x54,0x04,
++ 0x13,0x00,0x53,0x04,0x13,0x00,0x52,0x04,0x13,0x00,0x51,0x04,0x13,0x00,0x10,0x04,
++ 0x13,0x00,0x00,0x00,0x00,0x00,0xcf,0x86,0xd5,0x18,0x54,0x04,0x00,0x00,0x93,0x10,
++ 0x92,0x0c,0x51,0x04,0x15,0x00,0x10,0x04,0x15,0x00,0x00,0x00,0x00,0x00,0x00,0x00,
++ 0x94,0x0c,0x93,0x08,0x12,0x04,0x00,0x00,0x15,0x00,0x00,0x00,0x13,0x00,0xcf,0x06,
++ 0x13,0x00,0xd2,0x22,0xd1,0x06,0xcf,0x06,0x13,0x00,0xd0,0x06,0xcf,0x06,0x13,0x00,
++ 0xcf,0x86,0x55,0x04,0x13,0x00,0x54,0x04,0x13,0x00,0x53,0x04,0x13,0x00,0x12,0x04,
++ 0x13,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xd4,0x06,0xcf,0x06,
++ 0x00,0x00,0xd3,0x7f,0xd2,0x79,0xd1,0x34,0xd0,0x06,0xcf,0x06,0x10,0x00,0xcf,0x86,
++ 0x55,0x04,0x10,0x00,0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x51,0x04,0x10,0x00,
++ 0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0x52,0x04,0x10,0x00,
++ 0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,0x00,0x00,0xd0,0x3f,0xcf,0x86,0xd5,0x2c,
++ 0xd4,0x14,0x53,0x04,0x10,0x00,0x92,0x0c,0x91,0x08,0x10,0x04,0x10,0x00,0x00,0x00,
++ 0x00,0x00,0x00,0x00,0x53,0x04,0x10,0x00,0xd2,0x08,0x11,0x04,0x10,0x00,0x00,0x00,
++ 0x51,0x04,0x10,0x00,0x10,0x04,0x10,0x01,0x10,0x00,0x94,0x0d,0x93,0x09,0x12,0x05,
++ 0x10,0xff,0x00,0x00,0x00,0x00,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,
++ 0x00,0xcf,0x06,0x00,0x00,0xe1,0x96,0x04,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,
++ 0xcf,0x86,0xe5,0x33,0x04,0xe4,0x83,0x02,0xe3,0xf8,0x01,0xd2,0x26,0xd1,0x06,0xcf,
++ 0x06,0x05,0x00,0xd0,0x06,0xcf,0x06,0x05,0x00,0xcf,0x86,0x55,0x04,0x05,0x00,0x54,
++ 0x04,0x05,0x00,0x93,0x0c,0x52,0x04,0x05,0x00,0x11,0x04,0x05,0x00,0x00,0x00,0x00,
++ 0x00,0xd1,0xef,0xd0,0x2a,0xcf,0x86,0x55,0x04,0x05,0x00,0x94,0x20,0xd3,0x10,0x52,
++ 0x04,0x05,0x00,0x51,0x04,0x05,0x00,0x10,0x04,0x05,0x00,0x00,0x00,0x92,0x0c,0x91,
++ 0x08,0x10,0x04,0x00,0x00,0x0a,0x00,0x05,0x00,0x05,0x00,0x05,0x00,0xcf,0x86,0xd5,
++ 0x2a,0x54,0x04,0x05,0x00,0x53,0x04,0x05,0x00,0x52,0x04,0x05,0x00,0x51,0x04,0x05,
++ 0x00,0x10,0x0d,0x05,0xff,0xf0,0x9d,0x85,0x97,0xf0,0x9d,0x85,0xa5,0x00,0x05,0xff,
++ 0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0x00,0xd4,0x75,0xd3,0x61,0xd2,0x44,0xd1,
++ 0x22,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,0x9d,0x85,
++ 0xae,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,0x9d,0x85,0xaf,
++ 0x00,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,0x9d,0x85,
++ 0xb0,0x00,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,0x9d,0x85,0xb1,
++ 0x00,0xd1,0x15,0x10,0x11,0x05,0xff,0xf0,0x9d,0x85,0x98,0xf0,0x9d,0x85,0xa5,0xf0,
++ 0x9d,0x85,0xb2,0x00,0x05,0xd8,0x10,0x04,0x05,0xd8,0x05,0x01,0xd2,0x08,0x11,0x04,
++ 0x05,0x01,0x05,0x00,0x91,0x08,0x10,0x04,0x05,0x00,0x05,0xe2,0x05,0xd8,0xd3,0x12,
++ 0x92,0x0d,0x51,0x04,0x05,0xd8,0x10,0x04,0x05,0xd8,0x05,0xff,0x00,0x05,0xff,0x00,
++ 0x92,0x0e,0x51,0x05,0x05,0xff,0x00,0x10,0x05,0x05,0xff,0x00,0x05,0xdc,0x05,0xdc,
+ 0xd0,0x97,0xcf,0x86,0xd5,0x28,0x94,0x24,0xd3,0x18,0xd2,0x0c,0x51,0x04,0x05,0xdc,
+ 0x10,0x04,0x05,0xdc,0x05,0x00,0x91,0x08,0x10,0x04,0x05,0x00,0x05,0xe6,0x05,0xe6,
+ 0x92,0x08,0x11,0x04,0x05,0xe6,0x05,0xdc,0x05,0x00,0x05,0x00,0xd4,0x14,0x53,0x04,
+@@ -4080,20 +4090,21 @@ static const unsigned char utf8data[64080] = {
+ 0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,0xd2,0x06,0xcf,0x06,0x00,0x00,0xd1,0x06,0xcf,
+ 0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,
+ 0x04,0x00,0x00,0x53,0x04,0x00,0x00,0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x02,
+- 0x00,0xd4,0xc8,0xd3,0x70,0xd2,0x68,0xd1,0x60,0xd0,0x58,0xcf,0x86,0xd5,0x50,0xd4,
+- 0x4a,0xd3,0x44,0xd2,0x2a,0xd1,0x24,0xd0,0x1e,0xcf,0x86,0x95,0x18,0x94,0x14,0x93,
+- 0x10,0x92,0x0c,0x91,0x08,0x10,0x04,0x00,0x00,0x05,0x00,0x00,0x00,0x00,0x00,0x00,
+- 0x00,0x00,0x00,0x05,0x00,0xcf,0x06,0x05,0x00,0xcf,0x06,0x00,0x00,0xd1,0x06,0xcf,
+- 0x06,0x07,0x00,0xd0,0x06,0xcf,0x06,0x07,0x00,0xcf,0x86,0x55,0x04,0x07,0x00,0x14,
+- 0x04,0x07,0x00,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,0x00,0xcf,0x06,0x00,
+- 0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,
+- 0x06,0x00,0x00,0xd2,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xd1,0x08,0xcf,0x86,0xcf,
+- 0x06,0x00,0x00,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0x06,0xcf,
+- 0x06,0x00,0x00,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,0xd2,
+- 0x06,0xcf,0x06,0x00,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,0x00,
+- 0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x00,0x00,0x53,0x04,0x00,0x00,0x52,
+- 0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x02,0x00,0xcf,0x86,0xcf,0x06,0x02,0x00,0x81,
+- 0x80,0xcf,0x86,0x85,0x84,0xcf,0x86,0xcf,0x06,0x02,0x00,0x00,0x00,0x00,0x00,0x00
++ 0x00,0xd4,0xd9,0xd3,0x81,0xd2,0x79,0xd1,0x71,0xd0,0x69,0xcf,0x86,0xd5,0x60,0xd4,
++ 0x59,0xd3,0x52,0xd2,0x33,0xd1,0x2c,0xd0,0x25,0xcf,0x86,0x95,0x1e,0x94,0x19,0x93,
++ 0x14,0x92,0x0f,0x91,0x0a,0x10,0x05,0x00,0xff,0x00,0x05,0xff,0x00,0x00,0xff,0x00,
++ 0x00,0xff,0x00,0x00,0xff,0x00,0x00,0xff,0x00,0x05,0xff,0x00,0xcf,0x06,0x05,0xff,
++ 0x00,0xcf,0x06,0x00,0xff,0x00,0xd1,0x07,0xcf,0x06,0x07,0xff,0x00,0xd0,0x07,0xcf,
++ 0x06,0x07,0xff,0x00,0xcf,0x86,0x55,0x05,0x07,0xff,0x00,0x14,0x05,0x07,0xff,0x00,
++ 0x00,0xff,0x00,0xcf,0x06,0x00,0xff,0x00,0xcf,0x06,0x00,0xff,0x00,0xcf,0x06,0x00,
++ 0xff,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,
++ 0xcf,0x06,0x00,0x00,0xd2,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xd1,0x08,0xcf,0x86,
++ 0xcf,0x06,0x00,0x00,0xd0,0x08,0xcf,0x86,0xcf,0x06,0x00,0x00,0xcf,0x86,0xd5,0x06,
++ 0xcf,0x06,0x00,0x00,0xd4,0x06,0xcf,0x06,0x00,0x00,0xd3,0x06,0xcf,0x06,0x00,0x00,
++ 0xd2,0x06,0xcf,0x06,0x00,0x00,0xd1,0x06,0xcf,0x06,0x00,0x00,0xd0,0x06,0xcf,0x06,
++ 0x00,0x00,0xcf,0x86,0x55,0x04,0x00,0x00,0x54,0x04,0x00,0x00,0x53,0x04,0x00,0x00,
++ 0x52,0x04,0x00,0x00,0x11,0x04,0x00,0x00,0x02,0x00,0xcf,0x86,0xcf,0x06,0x02,0x00,
++ 0x81,0x80,0xcf,0x86,0x85,0x84,0xcf,0x86,0xcf,0x06,0x02,0x00,0x00,0x00,0x00,0x00
+ };
+
+ struct utf8data_table utf8_data_table = {
+diff --git a/include/acpi/pcc.h b/include/acpi/pcc.h
+index 9b373d172a7760..699c1a37b8e784 100644
+--- a/include/acpi/pcc.h
++++ b/include/acpi/pcc.h
+@@ -12,6 +12,7 @@
+ struct pcc_mbox_chan {
+ struct mbox_chan *mchan;
+ u64 shmem_base_addr;
++ void __iomem *shmem;
+ u64 shmem_size;
+ u32 latency;
+ u32 max_access_rate;
+@@ -31,11 +32,13 @@ struct pcc_mbox_chan {
+ #define PCC_CMD_COMPLETION_NOTIFY BIT(0)
+
+ #define MAX_PCC_SUBSPACES 256
++#define PCC_ACK_FLAG_MASK 0x1
+
+ #ifdef CONFIG_PCC
+ extern struct pcc_mbox_chan *
+ pcc_mbox_request_channel(struct mbox_client *cl, int subspace_id);
+ extern void pcc_mbox_free_channel(struct pcc_mbox_chan *chan);
++extern int pcc_mbox_ioremap(struct mbox_chan *chan);
+ #else
+ static inline struct pcc_mbox_chan *
+ pcc_mbox_request_channel(struct mbox_client *cl, int subspace_id)
+@@ -43,6 +46,10 @@ pcc_mbox_request_channel(struct mbox_client *cl, int subspace_id)
+ return ERR_PTR(-ENODEV);
+ }
+ static inline void pcc_mbox_free_channel(struct pcc_mbox_chan *chan) { }
++static inline int pcc_mbox_ioremap(struct mbox_chan *chan)
++{
++ return 0;
++};
+ #endif
+
+ #endif /* _PCC_H */
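[Editor's note: for context on the pcc_mbox_ioremap() addition above, a minimal usage sketch; illustrative only, not part of the patch. The client "cl", "subspace_id" and the error-handling shape are assumptions, and the mapped region presumably ends up cached in the new pchan->shmem field.]

	struct pcc_mbox_chan *pchan;
	int ret;

	pchan = pcc_mbox_request_channel(&cl, subspace_id);
	if (IS_ERR(pchan))
		return PTR_ERR(pchan);

	/* Map the channel's shared memory once, up front */
	ret = pcc_mbox_ioremap(pchan->mchan);
	if (ret) {
		pcc_mbox_free_channel(pchan);
		return ret;
	}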
+diff --git a/include/drm/display/drm_dp_mst_helper.h b/include/drm/display/drm_dp_mst_helper.h
+index f6a1cbb0f600fa..a80ba457a858f3 100644
+--- a/include/drm/display/drm_dp_mst_helper.h
++++ b/include/drm/display/drm_dp_mst_helper.h
+@@ -699,6 +699,13 @@ struct drm_dp_mst_topology_mgr {
+ */
+ bool payload_id_table_cleared : 1;
+
++ /**
++ * @reset_rx_state: The down request's reply and up request message
++ * receiver state must be reset after the topology manager has been
++ * removed. Protected by @lock.

++ */
++ bool reset_rx_state : 1;
++
+ /**
+ * @payload_count: The number of currently active payloads in hardware. This value is only
+ * intended to be used internally by MST helpers for payload tracking, and is only safe to
+diff --git a/include/drm/intel/xe_pciids.h b/include/drm/intel/xe_pciids.h
+index 644872a35c3526..4ba88d2dccd4b3 100644
+--- a/include/drm/intel/xe_pciids.h
++++ b/include/drm/intel/xe_pciids.h
+@@ -120,7 +120,6 @@
+
+ /* RPL-P */
+ #define XE_RPLP_IDS(MACRO__, ...) \
+- XE_RPLU_IDS(MACRO__, ## __VA_ARGS__), \
+ MACRO__(0xA720, ## __VA_ARGS__), \
+ MACRO__(0xA7A0, ## __VA_ARGS__), \
+ MACRO__(0xA7A8, ## __VA_ARGS__), \
+@@ -175,18 +174,38 @@
+ XE_ATS_M150_IDS(MACRO__, ## __VA_ARGS__),\
+ XE_ATS_M75_IDS(MACRO__, ## __VA_ARGS__)
+
+-/* MTL / ARL */
++/* ARL */
++#define XE_ARL_IDS(MACRO__, ...) \
++ MACRO__(0x7D41, ## __VA_ARGS__), \
++ MACRO__(0x7D51, ## __VA_ARGS__), \
++ MACRO__(0x7D67, ## __VA_ARGS__), \
++ MACRO__(0x7DD1, ## __VA_ARGS__), \
++ MACRO__(0xB640, ## __VA_ARGS__)
++
++/* MTL */
+ #define XE_MTL_IDS(MACRO__, ...) \
+ MACRO__(0x7D40, ## __VA_ARGS__), \
+- MACRO__(0x7D41, ## __VA_ARGS__), \
+ MACRO__(0x7D45, ## __VA_ARGS__), \
+- MACRO__(0x7D51, ## __VA_ARGS__), \
+ MACRO__(0x7D55, ## __VA_ARGS__), \
+ MACRO__(0x7D60, ## __VA_ARGS__), \
+- MACRO__(0x7D67, ## __VA_ARGS__), \
+- MACRO__(0x7DD1, ## __VA_ARGS__), \
+ MACRO__(0x7DD5, ## __VA_ARGS__)
+
++/* PVC */
++#define XE_PVC_IDS(MACRO__, ...) \
++ MACRO__(0x0B69, ## __VA_ARGS__), \
++ MACRO__(0x0B6E, ## __VA_ARGS__), \
++ MACRO__(0x0BD4, ## __VA_ARGS__), \
++ MACRO__(0x0BD5, ## __VA_ARGS__), \
++ MACRO__(0x0BD6, ## __VA_ARGS__), \
++ MACRO__(0x0BD7, ## __VA_ARGS__), \
++ MACRO__(0x0BD8, ## __VA_ARGS__), \
++ MACRO__(0x0BD9, ## __VA_ARGS__), \
++ MACRO__(0x0BDA, ## __VA_ARGS__), \
++ MACRO__(0x0BDB, ## __VA_ARGS__), \
++ MACRO__(0x0BE0, ## __VA_ARGS__), \
++ MACRO__(0x0BE1, ## __VA_ARGS__), \
++ MACRO__(0x0BE5, ## __VA_ARGS__)
++
+ #define XE_LNL_IDS(MACRO__, ...) \
+ MACRO__(0x6420, ## __VA_ARGS__), \
+ MACRO__(0x64A0, ## __VA_ARGS__), \
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index e84a93c4013207..6b4bc85f4999ba 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -195,7 +195,7 @@ struct gendisk {
+ unsigned int nr_zones;
+ unsigned int zone_capacity;
+ unsigned int last_zone_capacity;
+- unsigned long *conv_zones_bitmap;
++ unsigned long __rcu *conv_zones_bitmap;
+ unsigned int zone_wplugs_hash_bits;
+ spinlock_t zone_wplugs_lock;
+ struct mempool_s *zone_wplugs_pool;
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index bc2e3dab0487ea..cbe2350912460b 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1300,8 +1300,12 @@ void *__bpf_dynptr_data_rw(const struct bpf_dynptr_kern *ptr, u32 len);
+ bool __bpf_dynptr_is_rdonly(const struct bpf_dynptr_kern *ptr);
+
+ #ifdef CONFIG_BPF_JIT
+-int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
+-int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr);
++int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog);
++int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog);
+ struct bpf_trampoline *bpf_trampoline_get(u64 key,
+ struct bpf_attach_target_info *tgt_info);
+ void bpf_trampoline_put(struct bpf_trampoline *tr);
+@@ -1383,12 +1387,14 @@ void bpf_jit_uncharge_modmem(u32 size);
+ bool bpf_prog_has_trampoline(const struct bpf_prog *prog);
+ #else
+ static inline int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
+- struct bpf_trampoline *tr)
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog)
+ {
+ return -ENOTSUPP;
+ }
+ static inline int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
+- struct bpf_trampoline *tr)
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog)
+ {
+ return -ENOTSUPP;
+ }
+@@ -1492,6 +1498,9 @@ struct bpf_prog_aux {
+ bool xdp_has_frags;
+ bool exception_cb;
+ bool exception_boundary;
++ bool is_extended; /* true if extended by freplace program */
++ u64 prog_array_member_cnt; /* counts how many times as member of prog_array */
++ struct mutex ext_mutex; /* mutex for is_extended and prog_array_member_cnt */
+ struct bpf_arena *arena;
+ /* BTF_KIND_FUNC_PROTO for valid attach_btf_id */
+ const struct btf_type *attach_func_proto;
+diff --git a/include/linux/cleanup.h b/include/linux/cleanup.h
+index 518bd1fd86fbe0..0cc66f8d28e7b6 100644
+--- a/include/linux/cleanup.h
++++ b/include/linux/cleanup.h
+@@ -285,14 +285,20 @@ static inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \
+ * similar to scoped_guard(), except it does fail when the lock
+ * acquire fails.
+ *
++ * Only for conditional locks.
+ */
+
++#define __DEFINE_CLASS_IS_CONDITIONAL(_name, _is_cond) \
++static __maybe_unused const bool class_##_name##_is_conditional = _is_cond
++
+ #define DEFINE_GUARD(_name, _type, _lock, _unlock) \
++ __DEFINE_CLASS_IS_CONDITIONAL(_name, false); \
+ DEFINE_CLASS(_name, _type, if (_T) { _unlock; }, ({ _lock; _T; }), _type _T); \
+ static inline void * class_##_name##_lock_ptr(class_##_name##_t *_T) \
+ { return (void *)(__force unsigned long)*_T; }
+
+ #define DEFINE_GUARD_COND(_name, _ext, _condlock) \
++ __DEFINE_CLASS_IS_CONDITIONAL(_name##_ext, true); \
+ EXTEND_CLASS(_name, _ext, \
+ ({ void *_t = _T; if (_T && !(_condlock)) _t = NULL; _t; }), \
+ class_##_name##_t _T) \
+@@ -303,17 +309,40 @@ static inline class_##_name##_t class_##_name##ext##_constructor(_init_args) \
+ CLASS(_name, __UNIQUE_ID(guard))
+
+ #define __guard_ptr(_name) class_##_name##_lock_ptr
++#define __is_cond_ptr(_name) class_##_name##_is_conditional
+
+-#define scoped_guard(_name, args...) \
+- for (CLASS(_name, scope)(args), \
+- *done = NULL; __guard_ptr(_name)(&scope) && !done; done = (void *)1)
+-
+-#define scoped_cond_guard(_name, _fail, args...) \
+- for (CLASS(_name, scope)(args), \
+- *done = NULL; !done; done = (void *)1) \
+- if (!__guard_ptr(_name)(&scope)) _fail; \
+- else
+-
++/*
++ * Helper macro for scoped_guard().
++ *
++ * Note that the "!__is_cond_ptr(_name)" part of the condition ensures that
++ * the compiler can be sure that, for unconditional locks, the body of the
++ * loop (caller-provided code glued to the else clause) cannot be skipped.
++ * It is needed because the other part - "__guard_ptr(_name)(&scope)" - is
++ * too hard to deduce (even though it could be proven true for unconditional
++ * locks).
++ */
++#define __scoped_guard(_name, _label, args...) \
++ for (CLASS(_name, scope)(args); \
++ __guard_ptr(_name)(&scope) || !__is_cond_ptr(_name); \
++ ({ goto _label; })) \
++ if (0) { \
++_label: \
++ break; \
++ } else
++
++#define scoped_guard(_name, args...) \
++ __scoped_guard(_name, __UNIQUE_ID(label), args)
++
++#define __scoped_cond_guard(_name, _fail, _label, args...) \
++ for (CLASS(_name, scope)(args); true; ({ goto _label; })) \
++ if (!__guard_ptr(_name)(&scope)) { \
++ BUILD_BUG_ON(!__is_cond_ptr(_name)); \
++ _fail; \
++_label: \
++ break; \
++ } else
++
++#define scoped_cond_guard(_name, _fail, args...) \
++ __scoped_cond_guard(_name, _fail, __UNIQUE_ID(label), args)
+ /*
+ * Additional helper macros for generating lock guards with types, either for
+ * locks that don't have a native type (eg. RCU, preempt) or those that need a
+@@ -369,14 +398,17 @@ static inline class_##_name##_t class_##_name##_constructor(void) \
+ }
+
+ #define DEFINE_LOCK_GUARD_1(_name, _type, _lock, _unlock, ...) \
++__DEFINE_CLASS_IS_CONDITIONAL(_name, false); \
+ __DEFINE_UNLOCK_GUARD(_name, _type, _unlock, __VA_ARGS__) \
+ __DEFINE_LOCK_GUARD_1(_name, _type, _lock)
+
+ #define DEFINE_LOCK_GUARD_0(_name, _lock, _unlock, ...) \
++__DEFINE_CLASS_IS_CONDITIONAL(_name, false); \
+ __DEFINE_UNLOCK_GUARD(_name, void, _unlock, __VA_ARGS__) \
+ __DEFINE_LOCK_GUARD_0(_name, _lock)
+
+ #define DEFINE_LOCK_GUARD_1_COND(_name, _ext, _condlock) \
++ __DEFINE_CLASS_IS_CONDITIONAL(_name##_ext, true); \
+ EXTEND_CLASS(_name, _ext, \
+ ({ class_##_name##_t _t = { .lock = l }, *_T = &_t;\
+ if (_T->lock && !(_condlock)) _T->lock = NULL; \
+diff --git a/include/linux/clocksource.h b/include/linux/clocksource.h
+index d35b677b08fe13..c846436b64593e 100644
+--- a/include/linux/clocksource.h
++++ b/include/linux/clocksource.h
+@@ -49,6 +49,7 @@ struct module;
+ * @archdata: Optional arch-specific data
+ * @max_cycles: Maximum safe cycle value which won't overflow on
+ * multiplication
++ * @max_raw_delta: Maximum safe delta value for negative motion detection
+ * @name: Pointer to clocksource name
+ * @list: List head for registration (internal)
+ * @freq_khz: Clocksource frequency in khz.
+@@ -109,6 +110,7 @@ struct clocksource {
+ struct arch_clocksource_data archdata;
+ #endif
+ u64 max_cycles;
++ u64 max_raw_delta;
+ const char *name;
+ struct list_head list;
+ u32 freq_khz;
+diff --git a/include/linux/eeprom_93cx6.h b/include/linux/eeprom_93cx6.h
+index c860c72a921d03..3a485cc0e0fa0b 100644
+--- a/include/linux/eeprom_93cx6.h
++++ b/include/linux/eeprom_93cx6.h
+@@ -11,6 +11,8 @@
+ Supported chipsets: 93c46, 93c56 and 93c66.
+ */
+
++#include <linux/bits.h>
++
+ /*
+ * EEPROM operation defines.
+ */
+@@ -34,6 +36,7 @@
+ * @register_write(struct eeprom_93cx6 *eeprom): handler to
+ * write to the eeprom register by using all reg_* fields.
+ * @width: eeprom width, should be one of the PCI_EEPROM_WIDTH_* defines
++ * @quirks: eeprom or controller quirks
+ * @drive_data: Set if we're driving the data line.
+ * @reg_data_in: register field to indicate data input
+ * @reg_data_out: register field to indicate data output
+@@ -50,6 +53,9 @@ struct eeprom_93cx6 {
+ void (*register_write)(struct eeprom_93cx6 *eeprom);
+
+ int width;
++ unsigned int quirks;
++/* Some EEPROMs require an extra clock cycle before reading */
++#define PCI_EEPROM_QUIRK_EXTRA_READ_CYCLE BIT(0)
+
+ char drive_data;
+ char reg_data_in;
+@@ -71,3 +77,8 @@ extern void eeprom_93cx6_wren(struct eeprom_93cx6 *eeprom, bool enable);
+
+ extern void eeprom_93cx6_write(struct eeprom_93cx6 *eeprom,
+ u8 addr, u16 data);
++
++static inline bool has_quirk_extra_read_cycle(struct eeprom_93cx6 *eeprom)
++{
++ return eeprom->quirks & PCI_EEPROM_QUIRK_EXTRA_READ_CYCLE;
++}
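[Editor's note: a sketch of how a driver would opt in to the new quirk; the my_* names are hypothetical, only the eeprom_93cx6 fields and the PCI_EEPROM_* constants are from the header.]

	static void my_wifi_eeprom_setup(struct my_wifi_priv *priv)
	{
		priv->eeprom.data = priv;
		priv->eeprom.register_read = my_eeprom_register_read;
		priv->eeprom.register_write = my_eeprom_register_write;
		priv->eeprom.width = PCI_EEPROM_WIDTH_93C66;
		/* This controller needs the extra clock cycle before each
		 * read, so set the new quirk bit. */
		priv->eeprom.quirks |= PCI_EEPROM_QUIRK_EXTRA_READ_CYCLE;
	}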
+diff --git a/include/linux/eventpoll.h b/include/linux/eventpoll.h
+index 3337745d81bd69..0c0d00fcd131f9 100644
+--- a/include/linux/eventpoll.h
++++ b/include/linux/eventpoll.h
+@@ -42,7 +42,7 @@ static inline void eventpoll_release(struct file *file)
+ * because the file is on the way to be removed and nobody (but
+ * eventpoll) still has a reference to this file.
+ */
+- if (likely(!file->f_ep))
++ if (likely(!READ_ONCE(file->f_ep)))
+ return;
+
+ /*
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index 3b2ad444c002ee..c24f8bc01045df 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -24,6 +24,7 @@
+ #define NEW_ADDR ((block_t)-1) /* used as block_t addresses */
+ #define COMPRESS_ADDR ((block_t)-2) /* used as compressed data flag */
+
++#define F2FS_BLKSIZE_MASK (F2FS_BLKSIZE - 1)
+ #define F2FS_BYTES_TO_BLK(bytes) ((unsigned long long)(bytes) >> F2FS_BLKSIZE_BITS)
+ #define F2FS_BLK_TO_BYTES(blk) ((unsigned long long)(blk) << F2FS_BLKSIZE_BITS)
+ #define F2FS_BLK_END_BYTES(blk) (F2FS_BLK_TO_BYTES(blk + 1) - 1)
+diff --git a/include/linux/fanotify.h b/include/linux/fanotify.h
+index 4f1c4f60311808..89ff45bd6f01ba 100644
+--- a/include/linux/fanotify.h
++++ b/include/linux/fanotify.h
+@@ -36,6 +36,7 @@
+ #define FANOTIFY_ADMIN_INIT_FLAGS (FANOTIFY_PERM_CLASSES | \
+ FAN_REPORT_TID | \
+ FAN_REPORT_PIDFD | \
++ FAN_REPORT_FD_ERROR | \
+ FAN_UNLIMITED_QUEUE | \
+ FAN_UNLIMITED_MARKS)
+
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index 121d5b8bc86753..a7d60a1c72a09a 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -359,6 +359,7 @@ struct hid_item {
+ * | @HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP:
+ * | @HID_QUIRK_HAVE_SPECIAL_DRIVER:
+ * | @HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE:
++ * | @HID_QUIRK_IGNORE_SPECIAL_DRIVER:
+ * | @HID_QUIRK_FULLSPEED_INTERVAL:
+ * | @HID_QUIRK_NO_INIT_REPORTS:
+ * | @HID_QUIRK_NO_IGNORE:
+@@ -384,6 +385,7 @@ struct hid_item {
+ #define HID_QUIRK_HAVE_SPECIAL_DRIVER BIT(19)
+ #define HID_QUIRK_INCREMENT_USAGE_ON_DUPLICATE BIT(20)
+ #define HID_QUIRK_NOINVERT BIT(21)
++#define HID_QUIRK_IGNORE_SPECIAL_DRIVER BIT(22)
+ #define HID_QUIRK_FULLSPEED_INTERVAL BIT(28)
+ #define HID_QUIRK_NO_INIT_REPORTS BIT(29)
+ #define HID_QUIRK_NO_IGNORE BIT(30)
+diff --git a/include/linux/i3c/master.h b/include/linux/i3c/master.h
+index 2a1ed05d5782a8..6e5328c6c6afd2 100644
+--- a/include/linux/i3c/master.h
++++ b/include/linux/i3c/master.h
+@@ -298,7 +298,8 @@ enum i3c_open_drain_speed {
+ * @I3C_ADDR_SLOT_I2C_DEV: address is assigned to an I2C device
+ * @I3C_ADDR_SLOT_I3C_DEV: address is assigned to an I3C device
+ * @I3C_ADDR_SLOT_STATUS_MASK: address slot mask
+- *
++ * @I3C_ADDR_SLOT_EXT_DESIRED: marks an address that some device prefers, for
++ * example via the "assigned-address" device tree property.
+ * On an I3C bus, addresses are assigned dynamically, and we need to know which
+ * addresses are free to use and which ones are already assigned.
+ *
+@@ -311,8 +312,12 @@ enum i3c_addr_slot_status {
+ I3C_ADDR_SLOT_I2C_DEV,
+ I3C_ADDR_SLOT_I3C_DEV,
+ I3C_ADDR_SLOT_STATUS_MASK = 3,
++ I3C_ADDR_SLOT_EXT_STATUS_MASK = 7,
++ I3C_ADDR_SLOT_EXT_DESIRED = BIT(2),
+ };
+
++#define I3C_ADDR_SLOT_STATUS_BITS 4
++
+ /**
+ * struct i3c_bus - I3C bus object
+ * @cur_master: I3C master currently driving the bus. Since I3C is multi-master
+@@ -354,7 +359,7 @@ enum i3c_addr_slot_status {
+ struct i3c_bus {
+ struct i3c_dev_desc *cur_master;
+ int id;
+- unsigned long addrslots[((I2C_MAX_ADDR + 1) * 2) / BITS_PER_LONG];
++ unsigned long addrslots[((I2C_MAX_ADDR + 1) * I3C_ADDR_SLOT_STATUS_BITS) / BITS_PER_LONG];
+ enum i3c_bus_mode mode;
+ struct {
+ unsigned long i3c;
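[Editor's note: with I3C_ADDR_SLOT_STATUS_BITS now 4, each address occupies a nibble in addrslots, and since 4 divides BITS_PER_LONG a slot never straddles a word. An illustrative read of the extended status; the helper name is hypothetical, the real accessors live in the I3C core.]

	static unsigned int example_slot_status(const struct i3c_bus *bus, u16 addr)
	{
		unsigned int bitpos = addr * I3C_ADDR_SLOT_STATUS_BITS;
		unsigned long status;

		status = bus->addrslots[bitpos / BITS_PER_LONG];
		status >>= bitpos % BITS_PER_LONG;

		/* Low two bits: base status; bit 2: the new "desired" flag */
		return status & I3C_ADDR_SLOT_EXT_STATUS_MASK;
	}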
+diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
+index c189d36ad55ea6..968de0cde25d58 100644
+--- a/include/linux/io_uring/cmd.h
++++ b/include/linux/io_uring/cmd.h
+@@ -43,7 +43,7 @@ int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
+ * Note: the caller should never hard code @issue_flags and is only allowed
+ * to pass the mask provided by the core io_uring code.
+ */
+-void io_uring_cmd_done(struct io_uring_cmd *cmd, ssize_t ret, ssize_t res2,
++void io_uring_cmd_done(struct io_uring_cmd *cmd, ssize_t ret, u64 res2,
+ unsigned issue_flags);
+
+ void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
+@@ -67,7 +67,7 @@ static inline int io_uring_cmd_import_fixed(u64 ubuf, unsigned long len, int rw,
+ return -EOPNOTSUPP;
+ }
+ static inline void io_uring_cmd_done(struct io_uring_cmd *cmd, ssize_t ret,
+- ssize_t ret2, unsigned issue_flags)
++ u64 ret2, unsigned issue_flags)
+ {
+ }
+ static inline void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
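[Editor's note: given the widened res2 above, a consumer's completion path looks like this sketch; my_cmd_complete is hypothetical, and per the doc comment issue_flags must be the mask handed in by core io_uring, never hard-coded.]

	static void my_cmd_complete(struct io_uring_cmd *cmd, int err,
				    u64 cookie, unsigned int issue_flags)
	{
		/* cookie is a full 64-bit second result; it is no longer
		 * truncated through ssize_t on 32-bit kernels. */
		io_uring_cmd_done(cmd, err, cookie, issue_flags);
	}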
+diff --git a/include/linux/leds.h b/include/linux/leds.h
+index e5968c3ed4ae08..2337f516fa7c2c 100644
+--- a/include/linux/leds.h
++++ b/include/linux/leds.h
+@@ -238,7 +238,7 @@ struct led_classdev {
+ struct kernfs_node *brightness_hw_changed_kn;
+ #endif
+
+- /* Ensures consistent access to the LED Flash Class device */
++ /* Ensures consistent access to the LED class device */
+ struct mutex led_access;
+ };
+
+diff --git a/include/linux/mmc/card.h b/include/linux/mmc/card.h
+index f34407cc27888d..eb67d3d5ff5b22 100644
+--- a/include/linux/mmc/card.h
++++ b/include/linux/mmc/card.h
+@@ -35,7 +35,7 @@ struct mmc_csd {
+ unsigned int wp_grp_size;
+ unsigned int read_blkbits;
+ unsigned int write_blkbits;
+- unsigned int capacity;
++ sector_t capacity;
+ unsigned int read_partial:1,
+ read_misalign:1,
+ write_partial:1,
+@@ -294,6 +294,7 @@ struct mmc_card {
+ #define MMC_QUIRK_BROKEN_SD_DISCARD (1<<14) /* Disable broken SD discard support */
+ #define MMC_QUIRK_BROKEN_SD_CACHE (1<<15) /* Disable broken SD cache support */
+ #define MMC_QUIRK_BROKEN_CACHE_FLUSH (1<<16) /* Don't flush cache until the write has occurred */
++#define MMC_QUIRK_BROKEN_SD_POWEROFF_NOTIFY (1<<17) /* Disable broken SD poweroff notify support */
+
+ bool written_flag; /* Indicates eMMC has been written since power on */
+ bool reenable_cmdq; /* Re-enable Command Queue */
+diff --git a/include/linux/mmc/sd.h b/include/linux/mmc/sd.h
+index 6727576a875559..865cc0ca8543d1 100644
+--- a/include/linux/mmc/sd.h
++++ b/include/linux/mmc/sd.h
+@@ -36,6 +36,7 @@
+ /* OCR bit definitions */
+ #define SD_OCR_S18R (1 << 24) /* 1.8V switching request */
+ #define SD_ROCR_S18A SD_OCR_S18R /* 1.8V switching accepted by card */
++#define SD_OCR_2T (1 << 27) /* HO2T/CO2T - SDUC support */
+ #define SD_OCR_XPC (1 << 28) /* SDXC power control */
+ #define SD_OCR_CCS (1 << 30) /* Card Capacity Status */
+
+diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
+index cc839e4365c182..74aa9fbbdae70b 100644
+--- a/include/linux/page-flags.h
++++ b/include/linux/page-flags.h
+@@ -306,7 +306,7 @@ static const unsigned long *const_folio_flags(const struct folio *folio,
+ {
+ const struct page *page = &folio->page;
+
+- VM_BUG_ON_PGFLAGS(PageTail(page), page);
++ VM_BUG_ON_PGFLAGS(page->compound_head & 1, page);
+ VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags), page);
+ return &page[n].flags;
+ }
+@@ -315,7 +315,7 @@ static unsigned long *folio_flags(struct folio *folio, unsigned n)
+ {
+ struct page *page = &folio->page;
+
+- VM_BUG_ON_PGFLAGS(PageTail(page), page);
++ VM_BUG_ON_PGFLAGS(page->compound_head & 1, page);
+ VM_BUG_ON_PGFLAGS(n > 0 && !test_bit(PG_head, &page->flags), page);
+ return &page[n].flags;
+ }
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 573b4c4c2be61f..4e77c4230c0a19 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -2609,6 +2609,12 @@ pci_host_bridge_acpi_msi_domain(struct pci_bus *bus) { return NULL; }
+ static inline bool pci_pr3_present(struct pci_dev *pdev) { return false; }
+ #endif
+
++#if defined(CONFIG_X86) && defined(CONFIG_ACPI)
++bool arch_pci_dev_is_removable(struct pci_dev *pdev);
++#else
++static inline bool arch_pci_dev_is_removable(struct pci_dev *pdev) { return false; }
++#endif
++
+ #ifdef CONFIG_EEH
+ static inline struct eeh_dev *pci_dev_to_eeh_dev(struct pci_dev *pdev)
+ {
+diff --git a/include/linux/scatterlist.h b/include/linux/scatterlist.h
+index e61d164622db47..1bad36e3e4ef1f 100644
+--- a/include/linux/scatterlist.h
++++ b/include/linux/scatterlist.h
+@@ -313,7 +313,7 @@ static inline void sg_dma_mark_bus_address(struct scatterlist *sg)
+ }
+
+ /**
+- * sg_unmark_bus_address - Unmark the scatterlist entry as a bus address
++ * sg_dma_unmark_bus_address - Unmark the scatterlist entry as a bus address
+ * @sg: SG entry
+ *
+ * Description:
+diff --git a/include/linux/stackdepot.h b/include/linux/stackdepot.h
+index e9ec32fb97d4a7..2cc21ffcdaf9e4 100644
+--- a/include/linux/stackdepot.h
++++ b/include/linux/stackdepot.h
+@@ -147,7 +147,7 @@ static inline int stack_depot_early_init(void) { return 0; }
+ * If the provided stack trace comes from the interrupt context, only the part
+ * up to the interrupt entry is saved.
+ *
+- * Context: Any context, but setting STACK_DEPOT_FLAG_CAN_ALLOC is required if
++ * Context: Any context, but unsetting STACK_DEPOT_FLAG_CAN_ALLOC is required if
+ * alloc_pages() cannot be used from the current context. Currently
+ * this is the case for contexts where neither %GFP_ATOMIC nor
+ * %GFP_NOWAIT can be used (NMI, raw_spin_lock).
+@@ -156,7 +156,7 @@ static inline int stack_depot_early_init(void) { return 0; }
+ */
+ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
+ unsigned int nr_entries,
+- gfp_t gfp_flags,
++ gfp_t alloc_flags,
+ depot_flags_t depot_flags);
+
+ /**
+@@ -175,7 +175,7 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
+ * Return: Handle of the stack trace stored in depot, 0 on failure
+ */
+ depot_stack_handle_t stack_depot_save(unsigned long *entries,
+- unsigned int nr_entries, gfp_t gfp_flags);
++ unsigned int nr_entries, gfp_t alloc_flags);
+
+ /**
+ * __stack_depot_get_stack_record - Get a pointer to a stack_record struct
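[Editor's note: a sketch of the corrected contract documented above. From a context where alloc_pages() is unusable (NMI, raw spinlock), leave STACK_DEPOT_FLAG_CAN_ALLOC unset and tolerate a zero handle when the preallocated pool is exhausted; the helper name is an assumption.]

	#include <linux/stackdepot.h>
	#include <linux/stacktrace.h>

	static depot_stack_handle_t save_trace_atomic(void)
	{
		unsigned long entries[16];
		unsigned int nr;

		nr = stack_trace_save(entries, ARRAY_SIZE(entries), 0);
		/* No CAN_ALLOC: the depot must not call alloc_pages() here,
		 * so a 0 return (save failed) must be handled by callers. */
		return stack_depot_save_flags(entries, nr, 0, 0);
	}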
+diff --git a/include/linux/timekeeper_internal.h b/include/linux/timekeeper_internal.h
+index 902c20ef495acb..715e0919972e4c 100644
+--- a/include/linux/timekeeper_internal.h
++++ b/include/linux/timekeeper_internal.h
+@@ -68,9 +68,6 @@ struct tk_read_base {
+ * shifted nano seconds.
+ * @ntp_error_shift: Shift conversion between clock shifted nano seconds and
+ * ntp shifted nano seconds.
+- * @last_warning: Warning ratelimiter (DEBUG_TIMEKEEPING)
+- * @underflow_seen: Underflow warning flag (DEBUG_TIMEKEEPING)
+- * @overflow_seen: Overflow warning flag (DEBUG_TIMEKEEPING)
+ *
+ * Note: For timespec(64) based interfaces wall_to_monotonic is what
+ * we need to add to xtime (or xtime corrected for sub jiffy times)
+@@ -124,18 +121,6 @@ struct timekeeper {
+ u32 ntp_err_mult;
+ /* Flag used to avoid updating NTP twice with same second */
+ u32 skip_second_overflow;
+-#ifdef CONFIG_DEBUG_TIMEKEEPING
+- long last_warning;
+- /*
+- * These simple flag variables are managed
+- * without locks, which is racy, but they are
+- * ok since we don't really care about being
+- * super precise about how many events were
+- * seen, just that a problem was observed.
+- */
+- int underflow_seen;
+- int overflow_seen;
+-#endif
+ };
+
+ #ifdef CONFIG_GENERIC_TIME_VSYSCALL
+diff --git a/include/linux/usb/chipidea.h b/include/linux/usb/chipidea.h
+index 5a7f96684ea226..ebdfef124b2bc0 100644
+--- a/include/linux/usb/chipidea.h
++++ b/include/linux/usb/chipidea.h
+@@ -65,6 +65,7 @@ struct ci_hdrc_platform_data {
+ #define CI_HDRC_PHY_VBUS_CONTROL BIT(16)
+ #define CI_HDRC_HAS_PORTSC_PEC_MISSED BIT(17)
+ #define CI_HDRC_FORCE_VBUS_ACTIVE_ALWAYS BIT(18)
++#define CI_HDRC_HAS_SHORT_PKT_LIMIT BIT(19)
+ enum usb_dr_mode dr_mode;
+ #define CI_HDRC_CONTROLLER_RESET_EVENT 0
+ #define CI_HDRC_CONTROLLER_STOPPED_EVENT 1
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index a1864cff616aee..5bb4eaa52e14cf 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -301,6 +301,20 @@ enum {
+ */
+ HCI_QUIRK_BROKEN_SET_RPA_TIMEOUT,
+
++ /*
++ * When this quirk is set, the HCI_OP_LE_EXT_CREATE_CONN command is
++ * disabled. This is required for the Actions Semiconductor ATS2851
++ * based controllers, which erroneously claims to support it.
++ */
++ HCI_QUIRK_BROKEN_EXT_CREATE_CONN,
++
++ /*
++ * When this quirk is set, the command WRITE_AUTH_PAYLOAD_TIMEOUT is
++ * skipped. This is required for the Actions Semiconductor ATS2851
++ * based controllers, due to a race condition in pairing process.
++ */
++ HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT,
++
+ /* When this quirk is set, MSFT extension monitor tracking by
+ * address filter is supported. Since tracking quantity of each
+ * pattern is limited, this feature supports tracking multiple
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index 4c185a08c3a3af..c95f7e6ba25514 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1934,8 +1934,8 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ !test_bit(HCI_QUIRK_BROKEN_EXT_SCAN, &(dev)->quirks))
+
+ /* Use ext create connection if command is supported */
+-#define use_ext_conn(dev) ((dev)->commands[37] & 0x80)
+-
++#define use_ext_conn(dev) (((dev)->commands[37] & 0x80) && \
++ !test_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, &(dev)->quirks))
+ /* Extended advertising support */
+ #define ext_adv_capable(dev) (((dev)->le_features[1] & HCI_LE_EXT_ADV))
+
+@@ -1948,8 +1948,10 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ * C24: Mandatory if the LE Controller supports Connection State and either
+ * LE Feature (LL Privacy) or LE Feature (Extended Advertising) is supported
+ */
+-#define use_enhanced_conn_complete(dev) (ll_privacy_capable(dev) || \
+- ext_adv_capable(dev))
++#define use_enhanced_conn_complete(dev) ((ll_privacy_capable(dev) || \
++ ext_adv_capable(dev)) && \
++ !test_bit(HCI_QUIRK_BROKEN_EXT_CREATE_CONN, \
++ &(dev)->quirks))
+
+ /* Periodic advertising support */
+ #define per_adv_capable(dev) (((dev)->le_features[1] & HCI_LE_PERIODIC_ADV))
+diff --git a/include/net/netfilter/nf_tables_core.h b/include/net/netfilter/nf_tables_core.h
+index ff27cb2e166207..03b6165756fc5d 100644
+--- a/include/net/netfilter/nf_tables_core.h
++++ b/include/net/netfilter/nf_tables_core.h
+@@ -161,6 +161,7 @@ enum {
+ };
+
+ struct nft_inner_tun_ctx {
++ unsigned long cookie;
+ u16 type;
+ u16 inner_tunoff;
+ u16 inner_lloff;
+diff --git a/include/net/tcp_ao.h b/include/net/tcp_ao.h
+index 1d46460d0fefab..df655ce6987d37 100644
+--- a/include/net/tcp_ao.h
++++ b/include/net/tcp_ao.h
+@@ -183,7 +183,8 @@ int tcp_ao_hash_skb(unsigned short int family,
+ const u8 *tkey, int hash_offset, u32 sne);
+ int tcp_parse_ao(struct sock *sk, int cmd, unsigned short int family,
+ sockptr_t optval, int optlen);
+-struct tcp_ao_key *tcp_ao_established_key(struct tcp_ao_info *ao,
++struct tcp_ao_key *tcp_ao_established_key(const struct sock *sk,
++ struct tcp_ao_info *ao,
+ int sndid, int rcvid);
+ int tcp_ao_copy_all_matching(const struct sock *sk, struct sock *newsk,
+ struct request_sock *req, struct sk_buff *skb,
+diff --git a/include/sound/soc_sdw_utils.h b/include/sound/soc_sdw_utils.h
+index f68c1f193b3b46..0150b3735b4bd5 100644
+--- a/include/sound/soc_sdw_utils.h
++++ b/include/sound/soc_sdw_utils.h
+@@ -28,6 +28,7 @@
+ * - SOC_SDW_CODEC_SPKR | SOF_SIDECAR_AMPS - Not currently supported
+ */
+ #define SOC_SDW_SIDECAR_AMPS BIT(16)
++#define SOC_SDW_CODEC_MIC BIT(17)
+
+ #define SOC_SDW_UNUSED_DAI_ID -1
+ #define SOC_SDW_JACK_OUT_DAI_ID 0
+@@ -59,6 +60,7 @@ struct asoc_sdw_dai_info {
+ int (*rtd_init)(struct snd_soc_pcm_runtime *rtd, struct snd_soc_dai *dai);
+ bool rtd_init_done; /* Indicate that the rtd_init callback is done */
+ unsigned long quirk;
++ bool quirk_exclude;
+ };
+
+ struct asoc_sdw_codec_info {
+diff --git a/include/trace/events/damon.h b/include/trace/events/damon.h
+index 23200aabccacb1..da4bd9fd11625e 100644
+--- a/include/trace/events/damon.h
++++ b/include/trace/events/damon.h
+@@ -15,7 +15,7 @@ TRACE_EVENT_CONDITION(damos_before_apply,
+ unsigned int target_idx, struct damon_region *r,
+ unsigned int nr_regions, bool do_trace),
+
+- TP_ARGS(context_idx, target_idx, scheme_idx, r, nr_regions, do_trace),
++ TP_ARGS(context_idx, scheme_idx, target_idx, r, nr_regions, do_trace),
+
+ TP_CONDITION(do_trace),
+
+diff --git a/include/trace/trace_events.h b/include/trace/trace_events.h
+index c2f9cabf154d11..fa0d51cad57a80 100644
+--- a/include/trace/trace_events.h
++++ b/include/trace/trace_events.h
+@@ -244,6 +244,9 @@ static struct trace_event_fields trace_event_fields_##call[] = { \
+ tstruct \
+ {} };
+
++#undef DECLARE_EVENT_SYSCALL_CLASS
++#define DECLARE_EVENT_SYSCALL_CLASS DECLARE_EVENT_CLASS
++
+ #undef DEFINE_EVENT_PRINT
+ #define DEFINE_EVENT_PRINT(template, name, proto, args, print)
+
+@@ -374,11 +377,11 @@ static inline notrace int trace_event_get_offsets_##call( \
+
+ #include "stages/stage6_event_callback.h"
+
+-#undef DECLARE_EVENT_CLASS
+-#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
+- \
++
++#undef __DECLARE_EVENT_CLASS
++#define __DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
+ static notrace void \
+-trace_event_raw_event_##call(void *__data, proto) \
++do_trace_event_raw_event_##call(void *__data, proto) \
+ { \
+ struct trace_event_file *trace_file = __data; \
+ struct trace_event_data_offsets_##call __maybe_unused __data_offsets;\
+@@ -403,6 +406,29 @@ trace_event_raw_event_##call(void *__data, proto) \
+ \
+ trace_event_buffer_commit(&fbuffer); \
+ }
++
++#undef DECLARE_EVENT_CLASS
++#define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print) \
++__DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), PARAMS(tstruct), \
++ PARAMS(assign), PARAMS(print)) \
++static notrace void \
++trace_event_raw_event_##call(void *__data, proto) \
++{ \
++ do_trace_event_raw_event_##call(__data, args); \
++}
++
++#undef DECLARE_EVENT_SYSCALL_CLASS
++#define DECLARE_EVENT_SYSCALL_CLASS(call, proto, args, tstruct, assign, print) \
++__DECLARE_EVENT_CLASS(call, PARAMS(proto), PARAMS(args), PARAMS(tstruct), \
++ PARAMS(assign), PARAMS(print)) \
++static notrace void \
++trace_event_raw_event_##call(void *__data, proto) \
++{ \
++ preempt_disable_notrace(); \
++ do_trace_event_raw_event_##call(__data, args); \
++ preempt_enable_notrace(); \
++}
++
+ /*
+ * The ftrace_test_probe is compiled out, it is only here as a build time check
+ * to make sure that if the tracepoint handling changes, the ftrace probe will
+@@ -418,6 +444,8 @@ static inline void ftrace_test_probe_##call(void) \
+
+ #include TRACE_INCLUDE(TRACE_INCLUDE_FILE)
+
++#undef __DECLARE_EVENT_CLASS
++
+ #include "stages/stage7_class_define.h"
+
+ #undef DECLARE_EVENT_CLASS
+diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
+index b6fbe4988f2e9e..c4182e95a61955 100644
+--- a/include/uapi/drm/xe_drm.h
++++ b/include/uapi/drm/xe_drm.h
+@@ -512,7 +512,9 @@ struct drm_xe_query_gt_list {
+ * containing the following in mask:
+ * ``DSS_COMPUTE ff ff ff ff 00 00 00 00``
+ * means 32 DSS are available for compute.
+- * - %DRM_XE_TOPO_L3_BANK - To query the mask of enabled L3 banks
++ * - %DRM_XE_TOPO_L3_BANK - To query the mask of enabled L3 banks. This type
++ * may be omitted if the driver is unable to query the mask from the
++ * hardware.
+ * - %DRM_XE_TOPO_EU_PER_DSS - To query the mask of Execution Units (EU)
+ * available per Dual Sub Slices (DSS). For example a query response
+ * containing the following in mask:
+diff --git a/include/uapi/linux/fanotify.h b/include/uapi/linux/fanotify.h
+index a37de58ca571ae..34f221d3a1b957 100644
+--- a/include/uapi/linux/fanotify.h
++++ b/include/uapi/linux/fanotify.h
+@@ -60,6 +60,7 @@
+ #define FAN_REPORT_DIR_FID 0x00000400 /* Report unique directory id */
+ #define FAN_REPORT_NAME 0x00000800 /* Report events with name */
+ #define FAN_REPORT_TARGET_FID 0x00001000 /* Report dirent target id */
++#define FAN_REPORT_FD_ERROR 0x00002000 /* event->fd can report error */
+
+ /* Convenience macro - FAN_REPORT_NAME requires FAN_REPORT_DIR_FID */
+ #define FAN_REPORT_DFID_NAME (FAN_REPORT_DIR_FID | FAN_REPORT_NAME)
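[Editor's note: userspace view of the new FAN_REPORT_FD_ERROR flag, as a hedged sketch. With the flag set, the fd in a read event may carry a negative errno instead of the legacy FAN_NOFD sentinel; handle_open_error() is a stand-in.]

	#include <fcntl.h>
	#include <sys/fanotify.h>

	int fan_fd = fanotify_init(FAN_CLASS_NOTIF | FAN_REPORT_FD_ERROR,
				   O_RDONLY);
	/* later, for each struct fanotify_event_metadata *ev read back: */
	if (ev->fd < 0) {
		/* e.g. -EMFILE when the fd table was full */
		handle_open_error(ev->fd);
	}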
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index 3f68ae3e4330dc..8932ec5bd7c029 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -299,6 +299,8 @@ struct ufs_pwr_mode_info {
+ * @max_num_rtt: maximum RTT supported by the host
+ * @init: called when the driver is initialized
+ * @exit: called to cleanup everything done in init
++ * @set_dma_mask: For setting another DMA mask than indicated by the 64AS
++ * capability bit.
+ * @get_ufs_hci_version: called to get UFS HCI version
+ * @clk_scale_notify: notifies that clks are scaled up/down
+ * @setup_clocks: called before touching any of the controller registers
+@@ -308,7 +310,9 @@ struct ufs_pwr_mode_info {
+ * to allow variant specific Uni-Pro initialization.
+ * @pwr_change_notify: called before and after a power mode change
+ * is carried out to allow vendor specific capabilities
+- * to be set.
++ * to be set. PRE_CHANGE can modify final_params based
++ * on desired_pwr_mode, but POST_CHANGE must not alter
++ * the final_params parameter
+ * @setup_xfer_req: called before any transfer request is issued
+ * to set some things
+ * @setup_task_mgmt: called before any task management request is issued
+@@ -341,6 +345,7 @@ struct ufs_hba_variant_ops {
+ int (*init)(struct ufs_hba *);
+ void (*exit)(struct ufs_hba *);
+ u32 (*get_ufs_hci_version)(struct ufs_hba *);
++ int (*set_dma_mask)(struct ufs_hba *);
+ int (*clk_scale_notify)(struct ufs_hba *, bool,
+ enum ufs_notify_change_status);
+ int (*setup_clocks)(struct ufs_hba *, bool,
+@@ -350,9 +355,9 @@ struct ufs_hba_variant_ops {
+ int (*link_startup_notify)(struct ufs_hba *,
+ enum ufs_notify_change_status);
+ int (*pwr_change_notify)(struct ufs_hba *,
+- enum ufs_notify_change_status status,
+- struct ufs_pa_layer_attr *,
+- struct ufs_pa_layer_attr *);
++ enum ufs_notify_change_status status,
++ struct ufs_pa_layer_attr *desired_pwr_mode,
++ struct ufs_pa_layer_attr *final_params);
+ void (*setup_xfer_req)(struct ufs_hba *hba, int tag,
+ bool is_scsi_cmd);
+ void (*setup_task_mgmt)(struct ufs_hba *, int, u8);
+@@ -623,12 +628,6 @@ enum ufshcd_quirks {
+ */
+ UFSHCD_QUIRK_SKIP_PH_CONFIGURATION = 1 << 16,
+
+- /*
+- * This quirk needs to be enabled if the host controller has
+- * 64-bit addressing supported capability but it doesn't work.
+- */
+- UFSHCD_QUIRK_BROKEN_64BIT_ADDRESS = 1 << 17,
+-
+ /*
+ * This quirk needs to be enabled if the host controller has
+ * auto-hibernate capability but it's FASTAUTO only.
+diff --git a/io_uring/tctx.c b/io_uring/tctx.c
+index c043fe93a3f232..84f6a838572040 100644
+--- a/io_uring/tctx.c
++++ b/io_uring/tctx.c
+@@ -47,8 +47,19 @@ static struct io_wq *io_init_wq_offload(struct io_ring_ctx *ctx,
+ void __io_uring_free(struct task_struct *tsk)
+ {
+ struct io_uring_task *tctx = tsk->io_uring;
++ struct io_tctx_node *node;
++ unsigned long index;
+
+- WARN_ON_ONCE(!xa_empty(&tctx->xa));
++ /*
++ * Fault injection forcing allocation errors in the xa_store() path
++ * can lead to xa_empty() returning false, even though no actual
++ * node is stored in the xarray. Until that gets sorted out, attempt
++ * an iteration here and warn if any entries are found.
++ */
++ xa_for_each(&tctx->xa, index, node) {
++ WARN_ON_ONCE(1);
++ break;
++ }
+ WARN_ON_ONCE(tctx->io_wq);
+ WARN_ON_ONCE(tctx->cached_refs);
+
+diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
+index 39c3c816ec7882..883510a3e8d075 100644
+--- a/io_uring/uring_cmd.c
++++ b/io_uring/uring_cmd.c
+@@ -147,7 +147,7 @@ static inline void io_req_set_cqe32_extra(struct io_kiocb *req,
+ * Called by consumers of io_uring_cmd, if they originally returned
+ * -EIOCBQUEUED upon receiving the command.
+ */
+-void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, ssize_t res2,
++void io_uring_cmd_done(struct io_uring_cmd *ioucmd, ssize_t ret, u64 res2,
+ unsigned issue_flags)
+ {
+ struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index 79660e3fca4c1b..6cdbb4c33d31d5 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -947,22 +947,44 @@ static void *prog_fd_array_get_ptr(struct bpf_map *map,
+ struct file *map_file, int fd)
+ {
+ struct bpf_prog *prog = bpf_prog_get(fd);
++ bool is_extended;
+
+ if (IS_ERR(prog))
+ return prog;
+
+- if (!bpf_prog_map_compatible(map, prog)) {
++ if (prog->type == BPF_PROG_TYPE_EXT ||
++ !bpf_prog_map_compatible(map, prog)) {
+ bpf_prog_put(prog);
+ return ERR_PTR(-EINVAL);
+ }
+
++ mutex_lock(&prog->aux->ext_mutex);
++ is_extended = prog->aux->is_extended;
++ if (!is_extended)
++ prog->aux->prog_array_member_cnt++;
++ mutex_unlock(&prog->aux->ext_mutex);
++ if (is_extended) {
++ /* Extended prog can not be tail callee. It's to prevent a
++ * potential infinite loop like:
++ * tail callee prog entry -> tail callee prog subprog ->
++ * freplace prog entry --tailcall-> tail callee prog entry.
++ */
++ bpf_prog_put(prog);
++ return ERR_PTR(-EBUSY);
++ }
++
+ return prog;
+ }
+
+ static void prog_fd_array_put_ptr(struct bpf_map *map, void *ptr, bool need_defer)
+ {
++ struct bpf_prog *prog = ptr;
++
++ mutex_lock(&prog->aux->ext_mutex);
++ prog->aux->prog_array_member_cnt--;
++ mutex_unlock(&prog->aux->ext_mutex);
+ /* bpf_prog is freed after one RCU or tasks trace grace period */
+- bpf_prog_put(ptr);
++ bpf_prog_put(prog);
+ }
+
+ static u32 prog_fd_array_sys_lookup_elem(void *ptr)
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 5e77c58e06010e..233ea78f8f1bd9 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -131,6 +131,7 @@ struct bpf_prog *bpf_prog_alloc_no_stats(unsigned int size, gfp_t gfp_extra_flag
+ INIT_LIST_HEAD_RCU(&fp->aux->ksym_prefix.lnode);
+ #endif
+ mutex_init(&fp->aux->used_maps_mutex);
++ mutex_init(&fp->aux->ext_mutex);
+ mutex_init(&fp->aux->dst_mutex);
+
+ return fp;
+diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
+index 7878be18e9d264..3aa002a47a9666 100644
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -184,7 +184,7 @@ static struct bpf_map *dev_map_alloc(union bpf_attr *attr)
+ static void dev_map_free(struct bpf_map *map)
+ {
+ struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
+- int i;
++ u32 i;
+
+ /* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
+ * so the programs (can be more than one that used this map) were
+@@ -821,7 +821,7 @@ static long dev_map_delete_elem(struct bpf_map *map, void *key)
+ {
+ struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
+ struct bpf_dtab_netdev *old_dev;
+- int k = *(u32 *)key;
++ u32 k = *(u32 *)key;
+
+ if (k >= map->max_entries)
+ return -EINVAL;
+@@ -838,7 +838,7 @@ static long dev_map_hash_delete_elem(struct bpf_map *map, void *key)
+ {
+ struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
+ struct bpf_dtab_netdev *old_dev;
+- int k = *(u32 *)key;
++ u32 k = *(u32 *)key;
+ unsigned long flags;
+ int ret = -ENOENT;
+
+diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
+index b14b87463ee04e..3ec941a0ea41c5 100644
+--- a/kernel/bpf/hashtab.c
++++ b/kernel/bpf/hashtab.c
+@@ -896,9 +896,12 @@ static int htab_map_get_next_key(struct bpf_map *map, void *key, void *next_key)
+ static void htab_elem_free(struct bpf_htab *htab, struct htab_elem *l)
+ {
+ check_and_free_fields(htab, l);
++
++ migrate_disable();
+ if (htab->map.map_type == BPF_MAP_TYPE_PERCPU_HASH)
+ bpf_mem_cache_free(&htab->pcpu_ma, l->ptr_to_pptr);
+ bpf_mem_cache_free(&htab->ma, l);
++ migrate_enable();
+ }
+
+ static void htab_put_fd_value(struct bpf_htab *htab, struct htab_elem *l)
+@@ -948,7 +951,7 @@ static void free_htab_elem(struct bpf_htab *htab, struct htab_elem *l)
+ if (htab_is_prealloc(htab)) {
+ bpf_map_dec_elem_count(&htab->map);
+ check_and_free_fields(htab, l);
+- __pcpu_freelist_push(&htab->freelist, &l->fnode);
++ pcpu_freelist_push(&htab->freelist, &l->fnode);
+ } else {
+ dec_elem_count(htab);
+ htab_elem_free(htab, l);
+@@ -1018,7 +1021,6 @@ static struct htab_elem *alloc_htab_elem(struct bpf_htab *htab, void *key,
+ */
+ pl_new = this_cpu_ptr(htab->extra_elems);
+ l_new = *pl_new;
+- htab_put_fd_value(htab, old_elem);
+ *pl_new = old_elem;
+ } else {
+ struct pcpu_freelist_node *l;
+@@ -1105,6 +1107,7 @@ static long htab_map_update_elem(struct bpf_map *map, void *key, void *value,
+ struct htab_elem *l_new = NULL, *l_old;
+ struct hlist_nulls_head *head;
+ unsigned long flags;
++ void *old_map_ptr;
+ struct bucket *b;
+ u32 key_size, hash;
+ int ret;
+@@ -1183,12 +1186,27 @@ static long htab_map_update_elem(struct bpf_map *map, void *key, void *value,
+ hlist_nulls_add_head_rcu(&l_new->hash_node, head);
+ if (l_old) {
+ hlist_nulls_del_rcu(&l_old->hash_node);
++
++ /* l_old has already been stashed in htab->extra_elems, free
++ * its special fields before it is available for reuse. Also
++ * save the old map pointer in htab of maps before unlock
++ * and release it after unlock.
++ */
++ old_map_ptr = NULL;
++ if (htab_is_prealloc(htab)) {
++ if (map->ops->map_fd_put_ptr)
++ old_map_ptr = fd_htab_map_get_ptr(map, l_old);
++ check_and_free_fields(htab, l_old);
++ }
++ }
++ htab_unlock_bucket(htab, b, hash, flags);
++ if (l_old) {
++ if (old_map_ptr)
++ map->ops->map_fd_put_ptr(map, old_map_ptr, true);
+ if (!htab_is_prealloc(htab))
+ free_htab_elem(htab, l_old);
+- else
+- check_and_free_fields(htab, l_old);
+ }
+- ret = 0;
++ return 0;
+ err:
+ htab_unlock_bucket(htab, b, hash, flags);
+ return ret;
+@@ -1432,15 +1450,15 @@ static long htab_map_delete_elem(struct bpf_map *map, void *key)
+ return ret;
+
+ l = lookup_elem_raw(head, hash, key, key_size);
+-
+- if (l) {
++ if (l)
+ hlist_nulls_del_rcu(&l->hash_node);
+- free_htab_elem(htab, l);
+- } else {
++ else
+ ret = -ENOENT;
+- }
+
+ htab_unlock_bucket(htab, b, hash, flags);
++
++ if (l)
++ free_htab_elem(htab, l);
+ return ret;
+ }
+
+@@ -1853,13 +1871,14 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
+ * may cause deadlock. See comments in function
+ * prealloc_lru_pop(). Let us do bpf_lru_push_free()
+ * after releasing the bucket lock.
++ *
++ * For htab of maps, htab_put_fd_value() in
++ * free_htab_elem() may acquire a spinlock with bucket
++ * lock being held and it violates the lock rule, so
++ * invoke free_htab_elem() after unlock as well.
+ */
+- if (is_lru_map) {
+- l->batch_flink = node_to_free;
+- node_to_free = l;
+- } else {
+- free_htab_elem(htab, l);
+- }
++ l->batch_flink = node_to_free;
++ node_to_free = l;
+ }
+ dst_key += key_size;
+ dst_val += value_size;
+@@ -1871,7 +1890,10 @@ __htab_map_lookup_and_delete_batch(struct bpf_map *map,
+ while (node_to_free) {
+ l = node_to_free;
+ node_to_free = node_to_free->batch_flink;
+- htab_lru_push_free(htab, l);
++ if (is_lru_map)
++ htab_lru_push_free(htab, l);
++ else
++ free_htab_elem(htab, l);
+ }
+
+ next_batch:
+diff --git a/kernel/bpf/lpm_trie.c b/kernel/bpf/lpm_trie.c
+index 9b60eda0f727b3..010e91ac978e62 100644
+--- a/kernel/bpf/lpm_trie.c
++++ b/kernel/bpf/lpm_trie.c
+@@ -310,12 +310,22 @@ static struct lpm_trie_node *lpm_trie_node_alloc(const struct lpm_trie *trie,
+ return node;
+ }
+
++static int trie_check_add_elem(struct lpm_trie *trie, u64 flags)
++{
++ if (flags == BPF_EXIST)
++ return -ENOENT;
++ if (trie->n_entries == trie->map.max_entries)
++ return -ENOSPC;
++ trie->n_entries++;
++ return 0;
++}
++
+ /* Called from syscall or from eBPF program */
+ static long trie_update_elem(struct bpf_map *map,
+ void *_key, void *value, u64 flags)
+ {
+ struct lpm_trie *trie = container_of(map, struct lpm_trie, map);
+- struct lpm_trie_node *node, *im_node = NULL, *new_node = NULL;
++ struct lpm_trie_node *node, *im_node, *new_node = NULL;
+ struct lpm_trie_node *free_node = NULL;
+ struct lpm_trie_node __rcu **slot;
+ struct bpf_lpm_trie_key_u8 *key = _key;
+@@ -333,20 +343,12 @@ static long trie_update_elem(struct bpf_map *map,
+ spin_lock_irqsave(&trie->lock, irq_flags);
+
+ /* Allocate and fill a new node */
+-
+- if (trie->n_entries == trie->map.max_entries) {
+- ret = -ENOSPC;
+- goto out;
+- }
+-
+ new_node = lpm_trie_node_alloc(trie, value);
+ if (!new_node) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+- trie->n_entries++;
+-
+ new_node->prefixlen = key->prefixlen;
+ RCU_INIT_POINTER(new_node->child[0], NULL);
+ RCU_INIT_POINTER(new_node->child[1], NULL);
+@@ -376,6 +378,10 @@ static long trie_update_elem(struct bpf_map *map,
+ * simply assign the @new_node to that slot and be done.
+ */
+ if (!node) {
++ ret = trie_check_add_elem(trie, flags);
++ if (ret)
++ goto out;
++
+ rcu_assign_pointer(*slot, new_node);
+ goto out;
+ }
+@@ -384,18 +390,30 @@ static long trie_update_elem(struct bpf_map *map,
+ * which already has the correct data array set.
+ */
+ if (node->prefixlen == matchlen) {
++ if (!(node->flags & LPM_TREE_NODE_FLAG_IM)) {
++ if (flags == BPF_NOEXIST) {
++ ret = -EEXIST;
++ goto out;
++ }
++ } else {
++ ret = trie_check_add_elem(trie, flags);
++ if (ret)
++ goto out;
++ }
++
+ new_node->child[0] = node->child[0];
+ new_node->child[1] = node->child[1];
+
+- if (!(node->flags & LPM_TREE_NODE_FLAG_IM))
+- trie->n_entries--;
+-
+ rcu_assign_pointer(*slot, new_node);
+ free_node = node;
+
+ goto out;
+ }
+
++ ret = trie_check_add_elem(trie, flags);
++ if (ret)
++ goto out;
++
+ /* If the new node matches the prefix completely, it must be inserted
+ * as an ancestor. Simply insert it between @node and *@slot.
+ */
+@@ -408,6 +426,7 @@ static long trie_update_elem(struct bpf_map *map,
+
+ im_node = lpm_trie_node_alloc(trie, NULL);
+ if (!im_node) {
++ trie->n_entries--;
+ ret = -ENOMEM;
+ goto out;
+ }
+@@ -429,14 +448,8 @@ static long trie_update_elem(struct bpf_map *map,
+ rcu_assign_pointer(*slot, im_node);
+
+ out:
+- if (ret) {
+- if (new_node)
+- trie->n_entries--;
+-
++ if (ret)
+ kfree(new_node);
+- kfree(im_node);
+- }
+-
+ spin_unlock_irqrestore(&trie->lock, irq_flags);
+ kfree_rcu(free_node, rcu);
+
+@@ -633,7 +646,7 @@ static int trie_get_next_key(struct bpf_map *map, void *_key, void *_next_key)
+ struct lpm_trie_node **node_stack = NULL;
+ int err = 0, stack_ptr = -1;
+ unsigned int next_bit;
+- size_t matchlen;
++ size_t matchlen = 0;
+
+ /* The get_next_key follows postorder. For the 4 node example in
+ * the top of this file, the trie_get_next_key() returns the following
+@@ -672,7 +685,7 @@ static int trie_get_next_key(struct bpf_map *map, void *_key, void *_next_key)
+ next_bit = extract_bit(key->data, node->prefixlen);
+ node = rcu_dereference(node->child[next_bit]);
+ }
+- if (!node || node->prefixlen != key->prefixlen ||
++ if (!node || node->prefixlen != matchlen ||
+ (node->flags & LPM_TREE_NODE_FLAG_IM))
+ goto find_leftmost;
+
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index c5aa127ed4cc01..368ae8d231d417 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -2976,12 +2976,24 @@ void bpf_link_inc(struct bpf_link *link)
+ atomic64_inc(&link->refcnt);
+ }
+
++static void bpf_link_dealloc(struct bpf_link *link)
++{
++ /* now that we know that bpf_link itself can't be reached, put underlying BPF program */
++ if (link->prog)
++ bpf_prog_put(link->prog);
++
++ /* free bpf_link and its containing memory */
++ if (link->ops->dealloc_deferred)
++ link->ops->dealloc_deferred(link);
++ else
++ link->ops->dealloc(link);
++}
++
+ static void bpf_link_defer_dealloc_rcu_gp(struct rcu_head *rcu)
+ {
+ struct bpf_link *link = container_of(rcu, struct bpf_link, rcu);
+
+- /* free bpf_link and its containing memory */
+- link->ops->dealloc_deferred(link);
++ bpf_link_dealloc(link);
+ }
+
+ static void bpf_link_defer_dealloc_mult_rcu_gp(struct rcu_head *rcu)
+@@ -3003,7 +3015,6 @@ static void bpf_link_free(struct bpf_link *link)
+ sleepable = link->prog->sleepable;
+ /* detach BPF program, clean up used resources */
+ ops->release(link);
+- bpf_prog_put(link->prog);
+ }
+ if (ops->dealloc_deferred) {
+ /* schedule BPF link deallocation; if underlying BPF program
+@@ -3014,8 +3025,9 @@ static void bpf_link_free(struct bpf_link *link)
+ call_rcu_tasks_trace(&link->rcu, bpf_link_defer_dealloc_mult_rcu_gp);
+ else
+ call_rcu(&link->rcu, bpf_link_defer_dealloc_rcu_gp);
+- } else if (ops->dealloc)
+- ops->dealloc(link);
++ } else if (ops->dealloc) {
++ bpf_link_dealloc(link);
++ }
+ }
+
+ static void bpf_link_put_deferred(struct work_struct *work)
+@@ -3218,7 +3230,8 @@ static void bpf_tracing_link_release(struct bpf_link *link)
+ container_of(link, struct bpf_tracing_link, link.link);
+
+ WARN_ON_ONCE(bpf_trampoline_unlink_prog(&tr_link->link,
+- tr_link->trampoline));
++ tr_link->trampoline,
++ tr_link->tgt_prog));
+
+ bpf_trampoline_put(tr_link->trampoline);
+
+@@ -3358,7 +3371,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
+ * in prog->aux
+ *
+ * - if prog->aux->dst_trampoline is NULL, the program has already been
+- * attached to a target and its initial target was cleared (below)
++ * attached to a target and its initial target was cleared (below)
+ *
+ * - if tgt_prog != NULL, the caller specified tgt_prog_fd +
+ * target_btf_id using the link_create API.
+@@ -3433,7 +3446,7 @@ static int bpf_tracing_prog_attach(struct bpf_prog *prog,
+ if (err)
+ goto out_unlock;
+
+- err = bpf_trampoline_link_prog(&link->link, tr);
++ err = bpf_trampoline_link_prog(&link->link, tr, tgt_prog);
+ if (err) {
+ bpf_link_cleanup(&link_primer);
+ link = NULL;
+diff --git a/kernel/bpf/trampoline.c b/kernel/bpf/trampoline.c
+index 1166d9dd3e8b5d..ecdd2660561f5b 100644
+--- a/kernel/bpf/trampoline.c
++++ b/kernel/bpf/trampoline.c
+@@ -528,7 +528,27 @@ static enum bpf_tramp_prog_type bpf_attach_type_to_tramp(struct bpf_prog *prog)
+ }
+ }
+
+-static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
++static int bpf_freplace_check_tgt_prog(struct bpf_prog *tgt_prog)
++{
++ struct bpf_prog_aux *aux = tgt_prog->aux;
++
++ guard(mutex)(&aux->ext_mutex);
++ if (aux->prog_array_member_cnt)
++ /* Program extensions can not extend the target prog when the target
++ * prog has been added to any prog_array map as a tail callee.
++ * This prevents a potential infinite loop like:
++ * tgt prog entry -> tgt prog subprog -> freplace prog entry
++ * --tailcall-> tgt prog entry.
++ */
++ return -EBUSY;
++
++ aux->is_extended = true;
++ return 0;
++}
++
++static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link,
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog)
+ {
+ enum bpf_tramp_prog_type kind;
+ struct bpf_tramp_link *link_exiting;
+@@ -549,6 +569,9 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_tr
+ /* Cannot attach extension if fentry/fexit are in use. */
+ if (cnt)
+ return -EBUSY;
++ err = bpf_freplace_check_tgt_prog(tgt_prog);
++ if (err)
++ return err;
+ tr->extension_prog = link->link.prog;
+ return bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP, NULL,
+ link->link.prog->bpf_func);
+@@ -575,17 +598,21 @@ static int __bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_tr
+ return err;
+ }
+
+-int bpf_trampoline_link_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
++int bpf_trampoline_link_prog(struct bpf_tramp_link *link,
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog)
+ {
+ int err;
+
+ mutex_lock(&tr->mutex);
+- err = __bpf_trampoline_link_prog(link, tr);
++ err = __bpf_trampoline_link_prog(link, tr, tgt_prog);
+ mutex_unlock(&tr->mutex);
+ return err;
+ }
+
+-static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
++static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog)
+ {
+ enum bpf_tramp_prog_type kind;
+ int err;
+@@ -596,6 +623,8 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_
+ err = bpf_arch_text_poke(tr->func.addr, BPF_MOD_JUMP,
+ tr->extension_prog->bpf_func, NULL);
+ tr->extension_prog = NULL;
++ guard(mutex)(&tgt_prog->aux->ext_mutex);
++ tgt_prog->aux->is_extended = false;
+ return err;
+ }
+ hlist_del_init(&link->tramp_hlist);
+@@ -604,12 +633,14 @@ static int __bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_
+ }
+
+ /* bpf_trampoline_unlink_prog() should never fail. */
+-int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link, struct bpf_trampoline *tr)
++int bpf_trampoline_unlink_prog(struct bpf_tramp_link *link,
++ struct bpf_trampoline *tr,
++ struct bpf_prog *tgt_prog)
+ {
+ int err;
+
+ mutex_lock(&tr->mutex);
+- err = __bpf_trampoline_unlink_prog(link, tr);
++ err = __bpf_trampoline_unlink_prog(link, tr, tgt_prog);
+ mutex_unlock(&tr->mutex);
+ return err;
+ }
+@@ -624,7 +655,7 @@ static void bpf_shim_tramp_link_release(struct bpf_link *link)
+ if (!shim_link->trampoline)
+ return;
+
+- WARN_ON_ONCE(bpf_trampoline_unlink_prog(&shim_link->link, shim_link->trampoline));
++ WARN_ON_ONCE(bpf_trampoline_unlink_prog(&shim_link->link, shim_link->trampoline, NULL));
+ bpf_trampoline_put(shim_link->trampoline);
+ }
+
+@@ -738,7 +769,7 @@ int bpf_trampoline_link_cgroup_shim(struct bpf_prog *prog,
+ goto err;
+ }
+
+- err = __bpf_trampoline_link_prog(&shim_link->link, tr);
++ err = __bpf_trampoline_link_prog(&shim_link->link, tr, NULL);
+ if (err)
+ goto err;
+
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 91317857ea3ee5..b2008076df9c26 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1200,14 +1200,17 @@ static bool is_spilled_scalar_reg64(const struct bpf_stack_state *stack)
+ /* Mark stack slot as STACK_MISC, unless it is already STACK_INVALID, in which
+ * case they are equivalent, or it's STACK_ZERO, in which case we preserve
+ * more precise STACK_ZERO.
+- * Note, in uprivileged mode leaving STACK_INVALID is wrong, so we take
+- * env->allow_ptr_leaks into account and force STACK_MISC, if necessary.
++ * Regardless of allow_ptr_leaks setting (i.e., privileged or unprivileged
++ * mode), we won't promote STACK_INVALID to STACK_MISC. In privileged case it is
++ * unnecessary as both are considered equivalent when loading data and pruning,
++ * in case of unprivileged mode it will be incorrect to allow reads of invalid
++ * slots.
+ */
+ static void mark_stack_slot_misc(struct bpf_verifier_env *env, u8 *stype)
+ {
+ if (*stype == STACK_ZERO)
+ return;
+- if (env->allow_ptr_leaks && *stype == STACK_INVALID)
++ if (*stype == STACK_INVALID)
+ return;
+ *stype = STACK_MISC;
+ }
+@@ -4646,6 +4649,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env,
+ */
+ if (!env->allow_ptr_leaks &&
+ is_spilled_reg(&state->stack[spi]) &&
++ !is_spilled_scalar_reg(&state->stack[spi]) &&
+ size != BPF_REG_SIZE) {
+ verbose(env, "attempt to corrupt spilled pointer on stack\n");
+ return -EACCES;
+@@ -8021,6 +8025,11 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
+ const struct btf_type *t;
+ int spi, err, i, nr_slots, btf_id;
+
++ if (reg->type != PTR_TO_STACK) {
++ verbose(env, "arg#%d expected pointer to an iterator on stack\n", regno - 1);
++ return -EINVAL;
++ }
++
+ /* For iter_{new,next,destroy} functions, btf_check_iter_kfuncs()
+ * ensures struct convention, so we wouldn't need to do any BTF
+ * validation here. But given iter state can be passed as a parameter
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index d570535342cb78..f6f0387761d05a 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -1052,9 +1052,13 @@ static void check_unmap(struct dma_debug_entry *ref)
+ }
+
+ hash_bucket_del(entry);
+- dma_entry_free(entry);
+-
+ put_hash_bucket(bucket, flags);
++
++ /*
++ * Free the entry outside of bucket_lock to avoid ABBA deadlocks
++ * between that and radix_lock.
++ */
++ dma_entry_free(entry);
+ }
+
+ static void check_for_stack(struct device *dev,
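
The dma/debug hunk above is a classic ABBA-deadlock fix: dma_entry_free() can
take radix_lock, so it must not be called while the bucket lock is held if any
other path nests the two locks the other way around. A minimal user-space
sketch of the same discipline, with pthread mutexes standing in for the kernel
locks (all names here are illustrative, not kernel API):

    #include <pthread.h>

    struct entry;

    static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t free_list_lock = PTHREAD_MUTEX_INITIALIZER; /* plays radix_lock */

    /* Unlink under the first lock, drop it, then free under the second.
     * Never holding both at once means no cycle can form with a path
     * that takes them in the opposite order. */
    static void unlink_then_free(struct entry *e)
    {
            pthread_mutex_lock(&bucket_lock);
            /* ...remove e from the hash bucket... */
            pthread_mutex_unlock(&bucket_lock);

            pthread_mutex_lock(&free_list_lock);
            /* ...return e to the free list... */
            pthread_mutex_unlock(&free_list_lock);
    }
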
+diff --git a/kernel/kcsan/debugfs.c b/kernel/kcsan/debugfs.c
+index 53b21ae30e00ee..b14072071889fb 100644
+--- a/kernel/kcsan/debugfs.c
++++ b/kernel/kcsan/debugfs.c
+@@ -46,14 +46,8 @@ static struct {
+ int used; /* number of elements used */
+ bool sorted; /* if elements are sorted */
+ bool whitelist; /* if list is a blacklist or whitelist */
+-} report_filterlist = {
+- .addrs = NULL,
+- .size = 8, /* small initial size */
+- .used = 0,
+- .sorted = false,
+- .whitelist = false, /* default is blacklist */
+-};
+-static DEFINE_SPINLOCK(report_filterlist_lock);
++} report_filterlist;
++static DEFINE_RAW_SPINLOCK(report_filterlist_lock);
+
+ /*
+ * The microbenchmark allows benchmarking KCSAN core runtime only. To run
+@@ -110,7 +104,7 @@ bool kcsan_skip_report_debugfs(unsigned long func_addr)
+ return false;
+ func_addr -= offset; /* Get function start */
+
+- spin_lock_irqsave(&report_filterlist_lock, flags);
++ raw_spin_lock_irqsave(&report_filterlist_lock, flags);
+ if (report_filterlist.used == 0)
+ goto out;
+
+@@ -127,7 +121,7 @@ bool kcsan_skip_report_debugfs(unsigned long func_addr)
+ ret = !ret;
+
+ out:
+- spin_unlock_irqrestore(&report_filterlist_lock, flags);
++ raw_spin_unlock_irqrestore(&report_filterlist_lock, flags);
+ return ret;
+ }
+
+@@ -135,9 +129,9 @@ static void set_report_filterlist_whitelist(bool whitelist)
+ {
+ unsigned long flags;
+
+- spin_lock_irqsave(&report_filterlist_lock, flags);
++ raw_spin_lock_irqsave(&report_filterlist_lock, flags);
+ report_filterlist.whitelist = whitelist;
+- spin_unlock_irqrestore(&report_filterlist_lock, flags);
++ raw_spin_unlock_irqrestore(&report_filterlist_lock, flags);
+ }
+
+ /* Returns 0 on success, error-code otherwise. */
+@@ -145,6 +139,9 @@ static ssize_t insert_report_filterlist(const char *func)
+ {
+ unsigned long flags;
+ unsigned long addr = kallsyms_lookup_name(func);
++ unsigned long *delay_free = NULL;
++ unsigned long *new_addrs = NULL;
++ size_t new_size = 0;
+ ssize_t ret = 0;
+
+ if (!addr) {
+@@ -152,32 +149,33 @@ static ssize_t insert_report_filterlist(const char *func)
+ return -ENOENT;
+ }
+
+- spin_lock_irqsave(&report_filterlist_lock, flags);
++retry_alloc:
++ /*
++ * Check if we need an allocation, and re-validate under the lock. Since
++ * report_filterlist_lock is a raw spinlock, we cannot allocate under it.
++ */
++ if (data_race(report_filterlist.used == report_filterlist.size)) {
++ new_size = (report_filterlist.size ?: 4) * 2;
++ delay_free = new_addrs = kmalloc_array(new_size, sizeof(unsigned long), GFP_KERNEL);
++ if (!new_addrs)
++ return -ENOMEM;
++ }
+
+- if (report_filterlist.addrs == NULL) {
+- /* initial allocation */
+- report_filterlist.addrs =
+- kmalloc_array(report_filterlist.size,
+- sizeof(unsigned long), GFP_ATOMIC);
+- if (report_filterlist.addrs == NULL) {
+- ret = -ENOMEM;
+- goto out;
+- }
+- } else if (report_filterlist.used == report_filterlist.size) {
+- /* resize filterlist */
+- size_t new_size = report_filterlist.size * 2;
+- unsigned long *new_addrs =
+- krealloc(report_filterlist.addrs,
+- new_size * sizeof(unsigned long), GFP_ATOMIC);
+-
+- if (new_addrs == NULL) {
+- /* leave filterlist itself untouched */
+- ret = -ENOMEM;
+- goto out;
++ raw_spin_lock_irqsave(&report_filterlist_lock, flags);
++ if (report_filterlist.used == report_filterlist.size) {
++ /* Check we pre-allocated enough, and retry if not. */
++ if (report_filterlist.used >= new_size) {
++ raw_spin_unlock_irqrestore(&report_filterlist_lock, flags);
++ kfree(new_addrs); /* kfree(NULL) is safe */
++ delay_free = new_addrs = NULL;
++ goto retry_alloc;
+ }
+
++ if (report_filterlist.used)
++ memcpy(new_addrs, report_filterlist.addrs, report_filterlist.used * sizeof(unsigned long));
++ delay_free = report_filterlist.addrs; /* free the old list */
++ report_filterlist.addrs = new_addrs; /* switch to the new list */
+ report_filterlist.size = new_size;
+- report_filterlist.addrs = new_addrs;
+ }
+
+ /* Note: deduplicating should be done in userspace. */
+@@ -185,9 +183,9 @@ static ssize_t insert_report_filterlist(const char *func)
+ kallsyms_lookup_name(func);
+ report_filterlist.sorted = false;
+
+-out:
+- spin_unlock_irqrestore(&report_filterlist_lock, flags);
++ raw_spin_unlock_irqrestore(&report_filterlist_lock, flags);
+
++ kfree(delay_free);
+ return ret;
+ }
+
+@@ -204,13 +202,13 @@ static int show_info(struct seq_file *file, void *v)
+ }
+
+ /* show filter functions, and filter type */
+- spin_lock_irqsave(&report_filterlist_lock, flags);
++ raw_spin_lock_irqsave(&report_filterlist_lock, flags);
+ seq_printf(file, "\n%s functions: %s\n",
+ report_filterlist.whitelist ? "whitelisted" : "blacklisted",
+ report_filterlist.used == 0 ? "none" : "");
+ for (i = 0; i < report_filterlist.used; ++i)
+ seq_printf(file, " %ps\n", (void *)report_filterlist.addrs[i]);
+- spin_unlock_irqrestore(&report_filterlist_lock, flags);
++ raw_spin_unlock_irqrestore(&report_filterlist_lock, flags);
+
+ return 0;
+ }
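
The insert_report_filterlist() rework above is a standard raw-spinlock
pattern: GFP_KERNEL allocation may sleep and is therefore done before the lock
is taken, the size is re-validated under the lock, and whichever buffer loses
the race is freed only after unlocking. A condensed user-space sketch of that
shape (pthread spinlock in place of the raw spinlock; names are illustrative):

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    struct filterlist { unsigned long *addrs; size_t size, used; };
    static struct filterlist fl;
    static pthread_spinlock_t fl_lock;      /* pthread_spin_init() at startup */

    static int fl_insert(unsigned long addr)
    {
            unsigned long *new_addrs, *delay_free;
            size_t new_size;

    retry:
            new_addrs = delay_free = NULL;
            new_size = 0;
            if (fl.used == fl.size) {               /* racy pre-check, unlocked */
                    new_size = fl.size ? fl.size * 2 : 4;
                    new_addrs = malloc(new_size * sizeof(*new_addrs));
                    if (!new_addrs)
                            return -1;
            }

            pthread_spin_lock(&fl_lock);
            if (fl.used == fl.size) {               /* re-validate under the lock */
                    if (fl.used >= new_size) {      /* a racer grew it first */
                            pthread_spin_unlock(&fl_lock);
                            free(new_addrs);        /* free(NULL) is safe */
                            goto retry;
                    }
                    if (fl.used)
                            memcpy(new_addrs, fl.addrs,
                                   fl.used * sizeof(*new_addrs));
                    delay_free = fl.addrs;          /* old array, freed later */
                    fl.addrs = new_addrs;
                    fl.size = new_size;
            } else {
                    delay_free = new_addrs;         /* allocation was not needed */
            }
            fl.addrs[fl.used++] = addr;
            pthread_spin_unlock(&fl_lock);

            free(delay_free);                       /* never free under the lock */
            return 0;
    }
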
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 76b27b2a9c56ad..6cc12777bb11ab 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1242,9 +1242,9 @@ static void nohz_csd_func(void *info)
+ WARN_ON(!(flags & NOHZ_KICK_MASK));
+
+ rq->idle_balance = idle_cpu(cpu);
+- if (rq->idle_balance && !need_resched()) {
++ if (rq->idle_balance) {
+ rq->nohz_idle_balance = flags;
+- raise_softirq_irqoff(SCHED_SOFTIRQ);
++ __raise_softirq_irqoff(SCHED_SOFTIRQ);
+ }
+ }
+
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index be1b917dc8ce4c..40a1ad4493b4d9 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -2042,6 +2042,7 @@ enqueue_dl_entity(struct sched_dl_entity *dl_se, int flags)
+ } else if (flags & ENQUEUE_REPLENISH) {
+ replenish_dl_entity(dl_se);
+ } else if ((flags & ENQUEUE_RESTORE) &&
++ !is_dl_boosted(dl_se) &&
+ dl_time_before(dl_se->deadline, rq_clock(rq_of_dl_se(dl_se)))) {
+ setup_new_dl_entity(dl_se);
+ }
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 16613631543f18..79bb18651cdb8b 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -3105,6 +3105,12 @@ static s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
+
+ *found = false;
+
++
++ /*
++ * This is necessary to protect llc_cpus.
++ */
++ rcu_read_lock();
++
+ /*
+ * If WAKE_SYNC, the waker's local DSQ is empty, and the system is
+ * under utilized, wake up @p to the local DSQ of the waker. Checking
+@@ -3147,9 +3153,12 @@ static s32 scx_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
+ if (cpu >= 0)
+ goto cpu_found;
+
++ rcu_read_unlock();
+ return prev_cpu;
+
+ cpu_found:
++ rcu_read_unlock();
++
+ *found = true;
+ return cpu;
+ }
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 2d16c8545c71ed..782ce70ebd1b08 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3399,10 +3399,16 @@ static void task_numa_work(struct callback_head *work)
+
+ /* Initialise new per-VMA NUMAB state. */
+ if (!vma->numab_state) {
+- vma->numab_state = kzalloc(sizeof(struct vma_numab_state),
+- GFP_KERNEL);
+- if (!vma->numab_state)
++ struct vma_numab_state *ptr;
++
++ ptr = kzalloc(sizeof(*ptr), GFP_KERNEL);
++ if (!ptr)
++ continue;
++
++ if (cmpxchg(&vma->numab_state, NULL, ptr)) {
++ kfree(ptr);
+ continue;
++ }
+
+ vma->numab_state->start_scan_seq = mm->numa_scan_seq;
+
+@@ -12574,7 +12580,7 @@ static void _nohz_idle_balance(struct rq *this_rq, unsigned int flags)
+ * work being done for other CPUs. Next load
+ * balancing owner will pick it up.
+ */
+- if (need_resched()) {
++ if (!idle_cpu(this_cpu) && need_resched()) {
+ if (flags & NOHZ_STATS_KICK)
+ has_blocked_load = true;
+ if (flags & NOHZ_NEXT_KICK)
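
The task_numa_work() hunk above closes a window where two threads could both
observe vma->numab_state == NULL and both assign it, leaking one allocation:
cmpxchg() publishes the pointer only if the slot is still NULL, and the loser
frees its copy. The same shape in plain C with the GCC/Clang atomic builtins
(hypothetical names, not the kernel helper):

    #include <stdlib.h>

    struct numab_state { long start_scan_seq; /* ... */ };

    /* Install a freshly allocated state exactly once; every racing
     * caller ends up observing the single winning pointer. */
    static struct numab_state *state_install_once(struct numab_state **slot)
    {
            struct numab_state *expected = NULL;
            struct numab_state *ptr = calloc(1, sizeof(*ptr));

            if (!ptr)
                    return NULL;
            if (!__atomic_compare_exchange_n(slot, &expected, ptr,
                                             0 /* strong */,
                                             __ATOMIC_SEQ_CST,
                                             __ATOMIC_SEQ_CST)) {
                    free(ptr);              /* lost the race, drop our copy */
                    return expected;        /* the winner's allocation */
            }
            return ptr;
    }
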
+diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
+index 24f9f90b6574e5..1784ed1fb3fe5d 100644
+--- a/kernel/sched/syscalls.c
++++ b/kernel/sched/syscalls.c
+@@ -1238,7 +1238,7 @@ int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
+ bool empty = !cpumask_and(new_mask, new_mask,
+ ctx->user_mask);
+
+- if (WARN_ON_ONCE(empty))
++ if (empty)
+ cpumask_copy(new_mask, cpus_allowed);
+ }
+ __set_cpus_allowed_ptr(p, ctx);
+diff --git a/kernel/softirq.c b/kernel/softirq.c
+index d082e7840f8802..8c4524ce65fafe 100644
+--- a/kernel/softirq.c
++++ b/kernel/softirq.c
+@@ -280,17 +280,24 @@ static inline void invoke_softirq(void)
+ wakeup_softirqd();
+ }
+
++#define SCHED_SOFTIRQ_MASK BIT(SCHED_SOFTIRQ)
++
+ /*
+ * flush_smp_call_function_queue() can raise a soft interrupt in a function
+- * call. On RT kernels this is undesired and the only known functionality
+- * in the block layer which does this is disabled on RT. If soft interrupts
+- * get raised which haven't been raised before the flush, warn so it can be
++ * call. On RT kernels this is undesired; the only known sources are the
++ * block layer, which is disabled on RT, and the scheduler's idle load
++ * balancing. If soft interrupts get raised which haven't been raised before
++ * the flush, warn unless it is a SCHED_SOFTIRQ, so it can be
+ * investigated.
+ */
+ void do_softirq_post_smp_call_flush(unsigned int was_pending)
+ {
+- if (WARN_ON_ONCE(was_pending != local_softirq_pending()))
++ unsigned int is_pending = local_softirq_pending();
++
++ if (unlikely(was_pending != is_pending)) {
++ WARN_ON_ONCE(was_pending != (is_pending & ~SCHED_SOFTIRQ_MASK));
+ invoke_softirq();
++ }
+ }
+
+ #else /* CONFIG_PREEMPT_RT */
+diff --git a/kernel/time/Kconfig b/kernel/time/Kconfig
+index 8ebb6d5a106bea..b0b97a60aaa6fc 100644
+--- a/kernel/time/Kconfig
++++ b/kernel/time/Kconfig
+@@ -17,11 +17,6 @@ config ARCH_CLOCKSOURCE_DATA
+ config ARCH_CLOCKSOURCE_INIT
+ bool
+
+-# Clocksources require validation of the clocksource against the last
+-# cycle update - x86/TSC misfeature
+-config CLOCKSOURCE_VALIDATE_LAST_CYCLE
+- bool
+-
+ # Timekeeping vsyscall support
+ config GENERIC_TIME_VSYSCALL
+ bool
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 23336eecb4f43b..8a40a616288b81 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -22,7 +22,7 @@
+
+ static noinline u64 cycles_to_nsec_safe(struct clocksource *cs, u64 start, u64 end)
+ {
+- u64 delta = clocksource_delta(end, start, cs->mask);
++ u64 delta = clocksource_delta(end, start, cs->mask, cs->max_raw_delta);
+
+ if (likely(delta < cs->max_cycles))
+ return clocksource_cyc2ns(delta, cs->mult, cs->shift);
+@@ -985,6 +985,15 @@ static inline void clocksource_update_max_deferment(struct clocksource *cs)
+ cs->max_idle_ns = clocks_calc_max_nsecs(cs->mult, cs->shift,
+ cs->maxadj, cs->mask,
+ &cs->max_cycles);
++
++ /*
++ * Threshold for detecting negative motion in clocksource_delta().
++ *
++ * Allow for 0.875 of the counter width so that overly long idle
++ * sleeps, which go slightly over mask/2, do not trigger the
++ * negative motion detection.
++ */
++ cs->max_raw_delta = (cs->mask >> 1) + (cs->mask >> 2) + (cs->mask >> 3);
+ }
+
+ static struct clocksource *clocksource_find_best(bool oneshot, bool skipcur)
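
The 0.875 threshold set up above is simply the sum of three right shifts of
the mask: 1/2 + 1/4 + 1/8 = 7/8 of the counter width. A standalone sanity
check:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t mask = ~0ULL;          /* a full-width counter mask */
            uint64_t max_raw_delta = (mask >> 1) + (mask >> 2) + (mask >> 3);

            /* prints ~0.875 */
            printf("%.3f\n", (double)max_raw_delta / (double)mask);
            return 0;
    }
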
+diff --git a/kernel/time/ntp.c b/kernel/time/ntp.c
+index 802b336f4b8c2f..de3547d63aa975 100644
+--- a/kernel/time/ntp.c
++++ b/kernel/time/ntp.c
+@@ -804,7 +804,7 @@ int __do_adjtimex(struct __kernel_timex *txc, const struct timespec64 *ts,
+ txc->offset = shift_right(time_offset * NTP_INTERVAL_FREQ,
+ NTP_SCALE_SHIFT);
+ if (!(time_status & STA_NANO))
+- txc->offset = (u32)txc->offset / NSEC_PER_USEC;
++ txc->offset = div_s64(txc->offset, NSEC_PER_USEC);
+ }
+
+ result = time_state; /* mostly `TIME_OK' */
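
The one-line ntp.c change above matters because txc->offset is signed: the
old (u32) cast turned a negative nanosecond offset into a huge unsigned value
before dividing, whereas div_s64() divides with the sign preserved. A small
demonstration of the difference:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            int64_t offset_ns = -3000;      /* a -3 us NTP offset */

            uint32_t old_way = (uint32_t)offset_ns / 1000;  /* wraps to 4294964 */
            int64_t new_way = offset_ns / 1000;             /* div_s64()-like: -3 */

            printf("old=%u new=%lld\n", old_way, (long long)new_way);
            return 0;
    }
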
+diff --git a/kernel/time/timekeeping.c b/kernel/time/timekeeping.c
+index 7e6f409bf3114a..96933082431fe0 100644
+--- a/kernel/time/timekeeping.c
++++ b/kernel/time/timekeeping.c
+@@ -195,97 +195,6 @@ static inline u64 tk_clock_read(const struct tk_read_base *tkr)
+ return clock->read(clock);
+ }
+
+-#ifdef CONFIG_DEBUG_TIMEKEEPING
+-#define WARNING_FREQ (HZ*300) /* 5 minute rate-limiting */
+-
+-static void timekeeping_check_update(struct timekeeper *tk, u64 offset)
+-{
+-
+- u64 max_cycles = tk->tkr_mono.clock->max_cycles;
+- const char *name = tk->tkr_mono.clock->name;
+-
+- if (offset > max_cycles) {
+- printk_deferred("WARNING: timekeeping: Cycle offset (%lld) is larger than allowed by the '%s' clock's max_cycles value (%lld): time overflow danger\n",
+- offset, name, max_cycles);
+- printk_deferred(" timekeeping: Your kernel is sick, but tries to cope by capping time updates\n");
+- } else {
+- if (offset > (max_cycles >> 1)) {
+- printk_deferred("INFO: timekeeping: Cycle offset (%lld) is larger than the '%s' clock's 50%% safety margin (%lld)\n",
+- offset, name, max_cycles >> 1);
+- printk_deferred(" timekeeping: Your kernel is still fine, but is feeling a bit nervous\n");
+- }
+- }
+-
+- if (tk->underflow_seen) {
+- if (jiffies - tk->last_warning > WARNING_FREQ) {
+- printk_deferred("WARNING: Underflow in clocksource '%s' observed, time update ignored.\n", name);
+- printk_deferred(" Please report this, consider using a different clocksource, if possible.\n");
+- printk_deferred(" Your kernel is probably still fine.\n");
+- tk->last_warning = jiffies;
+- }
+- tk->underflow_seen = 0;
+- }
+-
+- if (tk->overflow_seen) {
+- if (jiffies - tk->last_warning > WARNING_FREQ) {
+- printk_deferred("WARNING: Overflow in clocksource '%s' observed, time update capped.\n", name);
+- printk_deferred(" Please report this, consider using a different clocksource, if possible.\n");
+- printk_deferred(" Your kernel is probably still fine.\n");
+- tk->last_warning = jiffies;
+- }
+- tk->overflow_seen = 0;
+- }
+-}
+-
+-static inline u64 timekeeping_cycles_to_ns(const struct tk_read_base *tkr, u64 cycles);
+-
+-static inline u64 timekeeping_debug_get_ns(const struct tk_read_base *tkr)
+-{
+- struct timekeeper *tk = &tk_core.timekeeper;
+- u64 now, last, mask, max, delta;
+- unsigned int seq;
+-
+- /*
+- * Since we're called holding a seqcount, the data may shift
+- * under us while we're doing the calculation. This can cause
+- * false positives, since we'd note a problem but throw the
+- * results away. So nest another seqcount here to atomically
+- * grab the points we are checking with.
+- */
+- do {
+- seq = read_seqcount_begin(&tk_core.seq);
+- now = tk_clock_read(tkr);
+- last = tkr->cycle_last;
+- mask = tkr->mask;
+- max = tkr->clock->max_cycles;
+- } while (read_seqcount_retry(&tk_core.seq, seq));
+-
+- delta = clocksource_delta(now, last, mask);
+-
+- /*
+- * Try to catch underflows by checking if we are seeing small
+- * mask-relative negative values.
+- */
+- if (unlikely((~delta & mask) < (mask >> 3)))
+- tk->underflow_seen = 1;
+-
+- /* Check for multiplication overflows */
+- if (unlikely(delta > max))
+- tk->overflow_seen = 1;
+-
+- /* timekeeping_cycles_to_ns() handles both under and overflow */
+- return timekeeping_cycles_to_ns(tkr, now);
+-}
+-#else
+-static inline void timekeeping_check_update(struct timekeeper *tk, u64 offset)
+-{
+-}
+-static inline u64 timekeeping_debug_get_ns(const struct tk_read_base *tkr)
+-{
+- BUG();
+-}
+-#endif
+-
+ /**
+ * tk_setup_internals - Set up internals to use clocksource clock.
+ *
+@@ -390,19 +299,11 @@ static inline u64 timekeeping_cycles_to_ns(const struct tk_read_base *tkr, u64 c
+ return ((delta * tkr->mult) + tkr->xtime_nsec) >> tkr->shift;
+ }
+
+-static __always_inline u64 __timekeeping_get_ns(const struct tk_read_base *tkr)
++static __always_inline u64 timekeeping_get_ns(const struct tk_read_base *tkr)
+ {
+ return timekeeping_cycles_to_ns(tkr, tk_clock_read(tkr));
+ }
+
+-static inline u64 timekeeping_get_ns(const struct tk_read_base *tkr)
+-{
+- if (IS_ENABLED(CONFIG_DEBUG_TIMEKEEPING))
+- return timekeeping_debug_get_ns(tkr);
+-
+- return __timekeeping_get_ns(tkr);
+-}
+-
+ /**
+ * update_fast_timekeeper - Update the fast and NMI safe monotonic timekeeper.
+ * @tkr: Timekeeping readout base from which we take the update
+@@ -446,7 +347,7 @@ static __always_inline u64 __ktime_get_fast_ns(struct tk_fast *tkf)
+ seq = raw_read_seqcount_latch(&tkf->seq);
+ tkr = tkf->base + (seq & 0x01);
+ now = ktime_to_ns(tkr->base);
+- now += __timekeeping_get_ns(tkr);
++ now += timekeeping_get_ns(tkr);
+ } while (raw_read_seqcount_latch_retry(&tkf->seq, seq));
+
+ return now;
+@@ -562,7 +463,7 @@ static __always_inline u64 __ktime_get_real_fast(struct tk_fast *tkf, u64 *mono)
+ tkr = tkf->base + (seq & 0x01);
+ basem = ktime_to_ns(tkr->base);
+ baser = ktime_to_ns(tkr->base_real);
+- delta = __timekeeping_get_ns(tkr);
++ delta = timekeeping_get_ns(tkr);
+ } while (raw_read_seqcount_latch_retry(&tkf->seq, seq));
+
+ if (mono)
+@@ -793,7 +694,8 @@ static void timekeeping_forward_now(struct timekeeper *tk)
+ u64 cycle_now, delta;
+
+ cycle_now = tk_clock_read(&tk->tkr_mono);
+- delta = clocksource_delta(cycle_now, tk->tkr_mono.cycle_last, tk->tkr_mono.mask);
++ delta = clocksource_delta(cycle_now, tk->tkr_mono.cycle_last, tk->tkr_mono.mask,
++ tk->tkr_mono.clock->max_raw_delta);
+ tk->tkr_mono.cycle_last = cycle_now;
+ tk->tkr_raw.cycle_last = cycle_now;
+
+@@ -2292,15 +2194,13 @@ static bool timekeeping_advance(enum timekeeping_adv_mode mode)
+ goto out;
+
+ offset = clocksource_delta(tk_clock_read(&tk->tkr_mono),
+- tk->tkr_mono.cycle_last, tk->tkr_mono.mask);
++ tk->tkr_mono.cycle_last, tk->tkr_mono.mask,
++ tk->tkr_mono.clock->max_raw_delta);
+
+ /* Check if there's really nothing to do */
+ if (offset < real_tk->cycle_interval && mode == TK_ADV_TICK)
+ goto out;
+
+- /* Do some additional sanity checking */
+- timekeeping_check_update(tk, offset);
+-
+ /*
+ * With NO_HZ we may have to accumulate many cycle_intervals
+ * (think "ticks") worth of time at once. To do this efficiently,
+diff --git a/kernel/time/timekeeping_internal.h b/kernel/time/timekeeping_internal.h
+index 4ca2787d1642e2..feb366b0142887 100644
+--- a/kernel/time/timekeeping_internal.h
++++ b/kernel/time/timekeeping_internal.h
+@@ -15,23 +15,16 @@ extern void tk_debug_account_sleep_time(const struct timespec64 *t);
+ #define tk_debug_account_sleep_time(x)
+ #endif
+
+-#ifdef CONFIG_CLOCKSOURCE_VALIDATE_LAST_CYCLE
+-static inline u64 clocksource_delta(u64 now, u64 last, u64 mask)
++static inline u64 clocksource_delta(u64 now, u64 last, u64 mask, u64 max_delta)
+ {
+ u64 ret = (now - last) & mask;
+
+ /*
+- * Prevent time going backwards by checking the MSB of mask in
+- * the result. If set, return 0.
++ * Prevent time going backwards by checking the result against
++ * @max_delta. If greater, return 0.
+ */
+- return ret & ~(mask >> 1) ? 0 : ret;
++ return ret > max_delta ? 0 : ret;
+ }
+-#else
+-static inline u64 clocksource_delta(u64 now, u64 last, u64 mask)
+-{
+- return (now - last) & mask;
+-}
+-#endif
+
+ /* Semi public for serialization of non timekeeper VDSO updates. */
+ extern raw_spinlock_t timekeeper_lock;
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 5807116bcd0bf7..366eb4c4f28e57 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -482,6 +482,8 @@ struct ring_buffer_per_cpu {
+ unsigned long nr_pages;
+ unsigned int current_context;
+ struct list_head *pages;
++ /* pages generation counter, incremented when the list changes */
++ unsigned long cnt;
+ struct buffer_page *head_page; /* read from head */
+ struct buffer_page *tail_page; /* write to tail */
+ struct buffer_page *commit_page; /* committed pages */
+@@ -1475,40 +1477,87 @@ static void rb_check_bpage(struct ring_buffer_per_cpu *cpu_buffer,
+ RB_WARN_ON(cpu_buffer, val & RB_FLAG_MASK);
+ }
+
++static bool rb_check_links(struct ring_buffer_per_cpu *cpu_buffer,
++ struct list_head *list)
++{
++ if (RB_WARN_ON(cpu_buffer,
++ rb_list_head(rb_list_head(list->next)->prev) != list))
++ return false;
++
++ if (RB_WARN_ON(cpu_buffer,
++ rb_list_head(rb_list_head(list->prev)->next) != list))
++ return false;
++
++ return true;
++}
++
+ /**
+ * rb_check_pages - integrity check of buffer pages
+ * @cpu_buffer: CPU buffer with pages to test
+ *
+ * As a safety measure we check to make sure the data pages have not
+ * been corrupted.
+- *
+- * Callers of this function need to guarantee that the list of pages doesn't get
+- * modified during the check. In particular, if it's possible that the function
+- * is invoked with concurrent readers which can swap in a new reader page then
+- * the caller should take cpu_buffer->reader_lock.
+ */
+ static void rb_check_pages(struct ring_buffer_per_cpu *cpu_buffer)
+ {
+- struct list_head *head = rb_list_head(cpu_buffer->pages);
+- struct list_head *tmp;
++ struct list_head *head, *tmp;
++ unsigned long buffer_cnt;
++ unsigned long flags;
++ int nr_loops = 0;
+
+- if (RB_WARN_ON(cpu_buffer,
+- rb_list_head(rb_list_head(head->next)->prev) != head))
++ /*
++ * Walk the linked list underpinning the ring buffer and validate all
++ * its next and prev links.
++ *
++ * The check acquires the reader_lock to avoid concurrent processing
++ * with code that could be modifying the list. However, the lock cannot
++ * be held for the entire duration of the walk, as this would make the
++ * time when interrupts are disabled non-deterministic, dependent on the
++ * ring buffer size. Therefore, the code releases and re-acquires the
++ * lock after checking each page. The ring_buffer_per_cpu.cnt variable
++ * is then used to detect if the list was modified while the lock was
++ * not held, in which case the check needs to be restarted.
++ *
++ * The code attempts to perform the check at most three times before
++ * giving up. This is acceptable because this is only a self-validation
++ * to detect problems early on. In practice, the list modification
++ * operations are fairly spaced, and so this check typically succeeds at
++ * most on the second try.
++ */
++again:
++ if (++nr_loops > 3)
+ return;
+
+- if (RB_WARN_ON(cpu_buffer,
+- rb_list_head(rb_list_head(head->prev)->next) != head))
+- return;
++ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
++ head = rb_list_head(cpu_buffer->pages);
++ if (!rb_check_links(cpu_buffer, head))
++ goto out_locked;
++ buffer_cnt = cpu_buffer->cnt;
++ tmp = head;
++ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+
+- for (tmp = rb_list_head(head->next); tmp != head; tmp = rb_list_head(tmp->next)) {
+- if (RB_WARN_ON(cpu_buffer,
+- rb_list_head(rb_list_head(tmp->next)->prev) != tmp))
+- return;
++ while (true) {
++ raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+
+- if (RB_WARN_ON(cpu_buffer,
+- rb_list_head(rb_list_head(tmp->prev)->next) != tmp))
+- return;
++ if (buffer_cnt != cpu_buffer->cnt) {
++ /* The list was updated, try again. */
++ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
++ goto again;
++ }
++
++ tmp = rb_list_head(tmp->next);
++ if (tmp == head)
++ /* The iteration circled back, all is done. */
++ goto out_locked;
++
++ if (!rb_check_links(cpu_buffer, tmp))
++ goto out_locked;
++
++ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ }
++
++out_locked:
++ raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ }
+
+ /*
+@@ -2532,6 +2581,7 @@ rb_remove_pages(struct ring_buffer_per_cpu *cpu_buffer, unsigned long nr_pages)
+
+ /* make sure pages points to a valid page in the ring buffer */
+ cpu_buffer->pages = next_page;
++ cpu_buffer->cnt++;
+
+ /* update head page */
+ if (head_bit)
+@@ -2638,6 +2688,7 @@ rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
+ * pointer to point to end of list
+ */
+ head_page->prev = last_page;
++ cpu_buffer->cnt++;
+ success = true;
+ break;
+ }
+@@ -2873,12 +2924,8 @@ int ring_buffer_resize(struct trace_buffer *buffer, unsigned long size,
+ */
+ synchronize_rcu();
+ for_each_buffer_cpu(buffer, cpu) {
+- unsigned long flags;
+-
+ cpu_buffer = buffer->buffers[cpu];
+- raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ rb_check_pages(cpu_buffer);
+- raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+ }
+ atomic_dec(&buffer->record_disabled);
+ }
+@@ -5296,6 +5343,7 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
+ rb_list_head(reader->list.next)->prev = &cpu_buffer->reader_page->list;
+ rb_inc_page(&cpu_buffer->head_page);
+
++ cpu_buffer->cnt++;
+ local_inc(&cpu_buffer->pages_read);
+
+ /* Finally update the reader page to the new head */
+@@ -5835,12 +5883,9 @@ void
+ ring_buffer_read_finish(struct ring_buffer_iter *iter)
+ {
+ struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer;
+- unsigned long flags;
+
+ /* Use this opportunity to check the integrity of the ring buffer. */
+- raw_spin_lock_irqsave(&cpu_buffer->reader_lock, flags);
+ rb_check_pages(cpu_buffer);
+- raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);
+
+ atomic_dec(&cpu_buffer->resize_disabled);
+ kfree(iter->event);
+@@ -6757,6 +6802,7 @@ int ring_buffer_subbuf_order_set(struct trace_buffer *buffer, int order)
+ /* Install the new pages, remove the head from the list */
+ cpu_buffer->pages = cpu_buffer->new_pages.next;
+ list_del_init(&cpu_buffer->new_pages);
++ cpu_buffer->cnt++;
+
+ cpu_buffer->head_page
+ = list_entry(cpu_buffer->pages, struct buffer_page, list);
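
The reworked rb_check_pages() above is an instance of a general lock-breaking
idiom: release the lock between steps of a long walk so the irqs-off window
stays bounded, and use a generation counter (bumped by every writer) to detect
that the structure changed while the lock was dropped, restarting the walk if
so. A compact sketch of the idiom, with a pthread mutex in place of the
reader_lock and hypothetical list types:

    #include <pthread.h>
    #include <stdbool.h>

    struct node { struct node *next; };
    struct plist {
            pthread_mutex_t lock;
            unsigned long cnt;      /* bumped on every modification */
            struct node *head;      /* circular singly linked list */
    };

    /* Validate each node with the lock held, but drop the lock between
     * nodes; restart if ->cnt moved while it was dropped. */
    static bool list_check(struct plist *l, bool (*ok)(struct node *))
    {
            unsigned long saved;
            struct node *n;
            int tries = 0;

    again:
            if (++tries > 3)
                    return true;    /* give up quietly, like rb_check_pages() */

            pthread_mutex_lock(&l->lock);
            saved = l->cnt;
            n = l->head;
            pthread_mutex_unlock(&l->lock);

            for (;;) {
                    pthread_mutex_lock(&l->lock);
                    if (l->cnt != saved) {          /* list changed, restart */
                            pthread_mutex_unlock(&l->lock);
                            goto again;
                    }
                    n = n->next;
                    if (n == l->head || !ok(n)) {   /* done, or corruption */
                            pthread_mutex_unlock(&l->lock);
                            return n == l->head;
                    }
                    pthread_mutex_unlock(&l->lock);
            }
    }
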
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 6a891e00aa7f46..17d2ffde0bb604 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -988,7 +988,8 @@ static inline void trace_access_lock_init(void)
+ #endif
+
+ #ifdef CONFIG_STACKTRACE
+-static void __ftrace_trace_stack(struct trace_buffer *buffer,
++static void __ftrace_trace_stack(struct trace_array *tr,
++ struct trace_buffer *buffer,
+ unsigned int trace_ctx,
+ int skip, struct pt_regs *regs);
+ static inline void ftrace_trace_stack(struct trace_array *tr,
+@@ -997,7 +998,8 @@ static inline void ftrace_trace_stack(struct trace_array *tr,
+ int skip, struct pt_regs *regs);
+
+ #else
+-static inline void __ftrace_trace_stack(struct trace_buffer *buffer,
++static inline void __ftrace_trace_stack(struct trace_array *tr,
++ struct trace_buffer *buffer,
+ unsigned int trace_ctx,
+ int skip, struct pt_regs *regs)
+ {
+@@ -2947,7 +2949,8 @@ struct ftrace_stacks {
+ static DEFINE_PER_CPU(struct ftrace_stacks, ftrace_stacks);
+ static DEFINE_PER_CPU(int, ftrace_stack_reserve);
+
+-static void __ftrace_trace_stack(struct trace_buffer *buffer,
++static void __ftrace_trace_stack(struct trace_array *tr,
++ struct trace_buffer *buffer,
+ unsigned int trace_ctx,
+ int skip, struct pt_regs *regs)
+ {
+@@ -2994,6 +2997,20 @@ static void __ftrace_trace_stack(struct trace_buffer *buffer,
+ nr_entries = stack_trace_save(fstack->calls, size, skip);
+ }
+
++#ifdef CONFIG_DYNAMIC_FTRACE
++ /* Mark entry of stack trace as trampoline code */
++ if (tr->ops && tr->ops->trampoline) {
++ unsigned long tramp_start = tr->ops->trampoline;
++ unsigned long tramp_end = tramp_start + tr->ops->trampoline_size;
++ unsigned long *calls = fstack->calls;
++
++ for (int i = 0; i < nr_entries; i++) {
++ if (calls[i] >= tramp_start && calls[i] < tramp_end)
++ calls[i] = FTRACE_TRAMPOLINE_MARKER;
++ }
++ }
++#endif
++
+ event = __trace_buffer_lock_reserve(buffer, TRACE_STACK,
+ struct_size(entry, caller, nr_entries),
+ trace_ctx);
+@@ -3024,7 +3041,7 @@ static inline void ftrace_trace_stack(struct trace_array *tr,
+ if (!(tr->trace_flags & TRACE_ITER_STACKTRACE))
+ return;
+
+- __ftrace_trace_stack(buffer, trace_ctx, skip, regs);
++ __ftrace_trace_stack(tr, buffer, trace_ctx, skip, regs);
+ }
+
+ void __trace_stack(struct trace_array *tr, unsigned int trace_ctx,
+@@ -3033,7 +3050,7 @@ void __trace_stack(struct trace_array *tr, unsigned int trace_ctx,
+ struct trace_buffer *buffer = tr->array_buffer.buffer;
+
+ if (rcu_is_watching()) {
+- __ftrace_trace_stack(buffer, trace_ctx, skip, NULL);
++ __ftrace_trace_stack(tr, buffer, trace_ctx, skip, NULL);
+ return;
+ }
+
+@@ -3050,7 +3067,7 @@ void __trace_stack(struct trace_array *tr, unsigned int trace_ctx,
+ return;
+
+ ct_irq_enter_irqson();
+- __ftrace_trace_stack(buffer, trace_ctx, skip, NULL);
++ __ftrace_trace_stack(tr, buffer, trace_ctx, skip, NULL);
+ ct_irq_exit_irqson();
+ }
+
+@@ -3067,8 +3084,8 @@ void trace_dump_stack(int skip)
+ /* Skip 1 to skip this function. */
+ skip++;
+ #endif
+- __ftrace_trace_stack(printk_trace->array_buffer.buffer,
+- tracing_gen_ctx(), skip, NULL);
++ __ftrace_trace_stack(printk_trace, printk_trace->array_buffer.buffer,
++ tracing_gen_ctx(), skip, NULL);
+ }
+ EXPORT_SYMBOL_GPL(trace_dump_stack);
+
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index c866991b9c78bf..30d6675c78cfe1 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -2176,4 +2176,11 @@ static inline int rv_init_interface(void)
+ }
+ #endif
+
++/*
++ * This is used only to distinguish a function address from trampoline
++ * code, so the value itself has no meaning.
++ */
++#define FTRACE_TRAMPOLINE_MARKER ((unsigned long) INT_MAX)
++
+ #endif /* _LINUX_KERNEL_TRACE_H */
+diff --git a/kernel/trace/trace_clock.c b/kernel/trace/trace_clock.c
+index 4702efb00ff21e..4cb2ebc439be68 100644
+--- a/kernel/trace/trace_clock.c
++++ b/kernel/trace/trace_clock.c
+@@ -154,5 +154,5 @@ static atomic64_t trace_counter;
+ */
+ u64 notrace trace_clock_counter(void)
+ {
+- return atomic64_add_return(1, &trace_counter);
++ return atomic64_inc_return(&trace_counter);
+ }
+diff --git a/kernel/trace/trace_eprobe.c b/kernel/trace/trace_eprobe.c
+index ebda68ee9abff9..be8be0c1aaf0f1 100644
+--- a/kernel/trace/trace_eprobe.c
++++ b/kernel/trace/trace_eprobe.c
+@@ -963,6 +963,11 @@ static int __trace_eprobe_create(int argc, const char *argv[])
+ goto error;
+ }
+ ret = dyn_event_add(&ep->devent, &ep->tp.event->call);
++ if (ret < 0) {
++ trace_probe_unregister_event_call(&ep->tp);
++ mutex_unlock(&event_mutex);
++ goto error;
++ }
+ mutex_unlock(&event_mutex);
+ return ret;
+ parse_error:
+diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
+index 868f2f912f2809..c14573e5a90337 100644
+--- a/kernel/trace/trace_output.c
++++ b/kernel/trace/trace_output.c
+@@ -1246,6 +1246,10 @@ static enum print_line_t trace_stack_print(struct trace_iterator *iter,
+ break;
+
+ trace_seq_puts(s, " => ");
++ if ((*p) == FTRACE_TRAMPOLINE_MARKER) {
++ trace_seq_puts(s, "[FTRACE TRAMPOLINE]\n");
++ continue;
++ }
+ seq_print_ip_sym(s, (*p) + delta, flags);
+ trace_seq_putc(s, '\n');
+ }
+diff --git a/kernel/trace/trace_syscalls.c b/kernel/trace/trace_syscalls.c
+index 785733245eadf5..f9b21bac9d45e6 100644
+--- a/kernel/trace/trace_syscalls.c
++++ b/kernel/trace/trace_syscalls.c
+@@ -299,6 +299,12 @@ static void ftrace_syscall_enter(void *data, struct pt_regs *regs, long id)
+ int syscall_nr;
+ int size;
+
++ /*
++ * Syscall probe called with preemption enabled, but the ring
++ * buffer and per-cpu data require preemption to be disabled.
++ */
++ guard(preempt_notrace)();
++
+ syscall_nr = trace_get_syscall_nr(current, regs);
+ if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
+ return;
+@@ -338,6 +344,12 @@ static void ftrace_syscall_exit(void *data, struct pt_regs *regs, long ret)
+ struct trace_event_buffer fbuffer;
+ int syscall_nr;
+
++ /*
++ * Syscall probe called with preemption enabled, but the ring
++ * buffer and per-cpu data require preemption to be disabled.
++ */
++ guard(preempt_notrace)();
++
+ syscall_nr = trace_get_syscall_nr(current, regs);
+ if (syscall_nr < 0 || syscall_nr >= NR_syscalls)
+ return;
+diff --git a/kernel/trace/tracing_map.c b/kernel/trace/tracing_map.c
+index 3a56e7c8aa4f67..1921ade45be38b 100644
+--- a/kernel/trace/tracing_map.c
++++ b/kernel/trace/tracing_map.c
+@@ -845,15 +845,11 @@ int tracing_map_init(struct tracing_map *map)
+ static int cmp_entries_dup(const void *A, const void *B)
+ {
+ const struct tracing_map_sort_entry *a, *b;
+- int ret = 0;
+
+ a = *(const struct tracing_map_sort_entry **)A;
+ b = *(const struct tracing_map_sort_entry **)B;
+
+- if (memcmp(a->key, b->key, a->elt->map->key_size))
+- ret = 1;
+-
+- return ret;
++ return memcmp(a->key, b->key, a->elt->map->key_size);
+ }
+
+ static int cmp_entries_sum(const void *A, const void *B)
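
The tracing_map change above is subtle: the old cmp_entries_dup() returned 1
for every unequal pair, so cmp(a, b) and cmp(b, a) could both be positive,
which breaks the ordering contract that sort() depends on. Returning memcmp()
directly gives a consistent total order. The same contract with qsort(),
illustrative names only:

    #include <stdlib.h>
    #include <string.h>

    #define KEY_SIZE 8

    /* A valid comparator must never let cmp(a, b) and cmp(b, a) both be
     * positive. memcmp() satisfies this; "return unequal ? 1 : 0" does not. */
    static int cmp_keys(const void *a, const void *b)
    {
            return memcmp(a, b, KEY_SIZE);
    }

    static void sort_keys(unsigned char (*keys)[KEY_SIZE], size_t n)
    {
            qsort(keys, n, KEY_SIZE, cmp_keys);
    }
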
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 7312ae7c3cc57b..3f9c238bb58ea3 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1328,19 +1328,6 @@ config SCHEDSTATS
+
+ endmenu
+
+-config DEBUG_TIMEKEEPING
+- bool "Enable extra timekeeping sanity checking"
+- help
+- This option will enable additional timekeeping sanity checks
+- which may be helpful when diagnosing issues where timekeeping
+- problems are suspected.
+-
+- This may include checks in the timekeeping hotpaths, so this
+- option may have a (very small) performance impact to some
+- workloads.
+-
+- If unsure, say N.
+-
+ config DEBUG_PREEMPT
+ bool "Debug preemptible kernel"
+ depends on DEBUG_KERNEL && PREEMPTION && TRACE_IRQFLAGS_SUPPORT
+diff --git a/lib/stackdepot.c b/lib/stackdepot.c
+index 5ed34cc963fc38..245d5b41669995 100644
+--- a/lib/stackdepot.c
++++ b/lib/stackdepot.c
+@@ -630,7 +630,15 @@ depot_stack_handle_t stack_depot_save_flags(unsigned long *entries,
+ prealloc = page_address(page);
+ }
+
+- raw_spin_lock_irqsave(&pool_lock, flags);
++ if (in_nmi()) {
++ /* We can never allocate in NMI context. */
++ WARN_ON_ONCE(can_alloc);
++ /* Best effort; bail if we fail to take the lock. */
++ if (!raw_spin_trylock_irqsave(&pool_lock, flags))
++ goto exit;
++ } else {
++ raw_spin_lock_irqsave(&pool_lock, flags);
++ }
+ printk_deferred_enter();
+
+ /* Try to find again, to avoid concurrently inserting duplicates. */
+diff --git a/lib/stackinit_kunit.c b/lib/stackinit_kunit.c
+index c14c6f8e6308df..c40818ec9c1801 100644
+--- a/lib/stackinit_kunit.c
++++ b/lib/stackinit_kunit.c
+@@ -212,6 +212,7 @@ static noinline void test_ ## name (struct kunit *test) \
+ static noinline DO_NOTHING_TYPE_ ## which(var_type) \
+ do_nothing_ ## name(var_type *ptr) \
+ { \
++ OPTIMIZER_HIDE_VAR(ptr); \
+ /* Will always be true, but compiler doesn't know. */ \
+ if ((unsigned long)ptr > 0x2) \
+ return DO_NOTHING_RETURN_ ## which(ptr); \
+diff --git a/mm/debug.c b/mm/debug.c
+index aa57d3ffd4edf6..95b6ab809c0ee6 100644
+--- a/mm/debug.c
++++ b/mm/debug.c
+@@ -124,19 +124,22 @@ static void __dump_page(const struct page *page)
+ {
+ struct folio *foliop, folio;
+ struct page precise;
++ unsigned long head;
+ unsigned long pfn = page_to_pfn(page);
+ unsigned long idx, nr_pages = 1;
+ int loops = 5;
+
+ again:
+ memcpy(&precise, page, sizeof(*page));
+- foliop = page_folio(&precise);
+- if (foliop == (struct folio *)&precise) {
++ head = precise.compound_head;
++ if ((head & 1) == 0) {
++ foliop = (struct folio *)&precise;
+ idx = 0;
+ if (!folio_test_large(foliop))
+ goto dump;
+ foliop = (struct folio *)page;
+ } else {
++ foliop = (struct folio *)(head - 1);
+ idx = folio_page_idx(foliop, page);
+ }
+
+diff --git a/mm/gup.c b/mm/gup.c
+index ad0c8922dac3cb..7053f8114e0127 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -52,7 +52,12 @@ static inline void sanity_check_pinned_pages(struct page **pages,
+ */
+ for (; npages; npages--, pages++) {
+ struct page *page = *pages;
+- struct folio *folio = page_folio(page);
++ struct folio *folio;
++
++ if (!page)
++ continue;
++
++ folio = page_folio(page);
+
+ if (is_zero_page(page) ||
+ !folio_test_anon(folio))
+@@ -409,6 +414,10 @@ void unpin_user_pages(struct page **pages, unsigned long npages)
+
+ sanity_check_pinned_pages(pages, npages);
+ for (i = 0; i < npages; i += nr) {
++ if (!pages[i]) {
++ nr = 1;
++ continue;
++ }
+ folio = gup_folio_next(pages, npages, i, &nr);
+ gup_put_folio(folio, nr, FOLL_PIN);
+ }
+diff --git a/mm/kasan/report.c b/mm/kasan/report.c
+index b48c768acc84d2..c7c0083203cb73 100644
+--- a/mm/kasan/report.c
++++ b/mm/kasan/report.c
+@@ -200,7 +200,7 @@ static inline void fail_non_kasan_kunit_test(void) { }
+
+ #endif /* CONFIG_KUNIT */
+
+-static DEFINE_SPINLOCK(report_lock);
++static DEFINE_RAW_SPINLOCK(report_lock);
+
+ static void start_report(unsigned long *flags, bool sync)
+ {
+@@ -211,7 +211,7 @@ static void start_report(unsigned long *flags, bool sync)
+ lockdep_off();
+ /* Make sure we don't end up in loop. */
+ report_suppress_start();
+- spin_lock_irqsave(&report_lock, *flags);
++ raw_spin_lock_irqsave(&report_lock, *flags);
+ pr_err("==================================================================\n");
+ }
+
+@@ -221,7 +221,7 @@ static void end_report(unsigned long *flags, const void *addr, bool is_write)
+ trace_error_report_end(ERROR_DETECTOR_KASAN,
+ (unsigned long)addr);
+ pr_err("==================================================================\n");
+- spin_unlock_irqrestore(&report_lock, *flags);
++ raw_spin_unlock_irqrestore(&report_lock, *flags);
+ if (!test_bit(KASAN_BIT_MULTI_SHOT, &kasan_flags))
+ check_panic_on_warn("KASAN");
+ switch (kasan_arg_fault) {
+diff --git a/mm/memblock.c b/mm/memblock.c
+index 0389ce5cd281e1..095c18b5c430da 100644
+--- a/mm/memblock.c
++++ b/mm/memblock.c
+@@ -735,7 +735,7 @@ int __init_memblock memblock_add(phys_addr_t base, phys_addr_t size)
+ /**
+ * memblock_validate_numa_coverage - check if amount of memory with
+ * no node ID assigned is less than a threshold
+- * @threshold_bytes: maximal number of pages that can have unassigned node
++ * @threshold_bytes: maximal memory size that can have unassigned node
+ * ID (in bytes).
+ *
+ * A buggy firmware may report memory that does not belong to any node.
+@@ -755,7 +755,7 @@ bool __init_memblock memblock_validate_numa_coverage(unsigned long threshold_byt
+ nr_pages += end_pfn - start_pfn;
+ }
+
+- if ((nr_pages << PAGE_SHIFT) >= threshold_bytes) {
++ if ((nr_pages << PAGE_SHIFT) > threshold_bytes) {
+ mem_size_mb = memblock_phys_mem_size() >> 20;
+ pr_err("NUMA: no nodes coverage for %luMB of %luMB RAM\n",
+ (nr_pages << PAGE_SHIFT) >> 20, mem_size_mb);
+diff --git a/mm/memcontrol-v1.h b/mm/memcontrol-v1.h
+index c0672e25bcdb20..6fbc78e0e440ce 100644
+--- a/mm/memcontrol-v1.h
++++ b/mm/memcontrol-v1.h
+@@ -38,7 +38,7 @@ void mem_cgroup_id_put_many(struct mem_cgroup *memcg, unsigned int n);
+ iter = mem_cgroup_iter(NULL, iter, NULL))
+
+ /* Whether legacy memory+swap accounting is active */
+-static bool do_memsw_account(void)
++static inline bool do_memsw_account(void)
+ {
+ return !cgroup_subsys_on_dfl(memory_cgrp_subsys);
+ }
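
The memcontrol-v1.h one-liner is the usual header rule: a plain static
function defined in a header becomes a separate, possibly unused copy in every
translation unit that includes it (and trips -Wunused-function), while static
inline avoids the warning and lets each unit inline the call. A hypothetical
header for illustration:

    /* some_header.h */
    #ifndef SOME_HEADER_H
    #define SOME_HEADER_H

    #include <stdbool.h>

    /* Fine in a header: no unused-function warning in files that include
     * this without calling it. */
    static inline bool do_account(void)
    {
            return true;
    }

    /* A plain "static bool do_account(void)" here would warn in every .c
     * file that includes the header but never calls the function. */

    #endif /* SOME_HEADER_H */
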
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index b646fab3e45e10..7b908c4cc7eecb 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -1080,6 +1080,10 @@ static long migrate_to_node(struct mm_struct *mm, int source, int dest,
+
+ mmap_read_lock(mm);
+ vma = find_vma(mm, 0);
++ if (unlikely(!vma)) {
++ mmap_read_unlock(mm);
++ return 0;
++ }
+
+ /*
+ * This does not migrate the range, but isolates all pages that
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 4f6e566d52faa6..7fb4c1e97175f9 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -901,6 +901,7 @@ __get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
+ if (get_area) {
+ addr = get_area(file, addr, len, pgoff, flags);
+ } else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)
++ && !addr /* no hint */
+ && IS_ALIGNED(len, PMD_SIZE)) {
+ /* Ensures that larger anonymous mappings are THP aligned. */
+ addr = thp_get_unmapped_area_vmflags(file, addr, len,
+diff --git a/mm/readahead.c b/mm/readahead.c
+index 3dc6c7a128dd35..99fdb2b5b56862 100644
+--- a/mm/readahead.c
++++ b/mm/readahead.c
+@@ -453,8 +453,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
+ struct file_ra_state *ra, unsigned int new_order)
+ {
+ struct address_space *mapping = ractl->mapping;
+- pgoff_t start = readahead_index(ractl);
+- pgoff_t index = start;
++ pgoff_t index = readahead_index(ractl);
+ unsigned int min_order = mapping_min_folio_order(mapping);
+ pgoff_t limit = (i_size_read(mapping->host) - 1) >> PAGE_SHIFT;
+ pgoff_t mark = index + ra->size - ra->async_size;
+@@ -517,7 +516,7 @@ void page_cache_ra_order(struct readahead_control *ractl,
+ if (!err)
+ return;
+ fallback:
+- do_page_cache_ra(ractl, ra->size - (index - start), ra->async_size);
++ do_page_cache_ra(ractl, ra->size, ra->async_size);
+ }
+
+ static unsigned long ractl_max_pages(struct readahead_control *ractl,
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 5480b77f4167d7..0161cb4391e1d1 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -4093,7 +4093,8 @@ void *vrealloc_noprof(const void *p, size_t size, gfp_t flags)
+ /* Zero out spare memory. */
+ if (want_init_on_alloc(flags))
+ memset((void *)p + size, 0, old_size - size);
+-
++ kasan_poison_vmalloc(p + size, old_size - size);
++ kasan_unpoison_vmalloc(p, size, KASAN_VMALLOC_PROT_NORMAL);
+ return (void *)p;
+ }
+
+diff --git a/net/bluetooth/hci_conn.c b/net/bluetooth/hci_conn.c
+index 6354cdf9c2b372..e6591f487a5119 100644
+--- a/net/bluetooth/hci_conn.c
++++ b/net/bluetooth/hci_conn.c
+@@ -1128,9 +1128,9 @@ void hci_conn_del(struct hci_conn *conn)
+
+ hci_conn_unlink(conn);
+
+- cancel_delayed_work_sync(&conn->disc_work);
+- cancel_delayed_work_sync(&conn->auto_accept_work);
+- cancel_delayed_work_sync(&conn->idle_work);
++ disable_delayed_work_sync(&conn->disc_work);
++ disable_delayed_work_sync(&conn->auto_accept_work);
++ disable_delayed_work_sync(&conn->idle_work);
+
+ if (conn->type == ACL_LINK) {
+ /* Unacked frames */
+@@ -2345,13 +2345,9 @@ struct hci_conn *hci_bind_bis(struct hci_dev *hdev, bdaddr_t *dst,
+ conn->iso_qos.bcast.big);
+ if (parent && parent != conn) {
+ link = hci_conn_link(parent, conn);
+- if (!link) {
+- hci_conn_drop(conn);
+- return ERR_PTR(-ENOLINK);
+- }
+-
+- /* Link takes the refcount */
+ hci_conn_drop(conn);
++ if (!link)
++ return ERR_PTR(-ENOLINK);
+ }
+
+ return conn;
+@@ -2441,15 +2437,12 @@ struct hci_conn *hci_connect_cis(struct hci_dev *hdev, bdaddr_t *dst,
+ }
+
+ link = hci_conn_link(le, cis);
++ hci_conn_drop(cis);
+ if (!link) {
+ hci_conn_drop(le);
+- hci_conn_drop(cis);
+ return ERR_PTR(-ENOLINK);
+ }
+
+- /* Link takes the refcount */
+- hci_conn_drop(cis);
+-
+ cis->state = BT_CONNECT;
+
+ hci_le_create_cis_pending(hdev);
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 0ac354db817794..72439764186ed2 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -3771,18 +3771,22 @@ static void hci_tx_work(struct work_struct *work)
+ /* ACL data packet */
+ static void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+- struct hci_acl_hdr *hdr = (void *) skb->data;
++ struct hci_acl_hdr *hdr;
+ struct hci_conn *conn;
+ __u16 handle, flags;
+
+- skb_pull(skb, HCI_ACL_HDR_SIZE);
++ hdr = skb_pull_data(skb, sizeof(*hdr));
++ if (!hdr) {
++ bt_dev_err(hdev, "ACL packet too small");
++ goto drop;
++ }
+
+ handle = __le16_to_cpu(hdr->handle);
+ flags = hci_flags(handle);
+ handle = hci_handle(handle);
+
+- BT_DBG("%s len %d handle 0x%4.4x flags 0x%4.4x", hdev->name, skb->len,
+- handle, flags);
++ bt_dev_dbg(hdev, "len %d handle 0x%4.4x flags 0x%4.4x", skb->len,
++ handle, flags);
+
+ hdev->stat.acl_rx++;
+
+@@ -3801,6 +3805,7 @@ static void hci_acldata_packet(struct hci_dev *hdev, struct sk_buff *skb)
+ handle);
+ }
+
++drop:
+ kfree_skb(skb);
+ }
+
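
The hci_acldata_packet() fix above swaps an unconditional skb_pull() for
skb_pull_data(), which returns NULL when the buffer is shorter than the header
instead of leaving the data pointer past the valid bytes. The same defensive
shape over a plain byte buffer (hypothetical types; alignment and endianness
glossed over):

    #include <stddef.h>
    #include <stdint.h>

    struct acl_hdr {
            uint16_t handle;
            uint16_t dlen;
    };

    /* Consume a header only if the buffer actually contains one; on
     * success advance the cursor past it, otherwise return NULL so the
     * caller can drop the packet. */
    static const struct acl_hdr *pull_acl_hdr(const uint8_t **data,
                                              size_t *len)
    {
            const struct acl_hdr *hdr;

            if (*len < sizeof(*hdr))
                    return NULL;
            hdr = (const struct acl_hdr *)*data;
            *data += sizeof(*hdr);
            *len -= sizeof(*hdr);
            return hdr;
    }
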
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 2e4bd3e961ce09..2b5ba8acd1d84a 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3626,6 +3626,13 @@ static void hci_encrypt_change_evt(struct hci_dev *hdev, void *data,
+ goto unlock;
+ }
+
++ /* We skip the WRITE_AUTH_PAYLOAD_TIMEOUT for ATS2851 based controllers
++ * to avoid unexpected SMP command errors when pairing.
++ */
++ if (test_bit(HCI_QUIRK_BROKEN_WRITE_AUTH_PAYLOAD_TIMEOUT,
++ &hdev->quirks))
++ goto notify;
++
+ /* Set the default Authenticated Payload Timeout after
+ * an LE Link is established. As per Core Spec v5.0, Vol 2, Part B
+ * Section 3.3, the HCI command WRITE_AUTH_PAYLOAD_TIMEOUT should be
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index c0203a2b510756..c86f4e42e69cab 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -4842,6 +4842,13 @@ static const struct {
+ HCI_QUIRK_BROKEN(SET_RPA_TIMEOUT,
+ "HCI LE Set Random Private Address Timeout command is "
+ "advertised, but not supported."),
++ HCI_QUIRK_BROKEN(EXT_CREATE_CONN,
++ "HCI LE Extended Create Connection command is "
++ "advertised, but not supported."),
++ HCI_QUIRK_BROKEN(WRITE_AUTH_PAYLOAD_TIMEOUT,
++ "HCI WRITE AUTH PAYLOAD TIMEOUT command leads "
++ "to unexpected SMP errors when pairing "
++ "and will not be used."),
+ HCI_QUIRK_BROKEN(LE_CODED,
+ "HCI LE Coded PHY feature bit is set, "
+ "but its usage is not supported.")
+@@ -6477,7 +6484,7 @@ static int hci_le_create_conn_sync(struct hci_dev *hdev, void *data)
+ &own_addr_type);
+ if (err)
+ goto done;
+-
++ /* Send command LE Extended Create Connection if supported */
+ if (use_ext_conn(hdev)) {
+ err = hci_le_ext_create_conn_sync(hdev, conn, own_addr_type);
+ goto done;
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index ba437c6f6ee591..18e89e764f3b42 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1886,6 +1886,7 @@ static struct sock *l2cap_sock_alloc(struct net *net, struct socket *sock,
+ chan = l2cap_chan_create();
+ if (!chan) {
+ sk_free(sk);
++ sock->sk = NULL;
+ return NULL;
+ }
+
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index 8af1bf518321fd..40766f8119ed9c 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -274,13 +274,13 @@ static struct sock *rfcomm_sock_alloc(struct net *net, struct socket *sock,
+ struct rfcomm_dlc *d;
+ struct sock *sk;
+
+- sk = bt_sock_alloc(net, sock, &rfcomm_proto, proto, prio, kern);
+- if (!sk)
++ d = rfcomm_dlc_alloc(prio);
++ if (!d)
+ return NULL;
+
+- d = rfcomm_dlc_alloc(prio);
+- if (!d) {
+- sk_free(sk);
++ sk = bt_sock_alloc(net, sock, &rfcomm_proto, proto, prio, kern);
++ if (!sk) {
++ rfcomm_dlc_free(d);
+ return NULL;
+ }
+
+diff --git a/net/can/af_can.c b/net/can/af_can.c
+index 707576eeeb5823..01f3fbb3b67dc6 100644
+--- a/net/can/af_can.c
++++ b/net/can/af_can.c
+@@ -171,6 +171,7 @@ static int can_create(struct net *net, struct socket *sock, int protocol,
+ /* release sk on errors */
+ sock_orphan(sk);
+ sock_put(sk);
++ sock->sk = NULL;
+ }
+
+ errout:
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 319f47df33300c..95f7a7e65a73fa 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -1505,7 +1505,7 @@ static struct j1939_session *j1939_session_new(struct j1939_priv *priv,
+ session->state = J1939_SESSION_NEW;
+
+ skb_queue_head_init(&session->skb_queue);
+- skb_queue_tail(&session->skb_queue, skb);
++ skb_queue_tail(&session->skb_queue, skb_get(skb));
+
+ skcb = j1939_skb_to_cb(skb);
+ memcpy(&session->skcb, skcb, sizeof(session->skcb));
+diff --git a/net/core/link_watch.c b/net/core/link_watch.c
+index ab150641142aa1..1b4d39e3808427 100644
+--- a/net/core/link_watch.c
++++ b/net/core/link_watch.c
+@@ -45,9 +45,14 @@ static unsigned int default_operstate(const struct net_device *dev)
+ int iflink = dev_get_iflink(dev);
+ struct net_device *peer;
+
+- if (iflink == dev->ifindex)
++ /* If called from netdev_run_todo()/linkwatch_sync_dev(),
++ * dev_net(dev) may already have been freed, and RTNL is not held.
++ */
++ if (dev->reg_state == NETREG_UNREGISTERED ||
++ iflink == dev->ifindex)
+ return IF_OPER_DOWN;
+
++ ASSERT_RTNL();
+ peer = __dev_get_by_index(dev_net(dev), iflink);
+ if (!peer)
+ return IF_OPER_DOWN;
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 77b819cd995b25..cc58315a40a79c 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -2876,6 +2876,7 @@ static int neigh_dump_info(struct sk_buff *skb, struct netlink_callback *cb)
+ err = neigh_valid_dump_req(nlh, cb->strict_check, &filter, cb->extack);
+ if (err < 0 && cb->strict_check)
+ return err;
++ err = 0;
+
+ s_t = cb->args[0];
+
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index aa49b92e9194ba..45fb60bc480395 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -626,7 +626,7 @@ int __netpoll_setup(struct netpoll *np, struct net_device *ndev)
+ goto out;
+ }
+
+- if (!ndev->npinfo) {
++ if (!rcu_access_pointer(ndev->npinfo)) {
+ npinfo = kmalloc(sizeof(*npinfo), GFP_KERNEL);
+ if (!npinfo) {
+ err = -ENOMEM;
+diff --git a/net/dccp/feat.c b/net/dccp/feat.c
+index 54086bb05c42cd..f7554dcdaaba93 100644
+--- a/net/dccp/feat.c
++++ b/net/dccp/feat.c
+@@ -1166,8 +1166,12 @@ static u8 dccp_feat_change_recv(struct list_head *fn, u8 is_mandatory, u8 opt,
+ goto not_valid_or_not_known;
+ }
+
+- return dccp_feat_push_confirm(fn, feat, local, &fval);
++ if (dccp_feat_push_confirm(fn, feat, local, &fval)) {
++ kfree(fval.sp.vec);
++ return DCCP_RESET_CODE_TOO_BUSY;
++ }
+
++ return 0;
+ } else if (entry->state == FEAT_UNSTABLE) { /* 6.6.2 */
+ return 0;
+ }
+diff --git a/net/ethtool/bitset.c b/net/ethtool/bitset.c
+index 0515d6604b3b9d..f0883357d12e52 100644
+--- a/net/ethtool/bitset.c
++++ b/net/ethtool/bitset.c
+@@ -425,12 +425,32 @@ static int ethnl_parse_bit(unsigned int *index, bool *val, unsigned int nbits,
+ return 0;
+ }
+
++/**
++ * ethnl_bitmap32_equal() - Compare two bitmaps
++ * @map1: first bitmap
++ * @map2: second bitmap
++ * @nbits: bit size to compare
++ *
++ * Return: true if first @nbits are equal, false if not
++ */
++static bool ethnl_bitmap32_equal(const u32 *map1, const u32 *map2,
++ unsigned int nbits)
++{
++ if (memcmp(map1, map2, nbits / 32 * sizeof(u32)))
++ return false;
++ if (nbits % 32 == 0)
++ return true;
++ return !((map1[nbits / 32] ^ map2[nbits / 32]) &
++ ethnl_lower_bits(nbits % 32));
++}
++
+ static int
+ ethnl_update_bitset32_verbose(u32 *bitmap, unsigned int nbits,
+ const struct nlattr *attr, struct nlattr **tb,
+ ethnl_string_array_t names,
+ struct netlink_ext_ack *extack, bool *mod)
+ {
++ u32 *saved_bitmap = NULL;
+ struct nlattr *bit_attr;
+ bool no_mask;
+ int rem;
+@@ -448,8 +468,20 @@ ethnl_update_bitset32_verbose(u32 *bitmap, unsigned int nbits,
+ }
+
+ no_mask = tb[ETHTOOL_A_BITSET_NOMASK];
+- if (no_mask)
+- ethnl_bitmap32_clear(bitmap, 0, nbits, mod);
++ if (no_mask) {
++ unsigned int nwords = DIV_ROUND_UP(nbits, 32);
++ unsigned int nbytes = nwords * sizeof(u32);
++ bool dummy;
++
++ /* The bitmap size is only the size of the map part without
++ * its mask part.
++ */
++ saved_bitmap = kcalloc(nwords, sizeof(u32), GFP_KERNEL);
++ if (!saved_bitmap)
++ return -ENOMEM;
++ memcpy(saved_bitmap, bitmap, nbytes);
++ ethnl_bitmap32_clear(bitmap, 0, nbits, &dummy);
++ }
+
+ nla_for_each_nested(bit_attr, tb[ETHTOOL_A_BITSET_BITS], rem) {
+ bool old_val, new_val;
+@@ -458,22 +490,30 @@ ethnl_update_bitset32_verbose(u32 *bitmap, unsigned int nbits,
+ if (nla_type(bit_attr) != ETHTOOL_A_BITSET_BITS_BIT) {
+ NL_SET_ERR_MSG_ATTR(extack, bit_attr,
+ "only ETHTOOL_A_BITSET_BITS_BIT allowed in ETHTOOL_A_BITSET_BITS");
++ kfree(saved_bitmap);
+ return -EINVAL;
+ }
+ ret = ethnl_parse_bit(&idx, &new_val, nbits, bit_attr, no_mask,
+ names, extack);
+- if (ret < 0)
++ if (ret < 0) {
++ kfree(saved_bitmap);
+ return ret;
++ }
+ old_val = bitmap[idx / 32] & ((u32)1 << (idx % 32));
+ if (new_val != old_val) {
+ if (new_val)
+ bitmap[idx / 32] |= ((u32)1 << (idx % 32));
+ else
+ bitmap[idx / 32] &= ~((u32)1 << (idx % 32));
+- *mod = true;
++ if (!no_mask)
++ *mod = true;
+ }
+ }
+
++ if (no_mask && !ethnl_bitmap32_equal(saved_bitmap, bitmap, nbits))
++ *mod = true;
++
++ kfree(saved_bitmap);
+ return 0;
+ }
+
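
ethnl_bitmap32_equal() above compares whole 32-bit words with memcmp() and
then masks the trailing partial word so that stale bits beyond nbits cannot
produce a false mismatch. A standalone equivalent, with the kernel's
ethnl_lower_bits() ("lowest n bits set") helper open-coded:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    static bool bitmap32_equal(const uint32_t *map1, const uint32_t *map2,
                               unsigned int nbits)
    {
            /* Full words first... */
            if (memcmp(map1, map2, nbits / 32 * sizeof(uint32_t)))
                    return false;
            if (nbits % 32 == 0)
                    return true;
            /* ...then only the low (nbits % 32) bits of the last word. */
            return !((map1[nbits / 32] ^ map2[nbits / 32]) &
                     ((UINT32_C(1) << (nbits % 32)) - 1));
    }
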
+diff --git a/net/hsr/hsr_device.c b/net/hsr/hsr_device.c
+index f630d6645636dd..44048d7538ddc3 100644
+--- a/net/hsr/hsr_device.c
++++ b/net/hsr/hsr_device.c
+@@ -246,20 +246,22 @@ static const struct header_ops hsr_header_ops = {
+ .parse = eth_header_parse,
+ };
+
+-static struct sk_buff *hsr_init_skb(struct hsr_port *master)
++static struct sk_buff *hsr_init_skb(struct hsr_port *master, int extra)
+ {
+ struct hsr_priv *hsr = master->hsr;
+ struct sk_buff *skb;
+ int hlen, tlen;
++ int len;
+
+ hlen = LL_RESERVED_SPACE(master->dev);
+ tlen = master->dev->needed_tailroom;
++ len = sizeof(struct hsr_sup_tag) + sizeof(struct hsr_sup_payload);
+ /* skb size is same for PRP/HSR frames, only difference
+ * being, for PRP it is a trailer and for HSR it is a
+- * header
++ * header.
++ * RedBox might use @extra more bytes.
+ */
+- skb = dev_alloc_skb(sizeof(struct hsr_sup_tag) +
+- sizeof(struct hsr_sup_payload) + hlen + tlen);
++ skb = dev_alloc_skb(len + extra + hlen + tlen);
+
+ if (!skb)
+ return skb;
+@@ -295,6 +297,7 @@ static void send_hsr_supervision_frame(struct hsr_port *port,
+ struct hsr_sup_tlv *hsr_stlv;
+ struct hsr_sup_tag *hsr_stag;
+ struct sk_buff *skb;
++ int extra = 0;
+
+ *interval = msecs_to_jiffies(HSR_LIFE_CHECK_INTERVAL);
+ if (hsr->announce_count < 3 && hsr->prot_version == 0) {
+@@ -303,7 +306,11 @@ static void send_hsr_supervision_frame(struct hsr_port *port,
+ hsr->announce_count++;
+ }
+
+- skb = hsr_init_skb(port);
++ if (hsr->redbox)
++ extra = sizeof(struct hsr_sup_tlv) +
++ sizeof(struct hsr_sup_payload);
++
++ skb = hsr_init_skb(port, extra);
+ if (!skb) {
+ netdev_warn_once(port->dev, "HSR: Could not send supervision frame\n");
+ return;
+@@ -362,7 +369,7 @@ static void send_prp_supervision_frame(struct hsr_port *master,
+ struct hsr_sup_tag *hsr_stag;
+ struct sk_buff *skb;
+
+- skb = hsr_init_skb(master);
++ skb = hsr_init_skb(master, 0);
+ if (!skb) {
+ netdev_warn_once(master->dev, "PRP: Could not send supervision frame\n");
+ return;
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index b38060246e62e8..40c5fbbd155d66 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -688,6 +688,8 @@ static int fill_frame_info(struct hsr_frame_info *frame,
+ frame->is_vlan = true;
+
+ if (frame->is_vlan) {
++ if (skb->mac_len < offsetofend(struct hsr_vlan_ethhdr, vlanhdr))
++ return -EINVAL;
+ vlan_hdr = (struct hsr_vlan_ethhdr *)ethhdr;
+ proto = vlan_hdr->vlanhdr.h_vlan_encapsulated_proto;
+ /* FIXME: */
+diff --git a/net/ieee802154/socket.c b/net/ieee802154/socket.c
+index 990a83455dcfb5..18d267921bb531 100644
+--- a/net/ieee802154/socket.c
++++ b/net/ieee802154/socket.c
+@@ -1043,19 +1043,21 @@ static int ieee802154_create(struct net *net, struct socket *sock,
+
+ if (sk->sk_prot->hash) {
+ rc = sk->sk_prot->hash(sk);
+- if (rc) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (rc)
++ goto out_sk_release;
+ }
+
+ if (sk->sk_prot->init) {
+ rc = sk->sk_prot->init(sk);
+ if (rc)
+- sk_common_release(sk);
++ goto out_sk_release;
+ }
+ out:
+ return rc;
++out_sk_release:
++ sk_common_release(sk);
++ sock->sk = NULL;
++ goto out;
+ }
+
+ static const struct net_proto_family ieee802154_family_ops = {
+diff --git a/net/ipv4/af_inet.c b/net/ipv4/af_inet.c
+index b24d74616637a0..8095e82de8083d 100644
+--- a/net/ipv4/af_inet.c
++++ b/net/ipv4/af_inet.c
+@@ -376,32 +376,30 @@ static int inet_create(struct net *net, struct socket *sock, int protocol,
+ inet->inet_sport = htons(inet->inet_num);
+ /* Add to protocol hash chains. */
+ err = sk->sk_prot->hash(sk);
+- if (err) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (err)
++ goto out_sk_release;
+ }
+
+ if (sk->sk_prot->init) {
+ err = sk->sk_prot->init(sk);
+- if (err) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (err)
++ goto out_sk_release;
+ }
+
+ if (!kern) {
+ err = BPF_CGROUP_RUN_PROG_INET_SOCK(sk);
+- if (err) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (err)
++ goto out_sk_release;
+ }
+ out:
+ return err;
+ out_rcu_unlock:
+ rcu_read_unlock();
+ goto out;
++out_sk_release:
++ sk_common_release(sk);
++ sock->sk = NULL;
++ goto out;
+ }
+
+
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index e1384e7331d82f..c3ad41573b33ea 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -519,6 +519,9 @@ static struct rtable *icmp_route_lookup(struct net *net,
+ if (!IS_ERR(dst)) {
+ if (rt != rt2)
+ return rt;
++ if (inet_addr_type_dev_table(net, route_lookup_dev,
++ fl4->daddr) == RTN_LOCAL)
++ return rt;
+ } else if (PTR_ERR(dst) == -EPERM) {
+ rt = NULL;
+ } else {
+diff --git a/net/ipv4/tcp_ao.c b/net/ipv4/tcp_ao.c
+index db6516092daf5b..bbb8d5f0eae7d3 100644
+--- a/net/ipv4/tcp_ao.c
++++ b/net/ipv4/tcp_ao.c
+@@ -109,12 +109,13 @@ bool tcp_ao_ignore_icmp(const struct sock *sk, int family, int type, int code)
+ * it's known that the keys in ao_info are matching peer's
+ * family/address/VRF/etc.
+ */
+-struct tcp_ao_key *tcp_ao_established_key(struct tcp_ao_info *ao,
++struct tcp_ao_key *tcp_ao_established_key(const struct sock *sk,
++ struct tcp_ao_info *ao,
+ int sndid, int rcvid)
+ {
+ struct tcp_ao_key *key;
+
+- hlist_for_each_entry_rcu(key, &ao->head, node) {
++ hlist_for_each_entry_rcu(key, &ao->head, node, lockdep_sock_is_held(sk)) {
+ if ((sndid >= 0 && key->sndid != sndid) ||
+ (rcvid >= 0 && key->rcvid != rcvid))
+ continue;
+@@ -205,7 +206,7 @@ static struct tcp_ao_key *__tcp_ao_do_lookup(const struct sock *sk, int l3index,
+ if (!ao)
+ return NULL;
+
+- hlist_for_each_entry_rcu(key, &ao->head, node) {
++ hlist_for_each_entry_rcu(key, &ao->head, node, lockdep_sock_is_held(sk)) {
+ u8 prefixlen = min(prefix, key->prefixlen);
+
+ if (!tcp_ao_key_cmp(key, l3index, addr, prefixlen,
+@@ -793,7 +794,7 @@ int tcp_ao_prepare_reset(const struct sock *sk, struct sk_buff *skb,
+ if (!ao_info)
+ return -ENOENT;
+
+- *key = tcp_ao_established_key(ao_info, aoh->rnext_keyid, -1);
++ *key = tcp_ao_established_key(sk, ao_info, aoh->rnext_keyid, -1);
+ if (!*key)
+ return -ENOENT;
+ *traffic_key = snd_other_key(*key);
+@@ -979,7 +980,7 @@ tcp_inbound_ao_hash(struct sock *sk, const struct sk_buff *skb,
+ */
+ key = READ_ONCE(info->rnext_key);
+ if (key->rcvid != aoh->keyid) {
+- key = tcp_ao_established_key(info, -1, aoh->keyid);
++ key = tcp_ao_established_key(sk, info, -1, aoh->keyid);
+ if (!key)
+ goto key_not_found;
+ }
+@@ -1003,7 +1004,7 @@ tcp_inbound_ao_hash(struct sock *sk, const struct sk_buff *skb,
+ aoh->rnext_keyid,
+ tcp_ao_hdr_maclen(aoh));
+ /* If the key is not found we do nothing. */
+- key = tcp_ao_established_key(info, aoh->rnext_keyid, -1);
++ key = tcp_ao_established_key(sk, info, aoh->rnext_keyid, -1);
+ if (key)
+ /* pairs with tcp_ao_del_cmd */
+ WRITE_ONCE(info->current_key, key);
+@@ -1163,7 +1164,7 @@ void tcp_ao_established(struct sock *sk)
+ if (!ao)
+ return;
+
+- hlist_for_each_entry_rcu(key, &ao->head, node)
++ hlist_for_each_entry_rcu(key, &ao->head, node, lockdep_sock_is_held(sk))
+ tcp_ao_cache_traffic_keys(sk, ao, key);
+ }
+
+@@ -1180,7 +1181,7 @@ void tcp_ao_finish_connect(struct sock *sk, struct sk_buff *skb)
+ WRITE_ONCE(ao->risn, tcp_hdr(skb)->seq);
+ ao->rcv_sne = 0;
+
+- hlist_for_each_entry_rcu(key, &ao->head, node)
++ hlist_for_each_entry_rcu(key, &ao->head, node, lockdep_sock_is_held(sk))
+ tcp_ao_cache_traffic_keys(sk, ao, key);
+ }
+
+@@ -1256,14 +1257,14 @@ int tcp_ao_copy_all_matching(const struct sock *sk, struct sock *newsk,
+ key_head = rcu_dereference(hlist_first_rcu(&new_ao->head));
+ first_key = hlist_entry_safe(key_head, struct tcp_ao_key, node);
+
+- key = tcp_ao_established_key(new_ao, tcp_rsk(req)->ao_keyid, -1);
++ key = tcp_ao_established_key(req_to_sk(req), new_ao, tcp_rsk(req)->ao_keyid, -1);
+ if (key)
+ new_ao->current_key = key;
+ else
+ new_ao->current_key = first_key;
+
+ /* set rnext_key */
+- key = tcp_ao_established_key(new_ao, -1, tcp_rsk(req)->ao_rcv_next);
++ key = tcp_ao_established_key(req_to_sk(req), new_ao, -1, tcp_rsk(req)->ao_rcv_next);
+ if (key)
+ new_ao->rnext_key = key;
+ else
+@@ -1857,12 +1858,12 @@ static int tcp_ao_del_cmd(struct sock *sk, unsigned short int family,
+ * if there's any.
+ */
+ if (cmd.set_current) {
+- new_current = tcp_ao_established_key(ao_info, cmd.current_key, -1);
++ new_current = tcp_ao_established_key(sk, ao_info, cmd.current_key, -1);
+ if (!new_current)
+ return -ENOENT;
+ }
+ if (cmd.set_rnext) {
+- new_rnext = tcp_ao_established_key(ao_info, -1, cmd.rnext);
++ new_rnext = tcp_ao_established_key(sk, ao_info, -1, cmd.rnext);
+ if (!new_rnext)
+ return -ENOENT;
+ }
+@@ -1902,7 +1903,8 @@ static int tcp_ao_del_cmd(struct sock *sk, unsigned short int family,
+ * "It is presumed that an MKT affecting a particular
+ * connection cannot be destroyed during an active connection"
+ */
+- hlist_for_each_entry_rcu(key, &ao_info->head, node) {
++ hlist_for_each_entry_rcu(key, &ao_info->head, node,
++ lockdep_sock_is_held(sk)) {
+ if (cmd.sndid != key->sndid ||
+ cmd.rcvid != key->rcvid)
+ continue;
+@@ -2000,14 +2002,14 @@ static int tcp_ao_info_cmd(struct sock *sk, unsigned short int family,
+ * if there's any.
+ */
+ if (cmd.set_current) {
+- new_current = tcp_ao_established_key(ao_info, cmd.current_key, -1);
++ new_current = tcp_ao_established_key(sk, ao_info, cmd.current_key, -1);
+ if (!new_current) {
+ err = -ENOENT;
+ goto out;
+ }
+ }
+ if (cmd.set_rnext) {
+- new_rnext = tcp_ao_established_key(ao_info, -1, cmd.rnext);
++ new_rnext = tcp_ao_established_key(sk, ao_info, -1, cmd.rnext);
+ if (!new_rnext) {
+ err = -ENOENT;
+ goto out;
+@@ -2101,7 +2103,8 @@ int tcp_v4_parse_ao(struct sock *sk, int cmd, sockptr_t optval, int optlen)
+ * The layout of the fields in the user and kernel structures is expected to
+ * be the same (including in the 32bit vs 64bit case).
+ */
+-static int tcp_ao_copy_mkts_to_user(struct tcp_ao_info *ao_info,
++static int tcp_ao_copy_mkts_to_user(const struct sock *sk,
++ struct tcp_ao_info *ao_info,
+ sockptr_t optval, sockptr_t optlen)
+ {
+ struct tcp_ao_getsockopt opt_in, opt_out;
+@@ -2229,7 +2232,8 @@ static int tcp_ao_copy_mkts_to_user(struct tcp_ao_info *ao_info,
+ /* May change in RX, while we're dumping, pre-fetch it */
+ current_key = READ_ONCE(ao_info->current_key);
+
+- hlist_for_each_entry_rcu(key, &ao_info->head, node) {
++ hlist_for_each_entry_rcu(key, &ao_info->head, node,
++ lockdep_sock_is_held(sk)) {
+ if (opt_in.get_all)
+ goto match;
+
+@@ -2309,7 +2313,7 @@ int tcp_ao_get_mkts(struct sock *sk, sockptr_t optval, sockptr_t optlen)
+ if (!ao_info)
+ return -ENOENT;
+
+- return tcp_ao_copy_mkts_to_user(ao_info, optval, optlen);
++ return tcp_ao_copy_mkts_to_user(sk, ao_info, optval, optlen);
+ }
+
+ int tcp_ao_get_sock_info(struct sock *sk, sockptr_t optval, sockptr_t optlen)
+@@ -2396,7 +2400,7 @@ int tcp_ao_set_repair(struct sock *sk, sockptr_t optval, unsigned int optlen)
+ WRITE_ONCE(ao->snd_sne, cmd.snd_sne);
+ WRITE_ONCE(ao->rcv_sne, cmd.rcv_sne);
+
+- hlist_for_each_entry_rcu(key, &ao->head, node)
++ hlist_for_each_entry_rcu(key, &ao->head, node, lockdep_sock_is_held(sk))
+ tcp_ao_cache_traffic_keys(sk, ao, key);
+
+ return 0;
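The key-list walks throughout tcp_ao.c gain lockdep_sock_is_held(sk) as the optional fourth argument of hlist_for_each_entry_rcu(): these lists are legitimately traversed under the socket lock instead of rcu_read_lock(), and passing the condition teaches lockdep that this protection scheme is valid rather than flagging a suspicious RCU usage. The contract in a deliberately tiny model (plain userspace C, purely illustrative):

    #include <assert.h>
    #include <stdio.h>

    /* Toy model of the RCU list-walk contract: the walker must hold
     * either rcu_read_lock() or the socket lock. The kernel's
     * hlist_for_each_entry_rcu() accepts the latter as an optional
     * condition; everything here is a simplification, not kernel code. */
    static int rcu_held;
    static int sock_lock_held;

    #define walk_allowed() (rcu_held || sock_lock_held)

    static void walk_list(void)
    {
        assert(walk_allowed()); /* lockdep would splat here otherwise */
        printf("walking under %s\n", rcu_held ? "RCU" : "socket lock");
    }

    int main(void)
    {
        sock_lock_held = 1;  /* e.g. setsockopt path: lock_sock(sk) */
        walk_list();
        sock_lock_held = 0;
        rcu_held = 1;        /* e.g. receive path: rcu_read_lock() */
        walk_list();
        return 0;
    }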
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 370993c03d3136..99cef92e6290cf 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -441,7 +441,6 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ cork = true;
+ psock->cork = NULL;
+ }
+- sk_msg_return(sk, msg, tosend);
+ release_sock(sk);
+
+ origsize = msg->sg.size;
+@@ -453,8 +452,9 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ sock_put(sk_redir);
+
+ lock_sock(sk);
++ sk_mem_uncharge(sk, sent);
+ if (unlikely(ret < 0)) {
+- int free = sk_msg_free_nocharge(sk, msg);
++ int free = sk_msg_free(sk, msg);
+
+ if (!cork)
+ *copied -= free;
+@@ -468,7 +468,7 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ break;
+ case __SK_DROP:
+ default:
+- sk_msg_free_partial(sk, msg, tosend);
++ sk_msg_free(sk, msg);
+ sk_msg_apply_bytes(psock, tosend);
+ *copied -= (tosend + delta);
+ return -EACCES;
+@@ -484,11 +484,8 @@ static int tcp_bpf_send_verdict(struct sock *sk, struct sk_psock *psock,
+ }
+ if (msg &&
+ msg->sg.data[msg->sg.start].page_link &&
+- msg->sg.data[msg->sg.start].length) {
+- if (eval == __SK_REDIRECT)
+- sk_mem_charge(sk, tosend - sent);
++ msg->sg.data[msg->sg.start].length)
+ goto more_data;
+- }
+ }
+ return ret;
+ }
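tcp_bpf_send_verdict() now keeps the msg memory charged across the redirect and uncharges exactly the bytes that were actually sent once the socket lock is retaken; the drop path frees the still-charged msg with sk_msg_free(). The ledger invariant the patch restores, in a toy sketch (illustrative names, not the kernel API):

    #include <stdio.h>

    static long charged; /* stand-in for the socket's forward alloc */

    static void charge(long n)   { charged += n; }
    static void uncharge(long n) { charged -= n; }

    /* Pretend redirect: returns bytes actually transmitted. */
    static long redirect(long tosend, int fail) { return fail ? 0 : tosend; }

    int main(void)
    {
        long tosend = 4096, sent;

        charge(tosend);             /* charged once, at enqueue */
        sent = redirect(tosend, 0);
        uncharge(sent);             /* after the redirect, not before */
        printf("outstanding after success: %ld\n", charged);

        charge(tosend);
        sent = redirect(tosend, 1); /* nothing went out */
        uncharge(sent);
        uncharge(tosend - sent);    /* sk_msg_free() drops the rest */
        printf("outstanding after failure: %ld\n", charged);
        return 0;
    }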
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index 5afe5e57c89b5c..a7cd433a54c9ae 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -1053,7 +1053,8 @@ static void tcp_v4_timewait_ack(struct sock *sk, struct sk_buff *skb)
+ }
+
+ if (aoh)
+- key.ao_key = tcp_ao_established_key(ao_info, aoh->rnext_keyid, -1);
++ key.ao_key = tcp_ao_established_key(sk, ao_info,
++ aoh->rnext_keyid, -1);
+ }
+ }
+ if (key.ao_key) {
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 2849b273b13107..ff85242720a0a9 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1516,7 +1516,6 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
+ struct sk_buff_head *list = &sk->sk_receive_queue;
+ int rmem, err = -ENOMEM;
+ spinlock_t *busy = NULL;
+- bool becomes_readable;
+ int size, rcvbuf;
+
+ /* Immediately drop when the receive queue is full.
+@@ -1557,19 +1556,12 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
+ */
+ sock_skb_set_dropcount(sk, skb);
+
+- becomes_readable = skb_queue_empty(list);
+ __skb_queue_tail(list, skb);
+ spin_unlock(&list->lock);
+
+- if (!sock_flag(sk, SOCK_DEAD)) {
+- if (becomes_readable ||
+- sk->sk_data_ready != sock_def_readable ||
+- READ_ONCE(sk->sk_peek_off) >= 0)
+- INDIRECT_CALL_1(sk->sk_data_ready,
+- sock_def_readable, sk);
+- else
+- sk_wake_async_rcu(sk, SOCK_WAKE_WAITD, POLL_IN);
+- }
++ if (!sock_flag(sk, SOCK_DEAD))
++ INDIRECT_CALL_1(sk->sk_data_ready, sock_def_readable, sk);
++
+ busylock_release(busy);
+ return 0;
+
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 01115e1a34cb66..f7c17388ff6aaf 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -4821,7 +4821,7 @@ inet6_rtm_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh,
+ ifm->ifa_prefixlen, extack);
+ }
+
+-static int modify_prefix_route(struct inet6_ifaddr *ifp,
++static int modify_prefix_route(struct net *net, struct inet6_ifaddr *ifp,
+ unsigned long expires, u32 flags,
+ bool modify_peer)
+ {
+@@ -4845,7 +4845,9 @@ static int modify_prefix_route(struct inet6_ifaddr *ifp,
+ ifp->prefix_len,
+ ifp->rt_priority, ifp->idev->dev,
+ expires, flags, GFP_KERNEL);
+- } else {
++ return 0;
++ }
++ if (f6i != net->ipv6.fib6_null_entry) {
+ table = f6i->fib6_table;
+ spin_lock_bh(&table->tb6_lock);
+
+@@ -4858,9 +4860,8 @@ static int modify_prefix_route(struct inet6_ifaddr *ifp,
+ }
+
+ spin_unlock_bh(&table->tb6_lock);
+-
+- fib6_info_release(f6i);
+ }
++ fib6_info_release(f6i);
+
+ return 0;
+ }
+@@ -4939,7 +4940,7 @@ static int inet6_addr_modify(struct net *net, struct inet6_ifaddr *ifp,
+ int rc = -ENOENT;
+
+ if (had_prefixroute)
+- rc = modify_prefix_route(ifp, expires, flags, false);
++ rc = modify_prefix_route(net, ifp, expires, flags, false);
+
+ /* prefix route could have been deleted; if so restore it */
+ if (rc == -ENOENT) {
+@@ -4949,7 +4950,7 @@ static int inet6_addr_modify(struct net *net, struct inet6_ifaddr *ifp,
+ }
+
+ if (had_prefixroute && !ipv6_addr_any(&ifp->peer_addr))
+- rc = modify_prefix_route(ifp, expires, flags, true);
++ rc = modify_prefix_route(net, ifp, expires, flags, true);
+
+ if (rc == -ENOENT && !ipv6_addr_any(&ifp->peer_addr)) {
+ addrconf_prefix_route(&ifp->peer_addr, ifp->prefix_len,
+diff --git a/net/ipv6/af_inet6.c b/net/ipv6/af_inet6.c
+index ba69b86f1c7d5e..f60ec8b0f8ea40 100644
+--- a/net/ipv6/af_inet6.c
++++ b/net/ipv6/af_inet6.c
+@@ -252,31 +252,29 @@ static int inet6_create(struct net *net, struct socket *sock, int protocol,
+ */
+ inet->inet_sport = htons(inet->inet_num);
+ err = sk->sk_prot->hash(sk);
+- if (err) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (err)
++ goto out_sk_release;
+ }
+ if (sk->sk_prot->init) {
+ err = sk->sk_prot->init(sk);
+- if (err) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (err)
++ goto out_sk_release;
+ }
+
+ if (!kern) {
+ err = BPF_CGROUP_RUN_PROG_INET_SOCK(sk);
+- if (err) {
+- sk_common_release(sk);
+- goto out;
+- }
++ if (err)
++ goto out_sk_release;
+ }
+ out:
+ return err;
+ out_rcu_unlock:
+ rcu_read_unlock();
+ goto out;
++out_sk_release:
++ sk_common_release(sk);
++ sock->sk = NULL;
++ goto out;
+ }
+
+ static int __inet6_bind(struct sock *sk, struct sockaddr *uaddr, int addr_len,
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index cff4fbbc66efb2..8ebfed5d63232e 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -2780,10 +2780,10 @@ static void ip6_negative_advice(struct sock *sk,
+ if (rt->rt6i_flags & RTF_CACHE) {
+ rcu_read_lock();
+ if (rt6_check_expired(rt)) {
+- /* counteract the dst_release() in sk_dst_reset() */
+- dst_hold(dst);
++ /* rt/dst can not be destroyed yet,
++ * because of rcu_read_lock()
++ */
+ sk_dst_reset(sk);
+-
+ rt6_remove_exception_rt(rt);
+ }
+ rcu_read_unlock();
+diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
+index c9de5ef8f26750..59173f58ce9923 100644
+--- a/net/ipv6/tcp_ipv6.c
++++ b/net/ipv6/tcp_ipv6.c
+@@ -1169,8 +1169,8 @@ static void tcp_v6_timewait_ack(struct sock *sk, struct sk_buff *skb)
+ if (tcp_parse_auth_options(tcp_hdr(skb), NULL, &aoh))
+ goto out;
+ if (aoh)
+- key.ao_key = tcp_ao_established_key(ao_info,
+- aoh->rnext_keyid, -1);
++ key.ao_key = tcp_ao_established_key(sk, ao_info,
++ aoh->rnext_keyid, -1);
+ }
+ }
+ if (key.ao_key) {
+diff --git a/net/mptcp/diag.c b/net/mptcp/diag.c
+index 2d3efb405437d8..02205f7994d752 100644
+--- a/net/mptcp/diag.c
++++ b/net/mptcp/diag.c
+@@ -47,7 +47,7 @@ static int subflow_get_info(struct sock *sk, struct sk_buff *skb)
+ flags |= MPTCP_SUBFLOW_FLAG_BKUP_REM;
+ if (sf->request_bkup)
+ flags |= MPTCP_SUBFLOW_FLAG_BKUP_LOC;
+- if (sf->fully_established)
++ if (READ_ONCE(sf->fully_established))
+ flags |= MPTCP_SUBFLOW_FLAG_FULLY_ESTABLISHED;
+ if (sf->conn_finished)
+ flags |= MPTCP_SUBFLOW_FLAG_CONNECTED;
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 370c3836b7712f..1603b3702e2207 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -461,7 +461,7 @@ static bool mptcp_established_options_mp(struct sock *sk, struct sk_buff *skb,
+ return false;
+
+ /* MPC/MPJ needed only on 3rd ack packet, DATA_FIN and TCP shutdown take precedence */
+- if (subflow->fully_established || snd_data_fin_enable ||
++ if (READ_ONCE(subflow->fully_established) || snd_data_fin_enable ||
+ subflow->snd_isn != TCP_SKB_CB(skb)->seq ||
+ sk->sk_state != TCP_ESTABLISHED)
+ return false;
+@@ -930,7 +930,7 @@ static bool check_fully_established(struct mptcp_sock *msk, struct sock *ssk,
+ /* here we can process OoO, in-window pkts, only in-sequence 4th ack
+ * will make the subflow fully established
+ */
+- if (likely(subflow->fully_established)) {
++ if (likely(READ_ONCE(subflow->fully_established))) {
+ /* on passive sockets, check for 3rd ack retransmission
+ * note that msk is always set by subflow_syn_recv_sock()
+ * for mp_join subflows
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 48d480982b7870..8a8e8fee337f5e 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -2728,8 +2728,8 @@ void mptcp_reset_tout_timer(struct mptcp_sock *msk, unsigned long fail_tout)
+ if (!fail_tout && !inet_csk(sk)->icsk_mtup.probe_timestamp)
+ return;
+
+- close_timeout = inet_csk(sk)->icsk_mtup.probe_timestamp - tcp_jiffies32 + jiffies +
+- mptcp_close_timeout(sk);
++ close_timeout = (unsigned long)inet_csk(sk)->icsk_mtup.probe_timestamp -
++ tcp_jiffies32 + jiffies + mptcp_close_timeout(sk);
+
+ /* the close timeout takes precedence on the fail one, and here at least one of
+ * them is active
+@@ -3519,7 +3519,7 @@ static void schedule_3rdack_retransmission(struct sock *ssk)
+ struct tcp_sock *tp = tcp_sk(ssk);
+ unsigned long timeout;
+
+- if (mptcp_subflow_ctx(ssk)->fully_established)
++ if (READ_ONCE(mptcp_subflow_ctx(ssk)->fully_established))
+ return;
+
+ /* reschedule with a timeout above RTT, as we must look only for drop */
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 568a72702b080d..a93e661ef5c435 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -513,7 +513,6 @@ struct mptcp_subflow_context {
+ request_bkup : 1,
+ mp_capable : 1, /* remote is MPTCP capable */
+ mp_join : 1, /* remote is JOINing */
+- fully_established : 1, /* path validated */
+ pm_notified : 1, /* PM hook called for established status */
+ conn_finished : 1,
+ map_valid : 1,
+@@ -532,10 +531,11 @@ struct mptcp_subflow_context {
+ is_mptfo : 1, /* subflow is doing TFO */
+ close_event_done : 1, /* has done the post-closed part */
+ mpc_drop : 1, /* the MPC option has been dropped in a rtx */
+- __unused : 8;
++ __unused : 9;
+ bool data_avail;
+ bool scheduled;
+ bool pm_listener; /* a listener managed by the kernel PM? */
++ bool fully_established; /* path validated */
+ u32 remote_nonce;
+ u64 thmac;
+ u32 local_nonce;
+@@ -780,7 +780,7 @@ static inline bool __tcp_can_send(const struct sock *ssk)
+ static inline bool __mptcp_subflow_active(struct mptcp_subflow_context *subflow)
+ {
+ /* can't send if JOIN hasn't completed yet (i.e. is usable for mptcp) */
+- if (subflow->request_join && !subflow->fully_established)
++ if (subflow->request_join && !READ_ONCE(subflow->fully_established))
+ return false;
+
+ return __tcp_can_send(mptcp_subflow_tcp_sock(subflow));
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 6170f2fff71e4f..860903e0642255 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -800,7 +800,7 @@ void __mptcp_subflow_fully_established(struct mptcp_sock *msk,
+ const struct mptcp_options_received *mp_opt)
+ {
+ subflow_set_remote_key(msk, subflow, mp_opt);
+- subflow->fully_established = 1;
++ WRITE_ONCE(subflow->fully_established, true);
+ WRITE_ONCE(msk->fully_established, true);
+
+ if (subflow->is_mptfo)
+@@ -2062,7 +2062,7 @@ static void subflow_ulp_clone(const struct request_sock *req,
+ } else if (subflow_req->mp_join) {
+ new_ctx->ssn_offset = subflow_req->ssn_offset;
+ new_ctx->mp_join = 1;
+- new_ctx->fully_established = 1;
++ WRITE_ONCE(new_ctx->fully_established, true);
+ new_ctx->remote_key_valid = 1;
+ new_ctx->backup = subflow_req->backup;
+ new_ctx->request_bkup = subflow_req->request_bkup;
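fully_established moves out of the subflow bitfield into a standalone bool because bitfield members share one storage word: a lockless read of the flag would race with locked writes to neighboring bits, and READ_ONCE()/WRITE_ONCE() cannot be applied to a bitfield at all. With its own bool, every lockless site above can carry the annotations. A compact illustration with userspace stand-ins for the kernel macros:

    #include <stdio.h>

    /* Userspace stand-ins for READ_ONCE/WRITE_ONCE: force a single
     * untorn access through a volatile-qualified cast. */
    #define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
    #define READ_ONCE(x)       (*(volatile const __typeof__(x) *)&(x))

    struct subflow_demo {
        /* Bitfields share a storage word, so a concurrent write to any
         * sibling bit is a data race on the whole word... */
        unsigned int mp_capable : 1, mp_join : 1;
        /* ...which is why the flag becomes a standalone bool. */
        _Bool fully_established;
    };

    int main(void)
    {
        struct subflow_demo sf = { 0 };

        WRITE_ONCE(sf.fully_established, 1);
        if (READ_ONCE(sf.fully_established))
            printf("path validated\n");
        return 0;
    }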
+diff --git a/net/netfilter/ipset/ip_set_core.c b/net/netfilter/ipset/ip_set_core.c
+index 61431690cbd5f1..cc20e6d56807c6 100644
+--- a/net/netfilter/ipset/ip_set_core.c
++++ b/net/netfilter/ipset/ip_set_core.c
+@@ -104,14 +104,19 @@ find_set_type(const char *name, u8 family, u8 revision)
+ static bool
+ load_settype(const char *name)
+ {
++ if (!try_module_get(THIS_MODULE))
++ return false;
++
+ nfnl_unlock(NFNL_SUBSYS_IPSET);
+ pr_debug("try to load ip_set_%s\n", name);
+ if (request_module("ip_set_%s", name) < 0) {
+ pr_warn("Can't find ip_set type %s\n", name);
+ nfnl_lock(NFNL_SUBSYS_IPSET);
++ module_put(THIS_MODULE);
+ return false;
+ }
+ nfnl_lock(NFNL_SUBSYS_IPSET);
++ module_put(THIS_MODULE);
+ return true;
+ }
+
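load_settype() has to drop the nfnetlink mutex around request_module(), which may sleep; taking a reference on the ip_set module first guarantees it cannot be unloaded during that unlocked window, and the reference is dropped again on both exit paths. The guard pattern in a toy sketch (plain counters standing in for real module refcounting):

    #include <stdio.h>

    static int module_refs = 1; /* 1 == module loaded */

    static int try_module_get_demo(void)
    {
        if (!module_refs)
            return 0; /* module already going away */
        module_refs++;
        return 1;
    }

    static void module_put_demo(void) { module_refs--; }

    static int load_settype_demo(void)
    {
        if (!try_module_get_demo())
            return 0;
        /* lock dropped here; a concurrent unload cannot free us now */
        printf("request_module() may sleep safely\n");
        module_put_demo();
        return 1;
    }

    int main(void)
    {
        return !load_settype_demo();
    }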
+diff --git a/net/netfilter/ipvs/ip_vs_proto.c b/net/netfilter/ipvs/ip_vs_proto.c
+index f100da4ba3bc3c..a9fd1d3fc2cbfe 100644
+--- a/net/netfilter/ipvs/ip_vs_proto.c
++++ b/net/netfilter/ipvs/ip_vs_proto.c
+@@ -340,7 +340,7 @@ void __net_exit ip_vs_protocol_net_cleanup(struct netns_ipvs *ipvs)
+
+ int __init ip_vs_protocol_init(void)
+ {
+- char protocols[64];
++ char protocols[64] = { 0 };
+ #define REGISTER_PROTOCOL(p) \
+ do { \
+ register_ip_vs_protocol(p); \
+@@ -348,8 +348,6 @@ int __init ip_vs_protocol_init(void)
+ strcat(protocols, (p)->name); \
+ } while (0)
+
+- protocols[0] = '\0';
+- protocols[2] = '\0';
+ #ifdef CONFIG_IP_VS_PROTO_TCP
+ REGISTER_PROTOCOL(&ip_vs_protocol_tcp);
+ #endif
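Initializing protocols[] to all zeroes supersedes the old piecemeal writes to bytes 0 and 2, which left the remaining bytes indeterminate; strcat() then always appends to a well-defined empty string, and the buffer can later be printed without reading uninitialized memory. The safe shape of the buffer handling, as a standalone sketch:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* Full zero-init: every byte is '\0', so strcat() starts from
         * a well-defined empty string and nothing downstream touches
         * uninitialized bytes. */
        char protocols[64] = { 0 };

        strcat(protocols, "TCP");
        strcat(protocols, ", UDP");
        printf("Registered IPVS protocols (%s)\n", protocols);
        return 0;
    }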
+diff --git a/net/netfilter/nft_inner.c b/net/netfilter/nft_inner.c
+index 928312d01eb1d6..817ab978d24a19 100644
+--- a/net/netfilter/nft_inner.c
++++ b/net/netfilter/nft_inner.c
+@@ -210,35 +210,66 @@ static int nft_inner_parse(const struct nft_inner *priv,
+ struct nft_pktinfo *pkt,
+ struct nft_inner_tun_ctx *tun_ctx)
+ {
+- struct nft_inner_tun_ctx ctx = {};
+ u32 off = pkt->inneroff;
+
+ if (priv->flags & NFT_INNER_HDRSIZE &&
+- nft_inner_parse_tunhdr(priv, pkt, &ctx, &off) < 0)
++ nft_inner_parse_tunhdr(priv, pkt, tun_ctx, &off) < 0)
+ return -1;
+
+ if (priv->flags & (NFT_INNER_LL | NFT_INNER_NH)) {
+- if (nft_inner_parse_l2l3(priv, pkt, &ctx, off) < 0)
++ if (nft_inner_parse_l2l3(priv, pkt, tun_ctx, off) < 0)
+ return -1;
+ } else if (priv->flags & NFT_INNER_TH) {
+- ctx.inner_thoff = off;
+- ctx.flags |= NFT_PAYLOAD_CTX_INNER_TH;
++ tun_ctx->inner_thoff = off;
++ tun_ctx->flags |= NFT_PAYLOAD_CTX_INNER_TH;
+ }
+
+- *tun_ctx = ctx;
+ tun_ctx->type = priv->type;
++ tun_ctx->cookie = (unsigned long)pkt->skb;
+ pkt->flags |= NFT_PKTINFO_INNER_FULL;
+
+ return 0;
+ }
+
++static bool nft_inner_restore_tun_ctx(const struct nft_pktinfo *pkt,
++ struct nft_inner_tun_ctx *tun_ctx)
++{
++ struct nft_inner_tun_ctx *this_cpu_tun_ctx;
++
++ local_bh_disable();
++ this_cpu_tun_ctx = this_cpu_ptr(&nft_pcpu_tun_ctx);
++ if (this_cpu_tun_ctx->cookie != (unsigned long)pkt->skb) {
++ local_bh_enable();
++ return false;
++ }
++ *tun_ctx = *this_cpu_tun_ctx;
++ local_bh_enable();
++
++ return true;
++}
++
++static void nft_inner_save_tun_ctx(const struct nft_pktinfo *pkt,
++ const struct nft_inner_tun_ctx *tun_ctx)
++{
++ struct nft_inner_tun_ctx *this_cpu_tun_ctx;
++
++ local_bh_disable();
++ this_cpu_tun_ctx = this_cpu_ptr(&nft_pcpu_tun_ctx);
++ if (this_cpu_tun_ctx->cookie != tun_ctx->cookie)
++ *this_cpu_tun_ctx = *tun_ctx;
++ local_bh_enable();
++}
++
+ static bool nft_inner_parse_needed(const struct nft_inner *priv,
+ const struct nft_pktinfo *pkt,
+- const struct nft_inner_tun_ctx *tun_ctx)
++ struct nft_inner_tun_ctx *tun_ctx)
+ {
+ if (!(pkt->flags & NFT_PKTINFO_INNER_FULL))
+ return true;
+
++ if (!nft_inner_restore_tun_ctx(pkt, tun_ctx))
++ return true;
++
+ if (priv->type != tun_ctx->type)
+ return true;
+
+@@ -248,27 +279,29 @@ static bool nft_inner_parse_needed(const struct nft_inner *priv,
+ static void nft_inner_eval(const struct nft_expr *expr, struct nft_regs *regs,
+ const struct nft_pktinfo *pkt)
+ {
+- struct nft_inner_tun_ctx *tun_ctx = this_cpu_ptr(&nft_pcpu_tun_ctx);
+ const struct nft_inner *priv = nft_expr_priv(expr);
++ struct nft_inner_tun_ctx tun_ctx = {};
+
+ if (nft_payload_inner_offset(pkt) < 0)
+ goto err;
+
+- if (nft_inner_parse_needed(priv, pkt, tun_ctx) &&
+- nft_inner_parse(priv, (struct nft_pktinfo *)pkt, tun_ctx) < 0)
++ if (nft_inner_parse_needed(priv, pkt, &tun_ctx) &&
++ nft_inner_parse(priv, (struct nft_pktinfo *)pkt, &tun_ctx) < 0)
+ goto err;
+
+ switch (priv->expr_type) {
+ case NFT_INNER_EXPR_PAYLOAD:
+- nft_payload_inner_eval((struct nft_expr *)&priv->expr, regs, pkt, tun_ctx);
++ nft_payload_inner_eval((struct nft_expr *)&priv->expr, regs, pkt, &tun_ctx);
+ break;
+ case NFT_INNER_EXPR_META:
+- nft_meta_inner_eval((struct nft_expr *)&priv->expr, regs, pkt, tun_ctx);
++ nft_meta_inner_eval((struct nft_expr *)&priv->expr, regs, pkt, &tun_ctx);
+ break;
+ default:
+ WARN_ON_ONCE(1);
+ goto err;
+ }
++ nft_inner_save_tun_ctx(pkt, &tun_ctx);
++
+ return;
+ err:
+ regs->verdict.code = NFT_BREAK;
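The rework keeps the freshly parsed tunnel context on the evaluation stack and demotes the per-CPU slot to a pure cache keyed by a cookie (the skb pointer): a cached context is restored only when its cookie matches the packet in flight, and it is written back after a successful evaluation, so one packet can never consume another's stale parse. The cookie check in miniature (names illustrative):

    #include <stdio.h>

    /* Toy model of the nft_inner per-CPU context cache: the cached
     * parse result is trusted only when its cookie matches the current
     * packet. Layout and names are illustrative. */
    struct tun_ctx { unsigned long cookie; int inner_thoff; };

    static struct tun_ctx pcpu_ctx; /* stand-in for this_cpu_ptr() slot */

    static int restore_ctx(unsigned long pkt, struct tun_ctx *out)
    {
        if (pcpu_ctx.cookie != pkt)
            return 0; /* stale: belongs to another packet */
        *out = pcpu_ctx;
        return 1;
    }

    static void save_ctx(const struct tun_ctx *in)
    {
        pcpu_ctx = *in;
    }

    int main(void)
    {
        struct tun_ctx ctx = { .cookie = 0x1000, .inner_thoff = 42 };
        struct tun_ctx got;

        save_ctx(&ctx);
        printf("same pkt hit:  %d\n", restore_ctx(0x1000, &got));
        printf("other pkt hit: %d\n", restore_ctx(0x2000, &got));
        return 0;
    }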
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index daa56dda737ae2..b93f046ac7d1e1 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -24,11 +24,13 @@
+ struct nft_rhash {
+ struct rhashtable ht;
+ struct delayed_work gc_work;
++ u32 wq_gc_seq;
+ };
+
+ struct nft_rhash_elem {
+ struct nft_elem_priv priv;
+ struct rhash_head node;
++ u32 wq_gc_seq;
+ struct nft_set_ext ext;
+ };
+
+@@ -338,6 +340,10 @@ static void nft_rhash_gc(struct work_struct *work)
+ if (!gc)
+ goto done;
+
++ /* Elements never collected use a zero gc worker sequence number. */
++ if (unlikely(++priv->wq_gc_seq == 0))
++ priv->wq_gc_seq++;
++
+ rhashtable_walk_enter(&priv->ht, &hti);
+ rhashtable_walk_start(&hti);
+
+@@ -355,6 +361,14 @@ static void nft_rhash_gc(struct work_struct *work)
+ goto try_later;
+ }
+
++ /* rhashtable walk is unstable, already seen in this gc run?
++ * Then, skip this element. In case of (unlikely) sequence
++ * wraparound and stale element wq_gc_seq, next gc run will
++ * just find this expired element.
++ */
++ if (he->wq_gc_seq == priv->wq_gc_seq)
++ continue;
++
+ if (nft_set_elem_is_dead(&he->ext))
+ goto dead_elem;
+
+@@ -371,6 +385,8 @@ static void nft_rhash_gc(struct work_struct *work)
+ if (!gc)
+ goto try_later;
+
++ /* annotate gc sequence for this attempt. */
++ he->wq_gc_seq = priv->wq_gc_seq;
+ nft_trans_gc_elem_add(gc, he);
+ }
+
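An rhashtable walk can legitimately return the same element twice while the table resizes, so each GC pass now stamps the elements it queues with a per-run sequence number and skips anything already carrying the current number; sequence 0 is reserved to mean never collected, hence the wraparound guard. The dedup in a toy version:

    #include <stdio.h>

    /* Toy generation-stamp dedup mirroring the nft_rhash GC fix: a
     * walk may surface the same element twice, so each element records
     * the sequence number of the run that already queued it. */
    struct elem { const char *name; unsigned int wq_gc_seq; };

    static unsigned int gc_seq;

    static void gc_run(struct elem **walk, int n)
    {
        int i;

        if (++gc_seq == 0) /* 0 is reserved for "never collected" */
            gc_seq++;
        for (i = 0; i < n; i++) {
            struct elem *e = walk[i];

            if (e->wq_gc_seq == gc_seq)
                continue; /* duplicate within this run */
            e->wq_gc_seq = gc_seq;
            printf("collecting %s\n", e->name);
        }
    }

    int main(void)
    {
        struct elem a = { "a", 0 }, b = { "b", 0 };
        struct elem *walk[] = { &a, &b, &a }; /* 'a' seen twice */

        gc_run(walk, 3); /* prints a and b once each */
        return 0;
    }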
+diff --git a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c
+index f5da0c1775f2e7..35d0409b009501 100644
+--- a/net/netfilter/nft_socket.c
++++ b/net/netfilter/nft_socket.c
+@@ -68,7 +68,7 @@ static noinline int nft_socket_cgroup_subtree_level(void)
+
+ cgroup_put(cgrp);
+
+- if (WARN_ON_ONCE(level > 255))
++ if (level > 255)
+ return -ERANGE;
+
+ if (WARN_ON_ONCE(level < 0))
+diff --git a/net/netfilter/xt_LED.c b/net/netfilter/xt_LED.c
+index f7b0286d106ac1..8a80fd76fe45b2 100644
+--- a/net/netfilter/xt_LED.c
++++ b/net/netfilter/xt_LED.c
+@@ -96,7 +96,9 @@ static int led_tg_check(const struct xt_tgchk_param *par)
+ struct xt_led_info_internal *ledinternal;
+ int err;
+
+- if (ledinfo->id[0] == '\0')
++ /* Bail out if empty string or not a string at all. */
++ if (ledinfo->id[0] == '\0' ||
++ !memchr(ledinfo->id, '\0', sizeof(ledinfo->id)))
+ return -EINVAL;
+
+ mutex_lock(&xt_led_mutex);
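ledinfo->id is copied in from userspace as a fixed-size array with no guarantee of NUL termination, so led_tg_check() now requires memchr() to find a terminator inside the array before any string helper walks it. The same test standalone (the 27-byte size is an assumption mirroring struct xt_led_info):

    #include <stdio.h>
    #include <string.h>

    #define ID_SIZE 27 /* assumed size of ledinfo->id */

    /* -1 for an empty or unterminated id, 0 for a proper string;
     * mirrors the led_tg_check() test. */
    static int check_id(const char id[ID_SIZE])
    {
        if (id[0] == '\0' || !memchr(id, '\0', ID_SIZE))
            return -1; /* -EINVAL in the kernel */
        return 0;
    }

    int main(void)
    {
        char good[ID_SIZE] = "netfilter-led";
        char bad[ID_SIZE];

        memset(bad, 'A', sizeof(bad)); /* no NUL anywhere */
        printf("good: %d, bad: %d\n", check_id(good), check_id(bad));
        return 0;
    }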
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index a705ec21425409..97774bd4b6cb11 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3421,17 +3421,17 @@ static int packet_create(struct net *net, struct socket *sock, int protocol,
+ if (sock->type == SOCK_PACKET)
+ sock->ops = &packet_ops_spkt;
+
++ po = pkt_sk(sk);
++ err = packet_alloc_pending(po);
++ if (err)
++ goto out_sk_free;
++
+ sock_init_data(sock, sk);
+
+- po = pkt_sk(sk);
+ init_completion(&po->skb_completion);
+ sk->sk_family = PF_PACKET;
+ po->num = proto;
+
+- err = packet_alloc_pending(po);
+- if (err)
+- goto out2;
+-
+ packet_cached_dev_reset(po);
+
+ sk->sk_destruct = packet_sock_destruct;
+@@ -3463,7 +3463,7 @@ static int packet_create(struct net *net, struct socket *sock, int protocol,
+ sock_prot_inuse_add(net, &packet_proto, 1);
+
+ return 0;
+-out2:
++out_sk_free:
+ sk_free(sk);
+ out:
+ return err;
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index e280c27cb9f9af..1008ec8a464c93 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -1369,7 +1369,6 @@ static int fl_set_erspan_opt(const struct nlattr *nla, struct fl_flow_key *key,
+ int err;
+
+ md = (struct erspan_metadata *)&key->enc_opts.data[key->enc_opts.len];
+- memset(md, 0xff, sizeof(*md));
+ md->version = 1;
+
+ if (!depth)
+@@ -1398,9 +1397,9 @@ static int fl_set_erspan_opt(const struct nlattr *nla, struct fl_flow_key *key,
+ NL_SET_ERR_MSG(extack, "Missing tunnel key erspan option index");
+ return -EINVAL;
+ }
++ memset(&md->u.index, 0xff, sizeof(md->u.index));
+ if (tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_INDEX]) {
+ nla = tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_INDEX];
+- memset(&md->u, 0x00, sizeof(md->u));
+ md->u.index = nla_get_be32(nla);
+ }
+ } else if (md->version == 2) {
+@@ -1409,10 +1408,12 @@ static int fl_set_erspan_opt(const struct nlattr *nla, struct fl_flow_key *key,
+ NL_SET_ERR_MSG(extack, "Missing tunnel key erspan option dir or hwid");
+ return -EINVAL;
+ }
++ md->u.md2.dir = 1;
+ if (tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_DIR]) {
+ nla = tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_DIR];
+ md->u.md2.dir = nla_get_u8(nla);
+ }
++ set_hwid(&md->u.md2, 0xff);
+ if (tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_HWID]) {
+ nla = tb[TCA_FLOWER_KEY_ENC_OPT_ERSPAN_HWID];
+ set_hwid(&md->u.md2, nla_get_u8(nla));
+diff --git a/net/sched/sch_cbs.c b/net/sched/sch_cbs.c
+index 939425da18955b..8c9a0400c8622c 100644
+--- a/net/sched/sch_cbs.c
++++ b/net/sched/sch_cbs.c
+@@ -310,7 +310,7 @@ static void cbs_set_port_rate(struct net_device *dev, struct cbs_sched_data *q)
+ {
+ struct ethtool_link_ksettings ecmd;
+ int speed = SPEED_10;
+- int port_rate;
++ s64 port_rate;
+ int err;
+
+ err = __ethtool_get_link_ksettings(dev, &ecmd);
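port_rate holds the link speed converted from Mbit/s to bytes per second, and from roughly 25 Gbit/s upward that product exceeds INT_MAX, so the variable is widened to s64. What the widening avoids, in a quick demonstration (the 125 bytes-per-kbit factor is assumed to match linux/units.h):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t bytes_per_kbit = 1000 / 8; /* 125, per linux/units.h */
        int speed = 100000;                /* 100 Gbit/s, in Mbit/s */

        /* Full 64-bit result: 100 Gbit/s == 12.5e9 bytes/s. */
        int64_t port_rate = speed * 1000 * bytes_per_kbit;

        /* What a 32-bit 'int port_rate' would have kept of it:
         * implementation-defined truncation, negative on common ABIs. */
        int truncated = (int)port_rate;

        printf("s64 port_rate: %lld bytes/s\n", (long long)port_rate);
        printf("int port_rate: %d bytes/s\n", truncated);
        return 0;
    }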
+diff --git a/net/sched/sch_tbf.c b/net/sched/sch_tbf.c
+index f1d09183ae632d..dc26b22d53c734 100644
+--- a/net/sched/sch_tbf.c
++++ b/net/sched/sch_tbf.c
+@@ -208,7 +208,7 @@ static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch,
+ struct tbf_sched_data *q = qdisc_priv(sch);
+ struct sk_buff *segs, *nskb;
+ netdev_features_t features = netif_skb_features(skb);
+- unsigned int len = 0, prev_len = qdisc_pkt_len(skb);
++ unsigned int len = 0, prev_len = qdisc_pkt_len(skb), seg_len;
+ int ret, nb;
+
+ segs = skb_gso_segment(skb, features & ~NETIF_F_GSO_MASK);
+@@ -219,21 +219,27 @@ static int tbf_segment(struct sk_buff *skb, struct Qdisc *sch,
+ nb = 0;
+ skb_list_walk_safe(segs, segs, nskb) {
+ skb_mark_not_on_list(segs);
+- qdisc_skb_cb(segs)->pkt_len = segs->len;
+- len += segs->len;
++ seg_len = segs->len;
++ qdisc_skb_cb(segs)->pkt_len = seg_len;
+ ret = qdisc_enqueue(segs, q->qdisc, to_free);
+ if (ret != NET_XMIT_SUCCESS) {
+ if (net_xmit_drop_count(ret))
+ qdisc_qstats_drop(sch);
+ } else {
+ nb++;
++ len += seg_len;
+ }
+ }
+ sch->q.qlen += nb;
+- if (nb > 1)
++ sch->qstats.backlog += len;
++ if (nb > 0) {
+ qdisc_tree_reduce_backlog(sch, 1 - nb, prev_len - len);
+- consume_skb(skb);
+- return nb > 0 ? NET_XMIT_SUCCESS : NET_XMIT_DROP;
++ consume_skb(skb);
++ return NET_XMIT_SUCCESS;
++ }
++
++ kfree_skb(skb);
++ return NET_XMIT_DROP;
+ }
+
+ static int tbf_enqueue(struct sk_buff *skb, struct Qdisc *sch,
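Three related fixes land in tbf_segment(): each segment's length is captured in seg_len before qdisc_enqueue(), which may free the skb; only segments the child qdisc accepted contribute to len and the backlog; and when every segment is rejected the original skb is dropped via kfree_skb() with NET_XMIT_DROP instead of being consumed as a success. The accounting, reduced to a sketch:

    #include <stdio.h>

    /* Pretend child qdisc: accepts small segments, rejects big ones. */
    static int child_enqueue(int seg_len) { return seg_len < 1000; }

    int main(void)
    {
        int segs[] = { 500, 1500, 700 };
        int nb = 0, len = 0, i;

        for (i = 0; i < 3; i++) {
            /* mirrors reading segs->len before enqueue can free it */
            int seg_len = segs[i];

            if (child_enqueue(seg_len)) {
                nb++;
                len += seg_len; /* only accepted bytes count */
            }
        }
        printf("accepted %d segs, backlog += %d bytes\n", nb, len);
        return nb > 0 ? 0 : 1; /* NET_XMIT_SUCCESS vs NET_XMIT_DROP */
    }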
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 9d76e902fd770f..9e6c69d18581ce 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -383,6 +383,7 @@ void smc_sk_init(struct net *net, struct sock *sk, int protocol)
+ smc->limit_smc_hs = net->smc.limit_smc_hs;
+ smc->use_fallback = false; /* assume rdma capability first */
+ smc->fallback_rsn = 0;
++ smc_close_init(smc);
+ }
+
+ static struct sock *smc_sock_alloc(struct net *net, struct socket *sock,
+@@ -1299,7 +1300,6 @@ static int smc_connect_rdma(struct smc_sock *smc,
+ goto connect_abort;
+ }
+
+- smc_close_init(smc);
+ smc_rx_init(smc);
+
+ if (ini->first_contact_local) {
+@@ -1435,7 +1435,6 @@ static int smc_connect_ism(struct smc_sock *smc,
+ goto connect_abort;
+ }
+ }
+- smc_close_init(smc);
+ smc_rx_init(smc);
+ smc_tx_init(smc);
+
+@@ -1901,6 +1900,7 @@ static void smc_listen_out(struct smc_sock *new_smc)
+ if (tcp_sk(new_smc->clcsock->sk)->syn_smc)
+ atomic_dec(&lsmc->queued_smc_hs);
+
++ release_sock(newsmcsk); /* lock in smc_listen_work() */
+ if (lsmc->sk.sk_state == SMC_LISTEN) {
+ lock_sock_nested(&lsmc->sk, SINGLE_DEPTH_NESTING);
+ smc_accept_enqueue(&lsmc->sk, newsmcsk);
+@@ -2422,6 +2422,7 @@ static void smc_listen_work(struct work_struct *work)
+ u8 accept_version;
+ int rc = 0;
+
++ lock_sock(&new_smc->sk); /* release in smc_listen_out() */
+ if (new_smc->listen_smc->sk.sk_state != SMC_LISTEN)
+ return smc_listen_out_err(new_smc);
+
+@@ -2479,7 +2480,6 @@ static void smc_listen_work(struct work_struct *work)
+ goto out_decl;
+
+ mutex_lock(&smc_server_lgr_pending);
+- smc_close_init(new_smc);
+ smc_rx_init(new_smc);
+ smc_tx_init(new_smc);
+
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index 439f7553997728..b7e25e7e9933b6 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -814,10 +814,10 @@ static void cleanup_bearer(struct work_struct *work)
+ kfree_rcu(rcast, rcu);
+ }
+
+- atomic_dec(&tipc_net(sock_net(ub->ubsock->sk))->wq_count);
+ dst_cache_destroy(&ub->rcast.dst_cache);
+ udp_tunnel_sock_release(ub->ubsock);
+ synchronize_net();
++ atomic_dec(&tipc_net(sock_net(ub->ubsock->sk))->wq_count);
+ kfree(ub);
+ }
+
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index dfd29160fe11c4..b52b798aa4c292 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -117,12 +117,14 @@
+ static int __vsock_bind(struct sock *sk, struct sockaddr_vm *addr);
+ static void vsock_sk_destruct(struct sock *sk);
+ static int vsock_queue_rcv_skb(struct sock *sk, struct sk_buff *skb);
++static void vsock_close(struct sock *sk, long timeout);
+
+ /* Protocol family. */
+ struct proto vsock_proto = {
+ .name = "AF_VSOCK",
+ .owner = THIS_MODULE,
+ .obj_size = sizeof(struct vsock_sock),
++ .close = vsock_close,
+ #ifdef CONFIG_BPF_SYSCALL
+ .psock_update_sk_prot = vsock_bpf_update_proto,
+ #endif
+@@ -797,39 +799,37 @@ static bool sock_type_connectible(u16 type)
+
+ static void __vsock_release(struct sock *sk, int level)
+ {
+- if (sk) {
+- struct sock *pending;
+- struct vsock_sock *vsk;
+-
+- vsk = vsock_sk(sk);
+- pending = NULL; /* Compiler warning. */
++ struct vsock_sock *vsk;
++ struct sock *pending;
+
+- /* When "level" is SINGLE_DEPTH_NESTING, use the nested
+- * version to avoid the warning "possible recursive locking
+- * detected". When "level" is 0, lock_sock_nested(sk, level)
+- * is the same as lock_sock(sk).
+- */
+- lock_sock_nested(sk, level);
++ vsk = vsock_sk(sk);
++ pending = NULL; /* Compiler warning. */
+
+- if (vsk->transport)
+- vsk->transport->release(vsk);
+- else if (sock_type_connectible(sk->sk_type))
+- vsock_remove_sock(vsk);
++ /* When "level" is SINGLE_DEPTH_NESTING, use the nested
++ * version to avoid the warning "possible recursive locking
++ * detected". When "level" is 0, lock_sock_nested(sk, level)
++ * is the same as lock_sock(sk).
++ */
++ lock_sock_nested(sk, level);
+
+- sock_orphan(sk);
+- sk->sk_shutdown = SHUTDOWN_MASK;
++ if (vsk->transport)
++ vsk->transport->release(vsk);
++ else if (sock_type_connectible(sk->sk_type))
++ vsock_remove_sock(vsk);
+
+- skb_queue_purge(&sk->sk_receive_queue);
++ sock_orphan(sk);
++ sk->sk_shutdown = SHUTDOWN_MASK;
+
+- /* Clean up any sockets that never were accepted. */
+- while ((pending = vsock_dequeue_accept(sk)) != NULL) {
+- __vsock_release(pending, SINGLE_DEPTH_NESTING);
+- sock_put(pending);
+- }
++ skb_queue_purge(&sk->sk_receive_queue);
+
+- release_sock(sk);
+- sock_put(sk);
++ /* Clean up any sockets that never were accepted. */
++ while ((pending = vsock_dequeue_accept(sk)) != NULL) {
++ __vsock_release(pending, SINGLE_DEPTH_NESTING);
++ sock_put(pending);
+ }
++
++ release_sock(sk);
++ sock_put(sk);
+ }
+
+ static void vsock_sk_destruct(struct sock *sk)
+@@ -901,9 +901,22 @@ void vsock_data_ready(struct sock *sk)
+ }
+ EXPORT_SYMBOL_GPL(vsock_data_ready);
+
++/* Dummy callback required by sockmap.
++ * See unconditional call of saved_close() in sock_map_close().
++ */
++static void vsock_close(struct sock *sk, long timeout)
++{
++}
++
+ static int vsock_release(struct socket *sock)
+ {
+- __vsock_release(sock->sk, 0);
++ struct sock *sk = sock->sk;
++
++ if (!sk)
++ return 0;
++
++ sk->sk_prot->close(sk, 0);
++ __vsock_release(sk, 0);
+ sock->sk = NULL;
+ sock->state = SS_FREE;
+
+@@ -1054,6 +1067,9 @@ static __poll_t vsock_poll(struct file *file, struct socket *sock,
+ mask |= EPOLLRDHUP;
+ }
+
++ if (sk_is_readable(sk))
++ mask |= EPOLLIN | EPOLLRDNORM;
++
+ if (sock->type == SOCK_DGRAM) {
+ /* For datagram sockets we can read if there is something in
+ * the queue and write as long as the socket isn't shutdown for
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index 521a2938e50a12..0662d34b09ee78 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -387,10 +387,9 @@ void xp_dma_unmap(struct xsk_buff_pool *pool, unsigned long attrs)
+ return;
+ }
+
+- if (!refcount_dec_and_test(&dma_map->users))
+- return;
++ if (refcount_dec_and_test(&dma_map->users))
++ __xp_dma_unmap(dma_map, attrs);
+
+- __xp_dma_unmap(dma_map, attrs);
+ kvfree(pool->dma_pages);
+ pool->dma_pages = NULL;
+ pool->dma_pages_cnt = 0;
+diff --git a/net/xdp/xskmap.c b/net/xdp/xskmap.c
+index e1c526f97ce31f..afa457506274c1 100644
+--- a/net/xdp/xskmap.c
++++ b/net/xdp/xskmap.c
+@@ -224,7 +224,7 @@ static long xsk_map_delete_elem(struct bpf_map *map, void *key)
+ struct xsk_map *m = container_of(map, struct xsk_map, map);
+ struct xdp_sock __rcu **map_entry;
+ struct xdp_sock *old_xs;
+- int k = *(u32 *)key;
++ u32 k = *(u32 *)key;
+
+ if (k >= map->max_entries)
+ return -EINVAL;
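xsk_map_delete_elem() receives the key as a u32, and holding it in a signed int turns values of 0x80000000 and above negative before the bounds check; declaring k as u32 makes the comparison with the unsigned max_entries read exactly as written, with no implicit signed/unsigned conversion in between. In miniature:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t max_entries = 64;
        uint32_t key = 0x80000000u; /* user-controlled map key */

        int k_old = (int)key;   /* old: int k = *(u32 *)key;
                                 * implementation-defined; INT_MIN on
                                 * common two's-complement ABIs */
        uint32_t k_new = key;   /* new: u32 k = *(u32 *)key; */

        printf("as int:    %d\n", k_old);
        printf("u32 check: %s\n",
               k_new >= max_entries ? "rejected" : "accepted");
        return 0;
    }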
+diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
+index 032c9089e6862d..e936254531fd0a 100644
+--- a/rust/kernel/lib.rs
++++ b/rust/kernel/lib.rs
+@@ -12,10 +12,10 @@
+ //! do so first instead of bypassing this crate.
+
+ #![no_std]
++#![feature(arbitrary_self_types)]
+ #![feature(coerce_unsized)]
+ #![feature(dispatch_from_dyn)]
+ #![feature(new_uninit)]
+-#![feature(receiver_trait)]
+ #![feature(unsize)]
+
+ // Ensure conditional compilation based on the kernel configuration works;
+diff --git a/rust/kernel/list/arc.rs b/rust/kernel/list/arc.rs
+index d801b9dc6291db..3483d8c232c4f1 100644
+--- a/rust/kernel/list/arc.rs
++++ b/rust/kernel/list/arc.rs
+@@ -441,9 +441,6 @@ fn as_ref(&self) -> &Arc<T> {
+ }
+ }
+
+-// This is to allow [`ListArc`] (and variants) to be used as the type of `self`.
+-impl<T, const ID: u64> core::ops::Receiver for ListArc<T, ID> where T: ListArcSafe<ID> + ?Sized {}
+-
+ // This is to allow coercion from `ListArc<T>` to `ListArc<U>` if `T` can be converted to the
+ // dynamically-sized type (DST) `U`.
+ impl<T, U, const ID: u64> core::ops::CoerceUnsized<ListArc<U, ID>> for ListArc<T, ID>
+diff --git a/rust/kernel/sync/arc.rs b/rust/kernel/sync/arc.rs
+index 3021f30fd822f6..28743a7c74a847 100644
+--- a/rust/kernel/sync/arc.rs
++++ b/rust/kernel/sync/arc.rs
+@@ -171,9 +171,6 @@ unsafe fn container_of(ptr: *const T) -> NonNull<ArcInner<T>> {
+ }
+ }
+
+-// This is to allow [`Arc`] (and variants) to be used as the type of `self`.
+-impl<T: ?Sized> core::ops::Receiver for Arc<T> {}
+-
+ // This is to allow coercion from `Arc<T>` to `Arc<U>` if `T` can be converted to the
+ // dynamically-sized type (DST) `U`.
+ impl<T: ?Sized + Unsize<U>, U: ?Sized> core::ops::CoerceUnsized<Arc<U>> for Arc<T> {}
+@@ -480,9 +477,6 @@ pub struct ArcBorrow<'a, T: ?Sized + 'a> {
+ _p: PhantomData<&'a ()>,
+ }
+
+-// This is to allow [`ArcBorrow`] (and variants) to be used as the type of `self`.
+-impl<T: ?Sized> core::ops::Receiver for ArcBorrow<'_, T> {}
+-
+ // This is to allow `ArcBorrow<U>` to be dispatched on when `ArcBorrow<T>` can be coerced into
+ // `ArcBorrow<U>`.
+ impl<T: ?Sized + Unsize<U>, U: ?Sized> core::ops::DispatchFromDyn<ArcBorrow<'_, U>>
+diff --git a/samples/bpf/test_cgrp2_sock.c b/samples/bpf/test_cgrp2_sock.c
+index a0811df888f453..8ca2a445ffa155 100644
+--- a/samples/bpf/test_cgrp2_sock.c
++++ b/samples/bpf/test_cgrp2_sock.c
+@@ -178,8 +178,10 @@ static int show_sockopts(int family)
+ return 1;
+ }
+
+- if (get_bind_to_device(sd, name, sizeof(name)) < 0)
++ if (get_bind_to_device(sd, name, sizeof(name)) < 0) {
++ close(sd);
+ return 1;
++ }
+
+ mark = get_somark(sd);
+ prio = get_priority(sd);
+diff --git a/scripts/Makefile.build b/scripts/Makefile.build
+index 8f423a1faf5077..880785b52c04ad 100644
+--- a/scripts/Makefile.build
++++ b/scripts/Makefile.build
+@@ -248,7 +248,7 @@ $(obj)/%.lst: $(obj)/%.c FORCE
+ # Compile Rust sources (.rs)
+ # ---------------------------------------------------------------------------
+
+-rust_allowed_features := new_uninit
++rust_allowed_features := arbitrary_self_types,new_uninit
+
+ # `--out-dir` is required to avoid temporaries being created by `rustc` in the
+ # current working directory, which may be not accessible in the out-of-tree
+diff --git a/scripts/mod/modpost.c b/scripts/mod/modpost.c
+index 107393a8c48a59..971eda0c6ba737 100644
+--- a/scripts/mod/modpost.c
++++ b/scripts/mod/modpost.c
+@@ -785,7 +785,7 @@ static void check_section(const char *modname, struct elf_info *elf,
+ ".ltext", ".ltext.*"
+ #define OTHER_TEXT_SECTIONS ".ref.text", ".head.text", ".spinlock.text", \
+ ".fixup", ".entry.text", ".exception.text", \
+- ".coldtext", ".softirqentry.text"
++ ".coldtext", ".softirqentry.text", ".irqentry.text"
+
+ #define ALL_TEXT_SECTIONS ".init.text", ".exit.text", \
+ TEXT_SECTIONS, OTHER_TEXT_SECTIONS
+diff --git a/scripts/setlocalversion b/scripts/setlocalversion
+index 38b96c6797f408..5818465abba984 100755
+--- a/scripts/setlocalversion
++++ b/scripts/setlocalversion
+@@ -30,6 +30,27 @@ if test $# -gt 0 -o ! -d "$srctree"; then
+ usage
+ fi
+
++try_tag() {
++ tag="$1"
++
++ # Is $tag an annotated tag?
++ [ "$(git cat-file -t "$tag" 2> /dev/null)" = tag ] || return 1
++
++ # Is it an ancestor of HEAD, and if so, how many commits are in $tag..HEAD?
++ # shellcheck disable=SC2046 # word splitting is the point here
++ set -- $(git rev-list --count --left-right "$tag"...HEAD 2> /dev/null)
++
++ # $1 is 0 if and only if $tag is an ancestor of HEAD. Use
++ # string comparison, because $1 is empty if the 'git rev-list'
++ # command somehow failed.
++ [ "$1" = 0 ] || return 1
++
++ # $2 is the number of commits in the range $tag..HEAD, possibly 0.
++ count="$2"
++
++ return 0
++}
++
+ scm_version()
+ {
+ local short=false
+@@ -61,33 +82,33 @@ scm_version()
+ # stable kernel: 6.1.7 -> v6.1.7
+ version_tag=v$(echo "${KERNELVERSION}" | sed -E 's/^([0-9]+\.[0-9]+)\.0(.*)$/\1\2/')
+
++ # try_tag initializes count if the tag is usable.
++ count=
++
+ # If a localversion* file exists, and the corresponding
+ # annotated tag exists and is an ancestor of HEAD, use
+ # it. This is the case in linux-next.
+- tag=${file_localversion#-}
+- desc=
+- if [ -n "${tag}" ]; then
+- desc=$(git describe --match=$tag 2>/dev/null)
++ if [ -n "${file_localversion#-}" ] ; then
++ try_tag "${file_localversion#-}"
+ fi
+
+ # Otherwise, if a localversion* file exists, and the tag
+ # obtained by appending it to the tag derived from
+ # KERNELVERSION exists and is an ancestor of HEAD, use
+ # it. This is e.g. the case in linux-rt.
+- if [ -z "${desc}" ] && [ -n "${file_localversion}" ]; then
+- tag="${version_tag}${file_localversion}"
+- desc=$(git describe --match=$tag 2>/dev/null)
++ if [ -z "${count}" ] && [ -n "${file_localversion}" ]; then
++ try_tag "${version_tag}${file_localversion}"
+ fi
+
+ # Otherwise, default to the annotated tag derived from KERNELVERSION.
+- if [ -z "${desc}" ]; then
+- tag="${version_tag}"
+- desc=$(git describe --match=$tag 2>/dev/null)
++ if [ -z "${count}" ]; then
++ try_tag "${version_tag}"
+ fi
+
+- # If we are at the tagged commit, we ignore it because the version is
+- # well-defined.
+- if [ "${tag}" != "${desc}" ]; then
++ # If we are at the tagged commit, we ignore it because the
++ # version is well-defined. If none of the attempted tags exist
++ # or were usable, $count is still empty.
++ if [ -z "${count}" ] || [ "${count}" -gt 0 ]; then
+
+ # If only the short version is requested, don't bother
+ # running further git commands
+@@ -95,14 +116,15 @@ scm_version()
+ echo "+"
+ return
+ fi
++
+ # If we are past the tagged commit, we pretty print it.
+ # (like 6.1.0-14595-g292a089d78d3)
+- if [ -n "${desc}" ]; then
+- echo "${desc}" | awk -F- '{printf("-%05d", $(NF-1))}'
++ if [ -n "${count}" ]; then
++ printf "%s%05d" "-" "${count}"
+ fi
+
+ # Add -g and exactly 12 hex chars.
+- printf '%s%s' -g "$(echo $head | cut -c1-12)"
++ printf '%s%.12s' -g "$head"
+ fi
+
+ if ${no_dirty}; then
+diff --git a/sound/core/seq/seq_ump_client.c b/sound/core/seq/seq_ump_client.c
+index e5d3f4d206bf6a..e956f17f379282 100644
+--- a/sound/core/seq/seq_ump_client.c
++++ b/sound/core/seq/seq_ump_client.c
+@@ -257,12 +257,12 @@ static void update_port_infos(struct seq_ump_client *client)
+ continue;
+
+ old->addr.client = client->seq_client;
+- old->addr.port = i;
++ old->addr.port = ump_group_to_seq_port(i);
+ err = snd_seq_kernel_client_ctl(client->seq_client,
+ SNDRV_SEQ_IOCTL_GET_PORT_INFO,
+ old);
+ if (err < 0)
+- return;
++ continue;
+ fill_port_info(new, client, &client->ump->groups[i]);
+ if (old->capability == new->capability &&
+ !strcmp(old->name, new->name))
+@@ -271,7 +271,7 @@ static void update_port_infos(struct seq_ump_client *client)
+ SNDRV_SEQ_IOCTL_SET_PORT_INFO,
+ new);
+ if (err < 0)
+- return;
++ continue;
+ /* notify to system port */
+ snd_seq_system_client_ev_port_change(client->seq_client, i);
+ }
+diff --git a/sound/pci/hda/hda_auto_parser.c b/sound/pci/hda/hda_auto_parser.c
+index 7c6b1fe8dfcce3..8e74be038b0fad 100644
+--- a/sound/pci/hda/hda_auto_parser.c
++++ b/sound/pci/hda/hda_auto_parser.c
+@@ -956,6 +956,28 @@ void snd_hda_pick_pin_fixup(struct hda_codec *codec,
+ }
+ EXPORT_SYMBOL_GPL(snd_hda_pick_pin_fixup);
+
++/* check whether the given quirk entry matches with vendor/device pair */
++static bool hda_quirk_match(u16 vendor, u16 device, const struct hda_quirk *q)
++{
++ if (q->subvendor != vendor)
++ return false;
++ return !q->subdevice ||
++ (device & q->subdevice_mask) == q->subdevice;
++}
++
++/* look through the quirk list and return the matching entry */
++static const struct hda_quirk *
++hda_quirk_lookup_id(u16 vendor, u16 device, const struct hda_quirk *list)
++{
++ const struct hda_quirk *q;
++
++ for (q = list; q->subvendor || q->subdevice; q++) {
++ if (hda_quirk_match(vendor, device, q))
++ return q;
++ }
++ return NULL;
++}
++
+ /**
+ * snd_hda_pick_fixup - Pick up a fixup matching with PCI/codec SSID or model string
+ * @codec: the HDA codec
+@@ -975,14 +997,16 @@ EXPORT_SYMBOL_GPL(snd_hda_pick_pin_fixup);
+ */
+ void snd_hda_pick_fixup(struct hda_codec *codec,
+ const struct hda_model_fixup *models,
+- const struct snd_pci_quirk *quirk,
++ const struct hda_quirk *quirk,
+ const struct hda_fixup *fixlist)
+ {
+- const struct snd_pci_quirk *q;
++ const struct hda_quirk *q;
+ int id = HDA_FIXUP_ID_NOT_SET;
+ const char *name = NULL;
+ const char *type = NULL;
+ unsigned int vendor, device;
++ u16 pci_vendor, pci_device;
++ u16 codec_vendor, codec_device;
+
+ if (codec->fixup_id != HDA_FIXUP_ID_NOT_SET)
+ return;
+@@ -1013,27 +1037,42 @@ void snd_hda_pick_fixup(struct hda_codec *codec,
+ if (!quirk)
+ return;
+
++ if (codec->bus->pci) {
++ pci_vendor = codec->bus->pci->subsystem_vendor;
++ pci_device = codec->bus->pci->subsystem_device;
++ }
++
++ codec_vendor = codec->core.subsystem_id >> 16;
++ codec_device = codec->core.subsystem_id & 0xffff;
++
+ /* match with the SSID alias given by the model string "XXXX:YYYY" */
+ if (codec->modelname &&
+ sscanf(codec->modelname, "%04x:%04x", &vendor, &device) == 2) {
+- q = snd_pci_quirk_lookup_id(vendor, device, quirk);
++ q = hda_quirk_lookup_id(vendor, device, quirk);
+ if (q) {
+ type = "alias SSID";
+ goto found_device;
+ }
+ }
+
+- /* match with the PCI SSID */
+- q = snd_pci_quirk_lookup(codec->bus->pci, quirk);
+- if (q) {
+- type = "PCI SSID";
+- goto found_device;
++ /* match primarily with the PCI SSID */
++ for (q = quirk; q->subvendor || q->subdevice; q++) {
++ /* if the entry is specific to codec SSID, check with it */
++ if (!codec->bus->pci || q->match_codec_ssid) {
++ if (hda_quirk_match(codec_vendor, codec_device, q)) {
++ type = "codec SSID";
++ goto found_device;
++ }
++ } else {
++ if (hda_quirk_match(pci_vendor, pci_device, q)) {
++ type = "PCI SSID";
++ goto found_device;
++ }
++ }
+ }
+
+ /* match with the codec SSID */
+- q = snd_pci_quirk_lookup_id(codec->core.subsystem_id >> 16,
+- codec->core.subsystem_id & 0xffff,
+- quirk);
++ q = hda_quirk_lookup_id(codec_vendor, codec_device, quirk);
+ if (q) {
+ type = "codec SSID";
+ goto found_device;
+diff --git a/sound/pci/hda/hda_local.h b/sound/pci/hda/hda_local.h
+index 53a5a62b78fa98..763f79f6f32e70 100644
+--- a/sound/pci/hda/hda_local.h
++++ b/sound/pci/hda/hda_local.h
+@@ -292,6 +292,32 @@ struct hda_fixup {
+ } v;
+ };
+
++/*
++ * extended form of snd_pci_quirk:
++ * for PCI SSID matching, use SND_PCI_QUIRK() like before;
++ * for codec SSID matching, use the new HDA_CODEC_QUIRK() instead
++ */
++struct hda_quirk {
++ unsigned short subvendor; /* PCI subvendor ID */
++ unsigned short subdevice; /* PCI subdevice ID */
++ unsigned short subdevice_mask; /* bitmask to match */
++ bool match_codec_ssid; /* match only with codec SSID */
++ int value; /* value */
++#ifdef CONFIG_SND_DEBUG_VERBOSE
++ const char *name; /* name of the device (optional) */
++#endif
++};
++
++#ifdef CONFIG_SND_DEBUG_VERBOSE
++#define HDA_CODEC_QUIRK(vend, dev, xname, val) \
++ { _SND_PCI_QUIRK_ID(vend, dev), .value = (val), .name = (xname),\
++ .match_codec_ssid = true }
++#else
++#define HDA_CODEC_QUIRK(vend, dev, xname, val) \
++ { _SND_PCI_QUIRK_ID(vend, dev), .value = (val), \
++ .match_codec_ssid = true }
++#endif
++
+ struct snd_hda_pin_quirk {
+ unsigned int codec; /* Codec vendor/device ID */
+ unsigned short subvendor; /* PCI subvendor ID */
+@@ -351,7 +377,7 @@ void snd_hda_apply_fixup(struct hda_codec *codec, int action);
+ void __snd_hda_apply_fixup(struct hda_codec *codec, int id, int action, int depth);
+ void snd_hda_pick_fixup(struct hda_codec *codec,
+ const struct hda_model_fixup *models,
+- const struct snd_pci_quirk *quirk,
++ const struct hda_quirk *quirk,
+ const struct hda_fixup *fixlist);
+ void snd_hda_pick_pin_fixup(struct hda_codec *codec,
+ const struct snd_hda_pin_quirk *pin_quirk,
+diff --git a/sound/pci/hda/patch_analog.c b/sound/pci/hda/patch_analog.c
+index 1e9dadcdc51be2..56354fe060a1aa 100644
+--- a/sound/pci/hda/patch_analog.c
++++ b/sound/pci/hda/patch_analog.c
+@@ -345,7 +345,7 @@ static const struct hda_fixup ad1986a_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk ad1986a_fixup_tbl[] = {
++static const struct hda_quirk ad1986a_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x30af, "HP B2800", AD1986A_FIXUP_LAPTOP_IMIC),
+ SND_PCI_QUIRK(0x1043, 0x1153, "ASUS M9V", AD1986A_FIXUP_LAPTOP_IMIC),
+ SND_PCI_QUIRK(0x1043, 0x1443, "ASUS Z99He", AD1986A_FIXUP_EAPD),
+@@ -588,7 +588,7 @@ static const struct hda_fixup ad1981_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk ad1981_fixup_tbl[] = {
++static const struct hda_quirk ad1981_fixup_tbl[] = {
+ SND_PCI_QUIRK_VENDOR(0x1014, "Lenovo", AD1981_FIXUP_AMP_OVERRIDE),
+ SND_PCI_QUIRK_VENDOR(0x103c, "HP", AD1981_FIXUP_HP_EAPD),
+ SND_PCI_QUIRK_VENDOR(0x17aa, "Lenovo", AD1981_FIXUP_AMP_OVERRIDE),
+@@ -1061,7 +1061,7 @@ static const struct hda_fixup ad1884_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk ad1884_fixup_tbl[] = {
++static const struct hda_quirk ad1884_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x2a82, "HP Touchsmart", AD1884_FIXUP_HP_TOUCHSMART),
+ SND_PCI_QUIRK_VENDOR(0x103c, "HP", AD1884_FIXUP_HP_EAPD),
+ SND_PCI_QUIRK_VENDOR(0x17aa, "Lenovo Thinkpad", AD1884_FIXUP_THINKPAD),
+diff --git a/sound/pci/hda/patch_cirrus.c b/sound/pci/hda/patch_cirrus.c
+index 654724559355ef..06e046214a4134 100644
+--- a/sound/pci/hda/patch_cirrus.c
++++ b/sound/pci/hda/patch_cirrus.c
+@@ -385,7 +385,7 @@ static const struct hda_model_fixup cs420x_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk cs420x_fixup_tbl[] = {
++static const struct hda_quirk cs420x_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x10de, 0x0ac0, "MacBookPro 5,3", CS420X_MBP53),
+ SND_PCI_QUIRK(0x10de, 0x0d94, "MacBookAir 3,1(2)", CS420X_MBP55),
+ SND_PCI_QUIRK(0x10de, 0xcb79, "MacBookPro 5,5", CS420X_MBP55),
+@@ -634,13 +634,13 @@ static const struct hda_model_fixup cs4208_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk cs4208_fixup_tbl[] = {
++static const struct hda_quirk cs4208_fixup_tbl[] = {
+ SND_PCI_QUIRK_VENDOR(0x106b, "Apple", CS4208_MAC_AUTO),
+ {} /* terminator */
+ };
+
+ /* codec SSID matching */
+-static const struct snd_pci_quirk cs4208_mac_fixup_tbl[] = {
++static const struct hda_quirk cs4208_mac_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x106b, 0x5e00, "MacBookPro 11,2", CS4208_MBP11),
+ SND_PCI_QUIRK(0x106b, 0x6c00, "MacMini 7,1", CS4208_MACMINI),
+ SND_PCI_QUIRK(0x106b, 0x7100, "MacBookAir 6,1", CS4208_MBA6),
+@@ -818,7 +818,7 @@ static const struct hda_model_fixup cs421x_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk cs421x_fixup_tbl[] = {
++static const struct hda_quirk cs421x_fixup_tbl[] = {
+ /* Test Intel board + CDB2410 */
+ SND_PCI_QUIRK(0x8086, 0x5001, "DP45SG/CDB4210", CS421X_CDB4210),
+ {} /* terminator */
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index b2bcdf76da3058..2e9f817b948eb3 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -828,23 +828,6 @@ static const struct hda_pintbl cxt_pincfg_sws_js201d[] = {
+ {}
+ };
+
+-/* pincfg quirk for Tuxedo Sirius;
+- * unfortunately the (PCI) SSID conflicts with System76 Pangolin pang14,
+- * which has incompatible pin setup, so we check the codec SSID (luckily
+- * different one!) and conditionally apply the quirk here
+- */
+-static void cxt_fixup_sirius_top_speaker(struct hda_codec *codec,
+- const struct hda_fixup *fix,
+- int action)
+-{
+- /* ignore for incorrectly picked-up pang14 */
+- if (codec->core.subsystem_id == 0x278212b3)
+- return;
+- /* set up the top speaker pin */
+- if (action == HDA_FIXUP_ACT_PRE_PROBE)
+- snd_hda_codec_set_pincfg(codec, 0x1d, 0x82170111);
+-}
+-
+ static const struct hda_fixup cxt_fixups[] = {
+ [CXT_PINCFG_LENOVO_X200] = {
+ .type = HDA_FIXUP_PINS,
+@@ -1009,12 +992,15 @@ static const struct hda_fixup cxt_fixups[] = {
+ .v.pins = cxt_pincfg_sws_js201d,
+ },
+ [CXT_PINCFG_TOP_SPEAKER] = {
+- .type = HDA_FIXUP_FUNC,
+- .v.func = cxt_fixup_sirius_top_speaker,
++ .type = HDA_FIXUP_PINS,
++ .v.pins = (const struct hda_pintbl[]) {
++ { 0x1d, 0x82170111 },
++ { }
++ },
+ },
+ };
+
+-static const struct snd_pci_quirk cxt5045_fixups[] = {
++static const struct hda_quirk cxt5045_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x30d5, "HP 530", CXT_FIXUP_HP_530),
+ SND_PCI_QUIRK(0x1179, 0xff31, "Toshiba P105", CXT_FIXUP_TOSHIBA_P105),
+ /* HP, Packard Bell, Fujitsu-Siemens & Lenovo laptops have
+@@ -1034,7 +1020,7 @@ static const struct hda_model_fixup cxt5045_fixup_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk cxt5047_fixups[] = {
++static const struct hda_quirk cxt5047_fixups[] = {
+ /* HP laptops have really bad sound over 0 dB on NID 0x10.
+ */
+ SND_PCI_QUIRK_VENDOR(0x103c, "HP", CXT_FIXUP_CAP_MIX_AMP_5047),
+@@ -1046,7 +1032,7 @@ static const struct hda_model_fixup cxt5047_fixup_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk cxt5051_fixups[] = {
++static const struct hda_quirk cxt5051_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x360b, "Compaq CQ60", CXT_PINCFG_COMPAQ_CQ60),
+ SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo X200", CXT_PINCFG_LENOVO_X200),
+ {}
+@@ -1057,7 +1043,7 @@ static const struct hda_model_fixup cxt5051_fixup_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk cxt5066_fixups[] = {
++static const struct hda_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK(0x1025, 0x0543, "Acer Aspire One 522", CXT_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x054c, "Acer Aspire 3830TG", CXT_FIXUP_ASPIRE_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x054f, "Acer Aspire 4830T", CXT_FIXUP_ASPIRE_DMIC),
+@@ -1109,8 +1095,8 @@ static const struct snd_pci_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK_VENDOR(0x17aa, "Thinkpad", CXT_FIXUP_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x1c06, 0x2011, "Lemote A1004", CXT_PINCFG_LEMOTE_A1004),
+ SND_PCI_QUIRK(0x1c06, 0x2012, "Lemote A1205", CXT_PINCFG_LEMOTE_A1205),
+- SND_PCI_QUIRK(0x2782, 0x12c3, "Sirius Gen1", CXT_PINCFG_TOP_SPEAKER),
+- SND_PCI_QUIRK(0x2782, 0x12c5, "Sirius Gen2", CXT_PINCFG_TOP_SPEAKER),
++ HDA_CODEC_QUIRK(0x2782, 0x12c3, "Sirius Gen1", CXT_PINCFG_TOP_SPEAKER),
++ HDA_CODEC_QUIRK(0x2782, 0x12c5, "Sirius Gen2", CXT_PINCFG_TOP_SPEAKER),
+ {}
+ };
+
+diff --git a/sound/pci/hda/patch_cs8409-tables.c b/sound/pci/hda/patch_cs8409-tables.c
+index 36b411d1a9609a..759f48038273df 100644
+--- a/sound/pci/hda/patch_cs8409-tables.c
++++ b/sound/pci/hda/patch_cs8409-tables.c
+@@ -473,7 +473,7 @@ struct sub_codec dolphin_cs42l42_1 = {
+ * Arrays Used for all projects using CS8409
+ ******************************************************************************/
+
+-const struct snd_pci_quirk cs8409_fixup_tbl[] = {
++const struct hda_quirk cs8409_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x0A11, "Bullseye", CS8409_BULLSEYE),
+ SND_PCI_QUIRK(0x1028, 0x0A12, "Bullseye", CS8409_BULLSEYE),
+ SND_PCI_QUIRK(0x1028, 0x0A23, "Bullseye", CS8409_BULLSEYE),
+diff --git a/sound/pci/hda/patch_cs8409.h b/sound/pci/hda/patch_cs8409.h
+index 937e9387abdc7a..5e48115caf096b 100644
+--- a/sound/pci/hda/patch_cs8409.h
++++ b/sound/pci/hda/patch_cs8409.h
+@@ -355,7 +355,7 @@ int cs42l42_volume_put(struct snd_kcontrol *kctrl, struct snd_ctl_elem_value *uc
+
+ extern const struct hda_pcm_stream cs42l42_48k_pcm_analog_playback;
+ extern const struct hda_pcm_stream cs42l42_48k_pcm_analog_capture;
+-extern const struct snd_pci_quirk cs8409_fixup_tbl[];
++extern const struct hda_quirk cs8409_fixup_tbl[];
+ extern const struct hda_model_fixup cs8409_models[];
+ extern const struct hda_fixup cs8409_fixups[];
+ extern const struct hda_verb cs8409_cs42l42_init_verbs[];
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 18e6779a83be2f..973671e0cdb09d 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -1567,7 +1567,7 @@ static const struct hda_fixup alc880_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc880_fixup_tbl[] = {
++static const struct hda_quirk alc880_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1019, 0x0f69, "Coeus G610P", ALC880_FIXUP_W810),
+ SND_PCI_QUIRK(0x1043, 0x10c3, "ASUS W5A", ALC880_FIXUP_ASUS_W5A),
+ SND_PCI_QUIRK(0x1043, 0x1964, "ASUS Z71V", ALC880_FIXUP_Z71V),
+@@ -1876,7 +1876,7 @@ static const struct hda_fixup alc260_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc260_fixup_tbl[] = {
++static const struct hda_quirk alc260_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x007b, "Acer C20x", ALC260_FIXUP_GPIO1),
+ SND_PCI_QUIRK(0x1025, 0x007f, "Acer Aspire 9500", ALC260_FIXUP_COEF),
+ SND_PCI_QUIRK(0x1025, 0x008f, "Acer", ALC260_FIXUP_GPIO1),
+@@ -2568,7 +2568,7 @@ static const struct hda_fixup alc882_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc882_fixup_tbl[] = {
++static const struct hda_quirk alc882_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x006c, "Acer Aspire 9810", ALC883_FIXUP_ACER_EAPD),
+ SND_PCI_QUIRK(0x1025, 0x0090, "Acer Aspire", ALC883_FIXUP_ACER_EAPD),
+ SND_PCI_QUIRK(0x1025, 0x0107, "Acer Aspire", ALC883_FIXUP_ACER_EAPD),
+@@ -2912,7 +2912,7 @@ static const struct hda_fixup alc262_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc262_fixup_tbl[] = {
++static const struct hda_quirk alc262_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x170b, "HP Z200", ALC262_FIXUP_HP_Z200),
+ SND_PCI_QUIRK(0x10cf, 0x1397, "Fujitsu Lifebook S7110", ALC262_FIXUP_FSC_S7110),
+ SND_PCI_QUIRK(0x10cf, 0x142d, "Fujitsu Lifebook E8410", ALC262_FIXUP_BENQ),
+@@ -3073,7 +3073,7 @@ static const struct hda_model_fixup alc268_fixup_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk alc268_fixup_tbl[] = {
++static const struct hda_quirk alc268_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x0139, "Acer TravelMate 6293", ALC268_FIXUP_SPDIF),
+ SND_PCI_QUIRK(0x1025, 0x015b, "Acer AOA 150 (ZG5)", ALC268_FIXUP_INV_DMIC),
+ /* below is codec SSID since multiple Toshiba laptops have the
+@@ -7726,8 +7726,6 @@ enum {
+ ALC287_FIXUP_LEGION_15IMHG05_AUTOMUTE,
+ ALC287_FIXUP_YOGA7_14ITL_SPEAKERS,
+ ALC298_FIXUP_LENOVO_C940_DUET7,
+- ALC287_FIXUP_LENOVO_14IRP8_DUETITL,
+- ALC287_FIXUP_LENOVO_LEGION_7,
+ ALC287_FIXUP_13S_GEN2_SPEAKERS,
+ ALC256_FIXUP_SET_COEF_DEFAULTS,
+ ALC256_FIXUP_SYSTEM76_MIC_NO_PRESENCE,
+@@ -7772,8 +7770,6 @@ enum {
+ ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1,
+ ALC287_FIXUP_LENOVO_THKPAD_WH_ALC1318,
+ ALC256_FIXUP_CHROME_BOOK,
+- ALC287_FIXUP_LENOVO_14ARP8_LEGION_IAH7,
+- ALC287_FIXUP_LENOVO_SSID_17AA3820,
+ ALC245_FIXUP_CLEVO_NOISY_MIC,
+ ALC269_FIXUP_VAIO_VJFH52_MIC_NO_PRESENCE,
+ ALC233_FIXUP_MEDION_MTL_SPK,
+@@ -7796,72 +7792,6 @@ static void alc298_fixup_lenovo_c940_duet7(struct hda_codec *codec,
+ __snd_hda_apply_fixup(codec, id, action, 0);
+ }
+
+-/* A special fixup for Lenovo Slim/Yoga Pro 9 14IRP8 and Yoga DuetITL 2021;
+- * 14IRP8 PCI SSID will mistakenly be matched with the DuetITL codec SSID,
+- * so we need to apply a different fixup in this case. The only DuetITL codec
+- * SSID reported so far is the 17aa:3802 while the 14IRP8 has the 17aa:38be
+- * and 17aa:38bf. If it weren't for the PCI SSID, the 14IRP8 models would
+- * have matched correctly by their codecs.
+- */
+-static void alc287_fixup_lenovo_14irp8_duetitl(struct hda_codec *codec,
+- const struct hda_fixup *fix,
+- int action)
+-{
+- int id;
+-
+- if (codec->core.subsystem_id == 0x17aa3802)
+- id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* DuetITL */
+- else
+- id = ALC287_FIXUP_TAS2781_I2C; /* 14IRP8 */
+- __snd_hda_apply_fixup(codec, id, action, 0);
+-}
+-
+-/* Similar to above the Lenovo Yoga Pro 7 14ARP8 PCI SSID matches the codec SSID of the
+- Legion Y9000X 2022 IAH7.*/
+-static void alc287_fixup_lenovo_14arp8_legion_iah7(struct hda_codec *codec,
+- const struct hda_fixup *fix,
+- int action)
+-{
+- int id;
+-
+- if (codec->core.subsystem_id == 0x17aa386e)
+- id = ALC287_FIXUP_CS35L41_I2C_2; /* Legion Y9000X 2022 IAH7 */
+- else
+- id = ALC285_FIXUP_SPEAKER2_TO_DAC1; /* Yoga Pro 7 14ARP8 */
+- __snd_hda_apply_fixup(codec, id, action, 0);
+-}
+-
+-/* Another hilarious PCI SSID conflict with Lenovo Legion Pro 7 16ARX8H (with
+- * TAS2781 codec) and Legion 7i 16IAX7 (with CS35L41 codec);
+- * we apply a corresponding fixup depending on the codec SSID instead
+- */
+-static void alc287_fixup_lenovo_legion_7(struct hda_codec *codec,
+- const struct hda_fixup *fix,
+- int action)
+-{
+- int id;
+-
+- if (codec->core.subsystem_id == 0x17aa38a8)
+- id = ALC287_FIXUP_TAS2781_I2C; /* Legion Pro 7 16ARX8H */
+- else
+- id = ALC287_FIXUP_CS35L41_I2C_2; /* Legion 7i 16IAX7 */
+- __snd_hda_apply_fixup(codec, id, action, 0);
+-}
+-
+-/* Yet more conflicting PCI SSID (17aa:3820) on two Lenovo models */
+-static void alc287_fixup_lenovo_ssid_17aa3820(struct hda_codec *codec,
+- const struct hda_fixup *fix,
+- int action)
+-{
+- int id;
+-
+- if (codec->core.subsystem_id == 0x17aa3820)
+- id = ALC269_FIXUP_ASPIRE_HEADSET_MIC; /* IdeaPad 330-17IKB 81DM */
+- else /* 0x17aa3802 */
+- id = ALC287_FIXUP_YOGA7_14ITL_SPEAKERS; /* "Yoga Duet 7 13ITL6 */
+- __snd_hda_apply_fixup(codec, id, action, 0);
+-}
+-
+ static const struct hda_fixup alc269_fixups[] = {
+ [ALC269_FIXUP_GPIO2] = {
+ .type = HDA_FIXUP_FUNC,
+@@ -9810,14 +9740,6 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc298_fixup_lenovo_c940_duet7,
+ },
+- [ALC287_FIXUP_LENOVO_14IRP8_DUETITL] = {
+- .type = HDA_FIXUP_FUNC,
+- .v.func = alc287_fixup_lenovo_14irp8_duetitl,
+- },
+- [ALC287_FIXUP_LENOVO_LEGION_7] = {
+- .type = HDA_FIXUP_FUNC,
+- .v.func = alc287_fixup_lenovo_legion_7,
+- },
+ [ALC287_FIXUP_13S_GEN2_SPEAKERS] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+@@ -10002,10 +9924,6 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK,
+ },
+- [ALC287_FIXUP_LENOVO_14ARP8_LEGION_IAH7] = {
+- .type = HDA_FIXUP_FUNC,
+- .v.func = alc287_fixup_lenovo_14arp8_legion_iah7,
+- },
+ [ALC287_FIXUP_YOGA9_14IMH9_BASS_SPK_PIN] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc287_fixup_yoga9_14iap7_bass_spk_pin,
+@@ -10140,10 +10058,6 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC225_FIXUP_HEADSET_JACK
+ },
+- [ALC287_FIXUP_LENOVO_SSID_17AA3820] = {
+- .type = HDA_FIXUP_FUNC,
+- .v.func = alc287_fixup_lenovo_ssid_17aa3820,
+- },
+ [ALC245_FIXUP_CLEVO_NOISY_MIC] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc269_fixup_limit_int_mic_boost,
+@@ -10169,7 +10083,7 @@ static const struct hda_fixup alc269_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc269_fixup_tbl[] = {
++static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x0283, "Acer TravelMate 8371", ALC269_FIXUP_INV_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x029b, "Acer 1810TZ", ALC269_FIXUP_INV_DMIC),
+ SND_PCI_QUIRK(0x1025, 0x0349, "Acer AOD260", ALC269_FIXUP_INV_DMIC),
+@@ -10411,6 +10325,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x87b7, "HP Laptop 14-fq0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ SND_PCI_QUIRK(0x103c, 0x87c8, "HP", ALC287_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x87d3, "HP Laptop 15-gw0xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
++ SND_PCI_QUIRK(0x103c, 0x87df, "HP ProBook 430 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x87e5, "HP ProBook 440 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x87e7, "HP ProBook 450 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x87f1, "HP ProBook 630 G8 Notebook PC", ALC236_FIXUP_HP_GPIO_LED),
+@@ -10592,7 +10507,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8cdf, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8ce0, "HP SnowWhite", ALC287_FIXUP_CS35L41_I2C_2_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8cf5, "HP ZBook Studio 16", ALC245_FIXUP_CS35L41_SPI_4_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d01, "HP ZBook Power 14 G12", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8d84, "HP EliteBook X G1i", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d91, "HP ZBook Firefly 14 G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8d92, "HP ZBook Firefly 16 G12", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e18, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e19, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED),
++ SND_PCI_QUIRK(0x103c, 0x8e1a, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -10746,6 +10667,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x144d, 0xc830, "Samsung Galaxy Book Ion (NT950XCJ-X716A)", ALC298_FIXUP_SAMSUNG_AMP),
+ SND_PCI_QUIRK(0x144d, 0xc832, "Samsung Galaxy Book Flex Alpha (NP730QCJ)", ALC256_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xca03, "Samsung Galaxy Book2 Pro 360 (NP930QED)", ALC298_FIXUP_SAMSUNG_AMP),
++ SND_PCI_QUIRK(0x144d, 0xca06, "Samsung Galaxy Book3 360 (NP730QFG)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ SND_PCI_QUIRK(0x144d, 0xc868, "Samsung Galaxy Book2 Pro (NP930XED)", ALC298_FIXUP_SAMSUNG_AMP),
+ SND_PCI_QUIRK(0x144d, 0xc870, "Samsung Galaxy Book2 Pro (NP950XED)", ALC298_FIXUP_SAMSUNG_AMP_V2_2_AMPS),
+ SND_PCI_QUIRK(0x144d, 0xc872, "Samsung Galaxy Book2 Pro (NP950XEE)", ALC298_FIXUP_SAMSUNG_AMP_V2_2_AMPS),
+@@ -10903,11 +10825,13 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
+ SND_PCI_QUIRK(0x17aa, 0x334b, "Lenovo ThinkCentre M70 Gen5", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x3801, "Lenovo Yoga9 14IAP7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
+- SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga Pro 9 14IRP8 / DuetITL 2021", ALC287_FIXUP_LENOVO_14IRP8_DUETITL),
++ HDA_CODEC_QUIRK(0x17aa, 0x3802, "DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
++ SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga Pro 9 14IRP8", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3813, "Legion 7i 15IMHG05", ALC287_FIXUP_LEGION_15IMHG05_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3818, "Lenovo C940 / Yoga Duet 7", ALC298_FIXUP_LENOVO_C940_DUET7),
+ SND_PCI_QUIRK(0x17aa, 0x3819, "Lenovo 13s Gen2 ITL", ALC287_FIXUP_13S_GEN2_SPEAKERS),
+- SND_PCI_QUIRK(0x17aa, 0x3820, "IdeaPad 330 / Yoga Duet 7", ALC287_FIXUP_LENOVO_SSID_17AA3820),
++ HDA_CODEC_QUIRK(0x17aa, 0x3820, "IdeaPad 330-17IKB 81DM", ALC269_FIXUP_ASPIRE_HEADSET_MIC),
++ SND_PCI_QUIRK(0x17aa, 0x3820, "Yoga Duet 7 13ITL6", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3824, "Legion Y9000X 2020", ALC285_FIXUP_LEGION_Y9000X_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3827, "Ideapad S740", ALC285_FIXUP_IDEAPAD_S740_COEF),
+ SND_PCI_QUIRK(0x17aa, 0x3834, "Lenovo IdeaPad Slim 9i 14ITL5", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+@@ -10921,8 +10845,10 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3865, "Lenovo 13X", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x3866, "Lenovo 13X", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x3869, "Lenovo Yoga7 14IAL7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
+- SND_PCI_QUIRK(0x17aa, 0x386e, "Legion Y9000X 2022 IAH7 / Yoga Pro 7 14ARP8", ALC287_FIXUP_LENOVO_14ARP8_LEGION_IAH7),
+- SND_PCI_QUIRK(0x17aa, 0x386f, "Legion Pro 7/7i", ALC287_FIXUP_LENOVO_LEGION_7),
++ HDA_CODEC_QUIRK(0x17aa, 0x386e, "Legion Y9000X 2022 IAH7", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x17aa, 0x386e, "Yoga Pro 7 14ARP8", ALC285_FIXUP_SPEAKER2_TO_DAC1),
++ HDA_CODEC_QUIRK(0x17aa, 0x386f, "Legion Pro 7 16ARX8H", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x17aa, 0x386f, "Legion Pro 7i 16IAX7", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x3870, "Lenovo Yoga 7 14ARB7", ALC287_FIXUP_YOGA7_14ARB7_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3877, "Lenovo Legion 7 Slim 16ARHA7", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x3878, "Lenovo Legion 7 Slim 16ARHA7", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -11096,7 +11022,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk alc269_fixup_vendor_tbl[] = {
++static const struct hda_quirk alc269_fixup_vendor_tbl[] = {
+ SND_PCI_QUIRK_VENDOR(0x1025, "Acer Aspire", ALC271_FIXUP_DMIC),
+ SND_PCI_QUIRK_VENDOR(0x103c, "HP", ALC269_FIXUP_HP_MUTE_LED),
+ SND_PCI_QUIRK_VENDOR(0x104d, "Sony VAIO", ALC269_FIXUP_SONY_VAIO),
+@@ -12032,7 +11958,7 @@ static const struct hda_fixup alc861_fixups[] = {
+ }
+ };
+
+-static const struct snd_pci_quirk alc861_fixup_tbl[] = {
++static const struct hda_quirk alc861_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1253, "ASUS W7J", ALC660_FIXUP_ASUS_W7J),
+ SND_PCI_QUIRK(0x1043, 0x1263, "ASUS Z35HL", ALC660_FIXUP_ASUS_W7J),
+ SND_PCI_QUIRK(0x1043, 0x1393, "ASUS A6Rp", ALC861_FIXUP_ASUS_A6RP),
+@@ -12136,7 +12062,7 @@ static const struct hda_fixup alc861vd_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc861vd_fixup_tbl[] = {
++static const struct hda_quirk alc861vd_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x30bf, "HP TX1000", ALC861VD_FIX_DALLAS),
+ SND_PCI_QUIRK(0x1043, 0x1339, "ASUS A7-K", ALC660VD_FIX_ASUS_GPIO1),
+ SND_PCI_QUIRK(0x1179, 0xff31, "Toshiba L30-149", ALC861VD_FIX_DALLAS),
+@@ -12937,7 +12863,7 @@ static const struct hda_fixup alc662_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk alc662_fixup_tbl[] = {
++static const struct hda_quirk alc662_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1019, 0x9087, "ECS", ALC662_FIXUP_ASUS_MODE2),
+ SND_PCI_QUIRK(0x1019, 0x9859, "JP-IK LEAP W502", ALC897_FIXUP_HEADSET_MIC_PIN3),
+ SND_PCI_QUIRK(0x1025, 0x022f, "Acer Aspire One", ALC662_FIXUP_INV_DMIC),
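The paired entries above (17aa:3802, 17aa:3820, 17aa:386e, 17aa:386f) replace the removed alc287_fixup_lenovo_* dispatch helpers: the table itself now disambiguates, with an HDA_CODEC_QUIRK entry keyed on the codec SSID listed ahead of the SND_PCI_QUIRK entry keyed on the PCI SSID, and the first hit winning. A minimal sketch of that first-match lookup — struct and field names here are illustrative assumptions, not the kernel's actual hda_quirk definition:

#include <stdbool.h>

/* Sketch: first-match scan over a quirk table whose entries can be
 * keyed on either the PCI subsystem ID or the codec's own subsystem
 * ID.  Names are illustrative only. */
struct quirk_entry {
	unsigned short subvendor;
	unsigned short subdevice;
	bool match_codec_ssid;	/* true: compare codec SSID, not PCI SSID */
	int fixup_id;		/* value to apply on a hit */
};

static int lookup_fixup(const struct quirk_entry *tbl,
			unsigned int pci_ssid, unsigned int codec_ssid)
{
	for (; tbl->subvendor; tbl++) {
		unsigned int id = tbl->match_codec_ssid ? codec_ssid : pci_ssid;

		if (id == ((unsigned int)tbl->subvendor << 16 | tbl->subdevice))
			return tbl->fixup_id;	/* first match wins */
	}
	return -1;	/* no quirk for this machine */
}

Because the scan stops at the first hit, each HDA_CODEC_QUIRK line must precede the SND_PCI_QUIRK line that shares its ID, exactly as the hunks order them.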
+diff --git a/sound/pci/hda/patch_sigmatel.c b/sound/pci/hda/patch_sigmatel.c
+index ae1a34c68c6161..bde6b737385831 100644
+--- a/sound/pci/hda/patch_sigmatel.c
++++ b/sound/pci/hda/patch_sigmatel.c
+@@ -1462,7 +1462,7 @@ static const struct hda_model_fixup stac9200_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac9200_fixup_tbl[] = {
++static const struct hda_quirk stac9200_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_REF),
+@@ -1683,7 +1683,7 @@ static const struct hda_model_fixup stac925x_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac925x_fixup_tbl[] = {
++static const struct hda_quirk stac925x_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668, "DFI LanParty", STAC_REF),
+ SND_PCI_QUIRK(PCI_VENDOR_ID_DFI, 0x3101, "DFI LanParty", STAC_REF),
+@@ -1957,7 +1957,7 @@ static const struct hda_model_fixup stac92hd73xx_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac92hd73xx_fixup_tbl[] = {
++static const struct hda_quirk stac92hd73xx_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_92HD73XX_REF),
+@@ -2753,7 +2753,7 @@ static const struct hda_model_fixup stac92hd83xxx_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac92hd83xxx_fixup_tbl[] = {
++static const struct hda_quirk stac92hd83xxx_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_92HD83XXX_REF),
+@@ -3236,7 +3236,7 @@ static const struct hda_model_fixup stac92hd71bxx_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac92hd71bxx_fixup_tbl[] = {
++static const struct hda_quirk stac92hd71bxx_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_92HD71BXX_REF),
+@@ -3496,7 +3496,7 @@ static const struct hda_pintbl ecs202_pin_configs[] = {
+ };
+
+ /* codec SSIDs for Intel Mac sharing the same PCI SSID 8384:7680 */
+-static const struct snd_pci_quirk stac922x_intel_mac_fixup_tbl[] = {
++static const struct hda_quirk stac922x_intel_mac_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x0000, 0x0100, "Mac Mini", STAC_INTEL_MAC_V3),
+ SND_PCI_QUIRK(0x106b, 0x0800, "Mac", STAC_INTEL_MAC_V1),
+ SND_PCI_QUIRK(0x106b, 0x0600, "Mac", STAC_INTEL_MAC_V2),
+@@ -3640,7 +3640,7 @@ static const struct hda_model_fixup stac922x_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac922x_fixup_tbl[] = {
++static const struct hda_quirk stac922x_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_D945_REF),
+@@ -3968,7 +3968,7 @@ static const struct hda_model_fixup stac927x_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac927x_fixup_tbl[] = {
++static const struct hda_quirk stac927x_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_D965_REF),
+@@ -4178,7 +4178,7 @@ static const struct hda_model_fixup stac9205_models[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk stac9205_fixup_tbl[] = {
++static const struct hda_quirk stac9205_fixup_tbl[] = {
+ /* SigmaTel reference board */
+ SND_PCI_QUIRK(PCI_VENDOR_ID_INTEL, 0x2668,
+ "DFI LanParty", STAC_9205_REF),
+@@ -4255,7 +4255,7 @@ static const struct hda_fixup stac92hd95_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk stac92hd95_fixup_tbl[] = {
++static const struct hda_quirk stac92hd95_fixup_tbl[] = {
+ SND_PCI_QUIRK(PCI_VENDOR_ID_HP, 0x1911, "HP Spectre 13", STAC_92HD95_HP_BASS),
+ {} /* terminator */
+ };
+@@ -5002,7 +5002,7 @@ static const struct hda_fixup stac9872_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk stac9872_fixup_tbl[] = {
++static const struct hda_quirk stac9872_fixup_tbl[] = {
+ SND_PCI_QUIRK_MASK(0x104d, 0xfff0, 0x81e0,
+ "Sony VAIO F/S", STAC_9872_VAIO),
+ {} /* terminator */
+diff --git a/sound/pci/hda/patch_via.c b/sound/pci/hda/patch_via.c
+index a8ef4bb70dd057..d0893059b1b9b7 100644
+--- a/sound/pci/hda/patch_via.c
++++ b/sound/pci/hda/patch_via.c
+@@ -1035,7 +1035,7 @@ static const struct hda_fixup via_fixups[] = {
+ },
+ };
+
+-static const struct snd_pci_quirk vt2002p_fixups[] = {
++static const struct hda_quirk vt2002p_fixups[] = {
+ SND_PCI_QUIRK(0x1043, 0x13f7, "Asus B23E", VIA_FIXUP_POWER_SAVE),
+ SND_PCI_QUIRK(0x1043, 0x1487, "Asus G75", VIA_FIXUP_ASUS_G75),
+ SND_PCI_QUIRK(0x1043, 0x8532, "Asus X202E", VIA_FIXUP_INTMIC_BOOST),
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index 5153a68d8c0795..e38c5885dadfbc 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -220,6 +220,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "21J6"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "21M1"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+@@ -416,6 +423,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "Xiaomi Book Pro 14 2022"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "TIMI"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Redmi G 2022"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+diff --git a/sound/soc/codecs/hdmi-codec.c b/sound/soc/codecs/hdmi-codec.c
+index 74caae52e1273f..d9df29a26f4f21 100644
+--- a/sound/soc/codecs/hdmi-codec.c
++++ b/sound/soc/codecs/hdmi-codec.c
+@@ -185,84 +185,97 @@ static const struct snd_pcm_chmap_elem hdmi_codec_8ch_chmaps[] = {
+ /*
+ * hdmi_codec_channel_alloc: speaker configuration available for CEA
+ *
+- * This is an ordered list that must match with hdmi_codec_8ch_chmaps struct
++ * This is an ordered list where ca_id must exist in hdmi_codec_8ch_chmaps
+ * The preceding ones have better chances to be selected by
+ * hdmi_codec_get_ch_alloc_table_idx().
+ */
+ static const struct hdmi_codec_cea_spk_alloc hdmi_codec_channel_alloc[] = {
+ { .ca_id = 0x00, .n_ch = 2,
+- .mask = FL | FR},
+- /* 2.1 */
+- { .ca_id = 0x01, .n_ch = 4,
+- .mask = FL | FR | LFE},
+- /* Dolby Surround */
++ .mask = FL | FR },
++ { .ca_id = 0x03, .n_ch = 4,
++ .mask = FL | FR | LFE | FC },
+ { .ca_id = 0x02, .n_ch = 4,
+ .mask = FL | FR | FC },
+- /* surround51 */
++ { .ca_id = 0x01, .n_ch = 4,
++ .mask = FL | FR | LFE },
+ { .ca_id = 0x0b, .n_ch = 6,
+- .mask = FL | FR | LFE | FC | RL | RR},
+- /* surround40 */
+- { .ca_id = 0x08, .n_ch = 6,
+- .mask = FL | FR | RL | RR },
+- /* surround41 */
+- { .ca_id = 0x09, .n_ch = 6,
+- .mask = FL | FR | LFE | RL | RR },
+- /* surround50 */
++ .mask = FL | FR | LFE | FC | RL | RR },
+ { .ca_id = 0x0a, .n_ch = 6,
+ .mask = FL | FR | FC | RL | RR },
+- /* 6.1 */
+- { .ca_id = 0x0f, .n_ch = 8,
+- .mask = FL | FR | LFE | FC | RL | RR | RC },
+- /* surround71 */
++ { .ca_id = 0x09, .n_ch = 6,
++ .mask = FL | FR | LFE | RL | RR },
++ { .ca_id = 0x08, .n_ch = 6,
++ .mask = FL | FR | RL | RR },
++ { .ca_id = 0x07, .n_ch = 6,
++ .mask = FL | FR | LFE | FC | RC },
++ { .ca_id = 0x06, .n_ch = 6,
++ .mask = FL | FR | FC | RC },
++ { .ca_id = 0x05, .n_ch = 6,
++ .mask = FL | FR | LFE | RC },
++ { .ca_id = 0x04, .n_ch = 6,
++ .mask = FL | FR | RC },
+ { .ca_id = 0x13, .n_ch = 8,
+ .mask = FL | FR | LFE | FC | RL | RR | RLC | RRC },
+- /* others */
+- { .ca_id = 0x03, .n_ch = 8,
+- .mask = FL | FR | LFE | FC },
+- { .ca_id = 0x04, .n_ch = 8,
+- .mask = FL | FR | RC},
+- { .ca_id = 0x05, .n_ch = 8,
+- .mask = FL | FR | LFE | RC },
+- { .ca_id = 0x06, .n_ch = 8,
+- .mask = FL | FR | FC | RC },
+- { .ca_id = 0x07, .n_ch = 8,
+- .mask = FL | FR | LFE | FC | RC },
+- { .ca_id = 0x0c, .n_ch = 8,
+- .mask = FL | FR | RC | RL | RR },
+- { .ca_id = 0x0d, .n_ch = 8,
+- .mask = FL | FR | LFE | RL | RR | RC },
+- { .ca_id = 0x0e, .n_ch = 8,
+- .mask = FL | FR | FC | RL | RR | RC },
+- { .ca_id = 0x10, .n_ch = 8,
+- .mask = FL | FR | RL | RR | RLC | RRC },
+- { .ca_id = 0x11, .n_ch = 8,
+- .mask = FL | FR | LFE | RL | RR | RLC | RRC },
++ { .ca_id = 0x1f, .n_ch = 8,
++ .mask = FL | FR | LFE | FC | RL | RR | FLC | FRC },
+ { .ca_id = 0x12, .n_ch = 8,
+ .mask = FL | FR | FC | RL | RR | RLC | RRC },
+- { .ca_id = 0x14, .n_ch = 8,
+- .mask = FL | FR | FLC | FRC },
+- { .ca_id = 0x15, .n_ch = 8,
+- .mask = FL | FR | LFE | FLC | FRC },
+- { .ca_id = 0x16, .n_ch = 8,
+- .mask = FL | FR | FC | FLC | FRC },
+- { .ca_id = 0x17, .n_ch = 8,
+- .mask = FL | FR | LFE | FC | FLC | FRC },
+- { .ca_id = 0x18, .n_ch = 8,
+- .mask = FL | FR | RC | FLC | FRC },
+- { .ca_id = 0x19, .n_ch = 8,
+- .mask = FL | FR | LFE | RC | FLC | FRC },
+- { .ca_id = 0x1a, .n_ch = 8,
+- .mask = FL | FR | RC | FC | FLC | FRC },
+- { .ca_id = 0x1b, .n_ch = 8,
+- .mask = FL | FR | LFE | RC | FC | FLC | FRC },
+- { .ca_id = 0x1c, .n_ch = 8,
+- .mask = FL | FR | RL | RR | FLC | FRC },
+- { .ca_id = 0x1d, .n_ch = 8,
+- .mask = FL | FR | LFE | RL | RR | FLC | FRC },
+ { .ca_id = 0x1e, .n_ch = 8,
+ .mask = FL | FR | FC | RL | RR | FLC | FRC },
+- { .ca_id = 0x1f, .n_ch = 8,
+- .mask = FL | FR | LFE | FC | RL | RR | FLC | FRC },
++ { .ca_id = 0x11, .n_ch = 8,
++ .mask = FL | FR | LFE | RL | RR | RLC | RRC },
++ { .ca_id = 0x1d, .n_ch = 8,
++ .mask = FL | FR | LFE | RL | RR | FLC | FRC },
++ { .ca_id = 0x10, .n_ch = 8,
++ .mask = FL | FR | RL | RR | RLC | RRC },
++ { .ca_id = 0x1c, .n_ch = 8,
++ .mask = FL | FR | RL | RR | FLC | FRC },
++ { .ca_id = 0x0f, .n_ch = 8,
++ .mask = FL | FR | LFE | FC | RL | RR | RC },
++ { .ca_id = 0x1b, .n_ch = 8,
++ .mask = FL | FR | LFE | RC | FC | FLC | FRC },
++ { .ca_id = 0x0e, .n_ch = 8,
++ .mask = FL | FR | FC | RL | RR | RC },
++ { .ca_id = 0x1a, .n_ch = 8,
++ .mask = FL | FR | RC | FC | FLC | FRC },
++ { .ca_id = 0x0d, .n_ch = 8,
++ .mask = FL | FR | LFE | RL | RR | RC },
++ { .ca_id = 0x19, .n_ch = 8,
++ .mask = FL | FR | LFE | RC | FLC | FRC },
++ { .ca_id = 0x0c, .n_ch = 8,
++ .mask = FL | FR | RC | RL | RR },
++ { .ca_id = 0x18, .n_ch = 8,
++ .mask = FL | FR | RC | FLC | FRC },
++ { .ca_id = 0x17, .n_ch = 8,
++ .mask = FL | FR | LFE | FC | FLC | FRC },
++ { .ca_id = 0x16, .n_ch = 8,
++ .mask = FL | FR | FC | FLC | FRC },
++ { .ca_id = 0x15, .n_ch = 8,
++ .mask = FL | FR | LFE | FLC | FRC },
++ { .ca_id = 0x14, .n_ch = 8,
++ .mask = FL | FR | FLC | FRC },
++ { .ca_id = 0x0b, .n_ch = 8,
++ .mask = FL | FR | LFE | FC | RL | RR },
++ { .ca_id = 0x0a, .n_ch = 8,
++ .mask = FL | FR | FC | RL | RR },
++ { .ca_id = 0x09, .n_ch = 8,
++ .mask = FL | FR | LFE | RL | RR },
++ { .ca_id = 0x08, .n_ch = 8,
++ .mask = FL | FR | RL | RR },
++ { .ca_id = 0x07, .n_ch = 8,
++ .mask = FL | FR | LFE | FC | RC },
++ { .ca_id = 0x06, .n_ch = 8,
++ .mask = FL | FR | FC | RC },
++ { .ca_id = 0x05, .n_ch = 8,
++ .mask = FL | FR | LFE | RC },
++ { .ca_id = 0x04, .n_ch = 8,
++ .mask = FL | FR | RC },
++ { .ca_id = 0x03, .n_ch = 8,
++ .mask = FL | FR | LFE | FC },
++ { .ca_id = 0x02, .n_ch = 8,
++ .mask = FL | FR | FC },
++ { .ca_id = 0x01, .n_ch = 8,
++ .mask = FL | FR | LFE },
+ };
+
+ struct hdmi_codec_priv {
+@@ -371,7 +384,8 @@ static int hdmi_codec_chmap_ctl_get(struct snd_kcontrol *kcontrol,
+ struct snd_pcm_chmap *info = snd_kcontrol_chip(kcontrol);
+ struct hdmi_codec_priv *hcp = info->private_data;
+
+- map = info->chmap[hcp->chmap_idx].map;
++ if (hcp->chmap_idx != HDMI_CODEC_CHMAP_IDX_UNKNOWN)
++ map = info->chmap[hcp->chmap_idx].map;
+
+ for (i = 0; i < info->max_channels; i++) {
+ if (hcp->chmap_idx == HDMI_CODEC_CHMAP_IDX_UNKNOWN)
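The reordering of hdmi_codec_channel_alloc[] is behavioural because the table is scanned front to back and the first fitting entry is used; for a given channel count the fuller speaker layouts now take precedence, and the chmap_ctl_get() hunk additionally stops indexing chmap[] with HDMI_CODEC_CHMAP_IDX_UNKNOWN. A rough model of the first-fit selection — simplified types, a sketch rather than the driver's exact loop:

#include <stddef.h>

struct spk_alloc { int ca_id; int n_ch; unsigned long mask; };

/* Return the CA id of the first entry matching the channel count whose
 * required speakers are all present in the sink's speaker mask. */
static int pick_ca_id(const struct spk_alloc *tbl, size_t n,
		      int channels, unsigned long spk_mask)
{
	size_t i;

	for (i = 0; i < n; i++) {
		if (tbl[i].n_ch != channels)
			continue;
		if ((tbl[i].mask & spk_mask) == tbl[i].mask)
			return tbl[i].ca_id;	/* earlier entries preferred */
	}
	return -1;	/* nothing fits; caller falls back */
}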
+diff --git a/sound/soc/intel/avs/pcm.c b/sound/soc/intel/avs/pcm.c
+index 4af81158035681..945f9c0a6a5455 100644
+--- a/sound/soc/intel/avs/pcm.c
++++ b/sound/soc/intel/avs/pcm.c
+@@ -509,7 +509,7 @@ static int avs_pcm_hw_constraints_init(struct snd_pcm_substream *substream)
+ SNDRV_PCM_HW_PARAM_FORMAT, SNDRV_PCM_HW_PARAM_CHANNELS,
+ SNDRV_PCM_HW_PARAM_RATE, -1);
+
+- return ret;
++ return 0;
+ }
+
+ static int avs_dai_fe_startup(struct snd_pcm_substream *substream, struct snd_soc_dai *dai)
+diff --git a/sound/soc/intel/boards/sof_rt5682.c b/sound/soc/intel/boards/sof_rt5682.c
+index bc581fea0e3a16..866589fece7a3d 100644
+--- a/sound/soc/intel/boards/sof_rt5682.c
++++ b/sound/soc/intel/boards/sof_rt5682.c
+@@ -870,6 +870,13 @@ static const struct platform_device_id board_ids[] = {
+ SOF_SSP_PORT_BT_OFFLOAD(2) |
+ SOF_BT_OFFLOAD_PRESENT),
+ },
++ {
++ .name = "mtl_rt5682_c1_h02",
++ .driver_data = (kernel_ulong_t)(SOF_RT5682_MCLK_EN |
++ SOF_SSP_PORT_CODEC(1) |
++ /* SSP 0 and SSP 2 are used for HDMI IN */
++ SOF_SSP_MASK_HDMI_CAPTURE(0x5)),
++ },
+ {
+ .name = "arl_rt5682_c1_h02",
+ .driver_data = (kernel_ulong_t)(SOF_RT5682_MCLK_EN |
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 4a0ab50d1e50dc..a58842a8c8a641 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -580,6 +580,47 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ },
+ .driver_data = (void *)(SOC_SDW_CODEC_SPKR),
+ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "3838")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "3832")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "380E")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233C")
++ },
++ /* Note this quirk excludes the CODEC mic */
++ .driver_data = (void *)(SOC_SDW_CODEC_MIC),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233B")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ },
+
+ /* ArrowLake devices */
+ {
+diff --git a/sound/soc/intel/common/soc-acpi-intel-arl-match.c b/sound/soc/intel/common/soc-acpi-intel-arl-match.c
+index 072b8486d0727c..24d850df77ca8e 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-arl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-arl-match.c
+@@ -44,6 +44,31 @@ static const struct snd_soc_acpi_endpoint spk_3_endpoint = {
+ .group_id = 1,
+ };
+
++/*
++ * RT722 is a multi-function codec, three endpoints are created for
++ * its headset, amp and dmic functions.
++ */
++static const struct snd_soc_acpi_endpoint rt722_endpoints[] = {
++ {
++ .num = 0,
++ .aggregated = 0,
++ .group_position = 0,
++ .group_id = 0,
++ },
++ {
++ .num = 1,
++ .aggregated = 0,
++ .group_position = 0,
++ .group_id = 0,
++ },
++ {
++ .num = 2,
++ .aggregated = 0,
++ .group_position = 0,
++ .group_id = 0,
++ },
++};
++
+ static const struct snd_soc_acpi_adr_device cs35l56_2_lr_adr[] = {
+ {
+ .adr = 0x00023001FA355601ull,
+@@ -185,6 +210,24 @@ static const struct snd_soc_acpi_adr_device rt711_sdca_0_adr[] = {
+ }
+ };
+
++static const struct snd_soc_acpi_adr_device rt722_0_single_adr[] = {
++ {
++ .adr = 0x000030025D072201ull,
++ .num_endpoints = ARRAY_SIZE(rt722_endpoints),
++ .endpoints = rt722_endpoints,
++ .name_prefix = "rt722"
++ }
++};
++
++static const struct snd_soc_acpi_adr_device rt1320_2_single_adr[] = {
++ {
++ .adr = 0x000230025D132001ull,
++ .num_endpoints = 1,
++ .endpoints = &single_endpoint,
++ .name_prefix = "rt1320-1"
++ }
++};
++
+ static const struct snd_soc_acpi_link_adr arl_cs42l43_l0[] = {
+ {
+ .mask = BIT(0),
+@@ -287,6 +330,20 @@ static const struct snd_soc_acpi_link_adr arl_sdca_rvp[] = {
+ {}
+ };
+
++static const struct snd_soc_acpi_link_adr arl_rt722_l0_rt1320_l2[] = {
++ {
++ .mask = BIT(0),
++ .num_adr = ARRAY_SIZE(rt722_0_single_adr),
++ .adr_d = rt722_0_single_adr,
++ },
++ {
++ .mask = BIT(2),
++ .num_adr = ARRAY_SIZE(rt1320_2_single_adr),
++ .adr_d = rt1320_2_single_adr,
++ },
++ {}
++};
++
+ static const struct snd_soc_acpi_codecs arl_essx_83x6 = {
+ .num_codecs = 3,
+ .codecs = { "ESSX8316", "ESSX8326", "ESSX8336"},
+@@ -385,6 +442,12 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_arl_sdw_machines[] = {
+ .drv_name = "sof_sdw",
+ .sof_tplg_filename = "sof-arl-rt711-l0.tplg",
+ },
++ {
++ .link_mask = BIT(0) | BIT(2),
++ .links = arl_rt722_l0_rt1320_l2,
++ .drv_name = "sof_sdw",
++ .sof_tplg_filename = "sof-arl-rt722-l0_rt1320-l2.tplg",
++ },
+ {},
+ };
+ EXPORT_SYMBOL_GPL(snd_soc_acpi_intel_arl_sdw_machines);
+diff --git a/sound/soc/intel/common/soc-acpi-intel-mtl-match.c b/sound/soc/intel/common/soc-acpi-intel-mtl-match.c
+index d4435a34a3a3f4..fd02c864e25ef9 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-mtl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-mtl-match.c
+@@ -42,6 +42,13 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_mtl_machines[] = {
+ SND_SOC_ACPI_TPLG_INTEL_SSP_MSB |
+ SND_SOC_ACPI_TPLG_INTEL_DMIC_NUMBER,
+ },
++ {
++ .comp_ids = &mtl_rt5682_rt5682s_hp,
++ .drv_name = "mtl_rt5682_c1_h02",
++ .machine_quirk = snd_soc_acpi_codec_list,
++ .quirk_data = &mtl_lt6911_hdmi,
++ .sof_tplg_filename = "sof-mtl-rt5682-ssp1-hdmi-ssp02.tplg",
++ },
+ /* place boards for each headphone codec: sof driver will complete the
+ * tplg name and machine driver will detect the amp type
+ */
+diff --git a/sound/soc/mediatek/mt8188/mt8188-mt6359.c b/sound/soc/mediatek/mt8188/mt8188-mt6359.c
+index 4eed90d13a5326..62429e8e57b559 100644
+--- a/sound/soc/mediatek/mt8188/mt8188-mt6359.c
++++ b/sound/soc/mediatek/mt8188/mt8188-mt6359.c
+@@ -188,9 +188,7 @@ SND_SOC_DAILINK_DEFS(pcm1,
+ SND_SOC_DAILINK_DEFS(ul_src,
+ DAILINK_COMP_ARRAY(COMP_CPU("UL_SRC")),
+ DAILINK_COMP_ARRAY(COMP_CODEC("mt6359-sound",
+- "mt6359-snd-codec-aif1"),
+- COMP_CODEC("dmic-codec",
+- "dmic-hifi")),
++ "mt6359-snd-codec-aif1")),
+ DAILINK_COMP_ARRAY(COMP_EMPTY()));
+
+ SND_SOC_DAILINK_DEFS(AFE_SOF_DL2,
+diff --git a/sound/soc/sdw_utils/soc_sdw_utils.c b/sound/soc/sdw_utils/soc_sdw_utils.c
+index a6070f822eb9e4..e6ac5c0fd3bec8 100644
+--- a/sound/soc/sdw_utils/soc_sdw_utils.c
++++ b/sound/soc/sdw_utils/soc_sdw_utils.c
+@@ -363,6 +363,8 @@ struct asoc_sdw_codec_info codec_info_list[] = {
+ .num_controls = ARRAY_SIZE(generic_spk_controls),
+ .widgets = generic_spk_widgets,
+ .num_widgets = ARRAY_SIZE(generic_spk_widgets),
++ .quirk = SOC_SDW_CODEC_SPKR,
++ .quirk_exclude = true,
+ },
+ {
+ .direction = {false, true},
+@@ -487,6 +489,8 @@ struct asoc_sdw_codec_info codec_info_list[] = {
+ .rtd_init = asoc_sdw_cs42l43_dmic_rtd_init,
+ .widgets = generic_dmic_widgets,
+ .num_widgets = ARRAY_SIZE(generic_dmic_widgets),
++ .quirk = SOC_SDW_CODEC_MIC,
++ .quirk_exclude = true,
+ },
+ {
+ .direction = {false, true},
+@@ -1112,7 +1116,8 @@ int asoc_sdw_parse_sdw_endpoints(struct snd_soc_card *card,
+ dai_info = &codec_info->dais[adr_end->num];
+ soc_dai = asoc_sdw_find_dailink(soc_dais, adr_end);
+
+- if (dai_info->quirk && !(dai_info->quirk & ctx->mc_quirk))
++ if (dai_info->quirk &&
++ !(dai_info->quirk_exclude ^ !!(dai_info->quirk & ctx->mc_quirk)))
+ continue;
+
+ dev_dbg(dev,
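The quirk_exclude flag added above inverts the sense of a DAI's quirk bits: a plain entry is kept only when its bit is set in mc_quirk, an exclude entry only when it is clear, which is what the !(quirk_exclude ^ !!(quirk & mc_quirk)) skip test encodes. A standalone truth-table demo of that gating (simplified types; the quirk bit value below is illustrative):

#include <stdbool.h>
#include <stdio.h>

/* Model of the gating above: an entry with quirk bits is skipped when
 * !(quirk_exclude ^ matches), i.e. a normal entry needs its bit set in
 * the machine quirk mask, an exclude entry needs it clear. */
static bool dai_is_created(unsigned long dai_quirk, bool quirk_exclude,
			   unsigned long mc_quirk)
{
	bool matches = !!(dai_quirk & mc_quirk);

	if (dai_quirk && !(quirk_exclude ^ matches))
		return false;	/* mirrors the 'continue' in the hunk */
	return true;
}

int main(void)
{
	const unsigned long QUIRK_SPKR = 1UL << 5;	/* illustrative bit */

	/* exclude-entry: created only when the board does NOT set the bit */
	printf("%d\n", dai_is_created(QUIRK_SPKR, true, 0));           /* 1 */
	printf("%d\n", dai_is_created(QUIRK_SPKR, true, QUIRK_SPKR));  /* 0 */
	/* include-entry: created only when the board sets the bit */
	printf("%d\n", dai_is_created(QUIRK_SPKR, false, QUIRK_SPKR)); /* 1 */
	printf("%d\n", dai_is_created(QUIRK_SPKR, false, 0));          /* 0 */
	return 0;
}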
+diff --git a/sound/soc/sof/ipc3-topology.c b/sound/soc/sof/ipc3-topology.c
+index be61e377e59e03..e98b53b67d12b9 100644
+--- a/sound/soc/sof/ipc3-topology.c
++++ b/sound/soc/sof/ipc3-topology.c
+@@ -20,6 +20,9 @@
+ /* size of tplg ABI in bytes */
+ #define SOF_IPC3_TPLG_ABI_SIZE 3
+
++/* Base of SOF_DAI_INTEL_ALH, this should be aligned with SOC_SDW_INTEL_BIDIR_PDI_BASE */
++#define INTEL_ALH_DAI_INDEX_BASE 2
++
+ struct sof_widget_data {
+ int ctrl_type;
+ int ipc_cmd;
+@@ -1585,14 +1588,26 @@ static int sof_ipc3_widget_setup_comp_dai(struct snd_sof_widget *swidget)
+ ret = sof_update_ipc_object(scomp, comp_dai, SOF_DAI_TOKENS, swidget->tuples,
+ swidget->num_tuples, sizeof(*comp_dai), 1);
+ if (ret < 0)
+- goto free;
++ goto free_comp;
+
+ /* update comp_tokens */
+ ret = sof_update_ipc_object(scomp, &comp_dai->config, SOF_COMP_TOKENS,
+ swidget->tuples, swidget->num_tuples,
+ sizeof(comp_dai->config), 1);
+ if (ret < 0)
+- goto free;
++ goto free_comp;
++
++ /* Subtract the base to match the FW dai index. */
++ if (comp_dai->type == SOF_DAI_INTEL_ALH) {
++ if (comp_dai->dai_index < INTEL_ALH_DAI_INDEX_BASE) {
++ dev_err(sdev->dev,
++ "Invalid ALH dai index %d, only Pin numbers >= %d can be used\n",
++ comp_dai->dai_index, INTEL_ALH_DAI_INDEX_BASE);
++ ret = -EINVAL;
++ goto free_comp;
++ }
++ comp_dai->dai_index -= INTEL_ALH_DAI_INDEX_BASE;
++ }
+
+ dev_dbg(scomp->dev, "dai %s: type %d index %d\n",
+ swidget->widget->name, comp_dai->type, comp_dai->dai_index);
+@@ -2167,8 +2182,16 @@ static int sof_ipc3_dai_config(struct snd_sof_dev *sdev, struct snd_sof_widget *
+ case SOF_DAI_INTEL_ALH:
+ if (data) {
+ /* save the dai_index during hw_params and reuse it for hw_free */
+- if (flags & SOF_DAI_CONFIG_FLAGS_HW_PARAMS)
+- config->dai_index = data->dai_index;
++ if (flags & SOF_DAI_CONFIG_FLAGS_HW_PARAMS) {
++ /* Subtract the base to match the FW dai index. */
++ if (data->dai_index < INTEL_ALH_DAI_INDEX_BASE) {
++ dev_err(sdev->dev,
++ "Invalid ALH dai index %d, only Pin numbers >= %d can be used\n",
++ config->dai_index, INTEL_ALH_DAI_INDEX_BASE);
++ return -EINVAL;
++ }
++ config->dai_index = data->dai_index - INTEL_ALH_DAI_INDEX_BASE;
++ }
+ config->alh.stream_id = data->dai_data;
+ }
+ break;
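Both ipc3-topology.c hunks apply one translation: topology pins for SOF_DAI_INTEL_ALH start at INTEL_ALH_DAI_INDEX_BASE (2, mirroring SOC_SDW_INTEL_BIDIR_PDI_BASE), while the firmware expects a zero-based index, so the base is validated and subtracted once for the comp_dai and once more on hw_params. The check in isolation:

#include <stdio.h>

#define INTEL_ALH_DAI_INDEX_BASE 2	/* aligned with SOC_SDW_INTEL_BIDIR_PDI_BASE */

/* Translate a topology ALH pin number to the FW dai index, or -1 for a
 * pin below the base (which the driver rejects with -EINVAL). */
static int alh_fw_dai_index(int tplg_dai_index)
{
	if (tplg_dai_index < INTEL_ALH_DAI_INDEX_BASE)
		return -1;
	return tplg_dai_index - INTEL_ALH_DAI_INDEX_BASE;
}

int main(void)
{
	printf("%d\n", alh_fw_dai_index(2));	/* 0: first usable pin */
	printf("%d\n", alh_fw_dai_index(1));	/* -1: invalid, below base */
	return 0;
}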
+diff --git a/sound/usb/endpoint.c b/sound/usb/endpoint.c
+index 568099467dbbcc..a29f28eb7d0c64 100644
+--- a/sound/usb/endpoint.c
++++ b/sound/usb/endpoint.c
+@@ -403,10 +403,15 @@ static int prepare_inbound_urb(struct snd_usb_endpoint *ep,
+ static void notify_xrun(struct snd_usb_endpoint *ep)
+ {
+ struct snd_usb_substream *data_subs;
++ struct snd_pcm_substream *psubs;
+
+ data_subs = READ_ONCE(ep->data_subs);
+- if (data_subs && data_subs->pcm_substream)
+- snd_pcm_stop_xrun(data_subs->pcm_substream);
++ if (!data_subs)
++ return;
++ psubs = data_subs->pcm_substream;
++ if (psubs && psubs->runtime &&
++ psubs->runtime->state == SNDRV_PCM_STATE_RUNNING)
++ snd_pcm_stop_xrun(psubs);
+ }
+
+ static struct snd_usb_packet_info *
+@@ -562,7 +567,10 @@ static void snd_complete_urb(struct urb *urb)
+ push_back_to_ready_list(ep, ctx);
+ clear_bit(ctx->index, &ep->active_mask);
+ snd_usb_queue_pending_output_urbs(ep, false);
+- atomic_dec(&ep->submitted_urbs); /* decrement at last */
++ /* decrement at last, and check xrun */
++ if (atomic_dec_and_test(&ep->submitted_urbs) &&
++ !snd_usb_endpoint_implicit_feedback_sink(ep))
++ notify_xrun(ep);
+ return;
+ }
+
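The endpoint.c change reports the xrun from the completion that drops the last in-flight URB: atomic_dec_and_test() is true only for the decrement that reaches zero, so exactly one completion path calls notify_xrun(), and notify_xrun() itself now confirms the PCM is still RUNNING first. The counter idiom, modelled with C11 atomics as a userspace stand-in for the kernel primitive:

#include <stdatomic.h>
#include <stdbool.h>

/* Stand-in for the kernel's atomic_dec_and_test(): decrement and report
 * whether this caller's decrement took the counter to zero. */
static bool dec_and_test(atomic_int *v)
{
	return atomic_fetch_sub(v, 1) == 1;
}

/* Completion path: only the final outstanding URB triggers the check. */
static void complete_one(atomic_int *submitted_urbs, bool implicit_fb_sink,
			 void (*notify_xrun)(void))
{
	if (dec_and_test(submitted_urbs) && !implicit_fb_sink)
		notify_xrun();	/* no URBs left in flight: report xrun */
}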
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index bd67027c767751..0591da2839269b 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1084,6 +1084,21 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval,
+ struct snd_kcontrol *kctl)
+ {
+ struct snd_usb_audio *chip = cval->head.mixer->chip;
++
++ if (chip->quirk_flags & QUIRK_FLAG_MIC_RES_384) {
++ if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
++ usb_audio_info(chip,
++ "set resolution quirk: cval->res = 384\n");
++ cval->res = 384;
++ }
++ } else if (chip->quirk_flags & QUIRK_FLAG_MIC_RES_16) {
++ if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
++ usb_audio_info(chip,
++ "set resolution quirk: cval->res = 16\n");
++ cval->res = 16;
++ }
++ }
++
+ switch (chip->usb_id) {
+ case USB_ID(0x0763, 0x2030): /* M-Audio Fast Track C400 */
+ case USB_ID(0x0763, 0x2031): /* M-Audio Fast Track C600 */
+@@ -1168,27 +1183,6 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval,
+ }
+ break;
+
+- case USB_ID(0x046d, 0x0807): /* Logitech Webcam C500 */
+- case USB_ID(0x046d, 0x0808):
+- case USB_ID(0x046d, 0x0809):
+- case USB_ID(0x046d, 0x0819): /* Logitech Webcam C210 */
+- case USB_ID(0x046d, 0x081b): /* HD Webcam c310 */
+- case USB_ID(0x046d, 0x081d): /* HD Webcam c510 */
+- case USB_ID(0x046d, 0x0825): /* HD Webcam c270 */
+- case USB_ID(0x046d, 0x0826): /* HD Webcam c525 */
+- case USB_ID(0x046d, 0x08ca): /* Logitech Quickcam Fusion */
+- case USB_ID(0x046d, 0x0991):
+- case USB_ID(0x046d, 0x09a2): /* QuickCam Communicate Deluxe/S7500 */
+- /* Most audio usb devices lie about volume resolution.
+- * Most Logitech webcams have res = 384.
+- * Probably there is some logitech magic behind this number --fishor
+- */
+- if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
+- usb_audio_info(chip,
+- "set resolution quirk: cval->res = 384\n");
+- cval->res = 384;
+- }
+- break;
+ case USB_ID(0x0495, 0x3042): /* ESS Technology Asus USB DAC */
+ if ((strstr(kctl->id.name, "Playback Volume") != NULL) ||
+ strstr(kctl->id.name, "Capture Volume") != NULL) {
+@@ -1197,28 +1191,6 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval,
+ cval->res = 1;
+ }
+ break;
+- case USB_ID(0x1224, 0x2a25): /* Jieli Technology USB PHY 2.0 */
+- if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
+- usb_audio_info(chip,
+- "set resolution quirk: cval->res = 16\n");
+- cval->res = 16;
+- }
+- break;
+- case USB_ID(0x1bcf, 0x2283): /* NexiGo N930AF FHD Webcam */
+- case USB_ID(0x03f0, 0x654a): /* HP 320 FHD Webcam */
+- if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
+- usb_audio_info(chip,
+- "set resolution quirk: cval->res = 16\n");
+- cval->res = 16;
+- }
+- break;
+- case USB_ID(0x1bcf, 0x2281): /* HD Webcam */
+- if (!strcmp(kctl->id.name, "Mic Capture Volume")) {
+- usb_audio_info(chip,
+- "set resolution quirk: cval->res = 16\n");
+- cval->res = 16;
+- }
+- break;
+ }
+ }
+
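volume_control_quirks() now tests two generic flags before the per-device switch, and the affected webcam IDs migrate to quirk_flags_table in the quirks.c hunk further below. The flag values match the usbaudio.h hunk at the end of this patch (bits 22 and 23); a standalone sketch of the override logic:

#include <stdio.h>
#include <string.h>

#define QUIRK_FLAG_MIC_RES_16	(1U << 22)
#define QUIRK_FLAG_MIC_RES_384	(1U << 23)

/* Forced resolution for "Mic Capture Volume", 0 = no override. */
static int mic_res_override(unsigned int quirk_flags, const char *ctl_name)
{
	if (strcmp(ctl_name, "Mic Capture Volume"))
		return 0;
	if (quirk_flags & QUIRK_FLAG_MIC_RES_384)
		return 384;	/* most Logitech webcams */
	if (quirk_flags & QUIRK_FLAG_MIC_RES_16)
		return 16;	/* Jieli/NexiGo/HP webcam family */
	return 0;
}

int main(void)
{
	printf("%d\n", mic_res_override(QUIRK_FLAG_MIC_RES_384,
					"Mic Capture Volume"));	/* 384 */
	printf("%d\n", mic_res_override(QUIRK_FLAG_MIC_RES_16,
					"Playback Volume"));	/* 0 */
	return 0;
}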
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 23260aa1919d32..0e9b5431a47f20 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -621,6 +621,16 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ .id = USB_ID(0x1b1c, 0x0a42),
+ .map = corsair_virtuoso_map,
+ },
++ {
++ /* Corsair HS80 RGB Wireless (wired mode) */
++ .id = USB_ID(0x1b1c, 0x0a6a),
++ .map = corsair_virtuoso_map,
++ },
++ {
++ /* Corsair HS80 RGB Wireless (wireless mode) */
++ .id = USB_ID(0x1b1c, 0x0a6b),
++ .map = corsair_virtuoso_map,
++ },
+ { /* Gigabyte TRX40 Aorus Master (rear panel + front mic) */
+ .id = USB_ID(0x0414, 0xa001),
+ .map = aorus_master_alc1220vb_map,
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index 6456e87e2f3974..a95ebcf4e46e76 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -4059,6 +4059,7 @@ int snd_usb_mixer_apply_create_quirk(struct usb_mixer_interface *mixer)
+ err = snd_bbfpro_controls_create(mixer);
+ break;
+ case USB_ID(0x2a39, 0x3f8c): /* RME Digiface USB */
++ case USB_ID(0x2a39, 0x3fa0): /* RME Digiface USB (alternate) */
+ err = snd_rme_digiface_controls_create(mixer);
+ break;
+ case USB_ID(0x2b73, 0x0017): /* Pioneer DJ DJM-250MK2 */
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 199d0603cf8e59..3f8beacca27a17 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3616,176 +3616,181 @@ YAMAHA_DEVICE(0x7010, "UB99"),
+ }
+ }
+ },
+-{
+- /* Only claim interface 0 */
+- .match_flags = USB_DEVICE_ID_MATCH_VENDOR |
+- USB_DEVICE_ID_MATCH_PRODUCT |
+- USB_DEVICE_ID_MATCH_INT_CLASS |
+- USB_DEVICE_ID_MATCH_INT_NUMBER,
+- .idVendor = 0x2a39,
+- .idProduct = 0x3f8c,
+- .bInterfaceClass = USB_CLASS_VENDOR_SPEC,
+- .bInterfaceNumber = 0,
+- QUIRK_DRIVER_INFO {
+- QUIRK_DATA_COMPOSITE {
++#define QUIRK_RME_DIGIFACE(pid) \
++{ \
++ /* Only claim interface 0 */ \
++ .match_flags = USB_DEVICE_ID_MATCH_VENDOR | \
++ USB_DEVICE_ID_MATCH_PRODUCT | \
++ USB_DEVICE_ID_MATCH_INT_CLASS | \
++ USB_DEVICE_ID_MATCH_INT_NUMBER, \
++ .idVendor = 0x2a39, \
++ .idProduct = pid, \
++ .bInterfaceClass = USB_CLASS_VENDOR_SPEC, \
++ .bInterfaceNumber = 0, \
++ QUIRK_DRIVER_INFO { \
++ QUIRK_DATA_COMPOSITE { \
+ /*
+ * Three modes depending on sample rate band,
+ * with different channel counts for in/out
+- */
+- { QUIRK_DATA_STANDARD_MIXER(0) },
+- {
+- QUIRK_DATA_AUDIOFORMAT(0) {
+- .formats = SNDRV_PCM_FMTBIT_S32_LE,
+- .channels = 34, // outputs
+- .fmt_bits = 24,
+- .iface = 0,
+- .altsetting = 1,
+- .altset_idx = 1,
+- .endpoint = 0x02,
+- .ep_idx = 1,
+- .ep_attr = USB_ENDPOINT_XFER_ISOC |
+- USB_ENDPOINT_SYNC_ASYNC,
+- .rates = SNDRV_PCM_RATE_32000 |
+- SNDRV_PCM_RATE_44100 |
+- SNDRV_PCM_RATE_48000,
+- .rate_min = 32000,
+- .rate_max = 48000,
+- .nr_rates = 3,
+- .rate_table = (unsigned int[]) {
+- 32000, 44100, 48000,
+- },
+- .sync_ep = 0x81,
+- .sync_iface = 0,
+- .sync_altsetting = 1,
+- .sync_ep_idx = 0,
+- .implicit_fb = 1,
+- },
+- },
+- {
+- QUIRK_DATA_AUDIOFORMAT(0) {
+- .formats = SNDRV_PCM_FMTBIT_S32_LE,
+- .channels = 18, // outputs
+- .fmt_bits = 24,
+- .iface = 0,
+- .altsetting = 1,
+- .altset_idx = 1,
+- .endpoint = 0x02,
+- .ep_idx = 1,
+- .ep_attr = USB_ENDPOINT_XFER_ISOC |
+- USB_ENDPOINT_SYNC_ASYNC,
+- .rates = SNDRV_PCM_RATE_64000 |
+- SNDRV_PCM_RATE_88200 |
+- SNDRV_PCM_RATE_96000,
+- .rate_min = 64000,
+- .rate_max = 96000,
+- .nr_rates = 3,
+- .rate_table = (unsigned int[]) {
+- 64000, 88200, 96000,
+- },
+- .sync_ep = 0x81,
+- .sync_iface = 0,
+- .sync_altsetting = 1,
+- .sync_ep_idx = 0,
+- .implicit_fb = 1,
+- },
+- },
+- {
+- QUIRK_DATA_AUDIOFORMAT(0) {
+- .formats = SNDRV_PCM_FMTBIT_S32_LE,
+- .channels = 10, // outputs
+- .fmt_bits = 24,
+- .iface = 0,
+- .altsetting = 1,
+- .altset_idx = 1,
+- .endpoint = 0x02,
+- .ep_idx = 1,
+- .ep_attr = USB_ENDPOINT_XFER_ISOC |
+- USB_ENDPOINT_SYNC_ASYNC,
+- .rates = SNDRV_PCM_RATE_KNOT |
+- SNDRV_PCM_RATE_176400 |
+- SNDRV_PCM_RATE_192000,
+- .rate_min = 128000,
+- .rate_max = 192000,
+- .nr_rates = 3,
+- .rate_table = (unsigned int[]) {
+- 128000, 176400, 192000,
+- },
+- .sync_ep = 0x81,
+- .sync_iface = 0,
+- .sync_altsetting = 1,
+- .sync_ep_idx = 0,
+- .implicit_fb = 1,
+- },
+- },
+- {
+- QUIRK_DATA_AUDIOFORMAT(0) {
+- .formats = SNDRV_PCM_FMTBIT_S32_LE,
+- .channels = 32, // inputs
+- .fmt_bits = 24,
+- .iface = 0,
+- .altsetting = 1,
+- .altset_idx = 1,
+- .endpoint = 0x81,
+- .ep_attr = USB_ENDPOINT_XFER_ISOC |
+- USB_ENDPOINT_SYNC_ASYNC,
+- .rates = SNDRV_PCM_RATE_32000 |
+- SNDRV_PCM_RATE_44100 |
+- SNDRV_PCM_RATE_48000,
+- .rate_min = 32000,
+- .rate_max = 48000,
+- .nr_rates = 3,
+- .rate_table = (unsigned int[]) {
+- 32000, 44100, 48000,
+- }
+- }
+- },
+- {
+- QUIRK_DATA_AUDIOFORMAT(0) {
+- .formats = SNDRV_PCM_FMTBIT_S32_LE,
+- .channels = 16, // inputs
+- .fmt_bits = 24,
+- .iface = 0,
+- .altsetting = 1,
+- .altset_idx = 1,
+- .endpoint = 0x81,
+- .ep_attr = USB_ENDPOINT_XFER_ISOC |
+- USB_ENDPOINT_SYNC_ASYNC,
+- .rates = SNDRV_PCM_RATE_64000 |
+- SNDRV_PCM_RATE_88200 |
+- SNDRV_PCM_RATE_96000,
+- .rate_min = 64000,
+- .rate_max = 96000,
+- .nr_rates = 3,
+- .rate_table = (unsigned int[]) {
+- 64000, 88200, 96000,
+- }
+- }
+- },
+- {
+- QUIRK_DATA_AUDIOFORMAT(0) {
+- .formats = SNDRV_PCM_FMTBIT_S32_LE,
+- .channels = 8, // inputs
+- .fmt_bits = 24,
+- .iface = 0,
+- .altsetting = 1,
+- .altset_idx = 1,
+- .endpoint = 0x81,
+- .ep_attr = USB_ENDPOINT_XFER_ISOC |
+- USB_ENDPOINT_SYNC_ASYNC,
+- .rates = SNDRV_PCM_RATE_KNOT |
+- SNDRV_PCM_RATE_176400 |
+- SNDRV_PCM_RATE_192000,
+- .rate_min = 128000,
+- .rate_max = 192000,
+- .nr_rates = 3,
+- .rate_table = (unsigned int[]) {
+- 128000, 176400, 192000,
+- }
+- }
+- },
+- QUIRK_COMPOSITE_END
+- }
+- }
+-},
++ */ \
++ { QUIRK_DATA_STANDARD_MIXER(0) }, \
++ { \
++ QUIRK_DATA_AUDIOFORMAT(0) { \
++ .formats = SNDRV_PCM_FMTBIT_S32_LE, \
++ .channels = 34, /* outputs */ \
++ .fmt_bits = 24, \
++ .iface = 0, \
++ .altsetting = 1, \
++ .altset_idx = 1, \
++ .endpoint = 0x02, \
++ .ep_idx = 1, \
++ .ep_attr = USB_ENDPOINT_XFER_ISOC | \
++ USB_ENDPOINT_SYNC_ASYNC, \
++ .rates = SNDRV_PCM_RATE_32000 | \
++ SNDRV_PCM_RATE_44100 | \
++ SNDRV_PCM_RATE_48000, \
++ .rate_min = 32000, \
++ .rate_max = 48000, \
++ .nr_rates = 3, \
++ .rate_table = (unsigned int[]) { \
++ 32000, 44100, 48000, \
++ }, \
++ .sync_ep = 0x81, \
++ .sync_iface = 0, \
++ .sync_altsetting = 1, \
++ .sync_ep_idx = 0, \
++ .implicit_fb = 1, \
++ }, \
++ }, \
++ { \
++ QUIRK_DATA_AUDIOFORMAT(0) { \
++ .formats = SNDRV_PCM_FMTBIT_S32_LE, \
++ .channels = 18, /* outputs */ \
++ .fmt_bits = 24, \
++ .iface = 0, \
++ .altsetting = 1, \
++ .altset_idx = 1, \
++ .endpoint = 0x02, \
++ .ep_idx = 1, \
++ .ep_attr = USB_ENDPOINT_XFER_ISOC | \
++ USB_ENDPOINT_SYNC_ASYNC, \
++ .rates = SNDRV_PCM_RATE_64000 | \
++ SNDRV_PCM_RATE_88200 | \
++ SNDRV_PCM_RATE_96000, \
++ .rate_min = 64000, \
++ .rate_max = 96000, \
++ .nr_rates = 3, \
++ .rate_table = (unsigned int[]) { \
++ 64000, 88200, 96000, \
++ }, \
++ .sync_ep = 0x81, \
++ .sync_iface = 0, \
++ .sync_altsetting = 1, \
++ .sync_ep_idx = 0, \
++ .implicit_fb = 1, \
++ }, \
++ }, \
++ { \
++ QUIRK_DATA_AUDIOFORMAT(0) { \
++ .formats = SNDRV_PCM_FMTBIT_S32_LE, \
++ .channels = 10, /* outputs */ \
++ .fmt_bits = 24, \
++ .iface = 0, \
++ .altsetting = 1, \
++ .altset_idx = 1, \
++ .endpoint = 0x02, \
++ .ep_idx = 1, \
++ .ep_attr = USB_ENDPOINT_XFER_ISOC | \
++ USB_ENDPOINT_SYNC_ASYNC, \
++ .rates = SNDRV_PCM_RATE_KNOT | \
++ SNDRV_PCM_RATE_176400 | \
++ SNDRV_PCM_RATE_192000, \
++ .rate_min = 128000, \
++ .rate_max = 192000, \
++ .nr_rates = 3, \
++ .rate_table = (unsigned int[]) { \
++ 128000, 176400, 192000, \
++ }, \
++ .sync_ep = 0x81, \
++ .sync_iface = 0, \
++ .sync_altsetting = 1, \
++ .sync_ep_idx = 0, \
++ .implicit_fb = 1, \
++ }, \
++ }, \
++ { \
++ QUIRK_DATA_AUDIOFORMAT(0) { \
++ .formats = SNDRV_PCM_FMTBIT_S32_LE, \
++ .channels = 32, /* inputs */ \
++ .fmt_bits = 24, \
++ .iface = 0, \
++ .altsetting = 1, \
++ .altset_idx = 1, \
++ .endpoint = 0x81, \
++ .ep_attr = USB_ENDPOINT_XFER_ISOC | \
++ USB_ENDPOINT_SYNC_ASYNC, \
++ .rates = SNDRV_PCM_RATE_32000 | \
++ SNDRV_PCM_RATE_44100 | \
++ SNDRV_PCM_RATE_48000, \
++ .rate_min = 32000, \
++ .rate_max = 48000, \
++ .nr_rates = 3, \
++ .rate_table = (unsigned int[]) { \
++ 32000, 44100, 48000, \
++ } \
++ } \
++ }, \
++ { \
++ QUIRK_DATA_AUDIOFORMAT(0) { \
++ .formats = SNDRV_PCM_FMTBIT_S32_LE, \
++ .channels = 16, /* inputs */ \
++ .fmt_bits = 24, \
++ .iface = 0, \
++ .altsetting = 1, \
++ .altset_idx = 1, \
++ .endpoint = 0x81, \
++ .ep_attr = USB_ENDPOINT_XFER_ISOC | \
++ USB_ENDPOINT_SYNC_ASYNC, \
++ .rates = SNDRV_PCM_RATE_64000 | \
++ SNDRV_PCM_RATE_88200 | \
++ SNDRV_PCM_RATE_96000, \
++ .rate_min = 64000, \
++ .rate_max = 96000, \
++ .nr_rates = 3, \
++ .rate_table = (unsigned int[]) { \
++ 64000, 88200, 96000, \
++ } \
++ } \
++ }, \
++ { \
++ QUIRK_DATA_AUDIOFORMAT(0) { \
++ .formats = SNDRV_PCM_FMTBIT_S32_LE, \
++ .channels = 8, /* inputs */ \
++ .fmt_bits = 24, \
++ .iface = 0, \
++ .altsetting = 1, \
++ .altset_idx = 1, \
++ .endpoint = 0x81, \
++ .ep_attr = USB_ENDPOINT_XFER_ISOC | \
++ USB_ENDPOINT_SYNC_ASYNC, \
++ .rates = SNDRV_PCM_RATE_KNOT | \
++ SNDRV_PCM_RATE_176400 | \
++ SNDRV_PCM_RATE_192000, \
++ .rate_min = 128000, \
++ .rate_max = 192000, \
++ .nr_rates = 3, \
++ .rate_table = (unsigned int[]) { \
++ 128000, 176400, 192000, \
++ } \
++ } \
++ }, \
++ QUIRK_COMPOSITE_END \
++ } \
++ } \
++}
++
++QUIRK_RME_DIGIFACE(0x3f8c),
++QUIRK_RME_DIGIFACE(0x3fa0),
++
+ #undef USB_DEVICE_VENDOR_SPEC
+ #undef USB_AUDIO_DEVICE
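Wrapping the entire ~180-line table entry in QUIRK_RME_DIGIFACE(pid) lets the original Digiface PID (0x3f8c) and the new alternate (0x3fa0) share one definition, so the two entries cannot drift apart under future edits. The technique on a toy table (names and values below are illustrative):

#include <stdio.h>

struct usb_quirk { unsigned short vid, pid; const char *name; };

/* One macro stamps out a full table entry per product ID, so hardware
 * variants stay in lockstep when the entry changes. */
#define QUIRK_EXAMPLE(pid_)	\
{				\
	.vid = 0x2a39,		\
	.pid = (pid_),		\
	.name = "example",	\
}

static const struct usb_quirk table[] = {
	QUIRK_EXAMPLE(0x3f8c),
	QUIRK_EXAMPLE(0x3fa0),
};

int main(void)
{
	for (unsigned i = 0; i < sizeof(table) / sizeof(table[0]); i++)
		printf("%04x:%04x %s\n", (unsigned)table[i].vid,
		       (unsigned)table[i].pid, table[i].name);
	return 0;
}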
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 8538fdfce3535b..00101875d9a8d5 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -555,7 +555,7 @@ int snd_usb_create_quirk(struct snd_usb_audio *chip,
+ static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interface *intf)
+ {
+ struct usb_host_config *config = dev->actconfig;
+- struct usb_device_descriptor new_device_descriptor;
++ struct usb_device_descriptor *new_device_descriptor __free(kfree) = NULL;
+ int err;
+
+ if (le16_to_cpu(get_cfg_desc(config)->wTotalLength) == EXTIGY_FIRMWARE_SIZE_OLD ||
+@@ -566,15 +566,19 @@ static int snd_usb_extigy_boot_quirk(struct usb_device *dev, struct usb_interfac
+ 0x10, 0x43, 0x0001, 0x000a, NULL, 0);
+ if (err < 0)
+ dev_dbg(&dev->dev, "error sending boot message: %d\n", err);
++
++ new_device_descriptor = kmalloc(sizeof(*new_device_descriptor), GFP_KERNEL);
++ if (!new_device_descriptor)
++ return -ENOMEM;
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &new_device_descriptor, sizeof(new_device_descriptor));
++ new_device_descriptor, sizeof(*new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err);
+- if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ if (new_device_descriptor->bNumConfigurations > dev->descriptor.bNumConfigurations)
+ dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n",
+- new_device_descriptor.bNumConfigurations);
++ new_device_descriptor->bNumConfigurations);
+ else
+- memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
++ memcpy(&dev->descriptor, new_device_descriptor, sizeof(dev->descriptor));
+ err = usb_reset_configuration(dev);
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_reset_configuration: %d\n", err);
+@@ -906,7 +910,7 @@ static void mbox2_setup_48_24_magic(struct usb_device *dev)
+ static int snd_usb_mbox2_boot_quirk(struct usb_device *dev)
+ {
+ struct usb_host_config *config = dev->actconfig;
+- struct usb_device_descriptor new_device_descriptor;
++ struct usb_device_descriptor *new_device_descriptor __free(kfree) = NULL;
+ int err;
+ u8 bootresponse[0x12];
+ int fwsize;
+@@ -941,15 +945,19 @@ static int snd_usb_mbox2_boot_quirk(struct usb_device *dev)
+
+ dev_dbg(&dev->dev, "device initialised!\n");
+
++ new_device_descriptor = kmalloc(sizeof(*new_device_descriptor), GFP_KERNEL);
++ if (!new_device_descriptor)
++ return -ENOMEM;
++
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &new_device_descriptor, sizeof(new_device_descriptor));
++ new_device_descriptor, sizeof(*new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "error usb_get_descriptor: %d\n", err);
+- if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ if (new_device_descriptor->bNumConfigurations > dev->descriptor.bNumConfigurations)
+ dev_dbg(&dev->dev, "error too large bNumConfigurations: %d\n",
+- new_device_descriptor.bNumConfigurations);
++ new_device_descriptor->bNumConfigurations);
+ else
+- memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
++ memcpy(&dev->descriptor, new_device_descriptor, sizeof(dev->descriptor));
+
+ err = usb_reset_configuration(dev);
+ if (err < 0)
+@@ -1259,7 +1267,7 @@ static void mbox3_setup_defaults(struct usb_device *dev)
+ static int snd_usb_mbox3_boot_quirk(struct usb_device *dev)
+ {
+ struct usb_host_config *config = dev->actconfig;
+- struct usb_device_descriptor new_device_descriptor;
++ struct usb_device_descriptor *new_device_descriptor __free(kfree) = NULL;
+ int err;
+ int descriptor_size;
+
+@@ -1272,15 +1280,19 @@ static int snd_usb_mbox3_boot_quirk(struct usb_device *dev)
+
+ dev_dbg(&dev->dev, "MBOX3: device initialised!\n");
+
++ new_device_descriptor = kmalloc(sizeof(*new_device_descriptor), GFP_KERNEL);
++ if (!new_device_descriptor)
++ return -ENOMEM;
++
+ err = usb_get_descriptor(dev, USB_DT_DEVICE, 0,
+- &new_device_descriptor, sizeof(new_device_descriptor));
++ new_device_descriptor, sizeof(*new_device_descriptor));
+ if (err < 0)
+ dev_dbg(&dev->dev, "MBOX3: error usb_get_descriptor: %d\n", err);
+- if (new_device_descriptor.bNumConfigurations > dev->descriptor.bNumConfigurations)
++ if (new_device_descriptor->bNumConfigurations > dev->descriptor.bNumConfigurations)
+ dev_dbg(&dev->dev, "MBOX3: error too large bNumConfigurations: %d\n",
+- new_device_descriptor.bNumConfigurations);
++ new_device_descriptor->bNumConfigurations);
+ else
+- memcpy(&dev->descriptor, &new_device_descriptor, sizeof(dev->descriptor));
++ memcpy(&dev->descriptor, new_device_descriptor, sizeof(dev->descriptor));
+
+ err = usb_reset_configuration(dev);
+ if (err < 0)
+@@ -1653,6 +1665,7 @@ int snd_usb_apply_boot_quirk(struct usb_device *dev,
+ return snd_usb_motu_microbookii_boot_quirk(dev);
+ break;
+ case USB_ID(0x2a39, 0x3f8c): /* RME Digiface USB */
++ case USB_ID(0x2a39, 0x3fa0): /* RME Digiface USB (alternate) */
+ return snd_usb_rme_digiface_boot_quirk(dev);
+ }
+
+@@ -1866,6 +1879,7 @@ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
+ mbox3_set_format_quirk(subs, fmt); /* Digidesign Mbox 3 */
+ break;
+ case USB_ID(0x2a39, 0x3f8c): /* RME Digiface USB */
++ case USB_ID(0x2a39, 0x3fa0): /* RME Digiface USB (alternate) */
+ rme_digiface_set_format_quirk(subs);
+ break;
+ }
+@@ -2130,7 +2144,7 @@ struct usb_audio_quirk_flags_table {
+ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ /* Device matches */
+ DEVICE_FLG(0x03f0, 0x654a, /* HP 320 FHD Webcam */
+- QUIRK_FLAG_GET_SAMPLE_RATE),
++ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
+ DEVICE_FLG(0x041e, 0x3000, /* Creative SB Extigy */
+ QUIRK_FLAG_IGNORE_CTL_ERROR),
+ DEVICE_FLG(0x041e, 0x4080, /* Creative Live Cam VF0610 */
+@@ -2138,10 +2152,31 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ DEVICE_FLG(0x045e, 0x083c, /* MS USB Link headset */
+ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_CTL_MSG_DELAY |
+ QUIRK_FLAG_DISABLE_AUTOSUSPEND),
++ DEVICE_FLG(0x046d, 0x0807, /* Logitech Webcam C500 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x0808, /* Logitech Webcam C600 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x0809,
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x0819, /* Logitech Webcam C210 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x081b, /* HD Webcam c310 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x081d, /* HD Webcam c510 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x0825, /* HD Webcam c270 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x0826, /* HD Webcam c525 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
+ DEVICE_FLG(0x046d, 0x084c, /* Logitech ConferenceCam Connect */
+ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_CTL_MSG_DELAY_1M),
++ DEVICE_FLG(0x046d, 0x08ca, /* Logitech Quickcam Fusion */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
+ DEVICE_FLG(0x046d, 0x0991, /* Logitech QuickCam Pro */
+- QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR),
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR |
++ QUIRK_FLAG_MIC_RES_384),
++ DEVICE_FLG(0x046d, 0x09a2, /* QuickCam Communicate Deluxe/S7500 */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
+ DEVICE_FLG(0x046d, 0x09a4, /* Logitech QuickCam E 3500 */
+ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR),
+ DEVICE_FLG(0x0499, 0x1509, /* Steinberg UR22 */
+@@ -2209,7 +2244,7 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ DEVICE_FLG(0x0fd9, 0x0008, /* Hauppauge HVR-950Q */
+ QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
+ DEVICE_FLG(0x1224, 0x2a25, /* Jieli Technology USB PHY 2.0 */
+- QUIRK_FLAG_GET_SAMPLE_RATE),
++ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
+ DEVICE_FLG(0x1395, 0x740a, /* Sennheiser DECT */
+ QUIRK_FLAG_GET_SAMPLE_RATE),
+ DEVICE_FLG(0x1397, 0x0507, /* Behringer UMC202HD */
+@@ -2247,9 +2282,9 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ DEVICE_FLG(0x19f7, 0x0035, /* RODE NT-USB+ */
+ QUIRK_FLAG_GET_SAMPLE_RATE),
+ DEVICE_FLG(0x1bcf, 0x2281, /* HD Webcam */
+- QUIRK_FLAG_GET_SAMPLE_RATE),
++ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
+ DEVICE_FLG(0x1bcf, 0x2283, /* NexiGo N930AF FHD Webcam */
+- QUIRK_FLAG_GET_SAMPLE_RATE),
++ QUIRK_FLAG_GET_SAMPLE_RATE | QUIRK_FLAG_MIC_RES_16),
+ DEVICE_FLG(0x2040, 0x7200, /* Hauppauge HVR-950Q */
+ QUIRK_FLAG_SHARE_MEDIA_DEVICE | QUIRK_FLAG_ALIGN_TRANSFER),
+ DEVICE_FLG(0x2040, 0x7201, /* Hauppauge HVR-950Q-MXL */
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index b0f042c996087e..158ec053dc44dd 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -194,6 +194,8 @@ extern bool snd_usb_skip_validation;
+ * QUIRK_FLAG_FIXED_RATE
+ * Do not set PCM rate (frequency) when only one rate is available
+ * for the given endpoint.
++ * QUIRK_FLAG_MIC_RES_16 and QUIRK_FLAG_MIC_RES_384
++ * Set the fixed resolution for Mic Capture Volume (mostly for webcams)
+ */
+
+ #define QUIRK_FLAG_GET_SAMPLE_RATE (1U << 0)
+@@ -218,5 +220,7 @@ extern bool snd_usb_skip_validation;
+ #define QUIRK_FLAG_IFACE_SKIP_CLOSE (1U << 19)
+ #define QUIRK_FLAG_FORCE_IFACE_RESET (1U << 20)
+ #define QUIRK_FLAG_FIXED_RATE (1U << 21)
++#define QUIRK_FLAG_MIC_RES_16 (1U << 22)
++#define QUIRK_FLAG_MIC_RES_384 (1U << 23)
+
+ #endif /* __USBAUDIO_H */
+diff --git a/tools/bpf/bpftool/prog.c b/tools/bpf/bpftool/prog.c
+index 2ff949ea82fa66..e71be67f1d8658 100644
+--- a/tools/bpf/bpftool/prog.c
++++ b/tools/bpf/bpftool/prog.c
+@@ -822,11 +822,18 @@ prog_dump(struct bpf_prog_info *info, enum dump_mode mode,
+ printf("%s:\n", sym_name);
+ }
+
+- if (disasm_print_insn(img, lens[i], opcodes,
+- name, disasm_opt, btf,
+- prog_linfo, ksyms[i], i,
+- linum))
+- goto exit_free;
++ if (ksyms) {
++ if (disasm_print_insn(img, lens[i], opcodes,
++ name, disasm_opt, btf,
++ prog_linfo, ksyms[i], i,
++ linum))
++ goto exit_free;
++ } else {
++ if (disasm_print_insn(img, lens[i], opcodes,
++ name, disasm_opt, btf,
++ NULL, 0, 0, false))
++ goto exit_free;
++ }
+
+ img += lens[i];
+
+diff --git a/tools/scripts/Makefile.arch b/tools/scripts/Makefile.arch
+index f6a50f06dfc453..eabfe9f411d914 100644
+--- a/tools/scripts/Makefile.arch
++++ b/tools/scripts/Makefile.arch
+@@ -7,8 +7,8 @@ HOSTARCH := $(shell uname -m | sed -e s/i.86/x86/ -e s/x86_64/x86/ \
+ -e s/sh[234].*/sh/ -e s/aarch64.*/arm64/ \
+ -e s/riscv.*/riscv/ -e s/loongarch.*/loongarch/)
+
+-ifndef ARCH
+-ARCH := $(HOSTARCH)
++ifeq ($(strip $(ARCH)),)
++override ARCH := $(HOSTARCH)
+ endif
+
+ SRCARCH := $(ARCH)
+diff --git a/tools/testing/selftests/arm64/fp/fp-stress.c b/tools/testing/selftests/arm64/fp/fp-stress.c
+index faac24bdefeb94..80f22789504d66 100644
+--- a/tools/testing/selftests/arm64/fp/fp-stress.c
++++ b/tools/testing/selftests/arm64/fp/fp-stress.c
+@@ -79,7 +79,7 @@ static void child_start(struct child_data *child, const char *program)
+ */
+ ret = dup2(pipefd[1], 1);
+ if (ret == -1) {
+- fprintf(stderr, "dup2() %d\n", errno);
++ printf("dup2() %d\n", errno);
+ exit(EXIT_FAILURE);
+ }
+
+@@ -89,7 +89,7 @@ static void child_start(struct child_data *child, const char *program)
+ */
+ ret = dup2(startup_pipe[0], 3);
+ if (ret == -1) {
+- fprintf(stderr, "dup2() %d\n", errno);
++ printf("dup2() %d\n", errno);
+ exit(EXIT_FAILURE);
+ }
+
+@@ -107,16 +107,15 @@ static void child_start(struct child_data *child, const char *program)
+ */
+ ret = read(3, &i, sizeof(i));
+ if (ret < 0)
+- fprintf(stderr, "read(startp pipe) failed: %s (%d)\n",
+- strerror(errno), errno);
++ printf("read(startp pipe) failed: %s (%d)\n",
++ strerror(errno), errno);
+ if (ret > 0)
+- fprintf(stderr, "%d bytes of data on startup pipe\n",
+- ret);
++ printf("%d bytes of data on startup pipe\n", ret);
+ close(3);
+
+ ret = execl(program, program, NULL);
+- fprintf(stderr, "execl(%s) failed: %d (%s)\n",
+- program, errno, strerror(errno));
++ printf("execl(%s) failed: %d (%s)\n",
++ program, errno, strerror(errno));
+
+ exit(EXIT_FAILURE);
+ } else {
+diff --git a/tools/testing/selftests/arm64/pauth/pac.c b/tools/testing/selftests/arm64/pauth/pac.c
+index b743daa772f55f..5a07b3958fbf29 100644
+--- a/tools/testing/selftests/arm64/pauth/pac.c
++++ b/tools/testing/selftests/arm64/pauth/pac.c
+@@ -182,6 +182,9 @@ int exec_sign_all(struct signatures *signed_vals, size_t val)
+ return -1;
+ }
+
++ close(new_stdin[1]);
++ close(new_stdout[0]);
++
+ return 0;
+ }
+
+diff --git a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
+index 7c881bca9af5c7..a7a6ae6c162fe0 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
++++ b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
+@@ -35,9 +35,9 @@ __description("uninitialized iter in ->next()")
+ __failure __msg("expected an initialized iter_bits as arg #1")
+ int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
+ {
+- struct bpf_iter_bits *it = NULL;
++ struct bpf_iter_bits it = {};
+
+- bpf_iter_bits_next(it);
++ bpf_iter_bits_next(&it);
+ return 0;
+ }
+
+diff --git a/tools/testing/selftests/damon/Makefile b/tools/testing/selftests/damon/Makefile
+index 5b2a6a5dd1af7f..812f656260fba9 100644
+--- a/tools/testing/selftests/damon/Makefile
++++ b/tools/testing/selftests/damon/Makefile
+@@ -6,7 +6,7 @@ TEST_GEN_FILES += debugfs_target_ids_read_before_terminate_race
+ TEST_GEN_FILES += debugfs_target_ids_pid_leak
+ TEST_GEN_FILES += access_memory access_memory_even
+
+-TEST_FILES = _chk_dependency.sh _debugfs_common.sh
++TEST_FILES = _chk_dependency.sh _debugfs_common.sh _damon_sysfs.py
+
+ # functionality tests
+ TEST_PROGS = debugfs_attrs.sh debugfs_schemes.sh debugfs_target_ids.sh
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
+index a16c6a6f6055cf..8f1c58f0c2397f 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_syntax_errors.tc
+@@ -111,7 +111,7 @@ check_error 'p vfs_read $arg* ^$arg*' # DOUBLE_ARGS
+ if !grep -q 'kernel return probes support:' README; then
+ check_error 'r vfs_read ^$arg*' # NOFENTRY_ARGS
+ fi
+-check_error 'p vfs_read+8 ^$arg*' # NOFENTRY_ARGS
++check_error 'p vfs_read+20 ^$arg*' # NOFENTRY_ARGS
+ check_error 'p vfs_read ^hoge' # NO_BTFARG
+ check_error 'p kfree ^$arg10' # NO_BTFARG (exceed the number of parameters)
+ check_error 'r kfree ^$retval' # NO_RETVAL
+diff --git a/tools/testing/selftests/hid/run-hid-tools-tests.sh b/tools/testing/selftests/hid/run-hid-tools-tests.sh
+index bdae8464da8656..af1682a53c27e1 100755
+--- a/tools/testing/selftests/hid/run-hid-tools-tests.sh
++++ b/tools/testing/selftests/hid/run-hid-tools-tests.sh
+@@ -2,24 +2,26 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Runs tests for the HID subsystem
+
++KSELFTEST_SKIP_TEST=4
++
+ if ! command -v python3 > /dev/null 2>&1; then
+ echo "hid-tools: [SKIP] python3 not installed"
+- exit 77
++ exit $KSELFTEST_SKIP_TEST
+ fi
+
+ if ! python3 -c "import pytest" > /dev/null 2>&1; then
+- echo "hid: [SKIP/ pytest module not installed"
+- exit 77
++ echo "hid: [SKIP] pytest module not installed"
++ exit $KSELFTEST_SKIP_TEST
+ fi
+
+ if ! python3 -c "import pytest_tap" > /dev/null 2>&1; then
+- echo "hid: [SKIP/ pytest_tap module not installed"
+- exit 77
++ echo "hid: [SKIP] pytest_tap module not installed"
++ exit $KSELFTEST_SKIP_TEST
+ fi
+
+ if ! python3 -c "import hidtools" > /dev/null 2>&1; then
+- echo "hid: [SKIP/ hid-tools module not installed"
+- exit 77
++ echo "hid: [SKIP] hid-tools module not installed"
++ exit $KSELFTEST_SKIP_TEST
+ fi
+
+ TARGET=${TARGET:=.}
+diff --git a/tools/testing/selftests/mm/hugetlb_dio.c b/tools/testing/selftests/mm/hugetlb_dio.c
+index 432d5af15e66b7..db63abe5ee5e85 100644
+--- a/tools/testing/selftests/mm/hugetlb_dio.c
++++ b/tools/testing/selftests/mm/hugetlb_dio.c
+@@ -76,19 +76,15 @@ void run_dio_using_hugetlb(unsigned int start_off, unsigned int end_off)
+ /* Get the free huge pages after unmap*/
+ free_hpage_a = get_free_hugepages();
+
++ ksft_print_msg("No. Free pages before allocation : %d\n", free_hpage_b);
++ ksft_print_msg("No. Free pages after munmap : %d\n", free_hpage_a);
++
+ /*
+ * If the no. of free hugepages before allocation and after unmap does
+ * not match - that means there could still be a page which is pinned.
+ */
+- if (free_hpage_a != free_hpage_b) {
+- ksft_print_msg("No. Free pages before allocation : %d\n", free_hpage_b);
+- ksft_print_msg("No. Free pages after munmap : %d\n", free_hpage_a);
+- ksft_test_result_fail(": Huge pages not freed!\n");
+- } else {
+- ksft_print_msg("No. Free pages before allocation : %d\n", free_hpage_b);
+- ksft_print_msg("No. Free pages after munmap : %d\n", free_hpage_a);
+- ksft_test_result_pass(": Huge pages freed successfully !\n");
+- }
++ ksft_test_result(free_hpage_a == free_hpage_b,
++ "free huge pages from %u-%u\n", start_off, end_off);
+ }
+
+ int main(void)
+diff --git a/tools/testing/selftests/resctrl/resctrl_val.c b/tools/testing/selftests/resctrl/resctrl_val.c
+index f118f659e89600..e92e4f463f37bb 100644
+--- a/tools/testing/selftests/resctrl/resctrl_val.c
++++ b/tools/testing/selftests/resctrl/resctrl_val.c
+@@ -159,7 +159,7 @@ static int read_from_imc_dir(char *imc_dir, int count)
+
+ return -1;
+ }
+- if (fscanf(fp, "%s", cas_count_cfg) <= 0) {
++ if (fscanf(fp, "%1023s", cas_count_cfg) <= 0) {
+ ksft_perror("Could not get iMC cas count read");
+ fclose(fp);
+
+@@ -177,7 +177,7 @@ static int read_from_imc_dir(char *imc_dir, int count)
+
+ return -1;
+ }
+- if (fscanf(fp, "%s", cas_count_cfg) <= 0) {
++ if (fscanf(fp, "%1023s", cas_count_cfg) <= 0) {
+ ksft_perror("Could not get iMC cas count write");
+ fclose(fp);
+
+diff --git a/tools/testing/selftests/resctrl/resctrlfs.c b/tools/testing/selftests/resctrl/resctrlfs.c
+index 250c320349a785..a53cd1cb6e0c64 100644
+--- a/tools/testing/selftests/resctrl/resctrlfs.c
++++ b/tools/testing/selftests/resctrl/resctrlfs.c
+@@ -182,7 +182,7 @@ int get_cache_size(int cpu_no, const char *cache_type, unsigned long *cache_size
+
+ return -1;
+ }
+- if (fscanf(fp, "%s", cache_str) <= 0) {
++ if (fscanf(fp, "%63s", cache_str) <= 0) {
+ ksft_perror("Could not get cache_size");
+ fclose(fp);
+
+diff --git a/tools/testing/selftests/wireguard/qemu/debug.config b/tools/testing/selftests/wireguard/qemu/debug.config
+index 9d172210e2c63f..139fd9aa8b1218 100644
+--- a/tools/testing/selftests/wireguard/qemu/debug.config
++++ b/tools/testing/selftests/wireguard/qemu/debug.config
+@@ -31,7 +31,6 @@ CONFIG_SCHED_DEBUG=y
+ CONFIG_SCHED_INFO=y
+ CONFIG_SCHEDSTATS=y
+ CONFIG_SCHED_STACK_END_CHECK=y
+-CONFIG_DEBUG_TIMEKEEPING=y
+ CONFIG_DEBUG_PREEMPT=y
+ CONFIG_DEBUG_RT_MUTEXES=y
+ CONFIG_DEBUG_SPINLOCK=y
+diff --git a/tools/testing/vsock/vsock_perf.c b/tools/testing/vsock/vsock_perf.c
+index 4e8578f815e08a..8e0a6c0770d372 100644
+--- a/tools/testing/vsock/vsock_perf.c
++++ b/tools/testing/vsock/vsock_perf.c
+@@ -33,7 +33,7 @@
+
+ static unsigned int port = DEFAULT_PORT;
+ static unsigned long buf_size_bytes = DEFAULT_BUF_SIZE_BYTES;
+-static unsigned long vsock_buf_bytes = DEFAULT_VSOCK_BUF_BYTES;
++static unsigned long long vsock_buf_bytes = DEFAULT_VSOCK_BUF_BYTES;
+ static bool zerocopy;
+
+ static void error(const char *s)
+@@ -133,7 +133,7 @@ static float get_gbps(unsigned long bits, time_t ns_delta)
+ ((float)ns_delta / NSEC_PER_SEC);
+ }
+
+-static void run_receiver(unsigned long rcvlowat_bytes)
++static void run_receiver(int rcvlowat_bytes)
+ {
+ unsigned int read_cnt;
+ time_t rx_begin_ns;
+@@ -162,8 +162,8 @@ static void run_receiver(unsigned long rcvlowat_bytes)
+ printf("Run as receiver\n");
+ printf("Listen port %u\n", port);
+ printf("RX buffer %lu bytes\n", buf_size_bytes);
+- printf("vsock buffer %lu bytes\n", vsock_buf_bytes);
+- printf("SO_RCVLOWAT %lu bytes\n", rcvlowat_bytes);
++ printf("vsock buffer %llu bytes\n", vsock_buf_bytes);
++ printf("SO_RCVLOWAT %d bytes\n", rcvlowat_bytes);
+
+ fd = socket(AF_VSOCK, SOCK_STREAM, 0);
+
+@@ -439,7 +439,7 @@ static long strtolx(const char *arg)
+ int main(int argc, char **argv)
+ {
+ unsigned long to_send_bytes = DEFAULT_TO_SEND_BYTES;
+- unsigned long rcvlowat_bytes = DEFAULT_RCVLOWAT_BYTES;
++ int rcvlowat_bytes = DEFAULT_RCVLOWAT_BYTES;
+ int peer_cid = -1;
+ bool sender = false;
+
+diff --git a/tools/testing/vsock/vsock_test.c b/tools/testing/vsock/vsock_test.c
+index 8d38dbf8f41f04..0b7f5bf546da56 100644
+--- a/tools/testing/vsock/vsock_test.c
++++ b/tools/testing/vsock/vsock_test.c
+@@ -429,7 +429,7 @@ static void test_seqpacket_msg_bounds_client(const struct test_opts *opts)
+
+ static void test_seqpacket_msg_bounds_server(const struct test_opts *opts)
+ {
+- unsigned long sock_buf_size;
++ unsigned long long sock_buf_size;
+ unsigned long remote_hash;
+ unsigned long curr_hash;
+ int fd;
+@@ -634,7 +634,8 @@ static void test_seqpacket_timeout_server(const struct test_opts *opts)
+
+ static void test_seqpacket_bigmsg_client(const struct test_opts *opts)
+ {
+- unsigned long sock_buf_size;
++ unsigned long long sock_buf_size;
++ size_t buf_size;
+ socklen_t len;
+ void *data;
+ int fd;
+@@ -655,13 +656,20 @@ static void test_seqpacket_bigmsg_client(const struct test_opts *opts)
+
+ sock_buf_size++;
+
+- data = malloc(sock_buf_size);
++ /* size_t can be < unsigned long long */
++ buf_size = (size_t)sock_buf_size;
++ if (buf_size != sock_buf_size) {
++ fprintf(stderr, "Returned BUFFER_SIZE too large\n");
++ exit(EXIT_FAILURE);
++ }
++
++ data = malloc(buf_size);
+ if (!data) {
+ perror("malloc");
+ exit(EXIT_FAILURE);
+ }
+
+- send_buf(fd, data, sock_buf_size, 0, -EMSGSIZE);
++ send_buf(fd, data, buf_size, 0, -EMSGSIZE);
+
+ control_writeln("CLISENT");
+
+@@ -835,7 +843,7 @@ static void test_stream_poll_rcvlowat_server(const struct test_opts *opts)
+
+ static void test_stream_poll_rcvlowat_client(const struct test_opts *opts)
+ {
+- unsigned long lowat_val = RCVLOWAT_BUF_SIZE;
++ int lowat_val = RCVLOWAT_BUF_SIZE;
+ char buf[RCVLOWAT_BUF_SIZE];
+ struct pollfd fds;
+ short poll_flags;
+@@ -1357,9 +1365,10 @@ static void test_stream_rcvlowat_def_cred_upd_client(const struct test_opts *opt
+ static void test_stream_credit_update_test(const struct test_opts *opts,
+ bool low_rx_bytes_test)
+ {
+- size_t recv_buf_size;
++ int recv_buf_size;
+ struct pollfd fds;
+ size_t buf_size;
++ unsigned long long sock_buf_size;
+ void *buf;
+ int fd;
+
+@@ -1371,8 +1380,11 @@ static void test_stream_credit_update_test(const struct test_opts *opts,
+
+ buf_size = RCVLOWAT_CREDIT_UPD_BUF_SIZE;
+
++ /* size_t can be < unsigned long long */
++ sock_buf_size = buf_size;
++
+ if (setsockopt(fd, AF_VSOCK, SO_VM_SOCKETS_BUFFER_SIZE,
+- &buf_size, sizeof(buf_size))) {
++ &sock_buf_size, sizeof(sock_buf_size))) {
+ perror("setsockopt(SO_VM_SOCKETS_BUFFER_SIZE)");
+ exit(EXIT_FAILURE);
+ }
+diff --git a/tools/tracing/rtla/sample/timerlat_load.py b/tools/tracing/rtla/sample/timerlat_load.py
+index 8cc5eb2d2e69e5..52eccb6225f92d 100644
+--- a/tools/tracing/rtla/sample/timerlat_load.py
++++ b/tools/tracing/rtla/sample/timerlat_load.py
+@@ -25,13 +25,12 @@ import sys
+ import os
+
+ parser = argparse.ArgumentParser(description='user-space timerlat thread in Python')
+-parser.add_argument("cpu", help='CPU to run timerlat thread')
+-parser.add_argument("-p", "--prio", help='FIFO priority')
+-
++parser.add_argument("cpu", type=int, help='CPU to run timerlat thread')
++parser.add_argument("-p", "--prio", type=int, help='FIFO priority')
+ args = parser.parse_args()
+
+ try:
+- affinity_mask = { int(args.cpu) }
++ affinity_mask = {args.cpu}
+ except:
+ print("Invalid cpu: " + args.cpu)
+ exit(1)
+@@ -44,7 +43,7 @@ except:
+
+ if (args.prio):
+ try:
+- param = os.sched_param(int(args.prio))
++ param = os.sched_param(args.prio)
+ os.sched_setscheduler(0, os.SCHED_FIFO, param)
+ except:
+ print("Error setting priority")
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index 829511a712224f..ae55cd79128336 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -62,9 +62,9 @@ struct timerlat_hist_cpu {
+ int *thread;
+ int *user;
+
+- int irq_count;
+- int thread_count;
+- int user_count;
++ unsigned long long irq_count;
++ unsigned long long thread_count;
++ unsigned long long user_count;
+
+ unsigned long long min_irq;
+ unsigned long long sum_irq;
+@@ -304,15 +304,15 @@ timerlat_print_summary(struct timerlat_hist_params *params,
+ continue;
+
+ if (!params->no_irq)
+- trace_seq_printf(trace->seq, "%9d ",
++ trace_seq_printf(trace->seq, "%9llu ",
+ data->hist[cpu].irq_count);
+
+ if (!params->no_thread)
+- trace_seq_printf(trace->seq, "%9d ",
++ trace_seq_printf(trace->seq, "%9llu ",
+ data->hist[cpu].thread_count);
+
+ if (params->user_hist)
+- trace_seq_printf(trace->seq, "%9d ",
++ trace_seq_printf(trace->seq, "%9llu ",
+ data->hist[cpu].user_count);
+ }
+ trace_seq_printf(trace->seq, "\n");
+@@ -488,15 +488,15 @@ timerlat_print_stats_all(struct timerlat_hist_params *params,
+ trace_seq_printf(trace->seq, "count:");
+
+ if (!params->no_irq)
+- trace_seq_printf(trace->seq, "%9d ",
++ trace_seq_printf(trace->seq, "%9llu ",
+ sum.irq_count);
+
+ if (!params->no_thread)
+- trace_seq_printf(trace->seq, "%9d ",
++ trace_seq_printf(trace->seq, "%9llu ",
+ sum.thread_count);
+
+ if (params->user_hist)
+- trace_seq_printf(trace->seq, "%9d ",
++ trace_seq_printf(trace->seq, "%9llu ",
+ sum.user_count);
+
+ trace_seq_printf(trace->seq, "\n");
+@@ -778,7 +778,7 @@ static struct timerlat_hist_params
+ /* getopt_long stores the option index here. */
+ int option_index = 0;
+
+- c = getopt_long(argc, argv, "a:c:C::b:d:e:E:DhH:i:knp:P:s:t::T:uU0123456:7:8:9\1\2:\3",
++ c = getopt_long(argc, argv, "a:c:C::b:d:e:E:DhH:i:knp:P:s:t::T:uU0123456:7:8:9\1\2:\3:",
+ long_options, &option_index);
+
+ /* detect the end of the options. */
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index 3b62519a412fc9..ac2ff38a57ee55 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -54,9 +54,9 @@ struct timerlat_top_params {
+ };
+
+ struct timerlat_top_cpu {
+- int irq_count;
+- int thread_count;
+- int user_count;
++ unsigned long long irq_count;
++ unsigned long long thread_count;
++ unsigned long long user_count;
+
+ unsigned long long cur_irq;
+ unsigned long long min_irq;
+@@ -280,7 +280,7 @@ static void timerlat_top_print(struct osnoise_tool *top, int cpu)
+ /*
+ * Unless trace is being lost, IRQ counter is always the max.
+ */
+- trace_seq_printf(s, "%3d #%-9d |", cpu, cpu_data->irq_count);
++ trace_seq_printf(s, "%3d #%-9llu |", cpu, cpu_data->irq_count);
+
+ if (!cpu_data->irq_count) {
+ trace_seq_printf(s, "%s %s %s %s |", no_value, no_value, no_value, no_value);
+diff --git a/tools/tracing/rtla/src/utils.c b/tools/tracing/rtla/src/utils.c
+index 9ac71a66840c1b..0735fcb827ed76 100644
+--- a/tools/tracing/rtla/src/utils.c
++++ b/tools/tracing/rtla/src/utils.c
+@@ -233,7 +233,7 @@ long parse_ns_duration(char *val)
+
+ #define SCHED_DEADLINE 6
+
+-static inline int sched_setattr(pid_t pid, const struct sched_attr *attr,
++static inline int syscall_sched_setattr(pid_t pid, const struct sched_attr *attr,
+ unsigned int flags) {
+ return syscall(__NR_sched_setattr, pid, attr, flags);
+ }
+@@ -243,7 +243,7 @@ int __set_sched_attr(int pid, struct sched_attr *attr)
+ int flags = 0;
+ int retval;
+
+- retval = sched_setattr(pid, attr, flags);
++ retval = syscall_sched_setattr(pid, attr, flags);
+ if (retval < 0) {
+ err_msg("Failed to set sched attributes to the pid %d: %s\n",
+ pid, strerror(errno));
+diff --git a/tools/tracing/rtla/src/utils.h b/tools/tracing/rtla/src/utils.h
+index d44513e6c66a01..99c9cf81bcd02c 100644
+--- a/tools/tracing/rtla/src/utils.h
++++ b/tools/tracing/rtla/src/utils.h
+@@ -46,6 +46,7 @@ update_sum(unsigned long long *a, unsigned long long *b)
+ *a += *b;
+ }
+
++#ifndef SCHED_ATTR_SIZE_VER0
+ struct sched_attr {
+ uint32_t size;
+ uint32_t sched_policy;
+@@ -56,6 +57,7 @@ struct sched_attr {
+ uint64_t sched_deadline;
+ uint64_t sched_period;
+ };
++#endif /* SCHED_ATTR_SIZE_VER0 */
+
+ int parse_prio(char *arg, struct sched_attr *sched_param);
+ int parse_cpu_set(char *cpu_list, cpu_set_t *set);
+diff --git a/tools/verification/dot2/automata.py b/tools/verification/dot2/automata.py
+index baffeb960ff0b3..bdeb98baa8b065 100644
+--- a/tools/verification/dot2/automata.py
++++ b/tools/verification/dot2/automata.py
+@@ -29,11 +29,11 @@ class Automata:
+
+ def __get_model_name(self):
+ basename = ntpath.basename(self.__dot_path)
+- if basename.endswith(".dot") == False:
++ if not basename.endswith(".dot") and not basename.endswith(".gv"):
+ print("not a dot file")
+ raise Exception("not a dot file: %s" % self.__dot_path)
+
+- model_name = basename[0:-4]
++ model_name = ntpath.splitext(basename)[0]
+ if model_name.__len__() == 0:
+ raise Exception("not a dot file: %s" % self.__dot_path)
+
+@@ -68,9 +68,9 @@ class Automata:
+ def __get_cursor_begin_events(self):
+ cursor = 0
+ while self.__dot_lines[cursor].split()[0] != "{node":
+- cursor += 1
++ cursor += 1
+ while self.__dot_lines[cursor].split()[0] == "{node":
+- cursor += 1
++ cursor += 1
+ # skip initial state transition
+ cursor += 1
+ return cursor
+@@ -94,11 +94,11 @@ class Automata:
+ initial_state = state[7:]
+ else:
+ states.append(state)
+- if self.__dot_lines[cursor].__contains__("doublecircle") == True:
++ if "doublecircle" in self.__dot_lines[cursor]:
+ final_states.append(state)
+ has_final_states = True
+
+- if self.__dot_lines[cursor].__contains__("ellipse") == True:
++ if "ellipse" in self.__dot_lines[cursor]:
+ final_states.append(state)
+ has_final_states = True
+
+@@ -110,7 +110,7 @@ class Automata:
+ # Insert the initial state at the beginning of the states
+ states.insert(0, initial_state)
+
+- if has_final_states == False:
++ if not has_final_states:
+ final_states.append(initial_state)
+
+ return states, initial_state, final_states
+@@ -120,7 +120,7 @@ class Automata:
+ cursor = self.__get_cursor_begin_events()
+
+ events = []
+- while self.__dot_lines[cursor][1] == '"':
++ while self.__dot_lines[cursor].lstrip()[0] == '"':
+ # transitions have the format:
+ # "all_fired" -> "both_fired" [ label = "disable_irq" ];
+ # ------------ event is here ------------^^^^^
+@@ -161,7 +161,7 @@ class Automata:
+ # and we are back! Let's fill the matrix
+ cursor = self.__get_cursor_begin_events()
+
+- while self.__dot_lines[cursor][1] == '"':
++ while self.__dot_lines[cursor].lstrip()[0] == '"':
+ if self.__dot_lines[cursor].split()[1] == "->":
+ line = self.__dot_lines[cursor].split()
+ origin_state = line[0].replace('"','').replace(',','_')
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-14 23:59 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-12-14 23:59 UTC (permalink / raw
To: gentoo-commits
commit: 19cecaf31ceabc39e8291a5e852adf5e8f726445
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Dec 14 23:59:03 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Dec 14 23:59:03 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=19cecaf3
Remove redundant patch
Removed:
2700_drm-display-GCC15.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ----
2700_drm-display-GCC15.patch | 52 --------------------------------------------
2 files changed, 56 deletions(-)
diff --git a/0000_README b/0000_README
index 6429d035..81c02320 100644
--- a/0000_README
+++ b/0000_README
@@ -87,10 +87,6 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
-Patch: 2700_drm-display-GCC15.patch
-From: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
-Desc: drm/display: Fix building with GCC 15
-
Patch: 2901_tools-lib-subcmd-compile-fix.patch
From: https://lore.kernel.org/all/20240731085217.94928-1-michael.weiss@aisec.fraunhofer.de/
Desc: tools lib subcmd: Fixed uninitialized use of variable in parse-options
diff --git a/2700_drm-display-GCC15.patch b/2700_drm-display-GCC15.patch
deleted file mode 100644
index 0be775ea..00000000
--- a/2700_drm-display-GCC15.patch
+++ /dev/null
@@ -1,52 +0,0 @@
-From a500f3751d3c861be7e4463c933cf467240cca5d Mon Sep 17 00:00:00 2001
-From: Brahmajit Das <brahmajit.xyz@gmail.com>
-Date: Wed, 2 Oct 2024 14:53:11 +0530
-Subject: drm/display: Fix building with GCC 15
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-GCC 15 enables -Werror=unterminated-string-initialization by default.
-This results in the following build error
-
-drivers/gpu/drm/display/drm_dp_dual_mode_helper.c: In function ‘is_hdmi_adaptor’:
-drivers/gpu/drm/display/drm_dp_dual_mode_helper.c:164:17: error: initializer-string for array of
- ‘char’ is too long [-Werror=unterminated-string-initialization]
- 164 | "DP-HDMI ADAPTOR\x04";
- | ^~~~~~~~~~~~~~~~~~~~~
-
-After discussion with Ville, the fix was to increase the size of
-dp_dual_mode_hdmi_id array by one, so that it can accommodate the NULL
-line character. This should let us build the kernel with GCC 15.
-
-Signed-off-by: Brahmajit Das <brahmajit.xyz@gmail.com>
-Reviewed-by: Jani Nikula <jani.nikula@intel.com>
-Link: https://patchwork.freedesktop.org/patch/msgid/20241002092311.942822-1-brahmajit.xyz@gmail.com
-Signed-off-by: Jani Nikula <jani.nikula@intel.com>
----
- drivers/gpu/drm/display/drm_dp_dual_mode_helper.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-(limited to 'drivers/gpu/drm/display/drm_dp_dual_mode_helper.c')
-
-diff --git a/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c b/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
-index 14a2a8473682b0..c491e3203bf11c 100644
---- a/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
-+++ b/drivers/gpu/drm/display/drm_dp_dual_mode_helper.c
-@@ -160,11 +160,11 @@ EXPORT_SYMBOL(drm_dp_dual_mode_write);
-
- static bool is_hdmi_adaptor(const char hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN])
- {
-- static const char dp_dual_mode_hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN] =
-+ static const char dp_dual_mode_hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN + 1] =
- "DP-HDMI ADAPTOR\x04";
-
- return memcmp(hdmi_id, dp_dual_mode_hdmi_id,
-- sizeof(dp_dual_mode_hdmi_id)) == 0;
-+ DP_DUAL_MODE_HDMI_ID_LEN) == 0;
- }
-
- static bool is_type1_adaptor(uint8_t adaptor_id)
---
-cgit 1.2.3-korg
-
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-15 0:02 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-12-15 0:02 UTC (permalink / raw
To: gentoo-commits
commit: 1142e63b91589e751e9f9e537c19cc52f96c790d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Dec 15 00:02:06 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Dec 15 00:02:06 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1142e63b
Remove redundant patches
Removed:
1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 --
...-change-caller-of-update_pkru_in_sigframe.patch | 107 ---------------------
...eys-ensure-updated-pkru-value-is-xrstor-d.patch | 96 ------------------
3 files changed, 211 deletions(-)
diff --git a/0000_README b/0000_README
index 81c02320..a2c9782d 100644
--- a/0000_README
+++ b/0000_README
@@ -75,14 +75,6 @@ Patch: 1730_parisc-Disable-prctl.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
-Patch: 1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
-From: https://git.kernel.org/
-Desc: x86/pkeys: Change caller of update_pkru_in_sigframe()
-
-Patch: 1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
-From: https://git.kernel.org/
-Desc: x86/pkeys: Ensure updated PKRU value is XRSTOR'd
-
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch b/1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
deleted file mode 100644
index 3a1fbd82..00000000
--- a/1740_x86-pkeys-change-caller-of-update_pkru_in_sigframe.patch
+++ /dev/null
@@ -1,107 +0,0 @@
-From 5683d0ce8fb46f36315a2b508f90ec6221cda018 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Tue, 19 Nov 2024 17:45:19 +0000
-Subject: x86/pkeys: Change caller of update_pkru_in_sigframe()
-
-From: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
-
-[ Upstream commit 6a1853bdf17874392476b552398df261f75503e0 ]
-
-update_pkru_in_sigframe() will shortly need some information which
-is only available inside xsave_to_user_sigframe(). Move
-update_pkru_in_sigframe() inside the other function to make it
-easier to provide it that information.
-
-No functional changes.
-
-Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
-Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
-Link: https://lore.kernel.org/all/20241119174520.3987538-2-aruna.ramakrishna%40oracle.com
-Stable-dep-of: ae6012d72fa6 ("x86/pkeys: Ensure updated PKRU value is XRSTOR'd")
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/kernel/fpu/signal.c | 20 ++------------------
- arch/x86/kernel/fpu/xstate.h | 15 ++++++++++++++-
- 2 files changed, 16 insertions(+), 19 deletions(-)
-
-diff --git a/arch/x86/kernel/fpu/signal.c b/arch/x86/kernel/fpu/signal.c
-index 1065ab995305c..8f62e0666dea5 100644
---- a/arch/x86/kernel/fpu/signal.c
-+++ b/arch/x86/kernel/fpu/signal.c
-@@ -63,16 +63,6 @@ static inline bool check_xstate_in_sigframe(struct fxregs_state __user *fxbuf,
- return true;
- }
-
--/*
-- * Update the value of PKRU register that was already pushed onto the signal frame.
-- */
--static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
--{
-- if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
-- return 0;
-- return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
--}
--
- /*
- * Signal frame handlers.
- */
-@@ -168,14 +158,8 @@ static inline bool save_xstate_epilog(void __user *buf, int ia32_frame,
-
- static inline int copy_fpregs_to_sigframe(struct xregs_state __user *buf, u32 pkru)
- {
-- int err = 0;
--
-- if (use_xsave()) {
-- err = xsave_to_user_sigframe(buf);
-- if (!err)
-- err = update_pkru_in_sigframe(buf, pkru);
-- return err;
-- }
-+ if (use_xsave())
-+ return xsave_to_user_sigframe(buf, pkru);
-
- if (use_fxsr())
- return fxsave_to_user_sigframe((struct fxregs_state __user *) buf);
-diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
-index 0b86a5002c846..6b2924fbe5b8d 100644
---- a/arch/x86/kernel/fpu/xstate.h
-+++ b/arch/x86/kernel/fpu/xstate.h
-@@ -69,6 +69,16 @@ static inline u64 xfeatures_mask_independent(void)
- return fpu_kernel_cfg.independent_features;
- }
-
-+/*
-+ * Update the value of PKRU register that was already pushed onto the signal frame.
-+ */
-+static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
-+{
-+ if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
-+ return 0;
-+ return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
-+}
-+
- /* XSAVE/XRSTOR wrapper functions */
-
- #ifdef CONFIG_X86_64
-@@ -256,7 +266,7 @@ static inline u64 xfeatures_need_sigframe_write(void)
- * The caller has to zero buf::header before calling this because XSAVE*
- * does not touch the reserved fields in the header.
- */
--static inline int xsave_to_user_sigframe(struct xregs_state __user *buf)
-+static inline int xsave_to_user_sigframe(struct xregs_state __user *buf, u32 pkru)
- {
- /*
- * Include the features which are not xsaved/rstored by the kernel
-@@ -281,6 +291,9 @@ static inline int xsave_to_user_sigframe(struct xregs_state __user *buf)
- XSTATE_OP(XSAVE, buf, lmask, hmask, err);
- clac();
-
-+ if (!err)
-+ err = update_pkru_in_sigframe(buf, pkru);
-+
- return err;
- }
-
---
-2.43.0
-
diff --git a/1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch b/1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
deleted file mode 100644
index 11b1f768..00000000
--- a/1741_x86-pkeys-ensure-updated-pkru-value-is-xrstor-d.patch
+++ /dev/null
@@ -1,96 +0,0 @@
-From 24fedf2768fd57e0d767137044c4f7493357b325 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Tue, 19 Nov 2024 17:45:20 +0000
-Subject: x86/pkeys: Ensure updated PKRU value is XRSTOR'd
-
-From: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
-
-[ Upstream commit ae6012d72fa60c9ff92de5bac7a8021a47458e5b ]
-
-When XSTATE_BV[i] is 0, and XRSTOR attempts to restore state component
-'i' it ignores any value in the XSAVE buffer and instead restores the
-state component's init value.
-
-This means that if XSAVE writes XSTATE_BV[PKRU]=0 then XRSTOR will
-ignore the value that update_pkru_in_sigframe() writes to the XSAVE buffer.
-
-XSTATE_BV[PKRU] only gets written as 0 if PKRU is in its init state. On
-Intel CPUs, this basically never happens because the kernel usually
-overwrites the init value (aside: this is why we didn't notice this bug
-until now). But on AMD, the init tracker is more aggressive and will
-track PKRU as being in its init state upon any wrpkru(0x0).
-Unfortunately, sig_prepare_pkru() does just that: wrpkru(0x0).
-
-This writes XSTATE_BV[PKRU]=0 which makes XRSTOR ignore the PKRU value
-in the sigframe.
-
-To fix this, always overwrite the sigframe XSTATE_BV with a value that
-has XSTATE_BV[PKRU]==1. This ensures that XRSTOR will not ignore what
-update_pkru_in_sigframe() wrote.
-
-The problematic sequence of events is something like this:
-
-Userspace does:
- * wrpkru(0xffff0000) (or whatever)
- * Hardware sets: XINUSE[PKRU]=1
-Signal happens, kernel is entered:
- * sig_prepare_pkru() => wrpkru(0x00000000)
- * Hardware sets: XINUSE[PKRU]=0 (aggressive AMD init tracker)
- * XSAVE writes most of XSAVE buffer, including
- XSTATE_BV[PKRU]=XINUSE[PKRU]=0
- * update_pkru_in_sigframe() overwrites PKRU in XSAVE buffer
-... signal handling
- * XRSTOR sees XSTATE_BV[PKRU]==0, ignores just-written value
- from update_pkru_in_sigframe()
-
-Fixes: 70044df250d0 ("x86/pkeys: Update PKRU to enable all pkeys before XSAVE")
-Suggested-by: Rudi Horn <rudi.horn@oracle.com>
-Signed-off-by: Aruna Ramakrishna <aruna.ramakrishna@oracle.com>
-Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
-Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
-Link: https://lore.kernel.org/all/20241119174520.3987538-3-aruna.ramakrishna%40oracle.com
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/kernel/fpu/xstate.h | 16 ++++++++++++++--
- 1 file changed, 14 insertions(+), 2 deletions(-)
-
-diff --git a/arch/x86/kernel/fpu/xstate.h b/arch/x86/kernel/fpu/xstate.h
-index 6b2924fbe5b8d..aa16f1a1bbcf1 100644
---- a/arch/x86/kernel/fpu/xstate.h
-+++ b/arch/x86/kernel/fpu/xstate.h
-@@ -72,10 +72,22 @@ static inline u64 xfeatures_mask_independent(void)
- /*
- * Update the value of PKRU register that was already pushed onto the signal frame.
- */
--static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u32 pkru)
-+static inline int update_pkru_in_sigframe(struct xregs_state __user *buf, u64 mask, u32 pkru)
- {
-+ u64 xstate_bv;
-+ int err;
-+
- if (unlikely(!cpu_feature_enabled(X86_FEATURE_OSPKE)))
- return 0;
-+
-+ /* Mark PKRU as in-use so that it is restored correctly. */
-+ xstate_bv = (mask & xfeatures_in_use()) | XFEATURE_MASK_PKRU;
-+
-+ err = __put_user(xstate_bv, &buf->header.xfeatures);
-+ if (err)
-+ return err;
-+
-+ /* Update PKRU value in the userspace xsave buffer. */
- return __put_user(pkru, (unsigned int __user *)get_xsave_addr_user(buf, XFEATURE_PKRU));
- }
-
-@@ -292,7 +304,7 @@ static inline int xsave_to_user_sigframe(struct xregs_state __user *buf, u32 pkr
- clac();
-
- if (!err)
-- err = update_pkru_in_sigframe(buf, pkru);
-+ err = update_pkru_in_sigframe(buf, mask, pkru);
-
- return err;
- }
---
-2.43.0
-
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-19 18:07 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-12-19 18:07 UTC (permalink / raw
To: gentoo-commits
commit: 1d0712601fc0cfb16d2abc9bf8d0e34c43b6afe4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Dec 19 18:07:10 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Dec 19 18:07:10 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1d071260
Linux patch 6.12.6
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1005_linux-6.12.6.patch | 8402 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 8406 insertions(+)
diff --git a/0000_README b/0000_README
index a2c9782d..1bb8df77 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch: 1004_linux-6.12.5.patch
From: https://www.kernel.org
Desc: Linux 6.12.5
+Patch: 1005_linux-6.12.6.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.6
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1005_linux-6.12.6.patch b/1005_linux-6.12.6.patch
new file mode 100644
index 00000000..e9bbd96e
--- /dev/null
+++ b/1005_linux-6.12.6.patch
@@ -0,0 +1,8402 @@
+diff --git a/Documentation/networking/ip-sysctl.rst b/Documentation/networking/ip-sysctl.rst
+index eacf8983e23074..dcbb6f6caf6de3 100644
+--- a/Documentation/networking/ip-sysctl.rst
++++ b/Documentation/networking/ip-sysctl.rst
+@@ -2170,6 +2170,12 @@ nexthop_compat_mode - BOOLEAN
+ understands the new API, this sysctl can be disabled to achieve full
+ performance benefits of the new API by disabling the nexthop expansion
+ and extraneous notifications.
++
++ Note that as a backward-compatible mode, dumping of modern features
++ might be incomplete or wrong. For example, resilient groups will not be
++ shown as such, but rather as just a list of next hops. Also weights that
++ do not fit into 8 bits will show incorrectly.
++
+ Default: true (backward compat mode)
+
+ fib_notify_on_flag_change - INTEGER
+diff --git a/Documentation/power/runtime_pm.rst b/Documentation/power/runtime_pm.rst
+index 53d1996460abfc..12f429359a823e 100644
+--- a/Documentation/power/runtime_pm.rst
++++ b/Documentation/power/runtime_pm.rst
+@@ -347,7 +347,9 @@ drivers/base/power/runtime.c and include/linux/pm_runtime.h:
+
+ `int pm_runtime_resume_and_get(struct device *dev);`
+ - run pm_runtime_resume(dev) and if successful, increment the device's
+- usage counter; return the result of pm_runtime_resume
++ usage counter; returns 0 on success (whether or not the device's
++ runtime PM status was already 'active') or the error code from
++ pm_runtime_resume() on failure.
+
+ `int pm_request_idle(struct device *dev);`
+ - submit a request to execute the subsystem-level idle callback for the
+diff --git a/Makefile b/Makefile
+index f158bfe6407ac9..c10952585c14b0 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index ff8c4e1b847ed4..fbed433283c9b9 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1535,6 +1535,7 @@ static u64 __kvm_read_sanitised_id_reg(const struct kvm_vcpu *vcpu,
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MTEX);
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_DF2);
+ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_PFAR);
++ val &= ~ARM64_FEATURE_MASK(ID_AA64PFR1_EL1_MPAM_frac);
+ break;
+ case SYS_ID_AA64PFR2_EL1:
+ /* We only expose FPMR */
+@@ -1724,6 +1725,13 @@ static u64 read_sanitised_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
+
+ val &= ~ID_AA64PFR0_EL1_AMU_MASK;
+
++ /*
++ * MPAM is disabled by default as KVM also needs a set of PARTID to
++ * program the MPAMVPMx_EL2 PARTID remapping registers with. But some
++ * older kernels let the guest see the ID bit.
++ */
++ val &= ~ID_AA64PFR0_EL1_MPAM_MASK;
++
+ return val;
+ }
+
+@@ -1834,6 +1842,42 @@ static int set_id_dfr0_el1(struct kvm_vcpu *vcpu,
+ return set_id_reg(vcpu, rd, val);
+ }
+
++static int set_id_aa64pfr0_el1(struct kvm_vcpu *vcpu,
++ const struct sys_reg_desc *rd, u64 user_val)
++{
++ u64 hw_val = read_sanitised_ftr_reg(SYS_ID_AA64PFR0_EL1);
++ u64 mpam_mask = ID_AA64PFR0_EL1_MPAM_MASK;
++
++ /*
++ * Commit 011e5f5bf529f ("arm64/cpufeature: Add remaining feature bits
++ * in ID_AA64PFR0 register") exposed the MPAM field of AA64PFR0_EL1 to
++ * guests, but didn't add trap handling. KVM doesn't support MPAM and
++ * always returns an UNDEF for these registers. The guest must see 0
++ * for this field.
++ *
++ * But KVM must also accept values from user-space that were provided
++ * by KVM. On CPUs that support MPAM, permit user-space to write
++ * the sanitized value to ID_AA64PFR0_EL1.MPAM, but ignore this field.
++ */
++ if ((hw_val & mpam_mask) == (user_val & mpam_mask))
++ user_val &= ~ID_AA64PFR0_EL1_MPAM_MASK;
++
++ return set_id_reg(vcpu, rd, user_val);
++}
++
++static int set_id_aa64pfr1_el1(struct kvm_vcpu *vcpu,
++ const struct sys_reg_desc *rd, u64 user_val)
++{
++ u64 hw_val = read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1);
++ u64 mpam_mask = ID_AA64PFR1_EL1_MPAM_frac_MASK;
++
++ /* See set_id_aa64pfr0_el1 for comment about MPAM */
++ if ((hw_val & mpam_mask) == (user_val & mpam_mask))
++ user_val &= ~ID_AA64PFR1_EL1_MPAM_frac_MASK;
++
++ return set_id_reg(vcpu, rd, user_val);
++}
++
+ /*
+ * cpufeature ID register user accessors
+ *
+@@ -2377,7 +2421,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+ { SYS_DESC(SYS_ID_AA64PFR0_EL1),
+ .access = access_id_reg,
+ .get_user = get_id_reg,
+- .set_user = set_id_reg,
++ .set_user = set_id_aa64pfr0_el1,
+ .reset = read_sanitised_id_aa64pfr0_el1,
+ .val = ~(ID_AA64PFR0_EL1_AMU |
+ ID_AA64PFR0_EL1_MPAM |
+@@ -2385,7 +2429,12 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+ ID_AA64PFR0_EL1_RAS |
+ ID_AA64PFR0_EL1_AdvSIMD |
+ ID_AA64PFR0_EL1_FP), },
+- ID_WRITABLE(ID_AA64PFR1_EL1, ~(ID_AA64PFR1_EL1_PFAR |
++ { SYS_DESC(SYS_ID_AA64PFR1_EL1),
++ .access = access_id_reg,
++ .get_user = get_id_reg,
++ .set_user = set_id_aa64pfr1_el1,
++ .reset = kvm_read_sanitised_id_reg,
++ .val = ~(ID_AA64PFR1_EL1_PFAR |
+ ID_AA64PFR1_EL1_DF2 |
+ ID_AA64PFR1_EL1_MTEX |
+ ID_AA64PFR1_EL1_THE |
+@@ -2397,7 +2446,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+ ID_AA64PFR1_EL1_RES0 |
+ ID_AA64PFR1_EL1_MPAM_frac |
+ ID_AA64PFR1_EL1_RAS_frac |
+- ID_AA64PFR1_EL1_MTE)),
++ ID_AA64PFR1_EL1_MTE), },
+ ID_WRITABLE(ID_AA64PFR2_EL1, ID_AA64PFR2_EL1_FPMR),
+ ID_UNALLOCATED(4,3),
+ ID_WRITABLE(ID_AA64ZFR0_EL1, ~ID_AA64ZFR0_EL1_RES0),
+diff --git a/arch/riscv/include/asm/kfence.h b/arch/riscv/include/asm/kfence.h
+index 7388edd88986f9..d08bf7fb3aee61 100644
+--- a/arch/riscv/include/asm/kfence.h
++++ b/arch/riscv/include/asm/kfence.h
+@@ -22,7 +22,9 @@ static inline bool kfence_protect_page(unsigned long addr, bool protect)
+ else
+ set_pte(pte, __pte(pte_val(ptep_get(pte)) | _PAGE_PRESENT));
+
+- flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
++ preempt_disable();
++ local_flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
++ preempt_enable();
+
+ return true;
+ }
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index 26c886db4fb3d1..2b3c152d3c91f5 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -227,7 +227,7 @@ static void __init init_resources(void)
+ static void __init parse_dtb(void)
+ {
+ /* Early scan of device tree from init memory */
+- if (early_init_dt_scan(dtb_early_va, __pa(dtb_early_va))) {
++ if (early_init_dt_scan(dtb_early_va, dtb_early_pa)) {
+ const char *name = of_flat_dt_get_machine_name();
+
+ if (name) {
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 0e8c20adcd98df..fc53ce748c8049 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -1566,7 +1566,7 @@ static void __meminit free_pte_table(pte_t *pte_start, pmd_t *pmd)
+ pmd_clear(pmd);
+ }
+
+-static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud)
++static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud, bool is_vmemmap)
+ {
+ struct page *page = pud_page(*pud);
+ struct ptdesc *ptdesc = page_ptdesc(page);
+@@ -1579,7 +1579,8 @@ static void __meminit free_pmd_table(pmd_t *pmd_start, pud_t *pud)
+ return;
+ }
+
+- pagetable_pmd_dtor(ptdesc);
++ if (!is_vmemmap)
++ pagetable_pmd_dtor(ptdesc);
+ if (PageReserved(page))
+ free_reserved_page(page);
+ else
+@@ -1703,7 +1704,7 @@ static void __meminit remove_pud_mapping(pud_t *pud_base, unsigned long addr, un
+ remove_pmd_mapping(pmd_base, addr, next, is_vmemmap, altmap);
+
+ if (pgtable_l4_enabled)
+- free_pmd_table(pmd_base, pudp);
++ free_pmd_table(pmd_base, pudp, is_vmemmap);
+ }
+ }
+
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index fa5ea65de0d0fa..6188650707ab27 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -1468,7 +1468,7 @@ void intel_pmu_pebs_enable(struct perf_event *event)
+ * hence we need to drain when changing said
+ * size.
+ */
+- intel_pmu_drain_large_pebs(cpuc);
++ intel_pmu_drain_pebs_buffer();
+ adaptive_pebs_record_size_update();
+ wrmsrl(MSR_PEBS_DATA_CFG, pebs_data_cfg);
+ cpuc->active_pebs_data_cfg = pebs_data_cfg;
+diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
+index 4a686f0e5dbf6d..2d776635aa539e 100644
+--- a/arch/x86/include/asm/processor.h
++++ b/arch/x86/include/asm/processor.h
+@@ -212,6 +212,8 @@ static inline unsigned long long l1tf_pfn_limit(void)
+ return BIT_ULL(boot_cpu_data.x86_cache_bits - 1 - PAGE_SHIFT);
+ }
+
++void init_cpu_devs(void);
++void get_cpu_vendor(struct cpuinfo_x86 *c);
+ extern void early_cpu_init(void);
+ extern void identify_secondary_cpu(struct cpuinfo_x86 *);
+ extern void print_cpu_info(struct cpuinfo_x86 *);
+diff --git a/arch/x86/include/asm/static_call.h b/arch/x86/include/asm/static_call.h
+index 125c407e2abe6d..41502bd2afd646 100644
+--- a/arch/x86/include/asm/static_call.h
++++ b/arch/x86/include/asm/static_call.h
+@@ -65,4 +65,19 @@
+
+ extern bool __static_call_fixup(void *tramp, u8 op, void *dest);
+
++extern void __static_call_update_early(void *tramp, void *func);
++
++#define static_call_update_early(name, _func) \
++({ \
++ typeof(&STATIC_CALL_TRAMP(name)) __F = (_func); \
++ if (static_call_initialized) { \
++ __static_call_update(&STATIC_CALL_KEY(name), \
++ STATIC_CALL_TRAMP_ADDR(name), __F);\
++ } else { \
++ WRITE_ONCE(STATIC_CALL_KEY(name).func, _func); \
++ __static_call_update_early(STATIC_CALL_TRAMP_ADDR(name),\
++ __F); \
++ } \
++})
++
+ #endif /* _ASM_STATIC_CALL_H */
+diff --git a/arch/x86/include/asm/sync_core.h b/arch/x86/include/asm/sync_core.h
+index ab7382f92aff27..96bda43538ee70 100644
+--- a/arch/x86/include/asm/sync_core.h
++++ b/arch/x86/include/asm/sync_core.h
+@@ -8,7 +8,7 @@
+ #include <asm/special_insns.h>
+
+ #ifdef CONFIG_X86_32
+-static inline void iret_to_self(void)
++static __always_inline void iret_to_self(void)
+ {
+ asm volatile (
+ "pushfl\n\t"
+@@ -19,7 +19,7 @@ static inline void iret_to_self(void)
+ : ASM_CALL_CONSTRAINT : : "memory");
+ }
+ #else
+-static inline void iret_to_self(void)
++static __always_inline void iret_to_self(void)
+ {
+ unsigned int tmp;
+
+@@ -55,7 +55,7 @@ static inline void iret_to_self(void)
+ * Like all of Linux's memory ordering operations, this is a
+ * compiler barrier as well.
+ */
+-static inline void sync_core(void)
++static __always_inline void sync_core(void)
+ {
+ /*
+ * The SERIALIZE instruction is the most straightforward way to
+diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
+index a2dd24947eb85a..97771b9d33af30 100644
+--- a/arch/x86/include/asm/xen/hypercall.h
++++ b/arch/x86/include/asm/xen/hypercall.h
+@@ -39,9 +39,11 @@
+ #include <linux/string.h>
+ #include <linux/types.h>
+ #include <linux/pgtable.h>
++#include <linux/instrumentation.h>
+
+ #include <trace/events/xen.h>
+
++#include <asm/alternative.h>
+ #include <asm/page.h>
+ #include <asm/smap.h>
+ #include <asm/nospec-branch.h>
+@@ -86,11 +88,20 @@ struct xen_dm_op_buf;
+ * there aren't more than 5 arguments...)
+ */
+
+-extern struct { char _entry[32]; } hypercall_page[];
++void xen_hypercall_func(void);
++DECLARE_STATIC_CALL(xen_hypercall, xen_hypercall_func);
+
+-#define __HYPERCALL "call hypercall_page+%c[offset]"
+-#define __HYPERCALL_ENTRY(x) \
+- [offset] "i" (__HYPERVISOR_##x * sizeof(hypercall_page[0]))
++#ifdef MODULE
++#define __ADDRESSABLE_xen_hypercall
++#else
++#define __ADDRESSABLE_xen_hypercall __ADDRESSABLE_ASM_STR(__SCK__xen_hypercall)
++#endif
++
++#define __HYPERCALL \
++ __ADDRESSABLE_xen_hypercall \
++ "call __SCT__xen_hypercall"
++
++#define __HYPERCALL_ENTRY(x) "a" (x)
+
+ #ifdef CONFIG_X86_32
+ #define __HYPERCALL_RETREG "eax"
+@@ -148,7 +159,7 @@ extern struct { char _entry[32]; } hypercall_page[];
+ __HYPERCALL_0ARG(); \
+ asm volatile (__HYPERCALL \
+ : __HYPERCALL_0PARAM \
+- : __HYPERCALL_ENTRY(name) \
++ : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \
+ : __HYPERCALL_CLOBBER0); \
+ (type)__res; \
+ })
+@@ -159,7 +170,7 @@ extern struct { char _entry[32]; } hypercall_page[];
+ __HYPERCALL_1ARG(a1); \
+ asm volatile (__HYPERCALL \
+ : __HYPERCALL_1PARAM \
+- : __HYPERCALL_ENTRY(name) \
++ : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \
+ : __HYPERCALL_CLOBBER1); \
+ (type)__res; \
+ })
+@@ -170,7 +181,7 @@ extern struct { char _entry[32]; } hypercall_page[];
+ __HYPERCALL_2ARG(a1, a2); \
+ asm volatile (__HYPERCALL \
+ : __HYPERCALL_2PARAM \
+- : __HYPERCALL_ENTRY(name) \
++ : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \
+ : __HYPERCALL_CLOBBER2); \
+ (type)__res; \
+ })
+@@ -181,7 +192,7 @@ extern struct { char _entry[32]; } hypercall_page[];
+ __HYPERCALL_3ARG(a1, a2, a3); \
+ asm volatile (__HYPERCALL \
+ : __HYPERCALL_3PARAM \
+- : __HYPERCALL_ENTRY(name) \
++ : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \
+ : __HYPERCALL_CLOBBER3); \
+ (type)__res; \
+ })
+@@ -192,7 +203,7 @@ extern struct { char _entry[32]; } hypercall_page[];
+ __HYPERCALL_4ARG(a1, a2, a3, a4); \
+ asm volatile (__HYPERCALL \
+ : __HYPERCALL_4PARAM \
+- : __HYPERCALL_ENTRY(name) \
++ : __HYPERCALL_ENTRY(__HYPERVISOR_ ## name) \
+ : __HYPERCALL_CLOBBER4); \
+ (type)__res; \
+ })
+@@ -206,12 +217,9 @@ xen_single_call(unsigned int call,
+ __HYPERCALL_DECLS;
+ __HYPERCALL_5ARG(a1, a2, a3, a4, a5);
+
+- if (call >= PAGE_SIZE / sizeof(hypercall_page[0]))
+- return -EINVAL;
+-
+- asm volatile(CALL_NOSPEC
++ asm volatile(__HYPERCALL
+ : __HYPERCALL_5PARAM
+- : [thunk_target] "a" (&hypercall_page[call])
++ : __HYPERCALL_ENTRY(call)
+ : __HYPERCALL_CLOBBER5);
+
+ return (long)__res;
+diff --git a/arch/x86/kernel/callthunks.c b/arch/x86/kernel/callthunks.c
+index 4656474567533b..f17d166078823c 100644
+--- a/arch/x86/kernel/callthunks.c
++++ b/arch/x86/kernel/callthunks.c
+@@ -142,11 +142,6 @@ static bool skip_addr(void *dest)
+ if (dest >= (void *)relocate_kernel &&
+ dest < (void*)relocate_kernel + KEXEC_CONTROL_CODE_MAX_SIZE)
+ return true;
+-#endif
+-#ifdef CONFIG_XEN
+- if (dest >= (void *)hypercall_page &&
+- dest < (void*)hypercall_page + PAGE_SIZE)
+- return true;
+ #endif
+ return false;
+ }
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index b17bcf9b67eed4..f439763f45ae6f 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -868,7 +868,7 @@ static void cpu_detect_tlb(struct cpuinfo_x86 *c)
+ tlb_lld_4m[ENTRIES], tlb_lld_1g[ENTRIES]);
+ }
+
+-static void get_cpu_vendor(struct cpuinfo_x86 *c)
++void get_cpu_vendor(struct cpuinfo_x86 *c)
+ {
+ char *v = c->x86_vendor_id;
+ int i;
+@@ -1652,15 +1652,11 @@ static void __init early_identify_cpu(struct cpuinfo_x86 *c)
+ detect_nopl();
+ }
+
+-void __init early_cpu_init(void)
++void __init init_cpu_devs(void)
+ {
+ const struct cpu_dev *const *cdev;
+ int count = 0;
+
+-#ifdef CONFIG_PROCESSOR_SELECT
+- pr_info("KERNEL supported cpus:\n");
+-#endif
+-
+ for (cdev = __x86_cpu_dev_start; cdev < __x86_cpu_dev_end; cdev++) {
+ const struct cpu_dev *cpudev = *cdev;
+
+@@ -1668,20 +1664,30 @@ void __init early_cpu_init(void)
+ break;
+ cpu_devs[count] = cpudev;
+ count++;
++ }
++}
+
++void __init early_cpu_init(void)
++{
+ #ifdef CONFIG_PROCESSOR_SELECT
+- {
+- unsigned int j;
+-
+- for (j = 0; j < 2; j++) {
+- if (!cpudev->c_ident[j])
+- continue;
+- pr_info(" %s %s\n", cpudev->c_vendor,
+- cpudev->c_ident[j]);
+- }
+- }
++ unsigned int i, j;
++
++ pr_info("KERNEL supported cpus:\n");
+ #endif
++
++ init_cpu_devs();
++
++#ifdef CONFIG_PROCESSOR_SELECT
++ for (i = 0; i < X86_VENDOR_NUM && cpu_devs[i]; i++) {
++ for (j = 0; j < 2; j++) {
++ if (!cpu_devs[i]->c_ident[j])
++ continue;
++ pr_info(" %s %s\n", cpu_devs[i]->c_vendor,
++ cpu_devs[i]->c_ident[j]);
++ }
+ }
++#endif
++
+ early_identify_cpu(&boot_cpu_data);
+ }
+
+diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
+index 4eefaac64c6cba..9eed0c144dad51 100644
+--- a/arch/x86/kernel/static_call.c
++++ b/arch/x86/kernel/static_call.c
+@@ -172,6 +172,15 @@ void arch_static_call_transform(void *site, void *tramp, void *func, bool tail)
+ }
+ EXPORT_SYMBOL_GPL(arch_static_call_transform);
+
++noinstr void __static_call_update_early(void *tramp, void *func)
++{
++ BUG_ON(system_state != SYSTEM_BOOTING);
++ BUG_ON(!early_boot_irqs_disabled);
++ BUG_ON(static_call_initialized);
++ __text_gen_insn(tramp, JMP32_INSN_OPCODE, tramp, func, JMP32_INSN_SIZE);
++ sync_core();
++}
++
+ #ifdef CONFIG_MITIGATION_RETHUNK
+ /*
+ * This is called by apply_returns() to fix up static call trampolines,
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index 84e5adbd0925cb..b4f3784f27e956 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -2,6 +2,7 @@
+
+ #include <linux/console.h>
+ #include <linux/cpu.h>
++#include <linux/instrumentation.h>
+ #include <linux/kexec.h>
+ #include <linux/memblock.h>
+ #include <linux/slab.h>
+@@ -21,7 +22,8 @@
+
+ #include "xen-ops.h"
+
+-EXPORT_SYMBOL_GPL(hypercall_page);
++DEFINE_STATIC_CALL(xen_hypercall, xen_hypercall_hvm);
++EXPORT_STATIC_CALL_TRAMP(xen_hypercall);
+
+ /*
+ * Pointer to the xen_vcpu_info structure or
+@@ -68,6 +70,66 @@ EXPORT_SYMBOL(xen_start_flags);
+ */
+ struct shared_info *HYPERVISOR_shared_info = &xen_dummy_shared_info;
+
++static __ref void xen_get_vendor(void)
++{
++ init_cpu_devs();
++ cpu_detect(&boot_cpu_data);
++ get_cpu_vendor(&boot_cpu_data);
++}
++
++void xen_hypercall_setfunc(void)
++{
++ if (static_call_query(xen_hypercall) != xen_hypercall_hvm)
++ return;
++
++ if ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
++ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON))
++ static_call_update(xen_hypercall, xen_hypercall_amd);
++ else
++ static_call_update(xen_hypercall, xen_hypercall_intel);
++}
++
++/*
++ * Evaluate processor vendor in order to select the correct hypercall
++ * function for HVM/PVH guests.
++ * Might be called very early in boot before vendor has been set by
++ * early_cpu_init().
++ */
++noinstr void *__xen_hypercall_setfunc(void)
++{
++ void (*func)(void);
++
++ /*
++ * Xen is supported only on CPUs with CPUID, so testing for
++ * X86_FEATURE_CPUID is a test for early_cpu_init() having been
++ * run.
++ *
++ * Note that __xen_hypercall_setfunc() is noinstr only due to a nasty
++ * dependency chain: it is being called via the xen_hypercall static
++ * call when running as a PVH or HVM guest. Hypercalls need to be
++ * noinstr due to PV guests using hypercalls in noinstr code. So we
++ * the PV guest requirement is not of interest here (xen_get_vendor()
++ * calls noinstr functions, and static_call_update_early() might do
++ * so, too).
++ */
++ instrumentation_begin();
++
++ if (!boot_cpu_has(X86_FEATURE_CPUID))
++ xen_get_vendor();
++
++ if ((boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
++ boot_cpu_data.x86_vendor == X86_VENDOR_HYGON))
++ func = xen_hypercall_amd;
++ else
++ func = xen_hypercall_intel;
++
++ static_call_update_early(xen_hypercall, func);
++
++ instrumentation_end();
++
++ return func;
++}
++
+ static int xen_cpu_up_online(unsigned int cpu)
+ {
+ xen_init_lock_cpu(cpu);
+diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
+index 24d2957a4726d8..fe57ff85d004ba 100644
+--- a/arch/x86/xen/enlighten_hvm.c
++++ b/arch/x86/xen/enlighten_hvm.c
+@@ -106,15 +106,8 @@ static void __init init_hvm_pv_info(void)
+ /* PVH set up hypercall page in xen_prepare_pvh(). */
+ if (xen_pvh_domain())
+ pv_info.name = "Xen PVH";
+- else {
+- u64 pfn;
+- uint32_t msr;
+-
++ else
+ pv_info.name = "Xen HVM";
+- msr = cpuid_ebx(base + 2);
+- pfn = __pa(hypercall_page);
+- wrmsr_safe(msr, (u32)pfn, (u32)(pfn >> 32));
+- }
+
+ xen_setup_features();
+
+@@ -300,6 +293,10 @@ static uint32_t __init xen_platform_hvm(void)
+ if (xen_pv_domain())
+ return 0;
+
++ /* Set correct hypercall function. */
++ if (xen_domain())
++ xen_hypercall_setfunc();
++
+ if (xen_pvh_domain() && nopv) {
+ /* Guest booting via the Xen-PVH boot entry goes here */
+ pr_info("\"nopv\" parameter is ignored in PVH guest\n");
+diff --git a/arch/x86/xen/enlighten_pv.c b/arch/x86/xen/enlighten_pv.c
+index d6818c6cafda16..a8eb7e0c473cf6 100644
+--- a/arch/x86/xen/enlighten_pv.c
++++ b/arch/x86/xen/enlighten_pv.c
+@@ -1341,6 +1341,9 @@ asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
+
+ xen_domain_type = XEN_PV_DOMAIN;
+ xen_start_flags = xen_start_info->flags;
++ /* Interrupts are guaranteed to be off initially. */
++ early_boot_irqs_disabled = true;
++ static_call_update_early(xen_hypercall, xen_hypercall_pv);
+
+ xen_setup_features();
+
+@@ -1431,7 +1434,6 @@ asmlinkage __visible void __init xen_start_kernel(struct start_info *si)
+ WARN_ON(xen_cpuhp_setup(xen_cpu_up_prepare_pv, xen_cpu_dead_pv));
+
+ local_irq_disable();
+- early_boot_irqs_disabled = true;
+
+ xen_raw_console_write("mapping kernel into physical memory\n");
+ xen_setup_kernel_pagetable((pgd_t *)xen_start_info->pt_base,
+diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
+index bf68c329fc013e..0e3d930bcb89e8 100644
+--- a/arch/x86/xen/enlighten_pvh.c
++++ b/arch/x86/xen/enlighten_pvh.c
+@@ -129,17 +129,10 @@ static void __init pvh_arch_setup(void)
+
+ void __init xen_pvh_init(struct boot_params *boot_params)
+ {
+- u32 msr;
+- u64 pfn;
+-
+ xen_pvh = 1;
+ xen_domain_type = XEN_HVM_DOMAIN;
+ xen_start_flags = pvh_start_info.flags;
+
+- msr = cpuid_ebx(xen_cpuid_base() + 2);
+- pfn = __pa(hypercall_page);
+- wrmsr_safe(msr, (u32)pfn, (u32)(pfn >> 32));
+-
+ x86_init.oem.arch_setup = pvh_arch_setup;
+ x86_init.oem.banner = xen_banner;
+
+diff --git a/arch/x86/xen/xen-asm.S b/arch/x86/xen/xen-asm.S
+index 83189cf5cdce93..b518f36d1ca2e7 100644
+--- a/arch/x86/xen/xen-asm.S
++++ b/arch/x86/xen/xen-asm.S
+@@ -20,9 +20,32 @@
+
+ #include <linux/init.h>
+ #include <linux/linkage.h>
++#include <linux/objtool.h>
+ #include <../entry/calling.h>
+
+ .pushsection .noinstr.text, "ax"
++/*
++ * PV hypercall interface to the hypervisor.
++ *
++ * Called via inline asm(), so better preserve %rcx and %r11.
++ *
++ * Input:
++ * %eax: hypercall number
++ * %rdi, %rsi, %rdx, %r10, %r8: args 1..5 for the hypercall
++ * Output: %rax
++ */
++SYM_FUNC_START(xen_hypercall_pv)
++ ANNOTATE_NOENDBR
++ push %rcx
++ push %r11
++ UNWIND_HINT_SAVE
++ syscall
++ UNWIND_HINT_RESTORE
++ pop %r11
++ pop %rcx
++ RET
++SYM_FUNC_END(xen_hypercall_pv)
++
+ /*
+ * Disabling events is simply a matter of making the event mask
+ * non-zero.
+@@ -176,7 +199,6 @@ SYM_CODE_START(xen_early_idt_handler_array)
+ SYM_CODE_END(xen_early_idt_handler_array)
+ __FINIT
+
+-hypercall_iret = hypercall_page + __HYPERVISOR_iret * 32
+ /*
+ * Xen64 iret frame:
+ *
+@@ -186,17 +208,28 @@ hypercall_iret = hypercall_page + __HYPERVISOR_iret * 32
+ * cs
+ * rip <-- standard iret frame
+ *
+- * flags
++ * flags <-- xen_iret must push from here on
+ *
+- * rcx }
+- * r11 }<-- pushed by hypercall page
+- * rsp->rax }
++ * rcx
++ * r11
++ * rsp->rax
+ */
++.macro xen_hypercall_iret
++ pushq $0 /* Flags */
++ push %rcx
++ push %r11
++ push %rax
++ mov $__HYPERVISOR_iret, %eax
++ syscall /* Do the IRET. */
++#ifdef CONFIG_MITIGATION_SLS
++ int3
++#endif
++.endm
++
+ SYM_CODE_START(xen_iret)
+ UNWIND_HINT_UNDEFINED
+ ANNOTATE_NOENDBR
+- pushq $0
+- jmp hypercall_iret
++ xen_hypercall_iret
+ SYM_CODE_END(xen_iret)
+
+ /*
+@@ -301,8 +334,7 @@ SYM_CODE_START(xen_entry_SYSENTER_compat)
+ ENDBR
+ lea 16(%rsp), %rsp /* strip %rcx, %r11 */
+ mov $-ENOSYS, %rax
+- pushq $0
+- jmp hypercall_iret
++ xen_hypercall_iret
+ SYM_CODE_END(xen_entry_SYSENTER_compat)
+ SYM_CODE_END(xen_entry_SYSCALL_compat)
+
+diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
+index 758bcd47b72d32..721a57700a3b05 100644
+--- a/arch/x86/xen/xen-head.S
++++ b/arch/x86/xen/xen-head.S
+@@ -6,9 +6,11 @@
+
+ #include <linux/elfnote.h>
+ #include <linux/init.h>
++#include <linux/instrumentation.h>
+
+ #include <asm/boot.h>
+ #include <asm/asm.h>
++#include <asm/frame.h>
+ #include <asm/msr.h>
+ #include <asm/page_types.h>
+ #include <asm/percpu.h>
+@@ -20,28 +22,6 @@
+ #include <xen/interface/xen-mca.h>
+ #include <asm/xen/interface.h>
+
+-.pushsection .noinstr.text, "ax"
+- .balign PAGE_SIZE
+-SYM_CODE_START(hypercall_page)
+- .rept (PAGE_SIZE / 32)
+- UNWIND_HINT_FUNC
+- ANNOTATE_NOENDBR
+- ANNOTATE_UNRET_SAFE
+- ret
+- /*
+- * Xen will write the hypercall page, and sort out ENDBR.
+- */
+- .skip 31, 0xcc
+- .endr
+-
+-#define HYPERCALL(n) \
+- .equ xen_hypercall_##n, hypercall_page + __HYPERVISOR_##n * 32; \
+- .type xen_hypercall_##n, @function; .size xen_hypercall_##n, 32
+-#include <asm/xen-hypercalls.h>
+-#undef HYPERCALL
+-SYM_CODE_END(hypercall_page)
+-.popsection
+-
+ #ifdef CONFIG_XEN_PV
+ __INIT
+ SYM_CODE_START(startup_xen)
+@@ -87,6 +67,87 @@ SYM_CODE_END(xen_cpu_bringup_again)
+ #endif
+ #endif
+
++ .pushsection .noinstr.text, "ax"
++/*
++ * Xen hypercall interface to the hypervisor.
++ *
++ * Input:
++ * %eax: hypercall number
++ * 32-bit:
++ * %ebx, %ecx, %edx, %esi, %edi: args 1..5 for the hypercall
++ * 64-bit:
++ * %rdi, %rsi, %rdx, %r10, %r8: args 1..5 for the hypercall
++ * Output: %[er]ax
++ */
++SYM_FUNC_START(xen_hypercall_hvm)
++ ENDBR
++ FRAME_BEGIN
++ /* Save all relevant registers (caller save and arguments). */
++#ifdef CONFIG_X86_32
++ push %eax
++ push %ebx
++ push %ecx
++ push %edx
++ push %esi
++ push %edi
++#else
++ push %rax
++ push %rcx
++ push %rdx
++ push %rdi
++ push %rsi
++ push %r11
++ push %r10
++ push %r9
++ push %r8
++#ifdef CONFIG_FRAME_POINTER
++ pushq $0 /* Dummy push for stack alignment. */
++#endif
++#endif
++ /* Set the vendor specific function. */
++ call __xen_hypercall_setfunc
++ /* Set ZF = 1 if AMD, restore saved registers. */
++#ifdef CONFIG_X86_32
++ lea xen_hypercall_amd, %ebx
++ cmp %eax, %ebx
++ pop %edi
++ pop %esi
++ pop %edx
++ pop %ecx
++ pop %ebx
++ pop %eax
++#else
++ lea xen_hypercall_amd(%rip), %rbx
++ cmp %rax, %rbx
++#ifdef CONFIG_FRAME_POINTER
++ pop %rax /* Dummy pop. */
++#endif
++ pop %r8
++ pop %r9
++ pop %r10
++ pop %r11
++ pop %rsi
++ pop %rdi
++ pop %rdx
++ pop %rcx
++ pop %rax
++#endif
++ /* Use correct hypercall function. */
++ jz xen_hypercall_amd
++ jmp xen_hypercall_intel
++SYM_FUNC_END(xen_hypercall_hvm)
++
++SYM_FUNC_START(xen_hypercall_amd)
++ vmmcall
++ RET
++SYM_FUNC_END(xen_hypercall_amd)
++
++SYM_FUNC_START(xen_hypercall_intel)
++ vmcall
++ RET
++SYM_FUNC_END(xen_hypercall_intel)
++ .popsection
++
+ ELFNOTE(Xen, XEN_ELFNOTE_GUEST_OS, .asciz "linux")
+ ELFNOTE(Xen, XEN_ELFNOTE_GUEST_VERSION, .asciz "2.6")
+ ELFNOTE(Xen, XEN_ELFNOTE_XEN_VERSION, .asciz "xen-3.0")
+@@ -115,7 +176,6 @@ SYM_CODE_END(xen_cpu_bringup_again)
+ #else
+ # define FEATURES_DOM0 0
+ #endif
+- ELFNOTE(Xen, XEN_ELFNOTE_HYPERCALL_PAGE, _ASM_PTR hypercall_page)
+ ELFNOTE(Xen, XEN_ELFNOTE_SUPPORTED_FEATURES,
+ .long FEATURES_PV | FEATURES_PVH | FEATURES_DOM0)
+ ELFNOTE(Xen, XEN_ELFNOTE_LOADER, .asciz "generic")
+diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
+index e1b782e823e6b4..63c13a2ccf556a 100644
+--- a/arch/x86/xen/xen-ops.h
++++ b/arch/x86/xen/xen-ops.h
+@@ -326,4 +326,13 @@ static inline void xen_smp_intr_free_pv(unsigned int cpu) {}
+ static inline void xen_smp_count_cpus(void) { }
+ #endif /* CONFIG_SMP */
+
++#ifdef CONFIG_XEN_PV
++void xen_hypercall_pv(void);
++#endif
++void xen_hypercall_hvm(void);
++void xen_hypercall_amd(void);
++void xen_hypercall_intel(void);
++void xen_hypercall_setfunc(void);
++void *__xen_hypercall_setfunc(void);
++
+ #endif /* XEN_OPS_H */
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index e68c725cf8d975..45a395862fbc88 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1324,10 +1324,14 @@ void blkcg_unpin_online(struct cgroup_subsys_state *blkcg_css)
+ struct blkcg *blkcg = css_to_blkcg(blkcg_css);
+
+ do {
++ struct blkcg *parent;
++
+ if (!refcount_dec_and_test(&blkcg->online_pin))
+ break;
++
++ parent = blkcg_parent(blkcg);
+ blkcg_destroy_blkgs(blkcg);
+- blkcg = blkcg_parent(blkcg);
++ blkcg = parent;
+ } while (blkcg);
+ }
+
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 384aa15e8260bd..a5894ec9696e7e 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1098,7 +1098,14 @@ static void __propagate_weights(struct ioc_gq *iocg, u32 active, u32 inuse,
+ inuse = DIV64_U64_ROUND_UP(active * iocg->child_inuse_sum,
+ iocg->child_active_sum);
+ } else {
+- inuse = clamp_t(u32, inuse, 1, active);
++ /*
++ * It may be tempting to turn this into a clamp expression with
++ * a lower limit of 1 but active may be 0, which cannot be used
++ * as an upper limit in that situation. This expression allows
++ * active to clamp inuse unless it is 0, in which case inuse
++ * becomes 1.
++ */
++ inuse = min(inuse, active) ?: 1;
+ }
+
+ iocg->last_inuse = iocg->inuse;
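For illustration, a minimal standalone sketch of the expression used above
(iocost_clamp_inuse is a hypothetical helper, not part of the patch). The
GNU "?:" extension returns its left operand when that operand is non-zero,
which is what makes min(inuse, active) ?: 1 safe when active is 0, whereas
clamp_t(u32, inuse, 1, 0) would have an inverted range.

	/* Sketch only: mirrors the logic of the hunk above. */
	static unsigned int iocost_clamp_inuse(unsigned int inuse,
					       unsigned int active)
	{
		unsigned int v = inuse < active ? inuse : active;

		return v ? v : 1;	/* same as "min(inuse, active) ?: 1" */
	}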
+diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
+index 156e9bb07abf1a..cd5ea6eaa76b09 100644
+--- a/block/blk-mq-sysfs.c
++++ b/block/blk-mq-sysfs.c
+@@ -275,15 +275,13 @@ void blk_mq_sysfs_unregister_hctxs(struct request_queue *q)
+ struct blk_mq_hw_ctx *hctx;
+ unsigned long i;
+
+- mutex_lock(&q->sysfs_dir_lock);
++ lockdep_assert_held(&q->sysfs_dir_lock);
++
+ if (!q->mq_sysfs_init_done)
+- goto unlock;
++ return;
+
+ queue_for_each_hw_ctx(q, hctx, i)
+ blk_mq_unregister_hctx(hctx);
+-
+-unlock:
+- mutex_unlock(&q->sysfs_dir_lock);
+ }
+
+ int blk_mq_sysfs_register_hctxs(struct request_queue *q)
+@@ -292,9 +290,10 @@ int blk_mq_sysfs_register_hctxs(struct request_queue *q)
+ unsigned long i;
+ int ret = 0;
+
+- mutex_lock(&q->sysfs_dir_lock);
++ lockdep_assert_held(&q->sysfs_dir_lock);
++
+ if (!q->mq_sysfs_init_done)
+- goto unlock;
++ return ret;
+
+ queue_for_each_hw_ctx(q, hctx, i) {
+ ret = blk_mq_register_hctx(hctx);
+@@ -302,8 +301,5 @@ int blk_mq_sysfs_register_hctxs(struct request_queue *q)
+ break;
+ }
+
+-unlock:
+- mutex_unlock(&q->sysfs_dir_lock);
+-
+ return ret;
+ }
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index b4fba7b398e5bc..cc1b3202383840 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -43,6 +43,7 @@
+
+ static DEFINE_PER_CPU(struct llist_head, blk_cpu_done);
+ static DEFINE_PER_CPU(call_single_data_t, blk_cpu_csd);
++static DEFINE_MUTEX(blk_mq_cpuhp_lock);
+
+ static void blk_mq_insert_request(struct request *rq, blk_insert_t flags);
+ static void blk_mq_request_bypass_insert(struct request *rq,
+@@ -3740,13 +3741,91 @@ static int blk_mq_hctx_notify_dead(unsigned int cpu, struct hlist_node *node)
+ return 0;
+ }
+
+-static void blk_mq_remove_cpuhp(struct blk_mq_hw_ctx *hctx)
++static void __blk_mq_remove_cpuhp(struct blk_mq_hw_ctx *hctx)
+ {
+- if (!(hctx->flags & BLK_MQ_F_STACKING))
++ lockdep_assert_held(&blk_mq_cpuhp_lock);
++
++ if (!(hctx->flags & BLK_MQ_F_STACKING) &&
++ !hlist_unhashed(&hctx->cpuhp_online)) {
+ cpuhp_state_remove_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
+ &hctx->cpuhp_online);
+- cpuhp_state_remove_instance_nocalls(CPUHP_BLK_MQ_DEAD,
+- &hctx->cpuhp_dead);
++ INIT_HLIST_NODE(&hctx->cpuhp_online);
++ }
++
++ if (!hlist_unhashed(&hctx->cpuhp_dead)) {
++ cpuhp_state_remove_instance_nocalls(CPUHP_BLK_MQ_DEAD,
++ &hctx->cpuhp_dead);
++ INIT_HLIST_NODE(&hctx->cpuhp_dead);
++ }
++}
++
++static void blk_mq_remove_cpuhp(struct blk_mq_hw_ctx *hctx)
++{
++ mutex_lock(&blk_mq_cpuhp_lock);
++ __blk_mq_remove_cpuhp(hctx);
++ mutex_unlock(&blk_mq_cpuhp_lock);
++}
++
++static void __blk_mq_add_cpuhp(struct blk_mq_hw_ctx *hctx)
++{
++ lockdep_assert_held(&blk_mq_cpuhp_lock);
++
++ if (!(hctx->flags & BLK_MQ_F_STACKING) &&
++ hlist_unhashed(&hctx->cpuhp_online))
++ cpuhp_state_add_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
++ &hctx->cpuhp_online);
++
++ if (hlist_unhashed(&hctx->cpuhp_dead))
++ cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD,
++ &hctx->cpuhp_dead);
++}
++
++static void __blk_mq_remove_cpuhp_list(struct list_head *head)
++{
++ struct blk_mq_hw_ctx *hctx;
++
++ lockdep_assert_held(&blk_mq_cpuhp_lock);
++
++ list_for_each_entry(hctx, head, hctx_list)
++ __blk_mq_remove_cpuhp(hctx);
++}
++
++/*
++ * Unregister cpuhp callbacks from exited hw queues
++ *
++ * Safe to call if this `request_queue` is live
++ */
++static void blk_mq_remove_hw_queues_cpuhp(struct request_queue *q)
++{
++ LIST_HEAD(hctx_list);
++
++ spin_lock(&q->unused_hctx_lock);
++ list_splice_init(&q->unused_hctx_list, &hctx_list);
++ spin_unlock(&q->unused_hctx_lock);
++
++ mutex_lock(&blk_mq_cpuhp_lock);
++ __blk_mq_remove_cpuhp_list(&hctx_list);
++ mutex_unlock(&blk_mq_cpuhp_lock);
++
++ spin_lock(&q->unused_hctx_lock);
++ list_splice(&hctx_list, &q->unused_hctx_list);
++ spin_unlock(&q->unused_hctx_lock);
++}
++
++/*
++ * Register cpuhp callbacks from all hw queues
++ *
++ * Safe to call if this `request_queue` is live
++ */
++static void blk_mq_add_hw_queues_cpuhp(struct request_queue *q)
++{
++ struct blk_mq_hw_ctx *hctx;
++ unsigned long i;
++
++ mutex_lock(&blk_mq_cpuhp_lock);
++ queue_for_each_hw_ctx(q, hctx, i)
++ __blk_mq_add_cpuhp(hctx);
++ mutex_unlock(&blk_mq_cpuhp_lock);
+ }
+
+ /*
+@@ -3797,8 +3876,6 @@ static void blk_mq_exit_hctx(struct request_queue *q,
+ if (set->ops->exit_hctx)
+ set->ops->exit_hctx(hctx, hctx_idx);
+
+- blk_mq_remove_cpuhp(hctx);
+-
+ xa_erase(&q->hctx_table, hctx_idx);
+
+ spin_lock(&q->unused_hctx_lock);
+@@ -3815,6 +3892,7 @@ static void blk_mq_exit_hw_queues(struct request_queue *q,
+ queue_for_each_hw_ctx(q, hctx, i) {
+ if (i == nr_queue)
+ break;
++ blk_mq_remove_cpuhp(hctx);
+ blk_mq_exit_hctx(q, set, hctx, i);
+ }
+ }
+@@ -3878,6 +3956,8 @@ blk_mq_alloc_hctx(struct request_queue *q, struct blk_mq_tag_set *set,
+ INIT_DELAYED_WORK(&hctx->run_work, blk_mq_run_work_fn);
+ spin_lock_init(&hctx->lock);
+ INIT_LIST_HEAD(&hctx->dispatch);
++ INIT_HLIST_NODE(&hctx->cpuhp_dead);
++ INIT_HLIST_NODE(&hctx->cpuhp_online);
+ hctx->queue = q;
+ hctx->flags = set->flags & ~BLK_MQ_F_TAG_QUEUE_SHARED;
+
+@@ -4382,7 +4462,8 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
+ unsigned long i, j;
+
+ /* protect against switching io scheduler */
+- mutex_lock(&q->sysfs_lock);
++ lockdep_assert_held(&q->sysfs_lock);
++
+ for (i = 0; i < set->nr_hw_queues; i++) {
+ int old_node;
+ int node = blk_mq_get_hctx_node(set, i);
+@@ -4415,7 +4496,12 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
+
+ xa_for_each_start(&q->hctx_table, j, hctx, j)
+ blk_mq_exit_hctx(q, set, hctx, j);
+- mutex_unlock(&q->sysfs_lock);
++
++ /* unregister cpuhp callbacks for exited hctxs */
++ blk_mq_remove_hw_queues_cpuhp(q);
++
++ /* register cpuhp for new initialized hctxs */
++ blk_mq_add_hw_queues_cpuhp(q);
+ }
+
+ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+@@ -4441,10 +4527,14 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+
+ xa_init(&q->hctx_table);
+
++ mutex_lock(&q->sysfs_lock);
++
+ blk_mq_realloc_hw_ctxs(set, q);
+ if (!q->nr_hw_queues)
+ goto err_hctxs;
+
++ mutex_unlock(&q->sysfs_lock);
++
+ INIT_WORK(&q->timeout_work, blk_mq_timeout_work);
+ blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ);
+
+@@ -4463,6 +4553,7 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+ return 0;
+
+ err_hctxs:
++ mutex_unlock(&q->sysfs_lock);
+ blk_mq_release(q);
+ err_exit:
+ q->mq_ops = NULL;
+@@ -4843,12 +4934,12 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
+ return false;
+
+ /* q->elevator needs protection from ->sysfs_lock */
+- mutex_lock(&q->sysfs_lock);
++ lockdep_assert_held(&q->sysfs_lock);
+
+ /* the check has to be done with holding sysfs_lock */
+ if (!q->elevator) {
+ kfree(qe);
+- goto unlock;
++ goto out;
+ }
+
+ INIT_LIST_HEAD(&qe->node);
+@@ -4858,9 +4949,7 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
+ __elevator_get(qe->type);
+ list_add(&qe->node, head);
+ elevator_disable(q);
+-unlock:
+- mutex_unlock(&q->sysfs_lock);
+-
++out:
+ return true;
+ }
+
+@@ -4889,11 +4978,9 @@ static void blk_mq_elv_switch_back(struct list_head *head,
+ list_del(&qe->node);
+ kfree(qe);
+
+- mutex_lock(&q->sysfs_lock);
+ elevator_switch(q, t);
+ /* drop the reference acquired in blk_mq_elv_switch_none */
+ elevator_put(t);
+- mutex_unlock(&q->sysfs_lock);
+ }
+
+ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+@@ -4913,8 +5000,11 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ if (set->nr_maps == 1 && nr_hw_queues == set->nr_hw_queues)
+ return;
+
+- list_for_each_entry(q, &set->tag_list, tag_set_list)
++ list_for_each_entry(q, &set->tag_list, tag_set_list) {
++ mutex_lock(&q->sysfs_dir_lock);
++ mutex_lock(&q->sysfs_lock);
+ blk_mq_freeze_queue(q);
++ }
+ /*
+ * Switch IO scheduler to 'none', cleaning up the data associated
+ * with the previous scheduler. We will switch back once we are done
+@@ -4970,8 +5060,11 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ list_for_each_entry(q, &set->tag_list, tag_set_list)
+ blk_mq_elv_switch_back(&head, q);
+
+- list_for_each_entry(q, &set->tag_list, tag_set_list)
++ list_for_each_entry(q, &set->tag_list, tag_set_list) {
+ blk_mq_unfreeze_queue(q);
++ mutex_unlock(&q->sysfs_lock);
++ mutex_unlock(&q->sysfs_dir_lock);
++ }
+
+ /* Free the excess tags when nr_hw_queues shrink. */
+ for (i = set->nr_hw_queues; i < prev_nr_hw_queues; i++)
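As an aside, a minimal sketch of the idiom the new cpuhp helpers above rely
on (struct item and its helpers are hypothetical names): re-initializing an
hlist node right after removal lets hlist_unhashed() distinguish
"registered" from "unregistered", so the add and remove paths stay
idempotent.

	#include <linux/list.h>

	struct item {
		struct hlist_node node;	/* INIT_HLIST_NODE() at allocation */
	};

	static void item_register(struct item *it, struct hlist_head *head)
	{
		if (hlist_unhashed(&it->node))	/* not yet registered */
			hlist_add_head(&it->node, head);
	}

	static void item_unregister(struct item *it)
	{
		if (!hlist_unhashed(&it->node)) {
			hlist_del(&it->node);
			INIT_HLIST_NODE(&it->node);	/* mark unregistered */
		}
	}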
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 207577145c54f4..42c2cb97d778af 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -690,11 +690,11 @@ queue_attr_store(struct kobject *kobj, struct attribute *attr,
+ return res;
+ }
+
+- blk_mq_freeze_queue(q);
+ mutex_lock(&q->sysfs_lock);
++ blk_mq_freeze_queue(q);
+ res = entry->store(disk, page, length);
+- mutex_unlock(&q->sysfs_lock);
+ blk_mq_unfreeze_queue(q);
++ mutex_unlock(&q->sysfs_lock);
+ return res;
+ }
+
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 0b1184176ce77a..767bcbce74facb 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -18,7 +18,7 @@
+ #include <linux/vmalloc.h>
+ #include <linux/sched/mm.h>
+ #include <linux/spinlock.h>
+-#include <linux/atomic.h>
++#include <linux/refcount.h>
+ #include <linux/mempool.h>
+
+ #include "blk.h"
+@@ -41,7 +41,6 @@ static const char *const zone_cond_name[] = {
+ /*
+ * Per-zone write plug.
+ * @node: hlist_node structure for managing the plug using a hash table.
+- * @link: To list the plug in the zone write plug error list of the disk.
+ * @ref: Zone write plug reference counter. A zone write plug reference is
+ * always at least 1 when the plug is hashed in the disk plug hash table.
+ * The reference is incremented whenever a new BIO needing plugging is
+@@ -63,8 +62,7 @@ static const char *const zone_cond_name[] = {
+ */
+ struct blk_zone_wplug {
+ struct hlist_node node;
+- struct list_head link;
+- atomic_t ref;
++ refcount_t ref;
+ spinlock_t lock;
+ unsigned int flags;
+ unsigned int zone_no;
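For reference, the atomic_t to refcount_t conversion in this file maps the
APIs one to one while adding saturation on overflow and warnings on misuse
(a rough sketch, not from the patch; use() and free_plug() are
placeholders):

	refcount_set(&zwplug->ref, 2);			/* was atomic_set()          */
	refcount_inc(&zwplug->ref);			/* was atomic_inc()          */
	if (refcount_inc_not_zero(&zwplug->ref))	/* was atomic_inc_not_zero() */
		use(zwplug);
	if (refcount_dec_and_test(&zwplug->ref))	/* was atomic_dec_and_test() */
		free_plug(zwplug);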
+@@ -80,8 +78,8 @@ struct blk_zone_wplug {
+ * - BLK_ZONE_WPLUG_PLUGGED: Indicates that the zone write plug is plugged,
+ * that is, that write BIOs are being throttled due to a write BIO already
+ * being executed or the zone write plug bio list is not empty.
+- * - BLK_ZONE_WPLUG_ERROR: Indicates that a write error happened which will be
+- * recovered with a report zone to update the zone write pointer offset.
++ * - BLK_ZONE_WPLUG_NEED_WP_UPDATE: Indicates that we lost track of a zone
++ * write pointer offset and need to update it.
+ * - BLK_ZONE_WPLUG_UNHASHED: Indicates that the zone write plug was removed
+ * from the disk hash table and that the initial reference to the zone
+ * write plug set when the plug was first added to the hash table has been
+@@ -91,11 +89,9 @@ struct blk_zone_wplug {
+ * freed once all remaining references from BIOs or functions are dropped.
+ */
+ #define BLK_ZONE_WPLUG_PLUGGED (1U << 0)
+-#define BLK_ZONE_WPLUG_ERROR (1U << 1)
++#define BLK_ZONE_WPLUG_NEED_WP_UPDATE (1U << 1)
+ #define BLK_ZONE_WPLUG_UNHASHED (1U << 2)
+
+-#define BLK_ZONE_WPLUG_BUSY (BLK_ZONE_WPLUG_PLUGGED | BLK_ZONE_WPLUG_ERROR)
+-
+ /**
+ * blk_zone_cond_str - Return string XXX in BLK_ZONE_COND_XXX.
+ * @zone_cond: BLK_ZONE_COND_XXX.
+@@ -115,6 +111,30 @@ const char *blk_zone_cond_str(enum blk_zone_cond zone_cond)
+ }
+ EXPORT_SYMBOL_GPL(blk_zone_cond_str);
+
++struct disk_report_zones_cb_args {
++ struct gendisk *disk;
++ report_zones_cb user_cb;
++ void *user_data;
++};
++
++static void disk_zone_wplug_sync_wp_offset(struct gendisk *disk,
++ struct blk_zone *zone);
++
++static int disk_report_zones_cb(struct blk_zone *zone, unsigned int idx,
++ void *data)
++{
++ struct disk_report_zones_cb_args *args = data;
++ struct gendisk *disk = args->disk;
++
++ if (disk->zone_wplugs_hash)
++ disk_zone_wplug_sync_wp_offset(disk, zone);
++
++ if (!args->user_cb)
++ return 0;
++
++ return args->user_cb(zone, idx, args->user_data);
++}
++
+ /**
+ * blkdev_report_zones - Get zones information
+ * @bdev: Target block device
+@@ -139,6 +159,11 @@ int blkdev_report_zones(struct block_device *bdev, sector_t sector,
+ {
+ struct gendisk *disk = bdev->bd_disk;
+ sector_t capacity = get_capacity(disk);
++ struct disk_report_zones_cb_args args = {
++ .disk = disk,
++ .user_cb = cb,
++ .user_data = data,
++ };
+
+ if (!bdev_is_zoned(bdev) || WARN_ON_ONCE(!disk->fops->report_zones))
+ return -EOPNOTSUPP;
+@@ -146,7 +171,8 @@ int blkdev_report_zones(struct block_device *bdev, sector_t sector,
+ if (!nr_zones || sector >= capacity)
+ return 0;
+
+- return disk->fops->report_zones(disk, sector, nr_zones, cb, data);
++ return disk->fops->report_zones(disk, sector, nr_zones,
++ disk_report_zones_cb, &args);
+ }
+ EXPORT_SYMBOL_GPL(blkdev_report_zones);
+
+@@ -417,7 +443,7 @@ static struct blk_zone_wplug *disk_get_zone_wplug(struct gendisk *disk,
+
+ hlist_for_each_entry_rcu(zwplug, &disk->zone_wplugs_hash[idx], node) {
+ if (zwplug->zone_no == zno &&
+- atomic_inc_not_zero(&zwplug->ref)) {
++ refcount_inc_not_zero(&zwplug->ref)) {
+ rcu_read_unlock();
+ return zwplug;
+ }
+@@ -438,9 +464,9 @@ static void disk_free_zone_wplug_rcu(struct rcu_head *rcu_head)
+
+ static inline void disk_put_zone_wplug(struct blk_zone_wplug *zwplug)
+ {
+- if (atomic_dec_and_test(&zwplug->ref)) {
++ if (refcount_dec_and_test(&zwplug->ref)) {
+ WARN_ON_ONCE(!bio_list_empty(&zwplug->bio_list));
+- WARN_ON_ONCE(!list_empty(&zwplug->link));
++ WARN_ON_ONCE(zwplug->flags & BLK_ZONE_WPLUG_PLUGGED);
+ WARN_ON_ONCE(!(zwplug->flags & BLK_ZONE_WPLUG_UNHASHED));
+
+ call_rcu(&zwplug->rcu_head, disk_free_zone_wplug_rcu);
+@@ -454,8 +480,8 @@ static inline bool disk_should_remove_zone_wplug(struct gendisk *disk,
+ if (zwplug->flags & BLK_ZONE_WPLUG_UNHASHED)
+ return false;
+
+- /* If the zone write plug is still busy, it cannot be removed. */
+- if (zwplug->flags & BLK_ZONE_WPLUG_BUSY)
++ /* If the zone write plug is still plugged, it cannot be removed. */
++ if (zwplug->flags & BLK_ZONE_WPLUG_PLUGGED)
+ return false;
+
+ /*
+@@ -469,7 +495,7 @@ static inline bool disk_should_remove_zone_wplug(struct gendisk *disk,
+ * taken when the plug was allocated and another reference taken by the
+ * caller context).
+ */
+- if (atomic_read(&zwplug->ref) > 2)
++ if (refcount_read(&zwplug->ref) > 2)
+ return false;
+
+ /* We can remove zone write plugs for zones that are empty or full. */
+@@ -538,12 +564,11 @@ static struct blk_zone_wplug *disk_get_and_lock_zone_wplug(struct gendisk *disk,
+ return NULL;
+
+ INIT_HLIST_NODE(&zwplug->node);
+- INIT_LIST_HEAD(&zwplug->link);
+- atomic_set(&zwplug->ref, 2);
++ refcount_set(&zwplug->ref, 2);
+ spin_lock_init(&zwplug->lock);
+ zwplug->flags = 0;
+ zwplug->zone_no = zno;
+- zwplug->wp_offset = sector & (disk->queue->limits.chunk_sectors - 1);
++ zwplug->wp_offset = bdev_offset_from_zone_start(disk->part0, sector);
+ bio_list_init(&zwplug->bio_list);
+ INIT_WORK(&zwplug->bio_work, blk_zone_wplug_bio_work);
+ zwplug->disk = disk;
+@@ -587,124 +612,81 @@ static void disk_zone_wplug_abort(struct blk_zone_wplug *zwplug)
+ }
+
+ /*
+- * Abort (fail) all plugged BIOs of a zone write plug that are not aligned
+- * with the assumed write pointer location of the zone when the BIO will
+- * be unplugged.
++ * Set a zone write plug write pointer offset to the specified value.
++ * This aborts all plugged BIOs, which is fine as this function is called for
++ * a zone reset operation, a zone finish operation, or when the zone needs a
++ * wp update from a report zone after a write error.
+ */
+-static void disk_zone_wplug_abort_unaligned(struct gendisk *disk,
+- struct blk_zone_wplug *zwplug)
+-{
+- unsigned int wp_offset = zwplug->wp_offset;
+- struct bio_list bl = BIO_EMPTY_LIST;
+- struct bio *bio;
+-
+- while ((bio = bio_list_pop(&zwplug->bio_list))) {
+- if (disk_zone_is_full(disk, zwplug->zone_no, wp_offset) ||
+- (bio_op(bio) != REQ_OP_ZONE_APPEND &&
+- bio_offset_from_zone_start(bio) != wp_offset)) {
+- blk_zone_wplug_bio_io_error(zwplug, bio);
+- continue;
+- }
+-
+- wp_offset += bio_sectors(bio);
+- bio_list_add(&bl, bio);
+- }
+-
+- bio_list_merge(&zwplug->bio_list, &bl);
+-}
+-
+-static inline void disk_zone_wplug_set_error(struct gendisk *disk,
+- struct blk_zone_wplug *zwplug)
++static void disk_zone_wplug_set_wp_offset(struct gendisk *disk,
++ struct blk_zone_wplug *zwplug,
++ unsigned int wp_offset)
+ {
+- unsigned long flags;
++ lockdep_assert_held(&zwplug->lock);
+
+- if (zwplug->flags & BLK_ZONE_WPLUG_ERROR)
+- return;
++ /* Update the zone write pointer and abort all plugged BIOs. */
++ zwplug->flags &= ~BLK_ZONE_WPLUG_NEED_WP_UPDATE;
++ zwplug->wp_offset = wp_offset;
++ disk_zone_wplug_abort(zwplug);
+
+ /*
+- * At this point, we already have a reference on the zone write plug.
+- * However, since we are going to add the plug to the disk zone write
+- * plugs work list, increase its reference count. This reference will
+- * be dropped in disk_zone_wplugs_work() once the error state is
+- * handled, or in disk_zone_wplug_clear_error() if the zone is reset or
+- * finished.
++ * The zone write plug now has no BIO plugged: remove it from the
++ * hash table so that it cannot be seen. The plug will be freed
++ * when the last reference is dropped.
+ */
+- zwplug->flags |= BLK_ZONE_WPLUG_ERROR;
+- atomic_inc(&zwplug->ref);
+-
+- spin_lock_irqsave(&disk->zone_wplugs_lock, flags);
+- list_add_tail(&zwplug->link, &disk->zone_wplugs_err_list);
+- spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
++ if (disk_should_remove_zone_wplug(disk, zwplug))
++ disk_remove_zone_wplug(disk, zwplug);
+ }
+
+-static inline void disk_zone_wplug_clear_error(struct gendisk *disk,
+- struct blk_zone_wplug *zwplug)
++static unsigned int blk_zone_wp_offset(struct blk_zone *zone)
+ {
+- unsigned long flags;
+-
+- if (!(zwplug->flags & BLK_ZONE_WPLUG_ERROR))
+- return;
+-
+- /*
+- * We are racing with the error handling work which drops the reference
+- * on the zone write plug after handling the error state. So remove the
+- * plug from the error list and drop its reference count only if the
+- * error handling has not yet started, that is, if the zone write plug
+- * is still listed.
+- */
+- spin_lock_irqsave(&disk->zone_wplugs_lock, flags);
+- if (!list_empty(&zwplug->link)) {
+- list_del_init(&zwplug->link);
+- zwplug->flags &= ~BLK_ZONE_WPLUG_ERROR;
+- disk_put_zone_wplug(zwplug);
++ switch (zone->cond) {
++ case BLK_ZONE_COND_IMP_OPEN:
++ case BLK_ZONE_COND_EXP_OPEN:
++ case BLK_ZONE_COND_CLOSED:
++ return zone->wp - zone->start;
++ case BLK_ZONE_COND_FULL:
++ return zone->len;
++ case BLK_ZONE_COND_EMPTY:
++ return 0;
++ case BLK_ZONE_COND_NOT_WP:
++ case BLK_ZONE_COND_OFFLINE:
++ case BLK_ZONE_COND_READONLY:
++ default:
++ /*
++ * Conventional, offline and read-only zones do not have a valid
++ * write pointer.
++ */
++ return UINT_MAX;
+ }
+- spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
+ }
+
+-/*
+- * Set a zone write plug write pointer offset to either 0 (zone reset case)
+- * or to the zone size (zone finish case). This aborts all plugged BIOs, which
+- * is fine to do as doing a zone reset or zone finish while writes are in-flight
+- * is a mistake from the user which will most likely cause all plugged BIOs to
+- * fail anyway.
+- */
+-static void disk_zone_wplug_set_wp_offset(struct gendisk *disk,
+- struct blk_zone_wplug *zwplug,
+- unsigned int wp_offset)
++static void disk_zone_wplug_sync_wp_offset(struct gendisk *disk,
++ struct blk_zone *zone)
+ {
++ struct blk_zone_wplug *zwplug;
+ unsigned long flags;
+
+- spin_lock_irqsave(&zwplug->lock, flags);
+-
+- /*
+- * Make sure that a BIO completion or another zone reset or finish
+- * operation has not already removed the plug from the hash table.
+- */
+- if (zwplug->flags & BLK_ZONE_WPLUG_UNHASHED) {
+- spin_unlock_irqrestore(&zwplug->lock, flags);
++ zwplug = disk_get_zone_wplug(disk, zone->start);
++ if (!zwplug)
+ return;
+- }
+
+- /* Update the zone write pointer and abort all plugged BIOs. */
+- zwplug->wp_offset = wp_offset;
+- disk_zone_wplug_abort(zwplug);
++ spin_lock_irqsave(&zwplug->lock, flags);
++ if (zwplug->flags & BLK_ZONE_WPLUG_NEED_WP_UPDATE)
++ disk_zone_wplug_set_wp_offset(disk, zwplug,
++ blk_zone_wp_offset(zone));
++ spin_unlock_irqrestore(&zwplug->lock, flags);
+
+- /*
+- * Updating the write pointer offset puts back the zone
+- * in a good state. So clear the error flag and decrement the
+- * error count if we were in error state.
+- */
+- disk_zone_wplug_clear_error(disk, zwplug);
++ disk_put_zone_wplug(zwplug);
++}
+
+- /*
+- * The zone write plug now has no BIO plugged: remove it from the
+- * hash table so that it cannot be seen. The plug will be freed
+- * when the last reference is dropped.
+- */
+- if (disk_should_remove_zone_wplug(disk, zwplug))
+- disk_remove_zone_wplug(disk, zwplug);
++static int disk_zone_sync_wp_offset(struct gendisk *disk, sector_t sector)
++{
++ struct disk_report_zones_cb_args args = {
++ .disk = disk,
++ };
+
+- spin_unlock_irqrestore(&zwplug->lock, flags);
++ return disk->fops->report_zones(disk, sector, 1,
++ disk_report_zones_cb, &args);
+ }
+
+ static bool blk_zone_wplug_handle_reset_or_finish(struct bio *bio,
+@@ -713,6 +695,7 @@ static bool blk_zone_wplug_handle_reset_or_finish(struct bio *bio,
+ struct gendisk *disk = bio->bi_bdev->bd_disk;
+ sector_t sector = bio->bi_iter.bi_sector;
+ struct blk_zone_wplug *zwplug;
++ unsigned long flags;
+
+ /* Conventional zones cannot be reset nor finished. */
+ if (disk_zone_is_conv(disk, sector)) {
+@@ -720,6 +703,15 @@ static bool blk_zone_wplug_handle_reset_or_finish(struct bio *bio,
+ return true;
+ }
+
++ /*
++ * No-wait reset or finish BIOs do not make much sense as the callers
++ * issue these as blocking operations in most cases. To avoid the BIO
++ * execution potentially failing with BLK_STS_AGAIN, warn about
++ * REQ_NOWAIT being set and ignore that flag.
++ */
++ if (WARN_ON_ONCE(bio->bi_opf & REQ_NOWAIT))
++ bio->bi_opf &= ~REQ_NOWAIT;
++
+ /*
+ * If we have a zone write plug, set its write pointer offset to 0
+ * (reset case) or to the zone size (finish case). This will abort all
+@@ -729,7 +721,9 @@ static bool blk_zone_wplug_handle_reset_or_finish(struct bio *bio,
+ */
+ zwplug = disk_get_zone_wplug(disk, sector);
+ if (zwplug) {
++ spin_lock_irqsave(&zwplug->lock, flags);
+ disk_zone_wplug_set_wp_offset(disk, zwplug, wp_offset);
++ spin_unlock_irqrestore(&zwplug->lock, flags);
+ disk_put_zone_wplug(zwplug);
+ }
+
+@@ -740,6 +734,7 @@ static bool blk_zone_wplug_handle_reset_all(struct bio *bio)
+ {
+ struct gendisk *disk = bio->bi_bdev->bd_disk;
+ struct blk_zone_wplug *zwplug;
++ unsigned long flags;
+ sector_t sector;
+
+ /*
+@@ -751,7 +746,9 @@ static bool blk_zone_wplug_handle_reset_all(struct bio *bio)
+ sector += disk->queue->limits.chunk_sectors) {
+ zwplug = disk_get_zone_wplug(disk, sector);
+ if (zwplug) {
++ spin_lock_irqsave(&zwplug->lock, flags);
+ disk_zone_wplug_set_wp_offset(disk, zwplug, 0);
++ spin_unlock_irqrestore(&zwplug->lock, flags);
+ disk_put_zone_wplug(zwplug);
+ }
+ }
+@@ -759,9 +756,25 @@ static bool blk_zone_wplug_handle_reset_all(struct bio *bio)
+ return false;
+ }
+
+-static inline void blk_zone_wplug_add_bio(struct blk_zone_wplug *zwplug,
+- struct bio *bio, unsigned int nr_segs)
++static void disk_zone_wplug_schedule_bio_work(struct gendisk *disk,
++ struct blk_zone_wplug *zwplug)
+ {
++ /*
++ * Take a reference on the zone write plug and schedule the submission
++ * of the next plugged BIO. blk_zone_wplug_bio_work() will release the
++ * reference we take here.
++ */
++ WARN_ON_ONCE(!(zwplug->flags & BLK_ZONE_WPLUG_PLUGGED));
++ refcount_inc(&zwplug->ref);
++ queue_work(disk->zone_wplugs_wq, &zwplug->bio_work);
++}
++
++static inline void disk_zone_wplug_add_bio(struct gendisk *disk,
++ struct blk_zone_wplug *zwplug,
++ struct bio *bio, unsigned int nr_segs)
++{
++ bool schedule_bio_work = false;
++
+ /*
+ * Grab an extra reference on the BIO request queue usage counter.
+ * This reference will be reused to submit a request for the BIO for
+@@ -777,6 +790,16 @@ static inline void blk_zone_wplug_add_bio(struct blk_zone_wplug *zwplug,
+ */
+ bio_clear_polled(bio);
+
++ /*
++ * REQ_NOWAIT BIOs are always handled using the zone write plug BIO
++ * work, which can block. So clear the REQ_NOWAIT flag and schedule the
++ * work if this is the first BIO we are plugging.
++ */
++ if (bio->bi_opf & REQ_NOWAIT) {
++ schedule_bio_work = !(zwplug->flags & BLK_ZONE_WPLUG_PLUGGED);
++ bio->bi_opf &= ~REQ_NOWAIT;
++ }
++
+ /*
+ * Reuse the poll cookie field to store the number of segments when
+ * split to the hardware limits.
+@@ -790,6 +813,11 @@ static inline void blk_zone_wplug_add_bio(struct blk_zone_wplug *zwplug,
+ * at the tail of the list to preserve the sequential write order.
+ */
+ bio_list_add(&zwplug->bio_list, bio);
++
++ zwplug->flags |= BLK_ZONE_WPLUG_PLUGGED;
++
++ if (schedule_bio_work)
++ disk_zone_wplug_schedule_bio_work(disk, zwplug);
+ }
+
+ /*
+@@ -902,13 +930,23 @@ static bool blk_zone_wplug_prepare_bio(struct blk_zone_wplug *zwplug,
+ {
+ struct gendisk *disk = bio->bi_bdev->bd_disk;
+
++ /*
++ * If we lost track of the zone write pointer due to a write error,
++ * the user must either execute a report zones, reset the zone or finish
++ * the zone to recover a reliable write pointer position. Fail BIOs if the
++ * user did not do that as we cannot handle emulated zone append
++ * otherwise.
++ */
++ if (zwplug->flags & BLK_ZONE_WPLUG_NEED_WP_UPDATE)
++ return false;
++
+ /*
+ * Check that the user is not attempting to write to a full zone.
+ * We know such BIO will fail, and that would potentially overflow our
+ * write pointer offset beyond the end of the zone.
+ */
+ if (disk_zone_wplug_is_full(disk, zwplug))
+- goto err;
++ return false;
+
+ if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
+ /*
+@@ -927,24 +965,18 @@ static bool blk_zone_wplug_prepare_bio(struct blk_zone_wplug *zwplug,
+ bio_set_flag(bio, BIO_EMULATES_ZONE_APPEND);
+ } else {
+ /*
+- * Check for non-sequential writes early because we avoid a
+- * whole lot of error handling trouble if we don't send it off
+- * to the driver.
++ * Check for non-sequential writes early as we know that BIOs
++ * with a start sector not aligned to the zone write pointer
++ * will fail.
+ */
+ if (bio_offset_from_zone_start(bio) != zwplug->wp_offset)
+- goto err;
++ return false;
+ }
+
+ /* Advance the zone write pointer offset. */
+ zwplug->wp_offset += bio_sectors(bio);
+
+ return true;
+-
+-err:
+- /* We detected an invalid write BIO: schedule error recovery. */
+- disk_zone_wplug_set_error(disk, zwplug);
+- kblockd_schedule_work(&disk->zone_wplugs_work);
+- return false;
+ }
+
+ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
+@@ -983,7 +1015,10 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
+
+ zwplug = disk_get_and_lock_zone_wplug(disk, sector, gfp_mask, &flags);
+ if (!zwplug) {
+- bio_io_error(bio);
++ if (bio->bi_opf & REQ_NOWAIT)
++ bio_wouldblock_error(bio);
++ else
++ bio_io_error(bio);
+ return true;
+ }
+
+@@ -991,18 +1026,20 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
+ bio_set_flag(bio, BIO_ZONE_WRITE_PLUGGING);
+
+ /*
+- * If the zone is already plugged or has a pending error, add the BIO
+- * to the plug BIO list. Otherwise, plug and let the BIO execute.
++ * If the zone is already plugged, add the BIO to the plug BIO list.
++ * Do the same for REQ_NOWAIT BIOs to ensure that we will not see a
++ * BLK_STS_AGAIN failure if we let the BIO execute.
++ * Otherwise, plug and let the BIO execute.
+ */
+- if (zwplug->flags & BLK_ZONE_WPLUG_BUSY)
++ if ((zwplug->flags & BLK_ZONE_WPLUG_PLUGGED) ||
++ (bio->bi_opf & REQ_NOWAIT))
+ goto plug;
+
+- /*
+- * If an error is detected when preparing the BIO, add it to the BIO
+- * list so that error recovery can deal with it.
+- */
+- if (!blk_zone_wplug_prepare_bio(zwplug, bio))
+- goto plug;
++ if (!blk_zone_wplug_prepare_bio(zwplug, bio)) {
++ spin_unlock_irqrestore(&zwplug->lock, flags);
++ bio_io_error(bio);
++ return true;
++ }
+
+ zwplug->flags |= BLK_ZONE_WPLUG_PLUGGED;
+
+@@ -1011,8 +1048,7 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
+ return false;
+
+ plug:
+- zwplug->flags |= BLK_ZONE_WPLUG_PLUGGED;
+- blk_zone_wplug_add_bio(zwplug, bio, nr_segs);
++ disk_zone_wplug_add_bio(disk, zwplug, bio, nr_segs);
+
+ spin_unlock_irqrestore(&zwplug->lock, flags);
+
+@@ -1096,19 +1132,6 @@ bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
+ }
+ EXPORT_SYMBOL_GPL(blk_zone_plug_bio);
+
+-static void disk_zone_wplug_schedule_bio_work(struct gendisk *disk,
+- struct blk_zone_wplug *zwplug)
+-{
+- /*
+- * Take a reference on the zone write plug and schedule the submission
+- * of the next plugged BIO. blk_zone_wplug_bio_work() will release the
+- * reference we take here.
+- */
+- WARN_ON_ONCE(!(zwplug->flags & BLK_ZONE_WPLUG_PLUGGED));
+- atomic_inc(&zwplug->ref);
+- queue_work(disk->zone_wplugs_wq, &zwplug->bio_work);
+-}
+-
+ static void disk_zone_wplug_unplug_bio(struct gendisk *disk,
+ struct blk_zone_wplug *zwplug)
+ {
+@@ -1116,16 +1139,6 @@ static void disk_zone_wplug_unplug_bio(struct gendisk *disk,
+
+ spin_lock_irqsave(&zwplug->lock, flags);
+
+- /*
+- * If we had an error, schedule error recovery. The recovery work
+- * will restart submission of plugged BIOs.
+- */
+- if (zwplug->flags & BLK_ZONE_WPLUG_ERROR) {
+- spin_unlock_irqrestore(&zwplug->lock, flags);
+- kblockd_schedule_work(&disk->zone_wplugs_work);
+- return;
+- }
+-
+ /* Schedule submission of the next plugged BIO if we have one. */
+ if (!bio_list_empty(&zwplug->bio_list)) {
+ disk_zone_wplug_schedule_bio_work(disk, zwplug);
+@@ -1168,12 +1181,13 @@ void blk_zone_write_plug_bio_endio(struct bio *bio)
+ }
+
+ /*
+- * If the BIO failed, mark the plug as having an error to trigger
+- * recovery.
++ * If the BIO failed, abort all plugged BIOs and mark the plug as
++ * needing a write pointer update.
+ */
+ if (bio->bi_status != BLK_STS_OK) {
+ spin_lock_irqsave(&zwplug->lock, flags);
+- disk_zone_wplug_set_error(disk, zwplug);
++ disk_zone_wplug_abort(zwplug);
++ zwplug->flags |= BLK_ZONE_WPLUG_NEED_WP_UPDATE;
+ spin_unlock_irqrestore(&zwplug->lock, flags);
+ }
+
+@@ -1229,6 +1243,7 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
+ */
+ spin_lock_irqsave(&zwplug->lock, flags);
+
++again:
+ bio = bio_list_pop(&zwplug->bio_list);
+ if (!bio) {
+ zwplug->flags &= ~BLK_ZONE_WPLUG_PLUGGED;
+@@ -1237,10 +1252,8 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
+ }
+
+ if (!blk_zone_wplug_prepare_bio(zwplug, bio)) {
+- /* Error recovery will decide what to do with the BIO. */
+- bio_list_add_head(&zwplug->bio_list, bio);
+- spin_unlock_irqrestore(&zwplug->lock, flags);
+- goto put_zwplug;
++ blk_zone_wplug_bio_io_error(zwplug, bio);
++ goto again;
+ }
+
+ spin_unlock_irqrestore(&zwplug->lock, flags);
+@@ -1262,120 +1275,6 @@ static void blk_zone_wplug_bio_work(struct work_struct *work)
+ disk_put_zone_wplug(zwplug);
+ }
+
+-static unsigned int blk_zone_wp_offset(struct blk_zone *zone)
+-{
+- switch (zone->cond) {
+- case BLK_ZONE_COND_IMP_OPEN:
+- case BLK_ZONE_COND_EXP_OPEN:
+- case BLK_ZONE_COND_CLOSED:
+- return zone->wp - zone->start;
+- case BLK_ZONE_COND_FULL:
+- return zone->len;
+- case BLK_ZONE_COND_EMPTY:
+- return 0;
+- case BLK_ZONE_COND_NOT_WP:
+- case BLK_ZONE_COND_OFFLINE:
+- case BLK_ZONE_COND_READONLY:
+- default:
+- /*
+- * Conventional, offline and read-only zones do not have a valid
+- * write pointer.
+- */
+- return UINT_MAX;
+- }
+-}
+-
+-static int blk_zone_wplug_report_zone_cb(struct blk_zone *zone,
+- unsigned int idx, void *data)
+-{
+- struct blk_zone *zonep = data;
+-
+- *zonep = *zone;
+- return 0;
+-}
+-
+-static void disk_zone_wplug_handle_error(struct gendisk *disk,
+- struct blk_zone_wplug *zwplug)
+-{
+- sector_t zone_start_sector =
+- bdev_zone_sectors(disk->part0) * zwplug->zone_no;
+- unsigned int noio_flag;
+- struct blk_zone zone;
+- unsigned long flags;
+- int ret;
+-
+- /* Get the current zone information from the device. */
+- noio_flag = memalloc_noio_save();
+- ret = disk->fops->report_zones(disk, zone_start_sector, 1,
+- blk_zone_wplug_report_zone_cb, &zone);
+- memalloc_noio_restore(noio_flag);
+-
+- spin_lock_irqsave(&zwplug->lock, flags);
+-
+- /*
+- * A zone reset or finish may have cleared the error already. In such
+- * case, do nothing as the report zones may have seen the "old" write
+- * pointer value before the reset/finish operation completed.
+- */
+- if (!(zwplug->flags & BLK_ZONE_WPLUG_ERROR))
+- goto unlock;
+-
+- zwplug->flags &= ~BLK_ZONE_WPLUG_ERROR;
+-
+- if (ret != 1) {
+- /*
+- * We failed to get the zone information, meaning that something
+- * is likely really wrong with the device. Abort all remaining
+- * plugged BIOs as otherwise we could endup waiting forever on
+- * plugged BIOs to complete if there is a queue freeze on-going.
+- */
+- disk_zone_wplug_abort(zwplug);
+- goto unplug;
+- }
+-
+- /* Update the zone write pointer offset. */
+- zwplug->wp_offset = blk_zone_wp_offset(&zone);
+- disk_zone_wplug_abort_unaligned(disk, zwplug);
+-
+- /* Restart BIO submission if we still have any BIO left. */
+- if (!bio_list_empty(&zwplug->bio_list)) {
+- disk_zone_wplug_schedule_bio_work(disk, zwplug);
+- goto unlock;
+- }
+-
+-unplug:
+- zwplug->flags &= ~BLK_ZONE_WPLUG_PLUGGED;
+- if (disk_should_remove_zone_wplug(disk, zwplug))
+- disk_remove_zone_wplug(disk, zwplug);
+-
+-unlock:
+- spin_unlock_irqrestore(&zwplug->lock, flags);
+-}
+-
+-static void disk_zone_wplugs_work(struct work_struct *work)
+-{
+- struct gendisk *disk =
+- container_of(work, struct gendisk, zone_wplugs_work);
+- struct blk_zone_wplug *zwplug;
+- unsigned long flags;
+-
+- spin_lock_irqsave(&disk->zone_wplugs_lock, flags);
+-
+- while (!list_empty(&disk->zone_wplugs_err_list)) {
+- zwplug = list_first_entry(&disk->zone_wplugs_err_list,
+- struct blk_zone_wplug, link);
+- list_del_init(&zwplug->link);
+- spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
+-
+- disk_zone_wplug_handle_error(disk, zwplug);
+- disk_put_zone_wplug(zwplug);
+-
+- spin_lock_irqsave(&disk->zone_wplugs_lock, flags);
+- }
+-
+- spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
+-}
+-
+ static inline unsigned int disk_zone_wplugs_hash_size(struct gendisk *disk)
+ {
+ return 1U << disk->zone_wplugs_hash_bits;
+@@ -1384,8 +1283,6 @@ static inline unsigned int disk_zone_wplugs_hash_size(struct gendisk *disk)
+ void disk_init_zone_resources(struct gendisk *disk)
+ {
+ spin_lock_init(&disk->zone_wplugs_lock);
+- INIT_LIST_HEAD(&disk->zone_wplugs_err_list);
+- INIT_WORK(&disk->zone_wplugs_work, disk_zone_wplugs_work);
+ }
+
+ /*
+@@ -1450,7 +1347,7 @@ static void disk_destroy_zone_wplugs_hash_table(struct gendisk *disk)
+ while (!hlist_empty(&disk->zone_wplugs_hash[i])) {
+ zwplug = hlist_entry(disk->zone_wplugs_hash[i].first,
+ struct blk_zone_wplug, node);
+- atomic_inc(&zwplug->ref);
++ refcount_inc(&zwplug->ref);
+ disk_remove_zone_wplug(disk, zwplug);
+ disk_put_zone_wplug(zwplug);
+ }
+@@ -1484,8 +1381,6 @@ void disk_free_zone_resources(struct gendisk *disk)
+ if (!disk->zone_wplugs_pool)
+ return;
+
+- cancel_work_sync(&disk->zone_wplugs_work);
+-
+ if (disk->zone_wplugs_wq) {
+ destroy_workqueue(disk->zone_wplugs_wq);
+ disk->zone_wplugs_wq = NULL;
+@@ -1682,6 +1577,8 @@ static int blk_revalidate_seq_zone(struct blk_zone *zone, unsigned int idx,
+ if (!disk->zone_wplugs_hash)
+ return 0;
+
++ disk_zone_wplug_sync_wp_offset(disk, zone);
++
+ wp_offset = blk_zone_wp_offset(zone);
+ if (!wp_offset || wp_offset >= zone->capacity)
+ return 0;
+@@ -1818,6 +1715,7 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
+ memalloc_noio_restore(noio_flag);
+ return ret;
+ }
++
+ ret = disk->fops->report_zones(disk, 0, UINT_MAX,
+ blk_revalidate_zone_cb, &args);
+ if (!ret) {
+@@ -1854,6 +1752,48 @@ int blk_revalidate_disk_zones(struct gendisk *disk)
+ }
+ EXPORT_SYMBOL_GPL(blk_revalidate_disk_zones);
+
++/**
++ * blk_zone_issue_zeroout - zero-fill a block range in a zone
++ * @bdev: blockdev to write
++ * @sector: start sector
++ * @nr_sects: number of sectors to write
++ * @gfp_mask: memory allocation flags (for bio_alloc)
++ *
++ * Description:
++ * Zero-fill a block range in a zone (@sector must be equal to the zone write
++ * pointer), handling potential errors due to the (initially unknown) lack of
++ * hardware offload (See blkdev_issue_zeroout()).
++ */
++int blk_zone_issue_zeroout(struct block_device *bdev, sector_t sector,
++ sector_t nr_sects, gfp_t gfp_mask)
++{
++ int ret;
++
++ if (WARN_ON_ONCE(!bdev_is_zoned(bdev)))
++ return -EIO;
++
++ ret = blkdev_issue_zeroout(bdev, sector, nr_sects, gfp_mask,
++ BLKDEV_ZERO_NOFALLBACK);
++ if (ret != -EOPNOTSUPP)
++ return ret;
++
++ /*
++ * The failed call to blkdev_issue_zeroout() advanced the zone write
++ * pointer. Undo this using a report zone to update the zone write
++ * pointer to the correct current value.
++ */
++ ret = disk_zone_sync_wp_offset(bdev->bd_disk, sector);
++ if (ret != 1)
++ return ret < 0 ? ret : -EIO;
++
++ /*
++ * Retry without BLKDEV_ZERO_NOFALLBACK to force the fallback to a
++ * regular write with zero-pages.
++ */
++ return blkdev_issue_zeroout(bdev, sector, nr_sects, gfp_mask, 0);
++}
++EXPORT_SYMBOL_GPL(blk_zone_issue_zeroout);
++
+ #ifdef CONFIG_BLK_DEBUG_FS
+
+ int queue_zone_wplugs_show(void *data, struct seq_file *m)
+@@ -1876,7 +1816,7 @@ int queue_zone_wplugs_show(void *data, struct seq_file *m)
+ spin_lock_irqsave(&zwplug->lock, flags);
+ zwp_zone_no = zwplug->zone_no;
+ zwp_flags = zwplug->flags;
+- zwp_ref = atomic_read(&zwplug->ref);
++ zwp_ref = refcount_read(&zwplug->ref);
+ zwp_wp_offset = zwplug->wp_offset;
+ zwp_bio_list_size = bio_list_size(&zwplug->bio_list);
+ spin_unlock_irqrestore(&zwplug->lock, flags);
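A hedged caller-side sketch of the new blk_zone_issue_zeroout() helper
(bdev, wp_sector and nr_sects are hypothetical values from a zoned
filesystem): the caller does not need to repair the write pointer itself,
since the helper re-syncs it via a report zones before retrying without
BLKDEV_ZERO_NOFALLBACK.

	int ret;

	/* Zero-fill starting at the current zone write pointer. */
	ret = blk_zone_issue_zeroout(bdev, wp_sector, nr_sects, GFP_NOFS);
	if (ret)
		return ret;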
+diff --git a/drivers/acpi/acpica/evxfregn.c b/drivers/acpi/acpica/evxfregn.c
+index 95f78383bbdba1..bff2d099f4691e 100644
+--- a/drivers/acpi/acpica/evxfregn.c
++++ b/drivers/acpi/acpica/evxfregn.c
+@@ -232,8 +232,6 @@ acpi_remove_address_space_handler(acpi_handle device,
+
+ /* Now we can delete the handler object */
+
+- acpi_os_release_mutex(handler_obj->address_space.
+- context_mutex);
+ acpi_ut_remove_reference(handler_obj);
+ goto unlock_and_exit;
+ }
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index 5429ec9ef06f06..a5d47819b3a4e2 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -454,8 +454,13 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ if (cmd_rc)
+ *cmd_rc = -EINVAL;
+
+- if (cmd == ND_CMD_CALL)
++ if (cmd == ND_CMD_CALL) {
++ if (!buf || buf_len < sizeof(*call_pkg))
++ return -EINVAL;
++
+ call_pkg = buf;
++ }
++
+ func = cmd_to_func(nfit_mem, cmd, call_pkg, &family);
+ if (func < 0)
+ return func;
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 7fe842dae1ec05..821867de43bea3 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -250,6 +250,9 @@ static bool acpi_decode_space(struct resource_win *win,
+ switch (addr->resource_type) {
+ case ACPI_MEMORY_RANGE:
+ acpi_dev_memresource_flags(res, len, wp);
++
++ if (addr->info.mem.caching == ACPI_PREFETCHABLE_MEMORY)
++ res->flags |= IORESOURCE_PREFETCH;
+ break;
+ case ACPI_IO_RANGE:
+ acpi_dev_ioresource_flags(res, len, iodec,
+@@ -265,9 +268,6 @@ static bool acpi_decode_space(struct resource_win *win,
+ if (addr->producer_consumer == ACPI_PRODUCER)
+ res->flags |= IORESOURCE_WINDOW;
+
+- if (addr->info.mem.caching == ACPI_PREFETCHABLE_MEMORY)
+- res->flags |= IORESOURCE_PREFETCH;
+-
+ return !(res->flags & IORESOURCE_DISABLED);
+ }
+
+diff --git a/drivers/ata/sata_highbank.c b/drivers/ata/sata_highbank.c
+index 63ef7bb073ce03..596c6d294da906 100644
+--- a/drivers/ata/sata_highbank.c
++++ b/drivers/ata/sata_highbank.c
+@@ -348,6 +348,7 @@ static int highbank_initialize_phys(struct device *dev, void __iomem *addr)
+ phy_nodes[phy] = phy_data.np;
+ cphy_base[phy] = of_iomap(phy_nodes[phy], 0);
+ if (cphy_base[phy] == NULL) {
++ of_node_put(phy_data.np);
+ return 0;
+ }
+ phy_count += 1;
+diff --git a/drivers/bluetooth/btmtk.c b/drivers/bluetooth/btmtk.c
+index 480e4adba9faa6..85e99641eaae02 100644
+--- a/drivers/bluetooth/btmtk.c
++++ b/drivers/bluetooth/btmtk.c
+@@ -395,6 +395,7 @@ int btmtk_process_coredump(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+ struct btmtk_data *data = hci_get_priv(hdev);
+ int err;
++ bool complete = false;
+
+ if (!IS_ENABLED(CONFIG_DEV_COREDUMP)) {
+ kfree_skb(skb);
+@@ -416,19 +417,22 @@ int btmtk_process_coredump(struct hci_dev *hdev, struct sk_buff *skb)
+ fallthrough;
+ case HCI_DEVCOREDUMP_ACTIVE:
+ default:
++ /* Mediatek coredump data would be more than MTK_COREDUMP_NUM */
++ if (data->cd_info.cnt >= MTK_COREDUMP_NUM &&
++ skb->len > MTK_COREDUMP_END_LEN)
++ if (!memcmp((char *)&skb->data[skb->len - MTK_COREDUMP_END_LEN],
++ MTK_COREDUMP_END, MTK_COREDUMP_END_LEN - 1))
++ complete = true;
++
+ err = hci_devcd_append(hdev, skb);
+ if (err < 0)
+ break;
+ data->cd_info.cnt++;
+
+- /* Mediatek coredump data would be more than MTK_COREDUMP_NUM */
+- if (data->cd_info.cnt > MTK_COREDUMP_NUM &&
+- skb->len > MTK_COREDUMP_END_LEN)
+- if (!memcmp((char *)&skb->data[skb->len - MTK_COREDUMP_END_LEN],
+- MTK_COREDUMP_END, MTK_COREDUMP_END_LEN - 1)) {
+- bt_dev_info(hdev, "Mediatek coredump end");
+- hci_devcd_complete(hdev);
+- }
++ if (complete) {
++ bt_dev_info(hdev, "Mediatek coredump end");
++ hci_devcd_complete(hdev);
++ }
+
+ break;
+ }
+diff --git a/drivers/clk/clk-en7523.c b/drivers/clk/clk-en7523.c
+index bc21b292144926..62a62eaba2aad8 100644
+--- a/drivers/clk/clk-en7523.c
++++ b/drivers/clk/clk-en7523.c
+@@ -92,6 +92,7 @@ static const u32 slic_base[] = { 100000000, 3125000 };
+ static const u32 npu_base[] = { 333000000, 400000000, 500000000 };
+ /* EN7581 */
+ static const u32 emi7581_base[] = { 540000000, 480000000, 400000000, 300000000 };
++static const u32 bus7581_base[] = { 600000000, 540000000 };
+ static const u32 npu7581_base[] = { 800000000, 750000000, 720000000, 600000000 };
+ static const u32 crypto_base[] = { 540000000, 480000000 };
+
+@@ -227,8 +228,8 @@ static const struct en_clk_desc en7581_base_clks[] = {
+ .base_reg = REG_BUS_CLK_DIV_SEL,
+ .base_bits = 1,
+ .base_shift = 8,
+- .base_values = bus_base,
+- .n_base_values = ARRAY_SIZE(bus_base),
++ .base_values = bus7581_base,
++ .n_base_values = ARRAY_SIZE(bus7581_base),
+
+ .div_bits = 3,
+ .div_shift = 0,
+diff --git a/drivers/crypto/hisilicon/debugfs.c b/drivers/crypto/hisilicon/debugfs.c
+index 1b9b7bccdeff08..45e130b901eb5e 100644
+--- a/drivers/crypto/hisilicon/debugfs.c
++++ b/drivers/crypto/hisilicon/debugfs.c
+@@ -192,7 +192,7 @@ static int qm_sqc_dump(struct hisi_qm *qm, char *s, char *name)
+
+ down_read(&qm->qps_lock);
+ if (qm->sqc) {
+- memcpy(&sqc, qm->sqc + qp_id * sizeof(struct qm_sqc), sizeof(struct qm_sqc));
++ memcpy(&sqc, qm->sqc + qp_id, sizeof(struct qm_sqc));
+ sqc.base_h = cpu_to_le32(QM_XQC_ADDR_MASK);
+ sqc.base_l = cpu_to_le32(QM_XQC_ADDR_MASK);
+ dump_show(qm, &sqc, sizeof(struct qm_sqc), "SOFT SQC");
+@@ -229,7 +229,7 @@ static int qm_cqc_dump(struct hisi_qm *qm, char *s, char *name)
+
+ down_read(&qm->qps_lock);
+ if (qm->cqc) {
+- memcpy(&cqc, qm->cqc + qp_id * sizeof(struct qm_cqc), sizeof(struct qm_cqc));
++ memcpy(&cqc, qm->cqc + qp_id, sizeof(struct qm_cqc));
+ cqc.base_h = cpu_to_le32(QM_XQC_ADDR_MASK);
+ cqc.base_l = cpu_to_le32(QM_XQC_ADDR_MASK);
+ dump_show(qm, &cqc, sizeof(struct qm_cqc), "SOFT CQC");
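A brief standalone illustration of the indexing rule behind the two memcpy
fixes above (base and dst are hypothetical names): pointer arithmetic on a
typed pointer already scales by the element size, so the old form
effectively applied sizeof(struct qm_sqc) twice.

	struct qm_sqc *base = qm->sqc;
	struct qm_sqc dst;

	/* old: base + qp_id * sizeof(struct qm_sqc)
	 *	advances qp_id * sizeof(struct qm_sqc)^2 bytes - wrong entry */
	/* new: base + qp_id
	 *	advances qp_id * sizeof(struct qm_sqc) bytes - qp_id-th entry */
	memcpy(&dst, base + qp_id, sizeof(dst));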
+diff --git a/drivers/gpio/gpio-graniterapids.c b/drivers/gpio/gpio-graniterapids.c
+index f2e911a3d2ca02..ad6a045fd3d2d2 100644
+--- a/drivers/gpio/gpio-graniterapids.c
++++ b/drivers/gpio/gpio-graniterapids.c
+@@ -32,12 +32,14 @@
+ #define GNR_PINS_PER_REG 32
+ #define GNR_NUM_REGS DIV_ROUND_UP(GNR_NUM_PINS, GNR_PINS_PER_REG)
+
+-#define GNR_CFG_BAR 0x00
++#define GNR_CFG_PADBAR 0x00
+ #define GNR_CFG_LOCK_OFFSET 0x04
+-#define GNR_GPI_STATUS_OFFSET 0x20
++#define GNR_GPI_STATUS_OFFSET 0x14
+ #define GNR_GPI_ENABLE_OFFSET 0x24
+
+-#define GNR_CFG_DW_RX_MASK GENMASK(25, 22)
++#define GNR_CFG_DW_HOSTSW_MODE BIT(27)
++#define GNR_CFG_DW_RX_MASK GENMASK(23, 22)
++#define GNR_CFG_DW_INTSEL_MASK GENMASK(21, 14)
+ #define GNR_CFG_DW_RX_DISABLE FIELD_PREP(GNR_CFG_DW_RX_MASK, 2)
+ #define GNR_CFG_DW_RX_EDGE FIELD_PREP(GNR_CFG_DW_RX_MASK, 1)
+ #define GNR_CFG_DW_RX_LEVEL FIELD_PREP(GNR_CFG_DW_RX_MASK, 0)
+@@ -50,6 +52,7 @@
+ * struct gnr_gpio - Intel Granite Rapids-D vGPIO driver state
+ * @gc: GPIO controller interface
+ * @reg_base: base address of the GPIO registers
++ * @pad_base: base address of the vGPIO pad configuration registers
+ * @ro_bitmap: bitmap of read-only pins
+ * @lock: guard the registers
+ * @pad_backup: backup of the register state for suspend
+@@ -57,6 +60,7 @@
+ struct gnr_gpio {
+ struct gpio_chip gc;
+ void __iomem *reg_base;
++ void __iomem *pad_base;
+ DECLARE_BITMAP(ro_bitmap, GNR_NUM_PINS);
+ raw_spinlock_t lock;
+ u32 pad_backup[];
+@@ -65,7 +69,7 @@ struct gnr_gpio {
+ static void __iomem *gnr_gpio_get_padcfg_addr(const struct gnr_gpio *priv,
+ unsigned int gpio)
+ {
+- return priv->reg_base + gpio * sizeof(u32);
++ return priv->pad_base + gpio * sizeof(u32);
+ }
+
+ static int gnr_gpio_configure_line(struct gpio_chip *gc, unsigned int gpio,
+@@ -88,6 +92,20 @@ static int gnr_gpio_configure_line(struct gpio_chip *gc, unsigned int gpio,
+ return 0;
+ }
+
++static int gnr_gpio_request(struct gpio_chip *gc, unsigned int gpio)
++{
++ struct gnr_gpio *priv = gpiochip_get_data(gc);
++ u32 dw;
++
++ dw = readl(gnr_gpio_get_padcfg_addr(priv, gpio));
++ if (!(dw & GNR_CFG_DW_HOSTSW_MODE)) {
++ dev_warn(gc->parent, "GPIO %u is not owned by host", gpio);
++ return -EBUSY;
++ }
++
++ return 0;
++}
++
+ static int gnr_gpio_get(struct gpio_chip *gc, unsigned int gpio)
+ {
+ const struct gnr_gpio *priv = gpiochip_get_data(gc);
+@@ -139,6 +157,7 @@ static int gnr_gpio_direction_output(struct gpio_chip *gc, unsigned int gpio, in
+
+ static const struct gpio_chip gnr_gpio_chip = {
+ .owner = THIS_MODULE,
++ .request = gnr_gpio_request,
+ .get = gnr_gpio_get,
+ .set = gnr_gpio_set,
+ .get_direction = gnr_gpio_get_direction,
+@@ -166,7 +185,7 @@ static void gnr_gpio_irq_ack(struct irq_data *d)
+ guard(raw_spinlock_irqsave)(&priv->lock);
+
+ reg = readl(addr);
+- reg &= ~BIT(bit_idx);
++ reg |= BIT(bit_idx);
+ writel(reg, addr);
+ }
+
+@@ -209,10 +228,18 @@ static void gnr_gpio_irq_unmask(struct irq_data *d)
+ static int gnr_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ {
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
+- irq_hw_number_t pin = irqd_to_hwirq(d);
+- u32 mask = GNR_CFG_DW_RX_MASK;
++ struct gnr_gpio *priv = gpiochip_get_data(gc);
++ irq_hw_number_t hwirq = irqd_to_hwirq(d);
++ u32 reg;
+ u32 set;
+
++ /* Allow interrupts only if Interrupt Select field is non-zero */
++ reg = readl(gnr_gpio_get_padcfg_addr(priv, hwirq));
++ if (!(reg & GNR_CFG_DW_INTSEL_MASK)) {
++ dev_dbg(gc->parent, "GPIO %lu cannot be used as IRQ", hwirq);
++ return -EPERM;
++ }
++
+ /* Falling edge and level low triggers not supported by the GPIO controller */
+ switch (type) {
+ case IRQ_TYPE_NONE:
+@@ -230,10 +257,11 @@ static int gnr_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ return -EINVAL;
+ }
+
+- return gnr_gpio_configure_line(gc, pin, mask, set);
++ return gnr_gpio_configure_line(gc, hwirq, GNR_CFG_DW_RX_MASK, set);
+ }
+
+ static const struct irq_chip gnr_gpio_irq_chip = {
++ .name = "gpio-graniterapids",
+ .irq_ack = gnr_gpio_irq_ack,
+ .irq_mask = gnr_gpio_irq_mask,
+ .irq_unmask = gnr_gpio_irq_unmask,
+@@ -291,6 +319,7 @@ static int gnr_gpio_probe(struct platform_device *pdev)
+ struct gnr_gpio *priv;
+ void __iomem *regs;
+ int irq, ret;
++ u32 offset;
+
+ priv = devm_kzalloc(dev, struct_size(priv, pad_backup, num_backup_pins), GFP_KERNEL);
+ if (!priv)
+@@ -302,6 +331,10 @@ static int gnr_gpio_probe(struct platform_device *pdev)
+ if (IS_ERR(regs))
+ return PTR_ERR(regs);
+
++ priv->reg_base = regs;
++ offset = readl(priv->reg_base + GNR_CFG_PADBAR);
++ priv->pad_base = priv->reg_base + offset;
++
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0)
+ return irq;
+@@ -311,8 +344,6 @@ static int gnr_gpio_probe(struct platform_device *pdev)
+ if (ret)
+ return dev_err_probe(dev, ret, "failed to request interrupt\n");
+
+- priv->reg_base = regs + readl(regs + GNR_CFG_BAR);
+-
+ gnr_gpio_init_pin_ro_bits(dev, priv->reg_base + GNR_CFG_LOCK_OFFSET,
+ priv->ro_bitmap);
+
+@@ -324,7 +355,6 @@ static int gnr_gpio_probe(struct platform_device *pdev)
+
+ girq = &priv->gc.irq;
+ gpio_irq_chip_set_chip(girq, &gnr_gpio_irq_chip);
+- girq->chip->name = dev_name(dev);
+ girq->parent_handler = NULL;
+ girq->num_parents = 0;
+ girq->parents = NULL;
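The narrowed RX mask above is built with the kernel's GENMASK() and
FIELD_PREP() helpers. A standalone model with simplified 32-bit-only
reimplementations of those macros (the kernel versions are more general)
shows what the new field boundaries encode:

#include <stdio.h>
#include <stdint.h>

/* Simplified 32-bit stand-ins for the kernel macros. */
#define GENMASK32(h, l)		((~0u >> (31 - (h))) & (~0u << (l)))
#define FIELD_PREP32(mask, val)	(((uint32_t)(val) << __builtin_ctz(mask)) & (mask))

int main(void)
{
	printf("old RX mask (25:22): 0x%08x\n", GENMASK32(25, 22)); /* 0x03c00000 */
	printf("new RX mask (23:22): 0x%08x\n", GENMASK32(23, 22)); /* 0x00c00000 */
	printf("RX_DISABLE value:    0x%08x\n",
	       FIELD_PREP32(GENMASK32(23, 22), 2));                 /* 0x00800000 */
	printf("INTSEL mask (21:14): 0x%08x\n", GENMASK32(21, 14)); /* 0x003fc000 */
	return 0;
}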
+diff --git a/drivers/gpio/gpio-ljca.c b/drivers/gpio/gpio-ljca.c
+index dfec9fbfc7a9bd..c2a9b425397441 100644
+--- a/drivers/gpio/gpio-ljca.c
++++ b/drivers/gpio/gpio-ljca.c
+@@ -82,9 +82,9 @@ static int ljca_gpio_config(struct ljca_gpio_dev *ljca_gpio, u8 gpio_id,
+ int ret;
+
+ mutex_lock(&ljca_gpio->trans_lock);
++ packet->num = 1;
+ packet->item[0].index = gpio_id;
+ packet->item[0].value = config | ljca_gpio->connect_mode[gpio_id];
+- packet->num = 1;
+
+ ret = ljca_transfer(ljca_gpio->ljca, LJCA_GPIO_CONFIG, (u8 *)packet,
+ struct_size(packet, item, packet->num), NULL, 0);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+index d891ab779ca7f5..5df21529b3b13e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
+@@ -1801,13 +1801,18 @@ int amdgpu_cs_find_mapping(struct amdgpu_cs_parser *parser,
+ if (dma_resv_locking_ctx((*bo)->tbo.base.resv) != &parser->exec.ticket)
+ return -EINVAL;
+
++	/* Make sure VRAM is allocated contiguously */
+ (*bo)->flags |= AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS;
+- amdgpu_bo_placement_from_domain(*bo, (*bo)->allowed_domains);
+- for (i = 0; i < (*bo)->placement.num_placement; i++)
+- (*bo)->placements[i].flags |= TTM_PL_FLAG_CONTIGUOUS;
+- r = ttm_bo_validate(&(*bo)->tbo, &(*bo)->placement, &ctx);
+- if (r)
+- return r;
++ if ((*bo)->tbo.resource->mem_type == TTM_PL_VRAM &&
++ !((*bo)->tbo.resource->placement & TTM_PL_FLAG_CONTIGUOUS)) {
++
++ amdgpu_bo_placement_from_domain(*bo, (*bo)->allowed_domains);
++ for (i = 0; i < (*bo)->placement.num_placement; i++)
++ (*bo)->placements[i].flags |= TTM_PL_FLAG_CONTIGUOUS;
++ r = ttm_bo_validate(&(*bo)->tbo, &(*bo)->placement, &ctx);
++ if (r)
++ return r;
++ }
+
+ return amdgpu_ttm_alloc_gart(&(*bo)->tbo);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+index 31fd30dcd593ba..65bb26215e867a 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_uvd.c
+@@ -551,6 +551,8 @@ static void amdgpu_uvd_force_into_uvd_segment(struct amdgpu_bo *abo)
+ for (i = 0; i < abo->placement.num_placement; ++i) {
+ abo->placements[i].fpfn = 0 >> PAGE_SHIFT;
+ abo->placements[i].lpfn = (256 * 1024 * 1024) >> PAGE_SHIFT;
++ if (abo->placements[i].mem_type == TTM_PL_VRAM)
++ abo->placements[i].flags |= TTM_PL_FLAG_CONTIGUOUS;
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 6005280f5f38f0..8d2562d0f143c7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -674,12 +674,8 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
+ pasid_mapping_needed &= adev->gmc.gmc_funcs->emit_pasid_mapping &&
+ ring->funcs->emit_wreg;
+
+- if (adev->gfx.enable_cleaner_shader &&
+- ring->funcs->emit_cleaner_shader &&
+- job->enforce_isolation)
+- ring->funcs->emit_cleaner_shader(ring);
+-
+- if (!vm_flush_needed && !gds_switch_needed && !need_pipe_sync)
++ if (!vm_flush_needed && !gds_switch_needed && !need_pipe_sync &&
++ !(job->enforce_isolation && !job->vmid))
+ return 0;
+
+ amdgpu_ring_ib_begin(ring);
+@@ -690,6 +686,11 @@ int amdgpu_vm_flush(struct amdgpu_ring *ring, struct amdgpu_job *job,
+ if (need_pipe_sync)
+ amdgpu_ring_emit_pipeline_sync(ring);
+
++ if (adev->gfx.enable_cleaner_shader &&
++ ring->funcs->emit_cleaner_shader &&
++ job->enforce_isolation)
++ ring->funcs->emit_cleaner_shader(ring);
++
+ if (vm_flush_needed) {
+ trace_amdgpu_vm_flush(ring, job->vmid, job->vm_pd_addr);
+ amdgpu_ring_emit_vm_flush(ring, job->vmid, job->vm_pd_addr);
+diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+index 6068b784dc6938..9a30b8c10838c1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
+@@ -1289,7 +1289,7 @@ static int uvd_v7_0_ring_patch_cs_in_place(struct amdgpu_cs_parser *p,
+ struct amdgpu_job *job,
+ struct amdgpu_ib *ib)
+ {
+- struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
++ struct amdgpu_ring *ring = amdgpu_job_ring(job);
+ unsigned i;
+
+ /* No patching necessary for the first instance */
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+index 8de61cc524c943..d2993594c848ad 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
+@@ -1422,6 +1422,7 @@ int kfd_parse_crat_table(void *crat_image, struct list_head *device_list,
+
+
+ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
++ bool cache_line_size_missing,
+ struct kfd_gpu_cache_info *pcache_info)
+ {
+ struct amdgpu_device *adev = kdev->adev;
+@@ -1436,6 +1437,8 @@ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+ pcache_info[i].num_cu_shared = adev->gfx.config.gc_num_tcp_per_wpg / 2;
+ pcache_info[i].cache_line_size = adev->gfx.config.gc_tcp_cache_line_size;
++ if (cache_line_size_missing && !pcache_info[i].cache_line_size)
++ pcache_info[i].cache_line_size = 128;
+ i++;
+ }
+ /* Scalar L1 Instruction Cache per SQC */
+@@ -1448,6 +1451,8 @@ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+ pcache_info[i].num_cu_shared = adev->gfx.config.gc_num_sqc_per_wgp * 2;
+ pcache_info[i].cache_line_size = adev->gfx.config.gc_instruction_cache_line_size;
++ if (cache_line_size_missing && !pcache_info[i].cache_line_size)
++ pcache_info[i].cache_line_size = 128;
+ i++;
+ }
+ /* Scalar L1 Data Cache per SQC */
+@@ -1459,6 +1464,8 @@ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+ pcache_info[i].num_cu_shared = adev->gfx.config.gc_num_sqc_per_wgp * 2;
+ pcache_info[i].cache_line_size = adev->gfx.config.gc_scalar_data_cache_line_size;
++ if (cache_line_size_missing && !pcache_info[i].cache_line_size)
++ pcache_info[i].cache_line_size = 64;
+ i++;
+ }
+ /* GL1 Data Cache per SA */
+@@ -1471,7 +1478,8 @@ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+ pcache_info[i].num_cu_shared = adev->gfx.config.max_cu_per_sh;
+- pcache_info[i].cache_line_size = 0;
++ if (cache_line_size_missing)
++ pcache_info[i].cache_line_size = 128;
+ i++;
+ }
+ /* L2 Data Cache per GPU (Total Tex Cache) */
+@@ -1483,6 +1491,8 @@ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+ pcache_info[i].num_cu_shared = adev->gfx.config.max_cu_per_sh;
+ pcache_info[i].cache_line_size = adev->gfx.config.gc_tcc_cache_line_size;
++ if (cache_line_size_missing && !pcache_info[i].cache_line_size)
++ pcache_info[i].cache_line_size = 128;
+ i++;
+ }
+ /* L3 Data Cache per GPU */
+@@ -1493,7 +1503,7 @@ static int kfd_fill_gpu_cache_info_from_gfx_config(struct kfd_dev *kdev,
+ CRAT_CACHE_FLAGS_DATA_CACHE |
+ CRAT_CACHE_FLAGS_SIMD_CACHE);
+ pcache_info[i].num_cu_shared = adev->gfx.config.max_cu_per_sh;
+- pcache_info[i].cache_line_size = 0;
++ pcache_info[i].cache_line_size = 64;
+ i++;
+ }
+ return i;
+@@ -1568,6 +1578,7 @@ static int kfd_fill_gpu_cache_info_from_gfx_config_v2(struct kfd_dev *kdev,
+ int kfd_get_gpu_cache_info(struct kfd_node *kdev, struct kfd_gpu_cache_info **pcache_info)
+ {
+ int num_of_cache_types = 0;
++ bool cache_line_size_missing = false;
+
+ switch (kdev->adev->asic_type) {
+ case CHIP_KAVERI:
+@@ -1691,10 +1702,17 @@ int kfd_get_gpu_cache_info(struct kfd_node *kdev, struct kfd_gpu_cache_info **pc
+ case IP_VERSION(11, 5, 0):
+ case IP_VERSION(11, 5, 1):
+ case IP_VERSION(11, 5, 2):
++	/* Cacheline size is not available in IP discovery for gc11.
++	 * Have kfd_fill_gpu_cache_info_from_gfx_config() hard-code it.
++	 */
++ cache_line_size_missing = true;
++ fallthrough;
+ case IP_VERSION(12, 0, 0):
+ case IP_VERSION(12, 0, 1):
+ num_of_cache_types =
+- kfd_fill_gpu_cache_info_from_gfx_config(kdev->kfd, *pcache_info);
++ kfd_fill_gpu_cache_info_from_gfx_config(kdev->kfd,
++ cache_line_size_missing,
++ *pcache_info);
+ break;
+ default:
+ *pcache_info = dummy_cache_info;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index 648f40091aa395..f5b3ed20e891b3 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -205,6 +205,21 @@ static int add_queue_mes(struct device_queue_manager *dqm, struct queue *q,
+ if (!down_read_trylock(&adev->reset_domain->sem))
+ return -EIO;
+
++ if (!pdd->proc_ctx_cpu_ptr) {
++ r = amdgpu_amdkfd_alloc_gtt_mem(adev,
++ AMDGPU_MES_PROC_CTX_SIZE,
++ &pdd->proc_ctx_bo,
++ &pdd->proc_ctx_gpu_addr,
++ &pdd->proc_ctx_cpu_ptr,
++ false);
++ if (r) {
++ dev_err(adev->dev,
++ "failed to allocate process context bo\n");
++ return r;
++ }
++ memset(pdd->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE);
++ }
++
+ memset(&queue_input, 0x0, sizeof(struct mes_add_queue_input));
+ queue_input.process_id = qpd->pqm->process->pasid;
+ queue_input.page_table_base_addr = qpd->page_table_base;
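The hunk above moves the process context buffer to lazy allocation: it is
created the first time a queue is actually added, instead of
unconditionally at process-device creation (see the kfd_process.c hunk
below, which removes the eager allocation). A runnable sketch of the
pattern, with plain malloc standing in for amdgpu_amdkfd_alloc_gtt_mem()
and hypothetical names:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct pdd { void *proc_ctx; size_t ctx_size; };

static int ensure_proc_ctx(struct pdd *p)
{
	if (p->proc_ctx)		/* a previous queue already allocated it */
		return 0;

	p->proc_ctx = malloc(p->ctx_size);
	if (!p->proc_ctx)
		return -1;
	memset(p->proc_ctx, 0, p->ctx_size);
	return 0;
}

int main(void)
{
	struct pdd p = { .ctx_size = 64 };

	if (!ensure_proc_ctx(&p) && !ensure_proc_ctx(&p))
		printf("context allocated once, reused afterwards\n");
	free(p.proc_ctx);
	return 0;
}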
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index ff34bb1ac9db79..3139987b82b100 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -1076,7 +1076,8 @@ static void kfd_process_destroy_pdds(struct kfd_process *p)
+
+ kfd_free_process_doorbells(pdd->dev->kfd, pdd);
+
+- if (pdd->dev->kfd->shared_resources.enable_mes)
++ if (pdd->dev->kfd->shared_resources.enable_mes &&
++ pdd->proc_ctx_cpu_ptr)
+ amdgpu_amdkfd_free_gtt_mem(pdd->dev->adev,
+ &pdd->proc_ctx_bo);
+ /*
+@@ -1610,7 +1611,6 @@ struct kfd_process_device *kfd_create_process_device_data(struct kfd_node *dev,
+ struct kfd_process *p)
+ {
+ struct kfd_process_device *pdd = NULL;
+- int retval = 0;
+
+ if (WARN_ON_ONCE(p->n_pdds >= MAX_GPU_INSTANCE))
+ return NULL;
+@@ -1634,21 +1634,6 @@ struct kfd_process_device *kfd_create_process_device_data(struct kfd_node *dev,
+ pdd->user_gpu_id = dev->id;
+ atomic64_set(&pdd->evict_duration_counter, 0);
+
+- if (dev->kfd->shared_resources.enable_mes) {
+- retval = amdgpu_amdkfd_alloc_gtt_mem(dev->adev,
+- AMDGPU_MES_PROC_CTX_SIZE,
+- &pdd->proc_ctx_bo,
+- &pdd->proc_ctx_gpu_addr,
+- &pdd->proc_ctx_cpu_ptr,
+- false);
+- if (retval) {
+- dev_err(dev->adev->dev,
+- "failed to allocate process context bo\n");
+- goto err_free_pdd;
+- }
+- memset(pdd->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE);
+- }
+-
+ p->pdds[p->n_pdds++] = pdd;
+ if (kfd_dbg_is_per_vmid_supported(pdd->dev))
+ pdd->spi_dbg_override = pdd->dev->kfd2kgd->disable_debug_trap(
+@@ -1660,10 +1645,6 @@ struct kfd_process_device *kfd_create_process_device_data(struct kfd_node *dev,
+ idr_init(&pdd->alloc_idr);
+
+ return pdd;
+-
+-err_free_pdd:
+- kfree(pdd);
+- return NULL;
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index 01b960b152743d..ead4317a21680b 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -212,13 +212,17 @@ static void pqm_clean_queue_resource(struct process_queue_manager *pqm,
+ void pqm_uninit(struct process_queue_manager *pqm)
+ {
+ struct process_queue_node *pqn, *next;
+- struct kfd_process_device *pdd;
+
+ list_for_each_entry_safe(pqn, next, &pqm->queues, process_queue_list) {
+ if (pqn->q) {
+- pdd = kfd_get_process_device_data(pqn->q->device, pqm->process);
+- kfd_queue_unref_bo_vas(pdd, &pqn->q->properties);
+- kfd_queue_release_buffers(pdd, &pqn->q->properties);
++ struct kfd_process_device *pdd = kfd_get_process_device_data(pqn->q->device,
++ pqm->process);
++ if (pdd) {
++ kfd_queue_unref_bo_vas(pdd, &pqn->q->properties);
++ kfd_queue_release_buffers(pdd, &pqn->q->properties);
++ } else {
++ WARN_ON(!pdd);
++ }
+ pqm_clean_queue_resource(pqm, pqn);
+ }
+
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index d0e6d051e9cf9f..1aedfafa507f7e 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2717,4 +2717,5 @@ void smu_v13_0_7_set_ppt_funcs(struct smu_context *smu)
+ smu->workload_map = smu_v13_0_7_workload_map;
+ smu->smc_driver_if_version = SMU13_0_7_DRIVER_IF_VERSION;
+ smu_v13_0_set_smu_mailbox_registers(smu);
++ smu->power_profile_mode = PP_SMC_POWER_PROFILE_BOOTUP_DEFAULT;
+ }
+diff --git a/drivers/gpu/drm/drm_panic_qr.rs b/drivers/gpu/drm/drm_panic_qr.rs
+index 1ef56cb07dfbd2..447740d79d3d2e 100644
+--- a/drivers/gpu/drm/drm_panic_qr.rs
++++ b/drivers/gpu/drm/drm_panic_qr.rs
+@@ -929,7 +929,6 @@ fn draw_all(&mut self, data: impl Iterator<Item = u8>) {
+ /// * `tmp` must be valid for reading and writing for `tmp_size` bytes.
+ ///
+ /// They must remain valid for the duration of the function call.
+-
+ #[no_mangle]
+ pub unsafe extern "C" fn drm_panic_qr_generate(
+ url: *const i8,
+diff --git a/drivers/gpu/drm/i915/display/intel_color.c b/drivers/gpu/drm/i915/display/intel_color.c
+index 5d701f48351b96..ec55cb651d4498 100644
+--- a/drivers/gpu/drm/i915/display/intel_color.c
++++ b/drivers/gpu/drm/i915/display/intel_color.c
+@@ -1333,19 +1333,29 @@ static void ilk_load_lut_8(const struct intel_crtc_state *crtc_state,
+ lut = blob->data;
+
+ /*
+- * DSB fails to correctly load the legacy LUT
+- * unless we either write each entry twice,
+- * or use non-posted writes
++ * DSB fails to correctly load the legacy LUT unless
++ * we either write each entry twice when using posted
++ * writes, or we use non-posted writes.
++ *
++ * If palette anti-collision is active during LUT
++ * register writes:
++ * - posted writes simply get dropped and thus the LUT
++ * contents may not be correctly updated
++ * - non-posted writes are blocked and thus the LUT
++ * contents are always correct, but simultaneous CPU
++ * MMIO access will start to fail
++ *
++ * Choose the lesser of two evils and use posted writes.
++ * Using posted writes is also faster, even when having
++ * to write each register twice.
+ */
+- if (crtc_state->dsb_color_vblank)
+- intel_dsb_nonpost_start(crtc_state->dsb_color_vblank);
+-
+- for (i = 0; i < 256; i++)
++ for (i = 0; i < 256; i++) {
+ ilk_lut_write(crtc_state, LGC_PALETTE(pipe, i),
+ i9xx_lut_8(&lut[i]));
+-
+- if (crtc_state->dsb_color_vblank)
+- intel_dsb_nonpost_end(crtc_state->dsb_color_vblank);
++ if (crtc_state->dsb_color_vblank)
++ ilk_lut_write(crtc_state, LGC_PALETTE(pipe, i),
++ i9xx_lut_8(&lut[i]));
++ }
+ }
+
+ static void ilk_load_lut_10(const struct intel_crtc_state *crtc_state,
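The rewritten comment above settles on keeping posted writes and simply
writing each legacy LUT entry twice when a DSB is in use. A stripped-down
model of that loop shape, with a stub standing in for the register write
(all names here are hypothetical):

#include <stdio.h>

/* Stub standing in for the MMIO/DSB palette register write. */
static void lut_write(int index, unsigned int val)
{
	printf("LGC_PALETTE[%d] <= 0x%06x\n", index, val);
}

static void load_lut_8(const unsigned int *lut, int n, int using_dsb)
{
	for (int i = 0; i < n; i++) {	/* 256 entries in the real code */
		lut_write(i, lut[i]);
		if (using_dsb)		/* repeat so a dropped posted write still lands */
			lut_write(i, lut[i]);
	}
}

int main(void)
{
	const unsigned int lut[4] = { 0x000000, 0x555555, 0xaaaaaa, 0xffffff };

	load_lut_8(lut, 4, 1);
	return 0;
}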
+diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
+index 6469b9bcf2ec44..082ac72c757a9f 100644
+--- a/drivers/gpu/drm/i915/i915_gpu_error.c
++++ b/drivers/gpu/drm/i915/i915_gpu_error.c
+@@ -1652,9 +1652,21 @@ capture_engine(struct intel_engine_cs *engine,
+ return NULL;
+
+ intel_engine_get_hung_entity(engine, &ce, &rq);
+- if (rq && !i915_request_started(rq))
+- drm_info(&engine->gt->i915->drm, "Got hung context on %s with active request %lld:%lld [0x%04X] not yet started\n",
+- engine->name, rq->fence.context, rq->fence.seqno, ce->guc_id.id);
++ if (rq && !i915_request_started(rq)) {
++		/*
++		 * We also want to print the guc_id of the context, but if
++		 * we don't hold a context reference, skip printing it.
++		 */
++ if (ce)
++ drm_info(&engine->gt->i915->drm,
++ "Got hung context on %s with active request %lld:%lld [0x%04X] not yet started\n",
++ engine->name, rq->fence.context, rq->fence.seqno, ce->guc_id.id);
++ else
++ drm_info(&engine->gt->i915->drm,
++ "Got hung context on %s with active request %lld:%lld not yet started\n",
++ engine->name, rq->fence.context, rq->fence.seqno);
++ }
+
+ if (rq) {
+ capture = intel_engine_coredump_add_request(ee, rq, ATOMIC_MAYFAIL);
+diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
+index 762127dd56c538..70a854557e6ec5 100644
+--- a/drivers/gpu/drm/i915/i915_scheduler.c
++++ b/drivers/gpu/drm/i915/i915_scheduler.c
+@@ -506,6 +506,6 @@ int __init i915_scheduler_module_init(void)
+ return 0;
+
+ err_priorities:
+- kmem_cache_destroy(slab_priorities);
++ kmem_cache_destroy(slab_dependencies);
+ return -ENOMEM;
+ }
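The one-line fix above follows the usual error-unwind rule: a failure
label must release what was already created, not the object whose
creation just failed. A standalone illustration, with plain malloc/free
standing in for the slab cache API:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	void *dependencies = malloc(16);	/* created first */
	void *priorities;

	if (!dependencies)
		return 1;

	priorities = malloc((size_t)-1);	/* absurd size: fails on any real system */
	if (!priorities)
		goto err_priorities;

	puts("both caches created");
	free(priorities);
	free(dependencies);
	return 0;

err_priorities:
	free(dependencies);	/* release what exists, not what failed */
	return 1;
}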
+diff --git a/drivers/gpu/drm/xe/tests/xe_migrate.c b/drivers/gpu/drm/xe/tests/xe_migrate.c
+index 1a192a2a941b69..3bbdb362d6f0dc 100644
+--- a/drivers/gpu/drm/xe/tests/xe_migrate.c
++++ b/drivers/gpu/drm/xe/tests/xe_migrate.c
+@@ -224,8 +224,8 @@ static void xe_migrate_sanity_test(struct xe_migrate *m, struct kunit *test)
+ XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+ XE_BO_FLAG_PINNED);
+ if (IS_ERR(tiny)) {
+- KUNIT_FAIL(test, "Failed to allocate fake pt: %li\n",
+- PTR_ERR(pt));
++ KUNIT_FAIL(test, "Failed to allocate tiny fake pt: %li\n",
++ PTR_ERR(tiny));
+ goto free_pt;
+ }
+
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+index 9d82ea30f4df23..7e385940df0863 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+@@ -65,6 +65,14 @@ invalidation_fence_signal(struct xe_device *xe, struct xe_gt_tlb_invalidation_fe
+ __invalidation_fence_signal(xe, fence);
+ }
+
++void xe_gt_tlb_invalidation_fence_signal(struct xe_gt_tlb_invalidation_fence *fence)
++{
++ if (WARN_ON_ONCE(!fence->gt))
++ return;
++
++ __invalidation_fence_signal(gt_to_xe(fence->gt), fence);
++}
++
+ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
+ {
+ struct xe_gt *gt = container_of(work, struct xe_gt,
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+index f430d5797af701..00b1c6c01e8d95 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+@@ -28,6 +28,7 @@ int xe_guc_tlb_invalidation_done_handler(struct xe_guc *guc, u32 *msg, u32 len);
+ void xe_gt_tlb_invalidation_fence_init(struct xe_gt *gt,
+ struct xe_gt_tlb_invalidation_fence *fence,
+ bool stack);
++void xe_gt_tlb_invalidation_fence_signal(struct xe_gt_tlb_invalidation_fence *fence);
+
+ static inline void
+ xe_gt_tlb_invalidation_fence_wait(struct xe_gt_tlb_invalidation_fence *fence)
+diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
+index f27f579f4d85aa..797576690356f2 100644
+--- a/drivers/gpu/drm/xe/xe_pt.c
++++ b/drivers/gpu/drm/xe/xe_pt.c
+@@ -1333,8 +1333,7 @@ static void invalidation_fence_cb(struct dma_fence *fence,
+ queue_work(system_wq, &ifence->work);
+ } else {
+ ifence->base.base.error = ifence->fence->error;
+- dma_fence_signal(&ifence->base.base);
+- dma_fence_put(&ifence->base.base);
++ xe_gt_tlb_invalidation_fence_signal(&ifence->base);
+ }
+ dma_fence_put(ifence->fence);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_reg_sr.c b/drivers/gpu/drm/xe/xe_reg_sr.c
+index 440ac572f6e5ef..52969c0909659d 100644
+--- a/drivers/gpu/drm/xe/xe_reg_sr.c
++++ b/drivers/gpu/drm/xe/xe_reg_sr.c
+@@ -26,46 +26,27 @@
+ #include "xe_reg_whitelist.h"
+ #include "xe_rtp_types.h"
+
+-#define XE_REG_SR_GROW_STEP_DEFAULT 16
+-
+ static void reg_sr_fini(struct drm_device *drm, void *arg)
+ {
+ struct xe_reg_sr *sr = arg;
++ struct xe_reg_sr_entry *entry;
++ unsigned long reg;
++
++ xa_for_each(&sr->xa, reg, entry)
++ kfree(entry);
+
+ xa_destroy(&sr->xa);
+- kfree(sr->pool.arr);
+- memset(&sr->pool, 0, sizeof(sr->pool));
+ }
+
+ int xe_reg_sr_init(struct xe_reg_sr *sr, const char *name, struct xe_device *xe)
+ {
+ xa_init(&sr->xa);
+- memset(&sr->pool, 0, sizeof(sr->pool));
+- sr->pool.grow_step = XE_REG_SR_GROW_STEP_DEFAULT;
+ sr->name = name;
+
+ return drmm_add_action_or_reset(&xe->drm, reg_sr_fini, sr);
+ }
+ EXPORT_SYMBOL_IF_KUNIT(xe_reg_sr_init);
+
+-static struct xe_reg_sr_entry *alloc_entry(struct xe_reg_sr *sr)
+-{
+- if (sr->pool.used == sr->pool.allocated) {
+- struct xe_reg_sr_entry *arr;
+-
+- arr = krealloc_array(sr->pool.arr,
+- ALIGN(sr->pool.allocated + 1, sr->pool.grow_step),
+- sizeof(*arr), GFP_KERNEL);
+- if (!arr)
+- return NULL;
+-
+- sr->pool.arr = arr;
+- sr->pool.allocated += sr->pool.grow_step;
+- }
+-
+- return &sr->pool.arr[sr->pool.used++];
+-}
+-
+ static bool compatible_entries(const struct xe_reg_sr_entry *e1,
+ const struct xe_reg_sr_entry *e2)
+ {
+@@ -111,7 +92,7 @@ int xe_reg_sr_add(struct xe_reg_sr *sr,
+ return 0;
+ }
+
+- pentry = alloc_entry(sr);
++ pentry = kmalloc(sizeof(*pentry), GFP_KERNEL);
+ if (!pentry) {
+ ret = -ENOMEM;
+ goto fail;
+diff --git a/drivers/gpu/drm/xe/xe_reg_sr_types.h b/drivers/gpu/drm/xe/xe_reg_sr_types.h
+index ad48a52b824a18..ebe11f237fa26d 100644
+--- a/drivers/gpu/drm/xe/xe_reg_sr_types.h
++++ b/drivers/gpu/drm/xe/xe_reg_sr_types.h
+@@ -20,12 +20,6 @@ struct xe_reg_sr_entry {
+ };
+
+ struct xe_reg_sr {
+- struct {
+- struct xe_reg_sr_entry *arr;
+- unsigned int used;
+- unsigned int allocated;
+- unsigned int grow_step;
+- } pool;
+ struct xarray xa;
+ const char *name;
+
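One way to read the xe_reg_sr rework above: a krealloc-grown pool may
relocate when it grows, so entry pointers handed out earlier can dangle,
whereas per-entry kmalloc allocations tracked in the xarray keep stable
addresses. A userspace sketch of the hazard (whether the block actually
moves is allocator-dependent, and inspecting the stale pointer value is
purely illustrative):

#include <stdio.h>
#include <stdlib.h>

struct entry { int reg; int val; };

int main(void)
{
	struct entry *pool = malloc(4 * sizeof(*pool));
	struct entry *saved, *grown;

	if (!pool)
		return 1;

	saved = &pool[0];	/* pointer handed out to a caller */

	/* Growing the pool may move it, stranding 'saved'. */
	grown = realloc(pool, 4096 * sizeof(*pool));
	if (!grown) {
		free(pool);
		return 1;
	}

	printf("saved=%p, element now at %p%s\n",
	       (void *)saved, (void *)&grown[0],
	       saved == &grown[0] ? " (unmoved this run)" : " (moved)");
	free(grown);
	return 0;
}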
+diff --git a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+index c8ec74f089f3d6..6e41ddaa24d636 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
++++ b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+@@ -339,7 +339,7 @@ tegra241_cmdqv_get_cmdq(struct arm_smmu_device *smmu,
+ * one CPU at a time can enter the process, while the others
+ * will be spinning at the same lock.
+ */
+- lidx = smp_processor_id() % cmdqv->num_lvcmdqs_per_vintf;
++ lidx = raw_smp_processor_id() % cmdqv->num_lvcmdqs_per_vintf;
+ vcmdq = vintf->lvcmdqs[lidx];
+ if (!vcmdq || !READ_ONCE(vcmdq->enabled))
+ return NULL;
+diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
+index e5b89f728ad3b2..09694cca8752df 100644
+--- a/drivers/iommu/intel/cache.c
++++ b/drivers/iommu/intel/cache.c
+@@ -105,12 +105,35 @@ static void cache_tag_unassign(struct dmar_domain *domain, u16 did,
+ spin_unlock_irqrestore(&domain->cache_lock, flags);
+ }
+
++/* domain->qi_batch will be freed in iommu_free_domain() path. */
++static int domain_qi_batch_alloc(struct dmar_domain *domain)
++{
++ unsigned long flags;
++ int ret = 0;
++
++ spin_lock_irqsave(&domain->cache_lock, flags);
++ if (domain->qi_batch)
++ goto out_unlock;
++
++ domain->qi_batch = kzalloc(sizeof(*domain->qi_batch), GFP_ATOMIC);
++ if (!domain->qi_batch)
++ ret = -ENOMEM;
++out_unlock:
++ spin_unlock_irqrestore(&domain->cache_lock, flags);
++
++ return ret;
++}
++
+ static int __cache_tag_assign_domain(struct dmar_domain *domain, u16 did,
+ struct device *dev, ioasid_t pasid)
+ {
+ struct device_domain_info *info = dev_iommu_priv_get(dev);
+ int ret;
+
++ ret = domain_qi_batch_alloc(domain);
++ if (ret)
++ return ret;
++
+ ret = cache_tag_assign(domain, did, dev, pasid, CACHE_TAG_IOTLB);
+ if (ret || !info->ats_enabled)
+ return ret;
+@@ -139,6 +162,10 @@ static int __cache_tag_assign_parent_domain(struct dmar_domain *domain, u16 did,
+ struct device_domain_info *info = dev_iommu_priv_get(dev);
+ int ret;
+
++ ret = domain_qi_batch_alloc(domain);
++ if (ret)
++ return ret;
++
+ ret = cache_tag_assign(domain, did, dev, pasid, CACHE_TAG_NESTING_IOTLB);
+ if (ret || !info->ats_enabled)
+ return ret;
+@@ -190,13 +217,6 @@ int cache_tag_assign_domain(struct dmar_domain *domain,
+ u16 did = domain_get_id_for_dev(domain, dev);
+ int ret;
+
+- /* domain->qi_bach will be freed in iommu_free_domain() path. */
+- if (!domain->qi_batch) {
+- domain->qi_batch = kzalloc(sizeof(*domain->qi_batch), GFP_KERNEL);
+- if (!domain->qi_batch)
+- return -ENOMEM;
+- }
+-
+ ret = __cache_tag_assign_domain(domain, did, dev, pasid);
+ if (ret || domain->domain.type != IOMMU_DOMAIN_NESTED)
+ return ret;
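The new domain_qi_batch_alloc() above is a double-checked allocation
under a lock; because the check-and-allocate now runs with the spinlock
held, the allocation must use GFP_ATOMIC. A runnable pthreads model of
the same shape (hypothetical names, a mutex standing in for the
spinlock):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct domain {
	pthread_mutex_t lock;
	void *qi_batch;
};

static int qi_batch_alloc(struct domain *d, size_t size)
{
	int ret = 0;

	pthread_mutex_lock(&d->lock);
	if (!d->qi_batch)		/* re-check under the lock */
		d->qi_batch = calloc(1, size);
	if (!d->qi_batch)
		ret = -1;
	pthread_mutex_unlock(&d->lock);
	return ret;
}

int main(void)
{
	struct domain d = { .lock = PTHREAD_MUTEX_INITIALIZER };

	if (!qi_batch_alloc(&d, 128) && !qi_batch_alloc(&d, 128))
		printf("batch allocated once, later callers reuse it\n");
	free(d.qi_batch);
	return 0;
}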
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index a167d59101ae2e..cc23cfcdeb2d59 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -3372,6 +3372,9 @@ void device_block_translation(struct device *dev)
+ struct intel_iommu *iommu = info->iommu;
+ unsigned long flags;
+
++ if (info->domain)
++ cache_tag_unassign_domain(info->domain, dev, IOMMU_NO_PASID);
++
+ iommu_disable_pci_caps(info);
+ if (!dev_is_real_dma_subdevice(dev)) {
+ if (sm_supported(iommu))
+@@ -3388,7 +3391,6 @@ void device_block_translation(struct device *dev)
+ list_del(&info->link);
+ spin_unlock_irqrestore(&info->domain->lock, flags);
+
+- cache_tag_unassign_domain(info->domain, dev, IOMMU_NO_PASID);
+ domain_detach_iommu(info->domain, iommu);
+ info->domain = NULL;
+ }
+diff --git a/drivers/md/dm-zoned-reclaim.c b/drivers/md/dm-zoned-reclaim.c
+index d58db9a27e6cfd..76e2c686854871 100644
+--- a/drivers/md/dm-zoned-reclaim.c
++++ b/drivers/md/dm-zoned-reclaim.c
+@@ -76,9 +76,9 @@ static int dmz_reclaim_align_wp(struct dmz_reclaim *zrc, struct dm_zone *zone,
+ * pointer and the requested position.
+ */
+ nr_blocks = block - wp_block;
+- ret = blkdev_issue_zeroout(dev->bdev,
+- dmz_start_sect(zmd, zone) + dmz_blk2sect(wp_block),
+- dmz_blk2sect(nr_blocks), GFP_NOIO, 0);
++ ret = blk_zone_issue_zeroout(dev->bdev,
++ dmz_start_sect(zmd, zone) + dmz_blk2sect(wp_block),
++ dmz_blk2sect(nr_blocks), GFP_NOIO);
+ if (ret) {
+ dmz_dev_err(dev,
+ "Align zone %u wp %llu to %llu (wp+%u) blocks failed %d",
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index 15e0f14d0d49de..4d73abae503d1e 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -1520,9 +1520,7 @@ static netdev_features_t bond_fix_features(struct net_device *dev,
+ struct slave *slave;
+
+ mask = features;
+-
+- features &= ~NETIF_F_ONE_FOR_ALL;
+- features |= NETIF_F_ALL_FOR_ALL;
++ features = netdev_base_features(features);
+
+ bond_for_each_slave(bond, slave, iter) {
+ features = netdev_increment_features(features,
+@@ -1536,6 +1534,7 @@ static netdev_features_t bond_fix_features(struct net_device *dev,
+
+ #define BOND_VLAN_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | \
+ NETIF_F_FRAGLIST | NETIF_F_GSO_SOFTWARE | \
++ NETIF_F_GSO_ENCAP_ALL | \
+ NETIF_F_HIGHDMA | NETIF_F_LRO)
+
+ #define BOND_ENC_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | \
+@@ -1564,8 +1563,9 @@ static void bond_compute_features(struct bonding *bond)
+
+ if (!bond_has_slaves(bond))
+ goto done;
+- vlan_features &= NETIF_F_ALL_FOR_ALL;
+- mpls_features &= NETIF_F_ALL_FOR_ALL;
++
++ vlan_features = netdev_base_features(vlan_features);
++ mpls_features = netdev_base_features(mpls_features);
+
+ bond_for_each_slave(bond, slave, iter) {
+ vlan_features = netdev_increment_features(vlan_features,
+diff --git a/drivers/net/dsa/microchip/ksz_common.c b/drivers/net/dsa/microchip/ksz_common.c
+index 5290f5ad98f392..bf26cd0abf6dd9 100644
+--- a/drivers/net/dsa/microchip/ksz_common.c
++++ b/drivers/net/dsa/microchip/ksz_common.c
+@@ -1098,10 +1098,9 @@ static const struct regmap_range ksz9896_valid_regs[] = {
+ regmap_reg_range(0x1030, 0x1030),
+ regmap_reg_range(0x1100, 0x1115),
+ regmap_reg_range(0x111a, 0x111f),
+- regmap_reg_range(0x1122, 0x1127),
+- regmap_reg_range(0x112a, 0x112b),
+- regmap_reg_range(0x1136, 0x1139),
+- regmap_reg_range(0x113e, 0x113f),
++ regmap_reg_range(0x1120, 0x112b),
++ regmap_reg_range(0x1134, 0x113b),
++ regmap_reg_range(0x113c, 0x113f),
+ regmap_reg_range(0x1400, 0x1401),
+ regmap_reg_range(0x1403, 0x1403),
+ regmap_reg_range(0x1410, 0x1417),
+@@ -1128,10 +1127,9 @@ static const struct regmap_range ksz9896_valid_regs[] = {
+ regmap_reg_range(0x2030, 0x2030),
+ regmap_reg_range(0x2100, 0x2115),
+ regmap_reg_range(0x211a, 0x211f),
+- regmap_reg_range(0x2122, 0x2127),
+- regmap_reg_range(0x212a, 0x212b),
+- regmap_reg_range(0x2136, 0x2139),
+- regmap_reg_range(0x213e, 0x213f),
++ regmap_reg_range(0x2120, 0x212b),
++ regmap_reg_range(0x2134, 0x213b),
++ regmap_reg_range(0x213c, 0x213f),
+ regmap_reg_range(0x2400, 0x2401),
+ regmap_reg_range(0x2403, 0x2403),
+ regmap_reg_range(0x2410, 0x2417),
+@@ -1158,10 +1156,9 @@ static const struct regmap_range ksz9896_valid_regs[] = {
+ regmap_reg_range(0x3030, 0x3030),
+ regmap_reg_range(0x3100, 0x3115),
+ regmap_reg_range(0x311a, 0x311f),
+- regmap_reg_range(0x3122, 0x3127),
+- regmap_reg_range(0x312a, 0x312b),
+- regmap_reg_range(0x3136, 0x3139),
+- regmap_reg_range(0x313e, 0x313f),
++ regmap_reg_range(0x3120, 0x312b),
++ regmap_reg_range(0x3134, 0x313b),
++ regmap_reg_range(0x313c, 0x313f),
+ regmap_reg_range(0x3400, 0x3401),
+ regmap_reg_range(0x3403, 0x3403),
+ regmap_reg_range(0x3410, 0x3417),
+@@ -1188,10 +1185,9 @@ static const struct regmap_range ksz9896_valid_regs[] = {
+ regmap_reg_range(0x4030, 0x4030),
+ regmap_reg_range(0x4100, 0x4115),
+ regmap_reg_range(0x411a, 0x411f),
+- regmap_reg_range(0x4122, 0x4127),
+- regmap_reg_range(0x412a, 0x412b),
+- regmap_reg_range(0x4136, 0x4139),
+- regmap_reg_range(0x413e, 0x413f),
++ regmap_reg_range(0x4120, 0x412b),
++ regmap_reg_range(0x4134, 0x413b),
++ regmap_reg_range(0x413c, 0x413f),
+ regmap_reg_range(0x4400, 0x4401),
+ regmap_reg_range(0x4403, 0x4403),
+ regmap_reg_range(0x4410, 0x4417),
+@@ -1218,10 +1214,9 @@ static const struct regmap_range ksz9896_valid_regs[] = {
+ regmap_reg_range(0x5030, 0x5030),
+ regmap_reg_range(0x5100, 0x5115),
+ regmap_reg_range(0x511a, 0x511f),
+- regmap_reg_range(0x5122, 0x5127),
+- regmap_reg_range(0x512a, 0x512b),
+- regmap_reg_range(0x5136, 0x5139),
+- regmap_reg_range(0x513e, 0x513f),
++ regmap_reg_range(0x5120, 0x512b),
++ regmap_reg_range(0x5134, 0x513b),
++ regmap_reg_range(0x513c, 0x513f),
+ regmap_reg_range(0x5400, 0x5401),
+ regmap_reg_range(0x5403, 0x5403),
+ regmap_reg_range(0x5410, 0x5417),
+@@ -1248,10 +1243,9 @@ static const struct regmap_range ksz9896_valid_regs[] = {
+ regmap_reg_range(0x6030, 0x6030),
+ regmap_reg_range(0x6100, 0x6115),
+ regmap_reg_range(0x611a, 0x611f),
+- regmap_reg_range(0x6122, 0x6127),
+- regmap_reg_range(0x612a, 0x612b),
+- regmap_reg_range(0x6136, 0x6139),
+- regmap_reg_range(0x613e, 0x613f),
++ regmap_reg_range(0x6120, 0x612b),
++ regmap_reg_range(0x6134, 0x613b),
++ regmap_reg_range(0x613c, 0x613f),
+ regmap_reg_range(0x6300, 0x6301),
+ regmap_reg_range(0x6400, 0x6401),
+ regmap_reg_range(0x6403, 0x6403),
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index 0102a82e88cc61..940f1b71226d64 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -24,7 +24,7 @@
+ #define VSC9959_NUM_PORTS 6
+
+ #define VSC9959_TAS_GCL_ENTRY_MAX 63
+-#define VSC9959_TAS_MIN_GATE_LEN_NS 33
++#define VSC9959_TAS_MIN_GATE_LEN_NS 35
+ #define VSC9959_VCAP_POLICER_BASE 63
+ #define VSC9959_VCAP_POLICER_MAX 383
+ #define VSC9959_SWITCH_PCI_BAR 4
+@@ -1056,11 +1056,15 @@ static void vsc9959_mdio_bus_free(struct ocelot *ocelot)
+ mdiobus_free(felix->imdio);
+ }
+
+-/* The switch considers any frame (regardless of size) as eligible for
+- * transmission if the traffic class gate is open for at least 33 ns.
++/* The switch considers any frame (regardless of size) as eligible
++ * for transmission if the traffic class gate is open for at least
++ * VSC9959_TAS_MIN_GATE_LEN_NS.
++ *
+ * Overruns are prevented by cropping an interval at the end of the gate time
+- * slot for which egress scheduling is blocked, but we need to still keep 33 ns
+- * available for one packet to be transmitted, otherwise the port tc will hang.
++ * slot for which egress scheduling is blocked, but we need to still keep
++ * VSC9959_TAS_MIN_GATE_LEN_NS available for one packet to be transmitted,
++ * otherwise the port tc will hang.
++ *
+ * This function returns the size of a gate interval that remains available for
+ * setting the guard band, after reserving the space for one egress frame.
+ */
+@@ -1303,7 +1307,8 @@ static void vsc9959_tas_guard_bands_update(struct ocelot *ocelot, int port)
+ * per-tc static guard band lengths, so it reduces the
+ * useful gate interval length. Therefore, be careful
+ * to calculate a guard band (and therefore max_sdu)
+- * that still leaves 33 ns available in the time slot.
++ * that still leaves VSC9959_TAS_MIN_GATE_LEN_NS
++ * available in the time slot.
+ */
+ max_sdu = div_u64(remaining_gate_len_ps, picos_per_byte);
+ /* A TC gate may be completely closed, which is a
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 3d9ee91e1f8be0..dafc5a4039cd2c 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -1518,7 +1518,7 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
+ if (TPA_START_IS_IPV6(tpa_start1))
+ tpa_info->gso_type = SKB_GSO_TCPV6;
+ /* RSS profiles 1 and 3 with extract code 0 for inner 4-tuple */
+- else if (cmp_type == CMP_TYPE_RX_L2_TPA_START_CMP &&
++ else if (!BNXT_CHIP_P4_PLUS(bp) &&
+ TPA_START_HASH_TYPE(tpa_start) == 3)
+ tpa_info->gso_type = SKB_GSO_TCPV6;
+ tpa_info->rss_hash =
+@@ -2212,15 +2212,13 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ if (cmp_type == CMP_TYPE_RX_L2_V3_CMP) {
+ type = bnxt_rss_ext_op(bp, rxcmp);
+ } else {
+- u32 hash_type = RX_CMP_HASH_TYPE(rxcmp);
++ u32 itypes = RX_CMP_ITYPES(rxcmp);
+
+- /* RSS profiles 1 and 3 with extract code 0 for inner
+- * 4-tuple
+- */
+- if (hash_type != 1 && hash_type != 3)
+- type = PKT_HASH_TYPE_L3;
+- else
++ if (itypes == RX_CMP_FLAGS_ITYPE_TCP ||
++ itypes == RX_CMP_FLAGS_ITYPE_UDP)
+ type = PKT_HASH_TYPE_L4;
++ else
++ type = PKT_HASH_TYPE_L3;
+ }
+ skb_set_hash(skb, le32_to_cpu(rxcmp->rx_cmp_rss_hash), type);
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 69231e85140b2e..9e05704d94450e 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -267,6 +267,9 @@ struct rx_cmp {
+ (((le32_to_cpu((rxcmp)->rx_cmp_misc_v1) & RX_CMP_RSS_HASH_TYPE) >>\
+ RX_CMP_RSS_HASH_TYPE_SHIFT) & RSS_PROFILE_ID_MASK)
+
++#define RX_CMP_ITYPES(rxcmp) \
++ (le32_to_cpu((rxcmp)->rx_cmp_len_flags_type) & RX_CMP_FLAGS_ITYPES_MASK)
++
+ #define RX_CMP_V3_HASH_TYPE_LEGACY(rxcmp) \
+ ((le32_to_cpu((rxcmp)->rx_cmp_misc_v1) & RX_CMP_V3_RSS_EXT_OP_LEGACY) >>\
+ RX_CMP_V3_RSS_EXT_OP_LEGACY_SHIFT)
+@@ -378,7 +381,7 @@ struct rx_agg_cmp {
+ u32 rx_agg_cmp_opaque;
+ __le32 rx_agg_cmp_v;
+ #define RX_AGG_CMP_V (1 << 0)
+- #define RX_AGG_CMP_AGG_ID (0xffff << 16)
++ #define RX_AGG_CMP_AGG_ID (0x0fff << 16)
+ #define RX_AGG_CMP_AGG_ID_SHIFT 16
+ __le32 rx_agg_cmp_unused;
+ };
+@@ -416,7 +419,7 @@ struct rx_tpa_start_cmp {
+ #define RX_TPA_START_CMP_V3_RSS_HASH_TYPE_SHIFT 7
+ #define RX_TPA_START_CMP_AGG_ID (0x7f << 25)
+ #define RX_TPA_START_CMP_AGG_ID_SHIFT 25
+- #define RX_TPA_START_CMP_AGG_ID_P5 (0xffff << 16)
++ #define RX_TPA_START_CMP_AGG_ID_P5 (0x0fff << 16)
+ #define RX_TPA_START_CMP_AGG_ID_SHIFT_P5 16
+ #define RX_TPA_START_CMP_METADATA1 (0xf << 28)
+ #define RX_TPA_START_CMP_METADATA1_SHIFT 28
+@@ -540,7 +543,7 @@ struct rx_tpa_end_cmp {
+ #define RX_TPA_END_CMP_PAYLOAD_OFFSET_SHIFT 16
+ #define RX_TPA_END_CMP_AGG_ID (0x7f << 25)
+ #define RX_TPA_END_CMP_AGG_ID_SHIFT 25
+- #define RX_TPA_END_CMP_AGG_ID_P5 (0xffff << 16)
++ #define RX_TPA_END_CMP_AGG_ID_P5 (0x0fff << 16)
+ #define RX_TPA_END_CMP_AGG_ID_SHIFT_P5 16
+
+ __le32 rx_tpa_end_cmp_tsdelta;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
+index bbf7641a0fc799..7e13cd69f68a1f 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
+@@ -2077,7 +2077,7 @@ void t4_idma_monitor(struct adapter *adapter,
+ struct sge_idma_monitor_state *idma,
+ int hz, int ticks);
+ int t4_set_vf_mac_acl(struct adapter *adapter, unsigned int vf,
+- unsigned int naddr, u8 *addr);
++ u8 start, unsigned int naddr, u8 *addr);
+ void t4_tp_pio_read(struct adapter *adap, u32 *buff, u32 nregs,
+ u32 start_index, bool sleep_ok);
+ void t4_tp_tm_pio_read(struct adapter *adap, u32 *buff, u32 nregs,
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 2418645c882373..fb3933fbb8425e 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -3246,7 +3246,7 @@ static int cxgb4_mgmt_set_vf_mac(struct net_device *dev, int vf, u8 *mac)
+
+ dev_info(pi->adapter->pdev_dev,
+ "Setting MAC %pM on VF %d\n", mac, vf);
+- ret = t4_set_vf_mac_acl(adap, vf + 1, 1, mac);
++ ret = t4_set_vf_mac_acl(adap, vf + 1, pi->lport, 1, mac);
+ if (!ret)
+ ether_addr_copy(adap->vfinfo[vf].vf_mac_addr, mac);
+ return ret;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index 76de55306c4d01..175bf9b1305888 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -10215,11 +10215,12 @@ int t4_load_cfg(struct adapter *adap, const u8 *cfg_data, unsigned int size)
+ * t4_set_vf_mac_acl - Set MAC address for the specified VF
+ * @adapter: The adapter
+ * @vf: one of the VFs instantiated by the specified PF
++ * @start: The start port id associated with specified VF
+ * @naddr: the number of MAC addresses
+ * @addr: the MAC address(es) to be set to the specified VF
+ */
+ int t4_set_vf_mac_acl(struct adapter *adapter, unsigned int vf,
+- unsigned int naddr, u8 *addr)
++ u8 start, unsigned int naddr, u8 *addr)
+ {
+ struct fw_acl_mac_cmd cmd;
+
+@@ -10234,7 +10235,7 @@ int t4_set_vf_mac_acl(struct adapter *adapter, unsigned int vf,
+ cmd.en_to_len16 = cpu_to_be32((unsigned int)FW_LEN16(cmd));
+ cmd.nmac = naddr;
+
+- switch (adapter->pf) {
++ switch (start) {
+ case 3:
+ memcpy(cmd.macaddr3, addr, sizeof(cmd.macaddr3));
+ break;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
+index 3d74109f82300e..49f22cad92bfd0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_domain.c
+@@ -297,7 +297,9 @@ dr_domain_add_vport_cap(struct mlx5dr_domain *dmn, u16 vport)
+ if (ret) {
+ mlx5dr_dbg(dmn, "Couldn't insert new vport into xarray (%d)\n", ret);
+ kvfree(vport_caps);
+- return ERR_PTR(ret);
++ if (ret == -EBUSY)
++ return ERR_PTR(-EBUSY);
++ return NULL;
+ }
+
+ return vport_caps;
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
+index b64c814eac11e8..0c4c75b3682faa 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_main.c
+@@ -693,12 +693,11 @@ static int sparx5_start(struct sparx5 *sparx5)
+ err = -ENXIO;
+ if (sparx5->fdma_irq >= 0) {
+ if (GCB_CHIP_ID_REV_ID_GET(sparx5->chip_id) > 0)
+- err = devm_request_threaded_irq(sparx5->dev,
+- sparx5->fdma_irq,
+- NULL,
+- sparx5_fdma_handler,
+- IRQF_ONESHOT,
+- "sparx5-fdma", sparx5);
++ err = devm_request_irq(sparx5->dev,
++ sparx5->fdma_irq,
++ sparx5_fdma_handler,
++ 0,
++ "sparx5-fdma", sparx5);
+ if (!err)
+ err = sparx5_fdma_start(sparx5);
+ if (err)
+diff --git a/drivers/net/ethernet/microchip/sparx5/sparx5_port.c b/drivers/net/ethernet/microchip/sparx5/sparx5_port.c
+index 062e486c002cf6..672508efce5c29 100644
+--- a/drivers/net/ethernet/microchip/sparx5/sparx5_port.c
++++ b/drivers/net/ethernet/microchip/sparx5/sparx5_port.c
+@@ -1119,7 +1119,7 @@ int sparx5_port_init(struct sparx5 *sparx5,
+ spx5_inst_rmw(DEV10G_MAC_MAXLEN_CFG_MAX_LEN_SET(ETH_MAXLEN),
+ DEV10G_MAC_MAXLEN_CFG_MAX_LEN,
+ devinst,
+- DEV10G_MAC_ENA_CFG(0));
++ DEV10G_MAC_MAXLEN_CFG(0));
+
+ /* Handle Signal Detect in 10G PCS */
+ spx5_inst_wr(PCS10G_BR_PCS_SD_CFG_SD_POL_SET(sd_pol) |
+diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+index ca4ed58f1206dd..0c2ba2fa88c466 100644
+--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
++++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+@@ -1315,7 +1315,7 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
+ GFP_KERNEL);
+ if (!gc->irq_contexts) {
+ err = -ENOMEM;
+- goto free_irq_vector;
++ goto free_irq_array;
+ }
+
+ for (i = 0; i < nvec; i++) {
+@@ -1372,6 +1372,7 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
+ gc->max_num_msix = nvec;
+ gc->num_msix_usable = nvec;
+ cpus_read_unlock();
++ kfree(irqs);
+ return 0;
+
+ free_irq:
+@@ -1384,8 +1385,9 @@ static int mana_gd_setup_irqs(struct pci_dev *pdev)
+ }
+
+ kfree(gc->irq_contexts);
+- kfree(irqs);
+ gc->irq_contexts = NULL;
++free_irq_array:
++ kfree(irqs);
+ free_irq_vector:
+ cpus_read_unlock();
+ pci_free_irq_vectors(pdev);
+diff --git a/drivers/net/ethernet/mscc/ocelot_ptp.c b/drivers/net/ethernet/mscc/ocelot_ptp.c
+index e172638b060102..808ce8e68d3937 100644
+--- a/drivers/net/ethernet/mscc/ocelot_ptp.c
++++ b/drivers/net/ethernet/mscc/ocelot_ptp.c
+@@ -14,6 +14,8 @@
+ #include <soc/mscc/ocelot.h>
+ #include "ocelot.h"
+
++#define OCELOT_PTP_TX_TSTAMP_TIMEOUT (5 * HZ)
++
+ int ocelot_ptp_gettime64(struct ptp_clock_info *ptp, struct timespec64 *ts)
+ {
+ struct ocelot *ocelot = container_of(ptp, struct ocelot, ptp_info);
+@@ -495,6 +497,28 @@ static int ocelot_traps_to_ptp_rx_filter(unsigned int proto)
+ return HWTSTAMP_FILTER_NONE;
+ }
+
++static int ocelot_ptp_tx_type_to_cmd(int tx_type, int *ptp_cmd)
++{
++ switch (tx_type) {
++ case HWTSTAMP_TX_ON:
++ *ptp_cmd = IFH_REW_OP_TWO_STEP_PTP;
++ break;
++ case HWTSTAMP_TX_ONESTEP_SYNC:
++		/* IFH_REW_OP_ONE_STEP_PTP updates the correctionField,
++		 * whereas what we need to update is the originTimestamp.
++		 */
++ *ptp_cmd = IFH_REW_OP_ORIGIN_PTP;
++ break;
++ case HWTSTAMP_TX_OFF:
++ *ptp_cmd = 0;
++ break;
++ default:
++ return -ERANGE;
++ }
++
++ return 0;
++}
++
+ int ocelot_hwstamp_get(struct ocelot *ocelot, int port, struct ifreq *ifr)
+ {
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
+@@ -521,30 +545,19 @@ EXPORT_SYMBOL(ocelot_hwstamp_get);
+ int ocelot_hwstamp_set(struct ocelot *ocelot, int port, struct ifreq *ifr)
+ {
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
++ int ptp_cmd, old_ptp_cmd = ocelot_port->ptp_cmd;
+ bool l2 = false, l4 = false;
+ struct hwtstamp_config cfg;
++ bool old_l2, old_l4;
+ int err;
+
+ if (copy_from_user(&cfg, ifr->ifr_data, sizeof(cfg)))
+ return -EFAULT;
+
+ /* Tx type sanity check */
+- switch (cfg.tx_type) {
+- case HWTSTAMP_TX_ON:
+- ocelot_port->ptp_cmd = IFH_REW_OP_TWO_STEP_PTP;
+- break;
+- case HWTSTAMP_TX_ONESTEP_SYNC:
+- /* IFH_REW_OP_ONE_STEP_PTP updates the correctional field, we
+- * need to update the origin time.
+- */
+- ocelot_port->ptp_cmd = IFH_REW_OP_ORIGIN_PTP;
+- break;
+- case HWTSTAMP_TX_OFF:
+- ocelot_port->ptp_cmd = 0;
+- break;
+- default:
+- return -ERANGE;
+- }
++ err = ocelot_ptp_tx_type_to_cmd(cfg.tx_type, &ptp_cmd);
++ if (err)
++ return err;
+
+ switch (cfg.rx_filter) {
+ case HWTSTAMP_FILTER_NONE:
+@@ -569,13 +582,27 @@ int ocelot_hwstamp_set(struct ocelot *ocelot, int port, struct ifreq *ifr)
+ return -ERANGE;
+ }
+
++ old_l2 = ocelot_port->trap_proto & OCELOT_PROTO_PTP_L2;
++ old_l4 = ocelot_port->trap_proto & OCELOT_PROTO_PTP_L4;
++
+ err = ocelot_setup_ptp_traps(ocelot, port, l2, l4);
+ if (err)
+ return err;
+
++ ocelot_port->ptp_cmd = ptp_cmd;
++
+ cfg.rx_filter = ocelot_traps_to_ptp_rx_filter(ocelot_port->trap_proto);
+
+- return copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg)) ? -EFAULT : 0;
++ if (copy_to_user(ifr->ifr_data, &cfg, sizeof(cfg))) {
++ err = -EFAULT;
++ goto out_restore_ptp_traps;
++ }
++
++ return 0;
++out_restore_ptp_traps:
++ ocelot_setup_ptp_traps(ocelot, port, old_l2, old_l4);
++ ocelot_port->ptp_cmd = old_ptp_cmd;
++ return err;
+ }
+ EXPORT_SYMBOL(ocelot_hwstamp_set);
+
+@@ -603,34 +630,87 @@ int ocelot_get_ts_info(struct ocelot *ocelot, int port,
+ }
+ EXPORT_SYMBOL(ocelot_get_ts_info);
+
+-static int ocelot_port_add_txtstamp_skb(struct ocelot *ocelot, int port,
+- struct sk_buff *clone)
++static struct sk_buff *ocelot_port_dequeue_ptp_tx_skb(struct ocelot *ocelot,
++ int port, u8 ts_id,
++ u32 seqid)
+ {
+ struct ocelot_port *ocelot_port = ocelot->ports[port];
+- unsigned long flags;
++ struct sk_buff *skb, *skb_tmp, *skb_match = NULL;
++ struct ptp_header *hdr;
+
+- spin_lock_irqsave(&ocelot->ts_id_lock, flags);
++ spin_lock(&ocelot->ts_id_lock);
+
+- if (ocelot_port->ptp_skbs_in_flight == OCELOT_MAX_PTP_ID ||
+- ocelot->ptp_skbs_in_flight == OCELOT_PTP_FIFO_SIZE) {
+- spin_unlock_irqrestore(&ocelot->ts_id_lock, flags);
+- return -EBUSY;
++ skb_queue_walk_safe(&ocelot_port->tx_skbs, skb, skb_tmp) {
++ if (OCELOT_SKB_CB(skb)->ts_id != ts_id)
++ continue;
++
++ /* Check that the timestamp ID is for the expected PTP
++ * sequenceId. We don't have to test ptp_parse_header() against
++ * NULL, because we've pre-validated the packet's ptp_class.
++ */
++ hdr = ptp_parse_header(skb, OCELOT_SKB_CB(skb)->ptp_class);
++ if (seqid != ntohs(hdr->sequence_id))
++ continue;
++
++ __skb_unlink(skb, &ocelot_port->tx_skbs);
++ ocelot->ptp_skbs_in_flight--;
++ skb_match = skb;
++ break;
+ }
+
+- skb_shinfo(clone)->tx_flags |= SKBTX_IN_PROGRESS;
+- /* Store timestamp ID in OCELOT_SKB_CB(clone)->ts_id */
+- OCELOT_SKB_CB(clone)->ts_id = ocelot_port->ts_id;
++ spin_unlock(&ocelot->ts_id_lock);
+
+- ocelot_port->ts_id++;
+- if (ocelot_port->ts_id == OCELOT_MAX_PTP_ID)
+- ocelot_port->ts_id = 0;
++ return skb_match;
++}
++
++static int ocelot_port_queue_ptp_tx_skb(struct ocelot *ocelot, int port,
++ struct sk_buff *clone)
++{
++ struct ocelot_port *ocelot_port = ocelot->ports[port];
++ DECLARE_BITMAP(ts_id_in_flight, OCELOT_MAX_PTP_ID);
++ struct sk_buff *skb, *skb_tmp;
++ unsigned long n;
++
++ spin_lock(&ocelot->ts_id_lock);
++
++ /* To get a better chance of acquiring a timestamp ID, first flush the
++ * stale packets still waiting in the TX timestamping queue. They are
++ * probably lost.
++ */
++ skb_queue_walk_safe(&ocelot_port->tx_skbs, skb, skb_tmp) {
++ if (time_before(OCELOT_SKB_CB(skb)->ptp_tx_time +
++ OCELOT_PTP_TX_TSTAMP_TIMEOUT, jiffies)) {
++ dev_warn_ratelimited(ocelot->dev,
++ "port %d invalidating stale timestamp ID %u which seems lost\n",
++ port, OCELOT_SKB_CB(skb)->ts_id);
++ __skb_unlink(skb, &ocelot_port->tx_skbs);
++ kfree_skb(skb);
++ ocelot->ptp_skbs_in_flight--;
++ } else {
++ __set_bit(OCELOT_SKB_CB(skb)->ts_id, ts_id_in_flight);
++ }
++ }
++
++ if (ocelot->ptp_skbs_in_flight == OCELOT_PTP_FIFO_SIZE) {
++ spin_unlock(&ocelot->ts_id_lock);
++ return -EBUSY;
++ }
++
++ n = find_first_zero_bit(ts_id_in_flight, OCELOT_MAX_PTP_ID);
++ if (n == OCELOT_MAX_PTP_ID) {
++ spin_unlock(&ocelot->ts_id_lock);
++ return -EBUSY;
++ }
+
+- ocelot_port->ptp_skbs_in_flight++;
++ /* Found an available timestamp ID, use it */
++ OCELOT_SKB_CB(clone)->ts_id = n;
++ OCELOT_SKB_CB(clone)->ptp_tx_time = jiffies;
+ ocelot->ptp_skbs_in_flight++;
++ __skb_queue_tail(&ocelot_port->tx_skbs, clone);
+
+- skb_queue_tail(&ocelot_port->tx_skbs, clone);
++ spin_unlock(&ocelot->ts_id_lock);
+
+- spin_unlock_irqrestore(&ocelot->ts_id_lock, flags);
++ dev_dbg_ratelimited(ocelot->dev, "port %d timestamp id %lu\n", port, n);
+
+ return 0;
+ }
+@@ -687,10 +767,14 @@ int ocelot_port_txtstamp_request(struct ocelot *ocelot, int port,
+ if (!(*clone))
+ return -ENOMEM;
+
+- err = ocelot_port_add_txtstamp_skb(ocelot, port, *clone);
+- if (err)
++ /* Store timestamp ID in OCELOT_SKB_CB(clone)->ts_id */
++ err = ocelot_port_queue_ptp_tx_skb(ocelot, port, *clone);
++ if (err) {
++ kfree_skb(*clone);
+ return err;
++ }
+
++ skb_shinfo(*clone)->tx_flags |= SKBTX_IN_PROGRESS;
+ OCELOT_SKB_CB(skb)->ptp_cmd = ptp_cmd;
+ OCELOT_SKB_CB(*clone)->ptp_class = ptp_class;
+ }
+@@ -726,28 +810,15 @@ static void ocelot_get_hwtimestamp(struct ocelot *ocelot,
+ spin_unlock_irqrestore(&ocelot->ptp_clock_lock, flags);
+ }
+
+-static bool ocelot_validate_ptp_skb(struct sk_buff *clone, u16 seqid)
+-{
+- struct ptp_header *hdr;
+-
+- hdr = ptp_parse_header(clone, OCELOT_SKB_CB(clone)->ptp_class);
+- if (WARN_ON(!hdr))
+- return false;
+-
+- return seqid == ntohs(hdr->sequence_id);
+-}
+-
+ void ocelot_get_txtstamp(struct ocelot *ocelot)
+ {
+ int budget = OCELOT_PTP_QUEUE_SZ;
+
+ while (budget--) {
+- struct sk_buff *skb, *skb_tmp, *skb_match = NULL;
+ struct skb_shared_hwtstamps shhwtstamps;
+ u32 val, id, seqid, txport;
+- struct ocelot_port *port;
++ struct sk_buff *skb_match;
+ struct timespec64 ts;
+- unsigned long flags;
+
+ val = ocelot_read(ocelot, SYS_PTP_STATUS);
+
+@@ -762,36 +833,14 @@ void ocelot_get_txtstamp(struct ocelot *ocelot)
+ txport = SYS_PTP_STATUS_PTP_MESS_TXPORT_X(val);
+ seqid = SYS_PTP_STATUS_PTP_MESS_SEQ_ID(val);
+
+- port = ocelot->ports[txport];
+-
+- spin_lock(&ocelot->ts_id_lock);
+- port->ptp_skbs_in_flight--;
+- ocelot->ptp_skbs_in_flight--;
+- spin_unlock(&ocelot->ts_id_lock);
+-
+ /* Retrieve its associated skb */
+-try_again:
+- spin_lock_irqsave(&port->tx_skbs.lock, flags);
+-
+- skb_queue_walk_safe(&port->tx_skbs, skb, skb_tmp) {
+- if (OCELOT_SKB_CB(skb)->ts_id != id)
+- continue;
+- __skb_unlink(skb, &port->tx_skbs);
+- skb_match = skb;
+- break;
+- }
+-
+- spin_unlock_irqrestore(&port->tx_skbs.lock, flags);
+-
+- if (WARN_ON(!skb_match))
+- continue;
+-
+- if (!ocelot_validate_ptp_skb(skb_match, seqid)) {
+- dev_err_ratelimited(ocelot->dev,
+- "port %d received stale TX timestamp for seqid %d, discarding\n",
+- txport, seqid);
+- dev_kfree_skb_any(skb);
+- goto try_again;
++ skb_match = ocelot_port_dequeue_ptp_tx_skb(ocelot, txport, id,
++ seqid);
++ if (!skb_match) {
++ dev_warn_ratelimited(ocelot->dev,
++ "port %d received TX timestamp (seqid %d, ts id %u) for packet previously declared stale\n",
++ txport, seqid, id);
++ goto next_ts;
+ }
+
+ /* Get the h/w timestamp */
+@@ -802,7 +851,7 @@ void ocelot_get_txtstamp(struct ocelot *ocelot)
+ shhwtstamps.hwtstamp = ktime_set(ts.tv_sec, ts.tv_nsec);
+ skb_complete_tx_timestamp(skb_match, &shhwtstamps);
+
+- /* Next ts */
++next_ts:
+ ocelot_write(ocelot, SYS_PTP_NXT_PTP_NXT, SYS_PTP_NXT);
+ }
+ }
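The reworked timestamping above hands out timestamp IDs by scanning a
bitmap of IDs still in flight and taking the first free one (the
find_first_zero_bit() call). A compact standalone model of that
allocation step, with a 64-ID space standing in for OCELOT_MAX_PTP_ID:

#include <stdio.h>
#include <stdint.h>

#define MAX_IDS 64

/* Return the first free ID and mark it busy, or MAX_IDS if none left. */
static int alloc_ts_id(uint64_t *in_flight)
{
	for (int i = 0; i < MAX_IDS; i++) {
		if (!(*in_flight & ((uint64_t)1 << i))) {
			*in_flight |= (uint64_t)1 << i;
			return i;
		}
	}
	return MAX_IDS;
}

int main(void)
{
	uint64_t in_flight = 0;

	in_flight |= (uint64_t)1 << 0;	/* IDs 0 and 2 still await timestamps */
	in_flight |= (uint64_t)1 << 2;

	printf("allocated ID %d\n", alloc_ts_id(&in_flight));	/* 1 */
	printf("allocated ID %d\n", alloc_ts_id(&in_flight));	/* 3 */
	return 0;
}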
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
+index 8f7ce6b51a1c9b..6b4b40c6e1fe00 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.c
++++ b/drivers/net/ethernet/qualcomm/qca_spi.c
+@@ -53,7 +53,7 @@ MODULE_PARM_DESC(qcaspi_burst_len, "Number of data bytes per burst. Use 1-5000."
+
+ #define QCASPI_PLUGGABLE_MIN 0
+ #define QCASPI_PLUGGABLE_MAX 1
+-static int qcaspi_pluggable = QCASPI_PLUGGABLE_MIN;
++static int qcaspi_pluggable = QCASPI_PLUGGABLE_MAX;
+ module_param(qcaspi_pluggable, int, 0);
+ MODULE_PARM_DESC(qcaspi_pluggable, "Pluggable SPI connection (yes/no).");
+
+@@ -812,7 +812,6 @@ qcaspi_netdev_init(struct net_device *dev)
+
+ dev->mtu = QCAFRM_MAX_MTU;
+ dev->type = ARPHRD_ETHER;
+- qca->clkspeed = qcaspi_clkspeed;
+ qca->burst_len = qcaspi_burst_len;
+ qca->spi_thread = NULL;
+ qca->buffer_size = (QCAFRM_MAX_MTU + VLAN_ETH_HLEN + QCAFRM_HEADER_LEN +
+@@ -903,17 +902,15 @@ qca_spi_probe(struct spi_device *spi)
+ legacy_mode = of_property_read_bool(spi->dev.of_node,
+ "qca,legacy-mode");
+
+- if (qcaspi_clkspeed == 0) {
+- if (spi->max_speed_hz)
+- qcaspi_clkspeed = spi->max_speed_hz;
+- else
+- qcaspi_clkspeed = QCASPI_CLK_SPEED;
+- }
++ if (qcaspi_clkspeed)
++ spi->max_speed_hz = qcaspi_clkspeed;
++ else if (!spi->max_speed_hz)
++ spi->max_speed_hz = QCASPI_CLK_SPEED;
+
+- if ((qcaspi_clkspeed < QCASPI_CLK_SPEED_MIN) ||
+- (qcaspi_clkspeed > QCASPI_CLK_SPEED_MAX)) {
+- dev_err(&spi->dev, "Invalid clkspeed: %d\n",
+- qcaspi_clkspeed);
++ if (spi->max_speed_hz < QCASPI_CLK_SPEED_MIN ||
++ spi->max_speed_hz > QCASPI_CLK_SPEED_MAX) {
++ dev_err(&spi->dev, "Invalid clkspeed: %u\n",
++ spi->max_speed_hz);
+ return -EINVAL;
+ }
+
+@@ -938,14 +935,13 @@ qca_spi_probe(struct spi_device *spi)
+ return -EINVAL;
+ }
+
+- dev_info(&spi->dev, "ver=%s, clkspeed=%d, burst_len=%d, pluggable=%d\n",
++ dev_info(&spi->dev, "ver=%s, clkspeed=%u, burst_len=%d, pluggable=%d\n",
+ QCASPI_DRV_VERSION,
+- qcaspi_clkspeed,
++ spi->max_speed_hz,
+ qcaspi_burst_len,
+ qcaspi_pluggable);
+
+ spi->mode = SPI_MODE_3;
+- spi->max_speed_hz = qcaspi_clkspeed;
+ if (spi_setup(spi) < 0) {
+ dev_err(&spi->dev, "Unable to setup SPI device\n");
+ return -EFAULT;
+diff --git a/drivers/net/ethernet/qualcomm/qca_spi.h b/drivers/net/ethernet/qualcomm/qca_spi.h
+index 8f4808695e8206..0831cefc58b898 100644
+--- a/drivers/net/ethernet/qualcomm/qca_spi.h
++++ b/drivers/net/ethernet/qualcomm/qca_spi.h
+@@ -89,7 +89,6 @@ struct qcaspi {
+ #endif
+
+ /* user configurable options */
+- u32 clkspeed;
+ u8 legacy_mode;
+ u16 burst_len;
+ };
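The qca_spi change above drops the driver's private clkspeed copy and
folds the module parameter into spi->max_speed_hz before spi_setup(). A
small model of the resulting precedence and range check (the constants
are placeholders, not the driver's actual QCASPI_CLK_SPEED_* values):

#include <stdio.h>

#define CLK_MIN	1000000u	/* placeholder bounds */
#define CLK_DEF	8000000u
#define CLK_MAX	50000000u

static unsigned int pick_clkspeed(unsigned int module_param,
				  unsigned int bus_value)
{
	if (module_param)
		return module_param;	/* explicit user override wins */
	if (bus_value)
		return bus_value;	/* value from firmware/devicetree */
	return CLK_DEF;			/* driver fallback */
}

int main(void)
{
	unsigned int hz = pick_clkspeed(0, 12000000);

	if (hz < CLK_MIN || hz > CLK_MAX) {
		fprintf(stderr, "Invalid clkspeed: %u\n", hz);
		return 1;
	}
	printf("using %u Hz\n", hz);
	return 0;
}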
+diff --git a/drivers/net/ethernet/renesas/rswitch.c b/drivers/net/ethernet/renesas/rswitch.c
+index b80aa27a7214d4..09117110e3dd2a 100644
+--- a/drivers/net/ethernet/renesas/rswitch.c
++++ b/drivers/net/ethernet/renesas/rswitch.c
+@@ -862,13 +862,10 @@ static void rswitch_tx_free(struct net_device *ndev)
+ struct rswitch_ext_desc *desc;
+ struct sk_buff *skb;
+
+- for (; rswitch_get_num_cur_queues(gq) > 0;
+- gq->dirty = rswitch_next_queue_index(gq, false, 1)) {
+- desc = &gq->tx_ring[gq->dirty];
+- if ((desc->desc.die_dt & DT_MASK) != DT_FEMPTY)
+- break;
+-
++ desc = &gq->tx_ring[gq->dirty];
++ while ((desc->desc.die_dt & DT_MASK) == DT_FEMPTY) {
+ dma_rmb();
++
+ skb = gq->skbs[gq->dirty];
+ if (skb) {
+ rdev->ndev->stats.tx_packets++;
+@@ -879,7 +876,10 @@ static void rswitch_tx_free(struct net_device *ndev)
+ dev_kfree_skb_any(gq->skbs[gq->dirty]);
+ gq->skbs[gq->dirty] = NULL;
+ }
++
+ desc->desc.die_dt = DT_EEMPTY;
++ gq->dirty = rswitch_next_queue_index(gq, false, 1);
++ desc = &gq->tx_ring[gq->dirty];
+ }
+ }
+
+@@ -908,8 +908,10 @@ static int rswitch_poll(struct napi_struct *napi, int budget)
+
+ if (napi_complete_done(napi, budget - quota)) {
+ spin_lock_irqsave(&priv->lock, flags);
+- rswitch_enadis_data_irq(priv, rdev->tx_queue->index, true);
+- rswitch_enadis_data_irq(priv, rdev->rx_queue->index, true);
++ if (test_bit(rdev->port, priv->opened_ports)) {
++ rswitch_enadis_data_irq(priv, rdev->tx_queue->index, true);
++ rswitch_enadis_data_irq(priv, rdev->rx_queue->index, true);
++ }
+ spin_unlock_irqrestore(&priv->lock, flags);
+ }
+
+@@ -1114,25 +1116,40 @@ static int rswitch_etha_wait_link_verification(struct rswitch_etha *etha)
+
+ static void rswitch_rmac_setting(struct rswitch_etha *etha, const u8 *mac)
+ {
+- u32 val;
++ u32 pis, lsc;
+
+ rswitch_etha_write_mac_address(etha, mac);
+
++ switch (etha->phy_interface) {
++ case PHY_INTERFACE_MODE_SGMII:
++ pis = MPIC_PIS_GMII;
++ break;
++ case PHY_INTERFACE_MODE_USXGMII:
++ case PHY_INTERFACE_MODE_5GBASER:
++ pis = MPIC_PIS_XGMII;
++ break;
++ default:
++ pis = FIELD_GET(MPIC_PIS, ioread32(etha->addr + MPIC));
++ break;
++ }
++
+ switch (etha->speed) {
+ case 100:
+- val = MPIC_LSC_100M;
++ lsc = MPIC_LSC_100M;
+ break;
+ case 1000:
+- val = MPIC_LSC_1G;
++ lsc = MPIC_LSC_1G;
+ break;
+ case 2500:
+- val = MPIC_LSC_2_5G;
++ lsc = MPIC_LSC_2_5G;
+ break;
+ default:
+- return;
++ lsc = FIELD_GET(MPIC_LSC, ioread32(etha->addr + MPIC));
++ break;
+ }
+
+- iowrite32(MPIC_PIS_GMII | val, etha->addr + MPIC);
++ rswitch_modify(etha->addr, MPIC, MPIC_PIS | MPIC_LSC,
++ FIELD_PREP(MPIC_PIS, pis) | FIELD_PREP(MPIC_LSC, lsc));
+ }
+
+ static void rswitch_etha_enable_mii(struct rswitch_etha *etha)
+@@ -1538,20 +1555,20 @@ static int rswitch_open(struct net_device *ndev)
+ struct rswitch_device *rdev = netdev_priv(ndev);
+ unsigned long flags;
+
+- phy_start(ndev->phydev);
++ if (bitmap_empty(rdev->priv->opened_ports, RSWITCH_NUM_PORTS))
++ iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDIE);
+
+ napi_enable(&rdev->napi);
+- netif_start_queue(ndev);
+
+ spin_lock_irqsave(&rdev->priv->lock, flags);
++ bitmap_set(rdev->priv->opened_ports, rdev->port, 1);
+ rswitch_enadis_data_irq(rdev->priv, rdev->tx_queue->index, true);
+ rswitch_enadis_data_irq(rdev->priv, rdev->rx_queue->index, true);
+ spin_unlock_irqrestore(&rdev->priv->lock, flags);
+
+- if (bitmap_empty(rdev->priv->opened_ports, RSWITCH_NUM_PORTS))
+- iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDIE);
++ phy_start(ndev->phydev);
+
+- bitmap_set(rdev->priv->opened_ports, rdev->port, 1);
++ netif_start_queue(ndev);
+
+ return 0;
+ };
+@@ -1563,7 +1580,16 @@ static int rswitch_stop(struct net_device *ndev)
+ unsigned long flags;
+
+ netif_tx_stop_all_queues(ndev);
++
++ phy_stop(ndev->phydev);
++
++ spin_lock_irqsave(&rdev->priv->lock, flags);
++ rswitch_enadis_data_irq(rdev->priv, rdev->tx_queue->index, false);
++ rswitch_enadis_data_irq(rdev->priv, rdev->rx_queue->index, false);
+ bitmap_clear(rdev->priv->opened_ports, rdev->port, 1);
++ spin_unlock_irqrestore(&rdev->priv->lock, flags);
++
++ napi_disable(&rdev->napi);
+
+ if (bitmap_empty(rdev->priv->opened_ports, RSWITCH_NUM_PORTS))
+ iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDID);
+@@ -1576,14 +1602,6 @@ static int rswitch_stop(struct net_device *ndev)
+ kfree(ts_info);
+ }
+
+- spin_lock_irqsave(&rdev->priv->lock, flags);
+- rswitch_enadis_data_irq(rdev->priv, rdev->tx_queue->index, false);
+- rswitch_enadis_data_irq(rdev->priv, rdev->rx_queue->index, false);
+- spin_unlock_irqrestore(&rdev->priv->lock, flags);
+-
+- phy_stop(ndev->phydev);
+- napi_disable(&rdev->napi);
+-
+ return 0;
+ };
+
+@@ -1681,8 +1699,11 @@ static netdev_tx_t rswitch_start_xmit(struct sk_buff *skb, struct net_device *nd
+ if (dma_mapping_error(ndev->dev.parent, dma_addr_orig))
+ goto err_kfree;
+
+- gq->skbs[gq->cur] = skb;
+- gq->unmap_addrs[gq->cur] = dma_addr_orig;
++ /* Store the skb at the last descriptor to avoid freeing the skb before the hardware completes the send */
++ gq->skbs[(gq->cur + nr_desc - 1) % gq->ring_size] = skb;
++ gq->unmap_addrs[(gq->cur + nr_desc - 1) % gq->ring_size] = dma_addr_orig;
++
++ dma_wmb();
+
+ /* DT_FSTART should be set at last. So, this is reverse order. */
+ for (i = nr_desc; i-- > 0; ) {
+@@ -1694,14 +1715,13 @@ static netdev_tx_t rswitch_start_xmit(struct sk_buff *skb, struct net_device *nd
+ goto err_unmap;
+ }
+
+- wmb(); /* gq->cur must be incremented after die_dt was set */
+-
+ gq->cur = rswitch_next_queue_index(gq, true, nr_desc);
+ rswitch_modify(rdev->addr, GWTRC(gq->index), 0, BIT(gq->index % 32));
+
+ return ret;
+
+ err_unmap:
++ gq->skbs[(gq->cur + nr_desc - 1) % gq->ring_size] = NULL;
+ dma_unmap_single(ndev->dev.parent, dma_addr_orig, skb->len, DMA_TO_DEVICE);
+
+ err_kfree:
+@@ -1889,7 +1909,6 @@ static int rswitch_device_alloc(struct rswitch_private *priv, unsigned int index
+ rdev->np_port = rswitch_get_port_node(rdev);
+ rdev->disabled = !rdev->np_port;
+ err = of_get_ethdev_address(rdev->np_port, ndev);
+- of_node_put(rdev->np_port);
+ if (err) {
+ if (is_valid_ether_addr(rdev->etha->mac_addr))
+ eth_hw_addr_set(ndev, rdev->etha->mac_addr);
+@@ -1919,6 +1938,7 @@ static int rswitch_device_alloc(struct rswitch_private *priv, unsigned int index
+
+ out_rxdmac:
+ out_get_params:
++ of_node_put(rdev->np_port);
+ netif_napi_del(&rdev->napi);
+ free_netdev(ndev);
+
+@@ -1932,6 +1952,7 @@ static void rswitch_device_free(struct rswitch_private *priv, unsigned int index
+
+ rswitch_txdmac_free(ndev);
+ rswitch_rxdmac_free(ndev);
++ of_node_put(rdev->np_port);
+ netif_napi_del(&rdev->napi);
+ free_netdev(ndev);
+ }
+diff --git a/drivers/net/ethernet/renesas/rswitch.h b/drivers/net/ethernet/renesas/rswitch.h
+index 72e3ff596d3183..e020800dcc570e 100644
+--- a/drivers/net/ethernet/renesas/rswitch.h
++++ b/drivers/net/ethernet/renesas/rswitch.h
+@@ -724,13 +724,13 @@ enum rswitch_etha_mode {
+
+ #define EAVCC_VEM_SC_TAG (0x3 << 16)
+
+-#define MPIC_PIS_MII 0x00
+-#define MPIC_PIS_GMII 0x02
+-#define MPIC_PIS_XGMII 0x04
+-#define MPIC_LSC_SHIFT 3
+-#define MPIC_LSC_100M (1 << MPIC_LSC_SHIFT)
+-#define MPIC_LSC_1G (2 << MPIC_LSC_SHIFT)
+-#define MPIC_LSC_2_5G (3 << MPIC_LSC_SHIFT)
++#define MPIC_PIS GENMASK(2, 0)
++#define MPIC_PIS_GMII 2
++#define MPIC_PIS_XGMII 4
++#define MPIC_LSC GENMASK(5, 3)
++#define MPIC_LSC_100M 1
++#define MPIC_LSC_1G 2
++#define MPIC_LSC_2_5G 3
+
+ #define MDIO_READ_C45 0x03
+ #define MDIO_WRITE_C45 0x01
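
With PIS and LSC now expressed as GENMASK() fields, the driver can update just those bits through the bitfield helpers while preserving the rest of MPIC. A minimal sketch of the read-modify-write this enables (the helper name is illustrative, not part of the patch):

#include <linux/bitfield.h>
#include <linux/bits.h>

#define MPIC_PIS	GENMASK(2, 0)
#define MPIC_LSC	GENMASK(5, 3)

/* Replace only the PIS and LSC fields of a register value. */
static u32 mpic_set_fields(u32 mpic, u32 pis, u32 lsc)
{
	mpic &= ~(MPIC_PIS | MPIC_LSC);		/* clear the two fields */
	return mpic | FIELD_PREP(MPIC_PIS, pis) |
		      FIELD_PREP(MPIC_LSC, lsc);	/* shift new values in */
}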
+diff --git a/drivers/net/team/team_core.c b/drivers/net/team/team_core.c
+index 18191d5a8bd4d3..6ace5a74cddb57 100644
+--- a/drivers/net/team/team_core.c
++++ b/drivers/net/team/team_core.c
+@@ -983,7 +983,8 @@ static void team_port_disable(struct team *team,
+
+ #define TEAM_VLAN_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | \
+ NETIF_F_FRAGLIST | NETIF_F_GSO_SOFTWARE | \
+- NETIF_F_HIGHDMA | NETIF_F_LRO)
++ NETIF_F_HIGHDMA | NETIF_F_LRO | \
++ NETIF_F_GSO_ENCAP_ALL)
+
+ #define TEAM_ENC_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | \
+ NETIF_F_RXCSUM | NETIF_F_GSO_SOFTWARE)
+@@ -991,13 +992,14 @@ static void team_port_disable(struct team *team,
+ static void __team_compute_features(struct team *team)
+ {
+ struct team_port *port;
+- netdev_features_t vlan_features = TEAM_VLAN_FEATURES &
+- NETIF_F_ALL_FOR_ALL;
++ netdev_features_t vlan_features = TEAM_VLAN_FEATURES;
+ netdev_features_t enc_features = TEAM_ENC_FEATURES;
+ unsigned short max_hard_header_len = ETH_HLEN;
+ unsigned int dst_release_flag = IFF_XMIT_DST_RELEASE |
+ IFF_XMIT_DST_RELEASE_PERM;
+
++ vlan_features = netdev_base_features(vlan_features);
++
+ rcu_read_lock();
+ list_for_each_entry_rcu(port, &team->port_list, list) {
+ vlan_features = netdev_increment_features(vlan_features,
+@@ -2012,8 +2014,7 @@ static netdev_features_t team_fix_features(struct net_device *dev,
+ netdev_features_t mask;
+
+ mask = features;
+- features &= ~NETIF_F_ONE_FOR_ALL;
+- features |= NETIF_F_ALL_FOR_ALL;
++ features = netdev_base_features(features);
+
+ rcu_read_lock();
+ list_for_each_entry_rcu(port, &team->port_list, list) {
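
netdev_base_features() is not defined in this hunk; judging from the two lines it replaces in team_fix_features(), it presumably folds the feature adjustments into one helper, roughly:

#include <linux/netdev_features.h>

/* Sketch inferred from the replaced lines -- not the verbatim definition. */
static inline netdev_features_t netdev_base_features(netdev_features_t features)
{
	features &= ~NETIF_F_ONE_FOR_ALL;
	features |= NETIF_F_ALL_FOR_ALL;
	return features;
}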
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index c897afef0b414c..60027b439021b8 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -502,6 +502,7 @@ struct virtio_net_common_hdr {
+ };
+
+ static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
++static void virtnet_sq_free_unused_buf_done(struct virtqueue *vq);
+ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
+ struct net_device *dev,
+ unsigned int *xdp_xmit,
+@@ -2898,7 +2899,6 @@ static int virtnet_enable_queue_pair(struct virtnet_info *vi, int qp_index)
+ if (err < 0)
+ goto err_xdp_reg_mem_model;
+
+- netdev_tx_reset_queue(netdev_get_tx_queue(vi->dev, qp_index));
+ virtnet_napi_enable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
+ virtnet_napi_tx_enable(vi, vi->sq[qp_index].vq, &vi->sq[qp_index].napi);
+
+@@ -3166,7 +3166,7 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
+
+ virtnet_rx_pause(vi, rq);
+
+- err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_unmap_free_buf);
++ err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_unmap_free_buf, NULL);
+ if (err)
+ netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err);
+
+@@ -3229,7 +3229,8 @@ static int virtnet_tx_resize(struct virtnet_info *vi, struct send_queue *sq,
+
+ virtnet_tx_pause(vi, sq);
+
+- err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
++ err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf,
++ virtnet_sq_free_unused_buf_done);
+ if (err)
+ netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err);
+
+@@ -5997,6 +5998,14 @@ static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
+ xdp_return_frame(ptr_to_xdp(buf));
+ }
+
++static void virtnet_sq_free_unused_buf_done(struct virtqueue *vq)
++{
++ struct virtnet_info *vi = vq->vdev->priv;
++ int i = vq2txq(vq);
++
++ netdev_tx_reset_queue(netdev_get_tx_queue(vi->dev, i));
++}
++
+ static void free_unused_bufs(struct virtnet_info *vi)
+ {
+ void *buf;
+@@ -6728,11 +6737,20 @@ static int virtnet_probe(struct virtio_device *vdev)
+
+ static void remove_vq_common(struct virtnet_info *vi)
+ {
++ int i;
++
+ virtio_reset_device(vi->vdev);
+
+ /* Free unused buffers in both send and recv, if any. */
+ free_unused_bufs(vi);
+
++ /*
++ * As a rule of thumb, netdev_tx_reset_queue() should follow any
++ * skb freeing that is not followed by netdev_tx_completed_queue().
++ */
++ for (i = 0; i < vi->max_queue_pairs; i++)
++ netdev_tx_reset_queue(netdev_get_tx_queue(vi->dev, i));
++
+ free_receive_bufs(vi);
+
+ free_receive_page_frags(vi);
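
The resets added above follow the byte-queue-limit (BQL) accounting contract: bytes reported with netdev_tx_sent_queue() must either be retired with netdev_tx_completed_queue() or dropped wholesale with netdev_tx_reset_queue(). A hedged sketch of the pairing (the wrapper function is only for illustration):

#include <linux/netdevice.h>

/* Sketch: the three BQL entry points and when each one is used. */
static void bql_pattern(struct netdev_queue *txq, unsigned int pkts,
			unsigned int bytes)
{
	netdev_tx_sent_queue(txq, bytes);	     /* when queuing to hardware */
	netdev_tx_completed_queue(txq, pkts, bytes); /* normal completion path */
	netdev_tx_reset_queue(txq);		     /* after freeing skbs with no
						      * completion accounting */
}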
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+index a7a10e716e6517..e96ddaeeeeff52 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c
+@@ -1967,7 +1967,7 @@ void iwl_mvm_channel_switch_error_notif(struct iwl_mvm *mvm,
+ if (csa_err_mask & (CS_ERR_COUNT_ERROR |
+ CS_ERR_LONG_DELAY_AFTER_CS |
+ CS_ERR_TX_BLOCK_TIMER_EXPIRED))
+- ieee80211_channel_switch_disconnect(vif, true);
++ ieee80211_channel_switch_disconnect(vif);
+ rcu_read_unlock();
+ }
+
+diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
+index 4265c1cd0ff716..63fe51d0e64db3 100644
+--- a/drivers/net/xen-netfront.c
++++ b/drivers/net/xen-netfront.c
+@@ -867,7 +867,7 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
+ static int xennet_close(struct net_device *dev)
+ {
+ struct netfront_info *np = netdev_priv(dev);
+- unsigned int num_queues = dev->real_num_tx_queues;
++ unsigned int num_queues = np->queues ? dev->real_num_tx_queues : 0;
+ unsigned int i;
+ struct netfront_queue *queue;
+ netif_tx_stop_all_queues(np->netdev);
+@@ -882,6 +882,9 @@ static void xennet_destroy_queues(struct netfront_info *info)
+ {
+ unsigned int i;
+
++ if (!info->queues)
++ return;
++
+ for (i = 0; i < info->netdev->real_num_tx_queues; i++) {
+ struct netfront_queue *queue = &info->queues[i];
+
+diff --git a/drivers/ptp/ptp_kvm_x86.c b/drivers/ptp/ptp_kvm_x86.c
+index 617c8d6706d3d0..6cea4fe39bcfe4 100644
+--- a/drivers/ptp/ptp_kvm_x86.c
++++ b/drivers/ptp/ptp_kvm_x86.c
+@@ -26,7 +26,7 @@ int kvm_arch_ptp_init(void)
+ long ret;
+
+ if (!kvm_para_available())
+- return -ENODEV;
++ return -EOPNOTSUPP;
+
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
+ p = alloc_page(GFP_KERNEL | __GFP_ZERO);
+@@ -46,14 +46,14 @@ int kvm_arch_ptp_init(void)
+
+ clock_pair_gpa = slow_virt_to_phys(clock_pair);
+ if (!pvclock_get_pvti_cpu0_va()) {
+- ret = -ENODEV;
++ ret = -EOPNOTSUPP;
+ goto err;
+ }
+
+ ret = kvm_hypercall2(KVM_HC_CLOCK_PAIRING, clock_pair_gpa,
+ KVM_CLOCK_PAIRING_WALLCLOCK);
+ if (ret == -KVM_ENOSYS) {
+- ret = -ENODEV;
++ ret = -EOPNOTSUPP;
+ goto err;
+ }
+
+diff --git a/drivers/regulator/axp20x-regulator.c b/drivers/regulator/axp20x-regulator.c
+index a8e91d9d028b89..945d2917b91bac 100644
+--- a/drivers/regulator/axp20x-regulator.c
++++ b/drivers/regulator/axp20x-regulator.c
+@@ -371,8 +371,8 @@
+ .ops = &axp20x_ops, \
+ }
+
+-#define AXP_DESC(_family, _id, _match, _supply, _min, _max, _step, _vreg, \
+- _vmask, _ereg, _emask) \
++#define AXP_DESC_DELAY(_family, _id, _match, _supply, _min, _max, _step, _vreg, \
++ _vmask, _ereg, _emask, _ramp_delay) \
+ [_family##_##_id] = { \
+ .name = (_match), \
+ .supply_name = (_supply), \
+@@ -388,9 +388,15 @@
+ .vsel_mask = (_vmask), \
+ .enable_reg = (_ereg), \
+ .enable_mask = (_emask), \
++ .ramp_delay = (_ramp_delay), \
+ .ops = &axp20x_ops, \
+ }
+
++#define AXP_DESC(_family, _id, _match, _supply, _min, _max, _step, _vreg, \
++ _vmask, _ereg, _emask) \
++ AXP_DESC_DELAY(_family, _id, _match, _supply, _min, _max, _step, _vreg, \
++ _vmask, _ereg, _emask, 0)
++
+ #define AXP_DESC_SW(_family, _id, _match, _supply, _ereg, _emask) \
+ [_family##_##_id] = { \
+ .name = (_match), \
+@@ -419,8 +425,8 @@
+ .ops = &axp20x_ops_fixed \
+ }
+
+-#define AXP_DESC_RANGES(_family, _id, _match, _supply, _ranges, _n_voltages, \
+- _vreg, _vmask, _ereg, _emask) \
++#define AXP_DESC_RANGES_DELAY(_family, _id, _match, _supply, _ranges, _n_voltages, \
++ _vreg, _vmask, _ereg, _emask, _ramp_delay) \
+ [_family##_##_id] = { \
+ .name = (_match), \
+ .supply_name = (_supply), \
+@@ -436,9 +442,15 @@
+ .enable_mask = (_emask), \
+ .linear_ranges = (_ranges), \
+ .n_linear_ranges = ARRAY_SIZE(_ranges), \
++ .ramp_delay = (_ramp_delay), \
+ .ops = &axp20x_ops_range, \
+ }
+
++#define AXP_DESC_RANGES(_family, _id, _match, _supply, _ranges, _n_voltages, \
++ _vreg, _vmask, _ereg, _emask) \
++ AXP_DESC_RANGES_DELAY(_family, _id, _match, _supply, _ranges, \
++ _n_voltages, _vreg, _vmask, _ereg, _emask, 0)
++
+ static const int axp209_dcdc2_ldo3_slew_rates[] = {
+ 1600,
+ 800,
+@@ -781,21 +793,21 @@ static const struct linear_range axp717_dcdc3_ranges[] = {
+ };
+
+ static const struct regulator_desc axp717_regulators[] = {
+- AXP_DESC_RANGES(AXP717, DCDC1, "dcdc1", "vin1",
++ AXP_DESC_RANGES_DELAY(AXP717, DCDC1, "dcdc1", "vin1",
+ axp717_dcdc1_ranges, AXP717_DCDC1_NUM_VOLTAGES,
+ AXP717_DCDC1_CONTROL, AXP717_DCDC_V_OUT_MASK,
+- AXP717_DCDC_OUTPUT_CONTROL, BIT(0)),
+- AXP_DESC_RANGES(AXP717, DCDC2, "dcdc2", "vin2",
++ AXP717_DCDC_OUTPUT_CONTROL, BIT(0), 640),
++ AXP_DESC_RANGES_DELAY(AXP717, DCDC2, "dcdc2", "vin2",
+ axp717_dcdc2_ranges, AXP717_DCDC2_NUM_VOLTAGES,
+ AXP717_DCDC2_CONTROL, AXP717_DCDC_V_OUT_MASK,
+- AXP717_DCDC_OUTPUT_CONTROL, BIT(1)),
+- AXP_DESC_RANGES(AXP717, DCDC3, "dcdc3", "vin3",
++ AXP717_DCDC_OUTPUT_CONTROL, BIT(1), 640),
++ AXP_DESC_RANGES_DELAY(AXP717, DCDC3, "dcdc3", "vin3",
+ axp717_dcdc3_ranges, AXP717_DCDC3_NUM_VOLTAGES,
+ AXP717_DCDC3_CONTROL, AXP717_DCDC_V_OUT_MASK,
+- AXP717_DCDC_OUTPUT_CONTROL, BIT(2)),
+- AXP_DESC(AXP717, DCDC4, "dcdc4", "vin4", 1000, 3700, 100,
++ AXP717_DCDC_OUTPUT_CONTROL, BIT(2), 640),
++ AXP_DESC_DELAY(AXP717, DCDC4, "dcdc4", "vin4", 1000, 3700, 100,
+ AXP717_DCDC4_CONTROL, AXP717_DCDC_V_OUT_MASK,
+- AXP717_DCDC_OUTPUT_CONTROL, BIT(3)),
++ AXP717_DCDC_OUTPUT_CONTROL, BIT(3), 6400),
+ AXP_DESC(AXP717, ALDO1, "aldo1", "aldoin", 500, 3500, 100,
+ AXP717_ALDO1_CONTROL, AXP717_LDO_V_OUT_MASK,
+ AXP717_LDO0_OUTPUT_CONTROL, BIT(0)),
+diff --git a/drivers/spi/spi-aspeed-smc.c b/drivers/spi/spi-aspeed-smc.c
+index bbd417c55e7f56..b0e3f307b28353 100644
+--- a/drivers/spi/spi-aspeed-smc.c
++++ b/drivers/spi/spi-aspeed-smc.c
+@@ -239,7 +239,7 @@ static ssize_t aspeed_spi_read_user(struct aspeed_spi_chip *chip,
+
+ ret = aspeed_spi_send_cmd_addr(chip, op->addr.nbytes, offset, op->cmd.opcode);
+ if (ret < 0)
+- return ret;
++ goto stop_user;
+
+ if (op->dummy.buswidth && op->dummy.nbytes) {
+ for (i = 0; i < op->dummy.nbytes / op->dummy.buswidth; i++)
+@@ -249,8 +249,9 @@ static ssize_t aspeed_spi_read_user(struct aspeed_spi_chip *chip,
+ aspeed_spi_set_io_mode(chip, io_mode);
+
+ aspeed_spi_read_from_ahb(buf, chip->ahb_base, len);
++stop_user:
+ aspeed_spi_stop_user(chip);
+- return 0;
++ return ret;
+ }
+
+ static ssize_t aspeed_spi_write_user(struct aspeed_spi_chip *chip,
+@@ -261,10 +262,11 @@ static ssize_t aspeed_spi_write_user(struct aspeed_spi_chip *chip,
+ aspeed_spi_start_user(chip);
+ ret = aspeed_spi_send_cmd_addr(chip, op->addr.nbytes, op->addr.val, op->cmd.opcode);
+ if (ret < 0)
+- return ret;
++ goto stop_user;
+ aspeed_spi_write_to_ahb(chip->ahb_base, op->data.buf.out, op->data.nbytes);
++stop_user:
+ aspeed_spi_stop_user(chip);
+- return 0;
++ return ret;
+ }
+
+ /* support for 1-1-1, 1-1-2 or 1-1-4 */
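
Both hunks above enforce the same unwind rule: once aspeed_spi_start_user() has run, every exit path must run aspeed_spi_stop_user(). A generic sketch of that goto pattern, with hypothetical stand-in helpers:

#include <linux/types.h>

struct chip;	/* opaque; stands in for struct aspeed_spi_chip */

static void start_user(struct chip *chip) { /* enter user mode */ }
static void stop_user(struct chip *chip) { /* leave user mode */ }
static ssize_t send_cmd_addr(struct chip *chip) { return 0; }
static void transfer_data(struct chip *chip) { }

static ssize_t op_in_user_mode(struct chip *chip)
{
	ssize_t ret;

	start_user(chip);
	ret = send_cmd_addr(chip);
	if (ret < 0)
		goto out_stop;		/* must not leave user mode engaged */
	transfer_data(chip);
out_stop:
	stop_user(chip);		/* runs on success and failure alike */
	return ret;
}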
+diff --git a/drivers/spi/spi-rockchip.c b/drivers/spi/spi-rockchip.c
+index 0bb33c43b1b46e..40a64a598a7495 100644
+--- a/drivers/spi/spi-rockchip.c
++++ b/drivers/spi/spi-rockchip.c
+@@ -241,6 +241,20 @@ static void rockchip_spi_set_cs(struct spi_device *spi, bool enable)
+ struct spi_controller *ctlr = spi->controller;
+ struct rockchip_spi *rs = spi_controller_get_devdata(ctlr);
+ bool cs_asserted = spi->mode & SPI_CS_HIGH ? enable : !enable;
++ bool cs_actual;
++
++ /*
++ * The SPI subsystem tries to avoid no-op calls that would break the
++ * PM refcounting below, but it cannot do so the first time a device
++ * is used. To detect that case, read the current CS state from the
++ * register and bail out early for no-ops.
++ */
++ if (spi_get_csgpiod(spi, 0))
++ cs_actual = !!(readl_relaxed(rs->regs + ROCKCHIP_SPI_SER) & 1);
++ else
++ cs_actual = !!(readl_relaxed(rs->regs + ROCKCHIP_SPI_SER) &
++ BIT(spi_get_chipselect(spi, 0)));
++ if (unlikely(cs_actual == cs_asserted))
++ return;
+
+ if (cs_asserted) {
+ /* Keep things powered as long as CS is asserted */
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index b80e9a528e17ff..bdf17eafd3598d 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -157,6 +157,7 @@ struct sci_port {
+
+ bool has_rtscts;
+ bool autorts;
++ bool tx_occurred;
+ };
+
+ #define SCI_NPORTS CONFIG_SERIAL_SH_SCI_NR_UARTS
+@@ -850,6 +851,7 @@ static void sci_transmit_chars(struct uart_port *port)
+ {
+ struct tty_port *tport = &port->state->port;
+ unsigned int stopped = uart_tx_stopped(port);
++ struct sci_port *s = to_sci_port(port);
+ unsigned short status;
+ unsigned short ctrl;
+ int count;
+@@ -885,6 +887,7 @@ static void sci_transmit_chars(struct uart_port *port)
+ }
+
+ sci_serial_out(port, SCxTDR, c);
++ s->tx_occurred = true;
+
+ port->icount.tx++;
+ } while (--count > 0);
+@@ -1241,6 +1244,8 @@ static void sci_dma_tx_complete(void *arg)
+ if (kfifo_len(&tport->xmit_fifo) < WAKEUP_CHARS)
+ uart_write_wakeup(port);
+
++ s->tx_occurred = true;
++
+ if (!kfifo_is_empty(&tport->xmit_fifo)) {
+ s->cookie_tx = 0;
+ schedule_work(&s->work_tx);
+@@ -1731,6 +1736,19 @@ static void sci_flush_buffer(struct uart_port *port)
+ s->cookie_tx = -EINVAL;
+ }
+ }
++
++static void sci_dma_check_tx_occurred(struct sci_port *s)
++{
++ struct dma_tx_state state;
++ enum dma_status status;
++
++ if (!s->chan_tx)
++ return;
++
++ status = dmaengine_tx_status(s->chan_tx, s->cookie_tx, &state);
++ if (status == DMA_COMPLETE || status == DMA_IN_PROGRESS)
++ s->tx_occurred = true;
++}
+ #else /* !CONFIG_SERIAL_SH_SCI_DMA */
+ static inline void sci_request_dma(struct uart_port *port)
+ {
+@@ -1740,6 +1758,10 @@ static inline void sci_free_dma(struct uart_port *port)
+ {
+ }
+
++static void sci_dma_check_tx_occurred(struct sci_port *s)
++{
++}
++
+ #define sci_flush_buffer NULL
+ #endif /* !CONFIG_SERIAL_SH_SCI_DMA */
+
+@@ -2076,6 +2098,12 @@ static unsigned int sci_tx_empty(struct uart_port *port)
+ {
+ unsigned short status = sci_serial_in(port, SCxSR);
+ unsigned short in_tx_fifo = sci_txfill(port);
++ struct sci_port *s = to_sci_port(port);
++
++ sci_dma_check_tx_occurred(s);
++
++ if (!s->tx_occurred)
++ return TIOCSER_TEMT;
+
+ return (status & SCxSR_TEND(port)) && !in_tx_fifo ? TIOCSER_TEMT : 0;
+ }
+@@ -2247,6 +2275,7 @@ static int sci_startup(struct uart_port *port)
+
+ dev_dbg(port->dev, "%s(%d)\n", __func__, port->line);
+
++ s->tx_occurred = false;
+ sci_request_dma(port);
+
+ ret = sci_request_irq(s);
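
sci_dma_check_tx_occurred() relies on the dmaengine status API: a cookie that reports DMA_COMPLETE or DMA_IN_PROGRESS proves the descriptor was at least submitted to the engine. A reduced sketch of that query:

#include <linux/dmaengine.h>

/* Returns true if the descriptor identified by @cookie ever started. */
static bool dma_tx_seen(struct dma_chan *chan, dma_cookie_t cookie)
{
	struct dma_tx_state state;
	enum dma_status status;

	status = dmaengine_tx_status(chan, cookie, &state);
	return status == DMA_COMPLETE || status == DMA_IN_PROGRESS;
}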
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index cfebe4a1af9e84..bc13133efaa508 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -5566,6 +5566,7 @@ void ufshcd_compl_one_cqe(struct ufs_hba *hba, int task_tag,
+
+ lrbp = &hba->lrb[task_tag];
+ lrbp->compl_time_stamp = ktime_get();
++ lrbp->compl_time_stamp_local_clock = local_clock();
+ cmd = lrbp->cmd;
+ if (cmd) {
+ if (unlikely(ufshcd_should_inform_monitor(hba, lrbp)))
+diff --git a/drivers/usb/core/hcd.c b/drivers/usb/core/hcd.c
+index 500dc35e64774d..0b2490347b9fe7 100644
+--- a/drivers/usb/core/hcd.c
++++ b/drivers/usb/core/hcd.c
+@@ -2794,8 +2794,14 @@ int usb_add_hcd(struct usb_hcd *hcd,
+ int retval;
+ struct usb_device *rhdev;
+ struct usb_hcd *shared_hcd;
++ int skip_phy_initialization;
+
+- if (!hcd->skip_phy_initialization) {
++ if (usb_hcd_is_primary_hcd(hcd))
++ skip_phy_initialization = hcd->skip_phy_initialization;
++ else
++ skip_phy_initialization = hcd->primary_hcd->skip_phy_initialization;
++
++ if (!skip_phy_initialization) {
+ if (usb_hcd_is_primary_hcd(hcd)) {
+ hcd->phy_roothub = usb_phy_roothub_alloc(hcd->self.sysdev);
+ if (IS_ERR(hcd->phy_roothub))
+diff --git a/drivers/usb/dwc2/hcd.c b/drivers/usb/dwc2/hcd.c
+index cb54390e7de488..8c3941ecaaf5d4 100644
+--- a/drivers/usb/dwc2/hcd.c
++++ b/drivers/usb/dwc2/hcd.c
+@@ -3546,11 +3546,9 @@ static int dwc2_hcd_hub_control(struct dwc2_hsotg *hsotg, u16 typereq,
+ port_status |= USB_PORT_STAT_C_OVERCURRENT << 16;
+ }
+
+- if (!hsotg->flags.b.port_connect_status) {
++ if (dwc2_is_device_mode(hsotg)) {
+ /*
+- * The port is disconnected, which means the core is
+- * either in device mode or it soon will be. Just
+- * return 0's for the remainder of the port status
++ * Just return 0's for the remainder of the port status
+ * since the port register can't be read if the core
+ * is in device mode.
+ */
+@@ -3620,13 +3618,11 @@ static int dwc2_hcd_hub_control(struct dwc2_hsotg *hsotg, u16 typereq,
+ if (wvalue != USB_PORT_FEAT_TEST && (!windex || windex > 1))
+ goto error;
+
+- if (!hsotg->flags.b.port_connect_status) {
++ if (dwc2_is_device_mode(hsotg)) {
+ /*
+- * The port is disconnected, which means the core is
+- * either in device mode or it soon will be. Just
+- * return without doing anything since the port
+- * register can't be written if the core is in device
+- * mode.
++ * Just return without doing anything since the port
++ * register can't be written if the core is in device
++ * mode.
+ */
+ break;
+ }
+@@ -4349,7 +4345,7 @@ static int _dwc2_hcd_suspend(struct usb_hcd *hcd)
+ if (hsotg->bus_suspended)
+ goto skip_power_saving;
+
+- if (hsotg->flags.b.port_connect_status == 0)
++ if (!(dwc2_read_hprt0(hsotg) & HPRT0_CONNSTS))
+ goto skip_power_saving;
+
+ switch (hsotg->params.power_down) {
+@@ -4431,6 +4427,7 @@ static int _dwc2_hcd_resume(struct usb_hcd *hcd)
+ * Power Down mode.
+ */
+ if (hprt0 & HPRT0_CONNSTS) {
++ set_bit(HCD_FLAG_HW_ACCESSIBLE, &hcd->flags);
+ hsotg->lx_state = DWC2_L0;
+ goto unlock;
+ }
+diff --git a/drivers/usb/dwc3/dwc3-imx8mp.c b/drivers/usb/dwc3/dwc3-imx8mp.c
+index 64c0cd1995aa06..e99faf014c78a6 100644
+--- a/drivers/usb/dwc3/dwc3-imx8mp.c
++++ b/drivers/usb/dwc3/dwc3-imx8mp.c
+@@ -129,6 +129,16 @@ static void dwc3_imx8mp_wakeup_disable(struct dwc3_imx8mp *dwc3_imx)
+ writel(val, dwc3_imx->hsio_blk_base + USB_WAKEUP_CTRL);
+ }
+
++static const struct property_entry dwc3_imx8mp_properties[] = {
++ PROPERTY_ENTRY_BOOL("xhci-missing-cas-quirk"),
++ PROPERTY_ENTRY_BOOL("xhci-skip-phy-init-quirk"),
++ {},
++};
++
++static const struct software_node dwc3_imx8mp_swnode = {
++ .properties = dwc3_imx8mp_properties,
++};
++
+ static irqreturn_t dwc3_imx8mp_interrupt(int irq, void *_dwc3_imx)
+ {
+ struct dwc3_imx8mp *dwc3_imx = _dwc3_imx;
+@@ -148,17 +158,6 @@ static irqreturn_t dwc3_imx8mp_interrupt(int irq, void *_dwc3_imx)
+ return IRQ_HANDLED;
+ }
+
+-static int dwc3_imx8mp_set_software_node(struct device *dev)
+-{
+- struct property_entry props[3] = { 0 };
+- int prop_idx = 0;
+-
+- props[prop_idx++] = PROPERTY_ENTRY_BOOL("xhci-missing-cas-quirk");
+- props[prop_idx++] = PROPERTY_ENTRY_BOOL("xhci-skip-phy-init-quirk");
+-
+- return device_create_managed_software_node(dev, props, NULL);
+-}
+-
+ static int dwc3_imx8mp_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -221,17 +220,17 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev)
+ if (err < 0)
+ goto disable_rpm;
+
+- err = dwc3_imx8mp_set_software_node(dev);
++ err = device_add_software_node(dev, &dwc3_imx8mp_swnode);
+ if (err) {
+ err = -ENODEV;
+- dev_err(dev, "failed to create software node\n");
++ dev_err(dev, "failed to add software node\n");
+ goto disable_rpm;
+ }
+
+ err = of_platform_populate(node, NULL, NULL, dev);
+ if (err) {
+ dev_err(&pdev->dev, "failed to create dwc3 core\n");
+- goto disable_rpm;
++ goto remove_swnode;
+ }
+
+ dwc3_imx->dwc3 = of_find_device_by_node(dwc3_np);
+@@ -255,6 +254,8 @@ static int dwc3_imx8mp_probe(struct platform_device *pdev)
+
+ depopulate:
+ of_platform_depopulate(dev);
++remove_swnode:
++ device_remove_software_node(dev);
+ disable_rpm:
+ pm_runtime_disable(dev);
+ pm_runtime_put_noidle(dev);
+@@ -268,6 +269,7 @@ static void dwc3_imx8mp_remove(struct platform_device *pdev)
+
+ pm_runtime_get_sync(dev);
+ of_platform_depopulate(dev);
++ device_remove_software_node(dev);
+
+ pm_runtime_disable(dev);
+ pm_runtime_put_noidle(dev);
+diff --git a/drivers/usb/dwc3/dwc3-xilinx.c b/drivers/usb/dwc3/dwc3-xilinx.c
+index b5e5be424ce997..96c87dc4757f22 100644
+--- a/drivers/usb/dwc3/dwc3-xilinx.c
++++ b/drivers/usb/dwc3/dwc3-xilinx.c
+@@ -121,8 +121,11 @@ static int dwc3_xlnx_init_zynqmp(struct dwc3_xlnx *priv_data)
+ * in use but the usb3-phy entry is missing from the device tree.
+ * Therefore, skip these operations in this case.
+ */
+- if (!priv_data->usb3_phy)
++ if (!priv_data->usb3_phy) {
++ /* Deselect the PIPE Clock Select bit in FPD PIPE Clock register */
++ writel(PIPE_CLK_DESELECT, priv_data->regs + XLNX_USB_FPD_PIPE_CLK);
+ goto skip_usb3_phy;
++ }
+
+ crst = devm_reset_control_get_exclusive(dev, "usb_crst");
+ if (IS_ERR(crst)) {
+diff --git a/drivers/usb/gadget/function/f_midi2.c b/drivers/usb/gadget/function/f_midi2.c
+index 8285df9ed6fd78..8c9d0074db588b 100644
+--- a/drivers/usb/gadget/function/f_midi2.c
++++ b/drivers/usb/gadget/function/f_midi2.c
+@@ -1593,7 +1593,11 @@ static int f_midi2_create_card(struct f_midi2 *midi2)
+ fb->info.midi_ci_version = b->midi_ci_version;
+ fb->info.ui_hint = reverse_dir(b->ui_hint);
+ fb->info.sysex8_streams = b->sysex8_streams;
+- fb->info.flags |= b->is_midi1;
++ if (b->is_midi1 < 2)
++ fb->info.flags |= b->is_midi1;
++ else
++ fb->info.flags |= SNDRV_UMP_BLOCK_IS_MIDI1 |
++ SNDRV_UMP_BLOCK_IS_LOWSPEED;
+ strscpy(fb->info.name, ump_fb_name(b),
+ sizeof(fb->info.name));
+ }
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index 0a8c05b2746b4e..53d9fc41acc522 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -579,9 +579,12 @@ static int gs_start_io(struct gs_port *port)
+ * we didn't in gs_start_tx() */
+ tty_wakeup(port->port.tty);
+ } else {
+- gs_free_requests(ep, head, &port->read_allocated);
+- gs_free_requests(port->port_usb->in, &port->write_pool,
+- &port->write_allocated);
++ /* Free reqs only if we are still connected */
++ if (port->port_usb) {
++ gs_free_requests(ep, head, &port->read_allocated);
++ gs_free_requests(port->port_usb->in, &port->write_pool,
++ &port->write_allocated);
++ }
+ status = -EIO;
+ }
+
+diff --git a/drivers/usb/host/ehci-sh.c b/drivers/usb/host/ehci-sh.c
+index d31d9506e41ab0..7c2b2339e674dd 100644
+--- a/drivers/usb/host/ehci-sh.c
++++ b/drivers/usb/host/ehci-sh.c
+@@ -119,8 +119,12 @@ static int ehci_hcd_sh_probe(struct platform_device *pdev)
+ if (IS_ERR(priv->iclk))
+ priv->iclk = NULL;
+
+- clk_enable(priv->fclk);
+- clk_enable(priv->iclk);
++ ret = clk_enable(priv->fclk);
++ if (ret)
++ goto fail_request_resource;
++ ret = clk_enable(priv->iclk);
++ if (ret)
++ goto fail_iclk;
+
+ ret = usb_add_hcd(hcd, irq, IRQF_SHARED);
+ if (ret != 0) {
+@@ -136,6 +140,7 @@ static int ehci_hcd_sh_probe(struct platform_device *pdev)
+
+ fail_add_hcd:
+ clk_disable(priv->iclk);
++fail_iclk:
+ clk_disable(priv->fclk);
+
+ fail_request_resource:
+diff --git a/drivers/usb/host/max3421-hcd.c b/drivers/usb/host/max3421-hcd.c
+index 9fe4f48b18980c..0881fdd1823e0b 100644
+--- a/drivers/usb/host/max3421-hcd.c
++++ b/drivers/usb/host/max3421-hcd.c
+@@ -779,11 +779,17 @@ max3421_check_unlink(struct usb_hcd *hcd)
+ retval = 1;
+ dev_dbg(&spi->dev, "%s: URB %p unlinked=%d",
+ __func__, urb, urb->unlinked);
+- usb_hcd_unlink_urb_from_ep(hcd, urb);
+- spin_unlock_irqrestore(&max3421_hcd->lock,
+- flags);
+- usb_hcd_giveback_urb(hcd, urb, 0);
+- spin_lock_irqsave(&max3421_hcd->lock, flags);
++ if (urb == max3421_hcd->curr_urb) {
++ max3421_hcd->urb_done = 1;
++ max3421_hcd->hien &= ~(BIT(MAX3421_HI_HXFRDN_BIT) |
++ BIT(MAX3421_HI_RCVDAV_BIT));
++ } else {
++ usb_hcd_unlink_urb_from_ep(hcd, urb);
++ spin_unlock_irqrestore(&max3421_hcd->lock,
++ flags);
++ usb_hcd_giveback_urb(hcd, urb, 0);
++ spin_lock_irqsave(&max3421_hcd->lock, flags);
++ }
+ }
+ }
+ }
+diff --git a/drivers/usb/misc/onboard_usb_dev.c b/drivers/usb/misc/onboard_usb_dev.c
+index 75dfdca04ff1c2..27b0a6e182678b 100644
+--- a/drivers/usb/misc/onboard_usb_dev.c
++++ b/drivers/usb/misc/onboard_usb_dev.c
+@@ -407,8 +407,10 @@ static int onboard_dev_probe(struct platform_device *pdev)
+ }
+
+ if (of_device_is_compatible(pdev->dev.of_node, "usb424,2744") ||
+- of_device_is_compatible(pdev->dev.of_node, "usb424,5744"))
++ of_device_is_compatible(pdev->dev.of_node, "usb424,5744")) {
+ err = onboard_dev_5744_i2c_init(client);
++ onboard_dev->always_powered_in_suspend = true;
++ }
+
+ put_device(&client->dev);
+ if (err < 0)
+diff --git a/drivers/usb/typec/anx7411.c b/drivers/usb/typec/anx7411.c
+index d1e7c487ddfbb5..0ae0a5ee3fae07 100644
+--- a/drivers/usb/typec/anx7411.c
++++ b/drivers/usb/typec/anx7411.c
+@@ -290,6 +290,8 @@ struct anx7411_data {
+ struct power_supply *psy;
+ struct power_supply_desc psy_desc;
+ struct device *dev;
++ struct fwnode_handle *switch_node;
++ struct fwnode_handle *mux_node;
+ };
+
+ static u8 snk_identity[] = {
+@@ -1021,6 +1023,16 @@ static void anx7411_port_unregister_altmodes(struct typec_altmode **adev)
+ }
+ }
+
++static void anx7411_port_unregister(struct typec_params *typecp)
++{
++ fwnode_handle_put(typecp->caps.fwnode);
++ anx7411_port_unregister_altmodes(typecp->port_amode);
++ if (typecp->port)
++ typec_unregister_port(typecp->port);
++ if (typecp->role_sw)
++ usb_role_switch_put(typecp->role_sw);
++}
++
+ static int anx7411_usb_mux_set(struct typec_mux_dev *mux,
+ struct typec_mux_state *state)
+ {
+@@ -1089,6 +1101,7 @@ static void anx7411_unregister_mux(struct anx7411_data *ctx)
+ if (ctx->typec.typec_mux) {
+ typec_mux_unregister(ctx->typec.typec_mux);
+ ctx->typec.typec_mux = NULL;
++ fwnode_handle_put(ctx->mux_node);
+ }
+ }
+
+@@ -1097,6 +1110,7 @@ static void anx7411_unregister_switch(struct anx7411_data *ctx)
+ if (ctx->typec.typec_switch) {
+ typec_switch_unregister(ctx->typec.typec_switch);
+ ctx->typec.typec_switch = NULL;
++ fwnode_handle_put(ctx->switch_node);
+ }
+ }
+
+@@ -1104,28 +1118,29 @@ static int anx7411_typec_switch_probe(struct anx7411_data *ctx,
+ struct device *dev)
+ {
+ int ret;
+- struct device_node *node;
+
+- node = of_get_child_by_name(dev->of_node, "orientation_switch");
+- if (!node)
++ ctx->switch_node = device_get_named_child_node(dev, "orientation_switch");
++ if (!ctx->switch_node)
+ return 0;
+
+- ret = anx7411_register_switch(ctx, dev, &node->fwnode);
++ ret = anx7411_register_switch(ctx, dev, ctx->switch_node);
+ if (ret) {
+ dev_err(dev, "failed register switch");
++ fwnode_handle_put(ctx->switch_node);
+ return ret;
+ }
+
+- node = of_get_child_by_name(dev->of_node, "mode_switch");
+- if (!node) {
++ ctx->mux_node = device_get_named_child_node(dev, "mode_switch");
++ if (!ctx->mux_node) {
+ dev_err(dev, "no typec mux exist");
+ ret = -ENODEV;
+ goto unregister_switch;
+ }
+
+- ret = anx7411_register_mux(ctx, dev, &node->fwnode);
++ ret = anx7411_register_mux(ctx, dev, ctx->mux_node);
+ if (ret) {
+ dev_err(dev, "failed register mode switch");
++ fwnode_handle_put(ctx->mux_node);
+ ret = -ENODEV;
+ goto unregister_switch;
+ }
+@@ -1154,34 +1169,34 @@ static int anx7411_typec_port_probe(struct anx7411_data *ctx,
+ ret = fwnode_property_read_string(fwnode, "power-role", &buf);
+ if (ret) {
+ dev_err(dev, "power-role not found: %d\n", ret);
+- return ret;
++ goto put_fwnode;
+ }
+
+ ret = typec_find_port_power_role(buf);
+ if (ret < 0)
+- return ret;
++ goto put_fwnode;
+ cap->type = ret;
+
+ ret = fwnode_property_read_string(fwnode, "data-role", &buf);
+ if (ret) {
+ dev_err(dev, "data-role not found: %d\n", ret);
+- return ret;
++ goto put_fwnode;
+ }
+
+ ret = typec_find_port_data_role(buf);
+ if (ret < 0)
+- return ret;
++ goto put_fwnode;
+ cap->data = ret;
+
+ ret = fwnode_property_read_string(fwnode, "try-power-role", &buf);
+ if (ret) {
+ dev_err(dev, "try-power-role not found: %d\n", ret);
+- return ret;
++ goto put_fwnode;
+ }
+
+ ret = typec_find_power_role(buf);
+ if (ret < 0)
+- return ret;
++ goto put_fwnode;
+ cap->prefer_role = ret;
+
+ /* Get source pdos */
+@@ -1193,7 +1208,7 @@ static int anx7411_typec_port_probe(struct anx7411_data *ctx,
+ typecp->src_pdo_nr);
+ if (ret < 0) {
+ dev_err(dev, "source cap validate failed: %d\n", ret);
+- return -EINVAL;
++ goto put_fwnode;
+ }
+
+ typecp->caps_flags |= HAS_SOURCE_CAP;
+@@ -1207,7 +1222,7 @@ static int anx7411_typec_port_probe(struct anx7411_data *ctx,
+ typecp->sink_pdo_nr);
+ if (ret < 0) {
+ dev_err(dev, "sink cap validate failed: %d\n", ret);
+- return -EINVAL;
++ goto put_fwnode;
+ }
+
+ for (i = 0; i < typecp->sink_pdo_nr; i++) {
+@@ -1251,13 +1266,21 @@ static int anx7411_typec_port_probe(struct anx7411_data *ctx,
+ ret = PTR_ERR(ctx->typec.port);
+ ctx->typec.port = NULL;
+ dev_err(dev, "Failed to register type c port %d\n", ret);
+- return ret;
++ goto put_usb_role_switch;
+ }
+
+ typec_port_register_altmodes(ctx->typec.port, NULL, ctx,
+ ctx->typec.port_amode,
+ MAX_ALTMODE);
+ return 0;
++
++put_usb_role_switch:
++ if (ctx->typec.role_sw)
++ usb_role_switch_put(ctx->typec.role_sw);
++put_fwnode:
++ fwnode_handle_put(fwnode);
++
++ return ret;
+ }
+
+ static int anx7411_typec_check_connection(struct anx7411_data *ctx)
+@@ -1523,8 +1546,7 @@ static int anx7411_i2c_probe(struct i2c_client *client)
+ destroy_workqueue(plat->workqueue);
+
+ free_typec_port:
+- typec_unregister_port(plat->typec.port);
+- anx7411_port_unregister_altmodes(plat->typec.port_amode);
++ anx7411_port_unregister(&plat->typec);
+
+ free_typec_switch:
+ anx7411_unregister_switch(plat);
+@@ -1548,17 +1570,11 @@ static void anx7411_i2c_remove(struct i2c_client *client)
+
+ i2c_unregister_device(plat->spi_client);
+
+- if (plat->typec.role_sw)
+- usb_role_switch_put(plat->typec.role_sw);
+-
+ anx7411_unregister_mux(plat);
+
+ anx7411_unregister_switch(plat);
+
+- if (plat->typec.port)
+- typec_unregister_port(plat->typec.port);
+-
+- anx7411_port_unregister_altmodes(plat->typec.port_amode);
++ anx7411_port_unregister(&plat->typec);
+ }
+
+ static const struct i2c_device_id anx7411_id[] = {
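
The conversions above matter because device_get_named_child_node() returns a counted fwnode reference; each successful lookup must eventually be balanced by fwnode_handle_put(), including on error paths. A minimal sketch of that contract (the child node name is only an example):

#include <linux/property.h>

static int probe_orientation_switch(struct device *dev)
{
	struct fwnode_handle *child;

	child = device_get_named_child_node(dev, "orientation_switch");
	if (!child)
		return 0;	/* optional node: nothing to do */

	/* ... register something against @child ... */

	fwnode_handle_put(child);	/* drop the reference taken above */
	return 0;
}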
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index e0f3925e401b3d..7a3f0f5af38fdb 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -46,11 +46,11 @@ void ucsi_notify_common(struct ucsi *ucsi, u32 cci)
+ ucsi_connector_change(ucsi, UCSI_CCI_CONNECTOR(cci));
+
+ if (cci & UCSI_CCI_ACK_COMPLETE &&
+- test_bit(ACK_PENDING, &ucsi->flags))
++ test_and_clear_bit(ACK_PENDING, &ucsi->flags))
+ complete(&ucsi->complete);
+
+ if (cci & UCSI_CCI_COMMAND_COMPLETE &&
+- test_bit(COMMAND_PENDING, &ucsi->flags))
++ test_and_clear_bit(COMMAND_PENDING, &ucsi->flags))
+ complete(&ucsi->complete);
+ }
+ EXPORT_SYMBOL_GPL(ucsi_notify_common);
+@@ -65,6 +65,8 @@ int ucsi_sync_control_common(struct ucsi *ucsi, u64 command)
+ else
+ set_bit(COMMAND_PENDING, &ucsi->flags);
+
++ reinit_completion(&ucsi->complete);
++
+ ret = ucsi->ops->async_control(ucsi, command);
+ if (ret)
+ goto out_clear_bit;
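
The two changes close a race together: test_and_clear_bit() makes flag consumption atomic, so a stale notification can no longer complete() a command twice, and reinit_completion() discards any completion left over from an earlier command before waiting. A condensed sketch of the pattern outside the UCSI code (struct and helpers are hypothetical):

#include <linux/bitops.h>
#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

#define CMD_PENDING	0

struct ctrl {
	unsigned long flags;
	struct completion done;	/* assumed already initialized */
};

/* Notification path: only the first matching event completes the command. */
static void ctrl_notify(struct ctrl *c)
{
	if (test_and_clear_bit(CMD_PENDING, &c->flags))
		complete(&c->done);
}

/* Command path: arm the flag, then forget stale completions before waiting. */
static int ctrl_send(struct ctrl *c)
{
	set_bit(CMD_PENDING, &c->flags);
	reinit_completion(&c->done);
	/* ... kick the hardware here ... */
	if (!wait_for_completion_timeout(&c->done, HZ))
		return -ETIMEDOUT;
	return 0;
}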
+diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
+index 98374ed7c57723..0112742e4504b9 100644
+--- a/drivers/virtio/virtio_ring.c
++++ b/drivers/virtio/virtio_ring.c
+@@ -2716,6 +2716,7 @@ EXPORT_SYMBOL_GPL(vring_create_virtqueue_dma);
+ * @_vq: the struct virtqueue we're talking about.
+ * @num: new ring num
+ * @recycle: callback to recycle unused buffers
++ * @recycle_done: callback invoked once all unused buffers have been recycled
+ *
+ * When it is really necessary to create a new vring, it will set the current vq
+ * into the reset state. Then call the passed callback to recycle the buffer
+@@ -2736,7 +2737,8 @@ EXPORT_SYMBOL_GPL(vring_create_virtqueue_dma);
+ *
+ */
+ int virtqueue_resize(struct virtqueue *_vq, u32 num,
+- void (*recycle)(struct virtqueue *vq, void *buf))
++ void (*recycle)(struct virtqueue *vq, void *buf),
++ void (*recycle_done)(struct virtqueue *vq))
+ {
+ struct vring_virtqueue *vq = to_vvq(_vq);
+ int err;
+@@ -2753,6 +2755,8 @@ int virtqueue_resize(struct virtqueue *_vq, u32 num,
+ err = virtqueue_disable_and_recycle(_vq, recycle);
+ if (err)
+ return err;
++ if (recycle_done)
++ recycle_done(_vq);
+
+ if (vq->packed_ring)
+ err = virtqueue_resize_packed(_vq, num);
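
From a driver's point of view, the new parameter splits per-buffer recycling from one-shot post-recycle work. A usage sketch against the signature added above (the my_* helpers are hypothetical):

#include <linux/virtio.h>

static void my_recycle(struct virtqueue *vq, void *buf)
{
	/* called once per unused buffer still sitting in the ring */
}

static void my_recycle_done(struct virtqueue *vq)
{
	/* called exactly once, after the last my_recycle() invocation */
}

static int my_resize(struct virtqueue *vq, u32 num)
{
	return virtqueue_resize(vq, num, my_recycle, my_recycle_done);
}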
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index b35fe1075503e1..fafc07e38663ca 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1925,6 +1925,7 @@ int cifs_unlink(struct inode *dir, struct dentry *dentry)
+ goto unlink_out;
+ }
+
++ netfs_wait_for_outstanding_io(inode);
+ cifs_close_deferred_file_under_dentry(tcon, full_path);
+ #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ if (cap_unix(tcon->ses) && (CIFS_UNIX_POSIX_PATH_OPS_CAP &
+@@ -2442,8 +2443,10 @@ cifs_rename2(struct mnt_idmap *idmap, struct inode *source_dir,
+ }
+
+ cifs_close_deferred_file_under_dentry(tcon, from_name);
+- if (d_inode(target_dentry) != NULL)
++ if (d_inode(target_dentry) != NULL) {
++ netfs_wait_for_outstanding_io(d_inode(target_dentry));
+ cifs_close_deferred_file_under_dentry(tcon, to_name);
++ }
+
+ rc = cifs_do_rename(xid, source_dentry, from_name, target_dentry,
+ to_name);
+diff --git a/fs/smb/server/auth.c b/fs/smb/server/auth.c
+index 611716bc8f27c1..8892177e500f19 100644
+--- a/fs/smb/server/auth.c
++++ b/fs/smb/server/auth.c
+@@ -1016,6 +1016,8 @@ static int ksmbd_get_encryption_key(struct ksmbd_work *work, __u64 ses_id,
+
+ ses_enc_key = enc ? sess->smb3encryptionkey :
+ sess->smb3decryptionkey;
++ if (enc)
++ ksmbd_user_session_get(sess);
+ memcpy(key, ses_enc_key, SMB3_ENC_DEC_KEY_SIZE);
+
+ return 0;
+diff --git a/fs/smb/server/mgmt/user_session.c b/fs/smb/server/mgmt/user_session.c
+index ad02fe555fda7e..d960ddcbba1657 100644
+--- a/fs/smb/server/mgmt/user_session.c
++++ b/fs/smb/server/mgmt/user_session.c
+@@ -263,8 +263,10 @@ struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn,
+
+ down_read(&conn->session_lock);
+ sess = xa_load(&conn->sessions, id);
+- if (sess)
++ if (sess) {
+ sess->last_active = jiffies;
++ ksmbd_user_session_get(sess);
++ }
+ up_read(&conn->session_lock);
+ return sess;
+ }
+@@ -275,6 +277,8 @@ struct ksmbd_session *ksmbd_session_lookup_slowpath(unsigned long long id)
+
+ down_read(&sessions_table_lock);
+ sess = __session_lookup(id);
++ if (sess)
++ ksmbd_user_session_get(sess);
+ up_read(&sessions_table_lock);
+
+ return sess;
+diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c
+index c8cc6fa6fc3ebb..698af37e988d7b 100644
+--- a/fs/smb/server/server.c
++++ b/fs/smb/server/server.c
+@@ -241,14 +241,14 @@ static void __handle_ksmbd_work(struct ksmbd_work *work,
+ if (work->tcon)
+ ksmbd_tree_connect_put(work->tcon);
+ smb3_preauth_hash_rsp(work);
+- if (work->sess)
+- ksmbd_user_session_put(work->sess);
+ if (work->sess && work->sess->enc && work->encrypted &&
+ conn->ops->encrypt_resp) {
+ rc = conn->ops->encrypt_resp(work);
+ if (rc < 0)
+ conn->ops->set_rsp_status(work, STATUS_DATA_ERROR);
+ }
++ if (work->sess)
++ ksmbd_user_session_put(work->sess);
+
+ ksmbd_conn_write(work);
+ }
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index d0836d710f1814..7d01dd313351f7 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -67,8 +67,10 @@ static inline bool check_session_id(struct ksmbd_conn *conn, u64 id)
+ return false;
+
+ sess = ksmbd_session_lookup_all(conn, id);
+- if (sess)
++ if (sess) {
++ ksmbd_user_session_put(sess);
+ return true;
++ }
+ pr_err("Invalid user session id: %llu\n", id);
+ return false;
+ }
+@@ -605,10 +607,8 @@ int smb2_check_user_session(struct ksmbd_work *work)
+
+ /* Check for validity of user session */
+ work->sess = ksmbd_session_lookup_all(conn, sess_id);
+- if (work->sess) {
+- ksmbd_user_session_get(work->sess);
++ if (work->sess)
+ return 1;
+- }
+ ksmbd_debug(SMB, "Invalid user session, Uid %llu\n", sess_id);
+ return -ENOENT;
+ }
+@@ -1701,29 +1701,35 @@ int smb2_sess_setup(struct ksmbd_work *work)
+
+ if (conn->dialect != sess->dialect) {
+ rc = -EINVAL;
++ ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (!(req->hdr.Flags & SMB2_FLAGS_SIGNED)) {
+ rc = -EINVAL;
++ ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (strncmp(conn->ClientGUID, sess->ClientGUID,
+ SMB2_CLIENT_GUID_SIZE)) {
+ rc = -ENOENT;
++ ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (sess->state == SMB2_SESSION_IN_PROGRESS) {
+ rc = -EACCES;
++ ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (sess->state == SMB2_SESSION_EXPIRED) {
+ rc = -EFAULT;
++ ksmbd_user_session_put(sess);
+ goto out_err;
+ }
++ ksmbd_user_session_put(sess);
+
+ if (ksmbd_conn_need_reconnect(conn)) {
+ rc = -EFAULT;
+@@ -1731,7 +1737,8 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ goto out_err;
+ }
+
+- if (ksmbd_session_lookup(conn, sess_id)) {
++ sess = ksmbd_session_lookup(conn, sess_id);
++ if (!sess) {
+ rc = -EACCES;
+ goto out_err;
+ }
+@@ -1742,7 +1749,6 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ }
+
+ conn->binding = true;
+- ksmbd_user_session_get(sess);
+ } else if ((conn->dialect < SMB30_PROT_ID ||
+ server_conf.flags & KSMBD_GLOBAL_FLAG_SMB3_MULTICHANNEL) &&
+ (req->Flags & SMB2_SESSION_REQ_FLAG_BINDING)) {
+@@ -1769,7 +1775,6 @@ int smb2_sess_setup(struct ksmbd_work *work)
+ }
+
+ conn->binding = false;
+- ksmbd_user_session_get(sess);
+ }
+ work->sess = sess;
+
+@@ -2195,9 +2200,9 @@ int smb2_tree_disconnect(struct ksmbd_work *work)
+ int smb2_session_logoff(struct ksmbd_work *work)
+ {
+ struct ksmbd_conn *conn = work->conn;
++ struct ksmbd_session *sess = work->sess;
+ struct smb2_logoff_req *req;
+ struct smb2_logoff_rsp *rsp;
+- struct ksmbd_session *sess;
+ u64 sess_id;
+ int err;
+
+@@ -2219,11 +2224,6 @@ int smb2_session_logoff(struct ksmbd_work *work)
+ ksmbd_close_session_fds(work);
+ ksmbd_conn_wait_idle(conn);
+
+- /*
+- * Re-lookup session to validate if session is deleted
+- * while waiting request complete
+- */
+- sess = ksmbd_session_lookup_all(conn, sess_id);
+ if (ksmbd_tree_conn_session_logoff(sess)) {
+ ksmbd_debug(SMB, "Invalid tid %d\n", req->hdr.Id.SyncId.TreeId);
+ rsp->hdr.Status = STATUS_NETWORK_NAME_DELETED;
+@@ -8962,6 +8962,7 @@ int smb3_decrypt_req(struct ksmbd_work *work)
+ le64_to_cpu(tr_hdr->SessionId));
+ return -ECONNABORTED;
+ }
++ ksmbd_user_session_put(sess);
+
+ iov[0].iov_base = buf;
+ iov[0].iov_len = sizeof(struct smb2_transform_hdr) + 4;
+diff --git a/fs/xfs/libxfs/xfs_btree.c b/fs/xfs/libxfs/xfs_btree.c
+index a5c4af148853f8..134d87b3489aa4 100644
+--- a/fs/xfs/libxfs/xfs_btree.c
++++ b/fs/xfs/libxfs/xfs_btree.c
+@@ -3569,14 +3569,31 @@ xfs_btree_insrec(
+ xfs_btree_log_block(cur, bp, XFS_BB_NUMRECS);
+
+ /*
+- * If we just inserted into a new tree block, we have to
+- * recalculate nkey here because nkey is out of date.
++ * Update btree keys to reflect the newly added record or keyptr.
++ * There are three cases here to be aware of. Normally, all we have to
++ * do is walk towards the root, updating keys as necessary.
+ *
+- * Otherwise we're just updating an existing block (having shoved
+- * some records into the new tree block), so use the regular key
+- * update mechanism.
++ * If the caller had us target a full block for the insertion, we dealt
++ * with that by calling the _make_block_unfull function. If the
++ * "make unfull" function splits the block, it'll hand us back the key
++ * and pointer of the new block. We haven't yet added the new block to
++ * the next level up, so if we decide to add the new record to the new
++ * block (bp->b_bn != old_bn), we have to update the caller's pointer
++ * so that the caller adds the new block with the correct key.
++ *
++ * However, there is a third possibility-- if the selected block is the
++ * root block of an inode-rooted btree and cannot be expanded further,
++ * the "make unfull" function moves the root block contents to a new
++ * block and updates the root block to point to the new block. In this
++ * case, no block pointer is passed back because the block has already
++ * been added to the btree, so we must use the regular key update
++ * function, just like in the first case. This is critical for
++ * overlapping btrees, because the high key must be updated to reflect
++ * the entire tree, not just the subtree accessible through the first
++ * child of the root (which is now two levels down from the root).
+ */
+- if (bp && xfs_buf_daddr(bp) != old_bn) {
++ if (!xfs_btree_ptr_is_null(cur, &nptr) &&
++ bp && xfs_buf_daddr(bp) != old_bn) {
+ xfs_btree_get_keys(cur, block, lkey);
+ } else if (xfs_btree_needs_key_update(cur, optr)) {
+ error = xfs_btree_update_keys(cur, level);
+@@ -5156,7 +5173,7 @@ xfs_btree_count_blocks_helper(
+ int level,
+ void *data)
+ {
+- xfs_extlen_t *blocks = data;
++ xfs_filblks_t *blocks = data;
+ (*blocks)++;
+
+ return 0;
+@@ -5166,7 +5183,7 @@ xfs_btree_count_blocks_helper(
+ int
+ xfs_btree_count_blocks(
+ struct xfs_btree_cur *cur,
+- xfs_extlen_t *blocks)
++ xfs_filblks_t *blocks)
+ {
+ *blocks = 0;
+ return xfs_btree_visit_blocks(cur, xfs_btree_count_blocks_helper,
+diff --git a/fs/xfs/libxfs/xfs_btree.h b/fs/xfs/libxfs/xfs_btree.h
+index 10b7ddc3b2b34e..91e0b6dac31ec6 100644
+--- a/fs/xfs/libxfs/xfs_btree.h
++++ b/fs/xfs/libxfs/xfs_btree.h
+@@ -485,7 +485,7 @@ typedef int (*xfs_btree_visit_blocks_fn)(struct xfs_btree_cur *cur, int level,
+ int xfs_btree_visit_blocks(struct xfs_btree_cur *cur,
+ xfs_btree_visit_blocks_fn fn, unsigned int flags, void *data);
+
+-int xfs_btree_count_blocks(struct xfs_btree_cur *cur, xfs_extlen_t *blocks);
++int xfs_btree_count_blocks(struct xfs_btree_cur *cur, xfs_filblks_t *blocks);
+
+ union xfs_btree_rec *xfs_btree_rec_addr(struct xfs_btree_cur *cur, int n,
+ struct xfs_btree_block *block);
+diff --git a/fs/xfs/libxfs/xfs_ialloc_btree.c b/fs/xfs/libxfs/xfs_ialloc_btree.c
+index 401b42d52af686..6aa43f3fc68e03 100644
+--- a/fs/xfs/libxfs/xfs_ialloc_btree.c
++++ b/fs/xfs/libxfs/xfs_ialloc_btree.c
+@@ -743,6 +743,7 @@ xfs_finobt_count_blocks(
+ {
+ struct xfs_buf *agbp = NULL;
+ struct xfs_btree_cur *cur;
++ xfs_filblks_t blocks;
+ int error;
+
+ error = xfs_ialloc_read_agi(pag, tp, 0, &agbp);
+@@ -750,9 +751,10 @@ xfs_finobt_count_blocks(
+ return error;
+
+ cur = xfs_finobt_init_cursor(pag, tp, agbp);
+- error = xfs_btree_count_blocks(cur, tree_blocks);
++ error = xfs_btree_count_blocks(cur, &blocks);
+ xfs_btree_del_cursor(cur, error);
+ xfs_trans_brelse(tp, agbp);
++ *tree_blocks = blocks;
+
+ return error;
+ }
+diff --git a/fs/xfs/libxfs/xfs_symlink_remote.c b/fs/xfs/libxfs/xfs_symlink_remote.c
+index f228127a88ff26..fb47a76ead18c2 100644
+--- a/fs/xfs/libxfs/xfs_symlink_remote.c
++++ b/fs/xfs/libxfs/xfs_symlink_remote.c
+@@ -92,8 +92,10 @@ xfs_symlink_verify(
+ struct xfs_mount *mp = bp->b_mount;
+ struct xfs_dsymlink_hdr *dsl = bp->b_addr;
+
++ /* no verification of non-crc buffers */
+ if (!xfs_has_crc(mp))
+- return __this_address;
++ return NULL;
++
+ if (!xfs_verify_magic(bp, dsl->sl_magic))
+ return __this_address;
+ if (!uuid_equal(&dsl->sl_uuid, &mp->m_sb.sb_meta_uuid))
+diff --git a/fs/xfs/scrub/agheader.c b/fs/xfs/scrub/agheader.c
+index f8e5b67128d25a..da30f926cbe66d 100644
+--- a/fs/xfs/scrub/agheader.c
++++ b/fs/xfs/scrub/agheader.c
+@@ -434,7 +434,7 @@ xchk_agf_xref_btreeblks(
+ {
+ struct xfs_agf *agf = sc->sa.agf_bp->b_addr;
+ struct xfs_mount *mp = sc->mp;
+- xfs_agblock_t blocks;
++ xfs_filblks_t blocks;
+ xfs_agblock_t btreeblks;
+ int error;
+
+@@ -483,7 +483,7 @@ xchk_agf_xref_refcblks(
+ struct xfs_scrub *sc)
+ {
+ struct xfs_agf *agf = sc->sa.agf_bp->b_addr;
+- xfs_agblock_t blocks;
++ xfs_filblks_t blocks;
+ int error;
+
+ if (!sc->sa.refc_cur)
+@@ -816,7 +816,7 @@ xchk_agi_xref_fiblocks(
+ struct xfs_scrub *sc)
+ {
+ struct xfs_agi *agi = sc->sa.agi_bp->b_addr;
+- xfs_agblock_t blocks;
++ xfs_filblks_t blocks;
+ int error = 0;
+
+ if (!xfs_has_inobtcounts(sc->mp))
+diff --git a/fs/xfs/scrub/agheader_repair.c b/fs/xfs/scrub/agheader_repair.c
+index 2f98d90d7fd66d..69b003259784fe 100644
+--- a/fs/xfs/scrub/agheader_repair.c
++++ b/fs/xfs/scrub/agheader_repair.c
+@@ -256,7 +256,7 @@ xrep_agf_calc_from_btrees(
+ struct xfs_agf *agf = agf_bp->b_addr;
+ struct xfs_mount *mp = sc->mp;
+ xfs_agblock_t btreeblks;
+- xfs_agblock_t blocks;
++ xfs_filblks_t blocks;
+ int error;
+
+ /* Update the AGF counters from the bnobt. */
+@@ -946,7 +946,7 @@ xrep_agi_calc_from_btrees(
+ if (error)
+ goto err;
+ if (xfs_has_inobtcounts(mp)) {
+- xfs_agblock_t blocks;
++ xfs_filblks_t blocks;
+
+ error = xfs_btree_count_blocks(cur, &blocks);
+ if (error)
+@@ -959,7 +959,7 @@ xrep_agi_calc_from_btrees(
+ agi->agi_freecount = cpu_to_be32(freecount);
+
+ if (xfs_has_finobt(mp) && xfs_has_inobtcounts(mp)) {
+- xfs_agblock_t blocks;
++ xfs_filblks_t blocks;
+
+ cur = xfs_finobt_init_cursor(sc->sa.pag, sc->tp, agi_bp);
+ error = xfs_btree_count_blocks(cur, &blocks);
+diff --git a/fs/xfs/scrub/fscounters.c b/fs/xfs/scrub/fscounters.c
+index 1d3e98346933e1..454f17595c9c9e 100644
+--- a/fs/xfs/scrub/fscounters.c
++++ b/fs/xfs/scrub/fscounters.c
+@@ -261,7 +261,7 @@ xchk_fscount_btreeblks(
+ struct xchk_fscounters *fsc,
+ xfs_agnumber_t agno)
+ {
+- xfs_extlen_t blocks;
++ xfs_filblks_t blocks;
+ int error;
+
+ error = xchk_ag_init_existing(sc, agno, &sc->sa);
+diff --git a/fs/xfs/scrub/ialloc.c b/fs/xfs/scrub/ialloc.c
+index 750d7b0cd25a78..a59c44e5903a45 100644
+--- a/fs/xfs/scrub/ialloc.c
++++ b/fs/xfs/scrub/ialloc.c
+@@ -652,8 +652,8 @@ xchk_iallocbt_xref_rmap_btreeblks(
+ struct xfs_scrub *sc)
+ {
+ xfs_filblks_t blocks;
+- xfs_extlen_t inobt_blocks = 0;
+- xfs_extlen_t finobt_blocks = 0;
++ xfs_filblks_t inobt_blocks = 0;
++ xfs_filblks_t finobt_blocks = 0;
+ int error;
+
+ if (!sc->sa.ino_cur || !sc->sa.rmap_cur ||
+diff --git a/fs/xfs/scrub/refcount.c b/fs/xfs/scrub/refcount.c
+index d0c7d4a29c0feb..cccf39d917a09c 100644
+--- a/fs/xfs/scrub/refcount.c
++++ b/fs/xfs/scrub/refcount.c
+@@ -490,7 +490,7 @@ xchk_refcount_xref_rmap(
+ struct xfs_scrub *sc,
+ xfs_filblks_t cow_blocks)
+ {
+- xfs_extlen_t refcbt_blocks = 0;
++ xfs_filblks_t refcbt_blocks = 0;
+ xfs_filblks_t blocks;
+ int error;
+
+diff --git a/fs/xfs/scrub/symlink_repair.c b/fs/xfs/scrub/symlink_repair.c
+index d015a86ef460fb..953ce7be78dc2f 100644
+--- a/fs/xfs/scrub/symlink_repair.c
++++ b/fs/xfs/scrub/symlink_repair.c
+@@ -36,6 +36,7 @@
+ #include "scrub/tempfile.h"
+ #include "scrub/tempexch.h"
+ #include "scrub/reap.h"
++#include "scrub/health.h"
+
+ /*
+ * Symbolic Link Repair
+@@ -233,7 +234,7 @@ xrep_symlink_salvage(
+ * target zapped flag.
+ */
+ if (buflen == 0) {
+- sc->sick_mask |= XFS_SICK_INO_SYMLINK_ZAPPED;
++ xchk_mark_healthy_if_clean(sc, XFS_SICK_INO_SYMLINK_ZAPPED);
+ sprintf(target_buf, DUMMY_TARGET);
+ }
+
+diff --git a/fs/xfs/scrub/trace.h b/fs/xfs/scrub/trace.h
+index c886d5d0eb021a..da773fee8638af 100644
+--- a/fs/xfs/scrub/trace.h
++++ b/fs/xfs/scrub/trace.h
+@@ -601,7 +601,7 @@ TRACE_EVENT(xchk_ifork_btree_op_error,
+ TP_fast_assign(
+ xfs_fsblock_t fsbno = xchk_btree_cur_fsbno(cur, level);
+ __entry->dev = sc->mp->m_super->s_dev;
+- __entry->ino = sc->ip->i_ino;
++ __entry->ino = cur->bc_ino.ip->i_ino;
+ __entry->whichfork = cur->bc_ino.whichfork;
+ __entry->type = sc->sm->sm_type;
+ __assign_str(name);
+diff --git a/fs/xfs/xfs_bmap_util.c b/fs/xfs/xfs_bmap_util.c
+index edaf193dbd5ccc..95f8a09f96ae20 100644
+--- a/fs/xfs/xfs_bmap_util.c
++++ b/fs/xfs/xfs_bmap_util.c
+@@ -111,7 +111,7 @@ xfs_bmap_count_blocks(
+ struct xfs_mount *mp = ip->i_mount;
+ struct xfs_ifork *ifp = xfs_ifork_ptr(ip, whichfork);
+ struct xfs_btree_cur *cur;
+- xfs_extlen_t btblocks = 0;
++ xfs_filblks_t btblocks = 0;
+ int error;
+
+ *nextents = 0;
+diff --git a/fs/xfs/xfs_file.c b/fs/xfs/xfs_file.c
+index b19916b11fd563..aba54e3c583661 100644
+--- a/fs/xfs/xfs_file.c
++++ b/fs/xfs/xfs_file.c
+@@ -1228,6 +1228,14 @@ xfs_file_remap_range(
+ xfs_iunlock2_remapping(src, dest);
+ if (ret)
+ trace_xfs_reflink_remap_range_error(dest, ret, _RET_IP_);
++ /*
++ * If the caller did not set CAN_SHORTEN, then it is not prepared to
++ * handle partial results -- either the whole remap succeeds, or we
++ * must say why it did not. In this case, any error should be returned
++ * to the caller.
++ */
++ if (ret && remapped < len && !(remap_flags & REMAP_FILE_CAN_SHORTEN))
++ return ret;
+ return remapped > 0 ? remapped : ret;
+ }
+
+diff --git a/fs/xfs/xfs_rtalloc.c b/fs/xfs/xfs_rtalloc.c
+index 3a2005a1e673dc..8caa55b8167467 100644
+--- a/fs/xfs/xfs_rtalloc.c
++++ b/fs/xfs/xfs_rtalloc.c
+@@ -1295,7 +1295,7 @@ xfs_rtallocate(
+ * For an allocation to an empty file at offset 0, pick an extent that
+ * will space things out in the rt area.
+ */
+- if (bno_hint)
++ if (bno_hint != NULLFSBLOCK)
+ start = xfs_rtb_to_rtx(args.mp, bno_hint);
+ else if (initial_user_data)
+ start = xfs_rtpick_extent(args.mp, tp, maxlen);
+diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c
+index bdf3704dc30118..30e03342287a94 100644
+--- a/fs/xfs/xfs_trans.c
++++ b/fs/xfs/xfs_trans.c
+@@ -834,13 +834,6 @@ __xfs_trans_commit(
+
+ trace_xfs_trans_commit(tp, _RET_IP_);
+
+- error = xfs_trans_run_precommits(tp);
+- if (error) {
+- if (tp->t_flags & XFS_TRANS_PERM_LOG_RES)
+- xfs_defer_cancel(tp);
+- goto out_unreserve;
+- }
+-
+ /*
+ * Finish deferred items on final commit. Only permanent transactions
+ * should ever have deferred ops.
+@@ -851,13 +844,12 @@ __xfs_trans_commit(
+ error = xfs_defer_finish_noroll(&tp);
+ if (error)
+ goto out_unreserve;
+-
+- /* Run precommits from final tx in defer chain. */
+- error = xfs_trans_run_precommits(tp);
+- if (error)
+- goto out_unreserve;
+ }
+
++ error = xfs_trans_run_precommits(tp);
++ if (error)
++ goto out_unreserve;
++
+ /*
+ * If there is nothing to be logged by the transaction,
+ * then unlock all of the items associated with the
+@@ -1382,5 +1374,8 @@ xfs_trans_alloc_dir(
+
+ out_cancel:
+ xfs_trans_cancel(tp);
++ xfs_iunlock(dp, XFS_ILOCK_EXCL);
++ if (dp != ip)
++ xfs_iunlock(ip, XFS_ILOCK_EXCL);
+ return error;
+ }
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 6b4bc85f4999ba..b7f327ce797e5b 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -200,8 +200,6 @@ struct gendisk {
+ spinlock_t zone_wplugs_lock;
+ struct mempool_s *zone_wplugs_pool;
+ struct hlist_head *zone_wplugs_hash;
+- struct list_head zone_wplugs_err_list;
+- struct work_struct zone_wplugs_work;
+ struct workqueue_struct *zone_wplugs_wq;
+ #endif /* CONFIG_BLK_DEV_ZONED */
+
+@@ -1386,6 +1384,9 @@ static inline bool bdev_is_zone_start(struct block_device *bdev,
+ return bdev_offset_from_zone_start(bdev, sector) == 0;
+ }
+
++int blk_zone_issue_zeroout(struct block_device *bdev, sector_t sector,
++ sector_t nr_sects, gfp_t gfp_mask);
++
+ static inline int queue_dma_alignment(const struct request_queue *q)
+ {
+ return q->limits.dma_alignment;
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index cbe2350912460b..a7af13f550e0d4 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -2157,26 +2157,25 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
+ * rcu-protected dynamically sized maps.
+ */
+ static __always_inline u32
+-bpf_prog_run_array_uprobe(const struct bpf_prog_array __rcu *array_rcu,
++bpf_prog_run_array_uprobe(const struct bpf_prog_array *array,
+ const void *ctx, bpf_prog_run_fn run_prog)
+ {
+ const struct bpf_prog_array_item *item;
+ const struct bpf_prog *prog;
+- const struct bpf_prog_array *array;
+ struct bpf_run_ctx *old_run_ctx;
+ struct bpf_trace_run_ctx run_ctx;
+ u32 ret = 1;
+
+ might_fault();
++ RCU_LOCKDEP_WARN(!rcu_read_lock_trace_held(), "no rcu lock held");
++
++ if (unlikely(!array))
++ return ret;
+
+- rcu_read_lock_trace();
+ migrate_disable();
+
+ run_ctx.is_uprobe = true;
+
+- array = rcu_dereference_check(array_rcu, rcu_read_lock_trace_held());
+- if (unlikely(!array))
+- goto out;
+ old_run_ctx = bpf_set_run_ctx(&run_ctx.run_ctx);
+ item = &array->items[0];
+ while ((prog = READ_ONCE(item->prog))) {
+@@ -2191,9 +2190,7 @@ bpf_prog_run_array_uprobe(const struct bpf_prog_array __rcu *array_rcu,
+ rcu_read_unlock();
+ }
+ bpf_reset_run_ctx(old_run_ctx);
+-out:
+ migrate_enable();
+- rcu_read_unlock_trace();
+ return ret;
+ }
+
+@@ -3471,10 +3468,4 @@ static inline bool bpf_is_subprog(const struct bpf_prog *prog)
+ return prog->aux->func_idx != 0;
+ }
+
+-static inline bool bpf_prog_is_raw_tp(const struct bpf_prog *prog)
+-{
+- return prog->type == BPF_PROG_TYPE_TRACING &&
+- prog->expected_attach_type == BPF_TRACE_RAW_TP;
+-}
+-
+ #endif /* _LINUX_BPF_H */
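
The hunk above inverts the locking contract of bpf_prog_run_array_uprobe():
instead of taking rcu_read_lock_trace() itself, it now asserts that the caller
already holds it, and the caller dereferences the __rcu pointer (the
trace_uprobe.c hunk further down shows the updated call site). A minimal sketch
of the shape of that contract, with the lock state and names as stand-ins:

#include <assert.h>
#include <stdio.h>

static int trace_lock_held;	/* stand-in for rcu_read_lock_trace_held() */

/* New-style helper: requires the lock, tolerates a NULL array. */
static unsigned int run_array(const int *array)
{
	assert(trace_lock_held && "no rcu lock held");
	if (!array)
		return 1;	/* default verdict */
	return (unsigned int)*array;
}

int main(void)
{
	int progs = 0;

	trace_lock_held = 1;			/* caller: rcu_read_lock_trace() */
	printf("%u\n", run_array(&progs));
	printf("%u\n", run_array(NULL));	/* NULL handled inside */
	trace_lock_held = 0;			/* caller: rcu_read_unlock_trace() */
	return 0;
}
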
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index 4d4e23b6e3e761..2d962dade9faee 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -216,28 +216,43 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+
+ #endif /* __KERNEL__ */
+
++/**
++ * offset_to_ptr - convert a relative memory offset to an absolute pointer
++ * @off: the address of the 32-bit offset value
++ */
++static inline void *offset_to_ptr(const int *off)
++{
++ return (void *)((unsigned long)off + *off);
++}
++
++#endif /* __ASSEMBLY__ */
++
++#ifdef CONFIG_64BIT
++#define ARCH_SEL(a,b) a
++#else
++#define ARCH_SEL(a,b) b
++#endif
++
+ /*
+ * Force the compiler to emit 'sym' as a symbol, so that we can reference
+ * it from inline assembler. Necessary in case 'sym' could be inlined
+ * otherwise, or eliminated entirely due to lack of references that are
+ * visible to the compiler.
+ */
+-#define ___ADDRESSABLE(sym, __attrs) \
+- static void * __used __attrs \
++#define ___ADDRESSABLE(sym, __attrs) \
++ static void * __used __attrs \
+ __UNIQUE_ID(__PASTE(__addressable_,sym)) = (void *)(uintptr_t)&sym;
++
+ #define __ADDRESSABLE(sym) \
+ ___ADDRESSABLE(sym, __section(".discard.addressable"))
+
+-/**
+- * offset_to_ptr - convert a relative memory offset to an absolute pointer
+- * @off: the address of the 32-bit offset value
+- */
+-static inline void *offset_to_ptr(const int *off)
+-{
+- return (void *)((unsigned long)off + *off);
+-}
++#define __ADDRESSABLE_ASM(sym) \
++ .pushsection .discard.addressable,"aw"; \
++ .align ARCH_SEL(8,4); \
++ ARCH_SEL(.quad, .long) __stringify(sym); \
++ .popsection;
+
+-#endif /* __ASSEMBLY__ */
++#define __ADDRESSABLE_ASM_STR(sym) __stringify(__ADDRESSABLE_ASM(sym))
+
+ /* &a[0] degrades to a pointer: a different type from an array */
+ #define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
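
offset_to_ptr(), relocated above alongside its new asm counterpart
__ADDRESSABLE_ASM, implements the classic relative-reference trick: a 32-bit
self-relative offset is turned back into an absolute pointer by adding the
offset to its own address. A small self-contained demonstration (userspace
sketch; only the test data is invented, and both objects are kept static so
the offset fits in an int):

#include <stdio.h>
#include <stdint.h>

static inline void *offset_to_ptr(const int *off)
{
	return (void *)((unsigned long)off + *off);
}

static int target;
static int rel;	/* will hold target's address relative to itself */

int main(void)
{
	rel = (int)((intptr_t)&target - (intptr_t)&rel);

	printf("round-trip ok: %d\n", offset_to_ptr(&rel) == (void *)&target);
	return 0;
}
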
+diff --git a/include/linux/dsa/ocelot.h b/include/linux/dsa/ocelot.h
+index 6fbfbde68a37c3..620a3260fc0802 100644
+--- a/include/linux/dsa/ocelot.h
++++ b/include/linux/dsa/ocelot.h
+@@ -15,6 +15,7 @@
+ struct ocelot_skb_cb {
+ struct sk_buff *clone;
+ unsigned int ptp_class; /* valid only for clones */
++ unsigned long ptp_tx_time; /* valid only for clones */
+ u32 tstamp_lo;
+ u8 ptp_cmd;
+ u8 ts_id;
+diff --git a/include/linux/netdev_features.h b/include/linux/netdev_features.h
+index 66e7d26b70a4fe..11be70a7929f28 100644
+--- a/include/linux/netdev_features.h
++++ b/include/linux/netdev_features.h
+@@ -253,4 +253,11 @@ static inline int find_next_netdev_feature(u64 feature, unsigned long start)
+ NETIF_F_GSO_UDP_TUNNEL | \
+ NETIF_F_GSO_UDP_TUNNEL_CSUM)
+
++static inline netdev_features_t netdev_base_features(netdev_features_t features)
++{
++ features &= ~NETIF_F_ONE_FOR_ALL;
++ features |= NETIF_F_ALL_FOR_ALL;
++ return features;
++}
++
+ #endif /* _LINUX_NETDEV_FEATURES_H */
+diff --git a/include/linux/static_call.h b/include/linux/static_call.h
+index 141e6b176a1b30..78a77a4ae0ea87 100644
+--- a/include/linux/static_call.h
++++ b/include/linux/static_call.h
+@@ -160,6 +160,8 @@ extern void arch_static_call_transform(void *site, void *tramp, void *func, bool
+
+ #ifdef CONFIG_HAVE_STATIC_CALL_INLINE
+
++extern int static_call_initialized;
++
+ extern int __init static_call_init(void);
+
+ extern void static_call_force_reinit(void);
+@@ -225,6 +227,8 @@ extern long __static_call_return0(void);
+
+ #elif defined(CONFIG_HAVE_STATIC_CALL)
+
++#define static_call_initialized 0
++
+ static inline int static_call_init(void) { return 0; }
+
+ #define DEFINE_STATIC_CALL(name, _func) \
+@@ -281,6 +285,8 @@ extern long __static_call_return0(void);
+
+ #else /* Generic implementation */
+
++#define static_call_initialized 0
++
+ static inline int static_call_init(void) { return 0; }
+
+ static inline long __static_call_return0(void)
+diff --git a/include/linux/virtio.h b/include/linux/virtio.h
+index 306137a15d0753..73c8922e69e095 100644
+--- a/include/linux/virtio.h
++++ b/include/linux/virtio.h
+@@ -100,7 +100,8 @@ dma_addr_t virtqueue_get_avail_addr(const struct virtqueue *vq);
+ dma_addr_t virtqueue_get_used_addr(const struct virtqueue *vq);
+
+ int virtqueue_resize(struct virtqueue *vq, u32 num,
+- void (*recycle)(struct virtqueue *vq, void *buf));
++ void (*recycle)(struct virtqueue *vq, void *buf),
++ void (*recycle_done)(struct virtqueue *vq));
+ int virtqueue_reset(struct virtqueue *vq,
+ void (*recycle)(struct virtqueue *vq, void *buf));
+
+diff --git a/include/net/bluetooth/bluetooth.h b/include/net/bluetooth/bluetooth.h
+index f66bc85c6411dd..435250c72d5684 100644
+--- a/include/net/bluetooth/bluetooth.h
++++ b/include/net/bluetooth/bluetooth.h
+@@ -123,6 +123,7 @@ struct bt_voice {
+
+ #define BT_VOICE_TRANSPARENT 0x0003
+ #define BT_VOICE_CVSD_16BIT 0x0060
++#define BT_VOICE_TRANSPARENT_16BIT 0x0063
+
+ #define BT_SNDMTU 12
+ #define BT_RCVMTU 13
+@@ -590,15 +591,6 @@ static inline struct sk_buff *bt_skb_sendmmsg(struct sock *sk,
+ return skb;
+ }
+
+-static inline int bt_copy_from_sockptr(void *dst, size_t dst_size,
+- sockptr_t src, size_t src_size)
+-{
+- if (dst_size > src_size)
+- return -EINVAL;
+-
+- return copy_from_sockptr(dst, src, dst_size);
+-}
+-
+ int bt_to_errno(u16 code);
+ __u8 bt_status(int err);
+
+diff --git a/include/net/lapb.h b/include/net/lapb.h
+index 124ee122f2c8f8..6c07420644e45a 100644
+--- a/include/net/lapb.h
++++ b/include/net/lapb.h
+@@ -4,7 +4,7 @@
+ #include <linux/lapb.h>
+ #include <linux/refcount.h>
+
+-#define LAPB_HEADER_LEN 20 /* LAPB over Ethernet + a bit more */
++#define LAPB_HEADER_LEN MAX_HEADER /* LAPB over Ethernet + a bit more */
+
+ #define LAPB_ACK_PENDING_CONDITION 0x01
+ #define LAPB_REJECT_CONDITION 0x02
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index 333e0fae6796c8..5b712582f9a9ce 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -6770,14 +6770,12 @@ void ieee80211_chswitch_done(struct ieee80211_vif *vif, bool success,
+ /**
+ * ieee80211_channel_switch_disconnect - disconnect due to channel switch error
+ * @vif: &struct ieee80211_vif pointer from the add_interface callback.
+- * @block_tx: if %true, do not send deauth frame.
+ *
+ * Instruct mac80211 to disconnect due to a channel switch error. The channel
+ * switch can request to block the tx and so, we need to make sure we do not send
+ * a deauth frame in this case.
+ */
+-void ieee80211_channel_switch_disconnect(struct ieee80211_vif *vif,
+- bool block_tx);
++void ieee80211_channel_switch_disconnect(struct ieee80211_vif *vif);
+
+ /**
+ * ieee80211_request_smps - request SM PS transition
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index e67b483cc8bbb8..9398c8f4995368 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -80,6 +80,7 @@ struct net {
+ * or to unregister pernet ops
+ * (pernet_ops_rwsem write locked).
+ */
++ struct llist_node defer_free_list;
+ struct llist_node cleanup_list; /* namespaces on death row */
+
+ #ifdef CONFIG_KEYS
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 066a3ea33b12e9..91ae20cb76485b 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1103,7 +1103,6 @@ struct nft_rule_blob {
+ * @name: name of the chain
+ * @udlen: user data length
+ * @udata: user data in the chain
+- * @rcu_head: rcu head for deferred release
+ * @blob_next: rule blob pointer to the next in the chain
+ */
+ struct nft_chain {
+@@ -1121,7 +1120,6 @@ struct nft_chain {
+ char *name;
+ u16 udlen;
+ u8 *udata;
+- struct rcu_head rcu_head;
+
+ /* Only used during control plane commit phase: */
+ struct nft_rule_blob *blob_next;
+@@ -1265,7 +1263,6 @@ static inline void nft_use_inc_restore(u32 *use)
+ * @sets: sets in the table
+ * @objects: stateful objects in the table
+ * @flowtables: flow tables in the table
+- * @net: netnamespace this table belongs to
+ * @hgenerator: handle generator state
+ * @handle: table handle
+ * @use: number of chain references to this table
+@@ -1285,7 +1282,6 @@ struct nft_table {
+ struct list_head sets;
+ struct list_head objects;
+ struct list_head flowtables;
+- possible_net_t net;
+ u64 hgenerator;
+ u64 handle;
+ u32 use;
+diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h
+index 462c653e101746..2db9ae0575b609 100644
+--- a/include/soc/mscc/ocelot.h
++++ b/include/soc/mscc/ocelot.h
+@@ -778,7 +778,6 @@ struct ocelot_port {
+
+ phy_interface_t phy_mode;
+
+- unsigned int ptp_skbs_in_flight;
+ struct sk_buff_head tx_skbs;
+
+ unsigned int trap_proto;
+@@ -786,7 +785,6 @@ struct ocelot_port {
+ u16 mrp_ring_id;
+
+ u8 ptp_cmd;
+- u8 ts_id;
+
+ u8 index;
+
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 346826e3c933da..41d20b7199c4af 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -6415,6 +6415,101 @@ int btf_ctx_arg_offset(const struct btf *btf, const struct btf_type *func_proto,
+ return off;
+ }
+
++struct bpf_raw_tp_null_args {
++ const char *func;
++ u64 mask;
++};
++
++static const struct bpf_raw_tp_null_args raw_tp_null_args[] = {
++ /* sched */
++ { "sched_pi_setprio", 0x10 },
++ /* ... from sched_numa_pair_template event class */
++ { "sched_stick_numa", 0x100 },
++ { "sched_swap_numa", 0x100 },
++ /* afs */
++ { "afs_make_fs_call", 0x10 },
++ { "afs_make_fs_calli", 0x10 },
++ { "afs_make_fs_call1", 0x10 },
++ { "afs_make_fs_call2", 0x10 },
++ { "afs_protocol_error", 0x1 },
++ { "afs_flock_ev", 0x10 },
++ /* cachefiles */
++ { "cachefiles_lookup", 0x1 | 0x200 },
++ { "cachefiles_unlink", 0x1 },
++ { "cachefiles_rename", 0x1 },
++ { "cachefiles_prep_read", 0x1 },
++ { "cachefiles_mark_active", 0x1 },
++ { "cachefiles_mark_failed", 0x1 },
++ { "cachefiles_mark_inactive", 0x1 },
++ { "cachefiles_vfs_error", 0x1 },
++ { "cachefiles_io_error", 0x1 },
++ { "cachefiles_ondemand_open", 0x1 },
++ { "cachefiles_ondemand_copen", 0x1 },
++ { "cachefiles_ondemand_close", 0x1 },
++ { "cachefiles_ondemand_read", 0x1 },
++ { "cachefiles_ondemand_cread", 0x1 },
++ { "cachefiles_ondemand_fd_write", 0x1 },
++ { "cachefiles_ondemand_fd_release", 0x1 },
++ /* ext4, from ext4__mballoc event class */
++ { "ext4_mballoc_discard", 0x10 },
++ { "ext4_mballoc_free", 0x10 },
++ /* fib */
++ { "fib_table_lookup", 0x100 },
++ /* filelock */
++ /* ... from filelock_lock event class */
++ { "posix_lock_inode", 0x10 },
++ { "fcntl_setlk", 0x10 },
++ { "locks_remove_posix", 0x10 },
++ { "flock_lock_inode", 0x10 },
++ /* ... from filelock_lease event class */
++ { "break_lease_noblock", 0x10 },
++ { "break_lease_block", 0x10 },
++ { "break_lease_unblock", 0x10 },
++ { "generic_delete_lease", 0x10 },
++ { "time_out_leases", 0x10 },
++ /* host1x */
++ { "host1x_cdma_push_gather", 0x10000 },
++ /* huge_memory */
++ { "mm_khugepaged_scan_pmd", 0x10 },
++ { "mm_collapse_huge_page_isolate", 0x1 },
++ { "mm_khugepaged_scan_file", 0x10 },
++ { "mm_khugepaged_collapse_file", 0x10 },
++ /* kmem */
++ { "mm_page_alloc", 0x1 },
++ { "mm_page_pcpu_drain", 0x1 },
++ /* .. from mm_page event class */
++ { "mm_page_alloc_zone_locked", 0x1 },
++ /* netfs */
++ { "netfs_failure", 0x10 },
++ /* power */
++ { "device_pm_callback_start", 0x10 },
++ /* qdisc */
++ { "qdisc_dequeue", 0x1000 },
++ /* rxrpc */
++ { "rxrpc_recvdata", 0x1 },
++ { "rxrpc_resend", 0x10 },
++ /* sunrpc */
++ { "xs_stream_read_data", 0x1 },
++ /* ... from xprt_cong_event event class */
++ { "xprt_reserve_cong", 0x10 },
++ { "xprt_release_cong", 0x10 },
++ { "xprt_get_cong", 0x10 },
++ { "xprt_put_cong", 0x10 },
++ /* tcp */
++ { "tcp_send_reset", 0x11 },
++ /* tegra_apb_dma */
++ { "tegra_dma_tx_status", 0x100 },
++ /* timer_migration */
++ { "tmigr_update_events", 0x1 },
++ /* writeback, from writeback_folio_template event class */
++ { "writeback_dirty_folio", 0x10 },
++ { "folio_wait_writeback", 0x10 },
++ /* rdma */
++ { "mr_integ_alloc", 0x2000 },
++ /* bpf_testmod */
++ { "bpf_testmod_test_read", 0x0 },
++};
++
+ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ const struct bpf_prog *prog,
+ struct bpf_insn_access_aux *info)
+@@ -6425,6 +6520,7 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ const char *tname = prog->aux->attach_func_name;
+ struct bpf_verifier_log *log = info->log;
+ const struct btf_param *args;
++ bool ptr_err_raw_tp = false;
+ const char *tag_value;
+ u32 nr_args, arg;
+ int i, ret;
+@@ -6519,6 +6615,12 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ return false;
+ }
+
++ if (size != sizeof(u64)) {
++ bpf_log(log, "func '%s' size %d must be 8\n",
++ tname, size);
++ return false;
++ }
++
+ /* check for PTR_TO_RDONLY_BUF_OR_NULL or PTR_TO_RDWR_BUF_OR_NULL */
+ for (i = 0; i < prog->aux->ctx_arg_info_size; i++) {
+ const struct bpf_ctx_arg_aux *ctx_arg_info = &prog->aux->ctx_arg_info[i];
+@@ -6564,12 +6666,42 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ if (prog_args_trusted(prog))
+ info->reg_type |= PTR_TRUSTED;
+
+- /* Raw tracepoint arguments always get marked as maybe NULL */
+- if (bpf_prog_is_raw_tp(prog))
+- info->reg_type |= PTR_MAYBE_NULL;
+- else if (btf_param_match_suffix(btf, &args[arg], "__nullable"))
++ if (btf_param_match_suffix(btf, &args[arg], "__nullable"))
+ info->reg_type |= PTR_MAYBE_NULL;
+
++ if (prog->expected_attach_type == BPF_TRACE_RAW_TP) {
++ struct btf *btf = prog->aux->attach_btf;
++ const struct btf_type *t;
++ const char *tname;
++
++ /* BTF lookups cannot fail, return false on error */
++ t = btf_type_by_id(btf, prog->aux->attach_btf_id);
++ if (!t)
++ return false;
++ tname = btf_name_by_offset(btf, t->name_off);
++ if (!tname)
++ return false;
++ /* Checked by bpf_check_attach_target */
++ tname += sizeof("btf_trace_") - 1;
++ for (i = 0; i < ARRAY_SIZE(raw_tp_null_args); i++) {
++ /* Is this a func with potential NULL args? */
++ if (strcmp(tname, raw_tp_null_args[i].func))
++ continue;
++ if (raw_tp_null_args[i].mask & (0x1 << (arg * 4)))
++ info->reg_type |= PTR_MAYBE_NULL;
++ /* Is the current arg IS_ERR? */
++ if (raw_tp_null_args[i].mask & (0x2 << (arg * 4)))
++ ptr_err_raw_tp = true;
++ break;
++ }
++ /* If we don't know NULL-ness specification and the tracepoint
++ * is coming from a loadable module, be conservative and mark
++ * argument as PTR_MAYBE_NULL.
++ */
++ if (i == ARRAY_SIZE(raw_tp_null_args) && btf_is_module(btf))
++ info->reg_type |= PTR_MAYBE_NULL;
++ }
++
+ if (tgt_prog) {
+ enum bpf_prog_type tgt_type;
+
+@@ -6614,6 +6746,15 @@ bool btf_ctx_access(int off, int size, enum bpf_access_type type,
+ bpf_log(log, "func '%s' arg%d has btf_id %d type %s '%s'\n",
+ tname, arg, info->btf_id, btf_type_str(t),
+ __btf_name_by_offset(btf, t->name_off));
++
++ /* Perform all checks on the validity of type for this argument, but if
++ * we know it can be IS_ERR at runtime, scrub pointer type and mark as
++ * scalar.
++ */
++ if (ptr_err_raw_tp) {
++ bpf_log(log, "marking pointer arg%d as scalar as it may encode error", arg);
++ info->reg_type = SCALAR_VALUE;
++ }
+ return true;
+ }
+ EXPORT_SYMBOL_GPL(btf_ctx_access);
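
In raw_tp_null_args above, each tracepoint argument owns a 4-bit nibble of the
mask: bit 0x1 of the nibble marks the argument as possibly NULL and bit 0x2 as
possibly an IS_ERR-encoded pointer, matching the `mask & (0x1 << (arg * 4))`
and `mask & (0x2 << (arg * 4))` tests in the hunk. A worked decode of two
entries from the table:

#include <stdio.h>

static void decode(const char *func, unsigned long long mask, int nargs)
{
	for (int arg = 0; arg < nargs; arg++)
		printf("%s arg%d: maybe-NULL=%d maybe-IS_ERR=%d\n", func, arg,
		       !!(mask & (0x1ULL << (arg * 4))),
		       !!(mask & (0x2ULL << (arg * 4))));
}

int main(void)
{
	decode("tcp_send_reset", 0x11, 2);	/* args 0 and 1 maybe NULL */
	decode("mr_integ_alloc", 0x2000, 4);	/* arg 3 may encode IS_ERR */
	return 0;
}
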
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index b2008076df9c26..4c486a0bfcc4d8 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -418,25 +418,6 @@ static struct btf_record *reg_btf_record(const struct bpf_reg_state *reg)
+ return rec;
+ }
+
+-static bool mask_raw_tp_reg_cond(const struct bpf_verifier_env *env, struct bpf_reg_state *reg) {
+- return reg->type == (PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL) &&
+- bpf_prog_is_raw_tp(env->prog) && !reg->ref_obj_id;
+-}
+-
+-static bool mask_raw_tp_reg(const struct bpf_verifier_env *env, struct bpf_reg_state *reg)
+-{
+- if (!mask_raw_tp_reg_cond(env, reg))
+- return false;
+- reg->type &= ~PTR_MAYBE_NULL;
+- return true;
+-}
+-
+-static void unmask_raw_tp_reg(struct bpf_reg_state *reg, bool result)
+-{
+- if (result)
+- reg->type |= PTR_MAYBE_NULL;
+-}
+-
+ static bool subprog_is_global(const struct bpf_verifier_env *env, int subprog)
+ {
+ struct bpf_func_info_aux *aux = env->prog->aux->func_info_aux;
+@@ -6618,7 +6599,6 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+ const char *field_name = NULL;
+ enum bpf_type_flag flag = 0;
+ u32 btf_id = 0;
+- bool mask;
+ int ret;
+
+ if (!env->allow_ptr_leaks) {
+@@ -6690,21 +6670,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+
+ if (ret < 0)
+ return ret;
+- /* For raw_tp progs, we allow dereference of PTR_MAYBE_NULL
+- * trusted PTR_TO_BTF_ID, these are the ones that are possibly
+- * arguments to the raw_tp. Since internal checks in for trusted
+- * reg in check_ptr_to_btf_access would consider PTR_MAYBE_NULL
+- * modifier as problematic, mask it out temporarily for the
+- * check. Don't apply this to pointers with ref_obj_id > 0, as
+- * those won't be raw_tp args.
+- *
+- * We may end up applying this relaxation to other trusted
+- * PTR_TO_BTF_ID with maybe null flag, since we cannot
+- * distinguish PTR_MAYBE_NULL tagged for arguments vs normal
+- * tagging, but that should expand allowed behavior, and not
+- * cause regression for existing behavior.
+- */
+- mask = mask_raw_tp_reg(env, reg);
++
+ if (ret != PTR_TO_BTF_ID) {
+ /* just mark; */
+
+@@ -6765,13 +6731,8 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+ clear_trusted_flags(&flag);
+ }
+
+- if (atype == BPF_READ && value_regno >= 0) {
++ if (atype == BPF_READ && value_regno >= 0)
+ mark_btf_ld_reg(env, regs, value_regno, ret, reg->btf, btf_id, flag);
+- /* We've assigned a new type to regno, so don't undo masking. */
+- if (regno == value_regno)
+- mask = false;
+- }
+- unmask_raw_tp_reg(reg, mask);
+
+ return 0;
+ }
+@@ -7146,7 +7107,7 @@ static int check_mem_access(struct bpf_verifier_env *env, int insn_idx, u32 regn
+ if (!err && t == BPF_READ && value_regno >= 0)
+ mark_reg_unknown(env, regs, value_regno);
+ } else if (base_type(reg->type) == PTR_TO_BTF_ID &&
+- (mask_raw_tp_reg_cond(env, reg) || !type_may_be_null(reg->type))) {
++ !type_may_be_null(reg->type)) {
+ err = check_ptr_to_btf_access(env, regs, regno, off, size, t,
+ value_regno);
+ } else if (reg->type == CONST_PTR_TO_MAP) {
+@@ -8844,7 +8805,6 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ enum bpf_reg_type type = reg->type;
+ u32 *arg_btf_id = NULL;
+ int err = 0;
+- bool mask;
+
+ if (arg_type == ARG_DONTCARE)
+ return 0;
+@@ -8885,11 +8845,11 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 arg,
+ base_type(arg_type) == ARG_PTR_TO_SPIN_LOCK)
+ arg_btf_id = fn->arg_btf_id[arg];
+
+- mask = mask_raw_tp_reg(env, reg);
+ err = check_reg_type(env, regno, arg_type, arg_btf_id, meta);
++ if (err)
++ return err;
+
+- err = err ?: check_func_arg_reg_off(env, reg, regno, arg_type);
+- unmask_raw_tp_reg(reg, mask);
++ err = check_func_arg_reg_off(env, reg, regno, arg_type);
+ if (err)
+ return err;
+
+@@ -9684,17 +9644,14 @@ static int btf_check_func_arg_match(struct bpf_verifier_env *env, int subprog,
+ return ret;
+ } else if (base_type(arg->arg_type) == ARG_PTR_TO_BTF_ID) {
+ struct bpf_call_arg_meta meta;
+- bool mask;
+ int err;
+
+ if (register_is_null(reg) && type_may_be_null(arg->arg_type))
+ continue;
+
+ memset(&meta, 0, sizeof(meta)); /* leave func_id as zero */
+- mask = mask_raw_tp_reg(env, reg);
+ err = check_reg_type(env, regno, arg->arg_type, &arg->btf_id, &meta);
+ err = err ?: check_func_arg_reg_off(env, reg, regno, arg->arg_type);
+- unmask_raw_tp_reg(reg, mask);
+ if (err)
+ return err;
+ } else {
+@@ -12009,7 +11966,6 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ enum bpf_arg_type arg_type = ARG_DONTCARE;
+ u32 regno = i + 1, ref_id, type_size;
+ bool is_ret_buf_sz = false;
+- bool mask = false;
+ int kf_arg_type;
+
+ t = btf_type_skip_modifiers(btf, args[i].type, NULL);
+@@ -12068,15 +12024,12 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ return -EINVAL;
+ }
+
+- mask = mask_raw_tp_reg(env, reg);
+ if ((is_kfunc_trusted_args(meta) || is_kfunc_rcu(meta)) &&
+ (register_is_null(reg) || type_may_be_null(reg->type)) &&
+ !is_kfunc_arg_nullable(meta->btf, &args[i])) {
+ verbose(env, "Possibly NULL pointer passed to trusted arg%d\n", i);
+- unmask_raw_tp_reg(reg, mask);
+ return -EACCES;
+ }
+- unmask_raw_tp_reg(reg, mask);
+
+ if (reg->ref_obj_id) {
+ if (is_kfunc_release(meta) && meta->ref_obj_id) {
+@@ -12134,24 +12087,16 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ if (!is_kfunc_trusted_args(meta) && !is_kfunc_rcu(meta))
+ break;
+
+- /* Allow passing maybe NULL raw_tp arguments to
+- * kfuncs for compatibility. Don't apply this to
+- * arguments with ref_obj_id > 0.
+- */
+- mask = mask_raw_tp_reg(env, reg);
+ if (!is_trusted_reg(reg)) {
+ if (!is_kfunc_rcu(meta)) {
+ verbose(env, "R%d must be referenced or trusted\n", regno);
+- unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ if (!is_rcu_reg(reg)) {
+ verbose(env, "R%d must be a rcu pointer\n", regno);
+- unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ }
+- unmask_raw_tp_reg(reg, mask);
+ fallthrough;
+ case KF_ARG_PTR_TO_CTX:
+ case KF_ARG_PTR_TO_DYNPTR:
+@@ -12174,9 +12119,7 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+
+ if (is_kfunc_release(meta) && reg->ref_obj_id)
+ arg_type |= OBJ_RELEASE;
+- mask = mask_raw_tp_reg(env, reg);
+ ret = check_func_arg_reg_off(env, reg, regno, arg_type);
+- unmask_raw_tp_reg(reg, mask);
+ if (ret < 0)
+ return ret;
+
+@@ -12353,7 +12296,6 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ ref_tname = btf_name_by_offset(btf, ref_t->name_off);
+ fallthrough;
+ case KF_ARG_PTR_TO_BTF_ID:
+- mask = mask_raw_tp_reg(env, reg);
+ /* Only base_type is checked, further checks are done here */
+ if ((base_type(reg->type) != PTR_TO_BTF_ID ||
+ (bpf_type_has_unsafe_modifiers(reg->type) && !is_rcu_reg(reg))) &&
+@@ -12362,11 +12304,9 @@ static int check_kfunc_args(struct bpf_verifier_env *env, struct bpf_kfunc_call_
+ verbose(env, "expected %s or socket\n",
+ reg_type_str(env, base_type(reg->type) |
+ (type_flag(reg->type) & BPF_REG_TRUSTED_MODIFIERS)));
+- unmask_raw_tp_reg(reg, mask);
+ return -EINVAL;
+ }
+ ret = process_kf_arg_ptr_to_btf_id(env, reg, ref_t, ref_tname, ref_id, meta, i);
+- unmask_raw_tp_reg(reg, mask);
+ if (ret < 0)
+ return ret;
+ break;
+@@ -13336,7 +13276,7 @@ static int sanitize_check_bounds(struct bpf_verifier_env *env,
+ */
+ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ struct bpf_insn *insn,
+- struct bpf_reg_state *ptr_reg,
++ const struct bpf_reg_state *ptr_reg,
+ const struct bpf_reg_state *off_reg)
+ {
+ struct bpf_verifier_state *vstate = env->cur_state;
+@@ -13350,7 +13290,6 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ struct bpf_sanitize_info info = {};
+ u8 opcode = BPF_OP(insn->code);
+ u32 dst = insn->dst_reg;
+- bool mask;
+ int ret;
+
+ dst_reg = &regs[dst];
+@@ -13377,14 +13316,11 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
+ return -EACCES;
+ }
+
+- mask = mask_raw_tp_reg(env, ptr_reg);
+ if (ptr_reg->type & PTR_MAYBE_NULL) {
+ verbose(env, "R%d pointer arithmetic on %s prohibited, null-check it first\n",
+ dst, reg_type_str(env, ptr_reg->type));
+- unmask_raw_tp_reg(ptr_reg, mask);
+ return -EACCES;
+ }
+- unmask_raw_tp_reg(ptr_reg, mask);
+
+ switch (base_type(ptr_reg->type)) {
+ case PTR_TO_CTX:
+@@ -19934,7 +19870,6 @@ static int convert_ctx_accesses(struct bpf_verifier_env *env)
+ * for this case.
+ */
+ case PTR_TO_BTF_ID | MEM_ALLOC | PTR_UNTRUSTED:
+- case PTR_TO_BTF_ID | PTR_TRUSTED | PTR_MAYBE_NULL:
+ if (type == BPF_READ) {
+ if (BPF_MODE(insn->code) == BPF_MEM)
+ insn->code = BPF_LDX | BPF_PROBE_MEM |
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index 40a1ad4493b4d9..fc6f41ac33eb13 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -781,7 +781,7 @@ static inline void replenish_dl_new_period(struct sched_dl_entity *dl_se,
+ * If it is a deferred reservation, and the server
+ * is not handling an starvation case, defer it.
+ */
+- if (dl_se->dl_defer & !dl_se->dl_defer_running) {
++ if (dl_se->dl_defer && !dl_se->dl_defer_running) {
+ dl_se->dl_throttled = 1;
+ dl_se->dl_defer_armed = 1;
+ }
+diff --git a/kernel/static_call_inline.c b/kernel/static_call_inline.c
+index 5259cda486d058..bb7d066a7c3979 100644
+--- a/kernel/static_call_inline.c
++++ b/kernel/static_call_inline.c
+@@ -15,7 +15,7 @@ extern struct static_call_site __start_static_call_sites[],
+ extern struct static_call_tramp_key __start_static_call_tramp_key[],
+ __stop_static_call_tramp_key[];
+
+-static int static_call_initialized;
++int static_call_initialized;
+
+ /*
+ * Must be called before early_initcall() to be effective.
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 792dc35414a3c3..50881898e758d8 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -2215,6 +2215,9 @@ void perf_event_detach_bpf_prog(struct perf_event *event)
+ goto unlock;
+
+ old_array = bpf_event_rcu_dereference(event->tp_event->prog_array);
++ if (!old_array)
++ goto put;
++
+ ret = bpf_prog_array_copy(old_array, event->prog, NULL, 0, &new_array);
+ if (ret < 0) {
+ bpf_prog_array_delete_safe(old_array, event->prog);
+@@ -2223,6 +2226,14 @@ void perf_event_detach_bpf_prog(struct perf_event *event)
+ bpf_prog_array_free_sleepable(old_array);
+ }
+
++put:
++ /*
++ * It could be that the bpf_prog is not sleepable (and will be freed
++ * via normal RCU), but is called from a point that supports sleepable
++ * programs and uses tasks-trace-RCU.
++ */
++ synchronize_rcu_tasks_trace();
++
+ bpf_prog_put(event->prog);
+ event->prog = NULL;
+
+diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
+index b30fc8fcd0956a..b085a8a164ea03 100644
+--- a/kernel/trace/trace_uprobe.c
++++ b/kernel/trace/trace_uprobe.c
+@@ -1400,9 +1400,13 @@ static void __uprobe_perf_func(struct trace_uprobe *tu,
+
+ #ifdef CONFIG_BPF_EVENTS
+ if (bpf_prog_array_valid(call)) {
++ const struct bpf_prog_array *array;
+ u32 ret;
+
+- ret = bpf_prog_run_array_uprobe(call->prog_array, regs, bpf_prog_run);
++ rcu_read_lock_trace();
++ array = rcu_dereference_check(call->prog_array, rcu_read_lock_trace_held());
++ ret = bpf_prog_run_array_uprobe(array, regs, bpf_prog_run);
++ rcu_read_unlock_trace();
+ if (!ret)
+ return;
+ }
+diff --git a/mm/slub.c b/mm/slub.c
+index 15ba89fef89a1f..b9447a955f6112 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -2199,9 +2199,24 @@ bool memcg_slab_post_charge(void *p, gfp_t flags)
+
+ folio = virt_to_folio(p);
+ if (!folio_test_slab(folio)) {
+- return folio_memcg_kmem(folio) ||
+- (__memcg_kmem_charge_page(folio_page(folio, 0), flags,
+- folio_order(folio)) == 0);
++ int size;
++
++ if (folio_memcg_kmem(folio))
++ return true;
++
++ if (__memcg_kmem_charge_page(folio_page(folio, 0), flags,
++ folio_order(folio)))
++ return false;
++
++ /*
++ * This folio has already been accounted in the global stats but
++ * not in the memcg stats. So, subtract from the global and use
++ * the interface which adds to both global and memcg stats.
++ */
++ size = folio_size(folio);
++ node_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B, -size);
++ lruvec_stat_mod_folio(folio, NR_SLAB_UNRECLAIMABLE_B, size);
++ return true;
+ }
+
+ slab = folio_slab(folio);
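
The comment in the hunk above describes a stat hand-off: the large-object folio
was already counted in the global NR_SLAB_UNRECLAIMABLE_B at allocation time,
so after a late memcg charge the global count is decremented once and the size
re-added through the memcg-aware interface, leaving each counter incremented
exactly once. A toy model of the bookkeeping (all names are stand-ins):

#include <stdio.h>

static long global_stat, memcg_stat;

static void node_stat_mod(long delta)		/* global only */
{
	global_stat += delta;
}

static void lruvec_stat_mod(long delta)		/* global + memcg */
{
	global_stat += delta;
	memcg_stat += delta;
}

int main(void)
{
	long size = 8192;

	node_stat_mod(size);	/* allocation time: global only  */
	node_stat_mod(-size);	/* post-charge: undo global...   */
	lruvec_stat_mod(size);	/* ...and account in both places */

	printf("global=%ld memcg=%ld\n", global_stat, memcg_stat); /* 8192 8192 */
	return 0;
}
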
+diff --git a/net/batman-adv/translation-table.c b/net/batman-adv/translation-table.c
+index 2243cec18ecc86..53dea8ae96e477 100644
+--- a/net/batman-adv/translation-table.c
++++ b/net/batman-adv/translation-table.c
+@@ -990,16 +990,25 @@ static void batadv_tt_tvlv_container_update(struct batadv_priv *bat_priv)
+ int tt_diff_len, tt_change_len = 0;
+ int tt_diff_entries_num = 0;
+ int tt_diff_entries_count = 0;
++ bool drop_changes = false;
++ size_t tt_extra_len = 0;
+ u16 tvlv_len;
+
+ tt_diff_entries_num = atomic_read(&bat_priv->tt.local_changes);
+ tt_diff_len = batadv_tt_len(tt_diff_entries_num);
+
+ /* if we have too many changes for one packet don't send any
+- * and wait for the tt table request which will be fragmented
++ * and wait for the tt table request so we can reply with the full
++ * (fragmented) table.
++ *
++ * The local change history should still be cleaned up so the next
++ * TT round can start again with a clean state.
+ */
+- if (tt_diff_len > bat_priv->soft_iface->mtu)
++ if (tt_diff_len > bat_priv->soft_iface->mtu) {
+ tt_diff_len = 0;
++ tt_diff_entries_num = 0;
++ drop_changes = true;
++ }
+
+ tvlv_len = batadv_tt_prepare_tvlv_local_data(bat_priv, &tt_data,
+ &tt_change, &tt_diff_len);
+@@ -1008,7 +1017,7 @@ static void batadv_tt_tvlv_container_update(struct batadv_priv *bat_priv)
+
+ tt_data->flags = BATADV_TT_OGM_DIFF;
+
+- if (tt_diff_len == 0)
++ if (!drop_changes && tt_diff_len == 0)
+ goto container_register;
+
+ spin_lock_bh(&bat_priv->tt.changes_list_lock);
+@@ -1027,6 +1036,9 @@ static void batadv_tt_tvlv_container_update(struct batadv_priv *bat_priv)
+ }
+ spin_unlock_bh(&bat_priv->tt.changes_list_lock);
+
++ tt_extra_len = batadv_tt_len(tt_diff_entries_num -
++ tt_diff_entries_count);
++
+ /* Keep the buffer for possible tt_request */
+ spin_lock_bh(&bat_priv->tt.last_changeset_lock);
+ kfree(bat_priv->tt.last_changeset);
+@@ -1035,6 +1047,7 @@ static void batadv_tt_tvlv_container_update(struct batadv_priv *bat_priv)
+ tt_change_len = batadv_tt_len(tt_diff_entries_count);
+ /* check whether this new OGM has no changes due to size problems */
+ if (tt_diff_entries_count > 0) {
++ tt_diff_len -= tt_extra_len;
+ /* if kmalloc() fails we will reply with the full table
+ * instead of providing the diff
+ */
+@@ -1047,6 +1060,8 @@ static void batadv_tt_tvlv_container_update(struct batadv_priv *bat_priv)
+ }
+ spin_unlock_bh(&bat_priv->tt.last_changeset_lock);
+
++ /* Remove extra packet space for OGM */
++ tvlv_len -= tt_extra_len;
+ container_register:
+ batadv_tvlv_container_register(bat_priv, BATADV_TVLV_TT, 1, tt_data,
+ tvlv_len);
+@@ -2747,14 +2762,16 @@ static bool batadv_tt_global_valid(const void *entry_ptr,
+ *
+ * Fills the tvlv buff with the tt entries from the specified hash. If valid_cb
+ * is not provided then this becomes a no-op.
++ *
++ * Return: Remaining unused length in tvlv_buff.
+ */
+-static void batadv_tt_tvlv_generate(struct batadv_priv *bat_priv,
+- struct batadv_hashtable *hash,
+- void *tvlv_buff, u16 tt_len,
+- bool (*valid_cb)(const void *,
+- const void *,
+- u8 *flags),
+- void *cb_data)
++static u16 batadv_tt_tvlv_generate(struct batadv_priv *bat_priv,
++ struct batadv_hashtable *hash,
++ void *tvlv_buff, u16 tt_len,
++ bool (*valid_cb)(const void *,
++ const void *,
++ u8 *flags),
++ void *cb_data)
+ {
+ struct batadv_tt_common_entry *tt_common_entry;
+ struct batadv_tvlv_tt_change *tt_change;
+@@ -2768,7 +2785,7 @@ static void batadv_tt_tvlv_generate(struct batadv_priv *bat_priv,
+ tt_change = tvlv_buff;
+
+ if (!valid_cb)
+- return;
++ return tt_len;
+
+ rcu_read_lock();
+ for (i = 0; i < hash->size; i++) {
+@@ -2794,6 +2811,8 @@ static void batadv_tt_tvlv_generate(struct batadv_priv *bat_priv,
+ }
+ }
+ rcu_read_unlock();
++
++ return batadv_tt_len(tt_tot - tt_num_entries);
+ }
+
+ /**
+@@ -3069,10 +3088,11 @@ static bool batadv_send_other_tt_response(struct batadv_priv *bat_priv,
+ goto out;
+
+ /* fill the rest of the tvlv with the real TT entries */
+- batadv_tt_tvlv_generate(bat_priv, bat_priv->tt.global_hash,
+- tt_change, tt_len,
+- batadv_tt_global_valid,
+- req_dst_orig_node);
++ tvlv_len -= batadv_tt_tvlv_generate(bat_priv,
++ bat_priv->tt.global_hash,
++ tt_change, tt_len,
++ batadv_tt_global_valid,
++ req_dst_orig_node);
+ }
+
+ /* Don't send the response, if larger than fragmented packet. */
+@@ -3196,9 +3216,11 @@ static bool batadv_send_my_tt_response(struct batadv_priv *bat_priv,
+ goto out;
+
+ /* fill the rest of the tvlv with the real TT entries */
+- batadv_tt_tvlv_generate(bat_priv, bat_priv->tt.local_hash,
+- tt_change, tt_len,
+- batadv_tt_local_valid, NULL);
++ tvlv_len -= batadv_tt_tvlv_generate(bat_priv,
++ bat_priv->tt.local_hash,
++ tt_change, tt_len,
++ batadv_tt_local_valid,
++ NULL);
+ }
+
+ tvlv_tt_data->flags = BATADV_TT_RESPONSE;
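
batadv_tt_tvlv_generate() now reports how much of the pre-sized buffer went
unused (entries filtered out by valid_cb), and both callers shrink tvlv_len by
that amount; the container-update path does the same arithmetic via
tt_extra_len. Assuming batadv_tt_len() is linear in the entry count, the
accounting reduces to (per-entry size is an assumed illustration value):

#include <stdio.h>

#define ENTRY_LEN 12	/* assumed per-entry TVLV change size */

static unsigned int batadv_tt_len(unsigned int changes)
{
	return changes * ENTRY_LEN;
}

int main(void)
{
	unsigned int requested = 10;	/* entries the buffer was sized for */
	unsigned int emitted = 7;	/* entries that passed valid_cb     */
	unsigned int tvlv_len = 100 + batadv_tt_len(requested);
	unsigned int tt_extra_len = batadv_tt_len(requested - emitted);

	tvlv_len -= tt_extra_len;	/* trim unused space before sending */
	printf("tvlv_len=%u (trimmed %u)\n", tvlv_len, tt_extra_len);
	return 0;
}
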
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 2b5ba8acd1d84a..388d46c6a043d4 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6872,38 +6872,27 @@ static void hci_le_create_big_complete_evt(struct hci_dev *hdev, void *data,
+ return;
+
+ hci_dev_lock(hdev);
+- rcu_read_lock();
+
+ /* Connect all BISes that are bound to the BIG */
+- list_for_each_entry_rcu(conn, &hdev->conn_hash.list, list) {
+- if (bacmp(&conn->dst, BDADDR_ANY) ||
+- conn->type != ISO_LINK ||
+- conn->iso_qos.bcast.big != ev->handle)
++ while ((conn = hci_conn_hash_lookup_big_state(hdev, ev->handle,
++ BT_BOUND))) {
++ if (ev->status) {
++ hci_connect_cfm(conn, ev->status);
++ hci_conn_del(conn);
+ continue;
++ }
+
+ if (hci_conn_set_handle(conn,
+ __le16_to_cpu(ev->bis_handle[i++])))
+ continue;
+
+- if (!ev->status) {
+- conn->state = BT_CONNECTED;
+- set_bit(HCI_CONN_BIG_CREATED, &conn->flags);
+- rcu_read_unlock();
+- hci_debugfs_create_conn(conn);
+- hci_conn_add_sysfs(conn);
+- hci_iso_setup_path(conn);
+- rcu_read_lock();
+- continue;
+- }
+-
+- hci_connect_cfm(conn, ev->status);
+- rcu_read_unlock();
+- hci_conn_del(conn);
+- rcu_read_lock();
++ conn->state = BT_CONNECTED;
++ set_bit(HCI_CONN_BIG_CREATED, &conn->flags);
++ hci_debugfs_create_conn(conn);
++ hci_conn_add_sysfs(conn);
++ hci_iso_setup_path(conn);
+ }
+
+- rcu_read_unlock();
+-
+ if (!ev->status && !i)
+ /* If no BISes have been connected for the BIG,
+ * terminate. This is in case all bound connections
+diff --git a/net/bluetooth/hci_sock.c b/net/bluetooth/hci_sock.c
+index 2272e1849ebd89..022b86797acdc5 100644
+--- a/net/bluetooth/hci_sock.c
++++ b/net/bluetooth/hci_sock.c
+@@ -1926,7 +1926,7 @@ static int hci_sock_sendmsg(struct socket *sock, struct msghdr *msg,
+ }
+
+ static int hci_sock_setsockopt_old(struct socket *sock, int level, int optname,
+- sockptr_t optval, unsigned int len)
++ sockptr_t optval, unsigned int optlen)
+ {
+ struct hci_ufilter uf = { .opcode = 0 };
+ struct sock *sk = sock->sk;
+@@ -1943,7 +1943,7 @@ static int hci_sock_setsockopt_old(struct socket *sock, int level, int optname,
+
+ switch (optname) {
+ case HCI_DATA_DIR:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, len);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -1954,7 +1954,7 @@ static int hci_sock_setsockopt_old(struct socket *sock, int level, int optname,
+ break;
+
+ case HCI_TIME_STAMP:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, len);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -1974,7 +1974,7 @@ static int hci_sock_setsockopt_old(struct socket *sock, int level, int optname,
+ uf.event_mask[1] = *((u32 *) f->event_mask + 1);
+ }
+
+- err = bt_copy_from_sockptr(&uf, sizeof(uf), optval, len);
++ err = copy_safe_from_sockptr(&uf, sizeof(uf), optval, optlen);
+ if (err)
+ break;
+
+@@ -2005,7 +2005,7 @@ static int hci_sock_setsockopt_old(struct socket *sock, int level, int optname,
+ }
+
+ static int hci_sock_setsockopt(struct socket *sock, int level, int optname,
+- sockptr_t optval, unsigned int len)
++ sockptr_t optval, unsigned int optlen)
+ {
+ struct sock *sk = sock->sk;
+ int err = 0;
+@@ -2015,7 +2015,7 @@ static int hci_sock_setsockopt(struct socket *sock, int level, int optname,
+
+ if (level == SOL_HCI)
+ return hci_sock_setsockopt_old(sock, level, optname, optval,
+- len);
++ optlen);
+
+ if (level != SOL_BLUETOOTH)
+ return -ENOPROTOOPT;
+@@ -2035,7 +2035,7 @@ static int hci_sock_setsockopt(struct socket *sock, int level, int optname,
+ goto done;
+ }
+
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, len);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 5e2d9758bd3c1c..644b606743e212 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -1129,6 +1129,7 @@ static int iso_listen_bis(struct sock *sk)
+ return -EHOSTUNREACH;
+
+ hci_dev_lock(hdev);
++ lock_sock(sk);
+
+ /* Fail if user set invalid QoS */
+ if (iso_pi(sk)->qos_user_set && !check_bcast_qos(&iso_pi(sk)->qos)) {
+@@ -1158,10 +1159,10 @@ static int iso_listen_bis(struct sock *sk)
+ goto unlock;
+ }
+
+- hci_dev_put(hdev);
+-
+ unlock:
++ release_sock(sk);
+ hci_dev_unlock(hdev);
++ hci_dev_put(hdev);
+ return err;
+ }
+
+@@ -1188,6 +1189,7 @@ static int iso_sock_listen(struct socket *sock, int backlog)
+
+ BT_DBG("sk %p backlog %d", sk, backlog);
+
++ sock_hold(sk);
+ lock_sock(sk);
+
+ if (sk->sk_state != BT_BOUND) {
+@@ -1200,10 +1202,16 @@ static int iso_sock_listen(struct socket *sock, int backlog)
+ goto done;
+ }
+
+- if (!bacmp(&iso_pi(sk)->dst, BDADDR_ANY))
++ if (!bacmp(&iso_pi(sk)->dst, BDADDR_ANY)) {
+ err = iso_listen_cis(sk);
+- else
++ } else {
++ /* Drop sock lock to avoid potential
++ * deadlock with the hdev lock.
++ */
++ release_sock(sk);
+ err = iso_listen_bis(sk);
++ lock_sock(sk);
++ }
+
+ if (err)
+ goto done;
+@@ -1215,6 +1223,7 @@ static int iso_sock_listen(struct socket *sock, int backlog)
+
+ done:
+ release_sock(sk);
++ sock_put(sk);
+ return err;
+ }
+
+@@ -1226,7 +1235,11 @@ static int iso_sock_accept(struct socket *sock, struct socket *newsock,
+ long timeo;
+ int err = 0;
+
+- lock_sock(sk);
++ /* Use explicit nested locking to avoid lockdep warnings generated
++ * because the parent socket and the child socket are locked on the
++ * same thread.
++ */
++ lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
+
+ timeo = sock_rcvtimeo(sk, arg->flags & O_NONBLOCK);
+
+@@ -1257,7 +1270,7 @@ static int iso_sock_accept(struct socket *sock, struct socket *newsock,
+ release_sock(sk);
+
+ timeo = wait_woken(&wait, TASK_INTERRUPTIBLE, timeo);
+- lock_sock(sk);
++ lock_sock_nested(sk, SINGLE_DEPTH_NESTING);
+ }
+ remove_wait_queue(sk_sleep(sk), &wait);
+
+@@ -1398,6 +1411,7 @@ static void iso_conn_big_sync(struct sock *sk)
+ * change.
+ */
+ hci_dev_lock(hdev);
++ lock_sock(sk);
+
+ if (!test_and_set_bit(BT_SK_BIG_SYNC, &iso_pi(sk)->flags)) {
+ err = hci_le_big_create_sync(hdev, iso_pi(sk)->conn->hcon,
+@@ -1410,6 +1424,7 @@ static void iso_conn_big_sync(struct sock *sk)
+ err);
+ }
+
++ release_sock(sk);
+ hci_dev_unlock(hdev);
+ }
+
+@@ -1418,39 +1433,57 @@ static int iso_sock_recvmsg(struct socket *sock, struct msghdr *msg,
+ {
+ struct sock *sk = sock->sk;
+ struct iso_pinfo *pi = iso_pi(sk);
++ bool early_ret = false;
++ int err = 0;
+
+ BT_DBG("sk %p", sk);
+
+ if (test_and_clear_bit(BT_SK_DEFER_SETUP, &bt_sk(sk)->flags)) {
++ sock_hold(sk);
+ lock_sock(sk);
++
+ switch (sk->sk_state) {
+ case BT_CONNECT2:
+ if (test_bit(BT_SK_PA_SYNC, &pi->flags)) {
++ release_sock(sk);
+ iso_conn_big_sync(sk);
++ lock_sock(sk);
++
+ sk->sk_state = BT_LISTEN;
+ } else {
+ iso_conn_defer_accept(pi->conn->hcon);
+ sk->sk_state = BT_CONFIG;
+ }
+- release_sock(sk);
+- return 0;
++
++ early_ret = true;
++ break;
+ case BT_CONNECTED:
+ if (test_bit(BT_SK_PA_SYNC, &iso_pi(sk)->flags)) {
++ release_sock(sk);
+ iso_conn_big_sync(sk);
++ lock_sock(sk);
++
+ sk->sk_state = BT_LISTEN;
+- release_sock(sk);
+- return 0;
++ early_ret = true;
+ }
+
+- release_sock(sk);
+ break;
+ case BT_CONNECT:
+ release_sock(sk);
+- return iso_connect_cis(sk);
++ err = iso_connect_cis(sk);
++ lock_sock(sk);
++
++ early_ret = true;
++ break;
+ default:
+- release_sock(sk);
+ break;
+ }
++
++ release_sock(sk);
++ sock_put(sk);
++
++ if (early_ret)
++ return err;
+ }
+
+ return bt_sock_recvmsg(sock, msg, len, flags);
+@@ -1566,7 +1599,7 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -1577,7 +1610,7 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+
+ case BT_PKT_STATUS:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -1596,7 +1629,7 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&qos, sizeof(qos), optval, optlen);
++ err = copy_safe_from_sockptr(&qos, sizeof(qos), optval, optlen);
+ if (err)
+ break;
+
+@@ -1617,8 +1650,8 @@ static int iso_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(iso_pi(sk)->base, optlen, optval,
+- optlen);
++ err = copy_safe_from_sockptr(iso_pi(sk)->base, optlen, optval,
++ optlen);
+ if (err)
+ break;
+
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 18e89e764f3b42..3d2553dcdb1b3c 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -755,7 +755,8 @@ static int l2cap_sock_setsockopt_old(struct socket *sock, int optname,
+ opts.max_tx = chan->max_tx;
+ opts.txwin_size = chan->tx_win;
+
+- err = bt_copy_from_sockptr(&opts, sizeof(opts), optval, optlen);
++ err = copy_safe_from_sockptr(&opts, sizeof(opts), optval,
++ optlen);
+ if (err)
+ break;
+
+@@ -800,7 +801,7 @@ static int l2cap_sock_setsockopt_old(struct socket *sock, int optname,
+ break;
+
+ case L2CAP_LM:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -909,7 +910,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+
+ sec.level = BT_SECURITY_LOW;
+
+- err = bt_copy_from_sockptr(&sec, sizeof(sec), optval, optlen);
++ err = copy_safe_from_sockptr(&sec, sizeof(sec), optval, optlen);
+ if (err)
+ break;
+
+@@ -956,7 +957,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -970,7 +971,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+
+ case BT_FLUSHABLE:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -1004,7 +1005,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+
+ pwr.force_active = BT_POWER_FORCE_ACTIVE_ON;
+
+- err = bt_copy_from_sockptr(&pwr, sizeof(pwr), optval, optlen);
++ err = copy_safe_from_sockptr(&pwr, sizeof(pwr), optval, optlen);
+ if (err)
+ break;
+
+@@ -1015,7 +1016,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+
+ case BT_CHANNEL_POLICY:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -1046,7 +1047,7 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&mtu, sizeof(mtu), optval, optlen);
++ err = copy_safe_from_sockptr(&mtu, sizeof(mtu), optval, optlen);
+ if (err)
+ break;
+
+@@ -1076,7 +1077,8 @@ static int l2cap_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&mode, sizeof(mode), optval, optlen);
++ err = copy_safe_from_sockptr(&mode, sizeof(mode), optval,
++ optlen);
+ if (err)
+ break;
+
+diff --git a/net/bluetooth/rfcomm/sock.c b/net/bluetooth/rfcomm/sock.c
+index 40766f8119ed9c..913402806fa0d4 100644
+--- a/net/bluetooth/rfcomm/sock.c
++++ b/net/bluetooth/rfcomm/sock.c
+@@ -629,10 +629,9 @@ static int rfcomm_sock_setsockopt_old(struct socket *sock, int optname,
+
+ switch (optname) {
+ case RFCOMM_LM:
+- if (bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen)) {
+- err = -EFAULT;
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ if (err)
+ break;
+- }
+
+ if (opt & RFCOMM_LM_FIPS) {
+ err = -EINVAL;
+@@ -685,7 +684,7 @@ static int rfcomm_sock_setsockopt(struct socket *sock, int level, int optname,
+
+ sec.level = BT_SECURITY_LOW;
+
+- err = bt_copy_from_sockptr(&sec, sizeof(sec), optval, optlen);
++ err = copy_safe_from_sockptr(&sec, sizeof(sec), optval, optlen);
+ if (err)
+ break;
+
+@@ -703,7 +702,7 @@ static int rfcomm_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 1c7252a3686694..b872a2ca3ff38b 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -267,10 +267,13 @@ static int sco_connect(struct sock *sk)
+ else
+ type = SCO_LINK;
+
+- if (sco_pi(sk)->setting == BT_VOICE_TRANSPARENT &&
+- (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev))) {
+- err = -EOPNOTSUPP;
+- goto unlock;
++ switch (sco_pi(sk)->setting & SCO_AIRMODE_MASK) {
++ case SCO_AIRMODE_TRANSP:
++ if (!lmp_transp_capable(hdev) || !lmp_esco_capable(hdev)) {
++ err = -EOPNOTSUPP;
++ goto unlock;
++ }
++ break;
+ }
+
+ hcon = hci_connect_sco(hdev, type, &sco_pi(sk)->dst,
+@@ -853,7 +856,7 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -872,18 +875,11 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+
+ voice.setting = sco_pi(sk)->setting;
+
+- err = bt_copy_from_sockptr(&voice, sizeof(voice), optval,
+- optlen);
++ err = copy_safe_from_sockptr(&voice, sizeof(voice), optval,
++ optlen);
+ if (err)
+ break;
+
+- /* Explicitly check for these values */
+- if (voice.setting != BT_VOICE_TRANSPARENT &&
+- voice.setting != BT_VOICE_CVSD_16BIT) {
+- err = -EINVAL;
+- break;
+- }
+-
+ sco_pi(sk)->setting = voice.setting;
+ hdev = hci_get_route(&sco_pi(sk)->dst, &sco_pi(sk)->src,
+ BDADDR_BREDR);
+@@ -891,14 +887,19 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+ err = -EBADFD;
+ break;
+ }
+- if (enhanced_sync_conn_capable(hdev) &&
+- voice.setting == BT_VOICE_TRANSPARENT)
+- sco_pi(sk)->codec.id = BT_CODEC_TRANSPARENT;
++
++ switch (sco_pi(sk)->setting & SCO_AIRMODE_MASK) {
++ case SCO_AIRMODE_TRANSP:
++ if (enhanced_sync_conn_capable(hdev))
++ sco_pi(sk)->codec.id = BT_CODEC_TRANSPARENT;
++ break;
++ }
++
+ hci_dev_put(hdev);
+ break;
+
+ case BT_PKT_STATUS:
+- err = bt_copy_from_sockptr(&opt, sizeof(opt), optval, optlen);
++ err = copy_safe_from_sockptr(&opt, sizeof(opt), optval, optlen);
+ if (err)
+ break;
+
+@@ -941,7 +942,8 @@ static int sco_sock_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- err = bt_copy_from_sockptr(buffer, optlen, optval, optlen);
++ err = copy_struct_from_sockptr(buffer, sizeof(buffer), optval,
++ optlen);
+ if (err) {
+ hci_dev_put(hdev);
+ break;
+diff --git a/net/core/net_namespace.c b/net/core/net_namespace.c
+index e39479f1c9a486..70fea7c1a4b0a4 100644
+--- a/net/core/net_namespace.c
++++ b/net/core/net_namespace.c
+@@ -443,6 +443,21 @@ static struct net *net_alloc(void)
+ goto out;
+ }
+
++static LLIST_HEAD(defer_free_list);
++
++static void net_complete_free(void)
++{
++ struct llist_node *kill_list;
++ struct net *net, *next;
++
++ /* Get the list of namespaces to free from last round. */
++ kill_list = llist_del_all(&defer_free_list);
++
++ llist_for_each_entry_safe(net, next, kill_list, defer_free_list)
++ kmem_cache_free(net_cachep, net);
++
++}
++
+ static void net_free(struct net *net)
+ {
+ if (refcount_dec_and_test(&net->passive)) {
+@@ -451,7 +466,8 @@ static void net_free(struct net *net)
+ /* There should not be any trackers left there. */
+ ref_tracker_dir_exit(&net->notrefcnt_tracker);
+
+- kmem_cache_free(net_cachep, net);
++ /* Wait for an extra rcu_barrier() before final free. */
++ llist_add(&net->defer_free_list, &defer_free_list);
+ }
+ }
+
+@@ -636,6 +652,8 @@ static void cleanup_net(struct work_struct *work)
+ */
+ rcu_barrier();
+
++ net_complete_free();
++
+ /* Finally it is safe to free my network namespace structure */
+ list_for_each_entry_safe(net, tmp, &net_exit_list, exit_list) {
+ list_del_init(&net->exit_list);
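
net_free() above no longer frees the struct immediately: it queues it on a
lock-free llist, and the next cleanup_net() round frees the whole batch only
after the extra rcu_barrier(), so any RCU readers still holding the namespace
have drained. A minimal userspace analogue of the defer-then-drain shape
(single-threaded sketch, so a plain pointer stands in for the llist):

#include <stdio.h>
#include <stdlib.h>

struct net {
	struct net *defer_next;	/* stand-in for llist_node */
	int id;
};

static struct net *defer_free_list;

static void net_free(struct net *net)
{
	/* Defer: just queue, do not free yet. */
	net->defer_next = defer_free_list;
	defer_free_list = net;
}

static void cleanup_net(void)
{
	/* rcu_barrier() would sit here: all readers have drained. */
	struct net *net = defer_free_list, *next;

	defer_free_list = NULL;
	for (; net; net = next) {
		next = net->defer_next;
		printf("freeing net %d\n", net->id);
		free(net);
	}
}

int main(void)
{
	struct net *a = malloc(sizeof(*a)), *b = malloc(sizeof(*b));

	if (!a || !b)
		return 1;
	a->id = 1;
	b->id = 2;
	net_free(a);
	net_free(b);
	cleanup_net();
	return 0;
}
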
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index 78347d7d25ef31..f1b9b3958792cd 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -159,6 +159,7 @@ static void sock_map_del_link(struct sock *sk,
+ verdict_stop = true;
+ list_del(&link->list);
+ sk_psock_free_link(link);
++ break;
+ }
+ }
+ spin_unlock_bh(&psock->link_lock);
+@@ -411,12 +412,11 @@ static void *sock_map_lookup_sys(struct bpf_map *map, void *key)
+ static int __sock_map_delete(struct bpf_stab *stab, struct sock *sk_test,
+ struct sock **psk)
+ {
+- struct sock *sk;
++ struct sock *sk = NULL;
+ int err = 0;
+
+ spin_lock_bh(&stab->lock);
+- sk = *psk;
+- if (!sk_test || sk_test == sk)
++ if (!sk_test || sk_test == *psk)
+ sk = xchg(psk, NULL);
+
+ if (likely(sk))
+diff --git a/net/dsa/tag_ocelot_8021q.c b/net/dsa/tag_ocelot_8021q.c
+index 8e8b1bef6af69d..11ea8cfd62661c 100644
+--- a/net/dsa/tag_ocelot_8021q.c
++++ b/net/dsa/tag_ocelot_8021q.c
+@@ -79,7 +79,7 @@ static struct sk_buff *ocelot_xmit(struct sk_buff *skb,
+ static struct sk_buff *ocelot_rcv(struct sk_buff *skb,
+ struct net_device *netdev)
+ {
+- int src_port, switch_id;
++ int src_port = -1, switch_id = -1;
+
+ dsa_8021q_rcv(skb, &src_port, &switch_id, NULL, NULL);
+
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 68804fd01dafc4..8efc58716ce969 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -883,8 +883,10 @@ static unsigned int tcp_syn_options(struct sock *sk, struct sk_buff *skb,
+ unsigned int size;
+
+ if (mptcp_syn_options(sk, skb, &size, &opts->mptcp)) {
+- opts->options |= OPTION_MPTCP;
+- remaining -= size;
++ if (remaining >= size) {
++ opts->options |= OPTION_MPTCP;
++ remaining -= size;
++ }
+ }
+ }
+
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 6dfc61a9acd4a5..1b1bf044378d48 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -1061,13 +1061,13 @@ ieee80211_copy_mbssid_beacon(u8 *pos, struct cfg80211_mbssid_elems *dst,
+ {
+ int i, offset = 0;
+
++ dst->cnt = src->cnt;
+ for (i = 0; i < src->cnt; i++) {
+ memcpy(pos + offset, src->elem[i].data, src->elem[i].len);
+ dst->elem[i].len = src->elem[i].len;
+ dst->elem[i].data = pos + offset;
+ offset += dst->elem[i].len;
+ }
+- dst->cnt = src->cnt;
+
+ return offset;
+ }
+@@ -1911,6 +1911,8 @@ static int sta_link_apply_parameters(struct ieee80211_local *local,
+ params->eht_capa_len,
+ link_sta);
+
++ ieee80211_sta_init_nss(link_sta);
++
+ if (params->opmode_notif_used) {
+ /* returned value is only needed for rc update, but the
+ * rc isn't initialized here yet, so ignore it
+@@ -1920,8 +1922,6 @@ static int sta_link_apply_parameters(struct ieee80211_local *local,
+ sband->band);
+ }
+
+- ieee80211_sta_init_nss(link_sta);
+-
+ return 0;
+ }
+
+@@ -3674,13 +3674,12 @@ void ieee80211_csa_finish(struct ieee80211_vif *vif, unsigned int link_id)
+ }
+ EXPORT_SYMBOL(ieee80211_csa_finish);
+
+-void ieee80211_channel_switch_disconnect(struct ieee80211_vif *vif, bool block_tx)
++void ieee80211_channel_switch_disconnect(struct ieee80211_vif *vif)
+ {
+ struct ieee80211_sub_if_data *sdata = vif_to_sdata(vif);
+ struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+ struct ieee80211_local *local = sdata->local;
+
+- sdata->csa_blocked_queues = block_tx;
+ sdata_info(sdata, "channel switch failed, disconnecting\n");
+ wiphy_work_queue(local->hw.wiphy, &ifmgd->csa_connection_drop_work);
+ }
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 3d3c9139ff5e45..7a0242e937d364 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1106,8 +1106,6 @@ struct ieee80211_sub_if_data {
+
+ unsigned long state;
+
+- bool csa_blocked_queues;
+-
+ char name[IFNAMSIZ];
+
+ struct ieee80211_fragment_cache frags;
+@@ -2411,17 +2409,13 @@ void ieee80211_send_4addr_nullfunc(struct ieee80211_local *local,
+ struct ieee80211_sub_if_data *sdata);
+ void ieee80211_sta_tx_notify(struct ieee80211_sub_if_data *sdata,
+ struct ieee80211_hdr *hdr, bool ack, u16 tx_time);
+-
++unsigned int
++ieee80211_get_vif_queues(struct ieee80211_local *local,
++ struct ieee80211_sub_if_data *sdata);
+ void ieee80211_wake_queues_by_reason(struct ieee80211_hw *hw,
+ unsigned long queues,
+ enum queue_stop_reason reason,
+ bool refcounted);
+-void ieee80211_stop_vif_queues(struct ieee80211_local *local,
+- struct ieee80211_sub_if_data *sdata,
+- enum queue_stop_reason reason);
+-void ieee80211_wake_vif_queues(struct ieee80211_local *local,
+- struct ieee80211_sub_if_data *sdata,
+- enum queue_stop_reason reason);
+ void ieee80211_stop_queues_by_reason(struct ieee80211_hw *hw,
+ unsigned long queues,
+ enum queue_stop_reason reason,
+@@ -2432,6 +2426,43 @@ void ieee80211_wake_queue_by_reason(struct ieee80211_hw *hw, int queue,
+ void ieee80211_stop_queue_by_reason(struct ieee80211_hw *hw, int queue,
+ enum queue_stop_reason reason,
+ bool refcounted);
++static inline void
++ieee80211_stop_vif_queues(struct ieee80211_local *local,
++ struct ieee80211_sub_if_data *sdata,
++ enum queue_stop_reason reason)
++{
++ ieee80211_stop_queues_by_reason(&local->hw,
++ ieee80211_get_vif_queues(local, sdata),
++ reason, true);
++}
++
++static inline void
++ieee80211_wake_vif_queues(struct ieee80211_local *local,
++ struct ieee80211_sub_if_data *sdata,
++ enum queue_stop_reason reason)
++{
++ ieee80211_wake_queues_by_reason(&local->hw,
++ ieee80211_get_vif_queues(local, sdata),
++ reason, true);
++}
++static inline void
++ieee80211_stop_vif_queues_norefcount(struct ieee80211_local *local,
++ struct ieee80211_sub_if_data *sdata,
++ enum queue_stop_reason reason)
++{
++ ieee80211_stop_queues_by_reason(&local->hw,
++ ieee80211_get_vif_queues(local, sdata),
++ reason, false);
++}
++static inline void
++ieee80211_wake_vif_queues_norefcount(struct ieee80211_local *local,
++ struct ieee80211_sub_if_data *sdata,
++ enum queue_stop_reason reason)
++{
++ ieee80211_wake_queues_by_reason(&local->hw,
++ ieee80211_get_vif_queues(local, sdata),
++ reason, false);
++}
+ void ieee80211_add_pending_skb(struct ieee80211_local *local,
+ struct sk_buff *skb);
+ void ieee80211_add_pending_skbs(struct ieee80211_local *local,
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 6ef0990d3d296a..af9055252e6dfa 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -2364,18 +2364,14 @@ void ieee80211_vif_block_queues_csa(struct ieee80211_sub_if_data *sdata)
+ if (ieee80211_hw_check(&local->hw, HANDLES_QUIET_CSA))
+ return;
+
+- ieee80211_stop_vif_queues(local, sdata,
+- IEEE80211_QUEUE_STOP_REASON_CSA);
+- sdata->csa_blocked_queues = true;
++ ieee80211_stop_vif_queues_norefcount(local, sdata,
++ IEEE80211_QUEUE_STOP_REASON_CSA);
+ }
+
+ void ieee80211_vif_unblock_queues_csa(struct ieee80211_sub_if_data *sdata)
+ {
+ struct ieee80211_local *local = sdata->local;
+
+- if (sdata->csa_blocked_queues) {
+- ieee80211_wake_vif_queues(local, sdata,
+- IEEE80211_QUEUE_STOP_REASON_CSA);
+- sdata->csa_blocked_queues = false;
+- }
++ ieee80211_wake_vif_queues_norefcount(local, sdata,
++ IEEE80211_QUEUE_STOP_REASON_CSA);
+ }
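
The iface.c change above pairs with the new static inline *_norefcount helpers: CSA blocking now uses a non-refcounted stop reason, so the csa_blocked_queues flag that used to guarantee exactly one wake per stop can be dropped. A stand-alone sketch of the two bookkeeping modes (illustrative types and names only, not mac80211 code):

#include <stdio.h>

/* Illustrative model of one queue-stop reason; not mac80211 code. */
struct stop_reason {
	int refcount;	/* refcounted mode: every stop needs a wake */
	int active;	/* non-refcounted mode: a simple flag       */
};

static void stop_queue(struct stop_reason *r, int refcounted)
{
	if (refcounted)
		r->refcount++;
	else
		r->active = 1;	/* stopping twice is harmless */
}

static void wake_queue(struct stop_reason *r, int refcounted)
{
	if (refcounted)
		r->refcount--;	/* an unmatched wake corrupts the count */
	else
		r->active = 0;	/* waking when not stopped is harmless  */
}

static int queue_stopped(const struct stop_reason *r)
{
	return r->refcount > 0 || r->active;
}

int main(void)
{
	struct stop_reason r = { 0, 0 };

	stop_queue(&r, 0);
	stop_queue(&r, 0);	/* duplicate stop: still just "active" */
	wake_queue(&r, 0);	/* one wake fully unblocks the queue   */
	printf("stopped=%d\n", queue_stopped(&r));	/* prints 0 */
	return 0;
}

The flag-based form is idempotent, which is why the unconditional wake in ieee80211_vif_unblock_queues_csa() is safe without any extra state.
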
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 0303972c23e4cb..111066928b963c 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -2636,8 +2636,6 @@ ieee80211_sta_process_chanswitch(struct ieee80211_link_data *link,
+ */
+ link->conf->csa_active = true;
+ link->u.mgd.csa.blocked_tx = csa_ie.mode;
+- sdata->csa_blocked_queues =
+- csa_ie.mode && !ieee80211_hw_check(&local->hw, HANDLES_QUIET_CSA);
+
+ wiphy_work_queue(sdata->local->hw.wiphy,
+ &ifmgd->csa_connection_drop_work);
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index f94faa86ba8a35..b4814e97cf7422 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -657,7 +657,7 @@ void ieee80211_wake_queues(struct ieee80211_hw *hw)
+ }
+ EXPORT_SYMBOL(ieee80211_wake_queues);
+
+-static unsigned int
++unsigned int
+ ieee80211_get_vif_queues(struct ieee80211_local *local,
+ struct ieee80211_sub_if_data *sdata)
+ {
+@@ -669,7 +669,8 @@ ieee80211_get_vif_queues(struct ieee80211_local *local,
+ queues = 0;
+
+ for (ac = 0; ac < IEEE80211_NUM_ACS; ac++)
+- queues |= BIT(sdata->vif.hw_queue[ac]);
++ if (sdata->vif.hw_queue[ac] != IEEE80211_INVAL_HW_QUEUE)
++ queues |= BIT(sdata->vif.hw_queue[ac]);
+ if (sdata->vif.cab_queue != IEEE80211_INVAL_HW_QUEUE)
+ queues |= BIT(sdata->vif.cab_queue);
+ } else {
+@@ -724,24 +725,6 @@ void ieee80211_flush_queues(struct ieee80211_local *local,
+ __ieee80211_flush_queues(local, sdata, 0, drop);
+ }
+
+-void ieee80211_stop_vif_queues(struct ieee80211_local *local,
+- struct ieee80211_sub_if_data *sdata,
+- enum queue_stop_reason reason)
+-{
+- ieee80211_stop_queues_by_reason(&local->hw,
+- ieee80211_get_vif_queues(local, sdata),
+- reason, true);
+-}
+-
+-void ieee80211_wake_vif_queues(struct ieee80211_local *local,
+- struct ieee80211_sub_if_data *sdata,
+- enum queue_stop_reason reason)
+-{
+- ieee80211_wake_queues_by_reason(&local->hw,
+- ieee80211_get_vif_queues(local, sdata),
+- reason, true);
+-}
+-
+ static void __iterate_interfaces(struct ieee80211_local *local,
+ u32 iter_flags,
+ void (*iterator)(void *data, u8 *mac,
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 4a137afaf0b87e..0c5ff4afc37022 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1495,7 +1495,6 @@ static int nf_tables_newtable(struct sk_buff *skb, const struct nfnl_info *info,
+ INIT_LIST_HEAD(&table->sets);
+ INIT_LIST_HEAD(&table->objects);
+ INIT_LIST_HEAD(&table->flowtables);
+- write_pnet(&table->net, net);
+ table->family = family;
+ table->flags = flags;
+ table->handle = ++nft_net->table_handle;
+@@ -3884,8 +3883,11 @@ void nf_tables_rule_destroy(const struct nft_ctx *ctx, struct nft_rule *rule)
+ kfree(rule);
+ }
+
++/* can only be used if rule is no longer visible to dumps */
+ static void nf_tables_rule_release(const struct nft_ctx *ctx, struct nft_rule *rule)
+ {
++ lockdep_commit_lock_is_held(ctx->net);
++
+ nft_rule_expr_deactivate(ctx, rule, NFT_TRANS_RELEASE);
+ nf_tables_rule_destroy(ctx, rule);
+ }
+@@ -5650,6 +5652,8 @@ void nf_tables_deactivate_set(const struct nft_ctx *ctx, struct nft_set *set,
+ struct nft_set_binding *binding,
+ enum nft_trans_phase phase)
+ {
++ lockdep_commit_lock_is_held(ctx->net);
++
+ switch (phase) {
+ case NFT_TRANS_PREPARE_ERROR:
+ nft_set_trans_unbind(ctx, set);
+@@ -11456,19 +11460,6 @@ static void __nft_release_basechain_now(struct nft_ctx *ctx)
+ nf_tables_chain_destroy(ctx->chain);
+ }
+
+-static void nft_release_basechain_rcu(struct rcu_head *head)
+-{
+- struct nft_chain *chain = container_of(head, struct nft_chain, rcu_head);
+- struct nft_ctx ctx = {
+- .family = chain->table->family,
+- .chain = chain,
+- .net = read_pnet(&chain->table->net),
+- };
+-
+- __nft_release_basechain_now(&ctx);
+- put_net(ctx.net);
+-}
+-
+ int __nft_release_basechain(struct nft_ctx *ctx)
+ {
+ struct nft_rule *rule;
+@@ -11483,11 +11474,18 @@ int __nft_release_basechain(struct nft_ctx *ctx)
+ nft_chain_del(ctx->chain);
+ nft_use_dec(&ctx->table->use);
+
+- if (maybe_get_net(ctx->net))
+- call_rcu(&ctx->chain->rcu_head, nft_release_basechain_rcu);
+- else
++ if (!maybe_get_net(ctx->net)) {
+ __nft_release_basechain_now(ctx);
++ return 0;
++ }
++
++ /* wait for ruleset dumps to complete. Owning chain is no longer in
++ * lists, so new dumps can't find any of these rules anymore.
++ */
++ synchronize_rcu();
+
++ __nft_release_basechain_now(ctx);
++ put_net(ctx->net);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(__nft_release_basechain);
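
The rewrite above swaps the call_rcu() callback for a synchronous wait, but the ordering requirement stays the same: unlink first, wait for every reader that might still see the chain, destroy last. A minimal stand-alone model of that sequence (synchronize_readers() stands in for the kernel's synchronize_rcu(); none of this is nf_tables code):

#include <stdio.h>
#include <stdlib.h>

struct rule {
	struct rule *next;
	int id;
};

static struct rule *ruleset;	/* what concurrent dumpers traverse */

static void synchronize_readers(void)
{
	/* In the kernel, synchronize_rcu() returns only once every
	 * reader that could still see the unlinked entries is done.
	 */
}

static void release_chain(void)
{
	struct rule *r = ruleset;

	ruleset = NULL;			/* 1. unlink: new dumps find nothing */
	synchronize_readers();		/* 2. wait out in-flight dumps       */
	while (r) {			/* 3. destruction can no longer race */
		struct rule *next = r->next;

		printf("freeing rule %d\n", r->id);
		free(r);
		r = next;
	}
}

int main(void)
{
	for (int id = 0; id < 3; id++) {
		struct rule *r = malloc(sizeof(*r));

		if (!r)
			return 1;
		r->id = id;
		r->next = ruleset;
		ruleset = r;
	}
	release_chain();
	return 0;
}
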
+diff --git a/net/netfilter/xt_IDLETIMER.c b/net/netfilter/xt_IDLETIMER.c
+index f8b25b6f5da736..9869ef3c2ab378 100644
+--- a/net/netfilter/xt_IDLETIMER.c
++++ b/net/netfilter/xt_IDLETIMER.c
+@@ -409,21 +409,23 @@ static void idletimer_tg_destroy(const struct xt_tgdtor_param *par)
+
+ mutex_lock(&list_mutex);
+
+- if (--info->timer->refcnt == 0) {
+- pr_debug("deleting timer %s\n", info->label);
+-
+- list_del(&info->timer->entry);
+- timer_shutdown_sync(&info->timer->timer);
+- cancel_work_sync(&info->timer->work);
+- sysfs_remove_file(idletimer_tg_kobj, &info->timer->attr.attr);
+- kfree(info->timer->attr.attr.name);
+- kfree(info->timer);
+- } else {
++ if (--info->timer->refcnt > 0) {
+ pr_debug("decreased refcnt of timer %s to %u\n",
+ info->label, info->timer->refcnt);
++ mutex_unlock(&list_mutex);
++ return;
+ }
+
++ pr_debug("deleting timer %s\n", info->label);
++
++ list_del(&info->timer->entry);
+ mutex_unlock(&list_mutex);
++
++ timer_shutdown_sync(&info->timer->timer);
++ cancel_work_sync(&info->timer->work);
++ sysfs_remove_file(idletimer_tg_kobj, &info->timer->attr.attr);
++ kfree(info->timer->attr.attr.name);
++ kfree(info->timer);
+ }
+
+ static void idletimer_tg_destroy_v1(const struct xt_tgdtor_param *par)
+@@ -434,25 +436,27 @@ static void idletimer_tg_destroy_v1(const struct xt_tgdtor_param *par)
+
+ mutex_lock(&list_mutex);
+
+- if (--info->timer->refcnt == 0) {
+- pr_debug("deleting timer %s\n", info->label);
+-
+- list_del(&info->timer->entry);
+- if (info->timer->timer_type & XT_IDLETIMER_ALARM) {
+- alarm_cancel(&info->timer->alarm);
+- } else {
+- timer_shutdown_sync(&info->timer->timer);
+- }
+- cancel_work_sync(&info->timer->work);
+- sysfs_remove_file(idletimer_tg_kobj, &info->timer->attr.attr);
+- kfree(info->timer->attr.attr.name);
+- kfree(info->timer);
+- } else {
++ if (--info->timer->refcnt > 0) {
+ pr_debug("decreased refcnt of timer %s to %u\n",
+ info->label, info->timer->refcnt);
++ mutex_unlock(&list_mutex);
++ return;
+ }
+
++ pr_debug("deleting timer %s\n", info->label);
++
++ list_del(&info->timer->entry);
+ mutex_unlock(&list_mutex);
++
++ if (info->timer->timer_type & XT_IDLETIMER_ALARM) {
++ alarm_cancel(&info->timer->alarm);
++ } else {
++ timer_shutdown_sync(&info->timer->timer);
++ }
++ cancel_work_sync(&info->timer->work);
++ sysfs_remove_file(idletimer_tg_kobj, &info->timer->attr.attr);
++ kfree(info->timer->attr.attr.name);
++ kfree(info->timer);
+ }
+
+
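
Both idletimer destroy paths above follow the same shape: the refcount drop and list_del() stay under list_mutex, but the calls that can sleep for a long time (timer_shutdown_sync(), cancel_work_sync(), the sysfs and kfree teardown) now run only after the mutex has been released. A reduced pthread sketch of that shape (names invented; the mutex models list_mutex):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t list_mutex = PTHREAD_MUTEX_INITIALIZER;

static void unlink_timer(void)
{
	puts("list_del()");	/* must stay under list_mutex */
}

static void blocking_teardown(void)
{
	/* Stands in for timer_shutdown_sync() + cancel_work_sync():
	 * may sleep for a long time waiting on a running handler.
	 */
	puts("synchronous shutdown");
}

static void destroy(int *refcnt)
{
	pthread_mutex_lock(&list_mutex);
	if (--(*refcnt) > 0) {		/* still shared: nothing to free */
		pthread_mutex_unlock(&list_mutex);
		return;
	}
	unlink_timer();			/* list update stays locked */
	pthread_mutex_unlock(&list_mutex);

	blocking_teardown();		/* runs without list_mutex held */
}

int main(void)
{
	int refcnt = 1;

	destroy(&refcnt);
	return 0;
}
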
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index 39382ee1e33108..3b519adc01259f 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -78,6 +78,8 @@ struct netem_sched_data {
+ struct sk_buff *t_head;
+ struct sk_buff *t_tail;
+
++ u32 t_len;
++
+ /* optional qdisc for classful handling (NULL at netem init) */
+ struct Qdisc *qdisc;
+
+@@ -382,6 +384,7 @@ static void tfifo_reset(struct Qdisc *sch)
+ rtnl_kfree_skbs(q->t_head, q->t_tail);
+ q->t_head = NULL;
+ q->t_tail = NULL;
++ q->t_len = 0;
+ }
+
+ static void tfifo_enqueue(struct sk_buff *nskb, struct Qdisc *sch)
+@@ -411,6 +414,7 @@ static void tfifo_enqueue(struct sk_buff *nskb, struct Qdisc *sch)
+ rb_link_node(&nskb->rbnode, parent, p);
+ rb_insert_color(&nskb->rbnode, &q->t_root);
+ }
++ q->t_len++;
+ sch->q.qlen++;
+ }
+
+@@ -517,7 +521,7 @@ static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ 1<<get_random_u32_below(8);
+ }
+
+- if (unlikely(sch->q.qlen >= sch->limit)) {
++ if (unlikely(q->t_len >= sch->limit)) {
+ /* re-link segs, so that qdisc_drop_all() frees them all */
+ skb->next = segs;
+ qdisc_drop_all(skb, sch, to_free);
+@@ -701,8 +705,8 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+ tfifo_dequeue:
+ skb = __qdisc_dequeue_head(&sch->q);
+ if (skb) {
+- qdisc_qstats_backlog_dec(sch, skb);
+ deliver:
++ qdisc_qstats_backlog_dec(sch, skb);
+ qdisc_bstats_update(sch, skb);
+ return skb;
+ }
+@@ -718,8 +722,7 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+
+ if (time_to_send <= now && q->slot.slot_next <= now) {
+ netem_erase_head(q, skb);
+- sch->q.qlen--;
+- qdisc_qstats_backlog_dec(sch, skb);
++ q->t_len--;
+ skb->next = NULL;
+ skb->prev = NULL;
+ /* skb->dev shares skb->rbnode area,
+@@ -746,16 +749,21 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+ if (net_xmit_drop_count(err))
+ qdisc_qstats_drop(sch);
+ qdisc_tree_reduce_backlog(sch, 1, pkt_len);
++ sch->qstats.backlog -= pkt_len;
++ sch->q.qlen--;
+ }
+ goto tfifo_dequeue;
+ }
++ sch->q.qlen--;
+ goto deliver;
+ }
+
+ if (q->qdisc) {
+ skb = q->qdisc->ops->dequeue(q->qdisc);
+- if (skb)
++ if (skb) {
++ sch->q.qlen--;
+ goto deliver;
++ }
+ }
+
+ qdisc_watchdog_schedule_ns(&q->watchdog,
+@@ -765,8 +773,10 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+
+ if (q->qdisc) {
+ skb = q->qdisc->ops->dequeue(q->qdisc);
+- if (skb)
++ if (skb) {
++ sch->q.qlen--;
+ goto deliver;
++ }
+ }
+ return NULL;
+ }
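
The netem changes above split the accounting in two: q->t_len counts only packets sitting in the tfifo (and is what the limit check now uses), while sch->q.qlen keeps counting everything the qdisc still owns, including packets parked in the child qdisc. A toy model of the restored invariant (plain counters, not the qdisc API):

#include <assert.h>
#include <stdio.h>

/* Toy model: total qlen must equal tfifo length + child qdisc length. */
struct netem_model {
	unsigned qlen;	/* everything this qdisc still owns (sch->q.qlen) */
	unsigned t_len;	/* packets waiting in the tfifo only  (q->t_len)  */
	unsigned child;	/* packets parked in the child qdisc              */
};

static void enqueue(struct netem_model *m)           { m->t_len++; m->qlen++; }
static void move_to_child(struct netem_model *m)     { m->t_len--; m->child++; }
static void deliver_from_child(struct netem_model *m) { m->child--; m->qlen--; }

int main(void)
{
	struct netem_model m = { 0, 0, 0 };

	enqueue(&m);
	enqueue(&m);
	move_to_child(&m);	/* time-to-send reached, reinjected */
	assert(m.qlen == m.t_len + m.child);	/* the restored invariant */
	deliver_from_child(&m);
	assert(m.qlen == m.t_len + m.child);
	printf("qlen=%u t_len=%u child=%u\n", m.qlen, m.t_len, m.child);
	return 0;
}
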
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index b7e25e7e9933b6..108a4cc2e00107 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -807,6 +807,7 @@ static void cleanup_bearer(struct work_struct *work)
+ {
+ struct udp_bearer *ub = container_of(work, struct udp_bearer, work);
+ struct udp_replicast *rcast, *tmp;
++ struct tipc_net *tn;
+
+ list_for_each_entry_safe(rcast, tmp, &ub->rcast.list, list) {
+ dst_cache_destroy(&rcast->dst_cache);
+@@ -814,10 +815,14 @@ static void cleanup_bearer(struct work_struct *work)
+ kfree_rcu(rcast, rcu);
+ }
+
++ tn = tipc_net(sock_net(ub->ubsock->sk));
++
+ dst_cache_destroy(&ub->rcast.dst_cache);
+ udp_tunnel_sock_release(ub->ubsock);
++
++ /* Note: could use a call_rcu() to avoid another synchronize_net() */
+ synchronize_net();
+- atomic_dec(&tipc_net(sock_net(ub->ubsock->sk))->wq_count);
++ atomic_dec(&tn->wq_count);
+ kfree(ub);
+ }
+
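
The tipc change above is a use-after-free fix: the old code dereferenced ub->ubsock after udp_tunnel_sock_release() had released it. The pattern, reduced to a stand-alone sketch (types invented for illustration):

#include <stdio.h>
#include <stdlib.h>

struct sock_wrap {
	int *counter;	/* stands in for the per-netns wq_count */
};

int main(void)
{
	int wq_count = 1;
	struct sock_wrap *ubsock = malloc(sizeof(*ubsock));

	if (!ubsock)
		return 1;
	ubsock->counter = &wq_count;

	/* Capture everything derived from ubsock *before* releasing it;
	 * dereferencing it afterwards, as the old code did, is a
	 * use-after-free.
	 */
	int *tn_counter = ubsock->counter;

	free(ubsock);		/* udp_tunnel_sock_release() analogue    */
	(*tn_counter)--;	/* safe: the freed object is not touched */
	printf("wq_count=%d\n", wq_count);
	return 0;
}
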
+diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
+index 001ccc55ef0f93..6b176230044397 100644
+--- a/net/unix/af_unix.c
++++ b/net/unix/af_unix.c
+@@ -2313,6 +2313,7 @@ static int unix_stream_sendmsg(struct socket *sock, struct msghdr *msg,
+ fds_sent = true;
+
+ if (unlikely(msg->msg_flags & MSG_SPLICE_PAGES)) {
++ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ err = skb_splice_from_iter(skb, &msg->msg_iter, size,
+ sk->sk_allocation);
+ if (err < 0) {
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 9b1b9dc5a7eb2a..1e78f575fb5630 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -814,7 +814,7 @@ static const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+ [NL80211_ATTR_MLO_LINKS] =
+ NLA_POLICY_NESTED_ARRAY(nl80211_policy),
+ [NL80211_ATTR_MLO_LINK_ID] =
+- NLA_POLICY_RANGE(NLA_U8, 0, IEEE80211_MLD_MAX_NUM_LINKS),
++ NLA_POLICY_RANGE(NLA_U8, 0, IEEE80211_MLD_MAX_NUM_LINKS - 1),
+ [NL80211_ATTR_MLD_ADDR] = NLA_POLICY_EXACT_LEN(ETH_ALEN),
+ [NL80211_ATTR_MLO_SUPPORT] = { .type = NLA_FLAG },
+ [NL80211_ATTR_MAX_NUM_AKM_SUITES] = { .type = NLA_REJECT },
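
The nl80211 policy fix above is a count-versus-index bound: IEEE80211_MLD_MAX_NUM_LINKS is the number of links, so valid link IDs run from 0 to count - 1, and the inclusive NLA_POLICY_RANGE maximum must be count - 1. A stand-alone check (the constant's value is assumed here purely for illustration):

#include <stdio.h>

#define IEEE80211_MLD_MAX_NUM_LINKS 15	/* assumed value, for illustration */

static int link_id_valid(unsigned int id)
{
	/* IDs index links, so the inclusive upper bound is count - 1. */
	return id <= IEEE80211_MLD_MAX_NUM_LINKS - 1;
}

int main(void)
{
	printf("%d %d\n", link_id_valid(14), link_id_valid(15));	/* 1 0 */
	return 0;
}
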
+diff --git a/net/wireless/sme.c b/net/wireless/sme.c
+index 431da30817a6f6..26817160008766 100644
+--- a/net/wireless/sme.c
++++ b/net/wireless/sme.c
+@@ -83,6 +83,7 @@ static int cfg80211_conn_scan(struct wireless_dev *wdev)
+ if (!request)
+ return -ENOMEM;
+
++ request->n_channels = n_channels;
+ if (wdev->conn->params.channel) {
+ enum nl80211_band band = wdev->conn->params.channel->band;
+ struct ieee80211_supported_band *sband =
+diff --git a/rust/Makefile b/rust/Makefile
+index b5e0a73b78f3e5..9f59baacaf7730 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -267,9 +267,22 @@ endif
+
+ bindgen_c_flags_final = $(bindgen_c_flags_lto) -D__BINDGEN__
+
++# Each `bindgen` release may upgrade the list of Rust target versions. By
++# default, the highest stable release in their list is used. Thus we need to set
++# a `--rust-target` to avoid future `bindgen` releases emitting code that
++# `rustc` may not understand. On top of that, `bindgen` does not support passing
++# an unknown Rust target version.
++#
++# Therefore, the Rust target for `bindgen` can be only as high as the minimum
++# Rust version the kernel supports and only as high as the greatest stable Rust
++# target supported by the minimum `bindgen` version the kernel supports (that
++# is, if we do not test the actual `rustc`/`bindgen` versions running).
++#
++# Starting with `bindgen` 0.71.0, we will be able to set any future Rust version
++# instead, i.e. we will be able to set here our minimum supported Rust version.
+ quiet_cmd_bindgen = BINDGEN $@
+ cmd_bindgen = \
+- $(BINDGEN) $< $(bindgen_target_flags) \
++ $(BINDGEN) $< $(bindgen_target_flags) --rust-target 1.68 \
+ --use-core --with-derive-default --ctypes-prefix core::ffi --no-layout-tests \
+ --no-debug '.*' --enable-function-attribute-detection \
+ -o $@ -- $(bindgen_c_flags_final) -DMODULE \
+diff --git a/sound/core/control_led.c b/sound/core/control_led.c
+index 65a1ebe877768f..e33dfcf863cf13 100644
+--- a/sound/core/control_led.c
++++ b/sound/core/control_led.c
+@@ -668,10 +668,16 @@ static void snd_ctl_led_sysfs_add(struct snd_card *card)
+ goto cerr;
+ led->cards[card->number] = led_card;
+ snprintf(link_name, sizeof(link_name), "led-%s", led->name);
+- WARN(sysfs_create_link(&card->ctl_dev->kobj, &led_card->dev.kobj, link_name),
+- "can't create symlink to controlC%i device\n", card->number);
+- WARN(sysfs_create_link(&led_card->dev.kobj, &card->card_dev.kobj, "card"),
+- "can't create symlink to card%i\n", card->number);
++ if (sysfs_create_link(&card->ctl_dev->kobj, &led_card->dev.kobj,
++ link_name))
++ dev_err(card->dev,
++ "%s: can't create symlink to controlC%i device\n",
++ __func__, card->number);
++ if (sysfs_create_link(&led_card->dev.kobj, &card->card_dev.kobj,
++ "card"))
++ dev_err(card->dev,
++ "%s: can't create symlink to card%i\n",
++ __func__, card->number);
+
+ continue;
+ cerr:
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 973671e0cdb09d..192fc75b51e6db 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10127,6 +10127,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1025, 0x1466, "Acer Aspire A515-56", ALC255_FIXUP_ACER_HEADPHONE_AND_MIC),
+ SND_PCI_QUIRK(0x1025, 0x1534, "Acer Predator PH315-54", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1025, 0x159c, "Acer Nitro 5 AN515-58", ALC2XX_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1025, 0x169a, "Acer Swift SFG16", ALC256_FIXUP_ACER_SFG16_MICMUTE_LED),
+ SND_PCI_QUIRK(0x1028, 0x0470, "Dell M101z", ALC269_FIXUP_DELL_M101Z),
+ SND_PCI_QUIRK(0x1028, 0x053c, "Dell Latitude E5430", ALC292_FIXUP_DELL_E7X),
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index e38c5885dadfbc..ecf57a6cb7c37d 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -578,14 +578,19 @@ static int acp6x_probe(struct platform_device *pdev)
+
+ handle = ACPI_HANDLE(pdev->dev.parent);
+ ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status);
+- if (!ACPI_FAILURE(ret))
++ if (!ACPI_FAILURE(ret)) {
+ wov_en = dmic_status;
++ if (!wov_en)
++ return -ENODEV;
++ } else {
++		/* In case the ACPI method read fails, jump to check_dmi_entry */
++ goto check_dmi_entry;
++ }
+
+- if (is_dmic_enable && wov_en)
++ if (is_dmic_enable)
+ platform_set_drvdata(pdev, &acp6x_card);
+- else
+- return 0;
+
++check_dmi_entry:
+ /* check for any DMI overrides */
+ dmi_id = dmi_first_match(yc_acp_quirk_table);
+ if (dmi_id)
+diff --git a/sound/soc/codecs/tas2781-i2c.c b/sound/soc/codecs/tas2781-i2c.c
+index 12d093437ba9b6..1b2f55030c3961 100644
+--- a/sound/soc/codecs/tas2781-i2c.c
++++ b/sound/soc/codecs/tas2781-i2c.c
+@@ -370,7 +370,7 @@ static void sngl_calib_start(struct tasdevice_priv *tas_priv, int i,
+ tasdevice_dev_read(tas_priv, i, p[j].reg,
+ (int *)&p[j].val[0]);
+ } else {
+- switch (p[j].reg) {
++ switch (tas2781_cali_start_reg[j].reg) {
+ case 0: {
+ if (!reg[0])
+ continue;
+diff --git a/sound/soc/fsl/fsl_spdif.c b/sound/soc/fsl/fsl_spdif.c
+index b6ff04f7138a2c..ee946e0d3f4969 100644
+--- a/sound/soc/fsl/fsl_spdif.c
++++ b/sound/soc/fsl/fsl_spdif.c
+@@ -1204,7 +1204,7 @@ static struct snd_kcontrol_new fsl_spdif_ctrls[] = {
+ },
+ /* DPLL lock info get controller */
+ {
+- .iface = SNDRV_CTL_ELEM_IFACE_PCM,
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = RX_SAMPLE_RATE_KCONTROL,
+ .access = SNDRV_CTL_ELEM_ACCESS_READ |
+ SNDRV_CTL_ELEM_ACCESS_VOLATILE,
+diff --git a/sound/soc/fsl/fsl_xcvr.c b/sound/soc/fsl/fsl_xcvr.c
+index beede7344efd63..4341269eb97780 100644
+--- a/sound/soc/fsl/fsl_xcvr.c
++++ b/sound/soc/fsl/fsl_xcvr.c
+@@ -169,7 +169,7 @@ static int fsl_xcvr_capds_put(struct snd_kcontrol *kcontrol,
+ }
+
+ static struct snd_kcontrol_new fsl_xcvr_earc_capds_kctl = {
+- .iface = SNDRV_CTL_ELEM_IFACE_PCM,
++ .iface = SNDRV_CTL_ELEM_IFACE_MIXER,
+ .name = "Capabilities Data Structure",
+ .access = SNDRV_CTL_ELEM_ACCESS_READWRITE,
+ .info = fsl_xcvr_type_capds_bytes_info,
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index a58842a8c8a641..db57292c00ca1e 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -1003,8 +1003,12 @@ static int sof_card_dai_links_create(struct snd_soc_card *card)
+ return ret;
+ }
+
+- /* One per DAI link, worst case is a DAI link for every endpoint */
+- sof_dais = kcalloc(num_ends, sizeof(*sof_dais), GFP_KERNEL);
++ /*
++	 * One per DAI link; worst case is a DAI link for every endpoint. Also
++	 * add one extra entry to act as a terminator so that code can iterate
++	 * until it hits an uninitialised DAI.
++ */
++ sof_dais = kcalloc(num_ends + 1, sizeof(*sof_dais), GFP_KERNEL);
+ if (!sof_dais)
+ return -ENOMEM;
+
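
The sof_sdw fix above allocates one extra zeroed element so iteration can stop at the first uninitialised entry instead of carrying a separate count. The same sentinel idea, stand-alone (struct and names invented):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct dai {
	char name[16];
};

int main(void)
{
	int num_ends = 3;

	/* One extra zeroed element acts as the terminator, mirroring
	 * the kcalloc(num_ends + 1, ...) change above.
	 */
	struct dai *dais = calloc(num_ends + 1, sizeof(*dais));

	if (!dais)
		return 1;

	strcpy(dais[0].name, "dai0");
	strcpy(dais[1].name, "dai1");

	/* Iterate until the first uninitialised (all-zero) entry. */
	for (struct dai *d = dais; d->name[0]; d++)
		printf("%s\n", d->name);

	free(dais);
	return 0;
}
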
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 00101875d9a8d5..a0767de7f1b7ed 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2179,6 +2179,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_MIC_RES_384),
+ DEVICE_FLG(0x046d, 0x09a4, /* Logitech QuickCam E 3500 */
+ QUIRK_FLAG_CTL_MSG_DELAY_1M | QUIRK_FLAG_IGNORE_CTL_ERROR),
++ DEVICE_FLG(0x0499, 0x1506, /* Yamaha THR5 */
++ QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ DEVICE_FLG(0x0499, 0x1509, /* Steinberg UR22 */
+ QUIRK_FLAG_GENERIC_IMPLICIT_FB),
+ DEVICE_FLG(0x0499, 0x3108, /* Yamaha YIT-W12TX */
+diff --git a/tools/lib/perf/evlist.c b/tools/lib/perf/evlist.c
+index c6d67fc9e57ef0..83c43dc13313cc 100644
+--- a/tools/lib/perf/evlist.c
++++ b/tools/lib/perf/evlist.c
+@@ -47,6 +47,20 @@ static void __perf_evlist__propagate_maps(struct perf_evlist *evlist,
+ */
+ perf_cpu_map__put(evsel->cpus);
+ evsel->cpus = perf_cpu_map__intersect(evlist->user_requested_cpus, evsel->own_cpus);
++
++ /*
++ * Empty cpu lists would eventually get opened as "any" so remove
++ * genuinely empty ones before they're opened in the wrong place.
++ */
++ if (perf_cpu_map__is_empty(evsel->cpus)) {
++ struct perf_evsel *next = perf_evlist__next(evlist, evsel);
++
++ perf_evlist__remove(evlist, evsel);
++ /* Keep idx contiguous */
++ if (next)
++ list_for_each_entry_from(next, &evlist->entries, node)
++ next->idx--;
++ }
+ } else if (!evsel->own_cpus || evlist->has_user_cpus ||
+ (!evsel->requires_cpu && perf_cpu_map__has_any_cpu(evlist->user_requested_cpus))) {
+ /*
+@@ -80,11 +94,11 @@ static void __perf_evlist__propagate_maps(struct perf_evlist *evlist,
+
+ static void perf_evlist__propagate_maps(struct perf_evlist *evlist)
+ {
+- struct perf_evsel *evsel;
++ struct perf_evsel *evsel, *n;
+
+ evlist->needs_map_propagation = true;
+
+- perf_evlist__for_each_evsel(evlist, evsel)
++ list_for_each_entry_safe(evsel, n, &evlist->entries, node)
+ __perf_evlist__propagate_maps(evlist, evsel);
+ }
+
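
When __perf_evlist__propagate_maps() drops an evsel from the middle of the list, the idx values of the evsels behind it must shift down by one to stay contiguous, and the walk itself has to use the _safe list variant because entries can disappear under it. The renumbering step over a plain array (illustrative, not the libperf types):

#include <stdio.h>

/* After deleting position `pos`, renumber the followers so the idx
 * values stay contiguous (0, 1, 2, ...), as the patch does for the
 * evsels that follow the removed one.
 */
static void remove_and_renumber(int *idx, int *n, int pos)
{
	for (int i = pos; i < *n - 1; i++) {
		idx[i] = idx[i + 1];
		idx[i]--;	/* each follower moves down by one */
	}
	(*n)--;
}

int main(void)
{
	int idx[] = { 0, 1, 2, 3 };
	int n = 4;

	remove_and_renumber(idx, &n, 1);
	for (int i = 0; i < n; i++)
		printf("%d ", idx[i]);	/* prints: 0 1 2 */
	printf("\n");
	return 0;
}
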
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 6604f5d038aadf..f0d8796b984a80 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -3820,9 +3820,12 @@ static int validate_branch(struct objtool_file *file, struct symbol *func,
+ break;
+
+ case INSN_CONTEXT_SWITCH:
+- if (func && (!next_insn || !next_insn->hint)) {
+- WARN_INSN(insn, "unsupported instruction in callable function");
+- return 1;
++ if (func) {
++ if (!next_insn || !next_insn->hint) {
++ WARN_INSN(insn, "unsupported instruction in callable function");
++ return 1;
++ }
++ break;
+ }
+ return 0;
+
+diff --git a/tools/perf/builtin-ftrace.c b/tools/perf/builtin-ftrace.c
+index 272d3c70810e7d..a56cf8b0a7d405 100644
+--- a/tools/perf/builtin-ftrace.c
++++ b/tools/perf/builtin-ftrace.c
+@@ -1151,8 +1151,9 @@ static int cmp_profile_data(const void *a, const void *b)
+
+ if (v1 > v2)
+ return -1;
+- else
++ if (v1 < v2)
+ return 1;
++ return 0;
+ }
+
+ static void print_profile_result(struct perf_ftrace *ftrace)
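
The old comparator above returned 1 whenever v1 was not greater, so equal keys compared as "greater than each other", violating the total-order contract that qsort() relies on. The fixed three-way shape, stand-alone:

#include <stdio.h>
#include <stdlib.h>

/* A qsort() comparator must return <0, 0, or >0; returning 1 for
 * equal keys, as the old code effectively did, breaks the contract.
 */
static int cmp_desc(const void *a, const void *b)
{
	unsigned long long v1 = *(const unsigned long long *)a;
	unsigned long long v2 = *(const unsigned long long *)b;

	if (v1 > v2)
		return -1;
	if (v1 < v2)
		return 1;
	return 0;
}

int main(void)
{
	unsigned long long v[] = { 3, 7, 7, 1 };

	qsort(v, 4, sizeof(v[0]), cmp_desc);
	for (int i = 0; i < 4; i++)
		printf("%llu ", v[i]);	/* 7 7 3 1 */
	printf("\n");
	return 0;
}
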
+diff --git a/tools/perf/util/build-id.c b/tools/perf/util/build-id.c
+index 8982f68e7230cd..e763e8d99a4367 100644
+--- a/tools/perf/util/build-id.c
++++ b/tools/perf/util/build-id.c
+@@ -277,7 +277,7 @@ static int write_buildid(const char *name, size_t name_len, struct build_id *bid
+ struct perf_record_header_build_id b;
+ size_t len;
+
+- len = sizeof(b) + name_len + 1;
++ len = name_len + 1;
+ len = PERF_ALIGN(len, sizeof(u64));
+
+ memset(&b, 0, sizeof(b));
+@@ -286,7 +286,7 @@ static int write_buildid(const char *name, size_t name_len, struct build_id *bid
+ misc |= PERF_RECORD_MISC_BUILD_ID_SIZE;
+ b.pid = pid;
+ b.header.misc = misc;
+- b.header.size = len;
++ b.header.size = sizeof(b) + len;
+
+ err = do_write(fd, &b, sizeof(b));
+ if (err < 0)
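
The build-id fix above keeps the u64 alignment over the name alone: previously sizeof(b) was folded in before PERF_ALIGN(), so header.size was rounded up over header plus name and over-counted. Worked numbers under the fixed code (PERF_ALIGN approximated below; the header size is a stand-in value):

#include <stdio.h>

#define PERF_ALIGN(x, a) (((x) + (a) - 1) & ~((unsigned long)(a) - 1))

int main(void)
{
	unsigned long name_len = 13;	/* illustrative file-name length */
	unsigned long hdr = 80;		/* stand-in for sizeof(b)        */

	/* Align only the NUL-terminated name to 8 bytes: 14 -> 16. */
	unsigned long len = PERF_ALIGN(name_len + 1, sizeof(unsigned long long));

	printf("record size = %lu\n", hdr + len);	/* 80 + 16 = 96 */
	return 0;
}
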
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 4f0ac998b0ccfd..27d5345d2b307a 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -134,6 +134,8 @@ struct machine *machine__new_host(void)
+
+ if (machine__create_kernel_maps(machine) < 0)
+ goto out_delete;
++
++ machine->env = &perf_env;
+ }
+
+ return machine;
+diff --git a/tools/testing/selftests/arm64/abi/syscall-abi-asm.S b/tools/testing/selftests/arm64/abi/syscall-abi-asm.S
+index df3230fdac3958..66ab2e0bae5fd0 100644
+--- a/tools/testing/selftests/arm64/abi/syscall-abi-asm.S
++++ b/tools/testing/selftests/arm64/abi/syscall-abi-asm.S
+@@ -81,32 +81,31 @@ do_syscall:
+ stp x27, x28, [sp, #96]
+
+ // Set SVCR if we're doing SME
+- cbz x1, 1f
++ cbz x1, load_gpr
+ adrp x2, svcr_in
+ ldr x2, [x2, :lo12:svcr_in]
+ msr S3_3_C4_C2_2, x2
+-1:
+
+ // Load ZA and ZT0 if enabled - uses x12 as scratch due to SME LDR
+- tbz x2, #SVCR_ZA_SHIFT, 1f
++ tbz x2, #SVCR_ZA_SHIFT, load_gpr
+ mov w12, #0
+ ldr x2, =za_in
+-2: _ldr_za 12, 2
++1: _ldr_za 12, 2
+ add x2, x2, x1
+ add x12, x12, #1
+ cmp x1, x12
+- bne 2b
++ bne 1b
+
+ // ZT0
+ mrs x2, S3_0_C0_C4_5 // ID_AA64SMFR0_EL1
+ ubfx x2, x2, #ID_AA64SMFR0_EL1_SMEver_SHIFT, \
+ #ID_AA64SMFR0_EL1_SMEver_WIDTH
+- cbz x2, 1f
++ cbz x2, load_gpr
+ adrp x2, zt_in
+ add x2, x2, :lo12:zt_in
+ _ldr_zt 2
+-1:
+
++load_gpr:
+ // Load GPRs x8-x28, and save our SP/FP for later comparison
+ ldr x2, =gpr_in
+ add x2, x2, #64
+@@ -125,9 +124,9 @@ do_syscall:
+ str x30, [x2], #8 // LR
+
+ 	// Load FPRs if we're doing neither SVE nor streaming SVE
+- cbnz x0, 1f
++ cbnz x0, check_sve_in
+ ldr x2, =svcr_in
+- tbnz x2, #SVCR_SM_SHIFT, 1f
++ tbnz x2, #SVCR_SM_SHIFT, check_sve_in
+
+ ldr x2, =fpr_in
+ ldp q0, q1, [x2]
+@@ -148,8 +147,8 @@ do_syscall:
+ ldp q30, q31, [x2, #16 * 30]
+
+ b 2f
+-1:
+
++check_sve_in:
+ // Load the SVE registers if we're doing SVE/SME
+
+ ldr x2, =z_in
+@@ -256,32 +255,31 @@ do_syscall:
+ stp q30, q31, [x2, #16 * 30]
+
+ // Save SVCR if we're doing SME
+- cbz x1, 1f
++ cbz x1, check_sve_out
+ mrs x2, S3_3_C4_C2_2
+ adrp x3, svcr_out
+ str x2, [x3, :lo12:svcr_out]
+-1:
+
+ // Save ZA if it's enabled - uses x12 as scratch due to SME STR
+- tbz x2, #SVCR_ZA_SHIFT, 1f
++ tbz x2, #SVCR_ZA_SHIFT, check_sve_out
+ mov w12, #0
+ ldr x2, =za_out
+-2: _str_za 12, 2
++1: _str_za 12, 2
+ add x2, x2, x1
+ add x12, x12, #1
+ cmp x1, x12
+- bne 2b
++ bne 1b
+
+ // ZT0
+ mrs x2, S3_0_C0_C4_5 // ID_AA64SMFR0_EL1
+ ubfx x2, x2, #ID_AA64SMFR0_EL1_SMEver_SHIFT, \
+ #ID_AA64SMFR0_EL1_SMEver_WIDTH
+- cbz x2, 1f
++ cbz x2, check_sve_out
+ adrp x2, zt_out
+ add x2, x2, :lo12:zt_out
+ _str_zt 2
+-1:
+
++check_sve_out:
+ // Save the SVE state if we have some
+ cbz x0, 1f
+
+diff --git a/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c b/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
+index 5aaf2b065f86c2..bba3e37f749b86 100644
+--- a/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
++++ b/tools/testing/selftests/bpf/progs/test_tp_btf_nullable.c
+@@ -7,11 +7,7 @@
+ #include "bpf_misc.h"
+
+ SEC("tp_btf/bpf_testmod_test_nullable_bare")
+-/* This used to be a failure test, but raw_tp nullable arguments can now
+- * directly be dereferenced, whether they have nullable annotation or not,
+- * and don't need to be explicitly checked.
+- */
+-__success
++__failure __msg("R1 invalid mem access 'trusted_ptr_or_null_'")
+ int BPF_PROG(handle_tp_btf_nullable_bare1, struct bpf_testmod_test_read_ctx *nullable_ctx)
+ {
+ return nullable_ctx->len;
+diff --git a/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c b/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c
+index a570e48b917acc..bfc3bf18fed4fe 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c
++++ b/tools/testing/selftests/bpf/progs/verifier_btf_ctx_access.c
+@@ -11,7 +11,7 @@ __success __retval(0)
+ __naked void btf_ctx_access_accept(void)
+ {
+ asm volatile (" \
+- r2 = *(u32*)(r1 + 8); /* load 2nd argument value (int pointer) */\
++ r2 = *(u64 *)(r1 + 8); /* load 2nd argument value (int pointer) */\
+ r0 = 0; \
+ exit; \
+ " ::: __clobber_all);
+@@ -23,7 +23,7 @@ __success __retval(0)
+ __naked void ctx_access_u32_pointer_accept(void)
+ {
+ asm volatile (" \
+- r2 = *(u32*)(r1 + 0); /* load 1nd argument value (u32 pointer) */\
++	r2 = *(u64 *)(r1 + 0); /* load 1st argument value (u32 pointer) */\
+ r0 = 0; \
+ exit; \
+ " ::: __clobber_all);
+diff --git a/tools/testing/selftests/bpf/progs/verifier_d_path.c b/tools/testing/selftests/bpf/progs/verifier_d_path.c
+index ec79cbcfde91ef..87e51a215558fd 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_d_path.c
++++ b/tools/testing/selftests/bpf/progs/verifier_d_path.c
+@@ -11,7 +11,7 @@ __success __retval(0)
+ __naked void d_path_accept(void)
+ {
+ asm volatile (" \
+- r1 = *(u32*)(r1 + 0); \
++ r1 = *(u64 *)(r1 + 0); \
+ r2 = r10; \
+ r2 += -8; \
+ r6 = 0; \
+@@ -31,7 +31,7 @@ __failure __msg("helper call is not allowed in probe")
+ __naked void d_path_reject(void)
+ {
+ asm volatile (" \
+- r1 = *(u32*)(r1 + 0); \
++ r1 = *(u64 *)(r1 + 0); \
+ r2 = r10; \
+ r2 += -8; \
+ r6 = 0; \
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer.sh b/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer.sh
+index 0c47faff9274b1..c068e6c2a580ea 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/sharedbuffer.sh
+@@ -22,20 +22,34 @@ SB_ITC=0
+ h1_create()
+ {
+ simple_if_init $h1 192.0.1.1/24
++ tc qdisc add dev $h1 clsact
++
++	# Add an egress filter on $h1 to guarantee that the packet sent
++	# will be the only packet passed to the device.
++ tc filter add dev $h1 egress pref 2 handle 102 matchall action drop
+ }
+
+ h1_destroy()
+ {
++ tc filter del dev $h1 egress pref 2 handle 102 matchall action drop
++ tc qdisc del dev $h1 clsact
+ simple_if_fini $h1 192.0.1.1/24
+ }
+
+ h2_create()
+ {
+ simple_if_init $h2 192.0.1.2/24
++ tc qdisc add dev $h2 clsact
++
++	# Add an egress filter on $h2 to guarantee that the packet sent
++	# will be the only packet passed to the device.
++ tc filter add dev $h2 egress pref 1 handle 101 matchall action drop
+ }
+
+ h2_destroy()
+ {
++ tc filter del dev $h2 egress pref 1 handle 101 matchall action drop
++ tc qdisc del dev $h2 clsact
+ simple_if_fini $h2 192.0.1.2/24
+ }
+
+@@ -101,6 +115,11 @@ port_pool_test()
+ local exp_max_occ=$(devlink_cell_size_get)
+ local max_occ
+
++ tc filter add dev $h1 egress protocol ip pref 1 handle 101 flower \
++ src_mac $h1mac dst_mac $h2mac \
++ src_ip 192.0.1.1 dst_ip 192.0.1.2 \
++ action pass
++
+ devlink sb occupancy clearmax $DEVLINK_DEV
+
+ $MZ $h1 -c 1 -p 10 -a $h1mac -b $h2mac -A 192.0.1.1 -B 192.0.1.2 \
+@@ -108,11 +127,6 @@ port_pool_test()
+
+ devlink sb occupancy snapshot $DEVLINK_DEV
+
+- RET=0
+- max_occ=$(sb_occ_pool_check $dl_port1 $SB_POOL_ING $exp_max_occ)
+- check_err $? "Expected iPool($SB_POOL_ING) max occupancy to be $exp_max_occ, but got $max_occ"
+- log_test "physical port's($h1) ingress pool"
+-
+ RET=0
+ max_occ=$(sb_occ_pool_check $dl_port2 $SB_POOL_ING $exp_max_occ)
+ check_err $? "Expected iPool($SB_POOL_ING) max occupancy to be $exp_max_occ, but got $max_occ"
+@@ -122,6 +136,11 @@ port_pool_test()
+ max_occ=$(sb_occ_pool_check $cpu_dl_port $SB_POOL_EGR_CPU $exp_max_occ)
+ check_err $? "Expected ePool($SB_POOL_EGR_CPU) max occupancy to be $exp_max_occ, but got $max_occ"
+ log_test "CPU port's egress pool"
++
++ tc filter del dev $h1 egress protocol ip pref 1 handle 101 flower \
++ src_mac $h1mac dst_mac $h2mac \
++ src_ip 192.0.1.1 dst_ip 192.0.1.2 \
++ action pass
+ }
+
+ port_tc_ip_test()
+@@ -129,6 +148,11 @@ port_tc_ip_test()
+ local exp_max_occ=$(devlink_cell_size_get)
+ local max_occ
+
++ tc filter add dev $h1 egress protocol ip pref 1 handle 101 flower \
++ src_mac $h1mac dst_mac $h2mac \
++ src_ip 192.0.1.1 dst_ip 192.0.1.2 \
++ action pass
++
+ devlink sb occupancy clearmax $DEVLINK_DEV
+
+ $MZ $h1 -c 1 -p 10 -a $h1mac -b $h2mac -A 192.0.1.1 -B 192.0.1.2 \
+@@ -136,11 +160,6 @@ port_tc_ip_test()
+
+ devlink sb occupancy snapshot $DEVLINK_DEV
+
+- RET=0
+- max_occ=$(sb_occ_itc_check $dl_port2 $SB_ITC $exp_max_occ)
+- check_err $? "Expected ingress TC($SB_ITC) max occupancy to be $exp_max_occ, but got $max_occ"
+- log_test "physical port's($h1) ingress TC - IP packet"
+-
+ RET=0
+ max_occ=$(sb_occ_itc_check $dl_port2 $SB_ITC $exp_max_occ)
+ check_err $? "Expected ingress TC($SB_ITC) max occupancy to be $exp_max_occ, but got $max_occ"
+@@ -150,6 +169,11 @@ port_tc_ip_test()
+ max_occ=$(sb_occ_etc_check $cpu_dl_port $SB_ITC_CPU_IP $exp_max_occ)
+ check_err $? "Expected egress TC($SB_ITC_CPU_IP) max occupancy to be $exp_max_occ, but got $max_occ"
+ log_test "CPU port's egress TC - IP packet"
++
++ tc filter del dev $h1 egress protocol ip pref 1 handle 101 flower \
++ src_mac $h1mac dst_mac $h2mac \
++ src_ip 192.0.1.1 dst_ip 192.0.1.2 \
++ action pass
+ }
+
+ port_tc_arp_test()
+@@ -157,17 +181,15 @@ port_tc_arp_test()
+ local exp_max_occ=$(devlink_cell_size_get)
+ local max_occ
+
++ tc filter add dev $h1 egress protocol arp pref 1 handle 101 flower \
++ src_mac $h1mac action pass
++
+ devlink sb occupancy clearmax $DEVLINK_DEV
+
+ $MZ $h1 -c 1 -p 10 -a $h1mac -A 192.0.1.1 -t arp -q
+
+ devlink sb occupancy snapshot $DEVLINK_DEV
+
+- RET=0
+- max_occ=$(sb_occ_itc_check $dl_port2 $SB_ITC $exp_max_occ)
+- check_err $? "Expected ingress TC($SB_ITC) max occupancy to be $exp_max_occ, but got $max_occ"
+- log_test "physical port's($h1) ingress TC - ARP packet"
+-
+ RET=0
+ max_occ=$(sb_occ_itc_check $dl_port2 $SB_ITC $exp_max_occ)
+ check_err $? "Expected ingress TC($SB_ITC) max occupancy to be $exp_max_occ, but got $max_occ"
+@@ -177,6 +199,9 @@ port_tc_arp_test()
+ max_occ=$(sb_occ_etc_check $cpu_dl_port $SB_ITC_CPU_ARP $exp_max_occ)
+ check_err $? "Expected egress TC($SB_ITC_IP2ME) max occupancy to be $exp_max_occ, but got $max_occ"
+ log_test "CPU port's egress TC - ARP packet"
++
++ tc filter del dev $h1 egress protocol arp pref 1 handle 101 flower \
++ src_mac $h1mac action pass
+ }
+
+ setup_prepare()
+diff --git a/tools/testing/selftests/net/netfilter/rpath.sh b/tools/testing/selftests/net/netfilter/rpath.sh
+index 4485fd7675ed7e..86ec4e68594dc3 100755
+--- a/tools/testing/selftests/net/netfilter/rpath.sh
++++ b/tools/testing/selftests/net/netfilter/rpath.sh
+@@ -61,9 +61,20 @@ ip -net "$ns2" a a 192.168.42.1/24 dev d0
+ ip -net "$ns1" a a fec0:42::2/64 dev v0 nodad
+ ip -net "$ns2" a a fec0:42::1/64 dev d0 nodad
+
++# avoid neighbor lookups and enable martian IPv6 pings
++ns2_hwaddr=$(ip -net "$ns2" link show dev v0 | \
++ sed -n 's, *link/ether \([^ ]*\) .*,\1,p')
++ns1_hwaddr=$(ip -net "$ns1" link show dev v0 | \
++ sed -n 's, *link/ether \([^ ]*\) .*,\1,p')
++ip -net "$ns1" neigh add fec0:42::1 lladdr "$ns2_hwaddr" nud permanent dev v0
++ip -net "$ns1" neigh add fec0:23::1 lladdr "$ns2_hwaddr" nud permanent dev v0
++ip -net "$ns2" neigh add fec0:42::2 lladdr "$ns1_hwaddr" nud permanent dev d0
++ip -net "$ns2" neigh add fec0:23::2 lladdr "$ns1_hwaddr" nud permanent dev v0
++
+ # firewall matches to test
+ [ -n "$iptables" ] && {
+ common='-t raw -A PREROUTING -s 192.168.0.0/16'
++ common+=' -p icmp --icmp-type echo-request'
+ if ! ip netns exec "$ns2" "$iptables" $common -m rpfilter;then
+ echo "Cannot add rpfilter rule"
+ exit $ksft_skip
+@@ -72,6 +83,7 @@ ip -net "$ns2" a a fec0:42::1/64 dev d0 nodad
+ }
+ [ -n "$ip6tables" ] && {
+ common='-t raw -A PREROUTING -s fec0::/16'
++ common+=' -p icmpv6 --icmpv6-type echo-request'
+ if ! ip netns exec "$ns2" "$ip6tables" $common -m rpfilter;then
+ echo "Cannot add rpfilter rule"
+ exit $ksft_skip
+@@ -82,8 +94,10 @@ ip -net "$ns2" a a fec0:42::1/64 dev d0 nodad
+ table inet t {
+ chain c {
+ type filter hook prerouting priority raw;
+- ip saddr 192.168.0.0/16 fib saddr . iif oif exists counter
+- ip6 saddr fec0::/16 fib saddr . iif oif exists counter
++ ip saddr 192.168.0.0/16 icmp type echo-request \
++ fib saddr . iif oif exists counter
++ ip6 saddr fec0::/16 icmpv6 type echo-request \
++ fib saddr . iif oif exists counter
+ }
+ }
+ EOF
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2024-12-27 14:08 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2024-12-27 14:08 UTC (permalink / raw
To: gentoo-commits
commit: 671fd61e3eafab207b759f2bca79a6eda9cf710a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Dec 27 14:07:47 2024 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Dec 27 14:07:47 2024 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=671fd61e
Linux patch 6.12.7
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1006_linux-6.12.7.patch | 6443 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6447 insertions(+)
diff --git a/0000_README b/0000_README
index 1bb8df77..6961ab2e 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch: 1005_linux-6.12.6.patch
From: https://www.kernel.org
Desc: Linux 6.12.6
+Patch: 1006_linux-6.12.7.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.7
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1006_linux-6.12.7.patch b/1006_linux-6.12.7.patch
new file mode 100644
index 00000000..17157109
--- /dev/null
+++ b/1006_linux-6.12.7.patch
@@ -0,0 +1,6443 @@
+diff --git a/Makefile b/Makefile
+index c10952585c14b0..685a57f6c8d279 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index fbed433283c9b9..42791971f75887 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -2503,7 +2503,8 @@ static const struct sys_reg_desc sys_reg_descs[] = {
+ ID_WRITABLE(ID_AA64MMFR0_EL1, ~(ID_AA64MMFR0_EL1_RES0 |
+ ID_AA64MMFR0_EL1_TGRAN4_2 |
+ ID_AA64MMFR0_EL1_TGRAN64_2 |
+- ID_AA64MMFR0_EL1_TGRAN16_2)),
++ ID_AA64MMFR0_EL1_TGRAN16_2 |
++ ID_AA64MMFR0_EL1_ASIDBITS)),
+ ID_WRITABLE(ID_AA64MMFR1_EL1, ~(ID_AA64MMFR1_EL1_RES0 |
+ ID_AA64MMFR1_EL1_HCX |
+ ID_AA64MMFR1_EL1_TWED |
+diff --git a/arch/hexagon/Makefile b/arch/hexagon/Makefile
+index 92d005958dfb23..ff172cbe5881a0 100644
+--- a/arch/hexagon/Makefile
++++ b/arch/hexagon/Makefile
+@@ -32,3 +32,9 @@ KBUILD_LDFLAGS += $(ldflags-y)
+ TIR_NAME := r19
+ KBUILD_CFLAGS += -ffixed-$(TIR_NAME) -DTHREADINFO_REG=$(TIR_NAME) -D__linux__
+ KBUILD_AFLAGS += -DTHREADINFO_REG=$(TIR_NAME)
++
++# Disable HexagonConstExtenders pass for LLVM versions prior to 19.1.0
++# https://github.com/llvm/llvm-project/issues/99714
++ifneq ($(call clang-min-version, 190100),y)
++KBUILD_CFLAGS += -mllvm -hexagon-cext=false
++endif
+diff --git a/arch/riscv/kvm/aia.c b/arch/riscv/kvm/aia.c
+index 2967d305c44278..9f3b527596ded8 100644
+--- a/arch/riscv/kvm/aia.c
++++ b/arch/riscv/kvm/aia.c
+@@ -552,7 +552,7 @@ void kvm_riscv_aia_enable(void)
+ csr_set(CSR_HIE, BIT(IRQ_S_GEXT));
+ /* Enable IRQ filtering for overflow interrupt only if sscofpmf is present */
+ if (__riscv_isa_extension_available(NULL, RISCV_ISA_EXT_SSCOFPMF))
+- csr_write(CSR_HVIEN, BIT(IRQ_PMU_OVF));
++ csr_set(CSR_HVIEN, BIT(IRQ_PMU_OVF));
+ }
+
+ void kvm_riscv_aia_disable(void)
+diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
+index c8f149ad77e584..c2ee0745f59edc 100644
+--- a/arch/s390/boot/startup.c
++++ b/arch/s390/boot/startup.c
+@@ -231,6 +231,8 @@ static unsigned long get_vmem_size(unsigned long identity_size,
+ vsize = round_up(SZ_2G + max_mappable, rte_size) +
+ round_up(vmemmap_size, rte_size) +
+ FIXMAP_SIZE + MODULES_LEN + KASLR_LEN;
++ if (IS_ENABLED(CONFIG_KMSAN))
++ vsize += MODULES_LEN * 2;
+ return size_add(vsize, vmalloc_size);
+ }
+
+diff --git a/arch/s390/boot/vmem.c b/arch/s390/boot/vmem.c
+index 145035f84a0e3e..3fa28db2fe59f4 100644
+--- a/arch/s390/boot/vmem.c
++++ b/arch/s390/boot/vmem.c
+@@ -306,7 +306,7 @@ static void pgtable_pte_populate(pmd_t *pmd, unsigned long addr, unsigned long e
+ pages++;
+ }
+ }
+- if (mode == POPULATE_DIRECT)
++ if (mode == POPULATE_IDENTITY)
+ update_page_count(PG_DIRECT_MAP_4K, pages);
+ }
+
+@@ -339,7 +339,7 @@ static void pgtable_pmd_populate(pud_t *pud, unsigned long addr, unsigned long e
+ }
+ pgtable_pte_populate(pmd, addr, next, mode);
+ }
+- if (mode == POPULATE_DIRECT)
++ if (mode == POPULATE_IDENTITY)
+ update_page_count(PG_DIRECT_MAP_1M, pages);
+ }
+
+@@ -372,7 +372,7 @@ static void pgtable_pud_populate(p4d_t *p4d, unsigned long addr, unsigned long e
+ }
+ pgtable_pmd_populate(pud, addr, next, mode);
+ }
+- if (mode == POPULATE_DIRECT)
++ if (mode == POPULATE_IDENTITY)
+ update_page_count(PG_DIRECT_MAP_2G, pages);
+ }
+
+diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c
+index f17bb7bf939242..5fa203f4bc6b80 100644
+--- a/arch/s390/kernel/ipl.c
++++ b/arch/s390/kernel/ipl.c
+@@ -270,7 +270,7 @@ static ssize_t sys_##_prefix##_##_name##_store(struct kobject *kobj, \
+ if (len >= sizeof(_value)) \
+ return -E2BIG; \
+ len = strscpy(_value, buf, sizeof(_value)); \
+- if (len < 0) \
++ if ((ssize_t)len < 0) \
+ return len; \
+ strim(_value); \
+ return len; \
+diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
+index d18078834dedac..dc12fe5ef3caa9 100644
+--- a/arch/x86/kernel/cpu/mshyperv.c
++++ b/arch/x86/kernel/cpu/mshyperv.c
+@@ -223,6 +223,63 @@ static void hv_machine_crash_shutdown(struct pt_regs *regs)
+ hyperv_cleanup();
+ }
+ #endif /* CONFIG_CRASH_DUMP */
++
++static u64 hv_ref_counter_at_suspend;
++static void (*old_save_sched_clock_state)(void);
++static void (*old_restore_sched_clock_state)(void);
++
++/*
++ * Hyper-V clock counter resets during hibernation. Save and restore clock
++ * offset during suspend/resume, while also considering the time passed
++ * before suspend. This makes sure that sched_clock, when using the hv tsc
++ * page based clocksource, proceeds from where it left off at suspend and
++ * shows correct time for the timestamps of kernel messages after resume.
++ */
++static void save_hv_clock_tsc_state(void)
++{
++ hv_ref_counter_at_suspend = hv_read_reference_counter();
++}
++
++static void restore_hv_clock_tsc_state(void)
++{
++ /*
++ * Adjust the offsets used by hv tsc clocksource to
++ * account for the time spent before hibernation.
++ * adjusted value = reference counter (time) at suspend
++ * - reference counter (time) now.
++ */
++ hv_adj_sched_clock_offset(hv_ref_counter_at_suspend - hv_read_reference_counter());
++}
++
++/*
++ * Functions to override save_sched_clock_state and restore_sched_clock_state
++ * functions of x86_platform. The Hyper-V clock counter is reset during
++ * suspend-resume and the offset used to measure time needs to be
++ * corrected, post resume.
++ */
++static void hv_save_sched_clock_state(void)
++{
++ old_save_sched_clock_state();
++ save_hv_clock_tsc_state();
++}
++
++static void hv_restore_sched_clock_state(void)
++{
++ restore_hv_clock_tsc_state();
++ old_restore_sched_clock_state();
++}
++
++static void __init x86_setup_ops_for_tsc_pg_clock(void)
++{
++ if (!(ms_hyperv.features & HV_MSR_REFERENCE_TSC_AVAILABLE))
++ return;
++
++ old_save_sched_clock_state = x86_platform.save_sched_clock_state;
++ x86_platform.save_sched_clock_state = hv_save_sched_clock_state;
++
++ old_restore_sched_clock_state = x86_platform.restore_sched_clock_state;
++ x86_platform.restore_sched_clock_state = hv_restore_sched_clock_state;
++}
+ #endif /* CONFIG_HYPERV */
+
+ static uint32_t __init ms_hyperv_platform(void)
+@@ -579,6 +636,7 @@ static void __init ms_hyperv_init_platform(void)
+
+ /* Register Hyper-V specific clocksource */
+ hv_init_clocksource();
++ x86_setup_ops_for_tsc_pg_clock();
+ hv_vtl_init_platform();
+ #endif
+ /*
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 41786b834b1635..83bfecd1a6e40c 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -36,6 +36,26 @@
+ u32 kvm_cpu_caps[NR_KVM_CPU_CAPS] __read_mostly;
+ EXPORT_SYMBOL_GPL(kvm_cpu_caps);
+
++struct cpuid_xstate_sizes {
++ u32 eax;
++ u32 ebx;
++ u32 ecx;
++};
++
++static struct cpuid_xstate_sizes xstate_sizes[XFEATURE_MAX] __ro_after_init;
++
++void __init kvm_init_xstate_sizes(void)
++{
++ u32 ign;
++ int i;
++
++ for (i = XFEATURE_YMM; i < ARRAY_SIZE(xstate_sizes); i++) {
++ struct cpuid_xstate_sizes *xs = &xstate_sizes[i];
++
++ cpuid_count(0xD, i, &xs->eax, &xs->ebx, &xs->ecx, &ign);
++ }
++}
++
+ u32 xstate_required_size(u64 xstate_bv, bool compacted)
+ {
+ int feature_bit = 0;
+@@ -44,14 +64,15 @@ u32 xstate_required_size(u64 xstate_bv, bool compacted)
+ xstate_bv &= XFEATURE_MASK_EXTEND;
+ while (xstate_bv) {
+ if (xstate_bv & 0x1) {
+- u32 eax, ebx, ecx, edx, offset;
+- cpuid_count(0xD, feature_bit, &eax, &ebx, &ecx, &edx);
++ struct cpuid_xstate_sizes *xs = &xstate_sizes[feature_bit];
++ u32 offset;
++
+ /* ECX[1]: 64B alignment in compacted form */
+ if (compacted)
+- offset = (ecx & 0x2) ? ALIGN(ret, 64) : ret;
++ offset = (xs->ecx & 0x2) ? ALIGN(ret, 64) : ret;
+ else
+- offset = ebx;
+- ret = max(ret, offset + eax);
++ offset = xs->ebx;
++ ret = max(ret, offset + xs->eax);
+ }
+
+ xstate_bv >>= 1;
+diff --git a/arch/x86/kvm/cpuid.h b/arch/x86/kvm/cpuid.h
+index 41697cca354e6b..ad479cfb91bc7b 100644
+--- a/arch/x86/kvm/cpuid.h
++++ b/arch/x86/kvm/cpuid.h
+@@ -32,6 +32,7 @@ int kvm_vcpu_ioctl_get_cpuid2(struct kvm_vcpu *vcpu,
+ bool kvm_cpuid(struct kvm_vcpu *vcpu, u32 *eax, u32 *ebx,
+ u32 *ecx, u32 *edx, bool exact_only);
+
++void __init kvm_init_xstate_sizes(void);
+ u32 xstate_required_size(u64 xstate_bv, bool compacted);
+
+ int cpuid_query_maxphyaddr(struct kvm_vcpu *vcpu);
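
Caching the CPUID(0xD, i) outputs once at init, as the patch above does, means xstate_required_size() no longer has to execute the CPUID instruction on every call; it just walks the cached eax (size), ebx (offset) and ecx (flags) triples. A reduced model of that walk (the sample values below are invented, not real CPUID data):

#include <stdio.h>

#define ALIGN64(x) (((x) + 63ull) & ~63ull)

struct xs {
	unsigned int eax;	/* feature state size   */
	unsigned int ebx;	/* non-compacted offset */
	unsigned int ecx;	/* bit 1: 64B alignment in compacted form */
};

/* Invented sample data standing in for the cached CPUID(0xD, i) results. */
static const struct xs xstate_sizes[10] = {
	[2] = { .eax = 256, .ebx = 576, .ecx = 0 },
	[9] = { .eax = 8,   .ebx = 0,   .ecx = 2 },
};

static unsigned long long required_size(unsigned long long xstate_bv, int compacted)
{
	unsigned long long ret = 576;	/* legacy area + xsave header */

	for (int bit = 0; bit < 10; bit++, xstate_bv >>= 1) {
		if (!(xstate_bv & 1) || bit < 2)
			continue;
		const struct xs *xs = &xstate_sizes[bit];
		unsigned long long off = compacted
			? ((xs->ecx & 2) ? ALIGN64(ret) : ret)
			: xs->ebx;
		if (off + xs->eax > ret)
			ret = off + xs->eax;
	}
	return ret;
}

int main(void)
{
	printf("%llu\n", required_size(1ull << 2, 0));	/* 576 + 256 = 832 */
	return 0;
}
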
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 9df3e1e5ae81a1..4543dd6bcab2cb 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -3199,15 +3199,6 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ if (data & ~supported_de_cfg)
+ return 1;
+
+- /*
+- * Don't let the guest change the host-programmed value. The
+- * MSR is very model specific, i.e. contains multiple bits that
+- * are completely unknown to KVM, and the one bit known to KVM
+- * is simply a reflection of hardware capabilities.
+- */
+- if (!msr->host_initiated && data != svm->msr_decfg)
+- return 1;
+-
+ svm->msr_decfg = data;
+ break;
+ }
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 83fe0a78146fc1..b49e2eb4893080 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -9991,7 +9991,7 @@ static int complete_hypercall_exit(struct kvm_vcpu *vcpu)
+ {
+ u64 ret = vcpu->run->hypercall.ret;
+
+- if (!is_64_bit_mode(vcpu))
++ if (!is_64_bit_hypercall(vcpu))
+ ret = (u32)ret;
+ kvm_rax_write(vcpu, ret);
+ ++vcpu->stat.hypercalls;
+@@ -14010,6 +14010,8 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_rmp_fault);
+
+ static int __init kvm_x86_init(void)
+ {
++ kvm_init_xstate_sizes();
++
+ kvm_mmu_x86_module_init();
+ mitigate_smt_rsb &= boot_cpu_has_bug(X86_BUG_SMT_RSB) && cpu_smt_possible();
+ return 0;
+diff --git a/block/blk-mq-sysfs.c b/block/blk-mq-sysfs.c
+index cd5ea6eaa76b09..156e9bb07abf1a 100644
+--- a/block/blk-mq-sysfs.c
++++ b/block/blk-mq-sysfs.c
+@@ -275,13 +275,15 @@ void blk_mq_sysfs_unregister_hctxs(struct request_queue *q)
+ struct blk_mq_hw_ctx *hctx;
+ unsigned long i;
+
+- lockdep_assert_held(&q->sysfs_dir_lock);
+-
++ mutex_lock(&q->sysfs_dir_lock);
+ if (!q->mq_sysfs_init_done)
+- return;
++ goto unlock;
+
+ queue_for_each_hw_ctx(q, hctx, i)
+ blk_mq_unregister_hctx(hctx);
++
++unlock:
++ mutex_unlock(&q->sysfs_dir_lock);
+ }
+
+ int blk_mq_sysfs_register_hctxs(struct request_queue *q)
+@@ -290,10 +292,9 @@ int blk_mq_sysfs_register_hctxs(struct request_queue *q)
+ unsigned long i;
+ int ret = 0;
+
+- lockdep_assert_held(&q->sysfs_dir_lock);
+-
++ mutex_lock(&q->sysfs_dir_lock);
+ if (!q->mq_sysfs_init_done)
+- return ret;
++ goto unlock;
+
+ queue_for_each_hw_ctx(q, hctx, i) {
+ ret = blk_mq_register_hctx(hctx);
+@@ -301,5 +302,8 @@ int blk_mq_sysfs_register_hctxs(struct request_queue *q)
+ break;
+ }
+
++unlock:
++ mutex_unlock(&q->sysfs_dir_lock);
++
+ return ret;
+ }
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index cc1b3202383840..d5995021815ddf 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -4421,6 +4421,15 @@ struct gendisk *blk_mq_alloc_disk_for_queue(struct request_queue *q,
+ }
+ EXPORT_SYMBOL(blk_mq_alloc_disk_for_queue);
+
++/*
++ * Only an hctx that has been removed from the cpuhp lists can be reused
++ */
++static bool blk_mq_hctx_is_reusable(struct blk_mq_hw_ctx *hctx)
++{
++ return hlist_unhashed(&hctx->cpuhp_online) &&
++ hlist_unhashed(&hctx->cpuhp_dead);
++}
++
+ static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx(
+ struct blk_mq_tag_set *set, struct request_queue *q,
+ int hctx_idx, int node)
+@@ -4430,7 +4439,7 @@ static struct blk_mq_hw_ctx *blk_mq_alloc_and_init_hctx(
+ /* reuse dead hctx first */
+ spin_lock(&q->unused_hctx_lock);
+ list_for_each_entry(tmp, &q->unused_hctx_list, hctx_list) {
+- if (tmp->numa_node == node) {
++ if (tmp->numa_node == node && blk_mq_hctx_is_reusable(tmp)) {
+ hctx = tmp;
+ break;
+ }
+@@ -4462,8 +4471,7 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
+ unsigned long i, j;
+
+ /* protect against switching io scheduler */
+- lockdep_assert_held(&q->sysfs_lock);
+-
++ mutex_lock(&q->sysfs_lock);
+ for (i = 0; i < set->nr_hw_queues; i++) {
+ int old_node;
+ int node = blk_mq_get_hctx_node(set, i);
+@@ -4496,6 +4504,7 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
+
+ xa_for_each_start(&q->hctx_table, j, hctx, j)
+ blk_mq_exit_hctx(q, set, hctx, j);
++ mutex_unlock(&q->sysfs_lock);
+
+ /* unregister cpuhp callbacks for exited hctxs */
+ blk_mq_remove_hw_queues_cpuhp(q);
+@@ -4527,14 +4536,10 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+
+ xa_init(&q->hctx_table);
+
+- mutex_lock(&q->sysfs_lock);
+-
+ blk_mq_realloc_hw_ctxs(set, q);
+ if (!q->nr_hw_queues)
+ goto err_hctxs;
+
+- mutex_unlock(&q->sysfs_lock);
+-
+ INIT_WORK(&q->timeout_work, blk_mq_timeout_work);
+ blk_queue_rq_timeout(q, set->timeout ? set->timeout : 30 * HZ);
+
+@@ -4553,7 +4558,6 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
+ return 0;
+
+ err_hctxs:
+- mutex_unlock(&q->sysfs_lock);
+ blk_mq_release(q);
+ err_exit:
+ q->mq_ops = NULL;
+@@ -4934,12 +4938,12 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
+ return false;
+
+ /* q->elevator needs protection from ->sysfs_lock */
+- lockdep_assert_held(&q->sysfs_lock);
++ mutex_lock(&q->sysfs_lock);
+
+ /* the check has to be done with holding sysfs_lock */
+ if (!q->elevator) {
+ kfree(qe);
+- goto out;
++ goto unlock;
+ }
+
+ INIT_LIST_HEAD(&qe->node);
+@@ -4949,7 +4953,9 @@ static bool blk_mq_elv_switch_none(struct list_head *head,
+ __elevator_get(qe->type);
+ list_add(&qe->node, head);
+ elevator_disable(q);
+-out:
++unlock:
++ mutex_unlock(&q->sysfs_lock);
++
+ return true;
+ }
+
+@@ -4978,9 +4984,11 @@ static void blk_mq_elv_switch_back(struct list_head *head,
+ list_del(&qe->node);
+ kfree(qe);
+
++ mutex_lock(&q->sysfs_lock);
+ elevator_switch(q, t);
+ /* drop the reference acquired in blk_mq_elv_switch_none */
+ elevator_put(t);
++ mutex_unlock(&q->sysfs_lock);
+ }
+
+ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+@@ -5000,11 +5008,8 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ if (set->nr_maps == 1 && nr_hw_queues == set->nr_hw_queues)
+ return;
+
+- list_for_each_entry(q, &set->tag_list, tag_set_list) {
+- mutex_lock(&q->sysfs_dir_lock);
+- mutex_lock(&q->sysfs_lock);
++ list_for_each_entry(q, &set->tag_list, tag_set_list)
+ blk_mq_freeze_queue(q);
+- }
+ /*
+ * Switch IO scheduler to 'none', cleaning up the data associated
+ * with the previous scheduler. We will switch back once we are done
+@@ -5060,11 +5065,8 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ list_for_each_entry(q, &set->tag_list, tag_set_list)
+ blk_mq_elv_switch_back(&head, q);
+
+- list_for_each_entry(q, &set->tag_list, tag_set_list) {
++ list_for_each_entry(q, &set->tag_list, tag_set_list)
+ blk_mq_unfreeze_queue(q);
+- mutex_unlock(&q->sysfs_lock);
+- mutex_unlock(&q->sysfs_dir_lock);
+- }
+
+ /* Free the excess tags when nr_hw_queues shrink. */
+ for (i = set->nr_hw_queues; i < prev_nr_hw_queues; i++)
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 42c2cb97d778af..207577145c54f4 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -690,11 +690,11 @@ queue_attr_store(struct kobject *kobj, struct attribute *attr,
+ return res;
+ }
+
+- mutex_lock(&q->sysfs_lock);
+ blk_mq_freeze_queue(q);
++ mutex_lock(&q->sysfs_lock);
+ res = entry->store(disk, page, length);
+- blk_mq_unfreeze_queue(q);
+ mutex_unlock(&q->sysfs_lock);
++ blk_mq_unfreeze_queue(q);
+ return res;
+ }
+
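
Note the new ordering in queue_attr_store() above: freeze first, then take sysfs_lock, releasing in the reverse order. This appears aimed at keeping one consistent acquisition order between the freeze state and the lock across all paths, which is what rules out an ABBA-style deadlock. A toy two-mutex sketch of the general rule (the freeze side is modelled as a plain mutex purely for illustration; this is not the actual blk-mq locking):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t freeze_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t sysfs_lock  = PTHREAD_MUTEX_INITIALIZER;

static void attr_store(void)
{
	/* Every path must take these in the same order: freeze, then
	 * lock. If another path locked first and then froze, two
	 * threads could each hold one resource and wait on the other.
	 */
	pthread_mutex_lock(&freeze_lock);	/* blk_mq_freeze_queue()   */
	pthread_mutex_lock(&sysfs_lock);	/* then q->sysfs_lock      */
	puts("store attribute");
	pthread_mutex_unlock(&sysfs_lock);
	pthread_mutex_unlock(&freeze_lock);	/* blk_mq_unfreeze_queue() */
}

int main(void)
{
	attr_store();
	return 0;
}
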
+diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c
+index 1b409dbd332d80..c8daffd90f3001 100644
+--- a/drivers/accel/ivpu/ivpu_gem.c
++++ b/drivers/accel/ivpu/ivpu_gem.c
+@@ -406,7 +406,7 @@ static void ivpu_bo_print_info(struct ivpu_bo *bo, struct drm_printer *p)
+ mutex_lock(&bo->lock);
+
+ drm_printf(p, "%-9p %-3u 0x%-12llx %-10lu 0x%-8x %-4u",
+- bo, bo->ctx->id, bo->vpu_addr, bo->base.base.size,
++ bo, bo->ctx ? bo->ctx->id : 0, bo->vpu_addr, bo->base.base.size,
+ bo->flags, kref_read(&bo->base.base.refcount));
+
+ if (bo->base.pages)
+diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
+index 59d3170f5e3541..10b7ae0f866c98 100644
+--- a/drivers/accel/ivpu/ivpu_pm.c
++++ b/drivers/accel/ivpu/ivpu_pm.c
+@@ -364,6 +364,7 @@ void ivpu_pm_init(struct ivpu_device *vdev)
+
+ pm_runtime_use_autosuspend(dev);
+ pm_runtime_set_autosuspend_delay(dev, delay);
++ pm_runtime_set_active(dev);
+
+ ivpu_dbg(vdev, PM, "Autosuspend delay = %d\n", delay);
+ }
+@@ -378,7 +379,6 @@ void ivpu_pm_enable(struct ivpu_device *vdev)
+ {
+ struct device *dev = vdev->drm.dev;
+
+- pm_runtime_set_active(dev);
+ pm_runtime_allow(dev);
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_put_autosuspend(dev);
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index d0432b1707ceb6..bf83a104086cce 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -524,6 +524,12 @@ static ssize_t backing_dev_store(struct device *dev,
+ }
+
+ nr_pages = i_size_read(inode) >> PAGE_SHIFT;
++ /* Refuse to use zero sized device (also prevents self reference) */
++ if (!nr_pages) {
++ err = -EINVAL;
++ goto out;
++ }
++
+ bitmap_sz = BITS_TO_LONGS(nr_pages) * sizeof(long);
+ bitmap = kvzalloc(bitmap_sz, GFP_KERNEL);
+ if (!bitmap) {
+@@ -1319,12 +1325,16 @@ static void zram_meta_free(struct zram *zram, u64 disksize)
+ size_t num_pages = disksize >> PAGE_SHIFT;
+ size_t index;
+
++ if (!zram->table)
++ return;
++
+ /* Free all pages that are still in this zram device */
+ for (index = 0; index < num_pages; index++)
+ zram_free_page(zram, index);
+
+ zs_destroy_pool(zram->mem_pool);
+ vfree(zram->table);
++ zram->table = NULL;
+ }
+
+ static bool zram_meta_alloc(struct zram *zram, u64 disksize)
+@@ -2165,11 +2175,6 @@ static void zram_reset_device(struct zram *zram)
+
+ zram->limit_pages = 0;
+
+- if (!init_done(zram)) {
+- up_write(&zram->init_lock);
+- return;
+- }
+-
+ set_capacity_and_notify(zram->disk, 0);
+ part_stat_set_all(zram->disk->part0, 0);
+
+diff --git a/drivers/clocksource/hyperv_timer.c b/drivers/clocksource/hyperv_timer.c
+index 99177835cadec4..b39dee7b93af04 100644
+--- a/drivers/clocksource/hyperv_timer.c
++++ b/drivers/clocksource/hyperv_timer.c
+@@ -27,7 +27,8 @@
+ #include <asm/mshyperv.h>
+
+ static struct clock_event_device __percpu *hv_clock_event;
+-static u64 hv_sched_clock_offset __ro_after_init;
++/* Note: offset can hold negative values after hibernation. */
++static u64 hv_sched_clock_offset __read_mostly;
+
+ /*
+ * If false, we're using the old mechanism for stimer0 interrupts
+@@ -470,6 +471,17 @@ static void resume_hv_clock_tsc(struct clocksource *arg)
+ hv_set_msr(HV_MSR_REFERENCE_TSC, tsc_msr.as_uint64);
+ }
+
++/*
++ * Called during resume from hibernation, from overridden
++ * x86_platform.restore_sched_clock_state routine. This is to adjust offsets
++ * used to calculate time for hv tsc page based sched_clock, to account for
++ * time spent before hibernation.
++ */
++void hv_adj_sched_clock_offset(u64 offset)
++{
++ hv_sched_clock_offset -= offset;
++}
++
+ #ifdef HAVE_VDSO_CLOCKMODE_HVCLOCK
+ static int hv_cs_enable(struct clocksource *cs)
+ {
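
hv_adj_sched_clock_offset() works because sched_clock here is computed as (reference counter - hv_sched_clock_offset) and the reference counter restarts across hibernation. Subtracting (counter at suspend - counter after resume) from the offset makes the clock continue exactly where it stopped; the unsigned wraparound is harmless modulo 2^64, which is why the earlier comment notes the offset can effectively hold negative values. Worked numbers (all values invented, 100ns units):

#include <stdio.h>

int main(void)
{
	unsigned long long offset = 1000;		/* set at boot            */
	unsigned long long ref_at_suspend = 5000;	/* sched_clock was 4000   */
	unsigned long long ref_after_resume = 200;	/* counter restarted      */

	/* hv_adj_sched_clock_offset(ref_at_suspend - ref_after_resume):
	 * wraps below zero here, which is fine in modular arithmetic.
	 */
	offset -= ref_at_suspend - ref_after_resume;

	/* sched_clock picks up at the pre-hibernation value: */
	printf("%llu\n", ref_after_resume - offset);	/* prints 4000 */
	return 0;
}
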
+diff --git a/drivers/cxl/core/region.c b/drivers/cxl/core/region.c
+index dff618c708dc68..a0d6e8d7f42c8a 100644
+--- a/drivers/cxl/core/region.c
++++ b/drivers/cxl/core/region.c
+@@ -1295,6 +1295,7 @@ static int cxl_port_setup_targets(struct cxl_port *port,
+ struct cxl_region_params *p = &cxlr->params;
+ struct cxl_decoder *cxld = cxl_rr->decoder;
+ struct cxl_switch_decoder *cxlsd;
++ struct cxl_port *iter = port;
+ u16 eig, peig;
+ u8 eiw, peiw;
+
+@@ -1311,16 +1312,26 @@ static int cxl_port_setup_targets(struct cxl_port *port,
+
+ cxlsd = to_cxl_switch_decoder(&cxld->dev);
+ if (cxl_rr->nr_targets_set) {
+- int i, distance;
++ int i, distance = 1;
++ struct cxl_region_ref *cxl_rr_iter;
+
+ /*
+- * Passthrough decoders impose no distance requirements between
+- * peers
++ * The "distance" between peer downstream ports represents which
++ * endpoint positions in the region interleave a given port can
++ * host.
++ *
++ * For example, at the root of a hierarchy the distance is
++ * always 1 as every index targets a different host-bridge. At
++ * each subsequent switch level those ports map every Nth region
++ * position where N is the width of the switch == distance.
+ */
+- if (cxl_rr->nr_targets == 1)
+- distance = 0;
+- else
+- distance = p->nr_targets / cxl_rr->nr_targets;
++ do {
++ cxl_rr_iter = cxl_rr_load(iter, cxlr);
++ distance *= cxl_rr_iter->nr_targets;
++ iter = to_cxl_port(iter->dev.parent);
++ } while (!is_cxl_root(iter));
++ distance *= cxlrd->cxlsd.cxld.interleave_ways;
++
+ for (i = 0; i < cxl_rr->nr_targets_set; i++)
+ if (ep->dport == cxlsd->target[i]) {
+ rc = check_last_peer(cxled, ep, cxl_rr,
+diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
+index 188412d45e0d26..6e553b5752b1dd 100644
+--- a/drivers/cxl/pci.c
++++ b/drivers/cxl/pci.c
+@@ -942,8 +942,7 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ if (rc)
+ return rc;
+
+- rc = cxl_pci_ras_unmask(pdev);
+- if (rc)
++ if (cxl_pci_ras_unmask(pdev))
+ dev_dbg(&pdev->dev, "No RAS reporting unmasked\n");
+
+ pci_save_state(pdev);
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index 8892bc701a662d..afb8c1c5010735 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -60,7 +60,7 @@ static void __dma_buf_debugfs_list_add(struct dma_buf *dmabuf)
+ {
+ }
+
+-static void __dma_buf_debugfs_list_del(struct file *file)
++static void __dma_buf_debugfs_list_del(struct dma_buf *dmabuf)
+ {
+ }
+ #endif
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index a3638ccc15f571..5e836e4e5b449a 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -256,15 +256,12 @@ static const struct dma_buf_ops udmabuf_ops = {
+ };
+
+ #define SEALS_WANTED (F_SEAL_SHRINK)
+-#define SEALS_DENIED (F_SEAL_WRITE)
++#define SEALS_DENIED (F_SEAL_WRITE|F_SEAL_FUTURE_WRITE)
+
+ static int check_memfd_seals(struct file *memfd)
+ {
+ int seals;
+
+- if (!memfd)
+- return -EBADFD;
+-
+ if (!shmem_file(memfd) && !is_file_hugepages(memfd))
+ return -EBADFD;
+
+@@ -279,12 +276,10 @@ static int check_memfd_seals(struct file *memfd)
+ return 0;
+ }
+
+-static int export_udmabuf(struct udmabuf *ubuf,
+- struct miscdevice *device,
+- u32 flags)
++static struct dma_buf *export_udmabuf(struct udmabuf *ubuf,
++ struct miscdevice *device)
+ {
+ DEFINE_DMA_BUF_EXPORT_INFO(exp_info);
+- struct dma_buf *buf;
+
+ ubuf->device = device;
+ exp_info.ops = &udmabuf_ops;
+@@ -292,24 +287,72 @@ static int export_udmabuf(struct udmabuf *ubuf,
+ exp_info.priv = ubuf;
+ exp_info.flags = O_RDWR;
+
+- buf = dma_buf_export(&exp_info);
+- if (IS_ERR(buf))
+- return PTR_ERR(buf);
++ return dma_buf_export(&exp_info);
++}
++
++static long udmabuf_pin_folios(struct udmabuf *ubuf, struct file *memfd,
++ loff_t start, loff_t size)
++{
++ pgoff_t pgoff, pgcnt, upgcnt = ubuf->pagecount;
++ struct folio **folios = NULL;
++ u32 cur_folio, cur_pgcnt;
++ long nr_folios;
++ long ret = 0;
++ loff_t end;
++
++ pgcnt = size >> PAGE_SHIFT;
++ folios = kvmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
++ if (!folios)
++ return -ENOMEM;
++
++ end = start + (pgcnt << PAGE_SHIFT) - 1;
++ nr_folios = memfd_pin_folios(memfd, start, end, folios, pgcnt, &pgoff);
++ if (nr_folios <= 0) {
++ ret = nr_folios ? nr_folios : -EINVAL;
++ goto end;
++ }
+
+- return dma_buf_fd(buf, flags);
++ cur_pgcnt = 0;
++ for (cur_folio = 0; cur_folio < nr_folios; ++cur_folio) {
++ pgoff_t subpgoff = pgoff;
++ size_t fsize = folio_size(folios[cur_folio]);
++
++ ret = add_to_unpin_list(&ubuf->unpin_list, folios[cur_folio]);
++ if (ret < 0)
++ goto end;
++
++ for (; subpgoff < fsize; subpgoff += PAGE_SIZE) {
++ ubuf->folios[upgcnt] = folios[cur_folio];
++ ubuf->offsets[upgcnt] = subpgoff;
++ ++upgcnt;
++
++ if (++cur_pgcnt >= pgcnt)
++ goto end;
++ }
++
++ /*
++ * In a given range, only the first subpage of the first folio
++ * has an offset, which is returned by memfd_pin_folios().
++ * The first subpages of other folios (in the range) have an
++ * offset of 0.
++ */
++ pgoff = 0;
++ }
++end:
++ ubuf->pagecount = upgcnt;
++ kvfree(folios);
++ return ret;
+ }
+
+ static long udmabuf_create(struct miscdevice *device,
+ struct udmabuf_create_list *head,
+ struct udmabuf_create_item *list)
+ {
+- pgoff_t pgoff, pgcnt, pglimit, pgbuf = 0;
+- long nr_folios, ret = -EINVAL;
+- struct file *memfd = NULL;
+- struct folio **folios;
++ pgoff_t pgcnt = 0, pglimit;
+ struct udmabuf *ubuf;
+- u32 i, j, k, flags;
+- loff_t end;
++ struct dma_buf *dmabuf;
++ long ret = -EINVAL;
++ u32 i, flags;
+
+ ubuf = kzalloc(sizeof(*ubuf), GFP_KERNEL);
+ if (!ubuf)
+@@ -318,93 +361,76 @@ static long udmabuf_create(struct miscdevice *device,
+ INIT_LIST_HEAD(&ubuf->unpin_list);
+ pglimit = (size_limit_mb * 1024 * 1024) >> PAGE_SHIFT;
+ for (i = 0; i < head->count; i++) {
+- if (!IS_ALIGNED(list[i].offset, PAGE_SIZE))
++ if (!PAGE_ALIGNED(list[i].offset))
+ goto err;
+- if (!IS_ALIGNED(list[i].size, PAGE_SIZE))
++ if (!PAGE_ALIGNED(list[i].size))
+ goto err;
+- ubuf->pagecount += list[i].size >> PAGE_SHIFT;
+- if (ubuf->pagecount > pglimit)
++
++ pgcnt += list[i].size >> PAGE_SHIFT;
++ if (pgcnt > pglimit)
+ goto err;
+ }
+
+- if (!ubuf->pagecount)
++ if (!pgcnt)
+ goto err;
+
+- ubuf->folios = kvmalloc_array(ubuf->pagecount, sizeof(*ubuf->folios),
+- GFP_KERNEL);
++ ubuf->folios = kvmalloc_array(pgcnt, sizeof(*ubuf->folios), GFP_KERNEL);
+ if (!ubuf->folios) {
+ ret = -ENOMEM;
+ goto err;
+ }
+- ubuf->offsets = kvcalloc(ubuf->pagecount, sizeof(*ubuf->offsets),
+- GFP_KERNEL);
++
++ ubuf->offsets = kvcalloc(pgcnt, sizeof(*ubuf->offsets), GFP_KERNEL);
+ if (!ubuf->offsets) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+- pgbuf = 0;
+ for (i = 0; i < head->count; i++) {
+- memfd = fget(list[i].memfd);
+- ret = check_memfd_seals(memfd);
+- if (ret < 0)
+- goto err;
+-
+- pgcnt = list[i].size >> PAGE_SHIFT;
+- folios = kvmalloc_array(pgcnt, sizeof(*folios), GFP_KERNEL);
+- if (!folios) {
+- ret = -ENOMEM;
+- goto err;
+- }
++ struct file *memfd = fget(list[i].memfd);
+
+- end = list[i].offset + (pgcnt << PAGE_SHIFT) - 1;
+- ret = memfd_pin_folios(memfd, list[i].offset, end,
+- folios, pgcnt, &pgoff);
+- if (ret <= 0) {
+- kvfree(folios);
+- if (!ret)
+- ret = -EINVAL;
++ if (!memfd) {
++ ret = -EBADFD;
+ goto err;
+ }
+
+- nr_folios = ret;
+- pgoff >>= PAGE_SHIFT;
+- for (j = 0, k = 0; j < pgcnt; j++) {
+- ubuf->folios[pgbuf] = folios[k];
+- ubuf->offsets[pgbuf] = pgoff << PAGE_SHIFT;
+-
+- if (j == 0 || ubuf->folios[pgbuf-1] != folios[k]) {
+- ret = add_to_unpin_list(&ubuf->unpin_list,
+- folios[k]);
+- if (ret < 0) {
+- kfree(folios);
+- goto err;
+- }
+- }
+-
+- pgbuf++;
+- if (++pgoff == folio_nr_pages(folios[k])) {
+- pgoff = 0;
+- if (++k == nr_folios)
+- break;
+- }
+- }
++ /*
++ * Take the inode lock to protect against concurrent
++ * memfd_add_seals(), which takes this lock in write mode.
++ */
++ inode_lock_shared(file_inode(memfd));
++ ret = check_memfd_seals(memfd);
++ if (ret)
++ goto out_unlock;
+
+- kvfree(folios);
++ ret = udmabuf_pin_folios(ubuf, memfd, list[i].offset,
++ list[i].size);
++out_unlock:
++ inode_unlock_shared(file_inode(memfd));
+ fput(memfd);
+- memfd = NULL;
++ if (ret)
++ goto err;
+ }
+
+ flags = head->flags & UDMABUF_FLAGS_CLOEXEC ? O_CLOEXEC : 0;
+- ret = export_udmabuf(ubuf, device, flags);
+- if (ret < 0)
++ dmabuf = export_udmabuf(ubuf, device);
++ if (IS_ERR(dmabuf)) {
++ ret = PTR_ERR(dmabuf);
+ goto err;
++ }
++ /*
++ * Ownership of ubuf is held by the dmabuf from here.
++ * If the following dma_buf_fd() fails, dma_buf_put() cleans up both the
++ * dmabuf and the ubuf (through udmabuf_ops.release).
++ */
++
++ ret = dma_buf_fd(dmabuf, flags);
++ if (ret < 0)
++ dma_buf_put(dmabuf);
+
+ return ret;
+
+ err:
+- if (memfd)
+- fput(memfd);
+ unpin_all_folios(&ubuf->unpin_list);
+ kvfree(ubuf->offsets);
+ kvfree(ubuf->folios);
+diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
+index ddfbdb66b794d7..5d356b7c45897c 100644
+--- a/drivers/edac/amd64_edac.c
++++ b/drivers/edac/amd64_edac.c
+@@ -3362,36 +3362,24 @@ static bool dct_ecc_enabled(struct amd64_pvt *pvt)
+
+ static bool umc_ecc_enabled(struct amd64_pvt *pvt)
+ {
+- u8 umc_en_mask = 0, ecc_en_mask = 0;
+- u16 nid = pvt->mc_node_id;
+ struct amd64_umc *umc;
+- u8 ecc_en = 0, i;
++ bool ecc_en = false;
++ int i;
+
++ /* Check whether at least one UMC is enabled: */
+ for_each_umc(i) {
+ umc = &pvt->umc[i];
+
+- /* Only check enabled UMCs. */
+- if (!(umc->sdp_ctrl & UMC_SDP_INIT))
+- continue;
+-
+- umc_en_mask |= BIT(i);
+-
+- if (umc->umc_cap_hi & UMC_ECC_ENABLED)
+- ecc_en_mask |= BIT(i);
++ if (umc->sdp_ctrl & UMC_SDP_INIT &&
++ umc->umc_cap_hi & UMC_ECC_ENABLED) {
++ ecc_en = true;
++ break;
++ }
+ }
+
+- /* Check whether at least one UMC is enabled: */
+- if (umc_en_mask)
+- ecc_en = umc_en_mask == ecc_en_mask;
+- else
+- edac_dbg(0, "Node %d: No enabled UMCs.\n", nid);
+-
+- edac_dbg(3, "Node %d: DRAM ECC %s.\n", nid, (ecc_en ? "enabled" : "disabled"));
++ edac_dbg(3, "Node %d: DRAM ECC %s.\n", pvt->mc_node_id, (ecc_en ? "enabled" : "disabled"));
+
+- if (!ecc_en)
+- return false;
+- else
+- return true;
++ return ecc_en;
+ }
+
+ static inline void
+diff --git a/drivers/firmware/arm_ffa/bus.c b/drivers/firmware/arm_ffa/bus.c
+index eb17d03b66fec9..dfda5ffc14db72 100644
+--- a/drivers/firmware/arm_ffa/bus.c
++++ b/drivers/firmware/arm_ffa/bus.c
+@@ -187,13 +187,18 @@ bool ffa_device_is_valid(struct ffa_device *ffa_dev)
+ return valid;
+ }
+
+-struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id,
+- const struct ffa_ops *ops)
++struct ffa_device *
++ffa_device_register(const struct ffa_partition_info *part_info,
++ const struct ffa_ops *ops)
+ {
+ int id, ret;
++ uuid_t uuid;
+ struct device *dev;
+ struct ffa_device *ffa_dev;
+
++ if (!part_info)
++ return NULL;
++
+ id = ida_alloc_min(&ffa_bus_id, 1, GFP_KERNEL);
+ if (id < 0)
+ return NULL;
+@@ -210,9 +215,11 @@ struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id,
+ dev_set_name(&ffa_dev->dev, "arm-ffa-%d", id);
+
+ ffa_dev->id = id;
+- ffa_dev->vm_id = vm_id;
++ ffa_dev->vm_id = part_info->id;
++ ffa_dev->properties = part_info->properties;
+ ffa_dev->ops = ops;
+- uuid_copy(&ffa_dev->uuid, uuid);
++ import_uuid(&uuid, (u8 *)part_info->uuid);
++ uuid_copy(&ffa_dev->uuid, &uuid);
+
+ ret = device_register(&ffa_dev->dev);
+ if (ret) {
+diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c
+index b14cbdae94e82b..2c2ec3c35f1561 100644
+--- a/drivers/firmware/arm_ffa/driver.c
++++ b/drivers/firmware/arm_ffa/driver.c
+@@ -1387,7 +1387,6 @@ static struct notifier_block ffa_bus_nb = {
+ static int ffa_setup_partitions(void)
+ {
+ int count, idx, ret;
+- uuid_t uuid;
+ struct ffa_device *ffa_dev;
+ struct ffa_dev_part_info *info;
+ struct ffa_partition_info *pbuf, *tpbuf;
+@@ -1406,23 +1405,19 @@ static int ffa_setup_partitions(void)
+
+ xa_init(&drv_info->partition_info);
+ for (idx = 0, tpbuf = pbuf; idx < count; idx++, tpbuf++) {
+- import_uuid(&uuid, (u8 *)tpbuf->uuid);
+-
+ /* Note that if the UUID will be uuid_null, that will require
+ * ffa_bus_notifier() to find the UUID of this partition id
+ * with help of ffa_device_match_uuid(). FF-A v1.1 and above
+ * provides UUID here for each partition as part of the
+ * discovery API and the same is passed.
+ */
+- ffa_dev = ffa_device_register(&uuid, tpbuf->id, &ffa_drv_ops);
++ ffa_dev = ffa_device_register(tpbuf, &ffa_drv_ops);
+ if (!ffa_dev) {
+ pr_err("%s: failed to register partition ID 0x%x\n",
+ __func__, tpbuf->id);
+ continue;
+ }
+
+- ffa_dev->properties = tpbuf->properties;
+-
+ if (drv_info->version > FFA_VERSION_1_0 &&
+ !(tpbuf->properties & FFA_PARTITION_AARCH64_EXEC))
+ ffa_mode_32bit_set(ffa_dev);
+diff --git a/drivers/firmware/arm_scmi/vendors/imx/Kconfig b/drivers/firmware/arm_scmi/vendors/imx/Kconfig
+index 2883ed24a84d65..a01bf5e47301d2 100644
+--- a/drivers/firmware/arm_scmi/vendors/imx/Kconfig
++++ b/drivers/firmware/arm_scmi/vendors/imx/Kconfig
+@@ -15,6 +15,7 @@ config IMX_SCMI_BBM_EXT
+ config IMX_SCMI_MISC_EXT
+ tristate "i.MX SCMI MISC EXTENSION"
+ depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF)
++ depends on IMX_SCMI_MISC_DRV
+ default y if ARCH_MXC
+ help
+ This enables i.MX System MISC control logic such as gpio expander
+diff --git a/drivers/firmware/imx/Kconfig b/drivers/firmware/imx/Kconfig
+index 477d3f32d99a6b..907cd149c40a8b 100644
+--- a/drivers/firmware/imx/Kconfig
++++ b/drivers/firmware/imx/Kconfig
+@@ -25,7 +25,6 @@ config IMX_SCU
+
+ config IMX_SCMI_MISC_DRV
+ tristate "IMX SCMI MISC Protocol driver"
+- depends on IMX_SCMI_MISC_EXT || COMPILE_TEST
+ default y if ARCH_MXC
+ help
+ The System Controller Management Interface firmware (SCMI FW) is
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
+index 5ac59b62020cf2..18b3b1aaa1d3b7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dev_coredump.c
+@@ -345,11 +345,10 @@ void amdgpu_coredump(struct amdgpu_device *adev, bool skip_vram_check,
+ coredump->skip_vram_check = skip_vram_check;
+ coredump->reset_vram_lost = vram_lost;
+
+- if (job && job->vm) {
+- struct amdgpu_vm *vm = job->vm;
++ if (job && job->pasid) {
+ struct amdgpu_task_info *ti;
+
+- ti = amdgpu_vm_get_task_info_vm(vm);
++ ti = amdgpu_vm_get_task_info_pasid(adev, job->pasid);
+ if (ti) {
+ coredump->reset_task_info = *ti;
+ amdgpu_vm_put_task_info(ti);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+index 16f2605ac50b99..1ce20a19be8ba9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+@@ -253,7 +253,6 @@ void amdgpu_job_set_resources(struct amdgpu_job *job, struct amdgpu_bo *gds,
+
+ void amdgpu_job_free_resources(struct amdgpu_job *job)
+ {
+- struct amdgpu_ring *ring = to_amdgpu_ring(job->base.sched);
+ struct dma_fence *f;
+ unsigned i;
+
+@@ -266,7 +265,7 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
+ f = NULL;
+
+ for (i = 0; i < job->num_ibs; ++i)
+- amdgpu_ib_free(ring->adev, &job->ibs[i], f);
++ amdgpu_ib_free(NULL, &job->ibs[i], f);
+ }
+
+ static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 8d2562d0f143c7..73e02141a6e215 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -1260,10 +1260,9 @@ int amdgpu_vm_bo_update(struct amdgpu_device *adev, struct amdgpu_bo_va *bo_va,
+ * next command submission.
+ */
+ if (amdgpu_vm_is_bo_always_valid(vm, bo)) {
+- uint32_t mem_type = bo->tbo.resource->mem_type;
+-
+- if (!(bo->preferred_domains &
+- amdgpu_mem_type_to_domain(mem_type)))
++ if (bo->tbo.resource &&
++ !(bo->preferred_domains &
++ amdgpu_mem_type_to_domain(bo->tbo.resource->mem_type)))
+ amdgpu_vm_bo_evicted(&bo_va->base);
+ else
+ amdgpu_vm_bo_idle(&bo_va->base);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index 47b47d21f46447..6c19626ec59e9d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -4105,7 +4105,7 @@ static int gfx_v12_0_set_clockgating_state(void *handle,
+ if (amdgpu_sriov_vf(adev))
+ return 0;
+
+- switch (adev->ip_versions[GC_HWIP][0]) {
++ switch (amdgpu_ip_version(adev, GC_HWIP, 0)) {
+ case IP_VERSION(12, 0, 0):
+ case IP_VERSION(12, 0, 1):
+ gfx_v12_0_update_gfx_clock_gating(adev,
+diff --git a/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c b/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
+index 0fbc3be81f140f..f2ab5001b49249 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mmhub_v4_1_0.c
+@@ -108,7 +108,7 @@ mmhub_v4_1_0_print_l2_protection_fault_status(struct amdgpu_device *adev,
+ dev_err(adev->dev,
+ "MMVM_L2_PROTECTION_FAULT_STATUS_LO32:0x%08X\n",
+ status);
+- switch (adev->ip_versions[MMHUB_HWIP][0]) {
++ switch (amdgpu_ip_version(adev, MMHUB_HWIP, 0)) {
+ case IP_VERSION(4, 1, 0):
+ mmhub_cid = mmhub_client_ids_v4_1_0[cid][rw];
+ break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_0.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_0.c
+index b1b57dcc5a7370..d1032e9992b49c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_0.c
+@@ -271,8 +271,19 @@ const struct nbio_hdp_flush_reg nbio_v7_0_hdp_flush_reg = {
+ .ref_and_mask_sdma1 = GPU_HDP_FLUSH_DONE__SDMA1_MASK,
+ };
+
++#define regRCC_DEV0_EPF6_STRAP4 0xd304
++#define regRCC_DEV0_EPF6_STRAP4_BASE_IDX 5
++
+ static void nbio_v7_0_init_registers(struct amdgpu_device *adev)
+ {
++ uint32_t data;
++
++ switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) {
++ case IP_VERSION(2, 5, 0):
++ data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF6_STRAP4) & ~BIT(23);
++ WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF6_STRAP4, data);
++ break;
++ }
+ }
+
+ #define MMIO_REG_HOLE_OFFSET (0x80000 - PAGE_SIZE)
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
+index 814ab59fdd4a3a..41421da63a0846 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_11.c
+@@ -275,7 +275,7 @@ static void nbio_v7_11_init_registers(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_SOC15(NBIO, 0, regBIF_BIF256_CI256_RC3X4_USB4_PCIE_MST_CTRL_3, data);
+
+- switch (adev->ip_versions[NBIO_HWIP][0]) {
++ switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) {
+ case IP_VERSION(7, 11, 0):
+ case IP_VERSION(7, 11, 1):
+ case IP_VERSION(7, 11, 2):
+diff --git a/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c b/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
+index 1ac730328516ff..3fb6d2aa7e3b39 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
++++ b/drivers/gpu/drm/amd/amdgpu/nbio_v7_7.c
+@@ -247,7 +247,7 @@ static void nbio_v7_7_init_registers(struct amdgpu_device *adev)
+ if (def != data)
+ WREG32_SOC15(NBIO, 0, regBIF0_PCIE_MST_CTRL_3, data);
+
+- switch (adev->ip_versions[NBIO_HWIP][0]) {
++ switch (amdgpu_ip_version(adev, NBIO_HWIP, 0)) {
+ case IP_VERSION(7, 7, 0):
+ data = RREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4) & ~BIT(23);
+ WREG32_SOC15(NBIO, 0, regRCC_DEV0_EPF5_STRAP4, data);
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index b22fb7eafcd3f2..9ec53431f2c32d 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -2108,7 +2108,7 @@ static int smu_v14_0_2_enable_gfx_features(struct smu_context *smu)
+ {
+ struct amdgpu_device *adev = smu->adev;
+
+- if (adev->ip_versions[MP1_HWIP][0] == IP_VERSION(14, 0, 2))
++ if (amdgpu_ip_version(adev, MP1_HWIP, 0) == IP_VERSION(14, 0, 2))
+ return smu_cmn_send_smc_msg_with_param(smu, SMU_MSG_EnableAllSmuFeatures,
+ FEATURE_PWR_GFX, NULL);
+ else
+diff --git a/drivers/gpu/drm/display/drm_dp_tunnel.c b/drivers/gpu/drm/display/drm_dp_tunnel.c
+index 48b2df120086c9..90fe07a89260e2 100644
+--- a/drivers/gpu/drm/display/drm_dp_tunnel.c
++++ b/drivers/gpu/drm/display/drm_dp_tunnel.c
+@@ -1896,8 +1896,8 @@ static void destroy_mgr(struct drm_dp_tunnel_mgr *mgr)
+ *
+ * Creates a DP tunnel manager for @dev.
+ *
+- * Returns a pointer to the tunnel manager if created successfully or NULL in
+- * case of an error.
++ * Returns a pointer to the tunnel manager if created successfully, or an
++ * error pointer in case of failure.
+ */
+ struct drm_dp_tunnel_mgr *
+ drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
+@@ -1907,7 +1907,7 @@ drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
+
+ mgr = kzalloc(sizeof(*mgr), GFP_KERNEL);
+ if (!mgr)
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+
+ mgr->dev = dev;
+ init_waitqueue_head(&mgr->bw_req_queue);
+@@ -1916,7 +1916,7 @@ drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
+ if (!mgr->groups) {
+ kfree(mgr);
+
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+ }
+
+ #ifdef CONFIG_DRM_DISPLAY_DP_TUNNEL_STATE_DEBUG
+@@ -1927,7 +1927,7 @@ drm_dp_tunnel_mgr_create(struct drm_device *dev, int max_group_count)
+ if (!init_group(mgr, &mgr->groups[i])) {
+ destroy_mgr(mgr);
+
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+ }
+
+ mgr->group_count++;
+diff --git a/drivers/gpu/drm/drm_modes.c b/drivers/gpu/drm/drm_modes.c
+index 6ba167a3346134..71573b85d9242e 100644
+--- a/drivers/gpu/drm/drm_modes.c
++++ b/drivers/gpu/drm/drm_modes.c
+@@ -1287,14 +1287,11 @@ EXPORT_SYMBOL(drm_mode_set_name);
+ */
+ int drm_mode_vrefresh(const struct drm_display_mode *mode)
+ {
+- unsigned int num, den;
++ unsigned int num = 1, den = 1;
+
+ if (mode->htotal == 0 || mode->vtotal == 0)
+ return 0;
+
+- num = mode->clock;
+- den = mode->htotal * mode->vtotal;
+-
+ if (mode->flags & DRM_MODE_FLAG_INTERLACE)
+ num *= 2;
+ if (mode->flags & DRM_MODE_FLAG_DBLSCAN)
+@@ -1302,6 +1299,12 @@ int drm_mode_vrefresh(const struct drm_display_mode *mode)
+ if (mode->vscan > 1)
+ den *= mode->vscan;
+
++ if (check_mul_overflow(mode->clock, num, &num))
++ return 0;
++
++ if (check_mul_overflow(mode->htotal * mode->vtotal, den, &den))
++ return 0;
++
+ return DIV_ROUND_CLOSEST_ULL(mul_u32_u32(num, 1000), den);
+ }
+ EXPORT_SYMBOL(drm_mode_vrefresh);
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
+index ba55c059063dbb..fe1f85e5dda330 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
++++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
+@@ -343,6 +343,11 @@ struct intel_engine_guc_stats {
+ * @start_gt_clk: GT clock time of last idle to active transition.
+ */
+ u64 start_gt_clk;
++
++ /**
++ * @total: The last value of total returned
++ */
++ u64 total;
+ };
+
+ union intel_engine_tlb_inv_reg {
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index ed979847187f53..ee12ee0ed41871 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -1243,6 +1243,21 @@ static void __get_engine_usage_record(struct intel_engine_cs *engine,
+ } while (++i < 6);
+ }
+
++static void __set_engine_usage_record(struct intel_engine_cs *engine,
++ u32 last_in, u32 id, u32 total)
++{
++ struct iosys_map rec_map = intel_guc_engine_usage_record_map(engine);
++
++#define record_write(map_, field_, val_) \
++ iosys_map_wr_field(map_, 0, struct guc_engine_usage_record, field_, val_)
++
++ record_write(&rec_map, last_switch_in_stamp, last_in);
++ record_write(&rec_map, current_context_index, id);
++ record_write(&rec_map, total_runtime, total);
++
++#undef record_write
++}
++
+ static void guc_update_engine_gt_clks(struct intel_engine_cs *engine)
+ {
+ struct intel_engine_guc_stats *stats = &engine->stats.guc;
+@@ -1363,9 +1378,12 @@ static ktime_t guc_engine_busyness(struct intel_engine_cs *engine, ktime_t *now)
+ total += intel_gt_clock_interval_to_ns(gt, clk);
+ }
+
++ if (total > stats->total)
++ stats->total = total;
++
+ spin_unlock_irqrestore(&guc->timestamp.lock, flags);
+
+- return ns_to_ktime(total);
++ return ns_to_ktime(stats->total);
+ }
+
+ static void guc_enable_busyness_worker(struct intel_guc *guc)
+@@ -1431,8 +1449,21 @@ static void __reset_guc_busyness_stats(struct intel_guc *guc)
+
+ guc_update_pm_timestamp(guc, &unused);
+ for_each_engine(engine, gt, id) {
++ struct intel_engine_guc_stats *stats = &engine->stats.guc;
++
+ guc_update_engine_gt_clks(engine);
+- engine->stats.guc.prev_total = 0;
++
++ /*
++ * If resetting a running context, accumulate the active
++ * time as well since there will be no context switch.
++ */
++ if (stats->running) {
++ u64 clk = guc->timestamp.gt_stamp - stats->start_gt_clk;
++
++ stats->total_gt_clks += clk;
++ }
++ stats->prev_total = 0;
++ stats->running = 0;
+ }
+
+ spin_unlock_irqrestore(&guc->timestamp.lock, flags);
+@@ -1543,6 +1574,9 @@ static void guc_timestamp_ping(struct work_struct *wrk)
+
+ static int guc_action_enable_usage_stats(struct intel_guc *guc)
+ {
++ struct intel_gt *gt = guc_to_gt(guc);
++ struct intel_engine_cs *engine;
++ enum intel_engine_id id;
+ u32 offset = intel_guc_engine_usage_offset(guc);
+ u32 action[] = {
+ INTEL_GUC_ACTION_SET_ENG_UTIL_BUFF,
+@@ -1550,6 +1584,9 @@ static int guc_action_enable_usage_stats(struct intel_guc *guc)
+ 0,
+ };
+
++ for_each_engine(engine, gt, id)
++ __set_engine_usage_record(engine, 0, 0xffffffff, 0);
++
+ return intel_guc_send(guc, action, ARRAY_SIZE(action));
+ }
+
+diff --git a/drivers/gpu/drm/panel/panel-himax-hx83102.c b/drivers/gpu/drm/panel/panel-himax-hx83102.c
+index 8b48bba181316c..3644a7544b935d 100644
+--- a/drivers/gpu/drm/panel/panel-himax-hx83102.c
++++ b/drivers/gpu/drm/panel/panel-himax-hx83102.c
+@@ -565,6 +565,8 @@ static int hx83102_get_modes(struct drm_panel *panel,
+ struct drm_display_mode *mode;
+
+ mode = drm_mode_duplicate(connector->dev, m);
++ if (!mode)
++ return -ENOMEM;
+
+ mode->type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED;
+ drm_mode_set_name(mode);
+diff --git a/drivers/gpu/drm/panel/panel-novatek-nt35950.c b/drivers/gpu/drm/panel/panel-novatek-nt35950.c
+index b036208f93560e..08b22b592ab045 100644
+--- a/drivers/gpu/drm/panel/panel-novatek-nt35950.c
++++ b/drivers/gpu/drm/panel/panel-novatek-nt35950.c
+@@ -481,9 +481,9 @@ static int nt35950_probe(struct mipi_dsi_device *dsi)
+ return dev_err_probe(dev, -EPROBE_DEFER, "Cannot get secondary DSI host\n");
+
+ nt->dsi[1] = mipi_dsi_device_register_full(dsi_r_host, info);
+- if (!nt->dsi[1]) {
++ if (IS_ERR(nt->dsi[1])) {
+ dev_err(dev, "Cannot get secondary DSI node\n");
+- return -ENODEV;
++ return PTR_ERR(nt->dsi[1]);
+ }
+ num_dsis++;
+ }
+diff --git a/drivers/gpu/drm/panel/panel-sitronix-st7701.c b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
+index eef03d04e0cd2d..1f72ef7ca74c93 100644
+--- a/drivers/gpu/drm/panel/panel-sitronix-st7701.c
++++ b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
+@@ -1177,6 +1177,7 @@ static int st7701_probe(struct device *dev, int connector_type)
+ return dev_err_probe(dev, ret, "Failed to get orientation\n");
+
+ drm_panel_init(&st7701->panel, dev, &st7701_funcs, connector_type);
++ st7701->panel.prepare_prev_first = true;
+
+ /**
+ * Once sleep out has been issued, ST7701 IC required to wait 120ms
+diff --git a/drivers/gpu/drm/panel/panel-synaptics-r63353.c b/drivers/gpu/drm/panel/panel-synaptics-r63353.c
+index 169c629746c714..17349825543fe6 100644
+--- a/drivers/gpu/drm/panel/panel-synaptics-r63353.c
++++ b/drivers/gpu/drm/panel/panel-synaptics-r63353.c
+@@ -325,7 +325,7 @@ static void r63353_panel_shutdown(struct mipi_dsi_device *dsi)
+ {
+ struct r63353_panel *rpanel = mipi_dsi_get_drvdata(dsi);
+
+- r63353_panel_unprepare(&rpanel->base);
++ drm_panel_unprepare(&rpanel->base);
+ }
+
+ static const struct r63353_desc sharp_ls068b3sx02_data = {
+diff --git a/drivers/hv/hv_kvp.c b/drivers/hv/hv_kvp.c
+index d35b60c0611486..77017d9518267c 100644
+--- a/drivers/hv/hv_kvp.c
++++ b/drivers/hv/hv_kvp.c
+@@ -767,6 +767,12 @@ hv_kvp_init(struct hv_util_service *srv)
+ */
+ kvp_transaction.state = HVUTIL_DEVICE_INIT;
+
++ return 0;
++}
++
++int
++hv_kvp_init_transport(void)
++{
+ hvt = hvutil_transport_init(kvp_devname, CN_KVP_IDX, CN_KVP_VAL,
+ kvp_on_msg, kvp_on_reset);
+ if (!hvt)
+diff --git a/drivers/hv/hv_snapshot.c b/drivers/hv/hv_snapshot.c
+index 0d2184be169125..397f4c8fa46c31 100644
+--- a/drivers/hv/hv_snapshot.c
++++ b/drivers/hv/hv_snapshot.c
+@@ -388,6 +388,12 @@ hv_vss_init(struct hv_util_service *srv)
+ */
+ vss_transaction.state = HVUTIL_DEVICE_INIT;
+
++ return 0;
++}
++
++int
++hv_vss_init_transport(void)
++{
+ hvt = hvutil_transport_init(vss_devname, CN_VSS_IDX, CN_VSS_VAL,
+ vss_on_msg, vss_on_reset);
+ if (!hvt) {
+diff --git a/drivers/hv/hv_util.c b/drivers/hv/hv_util.c
+index c4f525325790fa..3d9360fd909acc 100644
+--- a/drivers/hv/hv_util.c
++++ b/drivers/hv/hv_util.c
+@@ -141,6 +141,7 @@ static struct hv_util_service util_heartbeat = {
+ static struct hv_util_service util_kvp = {
+ .util_cb = hv_kvp_onchannelcallback,
+ .util_init = hv_kvp_init,
++ .util_init_transport = hv_kvp_init_transport,
+ .util_pre_suspend = hv_kvp_pre_suspend,
+ .util_pre_resume = hv_kvp_pre_resume,
+ .util_deinit = hv_kvp_deinit,
+@@ -149,6 +150,7 @@ static struct hv_util_service util_kvp = {
+ static struct hv_util_service util_vss = {
+ .util_cb = hv_vss_onchannelcallback,
+ .util_init = hv_vss_init,
++ .util_init_transport = hv_vss_init_transport,
+ .util_pre_suspend = hv_vss_pre_suspend,
+ .util_pre_resume = hv_vss_pre_resume,
+ .util_deinit = hv_vss_deinit,
+@@ -613,6 +615,13 @@ static int util_probe(struct hv_device *dev,
+ if (ret)
+ goto error;
+
++ if (srv->util_init_transport) {
++ ret = srv->util_init_transport();
++ if (ret) {
++ vmbus_close(dev->channel);
++ goto error;
++ }
++ }
+ return 0;
+
+ error:
+diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
+index d2856023d53c9a..52cb744b4d7fde 100644
+--- a/drivers/hv/hyperv_vmbus.h
++++ b/drivers/hv/hyperv_vmbus.h
+@@ -370,12 +370,14 @@ void vmbus_on_event(unsigned long data);
+ void vmbus_on_msg_dpc(unsigned long data);
+
+ int hv_kvp_init(struct hv_util_service *srv);
++int hv_kvp_init_transport(void);
+ void hv_kvp_deinit(void);
+ int hv_kvp_pre_suspend(void);
+ int hv_kvp_pre_resume(void);
+ void hv_kvp_onchannelcallback(void *context);
+
+ int hv_vss_init(struct hv_util_service *srv);
++int hv_vss_init_transport(void);
+ void hv_vss_deinit(void);
+ int hv_vss_pre_suspend(void);
+ int hv_vss_pre_resume(void);
+diff --git a/drivers/hwmon/tmp513.c b/drivers/hwmon/tmp513.c
+index 926d28cd3fab55..1c2cb12071b808 100644
+--- a/drivers/hwmon/tmp513.c
++++ b/drivers/hwmon/tmp513.c
+@@ -182,7 +182,7 @@ struct tmp51x_data {
+ struct regmap *regmap;
+ };
+
+-// Set the shift based on the gain 8=4, 4=3, 2=2, 1=1
++// Set the shift based on the gain: 8 -> 1, 4 -> 2, 2 -> 3, 1 -> 4
+ static inline u8 tmp51x_get_pga_shift(struct tmp51x_data *data)
+ {
+ return 5 - ffs(data->pga_gain);
+@@ -204,7 +204,9 @@ static int tmp51x_get_value(struct tmp51x_data *data, u8 reg, u8 pos,
+ * 2's complement number shifted by one to four depending
+ * on the pga gain setting. 1lsb = 10uV
+ */
+- *val = sign_extend32(regval, 17 - tmp51x_get_pga_shift(data));
++ *val = sign_extend32(regval,
++ reg == TMP51X_SHUNT_CURRENT_RESULT ?
++ 16 - tmp51x_get_pga_shift(data) : 15);
+ *val = DIV_ROUND_CLOSEST(*val * 10 * MILLI, data->shunt_uohms);
+ break;
+ case TMP51X_BUS_VOLTAGE_RESULT:
+@@ -220,7 +222,7 @@ static int tmp51x_get_value(struct tmp51x_data *data, u8 reg, u8 pos,
+ break;
+ case TMP51X_BUS_CURRENT_RESULT:
+ // Current = (ShuntVoltage * CalibrationRegister) / 4096
+- *val = sign_extend32(regval, 16) * data->curr_lsb_ua;
++ *val = sign_extend32(regval, 15) * (long)data->curr_lsb_ua;
+ *val = DIV_ROUND_CLOSEST(*val, MILLI);
+ break;
+ case TMP51X_LOCAL_TEMP_RESULT:
+@@ -232,7 +234,7 @@ static int tmp51x_get_value(struct tmp51x_data *data, u8 reg, u8 pos,
+ case TMP51X_REMOTE_TEMP_LIMIT_2:
+ case TMP513_REMOTE_TEMP_LIMIT_3:
+ // 1lsb = 0.0625 degrees centigrade
+- *val = sign_extend32(regval, 16) >> TMP51X_TEMP_SHIFT;
++ *val = sign_extend32(regval, 15) >> TMP51X_TEMP_SHIFT;
+ *val = DIV_ROUND_CLOSEST(*val * 625, 10);
+ break;
+ case TMP51X_N_FACTOR_AND_HYST_1:
+diff --git a/drivers/i2c/busses/i2c-pnx.c b/drivers/i2c/busses/i2c-pnx.c
+index 1dafadda73af3a..135300f3b53428 100644
+--- a/drivers/i2c/busses/i2c-pnx.c
++++ b/drivers/i2c/busses/i2c-pnx.c
+@@ -95,7 +95,7 @@ enum {
+
+ static inline int wait_timeout(struct i2c_pnx_algo_data *data)
+ {
+- long timeout = data->timeout;
++ long timeout = jiffies_to_msecs(data->timeout);
+ while (timeout > 0 &&
+ (ioread32(I2C_REG_STS(data)) & mstatus_active)) {
+ mdelay(1);
+@@ -106,7 +106,7 @@ static inline int wait_timeout(struct i2c_pnx_algo_data *data)
+
+ static inline int wait_reset(struct i2c_pnx_algo_data *data)
+ {
+- long timeout = data->timeout;
++ long timeout = jiffies_to_msecs(data->timeout);
+ while (timeout > 0 &&
+ (ioread32(I2C_REG_CTL(data)) & mcntrl_reset)) {
+ mdelay(1);
+diff --git a/drivers/i2c/busses/i2c-riic.c b/drivers/i2c/busses/i2c-riic.c
+index c7f3a4c0247023..2c982199782f9b 100644
+--- a/drivers/i2c/busses/i2c-riic.c
++++ b/drivers/i2c/busses/i2c-riic.c
+@@ -352,7 +352,7 @@ static int riic_init_hw(struct riic_dev *riic)
+ if (brl <= (0x1F + 3))
+ break;
+
+- total_ticks /= 2;
++ total_ticks = DIV_ROUND_UP(total_ticks, 2);
+ rate /= 2;
+ }
+
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 8b6159f4cdafa4..b0bfb61539c202 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -161,7 +161,22 @@ static bool cpus_have_group0 __ro_after_init;
+
+ static void __init gic_prio_init(void)
+ {
+- cpus_have_security_disabled = gic_dist_security_disabled();
++ bool ds;
++
++ ds = gic_dist_security_disabled();
++ if (!ds) {
++ u32 val;
++
++ val = readl_relaxed(gic_data.dist_base + GICD_CTLR);
++ val |= GICD_CTLR_DS;
++ writel_relaxed(val, gic_data.dist_base + GICD_CTLR);
++
++ ds = gic_dist_security_disabled();
++ if (ds)
++ pr_warn("Broken GIC integration, security disabled");
++ }
++
++ cpus_have_security_disabled = ds;
+ cpus_have_group0 = gic_has_group0();
+
+ /*
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 813bc20cfb5a6c..6e62415de2e5ec 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -2924,6 +2924,7 @@ static int msdc_drv_probe(struct platform_device *pdev)
+ msdc_gate_clock(host);
+ platform_set_drvdata(pdev, NULL);
+ release_mem:
++ device_init_wakeup(&pdev->dev, false);
+ if (host->dma.gpd)
+ dma_free_coherent(&pdev->dev,
+ 2 * sizeof(struct mt_gpdma_desc),
+@@ -2957,6 +2958,7 @@ static void msdc_drv_remove(struct platform_device *pdev)
+ host->dma.gpd, host->dma.gpd_addr);
+ dma_free_coherent(&pdev->dev, MAX_BD_NUM * sizeof(struct mt_bdma_desc),
+ host->dma.bd, host->dma.bd_addr);
++ device_init_wakeup(&pdev->dev, false);
+ }
+
+ static void msdc_save_reg(struct msdc_host *host)
+diff --git a/drivers/mmc/host/sdhci-tegra.c b/drivers/mmc/host/sdhci-tegra.c
+index 1ad0a6b3a2eb77..7b6b82bec8556c 100644
+--- a/drivers/mmc/host/sdhci-tegra.c
++++ b/drivers/mmc/host/sdhci-tegra.c
+@@ -1525,7 +1525,6 @@ static const struct sdhci_pltfm_data sdhci_tegra186_pdata = {
+ .quirks = SDHCI_QUIRK_BROKEN_TIMEOUT_VAL |
+ SDHCI_QUIRK_SINGLE_POWER_WRITE |
+ SDHCI_QUIRK_NO_HISPD_BIT |
+- SDHCI_QUIRK_BROKEN_ADMA_ZEROLEN_DESC |
+ SDHCI_QUIRK_CAP_CLOCK_BASE_BROKEN,
+ .quirks2 = SDHCI_QUIRK2_PRESET_VALUE_BROKEN |
+ SDHCI_QUIRK2_ISSUE_CMD_DAT_RESET_TOGETHER,
+diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
+index 533bcb77c9f934..97cd8bbf2e32a9 100644
+--- a/drivers/net/can/m_can/m_can.c
++++ b/drivers/net/can/m_can/m_can.c
+@@ -1220,20 +1220,32 @@ static void m_can_coalescing_update(struct m_can_classdev *cdev, u32 ir)
+ static int m_can_interrupt_handler(struct m_can_classdev *cdev)
+ {
+ struct net_device *dev = cdev->net;
+- u32 ir;
++ u32 ir = 0, ir_read;
+ int ret;
+
+ if (pm_runtime_suspended(cdev->dev))
+ return IRQ_NONE;
+
+- ir = m_can_read(cdev, M_CAN_IR);
++ /* The m_can controller signals its interrupt status as a level, but
++ * depending on the integration the CPU may interpret the signal as
++ * edge-triggered (for example with m_can_pci). For these
++ * edge-triggered integrations, we must observe that IR is 0 at least
++ * once to be sure that the next interrupt will generate an edge.
++ */
++ while ((ir_read = m_can_read(cdev, M_CAN_IR)) != 0) {
++ ir |= ir_read;
++
++ /* ACK all irqs */
++ m_can_write(cdev, M_CAN_IR, ir);
++
++ if (!cdev->irq_edge_triggered)
++ break;
++ }
++
+ m_can_coalescing_update(cdev, ir);
+ if (!ir)
+ return IRQ_NONE;
+
+- /* ACK all irqs */
+- m_can_write(cdev, M_CAN_IR, ir);
+-
+ if (cdev->ops->clear_interrupts)
+ cdev->ops->clear_interrupts(cdev);
+
+@@ -1695,6 +1707,14 @@ static int m_can_dev_setup(struct m_can_classdev *cdev)
+ return -EINVAL;
+ }
+
++ /* Write the INIT bit, in case no hardware reset has happened before
++ * the probe (for example, it was observed that the Intel Elkhart Lake
++ * SoCs do not properly reset the CAN controllers on reboot)
++ */
++ err = m_can_cccr_update_bits(cdev, CCCR_INIT, CCCR_INIT);
++ if (err)
++ return err;
++
+ if (!cdev->is_peripheral)
+ netif_napi_add(dev, &cdev->napi, m_can_poll);
+
+@@ -1746,11 +1766,7 @@ static int m_can_dev_setup(struct m_can_classdev *cdev)
+ return -EINVAL;
+ }
+
+- /* Forcing standby mode should be redundant, as the chip should be in
+- * standby after a reset. Write the INIT bit anyways, should the chip
+- * be configured by previous stage.
+- */
+- return m_can_cccr_update_bits(cdev, CCCR_INIT, CCCR_INIT);
++ return 0;
+ }
+
+ static void m_can_stop(struct net_device *dev)
+diff --git a/drivers/net/can/m_can/m_can.h b/drivers/net/can/m_can/m_can.h
+index 92b2bd8628e6b3..ef39e8e527ab67 100644
+--- a/drivers/net/can/m_can/m_can.h
++++ b/drivers/net/can/m_can/m_can.h
+@@ -99,6 +99,7 @@ struct m_can_classdev {
+ int pm_clock_support;
+ int pm_wake_source;
+ int is_peripheral;
++ bool irq_edge_triggered;
+
+ // Cached M_CAN_IE register content
+ u32 active_interrupts;
+diff --git a/drivers/net/can/m_can/m_can_pci.c b/drivers/net/can/m_can/m_can_pci.c
+index d72fe771dfc7aa..9ad7419f88f830 100644
+--- a/drivers/net/can/m_can/m_can_pci.c
++++ b/drivers/net/can/m_can/m_can_pci.c
+@@ -127,6 +127,7 @@ static int m_can_pci_probe(struct pci_dev *pci, const struct pci_device_id *id)
+ mcan_class->pm_clock_support = 1;
+ mcan_class->pm_wake_source = 0;
+ mcan_class->can.clock.freq = id->driver_data;
++ mcan_class->irq_edge_triggered = true;
+ mcan_class->ops = &m_can_pci_ops;
+
+ pci_set_drvdata(pci, mcan_class);
+diff --git a/drivers/net/ethernet/broadcom/bgmac-platform.c b/drivers/net/ethernet/broadcom/bgmac-platform.c
+index 77425c7a32dbf8..78f7862ca00669 100644
+--- a/drivers/net/ethernet/broadcom/bgmac-platform.c
++++ b/drivers/net/ethernet/broadcom/bgmac-platform.c
+@@ -171,6 +171,7 @@ static int platform_phy_connect(struct bgmac *bgmac)
+ static int bgmac_probe(struct platform_device *pdev)
+ {
+ struct device_node *np = pdev->dev.of_node;
++ struct device_node *phy_node;
+ struct bgmac *bgmac;
+ struct resource *regs;
+ int ret;
+@@ -236,7 +237,9 @@ static int bgmac_probe(struct platform_device *pdev)
+ bgmac->cco_ctl_maskset = platform_bgmac_cco_ctl_maskset;
+ bgmac->get_bus_clock = platform_bgmac_get_bus_clock;
+ bgmac->cmn_maskset32 = platform_bgmac_cmn_maskset32;
+- if (of_parse_phandle(np, "phy-handle", 0)) {
++ phy_node = of_parse_phandle(np, "phy-handle", 0);
++ if (phy_node) {
++ of_node_put(phy_node);
+ bgmac->phy_connect = platform_phy_connect;
+ } else {
+ bgmac->phy_connect = bgmac_phy_connect_direct;
+diff --git a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
+index 455a54708be440..a83e7d3c2485bd 100644
+--- a/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
++++ b/drivers/net/ethernet/chelsio/inline_crypto/chtls/chtls_main.c
+@@ -346,8 +346,9 @@ static struct sk_buff *copy_gl_to_skb_pkt(const struct pkt_gl *gl,
+ * driver. Once driver synthesizes cpl_pass_accpet_req the skb will go
+ * through the regular cpl_pass_accept_req processing in TOM.
+ */
+- skb = alloc_skb(gl->tot_len + sizeof(struct cpl_pass_accept_req)
+- - pktshift, GFP_ATOMIC);
++ skb = alloc_skb(size_add(gl->tot_len,
++ sizeof(struct cpl_pass_accept_req)) -
++ pktshift, GFP_ATOMIC);
+ if (unlikely(!skb))
+ return NULL;
+ __skb_put(skb, gl->tot_len + sizeof(struct cpl_pass_accept_req)
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+index 890f213da8d180..ae1f523d6841b5 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+@@ -172,6 +172,7 @@ static int create_txqs(struct hinic_dev *nic_dev)
+ hinic_sq_dbgfs_uninit(nic_dev);
+
+ devm_kfree(&netdev->dev, nic_dev->txqs);
++ nic_dev->txqs = NULL;
+ return err;
+ }
+
+@@ -268,6 +269,7 @@ static int create_rxqs(struct hinic_dev *nic_dev)
+ hinic_rq_dbgfs_uninit(nic_dev);
+
+ devm_kfree(&netdev->dev, nic_dev->rxqs);
++ nic_dev->rxqs = NULL;
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 3d72aa7b130503..ef93df52088710 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1432,7 +1432,7 @@ void ocelot_ifh_set_basic(void *ifh, struct ocelot *ocelot, int port,
+
+ memset(ifh, 0, OCELOT_TAG_LEN);
+ ocelot_ifh_set_bypass(ifh, 1);
+- ocelot_ifh_set_src(ifh, BIT_ULL(ocelot->num_phys_ports));
++ ocelot_ifh_set_src(ifh, ocelot->num_phys_ports);
+ ocelot_ifh_set_dest(ifh, BIT_ULL(port));
+ ocelot_ifh_set_qos_class(ifh, qos_class);
+ ocelot_ifh_set_tag_type(ifh, tag_type);
+diff --git a/drivers/net/ethernet/oa_tc6.c b/drivers/net/ethernet/oa_tc6.c
+index f9c0dcd965c2e7..db200e4ec284d7 100644
+--- a/drivers/net/ethernet/oa_tc6.c
++++ b/drivers/net/ethernet/oa_tc6.c
+@@ -113,6 +113,7 @@ struct oa_tc6 {
+ struct mii_bus *mdiobus;
+ struct spi_device *spi;
+ struct mutex spi_ctrl_lock; /* Protects spi control transfer */
++ spinlock_t tx_skb_lock; /* Protects tx skb handling */
+ void *spi_ctrl_tx_buf;
+ void *spi_ctrl_rx_buf;
+ void *spi_data_tx_buf;
+@@ -1004,8 +1005,10 @@ static u16 oa_tc6_prepare_spi_tx_buf_for_tx_skbs(struct oa_tc6 *tc6)
+ for (used_tx_credits = 0; used_tx_credits < tc6->tx_credits;
+ used_tx_credits++) {
+ if (!tc6->ongoing_tx_skb) {
++ spin_lock_bh(&tc6->tx_skb_lock);
+ tc6->ongoing_tx_skb = tc6->waiting_tx_skb;
+ tc6->waiting_tx_skb = NULL;
++ spin_unlock_bh(&tc6->tx_skb_lock);
+ }
+ if (!tc6->ongoing_tx_skb)
+ break;
+@@ -1111,8 +1114,9 @@ static int oa_tc6_spi_thread_handler(void *data)
+ /* This kthread will be waken up if there is a tx skb or mac-phy
+ * interrupt to perform spi transfer with tx chunks.
+ */
+- wait_event_interruptible(tc6->spi_wq, tc6->waiting_tx_skb ||
+- tc6->int_flag ||
++ wait_event_interruptible(tc6->spi_wq, tc6->int_flag ||
++ (tc6->waiting_tx_skb &&
++ tc6->tx_credits) ||
+ kthread_should_stop());
+
+ if (kthread_should_stop())
+@@ -1209,7 +1213,9 @@ netdev_tx_t oa_tc6_start_xmit(struct oa_tc6 *tc6, struct sk_buff *skb)
+ return NETDEV_TX_OK;
+ }
+
++ spin_lock_bh(&tc6->tx_skb_lock);
+ tc6->waiting_tx_skb = skb;
++ spin_unlock_bh(&tc6->tx_skb_lock);
+
+ /* Wake spi kthread to perform spi transfer */
+ wake_up_interruptible(&tc6->spi_wq);
+@@ -1239,6 +1245,7 @@ struct oa_tc6 *oa_tc6_init(struct spi_device *spi, struct net_device *netdev)
+ tc6->netdev = netdev;
+ SET_NETDEV_DEV(netdev, &spi->dev);
+ mutex_init(&tc6->spi_ctrl_lock);
++ spin_lock_init(&tc6->tx_skb_lock);
+
+ /* Set the SPI controller to pump at realtime priority */
+ tc6->spi->rt = true;
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_dev.c b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+index 9e42d599840ded..57edcde9e6f8c6 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_dev.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_dev.c
+@@ -277,7 +277,10 @@ void ionic_dev_teardown(struct ionic *ionic)
+ idev->phy_cmb_pages = 0;
+ idev->cmb_npages = 0;
+
+- destroy_workqueue(ionic->wq);
++ if (ionic->wq) {
++ destroy_workqueue(ionic->wq);
++ ionic->wq = NULL;
++ }
+ mutex_destroy(&idev->cmb_inuse_lock);
+ }
+
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+index dda22fa4448cff..9b7f78b6cdb1e3 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_ethtool.c
+@@ -961,8 +961,8 @@ static int ionic_get_module_eeprom(struct net_device *netdev,
+ len = min_t(u32, sizeof(xcvr->sprom), ee->len);
+
+ do {
+- memcpy(data, xcvr->sprom, len);
+- memcpy(tbuf, xcvr->sprom, len);
++ memcpy(data, &xcvr->sprom[ee->offset], len);
++ memcpy(tbuf, &xcvr->sprom[ee->offset], len);
+
+ /* Let's make sure we got a consistent copy */
+ if (!memcmp(data, tbuf, len))
+diff --git a/drivers/net/ethernet/pensando/ionic/ionic_lif.c b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+index 40496587b2b318..3d3f936779f7d9 100644
+--- a/drivers/net/ethernet/pensando/ionic/ionic_lif.c
++++ b/drivers/net/ethernet/pensando/ionic/ionic_lif.c
+@@ -3869,8 +3869,8 @@ int ionic_lif_register(struct ionic_lif *lif)
+ /* only register LIF0 for now */
+ err = register_netdev(lif->netdev);
+ if (err) {
+- dev_err(lif->ionic->dev, "Cannot register net device, aborting\n");
+- ionic_lif_unregister_phc(lif);
++ dev_err(lif->ionic->dev, "Cannot register net device: %d, aborting\n", err);
++ ionic_lif_unregister(lif);
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/renesas/rswitch.c b/drivers/net/ethernet/renesas/rswitch.c
+index 09117110e3dd2a..f86fcecb91a8bd 100644
+--- a/drivers/net/ethernet/renesas/rswitch.c
++++ b/drivers/net/ethernet/renesas/rswitch.c
+@@ -547,7 +547,6 @@ static int rswitch_gwca_ts_queue_alloc(struct rswitch_private *priv)
+ desc = &gq->ts_ring[gq->ring_size];
+ desc->desc.die_dt = DT_LINKFIX;
+ rswitch_desc_set_dptr(&desc->desc, gq->ring_dma);
+- INIT_LIST_HEAD(&priv->gwca.ts_info_list);
+
+ return 0;
+ }
+@@ -1003,9 +1002,10 @@ static int rswitch_gwca_request_irqs(struct rswitch_private *priv)
+ static void rswitch_ts(struct rswitch_private *priv)
+ {
+ struct rswitch_gwca_queue *gq = &priv->gwca.ts_queue;
+- struct rswitch_gwca_ts_info *ts_info, *ts_info2;
+ struct skb_shared_hwtstamps shhwtstamps;
+ struct rswitch_ts_desc *desc;
++ struct rswitch_device *rdev;
++ struct sk_buff *ts_skb;
+ struct timespec64 ts;
+ unsigned int num;
+ u32 tag, port;
+@@ -1015,23 +1015,28 @@ static void rswitch_ts(struct rswitch_private *priv)
+ dma_rmb();
+
+ port = TS_DESC_DPN(__le32_to_cpu(desc->desc.dptrl));
+- tag = TS_DESC_TSUN(__le32_to_cpu(desc->desc.dptrl));
+-
+- list_for_each_entry_safe(ts_info, ts_info2, &priv->gwca.ts_info_list, list) {
+- if (!(ts_info->port == port && ts_info->tag == tag))
+- continue;
+-
+- memset(&shhwtstamps, 0, sizeof(shhwtstamps));
+- ts.tv_sec = __le32_to_cpu(desc->ts_sec);
+- ts.tv_nsec = __le32_to_cpu(desc->ts_nsec & cpu_to_le32(0x3fffffff));
+- shhwtstamps.hwtstamp = timespec64_to_ktime(ts);
+- skb_tstamp_tx(ts_info->skb, &shhwtstamps);
+- dev_consume_skb_irq(ts_info->skb);
+- list_del(&ts_info->list);
+- kfree(ts_info);
+- break;
+- }
++ if (unlikely(port >= RSWITCH_NUM_PORTS))
++ goto next;
++ rdev = priv->rdev[port];
+
++ tag = TS_DESC_TSUN(__le32_to_cpu(desc->desc.dptrl));
++ if (unlikely(tag >= TS_TAGS_PER_PORT))
++ goto next;
++ ts_skb = xchg(&rdev->ts_skb[tag], NULL);
++ smp_mb(); /* order rdev->ts_skb[] read before bitmap update */
++ clear_bit(tag, rdev->ts_skb_used);
++
++ if (unlikely(!ts_skb))
++ goto next;
++
++ memset(&shhwtstamps, 0, sizeof(shhwtstamps));
++ ts.tv_sec = __le32_to_cpu(desc->ts_sec);
++ ts.tv_nsec = __le32_to_cpu(desc->ts_nsec & cpu_to_le32(0x3fffffff));
++ shhwtstamps.hwtstamp = timespec64_to_ktime(ts);
++ skb_tstamp_tx(ts_skb, &shhwtstamps);
++ dev_consume_skb_irq(ts_skb);
++
++next:
+ gq->cur = rswitch_next_queue_index(gq, true, 1);
+ desc = &gq->ts_ring[gq->cur];
+ }
+@@ -1576,8 +1581,9 @@ static int rswitch_open(struct net_device *ndev)
+ static int rswitch_stop(struct net_device *ndev)
+ {
+ struct rswitch_device *rdev = netdev_priv(ndev);
+- struct rswitch_gwca_ts_info *ts_info, *ts_info2;
++ struct sk_buff *ts_skb;
+ unsigned long flags;
++ unsigned int tag;
+
+ netif_tx_stop_all_queues(ndev);
+
+@@ -1594,12 +1600,13 @@ static int rswitch_stop(struct net_device *ndev)
+ if (bitmap_empty(rdev->priv->opened_ports, RSWITCH_NUM_PORTS))
+ iowrite32(GWCA_TS_IRQ_BIT, rdev->priv->addr + GWTSDID);
+
+- list_for_each_entry_safe(ts_info, ts_info2, &rdev->priv->gwca.ts_info_list, list) {
+- if (ts_info->port != rdev->port)
+- continue;
+- dev_kfree_skb_irq(ts_info->skb);
+- list_del(&ts_info->list);
+- kfree(ts_info);
++ for (tag = find_first_bit(rdev->ts_skb_used, TS_TAGS_PER_PORT);
++ tag < TS_TAGS_PER_PORT;
++ tag = find_next_bit(rdev->ts_skb_used, TS_TAGS_PER_PORT, tag + 1)) {
++ ts_skb = xchg(&rdev->ts_skb[tag], NULL);
++ clear_bit(tag, rdev->ts_skb_used);
++ if (ts_skb)
++ dev_kfree_skb(ts_skb);
+ }
+
+ return 0;
+@@ -1612,20 +1619,17 @@ static bool rswitch_ext_desc_set_info1(struct rswitch_device *rdev,
+ desc->info1 = cpu_to_le64(INFO1_DV(BIT(rdev->etha->index)) |
+ INFO1_IPV(GWCA_IPV_NUM) | INFO1_FMT);
+ if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
+- struct rswitch_gwca_ts_info *ts_info;
++ unsigned int tag;
+
+- ts_info = kzalloc(sizeof(*ts_info), GFP_ATOMIC);
+- if (!ts_info)
++ tag = find_first_zero_bit(rdev->ts_skb_used, TS_TAGS_PER_PORT);
++ if (tag == TS_TAGS_PER_PORT)
+ return false;
++ smp_mb(); /* order bitmap read before rdev->ts_skb[] write */
++ rdev->ts_skb[tag] = skb_get(skb);
++ set_bit(tag, rdev->ts_skb_used);
+
+ skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+- rdev->ts_tag++;
+- desc->info1 |= cpu_to_le64(INFO1_TSUN(rdev->ts_tag) | INFO1_TXC);
+-
+- ts_info->skb = skb_get(skb);
+- ts_info->port = rdev->port;
+- ts_info->tag = rdev->ts_tag;
+- list_add_tail(&ts_info->list, &rdev->priv->gwca.ts_info_list);
++ desc->info1 |= cpu_to_le64(INFO1_TSUN(tag) | INFO1_TXC);
+
+ skb_tx_timestamp(skb);
+ }
+diff --git a/drivers/net/ethernet/renesas/rswitch.h b/drivers/net/ethernet/renesas/rswitch.h
+index e020800dcc570e..d8d4ed7d7f8b6a 100644
+--- a/drivers/net/ethernet/renesas/rswitch.h
++++ b/drivers/net/ethernet/renesas/rswitch.h
+@@ -972,14 +972,6 @@ struct rswitch_gwca_queue {
+ };
+ };
+
+-struct rswitch_gwca_ts_info {
+- struct sk_buff *skb;
+- struct list_head list;
+-
+- int port;
+- u8 tag;
+-};
+-
+ #define RSWITCH_NUM_IRQ_REGS (RSWITCH_MAX_NUM_QUEUES / BITS_PER_TYPE(u32))
+ struct rswitch_gwca {
+ unsigned int index;
+@@ -989,7 +981,6 @@ struct rswitch_gwca {
+ struct rswitch_gwca_queue *queues;
+ int num_queues;
+ struct rswitch_gwca_queue ts_queue;
+- struct list_head ts_info_list;
+ DECLARE_BITMAP(used, RSWITCH_MAX_NUM_QUEUES);
+ u32 tx_irq_bits[RSWITCH_NUM_IRQ_REGS];
+ u32 rx_irq_bits[RSWITCH_NUM_IRQ_REGS];
+@@ -997,6 +988,7 @@ struct rswitch_gwca {
+ };
+
+ #define NUM_QUEUES_PER_NDEV 2
++#define TS_TAGS_PER_PORT 256
+ struct rswitch_device {
+ struct rswitch_private *priv;
+ struct net_device *ndev;
+@@ -1004,7 +996,8 @@ struct rswitch_device {
+ void __iomem *addr;
+ struct rswitch_gwca_queue *tx_queue;
+ struct rswitch_gwca_queue *rx_queue;
+- u8 ts_tag;
++ struct sk_buff *ts_skb[TS_TAGS_PER_PORT];
++ DECLARE_BITMAP(ts_skb_used, TS_TAGS_PER_PORT);
+ bool disabled;
+
+ int port;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 766213ee82c16e..cf7b59b8cc64b3 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -4220,8 +4220,8 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+ struct stmmac_txq_stats *txq_stats;
+ struct stmmac_tx_queue *tx_q;
+ u32 pay_len, mss, queue;
++ dma_addr_t tso_des, des;
+ u8 proto_hdr_len, hdr;
+- dma_addr_t des;
+ bool set_ic;
+ int i;
+
+@@ -4317,14 +4317,15 @@ static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+
+ /* If needed take extra descriptors to fill the remaining payload */
+ tmp_pay_len = pay_len - TSO_MAX_BUFF_SIZE;
++ tso_des = des;
+ } else {
+ stmmac_set_desc_addr(priv, first, des);
+ tmp_pay_len = pay_len;
+- des += proto_hdr_len;
++ tso_des = des + proto_hdr_len;
+ pay_len = 0;
+ }
+
+- stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0), queue);
++ stmmac_tso_allocator(priv, tso_des, tmp_pay_len, (nfrags == 0), queue);
+
+ /* In case two or more DMA transmit descriptors are allocated for this
+ * non-paged SKB data, the DMA buffer address should be saved to
+diff --git a/drivers/net/mdio/fwnode_mdio.c b/drivers/net/mdio/fwnode_mdio.c
+index b156493d708415..aea0f03575689a 100644
+--- a/drivers/net/mdio/fwnode_mdio.c
++++ b/drivers/net/mdio/fwnode_mdio.c
+@@ -40,6 +40,7 @@ fwnode_find_pse_control(struct fwnode_handle *fwnode)
+ static struct mii_timestamper *
+ fwnode_find_mii_timestamper(struct fwnode_handle *fwnode)
+ {
++ struct mii_timestamper *mii_ts;
+ struct of_phandle_args arg;
+ int err;
+
+@@ -53,10 +54,16 @@ fwnode_find_mii_timestamper(struct fwnode_handle *fwnode)
+ else if (err)
+ return ERR_PTR(err);
+
+- if (arg.args_count != 1)
+- return ERR_PTR(-EINVAL);
++ if (arg.args_count != 1) {
++ mii_ts = ERR_PTR(-EINVAL);
++ goto put_node;
++ }
++
++ mii_ts = register_mii_timestamper(arg.np, arg.args[0]);
+
+- return register_mii_timestamper(arg.np, arg.args[0]);
++put_node:
++ of_node_put(arg.np);
++ return mii_ts;
+ }
+
+ int fwnode_mdiobus_phy_device_register(struct mii_bus *mdio,
+diff --git a/drivers/net/netdevsim/health.c b/drivers/net/netdevsim/health.c
+index 70e8bdf34be900..688f05316b5e10 100644
+--- a/drivers/net/netdevsim/health.c
++++ b/drivers/net/netdevsim/health.c
+@@ -149,6 +149,8 @@ static ssize_t nsim_dev_health_break_write(struct file *file,
+ char *break_msg;
+ int err;
+
++ if (count == 0 || count > PAGE_SIZE)
++ return -EINVAL;
+ break_msg = memdup_user_nul(data, count);
+ if (IS_ERR(break_msg))
+ return PTR_ERR(break_msg);
+diff --git a/drivers/net/netdevsim/netdev.c b/drivers/net/netdevsim/netdev.c
+index 017a6102be0a22..1b29d1d794a201 100644
+--- a/drivers/net/netdevsim/netdev.c
++++ b/drivers/net/netdevsim/netdev.c
+@@ -596,10 +596,10 @@ nsim_pp_hold_write(struct file *file, const char __user *data,
+ page_pool_put_full_page(ns->page->pp, ns->page, false);
+ ns->page = NULL;
+ }
+- rtnl_unlock();
+
+ exit:
+- return count;
++ rtnl_unlock();
++ return ret;
+ }
+
+ static const struct file_operations nsim_pp_hold_fops = {
+diff --git a/drivers/net/team/team_core.c b/drivers/net/team/team_core.c
+index 6ace5a74cddb57..1c85dda83825d8 100644
+--- a/drivers/net/team/team_core.c
++++ b/drivers/net/team/team_core.c
+@@ -998,9 +998,13 @@ static void __team_compute_features(struct team *team)
+ unsigned int dst_release_flag = IFF_XMIT_DST_RELEASE |
+ IFF_XMIT_DST_RELEASE_PERM;
+
++ rcu_read_lock();
++ if (list_empty(&team->port_list))
++ goto done;
++
+ vlan_features = netdev_base_features(vlan_features);
++ enc_features = netdev_base_features(enc_features);
+
+- rcu_read_lock();
+ list_for_each_entry_rcu(port, &team->port_list, list) {
+ vlan_features = netdev_increment_features(vlan_features,
+ port->dev->vlan_features,
+@@ -1010,11 +1014,11 @@ static void __team_compute_features(struct team *team)
+ port->dev->hw_enc_features,
+ TEAM_ENC_FEATURES);
+
+-
+ dst_release_flag &= port->dev->priv_flags;
+ if (port->dev->hard_header_len > max_hard_header_len)
+ max_hard_header_len = port->dev->hard_header_len;
+ }
++done:
+ rcu_read_unlock();
+
+ team->dev->vlan_features = vlan_features;
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 9a0f6eb3201661..03fe9e3ee7af15 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1481,7 +1481,7 @@ static struct sk_buff *tun_napi_alloc_frags(struct tun_file *tfile,
+ skb->truesize += skb->data_len;
+
+ for (i = 1; i < it->nr_segs; i++) {
+- const struct iovec *iov = iter_iov(it);
++ const struct iovec *iov = iter_iov(it) + i;
+ size_t fragsz = iov->iov_len;
+ struct page *page;
+ void *frag;
+diff --git a/drivers/of/address.c b/drivers/of/address.c
+index 286f0c161e332f..a565b8c91da593 100644
+--- a/drivers/of/address.c
++++ b/drivers/of/address.c
+@@ -455,7 +455,8 @@ static int of_translate_one(struct device_node *parent, struct of_bus *bus,
+ }
+ if (ranges == NULL || rlen == 0) {
+ offset = of_read_number(addr, na);
+- memset(addr, 0, pna * 4);
++ /* set address to zero, pass flags through */
++ memset(addr + pbus->flag_cells, 0, (pna - pbus->flag_cells) * 4);
+ pr_debug("empty ranges; 1:1 translation\n");
+ goto finish;
+ }
+@@ -615,7 +616,7 @@ struct device_node *__of_get_dma_parent(const struct device_node *np)
+ if (ret < 0)
+ return of_get_parent(np);
+
+- return of_node_get(args.np);
++ return args.np;
+ }
+ #endif
+
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 20603d3c9931b8..63161d0f72b4e8 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -1455,8 +1455,10 @@ int of_parse_phandle_with_args_map(const struct device_node *np,
+ map_len--;
+
+ /* Check if not found */
+- if (!new)
++ if (!new) {
++ ret = -EINVAL;
+ goto put;
++ }
+
+ if (!of_device_is_available(new))
+ match = 0;
+@@ -1466,17 +1468,20 @@ int of_parse_phandle_with_args_map(const struct device_node *np,
+ goto put;
+
+ /* Check for malformed properties */
+- if (WARN_ON(new_size > MAX_PHANDLE_ARGS))
+- goto put;
+- if (map_len < new_size)
++ if (WARN_ON(new_size > MAX_PHANDLE_ARGS) ||
++ map_len < new_size) {
++ ret = -EINVAL;
+ goto put;
++ }
+
+ /* Move forward by new node's #<list>-cells amount */
+ map += new_size;
+ map_len -= new_size;
+ }
+- if (!match)
++ if (!match) {
++ ret = -ENOENT;
+ goto put;
++ }
+
+ /* Get the <list>-map-pass-thru property (optional) */
+ pass = of_get_property(cur, pass_name, NULL);
+diff --git a/drivers/of/irq.c b/drivers/of/irq.c
+index a494f56a0d0ee4..1fb329c0a55b8c 100644
+--- a/drivers/of/irq.c
++++ b/drivers/of/irq.c
+@@ -111,6 +111,7 @@ const __be32 *of_irq_parse_imap_parent(const __be32 *imap, int len, struct of_ph
+ else
+ np = of_find_node_by_phandle(be32_to_cpup(imap));
+ imap++;
++ len--;
+
+ /* Check if not found */
+ if (!np) {
+@@ -354,6 +355,7 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar
+ return of_irq_parse_oldworld(device, index, out_irq);
+
+ /* Get the reg property (if any) */
++ addr_len = 0;
+ addr = of_get_property(device, "reg", &addr_len);
+
+ /* Prevent out-of-bounds read in case of longer interrupt parent address size */
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index 11b922fde7af16..7bd8390f2fba5e 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -1213,7 +1213,6 @@ DEFINE_SIMPLE_PROP(iommus, "iommus", "#iommu-cells")
+ DEFINE_SIMPLE_PROP(mboxes, "mboxes", "#mbox-cells")
+ DEFINE_SIMPLE_PROP(io_channels, "io-channels", "#io-channel-cells")
+ DEFINE_SIMPLE_PROP(io_backends, "io-backends", "#io-backend-cells")
+-DEFINE_SIMPLE_PROP(interrupt_parent, "interrupt-parent", NULL)
+ DEFINE_SIMPLE_PROP(dmas, "dmas", "#dma-cells")
+ DEFINE_SIMPLE_PROP(power_domains, "power-domains", "#power-domain-cells")
+ DEFINE_SIMPLE_PROP(hwlocks, "hwlocks", "#hwlock-cells")
+@@ -1359,7 +1358,6 @@ static const struct supplier_bindings of_supplier_bindings[] = {
+ { .parse_prop = parse_mboxes, },
+ { .parse_prop = parse_io_channels, },
+ { .parse_prop = parse_io_backends, },
+- { .parse_prop = parse_interrupt_parent, },
+ { .parse_prop = parse_dmas, .optional = true, },
+ { .parse_prop = parse_power_domains, },
+ { .parse_prop = parse_hwlocks, },
+diff --git a/drivers/platform/x86/p2sb.c b/drivers/platform/x86/p2sb.c
+index 31f38309b389ab..c56650b9ff9628 100644
+--- a/drivers/platform/x86/p2sb.c
++++ b/drivers/platform/x86/p2sb.c
+@@ -42,6 +42,7 @@ struct p2sb_res_cache {
+ };
+
+ static struct p2sb_res_cache p2sb_resources[NR_P2SB_RES_CACHE];
++static bool p2sb_hidden_by_bios;
+
+ static void p2sb_get_devfn(unsigned int *devfn)
+ {
+@@ -96,6 +97,12 @@ static void p2sb_scan_and_cache_devfn(struct pci_bus *bus, unsigned int devfn)
+
+ static int p2sb_scan_and_cache(struct pci_bus *bus, unsigned int devfn)
+ {
++ /*
++ * The BIOS prevents the P2SB device from being enumerated by the PCI
++ * subsystem, so we need to unhide and hide it back to look up the BAR.
++ */
++ pci_bus_write_config_dword(bus, devfn, P2SBC, 0);
++
+ /* Scan the P2SB device and cache its BAR0 */
+ p2sb_scan_and_cache_devfn(bus, devfn);
+
+@@ -103,6 +110,8 @@ static int p2sb_scan_and_cache(struct pci_bus *bus, unsigned int devfn)
+ if (devfn == P2SB_DEVFN_GOLDMONT)
+ p2sb_scan_and_cache_devfn(bus, SPI_DEVFN_GOLDMONT);
+
++ pci_bus_write_config_dword(bus, devfn, P2SBC, P2SBC_HIDE);
++
+ if (!p2sb_valid_resource(&p2sb_resources[PCI_FUNC(devfn)].res))
+ return -ENOENT;
+
+@@ -128,7 +137,7 @@ static int p2sb_cache_resources(void)
+ u32 value = P2SBC_HIDE;
+ struct pci_bus *bus;
+ u16 class;
+- int ret;
++ int ret = 0;
+
+ /* Get devfn for P2SB device itself */
+ p2sb_get_devfn(&devfn_p2sb);
+@@ -151,22 +160,53 @@ static int p2sb_cache_resources(void)
+ */
+ pci_lock_rescan_remove();
+
++ pci_bus_read_config_dword(bus, devfn_p2sb, P2SBC, &value);
++ p2sb_hidden_by_bios = value & P2SBC_HIDE;
++
+ /*
+- * The BIOS prevents the P2SB device from being enumerated by the PCI
+- * subsystem, so we need to unhide and hide it back to lookup the BAR.
+- * Unhide the P2SB device here, if needed.
++ * If the BIOS does not hide the P2SB device then its resources
++ * are accessible. Cache them only if the P2SB device is hidden.
+ */
+- pci_bus_read_config_dword(bus, devfn_p2sb, P2SBC, &value);
+- if (value & P2SBC_HIDE)
+- pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, 0);
++ if (p2sb_hidden_by_bios)
++ ret = p2sb_scan_and_cache(bus, devfn_p2sb);
+
+- ret = p2sb_scan_and_cache(bus, devfn_p2sb);
++ pci_unlock_rescan_remove();
+
+- /* Hide the P2SB device, if it was hidden */
+- if (value & P2SBC_HIDE)
+- pci_bus_write_config_dword(bus, devfn_p2sb, P2SBC, P2SBC_HIDE);
++ return ret;
++}
+
+- pci_unlock_rescan_remove();
++static int p2sb_read_from_cache(struct pci_bus *bus, unsigned int devfn,
++ struct resource *mem)
++{
++ struct p2sb_res_cache *cache = &p2sb_resources[PCI_FUNC(devfn)];
++
++ if (cache->bus_dev_id != bus->dev.id)
++ return -ENODEV;
++
++ if (!p2sb_valid_resource(&cache->res))
++ return -ENOENT;
++
++ memcpy(mem, &cache->res, sizeof(*mem));
++
++ return 0;
++}
++
++static int p2sb_read_from_dev(struct pci_bus *bus, unsigned int devfn,
++ struct resource *mem)
++{
++ struct pci_dev *pdev;
++ int ret = 0;
++
++ pdev = pci_get_slot(bus, devfn);
++ if (!pdev)
++ return -ENODEV;
++
++ if (p2sb_valid_resource(pci_resource_n(pdev, 0)))
++ p2sb_read_bar0(pdev, mem);
++ else
++ ret = -ENOENT;
++
++ pci_dev_put(pdev);
+
+ return ret;
+ }
+@@ -187,8 +227,6 @@ static int p2sb_cache_resources(void)
+ */
+ int p2sb_bar(struct pci_bus *bus, unsigned int devfn, struct resource *mem)
+ {
+- struct p2sb_res_cache *cache;
+-
+ bus = p2sb_get_bus(bus);
+ if (!bus)
+ return -ENODEV;
+@@ -196,15 +234,10 @@ int p2sb_bar(struct pci_bus *bus, unsigned int devfn, struct resource *mem)
+ if (!devfn)
+ p2sb_get_devfn(&devfn);
+
+- cache = &p2sb_resources[PCI_FUNC(devfn)];
+- if (cache->bus_dev_id != bus->dev.id)
+- return -ENODEV;
++ if (p2sb_hidden_by_bios)
++ return p2sb_read_from_cache(bus, devfn, mem);
+
+- if (!p2sb_valid_resource(&cache->res))
+- return -ENOENT;
+-
+- memcpy(mem, &cache->res, sizeof(*mem));
+- return 0;
++ return p2sb_read_from_dev(bus, devfn, mem);
+ }
+ EXPORT_SYMBOL_GPL(p2sb_bar);
+
+diff --git a/drivers/thunderbolt/nhi.c b/drivers/thunderbolt/nhi.c
+index 7af2642b97cb81..7c42e303740af2 100644
+--- a/drivers/thunderbolt/nhi.c
++++ b/drivers/thunderbolt/nhi.c
+@@ -1520,6 +1520,14 @@ static struct pci_device_id nhi_ids[] = {
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_LNL_NHI1),
+ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI0),
++ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_M_NHI1),
++ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI0),
++ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
++ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_PTL_P_NHI1),
++ .driver_data = (kernel_ulong_t)&icl_nhi_ops },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_80G_NHI) },
+ { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_BARLOW_RIDGE_HOST_40G_NHI) },
+
+diff --git a/drivers/thunderbolt/nhi.h b/drivers/thunderbolt/nhi.h
+index 7a07c7c1a9c2c6..16744f25a9a069 100644
+--- a/drivers/thunderbolt/nhi.h
++++ b/drivers/thunderbolt/nhi.h
+@@ -92,6 +92,10 @@ extern const struct tb_nhi_ops icl_nhi_ops;
+ #define PCI_DEVICE_ID_INTEL_RPL_NHI1 0xa76d
+ #define PCI_DEVICE_ID_INTEL_LNL_NHI0 0xa833
+ #define PCI_DEVICE_ID_INTEL_LNL_NHI1 0xa834
++#define PCI_DEVICE_ID_INTEL_PTL_M_NHI0 0xe333
++#define PCI_DEVICE_ID_INTEL_PTL_M_NHI1 0xe334
++#define PCI_DEVICE_ID_INTEL_PTL_P_NHI0 0xe433
++#define PCI_DEVICE_ID_INTEL_PTL_P_NHI1 0xe434
+
+ #define PCI_CLASS_SERIAL_USB_USB4 0x0c0340
+
+diff --git a/drivers/thunderbolt/retimer.c b/drivers/thunderbolt/retimer.c
+index 89d2919d0193e8..eeb64433ebbca0 100644
+--- a/drivers/thunderbolt/retimer.c
++++ b/drivers/thunderbolt/retimer.c
+@@ -103,6 +103,7 @@ static int tb_retimer_nvm_add(struct tb_retimer *rt)
+
+ err_nvm:
+ dev_dbg(&rt->dev, "NVM upgrade disabled\n");
++ rt->no_nvm_upgrade = true;
+ if (!IS_ERR(nvm))
+ tb_nvm_free(nvm);
+
+@@ -182,8 +183,6 @@ static ssize_t nvm_authenticate_show(struct device *dev,
+
+ if (!rt->nvm)
+ ret = -EAGAIN;
+- else if (rt->no_nvm_upgrade)
+- ret = -EOPNOTSUPP;
+ else
+ ret = sysfs_emit(buf, "%#x\n", rt->auth_status);
+
+@@ -323,8 +322,6 @@ static ssize_t nvm_version_show(struct device *dev,
+
+ if (!rt->nvm)
+ ret = -EAGAIN;
+- else if (rt->no_nvm_upgrade)
+- ret = -EOPNOTSUPP;
+ else
+ ret = sysfs_emit(buf, "%x.%x\n", rt->nvm->major, rt->nvm->minor);
+
+@@ -342,6 +339,19 @@ static ssize_t vendor_show(struct device *dev, struct device_attribute *attr,
+ }
+ static DEVICE_ATTR_RO(vendor);
+
++static umode_t retimer_is_visible(struct kobject *kobj, struct attribute *attr,
++ int n)
++{
++ struct device *dev = kobj_to_dev(kobj);
++ struct tb_retimer *rt = tb_to_retimer(dev);
++
++ if (attr == &dev_attr_nvm_authenticate.attr ||
++ attr == &dev_attr_nvm_version.attr)
++ return rt->no_nvm_upgrade ? 0 : attr->mode;
++
++ return attr->mode;
++}
++
+ static struct attribute *retimer_attrs[] = {
+ &dev_attr_device.attr,
+ &dev_attr_nvm_authenticate.attr,
+@@ -351,6 +361,7 @@ static struct attribute *retimer_attrs[] = {
+ };
+
+ static const struct attribute_group retimer_group = {
++ .is_visible = retimer_is_visible,
+ .attrs = retimer_attrs,
+ };
+
+diff --git a/drivers/thunderbolt/tb.c b/drivers/thunderbolt/tb.c
+index 4f777788e9179c..a7c6919fbf9788 100644
+--- a/drivers/thunderbolt/tb.c
++++ b/drivers/thunderbolt/tb.c
+@@ -2059,6 +2059,37 @@ static void tb_exit_redrive(struct tb_port *port)
+ }
+ }
+
++static void tb_switch_enter_redrive(struct tb_switch *sw)
++{
++ struct tb_port *port;
++
++ tb_switch_for_each_port(sw, port)
++ tb_enter_redrive(port);
++}
++
++/*
++ * Called during system and runtime suspend to forcefully exit redrive
++ * mode without querying whether the resource is available.
++ */
++static void tb_switch_exit_redrive(struct tb_switch *sw)
++{
++ struct tb_port *port;
++
++ if (!(sw->quirks & QUIRK_KEEP_POWER_IN_DP_REDRIVE))
++ return;
++
++ tb_switch_for_each_port(sw, port) {
++ if (!tb_port_is_dpin(port))
++ continue;
++
++ if (port->redrive) {
++ port->redrive = false;
++ pm_runtime_put(&sw->dev);
++ tb_port_dbg(port, "exit redrive mode\n");
++ }
++ }
++}
++
+ static void tb_dp_resource_unavailable(struct tb *tb, struct tb_port *port)
+ {
+ struct tb_port *in, *out;
+@@ -2909,6 +2940,7 @@ static int tb_start(struct tb *tb, bool reset)
+ tb_create_usb3_tunnels(tb->root_switch);
+ /* Add DP IN resources for the root switch */
+ tb_add_dp_resources(tb->root_switch);
++ tb_switch_enter_redrive(tb->root_switch);
+ /* Make the discovered switches available to the userspace */
+ device_for_each_child(&tb->root_switch->dev, NULL,
+ tb_scan_finalize_switch);
+@@ -2924,6 +2956,7 @@ static int tb_suspend_noirq(struct tb *tb)
+
+ tb_dbg(tb, "suspending...\n");
+ tb_disconnect_and_release_dp(tb);
++ tb_switch_exit_redrive(tb->root_switch);
+ tb_switch_suspend(tb->root_switch, false);
+ tcm->hotplug_active = false; /* signal tb_handle_hotplug to quit */
+ tb_dbg(tb, "suspend finished\n");
+@@ -3016,6 +3049,7 @@ static int tb_resume_noirq(struct tb *tb)
+ tb_dbg(tb, "tunnels restarted, sleeping for 100ms\n");
+ msleep(100);
+ }
++ tb_switch_enter_redrive(tb->root_switch);
+ /* Allow tb_handle_hotplug to progress events */
+ tcm->hotplug_active = true;
+ tb_dbg(tb, "resume finished\n");
+@@ -3079,6 +3113,12 @@ static int tb_runtime_suspend(struct tb *tb)
+ struct tb_cm *tcm = tb_priv(tb);
+
+ mutex_lock(&tb->lock);
++ /*
++ * The below call only releases DP resources to allow exiting and
++ * re-entering redrive mode.
++ */
++ tb_disconnect_and_release_dp(tb);
++ tb_switch_exit_redrive(tb->root_switch);
+ tb_switch_suspend(tb->root_switch, true);
+ tcm->hotplug_active = false;
+ mutex_unlock(&tb->lock);
+@@ -3110,6 +3150,7 @@ static int tb_runtime_resume(struct tb *tb)
+ tb_restore_children(tb->root_switch);
+ list_for_each_entry_safe(tunnel, n, &tcm->tunnel_list, list)
+ tb_tunnel_restart(tunnel);
++ tb_switch_enter_redrive(tb->root_switch);
+ tcm->hotplug_active = true;
+ mutex_unlock(&tb->lock);
+
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index f318864732f2db..b267dae14d3904 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -1192,8 +1192,6 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id,
+ * Keep retrying until the EP starts and stops again, on
+ * chips where this is known to help. Wait for 100ms.
+ */
+- if (!(xhci->quirks & XHCI_NEC_HOST))
+- break;
+ if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100)))
+ break;
+ fallthrough;
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 9ba5584061c8c4..64317b390d2285 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -625,6 +625,8 @@ static void option_instat_callback(struct urb *urb);
+ #define MEIGSMART_PRODUCT_SRM825L 0x4d22
+ /* MeiG Smart SLM320 based on UNISOC UIS8910 */
+ #define MEIGSMART_PRODUCT_SLM320 0x4d41
++/* MeiG Smart SLM770A based on ASR1803 */
++#define MEIGSMART_PRODUCT_SLM770A 0x4d57
+
+ /* Device flags */
+
+@@ -1395,6 +1397,12 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10aa, 0xff), /* Telit FN920C04 (MBIM) */
+ .driver_info = NCTRL(3) | RSVD(4) | RSVD(5) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c0, 0xff), /* Telit FE910C04 (rmnet) */
++ .driver_info = RSVD(0) | NCTRL(3) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c4, 0xff), /* Telit FE910C04 (rmnet) */
++ .driver_info = RSVD(0) | NCTRL(3) },
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c8, 0xff), /* Telit FE910C04 (rmnet) */
++ .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+@@ -2247,6 +2255,8 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(2) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x7127, 0xff, 0x00, 0x00),
+ .driver_info = NCTRL(2) | NCTRL(3) | NCTRL(4) },
++ { USB_DEVICE_AND_INTERFACE_INFO(MEDIATEK_VENDOR_ID, 0x7129, 0xff, 0x00, 0x00), /* MediaTek T7XX */
++ .driver_info = NCTRL(2) | NCTRL(3) | NCTRL(4) },
+ { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MEN200) },
+ { USB_DEVICE(CELLIENT_VENDOR_ID, CELLIENT_PRODUCT_MPL200),
+ .driver_info = RSVD(1) | RSVD(4) },
+@@ -2375,6 +2385,18 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WWD for Golbal EDU */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0x00, 0x40) },
+ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x0116, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010a, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WRD for WWAN Ready */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010a, 0xff, 0x00, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010a, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010b, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WWD for WWAN Ready */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010b, 0xff, 0x00, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010b, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010c, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WRD for WWAN Ready */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010c, 0xff, 0x00, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010c, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010d, 0xff, 0xff, 0x30) }, /* NetPrisma LCUK54-WWD for WWAN Ready */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010d, 0xff, 0x00, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(0x3731, 0x010d, 0xff, 0xff, 0x40) },
+ { USB_DEVICE_AND_INTERFACE_INFO(OPPO_VENDOR_ID, OPPO_PRODUCT_R11, 0xff, 0xff, 0x30) },
+ { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x30) },
+ { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0xff, 0x40) },
+@@ -2382,9 +2404,14 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM320, 0xff, 0, 0) },
++ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM770A, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x30) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x40) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x60) },
++ { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0530, 0xff), /* TCL IK512 MBIM */
++ .driver_info = NCTRL(1) },
++ { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0640, 0xff), /* TCL IK512 ECM */
++ .driver_info = NCTRL(3) },
+ { } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
+index 7e0f9600b80c43..3d2376caedfa68 100644
+--- a/fs/btrfs/bio.c
++++ b/fs/btrfs/bio.c
+@@ -649,8 +649,14 @@ static u64 btrfs_append_map_length(struct btrfs_bio *bbio, u64 map_length)
+ map_length = min(map_length, bbio->fs_info->max_zone_append_size);
+ sector_offset = bio_split_rw_at(&bbio->bio, &bbio->fs_info->limits,
+ &nr_segs, map_length);
+- if (sector_offset)
+- return sector_offset << SECTOR_SHIFT;
++ if (sector_offset) {
++ /*
++ * bio_split_rw_at() could split at a size smaller than our
++ * sectorsize and thus cause unaligned I/Os. Fix that by
++ * always rounding down to the nearest boundary.
++ */
++ return ALIGN_DOWN(sector_offset << SECTOR_SHIFT, bbio->fs_info->sectorsize);
++ }
+ return map_length;
+ }
+
+diff --git a/fs/btrfs/ctree.h b/fs/btrfs/ctree.h
+index 317a3712270fc0..2034d371083331 100644
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -370,6 +370,25 @@ static inline void btrfs_set_root_last_trans(struct btrfs_root *root, u64 transi
+ WRITE_ONCE(root->last_trans, transid);
+ }
+
++/*
++ * Return the generation this root started with.
++ *
++ * Every normal root is created with root->root_key.offset set to its
++ * originating generation. If it is a snapshot it is the generation when the
++ * snapshot was created.
++ *
++ * However, for TREE_RELOC roots, root_key.offset is the objectid of the owning
++ * tree root. Thankfully we copy the root item of the owning tree root, which
++ * has its last_snapshot set to what we would have root_key.offset set to, so
++ * return that if this is a TREE_RELOC root.
++ */
++static inline u64 btrfs_root_origin_generation(const struct btrfs_root *root)
++{
++ if (btrfs_root_id(root) == BTRFS_TREE_RELOC_OBJECTID)
++ return btrfs_root_last_snapshot(&root->root_item);
++ return root->root_key.offset;
++}
++
+ /*
+ * Structure that conveys information about an extent that is going to replace
+ * all the extents in a file range.
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index b43a8611aca5c6..f3e93ba7ec97fa 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -5308,7 +5308,7 @@ static bool visit_node_for_delete(struct btrfs_root *root, struct walk_control *
+ * reference to it.
+ */
+ generation = btrfs_node_ptr_generation(eb, slot);
+- if (!wc->update_ref || generation <= root->root_key.offset)
++ if (!wc->update_ref || generation <= btrfs_root_origin_generation(root))
+ return false;
+
+ /*
+@@ -5363,7 +5363,7 @@ static noinline void reada_walk_down(struct btrfs_trans_handle *trans,
+ goto reada;
+
+ if (wc->stage == UPDATE_BACKREF &&
+- generation <= root->root_key.offset)
++ generation <= btrfs_root_origin_generation(root))
+ continue;
+
+ /* We don't lock the tree block, it's OK to be racy here */
+@@ -5706,7 +5706,7 @@ static noinline int do_walk_down(struct btrfs_trans_handle *trans,
+ * for the subtree
+ */
+ if (wc->stage == UPDATE_BACKREF &&
+- generation <= root->root_key.offset) {
++ generation <= btrfs_root_origin_generation(root)) {
+ wc->lookup_info = 1;
+ return 1;
+ }
+diff --git a/fs/btrfs/tree-checker.c b/fs/btrfs/tree-checker.c
+index 7b50263723bc1a..ffa5b83d3a4a3a 100644
+--- a/fs/btrfs/tree-checker.c
++++ b/fs/btrfs/tree-checker.c
+@@ -1527,6 +1527,11 @@ static int check_extent_item(struct extent_buffer *leaf,
+ dref_offset, fs_info->sectorsize);
+ return -EUCLEAN;
+ }
++ if (unlikely(btrfs_extent_data_ref_count(leaf, dref) == 0)) {
++ extent_err(leaf, slot,
++ "invalid data ref count, should have non-zero value");
++ return -EUCLEAN;
++ }
+ inline_refs += btrfs_extent_data_ref_count(leaf, dref);
+ break;
+ /* Contains parent bytenr and ref count */
+@@ -1539,6 +1544,11 @@ static int check_extent_item(struct extent_buffer *leaf,
+ inline_offset, fs_info->sectorsize);
+ return -EUCLEAN;
+ }
++ if (unlikely(btrfs_shared_data_ref_count(leaf, sref) == 0)) {
++ extent_err(leaf, slot,
++ "invalid shared data ref count, should have non-zero value");
++ return -EUCLEAN;
++ }
+ inline_refs += btrfs_shared_data_ref_count(leaf, sref);
+ break;
+ case BTRFS_EXTENT_OWNER_REF_KEY:
+@@ -1611,8 +1621,18 @@ static int check_simple_keyed_refs(struct extent_buffer *leaf,
+ {
+ u32 expect_item_size = 0;
+
+- if (key->type == BTRFS_SHARED_DATA_REF_KEY)
++ if (key->type == BTRFS_SHARED_DATA_REF_KEY) {
++ struct btrfs_shared_data_ref *sref;
++
++ sref = btrfs_item_ptr(leaf, slot, struct btrfs_shared_data_ref);
++ if (unlikely(btrfs_shared_data_ref_count(leaf, sref) == 0)) {
++ extent_err(leaf, slot,
++ "invalid shared data backref count, should have non-zero value");
++ return -EUCLEAN;
++ }
++
+ expect_item_size = sizeof(struct btrfs_shared_data_ref);
++ }
+
+ if (unlikely(btrfs_item_size(leaf, slot) != expect_item_size)) {
+ generic_err(leaf, slot,
+@@ -1689,6 +1709,11 @@ static int check_extent_data_ref(struct extent_buffer *leaf,
+ offset, leaf->fs_info->sectorsize);
+ return -EUCLEAN;
+ }
++ if (unlikely(btrfs_extent_data_ref_count(leaf, dref) == 0)) {
++ extent_err(leaf, slot,
++ "invalid extent data backref count, should have non-zero value");
++ return -EUCLEAN;
++ }
+ }
+ return 0;
+ }
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 4b8d59ebda0092..67468d88f13908 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -1066,7 +1066,7 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ if (ceph_inode_is_shutdown(inode))
+ return -EIO;
+
+- if (!len)
++ if (!len || !i_size)
+ return 0;
+ /*
+ * flush any page cache pages in this range. this
+@@ -1086,7 +1086,7 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ int num_pages;
+ size_t page_off;
+ bool more;
+- int idx;
++ int idx = 0;
+ size_t left;
+ struct ceph_osd_req_op *op;
+ u64 read_off = off;
+@@ -1116,6 +1116,16 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ len = read_off + read_len - off;
+ more = len < iov_iter_count(to);
+
++ op = &req->r_ops[0];
++ if (sparse) {
++ extent_cnt = __ceph_sparse_read_ext_count(inode, read_len);
++ ret = ceph_alloc_sparse_ext_map(op, extent_cnt);
++ if (ret) {
++ ceph_osdc_put_request(req);
++ break;
++ }
++ }
++
+ num_pages = calc_pages_for(read_off, read_len);
+ page_off = offset_in_page(off);
+ pages = ceph_alloc_page_vector(num_pages, GFP_KERNEL);
+@@ -1127,17 +1137,7 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+
+ osd_req_op_extent_osd_data_pages(req, 0, pages, read_len,
+ offset_in_page(read_off),
+- false, false);
+-
+- op = &req->r_ops[0];
+- if (sparse) {
+- extent_cnt = __ceph_sparse_read_ext_count(inode, read_len);
+- ret = ceph_alloc_sparse_ext_map(op, extent_cnt);
+- if (ret) {
+- ceph_osdc_put_request(req);
+- break;
+- }
+- }
++ false, true);
+
+ ceph_osdc_start_request(osdc, req);
+ ret = ceph_osdc_wait_request(osdc, req);
+@@ -1160,7 +1160,14 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ else if (ret == -ENOENT)
+ ret = 0;
+
+- if (ret > 0 && IS_ENCRYPTED(inode)) {
++ if (ret < 0) {
++ ceph_osdc_put_request(req);
++ if (ret == -EBLOCKLISTED)
++ fsc->blocklisted = true;
++ break;
++ }
++
++ if (IS_ENCRYPTED(inode)) {
+ int fret;
+
+ fret = ceph_fscrypt_decrypt_extents(inode, pages,
+@@ -1186,10 +1193,8 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ ret = min_t(ssize_t, fret, len);
+ }
+
+- ceph_osdc_put_request(req);
+-
+ /* Short read but not EOF? Zero out the remainder. */
+- if (ret >= 0 && ret < len && (off + ret < i_size)) {
++ if (ret < len && (off + ret < i_size)) {
+ int zlen = min(len - ret, i_size - off - ret);
+ int zoff = page_off + ret;
+
+@@ -1199,13 +1204,11 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ ret += zlen;
+ }
+
+- idx = 0;
+- if (ret <= 0)
+- left = 0;
+- else if (off + ret > i_size)
+- left = i_size - off;
++ if (off + ret > i_size)
++ left = (i_size > off) ? i_size - off : 0;
+ else
+ left = ret;
++
+ while (left > 0) {
+ size_t plen, copied;
+
+@@ -1221,13 +1224,8 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
+ break;
+ }
+ }
+- ceph_release_page_vector(pages, num_pages);
+
+- if (ret < 0) {
+- if (ret == -EBLOCKLISTED)
+- fsc->blocklisted = true;
+- break;
+- }
++ ceph_osdc_put_request(req);
+
+ if (off >= i_size || !more)
+ break;
+@@ -1553,6 +1551,16 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
+ break;
+ }
+
++ op = &req->r_ops[0];
++ if (sparse) {
++ extent_cnt = __ceph_sparse_read_ext_count(inode, size);
++ ret = ceph_alloc_sparse_ext_map(op, extent_cnt);
++ if (ret) {
++ ceph_osdc_put_request(req);
++ break;
++ }
++ }
++
+ len = iter_get_bvecs_alloc(iter, size, &bvecs, &num_pages);
+ if (len < 0) {
+ ceph_osdc_put_request(req);
+@@ -1562,6 +1570,8 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
+ if (len != size)
+ osd_req_op_extent_update(req, 0, len);
+
++ osd_req_op_extent_osd_data_bvecs(req, 0, bvecs, num_pages, len);
++
+ /*
+ * To simplify error handling, allow AIO when IO within i_size
+ * or IO can be satisfied by single OSD request.
+@@ -1593,17 +1603,6 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
+ req->r_mtime = mtime;
+ }
+
+- osd_req_op_extent_osd_data_bvecs(req, 0, bvecs, num_pages, len);
+- op = &req->r_ops[0];
+- if (sparse) {
+- extent_cnt = __ceph_sparse_read_ext_count(inode, size);
+- ret = ceph_alloc_sparse_ext_map(op, extent_cnt);
+- if (ret) {
+- ceph_osdc_put_request(req);
+- break;
+- }
+- }
+-
+ if (aio_req) {
+ aio_req->total_len += len;
+ aio_req->num_reqs++;
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index cf92b75745e2a5..f48242262b2177 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -2808,12 +2808,11 @@ char *ceph_mdsc_build_path(struct ceph_mds_client *mdsc, struct dentry *dentry,
+
+ if (pos < 0) {
+ /*
+- * A rename didn't occur, but somehow we didn't end up where
+- * we thought we would. Throw a warning and try again.
++ * The path is longer than PATH_MAX and this function
++ * cannot ever succeed. Creating paths that long is
++ * possible with Ceph, but Linux cannot use them.
+ */
+- pr_warn_client(cl, "did not end path lookup where expected (pos = %d)\n",
+- pos);
+- goto retry;
++ return ERR_PTR(-ENAMETOOLONG);
+ }
+
+ *pbase = base;
+diff --git a/fs/ceph/super.c b/fs/ceph/super.c
+index 86480e5a215e51..c235f9a60394c2 100644
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -431,6 +431,8 @@ static int ceph_parse_mount_param(struct fs_context *fc,
+
+ switch (token) {
+ case Opt_snapdirname:
++ if (strlen(param->string) > NAME_MAX)
++ return invalfc(fc, "snapdirname too long");
+ kfree(fsopt->snapdir_name);
+ fsopt->snapdir_name = param->string;
+ param->string = NULL;
+diff --git a/fs/efivarfs/inode.c b/fs/efivarfs/inode.c
+index 586446e02ef72d..ec23da8405ff8e 100644
+--- a/fs/efivarfs/inode.c
++++ b/fs/efivarfs/inode.c
+@@ -51,7 +51,7 @@ struct inode *efivarfs_get_inode(struct super_block *sb,
+ *
+ * VariableName-12345678-1234-1234-1234-1234567891bc
+ */
+-bool efivarfs_valid_name(const char *str, int len)
++static bool efivarfs_valid_name(const char *str, int len)
+ {
+ const char *s = str + len - EFI_VARIABLE_GUID_LEN;
+
+diff --git a/fs/efivarfs/internal.h b/fs/efivarfs/internal.h
+index d71d2e08422f09..74f0602a9e016c 100644
+--- a/fs/efivarfs/internal.h
++++ b/fs/efivarfs/internal.h
+@@ -60,7 +60,6 @@ bool efivar_variable_is_removable(efi_guid_t vendor, const char *name,
+
+ extern const struct file_operations efivarfs_file_operations;
+ extern const struct inode_operations efivarfs_dir_inode_operations;
+-extern bool efivarfs_valid_name(const char *str, int len);
+ extern struct inode *efivarfs_get_inode(struct super_block *sb,
+ const struct inode *dir, int mode, dev_t dev,
+ bool is_removable);
+diff --git a/fs/efivarfs/super.c b/fs/efivarfs/super.c
+index a929f1b613be84..beba15673be8d3 100644
+--- a/fs/efivarfs/super.c
++++ b/fs/efivarfs/super.c
+@@ -144,9 +144,6 @@ static int efivarfs_d_hash(const struct dentry *dentry, struct qstr *qstr)
+ const unsigned char *s = qstr->name;
+ unsigned int len = qstr->len;
+
+- if (!efivarfs_valid_name(s, len))
+- return -EINVAL;
+-
+ while (len-- > EFI_VARIABLE_GUID_LEN)
+ hash = partial_name_hash(*s++, hash);
+
+diff --git a/fs/erofs/data.c b/fs/erofs/data.c
+index fa51437e1d99d9..722151d3fee8b4 100644
+--- a/fs/erofs/data.c
++++ b/fs/erofs/data.c
+@@ -63,10 +63,10 @@ void erofs_init_metabuf(struct erofs_buf *buf, struct super_block *sb)
+
+ buf->file = NULL;
+ if (erofs_is_fileio_mode(sbi)) {
+- buf->file = sbi->fdev; /* some fs like FUSE needs it */
++ buf->file = sbi->dif0.file; /* some fs like FUSE needs it */
+ buf->mapping = buf->file->f_mapping;
+ } else if (erofs_is_fscache_mode(sb))
+- buf->mapping = sbi->s_fscache->inode->i_mapping;
++ buf->mapping = sbi->dif0.fscache->inode->i_mapping;
+ else
+ buf->mapping = sb->s_bdev->bd_mapping;
+ }
+@@ -186,19 +186,13 @@ int erofs_map_blocks(struct inode *inode, struct erofs_map_blocks *map)
+ }
+
+ static void erofs_fill_from_devinfo(struct erofs_map_dev *map,
+- struct erofs_device_info *dif)
++ struct super_block *sb, struct erofs_device_info *dif)
+ {
++ map->m_sb = sb;
++ map->m_dif = dif;
+ map->m_bdev = NULL;
+- map->m_fp = NULL;
+- if (dif->file) {
+- if (S_ISBLK(file_inode(dif->file)->i_mode))
+- map->m_bdev = file_bdev(dif->file);
+- else
+- map->m_fp = dif->file;
+- }
+- map->m_daxdev = dif->dax_dev;
+- map->m_dax_part_off = dif->dax_part_off;
+- map->m_fscache = dif->fscache;
++ if (dif->file && S_ISBLK(file_inode(dif->file)->i_mode))
++ map->m_bdev = file_bdev(dif->file);
+ }
+
+ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
+@@ -208,12 +202,8 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
+ erofs_off_t startoff, length;
+ int id;
+
+- map->m_bdev = sb->s_bdev;
+- map->m_daxdev = EROFS_SB(sb)->dax_dev;
+- map->m_dax_part_off = EROFS_SB(sb)->dax_part_off;
+- map->m_fscache = EROFS_SB(sb)->s_fscache;
+- map->m_fp = EROFS_SB(sb)->fdev;
+-
++ erofs_fill_from_devinfo(map, sb, &EROFS_SB(sb)->dif0);
++ map->m_bdev = sb->s_bdev; /* use s_bdev for the primary device */
+ if (map->m_deviceid) {
+ down_read(&devs->rwsem);
+ dif = idr_find(&devs->tree, map->m_deviceid - 1);
+@@ -226,7 +216,7 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
+ up_read(&devs->rwsem);
+ return 0;
+ }
+- erofs_fill_from_devinfo(map, dif);
++ erofs_fill_from_devinfo(map, sb, dif);
+ up_read(&devs->rwsem);
+ } else if (devs->extra_devices && !devs->flatdev) {
+ down_read(&devs->rwsem);
+@@ -239,7 +229,7 @@ int erofs_map_dev(struct super_block *sb, struct erofs_map_dev *map)
+ if (map->m_pa >= startoff &&
+ map->m_pa < startoff + length) {
+ map->m_pa -= startoff;
+- erofs_fill_from_devinfo(map, dif);
++ erofs_fill_from_devinfo(map, sb, dif);
+ break;
+ }
+ }
+@@ -309,7 +299,7 @@ static int erofs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+
+ iomap->offset = map.m_la;
+ if (flags & IOMAP_DAX)
+- iomap->dax_dev = mdev.m_daxdev;
++ iomap->dax_dev = mdev.m_dif->dax_dev;
+ else
+ iomap->bdev = mdev.m_bdev;
+ iomap->length = map.m_llen;
+@@ -338,7 +328,7 @@ static int erofs_iomap_begin(struct inode *inode, loff_t offset, loff_t length,
+ iomap->type = IOMAP_MAPPED;
+ iomap->addr = mdev.m_pa;
+ if (flags & IOMAP_DAX)
+- iomap->addr += mdev.m_dax_part_off;
++ iomap->addr += mdev.m_dif->dax_part_off;
+ }
+ return 0;
+ }
+diff --git a/fs/erofs/fileio.c b/fs/erofs/fileio.c
+index 3af96b1e2c2aa8..33f8539dda4aeb 100644
+--- a/fs/erofs/fileio.c
++++ b/fs/erofs/fileio.c
+@@ -9,6 +9,7 @@ struct erofs_fileio_rq {
+ struct bio_vec bvecs[BIO_MAX_VECS];
+ struct bio bio;
+ struct kiocb iocb;
++ struct super_block *sb;
+ };
+
+ struct erofs_fileio {
+@@ -52,8 +53,9 @@ static void erofs_fileio_rq_submit(struct erofs_fileio_rq *rq)
+ rq->iocb.ki_pos = rq->bio.bi_iter.bi_sector << SECTOR_SHIFT;
+ rq->iocb.ki_ioprio = get_current_ioprio();
+ rq->iocb.ki_complete = erofs_fileio_ki_complete;
+- rq->iocb.ki_flags = (rq->iocb.ki_filp->f_mode & FMODE_CAN_ODIRECT) ?
+- IOCB_DIRECT : 0;
++ if (test_opt(&EROFS_SB(rq->sb)->opt, DIRECT_IO) &&
++ rq->iocb.ki_filp->f_mode & FMODE_CAN_ODIRECT)
++ rq->iocb.ki_flags = IOCB_DIRECT;
+ iov_iter_bvec(&iter, ITER_DEST, rq->bvecs, rq->bio.bi_vcnt,
+ rq->bio.bi_iter.bi_size);
+ ret = vfs_iocb_iter_read(rq->iocb.ki_filp, &rq->iocb, &iter);
+@@ -67,7 +69,8 @@ static struct erofs_fileio_rq *erofs_fileio_rq_alloc(struct erofs_map_dev *mdev)
+ GFP_KERNEL | __GFP_NOFAIL);
+
+ bio_init(&rq->bio, NULL, rq->bvecs, BIO_MAX_VECS, REQ_OP_READ);
+- rq->iocb.ki_filp = mdev->m_fp;
++ rq->iocb.ki_filp = mdev->m_dif->file;
++ rq->sb = mdev->m_sb;
+ return rq;
+ }
+
+diff --git a/fs/erofs/fscache.c b/fs/erofs/fscache.c
+index fda16eedafb578..ce3d8737df85d4 100644
+--- a/fs/erofs/fscache.c
++++ b/fs/erofs/fscache.c
+@@ -198,7 +198,7 @@ struct bio *erofs_fscache_bio_alloc(struct erofs_map_dev *mdev)
+
+ io = kmalloc(sizeof(*io), GFP_KERNEL | __GFP_NOFAIL);
+ bio_init(&io->bio, NULL, io->bvecs, BIO_MAX_VECS, REQ_OP_READ);
+- io->io.private = mdev->m_fscache->cookie;
++ io->io.private = mdev->m_dif->fscache->cookie;
+ io->io.end_io = erofs_fscache_bio_endio;
+ refcount_set(&io->io.ref, 1);
+ return &io->bio;
+@@ -316,7 +316,7 @@ static int erofs_fscache_data_read_slice(struct erofs_fscache_rq *req)
+ if (!io)
+ return -ENOMEM;
+ iov_iter_xarray(&io->iter, ITER_DEST, &mapping->i_pages, pos, count);
+- ret = erofs_fscache_read_io_async(mdev.m_fscache->cookie,
++ ret = erofs_fscache_read_io_async(mdev.m_dif->fscache->cookie,
+ mdev.m_pa + (pos - map.m_la), io);
+ erofs_fscache_req_io_put(io);
+
+@@ -657,7 +657,7 @@ int erofs_fscache_register_fs(struct super_block *sb)
+ if (IS_ERR(fscache))
+ return PTR_ERR(fscache);
+
+- sbi->s_fscache = fscache;
++ sbi->dif0.fscache = fscache;
+ return 0;
+ }
+
+@@ -665,14 +665,14 @@ void erofs_fscache_unregister_fs(struct super_block *sb)
+ {
+ struct erofs_sb_info *sbi = EROFS_SB(sb);
+
+- erofs_fscache_unregister_cookie(sbi->s_fscache);
++ erofs_fscache_unregister_cookie(sbi->dif0.fscache);
+
+ if (sbi->domain)
+ erofs_fscache_domain_put(sbi->domain);
+ else
+ fscache_relinquish_volume(sbi->volume, NULL, false);
+
+- sbi->s_fscache = NULL;
++ sbi->dif0.fscache = NULL;
+ sbi->volume = NULL;
+ sbi->domain = NULL;
+ }
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index 9b03c8f323a762..77e785a6dfa7ff 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -113,6 +113,7 @@ struct erofs_xattr_prefix_item {
+ };
+
+ struct erofs_sb_info {
++ struct erofs_device_info dif0;
+ struct erofs_mount_opts opt; /* options */
+ #ifdef CONFIG_EROFS_FS_ZIP
+ /* list for all registered superblocks, mainly for shrinker */
+@@ -130,13 +131,9 @@ struct erofs_sb_info {
+
+ struct erofs_sb_lz4_info lz4;
+ #endif /* CONFIG_EROFS_FS_ZIP */
+- struct file *fdev;
+ struct inode *packed_inode;
+ struct erofs_dev_context *devs;
+- struct dax_device *dax_dev;
+- u64 dax_part_off;
+ u64 total_blocks;
+- u32 primarydevice_blocks;
+
+ u32 meta_blkaddr;
+ #ifdef CONFIG_EROFS_FS_XATTR
+@@ -172,7 +169,6 @@ struct erofs_sb_info {
+
+ /* fscache support */
+ struct fscache_volume *volume;
+- struct erofs_fscache *s_fscache;
+ struct erofs_domain *domain;
+ char *fsid;
+ char *domain_id;
+@@ -186,6 +182,7 @@ struct erofs_sb_info {
+ #define EROFS_MOUNT_POSIX_ACL 0x00000020
+ #define EROFS_MOUNT_DAX_ALWAYS 0x00000040
+ #define EROFS_MOUNT_DAX_NEVER 0x00000080
++#define EROFS_MOUNT_DIRECT_IO 0x00000100
+
+ #define clear_opt(opt, option) ((opt)->mount_opt &= ~EROFS_MOUNT_##option)
+ #define set_opt(opt, option) ((opt)->mount_opt |= EROFS_MOUNT_##option)
+@@ -193,7 +190,7 @@ struct erofs_sb_info {
+
+ static inline bool erofs_is_fileio_mode(struct erofs_sb_info *sbi)
+ {
+- return IS_ENABLED(CONFIG_EROFS_FS_BACKED_BY_FILE) && sbi->fdev;
++ return IS_ENABLED(CONFIG_EROFS_FS_BACKED_BY_FILE) && sbi->dif0.file;
+ }
+
+ static inline bool erofs_is_fscache_mode(struct super_block *sb)
+@@ -370,11 +367,9 @@ enum {
+ };
+
+ struct erofs_map_dev {
+- struct erofs_fscache *m_fscache;
++ struct super_block *m_sb;
++ struct erofs_device_info *m_dif;
+ struct block_device *m_bdev;
+- struct dax_device *m_daxdev;
+- struct file *m_fp;
+- u64 m_dax_part_off;
+
+ erofs_off_t m_pa;
+ unsigned int m_deviceid;
+diff --git a/fs/erofs/super.c b/fs/erofs/super.c
+index 2dd7d819572f40..5b279977c9d5d6 100644
+--- a/fs/erofs/super.c
++++ b/fs/erofs/super.c
+@@ -218,7 +218,7 @@ static int erofs_scan_devices(struct super_block *sb,
+ struct erofs_device_info *dif;
+ int id, err = 0;
+
+- sbi->total_blocks = sbi->primarydevice_blocks;
++ sbi->total_blocks = sbi->dif0.blocks;
+ if (!erofs_sb_has_device_table(sbi))
+ ondisk_extradevs = 0;
+ else
+@@ -322,7 +322,7 @@ static int erofs_read_superblock(struct super_block *sb)
+ sbi->sb_size);
+ goto out;
+ }
+- sbi->primarydevice_blocks = le32_to_cpu(dsb->blocks);
++ sbi->dif0.blocks = le32_to_cpu(dsb->blocks);
+ sbi->meta_blkaddr = le32_to_cpu(dsb->meta_blkaddr);
+ #ifdef CONFIG_EROFS_FS_XATTR
+ sbi->xattr_blkaddr = le32_to_cpu(dsb->xattr_blkaddr);
+@@ -379,14 +379,8 @@ static void erofs_default_options(struct erofs_sb_info *sbi)
+ }
+
+ enum {
+- Opt_user_xattr,
+- Opt_acl,
+- Opt_cache_strategy,
+- Opt_dax,
+- Opt_dax_enum,
+- Opt_device,
+- Opt_fsid,
+- Opt_domain_id,
++ Opt_user_xattr, Opt_acl, Opt_cache_strategy, Opt_dax, Opt_dax_enum,
++ Opt_device, Opt_fsid, Opt_domain_id, Opt_directio,
+ Opt_err
+ };
+
+@@ -413,6 +407,7 @@ static const struct fs_parameter_spec erofs_fs_parameters[] = {
+ fsparam_string("device", Opt_device),
+ fsparam_string("fsid", Opt_fsid),
+ fsparam_string("domain_id", Opt_domain_id),
++ fsparam_flag_no("directio", Opt_directio),
+ {}
+ };
+
+@@ -526,6 +521,16 @@ static int erofs_fc_parse_param(struct fs_context *fc,
+ errorfc(fc, "%s option not supported", erofs_fs_parameters[opt].name);
+ break;
+ #endif
++ case Opt_directio:
++#ifdef CONFIG_EROFS_FS_BACKED_BY_FILE
++ if (result.boolean)
++ set_opt(&sbi->opt, DIRECT_IO);
++ else
++ clear_opt(&sbi->opt, DIRECT_IO);
++#else
++ errorfc(fc, "%s option not supported", erofs_fs_parameters[opt].name);
++#endif
++ break;
+ default:
+ return -ENOPARAM;
+ }
+@@ -617,9 +622,8 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
+ return -EINVAL;
+ }
+
+- sbi->dax_dev = fs_dax_get_by_bdev(sb->s_bdev,
+- &sbi->dax_part_off,
+- NULL, NULL);
++ sbi->dif0.dax_dev = fs_dax_get_by_bdev(sb->s_bdev,
++ &sbi->dif0.dax_part_off, NULL, NULL);
+ }
+
+ err = erofs_read_superblock(sb);
+@@ -642,7 +646,7 @@ static int erofs_fc_fill_super(struct super_block *sb, struct fs_context *fc)
+ }
+
+ if (test_opt(&sbi->opt, DAX_ALWAYS)) {
+- if (!sbi->dax_dev) {
++ if (!sbi->dif0.dax_dev) {
+ errorfc(fc, "DAX unsupported by block device. Turning off DAX.");
+ clear_opt(&sbi->opt, DAX_ALWAYS);
+ } else if (sbi->blkszbits != PAGE_SHIFT) {
+@@ -718,16 +722,18 @@ static int erofs_fc_get_tree(struct fs_context *fc)
+ GET_TREE_BDEV_QUIET_LOOKUP : 0);
+ #ifdef CONFIG_EROFS_FS_BACKED_BY_FILE
+ if (ret == -ENOTBLK) {
++ struct file *file;
++
+ if (!fc->source)
+ return invalf(fc, "No source specified");
+- sbi->fdev = filp_open(fc->source, O_RDONLY | O_LARGEFILE, 0);
+- if (IS_ERR(sbi->fdev))
+- return PTR_ERR(sbi->fdev);
++ file = filp_open(fc->source, O_RDONLY | O_LARGEFILE, 0);
++ if (IS_ERR(file))
++ return PTR_ERR(file);
++ sbi->dif0.file = file;
+
+- if (S_ISREG(file_inode(sbi->fdev)->i_mode) &&
+- sbi->fdev->f_mapping->a_ops->read_folio)
++ if (S_ISREG(file_inode(sbi->dif0.file)->i_mode) &&
++ sbi->dif0.file->f_mapping->a_ops->read_folio)
+ return get_tree_nodev(fc, erofs_fc_fill_super);
+- fput(sbi->fdev);
+ }
+ #endif
+ return ret;
+@@ -778,19 +784,24 @@ static void erofs_free_dev_context(struct erofs_dev_context *devs)
+ kfree(devs);
+ }
+
+-static void erofs_fc_free(struct fs_context *fc)
++static void erofs_sb_free(struct erofs_sb_info *sbi)
+ {
+- struct erofs_sb_info *sbi = fc->s_fs_info;
+-
+- if (!sbi)
+- return;
+-
+ erofs_free_dev_context(sbi->devs);
+ kfree(sbi->fsid);
+ kfree(sbi->domain_id);
++ if (sbi->dif0.file)
++ fput(sbi->dif0.file);
+ kfree(sbi);
+ }
+
++static void erofs_fc_free(struct fs_context *fc)
++{
++ struct erofs_sb_info *sbi = fc->s_fs_info;
++
++ if (sbi) /* free here if an error occurs before transferring to sb */
++ erofs_sb_free(sbi);
++}
++
+ static const struct fs_context_operations erofs_context_ops = {
+ .parse_param = erofs_fc_parse_param,
+ .get_tree = erofs_fc_get_tree,
+@@ -824,19 +835,14 @@ static void erofs_kill_sb(struct super_block *sb)
+ {
+ struct erofs_sb_info *sbi = EROFS_SB(sb);
+
+- if ((IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && sbi->fsid) || sbi->fdev)
++ if ((IS_ENABLED(CONFIG_EROFS_FS_ONDEMAND) && sbi->fsid) ||
++ sbi->dif0.file)
+ kill_anon_super(sb);
+ else
+ kill_block_super(sb);
+-
+- erofs_free_dev_context(sbi->devs);
+- fs_put_dax(sbi->dax_dev, NULL);
++ fs_put_dax(sbi->dif0.dax_dev, NULL);
+ erofs_fscache_unregister_fs(sb);
+- kfree(sbi->fsid);
+- kfree(sbi->domain_id);
+- if (sbi->fdev)
+- fput(sbi->fdev);
+- kfree(sbi);
++ erofs_sb_free(sbi);
+ sb->s_fs_info = NULL;
+ }
+
+@@ -962,6 +968,8 @@ static int erofs_show_options(struct seq_file *seq, struct dentry *root)
+ seq_puts(seq, ",dax=always");
+ if (test_opt(opt, DAX_NEVER))
+ seq_puts(seq, ",dax=never");
++ if (erofs_is_fileio_mode(sbi) && test_opt(opt, DIRECT_IO))
++ seq_puts(seq, ",directio");
+ #ifdef CONFIG_EROFS_FS_ONDEMAND
+ if (sbi->fsid)
+ seq_printf(seq, ",fsid=%s", sbi->fsid);
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index a569ff9dfd0442..1a00f061798a3c 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -1679,9 +1679,9 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
+ erofs_fscache_submit_bio(bio);
+ else
+ submit_bio(bio);
+- if (memstall)
+- psi_memstall_leave(&pflags);
+ }
++ if (memstall)
++ psi_memstall_leave(&pflags);
+
+ /*
+ * although background is preferred, no one is pending for submission.
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index 90fbab6b6f0363..1a06e462b6efba 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1373,7 +1373,10 @@ static int ep_poll_callback(wait_queue_entry_t *wait, unsigned mode, int sync, v
+ break;
+ }
+ }
+- wake_up(&ep->wq);
++ if (sync)
++ wake_up_sync(&ep->wq);
++ else
++ wake_up(&ep->wq);
+ }
+ if (waitqueue_active(&ep->poll_wait))
+ pwake++;
+diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
+index 5cf327337e2276..c0856585bb6386 100644
+--- a/fs/hugetlbfs/inode.c
++++ b/fs/hugetlbfs/inode.c
+@@ -893,7 +893,7 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
+ error = PTR_ERR(folio);
+ goto out;
+ }
+- folio_zero_user(folio, ALIGN_DOWN(addr, hpage_size));
++ folio_zero_user(folio, addr);
+ __folio_mark_uptodate(folio);
+ error = hugetlb_add_to_page_cache(folio, mapping, index);
+ if (unlikely(error)) {
+diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c
+index 0d16b383a45262..5f582713bf05eb 100644
+--- a/fs/nfs/pnfs.c
++++ b/fs/nfs/pnfs.c
+@@ -1308,7 +1308,7 @@ pnfs_prepare_layoutreturn(struct pnfs_layout_hdr *lo,
+ enum pnfs_iomode *iomode)
+ {
+ /* Serialise LAYOUTGET/LAYOUTRETURN */
+- if (atomic_read(&lo->plh_outstanding) != 0)
++ if (atomic_read(&lo->plh_outstanding) != 0 && lo->plh_return_seq == 0)
+ return false;
+ if (test_and_set_bit(NFS_LAYOUT_RETURN_LOCK, &lo->plh_flags))
+ return false;
+diff --git a/fs/nilfs2/btnode.c b/fs/nilfs2/btnode.c
+index 501ad7be5174cb..54a3fa0cf67edb 100644
+--- a/fs/nilfs2/btnode.c
++++ b/fs/nilfs2/btnode.c
+@@ -35,6 +35,7 @@ void nilfs_init_btnc_inode(struct inode *btnc_inode)
+ ii->i_flags = 0;
+ memset(&ii->i_bmap_data, 0, sizeof(struct nilfs_bmap));
+ mapping_set_gfp_mask(btnc_inode->i_mapping, GFP_NOFS);
++ btnc_inode->i_mapping->a_ops = &nilfs_buffer_cache_aops;
+ }
+
+ void nilfs_btnode_cache_clear(struct address_space *btnc)
+diff --git a/fs/nilfs2/gcinode.c b/fs/nilfs2/gcinode.c
+index ace22253fed0f2..2dbb15767df16e 100644
+--- a/fs/nilfs2/gcinode.c
++++ b/fs/nilfs2/gcinode.c
+@@ -163,7 +163,7 @@ int nilfs_init_gcinode(struct inode *inode)
+
+ inode->i_mode = S_IFREG;
+ mapping_set_gfp_mask(inode->i_mapping, GFP_NOFS);
+- inode->i_mapping->a_ops = &empty_aops;
++ inode->i_mapping->a_ops = &nilfs_buffer_cache_aops;
+
+ ii->i_flags = 0;
+ nilfs_bmap_init_gc(ii->i_bmap);
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index be6acf6e2bfc59..aaca34ec678f26 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -307,6 +307,10 @@ const struct address_space_operations nilfs_aops = {
+ .is_partially_uptodate = block_is_partially_uptodate,
+ };
+
++const struct address_space_operations nilfs_buffer_cache_aops = {
++ .invalidate_folio = block_invalidate_folio,
++};
++
+ static int nilfs_insert_inode_locked(struct inode *inode,
+ struct nilfs_root *root,
+ unsigned long ino)
+@@ -575,8 +579,14 @@ struct inode *nilfs_iget(struct super_block *sb, struct nilfs_root *root,
+ inode = nilfs_iget_locked(sb, root, ino);
+ if (unlikely(!inode))
+ return ERR_PTR(-ENOMEM);
+- if (!(inode->i_state & I_NEW))
++
++ if (!(inode->i_state & I_NEW)) {
++ if (!inode->i_nlink) {
++ iput(inode);
++ return ERR_PTR(-ESTALE);
++ }
+ return inode;
++ }
+
+ err = __nilfs_read_inode(sb, root, ino, inode);
+ if (unlikely(err)) {
+@@ -706,6 +716,7 @@ struct inode *nilfs_iget_for_shadow(struct inode *inode)
+ NILFS_I(s_inode)->i_flags = 0;
+ memset(NILFS_I(s_inode)->i_bmap, 0, sizeof(struct nilfs_bmap));
+ mapping_set_gfp_mask(s_inode->i_mapping, GFP_NOFS);
++ s_inode->i_mapping->a_ops = &nilfs_buffer_cache_aops;
+
+ err = nilfs_attach_btree_node_cache(s_inode);
+ if (unlikely(err)) {
+diff --git a/fs/nilfs2/namei.c b/fs/nilfs2/namei.c
+index 9b108052d9f71f..1d836a5540f3b1 100644
+--- a/fs/nilfs2/namei.c
++++ b/fs/nilfs2/namei.c
+@@ -67,6 +67,11 @@ nilfs_lookup(struct inode *dir, struct dentry *dentry, unsigned int flags)
+ inode = NULL;
+ } else {
+ inode = nilfs_iget(dir->i_sb, NILFS_I(dir)->i_root, ino);
++ if (inode == ERR_PTR(-ESTALE)) {
++ nilfs_error(dir->i_sb,
++ "deleted inode referenced: %lu", ino);
++ return ERR_PTR(-EIO);
++ }
+ }
+
+ return d_splice_alias(inode, dentry);
+diff --git a/fs/nilfs2/nilfs.h b/fs/nilfs2/nilfs.h
+index 45d03826eaf157..dff241c53fc583 100644
+--- a/fs/nilfs2/nilfs.h
++++ b/fs/nilfs2/nilfs.h
+@@ -401,6 +401,7 @@ extern const struct file_operations nilfs_dir_operations;
+ extern const struct inode_operations nilfs_file_inode_operations;
+ extern const struct file_operations nilfs_file_operations;
+ extern const struct address_space_operations nilfs_aops;
++extern const struct address_space_operations nilfs_buffer_cache_aops;
+ extern const struct inode_operations nilfs_dir_inode_operations;
+ extern const struct inode_operations nilfs_special_inode_operations;
+ extern const struct inode_operations nilfs_symlink_inode_operations;
+diff --git a/fs/ocfs2/localalloc.c b/fs/ocfs2/localalloc.c
+index 5df34561c551c6..d1aa04a5af1b1c 100644
+--- a/fs/ocfs2/localalloc.c
++++ b/fs/ocfs2/localalloc.c
+@@ -971,9 +971,9 @@ static int ocfs2_sync_local_to_main(struct ocfs2_super *osb,
+ start = count = 0;
+ left = le32_to_cpu(alloc->id1.bitmap1.i_total);
+
+- while ((bit_off = ocfs2_find_next_zero_bit(bitmap, left, start)) <
+- left) {
+- if (bit_off == start) {
++ while (1) {
++ bit_off = ocfs2_find_next_zero_bit(bitmap, left, start);
++ if ((bit_off < left) && (bit_off == start)) {
+ count++;
+ start++;
+ continue;
+@@ -998,6 +998,8 @@ static int ocfs2_sync_local_to_main(struct ocfs2_super *osb,
+ }
+ }
+
++ if (bit_off >= left)
++ break;
+ count = 1;
+ start = bit_off + 1;
+ }
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index feff3324d39c6d..fe40152b915d82 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -987,9 +987,13 @@ clean_demultiplex_info(struct TCP_Server_Info *server)
+ msleep(125);
+ if (cifs_rdma_enabled(server))
+ smbd_destroy(server);
++
+ if (server->ssocket) {
+ sock_release(server->ssocket);
+ server->ssocket = NULL;
++
++ /* Release netns reference for the socket. */
++ put_net(cifs_net_ns(server));
+ }
+
+ if (!list_empty(&server->pending_mid_q)) {
+@@ -1037,6 +1041,7 @@ clean_demultiplex_info(struct TCP_Server_Info *server)
+ */
+ }
+
++ /* Release netns reference for this server. */
+ put_net(cifs_net_ns(server));
+ kfree(server->leaf_fullpath);
+ kfree(server);
+@@ -1713,6 +1718,8 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+
+ tcp_ses->ops = ctx->ops;
+ tcp_ses->vals = ctx->vals;
++
++ /* Grab netns reference for this server. */
+ cifs_set_net_ns(tcp_ses, get_net(current->nsproxy->net_ns));
+
+ tcp_ses->conn_id = atomic_inc_return(&tcpSesNextId);
+@@ -1844,6 +1851,7 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+ out_err_crypto_release:
+ cifs_crypto_secmech_release(tcp_ses);
+
++ /* Release netns reference for this server. */
+ put_net(cifs_net_ns(tcp_ses));
+
+ out_err:
+@@ -1852,8 +1860,10 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+ cifs_put_tcp_session(tcp_ses->primary_server, false);
+ kfree(tcp_ses->hostname);
+ kfree(tcp_ses->leaf_fullpath);
+- if (tcp_ses->ssocket)
++ if (tcp_ses->ssocket) {
+ sock_release(tcp_ses->ssocket);
++ put_net(cifs_net_ns(tcp_ses));
++ }
+ kfree(tcp_ses);
+ }
+ return ERR_PTR(rc);
+@@ -3111,20 +3121,20 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ socket = server->ssocket;
+ } else {
+ struct net *net = cifs_net_ns(server);
+- struct sock *sk;
+
+- rc = __sock_create(net, sfamily, SOCK_STREAM,
+- IPPROTO_TCP, &server->ssocket, 1);
++ rc = sock_create_kern(net, sfamily, SOCK_STREAM, IPPROTO_TCP, &server->ssocket);
+ if (rc < 0) {
+ cifs_server_dbg(VFS, "Error %d creating socket\n", rc);
+ return rc;
+ }
+
+- sk = server->ssocket->sk;
+- __netns_tracker_free(net, &sk->ns_tracker, false);
+- sk->sk_net_refcnt = 1;
+- get_net_track(net, &sk->ns_tracker, GFP_KERNEL);
+- sock_inuse_add(net, 1);
++ /*
++ * Grab netns reference for the socket.
++ *
++ * It'll be released here, on error, or in clean_demultiplex_info() upon server
++ * teardown.
++ */
++ get_net(net);
+
+ /* BB other socket options to set KEEPALIVE, NODELAY? */
+ cifs_dbg(FYI, "Socket created\n");
+@@ -3138,8 +3148,10 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ }
+
+ rc = bind_socket(server);
+- if (rc < 0)
++ if (rc < 0) {
++ put_net(cifs_net_ns(server));
+ return rc;
++ }
+
+ /*
+ * Eventually check for other socket options to change from
+@@ -3176,6 +3188,7 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ if (rc < 0) {
+ cifs_dbg(FYI, "Error %d connecting to server\n", rc);
+ trace_smb3_connect_err(server->hostname, server->conn_id, &server->dstaddr, rc);
++ put_net(cifs_net_ns(server));
+ sock_release(socket);
+ server->ssocket = NULL;
+ return rc;
+@@ -3184,6 +3197,9 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ if (sport == htons(RFC1001_PORT))
+ rc = ip_rfc1001_connect(server);
+
++ if (rc < 0)
++ put_net(cifs_net_ns(server));
++
+ return rc;
+ }
+
+diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c
+index e6a72f75ab94ba..bf45822db5d589 100644
+--- a/fs/smb/server/connection.c
++++ b/fs/smb/server/connection.c
+@@ -70,7 +70,6 @@ struct ksmbd_conn *ksmbd_conn_alloc(void)
+ atomic_set(&conn->req_running, 0);
+ atomic_set(&conn->r_count, 0);
+ atomic_set(&conn->refcnt, 1);
+- atomic_set(&conn->mux_smb_requests, 0);
+ conn->total_credits = 1;
+ conn->outstanding_credits = 0;
+
+@@ -120,8 +119,8 @@ void ksmbd_conn_enqueue_request(struct ksmbd_work *work)
+ if (conn->ops->get_cmd_val(work) != SMB2_CANCEL_HE)
+ requests_queue = &conn->requests;
+
++ atomic_inc(&conn->req_running);
+ if (requests_queue) {
+- atomic_inc(&conn->req_running);
+ spin_lock(&conn->request_lock);
+ list_add_tail(&work->request_entry, requests_queue);
+ spin_unlock(&conn->request_lock);
+@@ -132,11 +131,14 @@ void ksmbd_conn_try_dequeue_request(struct ksmbd_work *work)
+ {
+ struct ksmbd_conn *conn = work->conn;
+
++ atomic_dec(&conn->req_running);
++ if (waitqueue_active(&conn->req_running_q))
++ wake_up(&conn->req_running_q);
++
+ if (list_empty(&work->request_entry) &&
+ list_empty(&work->async_request_entry))
+ return;
+
+- atomic_dec(&conn->req_running);
+ spin_lock(&conn->request_lock);
+ list_del_init(&work->request_entry);
+ spin_unlock(&conn->request_lock);
+@@ -308,7 +310,7 @@ int ksmbd_conn_handler_loop(void *p)
+ {
+ struct ksmbd_conn *conn = (struct ksmbd_conn *)p;
+ struct ksmbd_transport *t = conn->transport;
+- unsigned int pdu_size, max_allowed_pdu_size;
++ unsigned int pdu_size, max_allowed_pdu_size, max_req;
+ char hdr_buf[4] = {0,};
+ int size;
+
+@@ -318,6 +320,7 @@ int ksmbd_conn_handler_loop(void *p)
+ if (t->ops->prepare && t->ops->prepare(t))
+ goto out;
+
++ max_req = server_conf.max_inflight_req;
+ conn->last_active = jiffies;
+ set_freezable();
+ while (ksmbd_conn_alive(conn)) {
+@@ -327,6 +330,13 @@ int ksmbd_conn_handler_loop(void *p)
+ kvfree(conn->request_buf);
+ conn->request_buf = NULL;
+
++recheck:
++ if (atomic_read(&conn->req_running) + 1 > max_req) {
++ wait_event_interruptible(conn->req_running_q,
++ atomic_read(&conn->req_running) < max_req);
++ goto recheck;
++ }
++
+ size = t->ops->read(t, hdr_buf, sizeof(hdr_buf), -1);
+ if (size != sizeof(hdr_buf))
+ break;
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index 8ddd5a3c7bafb6..b379ae4fdcdffa 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -107,7 +107,6 @@ struct ksmbd_conn {
+ __le16 signing_algorithm;
+ bool binding;
+ atomic_t refcnt;
+- atomic_t mux_smb_requests;
+ };
+
+ struct ksmbd_conn_ops {
+diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c
+index 698af37e988d7b..d146b0e7c3a9dd 100644
+--- a/fs/smb/server/server.c
++++ b/fs/smb/server/server.c
+@@ -270,7 +270,6 @@ static void handle_ksmbd_work(struct work_struct *wk)
+
+ ksmbd_conn_try_dequeue_request(work);
+ ksmbd_free_work_struct(work);
+- atomic_dec(&conn->mux_smb_requests);
+ /*
+ * Checking waitqueue to dropping pending requests on
+ * disconnection. waitqueue_active is safe because it
+@@ -300,11 +299,6 @@ static int queue_ksmbd_work(struct ksmbd_conn *conn)
+ if (err)
+ return 0;
+
+- if (atomic_inc_return(&conn->mux_smb_requests) >= conn->vals->max_credits) {
+- atomic_dec_return(&conn->mux_smb_requests);
+- return -ENOSPC;
+- }
+-
+ work = ksmbd_alloc_work_struct();
+ if (!work) {
+ pr_err("allocation for work failed\n");
+@@ -367,6 +361,7 @@ static int server_conf_init(void)
+ server_conf.auth_mechs |= KSMBD_AUTH_KRB5 |
+ KSMBD_AUTH_MSKRB5;
+ #endif
++ server_conf.max_inflight_req = SMB2_MAX_CREDITS;
+ return 0;
+ }
+
+diff --git a/fs/smb/server/server.h b/fs/smb/server/server.h
+index 4fc529335271f7..94187628ff089f 100644
+--- a/fs/smb/server/server.h
++++ b/fs/smb/server/server.h
+@@ -42,6 +42,7 @@ struct ksmbd_server_config {
+ struct smb_sid domain_sid;
+ unsigned int auth_mechs;
+ unsigned int max_connections;
++ unsigned int max_inflight_req;
+
+ char *conf[SERVER_CONF_WORK_GROUP + 1];
+ struct task_struct *dh_task;
+diff --git a/fs/smb/server/transport_ipc.c b/fs/smb/server/transport_ipc.c
+index 2f27afb695f62e..6de351cc2b60e0 100644
+--- a/fs/smb/server/transport_ipc.c
++++ b/fs/smb/server/transport_ipc.c
+@@ -319,8 +319,11 @@ static int ipc_server_config_on_startup(struct ksmbd_startup_request *req)
+ init_smb2_max_write_size(req->smb2_max_write);
+ if (req->smb2_max_trans)
+ init_smb2_max_trans_size(req->smb2_max_trans);
+- if (req->smb2_max_credits)
++ if (req->smb2_max_credits) {
+ init_smb2_max_credits(req->smb2_max_credits);
++ server_conf.max_inflight_req =
++ req->smb2_max_credits;
++ }
+ if (req->smbd_max_io_size)
+ init_smbd_max_io_size(req->smbd_max_io_size);
+
+diff --git a/fs/xfs/libxfs/xfs_ialloc.c b/fs/xfs/libxfs/xfs_ialloc.c
+index 271855227514cb..6258527315f28b 100644
+--- a/fs/xfs/libxfs/xfs_ialloc.c
++++ b/fs/xfs/libxfs/xfs_ialloc.c
+@@ -855,7 +855,8 @@ xfs_ialloc_ag_alloc(
+ * the end of the AG.
+ */
+ args.min_agbno = args.mp->m_sb.sb_inoalignmt;
+- args.max_agbno = round_down(args.mp->m_sb.sb_agblocks,
++ args.max_agbno = round_down(xfs_ag_block_count(args.mp,
++ pag->pag_agno),
+ args.mp->m_sb.sb_inoalignmt) -
+ igeo->ialloc_blks;
+
+@@ -2332,9 +2333,9 @@ xfs_difree(
+ return -EINVAL;
+ }
+ agbno = XFS_AGINO_TO_AGBNO(mp, agino);
+- if (agbno >= mp->m_sb.sb_agblocks) {
+- xfs_warn(mp, "%s: agbno >= mp->m_sb.sb_agblocks (%d >= %d).",
+- __func__, agbno, mp->m_sb.sb_agblocks);
++ if (agbno >= xfs_ag_block_count(mp, pag->pag_agno)) {
++ xfs_warn(mp, "%s: agbno >= xfs_ag_block_count (%d >= %d).",
++ __func__, agbno, xfs_ag_block_count(mp, pag->pag_agno));
+ ASSERT(0);
+ return -EINVAL;
+ }
+@@ -2457,7 +2458,7 @@ xfs_imap(
+ */
+ agino = XFS_INO_TO_AGINO(mp, ino);
+ agbno = XFS_AGINO_TO_AGBNO(mp, agino);
+- if (agbno >= mp->m_sb.sb_agblocks ||
++ if (agbno >= xfs_ag_block_count(mp, pag->pag_agno) ||
+ ino != XFS_AGINO_TO_INO(mp, pag->pag_agno, agino)) {
+ error = -EINVAL;
+ #ifdef DEBUG
+@@ -2467,11 +2468,12 @@ xfs_imap(
+ */
+ if (flags & XFS_IGET_UNTRUSTED)
+ return error;
+- if (agbno >= mp->m_sb.sb_agblocks) {
++ if (agbno >= xfs_ag_block_count(mp, pag->pag_agno)) {
+ xfs_alert(mp,
+ "%s: agbno (0x%llx) >= mp->m_sb.sb_agblocks (0x%lx)",
+ __func__, (unsigned long long)agbno,
+- (unsigned long)mp->m_sb.sb_agblocks);
++ (unsigned long)xfs_ag_block_count(mp,
++ pag->pag_agno));
+ }
+ if (ino != XFS_AGINO_TO_INO(mp, pag->pag_agno, agino)) {
+ xfs_alert(mp,
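These xfs_ialloc.c changes stop validating AG block numbers against mp->m_sb.sb_agblocks, which is the size of a full AG: the last AG of a filesystem may be shorter, so the full-size bound would accept block numbers past the end of a runt AG. A simplified sketch of what a per-AG block count looks like (the real xfs_ag_block_count() reads this through the perag structure; parameter names here are illustrative):

    /*
     * Blocks in AG 'agno': every AG except possibly the last is
     * full-sized; the last AG holds whatever remains of dblocks.
     */
    static unsigned int ag_block_count(unsigned int agno, unsigned int agcount,
                                       unsigned long long dblocks,
                                       unsigned int agblocks)
    {
            if (agno < agcount - 1)
                    return agblocks;
            return (unsigned int)(dblocks -
                            (unsigned long long)(agcount - 1) * agblocks);
    }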
+diff --git a/fs/xfs/libxfs/xfs_sb.c b/fs/xfs/libxfs/xfs_sb.c
+index 02ebcbc4882f5b..e27b63281d012a 100644
+--- a/fs/xfs/libxfs/xfs_sb.c
++++ b/fs/xfs/libxfs/xfs_sb.c
+@@ -391,6 +391,21 @@ xfs_validate_sb_common(
+ sbp->sb_inoalignmt, align);
+ return -EINVAL;
+ }
++
++ if (sbp->sb_spino_align &&
++ (sbp->sb_spino_align > sbp->sb_inoalignmt ||
++ (sbp->sb_inoalignmt % sbp->sb_spino_align) != 0)) {
++ xfs_warn(mp,
++"Sparse inode alignment (%u) is invalid, must be integer factor of (%u).",
++ sbp->sb_spino_align,
++ sbp->sb_inoalignmt);
++ return -EINVAL;
++ }
++ } else if (sbp->sb_spino_align) {
++ xfs_warn(mp,
++ "Sparse inode alignment (%u) should be zero.",
++ sbp->sb_spino_align);
++ return -EINVAL;
+ }
+ } else if (sbp->sb_qflags & (XFS_PQUOTA_ENFD | XFS_GQUOTA_ENFD |
+ XFS_PQUOTA_CHKD | XFS_GQUOTA_CHKD)) {
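The new xfs_sb.c check rejects superblocks whose sparse inode alignment is inconsistent with the inode cluster alignment. The predicate it enforces is small enough to state on its own (a standalone sketch, not the kernel function):

    #include <stdbool.h>

    /*
     * A nonzero sparse inode alignment must be no larger than, and an
     * integer factor of, the inode cluster alignment; outside the
     * aligned-inode case it must be zero.
     */
    static bool spino_align_valid(unsigned int spino_align,
                                  unsigned int inoalignmt)
    {
            if (!spino_align)
                    return true;
            return spino_align <= inoalignmt &&
                   inoalignmt % spino_align == 0;
    }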
+diff --git a/fs/xfs/scrub/agheader.c b/fs/xfs/scrub/agheader.c
+index da30f926cbe66d..0f2f1852d58fe7 100644
+--- a/fs/xfs/scrub/agheader.c
++++ b/fs/xfs/scrub/agheader.c
+@@ -59,6 +59,30 @@ xchk_superblock_xref(
+ /* scrub teardown will take care of sc->sa for us */
+ }
+
++/*
++ * Calculate the ondisk superblock size in bytes given the feature set of the
++ * mounted filesystem (aka the primary sb). This is subtly different from
++ * the logic in xfs_repair, which computes the size of a secondary sb given the
++ * feature set listed in the secondary sb.
++ */
++STATIC size_t
++xchk_superblock_ondisk_size(
++ struct xfs_mount *mp)
++{
++ if (xfs_has_metauuid(mp))
++ return offsetofend(struct xfs_dsb, sb_meta_uuid);
++ if (xfs_has_crc(mp))
++ return offsetofend(struct xfs_dsb, sb_lsn);
++ if (xfs_sb_version_hasmorebits(&mp->m_sb))
++ return offsetofend(struct xfs_dsb, sb_bad_features2);
++ if (xfs_has_logv2(mp))
++ return offsetofend(struct xfs_dsb, sb_logsunit);
++ if (xfs_has_sector(mp))
++ return offsetofend(struct xfs_dsb, sb_logsectsize);
++ /* only support dirv2 or more recent */
++ return offsetofend(struct xfs_dsb, sb_dirblklog);
++}
++
+ /*
+ * Scrub the filesystem superblock.
+ *
+@@ -75,6 +99,7 @@ xchk_superblock(
+ struct xfs_buf *bp;
+ struct xfs_dsb *sb;
+ struct xfs_perag *pag;
++ size_t sblen;
+ xfs_agnumber_t agno;
+ uint32_t v2_ok;
+ __be32 features_mask;
+@@ -350,8 +375,8 @@ xchk_superblock(
+ }
+
+ /* Everything else must be zero. */
+- if (memchr_inv(sb + 1, 0,
+- BBTOB(bp->b_length) - sizeof(struct xfs_dsb)))
++ sblen = xchk_superblock_ondisk_size(mp);
++ if (memchr_inv((char *)sb + sblen, 0, BBTOB(bp->b_length) - sblen))
+ xchk_block_set_corrupt(sc, bp);
+
+ xchk_superblock_xref(sc, bp);
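The scrub change above computes how many bytes of the ondisk superblock are actually defined by the mounted feature set, then requires everything after that point to be zero; the old code hardcoded sizeof(struct xfs_dsb) and so could miss stray bytes between the last valid field and the structure's end. The two building blocks, sketched in plain C (offsetofend() is a real kernel macro, expanded here for illustration):

    #include <stddef.h>
    #include <stdbool.h>

    /* One past the last byte of MEMBER, as in the kernel's stddef.h. */
    #define offsetofend(TYPE, MEMBER) \
            (offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

    /* Everything past the feature-determined length must be zero. */
    static bool sb_tail_is_zero(const unsigned char *buf, size_t sblen,
                                size_t buflen)
    {
            for (size_t i = sblen; i < buflen; i++)
                    if (buf[i])
                            return false;   /* stray data: flag corruption */
            return true;
    }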
+diff --git a/fs/xfs/xfs_fsmap.c b/fs/xfs/xfs_fsmap.c
+index ae18ab86e608b5..8712b891defbc7 100644
+--- a/fs/xfs/xfs_fsmap.c
++++ b/fs/xfs/xfs_fsmap.c
+@@ -162,7 +162,8 @@ struct xfs_getfsmap_info {
+ xfs_daddr_t next_daddr; /* next daddr we expect */
+ /* daddr of low fsmap key when we're using the rtbitmap */
+ xfs_daddr_t low_daddr;
+- xfs_daddr_t end_daddr; /* daddr of high fsmap key */
++ /* daddr of high fsmap key, or the last daddr on the device */
++ xfs_daddr_t end_daddr;
+ u64 missing_owner; /* owner of holes */
+ u32 dev; /* device id */
+ /*
+@@ -306,7 +307,7 @@ xfs_getfsmap_helper(
+ * Note that if the btree query found a mapping, there won't be a gap.
+ */
+ if (info->last && info->end_daddr != XFS_BUF_DADDR_NULL)
+- rec_daddr = info->end_daddr;
++ rec_daddr = info->end_daddr + 1;
+
+ /* Are we just counting mappings? */
+ if (info->head->fmh_count == 0) {
+@@ -898,7 +899,10 @@ xfs_getfsmap(
+ struct xfs_trans *tp = NULL;
+ struct xfs_fsmap dkeys[2]; /* per-dev keys */
+ struct xfs_getfsmap_dev handlers[XFS_GETFSMAP_DEVS];
+- struct xfs_getfsmap_info info = { NULL };
++ struct xfs_getfsmap_info info = {
++ .fsmap_recs = fsmap_recs,
++ .head = head,
++ };
+ bool use_rmap;
+ int i;
+ int error = 0;
+@@ -963,9 +967,6 @@ xfs_getfsmap(
+
+ info.next_daddr = head->fmh_keys[0].fmr_physical +
+ head->fmh_keys[0].fmr_length;
+- info.end_daddr = XFS_BUF_DADDR_NULL;
+- info.fsmap_recs = fsmap_recs;
+- info.head = head;
+
+ /* For each device we support... */
+ for (i = 0; i < XFS_GETFSMAP_DEVS; i++) {
+@@ -978,17 +979,23 @@ xfs_getfsmap(
+ break;
+
+ /*
+- * If this device number matches the high key, we have
+- * to pass the high key to the handler to limit the
+- * query results. If the device number exceeds the
+- * low key, zero out the low key so that we get
+- * everything from the beginning.
++ * If this device number matches the high key, we have to pass
++ * the high key to the handler to limit the query results, and
++ * set the end_daddr so that we can synthesize records at the
++ * end of the query range or device.
+ */
+ if (handlers[i].dev == head->fmh_keys[1].fmr_device) {
+ dkeys[1] = head->fmh_keys[1];
+ info.end_daddr = min(handlers[i].nr_sectors - 1,
+ dkeys[1].fmr_physical);
++ } else {
++ info.end_daddr = handlers[i].nr_sectors - 1;
+ }
++
++ /*
++ * If the device number exceeds the low key, zero out the low
++ * key so that we get everything from the beginning.
++ */
+ if (handlers[i].dev > head->fmh_keys[0].fmr_device)
+ memset(&dkeys[0], 0, sizeof(struct xfs_fsmap));
+
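In the fsmap changes, end_daddr is an inclusive bound, so a record synthesized to close out a query must resume one sector past it, and every device now gets an end_daddr (its last daddr) so trailing holes are reported even when the device is not named in the high key. A small sketch of the inclusive-bound arithmetic (hypothetical types, not the kernel structs):

    struct hole { unsigned long long start, len; };

    /* Hole from the end of the last mapping through end_daddr. */
    static struct hole trailing_hole(unsigned long long next_daddr,
                                     unsigned long long end_daddr)
    {
            struct hole h = { next_daddr, 0 };

            if (end_daddr >= next_daddr)
                    h.len = end_daddr - next_daddr + 1; /* inclusive bound */
            return h;   /* the next record would start at end_daddr + 1 */
    }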
+diff --git a/include/clocksource/hyperv_timer.h b/include/clocksource/hyperv_timer.h
+index 6cdc873ac907f5..aa5233b1eba970 100644
+--- a/include/clocksource/hyperv_timer.h
++++ b/include/clocksource/hyperv_timer.h
+@@ -38,6 +38,8 @@ extern void hv_remap_tsc_clocksource(void);
+ extern unsigned long hv_get_tsc_pfn(void);
+ extern struct ms_hyperv_tsc_page *hv_get_tsc_page(void);
+
++extern void hv_adj_sched_clock_offset(u64 offset);
++
+ static __always_inline bool
+ hv_read_tsc_page_tsc(const struct ms_hyperv_tsc_page *tsc_pg,
+ u64 *cur_tsc, u64 *time)
+diff --git a/include/linux/alloc_tag.h b/include/linux/alloc_tag.h
+index 941deffc590dfd..6073a8f13c413c 100644
+--- a/include/linux/alloc_tag.h
++++ b/include/linux/alloc_tag.h
+@@ -48,7 +48,12 @@ static inline void set_codetag_empty(union codetag_ref *ref)
+ #else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */
+
+ static inline bool is_codetag_empty(union codetag_ref *ref) { return false; }
+-static inline void set_codetag_empty(union codetag_ref *ref) {}
++
++static inline void set_codetag_empty(union codetag_ref *ref)
++{
++ if (ref)
++ ref->ct = NULL;
++}
+
+ #endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */
+
+diff --git a/include/linux/arm_ffa.h b/include/linux/arm_ffa.h
+index a28e2a6a13d05a..74169dd0f65948 100644
+--- a/include/linux/arm_ffa.h
++++ b/include/linux/arm_ffa.h
+@@ -166,9 +166,12 @@ static inline void *ffa_dev_get_drvdata(struct ffa_device *fdev)
+ return dev_get_drvdata(&fdev->dev);
+ }
+
++struct ffa_partition_info;
++
+ #if IS_REACHABLE(CONFIG_ARM_FFA_TRANSPORT)
+-struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id,
+- const struct ffa_ops *ops);
++struct ffa_device *
++ffa_device_register(const struct ffa_partition_info *part_info,
++ const struct ffa_ops *ops);
+ void ffa_device_unregister(struct ffa_device *ffa_dev);
+ int ffa_driver_register(struct ffa_driver *driver, struct module *owner,
+ const char *mod_name);
+@@ -176,9 +179,9 @@ void ffa_driver_unregister(struct ffa_driver *driver);
+ bool ffa_device_is_valid(struct ffa_device *ffa_dev);
+
+ #else
+-static inline
+-struct ffa_device *ffa_device_register(const uuid_t *uuid, int vm_id,
+- const struct ffa_ops *ops)
++static inline struct ffa_device *
++ffa_device_register(const struct ffa_partition_info *part_info,
++ const struct ffa_ops *ops)
+ {
+ return NULL;
+ }
+diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
+index 22c22fb9104214..02a226bcf0edc9 100644
+--- a/include/linux/hyperv.h
++++ b/include/linux/hyperv.h
+@@ -1559,6 +1559,7 @@ struct hv_util_service {
+ void *channel;
+ void (*util_cb)(void *);
+ int (*util_init)(struct hv_util_service *);
++ int (*util_init_transport)(void);
+ void (*util_deinit)(void);
+ int (*util_pre_suspend)(void);
+ int (*util_pre_resume)(void);
+diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
+index e123d5e17b5261..85fe4e6b275c7d 100644
+--- a/include/linux/io_uring.h
++++ b/include/linux/io_uring.h
+@@ -15,10 +15,8 @@ bool io_is_uring_fops(struct file *file);
+
+ static inline void io_uring_files_cancel(void)
+ {
+- if (current->io_uring) {
+- io_uring_unreg_ringfd();
++ if (current->io_uring)
+ __io_uring_cancel(false);
+- }
+ }
+ static inline void io_uring_task_cancel(void)
+ {
+diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
+index 74aa9fbbdae70b..48c66b84668281 100644
+--- a/include/linux/page-flags.h
++++ b/include/linux/page-flags.h
+@@ -860,18 +860,10 @@ static inline void ClearPageCompound(struct page *page)
+ ClearPageHead(page);
+ }
+ FOLIO_FLAG(large_rmappable, FOLIO_SECOND_PAGE)
+-FOLIO_TEST_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
+-/*
+- * PG_partially_mapped is protected by deferred_split split_queue_lock,
+- * so its safe to use non-atomic set/clear.
+- */
+-__FOLIO_SET_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
+-__FOLIO_CLEAR_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
++FOLIO_FLAG(partially_mapped, FOLIO_SECOND_PAGE)
+ #else
+ FOLIO_FLAG_FALSE(large_rmappable)
+-FOLIO_TEST_FLAG_FALSE(partially_mapped)
+-__FOLIO_SET_FLAG_NOOP(partially_mapped)
+-__FOLIO_CLEAR_FLAG_NOOP(partially_mapped)
++FOLIO_FLAG_FALSE(partially_mapped)
+ #endif
+
+ #define PG_head_mask ((1UL << PG_head))
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index bb343136ddd05d..c14446c6164d72 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -656,6 +656,12 @@ struct sched_dl_entity {
+ * @dl_defer_armed tells if the deferrable server is waiting
+ * for the replenishment timer to activate it.
+ *
++ * @dl_server_active tells if the dlserver is active (started).
++ * dlserver is started on the first cfs enqueue on an idle runqueue
++ * and is stopped when a dequeue results in 0 cfs tasks on the
++ * runqueue. In other words, dlserver is active only when the CPU's
++ * runqueue has at least one cfs task.
++ *
+ * @dl_defer_running tells if the deferrable server is actually
+ * running, skipping the defer phase.
+ */
+@@ -664,6 +670,7 @@ struct sched_dl_entity {
+ unsigned int dl_non_contending : 1;
+ unsigned int dl_overrun : 1;
+ unsigned int dl_server : 1;
++ unsigned int dl_server_active : 1;
+ unsigned int dl_defer : 1;
+ unsigned int dl_defer_armed : 1;
+ unsigned int dl_defer_running : 1;
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index 42bedcddd5113e..4df2ff81d3dea5 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -285,7 +285,8 @@ struct trace_event_fields {
+ const char *name;
+ const int size;
+ const int align;
+- const int is_signed;
++ const unsigned int is_signed:1;
++ unsigned int needs_test:1;
+ const int filter_type;
+ const int len;
+ };
+@@ -337,6 +338,7 @@ enum {
+ TRACE_EVENT_FL_EPROBE_BIT,
+ TRACE_EVENT_FL_FPROBE_BIT,
+ TRACE_EVENT_FL_CUSTOM_BIT,
++ TRACE_EVENT_FL_TEST_STR_BIT,
+ };
+
+ /*
+@@ -354,6 +356,7 @@ enum {
+ * CUSTOM - Event is a custom event (to be attached to an existing tracepoint)
+ * This is set when the custom event has not been attached
+ * to a tracepoint yet, then it is cleared when it is.
++ * TEST_STR - The event has a "%s" that points to a string outside the event
+ */
+ enum {
+ TRACE_EVENT_FL_FILTERED = (1 << TRACE_EVENT_FL_FILTERED_BIT),
+@@ -367,6 +370,7 @@ enum {
+ TRACE_EVENT_FL_EPROBE = (1 << TRACE_EVENT_FL_EPROBE_BIT),
+ TRACE_EVENT_FL_FPROBE = (1 << TRACE_EVENT_FL_FPROBE_BIT),
+ TRACE_EVENT_FL_CUSTOM = (1 << TRACE_EVENT_FL_CUSTOM_BIT),
++ TRACE_EVENT_FL_TEST_STR = (1 << TRACE_EVENT_FL_TEST_STR_BIT),
+ };
+
+ #define TRACE_EVENT_FL_UKPROBE (TRACE_EVENT_FL_KPROBE | TRACE_EVENT_FL_UPROBE)
+diff --git a/include/linux/wait.h b/include/linux/wait.h
+index 8aa3372f21a080..2b322a9b88a2bd 100644
+--- a/include/linux/wait.h
++++ b/include/linux/wait.h
+@@ -221,6 +221,7 @@ void __wake_up_pollfree(struct wait_queue_head *wq_head);
+ #define wake_up_all(x) __wake_up(x, TASK_NORMAL, 0, NULL)
+ #define wake_up_locked(x) __wake_up_locked((x), TASK_NORMAL, 1)
+ #define wake_up_all_locked(x) __wake_up_locked((x), TASK_NORMAL, 0)
++#define wake_up_sync(x) __wake_up_sync(x, TASK_NORMAL)
+
+ #define wake_up_interruptible(x) __wake_up(x, TASK_INTERRUPTIBLE, 1, NULL)
+ #define wake_up_interruptible_nr(x, nr) __wake_up(x, TASK_INTERRUPTIBLE, nr, NULL)
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index b2736e3491b862..9849da128364af 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -515,7 +515,11 @@ static void io_queue_iowq(struct io_kiocb *req)
+ struct io_uring_task *tctx = req->task->io_uring;
+
+ BUG_ON(!tctx);
+- BUG_ON(!tctx->io_wq);
++
++ if ((current->flags & PF_KTHREAD) || !tctx->io_wq) {
++ io_req_task_queue_fail(req, -ECANCELED);
++ return;
++ }
+
+ /* init ->work of the whole link before punting */
+ io_prep_async_link(req);
+@@ -3230,6 +3234,7 @@ __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd)
+
+ void __io_uring_cancel(bool cancel_all)
+ {
++ io_uring_unreg_ringfd();
+ io_uring_cancel_generic(cancel_all, NULL);
+ }
+
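Taken together, the io_uring hunks move ring-fd unregistration into __io_uring_cancel() so it runs on every cancellation path, and turn the io_queue_iowq() assertion into graceful failure: a kthread submitter or an already-torn-down io-wq cancels the request rather than tripping BUG_ON(). A toy model of that rule ('io_wq' and 'is_kthread' stand in for tctx->io_wq and PF_KTHREAD; this is not io_uring's API):

    #include <errno.h>

    struct req { int result; };

    static int fail_req(struct req *r, int err)
    {
            r->result = err;
            return err;
    }

    /* Queue to the worker pool, or cancel when queueing is impossible. */
    static int queue_iowq(struct req *r, void *io_wq, int is_kthread)
    {
            if (is_kthread || !io_wq)
                    return fail_req(r, -ECANCELED);
            return 0;   /* would punt to the worker pool here */
    }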
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 6cc12777bb11ab..d07dc87787dff3 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1300,7 +1300,7 @@ bool sched_can_stop_tick(struct rq *rq)
+ if (scx_enabled() && !scx_can_stop_tick(rq))
+ return false;
+
+- if (rq->cfs.nr_running > 1)
++ if (rq->cfs.h_nr_running > 1)
+ return false;
+
+ /*
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index fc6f41ac33eb13..a17c23b53049cc 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -1647,6 +1647,7 @@ void dl_server_start(struct sched_dl_entity *dl_se)
+ if (!dl_se->dl_runtime)
+ return;
+
++ dl_se->dl_server_active = 1;
+ enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP);
+ if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl))
+ resched_curr(dl_se->rq);
+@@ -1661,6 +1662,7 @@ void dl_server_stop(struct sched_dl_entity *dl_se)
+ hrtimer_try_to_cancel(&dl_se->dl_timer);
+ dl_se->dl_defer_armed = 0;
+ dl_se->dl_throttled = 0;
++ dl_se->dl_server_active = 0;
+ }
+
+ void dl_server_init(struct sched_dl_entity *dl_se, struct rq *rq,
+@@ -2420,8 +2422,10 @@ static struct task_struct *__pick_task_dl(struct rq *rq)
+ if (dl_server(dl_se)) {
+ p = dl_se->server_pick_task(dl_se);
+ if (!p) {
+- dl_se->dl_yielded = 1;
+- update_curr_dl_se(rq, dl_se, 0);
++ if (dl_server_active(dl_se)) {
++ dl_se->dl_yielded = 1;
++ update_curr_dl_se(rq, dl_se, 0);
++ }
+ goto again;
+ }
+ rq->dl_server = dl_se;
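The new dl_server_active bit brackets the window between dl_server_start() and dl_server_stop(), and __pick_task_dl() now charges yield time only inside that window, so a stopped server can no longer be accounted against. Modeled as a toy state machine (not the scheduler's types):

    #include <stdbool.h>

    struct dl_server_model {
            bool active;
            unsigned long long charged;
    };

    static void server_start(struct dl_server_model *s) { s->active = true; }
    static void server_stop(struct dl_server_model *s)  { s->active = false; }

    /* Yield time is charged only while the server is started. */
    static void server_yield(struct dl_server_model *s, unsigned long long dt)
    {
            if (s->active)
                    s->charged += dt;
    }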
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index f4035c7a0fa1df..82b165bf48c423 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -844,6 +844,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq)
+ SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "spread", SPLIT_NS(spread));
+ SEQ_printf(m, " .%-30s: %d\n", "nr_running", cfs_rq->nr_running);
+ SEQ_printf(m, " .%-30s: %d\n", "h_nr_running", cfs_rq->h_nr_running);
++ SEQ_printf(m, " .%-30s: %d\n", "h_nr_delayed", cfs_rq->h_nr_delayed);
+ SEQ_printf(m, " .%-30s: %d\n", "idle_nr_running",
+ cfs_rq->idle_nr_running);
+ SEQ_printf(m, " .%-30s: %d\n", "idle_h_nr_running",
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 782ce70ebd1b08..1ca96c99872f08 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -1159,8 +1159,6 @@ static inline void update_curr_task(struct task_struct *p, s64 delta_exec)
+ trace_sched_stat_runtime(p, delta_exec);
+ account_group_exec_runtime(p, delta_exec);
+ cgroup_account_cputime(p, delta_exec);
+- if (p->dl_server)
+- dl_server_update(p->dl_server, delta_exec);
+ }
+
+ static inline bool did_preempt_short(struct cfs_rq *cfs_rq, struct sched_entity *curr)
+@@ -1237,11 +1235,16 @@ static void update_curr(struct cfs_rq *cfs_rq)
+ update_curr_task(p, delta_exec);
+
+ /*
+- * Any fair task that runs outside of fair_server should
+- * account against fair_server such that it can account for
+- * this time and possibly avoid running this period.
++ * If the fair_server is active, we need to account for the
++ * fair_server time whether or not the task is running on
++ * behalf of the fair_server:
++ * - If the task is running on behalf of fair_server, we need
++ * to limit its time based on the assigned runtime.
++ * - A fair task that runs outside of fair_server should account
++ * against fair_server such that it can account for this time
++ * and possibly avoid running this period.
+ */
+- if (p->dl_server != &rq->fair_server)
++ if (dl_server_active(&rq->fair_server))
+ dl_server_update(&rq->fair_server, delta_exec);
+ }
+
+@@ -5471,9 +5474,33 @@ static void clear_buddies(struct cfs_rq *cfs_rq, struct sched_entity *se)
+
+ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
+
+-static inline void finish_delayed_dequeue_entity(struct sched_entity *se)
++static void set_delayed(struct sched_entity *se)
++{
++ se->sched_delayed = 1;
++ for_each_sched_entity(se) {
++ struct cfs_rq *cfs_rq = cfs_rq_of(se);
++
++ cfs_rq->h_nr_delayed++;
++ if (cfs_rq_throttled(cfs_rq))
++ break;
++ }
++}
++
++static void clear_delayed(struct sched_entity *se)
+ {
+ se->sched_delayed = 0;
++ for_each_sched_entity(se) {
++ struct cfs_rq *cfs_rq = cfs_rq_of(se);
++
++ cfs_rq->h_nr_delayed--;
++ if (cfs_rq_throttled(cfs_rq))
++ break;
++ }
++}
++
++static inline void finish_delayed_dequeue_entity(struct sched_entity *se)
++{
++ clear_delayed(se);
+ if (sched_feat(DELAY_ZERO) && se->vlag > 0)
+ se->vlag = 0;
+ }
+@@ -5484,6 +5511,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+ bool sleep = flags & DEQUEUE_SLEEP;
+
+ update_curr(cfs_rq);
++ clear_buddies(cfs_rq, se);
+
+ if (flags & DEQUEUE_DELAYED) {
+ SCHED_WARN_ON(!se->sched_delayed);
+@@ -5500,10 +5528,8 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+
+ if (sched_feat(DELAY_DEQUEUE) && delay &&
+ !entity_eligible(cfs_rq, se)) {
+- if (cfs_rq->next == se)
+- cfs_rq->next = NULL;
+ update_load_avg(cfs_rq, se, 0);
+- se->sched_delayed = 1;
++ set_delayed(se);
+ return false;
+ }
+ }
+@@ -5526,8 +5552,6 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+
+ update_stats_dequeue_fair(cfs_rq, se, flags);
+
+- clear_buddies(cfs_rq, se);
+-
+ update_entity_lag(cfs_rq, se);
+ if (sched_feat(PLACE_REL_DEADLINE) && !sleep) {
+ se->deadline -= se->vruntime;
+@@ -5923,7 +5947,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
+ struct rq *rq = rq_of(cfs_rq);
+ struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
+ struct sched_entity *se;
+- long task_delta, idle_task_delta, dequeue = 1;
++ long task_delta, idle_task_delta, delayed_delta, dequeue = 1;
+ long rq_h_nr_running = rq->cfs.h_nr_running;
+
+ raw_spin_lock(&cfs_b->lock);
+@@ -5956,6 +5980,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
+
+ task_delta = cfs_rq->h_nr_running;
+ idle_task_delta = cfs_rq->idle_h_nr_running;
++ delayed_delta = cfs_rq->h_nr_delayed;
+ for_each_sched_entity(se) {
+ struct cfs_rq *qcfs_rq = cfs_rq_of(se);
+ int flags;
+@@ -5979,6 +6004,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
+
+ qcfs_rq->h_nr_running -= task_delta;
+ qcfs_rq->idle_h_nr_running -= idle_task_delta;
++ qcfs_rq->h_nr_delayed -= delayed_delta;
+
+ if (qcfs_rq->load.weight) {
+ /* Avoid re-evaluating load for this entity: */
+@@ -6001,6 +6027,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq)
+
+ qcfs_rq->h_nr_running -= task_delta;
+ qcfs_rq->idle_h_nr_running -= idle_task_delta;
++ qcfs_rq->h_nr_delayed -= delayed_delta;
+ }
+
+ /* At this point se is NULL and we are at root level*/
+@@ -6026,7 +6053,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
+ struct rq *rq = rq_of(cfs_rq);
+ struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg);
+ struct sched_entity *se;
+- long task_delta, idle_task_delta;
++ long task_delta, idle_task_delta, delayed_delta;
+ long rq_h_nr_running = rq->cfs.h_nr_running;
+
+ se = cfs_rq->tg->se[cpu_of(rq)];
+@@ -6062,6 +6089,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
+
+ task_delta = cfs_rq->h_nr_running;
+ idle_task_delta = cfs_rq->idle_h_nr_running;
++ delayed_delta = cfs_rq->h_nr_delayed;
+ for_each_sched_entity(se) {
+ struct cfs_rq *qcfs_rq = cfs_rq_of(se);
+
+@@ -6079,6 +6107,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
+
+ qcfs_rq->h_nr_running += task_delta;
+ qcfs_rq->idle_h_nr_running += idle_task_delta;
++ qcfs_rq->h_nr_delayed += delayed_delta;
+
+ /* end evaluation on encountering a throttled cfs_rq */
+ if (cfs_rq_throttled(qcfs_rq))
+@@ -6096,6 +6125,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
+
+ qcfs_rq->h_nr_running += task_delta;
+ qcfs_rq->idle_h_nr_running += idle_task_delta;
++ qcfs_rq->h_nr_delayed += delayed_delta;
+
+ /* end evaluation on encountering a throttled cfs_rq */
+ if (cfs_rq_throttled(qcfs_rq))
+@@ -6949,7 +6979,7 @@ requeue_delayed_entity(struct sched_entity *se)
+ }
+
+ update_load_avg(cfs_rq, se, 0);
+- se->sched_delayed = 0;
++ clear_delayed(se);
+ }
+
+ /*
+@@ -6963,6 +6993,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+ struct cfs_rq *cfs_rq;
+ struct sched_entity *se = &p->se;
+ int idle_h_nr_running = task_has_idle_policy(p);
++ int h_nr_delayed = 0;
+ int task_new = !(flags & ENQUEUE_WAKEUP);
+ int rq_h_nr_running = rq->cfs.h_nr_running;
+ u64 slice = 0;
+@@ -6989,6 +7020,9 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+ if (p->in_iowait)
+ cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT);
+
++ if (task_new)
++ h_nr_delayed = !!se->sched_delayed;
++
+ for_each_sched_entity(se) {
+ if (se->on_rq) {
+ if (se->sched_delayed)
+@@ -7011,6 +7045,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+
+ cfs_rq->h_nr_running++;
+ cfs_rq->idle_h_nr_running += idle_h_nr_running;
++ cfs_rq->h_nr_delayed += h_nr_delayed;
+
+ if (cfs_rq_is_idle(cfs_rq))
+ idle_h_nr_running = 1;
+@@ -7034,6 +7069,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+
+ cfs_rq->h_nr_running++;
+ cfs_rq->idle_h_nr_running += idle_h_nr_running;
++ cfs_rq->h_nr_delayed += h_nr_delayed;
+
+ if (cfs_rq_is_idle(cfs_rq))
+ idle_h_nr_running = 1;
+@@ -7096,6 +7132,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
+ struct task_struct *p = NULL;
+ int idle_h_nr_running = 0;
+ int h_nr_running = 0;
++ int h_nr_delayed = 0;
+ struct cfs_rq *cfs_rq;
+ u64 slice = 0;
+
+@@ -7103,6 +7140,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
+ p = task_of(se);
+ h_nr_running = 1;
+ idle_h_nr_running = task_has_idle_policy(p);
++ if (!task_sleep && !task_delayed)
++ h_nr_delayed = !!se->sched_delayed;
+ } else {
+ cfs_rq = group_cfs_rq(se);
+ slice = cfs_rq_min_slice(cfs_rq);
+@@ -7120,6 +7159,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
+
+ cfs_rq->h_nr_running -= h_nr_running;
+ cfs_rq->idle_h_nr_running -= idle_h_nr_running;
++ cfs_rq->h_nr_delayed -= h_nr_delayed;
+
+ if (cfs_rq_is_idle(cfs_rq))
+ idle_h_nr_running = h_nr_running;
+@@ -7158,6 +7198,7 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
+
+ cfs_rq->h_nr_running -= h_nr_running;
+ cfs_rq->idle_h_nr_running -= idle_h_nr_running;
++ cfs_rq->h_nr_delayed -= h_nr_delayed;
+
+ if (cfs_rq_is_idle(cfs_rq))
+ idle_h_nr_running = h_nr_running;
+@@ -8786,7 +8827,7 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
+ if (unlikely(throttled_hierarchy(cfs_rq_of(pse))))
+ return;
+
+- if (sched_feat(NEXT_BUDDY) && !(wake_flags & WF_FORK)) {
++ if (sched_feat(NEXT_BUDDY) && !(wake_flags & WF_FORK) && !pse->sched_delayed) {
+ set_next_buddy(pse);
+ }
+
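The fair.c and pelt.c changes introduce h_nr_delayed, a hierarchical count of delayed-dequeue entities, so that load tracking and runnable weight see only entities that are genuinely runnable: both now use h_nr_running - h_nr_delayed. The invariant, reduced to a sketch:

    /*
     * Every cfs_rq on an entity's hierarchy counts its delayed
     * entities; PELT's runnable input is the non-delayed remainder.
     */
    struct cfs_rq_model {
            unsigned int h_nr_running;
            unsigned int h_nr_delayed;
    };

    static unsigned int runnable_for_pelt(const struct cfs_rq_model *rq)
    {
            return rq->h_nr_running - rq->h_nr_delayed;
    }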
+diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
+index a9c65d97b3cac6..171a802420a10a 100644
+--- a/kernel/sched/pelt.c
++++ b/kernel/sched/pelt.c
+@@ -321,7 +321,7 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq)
+ {
+ if (___update_load_sum(now, &cfs_rq->avg,
+ scale_load_down(cfs_rq->load.weight),
+- cfs_rq->h_nr_running,
++ cfs_rq->h_nr_running - cfs_rq->h_nr_delayed,
+ cfs_rq->curr != NULL)) {
+
+ ___update_load_avg(&cfs_rq->avg, 1);
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index c03b3d7b320e9c..f2ef520513c4a2 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -398,6 +398,11 @@ extern void __dl_server_attach_root(struct sched_dl_entity *dl_se, struct rq *rq
+ extern int dl_server_apply_params(struct sched_dl_entity *dl_se,
+ u64 runtime, u64 period, bool init);
+
++static inline bool dl_server_active(struct sched_dl_entity *dl_se)
++{
++ return dl_se->dl_server_active;
++}
++
+ #ifdef CONFIG_CGROUP_SCHED
+
+ extern struct list_head task_groups;
+@@ -649,6 +654,7 @@ struct cfs_rq {
+ unsigned int h_nr_running; /* SCHED_{NORMAL,BATCH,IDLE} */
+ unsigned int idle_nr_running; /* SCHED_IDLE */
+ unsigned int idle_h_nr_running; /* SCHED_IDLE */
++ unsigned int h_nr_delayed;
+
+ s64 avg_vruntime;
+ u64 avg_load;
+@@ -898,8 +904,11 @@ struct dl_rq {
+
+ static inline void se_update_runnable(struct sched_entity *se)
+ {
+- if (!entity_is_task(se))
+- se->runnable_weight = se->my_q->h_nr_running;
++ if (!entity_is_task(se)) {
++ struct cfs_rq *cfs_rq = se->my_q;
++
++ se->runnable_weight = cfs_rq->h_nr_running - cfs_rq->h_nr_delayed;
++ }
+ }
+
+ static inline long se_runnable(struct sched_entity *se)
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index 69e226a48daa92..72bcbfad53db04 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -1160,7 +1160,7 @@ void fgraph_update_pid_func(void)
+ static int start_graph_tracing(void)
+ {
+ unsigned long **ret_stack_list;
+- int ret;
++ int ret, cpu;
+
+ ret_stack_list = kcalloc(FTRACE_RETSTACK_ALLOC_SIZE,
+ sizeof(*ret_stack_list), GFP_KERNEL);
+@@ -1168,6 +1168,12 @@ static int start_graph_tracing(void)
+ if (!ret_stack_list)
+ return -ENOMEM;
+
++ /* The cpu_boot init_task->ret_stack will never be freed */
++ for_each_online_cpu(cpu) {
++ if (!idle_task(cpu)->ret_stack)
++ ftrace_graph_init_idle_task(idle_task(cpu), cpu);
++ }
++
+ do {
+ ret = alloc_retstack_tasklist(ret_stack_list);
+ } while (ret == -EAGAIN);
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 366eb4c4f28e57..703978b2d557d7 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -7019,7 +7019,11 @@ static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
+ lockdep_assert_held(&cpu_buffer->mapping_lock);
+
+ nr_subbufs = cpu_buffer->nr_pages + 1; /* + reader-subbuf */
+- nr_pages = ((nr_subbufs + 1) << subbuf_order) - pgoff; /* + meta-page */
++ nr_pages = ((nr_subbufs + 1) << subbuf_order); /* + meta-page */
++ if (nr_pages <= pgoff)
++ return -EINVAL;
++
++ nr_pages -= pgoff;
+
+ nr_vma_pages = vma_pages(vma);
+ if (!nr_vma_pages || nr_vma_pages > nr_pages)
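The ring-buffer fix validates the user-supplied mmap offset before subtracting it: the old expression computed the page count and subtracted pgoff in one step, so a pgoff larger than the buffer wrapped the unsigned count around to a huge value instead of failing. A standalone model of the corrected order of operations:

    /* Mappable pages for a given user offset, or -1 if out of range. */
    static long nr_mappable_pages(unsigned long nr_subbufs,
                                  unsigned int subbuf_order,
                                  unsigned long pgoff)
    {
            unsigned long nr_pages = (nr_subbufs + 1) << subbuf_order;

            if (nr_pages <= pgoff)      /* reject before subtracting */
                    return -1;          /* the kernel returns -EINVAL */
            return (long)(nr_pages - pgoff);
    }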
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 17d2ffde0bb604..35515192aa0fda 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -3635,17 +3635,12 @@ char *trace_iter_expand_format(struct trace_iterator *iter)
+ }
+
+ /* Returns true if the string is safe to dereference from an event */
+-static bool trace_safe_str(struct trace_iterator *iter, const char *str,
+- bool star, int len)
++static bool trace_safe_str(struct trace_iterator *iter, const char *str)
+ {
+ unsigned long addr = (unsigned long)str;
+ struct trace_event *trace_event;
+ struct trace_event_call *event;
+
+- /* Ignore strings with no length */
+- if (star && !len)
+- return true;
+-
+ /* OK if part of the event data */
+ if ((addr >= (unsigned long)iter->ent) &&
+ (addr < (unsigned long)iter->ent + iter->ent_size))
+@@ -3685,181 +3680,69 @@ static bool trace_safe_str(struct trace_iterator *iter, const char *str,
+ return false;
+ }
+
+-static DEFINE_STATIC_KEY_FALSE(trace_no_verify);
+-
+-static int test_can_verify_check(const char *fmt, ...)
+-{
+- char buf[16];
+- va_list ap;
+- int ret;
+-
+- /*
+- * The verifier is dependent on vsnprintf() modifies the va_list
+- * passed to it, where it is sent as a reference. Some architectures
+- * (like x86_32) passes it by value, which means that vsnprintf()
+- * does not modify the va_list passed to it, and the verifier
+- * would then need to be able to understand all the values that
+- * vsnprintf can use. If it is passed by value, then the verifier
+- * is disabled.
+- */
+- va_start(ap, fmt);
+- vsnprintf(buf, 16, "%d", ap);
+- ret = va_arg(ap, int);
+- va_end(ap);
+-
+- return ret;
+-}
+-
+-static void test_can_verify(void)
+-{
+- if (!test_can_verify_check("%d %d", 0, 1)) {
+- pr_info("trace event string verifier disabled\n");
+- static_branch_inc(&trace_no_verify);
+- }
+-}
+-
+ /**
+- * trace_check_vprintf - Check dereferenced strings while writing to the seq buffer
++ * ignore_event - Check dereferenced fields while writing to the seq buffer
+ * @iter: The iterator that holds the seq buffer and the event being printed
+- * @fmt: The format used to print the event
+- * @ap: The va_list holding the data to print from @fmt.
+ *
+- * This writes the data into the @iter->seq buffer using the data from
+- * @fmt and @ap. If the format has a %s, then the source of the string
+- * is examined to make sure it is safe to print, otherwise it will
+- * warn and print "[UNSAFE MEMORY]" in place of the dereferenced string
+- * pointer.
++ * At boot up, test_event_printk() will flag any event that dereferences
++ * a string with "%s" that does exist in the ring buffer. It may still
++ * be valid, as the string may point to a static string in the kernel
++ * rodata that never gets freed. But if the string pointer is pointing
++ * to something that was allocated, there's a chance that it can be freed
++ * by the time the user reads the trace. This would cause a bad memory
++ * access by the kernel and possibly crash the system.
++ *
++ * This function will check if the event has any fields flagged as needing
++ * to be checked at runtime and perform those checks.
++ *
++ * If it is found that a field is unsafe, it will write into the @iter->seq
++ * a message stating what was found to be unsafe.
++ *
++ * @return: true if the event is unsafe and should be ignored,
++ * false otherwise.
+ */
+-void trace_check_vprintf(struct trace_iterator *iter, const char *fmt,
+- va_list ap)
++bool ignore_event(struct trace_iterator *iter)
+ {
+- long text_delta = 0;
+- long data_delta = 0;
+- const char *p = fmt;
+- const char *str;
+- bool good;
+- int i, j;
++ struct ftrace_event_field *field;
++ struct trace_event *trace_event;
++ struct trace_event_call *event;
++ struct list_head *head;
++ struct trace_seq *seq;
++ const void *ptr;
+
+- if (WARN_ON_ONCE(!fmt))
+- return;
++ trace_event = ftrace_find_event(iter->ent->type);
+
+- if (static_branch_unlikely(&trace_no_verify))
+- goto print;
++ seq = &iter->seq;
+
+- /*
+- * When the kernel is booted with the tp_printk command line
+- * parameter, trace events go directly through to printk().
+- * It also is checked by this function, but it does not
+- * have an associated trace_array (tr) for it.
+- */
+- if (iter->tr) {
+- text_delta = iter->tr->text_delta;
+- data_delta = iter->tr->data_delta;
++ if (!trace_event) {
++ trace_seq_printf(seq, "EVENT ID %d NOT FOUND?\n", iter->ent->type);
++ return true;
+ }
+
+- /* Don't bother checking when doing a ftrace_dump() */
+- if (iter->fmt == static_fmt_buf)
+- goto print;
+-
+- while (*p) {
+- bool star = false;
+- int len = 0;
+-
+- j = 0;
+-
+- /*
+- * We only care about %s and variants
+- * as well as %p[sS] if delta is non-zero
+- */
+- for (i = 0; p[i]; i++) {
+- if (i + 1 >= iter->fmt_size) {
+- /*
+- * If we can't expand the copy buffer,
+- * just print it.
+- */
+- if (!trace_iter_expand_format(iter))
+- goto print;
+- }
+-
+- if (p[i] == '\\' && p[i+1]) {
+- i++;
+- continue;
+- }
+- if (p[i] == '%') {
+- /* Need to test cases like %08.*s */
+- for (j = 1; p[i+j]; j++) {
+- if (isdigit(p[i+j]) ||
+- p[i+j] == '.')
+- continue;
+- if (p[i+j] == '*') {
+- star = true;
+- continue;
+- }
+- break;
+- }
+- if (p[i+j] == 's')
+- break;
+-
+- if (text_delta && p[i+1] == 'p' &&
+- ((p[i+2] == 's' || p[i+2] == 'S')))
+- break;
+-
+- star = false;
+- }
+- j = 0;
+- }
+- /* If no %s found then just print normally */
+- if (!p[i])
+- break;
+-
+- /* Copy up to the %s, and print that */
+- strncpy(iter->fmt, p, i);
+- iter->fmt[i] = '\0';
+- trace_seq_vprintf(&iter->seq, iter->fmt, ap);
++ event = container_of(trace_event, struct trace_event_call, event);
++ if (!(event->flags & TRACE_EVENT_FL_TEST_STR))
++ return false;
+
+- /* Add delta to %pS pointers */
+- if (p[i+1] == 'p') {
+- unsigned long addr;
+- char fmt[4];
++ head = trace_get_fields(event);
++ if (!head) {
++ trace_seq_printf(seq, "FIELDS FOR EVENT '%s' NOT FOUND?\n",
++ trace_event_name(event));
++ return true;
++ }
+
+- fmt[0] = '%';
+- fmt[1] = 'p';
+- fmt[2] = p[i+2]; /* Either %ps or %pS */
+- fmt[3] = '\0';
++ /* Offsets are from the iter->ent that points to the raw event */
++ ptr = iter->ent;
+
+- addr = va_arg(ap, unsigned long);
+- addr += text_delta;
+- trace_seq_printf(&iter->seq, fmt, (void *)addr);
++ list_for_each_entry(field, head, link) {
++ const char *str;
++ bool good;
+
+- p += i + 3;
++ if (!field->needs_test)
+ continue;
+- }
+
+- /*
+- * If iter->seq is full, the above call no longer guarantees
+- * that ap is in sync with fmt processing, and further calls
+- * to va_arg() can return wrong positional arguments.
+- *
+- * Ensure that ap is no longer used in this case.
+- */
+- if (iter->seq.full) {
+- p = "";
+- break;
+- }
++ str = *(const char **)(ptr + field->offset);
+
+- if (star)
+- len = va_arg(ap, int);
+-
+- /* The ap now points to the string data of the %s */
+- str = va_arg(ap, const char *);
+-
+- good = trace_safe_str(iter, str, star, len);
+-
+- /* Could be from the last boot */
+- if (data_delta && !good) {
+- str += data_delta;
+- good = trace_safe_str(iter, str, star, len);
+- }
++ good = trace_safe_str(iter, str);
+
+ /*
+ * If you hit this warning, it is likely that the
+@@ -3870,44 +3753,14 @@ void trace_check_vprintf(struct trace_iterator *iter, const char *fmt,
+ * instead. See samples/trace_events/trace-events-sample.h
+ * for reference.
+ */
+- if (WARN_ONCE(!good, "fmt: '%s' current_buffer: '%s'",
+- fmt, seq_buf_str(&iter->seq.seq))) {
+- int ret;
+-
+- /* Try to safely read the string */
+- if (star) {
+- if (len + 1 > iter->fmt_size)
+- len = iter->fmt_size - 1;
+- if (len < 0)
+- len = 0;
+- ret = copy_from_kernel_nofault(iter->fmt, str, len);
+- iter->fmt[len] = 0;
+- star = false;
+- } else {
+- ret = strncpy_from_kernel_nofault(iter->fmt, str,
+- iter->fmt_size);
+- }
+- if (ret < 0)
+- trace_seq_printf(&iter->seq, "(0x%px)", str);
+- else
+- trace_seq_printf(&iter->seq, "(0x%px:%s)",
+- str, iter->fmt);
+- str = "[UNSAFE-MEMORY]";
+- strcpy(iter->fmt, "%s");
+- } else {
+- strncpy(iter->fmt, p + i, j + 1);
+- iter->fmt[j+1] = '\0';
++ if (WARN_ONCE(!good, "event '%s' has unsafe pointer field '%s'",
++ trace_event_name(event), field->name)) {
++ trace_seq_printf(seq, "EVENT %s: HAS UNSAFE POINTER FIELD '%s'\n",
++ trace_event_name(event), field->name);
++ return true;
+ }
+- if (star)
+- trace_seq_printf(&iter->seq, iter->fmt, len, str);
+- else
+- trace_seq_printf(&iter->seq, iter->fmt, str);
+-
+- p += i + j + 1;
+ }
+- print:
+- if (*p)
+- trace_seq_vprintf(&iter->seq, p, ap);
++ return false;
+ }
+
+ const char *trace_event_format(struct trace_iterator *iter, const char *fmt)
+@@ -4377,6 +4230,15 @@ static enum print_line_t print_trace_fmt(struct trace_iterator *iter)
+ if (event) {
+ if (tr->trace_flags & TRACE_ITER_FIELDS)
+ return print_event_fields(iter, event);
++ /*
++ * For TRACE_EVENT() events, the print_fmt is not
++ * safe to use if the array has delta offsets
++ * Force printing via the fields.
++ */
++ if ((tr->text_delta || tr->data_delta) &&
++ event->type > __TRACE_LAST_TYPE)
++ return print_event_fields(iter, event);
++
+ return event->funcs->trace(iter, sym_flags, event);
+ }
+
+@@ -10794,8 +10656,6 @@ __init static int tracer_alloc_buffers(void)
+
+ register_snapshot_cmd();
+
+- test_can_verify();
+-
+ return 0;
+
+ out_free_pipe_cpumask:
+diff --git a/kernel/trace/trace.h b/kernel/trace/trace.h
+index 30d6675c78cfe1..04ea327198ba80 100644
+--- a/kernel/trace/trace.h
++++ b/kernel/trace/trace.h
+@@ -664,9 +664,8 @@ void trace_buffer_unlock_commit_nostack(struct trace_buffer *buffer,
+
+ bool trace_is_tracepoint_string(const char *str);
+ const char *trace_event_format(struct trace_iterator *iter, const char *fmt);
+-void trace_check_vprintf(struct trace_iterator *iter, const char *fmt,
+- va_list ap) __printf(2, 0);
+ char *trace_iter_expand_format(struct trace_iterator *iter);
++bool ignore_event(struct trace_iterator *iter);
+
+ int trace_empty(struct trace_iterator *iter);
+
+@@ -1402,7 +1401,8 @@ struct ftrace_event_field {
+ int filter_type;
+ int offset;
+ int size;
+- int is_signed;
++ unsigned int is_signed:1;
++ unsigned int needs_test:1;
+ int len;
+ };
+
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 7266ec2a4eea00..7149cd6fd4795e 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -82,7 +82,7 @@ static int system_refcount_dec(struct event_subsystem *system)
+ }
+
+ static struct ftrace_event_field *
+-__find_event_field(struct list_head *head, char *name)
++__find_event_field(struct list_head *head, const char *name)
+ {
+ struct ftrace_event_field *field;
+
+@@ -114,7 +114,8 @@ trace_find_event_field(struct trace_event_call *call, char *name)
+
+ static int __trace_define_field(struct list_head *head, const char *type,
+ const char *name, int offset, int size,
+- int is_signed, int filter_type, int len)
++ int is_signed, int filter_type, int len,
++ int need_test)
+ {
+ struct ftrace_event_field *field;
+
+@@ -133,6 +134,7 @@ static int __trace_define_field(struct list_head *head, const char *type,
+ field->offset = offset;
+ field->size = size;
+ field->is_signed = is_signed;
++ field->needs_test = need_test;
+ field->len = len;
+
+ list_add(&field->link, head);
+@@ -151,13 +153,13 @@ int trace_define_field(struct trace_event_call *call, const char *type,
+
+ head = trace_get_fields(call);
+ return __trace_define_field(head, type, name, offset, size,
+- is_signed, filter_type, 0);
++ is_signed, filter_type, 0, 0);
+ }
+ EXPORT_SYMBOL_GPL(trace_define_field);
+
+ static int trace_define_field_ext(struct trace_event_call *call, const char *type,
+ const char *name, int offset, int size, int is_signed,
+- int filter_type, int len)
++ int filter_type, int len, int need_test)
+ {
+ struct list_head *head;
+
+@@ -166,13 +168,13 @@ static int trace_define_field_ext(struct trace_event_call *call, const char *typ
+
+ head = trace_get_fields(call);
+ return __trace_define_field(head, type, name, offset, size,
+- is_signed, filter_type, len);
++ is_signed, filter_type, len, need_test);
+ }
+
+ #define __generic_field(type, item, filter_type) \
+ ret = __trace_define_field(&ftrace_generic_fields, #type, \
+ #item, 0, 0, is_signed_type(type), \
+- filter_type, 0); \
++ filter_type, 0, 0); \
+ if (ret) \
+ return ret;
+
+@@ -181,7 +183,8 @@ static int trace_define_field_ext(struct trace_event_call *call, const char *typ
+ "common_" #item, \
+ offsetof(typeof(ent), item), \
+ sizeof(ent.item), \
+- is_signed_type(type), FILTER_OTHER, 0); \
++ is_signed_type(type), FILTER_OTHER, \
++ 0, 0); \
+ if (ret) \
+ return ret;
+
+@@ -244,19 +247,16 @@ int trace_event_get_offsets(struct trace_event_call *call)
+ return tail->offset + tail->size;
+ }
+
+-/*
+- * Check if the referenced field is an array and return true,
+- * as arrays are OK to dereference.
+- */
+-static bool test_field(const char *fmt, struct trace_event_call *call)
++
++static struct trace_event_fields *find_event_field(const char *fmt,
++ struct trace_event_call *call)
+ {
+ struct trace_event_fields *field = call->class->fields_array;
+- const char *array_descriptor;
+ const char *p = fmt;
+ int len;
+
+ if (!(len = str_has_prefix(fmt, "REC->")))
+- return false;
++ return NULL;
+ fmt += len;
+ for (p = fmt; *p; p++) {
+ if (!isalnum(*p) && *p != '_')
+@@ -265,16 +265,129 @@ static bool test_field(const char *fmt, struct trace_event_call *call)
+ len = p - fmt;
+
+ for (; field->type; field++) {
+- if (strncmp(field->name, fmt, len) ||
+- field->name[len])
++ if (strncmp(field->name, fmt, len) || field->name[len])
+ continue;
+- array_descriptor = strchr(field->type, '[');
+- /* This is an array and is OK to dereference. */
+- return array_descriptor != NULL;
++
++ return field;
++ }
++ return NULL;
++}
++
++/*
++ * Check if the referenced field is an array and return true,
++ * as arrays are OK to dereference.
++ */
++static bool test_field(const char *fmt, struct trace_event_call *call)
++{
++ struct trace_event_fields *field;
++
++ field = find_event_field(fmt, call);
++ if (!field)
++ return false;
++
++ /* This is an array and is OK to dereference. */
++ return strchr(field->type, '[') != NULL;
++}
++
++/* Look for a string within an argument */
++static bool find_print_string(const char *arg, const char *str, const char *end)
++{
++ const char *r;
++
++ r = strstr(arg, str);
++ return r && r < end;
++}
++
++/* Return true if the argument pointer is safe */
++static bool process_pointer(const char *fmt, int len, struct trace_event_call *call)
++{
++ const char *r, *e, *a;
++
++ e = fmt + len;
++
++ /* Find the REC-> in the argument */
++ r = strstr(fmt, "REC->");
++ if (r && r < e) {
++ /*
++ * Addresses of events on the buffer, or an array on the buffer, are
++ * OK to dereference. There are ways to fool this, but
++ * this is to catch common mistakes, not malicious code.
++ */
++ a = strchr(fmt, '&');
++ if ((a && (a < r)) || test_field(r, call))
++ return true;
++ } else if (find_print_string(fmt, "__get_dynamic_array(", e)) {
++ return true;
++ } else if (find_print_string(fmt, "__get_rel_dynamic_array(", e)) {
++ return true;
++ } else if (find_print_string(fmt, "__get_dynamic_array_len(", e)) {
++ return true;
++ } else if (find_print_string(fmt, "__get_rel_dynamic_array_len(", e)) {
++ return true;
++ } else if (find_print_string(fmt, "__get_sockaddr(", e)) {
++ return true;
++ } else if (find_print_string(fmt, "__get_rel_sockaddr(", e)) {
++ return true;
+ }
+ return false;
+ }
+
++/* Return true if the string is safe */
++static bool process_string(const char *fmt, int len, struct trace_event_call *call)
++{
++ struct trace_event_fields *field;
++ const char *r, *e, *s;
++
++ e = fmt + len;
++
++ /*
++ * There are several helper functions that return strings.
++ * If the argument contains a function, then assume its field is valid.
++ * The argument is considered to contain a function if it has an
++ * alphanumeric character or '_' before a parenthesis.
++ */
++ s = fmt;
++ do {
++ r = strstr(s, "(");
++ if (!r || r >= e)
++ break;
++ for (int i = 1; r - i >= s; i++) {
++ char ch = *(r - i);
++ if (isspace(ch))
++ continue;
++ if (isalnum(ch) || ch == '_')
++ return true;
++ /* Anything else, this isn't a function */
++ break;
++ }
++ /* A function could be wrapped in parentheses, try the next one */
++ s = r + 1;
++ } while (s < e);
++
++ /*
++ * If there are any strings in the argument, consider this arg OK as it
++ * could be: REC->field ? "foo" : "bar" and we don't want to get into
++ * verifying that logic here.
++ */
++ if (find_print_string(fmt, "\"", e))
++ return true;
++
++ /* Dereferenced strings are also valid like any other pointer */
++ if (process_pointer(fmt, len, call))
++ return true;
++
++ /* Make sure the field is found */
++ field = find_event_field(fmt, call);
++ if (!field)
++ return false;
++
++ /* Test this field's string before printing the event */
++ call->flags |= TRACE_EVENT_FL_TEST_STR;
++ field->needs_test = 1;
++
++ return true;
++}
++
+ /*
+ * Examine the print fmt of the event looking for unsafe dereference
+ * pointers using %p* that could be recorded in the trace event and
+@@ -284,13 +397,14 @@ static bool test_field(const char *fmt, struct trace_event_call *call)
+ static void test_event_printk(struct trace_event_call *call)
+ {
+ u64 dereference_flags = 0;
++ u64 string_flags = 0;
+ bool first = true;
+- const char *fmt, *c, *r, *a;
++ const char *fmt;
+ int parens = 0;
+ char in_quote = 0;
+ int start_arg = 0;
+ int arg = 0;
+- int i;
++ int i, e;
+
+ fmt = call->print_fmt;
+
+@@ -374,8 +488,16 @@ static void test_event_printk(struct trace_event_call *call)
+ star = true;
+ continue;
+ }
+- if ((fmt[i + j] == 's') && star)
+- arg++;
++ if ((fmt[i + j] == 's')) {
++ if (star)
++ arg++;
++ if (WARN_ONCE(arg == 63,
++ "Too many args for event: %s",
++ trace_event_name(call)))
++ return;
++ dereference_flags |= 1ULL << arg;
++ string_flags |= 1ULL << arg;
++ }
+ break;
+ }
+ break;
+@@ -403,42 +525,47 @@ static void test_event_printk(struct trace_event_call *call)
+ case ',':
+ if (in_quote || parens)
+ continue;
++ e = i;
+ i++;
+ while (isspace(fmt[i]))
+ i++;
+- start_arg = i;
+- if (!(dereference_flags & (1ULL << arg)))
+- goto next_arg;
+
+- /* Find the REC-> in the argument */
+- c = strchr(fmt + i, ',');
+- r = strstr(fmt + i, "REC->");
+- if (r && (!c || r < c)) {
+- /*
+- * Addresses of events on the buffer,
+- * or an array on the buffer is
+- * OK to dereference.
+- * There's ways to fool this, but
+- * this is to catch common mistakes,
+- * not malicious code.
+- */
+- a = strchr(fmt + i, '&');
+- if ((a && (a < r)) || test_field(r, call))
++ /*
++ * If start_arg is zero, then this is the start of the
++ * first argument. The processing of the argument happens
++ * when the end of the argument is found, as it needs to
++ * handle parentheses and such.
++ */
++ if (!start_arg) {
++ start_arg = i;
++ /* Balance out the i++ in the for loop */
++ i--;
++ continue;
++ }
++
++ if (dereference_flags & (1ULL << arg)) {
++ if (string_flags & (1ULL << arg)) {
++ if (process_string(fmt + start_arg, e - start_arg, call))
++ dereference_flags &= ~(1ULL << arg);
++ } else if (process_pointer(fmt + start_arg, e - start_arg, call))
+ dereference_flags &= ~(1ULL << arg);
+- } else if ((r = strstr(fmt + i, "__get_dynamic_array(")) &&
+- (!c || r < c)) {
+- dereference_flags &= ~(1ULL << arg);
+- } else if ((r = strstr(fmt + i, "__get_sockaddr(")) &&
+- (!c || r < c)) {
+- dereference_flags &= ~(1ULL << arg);
+ }
+
+- next_arg:
+- i--;
++ start_arg = i;
+ arg++;
++ /* Balance out the i++ in the for loop */
++ i--;
+ }
+ }
+
++ if (dereference_flags & (1ULL << arg)) {
++ if (string_flags & (1ULL << arg)) {
++ if (process_string(fmt + start_arg, i - start_arg, call))
++ dereference_flags &= ~(1ULL << arg);
++ } else if (process_pointer(fmt + start_arg, i - start_arg, call))
++ dereference_flags &= ~(1ULL << arg);
++ }
++
+ /*
+ * If you triggered the below warning, the trace event reported
+ * uses an unsafe dereference pointer %p*. As the data stored
+@@ -2471,7 +2598,7 @@ event_define_fields(struct trace_event_call *call)
+ ret = trace_define_field_ext(call, field->type, field->name,
+ offset, field->size,
+ field->is_signed, field->filter_type,
+- field->len);
++ field->len, field->needs_test);
+ if (WARN_ON_ONCE(ret)) {
+ pr_err("error code is %d\n", ret);
+ break;
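With this rework, test_event_printk() classifies each "%s" argument at boot: arguments that dereference an event array field, call a helper, or contain a string literal are statically safe, while a plain pointer field gets needs_test set (and the event TRACE_EVENT_FL_TEST_STR) so ignore_event() can validate the pointer every time the event is printed. Hypothetical print_fmt fragments showing each case (the annotations are ours, not kernel output):

    "msg=%s", REC->buf        /* char buf[16]: array field, statically safe */
    "msg=%s", __get_str(msg)  /* helper call: treated as statically safe */
    "msg=%s", REC->msg        /* plain 'const char *' field: needs_test is
                                 set and ignore_event() checks the pointer
                                 at print time */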
+diff --git a/kernel/trace/trace_output.c b/kernel/trace/trace_output.c
+index c14573e5a90337..6e7090e8bf3097 100644
+--- a/kernel/trace/trace_output.c
++++ b/kernel/trace/trace_output.c
+@@ -317,10 +317,14 @@ EXPORT_SYMBOL(trace_raw_output_prep);
+
+ void trace_event_printf(struct trace_iterator *iter, const char *fmt, ...)
+ {
++ struct trace_seq *s = &iter->seq;
+ va_list ap;
+
++ if (ignore_event(iter))
++ return;
++
+ va_start(ap, fmt);
+- trace_check_vprintf(iter, trace_event_format(iter, fmt), ap);
++ trace_seq_vprintf(s, trace_event_format(iter, fmt), ap);
+ va_end(ap);
+ }
+ EXPORT_SYMBOL(trace_event_printf);
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 5734d5d5060f32..7e0f72cd9fd4a0 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -3503,7 +3503,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
+ !list_empty(&folio->_deferred_list)) {
+ ds_queue->split_queue_len--;
+ if (folio_test_partially_mapped(folio)) {
+- __folio_clear_partially_mapped(folio);
++ folio_clear_partially_mapped(folio);
+ mod_mthp_stat(folio_order(folio),
+ MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
+ }
+@@ -3615,7 +3615,7 @@ bool __folio_unqueue_deferred_split(struct folio *folio)
+ if (!list_empty(&folio->_deferred_list)) {
+ ds_queue->split_queue_len--;
+ if (folio_test_partially_mapped(folio)) {
+- __folio_clear_partially_mapped(folio);
++ folio_clear_partially_mapped(folio);
+ mod_mthp_stat(folio_order(folio),
+ MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
+ }
+@@ -3659,7 +3659,7 @@ void deferred_split_folio(struct folio *folio, bool partially_mapped)
+ spin_lock_irqsave(&ds_queue->split_queue_lock, flags);
+ if (partially_mapped) {
+ if (!folio_test_partially_mapped(folio)) {
+- __folio_set_partially_mapped(folio);
++ folio_set_partially_mapped(folio);
+ if (folio_test_pmd_mappable(folio))
+ count_vm_event(THP_DEFERRED_SPLIT_PAGE);
+ count_mthp_stat(folio_order(folio), MTHP_STAT_SPLIT_DEFERRED);
+@@ -3752,7 +3752,7 @@ static unsigned long deferred_split_scan(struct shrinker *shrink,
+ } else {
+ /* We lost race with folio_put() */
+ if (folio_test_partially_mapped(folio)) {
+- __folio_clear_partially_mapped(folio);
++ folio_clear_partially_mapped(folio);
+ mod_mthp_stat(folio_order(folio),
+ MTHP_STAT_NR_ANON_PARTIALLY_MAPPED, -1);
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 190fa05635f4a9..5dc57b74a8fe9a 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5333,7 +5333,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
+ break;
+ }
+ ret = copy_user_large_folio(new_folio, pte_folio,
+- ALIGN_DOWN(addr, sz), dst_vma);
++ addr, dst_vma);
+ folio_put(pte_folio);
+ if (ret) {
+ folio_put(new_folio);
+@@ -6632,8 +6632,7 @@ int hugetlb_mfill_atomic_pte(pte_t *dst_pte,
+ *foliop = NULL;
+ goto out;
+ }
+- ret = copy_user_large_folio(folio, *foliop,
+- ALIGN_DOWN(dst_addr, size), dst_vma);
++ ret = copy_user_large_folio(folio, *foliop, dst_addr, dst_vma);
+ folio_put(*foliop);
+ *foliop = NULL;
+ if (ret) {
+diff --git a/mm/memory.c b/mm/memory.c
+index bdf77a3ec47bc2..d322ddfe679167 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -6780,9 +6780,10 @@ static inline int process_huge_page(
+ return 0;
+ }
+
+-static void clear_gigantic_page(struct folio *folio, unsigned long addr,
++static void clear_gigantic_page(struct folio *folio, unsigned long addr_hint,
+ unsigned int nr_pages)
+ {
++ unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(folio));
+ int i;
+
+ might_sleep();
+@@ -6816,13 +6817,14 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
+ }
+
+ static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
+- unsigned long addr,
++ unsigned long addr_hint,
+ struct vm_area_struct *vma,
+ unsigned int nr_pages)
+ {
+- int i;
++ unsigned long addr = ALIGN_DOWN(addr_hint, folio_size(dst));
+ struct page *dst_page;
+ struct page *src_page;
++ int i;
+
+ for (i = 0; i < nr_pages; i++) {
+ dst_page = folio_page(dst, i);
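The hugetlb and mm/memory.c hunks move the folio alignment into the callees: callers now pass the raw fault-address hint, and clear_gigantic_page()/copy_user_gigantic_page() derive the folio-aligned base themselves with ALIGN_DOWN(addr_hint, folio_size(folio)). Since folio sizes are powers of two, the alignment is a single mask (the macro name below is ours):

    #include <stdio.h>

    #define ALIGN_DOWN_P2(x, a)     ((x) & ~((unsigned long)(a) - 1))

    int main(void)
    {
            unsigned long hint = 0x201a000UL;   /* fault address in folio */
            unsigned long size = 2UL << 20;     /* 2 MiB folio */

            /* prints base = 0x2000000 */
            printf("base = %#lx\n", ALIGN_DOWN_P2(hint, size));
            return 0;
    }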
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index b6958333054d06..de65e8b4f75f21 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1238,13 +1238,15 @@ static void split_large_buddy(struct zone *zone, struct page *page,
+ if (order > pageblock_order)
+ order = pageblock_order;
+
+- while (pfn != end) {
++ do {
+ int mt = get_pfnblock_migratetype(page, pfn);
+
+ __free_one_page(page, pfn, zone, order, mt, fpi);
+ pfn += 1 << order;
++ if (pfn == end)
++ break;
+ page = pfn_to_page(pfn);
+- }
++ } while (1);
+ }
+
+ static void free_one_page(struct zone *zone, struct page *page,
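split_large_buddy() used to advance pfn and immediately translate it with pfn_to_page() even on the final pass, where pfn == end may fall outside the zone and have no valid struct page; the do/while form breaks out before the translation. The control-flow difference, as a runnable model (free_block() stands in for __free_one_page(), and the skipped lookup for pfn_to_page()):

    #include <stdio.h>

    static void free_block(unsigned long pfn, unsigned int order)
    {
            printf("free %lu (order %u)\n", pfn, order);
    }

    /* Fixed shape: 'end' itself is never translated to a page. */
    static void free_range(unsigned long pfn, unsigned long end,
                           unsigned int order)
    {
            do {
                    free_block(pfn, order);
                    pfn += 1UL << order;
                    if (pfn == end)
                            break;  /* old loop would look up 'end' here */
            } while (1);
    }

    int main(void)
    {
            free_range(0, 16, 2);   /* frees pfns 0, 4, 8, 12 */
            return 0;
    }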
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 568bb290bdce3e..b03ced0c3d4858 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -779,6 +779,14 @@ static bool shmem_huge_global_enabled(struct inode *inode, pgoff_t index,
+ }
+ #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
++static void shmem_update_stats(struct folio *folio, int nr_pages)
++{
++ if (folio_test_pmd_mappable(folio))
++ __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr_pages);
++ __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr_pages);
++ __lruvec_stat_mod_folio(folio, NR_SHMEM, nr_pages);
++}
++
+ /*
+ * Somewhat like filemap_add_folio, but error if expected item has gone.
+ */
+@@ -813,10 +821,7 @@ static int shmem_add_to_page_cache(struct folio *folio,
+ xas_store(&xas, folio);
+ if (xas_error(&xas))
+ goto unlock;
+- if (folio_test_pmd_mappable(folio))
+- __lruvec_stat_mod_folio(folio, NR_SHMEM_THPS, nr);
+- __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, nr);
+- __lruvec_stat_mod_folio(folio, NR_SHMEM, nr);
++ shmem_update_stats(folio, nr);
+ mapping->nrpages += nr;
+ unlock:
+ xas_unlock_irq(&xas);
+@@ -844,8 +849,7 @@ static void shmem_delete_from_page_cache(struct folio *folio, void *radswap)
+ error = shmem_replace_entry(mapping, folio->index, folio, radswap);
+ folio->mapping = NULL;
+ mapping->nrpages -= nr;
+- __lruvec_stat_mod_folio(folio, NR_FILE_PAGES, -nr);
+- __lruvec_stat_mod_folio(folio, NR_SHMEM, -nr);
++ shmem_update_stats(folio, -nr);
+ xa_unlock_irq(&mapping->i_pages);
+ folio_put_refs(folio, nr);
+ BUG_ON(error);
+@@ -1944,10 +1948,8 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
+ }
+ if (!error) {
+ mem_cgroup_replace_folio(old, new);
+- __lruvec_stat_mod_folio(new, NR_FILE_PAGES, nr_pages);
+- __lruvec_stat_mod_folio(new, NR_SHMEM, nr_pages);
+- __lruvec_stat_mod_folio(old, NR_FILE_PAGES, -nr_pages);
+- __lruvec_stat_mod_folio(old, NR_SHMEM, -nr_pages);
++ shmem_update_stats(new, nr_pages);
++ shmem_update_stats(old, -nr_pages);
+ }
+ xa_unlock_irq(&swap_mapping->i_pages);
+
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 0161cb4391e1d1..3f9255dfacb0c1 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -3369,7 +3369,8 @@ void vfree(const void *addr)
+ struct page *page = vm->pages[i];
+
+ BUG_ON(!page);
+- mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
++ if (!(vm->flags & VM_MAP_PUT_PAGES))
++ mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
+ /*
+ * High-order allocs for huge vmallocs are split, so
+ * can be freed as an array of order-0 allocations
+@@ -3377,7 +3378,8 @@ void vfree(const void *addr)
+ __free_page(page);
+ cond_resched();
+ }
+- atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
++ if (!(vm->flags & VM_MAP_PUT_PAGES))
++ atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
+ kvfree(vm->pages);
+ kfree(vm);
+ }
+diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
+index d2baa1af9df09e..7ce22f40db5b04 100644
+--- a/net/core/netdev-genl.c
++++ b/net/core/netdev-genl.c
+@@ -359,10 +359,10 @@ static int
+ netdev_nl_queue_fill(struct sk_buff *rsp, struct net_device *netdev, u32 q_idx,
+ u32 q_type, const struct genl_info *info)
+ {
+- int err = 0;
++ int err;
+
+ if (!(netdev->flags & IFF_UP))
+- return err;
++ return -ENOENT;
+
+ err = netdev_nl_queue_validate(netdev, q_idx, q_type);
+ if (err)
+@@ -417,24 +417,21 @@ netdev_nl_queue_dump_one(struct net_device *netdev, struct sk_buff *rsp,
+ struct netdev_nl_dump_ctx *ctx)
+ {
+ int err = 0;
+- int i;
+
+ if (!(netdev->flags & IFF_UP))
+ return err;
+
+- for (i = ctx->rxq_idx; i < netdev->real_num_rx_queues;) {
+- err = netdev_nl_queue_fill_one(rsp, netdev, i,
++ for (; ctx->rxq_idx < netdev->real_num_rx_queues; ctx->rxq_idx++) {
++ err = netdev_nl_queue_fill_one(rsp, netdev, ctx->rxq_idx,
+ NETDEV_QUEUE_TYPE_RX, info);
+ if (err)
+ return err;
+- ctx->rxq_idx = i++;
+ }
+- for (i = ctx->txq_idx; i < netdev->real_num_tx_queues;) {
+- err = netdev_nl_queue_fill_one(rsp, netdev, i,
++ for (; ctx->txq_idx < netdev->real_num_tx_queues; ctx->txq_idx++) {
++ err = netdev_nl_queue_fill_one(rsp, netdev, ctx->txq_idx,
+ NETDEV_QUEUE_TYPE_TX, info);
+ if (err)
+ return err;
+- ctx->txq_idx = i++;
+ }
+
+ return err;
+@@ -600,7 +597,7 @@ netdev_nl_stats_by_queue(struct net_device *netdev, struct sk_buff *rsp,
+ i, info);
+ if (err)
+ return err;
+- ctx->rxq_idx = i++;
++ ctx->rxq_idx = ++i;
+ }
+ i = ctx->txq_idx;
+ while (ops->get_queue_stats_tx && i < netdev->real_num_tx_queues) {
+@@ -608,7 +605,7 @@ netdev_nl_stats_by_queue(struct net_device *netdev, struct sk_buff *rsp,
+ i, info);
+ if (err)
+ return err;
+- ctx->txq_idx = i++;
++ ctx->txq_idx = ++i;
+ }
+
+ ctx->rxq_idx = 0;
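
The queue-dump hunks above fix a resume-cursor bug: ctx->rxq_idx = i++ stores the
pre-increment value, so a dump that is interrupted and resumed re-reports the queue
it had already emitted. A minimal userspace sketch of the difference, with
hypothetical names rather than the kernel code:

    #include <stdio.h>

    int main(void)
    {
        int cursor, i;

        i = 5;
        cursor = i++;   /* buggy: cursor keeps the old index (5), i becomes 6 */
        printf("i++ : cursor=%d i=%d\n", cursor, i);

        i = 5;
        cursor = ++i;   /* fixed: cursor points past the entry just dumped (6) */
        printf("++i : cursor=%d i=%d\n", cursor, i);
        return 0;
    }

In the dump loops the same effect is achieved more directly by iterating on
ctx->rxq_idx and ctx->txq_idx themselves, so the cursor can never lag the loop.
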
+diff --git a/net/dsa/tag.h b/net/dsa/tag.h
+index d5707870906bc9..5d80ddad4ff6b1 100644
+--- a/net/dsa/tag.h
++++ b/net/dsa/tag.h
+@@ -138,9 +138,10 @@ static inline void dsa_software_untag_vlan_unaware_bridge(struct sk_buff *skb,
+ * dsa_software_vlan_untag: Software VLAN untagging in DSA receive path
+ * @skb: Pointer to socket buffer (packet)
+ *
+- * Receive path method for switches which cannot avoid tagging all packets
+- * towards the CPU port. Called when ds->untag_bridge_pvid (legacy) or
+- * ds->untag_vlan_aware_bridge_pvid is set to true.
++ * Receive path method for switches which send some packets as VLAN-tagged
++ * towards the CPU port (generally from VLAN-aware bridge ports) even when the
++ * packet was not tagged on the wire. Called when ds->untag_bridge_pvid
++ * (legacy) or ds->untag_vlan_aware_bridge_pvid is set to true.
+ *
+ * As a side effect of this method, any VLAN tag from the skb head is moved
+ * to hwaccel.
+@@ -149,14 +150,19 @@ static inline struct sk_buff *dsa_software_vlan_untag(struct sk_buff *skb)
+ {
+ struct dsa_port *dp = dsa_user_to_port(skb->dev);
+ struct net_device *br = dsa_port_bridge_dev_get(dp);
+- u16 vid;
++ u16 vid, proto;
++ int err;
+
+ /* software untagging for standalone ports not yet necessary */
+ if (!br)
+ return skb;
+
++ err = br_vlan_get_proto(br, &proto);
++ if (err)
++ return skb;
++
+ /* Move VLAN tag from data to hwaccel */
+- if (!skb_vlan_tag_present(skb)) {
++ if (!skb_vlan_tag_present(skb) && skb->protocol == htons(proto)) {
+ skb = skb_vlan_untag(skb);
+ if (!skb)
+ return NULL;
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index 597e9cf5aa6444..3f2bd65ff5e3c9 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -374,8 +374,13 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ msk = NULL;
+ rc = -EINVAL;
+
+- /* we may be receiving a locally-routed packet; drop source sk
+- * accounting
++ /* We may be receiving a locally-routed packet; drop source sk
++ * accounting.
++ *
++ * From here, we will either queue the skb - either to a frag_queue, or
++ * to a receiving socket. When that succeeds, we clear the skb pointer;
++ * a non-NULL skb on exit will be otherwise unowned, and hence
++ * kfree_skb()-ed.
+ */
+ skb_orphan(skb);
+
+@@ -434,7 +439,9 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ * pending key.
+ */
+ if (flags & MCTP_HDR_FLAG_EOM) {
+- sock_queue_rcv_skb(&msk->sk, skb);
++ rc = sock_queue_rcv_skb(&msk->sk, skb);
++ if (!rc)
++ skb = NULL;
+ if (key) {
+ /* we've hit a pending reassembly; not much we
+ * can do but drop it
+@@ -443,7 +450,6 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ MCTP_TRACE_KEY_REPLIED);
+ key = NULL;
+ }
+- rc = 0;
+ goto out_unlock;
+ }
+
+@@ -470,8 +476,10 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ * this function.
+ */
+ rc = mctp_key_add(key, msk);
+- if (!rc)
++ if (!rc) {
+ trace_mctp_key_acquire(key);
++ skb = NULL;
++ }
+
+ /* we don't need to release key->lock on exit, so
+ * clean up here and suppress the unlock via
+@@ -489,6 +497,8 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ key = NULL;
+ } else {
+ rc = mctp_frag_queue(key, skb);
++ if (!rc)
++ skb = NULL;
+ }
+ }
+
+@@ -503,12 +513,19 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ else
+ rc = mctp_frag_queue(key, skb);
+
++ if (rc)
++ goto out_unlock;
++
++ /* we've queued; the queue owns the skb now */
++ skb = NULL;
++
+ /* end of message? deliver to socket, and we're done with
+ * the reassembly/response key
+ */
+- if (!rc && flags & MCTP_HDR_FLAG_EOM) {
+- sock_queue_rcv_skb(key->sk, key->reasm_head);
+- key->reasm_head = NULL;
++ if (flags & MCTP_HDR_FLAG_EOM) {
++ rc = sock_queue_rcv_skb(key->sk, key->reasm_head);
++ if (!rc)
++ key->reasm_head = NULL;
+ __mctp_key_done_in(key, net, f, MCTP_TRACE_KEY_REPLIED);
+ key = NULL;
+ }
+@@ -527,8 +544,7 @@ static int mctp_route_input(struct mctp_route *route, struct sk_buff *skb)
+ if (any_key)
+ mctp_key_unref(any_key);
+ out:
+- if (rc)
+- kfree_skb(skb);
++ kfree_skb(skb);
+ return rc;
+ }
+
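
The mctp_route_input() rework above settles skb ownership with one rule: every
successful hand-off (to a socket queue or a frag queue) sets the local skb pointer
to NULL, and the single exit path frees whatever is still owned, relying on
kfree_skb(NULL) being a no-op. A hedged userspace analogue of the pattern, with
invented names:

    #include <stdlib.h>

    /* Stand-in for the queueing target; assume it takes ownership on success. */
    static int try_enqueue(void *buf)
    {
        (void)buf;
        return -1;            /* e.g. receive buffer full */
    }

    static int route_input(void *buf)
    {
        int rc = try_enqueue(buf);
        if (rc == 0)
            buf = NULL;       /* ownership transferred: must not free below */

        free(buf);            /* free(NULL) is a no-op, like kfree_skb(NULL) */
        return rc;
    }

    int main(void)
    {
        void *buf = malloc(64);
        if (!buf)
            return 1;
        route_input(buf);     /* no leak on either outcome */
        return 0;
    }

The old code freed only when rc was nonzero, which could leak the skb when
sock_queue_rcv_skb() failed after rc had already been cleared; the kunit tests in
the next hunk exercise exactly those failure paths.
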
+diff --git a/net/mctp/test/route-test.c b/net/mctp/test/route-test.c
+index 8551dab1d1e698..17165b86ce22d4 100644
+--- a/net/mctp/test/route-test.c
++++ b/net/mctp/test/route-test.c
+@@ -837,6 +837,90 @@ static void mctp_test_route_input_multiple_nets_key(struct kunit *test)
+ mctp_test_route_input_multiple_nets_key_fini(test, &t2);
+ }
+
++/* Input route to socket, using a single-packet message, where sock delivery
++ * fails. Ensure we're handling the failure appropriately.
++ */
++static void mctp_test_route_input_sk_fail_single(struct kunit *test)
++{
++ const struct mctp_hdr hdr = RX_HDR(1, 10, 8, FL_S | FL_E | FL_TO);
++ struct mctp_test_route *rt;
++ struct mctp_test_dev *dev;
++ struct socket *sock;
++ struct sk_buff *skb;
++ int rc;
++
++ __mctp_route_test_init(test, &dev, &rt, &sock, MCTP_NET_ANY);
++
++ /* No rcvbuf space, so delivery should fail. __sock_set_rcvbuf will
++ * clamp the minimum to SOCK_MIN_RCVBUF, so we open-code this.
++ */
++ lock_sock(sock->sk);
++ WRITE_ONCE(sock->sk->sk_rcvbuf, 0);
++ release_sock(sock->sk);
++
++ skb = mctp_test_create_skb(&hdr, 10);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, skb);
++ skb_get(skb);
++
++ mctp_test_skb_set_dev(skb, dev);
++
++ /* do route input, which should fail */
++ rc = mctp_route_input(&rt->rt, skb);
++ KUNIT_EXPECT_NE(test, rc, 0);
++
++ /* we should hold the only reference to skb */
++ KUNIT_EXPECT_EQ(test, refcount_read(&skb->users), 1);
++ kfree_skb(skb);
++
++ __mctp_route_test_fini(test, dev, rt, sock);
++}
++
++/* Input route to socket, using a fragmented message, where sock delivery fails.
++ */
++static void mctp_test_route_input_sk_fail_frag(struct kunit *test)
++{
++ const struct mctp_hdr hdrs[2] = { RX_FRAG(FL_S, 0), RX_FRAG(FL_E, 1) };
++ struct mctp_test_route *rt;
++ struct mctp_test_dev *dev;
++ struct sk_buff *skbs[2];
++ struct socket *sock;
++ unsigned int i;
++ int rc;
++
++ __mctp_route_test_init(test, &dev, &rt, &sock, MCTP_NET_ANY);
++
++ lock_sock(sock->sk);
++ WRITE_ONCE(sock->sk->sk_rcvbuf, 0);
++ release_sock(sock->sk);
++
++ for (i = 0; i < ARRAY_SIZE(skbs); i++) {
++ skbs[i] = mctp_test_create_skb(&hdrs[i], 10);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, skbs[i]);
++ skb_get(skbs[i]);
++
++ mctp_test_skb_set_dev(skbs[i], dev);
++ }
++
++ /* first route input should succeed, we're only queueing to the
++ * frag list
++ */
++ rc = mctp_route_input(&rt->rt, skbs[0]);
++ KUNIT_EXPECT_EQ(test, rc, 0);
++
++ /* final route input should fail to deliver to the socket */
++ rc = mctp_route_input(&rt->rt, skbs[1]);
++ KUNIT_EXPECT_NE(test, rc, 0);
++
++ /* we should hold the only reference to both skbs */
++ KUNIT_EXPECT_EQ(test, refcount_read(&skbs[0]->users), 1);
++ kfree_skb(skbs[0]);
++
++ KUNIT_EXPECT_EQ(test, refcount_read(&skbs[1]->users), 1);
++ kfree_skb(skbs[1]);
++
++ __mctp_route_test_fini(test, dev, rt, sock);
++}
++
+ #if IS_ENABLED(CONFIG_MCTP_FLOWS)
+
+ static void mctp_test_flow_init(struct kunit *test,
+@@ -1053,6 +1137,8 @@ static struct kunit_case mctp_test_cases[] = {
+ mctp_route_input_sk_reasm_gen_params),
+ KUNIT_CASE_PARAM(mctp_test_route_input_sk_keys,
+ mctp_route_input_sk_keys_gen_params),
++ KUNIT_CASE(mctp_test_route_input_sk_fail_single),
++ KUNIT_CASE(mctp_test_route_input_sk_fail_frag),
+ KUNIT_CASE(mctp_test_route_input_multiple_nets_bind),
+ KUNIT_CASE(mctp_test_route_input_multiple_nets_key),
+ KUNIT_CASE(mctp_test_packet_flow),
+diff --git a/net/netfilter/ipset/ip_set_list_set.c b/net/netfilter/ipset/ip_set_list_set.c
+index bfae7066936bb9..db794fe1300e69 100644
+--- a/net/netfilter/ipset/ip_set_list_set.c
++++ b/net/netfilter/ipset/ip_set_list_set.c
+@@ -611,6 +611,8 @@ init_list_set(struct net *net, struct ip_set *set, u32 size)
+ return true;
+ }
+
++static struct lock_class_key list_set_lockdep_key;
++
+ static int
+ list_set_create(struct net *net, struct ip_set *set, struct nlattr *tb[],
+ u32 flags)
+@@ -627,6 +629,7 @@ list_set_create(struct net *net, struct ip_set *set, struct nlattr *tb[],
+ if (size < IP_SET_LIST_MIN_SIZE)
+ size = IP_SET_LIST_MIN_SIZE;
+
++ lockdep_set_class(&set->lock, &list_set_lockdep_key);
+ set->variant = &set_variant;
+ set->dsize = ip_set_elem_len(set, tb, sizeof(struct set_elem),
+ __alignof__(struct set_elem));
+diff --git a/net/netfilter/ipvs/ip_vs_conn.c b/net/netfilter/ipvs/ip_vs_conn.c
+index 98d7dbe3d78760..c0289f83f96df8 100644
+--- a/net/netfilter/ipvs/ip_vs_conn.c
++++ b/net/netfilter/ipvs/ip_vs_conn.c
+@@ -1495,8 +1495,8 @@ int __init ip_vs_conn_init(void)
+ max_avail -= 2; /* ~4 in hash row */
+ max_avail -= 1; /* IPVS up to 1/2 of mem */
+ max_avail -= order_base_2(sizeof(struct ip_vs_conn));
+- max = clamp(max, min, max_avail);
+- ip_vs_conn_tab_bits = clamp_val(ip_vs_conn_tab_bits, min, max);
++ max = clamp(max_avail, min, max);
++ ip_vs_conn_tab_bits = clamp(ip_vs_conn_tab_bits, min, max);
+ ip_vs_conn_tab_size = 1 << ip_vs_conn_tab_bits;
+ ip_vs_conn_tab_mask = ip_vs_conn_tab_size - 1;
+
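
The one-line ip_vs_conn_init() fix above is purely about argument order:
clamp(val, lo, hi) bounds its first argument by the other two, so
clamp(max, min, max_avail) constrained the wrong variable and, once max_avail
dropped below min, produced an inverted range. Worked with illustrative numbers:

    min = 8, max = 20, max_avail = 3   (little memory available)

    fixed: clamp(max_avail, min, max) = clamp(3, 8, 20) = 8    (lower bound holds)
    buggy: clamp(max, min, max_avail) = clamp(20, 8, 3) = 3    (range [8, 3] is inverted)
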
+diff --git a/net/psample/psample.c b/net/psample/psample.c
+index a0ddae8a65f917..25f92ba0840c67 100644
+--- a/net/psample/psample.c
++++ b/net/psample/psample.c
+@@ -393,7 +393,9 @@ void psample_sample_packet(struct psample_group *group,
+ nla_total_size_64bit(sizeof(u64)) + /* timestamp */
+ nla_total_size(sizeof(u16)) + /* protocol */
+ (md->user_cookie_len ?
+- nla_total_size(md->user_cookie_len) : 0); /* user cookie */
++ nla_total_size(md->user_cookie_len) : 0) + /* user cookie */
++ (md->rate_as_probability ?
++ nla_total_size(0) : 0); /* rate as probability */
+
+ #ifdef CONFIG_INET
+ tun_info = skb_tunnel_info(skb);
+@@ -498,8 +500,9 @@ void psample_sample_packet(struct psample_group *group,
+ md->user_cookie))
+ goto error;
+
+- if (md->rate_as_probability)
+- nla_put_flag(nl_skb, PSAMPLE_ATTR_SAMPLE_PROBABILITY);
++ if (md->rate_as_probability &&
++ nla_put_flag(nl_skb, PSAMPLE_ATTR_SAMPLE_PROBABILITY))
++ goto error;
+
+ genlmsg_end(nl_skb, data);
+ genlmsg_multicast_netns(&psample_nl_family, group->net, nl_skb, 0,
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index f2f9b75008bb05..8d8b2db4653c0c 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -1525,7 +1525,6 @@ static unsigned int cake_drop(struct Qdisc *sch, struct sk_buff **to_free)
+ b->backlogs[idx] -= len;
+ b->tin_backlog -= len;
+ sch->qstats.backlog -= len;
+- qdisc_tree_reduce_backlog(sch, 1, len);
+
+ flow->dropped++;
+ b->tin_dropped++;
+@@ -1536,6 +1535,7 @@ static unsigned int cake_drop(struct Qdisc *sch, struct sk_buff **to_free)
+
+ __qdisc_drop(skb, to_free);
+ sch->q.qlen--;
++ qdisc_tree_reduce_backlog(sch, 1, len);
+
+ cake_heapify(q, 0);
+
+diff --git a/net/sched/sch_choke.c b/net/sched/sch_choke.c
+index 91072010923d18..757b89292e7e6f 100644
+--- a/net/sched/sch_choke.c
++++ b/net/sched/sch_choke.c
+@@ -123,10 +123,10 @@ static void choke_drop_by_idx(struct Qdisc *sch, unsigned int idx,
+ if (idx == q->tail)
+ choke_zap_tail_holes(q);
+
++ --sch->q.qlen;
+ qdisc_qstats_backlog_dec(sch, skb);
+ qdisc_tree_reduce_backlog(sch, 1, qdisc_pkt_len(skb));
+ qdisc_drop(skb, sch, to_free);
+- --sch->q.qlen;
+ }
+
+ struct choke_skb_cb {
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 9e6c69d18581ce..6cc7b846cff1bb 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -2032,6 +2032,8 @@ static int smc_listen_prfx_check(struct smc_sock *new_smc,
+ if (pclc->hdr.typev1 == SMC_TYPE_N)
+ return 0;
+ pclc_prfx = smc_clc_proposal_get_prefix(pclc);
++ if (!pclc_prfx)
++ return -EPROTO;
+ if (smc_clc_prfx_match(newclcsock, pclc_prfx))
+ return SMC_CLC_DECL_DIFFPREFIX;
+
+@@ -2145,6 +2147,8 @@ static void smc_find_ism_v2_device_serv(struct smc_sock *new_smc,
+ pclc_smcd = smc_get_clc_msg_smcd(pclc);
+ smc_v2_ext = smc_get_clc_v2_ext(pclc);
+ smcd_v2_ext = smc_get_clc_smcd_v2_ext(smc_v2_ext);
++ if (!pclc_smcd || !smc_v2_ext || !smcd_v2_ext)
++ goto not_found;
+
+ mutex_lock(&smcd_dev_list.mutex);
+ if (pclc_smcd->ism.chid) {
+@@ -2221,7 +2225,9 @@ static void smc_find_ism_v1_device_serv(struct smc_sock *new_smc,
+ int rc = 0;
+
+ /* check if ISM V1 is available */
+- if (!(ini->smcd_version & SMC_V1) || !smcd_indicated(ini->smc_type_v1))
++ if (!(ini->smcd_version & SMC_V1) ||
++ !smcd_indicated(ini->smc_type_v1) ||
++ !pclc_smcd)
+ goto not_found;
+ ini->is_smcd = true; /* prepare ISM check */
+ ini->ism_peer_gid[0].gid = ntohll(pclc_smcd->ism.gid);
+@@ -2272,7 +2278,8 @@ static void smc_find_rdma_v2_device_serv(struct smc_sock *new_smc,
+ goto not_found;
+
+ smc_v2_ext = smc_get_clc_v2_ext(pclc);
+- if (!smc_clc_match_eid(ini->negotiated_eid, smc_v2_ext, NULL, NULL))
++ if (!smc_v2_ext ||
++ !smc_clc_match_eid(ini->negotiated_eid, smc_v2_ext, NULL, NULL))
+ goto not_found;
+
+ /* prepare RDMA check */
+@@ -2881,6 +2888,13 @@ __poll_t smc_poll(struct file *file, struct socket *sock,
+ } else {
+ sk_set_bit(SOCKWQ_ASYNC_NOSPACE, sk);
+ set_bit(SOCK_NOSPACE, &sk->sk_socket->flags);
++
++ if (sk->sk_state != SMC_INIT) {
++ /* Race breaker the same way as tcp_poll(). */
++ smp_mb__after_atomic();
++ if (atomic_read(&smc->conn.sndbuf_space))
++ mask |= EPOLLOUT | EPOLLWRNORM;
++ }
+ }
+ if (atomic_read(&smc->conn.bytes_to_rcv))
+ mask |= EPOLLIN | EPOLLRDNORM;
+diff --git a/net/smc/smc_clc.c b/net/smc/smc_clc.c
+index 33fa787c28ebb2..521f5df80e10ca 100644
+--- a/net/smc/smc_clc.c
++++ b/net/smc/smc_clc.c
+@@ -352,8 +352,11 @@ static bool smc_clc_msg_prop_valid(struct smc_clc_msg_proposal *pclc)
+ struct smc_clc_msg_hdr *hdr = &pclc->hdr;
+ struct smc_clc_v2_extension *v2_ext;
+
+- v2_ext = smc_get_clc_v2_ext(pclc);
+ pclc_prfx = smc_clc_proposal_get_prefix(pclc);
++ if (!pclc_prfx ||
++ pclc_prfx->ipv6_prefixes_cnt > SMC_CLC_MAX_V6_PREFIX)
++ return false;
++
+ if (hdr->version == SMC_V1) {
+ if (hdr->typev1 == SMC_TYPE_N)
+ return false;
+@@ -365,6 +368,13 @@ static bool smc_clc_msg_prop_valid(struct smc_clc_msg_proposal *pclc)
+ sizeof(struct smc_clc_msg_trail))
+ return false;
+ } else {
++ v2_ext = smc_get_clc_v2_ext(pclc);
++ if ((hdr->typev2 != SMC_TYPE_N &&
++ (!v2_ext || v2_ext->hdr.eid_cnt > SMC_CLC_MAX_UEID)) ||
++ (smcd_indicated(hdr->typev2) &&
++ v2_ext->hdr.ism_gid_cnt > SMCD_CLC_MAX_V2_GID_ENTRIES))
++ return false;
++
+ if (ntohs(hdr->length) !=
+ sizeof(*pclc) +
+ sizeof(struct smc_clc_msg_smcd) +
+@@ -764,6 +774,11 @@ int smc_clc_wait_msg(struct smc_sock *smc, void *buf, int buflen,
+ SMC_CLC_RECV_BUF_LEN : datlen;
+ iov_iter_kvec(&msg.msg_iter, ITER_DEST, &vec, 1, recvlen);
+ len = sock_recvmsg(smc->clcsock, &msg, krflags);
++ if (len < recvlen) {
++ smc->sk.sk_err = EPROTO;
++ reason_code = -EPROTO;
++ goto out;
++ }
+ datlen -= len;
+ }
+ if (clcm->type == SMC_CLC_DECLINE) {
+diff --git a/net/smc/smc_clc.h b/net/smc/smc_clc.h
+index 5625fda2960b03..1a7676227f16c5 100644
+--- a/net/smc/smc_clc.h
++++ b/net/smc/smc_clc.h
+@@ -336,8 +336,12 @@ struct smc_clc_msg_decline_v2 { /* clc decline message */
+ static inline struct smc_clc_msg_proposal_prefix *
+ smc_clc_proposal_get_prefix(struct smc_clc_msg_proposal *pclc)
+ {
++ u16 offset = ntohs(pclc->iparea_offset);
++
++ if (offset > sizeof(struct smc_clc_msg_smcd))
++ return NULL;
+ return (struct smc_clc_msg_proposal_prefix *)
+- ((u8 *)pclc + sizeof(*pclc) + ntohs(pclc->iparea_offset));
++ ((u8 *)pclc + sizeof(*pclc) + offset);
+ }
+
+ static inline bool smcr_indicated(int smc_type)
+@@ -376,8 +380,14 @@ static inline struct smc_clc_v2_extension *
+ smc_get_clc_v2_ext(struct smc_clc_msg_proposal *prop)
+ {
+ struct smc_clc_msg_smcd *prop_smcd = smc_get_clc_msg_smcd(prop);
++ u16 max_offset;
+
+- if (!prop_smcd || !ntohs(prop_smcd->v2_ext_offset))
++ max_offset = offsetof(struct smc_clc_msg_proposal_area, pclc_v2_ext) -
++ offsetof(struct smc_clc_msg_proposal_area, pclc_smcd) -
++ offsetofend(struct smc_clc_msg_smcd, v2_ext_offset);
++
++ if (!prop_smcd || !ntohs(prop_smcd->v2_ext_offset) ||
++ ntohs(prop_smcd->v2_ext_offset) > max_offset)
+ return NULL;
+
+ return (struct smc_clc_v2_extension *)
+@@ -390,9 +400,15 @@ smc_get_clc_v2_ext(struct smc_clc_msg_proposal *prop)
+ static inline struct smc_clc_smcd_v2_extension *
+ smc_get_clc_smcd_v2_ext(struct smc_clc_v2_extension *prop_v2ext)
+ {
++ u16 max_offset = offsetof(struct smc_clc_msg_proposal_area, pclc_smcd_v2_ext) -
++ offsetof(struct smc_clc_msg_proposal_area, pclc_v2_ext) -
++ offsetof(struct smc_clc_v2_extension, hdr) -
++ offsetofend(struct smc_clnt_opts_area_hdr, smcd_v2_ext_offset);
++
+ if (!prop_v2ext)
+ return NULL;
+- if (!ntohs(prop_v2ext->hdr.smcd_v2_ext_offset))
++ if (!ntohs(prop_v2ext->hdr.smcd_v2_ext_offset) ||
++ ntohs(prop_v2ext->hdr.smcd_v2_ext_offset) > max_offset)
+ return NULL;
+
+ return (struct smc_clc_smcd_v2_extension *)
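
Both smc_clc.h helpers above now treat the peer-supplied offsets as untrusted
input: they compute the largest offset that can still land inside
struct smc_clc_msg_proposal_area and return NULL for anything beyond it, instead
of forming an out-of-bounds pointer. A minimal sketch of the pattern, assuming a
simplified message layout:

    #include <stddef.h>
    #include <stdint.h>

    struct wire_msg {
        uint16_t ext_offset;        /* peer-controlled; ntohs() in the real parser */
        unsigned char body[128];
    };

    /* Return a pointer into body only when the offset stays in bounds. */
    static const unsigned char *get_extension(const struct wire_msg *m)
    {
        size_t max_offset = sizeof(m->body) - 1;

        if (m->ext_offset == 0 || m->ext_offset > max_offset)
            return NULL;            /* reject rather than read out of bounds */
        return &m->body[m->ext_offset];
    }

    int main(void)
    {
        struct wire_msg m = { .ext_offset = 500 };
        return get_extension(&m) == NULL ? 0 : 1;
    }
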
+diff --git a/net/smc/smc_core.c b/net/smc/smc_core.c
+index 4e694860ece4ac..68515a41d776c4 100644
+--- a/net/smc/smc_core.c
++++ b/net/smc/smc_core.c
+@@ -1818,7 +1818,9 @@ void smcr_link_down_cond_sched(struct smc_link *lnk)
+ {
+ if (smc_link_downing(&lnk->state)) {
+ trace_smcr_link_down(lnk, __builtin_return_address(0));
+- schedule_work(&lnk->link_down_wrk);
++ smcr_link_hold(lnk); /* smcr_link_put in link_down_wrk */
++ if (!schedule_work(&lnk->link_down_wrk))
++ smcr_link_put(lnk);
+ }
+ }
+
+@@ -1850,11 +1852,14 @@ static void smc_link_down_work(struct work_struct *work)
+ struct smc_link_group *lgr = link->lgr;
+
+ if (list_empty(&lgr->list))
+- return;
++ goto out;
+ wake_up_all(&lgr->llc_msg_waiter);
+ down_write(&lgr->llc_conf_mutex);
+ smcr_link_down(link);
+ up_write(&lgr->llc_conf_mutex);
++
++out:
++ smcr_link_put(link); /* smcr_link_hold by schedulers of link_down_work */
+ }
+
+ static int smc_vlan_by_tcpsk_walk(struct net_device *lower_dev,
+diff --git a/sound/soc/fsl/Kconfig b/sound/soc/fsl/Kconfig
+index e283751abfefe8..678540b7828059 100644
+--- a/sound/soc/fsl/Kconfig
++++ b/sound/soc/fsl/Kconfig
+@@ -29,6 +29,7 @@ config SND_SOC_FSL_SAI
+ config SND_SOC_FSL_MQS
+ tristate "Medium Quality Sound (MQS) module support"
+ depends on SND_SOC_FSL_SAI
++ depends on IMX_SCMI_MISC_DRV || !IMX_SCMI_MISC_DRV
+ select REGMAP_MMIO
+ help
+ Say Y if you want to add Medium Quality Sound (MQS)
+diff --git a/tools/hv/hv_fcopy_uio_daemon.c b/tools/hv/hv_fcopy_uio_daemon.c
+index 7a00f3066a9807..12743d7f164f0d 100644
+--- a/tools/hv/hv_fcopy_uio_daemon.c
++++ b/tools/hv/hv_fcopy_uio_daemon.c
+@@ -35,8 +35,6 @@
+ #define WIN8_SRV_MINOR 1
+ #define WIN8_SRV_VERSION (WIN8_SRV_MAJOR << 16 | WIN8_SRV_MINOR)
+
+-#define MAX_FOLDER_NAME 15
+-#define MAX_PATH_LEN 15
+ #define FCOPY_UIO "/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio"
+
+ #define FCOPY_VER_COUNT 1
+@@ -51,7 +49,7 @@ static const int fw_versions[] = {
+
+ #define HV_RING_SIZE 0x4000 /* 16KB ring buffer size */
+
+-unsigned char desc[HV_RING_SIZE];
++static unsigned char desc[HV_RING_SIZE];
+
+ static int target_fd;
+ static char target_fname[PATH_MAX];
+@@ -409,8 +407,8 @@ int main(int argc, char *argv[])
+ struct vmbus_br txbr, rxbr;
+ void *ring;
+ uint32_t len = HV_RING_SIZE;
+- char uio_name[MAX_FOLDER_NAME] = {0};
+- char uio_dev_path[MAX_PATH_LEN] = {0};
++ char uio_name[NAME_MAX] = {0};
++ char uio_dev_path[PATH_MAX] = {0};
+
+ static struct option long_options[] = {
+ {"help", no_argument, 0, 'h' },
+diff --git a/tools/hv/hv_set_ifconfig.sh b/tools/hv/hv_set_ifconfig.sh
+index 440a91b35823bf..2f8baed2b8f796 100755
+--- a/tools/hv/hv_set_ifconfig.sh
++++ b/tools/hv/hv_set_ifconfig.sh
+@@ -81,7 +81,7 @@ echo "ONBOOT=yes" >> $1
+
+ cp $1 /etc/sysconfig/network-scripts/
+
+-chmod 600 $2
++umask 0177
+ interface=$(echo $2 | awk -F - '{ print $2 }')
+ filename="${2##*/}"
+
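
Swapping the after-the-fact chmod 600 for umask 0177 means files created later in
the script start out restrictive rather than being tightened once they already
exist. In C terms, a 0177 mask turns the usual 0666 creation mode into 0600 (a
sketch, not the script itself; the filename is hypothetical):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        struct stat st;

        umask(0177);          /* clears owner x and all group/other bits */

        int fd = open("/tmp/demo_ifcfg", O_CREAT | O_WRONLY, 0666);
        if (fd < 0)
            return 1;
        fstat(fd, &st);
        printf("mode: %o\n", st.st_mode & 0777);   /* 600: owner rw only */
        close(fd);
        return 0;
    }
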
+diff --git a/tools/net/ynl/lib/ynl.py b/tools/net/ynl/lib/ynl.py
+index c22c22bf2cb7d1..a3f741fed0a343 100644
+--- a/tools/net/ynl/lib/ynl.py
++++ b/tools/net/ynl/lib/ynl.py
+@@ -553,10 +553,10 @@ class YnlFamily(SpecFamily):
+ if attr["type"] == 'nest':
+ nl_type |= Netlink.NLA_F_NESTED
+ attr_payload = b''
+- sub_attrs = SpaceAttrs(self.attr_sets[space], value, search_attrs)
++ sub_space = attr['nested-attributes']
++ sub_attrs = SpaceAttrs(self.attr_sets[sub_space], value, search_attrs)
+ for subname, subvalue in value.items():
+- attr_payload += self._add_attr(attr['nested-attributes'],
+- subname, subvalue, sub_attrs)
++ attr_payload += self._add_attr(sub_space, subname, subvalue, sub_attrs)
+ elif attr["type"] == 'flag':
+ if not value:
+ # If value is absent or false then skip attribute creation.
+diff --git a/tools/testing/selftests/bpf/sdt.h b/tools/testing/selftests/bpf/sdt.h
+index ca0162b4dc5752..1fcfa5160231de 100644
+--- a/tools/testing/selftests/bpf/sdt.h
++++ b/tools/testing/selftests/bpf/sdt.h
+@@ -102,6 +102,8 @@
+ # define STAP_SDT_ARG_CONSTRAINT nZr
+ # elif defined __arm__
+ # define STAP_SDT_ARG_CONSTRAINT g
++# elif defined __loongarch__
++# define STAP_SDT_ARG_CONSTRAINT nmr
+ # else
+ # define STAP_SDT_ARG_CONSTRAINT nor
+ # endif
+diff --git a/tools/testing/selftests/memfd/memfd_test.c b/tools/testing/selftests/memfd/memfd_test.c
+index 95af2d78fd318c..0a0b5551602808 100644
+--- a/tools/testing/selftests/memfd/memfd_test.c
++++ b/tools/testing/selftests/memfd/memfd_test.c
+@@ -9,6 +9,7 @@
+ #include <fcntl.h>
+ #include <linux/memfd.h>
+ #include <sched.h>
++#include <stdbool.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <signal.h>
+@@ -1557,6 +1558,11 @@ static void test_share_fork(char *banner, char *b_suffix)
+ close(fd);
+ }
+
++static bool pid_ns_supported(void)
++{
++ return access("/proc/self/ns/pid", F_OK) == 0;
++}
++
+ int main(int argc, char **argv)
+ {
+ pid_t pid;
+@@ -1591,8 +1597,12 @@ int main(int argc, char **argv)
+ test_seal_grow();
+ test_seal_resize();
+
+- test_sysctl_simple();
+- test_sysctl_nested();
++ if (pid_ns_supported()) {
++ test_sysctl_simple();
++ test_sysctl_nested();
++ } else {
++ printf("PID namespaces are not supported; skipping sysctl tests\n");
++ }
+
+ test_share_dup("SHARE-DUP", "");
+ test_share_mmap("SHARE-MMAP", "");
+diff --git a/tools/testing/selftests/net/openvswitch/openvswitch.sh b/tools/testing/selftests/net/openvswitch/openvswitch.sh
+index cc0bfae2bafa1b..960e1ab4dd04b1 100755
+--- a/tools/testing/selftests/net/openvswitch/openvswitch.sh
++++ b/tools/testing/selftests/net/openvswitch/openvswitch.sh
+@@ -171,8 +171,10 @@ ovs_add_netns_and_veths () {
+ ovs_add_if "$1" "$2" "$4" -u || return 1
+ fi
+
+- [ $TRACING -eq 1 ] && ovs_netns_spawn_daemon "$1" "$ns" \
+- tcpdump -i any -s 65535
++ if [ $TRACING -eq 1 ]; then
++ ovs_netns_spawn_daemon "$1" "$3" tcpdump -l -i any -s 6553
++ ovs_wait grep -q "listening on any" ${ovs_dir}/stderr
++ fi
+
+ return 0
+ }
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-01-02 12:31 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-01-02 12:31 UTC (permalink / raw
To: gentoo-commits
commit: dd372ef01fb36fbfcde98200d805cbd936e6afc8
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 2 12:26:21 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 2 12:26:21 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=dd372ef0
Linux patch 6.12.8
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1007_linux-6.12.8.patch | 4586 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4590 insertions(+)
diff --git a/0000_README b/0000_README
index 6961ab2e..483a9fde 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch: 1006_linux-6.12.7.patch
From: https://www.kernel.org
Desc: Linux 6.12.7
+Patch: 1007_linux-6.12.8.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.8
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1007_linux-6.12.8.patch b/1007_linux-6.12.8.patch
new file mode 100644
index 00000000..6c1c3893
--- /dev/null
+++ b/1007_linux-6.12.8.patch
@@ -0,0 +1,4586 @@
+diff --git a/Documentation/arch/arm64/silicon-errata.rst b/Documentation/arch/arm64/silicon-errata.rst
+index 77db10e944f039..b42fea07c5cec8 100644
+--- a/Documentation/arch/arm64/silicon-errata.rst
++++ b/Documentation/arch/arm64/silicon-errata.rst
+@@ -255,8 +255,9 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Hisilicon | Hip08 SMMU PMCG | #162001800 | N/A |
+ +----------------+-----------------+-----------------+-----------------------------+
+-| Hisilicon | Hip{08,09,10,10C| #162001900 | N/A |
+-| | ,11} SMMU PMCG | | |
++| Hisilicon | Hip{08,09,09A,10| #162001900 | N/A |
++| | ,10C,11} | | |
++| | SMMU PMCG | | |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | Hisilicon | Hip09 | #162100801 | HISILICON_ERRATUM_162100801 |
+ +----------------+-----------------+-----------------+-----------------------------+
+diff --git a/Documentation/devicetree/bindings/sound/realtek,rt5645.yaml b/Documentation/devicetree/bindings/sound/realtek,rt5645.yaml
+index 13f09f1bc8003a..0a698798c22be2 100644
+--- a/Documentation/devicetree/bindings/sound/realtek,rt5645.yaml
++++ b/Documentation/devicetree/bindings/sound/realtek,rt5645.yaml
+@@ -51,7 +51,7 @@ properties:
+ description: Power supply for AVDD, providing 1.8V.
+
+ cpvdd-supply:
+- description: Power supply for CPVDD, providing 3.5V.
++ description: Power supply for CPVDD, providing 1.8V.
+
+ hp-detect-gpios:
+ description:
+diff --git a/Makefile b/Makefile
+index 685a57f6c8d279..8a10105c2539cf 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/boot/dts/broadcom/bcm2712.dtsi b/arch/arm64/boot/dts/broadcom/bcm2712.dtsi
+index 6e5a984c1d4ea1..26a29e5e5078d5 100644
+--- a/arch/arm64/boot/dts/broadcom/bcm2712.dtsi
++++ b/arch/arm64/boot/dts/broadcom/bcm2712.dtsi
+@@ -67,7 +67,7 @@ cpu0: cpu@0 {
+ l2_cache_l0: l2-cache-l0 {
+ compatible = "cache";
+ cache-size = <0x80000>;
+- cache-line-size = <128>;
++ cache-line-size = <64>;
+ cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set
+ cache-level = <2>;
+ cache-unified;
+@@ -91,7 +91,7 @@ cpu1: cpu@1 {
+ l2_cache_l1: l2-cache-l1 {
+ compatible = "cache";
+ cache-size = <0x80000>;
+- cache-line-size = <128>;
++ cache-line-size = <64>;
+ cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set
+ cache-level = <2>;
+ cache-unified;
+@@ -115,7 +115,7 @@ cpu2: cpu@2 {
+ l2_cache_l2: l2-cache-l2 {
+ compatible = "cache";
+ cache-size = <0x80000>;
+- cache-line-size = <128>;
++ cache-line-size = <64>;
+ cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set
+ cache-level = <2>;
+ cache-unified;
+@@ -139,7 +139,7 @@ cpu3: cpu@3 {
+ l2_cache_l3: l2-cache-l3 {
+ compatible = "cache";
+ cache-size = <0x80000>;
+- cache-line-size = <128>;
++ cache-line-size = <64>;
+ cache-sets = <1024>; //512KiB(size)/64(line-size)=8192ways/8-way set
+ cache-level = <2>;
+ cache-unified;
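
The four bcm2712.dtsi hunks above change only cache-line-size; the correction can
be cross-checked against the cache-sets comment that stays in place. For a 512 KiB
(0x80000) 8-way set-associative L2, only a 64-byte line yields the stated 1024 sets:

    sets = size / (line_size * ways)
         = 524288 / (128 * 8) = 512    (old value, contradicts cache-sets = <1024>)
         = 524288 / (64 * 8)  = 1024   (new value, consistent)
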
+diff --git a/arch/loongarch/include/asm/inst.h b/arch/loongarch/include/asm/inst.h
+index 944482063f14e3..3089785ca97e78 100644
+--- a/arch/loongarch/include/asm/inst.h
++++ b/arch/loongarch/include/asm/inst.h
+@@ -683,7 +683,17 @@ DEF_EMIT_REG2I16_FORMAT(blt, blt_op)
+ DEF_EMIT_REG2I16_FORMAT(bge, bge_op)
+ DEF_EMIT_REG2I16_FORMAT(bltu, bltu_op)
+ DEF_EMIT_REG2I16_FORMAT(bgeu, bgeu_op)
+-DEF_EMIT_REG2I16_FORMAT(jirl, jirl_op)
++
++static inline void emit_jirl(union loongarch_instruction *insn,
++ enum loongarch_gpr rd,
++ enum loongarch_gpr rj,
++ int offset)
++{
++ insn->reg2i16_format.opcode = jirl_op;
++ insn->reg2i16_format.immediate = offset;
++ insn->reg2i16_format.rd = rd;
++ insn->reg2i16_format.rj = rj;
++}
+
+ #define DEF_EMIT_REG2BSTRD_FORMAT(NAME, OP) \
+ static inline void emit_##NAME(union loongarch_instruction *insn, \
+diff --git a/arch/loongarch/kernel/efi.c b/arch/loongarch/kernel/efi.c
+index 2bf86aeda874c7..de21e72759eebc 100644
+--- a/arch/loongarch/kernel/efi.c
++++ b/arch/loongarch/kernel/efi.c
+@@ -95,7 +95,7 @@ static void __init init_screen_info(void)
+ memset(si, 0, sizeof(*si));
+ early_memunmap(si, sizeof(*si));
+
+- memblock_reserve(screen_info.lfb_base, screen_info.lfb_size);
++ memblock_reserve(__screen_info_lfb_base(&screen_info), screen_info.lfb_size);
+ }
+
+ void __init efi_init(void)
+diff --git a/arch/loongarch/kernel/inst.c b/arch/loongarch/kernel/inst.c
+index 3050329556d118..14d7d700bcb98f 100644
+--- a/arch/loongarch/kernel/inst.c
++++ b/arch/loongarch/kernel/inst.c
+@@ -332,7 +332,7 @@ u32 larch_insn_gen_jirl(enum loongarch_gpr rd, enum loongarch_gpr rj, int imm)
+ return INSN_BREAK;
+ }
+
+- emit_jirl(&insn, rj, rd, imm >> 2);
++ emit_jirl(&insn, rd, rj, imm >> 2);
+
+ return insn.word;
+ }
+diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
+index dd350cba1252f9..ea357a3edc0943 100644
+--- a/arch/loongarch/net/bpf_jit.c
++++ b/arch/loongarch/net/bpf_jit.c
+@@ -181,13 +181,13 @@ static void __build_epilogue(struct jit_ctx *ctx, bool is_tail_call)
+ /* Set return value */
+ emit_insn(ctx, addiw, LOONGARCH_GPR_A0, regmap[BPF_REG_0], 0);
+ /* Return to the caller */
+- emit_insn(ctx, jirl, LOONGARCH_GPR_RA, LOONGARCH_GPR_ZERO, 0);
++ emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_RA, 0);
+ } else {
+ /*
+ * Call the next bpf prog and skip the first instruction
+ * of TCC initialization.
+ */
+- emit_insn(ctx, jirl, LOONGARCH_GPR_T3, LOONGARCH_GPR_ZERO, 1);
++ emit_insn(ctx, jirl, LOONGARCH_GPR_ZERO, LOONGARCH_GPR_T3, 1);
+ }
+ }
+
+@@ -904,7 +904,7 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
+ return ret;
+
+ move_addr(ctx, t1, func_addr);
+- emit_insn(ctx, jirl, t1, LOONGARCH_GPR_RA, 0);
++ emit_insn(ctx, jirl, LOONGARCH_GPR_RA, t1, 0);
+ move_reg(ctx, regmap[BPF_REG_0], LOONGARCH_GPR_A0);
+ break;
+
+diff --git a/arch/powerpc/platforms/book3s/vas-api.c b/arch/powerpc/platforms/book3s/vas-api.c
+index f381b177ea06ad..0b6365d85d1171 100644
+--- a/arch/powerpc/platforms/book3s/vas-api.c
++++ b/arch/powerpc/platforms/book3s/vas-api.c
+@@ -464,7 +464,43 @@ static vm_fault_t vas_mmap_fault(struct vm_fault *vmf)
+ return VM_FAULT_SIGBUS;
+ }
+
++/*
++ * During mmap() paste address, mapping VMA is saved in VAS window
++ * struct which is used to unmap during migration if the window is
++ * still open. But the user space can remove this mapping with
++ * munmap() before closing the window and the VMA address will
++ * be invalid. Set VAS window VMA to NULL in this function which
++ * is called before VMA free.
++ */
++static void vas_mmap_close(struct vm_area_struct *vma)
++{
++ struct file *fp = vma->vm_file;
++ struct coproc_instance *cp_inst = fp->private_data;
++ struct vas_window *txwin;
++
++ /* Should not happen */
++ if (!cp_inst || !cp_inst->txwin) {
++ pr_err("No attached VAS window for the paste address mmap\n");
++ return;
++ }
++
++ txwin = cp_inst->txwin;
++ /*
++ * task_ref.vma is set in coproc_mmap() during mmap paste
++ * address. So it has to be the same VMA that is getting freed.
++ */
++ if (WARN_ON(txwin->task_ref.vma != vma)) {
++ pr_err("Invalid paste address mmaping\n");
++ return;
++ }
++
++ mutex_lock(&txwin->task_ref.mmap_mutex);
++ txwin->task_ref.vma = NULL;
++ mutex_unlock(&txwin->task_ref.mmap_mutex);
++}
++
+ static const struct vm_operations_struct vas_vm_ops = {
++ .close = vas_mmap_close,
+ .fault = vas_mmap_fault,
+ };
+
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index d879478db3f572..28b4312f25631c 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -429,6 +429,16 @@ static struct event_constraint intel_lnc_event_constraints[] = {
+ EVENT_CONSTRAINT_END
+ };
+
++static struct extra_reg intel_lnc_extra_regs[] __read_mostly = {
++ INTEL_UEVENT_EXTRA_REG(0x012a, MSR_OFFCORE_RSP_0, 0xfffffffffffull, RSP_0),
++ INTEL_UEVENT_EXTRA_REG(0x012b, MSR_OFFCORE_RSP_1, 0xfffffffffffull, RSP_1),
++ INTEL_UEVENT_PEBS_LDLAT_EXTRA_REG(0x01cd),
++ INTEL_UEVENT_EXTRA_REG(0x02c6, MSR_PEBS_FRONTEND, 0x9, FE),
++ INTEL_UEVENT_EXTRA_REG(0x03c6, MSR_PEBS_FRONTEND, 0x7fff1f, FE),
++ INTEL_UEVENT_EXTRA_REG(0x40ad, MSR_PEBS_FRONTEND, 0xf, FE),
++ INTEL_UEVENT_EXTRA_REG(0x04c2, MSR_PEBS_FRONTEND, 0x8, FE),
++ EVENT_EXTRA_END
++};
+
+ EVENT_ATTR_STR(mem-loads, mem_ld_nhm, "event=0x0b,umask=0x10,ldlat=3");
+ EVENT_ATTR_STR(mem-loads, mem_ld_snb, "event=0xcd,umask=0x1,ldlat=3");
+@@ -6344,7 +6354,7 @@ static __always_inline void intel_pmu_init_lnc(struct pmu *pmu)
+ intel_pmu_init_glc(pmu);
+ hybrid(pmu, event_constraints) = intel_lnc_event_constraints;
+ hybrid(pmu, pebs_constraints) = intel_lnc_pebs_event_constraints;
+- hybrid(pmu, extra_regs) = intel_rwc_extra_regs;
++ hybrid(pmu, extra_regs) = intel_lnc_extra_regs;
+ }
+
+ static __always_inline void intel_pmu_init_skt(struct pmu *pmu)
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 6188650707ab27..19a9fd974e3e1d 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -2496,6 +2496,7 @@ void __init intel_ds_init(void)
+ x86_pmu.large_pebs_flags |= PERF_SAMPLE_TIME;
+ break;
+
++ case 6:
+ case 5:
+ x86_pmu.pebs_ept = 1;
+ fallthrough;
+diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
+index d98fac56768469..e7aba7349231d1 100644
+--- a/arch/x86/events/intel/uncore.c
++++ b/arch/x86/events/intel/uncore.c
+@@ -1910,6 +1910,7 @@ static const struct x86_cpu_id intel_uncore_match[] __initconst = {
+ X86_MATCH_VFM(INTEL_ATOM_GRACEMONT, &adl_uncore_init),
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, &gnr_uncore_init),
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, &gnr_uncore_init),
++ X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, &gnr_uncore_init),
+ {},
+ };
+ MODULE_DEVICE_TABLE(x86cpu, intel_uncore_match);
+diff --git a/arch/x86/kernel/cet.c b/arch/x86/kernel/cet.c
+index d2c732a34e5d90..303bf74d175b30 100644
+--- a/arch/x86/kernel/cet.c
++++ b/arch/x86/kernel/cet.c
+@@ -81,6 +81,34 @@ static void do_user_cp_fault(struct pt_regs *regs, unsigned long error_code)
+
+ static __ro_after_init bool ibt_fatal = true;
+
++/*
++ * By definition, all missing-ENDBRANCH #CPs are a result of WFE && !ENDBR.
++ *
++ * For the kernel IBT no ENDBR selftest where #CPs are deliberately triggered,
++ * the WFE state of the interrupted context needs to be cleared to let execution
++ * continue. Otherwise when the CPU resumes from the instruction that just
++ * caused the previous #CP, another missing-ENDBRANCH #CP is raised and the CPU
++ * enters a dead loop.
++ *
++ * This is not a problem with IDT because it doesn't preserve WFE and IRET doesn't
++ * set WFE. But FRED provides space on the entry stack (in an expanded CS area)
++ * to save and restore the WFE state, thus the WFE state is no longer clobbered,
++ * so software must clear it.
++ */
++static void ibt_clear_fred_wfe(struct pt_regs *regs)
++{
++ /*
++ * No need to do any FRED checks.
++ *
++ * For IDT event delivery, the high-order 48 bits of CS are pushed
++ * as 0s into the stack, and later IRET ignores these bits.
++ *
++ * For FRED, a test to check if fred_cs.wfe is set would be dropped
++ * by compilers.
++ */
++ regs->fred_cs.wfe = 0;
++}
++
+ static void do_kernel_cp_fault(struct pt_regs *regs, unsigned long error_code)
+ {
+ if ((error_code & CP_EC) != CP_ENDBR) {
+@@ -90,6 +118,7 @@ static void do_kernel_cp_fault(struct pt_regs *regs, unsigned long error_code)
+
+ if (unlikely(regs->ip == (unsigned long)&ibt_selftest_noendbr)) {
+ regs->ax = 0;
++ ibt_clear_fred_wfe(regs);
+ return;
+ }
+
+@@ -97,6 +126,7 @@ static void do_kernel_cp_fault(struct pt_regs *regs, unsigned long error_code)
+ if (!ibt_fatal) {
+ printk(KERN_DEFAULT CUT_HERE);
+ __warn(__FILE__, __LINE__, (void *)regs->ip, TAINT_WARN, regs, NULL);
++ ibt_clear_fred_wfe(regs);
+ return;
+ }
+ BUG();
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index d5995021815ddf..4e76651e786d19 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -3903,16 +3903,11 @@ static int blk_mq_init_hctx(struct request_queue *q,
+ {
+ hctx->queue_num = hctx_idx;
+
+- if (!(hctx->flags & BLK_MQ_F_STACKING))
+- cpuhp_state_add_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
+- &hctx->cpuhp_online);
+- cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD, &hctx->cpuhp_dead);
+-
+ hctx->tags = set->tags[hctx_idx];
+
+ if (set->ops->init_hctx &&
+ set->ops->init_hctx(hctx, set->driver_data, hctx_idx))
+- goto unregister_cpu_notifier;
++ goto fail;
+
+ if (blk_mq_init_request(set, hctx->fq->flush_rq, hctx_idx,
+ hctx->numa_node))
+@@ -3921,6 +3916,11 @@ static int blk_mq_init_hctx(struct request_queue *q,
+ if (xa_insert(&q->hctx_table, hctx_idx, hctx, GFP_KERNEL))
+ goto exit_flush_rq;
+
++ if (!(hctx->flags & BLK_MQ_F_STACKING))
++ cpuhp_state_add_instance_nocalls(CPUHP_AP_BLK_MQ_ONLINE,
++ &hctx->cpuhp_online);
++ cpuhp_state_add_instance_nocalls(CPUHP_BLK_MQ_DEAD, &hctx->cpuhp_dead);
++
+ return 0;
+
+ exit_flush_rq:
+@@ -3929,8 +3929,7 @@ static int blk_mq_init_hctx(struct request_queue *q,
+ exit_hctx:
+ if (set->ops->exit_hctx)
+ set->ops->exit_hctx(hctx, hctx_idx);
+- unregister_cpu_notifier:
+- blk_mq_remove_cpuhp(hctx);
++ fail:
+ return -1;
+ }
+
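
The blk_mq_init_hctx() reorder above follows a common initialization rule:
register the CPU-hotplug notifiers only after every step that can fail, so the
error path never needs a matching unregister. A generic sketch of the shape, with
hypothetical resource names:

    #include <stdbool.h>

    /* Hypothetical setup steps; each returns false on failure. */
    static bool setup_tags(void)     { return true; }
    static bool setup_flush_rq(void) { return true; }
    static void register_notifiers(void) { /* visible to other CPUs from here */ }

    static int init_ctx(void)
    {
        if (!setup_tags())
            goto fail;
        if (!setup_flush_rq())
            goto exit_tags;

        /* Last step: nothing after this can fail, so no unregister path. */
        register_notifiers();
        return 0;

    exit_tags:
        /* undo setup_tags() here */
    fail:
        return -1;
    }

    int main(void)
    {
        return init_ctx();
    }
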
+diff --git a/drivers/acpi/arm64/iort.c b/drivers/acpi/arm64/iort.c
+index 4c745a26226b27..bf3be532e0895d 100644
+--- a/drivers/acpi/arm64/iort.c
++++ b/drivers/acpi/arm64/iort.c
+@@ -1703,6 +1703,8 @@ static struct acpi_platform_list pmcg_plat_info[] __initdata = {
+ /* HiSilicon Hip09 Platform */
+ {"HISI ", "HIP09 ", 0, ACPI_SIG_IORT, greater_than_or_equal,
+ "Erratum #162001900", IORT_SMMU_V3_PMCG_HISI_HIP09},
++ {"HISI ", "HIP09A ", 0, ACPI_SIG_IORT, greater_than_or_equal,
++ "Erratum #162001900", IORT_SMMU_V3_PMCG_HISI_HIP09},
+ /* HiSilicon Hip10/11 Platform uses the same SMMU IP with Hip09 */
+ {"HISI ", "HIP10 ", 0, ACPI_SIG_IORT, greater_than_or_equal,
+ "Erratum #162001900", IORT_SMMU_V3_PMCG_HISI_HIP09},
+diff --git a/drivers/base/regmap/regmap.c b/drivers/base/regmap/regmap.c
+index e3e2afc2c83c6b..5962ea1230a17e 100644
+--- a/drivers/base/regmap/regmap.c
++++ b/drivers/base/regmap/regmap.c
+@@ -1063,13 +1063,13 @@ struct regmap *__regmap_init(struct device *dev,
+
+ /* Sanity check */
+ if (range_cfg->range_max < range_cfg->range_min) {
+- dev_err(map->dev, "Invalid range %d: %d < %d\n", i,
++ dev_err(map->dev, "Invalid range %d: %u < %u\n", i,
+ range_cfg->range_max, range_cfg->range_min);
+ goto err_range;
+ }
+
+ if (range_cfg->range_max > map->max_register) {
+- dev_err(map->dev, "Invalid range %d: %d > %d\n", i,
++ dev_err(map->dev, "Invalid range %d: %u > %u\n", i,
+ range_cfg->range_max, map->max_register);
+ goto err_range;
+ }
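
The regmap hunk above is a plain format-specifier fix: range_min and range_max are
unsigned, and printing an unsigned value with %d misreports anything above INT_MAX
(formally it is undefined behavior). A two-line illustration:

    #include <stdio.h>

    int main(void)
    {
        unsigned int range_max = 3000000000u;   /* > INT_MAX */

        printf("%d\n", range_max);    /* wrong specifier: typically -1294967296 */
        printf("%u\n", range_max);    /* correct: 3000000000 */
        return 0;
    }
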
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 90bc605ff6c299..458ac54e7b201e 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -1599,6 +1599,21 @@ static void ublk_unquiesce_dev(struct ublk_device *ub)
+ blk_mq_kick_requeue_list(ub->ub_disk->queue);
+ }
+
++static struct gendisk *ublk_detach_disk(struct ublk_device *ub)
++{
++ struct gendisk *disk;
++
++ /* Sync with ublk_abort_queue() by holding the lock */
++ spin_lock(&ub->lock);
++ disk = ub->ub_disk;
++ ub->dev_info.state = UBLK_S_DEV_DEAD;
++ ub->dev_info.ublksrv_pid = -1;
++ ub->ub_disk = NULL;
++ spin_unlock(&ub->lock);
++
++ return disk;
++}
++
+ static void ublk_stop_dev(struct ublk_device *ub)
+ {
+ struct gendisk *disk;
+@@ -1612,14 +1627,7 @@ static void ublk_stop_dev(struct ublk_device *ub)
+ ublk_unquiesce_dev(ub);
+ }
+ del_gendisk(ub->ub_disk);
+-
+- /* Sync with ublk_abort_queue() by holding the lock */
+- spin_lock(&ub->lock);
+- disk = ub->ub_disk;
+- ub->dev_info.state = UBLK_S_DEV_DEAD;
+- ub->dev_info.ublksrv_pid = -1;
+- ub->ub_disk = NULL;
+- spin_unlock(&ub->lock);
++ disk = ublk_detach_disk(ub);
+ put_disk(disk);
+ unlock:
+ mutex_unlock(&ub->mutex);
+@@ -2295,7 +2303,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
+
+ out_put_cdev:
+ if (ret) {
+- ub->dev_info.state = UBLK_S_DEV_DEAD;
++ ublk_detach_disk(ub);
+ ublk_put_device(ub);
+ }
+ if (ret)
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 43c96b73a7118f..0e50b65e1dbf5a 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -1587,9 +1587,12 @@ static void virtblk_remove(struct virtio_device *vdev)
+ static int virtblk_freeze(struct virtio_device *vdev)
+ {
+ struct virtio_blk *vblk = vdev->priv;
++ struct request_queue *q = vblk->disk->queue;
+
+ /* Ensure no requests in virtqueues before deleting vqs. */
+- blk_mq_freeze_queue(vblk->disk->queue);
++ blk_mq_freeze_queue(q);
++ blk_mq_quiesce_queue_nowait(q);
++ blk_mq_unfreeze_queue(q);
+
+ /* Ensure we don't receive any more interrupts */
+ virtio_reset_device(vdev);
+@@ -1613,8 +1616,8 @@ static int virtblk_restore(struct virtio_device *vdev)
+ return ret;
+
+ virtio_device_ready(vdev);
++ blk_mq_unquiesce_queue(vblk->disk->queue);
+
+- blk_mq_unfreeze_queue(vblk->disk->queue);
+ return 0;
+ }
+ #endif
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 11755cb1eb1635..0c85c981a8334a 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -870,6 +870,7 @@ struct btusb_data {
+
+ int (*suspend)(struct hci_dev *hdev);
+ int (*resume)(struct hci_dev *hdev);
++ int (*disconnect)(struct hci_dev *hdev);
+
+ int oob_wake_irq; /* irq for out-of-band wake-on-bt */
+ unsigned cmd_timeout_cnt;
+@@ -2643,11 +2644,11 @@ static void btusb_mtk_claim_iso_intf(struct btusb_data *data)
+ init_usb_anchor(&btmtk_data->isopkt_anchor);
+ }
+
+-static void btusb_mtk_release_iso_intf(struct btusb_data *data)
++static void btusb_mtk_release_iso_intf(struct hci_dev *hdev)
+ {
+- struct btmtk_data *btmtk_data = hci_get_priv(data->hdev);
++ struct btmtk_data *btmtk_data = hci_get_priv(hdev);
+
+- if (btmtk_data->isopkt_intf) {
++ if (test_bit(BTMTK_ISOPKT_OVER_INTR, &btmtk_data->flags)) {
+ usb_kill_anchored_urbs(&btmtk_data->isopkt_anchor);
+ clear_bit(BTMTK_ISOPKT_RUNNING, &btmtk_data->flags);
+
+@@ -2661,6 +2662,16 @@ static void btusb_mtk_release_iso_intf(struct btusb_data *data)
+ clear_bit(BTMTK_ISOPKT_OVER_INTR, &btmtk_data->flags);
+ }
+
++static int btusb_mtk_disconnect(struct hci_dev *hdev)
++{
++ /* This function describes the specific additional steps taken by MediaTek
++ * when Bluetooth usb driver's disconnect function is called.
++ */
++ btusb_mtk_release_iso_intf(hdev);
++
++ return 0;
++}
++
+ static int btusb_mtk_reset(struct hci_dev *hdev, void *rst_data)
+ {
+ struct btusb_data *data = hci_get_drvdata(hdev);
+@@ -2677,8 +2688,8 @@ static int btusb_mtk_reset(struct hci_dev *hdev, void *rst_data)
+ if (err < 0)
+ return err;
+
+- if (test_bit(BTMTK_ISOPKT_RUNNING, &btmtk_data->flags))
+- btusb_mtk_release_iso_intf(data);
++ /* Release MediaTek ISO data interface */
++ btusb_mtk_release_iso_intf(hdev);
+
+ btusb_stop_traffic(data);
+ usb_kill_anchored_urbs(&data->tx_anchor);
+@@ -2723,22 +2734,24 @@ static int btusb_mtk_setup(struct hci_dev *hdev)
+ btmtk_data->reset_sync = btusb_mtk_reset;
+
+ /* Claim ISO data interface and endpoint */
+- btmtk_data->isopkt_intf = usb_ifnum_to_if(data->udev, MTK_ISO_IFNUM);
+- if (btmtk_data->isopkt_intf)
++ if (!test_bit(BTMTK_ISOPKT_OVER_INTR, &btmtk_data->flags)) {
++ btmtk_data->isopkt_intf = usb_ifnum_to_if(data->udev, MTK_ISO_IFNUM);
+ btusb_mtk_claim_iso_intf(data);
++ }
+
+ return btmtk_usb_setup(hdev);
+ }
+
+ static int btusb_mtk_shutdown(struct hci_dev *hdev)
+ {
+- struct btusb_data *data = hci_get_drvdata(hdev);
+- struct btmtk_data *btmtk_data = hci_get_priv(hdev);
++ int ret;
+
+- if (test_bit(BTMTK_ISOPKT_RUNNING, &btmtk_data->flags))
+- btusb_mtk_release_iso_intf(data);
++ ret = btmtk_usb_shutdown(hdev);
+
+- return btmtk_usb_shutdown(hdev);
++ /* Release MediaTek iso interface after shutdown */
++ btusb_mtk_release_iso_intf(hdev);
++
++ return ret;
+ }
+
+ #ifdef CONFIG_PM
+@@ -3850,6 +3863,7 @@ static int btusb_probe(struct usb_interface *intf,
+ data->recv_acl = btmtk_usb_recv_acl;
+ data->suspend = btmtk_usb_suspend;
+ data->resume = btmtk_usb_resume;
++ data->disconnect = btusb_mtk_disconnect;
+ }
+
+ if (id->driver_info & BTUSB_SWAVE) {
+@@ -4040,6 +4054,9 @@ static void btusb_disconnect(struct usb_interface *intf)
+ if (data->diag)
+ usb_set_intfdata(data->diag, NULL);
+
++ if (data->disconnect)
++ data->disconnect(hdev);
++
+ hci_unregister_dev(hdev);
+
+ if (intf == data->intf) {
+diff --git a/drivers/dma/amd/qdma/qdma.c b/drivers/dma/amd/qdma/qdma.c
+index b0a1f3ad851b1e..4761fa25501561 100644
+--- a/drivers/dma/amd/qdma/qdma.c
++++ b/drivers/dma/amd/qdma/qdma.c
+@@ -7,9 +7,9 @@
+ #include <linux/bitfield.h>
+ #include <linux/bitops.h>
+ #include <linux/dmaengine.h>
++#include <linux/dma-mapping.h>
+ #include <linux/module.h>
+ #include <linux/mod_devicetable.h>
+-#include <linux/dma-map-ops.h>
+ #include <linux/platform_device.h>
+ #include <linux/platform_data/amd_qdma.h>
+ #include <linux/regmap.h>
+@@ -492,18 +492,9 @@ static int qdma_device_verify(struct qdma_device *qdev)
+
+ static int qdma_device_setup(struct qdma_device *qdev)
+ {
+- struct device *dev = &qdev->pdev->dev;
+ u32 ring_sz = QDMA_DEFAULT_RING_SIZE;
+ int ret = 0;
+
+- while (dev && get_dma_ops(dev))
+- dev = dev->parent;
+- if (!dev) {
+- qdma_err(qdev, "dma device not found");
+- return -EINVAL;
+- }
+- set_dma_ops(&qdev->pdev->dev, get_dma_ops(dev));
+-
+ ret = qdma_setup_fmap_context(qdev);
+ if (ret) {
+ qdma_err(qdev, "Failed setup fmap context");
+@@ -548,11 +539,12 @@ static void qdma_free_queue_resources(struct dma_chan *chan)
+ {
+ struct qdma_queue *queue = to_qdma_queue(chan);
+ struct qdma_device *qdev = queue->qdev;
+- struct device *dev = qdev->dma_dev.dev;
++ struct qdma_platdata *pdata;
+
+ qdma_clear_queue_context(queue);
+ vchan_free_chan_resources(&queue->vchan);
+- dma_free_coherent(dev, queue->ring_size * QDMA_MM_DESC_SIZE,
++ pdata = dev_get_platdata(&qdev->pdev->dev);
++ dma_free_coherent(pdata->dma_dev, queue->ring_size * QDMA_MM_DESC_SIZE,
+ queue->desc_base, queue->dma_desc_base);
+ }
+
+@@ -565,6 +557,7 @@ static int qdma_alloc_queue_resources(struct dma_chan *chan)
+ struct qdma_queue *queue = to_qdma_queue(chan);
+ struct qdma_device *qdev = queue->qdev;
+ struct qdma_ctxt_sw_desc desc;
++ struct qdma_platdata *pdata;
+ size_t size;
+ int ret;
+
+@@ -572,8 +565,9 @@ static int qdma_alloc_queue_resources(struct dma_chan *chan)
+ if (ret)
+ return ret;
+
++ pdata = dev_get_platdata(&qdev->pdev->dev);
+ size = queue->ring_size * QDMA_MM_DESC_SIZE;
+- queue->desc_base = dma_alloc_coherent(qdev->dma_dev.dev, size,
++ queue->desc_base = dma_alloc_coherent(pdata->dma_dev, size,
+ &queue->dma_desc_base,
+ GFP_KERNEL);
+ if (!queue->desc_base) {
+@@ -588,7 +582,7 @@ static int qdma_alloc_queue_resources(struct dma_chan *chan)
+ if (ret) {
+ qdma_err(qdev, "Failed to setup SW desc ctxt for %s",
+ chan->name);
+- dma_free_coherent(qdev->dma_dev.dev, size, queue->desc_base,
++ dma_free_coherent(pdata->dma_dev, size, queue->desc_base,
+ queue->dma_desc_base);
+ return ret;
+ }
+@@ -948,8 +942,9 @@ static int qdma_init_error_irq(struct qdma_device *qdev)
+
+ static int qdmam_alloc_qintr_rings(struct qdma_device *qdev)
+ {
+- u32 ctxt[QDMA_CTXT_REGMAP_LEN];
++ struct qdma_platdata *pdata = dev_get_platdata(&qdev->pdev->dev);
+ struct device *dev = &qdev->pdev->dev;
++ u32 ctxt[QDMA_CTXT_REGMAP_LEN];
+ struct qdma_intr_ring *ring;
+ struct qdma_ctxt_intr intr_ctxt;
+ u32 vector;
+@@ -969,7 +964,8 @@ static int qdmam_alloc_qintr_rings(struct qdma_device *qdev)
+ ring->msix_id = qdev->err_irq_idx + i + 1;
+ ring->ridx = i;
+ ring->color = 1;
+- ring->base = dmam_alloc_coherent(dev, QDMA_INTR_RING_SIZE,
++ ring->base = dmam_alloc_coherent(pdata->dma_dev,
++ QDMA_INTR_RING_SIZE,
+ &ring->dev_base, GFP_KERNEL);
+ if (!ring->base) {
+ qdma_err(qdev, "Failed to alloc intr ring %d", i);
+diff --git a/drivers/dma/apple-admac.c b/drivers/dma/apple-admac.c
+index 9588773dd2eb67..037ec38730cf98 100644
+--- a/drivers/dma/apple-admac.c
++++ b/drivers/dma/apple-admac.c
+@@ -153,6 +153,8 @@ static int admac_alloc_sram_carveout(struct admac_data *ad,
+ {
+ struct admac_sram *sram;
+ int i, ret = 0, nblocks;
++ ad->txcache.size = readl_relaxed(ad->base + REG_TX_SRAM_SIZE);
++ ad->rxcache.size = readl_relaxed(ad->base + REG_RX_SRAM_SIZE);
+
+ if (dir == DMA_MEM_TO_DEV)
+ sram = &ad->txcache;
+@@ -912,12 +914,7 @@ static int admac_probe(struct platform_device *pdev)
+ goto free_irq;
+ }
+
+- ad->txcache.size = readl_relaxed(ad->base + REG_TX_SRAM_SIZE);
+- ad->rxcache.size = readl_relaxed(ad->base + REG_RX_SRAM_SIZE);
+-
+ dev_info(&pdev->dev, "Audio DMA Controller\n");
+- dev_info(&pdev->dev, "imprint %x TX cache %u RX cache %u\n",
+- readl_relaxed(ad->base + REG_IMPRINT), ad->txcache.size, ad->rxcache.size);
+
+ return 0;
+
+diff --git a/drivers/dma/at_xdmac.c b/drivers/dma/at_xdmac.c
+index 299396121e6dc5..e847ad66dc0b49 100644
+--- a/drivers/dma/at_xdmac.c
++++ b/drivers/dma/at_xdmac.c
+@@ -1363,6 +1363,8 @@ at_xdmac_prep_dma_memset(struct dma_chan *chan, dma_addr_t dest, int value,
+ return NULL;
+
+ desc = at_xdmac_memset_create_desc(chan, atchan, dest, len, value);
++ if (!desc)
++ return NULL;
+ list_add_tail(&desc->desc_node, &desc->descs_list);
+
+ desc->tx_dma_desc.cookie = -EBUSY;
+diff --git a/drivers/dma/dw/acpi.c b/drivers/dma/dw/acpi.c
+index c510c109d2c3ad..b6452fffa657ad 100644
+--- a/drivers/dma/dw/acpi.c
++++ b/drivers/dma/dw/acpi.c
+@@ -8,13 +8,15 @@
+
+ static bool dw_dma_acpi_filter(struct dma_chan *chan, void *param)
+ {
++ struct dw_dma *dw = to_dw_dma(chan->device);
++ struct dw_dma_chip_pdata *data = dev_get_drvdata(dw->dma.dev);
+ struct acpi_dma_spec *dma_spec = param;
+ struct dw_dma_slave slave = {
+ .dma_dev = dma_spec->dev,
+ .src_id = dma_spec->slave_id,
+ .dst_id = dma_spec->slave_id,
+- .m_master = 0,
+- .p_master = 1,
++ .m_master = data->m_master,
++ .p_master = data->p_master,
+ };
+
+ return dw_dma_filter(chan, &slave);
+diff --git a/drivers/dma/dw/internal.h b/drivers/dma/dw/internal.h
+index 563ce73488db32..f1bd06a20cd611 100644
+--- a/drivers/dma/dw/internal.h
++++ b/drivers/dma/dw/internal.h
+@@ -51,11 +51,15 @@ struct dw_dma_chip_pdata {
+ int (*probe)(struct dw_dma_chip *chip);
+ int (*remove)(struct dw_dma_chip *chip);
+ struct dw_dma_chip *chip;
++ u8 m_master;
++ u8 p_master;
+ };
+
+ static __maybe_unused const struct dw_dma_chip_pdata dw_dma_chip_pdata = {
+ .probe = dw_dma_probe,
+ .remove = dw_dma_remove,
++ .m_master = 0,
++ .p_master = 1,
+ };
+
+ static const struct dw_dma_platform_data idma32_pdata = {
+@@ -72,6 +76,8 @@ static __maybe_unused const struct dw_dma_chip_pdata idma32_chip_pdata = {
+ .pdata = &idma32_pdata,
+ .probe = idma32_dma_probe,
+ .remove = idma32_dma_remove,
++ .m_master = 0,
++ .p_master = 0,
+ };
+
+ static const struct dw_dma_platform_data xbar_pdata = {
+@@ -88,6 +94,8 @@ static __maybe_unused const struct dw_dma_chip_pdata xbar_chip_pdata = {
+ .pdata = &xbar_pdata,
+ .probe = idma32_dma_probe,
+ .remove = idma32_dma_remove,
++ .m_master = 0,
++ .p_master = 0,
+ };
+
+ #endif /* _DMA_DW_INTERNAL_H */
+diff --git a/drivers/dma/dw/pci.c b/drivers/dma/dw/pci.c
+index ad2d4d012cf729..e8a0eb81726a56 100644
+--- a/drivers/dma/dw/pci.c
++++ b/drivers/dma/dw/pci.c
+@@ -56,10 +56,10 @@ static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
+ if (ret)
+ return ret;
+
+- dw_dma_acpi_controller_register(chip->dw);
+-
+ pci_set_drvdata(pdev, data);
+
++ dw_dma_acpi_controller_register(chip->dw);
++
+ return 0;
+ }
+
+diff --git a/drivers/dma/fsl-edma-common.h b/drivers/dma/fsl-edma-common.h
+index ce37e1ee9c462d..fe8f103d4a6378 100644
+--- a/drivers/dma/fsl-edma-common.h
++++ b/drivers/dma/fsl-edma-common.h
+@@ -166,6 +166,7 @@ struct fsl_edma_chan {
+ struct work_struct issue_worker;
+ struct platform_device *pdev;
+ struct device *pd_dev;
++ struct device_link *pd_dev_link;
+ u32 srcid;
+ struct clk *clk;
+ int priority;
+diff --git a/drivers/dma/fsl-edma-main.c b/drivers/dma/fsl-edma-main.c
+index f9f1eda792546e..70cb7fda757a94 100644
+--- a/drivers/dma/fsl-edma-main.c
++++ b/drivers/dma/fsl-edma-main.c
+@@ -417,10 +417,33 @@ static const struct of_device_id fsl_edma_dt_ids[] = {
+ };
+ MODULE_DEVICE_TABLE(of, fsl_edma_dt_ids);
+
++static void fsl_edma3_detach_pd(struct fsl_edma_engine *fsl_edma)
++{
++ struct fsl_edma_chan *fsl_chan;
++ int i;
++
++ for (i = 0; i < fsl_edma->n_chans; i++) {
++ if (fsl_edma->chan_masked & BIT(i))
++ continue;
++ fsl_chan = &fsl_edma->chans[i];
++ if (fsl_chan->pd_dev_link)
++ device_link_del(fsl_chan->pd_dev_link);
++ if (fsl_chan->pd_dev) {
++ dev_pm_domain_detach(fsl_chan->pd_dev, false);
++ pm_runtime_dont_use_autosuspend(fsl_chan->pd_dev);
++ pm_runtime_set_suspended(fsl_chan->pd_dev);
++ }
++ }
++}
++
++static void devm_fsl_edma3_detach_pd(void *data)
++{
++ fsl_edma3_detach_pd(data);
++}
++
+ static int fsl_edma3_attach_pd(struct platform_device *pdev, struct fsl_edma_engine *fsl_edma)
+ {
+ struct fsl_edma_chan *fsl_chan;
+- struct device_link *link;
+ struct device *pd_chan;
+ struct device *dev;
+ int i;
+@@ -436,15 +459,16 @@ static int fsl_edma3_attach_pd(struct platform_device *pdev, struct fsl_edma_eng
+ pd_chan = dev_pm_domain_attach_by_id(dev, i);
+ if (IS_ERR_OR_NULL(pd_chan)) {
+ dev_err(dev, "Failed attach pd %d\n", i);
+- return -EINVAL;
++ goto detach;
+ }
+
+- link = device_link_add(dev, pd_chan, DL_FLAG_STATELESS |
++ fsl_chan->pd_dev_link = device_link_add(dev, pd_chan, DL_FLAG_STATELESS |
+ DL_FLAG_PM_RUNTIME |
+ DL_FLAG_RPM_ACTIVE);
+- if (!link) {
++ if (!fsl_chan->pd_dev_link) {
+ dev_err(dev, "Failed to add device_link to %d\n", i);
+- return -EINVAL;
++ dev_pm_domain_detach(pd_chan, false);
++ goto detach;
+ }
+
+ fsl_chan->pd_dev = pd_chan;
+@@ -455,6 +479,10 @@ static int fsl_edma3_attach_pd(struct platform_device *pdev, struct fsl_edma_eng
+ }
+
+ return 0;
++
++detach:
++ fsl_edma3_detach_pd(fsl_edma);
++ return -EINVAL;
+ }
+
+ static int fsl_edma_probe(struct platform_device *pdev)
+@@ -544,6 +572,9 @@ static int fsl_edma_probe(struct platform_device *pdev)
+ ret = fsl_edma3_attach_pd(pdev, fsl_edma);
+ if (ret)
+ return ret;
++ ret = devm_add_action_or_reset(&pdev->dev, devm_fsl_edma3_detach_pd, fsl_edma);
++ if (ret)
++ return ret;
+ }
+
+ if (drvdata->flags & FSL_EDMA_DRV_TCD64)
+diff --git a/drivers/dma/ls2x-apb-dma.c b/drivers/dma/ls2x-apb-dma.c
+index 9652e86667224b..b4f18be6294574 100644
+--- a/drivers/dma/ls2x-apb-dma.c
++++ b/drivers/dma/ls2x-apb-dma.c
+@@ -31,7 +31,7 @@
+ #define LDMA_ASK_VALID BIT(2)
+ #define LDMA_START BIT(3) /* DMA start operation */
+ #define LDMA_STOP BIT(4) /* DMA stop operation */
+-#define LDMA_CONFIG_MASK GENMASK(4, 0) /* DMA controller config bits mask */
++#define LDMA_CONFIG_MASK GENMASK_ULL(4, 0) /* DMA controller config bits mask */
+
+ /* Bitfields in ndesc_addr field of HW descriptor */
+ #define LDMA_DESC_EN BIT(0) /*1: The next descriptor is valid */
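
The GENMASK_ULL change above matters because LDMA_CONFIG_MASK is applied to 64-bit
register values: GENMASK() produces an unsigned long, and wherever that type is 32
bits wide, ~LDMA_CONFIG_MASK zero-extends and wipes the upper half of the register.
A sketch that models a 32-bit unsigned long with uint32_t:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t cfg = 0xdeadbeef00000120ULL;

        uint32_t mask32 = 0x1f;       /* GENMASK(4, 0) with 32-bit unsigned long */
        uint64_t mask64 = 0x1fULL;    /* GENMASK_ULL(4, 0) */

        /* ~mask32 is 0xffffffe0; zero-extended, it erases the high word. */
        printf("32-bit mask: %#llx\n", (unsigned long long)(cfg & ~mask32));
        /* ~mask64 keeps the upper 32 bits intact. */
        printf("64-bit mask: %#llx\n", (unsigned long long)(cfg & ~mask64));
        return 0;
    }
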
+diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
+index 43efce77bb5770..40b76b40bc30c2 100644
+--- a/drivers/dma/mv_xor.c
++++ b/drivers/dma/mv_xor.c
+@@ -1388,6 +1388,7 @@ static int mv_xor_probe(struct platform_device *pdev)
+ irq = irq_of_parse_and_map(np, 0);
+ if (!irq) {
+ ret = -ENODEV;
++ of_node_put(np);
+ goto err_channel_add;
+ }
+
+@@ -1396,6 +1397,7 @@ static int mv_xor_probe(struct platform_device *pdev)
+ if (IS_ERR(chan)) {
+ ret = PTR_ERR(chan);
+ irq_dispose_mapping(irq);
++ of_node_put(np);
+ goto err_channel_add;
+ }
+
+diff --git a/drivers/dma/tegra186-gpc-dma.c b/drivers/dma/tegra186-gpc-dma.c
+index 3642508e88bb22..adca05ee98c922 100644
+--- a/drivers/dma/tegra186-gpc-dma.c
++++ b/drivers/dma/tegra186-gpc-dma.c
+@@ -231,6 +231,7 @@ struct tegra_dma_channel {
+ bool config_init;
+ char name[30];
+ enum dma_transfer_direction sid_dir;
++ enum dma_status status;
+ int id;
+ int irq;
+ int slave_id;
+@@ -393,6 +394,8 @@ static int tegra_dma_pause(struct tegra_dma_channel *tdc)
+ tegra_dma_dump_chan_regs(tdc);
+ }
+
++ tdc->status = DMA_PAUSED;
++
+ return ret;
+ }
+
+@@ -419,6 +422,8 @@ static void tegra_dma_resume(struct tegra_dma_channel *tdc)
+ val = tdc_read(tdc, TEGRA_GPCDMA_CHAN_CSRE);
+ val &= ~TEGRA_GPCDMA_CHAN_CSRE_PAUSE;
+ tdc_write(tdc, TEGRA_GPCDMA_CHAN_CSRE, val);
++
++ tdc->status = DMA_IN_PROGRESS;
+ }
+
+ static int tegra_dma_device_resume(struct dma_chan *dc)
+@@ -544,6 +549,7 @@ static void tegra_dma_xfer_complete(struct tegra_dma_channel *tdc)
+
+ tegra_dma_sid_free(tdc);
+ tdc->dma_desc = NULL;
++ tdc->status = DMA_COMPLETE;
+ }
+
+ static void tegra_dma_chan_decode_error(struct tegra_dma_channel *tdc,
+@@ -716,6 +722,7 @@ static int tegra_dma_terminate_all(struct dma_chan *dc)
+ tdc->dma_desc = NULL;
+ }
+
++ tdc->status = DMA_COMPLETE;
+ tegra_dma_sid_free(tdc);
+ vchan_get_all_descriptors(&tdc->vc, &head);
+ spin_unlock_irqrestore(&tdc->vc.lock, flags);
+@@ -769,6 +776,9 @@ static enum dma_status tegra_dma_tx_status(struct dma_chan *dc,
+ if (ret == DMA_COMPLETE)
+ return ret;
+
++ if (tdc->status == DMA_PAUSED)
++ ret = DMA_PAUSED;
++
+ spin_lock_irqsave(&tdc->vc.lock, flags);
+ vd = vchan_find_desc(&tdc->vc, cookie);
+ if (vd) {
+diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+index bcf3a33123be1c..f0c6d50d8c3345 100644
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -4108,9 +4108,10 @@ static void drm_dp_mst_up_req_work(struct work_struct *work)
+ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ {
+ struct drm_dp_pending_up_req *up_req;
++ struct drm_dp_mst_branch *mst_primary;
+
+ if (!drm_dp_get_one_sb_msg(mgr, true, NULL))
+- goto out;
++ goto out_clear_reply;
+
+ if (!mgr->up_req_recv.have_eomt)
+ return 0;
+@@ -4128,10 +4129,19 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ drm_dbg_kms(mgr->dev, "Received unknown up req type, ignoring: %x\n",
+ up_req->msg.req_type);
+ kfree(up_req);
+- goto out;
++ goto out_clear_reply;
++ }
++
++ mutex_lock(&mgr->lock);
++ mst_primary = mgr->mst_primary;
++ if (!mst_primary || !drm_dp_mst_topology_try_get_mstb(mst_primary)) {
++ mutex_unlock(&mgr->lock);
++ kfree(up_req);
++ goto out_clear_reply;
+ }
++ mutex_unlock(&mgr->lock);
+
+- drm_dp_send_up_ack_reply(mgr, mgr->mst_primary, up_req->msg.req_type,
++ drm_dp_send_up_ack_reply(mgr, mst_primary, up_req->msg.req_type,
+ false);
+
+ if (up_req->msg.req_type == DP_CONNECTION_STATUS_NOTIFY) {
+@@ -4148,13 +4158,13 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ conn_stat->peer_device_type);
+
+ mutex_lock(&mgr->probe_lock);
+- handle_csn = mgr->mst_primary->link_address_sent;
++ handle_csn = mst_primary->link_address_sent;
+ mutex_unlock(&mgr->probe_lock);
+
+ if (!handle_csn) {
+ drm_dbg_kms(mgr->dev, "Got CSN before finish topology probing. Skip it.");
+ kfree(up_req);
+- goto out;
++ goto out_put_primary;
+ }
+ } else if (up_req->msg.req_type == DP_RESOURCE_STATUS_NOTIFY) {
+ const struct drm_dp_resource_status_notify *res_stat =
+@@ -4171,7 +4181,9 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ mutex_unlock(&mgr->up_req_lock);
+ queue_work(system_long_wq, &mgr->up_req_work);
+
+-out:
++out_put_primary:
++ drm_dp_mst_topology_put_mstb(mst_primary);
++out_clear_reply:
+ memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
+index 5221ee3f12149b..c18e463092afa5 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump.c
++++ b/drivers/gpu/drm/xe/xe_devcoredump.c
+@@ -20,6 +20,7 @@
+ #include "xe_guc_ct.h"
+ #include "xe_guc_submit.h"
+ #include "xe_hw_engine.h"
++#include "xe_pm.h"
+ #include "xe_sched_job.h"
+ #include "xe_vm.h"
+
+@@ -143,31 +144,6 @@ static void xe_devcoredump_snapshot_free(struct xe_devcoredump_snapshot *ss)
+ ss->vm = NULL;
+ }
+
+-static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
+-{
+- struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
+- struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
+- unsigned int fw_ref;
+-
+- /* keep going if fw fails as we still want to save the memory and SW data */
+- fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
+- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
+- xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
+- xe_vm_snapshot_capture_delayed(ss->vm);
+- xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
+- xe_force_wake_put(gt_to_fw(ss->gt), fw_ref);
+-
+- /* Calculate devcoredump size */
+- ss->read.size = __xe_devcoredump_read(NULL, INT_MAX, coredump);
+-
+- ss->read.buffer = kvmalloc(ss->read.size, GFP_USER);
+- if (!ss->read.buffer)
+- return;
+-
+- __xe_devcoredump_read(ss->read.buffer, ss->read.size, coredump);
+- xe_devcoredump_snapshot_free(ss);
+-}
+-
+ static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
+ size_t count, void *data, size_t datalen)
+ {
+@@ -216,6 +192,45 @@ static void xe_devcoredump_free(void *data)
+ "Xe device coredump has been deleted.\n");
+ }
+
++static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
++{
++ struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
++ struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
++ struct xe_device *xe = coredump_to_xe(coredump);
++ unsigned int fw_ref;
++
++ /*
++ * NB: Despite passing a GFP_ flags parameter here, more allocations are done
++ * internally using GFP_KERNEL explicitly. Hence this call must be in the worker
++ * thread and not in the initial capture call.
++ */
++ dev_coredumpm_timeout(gt_to_xe(ss->gt)->drm.dev, THIS_MODULE, coredump, 0, GFP_KERNEL,
++ xe_devcoredump_read, xe_devcoredump_free,
++ XE_COREDUMP_TIMEOUT_JIFFIES);
++
++ xe_pm_runtime_get(xe);
++
++ /* keep going if fw fails as we still want to save the memory and SW data */
++ fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
++ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
++ xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
++ xe_vm_snapshot_capture_delayed(ss->vm);
++ xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
++ xe_force_wake_put(gt_to_fw(ss->gt), fw_ref);
++
++ xe_pm_runtime_put(xe);
++
++ /* Calculate devcoredump size */
++ ss->read.size = __xe_devcoredump_read(NULL, INT_MAX, coredump);
++
++ ss->read.buffer = kvmalloc(ss->read.size, GFP_USER);
++ if (!ss->read.buffer)
++ return;
++
++ __xe_devcoredump_read(ss->read.buffer, ss->read.size, coredump);
++ xe_devcoredump_snapshot_free(ss);
++}
++
+ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
+ struct xe_sched_job *job)
+ {
+@@ -299,10 +314,6 @@ void xe_devcoredump(struct xe_sched_job *job)
+ drm_info(&xe->drm, "Xe device coredump has been created\n");
+ drm_info(&xe->drm, "Check your /sys/class/drm/card%d/device/devcoredump/data\n",
+ xe->drm.primary->index);
+-
+- dev_coredumpm_timeout(xe->drm.dev, THIS_MODULE, coredump, 0, GFP_KERNEL,
+- xe_devcoredump_read, xe_devcoredump_free,
+- XE_COREDUMP_TIMEOUT_JIFFIES);
+ }
+
+ static void xe_driver_devcoredump_fini(void *arg)
+diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
+index 98539313cbc970..c5224d43eea45e 100644
+--- a/drivers/i2c/busses/i2c-imx.c
++++ b/drivers/i2c/busses/i2c-imx.c
+@@ -282,6 +282,7 @@ static const struct of_device_id i2c_imx_dt_ids[] = {
+ { .compatible = "fsl,imx6sll-i2c", .data = &imx6_i2c_hwdata, },
+ { .compatible = "fsl,imx6sx-i2c", .data = &imx6_i2c_hwdata, },
+ { .compatible = "fsl,imx6ul-i2c", .data = &imx6_i2c_hwdata, },
++ { .compatible = "fsl,imx7d-i2c", .data = &imx6_i2c_hwdata, },
+ { .compatible = "fsl,imx7s-i2c", .data = &imx6_i2c_hwdata, },
+ { .compatible = "fsl,imx8mm-i2c", .data = &imx6_i2c_hwdata, },
+ { .compatible = "fsl,imx8mn-i2c", .data = &imx6_i2c_hwdata, },
+diff --git a/drivers/i2c/busses/i2c-microchip-corei2c.c b/drivers/i2c/busses/i2c-microchip-corei2c.c
+index 0b0a1c4d17caef..b0a51695138ad0 100644
+--- a/drivers/i2c/busses/i2c-microchip-corei2c.c
++++ b/drivers/i2c/busses/i2c-microchip-corei2c.c
+@@ -93,27 +93,35 @@
+ * @base: pointer to register struct
+ * @dev: device reference
+ * @i2c_clk: clock reference for i2c input clock
++ * @msg_queue: pointer to the messages to be sent
+ * @buf: pointer to msg buffer for easier use
+ * @msg_complete: xfer completion object
+ * @adapter: core i2c abstraction
+ * @msg_err: error code for completed message
+ * @bus_clk_rate: current i2c bus clock rate
+ * @isr_status: cached copy of local ISR status
++ * @total_num: total number of messages to be sent/received
++ * @current_num: index of the current message being sent/received
+ * @msg_len: number of bytes transferred in msg
+ * @addr: address of the current slave
++ * @restart_needed: whether or not a repeated start is required after the current message
+ */
+ struct mchp_corei2c_dev {
+ void __iomem *base;
+ struct device *dev;
+ struct clk *i2c_clk;
++ struct i2c_msg *msg_queue;
+ u8 *buf;
+ struct completion msg_complete;
+ struct i2c_adapter adapter;
+ int msg_err;
++ int total_num;
++ int current_num;
+ u32 bus_clk_rate;
+ u32 isr_status;
+ u16 msg_len;
+ u8 addr;
++ bool restart_needed;
+ };
+
+ static void mchp_corei2c_core_disable(struct mchp_corei2c_dev *idev)
+@@ -222,6 +230,47 @@ static int mchp_corei2c_fill_tx(struct mchp_corei2c_dev *idev)
+ return 0;
+ }
+
++static void mchp_corei2c_next_msg(struct mchp_corei2c_dev *idev)
++{
++ struct i2c_msg *this_msg;
++ u8 ctrl;
++
++ if (idev->current_num >= idev->total_num) {
++ complete(&idev->msg_complete);
++ return;
++ }
++
++ /*
++ * If there's been an error, the isr needs to return control
++ * to the "main" part of the driver, so as not to keep sending
++ * messages once it completes and clears the SI bit.
++ */
++ if (idev->msg_err) {
++ complete(&idev->msg_complete);
++ return;
++ }
++
++ this_msg = idev->msg_queue++;
++
++ if (idev->current_num < (idev->total_num - 1)) {
++ struct i2c_msg *next_msg = idev->msg_queue;
++
++ idev->restart_needed = next_msg->flags & I2C_M_RD;
++ } else {
++ idev->restart_needed = false;
++ }
++
++ idev->addr = i2c_8bit_addr_from_msg(this_msg);
++ idev->msg_len = this_msg->len;
++ idev->buf = this_msg->buf;
++
++ ctrl = readb(idev->base + CORE_I2C_CTRL);
++ ctrl |= CTRL_STA;
++ writeb(ctrl, idev->base + CORE_I2C_CTRL);
++
++ idev->current_num++;
++}
++
+ static irqreturn_t mchp_corei2c_handle_isr(struct mchp_corei2c_dev *idev)
+ {
+ u32 status = idev->isr_status;
+@@ -238,8 +287,6 @@ static irqreturn_t mchp_corei2c_handle_isr(struct mchp_corei2c_dev *idev)
+ ctrl &= ~CTRL_STA;
+ writeb(idev->addr, idev->base + CORE_I2C_DATA);
+ writeb(ctrl, idev->base + CORE_I2C_CTRL);
+- if (idev->msg_len == 0)
+- finished = true;
+ break;
+ case STATUS_M_ARB_LOST:
+ idev->msg_err = -EAGAIN;
+@@ -247,10 +294,14 @@ static irqreturn_t mchp_corei2c_handle_isr(struct mchp_corei2c_dev *idev)
+ break;
+ case STATUS_M_SLAW_ACK:
+ case STATUS_M_TX_DATA_ACK:
+- if (idev->msg_len > 0)
++ if (idev->msg_len > 0) {
+ mchp_corei2c_fill_tx(idev);
+- else
+- last_byte = true;
++ } else {
++ if (idev->restart_needed)
++ finished = true;
++ else
++ last_byte = true;
++ }
+ break;
+ case STATUS_M_TX_DATA_NACK:
+ case STATUS_M_SLAR_NACK:
+@@ -287,7 +338,7 @@ static irqreturn_t mchp_corei2c_handle_isr(struct mchp_corei2c_dev *idev)
+ mchp_corei2c_stop(idev);
+
+ if (last_byte || finished)
+- complete(&idev->msg_complete);
++ mchp_corei2c_next_msg(idev);
+
+ return IRQ_HANDLED;
+ }
+@@ -311,21 +362,48 @@ static irqreturn_t mchp_corei2c_isr(int irq, void *_dev)
+ return ret;
+ }
+
+-static int mchp_corei2c_xfer_msg(struct mchp_corei2c_dev *idev,
+- struct i2c_msg *msg)
++static int mchp_corei2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
++ int num)
+ {
+- u8 ctrl;
++ struct mchp_corei2c_dev *idev = i2c_get_adapdata(adap);
++ struct i2c_msg *this_msg = msgs;
+ unsigned long time_left;
++ u8 ctrl;
++
++ mchp_corei2c_core_enable(idev);
++
++ /*
++ * The isr controls the flow of a transfer; this info needs to be saved
++ * to a location from which the isr can access the queue information.
++ */
++ idev->restart_needed = false;
++ idev->msg_queue = msgs;
++ idev->total_num = num;
++ idev->current_num = 0;
+
+- idev->addr = i2c_8bit_addr_from_msg(msg);
+- idev->msg_len = msg->len;
+- idev->buf = msg->buf;
++ /*
++ * But the first entry to the isr is triggered by the start in this
++ * function, so the first message needs to be "dequeued".
++ */
++ idev->addr = i2c_8bit_addr_from_msg(this_msg);
++ idev->msg_len = this_msg->len;
++ idev->buf = this_msg->buf;
+ idev->msg_err = 0;
+
+- reinit_completion(&idev->msg_complete);
++ if (idev->total_num > 1) {
++ struct i2c_msg *next_msg = msgs + 1;
+
+- mchp_corei2c_core_enable(idev);
++ idev->restart_needed = next_msg->flags & I2C_M_RD;
++ }
+
++ idev->current_num++;
++ idev->msg_queue++;
++
++ reinit_completion(&idev->msg_complete);
++
++ /*
++ * Send the first start to pass control to the isr
++ */
+ ctrl = readb(idev->base + CORE_I2C_CTRL);
+ ctrl |= CTRL_STA;
+ writeb(ctrl, idev->base + CORE_I2C_CTRL);
+@@ -335,20 +413,8 @@ static int mchp_corei2c_xfer_msg(struct mchp_corei2c_dev *idev,
+ if (!time_left)
+ return -ETIMEDOUT;
+
+- return idev->msg_err;
+-}
+-
+-static int mchp_corei2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
+- int num)
+-{
+- struct mchp_corei2c_dev *idev = i2c_get_adapdata(adap);
+- int i, ret;
+-
+- for (i = 0; i < num; i++) {
+- ret = mchp_corei2c_xfer_msg(idev, msgs++);
+- if (ret)
+- return ret;
+- }
++ if (idev->msg_err)
++ return idev->msg_err;
+
+ return num;
+ }
+diff --git a/drivers/media/dvb-frontends/dib3000mb.c b/drivers/media/dvb-frontends/dib3000mb.c
+index c598b2a6332565..7c452ddd9e40fa 100644
+--- a/drivers/media/dvb-frontends/dib3000mb.c
++++ b/drivers/media/dvb-frontends/dib3000mb.c
+@@ -51,7 +51,7 @@ MODULE_PARM_DESC(debug, "set debugging level (1=info,2=xfer,4=setfe,8=getfe (|-a
+ static int dib3000_read_reg(struct dib3000_state *state, u16 reg)
+ {
+ u8 wb[] = { ((reg >> 8) | 0x80) & 0xff, reg & 0xff };
+- u8 rb[2];
++ u8 rb[2] = {};
+ struct i2c_msg msg[] = {
+ { .addr = state->config.demod_address, .flags = 0, .buf = wb, .len = 2 },
+ { .addr = state->config.demod_address, .flags = I2C_M_RD, .buf = rb, .len = 2 },
+diff --git a/drivers/mtd/nand/raw/arasan-nand-controller.c b/drivers/mtd/nand/raw/arasan-nand-controller.c
+index 5436ec4a8fde42..a52a9f5a75e021 100644
+--- a/drivers/mtd/nand/raw/arasan-nand-controller.c
++++ b/drivers/mtd/nand/raw/arasan-nand-controller.c
+@@ -1409,8 +1409,8 @@ static int anfc_parse_cs(struct arasan_nfc *nfc)
+ * case, the "not" chosen CS is assigned to nfc->spare_cs and selected
+ * whenever a GPIO CS must be asserted.
+ */
+- if (nfc->cs_array && nfc->ncs > 2) {
+- if (!nfc->cs_array[0] && !nfc->cs_array[1]) {
++ if (nfc->cs_array) {
++ if (nfc->ncs > 2 && !nfc->cs_array[0] && !nfc->cs_array[1]) {
+ dev_err(nfc->dev,
+ "Assign a single native CS when using GPIOs\n");
+ return -EINVAL;
+@@ -1478,8 +1478,15 @@ static int anfc_probe(struct platform_device *pdev)
+
+ static void anfc_remove(struct platform_device *pdev)
+ {
++ int i;
+ struct arasan_nfc *nfc = platform_get_drvdata(pdev);
+
++ for (i = 0; i < nfc->ncs; i++) {
++ if (nfc->cs_array[i]) {
++ gpiod_put(nfc->cs_array[i]);
++ }
++ }
++
+ anfc_chips_cleanup(nfc);
+ }
+
+diff --git a/drivers/mtd/nand/raw/atmel/pmecc.c b/drivers/mtd/nand/raw/atmel/pmecc.c
+index a22aab4ed4e8ab..3c7dee1be21df1 100644
+--- a/drivers/mtd/nand/raw/atmel/pmecc.c
++++ b/drivers/mtd/nand/raw/atmel/pmecc.c
+@@ -380,10 +380,8 @@ atmel_pmecc_create_user(struct atmel_pmecc *pmecc,
+ user->delta = user->dmu + req->ecc.strength + 1;
+
+ gf_tables = atmel_pmecc_get_gf_tables(req);
+- if (IS_ERR(gf_tables)) {
+- kfree(user);
++ if (IS_ERR(gf_tables))
+ return ERR_CAST(gf_tables);
+- }
+
+ user->gf_tables = gf_tables;
+
+diff --git a/drivers/mtd/nand/raw/diskonchip.c b/drivers/mtd/nand/raw/diskonchip.c
+index 8db7fc42457111..70d6c2250f32c8 100644
+--- a/drivers/mtd/nand/raw/diskonchip.c
++++ b/drivers/mtd/nand/raw/diskonchip.c
+@@ -1098,7 +1098,7 @@ static inline int __init inftl_partscan(struct mtd_info *mtd, struct mtd_partiti
+ (i == 0) && (ip->firstUnit > 0)) {
+ parts[0].name = " DiskOnChip IPL / Media Header partition";
+ parts[0].offset = 0;
+- parts[0].size = mtd->erasesize * ip->firstUnit;
++ parts[0].size = (uint64_t)mtd->erasesize * ip->firstUnit;
+ numparts = 1;
+ }
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+index e95ffe3035473c..c70da7281551a2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-trans.h
+@@ -1074,12 +1074,13 @@ int iwl_trans_read_config32(struct iwl_trans *trans, u32 ofs,
+ void iwl_trans_debugfs_cleanup(struct iwl_trans *trans);
+ #endif
+
+-#define iwl_trans_read_mem_bytes(trans, addr, buf, bufsize) \
+- do { \
+- if (__builtin_constant_p(bufsize)) \
+- BUILD_BUG_ON((bufsize) % sizeof(u32)); \
+- iwl_trans_read_mem(trans, addr, buf, (bufsize) / sizeof(u32));\
+- } while (0)
++#define iwl_trans_read_mem_bytes(trans, addr, buf, bufsize) \
++ ({ \
++ if (__builtin_constant_p(bufsize)) \
++ BUILD_BUG_ON((bufsize) % sizeof(u32)); \
++ iwl_trans_read_mem(trans, addr, buf, \
++ (bufsize) / sizeof(u32)); \
++ })
+
+ int iwl_trans_write_imr_mem(struct iwl_trans *trans, u32 dst_addr,
+ u64 src_addr, u32 byte_cnt);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index 244ca8cab9d1a2..1a814eb6743e80 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -3032,13 +3032,18 @@ static bool iwl_mvm_rt_status(struct iwl_trans *trans, u32 base, u32 *err_id)
+ /* cf. struct iwl_error_event_table */
+ u32 valid;
+ __le32 err_id;
+- } err_info;
++ } err_info = {};
++ int ret;
+
+ if (!base)
+ return false;
+
+- iwl_trans_read_mem_bytes(trans, base,
+- &err_info, sizeof(err_info));
++ ret = iwl_trans_read_mem_bytes(trans, base,
++ &err_info, sizeof(err_info));
++
++ if (ret)
++ return true;
++
+ if (err_info.valid && err_id)
+ *err_id = le32_to_cpu(err_info.err_id);
+
+@@ -3635,22 +3640,31 @@ int iwl_mvm_fast_resume(struct iwl_mvm *mvm)
+ iwl_fw_dbg_read_d3_debug_data(&mvm->fwrt);
+
+ if (iwl_mvm_check_rt_status(mvm, NULL)) {
++ IWL_ERR(mvm,
++ "iwl_mvm_check_rt_status failed, device is gone during suspend\n");
+ set_bit(STATUS_FW_ERROR, &mvm->trans->status);
+ iwl_mvm_dump_nic_error_log(mvm);
+ iwl_dbg_tlv_time_point(&mvm->fwrt,
+ IWL_FW_INI_TIME_POINT_FW_ASSERT, NULL);
+ iwl_fw_dbg_collect_desc(&mvm->fwrt, &iwl_dump_desc_assert,
+ false, 0);
+- return -ENODEV;
++ mvm->trans->state = IWL_TRANS_NO_FW;
++ ret = -ENODEV;
++
++ goto out;
+ }
+ ret = iwl_mvm_d3_notif_wait(mvm, &d3_data);
++
++ if (ret) {
++ IWL_ERR(mvm, "Couldn't get the d3 notif %d\n", ret);
++ mvm->trans->state = IWL_TRANS_NO_FW;
++ }
++
++out:
+ clear_bit(IWL_MVM_STATUS_IN_D3, &mvm->status);
+ mvm->trans->system_pm_mode = IWL_PLAT_PM_MODE_DISABLED;
+ mvm->fast_resume = false;
+
+- if (ret)
+- IWL_ERR(mvm, "Couldn't get the d3 notif %d\n", ret);
+-
+ return ret;
+ }
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+index 3b9943eb69341e..d19b3bd0866bda 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/trans.c
+@@ -1643,6 +1643,8 @@ int iwl_trans_pcie_d3_resume(struct iwl_trans *trans,
+ out:
+ if (*status == IWL_D3_STATUS_ALIVE)
+ ret = iwl_pcie_d3_handshake(trans, false);
++ else
++ trans->state = IWL_TRANS_NO_FW;
+
+ return ret;
+ }
+diff --git a/drivers/pci/msi/irqdomain.c b/drivers/pci/msi/irqdomain.c
+index 569125726b3e19..d7ba8795d60f81 100644
+--- a/drivers/pci/msi/irqdomain.c
++++ b/drivers/pci/msi/irqdomain.c
+@@ -350,8 +350,11 @@ bool pci_msi_domain_supports(struct pci_dev *pdev, unsigned int feature_mask,
+
+ domain = dev_get_msi_domain(&pdev->dev);
+
+- if (!domain || !irq_domain_is_hierarchy(domain))
+- return mode == ALLOW_LEGACY;
++ if (!domain || !irq_domain_is_hierarchy(domain)) {
++ if (IS_ENABLED(CONFIG_PCI_MSI_ARCH_FALLBACKS))
++ return mode == ALLOW_LEGACY;
++ return false;
++ }
+
+ if (!irq_domain_is_msi_parent(domain)) {
+ /*
+diff --git a/drivers/pci/msi/msi.c b/drivers/pci/msi/msi.c
+index 3a45879d85db96..2f647cac4cae34 100644
+--- a/drivers/pci/msi/msi.c
++++ b/drivers/pci/msi/msi.c
+@@ -433,6 +433,10 @@ int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec,
+ if (WARN_ON_ONCE(dev->msi_enabled))
+ return -EINVAL;
+
++ /* Test for the availability of MSI support */
++ if (!pci_msi_domain_supports(dev, 0, ALLOW_LEGACY))
++ return -ENOTSUPP;
++
+ nvec = pci_msi_vec_count(dev);
+ if (nvec < 0)
+ return nvec;
+diff --git a/drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c b/drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c
+index 950b7ae1d1a838..dc452610934add 100644
+--- a/drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c
++++ b/drivers/phy/broadcom/phy-brcm-usb-init-synopsys.c
+@@ -325,6 +325,12 @@ static void usb_init_common_7216(struct brcm_usb_init_params *params)
+ void __iomem *ctrl = params->regs[BRCM_REGS_CTRL];
+
+ USB_CTRL_UNSET(ctrl, USB_PM, XHC_S2_CLK_SWITCH_EN);
++
++ /*
++ * The PHY might be in a bad state if it is already powered
++ * up. Toggle the power just in case.
++ */
++ USB_CTRL_SET(ctrl, USB_PM, USB_PWRDN);
+ USB_CTRL_UNSET(ctrl, USB_PM, USB_PWRDN);
+
+ /* 1 millisecond - for USB clocks to settle down */
+diff --git a/drivers/phy/phy-core.c b/drivers/phy/phy-core.c
+index f053b525ccffab..413f76e2d1744d 100644
+--- a/drivers/phy/phy-core.c
++++ b/drivers/phy/phy-core.c
+@@ -145,8 +145,10 @@ static struct phy_provider *of_phy_provider_lookup(struct device_node *node)
+ return phy_provider;
+
+ for_each_child_of_node(phy_provider->children, child)
+- if (child == node)
++ if (child == node) {
++ of_node_put(child);
+ return phy_provider;
++ }
+ }
+
+ return ERR_PTR(-EPROBE_DEFER);
+@@ -629,8 +631,10 @@ static struct phy *_of_phy_get(struct device_node *np, int index)
+ return ERR_PTR(-ENODEV);
+
+ /* This phy type handled by the usb-phy subsystem for now */
+- if (of_device_is_compatible(args.np, "usb-nop-xceiv"))
+- return ERR_PTR(-ENODEV);
++ if (of_device_is_compatible(args.np, "usb-nop-xceiv")) {
++ phy = ERR_PTR(-ENODEV);
++ goto out_put_node;
++ }
+
+ mutex_lock(&phy_provider_mutex);
+ phy_provider = of_phy_provider_lookup(args.np);
+@@ -652,6 +656,7 @@ static struct phy *_of_phy_get(struct device_node *np, int index)
+
+ out_unlock:
+ mutex_unlock(&phy_provider_mutex);
++out_put_node:
+ of_node_put(args.np);
+
+ return phy;
+@@ -737,7 +742,7 @@ void devm_phy_put(struct device *dev, struct phy *phy)
+ if (!phy)
+ return;
+
+- r = devres_destroy(dev, devm_phy_release, devm_phy_match, phy);
++ r = devres_release(dev, devm_phy_release, devm_phy_match, phy);
+ dev_WARN_ONCE(dev, r, "couldn't find PHY resource\n");
+ }
+ EXPORT_SYMBOL_GPL(devm_phy_put);
+@@ -1121,7 +1126,7 @@ void devm_phy_destroy(struct device *dev, struct phy *phy)
+ {
+ int r;
+
+- r = devres_destroy(dev, devm_phy_consume, devm_phy_match, phy);
++ r = devres_release(dev, devm_phy_consume, devm_phy_match, phy);
+ dev_WARN_ONCE(dev, r, "couldn't find PHY resource\n");
+ }
+ EXPORT_SYMBOL_GPL(devm_phy_destroy);
+@@ -1259,12 +1264,12 @@ EXPORT_SYMBOL_GPL(of_phy_provider_unregister);
+ * of_phy_provider_unregister to unregister the phy provider.
+ */
+ void devm_of_phy_provider_unregister(struct device *dev,
+- struct phy_provider *phy_provider)
++ struct phy_provider *phy_provider)
+ {
+ int r;
+
+- r = devres_destroy(dev, devm_phy_provider_release, devm_phy_match,
+- phy_provider);
++ r = devres_release(dev, devm_phy_provider_release, devm_phy_match,
++ phy_provider);
+ dev_WARN_ONCE(dev, r, "couldn't find PHY provider device resource\n");
+ }
+ EXPORT_SYMBOL_GPL(devm_of_phy_provider_unregister);
+diff --git a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+index 1246d3bc8b92f8..8e2cd2c178d6b2 100644
+--- a/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
++++ b/drivers/phy/qualcomm/phy-qcom-qmp-usb.c
+@@ -1008,7 +1008,7 @@ static const struct qmp_phy_init_tbl sc8280xp_usb3_uniphy_rx_tbl[] = {
+ QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_FO_GAIN, 0x2f),
+ QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_COUNT_LOW, 0xff),
+ QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FASTLOCK_COUNT_HIGH, 0x0f),
+- QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_SO_GAIN, 0x0a),
++ QMP_PHY_INIT_CFG(QSERDES_V5_RX_UCDR_FO_GAIN, 0x0a),
+ QMP_PHY_INIT_CFG(QSERDES_V5_RX_VGA_CAL_CNTRL1, 0x54),
+ QMP_PHY_INIT_CFG(QSERDES_V5_RX_VGA_CAL_CNTRL2, 0x0f),
+ QMP_PHY_INIT_CFG(QSERDES_V5_RX_RX_EQU_ADAPTOR_CNTRL2, 0x0f),
+diff --git a/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c b/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
+index 0a9989e41237f1..2eb3329ca23f67 100644
+--- a/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
++++ b/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
+@@ -309,7 +309,7 @@ static int rockchip_combphy_parse_dt(struct device *dev, struct rockchip_combphy
+
+ priv->ext_refclk = device_property_present(dev, "rockchip,ext-refclk");
+
+- priv->phy_rst = devm_reset_control_array_get_exclusive(dev);
++ priv->phy_rst = devm_reset_control_get(dev, "phy");
+ if (IS_ERR(priv->phy_rst))
+ return dev_err_probe(dev, PTR_ERR(priv->phy_rst), "failed to get phy reset\n");
+
+diff --git a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+index 9f084697dd05ce..69c3ec0938f74f 100644
+--- a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
++++ b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+@@ -1116,6 +1116,8 @@ static int rk_hdptx_phy_probe(struct platform_device *pdev)
+ return dev_err_probe(dev, PTR_ERR(hdptx->grf),
+ "Could not get GRF syscon\n");
+
++ platform_set_drvdata(pdev, hdptx);
++
+ ret = devm_pm_runtime_enable(dev);
+ if (ret)
+ return dev_err_probe(dev, ret, "Failed to enable runtime PM\n");
+@@ -1125,7 +1127,6 @@ static int rk_hdptx_phy_probe(struct platform_device *pdev)
+ return dev_err_probe(dev, PTR_ERR(hdptx->phy),
+ "Failed to create HDMI PHY\n");
+
+- platform_set_drvdata(pdev, hdptx);
+ phy_set_drvdata(hdptx->phy, hdptx);
+ phy_set_bus_width(hdptx->phy, 8);
+
+diff --git a/drivers/platform/chrome/cros_ec_lpc.c b/drivers/platform/chrome/cros_ec_lpc.c
+index c784119ab5dc0c..626e2635e3da70 100644
+--- a/drivers/platform/chrome/cros_ec_lpc.c
++++ b/drivers/platform/chrome/cros_ec_lpc.c
+@@ -707,7 +707,7 @@ static const struct dmi_system_id cros_ec_lpc_dmi_table[] __initconst = {
+ /* Framework Laptop (12th Gen Intel Core) */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Framework"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "12th Gen Intel Core"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Laptop (12th Gen Intel Core)"),
+ },
+ .driver_data = (void *)&framework_laptop_mec_lpc_driver_data,
+ },
+@@ -715,7 +715,7 @@ static const struct dmi_system_id cros_ec_lpc_dmi_table[] __initconst = {
+ /* Framework Laptop (13th Gen Intel Core) */
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Framework"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "13th Gen Intel Core"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "Laptop (13th Gen Intel Core)"),
+ },
+ .driver_data = (void *)&framework_laptop_mec_lpc_driver_data,
+ },
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index ef04d396f61c77..a5933980ade3d6 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -623,6 +623,7 @@ static const struct key_entry asus_nb_wmi_keymap[] = {
+ { KE_KEY, 0xC4, { KEY_KBDILLUMUP } },
+ { KE_KEY, 0xC5, { KEY_KBDILLUMDOWN } },
+ { KE_IGNORE, 0xC6, }, /* Ambient Light Sensor notification */
++ { KE_IGNORE, 0xCF, }, /* AC mode */
+ { KE_KEY, 0xFA, { KEY_PROG2 } }, /* Lid flip action */
+ { KE_KEY, 0xBD, { KEY_PROG2 } }, /* Lid flip action on ROG xflow laptops */
+ { KE_END, 0},
+diff --git a/drivers/power/supply/bq24190_charger.c b/drivers/power/supply/bq24190_charger.c
+index 2b393eb5c2820e..c47f32f152e602 100644
+--- a/drivers/power/supply/bq24190_charger.c
++++ b/drivers/power/supply/bq24190_charger.c
+@@ -567,6 +567,7 @@ static int bq24190_set_otg_vbus(struct bq24190_dev_info *bdi, bool enable)
+
+ static int bq24296_set_otg_vbus(struct bq24190_dev_info *bdi, bool enable)
+ {
++ union power_supply_propval val = { .intval = bdi->charge_type };
+ int ret;
+
+ ret = pm_runtime_resume_and_get(bdi->dev);
+@@ -587,13 +588,18 @@ static int bq24296_set_otg_vbus(struct bq24190_dev_info *bdi, bool enable)
+
+ ret = bq24190_write_mask(bdi, BQ24190_REG_POC,
+ BQ24296_REG_POC_OTG_CONFIG_MASK,
+- BQ24296_REG_POC_CHG_CONFIG_SHIFT,
++ BQ24296_REG_POC_OTG_CONFIG_SHIFT,
+ BQ24296_REG_POC_OTG_CONFIG_OTG);
+- } else
++ } else {
+ ret = bq24190_write_mask(bdi, BQ24190_REG_POC,
+ BQ24296_REG_POC_OTG_CONFIG_MASK,
+- BQ24296_REG_POC_CHG_CONFIG_SHIFT,
++ BQ24296_REG_POC_OTG_CONFIG_SHIFT,
+ BQ24296_REG_POC_OTG_CONFIG_DISABLE);
++ if (ret < 0)
++ goto out;
++
++ ret = bq24190_charger_set_charge_type(bdi, &val);
++ }
+
+ out:
+ pm_runtime_mark_last_busy(bdi->dev);
+diff --git a/drivers/power/supply/cros_charge-control.c b/drivers/power/supply/cros_charge-control.c
+index 17c53591ce197d..9b0a7500296b4d 100644
+--- a/drivers/power/supply/cros_charge-control.c
++++ b/drivers/power/supply/cros_charge-control.c
+@@ -7,8 +7,10 @@
+ #include <acpi/battery.h>
+ #include <linux/container_of.h>
+ #include <linux/dmi.h>
++#include <linux/lockdep.h>
+ #include <linux/mod_devicetable.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/platform_data/cros_ec_commands.h>
+ #include <linux/platform_data/cros_ec_proto.h>
+ #include <linux/platform_device.h>
+@@ -49,6 +51,7 @@ struct cros_chctl_priv {
+ struct attribute *attributes[_CROS_CHCTL_ATTR_COUNT];
+ struct attribute_group group;
+
++ struct mutex lock; /* protects fields below and cros_ec */
+ enum power_supply_charge_behaviour current_behaviour;
+ u8 current_start_threshold, current_end_threshold;
+ };
+@@ -85,6 +88,8 @@ static int cros_chctl_configure_ec(struct cros_chctl_priv *priv)
+ {
+ struct ec_params_charge_control req = {};
+
++ lockdep_assert_held(&priv->lock);
++
+ req.cmd = EC_CHARGE_CONTROL_CMD_SET;
+
+ switch (priv->current_behaviour) {
+@@ -134,11 +139,15 @@ static ssize_t cros_chctl_store_threshold(struct device *dev, struct cros_chctl_
+ return -EINVAL;
+
+ if (is_end_threshold) {
+- if (val <= priv->current_start_threshold)
++ /* Start threshold is not exposed, use fixed value */
++ if (priv->cmd_version == 2)
++ priv->current_start_threshold = val == 100 ? 0 : val;
++
++ if (val < priv->current_start_threshold)
+ return -EINVAL;
+ priv->current_end_threshold = val;
+ } else {
+- if (val >= priv->current_end_threshold)
++ if (val > priv->current_end_threshold)
+ return -EINVAL;
+ priv->current_start_threshold = val;
+ }
+@@ -159,6 +168,7 @@ static ssize_t charge_control_start_threshold_show(struct device *dev,
+ struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
+ CROS_CHCTL_ATTR_START_THRESHOLD);
+
++ guard(mutex)(&priv->lock);
+ return sysfs_emit(buf, "%u\n", (unsigned int)priv->current_start_threshold);
+ }
+
+@@ -169,6 +179,7 @@ static ssize_t charge_control_start_threshold_store(struct device *dev,
+ struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
+ CROS_CHCTL_ATTR_START_THRESHOLD);
+
++ guard(mutex)(&priv->lock);
+ return cros_chctl_store_threshold(dev, priv, 0, buf, count);
+ }
+
+@@ -178,6 +189,7 @@ static ssize_t charge_control_end_threshold_show(struct device *dev, struct devi
+ struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
+ CROS_CHCTL_ATTR_END_THRESHOLD);
+
++ guard(mutex)(&priv->lock);
+ return sysfs_emit(buf, "%u\n", (unsigned int)priv->current_end_threshold);
+ }
+
+@@ -187,6 +199,7 @@ static ssize_t charge_control_end_threshold_store(struct device *dev, struct dev
+ struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
+ CROS_CHCTL_ATTR_END_THRESHOLD);
+
++ guard(mutex)(&priv->lock);
+ return cros_chctl_store_threshold(dev, priv, 1, buf, count);
+ }
+
+@@ -195,6 +208,7 @@ static ssize_t charge_behaviour_show(struct device *dev, struct device_attribute
+ struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(&attr->attr,
+ CROS_CHCTL_ATTR_CHARGE_BEHAVIOUR);
+
++ guard(mutex)(&priv->lock);
+ return power_supply_charge_behaviour_show(dev, EC_CHARGE_CONTROL_BEHAVIOURS,
+ priv->current_behaviour, buf);
+ }
+@@ -210,6 +224,7 @@ static ssize_t charge_behaviour_store(struct device *dev, struct device_attribut
+ if (ret < 0)
+ return ret;
+
++ guard(mutex)(&priv->lock);
+ priv->current_behaviour = ret;
+
+ ret = cros_chctl_configure_ec(priv);
+@@ -223,12 +238,10 @@ static umode_t cros_chtl_attr_is_visible(struct kobject *kobj, struct attribute
+ {
+ struct cros_chctl_priv *priv = cros_chctl_attr_to_priv(attr, n);
+
+- if (priv->cmd_version < 2) {
+- if (n == CROS_CHCTL_ATTR_START_THRESHOLD)
+- return 0;
+- if (n == CROS_CHCTL_ATTR_END_THRESHOLD)
+- return 0;
+- }
++ if (n == CROS_CHCTL_ATTR_START_THRESHOLD && priv->cmd_version < 3)
++ return 0;
++ else if (n == CROS_CHCTL_ATTR_END_THRESHOLD && priv->cmd_version < 2)
++ return 0;
+
+ return attr->mode;
+ }
+@@ -290,6 +303,10 @@ static int cros_chctl_probe(struct platform_device *pdev)
+ if (!priv)
+ return -ENOMEM;
+
++ ret = devm_mutex_init(dev, &priv->lock);
++ if (ret)
++ return ret;
++
+ ret = cros_ec_get_cmd_versions(cros_ec, EC_CMD_CHARGE_CONTROL);
+ if (ret < 0)
+ return ret;
+@@ -327,7 +344,8 @@ static int cros_chctl_probe(struct platform_device *pdev)
+ priv->current_end_threshold = 100;
+
+ /* Bring EC into well-known state */
+- ret = cros_chctl_configure_ec(priv);
++ scoped_guard(mutex, &priv->lock)
++ ret = cros_chctl_configure_ec(priv);
+ if (ret < 0)
+ return ret;
+
+diff --git a/drivers/power/supply/gpio-charger.c b/drivers/power/supply/gpio-charger.c
+index 68212b39785bea..6139f736ecbe4f 100644
+--- a/drivers/power/supply/gpio-charger.c
++++ b/drivers/power/supply/gpio-charger.c
+@@ -67,6 +67,14 @@ static int set_charge_current_limit(struct gpio_charger *gpio_charger, int val)
+ if (gpio_charger->current_limit_map[i].limit_ua <= val)
+ break;
+ }
++
++ /*
++ * If a valid charge current limit isn't found, default to the smallest
++ * current limit for safety reasons.
++ */
++ if (i >= gpio_charger->current_limit_map_size)
++ i = gpio_charger->current_limit_map_size - 1;
++
+ mapping = gpio_charger->current_limit_map[i];
+
+ for (i = 0; i < ndescs; i++) {
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 8e75e2e279a40a..50f1dcb6d58460 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -8907,8 +8907,11 @@ megasas_aen_polling(struct work_struct *work)
+ (ld_target_id / MEGASAS_MAX_DEV_PER_CHANNEL),
+ (ld_target_id % MEGASAS_MAX_DEV_PER_CHANNEL),
+ 0);
+- if (sdev1)
++ if (sdev1) {
++ mutex_unlock(&instance->reset_mutex);
+ megasas_remove_scsi_device(sdev1);
++ mutex_lock(&instance->reset_mutex);
++ }
+
+ event_type = SCAN_VD_CHANNEL;
+ break;
+diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
+index 81bb408ce56d8f..1e715fd65a7d4b 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr.h
++++ b/drivers/scsi/mpi3mr/mpi3mr.h
+@@ -134,8 +134,6 @@ extern atomic64_t event_counter;
+
+ #define MPI3MR_WATCHDOG_INTERVAL 1000 /* in milli seconds */
+
+-#define MPI3MR_DEFAULT_CFG_PAGE_SZ 1024 /* in bytes */
+-
+ #define MPI3MR_RESET_TOPOLOGY_SETTLE_TIME 10
+
+ #define MPI3MR_SCMD_TIMEOUT (60 * HZ)
+@@ -1133,9 +1131,6 @@ struct scmd_priv {
+ * @io_throttle_low: I/O size to stop throttle in 512b blocks
+ * @num_io_throttle_group: Maximum number of throttle groups
+ * @throttle_groups: Pointer to throttle group info structures
+- * @cfg_page: Default memory for configuration pages
+- * @cfg_page_dma: Configuration page DMA address
+- * @cfg_page_sz: Default configuration page memory size
+ * @sas_transport_enabled: SAS transport enabled or not
+ * @scsi_device_channel: Channel ID for SCSI devices
+ * @transport_cmds: Command tracker for SAS transport commands
+@@ -1332,10 +1327,6 @@ struct mpi3mr_ioc {
+ u16 num_io_throttle_group;
+ struct mpi3mr_throttle_group_info *throttle_groups;
+
+- void *cfg_page;
+- dma_addr_t cfg_page_dma;
+- u16 cfg_page_sz;
+-
+ u8 sas_transport_enabled;
+ u8 scsi_device_channel;
+ struct mpi3mr_drv_cmd transport_cmds;
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c
+index 01f035f9330e4b..10b8e4dc64f8b0 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_app.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_app.c
+@@ -2329,6 +2329,15 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ if (!mrioc)
+ return -ENODEV;
+
++ if (mutex_lock_interruptible(&mrioc->bsg_cmds.mutex))
++ return -ERESTARTSYS;
++
++ if (mrioc->bsg_cmds.state & MPI3MR_CMD_PENDING) {
++ dprint_bsg_err(mrioc, "%s: command is in use\n", __func__);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
++ return -EAGAIN;
++ }
++
+ if (!mrioc->ioctl_sges_allocated) {
+ dprint_bsg_err(mrioc, "%s: DMA memory was not allocated\n",
+ __func__);
+@@ -2339,13 +2348,16 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ karg->timeout = MPI3MR_APP_DEFAULT_TIMEOUT;
+
+ mpi_req = kzalloc(MPI3MR_ADMIN_REQ_FRAME_SZ, GFP_KERNEL);
+- if (!mpi_req)
++ if (!mpi_req) {
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ return -ENOMEM;
++ }
+ mpi_header = (struct mpi3_request_header *)mpi_req;
+
+ bufcnt = karg->buf_entry_list.num_of_entries;
+ drv_bufs = kzalloc((sizeof(*drv_bufs) * bufcnt), GFP_KERNEL);
+ if (!drv_bufs) {
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -ENOMEM;
+ goto out;
+ }
+@@ -2353,6 +2365,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ dout_buf = kzalloc(job->request_payload.payload_len,
+ GFP_KERNEL);
+ if (!dout_buf) {
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -ENOMEM;
+ goto out;
+ }
+@@ -2360,6 +2373,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ din_buf = kzalloc(job->reply_payload.payload_len,
+ GFP_KERNEL);
+ if (!din_buf) {
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -ENOMEM;
+ goto out;
+ }
+@@ -2435,6 +2449,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ (mpi_msg_size > MPI3MR_ADMIN_REQ_FRAME_SZ)) {
+ dprint_bsg_err(mrioc, "%s: invalid MPI message size\n",
+ __func__);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2447,6 +2462,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ if (invalid_be) {
+ dprint_bsg_err(mrioc, "%s: invalid buffer entries passed\n",
+ __func__);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2454,12 +2470,14 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ if (sgl_dout_iter > (dout_buf + job->request_payload.payload_len)) {
+ dprint_bsg_err(mrioc, "%s: data_out buffer length mismatch\n",
+ __func__);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+ if (sgl_din_iter > (din_buf + job->reply_payload.payload_len)) {
+ dprint_bsg_err(mrioc, "%s: data_in buffer length mismatch\n",
+ __func__);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2472,6 +2490,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ dprint_bsg_err(mrioc, "%s:%d: invalid data transfer size passed for function 0x%x din_size = %d, dout_size = %d\n",
+ __func__, __LINE__, mpi_header->function, din_size,
+ dout_size);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2480,6 +2499,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ dprint_bsg_err(mrioc,
+ "%s:%d: invalid data transfer size passed for function 0x%x din_size=%d\n",
+ __func__, __LINE__, mpi_header->function, din_size);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2487,6 +2507,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ dprint_bsg_err(mrioc,
+ "%s:%d: invalid data transfer size passed for function 0x%x dout_size = %d\n",
+ __func__, __LINE__, mpi_header->function, dout_size);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2497,6 +2518,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ dprint_bsg_err(mrioc, "%s:%d: invalid message size passed:%d:%d:%d:%d\n",
+ __func__, __LINE__, din_cnt, dout_cnt, din_size,
+ dout_size);
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ rval = -EINVAL;
+ goto out;
+ }
+@@ -2544,6 +2566,7 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ continue;
+ if (mpi3mr_map_data_buffer_dma(mrioc, drv_buf_iter, desc_count)) {
+ rval = -ENOMEM;
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ dprint_bsg_err(mrioc, "%s:%d: mapping data buffers failed\n",
+ __func__, __LINE__);
+ goto out;
+@@ -2556,20 +2579,11 @@ static long mpi3mr_bsg_process_mpt_cmds(struct bsg_job *job)
+ sense_buff_k = kzalloc(erbsz, GFP_KERNEL);
+ if (!sense_buff_k) {
+ rval = -ENOMEM;
++ mutex_unlock(&mrioc->bsg_cmds.mutex);
+ goto out;
+ }
+ }
+
+- if (mutex_lock_interruptible(&mrioc->bsg_cmds.mutex)) {
+- rval = -ERESTARTSYS;
+- goto out;
+- }
+- if (mrioc->bsg_cmds.state & MPI3MR_CMD_PENDING) {
+- rval = -EAGAIN;
+- dprint_bsg_err(mrioc, "%s: command is in use\n", __func__);
+- mutex_unlock(&mrioc->bsg_cmds.mutex);
+- goto out;
+- }
+ if (mrioc->unrecoverable) {
+ dprint_bsg_err(mrioc, "%s: unrecoverable controller\n",
+ __func__);
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+index f1ab76351bd81e..5ed31fe57474a3 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+@@ -1035,6 +1035,36 @@ static const char *mpi3mr_reset_type_name(u16 reset_type)
+ return name;
+ }
+
++/**
++ * mpi3mr_is_fault_recoverable - Read fault code and decide
++ * whether the controller can be recovered
++ * @mrioc: Adapter instance reference
++ * Return: true if fault is recoverable, false otherwise.
++ */
++static inline bool mpi3mr_is_fault_recoverable(struct mpi3mr_ioc *mrioc)
++{
++ u32 fault;
++
++ fault = (readl(&mrioc->sysif_regs->fault) &
++ MPI3_SYSIF_FAULT_CODE_MASK);
++
++ switch (fault) {
++ case MPI3_SYSIF_FAULT_CODE_COMPLETE_RESET_NEEDED:
++ case MPI3_SYSIF_FAULT_CODE_POWER_CYCLE_REQUIRED:
++ ioc_warn(mrioc,
++ "controller requires system power cycle, marking controller as unrecoverable\n");
++ return false;
++ case MPI3_SYSIF_FAULT_CODE_INSUFFICIENT_PCI_SLOT_POWER:
++ ioc_warn(mrioc,
++ "controller faulted due to insufficient power,\n"
++ " try by connecting it to a different slot\n");
++ return false;
++ default:
++ break;
++ }
++ return true;
++}
++
+ /**
+ * mpi3mr_print_fault_info - Display fault information
+ * @mrioc: Adapter instance reference
+@@ -1373,6 +1403,11 @@ static int mpi3mr_bring_ioc_ready(struct mpi3mr_ioc *mrioc)
+ ioc_info(mrioc, "ioc_status(0x%08x), ioc_config(0x%08x), ioc_info(0x%016llx) at the bringup\n",
+ ioc_status, ioc_config, base_info);
+
++ if (!mpi3mr_is_fault_recoverable(mrioc)) {
++ mrioc->unrecoverable = 1;
++ goto out_device_not_present;
++ }
++
+ /*The timeout value is in 2sec unit, changing it to seconds*/
+ mrioc->ready_timeout =
+ ((base_info & MPI3_SYSIF_IOC_INFO_LOW_TIMEOUT_MASK) >>
+@@ -2734,6 +2769,11 @@ static void mpi3mr_watchdog_work(struct work_struct *work)
+ mpi3mr_print_fault_info(mrioc);
+ mrioc->diagsave_timeout = 0;
+
++ if (!mpi3mr_is_fault_recoverable(mrioc)) {
++ mrioc->unrecoverable = 1;
++ goto schedule_work;
++ }
++
+ switch (trigger_data.fault) {
+ case MPI3_SYSIF_FAULT_CODE_COMPLETE_RESET_NEEDED:
+ case MPI3_SYSIF_FAULT_CODE_POWER_CYCLE_REQUIRED:
+@@ -4186,17 +4226,6 @@ int mpi3mr_init_ioc(struct mpi3mr_ioc *mrioc)
+ mpi3mr_read_tsu_interval(mrioc);
+ mpi3mr_print_ioc_info(mrioc);
+
+- if (!mrioc->cfg_page) {
+- dprint_init(mrioc, "allocating config page buffers\n");
+- mrioc->cfg_page_sz = MPI3MR_DEFAULT_CFG_PAGE_SZ;
+- mrioc->cfg_page = dma_alloc_coherent(&mrioc->pdev->dev,
+- mrioc->cfg_page_sz, &mrioc->cfg_page_dma, GFP_KERNEL);
+- if (!mrioc->cfg_page) {
+- retval = -1;
+- goto out_failed_noretry;
+- }
+- }
+-
+ dprint_init(mrioc, "allocating host diag buffers\n");
+ mpi3mr_alloc_diag_bufs(mrioc);
+
+@@ -4768,11 +4797,7 @@ void mpi3mr_free_mem(struct mpi3mr_ioc *mrioc)
+ mrioc->admin_req_base, mrioc->admin_req_dma);
+ mrioc->admin_req_base = NULL;
+ }
+- if (mrioc->cfg_page) {
+- dma_free_coherent(&mrioc->pdev->dev, mrioc->cfg_page_sz,
+- mrioc->cfg_page, mrioc->cfg_page_dma);
+- mrioc->cfg_page = NULL;
+- }
++
+ if (mrioc->pel_seqnum_virt) {
+ dma_free_coherent(&mrioc->pdev->dev, mrioc->pel_seqnum_sz,
+ mrioc->pel_seqnum_virt, mrioc->pel_seqnum_dma);
+@@ -5392,55 +5417,6 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,
+ return retval;
+ }
+
+-
+-/**
+- * mpi3mr_free_config_dma_memory - free memory for config page
+- * @mrioc: Adapter instance reference
+- * @mem_desc: memory descriptor structure
+- *
+- * Check whether the size of the buffer specified by the memory
+- * descriptor is greater than the default page size if so then
+- * free the memory pointed by the descriptor.
+- *
+- * Return: Nothing.
+- */
+-static void mpi3mr_free_config_dma_memory(struct mpi3mr_ioc *mrioc,
+- struct dma_memory_desc *mem_desc)
+-{
+- if ((mem_desc->size > mrioc->cfg_page_sz) && mem_desc->addr) {
+- dma_free_coherent(&mrioc->pdev->dev, mem_desc->size,
+- mem_desc->addr, mem_desc->dma_addr);
+- mem_desc->addr = NULL;
+- }
+-}
+-
+-/**
+- * mpi3mr_alloc_config_dma_memory - Alloc memory for config page
+- * @mrioc: Adapter instance reference
+- * @mem_desc: Memory descriptor to hold dma memory info
+- *
+- * This function allocates new dmaable memory or provides the
+- * default config page dmaable memory based on the memory size
+- * described by the descriptor.
+- *
+- * Return: 0 on success, non-zero on failure.
+- */
+-static int mpi3mr_alloc_config_dma_memory(struct mpi3mr_ioc *mrioc,
+- struct dma_memory_desc *mem_desc)
+-{
+- if (mem_desc->size > mrioc->cfg_page_sz) {
+- mem_desc->addr = dma_alloc_coherent(&mrioc->pdev->dev,
+- mem_desc->size, &mem_desc->dma_addr, GFP_KERNEL);
+- if (!mem_desc->addr)
+- return -ENOMEM;
+- } else {
+- mem_desc->addr = mrioc->cfg_page;
+- mem_desc->dma_addr = mrioc->cfg_page_dma;
+- memset(mem_desc->addr, 0, mrioc->cfg_page_sz);
+- }
+- return 0;
+-}
+-
+ /**
+ * mpi3mr_post_cfg_req - Issue config requests and wait
+ * @mrioc: Adapter instance reference
+@@ -5596,8 +5572,12 @@ static int mpi3mr_process_cfg_req(struct mpi3mr_ioc *mrioc,
+ cfg_req->page_length = cfg_hdr->page_length;
+ cfg_req->page_version = cfg_hdr->page_version;
+ }
+- if (mpi3mr_alloc_config_dma_memory(mrioc, &mem_desc))
+- goto out;
++
++ mem_desc.addr = dma_alloc_coherent(&mrioc->pdev->dev,
++ mem_desc.size, &mem_desc.dma_addr, GFP_KERNEL);
++
++ if (!mem_desc.addr)
++ return retval;
+
+ mpi3mr_add_sg_single(&cfg_req->sgl, sgl_flags, mem_desc.size,
+ mem_desc.dma_addr);
+@@ -5626,7 +5606,12 @@ static int mpi3mr_process_cfg_req(struct mpi3mr_ioc *mrioc,
+ }
+
+ out:
+- mpi3mr_free_config_dma_memory(mrioc, &mem_desc);
++ if (mem_desc.addr) {
++ dma_free_coherent(&mrioc->pdev->dev, mem_desc.size,
++ mem_desc.addr, mem_desc.dma_addr);
++ mem_desc.addr = NULL;
++ }
++
+ return retval;
+ }
+
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
+index 5f2f67acf8bf31..1bef88130d0c06 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
+@@ -5215,7 +5215,7 @@ mpi3mr_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ }
+
+ mrioc = shost_priv(shost);
+- retval = ida_alloc_range(&mrioc_ida, 1, U8_MAX, GFP_KERNEL);
++ retval = ida_alloc_range(&mrioc_ida, 0, U8_MAX, GFP_KERNEL);
+ if (retval < 0)
+ goto id_alloc_failed;
+ mrioc->id = (u8)retval;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index ed5046593fdab6..16ac2267c71e19 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -7041,11 +7041,12 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
+ int i;
+ u8 failed;
+ __le32 *mfp;
++ int ret_val;
+
+ /* make sure doorbell is not in use */
+ if ((ioc->base_readl_ext_retry(&ioc->chip->Doorbell) & MPI2_DOORBELL_USED)) {
+ ioc_err(ioc, "doorbell is in use (line=%d)\n", __LINE__);
+- return -EFAULT;
++ goto doorbell_diag_reset;
+ }
+
+ /* clear pending doorbell interrupts from previous state changes */
+@@ -7135,6 +7136,10 @@ _base_handshake_req_reply_wait(struct MPT3SAS_ADAPTER *ioc, int request_bytes,
+ le32_to_cpu(mfp[i]));
+ }
+ return 0;
++
++doorbell_diag_reset:
++ ret_val = _base_diag_reset(ioc);
++ return ret_val;
+ }
+
+ /**
+diff --git a/drivers/scsi/qla1280.h b/drivers/scsi/qla1280.h
+index d309e2ca14deb3..dea2290b37d4d7 100644
+--- a/drivers/scsi/qla1280.h
++++ b/drivers/scsi/qla1280.h
+@@ -116,12 +116,12 @@ struct device_reg {
+ uint16_t id_h; /* ID high */
+ uint16_t cfg_0; /* Configuration 0 */
+ #define ISP_CFG0_HWMSK 0x000f /* Hardware revision mask */
+-#define ISP_CFG0_1020 BIT_0 /* ISP1020 */
+-#define ISP_CFG0_1020A BIT_1 /* ISP1020A */
+-#define ISP_CFG0_1040 BIT_2 /* ISP1040 */
+-#define ISP_CFG0_1040A BIT_3 /* ISP1040A */
+-#define ISP_CFG0_1040B BIT_4 /* ISP1040B */
+-#define ISP_CFG0_1040C BIT_5 /* ISP1040C */
++#define ISP_CFG0_1020 1 /* ISP1020 */
++#define ISP_CFG0_1020A 2 /* ISP1020A */
++#define ISP_CFG0_1040 3 /* ISP1040 */
++#define ISP_CFG0_1040A 4 /* ISP1040A */
++#define ISP_CFG0_1040B 5 /* ISP1040B */
++#define ISP_CFG0_1040C 6 /* ISP1040C */
+ uint16_t cfg_1; /* Configuration 1 */
+ #define ISP_CFG1_F128 BIT_6 /* 128-byte FIFO threshold */
+ #define ISP_CFG1_F64 BIT_4|BIT_5 /* 128-byte FIFO threshold */
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index 7ceb982040a5df..d0b55c1fa908a5 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -149,6 +149,8 @@ struct hv_fc_wwn_packet {
+ */
+ static int vmstor_proto_version;
+
++static bool hv_dev_is_fc(struct hv_device *hv_dev);
++
+ #define STORVSC_LOGGING_NONE 0
+ #define STORVSC_LOGGING_ERROR 1
+ #define STORVSC_LOGGING_WARN 2
+@@ -1138,6 +1140,7 @@ static void storvsc_on_io_completion(struct storvsc_device *stor_device,
+ * not correctly handle:
+ * INQUIRY command with page code parameter set to 0x80
+ * MODE_SENSE command with cmd[2] == 0x1c
++ * MAINTENANCE_IN is not supported by HyperV FC passthrough
+ *
+ * Setup srb and scsi status so this won't be fatal.
+ * We do this so we can distinguish truly fatal failues
+@@ -1145,7 +1148,9 @@ static void storvsc_on_io_completion(struct storvsc_device *stor_device,
+ */
+
+ if ((stor_pkt->vm_srb.cdb[0] == INQUIRY) ||
+- (stor_pkt->vm_srb.cdb[0] == MODE_SENSE)) {
++ (stor_pkt->vm_srb.cdb[0] == MODE_SENSE) ||
++ (stor_pkt->vm_srb.cdb[0] == MAINTENANCE_IN &&
++ hv_dev_is_fc(device))) {
+ vstor_packet->vm_srb.scsi_status = 0;
+ vstor_packet->vm_srb.srb_status = SRB_STATUS_SUCCESS;
+ }
+diff --git a/drivers/spi/spi-intel-pci.c b/drivers/spi/spi-intel-pci.c
+index 4337ca51d7aa21..5c0dec90eec1df 100644
+--- a/drivers/spi/spi-intel-pci.c
++++ b/drivers/spi/spi-intel-pci.c
+@@ -86,6 +86,8 @@ static const struct pci_device_id intel_spi_pci_ids[] = {
+ { PCI_VDEVICE(INTEL, 0xa324), (unsigned long)&cnl_info },
+ { PCI_VDEVICE(INTEL, 0xa3a4), (unsigned long)&cnl_info },
+ { PCI_VDEVICE(INTEL, 0xa823), (unsigned long)&cnl_info },
++ { PCI_VDEVICE(INTEL, 0xe323), (unsigned long)&cnl_info },
++ { PCI_VDEVICE(INTEL, 0xe423), (unsigned long)&cnl_info },
+ { },
+ };
+ MODULE_DEVICE_TABLE(pci, intel_spi_pci_ids);
+diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
+index 2c043817c66a88..4a2f84c4d22e5f 100644
+--- a/drivers/spi/spi-omap2-mcspi.c
++++ b/drivers/spi/spi-omap2-mcspi.c
+@@ -1561,10 +1561,10 @@ static int omap2_mcspi_probe(struct platform_device *pdev)
+ }
+
+ mcspi->ref_clk = devm_clk_get_optional_enabled(&pdev->dev, NULL);
+- if (mcspi->ref_clk)
+- mcspi->ref_clk_hz = clk_get_rate(mcspi->ref_clk);
+- else
++ if (IS_ERR(mcspi->ref_clk))
+ mcspi->ref_clk_hz = OMAP2_MCSPI_MAX_FREQ;
++ else
++ mcspi->ref_clk_hz = clk_get_rate(mcspi->ref_clk);
+ ctlr->max_speed_hz = mcspi->ref_clk_hz;
+ ctlr->min_speed_hz = mcspi->ref_clk_hz >> 15;
+
+diff --git a/drivers/virt/coco/tdx-guest/tdx-guest.c b/drivers/virt/coco/tdx-guest/tdx-guest.c
+index d7db6c824e13de..224e7dde9cdee8 100644
+--- a/drivers/virt/coco/tdx-guest/tdx-guest.c
++++ b/drivers/virt/coco/tdx-guest/tdx-guest.c
+@@ -124,10 +124,8 @@ static void *alloc_quote_buf(void)
+ if (!addr)
+ return NULL;
+
+- if (set_memory_decrypted((unsigned long)addr, count)) {
+- free_pages_exact(addr, len);
++ if (set_memory_decrypted((unsigned long)addr, count))
+ return NULL;
+- }
+
+ return addr;
+ }
+diff --git a/drivers/watchdog/Kconfig b/drivers/watchdog/Kconfig
+index 94c96bcfefe347..0b59c669c26d35 100644
+--- a/drivers/watchdog/Kconfig
++++ b/drivers/watchdog/Kconfig
+@@ -549,6 +549,7 @@ config S3C2410_WATCHDOG
+ tristate "S3C6410/S5Pv210/Exynos Watchdog"
+ depends on ARCH_S3C64XX || ARCH_S5PV210 || ARCH_EXYNOS || COMPILE_TEST
+ select WATCHDOG_CORE
++ select MFD_SYSCON if ARCH_EXYNOS
+ help
+ Watchdog timer block in the Samsung S3C64xx, S5Pv210 and Exynos
+ SoCs. This will reboot the system when the timer expires with
+diff --git a/drivers/watchdog/it87_wdt.c b/drivers/watchdog/it87_wdt.c
+index 3e8c15138eddad..1a5a0a2c3f2e37 100644
+--- a/drivers/watchdog/it87_wdt.c
++++ b/drivers/watchdog/it87_wdt.c
+@@ -20,6 +20,8 @@
+
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
++#include <linux/bits.h>
++#include <linux/dmi.h>
+ #include <linux/init.h>
+ #include <linux/io.h>
+ #include <linux/kernel.h>
+@@ -40,6 +42,7 @@
+ #define VAL 0x2f
+
+ /* Logical device Numbers LDN */
++#define EC 0x04
+ #define GPIO 0x07
+
+ /* Configuration Registers and Functions */
+@@ -73,6 +76,12 @@
+ #define IT8784_ID 0x8784
+ #define IT8786_ID 0x8786
+
++/* Environment Controller Configuration Registers LDN=0x04 */
++#define SCR1 0xfa
++
++/* Environment Controller Bits SCR1 */
++#define WDT_PWRGD 0x20
++
+ /* GPIO Configuration Registers LDN=0x07 */
+ #define WDTCTRL 0x71
+ #define WDTCFG 0x72
+@@ -240,6 +249,21 @@ static int wdt_set_timeout(struct watchdog_device *wdd, unsigned int t)
+ return ret;
+ }
+
++enum {
++ IT87_WDT_OUTPUT_THROUGH_PWRGD = BIT(0),
++};
++
++static const struct dmi_system_id it87_quirks[] = {
++ {
++ /* Qotom Q30900P (IT8786) */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_BOARD_NAME, "QCML04"),
++ },
++ .driver_data = (void *)IT87_WDT_OUTPUT_THROUGH_PWRGD,
++ },
++ {}
++};
++
+ static const struct watchdog_info ident = {
+ .options = WDIOF_SETTIMEOUT | WDIOF_MAGICCLOSE | WDIOF_KEEPALIVEPING,
+ .firmware_version = 1,
+@@ -261,8 +285,10 @@ static struct watchdog_device wdt_dev = {
+
+ static int __init it87_wdt_init(void)
+ {
++ const struct dmi_system_id *dmi_id;
+ u8 chip_rev;
+ u8 ctrl;
++ int quirks = 0;
+ int rc;
+
+ rc = superio_enter();
+@@ -273,6 +299,10 @@ static int __init it87_wdt_init(void)
+ chip_rev = superio_inb(CHIPREV) & 0x0f;
+ superio_exit();
+
++ dmi_id = dmi_first_match(it87_quirks);
++ if (dmi_id)
++ quirks = (long)dmi_id->driver_data;
++
+ switch (chip_type) {
+ case IT8702_ID:
+ max_units = 255;
+@@ -333,6 +363,15 @@ static int __init it87_wdt_init(void)
+ superio_outb(0x00, WDTCTRL);
+ }
+
++ if (quirks & IT87_WDT_OUTPUT_THROUGH_PWRGD) {
++ superio_select(EC);
++ ctrl = superio_inb(SCR1);
++ if (!(ctrl & WDT_PWRGD)) {
++ ctrl |= WDT_PWRGD;
++ superio_outb(ctrl, SCR1);
++ }
++ }
++
+ superio_exit();
+
+ if (timeout < 1 || timeout > max_units * 60) {
+diff --git a/drivers/watchdog/mtk_wdt.c b/drivers/watchdog/mtk_wdt.c
+index e2d7a57d6ea2e7..91d110646e16f7 100644
+--- a/drivers/watchdog/mtk_wdt.c
++++ b/drivers/watchdog/mtk_wdt.c
+@@ -10,6 +10,7 @@
+ */
+
+ #include <dt-bindings/reset/mt2712-resets.h>
++#include <dt-bindings/reset/mediatek,mt6735-wdt.h>
+ #include <dt-bindings/reset/mediatek,mt6795-resets.h>
+ #include <dt-bindings/reset/mt7986-resets.h>
+ #include <dt-bindings/reset/mt8183-resets.h>
+@@ -87,6 +88,10 @@ static const struct mtk_wdt_data mt2712_data = {
+ .toprgu_sw_rst_num = MT2712_TOPRGU_SW_RST_NUM,
+ };
+
++static const struct mtk_wdt_data mt6735_data = {
++ .toprgu_sw_rst_num = MT6735_TOPRGU_RST_NUM,
++};
++
+ static const struct mtk_wdt_data mt6795_data = {
+ .toprgu_sw_rst_num = MT6795_TOPRGU_SW_RST_NUM,
+ };
+@@ -489,6 +494,7 @@ static int mtk_wdt_resume(struct device *dev)
+ static const struct of_device_id mtk_wdt_dt_ids[] = {
+ { .compatible = "mediatek,mt2712-wdt", .data = &mt2712_data },
+ { .compatible = "mediatek,mt6589-wdt" },
++ { .compatible = "mediatek,mt6735-wdt", .data = &mt6735_data },
+ { .compatible = "mediatek,mt6795-wdt", .data = &mt6795_data },
+ { .compatible = "mediatek,mt7986-wdt", .data = &mt7986_data },
+ { .compatible = "mediatek,mt7988-wdt", .data = &mt7988_data },
+diff --git a/drivers/watchdog/rzg2l_wdt.c b/drivers/watchdog/rzg2l_wdt.c
+index 2a35f890a2883a..11bbe48160ec9c 100644
+--- a/drivers/watchdog/rzg2l_wdt.c
++++ b/drivers/watchdog/rzg2l_wdt.c
+@@ -12,6 +12,7 @@
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/platform_device.h>
++#include <linux/pm_domain.h>
+ #include <linux/pm_runtime.h>
+ #include <linux/reset.h>
+ #include <linux/units.h>
+@@ -166,8 +167,22 @@ static int rzg2l_wdt_restart(struct watchdog_device *wdev,
+ struct rzg2l_wdt_priv *priv = watchdog_get_drvdata(wdev);
+ int ret;
+
+- clk_prepare_enable(priv->pclk);
+- clk_prepare_enable(priv->osc_clk);
++ /*
++ * In case of RZ/G3S the watchdog device may be part of an IRQ safe power
++ * domain that is currently powered off. In this case we need to power
++ * it on before accessing registers. Along with this the clocks will be
++ * enabled. We don't undo the pm_runtime_resume_and_get() as the device
++ * need to be on for the reboot to happen.
++ *
++ * For the rest of SoCs not registering a watchdog IRQ safe power
++ * domain it is safe to call pm_runtime_resume_and_get() as the
++ * irq_safe_dev_in_sleep_domain() call in genpd_runtime_resume()
++ * returns non zero value and the genpd_lock() is avoided, thus, there
++ * will be no invalid wait context reported by lockdep.
++ */
++ ret = pm_runtime_resume_and_get(wdev->parent);
++ if (ret)
++ return ret;
+
+ if (priv->devtype == WDT_RZG2L) {
+ ret = reset_control_deassert(priv->rstc);
+@@ -275,6 +290,7 @@ static int rzg2l_wdt_probe(struct platform_device *pdev)
+
+ priv->devtype = (uintptr_t)of_device_get_match_data(dev);
+
++ pm_runtime_irq_safe(&pdev->dev);
+ pm_runtime_enable(&pdev->dev);
+
+ priv->wdev.info = &rzg2l_wdt_ident;
+diff --git a/drivers/watchdog/s3c2410_wdt.c b/drivers/watchdog/s3c2410_wdt.c
+index 686cf544d0ae7a..349d30462c8c0c 100644
+--- a/drivers/watchdog/s3c2410_wdt.c
++++ b/drivers/watchdog/s3c2410_wdt.c
+@@ -24,9 +24,9 @@
+ #include <linux/slab.h>
+ #include <linux/err.h>
+ #include <linux/of.h>
++#include <linux/mfd/syscon.h>
+ #include <linux/regmap.h>
+ #include <linux/delay.h>
+-#include <linux/soc/samsung/exynos-pmu.h>
+
+ #define S3C2410_WTCON 0x00
+ #define S3C2410_WTDAT 0x04
+@@ -699,11 +699,11 @@ static int s3c2410wdt_probe(struct platform_device *pdev)
+ return ret;
+
+ if (wdt->drv_data->quirks & QUIRKS_HAVE_PMUREG) {
+- wdt->pmureg = exynos_get_pmu_regmap_by_phandle(dev->of_node,
+- "samsung,syscon-phandle");
++ wdt->pmureg = syscon_regmap_lookup_by_phandle(dev->of_node,
++ "samsung,syscon-phandle");
+ if (IS_ERR(wdt->pmureg))
+ return dev_err_probe(dev, PTR_ERR(wdt->pmureg),
+- "PMU regmap lookup failed.\n");
++ "syscon regmap lookup failed.\n");
+ }
+
+ wdt_irq = platform_get_irq(pdev, 0);
+diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
+index 9c05cab473f577..29c16459740112 100644
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -654,6 +654,8 @@ int btrfs_force_cow_block(struct btrfs_trans_handle *trans,
+ goto error_unlock_cow;
+ }
+ }
++
++ trace_btrfs_cow_block(root, buf, cow);
+ if (unlock_orig)
+ btrfs_tree_unlock(buf);
+ free_extent_buffer_stale(buf);
+@@ -710,7 +712,6 @@ int btrfs_cow_block(struct btrfs_trans_handle *trans,
+ {
+ struct btrfs_fs_info *fs_info = root->fs_info;
+ u64 search_start;
+- int ret;
+
+ if (unlikely(test_bit(BTRFS_ROOT_DELETING, &root->state))) {
+ btrfs_abort_transaction(trans, -EUCLEAN);
+@@ -751,12 +752,8 @@ int btrfs_cow_block(struct btrfs_trans_handle *trans,
+ * Also We don't care about the error, as it's handled internally.
+ */
+ btrfs_qgroup_trace_subtree_after_cow(trans, root, buf);
+- ret = btrfs_force_cow_block(trans, root, buf, parent, parent_slot,
+- cow_ret, search_start, 0, nest);
+-
+- trace_btrfs_cow_block(root, buf, *cow_ret);
+-
+- return ret;
++ return btrfs_force_cow_block(trans, root, buf, parent, parent_slot,
++ cow_ret, search_start, 0, nest);
+ }
+ ALLOW_ERROR_INJECTION(btrfs_cow_block, ERRNO);
+
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 58ffe78132d9d6..4b3e256e0d0b88 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -7117,6 +7117,8 @@ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
+ ret = -EAGAIN;
+ goto out;
+ }
++
++ cond_resched();
+ }
+
+ if (file_extent)
+@@ -9780,15 +9782,25 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ struct btrfs_fs_info *fs_info = root->fs_info;
+ struct extent_io_tree *io_tree = &BTRFS_I(inode)->io_tree;
+ struct extent_state *cached_state = NULL;
+- struct extent_map *em = NULL;
+ struct btrfs_chunk_map *map = NULL;
+ struct btrfs_device *device = NULL;
+ struct btrfs_swap_info bsi = {
+ .lowest_ppage = (sector_t)-1ULL,
+ };
++ struct btrfs_backref_share_check_ctx *backref_ctx = NULL;
++ struct btrfs_path *path = NULL;
+ int ret = 0;
+ u64 isize;
+- u64 start;
++ u64 prev_extent_end = 0;
++
++ /*
++ * Acquire the inode's mmap lock to prevent races with memory mapped
++ * writes, as they could happen after we flush delalloc below and before
++ * we lock the extent range further below. The inode was already locked
++ * up in the call chain.
++ */
++ btrfs_assert_inode_locked(BTRFS_I(inode));
++ down_write(&BTRFS_I(inode)->i_mmap_lock);
+
+ /*
+ * If the swap file was just created, make sure delalloc is done. If the
+@@ -9797,22 +9809,32 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ */
+ ret = btrfs_wait_ordered_range(BTRFS_I(inode), 0, (u64)-1);
+ if (ret)
+- return ret;
++ goto out_unlock_mmap;
+
+ /*
+ * The inode is locked, so these flags won't change after we check them.
+ */
+ if (BTRFS_I(inode)->flags & BTRFS_INODE_COMPRESS) {
+ btrfs_warn(fs_info, "swapfile must not be compressed");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out_unlock_mmap;
+ }
+ if (!(BTRFS_I(inode)->flags & BTRFS_INODE_NODATACOW)) {
+ btrfs_warn(fs_info, "swapfile must not be copy-on-write");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out_unlock_mmap;
+ }
+ if (!(BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM)) {
+ btrfs_warn(fs_info, "swapfile must not be checksummed");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out_unlock_mmap;
++ }
++
++ path = btrfs_alloc_path();
++ backref_ctx = btrfs_alloc_backref_share_check_ctx();
++ if (!path || !backref_ctx) {
++ ret = -ENOMEM;
++ goto out_unlock_mmap;
+ }
+
+ /*
+@@ -9827,7 +9849,8 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ if (!btrfs_exclop_start(fs_info, BTRFS_EXCLOP_SWAP_ACTIVATE)) {
+ btrfs_warn(fs_info,
+ "cannot activate swapfile while exclusive operation is running");
+- return -EBUSY;
++ ret = -EBUSY;
++ goto out_unlock_mmap;
+ }
+
+ /*
+@@ -9841,7 +9864,8 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ btrfs_exclop_finish(fs_info);
+ btrfs_warn(fs_info,
+ "cannot activate swapfile because snapshot creation is in progress");
+- return -EINVAL;
++ ret = -EINVAL;
++ goto out_unlock_mmap;
+ }
+ /*
+ * Snapshots can create extents which require COW even if NODATACOW is
+@@ -9862,7 +9886,8 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ btrfs_warn(fs_info,
+ "cannot activate swapfile because subvolume %llu is being deleted",
+ btrfs_root_id(root));
+- return -EPERM;
++ ret = -EPERM;
++ goto out_unlock_mmap;
+ }
+ atomic_inc(&root->nr_swapfiles);
+ spin_unlock(&root->root_item_lock);
+@@ -9870,24 +9895,39 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ isize = ALIGN_DOWN(inode->i_size, fs_info->sectorsize);
+
+ lock_extent(io_tree, 0, isize - 1, &cached_state);
+- start = 0;
+- while (start < isize) {
+- u64 logical_block_start, physical_block_start;
++ while (prev_extent_end < isize) {
++ struct btrfs_key key;
++ struct extent_buffer *leaf;
++ struct btrfs_file_extent_item *ei;
+ struct btrfs_block_group *bg;
+- u64 len = isize - start;
++ u64 logical_block_start;
++ u64 physical_block_start;
++ u64 extent_gen;
++ u64 disk_bytenr;
++ u64 len;
+
+- em = btrfs_get_extent(BTRFS_I(inode), NULL, start, len);
+- if (IS_ERR(em)) {
+- ret = PTR_ERR(em);
++ key.objectid = btrfs_ino(BTRFS_I(inode));
++ key.type = BTRFS_EXTENT_DATA_KEY;
++ key.offset = prev_extent_end;
++
++ ret = btrfs_search_slot(NULL, root, &key, path, 0, 0);
++ if (ret < 0)
+ goto out;
+- }
+
+- if (em->disk_bytenr == EXTENT_MAP_HOLE) {
++ /*
++ * If key not found it means we have an implicit hole (NO_HOLES
++ * is enabled).
++ */
++ if (ret > 0) {
+ btrfs_warn(fs_info, "swapfile must not have holes");
+ ret = -EINVAL;
+ goto out;
+ }
+- if (em->disk_bytenr == EXTENT_MAP_INLINE) {
++
++ leaf = path->nodes[0];
++ ei = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_file_extent_item);
++
++ if (btrfs_file_extent_type(leaf, ei) == BTRFS_FILE_EXTENT_INLINE) {
+ /*
+ * It's unlikely we'll ever actually find ourselves
+ * here, as a file small enough to fit inline won't be
+@@ -9899,23 +9939,45 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ ret = -EINVAL;
+ goto out;
+ }
+- if (extent_map_is_compressed(em)) {
++
++ if (btrfs_file_extent_compression(leaf, ei) != BTRFS_COMPRESS_NONE) {
+ btrfs_warn(fs_info, "swapfile must not be compressed");
+ ret = -EINVAL;
+ goto out;
+ }
+
+- logical_block_start = extent_map_block_start(em) + (start - em->start);
+- len = min(len, em->len - (start - em->start));
+- free_extent_map(em);
+- em = NULL;
++ disk_bytenr = btrfs_file_extent_disk_bytenr(leaf, ei);
++ if (disk_bytenr == 0) {
++ btrfs_warn(fs_info, "swapfile must not have holes");
++ ret = -EINVAL;
++ goto out;
++ }
++
++ logical_block_start = disk_bytenr + btrfs_file_extent_offset(leaf, ei);
++ extent_gen = btrfs_file_extent_generation(leaf, ei);
++ prev_extent_end = btrfs_file_extent_end(path);
++
++ if (prev_extent_end > isize)
++ len = isize - key.offset;
++ else
++ len = btrfs_file_extent_num_bytes(leaf, ei);
++
++ backref_ctx->curr_leaf_bytenr = leaf->start;
++
++ /*
++ * Don't need the path anymore, release to avoid deadlocks when
++ * calling btrfs_is_data_extent_shared() because when joining a
++ * transaction it can block waiting for the current one's commit
++ * which in turn may be trying to lock the same leaf to flush
++ * delayed items for example.
++ */
++ btrfs_release_path(path);
+
+- ret = can_nocow_extent(inode, start, &len, NULL, false, true);
++ ret = btrfs_is_data_extent_shared(BTRFS_I(inode), disk_bytenr,
++ extent_gen, backref_ctx);
+ if (ret < 0) {
+ goto out;
+- } else if (ret) {
+- ret = 0;
+- } else {
++ } else if (ret > 0) {
+ btrfs_warn(fs_info,
+ "swapfile must not be copy-on-write");
+ ret = -EINVAL;
+@@ -9950,7 +10012,6 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+
+ physical_block_start = (map->stripes[0].physical +
+ (logical_block_start - map->start));
+- len = min(len, map->chunk_len - (logical_block_start - map->start));
+ btrfs_free_chunk_map(map);
+ map = NULL;
+
+@@ -9991,20 +10052,16 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ if (ret)
+ goto out;
+ }
+- bsi.start = start;
++ bsi.start = key.offset;
+ bsi.block_start = physical_block_start;
+ bsi.block_len = len;
+ }
+-
+- start += len;
+ }
+
+ if (bsi.block_len)
+ ret = btrfs_add_swap_extent(sis, &bsi);
+
+ out:
+- if (!IS_ERR_OR_NULL(em))
+- free_extent_map(em);
+ if (!IS_ERR_OR_NULL(map))
+ btrfs_free_chunk_map(map);
+
+@@ -10017,6 +10074,10 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+
+ btrfs_exclop_finish(fs_info);
+
++out_unlock_mmap:
++ up_write(&BTRFS_I(inode)->i_mmap_lock);
++ btrfs_free_backref_share_ctx(backref_ctx);
++ btrfs_free_path(path);
+ if (ret)
+ return ret;
+
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index a0e8deca87a7a6..e70ed857fc743b 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1122,6 +1122,7 @@ int btrfs_quota_enable(struct btrfs_fs_info *fs_info,
+ fs_info->qgroup_flags = BTRFS_QGROUP_STATUS_FLAG_ON;
+ if (simple) {
+ fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_SIMPLE_MODE;
++ btrfs_set_fs_incompat(fs_info, SIMPLE_QUOTA);
+ btrfs_set_qgroup_status_enable_gen(leaf, ptr, trans->transid);
+ } else {
+ fs_info->qgroup_flags |= BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT;
+@@ -1255,8 +1256,6 @@ int btrfs_quota_enable(struct btrfs_fs_info *fs_info,
+ spin_lock(&fs_info->qgroup_lock);
+ fs_info->quota_root = quota_root;
+ set_bit(BTRFS_FS_QUOTA_ENABLED, &fs_info->flags);
+- if (simple)
+- btrfs_set_fs_incompat(fs_info, SIMPLE_QUOTA);
+ spin_unlock(&fs_info->qgroup_lock);
+
+ /* Skip rescan for simple qgroups. */
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index f3834f8d26b456..adcbdc970f9ea4 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -2902,6 +2902,7 @@ static int relocate_one_folio(struct reloc_control *rc,
+ const bool use_rst = btrfs_need_stripe_tree_update(fs_info, rc->block_group->flags);
+
+ ASSERT(index <= last_index);
++again:
+ folio = filemap_lock_folio(inode->i_mapping, index);
+ if (IS_ERR(folio)) {
+
+@@ -2937,6 +2938,11 @@ static int relocate_one_folio(struct reloc_control *rc,
+ ret = -EIO;
+ goto release_folio;
+ }
++ if (folio->mapping != inode->i_mapping) {
++ folio_unlock(folio);
++ folio_put(folio);
++ goto again;
++ }
+ }
+
+ /*
+diff --git a/fs/btrfs/send.c b/fs/btrfs/send.c
+index 0cb11dcd10cd4b..b1015f383f75ef 100644
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -5291,6 +5291,7 @@ static int put_file_data(struct send_ctx *sctx, u64 offset, u32 len)
+ unsigned cur_len = min_t(unsigned, len,
+ PAGE_SIZE - pg_offset);
+
++again:
+ folio = filemap_lock_folio(mapping, index);
+ if (IS_ERR(folio)) {
+ page_cache_sync_readahead(mapping,
+@@ -5323,6 +5324,11 @@ static int put_file_data(struct send_ctx *sctx, u64 offset, u32 len)
+ ret = -EIO;
+ break;
+ }
++ if (folio->mapping != mapping) {
++ folio_unlock(folio);
++ folio_put(folio);
++ goto again;
++ }
+ }
+
+ memcpy_from_folio(sctx->send_buf + sctx->send_size, folio,
+diff --git a/fs/btrfs/sysfs.c b/fs/btrfs/sysfs.c
+index 03926ad467c919..5912d505776660 100644
+--- a/fs/btrfs/sysfs.c
++++ b/fs/btrfs/sysfs.c
+@@ -1118,7 +1118,7 @@ static ssize_t btrfs_nodesize_show(struct kobject *kobj,
+ {
+ struct btrfs_fs_info *fs_info = to_fs_info(kobj);
+
+- return sysfs_emit(buf, "%u\n", fs_info->super_copy->nodesize);
++ return sysfs_emit(buf, "%u\n", fs_info->nodesize);
+ }
+
+ BTRFS_ATTR(, nodesize, btrfs_nodesize_show);
+@@ -1128,7 +1128,7 @@ static ssize_t btrfs_sectorsize_show(struct kobject *kobj,
+ {
+ struct btrfs_fs_info *fs_info = to_fs_info(kobj);
+
+- return sysfs_emit(buf, "%u\n", fs_info->super_copy->sectorsize);
++ return sysfs_emit(buf, "%u\n", fs_info->sectorsize);
+ }
+
+ BTRFS_ATTR(, sectorsize, btrfs_sectorsize_show);
+@@ -1180,7 +1180,7 @@ static ssize_t btrfs_clone_alignment_show(struct kobject *kobj,
+ {
+ struct btrfs_fs_info *fs_info = to_fs_info(kobj);
+
+- return sysfs_emit(buf, "%u\n", fs_info->super_copy->sectorsize);
++ return sysfs_emit(buf, "%u\n", fs_info->sectorsize);
+ }
+
+ BTRFS_ATTR(, clone_alignment, btrfs_clone_alignment_show);
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 67468d88f13908..851d70200c6b8f 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -1552,7 +1552,7 @@ ceph_direct_read_write(struct kiocb *iocb, struct iov_iter *iter,
+ }
+
+ op = &req->r_ops[0];
+- if (sparse) {
++ if (!write && sparse) {
+ extent_cnt = __ceph_sparse_read_ext_count(inode, size);
+ ret = ceph_alloc_sparse_ext_map(op, extent_cnt);
+ if (ret) {
+diff --git a/fs/nfsd/export.c b/fs/nfsd/export.c
+index 6d0455973d641e..49aede376d8668 100644
+--- a/fs/nfsd/export.c
++++ b/fs/nfsd/export.c
+@@ -40,24 +40,15 @@
+ #define EXPKEY_HASHMAX (1 << EXPKEY_HASHBITS)
+ #define EXPKEY_HASHMASK (EXPKEY_HASHMAX -1)
+
+-static void expkey_put_work(struct work_struct *work)
++static void expkey_put(struct kref *ref)
+ {
+- struct svc_expkey *key =
+- container_of(to_rcu_work(work), struct svc_expkey, ek_rcu_work);
++ struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);
+
+ if (test_bit(CACHE_VALID, &key->h.flags) &&
+ !test_bit(CACHE_NEGATIVE, &key->h.flags))
+ path_put(&key->ek_path);
+ auth_domain_put(key->ek_client);
+- kfree(key);
+-}
+-
+-static void expkey_put(struct kref *ref)
+-{
+- struct svc_expkey *key = container_of(ref, struct svc_expkey, h.ref);
+-
+- INIT_RCU_WORK(&key->ek_rcu_work, expkey_put_work);
+- queue_rcu_work(system_wq, &key->ek_rcu_work);
++ kfree_rcu(key, ek_rcu);
+ }
+
+ static int expkey_upcall(struct cache_detail *cd, struct cache_head *h)
+@@ -364,26 +355,16 @@ static void export_stats_destroy(struct export_stats *stats)
+ EXP_STATS_COUNTERS_NUM);
+ }
+
+-static void svc_export_put_work(struct work_struct *work)
++static void svc_export_put(struct kref *ref)
+ {
+- struct svc_export *exp =
+- container_of(to_rcu_work(work), struct svc_export, ex_rcu_work);
+-
++ struct svc_export *exp = container_of(ref, struct svc_export, h.ref);
+ path_put(&exp->ex_path);
+ auth_domain_put(exp->ex_client);
+ nfsd4_fslocs_free(&exp->ex_fslocs);
+ export_stats_destroy(exp->ex_stats);
+ kfree(exp->ex_stats);
+ kfree(exp->ex_uuid);
+- kfree(exp);
+-}
+-
+-static void svc_export_put(struct kref *ref)
+-{
+- struct svc_export *exp = container_of(ref, struct svc_export, h.ref);
+-
+- INIT_RCU_WORK(&exp->ex_rcu_work, svc_export_put_work);
+- queue_rcu_work(system_wq, &exp->ex_rcu_work);
++ kfree_rcu(exp, ex_rcu);
+ }
+
+ static int svc_export_upcall(struct cache_detail *cd, struct cache_head *h)
+diff --git a/fs/nfsd/export.h b/fs/nfsd/export.h
+index 081afb68681e14..3794ae253a7016 100644
+--- a/fs/nfsd/export.h
++++ b/fs/nfsd/export.h
+@@ -75,7 +75,7 @@ struct svc_export {
+ u32 ex_layout_types;
+ struct nfsd4_deviceid_map *ex_devid_map;
+ struct cache_detail *cd;
+- struct rcu_work ex_rcu_work;
++ struct rcu_head ex_rcu;
+ unsigned long ex_xprtsec_modes;
+ struct export_stats *ex_stats;
+ };
+@@ -92,7 +92,7 @@ struct svc_expkey {
+ u32 ek_fsid[6];
+
+ struct path ek_path;
+- struct rcu_work ek_rcu_work;
++ struct rcu_head ek_rcu;
+ };
+
+ #define EX_ISSYNC(exp) (!((exp)->ex_flags & NFSEXP_ASYNC))
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index b8cbb15560040f..de076365254978 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -1058,7 +1058,7 @@ static int setup_callback_client(struct nfs4_client *clp, struct nfs4_cb_conn *c
+ args.authflavor = clp->cl_cred.cr_flavor;
+ clp->cl_cb_ident = conn->cb_ident;
+ } else {
+- if (!conn->cb_xprt)
++ if (!conn->cb_xprt || !ses)
+ return -EINVAL;
+ clp->cl_cb_session = ses;
+ args.bc_xprt = conn->cb_xprt;
+@@ -1461,8 +1461,6 @@ static void nfsd4_process_cb_update(struct nfsd4_callback *cb)
+ ses = c->cn_session;
+ }
+ spin_unlock(&clp->cl_lock);
+- if (!c)
+- return;
+
+ err = setup_callback_client(clp, &conn, ses);
+ if (err) {
+diff --git a/fs/smb/client/Kconfig b/fs/smb/client/Kconfig
+index 2aff6d1395ce39..9f05f94e265a6d 100644
+--- a/fs/smb/client/Kconfig
++++ b/fs/smb/client/Kconfig
+@@ -2,7 +2,6 @@
+ config CIFS
+ tristate "SMB3 and CIFS support (advanced network filesystem)"
+ depends on INET
+- select NETFS_SUPPORT
+ select NLS
+ select NLS_UCS2_UTILS
+ select CRYPTO
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index d1bd69cbfe09a5..4750505465ae63 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -4855,6 +4855,8 @@ smb2_writev_callback(struct mid_q_entry *mid)
+ if (written > wdata->subreq.len)
+ written &= 0xFFFF;
+
++ cifs_stats_bytes_written(tcon, written);
++
+ if (written < wdata->subreq.len)
+ wdata->result = -ENOSPC;
+ else
+@@ -5171,6 +5173,7 @@ SMB2_write(const unsigned int xid, struct cifs_io_parms *io_parms,
+ cifs_dbg(VFS, "Send error in write = %d\n", rc);
+ } else {
+ *nbytes = le32_to_cpu(rsp->DataLength);
++ cifs_stats_bytes_written(io_parms->tcon, *nbytes);
+ trace_smb3_write_done(0, 0, xid,
+ req->PersistentFileId,
+ io_parms->tcon->tid,
+diff --git a/fs/smb/server/smb_common.c b/fs/smb/server/smb_common.c
+index 75b4eb856d32f7..af8e24163bf261 100644
+--- a/fs/smb/server/smb_common.c
++++ b/fs/smb/server/smb_common.c
+@@ -18,8 +18,8 @@
+ #include "mgmt/share_config.h"
+
+ /*for shortname implementation */
+-static const char basechars[43] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_-!@#$%";
+-#define MANGLE_BASE (sizeof(basechars) / sizeof(char) - 1)
++static const char *basechars = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ_-!@#$%";
++#define MANGLE_BASE (strlen(basechars) - 1)
+ #define MAGIC_CHAR '~'
+ #define PERIOD '.'
+ #define mangle(V) ((char)(basechars[(V) % MANGLE_BASE]))
+diff --git a/fs/udf/namei.c b/fs/udf/namei.c
+index 78a603129dd583..2cb49b6b07168a 100644
+--- a/fs/udf/namei.c
++++ b/fs/udf/namei.c
+@@ -517,7 +517,11 @@ static int udf_rmdir(struct inode *dir, struct dentry *dentry)
+ inode->i_nlink);
+ clear_nlink(inode);
+ inode->i_size = 0;
+- inode_dec_link_count(dir);
++ if (dir->i_nlink >= 3)
++ inode_dec_link_count(dir);
++ else
++ udf_warn(inode->i_sb, "parent dir link count too low (%u)\n",
++ dir->i_nlink);
+ udf_add_fid_counter(dir->i_sb, true, -1);
+ inode_set_mtime_to_ts(dir,
+ inode_set_ctime_to_ts(dir, inode_set_ctime_current(inode)));
+@@ -787,8 +791,18 @@ static int udf_rename(struct mnt_idmap *idmap, struct inode *old_dir,
+ retval = -ENOTEMPTY;
+ if (!empty_dir(new_inode))
+ goto out_oiter;
++ retval = -EFSCORRUPTED;
++ if (new_inode->i_nlink != 2)
++ goto out_oiter;
+ }
++ retval = -EFSCORRUPTED;
++ if (old_dir->i_nlink < 3)
++ goto out_oiter;
+ is_dir = true;
++ } else if (new_inode) {
++ retval = -EFSCORRUPTED;
++ if (new_inode->i_nlink < 1)
++ goto out_oiter;
+ }
+ if (is_dir && old_dir != new_dir) {
+ retval = udf_fiiter_find_entry(old_inode, &dotdot_name,
+diff --git a/include/linux/platform_data/amd_qdma.h b/include/linux/platform_data/amd_qdma.h
+index 576d952f97edd4..967a6ef31cf982 100644
+--- a/include/linux/platform_data/amd_qdma.h
++++ b/include/linux/platform_data/amd_qdma.h
+@@ -26,11 +26,13 @@ struct dma_slave_map;
+ * @max_mm_channels: Maximum number of MM DMA channels in each direction
+ * @device_map: DMA slave map
+ * @irq_index: The index of first IRQ
++ * @dma_dev: The device pointer for dma operations
+ */
+ struct qdma_platdata {
+ u32 max_mm_channels;
+ u32 irq_index;
+ struct dma_slave_map *device_map;
++ struct device *dma_dev;
+ };
+
+ #endif /* _PLATDATA_AMD_QDMA_H */
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index c14446c6164d72..02eaf84c8626f4 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1633,8 +1633,9 @@ static inline unsigned int __task_state_index(unsigned int tsk_state,
+ * We're lying here, but rather than expose a completely new task state
+ * to userspace, we can make this appear as if the task has gone through
+ * a regular rt_mutex_lock() call.
++ * Report frozen tasks as uninterruptible.
+ */
+- if (tsk_state & TASK_RTLOCK_WAIT)
++ if ((tsk_state & TASK_RTLOCK_WAIT) || (tsk_state & TASK_FROZEN))
+ state = TASK_UNINTERRUPTIBLE;
+
+ return fls(state);
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index d9b03e0746e7a4..2cbe0c22a32f3c 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -317,17 +317,22 @@ static inline void sock_drop(struct sock *sk, struct sk_buff *skb)
+ kfree_skb(skb);
+ }
+
+-static inline void sk_psock_queue_msg(struct sk_psock *psock,
++static inline bool sk_psock_queue_msg(struct sk_psock *psock,
+ struct sk_msg *msg)
+ {
++ bool ret;
++
+ spin_lock_bh(&psock->ingress_lock);
+- if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED))
++ if (sk_psock_test_state(psock, SK_PSOCK_TX_ENABLED)) {
+ list_add_tail(&msg->list, &psock->ingress_msg);
+- else {
++ ret = true;
++ } else {
+ sk_msg_free(psock->sk, msg);
+ kfree(msg);
++ ret = false;
+ }
+ spin_unlock_bh(&psock->ingress_lock);
++ return ret;
+ }
+
+ static inline struct sk_msg *sk_psock_dequeue_msg(struct sk_psock *psock)
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index 4df2ff81d3dea5..77769ff5054441 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -379,7 +379,7 @@ struct trace_event_call {
+ struct list_head list;
+ struct trace_event_class *class;
+ union {
+- char *name;
++ const char *name;
+ /* Set TRACE_EVENT_FL_TRACEPOINT flag when using "tp" */
+ struct tracepoint *tp;
+ };
+diff --git a/include/linux/vmstat.h b/include/linux/vmstat.h
+index d2761bf8ff32c9..9f3a04345b8606 100644
+--- a/include/linux/vmstat.h
++++ b/include/linux/vmstat.h
+@@ -515,7 +515,7 @@ static inline const char *node_stat_name(enum node_stat_item item)
+
+ static inline const char *lru_list_name(enum lru_list lru)
+ {
+- return node_stat_name(NR_LRU_BASE + lru) + 3; // skip "nr_"
++ return node_stat_name(NR_LRU_BASE + (enum node_stat_item)lru) + 3; // skip "nr_"
+ }
+
+ #if defined(CONFIG_VM_EVENT_COUNTERS) || defined(CONFIG_MEMCG)
+diff --git a/include/net/sock.h b/include/net/sock.h
+index f29c1444893875..fa055cf1785efd 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -1521,7 +1521,7 @@ static inline bool sk_wmem_schedule(struct sock *sk, int size)
+ }
+
+ static inline bool
+-sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size)
++__sk_rmem_schedule(struct sock *sk, int size, bool pfmemalloc)
+ {
+ int delta;
+
+@@ -1529,7 +1529,13 @@ sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size)
+ return true;
+ delta = size - sk->sk_forward_alloc;
+ return delta <= 0 || __sk_mem_schedule(sk, delta, SK_MEM_RECV) ||
+- skb_pfmemalloc(skb);
++ pfmemalloc;
++}
++
++static inline bool
++sk_rmem_schedule(struct sock *sk, struct sk_buff *skb, int size)
++{
++ return __sk_rmem_schedule(sk, size, skb_pfmemalloc(skb));
+ }
+
+ static inline int sk_unused_reserved_mem(const struct sock *sk)
+diff --git a/include/uapi/linux/stddef.h b/include/uapi/linux/stddef.h
+index 58154117d9b090..a6fce46aeb37c9 100644
+--- a/include/uapi/linux/stddef.h
++++ b/include/uapi/linux/stddef.h
+@@ -8,6 +8,13 @@
+ #define __always_inline inline
+ #endif
+
++/* Not all C++ standards support type declarations inside an anonymous union */
++#ifndef __cplusplus
++#define __struct_group_tag(TAG) TAG
++#else
++#define __struct_group_tag(TAG)
++#endif
++
+ /**
+ * __struct_group() - Create a mirrored named and anonyomous struct
+ *
+@@ -20,13 +27,13 @@
+ * and size: one anonymous and one named. The former's members can be used
+ * normally without sub-struct naming, and the latter can be used to
+ * reason about the start, end, and size of the group of struct members.
+- * The named struct can also be explicitly tagged for layer reuse, as well
+- * as both having struct attributes appended.
++ * The named struct can also be explicitly tagged for layer reuse (C only),
++ * as well as both having struct attributes appended.
+ */
+ #define __struct_group(TAG, NAME, ATTRS, MEMBERS...) \
+ union { \
+ struct { MEMBERS } ATTRS; \
+- struct TAG { MEMBERS } ATTRS NAME; \
++ struct __struct_group_tag(TAG) { MEMBERS } ATTRS NAME; \
+ } ATTRS
+
+ #ifdef __cplusplus
+diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
+index a26593979887f3..1cfcc735b8e38e 100644
+--- a/io_uring/sqpoll.c
++++ b/io_uring/sqpoll.c
+@@ -412,6 +412,7 @@ void io_sqpoll_wait_sq(struct io_ring_ctx *ctx)
+ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ struct io_uring_params *p)
+ {
++ struct task_struct *task_to_put = NULL;
+ int ret;
+
+ /* Retain compatibility with failing for an invalid attach attempt */
+@@ -492,6 +493,7 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ }
+
+ sqd->thread = tsk;
++ task_to_put = get_task_struct(tsk);
+ ret = io_uring_alloc_task_context(tsk, ctx);
+ wake_up_new_task(tsk);
+ if (ret)
+@@ -502,11 +504,15 @@ __cold int io_sq_offload_create(struct io_ring_ctx *ctx,
+ goto err;
+ }
+
++ if (task_to_put)
++ put_task_struct(task_to_put);
+ return 0;
+ err_sqpoll:
+ complete(&ctx->sq_data->exited);
+ err:
+ io_sq_thread_finish(ctx);
++ if (task_to_put)
++ put_task_struct(task_to_put);
+ return ret;
+ }
+
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 4c486a0bfcc4d8..767f1cb8c27e17 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -7868,7 +7868,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
+ if (reg->type != PTR_TO_STACK && reg->type != CONST_PTR_TO_DYNPTR) {
+ verbose(env,
+ "arg#%d expected pointer to stack or const struct bpf_dynptr\n",
+- regno);
++ regno - 1);
+ return -EINVAL;
+ }
+
+@@ -7922,7 +7922,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
+ if (!is_dynptr_reg_valid_init(env, reg)) {
+ verbose(env,
+ "Expected an initialized dynptr as arg #%d\n",
+- regno);
++ regno - 1);
+ return -EINVAL;
+ }
+
+@@ -7930,7 +7930,7 @@ static int process_dynptr_func(struct bpf_verifier_env *env, int regno, int insn
+ if (!is_dynptr_type_expected(env, reg, arg_type & ~MEM_RDONLY)) {
+ verbose(env,
+ "Expected a dynptr of type %s as arg #%d\n",
+- dynptr_type_str(arg_to_dynptr_type(arg_type)), regno);
++ dynptr_type_str(arg_to_dynptr_type(arg_type)), regno - 1);
+ return -EINVAL;
+ }
+
+@@ -7999,7 +7999,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
+ */
+ btf_id = btf_check_iter_arg(meta->btf, meta->func_proto, regno - 1);
+ if (btf_id < 0) {
+- verbose(env, "expected valid iter pointer as arg #%d\n", regno);
++ verbose(env, "expected valid iter pointer as arg #%d\n", regno - 1);
+ return -EINVAL;
+ }
+ t = btf_type_by_id(meta->btf, btf_id);
+@@ -8009,7 +8009,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
+ /* bpf_iter_<type>_new() expects pointer to uninit iter state */
+ if (!is_iter_reg_valid_uninit(env, reg, nr_slots)) {
+ verbose(env, "expected uninitialized iter_%s as arg #%d\n",
+- iter_type_str(meta->btf, btf_id), regno);
++ iter_type_str(meta->btf, btf_id), regno - 1);
+ return -EINVAL;
+ }
+
+@@ -8033,7 +8033,7 @@ static int process_iter_arg(struct bpf_verifier_env *env, int regno, int insn_id
+ break;
+ case -EINVAL:
+ verbose(env, "expected an initialized iter_%s as arg #%d\n",
+- iter_type_str(meta->btf, btf_id), regno);
++ iter_type_str(meta->btf, btf_id), regno - 1);
+ return err;
+ case -EPROTO:
+ verbose(env, "expected an RCU CS when using %s\n", meta->func_name);
+@@ -21085,11 +21085,15 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
+ * changed in some incompatible and hard to support
+ * way, it's fine to back out this inlining logic
+ */
++#ifdef CONFIG_SMP
+ insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
+ insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
+ insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
+ cnt = 3;
+-
++#else
++ insn_buf[0] = BPF_ALU32_REG(BPF_XOR, BPF_REG_0, BPF_REG_0);
++ cnt = 1;
++#endif
+ new_prog = bpf_patch_insn_data(env, i + delta, insn_buf, cnt);
+ if (!new_prog)
+ return -ENOMEM;
+diff --git a/kernel/fork.c b/kernel/fork.c
+index ce8be55e5e04b3..e192bdbc9adebb 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -640,11 +640,8 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ LIST_HEAD(uf);
+ VMA_ITERATOR(vmi, mm, 0);
+
+- uprobe_start_dup_mmap();
+- if (mmap_write_lock_killable(oldmm)) {
+- retval = -EINTR;
+- goto fail_uprobe_end;
+- }
++ if (mmap_write_lock_killable(oldmm))
++ return -EINTR;
+ flush_cache_dup_mm(oldmm);
+ uprobe_dup_mmap(oldmm, mm);
+ /*
+@@ -783,8 +780,6 @@ static __latent_entropy int dup_mmap(struct mm_struct *mm,
+ dup_userfaultfd_complete(&uf);
+ else
+ dup_userfaultfd_fail(&uf);
+-fail_uprobe_end:
+- uprobe_end_dup_mmap();
+ return retval;
+
+ fail_nomem_anon_vma_fork:
+@@ -1692,9 +1687,11 @@ static struct mm_struct *dup_mm(struct task_struct *tsk,
+ if (!mm_init(mm, tsk, mm->user_ns))
+ goto fail_nomem;
+
++ uprobe_start_dup_mmap();
+ err = dup_mmap(mm, oldmm);
+ if (err)
+ goto free_pt;
++ uprobe_end_dup_mmap();
+
+ mm->hiwater_rss = get_mm_rss(mm);
+ mm->hiwater_vm = mm->total_vm;
+@@ -1709,6 +1706,8 @@ static struct mm_struct *dup_mm(struct task_struct *tsk,
+ mm->binfmt = NULL;
+ mm_init_owner(mm, NULL);
+ mmput(mm);
++ if (err)
++ uprobe_end_dup_mmap();
+
+ fail_nomem:
+ return NULL;
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 35515192aa0fda..b04990385a6a87 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -5111,6 +5111,9 @@ tracing_cpumask_write(struct file *filp, const char __user *ubuf,
+ cpumask_var_t tracing_cpumask_new;
+ int err;
+
++ if (count == 0 || count > KMALLOC_MAX_SIZE)
++ return -EINVAL;
++
+ if (!zalloc_cpumask_var(&tracing_cpumask_new, GFP_KERNEL))
+ return -ENOMEM;
+
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 263fac44d3ca32..935a886af40c90 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -725,7 +725,7 @@ static int trace_kprobe_module_callback(struct notifier_block *nb,
+
+ static struct notifier_block trace_kprobe_module_nb = {
+ .notifier_call = trace_kprobe_module_callback,
+- .priority = 1 /* Invoked after kprobe module callback */
++ .priority = 2 /* Invoked after kprobe and jump_label module callback */
+ };
+ static int trace_kprobe_register_module_notifier(void)
+ {
+diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
+index 9d078b37fe0b9b..abac770bc0b4c7 100644
+--- a/net/ceph/osd_client.c
++++ b/net/ceph/osd_client.c
+@@ -1173,6 +1173,8 @@ EXPORT_SYMBOL(ceph_osdc_new_request);
+
+ int __ceph_alloc_sparse_ext_map(struct ceph_osd_req_op *op, int cnt)
+ {
++ WARN_ON(op->op != CEPH_OSD_OP_SPARSE_READ);
++
+ op->extent.sparse_ext_cnt = cnt;
+ op->extent.sparse_ext = kmalloc_array(cnt,
+ sizeof(*op->extent.sparse_ext),
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 9a459213d283f1..55495063621d6c 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -3751,13 +3751,22 @@ static const struct bpf_func_proto bpf_skb_adjust_room_proto = {
+
+ static u32 __bpf_skb_min_len(const struct sk_buff *skb)
+ {
+- u32 min_len = skb_network_offset(skb);
++ int offset = skb_network_offset(skb);
++ u32 min_len = 0;
+
+- if (skb_transport_header_was_set(skb))
+- min_len = skb_transport_offset(skb);
+- if (skb->ip_summed == CHECKSUM_PARTIAL)
+- min_len = skb_checksum_start_offset(skb) +
+- skb->csum_offset + sizeof(__sum16);
++ if (offset > 0)
++ min_len = offset;
++ if (skb_transport_header_was_set(skb)) {
++ offset = skb_transport_offset(skb);
++ if (offset > 0)
++ min_len = offset;
++ }
++ if (skb->ip_summed == CHECKSUM_PARTIAL) {
++ offset = skb_checksum_start_offset(skb) +
++ skb->csum_offset + sizeof(__sum16);
++ if (offset > 0)
++ min_len = offset;
++ }
+ return min_len;
+ }
+
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index e90fbab703b2db..8ad7e6755fd642 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -445,8 +445,10 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
+ if (likely(!peek)) {
+ sge->offset += copy;
+ sge->length -= copy;
+- if (!msg_rx->skb)
++ if (!msg_rx->skb) {
+ sk_mem_uncharge(sk, copy);
++ atomic_sub(copy, &sk->sk_rmem_alloc);
++ }
+ msg_rx->sg.size -= copy;
+
+ if (!sge->length) {
+@@ -772,6 +774,8 @@ static void __sk_psock_purge_ingress_msg(struct sk_psock *psock)
+
+ list_for_each_entry_safe(msg, tmp, &psock->ingress_msg, list) {
+ list_del(&msg->list);
++ if (!msg->skb)
++ atomic_sub(msg->sg.size, &psock->sk->sk_rmem_alloc);
+ sk_msg_free(psock->sk, msg);
+ kfree(msg);
+ }
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 99cef92e6290cf..392678ae80f4ed 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -49,13 +49,14 @@ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
+ sge = sk_msg_elem(msg, i);
+ size = (apply && apply_bytes < sge->length) ?
+ apply_bytes : sge->length;
+- if (!sk_wmem_schedule(sk, size)) {
++ if (!__sk_rmem_schedule(sk, size, false)) {
+ if (!copied)
+ ret = -ENOMEM;
+ break;
+ }
+
+ sk_mem_charge(sk, size);
++ atomic_add(size, &sk->sk_rmem_alloc);
+ sk_msg_xfer(tmp, msg, i, size);
+ copied += size;
+ if (sge->length)
+@@ -74,7 +75,8 @@ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
+
+ if (!ret) {
+ msg->sg.start = i;
+- sk_psock_queue_msg(psock, tmp);
++ if (!sk_psock_queue_msg(psock, tmp))
++ atomic_sub(copied, &sk->sk_rmem_alloc);
+ sk_psock_data_ready(sk, psock);
+ } else {
+ sk_msg_free(sk, tmp);
+diff --git a/sound/core/memalloc.c b/sound/core/memalloc.c
+index 13b71069ae1874..b3853583d2ae1c 100644
+--- a/sound/core/memalloc.c
++++ b/sound/core/memalloc.c
+@@ -505,7 +505,7 @@ static void *snd_dma_wc_alloc(struct snd_dma_buffer *dmab, size_t size)
+ if (!p)
+ return NULL;
+ dmab->addr = dma_map_single(dmab->dev.dev, p, size, DMA_BIDIRECTIONAL);
+- if (dmab->addr == DMA_MAPPING_ERROR) {
++ if (dma_mapping_error(dmab->dev.dev, dmab->addr)) {
+ do_free_pages(dmab->area, size, true);
+ return NULL;
+ }
+diff --git a/sound/core/ump.c b/sound/core/ump.c
+index 8d37f237f83b2e..bd26bb2210cbd4 100644
+--- a/sound/core/ump.c
++++ b/sound/core/ump.c
+@@ -37,6 +37,7 @@ static int process_legacy_output(struct snd_ump_endpoint *ump,
+ u32 *buffer, int count);
+ static void process_legacy_input(struct snd_ump_endpoint *ump, const u32 *src,
+ int words);
++static void update_legacy_names(struct snd_ump_endpoint *ump);
+ #else
+ static inline int process_legacy_output(struct snd_ump_endpoint *ump,
+ u32 *buffer, int count)
+@@ -47,6 +48,9 @@ static inline void process_legacy_input(struct snd_ump_endpoint *ump,
+ const u32 *src, int words)
+ {
+ }
++static inline void update_legacy_names(struct snd_ump_endpoint *ump)
++{
++}
+ #endif
+
+ static const struct snd_rawmidi_global_ops snd_ump_rawmidi_ops = {
+@@ -861,6 +865,7 @@ static int ump_handle_fb_info_msg(struct snd_ump_endpoint *ump,
+ fill_fb_info(ump, &fb->info, buf);
+ if (ump->parsed) {
+ snd_ump_update_group_attrs(ump);
++ update_legacy_names(ump);
+ seq_notify_fb_change(ump, fb);
+ }
+ }
+@@ -893,6 +898,7 @@ static int ump_handle_fb_name_msg(struct snd_ump_endpoint *ump,
+ /* notify the FB name update to sequencer, too */
+ if (ret > 0 && ump->parsed) {
+ snd_ump_update_group_attrs(ump);
++ update_legacy_names(ump);
+ seq_notify_fb_change(ump, fb);
+ }
+ return ret;
+@@ -1087,6 +1093,8 @@ static int snd_ump_legacy_open(struct snd_rawmidi_substream *substream)
+ guard(mutex)(&ump->open_mutex);
+ if (ump->legacy_substreams[dir][group])
+ return -EBUSY;
++ if (!ump->groups[group].active)
++ return -ENODEV;
+ if (dir == SNDRV_RAWMIDI_STREAM_OUTPUT) {
+ if (!ump->legacy_out_opens) {
+ err = snd_rawmidi_kernel_open(&ump->core, 0,
+@@ -1254,11 +1262,20 @@ static void fill_substream_names(struct snd_ump_endpoint *ump,
+ name = ump->groups[idx].name;
+ if (!*name)
+ name = ump->info.name;
+- snprintf(s->name, sizeof(s->name), "Group %d (%.16s)",
+- idx + 1, name);
++ scnprintf(s->name, sizeof(s->name), "Group %d (%.16s)%s",
++ idx + 1, name,
++ ump->groups[idx].active ? "" : " [Inactive]");
+ }
+ }
+
++static void update_legacy_names(struct snd_ump_endpoint *ump)
++{
++ struct snd_rawmidi *rmidi = ump->legacy_rmidi;
++
++ fill_substream_names(ump, rmidi, SNDRV_RAWMIDI_STREAM_INPUT);
++ fill_substream_names(ump, rmidi, SNDRV_RAWMIDI_STREAM_OUTPUT);
++}
++
+ int snd_ump_attach_legacy_rawmidi(struct snd_ump_endpoint *ump,
+ char *id, int device)
+ {
+@@ -1295,10 +1312,7 @@ int snd_ump_attach_legacy_rawmidi(struct snd_ump_endpoint *ump,
+ rmidi->ops = &snd_ump_legacy_ops;
+ rmidi->private_data = ump;
+ ump->legacy_rmidi = rmidi;
+- if (input)
+- fill_substream_names(ump, rmidi, SNDRV_RAWMIDI_STREAM_INPUT);
+- if (output)
+- fill_substream_names(ump, rmidi, SNDRV_RAWMIDI_STREAM_OUTPUT);
++ update_legacy_names(ump);
+
+ ump_dbg(ump, "Created a legacy rawmidi #%d (%s)\n", device, id);
+ return 0;
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 2e9f817b948eb3..538c37a78a56f7 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -307,6 +307,7 @@ enum {
+ CXT_FIXUP_HP_MIC_NO_PRESENCE,
+ CXT_PINCFG_SWS_JS201D,
+ CXT_PINCFG_TOP_SPEAKER,
++ CXT_FIXUP_HP_A_U,
+ };
+
+ /* for hda_fixup_thinkpad_acpi() */
+@@ -774,6 +775,18 @@ static void cxt_setup_mute_led(struct hda_codec *codec,
+ }
+ }
+
++static void cxt_setup_gpio_unmute(struct hda_codec *codec,
++ unsigned int gpio_mute_mask)
++{
++ if (gpio_mute_mask) {
++ // set gpio data to 0.
++ snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DATA, 0);
++ snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_MASK, gpio_mute_mask);
++ snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_DIRECTION, gpio_mute_mask);
++ snd_hda_codec_write(codec, 0x01, 0, AC_VERB_SET_GPIO_STICKY_MASK, 0);
++ }
++}
++
+ static void cxt_fixup_mute_led_gpio(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+@@ -788,6 +801,15 @@ static void cxt_fixup_hp_zbook_mute_led(struct hda_codec *codec,
+ cxt_setup_mute_led(codec, 0x10, 0x20);
+ }
+
++static void cxt_fixup_hp_a_u(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ // Init vers in BIOS mute the spk/hp by set gpio high to avoid pop noise,
++ // so need to unmute once by clearing the gpio data when runs into the system.
++ if (action == HDA_FIXUP_ACT_INIT)
++ cxt_setup_gpio_unmute(codec, 0x2);
++}
++
+ /* ThinkPad X200 & co with cxt5051 */
+ static const struct hda_pintbl cxt_pincfg_lenovo_x200[] = {
+ { 0x16, 0x042140ff }, /* HP (seq# overridden) */
+@@ -998,6 +1020,10 @@ static const struct hda_fixup cxt_fixups[] = {
+ { }
+ },
+ },
++ [CXT_FIXUP_HP_A_U] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = cxt_fixup_hp_a_u,
++ },
+ };
+
+ static const struct hda_quirk cxt5045_fixups[] = {
+@@ -1072,6 +1098,7 @@ static const struct hda_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x8457, "HP Z2 G4 mini", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x8458, "HP Z2 G4 mini premium", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x138d, "Asus", CXT_FIXUP_HEADPHONE_MIC_PIN),
++ SND_PCI_QUIRK(0x14f1, 0x0252, "MBX-Z60MR100", CXT_FIXUP_HP_A_U),
+ SND_PCI_QUIRK(0x14f1, 0x0265, "SWS JS201D", CXT_PINCFG_SWS_JS201D),
+ SND_PCI_QUIRK(0x152d, 0x0833, "OLPC XO-1.5", CXT_FIXUP_OLPC_XO),
+ SND_PCI_QUIRK(0x17aa, 0x20f2, "Lenovo T400", CXT_PINCFG_LENOVO_TP410),
+@@ -1117,6 +1144,7 @@ static const struct hda_model_fixup cxt5066_fixup_models[] = {
+ { .id = CXT_PINCFG_LENOVO_NOTEBOOK, .name = "lenovo-20149" },
+ { .id = CXT_PINCFG_SWS_JS201D, .name = "sws-js201d" },
+ { .id = CXT_PINCFG_TOP_SPEAKER, .name = "sirius-top-speaker" },
++ { .id = CXT_FIXUP_HP_A_U, .name = "HP-U-support" },
+ {}
+ };
+
+diff --git a/sound/sh/sh_dac_audio.c b/sound/sh/sh_dac_audio.c
+index e7b6ce7bd086bd..1c1c14708f0181 100644
+--- a/sound/sh/sh_dac_audio.c
++++ b/sound/sh/sh_dac_audio.c
+@@ -163,7 +163,7 @@ static int snd_sh_dac_pcm_copy(struct snd_pcm_substream *substream,
+ /* channel is not used (interleaved data) */
+ struct snd_sh_dac *chip = snd_pcm_substream_chip(substream);
+
+- if (copy_from_iter_toio(chip->data_buffer + pos, src, count))
++ if (copy_from_iter(chip->data_buffer + pos, count, src) != count)
+ return -EFAULT;
+ chip->buffer_end = chip->data_buffer + pos + count;
+
+@@ -182,7 +182,7 @@ static int snd_sh_dac_pcm_silence(struct snd_pcm_substream *substream,
+ /* channel is not used (interleaved data) */
+ struct snd_sh_dac *chip = snd_pcm_substream_chip(substream);
+
+- memset_io(chip->data_buffer + pos, 0, count);
++ memset(chip->data_buffer + pos, 0, count);
+ chip->buffer_end = chip->data_buffer + pos + count;
+
+ if (chip->empty) {
+@@ -211,7 +211,6 @@ static const struct snd_pcm_ops snd_sh_dac_pcm_ops = {
+ .pointer = snd_sh_dac_pcm_pointer,
+ .copy = snd_sh_dac_pcm_copy,
+ .fill_silence = snd_sh_dac_pcm_silence,
+- .mmap = snd_pcm_lib_mmap_iomem,
+ };
+
+ static int snd_sh_dac_pcm(struct snd_sh_dac *chip, int device)
+diff --git a/sound/soc/amd/ps/pci-ps.c b/sound/soc/amd/ps/pci-ps.c
+index c72d666d51bdf4..5c4a0be7a78892 100644
+--- a/sound/soc/amd/ps/pci-ps.c
++++ b/sound/soc/amd/ps/pci-ps.c
+@@ -375,11 +375,18 @@ static int get_acp63_device_config(struct pci_dev *pci, struct acp63_dev_data *a
+ {
+ struct acpi_device *pdm_dev;
+ const union acpi_object *obj;
++ acpi_handle handle;
++ acpi_integer dmic_status;
+ u32 config;
+ bool is_dmic_dev = false;
+ bool is_sdw_dev = false;
++ bool wov_en, dmic_en;
+ int ret;
+
++ /* IF WOV entry not found, enable dmic based on acp-audio-device-type entry*/
++ wov_en = true;
++ dmic_en = false;
++
+ config = readl(acp_data->acp63_base + ACP_PIN_CONFIG);
+ switch (config) {
+ case ACP_CONFIG_4:
+@@ -412,10 +419,18 @@ static int get_acp63_device_config(struct pci_dev *pci, struct acp63_dev_data *a
+ if (!acpi_dev_get_property(pdm_dev, "acp-audio-device-type",
+ ACPI_TYPE_INTEGER, &obj) &&
+ obj->integer.value == ACP_DMIC_DEV)
+- is_dmic_dev = true;
++ dmic_en = true;
+ }
++
++ handle = ACPI_HANDLE(&pci->dev);
++ ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status);
++ if (!ACPI_FAILURE(ret))
++ wov_en = dmic_status;
+ }
+
++ if (dmic_en && wov_en)
++ is_dmic_dev = true;
++
+ if (acp_data->is_sdw_config) {
+ ret = acp_scan_sdw_devices(&pci->dev, ACP63_SDW_ADDR);
+ if (!ret && acp_data->info.link_mask)
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index db57292c00ca1e..41042259f2b26e 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -608,7 +608,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233C")
++ DMI_MATCH(DMI_PRODUCT_NAME, "21QB")
+ },
+ /* Note this quirk excludes the CODEC mic */
+ .driver_data = (void *)(SOC_SDW_CODEC_MIC),
+@@ -617,9 +617,26 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "233B")
++ DMI_MATCH(DMI_PRODUCT_NAME, "21QA")
+ },
+- .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ /* Note this quirk excludes the CODEC mic */
++ .driver_data = (void *)(SOC_SDW_CODEC_MIC),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "21Q6")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "21Q7")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
+ },
+
+ /* ArrowLake devices */
+diff --git a/sound/soc/sof/intel/hda-dai.c b/sound/soc/sof/intel/hda-dai.c
+index ac505c7ad34295..82f46ecd94301e 100644
+--- a/sound/soc/sof/intel/hda-dai.c
++++ b/sound/soc/sof/intel/hda-dai.c
+@@ -103,8 +103,10 @@ hda_dai_get_ops(struct snd_pcm_substream *substream, struct snd_soc_dai *cpu_dai
+ return sdai->platform_private;
+ }
+
+-int hda_link_dma_cleanup(struct snd_pcm_substream *substream, struct hdac_ext_stream *hext_stream,
+- struct snd_soc_dai *cpu_dai)
++static int
++hda_link_dma_cleanup(struct snd_pcm_substream *substream,
++ struct hdac_ext_stream *hext_stream,
++ struct snd_soc_dai *cpu_dai, bool release)
+ {
+ const struct hda_dai_widget_dma_ops *ops = hda_dai_get_ops(substream, cpu_dai);
+ struct sof_intel_hda_stream *hda_stream;
+@@ -128,6 +130,17 @@ int hda_link_dma_cleanup(struct snd_pcm_substream *substream, struct hdac_ext_st
+ snd_hdac_ext_bus_link_clear_stream_id(hlink, stream_tag);
+ }
+
++ if (!release) {
++ /*
++ * Force stream reconfiguration without releasing the channel on
++ * subsequent stream restart (without free), including LinkDMA
++ * reset.
++ * The stream is released via hda_dai_hw_free()
++ */
++ hext_stream->link_prepared = 0;
++ return 0;
++ }
++
+ if (ops->release_hext_stream)
+ ops->release_hext_stream(sdev, cpu_dai, substream);
+
+@@ -211,7 +224,7 @@ static int __maybe_unused hda_dai_hw_free(struct snd_pcm_substream *substream,
+ if (!hext_stream)
+ return 0;
+
+- return hda_link_dma_cleanup(substream, hext_stream, cpu_dai);
++ return hda_link_dma_cleanup(substream, hext_stream, cpu_dai, true);
+ }
+
+ static int __maybe_unused hda_dai_hw_params_data(struct snd_pcm_substream *substream,
+@@ -304,7 +317,8 @@ static int __maybe_unused hda_dai_trigger(struct snd_pcm_substream *substream, i
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_STOP:
+ case SNDRV_PCM_TRIGGER_SUSPEND:
+- ret = hda_link_dma_cleanup(substream, hext_stream, dai);
++ ret = hda_link_dma_cleanup(substream, hext_stream, dai,
++ cmd == SNDRV_PCM_TRIGGER_STOP ? false : true);
+ if (ret < 0) {
+ dev_err(sdev->dev, "%s: failed to clean up link DMA\n", __func__);
+ return ret;
+@@ -656,8 +670,7 @@ static int hda_dai_suspend(struct hdac_bus *bus)
+ }
+
+ ret = hda_link_dma_cleanup(hext_stream->link_substream,
+- hext_stream,
+- cpu_dai);
++ hext_stream, cpu_dai, true);
+ if (ret < 0)
+ return ret;
+ }
+diff --git a/sound/soc/sof/intel/hda.h b/sound/soc/sof/intel/hda.h
+index b74a472435b5d2..4a4a0b55f0bc60 100644
+--- a/sound/soc/sof/intel/hda.h
++++ b/sound/soc/sof/intel/hda.h
+@@ -1028,8 +1028,6 @@ const struct hda_dai_widget_dma_ops *
+ hda_select_dai_widget_ops(struct snd_sof_dev *sdev, struct snd_sof_widget *swidget);
+ int hda_dai_config(struct snd_soc_dapm_widget *w, unsigned int flags,
+ struct snd_sof_dai_config_data *data);
+-int hda_link_dma_cleanup(struct snd_pcm_substream *substream, struct hdac_ext_stream *hext_stream,
+- struct snd_soc_dai *cpu_dai);
+
+ static inline struct snd_sof_dev *widget_to_sdev(struct snd_soc_dapm_widget *w)
+ {
+diff --git a/tools/include/uapi/linux/stddef.h b/tools/include/uapi/linux/stddef.h
+index bb6ea517efb511..c53cde425406b7 100644
+--- a/tools/include/uapi/linux/stddef.h
++++ b/tools/include/uapi/linux/stddef.h
+@@ -8,6 +8,13 @@
+ #define __always_inline __inline__
+ #endif
+
++/* Not all C++ standards support type declarations inside an anonymous union */
++#ifndef __cplusplus
++#define __struct_group_tag(TAG) TAG
++#else
++#define __struct_group_tag(TAG)
++#endif
++
+ /**
+ * __struct_group() - Create a mirrored named and anonyomous struct
+ *
+@@ -20,14 +27,14 @@
+ * and size: one anonymous and one named. The former's members can be used
+ * normally without sub-struct naming, and the latter can be used to
+ * reason about the start, end, and size of the group of struct members.
+- * The named struct can also be explicitly tagged for layer reuse, as well
+- * as both having struct attributes appended.
++ * The named struct can also be explicitly tagged for layer reuse (C only),
++ * as well as both having struct attributes appended.
+ */
+ #define __struct_group(TAG, NAME, ATTRS, MEMBERS...) \
+ union { \
+ struct { MEMBERS } ATTRS; \
+- struct TAG { MEMBERS } ATTRS NAME; \
+- }
++ struct __struct_group_tag(TAG) { MEMBERS } ATTRS NAME; \
++ } ATTRS
+
+ /**
+ * __DECLARE_FLEX_ARRAY() - Declare a flexible array usable in a union
+diff --git a/tools/objtool/noreturns.h b/tools/objtool/noreturns.h
+index e7da92489167e9..f98dc0e1c99c4a 100644
+--- a/tools/objtool/noreturns.h
++++ b/tools/objtool/noreturns.h
+@@ -20,6 +20,7 @@ NORETURN(__x64_sys_exit_group)
+ NORETURN(arch_cpu_idle_dead)
+ NORETURN(bch2_trans_in_restart_error)
+ NORETURN(bch2_trans_restart_error)
++NORETURN(bch2_trans_unlocked_error)
+ NORETURN(cpu_bringup_and_idle)
+ NORETURN(cpu_startup_entry)
+ NORETURN(do_exit)
+diff --git a/tools/testing/selftests/bpf/progs/dynptr_fail.c b/tools/testing/selftests/bpf/progs/dynptr_fail.c
+index 8f36c9de759152..dfd817d0348c47 100644
+--- a/tools/testing/selftests/bpf/progs/dynptr_fail.c
++++ b/tools/testing/selftests/bpf/progs/dynptr_fail.c
+@@ -149,7 +149,7 @@ int ringbuf_release_uninit_dynptr(void *ctx)
+
+ /* A dynptr can't be used after it has been invalidated */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #3")
++__failure __msg("Expected an initialized dynptr as arg #2")
+ int use_after_invalid(void *ctx)
+ {
+ struct bpf_dynptr ptr;
+@@ -428,7 +428,7 @@ int invalid_helper2(void *ctx)
+
+ /* A bpf_dynptr is invalidated if it's been written into */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #1")
++__failure __msg("Expected an initialized dynptr as arg #0")
+ int invalid_write1(void *ctx)
+ {
+ struct bpf_dynptr ptr;
+@@ -1407,7 +1407,7 @@ int invalid_slice_rdwr_rdonly(struct __sk_buff *skb)
+
+ /* bpf_dynptr_adjust can only be called on initialized dynptrs */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #1")
++__failure __msg("Expected an initialized dynptr as arg #0")
+ int dynptr_adjust_invalid(void *ctx)
+ {
+ struct bpf_dynptr ptr = {};
+@@ -1420,7 +1420,7 @@ int dynptr_adjust_invalid(void *ctx)
+
+ /* bpf_dynptr_is_null can only be called on initialized dynptrs */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #1")
++__failure __msg("Expected an initialized dynptr as arg #0")
+ int dynptr_is_null_invalid(void *ctx)
+ {
+ struct bpf_dynptr ptr = {};
+@@ -1433,7 +1433,7 @@ int dynptr_is_null_invalid(void *ctx)
+
+ /* bpf_dynptr_is_rdonly can only be called on initialized dynptrs */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #1")
++__failure __msg("Expected an initialized dynptr as arg #0")
+ int dynptr_is_rdonly_invalid(void *ctx)
+ {
+ struct bpf_dynptr ptr = {};
+@@ -1446,7 +1446,7 @@ int dynptr_is_rdonly_invalid(void *ctx)
+
+ /* bpf_dynptr_size can only be called on initialized dynptrs */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #1")
++__failure __msg("Expected an initialized dynptr as arg #0")
+ int dynptr_size_invalid(void *ctx)
+ {
+ struct bpf_dynptr ptr = {};
+@@ -1459,7 +1459,7 @@ int dynptr_size_invalid(void *ctx)
+
+ /* Only initialized dynptrs can be cloned */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #1")
++__failure __msg("Expected an initialized dynptr as arg #0")
+ int clone_invalid1(void *ctx)
+ {
+ struct bpf_dynptr ptr1 = {};
+@@ -1493,7 +1493,7 @@ int clone_invalid2(struct xdp_md *xdp)
+
+ /* Invalidating a dynptr should invalidate its clones */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #3")
++__failure __msg("Expected an initialized dynptr as arg #2")
+ int clone_invalidate1(void *ctx)
+ {
+ struct bpf_dynptr clone;
+@@ -1514,7 +1514,7 @@ int clone_invalidate1(void *ctx)
+
+ /* Invalidating a dynptr should invalidate its parent */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #3")
++__failure __msg("Expected an initialized dynptr as arg #2")
+ int clone_invalidate2(void *ctx)
+ {
+ struct bpf_dynptr ptr;
+@@ -1535,7 +1535,7 @@ int clone_invalidate2(void *ctx)
+
+ /* Invalidating a dynptr should invalidate its siblings */
+ SEC("?raw_tp")
+-__failure __msg("Expected an initialized dynptr as arg #3")
++__failure __msg("Expected an initialized dynptr as arg #2")
+ int clone_invalidate3(void *ctx)
+ {
+ struct bpf_dynptr ptr;
+@@ -1723,7 +1723,7 @@ __noinline long global_call_bpf_dynptr(const struct bpf_dynptr *dynptr)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("arg#1 expected pointer to stack or const struct bpf_dynptr")
++__failure __msg("arg#0 expected pointer to stack or const struct bpf_dynptr")
+ int test_dynptr_reg_type(void *ctx)
+ {
+ struct task_struct *current = NULL;
+diff --git a/tools/testing/selftests/bpf/progs/iters_state_safety.c b/tools/testing/selftests/bpf/progs/iters_state_safety.c
+index d47e59aba6de35..f41257eadbb258 100644
+--- a/tools/testing/selftests/bpf/progs/iters_state_safety.c
++++ b/tools/testing/selftests/bpf/progs/iters_state_safety.c
+@@ -73,7 +73,7 @@ int create_and_forget_to_destroy_fail(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected an initialized iter_num as arg #1")
++__failure __msg("expected an initialized iter_num as arg #0")
+ int destroy_without_creating_fail(void *ctx)
+ {
+ /* init with zeros to stop verifier complaining about uninit stack */
+@@ -91,7 +91,7 @@ int destroy_without_creating_fail(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected an initialized iter_num as arg #1")
++__failure __msg("expected an initialized iter_num as arg #0")
+ int compromise_iter_w_direct_write_fail(void *ctx)
+ {
+ struct bpf_iter_num iter;
+@@ -143,7 +143,7 @@ int compromise_iter_w_direct_write_and_skip_destroy_fail(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected an initialized iter_num as arg #1")
++__failure __msg("expected an initialized iter_num as arg #0")
+ int compromise_iter_w_helper_write_fail(void *ctx)
+ {
+ struct bpf_iter_num iter;
+@@ -230,7 +230,7 @@ int valid_stack_reuse(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected uninitialized iter_num as arg #1")
++__failure __msg("expected uninitialized iter_num as arg #0")
+ int double_create_fail(void *ctx)
+ {
+ struct bpf_iter_num iter;
+@@ -258,7 +258,7 @@ int double_create_fail(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected an initialized iter_num as arg #1")
++__failure __msg("expected an initialized iter_num as arg #0")
+ int double_destroy_fail(void *ctx)
+ {
+ struct bpf_iter_num iter;
+@@ -284,7 +284,7 @@ int double_destroy_fail(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected an initialized iter_num as arg #1")
++__failure __msg("expected an initialized iter_num as arg #0")
+ int next_without_new_fail(void *ctx)
+ {
+ struct bpf_iter_num iter;
+@@ -305,7 +305,7 @@ int next_without_new_fail(void *ctx)
+ }
+
+ SEC("?raw_tp")
+-__failure __msg("expected an initialized iter_num as arg #1")
++__failure __msg("expected an initialized iter_num as arg #0")
+ int next_after_destroy_fail(void *ctx)
+ {
+ struct bpf_iter_num iter;
+diff --git a/tools/testing/selftests/bpf/progs/iters_testmod_seq.c b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
+index 4a176e6aede897..6543d5b6e0a976 100644
+--- a/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
++++ b/tools/testing/selftests/bpf/progs/iters_testmod_seq.c
+@@ -79,7 +79,7 @@ int testmod_seq_truncated(const void *ctx)
+
+ SEC("?raw_tp")
+ __failure
+-__msg("expected an initialized iter_testmod_seq as arg #2")
++__msg("expected an initialized iter_testmod_seq as arg #1")
+ int testmod_seq_getter_before_bad(const void *ctx)
+ {
+ struct bpf_iter_testmod_seq it;
+@@ -89,7 +89,7 @@ int testmod_seq_getter_before_bad(const void *ctx)
+
+ SEC("?raw_tp")
+ __failure
+-__msg("expected an initialized iter_testmod_seq as arg #2")
++__msg("expected an initialized iter_testmod_seq as arg #1")
+ int testmod_seq_getter_after_bad(const void *ctx)
+ {
+ struct bpf_iter_testmod_seq it;
+diff --git a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
+index e68667aec6a652..cd4d752bd089ca 100644
+--- a/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
++++ b/tools/testing/selftests/bpf/progs/test_kfunc_dynptr_param.c
+@@ -45,7 +45,7 @@ int BPF_PROG(not_valid_dynptr, int cmd, union bpf_attr *attr, unsigned int size)
+ }
+
+ SEC("?lsm.s/bpf")
+-__failure __msg("arg#1 expected pointer to stack or const struct bpf_dynptr")
++__failure __msg("arg#0 expected pointer to stack or const struct bpf_dynptr")
+ int BPF_PROG(not_ptr_to_stack, int cmd, union bpf_attr *attr, unsigned int size)
+ {
+ unsigned long val = 0;
+diff --git a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
+index a7a6ae6c162fe0..8bcddadfc4daed 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
++++ b/tools/testing/selftests/bpf/progs/verifier_bits_iter.c
+@@ -32,7 +32,7 @@ int BPF_PROG(no_destroy, struct bpf_iter_meta *meta, struct cgroup *cgrp)
+
+ SEC("iter/cgroup")
+ __description("uninitialized iter in ->next()")
+-__failure __msg("expected an initialized iter_bits as arg #1")
++__failure __msg("expected an initialized iter_bits as arg #0")
+ int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
+ {
+ struct bpf_iter_bits it = {};
+@@ -43,7 +43,7 @@ int BPF_PROG(next_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
+
+ SEC("iter/cgroup")
+ __description("uninitialized iter in ->destroy()")
+-__failure __msg("expected an initialized iter_bits as arg #1")
++__failure __msg("expected an initialized iter_bits as arg #0")
+ int BPF_PROG(destroy_uninit, struct bpf_iter_meta *meta, struct cgroup *cgrp)
+ {
+ struct bpf_iter_bits it = {};
+diff --git a/tools/testing/selftests/bpf/trace_helpers.c b/tools/testing/selftests/bpf/trace_helpers.c
+index 2d742fdac6b977..81943c6254e6bc 100644
+--- a/tools/testing/selftests/bpf/trace_helpers.c
++++ b/tools/testing/selftests/bpf/trace_helpers.c
+@@ -293,6 +293,10 @@ static int procmap_query(int fd, const void *addr, __u32 query_flags, size_t *st
+ return 0;
+ }
+ #else
++# ifndef PROCMAP_QUERY_VMA_EXECUTABLE
++# define PROCMAP_QUERY_VMA_EXECUTABLE 0x04
++# endif
++
+ static int procmap_query(int fd, const void *addr, __u32 query_flags, size_t *start, size_t *offset, int *flags)
+ {
+ return -EOPNOTSUPP;
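
The trace_helpers.c hunk is a build fix rather than a runtime change: when the
toolchain's UAPI headers predate PROCMAP_QUERY, the stub procmap_query() already
returns -EOPNOTSUPP, but callers still reference PROCMAP_QUERY_VMA_EXECUTABLE, so a
guarded fallback definition keeps the file compiling. The same pattern in isolation,
with a hypothetical FLAG_FROM_NEWER_UAPI standing in for the real constant:

  /* compile against older headers; the value must mirror the kernel UAPI */
  #ifndef FLAG_FROM_NEWER_UAPI
  #define FLAG_FROM_NEWER_UAPI 0x04
  #endif
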
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index ae55cd79128336..2cc3ffcbc983d3 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -280,6 +280,21 @@ static void timerlat_hist_header(struct osnoise_tool *tool)
+ trace_seq_reset(s);
+ }
+
++/*
++ * format_summary_value - format a line of summary value (min, max or avg)
++ * of hist data
++ */
++static void format_summary_value(struct trace_seq *seq,
++ int count,
++ unsigned long long val,
++ bool avg)
++{
++ if (count)
++ trace_seq_printf(seq, "%9llu ", avg ? val / count : val);
++ else
++ trace_seq_printf(seq, "%9c ", '-');
++}
++
+ /*
+ * timerlat_print_summary - print the summary of the hist data to the output
+ */
+@@ -327,29 +342,23 @@ timerlat_print_summary(struct timerlat_hist_params *params,
+ if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
+ continue;
+
+- if (!params->no_irq) {
+- if (data->hist[cpu].irq_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].min_irq);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (!params->no_irq)
++ format_summary_value(trace->seq,
++ data->hist[cpu].irq_count,
++ data->hist[cpu].min_irq,
++ false);
+
+- if (!params->no_thread) {
+- if (data->hist[cpu].thread_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].min_thread);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (!params->no_thread)
++ format_summary_value(trace->seq,
++ data->hist[cpu].thread_count,
++ data->hist[cpu].min_thread,
++ false);
+
+- if (params->user_hist) {
+- if (data->hist[cpu].user_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].min_user);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (params->user_hist)
++ format_summary_value(trace->seq,
++ data->hist[cpu].user_count,
++ data->hist[cpu].min_user,
++ false);
+ }
+ trace_seq_printf(trace->seq, "\n");
+
+@@ -363,29 +372,23 @@ timerlat_print_summary(struct timerlat_hist_params *params,
+ if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
+ continue;
+
+- if (!params->no_irq) {
+- if (data->hist[cpu].irq_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].sum_irq / data->hist[cpu].irq_count);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (!params->no_irq)
++ format_summary_value(trace->seq,
++ data->hist[cpu].irq_count,
++ data->hist[cpu].sum_irq,
++ true);
+
+- if (!params->no_thread) {
+- if (data->hist[cpu].thread_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].sum_thread / data->hist[cpu].thread_count);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (!params->no_thread)
++ format_summary_value(trace->seq,
++ data->hist[cpu].thread_count,
++ data->hist[cpu].sum_thread,
++ true);
+
+- if (params->user_hist) {
+- if (data->hist[cpu].user_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].sum_user / data->hist[cpu].user_count);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (params->user_hist)
++ format_summary_value(trace->seq,
++ data->hist[cpu].user_count,
++ data->hist[cpu].sum_user,
++ true);
+ }
+ trace_seq_printf(trace->seq, "\n");
+
+@@ -399,29 +402,23 @@ timerlat_print_summary(struct timerlat_hist_params *params,
+ if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
+ continue;
+
+- if (!params->no_irq) {
+- if (data->hist[cpu].irq_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].max_irq);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (!params->no_irq)
++ format_summary_value(trace->seq,
++ data->hist[cpu].irq_count,
++ data->hist[cpu].max_irq,
++ false);
+
+- if (!params->no_thread) {
+- if (data->hist[cpu].thread_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].max_thread);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (!params->no_thread)
++ format_summary_value(trace->seq,
++ data->hist[cpu].thread_count,
++ data->hist[cpu].max_thread,
++ false);
+
+- if (params->user_hist) {
+- if (data->hist[cpu].user_count)
+- trace_seq_printf(trace->seq, "%9llu ",
+- data->hist[cpu].max_user);
+- else
+- trace_seq_printf(trace->seq, " - ");
+- }
++ if (params->user_hist)
++ format_summary_value(trace->seq,
++ data->hist[cpu].user_count,
++ data->hist[cpu].max_user,
++ false);
+ }
+ trace_seq_printf(trace->seq, "\n");
+ trace_seq_do_printf(trace->seq);
+@@ -505,16 +502,22 @@ timerlat_print_stats_all(struct timerlat_hist_params *params,
+ trace_seq_printf(trace->seq, "min: ");
+
+ if (!params->no_irq)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.min_irq);
++ format_summary_value(trace->seq,
++ sum.irq_count,
++ sum.min_irq,
++ false);
+
+ if (!params->no_thread)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.min_thread);
++ format_summary_value(trace->seq,
++ sum.thread_count,
++ sum.min_thread,
++ false);
+
+ if (params->user_hist)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.min_user);
++ format_summary_value(trace->seq,
++ sum.user_count,
++ sum.min_user,
++ false);
+
+ trace_seq_printf(trace->seq, "\n");
+
+@@ -522,16 +525,22 @@ timerlat_print_stats_all(struct timerlat_hist_params *params,
+ trace_seq_printf(trace->seq, "avg: ");
+
+ if (!params->no_irq)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.sum_irq / sum.irq_count);
++ format_summary_value(trace->seq,
++ sum.irq_count,
++ sum.sum_irq,
++ true);
+
+ if (!params->no_thread)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.sum_thread / sum.thread_count);
++ format_summary_value(trace->seq,
++ sum.thread_count,
++ sum.sum_thread,
++ true);
+
+ if (params->user_hist)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.sum_user / sum.user_count);
++ format_summary_value(trace->seq,
++ sum.user_count,
++ sum.sum_user,
++ true);
+
+ trace_seq_printf(trace->seq, "\n");
+
+@@ -539,16 +548,22 @@ timerlat_print_stats_all(struct timerlat_hist_params *params,
+ trace_seq_printf(trace->seq, "max: ");
+
+ if (!params->no_irq)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.max_irq);
++ format_summary_value(trace->seq,
++ sum.irq_count,
++ sum.max_irq,
++ false);
+
+ if (!params->no_thread)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.max_thread);
++ format_summary_value(trace->seq,
++ sum.thread_count,
++ sum.max_thread,
++ false);
+
+ if (params->user_hist)
+- trace_seq_printf(trace->seq, "%9llu ",
+- sum.max_user);
++ format_summary_value(trace->seq,
++ sum.user_count,
++ sum.max_user,
++ false);
+
+ trace_seq_printf(trace->seq, "\n");
+ trace_seq_do_printf(trace->seq);
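
The timerlat_hist.c change is a pure refactor plus one behavioral fix: nine
near-identical if/else trace_seq_printf() blocks collapse into
format_summary_value(), which divides by the sample count only for averages and
prints a right-aligned '-' when no samples were recorded; the all-CPU min/avg/max
paths, which previously printed unconditionally, now get the zero-count handling
too. A standalone sketch of the helper's behavior, with trace_seq_printf() swapped
for plain printf() for illustration:

  #include <stdio.h>
  #include <stdbool.h>

  /* print one 9-column summary cell: val (or val/count for averages),
   * or a '-' placeholder when count is zero */
  static void format_summary_value(int count, unsigned long long val,
                                   bool avg)
  {
          if (count)
                  printf("%9llu ", avg ? val / count : val);
          else
                  printf("%9c ", '-');
  }

  int main(void)
  {
          format_summary_value(4, 100, true);    /* "       25 " */
          format_summary_value(0,   0, false);   /* "        - " */
          printf("\n");
          return 0;
  }
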
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-01-09 13:51 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-01-09 13:51 UTC (permalink / raw
To: gentoo-commits
commit: dce11bba7397f8cff2d315b9195b222824bbeed4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 9 13:51:24 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 9 13:51:24 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=dce11bba
Linux patch 6.12.9
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1008_linux-6.12.9.patch | 6461 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6465 insertions(+)
diff --git a/0000_README b/0000_README
index 483a9fde..29d9187b 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch: 1007_linux-6.12.8.patch
From: https://www.kernel.org
Desc: Linux 6.12.8
+Patch: 1008_linux-6.12.9.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.9
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1008_linux-6.12.9.patch b/1008_linux-6.12.9.patch
new file mode 100644
index 00000000..9db1b6b3
--- /dev/null
+++ b/1008_linux-6.12.9.patch
@@ -0,0 +1,6461 @@
+diff --git a/Documentation/admin-guide/laptops/thinkpad-acpi.rst b/Documentation/admin-guide/laptops/thinkpad-acpi.rst
+index 7f674a6cfa8a7b..4ab0fef7d440d1 100644
+--- a/Documentation/admin-guide/laptops/thinkpad-acpi.rst
++++ b/Documentation/admin-guide/laptops/thinkpad-acpi.rst
+@@ -445,8 +445,10 @@ event code Key Notes
+ 0x1008 0x07 FN+F8 IBM: toggle screen expand
+ Lenovo: configure UltraNav,
+ or toggle screen expand.
+- On newer platforms (2024+)
+- replaced by 0x131f (see below)
++ On 2024 platforms replaced by
++ 0x131f (see below) and on newer
++ platforms (2025 +) keycode is
++ replaced by 0x1401 (see below).
+
+ 0x1009 0x08 FN+F9 -
+
+@@ -506,9 +508,11 @@ event code Key Notes
+
+ 0x1019 0x18 unknown
+
+-0x131f ... FN+F8 Platform Mode change.
++0x131f ... FN+F8 Platform Mode change (2024 systems).
+ Implemented in driver.
+
++0x1401 ... FN+F8 Platform Mode change (2025 + systems).
++ Implemented in driver.
+ ... ... ...
+
+ 0x1020 0x1F unknown
+diff --git a/Documentation/devicetree/bindings/display/bridge/adi,adv7533.yaml b/Documentation/devicetree/bindings/display/bridge/adi,adv7533.yaml
+index df20a3c9c74479..ec89115c74e4d3 100644
+--- a/Documentation/devicetree/bindings/display/bridge/adi,adv7533.yaml
++++ b/Documentation/devicetree/bindings/display/bridge/adi,adv7533.yaml
+@@ -90,7 +90,7 @@ properties:
+ adi,dsi-lanes:
+ description: Number of DSI data lanes connected to the DSI host.
+ $ref: /schemas/types.yaml#/definitions/uint32
+- enum: [ 1, 2, 3, 4 ]
++ enum: [ 2, 3, 4 ]
+
+ "#sound-dai-cells":
+ const: 0
+diff --git a/Makefile b/Makefile
+index 8a10105c2539cf..80151f53d8ee0f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
+index 5b248814204147..69c6e71fa1e6ba 100644
+--- a/arch/arc/Kconfig
++++ b/arch/arc/Kconfig
+@@ -297,7 +297,6 @@ config ARC_PAGE_SIZE_16K
+ config ARC_PAGE_SIZE_4K
+ bool "4KB"
+ select HAVE_PAGE_SIZE_4KB
+- depends on ARC_MMU_V3 || ARC_MMU_V4
+
+ endchoice
+
+@@ -474,7 +473,8 @@ config HIGHMEM
+
+ config ARC_HAS_PAE40
+ bool "Support for the 40-bit Physical Address Extension"
+- depends on ISA_ARCV2
++ depends on MMU_V4
++ depends on !ARC_PAGE_SIZE_4K
+ select HIGHMEM
+ select PHYS_ADDR_T_64BIT
+ help
+diff --git a/arch/arc/Makefile b/arch/arc/Makefile
+index 2390dd042e3636..fb98478ed1ab09 100644
+--- a/arch/arc/Makefile
++++ b/arch/arc/Makefile
+@@ -6,7 +6,7 @@
+ KBUILD_DEFCONFIG := haps_hs_smp_defconfig
+
+ ifeq ($(CROSS_COMPILE),)
+-CROSS_COMPILE := $(call cc-cross-prefix, arc-linux- arceb-linux-)
++CROSS_COMPILE := $(call cc-cross-prefix, arc-linux- arceb-linux- arc-linux-gnu-)
+ endif
+
+ cflags-y += -fno-common -pipe -fno-builtin -mmedium-calls -D__linux__
+diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
+index 58045c89834045..76f43db0890fcd 100644
+--- a/arch/arc/include/asm/cmpxchg.h
++++ b/arch/arc/include/asm/cmpxchg.h
+@@ -48,7 +48,7 @@
+ \
+ switch(sizeof((_p_))) { \
+ case 1: \
+- _prev_ = (__typeof__(*(ptr)))cmpxchg_emu_u8((volatile u8 *)_p_, (uintptr_t)_o_, (uintptr_t)_n_); \
++ _prev_ = (__typeof__(*(ptr)))cmpxchg_emu_u8((volatile u8 *__force)_p_, (uintptr_t)_o_, (uintptr_t)_n_); \
+ break; \
+ case 4: \
+ _prev_ = __cmpxchg(_p_, _o_, _n_); \
+diff --git a/arch/arc/net/bpf_jit_arcv2.c b/arch/arc/net/bpf_jit_arcv2.c
+index 4458e409ca0a84..6d989b6d88c69b 100644
+--- a/arch/arc/net/bpf_jit_arcv2.c
++++ b/arch/arc/net/bpf_jit_arcv2.c
+@@ -2916,7 +2916,7 @@ bool check_jmp_32(u32 curr_off, u32 targ_off, u8 cond)
+ addendum = (cond == ARC_CC_AL) ? 0 : INSN_len_normal;
+ disp = get_displacement(curr_off + addendum, targ_off);
+
+- if (ARC_CC_AL)
++ if (cond == ARC_CC_AL)
+ return is_valid_far_disp(disp);
+ else
+ return is_valid_near_disp(disp);
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 28b4312f25631c..f558be868a50b6 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -7067,6 +7067,7 @@ __init int intel_pmu_init(void)
+
+ case INTEL_METEORLAKE:
+ case INTEL_METEORLAKE_L:
++ case INTEL_ARROWLAKE_U:
+ intel_pmu_init_hybrid(hybrid_big_small);
+
+ x86_pmu.pebs_latency_data = cmt_latency_data;
+diff --git a/block/blk.h b/block/blk.h
+index 88fab6a81701ed..1426f9c281973e 100644
+--- a/block/blk.h
++++ b/block/blk.h
+@@ -469,11 +469,6 @@ static inline bool bio_zone_write_plugging(struct bio *bio)
+ {
+ return bio_flagged(bio, BIO_ZONE_WRITE_PLUGGING);
+ }
+-static inline bool bio_is_zone_append(struct bio *bio)
+-{
+- return bio_op(bio) == REQ_OP_ZONE_APPEND ||
+- bio_flagged(bio, BIO_EMULATES_ZONE_APPEND);
+-}
+ void blk_zone_write_plug_bio_merged(struct bio *bio);
+ void blk_zone_write_plug_init_request(struct request *rq);
+ static inline void blk_zone_update_request_bio(struct request *rq,
+@@ -522,10 +517,6 @@ static inline bool bio_zone_write_plugging(struct bio *bio)
+ {
+ return false;
+ }
+-static inline bool bio_is_zone_append(struct bio *bio)
+-{
+- return false;
+-}
+ static inline void blk_zone_write_plug_bio_merged(struct bio *bio)
+ {
+ }
+diff --git a/drivers/clk/imx/clk-imx8mp-audiomix.c b/drivers/clk/imx/clk-imx8mp-audiomix.c
+index b2cb157703c57f..c409fc7e061869 100644
+--- a/drivers/clk/imx/clk-imx8mp-audiomix.c
++++ b/drivers/clk/imx/clk-imx8mp-audiomix.c
+@@ -278,7 +278,8 @@ static int clk_imx8mp_audiomix_reset_controller_register(struct device *dev,
+
+ #else /* !CONFIG_RESET_CONTROLLER */
+
+-static int clk_imx8mp_audiomix_reset_controller_register(struct clk_imx8mp_audiomix_priv *priv)
++static int clk_imx8mp_audiomix_reset_controller_register(struct device *dev,
++ struct clk_imx8mp_audiomix_priv *priv)
+ {
+ return 0;
+ }
+diff --git a/drivers/clk/thead/clk-th1520-ap.c b/drivers/clk/thead/clk-th1520-ap.c
+index 17e32ae08720cb..1015fab9525157 100644
+--- a/drivers/clk/thead/clk-th1520-ap.c
++++ b/drivers/clk/thead/clk-th1520-ap.c
+@@ -779,6 +779,13 @@ static struct ccu_div dpu1_clk = {
+ },
+ };
+
++static CLK_FIXED_FACTOR_HW(emmc_sdio_ref_clk, "emmc-sdio-ref",
++ &video_pll_clk.common.hw, 4, 1, 0);
++
++static const struct clk_parent_data emmc_sdio_ref_clk_pd[] = {
++ { .hw = &emmc_sdio_ref_clk.hw },
++};
++
+ static CCU_GATE(CLK_BROM, brom_clk, "brom", ahb2_cpusys_hclk_pd, 0x100, BIT(4), 0);
+ static CCU_GATE(CLK_BMU, bmu_clk, "bmu", axi4_cpusys2_aclk_pd, 0x100, BIT(5), 0);
+ static CCU_GATE(CLK_AON2CPU_A2X, aon2cpu_a2x_clk, "aon2cpu-a2x", axi4_cpusys2_aclk_pd,
+@@ -798,7 +805,7 @@ static CCU_GATE(CLK_PERISYS_APB4_HCLK, perisys_apb4_hclk, "perisys-apb4-hclk", p
+ 0x150, BIT(12), 0);
+ static CCU_GATE(CLK_NPU_AXI, npu_axi_clk, "npu-axi", axi_aclk_pd, 0x1c8, BIT(5), 0);
+ static CCU_GATE(CLK_CPU2VP, cpu2vp_clk, "cpu2vp", axi_aclk_pd, 0x1e0, BIT(13), 0);
+-static CCU_GATE(CLK_EMMC_SDIO, emmc_sdio_clk, "emmc-sdio", video_pll_clk_pd, 0x204, BIT(30), 0);
++static CCU_GATE(CLK_EMMC_SDIO, emmc_sdio_clk, "emmc-sdio", emmc_sdio_ref_clk_pd, 0x204, BIT(30), 0);
+ static CCU_GATE(CLK_GMAC1, gmac1_clk, "gmac1", gmac_pll_clk_pd, 0x204, BIT(26), 0);
+ static CCU_GATE(CLK_PADCTRL1, padctrl1_clk, "padctrl1", perisys_apb_pclk_pd, 0x204, BIT(24), 0);
+ static CCU_GATE(CLK_DSMART, dsmart_clk, "dsmart", perisys_apb_pclk_pd, 0x204, BIT(23), 0);
+@@ -1059,6 +1066,10 @@ static int th1520_clk_probe(struct platform_device *pdev)
+ return ret;
+ priv->hws[CLK_PLL_GMAC_100M] = &gmac_pll_clk_100m.hw;
+
++ ret = devm_clk_hw_register(dev, &emmc_sdio_ref_clk.hw);
++ if (ret)
++ return ret;
++
+ ret = devm_of_clk_add_hw_provider(dev, of_clk_hw_onecell_get, priv);
+ if (ret)
+ return ret;
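
In the th1520 hunk, CLK_FIXED_FACTOR_HW(..., 4, 1, 0) declares a fixed-factor clock
whose rate is parent * mult / div, i.e. a quarter of the video PLL rate; the
eMMC/SDIO gate is then reparented to that reference instead of to the PLL itself,
and the new hw must be registered explicitly in probe because it sits outside the
driver's main clock table. The declaration pattern in isolation, for a hypothetical
half-rate child of a foo_pll_clk:

  /* rate = parent * 1 / 2; trailing arguments are (div, mult, flags) */
  static CLK_FIXED_FACTOR_HW(foo_half_clk, "foo-half",
                             &foo_pll_clk.common.hw, 2, 1, 0);
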
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 51904906545e59..45e28726e148e9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3721,8 +3721,12 @@ static int amdgpu_device_ip_resume_phase3(struct amdgpu_device *adev)
+ continue;
+ if (adev->ip_blocks[i].version->type == AMD_IP_BLOCK_TYPE_DCE) {
+ r = adev->ip_blocks[i].version->funcs->resume(adev);
+- if (r)
++ if (r) {
++ DRM_ERROR("resume of IP block <%s> failed %d\n",
++ adev->ip_blocks[i].version->funcs->name, r);
+ return r;
++ }
++ adev->ip_blocks[i].status.hw = true;
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+index c100845409f794..ffdb966c4127ee 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+@@ -45,6 +45,8 @@ MODULE_FIRMWARE("amdgpu/gc_9_4_3_mec.bin");
+ MODULE_FIRMWARE("amdgpu/gc_9_4_4_mec.bin");
+ MODULE_FIRMWARE("amdgpu/gc_9_4_3_rlc.bin");
+ MODULE_FIRMWARE("amdgpu/gc_9_4_4_rlc.bin");
++MODULE_FIRMWARE("amdgpu/gc_9_4_3_sjt_mec.bin");
++MODULE_FIRMWARE("amdgpu/gc_9_4_4_sjt_mec.bin");
+
+ #define GFX9_MEC_HPD_SIZE 4096
+ #define RLCG_UCODE_LOADING_START_ADDRESS 0x00002000L
+@@ -574,8 +576,12 @@ static int gfx_v9_4_3_init_cp_compute_microcode(struct amdgpu_device *adev,
+ {
+ int err;
+
+- err = amdgpu_ucode_request(adev, &adev->gfx.mec_fw,
+- "amdgpu/%s_mec.bin", chip_name);
++ if (amdgpu_sriov_vf(adev))
++ err = amdgpu_ucode_request(adev, &adev->gfx.mec_fw,
++ "amdgpu/%s_sjt_mec.bin", chip_name);
++ else
++ err = amdgpu_ucode_request(adev, &adev->gfx.mec_fw,
++ "amdgpu/%s_mec.bin", chip_name);
+ if (err)
+ goto out;
+ amdgpu_gfx_cp_init_microcode(adev, AMDGPU_UCODE_ID_CP_MEC1);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+index 8ee3d07ffbdfa2..f31e9fbf634a0f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_migrate.c
+@@ -306,7 +306,7 @@ svm_migrate_copy_to_vram(struct kfd_node *node, struct svm_range *prange,
+ spage = migrate_pfn_to_page(migrate->src[i]);
+ if (spage && !is_zone_device_page(spage)) {
+ src[i] = dma_map_page(dev, spage, 0, PAGE_SIZE,
+- DMA_TO_DEVICE);
++ DMA_BIDIRECTIONAL);
+ r = dma_mapping_error(dev, src[i]);
+ if (r) {
+ dev_err(dev, "%s: fail %d dma_map_page\n",
+@@ -630,7 +630,7 @@ svm_migrate_copy_to_ram(struct amdgpu_device *adev, struct svm_range *prange,
+ goto out_oom;
+ }
+
+- dst[i] = dma_map_page(dev, dpage, 0, PAGE_SIZE, DMA_FROM_DEVICE);
++ dst[i] = dma_map_page(dev, dpage, 0, PAGE_SIZE, DMA_BIDIRECTIONAL);
+ r = dma_mapping_error(dev, dst[i]);
+ if (r) {
+ dev_err(adev->dev, "%s: fail %d dma_map_page\n", __func__, r);
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c b/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
+index 61f4a38e7d2bf6..8f786592143b6c 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_audio.c
+@@ -153,7 +153,16 @@ static int adv7511_hdmi_hw_params(struct device *dev, void *data,
+ ADV7511_AUDIO_CFG3_LEN_MASK, len);
+ regmap_update_bits(adv7511->regmap, ADV7511_REG_I2C_FREQ_ID_CFG,
+ ADV7511_I2C_FREQ_ID_CFG_RATE_MASK, rate << 4);
+- regmap_write(adv7511->regmap, 0x73, 0x1);
++
++ /* send current Audio infoframe values while updating */
++ regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE,
++ BIT(5), BIT(5));
++
++ regmap_write(adv7511->regmap, ADV7511_REG_AUDIO_INFOFRAME(0), 0x1);
++
++ /* use Audio infoframe updated info */
++ regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE,
++ BIT(5), 0);
+
+ return 0;
+ }
+@@ -184,8 +193,9 @@ static int audio_startup(struct device *dev, void *data)
+ regmap_update_bits(adv7511->regmap, ADV7511_REG_GC(0),
+ BIT(7) | BIT(6), BIT(7));
+ /* use Audio infoframe updated info */
+- regmap_update_bits(adv7511->regmap, ADV7511_REG_GC(1),
++ regmap_update_bits(adv7511->regmap, ADV7511_REG_INFOFRAME_UPDATE,
+ BIT(5), 0);
++
+ /* enable SPDIF receiver */
+ if (adv7511->audio_source == ADV7511_AUDIO_SOURCE_SPDIF)
+ regmap_update_bits(adv7511->regmap, ADV7511_REG_AUDIO_CONFIG,
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+index eb5919b382635e..a13b3d8ab6ac60 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
+@@ -1241,8 +1241,10 @@ static int adv7511_probe(struct i2c_client *i2c)
+ return ret;
+
+ ret = adv7511_init_regulators(adv7511);
+- if (ret)
+- return dev_err_probe(dev, ret, "failed to init regulators\n");
++ if (ret) {
++ dev_err_probe(dev, ret, "failed to init regulators\n");
++ goto err_of_node_put;
++ }
+
+ /*
+ * The power down GPIO is optional. If present, toggle it from active to
+@@ -1363,6 +1365,8 @@ static int adv7511_probe(struct i2c_client *i2c)
+ i2c_unregister_device(adv7511->i2c_edid);
+ uninit_regulators:
+ adv7511_uninit_regulators(adv7511);
++err_of_node_put:
++ of_node_put(adv7511->host_node);
+
+ return ret;
+ }
+@@ -1371,6 +1375,8 @@ static void adv7511_remove(struct i2c_client *i2c)
+ {
+ struct adv7511 *adv7511 = i2c_get_clientdata(i2c);
+
++ of_node_put(adv7511->host_node);
++
+ adv7511_uninit_regulators(adv7511);
+
+ drm_bridge_remove(&adv7511->bridge);
+diff --git a/drivers/gpu/drm/bridge/adv7511/adv7533.c b/drivers/gpu/drm/bridge/adv7511/adv7533.c
+index 4481489aaf5ebf..122ad91e8a3293 100644
+--- a/drivers/gpu/drm/bridge/adv7511/adv7533.c
++++ b/drivers/gpu/drm/bridge/adv7511/adv7533.c
+@@ -172,7 +172,7 @@ int adv7533_parse_dt(struct device_node *np, struct adv7511 *adv)
+
+ of_property_read_u32(np, "adi,dsi-lanes", &num_lanes);
+
+- if (num_lanes < 1 || num_lanes > 4)
++ if (num_lanes < 2 || num_lanes > 4)
+ return -EINVAL;
+
+ adv->num_dsi_lanes = num_lanes;
+@@ -181,8 +181,6 @@ int adv7533_parse_dt(struct device_node *np, struct adv7511 *adv)
+ if (!adv->host_node)
+ return -ENODEV;
+
+- of_node_put(adv->host_node);
+-
+ adv->use_timing_gen = !of_property_read_bool(np,
+ "adi,disable-timing-generator");
+
+diff --git a/drivers/gpu/drm/i915/display/intel_cx0_phy.c b/drivers/gpu/drm/i915/display/intel_cx0_phy.c
+index 4a6c3040ca15ef..f11309efff3398 100644
+--- a/drivers/gpu/drm/i915/display/intel_cx0_phy.c
++++ b/drivers/gpu/drm/i915/display/intel_cx0_phy.c
+@@ -2084,14 +2084,6 @@ static void intel_c10_pll_program(struct drm_i915_private *i915,
+ 0, C10_VDR_CTRL_MSGBUS_ACCESS,
+ MB_WRITE_COMMITTED);
+
+- /* Custom width needs to be programmed to 0 for both the phy lanes */
+- intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CUSTOM_WIDTH,
+- C10_VDR_CUSTOM_WIDTH_MASK, C10_VDR_CUSTOM_WIDTH_8_10,
+- MB_WRITE_COMMITTED);
+- intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CONTROL(1),
+- 0, C10_VDR_CTRL_UPDATE_CFG,
+- MB_WRITE_COMMITTED);
+-
+ /* Program the pll values only for the master lane */
+ for (i = 0; i < ARRAY_SIZE(pll_state->pll); i++)
+ intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_PLL(i),
+@@ -2101,6 +2093,10 @@ static void intel_c10_pll_program(struct drm_i915_private *i915,
+ intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_CMN(0), pll_state->cmn, MB_WRITE_COMMITTED);
+ intel_cx0_write(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_TX(0), pll_state->tx, MB_WRITE_COMMITTED);
+
++ /* Custom width needs to be programmed to 0 for both the phy lanes */
++ intel_cx0_rmw(encoder, INTEL_CX0_BOTH_LANES, PHY_C10_VDR_CUSTOM_WIDTH,
++ C10_VDR_CUSTOM_WIDTH_MASK, C10_VDR_CUSTOM_WIDTH_8_10,
++ MB_WRITE_COMMITTED);
+ intel_cx0_rmw(encoder, INTEL_CX0_LANE0, PHY_C10_VDR_CONTROL(1),
+ 0, C10_VDR_CTRL_MASTER_LANE | C10_VDR_CTRL_UPDATE_CFG,
+ MB_WRITE_COMMITTED);
+diff --git a/drivers/gpu/drm/i915/gt/intel_rc6.c b/drivers/gpu/drm/i915/gt/intel_rc6.c
+index c864d101faf941..9378d5901c4939 100644
+--- a/drivers/gpu/drm/i915/gt/intel_rc6.c
++++ b/drivers/gpu/drm/i915/gt/intel_rc6.c
+@@ -133,7 +133,7 @@ static void gen11_rc6_enable(struct intel_rc6 *rc6)
+ GEN9_MEDIA_PG_ENABLE |
+ GEN11_MEDIA_SAMPLER_PG_ENABLE;
+
+- if (GRAPHICS_VER(gt->i915) >= 12) {
++ if (GRAPHICS_VER(gt->i915) >= 12 && !IS_DG1(gt->i915)) {
+ for (i = 0; i < I915_MAX_VCS; i++)
+ if (HAS_ENGINE(gt, _VCS(i)))
+ pg_enable |= (VDN_HCP_POWERGATE_ENABLE(i) |
+diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
+index 2a093540354e89..84e327b569252f 100644
+--- a/drivers/gpu/drm/xe/xe_bo.c
++++ b/drivers/gpu/drm/xe/xe_bo.c
+@@ -722,7 +722,7 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
+ new_mem->mem_type == XE_PL_SYSTEM) {
+ long timeout = dma_resv_wait_timeout(ttm_bo->base.resv,
+ DMA_RESV_USAGE_BOOKKEEP,
+- true,
++ false,
+ MAX_SCHEDULE_TIMEOUT);
+ if (timeout < 0) {
+ ret = timeout;
+@@ -846,8 +846,16 @@ static int xe_bo_move(struct ttm_buffer_object *ttm_bo, bool evict,
+
+ out:
+ if ((!ttm_bo->resource || ttm_bo->resource->mem_type == XE_PL_SYSTEM) &&
+- ttm_bo->ttm)
++ ttm_bo->ttm) {
++ long timeout = dma_resv_wait_timeout(ttm_bo->base.resv,
++ DMA_RESV_USAGE_KERNEL,
++ false,
++ MAX_SCHEDULE_TIMEOUT);
++ if (timeout < 0)
++ ret = timeout;
++
+ xe_tt_unmap_sg(ttm_bo->ttm);
++ }
+
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
+index c18e463092afa5..85aa3ab0da3b87 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump.c
++++ b/drivers/gpu/drm/xe/xe_devcoredump.c
+@@ -104,7 +104,11 @@ static ssize_t __xe_devcoredump_read(char *buffer, size_t count,
+ drm_puts(&p, "\n**** GuC CT ****\n");
+ xe_guc_ct_snapshot_print(ss->ct, &p);
+
+- drm_puts(&p, "\n**** Contexts ****\n");
++ /*
++ * Don't add a new section header here because the mesa debug decoder
++ * tool expects the context information to be in the 'GuC CT' section.
++ */
++ /* drm_puts(&p, "\n**** Contexts ****\n"); */
+ xe_guc_exec_queue_snapshot_print(ss->ge, &p);
+
+ drm_puts(&p, "\n**** Job ****\n");
+@@ -358,6 +362,15 @@ void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
+ char buff[ASCII85_BUFSZ], *line_buff;
+ size_t line_pos = 0;
+
++ /*
++ * Splitting blobs across multiple lines is not compatible with the mesa
++ * debug decoder tool. Note that even dropping the explicit '\n' below
++ * doesn't help because the GuC log is so big some underlying implementation
++ * still splits the lines at 512K characters. So just bail completely for
++ * the moment.
++ */
++ return;
++
+ #define DMESG_MAX_LINE_LEN 800
+ #define MIN_SPACE (ASCII85_BUFSZ + 2) /* 85 + "\n\0" */
+
+diff --git a/drivers/gpu/drm/xe/xe_exec_queue.c b/drivers/gpu/drm/xe/xe_exec_queue.c
+index fd0f3b3c9101d4..268cd3123be9d9 100644
+--- a/drivers/gpu/drm/xe/xe_exec_queue.c
++++ b/drivers/gpu/drm/xe/xe_exec_queue.c
+@@ -8,6 +8,7 @@
+ #include <linux/nospec.h>
+
+ #include <drm/drm_device.h>
++#include <drm/drm_drv.h>
+ #include <drm/drm_file.h>
+ #include <uapi/drm/xe_drm.h>
+
+@@ -762,9 +763,11 @@ bool xe_exec_queue_is_idle(struct xe_exec_queue *q)
+ */
+ void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q)
+ {
++ struct xe_device *xe = gt_to_xe(q->gt);
+ struct xe_file *xef;
+ struct xe_lrc *lrc;
+ u32 old_ts, new_ts;
++ int idx;
+
+ /*
+ * Jobs that are run during driver load may use an exec_queue, but are
+@@ -774,6 +777,10 @@ void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q)
+ if (!q->vm || !q->vm->xef)
+ return;
+
++ /* Synchronize with unbind while holding the xe file open */
++ if (!drm_dev_enter(&xe->drm, &idx))
++ return;
++
+ xef = q->vm->xef;
+
+ /*
+@@ -787,6 +794,8 @@ void xe_exec_queue_update_run_ticks(struct xe_exec_queue *q)
+ lrc = q->lrc[0];
+ new_ts = xe_lrc_update_timestamp(lrc, &old_ts);
+ xef->run_ticks[q->class] += (new_ts - old_ts) * q->width;
++
++ drm_dev_exit(idx);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+index afdb477ecf833d..c9ed996b9cb0c3 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_pf_config.c
+@@ -2038,7 +2038,7 @@ static int pf_validate_vf_config(struct xe_gt *gt, unsigned int vfid)
+ valid_any = valid_any || (valid_ggtt && is_primary);
+
+ if (IS_DGFX(xe)) {
+- bool valid_lmem = pf_get_vf_config_ggtt(primary_gt, vfid);
++ bool valid_lmem = pf_get_vf_config_lmem(primary_gt, vfid);
+
+ valid_any = valid_any || (valid_lmem && is_primary);
+ valid_all = valid_all && valid_lmem;
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 64ace0b968f07f..91db10515d7472 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -690,6 +690,7 @@ cma_validate_port(struct ib_device *device, u32 port,
+ int bound_if_index = dev_addr->bound_dev_if;
+ int dev_type = dev_addr->dev_type;
+ struct net_device *ndev = NULL;
++ struct net_device *pdev = NULL;
+
+ if (!rdma_dev_access_netns(device, id_priv->id.route.addr.dev_addr.net))
+ goto out;
+@@ -714,6 +715,21 @@ cma_validate_port(struct ib_device *device, u32 port,
+
+ rcu_read_lock();
+ ndev = rcu_dereference(sgid_attr->ndev);
++ if (ndev->ifindex != bound_if_index) {
++ pdev = dev_get_by_index_rcu(dev_addr->net, bound_if_index);
++ if (pdev) {
++ if (is_vlan_dev(pdev)) {
++ pdev = vlan_dev_real_dev(pdev);
++ if (ndev->ifindex == pdev->ifindex)
++ bound_if_index = pdev->ifindex;
++ }
++ if (is_vlan_dev(ndev)) {
++ pdev = vlan_dev_real_dev(ndev);
++ if (bound_if_index == pdev->ifindex)
++ bound_if_index = ndev->ifindex;
++ }
++ }
++ }
+ if (!net_eq(dev_net(ndev), dev_addr->net) ||
+ ndev->ifindex != bound_if_index) {
+ rdma_put_gid_attr(sgid_attr);
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index 7dc8e2ec62cc8b..f121899863034a 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -2802,8 +2802,8 @@ int rdma_nl_notify_event(struct ib_device *device, u32 port_num,
+ enum rdma_nl_notify_event_type type)
+ {
+ struct sk_buff *skb;
++ int ret = -EMSGSIZE;
+ struct net *net;
+- int ret = 0;
+ void *nlh;
+
+ net = read_pnet(&device->coredev.rdma_net);
+diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
+index a4cce360df2178..edef79daed3fa8 100644
+--- a/drivers/infiniband/core/uverbs_cmd.c
++++ b/drivers/infiniband/core/uverbs_cmd.c
+@@ -161,7 +161,7 @@ static const void __user *uverbs_request_next_ptr(struct uverbs_req_iter *iter,
+ {
+ const void __user *res = iter->cur;
+
+- if (iter->cur + len > iter->end)
++ if (len > iter->end - iter->cur)
+ return (void __force __user *)ERR_PTR(-ENOSPC);
+ iter->cur += len;
+ return res;
+@@ -2010,11 +2010,13 @@ static int ib_uverbs_post_send(struct uverbs_attr_bundle *attrs)
+ ret = uverbs_request_start(attrs, &iter, &cmd, sizeof(cmd));
+ if (ret)
+ return ret;
+- wqes = uverbs_request_next_ptr(&iter, cmd.wqe_size * cmd.wr_count);
++ wqes = uverbs_request_next_ptr(&iter, size_mul(cmd.wqe_size,
++ cmd.wr_count));
+ if (IS_ERR(wqes))
+ return PTR_ERR(wqes);
+- sgls = uverbs_request_next_ptr(
+- &iter, cmd.sge_count * sizeof(struct ib_uverbs_sge));
++ sgls = uverbs_request_next_ptr(&iter,
++ size_mul(cmd.sge_count,
++ sizeof(struct ib_uverbs_sge)));
+ if (IS_ERR(sgls))
+ return PTR_ERR(sgls);
+ ret = uverbs_request_finish(&iter);
+@@ -2200,11 +2202,11 @@ ib_uverbs_unmarshall_recv(struct uverbs_req_iter *iter, u32 wr_count,
+ if (wqe_size < sizeof(struct ib_uverbs_recv_wr))
+ return ERR_PTR(-EINVAL);
+
+- wqes = uverbs_request_next_ptr(iter, wqe_size * wr_count);
++ wqes = uverbs_request_next_ptr(iter, size_mul(wqe_size, wr_count));
+ if (IS_ERR(wqes))
+ return ERR_CAST(wqes);
+- sgls = uverbs_request_next_ptr(
+- iter, sge_count * sizeof(struct ib_uverbs_sge));
++ sgls = uverbs_request_next_ptr(iter, size_mul(sge_count,
++ sizeof(struct ib_uverbs_sge)));
+ if (IS_ERR(sgls))
+ return ERR_CAST(sgls);
+ ret = uverbs_request_finish(iter);
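
Both uverbs_cmd.c changes harden user-controlled arithmetic: the iterator bounds
check becomes "len > iter->end - iter->cur", which cannot wrap the way
"iter->cur + len" can, and the wqe/sge byte counts go through size_mul(), which
saturates instead of wrapping on overflow. A userspace analogue of the two idioms,
using __builtin_mul_overflow (available in GCC and Clang) in place of the kernel's
size_mul() from <linux/overflow.h>:

  #include <stdbool.h>
  #include <stddef.h>
  #include <stdint.h>

  /* saturate to SIZE_MAX on overflow, like the kernel's size_mul() */
  static size_t size_mul_sat(size_t a, size_t b)
  {
          size_t r;

          return __builtin_mul_overflow(a, b, &r) ? SIZE_MAX : r;
  }

  /* compare lengths, not advanced pointers: cur + len could wrap */
  static bool request_fits(const char *cur, const char *end,
                           size_t count, size_t elem_size)
  {
          size_t len = size_mul_sat(count, elem_size);

          return len <= (size_t)(end - cur);
  }
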
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 160096792224b1..b20cffcc3e7d2d 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -156,7 +156,7 @@ int bnxt_re_query_device(struct ib_device *ibdev,
+
+ ib_attr->vendor_id = rdev->en_dev->pdev->vendor;
+ ib_attr->vendor_part_id = rdev->en_dev->pdev->device;
+- ib_attr->hw_ver = rdev->en_dev->pdev->subsystem_device;
++ ib_attr->hw_ver = rdev->en_dev->pdev->revision;
+ ib_attr->max_qp = dev_attr->max_qp;
+ ib_attr->max_qp_wr = dev_attr->max_qp_wqes;
+ ib_attr->device_cap_flags =
+@@ -2107,18 +2107,20 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
+ }
+ }
+
+- if (qp_attr_mask & IB_QP_PATH_MTU) {
+- qp->qplib_qp.modify_flags |=
+- CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
+- qp->qplib_qp.path_mtu = __from_ib_mtu(qp_attr->path_mtu);
+- qp->qplib_qp.mtu = ib_mtu_enum_to_int(qp_attr->path_mtu);
+- } else if (qp_attr->qp_state == IB_QPS_RTR) {
+- qp->qplib_qp.modify_flags |=
+- CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
+- qp->qplib_qp.path_mtu =
+- __from_ib_mtu(iboe_get_mtu(rdev->netdev->mtu));
+- qp->qplib_qp.mtu =
+- ib_mtu_enum_to_int(iboe_get_mtu(rdev->netdev->mtu));
++ if (qp_attr->qp_state == IB_QPS_RTR) {
++ enum ib_mtu qpmtu;
++
++ qpmtu = iboe_get_mtu(rdev->netdev->mtu);
++ if (qp_attr_mask & IB_QP_PATH_MTU) {
++ if (ib_mtu_enum_to_int(qp_attr->path_mtu) >
++ ib_mtu_enum_to_int(qpmtu))
++ return -EINVAL;
++ qpmtu = qp_attr->path_mtu;
++ }
++
++ qp->qplib_qp.modify_flags |= CMDQ_MODIFY_QP_MODIFY_MASK_PATH_MTU;
++ qp->qplib_qp.path_mtu = __from_ib_mtu(qpmtu);
++ qp->qplib_qp.mtu = ib_mtu_enum_to_int(qpmtu);
+ }
+
+ if (qp_attr_mask & IB_QP_TIMEOUT) {
+@@ -2763,7 +2765,8 @@ static int bnxt_re_post_send_shadow_qp(struct bnxt_re_dev *rdev,
+ wr = wr->next;
+ }
+ bnxt_qplib_post_send_db(&qp->qplib_qp);
+- bnxt_ud_qp_hw_stall_workaround(qp);
++ if (!bnxt_qplib_is_chip_gen_p5_p7(qp->rdev->chip_ctx))
++ bnxt_ud_qp_hw_stall_workaround(qp);
+ spin_unlock_irqrestore(&qp->sq_lock, flags);
+ return rc;
+ }
+@@ -2875,7 +2878,8 @@ int bnxt_re_post_send(struct ib_qp *ib_qp, const struct ib_send_wr *wr,
+ wr = wr->next;
+ }
+ bnxt_qplib_post_send_db(&qp->qplib_qp);
+- bnxt_ud_qp_hw_stall_workaround(qp);
++ if (!bnxt_qplib_is_chip_gen_p5_p7(qp->rdev->chip_ctx))
++ bnxt_ud_qp_hw_stall_workaround(qp);
+ spin_unlock_irqrestore(&qp->sq_lock, flags);
+
+ return rc;
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 2ac8ddbed576f5..8abd1b723f8ff5 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -1435,11 +1435,8 @@ static bool bnxt_re_is_qp1_or_shadow_qp(struct bnxt_re_dev *rdev,
+
+ static void bnxt_re_dev_stop(struct bnxt_re_dev *rdev)
+ {
+- int mask = IB_QP_STATE;
+- struct ib_qp_attr qp_attr;
+ struct bnxt_re_qp *qp;
+
+- qp_attr.qp_state = IB_QPS_ERR;
+ mutex_lock(&rdev->qp_lock);
+ list_for_each_entry(qp, &rdev->qp_list, list) {
+ /* Modify the state of all QPs except QP1/Shadow QP */
+@@ -1447,12 +1444,9 @@ static void bnxt_re_dev_stop(struct bnxt_re_dev *rdev)
+ if (qp->qplib_qp.state !=
+ CMDQ_MODIFY_QP_NEW_STATE_RESET &&
+ qp->qplib_qp.state !=
+- CMDQ_MODIFY_QP_NEW_STATE_ERR) {
++ CMDQ_MODIFY_QP_NEW_STATE_ERR)
+ bnxt_re_dispatch_event(&rdev->ibdev, &qp->ib_qp,
+ 1, IB_EVENT_QP_FATAL);
+- bnxt_re_modify_qp(&qp->ib_qp, &qp_attr, mask,
+- NULL);
+- }
+ }
+ }
+ mutex_unlock(&rdev->qp_lock);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 7ad83566ab0f41..828e2f9808012b 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -658,13 +658,6 @@ int bnxt_qplib_create_srq(struct bnxt_qplib_res *res,
+ rc = bnxt_qplib_alloc_init_hwq(&srq->hwq, &hwq_attr);
+ if (rc)
+ return rc;
+-
+- srq->swq = kcalloc(srq->hwq.max_elements, sizeof(*srq->swq),
+- GFP_KERNEL);
+- if (!srq->swq) {
+- rc = -ENOMEM;
+- goto fail;
+- }
+ srq->dbinfo.flags = 0;
+ bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
+ CMDQ_BASE_OPCODE_CREATE_SRQ,
+@@ -693,9 +686,17 @@ int bnxt_qplib_create_srq(struct bnxt_qplib_res *res,
+ spin_lock_init(&srq->lock);
+ srq->start_idx = 0;
+ srq->last_idx = srq->hwq.max_elements - 1;
+- for (idx = 0; idx < srq->hwq.max_elements; idx++)
+- srq->swq[idx].next_idx = idx + 1;
+- srq->swq[srq->last_idx].next_idx = -1;
++ if (!srq->hwq.is_user) {
++ srq->swq = kcalloc(srq->hwq.max_elements, sizeof(*srq->swq),
++ GFP_KERNEL);
++ if (!srq->swq) {
++ rc = -ENOMEM;
++ goto fail;
++ }
++ for (idx = 0; idx < srq->hwq.max_elements; idx++)
++ srq->swq[idx].next_idx = idx + 1;
++ srq->swq[srq->last_idx].next_idx = -1;
++ }
+
+ srq->id = le32_to_cpu(resp.xid);
+ srq->dbinfo.hwq = &srq->hwq;
+@@ -999,9 +1000,7 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ u32 tbl_indx;
+ u16 nsge;
+
+- if (res->dattr)
+- qp->is_host_msn_tbl = _is_host_msn_table(res->dattr->dev_cap_flags2);
+-
++ qp->is_host_msn_tbl = _is_host_msn_table(res->dattr->dev_cap_flags2);
+ sq->dbinfo.flags = 0;
+ bnxt_qplib_rcfw_cmd_prep((struct cmdq_base *)&req,
+ CMDQ_BASE_OPCODE_CREATE_QP,
+@@ -1033,7 +1032,12 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ : 0;
+ /* Update msn tbl size */
+ if (qp->is_host_msn_tbl && psn_sz) {
+- hwq_attr.aux_depth = roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode));
++ if (qp->wqe_mode == BNXT_QPLIB_WQE_MODE_STATIC)
++ hwq_attr.aux_depth =
++ roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode));
++ else
++ hwq_attr.aux_depth =
++ roundup_pow_of_two(bnxt_qplib_set_sq_size(sq, qp->wqe_mode)) / 2;
+ qp->msn_tbl_sz = hwq_attr.aux_depth;
+ qp->msn = 0;
+ }
+@@ -1043,13 +1047,14 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ if (rc)
+ return rc;
+
+- rc = bnxt_qplib_alloc_init_swq(sq);
+- if (rc)
+- goto fail_sq;
+-
+- if (psn_sz)
+- bnxt_qplib_init_psn_ptr(qp, psn_sz);
++ if (!sq->hwq.is_user) {
++ rc = bnxt_qplib_alloc_init_swq(sq);
++ if (rc)
++ goto fail_sq;
+
++ if (psn_sz)
++ bnxt_qplib_init_psn_ptr(qp, psn_sz);
++ }
+ req.sq_size = cpu_to_le32(bnxt_qplib_set_sq_size(sq, qp->wqe_mode));
+ pbl = &sq->hwq.pbl[PBL_LVL_0];
+ req.sq_pbl = cpu_to_le64(pbl->pg_map_arr[0]);
+@@ -1075,9 +1080,11 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ rc = bnxt_qplib_alloc_init_hwq(&rq->hwq, &hwq_attr);
+ if (rc)
+ goto sq_swq;
+- rc = bnxt_qplib_alloc_init_swq(rq);
+- if (rc)
+- goto fail_rq;
++ if (!rq->hwq.is_user) {
++ rc = bnxt_qplib_alloc_init_swq(rq);
++ if (rc)
++ goto fail_rq;
++ }
+
+ req.rq_size = cpu_to_le32(rq->max_wqe);
+ pbl = &rq->hwq.pbl[PBL_LVL_0];
+@@ -1173,9 +1180,11 @@ int bnxt_qplib_create_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ rq->dbinfo.db = qp->dpi->dbr;
+ rq->dbinfo.max_slot = bnxt_qplib_set_rq_max_slot(rq->wqe_size);
+ }
++ spin_lock_bh(&rcfw->tbl_lock);
+ tbl_indx = map_qp_id_to_tbl_indx(qp->id, rcfw);
+ rcfw->qp_tbl[tbl_indx].qp_id = qp->id;
+ rcfw->qp_tbl[tbl_indx].qp_handle = (void *)qp;
++ spin_unlock_bh(&rcfw->tbl_lock);
+
+ return 0;
+ fail:
+@@ -2596,10 +2605,12 @@ static int bnxt_qplib_cq_process_req(struct bnxt_qplib_cq *cq,
+ bnxt_qplib_add_flush_qp(qp);
+ } else {
+ /* Before we complete, do WA 9060 */
+- if (do_wa9060(qp, cq, cq_cons, sq->swq_last,
+- cqe_sq_cons)) {
+- *lib_qp = qp;
+- goto out;
++ if (!bnxt_qplib_is_chip_gen_p5_p7(qp->cctx)) {
++ if (do_wa9060(qp, cq, cq_cons, sq->swq_last,
++ cqe_sq_cons)) {
++ *lib_qp = qp;
++ goto out;
++ }
+ }
+ if (swq->flags & SQ_SEND_FLAGS_SIGNAL_COMP) {
+ cqe->status = CQ_REQ_STATUS_OK;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index f55958e5fddb4a..d8c71c024613bf 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -114,7 +114,6 @@ struct bnxt_qplib_sge {
+ u32 size;
+ };
+
+-#define BNXT_QPLIB_QP_MAX_SGL 6
+ struct bnxt_qplib_swq {
+ u64 wr_id;
+ int next_idx;
+@@ -154,7 +153,7 @@ struct bnxt_qplib_swqe {
+ #define BNXT_QPLIB_SWQE_FLAGS_UC_FENCE BIT(2)
+ #define BNXT_QPLIB_SWQE_FLAGS_SOLICIT_EVENT BIT(3)
+ #define BNXT_QPLIB_SWQE_FLAGS_INLINE BIT(4)
+- struct bnxt_qplib_sge sg_list[BNXT_QPLIB_QP_MAX_SGL];
++ struct bnxt_qplib_sge sg_list[BNXT_VAR_MAX_SGE];
+ int num_sge;
+ /* Max inline data is 96 bytes */
+ u32 inline_len;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+index e82bd37158ad6c..7a099580ca8bff 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c
+@@ -424,7 +424,8 @@ static int __send_message_basic_sanity(struct bnxt_qplib_rcfw *rcfw,
+
+ /* Prevent posting if f/w is not in a state to process */
+ if (test_bit(ERR_DEVICE_DETACHED, &rcfw->cmdq.flags))
+- return bnxt_qplib_map_rc(opcode);
++ return -ENXIO;
++
+ if (test_bit(FIRMWARE_STALL_DETECTED, &cmdq->flags))
+ return -ETIMEDOUT;
+
+@@ -493,7 +494,7 @@ static int __bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
+
+ rc = __send_message_basic_sanity(rcfw, msg, opcode);
+ if (rc)
+- return rc;
++ return rc == -ENXIO ? bnxt_qplib_map_rc(opcode) : rc;
+
+ rc = __send_message(rcfw, msg, opcode);
+ if (rc)
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+index e29fbbdab9fd68..3cca7b1395f6a7 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+@@ -129,12 +129,18 @@ int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+ attr->max_qp_init_rd_atom =
+ sb->max_qp_init_rd_atom > BNXT_QPLIB_MAX_OUT_RD_ATOM ?
+ BNXT_QPLIB_MAX_OUT_RD_ATOM : sb->max_qp_init_rd_atom;
+- attr->max_qp_wqes = le16_to_cpu(sb->max_qp_wr);
+- /*
+- * 128 WQEs needs to be reserved for the HW (8916). Prevent
+- * reporting the max number
+- */
+- attr->max_qp_wqes -= BNXT_QPLIB_RESERVED_QP_WRS + 1;
++ attr->max_qp_wqes = le16_to_cpu(sb->max_qp_wr) - 1;
++ if (!bnxt_qplib_is_chip_gen_p5_p7(rcfw->res->cctx)) {
++ /*
++ * 128 WQEs needs to be reserved for the HW (8916). Prevent
++ * reporting the max number on legacy devices
++ */
++ attr->max_qp_wqes -= BNXT_QPLIB_RESERVED_QP_WRS + 1;
++ }
++
++ /* Adjust for max_qp_wqes for variable wqe */
++ if (cctx->modes.wqe_mode == BNXT_QPLIB_WQE_MODE_VARIABLE)
++ attr->max_qp_wqes = BNXT_VAR_MAX_WQE - 1;
+
+ attr->max_qp_sges = cctx->modes.wqe_mode == BNXT_QPLIB_WQE_MODE_VARIABLE ?
+ min_t(u32, sb->max_sge_var_wqe, BNXT_VAR_MAX_SGE) : 6;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index f84521be3bea4a..605562122ecce2 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -931,6 +931,7 @@ struct hns_roce_hem_item {
+ size_t count; /* max ba numbers */
+ int start; /* start buf offset in this hem */
+ int end; /* end buf offset in this hem */
++ bool exist_bt;
+ };
+
+ /* All HEM items are linked in a tree structure */
+@@ -959,6 +960,7 @@ hem_list_alloc_item(struct hns_roce_dev *hr_dev, int start, int end, int count,
+ }
+ }
+
++ hem->exist_bt = exist_bt;
+ hem->count = count;
+ hem->start = start;
+ hem->end = end;
+@@ -969,22 +971,22 @@ hem_list_alloc_item(struct hns_roce_dev *hr_dev, int start, int end, int count,
+ }
+
+ static void hem_list_free_item(struct hns_roce_dev *hr_dev,
+- struct hns_roce_hem_item *hem, bool exist_bt)
++ struct hns_roce_hem_item *hem)
+ {
+- if (exist_bt)
++ if (hem->exist_bt)
+ dma_free_coherent(hr_dev->dev, hem->count * BA_BYTE_LEN,
+ hem->addr, hem->dma_addr);
+ kfree(hem);
+ }
+
+ static void hem_list_free_all(struct hns_roce_dev *hr_dev,
+- struct list_head *head, bool exist_bt)
++ struct list_head *head)
+ {
+ struct hns_roce_hem_item *hem, *temp_hem;
+
+ list_for_each_entry_safe(hem, temp_hem, head, list) {
+ list_del(&hem->list);
+- hem_list_free_item(hr_dev, hem, exist_bt);
++ hem_list_free_item(hr_dev, hem);
+ }
+ }
+
+@@ -1084,6 +1086,10 @@ int hns_roce_hem_list_calc_root_ba(const struct hns_roce_buf_region *regions,
+
+ for (i = 0; i < region_cnt; i++) {
+ r = (struct hns_roce_buf_region *)®ions[i];
++ /* when r->hopnum = 0, the region should not occupy root_ba. */
++ if (!r->hopnum)
++ continue;
++
+ if (r->hopnum > 1) {
+ step = hem_list_calc_ba_range(r->hopnum, 1, unit);
+ if (step > 0)
+@@ -1177,7 +1183,7 @@ static int hem_list_alloc_mid_bt(struct hns_roce_dev *hr_dev,
+
+ err_exit:
+ for (level = 1; level < hopnum; level++)
+- hem_list_free_all(hr_dev, &temp_list[level], true);
++ hem_list_free_all(hr_dev, &temp_list[level]);
+
+ return ret;
+ }
+@@ -1218,16 +1224,26 @@ static int alloc_fake_root_bt(struct hns_roce_dev *hr_dev, void *cpu_base,
+ {
+ struct hns_roce_hem_item *hem;
+
++ /* This is on the has_mtt branch, if r->hopnum
++ * is 0, there is no root_ba to reuse for the
++ * region's fake hem, so a dma_alloc request is
++ * necessary here.
++ */
+ hem = hem_list_alloc_item(hr_dev, r->offset, r->offset + r->count - 1,
+- r->count, false);
++ r->count, !r->hopnum);
+ if (!hem)
+ return -ENOMEM;
+
+- hem_list_assign_bt(hem, cpu_base, phy_base);
++ /* The root_ba can be reused only when r->hopnum > 0. */
++ if (r->hopnum)
++ hem_list_assign_bt(hem, cpu_base, phy_base);
+ list_add(&hem->list, branch_head);
+ list_add(&hem->sibling, leaf_head);
+
+- return r->count;
++ /* If r->hopnum == 0, 0 is returned,
++ * so that the root_bt entry is not occupied.
++ */
++ return r->hopnum ? r->count : 0;
+ }
+
+ static int setup_middle_bt(struct hns_roce_dev *hr_dev, void *cpu_base,
+@@ -1271,7 +1287,7 @@ setup_root_hem(struct hns_roce_dev *hr_dev, struct hns_roce_hem_list *hem_list,
+ return -ENOMEM;
+
+ total = 0;
+- for (i = 0; i < region_cnt && total < max_ba_num; i++) {
++ for (i = 0; i < region_cnt && total <= max_ba_num; i++) {
+ r = ®ions[i];
+ if (!r->count)
+ continue;
+@@ -1337,9 +1353,9 @@ static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev,
+ region_cnt);
+ if (ret) {
+ for (i = 0; i < region_cnt; i++)
+- hem_list_free_all(hr_dev, &head.branch[i], false);
++ hem_list_free_all(hr_dev, &head.branch[i]);
+
+- hem_list_free_all(hr_dev, &head.root, true);
++ hem_list_free_all(hr_dev, &head.root);
+ }
+
+ return ret;
+@@ -1402,10 +1418,9 @@ void hns_roce_hem_list_release(struct hns_roce_dev *hr_dev,
+
+ for (i = 0; i < HNS_ROCE_MAX_BT_REGION; i++)
+ for (j = 0; j < HNS_ROCE_MAX_BT_LEVEL; j++)
+- hem_list_free_all(hr_dev, &hem_list->mid_bt[i][j],
+- j != 0);
++ hem_list_free_all(hr_dev, &hem_list->mid_bt[i][j]);
+
+- hem_list_free_all(hr_dev, &hem_list->root_bt, true);
++ hem_list_free_all(hr_dev, &hem_list->root_bt);
+ INIT_LIST_HEAD(&hem_list->btm_bt);
+ hem_list->root_ba = 0;
+ }
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 697b17cca02e71..0144e7210d05a1 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -468,7 +468,7 @@ static inline int set_ud_wqe(struct hns_roce_qp *qp,
+ valid_num_sge = calc_wr_sge_num(wr, &msg_len);
+
+ ret = set_ud_opcode(ud_sq_wqe, wr);
+- if (WARN_ON(ret))
++ if (WARN_ON_ONCE(ret))
+ return ret;
+
+ ud_sq_wqe->msg_len = cpu_to_le32(msg_len);
+@@ -572,7 +572,7 @@ static inline int set_rc_wqe(struct hns_roce_qp *qp,
+ rc_sq_wqe->msg_len = cpu_to_le32(msg_len);
+
+ ret = set_rc_opcode(hr_dev, rc_sq_wqe, wr);
+- if (WARN_ON(ret))
++ if (WARN_ON_ONCE(ret))
+ return ret;
+
+ hr_reg_write(rc_sq_wqe, RC_SEND_WQE_SO,
+@@ -670,6 +670,10 @@ static void write_dwqe(struct hns_roce_dev *hr_dev, struct hns_roce_qp *qp,
+ #define HNS_ROCE_SL_SHIFT 2
+ struct hns_roce_v2_rc_send_wqe *rc_sq_wqe = wqe;
+
++ if (unlikely(qp->state == IB_QPS_ERR)) {
++ flush_cqe(hr_dev, qp);
++ return;
++ }
+ /* All kinds of DirectWQE have the same header field layout */
+ hr_reg_enable(rc_sq_wqe, RC_SEND_WQE_FLAG);
+ hr_reg_write(rc_sq_wqe, RC_SEND_WQE_DB_SL_L, qp->sl);
+@@ -5619,6 +5623,9 @@ static void put_dip_ctx_idx(struct hns_roce_dev *hr_dev,
+ {
+ struct hns_roce_dip *hr_dip = hr_qp->dip;
+
++ if (!hr_dip)
++ return;
++
+ xa_lock(&hr_dev->qp_table.dip_xa);
+
+ hr_dip->qp_cnt--;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_mr.c b/drivers/infiniband/hw/hns/hns_roce_mr.c
+index bf30b3a65a9ba9..55b9283bfc6f03 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_mr.c
++++ b/drivers/infiniband/hw/hns/hns_roce_mr.c
+@@ -814,11 +814,6 @@ int hns_roce_mtr_map(struct hns_roce_dev *hr_dev, struct hns_roce_mtr *mtr,
+ for (i = 0, mapped_cnt = 0; i < mtr->hem_cfg.region_count &&
+ mapped_cnt < page_cnt; i++) {
+ r = &mtr->hem_cfg.region[i];
+- /* if hopnum is 0, no need to map pages in this region */
+- if (!r->hopnum) {
+- mapped_cnt += r->count;
+- continue;
+- }
+
+ if (r->offset + r->count > page_cnt) {
+ ret = -EINVAL;
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index ac20ab3bbabf47..8c47cb4edd0a0a 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -2831,7 +2831,7 @@ static int mlx5_ib_get_plane_num(struct mlx5_core_dev *mdev, u8 *num_plane)
+ int err;
+
+ *num_plane = 0;
+- if (!MLX5_CAP_GEN(mdev, ib_virt))
++ if (!MLX5_CAP_GEN(mdev, ib_virt) || !MLX5_CAP_GEN_2(mdev, multiplane))
+ return 0;
+
+ err = mlx5_query_hca_vport_context(mdev, 0, 1, 0, &vport_ctx);
+@@ -3631,7 +3631,8 @@ static int mlx5_ib_init_multiport_master(struct mlx5_ib_dev *dev)
+ list_for_each_entry(mpi, &mlx5_ib_unaffiliated_port_list,
+ list) {
+ if (dev->sys_image_guid == mpi->sys_image_guid &&
+- (mlx5_core_native_port_num(mpi->mdev) - 1) == i) {
++ (mlx5_core_native_port_num(mpi->mdev) - 1) == i &&
++ mlx5_core_same_coredev_type(dev->mdev, mpi->mdev)) {
+ bound = mlx5_ib_bind_slave_port(dev, mpi);
+ }
+
+@@ -4776,7 +4777,8 @@ static int mlx5r_mp_probe(struct auxiliary_device *adev,
+
+ mutex_lock(&mlx5_ib_multiport_mutex);
+ list_for_each_entry(dev, &mlx5_ib_dev_list, ib_dev_list) {
+- if (dev->sys_image_guid == mpi->sys_image_guid)
++ if (dev->sys_image_guid == mpi->sys_image_guid &&
++ mlx5_core_same_coredev_type(dev->mdev, mpi->mdev))
+ bound = mlx5_ib_bind_slave_port(dev, mpi);
+
+ if (bound) {
+diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
+index 255677bc12b2ab..1ba4a0c8726aed 100644
+--- a/drivers/infiniband/sw/rxe/rxe.c
++++ b/drivers/infiniband/sw/rxe/rxe.c
+@@ -40,6 +40,8 @@ void rxe_dealloc(struct ib_device *ib_dev)
+ /* initialize rxe device parameters */
+ static void rxe_init_device_param(struct rxe_dev *rxe)
+ {
++ struct net_device *ndev;
++
+ rxe->max_inline_data = RXE_MAX_INLINE_DATA;
+
+ rxe->attr.vendor_id = RXE_VENDOR_ID;
+@@ -71,8 +73,15 @@ static void rxe_init_device_param(struct rxe_dev *rxe)
+ rxe->attr.max_fast_reg_page_list_len = RXE_MAX_FMR_PAGE_LIST_LEN;
+ rxe->attr.max_pkeys = RXE_MAX_PKEYS;
+ rxe->attr.local_ca_ack_delay = RXE_LOCAL_CA_ACK_DELAY;
++
++ ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
++ if (!ndev)
++ return;
++
+ addrconf_addr_eui48((unsigned char *)&rxe->attr.sys_image_guid,
+- rxe->ndev->dev_addr);
++ ndev->dev_addr);
++
++ dev_put(ndev);
+
+ rxe->max_ucontext = RXE_MAX_UCONTEXT;
+ }
+@@ -109,10 +118,15 @@ static void rxe_init_port_param(struct rxe_port *port)
+ static void rxe_init_ports(struct rxe_dev *rxe)
+ {
+ struct rxe_port *port = &rxe->port;
++ struct net_device *ndev;
+
+ rxe_init_port_param(port);
++ ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
++ if (!ndev)
++ return;
+ addrconf_addr_eui48((unsigned char *)&port->port_guid,
+- rxe->ndev->dev_addr);
++ ndev->dev_addr);
++ dev_put(ndev);
+ spin_lock_init(&port->port_lock);
+ }
+
+@@ -167,12 +181,13 @@ void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu)
+ /* called by ifc layer to create new rxe device.
+ * The caller should allocate memory for rxe by calling ib_alloc_device.
+ */
+-int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name)
++int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name,
++ struct net_device *ndev)
+ {
+ rxe_init(rxe);
+ rxe_set_mtu(rxe, mtu);
+
+- return rxe_register_device(rxe, ibdev_name);
++ return rxe_register_device(rxe, ibdev_name, ndev);
+ }
+
+ static int rxe_newlink(const char *ibdev_name, struct net_device *ndev)
+diff --git a/drivers/infiniband/sw/rxe/rxe.h b/drivers/infiniband/sw/rxe/rxe.h
+index d8fb2c7af30a7e..fe7f9706673255 100644
+--- a/drivers/infiniband/sw/rxe/rxe.h
++++ b/drivers/infiniband/sw/rxe/rxe.h
+@@ -139,7 +139,8 @@ enum resp_states {
+
+ void rxe_set_mtu(struct rxe_dev *rxe, unsigned int dev_mtu);
+
+-int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name);
++int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name,
++ struct net_device *ndev);
+
+ void rxe_rcv(struct sk_buff *skb);
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_mcast.c b/drivers/infiniband/sw/rxe/rxe_mcast.c
+index 86cc2e18a7fdaf..07ff47bae31df9 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mcast.c
++++ b/drivers/infiniband/sw/rxe/rxe_mcast.c
+@@ -31,10 +31,19 @@
+ static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid)
+ {
+ unsigned char ll_addr[ETH_ALEN];
++ struct net_device *ndev;
++ int ret;
++
++ ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
++ if (!ndev)
++ return -ENODEV;
+
+ ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
+
+- return dev_mc_add(rxe->ndev, ll_addr);
++ ret = dev_mc_add(ndev, ll_addr);
++ dev_put(ndev);
++
++ return ret;
+ }
+
+ /**
+@@ -47,10 +56,19 @@ static int rxe_mcast_add(struct rxe_dev *rxe, union ib_gid *mgid)
+ static int rxe_mcast_del(struct rxe_dev *rxe, union ib_gid *mgid)
+ {
+ unsigned char ll_addr[ETH_ALEN];
++ struct net_device *ndev;
++ int ret;
++
++ ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
++ if (!ndev)
++ return -ENODEV;
+
+ ipv6_eth_mc_map((struct in6_addr *)mgid->raw, ll_addr);
+
+- return dev_mc_del(rxe->ndev, ll_addr);
++ ret = dev_mc_del(ndev, ll_addr);
++ dev_put(ndev);
++
++ return ret;
+ }
+
+ /**
+diff --git a/drivers/infiniband/sw/rxe/rxe_net.c b/drivers/infiniband/sw/rxe/rxe_net.c
+index 75d1407db52d4d..8cc64ceeb3569b 100644
+--- a/drivers/infiniband/sw/rxe/rxe_net.c
++++ b/drivers/infiniband/sw/rxe/rxe_net.c
+@@ -524,7 +524,16 @@ struct sk_buff *rxe_init_packet(struct rxe_dev *rxe, struct rxe_av *av,
+ */
+ const char *rxe_parent_name(struct rxe_dev *rxe, unsigned int port_num)
+ {
+- return rxe->ndev->name;
++ struct net_device *ndev;
++ char *ndev_name;
++
++ ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
++ if (!ndev)
++ return NULL;
++ ndev_name = ndev->name;
++ dev_put(ndev);
++
++ return ndev_name;
+ }
+
+ int rxe_net_add(const char *ibdev_name, struct net_device *ndev)
+@@ -536,10 +545,9 @@ int rxe_net_add(const char *ibdev_name, struct net_device *ndev)
+ if (!rxe)
+ return -ENOMEM;
+
+- rxe->ndev = ndev;
+ ib_mark_name_assigned_by_user(&rxe->ib_dev);
+
+- err = rxe_add(rxe, ndev->mtu, ibdev_name);
++ err = rxe_add(rxe, ndev->mtu, ibdev_name, ndev);
+ if (err) {
+ ib_dealloc_device(&rxe->ib_dev);
+ return err;
+@@ -587,10 +595,18 @@ void rxe_port_down(struct rxe_dev *rxe)
+
+ void rxe_set_port_state(struct rxe_dev *rxe)
+ {
+- if (netif_running(rxe->ndev) && netif_carrier_ok(rxe->ndev))
++ struct net_device *ndev;
++
++ ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
++ if (!ndev)
++ return;
++
++ if (netif_running(ndev) && netif_carrier_ok(ndev))
+ rxe_port_up(rxe);
+ else
+ rxe_port_down(rxe);
++
++ dev_put(ndev);
+ }
+
+ static int rxe_notify(struct notifier_block *not_blk,
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index 5c18f7e342f294..8a5fc20fd18692 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -41,6 +41,7 @@ static int rxe_query_port(struct ib_device *ibdev,
+ u32 port_num, struct ib_port_attr *attr)
+ {
+ struct rxe_dev *rxe = to_rdev(ibdev);
++ struct net_device *ndev;
+ int err, ret;
+
+ if (port_num != 1) {
+@@ -49,6 +50,12 @@ static int rxe_query_port(struct ib_device *ibdev,
+ goto err_out;
+ }
+
++ ndev = rxe_ib_device_get_netdev(ibdev);
++ if (!ndev) {
++ err = -ENODEV;
++ goto err_out;
++ }
++
+ memcpy(attr, &rxe->port.attr, sizeof(*attr));
+
+ mutex_lock(&rxe->usdev_lock);
+@@ -57,13 +64,14 @@ static int rxe_query_port(struct ib_device *ibdev,
+
+ if (attr->state == IB_PORT_ACTIVE)
+ attr->phys_state = IB_PORT_PHYS_STATE_LINK_UP;
+- else if (dev_get_flags(rxe->ndev) & IFF_UP)
++ else if (dev_get_flags(ndev) & IFF_UP)
+ attr->phys_state = IB_PORT_PHYS_STATE_POLLING;
+ else
+ attr->phys_state = IB_PORT_PHYS_STATE_DISABLED;
+
+ mutex_unlock(&rxe->usdev_lock);
+
++ dev_put(ndev);
+ return ret;
+
+ err_out:
+@@ -1425,9 +1433,16 @@ static const struct attribute_group rxe_attr_group = {
+ static int rxe_enable_driver(struct ib_device *ib_dev)
+ {
+ struct rxe_dev *rxe = container_of(ib_dev, struct rxe_dev, ib_dev);
++ struct net_device *ndev;
++
++ ndev = rxe_ib_device_get_netdev(ib_dev);
++ if (!ndev)
++ return -ENODEV;
+
+ rxe_set_port_state(rxe);
+- dev_info(&rxe->ib_dev.dev, "added %s\n", netdev_name(rxe->ndev));
++ dev_info(&rxe->ib_dev.dev, "added %s\n", netdev_name(ndev));
++
++ dev_put(ndev);
+ return 0;
+ }
+
+@@ -1495,7 +1510,8 @@ static const struct ib_device_ops rxe_dev_ops = {
+ INIT_RDMA_OBJ_SIZE(ib_mw, rxe_mw, ibmw),
+ };
+
+-int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name)
++int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name,
++ struct net_device *ndev)
+ {
+ int err;
+ struct ib_device *dev = &rxe->ib_dev;
+@@ -1507,13 +1523,13 @@ int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name)
+ dev->num_comp_vectors = num_possible_cpus();
+ dev->local_dma_lkey = 0;
+ addrconf_addr_eui48((unsigned char *)&dev->node_guid,
+- rxe->ndev->dev_addr);
++ ndev->dev_addr);
+
+ dev->uverbs_cmd_mask |= BIT_ULL(IB_USER_VERBS_CMD_POST_SEND) |
+ BIT_ULL(IB_USER_VERBS_CMD_REQ_NOTIFY_CQ);
+
+ ib_set_device_ops(dev, &rxe_dev_ops);
+- err = ib_device_set_netdev(&rxe->ib_dev, rxe->ndev, 1);
++ err = ib_device_set_netdev(&rxe->ib_dev, ndev, 1);
+ if (err)
+ return err;
+
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.h b/drivers/infiniband/sw/rxe/rxe_verbs.h
+index 3c1354f82283e6..6573ceec0ef583 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.h
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.h
+@@ -370,6 +370,7 @@ struct rxe_port {
+ u32 qp_gsi_index;
+ };
+
++#define RXE_PORT 1
+ struct rxe_dev {
+ struct ib_device ib_dev;
+ struct ib_device_attr attr;
+@@ -377,8 +378,6 @@ struct rxe_dev {
+ int max_inline_data;
+ struct mutex usdev_lock;
+
+- struct net_device *ndev;
+-
+ struct rxe_pool uc_pool;
+ struct rxe_pool pd_pool;
+ struct rxe_pool ah_pool;
+@@ -406,6 +405,11 @@ struct rxe_dev {
+ struct crypto_shash *tfm;
+ };
+
++static inline struct net_device *rxe_ib_device_get_netdev(struct ib_device *dev)
++{
++ return ib_device_get_netdev(dev, RXE_PORT);
++}
++
+ static inline void rxe_counter_inc(struct rxe_dev *rxe, enum rxe_counters index)
+ {
+ atomic64_inc(&rxe->stats_counters[index]);
+@@ -471,6 +475,7 @@ static inline struct rxe_pd *rxe_mw_pd(struct rxe_mw *mw)
+ return to_rpd(mw->ibmw.pd);
+ }
+
+-int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name);
++int rxe_register_device(struct rxe_dev *rxe, const char *ibdev_name,
++ struct net_device *ndev);
+
+ #endif /* RXE_VERBS_H */
+diff --git a/drivers/infiniband/sw/siw/siw.h b/drivers/infiniband/sw/siw/siw.h
+index 86d4d6a2170e17..ea5eee50dc39d0 100644
+--- a/drivers/infiniband/sw/siw/siw.h
++++ b/drivers/infiniband/sw/siw/siw.h
+@@ -46,6 +46,9 @@
+ */
+ #define SIW_IRQ_MAXBURST_SQ_ACTIVE 4
+
++/* There is always only one port (port 1) per siw device */
++#define SIW_PORT 1
++
+ struct siw_dev_cap {
+ int max_qp;
+ int max_qp_wr;
+@@ -69,16 +72,12 @@ struct siw_pd {
+
+ struct siw_device {
+ struct ib_device base_dev;
+- struct net_device *netdev;
+ struct siw_dev_cap attrs;
+
+ u32 vendor_part_id;
+ int numa_node;
+ char raw_gid[ETH_ALEN];
+
+- /* physical port state (only one port per device) */
+- enum ib_port_state state;
+-
+ spinlock_t lock;
+
+ struct xarray qp_xa;
+diff --git a/drivers/infiniband/sw/siw/siw_cm.c b/drivers/infiniband/sw/siw/siw_cm.c
+index 86323918a570eb..708b13993fdfd3 100644
+--- a/drivers/infiniband/sw/siw/siw_cm.c
++++ b/drivers/infiniband/sw/siw/siw_cm.c
+@@ -1759,6 +1759,7 @@ int siw_create_listen(struct iw_cm_id *id, int backlog)
+ {
+ struct socket *s;
+ struct siw_cep *cep = NULL;
++ struct net_device *ndev = NULL;
+ struct siw_device *sdev = to_siw_dev(id->device);
+ int addr_family = id->local_addr.ss_family;
+ int rv = 0;
+@@ -1779,9 +1780,15 @@ int siw_create_listen(struct iw_cm_id *id, int backlog)
+ struct sockaddr_in *laddr = &to_sockaddr_in(id->local_addr);
+
+ /* For wildcard addr, limit binding to current device only */
+- if (ipv4_is_zeronet(laddr->sin_addr.s_addr))
+- s->sk->sk_bound_dev_if = sdev->netdev->ifindex;
+-
++ if (ipv4_is_zeronet(laddr->sin_addr.s_addr)) {
++ ndev = ib_device_get_netdev(id->device, SIW_PORT);
++ if (ndev) {
++ s->sk->sk_bound_dev_if = ndev->ifindex;
++ } else {
++ rv = -ENODEV;
++ goto error;
++ }
++ }
+ rv = s->ops->bind(s, (struct sockaddr *)laddr,
+ sizeof(struct sockaddr_in));
+ } else {
+@@ -1797,9 +1804,15 @@ int siw_create_listen(struct iw_cm_id *id, int backlog)
+ }
+
+ /* For wildcard addr, limit binding to current device only */
+- if (ipv6_addr_any(&laddr->sin6_addr))
+- s->sk->sk_bound_dev_if = sdev->netdev->ifindex;
+-
++ if (ipv6_addr_any(&laddr->sin6_addr)) {
++ ndev = ib_device_get_netdev(id->device, SIW_PORT);
++ if (ndev) {
++ s->sk->sk_bound_dev_if = ndev->ifindex;
++ } else {
++ rv = -ENODEV;
++ goto error;
++ }
++ }
+ rv = s->ops->bind(s, (struct sockaddr *)laddr,
+ sizeof(struct sockaddr_in6));
+ }
+@@ -1860,6 +1873,7 @@ int siw_create_listen(struct iw_cm_id *id, int backlog)
+ }
+ list_add_tail(&cep->listenq, (struct list_head *)id->provider_data);
+ cep->state = SIW_EPSTATE_LISTENING;
++ dev_put(ndev);
+
+ siw_dbg(id->device, "Listen at laddr %pISp\n", &id->local_addr);
+
+@@ -1879,6 +1893,7 @@ int siw_create_listen(struct iw_cm_id *id, int backlog)
+ siw_cep_set_free_and_put(cep);
+ }
+ sock_release(s);
++ dev_put(ndev);
+
+ return rv;
+ }
+diff --git a/drivers/infiniband/sw/siw/siw_main.c b/drivers/infiniband/sw/siw/siw_main.c
+index 17abef48abcd22..14d3103aee6f8a 100644
+--- a/drivers/infiniband/sw/siw/siw_main.c
++++ b/drivers/infiniband/sw/siw/siw_main.c
+@@ -287,7 +287,6 @@ static struct siw_device *siw_device_create(struct net_device *netdev)
+ return NULL;
+
+ base_dev = &sdev->base_dev;
+- sdev->netdev = netdev;
+
+ if (netdev->addr_len) {
+ memcpy(sdev->raw_gid, netdev->dev_addr,
+@@ -381,12 +380,10 @@ static int siw_netdev_event(struct notifier_block *nb, unsigned long event,
+
+ switch (event) {
+ case NETDEV_UP:
+- sdev->state = IB_PORT_ACTIVE;
+ siw_port_event(sdev, 1, IB_EVENT_PORT_ACTIVE);
+ break;
+
+ case NETDEV_DOWN:
+- sdev->state = IB_PORT_DOWN;
+ siw_port_event(sdev, 1, IB_EVENT_PORT_ERR);
+ break;
+
+@@ -407,12 +404,8 @@ static int siw_netdev_event(struct notifier_block *nb, unsigned long event,
+ siw_port_event(sdev, 1, IB_EVENT_LID_CHANGE);
+ break;
+ /*
+- * Todo: Below netdev events are currently not handled.
++ * All other events are not handled
+ */
+- case NETDEV_CHANGEMTU:
+- case NETDEV_CHANGE:
+- break;
+-
+ default:
+ break;
+ }
+@@ -442,12 +435,6 @@ static int siw_newlink(const char *basedev_name, struct net_device *netdev)
+ sdev = siw_device_create(netdev);
+ if (sdev) {
+ dev_dbg(&netdev->dev, "siw: new device\n");
+-
+- if (netif_running(netdev) && netif_carrier_ok(netdev))
+- sdev->state = IB_PORT_ACTIVE;
+- else
+- sdev->state = IB_PORT_DOWN;
+-
+ ib_mark_name_assigned_by_user(&sdev->base_dev);
+ rv = siw_device_register(sdev, basedev_name);
+ if (rv)
+diff --git a/drivers/infiniband/sw/siw/siw_verbs.c b/drivers/infiniband/sw/siw/siw_verbs.c
+index 986666c19378a1..7ca0297d68a4a7 100644
+--- a/drivers/infiniband/sw/siw/siw_verbs.c
++++ b/drivers/infiniband/sw/siw/siw_verbs.c
+@@ -171,21 +171,29 @@ int siw_query_device(struct ib_device *base_dev, struct ib_device_attr *attr,
+ int siw_query_port(struct ib_device *base_dev, u32 port,
+ struct ib_port_attr *attr)
+ {
+- struct siw_device *sdev = to_siw_dev(base_dev);
++ struct net_device *ndev;
+ int rv;
+
+ memset(attr, 0, sizeof(*attr));
+
+ rv = ib_get_eth_speed(base_dev, port, &attr->active_speed,
+ &attr->active_width);
++ if (rv)
++ return rv;
++
++ ndev = ib_device_get_netdev(base_dev, SIW_PORT);
++ if (!ndev)
++ return -ENODEV;
++
+ attr->gid_tbl_len = 1;
+ attr->max_msg_sz = -1;
+- attr->max_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu);
+- attr->active_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu);
+- attr->phys_state = sdev->state == IB_PORT_ACTIVE ?
++ attr->max_mtu = ib_mtu_int_to_enum(ndev->max_mtu);
++ attr->active_mtu = ib_mtu_int_to_enum(READ_ONCE(ndev->mtu));
++ attr->phys_state = (netif_running(ndev) && netif_carrier_ok(ndev)) ?
+ IB_PORT_PHYS_STATE_LINK_UP : IB_PORT_PHYS_STATE_DISABLED;
++ attr->state = attr->phys_state == IB_PORT_PHYS_STATE_LINK_UP ?
++ IB_PORT_ACTIVE : IB_PORT_DOWN;
+ attr->port_cap_flags = IB_PORT_CM_SUP | IB_PORT_DEVICE_MGMT_SUP;
+- attr->state = sdev->state;
+ /*
+ * All zero
+ *
+@@ -199,6 +207,7 @@ int siw_query_port(struct ib_device *base_dev, u32 port,
+ * attr->subnet_timeout = 0;
+ * attr->init_type_repy = 0;
+ */
++ dev_put(ndev);
+ return rv;
+ }
+
+@@ -505,21 +514,24 @@ int siw_query_qp(struct ib_qp *base_qp, struct ib_qp_attr *qp_attr,
+ int qp_attr_mask, struct ib_qp_init_attr *qp_init_attr)
+ {
+ struct siw_qp *qp;
+- struct siw_device *sdev;
++ struct net_device *ndev;
+
+- if (base_qp && qp_attr && qp_init_attr) {
++ if (base_qp && qp_attr && qp_init_attr)
+ qp = to_siw_qp(base_qp);
+- sdev = to_siw_dev(base_qp->device);
+- } else {
++ else
+ return -EINVAL;
+- }
++
++ ndev = ib_device_get_netdev(base_qp->device, SIW_PORT);
++ if (!ndev)
++ return -ENODEV;
++
+ qp_attr->qp_state = siw_qp_state_to_ib_qp_state[qp->attrs.state];
+ qp_attr->cap.max_inline_data = SIW_MAX_INLINE;
+ qp_attr->cap.max_send_wr = qp->attrs.sq_size;
+ qp_attr->cap.max_send_sge = qp->attrs.sq_max_sges;
+ qp_attr->cap.max_recv_wr = qp->attrs.rq_size;
+ qp_attr->cap.max_recv_sge = qp->attrs.rq_max_sges;
+- qp_attr->path_mtu = ib_mtu_int_to_enum(sdev->netdev->mtu);
++ qp_attr->path_mtu = ib_mtu_int_to_enum(READ_ONCE(ndev->mtu));
+ qp_attr->max_rd_atomic = qp->attrs.irq_size;
+ qp_attr->max_dest_rd_atomic = qp->attrs.orq_size;
+
+@@ -534,6 +546,7 @@ int siw_query_qp(struct ib_qp *base_qp, struct ib_qp_attr *qp_attr,
+
+ qp_init_attr->cap = qp_attr->cap;
+
++ dev_put(ndev);
+ return 0;
+ }
+
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs-srv.c b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+index e83d956478521d..ef4abdea3c2d2e 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs-srv.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs-srv.c
+@@ -349,6 +349,7 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id,
+ struct rtrs_srv_mr *srv_mr;
+ bool need_inval = false;
+ enum ib_send_flags flags;
++ struct ib_sge list;
+ u32 imm;
+ int err;
+
+@@ -401,7 +402,6 @@ static int send_io_resp_imm(struct rtrs_srv_con *con, struct rtrs_srv_op *id,
+ imm = rtrs_to_io_rsp_imm(id->msg_id, errno, need_inval);
+ imm_wr.wr.next = NULL;
+ if (always_invalidate) {
+- struct ib_sge list;
+ struct rtrs_msg_rkey_rsp *msg;
+
+ srv_mr = &srv_path->mrs[id->msg_id];
+diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
+index 3be7bd8cd8cdeb..32abc2916b40ff 100644
+--- a/drivers/irqchip/irq-gic.c
++++ b/drivers/irqchip/irq-gic.c
+@@ -64,7 +64,7 @@ static void gic_check_cpu_features(void)
+
+ union gic_base {
+ void __iomem *common_base;
+- void __percpu * __iomem *percpu_base;
++ void __iomem * __percpu *percpu_base;
+ };
+
+ struct gic_chip_data {
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index e113b99a3eab59..8716004fcf6c90 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -1867,20 +1867,20 @@ static int sdhci_msm_program_key(struct cqhci_host *cq_host,
+ struct sdhci_msm_host *msm_host = sdhci_pltfm_priv(pltfm_host);
+ union cqhci_crypto_cap_entry cap;
+
++ if (!(cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE))
++ return qcom_ice_evict_key(msm_host->ice, slot);
++
+ /* Only AES-256-XTS has been tested so far. */
+ cap = cq_host->crypto_cap_array[cfg->crypto_cap_idx];
+ if (cap.algorithm_id != CQHCI_CRYPTO_ALG_AES_XTS ||
+ cap.key_size != CQHCI_CRYPTO_KEY_SIZE_256)
+ return -EINVAL;
+
+- if (cfg->config_enable & CQHCI_CRYPTO_CONFIGURATION_ENABLE)
+- return qcom_ice_program_key(msm_host->ice,
+- QCOM_ICE_CRYPTO_ALG_AES_XTS,
+- QCOM_ICE_CRYPTO_KEY_SIZE_256,
+- cfg->crypto_key,
+- cfg->data_unit_size, slot);
+- else
+- return qcom_ice_evict_key(msm_host->ice, slot);
++ return qcom_ice_program_key(msm_host->ice,
++ QCOM_ICE_CRYPTO_ALG_AES_XTS,
++ QCOM_ICE_CRYPTO_KEY_SIZE_256,
++ cfg->crypto_key,
++ cfg->data_unit_size, slot);
+ }
+
+ #else /* CONFIG_MMC_CRYPTO */
+diff --git a/drivers/net/dsa/microchip/ksz9477.c b/drivers/net/dsa/microchip/ksz9477.c
+index 0ba658a72d8fea..22556d339d6ea5 100644
+--- a/drivers/net/dsa/microchip/ksz9477.c
++++ b/drivers/net/dsa/microchip/ksz9477.c
+@@ -2,7 +2,7 @@
+ /*
+ * Microchip KSZ9477 switch driver main logic
+ *
+- * Copyright (C) 2017-2019 Microchip Technology Inc.
++ * Copyright (C) 2017-2024 Microchip Technology Inc.
+ */
+
+ #include <linux/kernel.h>
+@@ -983,26 +983,51 @@ void ksz9477_get_caps(struct ksz_device *dev, int port,
+ int ksz9477_set_ageing_time(struct ksz_device *dev, unsigned int msecs)
+ {
+ u32 secs = msecs / 1000;
+- u8 value;
+- u8 data;
++ u8 data, mult, value;
++ u32 max_val;
+ int ret;
+
+- value = FIELD_GET(SW_AGE_PERIOD_7_0_M, secs);
++#define MAX_TIMER_VAL ((1 << 8) - 1)
+
+- ret = ksz_write8(dev, REG_SW_LUE_CTRL_3, value);
+- if (ret < 0)
+- return ret;
++ /* The aging timer comprises a 3-bit multiplier and an 8-bit second
++	 * value. Neither of them can be zero. The maximum timer is then
++ * 7 * 255 = 1785 seconds.
++ */
++ if (!secs)
++ secs = 1;
+
+- data = FIELD_GET(SW_AGE_PERIOD_10_8_M, secs);
++ /* Return error if too large. */
++ else if (secs > 7 * MAX_TIMER_VAL)
++ return -EINVAL;
+
+ ret = ksz_read8(dev, REG_SW_LUE_CTRL_0, &value);
+ if (ret < 0)
+ return ret;
+
+- value &= ~SW_AGE_CNT_M;
+- value |= FIELD_PREP(SW_AGE_CNT_M, data);
++	/* Check whether the multiplier needs to be updated. */
++ mult = FIELD_GET(SW_AGE_CNT_M, value);
++ max_val = MAX_TIMER_VAL;
++ if (mult > 0) {
++		/* Try to use the same multiplier already in the register, as
++ * the hardware default uses multiplier 4 and 75 seconds for
++ * 300 seconds.
++ */
++ max_val = DIV_ROUND_UP(secs, mult);
++ if (max_val > MAX_TIMER_VAL || max_val * mult != secs)
++ max_val = MAX_TIMER_VAL;
++ }
++
++ data = DIV_ROUND_UP(secs, max_val);
++ if (mult != data) {
++ value &= ~SW_AGE_CNT_M;
++ value |= FIELD_PREP(SW_AGE_CNT_M, data);
++ ret = ksz_write8(dev, REG_SW_LUE_CTRL_0, value);
++ if (ret < 0)
++ return ret;
++ }
+
+- return ksz_write8(dev, REG_SW_LUE_CTRL_0, value);
++ value = DIV_ROUND_UP(secs, data);
++ return ksz_write8(dev, REG_SW_LUE_CTRL_3, value);
+ }
+
+ void ksz9477_port_queue_split(struct ksz_device *dev, int port)
+diff --git a/drivers/net/dsa/microchip/ksz9477_reg.h b/drivers/net/dsa/microchip/ksz9477_reg.h
+index 04235c22bf40e4..ff579920078ee3 100644
+--- a/drivers/net/dsa/microchip/ksz9477_reg.h
++++ b/drivers/net/dsa/microchip/ksz9477_reg.h
+@@ -2,7 +2,7 @@
+ /*
+ * Microchip KSZ9477 register definitions
+ *
+- * Copyright (C) 2017-2018 Microchip Technology Inc.
++ * Copyright (C) 2017-2024 Microchip Technology Inc.
+ */
+
+ #ifndef __KSZ9477_REGS_H
+@@ -165,8 +165,6 @@
+ #define SW_VLAN_ENABLE BIT(7)
+ #define SW_DROP_INVALID_VID BIT(6)
+ #define SW_AGE_CNT_M GENMASK(5, 3)
+-#define SW_AGE_CNT_S 3
+-#define SW_AGE_PERIOD_10_8_M GENMASK(10, 8)
+ #define SW_RESV_MCAST_ENABLE BIT(2)
+ #define SW_HASH_OPTION_M 0x03
+ #define SW_HASH_OPTION_CRC 1
+diff --git a/drivers/net/dsa/microchip/lan937x_main.c b/drivers/net/dsa/microchip/lan937x_main.c
+index 824d9309a3d35e..7fe127a075de31 100644
+--- a/drivers/net/dsa/microchip/lan937x_main.c
++++ b/drivers/net/dsa/microchip/lan937x_main.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ /* Microchip LAN937X switch driver main logic
+- * Copyright (C) 2019-2022 Microchip Technology Inc.
++ * Copyright (C) 2019-2024 Microchip Technology Inc.
+ */
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+@@ -260,10 +260,66 @@ int lan937x_change_mtu(struct ksz_device *dev, int port, int new_mtu)
+
+ int lan937x_set_ageing_time(struct ksz_device *dev, unsigned int msecs)
+ {
+- u32 secs = msecs / 1000;
+- u32 value;
++ u8 data, mult, value8;
++ bool in_msec = false;
++ u32 max_val, value;
++ u32 secs = msecs;
+ int ret;
+
++#define MAX_TIMER_VAL ((1 << 20) - 1)
++
++	/* The aging timer comprises a 3-bit multiplier and a 20-bit second
++	 * value. Neither of them can be zero. The maximum timer is then
++	 * 7 * 1048575 = 7340025 seconds. As this value is too large for
++	 * practical use, it can be interpreted as microseconds, making the
++	 * maximum timer 7340 seconds with finer control. This allows up to
++	 * 122 minutes, compared to 29 minutes in the KSZ9477 switch.
++	 */
++ if (msecs % 1000)
++ in_msec = true;
++ else
++ secs /= 1000;
++ if (!secs)
++ secs = 1;
++
++ /* Return error if too large. */
++ else if (secs > 7 * MAX_TIMER_VAL)
++ return -EINVAL;
++
++ /* Configure how to interpret the number value. */
++ ret = ksz_rmw8(dev, REG_SW_LUE_CTRL_2, SW_AGE_CNT_IN_MICROSEC,
++ in_msec ? SW_AGE_CNT_IN_MICROSEC : 0);
++ if (ret < 0)
++ return ret;
++
++ ret = ksz_read8(dev, REG_SW_LUE_CTRL_0, &value8);
++ if (ret < 0)
++ return ret;
++
++	/* Check whether the multiplier needs to be updated. */
++ mult = FIELD_GET(SW_AGE_CNT_M, value8);
++ max_val = MAX_TIMER_VAL;
++ if (mult > 0) {
++		/* Try to use the same multiplier already in the register, as
++ * the hardware default uses multiplier 4 and 75 seconds for
++ * 300 seconds.
++ */
++ max_val = DIV_ROUND_UP(secs, mult);
++ if (max_val > MAX_TIMER_VAL || max_val * mult != secs)
++ max_val = MAX_TIMER_VAL;
++ }
++
++ data = DIV_ROUND_UP(secs, max_val);
++ if (mult != data) {
++ value8 &= ~SW_AGE_CNT_M;
++ value8 |= FIELD_PREP(SW_AGE_CNT_M, data);
++ ret = ksz_write8(dev, REG_SW_LUE_CTRL_0, value8);
++ if (ret < 0)
++ return ret;
++ }
++
++ secs = DIV_ROUND_UP(secs, data);
++
+ value = FIELD_GET(SW_AGE_PERIOD_7_0_M, secs);
+
+ ret = ksz_write8(dev, REG_SW_AGE_PERIOD__1, value);
+diff --git a/drivers/net/dsa/microchip/lan937x_reg.h b/drivers/net/dsa/microchip/lan937x_reg.h
+index 2f22a9d01de36b..35269f74a314b4 100644
+--- a/drivers/net/dsa/microchip/lan937x_reg.h
++++ b/drivers/net/dsa/microchip/lan937x_reg.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 */
+ /* Microchip LAN937X switch register definitions
+- * Copyright (C) 2019-2021 Microchip Technology Inc.
++ * Copyright (C) 2019-2024 Microchip Technology Inc.
+ */
+ #ifndef __LAN937X_REG_H
+ #define __LAN937X_REG_H
+@@ -52,8 +52,7 @@
+
+ #define SW_VLAN_ENABLE BIT(7)
+ #define SW_DROP_INVALID_VID BIT(6)
+-#define SW_AGE_CNT_M 0x7
+-#define SW_AGE_CNT_S 3
++#define SW_AGE_CNT_M GENMASK(5, 3)
+ #define SW_RESV_MCAST_ENABLE BIT(2)
+
+ #define REG_SW_LUE_CTRL_1 0x0311
+@@ -66,6 +65,10 @@
+ #define SW_FAST_AGING BIT(1)
+ #define SW_LINK_AUTO_AGING BIT(0)
+
++#define REG_SW_LUE_CTRL_2 0x0312
++
++#define SW_AGE_CNT_IN_MICROSEC BIT(7)
++
+ #define REG_SW_AGE_PERIOD__1 0x0313
+ #define SW_AGE_PERIOD_7_0_M GENMASK(7, 0)
+
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index 0a68b526e4a821..2b784ced06451f 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -1967,7 +1967,11 @@ static int bcm_sysport_open(struct net_device *dev)
+ unsigned int i;
+ int ret;
+
+- clk_prepare_enable(priv->clk);
++ ret = clk_prepare_enable(priv->clk);
++ if (ret) {
++ netdev_err(dev, "could not enable priv clock\n");
++ return ret;
++ }
+
+ /* Reset UniMAC */
+ umac_reset(priv);
+@@ -2625,7 +2629,11 @@ static int bcm_sysport_probe(struct platform_device *pdev)
+ goto err_deregister_notifier;
+ }
+
+- clk_prepare_enable(priv->clk);
++ ret = clk_prepare_enable(priv->clk);
++ if (ret) {
++ dev_err(&pdev->dev, "could not enable priv clock\n");
++ goto err_deregister_netdev;
++ }
+
+ priv->rev = topctrl_readl(priv, REV_CNTL) & REV_MASK;
+ dev_info(&pdev->dev,
+@@ -2639,6 +2647,8 @@ static int bcm_sysport_probe(struct platform_device *pdev)
+
+ return 0;
+
++err_deregister_netdev:
++ unregister_netdev(dev);
+ err_deregister_notifier:
+ unregister_netdevice_notifier(&priv->netdev_notifier);
+ err_deregister_fixed_link:
+@@ -2808,7 +2818,12 @@ static int __maybe_unused bcm_sysport_resume(struct device *d)
+ if (!netif_running(dev))
+ return 0;
+
+- clk_prepare_enable(priv->clk);
++ ret = clk_prepare_enable(priv->clk);
++ if (ret) {
++ netdev_err(dev, "could not enable priv clock\n");
++ return ret;
++ }
++
+ if (priv->wolopts)
+ clk_disable_unprepare(priv->wol_clk);
+
+diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
+index 301fa1ea4f5167..95471cfcff420a 100644
+--- a/drivers/net/ethernet/google/gve/gve.h
++++ b/drivers/net/ethernet/google/gve/gve.h
+@@ -1134,6 +1134,7 @@ int gve_xdp_xmit_one(struct gve_priv *priv, struct gve_tx_ring *tx,
+ void gve_xdp_tx_flush(struct gve_priv *priv, u32 xdp_qid);
+ bool gve_tx_poll(struct gve_notify_block *block, int budget);
+ bool gve_xdp_poll(struct gve_notify_block *block, int budget);
++int gve_xsk_tx_poll(struct gve_notify_block *block, int budget);
+ int gve_tx_alloc_rings_gqi(struct gve_priv *priv,
+ struct gve_tx_alloc_rings_cfg *cfg);
+ void gve_tx_free_rings_gqi(struct gve_priv *priv,
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 661566db68c860..d404819ebc9b3f 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -333,6 +333,14 @@ int gve_napi_poll(struct napi_struct *napi, int budget)
+
+ if (block->rx) {
+ work_done = gve_rx_poll(block, budget);
++
++		/* Poll XSK TX as part of RX NAPI. Set up re-poll based on max of
++ * TX and RX work done.
++ */
++ if (priv->xdp_prog)
++ work_done = max_t(int, work_done,
++ gve_xsk_tx_poll(block, budget));
++
+ reschedule |= work_done == budget;
+ }
+
+@@ -922,11 +930,13 @@ static void gve_init_sync_stats(struct gve_priv *priv)
+ static void gve_tx_get_curr_alloc_cfg(struct gve_priv *priv,
+ struct gve_tx_alloc_rings_cfg *cfg)
+ {
++ int num_xdp_queues = priv->xdp_prog ? priv->rx_cfg.num_queues : 0;
++
+ cfg->qcfg = &priv->tx_cfg;
+ cfg->raw_addressing = !gve_is_qpl(priv);
+ cfg->ring_size = priv->tx_desc_cnt;
+ cfg->start_idx = 0;
+- cfg->num_rings = gve_num_tx_queues(priv);
++ cfg->num_rings = priv->tx_cfg.num_queues + num_xdp_queues;
+ cfg->tx = priv->tx;
+ }
+
+@@ -1623,8 +1633,8 @@ static int gve_xsk_pool_enable(struct net_device *dev,
+ if (err)
+ return err;
+
+- /* If XDP prog is not installed, return */
+- if (!priv->xdp_prog)
++ /* If XDP prog is not installed or interface is down, return. */
++ if (!priv->xdp_prog || !netif_running(dev))
+ return 0;
+
+ rx = &priv->rx[qid];
+@@ -1669,21 +1679,16 @@ static int gve_xsk_pool_disable(struct net_device *dev,
+ if (qid >= priv->rx_cfg.num_queues)
+ return -EINVAL;
+
+- /* If XDP prog is not installed, unmap DMA and return */
+- if (!priv->xdp_prog)
+- goto done;
+-
+- tx_qid = gve_xdp_tx_queue_id(priv, qid);
+- if (!netif_running(dev)) {
+- priv->rx[qid].xsk_pool = NULL;
+- xdp_rxq_info_unreg(&priv->rx[qid].xsk_rxq);
+- priv->tx[tx_qid].xsk_pool = NULL;
++ /* If XDP prog is not installed or interface is down, unmap DMA and
++ * return.
++ */
++ if (!priv->xdp_prog || !netif_running(dev))
+ goto done;
+- }
+
+ napi_rx = &priv->ntfy_blocks[priv->rx[qid].ntfy_id].napi;
+ napi_disable(napi_rx); /* make sure current rx poll is done */
+
++ tx_qid = gve_xdp_tx_queue_id(priv, qid);
+ napi_tx = &priv->ntfy_blocks[priv->tx[tx_qid].ntfy_id].napi;
+ napi_disable(napi_tx); /* make sure current tx poll is done */
+
+@@ -1709,24 +1714,20 @@ static int gve_xsk_pool_disable(struct net_device *dev,
+ static int gve_xsk_wakeup(struct net_device *dev, u32 queue_id, u32 flags)
+ {
+ struct gve_priv *priv = netdev_priv(dev);
+- int tx_queue_id = gve_xdp_tx_queue_id(priv, queue_id);
++ struct napi_struct *napi;
++
++ if (!gve_get_napi_enabled(priv))
++ return -ENETDOWN;
+
+ if (queue_id >= priv->rx_cfg.num_queues || !priv->xdp_prog)
+ return -EINVAL;
+
+- if (flags & XDP_WAKEUP_TX) {
+- struct gve_tx_ring *tx = &priv->tx[tx_queue_id];
+- struct napi_struct *napi =
+- &priv->ntfy_blocks[tx->ntfy_id].napi;
+-
+- if (!napi_if_scheduled_mark_missed(napi)) {
+- /* Call local_bh_enable to trigger SoftIRQ processing */
+- local_bh_disable();
+- napi_schedule(napi);
+- local_bh_enable();
+- }
+-
+- tx->xdp_xsk_wakeup++;
++ napi = &priv->ntfy_blocks[gve_rx_idx_to_ntfy(priv, queue_id)].napi;
++ if (!napi_if_scheduled_mark_missed(napi)) {
++ /* Call local_bh_enable to trigger SoftIRQ processing */
++ local_bh_disable();
++ napi_schedule(napi);
++ local_bh_enable();
+ }
+
+ return 0;
+@@ -1837,6 +1838,7 @@ int gve_adjust_queues(struct gve_priv *priv,
+ {
+ struct gve_tx_alloc_rings_cfg tx_alloc_cfg = {0};
+ struct gve_rx_alloc_rings_cfg rx_alloc_cfg = {0};
++ int num_xdp_queues;
+ int err;
+
+ gve_get_curr_alloc_cfgs(priv, &tx_alloc_cfg, &rx_alloc_cfg);
+@@ -1847,6 +1849,10 @@ int gve_adjust_queues(struct gve_priv *priv,
+ rx_alloc_cfg.qcfg = &new_rx_config;
+ tx_alloc_cfg.num_rings = new_tx_config.num_queues;
+
++ /* Add dedicated XDP TX queues if enabled. */
++ num_xdp_queues = priv->xdp_prog ? new_rx_config.num_queues : 0;
++ tx_alloc_cfg.num_rings += num_xdp_queues;
++
+ if (netif_running(priv->dev)) {
+ err = gve_adjust_config(priv, &tx_alloc_cfg, &rx_alloc_cfg);
+ return err;
+@@ -1891,6 +1897,9 @@ static void gve_turndown(struct gve_priv *priv)
+
+ gve_clear_napi_enabled(priv);
+ gve_clear_report_stats(priv);
++
++ /* Make sure that all traffic is finished processing. */
++ synchronize_net();
+ }
+
+ static void gve_turnup(struct gve_priv *priv)
+diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
+index e7fb7d6d283df1..4350ebd9c2bd9e 100644
+--- a/drivers/net/ethernet/google/gve/gve_tx.c
++++ b/drivers/net/ethernet/google/gve/gve_tx.c
+@@ -206,7 +206,10 @@ void gve_tx_stop_ring_gqi(struct gve_priv *priv, int idx)
+ return;
+
+ gve_remove_napi(priv, ntfy_idx);
+- gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
++ if (tx->q_num < priv->tx_cfg.num_queues)
++ gve_clean_tx_done(priv, tx, priv->tx_desc_cnt, false);
++ else
++ gve_clean_xdp_done(priv, tx, priv->tx_desc_cnt);
+ netdev_tx_reset_queue(tx->netdev_txq);
+ gve_tx_remove_from_block(priv, idx);
+ }
+@@ -834,9 +837,12 @@ int gve_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames,
+ struct gve_tx_ring *tx;
+ int i, err = 0, qid;
+
+- if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
++ if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK) || !priv->xdp_prog)
+ return -EINVAL;
+
++ if (!gve_get_napi_enabled(priv))
++ return -ENETDOWN;
++
+ qid = gve_xdp_tx_queue_id(priv,
+ smp_processor_id() % priv->num_xdp_queues);
+
+@@ -975,33 +981,41 @@ static int gve_xsk_tx(struct gve_priv *priv, struct gve_tx_ring *tx,
+ return sent;
+ }
+
++int gve_xsk_tx_poll(struct gve_notify_block *rx_block, int budget)
++{
++ struct gve_rx_ring *rx = rx_block->rx;
++ struct gve_priv *priv = rx->gve;
++ struct gve_tx_ring *tx;
++ int sent = 0;
++
++ tx = &priv->tx[gve_xdp_tx_queue_id(priv, rx->q_num)];
++ if (tx->xsk_pool) {
++ sent = gve_xsk_tx(priv, tx, budget);
++
++ u64_stats_update_begin(&tx->statss);
++ tx->xdp_xsk_sent += sent;
++ u64_stats_update_end(&tx->statss);
++ if (xsk_uses_need_wakeup(tx->xsk_pool))
++ xsk_set_tx_need_wakeup(tx->xsk_pool);
++ }
++
++ return sent;
++}
++
+ bool gve_xdp_poll(struct gve_notify_block *block, int budget)
+ {
+ struct gve_priv *priv = block->priv;
+ struct gve_tx_ring *tx = block->tx;
+ u32 nic_done;
+- bool repoll;
+ u32 to_do;
+
+ /* Find out how much work there is to be done */
+ nic_done = gve_tx_load_event_counter(priv, tx);
+ to_do = min_t(u32, (nic_done - tx->done), budget);
+ gve_clean_xdp_done(priv, tx, to_do);
+- repoll = nic_done != tx->done;
+-
+- if (tx->xsk_pool) {
+- int sent = gve_xsk_tx(priv, tx, budget);
+-
+- u64_stats_update_begin(&tx->statss);
+- tx->xdp_xsk_sent += sent;
+- u64_stats_update_end(&tx->statss);
+- repoll |= (sent == budget);
+- if (xsk_uses_need_wakeup(tx->xsk_pool))
+- xsk_set_tx_need_wakeup(tx->xsk_pool);
+- }
+
+ /* If we still have work we want to repoll */
+- return repoll;
++ return nic_done != tx->done;
+ }
+
+ bool gve_tx_poll(struct gve_notify_block *block, int budget)
+diff --git a/drivers/net/ethernet/marvell/mv643xx_eth.c b/drivers/net/ethernet/marvell/mv643xx_eth.c
+index 9e80899546d996..83b9905666e24f 100644
+--- a/drivers/net/ethernet/marvell/mv643xx_eth.c
++++ b/drivers/net/ethernet/marvell/mv643xx_eth.c
+@@ -2708,9 +2708,15 @@ static struct platform_device *port_platdev[3];
+
+ static void mv643xx_eth_shared_of_remove(void)
+ {
++ struct mv643xx_eth_platform_data *pd;
+ int n;
+
+ for (n = 0; n < 3; n++) {
++ if (!port_platdev[n])
++ continue;
++ pd = dev_get_platdata(&port_platdev[n]->dev);
++ if (pd)
++ of_node_put(pd->phy_node);
+ platform_device_del(port_platdev[n]);
+ port_platdev[n] = NULL;
+ }
+@@ -2773,8 +2779,10 @@ static int mv643xx_eth_shared_of_add_port(struct platform_device *pdev,
+ }
+
+ ppdev = platform_device_alloc(MV643XX_ETH_NAME, dev_num);
+- if (!ppdev)
+- return -ENOMEM;
++ if (!ppdev) {
++ ret = -ENOMEM;
++ goto put_err;
++ }
+ ppdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+ ppdev->dev.of_node = pnp;
+
+@@ -2796,6 +2804,8 @@ static int mv643xx_eth_shared_of_add_port(struct platform_device *pdev,
+
+ port_err:
+ platform_device_put(ppdev);
++put_err:
++ of_node_put(ppd.phy_node);
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
+index a7a16eac189134..52d99908d0e9d3 100644
+--- a/drivers/net/ethernet/marvell/sky2.c
++++ b/drivers/net/ethernet/marvell/sky2.c
+@@ -130,6 +130,7 @@ static const struct pci_device_id sky2_id_table[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x436C) }, /* 88E8072 */
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x436D) }, /* 88E8055 */
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4370) }, /* 88E8075 */
++ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4373) }, /* 88E8075 */
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4380) }, /* 88E8057 */
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4381) }, /* 88E8059 */
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL, 0x4382) }, /* 88E8079 */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
+index cc9bcc42003242..6ab02f3fc29123 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/macsec.c
+@@ -339,9 +339,13 @@ static int mlx5e_macsec_init_sa_fs(struct macsec_context *ctx,
+ {
+ struct mlx5e_priv *priv = macsec_netdev_priv(ctx->netdev);
+ struct mlx5_macsec_fs *macsec_fs = priv->mdev->macsec_fs;
++ const struct macsec_tx_sc *tx_sc = &ctx->secy->tx_sc;
+ struct mlx5_macsec_rule_attrs rule_attrs;
+ union mlx5_macsec_rule *macsec_rule;
+
++ if (is_tx && tx_sc->encoding_sa != sa->assoc_num)
++ return 0;
++
+ rule_attrs.macsec_obj_id = sa->macsec_obj_id;
+ rule_attrs.sci = sa->sci;
+ rule_attrs.assoc_num = sa->assoc_num;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index c14bef83d84d0f..62b8a7c1c6b54a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -6510,8 +6510,23 @@ static void _mlx5e_remove(struct auxiliary_device *adev)
+
+ mlx5_core_uplink_netdev_set(mdev, NULL);
+ mlx5e_dcbnl_delete_app(priv);
+- unregister_netdev(priv->netdev);
+- _mlx5e_suspend(adev, false);
++	/* When unloading the driver, the netdev is still in registered
++	 * state if it came from legacy mode. If from switchdev mode, it
++	 * was already unregistered before changing to the NIC profile.
++	 */
++ if (priv->netdev->reg_state == NETREG_REGISTERED) {
++ unregister_netdev(priv->netdev);
++ _mlx5e_suspend(adev, false);
++ } else {
++ struct mlx5_core_dev *pos;
++ int i;
++
++ if (test_bit(MLX5E_STATE_DESTROYING, &priv->state))
++ mlx5_sd_for_each_dev(i, mdev, pos)
++ mlx5e_destroy_mdev_resources(pos);
++ else
++ _mlx5e_suspend(adev, true);
++ }
+ /* Avoid cleanup if profile rollback failed. */
+ if (priv->profile)
+ priv->profile->cleanup(priv);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 92094bf60d5986..0657d107653577 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -1508,6 +1508,21 @@ mlx5e_vport_uplink_rep_unload(struct mlx5e_rep_priv *rpriv)
+
+ priv = netdev_priv(netdev);
+
++	/* This bit is set when using devlink to change the eswitch mode from
++	 * switchdev to legacy. As we need to keep the uplink netdev ifindex,
++	 * we detach the uplink representor profile and attach only the NIC
++	 * profile. In this case the netdev is unregistered later, when the
++	 * NIC auxiliary driver is unloaded.
++	 * We explicitly block the devlink eswitch mode change if any IPSec
++	 * rules are offloaded, but can't block other cases such as driver
++	 * unload and devlink reload. For those we must unregister the netdev
++	 * before the profile change, to avoid a resource leak: the offloaded
++	 * rules would otherwise have no chance to be unoffloaded before the
++	 * cleanup triggered by detaching the uplink representor profile.
++	 */
++ if (!(priv->mdev->priv.flags & MLX5_PRIV_FLAGS_SWITCH_LEGACY))
++ unregister_netdev(netdev);
++
+ mlx5e_netdev_attach_nic_profile(priv);
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
+index 5a0047bdcb5105..ed977ae75fab89 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/esw/ipsec_fs.c
+@@ -150,11 +150,11 @@ void mlx5_esw_ipsec_restore_dest_uplink(struct mlx5_core_dev *mdev)
+ unsigned long i;
+ int err;
+
+- xa_for_each(&esw->offloads.vport_reps, i, rep) {
+- rpriv = rep->rep_data[REP_ETH].priv;
+- if (!rpriv || !rpriv->netdev)
++ mlx5_esw_for_each_rep(esw, i, rep) {
++ if (atomic_read(&rep->rep_data[REP_ETH].state) != REP_LOADED)
+ continue;
+
++ rpriv = rep->rep_data[REP_ETH].priv;
+ rhashtable_walk_enter(&rpriv->tc_ht, &iter);
+ rhashtable_walk_start(&iter);
+ while ((flow = rhashtable_walk_next(&iter)) != NULL) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+index f44b4c7ebcfd73..48fd0400ffd4ec 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+@@ -716,6 +716,9 @@ void mlx5e_tc_clean_fdb_peer_flows(struct mlx5_eswitch *esw);
+ MLX5_CAP_GEN_2((esw->dev), ec_vf_vport_base) +\
+ (last) - 1)
+
++#define mlx5_esw_for_each_rep(esw, i, rep) \
++ xa_for_each(&((esw)->offloads.vport_reps), i, rep)
++
+ struct mlx5_eswitch *__must_check
+ mlx5_devlink_eswitch_get(struct devlink *devlink);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+index 8cf61ae8b89d24..3950b1d4b3d8e5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch_offloads.c
+@@ -53,9 +53,6 @@
+ #include "lag/lag.h"
+ #include "en/tc/post_meter.h"
+
+-#define mlx5_esw_for_each_rep(esw, i, rep) \
+- xa_for_each(&((esw)->offloads.vport_reps), i, rep)
+-
+ /* There are two match-all miss flows, one for unicast dst mac and
+ * one for multicast.
+ */
+@@ -3762,6 +3759,8 @@ int mlx5_devlink_eswitch_mode_set(struct devlink *devlink, u16 mode,
+ esw->eswitch_operation_in_progress = true;
+ up_write(&esw->mode_lock);
+
++ if (mode == DEVLINK_ESWITCH_MODE_LEGACY)
++ esw->dev->priv.flags |= MLX5_PRIV_FLAGS_SWITCH_LEGACY;
+ mlx5_eswitch_disable_locked(esw);
+ if (mode == DEVLINK_ESWITCH_MODE_SWITCHDEV) {
+ if (mlx5_devlink_trap_get_num_active(esw->dev)) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+index 6fa06ba2d34653..f57c84e5128bc7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+@@ -1067,7 +1067,6 @@ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
+ int inlen, err, eqn;
+ void *cqc, *in;
+ __be64 *pas;
+- int vector;
+ u32 i;
+
+ cq = kzalloc(sizeof(*cq), GFP_KERNEL);
+@@ -1096,8 +1095,7 @@ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
+ if (!in)
+ goto err_cqwq;
+
+- vector = raw_smp_processor_id() % mlx5_comp_vectors_max(mdev);
+- err = mlx5_comp_eqn_get(mdev, vector, &eqn);
++ err = mlx5_comp_eqn_get(mdev, 0, &eqn);
+ if (err) {
+ kvfree(in);
+ goto err_cqwq;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+index 4b5fd71c897ddb..32d2e61f2b8238 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_span.c
+@@ -423,8 +423,7 @@ mlxsw_sp_span_gretap4_route(const struct net_device *to_dev,
+
+ parms = mlxsw_sp_ipip_netdev_parms4(to_dev);
+ ip_tunnel_init_flow(&fl4, parms.iph.protocol, *daddrp, *saddrp,
+- 0, 0, dev_net(to_dev), parms.link, tun->fwmark, 0,
+- 0);
++ 0, 0, tun->net, parms.link, tun->fwmark, 0, 0);
+
+ rt = ip_route_output_key(tun->net, &fl4);
+ if (IS_ERR(rt))
+diff --git a/drivers/net/ethernet/sfc/tc_conntrack.c b/drivers/net/ethernet/sfc/tc_conntrack.c
+index d90206f27161e4..c0603f54cec3ad 100644
+--- a/drivers/net/ethernet/sfc/tc_conntrack.c
++++ b/drivers/net/ethernet/sfc/tc_conntrack.c
+@@ -16,7 +16,7 @@ static int efx_tc_flow_block(enum tc_setup_type type, void *type_data,
+ void *cb_priv);
+
+ static const struct rhashtable_params efx_tc_ct_zone_ht_params = {
+- .key_len = offsetof(struct efx_tc_ct_zone, linkage),
++ .key_len = sizeof_field(struct efx_tc_ct_zone, zone),
+ .key_offset = 0,
+ .head_offset = offsetof(struct efx_tc_ct_zone, linkage),
+ };
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+index ad868e8d195d59..aaf008bdbbcd46 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+@@ -405,22 +405,6 @@ static int stmmac_of_get_mac_mode(struct device_node *np)
+ return -ENODEV;
+ }
+
+-/**
+- * stmmac_remove_config_dt - undo the effects of stmmac_probe_config_dt()
+- * @pdev: platform_device structure
+- * @plat: driver data platform structure
+- *
+- * Release resources claimed by stmmac_probe_config_dt().
+- */
+-static void stmmac_remove_config_dt(struct platform_device *pdev,
+- struct plat_stmmacenet_data *plat)
+-{
+- clk_disable_unprepare(plat->stmmac_clk);
+- clk_disable_unprepare(plat->pclk);
+- of_node_put(plat->phy_node);
+- of_node_put(plat->mdio_node);
+-}
+-
+ /**
+ * stmmac_probe_config_dt - parse device-tree driver parameters
+ * @pdev: platform_device structure
+@@ -490,8 +474,10 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ dev_warn(&pdev->dev, "snps,phy-addr property is deprecated\n");
+
+ rc = stmmac_mdio_setup(plat, np, &pdev->dev);
+- if (rc)
+- return ERR_PTR(rc);
++ if (rc) {
++ ret = ERR_PTR(rc);
++ goto error_put_phy;
++ }
+
+ of_property_read_u32(np, "tx-fifo-depth", &plat->tx_fifo_size);
+
+@@ -580,8 +566,8 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ dma_cfg = devm_kzalloc(&pdev->dev, sizeof(*dma_cfg),
+ GFP_KERNEL);
+ if (!dma_cfg) {
+- stmmac_remove_config_dt(pdev, plat);
+- return ERR_PTR(-ENOMEM);
++ ret = ERR_PTR(-ENOMEM);
++ goto error_put_mdio;
+ }
+ plat->dma_cfg = dma_cfg;
+
+@@ -609,8 +595,8 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+
+ rc = stmmac_mtl_setup(pdev, plat);
+ if (rc) {
+- stmmac_remove_config_dt(pdev, plat);
+- return ERR_PTR(rc);
++ ret = ERR_PTR(rc);
++ goto error_put_mdio;
+ }
+
+ /* clock setup */
+@@ -662,6 +648,10 @@ stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+ clk_disable_unprepare(plat->pclk);
+ error_pclk_get:
+ clk_disable_unprepare(plat->stmmac_clk);
++error_put_mdio:
++ of_node_put(plat->mdio_node);
++error_put_phy:
++ of_node_put(plat->phy_node);
+
+ return ret;
+ }
+@@ -670,16 +660,17 @@ static void devm_stmmac_remove_config_dt(void *data)
+ {
+ struct plat_stmmacenet_data *plat = data;
+
+- /* Platform data argument is unused */
+- stmmac_remove_config_dt(NULL, plat);
++ clk_disable_unprepare(plat->stmmac_clk);
++ clk_disable_unprepare(plat->pclk);
++ of_node_put(plat->mdio_node);
++ of_node_put(plat->phy_node);
+ }
+
+ /**
+ * devm_stmmac_probe_config_dt
+ * @pdev: platform_device structure
+ * @mac: MAC address to use
+- * Description: Devres variant of stmmac_probe_config_dt(). Does not require
+- * the user to call stmmac_remove_config_dt() at driver detach.
++ * Description: Devres variant of stmmac_probe_config_dt().
+ */
+ struct plat_stmmacenet_data *
+ devm_stmmac_probe_config_dt(struct platform_device *pdev, u8 *mac)
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index ba6db61dd227c4..dfca13b82bdce2 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -3525,7 +3525,7 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
+ init_completion(&common->tdown_complete);
+ common->tx_ch_num = AM65_CPSW_DEFAULT_TX_CHNS;
+ common->rx_ch_num_flows = AM65_CPSW_DEFAULT_RX_CHN_FLOWS;
+- common->pf_p0_rx_ptype_rrobin = false;
++ common->pf_p0_rx_ptype_rrobin = true;
+ common->default_vlan = 1;
+
+ common->ports = devm_kcalloc(dev, common->port_num,
+diff --git a/drivers/net/ethernet/ti/icssg/icss_iep.c b/drivers/net/ethernet/ti/icssg/icss_iep.c
+index 5d6d1cf78e93f2..768578c0d9587d 100644
+--- a/drivers/net/ethernet/ti/icssg/icss_iep.c
++++ b/drivers/net/ethernet/ti/icssg/icss_iep.c
+@@ -215,6 +215,9 @@ static void icss_iep_enable_shadow_mode(struct icss_iep *iep)
+ for (cmp = IEP_MIN_CMP; cmp < IEP_MAX_CMP; cmp++) {
+ regmap_update_bits(iep->map, ICSS_IEP_CMP_STAT_REG,
+ IEP_CMP_STATUS(cmp), IEP_CMP_STATUS(cmp));
++
++ regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
++ IEP_CMP_CFG_CMP_EN(cmp), 0);
+ }
+
+ /* enable reset counter on CMP0 event */
+@@ -780,6 +783,11 @@ int icss_iep_exit(struct icss_iep *iep)
+ }
+ icss_iep_disable(iep);
+
++ if (iep->pps_enabled)
++ icss_iep_pps_enable(iep, false);
++ else if (iep->perout_enabled)
++ icss_iep_perout_enable(iep, NULL, false);
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(icss_iep_exit);
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_common.c b/drivers/net/ethernet/ti/icssg/icssg_common.c
+index fdebeb2f84e00c..74f0f200a89d4f 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_common.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_common.c
+@@ -855,31 +855,6 @@ irqreturn_t prueth_rx_irq(int irq, void *dev_id)
+ }
+ EXPORT_SYMBOL_GPL(prueth_rx_irq);
+
+-void prueth_emac_stop(struct prueth_emac *emac)
+-{
+- struct prueth *prueth = emac->prueth;
+- int slice;
+-
+- switch (emac->port_id) {
+- case PRUETH_PORT_MII0:
+- slice = ICSS_SLICE0;
+- break;
+- case PRUETH_PORT_MII1:
+- slice = ICSS_SLICE1;
+- break;
+- default:
+- netdev_err(emac->ndev, "invalid port\n");
+- return;
+- }
+-
+- emac->fw_running = 0;
+- if (!emac->is_sr1)
+- rproc_shutdown(prueth->txpru[slice]);
+- rproc_shutdown(prueth->rtu[slice]);
+- rproc_shutdown(prueth->pru[slice]);
+-}
+-EXPORT_SYMBOL_GPL(prueth_emac_stop);
+-
+ void prueth_cleanup_tx_ts(struct prueth_emac *emac)
+ {
+ int i;
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_config.c b/drivers/net/ethernet/ti/icssg/icssg_config.c
+index 5d2491c2943a8b..ddfd1c02a88544 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_config.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_config.c
+@@ -397,7 +397,7 @@ static int prueth_emac_buffer_setup(struct prueth_emac *emac)
+ return 0;
+ }
+
+-static void icssg_init_emac_mode(struct prueth *prueth)
++void icssg_init_emac_mode(struct prueth *prueth)
+ {
+ /* When the device is configured as a bridge and it is being brought
+ * back to the emac mode, the host mac address has to be set as 0.
+@@ -406,9 +406,6 @@ static void icssg_init_emac_mode(struct prueth *prueth)
+ int i;
+ u8 mac[ETH_ALEN] = { 0 };
+
+- if (prueth->emacs_initialized)
+- return;
+-
+ /* Set VLAN TABLE address base */
+ regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, SMEM_VLAN_OFFSET_MASK,
+ addr << SMEM_VLAN_OFFSET);
+@@ -423,15 +420,13 @@ static void icssg_init_emac_mode(struct prueth *prueth)
+ /* Clear host MAC address */
+ icssg_class_set_host_mac_addr(prueth->miig_rt, mac);
+ }
++EXPORT_SYMBOL_GPL(icssg_init_emac_mode);
+
+-static void icssg_init_fw_offload_mode(struct prueth *prueth)
++void icssg_init_fw_offload_mode(struct prueth *prueth)
+ {
+ u32 addr = prueth->shram.pa + EMAC_ICSSG_SWITCH_DEFAULT_VLAN_TABLE_OFFSET;
+ int i;
+
+- if (prueth->emacs_initialized)
+- return;
+-
+ /* Set VLAN TABLE address base */
+ regmap_update_bits(prueth->miig_rt, FDB_GEN_CFG1, SMEM_VLAN_OFFSET_MASK,
+ addr << SMEM_VLAN_OFFSET);
+@@ -448,6 +443,7 @@ static void icssg_init_fw_offload_mode(struct prueth *prueth)
+ icssg_class_set_host_mac_addr(prueth->miig_rt, prueth->hw_bridge_dev->dev_addr);
+ icssg_set_pvid(prueth, prueth->default_vlan, PRUETH_PORT_HOST);
+ }
++EXPORT_SYMBOL_GPL(icssg_init_fw_offload_mode);
+
+ int icssg_config(struct prueth *prueth, struct prueth_emac *emac, int slice)
+ {
+@@ -455,11 +451,6 @@ int icssg_config(struct prueth *prueth, struct prueth_emac *emac, int slice)
+ struct icssg_flow_cfg __iomem *flow_cfg;
+ int ret;
+
+- if (prueth->is_switch_mode || prueth->is_hsr_offload_mode)
+- icssg_init_fw_offload_mode(prueth);
+- else
+- icssg_init_emac_mode(prueth);
+-
+ memset_io(config, 0, TAS_GATE_MASK_LIST0);
+ icssg_miig_queues_init(prueth, slice);
+
+@@ -786,3 +777,27 @@ void icssg_set_pvid(struct prueth *prueth, u8 vid, u8 port)
+ writel(pvid, prueth->shram.va + EMAC_ICSSG_SWITCH_PORT0_DEFAULT_VLAN_OFFSET);
+ }
+ EXPORT_SYMBOL_GPL(icssg_set_pvid);
++
++int emac_fdb_flow_id_updated(struct prueth_emac *emac)
++{
++ struct mgmt_cmd_rsp fdb_cmd_rsp = { 0 };
++ int slice = prueth_emac_slice(emac);
++ struct mgmt_cmd fdb_cmd = { 0 };
++ int ret;
++
++ fdb_cmd.header = ICSSG_FW_MGMT_CMD_HEADER;
++ fdb_cmd.type = ICSSG_FW_MGMT_FDB_CMD_TYPE_RX_FLOW;
++ fdb_cmd.seqnum = ++(emac->prueth->icssg_hwcmdseq);
++ fdb_cmd.param = 0;
++
++ fdb_cmd.param |= (slice << 4);
++ fdb_cmd.cmd_args[0] = 0;
++
++ ret = icssg_send_fdb_msg(emac, &fdb_cmd, &fdb_cmd_rsp);
++ if (ret)
++ return ret;
++
++ WARN_ON(fdb_cmd.seqnum != fdb_cmd_rsp.seqnum);
++ return fdb_cmd_rsp.status == 1 ? 0 : -EINVAL;
++}
++EXPORT_SYMBOL_GPL(emac_fdb_flow_id_updated);
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_config.h b/drivers/net/ethernet/ti/icssg/icssg_config.h
+index 92c2deaa306835..c884e9fa099e6f 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_config.h
++++ b/drivers/net/ethernet/ti/icssg/icssg_config.h
+@@ -55,6 +55,7 @@ struct icssg_rxq_ctx {
+ #define ICSSG_FW_MGMT_FDB_CMD_TYPE 0x03
+ #define ICSSG_FW_MGMT_CMD_TYPE 0x04
+ #define ICSSG_FW_MGMT_PKT 0x80000000
++#define ICSSG_FW_MGMT_FDB_CMD_TYPE_RX_FLOW 0x05
+
+ struct icssg_r30_cmd {
+ u32 cmd[4];
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+index fe2fd1bfc904db..cb11635a8d1209 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+@@ -164,11 +164,26 @@ static struct icssg_firmwares icssg_emac_firmwares[] = {
+ }
+ };
+
+-static int prueth_emac_start(struct prueth *prueth, struct prueth_emac *emac)
++static int prueth_start(struct rproc *rproc, const char *fw_name)
++{
++ int ret;
++
++ ret = rproc_set_firmware(rproc, fw_name);
++ if (ret)
++ return ret;
++ return rproc_boot(rproc);
++}
++
++static void prueth_shutdown(struct rproc *rproc)
++{
++ rproc_shutdown(rproc);
++}
++
++static int prueth_emac_start(struct prueth *prueth)
+ {
+ struct icssg_firmwares *firmwares;
+ struct device *dev = prueth->dev;
+- int slice, ret;
++ int ret, slice;
+
+ if (prueth->is_switch_mode)
+ firmwares = icssg_switch_firmwares;
+@@ -177,49 +192,126 @@ static int prueth_emac_start(struct prueth *prueth, struct prueth_emac *emac)
+ else
+ firmwares = icssg_emac_firmwares;
+
+- slice = prueth_emac_slice(emac);
+- if (slice < 0) {
+- netdev_err(emac->ndev, "invalid port\n");
+- return -EINVAL;
++ for (slice = 0; slice < PRUETH_NUM_MACS; slice++) {
++ ret = prueth_start(prueth->pru[slice], firmwares[slice].pru);
++ if (ret) {
++ dev_err(dev, "failed to boot PRU%d: %d\n", slice, ret);
++ goto unwind_slices;
++ }
++
++ ret = prueth_start(prueth->rtu[slice], firmwares[slice].rtu);
++ if (ret) {
++ dev_err(dev, "failed to boot RTU%d: %d\n", slice, ret);
++ rproc_shutdown(prueth->pru[slice]);
++ goto unwind_slices;
++ }
++
++ ret = prueth_start(prueth->txpru[slice], firmwares[slice].txpru);
++ if (ret) {
++ dev_err(dev, "failed to boot TX_PRU%d: %d\n", slice, ret);
++ rproc_shutdown(prueth->rtu[slice]);
++ rproc_shutdown(prueth->pru[slice]);
++ goto unwind_slices;
++ }
+ }
+
+- ret = icssg_config(prueth, emac, slice);
+- if (ret)
+- return ret;
++ return 0;
+
+- ret = rproc_set_firmware(prueth->pru[slice], firmwares[slice].pru);
+- ret = rproc_boot(prueth->pru[slice]);
+- if (ret) {
+- dev_err(dev, "failed to boot PRU%d: %d\n", slice, ret);
+- return -EINVAL;
++unwind_slices:
++ while (--slice >= 0) {
++ prueth_shutdown(prueth->txpru[slice]);
++ prueth_shutdown(prueth->rtu[slice]);
++ prueth_shutdown(prueth->pru[slice]);
+ }
+
+- ret = rproc_set_firmware(prueth->rtu[slice], firmwares[slice].rtu);
+- ret = rproc_boot(prueth->rtu[slice]);
+- if (ret) {
+- dev_err(dev, "failed to boot RTU%d: %d\n", slice, ret);
+- goto halt_pru;
++ return ret;
++}
++
++static void prueth_emac_stop(struct prueth *prueth)
++{
++ int slice;
++
++ for (slice = 0; slice < PRUETH_NUM_MACS; slice++) {
++ prueth_shutdown(prueth->txpru[slice]);
++ prueth_shutdown(prueth->rtu[slice]);
++ prueth_shutdown(prueth->pru[slice]);
+ }
++}
++
++static int prueth_emac_common_start(struct prueth *prueth)
++{
++ struct prueth_emac *emac;
++ int ret = 0;
++ int slice;
++
++ if (!prueth->emac[ICSS_SLICE0] && !prueth->emac[ICSS_SLICE1])
++ return -EINVAL;
++
++ /* clear SMEM and MSMC settings for all slices */
++ memset_io(prueth->msmcram.va, 0, prueth->msmcram.size);
++ memset_io(prueth->shram.va, 0, ICSSG_CONFIG_OFFSET_SLICE1 * PRUETH_NUM_MACS);
++
++ icssg_class_default(prueth->miig_rt, ICSS_SLICE0, 0, false);
++ icssg_class_default(prueth->miig_rt, ICSS_SLICE1, 0, false);
++
++ if (prueth->is_switch_mode || prueth->is_hsr_offload_mode)
++ icssg_init_fw_offload_mode(prueth);
++ else
++ icssg_init_emac_mode(prueth);
++
++ for (slice = 0; slice < PRUETH_NUM_MACS; slice++) {
++ emac = prueth->emac[slice];
++ if (!emac)
++ continue;
++ ret = icssg_config(prueth, emac, slice);
++ if (ret)
++ goto disable_class;
++ }
++
++ ret = prueth_emac_start(prueth);
++ if (ret)
++ goto disable_class;
+
+- ret = rproc_set_firmware(prueth->txpru[slice], firmwares[slice].txpru);
+- ret = rproc_boot(prueth->txpru[slice]);
++ emac = prueth->emac[ICSS_SLICE0] ? prueth->emac[ICSS_SLICE0] :
++ prueth->emac[ICSS_SLICE1];
++ ret = icss_iep_init(emac->iep, &prueth_iep_clockops,
++ emac, IEP_DEFAULT_CYCLE_TIME_NS);
+ if (ret) {
+- dev_err(dev, "failed to boot TX_PRU%d: %d\n", slice, ret);
+- goto halt_rtu;
++ dev_err(prueth->dev, "Failed to initialize IEP module\n");
++ goto stop_pruss;
+ }
+
+- emac->fw_running = 1;
+ return 0;
+
+-halt_rtu:
+- rproc_shutdown(prueth->rtu[slice]);
++stop_pruss:
++ prueth_emac_stop(prueth);
+
+-halt_pru:
+- rproc_shutdown(prueth->pru[slice]);
++disable_class:
++ icssg_class_disable(prueth->miig_rt, ICSS_SLICE0);
++ icssg_class_disable(prueth->miig_rt, ICSS_SLICE1);
+
+ return ret;
+ }
+
++static int prueth_emac_common_stop(struct prueth *prueth)
++{
++ struct prueth_emac *emac;
++
++ if (!prueth->emac[ICSS_SLICE0] && !prueth->emac[ICSS_SLICE1])
++ return -EINVAL;
++
++ icssg_class_disable(prueth->miig_rt, ICSS_SLICE0);
++ icssg_class_disable(prueth->miig_rt, ICSS_SLICE1);
++
++ prueth_emac_stop(prueth);
++
++ emac = prueth->emac[ICSS_SLICE0] ? prueth->emac[ICSS_SLICE0] :
++ prueth->emac[ICSS_SLICE1];
++ icss_iep_exit(emac->iep);
++
++ return 0;
++}
++
+ /* called back by PHY layer if there is change in link state of hw port*/
+ static void emac_adjust_link(struct net_device *ndev)
+ {
+@@ -374,9 +466,6 @@ static void prueth_iep_settime(void *clockops_data, u64 ns)
+ u32 cycletime;
+ int timeout;
+
+- if (!emac->fw_running)
+- return;
+-
+ sc_descp = emac->prueth->shram.va + TIMESYNC_FW_WC_SETCLOCK_DESC_OFFSET;
+
+ cycletime = IEP_DEFAULT_CYCLE_TIME_NS;
+@@ -543,23 +632,17 @@ static int emac_ndo_open(struct net_device *ndev)
+ {
+ struct prueth_emac *emac = netdev_priv(ndev);
+ int ret, i, num_data_chn = emac->tx_ch_num;
++ struct icssg_flow_cfg __iomem *flow_cfg;
+ struct prueth *prueth = emac->prueth;
+ int slice = prueth_emac_slice(emac);
+ struct device *dev = prueth->dev;
+ int max_rx_flows;
+ int rx_flow;
+
+- /* clear SMEM and MSMC settings for all slices */
+- if (!prueth->emacs_initialized) {
+- memset_io(prueth->msmcram.va, 0, prueth->msmcram.size);
+- memset_io(prueth->shram.va, 0, ICSSG_CONFIG_OFFSET_SLICE1 * PRUETH_NUM_MACS);
+- }
+-
+ /* set h/w MAC as user might have re-configured */
+ ether_addr_copy(emac->mac_addr, ndev->dev_addr);
+
+ icssg_class_set_mac_addr(prueth->miig_rt, slice, emac->mac_addr);
+- icssg_class_default(prueth->miig_rt, slice, 0, false);
+ icssg_ft1_set_mac_addr(prueth->miig_rt, slice, emac->mac_addr);
+
+ /* Notify the stack of the actual queue counts. */
+@@ -597,18 +680,23 @@ static int emac_ndo_open(struct net_device *ndev)
+ goto cleanup_napi;
+ }
+
+- /* reset and start PRU firmware */
+- ret = prueth_emac_start(prueth, emac);
+- if (ret)
+- goto free_rx_irq;
++ if (!prueth->emacs_initialized) {
++ ret = prueth_emac_common_start(prueth);
++ if (ret)
++ goto free_rx_irq;
++ }
+
+- icssg_mii_update_mtu(prueth->mii_rt, slice, ndev->max_mtu);
++ flow_cfg = emac->dram.va + ICSSG_CONFIG_OFFSET + PSI_L_REGULAR_FLOW_ID_BASE_OFFSET;
++ writew(emac->rx_flow_id_base, &flow_cfg->rx_base_flow);
++ ret = emac_fdb_flow_id_updated(emac);
+
+- if (!prueth->emacs_initialized) {
+- ret = icss_iep_init(emac->iep, &prueth_iep_clockops,
+- emac, IEP_DEFAULT_CYCLE_TIME_NS);
++ if (ret) {
++ netdev_err(ndev, "Failed to update Rx Flow ID %d", ret);
++ goto stop;
+ }
+
++ icssg_mii_update_mtu(prueth->mii_rt, slice, ndev->max_mtu);
++
+ ret = request_threaded_irq(emac->tx_ts_irq, NULL, prueth_tx_ts_irq,
+ IRQF_ONESHOT, dev_name(dev), emac);
+ if (ret)
+@@ -653,7 +741,8 @@ static int emac_ndo_open(struct net_device *ndev)
+ free_tx_ts_irq:
+ free_irq(emac->tx_ts_irq, emac);
+ stop:
+- prueth_emac_stop(emac);
++ if (!prueth->emacs_initialized)
++ prueth_emac_common_stop(prueth);
+ free_rx_irq:
+ free_irq(emac->rx_chns.irq[rx_flow], emac);
+ cleanup_napi:
+@@ -689,8 +778,6 @@ static int emac_ndo_stop(struct net_device *ndev)
+ if (ndev->phydev)
+ phy_stop(ndev->phydev);
+
+- icssg_class_disable(prueth->miig_rt, prueth_emac_slice(emac));
+-
+ if (emac->prueth->is_hsr_offload_mode)
+ __dev_mc_unsync(ndev, icssg_prueth_hsr_del_mcast);
+ else
+@@ -728,11 +815,9 @@ static int emac_ndo_stop(struct net_device *ndev)
+ /* Destroy the queued work in ndo_stop() */
+ cancel_delayed_work_sync(&emac->stats_work);
+
+- if (prueth->emacs_initialized == 1)
+- icss_iep_exit(emac->iep);
+-
+ /* stop PRUs */
+- prueth_emac_stop(emac);
++ if (prueth->emacs_initialized == 1)
++ prueth_emac_common_stop(prueth);
+
+ free_irq(emac->tx_ts_irq, emac);
+
+@@ -1010,10 +1095,11 @@ static void prueth_offload_fwd_mark_update(struct prueth *prueth)
+ }
+ }
+
+-static void prueth_emac_restart(struct prueth *prueth)
++static int prueth_emac_restart(struct prueth *prueth)
+ {
+ struct prueth_emac *emac0 = prueth->emac[PRUETH_MAC0];
+ struct prueth_emac *emac1 = prueth->emac[PRUETH_MAC1];
++ int ret;
+
+ /* Detach the net_device for both PRUeth ports */
+ if (netif_running(emac0->ndev))
+@@ -1022,36 +1108,46 @@ static void prueth_emac_restart(struct prueth *prueth)
+ netif_device_detach(emac1->ndev);
+
+ /* Disable both PRUeth ports */
+- icssg_set_port_state(emac0, ICSSG_EMAC_PORT_DISABLE);
+- icssg_set_port_state(emac1, ICSSG_EMAC_PORT_DISABLE);
++ ret = icssg_set_port_state(emac0, ICSSG_EMAC_PORT_DISABLE);
++ ret |= icssg_set_port_state(emac1, ICSSG_EMAC_PORT_DISABLE);
++ if (ret)
++ return ret;
+
+ /* Stop both pru cores for both PRUeth ports */
+- prueth_emac_stop(emac0);
+- prueth->emacs_initialized--;
+- prueth_emac_stop(emac1);
+- prueth->emacs_initialized--;
++ ret = prueth_emac_common_stop(prueth);
++ if (ret) {
++ dev_err(prueth->dev, "Failed to stop the firmwares");
++ return ret;
++ }
+
+ /* Start both pru cores for both PRUeth ports */
+- prueth_emac_start(prueth, emac0);
+- prueth->emacs_initialized++;
+- prueth_emac_start(prueth, emac1);
+- prueth->emacs_initialized++;
++ ret = prueth_emac_common_start(prueth);
++ if (ret) {
++ dev_err(prueth->dev, "Failed to start the firmwares");
++ return ret;
++ }
+
+ /* Enable forwarding for both PRUeth ports */
+- icssg_set_port_state(emac0, ICSSG_EMAC_PORT_FORWARD);
+- icssg_set_port_state(emac1, ICSSG_EMAC_PORT_FORWARD);
++ ret = icssg_set_port_state(emac0, ICSSG_EMAC_PORT_FORWARD);
++ ret |= icssg_set_port_state(emac1, ICSSG_EMAC_PORT_FORWARD);
+
+ /* Attach net_device for both PRUeth ports */
+ netif_device_attach(emac0->ndev);
+ netif_device_attach(emac1->ndev);
++
++ return ret;
+ }
+
+ static void icssg_change_mode(struct prueth *prueth)
+ {
+ struct prueth_emac *emac;
+- int mac;
++ int mac, ret;
+
+- prueth_emac_restart(prueth);
++ ret = prueth_emac_restart(prueth);
++ if (ret) {
++ dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process");
++ return;
++ }
+
+ for (mac = PRUETH_MAC0; mac < PRUETH_NUM_MACS; mac++) {
+ emac = prueth->emac[mac];
+@@ -1130,13 +1226,18 @@ static void prueth_netdevice_port_unlink(struct net_device *ndev)
+ {
+ struct prueth_emac *emac = netdev_priv(ndev);
+ struct prueth *prueth = emac->prueth;
++ int ret;
+
+ prueth->br_members &= ~BIT(emac->port_id);
+
+ if (prueth->is_switch_mode) {
+ prueth->is_switch_mode = false;
+ emac->port_vlan = 0;
+- prueth_emac_restart(prueth);
++ ret = prueth_emac_restart(prueth);
++ if (ret) {
++ dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process");
++ return;
++ }
+ }
+
+ prueth_offload_fwd_mark_update(prueth);
+@@ -1185,6 +1286,7 @@ static void prueth_hsr_port_unlink(struct net_device *ndev)
+ struct prueth *prueth = emac->prueth;
+ struct prueth_emac *emac0;
+ struct prueth_emac *emac1;
++ int ret;
+
+ emac0 = prueth->emac[PRUETH_MAC0];
+ emac1 = prueth->emac[PRUETH_MAC1];
+@@ -1195,7 +1297,11 @@ static void prueth_hsr_port_unlink(struct net_device *ndev)
+ emac0->port_vlan = 0;
+ emac1->port_vlan = 0;
+ prueth->hsr_dev = NULL;
+- prueth_emac_restart(prueth);
++ ret = prueth_emac_restart(prueth);
++ if (ret) {
++ dev_err(prueth->dev, "Failed to restart the firmwares, aborting the process");
++ return;
++ }
+ netdev_dbg(ndev, "Disabling HSR Offload mode\n");
+ }
+ }
+@@ -1370,13 +1476,10 @@ static int prueth_probe(struct platform_device *pdev)
+ prueth->pa_stats = NULL;
+ }
+
+- if (eth0_node) {
++ if (eth0_node || eth1_node) {
+ ret = prueth_get_cores(prueth, ICSS_SLICE0, false);
+ if (ret)
+ goto put_cores;
+- }
+-
+- if (eth1_node) {
+ ret = prueth_get_cores(prueth, ICSS_SLICE1, false);
+ if (ret)
+ goto put_cores;
+@@ -1575,14 +1678,12 @@ static int prueth_probe(struct platform_device *pdev)
+ pruss_put(prueth->pruss);
+
+ put_cores:
+- if (eth1_node) {
+- prueth_put_cores(prueth, ICSS_SLICE1);
+- of_node_put(eth1_node);
+- }
+-
+- if (eth0_node) {
++ if (eth0_node || eth1_node) {
+ prueth_put_cores(prueth, ICSS_SLICE0);
+ of_node_put(eth0_node);
++
++ prueth_put_cores(prueth, ICSS_SLICE1);
++ of_node_put(eth1_node);
+ }
+
+ return ret;
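
The icssg_prueth.c changes above replace the per-port boot/halt helpers with
prueth_emac_common_start() and prueth_emac_common_stop(), called only by the
first port to open and the last port to close, keyed off
prueth->emacs_initialized. A minimal userspace sketch of that
first-open/last-close pattern, assuming a single lock serializes open and
close (as rtnl does for ndo_open/ndo_stop); all names below are illustrative,
not driver API:

    #include <stdio.h>

    static int users; /* counterpart of prueth->emacs_initialized */

    static int common_start(void) { printf("boot shared firmware\n"); return 0; }
    static void common_stop(void) { printf("halt shared firmware\n"); }

    /* Called under the serializing lock from each port's open routine. */
    static int port_open(void)
    {
            if (users == 0) {
                    int ret = common_start();
                    if (ret)
                            return ret; /* nothing started, nothing to unwind */
            }
            users++;
            return 0;
    }

    /* Called under the serializing lock from each port's close routine. */
    static void port_close(void)
    {
            if (--users == 0)
                    common_stop();
    }

    int main(void)
    {
            port_open();  /* first user: boots the shared firmware */
            port_open();  /* second user: firmware already running */
            port_close(); /* one user left: firmware stays up */
            port_close(); /* last user: halts the shared firmware */
            return 0;
    }

The same structure explains the error paths in the hunks: a failed common
start leaves the counter untouched, so a later open retries from scratch.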
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.h b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
+index f5c1d473e9f991..5473315ea20406 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.h
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
+@@ -140,7 +140,6 @@ struct prueth_rx_chn {
+ /* data for each emac port */
+ struct prueth_emac {
+ bool is_sr1;
+- bool fw_running;
+ struct prueth *prueth;
+ struct net_device *ndev;
+ u8 mac_addr[6];
+@@ -361,6 +360,8 @@ int icssg_set_port_state(struct prueth_emac *emac,
+ enum icssg_port_state_cmd state);
+ void icssg_config_set_speed(struct prueth_emac *emac);
+ void icssg_config_half_duplex(struct prueth_emac *emac);
++void icssg_init_emac_mode(struct prueth *prueth);
++void icssg_init_fw_offload_mode(struct prueth *prueth);
+
+ /* Buffer queue helpers */
+ int icssg_queue_pop(struct prueth *prueth, u8 queue);
+@@ -377,6 +378,7 @@ void icssg_vtbl_modify(struct prueth_emac *emac, u8 vid, u8 port_mask,
+ u8 untag_mask, bool add);
+ u16 icssg_get_pvid(struct prueth_emac *emac);
+ void icssg_set_pvid(struct prueth *prueth, u8 vid, u8 port);
++int emac_fdb_flow_id_updated(struct prueth_emac *emac);
+ #define prueth_napi_to_tx_chn(pnapi) \
+ container_of(pnapi, struct prueth_tx_chn, napi_tx)
+
+@@ -407,7 +409,6 @@ void emac_rx_timestamp(struct prueth_emac *emac,
+ struct sk_buff *skb, u32 *psdata);
+ enum netdev_tx icssg_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev);
+ irqreturn_t prueth_rx_irq(int irq, void *dev_id);
+-void prueth_emac_stop(struct prueth_emac *emac);
+ void prueth_cleanup_tx_ts(struct prueth_emac *emac);
+ int icssg_napi_rx_poll(struct napi_struct *napi_rx, int budget);
+ int prueth_prepare_rx_chan(struct prueth_emac *emac,
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth_sr1.c b/drivers/net/ethernet/ti/icssg/icssg_prueth_sr1.c
+index 292f04d29f4f7b..f88cdc8f012f12 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth_sr1.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth_sr1.c
+@@ -440,7 +440,6 @@ static int prueth_emac_start(struct prueth *prueth, struct prueth_emac *emac)
+ goto halt_pru;
+ }
+
+- emac->fw_running = 1;
+ return 0;
+
+ halt_pru:
+@@ -449,6 +448,29 @@ static int prueth_emac_start(struct prueth *prueth, struct prueth_emac *emac)
+ return ret;
+ }
+
++static void prueth_emac_stop(struct prueth_emac *emac)
++{
++ struct prueth *prueth = emac->prueth;
++ int slice;
++
++ switch (emac->port_id) {
++ case PRUETH_PORT_MII0:
++ slice = ICSS_SLICE0;
++ break;
++ case PRUETH_PORT_MII1:
++ slice = ICSS_SLICE1;
++ break;
++ default:
++ netdev_err(emac->ndev, "invalid port\n");
++ return;
++ }
++
++ if (!emac->is_sr1)
++ rproc_shutdown(prueth->txpru[slice]);
++ rproc_shutdown(prueth->rtu[slice]);
++ rproc_shutdown(prueth->pru[slice]);
++}
++
+ /**
+ * emac_ndo_open - EMAC device open
+ * @ndev: network adapter device
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 65b0a3115e14cd..64926240b0071d 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -432,10 +432,12 @@ struct kszphy_ptp_priv {
+ struct kszphy_priv {
+ struct kszphy_ptp_priv ptp_priv;
+ const struct kszphy_type *type;
++ struct clk *clk;
+ int led_mode;
+ u16 vct_ctrl1000;
+ bool rmii_ref_clk_sel;
+ bool rmii_ref_clk_sel_val;
++ bool clk_enable;
+ u64 stats[ARRAY_SIZE(kszphy_hw_stats)];
+ };
+
+@@ -2052,6 +2054,46 @@ static void kszphy_get_stats(struct phy_device *phydev,
+ data[i] = kszphy_get_stat(phydev, i);
+ }
+
++static void kszphy_enable_clk(struct phy_device *phydev)
++{
++ struct kszphy_priv *priv = phydev->priv;
++
++ if (!priv->clk_enable && priv->clk) {
++ clk_prepare_enable(priv->clk);
++ priv->clk_enable = true;
++ }
++}
++
++static void kszphy_disable_clk(struct phy_device *phydev)
++{
++ struct kszphy_priv *priv = phydev->priv;
++
++ if (priv->clk_enable && priv->clk) {
++ clk_disable_unprepare(priv->clk);
++ priv->clk_enable = false;
++ }
++}
++
++static int kszphy_generic_resume(struct phy_device *phydev)
++{
++ kszphy_enable_clk(phydev);
++
++ return genphy_resume(phydev);
++}
++
++static int kszphy_generic_suspend(struct phy_device *phydev)
++{
++ int ret;
++
++ ret = genphy_suspend(phydev);
++ if (ret)
++ return ret;
++
++ kszphy_disable_clk(phydev);
++
++ return 0;
++}
++
+ static int kszphy_suspend(struct phy_device *phydev)
+ {
+ /* Disable PHY Interrupts */
+@@ -2061,7 +2103,7 @@ static int kszphy_suspend(struct phy_device *phydev)
+ phydev->drv->config_intr(phydev);
+ }
+
+- return genphy_suspend(phydev);
++ return kszphy_generic_suspend(phydev);
+ }
+
+ static void kszphy_parse_led_mode(struct phy_device *phydev)
+@@ -2092,7 +2134,9 @@ static int kszphy_resume(struct phy_device *phydev)
+ {
+ int ret;
+
+- genphy_resume(phydev);
++ ret = kszphy_generic_resume(phydev);
++ if (ret)
++ return ret;
+
+ /* After switching from power-down to normal mode, an internal global
+ * reset is automatically generated. Wait a minimum of 1 ms before
+@@ -2114,6 +2158,24 @@ static int kszphy_resume(struct phy_device *phydev)
+ return 0;
+ }
+
++/* Due to errata DS80000700A (receiver error following software power
++ * down), the suspend and resume callbacks only disable and enable the
++ * external RMII reference clock.
++ */
++static int ksz8041_resume(struct phy_device *phydev)
++{
++ kszphy_enable_clk(phydev);
++
++ return 0;
++}
++
++static int ksz8041_suspend(struct phy_device *phydev)
++{
++ kszphy_disable_clk(phydev);
++
++ return 0;
++}
++
+ static int ksz9477_resume(struct phy_device *phydev)
+ {
+ int ret;
+@@ -2161,7 +2223,10 @@ static int ksz8061_resume(struct phy_device *phydev)
+ if (!(ret & BMCR_PDOWN))
+ return 0;
+
+- genphy_resume(phydev);
++ ret = kszphy_generic_resume(phydev);
++ if (ret)
++ return ret;
++
+ usleep_range(1000, 2000);
+
+ /* Re-program the value after chip is reset. */
+@@ -2179,6 +2244,11 @@ static int ksz8061_resume(struct phy_device *phydev)
+ return 0;
+ }
+
++static int ksz8061_suspend(struct phy_device *phydev)
++{
++ return kszphy_suspend(phydev);
++}
++
+ static int kszphy_probe(struct phy_device *phydev)
+ {
+ const struct kszphy_type *type = phydev->drv->driver_data;
+@@ -2219,10 +2289,14 @@ static int kszphy_probe(struct phy_device *phydev)
+ } else if (!clk) {
+ /* unnamed clock from the generic ethernet-phy binding */
+ clk = devm_clk_get_optional_enabled(&phydev->mdio.dev, NULL);
+- if (IS_ERR(clk))
+- return PTR_ERR(clk);
+ }
+
++ if (IS_ERR(clk))
++ return PTR_ERR(clk);
++
++ clk_disable_unprepare(clk);
++ priv->clk = clk;
++
+ if (ksz8041_fiber_mode(phydev))
+ phydev->port = PORT_FIBRE;
+
+@@ -5292,6 +5366,21 @@ static int lan8841_probe(struct phy_device *phydev)
+ return 0;
+ }
+
++static int lan8804_resume(struct phy_device *phydev)
++{
++ return kszphy_resume(phydev);
++}
++
++static int lan8804_suspend(struct phy_device *phydev)
++{
++ return kszphy_generic_suspend(phydev);
++}
++
++static int lan8841_resume(struct phy_device *phydev)
++{
++ return kszphy_generic_resume(phydev);
++}
++
+ static int lan8841_suspend(struct phy_device *phydev)
+ {
+ struct kszphy_priv *priv = phydev->priv;
+@@ -5300,7 +5389,7 @@ static int lan8841_suspend(struct phy_device *phydev)
+ if (ptp_priv->ptp_clock)
+ ptp_cancel_worker_sync(ptp_priv->ptp_clock);
+
+- return genphy_suspend(phydev);
++ return kszphy_generic_suspend(phydev);
+ }
+
+ static struct phy_driver ksphy_driver[] = {
+@@ -5360,9 +5449,8 @@ static struct phy_driver ksphy_driver[] = {
+ .get_sset_count = kszphy_get_sset_count,
+ .get_strings = kszphy_get_strings,
+ .get_stats = kszphy_get_stats,
+- /* No suspend/resume callbacks because of errata DS80000700A,
+- * receiver error following software power down.
+- */
++ .suspend = ksz8041_suspend,
++ .resume = ksz8041_resume,
+ }, {
+ .phy_id = PHY_ID_KSZ8041RNLI,
+ .phy_id_mask = MICREL_PHY_ID_MASK,
+@@ -5438,7 +5526,7 @@ static struct phy_driver ksphy_driver[] = {
+ .soft_reset = genphy_soft_reset,
+ .config_intr = kszphy_config_intr,
+ .handle_interrupt = kszphy_handle_interrupt,
+- .suspend = kszphy_suspend,
++ .suspend = ksz8061_suspend,
+ .resume = ksz8061_resume,
+ }, {
+ .phy_id = PHY_ID_KSZ9021,
+@@ -5509,8 +5597,8 @@ static struct phy_driver ksphy_driver[] = {
+ .get_sset_count = kszphy_get_sset_count,
+ .get_strings = kszphy_get_strings,
+ .get_stats = kszphy_get_stats,
+- .suspend = genphy_suspend,
+- .resume = kszphy_resume,
++ .suspend = lan8804_suspend,
++ .resume = lan8804_resume,
+ .config_intr = lan8804_config_intr,
+ .handle_interrupt = lan8804_handle_interrupt,
+ }, {
+@@ -5528,7 +5616,7 @@ static struct phy_driver ksphy_driver[] = {
+ .get_strings = kszphy_get_strings,
+ .get_stats = kszphy_get_stats,
+ .suspend = lan8841_suspend,
+- .resume = genphy_resume,
++ .resume = lan8841_resume,
+ .cable_test_start = lan8814_cable_test_start,
+ .cable_test_get_status = ksz886x_cable_test_get_status,
+ }, {
+diff --git a/drivers/net/pse-pd/tps23881.c b/drivers/net/pse-pd/tps23881.c
+index 5c4e88be46ee33..8797ca1a8a219c 100644
+--- a/drivers/net/pse-pd/tps23881.c
++++ b/drivers/net/pse-pd/tps23881.c
+@@ -64,15 +64,11 @@ static int tps23881_pi_enable(struct pse_controller_dev *pcdev, int id)
+ if (id >= TPS23881_MAX_CHANS)
+ return -ERANGE;
+
+- ret = i2c_smbus_read_word_data(client, TPS23881_REG_PW_STATUS);
+- if (ret < 0)
+- return ret;
+-
+ chan = priv->port[id].chan[0];
+ if (chan < 4)
+- val = (u16)(ret | BIT(chan));
++ val = BIT(chan);
+ else
+- val = (u16)(ret | BIT(chan + 4));
++ val = BIT(chan + 4);
+
+ if (priv->port[id].is_4p) {
+ chan = priv->port[id].chan[1];
+@@ -100,15 +96,11 @@ static int tps23881_pi_disable(struct pse_controller_dev *pcdev, int id)
+ if (id >= TPS23881_MAX_CHANS)
+ return -ERANGE;
+
+- ret = i2c_smbus_read_word_data(client, TPS23881_REG_PW_STATUS);
+- if (ret < 0)
+- return ret;
+-
+ chan = priv->port[id].chan[0];
+ if (chan < 4)
+- val = (u16)(ret | BIT(chan + 4));
++ val = BIT(chan + 4);
+ else
+- val = (u16)(ret | BIT(chan + 8));
++ val = BIT(chan + 8);
+
+ if (priv->port[id].is_4p) {
+ chan = priv->port[id].chan[1];
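
The two tps23881 hunks above drop the read of TPS23881_REG_PW_STATUS and write
only the bit for the channel being enabled or disabled. That is the usual fix
for a write-one-to-trigger command register, where OR-ing readback into the
write would re-issue commands for unrelated channels; the register semantics
here are inferred from the change, not quoted from the datasheet. A toy model
of the failure mode:

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t status; /* what a readback of the register would return */

    /* Model of a write-one-to-act register: every 1 bit written triggers a
     * command for that channel, independent of the current status. */
    static void reg_write(uint16_t val)
    {
            for (int ch = 0; ch < 16; ch++)
                    if (val & (1u << ch)) {
                            printf("command issued for channel %d\n", ch);
                            status |= 1u << ch;
                    }
    }

    int main(void)
    {
            reg_write(1u << 2);             /* intended: channel 2 only */
            reg_write(status | (1u << 5));  /* buggy RMW: re-triggers ch 2 */
            reg_write(1u << 6);             /* fixed: only the target bit */
            return 0;
    }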
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 0c011d8f5d4db2..9fe7f704a2f7b8 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1365,6 +1365,9 @@ static const struct usb_device_id products[] = {
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a0, 0)}, /* Telit FN920C04 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a4, 0)}, /* Telit FN920C04 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a9, 0)}, /* Telit FN920C04 */
++ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c0, 0)}, /* Telit FE910C04 */
++ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c4, 0)}, /* Telit FE910C04 */
++ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c8, 0)}, /* Telit FE910C04 */
+ {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */
+ {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */
+ {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/bz.c b/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
+index fa1be8c54d3c1a..c18c6e933f478e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/bz.c
+@@ -161,6 +161,7 @@ const struct iwl_cfg_trans_params iwl_gl_trans_cfg = {
+
+ const char iwl_bz_name[] = "Intel(R) TBD Bz device";
+ const char iwl_fm_name[] = "Intel(R) Wi-Fi 7 BE201 320MHz";
++const char iwl_wh_name[] = "Intel(R) Wi-Fi 7 BE211 320MHz";
+ const char iwl_gl_name[] = "Intel(R) Wi-Fi 7 BE200 320MHz";
+ const char iwl_mtp_name[] = "Intel(R) Wi-Fi 7 BE202 160MHz";
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+index 34c91deca57b1b..17721bb47e2511 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+@@ -545,6 +545,7 @@ extern const char iwl_ax231_name[];
+ extern const char iwl_ax411_name[];
+ extern const char iwl_bz_name[];
+ extern const char iwl_fm_name[];
++extern const char iwl_wh_name[];
+ extern const char iwl_gl_name[];
+ extern const char iwl_mtp_name[];
+ extern const char iwl_sc_name[];
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+index 1a814eb6743e80..6a4300c01d41d1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/d3.c
+@@ -2871,6 +2871,7 @@ static void iwl_mvm_query_set_freqs(struct iwl_mvm *mvm,
+ int idx)
+ {
+ int i;
++ int n_channels = 0;
+
+ if (fw_has_api(&mvm->fw->ucode_capa,
+ IWL_UCODE_TLV_API_SCAN_OFFLOAD_CHANS)) {
+@@ -2879,7 +2880,7 @@ static void iwl_mvm_query_set_freqs(struct iwl_mvm *mvm,
+
+ for (i = 0; i < SCAN_OFFLOAD_MATCHING_CHANNELS_LEN * 8; i++)
+ if (matches[idx].matching_channels[i / 8] & (BIT(i % 8)))
+- match->channels[match->n_channels++] =
++ match->channels[n_channels++] =
+ mvm->nd_channels[i]->center_freq;
+ } else {
+ struct iwl_scan_offload_profile_match_v1 *matches =
+@@ -2887,9 +2888,11 @@ static void iwl_mvm_query_set_freqs(struct iwl_mvm *mvm,
+
+ for (i = 0; i < SCAN_OFFLOAD_MATCHING_CHANNELS_LEN_V1 * 8; i++)
+ if (matches[idx].matching_channels[i / 8] & (BIT(i % 8)))
+- match->channels[match->n_channels++] =
++ match->channels[n_channels++] =
+ mvm->nd_channels[i]->center_freq;
+ }
++ /* We may have ended up with fewer channels than we allocated. */
++ match->n_channels = n_channels;
+ }
+
+ /**
+@@ -2970,6 +2973,8 @@ static void iwl_mvm_query_netdetect_reasons(struct iwl_mvm *mvm,
+ GFP_KERNEL);
+ if (!net_detect || !n_matches)
+ goto out_report_nd;
++ net_detect->n_matches = n_matches;
++ n_matches = 0;
+
+ for_each_set_bit(i, &matched_profiles, mvm->n_nd_match_sets) {
+ struct cfg80211_wowlan_nd_match *match;
+@@ -2983,8 +2988,9 @@ static void iwl_mvm_query_netdetect_reasons(struct iwl_mvm *mvm,
+ GFP_KERNEL);
+ if (!match)
+ goto out_report_nd;
++ match->n_channels = n_channels;
+
+- net_detect->matches[net_detect->n_matches++] = match;
++ net_detect->matches[n_matches++] = match;
+
+ /* We inverted the order of the SSIDs in the scan
+ * request, so invert the index here.
+@@ -2999,6 +3005,8 @@ static void iwl_mvm_query_netdetect_reasons(struct iwl_mvm *mvm,
+
+ iwl_mvm_query_set_freqs(mvm, d3_data->nd_results, match, i);
+ }
++ /* We may have fewer matches than we allocated. */
++ net_detect->n_matches = n_matches;
+
+ out_report_nd:
+ wakeup.net_detect = net_detect;
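
Both d3.c hunks above count matched entries into a local (n_channels,
n_matches) and assign the struct field once at the end, so the published count
can never exceed what was actually filled in, even when the loop exits early.
A standalone toy version of the same count-locally, publish-once pattern (the
threshold filter stands in for the matching-channels bitmap test; all names
are made up):

    #include <stdio.h>

    struct match {
            int n_channels;
            int channels[8];
    };

    static void set_freqs(struct match *m, const int *freqs, int n)
    {
            int n_channels = 0; /* local counter, not m->n_channels */

            for (int i = 0; i < n; i++)
                    if (freqs[i] > 5000) /* stand-in for the bitmap test */
                            m->channels[n_channels++] = freqs[i];

            /* Publish once: the struct never claims unfilled entries. */
            m->n_channels = n_channels;
    }

    int main(void)
    {
            struct match m = { .n_channels = 8 }; /* allocation upper bound */
            int freqs[] = { 2412, 5180, 5500, 2437 };

            set_freqs(&m, freqs, 4);
            printf("%d channels kept\n", m.n_channels); /* prints 2, not 8 */
            return 0;
    }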
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 805fb249a0c6a2..8fb2aa28224212 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -1106,19 +1106,54 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
+ iwlax210_2ax_cfg_so_jf_b0, iwl9462_name),
+
+ /* Bz */
+-/* FIXME: need to change the naming according to the actual CRF */
+ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_bz, iwl_ax201_name),
++
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_bz, iwl_ax211_name),
++
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+ iwl_cfg_bz, iwl_fm_name),
+
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BZ, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_bz, iwl_wh_name),
++
+ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_HR2, IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_bz, iwl_ax201_name),
++
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_GF, IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_bz, iwl_ax211_name),
++
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_FM, IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+ iwl_cfg_bz, iwl_fm_name),
+
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BZ_W, IWL_CFG_ANY,
++ IWL_CFG_RF_TYPE_WH, IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_bz, iwl_wh_name),
++
+ /* Ga (Gl) */
+ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_MAC_TYPE_GL, IWL_CFG_ANY,
+diff --git a/drivers/net/wwan/iosm/iosm_ipc_mmio.c b/drivers/net/wwan/iosm/iosm_ipc_mmio.c
+index 63eb08c43c0517..6764c13530b9bd 100644
+--- a/drivers/net/wwan/iosm/iosm_ipc_mmio.c
++++ b/drivers/net/wwan/iosm/iosm_ipc_mmio.c
+@@ -104,7 +104,7 @@ struct iosm_mmio *ipc_mmio_init(void __iomem *mmio, struct device *dev)
+ break;
+
+ msleep(20);
+- } while (retries-- > 0);
++ } while (--retries > 0);
+
+ if (!retries) {
+ dev_err(ipc_mmio->dev, "invalid exec stage %X", stage);
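
The one-character change above matters because of where the counter lands
after exhaustion: with retries-- > 0 the post-decrement leaves retries at -1,
so the following if (!retries) never reports the bad exec stage, while
--retries > 0 leaves it at exactly 0. A compilable demonstration of the two
final values:

    #include <stdio.h>

    int main(void)
    {
            int retries;

            /* Post-decrement: after the loop gives up, retries is -1,
             * so a later `if (!retries)` error check never fires. */
            retries = 3;
            while (retries-- > 0)
                    ; /* stand-in for the polling body */
            printf("post-decrement leaves retries = %d\n", retries); /* -1 */

            /* Pre-decrement: exhaustion leaves retries at exactly 0,
             * which is what the error path tests for. */
            retries = 3;
            while (--retries > 0)
                    ;
            printf("pre-decrement leaves retries = %d\n", retries); /* 0 */
            return 0;
    }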
+diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.c b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+index 3931c7a13f5ab2..cbdbb91e8381fc 100644
+--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.c
++++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.c
+@@ -104,14 +104,21 @@ void t7xx_fsm_broadcast_state(struct t7xx_fsm_ctl *ctl, enum md_state state)
+ fsm_state_notify(ctl->md, state);
+ }
+
++static void fsm_release_command(struct kref *ref)
++{
++ struct t7xx_fsm_command *cmd = container_of(ref, typeof(*cmd), refcnt);
++
++ kfree(cmd);
++}
++
+ static void fsm_finish_command(struct t7xx_fsm_ctl *ctl, struct t7xx_fsm_command *cmd, int result)
+ {
+ if (cmd->flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
+- *cmd->ret = result;
+- complete_all(cmd->done);
++ cmd->result = result;
++ complete_all(&cmd->done);
+ }
+
+- kfree(cmd);
++ kref_put(&cmd->refcnt, fsm_release_command);
+ }
+
+ static void fsm_del_kf_event(struct t7xx_fsm_event *event)
+@@ -475,7 +482,6 @@ static int fsm_main_thread(void *data)
+
+ int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id, unsigned int flag)
+ {
+- DECLARE_COMPLETION_ONSTACK(done);
+ struct t7xx_fsm_command *cmd;
+ unsigned long flags;
+ int ret;
+@@ -487,11 +493,13 @@ int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id
+ INIT_LIST_HEAD(&cmd->entry);
+ cmd->cmd_id = cmd_id;
+ cmd->flag = flag;
++ kref_init(&cmd->refcnt);
+ if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
+- cmd->done = &done;
+- cmd->ret = &ret;
++ init_completion(&cmd->done);
++ kref_get(&cmd->refcnt);
+ }
+
++ kref_get(&cmd->refcnt);
+ spin_lock_irqsave(&ctl->command_lock, flags);
+ list_add_tail(&cmd->entry, &ctl->command_queue);
+ spin_unlock_irqrestore(&ctl->command_lock, flags);
+@@ -501,11 +509,11 @@ int t7xx_fsm_append_cmd(struct t7xx_fsm_ctl *ctl, enum t7xx_fsm_cmd_state cmd_id
+ if (flag & FSM_CMD_FLAG_WAIT_FOR_COMPLETION) {
+ unsigned long wait_ret;
+
+- wait_ret = wait_for_completion_timeout(&done,
++ wait_ret = wait_for_completion_timeout(&cmd->done,
+ msecs_to_jiffies(FSM_CMD_TIMEOUT_MS));
+- if (!wait_ret)
+- return -ETIMEDOUT;
+
++ ret = wait_ret ? cmd->result : -ETIMEDOUT;
++ kref_put(&cmd->refcnt, fsm_release_command);
+ return ret;
+ }
+
+diff --git a/drivers/net/wwan/t7xx/t7xx_state_monitor.h b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
+index 7b0a9baf488c18..6e0601bb752e51 100644
+--- a/drivers/net/wwan/t7xx/t7xx_state_monitor.h
++++ b/drivers/net/wwan/t7xx/t7xx_state_monitor.h
+@@ -110,8 +110,9 @@ struct t7xx_fsm_command {
+ struct list_head entry;
+ enum t7xx_fsm_cmd_state cmd_id;
+ unsigned int flag;
+- struct completion *done;
+- int *ret;
++ struct completion done;
++ int result;
++ struct kref refcnt;
+ };
+
+ struct t7xx_fsm_notifier {
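
The t7xx hunks above embed the completion and result in struct
t7xx_fsm_command and reference-count the command: the queue holds one
reference and a synchronous waiter takes a second, so when the wait times out,
fsm_finish_command() still writes into live memory rather than into an
on-stack completion that has gone away. A standalone sketch of that lifetime
rule, using a C11 atomic counter as a stand-in for kref (cmd_get/cmd_put are
illustrative, not kernel API):

    #include <stdatomic.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct cmd {
            atomic_int refcnt;
            int result;
    };

    static struct cmd *cmd_alloc(void)
    {
            struct cmd *c = calloc(1, sizeof(*c));

            atomic_store(&c->refcnt, 1); /* reference held by the queue */
            return c;
    }

    static void cmd_get(struct cmd *c)
    {
            atomic_fetch_add(&c->refcnt, 1);
    }

    static void cmd_put(struct cmd *c)
    {
            if (atomic_fetch_sub(&c->refcnt, 1) == 1) {
                    printf("last reference dropped, freeing cmd\n");
                    free(c);
            }
    }

    int main(void)
    {
            struct cmd *c = cmd_alloc(); /* queue's reference */

            cmd_get(c); /* waiter's reference, taken before queueing */

            /* The worker completes the command and drops the queue's
             * reference; the struct stays alive because the waiter still
             * holds one, so the waiter may read c->result safely even if
             * it already timed out and only collects the result late. */
            c->result = 42;
            cmd_put(c);

            printf("waiter sees result = %d\n", c->result);
            cmd_put(c); /* waiter's drop frees the command */
            return 0;
    }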
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 093cb423f536be..61bba5513de05a 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -173,6 +173,11 @@ enum nvme_quirks {
+ * MSI (but not MSI-X) interrupts are broken and never fire.
+ */
+ NVME_QUIRK_BROKEN_MSI = (1 << 21),
++
++ /*
++ * Align dma pool segment size to 512 bytes
++ */
++ NVME_QUIRK_DMAPOOL_ALIGN_512 = (1 << 22),
+ };
+
+ /*
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 55af3dfbc2607b..76b3f7b396c86b 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2690,15 +2690,20 @@ static int nvme_disable_prepare_reset(struct nvme_dev *dev, bool shutdown)
+
+ static int nvme_setup_prp_pools(struct nvme_dev *dev)
+ {
++ size_t small_align = 256;
++
+ dev->prp_page_pool = dma_pool_create("prp list page", dev->dev,
+ NVME_CTRL_PAGE_SIZE,
+ NVME_CTRL_PAGE_SIZE, 0);
+ if (!dev->prp_page_pool)
+ return -ENOMEM;
+
++ if (dev->ctrl.quirks & NVME_QUIRK_DMAPOOL_ALIGN_512)
++ small_align = 512;
++
+ /* Optimisation for I/Os between 4k and 128k */
+ dev->prp_small_pool = dma_pool_create("prp list 256", dev->dev,
+- 256, 256, 0);
++ 256, small_align, 0);
+ if (!dev->prp_small_pool) {
+ dma_pool_destroy(dev->prp_page_pool);
+ return -ENOMEM;
+@@ -3446,7 +3451,7 @@ static const struct pci_device_id nvme_id_table[] = {
+ { PCI_VDEVICE(REDHAT, 0x0010), /* Qemu emulated controller */
+ .driver_data = NVME_QUIRK_BOGUS_NID, },
+ { PCI_DEVICE(0x1217, 0x8760), /* O2 Micro 64GB Steam Deck */
+- .driver_data = NVME_QUIRK_QDEPTH_ONE },
++ .driver_data = NVME_QUIRK_DMAPOOL_ALIGN_512, },
+ { PCI_DEVICE(0x126f, 0x2262), /* Silicon Motion generic */
+ .driver_data = NVME_QUIRK_NO_DEEPEST_PS |
+ NVME_QUIRK_BOGUS_NID, },
+diff --git a/drivers/nvme/target/configfs.c b/drivers/nvme/target/configfs.c
+index 685e89b35d330d..cfbab198693b03 100644
+--- a/drivers/nvme/target/configfs.c
++++ b/drivers/nvme/target/configfs.c
+@@ -2227,12 +2227,17 @@ static ssize_t nvmet_root_discovery_nqn_store(struct config_item *item,
+ const char *page, size_t count)
+ {
+ struct list_head *entry;
++ char *old_nqn, *new_nqn;
+ size_t len;
+
+ len = strcspn(page, "\n");
+ if (!len || len > NVMF_NQN_FIELD_LEN - 1)
+ return -EINVAL;
+
++ new_nqn = kstrndup(page, len, GFP_KERNEL);
++ if (!new_nqn)
++ return -ENOMEM;
++
+ down_write(&nvmet_config_sem);
+ list_for_each(entry, &nvmet_subsystems_group.cg_children) {
+ struct config_item *item =
+@@ -2241,13 +2246,15 @@ static ssize_t nvmet_root_discovery_nqn_store(struct config_item *item,
+ if (!strncmp(config_item_name(item), page, len)) {
+ pr_err("duplicate NQN %s\n", config_item_name(item));
+ up_write(&nvmet_config_sem);
++ kfree(new_nqn);
+ return -EINVAL;
+ }
+ }
+- memset(nvmet_disc_subsys->subsysnqn, 0, NVMF_NQN_FIELD_LEN);
+- memcpy(nvmet_disc_subsys->subsysnqn, page, len);
++ old_nqn = nvmet_disc_subsys->subsysnqn;
++ nvmet_disc_subsys->subsysnqn = new_nqn;
+ up_write(&nvmet_config_sem);
+
++ kfree(old_nqn);
+ return len;
+ }
+
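
The configfs hunk above follows the usual allocate/swap/free-late shape for
updating a string that readers access under a lock: kstrndup() runs before
down_write(), only the pointer swap happens under nvmet_config_sem, and the
old buffer is freed after up_write(), so readers never see a half-written NQN
and allocation failures leave the old value intact. A userspace sketch of the
same pattern with a pthread rwlock (names are illustrative):

    #define _POSIX_C_SOURCE 200809L /* for strndup */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static pthread_rwlock_t config_lock = PTHREAD_RWLOCK_INITIALIZER;
    static char *nqn; /* counterpart of nvmet_disc_subsys->subsysnqn */

    static int set_nqn(const char *page, size_t len)
    {
            char *old, *new = strndup(page, len); /* allocate unlocked */

            if (!new)
                    return -1; /* old value stays fully intact */

            pthread_rwlock_wrlock(&config_lock);
            old = nqn;
            nqn = new; /* readers only ever see a complete string */
            pthread_rwlock_unlock(&config_lock);

            free(old); /* free the replaced buffer after unlocking */
            return 0;
    }

    int main(void)
    {
            const char *a = "nqn.2014-08.org.nvmexpress.discovery";
            const char *b = "nqn.2024-01.example:disc";

            set_nqn(a, strlen(a));
            printf("%s\n", nqn);
            set_nqn(b, strlen(b));
            printf("%s\n", nqn);
            free(nqn);
            return 0;
    }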
+diff --git a/drivers/pinctrl/pinctrl-mcp23s08.c b/drivers/pinctrl/pinctrl-mcp23s08.c
+index 737d0ae3d0b662..f384c72d955452 100644
+--- a/drivers/pinctrl/pinctrl-mcp23s08.c
++++ b/drivers/pinctrl/pinctrl-mcp23s08.c
+@@ -86,6 +86,7 @@ const struct regmap_config mcp23x08_regmap = {
+ .num_reg_defaults = ARRAY_SIZE(mcp23x08_defaults),
+ .cache_type = REGCACHE_FLAT,
+ .max_register = MCP_OLAT,
++ .disable_locking = true, /* mcp->lock protects the regmap */
+ };
+ EXPORT_SYMBOL_GPL(mcp23x08_regmap);
+
+@@ -132,6 +133,7 @@ const struct regmap_config mcp23x17_regmap = {
+ .num_reg_defaults = ARRAY_SIZE(mcp23x17_defaults),
+ .cache_type = REGCACHE_FLAT,
+ .val_format_endian = REGMAP_ENDIAN_LITTLE,
++ .disable_locking = true, /* mcp->lock protects the regmap */
+ };
+ EXPORT_SYMBOL_GPL(mcp23x17_regmap);
+
+@@ -228,7 +230,9 @@ static int mcp_pinconf_get(struct pinctrl_dev *pctldev, unsigned int pin,
+
+ switch (param) {
+ case PIN_CONFIG_BIAS_PULL_UP:
++ mutex_lock(&mcp->lock);
+ ret = mcp_read(mcp, MCP_GPPU, &data);
++ mutex_unlock(&mcp->lock);
+ if (ret < 0)
+ return ret;
+ status = (data & BIT(pin)) ? 1 : 0;
+@@ -257,7 +261,9 @@ static int mcp_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
+
+ switch (param) {
+ case PIN_CONFIG_BIAS_PULL_UP:
++ mutex_lock(&mcp->lock);
+ ret = mcp_set_bit(mcp, MCP_GPPU, pin, arg);
++ mutex_unlock(&mcp->lock);
+ break;
+ default:
+ dev_dbg(mcp->dev, "Invalid config param %04x\n", param);
+diff --git a/drivers/platform/x86/hp/hp-wmi.c b/drivers/platform/x86/hp/hp-wmi.c
+index 8c05e0dd2a218e..3ba9c43d5516ae 100644
+--- a/drivers/platform/x86/hp/hp-wmi.c
++++ b/drivers/platform/x86/hp/hp-wmi.c
+@@ -64,7 +64,7 @@ static const char * const omen_thermal_profile_boards[] = {
+ "874A", "8603", "8604", "8748", "886B", "886C", "878A", "878B", "878C",
+ "88C8", "88CB", "8786", "8787", "8788", "88D1", "88D2", "88F4", "88FD",
+ "88F5", "88F6", "88F7", "88FE", "88FF", "8900", "8901", "8902", "8912",
+- "8917", "8918", "8949", "894A", "89EB", "8BAD", "8A42"
++ "8917", "8918", "8949", "894A", "89EB", "8BAD", "8A42", "8A15"
+ };
+
+ /* DMI Board names of Omen laptops that are specifically set to be thermal
+@@ -80,7 +80,7 @@ static const char * const omen_thermal_profile_force_v0_boards[] = {
+ * "balanced" when reaching zero.
+ */
+ static const char * const omen_timed_thermal_profile_boards[] = {
+- "8BAD", "8A42"
++ "8BAD", "8A42", "8A15"
+ };
+
+ /* DMI Board names of Victus laptops */
+diff --git a/drivers/platform/x86/mlx-platform.c b/drivers/platform/x86/mlx-platform.c
+index 9d70146fd7420a..1a09f2dfb7bca0 100644
+--- a/drivers/platform/x86/mlx-platform.c
++++ b/drivers/platform/x86/mlx-platform.c
+@@ -6237,6 +6237,7 @@ mlxplat_pci_fpga_device_init(unsigned int device, const char *res_name, struct p
+ fail_pci_request_regions:
+ pci_disable_device(pci_dev);
+ fail_pci_enable_device:
++ pci_dev_put(pci_dev);
+ return err;
+ }
+
+@@ -6247,6 +6248,7 @@ mlxplat_pci_fpga_device_exit(struct pci_dev *pci_bridge,
+ iounmap(pci_bridge_addr);
+ pci_release_regions(pci_bridge);
+ pci_disable_device(pci_bridge);
++ pci_dev_put(pci_bridge);
+ }
+
+ static int
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 6371a9f765c139..2cfb2ac3f465aa 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -184,7 +184,8 @@ enum tpacpi_hkey_event_t {
+ */
+ TP_HKEY_EV_AMT_TOGGLE = 0x131a, /* Toggle AMT on/off */
+ TP_HKEY_EV_DOUBLETAP_TOGGLE = 0x131c, /* Toggle trackpoint doubletap on/off */
+- TP_HKEY_EV_PROFILE_TOGGLE = 0x131f, /* Toggle platform profile */
++ TP_HKEY_EV_PROFILE_TOGGLE = 0x131f, /* Toggle platform profile in 2024 systems */
++ TP_HKEY_EV_PROFILE_TOGGLE2 = 0x1401, /* Toggle platform profile in 2025+ systems */
+
+ /* Reasons for waking up from S3/S4 */
+ TP_HKEY_EV_WKUP_S3_UNDOCK = 0x2304, /* undock requested, S3 */
+@@ -11200,6 +11201,7 @@ static bool tpacpi_driver_event(const unsigned int hkey_event)
+ tp_features.trackpoint_doubletap = !tp_features.trackpoint_doubletap;
+ return true;
+ case TP_HKEY_EV_PROFILE_TOGGLE:
++ case TP_HKEY_EV_PROFILE_TOGGLE2:
+ platform_profile_cycle();
+ return true;
+ }
+diff --git a/drivers/pmdomain/core.c b/drivers/pmdomain/core.c
+index 778ff187ac59e6..88819659df83a2 100644
+--- a/drivers/pmdomain/core.c
++++ b/drivers/pmdomain/core.c
+@@ -2141,6 +2141,11 @@ static int genpd_set_default_power_state(struct generic_pm_domain *genpd)
+ return 0;
+ }
+
++static void genpd_provider_release(struct device *dev)
++{
++ /* nothing to be done here */
++}
++
+ static int genpd_alloc_data(struct generic_pm_domain *genpd)
+ {
+ struct genpd_governor_data *gd = NULL;
+@@ -2172,6 +2177,7 @@ static int genpd_alloc_data(struct generic_pm_domain *genpd)
+
+ genpd->gd = gd;
+ device_initialize(&genpd->dev);
++ genpd->dev.release = genpd_provider_release;
+
+ if (!genpd_is_dev_name_fw(genpd)) {
+ dev_set_name(&genpd->dev, "%s", genpd->name);
+diff --git a/drivers/pmdomain/imx/gpcv2.c b/drivers/pmdomain/imx/gpcv2.c
+index 3f0e6960f47fc2..e03c2cb39a6936 100644
+--- a/drivers/pmdomain/imx/gpcv2.c
++++ b/drivers/pmdomain/imx/gpcv2.c
+@@ -1458,12 +1458,12 @@ static int imx_gpcv2_probe(struct platform_device *pdev)
+ .max_register = SZ_4K,
+ };
+ struct device *dev = &pdev->dev;
+- struct device_node *pgc_np;
++ struct device_node *pgc_np __free(device_node) =
++ of_get_child_by_name(dev->of_node, "pgc");
+ struct regmap *regmap;
+ void __iomem *base;
+ int ret;
+
+- pgc_np = of_get_child_by_name(dev->of_node, "pgc");
+ if (!pgc_np) {
+ dev_err(dev, "No power domains specified in DT\n");
+ return -EINVAL;
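
The gpcv2.c hunk above moves pgc_np into a __free(device_node) declaration, so
of_node_put() now runs automatically on every exit path, including the early
dev_err()/-EINVAL return that previously leaked the node reference. A rough
userspace analogue built on the compiler cleanup attribute that the kernel
macro wraps; note the __free macro below takes a function directly, unlike
the kernel's DEFINE_FREE-registered names:

    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified stand-in for the kernel's scope-based cleanup macro. */
    #define __free(fn) __attribute__((cleanup(fn)))

    static void free_buf(char **p)
    {
            free(*p); /* free(NULL) is a no-op, so early exits are safe */
            printf("buffer released\n");
    }

    static int use_buffer(int fail)
    {
            char *buf __free(free_buf) = malloc(32);

            if (!buf)
                    return -1;
            if (fail)
                    return -1; /* early return: free_buf still runs */

            printf("using buffer\n");
            return 0; /* normal return: free_buf runs here too */
    }

    int main(void)
    {
            use_buffer(1);
            use_buffer(0);
            return 0;
    }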
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 1755ca026f08ff..73b1edd0531b43 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -43,6 +43,7 @@ static_assert(CQSPI_MAX_CHIPSELECT <= SPI_CS_CNT_MAX);
+ #define CQSPI_SLOW_SRAM BIT(4)
+ #define CQSPI_NEEDS_APB_AHB_HAZARD_WAR BIT(5)
+ #define CQSPI_RD_NO_IRQ BIT(6)
++#define CQSPI_DISABLE_STIG_MODE BIT(7)
+
+ /* Capabilities */
+ #define CQSPI_SUPPORTS_OCTAL BIT(0)
+@@ -103,6 +104,7 @@ struct cqspi_st {
+ bool apb_ahb_hazard;
+
+ bool is_jh7110; /* Flag for StarFive JH7110 SoC */
++ bool disable_stig_mode;
+
+ const struct cqspi_driver_platdata *ddata;
+ };
+@@ -1416,7 +1418,8 @@ static int cqspi_mem_process(struct spi_mem *mem, const struct spi_mem_op *op)
+ * reads, prefer STIG mode for such small reads.
+ */
+ if (!op->addr.nbytes ||
+- op->data.nbytes <= CQSPI_STIG_DATA_LEN_MAX)
++ (op->data.nbytes <= CQSPI_STIG_DATA_LEN_MAX &&
++ !cqspi->disable_stig_mode))
+ return cqspi_command_read(f_pdata, op);
+
+ return cqspi_read(f_pdata, op);
+@@ -1880,6 +1883,8 @@ static int cqspi_probe(struct platform_device *pdev)
+ if (ret)
+ goto probe_reset_failed;
+ }
++ if (ddata->quirks & CQSPI_DISABLE_STIG_MODE)
++ cqspi->disable_stig_mode = true;
+
+ if (of_device_is_compatible(pdev->dev.of_node,
+ "xlnx,versal-ospi-1.0")) {
+@@ -2043,7 +2048,8 @@ static const struct cqspi_driver_platdata intel_lgm_qspi = {
+ static const struct cqspi_driver_platdata socfpga_qspi = {
+ .quirks = CQSPI_DISABLE_DAC_MODE
+ | CQSPI_NO_SUPPORT_WR_COMPLETION
+- | CQSPI_SLOW_SRAM,
++ | CQSPI_SLOW_SRAM
++ | CQSPI_DISABLE_STIG_MODE,
+ };
+
+ static const struct cqspi_driver_platdata versal_ospi = {
+diff --git a/fs/btrfs/bio.c b/fs/btrfs/bio.c
+index 3d2376caedfa68..5fd0b39d8c703b 100644
+--- a/fs/btrfs/bio.c
++++ b/fs/btrfs/bio.c
+@@ -81,6 +81,9 @@ static struct btrfs_bio *btrfs_split_bio(struct btrfs_fs_info *fs_info,
+
+ bio = bio_split(&orig_bbio->bio, map_length >> SECTOR_SHIFT, GFP_NOFS,
+ &btrfs_clone_bioset);
++ if (IS_ERR(bio))
++ return ERR_CAST(bio);
++
+ bbio = btrfs_bio(bio);
+ btrfs_bio_init(bbio, fs_info, NULL, orig_bbio);
+ bbio->inode = orig_bbio->inode;
+@@ -355,7 +358,7 @@ static void btrfs_simple_end_io(struct bio *bio)
+ INIT_WORK(&bbio->end_io_work, btrfs_end_bio_work);
+ queue_work(btrfs_end_io_wq(fs_info, bio), &bbio->end_io_work);
+ } else {
+- if (bio_op(bio) == REQ_OP_ZONE_APPEND && !bio->bi_status)
++ if (bio_is_zone_append(bio) && !bio->bi_status)
+ btrfs_record_physical_zoned(bbio);
+ btrfs_bio_end_io(bbio, bbio->bio.bi_status);
+ }
+@@ -398,7 +401,7 @@ static void btrfs_orig_write_end_io(struct bio *bio)
+ else
+ bio->bi_status = BLK_STS_OK;
+
+- if (bio_op(bio) == REQ_OP_ZONE_APPEND && !bio->bi_status)
++ if (bio_is_zone_append(bio) && !bio->bi_status)
+ stripe->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT;
+
+ btrfs_bio_end_io(bbio, bbio->bio.bi_status);
+@@ -412,7 +415,7 @@ static void btrfs_clone_write_end_io(struct bio *bio)
+ if (bio->bi_status) {
+ atomic_inc(&stripe->bioc->error);
+ btrfs_log_dev_io_error(bio, stripe->dev);
+- } else if (bio_op(bio) == REQ_OP_ZONE_APPEND) {
++ } else if (bio_is_zone_append(bio)) {
+ stripe->physical = bio->bi_iter.bi_sector << SECTOR_SHIFT;
+ }
+
+@@ -684,7 +687,8 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
+ &bioc, &smap, &mirror_num);
+ if (error) {
+ ret = errno_to_blk_status(error);
+- goto fail;
++ btrfs_bio_counter_dec(fs_info);
++ goto end_bbio;
+ }
+
+ map_length = min(map_length, length);
+@@ -692,7 +696,15 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
+ map_length = btrfs_append_map_length(bbio, map_length);
+
+ if (map_length < length) {
+- bbio = btrfs_split_bio(fs_info, bbio, map_length);
++ struct btrfs_bio *split;
++
++ split = btrfs_split_bio(fs_info, bbio, map_length);
++ if (IS_ERR(split)) {
++ ret = errno_to_blk_status(PTR_ERR(split));
++ btrfs_bio_counter_dec(fs_info);
++ goto end_bbio;
++ }
++ bbio = split;
+ bio = &bbio->bio;
+ }
+
+@@ -766,6 +778,7 @@ static bool btrfs_submit_chunk(struct btrfs_bio *bbio, int mirror_num)
+
+ btrfs_bio_end_io(remaining, ret);
+ }
++end_bbio:
+ btrfs_bio_end_io(bbio, ret);
+ /* Do not submit another chunk */
+ return true;
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 43b7b331b2da36..563f106774e592 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4264,6 +4264,15 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ * already the cleaner, but below we run all pending delayed iputs.
+ */
+ btrfs_flush_workqueue(fs_info->fixup_workers);
++ /*
++ * Similar case here, we have to wait for delalloc workers before we
++ * proceed below and stop the cleaner kthread, otherwise we trigger a
++ * use-after-free on the cleaner kthread task_struct when a delalloc
++ * worker running submit_compressed_extents() adds a delayed iput, which
++ * does a wake up on the cleaner kthread, which was already freed below
++ * when we call kthread_stop().
++ */
++ btrfs_flush_workqueue(fs_info->delalloc_workers);
+
+ /*
+ * After we parked the cleaner kthread, ordered extents may have
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 4b3e256e0d0b88..b5cfb85af937fc 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -10056,6 +10056,11 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ bsi.block_start = physical_block_start;
+ bsi.block_len = len;
+ }
++
++ if (fatal_signal_pending(current)) {
++ ret = -EINTR;
++ goto out;
++ }
+ }
+
+ if (bsi.block_len)
+diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
+index 2b0daced98ebb4..3404e7a30c330c 100644
+--- a/fs/ocfs2/quota_global.c
++++ b/fs/ocfs2/quota_global.c
+@@ -893,7 +893,7 @@ static int ocfs2_get_next_id(struct super_block *sb, struct kqid *qid)
+ int status = 0;
+
+ trace_ocfs2_get_next_id(from_kqid(&init_user_ns, *qid), type);
+- if (!sb_has_quota_loaded(sb, type)) {
++ if (!sb_has_quota_active(sb, type)) {
+ status = -ESRCH;
+ goto out;
+ }
+diff --git a/fs/ocfs2/quota_local.c b/fs/ocfs2/quota_local.c
+index 73d3367c533b8a..2956d888c13145 100644
+--- a/fs/ocfs2/quota_local.c
++++ b/fs/ocfs2/quota_local.c
+@@ -867,6 +867,7 @@ static int ocfs2_local_free_info(struct super_block *sb, int type)
+ brelse(oinfo->dqi_libh);
+ brelse(oinfo->dqi_lqi_bh);
+ kfree(oinfo);
++ info->dqi_priv = NULL;
+ return status;
+ }
+
+diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
+index 7eb010de39fe26..536b7dc4538182 100644
+--- a/fs/proc/task_mmu.c
++++ b/fs/proc/task_mmu.c
+@@ -1810,7 +1810,7 @@ static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end,
+ }
+
+ for (; addr != end; addr += PAGE_SIZE, idx++) {
+- unsigned long cur_flags = flags;
++ u64 cur_flags = flags;
+ pagemap_entry_t pme;
+
+ if (folio && (flags & PM_PRESENT) &&
+diff --git a/fs/smb/client/cifsfs.c b/fs/smb/client/cifsfs.c
+index bf909c2f6b963b..0ceebde38f9fe0 100644
+--- a/fs/smb/client/cifsfs.c
++++ b/fs/smb/client/cifsfs.c
+@@ -2018,6 +2018,7 @@ exit_cifs(void)
+ destroy_workqueue(decrypt_wq);
+ destroy_workqueue(fileinfo_put_wq);
+ destroy_workqueue(serverclose_wq);
++ destroy_workqueue(cfid_put_wq);
+ destroy_workqueue(cifsiod_wq);
+ cifs_proc_clean();
+ }
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 7d01dd313351f7..04ffc5b158c3bf 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -4224,6 +4224,7 @@ static bool __query_dir(struct dir_context *ctx, const char *name, int namlen,
+ /* dot and dotdot entries are already reserved */
+ if (!strcmp(".", name) || !strcmp("..", name))
+ return true;
++ d_info->num_scan++;
+ if (ksmbd_share_veto_filename(priv->work->tcon->share_conf, name))
+ return true;
+ if (!match_pattern(name, namlen, priv->search_pattern))
+@@ -4384,8 +4385,17 @@ int smb2_query_dir(struct ksmbd_work *work)
+ query_dir_private.info_level = req->FileInformationClass;
+ dir_fp->readdir_data.private = &query_dir_private;
+ set_ctx_actor(&dir_fp->readdir_data.ctx, __query_dir);
+-
++again:
++ d_info.num_scan = 0;
+ rc = iterate_dir(dir_fp->filp, &dir_fp->readdir_data.ctx);
++ /*
++ * num_entry can be 0 if the directory iteration stops before reaching
++ * the end of the directory and no file matched the search
++ * pattern.
++ */
++ if (rc >= 0 && !d_info.num_entry && d_info.num_scan &&
++ d_info.out_buf_len > 0)
++ goto again;
+ /*
+ * req->OutputBufferLength is too small to contain even one entry.
+ * In this case, it immediately returns OutputBufferLength 0 to client.
+@@ -6006,15 +6016,13 @@ static int set_file_basic_info(struct ksmbd_file *fp,
+ attrs.ia_valid |= (ATTR_ATIME | ATTR_ATIME_SET);
+ }
+
+- attrs.ia_valid |= ATTR_CTIME;
+ if (file_info->ChangeTime)
+- attrs.ia_ctime = ksmbd_NTtimeToUnix(file_info->ChangeTime);
+- else
+- attrs.ia_ctime = inode_get_ctime(inode);
++ inode_set_ctime_to_ts(inode,
++ ksmbd_NTtimeToUnix(file_info->ChangeTime));
+
+ if (file_info->LastWriteTime) {
+ attrs.ia_mtime = ksmbd_NTtimeToUnix(file_info->LastWriteTime);
+- attrs.ia_valid |= (ATTR_MTIME | ATTR_MTIME_SET);
++ attrs.ia_valid |= (ATTR_MTIME | ATTR_MTIME_SET | ATTR_CTIME);
+ }
+
+ if (file_info->Attributes) {
+@@ -6056,8 +6064,6 @@ static int set_file_basic_info(struct ksmbd_file *fp,
+ return -EACCES;
+
+ inode_lock(inode);
+- inode_set_ctime_to_ts(inode, attrs.ia_ctime);
+- attrs.ia_valid &= ~ATTR_CTIME;
+ rc = notify_change(idmap, dentry, &attrs, NULL);
+ inode_unlock(inode);
+ }
+diff --git a/fs/smb/server/vfs.h b/fs/smb/server/vfs.h
+index cb76f4b5bafe8c..06903024a2d88b 100644
+--- a/fs/smb/server/vfs.h
++++ b/fs/smb/server/vfs.h
+@@ -43,6 +43,7 @@ struct ksmbd_dir_info {
+ char *rptr;
+ int name_len;
+ int out_buf_len;
++ int num_scan;
+ int num_entry;
+ int data_count;
+ int last_entry_offset;
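
The new d_info->num_scan counter above lets smb2_query_dir() distinguish "the
pass looked at entries but none matched" (worth retrying) from "the directory
is exhausted" (num_scan == 0, stop), which prevents a premature empty
response. A toy standalone version of that scan-versus-emit bookkeeping,
where a fixed-size window over an array stands in for one iterate_dir() pass
(all names are illustrative):

    #include <stdio.h>
    #include <string.h>

    /* One pass over a window of entries: count everything inspected
     * (num_scan) separately from everything emitted (num_entry). */
    static int scan_window(const char **names, int n, const char *pattern,
                           int *num_scan, int *num_entry)
    {
            for (int i = 0; i < n; i++) {
                    (*num_scan)++;
                    if (strstr(names[i], pattern))
                            (*num_entry)++;
            }
            return n; /* entries consumed from the stream */
    }

    int main(void)
    {
            const char *dir[] = { "a.tmp", "b.tmp", "c.txt", "d.txt" };
            int pos = 0, num_entry = 0;

            /* Retry while a pass scanned entries but matched none;
             * stopping after the first window would wrongly report an
             * empty listing even though matches exist further on. */
            while (pos < 4) {
                    int num_scan = 0;

                    pos += scan_window(dir + pos, 2, ".txt",
                                       &num_scan, &num_entry);
                    if (num_entry || !num_scan)
                            break;
            }

            printf("matched %d entries\n", num_entry); /* prints 2 */
            return 0;
    }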
+diff --git a/include/linux/bio.h b/include/linux/bio.h
+index faceadb040f9ac..66b7620a1b5333 100644
+--- a/include/linux/bio.h
++++ b/include/linux/bio.h
+@@ -677,6 +677,23 @@ static inline void bio_clear_polled(struct bio *bio)
+ bio->bi_opf &= ~REQ_POLLED;
+ }
+
++/**
++ * bio_is_zone_append - is this a zone append bio?
++ * @bio: bio to check
++ *
++ * Check if @bio is a zone append operation. Core block layer code and end_io
++ * handlers must use this instead of an open coded REQ_OP_ZONE_APPEND check
++ * because the block layer can rewrite REQ_OP_ZONE_APPEND to REQ_OP_WRITE if
++ * it is not natively supported.
++ */
++static inline bool bio_is_zone_append(struct bio *bio)
++{
++ if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED))
++ return false;
++ return bio_op(bio) == REQ_OP_ZONE_APPEND ||
++ bio_flagged(bio, BIO_EMULATES_ZONE_APPEND);
++}
++
+ struct bio *blk_next_bio(struct bio *bio, struct block_device *bdev,
+ unsigned int nr_pages, blk_opf_t opf, gfp_t gfp);
+ struct bio *bio_chain_and_submit(struct bio *prev, struct bio *new);
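
The new bio_is_zone_append() helper above folds the native opcode test
together with the BIO_EMULATES_ZONE_APPEND flag, and the btrfs hunks earlier
in this patch switch their end_io paths over to it. A toy standalone model of
why the open-coded opcode comparison misses the emulated case (the enum and
flag below are simplified stand-ins for the block layer types):

    #include <stdbool.h>
    #include <stdio.h>

    enum op { OP_WRITE, OP_ZONE_APPEND };
    #define FLAG_EMULATES_ZONE_APPEND (1u << 0)

    struct io {
            enum op op;
            unsigned int flags;
    };

    /* An I/O counts as a zone append either while it carries the native
     * opcode or after the layer rewrites it to a plain write and marks
     * it as emulated. */
    static bool io_is_zone_append(const struct io *io)
    {
            return io->op == OP_ZONE_APPEND ||
                   (io->flags & FLAG_EMULATES_ZONE_APPEND);
    }

    int main(void)
    {
            struct io native   = { .op = OP_ZONE_APPEND };
            struct io emulated = { .op = OP_WRITE,
                                   .flags = FLAG_EMULATES_ZONE_APPEND };
            struct io plain    = { .op = OP_WRITE };

            /* The opcode-only check `op == OP_ZONE_APPEND` would return
             * false for `emulated`; the combined predicate does not. */
            printf("%d %d %d\n", io_is_zone_append(&native),
                   io_is_zone_append(&emulated),
                   io_is_zone_append(&plain)); /* 1 1 0 */
            return 0;
    }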
+diff --git a/include/linux/filter.h b/include/linux/filter.h
+index 7d7578a8eac10b..5118caf8aa1c70 100644
+--- a/include/linux/filter.h
++++ b/include/linux/filter.h
+@@ -1121,7 +1121,7 @@ bool bpf_jit_supports_arena(void);
+ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena);
+ u64 bpf_arch_uaddress_limit(void);
+ void arch_bpf_stack_walk(bool (*consume_fn)(void *cookie, u64 ip, u64 sp, u64 bp), void *cookie);
+-bool bpf_helper_changes_pkt_data(void *func);
++bool bpf_helper_changes_pkt_data(enum bpf_func_id func_id);
+
+ static inline bool bpf_dump_raw_ok(const struct cred *cred)
+ {
+diff --git a/include/linux/if_vlan.h b/include/linux/if_vlan.h
+index c1645c86eed969..d65b5d71b93bf8 100644
+--- a/include/linux/if_vlan.h
++++ b/include/linux/if_vlan.h
+@@ -585,13 +585,16 @@ static inline int vlan_get_tag(const struct sk_buff *skb, u16 *vlan_tci)
+ * vlan_get_protocol - get protocol EtherType.
+ * @skb: skbuff to query
+ * @type: first vlan protocol
+ * @mac_offset: offset of the MAC header within @skb
+ * @depth: buffer to store length of eth and vlan tags in bytes
+ *
+ * Returns the EtherType of the packet, regardless of whether it is
+ * vlan encapsulated (normal or hardware accelerated) or not.
+ */
+-static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type,
+- int *depth)
++static inline __be16 __vlan_get_protocol_offset(const struct sk_buff *skb,
++ __be16 type,
++ int mac_offset,
++ int *depth)
+ {
+ unsigned int vlan_depth = skb->mac_len, parse_depth = VLAN_MAX_DEPTH;
+
+@@ -610,7 +613,8 @@ static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type,
+ do {
+ struct vlan_hdr vhdr, *vh;
+
+- vh = skb_header_pointer(skb, vlan_depth, sizeof(vhdr), &vhdr);
++ vh = skb_header_pointer(skb, mac_offset + vlan_depth,
++ sizeof(vhdr), &vhdr);
+ if (unlikely(!vh || !--parse_depth))
+ return 0;
+
+@@ -625,6 +629,12 @@ static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type,
+ return type;
+ }
+
++static inline __be16 __vlan_get_protocol(const struct sk_buff *skb, __be16 type,
++ int *depth)
++{
++ return __vlan_get_protocol_offset(skb, type, 0, depth);
++}
++
+ /**
+ * vlan_get_protocol - get protocol EtherType.
+ * @skb: skbuff to query
+diff --git a/include/linux/memfd.h b/include/linux/memfd.h
+index 3f2cf339ceafd9..d437e30708502e 100644
+--- a/include/linux/memfd.h
++++ b/include/linux/memfd.h
+@@ -7,6 +7,7 @@
+ #ifdef CONFIG_MEMFD_CREATE
+ extern long memfd_fcntl(struct file *file, unsigned int cmd, unsigned int arg);
+ struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx);
++unsigned int *memfd_file_seals_ptr(struct file *file);
+ #else
+ static inline long memfd_fcntl(struct file *f, unsigned int c, unsigned int a)
+ {
+@@ -16,6 +17,19 @@ static inline struct folio *memfd_alloc_folio(struct file *memfd, pgoff_t idx)
+ {
+ return ERR_PTR(-EINVAL);
+ }
++
++static inline unsigned int *memfd_file_seals_ptr(struct file *file)
++{
++ return NULL;
++}
+ #endif
+
++/* Retrieve memfd seals associated with the file, if any. */
++static inline unsigned int memfd_file_seals(struct file *file)
++{
++ unsigned int *sealsp = memfd_file_seals_ptr(file);
++
++ return sealsp ? *sealsp : 0;
++}
++
+ #endif /* __LINUX_MEMFD_H */
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index e23c692a34c702..82c7056e27599e 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -555,6 +555,7 @@ enum {
+ * creation/deletion on drivers rescan. Unset during device attach.
+ */
+ MLX5_PRIV_FLAGS_DETACH = 1 << 2,
++ MLX5_PRIV_FLAGS_SWITCH_LEGACY = 1 << 3,
+ };
+
+ struct mlx5_adev {
+@@ -1233,6 +1234,12 @@ static inline bool mlx5_core_is_vf(const struct mlx5_core_dev *dev)
+ return dev->coredev_type == MLX5_COREDEV_VF;
+ }
+
++static inline bool mlx5_core_same_coredev_type(const struct mlx5_core_dev *dev1,
++ const struct mlx5_core_dev *dev2)
++{
++ return dev1->coredev_type == dev2->coredev_type;
++}
++
+ static inline bool mlx5_core_is_ecpf(const struct mlx5_core_dev *dev)
+ {
+ return dev->caps.embedded_cpu;
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index 96d369112bfa03..512e25c416ae29 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -2113,7 +2113,9 @@ struct mlx5_ifc_cmd_hca_cap_2_bits {
+ u8 migration_in_chunks[0x1];
+ u8 reserved_at_d1[0x1];
+ u8 sf_eq_usage[0x1];
+- u8 reserved_at_d3[0xd];
++ u8 reserved_at_d3[0x5];
++ u8 multiplane[0x1];
++ u8 reserved_at_d9[0x7];
+
+ u8 cross_vhca_object_to_object_supported[0x20];
+
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 61fff5d34ed532..8617adc6becd1f 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -3100,6 +3100,7 @@ static inline bool pagetable_pmd_ctor(struct ptdesc *ptdesc)
+ if (!pmd_ptlock_init(ptdesc))
+ return false;
+ __folio_set_pgtable(folio);
++ ptdesc_pmd_pts_init(ptdesc);
+ lruvec_stat_add_folio(folio, NR_PAGETABLE);
+ return true;
+ }
+@@ -4079,6 +4080,37 @@ void mem_dump_obj(void *object);
+ static inline void mem_dump_obj(void *object) {}
+ #endif
+
++static inline bool is_write_sealed(int seals)
++{
++ return seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE);
++}
++
++/**
++ * is_readonly_sealed - Checks whether a mapping is write-sealed but mapped
++ * read-only, in which case writes should be disallowed
++ * going forwards.
++ * @seals: the seals to check
++ * @vm_flags: the VMA flags to check
++ *
++ * Returns whether the mapping is readonly sealed, in which case writes should
++ * be disallowed going forward.
++ */
++static inline bool is_readonly_sealed(int seals, vm_flags_t vm_flags)
++{
++ /*
++ * Since an F_SEAL_[FUTURE_]WRITE sealed memfd can be mapped as
++ * MAP_SHARED and read-only, take care to not allow mprotect to
++ * revert protections on such mappings. Do this only for shared
++ * mappings. For private mappings, don't need to mask
++ * VM_MAYWRITE as we still want them to be COW-writable.
++ */
++ if (is_write_sealed(seals) &&
++ ((vm_flags & (VM_SHARED | VM_WRITE)) == VM_SHARED))
++ return true;
++
++ return false;
++}
++
+ /**
+ * seal_check_write - Check for F_SEAL_WRITE or F_SEAL_FUTURE_WRITE flags and
+ * handle them.
+@@ -4090,24 +4122,15 @@ static inline void mem_dump_obj(void *object) {}
+ */
+ static inline int seal_check_write(int seals, struct vm_area_struct *vma)
+ {
+- if (seals & (F_SEAL_WRITE | F_SEAL_FUTURE_WRITE)) {
+- /*
+- * New PROT_WRITE and MAP_SHARED mmaps are not allowed when
+- * write seals are active.
+- */
+- if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
+- return -EPERM;
+-
+- /*
+- * Since an F_SEAL_[FUTURE_]WRITE sealed memfd can be mapped as
+- * MAP_SHARED and read-only, take care to not allow mprotect to
+- * revert protections on such mappings. Do this only for shared
+- * mappings. For private mappings, don't need to mask
+- * VM_MAYWRITE as we still want them to be COW-writable.
+- */
+- if (vma->vm_flags & VM_SHARED)
+- vm_flags_clear(vma, VM_MAYWRITE);
+- }
++ if (!is_write_sealed(seals))
++ return 0;
++
++ /*
++ * New PROT_WRITE and MAP_SHARED mmaps are not allowed when
++ * write seals are active.
++ */
++ if ((vma->vm_flags & VM_SHARED) && (vma->vm_flags & VM_WRITE))
++ return -EPERM;
+
+ return 0;
+ }
+diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
+index 6e3bdf8e38bcae..6894de506b364f 100644
+--- a/include/linux/mm_types.h
++++ b/include/linux/mm_types.h
+@@ -445,6 +445,7 @@ FOLIO_MATCH(compound_head, _head_2a);
+ * @pt_index: Used for s390 gmap.
+ * @pt_mm: Used for x86 pgds.
+ * @pt_frag_refcount: For fragmented page table tracking. Powerpc only.
++ * @pt_share_count: Used for HugeTLB PMD page table share count.
+ * @_pt_pad_2: Padding to ensure proper alignment.
+ * @ptl: Lock for the page table.
+ * @__page_type: Same as page->page_type. Unused for page tables.
+@@ -471,6 +472,9 @@ struct ptdesc {
+ pgoff_t pt_index;
+ struct mm_struct *pt_mm;
+ atomic_t pt_frag_refcount;
++#ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING
++ atomic_t pt_share_count;
++#endif
+ };
+
+ union {
+@@ -516,6 +520,32 @@ static_assert(sizeof(struct ptdesc) <= sizeof(struct page));
+ const struct page *: (const struct ptdesc *)(p), \
+ struct page *: (struct ptdesc *)(p)))
+
++#ifdef CONFIG_HUGETLB_PMD_PAGE_TABLE_SHARING
++static inline void ptdesc_pmd_pts_init(struct ptdesc *ptdesc)
++{
++ atomic_set(&ptdesc->pt_share_count, 0);
++}
++
++static inline void ptdesc_pmd_pts_inc(struct ptdesc *ptdesc)
++{
++ atomic_inc(&ptdesc->pt_share_count);
++}
++
++static inline void ptdesc_pmd_pts_dec(struct ptdesc *ptdesc)
++{
++ atomic_dec(&ptdesc->pt_share_count);
++}
++
++static inline int ptdesc_pmd_pts_count(struct ptdesc *ptdesc)
++{
++ return atomic_read(&ptdesc->pt_share_count);
++}
++#else
++static inline void ptdesc_pmd_pts_init(struct ptdesc *ptdesc)
++{
++}
++#endif
++
+ /*
+ * Used for sizing the vmemmap region on some architectures
+ */
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index c95f7e6ba25514..ba7b52584770d7 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -804,7 +804,6 @@ struct hci_conn_params {
+ extern struct list_head hci_dev_list;
+ extern struct list_head hci_cb_list;
+ extern rwlock_t hci_dev_list_lock;
+-extern struct mutex hci_cb_list_lock;
+
+ #define hci_dev_set_flag(hdev, nr) set_bit((nr), (hdev)->dev_flags)
+ #define hci_dev_clear_flag(hdev, nr) clear_bit((nr), (hdev)->dev_flags)
+@@ -2007,24 +2006,47 @@ struct hci_cb {
+
+ char *name;
+
++ bool (*match) (struct hci_conn *conn);
+ void (*connect_cfm) (struct hci_conn *conn, __u8 status);
+ void (*disconn_cfm) (struct hci_conn *conn, __u8 status);
+ void (*security_cfm) (struct hci_conn *conn, __u8 status,
+- __u8 encrypt);
++ __u8 encrypt);
+ void (*key_change_cfm) (struct hci_conn *conn, __u8 status);
+ void (*role_switch_cfm) (struct hci_conn *conn, __u8 status, __u8 role);
+ };
+
++static inline void hci_cb_lookup(struct hci_conn *conn, struct list_head *list)
++{
++ struct hci_cb *cb, *cpy;
++
++ rcu_read_lock();
++ list_for_each_entry_rcu(cb, &hci_cb_list, list) {
++ if (cb->match && cb->match(conn)) {
++ cpy = kmalloc(sizeof(*cpy), GFP_ATOMIC);
++ if (!cpy)
++ break;
++
++ *cpy = *cb;
++ INIT_LIST_HEAD(&cpy->list);
++ list_add_rcu(&cpy->list, list);
++ }
++ }
++ rcu_read_unlock();
++}
++
+ static inline void hci_connect_cfm(struct hci_conn *conn, __u8 status)
+ {
+- struct hci_cb *cb;
++ struct list_head list;
++ struct hci_cb *cb, *tmp;
++
++ INIT_LIST_HEAD(&list);
++ hci_cb_lookup(conn, &list);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_for_each_entry(cb, &hci_cb_list, list) {
++ list_for_each_entry_safe(cb, tmp, &list, list) {
+ if (cb->connect_cfm)
+ cb->connect_cfm(conn, status);
++ kfree(cb);
+ }
+- mutex_unlock(&hci_cb_list_lock);
+
+ if (conn->connect_cfm_cb)
+ conn->connect_cfm_cb(conn, status);
+@@ -2032,43 +2054,55 @@ static inline void hci_connect_cfm(struct hci_conn *conn, __u8 status)
+
+ static inline void hci_disconn_cfm(struct hci_conn *conn, __u8 reason)
+ {
+- struct hci_cb *cb;
++ struct list_head list;
++ struct hci_cb *cb, *tmp;
++
++ INIT_LIST_HEAD(&list);
++ hci_cb_lookup(conn, &list);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_for_each_entry(cb, &hci_cb_list, list) {
++ list_for_each_entry_safe(cb, tmp, &list, list) {
+ if (cb->disconn_cfm)
+ cb->disconn_cfm(conn, reason);
++ kfree(cb);
+ }
+- mutex_unlock(&hci_cb_list_lock);
+
+ if (conn->disconn_cfm_cb)
+ conn->disconn_cfm_cb(conn, reason);
+ }
+
+-static inline void hci_auth_cfm(struct hci_conn *conn, __u8 status)
++static inline void hci_security_cfm(struct hci_conn *conn, __u8 status,
++ __u8 encrypt)
+ {
+- struct hci_cb *cb;
+- __u8 encrypt;
+-
+- if (test_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags))
+- return;
++ struct list_head list;
++ struct hci_cb *cb, *tmp;
+
+- encrypt = test_bit(HCI_CONN_ENCRYPT, &conn->flags) ? 0x01 : 0x00;
++ INIT_LIST_HEAD(&list);
++ hci_cb_lookup(conn, &list);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_for_each_entry(cb, &hci_cb_list, list) {
++ list_for_each_entry_safe(cb, tmp, &list, list) {
+ if (cb->security_cfm)
+ cb->security_cfm(conn, status, encrypt);
++ kfree(cb);
+ }
+- mutex_unlock(&hci_cb_list_lock);
+
+ if (conn->security_cfm_cb)
+ conn->security_cfm_cb(conn, status);
+ }
+
++static inline void hci_auth_cfm(struct hci_conn *conn, __u8 status)
++{
++ __u8 encrypt;
++
++ if (test_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags))
++ return;
++
++ encrypt = test_bit(HCI_CONN_ENCRYPT, &conn->flags) ? 0x01 : 0x00;
++
++ hci_security_cfm(conn, status, encrypt);
++}
++
+ static inline void hci_encrypt_cfm(struct hci_conn *conn, __u8 status)
+ {
+- struct hci_cb *cb;
+ __u8 encrypt;
+
+ if (conn->state == BT_CONFIG) {
+@@ -2095,40 +2129,38 @@ static inline void hci_encrypt_cfm(struct hci_conn *conn, __u8 status)
+ conn->sec_level = conn->pending_sec_level;
+ }
+
+- mutex_lock(&hci_cb_list_lock);
+- list_for_each_entry(cb, &hci_cb_list, list) {
+- if (cb->security_cfm)
+- cb->security_cfm(conn, status, encrypt);
+- }
+- mutex_unlock(&hci_cb_list_lock);
+-
+- if (conn->security_cfm_cb)
+- conn->security_cfm_cb(conn, status);
++ hci_security_cfm(conn, status, encrypt);
+ }
+
+ static inline void hci_key_change_cfm(struct hci_conn *conn, __u8 status)
+ {
+- struct hci_cb *cb;
++ struct list_head list;
++ struct hci_cb *cb, *tmp;
++
++ INIT_LIST_HEAD(&list);
++ hci_cb_lookup(conn, &list);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_for_each_entry(cb, &hci_cb_list, list) {
++ list_for_each_entry_safe(cb, tmp, &list, list) {
+ if (cb->key_change_cfm)
+ cb->key_change_cfm(conn, status);
++ kfree(cb);
+ }
+- mutex_unlock(&hci_cb_list_lock);
+ }
+
+ static inline void hci_role_switch_cfm(struct hci_conn *conn, __u8 status,
+ __u8 role)
+ {
+- struct hci_cb *cb;
++ struct list_head list;
++ struct hci_cb *cb, *tmp;
++
++ INIT_LIST_HEAD(&list);
++ hci_cb_lookup(conn, &list);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_for_each_entry(cb, &hci_cb_list, list) {
++ list_for_each_entry_safe(cb, tmp, &list, list) {
+ if (cb->role_switch_cfm)
+ cb->role_switch_cfm(conn, status, role);
++ kfree(cb);
+ }
+- mutex_unlock(&hci_cb_list_lock);
+ }
+
+ static inline bool hci_bdaddr_is_rpa(bdaddr_t *bdaddr, u8 addr_type)
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 91ae20cb76485b..471c353d32a4a5 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -733,15 +733,18 @@ struct nft_set_ext_tmpl {
+ /**
+ * struct nft_set_ext - set extensions
+ *
+- * @genmask: generation mask
++ * @genmask: generation mask, but also flags (see NFT_SET_ELEM_DEAD_BIT)
+ * @offset: offsets of individual extension types
+ * @data: beginning of extension data
++ *
++ * This structure must be aligned to word size, otherwise atomic bitops
++ * on genmask field can cause alignment failure on some archs.
+ */
+ struct nft_set_ext {
+ u8 genmask;
+ u8 offset[NFT_SET_EXT_NUM];
+ char data[];
+-};
++} __aligned(BITS_PER_LONG / 8);
+
+ static inline void nft_set_ext_prepare(struct nft_set_ext_tmpl *tmpl)
+ {
+diff --git a/include/sound/cs35l56.h b/include/sound/cs35l56.h
+index 94e8185c4795fe..3dc7a1551ac350 100644
+--- a/include/sound/cs35l56.h
++++ b/include/sound/cs35l56.h
+@@ -271,12 +271,6 @@ struct cs35l56_base {
+ struct gpio_desc *reset_gpio;
+ };
+
+-/* Temporary to avoid a build break with the HDA driver */
+-static inline int cs35l56_force_sync_asp1_registers_from_cache(struct cs35l56_base *cs35l56_base)
+-{
+- return 0;
+-}
+-
+ static inline bool cs35l56_is_otp_register(unsigned int reg)
+ {
+ return (reg >> 16) == 3;
+diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
+index d407576ddfb782..eec5eb7de8430e 100644
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -139,6 +139,7 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
+ struct io_uring_buf_ring *br = bl->buf_ring;
+ __u16 tail, head = bl->head;
+ struct io_uring_buf *buf;
++ void __user *ret;
+
+ tail = smp_load_acquire(&br->tail);
+ if (unlikely(tail == head))
+@@ -153,6 +154,7 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
+ req->flags |= REQ_F_BUFFER_RING | REQ_F_BUFFERS_COMMIT;
+ req->buf_list = bl;
+ req->buf_index = buf->bid;
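++	/* capture the address now; the commit below may advance or reuse buf */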
++ ret = u64_to_user_ptr(buf->addr);
+
+ if (issue_flags & IO_URING_F_UNLOCKED || !io_file_can_poll(req)) {
+ /*
+@@ -168,7 +170,7 @@ static void __user *io_ring_buffer_select(struct io_kiocb *req, size_t *len,
+ io_kbuf_commit(req, bl, *len, 1);
+ req->buf_list = NULL;
+ }
+- return u64_to_user_ptr(buf->addr);
++ return ret;
+ }
+
+ void __user *io_buffer_select(struct io_kiocb *req, size_t *len,
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 18507658a921d7..7f549be9abd1e6 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -748,6 +748,7 @@ static int io_recvmsg_prep_setup(struct io_kiocb *req)
+ if (req->opcode == IORING_OP_RECV) {
+ kmsg->msg.msg_name = NULL;
+ kmsg->msg.msg_namelen = 0;
++ kmsg->msg.msg_inq = 0;
+ kmsg->msg.msg_control = NULL;
+ kmsg->msg.msg_get_inq = 1;
+ kmsg->msg.msg_controllen = 0;
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 155938f1009313..39ad25d16ed404 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -979,6 +979,8 @@ int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags)
+ io_kbuf_recycle(req, issue_flags);
+ if (ret < 0)
+ req_set_fail(req);
++ } else if (!(req->flags & REQ_F_APOLL_MULTISHOT)) {
++ cflags = io_put_kbuf(req, ret, issue_flags);
+ } else {
+ /*
+ * Any successful return value will keep the multishot read
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 233ea78f8f1bd9..2b9c8c168a0ba3 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -539,6 +539,8 @@ struct bpf_prog *bpf_patch_insn_single(struct bpf_prog *prog, u32 off,
+
+ int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt)
+ {
++ int err;
++
+ /* Branch offsets can't overflow when program is shrinking, no need
+ * to call bpf_adj_branches(..., true) here
+ */
+@@ -546,7 +548,9 @@ int bpf_remove_insns(struct bpf_prog *prog, u32 off, u32 cnt)
+ sizeof(struct bpf_insn) * (prog->len - off - cnt));
+ prog->len -= cnt;
+
+- return WARN_ON_ONCE(bpf_adj_branches(prog, off, off + cnt, off, false));
++ err = bpf_adj_branches(prog, off, off + cnt, off, false);
++ WARN_ON_ONCE(err);
++ return err;
+ }
+
+ static void bpf_prog_kallsyms_del_subprogs(struct bpf_prog *fp)
+@@ -2936,7 +2940,7 @@ void __weak bpf_jit_compile(struct bpf_prog *prog)
+ {
+ }
+
+-bool __weak bpf_helper_changes_pkt_data(void *func)
++bool __weak bpf_helper_changes_pkt_data(enum bpf_func_id func_id)
+ {
+ return false;
+ }
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 767f1cb8c27e17..a0cab0d0252fab 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -10476,7 +10476,7 @@ static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn
+ }
+
+ /* With LD_ABS/IND some JITs save/restore skb from r1. */
+- changes_data = bpf_helper_changes_pkt_data(fn->func);
++ changes_data = bpf_helper_changes_pkt_data(func_id);
+ if (changes_data && fn->arg1_type != ARG_PTR_TO_CTX) {
+ verbose(env, "kernel subsystem misconfigured func %s#%d: r1 != ctx\n",
+ func_id_name(func_id), func_id);
+diff --git a/kernel/kcov.c b/kernel/kcov.c
+index 28a6be6e64fdd7..187ba1b80bda16 100644
+--- a/kernel/kcov.c
++++ b/kernel/kcov.c
+@@ -166,7 +166,7 @@ static void kcov_remote_area_put(struct kcov_remote_area *area,
+ * Unlike in_serving_softirq(), this function returns false when called during
+ * a hardirq or an NMI that happened in the softirq context.
+ */
+-static inline bool in_softirq_really(void)
++static __always_inline bool in_softirq_really(void)
+ {
+ return in_serving_softirq() && !in_hardirq() && !in_nmi();
+ }
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 79bb18651cdb8b..40f915f893e2ed 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -4367,7 +4367,7 @@ static void scx_ops_bypass(bool bypass)
+ * sees scx_rq_bypassing() before moving tasks to SCX.
+ */
+ if (!scx_enabled()) {
+- rq_unlock_irqrestore(rq, &rf);
++ rq_unlock(rq, &rf);
+ continue;
+ }
+
+@@ -6637,7 +6637,7 @@ __bpf_kfunc int bpf_iter_scx_dsq_new(struct bpf_iter_scx_dsq *it, u64 dsq_id,
+ return -ENOENT;
+
+ INIT_LIST_HEAD(&kit->cursor.node);
+- kit->cursor.flags |= SCX_DSQ_LNODE_ITER_CURSOR | flags;
++ kit->cursor.flags = SCX_DSQ_LNODE_ITER_CURSOR | flags;
+ kit->cursor.priv = READ_ONCE(kit->dsq->seq);
+
+ return 0;
+diff --git a/kernel/trace/fgraph.c b/kernel/trace/fgraph.c
+index 72bcbfad53db04..c12335499ec91e 100644
+--- a/kernel/trace/fgraph.c
++++ b/kernel/trace/fgraph.c
+@@ -802,7 +802,7 @@ static unsigned long __ftrace_return_to_handler(struct fgraph_ret_regs *ret_regs
+ #endif
+ {
+ for_each_set_bit(i, &bitmap, sizeof(bitmap) * BITS_PER_BYTE) {
+- struct fgraph_ops *gops = fgraph_array[i];
++ struct fgraph_ops *gops = READ_ONCE(fgraph_array[i]);
+
+ if (gops == &fgraph_stub)
+ continue;
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 3dd3b97d8049ae..cd9dbfb3038330 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -883,16 +883,13 @@ static void profile_graph_return(struct ftrace_graph_ret *trace,
+ }
+
+ static struct fgraph_ops fprofiler_ops = {
+- .ops = {
+- .flags = FTRACE_OPS_FL_INITIALIZED,
+- INIT_OPS_HASH(fprofiler_ops.ops)
+- },
+ .entryfunc = &profile_graph_entry,
+ .retfunc = &profile_graph_return,
+ };
+
+ static int register_ftrace_profiler(void)
+ {
++ ftrace_ops_set_global_filter(&fprofiler_ops.ops);
+ return register_ftrace_graph(&fprofiler_ops);
+ }
+
+@@ -903,12 +900,11 @@ static void unregister_ftrace_profiler(void)
+ #else
+ static struct ftrace_ops ftrace_profile_ops __read_mostly = {
+ .func = function_profile_call,
+- .flags = FTRACE_OPS_FL_INITIALIZED,
+- INIT_OPS_HASH(ftrace_profile_ops)
+ };
+
+ static int register_ftrace_profiler(void)
+ {
++ ftrace_ops_set_global_filter(&ftrace_profile_ops);
+ return register_ftrace_function(&ftrace_profile_ops);
+ }
+
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 7149cd6fd4795e..ea9b44847ce6b7 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -364,6 +364,18 @@ static bool process_string(const char *fmt, int len, struct trace_event_call *ca
+ s = r + 1;
+ } while (s < e);
+
++ /*
++	 * Check for arrays. If the argument has the form foo[REC->val],
++	 * it is very likely that foo is an array of strings that is
++	 * safe to use.
++ */
++ r = strstr(s, "[");
++ if (r && r < e) {
++ r = strstr(r, "REC->");
++ if (r && r < e)
++ return true;
++ }
++
+ /*
+ * If there's any strings in the argument consider this arg OK as it
+ * could be: REC->field ? "foo" : "bar" and we don't want to get into
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index 9949ffad8df09d..cee65cb4310816 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3680,23 +3680,27 @@ void workqueue_softirq_dead(unsigned int cpu)
+ * check_flush_dependency - check for flush dependency sanity
+ * @target_wq: workqueue being flushed
+ * @target_work: work item being flushed (NULL for workqueue flushes)
++ * @from_cancel: are we called from the work cancel path
+ *
+ * %current is trying to flush the whole @target_wq or @target_work on it.
+- * If @target_wq doesn't have %WQ_MEM_RECLAIM, verify that %current is not
+- * reclaiming memory or running on a workqueue which doesn't have
+- * %WQ_MEM_RECLAIM as that can break forward-progress guarantee leading to
+- * a deadlock.
++ * If this is not the cancel path (which implies the work being flushed is
++ * either already running or will not run at all), and @target_wq doesn't have
++ * %WQ_MEM_RECLAIM, verify that %current is neither reclaiming memory nor
++ * running on a workqueue without %WQ_MEM_RECLAIM, as either can break the
++ * forward-progress guarantee and lead to a deadlock.
+ */
+ static void check_flush_dependency(struct workqueue_struct *target_wq,
+- struct work_struct *target_work)
++ struct work_struct *target_work,
++ bool from_cancel)
+ {
+- work_func_t target_func = target_work ? target_work->func : NULL;
++ work_func_t target_func;
+ struct worker *worker;
+
+- if (target_wq->flags & WQ_MEM_RECLAIM)
++ if (from_cancel || target_wq->flags & WQ_MEM_RECLAIM)
+ return;
+
+ worker = current_wq_worker();
++ target_func = target_work ? target_work->func : NULL;
+
+ WARN_ONCE(current->flags & PF_MEMALLOC,
+ "workqueue: PF_MEMALLOC task %d(%s) is flushing !WQ_MEM_RECLAIM %s:%ps",
+@@ -3966,7 +3970,7 @@ void __flush_workqueue(struct workqueue_struct *wq)
+ list_add_tail(&this_flusher.list, &wq->flusher_overflow);
+ }
+
+- check_flush_dependency(wq, NULL);
++ check_flush_dependency(wq, NULL, false);
+
+ mutex_unlock(&wq->mutex);
+
+@@ -4141,7 +4145,7 @@ static bool start_flush_work(struct work_struct *work, struct wq_barrier *barr,
+ }
+
+ wq = pwq->wq;
+- check_flush_dependency(wq, work);
++ check_flush_dependency(wq, work, from_cancel);
+
+ insert_wq_barrier(pwq, barr, work, worker);
+ raw_spin_unlock_irq(&pool->lock);
+@@ -5627,6 +5631,7 @@ static void wq_adjust_max_active(struct workqueue_struct *wq)
+ } while (activated);
+ }
+
++__printf(1, 0)
+ static struct workqueue_struct *__alloc_workqueue(const char *fmt,
+ unsigned int flags,
+ int max_active, va_list args)
+diff --git a/lib/maple_tree.c b/lib/maple_tree.c
+index 8d83e217271967..0cbe913634be4b 100644
+--- a/lib/maple_tree.c
++++ b/lib/maple_tree.c
+@@ -4367,6 +4367,7 @@ int mas_alloc_cyclic(struct ma_state *mas, unsigned long *startp,
+ ret = 1;
+ }
+ if (ret < 0 && range_lo > min) {
++ mas_reset(mas);
+ ret = mas_empty_area(mas, min, range_hi, 1);
+ if (ret == 0)
+ ret = 1;
+diff --git a/mm/damon/core.c b/mm/damon/core.c
+index 511c3f61ab44c4..54f4dd8d549f06 100644
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -868,6 +868,11 @@ static int damon_commit_schemes(struct damon_ctx *dst, struct damon_ctx *src)
+ NUMA_NO_NODE);
+ if (!new_scheme)
+ return -ENOMEM;
++ err = damos_commit(new_scheme, src_scheme);
++ if (err) {
++ damon_destroy_scheme(new_scheme);
++ return err;
++ }
+ damon_add_scheme(dst, new_scheme);
+ }
+ return 0;
+@@ -961,8 +966,11 @@ static int damon_commit_targets(
+ return -ENOMEM;
+ err = damon_commit_target(new_target, false,
+ src_target, damon_target_has_pid(src));
+- if (err)
++ if (err) {
++ damon_destroy_target(new_target);
+ return err;
++ }
++ damon_add_target(dst, new_target);
+ }
+ return 0;
+ }
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 5dc57b74a8fe9a..2fa87b9ecec6c7 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -7200,7 +7200,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+ spte = hugetlb_walk(svma, saddr,
+ vma_mmu_pagesize(svma));
+ if (spte) {
+- get_page(virt_to_page(spte));
++ ptdesc_pmd_pts_inc(virt_to_ptdesc(spte));
+ break;
+ }
+ }
+@@ -7215,7 +7215,7 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+ (pmd_t *)((unsigned long)spte & PAGE_MASK));
+ mm_inc_nr_pmds(mm);
+ } else {
+- put_page(virt_to_page(spte));
++ ptdesc_pmd_pts_dec(virt_to_ptdesc(spte));
+ }
+ spin_unlock(&mm->page_table_lock);
+ out:
+@@ -7227,10 +7227,6 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+ /*
+ * unmap huge page backed by shared pte.
+ *
+- * Hugetlb pte page is ref counted at the time of mapping. If pte is shared
+- * indicated by page_count > 1, unmap is achieved by clearing pud and
+- * decrementing the ref count. If count == 1, the pte page is not shared.
+- *
+ * Called with page table lock held.
+ *
+ * returns: 1 successfully unmapped a shared pte page
+@@ -7239,18 +7235,20 @@ pte_t *huge_pmd_share(struct mm_struct *mm, struct vm_area_struct *vma,
+ int huge_pmd_unshare(struct mm_struct *mm, struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep)
+ {
++ unsigned long sz = huge_page_size(hstate_vma(vma));
+ pgd_t *pgd = pgd_offset(mm, addr);
+ p4d_t *p4d = p4d_offset(pgd, addr);
+ pud_t *pud = pud_offset(p4d, addr);
+
+ i_mmap_assert_write_locked(vma->vm_file->f_mapping);
+ hugetlb_vma_assert_locked(vma);
+- BUG_ON(page_count(virt_to_page(ptep)) == 0);
+- if (page_count(virt_to_page(ptep)) == 1)
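++	/* only PMD-sized hugetlb mappings can share page tables */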
++ if (sz != PMD_SIZE)
++ return 0;
++ if (!ptdesc_pmd_pts_count(virt_to_ptdesc(ptep)))
+ return 0;
+
+ pud_clear(pud);
+- put_page(virt_to_page(ptep));
++ ptdesc_pmd_pts_dec(virt_to_ptdesc(ptep));
+ mm_dec_nr_pmds(mm);
+ return 1;
+ }
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 0400f5e8ac60de..74f5f4c51ab8c8 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -373,7 +373,7 @@ static void print_unreferenced(struct seq_file *seq,
+
+ for (i = 0; i < nr_entries; i++) {
+ void *ptr = (void *)entries[i];
+- warn_or_seq_printf(seq, " [<%pK>] %pS\n", ptr, ptr);
++ warn_or_seq_printf(seq, " %pS\n", ptr);
+ }
+ }
+
+diff --git a/mm/memfd.c b/mm/memfd.c
+index c17c3ea701a17e..35a370d75c9ad7 100644
+--- a/mm/memfd.c
++++ b/mm/memfd.c
+@@ -170,7 +170,7 @@ static int memfd_wait_for_pins(struct address_space *mapping)
+ return error;
+ }
+
+-static unsigned int *memfd_file_seals_ptr(struct file *file)
++unsigned int *memfd_file_seals_ptr(struct file *file)
+ {
+ if (shmem_file(file))
+ return &SHMEM_I(file_inode(file))->seals;
+diff --git a/mm/mmap.c b/mm/mmap.c
+index 7fb4c1e97175f9..6183805f6f9e6e 100644
+--- a/mm/mmap.c
++++ b/mm/mmap.c
+@@ -47,6 +47,7 @@
+ #include <linux/oom.h>
+ #include <linux/sched/mm.h>
+ #include <linux/ksm.h>
++#include <linux/memfd.h>
+
+ #include <linux/uaccess.h>
+ #include <asm/cacheflush.h>
+@@ -368,6 +369,7 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
+
+ if (file) {
+ struct inode *inode = file_inode(file);
++ unsigned int seals = memfd_file_seals(file);
+ unsigned long flags_mask;
+
+ if (!file_mmap_ok(file, inode, pgoff, len))
+@@ -408,6 +410,8 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
+ vm_flags |= VM_SHARED | VM_MAYSHARE;
+ if (!(file->f_mode & FMODE_WRITE))
+ vm_flags &= ~(VM_MAYWRITE | VM_SHARED);
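++		/* a write-sealed memfd may still be mapped shared read-only;
++		 * drop VM_MAYWRITE so mprotect() cannot make it writable
++		 */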
++ else if (is_readonly_sealed(seals, vm_flags))
++ vm_flags &= ~VM_MAYWRITE;
+ fallthrough;
+ case MAP_PRIVATE:
+ if (!(file->f_mode & FMODE_READ))
+diff --git a/mm/readahead.c b/mm/readahead.c
+index 99fdb2b5b56862..bf79275060f3be 100644
+--- a/mm/readahead.c
++++ b/mm/readahead.c
+@@ -641,7 +641,11 @@ void page_cache_async_ra(struct readahead_control *ractl,
+ 1UL << order);
+ if (index == expected) {
+ ra->start += ra->size;
+- ra->size = get_next_ra_size(ra, max_pages);
++ /*
++ * In the case of MADV_HUGEPAGE, the actual size might exceed
++ * the readahead window.
++ */
++ ra->size = max(ra->size, get_next_ra_size(ra, max_pages));
+ ra->async_size = ra->size;
+ goto readit;
+ }
+diff --git a/mm/shmem.c b/mm/shmem.c
+index b03ced0c3d4858..dd4eb11c84b59e 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -1527,7 +1527,7 @@ static int shmem_writepage(struct page *page, struct writeback_control *wbc)
+ !shmem_falloc->waitq &&
+ index >= shmem_falloc->start &&
+ index < shmem_falloc->next)
+- shmem_falloc->nr_unswapped++;
++ shmem_falloc->nr_unswapped += nr_pages;
+ else
+ shmem_falloc = NULL;
+ spin_unlock(&inode->i_lock);
+@@ -1664,6 +1664,7 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
+ unsigned long mask = READ_ONCE(huge_shmem_orders_always);
+ unsigned long within_size_orders = READ_ONCE(huge_shmem_orders_within_size);
+ unsigned long vm_flags = vma ? vma->vm_flags : 0;
++ pgoff_t aligned_index;
+ bool global_huge;
+ loff_t i_size;
+ int order;
+@@ -1698,9 +1699,9 @@ unsigned long shmem_allowable_huge_orders(struct inode *inode,
+ /* Allow mTHP that will be fully within i_size. */
+ order = highest_order(within_size_orders);
+ while (within_size_orders) {
+- index = round_up(index + 1, order);
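++		/* round_up() takes the alignment in pages, i.e. 1 << order */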
++ aligned_index = round_up(index + 1, 1 << order);
+ i_size = round_up(i_size_read(inode), PAGE_SIZE);
+- if (i_size >> PAGE_SHIFT >= index) {
++ if (i_size >> PAGE_SHIFT >= aligned_index) {
+ mask |= within_size_orders;
+ break;
+ }
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 28ba2b06fc7dc2..67a680e4b484d7 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -374,7 +374,14 @@ unsigned long zone_reclaimable_pages(struct zone *zone)
+ if (can_reclaim_anon_pages(NULL, zone_to_nid(zone), NULL))
+ nr += zone_page_state_snapshot(zone, NR_ZONE_INACTIVE_ANON) +
+ zone_page_state_snapshot(zone, NR_ZONE_ACTIVE_ANON);
+-
++ /*
++ * If there are no reclaimable file-backed or anonymous pages,
++ * ensure zones with sufficient free pages are not skipped.
++ * This prevents zones like DMA32 from being ignored in reclaim
++ * scenarios where they can still help alleviate memory pressure.
++ */
++ if (nr == 0)
++ nr = zone_page_state_snapshot(zone, NR_FREE_PAGES);
+ return nr;
+ }
+
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index 72439764186ed2..b5553c08e73162 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -57,7 +57,6 @@ DEFINE_RWLOCK(hci_dev_list_lock);
+
+ /* HCI callback list */
+ LIST_HEAD(hci_cb_list);
+-DEFINE_MUTEX(hci_cb_list_lock);
+
+ /* HCI ID Numbering */
+ static DEFINE_IDA(hci_index_ida);
+@@ -2993,9 +2992,7 @@ int hci_register_cb(struct hci_cb *cb)
+ {
+ BT_DBG("%p name %s", cb, cb->name);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_add_tail(&cb->list, &hci_cb_list);
+- mutex_unlock(&hci_cb_list_lock);
++ list_add_tail_rcu(&cb->list, &hci_cb_list);
+
+ return 0;
+ }
+@@ -3005,9 +3002,8 @@ int hci_unregister_cb(struct hci_cb *cb)
+ {
+ BT_DBG("%p name %s", cb, cb->name);
+
+- mutex_lock(&hci_cb_list_lock);
+- list_del(&cb->list);
+- mutex_unlock(&hci_cb_list_lock);
++ list_del_rcu(&cb->list);
++ synchronize_rcu();
+
+ return 0;
+ }
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index 644b606743e212..bda2f2da7d7311 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -2137,6 +2137,11 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+ return HCI_LM_ACCEPT;
+ }
+
++static bool iso_match(struct hci_conn *hcon)
++{
++ return hcon->type == ISO_LINK || hcon->type == LE_LINK;
++}
++
+ static void iso_connect_cfm(struct hci_conn *hcon, __u8 status)
+ {
+ if (hcon->type != ISO_LINK) {
+@@ -2318,6 +2323,7 @@ void iso_recv(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
+
+ static struct hci_cb iso_cb = {
+ .name = "ISO",
++ .match = iso_match,
+ .connect_cfm = iso_connect_cfm,
+ .disconn_cfm = iso_disconn_cfm,
+ };
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 6544c1ed714344..27b4c4a2ba1fdd 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -7217,6 +7217,11 @@ static struct l2cap_chan *l2cap_global_fixed_chan(struct l2cap_chan *c,
+ return NULL;
+ }
+
++static bool l2cap_match(struct hci_conn *hcon)
++{
++ return hcon->type == ACL_LINK || hcon->type == LE_LINK;
++}
++
+ static void l2cap_connect_cfm(struct hci_conn *hcon, u8 status)
+ {
+ struct hci_dev *hdev = hcon->hdev;
+@@ -7224,9 +7229,6 @@ static void l2cap_connect_cfm(struct hci_conn *hcon, u8 status)
+ struct l2cap_chan *pchan;
+ u8 dst_type;
+
+- if (hcon->type != ACL_LINK && hcon->type != LE_LINK)
+- return;
+-
+ BT_DBG("hcon %p bdaddr %pMR status %d", hcon, &hcon->dst, status);
+
+ if (status) {
+@@ -7291,9 +7293,6 @@ int l2cap_disconn_ind(struct hci_conn *hcon)
+
+ static void l2cap_disconn_cfm(struct hci_conn *hcon, u8 reason)
+ {
+- if (hcon->type != ACL_LINK && hcon->type != LE_LINK)
+- return;
+-
+ BT_DBG("hcon %p reason %d", hcon, reason);
+
+ l2cap_conn_del(hcon, bt_to_errno(reason));
+@@ -7572,6 +7571,7 @@ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
+
+ static struct hci_cb l2cap_cb = {
+ .name = "L2CAP",
++ .match = l2cap_match,
+ .connect_cfm = l2cap_connect_cfm,
+ .disconn_cfm = l2cap_disconn_cfm,
+ .security_cfm = l2cap_security_cfm,
+diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c
+index ad5177e3a69b77..4c56ca5a216c6f 100644
+--- a/net/bluetooth/rfcomm/core.c
++++ b/net/bluetooth/rfcomm/core.c
+@@ -2134,6 +2134,11 @@ static int rfcomm_run(void *unused)
+ return 0;
+ }
+
++static bool rfcomm_match(struct hci_conn *hcon)
++{
++ return hcon->type == ACL_LINK;
++}
++
+ static void rfcomm_security_cfm(struct hci_conn *conn, u8 status, u8 encrypt)
+ {
+ struct rfcomm_session *s;
+@@ -2180,6 +2185,7 @@ static void rfcomm_security_cfm(struct hci_conn *conn, u8 status, u8 encrypt)
+
+ static struct hci_cb rfcomm_cb = {
+ .name = "RFCOMM",
++ .match = rfcomm_match,
+ .security_cfm = rfcomm_security_cfm
+ };
+
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index b872a2ca3ff38b..071c404c790af9 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -1355,11 +1355,13 @@ int sco_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+ return lm;
+ }
+
+-static void sco_connect_cfm(struct hci_conn *hcon, __u8 status)
++static bool sco_match(struct hci_conn *hcon)
+ {
+- if (hcon->type != SCO_LINK && hcon->type != ESCO_LINK)
+- return;
++ return hcon->type == SCO_LINK || hcon->type == ESCO_LINK;
++}
+
++static void sco_connect_cfm(struct hci_conn *hcon, __u8 status)
++{
+ BT_DBG("hcon %p bdaddr %pMR status %u", hcon, &hcon->dst, status);
+
+ if (!status) {
+@@ -1374,9 +1376,6 @@ static void sco_connect_cfm(struct hci_conn *hcon, __u8 status)
+
+ static void sco_disconn_cfm(struct hci_conn *hcon, __u8 reason)
+ {
+- if (hcon->type != SCO_LINK && hcon->type != ESCO_LINK)
+- return;
+-
+ BT_DBG("hcon %p reason %d", hcon, reason);
+
+ sco_conn_del(hcon, bt_to_errno(reason));
+@@ -1402,6 +1401,7 @@ void sco_recv_scodata(struct hci_conn *hcon, struct sk_buff *skb)
+
+ static struct hci_cb sco_cb = {
+ .name = "SCO",
++ .match = sco_match,
+ .connect_cfm = sco_connect_cfm,
+ .disconn_cfm = sco_disconn_cfm,
+ };
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 8453e14d301b63..f3fa8353d262b0 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3640,8 +3640,10 @@ int skb_csum_hwoffload_help(struct sk_buff *skb,
+
+ if (features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
+ if (vlan_get_protocol(skb) == htons(ETH_P_IPV6) &&
+- skb_network_header_len(skb) != sizeof(struct ipv6hdr))
++ skb_network_header_len(skb) != sizeof(struct ipv6hdr) &&
++ !ipv6_has_hopopt_jumbo(skb))
+ goto sw_checksum;
++
+ switch (skb->csum_offset) {
+ case offsetof(struct tcphdr, check):
+ case offsetof(struct udphdr, check):
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 55495063621d6c..54a53fae9e98f5 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -7918,42 +7918,37 @@ static const struct bpf_func_proto bpf_tcp_raw_check_syncookie_ipv6_proto = {
+
+ #endif /* CONFIG_INET */
+
+-bool bpf_helper_changes_pkt_data(void *func)
+-{
+- if (func == bpf_skb_vlan_push ||
+- func == bpf_skb_vlan_pop ||
+- func == bpf_skb_store_bytes ||
+- func == bpf_skb_change_proto ||
+- func == bpf_skb_change_head ||
+- func == sk_skb_change_head ||
+- func == bpf_skb_change_tail ||
+- func == sk_skb_change_tail ||
+- func == bpf_skb_adjust_room ||
+- func == sk_skb_adjust_room ||
+- func == bpf_skb_pull_data ||
+- func == sk_skb_pull_data ||
+- func == bpf_clone_redirect ||
+- func == bpf_l3_csum_replace ||
+- func == bpf_l4_csum_replace ||
+- func == bpf_xdp_adjust_head ||
+- func == bpf_xdp_adjust_meta ||
+- func == bpf_msg_pull_data ||
+- func == bpf_msg_push_data ||
+- func == bpf_msg_pop_data ||
+- func == bpf_xdp_adjust_tail ||
+-#if IS_ENABLED(CONFIG_IPV6_SEG6_BPF)
+- func == bpf_lwt_seg6_store_bytes ||
+- func == bpf_lwt_seg6_adjust_srh ||
+- func == bpf_lwt_seg6_action ||
+-#endif
+-#ifdef CONFIG_INET
+- func == bpf_sock_ops_store_hdr_opt ||
+-#endif
+- func == bpf_lwt_in_push_encap ||
+- func == bpf_lwt_xmit_push_encap)
++bool bpf_helper_changes_pkt_data(enum bpf_func_id func_id)
++{
++ switch (func_id) {
++ case BPF_FUNC_clone_redirect:
++ case BPF_FUNC_l3_csum_replace:
++ case BPF_FUNC_l4_csum_replace:
++ case BPF_FUNC_lwt_push_encap:
++ case BPF_FUNC_lwt_seg6_action:
++ case BPF_FUNC_lwt_seg6_adjust_srh:
++ case BPF_FUNC_lwt_seg6_store_bytes:
++ case BPF_FUNC_msg_pop_data:
++ case BPF_FUNC_msg_pull_data:
++ case BPF_FUNC_msg_push_data:
++ case BPF_FUNC_skb_adjust_room:
++ case BPF_FUNC_skb_change_head:
++ case BPF_FUNC_skb_change_proto:
++ case BPF_FUNC_skb_change_tail:
++ case BPF_FUNC_skb_pull_data:
++ case BPF_FUNC_skb_store_bytes:
++ case BPF_FUNC_skb_vlan_pop:
++ case BPF_FUNC_skb_vlan_push:
++ case BPF_FUNC_store_hdr_opt:
++ case BPF_FUNC_xdp_adjust_head:
++ case BPF_FUNC_xdp_adjust_meta:
++ case BPF_FUNC_xdp_adjust_tail:
++ /* tail-called program could call any of the above */
++ case BPF_FUNC_tail_call:
+ return true;
+-
+- return false;
++ default:
++ return false;
++ }
+ }
+
+ const struct bpf_func_proto bpf_event_output_data_proto __weak;
+diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
+index 7ce22f40db5b04..d58270b48cb2cf 100644
+--- a/net/core/netdev-genl.c
++++ b/net/core/netdev-genl.c
+@@ -228,8 +228,12 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
+ rcu_read_unlock();
+ rtnl_unlock();
+
+- if (err)
++ if (err) {
++ goto err_free_msg;
++ } else if (!rsp->len) {
++ err = -ENOENT;
+ goto err_free_msg;
++ }
+
+ return genlmsg_reply(rsp, info);
+
+diff --git a/net/core/sock.c b/net/core/sock.c
+index da50df485090ff..a83f64a1d96a29 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -1300,7 +1300,10 @@ int sk_setsockopt(struct sock *sk, int level, int optname,
+ sk->sk_reuse = (valbool ? SK_CAN_REUSE : SK_NO_REUSE);
+ break;
+ case SO_REUSEPORT:
+- sk->sk_reuseport = valbool;
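++		/* SO_REUSEPORT is only supported by inet sockets */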
++ if (valbool && !sk_is_inet(sk))
++ ret = -EOPNOTSUPP;
++ else
++ sk->sk_reuseport = valbool;
+ break;
+ case SO_DONTROUTE:
+ sock_valbool_flag(sk, SOCK_LOCALROUTE, valbool);
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 25505f9b724c33..09b73acf037ae2 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -294,7 +294,7 @@ static int ip_tunnel_bind_dev(struct net_device *dev)
+
+ ip_tunnel_init_flow(&fl4, iph->protocol, iph->daddr,
+ iph->saddr, tunnel->parms.o_key,
+- iph->tos & INET_DSCP_MASK, dev_net(dev),
++ iph->tos & INET_DSCP_MASK, tunnel->net,
+ tunnel->parms.link, tunnel->fwmark, 0, 0);
+ rt = ip_route_output_key(tunnel->net, &fl4);
+
+@@ -611,7 +611,7 @@ void ip_md_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+ }
+ ip_tunnel_init_flow(&fl4, proto, key->u.ipv4.dst, key->u.ipv4.src,
+ tunnel_id_to_key32(key->tun_id),
+- tos & INET_DSCP_MASK, dev_net(dev), 0, skb->mark,
++ tos & INET_DSCP_MASK, tunnel->net, 0, skb->mark,
+ skb_get_hash(skb), key->flow_flags);
+
+ if (!tunnel_hlen)
+@@ -774,7 +774,7 @@ void ip_tunnel_xmit(struct sk_buff *skb, struct net_device *dev,
+
+ ip_tunnel_init_flow(&fl4, protocol, dst, tnl_params->saddr,
+ tunnel->parms.o_key, tos & INET_DSCP_MASK,
+- dev_net(dev), READ_ONCE(tunnel->parms.link),
++ tunnel->net, READ_ONCE(tunnel->parms.link),
+ tunnel->fwmark, skb_get_hash(skb), 0);
+
+ if (ip_tunnel_encap(skb, &tunnel->encap, &protocol, &fl4) < 0)
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 2d844e1f867f0a..2d43b29da15e20 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -7328,6 +7328,7 @@ int tcp_conn_request(struct request_sock_ops *rsk_ops,
+ if (unlikely(!inet_csk_reqsk_queue_hash_add(sk, req,
+ req->timeout))) {
+ reqsk_free(req);
++ dst_release(dst);
+ return 0;
+ }
+
+diff --git a/net/ipv6/ila/ila_xlat.c b/net/ipv6/ila/ila_xlat.c
+index 534a4498e280d7..fff09f5a796a75 100644
+--- a/net/ipv6/ila/ila_xlat.c
++++ b/net/ipv6/ila/ila_xlat.c
+@@ -200,6 +200,8 @@ static const struct nf_hook_ops ila_nf_hook_ops[] = {
+ },
+ };
+
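++/* serializes the deferred, one-time registration of the ILA nf hooks */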
++static DEFINE_MUTEX(ila_mutex);
++
+ static int ila_add_mapping(struct net *net, struct ila_xlat_params *xp)
+ {
+ struct ila_net *ilan = net_generic(net, ila_net_id);
+@@ -207,16 +209,20 @@ static int ila_add_mapping(struct net *net, struct ila_xlat_params *xp)
+ spinlock_t *lock = ila_get_lock(ilan, xp->ip.locator_match);
+ int err = 0, order;
+
+- if (!ilan->xlat.hooks_registered) {
++ if (!READ_ONCE(ilan->xlat.hooks_registered)) {
+ /* We defer registering net hooks in the namespace until the
+ * first mapping is added.
+ */
+- err = nf_register_net_hooks(net, ila_nf_hook_ops,
+- ARRAY_SIZE(ila_nf_hook_ops));
++ mutex_lock(&ila_mutex);
++ if (!ilan->xlat.hooks_registered) {
++ err = nf_register_net_hooks(net, ila_nf_hook_ops,
++ ARRAY_SIZE(ila_nf_hook_ops));
++ if (!err)
++ WRITE_ONCE(ilan->xlat.hooks_registered, true);
++ }
++ mutex_unlock(&ila_mutex);
+ if (err)
+ return err;
+-
+- ilan->xlat.hooks_registered = true;
+ }
+
+ ila = kzalloc(sizeof(*ila), GFP_KERNEL);
+diff --git a/net/llc/llc_input.c b/net/llc/llc_input.c
+index 51bccfb00a9cd9..61b0159b2fbee6 100644
+--- a/net/llc/llc_input.c
++++ b/net/llc/llc_input.c
+@@ -124,8 +124,8 @@ static inline int llc_fixup_skb(struct sk_buff *skb)
+ if (unlikely(!pskb_may_pull(skb, llc_len)))
+ return 0;
+
+- skb->transport_header += llc_len;
+ skb_pull(skb, llc_len);
++ skb_reset_transport_header(skb);
+ if (skb->protocol == htons(ETH_P_802_2)) {
+ __be16 pdulen;
+ s32 data_size;
+diff --git a/net/mac80211/cfg.c b/net/mac80211/cfg.c
+index 1b1bf044378d48..f11fd360b422dd 100644
+--- a/net/mac80211/cfg.c
++++ b/net/mac80211/cfg.c
+@@ -4992,10 +4992,16 @@ static void ieee80211_del_intf_link(struct wiphy *wiphy,
+ unsigned int link_id)
+ {
+ struct ieee80211_sub_if_data *sdata = IEEE80211_WDEV_TO_SUB_IF(wdev);
++ u16 new_links = wdev->valid_links & ~BIT(link_id);
+
+ lockdep_assert_wiphy(sdata->local->hw.wiphy);
+
+- ieee80211_vif_set_links(sdata, wdev->valid_links, 0);
++ /* During the link teardown process, certain functions require the
++ * link_id to remain in the valid_links bitmap. Therefore, instead
++	 * of removing the link_id from the bitmap, pass a masked value to
++	 * make it appear as if the link_id no longer exists.
++ */
++ ieee80211_vif_set_links(sdata, new_links, 0);
+ }
+
+ static int
+diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
+index 640239f4425b16..50eb1d8cd43deb 100644
+--- a/net/mac80211/mesh.c
++++ b/net/mac80211/mesh.c
+@@ -1157,14 +1157,14 @@ void ieee80211_mbss_info_change_notify(struct ieee80211_sub_if_data *sdata,
+ u64 changed)
+ {
+ struct ieee80211_if_mesh *ifmsh = &sdata->u.mesh;
+- unsigned long bits = changed;
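++	/* 'changed' is u64; copy it into a bitmap so no flags are lost on 32-bit */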
++ unsigned long bits[] = { BITMAP_FROM_U64(changed) };
+ u32 bit;
+
+- if (!bits)
++ if (!changed)
+ return;
+
+ /* if we race with running work, worst case this work becomes a noop */
+- for_each_set_bit(bit, &bits, sizeof(changed) * BITS_PER_BYTE)
++ for_each_set_bit(bit, bits, sizeof(changed) * BITS_PER_BYTE)
+ set_bit(bit, ifmsh->mbss_changed);
+ set_bit(MESH_WORK_MBSS_CHANGED, &ifmsh->wrkq_flags);
+ wiphy_work_queue(sdata->local->hw.wiphy, &sdata->work);
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index b4814e97cf7422..38c30e4ddda98c 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -1825,6 +1825,9 @@ int ieee80211_reconfig(struct ieee80211_local *local)
+ WARN(1, "Hardware became unavailable upon resume. This could be a software issue prior to suspend or a hardware issue.\n");
+ else
+ WARN(1, "Hardware became unavailable during restart.\n");
++ ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP,
++ IEEE80211_QUEUE_STOP_REASON_SUSPEND,
++ false);
+ ieee80211_handle_reconfig_failure(local);
+ return res;
+ }
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 1603b3702e2207..a62bc874bf1e17 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -667,8 +667,15 @@ static bool mptcp_established_options_add_addr(struct sock *sk, struct sk_buff *
+ &echo, &drop_other_suboptions))
+ return false;
+
++ /*
++	 * Later on, mptcp_write_options() will enforce mutual exclusion with
++	 * DSS; bail out if the DSS option is set and we can't drop it.
++ */
+ if (drop_other_suboptions)
+ remaining += opt_size;
++ else if (opts->suboptions & OPTION_MPTCP_DSS)
++ return false;
++
+ len = mptcp_add_addr_len(opts->addr.family, echo, !!opts->addr.port);
+ if (remaining < len)
+ return false;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 8a8e8fee337f5e..4b9d850ce85a25 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -528,13 +528,13 @@ static void mptcp_send_ack(struct mptcp_sock *msk)
+ mptcp_subflow_send_ack(mptcp_subflow_tcp_sock(subflow));
+ }
+
+-static void mptcp_subflow_cleanup_rbuf(struct sock *ssk)
++static void mptcp_subflow_cleanup_rbuf(struct sock *ssk, int copied)
+ {
+ bool slow;
+
+ slow = lock_sock_fast(ssk);
+ if (tcp_can_send_ack(ssk))
+- tcp_cleanup_rbuf(ssk, 1);
++ tcp_cleanup_rbuf(ssk, copied);
+ unlock_sock_fast(ssk, slow);
+ }
+
+@@ -551,7 +551,7 @@ static bool mptcp_subflow_could_cleanup(const struct sock *ssk, bool rx_empty)
+ (ICSK_ACK_PUSHED2 | ICSK_ACK_PUSHED)));
+ }
+
+-static void mptcp_cleanup_rbuf(struct mptcp_sock *msk)
++static void mptcp_cleanup_rbuf(struct mptcp_sock *msk, int copied)
+ {
+ int old_space = READ_ONCE(msk->old_wspace);
+ struct mptcp_subflow_context *subflow;
+@@ -559,14 +559,14 @@ static void mptcp_cleanup_rbuf(struct mptcp_sock *msk)
+ int space = __mptcp_space(sk);
+ bool cleanup, rx_empty;
+
+- cleanup = (space > 0) && (space >= (old_space << 1));
+- rx_empty = !__mptcp_rmem(sk);
++ cleanup = (space > 0) && (space >= (old_space << 1)) && copied;
++ rx_empty = !__mptcp_rmem(sk) && copied;
+
+ mptcp_for_each_subflow(msk, subflow) {
+ struct sock *ssk = mptcp_subflow_tcp_sock(subflow);
+
+ if (cleanup || mptcp_subflow_could_cleanup(ssk, rx_empty))
+- mptcp_subflow_cleanup_rbuf(ssk);
++ mptcp_subflow_cleanup_rbuf(ssk, copied);
+ }
+ }
+
+@@ -1939,6 +1939,8 @@ static int mptcp_sendmsg(struct sock *sk, struct msghdr *msg, size_t len)
+ goto out;
+ }
+
++static void mptcp_rcv_space_adjust(struct mptcp_sock *msk, int copied);
++
+ static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk,
+ struct msghdr *msg,
+ size_t len, int flags,
+@@ -1992,6 +1994,7 @@ static int __mptcp_recvmsg_mskq(struct mptcp_sock *msk,
+ break;
+ }
+
++ mptcp_rcv_space_adjust(msk, copied);
+ return copied;
+ }
+
+@@ -2217,9 +2220,6 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+
+ copied += bytes_read;
+
+- /* be sure to advertise window change */
+- mptcp_cleanup_rbuf(msk);
+-
+ if (skb_queue_empty(&msk->receive_queue) && __mptcp_move_skbs(msk))
+ continue;
+
+@@ -2268,7 +2268,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ }
+
+ pr_debug("block timeout %ld\n", timeo);
+- mptcp_rcv_space_adjust(msk, copied);
++ mptcp_cleanup_rbuf(msk, copied);
+ err = sk_wait_data(sk, &timeo, NULL);
+ if (err < 0) {
+ err = copied ? : err;
+@@ -2276,7 +2276,7 @@ static int mptcp_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ }
+ }
+
+- mptcp_rcv_space_adjust(msk, copied);
++ mptcp_cleanup_rbuf(msk, copied);
+
+ out_err:
+ if (cmsg_flags && copied >= 0) {
+diff --git a/net/netrom/nr_route.c b/net/netrom/nr_route.c
+index 2b5e246b8d9a7a..b94cb2ffbaf8fa 100644
+--- a/net/netrom/nr_route.c
++++ b/net/netrom/nr_route.c
+@@ -754,6 +754,12 @@ int nr_route_frame(struct sk_buff *skb, ax25_cb *ax25)
+ int ret;
+ struct sk_buff *skbn;
+
++ /*
++	 * Reject malformed packets early. Check that the frame contains at
++	 * least two addresses plus one byte for the Time-To-Live field.
++ */
++ if (skb->len < 2 * sizeof(ax25_address) + 1)
++ return 0;
+
+ nr_src = (ax25_address *)(skb->data + 0);
+ nr_dest = (ax25_address *)(skb->data + 7);
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index 97774bd4b6cb11..f3cecb3e4bcb18 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -538,10 +538,8 @@ static void *packet_current_frame(struct packet_sock *po,
+ return packet_lookup_frame(po, rb, rb->head, status);
+ }
+
+-static u16 vlan_get_tci(struct sk_buff *skb, struct net_device *dev)
++static u16 vlan_get_tci(const struct sk_buff *skb, struct net_device *dev)
+ {
+- u8 *skb_orig_data = skb->data;
+- int skb_orig_len = skb->len;
+ struct vlan_hdr vhdr, *vh;
+ unsigned int header_len;
+
+@@ -562,33 +560,21 @@ static u16 vlan_get_tci(struct sk_buff *skb, struct net_device *dev)
+ else
+ return 0;
+
+- skb_push(skb, skb->data - skb_mac_header(skb));
+- vh = skb_header_pointer(skb, header_len, sizeof(vhdr), &vhdr);
+- if (skb_orig_data != skb->data) {
+- skb->data = skb_orig_data;
+- skb->len = skb_orig_len;
+- }
++ vh = skb_header_pointer(skb, skb_mac_offset(skb) + header_len,
++ sizeof(vhdr), &vhdr);
+ if (unlikely(!vh))
+ return 0;
+
+ return ntohs(vh->h_vlan_TCI);
+ }
+
+-static __be16 vlan_get_protocol_dgram(struct sk_buff *skb)
++static __be16 vlan_get_protocol_dgram(const struct sk_buff *skb)
+ {
+ __be16 proto = skb->protocol;
+
+- if (unlikely(eth_type_vlan(proto))) {
+- u8 *skb_orig_data = skb->data;
+- int skb_orig_len = skb->len;
+-
+- skb_push(skb, skb->data - skb_mac_header(skb));
+- proto = __vlan_get_protocol(skb, proto, NULL);
+- if (skb_orig_data != skb->data) {
+- skb->data = skb_orig_data;
+- skb->len = skb_orig_len;
+- }
+- }
++ if (unlikely(eth_type_vlan(proto)))
++ proto = __vlan_get_protocol_offset(skb, proto,
++ skb_mac_offset(skb), NULL);
+
+ return proto;
+ }
+diff --git a/net/sctp/associola.c b/net/sctp/associola.c
+index c45c192b787873..0b0794f164cf2e 100644
+--- a/net/sctp/associola.c
++++ b/net/sctp/associola.c
+@@ -137,7 +137,8 @@ static struct sctp_association *sctp_association_init(
+ = 5 * asoc->rto_max;
+
+ asoc->timeouts[SCTP_EVENT_TIMEOUT_SACK] = asoc->sackdelay;
+- asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] = sp->autoclose * HZ;
++ asoc->timeouts[SCTP_EVENT_TIMEOUT_AUTOCLOSE] =
++ (unsigned long)sp->autoclose * HZ;
+
+ /* Initializes the timers */
+ for (i = SCTP_EVENT_TIMEOUT_NONE; i < SCTP_NUM_TIMEOUT_TYPES; ++i)
+diff --git a/net/wireless/util.c b/net/wireless/util.c
+index f49b55724f8341..18585b1416c662 100644
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -2843,10 +2843,9 @@ void cfg80211_remove_link(struct wireless_dev *wdev, unsigned int link_id)
+ break;
+ }
+
+- wdev->valid_links &= ~BIT(link_id);
+-
+ rdev_del_intf_link(rdev, wdev, link_id);
+
++ wdev->valid_links &= ~BIT(link_id);
+ eth_zero_addr(wdev->links[link_id].addr);
+ }
+
+diff --git a/scripts/mksysmap b/scripts/mksysmap
+index c12723a0465562..3accbdb269ac70 100755
+--- a/scripts/mksysmap
++++ b/scripts/mksysmap
+@@ -26,7 +26,7 @@
+ # (do not forget a space before each pattern)
+
+ # local symbols for ARM, MIPS, etc.
+-/ \\$/d
++/ \$/d
+
+ # local labels, .LBB, .Ltmpxxx, .L__unnamed_xx, .LASANPC, etc.
+ / \.L/d
+@@ -39,7 +39,7 @@
+ / __pi_\.L/d
+
+ # arm64 local symbols in non-VHE KVM namespace
+-/ __kvm_nvhe_\\$/d
++/ __kvm_nvhe_\$/d
+ / __kvm_nvhe_\.L/d
+
+ # lld arm/aarch64/mips thunks
+diff --git a/scripts/mod/file2alias.c b/scripts/mod/file2alias.c
+index 634e40748287c0..721e0e9f17cada 100644
+--- a/scripts/mod/file2alias.c
++++ b/scripts/mod/file2alias.c
+@@ -742,7 +742,7 @@ static void do_input(char *alias,
+
+ for (i = min / BITS_PER_LONG; i < max / BITS_PER_LONG + 1; i++)
+ arr[i] = TO_NATIVE(arr[i]);
+- for (i = min; i < max; i++)
++ for (i = min; i <= max; i++)
+ if (arr[i / BITS_PER_LONG] & (1ULL << (i%BITS_PER_LONG)))
+ sprintf(alias + strlen(alias), "%X,*", i);
+ }
+diff --git a/scripts/package/PKGBUILD b/scripts/package/PKGBUILD
+index f83493838cf96a..dca706617adc76 100644
+--- a/scripts/package/PKGBUILD
++++ b/scripts/package/PKGBUILD
+@@ -103,7 +103,7 @@ _package-headers() {
+
+ _package-api-headers() {
+ pkgdesc="Kernel headers sanitized for use in userspace"
+- provides=(linux-api-headers)
++ provides=(linux-api-headers="${pkgver}")
+ conflicts=(linux-api-headers)
+
+ _prologue
+diff --git a/scripts/sorttable.h b/scripts/sorttable.h
+index 7bd0184380d3b9..a7c5445baf0027 100644
+--- a/scripts/sorttable.h
++++ b/scripts/sorttable.h
+@@ -110,7 +110,7 @@ static inline unsigned long orc_ip(const int *ip)
+
+ static int orc_sort_cmp(const void *_a, const void *_b)
+ {
+- struct orc_entry *orc_a;
++ struct orc_entry *orc_a, *orc_b;
+ const int *a = g_orc_ip_table + *(int *)_a;
+ const int *b = g_orc_ip_table + *(int *)_b;
+ unsigned long a_val = orc_ip(a);
+@@ -128,6 +128,9 @@ static int orc_sort_cmp(const void *_a, const void *_b)
+ * whitelisted .o files which didn't get objtool generation.
+ */
+ orc_a = g_orc_table + (a - g_orc_ip_table);
++ orc_b = g_orc_table + (b - g_orc_ip_table);
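++	/* two undefined entries compare equal to keep the sort deterministic */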
++ if (orc_a->type == ORC_TYPE_UNDEFINED && orc_b->type == ORC_TYPE_UNDEFINED)
++ return 0;
+ return orc_a->type == ORC_TYPE_UNDEFINED ? -1 : 1;
+ }
+
+diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
+index a9830fbfc5c66c..88850405ded929 100644
+--- a/security/selinux/ss/services.c
++++ b/security/selinux/ss/services.c
+@@ -955,7 +955,10 @@ void services_compute_xperms_decision(struct extended_perms_decision *xpermd,
+ xpermd->driver))
+ return;
+ } else {
+- BUG();
++ pr_warn_once(
++ "SELinux: unknown extended permission (%u) will be ignored\n",
++ node->datum.u.xperms->specified);
++ return;
+ }
+
+ if (node->key.specified == AVTAB_XPERMS_ALLOWED) {
+@@ -992,7 +995,8 @@ void services_compute_xperms_decision(struct extended_perms_decision *xpermd,
+ node->datum.u.xperms->perms.p[i];
+ }
+ } else {
+- BUG();
++ pr_warn_once("SELinux: unknown specified key (%u)\n",
++ node->key.specified);
+ }
+ }
+
+diff --git a/sound/core/seq/oss/seq_oss_synth.c b/sound/core/seq/oss/seq_oss_synth.c
+index e3394919daa09a..51ee4c00a84310 100644
+--- a/sound/core/seq/oss/seq_oss_synth.c
++++ b/sound/core/seq/oss/seq_oss_synth.c
+@@ -66,6 +66,7 @@ static struct seq_oss_synth midi_synth_dev = {
+ };
+
+ static DEFINE_SPINLOCK(register_lock);
++static DEFINE_MUTEX(sysex_mutex);
+
+ /*
+ * prototypes
+@@ -497,6 +498,7 @@ snd_seq_oss_synth_sysex(struct seq_oss_devinfo *dp, int dev, unsigned char *buf,
+ if (!info)
+ return -ENXIO;
+
++ guard(mutex)(&sysex_mutex);
+ sysex = info->sysex;
+ if (sysex == NULL) {
+ sysex = kzalloc(sizeof(*sysex), GFP_KERNEL);
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 3930e2f9082f42..77b6ac9b5c11bc 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -1275,10 +1275,16 @@ static int snd_seq_ioctl_set_client_info(struct snd_seq_client *client,
+ if (client->type != client_info->type)
+ return -EINVAL;
+
+- /* check validity of midi_version field */
+- if (client->user_pversion >= SNDRV_PROTOCOL_VERSION(1, 0, 3) &&
+- client_info->midi_version > SNDRV_SEQ_CLIENT_UMP_MIDI_2_0)
+- return -EINVAL;
++ if (client->user_pversion >= SNDRV_PROTOCOL_VERSION(1, 0, 3)) {
++ /* check validity of midi_version field */
++ if (client_info->midi_version > SNDRV_SEQ_CLIENT_UMP_MIDI_2_0)
++ return -EINVAL;
++
++ /* check if UMP is supported in kernel */
++ if (!IS_ENABLED(CONFIG_SND_SEQ_UMP) &&
++ client_info->midi_version > 0)
++ return -EINVAL;
++ }
+
+ /* fill the info fields */
+ if (client_info->name[0])
+diff --git a/sound/core/ump.c b/sound/core/ump.c
+index bd26bb2210cbd4..abc537d54b7312 100644
+--- a/sound/core/ump.c
++++ b/sound/core/ump.c
+@@ -1244,7 +1244,7 @@ static int fill_legacy_mapping(struct snd_ump_endpoint *ump)
+
+ num = 0;
+ for (i = 0; i < SNDRV_UMP_MAX_GROUPS; i++)
+- if ((group_maps & (1U << i)) && ump->groups[i].valid)
++ if (group_maps & (1U << i))
+ ump->legacy_mapping[num++] = i;
+
+ return num;
+diff --git a/sound/pci/hda/cs35l56_hda.c b/sound/pci/hda/cs35l56_hda.c
+index e3ac0e23ae3211..7baf3b506eefec 100644
+--- a/sound/pci/hda/cs35l56_hda.c
++++ b/sound/pci/hda/cs35l56_hda.c
+@@ -151,10 +151,6 @@ static int cs35l56_hda_runtime_resume(struct device *dev)
+ }
+ }
+
+- ret = cs35l56_force_sync_asp1_registers_from_cache(&cs35l56->base);
+- if (ret)
+- goto err;
+-
+ return 0;
+
+ err:
+@@ -1059,9 +1055,6 @@ int cs35l56_hda_common_probe(struct cs35l56_hda *cs35l56, int hid, int id)
+
+ regmap_multi_reg_write(cs35l56->base.regmap, cs35l56_hda_dai_config,
+ ARRAY_SIZE(cs35l56_hda_dai_config));
+- ret = cs35l56_force_sync_asp1_registers_from_cache(&cs35l56->base);
+- if (ret)
+- goto dsp_err;
+
+ /*
+ * By default only enable one ASP1TXn, where n=amplifier index,
+@@ -1087,7 +1080,6 @@ int cs35l56_hda_common_probe(struct cs35l56_hda *cs35l56, int hid, int id)
+
+ pm_err:
+ pm_runtime_disable(cs35l56->base.dev);
+-dsp_err:
+ cs_dsp_remove(&cs35l56->cs_dsp);
+ err:
+ gpiod_set_value_cansleep(cs35l56->base.reset_gpio, 0);
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index e4673a71551a3b..d40197fb5fbd58 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -1134,7 +1134,6 @@ struct ca0132_spec {
+
+ struct hda_codec *codec;
+ struct delayed_work unsol_hp_work;
+- int quirk;
+
+ #ifdef ENABLE_TUNING_CONTROLS
+ long cur_ctl_vals[TUNING_CTLS_COUNT];
+@@ -1166,7 +1165,6 @@ struct ca0132_spec {
+ * CA0132 quirks table
+ */
+ enum {
+- QUIRK_NONE,
+ QUIRK_ALIENWARE,
+ QUIRK_ALIENWARE_M17XR4,
+ QUIRK_SBZ,
+@@ -1176,10 +1174,11 @@ enum {
+ QUIRK_R3D,
+ QUIRK_AE5,
+ QUIRK_AE7,
++ QUIRK_NONE = HDA_FIXUP_ID_NOT_SET,
+ };
+
+ #ifdef CONFIG_PCI
+-#define ca0132_quirk(spec) ((spec)->quirk)
++#define ca0132_quirk(spec) ((spec)->codec->fixup_id)
+ #define ca0132_use_pci_mmio(spec) ((spec)->use_pci_mmio)
+ #define ca0132_use_alt_functions(spec) ((spec)->use_alt_functions)
+ #define ca0132_use_alt_controls(spec) ((spec)->use_alt_controls)
+@@ -1293,7 +1292,7 @@ static const struct hda_pintbl ae7_pincfgs[] = {
+ {}
+ };
+
+-static const struct snd_pci_quirk ca0132_quirks[] = {
++static const struct hda_quirk ca0132_quirks[] = {
+ SND_PCI_QUIRK(0x1028, 0x057b, "Alienware M17x R4", QUIRK_ALIENWARE_M17XR4),
+ SND_PCI_QUIRK(0x1028, 0x0685, "Alienware 15 2015", QUIRK_ALIENWARE),
+ SND_PCI_QUIRK(0x1028, 0x0688, "Alienware 17 2015", QUIRK_ALIENWARE),
+@@ -1316,6 +1315,19 @@ static const struct snd_pci_quirk ca0132_quirks[] = {
+ {}
+ };
+
++static const struct hda_model_fixup ca0132_quirk_models[] = {
++ { .id = QUIRK_ALIENWARE, .name = "alienware" },
++ { .id = QUIRK_ALIENWARE_M17XR4, .name = "alienware-m17xr4" },
++ { .id = QUIRK_SBZ, .name = "sbz" },
++ { .id = QUIRK_ZXR, .name = "zxr" },
++ { .id = QUIRK_ZXR_DBPRO, .name = "zxr-dbpro" },
++ { .id = QUIRK_R3DI, .name = "r3di" },
++ { .id = QUIRK_R3D, .name = "r3d" },
++ { .id = QUIRK_AE5, .name = "ae5" },
++ { .id = QUIRK_AE7, .name = "ae7" },
++ {}
++};
++
+ /* Output selection quirk info structures. */
+ #define MAX_QUIRK_MMIO_GPIO_SET_VALS 3
+ #define MAX_QUIRK_SCP_SET_VALS 2
+@@ -9957,17 +9969,15 @@ static int ca0132_prepare_verbs(struct hda_codec *codec)
+ */
+ static void sbz_detect_quirk(struct hda_codec *codec)
+ {
+- struct ca0132_spec *spec = codec->spec;
+-
+ switch (codec->core.subsystem_id) {
+ case 0x11020033:
+- spec->quirk = QUIRK_ZXR;
++ codec->fixup_id = QUIRK_ZXR;
+ break;
+ case 0x1102003f:
+- spec->quirk = QUIRK_ZXR_DBPRO;
++ codec->fixup_id = QUIRK_ZXR_DBPRO;
+ break;
+ default:
+- spec->quirk = QUIRK_SBZ;
++ codec->fixup_id = QUIRK_SBZ;
+ break;
+ }
+ }
+@@ -9976,7 +9986,6 @@ static int patch_ca0132(struct hda_codec *codec)
+ {
+ struct ca0132_spec *spec;
+ int err;
+- const struct snd_pci_quirk *quirk;
+
+ codec_dbg(codec, "patch_ca0132\n");
+
+@@ -9987,11 +9996,7 @@ static int patch_ca0132(struct hda_codec *codec)
+ spec->codec = codec;
+
+ /* Detect codec quirk */
+- quirk = snd_pci_quirk_lookup(codec->bus->pci, ca0132_quirks);
+- if (quirk)
+- spec->quirk = quirk->value;
+- else
+- spec->quirk = QUIRK_NONE;
++ snd_hda_pick_fixup(codec, ca0132_quirk_models, ca0132_quirks, NULL);
+ if (ca0132_quirk(spec) == QUIRK_SBZ)
+ sbz_detect_quirk(codec);
+
+@@ -10068,7 +10073,7 @@ static int patch_ca0132(struct hda_codec *codec)
+ spec->mem_base = pci_iomap(codec->bus->pci, 2, 0xC20);
+ if (spec->mem_base == NULL) {
+ codec_warn(codec, "pci_iomap failed! Setting quirk to QUIRK_NONE.");
+- spec->quirk = QUIRK_NONE;
++ codec->fixup_id = QUIRK_NONE;
+ }
+ }
+ #endif
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 192fc75b51e6db..3ed82f98e2de9e 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7704,6 +7704,7 @@ enum {
+ ALC274_FIXUP_HP_MIC,
+ ALC274_FIXUP_HP_HEADSET_MIC,
+ ALC274_FIXUP_HP_ENVY_GPIO,
++ ALC274_FIXUP_ASUS_ZEN_AIO_27,
+ ALC256_FIXUP_ASUS_HPE,
+ ALC285_FIXUP_THINKPAD_NO_BASS_SPK_HEADSET_JACK,
+ ALC287_FIXUP_HP_GPIO_LED,
+@@ -9505,6 +9506,26 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc274_fixup_hp_envy_gpio,
+ },
++ [ALC274_FIXUP_ASUS_ZEN_AIO_27] = {
++ .type = HDA_FIXUP_VERBS,
++ .v.verbs = (const struct hda_verb[]) {
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x10 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xc420 },
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x40 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x8800 },
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x49 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x0249 },
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x4a },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x202b },
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x62 },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0xa007 },
++ { 0x20, AC_VERB_SET_COEF_INDEX, 0x6b },
++ { 0x20, AC_VERB_SET_PROC_COEF, 0x5060 },
++ {}
++ },
++ .chained = true,
++ .chain_id = ALC2XX_FIXUP_HEADSET_MIC,
++ },
+ [ALC256_FIXUP_ASUS_HPE] = {
+ .type = HDA_FIXUP_VERBS,
+ .v.verbs = (const struct hda_verb[]) {
+@@ -10615,6 +10636,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1f62, "ASUS UX7602ZM", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1f92, "ASUS ROG Flow X16", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
++ SND_PCI_QUIRK(0x1043, 0x31d0, "ASUS Zen AIO 27 Z272SD_A272SD", ALC274_FIXUP_ASUS_ZEN_AIO_27),
+ SND_PCI_QUIRK(0x1043, 0x3a20, "ASUS G614JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a30, "ASUS G814JVR/JIR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+@@ -10971,6 +10993,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0xf111, 0x0001, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0xf111, 0x0006, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0xf111, 0x0009, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0xf111, 0x000c, "Framework Laptop", ALC295_FIXUP_FRAMEWORK_LAPTOP_MIC_NO_PRESENCE),
+
+ #if 0
+ /* Below is a quirk table taken from the old code.
+@@ -11162,6 +11185,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ {.id = ALC255_FIXUP_ACER_HEADPHONE_AND_MIC, .name = "alc255-acer-headphone-and-mic"},
+ {.id = ALC285_FIXUP_HP_GPIO_AMP_INIT, .name = "alc285-hp-amp-init"},
+ {.id = ALC236_FIXUP_LENOVO_INV_DMIC, .name = "alc236-fixup-lenovo-inv-mic"},
++ {.id = ALC2XX_FIXUP_HEADSET_MIC, .name = "alc2xx-fixup-headset-mic"},
+ {}
+ };
+ #define ALC225_STANDARD_PINS \
+diff --git a/sound/soc/generic/audio-graph-card2.c b/sound/soc/generic/audio-graph-card2.c
+index 93eee40cec760c..63837e25965956 100644
+--- a/sound/soc/generic/audio-graph-card2.c
++++ b/sound/soc/generic/audio-graph-card2.c
+@@ -779,7 +779,7 @@ static void graph_link_init(struct simple_util_priv *priv,
+ of_node_get(port_codec);
+ if (graph_lnk_is_multi(port_codec)) {
+ ep_codec = graph_get_next_multi_ep(&port_codec);
+- of_node_put(port_cpu);
++ of_node_put(port_codec);
+ port_codec = ep_to_port(ep_codec);
+ } else {
+ ep_codec = port_to_endpoint(port_codec);
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 0cbf1d4fbe6edd..6049d957694ca6 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -60,6 +60,8 @@ static u64 parse_audio_format_i_type(struct snd_usb_audio *chip,
+ pcm_formats |= SNDRV_PCM_FMTBIT_SPECIAL;
+ /* flag potentially raw DSD capable altsettings */
+ fp->dsd_raw = true;
++ /* clear special format bit to avoid "unsupported format" msg below */
++ format &= ~UAC2_FORMAT_TYPE_I_RAW_DATA;
+ }
+
+ format <<= 1;
+@@ -71,8 +73,11 @@ static u64 parse_audio_format_i_type(struct snd_usb_audio *chip,
+ sample_width = as->bBitResolution;
+ sample_bytes = as->bSubslotSize;
+
+- if (format & UAC3_FORMAT_TYPE_I_RAW_DATA)
++ if (format & UAC3_FORMAT_TYPE_I_RAW_DATA) {
+ pcm_formats |= SNDRV_PCM_FMTBIT_SPECIAL;
++ /* clear special format bit to avoid "unsupported format" msg below */
++ format &= ~UAC3_FORMAT_TYPE_I_RAW_DATA;
++ }
+
+ format <<= 1;
+ break;
+diff --git a/sound/usb/mixer_us16x08.c b/sound/usb/mixer_us16x08.c
+index 6eb7d93b358d99..20ac32635f1f50 100644
+--- a/sound/usb/mixer_us16x08.c
++++ b/sound/usb/mixer_us16x08.c
+@@ -687,7 +687,7 @@ static int snd_us16x08_meter_get(struct snd_kcontrol *kcontrol,
+ struct usb_mixer_elem_info *elem = kcontrol->private_data;
+ struct snd_usb_audio *chip = elem->head.mixer->chip;
+ struct snd_us16x08_meter_store *store = elem->private_data;
+- u8 meter_urb[64];
++ u8 meter_urb[64] = {0};
+
+ switch (kcontrol->private_value) {
+ case 0: {
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index a0767de7f1b7ed..8ba0aff8be2ec2 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2325,6 +2325,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_DSD_RAW),
+ DEVICE_FLG(0x2522, 0x0007, /* LH Labs Geek Out HD Audio 1V5 */
+ QUIRK_FLAG_SET_IFACE_FIRST),
++ DEVICE_FLG(0x262a, 0x9302, /* ddHiFi TC44C */
++ QUIRK_FLAG_DSD_RAW),
+ DEVICE_FLG(0x2708, 0x0002, /* Audient iD14 */
+ QUIRK_FLAG_IGNORE_CTL_ERROR),
+ DEVICE_FLG(0x2912, 0x30c8, /* Audioengine D1 */
+diff --git a/tools/sched_ext/scx_central.c b/tools/sched_ext/scx_central.c
+index 21deea320bd785..e938156ed0a0d0 100644
+--- a/tools/sched_ext/scx_central.c
++++ b/tools/sched_ext/scx_central.c
+@@ -97,7 +97,7 @@ int main(int argc, char **argv)
+ SCX_BUG_ON(!cpuset, "Failed to allocate cpuset");
+ CPU_ZERO(cpuset);
+ CPU_SET(skel->rodata->central_cpu, cpuset);
+- SCX_BUG_ON(sched_setaffinity(0, sizeof(cpuset), cpuset),
++ SCX_BUG_ON(sched_setaffinity(0, sizeof(*cpuset), cpuset),
+ "Failed to affinitize to central CPU %d (max %d)",
+ skel->rodata->central_cpu, skel->rodata->nr_cpu_ids - 1);
+ CPU_FREE(cpuset);
+diff --git a/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c b/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c
+index 8a0632c37839a3..79f5087dade224 100644
+--- a/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c
++++ b/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c
+@@ -10,6 +10,8 @@ int subprog(struct __sk_buff *skb)
+ int ret = 1;
+
+ __sink(ret);
++ /* let verifier know that 'subprog_tc' can change pointers to skb->data */
++ bpf_skb_change_proto(skb, 0, 0);
+ return ret;
+ }
+
+diff --git a/tools/testing/selftests/net/forwarding/local_termination.sh b/tools/testing/selftests/net/forwarding/local_termination.sh
+index c35548767756d0..ecd34f364125cb 100755
+--- a/tools/testing/selftests/net/forwarding/local_termination.sh
++++ b/tools/testing/selftests/net/forwarding/local_termination.sh
+@@ -7,7 +7,6 @@ ALL_TESTS="standalone vlan_unaware_bridge vlan_aware_bridge test_vlan \
+ NUM_NETIFS=2
+ PING_COUNT=1
+ REQUIRE_MTOOLS=yes
+-REQUIRE_MZ=no
+
+ source lib.sh
+
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-01-17 13:18 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-01-17 13:18 UTC (permalink / raw
To: gentoo-commits
commit: bbd5b42d3ff847b17f85f7ce29fa19f28f88b798
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Jan 13 17:18:55 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Jan 13 17:18:55 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bbd5b42d
BMQ(BitMap Queue) Scheduler r1 version bump
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 2 +-
...=> 5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch | 252 +++++++++++++++------
2 files changed, 184 insertions(+), 70 deletions(-)
diff --git a/0000_README b/0000_README
index 29d9187b..06b9cb3f 100644
--- a/0000_README
+++ b/0000_README
@@ -127,7 +127,7 @@ Patch: 5010_enable-cpu-optimizations-universal.patch
From: https://github.com/graysky2/kernel_compiler_patch
Desc: Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
-Patch: 5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch
+Patch: 5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch
From: https://gitlab.com/alfredchen/projectc
Desc: BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.
diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch b/5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch
similarity index 98%
rename from 5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch
rename to 5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch
index 9eb3139f..532813fd 100644
--- a/5020_BMQ-and-PDS-io-scheduler-v6.12-r0.patch
+++ b/5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch
@@ -158,7 +158,7 @@ index 8874f681b056..59eb72bf7d5f 100644
[RLIMIT_RTTIME] = { RLIM_INFINITY, RLIM_INFINITY }, \
}
diff --git a/include/linux/sched.h b/include/linux/sched.h
-index bb343136ddd0..212d9204e9aa 100644
+index bb343136ddd0..6adfea989b7b 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -804,9 +804,13 @@ struct task_struct {
@@ -212,7 +212,34 @@ index bb343136ddd0..212d9204e9aa 100644
#ifdef CONFIG_CGROUP_SCHED
struct task_group *sched_task_group;
-@@ -1609,6 +1628,15 @@ struct task_struct {
+@@ -878,11 +897,15 @@ struct task_struct {
+ const cpumask_t *cpus_ptr;
+ cpumask_t *user_cpus_ptr;
+ cpumask_t cpus_mask;
++#ifndef CONFIG_SCHED_ALT
+ void *migration_pending;
++#endif
+ #ifdef CONFIG_SMP
+ unsigned short migration_disabled;
+ #endif
++#ifndef CONFIG_SCHED_ALT
+ unsigned short migration_flags;
++#endif
+
+ #ifdef CONFIG_PREEMPT_RCU
+ int rcu_read_lock_nesting;
+@@ -914,8 +937,10 @@ struct task_struct {
+
+ struct list_head tasks;
+ #ifdef CONFIG_SMP
++#ifndef CONFIG_SCHED_ALT
+ struct plist_node pushable_tasks;
+ struct rb_node pushable_dl_tasks;
++#endif
+ #endif
+
+ struct mm_struct *mm;
+@@ -1609,6 +1634,15 @@ struct task_struct {
*/
};
@@ -228,7 +255,7 @@ index bb343136ddd0..212d9204e9aa 100644
#define TASK_REPORT_IDLE (TASK_REPORT + 1)
#define TASK_REPORT_MAX (TASK_REPORT_IDLE << 1)
-@@ -2135,7 +2163,11 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
+@@ -2135,7 +2169,11 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
static inline bool task_is_runnable(struct task_struct *p)
{
@@ -341,7 +368,7 @@ index 4237daa5ac7a..3cebd93c49c8 100644
#else
static inline void rebuild_sched_domains_energy(void)
diff --git a/init/Kconfig b/init/Kconfig
-index c521e1421ad4..131a599fcde2 100644
+index c521e1421ad4..4a397b48a453 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -652,6 +652,7 @@ config TASK_IO_ACCOUNTING
@@ -352,15 +379,7 @@ index c521e1421ad4..131a599fcde2 100644
select KERNFS
help
Collect metrics that indicate how overcommitted the CPU, memory,
-@@ -817,6 +818,7 @@ menu "Scheduler features"
- config UCLAMP_TASK
- bool "Enable utilization clamping for RT/FAIR tasks"
- depends on CPU_FREQ_GOV_SCHEDUTIL
-+ depends on !SCHED_ALT
- help
- This feature enables the scheduler to track the clamped utilization
- of each CPU based on RUNNABLE tasks scheduled on that CPU.
-@@ -863,6 +865,35 @@ config UCLAMP_BUCKETS_COUNT
+@@ -863,6 +864,35 @@ config UCLAMP_BUCKETS_COUNT
If in doubt, use the default value.
@@ -396,7 +415,7 @@ index c521e1421ad4..131a599fcde2 100644
endmenu
#
-@@ -928,6 +959,7 @@ config NUMA_BALANCING
+@@ -928,6 +958,7 @@ config NUMA_BALANCING
depends on ARCH_SUPPORTS_NUMA_BALANCING
depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
@@ -404,23 +423,7 @@ index c521e1421ad4..131a599fcde2 100644
help
This option adds support for automatic NUMA aware memory/task placement.
The mechanism is quite primitive and is based on migrating memory when
-@@ -1036,6 +1068,7 @@ menuconfig CGROUP_SCHED
- tasks.
-
- if CGROUP_SCHED
-+if !SCHED_ALT
- config GROUP_SCHED_WEIGHT
- def_bool n
-
-@@ -1073,6 +1106,7 @@ config EXT_GROUP_SCHED
- select GROUP_SCHED_WEIGHT
- default y
-
-+endif #!SCHED_ALT
- endif #CGROUP_SCHED
-
- config SCHED_MM_CID
-@@ -1334,6 +1368,7 @@ config CHECKPOINT_RESTORE
+@@ -1334,6 +1365,7 @@ config CHECKPOINT_RESTORE
config SCHED_AUTOGROUP
bool "Automatic process group scheduling"
@@ -429,7 +432,7 @@ index c521e1421ad4..131a599fcde2 100644
select CGROUP_SCHED
select FAIR_GROUP_SCHED
diff --git a/init/init_task.c b/init/init_task.c
-index 136a8231355a..03770079619a 100644
+index 136a8231355a..12c01ab8e718 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -71,9 +71,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
@@ -466,14 +469,20 @@ index 136a8231355a..03770079619a 100644
.se = {
.group_node = LIST_HEAD_INIT(init_task.se.group_node),
},
-@@ -93,6 +110,7 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
+@@ -93,10 +110,13 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
.run_list = LIST_HEAD_INIT(init_task.rt.run_list),
.time_slice = RR_TIMESLICE,
},
+#endif
.tasks = LIST_HEAD_INIT(init_task.tasks),
++#ifndef CONFIG_SCHED_ALT
#ifdef CONFIG_SMP
.pushable_tasks = PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
+ #endif
++#endif
+ #ifdef CONFIG_CGROUP_SCHED
+ .sched_task_group = &root_task_group,
+ #endif
diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
index fe782cd77388..d27d2154d71a 100644
--- a/kernel/Kconfig.preempt
@@ -700,10 +709,10 @@ index 976092b7bd45..31d587c16ec1 100644
obj-y += build_utility.o
diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
new file mode 100644
-index 000000000000..c59691742340
+index 000000000000..0a08bc0176ac
--- /dev/null
+++ b/kernel/sched/alt_core.c
-@@ -0,0 +1,7458 @@
+@@ -0,0 +1,7515 @@
+/*
+ * kernel/sched/alt_core.c
+ *
@@ -782,7 +791,7 @@ index 000000000000..c59691742340
+#define sched_feat(x) (0)
+#endif /* CONFIG_SCHED_DEBUG */
+
-+#define ALT_SCHED_VERSION "v6.12-r0"
++#define ALT_SCHED_VERSION "v6.12-r1"
+
+#define STOP_PRIO (MAX_RT_PRIO - 1)
+
@@ -2144,8 +2153,6 @@ index 000000000000..c59691742340
+ __set_task_cpu(p, new_cpu);
+}
+
-+#define MDF_FORCE_ENABLED 0x80
-+
+static void
+__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
+{
@@ -2186,8 +2193,6 @@ index 000000000000..c59691742340
+ if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
+ cpu_rq(cpu)->nr_pinned++;
+ p->migration_disabled = 1;
-+ p->migration_flags &= ~MDF_FORCE_ENABLED;
-+
+ /*
+ * Violates locking rules! see comment in __do_set_cpus_ptr().
+ */
@@ -2237,6 +2242,15 @@ index 000000000000..c59691742340
+}
+EXPORT_SYMBOL_GPL(migrate_enable);
+
++static void __migrate_force_enable(struct task_struct *p, struct rq *rq)
++{
++ if (likely(p->cpus_ptr != &p->cpus_mask))
++ __do_set_cpus_ptr(p, &p->cpus_mask);
++ p->migration_disabled = 0;
++ /* When p is migrate_disabled, rq->lock should be held */
++ rq->nr_pinned--;
++}
++
+static inline bool rq_has_pinned_tasks(struct rq *rq)
+{
+ return rq->nr_pinned;
@@ -2417,6 +2431,9 @@ index 000000000000..c59691742340
+
+ __do_set_cpus_allowed(p, &ac);
+
++ if (is_migration_disabled(p) && !cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
++ __migrate_force_enable(p, task_rq(p));
++
+ /*
+ * Because this is called with p->pi_lock held, it is not possible
+ * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
@@ -2712,14 +2729,8 @@ index 000000000000..c59691742340
+{
+ /* Can the task run on the task's current CPU? If so, we're done */
+ if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
-+ if (p->migration_disabled) {
-+ if (likely(p->cpus_ptr != &p->cpus_mask))
-+ __do_set_cpus_ptr(p, &p->cpus_mask);
-+ p->migration_disabled = 0;
-+ p->migration_flags |= MDF_FORCE_ENABLED;
-+ /* When p is migrate_disabled, rq->lock should be held */
-+ rq->nr_pinned--;
-+ }
++ if (is_migration_disabled(p))
++ __migrate_force_enable(p, rq);
+
+ if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
+ struct migration_arg arg = { p, dest_cpu };
@@ -7178,9 +7189,6 @@ index 000000000000..c59691742340
+ if (preempt_count() > 0)
+ return;
+
-+ if (current->migration_flags & MDF_FORCE_ENABLED)
-+ return;
-+
+ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
+ return;
+ prev_jiffy = jiffies;
@@ -7374,6 +7382,43 @@ index 000000000000..c59691742340
+{
+}
+
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++static int sched_group_set_shares(struct task_group *tg, unsigned long shares)
++{
++ return 0;
++}
++
++static int sched_group_set_idle(struct task_group *tg, long idle)
++{
++ return 0;
++}
++
++static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
++ struct cftype *cftype, u64 shareval)
++{
++ return sched_group_set_shares(css_tg(css), shareval);
++}
++
++static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft)
++{
++ return 0;
++}
++
++static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
++ struct cftype *cft, s64 idle)
++{
++ return sched_group_set_idle(css_tg(css), idle);
++}
++#endif
++
++#ifdef CONFIG_CFS_BANDWIDTH
+static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
+ struct cftype *cft)
+{
@@ -7419,7 +7464,9 @@ index 000000000000..c59691742340
+{
+ return 0;
+}
++#endif
+
++#ifdef CONFIG_RT_GROUP_SCHED
+static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
+ struct cftype *cft, s64 val)
+{
@@ -7443,7 +7490,9 @@ index 000000000000..c59691742340
+{
+ return 0;
+}
++#endif
+
++#ifdef CONFIG_UCLAMP_TASK_GROUP
+static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
+{
+ return 0;
@@ -7467,8 +7516,22 @@ index 000000000000..c59691742340
+{
+ return nbytes;
+}
++#endif
+
+static struct cftype cpu_legacy_files[] = {
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
++ {
++ .name = "shares",
++ .read_u64 = cpu_shares_read_u64,
++ .write_u64 = cpu_shares_write_u64,
++ },
++ {
++ .name = "idle",
++ .read_s64 = cpu_idle_read_s64,
++ .write_s64 = cpu_idle_write_s64,
++ },
++#endif
++#ifdef CONFIG_CFS_BANDWIDTH
+ {
+ .name = "cfs_quota_us",
+ .read_s64 = cpu_cfs_quota_read_s64,
@@ -7492,6 +7555,8 @@ index 000000000000..c59691742340
+ .name = "stat.local",
+ .seq_show = cpu_cfs_local_stat_show,
+ },
++#endif
++#ifdef CONFIG_RT_GROUP_SCHED
+ {
+ .name = "rt_runtime_us",
+ .read_s64 = cpu_rt_runtime_read,
@@ -7502,6 +7567,8 @@ index 000000000000..c59691742340
+ .read_u64 = cpu_rt_period_read_uint,
+ .write_u64 = cpu_rt_period_write_uint,
+ },
++#endif
++#ifdef CONFIG_UCLAMP_TASK_GROUP
+ {
+ .name = "uclamp.min",
+ .flags = CFTYPE_NOT_ON_ROOT,
@@ -7514,9 +7581,11 @@ index 000000000000..c59691742340
+ .seq_show = cpu_uclamp_max_show,
+ .write = cpu_uclamp_max_write,
+ },
++#endif
+ { } /* Terminate */
+};
+
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
+static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
+ struct cftype *cft)
+{
@@ -7540,19 +7609,9 @@ index 000000000000..c59691742340
+{
+ return 0;
+}
++#endif
+
-+static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
-+ struct cftype *cft)
-+{
-+ return 0;
-+}
-+
-+static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
-+ struct cftype *cft, s64 idle)
-+{
-+ return 0;
-+}
-+
++#ifdef CONFIG_CFS_BANDWIDTH
+static int cpu_max_show(struct seq_file *sf, void *v)
+{
+ return 0;
@@ -7563,8 +7622,10 @@ index 000000000000..c59691742340
+{
+ return nbytes;
+}
++#endif
+
+static struct cftype cpu_files[] = {
++#ifdef CONFIG_GROUP_SCHED_WEIGHT
+ {
+ .name = "weight",
+ .flags = CFTYPE_NOT_ON_ROOT,
@@ -7583,6 +7644,8 @@ index 000000000000..c59691742340
+ .read_s64 = cpu_idle_read_s64,
+ .write_s64 = cpu_idle_write_s64,
+ },
++#endif
++#ifdef CONFIG_CFS_BANDWIDTH
+ {
+ .name = "max",
+ .flags = CFTYPE_NOT_ON_ROOT,
@@ -7595,6 +7658,8 @@ index 000000000000..c59691742340
+ .read_u64 = cpu_cfs_burst_read_u64,
+ .write_u64 = cpu_cfs_burst_write_u64,
+ },
++#endif
++#ifdef CONFIG_UCLAMP_TASK_GROUP
+ {
+ .name = "uclamp.min",
+ .flags = CFTYPE_NOT_ON_ROOT,
@@ -7607,6 +7672,7 @@ index 000000000000..c59691742340
+ .seq_show = cpu_uclamp_max_show,
+ .write = cpu_uclamp_max_write,
+ },
++#endif
+ { } /* terminate */
+};
+
@@ -8421,10 +8487,10 @@ index 000000000000..1dbd7eb6a434
+{}
diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
new file mode 100644
-index 000000000000..09c9e9f80bf4
+index 000000000000..7fb3433c5c41
--- /dev/null
+++ b/kernel/sched/alt_sched.h
-@@ -0,0 +1,971 @@
+@@ -0,0 +1,997 @@
+#ifndef _KERNEL_SCHED_ALT_SCHED_H
+#define _KERNEL_SCHED_ALT_SCHED_H
+
@@ -9120,15 +9186,41 @@ index 000000000000..09c9e9f80bf4
+
+static inline void nohz_run_idle_balance(int cpu) { }
+
-+static inline
-+unsigned long uclamp_rq_util_with(struct rq *rq, unsigned long util,
-+ struct task_struct *p)
++static inline unsigned long
++uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id)
+{
-+ return util;
++ if (clamp_id == UCLAMP_MIN)
++ return 0;
++
++ return SCHED_CAPACITY_SCALE;
+}
+
+static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
+
++static inline bool uclamp_is_used(void)
++{
++ return false;
++}
++
++static inline unsigned long
++uclamp_rq_get(struct rq *rq, enum uclamp_id clamp_id)
++{
++ if (clamp_id == UCLAMP_MIN)
++ return 0;
++
++ return SCHED_CAPACITY_SCALE;
++}
++
++static inline void
++uclamp_rq_set(struct rq *rq, enum uclamp_id clamp_id, unsigned int value)
++{
++}
++
++static inline bool uclamp_rq_is_idle(struct rq *rq)
++{
++ return false;
++}
++
+#ifdef CONFIG_SCHED_MM_CID
+
+#define SCHED_MM_CID_PERIOD_NS (100ULL * 1000000) /* 100ms */
@@ -11109,6 +11201,28 @@ index 6bcee4704059..cf88205fd4a2 100644
return false;
}
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index a50ed23bee77..be0477666049 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1665,6 +1665,9 @@ static void osnoise_sleep(bool skip_period)
+ */
+ static inline int osnoise_migration_pending(void)
+ {
++#ifdef CONFIG_SCHED_ALT
++ return 0;
++#else
+ if (!current->migration_pending)
+ return 0;
+
+@@ -1686,6 +1689,7 @@ static inline int osnoise_migration_pending(void)
+ mutex_unlock(&interface_lock);
+
+ return 1;
++#endif
+ }
+
+ /*
diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
index 1469dd8075fa..803527a0e48a 100644
--- a/kernel/trace/trace_selftest.c
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-01-17 13:18 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-01-17 13:18 UTC (permalink / raw
To: gentoo-commits
commit: 0ac150344b6ac5bfe1c306054599ae5cb28d7d74
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Jan 17 13:18:02 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Jan 17 13:18:02 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0ac15034
Linux patch 6.12.10
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1009_linux-6.12.10.patch | 6476 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 6480 insertions(+)
diff --git a/0000_README b/0000_README
index 06b9cb3f..20574d29 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch: 1008_linux-6.12.9.patch
From: https://www.kernel.org
Desc: Linux 6.12.9
+Patch: 1009_linux-6.12.10.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.10
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1009_linux-6.12.10.patch b/1009_linux-6.12.10.patch
new file mode 100644
index 00000000..af53d00b
--- /dev/null
+++ b/1009_linux-6.12.10.patch
@@ -0,0 +1,6476 @@
+diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
+index 6d02168d78bed6..2cb58daf3089ba 100644
+--- a/Documentation/admin-guide/cgroup-v2.rst
++++ b/Documentation/admin-guide/cgroup-v2.rst
+@@ -2954,7 +2954,7 @@ following two functions.
+ a queue (device) has been associated with the bio and
+ before submission.
+
+- wbc_account_cgroup_owner(@wbc, @page, @bytes)
++ wbc_account_cgroup_owner(@wbc, @folio, @bytes)
+ Should be called for each data segment being written out.
+ While this function doesn't care exactly when it's called
+ during the writeback session, it's the easiest and most
+diff --git a/Makefile b/Makefile
+index 80151f53d8ee0f..233e9e88e402e7 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/boot/dts/nxp/imx/imxrt1050.dtsi b/arch/arm/boot/dts/nxp/imx/imxrt1050.dtsi
+index dd714d235d5f6a..b0bad0d1ba36f4 100644
+--- a/arch/arm/boot/dts/nxp/imx/imxrt1050.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imxrt1050.dtsi
+@@ -87,7 +87,7 @@ usdhc1: mmc@402c0000 {
+ reg = <0x402c0000 0x4000>;
+ interrupts = <110>;
+ clocks = <&clks IMXRT1050_CLK_IPG_PDOF>,
+- <&clks IMXRT1050_CLK_OSC>,
++ <&clks IMXRT1050_CLK_AHB_PODF>,
+ <&clks IMXRT1050_CLK_USDHC1>;
+ clock-names = "ipg", "ahb", "per";
+ bus-width = <4>;
+diff --git a/arch/arm64/boot/dts/freescale/imx95.dtsi b/arch/arm64/boot/dts/freescale/imx95.dtsi
+index 03661e76550f4d..40cbb071f265cf 100644
+--- a/arch/arm64/boot/dts/freescale/imx95.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx95.dtsi
+@@ -1609,7 +1609,7 @@ pcie1_ep: pcie-ep@4c380000 {
+
+ netcmix_blk_ctrl: syscon@4c810000 {
+ compatible = "nxp,imx95-netcmix-blk-ctrl", "syscon";
+- reg = <0x0 0x4c810000 0x0 0x10000>;
++ reg = <0x0 0x4c810000 0x0 0x8>;
+ #clock-cells = <1>;
+ clocks = <&scmi_clk IMX95_CLK_BUSNETCMIX>;
+ assigned-clocks = <&scmi_clk IMX95_CLK_BUSNETCMIX>;
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p.dtsi b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+index e8dbc8d820a64f..8a21448c0fa845 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p.dtsi
+@@ -1940,6 +1940,7 @@ tpdm@4003000 {
+
+ qcom,cmb-element-bits = <32>;
+ qcom,cmb-msrs-num = <32>;
++ status = "disabled";
+
+ out-ports {
+ port {
+@@ -5587,7 +5588,7 @@ pcie0_ep: pcie-ep@1c00000 {
+ <0x0 0x40000000 0x0 0xf20>,
+ <0x0 0x40000f20 0x0 0xa8>,
+ <0x0 0x40001000 0x0 0x4000>,
+- <0x0 0x40200000 0x0 0x100000>,
++ <0x0 0x40200000 0x0 0x1fe00000>,
+ <0x0 0x01c03000 0x0 0x1000>,
+ <0x0 0x40005000 0x0 0x2000>;
+ reg-names = "parf", "dbi", "elbi", "atu", "addr_space",
+@@ -5744,7 +5745,7 @@ pcie1_ep: pcie-ep@1c10000 {
+ <0x0 0x60000000 0x0 0xf20>,
+ <0x0 0x60000f20 0x0 0xa8>,
+ <0x0 0x60001000 0x0 0x4000>,
+- <0x0 0x60200000 0x0 0x100000>,
++ <0x0 0x60200000 0x0 0x1fe00000>,
+ <0x0 0x01c13000 0x0 0x1000>,
+ <0x0 0x60005000 0x0 0x2000>;
+ reg-names = "parf", "dbi", "elbi", "atu", "addr_space",
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index 914f9cb3aca215..a97ceff939d882 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -2925,7 +2925,7 @@ pcie6a: pci@1bf8000 {
+ #address-cells = <3>;
+ #size-cells = <2>;
+ ranges = <0x01000000 0x0 0x00000000 0x0 0x70200000 0x0 0x100000>,
+- <0x02000000 0x0 0x70300000 0x0 0x70300000 0x0 0x1d00000>;
++ <0x02000000 0x0 0x70300000 0x0 0x70300000 0x0 0x3d00000>;
+ bus-range = <0x00 0xff>;
+
+ dma-coherent;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328.dtsi b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+index c01a4cad48f30e..d16a13d6442f88 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3328.dtsi
+@@ -333,6 +333,7 @@ power: power-controller {
+
+ power-domain@RK3328_PD_HEVC {
+ reg = <RK3328_PD_HEVC>;
++ clocks = <&cru SCLK_VENC_CORE>;
+ #power-domain-cells = <0>;
+ };
+ power-domain@RK3328_PD_VIDEO {
+diff --git a/arch/riscv/include/asm/page.h b/arch/riscv/include/asm/page.h
+index 32d308a3355fd4..febf820d505837 100644
+--- a/arch/riscv/include/asm/page.h
++++ b/arch/riscv/include/asm/page.h
+@@ -124,6 +124,7 @@ struct kernel_mapping {
+
+ extern struct kernel_mapping kernel_map;
+ extern phys_addr_t phys_ram_base;
++extern unsigned long vmemmap_start_pfn;
+
+ #define is_kernel_mapping(x) \
+ ((x) >= kernel_map.virt_addr && (x) < (kernel_map.virt_addr + kernel_map.size))
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index e79f15293492d5..c0866ada5bbc49 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -87,7 +87,7 @@
+ * Define vmemmap for pfn_to_page & page_to_pfn calls. Needed if kernel
+ * is configured with CONFIG_SPARSEMEM_VMEMMAP enabled.
+ */
+-#define vmemmap ((struct page *)VMEMMAP_START - (phys_ram_base >> PAGE_SHIFT))
++#define vmemmap ((struct page *)VMEMMAP_START - vmemmap_start_pfn)
+
+ #define PCI_IO_SIZE SZ_16M
+ #define PCI_IO_END VMEMMAP_START
+diff --git a/arch/riscv/include/asm/sbi.h b/arch/riscv/include/asm/sbi.h
+index 98f631b051dba8..9be38b05f4adff 100644
+--- a/arch/riscv/include/asm/sbi.h
++++ b/arch/riscv/include/asm/sbi.h
+@@ -158,6 +158,7 @@ struct riscv_pmu_snapshot_data {
+ };
+
+ #define RISCV_PMU_RAW_EVENT_MASK GENMASK_ULL(47, 0)
++#define RISCV_PMU_PLAT_FW_EVENT_MASK GENMASK_ULL(61, 0)
+ #define RISCV_PMU_RAW_EVENT_IDX 0x20000
+ #define RISCV_PLAT_FW_EVENT 0xFFFF
+
+diff --git a/arch/riscv/kernel/entry.S b/arch/riscv/kernel/entry.S
+index c200d329d4bdbe..33a5a9f2a0d4e1 100644
+--- a/arch/riscv/kernel/entry.S
++++ b/arch/riscv/kernel/entry.S
+@@ -23,21 +23,21 @@
+ REG_S a0, TASK_TI_A0(tp)
+ csrr a0, CSR_CAUSE
+ /* Exclude IRQs */
+- blt a0, zero, _new_vmalloc_restore_context_a0
++ blt a0, zero, .Lnew_vmalloc_restore_context_a0
+
+ REG_S a1, TASK_TI_A1(tp)
+ /* Only check new_vmalloc if we are in page/protection fault */
+ li a1, EXC_LOAD_PAGE_FAULT
+- beq a0, a1, _new_vmalloc_kernel_address
++ beq a0, a1, .Lnew_vmalloc_kernel_address
+ li a1, EXC_STORE_PAGE_FAULT
+- beq a0, a1, _new_vmalloc_kernel_address
++ beq a0, a1, .Lnew_vmalloc_kernel_address
+ li a1, EXC_INST_PAGE_FAULT
+- bne a0, a1, _new_vmalloc_restore_context_a1
++ bne a0, a1, .Lnew_vmalloc_restore_context_a1
+
+-_new_vmalloc_kernel_address:
++.Lnew_vmalloc_kernel_address:
+ /* Is it a kernel address? */
+ csrr a0, CSR_TVAL
+- bge a0, zero, _new_vmalloc_restore_context_a1
++ bge a0, zero, .Lnew_vmalloc_restore_context_a1
+
+ /* Check if a new vmalloc mapping appeared that could explain the trap */
+ REG_S a2, TASK_TI_A2(tp)
+@@ -69,7 +69,7 @@ _new_vmalloc_kernel_address:
+ /* Check the value of new_vmalloc for this cpu */
+ REG_L a2, 0(a0)
+ and a2, a2, a1
+- beq a2, zero, _new_vmalloc_restore_context
++ beq a2, zero, .Lnew_vmalloc_restore_context
+
+ /* Atomically reset the current cpu bit in new_vmalloc */
+ amoxor.d a0, a1, (a0)
+@@ -83,11 +83,11 @@ _new_vmalloc_kernel_address:
+ csrw CSR_SCRATCH, x0
+ sret
+
+-_new_vmalloc_restore_context:
++.Lnew_vmalloc_restore_context:
+ REG_L a2, TASK_TI_A2(tp)
+-_new_vmalloc_restore_context_a1:
++.Lnew_vmalloc_restore_context_a1:
+ REG_L a1, TASK_TI_A1(tp)
+-_new_vmalloc_restore_context_a0:
++.Lnew_vmalloc_restore_context_a0:
+ REG_L a0, TASK_TI_A0(tp)
+ .endm
+
+@@ -278,6 +278,7 @@ SYM_CODE_START_NOALIGN(ret_from_exception)
+ #else
+ sret
+ #endif
++SYM_INNER_LABEL(ret_from_exception_end, SYM_L_GLOBAL)
+ SYM_CODE_END(ret_from_exception)
+ ASM_NOKPROBE(ret_from_exception)
+
+diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
+index 1cd461f3d8726d..47d0ebeec93c23 100644
+--- a/arch/riscv/kernel/module.c
++++ b/arch/riscv/kernel/module.c
+@@ -23,7 +23,7 @@ struct used_bucket {
+
+ struct relocation_head {
+ struct hlist_node node;
+- struct list_head *rel_entry;
++ struct list_head rel_entry;
+ void *location;
+ };
+
+@@ -634,7 +634,7 @@ process_accumulated_relocations(struct module *me,
+ location = rel_head_iter->location;
+ list_for_each_entry_safe(rel_entry_iter,
+ rel_entry_iter_tmp,
+- rel_head_iter->rel_entry,
++ &rel_head_iter->rel_entry,
+ head) {
+ curr_type = rel_entry_iter->type;
+ reloc_handlers[curr_type].reloc_handler(
+@@ -704,16 +704,7 @@ static int add_relocation_to_accumulate(struct module *me, int type,
+ return -ENOMEM;
+ }
+
+- rel_head->rel_entry =
+- kmalloc(sizeof(struct list_head), GFP_KERNEL);
+-
+- if (!rel_head->rel_entry) {
+- kfree(entry);
+- kfree(rel_head);
+- return -ENOMEM;
+- }
+-
+- INIT_LIST_HEAD(rel_head->rel_entry);
++ INIT_LIST_HEAD(&rel_head->rel_entry);
+ rel_head->location = location;
+ INIT_HLIST_NODE(&rel_head->node);
+ if (!current_head->first) {
+@@ -722,7 +713,6 @@ static int add_relocation_to_accumulate(struct module *me, int type,
+
+ if (!bucket) {
+ kfree(entry);
+- kfree(rel_head->rel_entry);
+ kfree(rel_head);
+ return -ENOMEM;
+ }
+@@ -735,7 +725,7 @@ static int add_relocation_to_accumulate(struct module *me, int type,
+ }
+
+ /* Add relocation to head of discovered rel_head */
+- list_add_tail(&entry->head, rel_head->rel_entry);
++ list_add_tail(&entry->head, &rel_head->rel_entry);
+
+ return 0;
+ }
+diff --git a/arch/riscv/kernel/probes/kprobes.c b/arch/riscv/kernel/probes/kprobes.c
+index 474a6521365783..d2dacea1aedd9e 100644
+--- a/arch/riscv/kernel/probes/kprobes.c
++++ b/arch/riscv/kernel/probes/kprobes.c
+@@ -30,7 +30,7 @@ static void __kprobes arch_prepare_ss_slot(struct kprobe *p)
+ p->ainsn.api.restore = (unsigned long)p->addr + len;
+
+ patch_text_nosync(p->ainsn.api.insn, &p->opcode, len);
+- patch_text_nosync(p->ainsn.api.insn + len, &insn, GET_INSN_LENGTH(insn));
++ patch_text_nosync((void *)p->ainsn.api.insn + len, &insn, GET_INSN_LENGTH(insn));
+ }
+
+ static void __kprobes arch_prepare_simulate(struct kprobe *p)
+diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
+index 153a2db4c5fa14..d4355c770c36ac 100644
+--- a/arch/riscv/kernel/stacktrace.c
++++ b/arch/riscv/kernel/stacktrace.c
+@@ -17,6 +17,7 @@
+ #ifdef CONFIG_FRAME_POINTER
+
+ extern asmlinkage void handle_exception(void);
++extern unsigned long ret_from_exception_end;
+
+ static inline int fp_is_valid(unsigned long fp, unsigned long sp)
+ {
+@@ -71,7 +72,8 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
+ fp = frame->fp;
+ pc = ftrace_graph_ret_addr(current, &graph_idx, frame->ra,
+ &frame->ra);
+- if (pc == (unsigned long)handle_exception) {
++ if (pc >= (unsigned long)handle_exception &&
++ pc < (unsigned long)&ret_from_exception_end) {
+ if (unlikely(!__kernel_text_address(pc) || !fn(arg, pc)))
+ break;
+
+diff --git a/arch/riscv/kernel/traps.c b/arch/riscv/kernel/traps.c
+index 51ebfd23e00764..8ff8e8b36524b7 100644
+--- a/arch/riscv/kernel/traps.c
++++ b/arch/riscv/kernel/traps.c
+@@ -35,7 +35,7 @@
+
+ int show_unhandled_signals = 1;
+
+-static DEFINE_SPINLOCK(die_lock);
++static DEFINE_RAW_SPINLOCK(die_lock);
+
+ static int copy_code(struct pt_regs *regs, u16 *val, const u16 *insns)
+ {
+@@ -81,7 +81,7 @@ void die(struct pt_regs *regs, const char *str)
+
+ oops_enter();
+
+- spin_lock_irqsave(&die_lock, flags);
++ raw_spin_lock_irqsave(&die_lock, flags);
+ console_verbose();
+ bust_spinlocks(1);
+
+@@ -100,7 +100,7 @@ void die(struct pt_regs *regs, const char *str)
+
+ bust_spinlocks(0);
+ add_taint(TAINT_DIE, LOCKDEP_NOW_UNRELIABLE);
+- spin_unlock_irqrestore(&die_lock, flags);
++ raw_spin_unlock_irqrestore(&die_lock, flags);
+ oops_exit();
+
+ if (in_interrupt())
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index fc53ce748c8049..8d167e09f1fea5 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -33,6 +33,7 @@
+ #include <asm/pgtable.h>
+ #include <asm/sections.h>
+ #include <asm/soc.h>
++#include <asm/sparsemem.h>
+ #include <asm/tlbflush.h>
+
+ #include "../kernel/head.h"
+@@ -62,6 +63,13 @@ EXPORT_SYMBOL(pgtable_l5_enabled);
+ phys_addr_t phys_ram_base __ro_after_init;
+ EXPORT_SYMBOL(phys_ram_base);
+
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++#define VMEMMAP_ADDR_ALIGN (1ULL << SECTION_SIZE_BITS)
++
++unsigned long vmemmap_start_pfn __ro_after_init;
++EXPORT_SYMBOL(vmemmap_start_pfn);
++#endif
++
+ unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)]
+ __page_aligned_bss;
+ EXPORT_SYMBOL(empty_zero_page);
+@@ -240,8 +248,12 @@ static void __init setup_bootmem(void)
+ * Make sure we align the start of the memory on a PMD boundary so that
+ * at worst, we map the linear mapping with PMD mappings.
+ */
+- if (!IS_ENABLED(CONFIG_XIP_KERNEL))
++ if (!IS_ENABLED(CONFIG_XIP_KERNEL)) {
+ phys_ram_base = memblock_start_of_DRAM() & PMD_MASK;
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++ vmemmap_start_pfn = round_down(phys_ram_base, VMEMMAP_ADDR_ALIGN) >> PAGE_SHIFT;
++#endif
++ }
+
+ /*
+ * In 64-bit, any use of __va/__pa before this point is wrong as we
+@@ -1101,6 +1113,9 @@ asmlinkage void __init setup_vm(uintptr_t dtb_pa)
+ kernel_map.xiprom_sz = (uintptr_t)(&_exiprom) - (uintptr_t)(&_xiprom);
+
+ phys_ram_base = CONFIG_PHYS_RAM_BASE;
++#ifdef CONFIG_SPARSEMEM_VMEMMAP
++ vmemmap_start_pfn = round_down(phys_ram_base, VMEMMAP_ADDR_ALIGN) >> PAGE_SHIFT;
++#endif
+ kernel_map.phys_addr = (uintptr_t)CONFIG_PHYS_RAM_BASE;
+ kernel_map.size = (uintptr_t)(&_end) - (uintptr_t)(&_start);
+
+diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c
+index 6bc1eb2a21bd92..887b0b8e21e364 100644
+--- a/arch/x86/kernel/fpu/regset.c
++++ b/arch/x86/kernel/fpu/regset.c
+@@ -190,7 +190,8 @@ int ssp_get(struct task_struct *target, const struct user_regset *regset,
+ struct fpu *fpu = &target->thread.fpu;
+ struct cet_user_state *cetregs;
+
+- if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK))
++ if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK) ||
++ !ssp_active(target, regset))
+ return -ENODEV;
+
+ sync_fpstate(fpu);
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 95dd7b79593565..cad16c163611b5 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -6844,16 +6844,24 @@ static struct bfq_queue *bfq_waker_bfqq(struct bfq_queue *bfqq)
+ if (new_bfqq == waker_bfqq) {
+ /*
+ * If waker_bfqq is in the merge chain, and current
+- * is the only procress.
++ * is the only process, waker_bfqq can be freed.
+ */
+ if (bfqq_process_refs(waker_bfqq) == 1)
+ return NULL;
+- break;
++
++ return waker_bfqq;
+ }
+
+ new_bfqq = new_bfqq->new_bfqq;
+ }
+
++ /*
++ * If waker_bfqq is not in the merge chain, and its process reference
++ * is 0, waker_bfqq can be freed.
++ */
++ if (bfqq_process_refs(waker_bfqq) == 0)
++ return NULL;
++
+ return waker_bfqq;
+ }
+
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 821867de43bea3..d27a3bf96f80d8 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -440,6 +440,13 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "S5602ZA"),
+ },
+ },
++ {
++ /* Asus Vivobook X1504VAP */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_BOARD_NAME, "X1504VAP"),
++ },
++ },
+ {
+ /* Asus Vivobook X1704VAP */
+ .matches = {
+@@ -646,6 +653,17 @@ static const struct dmi_system_id irq1_edge_low_force_override[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "GMxHGxx"),
+ },
+ },
++ {
++ /*
++ * TongFang GM5HG0A: in case of the SKIKK Vanaheim relabel, the
++ * board-name is changed, so check OEM strings instead. Note that
++ * OEM string matches are always exact matches.
++ * https://bugzilla.kernel.org/show_bug.cgi?id=219614
++ */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_OEM_STRING, "GM5HG0A"),
++ },
++ },
+ { }
+ };
+
+diff --git a/drivers/base/topology.c b/drivers/base/topology.c
+index 89f98be5c5b991..d293cbd253e4f9 100644
+--- a/drivers/base/topology.c
++++ b/drivers/base/topology.c
+@@ -27,9 +27,17 @@ static ssize_t name##_read(struct file *file, struct kobject *kobj, \
+ loff_t off, size_t count) \
+ { \
+ struct device *dev = kobj_to_dev(kobj); \
++ cpumask_var_t mask; \
++ ssize_t n; \
+ \
+- return cpumap_print_bitmask_to_buf(buf, topology_##mask(dev->id), \
+- off, count); \
++ if (!alloc_cpumask_var(&mask, GFP_KERNEL)) \
++ return -ENOMEM; \
++ \
++ cpumask_copy(mask, topology_##mask(dev->id)); \
++ n = cpumap_print_bitmask_to_buf(buf, mask, off, count); \
++ free_cpumask_var(mask); \
++ \
++ return n; \
+ } \
+ \
+ static ssize_t name##_list_read(struct file *file, struct kobject *kobj, \
+@@ -37,9 +45,17 @@ static ssize_t name##_list_read(struct file *file, struct kobject *kobj, \
+ loff_t off, size_t count) \
+ { \
+ struct device *dev = kobj_to_dev(kobj); \
++ cpumask_var_t mask; \
++ ssize_t n; \
++ \
++ if (!alloc_cpumask_var(&mask, GFP_KERNEL)) \
++ return -ENOMEM; \
++ \
++ cpumask_copy(mask, topology_##mask(dev->id)); \
++ n = cpumap_print_list_to_buf(buf, mask, off, count); \
++ free_cpumask_var(mask); \
+ \
+- return cpumap_print_list_to_buf(buf, topology_##mask(dev->id), \
+- off, count); \
++ return n; \
+ }
+
+ define_id_show_func(physical_package_id, "%d");
+diff --git a/drivers/bluetooth/btmtk.c b/drivers/bluetooth/btmtk.c
+index 85e99641eaae02..af487abe9932aa 100644
+--- a/drivers/bluetooth/btmtk.c
++++ b/drivers/bluetooth/btmtk.c
+@@ -1472,10 +1472,15 @@ EXPORT_SYMBOL_GPL(btmtk_usb_setup);
+
+ int btmtk_usb_shutdown(struct hci_dev *hdev)
+ {
++ struct btmtk_data *data = hci_get_priv(hdev);
+ struct btmtk_hci_wmt_params wmt_params;
+ u8 param = 0;
+ int err;
+
++ err = usb_autopm_get_interface(data->intf);
++ if (err < 0)
++ return err;
++
+ /* Disable the device */
+ wmt_params.op = BTMTK_WMT_FUNC_CTRL;
+ wmt_params.flag = 0;
+@@ -1486,9 +1491,11 @@ int btmtk_usb_shutdown(struct hci_dev *hdev)
+ err = btmtk_usb_hci_wmt_sync(hdev, &wmt_params);
+ if (err < 0) {
+ bt_dev_err(hdev, "Failed to send wmt func ctrl (%d)", err);
++ usb_autopm_put_interface(data->intf);
+ return err;
+ }
+
++ usb_autopm_put_interface(data->intf);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(btmtk_usb_shutdown);
+diff --git a/drivers/bluetooth/btnxpuart.c b/drivers/bluetooth/btnxpuart.c
+index 5ea0d23e88c02b..a028984f27829c 100644
+--- a/drivers/bluetooth/btnxpuart.c
++++ b/drivers/bluetooth/btnxpuart.c
+@@ -1336,6 +1336,7 @@ static void btnxpuart_tx_work(struct work_struct *work)
+
+ while ((skb = nxp_dequeue(nxpdev))) {
+ len = serdev_device_write_buf(serdev, skb->data, skb->len);
++ serdev_device_wait_until_sent(serdev, 0);
+ hdev->stat.byte_tx += len;
+
+ skb_pull(skb, len);
+diff --git a/drivers/cpuidle/cpuidle-riscv-sbi.c b/drivers/cpuidle/cpuidle-riscv-sbi.c
+index d228b4d18d5600..77a1fc668ae27b 100644
+--- a/drivers/cpuidle/cpuidle-riscv-sbi.c
++++ b/drivers/cpuidle/cpuidle-riscv-sbi.c
+@@ -500,12 +500,12 @@ static int sbi_cpuidle_probe(struct platform_device *pdev)
+ int cpu, ret;
+ struct cpuidle_driver *drv;
+ struct cpuidle_device *dev;
+- struct device_node *np, *pds_node;
++ struct device_node *pds_node;
+
+ /* Detect OSI support based on CPU DT nodes */
+ sbi_cpuidle_use_osi = true;
+ for_each_possible_cpu(cpu) {
+- np = of_cpu_device_node_get(cpu);
++ struct device_node *np __free(device_node) = of_cpu_device_node_get(cpu);
+ if (np &&
+ of_property_present(np, "power-domains") &&
+ of_property_present(np, "power-domain-names")) {
+diff --git a/drivers/gpio/gpio-loongson-64bit.c b/drivers/gpio/gpio-loongson-64bit.c
+index 6749d4dd6d6496..7f4d78fd800e7e 100644
+--- a/drivers/gpio/gpio-loongson-64bit.c
++++ b/drivers/gpio/gpio-loongson-64bit.c
+@@ -237,9 +237,9 @@ static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data1 = {
+ static const struct loongson_gpio_chip_data loongson_gpio_ls2k2000_data2 = {
+ .label = "ls2k2000_gpio",
+ .mode = BIT_CTRL_MODE,
+- .conf_offset = 0x84,
+- .in_offset = 0x88,
+- .out_offset = 0x80,
++ .conf_offset = 0x4,
++ .in_offset = 0x8,
++ .out_offset = 0x0,
+ };
+
+ static const struct loongson_gpio_chip_data loongson_gpio_ls3a5000_data = {
+diff --git a/drivers/gpio/gpio-virtuser.c b/drivers/gpio/gpio-virtuser.c
+index 91b6352c957cf9..d6244f0d3bc752 100644
+--- a/drivers/gpio/gpio-virtuser.c
++++ b/drivers/gpio/gpio-virtuser.c
+@@ -1410,7 +1410,7 @@ gpio_virtuser_make_lookup_table(struct gpio_virtuser_device *dev)
+ size_t num_entries = gpio_virtuser_get_lookup_count(dev);
+ struct gpio_virtuser_lookup_entry *entry;
+ struct gpio_virtuser_lookup *lookup;
+- unsigned int i = 0;
++ unsigned int i = 0, idx;
+
+ lockdep_assert_held(&dev->lock);
+
+@@ -1424,12 +1424,12 @@ gpio_virtuser_make_lookup_table(struct gpio_virtuser_device *dev)
+ return -ENOMEM;
+
+ list_for_each_entry(lookup, &dev->lookup_list, siblings) {
++ idx = 0;
+ list_for_each_entry(entry, &lookup->entry_list, siblings) {
+- table->table[i] =
++ table->table[i++] =
+ GPIO_LOOKUP_IDX(entry->key,
+ entry->offset < 0 ? U16_MAX : entry->offset,
+- lookup->con_id, i, entry->flags);
+- i++;
++ lookup->con_id, idx++, entry->flags);
+ }
+ }
+
+@@ -1439,6 +1439,15 @@ gpio_virtuser_make_lookup_table(struct gpio_virtuser_device *dev)
+ return 0;
+ }
+
++static void
++gpio_virtuser_remove_lookup_table(struct gpio_virtuser_device *dev)
++{
++ gpiod_remove_lookup_table(dev->lookup_table);
++ kfree(dev->lookup_table->dev_id);
++ kfree(dev->lookup_table);
++ dev->lookup_table = NULL;
++}
++
+ static struct fwnode_handle *
+ gpio_virtuser_make_device_swnode(struct gpio_virtuser_device *dev)
+ {
+@@ -1487,10 +1496,8 @@ gpio_virtuser_device_activate(struct gpio_virtuser_device *dev)
+ pdevinfo.fwnode = swnode;
+
+ ret = gpio_virtuser_make_lookup_table(dev);
+- if (ret) {
+- fwnode_remove_software_node(swnode);
+- return ret;
+- }
++ if (ret)
++ goto err_remove_swnode;
+
+ reinit_completion(&dev->probe_completion);
+ dev->driver_bound = false;
+@@ -1498,23 +1505,31 @@ gpio_virtuser_device_activate(struct gpio_virtuser_device *dev)
+
+ pdev = platform_device_register_full(&pdevinfo);
+ if (IS_ERR(pdev)) {
++ ret = PTR_ERR(pdev);
+ bus_unregister_notifier(&platform_bus_type, &dev->bus_notifier);
+- fwnode_remove_software_node(swnode);
+- return PTR_ERR(pdev);
++ goto err_remove_lookup_table;
+ }
+
+ wait_for_completion(&dev->probe_completion);
+ bus_unregister_notifier(&platform_bus_type, &dev->bus_notifier);
+
+ if (!dev->driver_bound) {
+- platform_device_unregister(pdev);
+- fwnode_remove_software_node(swnode);
+- return -ENXIO;
++ ret = -ENXIO;
++ goto err_unregister_pdev;
+ }
+
+ dev->pdev = pdev;
+
+ return 0;
++
++err_unregister_pdev:
++ platform_device_unregister(pdev);
++err_remove_lookup_table:
++ gpio_virtuser_remove_lookup_table(dev);
++err_remove_swnode:
++ fwnode_remove_software_node(swnode);
++
++ return ret;
+ }
+
+ static void
+@@ -1526,10 +1541,9 @@ gpio_virtuser_device_deactivate(struct gpio_virtuser_device *dev)
+
+ swnode = dev_fwnode(&dev->pdev->dev);
+ platform_device_unregister(dev->pdev);
++ gpio_virtuser_remove_lookup_table(dev);
+ fwnode_remove_software_node(swnode);
+ dev->pdev = NULL;
+- gpiod_remove_lookup_table(dev->lookup_table);
+- kfree(dev->lookup_table);
+ }
+
+ static ssize_t
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+index 7d26a962f811cf..ff5e52025266cd 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vram_mgr.c
+@@ -567,7 +567,6 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
+ else
+ remaining_size -= size;
+ }
+- mutex_unlock(&mgr->lock);
+
+ if (bo->flags & AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS && adjust_dcc_size) {
+ struct drm_buddy_block *dcc_block;
+@@ -584,6 +583,7 @@ static int amdgpu_vram_mgr_new(struct ttm_resource_manager *man,
+ (u64)vres->base.size,
+ &vres->blocks);
+ }
++ mutex_unlock(&mgr->lock);
+
+ vres->base.start = 0;
+ size = max_t(u64, amdgpu_vram_mgr_blocks_size(&vres->blocks),
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_debug.c b/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
+index 312dfa84f29f84..a8abc309180137 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_debug.c
+@@ -350,10 +350,27 @@ int kfd_dbg_set_mes_debug_mode(struct kfd_process_device *pdd, bool sq_trap_en)
+ {
+ uint32_t spi_dbg_cntl = pdd->spi_dbg_override | pdd->spi_dbg_launch_mode;
+ uint32_t flags = pdd->process->dbg_flags;
++ struct amdgpu_device *adev = pdd->dev->adev;
++ int r;
+
+ if (!kfd_dbg_is_per_vmid_supported(pdd->dev))
+ return 0;
+
++ if (!pdd->proc_ctx_cpu_ptr) {
++ r = amdgpu_amdkfd_alloc_gtt_mem(adev,
++ AMDGPU_MES_PROC_CTX_SIZE,
++ &pdd->proc_ctx_bo,
++ &pdd->proc_ctx_gpu_addr,
++ &pdd->proc_ctx_cpu_ptr,
++ false);
++ if (r) {
++ dev_err(adev->dev,
++ "failed to allocate process context bo\n");
++ return r;
++ }
++ memset(pdd->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE);
++ }
++
+ return amdgpu_mes_set_shader_debugger(pdd->dev->adev, pdd->proc_ctx_gpu_addr, spi_dbg_cntl,
+ pdd->watch_points, flags, sq_trap_en);
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index 3139987b82b100..264bd764f6f27d 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -1160,7 +1160,8 @@ static void kfd_process_wq_release(struct work_struct *work)
+ */
+ synchronize_rcu();
+ ef = rcu_access_pointer(p->ef);
+- dma_fence_signal(ef);
++ if (ef)
++ dma_fence_signal(ef);
+
+ kfd_process_remove_sysfs(p);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index ad3a3aa72b51f3..ea403fece8392c 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8393,16 +8393,6 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
+ struct amdgpu_crtc *acrtc,
+ struct dm_crtc_state *acrtc_state)
+ {
+- /*
+- * We have no guarantee that the frontend index maps to the same
+- * backend index - some even map to more than one.
+- *
+- * TODO: Use a different interrupt or check DC itself for the mapping.
+- */
+- int irq_type =
+- amdgpu_display_crtc_idx_to_irq_type(
+- adev,
+- acrtc->crtc_id);
+ struct drm_vblank_crtc_config config = {0};
+ struct dc_crtc_timing *timing;
+ int offdelay;
+@@ -8428,28 +8418,7 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
+
+ drm_crtc_vblank_on_config(&acrtc->base,
+ &config);
+-
+- amdgpu_irq_get(
+- adev,
+- &adev->pageflip_irq,
+- irq_type);
+-#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
+- amdgpu_irq_get(
+- adev,
+- &adev->vline0_irq,
+- irq_type);
+-#endif
+ } else {
+-#if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
+- amdgpu_irq_put(
+- adev,
+- &adev->vline0_irq,
+- irq_type);
+-#endif
+- amdgpu_irq_put(
+- adev,
+- &adev->pageflip_irq,
+- irq_type);
+ drm_crtc_vblank_off(&acrtc->base);
+ }
+ }
+@@ -11146,8 +11115,8 @@ dm_get_plane_scale(struct drm_plane_state *plane_state,
+ int plane_src_w, plane_src_h;
+
+ dm_get_oriented_plane_size(plane_state, &plane_src_w, &plane_src_h);
+- *out_plane_scale_w = plane_state->crtc_w * 1000 / plane_src_w;
+- *out_plane_scale_h = plane_state->crtc_h * 1000 / plane_src_h;
++ *out_plane_scale_w = plane_src_w ? plane_state->crtc_w * 1000 / plane_src_w : 0;
++ *out_plane_scale_h = plane_src_h ? plane_state->crtc_h * 1000 / plane_src_h : 0;
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 9f570d447c2099..6d4ee8fe615c38 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -4421,7 +4421,7 @@ static bool commit_minimal_transition_based_on_current_context(struct dc *dc,
+ struct pipe_split_policy_backup policy;
+ struct dc_state *intermediate_context;
+ struct dc_state *old_current_state = dc->current_state;
+- struct dc_surface_update srf_updates[MAX_SURFACE_NUM] = {0};
++ struct dc_surface_update srf_updates[MAX_SURFACES] = {0};
+ int surface_count;
+
+ /*
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_state.c b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
+index e006f816ff2f74..1b2cce127981d9 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_state.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_state.c
+@@ -483,9 +483,9 @@ bool dc_state_add_plane(
+ if (stream_status == NULL) {
+ dm_error("Existing stream not found; failed to attach surface!\n");
+ goto out;
+- } else if (stream_status->plane_count == MAX_SURFACE_NUM) {
++ } else if (stream_status->plane_count == MAX_SURFACES) {
+ dm_error("Surface: can not attach plane_state %p! Maximum is: %d\n",
+- plane_state, MAX_SURFACE_NUM);
++ plane_state, MAX_SURFACES);
+ goto out;
+ } else if (!otg_master_pipe) {
+ goto out;
+@@ -600,7 +600,7 @@ bool dc_state_rem_all_planes_for_stream(
+ {
+ int i, old_plane_count;
+ struct dc_stream_status *stream_status = NULL;
+- struct dc_plane_state *del_planes[MAX_SURFACE_NUM] = { 0 };
++ struct dc_plane_state *del_planes[MAX_SURFACES] = { 0 };
+
+ for (i = 0; i < state->stream_count; i++)
+ if (state->streams[i] == stream) {
+@@ -875,7 +875,7 @@ bool dc_state_rem_all_phantom_planes_for_stream(
+ {
+ int i, old_plane_count;
+ struct dc_stream_status *stream_status = NULL;
+- struct dc_plane_state *del_planes[MAX_SURFACE_NUM] = { 0 };
++ struct dc_plane_state *del_planes[MAX_SURFACES] = { 0 };
+
+ for (i = 0; i < state->stream_count; i++)
+ if (state->streams[i] == phantom_stream) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 7c163aa7e8bd2d..a4f6ff7155c2a0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -57,7 +57,7 @@ struct dmub_notification;
+
+ #define DC_VER "3.2.301"
+
+-#define MAX_SURFACES 3
++#define MAX_SURFACES 4
+ #define MAX_PLANES 6
+ #define MAX_STREAMS 6
+ #define MIN_VIEWPORT_SIZE 12
+@@ -1390,7 +1390,7 @@ struct dc_scratch_space {
+ * store current value in plane states so we can still recover
+ * a valid current state during dc update.
+ */
+- struct dc_plane_state plane_states[MAX_SURFACE_NUM];
++ struct dc_plane_state plane_states[MAX_SURFACES];
+
+ struct dc_stream_state stream_state;
+ };
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+index 14ea47eda0c873..8b9af1a6a03162 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+@@ -56,7 +56,7 @@ struct dc_stream_status {
+ int plane_count;
+ int audio_inst;
+ struct timing_sync_info timing_sync_info;
+- struct dc_plane_state *plane_states[MAX_SURFACE_NUM];
++ struct dc_plane_state *plane_states[MAX_SURFACES];
+ bool is_abm_supported;
+ struct mall_stream_config mall_stream_config;
+ bool fpo_in_use;
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_types.h b/drivers/gpu/drm/amd/display/dc/dc_types.h
+index 6d7989b751e2ce..c8bdbbba44ef9d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_types.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_types.h
+@@ -76,7 +76,6 @@ struct dc_perf_trace {
+ unsigned long last_entry_write;
+ };
+
+-#define MAX_SURFACE_NUM 6
+ #define NUM_PIXEL_FORMATS 10
+
+ enum tiling_mode {
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h b/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
+index 072bd053960594..6b2ab4ec2b5ffe 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
++++ b/drivers/gpu/drm/amd/display/dc/dml/dml_inline_defs.h
+@@ -66,11 +66,15 @@ static inline double dml_max5(double a, double b, double c, double d, double e)
+
+ static inline double dml_ceil(double a, double granularity)
+ {
++ if (granularity == 0)
++ return 0;
+ return (double) dcn_bw_ceil2(a, granularity);
+ }
+
+ static inline double dml_floor(double a, double granularity)
+ {
++ if (granularity == 0)
++ return 0;
+ return (double) dcn_bw_floor2(a, granularity);
+ }
+
+@@ -114,11 +118,15 @@ static inline double dml_ceil_2(double f)
+
+ static inline double dml_ceil_ex(double x, double granularity)
+ {
++ if (granularity == 0)
++ return 0;
+ return (double) dcn_bw_ceil2(x, granularity);
+ }
+
+ static inline double dml_floor_ex(double x, double granularity)
+ {
++ if (granularity == 0)
++ return 0;
+ return (double) dcn_bw_floor2(x, granularity);
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_mall_phantom.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_mall_phantom.c
+index 3d29169dd6bbf0..6b3b8803e0aee2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_mall_phantom.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_mall_phantom.c
+@@ -813,7 +813,7 @@ static bool remove_all_phantom_planes_for_stream(struct dml2_context *ctx, struc
+ {
+ int i, old_plane_count;
+ struct dc_stream_status *stream_status = NULL;
+- struct dc_plane_state *del_planes[MAX_SURFACE_NUM] = { 0 };
++ struct dc_plane_state *del_planes[MAX_SURFACES] = { 0 };
+
+ for (i = 0; i < context->stream_count; i++)
+ if (context->streams[i] == stream) {
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
+index e58220a7ee2f70..30178dde6d49fc 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
++++ b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
+@@ -302,5 +302,7 @@ int smu_v13_0_set_wbrf_exclusion_ranges(struct smu_context *smu,
+ int smu_v13_0_get_boot_freq_by_index(struct smu_context *smu,
+ enum smu_clk_type clk_type,
+ uint32_t *value);
++
++void smu_v13_0_interrupt_work(struct smu_context *smu);
+ #endif
+ #endif
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index e17466cc19522d..2024a85fa11bd5 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -1320,11 +1320,11 @@ static int smu_v13_0_set_irq_state(struct amdgpu_device *adev,
+ return 0;
+ }
+
+-static int smu_v13_0_ack_ac_dc_interrupt(struct smu_context *smu)
++void smu_v13_0_interrupt_work(struct smu_context *smu)
+ {
+- return smu_cmn_send_smc_msg(smu,
+- SMU_MSG_ReenableAcDcInterrupt,
+- NULL);
++ smu_cmn_send_smc_msg(smu,
++ SMU_MSG_ReenableAcDcInterrupt,
++ NULL);
+ }
+
+ #define THM_11_0__SRCID__THM_DIG_THERM_L2H 0 /* ASIC_TEMP > CG_THERMAL_INT.DIG_THERM_INTH */
+@@ -1377,12 +1377,12 @@ static int smu_v13_0_irq_process(struct amdgpu_device *adev,
+ switch (ctxid) {
+ case SMU_IH_INTERRUPT_CONTEXT_ID_AC:
+ dev_dbg(adev->dev, "Switched to AC mode!\n");
+- smu_v13_0_ack_ac_dc_interrupt(smu);
++ schedule_work(&smu->interrupt_work);
+ adev->pm.ac_power = true;
+ break;
+ case SMU_IH_INTERRUPT_CONTEXT_ID_DC:
+ dev_dbg(adev->dev, "Switched to DC mode!\n");
+- smu_v13_0_ack_ac_dc_interrupt(smu);
++ schedule_work(&smu->interrupt_work);
+ adev->pm.ac_power = false;
+ break;
+ case SMU_IH_INTERRUPT_CONTEXT_ID_THERMAL_THROTTLING:
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index a9373968807164..cd2cf0ffc0f5cb 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -3126,6 +3126,7 @@ static const struct pptable_funcs smu_v13_0_0_ppt_funcs = {
+ .is_asic_wbrf_supported = smu_v13_0_0_wbrf_support_check,
+ .enable_uclk_shadow = smu_v13_0_enable_uclk_shadow,
+ .set_wbrf_exclusion_ranges = smu_v13_0_set_wbrf_exclusion_ranges,
++ .interrupt_work = smu_v13_0_interrupt_work,
+ };
+
+ void smu_v13_0_0_set_ppt_funcs(struct smu_context *smu)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+index 1aedfafa507f7e..7c753d795287d9 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_7_ppt.c
+@@ -2704,6 +2704,7 @@ static const struct pptable_funcs smu_v13_0_7_ppt_funcs = {
+ .is_asic_wbrf_supported = smu_v13_0_7_wbrf_support_check,
+ .enable_uclk_shadow = smu_v13_0_enable_uclk_shadow,
+ .set_wbrf_exclusion_ranges = smu_v13_0_set_wbrf_exclusion_ranges,
++ .interrupt_work = smu_v13_0_interrupt_work,
+ };
+
+ void smu_v13_0_7_set_ppt_funcs(struct smu_context *smu)
+diff --git a/drivers/gpu/drm/mediatek/Kconfig b/drivers/gpu/drm/mediatek/Kconfig
+index 417ac8c9af4194..a749c01199d40e 100644
+--- a/drivers/gpu/drm/mediatek/Kconfig
++++ b/drivers/gpu/drm/mediatek/Kconfig
+@@ -13,9 +13,6 @@ config DRM_MEDIATEK
+ select DRM_BRIDGE_CONNECTOR
+ select DRM_MIPI_DSI
+ select DRM_PANEL
+- select MEMORY
+- select MTK_SMI
+- select PHY_MTK_MIPI_DSI
+ select VIDEOMODE_HELPERS
+ help
+	  Choose this option if you have a MediaTek SoC.
+@@ -26,7 +23,6 @@ config DRM_MEDIATEK
+ config DRM_MEDIATEK_DP
+ tristate "DRM DPTX Support for MediaTek SoCs"
+ depends on DRM_MEDIATEK
+- select PHY_MTK_DP
+ select DRM_DISPLAY_HELPER
+ select DRM_DISPLAY_DP_HELPER
+ select DRM_DISPLAY_DP_AUX_BUS
+@@ -37,6 +33,5 @@ config DRM_MEDIATEK_HDMI
+ tristate "DRM HDMI Support for Mediatek SoCs"
+ depends on DRM_MEDIATEK
+ select SND_SOC_HDMI_CODEC if SND_SOC
+- select PHY_MTK_HDMI
+ help
+ DRM/KMS HDMI driver for Mediatek SoCs
+diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.c b/drivers/gpu/drm/mediatek/mtk_crtc.c
+index eb0e1233ad0435..5674f5707cca83 100644
+--- a/drivers/gpu/drm/mediatek/mtk_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_crtc.c
+@@ -112,6 +112,11 @@ static void mtk_drm_finish_page_flip(struct mtk_crtc *mtk_crtc)
+
+ drm_crtc_handle_vblank(&mtk_crtc->base);
+
++#if IS_REACHABLE(CONFIG_MTK_CMDQ)
++ if (mtk_crtc->cmdq_client.chan)
++ return;
++#endif
++
+ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+ if (!mtk_crtc->config_updating && mtk_crtc->pending_needs_vblank) {
+ mtk_crtc_finish_page_flip(mtk_crtc);
+@@ -284,10 +289,8 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
+ state = to_mtk_crtc_state(mtk_crtc->base.state);
+
+ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+- if (mtk_crtc->config_updating) {
+- spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++ if (mtk_crtc->config_updating)
+ goto ddp_cmdq_cb_out;
+- }
+
+ state->pending_config = false;
+
+@@ -315,10 +318,15 @@ static void ddp_cmdq_cb(struct mbox_client *cl, void *mssg)
+ mtk_crtc->pending_async_planes = false;
+ }
+
+- spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
+-
+ ddp_cmdq_cb_out:
+
++ if (mtk_crtc->pending_needs_vblank) {
++ mtk_crtc_finish_page_flip(mtk_crtc);
++ mtk_crtc->pending_needs_vblank = false;
++ }
++
++ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++
+ mtk_crtc->cmdq_vblank_cnt = 0;
+ wake_up(&mtk_crtc->cb_blocking_queue);
+ }
+@@ -606,13 +614,18 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
+ */
+ mtk_crtc->cmdq_vblank_cnt = 3;
+
++ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
++ mtk_crtc->config_updating = false;
++ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++
+ mbox_send_message(mtk_crtc->cmdq_client.chan, cmdq_handle);
+ mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0);
+ }
+-#endif
++#else
+ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+ mtk_crtc->config_updating = false;
+ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
++#endif
+
+ mutex_unlock(&mtk_crtc->hw_lock);
+ }
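+
The mtk_crtc hunks above give page-flip completion a single owner: when a CMDQ mailbox channel is in use, the vblank interrupt returns early and the CMDQ callback finishes the flip under config_lock before dropping it. A minimal sketch of that pattern follows; struct crtc_ctx and complete_page_flip() are hypothetical stand-ins, not the driver code itself.

static void vblank_irq(struct crtc_ctx *ctx)
{
	unsigned long flags;

	drm_crtc_handle_vblank(&ctx->base);

	/* With an active CMDQ channel, the CMDQ callback owns flip
	 * completion, so the IRQ path must not race it.
	 */
	if (ctx->cmdq_chan)
		return;

	spin_lock_irqsave(&ctx->config_lock, flags);
	if (!ctx->config_updating && ctx->pending_needs_vblank) {
		complete_page_flip(ctx);	/* hypothetical helper */
		ctx->pending_needs_vblank = false;
	}
	spin_unlock_irqrestore(&ctx->config_lock, flags);
}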
+diff --git a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+index e0c0bb01f65ae0..19b0d508398198 100644
+--- a/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
++++ b/drivers/gpu/drm/mediatek/mtk_disp_ovl.c
+@@ -460,6 +460,29 @@ static unsigned int mtk_ovl_fmt_convert(struct mtk_disp_ovl *ovl,
+ }
+ }
+
++static void mtk_ovl_afbc_layer_config(struct mtk_disp_ovl *ovl,
++ unsigned int idx,
++ struct mtk_plane_pending_state *pending,
++ struct cmdq_pkt *cmdq_pkt)
++{
++ unsigned int pitch_msb = pending->pitch >> 16;
++ unsigned int hdr_pitch = pending->hdr_pitch;
++ unsigned int hdr_addr = pending->hdr_addr;
++
++ if (pending->modifier != DRM_FORMAT_MOD_LINEAR) {
++ mtk_ddp_write_relaxed(cmdq_pkt, hdr_addr, &ovl->cmdq_reg, ovl->regs,
++ DISP_REG_OVL_HDR_ADDR(ovl, idx));
++ mtk_ddp_write_relaxed(cmdq_pkt,
++ OVL_PITCH_MSB_2ND_SUBBUF | pitch_msb,
++ &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx));
++ mtk_ddp_write_relaxed(cmdq_pkt, hdr_pitch, &ovl->cmdq_reg, ovl->regs,
++ DISP_REG_OVL_HDR_PITCH(ovl, idx));
++ } else {
++ mtk_ddp_write_relaxed(cmdq_pkt, pitch_msb,
++ &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx));
++ }
++}
++
+ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
+ struct mtk_plane_state *state,
+ struct cmdq_pkt *cmdq_pkt)
+@@ -467,25 +490,14 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
+ struct mtk_disp_ovl *ovl = dev_get_drvdata(dev);
+ struct mtk_plane_pending_state *pending = &state->pending;
+ unsigned int addr = pending->addr;
+- unsigned int hdr_addr = pending->hdr_addr;
+- unsigned int pitch = pending->pitch;
+- unsigned int hdr_pitch = pending->hdr_pitch;
++ unsigned int pitch_lsb = pending->pitch & GENMASK(15, 0);
+ unsigned int fmt = pending->format;
++ unsigned int rotation = pending->rotation;
+ unsigned int offset = (pending->y << 16) | pending->x;
+ unsigned int src_size = (pending->height << 16) | pending->width;
+ unsigned int blend_mode = state->base.pixel_blend_mode;
+ unsigned int ignore_pixel_alpha = 0;
+ unsigned int con;
+- bool is_afbc = pending->modifier != DRM_FORMAT_MOD_LINEAR;
+- union overlay_pitch {
+- struct split_pitch {
+- u16 lsb;
+- u16 msb;
+- } split_pitch;
+- u32 pitch;
+- } overlay_pitch;
+-
+- overlay_pitch.pitch = pitch;
+
+ if (!pending->enable) {
+ mtk_ovl_layer_off(dev, idx, cmdq_pkt);
+@@ -513,22 +525,30 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
+ ignore_pixel_alpha = OVL_CONST_BLEND;
+ }
+
+- if (pending->rotation & DRM_MODE_REFLECT_Y) {
++ /*
++	 * Treat a 180-degree rotation as flip X + flip Y, and XOR it into the
++	 * original rotation value so that both can be applied at the same time.
++ */
++ if (rotation & DRM_MODE_ROTATE_180)
++ rotation ^= DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y;
++
++ if (rotation & DRM_MODE_REFLECT_Y) {
+ con |= OVL_CON_VIRT_FLIP;
+ addr += (pending->height - 1) * pending->pitch;
+ }
+
+- if (pending->rotation & DRM_MODE_REFLECT_X) {
++ if (rotation & DRM_MODE_REFLECT_X) {
+ con |= OVL_CON_HORZ_FLIP;
+ addr += pending->pitch - 1;
+ }
+
+ if (ovl->data->supports_afbc)
+- mtk_ovl_set_afbc(ovl, cmdq_pkt, idx, is_afbc);
++ mtk_ovl_set_afbc(ovl, cmdq_pkt, idx,
++ pending->modifier != DRM_FORMAT_MOD_LINEAR);
+
+ mtk_ddp_write_relaxed(cmdq_pkt, con, &ovl->cmdq_reg, ovl->regs,
+ DISP_REG_OVL_CON(idx));
+- mtk_ddp_write_relaxed(cmdq_pkt, overlay_pitch.split_pitch.lsb | ignore_pixel_alpha,
++ mtk_ddp_write_relaxed(cmdq_pkt, pitch_lsb | ignore_pixel_alpha,
+ &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH(idx));
+ mtk_ddp_write_relaxed(cmdq_pkt, src_size, &ovl->cmdq_reg, ovl->regs,
+ DISP_REG_OVL_SRC_SIZE(idx));
+@@ -537,19 +557,8 @@ void mtk_ovl_layer_config(struct device *dev, unsigned int idx,
+ mtk_ddp_write_relaxed(cmdq_pkt, addr, &ovl->cmdq_reg, ovl->regs,
+ DISP_REG_OVL_ADDR(ovl, idx));
+
+- if (is_afbc) {
+- mtk_ddp_write_relaxed(cmdq_pkt, hdr_addr, &ovl->cmdq_reg, ovl->regs,
+- DISP_REG_OVL_HDR_ADDR(ovl, idx));
+- mtk_ddp_write_relaxed(cmdq_pkt,
+- OVL_PITCH_MSB_2ND_SUBBUF | overlay_pitch.split_pitch.msb,
+- &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx));
+- mtk_ddp_write_relaxed(cmdq_pkt, hdr_pitch, &ovl->cmdq_reg, ovl->regs,
+- DISP_REG_OVL_HDR_PITCH(ovl, idx));
+- } else {
+- mtk_ddp_write_relaxed(cmdq_pkt,
+- overlay_pitch.split_pitch.msb,
+- &ovl->cmdq_reg, ovl->regs, DISP_REG_OVL_PITCH_MSB(idx));
+- }
++ if (ovl->data->supports_afbc)
++ mtk_ovl_afbc_layer_config(ovl, idx, pending, cmdq_pkt);
+
+ mtk_ovl_set_bit_depth(dev, idx, fmt, cmdq_pkt);
+ mtk_ovl_layer_on(dev, idx, cmdq_pkt);
+diff --git a/drivers/gpu/drm/mediatek/mtk_dp.c b/drivers/gpu/drm/mediatek/mtk_dp.c
+index f2bee617f063a7..cad65ea851edc7 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dp.c
++++ b/drivers/gpu/drm/mediatek/mtk_dp.c
+@@ -543,18 +543,16 @@ static int mtk_dp_set_color_format(struct mtk_dp *mtk_dp,
+ enum dp_pixelformat color_format)
+ {
+ u32 val;
+-
+- /* update MISC0 */
+- mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3034,
+- color_format << DP_TEST_COLOR_FORMAT_SHIFT,
+- DP_TEST_COLOR_FORMAT_MASK);
++ u32 misc0_color;
+
+ switch (color_format) {
+ case DP_PIXELFORMAT_YUV422:
+ val = PIXEL_ENCODE_FORMAT_DP_ENC0_P0_YCBCR422;
++ misc0_color = DP_COLOR_FORMAT_YCbCr422;
+ break;
+ case DP_PIXELFORMAT_RGB:
+ val = PIXEL_ENCODE_FORMAT_DP_ENC0_P0_RGB;
++ misc0_color = DP_COLOR_FORMAT_RGB;
+ break;
+ default:
+ drm_warn(mtk_dp->drm_dev, "Unsupported color format: %d\n",
+@@ -562,6 +560,11 @@ static int mtk_dp_set_color_format(struct mtk_dp *mtk_dp,
+ return -EINVAL;
+ }
+
++ /* update MISC0 */
++ mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_3034,
++ misc0_color,
++ DP_TEST_COLOR_FORMAT_MASK);
++
+ mtk_dp_update_bits(mtk_dp, MTK_DP_ENC0_P0_303C,
+ val, PIXEL_ENCODE_FORMAT_DP_ENC0_P0_MASK);
+ return 0;
+@@ -2100,7 +2103,6 @@ static enum drm_connector_status mtk_dp_bdg_detect(struct drm_bridge *bridge)
+ struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge);
+ enum drm_connector_status ret = connector_status_disconnected;
+ bool enabled = mtk_dp->enabled;
+- u8 sink_count = 0;
+
+ if (!mtk_dp->train_info.cable_plugged_in)
+ return ret;
+@@ -2115,8 +2117,8 @@ static enum drm_connector_status mtk_dp_bdg_detect(struct drm_bridge *bridge)
+ * function, we just need to check the HPD connection to check
+ * whether we connect to a sink device.
+ */
+- drm_dp_dpcd_readb(&mtk_dp->aux, DP_SINK_COUNT, &sink_count);
+- if (DP_GET_SINK_COUNT(sink_count))
++
++ if (drm_dp_read_sink_count(&mtk_dp->aux) > 0)
+ ret = connector_status_connected;
+
+ if (!enabled)
+@@ -2408,12 +2410,19 @@ mtk_dp_bridge_mode_valid(struct drm_bridge *bridge,
+ {
+ struct mtk_dp *mtk_dp = mtk_dp_from_bridge(bridge);
+ u32 bpp = info->color_formats & DRM_COLOR_FORMAT_YCBCR422 ? 16 : 24;
+- u32 rate = min_t(u32, drm_dp_max_link_rate(mtk_dp->rx_cap) *
+- drm_dp_max_lane_count(mtk_dp->rx_cap),
+- drm_dp_bw_code_to_link_rate(mtk_dp->max_linkrate) *
+- mtk_dp->max_lanes);
++ u32 lane_count_min = mtk_dp->train_info.lane_count;
++ u32 rate = drm_dp_bw_code_to_link_rate(mtk_dp->train_info.link_rate) *
++ lane_count_min;
+
+- if (rate < mode->clock * bpp / 8)
++ /*
++	 * FEC overhead is approximately 2.4% per DP 1.4a spec 2.2.1.4.2.
++	 * The down-spread amplitude shall either be disabled (0.0%) or be
++	 * up to 0.5% per DP 1.4a spec 3.5.2.6. This adds up to approximately
++	 * 3% total overhead.
++	 *
++	 * Because rate is already divided by 10, mode->clock does not need
++	 * to be multiplied by 10.
++ */
++ if ((rate * 97 / 100) < (mode->clock * bpp / 8))
+ return MODE_CLOCK_HIGH;
+
+ return MODE_OK;
+@@ -2454,10 +2463,9 @@ static u32 *mtk_dp_bridge_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
+ struct drm_display_mode *mode = &crtc_state->adjusted_mode;
+ struct drm_display_info *display_info =
+ &conn_state->connector->display_info;
+- u32 rate = min_t(u32, drm_dp_max_link_rate(mtk_dp->rx_cap) *
+- drm_dp_max_lane_count(mtk_dp->rx_cap),
+- drm_dp_bw_code_to_link_rate(mtk_dp->max_linkrate) *
+- mtk_dp->max_lanes);
++ u32 lane_count_min = mtk_dp->train_info.lane_count;
++ u32 rate = drm_dp_bw_code_to_link_rate(mtk_dp->train_info.link_rate) *
++ lane_count_min;
+
+ *num_input_fmts = 0;
+
+@@ -2466,8 +2474,8 @@ static u32 *mtk_dp_bridge_atomic_get_input_bus_fmts(struct drm_bridge *bridge,
+ * datarate of YUV422 and sink device supports YUV422, we output YUV422
+ * format. Use this condition, we can support more resolution.
+ */
+- if ((rate < (mode->clock * 24 / 8)) &&
+- (rate > (mode->clock * 16 / 8)) &&
++ if (((rate * 97 / 100) < (mode->clock * 24 / 8)) &&
++ ((rate * 97 / 100) > (mode->clock * 16 / 8)) &&
+ (display_info->color_formats & DRM_COLOR_FORMAT_YCBCR422)) {
+ input_fmts = kcalloc(1, sizeof(*input_fmts), GFP_KERNEL);
+ if (!input_fmts)
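+
Both mtk_dp_bridge_mode_valid() and the input-bus-format selection above now derate the trained link bandwidth by 3% before comparing it against the mode's data rate. A standalone sketch of the same arithmetic in plain C; per the comment above, the link rate is already divided by 10, so mode->clock in kHz needs no scaling, and the values here are illustrative:

#include <stdbool.h>
#include <stdint.h>

/* Available link bandwidth, derated ~3% for FEC plus down-spread
 * overhead, must cover the mode's data rate (pixel clock * bits per
 * pixel / 8). Sketch only; mirrors the driver check.
 */
static bool mode_fits(uint32_t link_rate, uint32_t lanes,
		      uint32_t pixel_clock_khz, uint32_t bpp)
{
	uint64_t avail = (uint64_t)link_rate * lanes * 97 / 100;
	uint64_t needed = (uint64_t)pixel_clock_khz * bpp / 8;

	return avail >= needed;
}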
+diff --git a/drivers/gpu/drm/mediatek/mtk_drm_drv.c b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+index 2c1cb335d8623f..4e93fd075e03cc 100644
+--- a/drivers/gpu/drm/mediatek/mtk_drm_drv.c
++++ b/drivers/gpu/drm/mediatek/mtk_drm_drv.c
+@@ -673,6 +673,8 @@ static int mtk_drm_bind(struct device *dev)
+ err_free:
+ private->drm = NULL;
+ drm_dev_put(drm);
++ for (i = 0; i < private->data->mmsys_dev_num; i++)
++ private->all_drm_private[i]->drm = NULL;
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index eeec641cab60db..b9b7fd08b7d7e9 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -139,11 +139,11 @@
+ #define CLK_HS_POST GENMASK(15, 8)
+ #define CLK_HS_EXIT GENMASK(23, 16)
+
+-#define DSI_VM_CMD_CON 0x130
++/* DSI_VM_CMD_CON */
+ #define VM_CMD_EN BIT(0)
+ #define TS_VFP_EN BIT(5)
+
+-#define DSI_SHADOW_DEBUG 0x190U
++/* DSI_SHADOW_DEBUG */
+ #define FORCE_COMMIT BIT(0)
+ #define BYPASS_SHADOW BIT(1)
+
+@@ -187,6 +187,8 @@ struct phy;
+
+ struct mtk_dsi_driver_data {
+ const u32 reg_cmdq_off;
++ const u32 reg_vm_cmd_off;
++ const u32 reg_shadow_dbg_off;
+ bool has_shadow_ctl;
+ bool has_size_ctl;
+ bool cmdq_long_packet_ctl;
+@@ -246,23 +248,22 @@ static void mtk_dsi_phy_timconfig(struct mtk_dsi *dsi)
+ u32 data_rate_mhz = DIV_ROUND_UP(dsi->data_rate, HZ_PER_MHZ);
+ struct mtk_phy_timing *timing = &dsi->phy_timing;
+
+- timing->lpx = (80 * data_rate_mhz / (8 * 1000)) + 1;
+- timing->da_hs_prepare = (59 * data_rate_mhz + 4 * 1000) / 8000 + 1;
+- timing->da_hs_zero = (163 * data_rate_mhz + 11 * 1000) / 8000 + 1 -
++ timing->lpx = (60 * data_rate_mhz / (8 * 1000)) + 1;
++ timing->da_hs_prepare = (80 * data_rate_mhz + 4 * 1000) / 8000;
++ timing->da_hs_zero = (170 * data_rate_mhz + 10 * 1000) / 8000 + 1 -
+ timing->da_hs_prepare;
+- timing->da_hs_trail = (78 * data_rate_mhz + 7 * 1000) / 8000 + 1;
++ timing->da_hs_trail = timing->da_hs_prepare + 1;
+
+- timing->ta_go = 4 * timing->lpx;
+- timing->ta_sure = 3 * timing->lpx / 2;
+- timing->ta_get = 5 * timing->lpx;
+- timing->da_hs_exit = (118 * data_rate_mhz / (8 * 1000)) + 1;
++ timing->ta_go = 4 * timing->lpx - 2;
++ timing->ta_sure = timing->lpx + 2;
++ timing->ta_get = 4 * timing->lpx;
++ timing->da_hs_exit = 2 * timing->lpx + 1;
+
+- timing->clk_hs_prepare = (57 * data_rate_mhz / (8 * 1000)) + 1;
+- timing->clk_hs_post = (65 * data_rate_mhz + 53 * 1000) / 8000 + 1;
+- timing->clk_hs_trail = (78 * data_rate_mhz + 7 * 1000) / 8000 + 1;
+- timing->clk_hs_zero = (330 * data_rate_mhz / (8 * 1000)) + 1 -
+- timing->clk_hs_prepare;
+- timing->clk_hs_exit = (118 * data_rate_mhz / (8 * 1000)) + 1;
++ timing->clk_hs_prepare = 70 * data_rate_mhz / (8 * 1000);
++ timing->clk_hs_post = timing->clk_hs_prepare + 8;
++ timing->clk_hs_trail = timing->clk_hs_prepare;
++ timing->clk_hs_zero = timing->clk_hs_trail * 4;
++ timing->clk_hs_exit = 2 * timing->clk_hs_trail;
+
+ timcon0 = FIELD_PREP(LPX, timing->lpx) |
+ FIELD_PREP(HS_PREP, timing->da_hs_prepare) |
+@@ -367,8 +368,8 @@ static void mtk_dsi_set_mode(struct mtk_dsi *dsi)
+
+ static void mtk_dsi_set_vm_cmd(struct mtk_dsi *dsi)
+ {
+- mtk_dsi_mask(dsi, DSI_VM_CMD_CON, VM_CMD_EN, VM_CMD_EN);
+- mtk_dsi_mask(dsi, DSI_VM_CMD_CON, TS_VFP_EN, TS_VFP_EN);
++ mtk_dsi_mask(dsi, dsi->driver_data->reg_vm_cmd_off, VM_CMD_EN, VM_CMD_EN);
++ mtk_dsi_mask(dsi, dsi->driver_data->reg_vm_cmd_off, TS_VFP_EN, TS_VFP_EN);
+ }
+
+ static void mtk_dsi_rxtx_control(struct mtk_dsi *dsi)
+@@ -714,7 +715,7 @@ static int mtk_dsi_poweron(struct mtk_dsi *dsi)
+
+ if (dsi->driver_data->has_shadow_ctl)
+ writel(FORCE_COMMIT | BYPASS_SHADOW,
+- dsi->regs + DSI_SHADOW_DEBUG);
++ dsi->regs + dsi->driver_data->reg_shadow_dbg_off);
+
+ mtk_dsi_reset_engine(dsi);
+ mtk_dsi_phy_timconfig(dsi);
+@@ -1255,26 +1256,36 @@ static void mtk_dsi_remove(struct platform_device *pdev)
+
+ static const struct mtk_dsi_driver_data mt8173_dsi_driver_data = {
+ .reg_cmdq_off = 0x200,
++ .reg_vm_cmd_off = 0x130,
++ .reg_shadow_dbg_off = 0x190
+ };
+
+ static const struct mtk_dsi_driver_data mt2701_dsi_driver_data = {
+ .reg_cmdq_off = 0x180,
++ .reg_vm_cmd_off = 0x130,
++ .reg_shadow_dbg_off = 0x190
+ };
+
+ static const struct mtk_dsi_driver_data mt8183_dsi_driver_data = {
+ .reg_cmdq_off = 0x200,
++ .reg_vm_cmd_off = 0x130,
++ .reg_shadow_dbg_off = 0x190,
+ .has_shadow_ctl = true,
+ .has_size_ctl = true,
+ };
+
+ static const struct mtk_dsi_driver_data mt8186_dsi_driver_data = {
+ .reg_cmdq_off = 0xd00,
++ .reg_vm_cmd_off = 0x200,
++ .reg_shadow_dbg_off = 0xc00,
+ .has_shadow_ctl = true,
+ .has_size_ctl = true,
+ };
+
+ static const struct mtk_dsi_driver_data mt8188_dsi_driver_data = {
+ .reg_cmdq_off = 0xd00,
++ .reg_vm_cmd_off = 0x200,
++ .reg_shadow_dbg_off = 0xc00,
+ .has_shadow_ctl = true,
+ .has_size_ctl = true,
+ .cmdq_long_packet_ctl = true,
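+
The new reg_vm_cmd_off and reg_shadow_dbg_off fields replace the fixed DSI_VM_CMD_CON/DSI_SHADOW_DEBUG offsets because MT8186 and MT8188 moved those registers (0x200/0xc00 instead of 0x130/0x190). A sketch of how a further SoC would slot into this table; the mt9999 name is invented purely for illustration:

/* Hypothetical future entry: only the offsets differ per generation,
 * and shared code reaches them through dsi->driver_data.
 */
static const struct mtk_dsi_driver_data mt9999_dsi_driver_data = {
	.reg_cmdq_off = 0xd00,
	.reg_vm_cmd_off = 0x200,
	.reg_shadow_dbg_off = 0xc00,
	.has_shadow_ctl = true,
	.has_size_ctl = true,
};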
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index d5fd6a089b7ccc..b940688c361356 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -386,6 +386,10 @@ int xe_gt_init_early(struct xe_gt *gt)
+ xe_force_wake_init_gt(gt, gt_to_fw(gt));
+ 	spin_lock_init(&gt->global_invl_lock);
+
++ err = xe_gt_tlb_invalidation_init_early(gt);
++ if (err)
++ return err;
++
+ return 0;
+ }
+
+@@ -585,10 +589,6 @@ int xe_gt_init(struct xe_gt *gt)
+ 		xe_hw_fence_irq_init(&gt->fence_irq[i]);
+ }
+
+- err = xe_gt_tlb_invalidation_init(gt);
+- if (err)
+- return err;
+-
+ err = xe_gt_pagefault_init(gt);
+ if (err)
+ return err;
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+index 7e385940df0863..ace1fe831a7b72 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+@@ -106,7 +106,7 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
+ }
+
+ /**
+- * xe_gt_tlb_invalidation_init - Initialize GT TLB invalidation state
++ * xe_gt_tlb_invalidation_init_early - Initialize GT TLB invalidation state
+ * @gt: graphics tile
+ *
+ * Initialize GT TLB invalidation state, purely software initialization, should
+@@ -114,7 +114,7 @@ static void xe_gt_tlb_fence_timeout(struct work_struct *work)
+ *
+ * Return: 0 on success, negative error code on error.
+ */
+-int xe_gt_tlb_invalidation_init(struct xe_gt *gt)
++int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt)
+ {
+ gt->tlb_invalidation.seqno = 1;
+ 	INIT_LIST_HEAD(&gt->tlb_invalidation.pending_fences);
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+index 00b1c6c01e8d95..672acfcdf0d70d 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.h
+@@ -14,7 +14,8 @@ struct xe_gt;
+ struct xe_guc;
+ struct xe_vma;
+
+-int xe_gt_tlb_invalidation_init(struct xe_gt *gt);
++int xe_gt_tlb_invalidation_init_early(struct xe_gt *gt);
++
+ void xe_gt_tlb_invalidation_reset(struct xe_gt *gt);
+ int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt);
+ int xe_gt_tlb_invalidation_vma(struct xe_gt *gt,
+diff --git a/drivers/hwmon/drivetemp.c b/drivers/hwmon/drivetemp.c
+index 6bdd21aa005ab8..2a4ec55ddb47ed 100644
+--- a/drivers/hwmon/drivetemp.c
++++ b/drivers/hwmon/drivetemp.c
+@@ -165,6 +165,7 @@ static int drivetemp_scsi_command(struct drivetemp_data *st,
+ {
+ u8 scsi_cmd[MAX_COMMAND_SIZE];
+ enum req_op op;
++ int err;
+
+ memset(scsi_cmd, 0, sizeof(scsi_cmd));
+ scsi_cmd[0] = ATA_16;
+@@ -192,8 +193,11 @@ static int drivetemp_scsi_command(struct drivetemp_data *st,
+ scsi_cmd[12] = lba_high;
+ scsi_cmd[14] = ata_command;
+
+- return scsi_execute_cmd(st->sdev, scsi_cmd, op, st->smartdata,
+- ATA_SECT_SIZE, HZ, 5, NULL);
++ err = scsi_execute_cmd(st->sdev, scsi_cmd, op, st->smartdata,
++ ATA_SECT_SIZE, HZ, 5, NULL);
++ if (err > 0)
++ err = -EIO;
++ return err;
+ }
+
+ static int drivetemp_ata_command(struct drivetemp_data *st, u8 feature,
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index b79c48d46cccf8..8d94bc2b1cac35 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -917,6 +917,9 @@ static int ad7124_setup(struct ad7124_state *st)
+ * set all channels to this default value.
+ */
+ ad7124_set_channel_odr(st, i, 10);
++
++ /* Disable all channels to prevent unintended conversions. */
++ ad_sd_write_reg(&st->sd, AD7124_CHANNEL(i), 2, 0);
+ }
+
+ ret = ad_sd_write_reg(&st->sd, AD7124_ADC_CONTROL, 2, st->adc_control);
+diff --git a/drivers/iio/adc/ad7173.c b/drivers/iio/adc/ad7173.c
+index 0702ec71aa2933..5a65be00dd190f 100644
+--- a/drivers/iio/adc/ad7173.c
++++ b/drivers/iio/adc/ad7173.c
+@@ -198,6 +198,7 @@ struct ad7173_channel {
+
+ struct ad7173_state {
+ struct ad_sigma_delta sd;
++ struct ad_sigma_delta_info sigma_delta_info;
+ const struct ad7173_device_info *info;
+ struct ad7173_channel *channels;
+ struct regulator_bulk_data regulators[3];
+@@ -733,7 +734,7 @@ static int ad7173_disable_one(struct ad_sigma_delta *sd, unsigned int chan)
+ return ad_sd_write_reg(sd, AD7173_REG_CH(chan), 2, 0);
+ }
+
+-static struct ad_sigma_delta_info ad7173_sigma_delta_info = {
++static const struct ad_sigma_delta_info ad7173_sigma_delta_info = {
+ .set_channel = ad7173_set_channel,
+ .append_status = ad7173_append_status,
+ .disable_all = ad7173_disable_all,
+@@ -1371,7 +1372,7 @@ static int ad7173_fw_parse_device_config(struct iio_dev *indio_dev)
+ if (ret < 0)
+ return dev_err_probe(dev, ret, "Interrupt 'rdy' is required\n");
+
+- ad7173_sigma_delta_info.irq_line = ret;
++ st->sigma_delta_info.irq_line = ret;
+
+ return ad7173_fw_parse_channel_config(indio_dev);
+ }
+@@ -1404,8 +1405,9 @@ static int ad7173_probe(struct spi_device *spi)
+ spi->mode = SPI_MODE_3;
+ spi_setup(spi);
+
+- ad7173_sigma_delta_info.num_slots = st->info->num_configs;
+- ret = ad_sd_init(&st->sd, indio_dev, spi, &ad7173_sigma_delta_info);
++ st->sigma_delta_info = ad7173_sigma_delta_info;
++ st->sigma_delta_info.num_slots = st->info->num_configs;
++ ret = ad_sd_init(&st->sd, indio_dev, spi, &st->sigma_delta_info);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c
+index 9c39acff17e658..6ab0e88f6895af 100644
+--- a/drivers/iio/adc/at91_adc.c
++++ b/drivers/iio/adc/at91_adc.c
+@@ -979,7 +979,7 @@ static int at91_ts_register(struct iio_dev *idev,
+ return ret;
+
+ err:
+- input_free_device(st->ts_input);
++ input_free_device(input);
+ return ret;
+ }
+
+diff --git a/drivers/iio/adc/rockchip_saradc.c b/drivers/iio/adc/rockchip_saradc.c
+index 240cfa391674e7..dfd47a6e1f4a1b 100644
+--- a/drivers/iio/adc/rockchip_saradc.c
++++ b/drivers/iio/adc/rockchip_saradc.c
+@@ -368,6 +368,8 @@ static irqreturn_t rockchip_saradc_trigger_handler(int irq, void *p)
+ int ret;
+ int i, j = 0;
+
++ memset(&data, 0, sizeof(data));
++
+ mutex_lock(&info->lock);
+
+ iio_for_each_active_channel(i_dev, i) {
+diff --git a/drivers/iio/adc/ti-ads1119.c b/drivers/iio/adc/ti-ads1119.c
+index 1c760637514928..6637cb6a6dda4a 100644
+--- a/drivers/iio/adc/ti-ads1119.c
++++ b/drivers/iio/adc/ti-ads1119.c
+@@ -500,12 +500,14 @@ static irqreturn_t ads1119_trigger_handler(int irq, void *private)
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct ads1119_state *st = iio_priv(indio_dev);
+ struct {
+- unsigned int sample;
++ s16 sample;
+ s64 timestamp __aligned(8);
+ } scan;
+ unsigned int index;
+ int ret;
+
++ memset(&scan, 0, sizeof(scan));
++
+ if (!iio_trigger_using_own(indio_dev)) {
+ index = find_first_bit(indio_dev->active_scan_mask,
+ iio_get_masklength(indio_dev));
+diff --git a/drivers/iio/adc/ti-ads124s08.c b/drivers/iio/adc/ti-ads124s08.c
+index 425b48d8986f52..f452f57f11c956 100644
+--- a/drivers/iio/adc/ti-ads124s08.c
++++ b/drivers/iio/adc/ti-ads124s08.c
+@@ -183,9 +183,9 @@ static int ads124s_reset(struct iio_dev *indio_dev)
+ struct ads124s_private *priv = iio_priv(indio_dev);
+
+ if (priv->reset_gpio) {
+- gpiod_set_value(priv->reset_gpio, 0);
++ gpiod_set_value_cansleep(priv->reset_gpio, 0);
+ udelay(200);
+- gpiod_set_value(priv->reset_gpio, 1);
++ gpiod_set_value_cansleep(priv->reset_gpio, 1);
+ } else {
+ return ads124s_write_cmd(indio_dev, ADS124S08_CMD_RESET);
+ }
+diff --git a/drivers/iio/adc/ti-ads1298.c b/drivers/iio/adc/ti-ads1298.c
+index 0f9f75baaebbf7..d00cd169e8dfd5 100644
+--- a/drivers/iio/adc/ti-ads1298.c
++++ b/drivers/iio/adc/ti-ads1298.c
+@@ -613,6 +613,8 @@ static int ads1298_init(struct iio_dev *indio_dev)
+ }
+ indio_dev->name = devm_kasprintf(dev, GFP_KERNEL, "ads129%u%s",
+ indio_dev->num_channels, suffix);
++ if (!indio_dev->name)
++ return -ENOMEM;
+
+ /* Enable internal test signal, double amplitude, double frequency */
+ ret = regmap_write(priv->regmap, ADS1298_REG_CONFIG2,
+diff --git a/drivers/iio/adc/ti-ads8688.c b/drivers/iio/adc/ti-ads8688.c
+index 9b1814f1965a37..a31658b760a4ae 100644
+--- a/drivers/iio/adc/ti-ads8688.c
++++ b/drivers/iio/adc/ti-ads8688.c
+@@ -381,7 +381,7 @@ static irqreturn_t ads8688_trigger_handler(int irq, void *p)
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ /* Ensure naturally aligned timestamp */
+- u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)] __aligned(8);
++ u16 buffer[ADS8688_MAX_CHANNELS + sizeof(s64)/sizeof(u16)] __aligned(8) = { };
+ int i, j = 0;
+
+ iio_for_each_active_channel(indio_dev, i) {
+diff --git a/drivers/iio/dummy/iio_simple_dummy_buffer.c b/drivers/iio/dummy/iio_simple_dummy_buffer.c
+index 4ca3f1aaff9996..288880346707a2 100644
+--- a/drivers/iio/dummy/iio_simple_dummy_buffer.c
++++ b/drivers/iio/dummy/iio_simple_dummy_buffer.c
+@@ -48,7 +48,7 @@ static irqreturn_t iio_simple_dummy_trigger_h(int irq, void *p)
+ int i = 0, j;
+ u16 *data;
+
+- data = kmalloc(indio_dev->scan_bytes, GFP_KERNEL);
++ data = kzalloc(indio_dev->scan_bytes, GFP_KERNEL);
+ if (!data)
+ goto done;
+
+diff --git a/drivers/iio/gyro/fxas21002c_core.c b/drivers/iio/gyro/fxas21002c_core.c
+index c28d17ca6f5ee0..aabc5e2d788d15 100644
+--- a/drivers/iio/gyro/fxas21002c_core.c
++++ b/drivers/iio/gyro/fxas21002c_core.c
+@@ -730,14 +730,21 @@ static irqreturn_t fxas21002c_trigger_handler(int irq, void *p)
+ int ret;
+
+ mutex_lock(&data->lock);
++ ret = fxas21002c_pm_get(data);
++ if (ret < 0)
++ goto out_unlock;
++
+ ret = regmap_bulk_read(data->regmap, FXAS21002C_REG_OUT_X_MSB,
+ data->buffer, CHANNEL_SCAN_MAX * sizeof(s16));
+ if (ret < 0)
+- goto out_unlock;
++ goto out_pm_put;
+
+ iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
+ data->timestamp);
+
++out_pm_put:
++ fxas21002c_pm_put(data);
++
+ out_unlock:
+ mutex_unlock(&data->lock);
+
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600.h b/drivers/iio/imu/inv_icm42600/inv_icm42600.h
+index 3a07e43e4cf154..18787a43477b89 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600.h
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600.h
+@@ -403,6 +403,7 @@ struct inv_icm42600_sensor_state {
+ typedef int (*inv_icm42600_bus_setup)(struct inv_icm42600_state *);
+
+ extern const struct regmap_config inv_icm42600_regmap_config;
++extern const struct regmap_config inv_icm42600_spi_regmap_config;
+ extern const struct dev_pm_ops inv_icm42600_pm_ops;
+
+ const struct iio_mount_matrix *
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
+index c3924cc6190ee4..a0bed49c3ba674 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_core.c
+@@ -87,6 +87,21 @@ const struct regmap_config inv_icm42600_regmap_config = {
+ };
+ EXPORT_SYMBOL_NS_GPL(inv_icm42600_regmap_config, IIO_ICM42600);
+
++/* define a specific regmap config for SPI, which does not support burst write */
++const struct regmap_config inv_icm42600_spi_regmap_config = {
++ .name = "inv_icm42600",
++ .reg_bits = 8,
++ .val_bits = 8,
++ .max_register = 0x4FFF,
++ .ranges = inv_icm42600_regmap_ranges,
++ .num_ranges = ARRAY_SIZE(inv_icm42600_regmap_ranges),
++ .volatile_table = inv_icm42600_regmap_volatile_accesses,
++ .rd_noinc_table = inv_icm42600_regmap_rd_noinc_accesses,
++ .cache_type = REGCACHE_RBTREE,
++ .use_single_write = true,
++};
++EXPORT_SYMBOL_NS_GPL(inv_icm42600_spi_regmap_config, IIO_ICM42600);
++
+ struct inv_icm42600_hw {
+ uint8_t whoami;
+ const char *name;
+@@ -822,6 +837,8 @@ static int inv_icm42600_suspend(struct device *dev)
+ static int inv_icm42600_resume(struct device *dev)
+ {
+ struct inv_icm42600_state *st = dev_get_drvdata(dev);
++ struct inv_icm42600_sensor_state *gyro_st = iio_priv(st->indio_gyro);
++ struct inv_icm42600_sensor_state *accel_st = iio_priv(st->indio_accel);
+ int ret;
+
+ mutex_lock(&st->lock);
+@@ -842,9 +859,12 @@ static int inv_icm42600_resume(struct device *dev)
+ goto out_unlock;
+
+ /* restore FIFO data streaming */
+- if (st->fifo.on)
++ if (st->fifo.on) {
++ inv_sensors_timestamp_reset(&gyro_st->ts);
++ inv_sensors_timestamp_reset(&accel_st->ts);
+ ret = regmap_write(st->map, INV_ICM42600_REG_FIFO_CONFIG,
+ INV_ICM42600_FIFO_CONFIG_STREAM);
++ }
+
+ out_unlock:
+ mutex_unlock(&st->lock);
+diff --git a/drivers/iio/imu/inv_icm42600/inv_icm42600_spi.c b/drivers/iio/imu/inv_icm42600/inv_icm42600_spi.c
+index eae5ff7a3cc102..36fe8d94ec1cc6 100644
+--- a/drivers/iio/imu/inv_icm42600/inv_icm42600_spi.c
++++ b/drivers/iio/imu/inv_icm42600/inv_icm42600_spi.c
+@@ -59,7 +59,8 @@ static int inv_icm42600_probe(struct spi_device *spi)
+ return -EINVAL;
+ chip = (uintptr_t)match;
+
+- regmap = devm_regmap_init_spi(spi, &inv_icm42600_regmap_config);
++	/* use the SPI-specific regmap config */
++ regmap = devm_regmap_init_spi(spi, &inv_icm42600_spi_regmap_config);
+ if (IS_ERR(regmap))
+ return PTR_ERR(regmap);
+
+diff --git a/drivers/iio/imu/kmx61.c b/drivers/iio/imu/kmx61.c
+index c61c012e25bbaa..53773418610f75 100644
+--- a/drivers/iio/imu/kmx61.c
++++ b/drivers/iio/imu/kmx61.c
+@@ -1192,7 +1192,7 @@ static irqreturn_t kmx61_trigger_handler(int irq, void *p)
+ struct kmx61_data *data = kmx61_get_data(indio_dev);
+ int bit, ret, i = 0;
+ u8 base;
+- s16 buffer[8];
++ s16 buffer[8] = { };
+
+ if (indio_dev == data->acc_indio_dev)
+ base = KMX61_ACC_XOUT_L;
+diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
+index 3305ebbdbc0787..1155487f7aeac8 100644
+--- a/drivers/iio/inkern.c
++++ b/drivers/iio/inkern.c
+@@ -499,7 +499,7 @@ struct iio_channel *iio_channel_get_all(struct device *dev)
+ return_ptr(chans);
+
+ error_free_chans:
+- for (i = 0; i < nummaps; i++)
++ for (i = 0; i < mapind; i++)
+ iio_device_put(chans[i].indio_dev);
+ return ERR_PTR(ret);
+ }
+diff --git a/drivers/iio/light/bh1745.c b/drivers/iio/light/bh1745.c
+index 2e458e9d5d8530..a025e279df0747 100644
+--- a/drivers/iio/light/bh1745.c
++++ b/drivers/iio/light/bh1745.c
+@@ -750,6 +750,8 @@ static irqreturn_t bh1745_trigger_handler(int interrupt, void *p)
+ int i;
+ int j = 0;
+
++ memset(&scan, 0, sizeof(scan));
++
+ iio_for_each_active_channel(indio_dev, i) {
+ ret = regmap_bulk_read(data->regmap, BH1745_RED_LSB + 2 * i,
+ &value, 2);
+diff --git a/drivers/iio/light/vcnl4035.c b/drivers/iio/light/vcnl4035.c
+index 337a1332c2c64a..67c94be0201897 100644
+--- a/drivers/iio/light/vcnl4035.c
++++ b/drivers/iio/light/vcnl4035.c
+@@ -105,7 +105,7 @@ static irqreturn_t vcnl4035_trigger_consumer_handler(int irq, void *p)
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct vcnl4035_data *data = iio_priv(indio_dev);
+ /* Ensure naturally aligned timestamp */
+- u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)] __aligned(8);
++ u8 buffer[ALIGN(sizeof(u16), sizeof(s64)) + sizeof(s64)] __aligned(8) = { };
+ int ret;
+
+ ret = regmap_read(data->regmap, VCNL4035_ALS_DATA, (int *)buffer);
+diff --git a/drivers/iio/pressure/zpa2326.c b/drivers/iio/pressure/zpa2326.c
+index 950f8dee2b26b7..b4c6c7c4725694 100644
+--- a/drivers/iio/pressure/zpa2326.c
++++ b/drivers/iio/pressure/zpa2326.c
+@@ -586,6 +586,8 @@ static int zpa2326_fill_sample_buffer(struct iio_dev *indio_dev,
+ } sample;
+ int err;
+
++ memset(&sample, 0, sizeof(sample));
++
+ if (test_bit(0, indio_dev->active_scan_mask)) {
+ /* Get current pressure from hardware FIFO. */
+ err = zpa2326_dequeue_pressure(indio_dev, &sample.pressure);
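+
The memset()/zero-initializer additions across these IIO drivers all close the same hole: the scan record pushed to the IIO buffer contains struct padding (and possibly unused channel slots) that would otherwise carry stale kernel stack bytes to userspace. A standalone model of the pattern, not driver code:

#include <stdint.h>
#include <string.h>

/* A 16-bit sample followed by an 8-byte-aligned 64-bit timestamp
 * leaves 6 bytes of implicit padding that must be zeroed.
 */
struct scan_record {
	int16_t sample;		/* 2 bytes */
	/* 6 bytes of implicit padding */
	int64_t timestamp;	/* aligned to 8 */
};

static void fill_scan(struct scan_record *scan, int16_t sample, int64_t ts)
{
	memset(scan, 0, sizeof(*scan));	/* clears the padding too */
	scan->sample = sample;
	scan->timestamp = ts;
}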
+diff --git a/drivers/md/dm-ebs-target.c b/drivers/md/dm-ebs-target.c
+index ec5db1478b2fce..18ae45dcbfb28b 100644
+--- a/drivers/md/dm-ebs-target.c
++++ b/drivers/md/dm-ebs-target.c
+@@ -442,7 +442,7 @@ static int ebs_iterate_devices(struct dm_target *ti,
+ static struct target_type ebs_target = {
+ .name = "ebs",
+ .version = {1, 0, 1},
+- .features = DM_TARGET_PASSES_INTEGRITY,
++ .features = 0,
+ .module = THIS_MODULE,
+ .ctr = ebs_ctr,
+ .dtr = ebs_dtr,
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index c9f47d0cccf9bb..872bb59f547055 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -2332,10 +2332,9 @@ static struct thin_c *get_first_thin(struct pool *pool)
+ struct thin_c *tc = NULL;
+
+ rcu_read_lock();
+- if (!list_empty(&pool->active_thins)) {
+- tc = list_entry_rcu(pool->active_thins.next, struct thin_c, list);
++ tc = list_first_or_null_rcu(&pool->active_thins, struct thin_c, list);
++ if (tc)
+ thin_get(tc);
+- }
+ rcu_read_unlock();
+
+ return tc;
+diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
+index 62b1a44b8dd2e7..6bd9848518d477 100644
+--- a/drivers/md/dm-verity-fec.c
++++ b/drivers/md/dm-verity-fec.c
+@@ -60,15 +60,19 @@ static int fec_decode_rs8(struct dm_verity *v, struct dm_verity_fec_io *fio,
+ * to the data block. Caller is responsible for releasing buf.
+ */
+ static u8 *fec_read_parity(struct dm_verity *v, u64 rsb, int index,
+- unsigned int *offset, struct dm_buffer **buf,
+- unsigned short ioprio)
++ unsigned int *offset, unsigned int par_buf_offset,
++ struct dm_buffer **buf, unsigned short ioprio)
+ {
+ u64 position, block, rem;
+ u8 *res;
+
++	/* If part of the parity bytes has already been read, skip to the next block */
++ if (par_buf_offset)
++ index++;
++
+ position = (index + rsb) * v->fec->roots;
+ block = div64_u64_rem(position, v->fec->io_size, &rem);
+- *offset = (unsigned int)rem;
++ *offset = par_buf_offset ? 0 : (unsigned int)rem;
+
+ res = dm_bufio_read_with_ioprio(v->fec->bufio, block, buf, ioprio);
+ if (IS_ERR(res)) {
+@@ -128,11 +132,12 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io,
+ {
+ int r, corrected = 0, res;
+ struct dm_buffer *buf;
+- unsigned int n, i, offset;
+- u8 *par, *block;
++ unsigned int n, i, offset, par_buf_offset = 0;
++ u8 *par, *block, par_buf[DM_VERITY_FEC_RSM - DM_VERITY_FEC_MIN_RSN];
+ struct bio *bio = dm_bio_from_per_bio_data(io, v->ti->per_io_data_size);
+
+- par = fec_read_parity(v, rsb, block_offset, &offset, &buf, bio_prio(bio));
++ par = fec_read_parity(v, rsb, block_offset, &offset,
++ par_buf_offset, &buf, bio_prio(bio));
+ if (IS_ERR(par))
+ return PTR_ERR(par);
+
+@@ -142,7 +147,8 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io,
+ */
+ fec_for_each_buffer_rs_block(fio, n, i) {
+ block = fec_buffer_rs_block(v, fio, n, i);
+- res = fec_decode_rs8(v, fio, block, &par[offset], neras);
++ memcpy(&par_buf[par_buf_offset], &par[offset], v->fec->roots - par_buf_offset);
++ res = fec_decode_rs8(v, fio, block, par_buf, neras);
+ if (res < 0) {
+ r = res;
+ goto error;
+@@ -155,12 +161,21 @@ static int fec_decode_bufs(struct dm_verity *v, struct dm_verity_io *io,
+ if (block_offset >= 1 << v->data_dev_block_bits)
+ goto done;
+
+- /* read the next block when we run out of parity bytes */
+- offset += v->fec->roots;
++ /* Read the next block when we run out of parity bytes */
++ offset += (v->fec->roots - par_buf_offset);
++ /* Check if parity bytes are split between blocks */
++ if (offset < v->fec->io_size && (offset + v->fec->roots) > v->fec->io_size) {
++ par_buf_offset = v->fec->io_size - offset;
++ memcpy(par_buf, &par[offset], par_buf_offset);
++ offset += par_buf_offset;
++	} else {
++		par_buf_offset = 0;
++	}
++
+ if (offset >= v->fec->io_size) {
+ dm_bufio_release(buf);
+
+- par = fec_read_parity(v, rsb, block_offset, &offset, &buf, bio_prio(bio));
++ par = fec_read_parity(v, rsb, block_offset, &offset,
++ par_buf_offset, &buf, bio_prio(bio));
+ if (IS_ERR(par))
+ return PTR_ERR(par);
+ }
+@@ -724,10 +739,7 @@ int verity_fec_ctr(struct dm_verity *v)
+ return -E2BIG;
+ }
+
+- if ((f->roots << SECTOR_SHIFT) & ((1 << v->data_dev_block_bits) - 1))
+- f->io_size = 1 << v->data_dev_block_bits;
+- else
+- f->io_size = v->fec->roots << SECTOR_SHIFT;
++ f->io_size = 1 << v->data_dev_block_bits;
+
+ f->bufio = dm_bufio_client_create(f->dev->bdev,
+ f->io_size,
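+
Since io_size is now always a full data block, a group of v->fec->roots parity bytes can straddle two buffer reads; that is what par_buf and par_buf_offset carry across iterations above. The same carry-buffer logic, modeled as standalone C — read_block() is a hypothetical block reader, and ROOTS/BLOCK are example sizes:

#include <stddef.h>
#include <string.h>

#define ROOTS 8
#define BLOCK 4096

extern const unsigned char *read_block(size_t blkno);	/* hypothetical */

static void next_record(unsigned char rec[ROOTS], size_t *blkno,
			size_t *offset)
{
	const unsigned char *blk = read_block(*blkno);
	size_t avail = BLOCK - *offset;

	if (avail >= ROOTS) {
		memcpy(rec, blk + *offset, ROOTS);
		*offset += ROOTS;
	} else {
		/* Record straddles the boundary: save the tail, then
		 * complete it from the start of the next block.
		 */
		memcpy(rec, blk + *offset, avail);
		blk = read_block(++*blkno);
		memcpy(rec + avail, blk, ROOTS - avail);
		*offset = ROOTS - avail;
	}
	if (*offset == BLOCK) {
		(*blkno)++;
		*offset = 0;
	}
}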
+diff --git a/drivers/md/persistent-data/dm-array.c b/drivers/md/persistent-data/dm-array.c
+index 157c9bd2fed741..8f8792e5580639 100644
+--- a/drivers/md/persistent-data/dm-array.c
++++ b/drivers/md/persistent-data/dm-array.c
+@@ -917,23 +917,27 @@ static int load_ablock(struct dm_array_cursor *c)
+ if (c->block)
+ unlock_ablock(c->info, c->block);
+
+- c->block = NULL;
+- c->ab = NULL;
+ c->index = 0;
+
+ r = dm_btree_cursor_get_value(&c->cursor, &key, &value_le);
+ if (r) {
+ DMERR("dm_btree_cursor_get_value failed");
+- dm_btree_cursor_end(&c->cursor);
++ goto out;
+
+ } else {
+ r = get_ablock(c->info, le64_to_cpu(value_le), &c->block, &c->ab);
+ if (r) {
+ DMERR("get_ablock failed");
+- dm_btree_cursor_end(&c->cursor);
++ goto out;
+ }
+ }
+
++ return 0;
++
++out:
++ dm_btree_cursor_end(&c->cursor);
++ c->block = NULL;
++ c->ab = NULL;
+ return r;
+ }
+
+@@ -956,10 +960,10 @@ EXPORT_SYMBOL_GPL(dm_array_cursor_begin);
+
+ void dm_array_cursor_end(struct dm_array_cursor *c)
+ {
+- if (c->block) {
++ if (c->block)
+ unlock_ablock(c->info, c->block);
+- dm_btree_cursor_end(&c->cursor);
+- }
++
++ dm_btree_cursor_end(&c->cursor);
+ }
+ EXPORT_SYMBOL_GPL(dm_array_cursor_end);
+
+@@ -999,6 +1003,7 @@ int dm_array_cursor_skip(struct dm_array_cursor *c, uint32_t count)
+ }
+
+ count -= remaining;
++ c->index += (remaining - 1);
+ r = dm_array_cursor_next(c);
+
+ } while (!r);
+diff --git a/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c b/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c
+index e616e3ec2b42fd..3c1359d8d4e692 100644
+--- a/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c
++++ b/drivers/misc/mchp_pci1xxxx/mchp_pci1xxxx_gpio.c
+@@ -148,7 +148,7 @@ static int pci1xxxx_gpio_set_config(struct gpio_chip *gpio, unsigned int offset,
+ pci1xxx_assign_bit(priv->reg_base, OPENDRAIN_OFFSET(offset), (offset % 32), true);
+ break;
+ default:
+- ret = -EOPNOTSUPP;
++ ret = -ENOTSUPP;
+ break;
+ }
+ spin_unlock_irqrestore(&priv->lock, flags);
+@@ -277,7 +277,7 @@ static irqreturn_t pci1xxxx_gpio_irq_handler(int irq, void *dev_id)
+ writel(BIT(bit), priv->reg_base + INTR_STATUS_OFFSET(gpiobank));
+ spin_unlock_irqrestore(&priv->lock, flags);
+ irq = irq_find_mapping(gc->irq.domain, (bit + (gpiobank * 32)));
+- generic_handle_irq(irq);
++ handle_nested_irq(irq);
+ }
+ }
+ spin_lock_irqsave(&priv->lock, flags);
+diff --git a/drivers/net/ethernet/amd/pds_core/devlink.c b/drivers/net/ethernet/amd/pds_core/devlink.c
+index 2681889162a25e..44971e71991ff5 100644
+--- a/drivers/net/ethernet/amd/pds_core/devlink.c
++++ b/drivers/net/ethernet/amd/pds_core/devlink.c
+@@ -118,7 +118,7 @@ int pdsc_dl_info_get(struct devlink *dl, struct devlink_info_req *req,
+ if (err && err != -EIO)
+ return err;
+
+- listlen = fw_list.num_fw_slots;
++ listlen = min(fw_list.num_fw_slots, ARRAY_SIZE(fw_list.fw_names));
+ for (i = 0; i < listlen; i++) {
+ if (i < ARRAY_SIZE(fw_slotnames))
+ strscpy(buf, fw_slotnames[i], sizeof(buf));
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index dafc5a4039cd2c..c255445e97f3c5 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -2826,6 +2826,13 @@ static int bnxt_hwrm_handler(struct bnxt *bp, struct tx_cmp *txcmp)
+ return 0;
+ }
+
++static bool bnxt_vnic_is_active(struct bnxt *bp)
++{
++ struct bnxt_vnic_info *vnic = &bp->vnic_info[0];
++
++ return vnic->fw_vnic_id != INVALID_HW_RING_ID && vnic->mru > 0;
++}
++
+ static irqreturn_t bnxt_msix(int irq, void *dev_instance)
+ {
+ struct bnxt_napi *bnapi = dev_instance;
+@@ -3093,7 +3100,7 @@ static int bnxt_poll(struct napi_struct *napi, int budget)
+ break;
+ }
+ }
+- if (bp->flags & BNXT_FLAG_DIM) {
++ if ((bp->flags & BNXT_FLAG_DIM) && bnxt_vnic_is_active(bp)) {
+ struct dim_sample dim_sample = {};
+
+ dim_update_sample(cpr->event_ctr,
+@@ -3224,7 +3231,7 @@ static int bnxt_poll_p5(struct napi_struct *napi, int budget)
+ poll_done:
+ cpr_rx = &cpr->cp_ring_arr[0];
+ if (cpr_rx->cp_ring_type == BNXT_NQ_HDL_TYPE_RX &&
+- (bp->flags & BNXT_FLAG_DIM)) {
++ (bp->flags & BNXT_FLAG_DIM) && bnxt_vnic_is_active(bp)) {
+ struct dim_sample dim_sample = {};
+
+ dim_update_sample(cpr->event_ctr,
+@@ -7116,6 +7123,26 @@ static int bnxt_hwrm_ring_alloc(struct bnxt *bp)
+ return rc;
+ }
+
++static void bnxt_cancel_dim(struct bnxt *bp)
++{
++ int i;
++
++ /* DIM work is initialized in bnxt_enable_napi(). Proceed only
++ * if NAPI is enabled.
++ */
++ if (!bp->bnapi || test_bit(BNXT_STATE_NAPI_DISABLED, &bp->state))
++ return;
++
++ /* Make sure NAPI sees that the VNIC is disabled */
++ synchronize_net();
++ for (i = 0; i < bp->rx_nr_rings; i++) {
++ struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
++ struct bnxt_napi *bnapi = rxr->bnapi;
++
++ cancel_work_sync(&bnapi->cp_ring.dim.work);
++ }
++}
++
+ static int hwrm_ring_free_send_msg(struct bnxt *bp,
+ struct bnxt_ring_struct *ring,
+ u32 ring_type, int cmpl_ring_id)
+@@ -7216,6 +7243,7 @@ static void bnxt_hwrm_ring_free(struct bnxt *bp, bool close_path)
+ }
+ }
+
++ bnxt_cancel_dim(bp);
+ for (i = 0; i < bp->rx_nr_rings; i++) {
+ bnxt_hwrm_rx_ring_free(bp, &bp->rx_ring[i], close_path);
+ bnxt_hwrm_rx_agg_ring_free(bp, &bp->rx_ring[i], close_path);
+@@ -11012,8 +11040,6 @@ static void bnxt_disable_napi(struct bnxt *bp)
+ if (bnapi->in_reset)
+ cpr->sw_stats->rx.rx_resets++;
+ napi_disable(&bnapi->napi);
+- if (bnapi->rx_ring)
+- cancel_work_sync(&cpr->dim.work);
+ }
+ }
+
+@@ -15269,8 +15295,10 @@ static int bnxt_queue_stop(struct net_device *dev, void *qmem, int idx)
+ bnxt_hwrm_vnic_update(bp, vnic,
+ VNIC_UPDATE_REQ_ENABLES_MRU_VALID);
+ }
+-
++ /* Make sure NAPI sees that the VNIC is disabled */
++ synchronize_net();
+ rxr = &bp->rx_ring[idx];
++ cancel_work_sync(&rxr->bnapi->cp_ring.dim.work);
+ bnxt_hwrm_rx_ring_free(bp, rxr, false);
+ bnxt_hwrm_rx_agg_ring_free(bp, rxr, false);
+ rxr->rx_next_cons = 0;
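+
bnxt_cancel_dim() and the queue-stop path above both enforce the same teardown order: make the VNIC look inactive, wait for every in-flight NAPI poll to observe that, and only then cancel the DIM work so nothing can re-queue it. In outline (simplified field access, a sketch rather than the driver code):

/* 1. bnxt_vnic_is_active() starts returning false for NAPI pollers. */
vnic->mru = 0;
/* 2. Wait out any poller that read the old state. */
synchronize_net();
/* 3. No new DIM samples can be queued now, so this cannot race. */
cancel_work_sync(&bnapi->cp_ring.dim.work);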
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+index fdd6356f21efb3..546d9a3d7efea7 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ulp.c
+@@ -208,7 +208,7 @@ int bnxt_send_msg(struct bnxt_en_dev *edev,
+
+ rc = hwrm_req_replace(bp, req, fw_msg->msg, fw_msg->msg_len);
+ if (rc)
+- return rc;
++ goto drop_req;
+
+ hwrm_req_timeout(bp, req, fw_msg->timeout);
+ resp = hwrm_req_hold(bp, req);
+@@ -220,6 +220,7 @@ int bnxt_send_msg(struct bnxt_en_dev *edev,
+
+ memcpy(fw_msg->resp, resp, resp_len);
+ }
++drop_req:
+ hwrm_req_drop(bp, req);
+ return rc;
+ }
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index fb3933fbb8425e..757c6484f53515 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -1799,7 +1799,10 @@ void cxgb4_remove_tid(struct tid_info *t, unsigned int chan, unsigned int tid,
+ struct adapter *adap = container_of(t, struct adapter, tids);
+ struct sk_buff *skb;
+
+- WARN_ON(tid_out_of_range(&adap->tids, tid));
++ if (tid_out_of_range(&adap->tids, tid)) {
++		dev_err(adap->pdev_dev, "tid %u out of range\n", tid);
++ return;
++ }
+
+ if (t->tid_tab[tid - adap->tids.tid_base]) {
+ t->tid_tab[tid - adap->tids.tid_base] = NULL;
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index d404819ebc9b3f..f985a3cf2b11fa 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -2224,14 +2224,18 @@ static void gve_service_task(struct work_struct *work)
+
+ static void gve_set_netdev_xdp_features(struct gve_priv *priv)
+ {
++ xdp_features_t xdp_features;
++
+ if (priv->queue_format == GVE_GQI_QPL_FORMAT) {
+- priv->dev->xdp_features = NETDEV_XDP_ACT_BASIC;
+- priv->dev->xdp_features |= NETDEV_XDP_ACT_REDIRECT;
+- priv->dev->xdp_features |= NETDEV_XDP_ACT_NDO_XMIT;
+- priv->dev->xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
++ xdp_features = NETDEV_XDP_ACT_BASIC;
++ xdp_features |= NETDEV_XDP_ACT_REDIRECT;
++ xdp_features |= NETDEV_XDP_ACT_NDO_XMIT;
++ xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
+ } else {
+- priv->dev->xdp_features = 0;
++ xdp_features = 0;
+ }
++
++ xdp_set_features_flag(priv->dev, xdp_features);
+ }
+
+ static int gve_init_priv(struct gve_priv *priv, bool skip_describe_device)
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index 27dbe367f3d355..d873523e84f271 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -916,9 +916,6 @@ struct hnae3_handle {
+
+ u8 netdev_flags;
+ struct dentry *hnae3_dbgfs;
+- /* protects concurrent contention between debugfs commands */
+- struct mutex dbgfs_lock;
+- char **dbgfs_buf;
+
+ /* Network interface message level enabled bits */
+ u32 msg_enable;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+index 807eb3bbb11c04..9bbece25552b17 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c
+@@ -1260,69 +1260,55 @@ static int hns3_dbg_read_cmd(struct hns3_dbg_data *dbg_data,
+ static ssize_t hns3_dbg_read(struct file *filp, char __user *buffer,
+ size_t count, loff_t *ppos)
+ {
+- struct hns3_dbg_data *dbg_data = filp->private_data;
++ char *buf = filp->private_data;
++
++ return simple_read_from_buffer(buffer, count, ppos, buf, strlen(buf));
++}
++
++static int hns3_dbg_open(struct inode *inode, struct file *filp)
++{
++ struct hns3_dbg_data *dbg_data = inode->i_private;
+ struct hnae3_handle *handle = dbg_data->handle;
+ struct hns3_nic_priv *priv = handle->priv;
+- ssize_t size = 0;
+- char **save_buf;
+- char *read_buf;
+ u32 index;
++ char *buf;
+ int ret;
+
++ if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
++ test_bit(HNS3_NIC_STATE_RESETTING, &priv->state))
++ return -EBUSY;
++
+ ret = hns3_dbg_get_cmd_index(dbg_data, &index);
+ if (ret)
+ return ret;
+
+- mutex_lock(&handle->dbgfs_lock);
+- save_buf = &handle->dbgfs_buf[index];
+-
+- if (!test_bit(HNS3_NIC_STATE_INITED, &priv->state) ||
+- test_bit(HNS3_NIC_STATE_RESETTING, &priv->state)) {
+- ret = -EBUSY;
+- goto out;
+- }
+-
+- if (*save_buf) {
+- read_buf = *save_buf;
+- } else {
+- read_buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL);
+- if (!read_buf) {
+- ret = -ENOMEM;
+- goto out;
+- }
+-
+- /* save the buffer addr until the last read operation */
+- *save_buf = read_buf;
+-
+- /* get data ready for the first time to read */
+- ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd,
+- read_buf, hns3_dbg_cmd[index].buf_len);
+- if (ret)
+- goto out;
+- }
++ buf = kvzalloc(hns3_dbg_cmd[index].buf_len, GFP_KERNEL);
++ if (!buf)
++ return -ENOMEM;
+
+- size = simple_read_from_buffer(buffer, count, ppos, read_buf,
+- strlen(read_buf));
+- if (size > 0) {
+- mutex_unlock(&handle->dbgfs_lock);
+- return size;
++ ret = hns3_dbg_read_cmd(dbg_data, hns3_dbg_cmd[index].cmd,
++ buf, hns3_dbg_cmd[index].buf_len);
++ if (ret) {
++ kvfree(buf);
++ return ret;
+ }
+
+-out:
+- /* free the buffer for the last read operation */
+- if (*save_buf) {
+- kvfree(*save_buf);
+- *save_buf = NULL;
+- }
++ filp->private_data = buf;
++ return 0;
++}
+
+- mutex_unlock(&handle->dbgfs_lock);
+- return ret;
++static int hns3_dbg_release(struct inode *inode, struct file *filp)
++{
++ kvfree(filp->private_data);
++ filp->private_data = NULL;
++ return 0;
+ }
+
+ static const struct file_operations hns3_dbg_fops = {
+ .owner = THIS_MODULE,
+- .open = simple_open,
++ .open = hns3_dbg_open,
+ .read = hns3_dbg_read,
++ .release = hns3_dbg_release,
+ };
+
+ static int hns3_dbg_bd_file_init(struct hnae3_handle *handle, u32 cmd)
+@@ -1379,13 +1365,6 @@ int hns3_dbg_init(struct hnae3_handle *handle)
+ int ret;
+ u32 i;
+
+- handle->dbgfs_buf = devm_kcalloc(&handle->pdev->dev,
+- ARRAY_SIZE(hns3_dbg_cmd),
+- sizeof(*handle->dbgfs_buf),
+- GFP_KERNEL);
+- if (!handle->dbgfs_buf)
+- return -ENOMEM;
+-
+ hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry =
+ debugfs_create_dir(name, hns3_dbgfs_root);
+ handle->hnae3_dbgfs = hns3_dbg_dentry[HNS3_DBG_DENTRY_COMMON].dentry;
+@@ -1395,8 +1374,6 @@ int hns3_dbg_init(struct hnae3_handle *handle)
+ debugfs_create_dir(hns3_dbg_dentry[i].name,
+ handle->hnae3_dbgfs);
+
+- mutex_init(&handle->dbgfs_lock);
+-
+ for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++) {
+ if ((hns3_dbg_cmd[i].cmd == HNAE3_DBG_CMD_TM_NODES &&
+ ae_dev->dev_version <= HNAE3_DEVICE_VERSION_V2) ||
+@@ -1425,24 +1402,13 @@ int hns3_dbg_init(struct hnae3_handle *handle)
+ out:
+ debugfs_remove_recursive(handle->hnae3_dbgfs);
+ handle->hnae3_dbgfs = NULL;
+- mutex_destroy(&handle->dbgfs_lock);
+ return ret;
+ }
+
+ void hns3_dbg_uninit(struct hnae3_handle *handle)
+ {
+- u32 i;
+-
+ debugfs_remove_recursive(handle->hnae3_dbgfs);
+ handle->hnae3_dbgfs = NULL;
+-
+- for (i = 0; i < ARRAY_SIZE(hns3_dbg_cmd); i++)
+- if (handle->dbgfs_buf[i]) {
+- kvfree(handle->dbgfs_buf[i]);
+- handle->dbgfs_buf[i] = NULL;
+- }
+-
+- mutex_destroy(&handle->dbgfs_lock);
+ }
+
+ void hns3_dbg_register_debugfs(const char *debugfs_dir_name)
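+
The rewrite drops the shared dbgfs_buf[] cache and its lock in favor of the standard debugfs life cycle: each open() renders the full dump into a private buffer, read() serves it with simple_read_from_buffer(), and release() frees it. A minimal sketch of that shape, assuming hypothetical fill_dump() and DUMP_LEN placeholders:

static int dbg_open(struct inode *inode, struct file *filp)
{
	char *buf = kvzalloc(DUMP_LEN, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;
	fill_dump(inode->i_private, buf, DUMP_LEN);	/* hypothetical */
	filp->private_data = buf;
	return 0;
}

static ssize_t dbg_read(struct file *filp, char __user *ubuf,
			size_t count, loff_t *ppos)
{
	char *buf = filp->private_data;

	return simple_read_from_buffer(ubuf, count, ppos, buf, strlen(buf));
}

static int dbg_release(struct inode *inode, struct file *filp)
{
	kvfree(filp->private_data);
	filp->private_data = NULL;
	return 0;
}

static const struct file_operations dbg_fops = {
	.owner = THIS_MODULE,
	.open = dbg_open,
	.read = dbg_read,
	.release = dbg_release,
};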
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 4cbc4d069a1f36..73825b6bd485d1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -2452,7 +2452,6 @@ static int hns3_nic_set_features(struct net_device *netdev,
+ return ret;
+ }
+
+- netdev->features = features;
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index bd86efd92a5a7d..9a67fe0554a52b 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -6,6 +6,7 @@
+ #include <linux/etherdevice.h>
+ #include <linux/init.h>
+ #include <linux/interrupt.h>
++#include <linux/irq.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/netdevice.h>
+@@ -3584,6 +3585,17 @@ static int hclge_set_vf_link_state(struct hnae3_handle *handle, int vf,
+ return ret;
+ }
+
++static void hclge_set_reset_pending(struct hclge_dev *hdev,
++ enum hnae3_reset_type reset_type)
++{
++ /* When an incorrect reset type is executed, the get_reset_level
++ * function generates the HNAE3_NONE_RESET flag. As a result, this
++	 * type does not need to be set pending.
++ */
++ if (reset_type != HNAE3_NONE_RESET)
++ set_bit(reset_type, &hdev->reset_pending);
++}
++
+ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
+ {
+ u32 cmdq_src_reg, msix_src_reg, hw_err_src_reg;
+@@ -3604,7 +3616,7 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
+ */
+ if (BIT(HCLGE_VECTOR0_IMPRESET_INT_B) & msix_src_reg) {
+ dev_info(&hdev->pdev->dev, "IMP reset interrupt\n");
+- set_bit(HNAE3_IMP_RESET, &hdev->reset_pending);
++ hclge_set_reset_pending(hdev, HNAE3_IMP_RESET);
+ set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state);
+ *clearval = BIT(HCLGE_VECTOR0_IMPRESET_INT_B);
+ hdev->rst_stats.imp_rst_cnt++;
+@@ -3614,7 +3626,7 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval)
+ if (BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B) & msix_src_reg) {
+ dev_info(&hdev->pdev->dev, "global reset interrupt\n");
+ set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state);
+- set_bit(HNAE3_GLOBAL_RESET, &hdev->reset_pending);
++ hclge_set_reset_pending(hdev, HNAE3_GLOBAL_RESET);
+ *clearval = BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B);
+ hdev->rst_stats.global_rst_cnt++;
+ return HCLGE_VECTOR0_EVENT_RST;
+@@ -3769,7 +3781,7 @@ static int hclge_misc_irq_init(struct hclge_dev *hdev)
+ snprintf(hdev->misc_vector.name, HNAE3_INT_NAME_LEN, "%s-misc-%s",
+ HCLGE_NAME, pci_name(hdev->pdev));
+ ret = request_irq(hdev->misc_vector.vector_irq, hclge_misc_irq_handle,
+- 0, hdev->misc_vector.name, hdev);
++ IRQF_NO_AUTOEN, hdev->misc_vector.name, hdev);
+ if (ret) {
+ hclge_free_vector(hdev, 0);
+ dev_err(&hdev->pdev->dev, "request misc irq(%d) fail\n",
+@@ -4062,7 +4074,7 @@ static void hclge_do_reset(struct hclge_dev *hdev)
+ case HNAE3_FUNC_RESET:
+ dev_info(&pdev->dev, "PF reset requested\n");
+ /* schedule again to check later */
+- set_bit(HNAE3_FUNC_RESET, &hdev->reset_pending);
++ hclge_set_reset_pending(hdev, HNAE3_FUNC_RESET);
+ hclge_reset_task_schedule(hdev);
+ break;
+ default:
+@@ -4096,6 +4108,8 @@ static enum hnae3_reset_type hclge_get_reset_level(struct hnae3_ae_dev *ae_dev,
+ clear_bit(HNAE3_FLR_RESET, addr);
+ }
+
++ clear_bit(HNAE3_NONE_RESET, addr);
++
+ if (hdev->reset_type != HNAE3_NONE_RESET &&
+ rst_level < hdev->reset_type)
+ return HNAE3_NONE_RESET;
+@@ -4237,7 +4251,7 @@ static bool hclge_reset_err_handle(struct hclge_dev *hdev)
+ return false;
+ } else if (hdev->rst_stats.reset_fail_cnt < MAX_RESET_FAIL_CNT) {
+ hdev->rst_stats.reset_fail_cnt++;
+- set_bit(hdev->reset_type, &hdev->reset_pending);
++ hclge_set_reset_pending(hdev, hdev->reset_type);
+ dev_info(&hdev->pdev->dev,
+ "re-schedule reset task(%u)\n",
+ hdev->rst_stats.reset_fail_cnt);
+@@ -4480,8 +4494,20 @@ static void hclge_reset_event(struct pci_dev *pdev, struct hnae3_handle *handle)
+ static void hclge_set_def_reset_request(struct hnae3_ae_dev *ae_dev,
+ enum hnae3_reset_type rst_type)
+ {
++#define HCLGE_SUPPORT_RESET_TYPE \
++ (BIT(HNAE3_FLR_RESET) | BIT(HNAE3_FUNC_RESET) | \
++ BIT(HNAE3_GLOBAL_RESET) | BIT(HNAE3_IMP_RESET))
++
+ struct hclge_dev *hdev = ae_dev->priv;
+
++ if (!(BIT(rst_type) & HCLGE_SUPPORT_RESET_TYPE)) {
++ /* To prevent reset triggered by hclge_reset_event */
++ set_bit(HNAE3_NONE_RESET, &hdev->default_reset_request);
++ dev_warn(&hdev->pdev->dev, "unsupported reset type %d\n",
++ rst_type);
++ return;
++ }
++
+ set_bit(rst_type, &hdev->default_reset_request);
+ }
+
+@@ -11891,9 +11917,6 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
+
+ hclge_init_rxd_adv_layout(hdev);
+
+- /* Enable MISC vector(vector0) */
+- hclge_enable_vector(&hdev->misc_vector, true);
+-
+ ret = hclge_init_wol(hdev);
+ if (ret)
+ dev_warn(&pdev->dev,
+@@ -11906,6 +11929,10 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev)
+ hclge_state_init(hdev);
+ hdev->last_reset_time = jiffies;
+
++ /* Enable MISC vector(vector0) */
++ enable_irq(hdev->misc_vector.vector_irq);
++ hclge_enable_vector(&hdev->misc_vector, true);
++
+ dev_info(&hdev->pdev->dev, "%s driver initialization finished.\n",
+ HCLGE_DRIVER_NAME);
+
+@@ -12311,7 +12338,7 @@ static void hclge_uninit_ae_dev(struct hnae3_ae_dev *ae_dev)
+
+ /* Disable MISC vector(vector0) */
+ hclge_enable_vector(&hdev->misc_vector, false);
+- synchronize_irq(hdev->misc_vector.vector_irq);
++ disable_irq(hdev->misc_vector.vector_irq);
+
+ /* Disable all hw interrupts */
+ hclge_config_mac_tnl_int(hdev, false);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+index 5505caea88e981..bab16c2191b2f0 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+@@ -58,6 +58,9 @@ bool hclge_ptp_set_tx_info(struct hnae3_handle *handle, struct sk_buff *skb)
+ struct hclge_dev *hdev = vport->back;
+ struct hclge_ptp *ptp = hdev->ptp;
+
++ if (!ptp)
++ return false;
++
+ if (!test_bit(HCLGE_PTP_FLAG_TX_EN, &ptp->flags) ||
+ test_and_set_bit(HCLGE_STATE_PTP_TX_HANDLING, &hdev->state)) {
+ ptp->tx_skipped++;
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c
+index 43c1c18fa81f8d..8c057192aae6e1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_regs.c
+@@ -510,9 +510,9 @@ static int hclge_get_dfx_reg(struct hclge_dev *hdev, void *data)
+ static int hclge_fetch_pf_reg(struct hclge_dev *hdev, void *data,
+ struct hnae3_knic_private_info *kinfo)
+ {
+-#define HCLGE_RING_REG_OFFSET 0x200
+ #define HCLGE_RING_INT_REG_OFFSET 0x4
+
++ struct hnae3_queue *tqp;
+ int i, j, reg_num;
+ int data_num_sum;
+ u32 *reg = data;
+@@ -533,10 +533,11 @@ static int hclge_fetch_pf_reg(struct hclge_dev *hdev, void *data,
+ reg_num = ARRAY_SIZE(ring_reg_addr_list);
+ for (j = 0; j < kinfo->num_tqps; j++) {
+ reg += hclge_reg_get_tlv(HCLGE_REG_TAG_RING, reg_num, reg);
++ tqp = kinfo->tqp[j];
+ for (i = 0; i < reg_num; i++)
+- *reg++ = hclge_read_dev(&hdev->hw,
+- ring_reg_addr_list[i] +
+- HCLGE_RING_REG_OFFSET * j);
++ *reg++ = readl_relaxed(tqp->io_base -
++ HCLGE_TQP_REG_OFFSET +
++ ring_reg_addr_list[i]);
+ }
+ data_num_sum += (reg_num + HCLGE_REG_TLV_SPACE) * kinfo->num_tqps;
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 094a7c7b55921f..d47bd8d6145f97 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -1395,6 +1395,17 @@ static int hclgevf_notify_roce_client(struct hclgevf_dev *hdev,
+ return ret;
+ }
+
++static void hclgevf_set_reset_pending(struct hclgevf_dev *hdev,
++ enum hnae3_reset_type reset_type)
++{
++ /* When an incorrect reset type is requested, the get_reset_level
++ * function returns the HNAE3_NONE_RESET flag. As a result, this
++ * type does not need to be set as pending.
++ */
++ if (reset_type != HNAE3_NONE_RESET)
++ set_bit(reset_type, &hdev->reset_pending);
++}
++
+ static int hclgevf_reset_wait(struct hclgevf_dev *hdev)
+ {
+ #define HCLGEVF_RESET_WAIT_US 20000
+@@ -1544,7 +1555,7 @@ static void hclgevf_reset_err_handle(struct hclgevf_dev *hdev)
+ hdev->rst_stats.rst_fail_cnt);
+
+ if (hdev->rst_stats.rst_fail_cnt < HCLGEVF_RESET_MAX_FAIL_CNT)
+- set_bit(hdev->reset_type, &hdev->reset_pending);
++ hclgevf_set_reset_pending(hdev, hdev->reset_type);
+
+ if (hclgevf_is_reset_pending(hdev)) {
+ set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
+@@ -1664,6 +1675,8 @@ static enum hnae3_reset_type hclgevf_get_reset_level(unsigned long *addr)
+ clear_bit(HNAE3_FLR_RESET, addr);
+ }
+
++ clear_bit(HNAE3_NONE_RESET, addr);
++
+ return rst_level;
+ }
+
+@@ -1673,14 +1686,15 @@ static void hclgevf_reset_event(struct pci_dev *pdev,
+ struct hnae3_ae_dev *ae_dev = pci_get_drvdata(pdev);
+ struct hclgevf_dev *hdev = ae_dev->priv;
+
+- dev_info(&hdev->pdev->dev, "received reset request from VF enet\n");
+-
+ if (hdev->default_reset_request)
+ hdev->reset_level =
+ hclgevf_get_reset_level(&hdev->default_reset_request);
+ else
+ hdev->reset_level = HNAE3_VF_FUNC_RESET;
+
++ dev_info(&hdev->pdev->dev, "received reset request from VF enet, reset level is %d\n",
++ hdev->reset_level);
++
+ /* reset of this VF requested */
+ set_bit(HCLGEVF_RESET_REQUESTED, &hdev->reset_state);
+ hclgevf_reset_task_schedule(hdev);
+@@ -1691,8 +1705,20 @@ static void hclgevf_reset_event(struct pci_dev *pdev,
+ static void hclgevf_set_def_reset_request(struct hnae3_ae_dev *ae_dev,
+ enum hnae3_reset_type rst_type)
+ {
++#define HCLGEVF_SUPPORT_RESET_TYPE \
++ (BIT(HNAE3_VF_RESET) | BIT(HNAE3_VF_FUNC_RESET) | \
++ BIT(HNAE3_VF_PF_FUNC_RESET) | BIT(HNAE3_VF_FULL_RESET) | \
++ BIT(HNAE3_FLR_RESET) | BIT(HNAE3_VF_EXP_RESET))
++
+ struct hclgevf_dev *hdev = ae_dev->priv;
+
++ if (!(BIT(rst_type) & HCLGEVF_SUPPORT_RESET_TYPE)) {
++ /* To prevent a reset from being triggered by hclge_reset_event */
++ set_bit(HNAE3_NONE_RESET, &hdev->default_reset_request);
++ dev_info(&hdev->pdev->dev, "unsupported reset type %d\n",
++ rst_type);
++ return;
++ }
+ set_bit(rst_type, &hdev->default_reset_request);
+ }
+
+@@ -1849,14 +1875,14 @@ static void hclgevf_reset_service_task(struct hclgevf_dev *hdev)
+ */
+ if (hdev->reset_attempts > HCLGEVF_MAX_RESET_ATTEMPTS_CNT) {
+ /* prepare for full reset of stack + pcie interface */
+- set_bit(HNAE3_VF_FULL_RESET, &hdev->reset_pending);
++ hclgevf_set_reset_pending(hdev, HNAE3_VF_FULL_RESET);
+
+ /* "defer" schedule the reset task again */
+ set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
+ } else {
+ hdev->reset_attempts++;
+
+- set_bit(hdev->reset_level, &hdev->reset_pending);
++ hclgevf_set_reset_pending(hdev, hdev->reset_level);
+ set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
+ }
+ hclgevf_reset_task_schedule(hdev);
+@@ -1979,7 +2005,7 @@ static enum hclgevf_evt_cause hclgevf_check_evt_cause(struct hclgevf_dev *hdev,
+ rst_ing_reg = hclgevf_read_dev(&hdev->hw, HCLGEVF_RST_ING);
+ dev_info(&hdev->pdev->dev,
+ "receive reset interrupt 0x%x!\n", rst_ing_reg);
+- set_bit(HNAE3_VF_RESET, &hdev->reset_pending);
++ hclgevf_set_reset_pending(hdev, HNAE3_VF_RESET);
+ set_bit(HCLGEVF_RESET_PENDING, &hdev->reset_state);
+ set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state);
+ *clearval = ~(1U << HCLGEVF_VECTOR0_RST_INT_B);
+@@ -2289,6 +2315,8 @@ static void hclgevf_state_init(struct hclgevf_dev *hdev)
+ clear_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state);
+
+ INIT_DELAYED_WORK(&hdev->service_task, hclgevf_service_task);
++ /* timer needs to be initialized before misc irq */
++ timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0);
+
+ mutex_init(&hdev->mbx_resp.mbx_mutex);
+ sema_init(&hdev->reset_sem, 1);
+@@ -2988,7 +3016,6 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
+ HCLGEVF_DRIVER_NAME);
+
+ hclgevf_task_schedule(hdev, round_jiffies_relative(HZ));
+- timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0);
+
+ return 0;
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c
+index 6db415d8b9176c..7d9d9dbc75603a 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_regs.c
+@@ -123,10 +123,10 @@ int hclgevf_get_regs_len(struct hnae3_handle *handle)
+ void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version,
+ void *data)
+ {
+-#define HCLGEVF_RING_REG_OFFSET 0x200
+ #define HCLGEVF_RING_INT_REG_OFFSET 0x4
+
+ struct hclgevf_dev *hdev = hclgevf_ae_get_hdev(handle);
++ struct hnae3_queue *tqp;
+ int i, j, reg_um;
+ u32 *reg = data;
+
+@@ -147,10 +147,11 @@ void hclgevf_get_regs(struct hnae3_handle *handle, u32 *version,
+ reg_um = ARRAY_SIZE(ring_reg_addr_list);
+ for (j = 0; j < hdev->num_tqps; j++) {
+ reg += hclgevf_reg_get_tlv(HCLGEVF_REG_TAG_RING, reg_um, reg);
++ tqp = &hdev->htqp[j].q;
+ for (i = 0; i < reg_um; i++)
+- *reg++ = hclgevf_read_dev(&hdev->hw,
+- ring_reg_addr_list[i] +
+- HCLGEVF_RING_REG_OFFSET * j);
++ *reg++ = readl_relaxed(tqp->io_base -
++ HCLGEVF_TQP_REG_OFFSET +
++ ring_reg_addr_list[i]);
+ }
+
+ reg_um = ARRAY_SIZE(tqp_intr_reg_addr_list);
+diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+index 0be1a98d7cc1b5..79a6edd0be0ec4 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+@@ -2238,6 +2238,8 @@ struct ice_aqc_get_pkg_info_resp {
+ struct ice_aqc_get_pkg_info pkg_info[];
+ };
+
++#define ICE_AQC_GET_CGU_MAX_PHASE_ADJ GENMASK(30, 0)
++
+ /* Get CGU abilities command response data structure (indirect 0x0C61) */
+ struct ice_aqc_get_cgu_abilities {
+ u8 num_inputs;
+diff --git a/drivers/net/ethernet/intel/ice/ice_dpll.c b/drivers/net/ethernet/intel/ice/ice_dpll.c
+index d5ad6d84007c21..38e151c7ea2362 100644
+--- a/drivers/net/ethernet/intel/ice/ice_dpll.c
++++ b/drivers/net/ethernet/intel/ice/ice_dpll.c
+@@ -2064,6 +2064,18 @@ static int ice_dpll_init_worker(struct ice_pf *pf)
+ return 0;
+ }
+
++/**
++ * ice_dpll_phase_range_set - initialize phase adjust range helper
++ * @range: pointer to phase adjust range struct to be initialized
++ * @phase_adj: a value to be used as min(-)/max(+) boundary
++ */
++static void ice_dpll_phase_range_set(struct dpll_pin_phase_adjust_range *range,
++ u32 phase_adj)
++{
++ range->min = -phase_adj;
++ range->max = phase_adj;
++}
++
+ /**
+ * ice_dpll_init_info_pins_generic - initializes generic pins info
+ * @pf: board private structure
+@@ -2105,8 +2117,8 @@ static int ice_dpll_init_info_pins_generic(struct ice_pf *pf, bool input)
+ for (i = 0; i < pin_num; i++) {
+ pins[i].idx = i;
+ pins[i].prop.board_label = labels[i];
+- pins[i].prop.phase_range.min = phase_adj_max;
+- pins[i].prop.phase_range.max = -phase_adj_max;
++ ice_dpll_phase_range_set(&pins[i].prop.phase_range,
++ phase_adj_max);
+ pins[i].prop.capabilities = cap;
+ pins[i].pf = pf;
+ ret = ice_dpll_pin_state_update(pf, &pins[i], pin_type, NULL);
+@@ -2152,6 +2164,7 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf,
+ struct ice_hw *hw = &pf->hw;
+ struct ice_dpll_pin *pins;
+ unsigned long caps;
++ u32 phase_adj_max;
+ u8 freq_supp_num;
+ bool input;
+
+@@ -2159,11 +2172,13 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf,
+ case ICE_DPLL_PIN_TYPE_INPUT:
+ pins = pf->dplls.inputs;
+ num_pins = pf->dplls.num_inputs;
++ phase_adj_max = pf->dplls.input_phase_adj_max;
+ input = true;
+ break;
+ case ICE_DPLL_PIN_TYPE_OUTPUT:
+ pins = pf->dplls.outputs;
+ num_pins = pf->dplls.num_outputs;
++ phase_adj_max = pf->dplls.output_phase_adj_max;
+ input = false;
+ break;
+ default:
+@@ -2188,19 +2203,13 @@ ice_dpll_init_info_direct_pins(struct ice_pf *pf,
+ return ret;
+ caps |= (DPLL_PIN_CAPABILITIES_PRIORITY_CAN_CHANGE |
+ DPLL_PIN_CAPABILITIES_STATE_CAN_CHANGE);
+- pins[i].prop.phase_range.min =
+- pf->dplls.input_phase_adj_max;
+- pins[i].prop.phase_range.max =
+- -pf->dplls.input_phase_adj_max;
+ } else {
+- pins[i].prop.phase_range.min =
+- pf->dplls.output_phase_adj_max;
+- pins[i].prop.phase_range.max =
+- -pf->dplls.output_phase_adj_max;
+ ret = ice_cgu_get_output_pin_state_caps(hw, i, &caps);
+ if (ret)
+ return ret;
+ }
++ ice_dpll_phase_range_set(&pins[i].prop.phase_range,
++ phase_adj_max);
+ pins[i].prop.capabilities = caps;
+ ret = ice_dpll_pin_state_update(pf, &pins[i], pin_type, NULL);
+ if (ret)
+@@ -2308,8 +2317,10 @@ static int ice_dpll_init_info(struct ice_pf *pf, bool cgu)
+ dp->dpll_idx = abilities.pps_dpll_idx;
+ d->num_inputs = abilities.num_inputs;
+ d->num_outputs = abilities.num_outputs;
+- d->input_phase_adj_max = le32_to_cpu(abilities.max_in_phase_adj);
+- d->output_phase_adj_max = le32_to_cpu(abilities.max_out_phase_adj);
++ d->input_phase_adj_max = le32_to_cpu(abilities.max_in_phase_adj) &
++ ICE_AQC_GET_CGU_MAX_PHASE_ADJ;
++ d->output_phase_adj_max = le32_to_cpu(abilities.max_out_phase_adj) &
++ ICE_AQC_GET_CGU_MAX_PHASE_ADJ;
+
+ alloc_size = sizeof(*d->inputs) * d->num_inputs;
+ d->inputs = kzalloc(alloc_size, GFP_KERNEL);
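
The ice_dpll hunks above combine two fixes: the firmware-reported
maximum phase adjustment is masked to 31 bits (GENMASK(30, 0)) so a
stray sign bit cannot poison the range, and the min/max pair is derived
in one helper (the open-coded version it replaces had the signs
swapped). A standalone sketch of that combination; the names and test
value are illustrative:

#include <stdint.h>
#include <stdio.h>

#define MAX_PHASE_ADJ_MASK 0x7fffffffu /* same value as GENMASK(30, 0) */

struct phase_range {
	int64_t min;
	int64_t max;
};

static void phase_range_set(struct phase_range *range, uint32_t phase_adj)
{
	phase_adj &= MAX_PHASE_ADJ_MASK; /* drop a firmware-set bit 31 */
	range->min = -(int64_t)phase_adj;
	range->max = (int64_t)phase_adj;
}

int main(void)
{
	struct phase_range range;

	/* a raw value with bit 31 set would otherwise corrupt the range */
	phase_range_set(&range, 0x80000010u);
	printf("min=%lld max=%lld\n", (long long)range.min,
	       (long long)range.max);
	return 0;
}
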
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_consts.h b/drivers/net/ethernet/intel/ice/ice_ptp_consts.h
+index e6980b94a6c1d6..3005dd252a1026 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_consts.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_consts.h
+@@ -761,9 +761,9 @@ const struct ice_vernier_info_e82x e822_vernier[NUM_ICE_PTP_LNK_SPD] = {
+ /* rx_desk_rsgb_par */
+ 644531250, /* 644.53125 MHz Reed Solomon gearbox */
+ /* tx_desk_rsgb_pcs */
+- 644531250, /* 644.53125 MHz Reed Solomon gearbox */
++ 390625000, /* 390.625 MHz Reed Solomon gearbox */
+ /* rx_desk_rsgb_pcs */
+- 644531250, /* 644.53125 MHz Reed Solomon gearbox */
++ 390625000, /* 390.625 MHz Reed Solomon gearbox */
+ /* tx_fixed_delay */
+ 1620,
+ /* pmd_adj_divisor */
+diff --git a/drivers/net/ethernet/intel/igc/igc_base.c b/drivers/net/ethernet/intel/igc/igc_base.c
+index 9fae8bdec2a7c8..1613b562d17c52 100644
+--- a/drivers/net/ethernet/intel/igc/igc_base.c
++++ b/drivers/net/ethernet/intel/igc/igc_base.c
+@@ -68,6 +68,10 @@ static s32 igc_init_nvm_params_base(struct igc_hw *hw)
+ u32 eecd = rd32(IGC_EECD);
+ u16 size;
+
++ /* failed to read reg and got all F's */
++ if (!(~eecd))
++ return -ENXIO;
++
+ size = FIELD_GET(IGC_EECD_SIZE_EX_MASK, eecd);
+
+ /* Added to a constant, "size" becomes the left-shift value
+@@ -221,6 +225,8 @@ static s32 igc_get_invariants_base(struct igc_hw *hw)
+
+ /* NVM initialization */
+ ret_val = igc_init_nvm_params_base(hw);
++ if (ret_val)
++ goto out;
+ switch (hw->mac.type) {
+ case igc_i225:
+ ret_val = igc_init_nvm_params_i225(hw);
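
The igc hunk above leans on a standard PCI idiom: reads from a device
that has died or been removed complete as all ones, so ~val == 0 is a
cheap liveness check before the EECD contents are trusted. Sketched
standalone:

#include <stdint.h>
#include <stdio.h>

static int reg_read_ok(uint32_t val)
{
	return ~val != 0; /* all F's => treat the read as failed */
}

int main(void)
{
	printf("%d\n", reg_read_ok(0x00c01a48)); /* 1: plausible register */
	printf("%d\n", reg_read_ok(0xffffffff)); /* 0: device likely gone */
	return 0;
}
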
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 6bd8a18e3af3a1..e733b81e18a21a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -1013,6 +1013,7 @@ static void cmd_work_handler(struct work_struct *work)
+ complete(&ent->done);
+ }
+ up(&cmd->vars.sem);
++ complete(&ent->slotted);
+ return;
+ }
+ } else {
+diff --git a/drivers/net/ethernet/realtek/rtase/rtase_main.c b/drivers/net/ethernet/realtek/rtase/rtase_main.c
+index 1bfe5ef40c522d..14ffd45e9a25a7 100644
+--- a/drivers/net/ethernet/realtek/rtase/rtase_main.c
++++ b/drivers/net/ethernet/realtek/rtase/rtase_main.c
+@@ -1827,7 +1827,7 @@ static int rtase_alloc_msix(struct pci_dev *pdev, struct rtase_private *tp)
+
+ for (i = 0; i < tp->int_nums; i++) {
+ irq = pci_irq_vector(pdev, i);
+- if (!irq) {
++ if (irq < 0) {
+ pci_disable_msix(pdev);
+ return irq;
+ }
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
+index 6fdd94c8919ec2..2996bcdea9a28e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-tegra.c
+@@ -1,4 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0-only
++#include <linux/iommu.h>
+ #include <linux/platform_device.h>
+ #include <linux/of.h>
+ #include <linux/module.h>
+@@ -19,6 +20,8 @@ struct tegra_mgbe {
+ struct reset_control *rst_mac;
+ struct reset_control *rst_pcs;
+
++ u32 iommu_sid;
++
+ void __iomem *hv;
+ void __iomem *regs;
+ void __iomem *xpcs;
+@@ -50,7 +53,6 @@ struct tegra_mgbe {
+ #define MGBE_WRAP_COMMON_INTR_ENABLE 0x8704
+ #define MAC_SBD_INTR BIT(2)
+ #define MGBE_WRAP_AXI_ASID0_CTRL 0x8400
+-#define MGBE_SID 0x6
+
+ static int __maybe_unused tegra_mgbe_suspend(struct device *dev)
+ {
+@@ -84,7 +86,7 @@ static int __maybe_unused tegra_mgbe_resume(struct device *dev)
+ writel(MAC_SBD_INTR, mgbe->regs + MGBE_WRAP_COMMON_INTR_ENABLE);
+
+ /* Program SID */
+- writel(MGBE_SID, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL);
++ writel(mgbe->iommu_sid, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL);
+
+ value = readl(mgbe->xpcs + XPCS_WRAP_UPHY_STATUS);
+ if ((value & XPCS_WRAP_UPHY_STATUS_TX_P_UP) == 0) {
+@@ -241,6 +243,12 @@ static int tegra_mgbe_probe(struct platform_device *pdev)
+ if (IS_ERR(mgbe->xpcs))
+ return PTR_ERR(mgbe->xpcs);
+
++ /* get controller's stream id from iommu property in device tree */
++ if (!tegra_dev_iommu_get_stream_id(mgbe->dev, &mgbe->iommu_sid)) {
++ dev_err(mgbe->dev, "failed to get iommu stream id\n");
++ return -EINVAL;
++ }
++
+ res.addr = mgbe->regs;
+ res.irq = irq;
+
+@@ -346,7 +354,7 @@ static int tegra_mgbe_probe(struct platform_device *pdev)
+ writel(MAC_SBD_INTR, mgbe->regs + MGBE_WRAP_COMMON_INTR_ENABLE);
+
+ /* Program SID */
+- writel(MGBE_SID, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL);
++ writel(mgbe->iommu_sid, mgbe->hv + MGBE_WRAP_AXI_ASID0_CTRL);
+
+ plat->flags |= STMMAC_FLAG_SERDES_UP_AFTER_PHY_LINKUP;
+
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_hw.c b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+index 1bf9c38e412562..deaf670c160ebf 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_hw.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_hw.c
+@@ -334,27 +334,25 @@ int wx_host_interface_command(struct wx *wx, u32 *buffer,
+ status = read_poll_timeout(rd32, hicr, hicr & WX_MNG_MBOX_CTL_FWRDY, 1000,
+ timeout * 1000, false, wx, WX_MNG_MBOX_CTL);
+
++ buf[0] = rd32(wx, WX_MNG_MBOX);
++ if ((buf[0] & 0xff0000) >> 16 == 0x80) {
++ wx_err(wx, "Unknown FW command: 0x%x\n", buffer[0] & 0xff);
++ status = -EINVAL;
++ goto rel_out;
++ }
++
+ /* Check command completion */
+ if (status) {
+- wx_dbg(wx, "Command has failed with no status valid.\n");
+-
+- buf[0] = rd32(wx, WX_MNG_MBOX);
+- if ((buffer[0] & 0xff) != (~buf[0] >> 24)) {
+- status = -EINVAL;
+- goto rel_out;
+- }
+- if ((buf[0] & 0xff0000) >> 16 == 0x80) {
+- wx_dbg(wx, "It's unknown cmd.\n");
+- status = -EINVAL;
+- goto rel_out;
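
The __exfat_free_cluster() hunk above bounds the chain walk: once more
clusters have been freed than the volume holds, the chain must contain
a loop, so the driver falls back to recounting used clusters from the
allocation bitmap. A generic sketch of the bounded-walk guard, with a
plain array standing in for FAT reads:

#include <stdio.h>

#define CHAIN_END (-1)

/* Returns the chain length, or -1 once the walk exceeds the cluster
 * count, which is only possible when the chain loops. */
static int walk_chain(const int *next, int start, int total_clusters)
{
	int visited = 0;
	int c;

	for (c = start; c != CHAIN_END; c = next[c])
		if (++visited > total_clusters)
			return -1;
	return visited;
}

int main(void)
{
	int ok[]   = { 1, 2, CHAIN_END }; /* 0 -> 1 -> 2 -> end */
	int loop[] = { 1, 2, 0 };         /* 0 -> 1 -> 2 -> 0 -> ... */

	printf("%d\n", walk_chain(ok, 0, 3));   /* 3 */
	printf("%d\n", walk_chain(loop, 0, 3)); /* -1: corrupted chain */
	return 0;
}
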
+- }
+-
++ wx_err(wx, "Command has failed with no status valid.\n");
+ wx_dbg(wx, "write value:\n");
+ for (i = 0; i < dword_len; i++)
+ wx_dbg(wx, "%x ", buffer[i]);
+ wx_dbg(wx, "read value:\n");
+ for (i = 0; i < dword_len; i++)
+ wx_dbg(wx, "%x ", buf[i]);
++ wx_dbg(wx, "\ncheck: %x %x\n", buffer[0] & 0xff, ~buf[0] >> 24);
++
++ goto rel_out;
+ }
+
+ if (!return_data)
+diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
+index e685a7f946f0f8..753215ebc67c70 100644
+--- a/drivers/net/ieee802154/ca8210.c
++++ b/drivers/net/ieee802154/ca8210.c
+@@ -3072,7 +3072,11 @@ static int ca8210_probe(struct spi_device *spi_device)
+ spi_set_drvdata(priv->spi, priv);
+ if (IS_ENABLED(CONFIG_IEEE802154_CA8210_DEBUGFS)) {
+ cascoda_api_upstream = ca8210_test_int_driver_write;
+- ca8210_test_interface_init(priv);
++ ret = ca8210_test_interface_init(priv);
++ if (ret) {
++ dev_crit(&spi_device->dev, "ca8210_test_interface_init failed\n");
++ goto error;
++ }
+ } else {
+ cascoda_api_upstream = NULL;
+ }
+diff --git a/drivers/net/mctp/mctp-i3c.c b/drivers/net/mctp/mctp-i3c.c
+index 1bc87a0626860f..ee9d562f0817cf 100644
+--- a/drivers/net/mctp/mctp-i3c.c
++++ b/drivers/net/mctp/mctp-i3c.c
+@@ -125,6 +125,8 @@ static int mctp_i3c_read(struct mctp_i3c_device *mi)
+
+ xfer.data.in = skb_put(skb, mi->mrl);
+
++ /* Make sure netif_rx() is called in the same order as the i3c reads. */
++ mutex_lock(&mi->lock);
+ rc = i3c_device_do_priv_xfers(mi->i3c, &xfer, 1);
+ if (rc < 0)
+ goto err;
+@@ -166,8 +168,10 @@ static int mctp_i3c_read(struct mctp_i3c_device *mi)
+ stats->rx_dropped++;
+ }
+
++ mutex_unlock(&mi->lock);
+ return 0;
+ err:
++ mutex_unlock(&mi->lock);
+ kfree_skb(skb);
+ return rc;
+ }
+diff --git a/drivers/perf/riscv_pmu_sbi.c b/drivers/perf/riscv_pmu_sbi.c
+index 1aa303f76cc7af..da3651d329069c 100644
+--- a/drivers/perf/riscv_pmu_sbi.c
++++ b/drivers/perf/riscv_pmu_sbi.c
+@@ -507,8 +507,7 @@ static int pmu_sbi_event_map(struct perf_event *event, u64 *econfig)
+ {
+ u32 type = event->attr.type;
+ u64 config = event->attr.config;
+- u64 raw_config_val;
+- int ret;
++ int ret = -ENOENT;
+
+ /*
+ * Ensure we are finished checking standard hardware events for
+@@ -528,21 +527,20 @@ static int pmu_sbi_event_map(struct perf_event *event, u64 *econfig)
+ case PERF_TYPE_RAW:
+ /*
+ * As per SBI specification, the upper 16 bits must be unused
+- * for a raw event.
++ * for a hardware raw event.
+ * Bits 63:62 are used to distinguish between raw events
+ * 00 - Hardware raw event
+ * 10 - SBI firmware events
+ * 11 - Risc-V platform specific firmware event
+ */
+- raw_config_val = config & RISCV_PMU_RAW_EVENT_MASK;
++
+ switch (config >> 62) {
+ case 0:
+ ret = RISCV_PMU_RAW_EVENT_IDX;
+- *econfig = raw_config_val;
++ *econfig = config & RISCV_PMU_RAW_EVENT_MASK;
+ break;
+ case 2:
+- ret = (raw_config_val & 0xFFFF) |
+- (SBI_PMU_EVENT_TYPE_FW << 16);
++ ret = (config & 0xFFFF) | (SBI_PMU_EVENT_TYPE_FW << 16);
+ break;
+ case 3:
+ /*
+@@ -551,12 +549,13 @@ static int pmu_sbi_event_map(struct perf_event *event, u64 *econfig)
+ * Event data - raw event encoding
+ */
+ ret = SBI_PMU_EVENT_TYPE_FW << 16 | RISCV_PLAT_FW_EVENT;
+- *econfig = raw_config_val;
++ *econfig = config & RISCV_PMU_PLAT_FW_EVENT_MASK;
++ break;
++ default:
+ break;
+ }
+ break;
+ default:
+- ret = -ENOENT;
+ break;
+ }
+
+diff --git a/drivers/platform/x86/amd/pmc/pmc.c b/drivers/platform/x86/amd/pmc/pmc.c
+index 5669f94c3d06bf..4d3acfe849bf4e 100644
+--- a/drivers/platform/x86/amd/pmc/pmc.c
++++ b/drivers/platform/x86/amd/pmc/pmc.c
+@@ -947,6 +947,10 @@ static int amd_pmc_suspend_handler(struct device *dev)
+ {
+ struct amd_pmc_dev *pdev = dev_get_drvdata(dev);
+
++ /*
++ * Must be called only from the same set of dev_pm_ops handlers
++ * as i8042_pm_suspend() is called: currently just from .suspend.
++ */
+ if (pdev->disable_8042_wakeup && !disable_workarounds) {
+ int rc = amd_pmc_wa_irq1(pdev);
+
+@@ -959,7 +963,9 @@ static int amd_pmc_suspend_handler(struct device *dev)
+ return 0;
+ }
+
+-static DEFINE_SIMPLE_DEV_PM_OPS(amd_pmc_pm, amd_pmc_suspend_handler, NULL);
++static const struct dev_pm_ops amd_pmc_pm = {
++ .suspend = amd_pmc_suspend_handler,
++};
+
+ static const struct pci_device_id pmc_pci_ids[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_AMD, AMD_CPU_ID_PS) },
+diff --git a/drivers/platform/x86/intel/pmc/core_ssram.c b/drivers/platform/x86/intel/pmc/core_ssram.c
+index 8504154b649f47..927f58dc73e324 100644
+--- a/drivers/platform/x86/intel/pmc/core_ssram.c
++++ b/drivers/platform/x86/intel/pmc/core_ssram.c
+@@ -269,8 +269,12 @@ pmc_core_ssram_get_pmc(struct pmc_dev *pmcdev, int pmc_idx, u32 offset)
+ /*
+ * The secondary PMC BARS (which are behind hidden PCI devices)
+ * are read from fixed offsets in MMIO of the primary PMC BAR.
++ * If a device is not present, the value will be 0.
+ */
+ ssram_base = get_base(tmp_ssram, offset);
++ if (!ssram_base)
++ return 0;
++
+ ssram = ioremap(ssram_base, SSRAM_HDR_SIZE);
+ if (!ssram)
+ return -ENOMEM;
+diff --git a/drivers/staging/iio/frequency/ad9832.c b/drivers/staging/iio/frequency/ad9832.c
+index 492612e8f8bad5..140ee4f9c137f5 100644
+--- a/drivers/staging/iio/frequency/ad9832.c
++++ b/drivers/staging/iio/frequency/ad9832.c
+@@ -158,7 +158,7 @@ static int ad9832_write_frequency(struct ad9832_state *st,
+ static int ad9832_write_phase(struct ad9832_state *st,
+ unsigned long addr, unsigned long phase)
+ {
+- if (phase > BIT(AD9832_PHASE_BITS))
++ if (phase >= BIT(AD9832_PHASE_BITS))
+ return -EINVAL;
+
+ st->phase_data[0] = cpu_to_be16((AD9832_CMD_PHA8BITSW << CMD_SHIFT) |
+diff --git a/drivers/staging/iio/frequency/ad9834.c b/drivers/staging/iio/frequency/ad9834.c
+index 47e7d7e6d92089..6e99e008c5f432 100644
+--- a/drivers/staging/iio/frequency/ad9834.c
++++ b/drivers/staging/iio/frequency/ad9834.c
+@@ -131,7 +131,7 @@ static int ad9834_write_frequency(struct ad9834_state *st,
+ static int ad9834_write_phase(struct ad9834_state *st,
+ unsigned long addr, unsigned long phase)
+ {
+- if (phase > BIT(AD9834_PHASE_BITS))
++ if (phase >= BIT(AD9834_PHASE_BITS))
+ return -EINVAL;
+ st->data = cpu_to_be16(addr | phase);
+
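
The ad9832/ad9834 hunks above fix an off-by-one: an N-bit phase
register holds values 0 through (1 << N) - 1, so the boundary value
(1 << N) itself must be rejected, which the old '>' comparison failed
to do. Assuming the 12-bit phase width the drivers' PHASE_BITS
constants encode:

#include <stdio.h>

#define PHASE_BITS 12 /* 4096 phase steps: valid values are 0..4095 */

static int phase_valid(unsigned long phase)
{
	return phase < (1UL << PHASE_BITS);
}

int main(void)
{
	printf("%d\n", phase_valid(4095)); /* 1 */
	printf("%d\n", phase_valid(4096)); /* 0: '>' used to accept this */
	return 0;
}
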
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index 07e09897165f34..5d3d8ce672cd51 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -176,6 +176,7 @@ static struct device_node *of_thermal_zone_find(struct device_node *sensor, int
+ goto out;
+ }
+
++ of_node_put(sensor_specs.np);
+ if ((sensor == sensor_specs.np) && id == (sensor_specs.args_count ?
+ sensor_specs.args[0] : 0)) {
+ pr_debug("sensor %pOFn id=%d belongs to %pOFn\n", sensor, id, child);
+diff --git a/drivers/tty/serial/8250/8250_core.c b/drivers/tty/serial/8250/8250_core.c
+index 5f9f06911795cc..68baf75bdadc42 100644
+--- a/drivers/tty/serial/8250/8250_core.c
++++ b/drivers/tty/serial/8250/8250_core.c
+@@ -812,6 +812,9 @@ int serial8250_register_8250_port(const struct uart_8250_port *up)
+ uart->dl_write = up->dl_write;
+
+ if (uart->port.type != PORT_8250_CIR) {
++ if (uart_console_registered(&uart->port))
++ pm_runtime_get_sync(uart->port.dev);
++
+ if (serial8250_isa_config != NULL)
+ serial8250_isa_config(0, &uart->port,
+ &uart->capabilities);
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index e1e7bc04c57920..f5199fdecff278 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -1051,14 +1051,14 @@ static void stm32_usart_break_ctl(struct uart_port *port, int break_state)
+ const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs;
+ unsigned long flags;
+
+- spin_lock_irqsave(&port->lock, flags);
++ uart_port_lock_irqsave(port, &flags);
+
+ if (break_state)
+ stm32_usart_set_bits(port, ofs->rqr, USART_RQR_SBKRQ);
+ else
+ stm32_usart_clr_bits(port, ofs->rqr, USART_RQR_SBKRQ);
+
+- spin_unlock_irqrestore(&port->lock, flags);
++ uart_port_unlock_irqrestore(port, flags);
+ }
+
+ static int stm32_usart_startup(struct uart_port *port)
+diff --git a/drivers/ufs/core/ufshcd-priv.h b/drivers/ufs/core/ufshcd-priv.h
+index 9ffd94ddf8c7ce..786f20ef22386b 100644
+--- a/drivers/ufs/core/ufshcd-priv.h
++++ b/drivers/ufs/core/ufshcd-priv.h
+@@ -237,12 +237,6 @@ static inline void ufshcd_vops_config_scaling_param(struct ufs_hba *hba,
+ hba->vops->config_scaling_param(hba, p, data);
+ }
+
+-static inline void ufshcd_vops_reinit_notify(struct ufs_hba *hba)
+-{
+- if (hba->vops && hba->vops->reinit_notify)
+- hba->vops->reinit_notify(hba);
+-}
+-
+ static inline int ufshcd_vops_mcq_config_resource(struct ufs_hba *hba)
+ {
+ if (hba->vops && hba->vops->mcq_config_resource)
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index bc13133efaa508..05b936ad353be7 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -8881,7 +8881,6 @@ static int ufshcd_probe_hba(struct ufs_hba *hba, bool init_dev_params)
+ ufshcd_device_reset(hba);
+ ufs_put_device_desc(hba);
+ ufshcd_hba_stop(hba);
+- ufshcd_vops_reinit_notify(hba);
+ ret = ufshcd_hba_enable(hba);
+ if (ret) {
+ dev_err(hba->dev, "Host controller enable failed\n");
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index 91127fb171864f..989692fb91083f 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -368,6 +368,11 @@ static int ufs_qcom_power_up_sequence(struct ufs_hba *hba)
+ if (ret)
+ return ret;
+
++ if (phy->power_count) {
++ phy_power_off(phy);
++ phy_exit(phy);
++ }
++
+ /* phy initialization - calibrate the phy */
+ ret = phy_init(phy);
+ if (ret) {
+@@ -1562,13 +1567,6 @@ static void ufs_qcom_config_scaling_param(struct ufs_hba *hba,
+ }
+ #endif
+
+-static void ufs_qcom_reinit_notify(struct ufs_hba *hba)
+-{
+- struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+-
+- phy_power_off(host->generic_phy);
+-}
+-
+ /* Resources */
+ static const struct ufshcd_res_info ufs_res_info[RES_MAX] = {
+ {.name = "ufs_mem",},
+@@ -1807,7 +1805,6 @@ static const struct ufs_hba_variant_ops ufs_hba_qcom_vops = {
+ .device_reset = ufs_qcom_device_reset,
+ .config_scaling_param = ufs_qcom_config_scaling_param,
+ .program_key = ufs_qcom_ice_program_key,
+- .reinit_notify = ufs_qcom_reinit_notify,
+ .mcq_config_resource = ufs_qcom_mcq_config_resource,
+ .get_hba_mac = ufs_qcom_get_hba_mac,
+ .op_runtime_config = ufs_qcom_op_runtime_config,
+diff --git a/drivers/usb/chipidea/ci_hdrc_imx.c b/drivers/usb/chipidea/ci_hdrc_imx.c
+index 17b3ac2ac8a1e8..46d1a4428b9a98 100644
+--- a/drivers/usb/chipidea/ci_hdrc_imx.c
++++ b/drivers/usb/chipidea/ci_hdrc_imx.c
+@@ -370,25 +370,29 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ data->pinctrl = devm_pinctrl_get(dev);
+ if (PTR_ERR(data->pinctrl) == -ENODEV)
+ data->pinctrl = NULL;
+- else if (IS_ERR(data->pinctrl))
+- return dev_err_probe(dev, PTR_ERR(data->pinctrl),
++ else if (IS_ERR(data->pinctrl)) {
++ ret = dev_err_probe(dev, PTR_ERR(data->pinctrl),
+ "pinctrl get failed\n");
++ goto err_put;
++ }
+
+ data->hsic_pad_regulator =
+ devm_regulator_get_optional(dev, "hsic");
+ if (PTR_ERR(data->hsic_pad_regulator) == -ENODEV) {
+ /* no pad regulator is needed */
+ data->hsic_pad_regulator = NULL;
+- } else if (IS_ERR(data->hsic_pad_regulator))
+- return dev_err_probe(dev, PTR_ERR(data->hsic_pad_regulator),
++ } else if (IS_ERR(data->hsic_pad_regulator)) {
++ ret = dev_err_probe(dev, PTR_ERR(data->hsic_pad_regulator),
+ "Get HSIC pad regulator error\n");
++ goto err_put;
++ }
+
+ if (data->hsic_pad_regulator) {
+ ret = regulator_enable(data->hsic_pad_regulator);
+ if (ret) {
+ dev_err(dev,
+ "Failed to enable HSIC pad regulator\n");
+- return ret;
++ goto err_put;
+ }
+ }
+ }
+@@ -402,13 +406,14 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ dev_err(dev,
+ "pinctrl_hsic_idle lookup failed, err=%ld\n",
+ PTR_ERR(pinctrl_hsic_idle));
+- return PTR_ERR(pinctrl_hsic_idle);
++ ret = PTR_ERR(pinctrl_hsic_idle);
++ goto err_put;
+ }
+
+ ret = pinctrl_select_state(data->pinctrl, pinctrl_hsic_idle);
+ if (ret) {
+ dev_err(dev, "hsic_idle select failed, err=%d\n", ret);
+- return ret;
++ goto err_put;
+ }
+
+ data->pinctrl_hsic_active = pinctrl_lookup_state(data->pinctrl,
+@@ -417,7 +422,8 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ dev_err(dev,
+ "pinctrl_hsic_active lookup failed, err=%ld\n",
+ PTR_ERR(data->pinctrl_hsic_active));
+- return PTR_ERR(data->pinctrl_hsic_active);
++ ret = PTR_ERR(data->pinctrl_hsic_active);
++ goto err_put;
+ }
+ }
+
+@@ -527,6 +533,8 @@ static int ci_hdrc_imx_probe(struct platform_device *pdev)
+ if (pdata.flags & CI_HDRC_PMQOS)
+ cpu_latency_qos_remove_request(&data->pm_qos_req);
+ data->ci_pdev = NULL;
++err_put:
++ put_device(data->usbmisc_data->dev);
+ return ret;
+ }
+
+@@ -551,6 +559,7 @@ static void ci_hdrc_imx_remove(struct platform_device *pdev)
+ if (data->hsic_pad_regulator)
+ regulator_disable(data->hsic_pad_regulator);
+ }
++ put_device(data->usbmisc_data->dev);
+ }
+
+ static void ci_hdrc_imx_shutdown(struct platform_device *pdev)
+diff --git a/drivers/usb/class/usblp.c b/drivers/usb/class/usblp.c
+index 5a2e43331064eb..ff1a941fd2ede4 100644
+--- a/drivers/usb/class/usblp.c
++++ b/drivers/usb/class/usblp.c
+@@ -1337,11 +1337,12 @@ static int usblp_set_protocol(struct usblp *usblp, int protocol)
+ if (protocol < USBLP_FIRST_PROTOCOL || protocol > USBLP_LAST_PROTOCOL)
+ return -EINVAL;
+
++ alts = usblp->protocol[protocol].alt_setting;
++ if (alts < 0)
++ return -EINVAL;
++
+ /* Don't unnecessarily set the interface if there's a single alt. */
+ if (usblp->intf->num_altsetting > 1) {
+- alts = usblp->protocol[protocol].alt_setting;
+- if (alts < 0)
+- return -EINVAL;
+ r = usb_set_interface(usblp->dev, usblp->ifnum, alts);
+ if (r < 0) {
+ printk(KERN_ERR "usblp: can't set desired altsetting %d on interface %d\n",
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 4b93c0bd1d4bcc..21ac9b464696f5 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -2663,13 +2663,13 @@ int usb_new_device(struct usb_device *udev)
+ err = sysfs_create_link(&udev->dev.kobj,
+ &port_dev->dev.kobj, "port");
+ if (err)
+- goto fail;
++ goto out_del_dev;
+
+ err = sysfs_create_link(&port_dev->dev.kobj,
+ &udev->dev.kobj, "device");
+ if (err) {
+ sysfs_remove_link(&udev->dev.kobj, "port");
+- goto fail;
++ goto out_del_dev;
+ }
+
+ if (!test_and_set_bit(port1, hub->child_usage_bits))
+@@ -2683,6 +2683,8 @@ int usb_new_device(struct usb_device *udev)
+ pm_runtime_put_sync_autosuspend(&udev->dev);
+ return err;
+
++out_del_dev:
++ device_del(&udev->dev);
+ fail:
+ usb_set_device_state(udev, USB_STATE_NOTATTACHED);
+ pm_runtime_disable(&udev->dev);
+diff --git a/drivers/usb/core/port.c b/drivers/usb/core/port.c
+index e7da2fca11a48c..c92fb648a1c4c0 100644
+--- a/drivers/usb/core/port.c
++++ b/drivers/usb/core/port.c
+@@ -452,10 +452,11 @@ static int usb_port_runtime_suspend(struct device *dev)
+ static void usb_port_shutdown(struct device *dev)
+ {
+ struct usb_port *port_dev = to_usb_port(dev);
++ struct usb_device *udev = port_dev->child;
+
+- if (port_dev->child) {
+- usb_disable_usb2_hardware_lpm(port_dev->child);
+- usb_unlocked_disable_lpm(port_dev->child);
++ if (udev && !udev->port_is_suspended) {
++ usb_disable_usb2_hardware_lpm(udev);
++ usb_unlocked_disable_lpm(udev);
+ }
+ }
+
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 0b9ba338b2654c..0e91a227507fff 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -464,6 +464,7 @@
+ #define DWC3_DCTL_TRGTULST_SS_INACT (DWC3_DCTL_TRGTULST(6))
+
+ /* These apply for core versions 1.94a and later */
++#define DWC3_DCTL_NYET_THRES_MASK (0xf << 20)
+ #define DWC3_DCTL_NYET_THRES(n) (((n) & 0xf) << 20)
+
+ #define DWC3_DCTL_KEEP_CONNECT BIT(19)
+diff --git a/drivers/usb/dwc3/dwc3-am62.c b/drivers/usb/dwc3/dwc3-am62.c
+index fad151e78fd669..538185a4d1b4fb 100644
+--- a/drivers/usb/dwc3/dwc3-am62.c
++++ b/drivers/usb/dwc3/dwc3-am62.c
+@@ -309,6 +309,7 @@ static void dwc3_ti_remove(struct platform_device *pdev)
+
+ pm_runtime_put_sync(dev);
+ pm_runtime_disable(dev);
++ pm_runtime_dont_use_autosuspend(dev);
+ pm_runtime_set_suspended(dev);
+ }
+
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 56744b11e67cb9..a5d75d7d0a8707 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -4208,8 +4208,10 @@ static void dwc3_gadget_conndone_interrupt(struct dwc3 *dwc)
+ WARN_ONCE(DWC3_VER_IS_PRIOR(DWC3, 240A) && dwc->has_lpm_erratum,
+ "LPM Erratum not available on dwc3 revisions < 2.40a\n");
+
+- if (dwc->has_lpm_erratum && !DWC3_VER_IS_PRIOR(DWC3, 240A))
++ if (dwc->has_lpm_erratum && !DWC3_VER_IS_PRIOR(DWC3, 240A)) {
++ reg &= ~DWC3_DCTL_NYET_THRES_MASK;
+ reg |= DWC3_DCTL_NYET_THRES(dwc->lpm_nyet_threshold);
++ }
+
+ dwc3_gadget_dctl_write_safe(dwc, reg);
+ } else {
+diff --git a/drivers/usb/gadget/Kconfig b/drivers/usb/gadget/Kconfig
+index 566ff0b1282a82..76521555e3c14c 100644
+--- a/drivers/usb/gadget/Kconfig
++++ b/drivers/usb/gadget/Kconfig
+@@ -211,6 +211,8 @@ config USB_F_MIDI
+
+ config USB_F_MIDI2
+ tristate
++ select SND_UMP
++ select SND_UMP_LEGACY_RAWMIDI
+
+ config USB_F_HID
+ tristate
+@@ -445,8 +447,6 @@ config USB_CONFIGFS_F_MIDI2
+ depends on USB_CONFIGFS
+ depends on SND
+ select USB_LIBCOMPOSITE
+- select SND_UMP
+- select SND_UMP_LEGACY_RAWMIDI
+ select USB_F_MIDI2
+ help
+ The MIDI 2.0 function driver provides the generic emulated
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index c82a6a0fba93dd..29390d573e2346 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -827,11 +827,15 @@ static ssize_t gadget_string_s_store(struct config_item *item, const char *page,
+ {
+ struct gadget_string *string = to_gadget_string(item);
+ int size = min(sizeof(string->string), len + 1);
++ ssize_t cpy_len;
+
+ if (len > USB_MAX_STRING_LEN)
+ return -EINVAL;
+
+- return strscpy(string->string, page, size);
++ cpy_len = strscpy(string->string, page, size);
++ if (cpy_len > 0 && string->string[cpy_len - 1] == '\n')
++ string->string[cpy_len - 1] = 0;
++ return len;
+ }
+ CONFIGFS_ATTR(gadget_string_, s);
+
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 2920f8000bbd83..92c883440e02cd 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -2285,7 +2285,7 @@ static int functionfs_bind(struct ffs_data *ffs, struct usb_composite_dev *cdev)
+ struct usb_gadget_strings **lang;
+ int first_id;
+
+- if (WARN_ON(ffs->state != FFS_ACTIVE
++ if ((ffs->state != FFS_ACTIVE
+ || test_and_set_bit(FFS_FL_BOUND, &ffs->flags)))
+ return -EBADFD;
+
+diff --git a/drivers/usb/gadget/function/f_uac2.c b/drivers/usb/gadget/function/f_uac2.c
+index ce5b77f8919026..9b324821c93bd0 100644
+--- a/drivers/usb/gadget/function/f_uac2.c
++++ b/drivers/usb/gadget/function/f_uac2.c
+@@ -1185,6 +1185,7 @@ afunc_bind(struct usb_configuration *cfg, struct usb_function *fn)
+ uac2->as_in_alt = 0;
+ }
+
++ std_ac_if_desc.bNumEndpoints = 0;
+ if (FUOUT_EN(uac2_opts) || FUIN_EN(uac2_opts)) {
+ uac2->int_ep = usb_ep_autoconfig(gadget, &fs_ep_int_desc);
+ if (!uac2->int_ep) {
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index 53d9fc41acc522..bc143a86c2ddf0 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -1420,6 +1420,10 @@ void gserial_disconnect(struct gserial *gser)
+ /* REVISIT as above: how best to track this? */
+ port->port_line_coding = gser->port_line_coding;
+
++ /* disable endpoints, aborting down any active I/O */
++ usb_ep_disable(gser->out);
++ usb_ep_disable(gser->in);
++
+ port->port_usb = NULL;
+ gser->ioport = NULL;
+ if (port->port.count > 0) {
+@@ -1431,10 +1435,6 @@ void gserial_disconnect(struct gserial *gser)
+ spin_unlock(&port->port_lock);
+ spin_unlock_irqrestore(&serial_port_lock, flags);
+
+- /* disable endpoints, aborting down any active I/O */
+- usb_ep_disable(gser->out);
+- usb_ep_disable(gser->in);
+-
+ /* finally, free any unused/unusable I/O buffers */
+ spin_lock_irqsave(&port->port_lock, flags);
+ if (port->port.count == 0)
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index ecaa75718e5926..e6660472501e4d 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -290,7 +290,8 @@ int xhci_plat_probe(struct platform_device *pdev, struct device *sysdev, const s
+
+ hcd->tpl_support = of_usb_host_tpl_support(sysdev->of_node);
+
+- if (priv && (priv->quirks & XHCI_SKIP_PHY_INIT))
++ if ((priv && (priv->quirks & XHCI_SKIP_PHY_INIT)) ||
++ (xhci->quirks & XHCI_SKIP_PHY_INIT))
+ hcd->skip_phy_initialization = 1;
+
+ if (priv && (priv->quirks & XHCI_SG_TRB_CACHE_SIZE_QUIRK))
+diff --git a/drivers/usb/serial/cp210x.c b/drivers/usb/serial/cp210x.c
+index c24101f0a07ad1..9960ac2b10b719 100644
+--- a/drivers/usb/serial/cp210x.c
++++ b/drivers/usb/serial/cp210x.c
+@@ -223,6 +223,7 @@ static const struct usb_device_id id_table[] = {
+ { USB_DEVICE(0x19CF, 0x3000) }, /* Parrot NMEA GPS Flight Recorder */
+ { USB_DEVICE(0x1ADB, 0x0001) }, /* Schweitzer Engineering C662 Cable */
+ { USB_DEVICE(0x1B1C, 0x1C00) }, /* Corsair USB Dongle */
++ { USB_DEVICE(0x1B93, 0x1013) }, /* Phoenix Contact UPS Device */
+ { USB_DEVICE(0x1BA4, 0x0002) }, /* Silicon Labs 358x factory default */
+ { USB_DEVICE(0x1BE3, 0x07A6) }, /* WAGO 750-923 USB Service Cable */
+ { USB_DEVICE(0x1D6F, 0x0010) }, /* Seluxit ApS RF Dongle */
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 64317b390d2285..1e2ae0c6c41c79 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -621,7 +621,7 @@ static void option_instat_callback(struct urb *urb);
+
+ /* MeiG Smart Technology products */
+ #define MEIGSMART_VENDOR_ID 0x2dee
+-/* MeiG Smart SRM825L based on Qualcomm 315 */
++/* MeiG Smart SRM815/SRM825L based on Qualcomm 315 */
+ #define MEIGSMART_PRODUCT_SRM825L 0x4d22
+ /* MeiG Smart SLM320 based on UNISOC UIS8910 */
+ #define MEIGSMART_PRODUCT_SLM320 0x4d41
+@@ -2405,6 +2405,7 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM320, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM770A, 0xff, 0, 0) },
++ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x30) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x40) },
+ { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x60) },
+@@ -2412,6 +2413,7 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(1) },
+ { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0640, 0xff), /* TCL IK512 ECM */
+ .driver_info = NCTRL(3) },
++ { USB_DEVICE_INTERFACE_CLASS(0x2949, 0x8700, 0xff) }, /* Neoway N723-EA */
+ { } /* Terminating entry */
+ };
+ MODULE_DEVICE_TABLE(usb, option_ids);
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index e5ad23d86833d5..54f0b1c83317cd 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -255,6 +255,13 @@ UNUSUAL_DEV( 0x0421, 0x06aa, 0x1110, 0x1110,
+ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
+ US_FL_MAX_SECTORS_64 ),
+
++/* Added by Lubomir Rintel <lkundrak@v3.sk>, a very fine chap */
++UNUSUAL_DEV( 0x0421, 0x06c2, 0x0000, 0x0406,
++ "Nokia",
++ "Nokia 208",
++ USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++ US_FL_MAX_SECTORS_64 ),
++
+ #ifdef NO_SDDR09
+ UNUSUAL_DEV( 0x0436, 0x0005, 0x0100, 0x0100,
+ "Microtech",
+diff --git a/drivers/usb/typec/tcpm/maxim_contaminant.c b/drivers/usb/typec/tcpm/maxim_contaminant.c
+index 22163d8f9eb07e..0cdda06592fd3c 100644
+--- a/drivers/usb/typec/tcpm/maxim_contaminant.c
++++ b/drivers/usb/typec/tcpm/maxim_contaminant.c
+@@ -135,7 +135,7 @@ static int max_contaminant_read_resistance_kohm(struct max_tcpci_chip *chip,
+
+ mv = max_contaminant_read_adc_mv(chip, channel, sleep_msec, raw, true);
+ if (mv < 0)
+- return ret;
++ return mv;
+
+ /* OVP enable */
+ ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, CCOVPDIS, 0);
+@@ -157,7 +157,7 @@ static int max_contaminant_read_resistance_kohm(struct max_tcpci_chip *chip,
+
+ mv = max_contaminant_read_adc_mv(chip, channel, sleep_msec, raw, true);
+ if (mv < 0)
+- return ret;
++ return mv;
+ /* Disable current source */
+ ret = regmap_update_bits(regmap, TCPC_VENDOR_CC_CTRL2, SBURPCTRL, 0);
+ if (ret < 0)
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index ed32583829bec2..24a6a4354df8ba 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -700,7 +700,7 @@ static int tcpci_init(struct tcpc_dev *tcpc)
+
+ tcpci->alert_mask = reg;
+
+- return tcpci_write16(tcpci, TCPC_ALERT_MASK, reg);
++ return 0;
+ }
+
+ irqreturn_t tcpci_irq(struct tcpci *tcpci)
+@@ -923,22 +923,27 @@ static int tcpci_probe(struct i2c_client *client)
+
+ chip->data.set_orientation = err;
+
++ chip->tcpci = tcpci_register_port(&client->dev, &chip->data);
++ if (IS_ERR(chip->tcpci))
++ return PTR_ERR(chip->tcpci);
++
+ err = devm_request_threaded_irq(&client->dev, client->irq, NULL,
+ _tcpci_irq,
+ IRQF_SHARED | IRQF_ONESHOT,
+ dev_name(&client->dev), chip);
+ if (err < 0)
+- return err;
++ goto unregister_port;
+
+- /*
+- * Disable irq while registering port. If irq is configured as an edge
+- * irq this allow to keep track and process the irq as soon as it is enabled.
+- */
+- disable_irq(client->irq);
+- chip->tcpci = tcpci_register_port(&client->dev, &chip->data);
+- enable_irq(client->irq);
++ /* Enable chip interrupts at last */
++ err = tcpci_write16(chip->tcpci, TCPC_ALERT_MASK, chip->tcpci->alert_mask);
++ if (err < 0)
++ goto unregister_port;
+
+- return PTR_ERR_OR_ZERO(chip->tcpci);
++ return 0;
++
++unregister_port:
++ tcpci_unregister_port(chip->tcpci);
++ return err;
+ }
+
+ static void tcpci_remove(struct i2c_client *client)
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index fcb8e61136cfd7..740171f24ef9fa 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -646,7 +646,7 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+ UCSI_CMD_CONNECTOR_MASK;
+ if (con_index == 0) {
+ ret = -EINVAL;
+- goto unlock;
++ goto err_put;
+ }
+ con = &uc->ucsi->connector[con_index - 1];
+ ucsi_ccg_update_set_new_cam_cmd(uc, con, &command);
+@@ -654,8 +654,8 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+
+ ret = ucsi_sync_control_common(ucsi, command);
+
++err_put:
+ pm_runtime_put_sync(uc->dev);
+-unlock:
+ mutex_unlock(&uc->lock);
+
+ return ret;
+diff --git a/drivers/vfio/pci/vfio_pci_core.c b/drivers/vfio/pci/vfio_pci_core.c
+index 1ab58da9f38a6e..1a4ed5a357d360 100644
+--- a/drivers/vfio/pci/vfio_pci_core.c
++++ b/drivers/vfio/pci/vfio_pci_core.c
+@@ -1661,14 +1661,15 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
+ unsigned long pfn, pgoff = vmf->pgoff - vma->vm_pgoff;
+ vm_fault_t ret = VM_FAULT_SIGBUS;
+
+- if (order && (vmf->address & ((PAGE_SIZE << order) - 1) ||
++ pfn = vma_to_pfn(vma) + pgoff;
++
++ if (order && (pfn & ((1 << order) - 1) ||
++ vmf->address & ((PAGE_SIZE << order) - 1) ||
+ vmf->address + (PAGE_SIZE << order) > vma->vm_end)) {
+ ret = VM_FAULT_FALLBACK;
+ goto out;
+ }
+
+- pfn = vma_to_pfn(vma);
+-
+ down_read(&vdev->memory_lock);
+
+ if (vdev->pm_runtime_engaged || !__vfio_pci_memory_enabled(vdev))
+@@ -1676,18 +1677,18 @@ static vm_fault_t vfio_pci_mmap_huge_fault(struct vm_fault *vmf,
+
+ switch (order) {
+ case 0:
+- ret = vmf_insert_pfn(vma, vmf->address, pfn + pgoff);
++ ret = vmf_insert_pfn(vma, vmf->address, pfn);
+ break;
+ #ifdef CONFIG_ARCH_SUPPORTS_PMD_PFNMAP
+ case PMD_ORDER:
+- ret = vmf_insert_pfn_pmd(vmf, __pfn_to_pfn_t(pfn + pgoff,
+- PFN_DEV), false);
++ ret = vmf_insert_pfn_pmd(vmf,
++ __pfn_to_pfn_t(pfn, PFN_DEV), false);
+ break;
+ #endif
+ #ifdef CONFIG_ARCH_SUPPORTS_PUD_PFNMAP
+ case PUD_ORDER:
+- ret = vmf_insert_pfn_pud(vmf, __pfn_to_pfn_t(pfn + pgoff,
+- PFN_DEV), false);
++ ret = vmf_insert_pfn_pud(vmf,
++ __pfn_to_pfn_t(pfn, PFN_DEV), false);
+ break;
+ #endif
+ default:
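
The vfio-pci hunk above moves the pgoff addition ahead of the alignment
test: for a huge insert of a given order, the starting pfn itself, not
just the faulting address, must be order-aligned, or the fault has to
fall back to small pages. The check in isolation, with illustrative
values (order 9 models a 2M PMD on 4K pages):

#include <stdio.h>

static int can_insert_huge(unsigned long pfn, unsigned long pgoff,
			   unsigned int order)
{
	unsigned long start = pfn + pgoff; /* fix: offset applied first */

	return (start & ((1UL << order) - 1)) == 0;
}

int main(void)
{
	/* base pfn aligned, but the fault offset breaks alignment */
	printf("%d\n", can_insert_huge(0x200, 0x1, 9));   /* 0: fallback */
	printf("%d\n", can_insert_huge(0x200, 0x200, 9)); /* 1: huge OK */
	return 0;
}
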
+diff --git a/fs/afs/afs.h b/fs/afs/afs.h
+index b488072aee87ae..ec3db00bd0813c 100644
+--- a/fs/afs/afs.h
++++ b/fs/afs/afs.h
+@@ -10,7 +10,7 @@
+
+ #include <linux/in.h>
+
+-#define AFS_MAXCELLNAME 256 /* Maximum length of a cell name */
++#define AFS_MAXCELLNAME 253 /* Maximum length of a cell name (DNS limited) */
+ #define AFS_MAXVOLNAME 64 /* Maximum length of a volume name */
+ #define AFS_MAXNSERVERS 8 /* Maximum servers in a basic volume record */
+ #define AFS_NMAXNSERVERS 13 /* Maximum servers in a N/U-class volume record */
+diff --git a/fs/afs/afs_vl.h b/fs/afs/afs_vl.h
+index a06296c8827d42..b835e25a2c02d3 100644
+--- a/fs/afs/afs_vl.h
++++ b/fs/afs/afs_vl.h
+@@ -13,6 +13,7 @@
+ #define AFS_VL_PORT 7003 /* volume location service port */
+ #define VL_SERVICE 52 /* RxRPC service ID for the Volume Location service */
+ #define YFS_VL_SERVICE 2503 /* Service ID for AuriStor upgraded VL service */
++#define YFS_VL_MAXCELLNAME 256 /* Maximum length of a cell name in YFS protocol */
+
+ enum AFSVL_Operations {
+ VLGETENTRYBYID = 503, /* AFS Get VLDB entry by ID */
+diff --git a/fs/afs/vl_alias.c b/fs/afs/vl_alias.c
+index 9f36e14f1c2d24..f9e76b604f31b9 100644
+--- a/fs/afs/vl_alias.c
++++ b/fs/afs/vl_alias.c
+@@ -253,6 +253,7 @@ static char *afs_vl_get_cell_name(struct afs_cell *cell, struct key *key)
+ static int yfs_check_canonical_cell_name(struct afs_cell *cell, struct key *key)
+ {
+ struct afs_cell *master;
++ size_t name_len;
+ char *cell_name;
+
+ cell_name = afs_vl_get_cell_name(cell, key);
+@@ -264,8 +265,11 @@ static int yfs_check_canonical_cell_name(struct afs_cell *cell, struct key *key)
+ return 0;
+ }
+
+- master = afs_lookup_cell(cell->net, cell_name, strlen(cell_name),
+- NULL, false);
++ name_len = strlen(cell_name);
++ if (!name_len || name_len > AFS_MAXCELLNAME)
++ master = ERR_PTR(-EOPNOTSUPP);
++ else
++ master = afs_lookup_cell(cell->net, cell_name, name_len, NULL, false);
+ kfree(cell_name);
+ if (IS_ERR(master))
+ return PTR_ERR(master);
+diff --git a/fs/afs/vlclient.c b/fs/afs/vlclient.c
+index cac75f89b64ad1..55dd0fc5aad7bf 100644
+--- a/fs/afs/vlclient.c
++++ b/fs/afs/vlclient.c
+@@ -697,7 +697,7 @@ static int afs_deliver_yfsvl_get_cell_name(struct afs_call *call)
+ return ret;
+
+ namesz = ntohl(call->tmp);
+- if (namesz > AFS_MAXCELLNAME)
++ if (namesz > YFS_VL_MAXCELLNAME)
+ return afs_protocol_error(call, afs_eproto_cellname_len);
+ paddedsz = (namesz + 3) & ~3;
+ call->count = namesz;
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 872cca54cc6ce4..42c9899d9241c9 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -786,7 +786,7 @@ static void submit_extent_folio(struct btrfs_bio_ctrl *bio_ctrl,
+ }
+
+ if (bio_ctrl->wbc)
+- wbc_account_cgroup_owner(bio_ctrl->wbc, &folio->page,
++ wbc_account_cgroup_owner(bio_ctrl->wbc, folio,
+ len);
+
+ size -= len;
+@@ -1708,7 +1708,7 @@ static noinline_for_stack void write_one_eb(struct extent_buffer *eb,
+ ret = bio_add_folio(&bbio->bio, folio, eb->len,
+ eb->start - folio_pos(folio));
+ ASSERT(ret);
+- wbc_account_cgroup_owner(wbc, folio_page(folio, 0), eb->len);
++ wbc_account_cgroup_owner(wbc, folio, eb->len);
+ folio_unlock(folio);
+ } else {
+ int num_folios = num_extent_folios(eb);
+@@ -1722,8 +1722,7 @@ static noinline_for_stack void write_one_eb(struct extent_buffer *eb,
+ folio_start_writeback(folio);
+ ret = bio_add_folio(&bbio->bio, folio, eb->folio_size, 0);
+ ASSERT(ret);
+- wbc_account_cgroup_owner(wbc, folio_page(folio, 0),
+- eb->folio_size);
++ wbc_account_cgroup_owner(wbc, folio, eb->folio_size);
+ wbc->nr_to_write -= folio_nr_pages(folio);
+ folio_unlock(folio);
+ }
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index b5cfb85af937fc..a3c861b2a6d25d 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1729,7 +1729,7 @@ static bool run_delalloc_compressed(struct btrfs_inode *inode,
+ * need full accuracy. Just account the whole thing
+ * against the first page.
+ */
+- wbc_account_cgroup_owner(wbc, &locked_folio->page,
++ wbc_account_cgroup_owner(wbc, locked_folio,
+ cur_end - start);
+ async_chunk[i].locked_folio = locked_folio;
+ locked_folio = NULL;
+diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
+index 3a34274280746c..c73a41b1ad5607 100644
+--- a/fs/btrfs/scrub.c
++++ b/fs/btrfs/scrub.c
+@@ -1541,6 +1541,10 @@ static int scrub_find_fill_first_stripe(struct btrfs_block_group *bg,
+ u64 extent_gen;
+ int ret;
+
++ if (unlikely(!extent_root)) {
++ btrfs_err(fs_info, "no valid extent root for scrub");
++ return -EUCLEAN;
++ }
+ memset(stripe->sectors, 0, sizeof(struct scrub_sector_verification) *
+ stripe->nr_sectors);
+ scrub_stripe_reset_bitmaps(stripe);
+diff --git a/fs/btrfs/zlib.c b/fs/btrfs/zlib.c
+index 100abc00b794ca..03958d1a53b0eb 100644
+--- a/fs/btrfs/zlib.c
++++ b/fs/btrfs/zlib.c
+@@ -174,10 +174,10 @@ int zlib_compress_folios(struct list_head *ws, struct address_space *mapping,
+ copy_page(workspace->buf + i * PAGE_SIZE,
+ data_in);
+ start += PAGE_SIZE;
+- workspace->strm.avail_in =
+- (in_buf_folios << PAGE_SHIFT);
+ }
+ workspace->strm.next_in = workspace->buf;
++ workspace->strm.avail_in = min(bytes_left,
++ in_buf_folios << PAGE_SHIFT);
+ } else {
+ unsigned int pg_off;
+ unsigned int cur_len;
+diff --git a/fs/buffer.c b/fs/buffer.c
+index 1fc9a50def0b51..32bd0f4c422360 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -2803,7 +2803,7 @@ static void submit_bh_wbc(blk_opf_t opf, struct buffer_head *bh,
+ bio->bi_iter.bi_sector = bh->b_blocknr * (bh->b_size >> 9);
+ bio->bi_write_hint = write_hint;
+
+- __bio_add_page(bio, bh->b_page, bh->b_size, bh_offset(bh));
++ bio_add_folio_nofail(bio, bh->b_folio, bh->b_size, bh_offset(bh));
+
+ bio->bi_end_io = end_bio_bh_io_sync;
+ bio->bi_private = bh;
+@@ -2813,7 +2813,7 @@ static void submit_bh_wbc(blk_opf_t opf, struct buffer_head *bh,
+
+ if (wbc) {
+ wbc_init_bio(wbc, bio);
+- wbc_account_cgroup_owner(wbc, bh->b_page, bh->b_size);
++ wbc_account_cgroup_owner(wbc, bh->b_folio, bh->b_size);
+ }
+
+ submit_bio(bio);
+diff --git a/fs/exfat/dir.c b/fs/exfat/dir.c
+index 7446bf09a04a8f..9d8848872fe8ac 100644
+--- a/fs/exfat/dir.c
++++ b/fs/exfat/dir.c
+@@ -125,7 +125,7 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
+ type = exfat_get_entry_type(ep);
+ if (type == TYPE_UNUSED) {
+ brelse(bh);
+- break;
++ goto out;
+ }
+
+ if (type != TYPE_FILE && type != TYPE_DIR) {
+@@ -189,6 +189,7 @@ static int exfat_readdir(struct inode *inode, loff_t *cpos, struct exfat_dir_ent
+ }
+ }
+
++out:
+ dir_entry->namebuf.lfn[0] = '\0';
+ *cpos = EXFAT_DEN_TO_B(dentry);
+ return 0;
+diff --git a/fs/exfat/fatent.c b/fs/exfat/fatent.c
+index 773c320d68f3f2..9e5492ac409b07 100644
+--- a/fs/exfat/fatent.c
++++ b/fs/exfat/fatent.c
+@@ -216,6 +216,16 @@ static int __exfat_free_cluster(struct inode *inode, struct exfat_chain *p_chain
+
+ if (err)
+ goto dec_used_clus;
++
++ if (num_clusters >= sbi->num_clusters - EXFAT_FIRST_CLUSTER) {
++ /*
++ * The cluster chain includes a loop; scan the
++ * bitmap to get the number of used clusters.
++ */
++ exfat_count_used_clusters(sb, &sbi->used_clusters);
++
++ return 0;
++ }
+ } while (clu != EXFAT_EOF_CLUSTER);
+ }
+
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index fb38769c3e39d1..05b51e7217838f 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -545,6 +545,7 @@ static int exfat_extend_valid_size(struct file *file, loff_t new_valid_size)
+ while (pos < new_valid_size) {
+ u32 len;
+ struct folio *folio;
++ unsigned long off;
+
+ len = PAGE_SIZE - (pos & (PAGE_SIZE - 1));
+ if (pos + len > new_valid_size)
+@@ -554,6 +555,9 @@ static int exfat_extend_valid_size(struct file *file, loff_t new_valid_size)
+ if (err)
+ goto out;
+
++ off = offset_in_folio(folio, pos);
++ folio_zero_new_buffers(folio, off, off + len);
++
+ err = ops->write_end(file, mapping, pos, len, len, folio, NULL);
+ if (err < 0)
+ goto out;
+@@ -563,6 +567,8 @@ static int exfat_extend_valid_size(struct file *file, loff_t new_valid_size)
+ cond_resched();
+ }
+
++ return 0;
++
+ out:
+ return err;
+ }
+diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
+index ad5543866d2152..b7b9261fec3b50 100644
+--- a/fs/ext4/page-io.c
++++ b/fs/ext4/page-io.c
+@@ -421,7 +421,7 @@ static void io_submit_add_bh(struct ext4_io_submit *io,
+ io_submit_init_bio(io, bh);
+ if (!bio_add_folio(io->io_bio, io_folio, bh->b_size, bh_offset(bh)))
+ goto submit_and_retry;
+- wbc_account_cgroup_owner(io->io_wbc, &folio->page, bh->b_size);
++ wbc_account_cgroup_owner(io->io_wbc, folio, bh->b_size);
+ io->io_next_block++;
+ }
+
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index da0960d496ae09..1b0050b8421d88 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -711,7 +711,8 @@ int f2fs_submit_page_bio(struct f2fs_io_info *fio)
+ }
+
+ if (fio->io_wbc && !is_read_io(fio->op))
+- wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
++ wbc_account_cgroup_owner(fio->io_wbc, page_folio(fio->page),
++ PAGE_SIZE);
+
+ inc_page_count(fio->sbi, is_read_io(fio->op) ?
+ __read_io_type(page) : WB_DATA_TYPE(fio->page, false));
+@@ -911,7 +912,8 @@ int f2fs_merge_page_bio(struct f2fs_io_info *fio)
+ }
+
+ if (fio->io_wbc)
+- wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
++ wbc_account_cgroup_owner(fio->io_wbc, page_folio(fio->page),
++ PAGE_SIZE);
+
+ inc_page_count(fio->sbi, WB_DATA_TYPE(page, false));
+
+@@ -1011,7 +1013,8 @@ void f2fs_submit_page_write(struct f2fs_io_info *fio)
+ }
+
+ if (fio->io_wbc)
+- wbc_account_cgroup_owner(fio->io_wbc, fio->page, PAGE_SIZE);
++ wbc_account_cgroup_owner(fio->io_wbc, page_folio(fio->page),
++ PAGE_SIZE);
+
+ io->last_block_in_bio = fio->new_blkaddr;
+
+diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
+index d8bec3c1bb1fa7..2391b09f4cedec 100644
+--- a/fs/fs-writeback.c
++++ b/fs/fs-writeback.c
+@@ -890,17 +890,16 @@ EXPORT_SYMBOL_GPL(wbc_detach_inode);
+ /**
+ * wbc_account_cgroup_owner - account writeback to update inode cgroup ownership
+ * @wbc: writeback_control of the writeback in progress
+- * @page: page being written out
++ * @folio: folio being written out
+ * @bytes: number of bytes being written out
+ *
+- * @bytes from @page are about to written out during the writeback
++ * @bytes from @folio are about to be written out during the writeback
+ * controlled by @wbc. Keep the book for foreign inode detection. See
+ * wbc_detach_inode().
+ */
+-void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
++void wbc_account_cgroup_owner(struct writeback_control *wbc, struct folio *folio,
+ size_t bytes)
+ {
+- struct folio *folio;
+ struct cgroup_subsys_state *css;
+ int id;
+
+@@ -913,7 +912,6 @@ void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
+ if (!wbc->wb || wbc->no_cgroup_owner)
+ return;
+
+- folio = page_folio(page);
+ css = mem_cgroup_css_from_folio(folio);
+ /* dead cgroups shouldn't contribute to inode ownership arbitration */
+ if (!(css->flags & CSS_ONLINE))
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 54104dd48af7c9..2e62e62c07f836 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -1680,6 +1680,8 @@ static int fuse_dir_open(struct inode *inode, struct file *file)
+ */
+ if (ff->open_flags & (FOPEN_STREAM | FOPEN_NONSEEKABLE))
+ nonseekable_open(inode, file);
++ if (!(ff->open_flags & FOPEN_KEEP_CACHE))
++ invalidate_inode_pages2(inode->i_mapping);
+ }
+
+ return err;
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index ef0b68bccbb612..25d1ede6bb0eb0 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1764,7 +1764,8 @@ static bool iomap_can_add_to_ioend(struct iomap_writepage_ctx *wpc, loff_t pos)
+ */
+ static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
+ struct writeback_control *wbc, struct folio *folio,
+- struct inode *inode, loff_t pos, unsigned len)
++ struct inode *inode, loff_t pos, loff_t end_pos,
++ unsigned len)
+ {
+ struct iomap_folio_state *ifs = folio->private;
+ size_t poff = offset_in_folio(folio, pos);
+@@ -1783,15 +1784,60 @@ static int iomap_add_to_ioend(struct iomap_writepage_ctx *wpc,
+
+ if (ifs)
+ atomic_add(len, &ifs->write_bytes_pending);
++
++ /*
++ * Clamp io_offset and io_size to the incore EOF so that ondisk
++ * file size updates in the ioend completion are byte-accurate.
++ * This avoids recovering files with zeroed tail regions when
++ * writeback races with appending writes:
++ *
++ * Thread 1: Thread 2:
++ * ------------ -----------
++ * write [A, A+B]
++ * update inode size to A+B
++ * submit I/O [A, A+BS]
++ * write [A+B, A+B+C]
++ * update inode size to A+B+C
++ * <I/O completes, updates disk size to min(A+B+C, A+BS)>
++ * <power failure>
++ *
++ * After reboot:
++ * 1) with A+B+C < A+BS, the file has zero padding in range
++ * [A+B, A+B+C]
++ *
++ * |< Block Size (BS) >|
++ * |DDDDDDDDDDDD0000000000000|
++ * ^ ^ ^
++ * A A+B A+B+C
++ * (EOF)
++ *
++ * 2) with A+B+C > A+BS, the file has zero padding in range
++ * [A+B, A+BS]
++ *
++ * |< Block Size (BS) >|< Block Size (BS) >|
++ * |DDDDDDDDDDDD0000000000000|00000000000000000000000000|
++ * ^ ^ ^ ^
++ * A A+B A+BS A+B+C
++ * (EOF)
++ *
++ * D = Valid Data
++ * 0 = Zero Padding
++ *
++ * Note that this defeats the ability to chain the ioends of
++ * appending writes.
++ */
+ wpc->ioend->io_size += len;
+- wbc_account_cgroup_owner(wbc, &folio->page, len);
++ if (wpc->ioend->io_offset + wpc->ioend->io_size > end_pos)
++ wpc->ioend->io_size = end_pos - wpc->ioend->io_offset;
++
++ wbc_account_cgroup_owner(wbc, folio, len);
+ return 0;
+ }
+
+ static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc,
+ struct writeback_control *wbc, struct folio *folio,
+- struct inode *inode, u64 pos, unsigned dirty_len,
+- unsigned *count)
++ struct inode *inode, u64 pos, u64 end_pos,
++ unsigned dirty_len, unsigned *count)
+ {
+ int error;
+
+@@ -1816,7 +1862,7 @@ static int iomap_writepage_map_blocks(struct iomap_writepage_ctx *wpc,
+ break;
+ default:
+ error = iomap_add_to_ioend(wpc, wbc, folio, inode, pos,
+- map_len);
++ end_pos, map_len);
+ if (!error)
+ (*count)++;
+ break;
+@@ -1887,11 +1933,11 @@ static bool iomap_writepage_handle_eof(struct folio *folio, struct inode *inode,
+ * remaining memory is zeroed when mapped, and writes to that
+ * region are not written out to the file.
+ *
+- * Also adjust the writeback range to skip all blocks entirely
+- * beyond i_size.
++ * Also adjust the end_pos to the end of file and skip writeback
++ * for all blocks entirely beyond i_size.
+ */
+ folio_zero_segment(folio, poff, folio_size(folio));
+- *end_pos = round_up(isize, i_blocksize(inode));
++ *end_pos = isize;
+ }
+
+ return true;
+@@ -1904,6 +1950,7 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
+ struct inode *inode = folio->mapping->host;
+ u64 pos = folio_pos(folio);
+ u64 end_pos = pos + folio_size(folio);
++ u64 end_aligned = 0;
+ unsigned count = 0;
+ int error = 0;
+ u32 rlen;
+@@ -1945,9 +1992,10 @@ static int iomap_writepage_map(struct iomap_writepage_ctx *wpc,
+ /*
+ * Walk through the folio to find dirty areas to write back.
+ */
+- while ((rlen = iomap_find_dirty_range(folio, &pos, end_pos))) {
++ end_aligned = round_up(end_pos, i_blocksize(inode));
++ while ((rlen = iomap_find_dirty_range(folio, &pos, end_aligned))) {
+ error = iomap_writepage_map_blocks(wpc, wbc, folio, inode,
+- pos, rlen, &count);
++ pos, end_pos, rlen, &count);
+ if (error)
+ break;
+ pos += rlen;
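
Plugging the numbers from the race diagram into the new clamp gives a quick sanity check. The snippet below is a standalone arithmetic sketch, not patch code, using illustrative values A = 0, B = 300, BS = 4096:

    #include <stdio.h>

    int main(void)
    {
            unsigned long long io_offset = 0;      /* A */
            unsigned long long io_size   = 4096;   /* block-aligned submission: A+BS */
            unsigned long long end_pos   = 300;    /* in-core EOF at submission: A+B */

            /* the clamp added in iomap_add_to_ioend() */
            if (io_offset + io_size > end_pos)
                    io_size = end_pos - io_offset;

            /* completion now updates the on-disk size to 300, not 4096 */
            printf("io_size = %llu\n", io_size);
            return 0;
    }
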
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index 4305a1ac808a60..f95cf272a1b500 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -776,9 +776,9 @@ void jbd2_journal_commit_transaction(journal_t *journal)
+ /*
+ * If the journal is not located on the file system device,
+ * then we must flush the file system device before we issue
+- * the commit record
++ * the commit record and update the journal tail sequence.
+ */
+- if (commit_transaction->t_need_data_flush &&
++ if ((commit_transaction->t_need_data_flush || update_tail) &&
+ (journal->j_fs_dev != journal->j_dev) &&
+ (journal->j_flags & JBD2_BARRIER))
+ blkdev_issue_flush(journal->j_fs_dev);
+diff --git a/fs/jbd2/revoke.c b/fs/jbd2/revoke.c
+index 4556e468902449..ce63d5fde9c3a8 100644
+--- a/fs/jbd2/revoke.c
++++ b/fs/jbd2/revoke.c
+@@ -654,7 +654,7 @@ static void flush_descriptor(journal_t *journal,
+ set_buffer_jwrite(descriptor);
+ BUFFER_TRACE(descriptor, "write");
+ set_buffer_dirty(descriptor);
+- write_dirty_buffer(descriptor, REQ_SYNC);
++ write_dirty_buffer(descriptor, JBD2_JOURNAL_REQ_FLAGS);
+ }
+ #endif
+
+diff --git a/fs/mount.h b/fs/mount.h
+index 185fc56afc1333..179f690a0c7223 100644
+--- a/fs/mount.h
++++ b/fs/mount.h
+@@ -38,6 +38,7 @@ struct mount {
+ struct dentry *mnt_mountpoint;
+ struct vfsmount mnt;
+ union {
++ struct rb_node mnt_node; /* node in the ns->mounts rbtree */
+ struct rcu_head mnt_rcu;
+ struct llist_node mnt_llist;
+ };
+@@ -51,10 +52,7 @@ struct mount {
+ struct list_head mnt_child; /* and going through their mnt_child */
+ struct list_head mnt_instance; /* mount instance on sb->s_mounts */
+ const char *mnt_devname; /* Name of device e.g. /dev/dsk/hda1 */
+- union {
+- struct rb_node mnt_node; /* Under ns->mounts */
+- struct list_head mnt_list;
+- };
++ struct list_head mnt_list;
+ struct list_head mnt_expire; /* link in fs-specific expiry list */
+ struct list_head mnt_share; /* circular list of shared mounts */
+ struct list_head mnt_slave_list;/* list of slave mounts */
+@@ -145,11 +143,16 @@ static inline bool is_anon_ns(struct mnt_namespace *ns)
+ return ns->seq == 0;
+ }
+
++static inline bool mnt_ns_attached(const struct mount *mnt)
++{
++ return !RB_EMPTY_NODE(&mnt->mnt_node);
++}
++
+ static inline void move_from_ns(struct mount *mnt, struct list_head *dt_list)
+ {
+- WARN_ON(!(mnt->mnt.mnt_flags & MNT_ONRB));
+- mnt->mnt.mnt_flags &= ~MNT_ONRB;
++ WARN_ON(!mnt_ns_attached(mnt));
+ rb_erase(&mnt->mnt_node, &mnt->mnt_ns->mounts);
++ RB_CLEAR_NODE(&mnt->mnt_node);
+ list_add_tail(&mnt->mnt_list, dt_list);
+ }
+
+diff --git a/fs/mpage.c b/fs/mpage.c
+index b5b5ddf9d513d4..82aecf37274379 100644
+--- a/fs/mpage.c
++++ b/fs/mpage.c
+@@ -606,7 +606,7 @@ static int __mpage_writepage(struct folio *folio, struct writeback_control *wbc,
+ * the confused fail path above (OOM) will be very confused when
+ * it finds all bh marked clean (i.e. it will not write anything)
+ */
+- wbc_account_cgroup_owner(wbc, &folio->page, folio_size(folio));
++ wbc_account_cgroup_owner(wbc, folio, folio_size(folio));
+ length = first_unmapped << blkbits;
+ if (!bio_add_folio(bio, folio, length, 0)) {
+ bio = mpage_bio_submit_write(bio);
+diff --git a/fs/namespace.c b/fs/namespace.c
+index d26f5e6d2ca35f..5ea644b679add5 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -344,6 +344,7 @@ static struct mount *alloc_vfsmnt(const char *name)
+ INIT_HLIST_NODE(&mnt->mnt_mp_list);
+ INIT_LIST_HEAD(&mnt->mnt_umounting);
+ INIT_HLIST_HEAD(&mnt->mnt_stuck_children);
++ RB_CLEAR_NODE(&mnt->mnt_node);
+ mnt->mnt.mnt_idmap = &nop_mnt_idmap;
+ }
+ return mnt;
+@@ -1124,7 +1125,7 @@ static void mnt_add_to_ns(struct mnt_namespace *ns, struct mount *mnt)
+ struct rb_node **link = &ns->mounts.rb_node;
+ struct rb_node *parent = NULL;
+
+- WARN_ON(mnt->mnt.mnt_flags & MNT_ONRB);
++ WARN_ON(mnt_ns_attached(mnt));
+ mnt->mnt_ns = ns;
+ while (*link) {
+ parent = *link;
+@@ -1135,7 +1136,6 @@ static void mnt_add_to_ns(struct mnt_namespace *ns, struct mount *mnt)
+ }
+ rb_link_node(&mnt->mnt_node, parent, link);
+ rb_insert_color(&mnt->mnt_node, &ns->mounts);
+- mnt->mnt.mnt_flags |= MNT_ONRB;
+ }
+
+ /*
+@@ -1305,7 +1305,7 @@ static struct mount *clone_mnt(struct mount *old, struct dentry *root,
+ }
+
+ mnt->mnt.mnt_flags = old->mnt.mnt_flags;
+- mnt->mnt.mnt_flags &= ~(MNT_WRITE_HOLD|MNT_MARKED|MNT_INTERNAL|MNT_ONRB);
++ mnt->mnt.mnt_flags &= ~(MNT_WRITE_HOLD|MNT_MARKED|MNT_INTERNAL);
+
+ atomic_inc(&sb->s_active);
+ mnt->mnt.mnt_idmap = mnt_idmap_get(mnt_idmap(&old->mnt));
+@@ -1763,7 +1763,7 @@ static void umount_tree(struct mount *mnt, enum umount_tree_flags how)
+ /* Gather the mounts to umount */
+ for (p = mnt; p; p = next_mnt(p, mnt)) {
+ p->mnt.mnt_flags |= MNT_UMOUNT;
+- if (p->mnt.mnt_flags & MNT_ONRB)
++ if (mnt_ns_attached(p))
+ move_from_ns(p, &tmp_list);
+ else
+ list_move(&p->mnt_list, &tmp_list);
+@@ -1912,16 +1912,14 @@ static int do_umount(struct mount *mnt, int flags)
+
+ event++;
+ if (flags & MNT_DETACH) {
+- if (mnt->mnt.mnt_flags & MNT_ONRB ||
+- !list_empty(&mnt->mnt_list))
++ if (mnt_ns_attached(mnt) || !list_empty(&mnt->mnt_list))
+ umount_tree(mnt, UMOUNT_PROPAGATE);
+ retval = 0;
+ } else {
+ shrink_submounts(mnt);
+ retval = -EBUSY;
+ if (!propagate_mount_busy(mnt, 2)) {
+- if (mnt->mnt.mnt_flags & MNT_ONRB ||
+- !list_empty(&mnt->mnt_list))
++ if (mnt_ns_attached(mnt) || !list_empty(&mnt->mnt_list))
+ umount_tree(mnt, UMOUNT_PROPAGATE|UMOUNT_SYNC);
+ retval = 0;
+ }
+@@ -2055,9 +2053,15 @@ SYSCALL_DEFINE1(oldumount, char __user *, name)
+
+ static bool is_mnt_ns_file(struct dentry *dentry)
+ {
++ struct ns_common *ns;
++
+ /* Is this a proxy for a mount namespace? */
+- return dentry->d_op == &ns_dentry_operations &&
+- dentry->d_fsdata == &mntns_operations;
++ if (dentry->d_op != &ns_dentry_operations)
++ return false;
++
++ ns = d_inode(dentry)->i_private;
++
++ return ns->ops == &mntns_operations;
+ }
+
+ struct ns_common *from_mnt_ns(struct mnt_namespace *mnt)
+diff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c
+index af46a598f4d7c7..2dd2260352dbf8 100644
+--- a/fs/netfs/buffered_read.c
++++ b/fs/netfs/buffered_read.c
+@@ -275,22 +275,14 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ netfs_stat(&netfs_n_rh_download);
+ if (rreq->netfs_ops->prepare_read) {
+ ret = rreq->netfs_ops->prepare_read(subreq);
+- if (ret < 0) {
+- atomic_dec(&rreq->nr_outstanding);
+- netfs_put_subrequest(subreq, false,
+- netfs_sreq_trace_put_cancel);
+- break;
+- }
++ if (ret < 0)
++ goto prep_failed;
+ trace_netfs_sreq(subreq, netfs_sreq_trace_prepare);
+ }
+
+ slice = netfs_prepare_read_iterator(subreq);
+- if (slice < 0) {
+- atomic_dec(&rreq->nr_outstanding);
+- netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
+- ret = slice;
+- break;
+- }
++ if (slice < 0)
++ goto prep_iter_failed;
+
+ rreq->netfs_ops->issue_read(subreq);
+ goto done;
+@@ -302,6 +294,8 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+ netfs_stat(&netfs_n_rh_zero);
+ slice = netfs_prepare_read_iterator(subreq);
++ if (slice < 0)
++ goto prep_iter_failed;
+ __set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);
+ netfs_read_subreq_terminated(subreq, 0, false);
+ goto done;
+@@ -310,6 +304,8 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ if (source == NETFS_READ_FROM_CACHE) {
+ trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
+ slice = netfs_prepare_read_iterator(subreq);
++ if (slice < 0)
++ goto prep_iter_failed;
+ netfs_read_cache_to_pagecache(rreq, subreq);
+ goto done;
+ }
+@@ -318,6 +314,14 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
+ WARN_ON_ONCE(1);
+ break;
+
++ prep_iter_failed:
++ ret = slice;
++ prep_failed:
++ subreq->error = ret;
++ atomic_dec(&rreq->nr_outstanding);
++ netfs_put_subrequest(subreq, false, netfs_sreq_trace_put_cancel);
++ break;
++
+ done:
+ size -= slice;
+ start += slice;
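
The reshuffle funnels every preparation failure through the shared prep_iter_failed/prep_failed labels, so each site also records subreq->error before the subrequest is dropped. Reduced to a standalone sketch of the shape (stub logic, not the netfs code):

    #include <stdio.h>

    static int prepare(int arg)
    {
            return arg < 0 ? -1 : 0;        /* stand-in for prepare_read() */
    }

    static int issue(int a, int b)
    {
            int ret;

            ret = prepare(a);
            if (ret < 0)
                    goto prep_failed;       /* was: inline dec/put/break copies */

            ret = prepare(b);
            if (ret < 0)
                    goto prep_failed;

            return 0;

    prep_failed:
            /* one place to record the error and release the request */
            fprintf(stderr, "prep failed: %d\n", ret);
            return ret;
    }

    int main(void)
    {
            return issue(1, -1) ? 0 : 1;    /* expect the failure path */
    }
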
+diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c
+index 88f2adfab75e92..26cf9c94deebb3 100644
+--- a/fs/netfs/direct_write.c
++++ b/fs/netfs/direct_write.c
+@@ -67,7 +67,7 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ * allocate a sufficiently large bvec array and may shorten the
+ * request.
+ */
+- if (async || user_backed_iter(iter)) {
++ if (user_backed_iter(iter)) {
+ n = netfs_extract_user_iter(iter, len, &wreq->iter, 0);
+ if (n < 0) {
+ ret = n;
+@@ -77,6 +77,11 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *
+ wreq->direct_bv_count = n;
+ wreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);
+ } else {
++ /* If this is a kernel-generated async DIO request,
++ * assume that any resources the iterator points to
++ * (eg. a bio_vec array) will persist till the end of
++ * the op.
++ */
+ wreq->iter = *iter;
+ }
+
+diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
+index 3cbb289535a85a..e70eb4ea21c038 100644
+--- a/fs/netfs/read_collect.c
++++ b/fs/netfs/read_collect.c
+@@ -62,10 +62,14 @@ static void netfs_unlock_read_folio(struct netfs_io_subrequest *subreq,
+ } else {
+ trace_netfs_folio(folio, netfs_folio_trace_read_done);
+ }
++
++ folioq_clear(folioq, slot);
+ } else {
+ // TODO: Use of PG_private_2 is deprecated.
+ if (test_bit(NETFS_SREQ_COPY_TO_CACHE, &subreq->flags))
+ netfs_pgpriv2_mark_copy_to_cache(subreq, rreq, folioq, slot);
++ else
++ folioq_clear(folioq, slot);
+ }
+
+ if (!test_bit(NETFS_RREQ_DONT_UNLOCK_FOLIOS, &rreq->flags)) {
+@@ -77,8 +81,6 @@ static void netfs_unlock_read_folio(struct netfs_io_subrequest *subreq,
+ folio_unlock(folio);
+ }
+ }
+-
+- folioq_clear(folioq, slot);
+ }
+
+ /*
+@@ -378,8 +380,7 @@ static void netfs_rreq_assess(struct netfs_io_request *rreq)
+ task_io_account_read(rreq->transferred);
+
+ trace_netfs_rreq(rreq, netfs_rreq_trace_wake_ip);
+- clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
+- wake_up_bit(&rreq->flags, NETFS_RREQ_IN_PROGRESS);
++ clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &rreq->flags);
+
+ trace_netfs_rreq(rreq, netfs_rreq_trace_done);
+ netfs_clear_subrequests(rreq, false);
+diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c
+index ba5af89d37fae5..54d5004fec1826 100644
+--- a/fs/netfs/read_pgpriv2.c
++++ b/fs/netfs/read_pgpriv2.c
+@@ -170,6 +170,10 @@ void netfs_pgpriv2_write_to_the_cache(struct netfs_io_request *rreq)
+
+ trace_netfs_write(wreq, netfs_write_trace_copy_to_cache);
+ netfs_stat(&netfs_n_wh_copy_to_cache);
++ if (!wreq->io_streams[1].avail) {
++ netfs_put_request(wreq, false, netfs_rreq_trace_put_return);
++ goto couldnt_start;
++ }
+
+ for (;;) {
+ error = netfs_pgpriv2_copy_folio(wreq, folio);
+diff --git a/fs/netfs/read_retry.c b/fs/netfs/read_retry.c
+index 0350592ea8047d..48fb0303f7eee0 100644
+--- a/fs/netfs/read_retry.c
++++ b/fs/netfs/read_retry.c
+@@ -49,7 +49,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
+ * up to the first permanently failed one.
+ */
+ if (!rreq->netfs_ops->prepare_read &&
+- !test_bit(NETFS_RREQ_COPY_TO_CACHE, &rreq->flags)) {
++ !rreq->cache_resources.ops) {
+ struct netfs_io_subrequest *subreq;
+
+ list_for_each_entry(subreq, &rreq->subrequests, rreq_link) {
+@@ -149,7 +149,8 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)
+ BUG_ON(!len);
+
+ /* Renegotiate max_len (rsize) */
+- if (rreq->netfs_ops->prepare_read(subreq) < 0) {
++ if (rreq->netfs_ops->prepare_read &&
++ rreq->netfs_ops->prepare_read(subreq) < 0) {
+ trace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed);
+ __set_bit(NETFS_SREQ_FAILED, &subreq->flags);
+ }
+diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
+index 1d438be2e1b4b8..82290c92ba7a29 100644
+--- a/fs/netfs/write_collect.c
++++ b/fs/netfs/write_collect.c
+@@ -501,8 +501,7 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)
+ goto need_retry;
+ if ((notes & MADE_PROGRESS) && test_bit(NETFS_RREQ_PAUSE, &wreq->flags)) {
+ trace_netfs_rreq(wreq, netfs_rreq_trace_unpause);
+- clear_bit_unlock(NETFS_RREQ_PAUSE, &wreq->flags);
+- wake_up_bit(&wreq->flags, NETFS_RREQ_PAUSE);
++ clear_and_wake_up_bit(NETFS_RREQ_PAUSE, &wreq->flags);
+ }
+
+ if (notes & NEED_REASSESS) {
+@@ -605,8 +604,7 @@ void netfs_write_collection_worker(struct work_struct *work)
+
+ _debug("finished");
+ trace_netfs_rreq(wreq, netfs_rreq_trace_wake_ip);
+- clear_bit_unlock(NETFS_RREQ_IN_PROGRESS, &wreq->flags);
+- wake_up_bit(&wreq->flags, NETFS_RREQ_IN_PROGRESS);
++ clear_and_wake_up_bit(NETFS_RREQ_IN_PROGRESS, &wreq->flags);
+
+ if (wreq->iocb) {
+ size_t written = min(wreq->transferred, wreq->len);
+@@ -714,8 +712,7 @@ void netfs_write_subrequest_terminated(void *_op, ssize_t transferred_or_error,
+
+ trace_netfs_sreq(subreq, netfs_sreq_trace_terminated);
+
+- clear_bit_unlock(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+- wake_up_bit(&subreq->flags, NETFS_SREQ_IN_PROGRESS);
++ clear_and_wake_up_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);
+
+ /* If we are at the head of the queue, wake up the collector,
+ * transferring a ref to it if we were the ones to do so.
+diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
+index 810269ee0a50e6..d49e4ce279994f 100644
+--- a/fs/nfs/fscache.c
++++ b/fs/nfs/fscache.c
+@@ -263,6 +263,12 @@ int nfs_netfs_readahead(struct readahead_control *ractl)
+ static atomic_t nfs_netfs_debug_id;
+ static int nfs_netfs_init_request(struct netfs_io_request *rreq, struct file *file)
+ {
++ if (!file) {
++ if (WARN_ON_ONCE(rreq->origin != NETFS_PGPRIV2_COPY_TO_CACHE))
++ return -EIO;
++ return 0;
++ }
++
+ rreq->netfs_priv = get_nfs_open_context(nfs_file_open_context(file));
+ rreq->debug_id = atomic_inc_return(&nfs_netfs_debug_id);
+ /* [DEPRECATED] Use PG_private_2 to mark folio being written to the cache. */
+@@ -274,7 +280,8 @@ static int nfs_netfs_init_request(struct netfs_io_request *rreq, struct file *fi
+
+ static void nfs_netfs_free_request(struct netfs_io_request *rreq)
+ {
+- put_nfs_open_context(rreq->netfs_priv);
++ if (rreq->netfs_priv)
++ put_nfs_open_context(rreq->netfs_priv);
+ }
+
+ static struct nfs_netfs_io_data *nfs_netfs_alloc(struct netfs_io_subrequest *sreq)
+diff --git a/fs/notify/fdinfo.c b/fs/notify/fdinfo.c
+index dec553034027e0..e933f9c65d904a 100644
+--- a/fs/notify/fdinfo.c
++++ b/fs/notify/fdinfo.c
+@@ -47,10 +47,8 @@ static void show_mark_fhandle(struct seq_file *m, struct inode *inode)
+ size = f->handle_bytes >> 2;
+
+ ret = exportfs_encode_fid(inode, (struct fid *)f->f_handle, &size);
+- if ((ret == FILEID_INVALID) || (ret < 0)) {
+- WARN_ONCE(1, "Can't encode file handler for inotify: %d\n", ret);
++ if ((ret == FILEID_INVALID) || (ret < 0))
+ return;
+- }
+
+ f->handle_type = ret;
+ f->handle_bytes = size * sizeof(u32);
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 2ed6ad641a2069..b2c78621da44a4 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -416,13 +416,13 @@ int ovl_set_attr(struct ovl_fs *ofs, struct dentry *upperdentry,
+ return err;
+ }
+
+-struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct dentry *real,
++struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct inode *realinode,
+ bool is_upper)
+ {
+ struct ovl_fh *fh;
+ int fh_type, dwords;
+ int buflen = MAX_HANDLE_SZ;
+- uuid_t *uuid = &real->d_sb->s_uuid;
++ uuid_t *uuid = &realinode->i_sb->s_uuid;
+ int err;
+
+ /* Make sure the real fid stays 32bit aligned */
+@@ -439,13 +439,13 @@ struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct dentry *real,
+ * the price or reconnecting the dentry.
+ */
+ dwords = buflen >> 2;
+- fh_type = exportfs_encode_fh(real, (void *)fh->fb.fid, &dwords, 0);
++ fh_type = exportfs_encode_inode_fh(realinode, (void *)fh->fb.fid,
++ &dwords, NULL, 0);
+ buflen = (dwords << 2);
+
+ err = -EIO;
+- if (WARN_ON(fh_type < 0) ||
+- WARN_ON(buflen > MAX_HANDLE_SZ) ||
+- WARN_ON(fh_type == FILEID_INVALID))
++ if (fh_type < 0 || fh_type == FILEID_INVALID ||
++ WARN_ON(buflen > MAX_HANDLE_SZ))
+ goto out_err;
+
+ fh->fb.version = OVL_FH_VERSION;
+@@ -481,7 +481,7 @@ struct ovl_fh *ovl_get_origin_fh(struct ovl_fs *ofs, struct dentry *origin)
+ if (!ovl_can_decode_fh(origin->d_sb))
+ return NULL;
+
+- return ovl_encode_real_fh(ofs, origin, false);
++ return ovl_encode_real_fh(ofs, d_inode(origin), false);
+ }
+
+ int ovl_set_origin_fh(struct ovl_fs *ofs, const struct ovl_fh *fh,
+@@ -506,7 +506,7 @@ static int ovl_set_upper_fh(struct ovl_fs *ofs, struct dentry *upper,
+ const struct ovl_fh *fh;
+ int err;
+
+- fh = ovl_encode_real_fh(ofs, upper, true);
++ fh = ovl_encode_real_fh(ofs, d_inode(upper), true);
+ if (IS_ERR(fh))
+ return PTR_ERR(fh);
+
+diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c
+index 5868cb2229552f..444aeeccb6daf9 100644
+--- a/fs/overlayfs/export.c
++++ b/fs/overlayfs/export.c
+@@ -176,35 +176,37 @@ static int ovl_connect_layer(struct dentry *dentry)
+ *
+ * Return 0 for upper file handle, > 0 for lower file handle or < 0 on error.
+ */
+-static int ovl_check_encode_origin(struct dentry *dentry)
++static int ovl_check_encode_origin(struct inode *inode)
+ {
+- struct ovl_fs *ofs = OVL_FS(dentry->d_sb);
++ struct ovl_fs *ofs = OVL_FS(inode->i_sb);
+ bool decodable = ofs->config.nfs_export;
++ struct dentry *dentry;
++ int err;
+
+ /* No upper layer? */
+ if (!ovl_upper_mnt(ofs))
+ return 1;
+
+ /* Lower file handle for non-upper non-decodable */
+- if (!ovl_dentry_upper(dentry) && !decodable)
++ if (!ovl_inode_upper(inode) && !decodable)
+ return 1;
+
+ /* Upper file handle for pure upper */
+- if (!ovl_dentry_lower(dentry))
++ if (!ovl_inode_lower(inode))
+ return 0;
+
+ /*
+ * Root is never indexed, so if there's an upper layer, encode upper for
+ * root.
+ */
+- if (dentry == dentry->d_sb->s_root)
++ if (inode == d_inode(inode->i_sb->s_root))
+ return 0;
+
+ /*
+ * Upper decodable file handle for non-indexed upper.
+ */
+- if (ovl_dentry_upper(dentry) && decodable &&
+- !ovl_test_flag(OVL_INDEX, d_inode(dentry)))
++ if (ovl_inode_upper(inode) && decodable &&
++ !ovl_test_flag(OVL_INDEX, inode))
+ return 0;
+
+ /*
+@@ -213,14 +215,23 @@ static int ovl_check_encode_origin(struct dentry *dentry)
+ * ovl_connect_layer() will try to make origin's layer "connected" by
+ * copying up a "connectable" ancestor.
+ */
+- if (d_is_dir(dentry) && decodable)
+- return ovl_connect_layer(dentry);
++ if (!decodable || !S_ISDIR(inode->i_mode))
++ return 1;
++
++ dentry = d_find_any_alias(inode);
++ if (!dentry)
++ return -ENOENT;
++
++ err = ovl_connect_layer(dentry);
++ dput(dentry);
++ if (err < 0)
++ return err;
+
+ /* Lower file handle for indexed and non-upper dir/non-dir */
+ return 1;
+ }
+
+-static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct dentry *dentry,
++static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct inode *inode,
+ u32 *fid, int buflen)
+ {
+ struct ovl_fh *fh = NULL;
+@@ -231,13 +242,13 @@ static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct dentry *dentry,
+ * Check if we should encode a lower or upper file handle and maybe
+ * copy up an ancestor to make lower file handle connectable.
+ */
+- err = enc_lower = ovl_check_encode_origin(dentry);
++ err = enc_lower = ovl_check_encode_origin(inode);
+ if (enc_lower < 0)
+ goto fail;
+
+ /* Encode an upper or lower file handle */
+- fh = ovl_encode_real_fh(ofs, enc_lower ? ovl_dentry_lower(dentry) :
+- ovl_dentry_upper(dentry), !enc_lower);
++ fh = ovl_encode_real_fh(ofs, enc_lower ? ovl_inode_lower(inode) :
++ ovl_inode_upper(inode), !enc_lower);
+ if (IS_ERR(fh))
+ return PTR_ERR(fh);
+
+@@ -251,8 +262,8 @@ static int ovl_dentry_to_fid(struct ovl_fs *ofs, struct dentry *dentry,
+ return err;
+
+ fail:
+- pr_warn_ratelimited("failed to encode file handle (%pd2, err=%i)\n",
+- dentry, err);
++ pr_warn_ratelimited("failed to encode file handle (ino=%lu, err=%i)\n",
++ inode->i_ino, err);
+ goto out;
+ }
+
+@@ -260,19 +271,13 @@ static int ovl_encode_fh(struct inode *inode, u32 *fid, int *max_len,
+ struct inode *parent)
+ {
+ struct ovl_fs *ofs = OVL_FS(inode->i_sb);
+- struct dentry *dentry;
+ int bytes, buflen = *max_len << 2;
+
+ /* TODO: encode connectable file handles */
+ if (parent)
+ return FILEID_INVALID;
+
+- dentry = d_find_any_alias(inode);
+- if (!dentry)
+- return FILEID_INVALID;
+-
+- bytes = ovl_dentry_to_fid(ofs, dentry, fid, buflen);
+- dput(dentry);
++ bytes = ovl_dentry_to_fid(ofs, inode, fid, buflen);
+ if (bytes <= 0)
+ return FILEID_INVALID;
+
+diff --git a/fs/overlayfs/namei.c b/fs/overlayfs/namei.c
+index 5764f91d283e70..42b73ae5ba01be 100644
+--- a/fs/overlayfs/namei.c
++++ b/fs/overlayfs/namei.c
+@@ -542,7 +542,7 @@ int ovl_verify_origin_xattr(struct ovl_fs *ofs, struct dentry *dentry,
+ struct ovl_fh *fh;
+ int err;
+
+- fh = ovl_encode_real_fh(ofs, real, is_upper);
++ fh = ovl_encode_real_fh(ofs, d_inode(real), is_upper);
+ err = PTR_ERR(fh);
+ if (IS_ERR(fh)) {
+ fh = NULL;
+@@ -738,7 +738,7 @@ int ovl_get_index_name(struct ovl_fs *ofs, struct dentry *origin,
+ struct ovl_fh *fh;
+ int err;
+
+- fh = ovl_encode_real_fh(ofs, origin, false);
++ fh = ovl_encode_real_fh(ofs, d_inode(origin), false);
+ if (IS_ERR(fh))
+ return PTR_ERR(fh);
+
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 0bfe35da4b7b7a..844874b4a91a94 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -869,7 +869,7 @@ int ovl_copy_up_with_data(struct dentry *dentry);
+ int ovl_maybe_copy_up(struct dentry *dentry, int flags);
+ int ovl_copy_xattr(struct super_block *sb, const struct path *path, struct dentry *new);
+ int ovl_set_attr(struct ovl_fs *ofs, struct dentry *upper, struct kstat *stat);
+-struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct dentry *real,
++struct ovl_fh *ovl_encode_real_fh(struct ovl_fs *ofs, struct inode *realinode,
+ bool is_upper);
+ struct ovl_fh *ovl_get_origin_fh(struct ovl_fs *ofs, struct dentry *origin);
+ int ovl_set_origin_fh(struct ovl_fs *ofs, const struct ovl_fh *fh,
+diff --git a/fs/smb/client/namespace.c b/fs/smb/client/namespace.c
+index 0f788031b7405f..e3f9213131c467 100644
+--- a/fs/smb/client/namespace.c
++++ b/fs/smb/client/namespace.c
+@@ -196,11 +196,28 @@ static struct vfsmount *cifs_do_automount(struct path *path)
+ struct smb3_fs_context tmp;
+ char *full_path;
+ struct vfsmount *mnt;
++ struct cifs_sb_info *mntpt_sb;
++ struct cifs_ses *ses;
+
+ if (IS_ROOT(mntpt))
+ return ERR_PTR(-ESTALE);
+
+- cur_ctx = CIFS_SB(mntpt->d_sb)->ctx;
++ mntpt_sb = CIFS_SB(mntpt->d_sb);
++ ses = cifs_sb_master_tcon(mntpt_sb)->ses;
++ cur_ctx = mntpt_sb->ctx;
++
++ /*
++ * At this point, the root session should be in the mntpt sb. We should
++ * bring the sb context passwords in sync with the root session's
++ * passwords. This would help prevent unnecessary retries and password
++ * swaps for automounts.
++ */
++ mutex_lock(&ses->session_mutex);
++ rc = smb3_sync_session_ctx_passwords(mntpt_sb, ses);
++ mutex_unlock(&ses->session_mutex);
++
++ if (rc)
++ return ERR_PTR(rc);
+
+ fc = fs_context_for_submount(path->mnt->mnt_sb->s_type, mntpt);
+ if (IS_ERR(fc))
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 04ffc5b158c3bf..c763a2f7df6640 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -695,6 +695,9 @@ void smb2_send_interim_resp(struct ksmbd_work *work, __le32 status)
+ struct smb2_hdr *rsp_hdr;
+ struct ksmbd_work *in_work = ksmbd_alloc_work_struct();
+
++ if (!in_work)
++ return;
++
+ if (allocate_interim_rsp_buf(in_work)) {
+ pr_err("smb_allocate_rsp_buf failed!\n");
+ ksmbd_free_work_struct(in_work);
+@@ -3985,6 +3988,26 @@ static int smb2_populate_readdir_entry(struct ksmbd_conn *conn, int info_level,
+ posix_info->DeviceId = cpu_to_le32(ksmbd_kstat->kstat->rdev);
+ posix_info->HardLinks = cpu_to_le32(ksmbd_kstat->kstat->nlink);
+ posix_info->Mode = cpu_to_le32(ksmbd_kstat->kstat->mode & 0777);
++ switch (ksmbd_kstat->kstat->mode & S_IFMT) {
++ case S_IFDIR:
++ posix_info->Mode |= cpu_to_le32(POSIX_TYPE_DIR << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFLNK:
++ posix_info->Mode |= cpu_to_le32(POSIX_TYPE_SYMLINK << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFCHR:
++ posix_info->Mode |= cpu_to_le32(POSIX_TYPE_CHARDEV << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFBLK:
++ posix_info->Mode |= cpu_to_le32(POSIX_TYPE_BLKDEV << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFIFO:
++ posix_info->Mode |= cpu_to_le32(POSIX_TYPE_FIFO << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFSOCK:
++ posix_info->Mode |= cpu_to_le32(POSIX_TYPE_SOCKET << POSIX_FILETYPE_SHIFT);
++ }
++
+ posix_info->Inode = cpu_to_le64(ksmbd_kstat->kstat->ino);
+ posix_info->DosAttributes =
+ S_ISDIR(ksmbd_kstat->kstat->mode) ?
+@@ -5173,6 +5196,26 @@ static int find_file_posix_info(struct smb2_query_info_rsp *rsp,
+ file_info->AllocationSize = cpu_to_le64(stat.blocks << 9);
+ file_info->HardLinks = cpu_to_le32(stat.nlink);
+ file_info->Mode = cpu_to_le32(stat.mode & 0777);
++ switch (stat.mode & S_IFMT) {
++ case S_IFDIR:
++ file_info->Mode |= cpu_to_le32(POSIX_TYPE_DIR << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFLNK:
++ file_info->Mode |= cpu_to_le32(POSIX_TYPE_SYMLINK << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFCHR:
++ file_info->Mode |= cpu_to_le32(POSIX_TYPE_CHARDEV << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFBLK:
++ file_info->Mode |= cpu_to_le32(POSIX_TYPE_BLKDEV << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFIFO:
++ file_info->Mode |= cpu_to_le32(POSIX_TYPE_FIFO << POSIX_FILETYPE_SHIFT);
++ break;
++ case S_IFSOCK:
++ file_info->Mode |= cpu_to_le32(POSIX_TYPE_SOCKET << POSIX_FILETYPE_SHIFT);
++ }
++
+ file_info->DeviceId = cpu_to_le32(stat.rdev);
+
+ /*
+diff --git a/fs/smb/server/smb2pdu.h b/fs/smb/server/smb2pdu.h
+index 649dacf7e8c493..17a0b18a8406b3 100644
+--- a/fs/smb/server/smb2pdu.h
++++ b/fs/smb/server/smb2pdu.h
+@@ -502,4 +502,14 @@ static inline void *smb2_get_msg(void *buf)
+ return buf + 4;
+ }
+
++#define POSIX_TYPE_FILE 0
++#define POSIX_TYPE_DIR 1
++#define POSIX_TYPE_SYMLINK 2
++#define POSIX_TYPE_CHARDEV 3
++#define POSIX_TYPE_BLKDEV 4
++#define POSIX_TYPE_FIFO 5
++#define POSIX_TYPE_SOCKET 6
++
++#define POSIX_FILETYPE_SHIFT 12
++
+ #endif /* _SMB2PDU_H */
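
With the POSIX_TYPE_* values and POSIX_FILETYPE_SHIFT above, the smb2pdu.c hunks pack the file type just above the 12 permission bits of the reported mode. A standalone check of the encoding, with illustrative values:

    #include <stdio.h>

    #define POSIX_TYPE_DIR        1
    #define POSIX_FILETYPE_SHIFT  12

    int main(void)
    {
            unsigned int mode = 0755;                        /* rwxr-xr-x */

            mode |= POSIX_TYPE_DIR << POSIX_FILETYPE_SHIFT;  /* mark as directory */
            printf("wire mode: 0x%x\n", mode);               /* 0x11ed */
            return 0;
    }
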
+diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
+index 7cbd580120d129..ee825971abd9ab 100644
+--- a/fs/smb/server/vfs.c
++++ b/fs/smb/server/vfs.c
+@@ -1264,6 +1264,8 @@ int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name,
+ filepath,
+ flags,
+ path);
++ if (!is_last)
++ next[0] = '/';
+ if (err)
+ goto out2;
+ else if (is_last)
+@@ -1271,7 +1273,6 @@ int ksmbd_vfs_kern_path_locked(struct ksmbd_work *work, char *name,
+ path_put(parent_path);
+ *parent_path = *path;
+
+- next[0] = '/';
+ remain_len -= filename_len + 1;
+ }
+
+diff --git a/include/linux/bus/stm32_firewall_device.h b/include/linux/bus/stm32_firewall_device.h
+index 18e0a2fc3816ac..5178b72bc92098 100644
+--- a/include/linux/bus/stm32_firewall_device.h
++++ b/include/linux/bus/stm32_firewall_device.h
+@@ -115,7 +115,7 @@ void stm32_firewall_release_access_by_id(struct stm32_firewall *firewall, u32 su
+ #else /* CONFIG_STM32_FIREWALL */
+
+ int stm32_firewall_get_firewall(struct device_node *np, struct stm32_firewall *firewall,
+- unsigned int nb_firewall);
++ unsigned int nb_firewall)
+ {
+ return -ENODEV;
+ }
+diff --git a/include/linux/iomap.h b/include/linux/iomap.h
+index f61407e3b12192..d204dcd35063d7 100644
+--- a/include/linux/iomap.h
++++ b/include/linux/iomap.h
+@@ -330,7 +330,7 @@ struct iomap_ioend {
+ u16 io_type;
+ u16 io_flags; /* IOMAP_F_* */
+ struct inode *io_inode; /* file being written to */
+- size_t io_size; /* size of the extent */
++ size_t io_size; /* size of data within eof */
+ loff_t io_offset; /* offset in the file */
+ sector_t io_sector; /* start sector of ioend */
+ struct bio io_bio; /* MUST BE LAST! */
+diff --git a/include/linux/mount.h b/include/linux/mount.h
+index c34c18b4e8f36f..04213d8ef8376d 100644
+--- a/include/linux/mount.h
++++ b/include/linux/mount.h
+@@ -50,7 +50,7 @@ struct path;
+ #define MNT_ATIME_MASK (MNT_NOATIME | MNT_NODIRATIME | MNT_RELATIME )
+
+ #define MNT_INTERNAL_FLAGS (MNT_SHARED | MNT_WRITE_HOLD | MNT_INTERNAL | \
+- MNT_DOOMED | MNT_SYNC_UMOUNT | MNT_MARKED | MNT_ONRB)
++ MNT_DOOMED | MNT_SYNC_UMOUNT | MNT_MARKED)
+
+ #define MNT_INTERNAL 0x4000
+
+@@ -64,7 +64,6 @@ struct path;
+ #define MNT_SYNC_UMOUNT 0x2000000
+ #define MNT_MARKED 0x4000000
+ #define MNT_UMOUNT 0x8000000
+-#define MNT_ONRB 0x10000000
+
+ struct vfsmount {
+ struct dentry *mnt_root; /* root of the mounted tree */
+diff --git a/include/linux/netfs.h b/include/linux/netfs.h
+index 5eaceef41e6cac..474481ee8b7c29 100644
+--- a/include/linux/netfs.h
++++ b/include/linux/netfs.h
+@@ -269,7 +269,6 @@ struct netfs_io_request {
+ size_t prev_donated; /* Fallback for subreq->prev_donated */
+ refcount_t ref;
+ unsigned long flags;
+-#define NETFS_RREQ_COPY_TO_CACHE 1 /* Need to write to the cache */
+ #define NETFS_RREQ_NO_UNLOCK_FOLIO 2 /* Don't unlock no_unlock_folio on completion */
+ #define NETFS_RREQ_DONT_UNLOCK_FOLIOS 3 /* Don't unlock the folios on completion */
+ #define NETFS_RREQ_FAILED 4 /* The request failed */
+diff --git a/include/linux/writeback.h b/include/linux/writeback.h
+index d6db822e4bb30c..641a057e041329 100644
+--- a/include/linux/writeback.h
++++ b/include/linux/writeback.h
+@@ -217,7 +217,7 @@ void wbc_attach_and_unlock_inode(struct writeback_control *wbc,
+ struct inode *inode)
+ __releases(&inode->i_lock);
+ void wbc_detach_inode(struct writeback_control *wbc);
+-void wbc_account_cgroup_owner(struct writeback_control *wbc, struct page *page,
++void wbc_account_cgroup_owner(struct writeback_control *wbc, struct folio *folio,
+ size_t bytes);
+ int cgroup_writeback_by_id(u64 bdi_id, int memcg_id,
+ enum wb_reason reason, struct wb_completion *done);
+@@ -324,7 +324,7 @@ static inline void wbc_init_bio(struct writeback_control *wbc, struct bio *bio)
+ }
+
+ static inline void wbc_account_cgroup_owner(struct writeback_control *wbc,
+- struct page *page, size_t bytes)
++ struct folio *folio, size_t bytes)
+ {
+ }
+
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index c0deaafebfdc0b..4bd93571e6c1b5 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -281,7 +281,7 @@ static inline int inet_csk_reqsk_queue_len(const struct sock *sk)
+
+ static inline int inet_csk_reqsk_queue_is_full(const struct sock *sk)
+ {
+- return inet_csk_reqsk_queue_len(sk) >= READ_ONCE(sk->sk_max_ack_backlog);
++ return inet_csk_reqsk_queue_len(sk) > READ_ONCE(sk->sk_max_ack_backlog);
+ }
+
+ bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req);
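
The relaxed comparison changes when the request queue reports full. One visible effect, shown below as a standalone arithmetic sketch: with a sk_max_ack_backlog of 0, the old test was full before anything was queued, while the new one admits a single pending request.

    #include <stdio.h>

    int main(void)
    {
            int len = 0, max = 0;

            printf("old: %s\n", len >= max ? "full" : "room");  /* full */
            printf("new: %s\n", len >  max ? "full" : "room");  /* room */
            return 0;
    }
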
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index 8932ec5bd7c029..20c5374e922ef5 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -329,7 +329,6 @@ struct ufs_pwr_mode_info {
+ * @program_key: program or evict an inline encryption key
+ * @fill_crypto_prdt: initialize crypto-related fields in the PRDT
+ * @event_notify: called to notify important events
+- * @reinit_notify: called to notify reinit of UFSHCD during max gear switch
+ * @mcq_config_resource: called to configure MCQ platform resources
+ * @get_hba_mac: reports maximum number of outstanding commands supported by
+ * the controller. Should be implemented for UFSHCI 4.0 or later
+@@ -381,7 +380,6 @@ struct ufs_hba_variant_ops {
+ void *prdt, unsigned int num_segments);
+ void (*event_notify)(struct ufs_hba *hba,
+ enum ufs_event_type evt, void *data);
+- void (*reinit_notify)(struct ufs_hba *);
+ int (*mcq_config_resource)(struct ufs_hba *hba);
+ int (*get_hba_mac)(struct ufs_hba *hba);
+ int (*op_runtime_config)(struct ufs_hba *hba);
+diff --git a/io_uring/eventfd.c b/io_uring/eventfd.c
+index e37fddd5d9ce8e..ffc4bd17d0786c 100644
+--- a/io_uring/eventfd.c
++++ b/io_uring/eventfd.c
+@@ -38,7 +38,7 @@ static void io_eventfd_do_signal(struct rcu_head *rcu)
+ eventfd_signal_mask(ev_fd->cq_ev_fd, EPOLL_URING_WAKE);
+
+ if (refcount_dec_and_test(&ev_fd->refs))
+- io_eventfd_free(rcu);
++ call_rcu(&ev_fd->rcu, io_eventfd_free);
+ }
+
+ void io_eventfd_signal(struct io_ring_ctx *ctx)
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 9849da128364af..21f1bcba2f52b5 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -1244,10 +1244,7 @@ static void io_req_normal_work_add(struct io_kiocb *req)
+
+ /* SQPOLL doesn't need the task_work added, it'll run it itself */
+ if (ctx->flags & IORING_SETUP_SQPOLL) {
+- struct io_sq_data *sqd = ctx->sq_data;
+-
+- if (sqd->thread)
+- __set_notify_signal(sqd->thread);
++ __set_notify_signal(req->task);
+ return;
+ }
+
+diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c
+index 1cfcc735b8e38e..5bc54c6df20fd6 100644
+--- a/io_uring/sqpoll.c
++++ b/io_uring/sqpoll.c
+@@ -275,8 +275,12 @@ static int io_sq_thread(void *data)
+ DEFINE_WAIT(wait);
+
+ /* offload context creation failed, just exit */
+- if (!current->io_uring)
++ if (!current->io_uring) {
++ mutex_lock(&sqd->lock);
++ sqd->thread = NULL;
++ mutex_unlock(&sqd->lock);
+ goto err_out;
++ }
+
+ snprintf(buf, sizeof(buf), "iou-sqp-%d", sqd->task_pid);
+ set_task_comm(current, buf);
+diff --git a/io_uring/timeout.c b/io_uring/timeout.c
+index 9973876d91b0ef..21c4bfea79f1c9 100644
+--- a/io_uring/timeout.c
++++ b/io_uring/timeout.c
+@@ -409,10 +409,12 @@ static int io_timeout_update(struct io_ring_ctx *ctx, __u64 user_data,
+
+ timeout->off = 0; /* noseq */
+ data = req->async_data;
++ data->ts = *ts;
++
+ list_add_tail(&timeout->list, &ctx->timeout_list);
+ hrtimer_init(&data->timer, io_timeout_get_clock(data), mode);
+ data->timer.function = io_timeout_fn;
+- hrtimer_start(&data->timer, timespec64_to_ktime(*ts), mode);
++ hrtimer_start(&data->timer, timespec64_to_ktime(data->ts), mode);
+ return 0;
+ }
+
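
The update path now persists the new expiry into the request-owned async data before re-arming, so the hrtimer is started from storage whose lifetime matches the timer rather than from the transient *ts argument. A stubbed sketch of that ordering (not the io_uring code):

    /* Stubs, for illustration only. */
    struct timeout_data {
            long long ts_ns;        /* owned copy; lives as long as the timer */
    };

    static void hrtimer_start_stub(long long expiry_ns)
    {
            (void)expiry_ns;
    }

    static void timeout_update(struct timeout_data *data, const long long *ts)
    {
            data->ts_ns = *ts;                  /* persist the update first ... */
            hrtimer_start_stub(data->ts_ns);    /* ... then arm from the copy */
    }

    int main(void)
    {
            struct timeout_data data;
            long long ts = 1000000;

            timeout_update(&data, &ts);
            return 0;
    }
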
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index a4dd285cdf39b7..24ece85fd3b126 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -862,7 +862,15 @@ static int generate_sched_domains(cpumask_var_t **domains,
+ */
+ if (cgrpv2) {
+ for (i = 0; i < ndoms; i++) {
+- cpumask_copy(doms[i], csa[i]->effective_cpus);
++ /*
++ * The top cpuset may contain some boot time isolated
++ * CPUs that need to be excluded from the sched domain.
++ */
++ if (csa[i] == &top_cpuset)
++ cpumask_and(doms[i], csa[i]->effective_cpus,
++ housekeeping_cpumask(HK_TYPE_DOMAIN));
++ else
++ cpumask_copy(doms[i], csa[i]->effective_cpus);
+ if (dattr)
+ dattr[i] = SD_ATTR_INIT;
+ }
+@@ -3102,29 +3110,6 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
+ int retval = -ENODEV;
+
+ buf = strstrip(buf);
+-
+- /*
+- * CPU or memory hotunplug may leave @cs w/o any execution
+- * resources, in which case the hotplug code asynchronously updates
+- * configuration and transfers all tasks to the nearest ancestor
+- * which can execute.
+- *
+- * As writes to "cpus" or "mems" may restore @cs's execution
+- * resources, wait for the previously scheduled operations before
+- * proceeding, so that we don't end up keep removing tasks added
+- * after execution capability is restored.
+- *
+- * cpuset_handle_hotplug may call back into cgroup core asynchronously
+- * via cgroup_transfer_tasks() and waiting for it from a cgroupfs
+- * operation like this one can lead to a deadlock through kernfs
+- * active_ref protection. Let's break the protection. Losing the
+- * protection is okay as we check whether @cs is online after
+- * grabbing cpuset_mutex anyway. This only happens on the legacy
+- * hierarchies.
+- */
+- css_get(&cs->css);
+- kernfs_break_active_protection(of->kn);
+-
+ cpus_read_lock();
+ mutex_lock(&cpuset_mutex);
+ if (!is_cpuset_online(cs))
+@@ -3155,8 +3140,6 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
+ out_unlock:
+ mutex_unlock(&cpuset_mutex);
+ cpus_read_unlock();
+- kernfs_unbreak_active_protection(of->kn);
+- css_put(&cs->css);
+ flush_workqueue(cpuset_migrate_mm_wq);
+ return retval ?: nbytes;
+ }
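
The top-cpuset special case intersects effective_cpus with housekeeping_cpumask(HK_TYPE_DOMAIN) so boot-time isolated CPUs stay out of the root sched domain. The effect of cpumask_and(), shown with plain bitmasks as a standalone sketch (ints stand in for cpumasks):

    #include <stdio.h>

    int main(void)
    {
            unsigned int effective    = 0xff; /* CPUs 0-7 in the top cpuset */
            unsigned int housekeeping = 0xf3; /* CPUs 2,3 isolated at boot */

            printf("domain cpus: 0x%x\n", effective & housekeeping); /* 0xf3 */
            return 0;
    }
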
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 40f915f893e2ed..f928a67a07d29a 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -2917,7 +2917,7 @@ static void put_prev_task_scx(struct rq *rq, struct task_struct *p,
+ */
+ if (p->scx.slice && !scx_rq_bypassing(rq)) {
+ dispatch_enqueue(&rq->scx.local_dsq, p, SCX_ENQ_HEAD);
+- return;
++ goto switch_class;
+ }
+
+ /*
+@@ -2934,6 +2934,7 @@ static void put_prev_task_scx(struct rq *rq, struct task_struct *p,
+ }
+ }
+
++switch_class:
+ if (next && next->sched_class != &ext_sched_class)
+ switch_class(rq, next);
+ }
+@@ -3239,16 +3240,8 @@ static void reset_idle_masks(void)
+ cpumask_copy(idle_masks.smt, cpu_online_mask);
+ }
+
+-void __scx_update_idle(struct rq *rq, bool idle)
++static void update_builtin_idle(int cpu, bool idle)
+ {
+- int cpu = cpu_of(rq);
+-
+- if (SCX_HAS_OP(update_idle) && !scx_rq_bypassing(rq)) {
+- SCX_CALL_OP(SCX_KF_REST, update_idle, cpu_of(rq), idle);
+- if (!static_branch_unlikely(&scx_builtin_idle_enabled))
+- return;
+- }
+-
+ if (idle)
+ cpumask_set_cpu(cpu, idle_masks.cpu);
+ else
+@@ -3275,6 +3268,57 @@ void __scx_update_idle(struct rq *rq, bool idle)
+ #endif
+ }
+
++/*
++ * Update the idle state of a CPU to @idle.
++ *
++ * If @do_notify is true, ops.update_idle() is invoked to notify the scx
++ * scheduler of an actual idle state transition (idle to busy or vice
++ * versa). If @do_notify is false, only the idle state in the idle masks is
++ * refreshed without invoking ops.update_idle().
++ *
++ * This distinction is necessary, because an idle CPU can be "reserved" and
++ * awakened via scx_bpf_pick_idle_cpu() + scx_bpf_kick_cpu(), marking it as
++ * busy even if no tasks are dispatched. In this case, the CPU may return
++ * to idle without a true state transition. Refreshing the idle masks
++ * without invoking ops.update_idle() ensures accurate idle state tracking
++ * while avoiding unnecessary updates and maintaining balanced state
++ * transitions.
++ */
++void __scx_update_idle(struct rq *rq, bool idle, bool do_notify)
++{
++ int cpu = cpu_of(rq);
++
++ lockdep_assert_rq_held(rq);
++
++ /*
++ * Trigger ops.update_idle() only when transitioning from a task to
++ * the idle thread and vice versa.
++ *
++ * Idle transitions are indicated by do_notify being set to true,
++ * managed by put_prev_task_idle()/set_next_task_idle().
++ */
++ if (SCX_HAS_OP(update_idle) && do_notify && !scx_rq_bypassing(rq))
++ SCX_CALL_OP(SCX_KF_REST, update_idle, cpu_of(rq), idle);
++
++ /*
++ * Update the idle masks:
++ * - for real idle transitions (do_notify == true)
++ * - for idle-to-idle transitions (indicated by the previous task
++ * being the idle thread, managed by pick_task_idle())
++ *
++ * Skip updating idle masks if the previous task is not the idle
++ * thread, since set_next_task_idle() has already handled it when
++ * transitioning from a task to the idle thread (calling this
++ * function with do_notify == true).
++ *
++ * In this way we can avoid updating the idle masks twice,
++ * unnecessarily.
++ */
++ if (static_branch_likely(&scx_builtin_idle_enabled))
++ if (do_notify || is_idle_task(rq->curr))
++ update_builtin_idle(cpu, idle);
++}
++
+ static void handle_hotplug(struct rq *rq, bool online)
+ {
+ int cpu = cpu_of(rq);
+@@ -4348,10 +4392,9 @@ static void scx_ops_bypass(bool bypass)
+ */
+ for_each_possible_cpu(cpu) {
+ struct rq *rq = cpu_rq(cpu);
+- struct rq_flags rf;
+ struct task_struct *p, *n;
+
+- rq_lock(rq, &rf);
++ raw_spin_rq_lock(rq);
+
+ if (bypass) {
+ WARN_ON_ONCE(rq->scx.flags & SCX_RQ_BYPASSING);
+@@ -4367,7 +4410,7 @@ static void scx_ops_bypass(bool bypass)
+ * sees scx_rq_bypassing() before moving tasks to SCX.
+ */
+ if (!scx_enabled()) {
+- rq_unlock(rq, &rf);
++ raw_spin_rq_unlock(rq);
+ continue;
+ }
+
+@@ -4387,10 +4430,11 @@ static void scx_ops_bypass(bool bypass)
+ sched_enq_and_set_task(&ctx);
+ }
+
+- rq_unlock(rq, &rf);
+-
+ /* resched to restore ticks and idle state */
+- resched_cpu(cpu);
++ if (cpu_online(cpu) || cpu == smp_processor_id())
++ resched_curr(rq);
++
++ raw_spin_rq_unlock(rq);
+ }
+ unlock:
+ raw_spin_unlock_irqrestore(&__scx_ops_bypass_lock, flags);
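
The do_notify contract spelled out in the __scx_update_idle() comment reduces to a small decision table: notify the BPF scheduler only on real transitions, and refresh the idle masks on real transitions plus idle-to-idle re-picks. As a standalone sketch with stub logic:

    #include <stdbool.h>
    #include <stdio.h>

    static bool should_notify(bool do_notify)
    {
            return do_notify;       /* ops.update_idle() only on real transitions */
    }

    static bool should_update_masks(bool do_notify, bool prev_was_idle)
    {
            return do_notify || prev_was_idle;   /* also on idle-to-idle re-picks */
    }

    int main(void)
    {
            /* pick_task_idle() path: do_notify == false, previous task was idle */
            printf("notify=%d masks=%d\n",
                   should_notify(false), should_update_masks(false, true));
            return 0;
    }
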
+diff --git a/kernel/sched/ext.h b/kernel/sched/ext.h
+index b1675bb59fc461..4d022d17ac7dd6 100644
+--- a/kernel/sched/ext.h
++++ b/kernel/sched/ext.h
+@@ -57,15 +57,15 @@ static inline void init_sched_ext_class(void) {}
+ #endif /* CONFIG_SCHED_CLASS_EXT */
+
+ #if defined(CONFIG_SCHED_CLASS_EXT) && defined(CONFIG_SMP)
+-void __scx_update_idle(struct rq *rq, bool idle);
++void __scx_update_idle(struct rq *rq, bool idle, bool do_notify);
+
+-static inline void scx_update_idle(struct rq *rq, bool idle)
++static inline void scx_update_idle(struct rq *rq, bool idle, bool do_notify)
+ {
+ if (scx_enabled())
+- __scx_update_idle(rq, idle);
++ __scx_update_idle(rq, idle, do_notify);
+ }
+ #else
+-static inline void scx_update_idle(struct rq *rq, bool idle) {}
++static inline void scx_update_idle(struct rq *rq, bool idle, bool do_notify) {}
+ #endif
+
+ #ifdef CONFIG_CGROUP_SCHED
+diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
+index d2f096bb274c3f..53bb9193c537a8 100644
+--- a/kernel/sched/idle.c
++++ b/kernel/sched/idle.c
+@@ -453,19 +453,20 @@ static void wakeup_preempt_idle(struct rq *rq, struct task_struct *p, int flags)
+ static void put_prev_task_idle(struct rq *rq, struct task_struct *prev, struct task_struct *next)
+ {
+ dl_server_update_idle_time(rq, prev);
+- scx_update_idle(rq, false);
++ scx_update_idle(rq, false, true);
+ }
+
+ static void set_next_task_idle(struct rq *rq, struct task_struct *next, bool first)
+ {
+ update_idle_core(rq);
+- scx_update_idle(rq, true);
++ scx_update_idle(rq, true, true);
+ schedstat_inc(rq->sched_goidle);
+ next->se.exec_start = rq_clock_task(rq);
+ }
+
+ struct task_struct *pick_task_idle(struct rq *rq)
+ {
++ scx_update_idle(rq, true, false);
+ return rq->idle;
+ }
+
+diff --git a/net/802/psnap.c b/net/802/psnap.c
+index fca9d454905fe3..389df460c8c4b9 100644
+--- a/net/802/psnap.c
++++ b/net/802/psnap.c
+@@ -55,11 +55,11 @@ static int snap_rcv(struct sk_buff *skb, struct net_device *dev,
+ goto drop;
+
+ rcu_read_lock();
+- proto = find_snap_client(skb_transport_header(skb));
++ proto = find_snap_client(skb->data);
+ if (proto) {
+ /* Pass the frame on. */
+- skb->transport_header += 5;
+ skb_pull_rcsum(skb, 5);
++ skb_reset_transport_header(skb);
+ rc = proto->rcvfunc(skb, dev, &snap_packet_type, orig_dev);
+ }
+ rcu_read_unlock();
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index c86f4e42e69cab..7b2b04d6b85630 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -1031,9 +1031,9 @@ static bool adv_use_rpa(struct hci_dev *hdev, uint32_t flags)
+
+ static int hci_set_random_addr_sync(struct hci_dev *hdev, bdaddr_t *rpa)
+ {
+- /* If we're advertising or initiating an LE connection we can't
+- * go ahead and change the random address at this time. This is
+- * because the eventual initiator address used for the
++	/* If a random_addr has been set and we're advertising or initiating an
++	 * LE connection, we can't go ahead and change the random address at
++	 * this time. This is because the eventual initiator address used for the
+ * subsequently created connection will be undefined (some
+ * controllers use the new address and others the one we had
+ * when the operation started).
+@@ -1041,8 +1041,9 @@ static int hci_set_random_addr_sync(struct hci_dev *hdev, bdaddr_t *rpa)
+ * In this kind of scenario skip the update and let the random
+ * address be updated at the next cycle.
+ */
+- if (hci_dev_test_flag(hdev, HCI_LE_ADV) ||
+- hci_lookup_le_connect(hdev)) {
++ if (bacmp(&hdev->random_addr, BDADDR_ANY) &&
++ (hci_dev_test_flag(hdev, HCI_LE_ADV) ||
++ hci_lookup_le_connect(hdev))) {
+ bt_dev_dbg(hdev, "Deferring random address update");
+ hci_dev_set_flag(hdev, HCI_RPA_EXPIRED);
+ return 0;
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 2343e15f8938ec..7dc315c1658e7d 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -7596,6 +7596,24 @@ static void device_added(struct sock *sk, struct hci_dev *hdev,
+ mgmt_event(MGMT_EV_DEVICE_ADDED, hdev, &ev, sizeof(ev), sk);
+ }
+
++static void add_device_complete(struct hci_dev *hdev, void *data, int err)
++{
++ struct mgmt_pending_cmd *cmd = data;
++ struct mgmt_cp_add_device *cp = cmd->param;
++
++ if (!err) {
++ device_added(cmd->sk, hdev, &cp->addr.bdaddr, cp->addr.type,
++ cp->action);
++ device_flags_changed(NULL, hdev, &cp->addr.bdaddr,
++ cp->addr.type, hdev->conn_flags,
++ PTR_UINT(cmd->user_data));
++ }
++
++ mgmt_cmd_complete(cmd->sk, hdev->id, MGMT_OP_ADD_DEVICE,
++ mgmt_status(err), &cp->addr, sizeof(cp->addr));
++ mgmt_pending_free(cmd);
++}
++
+ static int add_device_sync(struct hci_dev *hdev, void *data)
+ {
+ return hci_update_passive_scan_sync(hdev);
+@@ -7604,6 +7622,7 @@ static int add_device_sync(struct hci_dev *hdev, void *data)
+ static int add_device(struct sock *sk, struct hci_dev *hdev,
+ void *data, u16 len)
+ {
++ struct mgmt_pending_cmd *cmd;
+ struct mgmt_cp_add_device *cp = data;
+ u8 auto_conn, addr_type;
+ struct hci_conn_params *params;
+@@ -7684,9 +7703,24 @@ static int add_device(struct sock *sk, struct hci_dev *hdev,
+ current_flags = params->flags;
+ }
+
+- err = hci_cmd_sync_queue(hdev, add_device_sync, NULL, NULL);
+- if (err < 0)
++ cmd = mgmt_pending_new(sk, MGMT_OP_ADD_DEVICE, hdev, data, len);
++ if (!cmd) {
++ err = -ENOMEM;
+ goto unlock;
++ }
++
++ cmd->user_data = UINT_PTR(current_flags);
++
++ err = hci_cmd_sync_queue(hdev, add_device_sync, cmd,
++ add_device_complete);
++ if (err < 0) {
++ err = mgmt_cmd_complete(sk, hdev->id, MGMT_OP_ADD_DEVICE,
++ MGMT_STATUS_FAILED, &cp->addr,
++ sizeof(cp->addr));
++ mgmt_pending_free(cmd);
++ }
++
++ goto unlock;
+
+ added:
+ device_added(sk, hdev, &cp->addr.bdaddr, cp->addr.type, cp->action);
+diff --git a/net/bluetooth/rfcomm/tty.c b/net/bluetooth/rfcomm/tty.c
+index af80d599c33715..21a5b5535ebceb 100644
+--- a/net/bluetooth/rfcomm/tty.c
++++ b/net/bluetooth/rfcomm/tty.c
+@@ -201,14 +201,14 @@ static ssize_t address_show(struct device *tty_dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct rfcomm_dev *dev = dev_get_drvdata(tty_dev);
+- return sprintf(buf, "%pMR\n", &dev->dst);
++ return sysfs_emit(buf, "%pMR\n", &dev->dst);
+ }
+
+ static ssize_t channel_show(struct device *tty_dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct rfcomm_dev *dev = dev_get_drvdata(tty_dev);
+- return sprintf(buf, "%d\n", dev->channel);
++ return sysfs_emit(buf, "%d\n", dev->channel);
+ }
+
+ static DEVICE_ATTR_RO(address);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index f3fa8353d262b0..1867a6a8d76da9 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -753,6 +753,36 @@ int dev_fill_forward_path(const struct net_device *dev, const u8 *daddr,
+ }
+ EXPORT_SYMBOL_GPL(dev_fill_forward_path);
+
++/* must be called under rcu_read_lock(), as we don't take a reference */
++static struct napi_struct *napi_by_id(unsigned int napi_id)
++{
++ unsigned int hash = napi_id % HASH_SIZE(napi_hash);
++ struct napi_struct *napi;
++
++ hlist_for_each_entry_rcu(napi, &napi_hash[hash], napi_hash_node)
++ if (napi->napi_id == napi_id)
++ return napi;
++
++ return NULL;
++}
++
++/* must be called under rcu_read_lock(), as we don't take a reference */
++struct napi_struct *netdev_napi_by_id(struct net *net, unsigned int napi_id)
++{
++ struct napi_struct *napi;
++
++ napi = napi_by_id(napi_id);
++ if (!napi)
++ return NULL;
++
++ if (WARN_ON_ONCE(!napi->dev))
++ return NULL;
++ if (!net_eq(net, dev_net(napi->dev)))
++ return NULL;
++
++ return napi;
++}
++
+ /**
+ * __dev_get_by_name - find a device by its name
+ * @net: the applicable net namespace
+@@ -6291,19 +6321,6 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
+ }
+ EXPORT_SYMBOL(napi_complete_done);
+
+-/* must be called under rcu_read_lock(), as we dont take a reference */
+-struct napi_struct *napi_by_id(unsigned int napi_id)
+-{
+- unsigned int hash = napi_id % HASH_SIZE(napi_hash);
+- struct napi_struct *napi;
+-
+- hlist_for_each_entry_rcu(napi, &napi_hash[hash], napi_hash_node)
+- if (napi->napi_id == napi_id)
+- return napi;
+-
+- return NULL;
+-}
+-
+ static void skb_defer_free_flush(struct softnet_data *sd)
+ {
+ struct sk_buff *skb, *next;
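
dev.c keeps the raw hash walk (napi_by_id()) file-local and exports netdev_napi_by_id(), which additionally pins the result to the caller's network namespace. The layering, reduced to a standalone sketch with stub types (an array stands in for the napi_hash buckets):

    #include <stddef.h>

    struct net  { int id; };
    struct napi { unsigned int id; struct net *net; };

    static struct napi table[4];

    /* Private: lookup by id only. */
    static struct napi *napi_by_id(unsigned int id)
    {
            for (size_t i = 0; i < 4; i++)
                    if (table[i].id == id)
                            return &table[i];
            return NULL;
    }

    /* Exported: lookup scoped to the caller's namespace. */
    static struct napi *netdev_napi_by_id(struct net *net, unsigned int id)
    {
            struct napi *napi = napi_by_id(id);

            if (!napi || !napi->net || napi->net != net)
                    return NULL;
            return napi;
    }

    int main(void)
    {
            struct net a = { 1 }, b = { 2 };

            table[0] = (struct napi){ .id = 7, .net = &a };
            return netdev_napi_by_id(&b, 7) == NULL ? 0 : 1;  /* cross-ns miss */
    }
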
+diff --git a/net/core/dev.h b/net/core/dev.h
+index 5654325c5b710c..2e3bb7669984a6 100644
+--- a/net/core/dev.h
++++ b/net/core/dev.h
+@@ -22,6 +22,8 @@ struct sd_flow_limit {
+
+ extern int netdev_flow_limit_table_len;
+
++struct napi_struct *netdev_napi_by_id(struct net *net, unsigned int napi_id);
++
+ #ifdef CONFIG_PROC_FS
+ int __init dev_proc_init(void);
+ #else
+@@ -146,7 +148,6 @@ void xdp_do_check_flushed(struct napi_struct *napi);
+ static inline void xdp_do_check_flushed(struct napi_struct *napi) { }
+ #endif
+
+-struct napi_struct *napi_by_id(unsigned int napi_id);
+ void kick_defer_list_purge(struct softnet_data *sd, unsigned int cpu);
+
+ #define XMIT_RECURSION_LIMIT 8
+diff --git a/net/core/link_watch.c b/net/core/link_watch.c
+index 1b4d39e3808427..cb04ef2b9807c9 100644
+--- a/net/core/link_watch.c
++++ b/net/core/link_watch.c
+@@ -42,14 +42,18 @@ static unsigned int default_operstate(const struct net_device *dev)
+ * first check whether lower is indeed the source of its down state.
+ */
+ if (!netif_carrier_ok(dev)) {
+- int iflink = dev_get_iflink(dev);
+ struct net_device *peer;
++ int iflink;
+
+ /* If called from netdev_run_todo()/linkwatch_sync_dev(),
+ * dev_net(dev) can be already freed, and RTNL is not held.
+ */
+- if (dev->reg_state == NETREG_UNREGISTERED ||
+- iflink == dev->ifindex)
++ if (dev->reg_state <= NETREG_REGISTERED)
++ iflink = dev_get_iflink(dev);
++ else
++ iflink = dev->ifindex;
++
++ if (iflink == dev->ifindex)
+ return IF_OPER_DOWN;
+
+ ASSERT_RTNL();
+diff --git a/net/core/netdev-genl.c b/net/core/netdev-genl.c
+index d58270b48cb2cf..ad426b3a03b526 100644
+--- a/net/core/netdev-genl.c
++++ b/net/core/netdev-genl.c
+@@ -164,8 +164,6 @@ netdev_nl_napi_fill_one(struct sk_buff *rsp, struct napi_struct *napi,
+ void *hdr;
+ pid_t pid;
+
+- if (WARN_ON_ONCE(!napi->dev))
+- return -EINVAL;
+ if (!(napi->dev->flags & IFF_UP))
+ return 0;
+
+@@ -173,8 +171,7 @@ netdev_nl_napi_fill_one(struct sk_buff *rsp, struct napi_struct *napi,
+ if (!hdr)
+ return -EMSGSIZE;
+
+- if (napi->napi_id >= MIN_NAPI_ID &&
+- nla_put_u32(rsp, NETDEV_A_NAPI_ID, napi->napi_id))
++ if (nla_put_u32(rsp, NETDEV_A_NAPI_ID, napi->napi_id))
+ goto nla_put_failure;
+
+ if (nla_put_u32(rsp, NETDEV_A_NAPI_IFINDEX, napi->dev->ifindex))
+@@ -217,7 +214,7 @@ int netdev_nl_napi_get_doit(struct sk_buff *skb, struct genl_info *info)
+ rtnl_lock();
+ rcu_read_lock();
+
+- napi = napi_by_id(napi_id);
++ napi = netdev_napi_by_id(genl_info_net(info), napi_id);
+ if (napi) {
+ err = netdev_nl_napi_fill_one(rsp, napi, info);
+ } else {
+@@ -254,6 +251,8 @@ netdev_nl_napi_dump_one(struct net_device *netdev, struct sk_buff *rsp,
+ return err;
+
+ list_for_each_entry(napi, &netdev->napi_list, dev_list) {
++ if (napi->napi_id < MIN_NAPI_ID)
++ continue;
+ if (ctx->napi_id && napi->napi_id >= ctx->napi_id)
+ continue;
+
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index a7cd433a54c9ae..bcc2f1e090c7db 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -896,7 +896,7 @@ static void tcp_v4_send_reset(const struct sock *sk, struct sk_buff *skb,
+ sock_net_set(ctl_sk, net);
+ if (sk) {
+ ctl_sk->sk_mark = (sk->sk_state == TCP_TIME_WAIT) ?
+- inet_twsk(sk)->tw_mark : sk->sk_mark;
++ inet_twsk(sk)->tw_mark : READ_ONCE(sk->sk_mark);
+ ctl_sk->sk_priority = (sk->sk_state == TCP_TIME_WAIT) ?
+ inet_twsk(sk)->tw_priority : READ_ONCE(sk->sk_priority);
+ transmit_time = tcp_transmit_time(sk);
+diff --git a/net/mptcp/ctrl.c b/net/mptcp/ctrl.c
+index 38d8121331d4a9..b0dd008e2114bc 100644
+--- a/net/mptcp/ctrl.c
++++ b/net/mptcp/ctrl.c
+@@ -102,16 +102,15 @@ static void mptcp_pernet_set_defaults(struct mptcp_pernet *pernet)
+ }
+
+ #ifdef CONFIG_SYSCTL
+-static int mptcp_set_scheduler(const struct net *net, const char *name)
++static int mptcp_set_scheduler(char *scheduler, const char *name)
+ {
+- struct mptcp_pernet *pernet = mptcp_get_pernet(net);
+ struct mptcp_sched_ops *sched;
+ int ret = 0;
+
+ rcu_read_lock();
+ sched = mptcp_sched_find(name);
+ if (sched)
+- strscpy(pernet->scheduler, name, MPTCP_SCHED_NAME_MAX);
++ strscpy(scheduler, name, MPTCP_SCHED_NAME_MAX);
+ else
+ ret = -ENOENT;
+ rcu_read_unlock();
+@@ -122,7 +121,7 @@ static int mptcp_set_scheduler(const struct net *net, const char *name)
+ static int proc_scheduler(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- const struct net *net = current->nsproxy->net_ns;
++ char (*scheduler)[MPTCP_SCHED_NAME_MAX] = ctl->data;
+ char val[MPTCP_SCHED_NAME_MAX];
+ struct ctl_table tbl = {
+ .data = val,
+@@ -130,11 +129,11 @@ static int proc_scheduler(const struct ctl_table *ctl, int write,
+ };
+ int ret;
+
+- strscpy(val, mptcp_get_scheduler(net), MPTCP_SCHED_NAME_MAX);
++ strscpy(val, *scheduler, MPTCP_SCHED_NAME_MAX);
+
+ ret = proc_dostring(&tbl, write, buffer, lenp, ppos);
+ if (write && ret == 0)
+- ret = mptcp_set_scheduler(net, val);
++ ret = mptcp_set_scheduler(*scheduler, val);
+
+ return ret;
+ }
+@@ -161,7 +160,9 @@ static int proc_blackhole_detect_timeout(const struct ctl_table *table,
+ int write, void *buffer, size_t *lenp,
+ loff_t *ppos)
+ {
+- struct mptcp_pernet *pernet = mptcp_get_pernet(current->nsproxy->net_ns);
++ struct mptcp_pernet *pernet = container_of(table->data,
++ struct mptcp_pernet,
++ blackhole_timeout);
+ int ret;
+
+ ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+@@ -228,7 +229,7 @@ static struct ctl_table mptcp_sysctl_table[] = {
+ {
+ .procname = "available_schedulers",
+ .maxlen = MPTCP_SCHED_BUF_MAX,
+- .mode = 0644,
++ .mode = 0444,
+ .proc_handler = proc_available_schedulers,
+ },
+ {
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 9db3e2b0b1c347..456446d7af200e 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -2517,12 +2517,15 @@ void *nf_ct_alloc_hashtable(unsigned int *sizep, int nulls)
+ struct hlist_nulls_head *hash;
+ unsigned int nr_slots, i;
+
+- if (*sizep > (UINT_MAX / sizeof(struct hlist_nulls_head)))
++ if (*sizep > (INT_MAX / sizeof(struct hlist_nulls_head)))
+ return NULL;
+
+ BUILD_BUG_ON(sizeof(struct hlist_nulls_head) != sizeof(struct hlist_head));
+ nr_slots = *sizep = roundup(*sizep, PAGE_SIZE / sizeof(struct hlist_nulls_head));
+
++ if (nr_slots > (INT_MAX / sizeof(struct hlist_nulls_head)))
++ return NULL;
++
+ hash = kvcalloc(nr_slots, sizeof(struct hlist_nulls_head), GFP_KERNEL);
+
+ if (hash && nulls)
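The second bound check added above exists because roundup() can overflow the first one: a request just under INT_MAX / sizeof(struct hlist_nulls_head) passes the initial test, yet rounding the slot count up to a whole page of slots pushes nr_slots one past that same bound. A minimal userspace sketch of the arithmetic, assuming 8-byte list heads and 4096-byte pages (typical on 64-bit), not kernel code:

#include <stdio.h>
#include <limits.h>

int main(void)
{
	unsigned int per_page = 4096 / 8;	/* 512 slots per page */
	unsigned int max = INT_MAX / 8;		/* 268435455: passes the first check */
	/* roundup(max, per_page), as done for nr_slots */
	unsigned int nr_slots = (max + per_page - 1) / per_page * per_page;

	/* prints 268435455 -> 268435456: one past the bound, hence the recheck */
	printf("%u -> %u\n", max, nr_slots);
	return 0;
}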
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 0c5ff4afc37022..42dc8cc721ff7b 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -8565,6 +8565,7 @@ static void nft_unregister_flowtable_hook(struct net *net,
+ }
+
+ static void __nft_unregister_flowtable_net_hooks(struct net *net,
++ struct nft_flowtable *flowtable,
+ struct list_head *hook_list,
+ bool release_netdev)
+ {
+@@ -8572,6 +8573,8 @@ static void __nft_unregister_flowtable_net_hooks(struct net *net,
+
+ list_for_each_entry_safe(hook, next, hook_list, list) {
+ nf_unregister_net_hook(net, &hook->ops);
++ flowtable->data.type->setup(&flowtable->data, hook->ops.dev,
++ FLOW_BLOCK_UNBIND);
+ if (release_netdev) {
+ list_del(&hook->list);
+ kfree_rcu(hook, rcu);
+@@ -8580,9 +8583,10 @@ static void __nft_unregister_flowtable_net_hooks(struct net *net,
+ }
+
+ static void nft_unregister_flowtable_net_hooks(struct net *net,
++ struct nft_flowtable *flowtable,
+ struct list_head *hook_list)
+ {
+- __nft_unregister_flowtable_net_hooks(net, hook_list, false);
++ __nft_unregister_flowtable_net_hooks(net, flowtable, hook_list, false);
+ }
+
+ static int nft_register_flowtable_net_hooks(struct net *net,
+@@ -9223,8 +9227,6 @@ static void nf_tables_flowtable_destroy(struct nft_flowtable *flowtable)
+
+ flowtable->data.type->free(&flowtable->data);
+ list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) {
+- flowtable->data.type->setup(&flowtable->data, hook->ops.dev,
+- FLOW_BLOCK_UNBIND);
+ list_del_rcu(&hook->list);
+ kfree_rcu(hook, rcu);
+ }
+@@ -10622,6 +10624,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ &nft_trans_flowtable_hooks(trans),
+ trans->msg_type);
+ nft_unregister_flowtable_net_hooks(net,
++ nft_trans_flowtable(trans),
+ &nft_trans_flowtable_hooks(trans));
+ } else {
+ list_del_rcu(&nft_trans_flowtable(trans)->list);
+@@ -10630,6 +10633,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ NULL,
+ trans->msg_type);
+ nft_unregister_flowtable_net_hooks(net,
++ nft_trans_flowtable(trans),
+ &nft_trans_flowtable(trans)->hook_list);
+ }
+ break;
+@@ -10901,11 +10905,13 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ case NFT_MSG_NEWFLOWTABLE:
+ if (nft_trans_flowtable_update(trans)) {
+ nft_unregister_flowtable_net_hooks(net,
++ nft_trans_flowtable(trans),
+ &nft_trans_flowtable_hooks(trans));
+ } else {
+ nft_use_dec_restore(&table->use);
+ list_del_rcu(&nft_trans_flowtable(trans)->list);
+ nft_unregister_flowtable_net_hooks(net,
++ nft_trans_flowtable(trans),
+ &nft_trans_flowtable(trans)->hook_list);
+ }
+ break;
+@@ -11498,7 +11504,8 @@ static void __nft_release_hook(struct net *net, struct nft_table *table)
+ list_for_each_entry(chain, &table->chains, list)
+ __nf_tables_unregister_hook(net, table, chain, true);
+ list_for_each_entry(flowtable, &table->flowtables, list)
+- __nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list,
++ __nft_unregister_flowtable_net_hooks(net, flowtable,
++ &flowtable->hook_list,
+ true);
+ }
+
+diff --git a/net/rds/tcp.c b/net/rds/tcp.c
+index 351ac1747224a3..0581c53e651704 100644
+--- a/net/rds/tcp.c
++++ b/net/rds/tcp.c
+@@ -61,8 +61,10 @@ static atomic_t rds_tcp_unloading = ATOMIC_INIT(0);
+
+ static struct kmem_cache *rds_tcp_conn_slab;
+
+-static int rds_tcp_skbuf_handler(const struct ctl_table *ctl, int write,
+- void *buffer, size_t *lenp, loff_t *fpos);
++static int rds_tcp_sndbuf_handler(const struct ctl_table *ctl, int write,
++ void *buffer, size_t *lenp, loff_t *fpos);
++static int rds_tcp_rcvbuf_handler(const struct ctl_table *ctl, int write,
++ void *buffer, size_t *lenp, loff_t *fpos);
+
+ static int rds_tcp_min_sndbuf = SOCK_MIN_SNDBUF;
+ static int rds_tcp_min_rcvbuf = SOCK_MIN_RCVBUF;
+@@ -74,7 +76,7 @@ static struct ctl_table rds_tcp_sysctl_table[] = {
+ /* data is per-net pointer */
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = rds_tcp_skbuf_handler,
++ .proc_handler = rds_tcp_sndbuf_handler,
+ .extra1 = &rds_tcp_min_sndbuf,
+ },
+ #define RDS_TCP_RCVBUF 1
+@@ -83,7 +85,7 @@ static struct ctl_table rds_tcp_sysctl_table[] = {
+ /* data is per-net pointer */
+ .maxlen = sizeof(int),
+ .mode = 0644,
+- .proc_handler = rds_tcp_skbuf_handler,
++ .proc_handler = rds_tcp_rcvbuf_handler,
+ .extra1 = &rds_tcp_min_rcvbuf,
+ },
+ };
+@@ -682,10 +684,10 @@ static void rds_tcp_sysctl_reset(struct net *net)
+ spin_unlock_irq(&rds_tcp_conn_lock);
+ }
+
+-static int rds_tcp_skbuf_handler(const struct ctl_table *ctl, int write,
++static int rds_tcp_skbuf_handler(struct rds_tcp_net *rtn,
++ const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *fpos)
+ {
+- struct net *net = current->nsproxy->net_ns;
+ int err;
+
+ err = proc_dointvec_minmax(ctl, write, buffer, lenp, fpos);
+@@ -694,11 +696,34 @@ static int rds_tcp_skbuf_handler(const struct ctl_table *ctl, int write,
+ *(int *)(ctl->extra1));
+ return err;
+ }
+- if (write)
++
++ if (write && rtn->rds_tcp_listen_sock && rtn->rds_tcp_listen_sock->sk) {
++ struct net *net = sock_net(rtn->rds_tcp_listen_sock->sk);
++
+ rds_tcp_sysctl_reset(net);
++ }
++
+ return 0;
+ }
+
++static int rds_tcp_sndbuf_handler(const struct ctl_table *ctl, int write,
++ void *buffer, size_t *lenp, loff_t *fpos)
++{
++ struct rds_tcp_net *rtn = container_of(ctl->data, struct rds_tcp_net,
++ sndbuf_size);
++
++ return rds_tcp_skbuf_handler(rtn, ctl, write, buffer, lenp, fpos);
++}
++
++static int rds_tcp_rcvbuf_handler(const struct ctl_table *ctl, int write,
++ void *buffer, size_t *lenp, loff_t *fpos)
++{
++ struct rds_tcp_net *rtn = container_of(ctl->data, struct rds_tcp_net,
++ rcvbuf_size);
++
++ return rds_tcp_skbuf_handler(rtn, ctl, write, buffer, lenp, fpos);
++}
++
+ static void rds_tcp_exit(void)
+ {
+ rds_tcp_set_unloading();
+diff --git a/net/sched/cls_flow.c b/net/sched/cls_flow.c
+index 5502998aace741..5c2580a07530e4 100644
+--- a/net/sched/cls_flow.c
++++ b/net/sched/cls_flow.c
+@@ -356,7 +356,8 @@ static const struct nla_policy flow_policy[TCA_FLOW_MAX + 1] = {
+ [TCA_FLOW_KEYS] = { .type = NLA_U32 },
+ [TCA_FLOW_MODE] = { .type = NLA_U32 },
+ [TCA_FLOW_BASECLASS] = { .type = NLA_U32 },
+- [TCA_FLOW_RSHIFT] = { .type = NLA_U32 },
++ [TCA_FLOW_RSHIFT] = NLA_POLICY_MAX(NLA_U32,
++ 31 /* BITS_PER_U32 - 1 */),
+ [TCA_FLOW_ADDEND] = { .type = NLA_U32 },
+ [TCA_FLOW_MASK] = { .type = NLA_U32 },
+ [TCA_FLOW_XOR] = { .type = NLA_U32 },
+diff --git a/net/sched/sch_cake.c b/net/sched/sch_cake.c
+index 8d8b2db4653c0c..2c2e2a67f3b244 100644
+--- a/net/sched/sch_cake.c
++++ b/net/sched/sch_cake.c
+@@ -627,6 +627,63 @@ static bool cake_ddst(int flow_mode)
+ return (flow_mode & CAKE_FLOW_DUAL_DST) == CAKE_FLOW_DUAL_DST;
+ }
+
++static void cake_dec_srchost_bulk_flow_count(struct cake_tin_data *q,
++ struct cake_flow *flow,
++ int flow_mode)
++{
++ if (likely(cake_dsrc(flow_mode) &&
++ q->hosts[flow->srchost].srchost_bulk_flow_count))
++ q->hosts[flow->srchost].srchost_bulk_flow_count--;
++}
++
++static void cake_inc_srchost_bulk_flow_count(struct cake_tin_data *q,
++ struct cake_flow *flow,
++ int flow_mode)
++{
++ if (likely(cake_dsrc(flow_mode) &&
++ q->hosts[flow->srchost].srchost_bulk_flow_count < CAKE_QUEUES))
++ q->hosts[flow->srchost].srchost_bulk_flow_count++;
++}
++
++static void cake_dec_dsthost_bulk_flow_count(struct cake_tin_data *q,
++ struct cake_flow *flow,
++ int flow_mode)
++{
++ if (likely(cake_ddst(flow_mode) &&
++ q->hosts[flow->dsthost].dsthost_bulk_flow_count))
++ q->hosts[flow->dsthost].dsthost_bulk_flow_count--;
++}
++
++static void cake_inc_dsthost_bulk_flow_count(struct cake_tin_data *q,
++ struct cake_flow *flow,
++ int flow_mode)
++{
++ if (likely(cake_ddst(flow_mode) &&
++ q->hosts[flow->dsthost].dsthost_bulk_flow_count < CAKE_QUEUES))
++ q->hosts[flow->dsthost].dsthost_bulk_flow_count++;
++}
++
++static u16 cake_get_flow_quantum(struct cake_tin_data *q,
++ struct cake_flow *flow,
++ int flow_mode)
++{
++ u16 host_load = 1;
++
++ if (cake_dsrc(flow_mode))
++ host_load = max(host_load,
++ q->hosts[flow->srchost].srchost_bulk_flow_count);
++
++ if (cake_ddst(flow_mode))
++ host_load = max(host_load,
++ q->hosts[flow->dsthost].dsthost_bulk_flow_count);
++
++ /* The get_random_u16() is a way to apply dithering to avoid
++ * accumulating roundoff errors
++ */
++ return (q->flow_quantum * quantum_div[host_load] +
++ get_random_u16()) >> 16;
++}
++
+ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
+ int flow_mode, u16 flow_override, u16 host_override)
+ {
+@@ -773,10 +830,8 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
+ allocate_dst = cake_ddst(flow_mode);
+
+ if (q->flows[outer_hash + k].set == CAKE_SET_BULK) {
+- if (allocate_src)
+- q->hosts[q->flows[reduced_hash].srchost].srchost_bulk_flow_count--;
+- if (allocate_dst)
+- q->hosts[q->flows[reduced_hash].dsthost].dsthost_bulk_flow_count--;
++ cake_dec_srchost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode);
++ cake_dec_dsthost_bulk_flow_count(q, &q->flows[outer_hash + k], flow_mode);
+ }
+ found:
+ /* reserve queue for future packets in same flow */
+@@ -801,9 +856,10 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
+ q->hosts[outer_hash + k].srchost_tag = srchost_hash;
+ found_src:
+ srchost_idx = outer_hash + k;
+- if (q->flows[reduced_hash].set == CAKE_SET_BULK)
+- q->hosts[srchost_idx].srchost_bulk_flow_count++;
+ q->flows[reduced_hash].srchost = srchost_idx;
++
++ if (q->flows[reduced_hash].set == CAKE_SET_BULK)
++ cake_inc_srchost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode);
+ }
+
+ if (allocate_dst) {
+@@ -824,9 +880,10 @@ static u32 cake_hash(struct cake_tin_data *q, const struct sk_buff *skb,
+ q->hosts[outer_hash + k].dsthost_tag = dsthost_hash;
+ found_dst:
+ dsthost_idx = outer_hash + k;
+- if (q->flows[reduced_hash].set == CAKE_SET_BULK)
+- q->hosts[dsthost_idx].dsthost_bulk_flow_count++;
+ q->flows[reduced_hash].dsthost = dsthost_idx;
++
++ if (q->flows[reduced_hash].set == CAKE_SET_BULK)
++ cake_inc_dsthost_bulk_flow_count(q, &q->flows[reduced_hash], flow_mode);
+ }
+ }
+
+@@ -1839,10 +1896,6 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+
+ /* flowchain */
+ if (!flow->set || flow->set == CAKE_SET_DECAYING) {
+- struct cake_host *srchost = &b->hosts[flow->srchost];
+- struct cake_host *dsthost = &b->hosts[flow->dsthost];
+- u16 host_load = 1;
+-
+ if (!flow->set) {
+ list_add_tail(&flow->flowchain, &b->new_flows);
+ } else {
+@@ -1852,18 +1905,8 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ flow->set = CAKE_SET_SPARSE;
+ b->sparse_flow_count++;
+
+- if (cake_dsrc(q->flow_mode))
+- host_load = max(host_load, srchost->srchost_bulk_flow_count);
+-
+- if (cake_ddst(q->flow_mode))
+- host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
+-
+- flow->deficit = (b->flow_quantum *
+- quantum_div[host_load]) >> 16;
++ flow->deficit = cake_get_flow_quantum(b, flow, q->flow_mode);
+ } else if (flow->set == CAKE_SET_SPARSE_WAIT) {
+- struct cake_host *srchost = &b->hosts[flow->srchost];
+- struct cake_host *dsthost = &b->hosts[flow->dsthost];
+-
+ /* this flow was empty, accounted as a sparse flow, but actually
+ * in the bulk rotation.
+ */
+@@ -1871,12 +1914,8 @@ static s32 cake_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ b->sparse_flow_count--;
+ b->bulk_flow_count++;
+
+- if (cake_dsrc(q->flow_mode))
+- srchost->srchost_bulk_flow_count++;
+-
+- if (cake_ddst(q->flow_mode))
+- dsthost->dsthost_bulk_flow_count++;
+-
++ cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode);
++ cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode);
+ }
+
+ if (q->buffer_used > q->buffer_max_used)
+@@ -1933,13 +1972,11 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
+ {
+ struct cake_sched_data *q = qdisc_priv(sch);
+ struct cake_tin_data *b = &q->tins[q->cur_tin];
+- struct cake_host *srchost, *dsthost;
+ ktime_t now = ktime_get();
+ struct cake_flow *flow;
+ struct list_head *head;
+ bool first_flow = true;
+ struct sk_buff *skb;
+- u16 host_load;
+ u64 delay;
+ u32 len;
+
+@@ -2039,11 +2076,6 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
+ q->cur_flow = flow - b->flows;
+ first_flow = false;
+
+- /* triple isolation (modified DRR++) */
+- srchost = &b->hosts[flow->srchost];
+- dsthost = &b->hosts[flow->dsthost];
+- host_load = 1;
+-
+ /* flow isolation (DRR++) */
+ if (flow->deficit <= 0) {
+ /* Keep all flows with deficits out of the sparse and decaying
+@@ -2055,11 +2087,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
+ b->sparse_flow_count--;
+ b->bulk_flow_count++;
+
+- if (cake_dsrc(q->flow_mode))
+- srchost->srchost_bulk_flow_count++;
+-
+- if (cake_ddst(q->flow_mode))
+- dsthost->dsthost_bulk_flow_count++;
++ cake_inc_srchost_bulk_flow_count(b, flow, q->flow_mode);
++ cake_inc_dsthost_bulk_flow_count(b, flow, q->flow_mode);
+
+ flow->set = CAKE_SET_BULK;
+ } else {
+@@ -2071,19 +2100,7 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
+ }
+ }
+
+- if (cake_dsrc(q->flow_mode))
+- host_load = max(host_load, srchost->srchost_bulk_flow_count);
+-
+- if (cake_ddst(q->flow_mode))
+- host_load = max(host_load, dsthost->dsthost_bulk_flow_count);
+-
+- WARN_ON(host_load > CAKE_QUEUES);
+-
+- /* The get_random_u16() is a way to apply dithering to avoid
+- * accumulating roundoff errors
+- */
+- flow->deficit += (b->flow_quantum * quantum_div[host_load] +
+- get_random_u16()) >> 16;
++ flow->deficit += cake_get_flow_quantum(b, flow, q->flow_mode);
+ list_move_tail(&flow->flowchain, &b->old_flows);
+
+ goto retry;
+@@ -2107,11 +2124,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
+ if (flow->set == CAKE_SET_BULK) {
+ b->bulk_flow_count--;
+
+- if (cake_dsrc(q->flow_mode))
+- srchost->srchost_bulk_flow_count--;
+-
+- if (cake_ddst(q->flow_mode))
+- dsthost->dsthost_bulk_flow_count--;
++ cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode);
++ cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode);
+
+ b->decaying_flow_count++;
+ } else if (flow->set == CAKE_SET_SPARSE ||
+@@ -2129,12 +2143,8 @@ static struct sk_buff *cake_dequeue(struct Qdisc *sch)
+ else if (flow->set == CAKE_SET_BULK) {
+ b->bulk_flow_count--;
+
+- if (cake_dsrc(q->flow_mode))
+- srchost->srchost_bulk_flow_count--;
+-
+- if (cake_ddst(q->flow_mode))
+- dsthost->dsthost_bulk_flow_count--;
+-
++ cake_dec_srchost_bulk_flow_count(b, flow, q->flow_mode);
++ cake_dec_dsthost_bulk_flow_count(b, flow, q->flow_mode);
+ } else
+ b->decaying_flow_count--;
+
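The cake_get_flow_quantum() helper introduced above centralizes the dithered fixed-point division that the moved comment describes. A toy model of it, not kernel code (the quantum_div[] table is approximated inline and rand() stands in for get_random_u16()):

#include <stdint.h>
#include <stdlib.h>

/* quantum_div[h] ~ 2^16 / h in the real table, so the product >> 16 is
 * roughly quantum / host_load; adding a uniform 16-bit value before the
 * shift dithers the truncation so roundoff does not accumulate in one
 * direction. host_load is at least 1, as in the scheduler. */
static uint16_t toy_flow_quantum(uint16_t quantum, uint16_t host_load)
{
	uint32_t quantum_div = 65535u / host_load;
	uint32_t dither = (uint32_t)rand() & 0xffffu;	/* get_random_u16() stand-in */

	return (uint16_t)(((uint32_t)quantum * quantum_div + dither) >> 16);
}

Averaged over many packets the result comes out close to quantum / host_load, which is the per-host share the triple-isolation (modified DRR++) scheme relies on.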
+diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c
+index e5a5af343c4c98..8e1e97be4df79f 100644
+--- a/net/sctp/sysctl.c
++++ b/net/sctp/sysctl.c
+@@ -387,7 +387,8 @@ static struct ctl_table sctp_net_table[] = {
+ static int proc_sctp_do_hmac_alg(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- struct net *net = current->nsproxy->net_ns;
++ struct net *net = container_of(ctl->data, struct net,
++ sctp.sctp_hmac_alg);
+ struct ctl_table tbl;
+ bool changed = false;
+ char *none = "none";
+@@ -432,7 +433,7 @@ static int proc_sctp_do_hmac_alg(const struct ctl_table *ctl, int write,
+ static int proc_sctp_do_rto_min(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- struct net *net = current->nsproxy->net_ns;
++ struct net *net = container_of(ctl->data, struct net, sctp.rto_min);
+ unsigned int min = *(unsigned int *) ctl->extra1;
+ unsigned int max = *(unsigned int *) ctl->extra2;
+ struct ctl_table tbl;
+@@ -460,7 +461,7 @@ static int proc_sctp_do_rto_min(const struct ctl_table *ctl, int write,
+ static int proc_sctp_do_rto_max(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- struct net *net = current->nsproxy->net_ns;
++ struct net *net = container_of(ctl->data, struct net, sctp.rto_max);
+ unsigned int min = *(unsigned int *) ctl->extra1;
+ unsigned int max = *(unsigned int *) ctl->extra2;
+ struct ctl_table tbl;
+@@ -498,7 +499,7 @@ static int proc_sctp_do_alpha_beta(const struct ctl_table *ctl, int write,
+ static int proc_sctp_do_auth(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- struct net *net = current->nsproxy->net_ns;
++ struct net *net = container_of(ctl->data, struct net, sctp.auth_enable);
+ struct ctl_table tbl;
+ int new_value, ret;
+
+@@ -527,7 +528,7 @@ static int proc_sctp_do_auth(const struct ctl_table *ctl, int write,
+ static int proc_sctp_do_udp_port(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- struct net *net = current->nsproxy->net_ns;
++ struct net *net = container_of(ctl->data, struct net, sctp.udp_port);
+ unsigned int min = *(unsigned int *)ctl->extra1;
+ unsigned int max = *(unsigned int *)ctl->extra2;
+ struct ctl_table tbl;
+@@ -568,7 +569,8 @@ static int proc_sctp_do_udp_port(const struct ctl_table *ctl, int write,
+ static int proc_sctp_do_probe_interval(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+- struct net *net = current->nsproxy->net_ns;
++ struct net *net = container_of(ctl->data, struct net,
++ sctp.probe_interval);
+ struct ctl_table tbl;
+ int ret, new_value;
+
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index bbf26cc4f6ee26..7bcc9b4408a2c7 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -458,7 +458,7 @@ int tls_tx_records(struct sock *sk, int flags)
+
+ tx_err:
+ if (rc < 0 && rc != -EAGAIN)
+- tls_err_abort(sk, -EBADMSG);
++ tls_err_abort(sk, rc);
+
+ return rc;
+ }
+diff --git a/sound/soc/codecs/rt722-sdca.c b/sound/soc/codecs/rt722-sdca.c
+index f9f7512ca36087..9a0747c4bdeac0 100644
+--- a/sound/soc/codecs/rt722-sdca.c
++++ b/sound/soc/codecs/rt722-sdca.c
+@@ -1467,13 +1467,18 @@ static void rt722_sdca_jack_preset(struct rt722_sdca_priv *rt722)
+ 0x008d);
+ /* check HP calibration FSM status */
+ for (loop_check = 0; loop_check < chk_cnt; loop_check++) {
++ usleep_range(10000, 11000);
+ ret = rt722_sdca_index_read(rt722, RT722_VENDOR_CALI,
+ RT722_DAC_DC_CALI_CTL3, &calib_status);
+- if (ret < 0 || loop_check == chk_cnt)
++ if (ret < 0)
+ dev_dbg(&rt722->slave->dev, "calibration failed!, ret=%d\n", ret);
+ if ((calib_status & 0x0040) == 0x0)
+ break;
+ }
++
++ if (loop_check == chk_cnt)
++ dev_dbg(&rt722->slave->dev, "%s, calibration time-out!\n", __func__);
++
+ /* Set ADC09 power entity floating control */
+ rt722_sdca_index_write(rt722, RT722_VENDOR_HDA_CTL, RT722_ADC0A_08_PDE_FLOAT_CTL,
+ 0x2a12);
+diff --git a/sound/soc/mediatek/common/mtk-afe-platform-driver.c b/sound/soc/mediatek/common/mtk-afe-platform-driver.c
+index 9b72b2a7ae917e..6b633058394140 100644
+--- a/sound/soc/mediatek/common/mtk-afe-platform-driver.c
++++ b/sound/soc/mediatek/common/mtk-afe-platform-driver.c
+@@ -120,8 +120,8 @@ int mtk_afe_pcm_new(struct snd_soc_component *component,
+ struct mtk_base_afe *afe = snd_soc_component_get_drvdata(component);
+
+ size = afe->mtk_afe_hardware->buffer_bytes_max;
+- snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV,
+- afe->dev, size, size);
++ snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_DEV, afe->dev, 0, size);
++
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(mtk_afe_pcm_new);
+diff --git a/tools/testing/selftests/alsa/Makefile b/tools/testing/selftests/alsa/Makefile
+index 944279160fed26..8dab90ad22bb27 100644
+--- a/tools/testing/selftests/alsa/Makefile
++++ b/tools/testing/selftests/alsa/Makefile
+@@ -27,5 +27,5 @@ include ../lib.mk
+ $(OUTPUT)/libatest.so: conf.c alsa-local.h
+ $(CC) $(CFLAGS) -shared -fPIC $< $(LDLIBS) -o $@
+
+-$(OUTPUT)/%: %.c $(TEST_GEN_PROGS_EXTENDED) alsa-local.h
++$(OUTPUT)/%: %.c $(OUTPUT)/libatest.so alsa-local.h
+ $(CC) $(CFLAGS) $< $(LDLIBS) -latest -o $@
+diff --git a/tools/testing/selftests/cgroup/test_cpuset_prs.sh b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
+index 03c1bdaed2c3c5..400a696a0d212e 100755
+--- a/tools/testing/selftests/cgroup/test_cpuset_prs.sh
++++ b/tools/testing/selftests/cgroup/test_cpuset_prs.sh
+@@ -86,15 +86,15 @@ echo "" > test/cpuset.cpus
+
+ #
+ # If isolated CPUs have been reserved at boot time (as shown in
+-# cpuset.cpus.isolated), these isolated CPUs should be outside of CPUs 0-7
++# cpuset.cpus.isolated), these isolated CPUs should be outside of CPUs 0-8
+ # that will be used by this script for testing purpose. If not, some of
+-# the tests may fail incorrectly. These isolated CPUs will also be removed
+-# before being compared with the expected results.
++# the tests may fail incorrectly. These pre-isolated CPUs should stay in
++# an isolated state throughout the testing process for now.
+ #
+ BOOT_ISOLCPUS=$(cat $CGROUP2/cpuset.cpus.isolated)
+ if [[ -n "$BOOT_ISOLCPUS" ]]
+ then
+- [[ $(echo $BOOT_ISOLCPUS | sed -e "s/[,-].*//") -le 7 ]] &&
++ [[ $(echo $BOOT_ISOLCPUS | sed -e "s/[,-].*//") -le 8 ]] &&
+ skip_test "Pre-isolated CPUs ($BOOT_ISOLCPUS) overlap CPUs to be tested"
+ echo "Pre-isolated CPUs: $BOOT_ISOLCPUS"
+ fi
+@@ -683,15 +683,19 @@ check_isolcpus()
+ EXPECT_VAL2=$EXPECT_VAL
+ fi
+
++ #
++ # Appending pre-isolated CPUs
++ # Even though CPU #8 isn't used for testing, it can't be pre-isolated
++ # to make appending those CPUs easier.
++ #
++ [[ -n "$BOOT_ISOLCPUS" ]] && {
++ EXPECT_VAL=${EXPECT_VAL:+${EXPECT_VAL},}${BOOT_ISOLCPUS}
++ EXPECT_VAL2=${EXPECT_VAL2:+${EXPECT_VAL2},}${BOOT_ISOLCPUS}
++ }
++
+ #
+ # Check cpuset.cpus.isolated cpumask
+ #
+- if [[ -z "$BOOT_ISOLCPUS" ]]
+- then
+- ISOLCPUS=$(cat $ISCPUS)
+- else
+- ISOLCPUS=$(cat $ISCPUS | sed -e "s/,*$BOOT_ISOLCPUS//")
+- fi
+ [[ "$EXPECT_VAL2" != "$ISOLCPUS" ]] && {
+ # Take a 50ms pause and try again
+ pause 0.05
+@@ -731,8 +735,6 @@ check_isolcpus()
+ fi
+ done
+ [[ "$ISOLCPUS" = *- ]] && ISOLCPUS=${ISOLCPUS}$LASTISOLCPU
+- [[ -n "BOOT_ISOLCPUS" ]] &&
+- ISOLCPUS=$(echo $ISOLCPUS | sed -e "s/,*$BOOT_ISOLCPUS//")
+
+ [[ "$EXPECT_VAL" = "$ISOLCPUS" ]]
+ }
+@@ -836,8 +838,11 @@ run_state_test()
+ # if available
+ [[ -n "$ICPUS" ]] && {
+ check_isolcpus $ICPUS
+- [[ $? -ne 0 ]] && test_fail $I "isolated CPU" \
+- "Expect $ICPUS, get $ISOLCPUS instead"
++ [[ $? -ne 0 ]] && {
++ [[ -n "$BOOT_ISOLCPUS" ]] && ICPUS=${ICPUS},${BOOT_ISOLCPUS}
++ test_fail $I "isolated CPU" \
++ "Expect $ICPUS, get $ISOLCPUS instead"
++ }
+ }
+ reset_cgroup_states
+ #
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-01-23 17:02 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-01-23 17:02 UTC (permalink / raw
To: gentoo-commits
commit: be3664f69c599632b14b89fe20fa9dd3418eff74
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 23 17:02:05 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 23 17:02:05 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=be3664f6
Linux patch 6.12.11
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1010_linux-6.12.11.patch | 5351 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 5355 insertions(+)
diff --git a/0000_README b/0000_README
index 20574d29..9c94906b 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch: 1009_linux-6.12.10.patch
From: https://www.kernel.org
Desc: Linux 6.12.10
+Patch: 1010_linux-6.12.11.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.11
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1010_linux-6.12.11.patch b/1010_linux-6.12.11.patch
new file mode 100644
index 00000000..b7226d9f
--- /dev/null
+++ b/1010_linux-6.12.11.patch
@@ -0,0 +1,5351 @@
+diff --git a/Makefile b/Makefile
+index 233e9e88e402e7..7cf8f11975f89c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/x86/include/asm/special_insns.h b/arch/x86/include/asm/special_insns.h
+index aec6e2d3aa1d52..98bfc097389c4e 100644
+--- a/arch/x86/include/asm/special_insns.h
++++ b/arch/x86/include/asm/special_insns.h
+@@ -217,7 +217,7 @@ static inline int write_user_shstk_64(u64 __user *addr, u64 val)
+
+ #define nop() asm volatile ("nop")
+
+-static inline void serialize(void)
++static __always_inline void serialize(void)
+ {
+ /* Instruction opcode for SERIALIZE; supported in binutils >= 2.35. */
+ asm volatile(".byte 0xf, 0x1, 0xe8" ::: "memory");
+diff --git a/arch/x86/kernel/fred.c b/arch/x86/kernel/fred.c
+index 8d32c3f48abc0c..5e2cd10049804e 100644
+--- a/arch/x86/kernel/fred.c
++++ b/arch/x86/kernel/fred.c
+@@ -50,7 +50,13 @@ void cpu_init_fred_exceptions(void)
+ FRED_CONFIG_ENTRYPOINT(asm_fred_entrypoint_user));
+
+ wrmsrl(MSR_IA32_FRED_STKLVLS, 0);
+- wrmsrl(MSR_IA32_FRED_RSP0, 0);
++
++ /*
++ * After a CPU offline/online cycle, the FRED RSP0 MSR should be
++ * resynchronized with its per-CPU cache.
++ */
++ wrmsrl(MSR_IA32_FRED_RSP0, __this_cpu_read(fred_rsp0));
++
+ wrmsrl(MSR_IA32_FRED_RSP1, 0);
+ wrmsrl(MSR_IA32_FRED_RSP2, 0);
+ wrmsrl(MSR_IA32_FRED_RSP3, 0);
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index d27a3bf96f80d8..90aaec923889cf 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -689,11 +689,11 @@ static bool acpi_dev_irq_override(u32 gsi, u8 triggering, u8 polarity,
+ for (i = 0; i < ARRAY_SIZE(override_table); i++) {
+ const struct irq_override_cmp *entry = &override_table[i];
+
+- if (dmi_check_system(entry->system) &&
+- entry->irq == gsi &&
++ if (entry->irq == gsi &&
+ entry->triggering == triggering &&
+ entry->polarity == polarity &&
+- entry->shareable == shareable)
++ entry->shareable == shareable &&
++ dmi_check_system(entry->system))
+ return entry->override;
+ }
+
+diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
+index bf83a104086cce..76b326ddd75c47 100644
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -1349,6 +1349,7 @@ static bool zram_meta_alloc(struct zram *zram, u64 disksize)
+ zram->mem_pool = zs_create_pool(zram->disk->disk_name);
+ if (!zram->mem_pool) {
+ vfree(zram->table);
++ zram->table = NULL;
+ return false;
+ }
+
+diff --git a/drivers/cpufreq/Kconfig b/drivers/cpufreq/Kconfig
+index 2561b215432a82..588ab1cc6d557c 100644
+--- a/drivers/cpufreq/Kconfig
++++ b/drivers/cpufreq/Kconfig
+@@ -311,8 +311,6 @@ config QORIQ_CPUFREQ
+ This adds the CPUFreq driver support for Freescale QorIQ SoCs
+ which are capable of changing the CPU's frequency dynamically.
+
+-endif
+-
+ config ACPI_CPPC_CPUFREQ
+ tristate "CPUFreq driver based on the ACPI CPPC spec"
+ depends on ACPI_PROCESSOR
+@@ -341,4 +339,6 @@ config ACPI_CPPC_CPUFREQ_FIE
+
+ If in doubt, say N.
+
++endif
++
+ endmenu
+diff --git a/drivers/cpuidle/governors/teo.c b/drivers/cpuidle/governors/teo.c
+index f2992f92d8db86..173ddcac540ade 100644
+--- a/drivers/cpuidle/governors/teo.c
++++ b/drivers/cpuidle/governors/teo.c
+@@ -10,25 +10,27 @@
+ * DOC: teo-description
+ *
+ * The idea of this governor is based on the observation that on many systems
+- * timer events are two or more orders of magnitude more frequent than any
+- * other interrupts, so they are likely to be the most significant cause of CPU
+- * wakeups from idle states. Moreover, information about what happened in the
+- * (relatively recent) past can be used to estimate whether or not the deepest
+- * idle state with target residency within the (known) time till the closest
+- * timer event, referred to as the sleep length, is likely to be suitable for
+- * the upcoming CPU idle period and, if not, then which of the shallower idle
+- * states to choose instead of it.
++ * timer interrupts are two or more orders of magnitude more frequent than any
++ * other interrupt types, so they are likely to dominate CPU wakeup patterns.
++ * Moreover, in principle, the time when the next timer event is going to occur
++ * can be determined at the idle state selection time, although doing that may
++ * be costly, so it can be regarded as the most reliable source of information
++ * for idle state selection.
+ *
+- * Of course, non-timer wakeup sources are more important in some use cases
+- * which can be covered by taking a few most recent idle time intervals of the
+- * CPU into account. However, even in that context it is not necessary to
+- * consider idle duration values greater than the sleep length, because the
+- * closest timer will ultimately wake up the CPU anyway unless it is woken up
+- * earlier.
++ * Of course, non-timer wakeup sources are more important in some use cases,
++ * but even then it is generally unnecessary to consider idle duration values
++ * greater than the time till the next timer event, referred to as the sleep
++ * length in what follows, because the closest timer will ultimately wake up the
++ * CPU anyway unless it is woken up earlier.
+ *
+- * Thus this governor estimates whether or not the prospective idle duration of
+- * a CPU is likely to be significantly shorter than the sleep length and selects
+- * an idle state for it accordingly.
++ * However, since obtaining the sleep length may be costly, the governor first
++ * checks if it can select a shallow idle state using wakeup pattern information
++ * from recent times, in which case it can do without knowing the sleep length
++ * at all. For this purpose, it counts CPU wakeup events and looks for an idle
++ * state whose target residency has not exceeded the idle duration (measured
++ * after wakeup) in the majority of relevant recent cases. If the target
++ * residency of that state is small enough, it may be used right away and the
++ * sleep length need not be determined.
+ *
+ * The computations carried out by this governor are based on using bins whose
+ * boundaries are aligned with the target residency parameter values of the CPU
+@@ -39,7 +41,11 @@
+ * idle state 2, the third bin spans from the target residency of idle state 2
+ * up to, but not including, the target residency of idle state 3 and so on.
+ * The last bin spans from the target residency of the deepest idle state
+- * supplied by the driver to infinity.
++ * supplied by the driver to the scheduler tick period length or to infinity if
++ * the tick period length is less than the target residency of that state. In
++ * the latter case, the governor also counts events with the measured idle
++ * duration between the tick period length and the target residency of the
++ * deepest idle state.
+ *
+ * Two metrics called "hits" and "intercepts" are associated with each bin.
+ * They are updated every time before selecting an idle state for the given CPU
+@@ -49,47 +55,46 @@
+ * sleep length and the idle duration measured after CPU wakeup fall into the
+ * same bin (that is, the CPU appears to wake up "on time" relative to the sleep
+ * length). In turn, the "intercepts" metric reflects the relative frequency of
+- * situations in which the measured idle duration is so much shorter than the
+- * sleep length that the bin it falls into corresponds to an idle state
+- * shallower than the one whose bin is fallen into by the sleep length (these
+- * situations are referred to as "intercepts" below).
++ * non-timer wakeup events for which the measured idle duration falls into a bin
++ * that corresponds to an idle state shallower than the one whose bin is fallen
++ * into by the sleep length (these events are also referred to as "intercepts"
++ * below).
+ *
+ * In order to select an idle state for a CPU, the governor takes the following
+ * steps (modulo the possible latency constraint that must be taken into account
+ * too):
+ *
+- * 1. Find the deepest CPU idle state whose target residency does not exceed
+- * the current sleep length (the candidate idle state) and compute 2 sums as
+- * follows:
++ * 1. Find the deepest enabled CPU idle state (the candidate idle state) and
++ * compute 2 sums as follows:
+ *
+- * - The sum of the "hits" and "intercepts" metrics for the candidate state
+- * and all of the deeper idle states (it represents the cases in which the
+- * CPU was idle long enough to avoid being intercepted if the sleep length
+- * had been equal to the current one).
++ * - The sum of the "hits" metric for all of the idle states shallower than
++ * the candidate one (it represents the cases in which the CPU was likely
++ * woken up by a timer).
+ *
+- * - The sum of the "intercepts" metrics for all of the idle states shallower
+- * than the candidate one (it represents the cases in which the CPU was not
+- * idle long enough to avoid being intercepted if the sleep length had been
+- * equal to the current one).
++ * - The sum of the "intercepts" metric for all of the idle states shallower
++ * than the candidate one (it represents the cases in which the CPU was
++ * likely woken up by a non-timer wakeup source).
+ *
+- * 2. If the second sum is greater than the first one the CPU is likely to wake
+- * up early, so look for an alternative idle state to select.
++ * 2. If the second sum computed in step 1 is greater than a half of the sum of
++ * both metrics for the candidate state bin and all subsequent bins(if any),
++ * a shallower idle state is likely to be more suitable, so look for it.
+ *
+- * - Traverse the idle states shallower than the candidate one in the
++ * - Traverse the enabled idle states shallower than the candidate one in the
+ * descending order.
+ *
+ * - For each of them compute the sum of the "intercepts" metrics over all
+ * of the idle states between it and the candidate one (including the
+ * former and excluding the latter).
+ *
+- * - If each of these sums that needs to be taken into account (because the
+- * check related to it has indicated that the CPU is likely to wake up
+- * early) is greater than a half of the corresponding sum computed in step
+- * 1 (which means that the target residency of the state in question had
+- * not exceeded the idle duration in over a half of the relevant cases),
+- * select the given idle state instead of the candidate one.
++ * - If this sum is greater than a half of the second sum computed in step 1,
++ * use the given idle state as the new candidate one.
+ *
+- * 3. By default, select the candidate state.
++ * 3. If the current candidate state is state 0 or its target residency is short
++ * enough, return it and prevent the scheduler tick from being stopped.
++ *
++ * 4. Obtain the sleep length value and check if it is below the target
++ * residency of the current candidate state, in which case a new shallower
++ * candidate state needs to be found, so look for it.
+ */
+
+ #include <linux/cpuidle.h>
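To make the rewritten selection logic concrete, here is a compact model of steps 1 and 2 only. It is not the governor's code: bin maintenance, the latency constraint and the tick/sleep-length handling of steps 3 and 4 are omitted, and all names are hypothetical.

#include <stddef.h>

struct toy_bin { unsigned int hits, intercepts; };

static size_t toy_teo_select(const struct toy_bin *b, size_t nbins)
{
	size_t cand = nbins - 1;	/* step 1: deepest enabled state */
	unsigned int below = 0, at_or_above = 0;
	size_t i, j;

	for (i = 0; i < cand; i++)
		below += b[i].intercepts;
	for (i = cand; i < nbins; i++)
		at_or_above += b[i].hits + b[i].intercepts;

	/* step 2: look shallower only if early (non-timer) wakeups dominate */
	if (2 * below <= at_or_above)
		return cand;

	for (i = cand; i-- > 0; ) {	/* traverse shallower states downward */
		unsigned int span = 0;

		for (j = i; j < cand; j++)
			span += b[j].intercepts;
		if (2 * span > below) {	/* majority of intercepts land here */
			cand = i;
			break;
		}
	}
	return cand;
}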
+diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig
+index 72f2537d90cafd..f45c70154a9302 100644
+--- a/drivers/firmware/efi/Kconfig
++++ b/drivers/firmware/efi/Kconfig
+@@ -76,10 +76,6 @@ config EFI_ZBOOT
+ bool "Enable the generic EFI decompressor"
+ depends on EFI_GENERIC_STUB && !ARM
+ select HAVE_KERNEL_GZIP
+- select HAVE_KERNEL_LZ4
+- select HAVE_KERNEL_LZMA
+- select HAVE_KERNEL_LZO
+- select HAVE_KERNEL_XZ
+ select HAVE_KERNEL_ZSTD
+ help
+ Create the bootable image as an EFI application that carries the
+diff --git a/drivers/firmware/efi/libstub/Makefile.zboot b/drivers/firmware/efi/libstub/Makefile.zboot
+index 65ffd0b760b2fb..48842b5c106b83 100644
+--- a/drivers/firmware/efi/libstub/Makefile.zboot
++++ b/drivers/firmware/efi/libstub/Makefile.zboot
+@@ -12,22 +12,16 @@ quiet_cmd_copy_and_pad = PAD $@
+ $(obj)/vmlinux.bin: $(obj)/$(EFI_ZBOOT_PAYLOAD) FORCE
+ $(call if_changed,copy_and_pad)
+
+-comp-type-$(CONFIG_KERNEL_GZIP) := gzip
+-comp-type-$(CONFIG_KERNEL_LZ4) := lz4
+-comp-type-$(CONFIG_KERNEL_LZMA) := lzma
+-comp-type-$(CONFIG_KERNEL_LZO) := lzo
+-comp-type-$(CONFIG_KERNEL_XZ) := xzkern
+-comp-type-$(CONFIG_KERNEL_ZSTD) := zstd22
+-
+ # in GZIP, the appended le32 carrying the uncompressed size is part of the
+ # format, but in other cases, we just append it at the end for convenience,
+ # causing the original tools to complain when checking image integrity.
+-# So disregard it when calculating the payload size in the zimage header.
+-zboot-method-y := $(comp-type-y)_with_size
+-zboot-size-len-y := 4
++comp-type-y := gzip
++zboot-method-y := gzip
++zboot-size-len-y := 0
+
+-zboot-method-$(CONFIG_KERNEL_GZIP) := gzip
+-zboot-size-len-$(CONFIG_KERNEL_GZIP) := 0
++comp-type-$(CONFIG_KERNEL_ZSTD) := zstd
++zboot-method-$(CONFIG_KERNEL_ZSTD) := zstd22_with_size
++zboot-size-len-$(CONFIG_KERNEL_ZSTD) := 4
+
+ $(obj)/vmlinuz: $(obj)/vmlinux.bin FORCE
+ $(call if_changed,$(zboot-method-y))
+diff --git a/drivers/gpio/gpio-sim.c b/drivers/gpio/gpio-sim.c
+index dcca1d7f173e5f..deedacdeb23952 100644
+--- a/drivers/gpio/gpio-sim.c
++++ b/drivers/gpio/gpio-sim.c
+@@ -1030,6 +1030,30 @@ static void gpio_sim_device_deactivate(struct gpio_sim_device *dev)
+ dev->pdev = NULL;
+ }
+
++static void
++gpio_sim_device_lockup_configfs(struct gpio_sim_device *dev, bool lock)
++{
++ struct configfs_subsystem *subsys = dev->group.cg_subsys;
++ struct gpio_sim_bank *bank;
++ struct gpio_sim_line *line;
++
++ /*
++ * The device only needs to depend on leaf line entries. This is
++ * sufficient to lock up all the configfs entries that the
++ * instantiated, alive device depends on.
++ */
++ list_for_each_entry(bank, &dev->bank_list, siblings) {
++ list_for_each_entry(line, &bank->line_list, siblings) {
++ if (lock)
++ WARN_ON(configfs_depend_item_unlocked(
++ subsys, &line->group.cg_item));
++ else
++ configfs_undepend_item_unlocked(
++ &line->group.cg_item);
++ }
++ }
++}
++
+ static ssize_t
+ gpio_sim_device_config_live_store(struct config_item *item,
+ const char *page, size_t count)
+@@ -1042,14 +1066,24 @@ gpio_sim_device_config_live_store(struct config_item *item,
+ if (ret)
+ return ret;
+
+- guard(mutex)(&dev->lock);
++ if (live)
++ gpio_sim_device_lockup_configfs(dev, true);
+
+- if (live == gpio_sim_device_is_live(dev))
+- ret = -EPERM;
+- else if (live)
+- ret = gpio_sim_device_activate(dev);
+- else
+- gpio_sim_device_deactivate(dev);
++ scoped_guard(mutex, &dev->lock) {
++ if (live == gpio_sim_device_is_live(dev))
++ ret = -EPERM;
++ else if (live)
++ ret = gpio_sim_device_activate(dev);
++ else
++ gpio_sim_device_deactivate(dev);
++ }
++
++ /*
++ * Undepend is required only if device disablement (live == 0)
++ * succeeds or if device enablement (live == 1) fails.
++ */
++ if (live == !!ret)
++ gpio_sim_device_lockup_configfs(dev, false);
+
+ return ret ?: count;
+ }
+diff --git a/drivers/gpio/gpio-virtuser.c b/drivers/gpio/gpio-virtuser.c
+index d6244f0d3bc752..e89f299f214009 100644
+--- a/drivers/gpio/gpio-virtuser.c
++++ b/drivers/gpio/gpio-virtuser.c
+@@ -1546,6 +1546,30 @@ gpio_virtuser_device_deactivate(struct gpio_virtuser_device *dev)
+ dev->pdev = NULL;
+ }
+
++static void
++gpio_virtuser_device_lockup_configfs(struct gpio_virtuser_device *dev, bool lock)
++{
++ struct configfs_subsystem *subsys = dev->group.cg_subsys;
++ struct gpio_virtuser_lookup_entry *entry;
++ struct gpio_virtuser_lookup *lookup;
++
++ /*
++ * The device only needs to depend on leaf lookup entries. This is
++ * sufficient to lock up all the configfs entries that the
++ * instantiated, alive device depends on.
++ */
++ list_for_each_entry(lookup, &dev->lookup_list, siblings) {
++ list_for_each_entry(entry, &lookup->entry_list, siblings) {
++ if (lock)
++ WARN_ON(configfs_depend_item_unlocked(
++ subsys, &entry->group.cg_item));
++ else
++ configfs_undepend_item_unlocked(
++ &entry->group.cg_item);
++ }
++ }
++}
++
+ static ssize_t
+ gpio_virtuser_device_config_live_store(struct config_item *item,
+ const char *page, size_t count)
+@@ -1558,15 +1582,24 @@ gpio_virtuser_device_config_live_store(struct config_item *item,
+ if (ret)
+ return ret;
+
+- guard(mutex)(&dev->lock);
++ if (live)
++ gpio_virtuser_device_lockup_configfs(dev, true);
+
+- if (live == gpio_virtuser_device_is_live(dev))
+- return -EPERM;
++ scoped_guard(mutex, &dev->lock) {
++ if (live == gpio_virtuser_device_is_live(dev))
++ ret = -EPERM;
++ else if (live)
++ ret = gpio_virtuser_device_activate(dev);
++ else
++ gpio_virtuser_device_deactivate(dev);
++ }
+
+- if (live)
+- ret = gpio_virtuser_device_activate(dev);
+- else
+- gpio_virtuser_device_deactivate(dev);
++ /*
++ * Undepend is required only if device disablement (live == 0)
++ * succeeds or if device enablement (live == 1) fails.
++ */
++ if (live == !!ret)
++ gpio_virtuser_device_lockup_configfs(dev, false);
+
+ return ret ?: count;
+ }
+diff --git a/drivers/gpio/gpio-xilinx.c b/drivers/gpio/gpio-xilinx.c
+index afcf432a1573ed..2ea8ccfbdccdd4 100644
+--- a/drivers/gpio/gpio-xilinx.c
++++ b/drivers/gpio/gpio-xilinx.c
+@@ -65,7 +65,7 @@ struct xgpio_instance {
+ DECLARE_BITMAP(state, 64);
+ DECLARE_BITMAP(last_irq_read, 64);
+ DECLARE_BITMAP(dir, 64);
+- spinlock_t gpio_lock; /* For serializing operations */
++ raw_spinlock_t gpio_lock; /* For serializing operations */
+ int irq;
+ DECLARE_BITMAP(enable, 64);
+ DECLARE_BITMAP(rising_edge, 64);
+@@ -179,14 +179,14 @@ static void xgpio_set(struct gpio_chip *gc, unsigned int gpio, int val)
+ struct xgpio_instance *chip = gpiochip_get_data(gc);
+ int bit = xgpio_to_bit(chip, gpio);
+
+- spin_lock_irqsave(&chip->gpio_lock, flags);
++ raw_spin_lock_irqsave(&chip->gpio_lock, flags);
+
+ /* Write to GPIO signal and set its direction to output */
+ __assign_bit(bit, chip->state, val);
+
+ xgpio_write_ch(chip, XGPIO_DATA_OFFSET, bit, chip->state);
+
+- spin_unlock_irqrestore(&chip->gpio_lock, flags);
++ raw_spin_unlock_irqrestore(&chip->gpio_lock, flags);
+ }
+
+ /**
+@@ -210,7 +210,7 @@ static void xgpio_set_multiple(struct gpio_chip *gc, unsigned long *mask,
+ bitmap_remap(hw_mask, mask, chip->sw_map, chip->hw_map, 64);
+ bitmap_remap(hw_bits, bits, chip->sw_map, chip->hw_map, 64);
+
+- spin_lock_irqsave(&chip->gpio_lock, flags);
++ raw_spin_lock_irqsave(&chip->gpio_lock, flags);
+
+ bitmap_replace(state, chip->state, hw_bits, hw_mask, 64);
+
+@@ -218,7 +218,7 @@ static void xgpio_set_multiple(struct gpio_chip *gc, unsigned long *mask,
+
+ bitmap_copy(chip->state, state, 64);
+
+- spin_unlock_irqrestore(&chip->gpio_lock, flags);
++ raw_spin_unlock_irqrestore(&chip->gpio_lock, flags);
+ }
+
+ /**
+@@ -236,13 +236,13 @@ static int xgpio_dir_in(struct gpio_chip *gc, unsigned int gpio)
+ struct xgpio_instance *chip = gpiochip_get_data(gc);
+ int bit = xgpio_to_bit(chip, gpio);
+
+- spin_lock_irqsave(&chip->gpio_lock, flags);
++ raw_spin_lock_irqsave(&chip->gpio_lock, flags);
+
+ /* Set the GPIO bit in shadow register and set direction as input */
+ __set_bit(bit, chip->dir);
+ xgpio_write_ch(chip, XGPIO_TRI_OFFSET, bit, chip->dir);
+
+- spin_unlock_irqrestore(&chip->gpio_lock, flags);
++ raw_spin_unlock_irqrestore(&chip->gpio_lock, flags);
+
+ return 0;
+ }
+@@ -265,7 +265,7 @@ static int xgpio_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
+ struct xgpio_instance *chip = gpiochip_get_data(gc);
+ int bit = xgpio_to_bit(chip, gpio);
+
+- spin_lock_irqsave(&chip->gpio_lock, flags);
++ raw_spin_lock_irqsave(&chip->gpio_lock, flags);
+
+ /* Write state of GPIO signal */
+ __assign_bit(bit, chip->state, val);
+@@ -275,7 +275,7 @@ static int xgpio_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
+ __clear_bit(bit, chip->dir);
+ xgpio_write_ch(chip, XGPIO_TRI_OFFSET, bit, chip->dir);
+
+- spin_unlock_irqrestore(&chip->gpio_lock, flags);
++ raw_spin_unlock_irqrestore(&chip->gpio_lock, flags);
+
+ return 0;
+ }
+@@ -398,7 +398,7 @@ static void xgpio_irq_mask(struct irq_data *irq_data)
+ int bit = xgpio_to_bit(chip, irq_offset);
+ u32 mask = BIT(bit / 32), temp;
+
+- spin_lock_irqsave(&chip->gpio_lock, flags);
++ raw_spin_lock_irqsave(&chip->gpio_lock, flags);
+
+ __clear_bit(bit, chip->enable);
+
+@@ -408,7 +408,7 @@ static void xgpio_irq_mask(struct irq_data *irq_data)
+ temp &= ~mask;
+ xgpio_writereg(chip->regs + XGPIO_IPIER_OFFSET, temp);
+ }
+- spin_unlock_irqrestore(&chip->gpio_lock, flags);
++ raw_spin_unlock_irqrestore(&chip->gpio_lock, flags);
+
+ gpiochip_disable_irq(&chip->gc, irq_offset);
+ }
+@@ -428,7 +428,7 @@ static void xgpio_irq_unmask(struct irq_data *irq_data)
+
+ gpiochip_enable_irq(&chip->gc, irq_offset);
+
+- spin_lock_irqsave(&chip->gpio_lock, flags);
++ raw_spin_lock_irqsave(&chip->gpio_lock, flags);
+
+ __set_bit(bit, chip->enable);
+
+@@ -447,7 +447,7 @@ static void xgpio_irq_unmask(struct irq_data *irq_data)
+ xgpio_writereg(chip->regs + XGPIO_IPIER_OFFSET, val);
+ }
+
+- spin_unlock_irqrestore(&chip->gpio_lock, flags);
++ raw_spin_unlock_irqrestore(&chip->gpio_lock, flags);
+ }
+
+ /**
+@@ -512,7 +512,7 @@ static void xgpio_irqhandler(struct irq_desc *desc)
+
+ chained_irq_enter(irqchip, desc);
+
+- spin_lock(&chip->gpio_lock);
++ raw_spin_lock(&chip->gpio_lock);
+
+ xgpio_read_ch_all(chip, XGPIO_DATA_OFFSET, all);
+
+@@ -529,7 +529,7 @@ static void xgpio_irqhandler(struct irq_desc *desc)
+ bitmap_copy(chip->last_irq_read, all, 64);
+ bitmap_or(all, rising, falling, 64);
+
+- spin_unlock(&chip->gpio_lock);
++ raw_spin_unlock(&chip->gpio_lock);
+
+ dev_dbg(gc->parent, "IRQ rising %*pb falling %*pb\n", 64, rising, 64, falling);
+
+@@ -620,7 +620,7 @@ static int xgpio_probe(struct platform_device *pdev)
+ bitmap_set(chip->hw_map, 0, width[0]);
+ bitmap_set(chip->hw_map, 32, width[1]);
+
+- spin_lock_init(&chip->gpio_lock);
++ raw_spin_lock_init(&chip->gpio_lock);
+
+ chip->gc.base = -1;
+ chip->gc.ngpio = bitmap_weight(chip->hw_map, 64);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+index e41318bfbf4575..84e5364d1f67d0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd.c
+@@ -715,8 +715,9 @@ int amdgpu_amdkfd_submit_ib(struct amdgpu_device *adev,
+ void amdgpu_amdkfd_set_compute_idle(struct amdgpu_device *adev, bool idle)
+ {
+ enum amd_powergating_state state = idle ? AMD_PG_STATE_GATE : AMD_PG_STATE_UNGATE;
+- if (IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 11 &&
+- ((adev->mes.kiq_version & AMDGPU_MES_VERSION_MASK) <= 64)) {
++ if ((IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 11 &&
++ ((adev->mes.kiq_version & AMDGPU_MES_VERSION_MASK) <= 64)) ||
++ (IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 12)) {
+ pr_debug("GFXOFF is %s\n", idle ? "enabled" : "disabled");
+ amdgpu_gfx_off_ctrl(adev, idle);
+ } else if ((IP_VERSION_MAJ(amdgpu_ip_version(adev, GC_HWIP, 0)) == 9) &&
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fw_attestation.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fw_attestation.c
+index 2d4b67175b55be..328a1b9635481c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fw_attestation.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fw_attestation.c
+@@ -122,6 +122,10 @@ static int amdgpu_is_fw_attestation_supported(struct amdgpu_device *adev)
+ if (adev->flags & AMD_IS_APU)
+ return 0;
+
++ if (amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(14, 0, 2) ||
++ amdgpu_ip_version(adev, MP0_HWIP, 0) == IP_VERSION(14, 0, 3))
++ return 0;
++
+ if (adev->asic_type >= CHIP_SIENNA_CICHLID)
+ return 1;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+index 8b512dc28df838..071f187f5e282f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
+@@ -193,8 +193,8 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned int num_ibs,
+ need_ctx_switch = ring->current_ctx != fence_ctx;
+ if (ring->funcs->emit_pipeline_sync && job &&
+ ((tmp = amdgpu_sync_get_fence(&job->explicit_sync)) ||
+- (amdgpu_sriov_vf(adev) && need_ctx_switch) ||
+- amdgpu_vm_need_pipeline_sync(ring, job))) {
++ need_ctx_switch || amdgpu_vm_need_pipeline_sync(ring, job))) {
++
+ need_pipe_sync = true;
+
+ if (tmp)
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index ea403fece8392c..08c58d0315de7f 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -8889,6 +8889,7 @@ static void amdgpu_dm_enable_self_refresh(struct amdgpu_crtc *acrtc_attach,
+ struct replay_settings *pr = &acrtc_state->stream->link->replay_settings;
+ struct amdgpu_dm_connector *aconn =
+ (struct amdgpu_dm_connector *)acrtc_state->stream->dm_stream_context;
++ bool vrr_active = amdgpu_dm_crtc_vrr_active(acrtc_state);
+
+ if (acrtc_state->update_type > UPDATE_TYPE_FAST) {
+ if (pr->config.replay_supported && !pr->replay_feature_enabled)
+@@ -8915,14 +8916,15 @@ static void amdgpu_dm_enable_self_refresh(struct amdgpu_crtc *acrtc_attach,
+ * adequate number of fast atomic commits to notify KMD
+ * of update events. See `vblank_control_worker()`.
+ */
+- if (acrtc_attach->dm_irq_params.allow_sr_entry &&
++ if (!vrr_active &&
++ acrtc_attach->dm_irq_params.allow_sr_entry &&
+ #ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
+ !amdgpu_dm_crc_window_is_activated(acrtc_state->base.crtc) &&
+ #endif
+ (current_ts - psr->psr_dirty_rects_change_timestamp_ns) > 500000000) {
+ if (pr->replay_feature_enabled && !pr->replay_allow_active)
+ amdgpu_dm_replay_enable(acrtc_state->stream, true);
+- if (psr->psr_version >= DC_PSR_VERSION_SU_1 &&
++ if (psr->psr_version == DC_PSR_VERSION_SU_1 &&
+ !psr->psr_allow_active && !aconn->disallow_edp_enter_psr)
+ amdgpu_dm_psr_enable(acrtc_state->stream);
+ }
+@@ -9093,7 +9095,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ acrtc_state->stream->link->psr_settings.psr_dirty_rects_change_timestamp_ns =
+ timestamp_ns;
+ if (acrtc_state->stream->link->psr_settings.psr_allow_active)
+- amdgpu_dm_psr_disable(acrtc_state->stream);
++ amdgpu_dm_psr_disable(acrtc_state->stream, true);
+ mutex_unlock(&dm->dc_lock);
+ }
+ }
+@@ -9259,11 +9261,11 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ bundle->stream_update.abm_level = &acrtc_state->abm_level;
+
+ mutex_lock(&dm->dc_lock);
+- if (acrtc_state->update_type > UPDATE_TYPE_FAST) {
++ if ((acrtc_state->update_type > UPDATE_TYPE_FAST) || vrr_active) {
+ if (acrtc_state->stream->link->replay_settings.replay_allow_active)
+ amdgpu_dm_replay_disable(acrtc_state->stream);
+ if (acrtc_state->stream->link->psr_settings.psr_allow_active)
+- amdgpu_dm_psr_disable(acrtc_state->stream);
++ amdgpu_dm_psr_disable(acrtc_state->stream, true);
+ }
+ mutex_unlock(&dm->dc_lock);
+
+@@ -11370,6 +11372,25 @@ static int dm_crtc_get_cursor_mode(struct amdgpu_device *adev,
+ return 0;
+ }
+
++static bool amdgpu_dm_crtc_mem_type_changed(struct drm_device *dev,
++ struct drm_atomic_state *state,
++ struct drm_crtc_state *crtc_state)
++{
++ struct drm_plane *plane;
++ struct drm_plane_state *new_plane_state, *old_plane_state;
++
++ drm_for_each_plane_mask(plane, dev, crtc_state->plane_mask) {
++ new_plane_state = drm_atomic_get_plane_state(state, plane);
++ old_plane_state = drm_atomic_get_plane_state(state, plane);
++
++ if (old_plane_state->fb && new_plane_state->fb &&
++ get_mem_type(old_plane_state->fb) != get_mem_type(new_plane_state->fb))
++ return true;
++ }
++
++ return false;
++}
++
+ /**
+ * amdgpu_dm_atomic_check() - Atomic check implementation for AMDgpu DM.
+ *
+@@ -11567,10 +11588,6 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+
+ /* Remove exiting planes if they are modified */
+ for_each_oldnew_plane_in_descending_zpos(state, plane, old_plane_state, new_plane_state) {
+- if (old_plane_state->fb && new_plane_state->fb &&
+- get_mem_type(old_plane_state->fb) !=
+- get_mem_type(new_plane_state->fb))
+- lock_and_validation_needed = true;
+
+ ret = dm_update_plane_state(dc, state, plane,
+ old_plane_state,
+@@ -11865,9 +11882,11 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+
+ /*
+ * Only allow async flips for fast updates that don't change
+- * the FB pitch, the DCC state, rotation, etc.
++ * the FB pitch, the DCC state, rotation, mem_type, etc.
+ */
+- if (new_crtc_state->async_flip && lock_and_validation_needed) {
++ if (new_crtc_state->async_flip &&
++ (lock_and_validation_needed ||
++ amdgpu_dm_crtc_mem_type_changed(dev, state, new_crtc_state))) {
+ drm_dbg_atomic(crtc->dev,
+ "[CRTC:%d:%s] async flips are only supported for fast updates\n",
+ crtc->base.id, crtc->name);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+index f936a35fa9ebb7..0f6ba7b1575d08 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crc.c
+@@ -30,6 +30,7 @@
+ #include "amdgpu_dm.h"
+ #include "dc.h"
+ #include "amdgpu_securedisplay.h"
++#include "amdgpu_dm_psr.h"
+
+ static const char *const pipe_crc_sources[] = {
+ "none",
+@@ -224,6 +225,10 @@ int amdgpu_dm_crtc_configure_crc_source(struct drm_crtc *crtc,
+
+ mutex_lock(&adev->dm.dc_lock);
+
++ /* For PSR1, check that the panel has exited PSR */
++ if (stream_state->link->psr_settings.psr_version < DC_PSR_VERSION_SU_1)
++ amdgpu_dm_psr_wait_disable(stream_state);
++
+ /* Enable or disable CRTC CRC generation */
+ if (dm_is_crc_source_crtc(source) || source == AMDGPU_DM_PIPE_CRC_SOURCE_NONE) {
+ if (!dc_stream_configure_crc(stream_state->ctx->dc,
+@@ -357,6 +362,17 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name)
+
+ }
+
++ /*
++ * Reading the CRC requires the vblank interrupt handler to be
++ * enabled. Keep a reference until CRC capture stops.
++ */
++ enabled = amdgpu_dm_is_valid_crc_source(cur_crc_src);
++ if (!enabled && enable) {
++ ret = drm_crtc_vblank_get(crtc);
++ if (ret)
++ goto cleanup;
++ }
++
+ #if defined(CONFIG_DRM_AMD_SECURE_DISPLAY)
+ /* Reset secure_display when we change crc source from debugfs */
+ amdgpu_dm_set_crc_window_default(crtc, crtc_state->stream);
+@@ -367,16 +383,7 @@ int amdgpu_dm_crtc_set_crc_source(struct drm_crtc *crtc, const char *src_name)
+ goto cleanup;
+ }
+
+- /*
+- * Reading the CRC requires the vblank interrupt handler to be
+- * enabled. Keep a reference until CRC capture stops.
+- */
+- enabled = amdgpu_dm_is_valid_crc_source(cur_crc_src);
+ if (!enabled && enable) {
+- ret = drm_crtc_vblank_get(crtc);
+- if (ret)
+- goto cleanup;
+-
+ if (dm_is_crc_source_dprx(source)) {
+ if (drm_dp_start_crc(aux, crtc)) {
+ DRM_DEBUG_DRIVER("dp start crc failed\n");
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index 9be87b53251739..70fcfae8e4c552 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -93,7 +93,7 @@ int amdgpu_dm_crtc_set_vupdate_irq(struct drm_crtc *crtc, bool enable)
+ return rc;
+ }
+
+-bool amdgpu_dm_crtc_vrr_active(struct dm_crtc_state *dm_state)
++bool amdgpu_dm_crtc_vrr_active(const struct dm_crtc_state *dm_state)
+ {
+ return dm_state->freesync_config.state == VRR_STATE_ACTIVE_VARIABLE ||
+ dm_state->freesync_config.state == VRR_STATE_ACTIVE_FIXED;
+@@ -142,7 +142,7 @@ static void amdgpu_dm_crtc_set_panel_sr_feature(
+ amdgpu_dm_replay_enable(vblank_work->stream, true);
+ } else if (vblank_enabled) {
+ if (link->psr_settings.psr_version < DC_PSR_VERSION_SU_1 && is_sr_active)
+- amdgpu_dm_psr_disable(vblank_work->stream);
++ amdgpu_dm_psr_disable(vblank_work->stream, false);
+ } else if (link->psr_settings.psr_feature_enabled &&
+ allow_sr_entry && !is_sr_active && !is_crc_window_active) {
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.h
+index 17e948753f59bd..c1212947a77b83 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.h
+@@ -37,7 +37,7 @@ int amdgpu_dm_crtc_set_vupdate_irq(struct drm_crtc *crtc, bool enable);
+
+ bool amdgpu_dm_crtc_vrr_active_irq(struct amdgpu_crtc *acrtc);
+
+-bool amdgpu_dm_crtc_vrr_active(struct dm_crtc_state *dm_state);
++bool amdgpu_dm_crtc_vrr_active(const struct dm_crtc_state *dm_state);
+
+ int amdgpu_dm_crtc_enable_vblank(struct drm_crtc *crtc);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+index db56b0aa545454..98e88903d07d52 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_debugfs.c
+@@ -3638,7 +3638,7 @@ static int crc_win_update_set(void *data, u64 val)
+ /* PSR may write to OTG CRC window control register,
+ * so close it before starting secure_display.
+ */
+- amdgpu_dm_psr_disable(acrtc->dm_irq_params.stream);
++ amdgpu_dm_psr_disable(acrtc->dm_irq_params.stream, true);
+
+ spin_lock_irq(&adev_to_drm(adev)->event_lock);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 32b025c92c63cf..3d624ae6d9bdfe 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -1831,11 +1831,15 @@ enum dc_status dm_dp_mst_is_port_support_mode(
+ if (immediate_upstream_port) {
+ virtual_channel_bw_in_kbps = kbps_from_pbn(immediate_upstream_port->full_pbn);
+ virtual_channel_bw_in_kbps = min(root_link_bw_in_kbps, virtual_channel_bw_in_kbps);
+- if (bw_range.min_kbps > virtual_channel_bw_in_kbps) {
+- DRM_DEBUG_DRIVER("MST_DSC dsc decode at last link."
+- "Max dsc compression can't fit into MST available bw\n");
+- return DC_FAIL_BANDWIDTH_VALIDATE;
+- }
++ } else {
++ /* For topology LCT 1 case - only one mstb */
++ virtual_channel_bw_in_kbps = root_link_bw_in_kbps;
++ }
++
++ if (bw_range.min_kbps > virtual_channel_bw_in_kbps) {
++ DRM_DEBUG_DRIVER("MST_DSC dsc decode at last link."
++ "Max dsc compression can't fit into MST available bw\n");
++ return DC_FAIL_BANDWIDTH_VALIDATE;
+ }
+ }
+
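
The MST hunk makes the minimum-bandwidth check unconditional: with an immediate upstream port the virtual channel is capped by that port's full_pbn, and in the LCT 1 case (a single mstb, no upstream port) it now falls back to the root link bandwidth instead of skipping the check entirely. Reduced to a predicate, as a sketch using the same kbps quantities as the hunk:

        #include <linux/minmax.h>
        #include <linux/types.h>

        static bool mst_dsc_fits(u32 min_compressed_kbps, u32 root_link_kbps,
                                 u32 upstream_kbps, bool has_upstream_port)
        {
                u32 vc_kbps = has_upstream_port ?
                              min(root_link_kbps, upstream_kbps) : root_link_kbps;

                /* even maximum DSC compression must fit the virtual channel */
                return min_compressed_kbps <= vc_kbps;
        }
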
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+index f40240aafe988e..45858bf1523d8f 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+@@ -201,14 +201,13 @@ void amdgpu_dm_psr_enable(struct dc_stream_state *stream)
+ *
+ * Return: true if success
+ */
+-bool amdgpu_dm_psr_disable(struct dc_stream_state *stream)
++bool amdgpu_dm_psr_disable(struct dc_stream_state *stream, bool wait)
+ {
+- unsigned int power_opt = 0;
+ bool psr_enable = false;
+
+ DRM_DEBUG_DRIVER("Disabling psr...\n");
+
+- return dc_link_set_psr_allow_active(stream->link, &psr_enable, true, false, &power_opt);
++ return dc_link_set_psr_allow_active(stream->link, &psr_enable, wait, false, NULL);
+ }
+
+ /*
+@@ -251,3 +250,33 @@ bool amdgpu_dm_psr_is_active_allowed(struct amdgpu_display_manager *dm)
+
+ return allow_active;
+ }
++
++/**
++ * amdgpu_dm_psr_wait_disable() - Wait for eDP panel to exit PSR
++ * @stream: stream state attached to the eDP link
++ *
++ * Waits for a max of 500ms for the eDP panel to exit PSR.
++ *
++ * Return: true if panel exited PSR, false otherwise.
++ */
++bool amdgpu_dm_psr_wait_disable(struct dc_stream_state *stream)
++{
++ enum dc_psr_state psr_state = PSR_STATE0;
++ struct dc_link *link = stream->link;
++ int retry_count;
++
++ if (link == NULL)
++ return false;
++
++ for (retry_count = 0; retry_count < 1000; retry_count++) {
++ dc_link_get_psr_state(link, &psr_state);
++ if (psr_state == PSR_STATE0)
++ break;
++ udelay(500);
++ }
++
++ if (retry_count == 1000)
++ return false;
++
++ return true;
++}
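
amdgpu_dm_psr_wait_disable() above polls with udelay(), which busy-waits and never sleeps. In a context that is allowed to sleep, the same 500 ms budget (1000 polls of 500 us) could be written with read_poll_timeout() from <linux/iopoll.h>; a sketch, not what the driver does, wrapping dc_link_get_psr_state() since it returns the state through a pointer:

        #include <linux/iopoll.h>

        static enum dc_psr_state psr_state_read(struct dc_link *link)
        {
                enum dc_psr_state state = PSR_STATE0;

                dc_link_get_psr_state(link, &state);
                return state;
        }

        static bool psr_wait_state0_sleeping(struct dc_link *link)
        {
                enum dc_psr_state state;

                /* poll every 500us, give up after 500ms; may sleep */
                return !read_poll_timeout(psr_state_read, state,
                                          state == PSR_STATE0,
                                          500, 500000, false, link);
        }
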
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h
+index cd2d45c2b5ef01..e2366321a3c1bd 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.h
+@@ -34,8 +34,9 @@
+ void amdgpu_dm_set_psr_caps(struct dc_link *link);
+ void amdgpu_dm_psr_enable(struct dc_stream_state *stream);
+ bool amdgpu_dm_link_setup_psr(struct dc_stream_state *stream);
+-bool amdgpu_dm_psr_disable(struct dc_stream_state *stream);
++bool amdgpu_dm_psr_disable(struct dc_stream_state *stream, bool wait);
+ bool amdgpu_dm_psr_disable_all(struct amdgpu_display_manager *dm);
+ bool amdgpu_dm_psr_is_active_allowed(struct amdgpu_display_manager *dm);
++bool amdgpu_dm_psr_wait_disable(struct dc_stream_state *stream);
+
+ #endif /* AMDGPU_DM_AMDGPU_DM_PSR_H_ */
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c b/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
+index beed7adbbd43e0..47d785204f29cb 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn35/dcn35_fpu.c
+@@ -195,9 +195,9 @@ struct _vcs_dpi_soc_bounding_box_st dcn3_5_soc = {
+ .dcn_downspread_percent = 0.5,
+ .gpuvm_min_page_size_bytes = 4096,
+ .hostvm_min_page_size_bytes = 4096,
+- .do_urgent_latency_adjustment = 1,
++ .do_urgent_latency_adjustment = 0,
+ .urgent_latency_adjustment_fabric_clock_component_us = 0,
+- .urgent_latency_adjustment_fabric_clock_reference_mhz = 3000,
++ .urgent_latency_adjustment_fabric_clock_reference_mhz = 0,
+ };
+
+ void dcn35_build_wm_range_table_fpu(struct clk_mgr *clk_mgr)
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+index cd2cf0ffc0f5cb..5a0a10144a73fe 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
+@@ -2549,11 +2549,12 @@ static int smu_v13_0_0_set_power_profile_mode(struct smu_context *smu,
+ &backend_workload_mask);
+
+ /* Add optimizations for SMU13.0.0/10. Reuse the power saving profile */
+- if ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
+- ((smu->adev->pm.fw_version == 0x004e6601) ||
+- (smu->adev->pm.fw_version >= 0x004e7300))) ||
+- (amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 10) &&
+- smu->adev->pm.fw_version >= 0x00504500)) {
++ if ((workload_mask & (1 << PP_SMC_POWER_PROFILE_COMPUTE)) &&
++ ((amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 0) &&
++ ((smu->adev->pm.fw_version == 0x004e6601) ||
++ (smu->adev->pm.fw_version >= 0x004e7300))) ||
++ (amdgpu_ip_version(smu->adev, MP1_HWIP, 0) == IP_VERSION(13, 0, 10) &&
++ smu->adev->pm.fw_version >= 0x00504500))) {
+ workload_type = smu_cmn_to_asic_specific_index(smu,
+ CMN2ASIC_MAPPING_WORKLOAD,
+ PP_SMC_POWER_PROFILE_POWERSAVING);
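
The SMU hunk narrows the power-saving-profile optimization: besides the existing firmware-version gates, the COMPUTE profile must now actually be part of the requested workload mask. The dense condition, restated as a predicate (sketch only, same fields and constants as the hunk):

        static bool use_powersave_backend(struct smu_context *smu, u32 workload_mask)
        {
                u32 ver = smu->adev->pm.fw_version;
                bool compute = workload_mask & (1 << PP_SMC_POWER_PROFILE_COMPUTE);
                bool smu13_0_0 = amdgpu_ip_version(smu->adev, MP1_HWIP, 0) ==
                                 IP_VERSION(13, 0, 0);
                bool smu13_0_10 = amdgpu_ip_version(smu->adev, MP1_HWIP, 0) ==
                                  IP_VERSION(13, 0, 10);

                return compute &&
                       ((smu13_0_0 && (ver == 0x004e6601 || ver >= 0x004e7300)) ||
                        (smu13_0_10 && ver >= 0x00504500));
        }
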
+diff --git a/drivers/gpu/drm/i915/display/intel_fb.c b/drivers/gpu/drm/i915/display/intel_fb.c
+index 35557d98d7a700..3d16e9406dc6b1 100644
+--- a/drivers/gpu/drm/i915/display/intel_fb.c
++++ b/drivers/gpu/drm/i915/display/intel_fb.c
+@@ -1613,7 +1613,7 @@ int intel_fill_fb_info(struct drm_i915_private *i915, struct intel_framebuffer *
+ * arithmetic related to alignment and offset calculation.
+ */
+ if (is_gen12_ccs_cc_plane(&fb->base, i)) {
+- if (IS_ALIGNED(fb->base.offsets[i], PAGE_SIZE))
++ if (IS_ALIGNED(fb->base.offsets[i], 64))
+ continue;
+ else
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_fence.c b/drivers/gpu/drm/nouveau/nouveau_fence.c
+index 09686d038d6053..7cc84472cecec2 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_fence.c
++++ b/drivers/gpu/drm/nouveau/nouveau_fence.c
+@@ -387,11 +387,13 @@ nouveau_fence_sync(struct nouveau_bo *nvbo, struct nouveau_channel *chan,
+ if (f) {
+ struct nouveau_channel *prev;
+ bool must_wait = true;
++ bool local;
+
+ rcu_read_lock();
+ prev = rcu_dereference(f->channel);
+- if (prev && (prev == chan ||
+- fctx->sync(f, prev, chan) == 0))
++ local = prev && prev->cli->drm == chan->cli->drm;
++ if (local && (prev == chan ||
++ fctx->sync(f, prev, chan) == 0))
+ must_wait = false;
+ rcu_read_unlock();
+ if (!must_wait)
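
The nouveau change gates the fast path on locality: channel-to-channel sync only makes sense when both channels belong to the same DRM client/device, so a fence from a foreign device now falls through to a plain wait. The fast path additionally still requires the same channel or a successful fctx->sync(); the new locality test, pulled out for clarity (a sketch using the fields named in the hunk):

        static bool fence_is_local(struct nouveau_fence *f,
                                   struct nouveau_channel *chan)
        {
                /* caller holds rcu_read_lock(), as in the hunk above */
                struct nouveau_channel *prev = rcu_dereference(f->channel);

                /* inter-channel sync primitives only work within one client */
                return prev && prev->cli->drm == chan->cli->drm;
        }
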
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/disp/mcp77.c b/drivers/gpu/drm/nouveau/nvkm/engine/disp/mcp77.c
+index 841e3b69fcaf3e..5a0c9b8a79f3ec 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/disp/mcp77.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/disp/mcp77.c
+@@ -31,6 +31,7 @@ mcp77_sor = {
+ .state = g94_sor_state,
+ .power = nv50_sor_power,
+ .clock = nv50_sor_clock,
++ .bl = &nv50_sor_bl,
+ .hdmi = &g84_sor_hdmi,
+ .dp = &g94_sor_dp,
+ };
+diff --git a/drivers/gpu/drm/tests/drm_kunit_helpers.c b/drivers/gpu/drm/tests/drm_kunit_helpers.c
+index 04a6b8cc62ac67..3c0b7824c0be37 100644
+--- a/drivers/gpu/drm/tests/drm_kunit_helpers.c
++++ b/drivers/gpu/drm/tests/drm_kunit_helpers.c
+@@ -320,8 +320,7 @@ static void kunit_action_drm_mode_destroy(void *ptr)
+ }
+
+ /**
+- * drm_kunit_display_mode_from_cea_vic() - return a mode for CEA VIC
+- for a KUnit test
++ * drm_kunit_display_mode_from_cea_vic() - return a mode for CEA VIC for a KUnit test
+ * @test: The test context object
+ * @dev: DRM device
+ * @video_code: CEA VIC of the mode
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index 20bf33702c3c4f..da203045df9bec 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -108,6 +108,7 @@ v3d_irq(int irq, void *arg)
+ v3d_job_update_stats(&v3d->bin_job->base, V3D_BIN);
+ trace_v3d_bcl_irq(&v3d->drm, fence->seqno);
+ dma_fence_signal(&fence->base);
++ v3d->bin_job = NULL;
+ status = IRQ_HANDLED;
+ }
+
+@@ -118,6 +119,7 @@ v3d_irq(int irq, void *arg)
+ v3d_job_update_stats(&v3d->render_job->base, V3D_RENDER);
+ trace_v3d_rcl_irq(&v3d->drm, fence->seqno);
+ dma_fence_signal(&fence->base);
++ v3d->render_job = NULL;
+ status = IRQ_HANDLED;
+ }
+
+@@ -128,6 +130,7 @@ v3d_irq(int irq, void *arg)
+ v3d_job_update_stats(&v3d->csd_job->base, V3D_CSD);
+ trace_v3d_csd_irq(&v3d->drm, fence->seqno);
+ dma_fence_signal(&fence->base);
++ v3d->csd_job = NULL;
+ status = IRQ_HANDLED;
+ }
+
+@@ -165,6 +168,7 @@ v3d_hub_irq(int irq, void *arg)
+ v3d_job_update_stats(&v3d->tfu_job->base, V3D_TFU);
+ trace_v3d_tfu_irq(&v3d->drm, fence->seqno);
+ dma_fence_signal(&fence->base);
++ v3d->tfu_job = NULL;
+ status = IRQ_HANDLED;
+ }
+
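
Each v3d IRQ hunk clears the per-queue in-flight job pointer right after signalling the job's fence, presumably so that later consumers of these pointers (timeout and reset handling, for instance) cannot act on a job that has already completed and may be freed. The invariant, shown for one queue:

        /* v3d->bin_job (and render/csd/tfu alike) is non-NULL only while
         * that job is actually in flight */
        v3d_job_update_stats(&v3d->bin_job->base, V3D_BIN);
        dma_fence_signal(&fence->base);
        v3d->bin_job = NULL;    /* completed: later paths must not touch it */
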
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+index a0e433fbcba67c..183cda50094cb7 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
+@@ -443,7 +443,8 @@ static int vmw_bo_init(struct vmw_private *dev_priv,
+
+ if (params->pin)
+ ttm_bo_pin(&vmw_bo->tbo);
+- ttm_bo_unreserve(&vmw_bo->tbo);
++ if (!params->keep_resv)
++ ttm_bo_unreserve(&vmw_bo->tbo);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
+index 43b5439ec9f760..c21ba7ff773682 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
+@@ -56,8 +56,9 @@ struct vmw_bo_params {
+ u32 domain;
+ u32 busy_domain;
+ enum ttm_bo_type bo_type;
+- size_t size;
+ bool pin;
++ bool keep_resv;
++ size_t size;
+ struct dma_resv *resv;
+ struct sg_table *sg;
+ };
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+index 2825dd3149ed5c..2e84e1029732d3 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -401,7 +401,8 @@ static int vmw_dummy_query_bo_create(struct vmw_private *dev_priv)
+ .busy_domain = VMW_BO_DOMAIN_SYS,
+ .bo_type = ttm_bo_type_kernel,
+ .size = PAGE_SIZE,
+- .pin = true
++ .pin = true,
++ .keep_resv = true,
+ };
+
+ /*
+@@ -413,10 +414,6 @@ static int vmw_dummy_query_bo_create(struct vmw_private *dev_priv)
+ if (unlikely(ret != 0))
+ return ret;
+
+- ret = ttm_bo_reserve(&vbo->tbo, false, true, NULL);
+- BUG_ON(ret != 0);
+- vmw_bo_pin_reserved(vbo, true);
+-
+ ret = ttm_bo_kmap(&vbo->tbo, 0, 1, &map);
+ if (likely(ret == 0)) {
+ result = ttm_kmap_obj_virtual(&map, &dummy);
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
+index b9857f37ca1ac6..ed5015ced3920a 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
+@@ -206,6 +206,7 @@ struct drm_gem_object *vmw_prime_import_sg_table(struct drm_device *dev,
+ .bo_type = ttm_bo_type_sg,
+ .size = attach->dmabuf->size,
+ .pin = false,
++ .keep_resv = true,
+ .resv = attach->dmabuf->resv,
+ .sg = table,
+
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+index 10d596cb4b4029..5f99f7437ae614 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
+@@ -750,6 +750,7 @@ vmw_du_cursor_plane_atomic_update(struct drm_plane *plane,
+ struct vmw_plane_state *old_vps = vmw_plane_state_to_vps(old_state);
+ struct vmw_bo *old_bo = NULL;
+ struct vmw_bo *new_bo = NULL;
++ struct ww_acquire_ctx ctx;
+ s32 hotspot_x, hotspot_y;
+ int ret;
+
+@@ -769,9 +770,11 @@ vmw_du_cursor_plane_atomic_update(struct drm_plane *plane,
+ if (du->cursor_surface)
+ du->cursor_age = du->cursor_surface->snooper.age;
+
++ ww_acquire_init(&ctx, &reservation_ww_class);
++
+ if (!vmw_user_object_is_null(&old_vps->uo)) {
+ old_bo = vmw_user_object_buffer(&old_vps->uo);
+- ret = ttm_bo_reserve(&old_bo->tbo, false, false, NULL);
++ ret = ttm_bo_reserve(&old_bo->tbo, false, false, &ctx);
+ if (ret != 0)
+ return;
+ }
+@@ -779,9 +782,14 @@ vmw_du_cursor_plane_atomic_update(struct drm_plane *plane,
+ if (!vmw_user_object_is_null(&vps->uo)) {
+ new_bo = vmw_user_object_buffer(&vps->uo);
+ if (old_bo != new_bo) {
+- ret = ttm_bo_reserve(&new_bo->tbo, false, false, NULL);
+- if (ret != 0)
++ ret = ttm_bo_reserve(&new_bo->tbo, false, false, &ctx);
++ if (ret != 0) {
++ if (old_bo) {
++ ttm_bo_unreserve(&old_bo->tbo);
++ ww_acquire_fini(&ctx);
++ }
+ return;
++ }
+ } else {
+ new_bo = NULL;
+ }
+@@ -803,10 +811,12 @@ vmw_du_cursor_plane_atomic_update(struct drm_plane *plane,
+ hotspot_x, hotspot_y);
+ }
+
+- if (old_bo)
+- ttm_bo_unreserve(&old_bo->tbo);
+ if (new_bo)
+ ttm_bo_unreserve(&new_bo->tbo);
++ if (old_bo)
++ ttm_bo_unreserve(&old_bo->tbo);
++
++ ww_acquire_fini(&ctx);
+
+ du->cursor_x = new_state->crtc_x + du->set_gui_x;
+ du->cursor_y = new_state->crtc_y + du->set_gui_y;
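
The cursor update now reserves the old and new buffer objects under a single wound/wait acquire context instead of two independent ttm_bo_reserve() calls, which lets the ww_mutex machinery order the two reservations consistently across concurrent updaters. The general shape, as a sketch with the -EDEADLK backoff path elided; 'a' and 'b' stand for any two TTM-backed objects:

        #include <drm/ttm/ttm_bo.h>
        #include <linux/dma-resv.h>

        static int reserve_pair(struct ttm_buffer_object *a,
                                struct ttm_buffer_object *b)
        {
                struct ww_acquire_ctx ctx;
                int ret;

                ww_acquire_init(&ctx, &reservation_ww_class);

                ret = ttm_bo_reserve(a, false, false, &ctx);
                if (ret)
                        goto out_fini;

                ret = ttm_bo_reserve(b, false, false, &ctx);
                if (ret)
                        goto out_unreserve_a;

                /* ... operate on both objects ... */

                ttm_bo_unreserve(b);
        out_unreserve_a:
                ttm_bo_unreserve(a);
        out_fini:
                ww_acquire_fini(&ctx);
                return ret;
        }
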
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
+index a01ca3226d0af8..7fb1c88bcc475f 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
+@@ -896,7 +896,8 @@ int vmw_compat_shader_add(struct vmw_private *dev_priv,
+ .busy_domain = VMW_BO_DOMAIN_SYS,
+ .bo_type = ttm_bo_type_device,
+ .size = size,
+- .pin = true
++ .pin = true,
++ .keep_resv = true,
+ };
+
+ if (!vmw_shader_id_ok(user_key, shader_type))
+@@ -906,10 +907,6 @@ int vmw_compat_shader_add(struct vmw_private *dev_priv,
+ if (unlikely(ret != 0))
+ goto out;
+
+- ret = ttm_bo_reserve(&buf->tbo, false, true, NULL);
+- if (unlikely(ret != 0))
+- goto no_reserve;
+-
+ /* Map and copy shader bytecode. */
+ ret = ttm_bo_kmap(&buf->tbo, 0, PFN_UP(size), &map);
+ if (unlikely(ret != 0)) {
+diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+index 621d98b376bbbc..5553892d7c3e0d 100644
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
+@@ -572,15 +572,14 @@ int vmw_bo_create_and_populate(struct vmw_private *dev_priv,
+ .busy_domain = domain,
+ .bo_type = ttm_bo_type_kernel,
+ .size = bo_size,
+- .pin = true
++ .pin = true,
++ .keep_resv = true,
+ };
+
+ ret = vmw_bo_create(dev_priv, &bo_params, &vbo);
+ if (unlikely(ret != 0))
+ return ret;
+
+- ret = ttm_bo_reserve(&vbo->tbo, false, true, NULL);
+- BUG_ON(ret != 0);
+ ret = vmw_ttm_populate(vbo->tbo.bdev, vbo->tbo.ttm, &ctx);
+ if (likely(ret == 0)) {
+ struct vmw_ttm_tt *vmw_tt =
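
The vmwgfx call sites above used to create a buffer and immediately re-reserve it (two of them with a BUG_ON on failure); the new keep_resv flag asks vmw_bo_init() to hand the object back still reserved, removing the unlock/relock window between creation and first use. Call-site shape after the change, condensed from the hunks:

        struct vmw_bo_params params = {
                .domain      = VMW_BO_DOMAIN_SYS,
                .busy_domain = VMW_BO_DOMAIN_SYS,
                .bo_type     = ttm_bo_type_kernel,
                .size        = PAGE_SIZE,
                .pin         = true,
                .keep_resv   = true,    /* returned object is still reserved */
        };
        struct vmw_bo *vbo;
        int ret = vmw_bo_create(dev_priv, &params, &vbo);

        if (ret)
                return ret;

        /* ... kmap/populate while reserved ... */
        ttm_bo_unreserve(&vbo->tbo);    /* caller drops the reservation */
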
+diff --git a/drivers/gpu/drm/xe/xe_hw_engine.c b/drivers/gpu/drm/xe/xe_hw_engine.c
+index 547919e8ce9e45..b11bc0f00dfda1 100644
+--- a/drivers/gpu/drm/xe/xe_hw_engine.c
++++ b/drivers/gpu/drm/xe/xe_hw_engine.c
+@@ -417,7 +417,7 @@ hw_engine_setup_default_state(struct xe_hw_engine *hwe)
+ * Bspec: 72161
+ */
+ const u8 mocs_write_idx = gt->mocs.uc_index;
+- const u8 mocs_read_idx = hwe->class == XE_ENGINE_CLASS_COMPUTE &&
++ const u8 mocs_read_idx = hwe->class == XE_ENGINE_CLASS_COMPUTE && IS_DGFX(xe) &&
+ (GRAPHICS_VER(xe) >= 20 || xe->info.platform == XE_PVC) ?
+ gt->mocs.wb_index : gt->mocs.uc_index;
+ u32 ring_cmd_cctl_val = REG_FIELD_PREP(CMD_CCTL_WRITE_OVERRIDE_MASK, mocs_write_idx) |
+diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
+index 78823f53d2905d..6fc00d63b2857f 100644
+--- a/drivers/gpu/drm/xe/xe_oa.c
++++ b/drivers/gpu/drm/xe/xe_oa.c
+@@ -1980,6 +1980,7 @@ static const struct xe_mmio_range xe2_oa_mux_regs[] = {
+ { .start = 0x5194, .end = 0x5194 }, /* SYS_MEM_LAT_MEASURE_MERTF_GRP_3D */
+ { .start = 0x8704, .end = 0x8704 }, /* LMEM_LAT_MEASURE_MCFG_GRP */
+ { .start = 0xB1BC, .end = 0xB1BC }, /* L3_BANK_LAT_MEASURE_LBCF_GFX */
++ { .start = 0xD0E0, .end = 0xD0F4 }, /* VISACTL */
+ { .start = 0xE18C, .end = 0xE18C }, /* SAMPLER_MODE */
+ { .start = 0xE590, .end = 0xE590 }, /* TDL_LSC_LAT_MEASURE_TDL_GFX */
+ { .start = 0x13000, .end = 0x137FC }, /* PES_0_PESL0 - PES_63_UPPER_PESL3 */
+diff --git a/drivers/hwmon/ltc2991.c b/drivers/hwmon/ltc2991.c
+index 7ca139e4b6aff0..6d5d4cb846daf3 100644
+--- a/drivers/hwmon/ltc2991.c
++++ b/drivers/hwmon/ltc2991.c
+@@ -125,7 +125,7 @@ static int ltc2991_get_curr(struct ltc2991_state *st, u32 reg, int channel,
+
+ /* Vx-Vy, 19.075uV/LSB */
+ *val = DIV_ROUND_CLOSEST(sign_extend32(reg_val, 14) * 19075,
+- st->r_sense_uohm[channel]);
++ (s32)st->r_sense_uohm[channel]);
+
+ return 0;
+ }
+diff --git a/drivers/hwmon/tmp513.c b/drivers/hwmon/tmp513.c
+index 1c2cb12071b808..5acbfd7d088dd5 100644
+--- a/drivers/hwmon/tmp513.c
++++ b/drivers/hwmon/tmp513.c
+@@ -207,7 +207,8 @@ static int tmp51x_get_value(struct tmp51x_data *data, u8 reg, u8 pos,
+ *val = sign_extend32(regval,
+ reg == TMP51X_SHUNT_CURRENT_RESULT ?
+ 16 - tmp51x_get_pga_shift(data) : 15);
+- *val = DIV_ROUND_CLOSEST(*val * 10 * MILLI, data->shunt_uohms);
++ *val = DIV_ROUND_CLOSEST(*val * 10 * (long)MILLI, (long)data->shunt_uohms);
++
+ break;
+ case TMP51X_BUS_VOLTAGE_RESULT:
+ case TMP51X_BUS_VOLTAGE_H_LIMIT:
+@@ -223,7 +224,7 @@ static int tmp51x_get_value(struct tmp51x_data *data, u8 reg, u8 pos,
+ case TMP51X_BUS_CURRENT_RESULT:
+ // Current = (ShuntVoltage * CalibrationRegister) / 4096
+ *val = sign_extend32(regval, 15) * (long)data->curr_lsb_ua;
+- *val = DIV_ROUND_CLOSEST(*val, MILLI);
++ *val = DIV_ROUND_CLOSEST(*val, (long)MILLI);
+ break;
+ case TMP51X_LOCAL_TEMP_RESULT:
+ case TMP51X_REMOTE_TEMP_RESULT_1:
+@@ -263,7 +264,7 @@ static int tmp51x_set_value(struct tmp51x_data *data, u8 reg, long val)
+ * The user enter current value and we convert it to
+ * voltage. 1lsb = 10uV
+ */
+- val = DIV_ROUND_CLOSEST(val * data->shunt_uohms, 10 * MILLI);
++ val = DIV_ROUND_CLOSEST(val * (long)data->shunt_uohms, 10 * (long)MILLI);
+ max_val = U16_MAX >> tmp51x_get_pga_shift(data);
+ regval = clamp_val(val, -max_val, max_val);
+ break;
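
Both hwmon fixes address the same C pitfall: DIV_ROUND_CLOSEST() divides using the operands' own types, and r_sense_uohm/shunt_uohms are unsigned, so a negative dividend from sign_extend32() was converted to a huge unsigned value before the division. Casting the divisor to a signed type keeps the division signed; a sketch of the ltc2991 case:

        #include <linux/math.h>         /* DIV_ROUND_CLOSEST() */
        #include <linux/types.h>

        static s32 scaled_reading(s32 raw, u32 r_sense_uohm)
        {
                /* without the cast, raw * 19075 is converted to unsigned
                 * for the division by a u32, corrupting negative readings */
                return DIV_ROUND_CLOSEST(raw * 19075, (s32)r_sense_uohm);
        }

The tmp513 hunks apply the same idea with (long) casts, since its intermediate values are long.
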
+diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
+index 9267df38c2d0a1..3991224148214a 100644
+--- a/drivers/i2c/busses/i2c-rcar.c
++++ b/drivers/i2c/busses/i2c-rcar.c
+@@ -130,6 +130,8 @@
+ #define ID_P_PM_BLOCKED BIT(31)
+ #define ID_P_MASK GENMASK(31, 27)
+
++#define ID_SLAVE_NACK BIT(0)
++
+ enum rcar_i2c_type {
+ I2C_RCAR_GEN1,
+ I2C_RCAR_GEN2,
+@@ -166,6 +168,7 @@ struct rcar_i2c_priv {
+ int irq;
+
+ struct i2c_client *host_notify_client;
++ u8 slave_flags;
+ };
+
+ #define rcar_i2c_priv_to_dev(p) ((p)->adap.dev.parent)
+@@ -655,6 +658,7 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
+ {
+ u32 ssr_raw, ssr_filtered;
+ u8 value;
++ int ret;
+
+ ssr_raw = rcar_i2c_read(priv, ICSSR) & 0xff;
+ ssr_filtered = ssr_raw & rcar_i2c_read(priv, ICSIER);
+@@ -670,7 +674,10 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
+ rcar_i2c_write(priv, ICRXTX, value);
+ rcar_i2c_write(priv, ICSIER, SDE | SSR | SAR);
+ } else {
+- i2c_slave_event(priv->slave, I2C_SLAVE_WRITE_REQUESTED, &value);
++ ret = i2c_slave_event(priv->slave, I2C_SLAVE_WRITE_REQUESTED, &value);
++ if (ret)
++ priv->slave_flags |= ID_SLAVE_NACK;
++
+ rcar_i2c_read(priv, ICRXTX); /* dummy read */
+ rcar_i2c_write(priv, ICSIER, SDR | SSR | SAR);
+ }
+@@ -683,18 +690,21 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
+ if (ssr_filtered & SSR) {
+ i2c_slave_event(priv->slave, I2C_SLAVE_STOP, &value);
+ rcar_i2c_write(priv, ICSCR, SIE | SDBS); /* clear our NACK */
++ priv->slave_flags &= ~ID_SLAVE_NACK;
+ rcar_i2c_write(priv, ICSIER, SAR);
+ rcar_i2c_write(priv, ICSSR, ~SSR & 0xff);
+ }
+
+ /* master wants to write to us */
+ if (ssr_filtered & SDR) {
+- int ret;
+-
+ value = rcar_i2c_read(priv, ICRXTX);
+ ret = i2c_slave_event(priv->slave, I2C_SLAVE_WRITE_RECEIVED, &value);
+- /* Send NACK in case of error */
+- rcar_i2c_write(priv, ICSCR, SIE | SDBS | (ret < 0 ? FNA : 0));
++ if (ret)
++ priv->slave_flags |= ID_SLAVE_NACK;
++
++ /* Send NACK in case of error, but it will come 1 byte late :( */
++ rcar_i2c_write(priv, ICSCR, SIE | SDBS |
++ (priv->slave_flags & ID_SLAVE_NACK ? FNA : 0));
+ rcar_i2c_write(priv, ICSSR, ~SDR & 0xff);
+ }
+
+diff --git a/drivers/i2c/i2c-atr.c b/drivers/i2c/i2c-atr.c
+index f21475ae592183..0d54d0b5e32731 100644
+--- a/drivers/i2c/i2c-atr.c
++++ b/drivers/i2c/i2c-atr.c
+@@ -412,7 +412,7 @@ static int i2c_atr_bus_notifier_call(struct notifier_block *nb,
+ dev_name(dev), ret);
+ break;
+
+- case BUS_NOTIFY_DEL_DEVICE:
++ case BUS_NOTIFY_REMOVED_DEVICE:
+ i2c_atr_detach_client(client->adapter, client);
+ break;
+
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index 7c810893bfa332..75d30861ffe21a 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -1562,6 +1562,7 @@ static int i2c_register_adapter(struct i2c_adapter *adap)
+ res = device_add(&adap->dev);
+ if (res) {
+ pr_err("adapter '%s': can't register device (%d)\n", adap->name, res);
++ put_device(&adap->dev);
+ goto out_list;
+ }
+
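
The i2c-core hunk follows the driver-core rule that once a struct device has been initialized, its memory is owned by the embedded kobject: if device_add() fails, the reference must be returned with put_device(), which ends up in the release() callback, rather than abandoning or freeing the object by hand. Generic shape, as a sketch:

        #include <linux/device.h>

        static int register_child(struct device *dev)
        {
                int err;

                device_initialize(dev);
                /* ... set name, bus type, parent ... */

                err = device_add(dev);
                if (err) {
                        /* drops the reference device_initialize() created;
                         * release() then frees the containing object */
                        put_device(dev);
                        return err;
                }

                return 0;
        }
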
+diff --git a/drivers/i2c/i2c-slave-testunit.c b/drivers/i2c/i2c-slave-testunit.c
+index 9fe3150378e863..7ae0c7902f670b 100644
+--- a/drivers/i2c/i2c-slave-testunit.c
++++ b/drivers/i2c/i2c-slave-testunit.c
+@@ -38,6 +38,7 @@ enum testunit_regs {
+
+ enum testunit_flags {
+ TU_FLAG_IN_PROCESS,
++ TU_FLAG_NACK,
+ };
+
+ struct testunit_data {
+@@ -90,8 +91,10 @@ static int i2c_slave_testunit_slave_cb(struct i2c_client *client,
+
+ switch (event) {
+ case I2C_SLAVE_WRITE_REQUESTED:
+- if (test_bit(TU_FLAG_IN_PROCESS, &tu->flags))
+- return -EBUSY;
++ if (test_bit(TU_FLAG_IN_PROCESS, &tu->flags) || test_bit(TU_FLAG_NACK, &tu->flags)) {
++ ret = -EBUSY;
++ break;
++ }
+
+ memset(tu->regs, 0, TU_NUM_REGS);
+ tu->reg_idx = 0;
+@@ -99,8 +102,10 @@ static int i2c_slave_testunit_slave_cb(struct i2c_client *client,
+ break;
+
+ case I2C_SLAVE_WRITE_RECEIVED:
+- if (test_bit(TU_FLAG_IN_PROCESS, &tu->flags))
+- return -EBUSY;
++ if (test_bit(TU_FLAG_IN_PROCESS, &tu->flags) || test_bit(TU_FLAG_NACK, &tu->flags)) {
++ ret = -EBUSY;
++ break;
++ }
+
+ if (tu->reg_idx < TU_NUM_REGS)
+ tu->regs[tu->reg_idx] = *val;
+@@ -129,6 +134,8 @@ static int i2c_slave_testunit_slave_cb(struct i2c_client *client,
+ * here because we still need them in the workqueue!
+ */
+ tu->reg_idx = 0;
++
++ clear_bit(TU_FLAG_NACK, &tu->flags);
+ break;
+
+ case I2C_SLAVE_READ_PROCESSED:
+@@ -151,6 +158,10 @@ static int i2c_slave_testunit_slave_cb(struct i2c_client *client,
+ break;
+ }
+
++ /* If an error occurred at any point, we NACK everything until the next STOP */
++ if (ret)
++ set_bit(TU_FLAG_NACK, &tu->flags);
++
+ return ret;
+ }
+
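
A subtlety in the two write-path checks above: test_bit() takes a single bit number, so an expression like TU_FLAG_IN_PROCESS | TU_FLAG_NACK (0 | 1) names bit 1 only and would silently ignore TU_FLAG_IN_PROCESS. Testing both flags needs one call per bit:

        if (test_bit(TU_FLAG_IN_PROCESS, &tu->flags) ||
            test_bit(TU_FLAG_NACK, &tu->flags)) {
                ret = -EBUSY;
                break;
        }
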
+diff --git a/drivers/i2c/muxes/i2c-demux-pinctrl.c b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+index 7e2686b606c04d..cec7f3447e1935 100644
+--- a/drivers/i2c/muxes/i2c-demux-pinctrl.c
++++ b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+@@ -261,7 +261,9 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
+ pm_runtime_no_callbacks(&pdev->dev);
+
+ /* switch to first parent as active master */
+- i2c_demux_activate_master(priv, 0);
++ err = i2c_demux_activate_master(priv, 0);
++ if (err)
++ goto err_rollback;
+
+ err = device_create_file(&pdev->dev, &dev_attr_available_masters);
+ if (err)
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index b20cffcc3e7d2d..14e434ff51edea 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -2269,6 +2269,7 @@ int bnxt_re_query_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
+ qp_attr->retry_cnt = qplib_qp->retry_cnt;
+ qp_attr->rnr_retry = qplib_qp->rnr_retry;
+ qp_attr->min_rnr_timer = qplib_qp->min_rnr_timer;
++ qp_attr->port_num = __to_ib_port_num(qplib_qp->port_id);
+ qp_attr->rq_psn = qplib_qp->rq.psn;
+ qp_attr->max_rd_atomic = qplib_qp->max_rd_atomic;
+ qp_attr->sq_psn = qplib_qp->sq.psn;
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.h b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+index b789e47ec97a85..9cd8f770d1b27e 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.h
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.h
+@@ -264,6 +264,10 @@ void bnxt_re_dealloc_ucontext(struct ib_ucontext *context);
+ int bnxt_re_mmap(struct ib_ucontext *context, struct vm_area_struct *vma);
+ void bnxt_re_mmap_free(struct rdma_user_mmap_entry *rdma_entry);
+
++static inline u32 __to_ib_port_num(u16 port_id)
++{
++ return (u32)port_id + 1;
++}
+
+ unsigned long bnxt_re_lock_cqs(struct bnxt_re_qp *qp);
+ void bnxt_re_unlock_cqs(struct bnxt_re_qp *qp, unsigned long flags);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 828e2f9808012b..613b5fc70e13ea 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -1479,6 +1479,7 @@ int bnxt_qplib_query_qp(struct bnxt_qplib_res *res, struct bnxt_qplib_qp *qp)
+ qp->dest_qpn = le32_to_cpu(sb->dest_qp_id);
+ memcpy(qp->smac, sb->src_mac, 6);
+ qp->vlan_id = le16_to_cpu(sb->vlan_pcp_vlan_dei_vlan_id);
++ qp->port_id = le16_to_cpu(sb->port_id);
+ bail:
+ dma_free_coherent(&rcfw->pdev->dev, sbuf.size,
+ sbuf.sb, sbuf.dma_addr);
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.h b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+index d8c71c024613bf..6f02954eb1429f 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.h
+@@ -298,6 +298,7 @@ struct bnxt_qplib_qp {
+ u32 dest_qpn;
+ u8 smac[6];
+ u16 vlan_id;
++ u16 port_id;
+ u8 nw_type;
+ struct bnxt_qplib_ah ah;
+
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index d9b6ec844cdda0..0d3a889b1905c7 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -1961,7 +1961,7 @@ static int its_irq_set_vcpu_affinity(struct irq_data *d, void *vcpu_info)
+ if (!is_v4(its_dev->its))
+ return -EINVAL;
+
+- guard(raw_spinlock_irq)(&its_dev->event_map.vlpi_lock);
++ guard(raw_spinlock)(&its_dev->event_map.vlpi_lock);
+
+ /* Unmap request? */
+ if (!info)
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index b0bfb61539c202..8fdee511bc0f2c 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -1522,7 +1522,7 @@ static int gic_retrigger(struct irq_data *data)
+ static int gic_cpu_pm_notifier(struct notifier_block *self,
+ unsigned long cmd, void *v)
+ {
+- if (cmd == CPU_PM_EXIT) {
++ if (cmd == CPU_PM_EXIT || cmd == CPU_PM_ENTER_FAILED) {
+ if (gic_dist_security_disabled())
+ gic_enable_redist(true);
+ gic_cpu_sys_reg_enable();
+diff --git a/drivers/irqchip/irqchip.c b/drivers/irqchip/irqchip.c
+index 1eeb0d0156ce9e..0ee7b6b71f5fa5 100644
+--- a/drivers/irqchip/irqchip.c
++++ b/drivers/irqchip/irqchip.c
+@@ -35,11 +35,10 @@ void __init irqchip_init(void)
+ int platform_irqchip_probe(struct platform_device *pdev)
+ {
+ struct device_node *np = pdev->dev.of_node;
+- struct device_node *par_np = of_irq_find_parent(np);
++ struct device_node *par_np __free(device_node) = of_irq_find_parent(np);
+ of_irq_init_cb_t irq_init_cb = of_device_get_match_data(&pdev->dev);
+
+ if (!irq_init_cb) {
+- of_node_put(par_np);
+ return -EINVAL;
+ }
+
+@@ -55,7 +54,6 @@ int platform_irqchip_probe(struct platform_device *pdev)
+ * interrupt controller can check for specific domains as necessary.
+ */
+ if (par_np && !irq_find_matching_host(par_np, DOMAIN_BUS_ANY)) {
+- of_node_put(par_np);
+ return -EPROBE_DEFER;
+ }
+
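
Both irqchip hunks lean on the scope-based helpers from <linux/cleanup.h>: guard(raw_spinlock)(...) in the ITS change takes the lock and releases it when the enclosing scope ends, and __free(device_node) in the probe path runs of_node_put() on every return, which is what allowed the explicit puts to be deleted. A sketch of the __free() pattern:

        #include <linux/cleanup.h>
        #include <linux/of.h>
        #include <linux/of_irq.h>
        #include <linux/platform_device.h>

        static int probe_with_parent(struct platform_device *pdev)
        {
                struct device_node *par_np __free(device_node) =
                        of_irq_find_parent(pdev->dev.of_node);

                if (!par_np)
                        return -ENODEV;         /* NULL: cleanup is a no-op */

                /* ... use par_np; no explicit of_node_put() on any path ... */

                return 0;                       /* of_node_put(par_np) runs here */
        }
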
+diff --git a/drivers/mtd/spi-nor/core.c b/drivers/mtd/spi-nor/core.c
+index 8c57df44c40fe8..9d6e85bf227b92 100644
+--- a/drivers/mtd/spi-nor/core.c
++++ b/drivers/mtd/spi-nor/core.c
+@@ -89,7 +89,7 @@ void spi_nor_spimem_setup_op(const struct spi_nor *nor,
+ op->addr.buswidth = spi_nor_get_protocol_addr_nbits(proto);
+
+ if (op->dummy.nbytes)
+- op->dummy.buswidth = spi_nor_get_protocol_data_nbits(proto);
++ op->dummy.buswidth = spi_nor_get_protocol_addr_nbits(proto);
+
+ if (op->data.nbytes)
+ op->data.buswidth = spi_nor_get_protocol_data_nbits(proto);
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+index 6a716337f48be1..268399dfcf22f0 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c
+@@ -923,7 +923,6 @@ static void xgbe_phy_free_phy_device(struct xgbe_prv_data *pdata)
+
+ static bool xgbe_phy_finisar_phy_quirks(struct xgbe_prv_data *pdata)
+ {
+- __ETHTOOL_DECLARE_LINK_MODE_MASK(supported) = { 0, };
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+ unsigned int phy_id = phy_data->phydev->phy_id;
+
+@@ -945,14 +944,7 @@ static bool xgbe_phy_finisar_phy_quirks(struct xgbe_prv_data *pdata)
+ phy_write(phy_data->phydev, 0x04, 0x0d01);
+ phy_write(phy_data->phydev, 0x00, 0x9140);
+
+- linkmode_set_bit_array(phy_10_100_features_array,
+- ARRAY_SIZE(phy_10_100_features_array),
+- supported);
+- linkmode_set_bit_array(phy_gbit_features_array,
+- ARRAY_SIZE(phy_gbit_features_array),
+- supported);
+-
+- linkmode_copy(phy_data->phydev->supported, supported);
++ linkmode_copy(phy_data->phydev->supported, PHY_GBIT_FEATURES);
+
+ phy_support_asym_pause(phy_data->phydev);
+
+@@ -964,7 +956,6 @@ static bool xgbe_phy_finisar_phy_quirks(struct xgbe_prv_data *pdata)
+
+ static bool xgbe_phy_belfuse_phy_quirks(struct xgbe_prv_data *pdata)
+ {
+- __ETHTOOL_DECLARE_LINK_MODE_MASK(supported) = { 0, };
+ struct xgbe_phy_data *phy_data = pdata->phy_data;
+ struct xgbe_sfp_eeprom *sfp_eeprom = &phy_data->sfp_eeprom;
+ unsigned int phy_id = phy_data->phydev->phy_id;
+@@ -1028,13 +1019,7 @@ static bool xgbe_phy_belfuse_phy_quirks(struct xgbe_prv_data *pdata)
+ reg = phy_read(phy_data->phydev, 0x00);
+ phy_write(phy_data->phydev, 0x00, reg & ~0x00800);
+
+- linkmode_set_bit_array(phy_10_100_features_array,
+- ARRAY_SIZE(phy_10_100_features_array),
+- supported);
+- linkmode_set_bit_array(phy_gbit_features_array,
+- ARRAY_SIZE(phy_gbit_features_array),
+- supported);
+- linkmode_copy(phy_data->phydev->supported, supported);
++ linkmode_copy(phy_data->phydev->supported, PHY_GBIT_FEATURES);
+ phy_support_asym_pause(phy_data->phydev);
+
+ netif_dbg(pdata, drv, pdata->netdev,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index c255445e97f3c5..603e9c968c44bd 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -4558,7 +4558,7 @@ void bnxt_set_ring_params(struct bnxt *bp)
+ /* Changing allocation mode of RX rings.
+ * TODO: Update when extending xdp_rxq_info to support allocation modes.
+ */
+-int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
++static void __bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
+ {
+ struct net_device *dev = bp->dev;
+
+@@ -4579,15 +4579,30 @@ int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
+ bp->rx_skb_func = bnxt_rx_page_skb;
+ }
+ bp->rx_dir = DMA_BIDIRECTIONAL;
+- /* Disable LRO or GRO_HW */
+- netdev_update_features(dev);
+ } else {
+ dev->max_mtu = bp->max_mtu;
+ bp->flags &= ~BNXT_FLAG_RX_PAGE_MODE;
+ bp->rx_dir = DMA_FROM_DEVICE;
+ bp->rx_skb_func = bnxt_rx_skb;
+ }
+- return 0;
++}
++
++void bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode)
++{
++ __bnxt_set_rx_skb_mode(bp, page_mode);
++
++ if (!page_mode) {
++ int rx, tx;
++
++ bnxt_get_max_rings(bp, &rx, &tx, true);
++ if (rx > 1) {
++ bp->flags &= ~BNXT_FLAG_NO_AGG_RINGS;
++ bp->dev->hw_features |= NETIF_F_LRO;
++ }
++ }
++
++ /* Update LRO and GRO_HW availability */
++ netdev_update_features(bp->dev);
+ }
+
+ static void bnxt_free_vnic_attributes(struct bnxt *bp)
+@@ -15909,7 +15924,7 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ if (bp->max_fltr < BNXT_MAX_FLTR)
+ bp->max_fltr = BNXT_MAX_FLTR;
+ bnxt_init_l2_fltr_tbl(bp);
+- bnxt_set_rx_skb_mode(bp, false);
++ __bnxt_set_rx_skb_mode(bp, false);
+ bnxt_set_tpa_flags(bp);
+ bnxt_set_ring_params(bp);
+ bnxt_rdma_aux_device_init(bp);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 9e05704d94450e..bee645f58d0bde 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -2796,7 +2796,7 @@ void bnxt_reuse_rx_data(struct bnxt_rx_ring_info *rxr, u16 cons, void *data);
+ u32 bnxt_fw_health_readl(struct bnxt *bp, int reg_idx);
+ void bnxt_set_tpa_flags(struct bnxt *bp);
+ void bnxt_set_ring_params(struct bnxt *);
+-int bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode);
++void bnxt_set_rx_skb_mode(struct bnxt *bp, bool page_mode);
+ void bnxt_insert_usr_fltr(struct bnxt *bp, struct bnxt_filter_base *fltr);
+ void bnxt_del_one_usr_fltr(struct bnxt *bp, struct bnxt_filter_base *fltr);
+ int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp, unsigned long *bmap,
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+index f88b641533fcc5..dc51dce209d5f0 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+@@ -422,15 +422,8 @@ static int bnxt_xdp_set(struct bnxt *bp, struct bpf_prog *prog)
+ bnxt_set_rx_skb_mode(bp, true);
+ xdp_features_set_redirect_target(dev, true);
+ } else {
+- int rx, tx;
+-
+ xdp_features_clear_redirect_target(dev);
+ bnxt_set_rx_skb_mode(bp, false);
+- bnxt_get_max_rings(bp, &rx, &tx, true);
+- if (rx > 1) {
+- bp->flags &= ~BNXT_FLAG_NO_AGG_RINGS;
+- bp->dev->hw_features |= NETIF_F_LRO;
+- }
+ }
+ bp->tx_nr_rings_xdp = tx_xdp;
+ bp->tx_nr_rings = bp->tx_nr_rings_per_tc * tc + tx_xdp;
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 9d9fcec41488e3..49d1748e0c043d 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -1591,19 +1591,22 @@ static void fec_enet_tx(struct net_device *ndev, int budget)
+ fec_enet_tx_queue(ndev, i, budget);
+ }
+
+-static void fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
++static int fec_enet_update_cbd(struct fec_enet_priv_rx_q *rxq,
+ struct bufdesc *bdp, int index)
+ {
+ struct page *new_page;
+ dma_addr_t phys_addr;
+
+ new_page = page_pool_dev_alloc_pages(rxq->page_pool);
+- WARN_ON(!new_page);
+- rxq->rx_skb_info[index].page = new_page;
++ if (unlikely(!new_page))
++ return -ENOMEM;
+
++ rxq->rx_skb_info[index].page = new_page;
+ rxq->rx_skb_info[index].offset = FEC_ENET_XDP_HEADROOM;
+ phys_addr = page_pool_get_dma_addr(new_page) + FEC_ENET_XDP_HEADROOM;
+ bdp->cbd_bufaddr = cpu_to_fec32(phys_addr);
++
++ return 0;
+ }
+
+ static u32
+@@ -1698,6 +1701,7 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
+ int cpu = smp_processor_id();
+ struct xdp_buff xdp;
+ struct page *page;
++ __fec32 cbd_bufaddr;
+ u32 sub_len = 4;
+
+ #if !defined(CONFIG_M5272)
+@@ -1766,12 +1770,17 @@ fec_enet_rx_queue(struct net_device *ndev, int budget, u16 queue_id)
+
+ index = fec_enet_get_bd_index(bdp, &rxq->bd);
+ page = rxq->rx_skb_info[index].page;
++ cbd_bufaddr = bdp->cbd_bufaddr;
++ if (fec_enet_update_cbd(rxq, bdp, index)) {
++ ndev->stats.rx_dropped++;
++ goto rx_processing_done;
++ }
++
+ dma_sync_single_for_cpu(&fep->pdev->dev,
+- fec32_to_cpu(bdp->cbd_bufaddr),
++ fec32_to_cpu(cbd_bufaddr),
+ pkt_len,
+ DMA_FROM_DEVICE);
+ prefetch(page_address(page));
+- fec_enet_update_cbd(rxq, bdp, index);
+
+ if (xdp_prog) {
+ xdp_buff_clear_frags_flag(&xdp);
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index d6f80da30decf4..558cda577191d6 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -1047,5 +1047,10 @@ static inline void ice_clear_rdma_cap(struct ice_pf *pf)
+ clear_bit(ICE_FLAG_RDMA_ENA, pf->flags);
+ }
+
++static inline enum ice_phy_model ice_get_phy_model(const struct ice_hw *hw)
++{
++ return hw->ptp.phy_model;
++}
++
+ extern const struct xdp_metadata_ops ice_xdp_md_ops;
+ #endif /* _ICE_H_ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_adapter.c b/drivers/net/ethernet/intel/ice/ice_adapter.c
+index ad84d8ad49a63f..f3e195974a8efa 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adapter.c
++++ b/drivers/net/ethernet/intel/ice/ice_adapter.c
+@@ -40,11 +40,17 @@ static struct ice_adapter *ice_adapter_new(void)
+ spin_lock_init(&adapter->ptp_gltsyn_time_lock);
+ refcount_set(&adapter->refcount, 1);
+
++ mutex_init(&adapter->ports.lock);
++ INIT_LIST_HEAD(&adapter->ports.ports);
++
+ return adapter;
+ }
+
+ static void ice_adapter_free(struct ice_adapter *adapter)
+ {
++ WARN_ON(!list_empty(&adapter->ports.ports));
++ mutex_destroy(&adapter->ports.lock);
++
+ kfree(adapter);
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_adapter.h b/drivers/net/ethernet/intel/ice/ice_adapter.h
+index 9d11014ec02ff2..e233225848b384 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adapter.h
++++ b/drivers/net/ethernet/intel/ice/ice_adapter.h
+@@ -4,22 +4,42 @@
+ #ifndef _ICE_ADAPTER_H_
+ #define _ICE_ADAPTER_H_
+
++#include <linux/types.h>
+ #include <linux/spinlock_types.h>
+ #include <linux/refcount_types.h>
+
+ struct pci_dev;
++struct ice_pf;
++
++/**
++ * struct ice_port_list - data used to store the list of adapter ports
++ *
++ * This structure contains data used to maintain a list of adapter ports
++ *
++ * @ports: list of ports
++ * @lock: protect access to the ports list
++ */
++struct ice_port_list {
++ struct list_head ports;
++ /* To synchronize the ports list operations */
++ struct mutex lock;
++};
+
+ /**
+ * struct ice_adapter - PCI adapter resources shared across PFs
+ * @ptp_gltsyn_time_lock: Spinlock protecting access to the GLTSYN_TIME
+ * register of the PTP clock.
+ * @refcount: Reference count. struct ice_pf objects hold the references.
++ * @ctrl_pf: Control PF of the adapter
++ * @ports: Ports list
+ */
+ struct ice_adapter {
++ refcount_t refcount;
+ /* For access to the GLTSYN_TIME register */
+ spinlock_t ptp_gltsyn_time_lock;
+
+- refcount_t refcount;
++ struct ice_pf *ctrl_pf;
++ struct ice_port_list ports;
+ };
+
+ struct ice_adapter *ice_adapter_get(const struct pci_dev *pdev);
+diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+index 79a6edd0be0ec4..80f3dfd2712430 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+@@ -1648,6 +1648,7 @@ struct ice_aqc_get_port_options_elem {
+ #define ICE_AQC_PORT_OPT_MAX_LANE_25G 5
+ #define ICE_AQC_PORT_OPT_MAX_LANE_50G 6
+ #define ICE_AQC_PORT_OPT_MAX_LANE_100G 7
++#define ICE_AQC_PORT_OPT_MAX_LANE_200G 8
+
+ u8 global_scid[2];
+ u8 phy_scid[2];
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
+index f1324e25b2af1c..068a467de1d56d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.c
++++ b/drivers/net/ethernet/intel/ice/ice_common.c
+@@ -4074,6 +4074,57 @@ ice_aq_set_port_option(struct ice_hw *hw, u8 lport, u8 lport_valid,
+ return ice_aq_send_cmd(hw, &desc, NULL, 0, NULL);
+ }
+
++/**
++ * ice_get_phy_lane_number - Get PHY lane number for current adapter
++ * @hw: pointer to the hw struct
++ *
++ * Return: PHY lane number on success, negative error code otherwise.
++ */
++int ice_get_phy_lane_number(struct ice_hw *hw)
++{
++ struct ice_aqc_get_port_options_elem *options;
++ unsigned int lport = 0;
++ unsigned int lane;
++ int err;
++
++ options = kcalloc(ICE_AQC_PORT_OPT_MAX, sizeof(*options), GFP_KERNEL);
++ if (!options)
++ return -ENOMEM;
++
++ for (lane = 0; lane < ICE_MAX_PORT_PER_PCI_DEV; lane++) {
++ u8 options_count = ICE_AQC_PORT_OPT_MAX;
++ u8 speed, active_idx, pending_idx;
++ bool active_valid, pending_valid;
++
++ err = ice_aq_get_port_options(hw, options, &options_count, lane,
++ true, &active_idx, &active_valid,
++ &pending_idx, &pending_valid);
++ if (err)
++ goto err;
++
++ if (!active_valid)
++ continue;
++
++ speed = options[active_idx].max_lane_speed;
++ /* If we don't get speed for this lane, it's unoccupied */
++ if (speed > ICE_AQC_PORT_OPT_MAX_LANE_200G)
++ continue;
++
++ if (hw->pf_id == lport) {
++ kfree(options);
++ return lane;
++ }
++
++ lport++;
++ }
++
++ /* PHY lane not found */
++ err = -ENXIO;
++err:
++ kfree(options);
++ return err;
++}
++
+ /**
+ * ice_aq_sff_eeprom
+ * @hw: pointer to the HW struct
+diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h
+index 27208a60cece51..fe6f88cfd94866 100644
+--- a/drivers/net/ethernet/intel/ice/ice_common.h
++++ b/drivers/net/ethernet/intel/ice/ice_common.h
+@@ -193,6 +193,7 @@ ice_aq_get_port_options(struct ice_hw *hw,
+ int
+ ice_aq_set_port_option(struct ice_hw *hw, u8 lport, u8 lport_valid,
+ u8 new_option);
++int ice_get_phy_lane_number(struct ice_hw *hw);
+ int
+ ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr,
+ u16 mem_addr, u8 page, u8 set_page, u8 *data, u8 length,
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 8f2e758c394277..45eefe22fb5b73 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -1144,7 +1144,7 @@ ice_link_event(struct ice_pf *pf, struct ice_port_info *pi, bool link_up,
+ if (link_up == old_link && link_speed == old_link_speed)
+ return 0;
+
+- ice_ptp_link_change(pf, pf->hw.pf_id, link_up);
++ ice_ptp_link_change(pf, link_up);
+
+ if (ice_is_dcb_active(pf)) {
+ if (test_bit(ICE_FLAG_DCB_ENA, pf->flags))
+@@ -6744,7 +6744,7 @@ static int ice_up_complete(struct ice_vsi *vsi)
+ ice_print_link_msg(vsi, true);
+ netif_tx_start_all_queues(vsi->netdev);
+ netif_carrier_on(vsi->netdev);
+- ice_ptp_link_change(pf, pf->hw.pf_id, true);
++ ice_ptp_link_change(pf, true);
+ }
+
+ /* Perform an initial read of the statistics registers now to
+@@ -7214,7 +7214,7 @@ int ice_down(struct ice_vsi *vsi)
+
+ if (vsi->netdev) {
+ vlan_err = ice_vsi_del_vlan_zero(vsi);
+- ice_ptp_link_change(vsi->back, vsi->back->hw.pf_id, false);
++ ice_ptp_link_change(vsi->back, false);
+ netif_carrier_off(vsi->netdev);
+ netif_tx_disable(vsi->netdev);
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.c b/drivers/net/ethernet/intel/ice/ice_ptp.c
+index ef2e858f49bb0e..7c6f81beaee460 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.c
+@@ -16,6 +16,18 @@ static const struct ptp_pin_desc ice_pin_desc_e810t[] = {
+ { "U.FL2", UFL2, PTP_PF_NONE, 2, { 0, } },
+ };
+
++static struct ice_pf *ice_get_ctrl_pf(struct ice_pf *pf)
++{
++ return !pf->adapter ? NULL : pf->adapter->ctrl_pf;
++}
++
++static struct ice_ptp *ice_get_ctrl_ptp(struct ice_pf *pf)
++{
++ struct ice_pf *ctrl_pf = ice_get_ctrl_pf(pf);
++
++ return !ctrl_pf ? NULL : &ctrl_pf->ptp;
++}
++
+ /**
+ * ice_get_sma_config_e810t
+ * @hw: pointer to the hw struct
+@@ -800,8 +812,8 @@ static enum ice_tx_tstamp_work ice_ptp_tx_tstamp_owner(struct ice_pf *pf)
+ struct ice_ptp_port *port;
+ unsigned int i;
+
+- mutex_lock(&pf->ptp.ports_owner.lock);
+- list_for_each_entry(port, &pf->ptp.ports_owner.ports, list_member) {
++ mutex_lock(&pf->adapter->ports.lock);
++ list_for_each_entry(port, &pf->adapter->ports.ports, list_node) {
+ struct ice_ptp_tx *tx = &port->tx;
+
+ if (!tx || !tx->init)
+@@ -809,7 +821,7 @@ static enum ice_tx_tstamp_work ice_ptp_tx_tstamp_owner(struct ice_pf *pf)
+
+ ice_ptp_process_tx_tstamp(tx);
+ }
+- mutex_unlock(&pf->ptp.ports_owner.lock);
++ mutex_unlock(&pf->adapter->ports.lock);
+
+ for (i = 0; i < ICE_GET_QUAD_NUM(pf->hw.ptp.num_lports); i++) {
+ u64 tstamp_ready;
+@@ -974,7 +986,7 @@ ice_ptp_flush_all_tx_tracker(struct ice_pf *pf)
+ {
+ struct ice_ptp_port *port;
+
+- list_for_each_entry(port, &pf->ptp.ports_owner.ports, list_member)
++ list_for_each_entry(port, &pf->adapter->ports.ports, list_node)
+ ice_ptp_flush_tx_tracker(ptp_port_to_pf(port), &port->tx);
+ }
+
+@@ -1363,7 +1375,7 @@ ice_ptp_port_phy_stop(struct ice_ptp_port *ptp_port)
+
+ mutex_lock(&ptp_port->ps_lock);
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ err = ice_stop_phy_timer_eth56g(hw, port, true);
+ break;
+@@ -1409,7 +1421,7 @@ ice_ptp_port_phy_restart(struct ice_ptp_port *ptp_port)
+
+ mutex_lock(&ptp_port->ps_lock);
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ err = ice_start_phy_timer_eth56g(hw, port);
+ break;
+@@ -1454,10 +1466,9 @@ ice_ptp_port_phy_restart(struct ice_ptp_port *ptp_port)
+ /**
+ * ice_ptp_link_change - Reconfigure PTP after link status change
+ * @pf: Board private structure
+- * @port: Port for which the PHY start is set
+ * @linkup: Link is up or down
+ */
+-void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
++void ice_ptp_link_change(struct ice_pf *pf, bool linkup)
+ {
+ struct ice_ptp_port *ptp_port;
+ struct ice_hw *hw = &pf->hw;
+@@ -1465,14 +1476,7 @@ void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
+ if (pf->ptp.state != ICE_PTP_READY)
+ return;
+
+- if (WARN_ON_ONCE(port >= hw->ptp.num_lports))
+- return;
+-
+ ptp_port = &pf->ptp.port;
+- if (ice_is_e825c(hw) && hw->ptp.is_2x50g_muxed_topo)
+- port *= 2;
+- if (WARN_ON_ONCE(ptp_port->port_num != port))
+- return;
+
+ /* Update cached link status for this port immediately */
+ ptp_port->link_up = linkup;
+@@ -1480,8 +1484,7 @@ void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
+ /* Skip HW writes if reset is in progress */
+ if (pf->hw.reset_ongoing)
+ return;
+-
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_E810:
+ /* Do not reconfigure E810 PHY */
+ return;
+@@ -1514,7 +1517,7 @@ static int ice_ptp_cfg_phy_interrupt(struct ice_pf *pf, bool ena, u32 threshold)
+
+ ice_ptp_reset_ts_memory(hw);
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G: {
+ int port;
+
+@@ -1553,7 +1556,7 @@ static int ice_ptp_cfg_phy_interrupt(struct ice_pf *pf, bool ena, u32 threshold)
+ case ICE_PHY_UNSUP:
+ default:
+ dev_warn(dev, "%s: Unexpected PHY model %d\n", __func__,
+- hw->ptp.phy_model);
++ ice_get_phy_model(hw));
+ return -EOPNOTSUPP;
+ }
+ }
+@@ -1575,10 +1578,10 @@ static void ice_ptp_restart_all_phy(struct ice_pf *pf)
+ {
+ struct list_head *entry;
+
+- list_for_each(entry, &pf->ptp.ports_owner.ports) {
++ list_for_each(entry, &pf->adapter->ports.ports) {
+ struct ice_ptp_port *port = list_entry(entry,
+ struct ice_ptp_port,
+- list_member);
++ list_node);
+
+ if (port->link_up)
+ ice_ptp_port_phy_restart(port);
+@@ -2059,7 +2062,7 @@ ice_ptp_settime64(struct ptp_clock_info *info, const struct timespec64 *ts)
+ /* For Vernier mode on E82X, we need to recalibrate after new settime.
+ * Start with marking timestamps as invalid.
+ */
+- if (hw->ptp.phy_model == ICE_PHY_E82X) {
++ if (ice_get_phy_model(hw) == ICE_PHY_E82X) {
+ err = ice_ptp_clear_phy_offset_ready_e82x(hw);
+ if (err)
+ dev_warn(ice_pf_to_dev(pf), "Failed to mark timestamps as invalid before settime\n");
+@@ -2083,7 +2086,7 @@ ice_ptp_settime64(struct ptp_clock_info *info, const struct timespec64 *ts)
+ ice_ptp_enable_all_clkout(pf);
+
+ /* Recalibrate and re-enable timestamp blocks for E822/E823 */
+- if (hw->ptp.phy_model == ICE_PHY_E82X)
++ if (ice_get_phy_model(hw) == ICE_PHY_E82X)
+ ice_ptp_restart_all_phy(pf);
+ exit:
+ if (err) {
+@@ -2895,6 +2898,50 @@ void ice_ptp_rebuild(struct ice_pf *pf, enum ice_reset_req reset_type)
+ dev_err(ice_pf_to_dev(pf), "PTP reset failed %d\n", err);
+ }
+
++static bool ice_is_primary(struct ice_hw *hw)
++{
++ return ice_is_e825c(hw) && ice_is_dual(hw) ?
++ !!(hw->dev_caps.nac_topo.mode & ICE_NAC_TOPO_PRIMARY_M) : true;
++}
++
++static int ice_ptp_setup_adapter(struct ice_pf *pf)
++{
++ if (!ice_pf_src_tmr_owned(pf) || !ice_is_primary(&pf->hw))
++ return -EPERM;
++
++ pf->adapter->ctrl_pf = pf;
++
++ return 0;
++}
++
++static int ice_ptp_setup_pf(struct ice_pf *pf)
++{
++ struct ice_ptp *ctrl_ptp = ice_get_ctrl_ptp(pf);
++ struct ice_ptp *ptp = &pf->ptp;
++
++ if (WARN_ON(!ctrl_ptp) || ice_get_phy_model(&pf->hw) == ICE_PHY_UNSUP)
++ return -ENODEV;
++
++ INIT_LIST_HEAD(&ptp->port.list_node);
++ mutex_lock(&pf->adapter->ports.lock);
++
++ list_add(&ptp->port.list_node,
++ &pf->adapter->ports.ports);
++ mutex_unlock(&pf->adapter->ports.lock);
++
++ return 0;
++}
++
++static void ice_ptp_cleanup_pf(struct ice_pf *pf)
++{
++ struct ice_ptp *ptp = &pf->ptp;
++
++ if (ice_get_phy_model(&pf->hw) != ICE_PHY_UNSUP) {
++ mutex_lock(&pf->adapter->ports.lock);
++ list_del(&ptp->port.list_node);
++ mutex_unlock(&pf->adapter->ports.lock);
++ }
++}
+ /**
+ * ice_ptp_aux_dev_to_aux_pf - Get auxiliary PF handle for the auxiliary device
+ * @aux_dev: auxiliary device to get the auxiliary PF for
+@@ -2946,9 +2993,9 @@ static int ice_ptp_auxbus_probe(struct auxiliary_device *aux_dev,
+ if (WARN_ON(!owner_pf))
+ return -ENODEV;
+
+- INIT_LIST_HEAD(&aux_pf->ptp.port.list_member);
++ INIT_LIST_HEAD(&aux_pf->ptp.port.list_node);
+ mutex_lock(&owner_pf->ptp.ports_owner.lock);
+- list_add(&aux_pf->ptp.port.list_member,
++ list_add(&aux_pf->ptp.port.list_node,
+ &owner_pf->ptp.ports_owner.ports);
+ mutex_unlock(&owner_pf->ptp.ports_owner.lock);
+
+@@ -2965,7 +3012,7 @@ static void ice_ptp_auxbus_remove(struct auxiliary_device *aux_dev)
+ struct ice_pf *aux_pf = ice_ptp_aux_dev_to_aux_pf(aux_dev);
+
+ mutex_lock(&owner_pf->ptp.ports_owner.lock);
+- list_del(&aux_pf->ptp.port.list_member);
++ list_del(&aux_pf->ptp.port.list_node);
+ mutex_unlock(&owner_pf->ptp.ports_owner.lock);
+ }
+
+@@ -3025,7 +3072,7 @@ ice_ptp_auxbus_create_id_table(struct ice_pf *pf, const char *name)
+ * ice_ptp_register_auxbus_driver - Register PTP auxiliary bus driver
+ * @pf: Board private structure
+ */
+-static int ice_ptp_register_auxbus_driver(struct ice_pf *pf)
++static int __always_unused ice_ptp_register_auxbus_driver(struct ice_pf *pf)
+ {
+ struct auxiliary_driver *aux_driver;
+ struct ice_ptp *ptp;
+@@ -3068,7 +3115,7 @@ static int ice_ptp_register_auxbus_driver(struct ice_pf *pf)
+ * ice_ptp_unregister_auxbus_driver - Unregister PTP auxiliary bus driver
+ * @pf: Board private structure
+ */
+-static void ice_ptp_unregister_auxbus_driver(struct ice_pf *pf)
++static void __always_unused ice_ptp_unregister_auxbus_driver(struct ice_pf *pf)
+ {
+ struct auxiliary_driver *aux_driver = &pf->ptp.ports_owner.aux_driver;
+
+@@ -3087,15 +3134,12 @@ static void ice_ptp_unregister_auxbus_driver(struct ice_pf *pf)
+ */
+ int ice_ptp_clock_index(struct ice_pf *pf)
+ {
+- struct auxiliary_device *aux_dev;
+- struct ice_pf *owner_pf;
++ struct ice_ptp *ctrl_ptp = ice_get_ctrl_ptp(pf);
+ struct ptp_clock *clock;
+
+- aux_dev = &pf->ptp.port.aux_dev;
+- owner_pf = ice_ptp_aux_dev_to_owner_pf(aux_dev);
+- if (!owner_pf)
++ if (!ctrl_ptp)
+ return -1;
+- clock = owner_pf->ptp.clock;
++ clock = ctrl_ptp->clock;
+
+ return clock ? ptp_clock_index(clock) : -1;
+ }
+@@ -3155,15 +3199,7 @@ static int ice_ptp_init_owner(struct ice_pf *pf)
+ if (err)
+ goto err_clk;
+
+- err = ice_ptp_register_auxbus_driver(pf);
+- if (err) {
+- dev_err(ice_pf_to_dev(pf), "Failed to register PTP auxbus driver");
+- goto err_aux;
+- }
+-
+ return 0;
+-err_aux:
+- ptp_clock_unregister(pf->ptp.clock);
+ err_clk:
+ pf->ptp.clock = NULL;
+ err_exit:
+@@ -3209,7 +3245,7 @@ static int ice_ptp_init_port(struct ice_pf *pf, struct ice_ptp_port *ptp_port)
+
+ mutex_init(&ptp_port->ps_lock);
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ return ice_ptp_init_tx_eth56g(pf, &ptp_port->tx,
+ ptp_port->port_num);
+@@ -3239,7 +3275,7 @@ static void ice_ptp_release_auxbus_device(struct device *dev)
+ * ice_ptp_create_auxbus_device - Create PTP auxiliary bus device
+ * @pf: Board private structure
+ */
+-static int ice_ptp_create_auxbus_device(struct ice_pf *pf)
++static __always_unused int ice_ptp_create_auxbus_device(struct ice_pf *pf)
+ {
+ struct auxiliary_device *aux_dev;
+ struct ice_ptp *ptp;
+@@ -3286,7 +3322,7 @@ static int ice_ptp_create_auxbus_device(struct ice_pf *pf)
+ * ice_ptp_remove_auxbus_device - Remove PTP auxiliary bus device
+ * @pf: Board private structure
+ */
+-static void ice_ptp_remove_auxbus_device(struct ice_pf *pf)
++static __always_unused void ice_ptp_remove_auxbus_device(struct ice_pf *pf)
+ {
+ struct auxiliary_device *aux_dev = &pf->ptp.port.aux_dev;
+
+@@ -3307,7 +3343,7 @@ static void ice_ptp_remove_auxbus_device(struct ice_pf *pf)
+ */
+ static void ice_ptp_init_tx_interrupt_mode(struct ice_pf *pf)
+ {
+- switch (pf->hw.ptp.phy_model) {
++ switch (ice_get_phy_model(&pf->hw)) {
+ case ICE_PHY_E82X:
+ /* E822 based PHY has the clock owner process the interrupt
+ * for all ports.
+@@ -3339,10 +3375,17 @@ void ice_ptp_init(struct ice_pf *pf)
+ {
+ struct ice_ptp *ptp = &pf->ptp;
+ struct ice_hw *hw = &pf->hw;
+- int err;
++ int lane_num, err;
+
+ ptp->state = ICE_PTP_INITIALIZING;
+
++ lane_num = ice_get_phy_lane_number(hw);
++ if (lane_num < 0) {
++ err = lane_num;
++ goto err_exit;
++ }
++
++ ptp->port.port_num = (u8)lane_num;
+ ice_ptp_init_hw(hw);
+
+ ice_ptp_init_tx_interrupt_mode(pf);
+@@ -3350,19 +3393,22 @@ void ice_ptp_init(struct ice_pf *pf)
+ /* If this function owns the clock hardware, it must allocate and
+ * configure the PTP clock device to represent it.
+ */
+- if (ice_pf_src_tmr_owned(pf)) {
++ if (ice_pf_src_tmr_owned(pf) && ice_is_primary(hw)) {
++ err = ice_ptp_setup_adapter(pf);
++ if (err)
++ goto err_exit;
+ err = ice_ptp_init_owner(pf);
+ if (err)
+- goto err;
++ goto err_exit;
+ }
+
+- ptp->port.port_num = hw->pf_id;
+- if (ice_is_e825c(hw) && hw->ptp.is_2x50g_muxed_topo)
+- ptp->port.port_num = hw->pf_id * 2;
++ err = ice_ptp_setup_pf(pf);
++ if (err)
++ goto err_exit;
+
+ err = ice_ptp_init_port(pf, &ptp->port);
+ if (err)
+- goto err;
++ goto err_exit;
+
+ /* Start the PHY timestamping block */
+ ice_ptp_reset_phy_timestamping(pf);
+@@ -3370,20 +3416,16 @@ void ice_ptp_init(struct ice_pf *pf)
+ /* Configure initial Tx interrupt settings */
+ ice_ptp_cfg_tx_interrupt(pf);
+
+- err = ice_ptp_create_auxbus_device(pf);
+- if (err)
+- goto err;
+-
+ ptp->state = ICE_PTP_READY;
+
+ err = ice_ptp_init_work(pf, ptp);
+ if (err)
+- goto err;
++ goto err_exit;
+
+ dev_info(ice_pf_to_dev(pf), "PTP init successful\n");
+ return;
+
+-err:
++err_exit:
+ /* If we registered a PTP clock, release it */
+ if (pf->ptp.clock) {
+ ptp_clock_unregister(ptp->clock);
+@@ -3410,7 +3452,7 @@ void ice_ptp_release(struct ice_pf *pf)
+ /* Disable timestamping for both Tx and Rx */
+ ice_ptp_disable_timestamp_mode(pf);
+
+- ice_ptp_remove_auxbus_device(pf);
++ ice_ptp_cleanup_pf(pf);
+
+ ice_ptp_release_tx_tracker(pf, &pf->ptp.port.tx);
+
+@@ -3425,9 +3467,6 @@ void ice_ptp_release(struct ice_pf *pf)
+ pf->ptp.kworker = NULL;
+ }
+
+- if (ice_pf_src_tmr_owned(pf))
+- ice_ptp_unregister_auxbus_driver(pf);
+-
+ if (!pf->ptp.clock)
+ return;
+
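
The net effect of the ice_ptp.c changes is to retire the PTP auxiliary-bus plumbing (the auxbus helpers are kept but marked __always_unused) in favour of a per-adapter port list: the clock-owning primary PF records itself as adapter->ctrl_pf, and every PF links its port into adapter->ports under the list mutex. Reduced shape of the two sides, as a sketch; process_port() is a hypothetical stand-in for the timestamp work:

        struct ice_ptp_port *port;

        /* each PF links itself in at init time */
        mutex_lock(&pf->adapter->ports.lock);
        list_add(&pf->ptp.port.list_node, &pf->adapter->ports.ports);
        mutex_unlock(&pf->adapter->ports.lock);

        /* the control PF walks every port under the same lock */
        mutex_lock(&pf->adapter->ports.lock);
        list_for_each_entry(port, &pf->adapter->ports.ports, list_node)
                process_port(port);             /* hypothetical stand-in */
        mutex_unlock(&pf->adapter->ports.lock);
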
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp.h b/drivers/net/ethernet/intel/ice/ice_ptp.h
+index 2db2257a0fb2f3..f1cfa6aa4e76bf 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp.h
+@@ -169,7 +169,7 @@ struct ice_ptp_tx {
+ * ready for PTP functionality. It is used to track the port initialization
+ * and determine when the port's PHY offset is valid.
+ *
+- * @list_member: list member structure of auxiliary device
++ * @list_node: list member structure
+ * @tx: Tx timestamp tracking for this port
+ * @aux_dev: auxiliary device associated with this port
+ * @ov_work: delayed work task for tracking when PHY offset is valid
+@@ -179,7 +179,7 @@ struct ice_ptp_tx {
+ * @port_num: the port number this structure represents
+ */
+ struct ice_ptp_port {
+- struct list_head list_member;
++ struct list_head list_node;
+ struct ice_ptp_tx tx;
+ struct auxiliary_device aux_dev;
+ struct kthread_delayed_work ov_work;
+@@ -205,6 +205,7 @@ enum ice_ptp_tx_interrupt {
+ * @ports: list of ports handled by this port owner
+ * @lock: protect access to ports list
+ */
++
+ struct ice_ptp_port_owner {
+ struct auxiliary_driver aux_driver;
+ struct list_head ports;
+@@ -331,7 +332,7 @@ void ice_ptp_prepare_for_reset(struct ice_pf *pf,
+ enum ice_reset_req reset_type);
+ void ice_ptp_init(struct ice_pf *pf);
+ void ice_ptp_release(struct ice_pf *pf);
+-void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup);
++void ice_ptp_link_change(struct ice_pf *pf, bool linkup);
+ #else /* IS_ENABLED(CONFIG_PTP_1588_CLOCK) */
+ static inline int ice_ptp_set_ts_config(struct ice_pf *pf, struct ifreq *ifr)
+ {
+@@ -379,7 +380,7 @@ static inline void ice_ptp_prepare_for_reset(struct ice_pf *pf,
+ }
+ static inline void ice_ptp_init(struct ice_pf *pf) { }
+ static inline void ice_ptp_release(struct ice_pf *pf) { }
+-static inline void ice_ptp_link_change(struct ice_pf *pf, u8 port, bool linkup)
++static inline void ice_ptp_link_change(struct ice_pf *pf, bool linkup)
+ {
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_consts.h b/drivers/net/ethernet/intel/ice/ice_ptp_consts.h
+index 3005dd252a1026..bdb1020147d1c2 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_consts.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_consts.h
+@@ -131,7 +131,7 @@ struct ice_eth56g_mac_reg_cfg eth56g_mac_cfg[NUM_ICE_ETH56G_LNK_SPD] = {
+ .rx_offset = {
+ .serdes = 0xffffeb27, /* -10.42424 */
+ .no_fec = 0xffffcccd, /* -25.6 */
+- .fc = 0xfffe0014, /* -255.96 */
++ .fc = 0xfffc557b, /* -469.26 */
+ .sfd = 0x4a4, /* 2.32 */
+ .bs_ds = 0x32 /* 0.0969697 */
+ }
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+index 3816e45b6ab44a..7190fde16c8681 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.c
+@@ -804,7 +804,7 @@ static u32 ice_ptp_tmr_cmd_to_port_reg(struct ice_hw *hw,
+ /* Certain hardware families share the same register values for the
+ * port register and source timer register.
+ */
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_E810:
+ return ice_ptp_tmr_cmd_to_src_reg(hw, cmd) & TS_CMD_MASK_E810;
+ default:
+@@ -877,31 +877,46 @@ static void ice_ptp_exec_tmr_cmd(struct ice_hw *hw)
+ * The following functions operate on devices with the ETH 56G PHY.
+ */
+
++/**
++ * ice_ptp_get_dest_dev_e825 - get destination PHY for given port number
++ * @hw: pointer to the HW struct
++ * @port: destination port
++ *
++ * Return: destination sideband queue PHY device.
++ */
++static enum ice_sbq_msg_dev ice_ptp_get_dest_dev_e825(struct ice_hw *hw,
++ u8 port)
++{
++ /* On a single complex E825, PHY 0 is always destination device phy_0
++ * and PHY 1 is phy_0_peer.
++ */
++ if (port >= hw->ptp.ports_per_phy)
++ return eth56g_phy_1;
++ else
++ return eth56g_phy_0;
++}
++
+ /**
+ * ice_write_phy_eth56g - Write a PHY port register
+ * @hw: pointer to the HW struct
+- * @phy_idx: PHY index
++ * @port: destination port
+ * @addr: PHY register address
+ * @val: Value to write
+ *
+ * Return: 0 on success, other error codes when failed to write to PHY
+ */
+-static int ice_write_phy_eth56g(struct ice_hw *hw, u8 phy_idx, u32 addr,
+- u32 val)
++static int ice_write_phy_eth56g(struct ice_hw *hw, u8 port, u32 addr, u32 val)
+ {
+- struct ice_sbq_msg_input phy_msg;
++ struct ice_sbq_msg_input msg = {
++ .dest_dev = ice_ptp_get_dest_dev_e825(hw, port),
++ .opcode = ice_sbq_msg_wr,
++ .msg_addr_low = lower_16_bits(addr),
++ .msg_addr_high = upper_16_bits(addr),
++ .data = val
++ };
+ int err;
+
+- phy_msg.opcode = ice_sbq_msg_wr;
+-
+- phy_msg.msg_addr_low = lower_16_bits(addr);
+- phy_msg.msg_addr_high = upper_16_bits(addr);
+-
+- phy_msg.data = val;
+- phy_msg.dest_dev = hw->ptp.phy.eth56g.phy_addr[phy_idx];
+-
+- err = ice_sbq_rw_reg(hw, &phy_msg, ICE_AQ_FLAG_RD);
+-
++ err = ice_sbq_rw_reg(hw, &msg, ICE_AQ_FLAG_RD);
+ if (err)
+ ice_debug(hw, ICE_DBG_PTP, "PTP failed to send msg to phy %d\n",
+ err);
+@@ -912,41 +927,36 @@ static int ice_write_phy_eth56g(struct ice_hw *hw, u8 phy_idx, u32 addr,
+ /**
+ * ice_read_phy_eth56g - Read a PHY port register
+ * @hw: pointer to the HW struct
+- * @phy_idx: PHY index
++ * @port: destination port
+ * @addr: PHY register address
+ * @val: Value to read
+ *
+ * Return: 0 on success, other error codes when failed to read from PHY
+ */
+-static int ice_read_phy_eth56g(struct ice_hw *hw, u8 phy_idx, u32 addr,
+- u32 *val)
++static int ice_read_phy_eth56g(struct ice_hw *hw, u8 port, u32 addr, u32 *val)
+ {
+- struct ice_sbq_msg_input phy_msg;
++ struct ice_sbq_msg_input msg = {
++ .dest_dev = ice_ptp_get_dest_dev_e825(hw, port),
++ .opcode = ice_sbq_msg_rd,
++ .msg_addr_low = lower_16_bits(addr),
++ .msg_addr_high = upper_16_bits(addr)
++ };
+ int err;
+
+- phy_msg.opcode = ice_sbq_msg_rd;
+-
+- phy_msg.msg_addr_low = lower_16_bits(addr);
+- phy_msg.msg_addr_high = upper_16_bits(addr);
+-
+- phy_msg.data = 0;
+- phy_msg.dest_dev = hw->ptp.phy.eth56g.phy_addr[phy_idx];
+-
+- err = ice_sbq_rw_reg(hw, &phy_msg, ICE_AQ_FLAG_RD);
+- if (err) {
++ err = ice_sbq_rw_reg(hw, &msg, ICE_AQ_FLAG_RD);
++ if (err)
+ ice_debug(hw, ICE_DBG_PTP, "PTP failed to send msg to phy %d\n",
+ err);
+- return err;
+- }
+-
+- *val = phy_msg.data;
++ else
++ *val = msg.data;
+
+- return 0;
++ return err;
+ }
+
+ /**
+ * ice_phy_res_address_eth56g - Calculate a PHY port register address
+- * @port: Port number to be written
++ * @hw: pointer to the HW struct
++ * @lane: Lane number to be written
+ * @res_type: resource type (register/memory)
+ * @offset: Offset from PHY port register base
+ * @addr: The result address
+@@ -955,17 +965,19 @@ static int ice_read_phy_eth56g(struct ice_hw *hw, u8 phy_idx, u32 addr,
+ * * %0 - success
+ * * %EINVAL - invalid port number or resource type
+ */
+-static int ice_phy_res_address_eth56g(u8 port, enum eth56g_res_type res_type,
+- u32 offset, u32 *addr)
++static int ice_phy_res_address_eth56g(struct ice_hw *hw, u8 lane,
++ enum eth56g_res_type res_type,
++ u32 offset,
++ u32 *addr)
+ {
+- u8 lane = port % ICE_PORTS_PER_QUAD;
+- u8 phy = ICE_GET_QUAD_NUM(port);
+-
+ if (res_type >= NUM_ETH56G_PHY_RES)
+ return -EINVAL;
+
+- *addr = eth56g_phy_res[res_type].base[phy] +
++ /* Lanes 4..7 are in fact 0..3 on a second PHY */
++ lane %= hw->ptp.ports_per_phy;
++ *addr = eth56g_phy_res[res_type].base[0] +
+ lane * eth56g_phy_res[res_type].step + offset;
++
+ return 0;
+ }
+
+@@ -985,19 +997,17 @@ static int ice_phy_res_address_eth56g(u8 port, enum eth56g_res_type res_type,
+ static int ice_write_port_eth56g(struct ice_hw *hw, u8 port, u32 offset,
+ u32 val, enum eth56g_res_type res_type)
+ {
+- u8 phy_port = port % hw->ptp.ports_per_phy;
+- u8 phy_idx = port / hw->ptp.ports_per_phy;
+ u32 addr;
+ int err;
+
+ if (port >= hw->ptp.num_lports)
+ return -EINVAL;
+
+- err = ice_phy_res_address_eth56g(phy_port, res_type, offset, &addr);
++ err = ice_phy_res_address_eth56g(hw, port, res_type, offset, &addr);
+ if (err)
+ return err;
+
+- return ice_write_phy_eth56g(hw, phy_idx, addr, val);
++ return ice_write_phy_eth56g(hw, port, addr, val);
+ }
+
+ /**
+@@ -1016,19 +1026,17 @@ static int ice_write_port_eth56g(struct ice_hw *hw, u8 port, u32 offset,
+ static int ice_read_port_eth56g(struct ice_hw *hw, u8 port, u32 offset,
+ u32 *val, enum eth56g_res_type res_type)
+ {
+- u8 phy_port = port % hw->ptp.ports_per_phy;
+- u8 phy_idx = port / hw->ptp.ports_per_phy;
+ u32 addr;
+ int err;
+
+ if (port >= hw->ptp.num_lports)
+ return -EINVAL;
+
+- err = ice_phy_res_address_eth56g(phy_port, res_type, offset, &addr);
++ err = ice_phy_res_address_eth56g(hw, port, res_type, offset, &addr);
+ if (err)
+ return err;
+
+- return ice_read_phy_eth56g(hw, phy_idx, addr, val);
++ return ice_read_phy_eth56g(hw, port, addr, val);
+ }
+
+ /**
+@@ -1177,6 +1185,56 @@ static int ice_write_port_mem_eth56g(struct ice_hw *hw, u8 port, u16 offset,
+ return ice_write_port_eth56g(hw, port, offset, val, ETH56G_PHY_MEM_PTP);
+ }
+
++/**
++ * ice_write_quad_ptp_reg_eth56g - Write a PHY quad register
++ * @hw: pointer to the HW struct
++ * @port: Port number
++ * @offset: PHY register offset
++ * @val: Value to write
++ *
++ * Return:
++ * * %0 - success
++ * * %EIO - invalid port number or resource type
++ * * %other - failed to write to PHY
++ */
++static int ice_write_quad_ptp_reg_eth56g(struct ice_hw *hw, u8 port,
++ u32 offset, u32 val)
++{
++ u32 addr;
++
++ if (port >= hw->ptp.num_lports)
++ return -EIO;
++
++ addr = eth56g_phy_res[ETH56G_PHY_REG_PTP].base[0] + offset;
++
++ return ice_write_phy_eth56g(hw, port, addr, val);
++}
++
++/**
++ * ice_read_quad_ptp_reg_eth56g - Read a PHY quad register
++ * @hw: pointer to the HW struct
++ * @port: Port number
++ * @offset: PHY register offset
++ * @val: Value to read
++ *
++ * Return:
++ * * %0 - success
++ * * %EIO - invalid port number or resource type
++ * * %other - failed to read from PHY
++ */
++static int ice_read_quad_ptp_reg_eth56g(struct ice_hw *hw, u8 port,
++ u32 offset, u32 *val)
++{
++ u32 addr;
++
++ if (port >= hw->ptp.num_lports)
++ return -EIO;
++
++ addr = eth56g_phy_res[ETH56G_PHY_REG_PTP].base[0] + offset;
++
++ return ice_read_phy_eth56g(hw, port, addr, val);
++}
++
+ /**
+ * ice_is_64b_phy_reg_eth56g - Check if this is a 64bit PHY register
+ * @low_addr: the low address to check
+@@ -1896,7 +1954,6 @@ ice_phy_get_speed_eth56g(struct ice_link_status *li)
+ */
+ static int ice_phy_cfg_parpcs_eth56g(struct ice_hw *hw, u8 port)
+ {
+- u8 port_blk = port & ~(ICE_PORTS_PER_QUAD - 1);
+ u32 val;
+ int err;
+
+@@ -1911,8 +1968,8 @@ static int ice_phy_cfg_parpcs_eth56g(struct ice_hw *hw, u8 port)
+ switch (ice_phy_get_speed_eth56g(&hw->port_info->phy.link_info)) {
+ case ICE_ETH56G_LNK_SPD_1G:
+ case ICE_ETH56G_LNK_SPD_2_5G:
+- err = ice_read_ptp_reg_eth56g(hw, port_blk,
+- PHY_GPCS_CONFIG_REG0, &val);
++ err = ice_read_quad_ptp_reg_eth56g(hw, port,
++ PHY_GPCS_CONFIG_REG0, &val);
+ if (err) {
+ ice_debug(hw, ICE_DBG_PTP, "Failed to read PHY_GPCS_CONFIG_REG0, status: %d",
+ err);
+@@ -1923,8 +1980,8 @@ static int ice_phy_cfg_parpcs_eth56g(struct ice_hw *hw, u8 port)
+ val |= FIELD_PREP(PHY_GPCS_CONFIG_REG0_TX_THR_M,
+ ICE_ETH56G_NOMINAL_TX_THRESH);
+
+- err = ice_write_ptp_reg_eth56g(hw, port_blk,
+- PHY_GPCS_CONFIG_REG0, val);
++ err = ice_write_quad_ptp_reg_eth56g(hw, port,
++ PHY_GPCS_CONFIG_REG0, val);
+ if (err) {
+ ice_debug(hw, ICE_DBG_PTP, "Failed to write PHY_GPCS_CONFIG_REG0, status: %d",
+ err);
+@@ -1965,50 +2022,47 @@ static int ice_phy_cfg_parpcs_eth56g(struct ice_hw *hw, u8 port)
+ */
+ int ice_phy_cfg_ptp_1step_eth56g(struct ice_hw *hw, u8 port)
+ {
+- u8 port_blk = port & ~(ICE_PORTS_PER_QUAD - 1);
+- u8 blk_port = port & (ICE_PORTS_PER_QUAD - 1);
++ u8 quad_lane = port % ICE_PORTS_PER_QUAD;
++ u32 addr, val, peer_delay;
+ bool enable, sfd_ena;
+- u32 val, peer_delay;
+ int err;
+
+ enable = hw->ptp.phy.eth56g.onestep_ena;
+ peer_delay = hw->ptp.phy.eth56g.peer_delay;
+ sfd_ena = hw->ptp.phy.eth56g.sfd_ena;
+
+- /* PHY_PTP_1STEP_CONFIG */
+- err = ice_read_ptp_reg_eth56g(hw, port_blk, PHY_PTP_1STEP_CONFIG, &val);
++ addr = PHY_PTP_1STEP_CONFIG;
++ err = ice_read_quad_ptp_reg_eth56g(hw, port, addr, &val);
+ if (err)
+ return err;
+
+ if (enable)
+- val |= blk_port;
++ val |= BIT(quad_lane);
+ else
+- val &= ~blk_port;
++ val &= ~BIT(quad_lane);
+
+ val &= ~(PHY_PTP_1STEP_T1S_UP64_M | PHY_PTP_1STEP_T1S_DELTA_M);
+
+- err = ice_write_ptp_reg_eth56g(hw, port_blk, PHY_PTP_1STEP_CONFIG, val);
++ err = ice_write_quad_ptp_reg_eth56g(hw, port, addr, val);
+ if (err)
+ return err;
+
+- /* PHY_PTP_1STEP_PEER_DELAY */
++ addr = PHY_PTP_1STEP_PEER_DELAY(quad_lane);
+ val = FIELD_PREP(PHY_PTP_1STEP_PD_DELAY_M, peer_delay);
+ if (peer_delay)
+ val |= PHY_PTP_1STEP_PD_ADD_PD_M;
+ val |= PHY_PTP_1STEP_PD_DLY_V_M;
+- err = ice_write_ptp_reg_eth56g(hw, port_blk,
+- PHY_PTP_1STEP_PEER_DELAY(blk_port), val);
++ err = ice_write_quad_ptp_reg_eth56g(hw, port, addr, val);
+ if (err)
+ return err;
+
+ val &= ~PHY_PTP_1STEP_PD_DLY_V_M;
+- err = ice_write_ptp_reg_eth56g(hw, port_blk,
+- PHY_PTP_1STEP_PEER_DELAY(blk_port), val);
++ err = ice_write_quad_ptp_reg_eth56g(hw, port, addr, val);
+ if (err)
+ return err;
+
+- /* PHY_MAC_XIF_MODE */
+- err = ice_read_mac_reg_eth56g(hw, port, PHY_MAC_XIF_MODE, &val);
++ addr = PHY_MAC_XIF_MODE;
++ err = ice_read_mac_reg_eth56g(hw, port, addr, &val);
+ if (err)
+ return err;
+
+@@ -2028,7 +2082,7 @@ int ice_phy_cfg_ptp_1step_eth56g(struct ice_hw *hw, u8 port)
+ FIELD_PREP(PHY_MAC_XIF_TS_BIN_MODE_M, enable) |
+ FIELD_PREP(PHY_MAC_XIF_TS_SFD_ENA_M, sfd_ena);
+
+- return ice_write_mac_reg_eth56g(hw, port, PHY_MAC_XIF_MODE, val);
++ return ice_write_mac_reg_eth56g(hw, port, addr, val);
+ }
+
+ /**
+@@ -2070,21 +2124,22 @@ static u32 ice_ptp_calc_bitslip_eth56g(struct ice_hw *hw, u8 port, u32 bs,
+ bool fc, bool rs,
+ enum ice_eth56g_link_spd spd)
+ {
+- u8 port_offset = port & (ICE_PORTS_PER_QUAD - 1);
+- u8 port_blk = port & ~(ICE_PORTS_PER_QUAD - 1);
+ u32 bitslip;
+ int err;
+
+ if (!bs || rs)
+ return 0;
+
+- if (spd == ICE_ETH56G_LNK_SPD_1G || spd == ICE_ETH56G_LNK_SPD_2_5G)
++ if (spd == ICE_ETH56G_LNK_SPD_1G || spd == ICE_ETH56G_LNK_SPD_2_5G) {
+ err = ice_read_gpcs_reg_eth56g(hw, port, PHY_GPCS_BITSLIP,
+ &bitslip);
+- else
+- err = ice_read_ptp_reg_eth56g(hw, port_blk,
+- PHY_REG_SD_BIT_SLIP(port_offset),
+- &bitslip);
++ } else {
++ u8 quad_lane = port % ICE_PORTS_PER_QUAD;
++ u32 addr;
++
++ addr = PHY_REG_SD_BIT_SLIP(quad_lane);
++ err = ice_read_quad_ptp_reg_eth56g(hw, port, addr, &bitslip);
++ }
+ if (err)
+ return 0;
+
+@@ -2644,59 +2699,29 @@ static int ice_get_phy_tx_tstamp_ready_eth56g(struct ice_hw *hw, u8 port,
+ }
+
+ /**
+- * ice_is_muxed_topo - detect breakout 2x50G topology for E825C
+- * @hw: pointer to the HW struct
+- *
+- * Return: true if it's 2x50 breakout topology, false otherwise
+- */
+-static bool ice_is_muxed_topo(struct ice_hw *hw)
+-{
+- u8 link_topo;
+- bool mux;
+- u32 val;
+-
+- val = rd32(hw, GLGEN_SWITCH_MODE_CONFIG);
+- mux = FIELD_GET(GLGEN_SWITCH_MODE_CONFIG_25X4_QUAD_M, val);
+- val = rd32(hw, GLGEN_MAC_LINK_TOPO);
+- link_topo = FIELD_GET(GLGEN_MAC_LINK_TOPO_LINK_TOPO_M, val);
+-
+- return (mux && link_topo == ICE_LINK_TOPO_UP_TO_2_LINKS);
+-}
+-
+-/**
+- * ice_ptp_init_phy_e825c - initialize PHY parameters
++ * ice_ptp_init_phy_e825 - initialize PHY parameters
+ * @hw: pointer to the HW struct
+ */
+-static void ice_ptp_init_phy_e825c(struct ice_hw *hw)
++static void ice_ptp_init_phy_e825(struct ice_hw *hw)
+ {
+ struct ice_ptp_hw *ptp = &hw->ptp;
+ struct ice_eth56g_params *params;
+- u8 phy;
++ u32 phy_rev;
++ int err;
+
+ ptp->phy_model = ICE_PHY_ETH56G;
+ params = &ptp->phy.eth56g;
+ params->onestep_ena = false;
+ params->peer_delay = 0;
+ params->sfd_ena = false;
+- params->phy_addr[0] = eth56g_phy_0;
+- params->phy_addr[1] = eth56g_phy_1;
+ params->num_phys = 2;
+ ptp->ports_per_phy = 4;
+ ptp->num_lports = params->num_phys * ptp->ports_per_phy;
+
+ ice_sb_access_ena_eth56g(hw, true);
+- for (phy = 0; phy < params->num_phys; phy++) {
+- u32 phy_rev;
+- int err;
+-
+- err = ice_read_phy_eth56g(hw, phy, PHY_REG_REVISION, &phy_rev);
+- if (err || phy_rev != PHY_REVISION_ETH56G) {
+- ptp->phy_model = ICE_PHY_UNSUP;
+- return;
+- }
+- }
+-
+- ptp->is_2x50g_muxed_topo = ice_is_muxed_topo(hw);
++ err = ice_read_phy_eth56g(hw, hw->pf_id, PHY_REG_REVISION, &phy_rev);
++ if (err || phy_rev != PHY_REVISION_ETH56G)
++ ptp->phy_model = ICE_PHY_UNSUP;
+ }
+
+ /* E822 family functions
+@@ -2715,10 +2740,9 @@ static void ice_fill_phy_msg_e82x(struct ice_hw *hw,
+ struct ice_sbq_msg_input *msg, u8 port,
+ u16 offset)
+ {
+- int phy_port, phy, quadtype;
++ int phy_port, quadtype;
+
+ phy_port = port % hw->ptp.ports_per_phy;
+- phy = port / hw->ptp.ports_per_phy;
+ quadtype = ICE_GET_QUAD_NUM(port) %
+ ICE_GET_QUAD_NUM(hw->ptp.ports_per_phy);
+
+@@ -2730,12 +2754,7 @@ static void ice_fill_phy_msg_e82x(struct ice_hw *hw,
+ msg->msg_addr_high = P_Q1_H(P_4_BASE + offset, phy_port);
+ }
+
+- if (phy == 0)
+- msg->dest_dev = rmn_0;
+- else if (phy == 1)
+- msg->dest_dev = rmn_1;
+- else
+- msg->dest_dev = rmn_2;
++ msg->dest_dev = rmn_0;
+ }
+
+ /**
+@@ -5395,7 +5414,7 @@ void ice_ptp_init_hw(struct ice_hw *hw)
+ else if (ice_is_e810(hw))
+ ice_ptp_init_phy_e810(ptp);
+ else if (ice_is_e825c(hw))
+- ice_ptp_init_phy_e825c(hw);
++ ice_ptp_init_phy_e825(hw);
+ else
+ ptp->phy_model = ICE_PHY_UNSUP;
+ }
+@@ -5418,7 +5437,7 @@ void ice_ptp_init_hw(struct ice_hw *hw)
+ static int ice_ptp_write_port_cmd(struct ice_hw *hw, u8 port,
+ enum ice_ptp_tmr_cmd cmd)
+ {
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ return ice_ptp_write_port_cmd_eth56g(hw, port, cmd);
+ case ICE_PHY_E82X:
+@@ -5483,7 +5502,7 @@ static int ice_ptp_port_cmd(struct ice_hw *hw, enum ice_ptp_tmr_cmd cmd)
+ u32 port;
+
+ /* PHY models which can program all ports simultaneously */
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_E810:
+ return ice_ptp_port_cmd_e810(hw, cmd);
+ default:
+@@ -5562,7 +5581,7 @@ int ice_ptp_init_time(struct ice_hw *hw, u64 time)
+
+ /* PHY timers */
+ /* Fill Rx and Tx ports and send msg to PHY */
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ err = ice_ptp_prep_phy_time_eth56g(hw,
+ (u32)(time & 0xFFFFFFFF));
+@@ -5608,7 +5627,7 @@ int ice_ptp_write_incval(struct ice_hw *hw, u64 incval)
+ wr32(hw, GLTSYN_SHADJ_L(tmr_idx), lower_32_bits(incval));
+ wr32(hw, GLTSYN_SHADJ_H(tmr_idx), upper_32_bits(incval));
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ err = ice_ptp_prep_phy_incval_eth56g(hw, incval);
+ break;
+@@ -5677,7 +5696,7 @@ int ice_ptp_adj_clock(struct ice_hw *hw, s32 adj)
+ wr32(hw, GLTSYN_SHADJ_L(tmr_idx), 0);
+ wr32(hw, GLTSYN_SHADJ_H(tmr_idx), adj);
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ err = ice_ptp_prep_phy_adj_eth56g(hw, adj);
+ break;
+@@ -5710,7 +5729,7 @@ int ice_ptp_adj_clock(struct ice_hw *hw, s32 adj)
+ */
+ int ice_read_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx, u64 *tstamp)
+ {
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ return ice_read_ptp_tstamp_eth56g(hw, block, idx, tstamp);
+ case ICE_PHY_E810:
+@@ -5740,7 +5759,7 @@ int ice_read_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx, u64 *tstamp)
+ */
+ int ice_clear_phy_tstamp(struct ice_hw *hw, u8 block, u8 idx)
+ {
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ return ice_clear_ptp_tstamp_eth56g(hw, block, idx);
+ case ICE_PHY_E810:
+@@ -5803,7 +5822,7 @@ static int ice_get_pf_c827_idx(struct ice_hw *hw, u8 *idx)
+ */
+ void ice_ptp_reset_ts_memory(struct ice_hw *hw)
+ {
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ ice_ptp_reset_ts_memory_eth56g(hw);
+ break;
+@@ -5832,7 +5851,7 @@ int ice_ptp_init_phc(struct ice_hw *hw)
+ /* Clear event err indications for auxiliary pins */
+ (void)rd32(hw, GLTSYN_STAT(src_idx));
+
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ return ice_ptp_init_phc_eth56g(hw);
+ case ICE_PHY_E810:
+@@ -5857,7 +5876,7 @@ int ice_ptp_init_phc(struct ice_hw *hw)
+ */
+ int ice_get_phy_tx_tstamp_ready(struct ice_hw *hw, u8 block, u64 *tstamp_ready)
+ {
+- switch (hw->ptp.phy_model) {
++ switch (ice_get_phy_model(hw)) {
+ case ICE_PHY_ETH56G:
+ return ice_get_phy_tx_tstamp_ready_eth56g(hw, block,
+ tstamp_ready);
+diff --git a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+index 4c8b8457134427..3499062218b59e 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
++++ b/drivers/net/ethernet/intel/ice/ice_ptp_hw.h
+@@ -452,6 +452,11 @@ static inline u64 ice_get_base_incval(struct ice_hw *hw)
+ }
+ }
+
++static inline bool ice_is_dual(struct ice_hw *hw)
++{
++ return !!(hw->dev_caps.nac_topo.mode & ICE_NAC_TOPO_DUAL_M);
++}
++
+ #define PFTSYN_SEM_BYTES 4
+
+ #define ICE_PTP_CLOCK_INDEX_0 0x00
+diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
+index 45768796691fec..609f31e0dfdede 100644
+--- a/drivers/net/ethernet/intel/ice/ice_type.h
++++ b/drivers/net/ethernet/intel/ice/ice_type.h
+@@ -850,7 +850,6 @@ struct ice_mbx_data {
+
+ struct ice_eth56g_params {
+ u8 num_phys;
+- u8 phy_addr[2];
+ bool onestep_ena;
+ bool sfd_ena;
+ u32 peer_delay;
+@@ -881,7 +880,6 @@ struct ice_ptp_hw {
+ union ice_phy_params phy;
+ u8 num_lports;
+ u8 ports_per_phy;
+- bool is_2x50g_muxed_topo;
+ };
+
+ /* Port hardware description */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+index ca92e518be7669..1baf8933a07cb0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec.c
+@@ -724,6 +724,12 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
+ /* check esn */
+ if (x->props.flags & XFRM_STATE_ESN)
+ mlx5e_ipsec_update_esn_state(sa_entry);
++ else
++ /* According to RFC4303, section "3.3.3. Sequence Number Generation",
++ * the first packet sent using a given SA will contain a sequence
++ * number of 1.
++ */
++ sa_entry->esn_state.esn = 1;
+
+ mlx5e_ipsec_build_accel_xfrm_attrs(sa_entry, &sa_entry->attrs);
+
+@@ -768,9 +774,12 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
+ MLX5_IPSEC_RESCHED);
+
+ if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET &&
+- x->props.mode == XFRM_MODE_TUNNEL)
+- xa_set_mark(&ipsec->sadb, sa_entry->ipsec_obj_id,
+- MLX5E_IPSEC_TUNNEL_SA);
++ x->props.mode == XFRM_MODE_TUNNEL) {
++ xa_lock_bh(&ipsec->sadb);
++ __xa_set_mark(&ipsec->sadb, sa_entry->ipsec_obj_id,
++ MLX5E_IPSEC_TUNNEL_SA);
++ xa_unlock_bh(&ipsec->sadb);
++ }
+
+ out:
+ x->xso.offload_handle = (unsigned long)sa_entry;
+@@ -797,7 +806,6 @@ static int mlx5e_xfrm_add_state(struct xfrm_state *x,
+ static void mlx5e_xfrm_del_state(struct xfrm_state *x)
+ {
+ struct mlx5e_ipsec_sa_entry *sa_entry = to_ipsec_sa_entry(x);
+- struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs;
+ struct mlx5e_ipsec *ipsec = sa_entry->ipsec;
+ struct mlx5e_ipsec_sa_entry *old;
+
+@@ -806,12 +814,6 @@ static void mlx5e_xfrm_del_state(struct xfrm_state *x)
+
+ old = xa_erase_bh(&ipsec->sadb, sa_entry->ipsec_obj_id);
+ WARN_ON(old != sa_entry);
+-
+- if (attrs->mode == XFRM_MODE_TUNNEL &&
+- attrs->type == XFRM_DEV_OFFLOAD_PACKET)
+- /* Make sure that no ARP requests are running in parallel */
+- flush_workqueue(ipsec->wq);
+-
+ }
+
+ static void mlx5e_xfrm_free_state(struct xfrm_state *x)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+index e51b03d4c717f1..57861d34d46f85 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_fs.c
+@@ -1718,23 +1718,21 @@ static int tx_add_rule(struct mlx5e_ipsec_sa_entry *sa_entry)
+ goto err_alloc;
+ }
+
+- if (attrs->family == AF_INET)
+- setup_fte_addr4(spec, &attrs->saddr.a4, &attrs->daddr.a4);
+- else
+- setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6);
+-
+ setup_fte_no_frags(spec);
+ setup_fte_upper_proto_match(spec, &attrs->upspec);
+
+ switch (attrs->type) {
+ case XFRM_DEV_OFFLOAD_CRYPTO:
++ if (attrs->family == AF_INET)
++ setup_fte_addr4(spec, &attrs->saddr.a4, &attrs->daddr.a4);
++ else
++ setup_fte_addr6(spec, attrs->saddr.a6, attrs->daddr.a6);
+ setup_fte_spi(spec, attrs->spi, false);
+ setup_fte_esp(spec);
+ setup_fte_reg_a(spec);
+ break;
+ case XFRM_DEV_OFFLOAD_PACKET:
+- if (attrs->reqid)
+- setup_fte_reg_c4(spec, attrs->reqid);
++ setup_fte_reg_c4(spec, attrs->reqid);
+ err = setup_pkt_reformat(ipsec, attrs, &flow_act);
+ if (err)
+ goto err_pkt_reformat;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
+index 53cfa39188cb0e..820debf3fbbf22 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ipsec_offload.c
+@@ -91,8 +91,9 @@ u32 mlx5_ipsec_device_caps(struct mlx5_core_dev *mdev)
+ EXPORT_SYMBOL_GPL(mlx5_ipsec_device_caps);
+
+ static void mlx5e_ipsec_packet_setup(void *obj, u32 pdn,
+- struct mlx5_accel_esp_xfrm_attrs *attrs)
++ struct mlx5e_ipsec_sa_entry *sa_entry)
+ {
++ struct mlx5_accel_esp_xfrm_attrs *attrs = &sa_entry->attrs;
+ void *aso_ctx;
+
+ aso_ctx = MLX5_ADDR_OF(ipsec_obj, obj, ipsec_aso);
+@@ -120,8 +121,12 @@ static void mlx5e_ipsec_packet_setup(void *obj, u32 pdn,
+ * active.
+ */
+ MLX5_SET(ipsec_obj, obj, aso_return_reg, MLX5_IPSEC_ASO_REG_C_4_5);
+- if (attrs->dir == XFRM_DEV_OFFLOAD_OUT)
++ if (attrs->dir == XFRM_DEV_OFFLOAD_OUT) {
+ MLX5_SET(ipsec_aso, aso_ctx, mode, MLX5_IPSEC_ASO_INC_SN);
++ if (!attrs->replay_esn.trigger)
++ MLX5_SET(ipsec_aso, aso_ctx, mode_parameter,
++ sa_entry->esn_state.esn);
++ }
+
+ if (attrs->lft.hard_packet_limit != XFRM_INF) {
+ MLX5_SET(ipsec_aso, aso_ctx, remove_flow_pkt_cnt,
+@@ -175,7 +180,7 @@ static int mlx5_create_ipsec_obj(struct mlx5e_ipsec_sa_entry *sa_entry)
+
+ res = &mdev->mlx5e_res.hw_objs;
+ if (attrs->type == XFRM_DEV_OFFLOAD_PACKET)
+- mlx5e_ipsec_packet_setup(obj, res->pdn, attrs);
++ mlx5e_ipsec_packet_setup(obj, res->pdn, sa_entry);
+
+ err = mlx5_cmd_exec(mdev, in, sizeof(in), out, sizeof(out));
+ if (!err)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 2eabfcc247c6ae..0ce999706d412a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -2709,6 +2709,7 @@ struct mlx5_flow_namespace *mlx5_get_flow_namespace(struct mlx5_core_dev *dev,
+ break;
+ case MLX5_FLOW_NAMESPACE_RDMA_TX:
+ root_ns = steering->rdma_tx_root_ns;
++ prio = RDMA_TX_BYPASS_PRIO;
+ break;
+ case MLX5_FLOW_NAMESPACE_RDMA_RX_COUNTERS:
+ root_ns = steering->rdma_rx_root_ns;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
+index ab2717012b79b5..39e80704b1c425 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/port_sel.c
+@@ -530,7 +530,7 @@ int mlx5_lag_port_sel_create(struct mlx5_lag *ldev,
+ set_tt_map(port_sel, hash_type);
+ err = mlx5_lag_create_definers(ldev, hash_type, ports);
+ if (err)
+- return err;
++ goto clear_port_sel;
+
+ if (port_sel->tunnel) {
+ err = mlx5_lag_create_inner_ttc_table(ldev);
+@@ -549,6 +549,8 @@ int mlx5_lag_port_sel_create(struct mlx5_lag *ldev,
+ mlx5_destroy_ttc_table(port_sel->inner.ttc);
+ destroy_definers:
+ mlx5_lag_destroy_definers(ldev);
++clear_port_sel:
++ memset(port_sel, 0, sizeof(*port_sel));
+ return err;
+ }
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
+index a96be98be032f5..b96909fbeb12de 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/sf/devlink.c
+@@ -257,6 +257,7 @@ static int mlx5_sf_add(struct mlx5_core_dev *dev, struct mlx5_sf_table *table,
+ return 0;
+
+ esw_err:
++ mlx5_sf_function_id_erase(table, sf);
+ mlx5_sf_free(table, sf);
+ return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/wc.c b/drivers/net/ethernet/mellanox/mlx5/core/wc.c
+index 1bed75eca97db8..740b719e7072df 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/wc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/wc.c
+@@ -382,6 +382,7 @@ static void mlx5_core_test_wc(struct mlx5_core_dev *mdev)
+
+ bool mlx5_wc_support_get(struct mlx5_core_dev *mdev)
+ {
++ struct mutex *wc_state_lock = &mdev->wc_state_lock;
+ struct mlx5_core_dev *parent = NULL;
+
+ if (!MLX5_CAP_GEN(mdev, bf)) {
+@@ -400,32 +401,31 @@ bool mlx5_wc_support_get(struct mlx5_core_dev *mdev)
+ */
+ goto out;
+
+- mutex_lock(&mdev->wc_state_lock);
+-
+- if (mdev->wc_state != MLX5_WC_STATE_UNINITIALIZED)
+- goto unlock;
+-
+ #ifdef CONFIG_MLX5_SF
+- if (mlx5_core_is_sf(mdev))
++ if (mlx5_core_is_sf(mdev)) {
+ parent = mdev->priv.parent_mdev;
++ wc_state_lock = &parent->wc_state_lock;
++ }
+ #endif
+
+- if (parent) {
+- mutex_lock(&parent->wc_state_lock);
++ mutex_lock(wc_state_lock);
+
++ if (mdev->wc_state != MLX5_WC_STATE_UNINITIALIZED)
++ goto unlock;
++
++ if (parent) {
+ mlx5_core_test_wc(parent);
+
+ mlx5_core_dbg(mdev, "parent set wc_state=%d\n",
+ parent->wc_state);
+ mdev->wc_state = parent->wc_state;
+
+- mutex_unlock(&parent->wc_state_lock);
++ } else {
++ mlx5_core_test_wc(mdev);
+ }
+
+- mlx5_core_test_wc(mdev);
+-
+ unlock:
+- mutex_unlock(&mdev->wc_state_lock);
++ mutex_unlock(wc_state_lock);
+ out:
+ mlx5_core_dbg(mdev, "wc_state=%d\n", mdev->wc_state);
+
+diff --git a/drivers/net/ethernet/netronome/nfp/bpf/offload.c b/drivers/net/ethernet/netronome/nfp/bpf/offload.c
+index 9d97cd281f18e4..c03558adda91eb 100644
+--- a/drivers/net/ethernet/netronome/nfp/bpf/offload.c
++++ b/drivers/net/ethernet/netronome/nfp/bpf/offload.c
+@@ -458,7 +458,8 @@ int nfp_bpf_event_output(struct nfp_app_bpf *bpf, const void *data,
+ map_id_full = be64_to_cpu(cbe->map_ptr);
+ map_id = map_id_full;
+
+- if (len < sizeof(struct cmsg_bpf_event) + pkt_size + data_size)
++ if (size_add(pkt_size, data_size) > INT_MAX ||
++ len < sizeof(struct cmsg_bpf_event) + pkt_size + data_size)
+ return -EINVAL;
+ if (cbe->hdr.ver != NFP_CCM_ABI_VERSION)
+ return -EINVAL;
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 907af4651c5534..6f6b0566c65bcb 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -2756,6 +2756,7 @@ static const struct ravb_hw_info ravb_rzv2m_hw_info = {
+ .net_features = NETIF_F_RXCSUM,
+ .stats_len = ARRAY_SIZE(ravb_gstrings_stats),
+ .tccr_mask = TCCR_TSRQ0 | TCCR_TSRQ1 | TCCR_TSRQ2 | TCCR_TSRQ3,
++ .tx_max_frame_size = SZ_2K,
+ .rx_max_frame_size = SZ_2K,
+ .rx_buffer_size = SZ_2K +
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)),
+diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c
+index 8d02d2b2142937..dc5e247ca5d1a6 100644
+--- a/drivers/net/ethernet/ti/cpsw_ale.c
++++ b/drivers/net/ethernet/ti/cpsw_ale.c
+@@ -127,15 +127,15 @@ struct cpsw_ale_dev_id {
+
+ static inline int cpsw_ale_get_field(u32 *ale_entry, u32 start, u32 bits)
+ {
+- int idx, idx2;
++ int idx, idx2, index;
+ u32 hi_val = 0;
+
+ idx = start / 32;
+ idx2 = (start + bits - 1) / 32;
+ /* Check if bits to be fetched exceed a word */
+ if (idx != idx2) {
+- idx2 = 2 - idx2; /* flip */
+- hi_val = ale_entry[idx2] << ((idx2 * 32) - start);
++ index = 2 - idx2; /* flip */
++ hi_val = ale_entry[index] << ((idx2 * 32) - start);
+ }
+ start -= idx * 32;
+ idx = 2 - idx; /* flip */
+@@ -145,16 +145,16 @@ static inline int cpsw_ale_get_field(u32 *ale_entry, u32 start, u32 bits)
+ static inline void cpsw_ale_set_field(u32 *ale_entry, u32 start, u32 bits,
+ u32 value)
+ {
+- int idx, idx2;
++ int idx, idx2, index;
+
+ value &= BITMASK(bits);
+ idx = start / 32;
+ idx2 = (start + bits - 1) / 32;
+ /* Check if bits to be set exceed a word */
+ if (idx != idx2) {
+- idx2 = 2 - idx2; /* flip */
+- ale_entry[idx2] &= ~(BITMASK(bits + start - (idx2 * 32)));
+- ale_entry[idx2] |= (value >> ((idx2 * 32) - start));
++ index = 2 - idx2; /* flip */
++ ale_entry[index] &= ~(BITMASK(bits + start - (idx2 * 32)));
++ ale_entry[index] |= (value >> ((idx2 * 32) - start));
+ }
+ start -= idx * 32;
+ idx = 2 - idx; /* flip */
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index 1fcbcaa85ebdb4..de10a2d08c428e 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -2056,6 +2056,12 @@ axienet_ethtools_set_coalesce(struct net_device *ndev,
+ return -EBUSY;
+ }
+
++ if (ecoalesce->rx_max_coalesced_frames > 255 ||
++ ecoalesce->tx_max_coalesced_frames > 255) {
++ NL_SET_ERR_MSG(extack, "frames must be less than 256");
++ return -EINVAL;
++ }
++
+ if (ecoalesce->rx_max_coalesced_frames)
+ lp->coalesce_count_rx = ecoalesce->rx_max_coalesced_frames;
+ if (ecoalesce->rx_coalesce_usecs)
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 70f981887518aa..47406ce9901612 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -1526,8 +1526,8 @@ static int gtp_newlink(struct net *src_net, struct net_device *dev,
+ goto out_encap;
+ }
+
+- gn = net_generic(dev_net(dev), gtp_net_id);
+- list_add_rcu(&gtp->list, &gn->gtp_dev_list);
++ gn = net_generic(src_net, gtp_net_id);
++ list_add(&gtp->list, &gn->gtp_dev_list);
+ dev->priv_destructor = gtp_destructor;
+
+ netdev_dbg(dev, "registered new GTP interface\n");
+@@ -1553,7 +1553,7 @@ static void gtp_dellink(struct net_device *dev, struct list_head *head)
+ hlist_for_each_entry_safe(pctx, next, &gtp->tid_hash[i], hlist_tid)
+ pdp_context_delete(pctx);
+
+- list_del_rcu(&gtp->list);
++ list_del(&gtp->list);
+ unregister_netdevice_queue(dev, head);
+ }
+
+@@ -2279,16 +2279,19 @@ static int gtp_genl_dump_pdp(struct sk_buff *skb,
+ struct gtp_dev *last_gtp = (struct gtp_dev *)cb->args[2], *gtp;
+ int i, j, bucket = cb->args[0], skip = cb->args[1];
+ struct net *net = sock_net(skb->sk);
++ struct net_device *dev;
+ struct pdp_ctx *pctx;
+- struct gtp_net *gn;
+-
+- gn = net_generic(net, gtp_net_id);
+
+ if (cb->args[4])
+ return 0;
+
+ rcu_read_lock();
+- list_for_each_entry_rcu(gtp, &gn->gtp_dev_list, list) {
++ for_each_netdev_rcu(net, dev) {
++ if (dev->rtnl_link_ops != &gtp_link_ops)
++ continue;
++
++ gtp = netdev_priv(dev);
++
+ if (last_gtp && last_gtp != gtp)
+ continue;
+ else
+@@ -2483,9 +2486,14 @@ static void __net_exit gtp_net_exit_batch_rtnl(struct list_head *net_list,
+
+ list_for_each_entry(net, net_list, exit_list) {
+ struct gtp_net *gn = net_generic(net, gtp_net_id);
+- struct gtp_dev *gtp;
++ struct gtp_dev *gtp, *gtp_next;
++ struct net_device *dev;
++
++ for_each_netdev(net, dev)
++ if (dev->rtnl_link_ops == &gtp_link_ops)
++ gtp_dellink(dev, dev_to_kill);
+
+- list_for_each_entry(gtp, &gn->gtp_dev_list, list)
++ list_for_each_entry_safe(gtp, gtp_next, &gn->gtp_dev_list, list)
+ gtp_dellink(gtp->dev, dev_to_kill);
+ }
+ }
+diff --git a/drivers/net/pfcp.c b/drivers/net/pfcp.c
+index 69434fd13f9612..68d0d9e92a2209 100644
+--- a/drivers/net/pfcp.c
++++ b/drivers/net/pfcp.c
+@@ -206,8 +206,8 @@ static int pfcp_newlink(struct net *net, struct net_device *dev,
+ goto exit_del_pfcp_sock;
+ }
+
+- pn = net_generic(dev_net(dev), pfcp_net_id);
+- list_add_rcu(&pfcp->list, &pn->pfcp_dev_list);
++ pn = net_generic(net, pfcp_net_id);
++ list_add(&pfcp->list, &pn->pfcp_dev_list);
+
+ netdev_dbg(dev, "registered new PFCP interface\n");
+
+@@ -224,7 +224,7 @@ static void pfcp_dellink(struct net_device *dev, struct list_head *head)
+ {
+ struct pfcp_dev *pfcp = netdev_priv(dev);
+
+- list_del_rcu(&pfcp->list);
++ list_del(&pfcp->list);
+ unregister_netdevice_queue(dev, head);
+ }
+
+@@ -247,11 +247,16 @@ static int __net_init pfcp_net_init(struct net *net)
+ static void __net_exit pfcp_net_exit(struct net *net)
+ {
+ struct pfcp_net *pn = net_generic(net, pfcp_net_id);
+- struct pfcp_dev *pfcp;
++ struct pfcp_dev *pfcp, *pfcp_next;
++ struct net_device *dev;
+ LIST_HEAD(list);
+
+ rtnl_lock();
+- list_for_each_entry(pfcp, &pn->pfcp_dev_list, list)
++ for_each_netdev(net, dev)
++ if (dev->rtnl_link_ops == &pfcp_link_ops)
++ pfcp_dellink(dev, &list);
++
++ list_for_each_entry_safe(pfcp, pfcp_next, &pn->pfcp_dev_list, list)
+ pfcp_dellink(pfcp->dev, &list);
+
+ unregister_netdevice_many(&list);
+diff --git a/drivers/nvme/target/io-cmd-bdev.c b/drivers/nvme/target/io-cmd-bdev.c
+index 0bda83d0fc3e08..eaf31c823cbe88 100644
+--- a/drivers/nvme/target/io-cmd-bdev.c
++++ b/drivers/nvme/target/io-cmd-bdev.c
+@@ -36,7 +36,7 @@ void nvmet_bdev_set_limits(struct block_device *bdev, struct nvme_id_ns *id)
+ */
+ id->nsfeat |= 1 << 4;
+ /* NPWG = Namespace Preferred Write Granularity. 0's based */
+- id->npwg = lpp0b;
++ id->npwg = to0based(bdev_io_min(bdev) / bdev_logical_block_size(bdev));
+ /* NPWA = Namespace Preferred Write Alignment. 0's based */
+ id->npwa = id->npwg;
+ /* NPDG = Namespace Preferred Deallocate Granularity. 0's based */
+diff --git a/drivers/platform/x86/dell/dell-uart-backlight.c b/drivers/platform/x86/dell/dell-uart-backlight.c
+index 3995f90add4568..c45bc332af7a02 100644
+--- a/drivers/platform/x86/dell/dell-uart-backlight.c
++++ b/drivers/platform/x86/dell/dell-uart-backlight.c
+@@ -283,6 +283,9 @@ static int dell_uart_bl_serdev_probe(struct serdev_device *serdev)
+ init_waitqueue_head(&dell_bl->wait_queue);
+ dell_bl->dev = dev;
+
++ serdev_device_set_drvdata(serdev, dell_bl);
++ serdev_device_set_client_ops(serdev, &dell_uart_bl_serdev_ops);
++
+ ret = devm_serdev_device_open(dev, serdev);
+ if (ret)
+ return dev_err_probe(dev, ret, "opening UART device\n");
+@@ -290,8 +293,6 @@ static int dell_uart_bl_serdev_probe(struct serdev_device *serdev)
+ /* 9600 bps, no flow control, these are the default but set them to be sure */
+ serdev_device_set_baudrate(serdev, 9600);
+ serdev_device_set_flow_control(serdev, false);
+- serdev_device_set_drvdata(serdev, dell_bl);
+- serdev_device_set_client_ops(serdev, &dell_uart_bl_serdev_ops);
+
+ get_version[0] = DELL_SOF(GET_CMD_LEN);
+ get_version[1] = CMD_GET_VERSION;
+diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+index 1e46e30dae9669..dbcd3087aaa4b0 100644
+--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
++++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+@@ -804,6 +804,7 @@ EXPORT_SYMBOL_GPL(isst_if_cdev_unregister);
+ static const struct x86_cpu_id isst_cpu_ids[] = {
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, SST_HPM_SUPPORTED),
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, SST_HPM_SUPPORTED),
++ X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, SST_HPM_SUPPORTED),
+ X86_MATCH_VFM(INTEL_EMERALDRAPIDS_X, 0),
+ X86_MATCH_VFM(INTEL_GRANITERAPIDS_D, SST_HPM_SUPPORTED),
+ X86_MATCH_VFM(INTEL_GRANITERAPIDS_X, SST_HPM_SUPPORTED),
+diff --git a/drivers/platform/x86/intel/tpmi_power_domains.c b/drivers/platform/x86/intel/tpmi_power_domains.c
+index 0609a8320f7ec1..12fb0943b5dc37 100644
+--- a/drivers/platform/x86/intel/tpmi_power_domains.c
++++ b/drivers/platform/x86/intel/tpmi_power_domains.c
+@@ -81,6 +81,7 @@ static const struct x86_cpu_id tpmi_cpu_ids[] = {
+ X86_MATCH_VFM(INTEL_GRANITERAPIDS_X, NULL),
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT_X, NULL),
+ X86_MATCH_VFM(INTEL_ATOM_CRESTMONT, NULL),
++ X86_MATCH_VFM(INTEL_ATOM_DARKMONT_X, NULL),
+ X86_MATCH_VFM(INTEL_GRANITERAPIDS_D, NULL),
+ X86_MATCH_VFM(INTEL_PANTHERCOVE_X, NULL),
+ {}
+diff --git a/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c b/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c
+index d525bdc8ca9b3f..32d9b6009c4229 100644
+--- a/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c
++++ b/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c
+@@ -199,14 +199,15 @@ static int yt2_1380_fc_serdev_probe(struct serdev_device *serdev)
+ if (ret)
+ return ret;
+
++ serdev_device_set_drvdata(serdev, fc);
++ serdev_device_set_client_ops(serdev, &yt2_1380_fc_serdev_ops);
++
+ ret = devm_serdev_device_open(dev, serdev);
+ if (ret)
+ return dev_err_probe(dev, ret, "opening UART device\n");
+
+ serdev_device_set_baudrate(serdev, 600);
+ serdev_device_set_flow_control(serdev, false);
+- serdev_device_set_drvdata(serdev, fc);
+- serdev_device_set_client_ops(serdev, &yt2_1380_fc_serdev_ops);
+
+ ret = devm_extcon_register_notifier_all(dev, fc->extcon, &fc->nb);
+ if (ret)
+diff --git a/drivers/pmdomain/imx/imx8mp-blk-ctrl.c b/drivers/pmdomain/imx/imx8mp-blk-ctrl.c
+index 77e889165eed3c..a19e806bb14726 100644
+--- a/drivers/pmdomain/imx/imx8mp-blk-ctrl.c
++++ b/drivers/pmdomain/imx/imx8mp-blk-ctrl.c
+@@ -770,7 +770,7 @@ static void imx8mp_blk_ctrl_remove(struct platform_device *pdev)
+
+ of_genpd_del_provider(pdev->dev.of_node);
+
+- for (i = 0; bc->onecell_data.num_domains; i++) {
++ for (i = 0; i < bc->onecell_data.num_domains; i++) {
+ struct imx8mp_blk_ctrl_domain *domain = &bc->domains[i];
+
+ pm_genpd_remove(&domain->genpd);
+diff --git a/drivers/reset/reset-rzg2l-usbphy-ctrl.c b/drivers/reset/reset-rzg2l-usbphy-ctrl.c
+index 1cd157f4f03b47..4e2ac1f0060c0d 100644
+--- a/drivers/reset/reset-rzg2l-usbphy-ctrl.c
++++ b/drivers/reset/reset-rzg2l-usbphy-ctrl.c
+@@ -176,6 +176,7 @@ static int rzg2l_usbphy_ctrl_probe(struct platform_device *pdev)
+ vdev->dev.parent = dev;
+ priv->vdev = vdev;
+
++ device_set_of_node_from_dev(&vdev->dev, dev);
+ error = platform_device_add(vdev);
+ if (error)
+ goto err_device_put;
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 05b936ad353be7..6cc9e61cca07de 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -10589,14 +10589,17 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ }
+
+ /*
+- * Set the default power management level for runtime and system PM.
++ * Set the default power management level for runtime and system PM if
++ * not set by the host controller drivers.
+ * Default power saving mode is to keep UFS link in Hibern8 state
+ * and UFS device in sleep state.
+ */
+- hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
++ if (!hba->rpm_lvl)
++ hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
+ UFS_SLEEP_PWR_MODE,
+ UIC_LINK_HIBERN8_STATE);
+- hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
++ if (!hba->spm_lvl)
++ hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
+ UFS_SLEEP_PWR_MODE,
+ UIC_LINK_HIBERN8_STATE);
+
+diff --git a/fs/afs/addr_prefs.c b/fs/afs/addr_prefs.c
+index a189ff8a5034e0..c0384201b8feb5 100644
+--- a/fs/afs/addr_prefs.c
++++ b/fs/afs/addr_prefs.c
+@@ -413,8 +413,10 @@ int afs_proc_addr_prefs_write(struct file *file, char *buf, size_t size)
+
+ do {
+ argc = afs_split_string(&buf, argv, ARRAY_SIZE(argv));
+- if (argc < 0)
+- return argc;
++ if (argc < 0) {
++ ret = argc;
++ goto done;
++ }
+ if (argc < 2)
+ goto inval;
+
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 0c4d14c59ebec5..395b8b880ce786 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -797,6 +797,10 @@ static int get_canonical_dev_path(const char *dev_path, char *canonical)
+ if (ret)
+ goto out;
+ resolved_path = d_path(&path, path_buf, PATH_MAX);
++ if (IS_ERR(resolved_path)) {
++ ret = PTR_ERR(resolved_path);
++ goto out;
++ }
+ ret = strscpy(canonical, resolved_path, PATH_MAX);
+ out:
+ kfree(path_buf);
+diff --git a/fs/cachefiles/daemon.c b/fs/cachefiles/daemon.c
+index 89b11336a83697..1806bff8e59bc3 100644
+--- a/fs/cachefiles/daemon.c
++++ b/fs/cachefiles/daemon.c
+@@ -15,6 +15,7 @@
+ #include <linux/namei.h>
+ #include <linux/poll.h>
+ #include <linux/mount.h>
++#include <linux/security.h>
+ #include <linux/statfs.h>
+ #include <linux/ctype.h>
+ #include <linux/string.h>
+@@ -576,7 +577,7 @@ static int cachefiles_daemon_dir(struct cachefiles_cache *cache, char *args)
+ */
+ static int cachefiles_daemon_secctx(struct cachefiles_cache *cache, char *args)
+ {
+- char *secctx;
++ int err;
+
+ _enter(",%s", args);
+
+@@ -585,16 +586,16 @@ static int cachefiles_daemon_secctx(struct cachefiles_cache *cache, char *args)
+ return -EINVAL;
+ }
+
+- if (cache->secctx) {
++ if (cache->have_secid) {
+ pr_err("Second security context specified\n");
+ return -EINVAL;
+ }
+
+- secctx = kstrdup(args, GFP_KERNEL);
+- if (!secctx)
+- return -ENOMEM;
++ err = security_secctx_to_secid(args, strlen(args), &cache->secid);
++ if (err)
++ return err;
+
+- cache->secctx = secctx;
++ cache->have_secid = true;
+ return 0;
+ }
+
+@@ -820,7 +821,6 @@ static void cachefiles_daemon_unbind(struct cachefiles_cache *cache)
+ put_cred(cache->cache_cred);
+
+ kfree(cache->rootdirname);
+- kfree(cache->secctx);
+ kfree(cache->tag);
+
+ _leave("");
+diff --git a/fs/cachefiles/internal.h b/fs/cachefiles/internal.h
+index 7b99bd98de75b8..38c236e38cef85 100644
+--- a/fs/cachefiles/internal.h
++++ b/fs/cachefiles/internal.h
+@@ -122,7 +122,6 @@ struct cachefiles_cache {
+ #define CACHEFILES_STATE_CHANGED 3 /* T if state changed (poll trigger) */
+ #define CACHEFILES_ONDEMAND_MODE 4 /* T if in on-demand read mode */
+ char *rootdirname; /* name of cache root directory */
+- char *secctx; /* LSM security context */
+ char *tag; /* cache binding tag */
+ refcount_t unbind_pincount;/* refcount to do daemon unbind */
+ struct xarray reqs; /* xarray of pending on-demand requests */
+@@ -130,6 +129,8 @@ struct cachefiles_cache {
+ struct xarray ondemand_ids; /* xarray for ondemand_id allocation */
+ u32 ondemand_id_next;
+ u32 msg_id_next;
++ u32 secid; /* LSM security id */
++ bool have_secid; /* whether "secid" was set */
+ };
+
+ static inline bool cachefiles_in_ondemand_mode(struct cachefiles_cache *cache)
+diff --git a/fs/cachefiles/security.c b/fs/cachefiles/security.c
+index fe777164f1d894..fc6611886b3b5e 100644
+--- a/fs/cachefiles/security.c
++++ b/fs/cachefiles/security.c
+@@ -18,7 +18,7 @@ int cachefiles_get_security_ID(struct cachefiles_cache *cache)
+ struct cred *new;
+ int ret;
+
+- _enter("{%s}", cache->secctx);
++ _enter("{%u}", cache->have_secid ? cache->secid : 0);
+
+ new = prepare_kernel_cred(current);
+ if (!new) {
+@@ -26,8 +26,8 @@ int cachefiles_get_security_ID(struct cachefiles_cache *cache)
+ goto error;
+ }
+
+- if (cache->secctx) {
+- ret = set_security_override_from_ctx(new, cache->secctx);
++ if (cache->have_secid) {
++ ret = set_security_override(new, cache->secid);
+ if (ret < 0) {
+ put_cred(new);
+ pr_err("Security denies permission to nominate security context: error %d\n",
+diff --git a/fs/file.c b/fs/file.c
+index eb093e73697206..4cb952541dd036 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -21,6 +21,7 @@
+ #include <linux/rcupdate.h>
+ #include <linux/close_range.h>
+ #include <net/sock.h>
++#include <linux/init_task.h>
+
+ #include "internal.h"
+
+diff --git a/fs/hfs/super.c b/fs/hfs/super.c
+index eeac99765f0d61..cf13b5cc108488 100644
+--- a/fs/hfs/super.c
++++ b/fs/hfs/super.c
+@@ -419,11 +419,13 @@ static int hfs_fill_super(struct super_block *sb, void *data, int silent)
+ goto bail_no_root;
+ res = hfs_cat_find_brec(sb, HFS_ROOT_CNID, &fd);
+ if (!res) {
+- if (fd.entrylength > sizeof(rec) || fd.entrylength < 0) {
++ if (fd.entrylength != sizeof(rec.dir)) {
+ res = -EIO;
+ goto bail_hfs_find;
+ }
+ hfs_bnode_read(fd.bnode, &rec, fd.entryoffset, fd.entrylength);
++ if (rec.type != HFS_CDR_DIR)
++ res = -EIO;
+ }
+ if (res)
+ goto bail_hfs_find;
+diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
+index 25d1ede6bb0eb0..1bad460275ebe2 100644
+--- a/fs/iomap/buffered-io.c
++++ b/fs/iomap/buffered-io.c
+@@ -1138,7 +1138,7 @@ static void iomap_write_delalloc_scan(struct inode *inode,
+ start_byte, end_byte, iomap, punch);
+
+ /* move offset to start of next folio in range */
+- start_byte = folio_next_index(folio) << PAGE_SHIFT;
++ start_byte = folio_pos(folio) + folio_size(folio);
+ folio_unlock(folio);
+ folio_put(folio);
+ }
+diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
+index e70eb4ea21c038..a44132c986538b 100644
+--- a/fs/netfs/read_collect.c
++++ b/fs/netfs/read_collect.c
+@@ -249,16 +249,17 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was
+
+ /* Deal with the trickiest case: that this subreq is in the middle of a
+ * folio, not touching either edge, but finishes first. In such a
+- * case, we donate to the previous subreq, if there is one, so that the
+- * donation is only handled when that completes - and remove this
+- * subreq from the list.
++ * case, we donate to the previous subreq, if there is one and if it is
++ * contiguous, so that the donation is only handled when that completes
++ * - and remove this subreq from the list.
+ *
+ * If the previous subreq finished first, we will have acquired their
+ * donation and should be able to unlock folios and/or donate nextwards.
+ */
+ if (!subreq->consumed &&
+ !prev_donated &&
+- !list_is_first(&subreq->rreq_link, &rreq->subrequests)) {
++ !list_is_first(&subreq->rreq_link, &rreq->subrequests) &&
++ subreq->start == prev->start + prev->len) {
+ prev = list_prev_entry(subreq, rreq_link);
+ WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len);
+ subreq->start += subreq->len;
+diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
+index b4521b09605881..387a7a176ad84b 100644
+--- a/fs/proc/vmcore.c
++++ b/fs/proc/vmcore.c
+@@ -404,6 +404,8 @@ static ssize_t __read_vmcore(struct iov_iter *iter, loff_t *fpos)
+ if (!iov_iter_count(iter))
+ return acc;
+ }
++
++ cond_resched();
+ }
+
+ return acc;
+diff --git a/fs/qnx6/inode.c b/fs/qnx6/inode.c
+index 85925ec0051a97..3310d1ad4d0e98 100644
+--- a/fs/qnx6/inode.c
++++ b/fs/qnx6/inode.c
+@@ -179,8 +179,7 @@ static int qnx6_statfs(struct dentry *dentry, struct kstatfs *buf)
+ */
+ static const char *qnx6_checkroot(struct super_block *s)
+ {
+- static char match_root[2][3] = {".\0\0", "..\0"};
+- int i, error = 0;
++ int error = 0;
+ struct qnx6_dir_entry *dir_entry;
+ struct inode *root = d_inode(s->s_root);
+ struct address_space *mapping = root->i_mapping;
+@@ -189,11 +188,9 @@ static const char *qnx6_checkroot(struct super_block *s)
+ if (IS_ERR(folio))
+ return "error reading root directory";
+ dir_entry = kmap_local_folio(folio, 0);
+- for (i = 0; i < 2; i++) {
+- /* maximum 3 bytes - due to match_root limitation */
+- if (strncmp(dir_entry[i].de_fname, match_root[i], 3))
+- error = 1;
+- }
++ if (memcmp(dir_entry[0].de_fname, ".", 2) ||
++ memcmp(dir_entry[1].de_fname, "..", 3))
++ error = 1;
+ folio_release_kmap(folio, dir_entry);
+ if (error)
+ return "error reading root directory.";
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index fe40152b915d82..fb51cdf5520617 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -1044,6 +1044,7 @@ clean_demultiplex_info(struct TCP_Server_Info *server)
+ /* Release netns reference for this server. */
+ put_net(cifs_net_ns(server));
+ kfree(server->leaf_fullpath);
++ kfree(server->hostname);
+ kfree(server);
+
+ length = atomic_dec_return(&tcpSesAllocCount);
+@@ -1670,8 +1671,6 @@ cifs_put_tcp_session(struct TCP_Server_Info *server, int from_reconnect)
+ kfree_sensitive(server->session_key.response);
+ server->session_key.response = NULL;
+ server->session_key.len = 0;
+- kfree(server->hostname);
+- server->hostname = NULL;
+
+ task = xchg(&server->tsk, NULL);
+ if (task)
+diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
+index aa1e65ccb61584..6caaa62d2b1f89 100644
+--- a/include/linux/hrtimer.h
++++ b/include/linux/hrtimer.h
+@@ -379,6 +379,7 @@ extern void __init hrtimers_init(void);
+ extern void sysrq_timer_list_show(void);
+
+ int hrtimers_prepare_cpu(unsigned int cpu);
++int hrtimers_cpu_starting(unsigned int cpu);
+ #ifdef CONFIG_HOTPLUG_CPU
+ int hrtimers_cpu_dying(unsigned int cpu);
+ #else
+diff --git a/include/linux/poll.h b/include/linux/poll.h
+index d1ea4f3714a848..fc641b50f1298e 100644
+--- a/include/linux/poll.h
++++ b/include/linux/poll.h
+@@ -41,8 +41,16 @@ typedef struct poll_table_struct {
+
+ static inline void poll_wait(struct file * filp, wait_queue_head_t * wait_address, poll_table *p)
+ {
+- if (p && p->_qproc && wait_address)
++ if (p && p->_qproc && wait_address) {
+ p->_qproc(filp, wait_address, p);
++ /*
++ * This memory barrier is paired in the wq_has_sleeper().
++ * See the comment above prepare_to_wait(), we need to
++ * ensure that subsequent tests in this thread can't be
++ * reordered with __add_wait_queue() in _qproc() paths.
++ */
++ smp_mb();
++ }
+ }
+
+ /*
+diff --git a/include/linux/pruss_driver.h b/include/linux/pruss_driver.h
+index c9a31c567e85bf..2e18fef1a2e109 100644
+--- a/include/linux/pruss_driver.h
++++ b/include/linux/pruss_driver.h
+@@ -144,32 +144,32 @@ static inline int pruss_release_mem_region(struct pruss *pruss,
+ static inline int pruss_cfg_get_gpmux(struct pruss *pruss,
+ enum pruss_pru_id pru_id, u8 *mux)
+ {
+- return ERR_PTR(-EOPNOTSUPP);
++ return -EOPNOTSUPP;
+ }
+
+ static inline int pruss_cfg_set_gpmux(struct pruss *pruss,
+ enum pruss_pru_id pru_id, u8 mux)
+ {
+- return ERR_PTR(-EOPNOTSUPP);
++ return -EOPNOTSUPP;
+ }
+
+ static inline int pruss_cfg_gpimode(struct pruss *pruss,
+ enum pruss_pru_id pru_id,
+ enum pruss_gpi_mode mode)
+ {
+- return ERR_PTR(-EOPNOTSUPP);
++ return -EOPNOTSUPP;
+ }
+
+ static inline int pruss_cfg_miirt_enable(struct pruss *pruss, bool enable)
+ {
+- return ERR_PTR(-EOPNOTSUPP);
++ return -EOPNOTSUPP;
+ }
+
+ static inline int pruss_cfg_xfr_enable(struct pruss *pruss,
+ enum pru_type pru_type,
+- bool enable);
++ bool enable)
+ {
+- return ERR_PTR(-EOPNOTSUPP);
++ return -EOPNOTSUPP;
+ }
+
+ #endif /* CONFIG_TI_PRUSS */
+diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
+index cb40f1a1d0811d..75342022d14414 100644
+--- a/include/linux/userfaultfd_k.h
++++ b/include/linux/userfaultfd_k.h
+@@ -247,6 +247,13 @@ static inline bool vma_can_userfault(struct vm_area_struct *vma,
+ vma_is_shmem(vma);
+ }
+
++static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma)
++{
++ struct userfaultfd_ctx *uffd_ctx = vma->vm_userfaultfd_ctx.ctx;
++
++ return uffd_ctx && (uffd_ctx->features & UFFD_FEATURE_EVENT_REMAP) == 0;
++}
++
+ extern int dup_userfaultfd(struct vm_area_struct *, struct list_head *);
+ extern void dup_userfaultfd_complete(struct list_head *);
+ void dup_userfaultfd_fail(struct list_head *);
+@@ -402,6 +409,11 @@ static inline bool userfaultfd_wp_async(struct vm_area_struct *vma)
+ return false;
+ }
+
++static inline bool vma_has_uffd_without_event_remap(struct vm_area_struct *vma)
++{
++ return false;
++}
++
+ #endif /* CONFIG_USERFAULTFD */
+
+ static inline bool userfaultfd_wp_use_markers(struct vm_area_struct *vma)
+diff --git a/include/net/page_pool/helpers.h b/include/net/page_pool/helpers.h
+index 793e6fd78bc5c0..60a5347922becc 100644
+--- a/include/net/page_pool/helpers.h
++++ b/include/net/page_pool/helpers.h
+@@ -294,7 +294,7 @@ static inline long page_pool_unref_page(struct page *page, long nr)
+
+ static inline void page_pool_ref_netmem(netmem_ref netmem)
+ {
+- atomic_long_inc(&netmem_to_page(netmem)->pp_ref_count);
++ atomic_long_inc(netmem_get_pp_ref_count_ref(netmem));
+ }
+
+ static inline void page_pool_ref_page(struct page *page)
+diff --git a/include/trace/events/mmflags.h b/include/trace/events/mmflags.h
+index bb8a59c6caa219..d36c857dd24974 100644
+--- a/include/trace/events/mmflags.h
++++ b/include/trace/events/mmflags.h
+@@ -13,6 +13,69 @@
+ * Thus most bits set go first.
+ */
+
++/* These define the values that are enums (the bits) */
++#define TRACE_GFP_FLAGS_GENERAL \
++ TRACE_GFP_EM(DMA) \
++ TRACE_GFP_EM(HIGHMEM) \
++ TRACE_GFP_EM(DMA32) \
++ TRACE_GFP_EM(MOVABLE) \
++ TRACE_GFP_EM(RECLAIMABLE) \
++ TRACE_GFP_EM(HIGH) \
++ TRACE_GFP_EM(IO) \
++ TRACE_GFP_EM(FS) \
++ TRACE_GFP_EM(ZERO) \
++ TRACE_GFP_EM(DIRECT_RECLAIM) \
++ TRACE_GFP_EM(KSWAPD_RECLAIM) \
++ TRACE_GFP_EM(WRITE) \
++ TRACE_GFP_EM(NOWARN) \
++ TRACE_GFP_EM(RETRY_MAYFAIL) \
++ TRACE_GFP_EM(NOFAIL) \
++ TRACE_GFP_EM(NORETRY) \
++ TRACE_GFP_EM(MEMALLOC) \
++ TRACE_GFP_EM(COMP) \
++ TRACE_GFP_EM(NOMEMALLOC) \
++ TRACE_GFP_EM(HARDWALL) \
++ TRACE_GFP_EM(THISNODE) \
++ TRACE_GFP_EM(ACCOUNT) \
++ TRACE_GFP_EM(ZEROTAGS)
++
++#ifdef CONFIG_KASAN_HW_TAGS
++# define TRACE_GFP_FLAGS_KASAN \
++ TRACE_GFP_EM(SKIP_ZERO) \
++ TRACE_GFP_EM(SKIP_KASAN)
++#else
++# define TRACE_GFP_FLAGS_KASAN
++#endif
++
++#ifdef CONFIG_LOCKDEP
++# define TRACE_GFP_FLAGS_LOCKDEP \
++ TRACE_GFP_EM(NOLOCKDEP)
++#else
++# define TRACE_GFP_FLAGS_LOCKDEP
++#endif
++
++#ifdef CONFIG_SLAB_OBJ_EXT
++# define TRACE_GFP_FLAGS_SLAB \
++ TRACE_GFP_EM(NO_OBJ_EXT)
++#else
++# define TRACE_GFP_FLAGS_SLAB
++#endif
++
++#define TRACE_GFP_FLAGS \
++ TRACE_GFP_FLAGS_GENERAL \
++ TRACE_GFP_FLAGS_KASAN \
++ TRACE_GFP_FLAGS_LOCKDEP \
++ TRACE_GFP_FLAGS_SLAB
++
++#undef TRACE_GFP_EM
++#define TRACE_GFP_EM(a) TRACE_DEFINE_ENUM(___GFP_##a##_BIT);
++
++TRACE_GFP_FLAGS
++
++/* Just in case these are ever used */
++TRACE_DEFINE_ENUM(___GFP_UNUSED_BIT);
++TRACE_DEFINE_ENUM(___GFP_LAST_BIT);
++
+ #define gfpflag_string(flag) {(__force unsigned long)flag, #flag}
+
+ #define __def_gfpflag_names \
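+
The TRACE_GFP_FLAGS block above is the X-macro idiom: the flag names are listed exactly once, and each consumer redefines TRACE_GFP_EM before expanding the list, here to emit one TRACE_DEFINE_ENUM() per GFP bit. A standalone sketch of the same idiom (names illustrative, not from the kernel):

    #include <stdio.h>

    #define COLOR_LIST \
        COLOR_EM(RED) \
        COLOR_EM(GREEN) \
        COLOR_EM(BLUE)

    /* First expansion: generate the enum. */
    #define COLOR_EM(c) COLOR_##c,
    enum color { COLOR_LIST COLOR_MAX };
    #undef COLOR_EM

    /* Second expansion: generate matching name strings. */
    #define COLOR_EM(c) #c,
    static const char *color_names[] = { COLOR_LIST };
    #undef COLOR_EM

    int main(void)
    {
        for (int i = 0; i < COLOR_MAX; i++)
            printf("%d = %s\n", i, color_names[i]);
        return 0;
    }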
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index d293d52a3e00e1..9ee6c9145b1df9 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -2179,7 +2179,7 @@ static struct cpuhp_step cpuhp_hp_states[] = {
+ },
+ [CPUHP_AP_HRTIMERS_DYING] = {
+ .name = "hrtimers:dying",
+- .startup.single = NULL,
++ .startup.single = hrtimers_cpu_starting,
+ .teardown.single = hrtimers_cpu_dying,
+ },
+ [CPUHP_AP_TICK_DYING] = {
+diff --git a/kernel/gen_kheaders.sh b/kernel/gen_kheaders.sh
+index 383fd43ac61222..7e1340da5acae6 100755
+--- a/kernel/gen_kheaders.sh
++++ b/kernel/gen_kheaders.sh
+@@ -89,6 +89,7 @@ find $cpio_dir -type f -print0 |
+
+ # Create archive and try to normalize metadata for reproducibility.
+ tar "${KBUILD_BUILD_TIMESTAMP:+--mtime=$KBUILD_BUILD_TIMESTAMP}" \
++ --exclude=".__afs*" --exclude=".nfs*" \
+ --owner=0 --group=0 --sort=name --numeric-owner --mode=u=rw,go=r,a+X \
+ -I $XZ -cf $tarfile -C $cpio_dir/ . > /dev/null
+
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index f928a67a07d29a..4c4681cb9337b4 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -2630,6 +2630,7 @@ static int balance_one(struct rq *rq, struct task_struct *prev)
+ {
+ struct scx_dsp_ctx *dspc = this_cpu_ptr(scx_dsp_ctx);
+ bool prev_on_scx = prev->sched_class == &ext_sched_class;
++ bool prev_on_rq = prev->scx.flags & SCX_TASK_QUEUED;
+ int nr_loops = SCX_DSP_MAX_LOOPS;
+
+ lockdep_assert_rq_held(rq);
+@@ -2662,8 +2663,7 @@ static int balance_one(struct rq *rq, struct task_struct *prev)
+ * See scx_ops_disable_workfn() for the explanation on the
+ * bypassing test.
+ */
+- if ((prev->scx.flags & SCX_TASK_QUEUED) &&
+- prev->scx.slice && !scx_rq_bypassing(rq)) {
++ if (prev_on_rq && prev->scx.slice && !scx_rq_bypassing(rq)) {
+ rq->scx.flags |= SCX_RQ_BAL_KEEP;
+ goto has_tasks;
+ }
+@@ -2696,6 +2696,10 @@ static int balance_one(struct rq *rq, struct task_struct *prev)
+
+ flush_dispatch_buf(rq);
+
++ if (prev_on_rq && prev->scx.slice) {
++ rq->scx.flags |= SCX_RQ_BAL_KEEP;
++ goto has_tasks;
++ }
+ if (rq->scx.local_dsq.nr)
+ goto has_tasks;
+ if (consume_global_dsq(rq))
+@@ -2721,8 +2725,7 @@ static int balance_one(struct rq *rq, struct task_struct *prev)
+ * Didn't find another task to run. Keep running @prev unless
+ * %SCX_OPS_ENQ_LAST is in effect.
+ */
+- if ((prev->scx.flags & SCX_TASK_QUEUED) &&
+- (!static_branch_unlikely(&scx_ops_enq_last) ||
++ if (prev_on_rq && (!static_branch_unlikely(&scx_ops_enq_last) ||
+ scx_rq_bypassing(rq))) {
+ rq->scx.flags |= SCX_RQ_BAL_KEEP;
+ goto has_tasks;
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 1ca96c99872f08..60be5f8bbe7115 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4065,7 +4065,11 @@ static void update_cfs_group(struct sched_entity *se)
+ struct cfs_rq *gcfs_rq = group_cfs_rq(se);
+ long shares;
+
+- if (!gcfs_rq)
++ /*
++ * When a group becomes empty, preserve its weight. This matters for
++ * DELAY_DEQUEUE.
++ */
++ if (!gcfs_rq || !gcfs_rq->load.weight)
+ return;
+
+ if (throttled_hierarchy(gcfs_rq))
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index cddcd08ea827f9..ee20f5032a0366 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -2156,6 +2156,15 @@ int hrtimers_prepare_cpu(unsigned int cpu)
+ }
+
+ cpu_base->cpu = cpu;
++ hrtimer_cpu_base_init_expiry_lock(cpu_base);
++ return 0;
++}
++
++int hrtimers_cpu_starting(unsigned int cpu)
++{
++ struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
++
++ /* Clear out any left over state from a CPU down operation */
+ cpu_base->active_bases = 0;
+ cpu_base->hres_active = 0;
+ cpu_base->hang_detected = 0;
+@@ -2164,7 +2173,6 @@ int hrtimers_prepare_cpu(unsigned int cpu)
+ cpu_base->expires_next = KTIME_MAX;
+ cpu_base->softirq_expires_next = KTIME_MAX;
+ cpu_base->online = 1;
+- hrtimer_cpu_base_init_expiry_lock(cpu_base);
+ return 0;
+ }
+
+@@ -2240,6 +2248,7 @@ int hrtimers_cpu_dying(unsigned int dying_cpu)
+ void __init hrtimers_init(void)
+ {
+ hrtimers_prepare_cpu(smp_processor_id());
++ hrtimers_cpu_starting(smp_processor_id());
+ open_softirq(HRTIMER_SOFTIRQ, hrtimer_run_softirq);
+ }
+
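Taken together with the kernel/cpu.c hunk above, the hrtimer fix splits per-CPU bring-up in two: hrtimers_prepare_cpu() keeps the one-time initialization, while the new hrtimers_cpu_starting() clears state left over from a previous CPU-down and is registered as the startup side of CPUHP_AP_HRTIMERS_DYING, whose teardown side (hrtimers_cpu_dying) already existed; the boot CPU calls both by hand in hrtimers_init(). A minimal sketch of registering such a paired state through the dynamic hotplug API (callback bodies hypothetical):

    #include <linux/cpuhotplug.h>

    static int demo_cpu_online(unsigned int cpu)
    {
        /* re-initialize per-CPU state; runs on each CPU coming up */
        return 0;
    }

    static int demo_cpu_offline(unsigned int cpu)
    {
        /* migrate pending work away; runs on each CPU going down */
        return 0;
    }

    static int __init demo_init(void)
    {
        int ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "demo:online",
                                    demo_cpu_online, demo_cpu_offline);
        return ret < 0 ? ret : 0;
    }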
+diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
+index 8d57f7686bb03a..371a62a749aad3 100644
+--- a/kernel/time/timer_migration.c
++++ b/kernel/time/timer_migration.c
+@@ -534,8 +534,13 @@ static void __walk_groups(up_f up, struct tmigr_walk *data,
+ break;
+
+ child = group;
+- group = group->parent;
++ /*
++ * Pairs with the store release on group connection
++ * to make sure group initialization is visible.
++ */
++ group = READ_ONCE(group->parent);
+ data->childmask = child->groupmask;
++ WARN_ON_ONCE(!data->childmask);
+ } while (group);
+ }
+
+@@ -1487,6 +1492,21 @@ static void tmigr_init_group(struct tmigr_group *group, unsigned int lvl,
+ s.seq = 0;
+ atomic_set(&group->migr_state, s.state);
+
++ /*
++ * If this is a new top-level, prepare its groupmask in advance.
++ * This avoids accidents where yet another new top-level is
++ * created in the future and made visible before the current groupmask.
++ */
++ if (list_empty(&tmigr_level_list[lvl])) {
++ group->groupmask = BIT(0);
++ /*
++ * The previous top level has prepared its groupmask already,
++ * simply account it as the first child.
++ */
++ if (lvl > 0)
++ group->num_children = 1;
++ }
++
+ timerqueue_init_head(&group->events);
+ timerqueue_init(&group->groupevt.nextevt);
+ group->groupevt.nextevt.expires = KTIME_MAX;
+@@ -1550,8 +1570,25 @@ static void tmigr_connect_child_parent(struct tmigr_group *child,
+ raw_spin_lock_irq(&child->lock);
+ raw_spin_lock_nested(&parent->lock, SINGLE_DEPTH_NESTING);
+
+- child->parent = parent;
+- child->groupmask = BIT(parent->num_children++);
++ if (activate) {
++ /*
++ * @child is the old top and @parent the new one. In this
++ * case groupmask is pre-initialized and @child already
++ * accounted, along with its new sibling corresponding to the
++ * CPU going up.
++ */
++ WARN_ON_ONCE(child->groupmask != BIT(0) || parent->num_children != 2);
++ } else {
++ /* Adding @child for the CPU going up to @parent. */
++ child->groupmask = BIT(parent->num_children++);
++ }
++
++ /*
++ * Make sure parent initialization is visible before publishing it to a
++ * racing CPU entering/exiting idle. This RELEASE barrier enforces an
++ * address dependency that pairs with the READ_ONCE() in __walk_groups().
++ */
++ smp_store_release(&child->parent, parent);
+
+ raw_spin_unlock(&parent->lock);
+ raw_spin_unlock_irq(&child->lock);
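+
The timer-migration hunks apply the publish/consume discipline the new comments describe: initialize the parent group completely, then publish the pointer with smp_store_release(); __walk_groups() loads it with READ_ONCE(), and the address dependency through the loaded pointer orders the subsequent field reads. Stripped to its skeleton:

    struct grp {
        unsigned long mask;
        struct grp *parent;
    };

    /* Writer: finish initialization, then publish with release order. */
    static void connect(struct grp *child, struct grp *parent)
    {
        parent->mask = 1UL;                         /* init first */
        smp_store_release(&child->parent, parent);  /* then publish */
    }

    /* Reader: READ_ONCE() pairs with the release store, so the
     * dereference cannot observe a half-initialized parent. */
    static unsigned long walk(struct grp *child)
    {
        struct grp *p = READ_ONCE(child->parent);

        return p ? p->mask : 0;
    }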
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 56fa431c52af7b..dc83baab85a140 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -3004,7 +3004,7 @@ static inline loff_t folio_seek_hole_data(struct xa_state *xas,
+ if (ops->is_partially_uptodate(folio, offset, bsz) ==
+ seek_data)
+ break;
+- start = (start + bsz) & ~(bsz - 1);
++ start = (start + bsz) & ~((u64)bsz - 1);
+ offset += bsz;
+ } while (offset < folio_size(folio));
+ unlock:
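+
The filemap.c one-liner fixes an integer-width trap: the block size is 32-bit, so ~(bsz - 1) is a 32-bit mask that gets zero-extended when combined with the 64-bit file position, silently clearing the upper 32 bits of any offset past 4 GiB. Widening before inverting keeps the high bits of the mask set. A userspace demonstration:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t start = 0x100000123ULL;    /* a position above 4 GiB */
        unsigned int bsz = 4096;

        /* Buggy: 32-bit mask zero-extends, wiping the upper half. */
        uint64_t bad  = (start + bsz) & ~(bsz - 1);
        /* Fixed: widen first so the inverted mask keeps its high bits. */
        uint64_t good = (start + bsz) & ~((uint64_t)bsz - 1);

        printf("bad  = %#llx\n", (unsigned long long)bad);   /* 0x1000 */
        printf("good = %#llx\n", (unsigned long long)good);  /* 0x100001000 */
        return 0;
    }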
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 7e0f72cd9fd4a0..f127b61f04a825 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -2132,6 +2132,16 @@ static pmd_t move_soft_dirty_pmd(pmd_t pmd)
+ return pmd;
+ }
+
++static pmd_t clear_uffd_wp_pmd(pmd_t pmd)
++{
++ if (pmd_present(pmd))
++ pmd = pmd_clear_uffd_wp(pmd);
++ else if (is_swap_pmd(pmd))
++ pmd = pmd_swp_clear_uffd_wp(pmd);
++
++ return pmd;
++}
++
+ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ unsigned long new_addr, pmd_t *old_pmd, pmd_t *new_pmd)
+ {
+@@ -2170,6 +2180,8 @@ bool move_huge_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ pgtable_trans_huge_deposit(mm, new_pmd, pgtable);
+ }
+ pmd = move_soft_dirty_pmd(pmd);
++ if (vma_has_uffd_without_event_remap(vma))
++ pmd = clear_uffd_wp_pmd(pmd);
+ set_pmd_at(mm, new_addr, new_pmd, pmd);
+ if (force_flush)
+ flush_pmd_tlb_range(vma, old_addr, old_addr + PMD_SIZE);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 2fa87b9ecec6c7..4a8a4f3535caf7 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5395,6 +5395,7 @@ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
+ unsigned long new_addr, pte_t *src_pte, pte_t *dst_pte,
+ unsigned long sz)
+ {
++ bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma);
+ struct hstate *h = hstate_vma(vma);
+ struct mm_struct *mm = vma->vm_mm;
+ spinlock_t *src_ptl, *dst_ptl;
+@@ -5411,7 +5412,18 @@ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
+ spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
+
+ pte = huge_ptep_get_and_clear(mm, old_addr, src_pte);
+- set_huge_pte_at(mm, new_addr, dst_pte, pte, sz);
++
++ if (need_clear_uffd_wp && pte_marker_uffd_wp(pte))
++ huge_pte_clear(mm, new_addr, dst_pte, sz);
++ else {
++ if (need_clear_uffd_wp) {
++ if (pte_present(pte))
++ pte = huge_pte_clear_uffd_wp(pte);
++ else if (is_swap_pte(pte))
++ pte = pte_swp_clear_uffd_wp(pte);
++ }
++ set_huge_pte_at(mm, new_addr, dst_pte, pte, sz);
++ }
+
+ if (src_ptl != dst_ptl)
+ spin_unlock(src_ptl);
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 74f5f4c51ab8c8..5f878ee05ff80b 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -1071,7 +1071,7 @@ void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
+ pr_debug("%s(0x%px, %zu)\n", __func__, ptr, size);
+
+ if (kmemleak_enabled && ptr && !IS_ERR_PCPU(ptr))
+- create_object_percpu((__force unsigned long)ptr, size, 0, gfp);
++ create_object_percpu((__force unsigned long)ptr, size, 1, gfp);
+ }
+ EXPORT_SYMBOL_GPL(kmemleak_alloc_percpu);
+
+diff --git a/mm/mremap.c b/mm/mremap.c
+index dee98ff2bbd644..1b2edd65c2a172 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -138,6 +138,7 @@ static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
+ struct vm_area_struct *new_vma, pmd_t *new_pmd,
+ unsigned long new_addr, bool need_rmap_locks)
+ {
++ bool need_clear_uffd_wp = vma_has_uffd_without_event_remap(vma);
+ struct mm_struct *mm = vma->vm_mm;
+ pte_t *old_pte, *new_pte, pte;
+ spinlock_t *old_ptl, *new_ptl;
+@@ -207,7 +208,18 @@ static int move_ptes(struct vm_area_struct *vma, pmd_t *old_pmd,
+ force_flush = true;
+ pte = move_pte(pte, old_addr, new_addr);
+ pte = move_soft_dirty_pte(pte);
+- set_pte_at(mm, new_addr, new_pte, pte);
++
++ if (need_clear_uffd_wp && pte_marker_uffd_wp(pte))
++ pte_clear(mm, new_addr, new_pte);
++ else {
++ if (need_clear_uffd_wp) {
++ if (pte_present(pte))
++ pte = pte_clear_uffd_wp(pte);
++ else if (is_swap_pte(pte))
++ pte = pte_swp_clear_uffd_wp(pte);
++ }
++ set_pte_at(mm, new_addr, new_pte, pte);
++ }
+ }
+
+ arch_leave_lazy_mmu_mode();
+@@ -269,6 +281,15 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
+ if (WARN_ON_ONCE(!pmd_none(*new_pmd)))
+ return false;
+
++ /* If this pmd belongs to a uffd vma with remap events disabled, we need
++ * to ensure that the uffd-wp state is cleared from all pgtables. This
++ * means recursing into lower page tables in move_page_tables(), and we
++ * can reuse the existing code if we simply treat the entry as "not
++ * moved".
++ */
++ if (vma_has_uffd_without_event_remap(vma))
++ return false;
++
+ /*
+ * We don't have to worry about the ordering of src and dst
+ * ptlocks because exclusive mmap_lock prevents deadlock.
+@@ -324,6 +345,15 @@ static bool move_normal_pud(struct vm_area_struct *vma, unsigned long old_addr,
+ if (WARN_ON_ONCE(!pud_none(*new_pud)))
+ return false;
+
++ /* If this pud belongs to a uffd vma with remap events disabled, we need
++ * to ensure that the uffd-wp state is cleared from all pgtables. This
++ * means recursing into lower page tables in move_page_tables(), and we
++ * can reuse the existing code if we simply treat the entry as "not
++ * moved".
++ */
++ if (vma_has_uffd_without_event_remap(vma))
++ return false;
++
+ /*
+ * We don't have to worry about the ordering of src and dst
+ * ptlocks because exclusive mmap_lock prevents deadlock.
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 67a680e4b484d7..d81d667907448c 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -4637,6 +4637,9 @@ static int evict_folios(struct lruvec *lruvec, struct scan_control *sc, int swap
+ reset_batch_size(walk);
+ }
+
++ __mod_lruvec_state(lruvec, PGDEMOTE_KSWAPD + reclaimer_offset(),
++ stat.nr_demoted);
++
+ item = PGSTEAL_KSWAPD + reclaimer_offset();
+ if (!cgroup_reclaim(sc))
+ __count_vm_events(item, reclaimed);
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 54a53fae9e98f5..46da488ff0703f 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -11263,6 +11263,7 @@ BPF_CALL_4(sk_select_reuseport, struct sk_reuseport_kern *, reuse_kern,
+ bool is_sockarray = map->map_type == BPF_MAP_TYPE_REUSEPORT_SOCKARRAY;
+ struct sock_reuseport *reuse;
+ struct sock *selected_sk;
++ int err;
+
+ selected_sk = map->ops->map_lookup_elem(map, key);
+ if (!selected_sk)
+@@ -11270,10 +11271,6 @@ BPF_CALL_4(sk_select_reuseport, struct sk_reuseport_kern *, reuse_kern,
+
+ reuse = rcu_dereference(selected_sk->sk_reuseport_cb);
+ if (!reuse) {
+- /* Lookup in sock_map can return TCP ESTABLISHED sockets. */
+- if (sk_is_refcounted(selected_sk))
+- sock_put(selected_sk);
+-
+ /* reuseport_array has only sk with non NULL sk_reuseport_cb.
+ * The only (!reuse) case here is - the sk has already been
+ * unhashed (e.g. by close()), so treat it as -ENOENT.
+@@ -11281,24 +11278,33 @@ BPF_CALL_4(sk_select_reuseport, struct sk_reuseport_kern *, reuse_kern,
+ * Other maps (e.g. sock_map) do not provide this guarantee and
+ * the sk may never be in the reuseport group to begin with.
+ */
+- return is_sockarray ? -ENOENT : -EINVAL;
++ err = is_sockarray ? -ENOENT : -EINVAL;
++ goto error;
+ }
+
+ if (unlikely(reuse->reuseport_id != reuse_kern->reuseport_id)) {
+ struct sock *sk = reuse_kern->sk;
+
+- if (sk->sk_protocol != selected_sk->sk_protocol)
+- return -EPROTOTYPE;
+- else if (sk->sk_family != selected_sk->sk_family)
+- return -EAFNOSUPPORT;
+-
+- /* Catch all. Likely bound to a different sockaddr. */
+- return -EBADFD;
++ if (sk->sk_protocol != selected_sk->sk_protocol) {
++ err = -EPROTOTYPE;
++ } else if (sk->sk_family != selected_sk->sk_family) {
++ err = -EAFNOSUPPORT;
++ } else {
++ /* Catch all. Likely bound to a different sockaddr. */
++ err = -EBADFD;
++ }
++ goto error;
+ }
+
+ reuse_kern->selected_sk = selected_sk;
+
+ return 0;
++error:
++ /* Lookup in sock_map can return TCP ESTABLISHED sockets. */
++ if (sk_is_refcounted(selected_sk))
++ sock_put(selected_sk);
++
++ return err;
+ }
+
+ static const struct bpf_func_proto sk_select_reuseport_proto = {
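+
The sk_select_reuseport() rework is the canonical goto-error refactor: the sock_put() that previously sat on a single early-return path now lives on one shared exit label, so every failure path, including the two added error cases, drops the lookup reference. The generic shape (the predicate names are hypothetical):

    static int lookup_and_check(struct sock *sk)
    {
        int err;

        if (!precondition(sk)) {        /* hypothetical check */
            err = -ENOENT;
            goto error;
        }
        if (!compatible(sk)) {          /* hypothetical check */
            err = -EPROTOTYPE;
            goto error;
        }
        return 0;

    error:
        /* one place owns the cleanup, so no path can leak the ref */
        if (sk_is_refcounted(sk))
            sock_put(sk);
        return err;
    }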
+diff --git a/net/core/netdev-genl-gen.c b/net/core/netdev-genl-gen.c
+index b28424ae06d5fa..8614988fc67b9a 100644
+--- a/net/core/netdev-genl-gen.c
++++ b/net/core/netdev-genl-gen.c
+@@ -178,6 +178,16 @@ static const struct genl_multicast_group netdev_nl_mcgrps[] = {
+ [NETDEV_NLGRP_PAGE_POOL] = { "page-pool", },
+ };
+
++static void __netdev_nl_sock_priv_init(void *priv)
++{
++ netdev_nl_sock_priv_init(priv);
++}
++
++static void __netdev_nl_sock_priv_destroy(void *priv)
++{
++ netdev_nl_sock_priv_destroy(priv);
++}
++
+ struct genl_family netdev_nl_family __ro_after_init = {
+ .name = NETDEV_FAMILY_NAME,
+ .version = NETDEV_FAMILY_VERSION,
+@@ -189,6 +199,6 @@ struct genl_family netdev_nl_family __ro_after_init = {
+ .mcgrps = netdev_nl_mcgrps,
+ .n_mcgrps = ARRAY_SIZE(netdev_nl_mcgrps),
+ .sock_priv_size = sizeof(struct list_head),
+- .sock_priv_init = (void *)netdev_nl_sock_priv_init,
+- .sock_priv_destroy = (void *)netdev_nl_sock_priv_destroy,
++ .sock_priv_init = __netdev_nl_sock_priv_init,
++ .sock_priv_destroy = __netdev_nl_sock_priv_destroy,
+ };
+diff --git a/net/core/pktgen.c b/net/core/pktgen.c
+index 34f68ef74b8f2c..b6db4910359bb5 100644
+--- a/net/core/pktgen.c
++++ b/net/core/pktgen.c
+@@ -851,6 +851,9 @@ static ssize_t get_imix_entries(const char __user *buffer,
+ unsigned long weight;
+ unsigned long size;
+
++ if (pkt_dev->n_imix_entries >= MAX_IMIX_ENTRIES)
++ return -E2BIG;
++
+ len = num_arg(&buffer[i], max_digits, &size);
+ if (len < 0)
+ return len;
+@@ -880,9 +883,6 @@ static ssize_t get_imix_entries(const char __user *buffer,
+
+ i++;
+ pkt_dev->n_imix_entries++;
+-
+- if (pkt_dev->n_imix_entries > MAX_IMIX_ENTRIES)
+- return -E2BIG;
+ } while (c == ' ');
+
+ return i;
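+
The pktgen fix moves the capacity check from after the increment to before the write: the old loop stored into the imix array first and only then noticed the count had exceeded MAX_IMIX_ENTRIES, a one-element out-of-bounds write. Rejecting with -E2BIG at the top of the iteration keeps the index provably in range. The shape of the fix (struct names hypothetical):

    #define MAX_ENTRIES 20

    static int add_entry(struct tbl *t, struct ent e)
    {
        if (t->n >= MAX_ENTRIES)    /* check before touching memory */
            return -E2BIG;
        t->entries[t->n++] = e;     /* write is now in bounds */
        return 0;
    }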
+diff --git a/net/mac802154/iface.c b/net/mac802154/iface.c
+index c0e2da5072bea2..9e4631fade90c9 100644
+--- a/net/mac802154/iface.c
++++ b/net/mac802154/iface.c
+@@ -684,6 +684,10 @@ void ieee802154_if_remove(struct ieee802154_sub_if_data *sdata)
+ ASSERT_RTNL();
+
+ mutex_lock(&sdata->local->iflist_mtx);
++ if (list_empty(&sdata->local->interfaces)) {
++ mutex_unlock(&sdata->local->iflist_mtx);
++ return;
++ }
+ list_del_rcu(&sdata->list);
+ mutex_unlock(&sdata->local->iflist_mtx);
+
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index a62bc874bf1e17..123f3f2972841a 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -607,7 +607,6 @@ static bool mptcp_established_options_dss(struct sock *sk, struct sk_buff *skb,
+ }
+ opts->ext_copy.use_ack = 1;
+ opts->suboptions = OPTION_MPTCP_DSS;
+- WRITE_ONCE(msk->old_wspace, __mptcp_space((struct sock *)msk));
+
+ /* Add kind/length/subtype/flag overhead if mapping is not populated */
+ if (dss_size == 0)
+@@ -1288,7 +1287,7 @@ static void mptcp_set_rwin(struct tcp_sock *tp, struct tcphdr *th)
+ }
+ MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_RCVWNDCONFLICT);
+ }
+- return;
++ goto update_wspace;
+ }
+
+ if (rcv_wnd_new != rcv_wnd_old) {
+@@ -1313,6 +1312,9 @@ static void mptcp_set_rwin(struct tcp_sock *tp, struct tcphdr *th)
+ th->window = htons(new_win);
+ MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_RCVWNDSHARED);
+ }
++
++update_wspace:
++ WRITE_ONCE(msk->old_wspace, tp->rcv_wnd);
+ }
+
+ __sum16 __mptcp_make_csum(u64 data_seq, u32 subflow_seq, u16 data_len, __wsum sum)
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index a93e661ef5c435..73526f1d768fcb 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -760,10 +760,15 @@ static inline u64 mptcp_data_avail(const struct mptcp_sock *msk)
+
+ static inline bool mptcp_epollin_ready(const struct sock *sk)
+ {
++ u64 data_avail = mptcp_data_avail(mptcp_sk(sk));
++
++ if (!data_avail)
++ return false;
++
+ /* mptcp doesn't have to deal with small skbs in the receive queue,
+- * at it can always coalesce them
++ * as it can always coalesce them
+ */
+- return (mptcp_data_avail(mptcp_sk(sk)) >= sk->sk_rcvlowat) ||
++ return (data_avail >= sk->sk_rcvlowat) ||
+ (mem_cgroup_sockets_enabled && sk->sk_memcg &&
+ mem_cgroup_under_socket_pressure(sk->sk_memcg)) ||
+ READ_ONCE(tcp_memory_pressure);
+diff --git a/net/ncsi/internal.h b/net/ncsi/internal.h
+index ef0f8f73826f53..4e0842df5234ea 100644
+--- a/net/ncsi/internal.h
++++ b/net/ncsi/internal.h
+@@ -289,6 +289,7 @@ enum {
+ ncsi_dev_state_config_sp = 0x0301,
+ ncsi_dev_state_config_cis,
+ ncsi_dev_state_config_oem_gma,
++ ncsi_dev_state_config_apply_mac,
+ ncsi_dev_state_config_clear_vids,
+ ncsi_dev_state_config_svf,
+ ncsi_dev_state_config_ev,
+@@ -322,6 +323,7 @@ struct ncsi_dev_priv {
+ #define NCSI_DEV_RESHUFFLE 4
+ #define NCSI_DEV_RESET 8 /* Reset state of NC */
+ unsigned int gma_flag; /* OEM GMA flag */
++ struct sockaddr pending_mac; /* MAC address received from GMA */
+ spinlock_t lock; /* Protect the NCSI device */
+ unsigned int package_probe_id;/* Current ID during probe */
+ unsigned int package_num; /* Number of packages */
+diff --git a/net/ncsi/ncsi-manage.c b/net/ncsi/ncsi-manage.c
+index 5cf55bde366d18..bf276eaf933075 100644
+--- a/net/ncsi/ncsi-manage.c
++++ b/net/ncsi/ncsi-manage.c
+@@ -1038,7 +1038,7 @@ static void ncsi_configure_channel(struct ncsi_dev_priv *ndp)
+ : ncsi_dev_state_config_clear_vids;
+ break;
+ case ncsi_dev_state_config_oem_gma:
+- nd->state = ncsi_dev_state_config_clear_vids;
++ nd->state = ncsi_dev_state_config_apply_mac;
+
+ nca.package = np->id;
+ nca.channel = nc->id;
+@@ -1050,10 +1050,22 @@ static void ncsi_configure_channel(struct ncsi_dev_priv *ndp)
+ nca.type = NCSI_PKT_CMD_OEM;
+ ret = ncsi_gma_handler(&nca, nc->version.mf_id);
+ }
+- if (ret < 0)
++ if (ret < 0) {
++ nd->state = ncsi_dev_state_config_clear_vids;
+ schedule_work(&ndp->work);
++ }
+
+ break;
++ case ncsi_dev_state_config_apply_mac:
++ rtnl_lock();
++ ret = dev_set_mac_address(dev, &ndp->pending_mac, NULL);
++ rtnl_unlock();
++ if (ret < 0)
++ netdev_warn(dev, "NCSI: 'Writing MAC address to device failed\n");
++
++ nd->state = ncsi_dev_state_config_clear_vids;
++
++ fallthrough;
+ case ncsi_dev_state_config_clear_vids:
+ case ncsi_dev_state_config_svf:
+ case ncsi_dev_state_config_ev:
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index e28be33bdf2c48..14bd66909ca455 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -628,16 +628,14 @@ static int ncsi_rsp_handler_snfc(struct ncsi_request *nr)
+ static int ncsi_rsp_handler_oem_gma(struct ncsi_request *nr, int mfr_id)
+ {
+ struct ncsi_dev_priv *ndp = nr->ndp;
++ struct sockaddr *saddr = &ndp->pending_mac;
+ struct net_device *ndev = ndp->ndev.dev;
+ struct ncsi_rsp_oem_pkt *rsp;
+- struct sockaddr saddr;
+ u32 mac_addr_off = 0;
+- int ret = 0;
+
+ /* Get the response header */
+ rsp = (struct ncsi_rsp_oem_pkt *)skb_network_header(nr->rsp);
+
+- saddr.sa_family = ndev->type;
+ ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ if (mfr_id == NCSI_OEM_MFR_BCM_ID)
+ mac_addr_off = BCM_MAC_ADDR_OFFSET;
+@@ -646,22 +644,17 @@ static int ncsi_rsp_handler_oem_gma(struct ncsi_request *nr, int mfr_id)
+ else if (mfr_id == NCSI_OEM_MFR_INTEL_ID)
+ mac_addr_off = INTEL_MAC_ADDR_OFFSET;
+
+- memcpy(saddr.sa_data, &rsp->data[mac_addr_off], ETH_ALEN);
++ saddr->sa_family = ndev->type;
++ memcpy(saddr->sa_data, &rsp->data[mac_addr_off], ETH_ALEN);
+ if (mfr_id == NCSI_OEM_MFR_BCM_ID || mfr_id == NCSI_OEM_MFR_INTEL_ID)
+- eth_addr_inc((u8 *)saddr.sa_data);
+- if (!is_valid_ether_addr((const u8 *)saddr.sa_data))
++ eth_addr_inc((u8 *)saddr->sa_data);
++ if (!is_valid_ether_addr((const u8 *)saddr->sa_data))
+ return -ENXIO;
+
+ /* Set the flag for GMA command which should only be called once */
+ ndp->gma_flag = 1;
+
+- rtnl_lock();
+- ret = dev_set_mac_address(ndev, &saddr, NULL);
+- rtnl_unlock();
+- if (ret < 0)
+- netdev_warn(ndev, "NCSI: 'Writing mac address to device failed\n");
+-
+- return ret;
++ return 0;
+ }
+
+ /* Response handler for Mellanox card */
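+
The NCSI hunks split "learn the MAC" from "apply the MAC": the OEM GMA response handler now only stashes the address in ndp->pending_mac, and the new ncsi_dev_state_config_apply_mac state performs the rtnl-locked dev_set_mac_address() later from the configuration state machine, keeping the sleeping lock out of the response path and falling through to VID clearing either way. The pattern in miniature (struct name hypothetical):

    /* Response context: record only, take no locks. */
    static void on_gma_response(struct priv *ndp, const u8 *mac)
    {
        ndp->pending_mac.sa_family = ndp->dev->type;
        memcpy(ndp->pending_mac.sa_data, mac, ETH_ALEN);
    }

    /* State-machine context: apply under rtnl, warn on failure. */
    static void apply_pending_mac(struct priv *ndp)
    {
        rtnl_lock();
        if (dev_set_mac_address(ndp->dev, &ndp->pending_mac, NULL) < 0)
            netdev_warn(ndp->dev, "failed to write MAC address\n");
        rtnl_unlock();
    }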
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 16e26001468449..704c858cf2093b 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -934,7 +934,9 @@ static void do_output(struct datapath *dp, struct sk_buff *skb, int out_port,
+ {
+ struct vport *vport = ovs_vport_rcu(dp, out_port);
+
+- if (likely(vport && netif_carrier_ok(vport->dev))) {
++ if (likely(vport &&
++ netif_running(vport->dev) &&
++ netif_carrier_ok(vport->dev))) {
+ u16 mru = OVS_CB(skb)->mru;
+ u32 cutlen = OVS_CB(skb)->cutlen;
+
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index b52b798aa4c292..15724f171b0f96 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -491,6 +491,15 @@ int vsock_assign_transport(struct vsock_sock *vsk, struct vsock_sock *psk)
+ */
+ vsk->transport->release(vsk);
+ vsock_deassign_transport(vsk);
++
++ /* transport's release() and destruct() can touch some socket
++ * state, since we are reassigning the socket to a new transport
++ * during vsock_connect(), let's reset these fields to have a
++ * clean state.
++ */
++ sock_reset_flag(sk, SOCK_DONE);
++ sk->sk_state = TCP_CLOSE;
++ vsk->peer_shutdown = 0;
+ }
+
+ /* We increase the module refcnt to prevent the transport unloading
+@@ -870,6 +879,9 @@ EXPORT_SYMBOL_GPL(vsock_create_connected);
+
+ s64 vsock_stream_has_data(struct vsock_sock *vsk)
+ {
++ if (WARN_ON(!vsk->transport))
++ return 0;
++
+ return vsk->transport->stream_has_data(vsk);
+ }
+ EXPORT_SYMBOL_GPL(vsock_stream_has_data);
+@@ -878,6 +890,9 @@ s64 vsock_connectible_has_data(struct vsock_sock *vsk)
+ {
+ struct sock *sk = sk_vsock(vsk);
+
++ if (WARN_ON(!vsk->transport))
++ return 0;
++
+ if (sk->sk_type == SOCK_SEQPACKET)
+ return vsk->transport->seqpacket_has_data(vsk);
+ else
+@@ -887,6 +902,9 @@ EXPORT_SYMBOL_GPL(vsock_connectible_has_data);
+
+ s64 vsock_stream_has_space(struct vsock_sock *vsk)
+ {
++ if (WARN_ON(!vsk->transport))
++ return 0;
++
+ return vsk->transport->stream_has_space(vsk);
+ }
+ EXPORT_SYMBOL_GPL(vsock_stream_has_space);
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index 9acc13ab3f822d..7f7de6d8809655 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -26,6 +26,9 @@
+ /* Threshold for detecting small packets to copy */
+ #define GOOD_COPY_LEN 128
+
++static void virtio_transport_cancel_close_work(struct vsock_sock *vsk,
++ bool cancel_timeout);
++
+ static const struct virtio_transport *
+ virtio_transport_get_ops(struct vsock_sock *vsk)
+ {
+@@ -1109,6 +1112,8 @@ void virtio_transport_destruct(struct vsock_sock *vsk)
+ {
+ struct virtio_vsock_sock *vvs = vsk->trans;
+
++ virtio_transport_cancel_close_work(vsk, true);
++
+ kfree(vvs);
+ vsk->trans = NULL;
+ }
+@@ -1204,17 +1209,11 @@ static void virtio_transport_wait_close(struct sock *sk, long timeout)
+ }
+ }
+
+-static void virtio_transport_do_close(struct vsock_sock *vsk,
+- bool cancel_timeout)
++static void virtio_transport_cancel_close_work(struct vsock_sock *vsk,
++ bool cancel_timeout)
+ {
+ struct sock *sk = sk_vsock(vsk);
+
+- sock_set_flag(sk, SOCK_DONE);
+- vsk->peer_shutdown = SHUTDOWN_MASK;
+- if (vsock_stream_has_data(vsk) <= 0)
+- sk->sk_state = TCP_CLOSING;
+- sk->sk_state_change(sk);
+-
+ if (vsk->close_work_scheduled &&
+ (!cancel_timeout || cancel_delayed_work(&vsk->close_work))) {
+ vsk->close_work_scheduled = false;
+@@ -1226,6 +1225,20 @@ static void virtio_transport_do_close(struct vsock_sock *vsk,
+ }
+ }
+
++static void virtio_transport_do_close(struct vsock_sock *vsk,
++ bool cancel_timeout)
++{
++ struct sock *sk = sk_vsock(vsk);
++
++ sock_set_flag(sk, SOCK_DONE);
++ vsk->peer_shutdown = SHUTDOWN_MASK;
++ if (vsock_stream_has_data(vsk) <= 0)
++ sk->sk_state = TCP_CLOSING;
++ sk->sk_state_change(sk);
++
++ virtio_transport_cancel_close_work(vsk, cancel_timeout);
++}
++
+ static void virtio_transport_close_timeout(struct work_struct *work)
+ {
+ struct vsock_sock *vsk =
+@@ -1628,8 +1641,11 @@ void virtio_transport_recv_pkt(struct virtio_transport *t,
+
+ lock_sock(sk);
+
+- /* Check if sk has been closed before lock_sock */
+- if (sock_flag(sk, SOCK_DONE)) {
++ /* Check if sk has been closed or assigned to another transport before
++ * lock_sock (note: listener sockets are not assigned to any transport)
++ */
++ if (sock_flag(sk, SOCK_DONE) ||
++ (sk->sk_state != TCP_LISTEN && vsk->transport != &t->transport)) {
+ (void)virtio_transport_reset_no_sock(t, skb);
+ release_sock(sk);
+ sock_put(sk);
+diff --git a/net/vmw_vsock/vsock_bpf.c b/net/vmw_vsock/vsock_bpf.c
+index 4aa6e74ec2957b..f201d9eca1df2f 100644
+--- a/net/vmw_vsock/vsock_bpf.c
++++ b/net/vmw_vsock/vsock_bpf.c
+@@ -77,6 +77,7 @@ static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
+ size_t len, int flags, int *addr_len)
+ {
+ struct sk_psock *psock;
++ struct vsock_sock *vsk;
+ int copied;
+
+ psock = sk_psock_get(sk);
+@@ -84,6 +85,13 @@ static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
+ return __vsock_recvmsg(sk, msg, len, flags);
+
+ lock_sock(sk);
++ vsk = vsock_sk(sk);
++
++ if (!vsk->transport) {
++ copied = -ENODEV;
++ goto out;
++ }
++
+ if (vsock_has_data(sk, psock) && sk_psock_queue_empty(psock)) {
+ release_sock(sk);
+ sk_psock_put(sk, psock);
+@@ -108,6 +116,7 @@ static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
+ copied = sk_msg_recvmsg(sk, psock, msg, len, flags);
+ }
+
++out:
+ release_sock(sk);
+ sk_psock_put(sk, psock);
+
+diff --git a/security/apparmor/policy.c b/security/apparmor/policy.c
+index 14df15e3569525..105706abf281b2 100644
+--- a/security/apparmor/policy.c
++++ b/security/apparmor/policy.c
+@@ -626,6 +626,7 @@ struct aa_profile *aa_alloc_null(struct aa_profile *parent, const char *name,
+
+ /* TODO: ideally we should inherit abi from parent */
+ profile->label.flags |= FLAG_NULL;
++ profile->attach.xmatch = aa_get_pdb(nullpdb);
+ rules = list_first_entry(&profile->rules, typeof(*rules), list);
+ rules->file = aa_get_pdb(nullpdb);
+ rules->policy = aa_get_pdb(nullpdb);
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 3ed82f98e2de9e..a9f6138b59b0c1 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10625,6 +10625,8 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1e1f, "ASUS Vivobook 15 X1504VAP", ALC2XX_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1e51, "ASUS Zephyrus M15", ALC294_FIXUP_ASUS_GU502_PINS),
+ SND_PCI_QUIRK(0x1043, 0x1e5e, "ASUS ROG Strix G513", ALC294_FIXUP_ASUS_G513_PINS),
++ SND_PCI_QUIRK(0x1043, 0x1e63, "ASUS H7606W", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1),
++ SND_PCI_QUIRK(0x1043, 0x1e83, "ASUS GA605W", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1),
+ SND_PCI_QUIRK(0x1043, 0x1e8e, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x1eb3, "ASUS Ally RCLA72", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x1ed3, "ASUS HN7306W", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10979,6 +10981,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1d72, 0x1901, "RedmiBook 14", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1f66, 0x0105, "Ayaneo Portable Game Player", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13),
+ SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
+diff --git a/tools/net/ynl/ynl-gen-c.py b/tools/net/ynl/ynl-gen-c.py
+index 717530bc9c52e7..463f1394ab971b 100755
+--- a/tools/net/ynl/ynl-gen-c.py
++++ b/tools/net/ynl/ynl-gen-c.py
+@@ -2361,6 +2361,17 @@ def print_kernel_family_struct_src(family, cw):
+ if not kernel_can_gen_family_struct(family):
+ return
+
++ if 'sock-priv' in family.kernel_family:
++ # Generate "trampolines" to make CFI happy
++ cw.write_func("static void", f"__{family.c_name}_nl_sock_priv_init",
++ [f"{family.c_name}_nl_sock_priv_init(priv);"],
++ ["void *priv"])
++ cw.nl()
++ cw.write_func("static void", f"__{family.c_name}_nl_sock_priv_destroy",
++ [f"{family.c_name}_nl_sock_priv_destroy(priv);"],
++ ["void *priv"])
++ cw.nl()
++
+ cw.block_start(f"struct genl_family {family.ident_name}_nl_family __ro_after_init =")
+ cw.p('.name\t\t= ' + family.fam_key + ',')
+ cw.p('.version\t= ' + family.ver_key + ',')
+@@ -2378,9 +2389,8 @@ def print_kernel_family_struct_src(family, cw):
+ cw.p(f'.n_mcgrps\t= ARRAY_SIZE({family.c_name}_nl_mcgrps),')
+ if 'sock-priv' in family.kernel_family:
+ cw.p(f'.sock_priv_size\t= sizeof({family.kernel_family["sock-priv"]}),')
+- # Force cast here, actual helpers take pointer to the real type.
+- cw.p(f'.sock_priv_init\t= (void *){family.c_name}_nl_sock_priv_init,')
+- cw.p(f'.sock_priv_destroy = (void *){family.c_name}_nl_sock_priv_destroy,')
++ cw.p(f'.sock_priv_init\t= __{family.c_name}_nl_sock_priv_init,')
++ cw.p(f'.sock_priv_destroy = __{family.c_name}_nl_sock_priv_destroy,')
+ cw.block_end(';')
+
+
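Both the regenerated netdev-genl-gen.c earlier in this patch and the generator change here exist for kernel CFI, as the new comment says: a CFI-instrumented indirect call verifies the callee's exact prototype, so casting a helper that takes the real priv type to void (*)(void *) trips the check at runtime. The fix emits thin trampolines whose signature matches the function pointer and which call the typed helper directly. Minimal sketch:

    struct my_priv { int x; };

    static void my_priv_init(struct my_priv *priv);    /* typed helper */

    /* CFI-safe trampoline: prototype matches the callback slot. */
    static void __my_priv_init(void *priv)
    {
        my_priv_init(priv);    /* direct call, no mismatched cast */
    }

    /* .sock_priv_init = __my_priv_init,  instead of a (void *) cast */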
+diff --git a/tools/testing/selftests/mm/cow.c b/tools/testing/selftests/mm/cow.c
+index 32c6ccc2a6be98..1238e1c5aae150 100644
+--- a/tools/testing/selftests/mm/cow.c
++++ b/tools/testing/selftests/mm/cow.c
+@@ -758,7 +758,7 @@ static void do_run_with_base_page(test_fn fn, bool swapout)
+ }
+
+ /* Populate a base page. */
+- memset(mem, 0, pagesize);
++ memset(mem, 1, pagesize);
+
+ if (swapout) {
+ madvise(mem, pagesize, MADV_PAGEOUT);
+@@ -824,12 +824,12 @@ static void do_run_with_thp(test_fn fn, enum thp_run thp_run, size_t thpsize)
+ * Try to populate a THP. Touch the first sub-page and test if
+ * we get the last sub-page populated automatically.
+ */
+- mem[0] = 0;
++ mem[0] = 1;
+ if (!pagemap_is_populated(pagemap_fd, mem + thpsize - pagesize)) {
+ ksft_test_result_skip("Did not get a THP populated\n");
+ goto munmap;
+ }
+- memset(mem, 0, thpsize);
++ memset(mem, 1, thpsize);
+
+ size = thpsize;
+ switch (thp_run) {
+@@ -1012,7 +1012,7 @@ static void run_with_hugetlb(test_fn fn, const char *desc, size_t hugetlbsize)
+ }
+
+ /* Populate an huge page. */
+- memset(mem, 0, hugetlbsize);
++ memset(mem, 1, hugetlbsize);
+
+ /*
+ * We need a total of two hugetlb pages to handle COW/unsharing
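+
The cow.c hunks replace zero fills with a nonzero pattern when populating test memory. Zero-filled memory is special-cased in several places (the shared zeropage, and newer kernels can split or reclaim zero-filled THP subpages), so populating with zeroes risks the pages not staying materialized the way the test assumes; note this rationale is inferred from the diff, the patch itself does not state it. The takeaway for similar tests:

    /* Use a nonzero fill so zero-page optimizations cannot quietly
     * deduplicate or reclaim memory the test expects to stay resident
     * (rationale is an inference, see the note above). */
    memset(mem, 1, size);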
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+index 4209b95690394b..414addef9a4514 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+@@ -25,6 +25,8 @@
+ #include <sys/types.h>
+ #include <sys/mman.h>
+
++#include <arpa/inet.h>
++
+ #include <netdb.h>
+ #include <netinet/in.h>
+
+@@ -1211,23 +1213,42 @@ static void parse_setsock_options(const char *name)
+ exit(1);
+ }
+
+-void xdisconnect(int fd, int addrlen)
++void xdisconnect(int fd)
+ {
+- struct sockaddr_storage empty;
++ socklen_t addrlen = sizeof(struct sockaddr_storage);
++ struct sockaddr_storage addr, empty;
+ int msec_sleep = 10;
+- int queued = 1;
+- int i;
++ void *raw_addr;
++ int i, cmdlen;
++ char cmd[128];
++
++ /* get the local address and convert it to string */
++ if (getsockname(fd, (struct sockaddr *)&addr, &addrlen) < 0)
++ xerror("getsockname");
++
++ if (addr.ss_family == AF_INET)
++ raw_addr = &(((struct sockaddr_in *)&addr)->sin_addr);
++ else if (addr.ss_family == AF_INET6)
++ raw_addr = &(((struct sockaddr_in6 *)&addr)->sin6_addr);
++ else
++ xerror("bad family");
++
++ strcpy(cmd, "ss -M | grep -q ");
++ cmdlen = strlen(cmd);
++ if (!inet_ntop(addr.ss_family, raw_addr, &cmd[cmdlen],
++ sizeof(cmd) - cmdlen))
++ xerror("inet_ntop");
+
+ shutdown(fd, SHUT_WR);
+
+- /* while until the pending data is completely flushed, the later
++ /*
++ * wait until the pending data is completely flushed and all
++ * the MPTCP sockets reached the closed status.
+ * disconnect will bypass/ignore/drop any pending data.
+ */
+ for (i = 0; ; i += msec_sleep) {
+- if (ioctl(fd, SIOCOUTQ, &queued) < 0)
+- xerror("can't query out socket queue: %d", errno);
+-
+- if (!queued)
++ /* closed socket are not listed by 'ss' */
++ if (system(cmd) != 0)
+ break;
+
+ if (i > poll_timeout)
+@@ -1281,9 +1302,9 @@ int main_loop(void)
+ return ret;
+
+ if (cfg_truncate > 0) {
+- xdisconnect(fd, peer->ai_addrlen);
++ xdisconnect(fd);
+ } else if (--cfg_repeat > 0) {
+- xdisconnect(fd, peer->ai_addrlen);
++ xdisconnect(fd);
+
+ /* the socket could be unblocking at this point, we need the
+ * connect to be blocking
+diff --git a/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c b/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
+index 37d9bf6fb7458d..6f4c3f5a1c5d99 100644
+--- a/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
++++ b/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
+@@ -20,7 +20,7 @@ s32 BPF_STRUCT_OPS(ddsp_bogus_dsq_fail_select_cpu, struct task_struct *p,
+ * If we dispatch to a bogus DSQ that will fall back to the
+ * builtin global DSQ, we fail gracefully.
+ */
+- scx_bpf_dispatch_vtime(p, 0xcafef00d, SCX_SLICE_DFL,
++ scx_bpf_dsq_insert_vtime(p, 0xcafef00d, SCX_SLICE_DFL,
+ p->scx.dsq_vtime, 0);
+ return cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c b/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
+index dffc97d9cdf141..e4a55027778fd0 100644
+--- a/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
++++ b/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
+@@ -17,8 +17,8 @@ s32 BPF_STRUCT_OPS(ddsp_vtimelocal_fail_select_cpu, struct task_struct *p,
+
+ if (cpu >= 0) {
+ /* Shouldn't be allowed to vtime dispatch to a builtin DSQ. */
+- scx_bpf_dispatch_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL,
+- p->scx.dsq_vtime, 0);
++ scx_bpf_dsq_insert_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL,
++ p->scx.dsq_vtime, 0);
+ return cpu;
+ }
+
+diff --git a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
+index 6a7db1502c29e1..fbda6bf5467128 100644
+--- a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
++++ b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
+@@ -43,9 +43,12 @@ void BPF_STRUCT_OPS(dsp_local_on_dispatch, s32 cpu, struct task_struct *prev)
+ if (!p)
+ return;
+
+- target = bpf_get_prandom_u32() % nr_cpus;
++ if (p->nr_cpus_allowed == nr_cpus)
++ target = bpf_get_prandom_u32() % nr_cpus;
++ else
++ target = scx_bpf_task_cpu(p);
+
+- scx_bpf_dispatch(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0);
++ scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0);
+ bpf_task_release(p);
+ }
+
+diff --git a/tools/testing/selftests/sched_ext/dsp_local_on.c b/tools/testing/selftests/sched_ext/dsp_local_on.c
+index 472851b5685487..0ff27e57fe4303 100644
+--- a/tools/testing/selftests/sched_ext/dsp_local_on.c
++++ b/tools/testing/selftests/sched_ext/dsp_local_on.c
+@@ -34,9 +34,10 @@ static enum scx_test_status run(void *ctx)
+ /* Just sleeping is fine, plenty of scheduling events happening */
+ sleep(1);
+
+- SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_ERROR));
+ bpf_link__destroy(link);
+
++ SCX_EQ(skel->data->uei.kind, EXIT_KIND(SCX_EXIT_UNREG));
++
+ return SCX_TEST_PASS;
+ }
+
+@@ -50,7 +51,7 @@ static void cleanup(void *ctx)
+ struct scx_test dsp_local_on = {
+ .name = "dsp_local_on",
+ .description = "Verify we can directly dispatch tasks to a local DSQs "
+- "from osp.dispatch()",
++ "from ops.dispatch()",
+ .setup = setup,
+ .run = run,
+ .cleanup = cleanup,
+diff --git a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
+index 1efb50d61040ad..a7cf868d5e311d 100644
+--- a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
++++ b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
+@@ -31,7 +31,7 @@ void BPF_STRUCT_OPS(enq_select_cpu_fails_enqueue, struct task_struct *p,
+ /* Can only call from ops.select_cpu() */
+ scx_bpf_select_cpu_dfl(p, 0, 0, &found);
+
+- scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
+ }
+
+ SEC(".struct_ops.link")
+diff --git a/tools/testing/selftests/sched_ext/exit.bpf.c b/tools/testing/selftests/sched_ext/exit.bpf.c
+index d75d4faf07f6d5..4bc36182d3ffc2 100644
+--- a/tools/testing/selftests/sched_ext/exit.bpf.c
++++ b/tools/testing/selftests/sched_ext/exit.bpf.c
+@@ -33,7 +33,7 @@ void BPF_STRUCT_OPS(exit_enqueue, struct task_struct *p, u64 enq_flags)
+ if (exit_point == EXIT_ENQUEUE)
+ EXIT_CLEANLY();
+
+- scx_bpf_dispatch(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
+ }
+
+ void BPF_STRUCT_OPS(exit_dispatch, s32 cpu, struct task_struct *p)
+@@ -41,7 +41,7 @@ void BPF_STRUCT_OPS(exit_dispatch, s32 cpu, struct task_struct *p)
+ if (exit_point == EXIT_DISPATCH)
+ EXIT_CLEANLY();
+
+- scx_bpf_consume(DSQ_ID);
++ scx_bpf_dsq_move_to_local(DSQ_ID);
+ }
+
+ void BPF_STRUCT_OPS(exit_enable, struct task_struct *p)
+diff --git a/tools/testing/selftests/sched_ext/maximal.bpf.c b/tools/testing/selftests/sched_ext/maximal.bpf.c
+index 4d4cd8d966dba6..430f5e13bf5544 100644
+--- a/tools/testing/selftests/sched_ext/maximal.bpf.c
++++ b/tools/testing/selftests/sched_ext/maximal.bpf.c
+@@ -12,6 +12,8 @@
+
+ char _license[] SEC("license") = "GPL";
+
++#define DSQ_ID 0
++
+ s32 BPF_STRUCT_OPS(maximal_select_cpu, struct task_struct *p, s32 prev_cpu,
+ u64 wake_flags)
+ {
+@@ -20,7 +22,7 @@ s32 BPF_STRUCT_OPS(maximal_select_cpu, struct task_struct *p, s32 prev_cpu,
+
+ void BPF_STRUCT_OPS(maximal_enqueue, struct task_struct *p, u64 enq_flags)
+ {
+- scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
+ }
+
+ void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags)
+@@ -28,7 +30,7 @@ void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags)
+
+ void BPF_STRUCT_OPS(maximal_dispatch, s32 cpu, struct task_struct *prev)
+ {
+- scx_bpf_consume(SCX_DSQ_GLOBAL);
++ scx_bpf_dsq_move_to_local(DSQ_ID);
+ }
+
+ void BPF_STRUCT_OPS(maximal_runnable, struct task_struct *p, u64 enq_flags)
+@@ -123,7 +125,7 @@ void BPF_STRUCT_OPS(maximal_cgroup_set_weight, struct cgroup *cgrp, u32 weight)
+
+ s32 BPF_STRUCT_OPS_SLEEPABLE(maximal_init)
+ {
+- return 0;
++ return scx_bpf_create_dsq(DSQ_ID, -1);
+ }
+
+ void BPF_STRUCT_OPS(maximal_exit, struct scx_exit_info *info)
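+
The sched_ext selftest churn tracks a kfunc rename: scx_bpf_dispatch() and scx_bpf_dispatch_vtime() became scx_bpf_dsq_insert() and scx_bpf_dsq_insert_vtime(), and scx_bpf_consume() became scx_bpf_dsq_move_to_local(); maximal.bpf.c additionally switches from the built-in global DSQ to a custom DSQ created in ops.init(). A minimal skeleton in the new names, condensed from the hunks:

    #define DSQ_ID 0

    s32 BPF_STRUCT_OPS_SLEEPABLE(sketch_init)
    {
        return scx_bpf_create_dsq(DSQ_ID, -1);    /* -1: any NUMA node */
    }

    void BPF_STRUCT_OPS(sketch_enqueue, struct task_struct *p, u64 enq_flags)
    {
        scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
    }

    void BPF_STRUCT_OPS(sketch_dispatch, s32 cpu, struct task_struct *prev)
    {
        scx_bpf_dsq_move_to_local(DSQ_ID);
    }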
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
+index f171ac47097060..13d0f5be788d12 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
+@@ -30,7 +30,7 @@ void BPF_STRUCT_OPS(select_cpu_dfl_enqueue, struct task_struct *p,
+ }
+ scx_bpf_put_idle_cpumask(idle_mask);
+
+- scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
+ }
+
+ SEC(".struct_ops.link")
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
+index 9efdbb7da92887..815f1d5d61ac43 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
+@@ -67,7 +67,7 @@ void BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_enqueue, struct task_struct *p,
+ saw_local = true;
+ }
+
+- scx_bpf_dispatch(p, dsq_id, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dsq_insert(p, dsq_id, SCX_SLICE_DFL, enq_flags);
+ }
+
+ s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_init_task,
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
+index 59bfc4f36167a7..4bb99699e9209c 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
+@@ -29,7 +29,7 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_select_cpu, struct task_struct *p,
+ cpu = prev_cpu;
+
+ dispatch:
+- scx_bpf_dispatch(p, dsq_id, SCX_SLICE_DFL, 0);
++ scx_bpf_dsq_insert(p, dsq_id, SCX_SLICE_DFL, 0);
+ return cpu;
+ }
+
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
+index 3bbd5fcdfb18e0..2a75de11b2cfd5 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
+@@ -18,7 +18,7 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_bad_dsq_select_cpu, struct task_struct *p
+ s32 prev_cpu, u64 wake_flags)
+ {
+ /* Dispatching to a random DSQ should fail. */
+- scx_bpf_dispatch(p, 0xcafef00d, SCX_SLICE_DFL, 0);
++ scx_bpf_dsq_insert(p, 0xcafef00d, SCX_SLICE_DFL, 0);
+
+ return prev_cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
+index 0fda57fe0ecfae..99d075695c9743 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
+@@ -18,8 +18,8 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_dbl_dsp_select_cpu, struct task_struct *p
+ s32 prev_cpu, u64 wake_flags)
+ {
+ /* Dispatching twice in a row is disallowed. */
+- scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
+- scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
++ scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
++ scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
+
+ return prev_cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
+index e6c67bcf5e6e35..bfcb96cd4954bd 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
+@@ -2,8 +2,8 @@
+ /*
+ * A scheduler that validates that enqueue flags are properly stored and
+ * applied at dispatch time when a task is directly dispatched from
+- * ops.select_cpu(). We validate this by using scx_bpf_dispatch_vtime(), and
+- * making the test a very basic vtime scheduler.
++ * ops.select_cpu(). We validate this by using scx_bpf_dsq_insert_vtime(),
++ * and making the test a very basic vtime scheduler.
+ *
+ * Copyright (c) 2024 Meta Platforms, Inc. and affiliates.
+ * Copyright (c) 2024 David Vernet <dvernet@meta.com>
+@@ -47,13 +47,13 @@ s32 BPF_STRUCT_OPS(select_cpu_vtime_select_cpu, struct task_struct *p,
+ cpu = prev_cpu;
+ scx_bpf_test_and_clear_cpu_idle(cpu);
+ ddsp:
+- scx_bpf_dispatch_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0);
++ scx_bpf_dsq_insert_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0);
+ return cpu;
+ }
+
+ void BPF_STRUCT_OPS(select_cpu_vtime_dispatch, s32 cpu, struct task_struct *p)
+ {
+- if (scx_bpf_consume(VTIME_DSQ))
++ if (scx_bpf_dsq_move_to_local(VTIME_DSQ))
+ consumed = true;
+ }
+
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/filters/flow.json b/tools/testing/selftests/tc-testing/tc-tests/filters/flow.json
+index 58189327f6444a..383fbda07245c8 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/filters/flow.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/filters/flow.json
+@@ -78,10 +78,10 @@
+ "setup": [
+ "$TC qdisc add dev $DEV1 ingress"
+ ],
+- "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 protocol ip flow map key dst rshift 0xff",
++ "cmdUnderTest": "$TC filter add dev $DEV1 parent ffff: handle 1 prio 1 protocol ip flow map key dst rshift 0x1f",
+ "expExitCode": "0",
+ "verifyCmd": "$TC filter get dev $DEV1 parent ffff: handle 1 protocol ip prio 1 flow",
+- "matchPattern": "filter parent ffff: protocol ip pref 1 flow chain [0-9]+ handle 0x1 map keys dst rshift 255 baseclass",
++ "matchPattern": "filter parent ffff: protocol ip pref 1 flow chain [0-9]+ handle 0x1 map keys dst rshift 31 baseclass",
+ "matchCount": "1",
+ "teardown": [
+ "$TC qdisc del dev $DEV1 ingress"
+diff --git a/tools/testing/shared/linux/maple_tree.h b/tools/testing/shared/linux/maple_tree.h
+index 06c89bdcc51541..f67d47d32857ce 100644
+--- a/tools/testing/shared/linux/maple_tree.h
++++ b/tools/testing/shared/linux/maple_tree.h
+@@ -2,6 +2,6 @@
+ #define atomic_t int32_t
+ #define atomic_inc(x) uatomic_inc(x)
+ #define atomic_read(x) uatomic_read(x)
+-#define atomic_set(x, y) do {} while (0)
++#define atomic_set(x, y) uatomic_set(x, y)
+ #define U8_MAX UCHAR_MAX
+ #include "../../../../include/linux/maple_tree.h"
+diff --git a/tools/testing/vma/linux/atomic.h b/tools/testing/vma/linux/atomic.h
+index e01f66f9898279..3e1b6adc027b99 100644
+--- a/tools/testing/vma/linux/atomic.h
++++ b/tools/testing/vma/linux/atomic.h
+@@ -6,7 +6,7 @@
+ #define atomic_t int32_t
+ #define atomic_inc(x) uatomic_inc(x)
+ #define atomic_read(x) uatomic_read(x)
+-#define atomic_set(x, y) do {} while (0)
++#define atomic_set(x, y) uatomic_set(x, y)
+ #define U8_MAX UCHAR_MAX
+
+ #endif /* _LINUX_ATOMIC_H */
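The final two hunks fix the userspace test shims for the maple tree and VMA tests: atomic_set() had been stubbed out as a no-op, so any code relying on the store silently kept the stale value; mapping it onto liburcu's uatomic_set() restores real store semantics. Why a no-op stub bites, in miniature:

    #include <assert.h>
    #include <stdint.h>
    #include <urcu/uatomic.h>

    int main(void)
    {
        int32_t count = 7;

        /* With a do-nothing atomic_set() stub, count would stay 7 and
         * the assertion would fire; uatomic_set() really stores. */
        uatomic_set(&count, 1);
        assert(uatomic_read(&count) == 1);
        return 0;
    }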
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-01-30 12:47 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-01-30 12:47 UTC (permalink / raw
To: gentoo-commits
commit: 5e62dc25f02e5f291109108ac41ca60469dc4531
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Jan 30 12:47:37 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Jan 30 12:47:37 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5e62dc25
Update CPU Optimization patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
5010_enable-cpu-optimizations-universal.patch | 64 ++++++++++++++++-----------
1 file changed, 38 insertions(+), 26 deletions(-)
diff --git a/5010_enable-cpu-optimizations-universal.patch b/5010_enable-cpu-optimizations-universal.patch
index 0758b0ba..5011aaa6 100644
--- a/5010_enable-cpu-optimizations-universal.patch
+++ b/5010_enable-cpu-optimizations-universal.patch
@@ -116,13 +116,13 @@ REFERENCES
4. http://www.linuxforge.net/docs/linux/linux-gcc.php
---
- arch/x86/Kconfig.cpu | 359 ++++++++++++++++++++++++++++++--
- arch/x86/Makefile | 87 +++++++-
- arch/x86/include/asm/vermagic.h | 70 +++++++
- 3 files changed, 499 insertions(+), 17 deletions(-)
+ arch/x86/Kconfig.cpu | 367 ++++++++++++++++++++++++++++++--
+ arch/x86/Makefile | 89 +++++++-
+ arch/x86/include/asm/vermagic.h | 72 +++++++
+ 3 files changed, 511 insertions(+), 17 deletions(-)
diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
-index 2a7279d80460..abfadddd1b23 100644
+index ce5ed2c2db0c..6d89f21aba52 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -155,9 +155,8 @@ config MPENTIUM4
@@ -252,7 +252,7 @@ index 2a7279d80460..abfadddd1b23 100644
+
+config MZEN5
+ bool "AMD Zen 5"
-+ depends on (CC_IS_GCC && GCC_VERSION > 140000) || (CC_IS_CLANG && CLANG_VERSION >= 191000)
++ depends on (CC_IS_GCC && GCC_VERSION > 140000) || (CC_IS_CLANG && CLANG_VERSION >= 190100)
+ help
+ Select this for AMD Family 19h Zen 5 processors.
+
@@ -280,7 +280,7 @@ index 2a7279d80460..abfadddd1b23 100644
help
Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and
-@@ -278,14 +388,191 @@ config MCORE2
+@@ -278,14 +388,199 @@ config MCORE2
family in /proc/cpuinfo. Newer ones have 6 and older ones 15
(not a typo)
@@ -388,14 +388,22 @@ index 2a7279d80460..abfadddd1b23 100644
+
+ Enables -march=cannonlake
+
-+config MICELAKE
++config MICELAKE_CLIENT
+ bool "Intel Ice Lake"
+ help
+
-+ Select this for 10th Gen Core processors in the Ice Lake family.
++ Select this for 10th Gen Core client processors in the Ice Lake family.
+
+ Enables -march=icelake-client
+
++config MICELAKE_SERVER
++ bool "Intel Ice Lake Server"
++ help
++
++ Select this for 10th Gen Core server processors in the Ice Lake family.
++
++ Enables -march=icelake-server
++
+config MCASCADELAKE
+ bool "Intel Cascade Lake"
+ help
@@ -478,7 +486,7 @@ index 2a7279d80460..abfadddd1b23 100644
config GENERIC_CPU
bool "Generic-x86-64"
-@@ -294,6 +581,26 @@ config GENERIC_CPU
+@@ -294,6 +589,26 @@ config GENERIC_CPU
Generic x86-64 CPU.
Run equally well on all x86-64 CPUs.
@@ -505,7 +513,7 @@ index 2a7279d80460..abfadddd1b23 100644
endchoice
config X86_GENERIC
-@@ -308,6 +615,30 @@ config X86_GENERIC
+@@ -308,6 +623,30 @@ config X86_GENERIC
This is really intended for distributors who need more
generic optimizations.
@@ -536,34 +544,34 @@ index 2a7279d80460..abfadddd1b23 100644
#
# Define implied options from the CPU selection here
config X86_INTERNODE_CACHE_SHIFT
-@@ -318,7 +649,7 @@ config X86_INTERNODE_CACHE_SHIFT
+@@ -318,7 +657,7 @@ config X86_INTERNODE_CACHE_SHIFT
config X86_L1_CACHE_SHIFT
int
default "7" if MPENTIUM4 || MPSC
- default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU
-+ default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
++ default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
default "4" if MELAN || M486SX || M486 || MGEODEGX1
default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX
-@@ -336,11 +667,11 @@ config X86_ALIGNMENT_16
+@@ -336,11 +675,11 @@ config X86_ALIGNMENT_16
config X86_INTEL_USERCOPY
def_bool y
- depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2
-+ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL
++ depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL
config X86_USE_PPRO_CHECKSUM
def_bool y
- depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM
-+ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
++ depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MJAGUAR || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MZEN || MZEN2 || MZEN3 || MZEN4 || MZEN5 || MNEHALEM || MWESTMERE || MSILVERMONT || MGOLDMONT || MGOLDMONTPLUS || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MSKYLAKEX || MCANNONLAKE || MICELAKE_CLIENT || MICELAKE_SERVER || MCASCADELAKE || MCOOPERLAKE || MTIGERLAKE || MSAPPHIRERAPIDS || MROCKETLAKE || MALDERLAKE || MRAPTORLAKE || MMETEORLAKE || MEMERALDRAPIDS || MNATIVE_INTEL || MNATIVE_AMD
#
# P6_NOPs are a relatively minor optimization that require a family >=
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
-index cd75e78a06c1..396d1db12bca 100644
+index 3419ffa2a350..aafb069de612 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
-@@ -181,15 +181,96 @@ else
+@@ -152,15 +152,98 @@ else
cflags-$(CONFIG_MK8) += -march=k8
cflags-$(CONFIG_MPSC) += -march=nocona
cflags-$(CONFIG_MCORE2) += -march=core2
@@ -605,7 +613,8 @@ index cd75e78a06c1..396d1db12bca 100644
+ cflags-$(CONFIG_MSKYLAKE) += -march=skylake
+ cflags-$(CONFIG_MSKYLAKEX) += -march=skylake-avx512
+ cflags-$(CONFIG_MCANNONLAKE) += -march=cannonlake
-+ cflags-$(CONFIG_MICELAKE) += -march=icelake-client
++ cflags-$(CONFIG_MICELAKE_CLIENT) += -march=icelake-client
++ cflags-$(CONFIG_MICELAKE_SERVER) += -march=icelake-server
+ cflags-$(CONFIG_MCASCADELAKE) += -march=cascadelake
+ cflags-$(CONFIG_MCOOPERLAKE) += -march=cooperlake
+ cflags-$(CONFIG_MTIGERLAKE) += -march=tigerlake
@@ -650,7 +659,8 @@ index cd75e78a06c1..396d1db12bca 100644
+ rustflags-$(CONFIG_MSKYLAKE) += -Ctarget-cpu=skylake
+ rustflags-$(CONFIG_MSKYLAKEX) += -Ctarget-cpu=skylake-avx512
+ rustflags-$(CONFIG_MCANNONLAKE) += -Ctarget-cpu=cannonlake
-+ rustflags-$(CONFIG_MICELAKE) += -Ctarget-cpu=icelake-client
++ rustflags-$(CONFIG_MICELAKE_CLIENT) += -Ctarget-cpu=icelake-client
++ rustflags-$(CONFIG_MICELAKE_SERVER) += -Ctarget-cpu=icelake-server
+ rustflags-$(CONFIG_MCASCADELAKE) += -Ctarget-cpu=cascadelake
+ rustflags-$(CONFIG_MCOOPERLAKE) += -Ctarget-cpu=cooperlake
+ rustflags-$(CONFIG_MTIGERLAKE) += -Ctarget-cpu=tigerlake
@@ -664,10 +674,10 @@ index cd75e78a06c1..396d1db12bca 100644
KBUILD_CFLAGS += -mno-red-zone
diff --git a/arch/x86/include/asm/vermagic.h b/arch/x86/include/asm/vermagic.h
-index 75884d2cdec3..f4e29563473d 100644
+index 75884d2cdec3..2fdae271f47f 100644
--- a/arch/x86/include/asm/vermagic.h
+++ b/arch/x86/include/asm/vermagic.h
-@@ -17,6 +17,54 @@
+@@ -17,6 +17,56 @@
#define MODULE_PROC_FAMILY "586MMX "
#elif defined CONFIG_MCORE2
#define MODULE_PROC_FAMILY "CORE2 "
@@ -699,8 +709,10 @@ index 75884d2cdec3..f4e29563473d 100644
+#define MODULE_PROC_FAMILY "SKYLAKEX "
+#elif defined CONFIG_MCANNONLAKE
+#define MODULE_PROC_FAMILY "CANNONLAKE "
-+#elif defined CONFIG_MICELAKE
-+#define MODULE_PROC_FAMILY "ICELAKE "
++#elif defined CONFIG_MICELAKE_CLIENT
++#define MODULE_PROC_FAMILY "ICELAKE_CLIENT "
++#elif defined CONFIG_MICELAKE_SERVER
++#define MODULE_PROC_FAMILY "ICELAKE_SERVER "
+#elif defined CONFIG_MCASCADELAKE
+#define MODULE_PROC_FAMILY "CASCADELAKE "
+#elif defined CONFIG_MCOOPERLAKE
@@ -722,7 +734,7 @@ index 75884d2cdec3..f4e29563473d 100644
#elif defined CONFIG_MATOM
#define MODULE_PROC_FAMILY "ATOM "
#elif defined CONFIG_M686
-@@ -35,6 +83,28 @@
+@@ -35,6 +85,28 @@
#define MODULE_PROC_FAMILY "K7 "
#elif defined CONFIG_MK8
#define MODULE_PROC_FAMILY "K8 "
@@ -752,5 +764,5 @@ index 75884d2cdec3..f4e29563473d 100644
#define MODULE_PROC_FAMILY "ELAN "
#elif defined CONFIG_MCRUSOE
--
-2.46.2
+2.47.1
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-01 23:07 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-02-01 23:07 UTC (permalink / raw)
To: gentoo-commits
commit: 05ce594d8f20da5e0040106fb43894994a64fd0b
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 1 23:06:45 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb 1 23:06:45 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=05ce594d
Linux patch 6.12.12
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1011_linux-6.12.12.patch | 1388 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 1392 insertions(+)
diff --git a/0000_README b/0000_README
index 9c94906b..17858f75 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch: 1010_linux-6.12.11.patch
From: https://www.kernel.org
Desc: Linux 6.12.11
+Patch: 1011_linux-6.12.12.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.12
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1011_linux-6.12.12.patch b/1011_linux-6.12.12.patch
new file mode 100644
index 00000000..921cacc4
--- /dev/null
+++ b/1011_linux-6.12.12.patch
@@ -0,0 +1,1388 @@
+diff --git a/Makefile b/Makefile
+index 7cf8f11975f89c..9e6246e733eb94 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+index bf636b28e3e16e..5bb8b78bf250a0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+@@ -63,7 +63,8 @@ void dmub_hw_lock_mgr_inbox0_cmd(struct dc_dmub_srv *dmub_srv,
+
+ bool should_use_dmub_lock(struct dc_link *link)
+ {
+- if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1)
++ if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1 ||
++ link->psr_settings.psr_version == DC_PSR_VERSION_1)
+ return true;
+
+ if (link->replay_settings.replay_feature_enabled)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
+index 3ea54fd52e4683..e2a3764d9d181a 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4_calcs.c
+@@ -578,8 +578,8 @@ static void CalculateBytePerPixelAndBlockSizes(
+ {
+ *BytePerPixelDETY = 0;
+ *BytePerPixelDETC = 0;
+- *BytePerPixelY = 0;
+- *BytePerPixelC = 0;
++ *BytePerPixelY = 1;
++ *BytePerPixelC = 1;
+
+ if (SourcePixelFormat == dml2_444_64) {
+ *BytePerPixelDETY = 8;
+diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
+index fc35f47e2849ed..ca7f43c8d6f1b3 100644
+--- a/drivers/gpu/drm/drm_connector.c
++++ b/drivers/gpu/drm/drm_connector.c
+@@ -507,6 +507,9 @@ int drmm_connector_hdmi_init(struct drm_device *dev,
+ if (!supported_formats || !(supported_formats & BIT(HDMI_COLORSPACE_RGB)))
+ return -EINVAL;
+
++ if (connector->ycbcr_420_allowed != !!(supported_formats & BIT(HDMI_COLORSPACE_YUV420)))
++ return -EINVAL;
++
+ if (!(max_bpc == 8 || max_bpc == 10 || max_bpc == 12))
+ return -EINVAL;
+
+diff --git a/drivers/gpu/drm/v3d/v3d_irq.c b/drivers/gpu/drm/v3d/v3d_irq.c
+index da203045df9bec..72b6a119412fa7 100644
+--- a/drivers/gpu/drm/v3d/v3d_irq.c
++++ b/drivers/gpu/drm/v3d/v3d_irq.c
+@@ -107,8 +107,10 @@ v3d_irq(int irq, void *arg)
+
+ v3d_job_update_stats(&v3d->bin_job->base, V3D_BIN);
+ trace_v3d_bcl_irq(&v3d->drm, fence->seqno);
+- dma_fence_signal(&fence->base);
++
+ v3d->bin_job = NULL;
++ dma_fence_signal(&fence->base);
++
+ status = IRQ_HANDLED;
+ }
+
+@@ -118,8 +120,10 @@ v3d_irq(int irq, void *arg)
+
+ v3d_job_update_stats(&v3d->render_job->base, V3D_RENDER);
+ trace_v3d_rcl_irq(&v3d->drm, fence->seqno);
+- dma_fence_signal(&fence->base);
++
+ v3d->render_job = NULL;
++ dma_fence_signal(&fence->base);
++
+ status = IRQ_HANDLED;
+ }
+
+@@ -129,8 +133,10 @@ v3d_irq(int irq, void *arg)
+
+ v3d_job_update_stats(&v3d->csd_job->base, V3D_CSD);
+ trace_v3d_csd_irq(&v3d->drm, fence->seqno);
+- dma_fence_signal(&fence->base);
++
+ v3d->csd_job = NULL;
++ dma_fence_signal(&fence->base);
++
+ status = IRQ_HANDLED;
+ }
+
+@@ -167,8 +173,10 @@ v3d_hub_irq(int irq, void *arg)
+
+ v3d_job_update_stats(&v3d->tfu_job->base, V3D_TFU);
+ trace_v3d_tfu_irq(&v3d->drm, fence->seqno);
+- dma_fence_signal(&fence->base);
++
+ v3d->tfu_job = NULL;
++ dma_fence_signal(&fence->base);
++
+ status = IRQ_HANDLED;
+ }
+
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 0f23be98c56e22..ceb3b1a72e235c 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -506,7 +506,6 @@
+ #define USB_DEVICE_ID_GENERAL_TOUCH_WIN8_PIT_E100 0xe100
+
+ #define I2C_VENDOR_ID_GOODIX 0x27c6
+-#define I2C_DEVICE_ID_GOODIX_01E0 0x01e0
+ #define I2C_DEVICE_ID_GOODIX_01E8 0x01e8
+ #define I2C_DEVICE_ID_GOODIX_01E9 0x01e9
+ #define I2C_DEVICE_ID_GOODIX_01F0 0x01f0
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index e936019d21fecf..d1b7ccfb3e051f 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1452,8 +1452,7 @@ static const __u8 *mt_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ {
+ if (hdev->vendor == I2C_VENDOR_ID_GOODIX &&
+ (hdev->product == I2C_DEVICE_ID_GOODIX_01E8 ||
+- hdev->product == I2C_DEVICE_ID_GOODIX_01E9 ||
+- hdev->product == I2C_DEVICE_ID_GOODIX_01E0)) {
++ hdev->product == I2C_DEVICE_ID_GOODIX_01E9)) {
+ if (rdesc[607] == 0x15) {
+ rdesc[607] = 0x25;
+ dev_info(
+@@ -2079,10 +2078,7 @@ static const struct hid_device_id mt_devices[] = {
+ I2C_DEVICE_ID_GOODIX_01E8) },
+ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
+ HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
+- I2C_DEVICE_ID_GOODIX_01E9) },
+- { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
+- HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
+- I2C_DEVICE_ID_GOODIX_01E0) },
++ I2C_DEVICE_ID_GOODIX_01E8) },
+
+ /* GoodTouch panels */
+ { .driver_data = MT_CLS_NSMU,
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 9843b52bd017a0..34428349fa3118 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -1370,17 +1370,6 @@ static int wacom_led_register_one(struct device *dev, struct wacom *wacom,
+ if (!name)
+ return -ENOMEM;
+
+- if (!read_only) {
+- led->trigger.name = name;
+- error = devm_led_trigger_register(dev, &led->trigger);
+- if (error) {
+- hid_err(wacom->hdev,
+- "failed to register LED trigger %s: %d\n",
+- led->cdev.name, error);
+- return error;
+- }
+- }
+-
+ led->group = group;
+ led->id = id;
+ led->wacom = wacom;
+@@ -1397,6 +1386,19 @@ static int wacom_led_register_one(struct device *dev, struct wacom *wacom,
+ led->cdev.brightness_set = wacom_led_readonly_brightness_set;
+ }
+
++ if (!read_only) {
++ led->trigger.name = name;
++ if (id == wacom->led.groups[group].select)
++ led->trigger.brightness = wacom_leds_brightness_get(led);
++ error = devm_led_trigger_register(dev, &led->trigger);
++ if (error) {
++ hid_err(wacom->hdev,
++ "failed to register LED trigger %s: %d\n",
++ led->cdev.name, error);
++ return error;
++ }
++ }
++
+ error = devm_led_classdev_register(dev, &led->cdev);
+ if (error) {
+ hid_err(wacom->hdev,
+diff --git a/drivers/hwmon/drivetemp.c b/drivers/hwmon/drivetemp.c
+index 2a4ec55ddb47ed..291d91f6864676 100644
+--- a/drivers/hwmon/drivetemp.c
++++ b/drivers/hwmon/drivetemp.c
+@@ -194,7 +194,7 @@ static int drivetemp_scsi_command(struct drivetemp_data *st,
+ scsi_cmd[14] = ata_command;
+
+ err = scsi_execute_cmd(st->sdev, scsi_cmd, op, st->smartdata,
+- ATA_SECT_SIZE, HZ, 5, NULL);
++ ATA_SECT_SIZE, 10 * HZ, 5, NULL);
+ if (err > 0)
+ err = -EIO;
+ return err;
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 22ea58bf76cb5c..77fddab9d9502e 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -150,6 +150,7 @@ static const struct xpad_device {
+ { 0x045e, 0x028e, "Microsoft X-Box 360 pad", 0, XTYPE_XBOX360 },
+ { 0x045e, 0x028f, "Microsoft X-Box 360 pad v2", 0, XTYPE_XBOX360 },
+ { 0x045e, 0x0291, "Xbox 360 Wireless Receiver (XBOX)", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360W },
++ { 0x045e, 0x02a9, "Xbox 360 Wireless Receiver (Unofficial)", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360W },
+ { 0x045e, 0x02d1, "Microsoft X-Box One pad", 0, XTYPE_XBOXONE },
+ { 0x045e, 0x02dd, "Microsoft X-Box One pad (Firmware 2015)", 0, XTYPE_XBOXONE },
+ { 0x045e, 0x02e3, "Microsoft X-Box One Elite pad", MAP_PADDLES, XTYPE_XBOXONE },
+@@ -305,6 +306,7 @@ static const struct xpad_device {
+ { 0x1689, 0xfe00, "Razer Sabertooth", 0, XTYPE_XBOX360 },
+ { 0x17ef, 0x6182, "Lenovo Legion Controller for Windows", 0, XTYPE_XBOX360 },
+ { 0x1949, 0x041a, "Amazon Game Controller", 0, XTYPE_XBOX360 },
++ { 0x1a86, 0xe310, "QH Electronics Controller", 0, XTYPE_XBOX360 },
+ { 0x1bad, 0x0002, "Harmonix Rock Band Guitar", 0, XTYPE_XBOX360 },
+ { 0x1bad, 0x0003, "Harmonix Rock Band Drumkit", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
+ { 0x1bad, 0x0130, "Ion Drum Rocker", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
+@@ -373,16 +375,19 @@ static const struct xpad_device {
+ { 0x294b, 0x3303, "Snakebyte GAMEPAD BASE X", 0, XTYPE_XBOXONE },
+ { 0x294b, 0x3404, "Snakebyte GAMEPAD RGB X", 0, XTYPE_XBOXONE },
+ { 0x2dc8, 0x2000, "8BitDo Pro 2 Wired Controller fox Xbox", 0, XTYPE_XBOXONE },
+- { 0x2dc8, 0x3106, "8BitDo Pro 2 Wired Controller", 0, XTYPE_XBOX360 },
++ { 0x2dc8, 0x3106, "8BitDo Ultimate Wireless / Pro 2 Wired Controller", 0, XTYPE_XBOX360 },
+ { 0x2dc8, 0x310a, "8BitDo Ultimate 2C Wireless Controller", 0, XTYPE_XBOX360 },
+ { 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE },
+ { 0x31e3, 0x1100, "Wooting One", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1200, "Wooting Two", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1210, "Wooting Lekker", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1220, "Wooting Two HE", 0, XTYPE_XBOX360 },
++ { 0x31e3, 0x1230, "Wooting Two HE (ARM)", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1300, "Wooting 60HE (AVR)", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1310, "Wooting 60HE (ARM)", 0, XTYPE_XBOX360 },
+ { 0x3285, 0x0607, "Nacon GC-100", 0, XTYPE_XBOX360 },
++ { 0x3285, 0x0646, "Nacon Pro Compact", 0, XTYPE_XBOXONE },
++ { 0x3285, 0x0663, "Nacon Evol-X", 0, XTYPE_XBOXONE },
+ { 0x3537, 0x1004, "GameSir T4 Kaleid", 0, XTYPE_XBOX360 },
+ { 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX },
+ { 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX },
+@@ -514,6 +519,7 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x1689), /* Razer Onza */
+ XPAD_XBOX360_VENDOR(0x17ef), /* Lenovo */
+ XPAD_XBOX360_VENDOR(0x1949), /* Amazon controllers */
++ XPAD_XBOX360_VENDOR(0x1a86), /* QH Electronics */
+ XPAD_XBOX360_VENDOR(0x1bad), /* Harmonix Rock Band guitar and drums */
+ XPAD_XBOX360_VENDOR(0x20d6), /* PowerA controllers */
+ XPAD_XBOXONE_VENDOR(0x20d6), /* PowerA controllers */
+@@ -530,6 +536,7 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x2f24), /* GameSir controllers */
+ XPAD_XBOX360_VENDOR(0x31e3), /* Wooting Keyboards */
+ XPAD_XBOX360_VENDOR(0x3285), /* Nacon GC-100 */
++ XPAD_XBOXONE_VENDOR(0x3285), /* Nacon Evol-X */
+ XPAD_XBOX360_VENDOR(0x3537), /* GameSir Controllers */
+ XPAD_XBOXONE_VENDOR(0x3537), /* GameSir Controllers */
+ { }
+diff --git a/drivers/input/keyboard/atkbd.c b/drivers/input/keyboard/atkbd.c
+index 5855d4fc6e6a4d..f7b08b359c9c67 100644
+--- a/drivers/input/keyboard/atkbd.c
++++ b/drivers/input/keyboard/atkbd.c
+@@ -89,7 +89,7 @@ static const unsigned short atkbd_set2_keycode[ATKBD_KEYMAP_SIZE] = {
+ 0, 46, 45, 32, 18, 5, 4, 95, 0, 57, 47, 33, 20, 19, 6,183,
+ 0, 49, 48, 35, 34, 21, 7,184, 0, 0, 50, 36, 22, 8, 9,185,
+ 0, 51, 37, 23, 24, 11, 10, 0, 0, 52, 53, 38, 39, 25, 12, 0,
+- 0, 89, 40, 0, 26, 13, 0, 0, 58, 54, 28, 27, 0, 43, 0, 85,
++ 0, 89, 40, 0, 26, 13, 0,193, 58, 54, 28, 27, 0, 43, 0, 85,
+ 0, 86, 91, 90, 92, 0, 14, 94, 0, 79,124, 75, 71,121, 0, 0,
+ 82, 83, 80, 76, 77, 72, 1, 69, 87, 78, 81, 74, 55, 73, 70, 99,
+
+diff --git a/drivers/irqchip/irq-sunxi-nmi.c b/drivers/irqchip/irq-sunxi-nmi.c
+index bb92fd85e975f8..0b431215202434 100644
+--- a/drivers/irqchip/irq-sunxi-nmi.c
++++ b/drivers/irqchip/irq-sunxi-nmi.c
+@@ -186,7 +186,8 @@ static int __init sunxi_sc_nmi_irq_init(struct device_node *node,
+ gc->chip_types[0].chip.irq_unmask = irq_gc_mask_set_bit;
+ gc->chip_types[0].chip.irq_eoi = irq_gc_ack_set_bit;
+ gc->chip_types[0].chip.irq_set_type = sunxi_sc_nmi_set_type;
+- gc->chip_types[0].chip.flags = IRQCHIP_EOI_THREADED | IRQCHIP_EOI_IF_HANDLED;
++ gc->chip_types[0].chip.flags = IRQCHIP_EOI_THREADED | IRQCHIP_EOI_IF_HANDLED |
++ IRQCHIP_SKIP_SET_WAKE;
+ gc->chip_types[0].regs.ack = reg_offs->pend;
+ gc->chip_types[0].regs.mask = reg_offs->enable;
+ gc->chip_types[0].regs.type = reg_offs->ctrl;
+diff --git a/drivers/net/wireless/realtek/rtl8xxxu/core.c b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+index f95898f68d68a5..4ce0c05c512910 100644
+--- a/drivers/net/wireless/realtek/rtl8xxxu/core.c
++++ b/drivers/net/wireless/realtek/rtl8xxxu/core.c
+@@ -8147,6 +8147,8 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x817e, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x8186, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x818a, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x317f, 0xff, 0xff, 0xff),
+@@ -8157,12 +8159,18 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x050d, 0x1102, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x050d, 0x11f2, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x06f8, 0xe033, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x07b8, 0x8188, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x07b8, 0x8189, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x0846, 0x9041, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x0846, 0x9043, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x0b05, 0x17ba, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(USB_VENDOR_ID_REALTEK, 0x1e1e, 0xff, 0xff, 0xff),
+@@ -8179,6 +8187,10 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x13d3, 0x3357, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x13d3, 0x3358, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x13d3, 0x3359, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x330b, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x2019, 0x4902, 0xff, 0xff, 0xff),
+@@ -8193,6 +8205,8 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x4856, 0x0091, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x9846, 0x9041, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0xcdab, 0x8010, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x04f2, 0xaff7, 0xff, 0xff, 0xff),
+@@ -8218,6 +8232,8 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x0586, 0x341f, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x06f8, 0xe033, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x06f8, 0xe035, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x0b05, 0x17ab, 0xff, 0xff, 0xff),
+@@ -8226,6 +8242,8 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x0df6, 0x0070, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x0df6, 0x0077, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x0789, 0x016d, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x07aa, 0x0056, 0xff, 0xff, 0xff),
+@@ -8248,6 +8266,8 @@ static const struct usb_device_id dev_table[] = {
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x330a, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
++{USB_DEVICE_AND_INTERFACE_INFO(0x2001, 0x330d, 0xff, 0xff, 0xff),
++ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x2019, 0xab2b, 0xff, 0xff, 0xff),
+ .driver_info = (unsigned long)&rtl8192cu_fops},
+ {USB_DEVICE_AND_INTERFACE_INFO(0x20f4, 0x624d, 0xff, 0xff, 0xff),
+diff --git a/drivers/of/unittest-data/tests-platform.dtsi b/drivers/of/unittest-data/tests-platform.dtsi
+index fa39611071b32f..cd310b26b50c81 100644
+--- a/drivers/of/unittest-data/tests-platform.dtsi
++++ b/drivers/of/unittest-data/tests-platform.dtsi
+@@ -34,5 +34,18 @@ dev@100 {
+ };
+ };
+ };
++
++ platform-tests-2 {
++ // No #address-cells or #size-cells
++ node {
++ #address-cells = <1>;
++ #size-cells = <1>;
++
++ test-device@100 {
++ compatible = "test-sub-device";
++ reg = <0x100 1>;
++ };
++ };
++ };
+ };
+ };
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index daf9a2dddd7e0d..576e9beefc7c8f 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -1342,6 +1342,7 @@ static void __init of_unittest_bus_3cell_ranges(void)
+ static void __init of_unittest_reg(void)
+ {
+ struct device_node *np;
++ struct resource res;
+ int ret;
+ u64 addr, size;
+
+@@ -1358,6 +1359,19 @@ static void __init of_unittest_reg(void)
+ np, addr);
+
+ of_node_put(np);
++
++ np = of_find_node_by_path("/testcase-data/platform-tests-2/node/test-device@100");
++ if (!np) {
++ pr_err("missing testcase data\n");
++ return;
++ }
++
++ ret = of_address_to_resource(np, 0, &res);
++ unittest(ret == -EINVAL, "of_address_to_resource(%pOF) expected error on untranslatable address\n",
++ np);
++
++ of_node_put(np);
++
+ }
+
+ struct of_unittest_expected_res {
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index fde7de3b1e5538..9b47f91c5b9720 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -4104,7 +4104,7 @@ iscsi_if_rx(struct sk_buff *skb)
+ }
+ do {
+ /*
+- * special case for GET_STATS:
++ * special case for GET_STATS, GET_CHAP and GET_HOST_STATS:
+ * on success - sending reply and stats from
+ * inside of if_recv_msg(),
+ * on error - fall through.
+@@ -4113,6 +4113,8 @@ iscsi_if_rx(struct sk_buff *skb)
+ break;
+ if (ev->type == ISCSI_UEVENT_GET_CHAP && !err)
+ break;
++ if (ev->type == ISCSI_UEVENT_GET_HOST_STATS && !err)
++ break;
+ err = iscsi_if_send_reply(portid, nlh->nlmsg_type,
+ ev, sizeof(*ev));
+ if (err == -EAGAIN && --retries < 0) {
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index d0b55c1fa908a5..b3c588b102d900 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -171,6 +171,12 @@ do { \
+ dev_warn(&(dev)->device, fmt, ##__VA_ARGS__); \
+ } while (0)
+
++#define storvsc_log_ratelimited(dev, level, fmt, ...) \
++do { \
++ if (do_logging(level)) \
++ dev_warn_ratelimited(&(dev)->device, fmt, ##__VA_ARGS__); \
++} while (0)
++
+ struct vmscsi_request {
+ u16 length;
+ u8 srb_status;
+@@ -1177,7 +1183,7 @@ static void storvsc_on_io_completion(struct storvsc_device *stor_device,
+ int loglevel = (stor_pkt->vm_srb.cdb[0] == TEST_UNIT_READY) ?
+ STORVSC_LOGGING_WARN : STORVSC_LOGGING_ERROR;
+
+- storvsc_log(device, loglevel,
++ storvsc_log_ratelimited(device, loglevel,
+ "tag#%d cmd 0x%x status: scsi 0x%x srb 0x%x hv 0x%x\n",
+ scsi_cmd_to_rq(request->cmd)->tag,
+ stor_pkt->vm_srb.cdb[0],
+diff --git a/drivers/usb/gadget/function/u_serial.c b/drivers/usb/gadget/function/u_serial.c
+index bc143a86c2ddf0..53d9fc41acc522 100644
+--- a/drivers/usb/gadget/function/u_serial.c
++++ b/drivers/usb/gadget/function/u_serial.c
+@@ -1420,10 +1420,6 @@ void gserial_disconnect(struct gserial *gser)
+ /* REVISIT as above: how best to track this? */
+ port->port_line_coding = gser->port_line_coding;
+
+- /* disable endpoints, aborting down any active I/O */
+- usb_ep_disable(gser->out);
+- usb_ep_disable(gser->in);
+-
+ port->port_usb = NULL;
+ gser->ioport = NULL;
+ if (port->port.count > 0) {
+@@ -1435,6 +1431,10 @@ void gserial_disconnect(struct gserial *gser)
+ spin_unlock(&port->port_lock);
+ spin_unlock_irqrestore(&serial_port_lock, flags);
+
++ /* disable endpoints, aborting down any active I/O */
++ usb_ep_disable(gser->out);
++ usb_ep_disable(gser->in);
++
+ /* finally, free any unused/unusable I/O buffers */
+ spin_lock_irqsave(&port->port_lock, flags);
+ if (port->port.count == 0)
+diff --git a/drivers/usb/serial/quatech2.c b/drivers/usb/serial/quatech2.c
+index a317bdbd00ad5c..72fe83a6c97801 100644
+--- a/drivers/usb/serial/quatech2.c
++++ b/drivers/usb/serial/quatech2.c
+@@ -503,7 +503,7 @@ static void qt2_process_read_urb(struct urb *urb)
+
+ newport = *(ch + 3);
+
+- if (newport > serial->num_ports) {
++ if (newport >= serial->num_ports) {
+ dev_err(&port->dev,
+ "%s - port change to invalid port: %i\n",
+ __func__, newport);
+diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
+index e53757d1d0958a..3bf1043cd7957c 100644
+--- a/drivers/vfio/platform/vfio_platform_common.c
++++ b/drivers/vfio/platform/vfio_platform_common.c
+@@ -388,6 +388,11 @@ static ssize_t vfio_platform_read_mmio(struct vfio_platform_region *reg,
+ {
+ unsigned int done = 0;
+
++ if (off >= reg->size)
++ return -EINVAL;
++
++ count = min_t(size_t, count, reg->size - off);
++
+ if (!reg->ioaddr) {
+ reg->ioaddr =
+ ioremap(reg->addr, reg->size);
+@@ -467,6 +472,11 @@ static ssize_t vfio_platform_write_mmio(struct vfio_platform_region *reg,
+ {
+ unsigned int done = 0;
+
++ if (off >= reg->size)
++ return -EINVAL;
++
++ count = min_t(size_t, count, reg->size - off);
++
+ if (!reg->ioaddr) {
+ reg->ioaddr =
+ ioremap(reg->addr, reg->size);
+diff --git a/fs/gfs2/file.c b/fs/gfs2/file.c
+index f7dd64856c9bff..ac6f50167076d8 100644
+--- a/fs/gfs2/file.c
++++ b/fs/gfs2/file.c
+@@ -251,6 +251,7 @@ static int do_gfs2_set_flags(struct inode *inode, u32 reqflags, u32 mask)
+ error = filemap_fdatawait(inode->i_mapping);
+ if (error)
+ goto out;
++ truncate_inode_pages(inode->i_mapping, 0);
+ if (new_flags & GFS2_DIF_JDATA)
+ gfs2_ordered_del_inode(ip);
+ }
+diff --git a/fs/libfs.c b/fs/libfs.c
+index 46966fd8bcf9f0..b0f262223b5351 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -241,9 +241,16 @@ const struct inode_operations simple_dir_inode_operations = {
+ };
+ EXPORT_SYMBOL(simple_dir_inode_operations);
+
+-/* 0 is '.', 1 is '..', so always start with offset 2 or more */
++/* simple_offset_add() never assigns these to a dentry */
+ enum {
+- DIR_OFFSET_MIN = 2,
++ DIR_OFFSET_FIRST = 2, /* Find first real entry */
++ DIR_OFFSET_EOD = S32_MAX,
++};
++
++/* simple_offset_add() allocation range */
++enum {
++ DIR_OFFSET_MIN = DIR_OFFSET_FIRST + 1,
++ DIR_OFFSET_MAX = DIR_OFFSET_EOD - 1,
+ };
+
+ static void offset_set(struct dentry *dentry, long offset)
+@@ -287,9 +294,10 @@ int simple_offset_add(struct offset_ctx *octx, struct dentry *dentry)
+ return -EBUSY;
+
+ ret = mtree_alloc_cyclic(&octx->mt, &offset, dentry, DIR_OFFSET_MIN,
+- LONG_MAX, &octx->next_offset, GFP_KERNEL);
+- if (ret < 0)
+- return ret;
++ DIR_OFFSET_MAX, &octx->next_offset,
++ GFP_KERNEL);
++ if (unlikely(ret < 0))
++ return ret == -EBUSY ? -ENOSPC : ret;
+
+ offset_set(dentry, offset);
+ return 0;
+@@ -325,38 +333,6 @@ void simple_offset_remove(struct offset_ctx *octx, struct dentry *dentry)
+ offset_set(dentry, 0);
+ }
+
+-/**
+- * simple_offset_empty - Check if a dentry can be unlinked
+- * @dentry: dentry to be tested
+- *
+- * Returns 0 if @dentry is a non-empty directory; otherwise returns 1.
+- */
+-int simple_offset_empty(struct dentry *dentry)
+-{
+- struct inode *inode = d_inode(dentry);
+- struct offset_ctx *octx;
+- struct dentry *child;
+- unsigned long index;
+- int ret = 1;
+-
+- if (!inode || !S_ISDIR(inode->i_mode))
+- return ret;
+-
+- index = DIR_OFFSET_MIN;
+- octx = inode->i_op->get_offset_ctx(inode);
+- mt_for_each(&octx->mt, child, index, LONG_MAX) {
+- spin_lock(&child->d_lock);
+- if (simple_positive(child)) {
+- spin_unlock(&child->d_lock);
+- ret = 0;
+- break;
+- }
+- spin_unlock(&child->d_lock);
+- }
+-
+- return ret;
+-}
+-
+ /**
+ * simple_offset_rename - handle directory offsets for rename
+ * @old_dir: parent directory of source entry
+@@ -450,14 +426,6 @@ void simple_offset_destroy(struct offset_ctx *octx)
+ mtree_destroy(&octx->mt);
+ }
+
+-static int offset_dir_open(struct inode *inode, struct file *file)
+-{
+- struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode);
+-
+- file->private_data = (void *)ctx->next_offset;
+- return 0;
+-}
+-
+ /**
+ * offset_dir_llseek - Advance the read position of a directory descriptor
+ * @file: an open directory whose position is to be updated
+@@ -471,9 +439,6 @@ static int offset_dir_open(struct inode *inode, struct file *file)
+ */
+ static loff_t offset_dir_llseek(struct file *file, loff_t offset, int whence)
+ {
+- struct inode *inode = file->f_inode;
+- struct offset_ctx *ctx = inode->i_op->get_offset_ctx(inode);
+-
+ switch (whence) {
+ case SEEK_CUR:
+ offset += file->f_pos;
+@@ -486,62 +451,89 @@ static loff_t offset_dir_llseek(struct file *file, loff_t offset, int whence)
+ return -EINVAL;
+ }
+
+- /* In this case, ->private_data is protected by f_pos_lock */
+- if (!offset)
+- file->private_data = (void *)ctx->next_offset;
+ return vfs_setpos(file, offset, LONG_MAX);
+ }
+
+-static struct dentry *offset_find_next(struct offset_ctx *octx, loff_t offset)
++static struct dentry *find_positive_dentry(struct dentry *parent,
++ struct dentry *dentry,
++ bool next)
+ {
+- MA_STATE(mas, &octx->mt, offset, offset);
++ struct dentry *found = NULL;
++
++ spin_lock(&parent->d_lock);
++ if (next)
++ dentry = d_next_sibling(dentry);
++ else if (!dentry)
++ dentry = d_first_child(parent);
++ hlist_for_each_entry_from(dentry, d_sib) {
++ if (!simple_positive(dentry))
++ continue;
++ spin_lock_nested(&dentry->d_lock, DENTRY_D_LOCK_NESTED);
++ if (simple_positive(dentry))
++ found = dget_dlock(dentry);
++ spin_unlock(&dentry->d_lock);
++ if (likely(found))
++ break;
++ }
++ spin_unlock(&parent->d_lock);
++ return found;
++}
++
++static noinline_for_stack struct dentry *
++offset_dir_lookup(struct dentry *parent, loff_t offset)
++{
++ struct inode *inode = d_inode(parent);
++ struct offset_ctx *octx = inode->i_op->get_offset_ctx(inode);
+ struct dentry *child, *found = NULL;
+
+- rcu_read_lock();
+- child = mas_find(&mas, LONG_MAX);
+- if (!child)
+- goto out;
+- spin_lock(&child->d_lock);
+- if (simple_positive(child))
+- found = dget_dlock(child);
+- spin_unlock(&child->d_lock);
+-out:
+- rcu_read_unlock();
++ MA_STATE(mas, &octx->mt, offset, offset);
++
++ if (offset == DIR_OFFSET_FIRST)
++ found = find_positive_dentry(parent, NULL, false);
++ else {
++ rcu_read_lock();
++ child = mas_find(&mas, DIR_OFFSET_MAX);
++ found = find_positive_dentry(parent, child, false);
++ rcu_read_unlock();
++ }
+ return found;
+ }
+
+ static bool offset_dir_emit(struct dir_context *ctx, struct dentry *dentry)
+ {
+ struct inode *inode = d_inode(dentry);
+- long offset = dentry2offset(dentry);
+
+- return ctx->actor(ctx, dentry->d_name.name, dentry->d_name.len, offset,
+- inode->i_ino, fs_umode_to_dtype(inode->i_mode));
++ return dir_emit(ctx, dentry->d_name.name, dentry->d_name.len,
++ inode->i_ino, fs_umode_to_dtype(inode->i_mode));
+ }
+
+-static void offset_iterate_dir(struct inode *inode, struct dir_context *ctx, long last_index)
++static void offset_iterate_dir(struct file *file, struct dir_context *ctx)
+ {
+- struct offset_ctx *octx = inode->i_op->get_offset_ctx(inode);
++ struct dentry *dir = file->f_path.dentry;
+ struct dentry *dentry;
+
++ dentry = offset_dir_lookup(dir, ctx->pos);
++ if (!dentry)
++ goto out_eod;
+ while (true) {
+- dentry = offset_find_next(octx, ctx->pos);
+- if (!dentry)
+- return;
+-
+- if (dentry2offset(dentry) >= last_index) {
+- dput(dentry);
+- return;
+- }
++ struct dentry *next;
+
+- if (!offset_dir_emit(ctx, dentry)) {
+- dput(dentry);
+- return;
+- }
++ ctx->pos = dentry2offset(dentry);
++ if (!offset_dir_emit(ctx, dentry))
++ break;
+
+- ctx->pos = dentry2offset(dentry) + 1;
++ next = find_positive_dentry(dir, dentry, true);
+ dput(dentry);
++
++ if (!next)
++ goto out_eod;
++ dentry = next;
+ }
++ dput(dentry);
++ return;
++
++out_eod:
++ ctx->pos = DIR_OFFSET_EOD;
+ }
+
+ /**
+@@ -561,6 +553,8 @@ static void offset_iterate_dir(struct inode *inode, struct dir_context *ctx, lon
+ *
+ * On return, @ctx->pos contains an offset that will read the next entry
+ * in this directory when offset_readdir() is called again with @ctx.
++ * Caller places this value in the d_off field of the last entry in the
++ * user's buffer.
+ *
+ * Return values:
+ * %0 - Complete
+@@ -568,19 +562,17 @@ static void offset_iterate_dir(struct inode *inode, struct dir_context *ctx, lon
+ static int offset_readdir(struct file *file, struct dir_context *ctx)
+ {
+ struct dentry *dir = file->f_path.dentry;
+- long last_index = (long)file->private_data;
+
+ lockdep_assert_held(&d_inode(dir)->i_rwsem);
+
+ if (!dir_emit_dots(file, ctx))
+ return 0;
+-
+- offset_iterate_dir(d_inode(dir), ctx, last_index);
++ if (ctx->pos != DIR_OFFSET_EOD)
++ offset_iterate_dir(file, ctx);
+ return 0;
+ }
+
+ const struct file_operations simple_offset_dir_operations = {
+- .open = offset_dir_open,
+ .llseek = offset_dir_llseek,
+ .iterate_shared = offset_readdir,
+ .read = generic_read_dir,
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index a55f0044d30bde..b935c1a62d10cf 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -176,27 +176,27 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ struct kvec *out_iov, int *out_buftype, struct dentry *dentry)
+ {
+
+- struct reparse_data_buffer *rbuf;
++ struct smb2_query_info_rsp *qi_rsp = NULL;
+ struct smb2_compound_vars *vars = NULL;
+- struct kvec *rsp_iov, *iov;
+- struct smb_rqst *rqst;
+- int rc;
+- __le16 *utf16_path = NULL;
+ __u8 oplock = SMB2_OPLOCK_LEVEL_NONE;
+- struct cifs_fid fid;
++ struct cifs_open_info_data *idata;
+ struct cifs_ses *ses = tcon->ses;
++ struct reparse_data_buffer *rbuf;
+ struct TCP_Server_Info *server;
+- int num_rqst = 0, i;
+ int resp_buftype[MAX_COMPOUND];
+- struct smb2_query_info_rsp *qi_rsp = NULL;
+- struct cifs_open_info_data *idata;
++ int retries = 0, cur_sleep = 1;
++ __u8 delete_pending[8] = {1,};
++ struct kvec *rsp_iov, *iov;
+ struct inode *inode = NULL;
+- int flags = 0;
+- __u8 delete_pending[8] = {1, 0, 0, 0, 0, 0, 0, 0};
++ __le16 *utf16_path = NULL;
++ struct smb_rqst *rqst;
+ unsigned int size[2];
+- void *data[2];
++ struct cifs_fid fid;
++ int num_rqst = 0, i;
+ unsigned int len;
+- int retries = 0, cur_sleep = 1;
++ int tmp_rc, rc;
++ int flags = 0;
++ void *data[2];
+
+ replay_again:
+ /* reinitialize for possible replay */
+@@ -637,7 +637,14 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ tcon->need_reconnect = true;
+ }
+
++ tmp_rc = rc;
+ for (i = 0; i < num_cmds; i++) {
++ char *buf = rsp_iov[i + i].iov_base;
++
++ if (buf && resp_buftype[i + 1] != CIFS_NO_BUFFER)
++ rc = server->ops->map_error(buf, false);
++ else
++ rc = tmp_rc;
+ switch (cmds[i]) {
+ case SMB2_OP_QUERY_INFO:
+ idata = in_iov[i].iov_base;
+@@ -803,6 +810,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ }
+ }
+ SMB2_close_free(&rqst[num_rqst]);
++ rc = tmp_rc;
+
+ num_cmds += 2;
+ if (out_iov && out_buftype) {
+@@ -858,22 +866,52 @@ static int parse_create_response(struct cifs_open_info_data *data,
+ return rc;
+ }
+
++/* Check only if SMB2_OP_QUERY_WSL_EA command failed in the compound chain */
++static bool ea_unsupported(int *cmds, int num_cmds,
++ struct kvec *out_iov, int *out_buftype)
++{
++ int i;
++
++ if (cmds[num_cmds - 1] != SMB2_OP_QUERY_WSL_EA)
++ return false;
++
++ for (i = 1; i < num_cmds - 1; i++) {
++ struct smb2_hdr *hdr = out_iov[i].iov_base;
++
++ if (out_buftype[i] == CIFS_NO_BUFFER || !hdr ||
++ hdr->Status != STATUS_SUCCESS)
++ return false;
++ }
++ return true;
++}
++
++static inline void free_rsp_iov(struct kvec *iovs, int *buftype, int count)
++{
++ int i;
++
++ for (i = 0; i < count; i++) {
++ free_rsp_buf(buftype[i], iovs[i].iov_base);
++ memset(&iovs[i], 0, sizeof(*iovs));
++ buftype[i] = CIFS_NO_BUFFER;
++ }
++}
++
+ int smb2_query_path_info(const unsigned int xid,
+ struct cifs_tcon *tcon,
+ struct cifs_sb_info *cifs_sb,
+ const char *full_path,
+ struct cifs_open_info_data *data)
+ {
++ struct kvec in_iov[3], out_iov[5] = {};
++ struct cached_fid *cfid = NULL;
+ struct cifs_open_parms oparms;
+- __u32 create_options = 0;
+ struct cifsFileInfo *cfile;
+- struct cached_fid *cfid = NULL;
++ __u32 create_options = 0;
++ int out_buftype[5] = {};
+ struct smb2_hdr *hdr;
+- struct kvec in_iov[3], out_iov[3] = {};
+- int out_buftype[3] = {};
++ int num_cmds = 0;
+ int cmds[3];
+ bool islink;
+- int i, num_cmds = 0;
+ int rc, rc2;
+
+ data->adjust_tz = false;
+@@ -943,14 +981,14 @@ int smb2_query_path_info(const unsigned int xid,
+ if (rc || !data->reparse_point)
+ goto out;
+
+- if (!tcon->posix_extensions)
+- cmds[num_cmds++] = SMB2_OP_QUERY_WSL_EA;
+ /*
+ * Skip SMB2_OP_GET_REPARSE if symlink already parsed in create
+ * response.
+ */
+ if (data->reparse.tag != IO_REPARSE_TAG_SYMLINK)
+ cmds[num_cmds++] = SMB2_OP_GET_REPARSE;
++ if (!tcon->posix_extensions)
++ cmds[num_cmds++] = SMB2_OP_QUERY_WSL_EA;
+
+ oparms = CIFS_OPARMS(cifs_sb, tcon, full_path,
+ FILE_READ_ATTRIBUTES |
+@@ -958,9 +996,18 @@ int smb2_query_path_info(const unsigned int xid,
+ FILE_OPEN, create_options |
+ OPEN_REPARSE_POINT, ACL_NO_MODE);
+ cifs_get_readable_path(tcon, full_path, &cfile);
++ free_rsp_iov(out_iov, out_buftype, ARRAY_SIZE(out_iov));
+ rc = smb2_compound_op(xid, tcon, cifs_sb, full_path,
+ &oparms, in_iov, cmds, num_cmds,
+- cfile, NULL, NULL, NULL);
++ cfile, out_iov, out_buftype, NULL);
++ if (rc && ea_unsupported(cmds, num_cmds,
++ out_iov, out_buftype)) {
++ if (data->reparse.tag != IO_REPARSE_TAG_LX_BLK &&
++ data->reparse.tag != IO_REPARSE_TAG_LX_CHR)
++ rc = 0;
++ else
++ rc = -EOPNOTSUPP;
++ }
+ break;
+ case -EREMOTE:
+ break;
+@@ -978,8 +1025,7 @@ int smb2_query_path_info(const unsigned int xid,
+ }
+
+ out:
+- for (i = 0; i < ARRAY_SIZE(out_buftype); i++)
+- free_rsp_buf(out_buftype[i], out_iov[i].iov_base);
++ free_rsp_iov(out_iov, out_buftype, ARRAY_SIZE(out_iov));
+ return rc;
+ }
+
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index 4b5cad44a12683..fc3de42d9d764f 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -3434,7 +3434,6 @@ struct offset_ctx {
+ void simple_offset_init(struct offset_ctx *octx);
+ int simple_offset_add(struct offset_ctx *octx, struct dentry *dentry);
+ void simple_offset_remove(struct offset_ctx *octx, struct dentry *dentry);
+-int simple_offset_empty(struct dentry *dentry);
+ int simple_offset_rename(struct inode *old_dir, struct dentry *old_dentry,
+ struct inode *new_dir, struct dentry *new_dentry);
+ int simple_offset_rename_exchange(struct inode *old_dir,
+diff --git a/include/linux/seccomp.h b/include/linux/seccomp.h
+index 709ad84809e1ea..8934c7da47f4c3 100644
+--- a/include/linux/seccomp.h
++++ b/include/linux/seccomp.h
+@@ -50,10 +50,10 @@ struct seccomp_data;
+
+ #ifdef CONFIG_HAVE_ARCH_SECCOMP_FILTER
+ static inline int secure_computing(void) { return 0; }
+-static inline int __secure_computing(const struct seccomp_data *sd) { return 0; }
+ #else
+ static inline void secure_computing_strict(int this_syscall) { return; }
+ #endif
++static inline int __secure_computing(const struct seccomp_data *sd) { return 0; }
+
+ static inline long prctl_get_seccomp(void)
+ {
+diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c
+index 6f3b6de230bd2b..a67bae350416b3 100644
+--- a/io_uring/rsrc.c
++++ b/io_uring/rsrc.c
+@@ -1153,6 +1153,13 @@ static int io_clone_buffers(struct io_ring_ctx *ctx, struct io_ring_ctx *src_ctx
+ struct io_rsrc_data *data;
+ int i, ret, nbufs;
+
++ /*
++ * Accounting state is shared between the two rings; that only works if
++ * both rings are accounted towards the same counters.
++ */
++ if (ctx->user != src_ctx->user || ctx->mm_account != src_ctx->mm_account)
++ return -EINVAL;
++
+ /*
+ * Drop our own lock here. We'll setup the data we need and reference
+ * the source buffers, then re-grab, check, and assign at the end.
+diff --git a/mm/filemap.c b/mm/filemap.c
+index dc83baab85a140..05adf0392625da 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -4383,6 +4383,20 @@ static void filemap_cachestat(struct address_space *mapping,
+ rcu_read_unlock();
+ }
+
++/*
++ * See mincore: reveal pagecache information only for files
++ * that the calling process has write access to, or could (if
++ * tried) open for writing.
++ */
++static inline bool can_do_cachestat(struct file *f)
++{
++ if (f->f_mode & FMODE_WRITE)
++ return true;
++ if (inode_owner_or_capable(file_mnt_idmap(f), file_inode(f)))
++ return true;
++ return file_permission(f, MAY_WRITE) == 0;
++}
++
+ /*
+ * The cachestat(2) system call.
+ *
+@@ -4442,6 +4456,11 @@ SYSCALL_DEFINE4(cachestat, unsigned int, fd,
+ return -EOPNOTSUPP;
+ }
+
++ if (!can_do_cachestat(fd_file(f))) {
++ fdput(f);
++ return -EPERM;
++ }
++
+ if (flags != 0) {
+ fdput(f);
+ return -EINVAL;
+diff --git a/mm/shmem.c b/mm/shmem.c
+index dd4eb11c84b59e..5960e5035f9835 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -3700,7 +3700,7 @@ static int shmem_unlink(struct inode *dir, struct dentry *dentry)
+
+ static int shmem_rmdir(struct inode *dir, struct dentry *dentry)
+ {
+- if (!simple_offset_empty(dentry))
++ if (!simple_empty(dentry))
+ return -ENOTEMPTY;
+
+ drop_nlink(d_inode(dentry));
+@@ -3757,7 +3757,7 @@ static int shmem_rename2(struct mnt_idmap *idmap,
+ return simple_offset_rename_exchange(old_dir, old_dentry,
+ new_dir, new_dentry);
+
+- if (!simple_offset_empty(new_dentry))
++ if (!simple_empty(new_dentry))
+ return -ENOTEMPTY;
+
+ if (flags & RENAME_WHITEOUT) {
+diff --git a/mm/zswap.c b/mm/zswap.c
+index 0030ce8fecfc56..7fefb2eb3fcd80 100644
+--- a/mm/zswap.c
++++ b/mm/zswap.c
+@@ -251,7 +251,7 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
+ struct zswap_pool *pool;
+ char name[38]; /* 'zswap' + 32 char (max) num + \0 */
+ gfp_t gfp = __GFP_NORETRY | __GFP_NOWARN | __GFP_KSWAPD_RECLAIM;
+- int ret;
++ int ret, cpu;
+
+ if (!zswap_has_pool) {
+ /* if either are unset, pool initialization failed, and we
+@@ -285,6 +285,9 @@ static struct zswap_pool *zswap_pool_create(char *type, char *compressor)
+ goto error;
+ }
+
++ for_each_possible_cpu(cpu)
++ mutex_init(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);
++
+ ret = cpuhp_state_add_instance(CPUHP_MM_ZSWP_POOL_PREPARE,
+ &pool->node);
+ if (ret)
+@@ -812,36 +815,41 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
+ {
+ struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
+ struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
+- struct crypto_acomp *acomp;
+- struct acomp_req *req;
++ struct crypto_acomp *acomp = NULL;
++ struct acomp_req *req = NULL;
++ u8 *buffer = NULL;
+ int ret;
+
+- mutex_init(&acomp_ctx->mutex);
+-
+- acomp_ctx->buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu));
+- if (!acomp_ctx->buffer)
+- return -ENOMEM;
++ buffer = kmalloc_node(PAGE_SIZE * 2, GFP_KERNEL, cpu_to_node(cpu));
++ if (!buffer) {
++ ret = -ENOMEM;
++ goto fail;
++ }
+
+ acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu));
+ if (IS_ERR(acomp)) {
+ pr_err("could not alloc crypto acomp %s : %ld\n",
+ pool->tfm_name, PTR_ERR(acomp));
+ ret = PTR_ERR(acomp);
+- goto acomp_fail;
++ goto fail;
+ }
+- acomp_ctx->acomp = acomp;
+- acomp_ctx->is_sleepable = acomp_is_async(acomp);
+
+- req = acomp_request_alloc(acomp_ctx->acomp);
++ req = acomp_request_alloc(acomp);
+ if (!req) {
+ pr_err("could not alloc crypto acomp_request %s\n",
+ pool->tfm_name);
+ ret = -ENOMEM;
+- goto req_fail;
++ goto fail;
+ }
+- acomp_ctx->req = req;
+
++ /*
++ * Only hold the mutex after completing allocations, otherwise we may
++ * recurse into zswap through reclaim and attempt to hold the mutex
++ * again resulting in a deadlock.
++ */
++ mutex_lock(&acomp_ctx->mutex);
+ crypto_init_wait(&acomp_ctx->wait);
++
+ /*
+ * if the backend of acomp is async zip, crypto_req_done() will wakeup
+ * crypto_wait_req(); if the backend of acomp is scomp, the callback
+@@ -850,12 +858,17 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
+ acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+ crypto_req_done, &acomp_ctx->wait);
+
++ acomp_ctx->buffer = buffer;
++ acomp_ctx->acomp = acomp;
++ acomp_ctx->is_sleepable = acomp_is_async(acomp);
++ acomp_ctx->req = req;
++ mutex_unlock(&acomp_ctx->mutex);
+ return 0;
+
+-req_fail:
+- crypto_free_acomp(acomp_ctx->acomp);
+-acomp_fail:
+- kfree(acomp_ctx->buffer);
++fail:
++ if (acomp)
++ crypto_free_acomp(acomp);
++ kfree(buffer);
+ return ret;
+ }
+
+@@ -864,17 +877,45 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
+ struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
+ struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
+
++ mutex_lock(&acomp_ctx->mutex);
+ if (!IS_ERR_OR_NULL(acomp_ctx)) {
+ if (!IS_ERR_OR_NULL(acomp_ctx->req))
+ acomp_request_free(acomp_ctx->req);
++ acomp_ctx->req = NULL;
+ if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
+ crypto_free_acomp(acomp_ctx->acomp);
+ kfree(acomp_ctx->buffer);
+ }
++ mutex_unlock(&acomp_ctx->mutex);
+
+ return 0;
+ }
+
++static struct crypto_acomp_ctx *acomp_ctx_get_cpu_lock(struct zswap_pool *pool)
++{
++ struct crypto_acomp_ctx *acomp_ctx;
++
++ for (;;) {
++ acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
++ mutex_lock(&acomp_ctx->mutex);
++ if (likely(acomp_ctx->req))
++ return acomp_ctx;
++ /*
++ * It is possible that we were migrated to a different CPU after
++ * getting the per-CPU ctx but before the mutex was acquired. If
++ * the old CPU got offlined, zswap_cpu_comp_dead() could have
++ * already freed ctx->req (among other things) and set it to
++ * NULL. Just try again on the new CPU that we ended up on.
++ */
++ mutex_unlock(&acomp_ctx->mutex);
++ }
++}
++
++static void acomp_ctx_put_unlock(struct crypto_acomp_ctx *acomp_ctx)
++{
++ mutex_unlock(&acomp_ctx->mutex);
++}
++
+ static bool zswap_compress(struct folio *folio, struct zswap_entry *entry)
+ {
+ struct crypto_acomp_ctx *acomp_ctx;
+@@ -887,10 +928,7 @@ static bool zswap_compress(struct folio *folio, struct zswap_entry *entry)
+ gfp_t gfp;
+ u8 *dst;
+
+- acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
+-
+- mutex_lock(&acomp_ctx->mutex);
+-
++ acomp_ctx = acomp_ctx_get_cpu_lock(entry->pool);
+ dst = acomp_ctx->buffer;
+ sg_init_table(&input, 1);
+ sg_set_folio(&input, folio, PAGE_SIZE, 0);
+@@ -943,7 +981,7 @@ static bool zswap_compress(struct folio *folio, struct zswap_entry *entry)
+ else if (alloc_ret)
+ zswap_reject_alloc_fail++;
+
+- mutex_unlock(&acomp_ctx->mutex);
++ acomp_ctx_put_unlock(acomp_ctx);
+ return comp_ret == 0 && alloc_ret == 0;
+ }
+
+@@ -954,9 +992,7 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
+ struct crypto_acomp_ctx *acomp_ctx;
+ u8 *src;
+
+- acomp_ctx = raw_cpu_ptr(entry->pool->acomp_ctx);
+- mutex_lock(&acomp_ctx->mutex);
+-
++ acomp_ctx = acomp_ctx_get_cpu_lock(entry->pool);
+ src = zpool_map_handle(zpool, entry->handle, ZPOOL_MM_RO);
+ /*
+ * If zpool_map_handle is atomic, we cannot reliably utilize its mapped buffer
+@@ -980,10 +1016,10 @@ static void zswap_decompress(struct zswap_entry *entry, struct folio *folio)
+ acomp_request_set_params(acomp_ctx->req, &input, &output, entry->length, PAGE_SIZE);
+ BUG_ON(crypto_wait_req(crypto_acomp_decompress(acomp_ctx->req), &acomp_ctx->wait));
+ BUG_ON(acomp_ctx->req->dlen != PAGE_SIZE);
+- mutex_unlock(&acomp_ctx->mutex);
+
+ if (src != acomp_ctx->buffer)
+ zpool_unmap_handle(zpool, entry->handle);
++ acomp_ctx_put_unlock(acomp_ctx);
+ }
+
+ /*********************************
+diff --git a/net/sched/sch_ets.c b/net/sched/sch_ets.c
+index f80bc05d4c5a50..516038a4416380 100644
+--- a/net/sched/sch_ets.c
++++ b/net/sched/sch_ets.c
+@@ -91,6 +91,8 @@ ets_class_from_arg(struct Qdisc *sch, unsigned long arg)
+ {
+ struct ets_sched *q = qdisc_priv(sch);
+
++ if (arg == 0 || arg > q->nbands)
++ return NULL;
+ return &q->classes[arg - 1];
+ }
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index a9f6138b59b0c1..8c4de5a253addf 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10916,8 +10916,8 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x38e0, "Yoga Y990 Intel VECO Dual", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38f8, "Yoga Book 9i", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x38df, "Y990 YG DUAL", ALC287_FIXUP_TAS2781_I2C),
+- SND_PCI_QUIRK(0x17aa, 0x38f9, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2),
+- SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x17aa, 0x38f9, "Thinkbook 16P Gen5", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD),
++ SND_PCI_QUIRK(0x17aa, 0x38fa, "Thinkbook 16P Gen5", ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD),
+ SND_PCI_QUIRK(0x17aa, 0x38fd, "ThinkBook plus Gen5 Hybrid", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3902, "Lenovo E50-80", ALC269_FIXUP_DMIC_THINKPAD_ACPI),
+ SND_PCI_QUIRK(0x17aa, 0x3913, "Lenovo 145", ALC236_FIXUP_LENOVO_INV_DMIC),
+diff --git a/sound/soc/codecs/Kconfig b/sound/soc/codecs/Kconfig
+index 7092842480ef17..0d9d1d250f2b5e 100644
+--- a/sound/soc/codecs/Kconfig
++++ b/sound/soc/codecs/Kconfig
+@@ -2397,6 +2397,7 @@ config SND_SOC_WM8993
+
+ config SND_SOC_WM8994
+ tristate
++ depends on MFD_WM8994
+
+ config SND_SOC_WM8995
+ tristate
+diff --git a/sound/soc/codecs/cs42l43.c b/sound/soc/codecs/cs42l43.c
+index d0098b4558b529..8ec4083cd3b807 100644
+--- a/sound/soc/codecs/cs42l43.c
++++ b/sound/soc/codecs/cs42l43.c
+@@ -2446,6 +2446,7 @@ static const struct dev_pm_ops cs42l43_codec_pm_ops = {
+ SYSTEM_SLEEP_PM_OPS(cs42l43_codec_suspend, cs42l43_codec_resume)
+ NOIRQ_SYSTEM_SLEEP_PM_OPS(cs42l43_codec_suspend_noirq, cs42l43_codec_resume_noirq)
+ RUNTIME_PM_OPS(NULL, cs42l43_codec_runtime_resume, NULL)
++ SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend, pm_runtime_force_resume)
+ };
+
+ static const struct platform_device_id cs42l43_codec_id_table[] = {
+diff --git a/sound/soc/codecs/es8316.c b/sound/soc/codecs/es8316.c
+index 61729e5b50a8e4..f508df01145bfb 100644
+--- a/sound/soc/codecs/es8316.c
++++ b/sound/soc/codecs/es8316.c
+@@ -39,7 +39,9 @@ struct es8316_priv {
+ struct snd_soc_jack *jack;
+ int irq;
+ unsigned int sysclk;
+- unsigned int allowed_rates[ARRAY_SIZE(supported_mclk_lrck_ratios)];
++ /* ES83xx supports halving the MCLK so it supports twice as many rates
++ */
++ unsigned int allowed_rates[ARRAY_SIZE(supported_mclk_lrck_ratios) * 2];
+ struct snd_pcm_hw_constraint_list sysclk_constraints;
+ bool jd_inverted;
+ };
+@@ -386,6 +388,12 @@ static int es8316_set_dai_sysclk(struct snd_soc_dai *codec_dai,
+
+ if (freq % ratio == 0)
+ es8316->allowed_rates[count++] = freq / ratio;
++
++ /* We also check if the halved MCLK produces a valid rate
++ * since the codec supports halving the MCLK.
++ */
++ if ((freq / ratio) % 2 == 0)
++ es8316->allowed_rates[count++] = freq / ratio / 2;
+ }
+
+ if (count) {
+diff --git a/sound/soc/samsung/Kconfig b/sound/soc/samsung/Kconfig
+index 4b1ea7b2c79617..60b4b7b7521554 100644
+--- a/sound/soc/samsung/Kconfig
++++ b/sound/soc/samsung/Kconfig
+@@ -127,8 +127,9 @@ config SND_SOC_SAMSUNG_TM2_WM5110
+
+ config SND_SOC_SAMSUNG_ARIES_WM8994
+ tristate "SoC I2S Audio support for WM8994 on Aries"
+- depends on SND_SOC_SAMSUNG && MFD_WM8994 && IIO && EXTCON
++ depends on SND_SOC_SAMSUNG && I2C && IIO && EXTCON
+ select SND_SOC_BT_SCO
++ select MFD_WM8994
+ select SND_SOC_WM8994
+ select SND_SAMSUNG_I2S
+ help
+@@ -140,8 +141,9 @@ config SND_SOC_SAMSUNG_ARIES_WM8994
+
+ config SND_SOC_SAMSUNG_MIDAS_WM1811
+ tristate "SoC I2S Audio support for Midas boards"
+- depends on SND_SOC_SAMSUNG && IIO
++ depends on SND_SOC_SAMSUNG && I2C && IIO
+ select SND_SAMSUNG_I2S
++ select MFD_WM8994
+ select SND_SOC_WM8994
+ help
+ Say Y if you want to add support for SoC audio on the Midas boards.
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 8ba0aff8be2ec2..7968d6a2f592ac 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2239,6 +2239,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ DEVICE_FLG(0x0c45, 0x6340, /* Sonix HD USB Camera */
+ QUIRK_FLAG_GET_SAMPLE_RATE),
++ DEVICE_FLG(0x0d8c, 0x0014, /* USB Audio Device */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ DEVICE_FLG(0x0ecb, 0x205c, /* JBL Quantum610 Wireless */
+ QUIRK_FLAG_FIXED_RATE),
+ DEVICE_FLG(0x0ecb, 0x2069, /* JBL Quantum810 Wireless */
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-08 11:26 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-02-08 11:26 UTC (permalink / raw)
To: gentoo-commits
commit: 2415ed0f24c93b7a8b46736d437efcca53246fd4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Feb 8 11:25:59 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Feb 8 11:25:59 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2415ed0f
Linux patch 6.12.13
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1012_linux-6.12.13.patch | 26355 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 26359 insertions(+)
diff --git a/0000_README b/0000_README
index 17858f75..ceb862e7 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch: 1011_linux-6.12.12.patch
From: https://www.kernel.org
Desc: Linux 6.12.12
+Patch: 1012_linux-6.12.13.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.13
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1012_linux-6.12.13.patch b/1012_linux-6.12.13.patch
new file mode 100644
index 00000000..784a2897
--- /dev/null
+++ b/1012_linux-6.12.13.patch
@@ -0,0 +1,26355 @@
+diff --git a/Documentation/core-api/symbol-namespaces.rst b/Documentation/core-api/symbol-namespaces.rst
+index 12e4aecdae9452..d1154eb438101a 100644
+--- a/Documentation/core-api/symbol-namespaces.rst
++++ b/Documentation/core-api/symbol-namespaces.rst
+@@ -68,7 +68,7 @@ is to define the default namespace in the ``Makefile`` of the subsystem. E.g. to
+ export all symbols defined in usb-common into the namespace USB_COMMON, add a
+ line like this to drivers/usb/common/Makefile::
+
+- ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=USB_COMMON
++ ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"USB_COMMON"'
+
+ That will affect all EXPORT_SYMBOL() and EXPORT_SYMBOL_GPL() statements. A
+ symbol exported with EXPORT_SYMBOL_NS() while this definition is present, will
+@@ -79,7 +79,7 @@ A second option to define the default namespace is directly in the compilation
+ unit as preprocessor statement. The above example would then read::
+
+ #undef DEFAULT_SYMBOL_NAMESPACE
+- #define DEFAULT_SYMBOL_NAMESPACE USB_COMMON
++ #define DEFAULT_SYMBOL_NAMESPACE "USB_COMMON"
+
+ within the corresponding compilation unit before any EXPORT_SYMBOL macro is
+ used.
+diff --git a/Documentation/devicetree/bindings/clock/imx93-clock.yaml b/Documentation/devicetree/bindings/clock/imx93-clock.yaml
+index ccb53c6b96c119..98c0800732ef5d 100644
+--- a/Documentation/devicetree/bindings/clock/imx93-clock.yaml
++++ b/Documentation/devicetree/bindings/clock/imx93-clock.yaml
+@@ -16,6 +16,7 @@ description: |
+ properties:
+ compatible:
+ enum:
++ - fsl,imx91-ccm
+ - fsl,imx93-ccm
+
+ reg:
+diff --git a/Documentation/devicetree/bindings/leds/leds-class-multicolor.yaml b/Documentation/devicetree/bindings/leds/leds-class-multicolor.yaml
+index e850a8894758df..bb40bb9e036ee0 100644
+--- a/Documentation/devicetree/bindings/leds/leds-class-multicolor.yaml
++++ b/Documentation/devicetree/bindings/leds/leds-class-multicolor.yaml
+@@ -27,7 +27,7 @@ properties:
+ description: |
+ For multicolor LED support this property should be defined as either
+ LED_COLOR_ID_RGB or LED_COLOR_ID_MULTI which can be found in
+- include/linux/leds/common.h.
++ include/dt-bindings/leds/common.h.
+ enum: [ 8, 9 ]
+
+ required:
+diff --git a/Documentation/devicetree/bindings/mfd/rohm,bd71815-pmic.yaml b/Documentation/devicetree/bindings/mfd/rohm,bd71815-pmic.yaml
+index bb81307dc11b89..4fc78efaa5504a 100644
+--- a/Documentation/devicetree/bindings/mfd/rohm,bd71815-pmic.yaml
++++ b/Documentation/devicetree/bindings/mfd/rohm,bd71815-pmic.yaml
+@@ -50,15 +50,15 @@ properties:
+ minimum: 0
+ maximum: 1
+
+- rohm,charger-sense-resistor-ohms:
+- minimum: 10000000
+- maximum: 50000000
++ rohm,charger-sense-resistor-micro-ohms:
++ minimum: 10000
++ maximum: 50000
+ description: |
+- BD71827 and BD71828 have SAR ADC for measuring charging currents.
+- External sense resistor (RSENSE in data sheet) should be used. If
+- something other but 30MOhm resistor is used the resistance value
+- should be given here in Ohms.
+- default: 30000000
++ BD71815 has SAR ADC for measuring charging currents. External sense
++ resistor (RSENSE in data sheet) should be used. If something other
++ but a 30 mOhm resistor is used the resistance value should be given
++ here in micro Ohms.
++ default: 30000
+
+ regulators:
+ $ref: /schemas/regulator/rohm,bd71815-regulator.yaml
+@@ -67,7 +67,7 @@ properties:
+
+ gpio-reserved-ranges:
+ description: |
+- Usage of BD71828 GPIO pins can be changed via OTP. This property can be
++ Usage of BD71815 GPIO pins can be changed via OTP. This property can be
+ used to mark the pins which should not be configured for GPIO. Please see
+ the ../gpio/gpio.txt for more information.
+
+@@ -113,7 +113,7 @@ examples:
+ gpio-controller;
+ #gpio-cells = <2>;
+
+- rohm,charger-sense-resistor-ohms = <10000000>;
++ rohm,charger-sense-resistor-micro-ohms = <10000>;
+
+ regulators {
+ buck1: buck1 {
+diff --git a/Documentation/devicetree/bindings/mmc/mmc-controller.yaml b/Documentation/devicetree/bindings/mmc/mmc-controller.yaml
+index 58ae298cd2fcf4..23884b8184a9df 100644
+--- a/Documentation/devicetree/bindings/mmc/mmc-controller.yaml
++++ b/Documentation/devicetree/bindings/mmc/mmc-controller.yaml
+@@ -25,7 +25,7 @@ properties:
+ "#address-cells":
+ const: 1
+ description: |
+- The cell is the slot ID if a function subnode is used.
++ The cell is the SDIO function number if a function subnode is used.
+
+ "#size-cells":
+ const: 0
+diff --git a/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml b/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
+index cd4aa27218a1b6..fa6743bb269d44 100644
+--- a/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
++++ b/Documentation/devicetree/bindings/regulator/mt6315-regulator.yaml
+@@ -35,10 +35,6 @@ properties:
+ $ref: regulator.yaml#
+ unevaluatedProperties: false
+
+- properties:
+- regulator-compatible:
+- pattern: "^vbuck[1-4]$"
+-
+ additionalProperties: false
+
+ required:
+@@ -56,7 +52,6 @@ examples:
+
+ regulators {
+ vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+ regulator-enable-ramp-delay = <256>;
+@@ -64,7 +59,6 @@ examples:
+ };
+
+ vbuck3 {
+- regulator-compatible = "vbuck3";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+ regulator-enable-ramp-delay = <256>;
+diff --git a/Documentation/driver-api/crypto/iaa/iaa-crypto.rst b/Documentation/driver-api/crypto/iaa/iaa-crypto.rst
+index bba40158dd5c5a..8e50b900d51c27 100644
+--- a/Documentation/driver-api/crypto/iaa/iaa-crypto.rst
++++ b/Documentation/driver-api/crypto/iaa/iaa-crypto.rst
+@@ -272,7 +272,7 @@ The available attributes are:
+ echo async_irq > /sys/bus/dsa/drivers/crypto/sync_mode
+
+ Async mode without interrupts (caller must poll) can be enabled by
+- writing 'async' to it::
++ writing 'async' to it (please see Caveat)::
+
+ echo async > /sys/bus/dsa/drivers/crypto/sync_mode
+
+@@ -283,6 +283,13 @@ The available attributes are:
+
+ The default mode is 'sync'.
+
++ Caveat: since the only mechanism that iaa_crypto currently implements
++ for async polling without interrupts is via the 'sync' mode as
++ described earlier, writing 'async' to
++ '/sys/bus/dsa/drivers/crypto/sync_mode' will internally enable the
++ 'sync' mode. This is to ensure correct iaa_crypto behavior until true
++ async polling without interrupts is enabled in iaa_crypto.
++
+ .. _iaa_default_config:
+
+ IAA Default Configuration
+diff --git a/Documentation/translations/it_IT/core-api/symbol-namespaces.rst b/Documentation/translations/it_IT/core-api/symbol-namespaces.rst
+index 17abc25ee4c1e4..6657f82c0101f1 100644
+--- a/Documentation/translations/it_IT/core-api/symbol-namespaces.rst
++++ b/Documentation/translations/it_IT/core-api/symbol-namespaces.rst
+@@ -69,7 +69,7 @@ Per esempio per esportare tutti i simboli definiti in usb-common nello spazio
+ dei nomi USB_COMMON, si può aggiungere la seguente linea in
+ drivers/usb/common/Makefile::
+
+- ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=USB_COMMON
++ ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"USB_COMMON"'
+
+ Questo cambierà tutte le macro EXPORT_SYMBOL() ed EXPORT_SYMBOL_GPL(). Invece,
+ un simbolo esportato con EXPORT_SYMBOL_NS() non verrà cambiato e il simbolo
+@@ -79,7 +79,7 @@ Una seconda possibilità è quella di definire il simbolo di preprocessore
+ direttamente nei file da compilare. L'esempio precedente diventerebbe::
+
+ #undef DEFAULT_SYMBOL_NAMESPACE
+- #define DEFAULT_SYMBOL_NAMESPACE USB_COMMON
++ #define DEFAULT_SYMBOL_NAMESPACE "USB_COMMON"
+
+ Questo va messo prima di un qualsiasi uso di EXPORT_SYMBOL.
+
+diff --git a/Documentation/translations/zh_CN/core-api/symbol-namespaces.rst b/Documentation/translations/zh_CN/core-api/symbol-namespaces.rst
+index bb16f0611046d3..f3e73834f7d7df 100644
+--- a/Documentation/translations/zh_CN/core-api/symbol-namespaces.rst
++++ b/Documentation/translations/zh_CN/core-api/symbol-namespaces.rst
+@@ -66,7 +66,7 @@
+ 子系统的 ``Makefile`` 中定义默认命名空间。例如,如果要将usb-common中定义的所有符号导
+ 出到USB_COMMON命名空间,可以在drivers/usb/common/Makefile中添加这样一行::
+
+- ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=USB_COMMON
++ ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"USB_COMMON"'
+
+ 这将影响所有 EXPORT_SYMBOL() 和 EXPORT_SYMBOL_GPL() 语句。当这个定义存在时,
+ 用EXPORT_SYMBOL_NS()导出的符号仍然会被导出到作为命名空间参数传递的命名空间中,
+@@ -76,7 +76,7 @@
+ 成::
+
+ #undef DEFAULT_SYMBOL_NAMESPACE
+- #define DEFAULT_SYMBOL_NAMESPACE USB_COMMON
++ #define DEFAULT_SYMBOL_NAMESPACE "USB_COMMON"
+
+ 应置于相关编译单元中任何 EXPORT_SYMBOL 宏之前
+
+diff --git a/Makefile b/Makefile
+index 9e6246e733eb94..5442ff45f963ed 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -509,7 +509,7 @@ KGZIP = gzip
+ KBZIP2 = bzip2
+ KLZOP = lzop
+ LZMA = lzma
+-LZ4 = lz4c
++LZ4 = lz4
+ XZ = xz
+ ZSTD = zstd
+
+diff --git a/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-yosemite4.dts b/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-yosemite4.dts
+index 98477792aa005a..14d17510310680 100644
+--- a/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-yosemite4.dts
++++ b/arch/arm/boot/dts/aspeed/aspeed-bmc-facebook-yosemite4.dts
+@@ -284,12 +284,12 @@ &i2c10 {
+ &i2c11 {
+ status = "okay";
+ power-sensor@10 {
+- compatible = "adi, adm1272";
++ compatible = "adi,adm1272";
+ reg = <0x10>;
+ };
+
+ power-sensor@12 {
+- compatible = "adi, adm1272";
++ compatible = "adi,adm1272";
+ reg = <0x12>;
+ };
+
+@@ -461,22 +461,20 @@ adc@1f {
+ };
+
+ pwm@20{
+- compatible = "max31790";
++ compatible = "maxim,max31790";
+ reg = <0x20>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+ };
+
+ gpio@22{
+ compatible = "ti,tca6424";
+ reg = <0x22>;
++ gpio-controller;
++ #gpio-cells = <2>;
+ };
+
+ pwm@23{
+- compatible = "max31790";
++ compatible = "maxim,max31790";
+ reg = <0x23>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+ };
+
+ adc@33 {
+@@ -511,22 +509,20 @@ adc@1f {
+ };
+
+ pwm@20{
+- compatible = "max31790";
++ compatible = "maxim,max31790";
+ reg = <0x20>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+ };
+
+ gpio@22{
+ compatible = "ti,tca6424";
+ reg = <0x22>;
++ gpio-controller;
++ #gpio-cells = <2>;
+ };
+
+ pwm@23{
+- compatible = "max31790";
++ compatible = "maxim,max31790";
+ reg = <0x23>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+ };
+
+ adc@33 {
+diff --git a/arch/arm/boot/dts/intel/socfpga/socfpga_arria10.dtsi b/arch/arm/boot/dts/intel/socfpga/socfpga_arria10.dtsi
+index 6b6e77596ffa86..b108265e9bde42 100644
+--- a/arch/arm/boot/dts/intel/socfpga/socfpga_arria10.dtsi
++++ b/arch/arm/boot/dts/intel/socfpga/socfpga_arria10.dtsi
+@@ -440,7 +440,7 @@ gmac0: ethernet@ff800000 {
+ clocks = <&l4_mp_clk>, <&peri_emac_ptp_clk>;
+ clock-names = "stmmaceth", "ptp_ref";
+ resets = <&rst EMAC0_RESET>, <&rst EMAC0_OCP_RESET>;
+- reset-names = "stmmaceth", "ahb";
++ reset-names = "stmmaceth", "stmmaceth-ocp";
+ snps,axi-config = <&socfpga_axi_setup>;
+ status = "disabled";
+ };
+@@ -460,7 +460,7 @@ gmac1: ethernet@ff802000 {
+ clocks = <&l4_mp_clk>, <&peri_emac_ptp_clk>;
+ clock-names = "stmmaceth", "ptp_ref";
+ resets = <&rst EMAC1_RESET>, <&rst EMAC1_OCP_RESET>;
+- reset-names = "stmmaceth", "ahb";
++ reset-names = "stmmaceth", "stmmaceth-ocp";
+ snps,axi-config = <&socfpga_axi_setup>;
+ status = "disabled";
+ };
+@@ -480,7 +480,7 @@ gmac2: ethernet@ff804000 {
+ clocks = <&l4_mp_clk>, <&peri_emac_ptp_clk>;
+ clock-names = "stmmaceth", "ptp_ref";
+ resets = <&rst EMAC2_RESET>, <&rst EMAC2_OCP_RESET>;
+- reset-names = "stmmaceth", "ahb";
++ reset-names = "stmmaceth", "stmmaceth-ocp";
+ snps,axi-config = <&socfpga_axi_setup>;
+ status = "disabled";
+ };
+diff --git a/arch/arm/boot/dts/mediatek/mt7623.dtsi b/arch/arm/boot/dts/mediatek/mt7623.dtsi
+index 814586abc2979e..fd7a89cc337d69 100644
+--- a/arch/arm/boot/dts/mediatek/mt7623.dtsi
++++ b/arch/arm/boot/dts/mediatek/mt7623.dtsi
+@@ -308,7 +308,7 @@ pwrap: pwrap@1000d000 {
+ clock-names = "spi", "wrap";
+ };
+
+- cir: cir@10013000 {
++ cir: ir-receiver@10013000 {
+ compatible = "mediatek,mt7623-cir";
+ reg = <0 0x10013000 0 0x1000>;
+ interrupts = <GIC_SPI 87 IRQ_TYPE_LEVEL_LOW>;
+diff --git a/arch/arm/boot/dts/microchip/at91-sama5d27_wlsom1_ek.dts b/arch/arm/boot/dts/microchip/at91-sama5d27_wlsom1_ek.dts
+index 15239834d886ed..35a933eec5738f 100644
+--- a/arch/arm/boot/dts/microchip/at91-sama5d27_wlsom1_ek.dts
++++ b/arch/arm/boot/dts/microchip/at91-sama5d27_wlsom1_ek.dts
+@@ -197,6 +197,7 @@ qspi1_flash: flash@0 {
+
+ &sdmmc0 {
+ bus-width = <4>;
++ no-1-8-v;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_sdmmc0_default>;
+ status = "okay";
+diff --git a/arch/arm/boot/dts/microchip/at91-sama5d29_curiosity.dts b/arch/arm/boot/dts/microchip/at91-sama5d29_curiosity.dts
+index 951a0c97d3c6bb..5933840bb8f7e0 100644
+--- a/arch/arm/boot/dts/microchip/at91-sama5d29_curiosity.dts
++++ b/arch/arm/boot/dts/microchip/at91-sama5d29_curiosity.dts
+@@ -514,6 +514,7 @@ kernel@200000 {
+
+ &sdmmc0 {
+ bus-width = <4>;
++ no-1-8-v;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_sdmmc0_default>;
+ disable-wp;
+diff --git a/arch/arm/boot/dts/nxp/imx/imx7-tqma7.dtsi b/arch/arm/boot/dts/nxp/imx/imx7-tqma7.dtsi
+index 028961eb71089c..91ca23a66bf3c2 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx7-tqma7.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx7-tqma7.dtsi
+@@ -135,6 +135,7 @@ vgen6_reg: vldo4 {
+ lm75a: temperature-sensor@48 {
+ compatible = "national,lm75a";
+ reg = <0x48>;
++ vs-supply = <&vgen4_reg>;
+ };
+
+ /* NXP SE97BTP with temperature sensor + eeprom, TQMa7x 02xx */
+diff --git a/arch/arm/boot/dts/st/stm32mp13xx-dhcor-som.dtsi b/arch/arm/boot/dts/st/stm32mp13xx-dhcor-som.dtsi
+index ddad6497775b8e..ffb7233b063d23 100644
+--- a/arch/arm/boot/dts/st/stm32mp13xx-dhcor-som.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp13xx-dhcor-som.dtsi
+@@ -85,8 +85,8 @@ regulators {
+
+ vddcpu: buck1 { /* VDD_CPU_1V2 */
+ regulator-name = "vddcpu";
+- regulator-min-microvolt = <1250000>;
+- regulator-max-microvolt = <1250000>;
++ regulator-min-microvolt = <1350000>;
++ regulator-max-microvolt = <1350000>;
+ regulator-always-on;
+ regulator-initial-mode = <0>;
+ regulator-over-current-protection;
+diff --git a/arch/arm/boot/dts/st/stm32mp151.dtsi b/arch/arm/boot/dts/st/stm32mp151.dtsi
+index 4f878ec102c1f6..fdc42a89bd37d4 100644
+--- a/arch/arm/boot/dts/st/stm32mp151.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp151.dtsi
+@@ -129,7 +129,7 @@ ipcc: mailbox@4c001000 {
+ reg = <0x4c001000 0x400>;
+ st,proc-id = <0>;
+ interrupts-extended =
+- <&exti 61 1>,
++ <&exti 61 IRQ_TYPE_LEVEL_HIGH>,
+ <&intc GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "rx", "tx";
+ clocks = <&rcc IPCC>;
+diff --git a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-drc02.dtsi b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-drc02.dtsi
+index bb4f8a0b937f37..abe2dfe706364b 100644
+--- a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-drc02.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-drc02.dtsi
+@@ -6,18 +6,6 @@
+ #include <dt-bindings/input/input.h>
+ #include <dt-bindings/pwm/pwm.h>
+
+-/ {
+- aliases {
+- serial0 = &uart4;
+- serial1 = &usart3;
+- serial2 = &uart8;
+- };
+-
+- chosen {
+- stdout-path = "serial0:115200n8";
+- };
+-};
+-
+ &adc {
+ status = "disabled";
+ };
+diff --git a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-pdk2.dtsi b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-pdk2.dtsi
+index 171d7c7658fa86..0fb4e55843b9d2 100644
+--- a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-pdk2.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-pdk2.dtsi
+@@ -7,16 +7,6 @@
+ #include <dt-bindings/pwm/pwm.h>
+
+ / {
+- aliases {
+- serial0 = &uart4;
+- serial1 = &usart3;
+- serial2 = &uart8;
+- };
+-
+- chosen {
+- stdout-path = "serial0:115200n8";
+- };
+-
+ clk_ext_audio_codec: clock-codec {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+diff --git a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-picoitx.dtsi b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-picoitx.dtsi
+index b5bc53accd6b2f..01c693cc03446c 100644
+--- a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-picoitx.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-picoitx.dtsi
+@@ -7,16 +7,6 @@
+ #include <dt-bindings/pwm/pwm.h>
+
+ / {
+- aliases {
+- serial0 = &uart4;
+- serial1 = &usart3;
+- serial2 = &uart8;
+- };
+-
+- chosen {
+- stdout-path = "serial0:115200n8";
+- };
+-
+ led {
+ compatible = "gpio-leds";
+
+diff --git a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-som.dtsi b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-som.dtsi
+index 74a11ccc5333f8..142d4a8731f8d4 100644
+--- a/arch/arm/boot/dts/st/stm32mp15xx-dhcom-som.dtsi
++++ b/arch/arm/boot/dts/st/stm32mp15xx-dhcom-som.dtsi
+@@ -14,6 +14,13 @@ aliases {
+ ethernet1 = &ksz8851;
+ rtc0 = &hwrtc;
+ rtc1 = &rtc;
++ serial0 = &uart4;
++ serial1 = &uart8;
++ serial2 = &usart3;
++ };
++
++ chosen {
++ stdout-path = "serial0:115200n8";
+ };
+
+ memory@c0000000 {
+diff --git a/arch/arm/mach-at91/pm.c b/arch/arm/mach-at91/pm.c
+index b9b995f8a36e14..05a1547642b60f 100644
+--- a/arch/arm/mach-at91/pm.c
++++ b/arch/arm/mach-at91/pm.c
+@@ -598,7 +598,21 @@ static int at91_suspend_finish(unsigned long val)
+ return 0;
+ }
+
+-static void at91_pm_switch_ba_to_vbat(void)
++/**
++ * at91_pm_switch_ba_to_auto() - Configure Backup Unit Power Switch
++ * to automatic/hardware mode.
++ *
++ * The Backup Unit Power Switch can be managed either by software or hardware.
++ * Enabling hardware mode allows the automatic transition of power between
++ * VDDANA (or VDDIN33) and VDDBU (or VBAT, respectively), based on the
++ * availability of these power sources.
++ *
++ * If the Backup Unit Power Switch is already in automatic mode, no action is
++ * required. If it is in software-controlled mode, it is switched to automatic
++ * mode to enhance safety and eliminate the need for toggling between power
++ * sources.
++ */
++static void at91_pm_switch_ba_to_auto(void)
+ {
+ unsigned int offset = offsetof(struct at91_pm_sfrbu_regs, pswbu);
+ unsigned int val;
+@@ -609,24 +623,19 @@ static void at91_pm_switch_ba_to_vbat(void)
+
+ val = readl(soc_pm.data.sfrbu + offset);
+
+- /* Already on VBAT. */
+- if (!(val & soc_pm.sfrbu_regs.pswbu.state))
++ /* Already on auto/hardware. */
++ if (!(val & soc_pm.sfrbu_regs.pswbu.ctrl))
+ return;
+
+- val &= ~soc_pm.sfrbu_regs.pswbu.softsw;
+- val |= soc_pm.sfrbu_regs.pswbu.key | soc_pm.sfrbu_regs.pswbu.ctrl;
++ val &= ~soc_pm.sfrbu_regs.pswbu.ctrl;
++ val |= soc_pm.sfrbu_regs.pswbu.key;
+ writel(val, soc_pm.data.sfrbu + offset);
+-
+- /* Wait for update. */
+- val = readl(soc_pm.data.sfrbu + offset);
+- while (val & soc_pm.sfrbu_regs.pswbu.state)
+- val = readl(soc_pm.data.sfrbu + offset);
+ }
+
+ static void at91_pm_suspend(suspend_state_t state)
+ {
+ if (soc_pm.data.mode == AT91_PM_BACKUP) {
+- at91_pm_switch_ba_to_vbat();
++ at91_pm_switch_ba_to_auto();
+
+ cpu_suspend(0, at91_suspend_finish);
+
+diff --git a/arch/arm/mach-omap1/board-nokia770.c b/arch/arm/mach-omap1/board-nokia770.c
+index 3312ef93355da7..a5bf5554800fe1 100644
+--- a/arch/arm/mach-omap1/board-nokia770.c
++++ b/arch/arm/mach-omap1/board-nokia770.c
+@@ -289,7 +289,7 @@ static struct gpiod_lookup_table nokia770_irq_gpio_table = {
+ GPIO_LOOKUP("gpio-0-15", 15, "ads7846_irq",
+ GPIO_ACTIVE_HIGH),
+ /* GPIO used for retu IRQ */
+- GPIO_LOOKUP("gpio-48-63", 15, "retu_irq",
++ GPIO_LOOKUP("gpio-48-63", 14, "retu_irq",
+ GPIO_ACTIVE_HIGH),
+ /* GPIO used for tahvo IRQ */
+ GPIO_LOOKUP("gpio-32-47", 8, "tahvo_irq",
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts
+index 379c2c8466f504..86d44349e09517 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-pinebook.dts
+@@ -390,6 +390,8 @@ &sound {
+ &tcon0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&lcd_rgb666_pins>;
++ assigned-clocks = <&ccu CLK_TCON0>;
++ assigned-clock-parents = <&ccu CLK_PLL_VIDEO0_2X>;
+
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64-teres-i.dts b/arch/arm64/boot/dts/allwinner/sun50i-a64-teres-i.dts
+index b407e1dd08a737..ec055510af8b68 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64-teres-i.dts
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64-teres-i.dts
+@@ -369,6 +369,8 @@ &sound {
+ &tcon0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&lcd_rgb666_pins>;
++ assigned-clocks = <&ccu CLK_TCON0>;
++ assigned-clock-parents = <&ccu CLK_PLL_VIDEO0_2X>;
+
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+index a5c3920e0f048e..0fecf0abb204c7 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+@@ -445,6 +445,8 @@ tcon0: lcd-controller@1c0c000 {
+ clock-names = "ahb", "tcon-ch0";
+ clock-output-names = "tcon-data-clock";
+ #clock-cells = <0>;
++ assigned-clocks = <&ccu CLK_TCON0>;
++ assigned-clock-parents = <&ccu CLK_PLL_MIPI>;
+ resets = <&ccu RST_BUS_TCON0>, <&ccu RST_BUS_LVDS>;
+ reset-names = "lcd", "lvds";
+
+diff --git a/arch/arm64/boot/dts/freescale/imx93.dtsi b/arch/arm64/boot/dts/freescale/imx93.dtsi
+index 04b9b3d31f4faf..7bc3852c6ef8fb 100644
+--- a/arch/arm64/boot/dts/freescale/imx93.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx93.dtsi
+@@ -917,7 +917,7 @@ xcvr: xcvr@42680000 {
+ reg-names = "ram", "regs", "rxfifo", "txfifo";
+ interrupts = <GIC_SPI 203 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 204 IRQ_TYPE_LEVEL_HIGH>;
+- clocks = <&clk IMX93_CLK_BUS_WAKEUP>,
++ clocks = <&clk IMX93_CLK_SPDIF_IPG>,
+ <&clk IMX93_CLK_SPDIF_GATE>,
+ <&clk IMX93_CLK_DUMMY>,
+ <&clk IMX93_CLK_AUD_XCVR_GATE>;
+diff --git a/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts b/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
+index b1ea7dcaed17dc..47234d0858dd21 100644
+--- a/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
++++ b/arch/arm64/boot/dts/marvell/cn9131-cf-solidwan.dts
+@@ -435,7 +435,7 @@ &cp1_eth1 {
+ managed = "in-band-status";
+ phy-mode = "sgmii";
+ phy = <&cp1_phy0>;
+- phys = <&cp0_comphy3 1>;
++ phys = <&cp1_comphy3 1>;
+ status = "okay";
+ };
+
+@@ -444,7 +444,7 @@ &cp1_eth2 {
+ managed = "in-band-status";
+ phy-mode = "sgmii";
+ phy = <&cp1_phy1>;
+- phys = <&cp0_comphy5 2>;
++ phys = <&cp1_comphy5 2>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt7988a.dtsi b/arch/arm64/boot/dts/mediatek/mt7988a.dtsi
+index aa728331e876b7..284e240b79977f 100644
+--- a/arch/arm64/boot/dts/mediatek/mt7988a.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt7988a.dtsi
+@@ -129,6 +129,7 @@ i2c@11003000 {
+ reg = <0 0x11003000 0 0x1000>,
+ <0 0x10217080 0 0x80>;
+ interrupts = <GIC_SPI 136 IRQ_TYPE_LEVEL_HIGH>;
++ clock-div = <1>;
+ clocks = <&infracfg CLK_INFRA_I2C_BCK>,
+ <&infracfg CLK_INFRA_66M_AP_DMA_BCK>;
+ clock-names = "main", "dma";
+@@ -142,6 +143,7 @@ i2c@11004000 {
+ reg = <0 0x11004000 0 0x1000>,
+ <0 0x10217100 0 0x80>;
+ interrupts = <GIC_SPI 144 IRQ_TYPE_LEVEL_HIGH>;
++ clock-div = <1>;
+ clocks = <&infracfg CLK_INFRA_I2C_BCK>,
+ <&infracfg CLK_INFRA_66M_AP_DMA_BCK>;
+ clock-names = "main", "dma";
+@@ -155,6 +157,7 @@ i2c@11005000 {
+ reg = <0 0x11005000 0 0x1000>,
+ <0 0x10217180 0 0x80>;
+ interrupts = <GIC_SPI 145 IRQ_TYPE_LEVEL_HIGH>;
++ clock-div = <1>;
+ clocks = <&infracfg CLK_INFRA_I2C_BCK>,
+ <&infracfg CLK_INFRA_66M_AP_DMA_BCK>;
+ clock-names = "main", "dma";
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi b/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi
+index b4d85147b77b0b..309e2d104fdc9f 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173-elm.dtsi
+@@ -931,7 +931,7 @@ pmic: pmic {
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+- clock: mt6397clock {
++ clock: clocks {
+ compatible = "mediatek,mt6397-clk";
+ #clock-cells = <1>;
+ };
+@@ -942,11 +942,10 @@ pio6397: pinctrl {
+ #gpio-cells = <2>;
+ };
+
+- regulator: mt6397regulator {
++ regulators {
+ compatible = "mediatek,mt6397-regulator";
+
+ mt6397_vpca15_reg: buck_vpca15 {
+- regulator-compatible = "buck_vpca15";
+ regulator-name = "vpca15";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -956,7 +955,6 @@ mt6397_vpca15_reg: buck_vpca15 {
+ };
+
+ mt6397_vpca7_reg: buck_vpca7 {
+- regulator-compatible = "buck_vpca7";
+ regulator-name = "vpca7";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -966,7 +964,6 @@ mt6397_vpca7_reg: buck_vpca7 {
+ };
+
+ mt6397_vsramca15_reg: buck_vsramca15 {
+- regulator-compatible = "buck_vsramca15";
+ regulator-name = "vsramca15";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -975,7 +972,6 @@ mt6397_vsramca15_reg: buck_vsramca15 {
+ };
+
+ mt6397_vsramca7_reg: buck_vsramca7 {
+- regulator-compatible = "buck_vsramca7";
+ regulator-name = "vsramca7";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -984,7 +980,6 @@ mt6397_vsramca7_reg: buck_vsramca7 {
+ };
+
+ mt6397_vcore_reg: buck_vcore {
+- regulator-compatible = "buck_vcore";
+ regulator-name = "vcore";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -993,7 +988,6 @@ mt6397_vcore_reg: buck_vcore {
+ };
+
+ mt6397_vgpu_reg: buck_vgpu {
+- regulator-compatible = "buck_vgpu";
+ regulator-name = "vgpu";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -1002,7 +996,6 @@ mt6397_vgpu_reg: buck_vgpu {
+ };
+
+ mt6397_vdrm_reg: buck_vdrm {
+- regulator-compatible = "buck_vdrm";
+ regulator-name = "vdrm";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1400000>;
+@@ -1011,7 +1004,6 @@ mt6397_vdrm_reg: buck_vdrm {
+ };
+
+ mt6397_vio18_reg: buck_vio18 {
+- regulator-compatible = "buck_vio18";
+ regulator-name = "vio18";
+ regulator-min-microvolt = <1620000>;
+ regulator-max-microvolt = <1980000>;
+@@ -1020,18 +1012,15 @@ mt6397_vio18_reg: buck_vio18 {
+ };
+
+ mt6397_vtcxo_reg: ldo_vtcxo {
+- regulator-compatible = "ldo_vtcxo";
+ regulator-name = "vtcxo";
+ regulator-always-on;
+ };
+
+ mt6397_va28_reg: ldo_va28 {
+- regulator-compatible = "ldo_va28";
+ regulator-name = "va28";
+ };
+
+ mt6397_vcama_reg: ldo_vcama {
+- regulator-compatible = "ldo_vcama";
+ regulator-name = "vcama";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+@@ -1039,18 +1028,15 @@ mt6397_vcama_reg: ldo_vcama {
+ };
+
+ mt6397_vio28_reg: ldo_vio28 {
+- regulator-compatible = "ldo_vio28";
+ regulator-name = "vio28";
+ regulator-always-on;
+ };
+
+ mt6397_vusb_reg: ldo_vusb {
+- regulator-compatible = "ldo_vusb";
+ regulator-name = "vusb";
+ };
+
+ mt6397_vmc_reg: ldo_vmc {
+- regulator-compatible = "ldo_vmc";
+ regulator-name = "vmc";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1058,7 +1044,6 @@ mt6397_vmc_reg: ldo_vmc {
+ };
+
+ mt6397_vmch_reg: ldo_vmch {
+- regulator-compatible = "ldo_vmch";
+ regulator-name = "vmch";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1066,7 +1051,6 @@ mt6397_vmch_reg: ldo_vmch {
+ };
+
+ mt6397_vemc_3v3_reg: ldo_vemc3v3 {
+- regulator-compatible = "ldo_vemc3v3";
+ regulator-name = "vemc_3v3";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1074,7 +1058,6 @@ mt6397_vemc_3v3_reg: ldo_vemc3v3 {
+ };
+
+ mt6397_vgp1_reg: ldo_vgp1 {
+- regulator-compatible = "ldo_vgp1";
+ regulator-name = "vcamd";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+@@ -1082,7 +1065,6 @@ mt6397_vgp1_reg: ldo_vgp1 {
+ };
+
+ mt6397_vgp2_reg: ldo_vgp2 {
+- regulator-compatible = "ldo_vgp2";
+ regulator-name = "vcamio";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1090,7 +1072,6 @@ mt6397_vgp2_reg: ldo_vgp2 {
+ };
+
+ mt6397_vgp3_reg: ldo_vgp3 {
+- regulator-compatible = "ldo_vgp3";
+ regulator-name = "vcamaf";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+@@ -1098,7 +1079,6 @@ mt6397_vgp3_reg: ldo_vgp3 {
+ };
+
+ mt6397_vgp4_reg: ldo_vgp4 {
+- regulator-compatible = "ldo_vgp4";
+ regulator-name = "vgp4";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1106,7 +1086,6 @@ mt6397_vgp4_reg: ldo_vgp4 {
+ };
+
+ mt6397_vgp5_reg: ldo_vgp5 {
+- regulator-compatible = "ldo_vgp5";
+ regulator-name = "vgp5";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3000000>;
+@@ -1114,7 +1093,6 @@ mt6397_vgp5_reg: ldo_vgp5 {
+ };
+
+ mt6397_vgp6_reg: ldo_vgp6 {
+- regulator-compatible = "ldo_vgp6";
+ regulator-name = "vgp6";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1123,7 +1101,6 @@ mt6397_vgp6_reg: ldo_vgp6 {
+ };
+
+ mt6397_vibr_reg: ldo_vibr {
+- regulator-compatible = "ldo_vibr";
+ regulator-name = "vibr";
+ regulator-min-microvolt = <1300000>;
+ regulator-max-microvolt = <3300000>;
+@@ -1131,7 +1108,7 @@ mt6397_vibr_reg: ldo_vibr {
+ };
+ };
+
+- rtc: mt6397rtc {
++ rtc: rtc {
+ compatible = "mediatek,mt6397-rtc";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173-evb.dts b/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
+index bb4671c18e3bd4..9fffed0ef4bff4 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8173-evb.dts
+@@ -307,11 +307,10 @@ pmic: pmic {
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+- mt6397regulator: mt6397regulator {
++ regulators {
+ compatible = "mediatek,mt6397-regulator";
+
+ mt6397_vpca15_reg: buck_vpca15 {
+- regulator-compatible = "buck_vpca15";
+ regulator-name = "vpca15";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -320,7 +319,6 @@ mt6397_vpca15_reg: buck_vpca15 {
+ };
+
+ mt6397_vpca7_reg: buck_vpca7 {
+- regulator-compatible = "buck_vpca7";
+ regulator-name = "vpca7";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -329,7 +327,6 @@ mt6397_vpca7_reg: buck_vpca7 {
+ };
+
+ mt6397_vsramca15_reg: buck_vsramca15 {
+- regulator-compatible = "buck_vsramca15";
+ regulator-name = "vsramca15";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -338,7 +335,6 @@ mt6397_vsramca15_reg: buck_vsramca15 {
+ };
+
+ mt6397_vsramca7_reg: buck_vsramca7 {
+- regulator-compatible = "buck_vsramca7";
+ regulator-name = "vsramca7";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -347,7 +343,6 @@ mt6397_vsramca7_reg: buck_vsramca7 {
+ };
+
+ mt6397_vcore_reg: buck_vcore {
+- regulator-compatible = "buck_vcore";
+ regulator-name = "vcore";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -356,7 +351,6 @@ mt6397_vcore_reg: buck_vcore {
+ };
+
+ mt6397_vgpu_reg: buck_vgpu {
+- regulator-compatible = "buck_vgpu";
+ regulator-name = "vgpu";
+ regulator-min-microvolt = < 700000>;
+ regulator-max-microvolt = <1350000>;
+@@ -365,7 +359,6 @@ mt6397_vgpu_reg: buck_vgpu {
+ };
+
+ mt6397_vdrm_reg: buck_vdrm {
+- regulator-compatible = "buck_vdrm";
+ regulator-name = "vdrm";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1400000>;
+@@ -374,7 +367,6 @@ mt6397_vdrm_reg: buck_vdrm {
+ };
+
+ mt6397_vio18_reg: buck_vio18 {
+- regulator-compatible = "buck_vio18";
+ regulator-name = "vio18";
+ regulator-min-microvolt = <1620000>;
+ regulator-max-microvolt = <1980000>;
+@@ -383,19 +375,16 @@ mt6397_vio18_reg: buck_vio18 {
+ };
+
+ mt6397_vtcxo_reg: ldo_vtcxo {
+- regulator-compatible = "ldo_vtcxo";
+ regulator-name = "vtcxo";
+ regulator-always-on;
+ };
+
+ mt6397_va28_reg: ldo_va28 {
+- regulator-compatible = "ldo_va28";
+ regulator-name = "va28";
+ regulator-always-on;
+ };
+
+ mt6397_vcama_reg: ldo_vcama {
+- regulator-compatible = "ldo_vcama";
+ regulator-name = "vcama";
+ regulator-min-microvolt = <1500000>;
+ regulator-max-microvolt = <2800000>;
+@@ -403,18 +392,15 @@ mt6397_vcama_reg: ldo_vcama {
+ };
+
+ mt6397_vio28_reg: ldo_vio28 {
+- regulator-compatible = "ldo_vio28";
+ regulator-name = "vio28";
+ regulator-always-on;
+ };
+
+ mt6397_vusb_reg: ldo_vusb {
+- regulator-compatible = "ldo_vusb";
+ regulator-name = "vusb";
+ };
+
+ mt6397_vmc_reg: ldo_vmc {
+- regulator-compatible = "ldo_vmc";
+ regulator-name = "vmc";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+@@ -422,7 +408,6 @@ mt6397_vmc_reg: ldo_vmc {
+ };
+
+ mt6397_vmch_reg: ldo_vmch {
+- regulator-compatible = "ldo_vmch";
+ regulator-name = "vmch";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3300000>;
+@@ -430,7 +415,6 @@ mt6397_vmch_reg: ldo_vmch {
+ };
+
+ mt6397_vemc_3v3_reg: ldo_vemc3v3 {
+- regulator-compatible = "ldo_vemc3v3";
+ regulator-name = "vemc_3v3";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3300000>;
+@@ -438,7 +422,6 @@ mt6397_vemc_3v3_reg: ldo_vemc3v3 {
+ };
+
+ mt6397_vgp1_reg: ldo_vgp1 {
+- regulator-compatible = "ldo_vgp1";
+ regulator-name = "vcamd";
+ regulator-min-microvolt = <1220000>;
+ regulator-max-microvolt = <3300000>;
+@@ -446,7 +429,6 @@ mt6397_vgp1_reg: ldo_vgp1 {
+ };
+
+ mt6397_vgp2_reg: ldo_vgp2 {
+- regulator-compatible = "ldo_vgp2";
+ regulator-name = "vcamio";
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <3300000>;
+@@ -454,7 +436,6 @@ mt6397_vgp2_reg: ldo_vgp2 {
+ };
+
+ mt6397_vgp3_reg: ldo_vgp3 {
+- regulator-compatible = "ldo_vgp3";
+ regulator-name = "vcamaf";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3300000>;
+@@ -462,7 +443,6 @@ mt6397_vgp3_reg: ldo_vgp3 {
+ };
+
+ mt6397_vgp4_reg: ldo_vgp4 {
+- regulator-compatible = "ldo_vgp4";
+ regulator-name = "vgp4";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3300000>;
+@@ -470,7 +450,6 @@ mt6397_vgp4_reg: ldo_vgp4 {
+ };
+
+ mt6397_vgp5_reg: ldo_vgp5 {
+- regulator-compatible = "ldo_vgp5";
+ regulator-name = "vgp5";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3000000>;
+@@ -478,7 +457,6 @@ mt6397_vgp5_reg: ldo_vgp5 {
+ };
+
+ mt6397_vgp6_reg: ldo_vgp6 {
+- regulator-compatible = "ldo_vgp6";
+ regulator-name = "vgp6";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3300000>;
+@@ -486,7 +464,6 @@ mt6397_vgp6_reg: ldo_vgp6 {
+ };
+
+ mt6397_vibr_reg: ldo_vibr {
+- regulator-compatible = "ldo_vibr";
+ regulator-name = "vibr";
+ regulator-min-microvolt = <1300000>;
+ regulator-max-microvolt = <3300000>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
+index 65860b33c01fe8..3935d83a047e08 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-damu.dts
+@@ -26,6 +26,10 @@ &touchscreen {
+ hid-descr-addr = <0x0001>;
+ };
+
++&mt6358codec {
++ mediatek,dmic-mode = <1>; /* one-wire */
++};
++
+ &qca_wifi {
+ qcom,ath10k-calibration-variant = "GO_DAMU";
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-kenzo.dts b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-kenzo.dts
+index e8241587949b2b..561770fcf69e66 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-kenzo.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-kenzo.dts
+@@ -12,3 +12,18 @@ / {
+ chassis-type = "laptop";
+ compatible = "google,juniper-sku17", "google,juniper", "mediatek,mt8183";
+ };
++
++&i2c0 {
++ touchscreen@40 {
++ compatible = "hid-over-i2c";
++ reg = <0x40>;
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&touchscreen_pins>;
++
++ interrupts-extended = <&pio 155 IRQ_TYPE_LEVEL_LOW>;
++
++ post-power-on-delay-ms = <70>;
++ hid-descr-addr = <0x0001>;
++ };
++};
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-willow.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-willow.dtsi
+index 76d33540166f90..c942e461a177ef 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-willow.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi-willow.dtsi
+@@ -6,6 +6,21 @@
+ /dts-v1/;
+ #include "mt8183-kukui-jacuzzi.dtsi"
+
++&i2c0 {
++ touchscreen@40 {
++ compatible = "hid-over-i2c";
++ reg = <0x40>;
++
++ pinctrl-names = "default";
++ pinctrl-0 = <&touchscreen_pins>;
++
++ interrupts-extended = <&pio 155 IRQ_TYPE_LEVEL_LOW>;
++
++ post-power-on-delay-ms = <70>;
++ hid-descr-addr = <0x0001>;
++ };
++};
++
+ &i2c2 {
+ trackpad@2c {
+ compatible = "hid-over-i2c";
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
+index 49e053b932e76c..80888bd4ad823d 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui-jacuzzi.dtsi
+@@ -39,8 +39,6 @@ pp1800_mipibrdg: pp1800-mipibrdg {
+ pp3300_panel: pp3300-panel {
+ compatible = "regulator-fixed";
+ regulator-name = "pp3300_panel";
+- regulator-min-microvolt = <3300000>;
+- regulator-max-microvolt = <3300000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pp3300_panel_pins>;
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 0a6578aacf8280..9cd5e0cef02a29 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -1024,7 +1024,8 @@ pwrap: pwrap@1000d000 {
+ };
+
+ keyboard: keyboard@10010000 {
+- compatible = "mediatek,mt6779-keypad";
++ compatible = "mediatek,mt8183-keypad",
++ "mediatek,mt6779-keypad";
+ reg = <0 0x10010000 0 0x1000>;
+ interrupts = <GIC_SPI 186 IRQ_TYPE_EDGE_FALLING>;
+ clocks = <&clk26m>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8186.dtsi b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+index 148c332018b0d8..ac34ba3afacb05 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8186.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8186.dtsi
+@@ -1570,6 +1570,8 @@ ssusb0: usb@11201000 {
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
++ wakeup-source;
++ mediatek,syscon-wakeup = <&pericfg 0x420 2>;
+ status = "disabled";
+
+ usb_host0: usb@11200000 {
+@@ -1583,8 +1585,6 @@ usb_host0: usb@11200000 {
+ <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_XHCI>;
+ clock-names = "sys_ck", "ref_ck", "mcu_ck", "dma_ck", "xhci_ck";
+ interrupts = <GIC_SPI 294 IRQ_TYPE_LEVEL_HIGH 0>;
+- mediatek,syscon-wakeup = <&pericfg 0x420 2>;
+- wakeup-source;
+ status = "disabled";
+ };
+ };
+@@ -1636,6 +1636,8 @@ ssusb1: usb@11281000 {
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
++ wakeup-source;
++ mediatek,syscon-wakeup = <&pericfg 0x424 2>;
+ status = "disabled";
+
+ usb_host1: usb@11280000 {
+@@ -1649,8 +1651,6 @@ usb_host1: usb@11280000 {
+ <&infracfg_ao CLK_INFRA_AO_SSUSB_TOP_P1_XHCI>;
+ clock-names = "sys_ck", "ref_ck", "mcu_ck", "dma_ck","xhci_ck";
+ interrupts = <GIC_SPI 324 IRQ_TYPE_LEVEL_HIGH 0>;
+- mediatek,syscon-wakeup = <&pericfg 0x424 2>;
+- wakeup-source;
+ status = "disabled";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi b/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
+index 08d71ddf36683e..ad52c1d6e4eef7 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8192-asurada.dtsi
+@@ -1420,7 +1420,6 @@ mt6315_6: pmic@6 {
+
+ regulators {
+ mt6315_6_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vbcpu";
+ regulator-min-microvolt = <400000>;
+ regulator-max-microvolt = <1193750>;
+@@ -1430,7 +1429,6 @@ mt6315_6_vbuck1: vbuck1 {
+ };
+
+ mt6315_6_vbuck3: vbuck3 {
+- regulator-compatible = "vbuck3";
+ regulator-name = "Vlcpu";
+ regulator-min-microvolt = <400000>;
+ regulator-max-microvolt = <1193750>;
+@@ -1447,7 +1445,6 @@ mt6315_7: pmic@7 {
+
+ regulators {
+ mt6315_7_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vgpu";
+ regulator-min-microvolt = <400000>;
+ regulator-max-microvolt = <800000>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+index 2c7b2223ee76b1..5056e07399e23a 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195-cherry.dtsi
+@@ -1285,7 +1285,6 @@ mt6315@6 {
+
+ regulators {
+ mt6315_6_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vbcpu";
+ regulator-min-microvolt = <400000>;
+ regulator-max-microvolt = <1193750>;
+@@ -1303,7 +1302,6 @@ mt6315@7 {
+
+ regulators {
+ mt6315_7_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vgpu";
+ regulator-min-microvolt = <400000>;
+ regulator-max-microvolt = <1193750>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195-demo.dts b/arch/arm64/boot/dts/mediatek/mt8195-demo.dts
+index 31d424b8fc7ced..bfb75296795c39 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195-demo.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8195-demo.dts
+@@ -137,7 +137,6 @@ charger {
+ richtek,vinovp-microvolt = <14500000>;
+
+ otg_vbus_regulator: usb-otg-vbus-regulator {
+- regulator-compatible = "usb-otg-vbus";
+ regulator-name = "usb-otg-vbus";
+ regulator-min-microvolt = <4425000>;
+ regulator-max-microvolt = <5825000>;
+@@ -149,7 +148,6 @@ regulator {
+ LDO_VIN3-supply = <&mt6360_buck2>;
+
+ mt6360_buck1: buck1 {
+- regulator-compatible = "BUCK1";
+ regulator-name = "mt6360,buck1";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1300000>;
+@@ -160,7 +158,6 @@ MT6360_OPMODE_LP
+ };
+
+ mt6360_buck2: buck2 {
+- regulator-compatible = "BUCK2";
+ regulator-name = "mt6360,buck2";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1300000>;
+@@ -171,7 +168,6 @@ MT6360_OPMODE_LP
+ };
+
+ mt6360_ldo1: ldo1 {
+- regulator-compatible = "LDO1";
+ regulator-name = "mt6360,ldo1";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3600000>;
+@@ -180,7 +176,6 @@ mt6360_ldo1: ldo1 {
+ };
+
+ mt6360_ldo2: ldo2 {
+- regulator-compatible = "LDO2";
+ regulator-name = "mt6360,ldo2";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3600000>;
+@@ -189,7 +184,6 @@ mt6360_ldo2: ldo2 {
+ };
+
+ mt6360_ldo3: ldo3 {
+- regulator-compatible = "LDO3";
+ regulator-name = "mt6360,ldo3";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <3600000>;
+@@ -198,7 +192,6 @@ mt6360_ldo3: ldo3 {
+ };
+
+ mt6360_ldo5: ldo5 {
+- regulator-compatible = "LDO5";
+ regulator-name = "mt6360,ldo5";
+ regulator-min-microvolt = <2700000>;
+ regulator-max-microvolt = <3600000>;
+@@ -207,7 +200,6 @@ mt6360_ldo5: ldo5 {
+ };
+
+ mt6360_ldo6: ldo6 {
+- regulator-compatible = "LDO6";
+ regulator-name = "mt6360,ldo6";
+ regulator-min-microvolt = <500000>;
+ regulator-max-microvolt = <2100000>;
+@@ -216,7 +208,6 @@ mt6360_ldo6: ldo6 {
+ };
+
+ mt6360_ldo7: ldo7 {
+- regulator-compatible = "LDO7";
+ regulator-name = "mt6360,ldo7";
+ regulator-min-microvolt = <500000>;
+ regulator-max-microvolt = <2100000>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8195.dtsi b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+index ade685ed2190b7..f013dbad9dc4ea 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8195.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8195.dtsi
+@@ -1611,9 +1611,6 @@ pcie1: pcie@112f8000 {
+ phy-names = "pcie-phy";
+ power-domains = <&spm MT8195_POWER_DOMAIN_PCIE_MAC_P1>;
+
+- resets = <&infracfg_ao MT8195_INFRA_RST2_PCIE_P1_SWRST>;
+- reset-names = "mac";
+-
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 7>;
+ interrupt-map = <0 0 0 1 &pcie_intc1 0>,
+@@ -3138,7 +3135,7 @@ larb20: larb@1b010000 {
+ };
+
+ ovl0: ovl@1c000000 {
+- compatible = "mediatek,mt8195-disp-ovl", "mediatek,mt8183-disp-ovl";
++ compatible = "mediatek,mt8195-disp-ovl";
+ reg = <0 0x1c000000 0 0x1000>;
+ interrupts = <GIC_SPI 636 IRQ_TYPE_LEVEL_HIGH 0>;
+ power-domains = <&spm MT8195_POWER_DOMAIN_VDOSYS0>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8365.dtsi b/arch/arm64/boot/dts/mediatek/mt8365.dtsi
+index 9c91fe8ea0f969..2bf8c9d02b6ee7 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8365.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8365.dtsi
+@@ -449,7 +449,8 @@ pwrap: pwrap@1000d000 {
+ };
+
+ keypad: keypad@10010000 {
+- compatible = "mediatek,mt6779-keypad";
++ compatible = "mediatek,mt8365-keypad",
++ "mediatek,mt6779-keypad";
+ reg = <0 0x10010000 0 0x1000>;
+ wakeup-source;
+ interrupts = <GIC_SPI 124 IRQ_TYPE_EDGE_FALLING>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts b/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
+index b4b48eb93f3c54..6f34b06a0359a7 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8395-genio-1200-evk.dts
+@@ -820,7 +820,6 @@ mt6315_6: pmic@6 {
+
+ regulators {
+ mt6315_6_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vbcpu";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+@@ -837,7 +836,6 @@ mt6315_7: pmic@7 {
+
+ regulators {
+ mt6315_7_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vgpu";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts b/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
+index 14ec970c4e491f..41dc34837b02e7 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8395-radxa-nio-12l.dts
+@@ -812,7 +812,6 @@ mt6315_6: pmic@6 {
+
+ regulators {
+ mt6315_6_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vbcpu";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+@@ -829,7 +828,6 @@ mt6315_7: pmic@7 {
+
+ regulators {
+ mt6315_7_vbuck1: vbuck1 {
+- regulator-compatible = "vbuck1";
+ regulator-name = "Vgpu";
+ regulator-min-microvolt = <300000>;
+ regulator-max-microvolt = <1193750>;
+diff --git a/arch/arm64/boot/dts/mediatek/mt8516.dtsi b/arch/arm64/boot/dts/mediatek/mt8516.dtsi
+index d0b03dc4d3f43a..e30623ebac0e1b 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8516.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8516.dtsi
+@@ -144,10 +144,10 @@ reserved-memory {
+ #size-cells = <2>;
+ ranges;
+
+- /* 128 KiB reserved for ARM Trusted Firmware (BL31) */
++ /* 192 KiB reserved for ARM Trusted Firmware (BL31) */
+ bl31_secmon_reserved: secmon@43000000 {
+ no-map;
+- reg = <0 0x43000000 0 0x20000>;
++ reg = <0 0x43000000 0 0x30000>;
+ };
+ };
+
+@@ -206,7 +206,7 @@ watchdog@10007000 {
+ compatible = "mediatek,mt8516-wdt",
+ "mediatek,mt6589-wdt";
+ reg = <0 0x10007000 0 0x1000>;
+- interrupts = <GIC_SPI 198 IRQ_TYPE_EDGE_FALLING>;
++ interrupts = <GIC_SPI 198 IRQ_TYPE_LEVEL_LOW>;
+ #reset-cells = <1>;
+ };
+
+@@ -268,7 +268,7 @@ gic: interrupt-controller@10310000 {
+ interrupt-parent = <&gic>;
+ interrupt-controller;
+ reg = <0 0x10310000 0 0x1000>,
+- <0 0x10320000 0 0x1000>,
++ <0 0x1032f000 0 0x2000>,
+ <0 0x10340000 0 0x2000>,
+ <0 0x10360000 0 0x2000>;
+ interrupts = <GIC_PPI 9
+@@ -344,6 +344,7 @@ i2c0: i2c@11009000 {
+ reg = <0 0x11009000 0 0x90>,
+ <0 0x11000180 0 0x80>;
+ interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_LOW>;
++ clock-div = <2>;
+ clocks = <&topckgen CLK_TOP_I2C0>,
+ <&topckgen CLK_TOP_APDMA>;
+ clock-names = "main", "dma";
+@@ -358,6 +359,7 @@ i2c1: i2c@1100a000 {
+ reg = <0 0x1100a000 0 0x90>,
+ <0 0x11000200 0 0x80>;
+ interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_LOW>;
++ clock-div = <2>;
+ clocks = <&topckgen CLK_TOP_I2C1>,
+ <&topckgen CLK_TOP_APDMA>;
+ clock-names = "main", "dma";
+@@ -372,6 +374,7 @@ i2c2: i2c@1100b000 {
+ reg = <0 0x1100b000 0 0x90>,
+ <0 0x11000280 0 0x80>;
+ interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_LOW>;
++ clock-div = <2>;
+ clocks = <&topckgen CLK_TOP_I2C2>,
+ <&topckgen CLK_TOP_APDMA>;
+ clock-names = "main", "dma";
+diff --git a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
+index ec8dfb3d1c6d69..a356db5fcc5f3c 100644
+--- a/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
++++ b/arch/arm64/boot/dts/mediatek/pumpkin-common.dtsi
+@@ -47,7 +47,6 @@ key-volume-down {
+ };
+
+ &i2c0 {
+- clock-div = <2>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c0_pins_a>;
+ status = "okay";
+@@ -156,7 +155,6 @@ cam-pwdn-hog {
+ };
+
+ &i2c2 {
+- clock-div = <2>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c2_pins_a>;
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
+index 984c85eab41afd..570331baa09ee3 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
+@@ -3900,7 +3900,7 @@ spi@c260000 {
+ assigned-clock-parents = <&bpmp TEGRA234_CLK_PLLP_OUT0>;
+ resets = <&bpmp TEGRA234_RESET_SPI2>;
+ reset-names = "spi";
+- dmas = <&gpcdma 19>, <&gpcdma 19>;
++ dmas = <&gpcdma 16>, <&gpcdma 16>;
+ dma-names = "rx", "tx";
+ dma-coherent;
+ status = "disabled";
+diff --git a/arch/arm64/boot/dts/qcom/Makefile b/arch/arm64/boot/dts/qcom/Makefile
+index ae002c7cf1268a..b13c169ec70d26 100644
+--- a/arch/arm64/boot/dts/qcom/Makefile
++++ b/arch/arm64/boot/dts/qcom/Makefile
+@@ -207,6 +207,9 @@ dtb-$(CONFIG_ARCH_QCOM) += sdm845-cheza-r1.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-cheza-r2.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-cheza-r3.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-db845c.dtb
++
++sdm845-db845c-navigation-mezzanine-dtbs := sdm845-db845c.dtb sdm845-db845c-navigation-mezzanine.dtbo
++
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-db845c-navigation-mezzanine.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-lg-judyln.dtb
+ dtb-$(CONFIG_ARCH_QCOM) += sdm845-lg-judyp.dtb
+diff --git a/arch/arm64/boot/dts/qcom/msm8916.dtsi b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+index 0ee44706b70ba3..800bfe83dbf837 100644
+--- a/arch/arm64/boot/dts/qcom/msm8916.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8916.dtsi
+@@ -125,7 +125,7 @@ xo_board: xo-board {
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <32768>;
++ clock-frequency = <32764>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/msm8939.dtsi b/arch/arm64/boot/dts/qcom/msm8939.dtsi
+index 7af210789879af..effa3aaeb25054 100644
+--- a/arch/arm64/boot/dts/qcom/msm8939.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8939.dtsi
+@@ -34,7 +34,7 @@ xo_board: xo-board {
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <32768>;
++ clock-frequency = <32764>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/msm8994.dtsi b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+index fc2a7f13f690ee..8a7de1dba2b9d0 100644
+--- a/arch/arm64/boot/dts/qcom/msm8994.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8994.dtsi
+@@ -34,7 +34,7 @@ xo_board: xo-board {
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <32768>;
++ clock-frequency = <32764>;
+ clock-output-names = "sleep_clk";
+ };
+ };
+@@ -437,6 +437,15 @@ usb3: usb@f92f8800 {
+ #size-cells = <1>;
+ ranges;
+
++ interrupts = <GIC_SPI 180 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 311 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 133 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 310 IRQ_TYPE_LEVEL_HIGH>;
++ interrupt-names = "pwr_event",
++ "qusb2_phy",
++ "hs_phy_irq",
++ "ss_phy_irq";
++
+ clocks = <&gcc GCC_USB30_MASTER_CLK>,
+ <&gcc GCC_SYS_NOC_USB3_AXI_CLK>,
+ <&gcc GCC_USB30_SLEEP_CLK>,
+diff --git a/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts b/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts
+index f8e9d90afab000..dbad8f57f2fa34 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts
++++ b/arch/arm64/boot/dts/qcom/msm8996-xiaomi-gemini.dts
+@@ -64,7 +64,7 @@ led@0 {
+ };
+
+ led@1 {
+- reg = <0>;
++ reg = <1>;
+ chan-name = "button-backlight1";
+ led-cur = /bits/ 8 <0x32>;
+ max-cur = /bits/ 8 <0xc8>;
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index e5966724f37c69..0a8884145865d6 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -3065,9 +3065,14 @@ usb3: usb@6af8800 {
+ #size-cells = <1>;
+ ranges;
+
+- interrupts = <GIC_SPI 347 IRQ_TYPE_LEVEL_HIGH>,
++ interrupts = <GIC_SPI 180 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 347 IRQ_TYPE_LEVEL_HIGH>,
++ <GIC_SPI 133 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 243 IRQ_TYPE_LEVEL_HIGH>;
+- interrupt-names = "hs_phy_irq", "ss_phy_irq";
++ interrupt-names = "pwr_event",
++ "qusb2_phy",
++ "hs_phy_irq",
++ "ss_phy_irq";
+
+ clocks = <&gcc GCC_SYS_NOC_USB3_AXI_CLK>,
+ <&gcc GCC_USB30_MASTER_CLK>,
+diff --git a/arch/arm64/boot/dts/qcom/qcm6490-shift-otter.dts b/arch/arm64/boot/dts/qcom/qcm6490-shift-otter.dts
+index 4667e47a74bc5b..75930f95769663 100644
+--- a/arch/arm64/boot/dts/qcom/qcm6490-shift-otter.dts
++++ b/arch/arm64/boot/dts/qcom/qcm6490-shift-otter.dts
+@@ -942,8 +942,6 @@ &usb_1_hsphy {
+
+ qcom,squelch-detector-bp = <(-2090)>;
+
+- orientation-switch;
+-
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/qcs404.dtsi b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+index cddc16bac0cea4..81a161c0cc5a82 100644
+--- a/arch/arm64/boot/dts/qcom/qcs404.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs404.dtsi
+@@ -28,7 +28,7 @@ xo_board: xo-board {
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <32768>;
++ clock-frequency = <32764>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/qcs8550-aim300.dtsi b/arch/arm64/boot/dts/qcom/qcs8550-aim300.dtsi
+index f6960e2d466a26..e6ac529e6b7216 100644
+--- a/arch/arm64/boot/dts/qcom/qcs8550-aim300.dtsi
++++ b/arch/arm64/boot/dts/qcom/qcs8550-aim300.dtsi
+@@ -367,7 +367,7 @@ &pm8550b_eusb2_repeater {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &ufs_mem_hc {
+diff --git a/arch/arm64/boot/dts/qcom/qdu1000-idp.dts b/arch/arm64/boot/dts/qcom/qdu1000-idp.dts
+index e65305f8136c88..c73eda75faf820 100644
+--- a/arch/arm64/boot/dts/qcom/qdu1000-idp.dts
++++ b/arch/arm64/boot/dts/qcom/qdu1000-idp.dts
+@@ -31,7 +31,7 @@ xo_board: xo-board-clk {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
+index 1888d99d398b11..f99fb9159e0b68 100644
+--- a/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
++++ b/arch/arm64/boot/dts/qcom/qrb4210-rb2.dts
+@@ -545,7 +545,7 @@ can@0 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &tlmm {
+diff --git a/arch/arm64/boot/dts/qcom/qru1000-idp.dts b/arch/arm64/boot/dts/qcom/qru1000-idp.dts
+index 1c781d9e24cf4d..52ce51e56e2fdc 100644
+--- a/arch/arm64/boot/dts/qcom/qru1000-idp.dts
++++ b/arch/arm64/boot/dts/qcom/qru1000-idp.dts
+@@ -31,7 +31,7 @@ xo_board: xo-board-clk {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sa8775p-ride.dtsi b/arch/arm64/boot/dts/qcom/sa8775p-ride.dtsi
+index 0c1b21def4b62c..adb71aeff339b5 100644
+--- a/arch/arm64/boot/dts/qcom/sa8775p-ride.dtsi
++++ b/arch/arm64/boot/dts/qcom/sa8775p-ride.dtsi
+@@ -517,7 +517,7 @@ &serdes1 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32764>;
++ clock-frequency = <32000>;
+ };
+
+ &spi16 {
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-firmware-tfa.dtsi b/arch/arm64/boot/dts/qcom/sc7180-firmware-tfa.dtsi
+index ee35a454dbf6f3..59162b3afcb841 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-firmware-tfa.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-firmware-tfa.dtsi
+@@ -6,82 +6,82 @@
+ * by Qualcomm firmware.
+ */
+
+-&CPU0 {
++&cpu0 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+- &LITTLE_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&little_cpu_sleep_0
++ &little_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU1 {
++&cpu1 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+- &LITTLE_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&little_cpu_sleep_0
++ &little_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU2 {
++&cpu2 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+- &LITTLE_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&little_cpu_sleep_0
++ &little_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU3 {
++&cpu3 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+- &LITTLE_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&little_cpu_sleep_0
++ &little_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU4 {
++&cpu4 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+- &LITTLE_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&little_cpu_sleep_0
++ &little_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU5 {
++&cpu5 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&LITTLE_CPU_SLEEP_0
+- &LITTLE_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&little_cpu_sleep_0
++ &little_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU6 {
++&cpu6 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&BIG_CPU_SLEEP_0
+- &BIG_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&big_cpu_sleep_0
++ &big_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+-&CPU7 {
++&cpu7 {
+ /delete-property/ power-domains;
+ /delete-property/ power-domain-names;
+
+- cpu-idle-states = <&BIG_CPU_SLEEP_0
+- &BIG_CPU_SLEEP_1
+- &CLUSTER_SLEEP_0>;
++ cpu-idle-states = <&big_cpu_sleep_0
++ &big_cpu_sleep_1
++ &cluster_sleep_0>;
+ };
+
+ /delete-node/ &domain_idle_states;
+
+ &idle_states {
+- CLUSTER_SLEEP_0: cluster-sleep-0 {
++ cluster_sleep_0: cluster-sleep-0 {
+ compatible = "arm,idle-state";
+ idle-state-name = "cluster-power-down";
+ arm,psci-suspend-param = <0x40003444>;
+@@ -92,15 +92,15 @@ CLUSTER_SLEEP_0: cluster-sleep-0 {
+ };
+ };
+
+-/delete-node/ &CPU_PD0;
+-/delete-node/ &CPU_PD1;
+-/delete-node/ &CPU_PD2;
+-/delete-node/ &CPU_PD3;
+-/delete-node/ &CPU_PD4;
+-/delete-node/ &CPU_PD5;
+-/delete-node/ &CPU_PD6;
+-/delete-node/ &CPU_PD7;
+-/delete-node/ &CLUSTER_PD;
++/delete-node/ &cpu_pd0;
++/delete-node/ &cpu_pd1;
++/delete-node/ &cpu_pd2;
++/delete-node/ &cpu_pd3;
++/delete-node/ &cpu_pd4;
++/delete-node/ &cpu_pd5;
++/delete-node/ &cpu_pd6;
++/delete-node/ &cpu_pd7;
++/delete-node/ &cluster_pd;
+
+ &apps_rsc {
+ /delete-property/ power-domains;
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
+index 3c124bbe2f4c94..25b17b0425f24e 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-coachz.dtsi
+@@ -53,14 +53,14 @@ skin-temp-crit {
+ cooling-maps {
+ map0 {
+ trip = <&skin_temp_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+
+ map1 {
+ trip = <&skin_temp_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi
+index b2df22faafe889..f57976906d6304 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-homestar.dtsi
+@@ -71,14 +71,14 @@ skin-temp-crit {
+ cooling-maps {
+ map0 {
+ trip = <&skin_temp_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+
+ map1 {
+ trip = <&skin_temp_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-pompom.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-pompom.dtsi
+index ac8d4589e3fb74..f7300ffbb4519a 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-pompom.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-pompom.dtsi
+@@ -12,11 +12,11 @@
+
+ / {
+ thermal-zones {
+- 5v-choke-thermal {
++ choke-5v-thermal {
+ thermal-sensors = <&pm6150_adc_tm 1>;
+
+ trips {
+- 5v-choke-crit {
++ choke-5v-crit {
+ temperature = <125000>;
+ hysteresis = <1000>;
+ type = "critical";
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-quackingstick.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-quackingstick.dtsi
+index 00229b1515e605..ff8996b4de4e1e 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-quackingstick.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-quackingstick.dtsi
+@@ -78,6 +78,7 @@ panel: panel@0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&lcd_rst>;
+ avdd-supply = <&ppvar_lcd>;
++ avee-supply = <&ppvar_lcd>;
+ pp1800-supply = <&v1p8_disp>;
+ pp3300-supply = <&pp3300_dx_edp>;
+ backlight = <&backlight>;
+diff --git a/arch/arm64/boot/dts/qcom/sc7180-trogdor-wormdingler.dtsi b/arch/arm64/boot/dts/qcom/sc7180-trogdor-wormdingler.dtsi
+index af89d80426abbd..d4925be3b1fcf5 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180-trogdor-wormdingler.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180-trogdor-wormdingler.dtsi
+@@ -78,14 +78,14 @@ skin-temp-crit {
+ cooling-maps {
+ map0 {
+ trip = <&skin_temp_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+
+ map1 {
+ trip = <&skin_temp_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7180.dtsi b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+index b5ebf898032512..249b257fc6a74b 100644
+--- a/arch/arm64/boot/dts/qcom/sc7180.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7180.dtsi
+@@ -77,28 +77,28 @@ cpus {
+ #address-cells = <2>;
+ #size-cells = <0>;
+
+- CPU0: cpu@0 {
++ cpu0: cpu@0 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x0>;
+ clocks = <&cpufreq_hw 0>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD0>;
++ power-domains = <&cpu_pd0>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <415>;
+ dynamic-power-coefficient = <137>;
+ operating-points-v2 = <&cpu0_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+- next-level-cache = <&L2_0>;
++ next-level-cache = <&l2_0>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 0>;
+- L2_0: l2-cache {
++ l2_0: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
+- L3_0: l3-cache {
++ next-level-cache = <&l3_0>;
++ l3_0: l3-cache {
+ compatible = "cache";
+ cache-level = <3>;
+ cache-unified;
+@@ -106,206 +106,206 @@ L3_0: l3-cache {
+ };
+ };
+
+- CPU1: cpu@100 {
++ cpu1: cpu@100 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x100>;
+ clocks = <&cpufreq_hw 0>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD1>;
++ power-domains = <&cpu_pd1>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <415>;
+ dynamic-power-coefficient = <137>;
+- next-level-cache = <&L2_100>;
++ next-level-cache = <&l2_100>;
+ operating-points-v2 = <&cpu0_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 0>;
+- L2_100: l2-cache {
++ l2_100: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+- CPU2: cpu@200 {
++ cpu2: cpu@200 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x200>;
+ clocks = <&cpufreq_hw 0>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD2>;
++ power-domains = <&cpu_pd2>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <415>;
+ dynamic-power-coefficient = <137>;
+- next-level-cache = <&L2_200>;
++ next-level-cache = <&l2_200>;
+ operating-points-v2 = <&cpu0_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 0>;
+- L2_200: l2-cache {
++ l2_200: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+- CPU3: cpu@300 {
++ cpu3: cpu@300 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x300>;
+ clocks = <&cpufreq_hw 0>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD3>;
++ power-domains = <&cpu_pd3>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <415>;
+ dynamic-power-coefficient = <137>;
+- next-level-cache = <&L2_300>;
++ next-level-cache = <&l2_300>;
+ operating-points-v2 = <&cpu0_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 0>;
+- L2_300: l2-cache {
++ l2_300: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+- CPU4: cpu@400 {
++ cpu4: cpu@400 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x400>;
+ clocks = <&cpufreq_hw 0>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD4>;
++ power-domains = <&cpu_pd4>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <415>;
+ dynamic-power-coefficient = <137>;
+- next-level-cache = <&L2_400>;
++ next-level-cache = <&l2_400>;
+ operating-points-v2 = <&cpu0_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 0>;
+- L2_400: l2-cache {
++ l2_400: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+- CPU5: cpu@500 {
++ cpu5: cpu@500 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x500>;
+ clocks = <&cpufreq_hw 0>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD5>;
++ power-domains = <&cpu_pd5>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <415>;
+ dynamic-power-coefficient = <137>;
+- next-level-cache = <&L2_500>;
++ next-level-cache = <&l2_500>;
+ operating-points-v2 = <&cpu0_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 0>;
+- L2_500: l2-cache {
++ l2_500: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+- CPU6: cpu@600 {
++ cpu6: cpu@600 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x600>;
+ clocks = <&cpufreq_hw 1>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD6>;
++ power-domains = <&cpu_pd6>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <1024>;
+ dynamic-power-coefficient = <480>;
+- next-level-cache = <&L2_600>;
++ next-level-cache = <&l2_600>;
+ operating-points-v2 = <&cpu6_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 1>;
+- L2_600: l2-cache {
++ l2_600: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+- CPU7: cpu@700 {
++ cpu7: cpu@700 {
+ device_type = "cpu";
+ compatible = "qcom,kryo468";
+ reg = <0x0 0x700>;
+ clocks = <&cpufreq_hw 1>;
+ enable-method = "psci";
+- power-domains = <&CPU_PD7>;
++ power-domains = <&cpu_pd7>;
+ power-domain-names = "psci";
+ capacity-dmips-mhz = <1024>;
+ dynamic-power-coefficient = <480>;
+- next-level-cache = <&L2_700>;
++ next-level-cache = <&l2_700>;
+ operating-points-v2 = <&cpu6_opp_table>;
+ interconnects = <&gem_noc MASTER_APPSS_PROC 3 &mc_virt SLAVE_EBI1 3>,
+ <&osm_l3 MASTER_OSM_L3_APPS &osm_l3 SLAVE_OSM_L3>;
+ #cooling-cells = <2>;
+ qcom,freq-domain = <&cpufreq_hw 1>;
+- L2_700: l2-cache {
++ l2_700: l2-cache {
+ compatible = "cache";
+ cache-level = <2>;
+ cache-unified;
+- next-level-cache = <&L3_0>;
++ next-level-cache = <&l3_0>;
+ };
+ };
+
+ cpu-map {
+ cluster0 {
+ core0 {
+- cpu = <&CPU0>;
++ cpu = <&cpu0>;
+ };
+
+ core1 {
+- cpu = <&CPU1>;
++ cpu = <&cpu1>;
+ };
+
+ core2 {
+- cpu = <&CPU2>;
++ cpu = <&cpu2>;
+ };
+
+ core3 {
+- cpu = <&CPU3>;
++ cpu = <&cpu3>;
+ };
+
+ core4 {
+- cpu = <&CPU4>;
++ cpu = <&cpu4>;
+ };
+
+ core5 {
+- cpu = <&CPU5>;
++ cpu = <&cpu5>;
+ };
+
+ core6 {
+- cpu = <&CPU6>;
++ cpu = <&cpu6>;
+ };
+
+ core7 {
+- cpu = <&CPU7>;
++ cpu = <&cpu7>;
+ };
+ };
+ };
+@@ -313,7 +313,7 @@ core7 {
+ idle_states: idle-states {
+ entry-method = "psci";
+
+- LITTLE_CPU_SLEEP_0: cpu-sleep-0-0 {
++ little_cpu_sleep_0: cpu-sleep-0-0 {
+ compatible = "arm,idle-state";
+ idle-state-name = "little-power-down";
+ arm,psci-suspend-param = <0x40000003>;
+@@ -323,7 +323,7 @@ LITTLE_CPU_SLEEP_0: cpu-sleep-0-0 {
+ local-timer-stop;
+ };
+
+- LITTLE_CPU_SLEEP_1: cpu-sleep-0-1 {
++ little_cpu_sleep_1: cpu-sleep-0-1 {
+ compatible = "arm,idle-state";
+ idle-state-name = "little-rail-power-down";
+ arm,psci-suspend-param = <0x40000004>;
+@@ -333,7 +333,7 @@ LITTLE_CPU_SLEEP_1: cpu-sleep-0-1 {
+ local-timer-stop;
+ };
+
+- BIG_CPU_SLEEP_0: cpu-sleep-1-0 {
++ big_cpu_sleep_0: cpu-sleep-1-0 {
+ compatible = "arm,idle-state";
+ idle-state-name = "big-power-down";
+ arm,psci-suspend-param = <0x40000003>;
+@@ -343,7 +343,7 @@ BIG_CPU_SLEEP_0: cpu-sleep-1-0 {
+ local-timer-stop;
+ };
+
+- BIG_CPU_SLEEP_1: cpu-sleep-1-1 {
++ big_cpu_sleep_1: cpu-sleep-1-1 {
+ compatible = "arm,idle-state";
+ idle-state-name = "big-rail-power-down";
+ arm,psci-suspend-param = <0x40000004>;
+@@ -355,7 +355,7 @@ BIG_CPU_SLEEP_1: cpu-sleep-1-1 {
+ };
+
+ domain_idle_states: domain-idle-states {
+- CLUSTER_SLEEP_PC: cluster-sleep-0 {
++ cluster_sleep_pc: cluster-sleep-0 {
+ compatible = "domain-idle-state";
+ idle-state-name = "cluster-l3-power-collapse";
+ arm,psci-suspend-param = <0x41000044>;
+@@ -364,7 +364,7 @@ CLUSTER_SLEEP_PC: cluster-sleep-0 {
+ min-residency-us = <6118>;
+ };
+
+- CLUSTER_SLEEP_CX_RET: cluster-sleep-1 {
++ cluster_sleep_cx_ret: cluster-sleep-1 {
+ compatible = "domain-idle-state";
+ idle-state-name = "cluster-cx-retention";
+ arm,psci-suspend-param = <0x41001244>;
+@@ -373,7 +373,7 @@ CLUSTER_SLEEP_CX_RET: cluster-sleep-1 {
+ min-residency-us = <8467>;
+ };
+
+- CLUSTER_AOSS_SLEEP: cluster-sleep-2 {
++ cluster_aoss_sleep: cluster-sleep-2 {
+ compatible = "domain-idle-state";
+ idle-state-name = "cluster-power-down";
+ arm,psci-suspend-param = <0x4100b244>;
+@@ -583,59 +583,59 @@ psci {
+ compatible = "arm,psci-1.0";
+ method = "smc";
+
+- CPU_PD0: cpu0 {
++ cpu_pd0: power-domain-cpu0 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&LITTLE_CPU_SLEEP_0 &LITTLE_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&little_cpu_sleep_0 &little_cpu_sleep_1>;
+ };
+
+- CPU_PD1: cpu1 {
++ cpu_pd1: power-domain-cpu1 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&LITTLE_CPU_SLEEP_0 &LITTLE_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&little_cpu_sleep_0 &little_cpu_sleep_1>;
+ };
+
+- CPU_PD2: cpu2 {
++ cpu_pd2: power-domain-cpu2 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&LITTLE_CPU_SLEEP_0 &LITTLE_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&little_cpu_sleep_0 &little_cpu_sleep_1>;
+ };
+
+- CPU_PD3: cpu3 {
++ cpu_pd3: power-domain-cpu3 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&LITTLE_CPU_SLEEP_0 &LITTLE_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&little_cpu_sleep_0 &little_cpu_sleep_1>;
+ };
+
+- CPU_PD4: cpu4 {
++ cpu_pd4: power-domain-cpu4 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&LITTLE_CPU_SLEEP_0 &LITTLE_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&little_cpu_sleep_0 &little_cpu_sleep_1>;
+ };
+
+- CPU_PD5: cpu5 {
++ cpu_pd5: power-domain-cpu5 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&LITTLE_CPU_SLEEP_0 &LITTLE_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&little_cpu_sleep_0 &little_cpu_sleep_1>;
+ };
+
+- CPU_PD6: cpu6 {
++ cpu_pd6: power-domain-cpu6 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&BIG_CPU_SLEEP_0 &BIG_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&big_cpu_sleep_0 &big_cpu_sleep_1>;
+ };
+
+- CPU_PD7: cpu7 {
++ cpu_pd7: power-domain-cpu7 {
+ #power-domain-cells = <0>;
+- power-domains = <&CLUSTER_PD>;
+- domain-idle-states = <&BIG_CPU_SLEEP_0 &BIG_CPU_SLEEP_1>;
++ power-domains = <&cluster_pd>;
++ domain-idle-states = <&big_cpu_sleep_0 &big_cpu_sleep_1>;
+ };
+
+- CLUSTER_PD: cpu-cluster0 {
++ cluster_pd: power-domain-cluster {
+ #power-domain-cells = <0>;
+- domain-idle-states = <&CLUSTER_SLEEP_PC
+- &CLUSTER_SLEEP_CX_RET
+- &CLUSTER_AOSS_SLEEP>;
++ domain-idle-states = <&cluster_sleep_pc
++ &cluster_sleep_cx_ret
++ &cluster_aoss_sleep>;
+ };
+ };
+
+@@ -2546,7 +2546,7 @@ etm@7040000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07040000 0 0x1000>;
+
+- cpu = <&CPU0>;
++ cpu = <&cpu0>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2566,7 +2566,7 @@ etm@7140000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07140000 0 0x1000>;
+
+- cpu = <&CPU1>;
++ cpu = <&cpu1>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2586,7 +2586,7 @@ etm@7240000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07240000 0 0x1000>;
+
+- cpu = <&CPU2>;
++ cpu = <&cpu2>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2606,7 +2606,7 @@ etm@7340000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07340000 0 0x1000>;
+
+- cpu = <&CPU3>;
++ cpu = <&cpu3>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2626,7 +2626,7 @@ etm@7440000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07440000 0 0x1000>;
+
+- cpu = <&CPU4>;
++ cpu = <&cpu4>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2646,7 +2646,7 @@ etm@7540000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07540000 0 0x1000>;
+
+- cpu = <&CPU5>;
++ cpu = <&cpu5>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2666,7 +2666,7 @@ etm@7640000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07640000 0 0x1000>;
+
+- cpu = <&CPU6>;
++ cpu = <&cpu6>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -2686,7 +2686,7 @@ etm@7740000 {
+ compatible = "arm,coresight-etm4x", "arm,primecell";
+ reg = <0 0x07740000 0 0x1000>;
+
+- cpu = <&CPU7>;
++ cpu = <&cpu7>;
+
+ clocks = <&aoss_qmp>;
+ clock-names = "apb_pclk";
+@@ -3734,7 +3734,7 @@ apps_rsc: rsc@18200000 {
+ <SLEEP_TCS 3>,
+ <WAKE_TCS 3>,
+ <CONTROL_TCS 1>;
+- power-domains = <&CLUSTER_PD>;
++ power-domains = <&cluster_pd>;
+
+ rpmhcc: clock-controller {
+ compatible = "qcom,sc7180-rpmh-clk";
+@@ -4063,21 +4063,21 @@ cpu0_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu0_alert0>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu0_alert1>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4111,21 +4111,21 @@ cpu1_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu1_alert0>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu1_alert1>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4159,21 +4159,21 @@ cpu2_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu2_alert0>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu2_alert1>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4207,21 +4207,21 @@ cpu3_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu3_alert0>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu3_alert1>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4255,21 +4255,21 @@ cpu4_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu4_alert0>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu4_alert1>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4303,21 +4303,21 @@ cpu5_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu5_alert0>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu5_alert1>;
+- cooling-device = <&CPU0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu1 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu2 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu3 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu4 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu5 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4351,13 +4351,13 @@ cpu6_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu6_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu6_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4391,13 +4391,13 @@ cpu7_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu7_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu7_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4431,13 +4431,13 @@ cpu8_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu8_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu8_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+@@ -4471,13 +4471,13 @@ cpu9_crit: cpu-crit {
+ cooling-maps {
+ map0 {
+ trip = <&cpu9_alert0>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu9_alert1>;
+- cooling-device = <&CPU6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
+- <&CPU7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
++ cooling-device = <&cpu6 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>,
++ <&cpu7 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc7280.dtsi b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+index 3d8410683402fd..8fbc95cf63fe7e 100644
+--- a/arch/arm64/boot/dts/qcom/sc7280.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc7280.dtsi
+@@ -83,7 +83,7 @@ xo_board: xo-board {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+index 80a57aa228397e..b1e0e51a558291 100644
+--- a/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
++++ b/arch/arm64/boot/dts/qcom/sc8280xp.dtsi
+@@ -2725,7 +2725,7 @@ usb_2_qmpphy1: phy@88f1000 {
+
+ remoteproc_adsp: remoteproc@3000000 {
+ compatible = "qcom,sc8280xp-adsp-pas";
+- reg = <0 0x03000000 0 0x100>;
++ reg = <0 0x03000000 0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 162 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -3882,26 +3882,26 @@ camss: camss@ac5a000 {
+ "vfe3",
+ "csid3";
+
+- interrupts = <GIC_SPI 359 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 360 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 448 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 464 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 465 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 477 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 478 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 479 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 640 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 641 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 758 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 759 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 760 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 761 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 762 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 764 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 359 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 360 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 448 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 464 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 465 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 466 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 467 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 468 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 469 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 477 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 478 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 479 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 640 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 641 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 758 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 759 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 760 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 761 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 762 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 764 IRQ_TYPE_EDGE_RISING>;
+ interrupt-names = "csid1_lite",
+ "vfe_lite1",
+ "csiphy3",
+@@ -5205,7 +5205,7 @@ cpufreq_hw: cpufreq@18591000 {
+
+ remoteproc_nsp0: remoteproc@1b300000 {
+ compatible = "qcom,sc8280xp-nsp0-pas";
+- reg = <0 0x1b300000 0 0x100>;
++ reg = <0 0x1b300000 0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_nsp0_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -5336,7 +5336,7 @@ compute-cb@14 {
+
+ remoteproc_nsp1: remoteproc@21300000 {
+ compatible = "qcom,sc8280xp-nsp1-pas";
+- reg = <0 0x21300000 0 0x100>;
++ reg = <0 0x21300000 0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 887 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_nsp1_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c-navigation-mezzanine.dts b/arch/arm64/boot/dts/qcom/sdm845-db845c-navigation-mezzanine.dts
+deleted file mode 100644
+index a21caa6f3fa259..00000000000000
+--- a/arch/arm64/boot/dts/qcom/sdm845-db845c-navigation-mezzanine.dts
++++ /dev/null
+@@ -1,104 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Copyright (c) 2022, Linaro Ltd.
+- */
+-
+-/dts-v1/;
+-
+-#include "sdm845-db845c.dts"
+-
+-&camss {
+- vdda-phy-supply = <&vreg_l1a_0p875>;
+- vdda-pll-supply = <&vreg_l26a_1p2>;
+-
+- status = "okay";
+-
+- ports {
+- port@0 {
+- csiphy0_ep: endpoint {
+- data-lanes = <0 1 2 3>;
+- remote-endpoint = <&ov8856_ep>;
+- };
+- };
+- };
+-};
+-
+-&cci {
+- status = "okay";
+-};
+-
+-&cci_i2c0 {
+- camera@10 {
+- compatible = "ovti,ov8856";
+- reg = <0x10>;
+-
+- /* CAM0_RST_N */
+- reset-gpios = <&tlmm 9 GPIO_ACTIVE_LOW>;
+- pinctrl-names = "default";
+- pinctrl-0 = <&cam0_default>;
+-
+- clocks = <&clock_camcc CAM_CC_MCLK0_CLK>;
+- clock-names = "xvclk";
+- clock-frequency = <19200000>;
+-
+- /*
+- * The &vreg_s4a_1p8 trace is powered on as a,
+- * so it is represented by a fixed regulator.
+- *
+- * The 2.8V vdda-supply and 1.2V vddd-supply regulators
+- * both have to be enabled through the power management
+- * gpios.
+- */
+- dovdd-supply = <&vreg_lvs1a_1p8>;
+- avdd-supply = <&cam0_avdd_2v8>;
+- dvdd-supply = <&cam0_dvdd_1v2>;
+-
+- port {
+- ov8856_ep: endpoint {
+- link-frequencies = /bits/ 64
+- <360000000 180000000>;
+- data-lanes = <1 2 3 4>;
+- remote-endpoint = <&csiphy0_ep>;
+- };
+- };
+- };
+-};
+-
+-&cci_i2c1 {
+- camera@60 {
+- compatible = "ovti,ov7251";
+-
+- /* I2C address as per ov7251.txt linux documentation */
+- reg = <0x60>;
+-
+- /* CAM3_RST_N */
+- enable-gpios = <&tlmm 21 GPIO_ACTIVE_HIGH>;
+- pinctrl-names = "default";
+- pinctrl-0 = <&cam3_default>;
+-
+- clocks = <&clock_camcc CAM_CC_MCLK3_CLK>;
+- clock-names = "xclk";
+- clock-frequency = <24000000>;
+-
+- /*
+- * The &vreg_s4a_1p8 trace always powered on.
+- *
+- * The 2.8V vdda-supply regulator is enabled when the
+- * vreg_s4a_1p8 trace is pulled high.
+- * It too is represented by a fixed regulator.
+- *
+- * No 1.2V vddd-supply regulator is used.
+- */
+- vdddo-supply = <&vreg_lvs1a_1p8>;
+- vdda-supply = <&cam3_avdd_2v8>;
+-
+- status = "disabled";
+-
+- port {
+- ov7251_ep: endpoint {
+- data-lanes = <0 1>;
+-/* remote-endpoint = <&csiphy3_ep>; */
+- };
+- };
+- };
+-};
+diff --git a/arch/arm64/boot/dts/qcom/sdm845-db845c-navigation-mezzanine.dtso b/arch/arm64/boot/dts/qcom/sdm845-db845c-navigation-mezzanine.dtso
+new file mode 100644
+index 00000000000000..51f1a4883ab8f0
+--- /dev/null
++++ b/arch/arm64/boot/dts/qcom/sdm845-db845c-navigation-mezzanine.dtso
+@@ -0,0 +1,70 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * Copyright (c) 2022, Linaro Ltd.
++ */
++
++/dts-v1/;
++/plugin/;
++
++#include <dt-bindings/clock/qcom,camcc-sdm845.h>
++#include <dt-bindings/gpio/gpio.h>
++
++&camss {
++ vdda-phy-supply = <&vreg_l1a_0p875>;
++ vdda-pll-supply = <&vreg_l26a_1p2>;
++
++ status = "okay";
++
++ ports {
++ port@0 {
++ csiphy0_ep: endpoint {
++ data-lanes = <0 1 2 3>;
++ remote-endpoint = <&ov8856_ep>;
++ };
++ };
++ };
++};
++
++&cci {
++ status = "okay";
++};
++
++&cci_i2c0 {
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ camera@10 {
++ compatible = "ovti,ov8856";
++ reg = <0x10>;
++
++ /* CAM0_RST_N */
++ reset-gpios = <&tlmm 9 GPIO_ACTIVE_LOW>;
++ pinctrl-names = "default";
++ pinctrl-0 = <&cam0_default>;
++
++ clocks = <&clock_camcc CAM_CC_MCLK0_CLK>;
++ clock-names = "xvclk";
++ clock-frequency = <19200000>;
++
++ /*
++ * The &vreg_s4a_1p8 trace is powered on as a,
++ * so it is represented by a fixed regulator.
++ *
++ * The 2.8V vdda-supply and 1.2V vddd-supply regulators
++ * both have to be enabled through the power management
++ * gpios.
++ */
++ dovdd-supply = <&vreg_lvs1a_1p8>;
++ avdd-supply = <&cam0_avdd_2v8>;
++ dvdd-supply = <&cam0_dvdd_1v2>;
++
++ port {
++ ov8856_ep: endpoint {
++ link-frequencies = /bits/ 64
++ <360000000 180000000>;
++ data-lanes = <1 2 3 4>;
++ remote-endpoint = <&csiphy0_ep>;
++ };
++ };
++ };
++};
+diff --git a/arch/arm64/boot/dts/qcom/sdm845.dtsi b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+index 54077549b9da7f..0a0cef9dfcc416 100644
+--- a/arch/arm64/boot/dts/qcom/sdm845.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdm845.dtsi
+@@ -4326,16 +4326,16 @@ camss: camss@acb3000 {
+ "vfe1",
+ "vfe_lite";
+
+- interrupts = <GIC_SPI 464 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 477 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 478 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 479 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 448 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 465 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 464 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 466 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 468 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 477 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 478 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 479 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 448 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 465 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 467 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 469 IRQ_TYPE_EDGE_RISING>;
+ interrupt-names = "csid0",
+ "csid1",
+ "csid2",
+diff --git a/arch/arm64/boot/dts/qcom/sdx75.dtsi b/arch/arm64/boot/dts/qcom/sdx75.dtsi
+index 7cf3fcb469a868..dcb925348e3f31 100644
+--- a/arch/arm64/boot/dts/qcom/sdx75.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdx75.dtsi
+@@ -34,7 +34,7 @@ xo_board: xo-board {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm4450.dtsi b/arch/arm64/boot/dts/qcom/sm4450.dtsi
+index 1e05cd00b635ee..0bbacab6842c3e 100644
+--- a/arch/arm64/boot/dts/qcom/sm4450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm4450.dtsi
+@@ -29,7 +29,7 @@ xo_board: xo-board {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sm6125.dtsi b/arch/arm64/boot/dts/qcom/sm6125.dtsi
+index 133610d14fc41a..1f7fd429ad4286 100644
+--- a/arch/arm64/boot/dts/qcom/sm6125.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6125.dtsi
+@@ -28,7 +28,7 @@ xo_board: xo-board {
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ clock-output-names = "sleep_clk";
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm6375.dtsi b/arch/arm64/boot/dts/qcom/sm6375.dtsi
+index 4d519dd6e7ef2f..72e01437ded125 100644
+--- a/arch/arm64/boot/dts/qcom/sm6375.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6375.dtsi
+@@ -29,7 +29,7 @@ xo_board_clk: xo-board-clk {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm7125.dtsi b/arch/arm64/boot/dts/qcom/sm7125.dtsi
+index 12dd72859a433b..a53145a610a3c8 100644
+--- a/arch/arm64/boot/dts/qcom/sm7125.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm7125.dtsi
+@@ -6,11 +6,11 @@
+ #include "sc7180.dtsi"
+
+ /* SM7125 uses Kryo 465 instead of Kryo 468 */
+-&CPU0 { compatible = "qcom,kryo465"; };
+-&CPU1 { compatible = "qcom,kryo465"; };
+-&CPU2 { compatible = "qcom,kryo465"; };
+-&CPU3 { compatible = "qcom,kryo465"; };
+-&CPU4 { compatible = "qcom,kryo465"; };
+-&CPU5 { compatible = "qcom,kryo465"; };
+-&CPU6 { compatible = "qcom,kryo465"; };
+-&CPU7 { compatible = "qcom,kryo465"; };
++&cpu0 { compatible = "qcom,kryo465"; };
++&cpu1 { compatible = "qcom,kryo465"; };
++&cpu2 { compatible = "qcom,kryo465"; };
++&cpu3 { compatible = "qcom,kryo465"; };
++&cpu4 { compatible = "qcom,kryo465"; };
++&cpu5 { compatible = "qcom,kryo465"; };
++&cpu6 { compatible = "qcom,kryo465"; };
++&cpu7 { compatible = "qcom,kryo465"; };
+diff --git a/arch/arm64/boot/dts/qcom/sm7225-fairphone-fp4.dts b/arch/arm64/boot/dts/qcom/sm7225-fairphone-fp4.dts
+index 2ee2561b57b1d6..52b16a4fdc4321 100644
+--- a/arch/arm64/boot/dts/qcom/sm7225-fairphone-fp4.dts
++++ b/arch/arm64/boot/dts/qcom/sm7225-fairphone-fp4.dts
+@@ -32,7 +32,7 @@ / {
+ chassis-type = "handset";
+
+ /* required for bootloader to select correct board */
+- qcom,msm-id = <434 0x10000>, <459 0x10000>;
++ qcom,msm-id = <459 0x10000>;
+ qcom,board-id = <8 32>;
+
+ aliases {
+diff --git a/arch/arm64/boot/dts/qcom/sm8150-microsoft-surface-duo.dts b/arch/arm64/boot/dts/qcom/sm8150-microsoft-surface-duo.dts
+index b039773c44653a..a1323a8b8e6bfb 100644
+--- a/arch/arm64/boot/dts/qcom/sm8150-microsoft-surface-duo.dts
++++ b/arch/arm64/boot/dts/qcom/sm8150-microsoft-surface-duo.dts
+@@ -376,8 +376,8 @@ da7280@4a {
+ pinctrl-0 = <&da7280_intr_default>;
+
+ dlg,actuator-type = "LRA";
+- dlg,dlg,const-op-mode = <1>;
+- dlg,dlg,periodic-op-mode = <1>;
++ dlg,const-op-mode = <1>;
++ dlg,periodic-op-mode = <1>;
+ dlg,nom-microvolt = <2000000>;
+ dlg,abs-max-microvolt = <2000000>;
+ dlg,imax-microamp = <129000>;
+diff --git a/arch/arm64/boot/dts/qcom/sm8250.dtsi b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+index 630f4eff20bf81..faa36d17b9f2c9 100644
+--- a/arch/arm64/boot/dts/qcom/sm8250.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8250.dtsi
+@@ -84,7 +84,7 @@ xo_board: xo-board {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32768>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
+@@ -4481,20 +4481,20 @@ camss: camss@ac6a000 {
+ "vfe_lite0",
+ "vfe_lite1";
+
+- interrupts = <GIC_SPI 477 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 478 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 479 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 448 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 89 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 464 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 466 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 468 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 359 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 465 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 467 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 469 IRQ_TYPE_LEVEL_HIGH>,
+- <GIC_SPI 360 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 477 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 478 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 479 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 448 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 86 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 89 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 464 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 466 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 468 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 359 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 465 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 467 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 469 IRQ_TYPE_EDGE_RISING>,
++ <GIC_SPI 360 IRQ_TYPE_EDGE_RISING>;
+ interrupt-names = "csiphy0",
+ "csiphy1",
+ "csiphy2",
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 37a2aba0d4cae0..041750d71e4550 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -42,7 +42,7 @@ xo_board: xo-board {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+ };
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index 38cb524cc56893..f7d52e491b694b 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -43,7 +43,7 @@ xo_board: xo-board {
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+ };
+
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-hdk.dts b/arch/arm64/boot/dts/qcom/sm8550-hdk.dts
+index 01c92160260572..29bc1ddfc7b25f 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-hdk.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-hdk.dts
+@@ -1172,7 +1172,7 @@ &sdhc_2 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &swr0 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-mtp.dts b/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
+index ab447fc252f7dd..5648ab60ba4c4b 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-mtp.dts
+@@ -825,7 +825,7 @@ &sdhc_2 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &swr0 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-qrd.dts b/arch/arm64/boot/dts/qcom/sm8550-qrd.dts
+index 6052dd922ec55c..3a6cb279130489 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-qrd.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-qrd.dts
+@@ -1005,7 +1005,7 @@ &remoteproc_mpss {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &swr0 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-samsung-q5q.dts b/arch/arm64/boot/dts/qcom/sm8550-samsung-q5q.dts
+index 3d351e90bb3986..62a6b90697b063 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-samsung-q5q.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-samsung-q5q.dts
+@@ -565,7 +565,7 @@ &remoteproc_mpss {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &tlmm {
+diff --git a/arch/arm64/boot/dts/qcom/sm8550-sony-xperia-yodo-pdx234.dts b/arch/arm64/boot/dts/qcom/sm8550-sony-xperia-yodo-pdx234.dts
+index 85d487ef80a0be..d90dc7b37c4a74 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550-sony-xperia-yodo-pdx234.dts
++++ b/arch/arm64/boot/dts/qcom/sm8550-sony-xperia-yodo-pdx234.dts
+@@ -722,7 +722,7 @@ &sdhc_2 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &tlmm {
+diff --git a/arch/arm64/boot/dts/qcom/sm8650-hdk.dts b/arch/arm64/boot/dts/qcom/sm8650-hdk.dts
+index 127c7aacd4fc31..59363267d2e0ab 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650-hdk.dts
++++ b/arch/arm64/boot/dts/qcom/sm8650-hdk.dts
+@@ -1117,7 +1117,7 @@ &sdhc_2 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &swr0 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8650-mtp.dts b/arch/arm64/boot/dts/qcom/sm8650-mtp.dts
+index c63822f5b12789..74275ca668c76f 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650-mtp.dts
++++ b/arch/arm64/boot/dts/qcom/sm8650-mtp.dts
+@@ -734,7 +734,7 @@ &sdhc_2 {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &swr0 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8650-qrd.dts b/arch/arm64/boot/dts/qcom/sm8650-qrd.dts
+index 8ca0d28eba9bd0..1689699d6de710 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650-qrd.dts
++++ b/arch/arm64/boot/dts/qcom/sm8650-qrd.dts
+@@ -1045,7 +1045,7 @@ &remoteproc_mpss {
+ };
+
+ &sleep_clk {
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ };
+
+ &spi4 {
+diff --git a/arch/arm64/boot/dts/qcom/sm8650.dtsi b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+index 01ac3769ffa62f..cd54fd723ce40e 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+@@ -5624,7 +5624,7 @@ compute-cb@8 {
+
+ /* note: secure cb9 in downstream */
+
+- compute-cb@10 {
++ compute-cb@12 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <12>;
+
+@@ -5634,7 +5634,7 @@ compute-cb@10 {
+ dma-coherent;
+ };
+
+- compute-cb@11 {
++ compute-cb@13 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <13>;
+
+@@ -5644,7 +5644,7 @@ compute-cb@11 {
+ dma-coherent;
+ };
+
+- compute-cb@12 {
++ compute-cb@14 {
+ compatible = "qcom,fastrpc-compute-cb";
+ reg = <14>;
+
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi b/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
+index cdb401767c4206..89e39d55278579 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
+@@ -680,14 +680,14 @@ &qupv3_2 {
+
+ &remoteproc_adsp {
+ firmware-name = "qcom/x1e80100/microsoft/Romulus/qcadsp8380.mbn",
+- "qcom/x1e80100/microsoft/Romulus/adsp_dtb.mbn";
++ "qcom/x1e80100/microsoft/Romulus/adsp_dtbs.elf";
+
+ status = "okay";
+ };
+
+ &remoteproc_cdsp {
+ firmware-name = "qcom/x1e80100/microsoft/Romulus/qccdsp8380.mbn",
+- "qcom/x1e80100/microsoft/Romulus/cdsp_dtb.mbn";
++ "qcom/x1e80100/microsoft/Romulus/cdsp_dtbs.elf";
+
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index a97ceff939d882..f0797df9619b15 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -38,7 +38,7 @@ xo_board: xo-board {
+
+ sleep_clk: sleep-clk {
+ compatible = "fixed-clock";
+- clock-frequency = <32000>;
++ clock-frequency = <32764>;
+ #clock-cells = <0>;
+ };
+
+diff --git a/arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi b/arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi
+index 21bfa4e03972ff..612cdc7efabbcc 100644
+--- a/arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi
++++ b/arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi
+@@ -42,11 +42,6 @@ aliases {
+ #endif
+ };
+
+- chosen {
+- bootargs = "ignore_loglevel";
+- stdout-path = "serial0:115200n8";
+- };
+-
+ memory@48000000 {
+ device_type = "memory";
+ /* First 128MB is reserved for secure area. */
+diff --git a/arch/arm64/boot/dts/renesas/rzg3s-smarc.dtsi b/arch/arm64/boot/dts/renesas/rzg3s-smarc.dtsi
+index 7945d44e6ee159..af2ab1629104b0 100644
+--- a/arch/arm64/boot/dts/renesas/rzg3s-smarc.dtsi
++++ b/arch/arm64/boot/dts/renesas/rzg3s-smarc.dtsi
+@@ -12,10 +12,15 @@
+ / {
+ aliases {
+ i2c0 = &i2c0;
+- serial0 = &scif0;
++ serial3 = &scif0;
+ mmc1 = &sdhi1;
+ };
+
++ chosen {
++ bootargs = "ignore_loglevel";
++ stdout-path = "serial3:115200n8";
++ };
++
+ keys {
+ compatible = "gpio-keys";
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3308-rock-s0.dts b/arch/arm64/boot/dts/rockchip/rk3308-rock-s0.dts
+index bd6419a5c20a22..8311af4c8689f6 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3308-rock-s0.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3308-rock-s0.dts
+@@ -74,6 +74,23 @@ vcc_io: regulator-3v3-vcc-io {
+ vin-supply = <&vcc5v0_sys>;
+ };
+
++ /*
++ * HW revision prior to v1.2 must pull GPIO4_D6 low to access sdmmc.
++ * This is modeled as an always-on active low fixed regulator.
++ */
++ vcc_sd: regulator-3v3-vcc-sd {
++ compatible = "regulator-fixed";
++ gpios = <&gpio4 RK_PD6 GPIO_ACTIVE_LOW>;
++ pinctrl-names = "default";
++ pinctrl-0 = <&sdmmc_2030>;
++ regulator-name = "vcc_sd";
++ regulator-always-on;
++ regulator-boot-on;
++ regulator-min-microvolt = <3300000>;
++ regulator-max-microvolt = <3300000>;
++ vin-supply = <&vcc_io>;
++ };
++
+ vcc5v0_sys: regulator-5v0-vcc-sys {
+ compatible = "regulator-fixed";
+ regulator-name = "vcc5v0_sys";
+@@ -181,6 +198,12 @@ pwr_led: pwr-led {
+ };
+ };
+
++ sdmmc {
++ sdmmc_2030: sdmmc-2030 {
++ rockchip,pins = <4 RK_PD6 RK_FUNC_GPIO &pcfg_pull_none>;
++ };
++ };
++
+ wifi {
+ wifi_reg_on: wifi-reg-on {
+ rockchip,pins = <0 RK_PA2 RK_FUNC_GPIO &pcfg_pull_none>;
+@@ -233,7 +256,7 @@ &sdmmc {
+ cap-mmc-highspeed;
+ cap-sd-highspeed;
+ disable-wp;
+- vmmc-supply = <&vcc_io>;
++ vmmc-supply = <&vcc_sd>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5.dts b/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5.dts
+index 170b14f92f51b5..f9ef0af8aa1ac8 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3568-wolfvision-pf5.dts
+@@ -53,7 +53,7 @@ hdmi_tx_5v: hdmi-tx-5v-regulator {
+
+ pdm_codec: pdm-codec {
+ compatible = "dmic-codec";
+- num-channels = <1>;
++ num-channels = <2>;
+ #sound-dai-cells = <0>;
+ };
+
+diff --git a/arch/arm64/boot/dts/ti/Makefile b/arch/arm64/boot/dts/ti/Makefile
+index bcd392c3206e50..562e6d57bc9919 100644
+--- a/arch/arm64/boot/dts/ti/Makefile
++++ b/arch/arm64/boot/dts/ti/Makefile
+@@ -41,10 +41,6 @@ dtb-$(CONFIG_ARCH_K3) += k3-am62x-sk-csi2-imx219.dtbo
+ dtb-$(CONFIG_ARCH_K3) += k3-am62x-sk-hdmi-audio.dtbo
+
+ # Boards with AM64x SoC
+-k3-am642-hummingboard-t-pcie-dtbs := \
+- k3-am642-hummingboard-t.dtb k3-am642-hummingboard-t-pcie.dtbo
+-k3-am642-hummingboard-t-usb3-dtbs := \
+- k3-am642-hummingboard-t.dtb k3-am642-hummingboard-t-usb3.dtbo
+ dtb-$(CONFIG_ARCH_K3) += k3-am642-evm.dtb
+ dtb-$(CONFIG_ARCH_K3) += k3-am642-evm-icssg1-dualemac.dtbo
+ dtb-$(CONFIG_ARCH_K3) += k3-am642-evm-icssg1-dualemac-mii.dtbo
+diff --git a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+index 5b92aef5b284b7..60c6814206a1f9 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62-main.dtsi
+@@ -23,7 +23,6 @@ gic500: interrupt-controller@1800000 {
+ interrupt-controller;
+ reg = <0x00 0x01800000 0x00 0x10000>, /* GICD */
+ <0x00 0x01880000 0x00 0xc0000>, /* GICR */
+- <0x00 0x01880000 0x00 0xc0000>, /* GICR */
+ <0x01 0x00000000 0x00 0x2000>, /* GICC */
+ <0x01 0x00010000 0x00 0x1000>, /* GICH */
+ <0x01 0x00020000 0x00 0x2000>; /* GICV */
+diff --git a/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi b/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
+index 16a578ae2b412f..56945d29e0150b 100644
+--- a/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am62a-main.dtsi
+@@ -18,7 +18,6 @@ gic500: interrupt-controller@1800000 {
+ compatible = "arm,gic-v3";
+ reg = <0x00 0x01800000 0x00 0x10000>, /* GICD */
+ <0x00 0x01880000 0x00 0xc0000>, /* GICR */
+- <0x00 0x01880000 0x00 0xc0000>, /* GICR */
+ <0x01 0x00000000 0x00 0x2000>, /* GICC */
+ <0x01 0x00010000 0x00 0x1000>, /* GICH */
+ <0x01 0x00020000 0x00 0x2000>; /* GICV */
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-pcie.dts b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-pcie.dts
+new file mode 100644
+index 00000000000000..023b2a6aaa5668
+--- /dev/null
++++ b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-pcie.dts
+@@ -0,0 +1,47 @@
++// SPDX-License-Identifier: GPL-2.0+
++/*
++ * Copyright (C) 2023 Josua Mayer <josua@solid-run.com>
++ *
++ * DTS for SolidRun AM642 HummingBoard-T,
++ * running on Cortex A53, with PCI-E.
++ *
++ */
++
++#include "k3-am642-hummingboard-t.dts"
++
++#include "k3-serdes.h"
++
++/ {
++ model = "SolidRun AM642 HummingBoard-T with PCI-E";
++};
++
++&pcie0_rc {
++ pinctrl-names = "default";
++ pinctrl-0 = <&pcie0_default_pins>;
++ reset-gpios = <&main_gpio1 15 GPIO_ACTIVE_HIGH>;
++ phys = <&serdes0_link>;
++ phy-names = "pcie-phy";
++ num-lanes = <1>;
++ status = "okay";
++};
++
++&serdes0 {
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ serdes0_link: phy@0 {
++ reg = <0>;
++ cdns,num-lanes = <1>;
++ cdns,phy-type = <PHY_TYPE_PCIE>;
++ #phy-cells = <0>;
++ resets = <&serdes_wiz0 1>;
++ };
++};
++
++&serdes_ln_ctrl {
++ idle-states = <AM64_SERDES0_LANE0_PCIE0>;
++};
++
++&serdes_mux {
++ idle-state = <1>;
++};
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-pcie.dtso b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-pcie.dtso
+deleted file mode 100644
+index bd9a5caf20da5b..00000000000000
+--- a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-pcie.dtso
++++ /dev/null
+@@ -1,45 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0+
+-/*
+- * Copyright (C) 2023 Josua Mayer <josua@solid-run.com>
+- *
+- * Overlay for SolidRun AM642 HummingBoard-T to enable PCI-E.
+- */
+-
+-/dts-v1/;
+-/plugin/;
+-
+-#include <dt-bindings/gpio/gpio.h>
+-#include <dt-bindings/phy/phy.h>
+-
+-#include "k3-serdes.h"
+-
+-&pcie0_rc {
+- pinctrl-names = "default";
+- pinctrl-0 = <&pcie0_default_pins>;
+- reset-gpios = <&main_gpio1 15 GPIO_ACTIVE_HIGH>;
+- phys = <&serdes0_link>;
+- phy-names = "pcie-phy";
+- num-lanes = <1>;
+- status = "okay";
+-};
+-
+-&serdes0 {
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- serdes0_link: phy@0 {
+- reg = <0>;
+- cdns,num-lanes = <1>;
+- cdns,phy-type = <PHY_TYPE_PCIE>;
+- #phy-cells = <0>;
+- resets = <&serdes_wiz0 1>;
+- };
+-};
+-
+-&serdes_ln_ctrl {
+- idle-states = <AM64_SERDES0_LANE0_PCIE0>;
+-};
+-
+-&serdes_mux {
+- idle-state = <1>;
+-};
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-usb3.dts b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-usb3.dts
+new file mode 100644
+index 00000000000000..ee9bd618f37010
+--- /dev/null
++++ b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-usb3.dts
+@@ -0,0 +1,47 @@
++// SPDX-License-Identifier: GPL-2.0+
++/*
++ * Copyright (C) 2023 Josua Mayer <josua@solid-run.com>
++ *
++ * DTS for SolidRun AM642 HummingBoard-T,
++ * running on Cortex A53, with USB-3.1 Gen 1.
++ *
++ */
++
++#include "k3-am642-hummingboard-t.dts"
++
++#include "k3-serdes.h"
++
++/ {
++ model = "SolidRun AM642 HummingBoard-T with USB-3.1 Gen 1";
++};
++
++&serdes0 {
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ serdes0_link: phy@0 {
++ reg = <0>;
++ cdns,num-lanes = <1>;
++ cdns,phy-type = <PHY_TYPE_USB3>;
++ #phy-cells = <0>;
++ resets = <&serdes_wiz0 1>;
++ };
++};
++
++&serdes_ln_ctrl {
++ idle-states = <AM64_SERDES0_LANE0_USB>;
++};
++
++&serdes_mux {
++ idle-state = <0>;
++};
++
++&usbss0 {
++ /delete-property/ ti,usb2-only;
++};
++
++&usb0 {
++ maximum-speed = "super-speed";
++ phys = <&serdes0_link>;
++ phy-names = "cdns3,usb3-phy";
++};
+diff --git a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-usb3.dtso b/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-usb3.dtso
+deleted file mode 100644
+index ffcc3bd3c7bc5d..00000000000000
+--- a/arch/arm64/boot/dts/ti/k3-am642-hummingboard-t-usb3.dtso
++++ /dev/null
+@@ -1,44 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0+
+-/*
+- * Copyright (C) 2023 Josua Mayer <josua@solid-run.com>
+- *
+- * Overlay for SolidRun AM642 HummingBoard-T to enable USB-3.1.
+- */
+-
+-/dts-v1/;
+-/plugin/;
+-
+-#include <dt-bindings/phy/phy.h>
+-
+-#include "k3-serdes.h"
+-
+-&serdes0 {
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- serdes0_link: phy@0 {
+- reg = <0>;
+- cdns,num-lanes = <1>;
+- cdns,phy-type = <PHY_TYPE_USB3>;
+- #phy-cells = <0>;
+- resets = <&serdes_wiz0 1>;
+- };
+-};
+-
+-&serdes_ln_ctrl {
+- idle-states = <AM64_SERDES0_LANE0_USB>;
+-};
+-
+-&serdes_mux {
+- idle-state = <0>;
+-};
+-
+-&usbss0 {
+- /delete-property/ ti,usb2-only;
+-};
+-
+-&usb0 {
+- maximum-speed = "super-speed";
+- phys = <&serdes0_link>;
+- phy-names = "cdns3,usb3-phy";
+-};
+diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
+index 5fdbfea7a5b295..8fe7dbae33bf90 100644
+--- a/arch/arm64/configs/defconfig
++++ b/arch/arm64/configs/defconfig
+@@ -1347,7 +1347,6 @@ CONFIG_SM_DISPCC_6115=m
+ CONFIG_SM_DISPCC_8250=y
+ CONFIG_SM_DISPCC_8450=m
+ CONFIG_SM_DISPCC_8550=m
+-CONFIG_SM_DISPCC_8650=m
+ CONFIG_SM_GCC_4450=y
+ CONFIG_SM_GCC_6115=y
+ CONFIG_SM_GCC_8350=y
+diff --git a/arch/hexagon/include/asm/cmpxchg.h b/arch/hexagon/include/asm/cmpxchg.h
+index bf6cf5579cf459..9c58fb81f7fd67 100644
+--- a/arch/hexagon/include/asm/cmpxchg.h
++++ b/arch/hexagon/include/asm/cmpxchg.h
+@@ -56,7 +56,7 @@ __arch_xchg(unsigned long x, volatile void *ptr, int size)
+ __typeof__(ptr) __ptr = (ptr); \
+ __typeof__(*(ptr)) __old = (old); \
+ __typeof__(*(ptr)) __new = (new); \
+- __typeof__(*(ptr)) __oldval = 0; \
++ __typeof__(*(ptr)) __oldval = (__typeof__(*(ptr))) 0; \
+ \
+ asm volatile( \
+ "1: %0 = memw_locked(%1);\n" \
+diff --git a/arch/hexagon/kernel/traps.c b/arch/hexagon/kernel/traps.c
+index 75e062722d285b..040a958de1dfc5 100644
+--- a/arch/hexagon/kernel/traps.c
++++ b/arch/hexagon/kernel/traps.c
+@@ -195,8 +195,10 @@ int die(const char *str, struct pt_regs *regs, long err)
+ printk(KERN_EMERG "Oops: %s[#%d]:\n", str, ++die.counter);
+
+ if (notify_die(DIE_OOPS, str, regs, err, pt_cause(regs), SIGSEGV) ==
+- NOTIFY_STOP)
++ NOTIFY_STOP) {
++ spin_unlock_irq(&die.lock);
+ return 1;
++ }
+
+ print_modules();
+ show_regs(regs);
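The bug class fixed here is a lock left held on an early return: the NOTIFY_STOP path bailed out of die() without dropping the lock taken earlier in the function. A standalone sketch of the balanced form (locking stubs are illustrative):

        #include <stdbool.h>

        struct lock { bool held; };

        static void acquire(struct lock *l) { l->held = true; }
        static void release(struct lock *l) { l->held = false; }

        static int report(struct lock *l, bool stop)
        {
                acquire(l);
                if (stop) {
                        release(l);     /* the unlock the fix adds on the early return */
                        return 1;
                }
                /* ... dump state ... */
                release(l);
                return 0;
        }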
+diff --git a/arch/loongarch/include/asm/hw_breakpoint.h b/arch/loongarch/include/asm/hw_breakpoint.h
+index d78330916bd18a..13b2462f3d8c9d 100644
+--- a/arch/loongarch/include/asm/hw_breakpoint.h
++++ b/arch/loongarch/include/asm/hw_breakpoint.h
+@@ -38,8 +38,8 @@ struct arch_hw_breakpoint {
+ * Limits.
+ * Changing these will require modifications to the register accessors.
+ */
+-#define LOONGARCH_MAX_BRP 8
+-#define LOONGARCH_MAX_WRP 8
++#define LOONGARCH_MAX_BRP 14
++#define LOONGARCH_MAX_WRP 14
+
+ /* Virtual debug register bases. */
+ #define CSR_CFG_ADDR 0
+diff --git a/arch/loongarch/include/asm/loongarch.h b/arch/loongarch/include/asm/loongarch.h
+index 64ad277e096edd..aaa4ad6b85944a 100644
+--- a/arch/loongarch/include/asm/loongarch.h
++++ b/arch/loongarch/include/asm/loongarch.h
+@@ -959,6 +959,36 @@
+ #define LOONGARCH_CSR_DB7CTRL 0x34a /* data breakpoint 7 control */
+ #define LOONGARCH_CSR_DB7ASID 0x34b /* data breakpoint 7 asid */
+
++#define LOONGARCH_CSR_DB8ADDR 0x350 /* data breakpoint 8 address */
++#define LOONGARCH_CSR_DB8MASK 0x351 /* data breakpoint 8 mask */
++#define LOONGARCH_CSR_DB8CTRL 0x352 /* data breakpoint 8 control */
++#define LOONGARCH_CSR_DB8ASID 0x353 /* data breakpoint 8 asid */
++
++#define LOONGARCH_CSR_DB9ADDR 0x358 /* data breakpoint 9 address */
++#define LOONGARCH_CSR_DB9MASK 0x359 /* data breakpoint 9 mask */
++#define LOONGARCH_CSR_DB9CTRL 0x35a /* data breakpoint 9 control */
++#define LOONGARCH_CSR_DB9ASID 0x35b /* data breakpoint 9 asid */
++
++#define LOONGARCH_CSR_DB10ADDR 0x360 /* data breakpoint 10 address */
++#define LOONGARCH_CSR_DB10MASK 0x361 /* data breakpoint 10 mask */
++#define LOONGARCH_CSR_DB10CTRL 0x362 /* data breakpoint 10 control */
++#define LOONGARCH_CSR_DB10ASID 0x363 /* data breakpoint 10 asid */
++
++#define LOONGARCH_CSR_DB11ADDR 0x368 /* data breakpoint 11 address */
++#define LOONGARCH_CSR_DB11MASK 0x369 /* data breakpoint 11 mask */
++#define LOONGARCH_CSR_DB11CTRL 0x36a /* data breakpoint 11 control */
++#define LOONGARCH_CSR_DB11ASID 0x36b /* data breakpoint 11 asid */
++
++#define LOONGARCH_CSR_DB12ADDR 0x370 /* data breakpoint 12 address */
++#define LOONGARCH_CSR_DB12MASK 0x371 /* data breakpoint 12 mask */
++#define LOONGARCH_CSR_DB12CTRL 0x372 /* data breakpoint 12 control */
++#define LOONGARCH_CSR_DB12ASID 0x373 /* data breakpoint 12 asid */
++
++#define LOONGARCH_CSR_DB13ADDR 0x378 /* data breakpoint 13 address */
++#define LOONGARCH_CSR_DB13MASK 0x379 /* data breakpoint 13 mask */
++#define LOONGARCH_CSR_DB13CTRL 0x37a /* data breakpoint 13 control */
++#define LOONGARCH_CSR_DB13ASID 0x37b /* data breakpoint 13 asid */
++
+ #define LOONGARCH_CSR_FWPC 0x380 /* instruction breakpoint config */
+ #define LOONGARCH_CSR_FWPS 0x381 /* instruction breakpoint status */
+
+@@ -1002,6 +1032,36 @@
+ #define LOONGARCH_CSR_IB7CTRL 0x3ca /* inst breakpoint 7 control */
+ #define LOONGARCH_CSR_IB7ASID 0x3cb /* inst breakpoint 7 asid */
+
++#define LOONGARCH_CSR_IB8ADDR 0x3d0 /* inst breakpoint 8 address */
++#define LOONGARCH_CSR_IB8MASK 0x3d1 /* inst breakpoint 8 mask */
++#define LOONGARCH_CSR_IB8CTRL 0x3d2 /* inst breakpoint 8 control */
++#define LOONGARCH_CSR_IB8ASID 0x3d3 /* inst breakpoint 8 asid */
++
++#define LOONGARCH_CSR_IB9ADDR 0x3d8 /* inst breakpoint 9 address */
++#define LOONGARCH_CSR_IB9MASK 0x3d9 /* inst breakpoint 9 mask */
++#define LOONGARCH_CSR_IB9CTRL 0x3da /* inst breakpoint 9 control */
++#define LOONGARCH_CSR_IB9ASID 0x3db /* inst breakpoint 9 asid */
++
++#define LOONGARCH_CSR_IB10ADDR 0x3e0 /* inst breakpoint 10 address */
++#define LOONGARCH_CSR_IB10MASK 0x3e1 /* inst breakpoint 10 mask */
++#define LOONGARCH_CSR_IB10CTRL 0x3e2 /* inst breakpoint 10 control */
++#define LOONGARCH_CSR_IB10ASID 0x3e3 /* inst breakpoint 10 asid */
++
++#define LOONGARCH_CSR_IB11ADDR 0x3e8 /* inst breakpoint 11 address */
++#define LOONGARCH_CSR_IB11MASK 0x3e9 /* inst breakpoint 11 mask */
++#define LOONGARCH_CSR_IB11CTRL 0x3ea /* inst breakpoint 11 control */
++#define LOONGARCH_CSR_IB11ASID 0x3eb /* inst breakpoint 11 asid */
++
++#define LOONGARCH_CSR_IB12ADDR 0x3f0 /* inst breakpoint 12 address */
++#define LOONGARCH_CSR_IB12MASK 0x3f1 /* inst breakpoint 12 mask */
++#define LOONGARCH_CSR_IB12CTRL 0x3f2 /* inst breakpoint 12 control */
++#define LOONGARCH_CSR_IB12ASID 0x3f3 /* inst breakpoint 12 asid */
++
++#define LOONGARCH_CSR_IB13ADDR 0x3f8 /* inst breakpoint 13 address */
++#define LOONGARCH_CSR_IB13MASK 0x3f9 /* inst breakpoint 13 mask */
++#define LOONGARCH_CSR_IB13CTRL 0x3fa /* inst breakpoint 13 control */
++#define LOONGARCH_CSR_IB13ASID 0x3fb /* inst breakpoint 13 asid */
++
+ #define LOONGARCH_CSR_DEBUG 0x500 /* debug config */
+ #define LOONGARCH_CSR_DERA 0x501 /* debug era */
+ #define LOONGARCH_CSR_DESAVE 0x502 /* debug save */
+diff --git a/arch/loongarch/kernel/hw_breakpoint.c b/arch/loongarch/kernel/hw_breakpoint.c
+index a6e4b605bfa8d6..c35f9bf3803349 100644
+--- a/arch/loongarch/kernel/hw_breakpoint.c
++++ b/arch/loongarch/kernel/hw_breakpoint.c
+@@ -51,7 +51,13 @@ int hw_breakpoint_slots(int type)
+ READ_WB_REG_CASE(OFF, 4, REG, T, VAL); \
+ READ_WB_REG_CASE(OFF, 5, REG, T, VAL); \
+ READ_WB_REG_CASE(OFF, 6, REG, T, VAL); \
+- READ_WB_REG_CASE(OFF, 7, REG, T, VAL);
++ READ_WB_REG_CASE(OFF, 7, REG, T, VAL); \
++ READ_WB_REG_CASE(OFF, 8, REG, T, VAL); \
++ READ_WB_REG_CASE(OFF, 9, REG, T, VAL); \
++ READ_WB_REG_CASE(OFF, 10, REG, T, VAL); \
++ READ_WB_REG_CASE(OFF, 11, REG, T, VAL); \
++ READ_WB_REG_CASE(OFF, 12, REG, T, VAL); \
++ READ_WB_REG_CASE(OFF, 13, REG, T, VAL);
+
+ #define GEN_WRITE_WB_REG_CASES(OFF, REG, T, VAL) \
+ WRITE_WB_REG_CASE(OFF, 0, REG, T, VAL); \
+@@ -61,7 +67,13 @@ int hw_breakpoint_slots(int type)
+ WRITE_WB_REG_CASE(OFF, 4, REG, T, VAL); \
+ WRITE_WB_REG_CASE(OFF, 5, REG, T, VAL); \
+ WRITE_WB_REG_CASE(OFF, 6, REG, T, VAL); \
+- WRITE_WB_REG_CASE(OFF, 7, REG, T, VAL);
++ WRITE_WB_REG_CASE(OFF, 7, REG, T, VAL); \
++ WRITE_WB_REG_CASE(OFF, 8, REG, T, VAL); \
++ WRITE_WB_REG_CASE(OFF, 9, REG, T, VAL); \
++ WRITE_WB_REG_CASE(OFF, 10, REG, T, VAL); \
++ WRITE_WB_REG_CASE(OFF, 11, REG, T, VAL); \
++ WRITE_WB_REG_CASE(OFF, 12, REG, T, VAL); \
++ WRITE_WB_REG_CASE(OFF, 13, REG, T, VAL);
+
+ static u64 read_wb_reg(int reg, int n, int t)
+ {
+diff --git a/arch/loongarch/power/platform.c b/arch/loongarch/power/platform.c
+index 0909729dc2e153..5bbdb9fd76e5d0 100644
+--- a/arch/loongarch/power/platform.c
++++ b/arch/loongarch/power/platform.c
+@@ -17,7 +17,7 @@ void enable_gpe_wakeup(void)
+ if (acpi_gbl_reduced_hardware)
+ return;
+
+- acpi_enable_all_wakeup_gpes();
++ acpi_hw_enable_all_wakeup_gpes();
+ }
+
+ void enable_pci_wakeup(void)
+diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
+index 18a3028ac3b6de..dad2e7980f245b 100644
+--- a/arch/powerpc/include/asm/hugetlb.h
++++ b/arch/powerpc/include/asm/hugetlb.h
+@@ -15,6 +15,15 @@
+
+ extern bool hugetlb_disabled;
+
++static inline bool hugepages_supported(void)
++{
++ if (hugetlb_disabled)
++ return false;
++
++ return HPAGE_SHIFT != 0;
++}
++#define hugepages_supported hugepages_supported
++
+ void __init hugetlbpage_init_defaultsize(void);
+
+ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
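The `#define hugepages_supported hugepages_supported` line is the usual arch-override marker: the generic hugetlb header supplies its fallback only when the symbol is not already defined as a macro. A condensed standalone sketch of the convention (the bodies here are stand-ins):

        /* arch header: provide the real answer and mark the override */
        static inline _Bool hugepages_supported(void)
        {
                return 1;       /* e.g. !hugetlb_disabled && HPAGE_SHIFT != 0 */
        }
        #define hugepages_supported hugepages_supported

        /* generic header: the fallback is compiled only without an arch override */
        #ifndef hugepages_supported
        static inline _Bool hugepages_supported(void) { return 0; }
        #endif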
+diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
+index 76381e14e800c7..0ebae6e4c19dd7 100644
+--- a/arch/powerpc/kernel/iommu.c
++++ b/arch/powerpc/kernel/iommu.c
+@@ -687,7 +687,7 @@ void iommu_table_clear(struct iommu_table *tbl)
+ void iommu_table_reserve_pages(struct iommu_table *tbl,
+ unsigned long res_start, unsigned long res_end)
+ {
+- int i;
++ unsigned long i;
+
+ WARN_ON_ONCE(res_end < res_start);
+ /*
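Widening the index above avoids the classic mismatch between an int loop counter and unsigned long bounds: on 64-bit, an int overflows at INT_MAX (undefined behaviour) before large reservation ranges are covered. A self-contained sketch of the corrected shape (the bitmap layout is illustrative):

        static void mark_reserved(unsigned long *map, unsigned long start,
                                  unsigned long end)
        {
                unsigned long i;        /* was: int i — overflows past INT_MAX */

                for (i = start; i < end; i++)
                        map[i / (8 * sizeof(*map))] |= 1UL << (i % (8 * sizeof(*map)));
        }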
+diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
+index 534cd159e9ab4c..ae6f7a235d8b24 100644
+--- a/arch/powerpc/platforms/pseries/iommu.c
++++ b/arch/powerpc/platforms/pseries/iommu.c
+@@ -1650,7 +1650,8 @@ static bool enable_ddw(struct pci_dev *dev, struct device_node *pdn)
+ iommu_table_setparms_common(newtbl, pci->phb->bus->number, create.liobn,
+ dynamic_addr, dynamic_len, page_shift, NULL,
+ &iommu_table_lpar_multi_ops);
+- iommu_init_table(newtbl, pci->phb->node, start, end);
++ iommu_init_table(newtbl, pci->phb->node,
++ start >> page_shift, end >> page_shift);
+
+ pci->table_group->tables[default_win_removed ? 0 : 1] = newtbl;
+
+@@ -2065,7 +2066,9 @@ static long spapr_tce_create_table(struct iommu_table_group *table_group, int nu
+ offset, 1UL << window_shift,
+ IOMMU_PAGE_SHIFT_4K, NULL,
+ &iommu_table_lpar_multi_ops);
+- iommu_init_table(tbl, pci->phb->node, start, end);
++ iommu_init_table(tbl, pci->phb->node,
++ start >> IOMMU_PAGE_SHIFT_4K,
++ end >> IOMMU_PAGE_SHIFT_4K);
+
+ table_group->tables[0] = tbl;
+
+@@ -2136,7 +2139,7 @@ static long spapr_tce_create_table(struct iommu_table_group *table_group, int nu
+ /* New table for using DDW instead of the default DMA window */
+ iommu_table_setparms_common(tbl, pci->phb->bus->number, create.liobn, win_addr,
+ 1UL << len, page_shift, NULL, &iommu_table_lpar_multi_ops);
+- iommu_init_table(tbl, pci->phb->node, start, end);
++ iommu_init_table(tbl, pci->phb->node, start >> page_shift, end >> page_shift);
+
+ pci->table_group->tables[num] = tbl;
+ set_iommu_table_base(&pdev->dev, tbl);
+@@ -2205,6 +2208,9 @@ static long spapr_tce_unset_window(struct iommu_table_group *table_group, int nu
+ const char *win_name;
+ int ret = -ENODEV;
+
++ if (!tbl) /* The table was never created OR window was never opened */
++ return 0;
++
+ mutex_lock(&dma_win_init_mutex);
+
+ if ((num == 0) && is_default_window_table(table_group, tbl))
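The three hunks above are the same unit fix: iommu_init_table() expects the reserved range in IOMMU page numbers, while start/end are byte addresses, so both get shifted down by the table's page shift. For instance, with 64K IOMMU pages (page_shift = 16) the byte address 0x200000 becomes page number 0x20. A one-line helper capturing the conversion (the name is ours, not the kernel's):

        static inline unsigned long bytes_to_iommu_pages(unsigned long addr,
                                                         unsigned int page_shift)
        {
                return addr >> page_shift;      /* 0x200000 >> 16 == 0x20 */
        }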
+diff --git a/arch/riscv/kernel/vector.c b/arch/riscv/kernel/vector.c
+index 682b3feee45114..a30fb2fb8a2b16 100644
+--- a/arch/riscv/kernel/vector.c
++++ b/arch/riscv/kernel/vector.c
+@@ -309,7 +309,7 @@ static int __init riscv_v_sysctl_init(void)
+ static int __init riscv_v_sysctl_init(void) { return 0; }
+ #endif /* ! CONFIG_SYSCTL */
+
+-static int riscv_v_init(void)
++static int __init riscv_v_init(void)
+ {
+ return riscv_v_sysctl_init();
+ }
+diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
+index cc1f9cffe2a5f4..62f2c9e8e05f7d 100644
+--- a/arch/s390/Kconfig
++++ b/arch/s390/Kconfig
+@@ -65,6 +65,7 @@ config S390
+ select ARCH_ENABLE_MEMORY_HOTPLUG if SPARSEMEM
+ select ARCH_ENABLE_MEMORY_HOTREMOVE
+ select ARCH_ENABLE_SPLIT_PMD_PTLOCK if PGTABLE_LEVELS > 2
++ select ARCH_HAS_CPU_FINALIZE_INIT
+ select ARCH_HAS_CURRENT_STACK_POINTER
+ select ARCH_HAS_DEBUG_VIRTUAL
+ select ARCH_HAS_DEBUG_VM_PGTABLE
+diff --git a/arch/s390/Makefile b/arch/s390/Makefile
+index 7fd57398221ea3..9b772093278704 100644
+--- a/arch/s390/Makefile
++++ b/arch/s390/Makefile
+@@ -22,7 +22,7 @@ KBUILD_AFLAGS_DECOMPRESSOR := $(CLANG_FLAGS) -m64 -D__ASSEMBLY__
+ ifndef CONFIG_AS_IS_LLVM
+ KBUILD_AFLAGS_DECOMPRESSOR += $(if $(CONFIG_DEBUG_INFO),$(aflags_dwarf))
+ endif
+-KBUILD_CFLAGS_DECOMPRESSOR := $(CLANG_FLAGS) -m64 -O2 -mpacked-stack
++KBUILD_CFLAGS_DECOMPRESSOR := $(CLANG_FLAGS) -m64 -O2 -mpacked-stack -std=gnu11
+ KBUILD_CFLAGS_DECOMPRESSOR += -DDISABLE_BRANCH_PROFILING -D__NO_FORTIFY
+ KBUILD_CFLAGS_DECOMPRESSOR += -D__DECOMPRESSOR
+ KBUILD_CFLAGS_DECOMPRESSOR += -fno-delete-null-pointer-checks -msoft-float -mbackchain
+diff --git a/arch/s390/include/asm/sclp.h b/arch/s390/include/asm/sclp.h
+index eb00fa1771da07..ad17d91ad2e661 100644
+--- a/arch/s390/include/asm/sclp.h
++++ b/arch/s390/include/asm/sclp.h
+@@ -137,6 +137,7 @@ void sclp_early_printk(const char *s);
+ void __sclp_early_printk(const char *s, unsigned int len);
+ void sclp_emergency_printk(const char *s);
+
++int sclp_init(void);
+ int sclp_early_get_memsize(unsigned long *mem);
+ int sclp_early_get_hsa_size(unsigned long *hsa_size);
+ int _sclp_get_core_info(struct sclp_core_info *info);
+diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
+index e2e0aa463fbd1e..c3075e4a8efc31 100644
+--- a/arch/s390/kernel/perf_cpum_cf.c
++++ b/arch/s390/kernel/perf_cpum_cf.c
+@@ -981,7 +981,7 @@ static int cfdiag_push_sample(struct perf_event *event,
+ if (event->attr.sample_type & PERF_SAMPLE_RAW) {
+ raw.frag.size = cpuhw->usedss;
+ raw.frag.data = cpuhw->stop;
+- perf_sample_save_raw_data(&data, &raw);
++ perf_sample_save_raw_data(&data, event, &raw);
+ }
+
+ overflow = perf_event_overflow(event, &data, &regs);
+diff --git a/arch/s390/kernel/perf_pai_crypto.c b/arch/s390/kernel/perf_pai_crypto.c
+index fa732545426611..10725f5a6f0fd1 100644
+--- a/arch/s390/kernel/perf_pai_crypto.c
++++ b/arch/s390/kernel/perf_pai_crypto.c
+@@ -478,7 +478,7 @@ static int paicrypt_push_sample(size_t rawsize, struct paicrypt_map *cpump,
+ if (event->attr.sample_type & PERF_SAMPLE_RAW) {
+ raw.frag.size = rawsize;
+ raw.frag.data = cpump->save;
+- perf_sample_save_raw_data(&data, &raw);
++ perf_sample_save_raw_data(&data, event, &raw);
+ }
+
+ overflow = perf_event_overflow(event, &data, &regs);
+diff --git a/arch/s390/kernel/perf_pai_ext.c b/arch/s390/kernel/perf_pai_ext.c
+index 7f462bef1fc075..a8f0bad99cf04f 100644
+--- a/arch/s390/kernel/perf_pai_ext.c
++++ b/arch/s390/kernel/perf_pai_ext.c
+@@ -503,7 +503,7 @@ static int paiext_push_sample(size_t rawsize, struct paiext_map *cpump,
+ if (event->attr.sample_type & PERF_SAMPLE_RAW) {
+ raw.frag.size = rawsize;
+ raw.frag.data = cpump->save;
+- perf_sample_save_raw_data(&data, &raw);
++ perf_sample_save_raw_data(&data, event, &raw);
+ }
+
+ overflow = perf_event_overflow(event, &data, &regs);
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index a3fea683b22706..99f165726ca9ef 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -1006,3 +1006,8 @@ void __init setup_arch(char **cmdline_p)
+ /* Add system specific data to the random pool */
+ setup_randomness();
+ }
++
++void __init arch_cpu_finalize_init(void)
++{
++ sclp_init();
++}
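arch_cpu_finalize_init() is a hook the core calls once late in boot; selecting ARCH_HAS_CPU_FINALIZE_INIT lets s390 supply a strong definition that overrides the core's weak no-op, which is how sclp_init() gets run at that point. A sketch of the weak/strong pairing (in reality the two definitions live in different translation units):

        /* core (init/main.c analogue): weak default, does nothing */
        void __attribute__((weak)) arch_cpu_finalize_init(void)
        {
        }

        /* arch (another translation unit): strong override wins at link time
         *
         *      void arch_cpu_finalize_init(void)
         *      {
         *              sclp_init();
         *      }
         */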
+diff --git a/arch/s390/purgatory/Makefile b/arch/s390/purgatory/Makefile
+index 24eccaa293371b..bdcf2a3b6c41b3 100644
+--- a/arch/s390/purgatory/Makefile
++++ b/arch/s390/purgatory/Makefile
+@@ -13,7 +13,7 @@ CFLAGS_sha256.o := -D__DISABLE_EXPORTS -D__NO_FORTIFY
+ $(obj)/mem.o: $(srctree)/arch/s390/lib/mem.S FORCE
+ $(call if_changed_rule,as_o_S)
+
+-KBUILD_CFLAGS := -fno-strict-aliasing -Wall -Wstrict-prototypes
++KBUILD_CFLAGS := -std=gnu11 -fno-strict-aliasing -Wall -Wstrict-prototypes
+ KBUILD_CFLAGS += -Wno-pointer-sign -Wno-sign-compare
+ KBUILD_CFLAGS += -fno-zero-initialized-in-bss -fno-builtin -ffreestanding
+ KBUILD_CFLAGS += -Os -m64 -msoft-float -fno-common
+diff --git a/arch/x86/events/amd/ibs.c b/arch/x86/events/amd/ibs.c
+index e91970b01d6243..c3a2f6f57770ab 100644
+--- a/arch/x86/events/amd/ibs.c
++++ b/arch/x86/events/amd/ibs.c
+@@ -1118,7 +1118,7 @@ static int perf_ibs_handle_irq(struct perf_ibs *perf_ibs, struct pt_regs *iregs)
+ .data = ibs_data.data,
+ },
+ };
+- perf_sample_save_raw_data(&data, &raw);
++ perf_sample_save_raw_data(&data, event, &raw);
+ }
+
+ if (perf_ibs == &perf_ibs_op)
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 427d1daf06d06a..6b981868905f5d 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1735,7 +1735,7 @@ struct kvm_x86_ops {
+ bool allow_apicv_in_x2apic_without_x2apic_virtualization;
+ void (*refresh_apicv_exec_ctrl)(struct kvm_vcpu *vcpu);
+ void (*hwapic_irr_update)(struct kvm_vcpu *vcpu, int max_irr);
+- void (*hwapic_isr_update)(int isr);
++ void (*hwapic_isr_update)(struct kvm_vcpu *vcpu, int isr);
+ void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
+ void (*set_virtual_apic_mode)(struct kvm_vcpu *vcpu);
+ void (*set_apic_access_page_addr)(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 766f092dab80b3..f1fac08fdef28c 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -495,14 +495,6 @@ static int x86_cluster_flags(void)
+ }
+ #endif
+
+-static int x86_die_flags(void)
+-{
+- if (cpu_feature_enabled(X86_FEATURE_HYBRID_CPU))
+- return x86_sched_itmt_flags();
+-
+- return 0;
+-}
+-
+ /*
+ * Set if a package/die has multiple NUMA nodes inside.
+ * AMD Magny-Cours, Intel Cluster-on-Die, and Intel
+@@ -538,7 +530,7 @@ static void __init build_sched_topology(void)
+ */
+ if (!x86_has_numa_in_package) {
+ x86_topology[i++] = (struct sched_domain_topology_level){
+- cpu_cpu_mask, x86_die_flags, SD_INIT_NAME(PKG)
++ cpu_cpu_mask, x86_sched_itmt_flags, SD_INIT_NAME(PKG)
+ };
+ }
+
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 95c6beb8ce2799..375bbb9600d3c1 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -763,7 +763,7 @@ static inline void apic_set_isr(int vec, struct kvm_lapic *apic)
+ * just set SVI.
+ */
+ if (unlikely(apic->apicv_active))
+- kvm_x86_call(hwapic_isr_update)(vec);
++ kvm_x86_call(hwapic_isr_update)(apic->vcpu, vec);
+ else {
+ ++apic->isr_count;
+ BUG_ON(apic->isr_count > MAX_APIC_VECTOR);
+@@ -808,7 +808,7 @@ static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
+ * and must be left alone.
+ */
+ if (unlikely(apic->apicv_active))
+- kvm_x86_call(hwapic_isr_update)(apic_find_highest_isr(apic));
++ kvm_x86_call(hwapic_isr_update)(apic->vcpu, apic_find_highest_isr(apic));
+ else {
+ --apic->isr_count;
+ BUG_ON(apic->isr_count < 0);
+@@ -2786,7 +2786,7 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
+ if (apic->apicv_active) {
+ kvm_x86_call(apicv_post_state_restore)(vcpu);
+ kvm_x86_call(hwapic_irr_update)(vcpu, -1);
+- kvm_x86_call(hwapic_isr_update)(-1);
++ kvm_x86_call(hwapic_isr_update)(vcpu, -1);
+ }
+
+ vcpu->arch.apic_arb_prio = 0;
+@@ -3102,9 +3102,8 @@ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s)
+ kvm_apic_update_apicv(vcpu);
+ if (apic->apicv_active) {
+ kvm_x86_call(apicv_post_state_restore)(vcpu);
+- kvm_x86_call(hwapic_irr_update)(vcpu,
+- apic_find_highest_irr(apic));
+- kvm_x86_call(hwapic_isr_update)(apic_find_highest_isr(apic));
++ kvm_x86_call(hwapic_irr_update)(vcpu, apic_find_highest_irr(apic));
++ kvm_x86_call(hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
+ }
+ kvm_make_request(KVM_REQ_EVENT, vcpu);
+ if (ioapic_in_kernel(vcpu->kvm))
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 92fee5e8a3c741..968ddf71405446 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6847,7 +6847,7 @@ void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu)
+ kvm_release_pfn_clean(pfn);
+ }
+
+-void vmx_hwapic_isr_update(int max_isr)
++void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
+ {
+ u16 status;
+ u8 old;
+diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
+index a55981c5216e63..48dc76bf0ec03a 100644
+--- a/arch/x86/kvm/vmx/x86_ops.h
++++ b/arch/x86/kvm/vmx/x86_ops.h
+@@ -48,7 +48,7 @@ void vmx_migrate_timers(struct kvm_vcpu *vcpu);
+ void vmx_set_virtual_apic_mode(struct kvm_vcpu *vcpu);
+ void vmx_apicv_pre_state_restore(struct kvm_vcpu *vcpu);
+ void vmx_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr);
+-void vmx_hwapic_isr_update(int max_isr);
++void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr);
+ int vmx_sync_pir_to_irr(struct kvm_vcpu *vcpu);
+ void vmx_deliver_interrupt(struct kvm_lapic *apic, int delivery_mode,
+ int trig_mode, int vector);
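All the hwapic_isr_update hunks are one mechanical change: the ops-table callback gains a vcpu parameter, so the prototype, every implementation, and every call site are updated in lockstep. A reduced sketch of threading a parameter through a function-pointer table (types simplified, names illustrative):

        struct vcpu;

        struct ops {
                void (*hwapic_isr_update)(struct vcpu *vcpu, int max_isr);
        };

        static void vmx_isr_update(struct vcpu *vcpu, int max_isr)
        {
                (void)vcpu;     /* implementation can now use the vcpu directly */
                (void)max_isr;
        }

        static const struct ops vmx_ops = {
                .hwapic_isr_update = vmx_isr_update,
        };

        static void caller(const struct ops *o, struct vcpu *vcpu, int isr)
        {
                o->hwapic_isr_update(vcpu, isr);        /* every call site passes vcpu */
        }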
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index 88e3ad73c38549..9aed61fcd0bf94 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -118,17 +118,18 @@ static void bio_integrity_unpin_bvec(struct bio_vec *bv, int nr_vecs,
+
+ static void bio_integrity_uncopy_user(struct bio_integrity_payload *bip)
+ {
+- unsigned short nr_vecs = bip->bip_max_vcnt - 1;
+- struct bio_vec *copy = &bip->bip_vec[1];
+- size_t bytes = bip->bip_iter.bi_size;
+- struct iov_iter iter;
++ unsigned short orig_nr_vecs = bip->bip_max_vcnt - 1;
++ struct bio_vec *orig_bvecs = &bip->bip_vec[1];
++ struct bio_vec *bounce_bvec = &bip->bip_vec[0];
++ size_t bytes = bounce_bvec->bv_len;
++ struct iov_iter orig_iter;
+ int ret;
+
+- iov_iter_bvec(&iter, ITER_DEST, copy, nr_vecs, bytes);
+- ret = copy_to_iter(bvec_virt(bip->bip_vec), bytes, &iter);
++ iov_iter_bvec(&orig_iter, ITER_DEST, orig_bvecs, orig_nr_vecs, bytes);
++ ret = copy_to_iter(bvec_virt(bounce_bvec), bytes, &orig_iter);
+ WARN_ON_ONCE(ret != bytes);
+
+- bio_integrity_unpin_bvec(copy, nr_vecs, true);
++ bio_integrity_unpin_bvec(orig_bvecs, orig_nr_vecs, true);
+ }
+
+ /**
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 4f791a3114a12c..42023addf9cda6 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -629,8 +629,14 @@ static void __submit_bio(struct bio *bio)
+ blk_mq_submit_bio(bio);
+ } else if (likely(bio_queue_enter(bio) == 0)) {
+ struct gendisk *disk = bio->bi_bdev->bd_disk;
+-
+- disk->fops->submit_bio(bio);
++
++ if ((bio->bi_opf & REQ_POLLED) &&
++ !(disk->queue->limits.features & BLK_FEAT_POLL)) {
++ bio->bi_status = BLK_STS_NOTSUPP;
++ bio_endio(bio);
++ } else {
++ disk->fops->submit_bio(bio);
++ }
+ blk_queue_exit(disk->queue);
+ }
+
+@@ -805,12 +811,6 @@ void submit_bio_noacct(struct bio *bio)
+ }
+ }
+
+- if (!(q->limits.features & BLK_FEAT_POLL) &&
+- (bio->bi_opf & REQ_POLLED)) {
+- bio_clear_polled(bio);
+- goto not_supported;
+- }
+-
+ switch (bio_op(bio)) {
+ case REQ_OP_READ:
+ break;
+@@ -935,7 +935,7 @@ int bio_poll(struct bio *bio, struct io_comp_batch *iob, unsigned int flags)
+ return 0;
+
+ q = bdev_get_queue(bdev);
+- if (cookie == BLK_QC_T_NONE || !(q->limits.features & BLK_FEAT_POLL))
++ if (cookie == BLK_QC_T_NONE)
+ return 0;
+
+ blk_flush_plug(current->plug, false);
+@@ -956,7 +956,8 @@ int bio_poll(struct bio *bio, struct io_comp_batch *iob, unsigned int flags)
+ } else {
+ struct gendisk *disk = q->disk;
+
+- if (disk && disk->fops->poll_bio)
++ if ((q->limits.features & BLK_FEAT_POLL) && disk &&
++ disk->fops->poll_bio)
+ ret = disk->fops->poll_bio(bio, iob, flags);
+ }
+ blk_queue_exit(q);
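The reshuffle above moves the REQ_POLLED capability check from submit_bio_noacct() to points where the queue usage counter is held, and fails unsupported polled bios with BLK_STS_NOTSUPP instead of silently clearing the flag. A reduced sketch of the new failure path (types collapsed to essentials):

        enum sts { STS_OK, STS_NOTSUPP };

        struct bio_lite  { unsigned polled:1; enum sts status; };
        struct disk_lite { unsigned can_poll:1; };

        static void endio(struct bio_lite *b) { (void)b; /* complete to submitter */ }

        static void submit(struct disk_lite *d, struct bio_lite *b)
        {
                /* queue usage counter held: the config can't change under us */
                if (b->polled && !d->can_poll) {
                        b->status = STS_NOTSUPP;        /* hard failure, not a downgrade */
                        endio(b);
                        return;
                }
                /* d->submit_bio(b); */
        }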
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 4e76651e786d19..662e52ab06467f 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -3092,14 +3092,21 @@ void blk_mq_submit_bio(struct bio *bio)
+ }
+
+ /*
+- * Device reconfiguration may change logical block size, so alignment
+- * check has to be done with queue usage counter held
++ * Device reconfiguration may change the logical block size or reduce the
++ * number of poll queues, so the checks for alignment and poll support
++ * have to be done with the queue usage counter held.
+ */
+ if (unlikely(bio_unaligned(bio, q))) {
+ bio_io_error(bio);
+ goto queue_exit;
+ }
+
++ if ((bio->bi_opf & REQ_POLLED) && !blk_mq_can_poll(q)) {
++ bio->bi_status = BLK_STS_NOTSUPP;
++ bio_endio(bio);
++ goto queue_exit;
++ }
++
+ bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
+ if (!bio)
+ goto queue_exit;
+@@ -4325,12 +4332,6 @@ void blk_mq_release(struct request_queue *q)
+ blk_mq_sysfs_deinit(q);
+ }
+
+-static bool blk_mq_can_poll(struct blk_mq_tag_set *set)
+-{
+- return set->nr_maps > HCTX_TYPE_POLL &&
+- set->map[HCTX_TYPE_POLL].nr_queues;
+-}
+-
+ struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
+ struct queue_limits *lim, void *queuedata)
+ {
+@@ -4341,7 +4342,7 @@ struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
+ if (!lim)
+ lim = &default_lim;
+ lim->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
+- if (blk_mq_can_poll(set))
++ if (set->nr_maps > HCTX_TYPE_POLL)
+ lim->features |= BLK_FEAT_POLL;
+
+ q = blk_alloc_queue(lim, set->numa_node);
+@@ -5029,8 +5030,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ fallback:
+ blk_mq_update_queue_map(set);
+ list_for_each_entry(q, &set->tag_list, tag_set_list) {
+- struct queue_limits lim;
+-
+ blk_mq_realloc_hw_ctxs(set, q);
+
+ if (q->nr_hw_queues != set->nr_hw_queues) {
+@@ -5044,13 +5043,6 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
+ set->nr_hw_queues = prev_nr_hw_queues;
+ goto fallback;
+ }
+- lim = queue_limits_start_update(q);
+- if (blk_mq_can_poll(set))
+- lim.features |= BLK_FEAT_POLL;
+- else
+- lim.features &= ~BLK_FEAT_POLL;
+- if (queue_limits_commit_update(q, &lim) < 0)
+- pr_warn("updating the poll flag failed\n");
+ blk_mq_map_swqueue(q);
+ }
+
+@@ -5110,9 +5102,9 @@ static int blk_hctx_poll(struct request_queue *q, struct blk_mq_hw_ctx *hctx,
+ int blk_mq_poll(struct request_queue *q, blk_qc_t cookie,
+ struct io_comp_batch *iob, unsigned int flags)
+ {
+- struct blk_mq_hw_ctx *hctx = xa_load(&q->hctx_table, cookie);
+-
+- return blk_hctx_poll(q, hctx, iob, flags);
++ if (!blk_mq_can_poll(q))
++ return 0;
++ return blk_hctx_poll(q, xa_load(&q->hctx_table, cookie), iob, flags);
+ }
+
+ int blk_rq_poll(struct request *rq, struct io_comp_batch *iob,
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index f4ac1af77a267e..364c0415293cf7 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -451,4 +451,10 @@ do { \
+ #define blk_mq_run_dispatch_ops(q, dispatch_ops) \
+ __blk_mq_run_dispatch_ops(q, true, dispatch_ops) \
+
++static inline bool blk_mq_can_poll(struct request_queue *q)
++{
++ return (q->limits.features & BLK_FEAT_POLL) &&
++ q->tag_set->map[HCTX_TYPE_POLL].nr_queues;
++}
++
+ #endif
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 207577145c54f4..692b27266220fe 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -256,10 +256,17 @@ static ssize_t queue_##_name##_show(struct gendisk *disk, char *page) \
+ !!(disk->queue->limits.features & _feature)); \
+ }
+
+-QUEUE_SYSFS_FEATURE_SHOW(poll, BLK_FEAT_POLL);
+ QUEUE_SYSFS_FEATURE_SHOW(fua, BLK_FEAT_FUA);
+ QUEUE_SYSFS_FEATURE_SHOW(dax, BLK_FEAT_DAX);
+
++static ssize_t queue_poll_show(struct gendisk *disk, char *page)
++{
++ if (queue_is_mq(disk->queue))
++ return sysfs_emit(page, "%u\n", blk_mq_can_poll(disk->queue));
++ return sysfs_emit(page, "%u\n",
++ !!(disk->queue->limits.features & BLK_FEAT_POLL));
++}
++
+ static ssize_t queue_zoned_show(struct gendisk *disk, char *page)
+ {
+ if (blk_queue_is_zoned(disk->queue))
+diff --git a/block/genhd.c b/block/genhd.c
+index 8645cf3b0816e4..99344f53c78975 100644
+--- a/block/genhd.c
++++ b/block/genhd.c
+@@ -778,7 +778,7 @@ static ssize_t disk_badblocks_store(struct device *dev,
+ }
+
+ #ifdef CONFIG_BLOCK_LEGACY_AUTOLOAD
+-void blk_request_module(dev_t devt)
++static bool blk_probe_dev(dev_t devt)
+ {
+ unsigned int major = MAJOR(devt);
+ struct blk_major_name **n;
+@@ -788,14 +788,26 @@ void blk_request_module(dev_t devt)
+ if ((*n)->major == major && (*n)->probe) {
+ (*n)->probe(devt);
+ mutex_unlock(&major_names_lock);
+- return;
++ return true;
+ }
+ }
+ mutex_unlock(&major_names_lock);
++ return false;
++}
++
++void blk_request_module(dev_t devt)
++{
++ int error;
++
++ if (blk_probe_dev(devt))
++ return;
+
+- if (request_module("block-major-%d-%d", MAJOR(devt), MINOR(devt)) > 0)
+- /* Make old-style 2.4 aliases work */
+- request_module("block-major-%d", MAJOR(devt));
++ error = request_module("block-major-%d-%d", MAJOR(devt), MINOR(devt));
++ /* Make old-style 2.4 aliases work */
++ if (error > 0)
++ error = request_module("block-major-%d", MAJOR(devt));
++ if (!error)
++ blk_probe_dev(devt);
+ }
+ #endif /* CONFIG_BLOCK_LEGACY_AUTOLOAD */
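The refactor above turns blk_request_module() into a lookup/load/re-lookup sequence: probe the registered majors, load the module alias only on a miss, then probe again so a driver registered by the fresh module can claim the device. A standalone sketch of the pattern (stubs stand in for the probe table and modprobe):

        #include <stdbool.h>

        static bool probe_major(int major)     { (void)major; return false; }
        static int  load_module_for(int major) { (void)major; return 0; }

        static void request_driver(int major)
        {
                if (probe_major(major))
                        return;                 /* a driver already claimed it */

                if (!load_module_for(major))    /* 0 == module loaded fine */
                        probe_major(major);     /* retry with the new driver */
        }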
+
+diff --git a/block/partitions/ldm.h b/block/partitions/ldm.h
+index e259180c89148b..aa3bd050d8cdd0 100644
+--- a/block/partitions/ldm.h
++++ b/block/partitions/ldm.h
+@@ -1,5 +1,5 @@
+ // SPDX-License-Identifier: GPL-2.0-or-later
+-/**
++/*
+ * ldm - Part of the Linux-NTFS project.
+ *
+ * Copyright (C) 2001,2002 Richard Russon <ldm@flatcap.org>
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index 004d27e41315ff..c067412d909a16 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -1022,6 +1022,8 @@ static void __init crypto_start_tests(void)
+ if (IS_ENABLED(CONFIG_CRYPTO_MANAGER_DISABLE_TESTS))
+ return;
+
++ set_crypto_boot_test_finished();
++
+ for (;;) {
+ struct crypto_larval *larval = NULL;
+ struct crypto_alg *q;
+@@ -1053,8 +1055,6 @@ static void __init crypto_start_tests(void)
+ if (!larval)
+ break;
+ }
+-
+- set_crypto_boot_test_finished();
+ }
+
+ static int __init crypto_algapi_init(void)
+diff --git a/drivers/acpi/acpica/achware.h b/drivers/acpi/acpica/achware.h
+index 79bbfe00d241f9..b8543a34caeada 100644
+--- a/drivers/acpi/acpica/achware.h
++++ b/drivers/acpi/acpica/achware.h
+@@ -103,8 +103,6 @@ acpi_hw_get_gpe_status(struct acpi_gpe_event_info *gpe_event_info,
+
+ acpi_status acpi_hw_enable_all_runtime_gpes(void);
+
+-acpi_status acpi_hw_enable_all_wakeup_gpes(void);
+-
+ u8 acpi_hw_check_all_gpes(acpi_handle gpe_skip_device, u32 gpe_skip_number);
+
+ acpi_status
+diff --git a/drivers/acpi/fan_core.c b/drivers/acpi/fan_core.c
+index 7cea4495f19bbe..300e5d91998648 100644
+--- a/drivers/acpi/fan_core.c
++++ b/drivers/acpi/fan_core.c
+@@ -371,19 +371,25 @@ static int acpi_fan_probe(struct platform_device *pdev)
+ result = sysfs_create_link(&pdev->dev.kobj,
+ &cdev->device.kobj,
+ "thermal_cooling");
+- if (result)
++ if (result) {
+ dev_err(&pdev->dev, "Failed to create sysfs link 'thermal_cooling'\n");
++ goto err_unregister;
++ }
+
+ result = sysfs_create_link(&cdev->device.kobj,
+ &pdev->dev.kobj,
+ "device");
+ if (result) {
+ dev_err(&pdev->dev, "Failed to create sysfs link 'device'\n");
+- goto err_end;
++ goto err_remove_link;
+ }
+
+ return 0;
+
++err_remove_link:
++ sysfs_remove_link(&pdev->dev.kobj, "thermal_cooling");
++err_unregister:
++ thermal_cooling_device_unregister(cdev);
+ err_end:
+ if (fan->acpi4)
+ acpi_fan_delete_attributes(device);
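The new labels complete the reverse-order unwinding that probe error paths need: each successful setup step must have a matching teardown, executed in the opposite order on a later failure. The skeleton of the idiom, reduced to stubs:

        #include <stdbool.h>

        static bool setup_a(void) { return true; }
        static bool setup_b(void) { return true; }
        static bool setup_c(void) { return true; }
        static void teardown_a(void) { }
        static void teardown_b(void) { }

        static int probe(void)
        {
                if (!setup_a())
                        return -1;
                if (!setup_b())
                        goto err_undo_a;
                if (!setup_c())
                        goto err_undo_b;
                return 0;

        err_undo_b:
                teardown_b();   /* undo in reverse order of setup */
        err_undo_a:
                teardown_a();
                return -1;
        }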
+diff --git a/drivers/base/class.c b/drivers/base/class.c
+index cb5359235c7020..ce460e1ab1376d 100644
+--- a/drivers/base/class.c
++++ b/drivers/base/class.c
+@@ -323,8 +323,12 @@ void class_dev_iter_init(struct class_dev_iter *iter, const struct class *class,
+ struct subsys_private *sp = class_to_subsys(class);
+ struct klist_node *start_knode = NULL;
+
+- if (!sp)
++ memset(iter, 0, sizeof(*iter));
++ if (!sp) {
++ pr_crit("%s: class %p was not registered yet\n",
++ __func__, class);
+ return;
++ }
+
+ if (start)
+ start_knode = &start->p->knode_class;
+@@ -351,6 +355,9 @@ struct device *class_dev_iter_next(struct class_dev_iter *iter)
+ struct klist_node *knode;
+ struct device *dev;
+
++ if (!iter->sp)
++ return NULL;
++
+ while (1) {
+ knode = klist_next(&iter->ki);
+ if (!knode)
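The two hunks cooperate: zeroing the iterator up front means a class that was never registered leaves a recognisably empty cursor, and class_dev_iter_next() now treats that as an empty walk instead of chasing stale pointers. A condensed sketch of the defensive-iterator idiom (fields simplified):

        #include <stddef.h>
        #include <string.h>

        struct iter { void *sp; /* cursor state ... */ };

        static void iter_init(struct iter *it, void *subsys)
        {
                memset(it, 0, sizeof(*it));     /* safe for iter_next() even on failure */
                if (!subsys)
                        return;                 /* class not registered yet */
                it->sp = subsys;
        }

        static void *iter_next(struct iter *it)
        {
                if (!it->sp)
                        return NULL;            /* failed init reads as an empty iteration */
                /* ... advance and return the next element ... */
                return NULL;
        }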
+diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
+index b852050d8a9665..450458267e6e64 100644
+--- a/drivers/block/nbd.c
++++ b/drivers/block/nbd.c
+@@ -2180,6 +2180,7 @@ static void nbd_disconnect_and_put(struct nbd_device *nbd)
+ flush_workqueue(nbd->recv_workq);
+ nbd_clear_que(nbd);
+ nbd->task_setup = NULL;
++ clear_bit(NBD_RT_BOUND, &nbd->config->runtime_flags);
+ mutex_unlock(&nbd->config_lock);
+
+ if (test_and_clear_bit(NBD_RT_HAS_CONFIG_REF,
+diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
+index ff45ed76646957..226ffc743238e9 100644
+--- a/drivers/block/ps3disk.c
++++ b/drivers/block/ps3disk.c
+@@ -384,9 +384,9 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
+ unsigned int devidx;
+ struct queue_limits lim = {
+ .logical_block_size = dev->blk_size,
+- .max_hw_sectors = dev->bounce_size >> 9,
++ .max_hw_sectors = BOUNCE_SIZE >> 9,
+ .max_segments = -1,
+- .max_segment_size = dev->bounce_size,
++ .max_segment_size = BOUNCE_SIZE,
+ .dma_alignment = dev->blk_size - 1,
+ .features = BLK_FEAT_WRITE_CACHE |
+ BLK_FEAT_ROTATIONAL,
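The substitution above suggests an ordering issue rather than a cosmetic change: at this point in probe the bounce_size field has apparently not been assigned yet, so the limits were being derived from zero, and the constant the field will eventually hold is used instead. A tiny reproduction of the bug class (names are illustrative):

        #define BOUNCE_SIZE (64 * 1024)

        struct dev_lite { unsigned int bounce_size; };

        static unsigned int pick_max_segment(struct dev_lite *dev)
        {
                unsigned int max_seg = BOUNCE_SIZE;     /* was: dev->bounce_size == 0 here */

                dev->bounce_size = BOUNCE_SIZE;         /* only assigned later in probe */
                return max_seg;
        }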
+diff --git a/drivers/bluetooth/btbcm.c b/drivers/bluetooth/btbcm.c
+index a1153ada74d206..0a60660fc8ce80 100644
+--- a/drivers/bluetooth/btbcm.c
++++ b/drivers/bluetooth/btbcm.c
+@@ -553,6 +553,9 @@ static const char *btbcm_get_board_name(struct device *dev)
+
+ /* get rid of any '/' in the compatible string */
+ board_type = devm_kstrdup(dev, tmp, GFP_KERNEL);
++ if (!board_type)
++ return NULL;
++
+ strreplace(board_type, '/', '-');
+
+ return board_type;
+diff --git a/drivers/bluetooth/btnxpuart.c b/drivers/bluetooth/btnxpuart.c
+index a028984f27829c..84a1ad61c4ad5f 100644
+--- a/drivers/bluetooth/btnxpuart.c
++++ b/drivers/bluetooth/btnxpuart.c
+@@ -1336,13 +1336,12 @@ static void btnxpuart_tx_work(struct work_struct *work)
+
+ while ((skb = nxp_dequeue(nxpdev))) {
+ len = serdev_device_write_buf(serdev, skb->data, skb->len);
+- serdev_device_wait_until_sent(serdev, 0);
+ hdev->stat.byte_tx += len;
+
+ skb_pull(skb, len);
+ if (skb->len > 0) {
+ skb_queue_head(&nxpdev->txq, skb);
+- break;
++ continue;
+ }
+
+ switch (hci_skb_pkt_type(skb)) {
+diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
+index 0bcb44cf7b31d7..0a6ca6dfb94841 100644
+--- a/drivers/bluetooth/btrtl.c
++++ b/drivers/bluetooth/btrtl.c
+@@ -1351,12 +1351,14 @@ int btrtl_setup_realtek(struct hci_dev *hdev)
+
+ btrtl_set_quirks(hdev, btrtl_dev);
+
+- hci_set_hw_info(hdev,
++ if (btrtl_dev->ic_info) {
++ hci_set_hw_info(hdev,
+ "RTL lmp_subver=%u hci_rev=%u hci_ver=%u hci_bus=%u",
+ btrtl_dev->ic_info->lmp_subver,
+ btrtl_dev->ic_info->hci_rev,
+ btrtl_dev->ic_info->hci_ver,
+ btrtl_dev->ic_info->hci_bus);
++ }
+
+ btrtl_free(btrtl_dev);
+ return ret;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 0c85c981a8334a..258a5cb6f27afe 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -2632,8 +2632,15 @@ static void btusb_mtk_claim_iso_intf(struct btusb_data *data)
+ struct btmtk_data *btmtk_data = hci_get_priv(data->hdev);
+ int err;
+
++ /*
++ * The function usb_driver_claim_interface() is documented to need
++ * locks held if it's not called from a probe routine. The code here
++ * is called from the hci_power_on workqueue, so grab the lock.
++ */
++ device_lock(&btmtk_data->isopkt_intf->dev);
+ err = usb_driver_claim_interface(&btusb_driver,
+ btmtk_data->isopkt_intf, data);
++ device_unlock(&btmtk_data->isopkt_intf->dev);
+ if (err < 0) {
+ btmtk_data->isopkt_intf = NULL;
+ bt_dev_err(data->hdev, "Failed to claim iso interface");
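The comment added in the hunk states the rule: usb_driver_claim_interface() is only lock-free from a probe routine, so calling it from a workqueue requires holding the interface's device lock around the call. A stub-level sketch of the wrapped call (locking primitives are stand-ins):

        struct dev_lite2 { int locked; };
        struct intf_lite { struct dev_lite2 dev; };

        static void dev_lock(struct dev_lite2 *d)   { d->locked = 1; }
        static void dev_unlock(struct dev_lite2 *d) { d->locked = 0; }
        static int  claim_interface(struct intf_lite *i) { return i->dev.locked ? 0 : -1; }

        static int claim_iso(struct intf_lite *intf)
        {
                int err;

                dev_lock(&intf->dev);           /* required outside probe context */
                err = claim_interface(intf);    /* usb_driver_claim_interface() analogue */
                dev_unlock(&intf->dev);
                return err;
        }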
+diff --git a/drivers/cdx/Makefile b/drivers/cdx/Makefile
+index 749a3295c2bdc1..3ca7068a305256 100644
+--- a/drivers/cdx/Makefile
++++ b/drivers/cdx/Makefile
+@@ -5,7 +5,7 @@
+ # Copyright (C) 2022-2023, Advanced Micro Devices, Inc.
+ #
+
+-ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CDX_BUS
++ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"CDX_BUS"'
+
+ obj-$(CONFIG_CDX_BUS) += cdx.o controller/
+
+diff --git a/drivers/char/ipmi/ipmb_dev_int.c b/drivers/char/ipmi/ipmb_dev_int.c
+index 7296127181eca3..8a14fd0291d89b 100644
+--- a/drivers/char/ipmi/ipmb_dev_int.c
++++ b/drivers/char/ipmi/ipmb_dev_int.c
+@@ -321,6 +321,9 @@ static int ipmb_probe(struct i2c_client *client)
+ ipmb_dev->miscdev.name = devm_kasprintf(&client->dev, GFP_KERNEL,
+ "%s%d", "ipmb-",
+ client->adapter->nr);
++ if (!ipmb_dev->miscdev.name)
++ return -ENOMEM;
++
+ ipmb_dev->miscdev.fops = &ipmb_fops;
+ ipmb_dev->miscdev.parent = &client->dev;
+ ret = misc_register(&ipmb_dev->miscdev);
+diff --git a/drivers/char/ipmi/ssif_bmc.c b/drivers/char/ipmi/ssif_bmc.c
+index a14fafc583d4d8..310f17dd9511a5 100644
+--- a/drivers/char/ipmi/ssif_bmc.c
++++ b/drivers/char/ipmi/ssif_bmc.c
+@@ -292,7 +292,6 @@ static void complete_response(struct ssif_bmc_ctx *ssif_bmc)
+ ssif_bmc->nbytes_processed = 0;
+ ssif_bmc->remain_len = 0;
+ ssif_bmc->busy = false;
+- memset(&ssif_bmc->part_buf, 0, sizeof(struct ssif_part_buffer));
+ wake_up_all(&ssif_bmc->wait_queue);
+ }
+
+@@ -744,9 +743,11 @@ static void on_stop_event(struct ssif_bmc_ctx *ssif_bmc, u8 *val)
+ ssif_bmc->aborting = true;
+ }
+ } else if (ssif_bmc->state == SSIF_RES_SENDING) {
+- if (ssif_bmc->is_singlepart_read || ssif_bmc->block_num == 0xFF)
++ if (ssif_bmc->is_singlepart_read || ssif_bmc->block_num == 0xFF) {
++ memset(&ssif_bmc->part_buf, 0, sizeof(struct ssif_part_buffer));
+ /* Invalidate response buffer to denote it is sent */
+ complete_response(ssif_bmc);
++ }
+ ssif_bmc->state = SSIF_READY;
+ }
+
+diff --git a/drivers/clk/analogbits/wrpll-cln28hpc.c b/drivers/clk/analogbits/wrpll-cln28hpc.c
+index 65d422a588e1f1..9d178afc73bdd1 100644
+--- a/drivers/clk/analogbits/wrpll-cln28hpc.c
++++ b/drivers/clk/analogbits/wrpll-cln28hpc.c
+@@ -292,7 +292,7 @@ int wrpll_configure_for_rate(struct wrpll_cfg *c, u32 target_rate,
+ vco = vco_pre * f;
+ }
+
+- delta = abs(target_rate - vco);
++ delta = abs(target_vco_rate - vco);
+ if (delta < best_delta) {
+ best_delta = delta;
+ best_r = r;
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index d02451f951cf05..5b4ab94193c2b0 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -5391,8 +5391,10 @@ const char *of_clk_get_parent_name(const struct device_node *np, int index)
+ count++;
+ }
+ /* We went off the end of 'clock-indices' without finding it */
+- if (of_property_present(clkspec.np, "clock-indices") && !found)
++ if (of_property_present(clkspec.np, "clock-indices") && !found) {
++ of_node_put(clkspec.np);
+ return NULL;
++ }
+
+ if (of_property_read_string_index(clkspec.np, "clock-output-names",
+ index,
+diff --git a/drivers/clk/imx/clk-imx8mp.c b/drivers/clk/imx/clk-imx8mp.c
+index 516dbd170c8a35..fb18f507f12135 100644
+--- a/drivers/clk/imx/clk-imx8mp.c
++++ b/drivers/clk/imx/clk-imx8mp.c
+@@ -399,8 +399,9 @@ static const char * const imx8mp_dram_core_sels[] = {"dram_pll_out", "dram_alt_r
+
+ static const char * const imx8mp_clkout_sels[] = {"audio_pll1_out", "audio_pll2_out", "video_pll1_out",
+ "dummy", "dummy", "gpu_pll_out", "vpu_pll_out",
+- "arm_pll_out", "sys_pll1", "sys_pll2", "sys_pll3",
+- "dummy", "dummy", "osc_24m", "dummy", "osc_32k"};
++ "arm_pll_out", "sys_pll1_out", "sys_pll2_out",
++ "sys_pll3_out", "dummy", "dummy", "osc_24m",
++ "dummy", "osc_32k"};
+
+ static struct clk_hw **hws;
+ static struct clk_hw_onecell_data *clk_hw_data;
+diff --git a/drivers/clk/imx/clk-imx93.c b/drivers/clk/imx/clk-imx93.c
+index c6a9bc8ecc1fc7..c5f358a75f307b 100644
+--- a/drivers/clk/imx/clk-imx93.c
++++ b/drivers/clk/imx/clk-imx93.c
+@@ -15,6 +15,11 @@
+
+ #include "clk.h"
+
++#define IMX93_CLK_END 208
++
++#define PLAT_IMX93 BIT(0)
++#define PLAT_IMX91 BIT(1)
++
+ enum clk_sel {
+ LOW_SPEED_IO_SEL,
+ NON_IO_SEL,
+@@ -33,6 +38,7 @@ static u32 share_count_sai2;
+ static u32 share_count_sai3;
+ static u32 share_count_mub;
+ static u32 share_count_pdm;
++static u32 share_count_spdif;
+
+ static const char * const a55_core_sels[] = {"a55_alt", "arm_pll"};
+ static const char *parent_names[MAX_SEL][4] = {
+@@ -53,6 +59,7 @@ static const struct imx93_clk_root {
+ u32 off;
+ enum clk_sel sel;
+ unsigned long flags;
++ unsigned long plat;
+ } root_array[] = {
+ /* a55/m33/bus critical clk for system run */
+ { IMX93_CLK_A55_PERIPH, "a55_periph_root", 0x0000, FAST_SEL, CLK_IS_CRITICAL },
+@@ -63,9 +70,9 @@ static const struct imx93_clk_root {
+ { IMX93_CLK_BUS_AON, "bus_aon_root", 0x0300, LOW_SPEED_IO_SEL, CLK_IS_CRITICAL },
+ { IMX93_CLK_WAKEUP_AXI, "wakeup_axi_root", 0x0380, FAST_SEL, CLK_IS_CRITICAL },
+ { IMX93_CLK_SWO_TRACE, "swo_trace_root", 0x0400, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_M33_SYSTICK, "m33_systick_root", 0x0480, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_FLEXIO1, "flexio1_root", 0x0500, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_FLEXIO2, "flexio2_root", 0x0580, LOW_SPEED_IO_SEL, },
++ { IMX93_CLK_M33_SYSTICK, "m33_systick_root", 0x0480, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_FLEXIO1, "flexio1_root", 0x0500, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_FLEXIO2, "flexio2_root", 0x0580, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
+ { IMX93_CLK_LPTMR1, "lptmr1_root", 0x0700, LOW_SPEED_IO_SEL, },
+ { IMX93_CLK_LPTMR2, "lptmr2_root", 0x0780, LOW_SPEED_IO_SEL, },
+ { IMX93_CLK_TPM2, "tpm2_root", 0x0880, TPM_SEL, },
+@@ -120,15 +127,15 @@ static const struct imx93_clk_root {
+ { IMX93_CLK_HSIO_ACSCAN_80M, "hsio_acscan_80m_root", 0x1f80, LOW_SPEED_IO_SEL, },
+ { IMX93_CLK_HSIO_ACSCAN_480M, "hsio_acscan_480m_root", 0x2000, MISC_SEL, },
+ { IMX93_CLK_NIC_AXI, "nic_axi_root", 0x2080, FAST_SEL, CLK_IS_CRITICAL, },
+- { IMX93_CLK_ML_APB, "ml_apb_root", 0x2180, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_ML, "ml_root", 0x2200, FAST_SEL, },
++ { IMX93_CLK_ML_APB, "ml_apb_root", 0x2180, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_ML, "ml_root", 0x2200, FAST_SEL, 0, PLAT_IMX93, },
+ { IMX93_CLK_MEDIA_AXI, "media_axi_root", 0x2280, FAST_SEL, },
+ { IMX93_CLK_MEDIA_APB, "media_apb_root", 0x2300, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_MEDIA_LDB, "media_ldb_root", 0x2380, VIDEO_SEL, },
++ { IMX93_CLK_MEDIA_LDB, "media_ldb_root", 0x2380, VIDEO_SEL, 0, PLAT_IMX93, },
+ { IMX93_CLK_MEDIA_DISP_PIX, "media_disp_pix_root", 0x2400, VIDEO_SEL, },
+ { IMX93_CLK_CAM_PIX, "cam_pix_root", 0x2480, VIDEO_SEL, },
+- { IMX93_CLK_MIPI_TEST_BYTE, "mipi_test_byte_root", 0x2500, VIDEO_SEL, },
+- { IMX93_CLK_MIPI_PHY_CFG, "mipi_phy_cfg_root", 0x2580, VIDEO_SEL, },
++ { IMX93_CLK_MIPI_TEST_BYTE, "mipi_test_byte_root", 0x2500, VIDEO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_MIPI_PHY_CFG, "mipi_phy_cfg_root", 0x2580, VIDEO_SEL, 0, PLAT_IMX93, },
+ { IMX93_CLK_ADC, "adc_root", 0x2700, LOW_SPEED_IO_SEL, },
+ { IMX93_CLK_PDM, "pdm_root", 0x2780, AUDIO_SEL, },
+ { IMX93_CLK_TSTMR1, "tstmr1_root", 0x2800, LOW_SPEED_IO_SEL, },
+@@ -137,13 +144,16 @@ static const struct imx93_clk_root {
+ { IMX93_CLK_MQS2, "mqs2_root", 0x2980, AUDIO_SEL, },
+ { IMX93_CLK_AUDIO_XCVR, "audio_xcvr_root", 0x2a00, NON_IO_SEL, },
+ { IMX93_CLK_SPDIF, "spdif_root", 0x2a80, AUDIO_SEL, },
+- { IMX93_CLK_ENET, "enet_root", 0x2b00, NON_IO_SEL, },
+- { IMX93_CLK_ENET_TIMER1, "enet_timer1_root", 0x2b80, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_ENET_TIMER2, "enet_timer2_root", 0x2c00, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_ENET_REF, "enet_ref_root", 0x2c80, NON_IO_SEL, },
+- { IMX93_CLK_ENET_REF_PHY, "enet_ref_phy_root", 0x2d00, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_I3C1_SLOW, "i3c1_slow_root", 0x2d80, LOW_SPEED_IO_SEL, },
+- { IMX93_CLK_I3C2_SLOW, "i3c2_slow_root", 0x2e00, LOW_SPEED_IO_SEL, },
++ { IMX93_CLK_ENET, "enet_root", 0x2b00, NON_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_ENET_TIMER1, "enet_timer1_root", 0x2b80, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_ENET_TIMER2, "enet_timer2_root", 0x2c00, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_ENET_REF, "enet_ref_root", 0x2c80, NON_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_ENET_REF_PHY, "enet_ref_phy_root", 0x2d00, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX91_CLK_ENET1_QOS_TSN, "enet1_qos_tsn_root", 0x2b00, NON_IO_SEL, 0, PLAT_IMX91, },
++ { IMX91_CLK_ENET_TIMER, "enet_timer_root", 0x2b80, LOW_SPEED_IO_SEL, 0, PLAT_IMX91, },
++ { IMX91_CLK_ENET2_REGULAR, "enet2_regular_root", 0x2c80, NON_IO_SEL, 0, PLAT_IMX91, },
++ { IMX93_CLK_I3C1_SLOW, "i3c1_slow_root", 0x2d80, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
++ { IMX93_CLK_I3C2_SLOW, "i3c2_slow_root", 0x2e00, LOW_SPEED_IO_SEL, 0, PLAT_IMX93, },
+ { IMX93_CLK_USB_PHY_BURUNIN, "usb_phy_root", 0x2e80, LOW_SPEED_IO_SEL, },
+ { IMX93_CLK_PAL_CAME_SCAN, "pal_came_scan_root", 0x2f00, MISC_SEL, }
+ };
+@@ -155,6 +165,7 @@ static const struct imx93_clk_ccgr {
+ u32 off;
+ unsigned long flags;
+ u32 *shared_count;
++ unsigned long plat;
+ } ccgr_array[] = {
+ { IMX93_CLK_A55_GATE, "a55_alt", "a55_alt_root", 0x8000, },
+ /* M33 critical clk for system run */
+@@ -167,10 +178,10 @@ static const struct imx93_clk_ccgr {
+ { IMX93_CLK_WDOG5_GATE, "wdog5", "osc_24m", 0x8400, },
+ { IMX93_CLK_SEMA1_GATE, "sema1", "bus_aon_root", 0x8440, },
+ { IMX93_CLK_SEMA2_GATE, "sema2", "bus_wakeup_root", 0x8480, },
+- { IMX93_CLK_MU1_A_GATE, "mu1_a", "bus_aon_root", 0x84c0, CLK_IGNORE_UNUSED },
+- { IMX93_CLK_MU2_A_GATE, "mu2_a", "bus_wakeup_root", 0x84c0, CLK_IGNORE_UNUSED },
+- { IMX93_CLK_MU1_B_GATE, "mu1_b", "bus_aon_root", 0x8500, 0, &share_count_mub },
+- { IMX93_CLK_MU2_B_GATE, "mu2_b", "bus_wakeup_root", 0x8500, 0, &share_count_mub },
++ { IMX93_CLK_MU1_A_GATE, "mu1_a", "bus_aon_root", 0x84c0, CLK_IGNORE_UNUSED, NULL, PLAT_IMX93 },
++ { IMX93_CLK_MU2_A_GATE, "mu2_a", "bus_wakeup_root", 0x84c0, CLK_IGNORE_UNUSED, NULL, PLAT_IMX93 },
++ { IMX93_CLK_MU1_B_GATE, "mu1_b", "bus_aon_root", 0x8500, 0, &share_count_mub, PLAT_IMX93 },
++ { IMX93_CLK_MU2_B_GATE, "mu2_b", "bus_wakeup_root", 0x8500, 0, &share_count_mub, PLAT_IMX93 },
+ { IMX93_CLK_EDMA1_GATE, "edma1", "m33_root", 0x8540, },
+ { IMX93_CLK_EDMA2_GATE, "edma2", "wakeup_axi_root", 0x8580, },
+ { IMX93_CLK_FLEXSPI1_GATE, "flexspi1", "flexspi1_root", 0x8640, },
+@@ -178,8 +189,8 @@ static const struct imx93_clk_ccgr {
+ { IMX93_CLK_GPIO2_GATE, "gpio2", "bus_wakeup_root", 0x88c0, },
+ { IMX93_CLK_GPIO3_GATE, "gpio3", "bus_wakeup_root", 0x8900, },
+ { IMX93_CLK_GPIO4_GATE, "gpio4", "bus_wakeup_root", 0x8940, },
+- { IMX93_CLK_FLEXIO1_GATE, "flexio1", "flexio1_root", 0x8980, },
+- { IMX93_CLK_FLEXIO2_GATE, "flexio2", "flexio2_root", 0x89c0, },
++ { IMX93_CLK_FLEXIO1_GATE, "flexio1", "flexio1_root", 0x8980, 0, NULL, PLAT_IMX93},
++ { IMX93_CLK_FLEXIO2_GATE, "flexio2", "flexio2_root", 0x89c0, 0, NULL, PLAT_IMX93},
+ { IMX93_CLK_LPIT1_GATE, "lpit1", "bus_aon_root", 0x8a00, },
+ { IMX93_CLK_LPIT2_GATE, "lpit2", "bus_wakeup_root", 0x8a40, },
+ { IMX93_CLK_LPTMR1_GATE, "lptmr1", "lptmr1_root", 0x8a80, },
+@@ -228,10 +239,10 @@ static const struct imx93_clk_ccgr {
+ { IMX93_CLK_SAI3_GATE, "sai3", "sai3_root", 0x94c0, 0, &share_count_sai3},
+ { IMX93_CLK_SAI3_IPG, "sai3_ipg_clk", "bus_wakeup_root", 0x94c0, 0, &share_count_sai3},
+ { IMX93_CLK_MIPI_CSI_GATE, "mipi_csi", "media_apb_root", 0x9580, },
+- { IMX93_CLK_MIPI_DSI_GATE, "mipi_dsi", "media_apb_root", 0x95c0, },
+- { IMX93_CLK_LVDS_GATE, "lvds", "media_ldb_root", 0x9600, },
++ { IMX93_CLK_MIPI_DSI_GATE, "mipi_dsi", "media_apb_root", 0x95c0, 0, NULL, PLAT_IMX93 },
++ { IMX93_CLK_LVDS_GATE, "lvds", "media_ldb_root", 0x9600, 0, NULL, PLAT_IMX93 },
+ { IMX93_CLK_LCDIF_GATE, "lcdif", "media_apb_root", 0x9640, },
+- { IMX93_CLK_PXP_GATE, "pxp", "media_apb_root", 0x9680, },
++ { IMX93_CLK_PXP_GATE, "pxp", "media_apb_root", 0x9680, 0, NULL, PLAT_IMX93 },
+ { IMX93_CLK_ISI_GATE, "isi", "media_apb_root", 0x96c0, },
+ { IMX93_CLK_NIC_MEDIA_GATE, "nic_media", "media_axi_root", 0x9700, },
+ { IMX93_CLK_USB_CONTROLLER_GATE, "usb_controller", "hsio_root", 0x9a00, },
+@@ -242,10 +253,13 @@ static const struct imx93_clk_ccgr {
+ { IMX93_CLK_MQS1_GATE, "mqs1", "sai1_root", 0x9b00, },
+ { IMX93_CLK_MQS2_GATE, "mqs2", "sai3_root", 0x9b40, },
+ { IMX93_CLK_AUD_XCVR_GATE, "aud_xcvr", "audio_xcvr_root", 0x9b80, },
+- { IMX93_CLK_SPDIF_GATE, "spdif", "spdif_root", 0x9c00, },
++ { IMX93_CLK_SPDIF_IPG, "spdif_ipg_clk", "bus_wakeup_root", 0x9c00, 0, &share_count_spdif},
++ { IMX93_CLK_SPDIF_GATE, "spdif", "spdif_root", 0x9c00, 0, &share_count_spdif},
+ { IMX93_CLK_HSIO_32K_GATE, "hsio_32k", "osc_32k", 0x9dc0, },
+- { IMX93_CLK_ENET1_GATE, "enet1", "wakeup_axi_root", 0x9e00, },
+- { IMX93_CLK_ENET_QOS_GATE, "enet_qos", "wakeup_axi_root", 0x9e40, },
++ { IMX93_CLK_ENET1_GATE, "enet1", "wakeup_axi_root", 0x9e00, 0, NULL, PLAT_IMX93, },
++ { IMX93_CLK_ENET_QOS_GATE, "enet_qos", "wakeup_axi_root", 0x9e40, 0, NULL, PLAT_IMX93, },
++ { IMX91_CLK_ENET2_REGULAR_GATE, "enet2_regular", "wakeup_axi_root", 0x9e00, 0, NULL, PLAT_IMX91, },
++ { IMX91_CLK_ENET1_QOS_TSN_GATE, "enet1_qos_tsn", "wakeup_axi_root", 0x9e40, 0, NULL, PLAT_IMX91, },
+ /* Critical because clk accessed during CPU idle */
+ { IMX93_CLK_SYS_CNT_GATE, "sys_cnt", "osc_24m", 0x9e80, CLK_IS_CRITICAL},
+ { IMX93_CLK_TSTMR1_GATE, "tstmr1", "bus_aon_root", 0x9ec0, },
+@@ -265,6 +279,7 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+ const struct imx93_clk_ccgr *ccgr;
+ void __iomem *base, *anatop_base;
+ int i, ret;
++ const unsigned long plat = (unsigned long)device_get_match_data(&pdev->dev);
+
+ clk_hw_data = devm_kzalloc(dev, struct_size(clk_hw_data, hws,
+ IMX93_CLK_END), GFP_KERNEL);
+@@ -314,17 +329,20 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+
+ for (i = 0; i < ARRAY_SIZE(root_array); i++) {
+ root = &root_array[i];
+- clks[root->clk] = imx93_clk_composite_flags(root->name,
+- parent_names[root->sel],
+- 4, base + root->off, 3,
+- root->flags);
++ if (!root->plat || root->plat & plat)
++ clks[root->clk] = imx93_clk_composite_flags(root->name,
++ parent_names[root->sel],
++ 4, base + root->off, 3,
++ root->flags);
+ }
+
+ for (i = 0; i < ARRAY_SIZE(ccgr_array); i++) {
+ ccgr = &ccgr_array[i];
+- clks[ccgr->clk] = imx93_clk_gate(NULL, ccgr->name, ccgr->parent_name,
+- ccgr->flags, base + ccgr->off, 0, 1, 1, 3,
+- ccgr->shared_count);
++ if (!ccgr->plat || ccgr->plat & plat)
++ clks[ccgr->clk] = imx93_clk_gate(NULL,
++ ccgr->name, ccgr->parent_name,
++ ccgr->flags, base + ccgr->off, 0, 1, 1, 3,
++ ccgr->shared_count);
+ }
+
+ clks[IMX93_CLK_A55_SEL] = imx_clk_hw_mux2("a55_sel", base + 0x4820, 0, 1, a55_core_sels,
+@@ -354,7 +372,8 @@ static int imx93_clocks_probe(struct platform_device *pdev)
+ }
+
+ static const struct of_device_id imx93_clk_of_match[] = {
+- { .compatible = "fsl,imx93-ccm" },
++ { .compatible = "fsl,imx93-ccm", .data = (void *)PLAT_IMX93 },
++ { .compatible = "fsl,imx91-ccm", .data = (void *)PLAT_IMX91 },
+ { /* Sentinel */ },
+ };
+ MODULE_DEVICE_TABLE(of, imx93_clk_of_match);
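The new plat field plus device_get_match_data() implement table gating: both compatibles share one driver, each table entry carries a platform bitmask, and entries whose mask excludes the probed platform are skipped (mask 0 means all platforms). A standalone sketch of the scheme (constants and harness are ours):

        #include <stdio.h>

        #define PLAT_X93 (1UL << 0)
        #define PLAT_X91 (1UL << 1)

        struct clk_entry { const char *name; unsigned long plat; /* 0 == everywhere */ };

        static const struct clk_entry entries[] = {
                { "bus_root",  0        },
                { "flexio1",   PLAT_X93 },
                { "enet2_reg", PLAT_X91 },
        };

        int main(void)
        {
                unsigned long plat = PLAT_X91;  /* from device_get_match_data() in the driver */

                for (unsigned int i = 0; i < sizeof(entries) / sizeof(entries[0]); i++)
                        if (!entries[i].plat || (entries[i].plat & plat))
                                printf("register %s\n", entries[i].name);
                return 0;
        }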
+diff --git a/drivers/clk/qcom/camcc-x1e80100.c b/drivers/clk/qcom/camcc-x1e80100.c
+index 85e76c7712ad84..b73524ae64b1b2 100644
+--- a/drivers/clk/qcom/camcc-x1e80100.c
++++ b/drivers/clk/qcom/camcc-x1e80100.c
+@@ -2212,6 +2212,8 @@ static struct clk_branch cam_cc_sfe_0_fast_ahb_clk = {
+ },
+ };
+
++static struct gdsc cam_cc_titan_top_gdsc;
++
+ static struct gdsc cam_cc_bps_gdsc = {
+ .gdscr = 0x10004,
+ .en_rest_wait_val = 0x2,
+@@ -2221,6 +2223,7 @@ static struct gdsc cam_cc_bps_gdsc = {
+ .name = "cam_cc_bps_gdsc",
+ },
+ .pwrsts = PWRSTS_OFF_ON,
++ .parent = &cam_cc_titan_top_gdsc.pd,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -2233,6 +2236,7 @@ static struct gdsc cam_cc_ife_0_gdsc = {
+ .name = "cam_cc_ife_0_gdsc",
+ },
+ .pwrsts = PWRSTS_OFF_ON,
++ .parent = &cam_cc_titan_top_gdsc.pd,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -2245,6 +2249,7 @@ static struct gdsc cam_cc_ife_1_gdsc = {
+ .name = "cam_cc_ife_1_gdsc",
+ },
+ .pwrsts = PWRSTS_OFF_ON,
++ .parent = &cam_cc_titan_top_gdsc.pd,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -2257,6 +2262,7 @@ static struct gdsc cam_cc_ipe_0_gdsc = {
+ .name = "cam_cc_ipe_0_gdsc",
+ },
+ .pwrsts = PWRSTS_OFF_ON,
++ .parent = &cam_cc_titan_top_gdsc.pd,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -2269,6 +2275,7 @@ static struct gdsc cam_cc_sfe_0_gdsc = {
+ .name = "cam_cc_sfe_0_gdsc",
+ },
+ .pwrsts = PWRSTS_OFF_ON,
++ .parent = &cam_cc_titan_top_gdsc.pd,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
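The forward declaration of cam_cc_titan_top_gdsc is what lets every subdomain set .parent to it before its initializer appears later in the file; with the parent links in place, the power-domain core brings the top-level domain up first and down last. The underlying C pattern in miniature (types simplified):

        struct pd { const char *name; struct pd *parent; };

        static struct pd top_pd;        /* tentative definition, address already usable */

        static struct pd child_pd = {
                .name   = "cam_child",
                .parent = &top_pd,      /* references the not-yet-initialised parent */
        };

        static struct pd top_pd = {     /* the real initializer, later in the file */
                .name   = "cam_top",
        };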
+diff --git a/drivers/clk/qcom/gcc-sdm845.c b/drivers/clk/qcom/gcc-sdm845.c
+index dc3aa7014c3ed1..c6692808a8228c 100644
+--- a/drivers/clk/qcom/gcc-sdm845.c
++++ b/drivers/clk/qcom/gcc-sdm845.c
+@@ -454,7 +454,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s0_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s0_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s0_clk_src = {
+@@ -470,7 +470,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s1_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s1_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s1_clk_src = {
+@@ -486,7 +486,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s2_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s2_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s2_clk_src = {
+@@ -502,7 +502,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s3_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s3_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s3_clk_src = {
+@@ -518,7 +518,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s4_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s4_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s4_clk_src = {
+@@ -534,7 +534,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s5_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s5_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s5_clk_src = {
+@@ -550,7 +550,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s6_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s6_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s6_clk_src = {
+@@ -566,7 +566,7 @@ static struct clk_init_data gcc_qupv3_wrap0_s7_clk_src_init = {
+ .name = "gcc_qupv3_wrap0_s7_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap0_s7_clk_src = {
+@@ -582,7 +582,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s0_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s0_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s0_clk_src = {
+@@ -598,7 +598,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s1_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s1_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s1_clk_src = {
+@@ -614,7 +614,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s2_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s2_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s2_clk_src = {
+@@ -630,7 +630,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s3_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s3_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s3_clk_src = {
+@@ -646,7 +646,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s4_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s4_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s4_clk_src = {
+@@ -662,7 +662,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s5_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s5_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s5_clk_src = {
+@@ -678,7 +678,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s6_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s6_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s6_clk_src = {
+@@ -694,7 +694,7 @@ static struct clk_init_data gcc_qupv3_wrap1_s7_clk_src_init = {
+ .name = "gcc_qupv3_wrap1_s7_clk_src",
+ .parent_data = gcc_parent_data_0,
+ .num_parents = ARRAY_SIZE(gcc_parent_data_0),
+- .ops = &clk_rcg2_shared_ops,
++ .ops = &clk_rcg2_ops,
+ };
+
+ static struct clk_rcg2 gcc_qupv3_wrap1_s7_clk_src = {
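The hunks above switch the QUP wrapper clock sources from clk_rcg2_shared_ops to clk_rcg2_ops, meaning the RCGs are no longer parked on the safe XO configuration when gated and instead retain their programmed mux/divider setting. The following is a minimal user-space sketch of that behavioural difference only; the structure and function names are illustrative stand-ins, not the Qualcomm clk driver.

#include <stdio.h>

/* Illustrative model of an RCG: "shared" ops park the clock on the
 * 19.2 MHz XO when it is disabled, plain ops only gate it and keep
 * the programmed configuration. Names are hypothetical. */
struct rcg_model {
	unsigned long hw_rate;	/* what the hardware is programmed to */
	void (*disable)(struct rcg_model *r);
};

static void plain_disable(struct rcg_model *r)
{
	/* clk_rcg2_ops: gate only, configuration untouched */
	(void)r;
}

static void shared_disable(struct rcg_model *r)
{
	/* clk_rcg2_shared_ops: reconfigure to the safe XO source first */
	r->hw_rate = 19200000;
}

int main(void)
{
	struct rcg_model a = { 100000000, plain_disable };
	struct rcg_model b = { 100000000, shared_disable };

	a.disable(&a);
	b.disable(&b);
	printf("plain ops after disable: %lu Hz\n", a.hw_rate);
	printf("shared ops after disable: %lu Hz\n", b.hw_rate);
	return 0;
}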
+diff --git a/drivers/clk/qcom/gcc-x1e80100.c b/drivers/clk/qcom/gcc-x1e80100.c
+index 8ea25aa25dff04..7288af845434d8 100644
+--- a/drivers/clk/qcom/gcc-x1e80100.c
++++ b/drivers/clk/qcom/gcc-x1e80100.c
+@@ -6083,7 +6083,7 @@ static struct gdsc gcc_usb20_prim_gdsc = {
+ .pd = {
+ .name = "gcc_usb20_prim_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+diff --git a/drivers/clk/ralink/clk-mtmips.c b/drivers/clk/ralink/clk-mtmips.c
+index 76285fbbdeaa2d..4b5d8b741e4e17 100644
+--- a/drivers/clk/ralink/clk-mtmips.c
++++ b/drivers/clk/ralink/clk-mtmips.c
+@@ -264,7 +264,6 @@ static int mtmips_register_pherip_clocks(struct device_node *np,
+ }
+
+ static struct mtmips_clk_fixed rt3883_fixed_clocks[] = {
+- CLK_FIXED("xtal", NULL, 40000000),
+ CLK_FIXED("periph", "xtal", 40000000)
+ };
+
+diff --git a/drivers/clk/renesas/renesas-cpg-mssr.c b/drivers/clk/renesas/renesas-cpg-mssr.c
+index 1b421b8097965b..0f27c33192e10d 100644
+--- a/drivers/clk/renesas/renesas-cpg-mssr.c
++++ b/drivers/clk/renesas/renesas-cpg-mssr.c
+@@ -981,7 +981,7 @@ static void __init cpg_mssr_reserved_exit(struct cpg_mssr_priv *priv)
+ static int __init cpg_mssr_reserved_init(struct cpg_mssr_priv *priv,
+ const struct cpg_mssr_info *info)
+ {
+- struct device_node *soc = of_find_node_by_path("/soc");
++ struct device_node *soc __free(device_node) = of_find_node_by_path("/soc");
+ struct device_node *node;
+ uint32_t args[MAX_PHANDLE_ARGS];
+ unsigned int *ids = NULL;
+diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-a64.c b/drivers/clk/sunxi-ng/ccu-sun50i-a64.c
+index c255dba2c96db3..6727a3e30a1297 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun50i-a64.c
++++ b/drivers/clk/sunxi-ng/ccu-sun50i-a64.c
+@@ -535,11 +535,11 @@ static SUNXI_CCU_M_WITH_MUX_GATE(de_clk, "de", de_parents,
+ CLK_SET_RATE_PARENT);
+
+ /*
+- * DSI output seems to work only when PLL_MIPI selected. Set it and prevent
+- * the mux from reparenting.
++ * Experiments showed that RGB output requires pll-video0-2x, while DSI
++ * requires pll-mipi. It will not work with incorrect clock, the screen will
++ * be blank.
++ * sun50i-a64.dtsi assigns pll-mipi as TCON0 parent by default
+ */
+-#define SUN50I_A64_TCON0_CLK_REG 0x118
+-
+ static const char * const tcon0_parents[] = { "pll-mipi", "pll-video0-2x" };
+ static const u8 tcon0_table[] = { 0, 2, };
+ static SUNXI_CCU_MUX_TABLE_WITH_GATE_CLOSEST(tcon0_clk, "tcon0", tcon0_parents,
+@@ -959,11 +959,6 @@ static int sun50i_a64_ccu_probe(struct platform_device *pdev)
+
+ writel(0x515, reg + SUN50I_A64_PLL_MIPI_REG);
+
+- /* Set PLL MIPI as parent for TCON0 */
+- val = readl(reg + SUN50I_A64_TCON0_CLK_REG);
+- val &= ~GENMASK(26, 24);
+- writel(val | (0 << 24), reg + SUN50I_A64_TCON0_CLK_REG);
+-
+ ret = devm_sunxi_ccu_probe(&pdev->dev, reg, &sun50i_a64_ccu_desc);
+ if (ret)
+ return ret;
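With the probe-time register poke removed, TCON0's parent now comes purely from the mux table, with sun50i-a64.dtsi expected to assign pll-mipi via assigned-clocks. A tiny sketch of the table lookup the mux performs, mirroring tcon0_parents/tcon0_table above; this is explanatory only, not the sunxi-ng ccu implementation.

#include <stdio.h>

/* table[] maps parent index to the hardware mux value written to
 * bits [26:24] of the TCON0 clock register. */
static const char *const parents[] = { "pll-mipi", "pll-video0-2x" };
static const unsigned char table[] = { 0, 2 };

static unsigned char mux_val_for_parent(unsigned int index)
{
	return table[index];
}

int main(void)
{
	for (unsigned int i = 0; i < 2; i++)
		printf("%s -> mux value %u\n", parents[i], mux_val_for_parent(i));
	return 0;
}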
+diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-a64.h b/drivers/clk/sunxi-ng/ccu-sun50i-a64.h
+index a8c11c0b4e0676..dfba88a5ad0f7c 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun50i-a64.h
++++ b/drivers/clk/sunxi-ng/ccu-sun50i-a64.h
+@@ -21,7 +21,6 @@
+
+ /* PLL_VIDEO0 exported for HDMI PHY */
+
+-#define CLK_PLL_VIDEO0_2X 8
+ #define CLK_PLL_VE 9
+ #define CLK_PLL_DDR0 10
+
+@@ -32,7 +31,6 @@
+ #define CLK_PLL_PERIPH1_2X 14
+ #define CLK_PLL_VIDEO1 15
+ #define CLK_PLL_GPU 16
+-#define CLK_PLL_MIPI 17
+ #define CLK_PLL_HSIC 18
+ #define CLK_PLL_DE 19
+ #define CLK_PLL_DDR1 20
+diff --git a/drivers/clk/thead/clk-th1520-ap.c b/drivers/clk/thead/clk-th1520-ap.c
+index 1015fab9525157..4c9555fc61844d 100644
+--- a/drivers/clk/thead/clk-th1520-ap.c
++++ b/drivers/clk/thead/clk-th1520-ap.c
+@@ -657,7 +657,7 @@ static struct ccu_div apb_pclk = {
+ .hw.init = CLK_HW_INIT_PARENTS_DATA("apb-pclk",
+ apb_parents,
+ &ccu_div_ops,
+- 0),
++ CLK_IGNORE_UNUSED),
+ },
+ };
+
+@@ -794,13 +794,13 @@ static CCU_GATE(CLK_X2X_CPUSYS, x2x_cpusys_clk, "x2x-cpusys", axi4_cpusys2_aclk_
+ 0x134, BIT(7), 0);
+ static CCU_GATE(CLK_CPU2AON_X2H, cpu2aon_x2h_clk, "cpu2aon-x2h", axi_aclk_pd, 0x138, BIT(8), 0);
+ static CCU_GATE(CLK_CPU2PERI_X2H, cpu2peri_x2h_clk, "cpu2peri-x2h", axi4_cpusys2_aclk_pd,
+- 0x140, BIT(9), 0);
++ 0x140, BIT(9), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_PERISYS_APB1_HCLK, perisys_apb1_hclk, "perisys-apb1-hclk", perisys_ahb_hclk_pd,
+ 0x150, BIT(9), 0);
+ static CCU_GATE(CLK_PERISYS_APB2_HCLK, perisys_apb2_hclk, "perisys-apb2-hclk", perisys_ahb_hclk_pd,
+- 0x150, BIT(10), 0);
++ 0x150, BIT(10), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_PERISYS_APB3_HCLK, perisys_apb3_hclk, "perisys-apb3-hclk", perisys_ahb_hclk_pd,
+- 0x150, BIT(11), 0);
++ 0x150, BIT(11), CLK_IGNORE_UNUSED);
+ static CCU_GATE(CLK_PERISYS_APB4_HCLK, perisys_apb4_hclk, "perisys-apb4-hclk", perisys_ahb_hclk_pd,
+ 0x150, BIT(12), 0);
+ static CCU_GATE(CLK_NPU_AXI, npu_axi_clk, "npu-axi", axi_aclk_pd, 0x1c8, BIT(5), 0);
+@@ -896,7 +896,6 @@ static struct ccu_common *th1520_div_clks[] = {
+ &vo_axi_clk.common,
+ &vp_apb_clk.common,
+ &vp_axi_clk.common,
+- &cpu2vp_clk.common,
+ &venc_clk.common,
+ &dpu0_clk.common,
+ &dpu1_clk.common,
+@@ -916,6 +915,7 @@ static struct ccu_common *th1520_gate_clks[] = {
+ &bmu_clk.common,
+ &cpu2aon_x2h_clk.common,
+ &cpu2peri_x2h_clk.common,
++ &cpu2vp_clk.common,
+ &perisys_apb1_hclk.common,
+ &perisys_apb2_hclk.common,
+ &perisys_apb3_hclk.common,
+@@ -1048,7 +1048,8 @@ static int th1520_clk_probe(struct platform_device *pdev)
+ hw = devm_clk_hw_register_gate_parent_data(dev,
+ cg->common.hw.init->name,
+ cg->common.hw.init->parent_data,
+- 0, base + cg->common.cfg0,
++ cg->common.hw.init->flags,
++ base + cg->common.cfg0,
+ ffs(cg->enable) - 1, 0, NULL);
+ if (IS_ERR(hw))
+ return PTR_ERR(hw);
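The probe change above is what makes the new CLK_IGNORE_UNUSED annotations effective: the gate registration helper now receives cg->common.hw.init->flags rather than a hard-coded 0, which previously discarded whatever the clock's init data requested. A hypothetical miniature of that path, with the flag value borrowed from the kernel's clk-provider.h:

#include <stdio.h>

#define CLK_IGNORE_UNUSED (1UL << 3)

struct init_data { const char *name; unsigned long flags; };

static void register_gate(const struct init_data *init, unsigned long flags)
{
	printf("%s: registered with flags %#lx (init asked for %#lx)\n",
	       init->name, flags, init->flags);
}

int main(void)
{
	struct init_data apb = { "perisys-apb2-hclk", CLK_IGNORE_UNUSED };

	register_gate(&apb, 0);		/* old behaviour: flag silently lost */
	register_gate(&apb, apb.flags);	/* fixed behaviour: flag honoured */
	return 0;
}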
+diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
+index 0f04feb6cafaf1..47e910c22a80bd 100644
+--- a/drivers/cpufreq/acpi-cpufreq.c
++++ b/drivers/cpufreq/acpi-cpufreq.c
+@@ -626,7 +626,14 @@ static int acpi_cpufreq_blacklist(struct cpuinfo_x86 *c)
+ #endif
+
+ #ifdef CONFIG_ACPI_CPPC_LIB
+-static u64 get_max_boost_ratio(unsigned int cpu)
++/*
++ * get_max_boost_ratio: Computes the max_boost_ratio as the ratio
++ * between the highest_perf and the nominal_perf.
++ *
++ * Returns the max_boost_ratio for @cpu. Returns the CPPC nominal
++ * frequency via @nominal_freq if it is non-NULL pointer.
++ */
++static u64 get_max_boost_ratio(unsigned int cpu, u64 *nominal_freq)
+ {
+ struct cppc_perf_caps perf_caps;
+ u64 highest_perf, nominal_perf;
+@@ -655,6 +662,9 @@ static u64 get_max_boost_ratio(unsigned int cpu)
+
+ nominal_perf = perf_caps.nominal_perf;
+
++ if (nominal_freq)
++ *nominal_freq = perf_caps.nominal_freq;
++
+ if (!highest_perf || !nominal_perf) {
+ pr_debug("CPU%d: highest or nominal performance missing\n", cpu);
+ return 0;
+@@ -667,8 +677,12 @@ static u64 get_max_boost_ratio(unsigned int cpu)
+
+ return div_u64(highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
+ }
++
+ #else
+-static inline u64 get_max_boost_ratio(unsigned int cpu) { return 0; }
++static inline u64 get_max_boost_ratio(unsigned int cpu, u64 *nominal_freq)
++{
++ return 0;
++}
+ #endif
+
+ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+@@ -678,9 +692,9 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ struct acpi_cpufreq_data *data;
+ unsigned int cpu = policy->cpu;
+ struct cpuinfo_x86 *c = &cpu_data(cpu);
++ u64 max_boost_ratio, nominal_freq = 0;
+ unsigned int valid_states = 0;
+ unsigned int result = 0;
+- u64 max_boost_ratio;
+ unsigned int i;
+ #ifdef CONFIG_SMP
+ static int blacklisted;
+@@ -830,16 +844,20 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
+ }
+ freq_table[valid_states].frequency = CPUFREQ_TABLE_END;
+
+- max_boost_ratio = get_max_boost_ratio(cpu);
++ max_boost_ratio = get_max_boost_ratio(cpu, &nominal_freq);
+ if (max_boost_ratio) {
+- unsigned int freq = freq_table[0].frequency;
++ unsigned int freq = nominal_freq;
+
+ /*
+- * Because the loop above sorts the freq_table entries in the
+- * descending order, freq is the maximum frequency in the table.
+- * Assume that it corresponds to the CPPC nominal frequency and
+- * use it to set cpuinfo.max_freq.
++ * The loop above sorts the freq_table entries in the
++ * descending order. If ACPI CPPC has not advertised
++ * the nominal frequency (this is possible in CPPC
++ * revisions prior to 3), then use the first entry in
++ * the pstate table as a proxy for nominal frequency.
+ */
++ if (!freq)
++ freq = freq_table[0].frequency;
++
+ policy->cpuinfo.max_freq = freq * max_boost_ratio >> SCHED_CAPACITY_SHIFT;
+ } else {
+ /*
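get_max_boost_ratio() now also reports the CPPC nominal frequency, and cpuinfo.max_freq is scaled from it, falling back to the first (highest) freq_table entry only when a pre-revision-3 CPPC table leaves nominal_freq at 0. A worked example of the fixed-point arithmetic from the hunk, using made-up CPPC values:

#include <stdio.h>
#include <stdint.h>

#define SCHED_CAPACITY_SHIFT 10

int main(void)
{
	uint64_t highest_perf = 280, nominal_perf = 200;
	uint64_t nominal_freq = 2000000;	/* kHz, from CPPC rev >= 3 */
	uint64_t first_table_freq = 2000000;	/* kHz, highest P-state */

	/* ratio = highest_perf << 10 / nominal_perf */
	uint64_t ratio = (highest_perf << SCHED_CAPACITY_SHIFT) / nominal_perf;
	/* prefer the advertised nominal frequency, else the table proxy */
	uint64_t freq = nominal_freq ? nominal_freq : first_table_freq;

	printf("max_boost_ratio = %llu/1024\n", (unsigned long long)ratio);
	printf("cpuinfo.max_freq = %llu kHz\n",
	       (unsigned long long)(freq * ratio >> SCHED_CAPACITY_SHIFT));
	return 0;
}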
+diff --git a/drivers/cpufreq/qcom-cpufreq-hw.c b/drivers/cpufreq/qcom-cpufreq-hw.c
+index 900d6844c43d3f..e7399780638393 100644
+--- a/drivers/cpufreq/qcom-cpufreq-hw.c
++++ b/drivers/cpufreq/qcom-cpufreq-hw.c
+@@ -143,14 +143,12 @@ static unsigned long qcom_lmh_get_throttle_freq(struct qcom_cpufreq_data *data)
+ }
+
+ /* Get the frequency requested by the cpufreq core for the CPU */
+-static unsigned int qcom_cpufreq_get_freq(unsigned int cpu)
++static unsigned int qcom_cpufreq_get_freq(struct cpufreq_policy *policy)
+ {
+ struct qcom_cpufreq_data *data;
+ const struct qcom_cpufreq_soc_data *soc_data;
+- struct cpufreq_policy *policy;
+ unsigned int index;
+
+- policy = cpufreq_cpu_get_raw(cpu);
+ if (!policy)
+ return 0;
+
+@@ -163,12 +161,10 @@ static unsigned int qcom_cpufreq_get_freq(unsigned int cpu)
+ return policy->freq_table[index].frequency;
+ }
+
+-static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
++static unsigned int __qcom_cpufreq_hw_get(struct cpufreq_policy *policy)
+ {
+ struct qcom_cpufreq_data *data;
+- struct cpufreq_policy *policy;
+
+- policy = cpufreq_cpu_get_raw(cpu);
+ if (!policy)
+ return 0;
+
+@@ -177,7 +173,12 @@ static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
+ if (data->throttle_irq >= 0)
+ return qcom_lmh_get_throttle_freq(data) / HZ_PER_KHZ;
+
+- return qcom_cpufreq_get_freq(cpu);
++ return qcom_cpufreq_get_freq(policy);
++}
++
++static unsigned int qcom_cpufreq_hw_get(unsigned int cpu)
++{
++ return __qcom_cpufreq_hw_get(cpufreq_cpu_get_raw(cpu));
+ }
+
+ static unsigned int qcom_cpufreq_hw_fast_switch(struct cpufreq_policy *policy,
+@@ -363,7 +364,7 @@ static void qcom_lmh_dcvs_notify(struct qcom_cpufreq_data *data)
+ * If h/w throttled frequency is higher than what cpufreq has requested
+ * for, then stop polling and switch back to interrupt mechanism.
+ */
+- if (throttled_freq >= qcom_cpufreq_get_freq(cpu))
++ if (throttled_freq >= qcom_cpufreq_get_freq(cpufreq_cpu_get_raw(cpu)))
+ enable_irq(data->throttle_irq);
+ else
+ mod_delayed_work(system_highpri_wq, &data->throttle_work,
+@@ -441,7 +442,6 @@ static int qcom_cpufreq_hw_lmh_init(struct cpufreq_policy *policy, int index)
+ return data->throttle_irq;
+
+ data->cancel_throttle = false;
+- data->policy = policy;
+
+ mutex_init(&data->throttle_lock);
+ INIT_DEFERRABLE_WORK(&data->throttle_work, qcom_lmh_dcvs_poll);
+@@ -552,6 +552,7 @@ static int qcom_cpufreq_hw_cpu_init(struct cpufreq_policy *policy)
+
+ policy->driver_data = data;
+ policy->dvfs_possible_from_any_cpu = true;
++ data->policy = policy;
+
+ ret = qcom_cpufreq_hw_read_lut(cpu_dev, policy);
+ if (ret) {
+@@ -622,11 +623,24 @@ static unsigned long qcom_cpufreq_hw_recalc_rate(struct clk_hw *hw, unsigned lon
+ {
+ struct qcom_cpufreq_data *data = container_of(hw, struct qcom_cpufreq_data, cpu_clk);
+
+- return qcom_lmh_get_throttle_freq(data);
++ return __qcom_cpufreq_hw_get(data->policy) * HZ_PER_KHZ;
++}
++
++/*
++ * Since we cannot determine the closest rate of the target rate, let's just
++ * return the actual rate at which the clock is running at. This is needed to
++ * make clk_set_rate() API work properly.
++ */
++static int qcom_cpufreq_hw_determine_rate(struct clk_hw *hw, struct clk_rate_request *req)
++{
++ req->rate = qcom_cpufreq_hw_recalc_rate(hw, 0);
++
++ return 0;
+ }
+
+ static const struct clk_ops qcom_cpufreq_hw_clk_ops = {
+ .recalc_rate = qcom_cpufreq_hw_recalc_rate,
++ .determine_rate = qcom_cpufreq_hw_determine_rate,
+ };
+
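The new determine_rate callback simply echoes whatever recalc_rate reports, since the driver has no way to compute a closest rate; that is the minimum contract needed for clk_set_rate() on the CPU clock to succeed. A simplified model of that handshake, with plain structs standing in for the kernel's clk_hw and clk_rate_request:

#include <stdio.h>

struct rate_request { unsigned long rate; };

static unsigned long recalc_rate(void)
{
	return 1804800UL * 1000;	/* pretend LUT lookup, in Hz */
}

static int determine_rate(struct rate_request *req)
{
	/* cannot round to a target: report the actual running rate */
	req->rate = recalc_rate();
	return 0;
}

int main(void)
{
	struct rate_request req = { .rate = 2000000000UL };

	determine_rate(&req);
	printf("requested rate resolved to %lu Hz\n", req.rate);
	return 0;
}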
+ static int qcom_cpufreq_hw_driver_probe(struct platform_device *pdev)
+diff --git a/drivers/crypto/caam/blob_gen.c b/drivers/crypto/caam/blob_gen.c
+index 87781c1534ee5b..079a22cc9f02be 100644
+--- a/drivers/crypto/caam/blob_gen.c
++++ b/drivers/crypto/caam/blob_gen.c
+@@ -2,6 +2,7 @@
+ /*
+ * Copyright (C) 2015 Pengutronix, Steffen Trumtrar <kernel@pengutronix.de>
+ * Copyright (C) 2021 Pengutronix, Ahmad Fatoum <kernel@pengutronix.de>
++ * Copyright 2024 NXP
+ */
+
+ #define pr_fmt(fmt) "caam blob_gen: " fmt
+@@ -104,7 +105,7 @@ int caam_process_blob(struct caam_blob_priv *priv,
+ }
+
+ ctrlpriv = dev_get_drvdata(jrdev->parent);
+- moo = FIELD_GET(CSTA_MOO, rd_reg32(&ctrlpriv->ctrl->perfmon.status));
++ moo = FIELD_GET(CSTA_MOO, rd_reg32(&ctrlpriv->jr[0]->perfmon.status));
+ if (moo != CSTA_MOO_SECURE && moo != CSTA_MOO_TRUSTED)
+ dev_warn(jrdev,
+ "using insecure test key, enable HAB to use unique device key!\n");
+diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h
+index 410c83712e2851..30c2b1a64695c0 100644
+--- a/drivers/crypto/hisilicon/sec2/sec.h
++++ b/drivers/crypto/hisilicon/sec2/sec.h
+@@ -37,6 +37,7 @@ struct sec_aead_req {
+ u8 *a_ivin;
+ dma_addr_t a_ivin_dma;
+ struct aead_request *aead_req;
++ bool fallback;
+ };
+
+ /* SEC request of Crypto */
+@@ -90,9 +91,7 @@ struct sec_auth_ctx {
+ dma_addr_t a_key_dma;
+ u8 *a_key;
+ u8 a_key_len;
+- u8 mac_len;
+ u8 a_alg;
+- bool fallback;
+ struct crypto_shash *hash_tfm;
+ struct crypto_aead *fallback_aead_tfm;
+ };
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+index 0558f98e221f63..a9b1b9b0b03bf7 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+@@ -948,15 +948,14 @@ static int sec_aead_mac_init(struct sec_aead_req *req)
+ struct aead_request *aead_req = req->aead_req;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
+ size_t authsize = crypto_aead_authsize(tfm);
+- u8 *mac_out = req->out_mac;
+ struct scatterlist *sgl = aead_req->src;
++ u8 *mac_out = req->out_mac;
+ size_t copy_size;
+ off_t skip_size;
+
+ /* Copy input mac */
+ skip_size = aead_req->assoclen + aead_req->cryptlen - authsize;
+- copy_size = sg_pcopy_to_buffer(sgl, sg_nents(sgl), mac_out,
+- authsize, skip_size);
++ copy_size = sg_pcopy_to_buffer(sgl, sg_nents(sgl), mac_out, authsize, skip_size);
+ if (unlikely(copy_size != authsize))
+ return -EINVAL;
+
+@@ -1120,10 +1119,7 @@ static int sec_aead_setauthsize(struct crypto_aead *aead, unsigned int authsize)
+ struct sec_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct sec_auth_ctx *a_ctx = &ctx->a_ctx;
+
+- if (unlikely(a_ctx->fallback_aead_tfm))
+- return crypto_aead_setauthsize(a_ctx->fallback_aead_tfm, authsize);
+-
+- return 0;
++ return crypto_aead_setauthsize(a_ctx->fallback_aead_tfm, authsize);
+ }
+
+ static int sec_aead_fallback_setkey(struct sec_auth_ctx *a_ctx,
+@@ -1139,7 +1135,6 @@ static int sec_aead_fallback_setkey(struct sec_auth_ctx *a_ctx,
+ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ const u32 keylen, const enum sec_hash_alg a_alg,
+ const enum sec_calg c_alg,
+- const enum sec_mac_len mac_len,
+ const enum sec_cmode c_mode)
+ {
+ struct sec_ctx *ctx = crypto_aead_ctx(tfm);
+@@ -1151,7 +1146,6 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+
+ ctx->a_ctx.a_alg = a_alg;
+ ctx->c_ctx.c_alg = c_alg;
+- ctx->a_ctx.mac_len = mac_len;
+ c_ctx->c_mode = c_mode;
+
+ if (c_mode == SEC_CMODE_CCM || c_mode == SEC_CMODE_GCM) {
+@@ -1162,13 +1156,7 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ }
+ memcpy(c_ctx->c_key, key, keylen);
+
+- if (unlikely(a_ctx->fallback_aead_tfm)) {
+- ret = sec_aead_fallback_setkey(a_ctx, tfm, key, keylen);
+- if (ret)
+- return ret;
+- }
+-
+- return 0;
++ return sec_aead_fallback_setkey(a_ctx, tfm, key, keylen);
+ }
+
+ ret = crypto_authenc_extractkeys(&keys, key, keylen);
+@@ -1187,10 +1175,15 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ goto bad_key;
+ }
+
+- if ((ctx->a_ctx.mac_len & SEC_SQE_LEN_RATE_MASK) ||
+- (ctx->a_ctx.a_key_len & SEC_SQE_LEN_RATE_MASK)) {
++ if (ctx->a_ctx.a_key_len & SEC_SQE_LEN_RATE_MASK) {
+ ret = -EINVAL;
+- dev_err(dev, "MAC or AUTH key length error!\n");
++ dev_err(dev, "AUTH key length error!\n");
++ goto bad_key;
++ }
++
++ ret = sec_aead_fallback_setkey(a_ctx, tfm, key, keylen);
++ if (ret) {
++ dev_err(dev, "set sec fallback key err!\n");
+ goto bad_key;
+ }
+
+@@ -1202,27 +1195,19 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ }
+
+
+-#define GEN_SEC_AEAD_SETKEY_FUNC(name, aalg, calg, maclen, cmode) \
+-static int sec_setkey_##name(struct crypto_aead *tfm, const u8 *key, \
+- u32 keylen) \
+-{ \
+- return sec_aead_setkey(tfm, key, keylen, aalg, calg, maclen, cmode);\
+-}
+-
+-GEN_SEC_AEAD_SETKEY_FUNC(aes_cbc_sha1, SEC_A_HMAC_SHA1,
+- SEC_CALG_AES, SEC_HMAC_SHA1_MAC, SEC_CMODE_CBC)
+-GEN_SEC_AEAD_SETKEY_FUNC(aes_cbc_sha256, SEC_A_HMAC_SHA256,
+- SEC_CALG_AES, SEC_HMAC_SHA256_MAC, SEC_CMODE_CBC)
+-GEN_SEC_AEAD_SETKEY_FUNC(aes_cbc_sha512, SEC_A_HMAC_SHA512,
+- SEC_CALG_AES, SEC_HMAC_SHA512_MAC, SEC_CMODE_CBC)
+-GEN_SEC_AEAD_SETKEY_FUNC(aes_ccm, 0, SEC_CALG_AES,
+- SEC_HMAC_CCM_MAC, SEC_CMODE_CCM)
+-GEN_SEC_AEAD_SETKEY_FUNC(aes_gcm, 0, SEC_CALG_AES,
+- SEC_HMAC_GCM_MAC, SEC_CMODE_GCM)
+-GEN_SEC_AEAD_SETKEY_FUNC(sm4_ccm, 0, SEC_CALG_SM4,
+- SEC_HMAC_CCM_MAC, SEC_CMODE_CCM)
+-GEN_SEC_AEAD_SETKEY_FUNC(sm4_gcm, 0, SEC_CALG_SM4,
+- SEC_HMAC_GCM_MAC, SEC_CMODE_GCM)
++#define GEN_SEC_AEAD_SETKEY_FUNC(name, aalg, calg, cmode) \
++static int sec_setkey_##name(struct crypto_aead *tfm, const u8 *key, u32 keylen) \
++{ \
++ return sec_aead_setkey(tfm, key, keylen, aalg, calg, cmode); \
++}
++
++GEN_SEC_AEAD_SETKEY_FUNC(aes_cbc_sha1, SEC_A_HMAC_SHA1, SEC_CALG_AES, SEC_CMODE_CBC)
++GEN_SEC_AEAD_SETKEY_FUNC(aes_cbc_sha256, SEC_A_HMAC_SHA256, SEC_CALG_AES, SEC_CMODE_CBC)
++GEN_SEC_AEAD_SETKEY_FUNC(aes_cbc_sha512, SEC_A_HMAC_SHA512, SEC_CALG_AES, SEC_CMODE_CBC)
++GEN_SEC_AEAD_SETKEY_FUNC(aes_ccm, 0, SEC_CALG_AES, SEC_CMODE_CCM)
++GEN_SEC_AEAD_SETKEY_FUNC(aes_gcm, 0, SEC_CALG_AES, SEC_CMODE_GCM)
++GEN_SEC_AEAD_SETKEY_FUNC(sm4_ccm, 0, SEC_CALG_SM4, SEC_CMODE_CCM)
++GEN_SEC_AEAD_SETKEY_FUNC(sm4_gcm, 0, SEC_CALG_SM4, SEC_CMODE_GCM)
+
+ static int sec_aead_sgl_map(struct sec_ctx *ctx, struct sec_req *req)
+ {
+@@ -1470,9 +1455,10 @@ static void sec_skcipher_callback(struct sec_ctx *ctx, struct sec_req *req,
+ static void set_aead_auth_iv(struct sec_ctx *ctx, struct sec_req *req)
+ {
+ struct aead_request *aead_req = req->aead_req.aead_req;
+- struct sec_cipher_req *c_req = &req->c_req;
++ struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
++ size_t authsize = crypto_aead_authsize(tfm);
+ struct sec_aead_req *a_req = &req->aead_req;
+- size_t authsize = ctx->a_ctx.mac_len;
++ struct sec_cipher_req *c_req = &req->c_req;
+ u32 data_size = aead_req->cryptlen;
+ u8 flage = 0;
+ u8 cm, cl;
+@@ -1513,10 +1499,8 @@ static void set_aead_auth_iv(struct sec_ctx *ctx, struct sec_req *req)
+ static void sec_aead_set_iv(struct sec_ctx *ctx, struct sec_req *req)
+ {
+ struct aead_request *aead_req = req->aead_req.aead_req;
+- struct crypto_aead *tfm = crypto_aead_reqtfm(aead_req);
+- size_t authsize = crypto_aead_authsize(tfm);
+- struct sec_cipher_req *c_req = &req->c_req;
+ struct sec_aead_req *a_req = &req->aead_req;
++ struct sec_cipher_req *c_req = &req->c_req;
+
+ memcpy(c_req->c_ivin, aead_req->iv, ctx->c_ctx.ivsize);
+
+@@ -1524,15 +1508,11 @@ static void sec_aead_set_iv(struct sec_ctx *ctx, struct sec_req *req)
+ /*
+ * CCM 16Byte Cipher_IV: {1B_Flage,13B_IV,2B_counter},
+ * the counter must set to 0x01
++ * CCM 16Byte Auth_IV: {1B_AFlage,13B_IV,2B_Ptext_length}
+ */
+- ctx->a_ctx.mac_len = authsize;
+- /* CCM 16Byte Auth_IV: {1B_AFlage,13B_IV,2B_Ptext_length} */
+ set_aead_auth_iv(ctx, req);
+- }
+-
+- /* GCM 12Byte Cipher_IV == Auth_IV */
+- if (ctx->c_ctx.c_mode == SEC_CMODE_GCM) {
+- ctx->a_ctx.mac_len = authsize;
++ } else if (ctx->c_ctx.c_mode == SEC_CMODE_GCM) {
++ /* GCM 12Byte Cipher_IV == Auth_IV */
+ memcpy(a_req->a_ivin, c_req->c_ivin, SEC_AIV_SIZE);
+ }
+ }
+@@ -1542,9 +1522,11 @@ static void sec_auth_bd_fill_xcm(struct sec_auth_ctx *ctx, int dir,
+ {
+ struct sec_aead_req *a_req = &req->aead_req;
+ struct aead_request *aq = a_req->aead_req;
++ struct crypto_aead *tfm = crypto_aead_reqtfm(aq);
++ size_t authsize = crypto_aead_authsize(tfm);
+
+ /* C_ICV_Len is MAC size, 0x4 ~ 0x10 */
+- sec_sqe->type2.icvw_kmode |= cpu_to_le16((u16)ctx->mac_len);
++ sec_sqe->type2.icvw_kmode |= cpu_to_le16((u16)authsize);
+
+ /* mode set to CCM/GCM, don't set {A_Alg, AKey_Len, MAC_Len} */
+ sec_sqe->type2.a_key_addr = sec_sqe->type2.c_key_addr;
+@@ -1568,9 +1550,11 @@ static void sec_auth_bd_fill_xcm_v3(struct sec_auth_ctx *ctx, int dir,
+ {
+ struct sec_aead_req *a_req = &req->aead_req;
+ struct aead_request *aq = a_req->aead_req;
++ struct crypto_aead *tfm = crypto_aead_reqtfm(aq);
++ size_t authsize = crypto_aead_authsize(tfm);
+
+ /* C_ICV_Len is MAC size, 0x4 ~ 0x10 */
+- sqe3->c_icv_key |= cpu_to_le16((u16)ctx->mac_len << SEC_MAC_OFFSET_V3);
++ sqe3->c_icv_key |= cpu_to_le16((u16)authsize << SEC_MAC_OFFSET_V3);
+
+ /* mode set to CCM/GCM, don't set {A_Alg, AKey_Len, MAC_Len} */
+ sqe3->a_key_addr = sqe3->c_key_addr;
+@@ -1594,11 +1578,12 @@ static void sec_auth_bd_fill_ex(struct sec_auth_ctx *ctx, int dir,
+ struct sec_aead_req *a_req = &req->aead_req;
+ struct sec_cipher_req *c_req = &req->c_req;
+ struct aead_request *aq = a_req->aead_req;
++ struct crypto_aead *tfm = crypto_aead_reqtfm(aq);
++ size_t authsize = crypto_aead_authsize(tfm);
+
+ sec_sqe->type2.a_key_addr = cpu_to_le64(ctx->a_key_dma);
+
+- sec_sqe->type2.mac_key_alg =
+- cpu_to_le32(ctx->mac_len / SEC_SQE_LEN_RATE);
++ sec_sqe->type2.mac_key_alg = cpu_to_le32(authsize / SEC_SQE_LEN_RATE);
+
+ sec_sqe->type2.mac_key_alg |=
+ cpu_to_le32((u32)((ctx->a_key_len) /
+@@ -1648,11 +1633,13 @@ static void sec_auth_bd_fill_ex_v3(struct sec_auth_ctx *ctx, int dir,
+ struct sec_aead_req *a_req = &req->aead_req;
+ struct sec_cipher_req *c_req = &req->c_req;
+ struct aead_request *aq = a_req->aead_req;
++ struct crypto_aead *tfm = crypto_aead_reqtfm(aq);
++ size_t authsize = crypto_aead_authsize(tfm);
+
+ sqe3->a_key_addr = cpu_to_le64(ctx->a_key_dma);
+
+ sqe3->auth_mac_key |=
+- cpu_to_le32((u32)(ctx->mac_len /
++ cpu_to_le32((u32)(authsize /
+ SEC_SQE_LEN_RATE) << SEC_MAC_OFFSET_V3);
+
+ sqe3->auth_mac_key |=
+@@ -1703,9 +1690,9 @@ static void sec_aead_callback(struct sec_ctx *c, struct sec_req *req, int err)
+ {
+ struct aead_request *a_req = req->aead_req.aead_req;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(a_req);
++ size_t authsize = crypto_aead_authsize(tfm);
+ struct sec_aead_req *aead_req = &req->aead_req;
+ struct sec_cipher_req *c_req = &req->c_req;
+- size_t authsize = crypto_aead_authsize(tfm);
+ struct sec_qp_ctx *qp_ctx = req->qp_ctx;
+ struct aead_request *backlog_aead_req;
+ struct sec_req *backlog_req;
+@@ -1718,10 +1705,8 @@ static void sec_aead_callback(struct sec_ctx *c, struct sec_req *req, int err)
+ if (!err && c_req->encrypt) {
+ struct scatterlist *sgl = a_req->dst;
+
+- sz = sg_pcopy_from_buffer(sgl, sg_nents(sgl),
+- aead_req->out_mac,
+- authsize, a_req->cryptlen +
+- a_req->assoclen);
++ sz = sg_pcopy_from_buffer(sgl, sg_nents(sgl), aead_req->out_mac,
++ authsize, a_req->cryptlen + a_req->assoclen);
+ if (unlikely(sz != authsize)) {
+ dev_err(c->dev, "copy out mac err!\n");
+ err = -EINVAL;
+@@ -1929,8 +1914,10 @@ static void sec_aead_exit(struct crypto_aead *tfm)
+
+ static int sec_aead_ctx_init(struct crypto_aead *tfm, const char *hash_name)
+ {
++ struct aead_alg *alg = crypto_aead_alg(tfm);
+ struct sec_ctx *ctx = crypto_aead_ctx(tfm);
+- struct sec_auth_ctx *auth_ctx = &ctx->a_ctx;
++ struct sec_auth_ctx *a_ctx = &ctx->a_ctx;
++ const char *aead_name = alg->base.cra_name;
+ int ret;
+
+ ret = sec_aead_init(tfm);
+@@ -1939,11 +1926,20 @@ static int sec_aead_ctx_init(struct crypto_aead *tfm, const char *hash_name)
+ return ret;
+ }
+
+- auth_ctx->hash_tfm = crypto_alloc_shash(hash_name, 0, 0);
+- if (IS_ERR(auth_ctx->hash_tfm)) {
++ a_ctx->hash_tfm = crypto_alloc_shash(hash_name, 0, 0);
++ if (IS_ERR(a_ctx->hash_tfm)) {
+ dev_err(ctx->dev, "aead alloc shash error!\n");
+ sec_aead_exit(tfm);
+- return PTR_ERR(auth_ctx->hash_tfm);
++ return PTR_ERR(a_ctx->hash_tfm);
++ }
++
++ a_ctx->fallback_aead_tfm = crypto_alloc_aead(aead_name, 0,
++ CRYPTO_ALG_NEED_FALLBACK | CRYPTO_ALG_ASYNC);
++ if (IS_ERR(a_ctx->fallback_aead_tfm)) {
++ dev_err(ctx->dev, "aead driver alloc fallback tfm error!\n");
++ crypto_free_shash(ctx->a_ctx.hash_tfm);
++ sec_aead_exit(tfm);
++ return PTR_ERR(a_ctx->fallback_aead_tfm);
+ }
+
+ return 0;
+@@ -1953,6 +1949,7 @@ static void sec_aead_ctx_exit(struct crypto_aead *tfm)
+ {
+ struct sec_ctx *ctx = crypto_aead_ctx(tfm);
+
++ crypto_free_aead(ctx->a_ctx.fallback_aead_tfm);
+ crypto_free_shash(ctx->a_ctx.hash_tfm);
+ sec_aead_exit(tfm);
+ }
+@@ -1979,7 +1976,6 @@ static int sec_aead_xcm_ctx_init(struct crypto_aead *tfm)
+ sec_aead_exit(tfm);
+ return PTR_ERR(a_ctx->fallback_aead_tfm);
+ }
+- a_ctx->fallback = false;
+
+ return 0;
+ }
+@@ -2233,21 +2229,20 @@ static int sec_aead_spec_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ {
+ struct aead_request *req = sreq->aead_req.aead_req;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+- size_t authsize = crypto_aead_authsize(tfm);
++ size_t sz = crypto_aead_authsize(tfm);
+ u8 c_mode = ctx->c_ctx.c_mode;
+ struct device *dev = ctx->dev;
+ int ret;
+
+- if (unlikely(req->cryptlen + req->assoclen > MAX_INPUT_DATA_LEN ||
+- req->assoclen > SEC_MAX_AAD_LEN)) {
+- dev_err(dev, "aead input spec error!\n");
++ /* Hardware does not handle cases where authsize is less than 4 bytes */
++ if (unlikely(sz < MIN_MAC_LEN)) {
++ sreq->aead_req.fallback = true;
+ return -EINVAL;
+ }
+
+- if (unlikely((c_mode == SEC_CMODE_GCM && authsize < DES_BLOCK_SIZE) ||
+- (c_mode == SEC_CMODE_CCM && (authsize < MIN_MAC_LEN ||
+- authsize & MAC_LEN_MASK)))) {
+- dev_err(dev, "aead input mac length error!\n");
++ if (unlikely(req->cryptlen + req->assoclen > MAX_INPUT_DATA_LEN ||
++ req->assoclen > SEC_MAX_AAD_LEN)) {
++ dev_err(dev, "aead input spec error!\n");
+ return -EINVAL;
+ }
+
+@@ -2266,7 +2261,7 @@ static int sec_aead_spec_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ if (sreq->c_req.encrypt)
+ sreq->c_req.c_len = req->cryptlen;
+ else
+- sreq->c_req.c_len = req->cryptlen - authsize;
++ sreq->c_req.c_len = req->cryptlen - sz;
+ if (c_mode == SEC_CMODE_CBC) {
+ if (unlikely(sreq->c_req.c_len & (AES_BLOCK_SIZE - 1))) {
+ dev_err(dev, "aead crypto length error!\n");
+@@ -2292,8 +2287,8 @@ static int sec_aead_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
+
+ if (ctx->sec->qm.ver == QM_HW_V2) {
+ if (unlikely(!req->cryptlen || (!sreq->c_req.encrypt &&
+- req->cryptlen <= authsize))) {
+- ctx->a_ctx.fallback = true;
++ req->cryptlen <= authsize))) {
++ sreq->aead_req.fallback = true;
+ return -EINVAL;
+ }
+ }
+@@ -2321,16 +2316,9 @@ static int sec_aead_soft_crypto(struct sec_ctx *ctx,
+ bool encrypt)
+ {
+ struct sec_auth_ctx *a_ctx = &ctx->a_ctx;
+- struct device *dev = ctx->dev;
+ struct aead_request *subreq;
+ int ret;
+
+- /* Kunpeng920 aead mode not support input 0 size */
+- if (!a_ctx->fallback_aead_tfm) {
+- dev_err(dev, "aead fallback tfm is NULL!\n");
+- return -EINVAL;
+- }
+-
+ subreq = aead_request_alloc(a_ctx->fallback_aead_tfm, GFP_KERNEL);
+ if (!subreq)
+ return -ENOMEM;
+@@ -2362,10 +2350,11 @@ static int sec_aead_crypto(struct aead_request *a_req, bool encrypt)
+ req->aead_req.aead_req = a_req;
+ req->c_req.encrypt = encrypt;
+ req->ctx = ctx;
++ req->aead_req.fallback = false;
+
+ ret = sec_aead_param_check(ctx, req);
+ if (unlikely(ret)) {
+- if (ctx->a_ctx.fallback)
++ if (req->aead_req.fallback)
+ return sec_aead_soft_crypto(ctx, a_req, encrypt);
+ return -EINVAL;
+ }
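The thread running through the sec2 hunks is that mac_len and the fallback flag move out of sec_auth_ctx, which is shared by every request on a tfm, into per-request state derived from crypto_aead_authsize() at submission time; caching them in the shared context could let concurrent requests with different authsizes trample each other. A loose sketch of the contrast, with structures that only mirror the patch, not the real driver:

#include <stdio.h>
#include <stdbool.h>

struct shared_ctx { unsigned int mac_len; };			/* old: racy */
struct request    { unsigned int authsize; bool fallback; };	/* new */

static void old_submit(struct shared_ctx *ctx, unsigned int authsize)
{
	ctx->mac_len = authsize;	/* a parallel request may overwrite this */
}

static void new_submit(struct request *req, unsigned int authsize)
{
	req->authsize = authsize;	/* private to this request */
	req->fallback = authsize < 4;	/* HW cannot do MACs under 4 bytes */
}

int main(void)
{
	struct shared_ctx ctx = { 0 };
	struct request r1, r2;

	old_submit(&ctx, 16);
	old_submit(&ctx, 8);	/* first request now sees the wrong mac_len */
	new_submit(&r1, 16);
	new_submit(&r2, 8);
	printf("shared: %u, per-request: %u and %u\n",
	       ctx.mac_len, r1.authsize, r2.authsize);
	return 0;
}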
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.h b/drivers/crypto/hisilicon/sec2/sec_crypto.h
+index 27a0ee5ad9131c..04725b514382f8 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.h
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.h
+@@ -23,17 +23,6 @@ enum sec_hash_alg {
+ SEC_A_HMAC_SHA512 = 0x15,
+ };
+
+-enum sec_mac_len {
+- SEC_HMAC_CCM_MAC = 16,
+- SEC_HMAC_GCM_MAC = 16,
+- SEC_SM3_MAC = 32,
+- SEC_HMAC_SM3_MAC = 32,
+- SEC_HMAC_MD5_MAC = 16,
+- SEC_HMAC_SHA1_MAC = 20,
+- SEC_HMAC_SHA256_MAC = 32,
+- SEC_HMAC_SHA512_MAC = 64,
+-};
+-
+ enum sec_cmode {
+ SEC_CMODE_ECB = 0x0,
+ SEC_CMODE_CBC = 0x1,
+diff --git a/drivers/crypto/intel/iaa/Makefile b/drivers/crypto/intel/iaa/Makefile
+index b64b208d234408..55bda7770fac79 100644
+--- a/drivers/crypto/intel/iaa/Makefile
++++ b/drivers/crypto/intel/iaa/Makefile
+@@ -3,7 +3,7 @@
+ # Makefile for IAA crypto device drivers
+ #
+
+-ccflags-y += -I $(srctree)/drivers/dma/idxd -DDEFAULT_SYMBOL_NAMESPACE=IDXD
++ccflags-y += -I $(srctree)/drivers/dma/idxd -DDEFAULT_SYMBOL_NAMESPACE='"IDXD"'
+
+ obj-$(CONFIG_CRYPTO_DEV_IAA_CRYPTO) := iaa_crypto.o
+
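The Makefile tweak reflects the kernel's switch to string-literal symbol namespaces: -DDEFAULT_SYMBOL_NAMESPACE must now expand to a quoted string because the export machinery no longer stringifies a bare token, and the shell quoting '"IDXD"' is what survives make so the compiler sees "IDXD". The same quoting fix recurs for the QAT and idxd Makefiles below. A toy demonstration of why the bare token stopped working; this is a simplified stand-in, not the kernel's actual EXPORT_SYMBOL_NS expansion:

#include <stdio.h>

#define DEFAULT_SYMBOL_NAMESPACE "IDXD"	/* what -D...='"IDXD"' produces */

static const char *symbol_ns = DEFAULT_SYMBOL_NAMESPACE;

int main(void)
{
	/* With the old bare token IDXD this assignment would not compile:
	 * there is no identifier named IDXD to take a value from. */
	printf("exported into namespace %s\n", symbol_ns);
	return 0;
}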
+diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+index 237f8700007021..d2f07e34f3142d 100644
+--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
++++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+@@ -173,7 +173,7 @@ static int set_iaa_sync_mode(const char *name)
+ async_mode = false;
+ use_irq = false;
+ } else if (sysfs_streq(name, "async")) {
+- async_mode = true;
++ async_mode = false;
+ use_irq = false;
+ } else if (sysfs_streq(name, "async_irq")) {
+ async_mode = true;
+diff --git a/drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c b/drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c
+index f8a77bff88448d..e43361392c83f7 100644
+--- a/drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c
++++ b/drivers/crypto/intel/ixp4xx/ixp4xx_crypto.c
+@@ -471,6 +471,7 @@ static int init_ixp_crypto(struct device *dev)
+ return -ENODEV;
+ }
+ npe_id = npe_spec.args[0];
++ of_node_put(npe_spec.np);
+
+ ret = of_parse_phandle_with_fixed_args(np, "queue-rx", 1, 0,
+ &queue_spec);
+@@ -479,6 +480,7 @@ static int init_ixp_crypto(struct device *dev)
+ return -ENODEV;
+ }
+ recv_qid = queue_spec.args[0];
++ of_node_put(queue_spec.np);
+
+ ret = of_parse_phandle_with_fixed_args(np, "queue-txready", 1, 0,
+ &queue_spec);
+@@ -487,6 +489,7 @@ static int init_ixp_crypto(struct device *dev)
+ return -ENODEV;
+ }
+ send_qid = queue_spec.args[0];
++ of_node_put(queue_spec.np);
+ } else {
+ /*
+ * Hardcoded engine when using platform data, this goes away
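Each of_parse_phandle_with_fixed_args() success above hands back args.np with its refcount raised, so the new of_node_put() calls balance the count once the cell values have been copied out. A refcount model of the pattern, with a toy helper standing in for the OF API:

#include <stdio.h>

struct node { int refcount; };

static struct node *lookup(struct node *n, int *arg_out)
{
	n->refcount++;	/* of_parse_phandle_with_fixed_args() behaviour */
	*arg_out = 42;
	return n;
}

static void node_put(struct node *n)
{
	n->refcount--;	/* the of_node_put() the patch adds */
}

int main(void)
{
	struct node npe = { 1 };
	int id;

	struct node *np = lookup(&npe, &id);
	node_put(np);	/* without this, the reference leaks */
	printf("npe id %d, refcount back to %d\n", id, npe.refcount);
	return 0;
}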
+diff --git a/drivers/crypto/intel/qat/qat_common/Makefile b/drivers/crypto/intel/qat/qat_common/Makefile
+index eac73cbfdd38e2..7acf9c576149ba 100644
+--- a/drivers/crypto/intel/qat/qat_common/Makefile
++++ b/drivers/crypto/intel/qat/qat_common/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ obj-$(CONFIG_CRYPTO_DEV_QAT) += intel_qat.o
+-ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=CRYPTO_QAT
++ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"CRYPTO_QAT"'
+ intel_qat-objs := adf_cfg.o \
+ adf_isr.o \
+ adf_ctl_drv.o \
+diff --git a/drivers/crypto/tegra/tegra-se-aes.c b/drivers/crypto/tegra/tegra-se-aes.c
+index ae7a0f8435fc63..3106fd1e84b91e 100644
+--- a/drivers/crypto/tegra/tegra-se-aes.c
++++ b/drivers/crypto/tegra/tegra-se-aes.c
+@@ -1752,10 +1752,13 @@ static int tegra_cmac_digest(struct ahash_request *req)
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct tegra_cmac_ctx *ctx = crypto_ahash_ctx(tfm);
+ struct tegra_cmac_reqctx *rctx = ahash_request_ctx(req);
++ int ret;
+
+- tegra_cmac_init(req);
+- rctx->task |= SHA_UPDATE | SHA_FINAL;
++ ret = tegra_cmac_init(req);
++ if (ret)
++ return ret;
+
++ rctx->task |= SHA_UPDATE | SHA_FINAL;
+ return crypto_transfer_hash_request_to_engine(ctx->se->engine, req);
+ }
+
+diff --git a/drivers/crypto/tegra/tegra-se-hash.c b/drivers/crypto/tegra/tegra-se-hash.c
+index 4d4bd727f49869..0b5cdd5676b17e 100644
+--- a/drivers/crypto/tegra/tegra-se-hash.c
++++ b/drivers/crypto/tegra/tegra-se-hash.c
+@@ -615,13 +615,16 @@ static int tegra_sha_digest(struct ahash_request *req)
+ struct tegra_sha_reqctx *rctx = ahash_request_ctx(req);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct tegra_sha_ctx *ctx = crypto_ahash_ctx(tfm);
++ int ret;
+
+ if (ctx->fallback)
+ return tegra_sha_fallback_digest(req);
+
+- tegra_sha_init(req);
+- rctx->task |= SHA_UPDATE | SHA_FINAL;
++ ret = tegra_sha_init(req);
++ if (ret)
++ return ret;
+
++ rctx->task |= SHA_UPDATE | SHA_FINAL;
+ return crypto_transfer_hash_request_to_engine(ctx->se->engine, req);
+ }
+
+diff --git a/drivers/dma/idxd/Makefile b/drivers/dma/idxd/Makefile
+index 2b4a0d406e1e71..9ff9d7b87b649d 100644
+--- a/drivers/dma/idxd/Makefile
++++ b/drivers/dma/idxd/Makefile
+@@ -1,4 +1,4 @@
+-ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=IDXD
++ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"IDXD"'
+
+ obj-$(CONFIG_INTEL_IDXD_BUS) += idxd_bus.o
+ idxd_bus-y := bus.o
+diff --git a/drivers/dma/ti/edma.c b/drivers/dma/ti/edma.c
+index 5f8d2e93ff3fb5..7f861fb07cb837 100644
+--- a/drivers/dma/ti/edma.c
++++ b/drivers/dma/ti/edma.c
+@@ -208,7 +208,6 @@ struct edma_desc {
+ struct edma_cc;
+
+ struct edma_tc {
+- struct device_node *node;
+ u16 id;
+ };
+
+@@ -2466,13 +2465,13 @@ static int edma_probe(struct platform_device *pdev)
+ if (ret || i == ecc->num_tc)
+ break;
+
+- ecc->tc_list[i].node = tc_args.np;
+ ecc->tc_list[i].id = i;
+ queue_priority_mapping[i][1] = tc_args.args[0];
+ if (queue_priority_mapping[i][1] > lowest_priority) {
+ lowest_priority = queue_priority_mapping[i][1];
+ info->default_queue = i;
+ }
++ of_node_put(tc_args.np);
+ }
+
+ /* See if we have optional dma-channel-mask array */
+diff --git a/drivers/firewire/device-attribute-test.c b/drivers/firewire/device-attribute-test.c
+index 2f123c6b0a1659..97478a96d1c965 100644
+--- a/drivers/firewire/device-attribute-test.c
++++ b/drivers/firewire/device-attribute-test.c
+@@ -99,6 +99,7 @@ static void device_attr_simple_avc(struct kunit *test)
+ struct device *unit0_dev = (struct device *)&unit0.device;
+ static const int unit0_expected_ids[] = {0x00ffffff, 0x00ffffff, 0x0000a02d, 0x00010001};
+ char *buf = kunit_kzalloc(test, PAGE_SIZE, GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);
+ int ids[4] = {0, 0, 0, 0};
+
+ // Ensure associations for node and unit devices.
+@@ -180,6 +181,7 @@ static void device_attr_legacy_avc(struct kunit *test)
+ struct device *unit0_dev = (struct device *)&unit0.device;
+ static const int unit0_expected_ids[] = {0x00012345, 0x00fedcba, 0x00abcdef, 0x00543210};
+ char *buf = kunit_kzalloc(test, PAGE_SIZE, GFP_KERNEL);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);
+ int ids[4] = {0, 0, 0, 0};
+
+ // Ensure associations for node and unit devices.
+diff --git a/drivers/firmware/efi/sysfb_efi.c b/drivers/firmware/efi/sysfb_efi.c
+index cc807ed35aedf7..1e509595ac0343 100644
+--- a/drivers/firmware/efi/sysfb_efi.c
++++ b/drivers/firmware/efi/sysfb_efi.c
+@@ -91,6 +91,7 @@ void efifb_setup_from_dmi(struct screen_info *si, const char *opt)
+ _ret_; \
+ })
+
++#ifdef CONFIG_EFI
+ static int __init efifb_set_system(const struct dmi_system_id *id)
+ {
+ struct efifb_dmi_info *info = id->driver_data;
+@@ -346,7 +347,6 @@ static const struct fwnode_operations efifb_fwnode_ops = {
+ .add_links = efifb_add_links,
+ };
+
+-#ifdef CONFIG_EFI
+ static struct fwnode_handle efifb_fwnode;
+
+ __init void sysfb_apply_efi_quirks(void)
+diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c
+index 14afd68664a911..a6bdedbbf70888 100644
+--- a/drivers/firmware/qcom/qcom_scm.c
++++ b/drivers/firmware/qcom/qcom_scm.c
+@@ -2001,13 +2001,17 @@ static int qcom_scm_probe(struct platform_device *pdev)
+
+ irq = platform_get_irq_optional(pdev, 0);
+ if (irq < 0) {
+- if (irq != -ENXIO)
+- return irq;
++ if (irq != -ENXIO) {
++ ret = irq;
++ goto err;
++ }
+ } else {
+ ret = devm_request_threaded_irq(__scm->dev, irq, NULL, qcom_scm_irq_handler,
+ IRQF_ONESHOT, "qcom-scm", __scm);
+- if (ret < 0)
+- return dev_err_probe(scm->dev, ret, "Failed to request qcom-scm irq\n");
++ if (ret < 0) {
++ dev_err_probe(scm->dev, ret, "Failed to request qcom-scm irq\n");
++ goto err;
++ }
+ }
+
+ __get_convention();
+@@ -2026,14 +2030,18 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ qcom_scm_disable_sdi();
+
+ ret = of_reserved_mem_device_init(__scm->dev);
+- if (ret && ret != -ENODEV)
+- return dev_err_probe(__scm->dev, ret,
+- "Failed to setup the reserved memory region for TZ mem\n");
++ if (ret && ret != -ENODEV) {
++ dev_err_probe(__scm->dev, ret,
++ "Failed to setup the reserved memory region for TZ mem\n");
++ goto err;
++ }
+
+ ret = qcom_tzmem_enable(__scm->dev);
+- if (ret)
+- return dev_err_probe(__scm->dev, ret,
+- "Failed to enable the TrustZone memory allocator\n");
++ if (ret) {
++ dev_err_probe(__scm->dev, ret,
++ "Failed to enable the TrustZone memory allocator\n");
++ goto err;
++ }
+
+ memset(&pool_config, 0, sizeof(pool_config));
+ pool_config.initial_size = 0;
+@@ -2041,9 +2049,11 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ pool_config.max_size = SZ_256K;
+
+ __scm->mempool = devm_qcom_tzmem_pool_new(__scm->dev, &pool_config);
+- if (IS_ERR(__scm->mempool))
+- return dev_err_probe(__scm->dev, PTR_ERR(__scm->mempool),
+- "Failed to create the SCM memory pool\n");
++ if (IS_ERR(__scm->mempool)) {
++ dev_err_probe(__scm->dev, PTR_ERR(__scm->mempool),
++ "Failed to create the SCM memory pool\n");
++ goto err;
++ }
+
+ /*
+ * Initialize the QSEECOM interface.
+@@ -2059,6 +2069,12 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ WARN(ret < 0, "failed to initialize qseecom: %d\n", ret);
+
+ return 0;
++
++err:
++ /* Paired with smp_load_acquire() in qcom_scm_is_available(). */
++ smp_store_release(&__scm, NULL);
++
++ return ret;
+ }
+
+ static void qcom_scm_shutdown(struct platform_device *pdev)
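Per the comment in the new err: label, the probe error path un-publishes the global with smp_store_release(), pairing with the smp_load_acquire() in qcom_scm_is_available() so other CPUs never observe a half-initialised device. The same publication pattern modelled with C11 atomics rather than the kernel primitives:

#include <stdatomic.h>
#include <stdio.h>

struct scm { int ready; };

static _Atomic(struct scm *) g_scm;

static void publish(struct scm *s)
{
	s->ready = 1;	/* fully initialise before publishing */
	atomic_store_explicit(&g_scm, s, memory_order_release);
}

static void unpublish(void)
{
	/* what the new err: label does on probe failure */
	atomic_store_explicit(&g_scm, NULL, memory_order_release);
}

static int is_available(void)
{
	struct scm *s = atomic_load_explicit(&g_scm, memory_order_acquire);
	return s && s->ready;	/* acquire makes ->ready = 1 visible */
}

int main(void)
{
	struct scm dev = { 0 };

	publish(&dev);
	printf("available after publish: %d\n", is_available());
	unpublish();
	printf("available after probe failure: %d\n", is_available());
	return 0;
}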
+diff --git a/drivers/gpio/gpio-idio-16.c b/drivers/gpio/gpio-idio-16.c
+index 53b1eb876a1257..2c951258929721 100644
+--- a/drivers/gpio/gpio-idio-16.c
++++ b/drivers/gpio/gpio-idio-16.c
+@@ -14,7 +14,7 @@
+
+ #include "gpio-idio-16.h"
+
+-#define DEFAULT_SYMBOL_NAMESPACE GPIO_IDIO_16
++#define DEFAULT_SYMBOL_NAMESPACE "GPIO_IDIO_16"
+
+ #define IDIO_16_DAT_BASE 0x0
+ #define IDIO_16_OUT_BASE IDIO_16_DAT_BASE
+diff --git a/drivers/gpio/gpio-mxc.c b/drivers/gpio/gpio-mxc.c
+index 4cb455b2bdee71..619b6fb9d833a4 100644
+--- a/drivers/gpio/gpio-mxc.c
++++ b/drivers/gpio/gpio-mxc.c
+@@ -490,8 +490,7 @@ static int mxc_gpio_probe(struct platform_device *pdev)
+ port->gc.request = mxc_gpio_request;
+ port->gc.free = mxc_gpio_free;
+ port->gc.to_irq = mxc_gpio_to_irq;
+- port->gc.base = (pdev->id < 0) ? of_alias_get_id(np, "gpio") * 32 :
+- pdev->id * 32;
++ port->gc.base = of_alias_get_id(np, "gpio") * 32;
+
+ err = devm_gpiochip_add_data(&pdev->dev, &port->gc, port);
+ if (err)
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 3f2d33ee20cca9..e49802f26e07f8 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -1088,7 +1088,8 @@ static int pca953x_probe(struct i2c_client *client)
+ */
+ reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_OUT_LOW);
+ if (IS_ERR(reset_gpio))
+- return PTR_ERR(reset_gpio);
++ return dev_err_probe(dev, PTR_ERR(reset_gpio),
++ "Failed to get reset gpio\n");
+ }
+
+ chip->client = client;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
+index 3bc0cbf45bc59a..a46d6dd6de32fc 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gfx_v9.c
+@@ -1133,8 +1133,7 @@ uint64_t kgd_gfx_v9_hqd_get_pq_addr(struct amdgpu_device *adev,
+ uint32_t low, high;
+ uint64_t queue_addr = 0;
+
+- if (!adev->debug_exp_resets &&
+- !adev->gfx.num_gfx_rings)
++ if (!amdgpu_gpu_recovery)
+ return 0;
+
+ kgd_gfx_v9_acquire_queue(adev, pipe_id, queue_id, inst);
+@@ -1185,6 +1184,9 @@ uint64_t kgd_gfx_v9_hqd_reset(struct amdgpu_device *adev,
+ uint32_t low, high, pipe_reset_data = 0;
+ uint64_t queue_addr = 0;
+
++ if (!amdgpu_gpu_recovery)
++ return 0;
++
+ kgd_gfx_v9_acquire_queue(adev, pipe_id, queue_id, inst);
+ amdgpu_gfx_rlc_enter_safe_mode(adev, inst);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index 9f922ec50ea2dc..ae9ca6788df78c 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -2065,6 +2065,7 @@ void amdgpu_ttm_fini(struct amdgpu_device *adev)
+ ttm_range_man_fini(&adev->mman.bdev, AMDGPU_PL_GDS);
+ ttm_range_man_fini(&adev->mman.bdev, AMDGPU_PL_GWS);
+ ttm_range_man_fini(&adev->mman.bdev, AMDGPU_PL_OA);
++ ttm_range_man_fini(&adev->mman.bdev, AMDGPU_PL_DOORBELL);
+ ttm_device_fini(&adev->mman.bdev);
+ adev->mman.initialized = false;
+ DRM_INFO("amdgpu: ttm finalized\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index e7cd51c95141e1..e2501c98e107d3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -7251,10 +7251,6 @@ static int gfx_v9_0_reset_kcq(struct amdgpu_ring *ring,
+ unsigned long flags;
+ int i, r;
+
+- if (!adev->debug_exp_resets &&
+- !adev->gfx.num_gfx_rings)
+- return -EINVAL;
+-
+ if (amdgpu_sriov_vf(adev))
+ return -EINVAL;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+index ffdb966c4127ee..5dc3454d7d3610 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_4_3.c
+@@ -3062,9 +3062,6 @@ static void gfx_v9_4_3_ring_soft_recovery(struct amdgpu_ring *ring,
+ struct amdgpu_device *adev = ring->adev;
+ uint32_t value = 0;
+
+- if (!adev->debug_exp_resets)
+- return;
+-
+ value = REG_SET_FIELD(value, SQ_CMD, CMD, 0x03);
+ value = REG_SET_FIELD(value, SQ_CMD, MODE, 0x01);
+ value = REG_SET_FIELD(value, SQ_CMD, CHECK_VMID, 1);
+@@ -3580,9 +3577,6 @@ static int gfx_v9_4_3_reset_kcq(struct amdgpu_ring *ring,
+ unsigned long flags;
+ int r;
+
+- if (!adev->debug_exp_resets)
+- return -EINVAL;
+-
+ if (amdgpu_sriov_vf(adev))
+ return -EINVAL;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+index 6fca2915ea8fd5..84c6b0f5c4c0b2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v4_0_3.c
+@@ -943,6 +943,8 @@ static int vcn_v4_0_3_start_sriov(struct amdgpu_device *adev)
+ for (i = 0; i < adev->vcn.num_vcn_inst; i++) {
+ vcn_inst = GET_INST(VCN, i);
+
++ vcn_v4_0_3_fw_shared_init(adev, vcn_inst);
++
+ memset(&header, 0, sizeof(struct mmsch_v4_0_3_init_header));
+ header.version = MMSCH_VERSION;
+ header.total_size = sizeof(struct mmsch_v4_0_3_init_header) >> 2;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+index a0bc2c0ac04d96..20ad72d1b0d9b3 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.h
+@@ -697,6 +697,8 @@ struct amdgpu_dm_connector {
+ struct drm_dp_mst_port *mst_output_port;
+ struct amdgpu_dm_connector *mst_root;
+ struct drm_dp_aux *dsc_aux;
++ uint32_t mst_local_bw;
++ uint16_t vc_full_pbn;
+ struct mutex handle_mst_msg_ready;
+
+ /* TODO see if we can merge with ddc_bus or make a dm_connector */
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 3d624ae6d9bdfe..754dbc544f03a3 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -155,6 +155,17 @@ amdgpu_dm_mst_connector_late_register(struct drm_connector *connector)
+ return 0;
+ }
+
++
++static inline void
++amdgpu_dm_mst_reset_mst_connector_setting(struct amdgpu_dm_connector *aconnector)
++{
++ aconnector->edid = NULL;
++ aconnector->dsc_aux = NULL;
++ aconnector->mst_output_port->passthrough_aux = NULL;
++ aconnector->mst_local_bw = 0;
++ aconnector->vc_full_pbn = 0;
++}
++
+ static void
+ amdgpu_dm_mst_connector_early_unregister(struct drm_connector *connector)
+ {
+@@ -182,9 +193,7 @@ amdgpu_dm_mst_connector_early_unregister(struct drm_connector *connector)
+
+ dc_sink_release(dc_sink);
+ aconnector->dc_sink = NULL;
+- aconnector->edid = NULL;
+- aconnector->dsc_aux = NULL;
+- port->passthrough_aux = NULL;
++ amdgpu_dm_mst_reset_mst_connector_setting(aconnector);
+ }
+
+ aconnector->mst_status = MST_STATUS_DEFAULT;
+@@ -500,9 +509,7 @@ dm_dp_mst_detect(struct drm_connector *connector,
+
+ dc_sink_release(aconnector->dc_sink);
+ aconnector->dc_sink = NULL;
+- aconnector->edid = NULL;
+- aconnector->dsc_aux = NULL;
+- port->passthrough_aux = NULL;
++ amdgpu_dm_mst_reset_mst_connector_setting(aconnector);
+
+ amdgpu_dm_set_mst_status(&aconnector->mst_status,
+ MST_REMOTE_EDID | MST_ALLOCATE_NEW_PAYLOAD | MST_CLEAR_ALLOCATED_PAYLOAD,
+@@ -1815,9 +1822,18 @@ enum dc_status dm_dp_mst_is_port_support_mode(
+ struct drm_dp_mst_port *immediate_upstream_port = NULL;
+ uint32_t end_link_bw = 0;
+
+- /*Get last DP link BW capability*/
+- if (dp_get_link_current_set_bw(&aconnector->mst_output_port->aux, &end_link_bw)) {
+- if (stream_kbps > end_link_bw) {
++ /*Get last DP link BW capability. Mode shall be supported by Legacy peer*/
++ if (aconnector->mst_output_port->pdt != DP_PEER_DEVICE_DP_LEGACY_CONV &&
++ aconnector->mst_output_port->pdt != DP_PEER_DEVICE_NONE) {
++ if (aconnector->vc_full_pbn != aconnector->mst_output_port->full_pbn) {
++ dp_get_link_current_set_bw(&aconnector->mst_output_port->aux, &end_link_bw);
++ aconnector->vc_full_pbn = aconnector->mst_output_port->full_pbn;
++ aconnector->mst_local_bw = end_link_bw;
++ } else {
++ end_link_bw = aconnector->mst_local_bw;
++ }
++
++ if (end_link_bw > 0 && stream_kbps > end_link_bw) {
+ DRM_DEBUG_DRIVER("MST_DSC dsc decode at last link."
+ "Mode required bw can't fit into last link\n");
+ return DC_FAIL_BANDWIDTH_VALIDATE;
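Rather than issuing a DPCD read on every mode validation, the connector now caches the last-hop link bandwidth in mst_local_bw and invalidates the cache only when the port's full_pbn changes, with the check limited to non-legacy peers. The invalidation scheme in miniature; the dpcd_read stand-in is hypothetical:

#include <stdio.h>
#include <stdint.h>

struct port_cache { uint16_t vc_full_pbn; uint32_t local_bw; };

static uint32_t dpcd_read_link_bw(void)
{
	puts("  (slow DPCD read)");
	return 8100000;	/* kbps, made up */
}

static uint32_t end_link_bw(struct port_cache *c, uint16_t full_pbn)
{
	if (c->vc_full_pbn != full_pbn) {	/* topology changed: refresh */
		c->local_bw = dpcd_read_link_bw();
		c->vc_full_pbn = full_pbn;
	}
	return c->local_bw;
}

int main(void)
{
	struct port_cache cache = { 0, 0 };

	printf("bw=%u\n", end_link_bw(&cache, 1200));	/* miss: reads DPCD */
	printf("bw=%u\n", end_link_bw(&cache, 1200));	/* hit: cached */
	printf("bw=%u\n", end_link_bw(&cache, 900));	/* miss: pbn changed */
	return 0;
}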
+diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
+index e1da48b05d0094..961d8936150ab7 100644
+--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
++++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
+@@ -194,6 +194,9 @@ void dpp_reset(struct dpp *dpp_base)
+ dpp->filter_h = NULL;
+ dpp->filter_v = NULL;
+
++ memset(&dpp_base->pos, 0, sizeof(dpp_base->pos));
++ memset(&dpp_base->att, 0, sizeof(dpp_base->att));
++
+ memset(&dpp->scl_data, 0, sizeof(dpp->scl_data));
+ memset(&dpp->pwl_data, 0, sizeof(dpp->pwl_data));
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.c
+index 22ac2b7e49aeae..da963f73829f6c 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.c
+@@ -532,6 +532,12 @@ void hubp1_dcc_control(struct hubp *hubp, bool enable,
+ SECONDARY_SURFACE_DCC_IND_64B_BLK, dcc_ind_64b_blk);
+ }
+
++void hubp_reset(struct hubp *hubp)
++{
++ memset(&hubp->pos, 0, sizeof(hubp->pos));
++ memset(&hubp->att, 0, sizeof(hubp->att));
++}
++
+ void hubp1_program_surface_config(
+ struct hubp *hubp,
+ enum surface_pixel_format format,
+@@ -1337,8 +1343,9 @@ static void hubp1_wait_pipe_read_start(struct hubp *hubp)
+
+ void hubp1_init(struct hubp *hubp)
+ {
+- //do nothing
++ hubp_reset(hubp);
+ }
++
+ static const struct hubp_funcs dcn10_hubp_funcs = {
+ .hubp_program_surface_flip_and_addr =
+ hubp1_program_surface_flip_and_addr,
+@@ -1351,6 +1358,7 @@ static const struct hubp_funcs dcn10_hubp_funcs = {
+ .hubp_set_vm_context0_settings = hubp1_set_vm_context0_settings,
+ .set_blank = hubp1_set_blank,
+ .dcc_control = hubp1_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .set_hubp_blank_en = hubp1_set_hubp_blank_en,
+ .set_cursor_attributes = hubp1_cursor_set_attributes,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h
+index 69119b2fdce23b..193e48b440ef18 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn10/dcn10_hubp.h
+@@ -746,6 +746,8 @@ void hubp1_dcc_control(struct hubp *hubp,
+ bool enable,
+ enum hubp_ind_block_size independent_64b_blks);
+
++void hubp_reset(struct hubp *hubp);
++
+ bool hubp1_program_surface_flip_and_addr(
+ struct hubp *hubp,
+ const struct dc_plane_address *address,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
+index 0637e4c552d8a2..b405fa22f87a9e 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
+@@ -1660,6 +1660,7 @@ static struct hubp_funcs dcn20_hubp_funcs = {
+ .set_blank = hubp2_set_blank,
+ .set_blank_regs = hubp2_set_blank_regs,
+ .dcc_control = hubp2_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .set_cursor_attributes = hubp2_cursor_set_attributes,
+ .set_cursor_position = hubp2_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn201/dcn201_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn201/dcn201_hubp.c
+index cd2bfcc5127650..6efcb10abf3dee 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn201/dcn201_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn201/dcn201_hubp.c
+@@ -121,6 +121,7 @@ static struct hubp_funcs dcn201_hubp_funcs = {
+ .set_cursor_position = hubp1_cursor_set_position,
+ .set_blank = hubp1_set_blank,
+ .dcc_control = hubp1_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .hubp_clk_cntl = hubp1_clk_cntl,
+ .hubp_vtg_sel = hubp1_vtg_sel,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn21/dcn21_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn21/dcn21_hubp.c
+index e13d69a22c1c7f..4e2d9d381db393 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn21/dcn21_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn21/dcn21_hubp.c
+@@ -811,6 +811,8 @@ static void hubp21_init(struct hubp *hubp)
+ struct dcn21_hubp *hubp21 = TO_DCN21_HUBP(hubp);
+ //hubp[i].HUBPREQ_DEBUG.HUBPREQ_DEBUG[26] = 1;
+ REG_WRITE(HUBPREQ_DEBUG, 1 << 26);
++
++ hubp_reset(hubp);
+ }
+ static struct hubp_funcs dcn21_hubp_funcs = {
+ .hubp_enable_tripleBuffer = hubp2_enable_triplebuffer,
+@@ -823,6 +825,7 @@ static struct hubp_funcs dcn21_hubp_funcs = {
+ .hubp_set_vm_system_aperture_settings = hubp21_set_vm_system_aperture_settings,
+ .set_blank = hubp1_set_blank,
+ .dcc_control = hubp1_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = hubp21_set_viewport,
+ .set_cursor_attributes = hubp2_cursor_set_attributes,
+ .set_cursor_position = hubp1_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
+index 60a64d29035274..c55b1b8be8ffd6 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
+@@ -483,6 +483,8 @@ void hubp3_init(struct hubp *hubp)
+ struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+ //hubp[i].HUBPREQ_DEBUG.HUBPREQ_DEBUG[26] = 1;
+ REG_WRITE(HUBPREQ_DEBUG, 1 << 26);
++
++ hubp_reset(hubp);
+ }
+
+ static struct hubp_funcs dcn30_hubp_funcs = {
+@@ -497,6 +499,7 @@ static struct hubp_funcs dcn30_hubp_funcs = {
+ .set_blank = hubp2_set_blank,
+ .set_blank_regs = hubp2_set_blank_regs,
+ .dcc_control = hubp3_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .set_cursor_attributes = hubp2_cursor_set_attributes,
+ .set_cursor_position = hubp2_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
+index 8394e8c069199f..a65a0ddee64672 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
+@@ -79,6 +79,7 @@ static struct hubp_funcs dcn31_hubp_funcs = {
+ .hubp_set_vm_system_aperture_settings = hubp3_set_vm_system_aperture_settings,
+ .set_blank = hubp2_set_blank,
+ .dcc_control = hubp3_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .set_cursor_attributes = hubp2_cursor_set_attributes,
+ .set_cursor_position = hubp2_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
+index ca5b4b28a66441..45023fa9b708dc 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
+@@ -181,6 +181,7 @@ static struct hubp_funcs dcn32_hubp_funcs = {
+ .set_blank = hubp2_set_blank,
+ .set_blank_regs = hubp2_set_blank_regs,
+ .dcc_control = hubp3_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .set_cursor_attributes = hubp32_cursor_set_attributes,
+ .set_cursor_position = hubp2_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn35/dcn35_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn35/dcn35_hubp.c
+index d1f05b82b3dd5c..e7625290c0e467 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn35/dcn35_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn35/dcn35_hubp.c
+@@ -199,6 +199,7 @@ static struct hubp_funcs dcn35_hubp_funcs = {
+ .hubp_set_vm_system_aperture_settings = hubp3_set_vm_system_aperture_settings,
+ .set_blank = hubp2_set_blank,
+ .dcc_control = hubp3_dcc_control,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = min_set_viewport,
+ .set_cursor_attributes = hubp2_cursor_set_attributes,
+ .set_cursor_position = hubp2_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
+index b1ebf5053b4fc3..2d52100510f05f 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
+@@ -141,7 +141,7 @@ void hubp401_update_mall_sel(struct hubp *hubp, uint32_t mall_sel, bool c_cursor
+
+ void hubp401_init(struct hubp *hubp)
+ {
+- //For now nothing to do, HUBPREQ_DEBUG_DB register is removed on DCN4x.
++ hubp_reset(hubp);
+ }
+
+ void hubp401_vready_at_or_After_vsync(struct hubp *hubp,
+@@ -974,6 +974,7 @@ static struct hubp_funcs dcn401_hubp_funcs = {
+ .hubp_set_vm_system_aperture_settings = hubp3_set_vm_system_aperture_settings,
+ .set_blank = hubp2_set_blank,
+ .set_blank_regs = hubp2_set_blank_regs,
++ .hubp_reset = hubp_reset,
+ .mem_program_viewport = hubp401_set_viewport,
+ .set_cursor_attributes = hubp32_cursor_set_attributes,
+ .set_cursor_position = hubp401_cursor_set_position,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+index a6a1db5ba8bad1..fd0530251c6e5a 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+@@ -1286,6 +1286,7 @@ void dcn10_plane_atomic_power_down(struct dc *dc,
+ if (hws->funcs.hubp_pg_control)
+ hws->funcs.hubp_pg_control(hws, hubp->inst, false);
+
++ hubp->funcs->hubp_reset(hubp);
+ dpp->funcs->dpp_reset(dpp);
+
+ REG_SET(DC_IP_REQUEST_CNTL, 0,
+@@ -1447,6 +1448,7 @@ void dcn10_init_pipes(struct dc *dc, struct dc_state *context)
+ /* Disable on the current state so the new one isn't cleared. */
+ pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[i];
+
++ hubp->funcs->hubp_reset(hubp);
+ dpp->funcs->dpp_reset(dpp);
+
+ pipe_ctx->stream_res.tg = tg;
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index bd309dbdf7b2a7..f6b17bd3f714fa 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -787,6 +787,7 @@ void dcn35_init_pipes(struct dc *dc, struct dc_state *context)
+ /* Disable on the current state so the new one isn't cleared. */
+ pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[i];
+
++ hubp->funcs->hubp_reset(hubp);
+ dpp->funcs->dpp_reset(dpp);
+
+ pipe_ctx->stream_res.tg = tg;
+@@ -940,6 +941,7 @@ void dcn35_plane_atomic_disable(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ /*to do, need to support both case*/
+ hubp->power_gated = true;
+
++ hubp->funcs->hubp_reset(hubp);
+ dpp->funcs->dpp_reset(dpp);
+
+ pipe_ctx->stream = NULL;
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
+index 16580d62427891..eec16b0a199dd4 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/hubp.h
+@@ -152,6 +152,8 @@ struct hubp_funcs {
+ void (*dcc_control)(struct hubp *hubp, bool enable,
+ enum hubp_ind_block_size blk_size);
+
++ void (*hubp_reset)(struct hubp *hubp);
++
+ void (*mem_program_viewport)(
+ struct hubp *hubp,
+ const struct rect *viewport,
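
The hubp.h hunk just above is the hinge of this run of hubp changes: a hubp_reset callback is added to struct hubp_funcs, each DCN generation's ops table is pointed at the shared implementation, and the hwseq code then invokes it through the table during power-down and pipe init. A minimal sketch of the ops-table pattern (illustrative names, not the real DC types):

    struct hubp;

    struct hubp_funcs_sketch {
            /* new hook; one shared implementation is wired into each table */
            void (*hubp_reset)(struct hubp *hubp);
    };

    static void hubp_reset_impl(struct hubp *hubp)
    {
            /* program the pipe back to its power-on defaults */
    }

    static const struct hubp_funcs_sketch dcn_ops = {
            .hubp_reset = hubp_reset_impl,
    };

    static void plane_power_down(struct hubp *hubp,
                                 const struct hubp_funcs_sketch *funcs)
    {
            if (funcs->hubp_reset)  /* guard tables that lack the hook */
                    funcs->hubp_reset(hubp);
    }
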
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
+index b56298d9da98f3..5c54c9fd446196 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/ppatomctrl.c
+@@ -1420,6 +1420,8 @@ int atomctrl_get_smc_sclk_range_table(struct pp_hwmgr *hwmgr, struct pp_atom_ctr
+ GetIndexIntoMasterTable(DATA, SMU_Info),
+ &size, &frev, &crev);
+
++ if (!psmu_info)
++ return -EINVAL;
+
+ for (i = 0; i < psmu_info->ucSclkEntryNum; i++) {
+ table->entry[i].ucVco_setting = psmu_info->asSclkFcwRangeEntry[i].ucVco_setting;
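
The ppatomctrl hunk inserts a NULL check between the firmware-table lookup and the loop that walks the table: the lookup can legitimately fail, and the old code dereferenced psmu_info unconditionally. A standalone sketch of the idiom, with hypothetical names standing in for the atom-table helpers:

    #include <errno.h>
    #include <stddef.h>

    struct fw_table { int nentries; };

    /* stands in for the firmware lookup, which may return NULL */
    static struct fw_table *lookup_fw_table(void)
    {
            return NULL;    /* e.g. table absent on this board */
    }

    static int walk_fw_table(void)
    {
            struct fw_table *t = lookup_fw_table();

            if (!t)         /* report failure instead of dereferencing NULL */
                    return -EINVAL;

            return t->nentries;
    }
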
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_powertune.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_powertune.c
+index 3007b054c873c9..776d58ea63ae90 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_powertune.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_powertune.c
+@@ -1120,13 +1120,14 @@ static int vega10_enable_se_edc_force_stall_config(struct pp_hwmgr *hwmgr)
+ result = vega10_program_didt_config_registers(hwmgr, SEEDCForceStallPatternConfig_Vega10, VEGA10_CONFIGREG_DIDT);
+ result |= vega10_program_didt_config_registers(hwmgr, SEEDCCtrlForceStallConfig_Vega10, VEGA10_CONFIGREG_DIDT);
+ if (0 != result)
+- return result;
++ goto exit_safe_mode;
+
+ vega10_didt_set_mask(hwmgr, false);
+
++exit_safe_mode:
+ amdgpu_gfx_rlc_exit_safe_mode(adev, 0);
+
+- return 0;
++ return result;
+ }
+
+ static int vega10_disable_se_edc_force_stall_config(struct pp_hwmgr *hwmgr)
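
The vega10_powertune hunk turns an early "return result" into a jump to a common unwind label, so amdgpu_gfx_rlc_exit_safe_mode() now runs on the failure path too and the saved error code is still what gets returned. The shape of the idiom, as a sketch with stubbed helpers:

    static void enter_safe_mode(void) { /* acquire */ }
    static void leave_safe_mode(void) { /* release */ }
    static void set_mask(void) { }
    static int program_registers(void) { return 0; /* may fail */ }

    static int enable_config(void)
    {
            int result;

            enter_safe_mode();

            result = program_registers();
            if (result)
                    goto exit_safe_mode;  /* was: return result, leaking safe mode */

            set_mask();

    exit_safe_mode:
            leave_safe_mode();            /* runs on success and failure alike */
            return result;
    }
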
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index 008d86cc562af7..cf891e7677c0e2 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -300,7 +300,7 @@
+ #define MAX_CR_LEVEL 0x03
+ #define MAX_EQ_LEVEL 0x03
+ #define AUX_WAIT_TIMEOUT_MS 15
+-#define AUX_FIFO_MAX_SIZE 32
++#define AUX_FIFO_MAX_SIZE 16
+ #define PIXEL_CLK_DELAY 1
+ #define PIXEL_CLK_INVERSE 0
+ #define ADJUST_PHASE_THRESHOLD 80000
+diff --git a/drivers/gpu/drm/display/drm_hdmi_state_helper.c b/drivers/gpu/drm/display/drm_hdmi_state_helper.c
+index feb7a3a759811a..936a8f95d80f7e 100644
+--- a/drivers/gpu/drm/display/drm_hdmi_state_helper.c
++++ b/drivers/gpu/drm/display/drm_hdmi_state_helper.c
+@@ -347,6 +347,8 @@ static int hdmi_generate_avi_infoframe(const struct drm_connector *connector,
+ is_limited_range ? HDMI_QUANTIZATION_RANGE_LIMITED : HDMI_QUANTIZATION_RANGE_FULL;
+ int ret;
+
++ infoframe->set = false;
++
+ ret = drm_hdmi_avi_infoframe_from_display_mode(frame, connector, mode);
+ if (ret)
+ return ret;
+@@ -376,6 +378,8 @@ static int hdmi_generate_spd_infoframe(const struct drm_connector *connector,
+ &infoframe->data.spd;
+ int ret;
+
++ infoframe->set = false;
++
+ ret = hdmi_spd_infoframe_init(frame,
+ connector->hdmi.vendor,
+ connector->hdmi.product);
+@@ -398,6 +402,8 @@ static int hdmi_generate_hdr_infoframe(const struct drm_connector *connector,
+ &infoframe->data.drm;
+ int ret;
+
++ infoframe->set = false;
++
+ if (connector->max_bpc < 10)
+ return 0;
+
+@@ -425,6 +431,8 @@ static int hdmi_generate_hdmi_vendor_infoframe(const struct drm_connector *conne
+ &infoframe->data.vendor.hdmi;
+ int ret;
+
++ infoframe->set = false;
++
+ if (!info->has_hdmi_infoframe)
+ return 0;
+
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+index 5c0c9d4e3be183..d3f6df047f5a2b 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c
+@@ -342,6 +342,7 @@ void *etnaviv_gem_vmap(struct drm_gem_object *obj)
+ static void *etnaviv_gem_vmap_impl(struct etnaviv_gem_object *obj)
+ {
+ struct page **pages;
++ pgprot_t prot;
+
+ lockdep_assert_held(&obj->lock);
+
+@@ -349,8 +350,19 @@ static void *etnaviv_gem_vmap_impl(struct etnaviv_gem_object *obj)
+ if (IS_ERR(pages))
+ return NULL;
+
+- return vmap(pages, obj->base.size >> PAGE_SHIFT,
+- VM_MAP, pgprot_writecombine(PAGE_KERNEL));
++ switch (obj->flags & ETNA_BO_CACHE_MASK) {
++ case ETNA_BO_CACHED:
++ prot = PAGE_KERNEL;
++ break;
++ case ETNA_BO_UNCACHED:
++ prot = pgprot_noncached(PAGE_KERNEL);
++ break;
++ case ETNA_BO_WC:
++ default:
++ prot = pgprot_writecombine(PAGE_KERNEL);
++ }
++
++ return vmap(pages, obj->base.size >> PAGE_SHIFT, VM_MAP, prot);
+ }
+
+ static inline enum dma_data_direction etnaviv_op_to_dma_dir(u32 op)
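
The etnaviv hunk stops hard-coding write-combine for kernel-side vmaps and derives the page protection from the BO's cache flags instead, so a cached BO is mapped cacheable and an uncached one device-like. A condensed kernel-style sketch of the selection, using the driver's uapi ETNA_* flags:

    #include <linux/types.h>
    #include <linux/pgtable.h>
    #include <drm/etnaviv_drm.h>

    static pgprot_t etnaviv_vmap_prot(u32 flags)
    {
            switch (flags & ETNA_BO_CACHE_MASK) {
            case ETNA_BO_CACHED:
                    return PAGE_KERNEL;                      /* cacheable */
            case ETNA_BO_UNCACHED:
                    return pgprot_noncached(PAGE_KERNEL);    /* device-like */
            case ETNA_BO_WC:
            default:
                    return pgprot_writecombine(PAGE_KERNEL); /* old behaviour */
            }
    }
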
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index 14db7376c712d1..e386b059187acf 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -1603,7 +1603,9 @@ int a6xx_gmu_wrapper_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
+
+ gmu->dev = &pdev->dev;
+
+- of_dma_configure(gmu->dev, node, true);
++ ret = of_dma_configure(gmu->dev, node, true);
++ if (ret)
++ return ret;
+
+ pm_runtime_enable(gmu->dev);
+
+@@ -1668,7 +1670,9 @@ int a6xx_gmu_init(struct a6xx_gpu *a6xx_gpu, struct device_node *node)
+
+ gmu->dev = &pdev->dev;
+
+- of_dma_configure(gmu->dev, node, true);
++ ret = of_dma_configure(gmu->dev, node, true);
++ if (ret)
++ return ret;
+
+ /* Fow now, don't do anything fancy until we get our feet under us */
+ gmu->idle_level = GMU_IDLE_STATE_ACTIVE;
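
Both GMU init paths now propagate the return value of of_dma_configure() instead of discarding it; the call can fail (for instance with -EPROBE_DEFER before the IOMMU is ready), and ignoring that left the GMU running with unconfigured DMA. The general shape, with hypothetical stand-ins for the real calls:

    struct device;

    static int configure_dma(struct device *dev);   /* of_dma_configure() stand-in */
    static void enable_runtime_pm(struct device *dev);

    static int gmu_init_sketch(struct device *dev)
    {
            int ret;

            ret = configure_dma(dev);   /* result was previously thrown away */
            if (ret)
                    return ret;         /* e.g. -EPROBE_DEFER bubbles up */

            enable_runtime_pm(dev);
            return 0;
    }
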
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_10_0_sm8650.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_10_0_sm8650.h
+index eb5dfff2ec4f48..e187e7b1cef167 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_10_0_sm8650.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_10_0_sm8650.h
+@@ -160,6 +160,7 @@ static const struct dpu_lm_cfg sm8650_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x400,
+@@ -167,6 +168,7 @@ static const struct dpu_lm_cfg sm8650_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x400,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_1_sdm670.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_1_sdm670.h
+index cbbdaebe357ec4..daef07924886a5 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_1_sdm670.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_4_1_sdm670.h
+@@ -65,6 +65,54 @@ static const struct dpu_sspp_cfg sdm670_sspp[] = {
+ },
+ };
+
++static const struct dpu_lm_cfg sdm670_lm[] = {
++ {
++ .name = "lm_0", .id = LM_0,
++ .base = 0x44000, .len = 0x320,
++ .features = MIXER_SDM845_MASK,
++ .sblk = &sdm845_lm_sblk,
++ .lm_pair = LM_1,
++ .pingpong = PINGPONG_0,
++ .dspp = DSPP_0,
++ }, {
++ .name = "lm_1", .id = LM_1,
++ .base = 0x45000, .len = 0x320,
++ .features = MIXER_SDM845_MASK,
++ .sblk = &sdm845_lm_sblk,
++ .lm_pair = LM_0,
++ .pingpong = PINGPONG_1,
++ .dspp = DSPP_1,
++ }, {
++ .name = "lm_2", .id = LM_2,
++ .base = 0x46000, .len = 0x320,
++ .features = MIXER_SDM845_MASK,
++ .sblk = &sdm845_lm_sblk,
++ .lm_pair = LM_5,
++ .pingpong = PINGPONG_2,
++ }, {
++ .name = "lm_5", .id = LM_5,
++ .base = 0x49000, .len = 0x320,
++ .features = MIXER_SDM845_MASK,
++ .sblk = &sdm845_lm_sblk,
++ .lm_pair = LM_2,
++ .pingpong = PINGPONG_3,
++ },
++};
++
++static const struct dpu_dspp_cfg sdm670_dspp[] = {
++ {
++ .name = "dspp_0", .id = DSPP_0,
++ .base = 0x54000, .len = 0x1800,
++ .features = DSPP_SC7180_MASK,
++ .sblk = &sdm845_dspp_sblk,
++ }, {
++ .name = "dspp_1", .id = DSPP_1,
++ .base = 0x56000, .len = 0x1800,
++ .features = DSPP_SC7180_MASK,
++ .sblk = &sdm845_dspp_sblk,
++ },
++};
++
+ static const struct dpu_dsc_cfg sdm670_dsc[] = {
+ {
+ .name = "dsc_0", .id = DSC_0,
+@@ -88,8 +136,10 @@ const struct dpu_mdss_cfg dpu_sdm670_cfg = {
+ .ctl = sdm845_ctl,
+ .sspp_count = ARRAY_SIZE(sdm670_sspp),
+ .sspp = sdm670_sspp,
+- .mixer_count = ARRAY_SIZE(sdm845_lm),
+- .mixer = sdm845_lm,
++ .mixer_count = ARRAY_SIZE(sdm670_lm),
++ .mixer = sdm670_lm,
++ .dspp_count = ARRAY_SIZE(sdm670_dspp),
++ .dspp = sdm670_dspp,
+ .pingpong_count = ARRAY_SIZE(sdm845_pp),
+ .pingpong = sdm845_pp,
+ .dsc_count = ARRAY_SIZE(sdm670_dsc),
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+index 6ccfde82fecdb4..421afacb724803 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+@@ -164,6 +164,7 @@ static const struct dpu_lm_cfg sm8150_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x320,
+@@ -171,6 +172,7 @@ static const struct dpu_lm_cfg sm8150_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+index bab19ddd1d4f97..641023b102bf59 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+@@ -163,6 +163,7 @@ static const struct dpu_lm_cfg sc8180x_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x320,
+@@ -170,6 +171,7 @@ static const struct dpu_lm_cfg sc8180x_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+index a57d50b1f02807..e8916ae826a6da 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_6_0_sm8250.h
+@@ -162,6 +162,7 @@ static const struct dpu_lm_cfg sm8250_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x320,
+@@ -169,6 +170,7 @@ static const struct dpu_lm_cfg sm8250_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
+index aced16e350daa1..f7c08e89c88203 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_7_0_sm8350.h
+@@ -162,6 +162,7 @@ static const struct dpu_lm_cfg sm8350_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x320,
+@@ -169,6 +170,7 @@ static const struct dpu_lm_cfg sm8350_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
+index ad48defa154f7d..a1dbbf5c652ff9 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_0_sm8550.h
+@@ -160,6 +160,7 @@ static const struct dpu_lm_cfg sm8550_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x320,
+@@ -167,6 +168,7 @@ static const struct dpu_lm_cfg sm8550_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
+index a3e60ac70689e7..e084406ebb0711 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
+@@ -159,6 +159,7 @@ static const struct dpu_lm_cfg x1e80100_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_3,
+ .pingpong = PINGPONG_2,
++ .dspp = DSPP_2,
+ }, {
+ .name = "lm_3", .id = LM_3,
+ .base = 0x47000, .len = 0x320,
+@@ -166,6 +167,7 @@ static const struct dpu_lm_cfg x1e80100_lm[] = {
+ .sblk = &sdm845_lm_sblk,
+ .lm_pair = LM_2,
+ .pingpong = PINGPONG_3,
++ .dspp = DSPP_3,
+ }, {
+ .name = "lm_4", .id = LM_4,
+ .base = 0x48000, .len = 0x320,
+diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_lcdc_encoder.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_lcdc_encoder.c
+index 576995ddce37e9..8bbc7fb881d599 100644
+--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_lcdc_encoder.c
++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_lcdc_encoder.c
+@@ -389,7 +389,7 @@ struct drm_encoder *mdp4_lcdc_encoder_init(struct drm_device *dev,
+
+ /* TODO: different regulators in other cases? */
+ mdp4_lcdc_encoder->regs[0].supply = "lvds-vccs-3p3v";
+- mdp4_lcdc_encoder->regs[1].supply = "lvds-vccs-3p3v";
++ mdp4_lcdc_encoder->regs[1].supply = "lvds-pll-vdda";
+ mdp4_lcdc_encoder->regs[2].supply = "lvds-vdda";
+
+ ret = devm_regulator_bulk_get(dev->dev,
+diff --git a/drivers/gpu/drm/msm/dp/dp_audio.c b/drivers/gpu/drm/msm/dp/dp_audio.c
+index a599fc5d63c524..f4e01da5c55b00 100644
+--- a/drivers/gpu/drm/msm/dp/dp_audio.c
++++ b/drivers/gpu/drm/msm/dp/dp_audio.c
+@@ -329,10 +329,10 @@ static void dp_audio_safe_to_exit_level(struct dp_audio_private *audio)
+ safe_to_exit_level = 5;
+ break;
+ default:
++ safe_to_exit_level = 14;
+ drm_dbg_dp(audio->drm_dev,
+ "setting the default safe_to_exit_level = %u\n",
+ safe_to_exit_level);
+- safe_to_exit_level = 14;
+ break;
+ }
+
+diff --git a/drivers/gpu/drm/msm/hdmi/hdmi_phy_8998.c b/drivers/gpu/drm/msm/hdmi/hdmi_phy_8998.c
+index e6ffaf92d26d32..1c4211cfa2a476 100644
+--- a/drivers/gpu/drm/msm/hdmi/hdmi_phy_8998.c
++++ b/drivers/gpu/drm/msm/hdmi/hdmi_phy_8998.c
+@@ -137,7 +137,7 @@ static inline u32 pll_get_integloop_gain(u64 frac_start, u64 bclk, u32 ref_clk,
+
+ base <<= (digclk_divsel == 2 ? 1 : 0);
+
+- return (base <= 2046 ? base : 2046);
++ return base;
+ }
+
+ static inline u32 pll_get_pll_cmp(u64 fdata, unsigned long ref_clk)
+diff --git a/drivers/gpu/drm/msm/msm_kms.c b/drivers/gpu/drm/msm/msm_kms.c
+index af6a6fcb11736f..6749f0fbca96d5 100644
+--- a/drivers/gpu/drm/msm/msm_kms.c
++++ b/drivers/gpu/drm/msm/msm_kms.c
+@@ -244,7 +244,6 @@ int msm_drm_kms_init(struct device *dev, const struct drm_driver *drv)
+ ret = priv->kms_init(ddev);
+ if (ret) {
+ DRM_DEV_ERROR(dev, "failed to load kms\n");
+- priv->kms = NULL;
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/panthor/panthor_device.c b/drivers/gpu/drm/panthor/panthor_device.c
+index 6fbff516c1c1f0..01dff89bed4e1d 100644
+--- a/drivers/gpu/drm/panthor/panthor_device.c
++++ b/drivers/gpu/drm/panthor/panthor_device.c
+@@ -445,8 +445,8 @@ int panthor_device_resume(struct device *dev)
+ drm_dev_enter(&ptdev->base, &cookie)) {
+ panthor_gpu_resume(ptdev);
+ panthor_mmu_resume(ptdev);
+- ret = drm_WARN_ON(&ptdev->base, panthor_fw_resume(ptdev));
+- if (!ret) {
++ ret = panthor_fw_resume(ptdev);
++ if (!drm_WARN_ON(&ptdev->base, ret)) {
+ panthor_sched_resume(ptdev);
+ } else {
+ panthor_mmu_suspend(ptdev);
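
The panthor hunk fixes a subtle misuse: drm_WARN_ON(cond) evaluates to whether the warning fired (0 or 1), so assigning its result to ret destroyed the real error code from panthor_fw_resume() before the branch. The fix calls the function first and only wraps the test in the WARN. A sketch of the same discipline using the plain kernel WARN_ON(), with stubbed helpers:

    #include <linux/bug.h>

    static int do_resume(void);
    static void start_scheduler(void);
    static void undo_partial_resume(void);

    static int resume_sketch(void)
    {
            int ret = do_resume();  /* keep the actual -Exxx value */

            if (!WARN_ON(ret))      /* warn loudly, but branch on ret */
                    start_scheduler();
            else
                    undo_partial_resume();

            return ret;             /* callers still see the real code */
    }
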
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+index 9873172e3fd331..5880d87fe6b3aa 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
+@@ -33,7 +33,6 @@
+ #include <uapi/linux/videodev2.h>
+ #include <dt-bindings/soc/rockchip,vop2.h>
+
+-#include "rockchip_drm_drv.h"
+ #include "rockchip_drm_gem.h"
+ #include "rockchip_drm_vop2.h"
+ #include "rockchip_rgb.h"
+@@ -550,6 +549,25 @@ static bool rockchip_vop2_mod_supported(struct drm_plane *plane, u32 format,
+ if (modifier == DRM_FORMAT_MOD_INVALID)
+ return false;
+
++ if (vop2->data->soc_id == 3568 || vop2->data->soc_id == 3566) {
++ if (vop2_cluster_window(win)) {
++ if (modifier == DRM_FORMAT_MOD_LINEAR) {
++ drm_dbg_kms(vop2->drm,
++ "Cluster window only supports format with afbc\n");
++ return false;
++ }
++ }
++ }
++
++ if (format == DRM_FORMAT_XRGB2101010 || format == DRM_FORMAT_XBGR2101010) {
++ if (vop2->data->soc_id == 3588) {
++ if (!rockchip_afbc(plane, modifier)) {
++ drm_dbg_kms(vop2->drm, "Only support 32 bpp format with afbc\n");
++ return false;
++ }
++ }
++ }
++
+ if (modifier == DRM_FORMAT_MOD_LINEAR)
+ return true;
+
+@@ -1320,6 +1338,12 @@ static void vop2_plane_atomic_update(struct drm_plane *plane,
+ &fb->format->format,
+ afbc_en ? "AFBC" : "", &yrgb_mst);
+
++ if (vop2->data->soc_id > 3568) {
++ vop2_win_write(win, VOP2_WIN_AXI_BUS_ID, win->data->axi_bus_id);
++ vop2_win_write(win, VOP2_WIN_AXI_YRGB_R_ID, win->data->axi_yrgb_r_id);
++ vop2_win_write(win, VOP2_WIN_AXI_UV_R_ID, win->data->axi_uv_r_id);
++ }
++
+ if (vop2_cluster_window(win))
+ vop2_win_write(win, VOP2_WIN_AFBC_HALF_BLOCK_EN, half_block_en);
+
+@@ -1721,9 +1745,9 @@ static unsigned long rk3588_calc_cru_cfg(struct vop2_video_port *vp, int id,
+ else
+ dclk_out_rate = v_pixclk >> 2;
+
+- dclk_rate = rk3588_calc_dclk(dclk_out_rate, 600000);
++ dclk_rate = rk3588_calc_dclk(dclk_out_rate, 600000000);
+ if (!dclk_rate) {
+- drm_err(vop2->drm, "DP dclk_out_rate out of range, dclk_out_rate: %ld KHZ\n",
++ drm_err(vop2->drm, "DP dclk_out_rate out of range, dclk_out_rate: %ld Hz\n",
+ dclk_out_rate);
+ return 0;
+ }
+@@ -1738,9 +1762,9 @@ static unsigned long rk3588_calc_cru_cfg(struct vop2_video_port *vp, int id,
+ * dclk_rate = N * dclk_core_rate N = (1,2,4 ),
+ * we get a little factor here
+ */
+- dclk_rate = rk3588_calc_dclk(dclk_out_rate, 600000);
++ dclk_rate = rk3588_calc_dclk(dclk_out_rate, 600000000);
+ if (!dclk_rate) {
+- drm_err(vop2->drm, "MIPI dclk out of range, dclk_out_rate: %ld KHZ\n",
++ drm_err(vop2->drm, "MIPI dclk out of range, dclk_out_rate: %ld Hz\n",
+ dclk_out_rate);
+ return 0;
+ }
+@@ -2159,7 +2183,6 @@ static int vop2_find_start_mixer_id_for_vp(struct vop2 *vop2, u8 port_id)
+
+ static void vop2_setup_cluster_alpha(struct vop2 *vop2, struct vop2_win *main_win)
+ {
+- u32 offset = (main_win->data->phys_id * 0x10);
+ struct vop2_alpha_config alpha_config;
+ struct vop2_alpha alpha;
+ struct drm_plane_state *bottom_win_pstate;
+@@ -2167,6 +2190,7 @@ static void vop2_setup_cluster_alpha(struct vop2 *vop2, struct vop2_win *main_wi
+ u16 src_glb_alpha_val, dst_glb_alpha_val;
+ bool premulti_en = false;
+ bool swap = false;
++ u32 offset = 0;
+
+ /* At one win mode, win0 is dst/bottom win, and win1 is a all zero src/top win */
+ bottom_win_pstate = main_win->base.state;
+@@ -2185,6 +2209,22 @@ static void vop2_setup_cluster_alpha(struct vop2 *vop2, struct vop2_win *main_wi
+ vop2_parse_alpha(&alpha_config, &alpha);
+
+ alpha.src_color_ctrl.bits.src_dst_swap = swap;
++
++ switch (main_win->data->phys_id) {
++ case ROCKCHIP_VOP2_CLUSTER0:
++ offset = 0x0;
++ break;
++ case ROCKCHIP_VOP2_CLUSTER1:
++ offset = 0x10;
++ break;
++ case ROCKCHIP_VOP2_CLUSTER2:
++ offset = 0x20;
++ break;
++ case ROCKCHIP_VOP2_CLUSTER3:
++ offset = 0x30;
++ break;
++ }
++
+ vop2_writel(vop2, RK3568_CLUSTER0_MIX_SRC_COLOR_CTRL + offset,
+ alpha.src_color_ctrl.val);
+ vop2_writel(vop2, RK3568_CLUSTER0_MIX_DST_COLOR_CTRL + offset,
+@@ -2232,6 +2272,12 @@ static void vop2_setup_alpha(struct vop2_video_port *vp)
+ struct vop2_win *win = to_vop2_win(plane);
+ int zpos = plane->state->normalized_zpos;
+
++ /*
++ * Need to configure alpha only from the second layer up.
++ */
++ if (zpos == 0)
++ continue;
++
+ if (plane->state->pixel_blend_mode == DRM_MODE_BLEND_PREMULTI)
+ premulti_en = 1;
+ else
+@@ -2308,7 +2354,10 @@ static void vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ struct drm_plane *plane;
+ u32 layer_sel = 0;
+ u32 port_sel;
+- unsigned int nlayer, ofs;
++ u8 layer_id;
++ u8 old_layer_id;
++ u8 layer_sel_id;
++ unsigned int ofs;
+ u32 ovl_ctrl;
+ int i;
+ struct vop2_video_port *vp0 = &vop2->vps[0];
+@@ -2352,9 +2401,30 @@ static void vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ for (i = 0; i < vp->id; i++)
+ ofs += vop2->vps[i].nlayers;
+
+- nlayer = 0;
+ drm_atomic_crtc_for_each_plane(plane, &vp->crtc) {
+ struct vop2_win *win = to_vop2_win(plane);
++ struct vop2_win *old_win;
++
++ layer_id = (u8)(plane->state->normalized_zpos + ofs);
++
++ /*
++ * Find the layer this win was bound to in the old state.
++ */
++ for (old_layer_id = 0; old_layer_id < vop2->data->win_size; old_layer_id++) {
++ layer_sel_id = (layer_sel >> (4 * old_layer_id)) & 0xf;
++ if (layer_sel_id == win->data->layer_sel_id)
++ break;
++ }
++
++ /*
++ * Find the win bound to this layer in the old state.
++ */
++ for (i = 0; i < vop2->data->win_size; i++) {
++ old_win = &vop2->win[i];
++ layer_sel_id = (layer_sel >> (4 * layer_id)) & 0xf;
++ if (layer_sel_id == old_win->data->layer_sel_id)
++ break;
++ }
+
+ switch (win->data->phys_id) {
+ case ROCKCHIP_VOP2_CLUSTER0:
+@@ -2399,17 +2469,14 @@ static void vop2_setup_layer_mixer(struct vop2_video_port *vp)
+ break;
+ }
+
+- layer_sel &= ~RK3568_OVL_LAYER_SEL__LAYER(plane->state->normalized_zpos + ofs,
+- 0x7);
+- layer_sel |= RK3568_OVL_LAYER_SEL__LAYER(plane->state->normalized_zpos + ofs,
+- win->data->layer_sel_id);
+- nlayer++;
+- }
+-
+- /* configure unused layers to 0x5 (reserved) */
+- for (; nlayer < vp->nlayers; nlayer++) {
+- layer_sel &= ~RK3568_OVL_LAYER_SEL__LAYER(nlayer + ofs, 0x7);
+- layer_sel |= RK3568_OVL_LAYER_SEL__LAYER(nlayer + ofs, 5);
++ layer_sel &= ~RK3568_OVL_LAYER_SEL__LAYER(layer_id, 0x7);
++ layer_sel |= RK3568_OVL_LAYER_SEL__LAYER(layer_id, win->data->layer_sel_id);
++ /*
++ * When we bind a window from layerM to layerN, we also need to move the old
++ * window on layerN to layerM to avoid one window being selected by two or more layers.
++ */
++ layer_sel &= ~RK3568_OVL_LAYER_SEL__LAYER(old_layer_id, 0x7);
++ layer_sel |= RK3568_OVL_LAYER_SEL__LAYER(old_layer_id, old_win->data->layer_sel_id);
+ }
+
+ vop2_writel(vop2, RK3568_OVL_LAYER_SEL, layer_sel);
+@@ -2444,9 +2511,11 @@ static void vop2_setup_dly_for_windows(struct vop2 *vop2)
+ sdly |= FIELD_PREP(RK3568_SMART_DLY_NUM__ESMART1, dly);
+ break;
+ case ROCKCHIP_VOP2_SMART0:
++ case ROCKCHIP_VOP2_ESMART2:
+ sdly |= FIELD_PREP(RK3568_SMART_DLY_NUM__SMART0, dly);
+ break;
+ case ROCKCHIP_VOP2_SMART1:
++ case ROCKCHIP_VOP2_ESMART3:
+ sdly |= FIELD_PREP(RK3568_SMART_DLY_NUM__SMART1, dly);
+ break;
+ }
+@@ -2865,6 +2934,10 @@ static struct reg_field vop2_cluster_regs[VOP2_WIN_MAX_REG] = {
+ [VOP2_WIN_Y2R_EN] = REG_FIELD(RK3568_CLUSTER_WIN_CTRL0, 8, 8),
+ [VOP2_WIN_R2Y_EN] = REG_FIELD(RK3568_CLUSTER_WIN_CTRL0, 9, 9),
+ [VOP2_WIN_CSC_MODE] = REG_FIELD(RK3568_CLUSTER_WIN_CTRL0, 10, 11),
++ [VOP2_WIN_AXI_YRGB_R_ID] = REG_FIELD(RK3568_CLUSTER_WIN_CTRL2, 0, 3),
++ [VOP2_WIN_AXI_UV_R_ID] = REG_FIELD(RK3568_CLUSTER_WIN_CTRL2, 5, 8),
++ /* RK3588 only, reserved bit on rk3568 */
++ [VOP2_WIN_AXI_BUS_ID] = REG_FIELD(RK3568_CLUSTER_CTRL, 13, 13),
+
+ /* Scale */
+ [VOP2_WIN_SCALE_YRGB_X] = REG_FIELD(RK3568_CLUSTER_WIN_SCL_FACTOR_YRGB, 0, 15),
+@@ -2957,6 +3030,10 @@ static struct reg_field vop2_esmart_regs[VOP2_WIN_MAX_REG] = {
+ [VOP2_WIN_YMIRROR] = REG_FIELD(RK3568_SMART_CTRL1, 31, 31),
+ [VOP2_WIN_COLOR_KEY] = REG_FIELD(RK3568_SMART_COLOR_KEY_CTRL, 0, 29),
+ [VOP2_WIN_COLOR_KEY_EN] = REG_FIELD(RK3568_SMART_COLOR_KEY_CTRL, 31, 31),
++ [VOP2_WIN_AXI_YRGB_R_ID] = REG_FIELD(RK3568_SMART_CTRL1, 4, 8),
++ [VOP2_WIN_AXI_UV_R_ID] = REG_FIELD(RK3568_SMART_CTRL1, 12, 16),
++ /* RK3588 only, reserved register on rk3568 */
++ [VOP2_WIN_AXI_BUS_ID] = REG_FIELD(RK3588_SMART_AXI_CTRL, 1, 1),
+
+ /* Scale */
+ [VOP2_WIN_SCALE_YRGB_X] = REG_FIELD(RK3568_SMART_REGION0_SCL_FACTOR_YRGB, 0, 15),
+diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
+index 615a16196aff6b..130aaa40316d13 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop2.h
+@@ -9,6 +9,7 @@
+
+ #include <linux/regmap.h>
+ #include <drm/drm_modes.h>
++#include "rockchip_drm_drv.h"
+ #include "rockchip_drm_vop.h"
+
+ #define VOP2_VP_FEATURE_OUTPUT_10BIT BIT(0)
+@@ -78,6 +79,9 @@ enum vop2_win_regs {
+ VOP2_WIN_COLOR_KEY,
+ VOP2_WIN_COLOR_KEY_EN,
+ VOP2_WIN_DITHER_UP,
++ VOP2_WIN_AXI_BUS_ID,
++ VOP2_WIN_AXI_YRGB_R_ID,
++ VOP2_WIN_AXI_UV_R_ID,
+
+ /* scale regs */
+ VOP2_WIN_SCALE_YRGB_X,
+@@ -140,6 +144,10 @@ struct vop2_win_data {
+ unsigned int layer_sel_id;
+ uint64_t feature;
+
++ uint8_t axi_bus_id;
++ uint8_t axi_yrgb_r_id;
++ uint8_t axi_uv_r_id;
++
+ unsigned int max_upscale_factor;
+ unsigned int max_downscale_factor;
+ const u8 dly[VOP2_DLY_MODE_MAX];
+@@ -308,6 +316,7 @@ enum dst_factor_mode {
+
+ #define RK3568_CLUSTER_WIN_CTRL0 0x00
+ #define RK3568_CLUSTER_WIN_CTRL1 0x04
++#define RK3568_CLUSTER_WIN_CTRL2 0x08
+ #define RK3568_CLUSTER_WIN_YRGB_MST 0x10
+ #define RK3568_CLUSTER_WIN_CBR_MST 0x14
+ #define RK3568_CLUSTER_WIN_VIR 0x18
+@@ -330,6 +339,7 @@ enum dst_factor_mode {
+ /* (E)smart register definition, offset relative to window base */
+ #define RK3568_SMART_CTRL0 0x00
+ #define RK3568_SMART_CTRL1 0x04
++#define RK3588_SMART_AXI_CTRL 0x08
+ #define RK3568_SMART_REGION0_CTRL 0x10
+ #define RK3568_SMART_REGION0_YRGB_MST 0x14
+ #define RK3568_SMART_REGION0_CBR_MST 0x18
+diff --git a/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c b/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
+index 18efb3fe1c000f..e473a8f8fd32d4 100644
+--- a/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
++++ b/drivers/gpu/drm/rockchip/rockchip_vop2_reg.c
+@@ -313,7 +313,7 @@ static const struct vop2_video_port_data rk3588_vop_video_ports[] = {
+ * AXI1 is a read only bus.
+ *
+ * Every window on a AXI bus must assigned two unique
+- * read id(yrgb_id/uv_id, valid id are 0x1~0xe).
++ * read id(yrgb_r_id/uv_r_id, valid id are 0x1~0xe).
+ *
+ * AXI0:
+ * Cluster0/1, Esmart0/1, WriteBack
+@@ -333,6 +333,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .layer_sel_id = 0,
+ .supported_rotations = DRM_MODE_ROTATE_90 | DRM_MODE_ROTATE_270 |
+ DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y,
++ .axi_bus_id = 0,
++ .axi_yrgb_r_id = 2,
++ .axi_uv_r_id = 3,
+ .max_upscale_factor = 4,
+ .max_downscale_factor = 4,
+ .dly = { 4, 26, 29 },
+@@ -349,6 +352,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .supported_rotations = DRM_MODE_ROTATE_90 | DRM_MODE_ROTATE_270 |
+ DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_PRIMARY,
++ .axi_bus_id = 0,
++ .axi_yrgb_r_id = 6,
++ .axi_uv_r_id = 7,
+ .max_upscale_factor = 4,
+ .max_downscale_factor = 4,
+ .dly = { 4, 26, 29 },
+@@ -364,6 +370,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .supported_rotations = DRM_MODE_ROTATE_90 | DRM_MODE_ROTATE_270 |
+ DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_PRIMARY,
++ .axi_bus_id = 1,
++ .axi_yrgb_r_id = 2,
++ .axi_uv_r_id = 3,
+ .max_upscale_factor = 4,
+ .max_downscale_factor = 4,
+ .dly = { 4, 26, 29 },
+@@ -379,6 +388,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .supported_rotations = DRM_MODE_ROTATE_90 | DRM_MODE_ROTATE_270 |
+ DRM_MODE_REFLECT_X | DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_PRIMARY,
++ .axi_bus_id = 1,
++ .axi_yrgb_r_id = 6,
++ .axi_uv_r_id = 7,
+ .max_upscale_factor = 4,
+ .max_downscale_factor = 4,
+ .dly = { 4, 26, 29 },
+@@ -393,6 +405,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .layer_sel_id = 2,
+ .supported_rotations = DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_OVERLAY,
++ .axi_bus_id = 0,
++ .axi_yrgb_r_id = 0x0a,
++ .axi_uv_r_id = 0x0b,
+ .max_upscale_factor = 8,
+ .max_downscale_factor = 8,
+ .dly = { 23, 45, 48 },
+@@ -406,6 +421,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .layer_sel_id = 3,
+ .supported_rotations = DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_OVERLAY,
++ .axi_bus_id = 0,
++ .axi_yrgb_r_id = 0x0c,
++ .axi_uv_r_id = 0x01,
+ .max_upscale_factor = 8,
+ .max_downscale_factor = 8,
+ .dly = { 23, 45, 48 },
+@@ -419,6 +437,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .layer_sel_id = 6,
+ .supported_rotations = DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_OVERLAY,
++ .axi_bus_id = 1,
++ .axi_yrgb_r_id = 0x0a,
++ .axi_uv_r_id = 0x0b,
+ .max_upscale_factor = 8,
+ .max_downscale_factor = 8,
+ .dly = { 23, 45, 48 },
+@@ -432,6 +453,9 @@ static const struct vop2_win_data rk3588_vop_win_data[] = {
+ .layer_sel_id = 7,
+ .supported_rotations = DRM_MODE_REFLECT_Y,
+ .type = DRM_PLANE_TYPE_OVERLAY,
++ .axi_bus_id = 1,
++ .axi_yrgb_r_id = 0x0c,
++ .axi_uv_r_id = 0x0d,
+ .max_upscale_factor = 8,
+ .max_downscale_factor = 8,
+ .dly = { 23, 45, 48 },
+diff --git a/drivers/gpu/drm/v3d/v3d_debugfs.c b/drivers/gpu/drm/v3d/v3d_debugfs.c
+index 19e3ee7ac897fe..76816f2551c100 100644
+--- a/drivers/gpu/drm/v3d/v3d_debugfs.c
++++ b/drivers/gpu/drm/v3d/v3d_debugfs.c
+@@ -237,8 +237,8 @@ static int v3d_measure_clock(struct seq_file *m, void *unused)
+ if (v3d->ver >= 40) {
+ int cycle_count_reg = V3D_PCTR_CYCLE_COUNT(v3d->ver);
+ V3D_CORE_WRITE(core, V3D_V4_PCTR_0_SRC_0_3,
+- V3D_SET_FIELD(cycle_count_reg,
+- V3D_PCTR_S0));
++ V3D_SET_FIELD_VER(cycle_count_reg,
++ V3D_PCTR_S0, v3d->ver));
+ V3D_CORE_WRITE(core, V3D_V4_PCTR_0_CLR, 1);
+ V3D_CORE_WRITE(core, V3D_V4_PCTR_0_EN, 1);
+ } else {
+diff --git a/drivers/gpu/drm/v3d/v3d_perfmon.c b/drivers/gpu/drm/v3d/v3d_perfmon.c
+index 6ee56cbd3f1bfc..e3013ac3a5c2a6 100644
+--- a/drivers/gpu/drm/v3d/v3d_perfmon.c
++++ b/drivers/gpu/drm/v3d/v3d_perfmon.c
+@@ -240,17 +240,18 @@ void v3d_perfmon_start(struct v3d_dev *v3d, struct v3d_perfmon *perfmon)
+
+ for (i = 0; i < ncounters; i++) {
+ u32 source = i / 4;
+- u32 channel = V3D_SET_FIELD(perfmon->counters[i], V3D_PCTR_S0);
++ u32 channel = V3D_SET_FIELD_VER(perfmon->counters[i], V3D_PCTR_S0,
++ v3d->ver);
+
+ i++;
+- channel |= V3D_SET_FIELD(i < ncounters ? perfmon->counters[i] : 0,
+- V3D_PCTR_S1);
++ channel |= V3D_SET_FIELD_VER(i < ncounters ? perfmon->counters[i] : 0,
++ V3D_PCTR_S1, v3d->ver);
+ i++;
+- channel |= V3D_SET_FIELD(i < ncounters ? perfmon->counters[i] : 0,
+- V3D_PCTR_S2);
++ channel |= V3D_SET_FIELD_VER(i < ncounters ? perfmon->counters[i] : 0,
++ V3D_PCTR_S2, v3d->ver);
+ i++;
+- channel |= V3D_SET_FIELD(i < ncounters ? perfmon->counters[i] : 0,
+- V3D_PCTR_S3);
++ channel |= V3D_SET_FIELD_VER(i < ncounters ? perfmon->counters[i] : 0,
++ V3D_PCTR_S3, v3d->ver);
+ V3D_CORE_WRITE(0, V3D_V4_PCTR_0_SRC_X(source), channel);
+ }
+
+diff --git a/drivers/gpu/drm/v3d/v3d_regs.h b/drivers/gpu/drm/v3d/v3d_regs.h
+index 1b1a62ad95852b..6da3c69082bd6d 100644
+--- a/drivers/gpu/drm/v3d/v3d_regs.h
++++ b/drivers/gpu/drm/v3d/v3d_regs.h
+@@ -15,6 +15,14 @@
+ fieldval & field##_MASK; \
+ })
+
++#define V3D_SET_FIELD_VER(value, field, ver) \
++ ({ \
++ typeof(ver) _ver = (ver); \
++ u32 fieldval = (value) << field##_SHIFT(_ver); \
++ WARN_ON((fieldval & ~field##_MASK(_ver)) != 0); \
++ fieldval & field##_MASK(_ver); \
++ })
++
+ #define V3D_GET_FIELD(word, field) (((word) & field##_MASK) >> \
+ field##_SHIFT)
+
+@@ -354,18 +362,15 @@
+ #define V3D_V4_PCTR_0_SRC_28_31 0x0067c
+ #define V3D_V4_PCTR_0_SRC_X(x) (V3D_V4_PCTR_0_SRC_0_3 + \
+ 4 * (x))
+-# define V3D_PCTR_S0_MASK V3D_MASK(6, 0)
+-# define V3D_V7_PCTR_S0_MASK V3D_MASK(7, 0)
+-# define V3D_PCTR_S0_SHIFT 0
+-# define V3D_PCTR_S1_MASK V3D_MASK(14, 8)
+-# define V3D_V7_PCTR_S1_MASK V3D_MASK(15, 8)
+-# define V3D_PCTR_S1_SHIFT 8
+-# define V3D_PCTR_S2_MASK V3D_MASK(22, 16)
+-# define V3D_V7_PCTR_S2_MASK V3D_MASK(23, 16)
+-# define V3D_PCTR_S2_SHIFT 16
+-# define V3D_PCTR_S3_MASK V3D_MASK(30, 24)
+-# define V3D_V7_PCTR_S3_MASK V3D_MASK(31, 24)
+-# define V3D_PCTR_S3_SHIFT 24
++# define V3D_PCTR_S0_MASK(ver) (((ver) >= 71) ? V3D_MASK(7, 0) : V3D_MASK(6, 0))
++# define V3D_PCTR_S0_SHIFT(ver) 0
++# define V3D_PCTR_S1_MASK(ver) (((ver) >= 71) ? V3D_MASK(15, 8) : V3D_MASK(14, 8))
++# define V3D_PCTR_S1_SHIFT(ver) 8
++# define V3D_PCTR_S2_MASK(ver) (((ver) >= 71) ? V3D_MASK(23, 16) : V3D_MASK(22, 16))
++# define V3D_PCTR_S2_SHIFT(ver) 16
++# define V3D_PCTR_S3_MASK(ver) (((ver) >= 71) ? V3D_MASK(31, 24) : V3D_MASK(30, 24))
++# define V3D_PCTR_S3_SHIFT(ver) 24
++
+ #define V3D_PCTR_CYCLE_COUNT(ver) ((ver >= 71) ? 0 : 32)
+
+ /* Output values of the counters. */
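
Rather than keeping parallel V3D_PCTR_Sn_MASK / V3D_V7_PCTR_Sn_MASK pairs and making every caller pick the right one, the masks and shifts above become functions of the hardware version, and V3D_SET_FIELD_VER() threads ver through once. A compact userspace model of the idea:

    #include <stdint.h>
    #include <stdio.h>

    #define MASK(hi, lo)    ((~0u >> (31 - (hi))) & ~((1u << (lo)) - 1))

    /* the source field grew by one bit on ver >= 71 */
    #define S0_MASK(ver)    (((ver) >= 71) ? MASK(7, 0) : MASK(6, 0))
    #define S0_SHIFT(ver)   0

    static uint32_t set_field_ver(uint32_t value, unsigned int ver)
    {
            return (value << S0_SHIFT(ver)) & S0_MASK(ver);
    }

    int main(void)
    {
            /* prints 0x7f for pre-7.1 hardware, 0xff for 7.1+ */
            printf("0x%x 0x%x\n", set_field_ver(0xff, 42),
                   set_field_ver(0xff, 71));
            return 0;
    }
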
+diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
+index 935ccc38d12958..155deef867ac09 100644
+--- a/drivers/hid/hid-core.c
++++ b/drivers/hid/hid-core.c
+@@ -1125,6 +1125,8 @@ static void hid_apply_multiplier(struct hid_device *hid,
+ while (multiplier_collection->parent_idx != -1 &&
+ multiplier_collection->type != HID_COLLECTION_LOGICAL)
+ multiplier_collection = &hid->collection[multiplier_collection->parent_idx];
++ if (multiplier_collection->type != HID_COLLECTION_LOGICAL)
++ multiplier_collection = NULL;
+
+ effective_multiplier = hid_calculate_multiplier(hid, multiplier);
+
+diff --git a/drivers/hid/hid-input.c b/drivers/hid/hid-input.c
+index fda9dce3da9980..9d80635a91ebd8 100644
+--- a/drivers/hid/hid-input.c
++++ b/drivers/hid/hid-input.c
+@@ -810,10 +810,23 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ break;
+ }
+
+- if ((usage->hid & 0xf0) == 0x90) { /* SystemControl*/
+- switch (usage->hid & 0xf) {
+- case 0xb: map_key_clear(KEY_DO_NOT_DISTURB); break;
+- default: goto ignore;
++ if ((usage->hid & 0xf0) == 0x90) { /* SystemControl & D-pad */
++ switch (usage->hid) {
++ case HID_GD_UP: usage->hat_dir = 1; break;
++ case HID_GD_DOWN: usage->hat_dir = 5; break;
++ case HID_GD_RIGHT: usage->hat_dir = 3; break;
++ case HID_GD_LEFT: usage->hat_dir = 7; break;
++ case HID_GD_DO_NOT_DISTURB:
++ map_key_clear(KEY_DO_NOT_DISTURB); break;
++ default: goto unknown;
++ }
++
++ if (usage->hid <= HID_GD_LEFT) {
++ if (field->dpad) {
++ map_abs(field->dpad);
++ goto ignore;
++ }
++ map_abs(ABS_HAT0X);
+ }
+ break;
+ }
+@@ -844,22 +857,6 @@ static void hidinput_configure_usage(struct hid_input *hidinput, struct hid_fiel
+ if (field->application == HID_GD_SYSTEM_CONTROL)
+ goto ignore;
+
+- if ((usage->hid & 0xf0) == 0x90) { /* D-pad */
+- switch (usage->hid) {
+- case HID_GD_UP: usage->hat_dir = 1; break;
+- case HID_GD_DOWN: usage->hat_dir = 5; break;
+- case HID_GD_RIGHT: usage->hat_dir = 3; break;
+- case HID_GD_LEFT: usage->hat_dir = 7; break;
+- default: goto unknown;
+- }
+- if (field->dpad) {
+- map_abs(field->dpad);
+- goto ignore;
+- }
+- map_abs(ABS_HAT0X);
+- break;
+- }
+-
+ switch (usage->hid) {
+ /* These usage IDs map directly to the usage codes. */
+ case HID_GD_X: case HID_GD_Y: case HID_GD_Z:
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index d1b7ccfb3e051f..e07d63db5e1f47 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -2078,7 +2078,7 @@ static const struct hid_device_id mt_devices[] = {
+ I2C_DEVICE_ID_GOODIX_01E8) },
+ { .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT_NSMU,
+ HID_DEVICE(BUS_I2C, HID_GROUP_ANY, I2C_VENDOR_ID_GOODIX,
+- I2C_DEVICE_ID_GOODIX_01E8) },
++ I2C_DEVICE_ID_GOODIX_01E9) },
+
+ /* GoodTouch panels */
+ { .driver_data = MT_CLS_NSMU,
+diff --git a/drivers/hid/hid-thrustmaster.c b/drivers/hid/hid-thrustmaster.c
+index cf1679b0d4fbb5..6c3e758bbb09e3 100644
+--- a/drivers/hid/hid-thrustmaster.c
++++ b/drivers/hid/hid-thrustmaster.c
+@@ -170,6 +170,14 @@ static void thrustmaster_interrupts(struct hid_device *hdev)
+ ep = &usbif->cur_altsetting->endpoint[1];
+ b_ep = ep->desc.bEndpointAddress;
+
++ /* Are the expected endpoints present? */
++ u8 ep_addr[2] = {b_ep, 0};
++
++ if (!usb_check_int_endpoints(usbif, ep_addr)) {
++ hid_err(hdev, "Unexpected non-int endpoint\n");
++ return;
++ }
++
+ for (i = 0; i < ARRAY_SIZE(setup_arr); ++i) {
+ memcpy(send_buf, setup_arr[i], setup_arr_sizes[i]);
+
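
The thrustmaster hunk validates the endpoint before the transfer loop ever touches it. One contract worth noting: usb_check_int_endpoints() walks its address array until it hits a zero entry, so the list must be zero-terminated, which is why the array above carries a trailing 0. A simplified userspace model of that contract:

    #include <stdbool.h>
    #include <stdint.h>

    /* hypothetical stand-in for the endpoint lookup + interrupt-type check */
    static bool endpoint_exists_and_is_int(uint8_t addr);

    /* simplified model: scan a zero-terminated list of endpoint addresses */
    static bool check_int_endpoints(const uint8_t *ep_addrs)
    {
            for (; *ep_addrs; ep_addrs++) {
                    if (!endpoint_exists_and_is_int(*ep_addrs))
                            return false;
            }
            return true;    /* reached the 0 terminator: all listed endpoints ok */
    }
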
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 08a3c863f80a27..58480a3f4683fe 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -413,7 +413,7 @@ config SENSORS_ASPEED
+ will be called aspeed_pwm_tacho.
+
+ config SENSORS_ASPEED_G6
+- tristate "ASPEED g6 PWM and Fan tach driver"
++ tristate "ASPEED G6 PWM and Fan tach driver"
+ depends on ARCH_ASPEED || COMPILE_TEST
+ depends on PWM
+ help
+@@ -421,7 +421,7 @@ config SENSORS_ASPEED_G6
+ controllers.
+
+ This driver can also be built as a module. If so, the module
+- will be called aspeed_pwm_tacho.
++ will be called aspeed_g6_pwm_tach.
+
+ config SENSORS_ATXP1
+ tristate "Attansic ATXP1 VID controller"
+diff --git a/drivers/hwmon/nct6775-core.c b/drivers/hwmon/nct6775-core.c
+index ee04795b98aabe..fa3351351825b7 100644
+--- a/drivers/hwmon/nct6775-core.c
++++ b/drivers/hwmon/nct6775-core.c
+@@ -42,6 +42,9 @@
+
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
++#undef DEFAULT_SYMBOL_NAMESPACE
++#define DEFAULT_SYMBOL_NAMESPACE "HWMON_NCT6775"
++
+ #include <linux/module.h>
+ #include <linux/init.h>
+ #include <linux/slab.h>
+@@ -56,9 +59,6 @@
+ #include "lm75.h"
+ #include "nct6775.h"
+
+-#undef DEFAULT_SYMBOL_NAMESPACE
+-#define DEFAULT_SYMBOL_NAMESPACE HWMON_NCT6775
+-
+ #define USE_ALTERNATE
+
+ /* used to set data->name = nct6775_device_names[data->sio_kind] */
+diff --git a/drivers/i2c/busses/i2c-designware-common.c b/drivers/i2c/busses/i2c-designware-common.c
+index 9d88b4fa03e423..b3282785523d48 100644
+--- a/drivers/i2c/busses/i2c-designware-common.c
++++ b/drivers/i2c/busses/i2c-designware-common.c
+@@ -8,6 +8,9 @@
+ * Copyright (C) 2007 MontaVista Software Inc.
+ * Copyright (C) 2009 Provigent Ltd.
+ */
++
++#define DEFAULT_SYMBOL_NAMESPACE "I2C_DW_COMMON"
++
+ #include <linux/acpi.h>
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+@@ -29,8 +32,6 @@
+ #include <linux/types.h>
+ #include <linux/units.h>
+
+-#define DEFAULT_SYMBOL_NAMESPACE I2C_DW_COMMON
+-
+ #include "i2c-designware-core.h"
+
+ static char *abort_sources[] = {
+diff --git a/drivers/i2c/busses/i2c-designware-master.c b/drivers/i2c/busses/i2c-designware-master.c
+index e8ac9a7bf0b3d2..28188c6d0555e0 100644
+--- a/drivers/i2c/busses/i2c-designware-master.c
++++ b/drivers/i2c/busses/i2c-designware-master.c
+@@ -8,6 +8,9 @@
+ * Copyright (C) 2007 MontaVista Software Inc.
+ * Copyright (C) 2009 Provigent Ltd.
+ */
++
++#define DEFAULT_SYMBOL_NAMESPACE "I2C_DW"
++
+ #include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/errno.h>
+@@ -22,8 +25,6 @@
+ #include <linux/regmap.h>
+ #include <linux/reset.h>
+
+-#define DEFAULT_SYMBOL_NAMESPACE I2C_DW
+-
+ #include "i2c-designware-core.h"
+
+ #define AMD_TIMEOUT_MIN_US 25
+diff --git a/drivers/i2c/busses/i2c-designware-slave.c b/drivers/i2c/busses/i2c-designware-slave.c
+index 7035296aa24ce0..f0f0f1f2131d0a 100644
+--- a/drivers/i2c/busses/i2c-designware-slave.c
++++ b/drivers/i2c/busses/i2c-designware-slave.c
+@@ -6,6 +6,9 @@
+ *
+ * Copyright (C) 2016 Synopsys Inc.
+ */
++
++#define DEFAULT_SYMBOL_NAMESPACE "I2C_DW"
++
+ #include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/errno.h>
+@@ -16,8 +19,6 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/regmap.h>
+
+-#define DEFAULT_SYMBOL_NAMESPACE I2C_DW
+-
+ #include "i2c-designware-core.h"
+
+ static void i2c_dw_configure_fifo_slave(struct dw_i2c_dev *dev)
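
The namespace hunks in nct6775-core.c and the three designware files all make the same two changes: DEFAULT_SYMBOL_NAMESPACE becomes a string literal (the convention newer kernels use) and the define moves above the #include block. The ordering matters because linux/export.h tests the macro with #ifdef while it is being preprocessed, so a define placed after the includes is invisible to EXPORT_SYMBOL(). Sketch of the resulting layout:

    /* must come before any header that pulls in linux/export.h */
    #define DEFAULT_SYMBOL_NAMESPACE        "I2C_DW"

    #include <linux/export.h>
    #include <linux/module.h>

    int i2c_dw_example(void)
    {
            return 0;
    }
    EXPORT_SYMBOL_GPL(i2c_dw_example);  /* lands in the "I2C_DW" namespace */
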
+diff --git a/drivers/i3c/master/dw-i3c-master.c b/drivers/i3c/master/dw-i3c-master.c
+index 8d694672c1104f..dbcd3984f25780 100644
+--- a/drivers/i3c/master/dw-i3c-master.c
++++ b/drivers/i3c/master/dw-i3c-master.c
+@@ -1624,6 +1624,7 @@ EXPORT_SYMBOL_GPL(dw_i3c_common_probe);
+
+ void dw_i3c_common_remove(struct dw_i3c_master *master)
+ {
++ cancel_work_sync(&master->hj_work);
+ i3c_master_unregister(&master->base);
+
+ pm_runtime_disable(master->dev);
+diff --git a/drivers/infiniband/hw/Makefile b/drivers/infiniband/hw/Makefile
+index 1211f4317a9f4f..aba96ca9bce5df 100644
+--- a/drivers/infiniband/hw/Makefile
++++ b/drivers/infiniband/hw/Makefile
+@@ -11,7 +11,7 @@ obj-$(CONFIG_INFINIBAND_OCRDMA) += ocrdma/
+ obj-$(CONFIG_INFINIBAND_VMWARE_PVRDMA) += vmw_pvrdma/
+ obj-$(CONFIG_INFINIBAND_USNIC) += usnic/
+ obj-$(CONFIG_INFINIBAND_HFI1) += hfi1/
+-obj-$(CONFIG_INFINIBAND_HNS) += hns/
++obj-$(CONFIG_INFINIBAND_HNS_HIP08) += hns/
+ obj-$(CONFIG_INFINIBAND_QEDR) += qedr/
+ obj-$(CONFIG_INFINIBAND_BNXT_RE) += bnxt_re/
+ obj-$(CONFIG_INFINIBAND_ERDMA) += erdma/
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index 14e434ff51edea..a7067c3c067972 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -4395,9 +4395,10 @@ int bnxt_re_mmap(struct ib_ucontext *ib_uctx, struct vm_area_struct *vma)
+ case BNXT_RE_MMAP_TOGGLE_PAGE:
+ /* Driver doesn't expect write access for user space */
+ if (vma->vm_flags & VM_WRITE)
+- return -EFAULT;
+- ret = vm_insert_page(vma, vma->vm_start,
+- virt_to_page((void *)bnxt_entry->mem_offset));
++ ret = -EFAULT;
++ else
++ ret = vm_insert_page(vma, vma->vm_start,
++ virt_to_page((void *)bnxt_entry->mem_offset));
+ break;
+ default:
+ ret = -EINVAL;
+diff --git a/drivers/infiniband/hw/cxgb4/device.c b/drivers/infiniband/hw/cxgb4/device.c
+index 80970a1738f8a6..034b85c4225555 100644
+--- a/drivers/infiniband/hw/cxgb4/device.c
++++ b/drivers/infiniband/hw/cxgb4/device.c
+@@ -1114,8 +1114,10 @@ static inline struct sk_buff *copy_gl_to_skb_pkt(const struct pkt_gl *gl,
+ * The math here assumes sizeof cpl_pass_accept_req >= sizeof
+ * cpl_rx_pkt.
+ */
+- skb = alloc_skb(gl->tot_len + sizeof(struct cpl_pass_accept_req) +
+- sizeof(struct rss_header) - pktshift, GFP_ATOMIC);
++ skb = alloc_skb(size_add(gl->tot_len,
++ sizeof(struct cpl_pass_accept_req) +
++ sizeof(struct rss_header)) - pktshift,
++ GFP_ATOMIC);
+ if (unlikely(!skb))
+ return NULL;
+
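
size_add() from linux/overflow.h saturates at SIZE_MAX instead of wrapping, so if gl->tot_len is attacker-influenced the computed allocation size can no longer overflow into a small buffer; the allocation simply fails. A minimal sketch of the idiom:

    #include <linux/overflow.h>
    #include <linux/slab.h>

    static void *alloc_pkt_buf(size_t payload, size_t hdrs, size_t strip)
    {
            /*
             * size_add() clamps to SIZE_MAX on overflow, so kmalloc() fails
             * cleanly instead of succeeding with a wrapped, undersized length.
             */
            return kmalloc(size_add(payload, hdrs) - strip, GFP_ATOMIC);
    }
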
+diff --git a/drivers/infiniband/hw/cxgb4/qp.c b/drivers/infiniband/hw/cxgb4/qp.c
+index 7b5c4522b426a6..955f061a55e9ae 100644
+--- a/drivers/infiniband/hw/cxgb4/qp.c
++++ b/drivers/infiniband/hw/cxgb4/qp.c
+@@ -1599,6 +1599,7 @@ static void __flush_qp(struct c4iw_qp *qhp, struct c4iw_cq *rchp,
+ int count;
+ int rq_flushed = 0, sq_flushed;
+ unsigned long flag;
++ struct ib_event ev;
+
+ pr_debug("qhp %p rchp %p schp %p\n", qhp, rchp, schp);
+
+@@ -1607,6 +1608,13 @@ static void __flush_qp(struct c4iw_qp *qhp, struct c4iw_cq *rchp,
+ if (schp != rchp)
+ spin_lock(&schp->lock);
+ spin_lock(&qhp->lock);
++ if (qhp->srq && qhp->attr.state == C4IW_QP_STATE_ERROR &&
++ qhp->ibqp.event_handler) {
++ ev.device = qhp->ibqp.device;
++ ev.element.qp = &qhp->ibqp;
++ ev.event = IB_EVENT_QP_LAST_WQE_REACHED;
++ qhp->ibqp.event_handler(&ev, qhp->ibqp.qp_context);
++ }
+
+ if (qhp->wq.flushed) {
+ spin_unlock(&qhp->lock);
+diff --git a/drivers/infiniband/hw/hns/Kconfig b/drivers/infiniband/hw/hns/Kconfig
+index ab3fbba70789ca..44cdb706fe276d 100644
+--- a/drivers/infiniband/hw/hns/Kconfig
++++ b/drivers/infiniband/hw/hns/Kconfig
+@@ -1,21 +1,11 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+-config INFINIBAND_HNS
+- tristate "HNS RoCE Driver"
+- depends on NET_VENDOR_HISILICON
+- depends on ARM64 || (COMPILE_TEST && 64BIT)
+- depends on (HNS_DSAF && HNS_ENET) || HNS3
+- help
+- This is a RoCE/RDMA driver for the Hisilicon RoCE engine.
+-
+- To compile HIP08 driver as module, choose M here.
+-
+ config INFINIBAND_HNS_HIP08
+- bool "Hisilicon Hip08 Family RoCE support"
+- depends on INFINIBAND_HNS && PCI && HNS3
+- depends on INFINIBAND_HNS=m || HNS3=y
++ tristate "Hisilicon Hip08 Family RoCE support"
++ depends on ARM64 || (COMPILE_TEST && 64BIT)
++ depends on PCI && HNS3
+ help
+ RoCE driver support for Hisilicon RoCE engine in Hisilicon Hip08 SoC.
+ The RoCE engine is a PCI device.
+
+- To compile this driver, choose Y here: if INFINIBAND_HNS is m, this
+- module will be called hns-roce-hw-v2.
++ To compile this driver, choose M here. This module will be called
++ hns-roce-hw-v2.
+diff --git a/drivers/infiniband/hw/hns/Makefile b/drivers/infiniband/hw/hns/Makefile
+index be1e1cdbcfa8a8..7917af8e6380e8 100644
+--- a/drivers/infiniband/hw/hns/Makefile
++++ b/drivers/infiniband/hw/hns/Makefile
+@@ -5,12 +5,9 @@
+
+ ccflags-y := -I $(srctree)/drivers/net/ethernet/hisilicon/hns3
+
+-hns-roce-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_pd.o \
++hns-roce-hw-v2-objs := hns_roce_main.o hns_roce_cmd.o hns_roce_pd.o \
+ hns_roce_ah.o hns_roce_hem.o hns_roce_mr.o hns_roce_qp.o \
+ hns_roce_cq.o hns_roce_alloc.o hns_roce_db.o hns_roce_srq.o hns_roce_restrack.o \
+- hns_roce_debugfs.o
++ hns_roce_debugfs.o hns_roce_hw_v2.o
+
+-ifdef CONFIG_INFINIBAND_HNS_HIP08
+-hns-roce-hw-v2-objs := hns_roce_hw_v2.o $(hns-roce-objs)
+-obj-$(CONFIG_INFINIBAND_HNS) += hns-roce-hw-v2.o
+-endif
++obj-$(CONFIG_INFINIBAND_HNS_HIP08) += hns-roce-hw-v2.o
+diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
+index 529db874d67c69..b1bbdcff631d56 100644
+--- a/drivers/infiniband/hw/mlx4/main.c
++++ b/drivers/infiniband/hw/mlx4/main.c
+@@ -351,7 +351,7 @@ static int mlx4_ib_del_gid(const struct ib_gid_attr *attr, void **context)
+ struct mlx4_port_gid_table *port_gid_table;
+ int ret = 0;
+ int hw_update = 0;
+- struct gid_entry *gids;
++ struct gid_entry *gids = NULL;
+
+ if (!rdma_cap_roce_gid_table(attr->device, attr->port_num))
+ return -EINVAL;
+@@ -389,10 +389,10 @@ static int mlx4_ib_del_gid(const struct ib_gid_attr *attr, void **context)
+ }
+ spin_unlock_bh(&iboe->lock);
+
+- if (!ret && hw_update) {
++ if (gids)
+ ret = mlx4_ib_update_gids(gids, ibdev, attr->port_num);
+- kfree(gids);
+- }
++
++ kfree(gids);
+ return ret;
+ }
+
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 4b37446758fd4e..64b441542cd5dd 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -228,13 +228,27 @@ static void destroy_unused_implicit_child_mr(struct mlx5_ib_mr *mr)
+ unsigned long idx = ib_umem_start(odp) >> MLX5_IMR_MTT_SHIFT;
+ struct mlx5_ib_mr *imr = mr->parent;
+
++ /*
++ * If userspace is racing freeing the parent implicit ODP MR then we can
++ * lose the race with parent destruction. In this case
++ * mlx5_ib_free_odp_mr() will free everything in the implicit_children
++ * xarray so NOP is fine. This child MR cannot be destroyed here because
++ * we are under its umem_mutex.
++ */
+ if (!refcount_inc_not_zero(&imr->mmkey.usecount))
+ return;
+
+- xa_erase(&imr->implicit_children, idx);
++ xa_lock(&imr->implicit_children);
++ if (__xa_cmpxchg(&imr->implicit_children, idx, mr, NULL, GFP_KERNEL) !=
++ mr) {
++ xa_unlock(&imr->implicit_children);
++ return;
++ }
++
+ if (MLX5_CAP_ODP(mr_to_mdev(mr)->mdev, mem_page_fault))
+- xa_erase(&mr_to_mdev(mr)->odp_mkeys,
+- mlx5_base_mkey(mr->mmkey.key));
++ __xa_erase(&mr_to_mdev(mr)->odp_mkeys,
++ mlx5_base_mkey(mr->mmkey.key));
++ xa_unlock(&imr->implicit_children);
+
+ /* Freeing a MR is a sleeping operation, so bounce to a work queue */
+ INIT_WORK(&mr->odp_destroy.work, free_implicit_child_mr_work);
+@@ -500,18 +514,18 @@ static struct mlx5_ib_mr *implicit_get_child_mr(struct mlx5_ib_mr *imr,
+ refcount_inc(&ret->mmkey.usecount);
+ goto out_lock;
+ }
+- xa_unlock(&imr->implicit_children);
+
+ if (MLX5_CAP_ODP(dev->mdev, mem_page_fault)) {
+- ret = xa_store(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key),
+- &mr->mmkey, GFP_KERNEL);
++ ret = __xa_store(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key),
++ &mr->mmkey, GFP_KERNEL);
+ if (xa_is_err(ret)) {
+ ret = ERR_PTR(xa_err(ret));
+- xa_erase(&imr->implicit_children, idx);
+- goto out_mr;
++ __xa_erase(&imr->implicit_children, idx);
++ goto out_lock;
+ }
+ mr->mmkey.type = MLX5_MKEY_IMPLICIT_CHILD;
+ }
++ xa_unlock(&imr->implicit_children);
+ mlx5_ib_dbg(mr_to_mdev(imr), "key %x mr %p\n", mr->mmkey.key, mr);
+ return mr;
+
+@@ -944,8 +958,7 @@ static struct mlx5_ib_mkey *find_odp_mkey(struct mlx5_ib_dev *dev, u32 key)
+ /*
+ * Handle a single data segment in a page-fault WQE or RDMA region.
+ *
+- * Returns number of OS pages retrieved on success. The caller may continue to
+- * the next data segment.
++ * Returns zero on success. The caller may continue to the next data segment.
+ * Can return the following error codes:
+ * -EAGAIN to designate a temporary error. The caller will abort handling the
+ * page fault and resolve it.
+@@ -958,7 +971,7 @@ static int pagefault_single_data_segment(struct mlx5_ib_dev *dev,
+ u32 *bytes_committed,
+ u32 *bytes_mapped)
+ {
+- int npages = 0, ret, i, outlen, cur_outlen = 0, depth = 0;
++ int ret, i, outlen, cur_outlen = 0, depth = 0, pages_in_range;
+ struct pf_frame *head = NULL, *frame;
+ struct mlx5_ib_mkey *mmkey;
+ struct mlx5_ib_mr *mr;
+@@ -993,13 +1006,20 @@ static int pagefault_single_data_segment(struct mlx5_ib_dev *dev,
+ case MLX5_MKEY_MR:
+ mr = container_of(mmkey, struct mlx5_ib_mr, mmkey);
+
++ pages_in_range = (ALIGN(io_virt + bcnt, PAGE_SIZE) -
++ (io_virt & PAGE_MASK)) >>
++ PAGE_SHIFT;
+ ret = pagefault_mr(mr, io_virt, bcnt, bytes_mapped, 0, false);
+ if (ret < 0)
+ goto end;
+
+ mlx5_update_odp_stats(mr, faults, ret);
+
+- npages += ret;
++ if (ret < pages_in_range) {
++ ret = -EFAULT;
++ goto end;
++ }
++
+ ret = 0;
+ break;
+
+@@ -1090,7 +1110,7 @@ static int pagefault_single_data_segment(struct mlx5_ib_dev *dev,
+ kfree(out);
+
+ *bytes_committed = 0;
+- return ret ? ret : npages;
++ return ret;
+ }
+
+ /*
+@@ -1109,8 +1129,7 @@ static int pagefault_single_data_segment(struct mlx5_ib_dev *dev,
+ * the committed bytes).
+ * @receive_queue: receive WQE end of sg list
+ *
+- * Returns the number of pages loaded if positive, zero for an empty WQE, or a
+- * negative error code.
++ * Returns zero for success or a negative error code.
+ */
+ static int pagefault_data_segments(struct mlx5_ib_dev *dev,
+ struct mlx5_pagefault *pfault,
+@@ -1118,7 +1137,7 @@ static int pagefault_data_segments(struct mlx5_ib_dev *dev,
+ void *wqe_end, u32 *bytes_mapped,
+ u32 *total_wqe_bytes, bool receive_queue)
+ {
+- int ret = 0, npages = 0;
++ int ret = 0;
+ u64 io_virt;
+ __be32 key;
+ u32 byte_count;
+@@ -1175,10 +1194,9 @@ static int pagefault_data_segments(struct mlx5_ib_dev *dev,
+ bytes_mapped);
+ if (ret < 0)
+ break;
+- npages += ret;
+ }
+
+- return ret < 0 ? ret : npages;
++ return ret;
+ }
+
+ /*
+@@ -1414,12 +1432,6 @@ static void mlx5_ib_mr_wqe_pfault_handler(struct mlx5_ib_dev *dev,
+ free_page((unsigned long)wqe_start);
+ }
+
+-static int pages_in_range(u64 address, u32 length)
+-{
+- return (ALIGN(address + length, PAGE_SIZE) -
+- (address & PAGE_MASK)) >> PAGE_SHIFT;
+-}
+-
+ static void mlx5_ib_mr_rdma_pfault_handler(struct mlx5_ib_dev *dev,
+ struct mlx5_pagefault *pfault)
+ {
+@@ -1458,7 +1470,7 @@ static void mlx5_ib_mr_rdma_pfault_handler(struct mlx5_ib_dev *dev,
+ if (ret == -EAGAIN) {
+ /* We're racing with an invalidation, don't prefetch */
+ prefetch_activated = 0;
+- } else if (ret < 0 || pages_in_range(address, length) > ret) {
++ } else if (ret < 0) {
+ mlx5_ib_page_fault_resume(dev, pfault, 1);
+ if (ret != -ENOENT)
+ mlx5_ib_dbg(dev, "PAGE FAULT error %d. QP 0x%llx, type: 0x%x\n",
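
The ODP changes above serialise child-MR teardown against parent destruction by doing the removal under xa_lock() with __xa_cmpxchg(): the entry is erased only if it still points at this MR, so losing the race degrades into a harmless no-op instead of a use-after-free. The core idiom, reduced to a small kernel-style sketch:

    #include <linux/xarray.h>

    /* detach ourselves from a parent's child table; safe to lose the race */
    static bool detach_child(struct xarray *children, unsigned long idx,
                             void *me)
    {
            void *old;

            xa_lock(children);
            old = __xa_cmpxchg(children, idx, me, NULL, GFP_ATOMIC);
            xa_unlock(children);

            return old == me;   /* false: another path already removed us */
    }
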
+diff --git a/drivers/infiniband/sw/rxe/rxe_param.h b/drivers/infiniband/sw/rxe/rxe_param.h
+index d2f57ead78ad12..003f681e5dc022 100644
+--- a/drivers/infiniband/sw/rxe/rxe_param.h
++++ b/drivers/infiniband/sw/rxe/rxe_param.h
+@@ -129,7 +129,7 @@ enum rxe_device_param {
+ enum rxe_port_param {
+ RXE_PORT_GID_TBL_LEN = 1024,
+ RXE_PORT_PORT_CAP_FLAGS = IB_PORT_CM_SUP,
+- RXE_PORT_MAX_MSG_SZ = 0x800000,
++ RXE_PORT_MAX_MSG_SZ = (1UL << 31),
+ RXE_PORT_BAD_PKEY_CNTR = 0,
+ RXE_PORT_QKEY_VIOL_CNTR = 0,
+ RXE_PORT_LID = 0,
+diff --git a/drivers/infiniband/sw/rxe/rxe_pool.c b/drivers/infiniband/sw/rxe/rxe_pool.c
+index 67567d62195e86..d9cb682fd71f88 100644
+--- a/drivers/infiniband/sw/rxe/rxe_pool.c
++++ b/drivers/infiniband/sw/rxe/rxe_pool.c
+@@ -178,7 +178,6 @@ int __rxe_cleanup(struct rxe_pool_elem *elem, bool sleepable)
+ {
+ struct rxe_pool *pool = elem->pool;
+ struct xarray *xa = &pool->xa;
+- static int timeout = RXE_POOL_TIMEOUT;
+ int ret, err = 0;
+ void *xa_ret;
+
+@@ -202,19 +201,19 @@ int __rxe_cleanup(struct rxe_pool_elem *elem, bool sleepable)
+ * return to rdma-core
+ */
+ if (sleepable) {
+- if (!completion_done(&elem->complete) && timeout) {
++ if (!completion_done(&elem->complete)) {
+ ret = wait_for_completion_timeout(&elem->complete,
+- timeout);
++ msecs_to_jiffies(50000));
+
+ /* Shouldn't happen. There are still references to
+ * the object but, rather than deadlock, free the
+ * object or pass back to rdma-core.
+ */
+ if (WARN_ON(!ret))
+- err = -EINVAL;
++ err = -ETIMEDOUT;
+ }
+ } else {
+- unsigned long until = jiffies + timeout;
++ unsigned long until = jiffies + RXE_POOL_TIMEOUT;
+
+ /* AH objects are unique in that the destroy_ah verb
+ * can be called in atomic context. This delay
+@@ -226,7 +225,7 @@ int __rxe_cleanup(struct rxe_pool_elem *elem, bool sleepable)
+ mdelay(1);
+
+ if (WARN_ON(!completion_done(&elem->complete)))
+- err = -EINVAL;
++ err = -ETIMEDOUT;
+ }
+
+ if (pool->cleanup)
+diff --git a/drivers/infiniband/sw/rxe/rxe_verbs.c b/drivers/infiniband/sw/rxe/rxe_verbs.c
+index 8a5fc20fd18692..589ac0d8489dbd 100644
+--- a/drivers/infiniband/sw/rxe/rxe_verbs.c
++++ b/drivers/infiniband/sw/rxe/rxe_verbs.c
+@@ -696,7 +696,7 @@ static int validate_send_wr(struct rxe_qp *qp, const struct ib_send_wr *ibwr,
+ for (i = 0; i < ibwr->num_sge; i++)
+ length += ibwr->sg_list[i].length;
+
+- if (length > (1UL << 31)) {
++ if (length > RXE_PORT_MAX_MSG_SZ) {
+ rxe_err_qp(qp, "message length too long\n");
+ break;
+ }
+@@ -980,8 +980,7 @@ static int post_one_recv(struct rxe_rq *rq, const struct ib_recv_wr *ibwr)
+ for (i = 0; i < num_sge; i++)
+ length += ibwr->sg_list[i].length;
+
+- /* IBA max message size is 2^31 */
+- if (length >= (1UL<<31)) {
++ if (length > RXE_PORT_MAX_MSG_SZ) {
+ err = -EINVAL;
+ rxe_dbg("message length too long\n");
+ goto err_out;
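
With RXE_PORT_MAX_MSG_SZ raised to the IBA maximum of 2^31, both WQE validators above now compare against the named constant instead of their own 1UL << 31 literals, which also removes the off-by-one between the old send (">") and receive (">=") checks. A tiny sketch of the consolidation:

    #include <errno.h>
    #include <stdint.h>

    #define MAX_MSG_SZ      (1UL << 31)     /* IBA maximum message size */

    /* every validator shares the named limit and the same comparison */
    static int check_total_length(uint64_t total)
    {
            return total > MAX_MSG_SZ ? -EINVAL : 0;
    }
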
+diff --git a/drivers/infiniband/ulp/rtrs/rtrs.c b/drivers/infiniband/ulp/rtrs/rtrs.c
+index 4e17d546d4ccf3..bf38ac6f87c47a 100644
+--- a/drivers/infiniband/ulp/rtrs/rtrs.c
++++ b/drivers/infiniband/ulp/rtrs/rtrs.c
+@@ -584,6 +584,9 @@ static void dev_free(struct kref *ref)
+ list_del(&dev->entry);
+ mutex_unlock(&pool->mutex);
+
++ if (pool->ops && pool->ops->deinit)
++ pool->ops->deinit(dev);
++
+ ib_dealloc_pd(dev->ib_pd);
+ kfree(dev);
+ }
+diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
+index 2916e77f589b81..7289ae0b83aced 100644
+--- a/drivers/infiniband/ulp/srp/ib_srp.c
++++ b/drivers/infiniband/ulp/srp/ib_srp.c
+@@ -3978,7 +3978,6 @@ static struct srp_host *srp_add_port(struct srp_device *device, u32 port)
+ return host;
+
+ put_host:
+- device_del(&host->dev);
+ put_device(&host->dev);
+ return NULL;
+ }
+diff --git a/drivers/iommu/amd/amd_iommu.h b/drivers/iommu/amd/amd_iommu.h
+index 6386fa4556d9b8..6fac9ee8dd3ed0 100644
+--- a/drivers/iommu/amd/amd_iommu.h
++++ b/drivers/iommu/amd/amd_iommu.h
+@@ -87,7 +87,6 @@ int amd_iommu_complete_ppr(struct device *dev, u32 pasid, int status, int tag);
+ */
+ void amd_iommu_flush_all_caches(struct amd_iommu *iommu);
+ void amd_iommu_update_and_flush_device_table(struct protection_domain *domain);
+-void amd_iommu_domain_update(struct protection_domain *domain);
+ void amd_iommu_domain_flush_pages(struct protection_domain *domain,
+ u64 address, size_t size);
+ void amd_iommu_dev_flush_pasid_pages(struct iommu_dev_data *dev_data,
+diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
+index 8364cd6fa47d01..a24a97a2c6469b 100644
+--- a/drivers/iommu/amd/iommu.c
++++ b/drivers/iommu/amd/iommu.c
+@@ -1606,15 +1606,6 @@ void amd_iommu_update_and_flush_device_table(struct protection_domain *domain)
+ domain_flush_complete(domain);
+ }
+
+-void amd_iommu_domain_update(struct protection_domain *domain)
+-{
+- /* Update device table */
+- amd_iommu_update_and_flush_device_table(domain);
+-
+- /* Flush domain TLB(s) and wait for completion */
+- amd_iommu_domain_flush_all(domain);
+-}
+-
+ int amd_iommu_complete_ppr(struct device *dev, u32 pasid, int status, int tag)
+ {
+ struct iommu_dev_data *dev_data;
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index 353fea58cd318a..f1a8f8c75cb0e9 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -2702,9 +2702,14 @@ static int arm_smmu_attach_prepare(struct arm_smmu_attach_state *state,
+ * Translation Requests and Translated transactions are denied
+ * as though ATS is disabled for the stream (STE.EATS == 0b00),
+ * causing F_BAD_ATS_TREQ and F_TRANSL_FORBIDDEN events
+- * (IHI0070Ea 5.2 Stream Table Entry). Thus ATS can only be
+- * enabled if we have arm_smmu_domain, those always have page
+- * tables.
++ * (IHI0070Ea 5.2 Stream Table Entry).
++ *
++ * However, if we have installed a CD table and are using S1DSS
++ * then ATS will work in S1DSS bypass. See "13.6.4 Full ATS
++ * skipping stage 1".
++ *
++ * Disable ATS if we are going to create a normal 0b100 bypass
++ * STE.
+ */
+ state->ats_enabled = arm_smmu_ats_supported(master);
+ }
+@@ -3017,8 +3022,10 @@ static void arm_smmu_attach_dev_ste(struct iommu_domain *domain,
+ if (arm_smmu_ssids_in_use(&master->cd_table)) {
+ /*
+ * If a CD table has to be present then we need to run with ATS
+- * on even though the RID will fail ATS queries with UR. This is
+- * because we have no idea what the PASID's need.
++ * on because we have to assume a PASID is using ATS. For
++ * IDENTITY this will setup things so that S1DSS=bypass which
++ * follows the explanation in "13.6.4 Full ATS skipping stage 1"
++ * and allows for ATS on the RID to work.
+ */
+ state.cd_needs_ats = true;
+ arm_smmu_attach_prepare(&state, domain);
+diff --git a/drivers/iommu/iommufd/iova_bitmap.c b/drivers/iommu/iommufd/iova_bitmap.c
+index d90b9e253412ff..2cdc4f542df472 100644
+--- a/drivers/iommu/iommufd/iova_bitmap.c
++++ b/drivers/iommu/iommufd/iova_bitmap.c
+@@ -130,7 +130,7 @@ struct iova_bitmap {
+ static unsigned long iova_bitmap_offset_to_index(struct iova_bitmap *bitmap,
+ unsigned long iova)
+ {
+- unsigned long pgsize = 1 << bitmap->mapped.pgshift;
++ unsigned long pgsize = 1UL << bitmap->mapped.pgshift;
+
+ return iova / (BITS_PER_TYPE(*bitmap->bitmap) * pgsize);
+ }
+diff --git a/drivers/iommu/iommufd/main.c b/drivers/iommu/iommufd/main.c
+index b5f5d27ee9634e..649fe79d0f0cc6 100644
+--- a/drivers/iommu/iommufd/main.c
++++ b/drivers/iommu/iommufd/main.c
+@@ -130,7 +130,7 @@ static int iommufd_object_dec_wait_shortterm(struct iommufd_ctx *ictx,
+ if (wait_event_timeout(ictx->destroy_wait,
+ refcount_read(&to_destroy->shortterm_users) ==
+ 0,
+- msecs_to_jiffies(10000)))
++ msecs_to_jiffies(60000)))
+ return 0;
+
+ pr_crit("Time out waiting for iommufd object to become free\n");
+diff --git a/drivers/leds/leds-cht-wcove.c b/drivers/leds/leds-cht-wcove.c
+index b4998402b8c6f0..711ac4bd60580d 100644
+--- a/drivers/leds/leds-cht-wcove.c
++++ b/drivers/leds/leds-cht-wcove.c
+@@ -394,7 +394,7 @@ static int cht_wc_leds_probe(struct platform_device *pdev)
+ led->cdev.pattern_clear = cht_wc_leds_pattern_clear;
+ led->cdev.max_brightness = 255;
+
+- ret = led_classdev_register(&pdev->dev, &led->cdev);
++ ret = devm_led_classdev_register(&pdev->dev, &led->cdev);
+ if (ret < 0)
+ return ret;
+ }
+@@ -406,10 +406,6 @@ static int cht_wc_leds_probe(struct platform_device *pdev)
+ static void cht_wc_leds_remove(struct platform_device *pdev)
+ {
+ struct cht_wc_leds *leds = platform_get_drvdata(pdev);
+- int i;
+-
+- for (i = 0; i < CHT_WC_LED_COUNT; i++)
+- led_classdev_unregister(&leds->leds[i].cdev);
+
+ /* Restore LED1 regs if hw-control was active else leave LED1 off */
+ if (!(leds->led1_initial_regs.ctrl & CHT_WC_LED1_SWCTL))
+diff --git a/drivers/leds/leds-netxbig.c b/drivers/leds/leds-netxbig.c
+index af5a908b8d9edd..e95287416ef879 100644
+--- a/drivers/leds/leds-netxbig.c
++++ b/drivers/leds/leds-netxbig.c
+@@ -439,6 +439,7 @@ static int netxbig_leds_get_of_pdata(struct device *dev,
+ }
+ gpio_ext_pdev = of_find_device_by_node(gpio_ext_np);
+ if (!gpio_ext_pdev) {
++ of_node_put(gpio_ext_np);
+ dev_err(dev, "Failed to find platform device for gpio-ext\n");
+ return -ENODEV;
+ }
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index c3a42dd66ce551..2e3087556adb37 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -1671,24 +1671,13 @@ __acquires(bitmap->lock)
+ }
+
+ static int bitmap_startwrite(struct mddev *mddev, sector_t offset,
+- unsigned long sectors, bool behind)
++ unsigned long sectors)
+ {
+ struct bitmap *bitmap = mddev->bitmap;
+
+ if (!bitmap)
+ return 0;
+
+- if (behind) {
+- int bw;
+- atomic_inc(&bitmap->behind_writes);
+- bw = atomic_read(&bitmap->behind_writes);
+- if (bw > bitmap->behind_writes_used)
+- bitmap->behind_writes_used = bw;
+-
+- pr_debug("inc write-behind count %d/%lu\n",
+- bw, bitmap->mddev->bitmap_info.max_write_behind);
+- }
+-
+ while (sectors) {
+ sector_t blocks;
+ bitmap_counter_t *bmc;
+@@ -1737,21 +1726,13 @@ static int bitmap_startwrite(struct mddev *mddev, sector_t offset,
+ }
+
+ static void bitmap_endwrite(struct mddev *mddev, sector_t offset,
+- unsigned long sectors, bool success, bool behind)
++ unsigned long sectors)
+ {
+ struct bitmap *bitmap = mddev->bitmap;
+
+ if (!bitmap)
+ return;
+
+- if (behind) {
+- if (atomic_dec_and_test(&bitmap->behind_writes))
+- wake_up(&bitmap->behind_wait);
+- pr_debug("dec write-behind count %d/%lu\n",
+- atomic_read(&bitmap->behind_writes),
+- bitmap->mddev->bitmap_info.max_write_behind);
+- }
+-
+ while (sectors) {
+ sector_t blocks;
+ unsigned long flags;
+@@ -1764,15 +1745,16 @@ static void bitmap_endwrite(struct mddev *mddev, sector_t offset,
+ return;
+ }
+
+- if (success && !bitmap->mddev->degraded &&
+- bitmap->events_cleared < bitmap->mddev->events) {
+- bitmap->events_cleared = bitmap->mddev->events;
+- bitmap->need_sync = 1;
+- sysfs_notify_dirent_safe(bitmap->sysfs_can_clear);
+- }
+-
+- if (!success && !NEEDED(*bmc))
++ if (!bitmap->mddev->degraded) {
++ if (bitmap->events_cleared < bitmap->mddev->events) {
++ bitmap->events_cleared = bitmap->mddev->events;
++ bitmap->need_sync = 1;
++ sysfs_notify_dirent_safe(
++ bitmap->sysfs_can_clear);
++ }
++ } else if (!NEEDED(*bmc)) {
+ *bmc |= NEEDED_MASK;
++ }
+
+ if (COUNTER(*bmc) == COUNTER_MAX)
+ wake_up(&bitmap->overflow_wait);
+@@ -2062,6 +2044,37 @@ static void md_bitmap_free(void *data)
+ kfree(bitmap);
+ }
+
++static void bitmap_start_behind_write(struct mddev *mddev)
++{
++ struct bitmap *bitmap = mddev->bitmap;
++ int bw;
++
++ if (!bitmap)
++ return;
++
++ atomic_inc(&bitmap->behind_writes);
++ bw = atomic_read(&bitmap->behind_writes);
++ if (bw > bitmap->behind_writes_used)
++ bitmap->behind_writes_used = bw;
++
++ pr_debug("inc write-behind count %d/%lu\n",
++ bw, bitmap->mddev->bitmap_info.max_write_behind);
++}
++
++static void bitmap_end_behind_write(struct mddev *mddev)
++{
++ struct bitmap *bitmap = mddev->bitmap;
++
++ if (!bitmap)
++ return;
++
++ if (atomic_dec_and_test(&bitmap->behind_writes))
++ wake_up(&bitmap->behind_wait);
++ pr_debug("dec write-behind count %d/%lu\n",
++ atomic_read(&bitmap->behind_writes),
++ bitmap->mddev->bitmap_info.max_write_behind);
++}
++
+ static void bitmap_wait_behind_writes(struct mddev *mddev)
+ {
+ struct bitmap *bitmap = mddev->bitmap;
+@@ -2342,7 +2355,10 @@ static int bitmap_get_stats(void *data, struct md_bitmap_stats *stats)
+
+ if (!bitmap)
+ return -ENOENT;
+-
++ if (bitmap->mddev->bitmap_info.external)
++ return -ENOENT;
++ if (!bitmap->storage.sb_page) /* no superblock */
++ return -EINVAL;
+ sb = kmap_local_page(bitmap->storage.sb_page);
+ stats->sync_size = le64_to_cpu(sb->sync_size);
+ kunmap_local(sb);
+@@ -2981,6 +2997,9 @@ static struct bitmap_operations bitmap_ops = {
+ .dirty_bits = bitmap_dirty_bits,
+ .unplug = bitmap_unplug,
+ .daemon_work = bitmap_daemon_work,
++
++ .start_behind_write = bitmap_start_behind_write,
++ .end_behind_write = bitmap_end_behind_write,
+ .wait_behind_writes = bitmap_wait_behind_writes,
+
+ .startwrite = bitmap_startwrite,
+diff --git a/drivers/md/md-bitmap.h b/drivers/md/md-bitmap.h
+index 662e6fc141a775..31c93019c76bf3 100644
+--- a/drivers/md/md-bitmap.h
++++ b/drivers/md/md-bitmap.h
+@@ -84,12 +84,15 @@ struct bitmap_operations {
+ unsigned long e);
+ void (*unplug)(struct mddev *mddev, bool sync);
+ void (*daemon_work)(struct mddev *mddev);
++
++ void (*start_behind_write)(struct mddev *mddev);
++ void (*end_behind_write)(struct mddev *mddev);
+ void (*wait_behind_writes)(struct mddev *mddev);
+
+ int (*startwrite)(struct mddev *mddev, sector_t offset,
+- unsigned long sectors, bool behind);
++ unsigned long sectors);
+ void (*endwrite)(struct mddev *mddev, sector_t offset,
+- unsigned long sectors, bool success, bool behind);
++ unsigned long sectors);
+ bool (*start_sync)(struct mddev *mddev, sector_t offset,
+ sector_t *blocks, bool degraded);
+ void (*end_sync)(struct mddev *mddev, sector_t offset, sector_t *blocks);
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 67108c397c5a86..44c4c518430d9b 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -8376,6 +8376,10 @@ static int md_seq_show(struct seq_file *seq, void *v)
+ return 0;
+
+ spin_unlock(&all_mddevs_lock);
++
++ /* prevent the bitmap from being freed after checking */
++ mutex_lock(&mddev->bitmap_info.mutex);
++
+ spin_lock(&mddev->lock);
+ if (mddev->pers || mddev->raid_disks || !list_empty(&mddev->disks)) {
+ seq_printf(seq, "%s : ", mdname(mddev));
+@@ -8451,6 +8455,7 @@ static int md_seq_show(struct seq_file *seq, void *v)
+ seq_printf(seq, "\n");
+ }
+ spin_unlock(&mddev->lock);
++ mutex_unlock(&mddev->bitmap_info.mutex);
+ spin_lock(&all_mddevs_lock);
+
+ if (mddev == list_last_entry(&all_mddevs, struct mddev, all_mddevs))
+@@ -8745,12 +8750,32 @@ void md_submit_discard_bio(struct mddev *mddev, struct md_rdev *rdev,
+ }
+ EXPORT_SYMBOL_GPL(md_submit_discard_bio);
+
++static void md_bitmap_start(struct mddev *mddev,
++ struct md_io_clone *md_io_clone)
++{
++ if (mddev->pers->bitmap_sector)
++ mddev->pers->bitmap_sector(mddev, &md_io_clone->offset,
++ &md_io_clone->sectors);
++
++ mddev->bitmap_ops->startwrite(mddev, md_io_clone->offset,
++ md_io_clone->sectors);
++}
++
++static void md_bitmap_end(struct mddev *mddev, struct md_io_clone *md_io_clone)
++{
++ mddev->bitmap_ops->endwrite(mddev, md_io_clone->offset,
++ md_io_clone->sectors);
++}
++
+ static void md_end_clone_io(struct bio *bio)
+ {
+ struct md_io_clone *md_io_clone = bio->bi_private;
+ struct bio *orig_bio = md_io_clone->orig_bio;
+ struct mddev *mddev = md_io_clone->mddev;
+
++ if (bio_data_dir(orig_bio) == WRITE && mddev->bitmap)
++ md_bitmap_end(mddev, md_io_clone);
++
+ if (bio->bi_status && !orig_bio->bi_status)
+ orig_bio->bi_status = bio->bi_status;
+
+@@ -8775,6 +8800,12 @@ static void md_clone_bio(struct mddev *mddev, struct bio **bio)
+ if (blk_queue_io_stat(bdev->bd_disk->queue))
+ md_io_clone->start_time = bio_start_io_acct(*bio);
+
++ if (bio_data_dir(*bio) == WRITE && mddev->bitmap) {
++ md_io_clone->offset = (*bio)->bi_iter.bi_sector;
++ md_io_clone->sectors = bio_sectors(*bio);
++ md_bitmap_start(mddev, md_io_clone);
++ }
++
+ clone->bi_end_io = md_end_clone_io;
+ clone->bi_private = md_io_clone;
+ *bio = clone;
+@@ -8793,6 +8824,9 @@ void md_free_cloned_bio(struct bio *bio)
+ struct bio *orig_bio = md_io_clone->orig_bio;
+ struct mddev *mddev = md_io_clone->mddev;
+
++ if (bio_data_dir(orig_bio) == WRITE && mddev->bitmap)
++ md_bitmap_end(mddev, md_io_clone);
++
+ if (bio->bi_status && !orig_bio->bi_status)
+ orig_bio->bi_status = bio->bi_status;
+
+diff --git a/drivers/md/md.h b/drivers/md/md.h
+index 5d2e6bd58e4da2..8826dce9717da9 100644
+--- a/drivers/md/md.h
++++ b/drivers/md/md.h
+@@ -746,6 +746,9 @@ struct md_personality
+ void *(*takeover) (struct mddev *mddev);
+ /* Changes the consistency policy of an active array. */
+ int (*change_consistency_policy)(struct mddev *mddev, const char *buf);
++ /* convert io ranges from array to bitmap */
++ void (*bitmap_sector)(struct mddev *mddev, sector_t *offset,
++ unsigned long *sectors);
+ };
+
+ struct md_sysfs_entry {
+@@ -828,6 +831,8 @@ struct md_io_clone {
+ struct mddev *mddev;
+ struct bio *orig_bio;
+ unsigned long start_time;
++ sector_t offset;
++ unsigned long sectors;
+ struct bio bio_clone;
+ };
+
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index 6c9d24203f39f0..d83fe3b3abc009 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -420,10 +420,8 @@ static void close_write(struct r1bio *r1_bio)
+ r1_bio->behind_master_bio = NULL;
+ }
+
+- /* clear the bitmap if all writes complete successfully */
+- mddev->bitmap_ops->endwrite(mddev, r1_bio->sector, r1_bio->sectors,
+- !test_bit(R1BIO_Degraded, &r1_bio->state),
+- test_bit(R1BIO_BehindIO, &r1_bio->state));
++ if (test_bit(R1BIO_BehindIO, &r1_bio->state))
++ mddev->bitmap_ops->end_behind_write(mddev);
+ md_write_end(mddev);
+ }
+
+@@ -480,8 +478,6 @@ static void raid1_end_write_request(struct bio *bio)
+ if (!test_bit(Faulty, &rdev->flags))
+ set_bit(R1BIO_WriteError, &r1_bio->state);
+ else {
+- /* Fail the request */
+- set_bit(R1BIO_Degraded, &r1_bio->state);
+ /* Finished with this branch */
+ r1_bio->bios[mirror] = NULL;
+ to_put = bio;
+@@ -1492,11 +1488,8 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ break;
+ }
+ r1_bio->bios[i] = NULL;
+- if (!rdev || test_bit(Faulty, &rdev->flags)) {
+- if (i < conf->raid_disks)
+- set_bit(R1BIO_Degraded, &r1_bio->state);
++ if (!rdev || test_bit(Faulty, &rdev->flags))
+ continue;
+- }
+
+ atomic_inc(&rdev->nr_pending);
+ if (test_bit(WriteErrorSeen, &rdev->flags)) {
+@@ -1522,16 +1515,6 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ */
+ max_sectors = bad_sectors;
+ rdev_dec_pending(rdev, mddev);
+- /* We don't set R1BIO_Degraded as that
+- * only applies if the disk is
+- * missing, so it might be re-added,
+- * and we want to know to recover this
+- * chunk.
+- * In this case the device is here,
+- * and the fact that this chunk is not
+- * in-sync is recorded in the bad
+- * block log
+- */
+ continue;
+ }
+ if (is_bad) {
+@@ -1611,9 +1594,8 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
+ stats.behind_writes < max_write_behind)
+ alloc_behind_master_bio(r1_bio, bio);
+
+- mddev->bitmap_ops->startwrite(
+- mddev, r1_bio->sector, r1_bio->sectors,
+- test_bit(R1BIO_BehindIO, &r1_bio->state));
++ if (test_bit(R1BIO_BehindIO, &r1_bio->state))
++ mddev->bitmap_ops->start_behind_write(mddev);
+ first_clone = 0;
+ }
+
+@@ -2567,12 +2549,10 @@ static void handle_write_finished(struct r1conf *conf, struct r1bio *r1_bio)
+ * errors.
+ */
+ fail = true;
+- if (!narrow_write_error(r1_bio, m)) {
++ if (!narrow_write_error(r1_bio, m))
+ md_error(conf->mddev,
+ conf->mirrors[m].rdev);
+ /* an I/O failed, we can't clear the bitmap */
+- set_bit(R1BIO_Degraded, &r1_bio->state);
+- }
+ rdev_dec_pending(conf->mirrors[m].rdev,
+ conf->mddev);
+ }
+@@ -2663,8 +2643,6 @@ static void raid1d(struct md_thread *thread)
+ list_del(&r1_bio->retry_list);
+ idx = sector_to_idx(r1_bio->sector);
+ atomic_dec(&conf->nr_queued[idx]);
+- if (mddev->degraded)
+- set_bit(R1BIO_Degraded, &r1_bio->state);
+ if (test_bit(R1BIO_WriteError, &r1_bio->state))
+ close_write(r1_bio);
+ raid_end_bio_io(r1_bio);
+diff --git a/drivers/md/raid1.h b/drivers/md/raid1.h
+index 5300cbaa58a415..33f318fcc268d8 100644
+--- a/drivers/md/raid1.h
++++ b/drivers/md/raid1.h
+@@ -188,7 +188,6 @@ struct r1bio {
+ enum r1bio_state {
+ R1BIO_Uptodate,
+ R1BIO_IsSync,
+- R1BIO_Degraded,
+ R1BIO_BehindIO,
+ /* Set ReadError on bios that experience a readerror so that
+ * raid1d knows what to do with them.
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index 862b1fb71d864b..daf42acc4fb6f3 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -428,10 +428,6 @@ static void close_write(struct r10bio *r10_bio)
+ {
+ struct mddev *mddev = r10_bio->mddev;
+
+- /* clear the bitmap if all writes complete successfully */
+- mddev->bitmap_ops->endwrite(mddev, r10_bio->sector, r10_bio->sectors,
+- !test_bit(R10BIO_Degraded, &r10_bio->state),
+- false);
+ md_write_end(mddev);
+ }
+
+@@ -501,7 +497,6 @@ static void raid10_end_write_request(struct bio *bio)
+ set_bit(R10BIO_WriteError, &r10_bio->state);
+ else {
+ /* Fail the request */
+- set_bit(R10BIO_Degraded, &r10_bio->state);
+ r10_bio->devs[slot].bio = NULL;
+ to_put = bio;
+ dec_rdev = 1;
+@@ -1430,10 +1425,8 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
+ r10_bio->devs[i].bio = NULL;
+ r10_bio->devs[i].repl_bio = NULL;
+
+- if (!rdev && !rrdev) {
+- set_bit(R10BIO_Degraded, &r10_bio->state);
++ if (!rdev && !rrdev)
+ continue;
+- }
+ if (rdev && test_bit(WriteErrorSeen, &rdev->flags)) {
+ sector_t first_bad;
+ sector_t dev_sector = r10_bio->devs[i].addr;
+@@ -1450,14 +1443,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
+ * to other devices yet
+ */
+ max_sectors = bad_sectors;
+- /* We don't set R10BIO_Degraded as that
+- * only applies if the disk is missing,
+- * so it might be re-added, and we want to
+- * know to recover this chunk.
+- * In this case the device is here, and the
+- * fact that this chunk is not in-sync is
+- * recorded in the bad block log.
+- */
+ continue;
+ }
+ if (is_bad) {
+@@ -1493,8 +1478,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
+ md_account_bio(mddev, &bio);
+ r10_bio->master_bio = bio;
+ atomic_set(&r10_bio->remaining, 1);
+- mddev->bitmap_ops->startwrite(mddev, r10_bio->sector, r10_bio->sectors,
+- false);
+
+ for (i = 0; i < conf->copies; i++) {
+ if (r10_bio->devs[i].bio)
+@@ -2910,11 +2893,8 @@ static void handle_write_completed(struct r10conf *conf, struct r10bio *r10_bio)
+ rdev_dec_pending(rdev, conf->mddev);
+ } else if (bio != NULL && bio->bi_status) {
+ fail = true;
+- if (!narrow_write_error(r10_bio, m)) {
++ if (!narrow_write_error(r10_bio, m))
+ md_error(conf->mddev, rdev);
+- set_bit(R10BIO_Degraded,
+- &r10_bio->state);
+- }
+ rdev_dec_pending(rdev, conf->mddev);
+ }
+ bio = r10_bio->devs[m].repl_bio;
+@@ -2973,8 +2953,6 @@ static void raid10d(struct md_thread *thread)
+ r10_bio = list_first_entry(&tmp, struct r10bio,
+ retry_list);
+ list_del(&r10_bio->retry_list);
+- if (mddev->degraded)
+- set_bit(R10BIO_Degraded, &r10_bio->state);
+
+ if (test_bit(R10BIO_WriteError,
+ &r10_bio->state))
+diff --git a/drivers/md/raid10.h b/drivers/md/raid10.h
+index 2e75e88d08023f..3f16ad6904a9fb 100644
+--- a/drivers/md/raid10.h
++++ b/drivers/md/raid10.h
+@@ -161,7 +161,6 @@ enum r10bio_state {
+ R10BIO_IsSync,
+ R10BIO_IsRecover,
+ R10BIO_IsReshape,
+- R10BIO_Degraded,
+ /* Set ReadError on bios that experience a read error
+ * so that raid10d knows what to do with them.
+ */
+diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
+index b4f7b79fd187d0..011246e16a99e5 100644
+--- a/drivers/md/raid5-cache.c
++++ b/drivers/md/raid5-cache.c
+@@ -313,10 +313,6 @@ void r5c_handle_cached_data_endio(struct r5conf *conf,
+ if (sh->dev[i].written) {
+ set_bit(R5_UPTODATE, &sh->dev[i].flags);
+ r5c_return_dev_pending_writes(conf, &sh->dev[i]);
+- conf->mddev->bitmap_ops->endwrite(conf->mddev,
+- sh->sector, RAID5_STRIPE_SECTORS(conf),
+- !test_bit(STRIPE_DEGRADED, &sh->state),
+- false);
+ }
+ }
+ }
+diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
+index 2fa1f270fb1d3c..39e7596e78c0b0 100644
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -906,8 +906,7 @@ static bool stripe_can_batch(struct stripe_head *sh)
+ if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ return false;
+ return test_bit(STRIPE_BATCH_READY, &sh->state) &&
+- !test_bit(STRIPE_BITMAP_PENDING, &sh->state) &&
+- is_full_stripe_write(sh);
++ is_full_stripe_write(sh);
+ }
+
+ /* we only do back search */
+@@ -1345,8 +1344,6 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
+ submit_bio_noacct(rbi);
+ }
+ if (!rdev && !rrdev) {
+- if (op_is_write(op))
+- set_bit(STRIPE_DEGRADED, &sh->state);
+ pr_debug("skip op %d on disc %d for sector %llu\n",
+ bi->bi_opf, i, (unsigned long long)sh->sector);
+ clear_bit(R5_LOCKED, &sh->dev[i].flags);
+@@ -2884,7 +2881,6 @@ static void raid5_end_write_request(struct bio *bi)
+ set_bit(R5_MadeGoodRepl, &sh->dev[i].flags);
+ } else {
+ if (bi->bi_status) {
+- set_bit(STRIPE_DEGRADED, &sh->state);
+ set_bit(WriteErrorSeen, &rdev->flags);
+ set_bit(R5_WriteError, &sh->dev[i].flags);
+ if (!test_and_set_bit(WantReplacement, &rdev->flags))
+@@ -3548,29 +3544,9 @@ static void __add_stripe_bio(struct stripe_head *sh, struct bio *bi,
+ (*bip)->bi_iter.bi_sector, sh->sector, dd_idx,
+ sh->dev[dd_idx].sector);
+
+- if (conf->mddev->bitmap && firstwrite) {
+- /* Cannot hold spinlock over bitmap_startwrite,
+- * but must ensure this isn't added to a batch until
+- * we have added to the bitmap and set bm_seq.
+- * So set STRIPE_BITMAP_PENDING to prevent
+- * batching.
+- * If multiple __add_stripe_bio() calls race here they
+- * much all set STRIPE_BITMAP_PENDING. So only the first one
+- * to complete "bitmap_startwrite" gets to set
+- * STRIPE_BIT_DELAY. This is important as once a stripe
+- * is added to a batch, STRIPE_BIT_DELAY cannot be changed
+- * any more.
+- */
+- set_bit(STRIPE_BITMAP_PENDING, &sh->state);
+- spin_unlock_irq(&sh->stripe_lock);
+- conf->mddev->bitmap_ops->startwrite(conf->mddev, sh->sector,
+- RAID5_STRIPE_SECTORS(conf), false);
+- spin_lock_irq(&sh->stripe_lock);
+- clear_bit(STRIPE_BITMAP_PENDING, &sh->state);
+- if (!sh->batch_head) {
+- sh->bm_seq = conf->seq_flush+1;
+- set_bit(STRIPE_BIT_DELAY, &sh->state);
+- }
++ if (conf->mddev->bitmap && firstwrite && !sh->batch_head) {
++ sh->bm_seq = conf->seq_flush+1;
++ set_bit(STRIPE_BIT_DELAY, &sh->state);
+ }
+ }
+
+@@ -3621,7 +3597,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
+ BUG_ON(sh->batch_head);
+ for (i = disks; i--; ) {
+ struct bio *bi;
+- int bitmap_end = 0;
+
+ if (test_bit(R5_ReadError, &sh->dev[i].flags)) {
+ struct md_rdev *rdev = conf->disks[i].rdev;
+@@ -3646,8 +3621,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
+ sh->dev[i].towrite = NULL;
+ sh->overwrite_disks = 0;
+ spin_unlock_irq(&sh->stripe_lock);
+- if (bi)
+- bitmap_end = 1;
+
+ log_stripe_write_finished(sh);
+
+@@ -3662,11 +3635,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
+ bio_io_error(bi);
+ bi = nextbi;
+ }
+- if (bitmap_end)
+- conf->mddev->bitmap_ops->endwrite(conf->mddev,
+- sh->sector, RAID5_STRIPE_SECTORS(conf),
+- false, false);
+- bitmap_end = 0;
+ /* and fail all 'written' */
+ bi = sh->dev[i].written;
+ sh->dev[i].written = NULL;
+@@ -3675,7 +3643,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
+ sh->dev[i].page = sh->dev[i].orig_page;
+ }
+
+- if (bi) bitmap_end = 1;
+ while (bi && bi->bi_iter.bi_sector <
+ sh->dev[i].sector + RAID5_STRIPE_SECTORS(conf)) {
+ struct bio *bi2 = r5_next_bio(conf, bi, sh->dev[i].sector);
+@@ -3709,10 +3676,6 @@ handle_failed_stripe(struct r5conf *conf, struct stripe_head *sh,
+ bi = nextbi;
+ }
+ }
+- if (bitmap_end)
+- conf->mddev->bitmap_ops->endwrite(conf->mddev,
+- sh->sector, RAID5_STRIPE_SECTORS(conf),
+- false, false);
+ /* If we were in the middle of a write the parity block might
+ * still be locked - so just clear all R5_LOCKED flags
+ */
+@@ -4061,10 +4024,7 @@ static void handle_stripe_clean_event(struct r5conf *conf,
+ bio_endio(wbi);
+ wbi = wbi2;
+ }
+- conf->mddev->bitmap_ops->endwrite(conf->mddev,
+- sh->sector, RAID5_STRIPE_SECTORS(conf),
+- !test_bit(STRIPE_DEGRADED, &sh->state),
+- false);
++
+ if (head_sh->batch_head) {
+ sh = list_first_entry(&sh->batch_list,
+ struct stripe_head,
+@@ -4341,7 +4301,6 @@ static void handle_parity_checks5(struct r5conf *conf, struct stripe_head *sh,
+ s->locked++;
+ set_bit(R5_Wantwrite, &dev->flags);
+
+- clear_bit(STRIPE_DEGRADED, &sh->state);
+ set_bit(STRIPE_INSYNC, &sh->state);
+ break;
+ case check_state_run:
+@@ -4498,7 +4457,6 @@ static void handle_parity_checks6(struct r5conf *conf, struct stripe_head *sh,
+ clear_bit(R5_Wantwrite, &dev->flags);
+ s->locked--;
+ }
+- clear_bit(STRIPE_DEGRADED, &sh->state);
+
+ set_bit(STRIPE_INSYNC, &sh->state);
+ break;
+@@ -4892,8 +4850,7 @@ static void break_stripe_batch_list(struct stripe_head *head_sh,
+ (1 << STRIPE_COMPUTE_RUN) |
+ (1 << STRIPE_DISCARD) |
+ (1 << STRIPE_BATCH_READY) |
+- (1 << STRIPE_BATCH_ERR) |
+- (1 << STRIPE_BITMAP_PENDING)),
++ (1 << STRIPE_BATCH_ERR)),
+ "stripe state: %lx\n", sh->state);
+ WARN_ONCE(head_sh->state & ((1 << STRIPE_DISCARD) |
+ (1 << STRIPE_REPLACED)),
+@@ -4901,7 +4858,6 @@ static void break_stripe_batch_list(struct stripe_head *head_sh,
+
+ set_mask_bits(&sh->state, ~(STRIPE_EXPAND_SYNC_FLAGS |
+ (1 << STRIPE_PREREAD_ACTIVE) |
+- (1 << STRIPE_DEGRADED) |
+ (1 << STRIPE_ON_UNPLUG_LIST)),
+ head_sh->state & (1 << STRIPE_INSYNC));
+
+@@ -5785,10 +5741,6 @@ static void make_discard_request(struct mddev *mddev, struct bio *bi)
+ }
+ spin_unlock_irq(&sh->stripe_lock);
+ if (conf->mddev->bitmap) {
+- for (d = 0; d < conf->raid_disks - conf->max_degraded;
+- d++)
+- mddev->bitmap_ops->startwrite(mddev, sh->sector,
+- RAID5_STRIPE_SECTORS(conf), false);
+ sh->bm_seq = conf->seq_flush + 1;
+ set_bit(STRIPE_BIT_DELAY, &sh->state);
+ }
+@@ -5929,6 +5881,54 @@ static enum reshape_loc get_reshape_loc(struct mddev *mddev,
+ return LOC_BEHIND_RESHAPE;
+ }
+
++static void raid5_bitmap_sector(struct mddev *mddev, sector_t *offset,
++ unsigned long *sectors)
++{
++ struct r5conf *conf = mddev->private;
++ sector_t start = *offset;
++ sector_t end = start + *sectors;
++ sector_t prev_start = start;
++ sector_t prev_end = end;
++ int sectors_per_chunk;
++ enum reshape_loc loc;
++ int dd_idx;
++
++ sectors_per_chunk = conf->chunk_sectors *
++ (conf->raid_disks - conf->max_degraded);
++ start = round_down(start, sectors_per_chunk);
++ end = round_up(end, sectors_per_chunk);
++
++ start = raid5_compute_sector(conf, start, 0, &dd_idx, NULL);
++ end = raid5_compute_sector(conf, end, 0, &dd_idx, NULL);
++
++ /*
++ * For LOC_INSIDE_RESHAPE, this IO will wait for reshape to make
++ * progress, hence it's the same as LOC_BEHIND_RESHAPE.
++ */
++ loc = get_reshape_loc(mddev, conf, prev_start);
++ if (likely(loc != LOC_AHEAD_OF_RESHAPE)) {
++ *offset = start;
++ *sectors = end - start;
++ return;
++ }
++
++ sectors_per_chunk = conf->prev_chunk_sectors *
++ (conf->previous_raid_disks - conf->max_degraded);
++ prev_start = round_down(prev_start, sectors_per_chunk);
++ prev_end = round_down(prev_end, sectors_per_chunk);
++
++ prev_start = raid5_compute_sector(conf, prev_start, 1, &dd_idx, NULL);
++ prev_end = raid5_compute_sector(conf, prev_end, 1, &dd_idx, NULL);
++
++ /*
++ * For LOC_AHEAD_OF_RESHAPE, reshape can make progress before this IO
++ * is handled in make_stripe_request(); we can't know that here, hence
++ * we set bits for both.
++ */
++ *offset = min(start, prev_start);
++ *sectors = max(end, prev_end) - *offset;
++}
++
+ static enum stripe_result make_stripe_request(struct mddev *mddev,
+ struct r5conf *conf, struct stripe_request_ctx *ctx,
+ sector_t logical_sector, struct bio *bi)
+@@ -8977,6 +8977,7 @@ static struct md_personality raid6_personality =
+ .takeover = raid6_takeover,
+ .change_consistency_policy = raid5_change_consistency_policy,
+ .prepare_suspend = raid5_prepare_suspend,
++ .bitmap_sector = raid5_bitmap_sector,
+ };
+ static struct md_personality raid5_personality =
+ {
+@@ -9002,6 +9003,7 @@ static struct md_personality raid5_personality =
+ .takeover = raid5_takeover,
+ .change_consistency_policy = raid5_change_consistency_policy,
+ .prepare_suspend = raid5_prepare_suspend,
++ .bitmap_sector = raid5_bitmap_sector,
+ };
+
+ static struct md_personality raid4_personality =
+@@ -9028,6 +9030,7 @@ static struct md_personality raid4_personality =
+ .takeover = raid4_takeover,
+ .change_consistency_policy = raid5_change_consistency_policy,
+ .prepare_suspend = raid5_prepare_suspend,
++ .bitmap_sector = raid5_bitmap_sector,
+ };
+
+ static int __init raid5_init(void)
+diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
+index 896ecfc4afa6fa..2e42c1641049f9 100644
+--- a/drivers/md/raid5.h
++++ b/drivers/md/raid5.h
+@@ -358,7 +358,6 @@ enum {
+ STRIPE_REPLACED,
+ STRIPE_PREREAD_ACTIVE,
+ STRIPE_DELAYED,
+- STRIPE_DEGRADED,
+ STRIPE_BIT_DELAY,
+ STRIPE_EXPANDING,
+ STRIPE_EXPAND_SOURCE,
+@@ -372,9 +371,6 @@ enum {
+ STRIPE_ON_RELEASE_LIST,
+ STRIPE_BATCH_READY,
+ STRIPE_BATCH_ERR,
+- STRIPE_BITMAP_PENDING, /* Being added to bitmap, don't add
+- * to batch yet.
+- */
+ STRIPE_LOG_TRAPPED, /* trapped into log (see raid5-cache.c)
+ * this bit is used in two scenarios:
+ *
+diff --git a/drivers/media/i2c/imx290.c b/drivers/media/i2c/imx290.c
+index 458905dfb3e110..a87a265cd83957 100644
+--- a/drivers/media/i2c/imx290.c
++++ b/drivers/media/i2c/imx290.c
+@@ -269,7 +269,6 @@ static const struct cci_reg_sequence imx290_global_init_settings[] = {
+ { IMX290_WINWV, 1097 },
+ { IMX290_XSOUTSEL, IMX290_XSOUTSEL_XVSOUTSEL_VSYNC |
+ IMX290_XSOUTSEL_XHSOUTSEL_HSYNC },
+- { CCI_REG8(0x3011), 0x02 },
+ { CCI_REG8(0x3012), 0x64 },
+ { CCI_REG8(0x3013), 0x00 },
+ };
+@@ -277,6 +276,7 @@ static const struct cci_reg_sequence imx290_global_init_settings[] = {
+ static const struct cci_reg_sequence imx290_global_init_settings_290[] = {
+ { CCI_REG8(0x300f), 0x00 },
+ { CCI_REG8(0x3010), 0x21 },
++ { CCI_REG8(0x3011), 0x00 },
+ { CCI_REG8(0x3016), 0x09 },
+ { CCI_REG8(0x3070), 0x02 },
+ { CCI_REG8(0x3071), 0x11 },
+@@ -330,6 +330,7 @@ static const struct cci_reg_sequence xclk_regs[][IMX290_NUM_CLK_REGS] = {
+ };
+
+ static const struct cci_reg_sequence imx290_global_init_settings_327[] = {
++ { CCI_REG8(0x3011), 0x02 },
+ { CCI_REG8(0x309e), 0x4A },
+ { CCI_REG8(0x309f), 0x4A },
+ { CCI_REG8(0x313b), 0x61 },
+diff --git a/drivers/media/i2c/imx412.c b/drivers/media/i2c/imx412.c
+index 0bfe3046fcc872..c74097a59c4285 100644
+--- a/drivers/media/i2c/imx412.c
++++ b/drivers/media/i2c/imx412.c
+@@ -547,7 +547,7 @@ static int imx412_update_exp_gain(struct imx412 *imx412, u32 exposure, u32 gain)
+
+ lpfr = imx412->vblank + imx412->cur_mode->height;
+
+- dev_dbg(imx412->dev, "Set exp %u, analog gain %u, lpfr %u",
++ dev_dbg(imx412->dev, "Set exp %u, analog gain %u, lpfr %u\n",
+ exposure, gain, lpfr);
+
+ ret = imx412_write_reg(imx412, IMX412_REG_HOLD, 1, 1);
+@@ -594,7 +594,7 @@ static int imx412_set_ctrl(struct v4l2_ctrl *ctrl)
+ case V4L2_CID_VBLANK:
+ imx412->vblank = imx412->vblank_ctrl->val;
+
+- dev_dbg(imx412->dev, "Received vblank %u, new lpfr %u",
++ dev_dbg(imx412->dev, "Received vblank %u, new lpfr %u\n",
+ imx412->vblank,
+ imx412->vblank + imx412->cur_mode->height);
+
+@@ -613,7 +613,7 @@ static int imx412_set_ctrl(struct v4l2_ctrl *ctrl)
+ exposure = ctrl->val;
+ analog_gain = imx412->again_ctrl->val;
+
+- dev_dbg(imx412->dev, "Received exp %u, analog gain %u",
++ dev_dbg(imx412->dev, "Received exp %u, analog gain %u\n",
+ exposure, analog_gain);
+
+ ret = imx412_update_exp_gain(imx412, exposure, analog_gain);
+@@ -622,7 +622,7 @@ static int imx412_set_ctrl(struct v4l2_ctrl *ctrl)
+
+ break;
+ default:
+- dev_err(imx412->dev, "Invalid control %d", ctrl->id);
++ dev_err(imx412->dev, "Invalid control %d\n", ctrl->id);
+ ret = -EINVAL;
+ }
+
+@@ -803,14 +803,14 @@ static int imx412_start_streaming(struct imx412 *imx412)
+ ret = imx412_write_regs(imx412, reg_list->regs,
+ reg_list->num_of_regs);
+ if (ret) {
+- dev_err(imx412->dev, "fail to write initial registers");
++ dev_err(imx412->dev, "fail to write initial registers\n");
+ return ret;
+ }
+
+ /* Setup handler will write actual exposure and gain */
+ ret = __v4l2_ctrl_handler_setup(imx412->sd.ctrl_handler);
+ if (ret) {
+- dev_err(imx412->dev, "fail to setup handler");
++ dev_err(imx412->dev, "fail to setup handler\n");
+ return ret;
+ }
+
+@@ -821,7 +821,7 @@ static int imx412_start_streaming(struct imx412 *imx412)
+ ret = imx412_write_reg(imx412, IMX412_REG_MODE_SELECT,
+ 1, IMX412_MODE_STREAMING);
+ if (ret) {
+- dev_err(imx412->dev, "fail to start streaming");
++ dev_err(imx412->dev, "fail to start streaming\n");
+ return ret;
+ }
+
+@@ -895,7 +895,7 @@ static int imx412_detect(struct imx412 *imx412)
+ return ret;
+
+ if (val != IMX412_ID) {
+- dev_err(imx412->dev, "chip id mismatch: %x!=%x",
++ dev_err(imx412->dev, "chip id mismatch: %x!=%x\n",
+ IMX412_ID, val);
+ return -ENXIO;
+ }
+@@ -927,7 +927,7 @@ static int imx412_parse_hw_config(struct imx412 *imx412)
+ imx412->reset_gpio = devm_gpiod_get_optional(imx412->dev, "reset",
+ GPIOD_OUT_LOW);
+ if (IS_ERR(imx412->reset_gpio)) {
+- dev_err(imx412->dev, "failed to get reset gpio %ld",
++ dev_err(imx412->dev, "failed to get reset gpio %ld\n",
+ PTR_ERR(imx412->reset_gpio));
+ return PTR_ERR(imx412->reset_gpio);
+ }
+@@ -935,13 +935,13 @@ static int imx412_parse_hw_config(struct imx412 *imx412)
+ /* Get sensor input clock */
+ imx412->inclk = devm_clk_get(imx412->dev, NULL);
+ if (IS_ERR(imx412->inclk)) {
+- dev_err(imx412->dev, "could not get inclk");
++ dev_err(imx412->dev, "could not get inclk\n");
+ return PTR_ERR(imx412->inclk);
+ }
+
+ rate = clk_get_rate(imx412->inclk);
+ if (rate != IMX412_INCLK_RATE) {
+- dev_err(imx412->dev, "inclk frequency mismatch");
++ dev_err(imx412->dev, "inclk frequency mismatch\n");
+ return -EINVAL;
+ }
+
+@@ -966,14 +966,14 @@ static int imx412_parse_hw_config(struct imx412 *imx412)
+
+ if (bus_cfg.bus.mipi_csi2.num_data_lanes != IMX412_NUM_DATA_LANES) {
+ dev_err(imx412->dev,
+- "number of CSI2 data lanes %d is not supported",
++ "number of CSI2 data lanes %d is not supported\n",
+ bus_cfg.bus.mipi_csi2.num_data_lanes);
+ ret = -EINVAL;
+ goto done_endpoint_free;
+ }
+
+ if (!bus_cfg.nr_of_link_frequencies) {
+- dev_err(imx412->dev, "no link frequencies defined");
++ dev_err(imx412->dev, "no link frequencies defined\n");
+ ret = -EINVAL;
+ goto done_endpoint_free;
+ }
+@@ -1034,7 +1034,7 @@ static int imx412_power_on(struct device *dev)
+
+ ret = clk_prepare_enable(imx412->inclk);
+ if (ret) {
+- dev_err(imx412->dev, "fail to enable inclk");
++ dev_err(imx412->dev, "fail to enable inclk\n");
+ goto error_reset;
+ }
+
+@@ -1145,7 +1145,7 @@ static int imx412_init_controls(struct imx412 *imx412)
+ imx412->hblank_ctrl->flags |= V4L2_CTRL_FLAG_READ_ONLY;
+
+ if (ctrl_hdlr->error) {
+- dev_err(imx412->dev, "control init failed: %d",
++ dev_err(imx412->dev, "control init failed: %d\n",
+ ctrl_hdlr->error);
+ v4l2_ctrl_handler_free(ctrl_hdlr);
+ return ctrl_hdlr->error;
+@@ -1183,7 +1183,7 @@ static int imx412_probe(struct i2c_client *client)
+
+ ret = imx412_parse_hw_config(imx412);
+ if (ret) {
+- dev_err(imx412->dev, "HW configuration is not supported");
++ dev_err(imx412->dev, "HW configuration is not supported\n");
+ return ret;
+ }
+
+@@ -1191,14 +1191,14 @@ static int imx412_probe(struct i2c_client *client)
+
+ ret = imx412_power_on(imx412->dev);
+ if (ret) {
+- dev_err(imx412->dev, "failed to power-on the sensor");
++ dev_err(imx412->dev, "failed to power-on the sensor\n");
+ goto error_mutex_destroy;
+ }
+
+ /* Check module identity */
+ ret = imx412_detect(imx412);
+ if (ret) {
+- dev_err(imx412->dev, "failed to find sensor: %d", ret);
++ dev_err(imx412->dev, "failed to find sensor: %d\n", ret);
+ goto error_power_off;
+ }
+
+@@ -1208,7 +1208,7 @@ static int imx412_probe(struct i2c_client *client)
+
+ ret = imx412_init_controls(imx412);
+ if (ret) {
+- dev_err(imx412->dev, "failed to init controls: %d", ret);
++ dev_err(imx412->dev, "failed to init controls: %d\n", ret);
+ goto error_power_off;
+ }
+
+@@ -1222,14 +1222,14 @@ static int imx412_probe(struct i2c_client *client)
+ imx412->pad.flags = MEDIA_PAD_FL_SOURCE;
+ ret = media_entity_pads_init(&imx412->sd.entity, 1, &imx412->pad);
+ if (ret) {
+- dev_err(imx412->dev, "failed to init entity pads: %d", ret);
++ dev_err(imx412->dev, "failed to init entity pads: %d\n", ret);
+ goto error_handler_free;
+ }
+
+ ret = v4l2_async_register_subdev_sensor(&imx412->sd);
+ if (ret < 0) {
+ dev_err(imx412->dev,
+- "failed to register async subdev: %d", ret);
++ "failed to register async subdev: %d\n", ret);
+ goto error_media_entity;
+ }
+
+diff --git a/drivers/media/i2c/ov9282.c b/drivers/media/i2c/ov9282.c
+index 9f52af6f047f3c..87e5d7ce5a47ee 100644
+--- a/drivers/media/i2c/ov9282.c
++++ b/drivers/media/i2c/ov9282.c
+@@ -40,7 +40,7 @@
+ /* Exposure control */
+ #define OV9282_REG_EXPOSURE 0x3500
+ #define OV9282_EXPOSURE_MIN 1
+-#define OV9282_EXPOSURE_OFFSET 12
++#define OV9282_EXPOSURE_OFFSET 25
+ #define OV9282_EXPOSURE_STEP 1
+ #define OV9282_EXPOSURE_DEFAULT 0x0282
+
+diff --git a/drivers/media/platform/marvell/mcam-core.c b/drivers/media/platform/marvell/mcam-core.c
+index c81593c969e057..a62c3a484cb3ff 100644
+--- a/drivers/media/platform/marvell/mcam-core.c
++++ b/drivers/media/platform/marvell/mcam-core.c
+@@ -935,7 +935,12 @@ static int mclk_enable(struct clk_hw *hw)
+ ret = pm_runtime_resume_and_get(cam->dev);
+ if (ret < 0)
+ return ret;
+- clk_enable(cam->clk[0]);
++ ret = clk_enable(cam->clk[0]);
++ if (ret) {
++ pm_runtime_put(cam->dev);
++ return ret;
++ }
++
+ mcam_reg_write(cam, REG_CLKCTRL, (mclk_src << 29) | mclk_div);
+ mcam_ctlr_power_up(cam);
+
+diff --git a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+index 1bf85c1cf96435..b8c9bb017fb5f6 100644
+--- a/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
++++ b/drivers/media/platform/nxp/imx-jpeg/mxc-jpeg.c
+@@ -2679,11 +2679,12 @@ static void mxc_jpeg_detach_pm_domains(struct mxc_jpeg_dev *jpeg)
+ int i;
+
+ for (i = 0; i < jpeg->num_domains; i++) {
+- if (jpeg->pd_dev[i] && !pm_runtime_suspended(jpeg->pd_dev[i]))
++ if (!IS_ERR_OR_NULL(jpeg->pd_dev[i]) &&
++ !pm_runtime_suspended(jpeg->pd_dev[i]))
+ pm_runtime_force_suspend(jpeg->pd_dev[i]);
+- if (jpeg->pd_link[i] && !IS_ERR(jpeg->pd_link[i]))
++ if (!IS_ERR_OR_NULL(jpeg->pd_link[i]))
+ device_link_del(jpeg->pd_link[i]);
+- if (jpeg->pd_dev[i] && !IS_ERR(jpeg->pd_dev[i]))
++ if (!IS_ERR_OR_NULL(jpeg->pd_dev[i]))
+ dev_pm_domain_detach(jpeg->pd_dev[i], true);
+ jpeg->pd_dev[i] = NULL;
+ jpeg->pd_link[i] = NULL;
+diff --git a/drivers/media/platform/nxp/imx8-isi/imx8-isi-video.c b/drivers/media/platform/nxp/imx8-isi/imx8-isi-video.c
+index 4091f1c0e78bdc..a71eb30323c8d2 100644
+--- a/drivers/media/platform/nxp/imx8-isi/imx8-isi-video.c
++++ b/drivers/media/platform/nxp/imx8-isi/imx8-isi-video.c
+@@ -861,6 +861,7 @@ int mxc_isi_video_buffer_prepare(struct mxc_isi_dev *isi, struct vb2_buffer *vb2
+ const struct mxc_isi_format_info *info,
+ const struct v4l2_pix_format_mplane *pix)
+ {
++ struct vb2_v4l2_buffer *v4l2_buf = to_vb2_v4l2_buffer(vb2);
+ unsigned int i;
+
+ for (i = 0; i < info->mem_planes; i++) {
+@@ -875,6 +876,8 @@ int mxc_isi_video_buffer_prepare(struct mxc_isi_dev *isi, struct vb2_buffer *vb2
+ vb2_set_plane_payload(vb2, i, size);
+ }
+
++ v4l2_buf->field = pix->field;
++
+ return 0;
+ }
+
+diff --git a/drivers/media/platform/samsung/exynos4-is/mipi-csis.c b/drivers/media/platform/samsung/exynos4-is/mipi-csis.c
+index 4b9b20ba35041c..38c5f22b850b97 100644
+--- a/drivers/media/platform/samsung/exynos4-is/mipi-csis.c
++++ b/drivers/media/platform/samsung/exynos4-is/mipi-csis.c
+@@ -940,13 +940,19 @@ static int s5pcsis_pm_resume(struct device *dev, bool runtime)
+ state->supplies);
+ goto unlock;
+ }
+- clk_enable(state->clock[CSIS_CLK_GATE]);
++ ret = clk_enable(state->clock[CSIS_CLK_GATE]);
++ if (ret) {
++ phy_power_off(state->phy);
++ regulator_bulk_disable(CSIS_NUM_SUPPLIES,
++ state->supplies);
++ goto unlock;
++ }
+ }
+ if (state->flags & ST_STREAMING)
+ s5pcsis_start_stream(state);
+
+ state->flags &= ~ST_SUSPENDED;
+- unlock:
++unlock:
+ mutex_unlock(&state->lock);
+ return ret ? -EAGAIN : 0;
+ }
+diff --git a/drivers/media/platform/samsung/s3c-camif/camif-core.c b/drivers/media/platform/samsung/s3c-camif/camif-core.c
+index e4529f666e2060..8c597dd01713a6 100644
+--- a/drivers/media/platform/samsung/s3c-camif/camif-core.c
++++ b/drivers/media/platform/samsung/s3c-camif/camif-core.c
+@@ -527,10 +527,19 @@ static void s3c_camif_remove(struct platform_device *pdev)
+ static int s3c_camif_runtime_resume(struct device *dev)
+ {
+ struct camif_dev *camif = dev_get_drvdata(dev);
++ int ret;
++
++ ret = clk_enable(camif->clock[CLK_GATE]);
++ if (ret)
++ return ret;
+
+- clk_enable(camif->clock[CLK_GATE]);
+ /* null op on s3c244x */
+- clk_enable(camif->clock[CLK_CAM]);
++ ret = clk_enable(camif->clock[CLK_CAM]);
++ if (ret) {
++ clk_disable(camif->clock[CLK_GATE]);
++ return ret;
++ }
++
+ return 0;
+ }
+
+diff --git a/drivers/media/rc/iguanair.c b/drivers/media/rc/iguanair.c
+index 276bf3c8a8cb49..8af94246e5916e 100644
+--- a/drivers/media/rc/iguanair.c
++++ b/drivers/media/rc/iguanair.c
+@@ -194,8 +194,10 @@ static int iguanair_send(struct iguanair *ir, unsigned size)
+ if (rc)
+ return rc;
+
+- if (wait_for_completion_timeout(&ir->completion, TIMEOUT) == 0)
++ if (wait_for_completion_timeout(&ir->completion, TIMEOUT) == 0) {
++ usb_kill_urb(ir->urb_out);
+ return -ETIMEDOUT;
++ }
+
+ return rc;
+ }
+diff --git a/drivers/media/usb/dvb-usb-v2/af9035.c b/drivers/media/usb/dvb-usb-v2/af9035.c
+index 0d2c42819d3909..218f712f56b17c 100644
+--- a/drivers/media/usb/dvb-usb-v2/af9035.c
++++ b/drivers/media/usb/dvb-usb-v2/af9035.c
+@@ -322,13 +322,16 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
+ ret = -EOPNOTSUPP;
+ } else if ((msg[0].addr == state->af9033_i2c_addr[0]) ||
+ (msg[0].addr == state->af9033_i2c_addr[1])) {
++ /* demod access via firmware interface */
++ u32 reg;
++
+ if (msg[0].len < 3 || msg[1].len < 1) {
+ ret = -EOPNOTSUPP;
+ goto unlock;
+ }
+- /* demod access via firmware interface */
+- u32 reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
+- msg[0].buf[2];
++
++ reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
++ msg[0].buf[2];
+
+ if (msg[0].addr == state->af9033_i2c_addr[1])
+ reg |= 0x100000;
+@@ -385,13 +388,16 @@ static int af9035_i2c_master_xfer(struct i2c_adapter *adap,
+ ret = -EOPNOTSUPP;
+ } else if ((msg[0].addr == state->af9033_i2c_addr[0]) ||
+ (msg[0].addr == state->af9033_i2c_addr[1])) {
++ /* demod access via firmware interface */
++ u32 reg;
++
+ if (msg[0].len < 3) {
+ ret = -EOPNOTSUPP;
+ goto unlock;
+ }
+- /* demod access via firmware interface */
+- u32 reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
+- msg[0].buf[2];
++
++ reg = msg[0].buf[0] << 16 | msg[0].buf[1] << 8 |
++ msg[0].buf[2];
+
+ if (msg[0].addr == state->af9033_i2c_addr[1])
+ reg |= 0x100000;
+diff --git a/drivers/media/usb/dvb-usb-v2/lmedm04.c b/drivers/media/usb/dvb-usb-v2/lmedm04.c
+index 8a34e6c0d6a6d1..f0537b741d1352 100644
+--- a/drivers/media/usb/dvb-usb-v2/lmedm04.c
++++ b/drivers/media/usb/dvb-usb-v2/lmedm04.c
+@@ -373,6 +373,7 @@ static int lme2510_int_read(struct dvb_usb_adapter *adap)
+ struct dvb_usb_device *d = adap_to_d(adap);
+ struct lme2510_state *lme_int = adap_to_priv(adap);
+ struct usb_host_endpoint *ep;
++ int ret;
+
+ lme_int->lme_urb = usb_alloc_urb(0, GFP_KERNEL);
+
+@@ -390,11 +391,20 @@ static int lme2510_int_read(struct dvb_usb_adapter *adap)
+
+ /* Quirk of pipe reporting PIPE_BULK but behaves as interrupt */
+ ep = usb_pipe_endpoint(d->udev, lme_int->lme_urb->pipe);
++ if (!ep) {
++ usb_free_urb(lme_int->lme_urb);
++ return -ENODEV;
++ }
+
+ if (usb_endpoint_type(&ep->desc) == USB_ENDPOINT_XFER_BULK)
+ lme_int->lme_urb->pipe = usb_rcvbulkpipe(d->udev, 0xa);
+
+- usb_submit_urb(lme_int->lme_urb, GFP_KERNEL);
++ ret = usb_submit_urb(lme_int->lme_urb, GFP_KERNEL);
++ if (ret) {
++ usb_free_urb(lme_int->lme_urb);
++ return ret;
++ }
++
+ info("INT Interrupt Service Started");
+
+ return 0;
+diff --git a/drivers/media/usb/uvc/uvc_queue.c b/drivers/media/usb/uvc/uvc_queue.c
+index 16fa17bbd15eaa..83ed7821fa2a77 100644
+--- a/drivers/media/usb/uvc/uvc_queue.c
++++ b/drivers/media/usb/uvc/uvc_queue.c
+@@ -483,7 +483,8 @@ static void uvc_queue_buffer_complete(struct kref *ref)
+
+ buf->state = buf->error ? UVC_BUF_STATE_ERROR : UVC_BUF_STATE_DONE;
+ vb2_set_plane_payload(&buf->buf.vb2_buf, 0, buf->bytesused);
+- vb2_buffer_done(&buf->buf.vb2_buf, VB2_BUF_STATE_DONE);
++ vb2_buffer_done(&buf->buf.vb2_buf, buf->error ? VB2_BUF_STATE_ERROR :
++ VB2_BUF_STATE_DONE);
+ }
+
+ /*
+diff --git a/drivers/media/usb/uvc/uvc_status.c b/drivers/media/usb/uvc/uvc_status.c
+index a78a88c710e24a..b5f6682ff38311 100644
+--- a/drivers/media/usb/uvc/uvc_status.c
++++ b/drivers/media/usb/uvc/uvc_status.c
+@@ -269,6 +269,7 @@ int uvc_status_init(struct uvc_device *dev)
+ dev->int_urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (!dev->int_urb) {
+ kfree(dev->status);
++ dev->status = NULL;
+ return -ENOMEM;
+ }
+
+diff --git a/drivers/memory/tegra/tegra20-emc.c b/drivers/memory/tegra/tegra20-emc.c
+index 7193f848d17e66..9b7d30a21a5bd0 100644
+--- a/drivers/memory/tegra/tegra20-emc.c
++++ b/drivers/memory/tegra/tegra20-emc.c
+@@ -474,14 +474,15 @@ tegra_emc_find_node_by_ram_code(struct tegra_emc *emc)
+
+ ram_code = tegra_read_ram_code();
+
+- for (np = of_find_node_by_name(dev->of_node, "emc-tables"); np;
+- np = of_find_node_by_name(np, "emc-tables")) {
++ for_each_child_of_node(dev->of_node, np) {
++ if (!of_node_name_eq(np, "emc-tables"))
++ continue;
+ err = of_property_read_u32(np, "nvidia,ram-code", &value);
+ if (err || value != ram_code) {
+ struct device_node *lpddr2_np;
+ bool cfg_mismatches = false;
+
+- lpddr2_np = of_find_node_by_name(np, "lpddr2");
++ lpddr2_np = of_get_child_by_name(np, "lpddr2");
+ if (lpddr2_np) {
+ const struct lpddr2_info *info;
+
+@@ -518,7 +519,6 @@ tegra_emc_find_node_by_ram_code(struct tegra_emc *emc)
+ }
+
+ if (cfg_mismatches) {
+- of_node_put(np);
+ continue;
+ }
+ }
+diff --git a/drivers/mfd/syscon.c b/drivers/mfd/syscon.c
+index 2ce15f60eb1071..729e79e1be49fa 100644
+--- a/drivers/mfd/syscon.c
++++ b/drivers/mfd/syscon.c
+@@ -15,6 +15,7 @@
+ #include <linux/io.h>
+ #include <linux/init.h>
+ #include <linux/list.h>
++#include <linux/mutex.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+ #include <linux/of_platform.h>
+@@ -27,7 +28,7 @@
+
+ static struct platform_driver syscon_driver;
+
+-static DEFINE_SPINLOCK(syscon_list_slock);
++static DEFINE_MUTEX(syscon_list_lock);
+ static LIST_HEAD(syscon_list);
+
+ struct syscon {
+@@ -54,6 +55,8 @@ static struct syscon *of_syscon_register(struct device_node *np, bool check_res)
+ struct resource res;
+ struct reset_control *reset;
+
++ WARN_ON(!mutex_is_locked(&syscon_list_lock));
++
+ struct syscon *syscon __free(kfree) = kzalloc(sizeof(*syscon), GFP_KERNEL);
+ if (!syscon)
+ return ERR_PTR(-ENOMEM);
+@@ -144,9 +147,7 @@ static struct syscon *of_syscon_register(struct device_node *np, bool check_res)
+ syscon->regmap = regmap;
+ syscon->np = np;
+
+- spin_lock(&syscon_list_slock);
+ list_add_tail(&syscon->list, &syscon_list);
+- spin_unlock(&syscon_list_slock);
+
+ return_ptr(syscon);
+
+@@ -167,7 +168,7 @@ static struct regmap *device_node_get_regmap(struct device_node *np,
+ {
+ struct syscon *entry, *syscon = NULL;
+
+- spin_lock(&syscon_list_slock);
++ mutex_lock(&syscon_list_lock);
+
+ list_for_each_entry(entry, &syscon_list, list)
+ if (entry->np == np) {
+@@ -175,11 +176,11 @@ static struct regmap *device_node_get_regmap(struct device_node *np,
+ break;
+ }
+
+- spin_unlock(&syscon_list_slock);
+-
+ if (!syscon)
+ syscon = of_syscon_register(np, check_res);
+
++ mutex_unlock(&syscon_list_lock);
++
+ if (IS_ERR(syscon))
+ return ERR_CAST(syscon);
+
+@@ -210,7 +211,7 @@ int of_syscon_register_regmap(struct device_node *np, struct regmap *regmap)
+ return -ENOMEM;
+
+ /* check if syscon entry already exists */
+- spin_lock(&syscon_list_slock);
++ mutex_lock(&syscon_list_lock);
+
+ list_for_each_entry(entry, &syscon_list, list)
+ if (entry->np == np) {
+@@ -223,12 +224,12 @@ int of_syscon_register_regmap(struct device_node *np, struct regmap *regmap)
+
+ /* register the regmap in syscon list */
+ list_add_tail(&syscon->list, &syscon_list);
+- spin_unlock(&syscon_list_slock);
++ mutex_unlock(&syscon_list_lock);
+
+ return 0;
+
+ err_unlock:
+- spin_unlock(&syscon_list_slock);
++ mutex_unlock(&syscon_list_lock);
+ kfree(syscon);
+ return ret;
+ }
+diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c
+index f150d8769f1986..285a748748d701 100644
+--- a/drivers/misc/cardreader/rtsx_usb.c
++++ b/drivers/misc/cardreader/rtsx_usb.c
+@@ -286,6 +286,7 @@ static int rtsx_usb_get_status_with_bulk(struct rtsx_ucr *ucr, u16 *status)
+ int rtsx_usb_get_card_status(struct rtsx_ucr *ucr, u16 *status)
+ {
+ int ret;
++ u8 interrupt_val = 0;
+ u16 *buf;
+
+ if (!status)
+@@ -308,6 +309,20 @@ int rtsx_usb_get_card_status(struct rtsx_ucr *ucr, u16 *status)
+ ret = rtsx_usb_get_status_with_bulk(ucr, status);
+ }
+
++ rtsx_usb_read_register(ucr, CARD_INT_PEND, &interrupt_val);
++ /* Cross check presence with interrupts */
++ if (*status & XD_CD)
++ if (!(interrupt_val & XD_INT))
++ *status &= ~XD_CD;
++
++ if (*status & SD_CD)
++ if (!(interrupt_val & SD_INT))
++ *status &= ~SD_CD;
++
++ if (*status & MS_CD)
++ if (!(interrupt_val & MS_INT))
++ *status &= ~MS_CD;
++
+ /* usb_control_msg may return positive when success */
+ if (ret < 0)
+ return ret;
+diff --git a/drivers/mtd/hyperbus/hbmc-am654.c b/drivers/mtd/hyperbus/hbmc-am654.c
+index dbe3eb361cca28..4b6cbee23fe893 100644
+--- a/drivers/mtd/hyperbus/hbmc-am654.c
++++ b/drivers/mtd/hyperbus/hbmc-am654.c
+@@ -174,26 +174,30 @@ static int am654_hbmc_probe(struct platform_device *pdev)
+ priv->hbdev.np = of_get_next_child(np, NULL);
+ ret = of_address_to_resource(priv->hbdev.np, 0, &res);
+ if (ret)
+- return ret;
++ goto put_node;
+
+ if (of_property_read_bool(dev->of_node, "mux-controls")) {
+ struct mux_control *control = devm_mux_control_get(dev, NULL);
+
+- if (IS_ERR(control))
+- return PTR_ERR(control);
++ if (IS_ERR(control)) {
++ ret = PTR_ERR(control);
++ goto put_node;
++ }
+
+ ret = mux_control_select(control, 1);
+ if (ret) {
+ dev_err(dev, "Failed to select HBMC mux\n");
+- return ret;
++ goto put_node;
+ }
+ priv->mux_ctrl = control;
+ }
+
+ priv->hbdev.map.size = resource_size(&res);
+ priv->hbdev.map.virt = devm_ioremap_resource(dev, &res);
+- if (IS_ERR(priv->hbdev.map.virt))
+- return PTR_ERR(priv->hbdev.map.virt);
++ if (IS_ERR(priv->hbdev.map.virt)) {
++ ret = PTR_ERR(priv->hbdev.map.virt);
++ goto disable_mux;
++ }
+
+ priv->ctlr.dev = dev;
+ priv->ctlr.ops = &am654_hbmc_ops;
+@@ -226,6 +230,8 @@ static int am654_hbmc_probe(struct platform_device *pdev)
+ disable_mux:
+ if (priv->mux_ctrl)
+ mux_control_deselect(priv->mux_ctrl);
++put_node:
++ of_node_put(priv->hbdev.np);
+ return ret;
+ }
+
+@@ -241,6 +247,7 @@ static void am654_hbmc_remove(struct platform_device *pdev)
+
+ if (dev_priv->rx_chan)
+ dma_release_channel(dev_priv->rx_chan);
++ of_node_put(priv->hbdev.np);
+ }
+
+ static const struct of_device_id am654_hbmc_dt_ids[] = {
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index 1b2ec0fec60c7a..e76df6a00ed4f5 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -2342,6 +2342,11 @@ static int brcmnand_write(struct mtd_info *mtd, struct nand_chip *chip,
+ brcmnand_send_cmd(host, CMD_PROGRAM_PAGE);
+ status = brcmnand_waitfunc(chip);
+
++ if (status < 0) {
++ ret = status;
++ goto out;
++ }
++
+ if (status & NAND_STATUS_FAIL) {
+ dev_info(ctrl->dev, "program failed at %llx\n",
+ (unsigned long long)addr);
+diff --git a/drivers/net/ethernet/broadcom/bgmac.h b/drivers/net/ethernet/broadcom/bgmac.h
+index d73ef262991d61..6fee9a41839c0b 100644
+--- a/drivers/net/ethernet/broadcom/bgmac.h
++++ b/drivers/net/ethernet/broadcom/bgmac.h
+@@ -328,8 +328,7 @@
+ #define BGMAC_RX_FRAME_OFFSET 30 /* There are 2 unused bytes between header and real data */
+ #define BGMAC_RX_BUF_OFFSET (NET_SKB_PAD + NET_IP_ALIGN - \
+ BGMAC_RX_FRAME_OFFSET)
+-/* Jumbo frame size with FCS */
+-#define BGMAC_RX_MAX_FRAME_SIZE 9724
++#define BGMAC_RX_MAX_FRAME_SIZE 1536
+ #define BGMAC_RX_BUF_SIZE (BGMAC_RX_FRAME_OFFSET + BGMAC_RX_MAX_FRAME_SIZE)
+ #define BGMAC_RX_ALLOC_SIZE (SKB_DATA_ALIGN(BGMAC_RX_BUF_SIZE + BGMAC_RX_BUF_OFFSET) + \
+ SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
+diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c
+index 150cc94ae9f884..25a604379b2f43 100644
+--- a/drivers/net/ethernet/davicom/dm9000.c
++++ b/drivers/net/ethernet/davicom/dm9000.c
+@@ -1777,10 +1777,11 @@ static void dm9000_drv_remove(struct platform_device *pdev)
+
+ unregister_netdev(ndev);
+ dm9000_release_board(pdev, dm);
+- free_netdev(ndev); /* free device structure */
+ if (dm->power_supply)
+ regulator_disable(dm->power_supply);
+
++ free_netdev(ndev); /* free device structure */
++
+ dev_dbg(&pdev->dev, "released and freed device\n");
+ }
+
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 49d1748e0c043d..2b05d9c6c21a43 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -840,6 +840,8 @@ static int fec_enet_txq_submit_tso(struct fec_enet_priv_tx_q *txq,
+ struct fec_enet_private *fep = netdev_priv(ndev);
+ int hdr_len, total_len, data_left;
+ struct bufdesc *bdp = txq->bd.cur;
++ struct bufdesc *tmp_bdp;
++ struct bufdesc_ex *ebdp;
+ struct tso_t tso;
+ unsigned int index = 0;
+ int ret;
+@@ -913,7 +915,34 @@ static int fec_enet_txq_submit_tso(struct fec_enet_priv_tx_q *txq,
+ return 0;
+
+ err_release:
+- /* TODO: Release all used data descriptors for TSO */
++ /* Release all used data descriptors for TSO */
++ tmp_bdp = txq->bd.cur;
++
++ while (tmp_bdp != bdp) {
++ /* Unmap data buffers */
++ if (tmp_bdp->cbd_bufaddr &&
++ !IS_TSO_HEADER(txq, fec32_to_cpu(tmp_bdp->cbd_bufaddr)))
++ dma_unmap_single(&fep->pdev->dev,
++ fec32_to_cpu(tmp_bdp->cbd_bufaddr),
++ fec16_to_cpu(tmp_bdp->cbd_datlen),
++ DMA_TO_DEVICE);
++
++ /* Clear standard buffer descriptor fields */
++ tmp_bdp->cbd_sc = 0;
++ tmp_bdp->cbd_datlen = 0;
++ tmp_bdp->cbd_bufaddr = 0;
++
++ /* Handle extended descriptor if enabled */
++ if (fep->bufdesc_ex) {
++ ebdp = (struct bufdesc_ex *)tmp_bdp;
++ ebdp->cbd_esc = 0;
++ }
++
++ tmp_bdp = fec_enet_get_nextdesc(tmp_bdp, &txq->bd);
++ }
++
++ dev_kfree_skb_any(skb);
++
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.c b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+index 9a63fbc6940831..b25fb400f4767e 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.c
+@@ -40,6 +40,21 @@ EXPORT_SYMBOL(hnae3_unregister_ae_algo_prepare);
+ */
+ static DEFINE_MUTEX(hnae3_common_lock);
+
++/* ensure the drivers are unloaded one by one */
++static DEFINE_MUTEX(hnae3_unload_lock);
++
++void hnae3_acquire_unload_lock(void)
++{
++ mutex_lock(&hnae3_unload_lock);
++}
++EXPORT_SYMBOL(hnae3_acquire_unload_lock);
++
++void hnae3_release_unload_lock(void)
++{
++ mutex_unlock(&hnae3_unload_lock);
++}
++EXPORT_SYMBOL(hnae3_release_unload_lock);
++
+ static bool hnae3_client_match(enum hnae3_client_type client_type)
+ {
+ if (client_type == HNAE3_CLIENT_KNIC ||
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+index d873523e84f271..388c70331a55b5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h
++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h
+@@ -963,4 +963,6 @@ int hnae3_register_client(struct hnae3_client *client);
+ void hnae3_set_client_init_flag(struct hnae3_client *client,
+ struct hnae3_ae_dev *ae_dev,
+ unsigned int inited);
++void hnae3_acquire_unload_lock(void);
++void hnae3_release_unload_lock(void);
+ #endif
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+index 73825b6bd485d1..dc60ac3bde7f2c 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_enet.c
+@@ -6002,9 +6002,11 @@ module_init(hns3_init_module);
+ */
+ static void __exit hns3_exit_module(void)
+ {
++ hnae3_acquire_unload_lock();
+ pci_unregister_driver(&hns3_driver);
+ hnae3_unregister_client(&client);
+ hns3_dbg_unregister_debugfs();
++ hnae3_release_unload_lock();
+ }
+ module_exit(hns3_exit_module);
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+index 9a67fe0554a52b..06eedf80cfac4f 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c
+@@ -12929,9 +12929,11 @@ static int __init hclge_init(void)
+
+ static void __exit hclge_exit(void)
+ {
++ hnae3_acquire_unload_lock();
+ hnae3_unregister_ae_algo_prepare(&ae_algo);
+ hnae3_unregister_ae_algo(&ae_algo);
+ destroy_workqueue(hclge_wq);
++ hnae3_release_unload_lock();
+ }
+ module_init(hclge_init);
+ module_exit(hclge_exit);
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index d47bd8d6145f97..fd5992164846b1 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -3412,8 +3412,10 @@ static int __init hclgevf_init(void)
+
+ static void __exit hclgevf_exit(void)
+ {
++ hnae3_acquire_unload_lock();
+ hnae3_unregister_ae_algo(&ae_algovf);
+ destroy_workqueue(hclgevf_wq);
++ hnae3_release_unload_lock();
+ }
+ module_init(hclgevf_init);
+ module_exit(hclgevf_exit);
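The three module-exit hunks above serialize on one exported mutex so the hns3 client and the PF/VF algo drivers cannot tear down the shared hnae3 framework concurrently. Stripped of the driver specifics, the pattern is a shared lock around each module's exit work; a minimal sketch:

#include <linux/module.h>
#include <linux/mutex.h>

/* one lock, exported by the core and shared by all co-operating modules */
static DEFINE_MUTEX(unload_lock);

static void __exit my_exit(void)
{
	mutex_lock(&unload_lock);
	/* unregister from the shared framework; no other module's exit
	 * path can interleave with this critical section
	 */
	mutex_unlock(&unload_lock);
}
module_exit(my_exit);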
+diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
+index f782402cd78986..5516795cc250a8 100644
+--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
++++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
+@@ -773,6 +773,11 @@ iavf_vlan_filter *iavf_add_vlan(struct iavf_adapter *adapter,
+ f->state = IAVF_VLAN_ADD;
+ adapter->num_vlan_filters++;
+ iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_ADD_VLAN_FILTER);
++ } else if (f->state == IAVF_VLAN_REMOVE) {
++ /* IAVF_VLAN_REMOVE means that VLAN wasn't yet removed.
++ * We can safely just change the state here.
++ */
++ f->state = IAVF_VLAN_ACTIVE;
+ }
+
+ clearout:
+@@ -793,8 +798,18 @@ static void iavf_del_vlan(struct iavf_adapter *adapter, struct iavf_vlan vlan)
+
+ f = iavf_find_vlan(adapter, vlan);
+ if (f) {
+- f->state = IAVF_VLAN_REMOVE;
+- iavf_schedule_aq_request(adapter, IAVF_FLAG_AQ_DEL_VLAN_FILTER);
++ /* IAVF_VLAN_ADD means that VLAN wasn't even added yet.
++ * Remove it from the list.
++ */
++ if (f->state == IAVF_VLAN_ADD) {
++ list_del(&f->list);
++ kfree(f);
++ adapter->num_vlan_filters--;
++ } else {
++ f->state = IAVF_VLAN_REMOVE;
++ iavf_schedule_aq_request(adapter,
++ IAVF_FLAG_AQ_DEL_VLAN_FILTER);
++ }
+ }
+
+ spin_unlock_bh(&adapter->mac_vlan_list_lock);
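The two iavf hunks above make add and delete symmetric over the filter's pending state: re-adding a VLAN whose filter is still queued for removal simply flips it back to active, and deleting one the PF never saw frees it locally instead of issuing a firmware request. A reduced sketch of that state handling (enum and helper names are illustrative):

enum vlan_state { VLAN_ADD, VLAN_ACTIVE, VLAN_REMOVE };

/* re-adding a filter that is only queued for removal needs no request */
static enum vlan_state vlan_on_add(enum vlan_state s)
{
	return s == VLAN_REMOVE ? VLAN_ACTIVE : s;
}

/* a filter the PF never saw can be freed locally; anything else must
 * go through the firmware removal request
 */
static bool vlan_del_is_local(enum vlan_state s)
{
	return s == VLAN_ADD;
}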
+diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+index 80f3dfd2712430..66ae0352c6bca0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
++++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+@@ -1491,7 +1491,23 @@ struct ice_aqc_dnl_equa_param {
+ #define ICE_AQC_RX_EQU_POST1 (0x12 << ICE_AQC_RX_EQU_SHIFT)
+ #define ICE_AQC_RX_EQU_BFLF (0x13 << ICE_AQC_RX_EQU_SHIFT)
+ #define ICE_AQC_RX_EQU_BFHF (0x14 << ICE_AQC_RX_EQU_SHIFT)
+-#define ICE_AQC_RX_EQU_DRATE (0x15 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_CTLE_GAINHF (0x20 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_CTLE_GAINLF (0x21 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_CTLE_GAINDC (0x22 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_CTLE_BW (0x23 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_GAIN (0x30 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_GAIN2 (0x31 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_2 (0x32 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_3 (0x33 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_4 (0x34 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_5 (0x35 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_6 (0x36 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_7 (0x37 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_8 (0x38 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_9 (0x39 << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_10 (0x3A << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_11 (0x3B << ICE_AQC_RX_EQU_SHIFT)
++#define ICE_AQC_RX_EQU_DFE_12 (0x3C << ICE_AQC_RX_EQU_SHIFT)
+ #define ICE_AQC_TX_EQU_PRE1 0x0
+ #define ICE_AQC_TX_EQU_PRE3 0x3
+ #define ICE_AQC_TX_EQU_ATTEN 0x4
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+index d5cc934d135949..7d1feeb317be34 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
+@@ -693,75 +693,52 @@ static int ice_get_port_topology(struct ice_hw *hw, u8 lport,
+ static int ice_get_tx_rx_equa(struct ice_hw *hw, u8 serdes_num,
+ struct ice_serdes_equalization_to_ethtool *ptr)
+ {
++ static const int tx = ICE_AQC_OP_CODE_TX_EQU;
++ static const int rx = ICE_AQC_OP_CODE_RX_EQU;
++ struct {
++ int data_in;
++ int opcode;
++ int *out;
++ } aq_params[] = {
++ { ICE_AQC_TX_EQU_PRE1, tx, &ptr->tx_equ_pre1 },
++ { ICE_AQC_TX_EQU_PRE3, tx, &ptr->tx_equ_pre3 },
++ { ICE_AQC_TX_EQU_ATTEN, tx, &ptr->tx_equ_atten },
++ { ICE_AQC_TX_EQU_POST1, tx, &ptr->tx_equ_post1 },
++ { ICE_AQC_TX_EQU_PRE2, tx, &ptr->tx_equ_pre2 },
++ { ICE_AQC_RX_EQU_PRE2, rx, &ptr->rx_equ_pre2 },
++ { ICE_AQC_RX_EQU_PRE1, rx, &ptr->rx_equ_pre1 },
++ { ICE_AQC_RX_EQU_POST1, rx, &ptr->rx_equ_post1 },
++ { ICE_AQC_RX_EQU_BFLF, rx, &ptr->rx_equ_bflf },
++ { ICE_AQC_RX_EQU_BFHF, rx, &ptr->rx_equ_bfhf },
++ { ICE_AQC_RX_EQU_CTLE_GAINHF, rx, &ptr->rx_equ_ctle_gainhf },
++ { ICE_AQC_RX_EQU_CTLE_GAINLF, rx, &ptr->rx_equ_ctle_gainlf },
++ { ICE_AQC_RX_EQU_CTLE_GAINDC, rx, &ptr->rx_equ_ctle_gaindc },
++ { ICE_AQC_RX_EQU_CTLE_BW, rx, &ptr->rx_equ_ctle_bw },
++ { ICE_AQC_RX_EQU_DFE_GAIN, rx, &ptr->rx_equ_dfe_gain },
++ { ICE_AQC_RX_EQU_DFE_GAIN2, rx, &ptr->rx_equ_dfe_gain_2 },
++ { ICE_AQC_RX_EQU_DFE_2, rx, &ptr->rx_equ_dfe_2 },
++ { ICE_AQC_RX_EQU_DFE_3, rx, &ptr->rx_equ_dfe_3 },
++ { ICE_AQC_RX_EQU_DFE_4, rx, &ptr->rx_equ_dfe_4 },
++ { ICE_AQC_RX_EQU_DFE_5, rx, &ptr->rx_equ_dfe_5 },
++ { ICE_AQC_RX_EQU_DFE_6, rx, &ptr->rx_equ_dfe_6 },
++ { ICE_AQC_RX_EQU_DFE_7, rx, &ptr->rx_equ_dfe_7 },
++ { ICE_AQC_RX_EQU_DFE_8, rx, &ptr->rx_equ_dfe_8 },
++ { ICE_AQC_RX_EQU_DFE_9, rx, &ptr->rx_equ_dfe_9 },
++ { ICE_AQC_RX_EQU_DFE_10, rx, &ptr->rx_equ_dfe_10 },
++ { ICE_AQC_RX_EQU_DFE_11, rx, &ptr->rx_equ_dfe_11 },
++ { ICE_AQC_RX_EQU_DFE_12, rx, &ptr->rx_equ_dfe_12 },
++ };
+ int err;
+
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_PRE1,
+- ICE_AQC_OP_CODE_TX_EQU, serdes_num,
+- &ptr->tx_equalization_pre1);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_PRE3,
+- ICE_AQC_OP_CODE_TX_EQU, serdes_num,
+- &ptr->tx_equalization_pre3);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_ATTEN,
+- ICE_AQC_OP_CODE_TX_EQU, serdes_num,
+- &ptr->tx_equalization_atten);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_POST1,
+- ICE_AQC_OP_CODE_TX_EQU, serdes_num,
+- &ptr->tx_equalization_post1);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_TX_EQU_PRE2,
+- ICE_AQC_OP_CODE_TX_EQU, serdes_num,
+- &ptr->tx_equalization_pre2);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_PRE2,
+- ICE_AQC_OP_CODE_RX_EQU, serdes_num,
+- &ptr->rx_equalization_pre2);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_PRE1,
+- ICE_AQC_OP_CODE_RX_EQU, serdes_num,
+- &ptr->rx_equalization_pre1);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_POST1,
+- ICE_AQC_OP_CODE_RX_EQU, serdes_num,
+- &ptr->rx_equalization_post1);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_BFLF,
+- ICE_AQC_OP_CODE_RX_EQU, serdes_num,
+- &ptr->rx_equalization_bflf);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_BFHF,
+- ICE_AQC_OP_CODE_RX_EQU, serdes_num,
+- &ptr->rx_equalization_bfhf);
+- if (err)
+- return err;
+-
+- err = ice_aq_get_phy_equalization(hw, ICE_AQC_RX_EQU_DRATE,
+- ICE_AQC_OP_CODE_RX_EQU, serdes_num,
+- &ptr->rx_equalization_drate);
+- if (err)
+- return err;
++ for (int i = 0; i < ARRAY_SIZE(aq_params); i++) {
++ err = ice_aq_get_phy_equalization(hw, aq_params[i].data_in,
++ aq_params[i].opcode,
++ serdes_num, aq_params[i].out);
++ if (err)
++ break;
++ }
+
+- return 0;
++ return err;
+ }
+
+ /**
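The ice_get_tx_rx_equa() rewrite above replaces eleven copies of the same call-and-check with a parameter table and one loop, which is also what makes wiring up the new CTLE/DFE registers cheap. The idiom reduced to standalone form, with get_value() as a stand-in for ice_aq_get_phy_equalization():

struct eq_param {
	int data_in;
	int opcode;
	int *out;
};

/* stand-in for the firmware query */
static int get_value(int data_in, int opcode, int *out)
{
	*out = data_in ^ opcode;
	return 0;
}

static int read_all(const struct eq_param *tbl, int n)
{
	int err = 0;

	/* one loop instead of n copies of call-and-check */
	for (int i = 0; i < n; i++) {
		err = get_value(tbl[i].data_in, tbl[i].opcode, tbl[i].out);
		if (err)
			break;
	}
	return err;
}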
+diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.h b/drivers/net/ethernet/intel/ice/ice_ethtool.h
+index 9acccae38625ae..23b2cfbc9684c0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_ethtool.h
++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.h
+@@ -10,17 +10,33 @@ struct ice_phy_type_to_ethtool {
+ };
+
+ struct ice_serdes_equalization_to_ethtool {
+- int rx_equalization_pre2;
+- int rx_equalization_pre1;
+- int rx_equalization_post1;
+- int rx_equalization_bflf;
+- int rx_equalization_bfhf;
+- int rx_equalization_drate;
+- int tx_equalization_pre1;
+- int tx_equalization_pre3;
+- int tx_equalization_atten;
+- int tx_equalization_post1;
+- int tx_equalization_pre2;
++ int rx_equ_pre2;
++ int rx_equ_pre1;
++ int rx_equ_post1;
++ int rx_equ_bflf;
++ int rx_equ_bfhf;
++ int rx_equ_ctle_gainhf;
++ int rx_equ_ctle_gainlf;
++ int rx_equ_ctle_gaindc;
++ int rx_equ_ctle_bw;
++ int rx_equ_dfe_gain;
++ int rx_equ_dfe_gain_2;
++ int rx_equ_dfe_2;
++ int rx_equ_dfe_3;
++ int rx_equ_dfe_4;
++ int rx_equ_dfe_5;
++ int rx_equ_dfe_6;
++ int rx_equ_dfe_7;
++ int rx_equ_dfe_8;
++ int rx_equ_dfe_9;
++ int rx_equ_dfe_10;
++ int rx_equ_dfe_11;
++ int rx_equ_dfe_12;
++ int tx_equ_pre1;
++ int tx_equ_pre3;
++ int tx_equ_atten;
++ int tx_equ_post1;
++ int tx_equ_pre2;
+ };
+
+ struct ice_regdump_to_ethtool {
+diff --git a/drivers/net/ethernet/intel/ice/ice_parser.h b/drivers/net/ethernet/intel/ice/ice_parser.h
+index 6509d807627cee..4f56d53d56b9ad 100644
+--- a/drivers/net/ethernet/intel/ice/ice_parser.h
++++ b/drivers/net/ethernet/intel/ice/ice_parser.h
+@@ -257,7 +257,6 @@ ice_pg_nm_cam_match(struct ice_pg_nm_cam_item *table, int size,
+ /*** ICE_SID_RXPARSER_BOOST_TCAM and ICE_SID_LBL_RXPARSER_TMEM sections ***/
+ #define ICE_BST_TCAM_TABLE_SIZE 256
+ #define ICE_BST_TCAM_KEY_SIZE 20
+-#define ICE_BST_KEY_TCAM_SIZE 19
+
+ /* Boost TCAM item */
+ struct ice_bst_tcam_item {
+@@ -401,7 +400,6 @@ u16 ice_xlt_kb_flag_get(struct ice_xlt_kb *kb, u64 pkt_flag);
+ #define ICE_PARSER_GPR_NUM 128
+ #define ICE_PARSER_FLG_NUM 64
+ #define ICE_PARSER_ERR_NUM 16
+-#define ICE_BST_KEY_SIZE 10
+ #define ICE_MARKER_ID_SIZE 9
+ #define ICE_MARKER_MAX_SIZE \
+ (ICE_MARKER_ID_SIZE * BITS_PER_BYTE - 1)
+@@ -431,13 +429,13 @@ struct ice_parser_rt {
+ u8 pkt_buf[ICE_PARSER_MAX_PKT_LEN + ICE_PARSER_PKT_REV];
+ u16 pkt_len;
+ u16 po;
+- u8 bst_key[ICE_BST_KEY_SIZE];
++ u8 bst_key[ICE_BST_TCAM_KEY_SIZE];
+ struct ice_pg_cam_key pg_key;
++ u8 pg_prio;
+ struct ice_alu *alu0;
+ struct ice_alu *alu1;
+ struct ice_alu *alu2;
+ struct ice_pg_cam_action *action;
+- u8 pg_prio;
+ struct ice_gpr_pu pu;
+ u8 markers[ICE_MARKER_ID_SIZE];
+ bool protocols[ICE_PO_PAIR_SIZE];
+diff --git a/drivers/net/ethernet/intel/ice/ice_parser_rt.c b/drivers/net/ethernet/intel/ice/ice_parser_rt.c
+index dedf5e854e4b76..3995d662e05099 100644
+--- a/drivers/net/ethernet/intel/ice/ice_parser_rt.c
++++ b/drivers/net/ethernet/intel/ice/ice_parser_rt.c
+@@ -125,22 +125,20 @@ static void ice_bst_key_init(struct ice_parser_rt *rt,
+ else
+ key[idd] = imem->b_kb.prio;
+
+- idd = ICE_BST_KEY_TCAM_SIZE - 1;
++ idd = ICE_BST_TCAM_KEY_SIZE - 2;
+ for (i = idd; i >= 0; i--) {
+ int j;
+
+ j = ho + idd - i;
+ if (j < ICE_PARSER_MAX_PKT_LEN)
+- key[i] = rt->pkt_buf[ho + idd - i];
++ key[i] = rt->pkt_buf[j];
+ else
+ key[i] = 0;
+ }
+
+- ice_debug(rt->psr->hw, ICE_DBG_PARSER, "Generated Boost TCAM Key:\n");
+- ice_debug(rt->psr->hw, ICE_DBG_PARSER, "%02X %02X %02X %02X %02X %02X %02X %02X %02X %02X\n",
+- key[0], key[1], key[2], key[3], key[4],
+- key[5], key[6], key[7], key[8], key[9]);
+- ice_debug(rt->psr->hw, ICE_DBG_PARSER, "\n");
++ ice_debug_array_w_prefix(rt->psr->hw, ICE_DBG_PARSER,
++ KBUILD_MODNAME ": Generated Boost TCAM Key",
++ key, ICE_BST_TCAM_KEY_SIZE);
+ }
+
+ static u16 ice_bit_rev_u16(u16 v, int len)
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq.c b/drivers/net/ethernet/intel/idpf/idpf_controlq.c
+index 4849590a5591f1..b28991dd187036 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_controlq.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_controlq.c
+@@ -376,6 +376,9 @@ int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count,
+ if (!(le16_to_cpu(desc->flags) & IDPF_CTLQ_FLAG_DD))
+ break;
+
++ /* Ensure no other fields are read until DD flag is checked */
++ dma_rmb();
++
+ /* strip off FW internal code */
+ desc_err = le16_to_cpu(desc->ret_val) & 0xff;
+
+@@ -563,6 +566,9 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg,
+ if (!(flags & IDPF_CTLQ_FLAG_DD))
+ break;
+
++ /* Ensure no other fields are read until DD flag is checked */
++ dma_rmb();
++
+ q_msg[i].vmvf_type = (flags &
+ (IDPF_CTLQ_FLAG_FTYPE_VM |
+ IDPF_CTLQ_FLAG_FTYPE_PF)) >>
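Both idpf hunks above insert the same barrier for the same reason: the DD (descriptor done) flag is the last field the device writes, so the CPU must not read the other descriptor fields until the flag check has completed. A minimal sketch of the convention; the descriptor layout and DD mask are illustrative, not the driver's real ones:

#include <asm/barrier.h>
#include <asm/byteorder.h>
#include <linux/bits.h>
#include <linux/compiler.h>
#include <linux/types.h>

#define DESC_FLAG_DD	BIT(0)	/* illustrative mask */

struct hw_desc {
	__le16 flags;	/* written last by the device */
	__le16 ret_val;
	__le32 datalen;
};

static bool desc_ready(const struct hw_desc *desc)
{
	if (!(le16_to_cpu(READ_ONCE(desc->flags)) & DESC_FLAG_DD))
		return false;
	/* ensure no other fields are read until DD flag is checked */
	dma_rmb();
	return true;
}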
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c
+index db476b3314c8a5..dfd56fc5ff6550 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_main.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_main.c
+@@ -174,7 +174,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ pci_set_master(pdev);
+ pci_set_drvdata(pdev, adapter);
+
+- adapter->init_wq = alloc_workqueue("%s-%s-init", 0, 0,
++ adapter->init_wq = alloc_workqueue("%s-%s-init",
++ WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->init_wq) {
+@@ -183,7 +184,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ goto err_free;
+ }
+
+- adapter->serv_wq = alloc_workqueue("%s-%s-service", 0, 0,
++ adapter->serv_wq = alloc_workqueue("%s-%s-service",
++ WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->serv_wq) {
+@@ -192,7 +194,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ goto err_serv_wq_alloc;
+ }
+
+- adapter->mbx_wq = alloc_workqueue("%s-%s-mbx", 0, 0,
++ adapter->mbx_wq = alloc_workqueue("%s-%s-mbx",
++ WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->mbx_wq) {
+@@ -201,7 +204,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ goto err_mbx_wq_alloc;
+ }
+
+- adapter->stats_wq = alloc_workqueue("%s-%s-stats", 0, 0,
++ adapter->stats_wq = alloc_workqueue("%s-%s-stats",
++ WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->stats_wq) {
+@@ -210,7 +214,8 @@ static int idpf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ goto err_stats_wq_alloc;
+ }
+
+- adapter->vc_event_wq = alloc_workqueue("%s-%s-vc_event", 0, 0,
++ adapter->vc_event_wq = alloc_workqueue("%s-%s-vc_event",
++ WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
+ dev_driver_string(dev),
+ dev_name(dev));
+ if (!adapter->vc_event_wq) {
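All five idpf queues above gain the same two flags: WQ_MEM_RECLAIM guarantees the queue a rescuer thread so work involved in resets and packet processing can make forward progress under memory pressure, and WQ_UNBOUND lets the work run on any CPU instead of the submitting one. The allocation pattern in isolation:

#include <linux/errno.h>
#include <linux/workqueue.h>

static struct workqueue_struct *init_wq;

static int create_driver_wq(const char *drv, const char *dev)
{
	init_wq = alloc_workqueue("%s-%s-init",
				  WQ_UNBOUND | WQ_MEM_RECLAIM, 0,
				  drv, dev);
	return init_wq ? 0 : -ENOMEM;
}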
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+index d46c95f91b0d81..99bdb95bf22661 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_virtchnl.c
+@@ -612,14 +612,15 @@ idpf_vc_xn_forward_reply(struct idpf_adapter *adapter,
+ return -EINVAL;
+ }
+ xn = &adapter->vcxn_mngr->ring[xn_idx];
++ idpf_vc_xn_lock(xn);
+ salt = FIELD_GET(IDPF_VC_XN_SALT_M, msg_info);
+ if (xn->salt != salt) {
+ dev_err_ratelimited(&adapter->pdev->dev, "Transaction salt does not match (%02x != %02x)\n",
+ xn->salt, salt);
++ idpf_vc_xn_unlock(xn);
+ return -EINVAL;
+ }
+
+- idpf_vc_xn_lock(xn);
+ switch (xn->state) {
+ case IDPF_VC_XN_WAITING:
+ /* success */
+@@ -3077,12 +3078,21 @@ int idpf_vc_core_init(struct idpf_adapter *adapter)
+ */
+ void idpf_vc_core_deinit(struct idpf_adapter *adapter)
+ {
++ bool remove_in_prog;
++
+ if (!test_bit(IDPF_VC_CORE_INIT, adapter->flags))
+ return;
+
++ /* Avoid transaction timeouts when called during reset */
++ remove_in_prog = test_bit(IDPF_REMOVE_IN_PROG, adapter->flags);
++ if (!remove_in_prog)
++ idpf_vc_xn_shutdown(adapter->vcxn_mngr);
++
+ idpf_deinit_task(adapter);
+ idpf_intr_rel(adapter);
+- idpf_vc_xn_shutdown(adapter->vcxn_mngr);
++
++ if (remove_in_prog)
++ idpf_vc_xn_shutdown(adapter->vcxn_mngr);
+
+ cancel_delayed_work_sync(&adapter->serv_task);
+ cancel_delayed_work_sync(&adapter->mbx_task);
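The first idpf_virtchnl.c hunk fixes a time-of-check/time-of-use race: the transaction salt was compared before taking the lock that protects it, so the slot could be recycled (and re-salted) in between. The general rule, sketched here with a plain mutex rather than the driver's idpf_vc_xn_lock() wrapper:

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/types.h>

struct xn {
	struct mutex lock;
	u8 salt;	/* re-randomized whenever the slot is recycled */
};

static int xn_claim(struct xn *x, u8 salt)
{
	mutex_lock(&x->lock);		/* lock first ... */
	if (x->salt != salt) {		/* ... then validate under the lock */
		mutex_unlock(&x->lock);
		return -EINVAL;
	}
	/* the slot is known-valid until we unlock */
	mutex_unlock(&x->lock);
	return 0;
}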
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+index 549436efc20488..730aa5632cceee 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+@@ -995,12 +995,6 @@ static void octep_get_stats64(struct net_device *netdev,
+ struct octep_device *oct = netdev_priv(netdev);
+ int q;
+
+- if (netif_running(netdev))
+- octep_ctrl_net_get_if_stats(oct,
+- OCTEP_CTRL_NET_INVALID_VFID,
+- &oct->iface_rx_stats,
+- &oct->iface_tx_stats);
+-
+ tx_packets = 0;
+ tx_bytes = 0;
+ rx_packets = 0;
+@@ -1018,10 +1012,6 @@ static void octep_get_stats64(struct net_device *netdev,
+ stats->tx_bytes = tx_bytes;
+ stats->rx_packets = rx_packets;
+ stats->rx_bytes = rx_bytes;
+- stats->multicast = oct->iface_rx_stats.mcast_pkts;
+- stats->rx_errors = oct->iface_rx_stats.err_pkts;
+- stats->collisions = oct->iface_tx_stats.xscol;
+- stats->tx_fifo_errors = oct->iface_tx_stats.undflw;
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
+index 7e6771c9cdbbab..4c699514fd57a0 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
+@@ -799,14 +799,6 @@ static void octep_vf_get_stats64(struct net_device *netdev,
+ stats->tx_bytes = tx_bytes;
+ stats->rx_packets = rx_packets;
+ stats->rx_bytes = rx_bytes;
+- if (!octep_vf_get_if_stats(oct)) {
+- stats->multicast = oct->iface_rx_stats.mcast_pkts;
+- stats->rx_errors = oct->iface_rx_stats.err_pkts;
+- stats->rx_dropped = oct->iface_rx_stats.dropped_pkts_fifo_full +
+- oct->iface_rx_stats.err_pkts;
+- stats->rx_missed_errors = oct->iface_rx_stats.dropped_pkts_fifo_full;
+- stats->tx_dropped = oct->iface_tx_stats.dropped;
+- }
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/mediatek/airoha_eth.c b/drivers/net/ethernet/mediatek/airoha_eth.c
+index 2c26eb18528372..20cf7ba9d75084 100644
+--- a/drivers/net/ethernet/mediatek/airoha_eth.c
++++ b/drivers/net/ethernet/mediatek/airoha_eth.c
+@@ -258,11 +258,11 @@
+ #define REG_GDM3_FWD_CFG GDM3_BASE
+ #define GDM3_PAD_EN_MASK BIT(28)
+
+-#define REG_GDM4_FWD_CFG (GDM4_BASE + 0x100)
++#define REG_GDM4_FWD_CFG GDM4_BASE
+ #define GDM4_PAD_EN_MASK BIT(28)
+ #define GDM4_SPORT_OFFSET0_MASK GENMASK(11, 8)
+
+-#define REG_GDM4_SRC_PORT_SET (GDM4_BASE + 0x33c)
++#define REG_GDM4_SRC_PORT_SET (GDM4_BASE + 0x23c)
+ #define GDM4_SPORT_OFF2_MASK GENMASK(19, 16)
+ #define GDM4_SPORT_OFF1_MASK GENMASK(15, 12)
+ #define GDM4_SPORT_OFF0_MASK GENMASK(11, 8)
+@@ -2123,17 +2123,14 @@ static void airoha_hw_cleanup(struct airoha_qdma *qdma)
+ if (!qdma->q_rx[i].ndesc)
+ continue;
+
+- napi_disable(&qdma->q_rx[i].napi);
+ netif_napi_del(&qdma->q_rx[i].napi);
+ airoha_qdma_cleanup_rx_queue(&qdma->q_rx[i]);
+ if (qdma->q_rx[i].page_pool)
+ page_pool_destroy(qdma->q_rx[i].page_pool);
+ }
+
+- for (i = 0; i < ARRAY_SIZE(qdma->q_tx_irq); i++) {
+- napi_disable(&qdma->q_tx_irq[i].napi);
++ for (i = 0; i < ARRAY_SIZE(qdma->q_tx_irq); i++)
+ netif_napi_del(&qdma->q_tx_irq[i].napi);
+- }
+
+ for (i = 0; i < ARRAY_SIZE(qdma->q_tx); i++) {
+ if (!qdma->q_tx[i].ndesc)
+@@ -2158,6 +2155,21 @@ static void airoha_qdma_start_napi(struct airoha_qdma *qdma)
+ }
+ }
+
++static void airoha_qdma_stop_napi(struct airoha_qdma *qdma)
++{
++ int i;
++
++ for (i = 0; i < ARRAY_SIZE(qdma->q_tx_irq); i++)
++ napi_disable(&qdma->q_tx_irq[i].napi);
++
++ for (i = 0; i < ARRAY_SIZE(qdma->q_rx); i++) {
++ if (!qdma->q_rx[i].ndesc)
++ continue;
++
++ napi_disable(&qdma->q_rx[i].napi);
++ }
++}
++
+ static void airoha_update_hw_stats(struct airoha_gdm_port *port)
+ {
+ struct airoha_eth *eth = port->qdma->eth;
+@@ -2713,7 +2725,7 @@ static int airoha_probe(struct platform_device *pdev)
+
+ err = airoha_hw_init(pdev, eth);
+ if (err)
+- goto error;
++ goto error_hw_cleanup;
+
+ for (i = 0; i < ARRAY_SIZE(eth->qdma); i++)
+ airoha_qdma_start_napi(&eth->qdma[i]);
+@@ -2728,13 +2740,16 @@ static int airoha_probe(struct platform_device *pdev)
+ err = airoha_alloc_gdm_port(eth, np);
+ if (err) {
+ of_node_put(np);
+- goto error;
++ goto error_napi_stop;
+ }
+ }
+
+ return 0;
+
+-error:
++error_napi_stop:
++ for (i = 0; i < ARRAY_SIZE(eth->qdma); i++)
++ airoha_qdma_stop_napi(&eth->qdma[i]);
++error_hw_cleanup:
+ for (i = 0; i < ARRAY_SIZE(eth->qdma); i++)
+ airoha_hw_cleanup(&eth->qdma[i]);
+
+@@ -2755,8 +2770,10 @@ static void airoha_remove(struct platform_device *pdev)
+ struct airoha_eth *eth = platform_get_drvdata(pdev);
+ int i;
+
+- for (i = 0; i < ARRAY_SIZE(eth->qdma); i++)
++ for (i = 0; i < ARRAY_SIZE(eth->qdma); i++) {
++ airoha_qdma_stop_napi(&eth->qdma[i]);
+ airoha_hw_cleanup(&eth->qdma[i]);
++ }
+
+ for (i = 0; i < ARRAY_SIZE(eth->ports); i++) {
+ struct airoha_gdm_port *port = eth->ports[i];
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_definer.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_definer.c
+index 3f4c58bada3745..ab5f8f07f1f7e5 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_definer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_definer.c
+@@ -70,7 +70,7 @@
+ u32 second_dw_mask = (mask) & ((1 << _bit_off) - 1); \
+ _HWS_SET32(p, (v) >> _bit_off, byte_off, 0, (mask) >> _bit_off); \
+ _HWS_SET32(p, (v) & second_dw_mask, (byte_off) + DW_SIZE, \
+- (bit_off) % BITS_IN_DW, second_dw_mask); \
++ (bit_off + BITS_IN_DW) % BITS_IN_DW, second_dw_mask); \
+ } else { \
+ _HWS_SET32(p, v, byte_off, (bit_off), (mask)); \
+ } \
+diff --git a/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c b/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c
+index 46245e0b24623d..43c84900369a36 100644
+--- a/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c
++++ b/drivers/net/ethernet/mellanox/mlxfw/mlxfw_fsm.c
+@@ -14,7 +14,6 @@
+ #define MLXFW_FSM_STATE_WAIT_TIMEOUT_MS 30000
+ #define MLXFW_FSM_STATE_WAIT_ROUNDS \
+ (MLXFW_FSM_STATE_WAIT_TIMEOUT_MS / MLXFW_FSM_STATE_WAIT_CYCLE_MS)
+-#define MLXFW_FSM_MAX_COMPONENT_SIZE (10 * (1 << 20))
+
+ static const int mlxfw_fsm_state_errno[] = {
+ [MLXFW_FSM_STATE_ERR_ERROR] = -EIO,
+@@ -229,7 +228,6 @@ static int mlxfw_flash_component(struct mlxfw_dev *mlxfw_dev,
+ return err;
+ }
+
+- comp_max_size = min_t(u32, comp_max_size, MLXFW_FSM_MAX_COMPONENT_SIZE);
+ if (comp->data_size > comp_max_size) {
+ MLXFW_ERR_MSG(mlxfw_dev, extack,
+ "Component size is bigger than limit", -EINVAL);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c
+index 69cd689dbc83e9..5afe6b155ef0d5 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr.c
+@@ -1003,10 +1003,10 @@ static void mlxsw_sp_mr_route_stats_update(struct mlxsw_sp *mlxsw_sp,
+ mr->mr_ops->route_stats(mlxsw_sp, mr_route->route_priv, &packets,
+ &bytes);
+
+- if (mr_route->mfc->mfc_un.res.pkt != packets)
+- mr_route->mfc->mfc_un.res.lastuse = jiffies;
+- mr_route->mfc->mfc_un.res.pkt = packets;
+- mr_route->mfc->mfc_un.res.bytes = bytes;
++ if (atomic_long_read(&mr_route->mfc->mfc_un.res.pkt) != packets)
++ WRITE_ONCE(mr_route->mfc->mfc_un.res.lastuse, jiffies);
++ atomic_long_set(&mr_route->mfc->mfc_un.res.pkt, packets);
++ atomic_long_set(&mr_route->mfc->mfc_un.res.bytes, bytes);
+ }
+
+ static void mlxsw_sp_mr_stats_update(struct work_struct *work)
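The mlxsw hunk above exists because the mfc counters are written from this workqueue while being read locklessly elsewhere; plain long stores could tear or be reordered unpredictably. atomic_long_* makes each counter access a single tear-free operation, and WRITE_ONCE() does the same for lastuse. The pattern with illustrative field names:

#include <linux/atomic.h>
#include <linux/jiffies.h>

struct route_stats {
	atomic_long_t pkt;
	atomic_long_t bytes;
	unsigned long lastuse;	/* read with READ_ONCE() on the other side */
};

static void stats_refresh(struct route_stats *s, unsigned long pkts,
			  unsigned long bytes)
{
	if (atomic_long_read(&s->pkt) != pkts)
		WRITE_ONCE(s->lastuse, jiffies);
	atomic_long_set(&s->pkt, pkts);
	atomic_long_set(&s->bytes, bytes);
}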
+diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
+index 6f6b0566c65bcb..cc4f0d16c76303 100644
+--- a/drivers/net/ethernet/renesas/ravb_main.c
++++ b/drivers/net/ethernet/renesas/ravb_main.c
+@@ -3208,10 +3208,15 @@ static int ravb_suspend(struct device *dev)
+
+ netif_device_detach(ndev);
+
+- if (priv->wol_enabled)
+- return ravb_wol_setup(ndev);
++ rtnl_lock();
++ if (priv->wol_enabled) {
++ ret = ravb_wol_setup(ndev);
++ rtnl_unlock();
++ return ret;
++ }
+
+ ret = ravb_close(ndev);
++ rtnl_unlock();
+ if (ret)
+ return ret;
+
+@@ -3236,19 +3241,20 @@ static int ravb_resume(struct device *dev)
+ if (!netif_running(ndev))
+ return 0;
+
++ rtnl_lock();
+ /* If WoL is enabled restore the interface. */
+- if (priv->wol_enabled) {
++ if (priv->wol_enabled)
+ ret = ravb_wol_restore(ndev);
+- if (ret)
+- return ret;
+- } else {
++ else
+ ret = pm_runtime_force_resume(dev);
+- if (ret)
+- return ret;
++ if (ret) {
++ rtnl_unlock();
++ return ret;
+ }
+
+ /* Reopening the interface will restore the device to the working state. */
+ ret = ravb_open(ndev);
++ rtnl_unlock();
+ if (ret < 0)
+ goto out_rpm_put;
+
+diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
+index 7a25903e35c305..bc12c0c7347f6b 100644
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -3494,10 +3494,12 @@ static int sh_eth_suspend(struct device *dev)
+
+ netif_device_detach(ndev);
+
++ rtnl_lock();
+ if (mdp->wol_enabled)
+ ret = sh_eth_wol_setup(ndev);
+ else
+ ret = sh_eth_close(ndev);
++ rtnl_unlock();
+
+ return ret;
+ }
+@@ -3511,10 +3513,12 @@ static int sh_eth_resume(struct device *dev)
+ if (!netif_running(ndev))
+ return 0;
+
++ rtnl_lock();
+ if (mdp->wol_enabled)
+ ret = sh_eth_wol_restore(ndev);
+ else
+ ret = sh_eth_open(ndev);
++ rtnl_unlock();
+
+ if (ret < 0)
+ return ret;
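The ravb and sh_eth changes are the same fix: the open/close/WoL helpers called from suspend and resume are normally reached via ndo callbacks with the RTNL lock already held, but PM callbacks run without it, so the lock must be taken explicitly. The shape of the fix; my_close() is a hypothetical stand-in for the driver's close path:

#include <linux/netdevice.h>
#include <linux/rtnetlink.h>

static void my_close(struct net_device *ndev)
{
	/* stand-in for ravb_close()/sh_eth_close() */
}

static int pm_suspend_ndev(struct net_device *ndev)
{
	rtnl_lock();	/* ndo-level helpers assume RTNL is held */
	my_close(ndev);
	rtnl_unlock();
	return 0;
}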
+diff --git a/drivers/net/ethernet/sfc/ef100_ethtool.c b/drivers/net/ethernet/sfc/ef100_ethtool.c
+index 5c2551369812cb..6c3b74000d3b6a 100644
+--- a/drivers/net/ethernet/sfc/ef100_ethtool.c
++++ b/drivers/net/ethernet/sfc/ef100_ethtool.c
+@@ -59,6 +59,7 @@ const struct ethtool_ops ef100_ethtool_ops = {
+ .get_rxfh_indir_size = efx_ethtool_get_rxfh_indir_size,
+ .get_rxfh_key_size = efx_ethtool_get_rxfh_key_size,
+ .rxfh_per_ctx_key = true,
++ .cap_rss_rxnfc_adds = true,
+ .rxfh_priv_size = sizeof(struct efx_rss_context_priv),
+ .get_rxfh = efx_ethtool_get_rxfh,
+ .set_rxfh = efx_ethtool_set_rxfh,
+diff --git a/drivers/net/ethernet/sfc/ethtool.c b/drivers/net/ethernet/sfc/ethtool.c
+index bb1930818beba4..83d715544f7fb2 100644
+--- a/drivers/net/ethernet/sfc/ethtool.c
++++ b/drivers/net/ethernet/sfc/ethtool.c
+@@ -263,6 +263,7 @@ const struct ethtool_ops efx_ethtool_ops = {
+ .get_rxfh_indir_size = efx_ethtool_get_rxfh_indir_size,
+ .get_rxfh_key_size = efx_ethtool_get_rxfh_key_size,
+ .rxfh_per_ctx_key = true,
++ .cap_rss_rxnfc_adds = true,
+ .rxfh_priv_size = sizeof(struct efx_rss_context_priv),
+ .get_rxfh = efx_ethtool_get_rxfh,
+ .set_rxfh = efx_ethtool_set_rxfh,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index cf7b59b8cc64b3..918d7f2e8ba992 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -7236,6 +7236,36 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
+ if (priv->dma_cap.tsoen)
+ dev_info(priv->device, "TSO supported\n");
+
++ if (priv->dma_cap.number_rx_queues &&
++ priv->plat->rx_queues_to_use > priv->dma_cap.number_rx_queues) {
++ dev_warn(priv->device,
++ "Number of Rx queues (%u) exceeds dma capability\n",
++ priv->plat->rx_queues_to_use);
++ priv->plat->rx_queues_to_use = priv->dma_cap.number_rx_queues;
++ }
++ if (priv->dma_cap.number_tx_queues &&
++ priv->plat->tx_queues_to_use > priv->dma_cap.number_tx_queues) {
++ dev_warn(priv->device,
++ "Number of Tx queues (%u) exceeds dma capability\n",
++ priv->plat->tx_queues_to_use);
++ priv->plat->tx_queues_to_use = priv->dma_cap.number_tx_queues;
++ }
++
++ if (priv->dma_cap.rx_fifo_size &&
++ priv->plat->rx_fifo_size > priv->dma_cap.rx_fifo_size) {
++ dev_warn(priv->device,
++ "Rx FIFO size (%u) exceeds dma capability\n",
++ priv->plat->rx_fifo_size);
++ priv->plat->rx_fifo_size = priv->dma_cap.rx_fifo_size;
++ }
++ if (priv->dma_cap.tx_fifo_size &&
++ priv->plat->tx_fifo_size > priv->dma_cap.tx_fifo_size) {
++ dev_warn(priv->device,
++ "Tx FIFO size (%u) exceeds dma capability\n",
++ priv->plat->tx_fifo_size);
++ priv->plat->tx_fifo_size = priv->dma_cap.tx_fifo_size;
++ }
++
+ priv->hw->vlan_fail_q_en =
+ (priv->plat->flags & STMMAC_FLAG_VLAN_FAIL_Q_EN);
+ priv->hw->vlan_fail_q = priv->plat->vlan_fail_q;
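The four stmmac blocks above repeat one pattern: if platform data requests more queues or FIFO space than the DMA capability register reports, warn and fall back to the hardware limit rather than programming impossible values (a zero capability field means "not reported" and is left alone). Factored into a hypothetical helper:

#include <linux/device.h>

/* returns the usable value: the request, clamped to a reported cap */
static unsigned int clamp_to_cap(struct device *dev, const char *what,
				 unsigned int requested, unsigned int cap)
{
	if (cap && requested > cap) {
		dev_warn(dev, "%s (%u) exceeds dma capability\n",
			 what, requested);
		return cap;
	}
	return requested;
}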
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index dfca13b82bdce2..b13c7e958e6b4e 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2207,7 +2207,7 @@ static void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common)
+ for (i = 0; i < common->tx_ch_num; i++) {
+ struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
+
+- if (tx_chn->irq)
++ if (tx_chn->irq > 0)
+ devm_free_irq(dev, tx_chn->irq, tx_chn);
+
+ netif_napi_del(&tx_chn->napi_tx);
+diff --git a/drivers/net/netdevsim/netdevsim.h b/drivers/net/netdevsim/netdevsim.h
+index bf02efa10956a6..84181dcb98831f 100644
+--- a/drivers/net/netdevsim/netdevsim.h
++++ b/drivers/net/netdevsim/netdevsim.h
+@@ -129,6 +129,7 @@ struct netdevsim {
+ u32 sleep;
+ u32 __ports[2][NSIM_UDP_TUNNEL_N_PORTS];
+ u32 (*ports)[NSIM_UDP_TUNNEL_N_PORTS];
++ struct dentry *ddir;
+ struct debugfs_u32_array dfs_ports[2];
+ } udp_ports;
+
+diff --git a/drivers/net/netdevsim/udp_tunnels.c b/drivers/net/netdevsim/udp_tunnels.c
+index 02dc3123eb6c16..640b4983a9a0d1 100644
+--- a/drivers/net/netdevsim/udp_tunnels.c
++++ b/drivers/net/netdevsim/udp_tunnels.c
+@@ -112,9 +112,11 @@ nsim_udp_tunnels_info_reset_write(struct file *file, const char __user *data,
+ struct net_device *dev = file->private_data;
+ struct netdevsim *ns = netdev_priv(dev);
+
+- memset(ns->udp_ports.ports, 0, sizeof(ns->udp_ports.__ports));
+ rtnl_lock();
+- udp_tunnel_nic_reset_ntf(dev);
++ if (dev->reg_state == NETREG_REGISTERED) {
++ memset(ns->udp_ports.ports, 0, sizeof(ns->udp_ports.__ports));
++ udp_tunnel_nic_reset_ntf(dev);
++ }
+ rtnl_unlock();
+
+ return count;
+@@ -144,23 +146,23 @@ int nsim_udp_tunnels_info_create(struct nsim_dev *nsim_dev,
+ else
+ ns->udp_ports.ports = nsim_dev->udp_ports.__ports;
+
+- debugfs_create_u32("udp_ports_inject_error", 0600,
+- ns->nsim_dev_port->ddir,
++ ns->udp_ports.ddir = debugfs_create_dir("udp_ports",
++ ns->nsim_dev_port->ddir);
++
++ debugfs_create_u32("inject_error", 0600, ns->udp_ports.ddir,
+ &ns->udp_ports.inject_error);
+
+ ns->udp_ports.dfs_ports[0].array = ns->udp_ports.ports[0];
+ ns->udp_ports.dfs_ports[0].n_elements = NSIM_UDP_TUNNEL_N_PORTS;
+- debugfs_create_u32_array("udp_ports_table0", 0400,
+- ns->nsim_dev_port->ddir,
++ debugfs_create_u32_array("table0", 0400, ns->udp_ports.ddir,
+ &ns->udp_ports.dfs_ports[0]);
+
+ ns->udp_ports.dfs_ports[1].array = ns->udp_ports.ports[1];
+ ns->udp_ports.dfs_ports[1].n_elements = NSIM_UDP_TUNNEL_N_PORTS;
+- debugfs_create_u32_array("udp_ports_table1", 0400,
+- ns->nsim_dev_port->ddir,
++ debugfs_create_u32_array("table1", 0400, ns->udp_ports.ddir,
+ &ns->udp_ports.dfs_ports[1]);
+
+- debugfs_create_file("udp_ports_reset", 0200, ns->nsim_dev_port->ddir,
++ debugfs_create_file("reset", 0200, ns->udp_ports.ddir,
+ dev, &nsim_udp_tunnels_info_reset_fops);
+
+ /* Note: it's not normal to allocate the info struct like this!
+@@ -196,6 +198,9 @@ int nsim_udp_tunnels_info_create(struct nsim_dev *nsim_dev,
+
+ void nsim_udp_tunnels_info_destroy(struct net_device *dev)
+ {
++ struct netdevsim *ns = netdev_priv(dev);
++
++ debugfs_remove_recursive(ns->udp_ports.ddir);
+ kfree(dev->udp_tunnel_nic_info);
+ dev->udp_tunnel_nic_info = NULL;
+ }
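Grouping the netdevsim knobs under their own debugfs directory shortens the file names and, more importantly, lets the new destroy hook tear everything down with a single debugfs_remove_recursive() on the saved dentry. The pattern in miniature:

#include <linux/debugfs.h>
#include <linux/types.h>

static struct dentry *knobs_dir;
static u32 inject_error;

static void knobs_create(struct dentry *parent)
{
	knobs_dir = debugfs_create_dir("udp_ports", parent);
	debugfs_create_u32("inject_error", 0600, knobs_dir, &inject_error);
}

static void knobs_destroy(void)
{
	debugfs_remove_recursive(knobs_dir);	/* removes the u32 too */
}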
+diff --git a/drivers/net/phy/marvell-88q2xxx.c b/drivers/net/phy/marvell-88q2xxx.c
+index c812f16eaa3a88..b3a5a0af19da66 100644
+--- a/drivers/net/phy/marvell-88q2xxx.c
++++ b/drivers/net/phy/marvell-88q2xxx.c
+@@ -95,6 +95,10 @@
+
+ #define MDIO_MMD_PCS_MV_TDR_OFF_CUTOFF 65246
+
++struct mv88q2xxx_priv {
++ bool enable_temp;
++};
++
+ struct mmd_val {
+ int devad;
+ u32 regnum;
+@@ -669,17 +673,12 @@ static const struct hwmon_chip_info mv88q2xxx_hwmon_chip_info = {
+
+ static int mv88q2xxx_hwmon_probe(struct phy_device *phydev)
+ {
++ struct mv88q2xxx_priv *priv = phydev->priv;
+ struct device *dev = &phydev->mdio.dev;
+ struct device *hwmon;
+ char *hwmon_name;
+- int ret;
+-
+- /* Enable temperature sense */
+- ret = phy_modify_mmd(phydev, MDIO_MMD_PCS, MDIO_MMD_PCS_MV_TEMP_SENSOR2,
+- MDIO_MMD_PCS_MV_TEMP_SENSOR2_DIS_MASK, 0);
+- if (ret < 0)
+- return ret;
+
++ priv->enable_temp = true;
+ hwmon_name = devm_hwmon_sanitize_name(dev, dev_name(dev));
+ if (IS_ERR(hwmon_name))
+ return PTR_ERR(hwmon_name);
+@@ -702,6 +701,14 @@ static int mv88q2xxx_hwmon_probe(struct phy_device *phydev)
+
+ static int mv88q2xxx_probe(struct phy_device *phydev)
+ {
++ struct mv88q2xxx_priv *priv;
++
++ priv = devm_kzalloc(&phydev->mdio.dev, sizeof(*priv), GFP_KERNEL);
++ if (!priv)
++ return -ENOMEM;
++
++ phydev->priv = priv;
++
+ return mv88q2xxx_hwmon_probe(phydev);
+ }
+
+@@ -792,6 +799,18 @@ static int mv88q222x_revb1_revb2_config_init(struct phy_device *phydev)
+
+ static int mv88q222x_config_init(struct phy_device *phydev)
+ {
++ struct mv88q2xxx_priv *priv = phydev->priv;
++ int ret;
++
++ /* Enable temperature sense */
++ if (priv->enable_temp) {
++ ret = phy_modify_mmd(phydev, MDIO_MMD_PCS,
++ MDIO_MMD_PCS_MV_TEMP_SENSOR2,
++ MDIO_MMD_PCS_MV_TEMP_SENSOR2_DIS_MASK, 0);
++ if (ret < 0)
++ return ret;
++ }
++
+ if (phydev->c45_ids.device_ids[MDIO_MMD_PMAPMD] == PHY_ID_88Q2220_REVB0)
+ return mv88q222x_revb0_config_init(phydev);
+ else
+diff --git a/drivers/net/tap.c b/drivers/net/tap.c
+index 5aa41d5f7765a6..5ca6ecf0ce5fbc 100644
+--- a/drivers/net/tap.c
++++ b/drivers/net/tap.c
+@@ -1329,9 +1329,9 @@ int tap_queue_resize(struct tap_dev *tap)
+ list_for_each_entry(q, &tap->queue_list, next)
+ rings[i++] = &q->ring;
+
+- ret = ptr_ring_resize_multiple(rings, n,
+- dev->tx_queue_len, GFP_KERNEL,
+- __skb_array_destroy_skb);
++ ret = ptr_ring_resize_multiple_bh(rings, n,
++ dev->tx_queue_len, GFP_KERNEL,
++ __skb_array_destroy_skb);
+
+ kfree(rings);
+ return ret;
+diff --git a/drivers/net/team/team_core.c b/drivers/net/team/team_core.c
+index 1c85dda83825d8..7f4ef219eee44f 100644
+--- a/drivers/net/team/team_core.c
++++ b/drivers/net/team/team_core.c
+@@ -1175,6 +1175,13 @@ static int team_port_add(struct team *team, struct net_device *port_dev,
+ return -EBUSY;
+ }
+
++ if (netdev_has_upper_dev(port_dev, dev)) {
++ NL_SET_ERR_MSG(extack, "Device is already a lower device of the team interface");
++ netdev_err(dev, "Device %s is already a lower device of the team interface\n",
++ portname);
++ return -EBUSY;
++ }
++
+ if (port_dev->features & NETIF_F_VLAN_CHALLENGED &&
+ vlan_uses_dev(dev)) {
+ NL_SET_ERR_MSG(extack, "Device is VLAN challenged and team device has VLAN set up");
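The new team check rejects enslaving a device that already has the team somewhere above it in the stacking graph, which would otherwise create an upper/lower cycle (for instance, adding a VLAN built on top of the team back into the team). Reduced to its core:

#include <linux/errno.h>
#include <linux/netdevice.h>

/* caller holds RTNL, as team_port_add() does */
static int can_enslave(struct net_device *master, struct net_device *port)
{
	/* true if master already sits above port in the device graph */
	if (netdev_has_upper_dev(port, master))
		return -EBUSY;
	return 0;
}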
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 03fe9e3ee7af15..6fc60950100c7c 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -3697,9 +3697,9 @@ static int tun_queue_resize(struct tun_struct *tun)
+ list_for_each_entry(tfile, &tun->disabled, next)
+ rings[i++] = &tfile->tx_ring;
+
+- ret = ptr_ring_resize_multiple(rings, n,
+- dev->tx_queue_len, GFP_KERNEL,
+- tun_ptr_free);
++ ret = ptr_ring_resize_multiple_bh(rings, n,
++ dev->tx_queue_len, GFP_KERNEL,
++ tun_ptr_free);
+
+ kfree(rings);
+ return ret;
+diff --git a/drivers/net/usb/rtl8150.c b/drivers/net/usb/rtl8150.c
+index 01a3b2417a5401..ddff6f19ff98eb 100644
+--- a/drivers/net/usb/rtl8150.c
++++ b/drivers/net/usb/rtl8150.c
+@@ -71,6 +71,14 @@
+ #define MSR_SPEED (1<<3)
+ #define MSR_LINK (1<<2)
+
++/* USB endpoints */
++enum rtl8150_usb_ep {
++ RTL8150_USB_EP_CONTROL = 0,
++ RTL8150_USB_EP_BULK_IN = 1,
++ RTL8150_USB_EP_BULK_OUT = 2,
++ RTL8150_USB_EP_INT_IN = 3,
++};
++
+ /* Interrupt pipe data */
+ #define INT_TSR 0x00
+ #define INT_RSR 0x01
+@@ -867,6 +875,13 @@ static int rtl8150_probe(struct usb_interface *intf,
+ struct usb_device *udev = interface_to_usbdev(intf);
+ rtl8150_t *dev;
+ struct net_device *netdev;
++ static const u8 bulk_ep_addr[] = {
++ RTL8150_USB_EP_BULK_IN | USB_DIR_IN,
++ RTL8150_USB_EP_BULK_OUT | USB_DIR_OUT,
++ 0};
++ static const u8 int_ep_addr[] = {
++ RTL8150_USB_EP_INT_IN | USB_DIR_IN,
++ 0};
+
+ netdev = alloc_etherdev(sizeof(rtl8150_t));
+ if (!netdev)
+@@ -880,6 +895,13 @@ static int rtl8150_probe(struct usb_interface *intf,
+ return -ENOMEM;
+ }
+
++ /* Verify that all required endpoints are present */
++ if (!usb_check_bulk_endpoints(intf, bulk_ep_addr) ||
++ !usb_check_int_endpoints(intf, int_ep_addr)) {
++ dev_err(&intf->dev, "couldn't find required endpoints\n");
++ goto out;
++ }
++
+ tasklet_setup(&dev->tl, rx_fixup);
+ spin_lock_init(&dev->rx_pool_lock);
+
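The probe-time check added to rtl8150 guards against devices (or fuzzed descriptors) that do not expose the endpoints the driver assumes. usb_check_bulk_endpoints() and usb_check_int_endpoints() each take a zero-terminated array of endpoint addresses and return false if any endpoint is missing or has the wrong type; a standalone sketch with the same addresses:

#include <linux/errno.h>
#include <linux/usb.h>

static int verify_endpoints(struct usb_interface *intf)
{
	static const u8 bulk_eps[] = { 0x01 | USB_DIR_IN,
				       0x02 | USB_DIR_OUT, 0 };
	static const u8 int_eps[] = { 0x03 | USB_DIR_IN, 0 };

	if (!usb_check_bulk_endpoints(intf, bulk_eps) ||
	    !usb_check_int_endpoints(intf, int_eps))
		return -ENODEV;	/* refuse to bind */
	return 0;
}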
+diff --git a/drivers/net/vxlan/vxlan_vnifilter.c b/drivers/net/vxlan/vxlan_vnifilter.c
+index d2023e7131bd4f..6e6e9f05509ab0 100644
+--- a/drivers/net/vxlan/vxlan_vnifilter.c
++++ b/drivers/net/vxlan/vxlan_vnifilter.c
+@@ -411,6 +411,11 @@ static int vxlan_vnifilter_dump(struct sk_buff *skb, struct netlink_callback *cb
+ struct tunnel_msg *tmsg;
+ struct net_device *dev;
+
++ if (cb->nlh->nlmsg_len < nlmsg_msg_size(sizeof(struct tunnel_msg))) {
++ NL_SET_ERR_MSG(cb->extack, "Invalid msg length");
++ return -EINVAL;
++ }
++
+ tmsg = nlmsg_data(cb->nlh);
+
+ if (tmsg->flags & ~TUNNEL_MSG_VALID_USER_FLAGS) {
+diff --git a/drivers/net/wireless/ath/ath11k/dp_rx.c b/drivers/net/wireless/ath/ath11k/dp_rx.c
+index 40088e62572e12..40b52d12b43235 100644
+--- a/drivers/net/wireless/ath/ath11k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath11k/dp_rx.c
+@@ -3872,6 +3872,7 @@ int ath11k_dp_process_rx_err(struct ath11k_base *ab, struct napi_struct *napi,
+ ath11k_hal_rx_msdu_link_info_get(link_desc_va, &num_msdus, msdu_cookies,
+ &rbm);
+ if (rbm != HAL_RX_BUF_RBM_WBM_IDLE_DESC_LIST &&
++ rbm != HAL_RX_BUF_RBM_SW1_BM &&
+ rbm != HAL_RX_BUF_RBM_SW3_BM) {
+ ab->soc_stats.invalid_rbm++;
+ ath11k_warn(ab, "invalid return buffer manager %d\n", rbm);
+diff --git a/drivers/net/wireless/ath/ath11k/hal_rx.c b/drivers/net/wireless/ath/ath11k/hal_rx.c
+index 8f7dd43dc1bd8e..753bd93f02123d 100644
+--- a/drivers/net/wireless/ath/ath11k/hal_rx.c
++++ b/drivers/net/wireless/ath/ath11k/hal_rx.c
+@@ -372,7 +372,8 @@ int ath11k_hal_wbm_desc_parse_err(struct ath11k_base *ab, void *desc,
+
+ ret_buf_mgr = FIELD_GET(BUFFER_ADDR_INFO1_RET_BUF_MGR,
+ wbm_desc->buf_addr_info.info1);
+- if (ret_buf_mgr != HAL_RX_BUF_RBM_SW3_BM) {
++ if (ret_buf_mgr != HAL_RX_BUF_RBM_SW1_BM &&
++ ret_buf_mgr != HAL_RX_BUF_RBM_SW3_BM) {
+ ab->soc_stats.invalid_rbm++;
+ return -EINVAL;
+ }
+diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c
+index 8946141aa0dce6..fbf5d57283576f 100644
+--- a/drivers/net/wireless/ath/ath12k/mac.c
++++ b/drivers/net/wireless/ath/ath12k/mac.c
+@@ -7220,9 +7220,9 @@ ath12k_mac_vdev_start_restart(struct ath12k_vif *arvif,
+ chandef->chan->band,
+ arvif->vif->type);
+ arg.min_power = 0;
+- arg.max_power = chandef->chan->max_power * 2;
+- arg.max_reg_power = chandef->chan->max_reg_power * 2;
+- arg.max_antenna_gain = chandef->chan->max_antenna_gain * 2;
++ arg.max_power = chandef->chan->max_power;
++ arg.max_reg_power = chandef->chan->max_reg_power;
++ arg.max_antenna_gain = chandef->chan->max_antenna_gain;
+
+ arg.pref_tx_streams = ar->num_tx_chains;
+ arg.pref_rx_streams = ar->num_rx_chains;
+diff --git a/drivers/net/wireless/ath/wcn36xx/main.c b/drivers/net/wireless/ath/wcn36xx/main.c
+index 408776562a7e56..cd36cab6db75d3 100644
+--- a/drivers/net/wireless/ath/wcn36xx/main.c
++++ b/drivers/net/wireless/ath/wcn36xx/main.c
+@@ -1590,7 +1590,10 @@ static int wcn36xx_probe(struct platform_device *pdev)
+ }
+
+ n_channels = wcn_band_2ghz.n_channels + wcn_band_5ghz.n_channels;
+- wcn->chan_survey = devm_kmalloc(wcn->dev, n_channels, GFP_KERNEL);
++ wcn->chan_survey = devm_kcalloc(wcn->dev,
++ n_channels,
++ sizeof(struct wcn36xx_chan_survey),
++ GFP_KERNEL);
+ if (!wcn->chan_survey) {
+ ret = -ENOMEM;
+ goto out_wq;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
+index 31e080e4da6697..ab3d6cfcb02bde 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h
+@@ -6,6 +6,8 @@
+ #ifndef _fwil_h_
+ #define _fwil_h_
+
++#include "debug.h"
++
+ /*******************************************************************************
+ * Dongle command codes that are interpreted by firmware
+ ******************************************************************************/
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+index 091fb6fd7c787c..834f7c9bb9e92d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/uefi.c
+@@ -13,9 +13,12 @@
+ #include <linux/efi.h>
+ #include "fw/runtime.h"
+
+-#define IWL_EFI_VAR_GUID EFI_GUID(0x92daaf2f, 0xc02b, 0x455b, \
+- 0xb2, 0xec, 0xf5, 0xa3, \
+- 0x59, 0x4f, 0x4a, 0xea)
++#define IWL_EFI_WIFI_GUID EFI_GUID(0x92daaf2f, 0xc02b, 0x455b, \
++ 0xb2, 0xec, 0xf5, 0xa3, \
++ 0x59, 0x4f, 0x4a, 0xea)
++#define IWL_EFI_WIFI_BT_GUID EFI_GUID(0xe65d8884, 0xd4af, 0x4b20, \
++ 0x8d, 0x03, 0x77, 0x2e, \
++ 0xcc, 0x3d, 0xa5, 0x31)
+
+ struct iwl_uefi_pnvm_mem_desc {
+ __le32 addr;
+@@ -61,7 +64,7 @@ void *iwl_uefi_get_pnvm(struct iwl_trans *trans, size_t *len)
+
+ *len = 0;
+
+- data = iwl_uefi_get_variable(IWL_UEFI_OEM_PNVM_NAME, &IWL_EFI_VAR_GUID,
++ data = iwl_uefi_get_variable(IWL_UEFI_OEM_PNVM_NAME, &IWL_EFI_WIFI_GUID,
+ &package_size);
+ if (IS_ERR(data)) {
+ IWL_DEBUG_FW(trans,
+@@ -76,18 +79,18 @@ void *iwl_uefi_get_pnvm(struct iwl_trans *trans, size_t *len)
+ return data;
+ }
+
+-static
+-void *iwl_uefi_get_verified_variable(struct iwl_trans *trans,
+- efi_char16_t *uefi_var_name,
+- char *var_name,
+- unsigned int expected_size,
+- unsigned long *size)
++static void *
++iwl_uefi_get_verified_variable_guid(struct iwl_trans *trans,
++ efi_guid_t *guid,
++ efi_char16_t *uefi_var_name,
++ char *var_name,
++ unsigned int expected_size,
++ unsigned long *size)
+ {
+ void *var;
+ unsigned long var_size;
+
+- var = iwl_uefi_get_variable(uefi_var_name, &IWL_EFI_VAR_GUID,
+- &var_size);
++ var = iwl_uefi_get_variable(uefi_var_name, guid, &var_size);
+
+ if (IS_ERR(var)) {
+ IWL_DEBUG_RADIO(trans,
+@@ -112,6 +115,18 @@ void *iwl_uefi_get_verified_variable(struct iwl_trans *trans,
+ return var;
+ }
+
++static void *
++iwl_uefi_get_verified_variable(struct iwl_trans *trans,
++ efi_char16_t *uefi_var_name,
++ char *var_name,
++ unsigned int expected_size,
++ unsigned long *size)
++{
++ return iwl_uefi_get_verified_variable_guid(trans, &IWL_EFI_WIFI_GUID,
++ uefi_var_name, var_name,
++ expected_size, size);
++}
++
+ int iwl_uefi_handle_tlv_mem_desc(struct iwl_trans *trans, const u8 *data,
+ u32 tlv_len, struct iwl_pnvm_image *pnvm_data)
+ {
+@@ -311,8 +326,9 @@ void iwl_uefi_get_step_table(struct iwl_trans *trans)
+ if (trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_AX210)
+ return;
+
+- data = iwl_uefi_get_verified_variable(trans, IWL_UEFI_STEP_NAME,
+- "STEP", sizeof(*data), NULL);
++ data = iwl_uefi_get_verified_variable_guid(trans, &IWL_EFI_WIFI_BT_GUID,
++ IWL_UEFI_STEP_NAME,
++ "STEP", sizeof(*data), NULL);
+ if (IS_ERR(data))
+ return;
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/coex.c b/drivers/net/wireless/intel/iwlwifi/mvm/coex.c
+index b607961970e970..9b8624304fa308 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/coex.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/coex.c
+@@ -530,18 +530,15 @@ static void iwl_mvm_bt_coex_notif_iterator(void *_data, u8 *mac,
+ struct ieee80211_vif *vif)
+ {
+ struct iwl_mvm *mvm = _data;
++ struct ieee80211_bss_conf *link_conf;
++ unsigned int link_id;
+
+ lockdep_assert_held(&mvm->mutex);
+
+ if (vif->type != NL80211_IFTYPE_STATION)
+ return;
+
+- for (int link_id = 0;
+- link_id < IEEE80211_MLD_MAX_NUM_LINKS;
+- link_id++) {
+- struct ieee80211_bss_conf *link_conf =
+- rcu_dereference_check(vif->link_conf[link_id],
+- lockdep_is_held(&mvm->mutex));
++ for_each_vif_active_link(vif, link_conf, link_id) {
+ struct ieee80211_chanctx_conf *chanctx_conf =
+ rcu_dereference_check(link_conf->chanctx_conf,
+ lockdep_is_held(&mvm->mutex));
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+index ca026b5256ce33..5f4942f6cc68e4 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/tx.c
+@@ -1880,7 +1880,9 @@ static void iwl_mvm_rx_tx_cmd_single(struct iwl_mvm *mvm,
+ IWL_DEBUG_TX_REPLY(mvm,
+ "Next reclaimed packet:%d\n",
+ next_reclaimed);
+- iwl_mvm_count_mpdu(mvmsta, sta_id, 1, true, 0);
++ if (tid < IWL_MAX_TID_COUNT)
++ iwl_mvm_count_mpdu(mvmsta, sta_id, 1,
++ true, 0);
+ } else {
+ IWL_DEBUG_TX_REPLY(mvm,
+ "NDP - don't update next_reclaimed\n");
+diff --git a/drivers/net/wireless/mediatek/mt76/mac80211.c b/drivers/net/wireless/mediatek/mt76/mac80211.c
+index 9d5561f441347b..0ca83f1a3e3ea2 100644
+--- a/drivers/net/wireless/mediatek/mt76/mac80211.c
++++ b/drivers/net/wireless/mediatek/mt76/mac80211.c
+@@ -958,11 +958,11 @@ int mt76_set_channel(struct mt76_phy *phy, struct cfg80211_chan_def *chandef,
+
+ if (chandef->chan != phy->main_chan)
+ memset(phy->chan_state, 0, sizeof(*phy->chan_state));
+- mt76_worker_enable(&dev->tx_worker);
+
+ ret = dev->drv->set_channel(phy);
+
+ clear_bit(MT76_RESET, &phy->state);
++ mt76_worker_enable(&dev->tx_worker);
+ mt76_worker_schedule(&dev->tx_worker);
+
+ mutex_unlock(&dev->mutex);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+index 96e34277fece9b..1cc8fc8fefe740 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7615/mcu.c
+@@ -1113,7 +1113,7 @@ mt7615_mcu_uni_add_dev(struct mt7615_phy *phy, struct ieee80211_vif *vif,
+ {
+ struct mt7615_vif *mvif = (struct mt7615_vif *)vif->drv_priv;
+
+- return mt76_connac_mcu_uni_add_dev(phy->mt76, &vif->bss_conf,
++ return mt76_connac_mcu_uni_add_dev(phy->mt76, &vif->bss_conf, &mvif->mt76,
+ &mvif->sta.wcid, enable);
+ }
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index 864246f9408899..7d07e720e4ec1d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -1137,10 +1137,10 @@ EXPORT_SYMBOL_GPL(mt76_connac_mcu_wtbl_ba_tlv);
+
+ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
+ struct ieee80211_bss_conf *bss_conf,
++ struct mt76_vif *mvif,
+ struct mt76_wcid *wcid,
+ bool enable)
+ {
+- struct mt76_vif *mvif = (struct mt76_vif *)bss_conf->vif->drv_priv;
+ struct mt76_dev *dev = phy->dev;
+ struct {
+ struct {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+index 1b0e80dfc346b8..57a8340fa70097 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.h
+@@ -1938,6 +1938,7 @@ void mt76_connac_mcu_sta_ba_tlv(struct sk_buff *skb,
+ bool enable, bool tx);
+ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
+ struct ieee80211_bss_conf *bss_conf,
++ struct mt76_vif *mvif,
+ struct mt76_wcid *wcid,
+ bool enable);
+ int mt76_connac_mcu_sta_ba(struct mt76_dev *dev, struct mt76_vif *mvif,
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index 6bef96e3d2a3d9..77d82ccd73079d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -82,7 +82,7 @@ static ssize_t mt7915_thermal_temp_store(struct device *dev,
+ return ret;
+
+ mutex_lock(&phy->dev->mt76.mutex);
+- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 60, 130);
++ val = DIV_ROUND_CLOSEST(clamp_val(val, 60 * 1000, 130 * 1000), 1000);
+
+ if ((i - 1 == MT7915_CRIT_TEMP_IDX &&
+ val > phy->throttle_temp[MT7915_MAX_TEMP_IDX]) ||
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+index cf77ce0c875991..799e8d2cc7e6ec 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mac.c
+@@ -1388,6 +1388,8 @@ mt7915_mac_restart(struct mt7915_dev *dev)
+ if (dev_is_pci(mdev->dev)) {
+ mt76_wr(dev, MT_PCIE_MAC_INT_ENABLE, 0xff);
+ if (dev->hif2) {
++ mt76_wr(dev, MT_PCIE_RECOG_ID,
++ dev->hif2->index | MT_PCIE_RECOG_ID_SEM);
+ if (is_mt7915(mdev))
+ mt76_wr(dev, MT_PCIE1_MAC_INT_ENABLE, 0xff);
+ else
+@@ -1442,9 +1444,11 @@ static void
+ mt7915_mac_full_reset(struct mt7915_dev *dev)
+ {
+ struct mt76_phy *ext_phy;
++ struct mt7915_phy *phy2;
+ int i;
+
+ ext_phy = dev->mt76.phys[MT_BAND1];
++ phy2 = ext_phy ? ext_phy->priv : NULL;
+
+ dev->recovery.hw_full_reset = true;
+
+@@ -1474,6 +1478,9 @@ mt7915_mac_full_reset(struct mt7915_dev *dev)
+
+ memset(dev->mt76.wcid_mask, 0, sizeof(dev->mt76.wcid_mask));
+ dev->mt76.vif_mask = 0;
++ dev->phy.omac_mask = 0;
++ if (phy2)
++ phy2->omac_mask = 0;
+
+ i = mt76_wcid_alloc(dev->mt76.wcid_mask, MT7915_WTBL_STA);
+ dev->mt76.global_wcid.idx = i;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/main.c b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+index d75e8dea1fbdc8..8c0d63cebf3e14 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/main.c
+@@ -246,8 +246,10 @@ static int mt7915_add_interface(struct ieee80211_hw *hw,
+ phy->omac_mask |= BIT_ULL(mvif->mt76.omac_idx);
+
+ idx = mt76_wcid_alloc(dev->mt76.wcid_mask, mt7915_wtbl_size(dev));
+- if (idx < 0)
+- return -ENOSPC;
++ if (idx < 0) {
++ ret = -ENOSPC;
++ goto out;
++ }
+
+ INIT_LIST_HEAD(&mvif->sta.rc_list);
+ INIT_LIST_HEAD(&mvif->sta.wcid.poll_list);
+@@ -619,8 +621,9 @@ static void mt7915_bss_info_changed(struct ieee80211_hw *hw,
+ if (changed & BSS_CHANGED_ASSOC)
+ set_bss_info = vif->cfg.assoc;
+ if (changed & BSS_CHANGED_BEACON_ENABLED &&
++ info->enable_beacon &&
+ vif->type != NL80211_IFTYPE_AP)
+- set_bss_info = set_sta = info->enable_beacon;
++ set_bss_info = set_sta = 1;
+
+ if (set_bss_info == 1)
+ mt7915_mcu_add_bss_info(phy, vif, true);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+index 44e112b8b5b368..2e7604eed27b02 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mmio.c
+@@ -484,7 +484,7 @@ static u32 __mt7915_reg_addr(struct mt7915_dev *dev, u32 addr)
+ continue;
+
+ ofs = addr - dev->reg.map[i].phys;
+- if (ofs > dev->reg.map[i].size)
++ if (ofs >= dev->reg.map[i].size)
+ continue;
+
+ return dev->reg.map[i].maps + ofs;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+index ac0b1f0eb27c14..5fe872ef2e939b 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/mt7915.h
+@@ -191,6 +191,7 @@ struct mt7915_hif {
+ struct device *dev;
+ void __iomem *regs;
+ int irq;
++ u32 index;
+ };
+
+ struct mt7915_phy {
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/pci.c b/drivers/net/wireless/mediatek/mt76/mt7915/pci.c
+index 39132894e8ea29..07b0a5766eab7d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/pci.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/pci.c
+@@ -42,6 +42,7 @@ static struct mt7915_hif *mt7915_pci_get_hif2(u32 idx)
+ continue;
+
+ get_device(hif->dev);
++ hif->index = idx;
+ goto out;
+ }
+ hif = NULL;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+index 047106b65d2bc6..bd1455698ebe5f 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/mac.c
+@@ -647,6 +647,7 @@ mt7921_vif_connect_iter(void *priv, u8 *mac,
+ ieee80211_disconnect(vif, true);
+
+ mt76_connac_mcu_uni_add_dev(&dev->mphy, &vif->bss_conf,
++ &mvif->bss_conf.mt76,
+ &mvif->sta.deflink.wcid, true);
+ mt7921_mcu_set_tx(dev, vif);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index a7f5bfbc02ed1f..e2dfd3670c4c93 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -308,6 +308,7 @@ mt7921_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ mvif->bss_conf.mt76.wmm_idx = mvif->bss_conf.mt76.idx % MT76_CONNAC_MAX_WMM_SETS;
+
+ ret = mt76_connac_mcu_uni_add_dev(&dev->mphy, &vif->bss_conf,
++ &mvif->bss_conf.mt76,
+ &mvif->sta.deflink.wcid, true);
+ if (ret)
+ goto out;
+@@ -531,7 +532,13 @@ static int mt7921_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ } else {
+ if (idx == *wcid_keyidx)
+ *wcid_keyidx = -1;
+- goto out;
++
++ /* For security reasons we don't trigger the key deletion when
++ * reassociating, but we do trigger the deletion process to
++ * avoid using an incorrect cipher after disconnection.
++ */
++ if (vif->type != NL80211_IFTYPE_STATION || vif->cfg.assoc)
++ goto out;
+ }
+
+ mt76_wcid_key_setup(&dev->mt76, wcid, key);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mac.c b/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
+index 634c42bbf23f67..a095fb31e391a1 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mac.c
+@@ -49,7 +49,7 @@ static void mt7925_mac_sta_poll(struct mt792x_dev *dev)
+ break;
+ mlink = list_first_entry(&sta_poll_list,
+ struct mt792x_link_sta, wcid.poll_list);
+- msta = container_of(mlink, struct mt792x_sta, deflink);
++ msta = mlink->sta;
+ spin_lock_bh(&dev->mt76.sta_poll_lock);
+ list_del_init(&mlink->wcid.poll_list);
+ spin_unlock_bh(&dev->mt76.sta_poll_lock);
+@@ -1271,6 +1271,7 @@ mt7925_vif_connect_iter(void *priv, u8 *mac,
+ struct mt792x_dev *dev = mvif->phy->dev;
+ struct ieee80211_hw *hw = mt76_hw(dev);
+ struct ieee80211_bss_conf *bss_conf;
++ struct mt792x_bss_conf *mconf;
+ int i;
+
+ if (vif->type == NL80211_IFTYPE_STATION)
+@@ -1278,8 +1279,9 @@ mt7925_vif_connect_iter(void *priv, u8 *mac,
+
+ for_each_set_bit(i, &valid, IEEE80211_MLD_MAX_NUM_LINKS) {
+ bss_conf = mt792x_vif_to_bss_conf(vif, i);
++ mconf = mt792x_vif_to_link(mvif, i);
+
+- mt76_connac_mcu_uni_add_dev(&dev->mphy, bss_conf,
++ mt76_connac_mcu_uni_add_dev(&dev->mphy, bss_conf, &mconf->mt76,
+ &mvif->sta.deflink.wcid, true);
+ mt7925_mcu_set_tx(dev, bss_conf);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/main.c b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+index 791c8b00e11264..ddc67423efe2cb 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+@@ -365,18 +365,14 @@ static int mt7925_mac_link_bss_add(struct mt792x_dev *dev,
+ mconf->mt76.omac_idx = ieee80211_vif_is_mld(vif) ?
+ 0 : mconf->mt76.idx;
+ mconf->mt76.band_idx = 0xff;
+- mconf->mt76.wmm_idx = mconf->mt76.idx % MT76_CONNAC_MAX_WMM_SETS;
++ mconf->mt76.wmm_idx = ieee80211_vif_is_mld(vif) ?
++ 0 : mconf->mt76.idx % MT76_CONNAC_MAX_WMM_SETS;
+
+ if (mvif->phy->mt76->chandef.chan->band != NL80211_BAND_2GHZ)
+ mconf->mt76.basic_rates_idx = MT792x_BASIC_RATES_TBL + 4;
+ else
+ mconf->mt76.basic_rates_idx = MT792x_BASIC_RATES_TBL;
+
+- ret = mt76_connac_mcu_uni_add_dev(&dev->mphy, link_conf,
+- &mlink->wcid, true);
+- if (ret)
+- goto out;
+-
+ dev->mt76.vif_mask |= BIT_ULL(mconf->mt76.idx);
+ mvif->phy->omac_mask |= BIT_ULL(mconf->mt76.omac_idx);
+
+@@ -384,7 +380,7 @@ static int mt7925_mac_link_bss_add(struct mt792x_dev *dev,
+
+ INIT_LIST_HEAD(&mlink->wcid.poll_list);
+ mlink->wcid.idx = idx;
+- mlink->wcid.phy_idx = mconf->mt76.band_idx;
++ mlink->wcid.phy_idx = 0;
+ mlink->wcid.hw_key_idx = -1;
+ mlink->wcid.tx_info |= MT_WCID_TX_INFO_SET;
+ mt76_wcid_init(&mlink->wcid);
+@@ -395,6 +391,12 @@ static int mt7925_mac_link_bss_add(struct mt792x_dev *dev,
+ ewma_rssi_init(&mconf->rssi);
+
+ rcu_assign_pointer(dev->mt76.wcid[idx], &mlink->wcid);
++
++ ret = mt76_connac_mcu_uni_add_dev(&dev->mphy, link_conf, &mconf->mt76,
++ &mlink->wcid, true);
++ if (ret)
++ goto out;
++
+ if (vif->txq) {
+ mtxq = (struct mt76_txq *)vif->txq->drv_priv;
+ mtxq->wcid = idx;
+@@ -837,6 +839,7 @@ static int mt7925_mac_link_sta_add(struct mt76_dev *mdev,
+ u8 link_id = link_sta->link_id;
+ struct mt792x_link_sta *mlink;
+ struct mt792x_sta *msta;
++ struct mt76_wcid *wcid;
+ int ret, idx;
+
+ msta = (struct mt792x_sta *)link_sta->sta->drv_priv;
+@@ -850,11 +853,20 @@ static int mt7925_mac_link_sta_add(struct mt76_dev *mdev,
+ INIT_LIST_HEAD(&mlink->wcid.poll_list);
+ mlink->wcid.sta = 1;
+ mlink->wcid.idx = idx;
+- mlink->wcid.phy_idx = mconf->mt76.band_idx;
++ mlink->wcid.phy_idx = 0;
+ mlink->wcid.tx_info |= MT_WCID_TX_INFO_SET;
+ mlink->last_txs = jiffies;
+ mlink->wcid.link_id = link_sta->link_id;
+ mlink->wcid.link_valid = !!link_sta->sta->valid_links;
++ mlink->sta = msta;
++
++ wcid = &mlink->wcid;
++ ewma_signal_init(&wcid->rssi);
++ rcu_assign_pointer(dev->mt76.wcid[wcid->idx], wcid);
++ mt76_wcid_init(wcid);
++ ewma_avg_signal_init(&mlink->avg_ack_signal);
++ memset(mlink->airtime_ac, 0,
++ sizeof(msta->deflink.airtime_ac));
+
+ ret = mt76_connac_pm_wake(&dev->mphy, &dev->pm);
+ if (ret)
+@@ -866,9 +878,14 @@ static int mt7925_mac_link_sta_add(struct mt76_dev *mdev,
+ link_conf = mt792x_vif_to_bss_conf(vif, link_id);
+
+ /* should update bss info before STA add */
+- if (vif->type == NL80211_IFTYPE_STATION && !link_sta->sta->tdls)
+- mt7925_mcu_add_bss_info(&dev->phy, mconf->mt76.ctx,
+- link_conf, link_sta, false);
++ if (vif->type == NL80211_IFTYPE_STATION && !link_sta->sta->tdls) {
++ if (ieee80211_vif_is_mld(vif))
++ mt7925_mcu_add_bss_info(&dev->phy, mconf->mt76.ctx,
++ link_conf, link_sta, link_sta != mlink->pri_link);
++ else
++ mt7925_mcu_add_bss_info(&dev->phy, mconf->mt76.ctx,
++ link_conf, link_sta, false);
++ }
+
+ if (ieee80211_vif_is_mld(vif) &&
+ link_sta == mlink->pri_link) {
+@@ -904,7 +921,6 @@ mt7925_mac_sta_add_links(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta, unsigned long new_links)
+ {
+ struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
+- struct mt76_wcid *wcid;
+ unsigned int link_id;
+ int err = 0;
+
+@@ -921,14 +937,6 @@ mt7925_mac_sta_add_links(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ err = -ENOMEM;
+ break;
+ }
+-
+- wcid = &mlink->wcid;
+- ewma_signal_init(&wcid->rssi);
+- rcu_assign_pointer(dev->mt76.wcid[wcid->idx], wcid);
+- mt76_wcid_init(wcid);
+- ewma_avg_signal_init(&mlink->avg_ack_signal);
+- memset(mlink->airtime_ac, 0,
+- sizeof(msta->deflink.airtime_ac));
+ }
+
+ msta->valid_links |= BIT(link_id);
+@@ -1141,8 +1149,7 @@ static void mt7925_mac_link_sta_remove(struct mt76_dev *mdev,
+ struct mt792x_bss_conf *mconf;
+
+ mconf = mt792x_link_conf_to_mconf(link_conf);
+- mt7925_mcu_add_bss_info(&dev->phy, mconf->mt76.ctx, link_conf,
+- link_sta, false);
++ mt792x_mac_link_bss_remove(dev, mconf, mlink);
+ }
+
+ spin_lock_bh(&mdev->sta_poll_lock);
+@@ -1200,12 +1207,45 @@ void mt7925_mac_sta_remove(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ {
+ struct mt792x_dev *dev = container_of(mdev, struct mt792x_dev, mt76);
+ struct mt792x_sta *msta = (struct mt792x_sta *)sta->drv_priv;
++ struct {
++ struct {
++ u8 omac_idx;
++ u8 band_idx;
++ __le16 pad;
++ } __packed hdr;
++ struct req_tlv {
++ __le16 tag;
++ __le16 len;
++ u8 active;
++ u8 link_idx; /* hw link idx */
++ u8 omac_addr[ETH_ALEN];
++ } __packed tlv;
++ } dev_req = {
++ .hdr = {
++ .omac_idx = 0,
++ .band_idx = 0,
++ },
++ .tlv = {
++ .tag = cpu_to_le16(DEV_INFO_ACTIVE),
++ .len = cpu_to_le16(sizeof(struct req_tlv)),
++ .active = true,
++ },
++ };
+ unsigned long rem;
+
+ rem = ieee80211_vif_is_mld(vif) ? msta->valid_links : BIT(0);
+
+ mt7925_mac_sta_remove_links(dev, vif, sta, rem);
+
++ if (ieee80211_vif_is_mld(vif)) {
++ mt7925_mcu_set_dbdc(&dev->mphy, false);
++
++ /* recover the omac address for the legacy interface */
++ memcpy(dev_req.tlv.omac_addr, vif->addr, ETH_ALEN);
++ mt76_mcu_send_msg(mdev, MCU_UNI_CMD(DEV_INFO_UPDATE),
++ &dev_req, sizeof(dev_req), true);
++ }
++
+ if (vif->type == NL80211_IFTYPE_STATION) {
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+
+@@ -1250,22 +1290,22 @@ mt7925_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ case IEEE80211_AMPDU_RX_START:
+ mt76_rx_aggr_start(&dev->mt76, &msta->deflink.wcid, tid, ssn,
+ params->buf_size);
+- mt7925_mcu_uni_rx_ba(dev, params, true);
++ mt7925_mcu_uni_rx_ba(dev, vif, params, true);
+ break;
+ case IEEE80211_AMPDU_RX_STOP:
+ mt76_rx_aggr_stop(&dev->mt76, &msta->deflink.wcid, tid);
+- mt7925_mcu_uni_rx_ba(dev, params, false);
++ mt7925_mcu_uni_rx_ba(dev, vif, params, false);
+ break;
+ case IEEE80211_AMPDU_TX_OPERATIONAL:
+ mtxq->aggr = true;
+ mtxq->send_bar = false;
+- mt7925_mcu_uni_tx_ba(dev, params, true);
++ mt7925_mcu_uni_tx_ba(dev, vif, params, true);
+ break;
+ case IEEE80211_AMPDU_TX_STOP_FLUSH:
+ case IEEE80211_AMPDU_TX_STOP_FLUSH_CONT:
+ mtxq->aggr = false;
+ clear_bit(tid, &msta->deflink.wcid.ampdu_state);
+- mt7925_mcu_uni_tx_ba(dev, params, false);
++ mt7925_mcu_uni_tx_ba(dev, vif, params, false);
+ break;
+ case IEEE80211_AMPDU_TX_START:
+ set_bit(tid, &msta->deflink.wcid.ampdu_state);
+@@ -1274,7 +1314,7 @@ mt7925_ampdu_action(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ case IEEE80211_AMPDU_TX_STOP_CONT:
+ mtxq->aggr = false;
+ clear_bit(tid, &msta->deflink.wcid.ampdu_state);
+- mt7925_mcu_uni_tx_ba(dev, params, false);
++ mt7925_mcu_uni_tx_ba(dev, vif, params, false);
+ ieee80211_stop_tx_ba_cb_irqsafe(vif, sta->addr, tid);
+ break;
+ }
+@@ -1895,6 +1935,13 @@ static void mt7925_link_info_changed(struct ieee80211_hw *hw,
+ if (changed & (BSS_CHANGED_QOS | BSS_CHANGED_BEACON_ENABLED))
+ mt7925_mcu_set_tx(dev, info);
+
++ if (changed & BSS_CHANGED_BSSID) {
++ if (ieee80211_vif_is_mld(vif) &&
++ hweight16(mvif->valid_links) == 2)
++ /* Indicate the secondary setup done */
++ mt7925_mcu_uni_bss_bcnft(dev, info, true);
++ }
++
+ mt792x_mutex_release(dev);
+ }
+
+@@ -1946,6 +1993,8 @@ mt7925_change_vif_links(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ GFP_KERNEL);
+ mlink = devm_kzalloc(dev->mt76.dev, sizeof(*mlink),
+ GFP_KERNEL);
++ if (!mconf || !mlink)
++ return -ENOMEM;
+ }
+
+ mconfs[link_id] = mconf;
+@@ -1974,6 +2023,8 @@ mt7925_change_vif_links(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ goto free;
+
+ if (mconf != &mvif->bss_conf) {
++ mt7925_mcu_set_bss_pm(dev, link_conf, true);
++
+ err = mt7925_set_mlo_roc(phy, &mvif->bss_conf,
+ vif->active_links);
+ if (err < 0)
+@@ -2071,18 +2122,16 @@ static void mt7925_unassign_vif_chanctx(struct ieee80211_hw *hw,
+ struct mt792x_chanctx *mctx = (struct mt792x_chanctx *)ctx->drv_priv;
+ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+ struct mt792x_dev *dev = mt792x_hw_dev(hw);
+- struct ieee80211_bss_conf *pri_link_conf;
+ struct mt792x_bss_conf *mconf;
+
+ mutex_lock(&dev->mt76.mutex);
+
+ if (ieee80211_vif_is_mld(vif)) {
+ mconf = mt792x_vif_to_link(mvif, link_conf->link_id);
+- pri_link_conf = mt792x_vif_to_bss_conf(vif, mvif->deflink_id);
+
+ if (vif->type == NL80211_IFTYPE_STATION &&
+ mconf == &mvif->bss_conf)
+- mt7925_mcu_add_bss_info(&dev->phy, NULL, pri_link_conf,
++ mt7925_mcu_add_bss_info(&dev->phy, NULL, link_conf,
+ NULL, false);
+ } else {
+ mconf = &mvif->bss_conf;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index 748ea6adbc6b39..ce3d8197b026a6 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -123,10 +123,8 @@ EXPORT_SYMBOL_GPL(mt7925_mcu_regval);
+ int mt7925_mcu_update_arp_filter(struct mt76_dev *dev,
+ struct ieee80211_bss_conf *link_conf)
+ {
+- struct ieee80211_vif *mvif = container_of((void *)link_conf->vif,
+- struct ieee80211_vif,
+- drv_priv);
+ struct mt792x_bss_conf *mconf = mt792x_link_conf_to_mconf(link_conf);
++ struct ieee80211_vif *mvif = link_conf->vif;
+ struct sk_buff *skb;
+ int i, len = min_t(int, mvif->cfg.arp_addr_cnt,
+ IEEE80211_BSS_ARP_ADDR_LIST_LEN);
+@@ -531,10 +529,10 @@ void mt7925_mcu_rx_event(struct mt792x_dev *dev, struct sk_buff *skb)
+
+ static int
+ mt7925_mcu_sta_ba(struct mt76_dev *dev, struct mt76_vif *mvif,
++ struct mt76_wcid *wcid,
+ struct ieee80211_ampdu_params *params,
+ bool enable, bool tx)
+ {
+- struct mt76_wcid *wcid = (struct mt76_wcid *)params->sta->drv_priv;
+ struct sta_rec_ba_uni *ba;
+ struct sk_buff *skb;
+ struct tlv *tlv;
+@@ -562,28 +560,60 @@ mt7925_mcu_sta_ba(struct mt76_dev *dev, struct mt76_vif *mvif,
+
+ /** starec & wtbl **/
+ int mt7925_mcu_uni_tx_ba(struct mt792x_dev *dev,
++ struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params,
+ bool enable)
+ {
+ struct mt792x_sta *msta = (struct mt792x_sta *)params->sta->drv_priv;
+- struct mt792x_vif *mvif = msta->vif;
++ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
++ struct mt792x_link_sta *mlink;
++ struct mt792x_bss_conf *mconf;
++ unsigned long usable_links = ieee80211_vif_usable_links(vif);
++ struct mt76_wcid *wcid;
++ u8 link_id; int ret = 0;
++
++ for_each_set_bit(link_id, &usable_links, IEEE80211_MLD_MAX_NUM_LINKS) {
++ mconf = mt792x_vif_to_link(mvif, link_id);
++ mlink = mt792x_sta_to_link(msta, link_id);
++ wcid = &mlink->wcid;
+
+- if (enable && !params->amsdu)
+- msta->deflink.wcid.amsdu = false;
++ if (enable && !params->amsdu)
++ mlink->wcid.amsdu = false;
+
+- return mt7925_mcu_sta_ba(&dev->mt76, &mvif->bss_conf.mt76, params,
+- enable, true);
++ ret = mt7925_mcu_sta_ba(&dev->mt76, &mconf->mt76, wcid, params,
++ enable, true);
++ if (ret < 0)
++ break;
++ }
++
++ return ret;
+ }
+
+ int mt7925_mcu_uni_rx_ba(struct mt792x_dev *dev,
++ struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params,
+ bool enable)
+ {
+ struct mt792x_sta *msta = (struct mt792x_sta *)params->sta->drv_priv;
+- struct mt792x_vif *mvif = msta->vif;
++ struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
++ struct mt792x_link_sta *mlink;
++ struct mt792x_bss_conf *mconf;
++ unsigned long usable_links = ieee80211_vif_usable_links(vif);
++ struct mt76_wcid *wcid;
++ u8 link_id; int ret = 0;
++
++ for_each_set_bit(link_id, &usable_links, IEEE80211_MLD_MAX_NUM_LINKS) {
++ mconf = mt792x_vif_to_link(mvif, link_id);
++ mlink = mt792x_sta_to_link(msta, link_id);
++ wcid = &mlink->wcid;
++
++ ret = mt7925_mcu_sta_ba(&dev->mt76, &mconf->mt76, wcid, params,
++ enable, false);
++ if (ret < 0)
++ break;
++ }
+
+- return mt7925_mcu_sta_ba(&dev->mt76, &mvif->bss_conf.mt76, params,
+- enable, false);
++ return ret;
+ }
+
+ static int mt7925_load_clc(struct mt792x_dev *dev, const char *fw_name)
+@@ -638,7 +668,7 @@ static int mt7925_load_clc(struct mt792x_dev *dev, const char *fw_name)
+ for (offset = 0; offset < len; offset += le32_to_cpu(clc->len)) {
+ clc = (const struct mt7925_clc *)(clc_base + offset);
+
+- if (clc->idx > ARRAY_SIZE(phy->clc))
++ if (clc->idx >= ARRAY_SIZE(phy->clc))
+ break;
+
+ /* do not init buf again if chip reset triggered */
+@@ -823,7 +853,7 @@ mt7925_mcu_get_nic_capability(struct mt792x_dev *dev)
+ mt7925_mcu_parse_phy_cap(dev, tlv->data);
+ break;
+ case MT_NIC_CAP_CHIP_CAP:
+- memcpy(&dev->phy.chip_cap, (void *)skb->data, sizeof(u64));
++ dev->phy.chip_cap = le64_to_cpu(*(__le64 *)tlv->data);
+ break;
+ case MT_NIC_CAP_EML_CAP:
+ mt7925_mcu_parse_eml_cap(dev, tlv->data);
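The chip_cap hunk above replaces a raw memcpy() from skb->data with le64_to_cpu() on tlv->data, fixing both the source pointer and the byte order on big-endian hosts. A minimal userspace sketch of the endianness half, with a stand-in for the kernel's le64_to_cpu():

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for le64_to_cpu(): decode a little-endian 64-bit field
     * byte by byte, independent of host endianness. */
    static uint64_t sketch_le64_to_cpu(const uint8_t *p)
    {
        uint64_t v = 0;
        int i;

        for (i = 7; i >= 0; i--)
            v = (v << 8) | p[i];
        return v;
    }

    int main(void)
    {
        /* wire bytes for 0x0123456789abcdef in little-endian order */
        uint8_t tlv_data[8] = { 0xef, 0xcd, 0xab, 0x89, 0x67, 0x45, 0x23, 0x01 };

        /* A raw memcpy() would keep the wire byte order, which is wrong on
         * big-endian hosts; decoding explicitly is portable. */
        printf("chip_cap = 0x%016llx\n",
               (unsigned long long)sketch_le64_to_cpu(tlv_data));
        return 0;
    }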
+@@ -1153,7 +1183,12 @@ int mt7925_mcu_set_mlo_roc(struct mt792x_bss_conf *mconf, u16 sel_links,
+ u8 rsv[4];
+ } __packed hdr;
+ struct roc_acquire_tlv roc[2];
+- } __packed req;
++ } __packed req = {
++ .roc[0].tag = cpu_to_le16(UNI_ROC_NUM),
++ .roc[0].len = cpu_to_le16(sizeof(struct roc_acquire_tlv)),
++ .roc[1].tag = cpu_to_le16(UNI_ROC_NUM),
++ .roc[1].len = cpu_to_le16(sizeof(struct roc_acquire_tlv))
++ };
+
+ if (!mconf || hweight16(vif->valid_links) < 2 ||
+ hweight16(sel_links) != 2)
+@@ -1200,6 +1235,8 @@ int mt7925_mcu_set_mlo_roc(struct mt792x_bss_conf *mconf, u16 sel_links,
+ req.roc[i].bw_from_ap = CMD_CBW_20MHZ;
+ req.roc[i].center_chan = center_ch;
+ req.roc[i].center_chan_from_ap = center_ch;
++ req.roc[i].center_chan2 = 0;
++ req.roc[i].center_chan2_from_ap = 0;
+
+ /* STR : 0xfe indicates BAND_ALL with enabling DBDC
+ * EMLSR : 0xff indicates (BAND_AUTO) without DBDC
+@@ -1215,7 +1252,7 @@ int mt7925_mcu_set_mlo_roc(struct mt792x_bss_conf *mconf, u16 sel_links,
+ }
+
+ return mt76_mcu_send_msg(&mvif->phy->dev->mt76, MCU_UNI_CMD(ROC),
+- &req, sizeof(req), false);
++ &req, sizeof(req), true);
+ }
+
+ int mt7925_mcu_set_roc(struct mt792x_phy *phy, struct mt792x_bss_conf *mconf,
+@@ -1264,7 +1301,7 @@ int mt7925_mcu_set_roc(struct mt792x_phy *phy, struct mt792x_bss_conf *mconf,
+ }
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(ROC),
+- &req, sizeof(req), false);
++ &req, sizeof(req), true);
+ }
+
+ int mt7925_mcu_abort_roc(struct mt792x_phy *phy, struct mt792x_bss_conf *mconf,
+@@ -1294,7 +1331,7 @@ int mt7925_mcu_abort_roc(struct mt792x_phy *phy, struct mt792x_bss_conf *mconf,
+ };
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(ROC),
+- &req, sizeof(req), false);
++ &req, sizeof(req), true);
+ }
+
+ int mt7925_mcu_set_eeprom(struct mt792x_dev *dev)
+@@ -1357,7 +1394,7 @@ int mt7925_mcu_uni_bss_ps(struct mt792x_dev *dev,
+ &ps_req, sizeof(ps_req), true);
+ }
+
+-static int
++int
+ mt7925_mcu_uni_bss_bcnft(struct mt792x_dev *dev,
+ struct ieee80211_bss_conf *link_conf, bool enable)
+ {
+@@ -1447,12 +1484,12 @@ mt7925_mcu_set_bss_pm(struct mt792x_dev *dev,
+ int err;
+
+ err = mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(BSS_INFO_UPDATE),
+- &req1, sizeof(req1), false);
++ &req1, sizeof(req1), true);
+ if (err < 0 || !enable)
+ return err;
+
+ return mt76_mcu_send_msg(&dev->mt76, MCU_UNI_CMD(BSS_INFO_UPDATE),
+- &req, sizeof(req), false);
++ &req, sizeof(req), true);
+ }
+
+ static void
+@@ -1898,7 +1935,11 @@ int mt7925_mcu_sta_update(struct mt792x_dev *dev,
+ mlink = mt792x_sta_to_link(msta, link_sta->link_id);
+ }
+ info.wcid = link_sta ? &mlink->wcid : &mvif->sta.deflink.wcid;
+- info.newly = link_sta ? state != MT76_STA_INFO_STATE_ASSOC : true;
++
++ if (link_sta)
++ info.newly = state != MT76_STA_INFO_STATE_ASSOC;
++ else
++ info.newly = state == MT76_STA_INFO_STATE_ASSOC ? false : true;
+
+ if (ieee80211_vif_is_mld(vif))
+ err = mt7925_mcu_mlo_sta_cmd(&dev->mphy, &info);
+@@ -1914,32 +1955,21 @@ int mt7925_mcu_set_beacon_filter(struct mt792x_dev *dev,
+ {
+ #define MT7925_FIF_BIT_CLR BIT(1)
+ #define MT7925_FIF_BIT_SET BIT(0)
+- struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+- unsigned long valid = ieee80211_vif_is_mld(vif) ?
+- mvif->valid_links : BIT(0);
+- struct ieee80211_bss_conf *bss_conf;
+ int err = 0;
+- int i;
+
+ if (enable) {
+- for_each_set_bit(i, &valid, IEEE80211_MLD_MAX_NUM_LINKS) {
+- bss_conf = mt792x_vif_to_bss_conf(vif, i);
+- err = mt7925_mcu_uni_bss_bcnft(dev, bss_conf, true);
+- if (err < 0)
+- return err;
+- }
++ err = mt7925_mcu_uni_bss_bcnft(dev, &vif->bss_conf, true);
++ if (err < 0)
++ return err;
+
+ return mt7925_mcu_set_rxfilter(dev, 0,
+ MT7925_FIF_BIT_SET,
+ MT_WF_RFCR_DROP_OTHER_BEACON);
+ }
+
+- for_each_set_bit(i, &valid, IEEE80211_MLD_MAX_NUM_LINKS) {
+- bss_conf = mt792x_vif_to_bss_conf(vif, i);
+- err = mt7925_mcu_set_bss_pm(dev, bss_conf, false);
+- if (err)
+- return err;
+- }
++ err = mt7925_mcu_set_bss_pm(dev, &vif->bss_conf, false);
++ if (err < 0)
++ return err;
+
+ return mt7925_mcu_set_rxfilter(dev, 0,
+ MT7925_FIF_BIT_CLR,
+@@ -1976,8 +2006,6 @@ int mt7925_get_txpwr_info(struct mt792x_dev *dev, u8 band_idx, struct mt7925_txp
+ int mt7925_mcu_set_sniffer(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ bool enable)
+ {
+- struct mt792x_vif *mvif = (struct mt792x_vif *)vif->drv_priv;
+-
+ struct {
+ struct {
+ u8 band_idx;
+@@ -1991,7 +2019,7 @@ int mt7925_mcu_set_sniffer(struct mt792x_dev *dev, struct ieee80211_vif *vif,
+ } __packed enable;
+ } __packed req = {
+ .hdr = {
+- .band_idx = mvif->bss_conf.mt76.band_idx,
++ .band_idx = 0,
+ },
+ .enable = {
+ .tag = cpu_to_le16(UNI_SNIFFER_ENABLE),
+@@ -2050,7 +2078,7 @@ int mt7925_mcu_config_sniffer(struct mt792x_vif *vif,
+ } __packed tlv;
+ } __packed req = {
+ .hdr = {
+- .band_idx = vif->bss_conf.mt76.band_idx,
++ .band_idx = 0,
+ },
+ .tlv = {
+ .tag = cpu_to_le16(UNI_SNIFFER_CONFIG),
+@@ -2179,11 +2207,27 @@ void mt7925_mcu_bss_rlm_tlv(struct sk_buff *skb, struct mt76_phy *phy,
+ req = (struct bss_rlm_tlv *)tlv;
+ req->control_channel = chandef->chan->hw_value;
+ req->center_chan = ieee80211_frequency_to_channel(freq1);
+- req->center_chan2 = ieee80211_frequency_to_channel(freq2);
++ req->center_chan2 = 0;
+ req->tx_streams = hweight8(phy->antenna_mask);
+ req->ht_op_info = 4; /* set HT 40M allowed */
+ req->rx_streams = hweight8(phy->antenna_mask);
+- req->band = band;
++ req->center_chan2 = 0;
++ req->sco = 0;
++ req->band = 1;
++
++ switch (band) {
++ case NL80211_BAND_2GHZ:
++ req->band = 1;
++ break;
++ case NL80211_BAND_5GHZ:
++ req->band = 2;
++ break;
++ case NL80211_BAND_6GHZ:
++ req->band = 3;
++ break;
++ default:
++ break;
++ }
+
+ switch (chandef->width) {
+ case NL80211_CHAN_WIDTH_40:
+@@ -2194,6 +2238,7 @@ void mt7925_mcu_bss_rlm_tlv(struct sk_buff *skb, struct mt76_phy *phy,
+ break;
+ case NL80211_CHAN_WIDTH_80P80:
+ req->bw = CMD_CBW_8080MHZ;
++ req->center_chan2 = ieee80211_frequency_to_channel(freq2);
+ break;
+ case NL80211_CHAN_WIDTH_160:
+ req->bw = CMD_CBW_160MHZ;
+@@ -2463,6 +2508,7 @@ static void
+ mt7925_mcu_bss_mld_tlv(struct sk_buff *skb,
+ struct ieee80211_bss_conf *link_conf)
+ {
++ struct ieee80211_vif *vif = link_conf->vif;
+ struct mt792x_bss_conf *mconf = mt792x_link_conf_to_mconf(link_conf);
+ struct mt792x_vif *mvif = (struct mt792x_vif *)link_conf->vif->drv_priv;
+ struct bss_mld_tlv *mld;
+@@ -2483,7 +2529,7 @@ mt7925_mcu_bss_mld_tlv(struct sk_buff *skb,
+ mld->eml_enable = !!(link_conf->vif->cfg.eml_cap &
+ IEEE80211_EML_CAP_EMLSR_SUPP);
+
+- memcpy(mld->mac_addr, link_conf->addr, ETH_ALEN);
++ memcpy(mld->mac_addr, vif->addr, ETH_ALEN);
+ }
+
+ static void
+@@ -2614,7 +2660,7 @@ int mt7925_mcu_add_bss_info(struct mt792x_phy *phy,
+ MCU_UNI_CMD(BSS_INFO_UPDATE), true);
+ }
+
+-int mt7925_mcu_set_dbdc(struct mt76_phy *phy)
++int mt7925_mcu_set_dbdc(struct mt76_phy *phy, bool enable)
+ {
+ struct mt76_dev *mdev = phy->dev;
+
+@@ -2634,7 +2680,7 @@ int mt7925_mcu_set_dbdc(struct mt76_phy *phy)
+ tlv = mt76_connac_mcu_add_tlv(skb, UNI_MBMC_SETTING, sizeof(*conf));
+ conf = (struct mbmc_conf_tlv *)tlv;
+
+- conf->mbmc_en = 1;
++ conf->mbmc_en = enable;
+ conf->band = 0; /* unused */
+
+ err = mt76_mcu_skb_send_msg(mdev, skb, MCU_UNI_CMD(SET_DBDC_PARMS),
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
+index ac53bdc993322f..fe6a613ba00889 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
+@@ -616,7 +616,7 @@ mt7925_mcu_get_cipher(int cipher)
+ }
+ }
+
+-int mt7925_mcu_set_dbdc(struct mt76_phy *phy);
++int mt7925_mcu_set_dbdc(struct mt76_phy *phy, bool enable);
+ int mt7925_mcu_hw_scan(struct mt76_phy *phy, struct ieee80211_vif *vif,
+ struct ieee80211_scan_request *scan_req);
+ int mt7925_mcu_cancel_hw_scan(struct mt76_phy *phy,
+@@ -643,4 +643,7 @@ int mt7925_mcu_set_chctx(struct mt76_phy *phy, struct mt76_vif *mvif,
+ int mt7925_mcu_set_rate_txpower(struct mt76_phy *phy);
+ int mt7925_mcu_update_arp_filter(struct mt76_dev *dev,
+ struct ieee80211_bss_conf *link_conf);
++int
++mt7925_mcu_uni_bss_bcnft(struct mt792x_dev *dev,
++ struct ieee80211_bss_conf *link_conf, bool enable);
+ #endif
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h b/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
+index f5c02e5f506633..df3c705d1cb3fa 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mt7925.h
+@@ -242,9 +242,11 @@ int mt7925_mcu_set_beacon_filter(struct mt792x_dev *dev,
+ struct ieee80211_vif *vif,
+ bool enable);
+ int mt7925_mcu_uni_tx_ba(struct mt792x_dev *dev,
++ struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params,
+ bool enable);
+ int mt7925_mcu_uni_rx_ba(struct mt792x_dev *dev,
++ struct ieee80211_vif *vif,
+ struct ieee80211_ampdu_params *params,
+ bool enable);
+ void mt7925_scan_work(struct work_struct *work);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt792x.h b/drivers/net/wireless/mediatek/mt76/mt792x.h
+index ab12616ec2b87c..2b8b9b2977f74a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt792x.h
++++ b/drivers/net/wireless/mediatek/mt76/mt792x.h
+@@ -241,6 +241,7 @@ static inline struct mt792x_bss_conf *
+ mt792x_vif_to_link(struct mt792x_vif *mvif, u8 link_id)
+ {
+ struct ieee80211_vif *vif;
++ struct mt792x_bss_conf *bss_conf;
+
+ vif = container_of((void *)mvif, struct ieee80211_vif, drv_priv);
+
+@@ -248,8 +249,10 @@ mt792x_vif_to_link(struct mt792x_vif *mvif, u8 link_id)
+ link_id >= IEEE80211_LINK_UNSPECIFIED)
+ return &mvif->bss_conf;
+
+- return rcu_dereference_protected(mvif->link_conf[link_id],
+- lockdep_is_held(&mvif->phy->dev->mt76.mutex));
++ bss_conf = rcu_dereference_protected(mvif->link_conf[link_id],
++ lockdep_is_held(&mvif->phy->dev->mt76.mutex));
++
++ return bss_conf ? bss_conf : &mvif->bss_conf;
+ }
+
+ static inline struct mt792x_link_sta *
+diff --git a/drivers/net/wireless/mediatek/mt76/mt792x_core.c b/drivers/net/wireless/mediatek/mt76/mt792x_core.c
+index 78fe37c2e07b59..b87eed4d168df5 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt792x_core.c
++++ b/drivers/net/wireless/mediatek/mt76/mt792x_core.c
+@@ -147,7 +147,8 @@ void mt792x_mac_link_bss_remove(struct mt792x_dev *dev,
+ link_conf = mt792x_vif_to_bss_conf(vif, mconf->link_id);
+
+ mt76_connac_free_pending_tx_skbs(&dev->pm, &mlink->wcid);
+- mt76_connac_mcu_uni_add_dev(&dev->mphy, link_conf, &mlink->wcid, false);
++ mt76_connac_mcu_uni_add_dev(&dev->mphy, link_conf, &mconf->mt76,
++ &mlink->wcid, false);
+
+ rcu_assign_pointer(dev->mt76.wcid[idx], NULL);
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt792x_mac.c b/drivers/net/wireless/mediatek/mt76/mt792x_mac.c
+index 106273935b267f..05978d9c7b916a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt792x_mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt792x_mac.c
+@@ -153,7 +153,7 @@ struct mt76_wcid *mt792x_rx_get_wcid(struct mt792x_dev *dev, u16 idx,
+ return NULL;
+
+ link = container_of(wcid, struct mt792x_link_sta, wcid);
+- sta = container_of(link, struct mt792x_sta, deflink);
++ sta = link->sta;
+ if (!sta->vif)
+ return NULL;
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/init.c b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+index 5e96973226bbb5..d8a013812d1e37 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/init.c
+@@ -16,9 +16,6 @@
+
+ static const struct ieee80211_iface_limit if_limits[] = {
+ {
+- .max = 1,
+- .types = BIT(NL80211_IFTYPE_ADHOC)
+- }, {
+ .max = 16,
+ .types = BIT(NL80211_IFTYPE_AP)
+ #ifdef CONFIG_MAC80211_MESH
+@@ -85,7 +82,7 @@ static ssize_t mt7996_thermal_temp_store(struct device *dev,
+ return ret;
+
+ mutex_lock(&phy->dev->mt76.mutex);
+- val = clamp_val(DIV_ROUND_CLOSEST(val, 1000), 40, 130);
++ val = DIV_ROUND_CLOSEST(clamp_val(val, 40 * 1000, 130 * 1000), 1000);
+
+ /* add a safety margin ~10 */
+ if ((i - 1 == MT7996_CRIT_TEMP_IDX &&
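The thermal hunk above reorders the two helpers so the raw millidegree input is clamped before DIV_ROUND_CLOSEST() runs; with the old order, DIV_ROUND_CLOSEST(val, 1000) adds 500 to the unclamped value first, which can overflow for inputs near the type's maximum. A runnable illustration with userspace stand-ins for the kernel macros:

    #include <limits.h>
    #include <stdio.h>

    /* Userspace stand-ins for clamp_val() and DIV_ROUND_CLOSEST(). */
    #define SKETCH_CLAMP(v, lo, hi) ((v) < (lo) ? (lo) : (v) > (hi) ? (hi) : (v))
    #define SKETCH_DIV_ROUND_CLOSEST(x, d) (((x) + (d) / 2) / (d))

    int main(void)
    {
        long val = LONG_MAX; /* e.g. a huge value written to the sysfs file */

        /* Old order: val + 500 is computed on the raw input and overflows
         * here. New order: clamp to the sane millidegree range first, then
         * divide -- no intermediate can overflow. */
        long deg = SKETCH_DIV_ROUND_CLOSEST(
                SKETCH_CLAMP(val, 40 * 1000L, 130 * 1000L), 1000L);

        printf("clamped temperature: %ld degC\n", deg); /* prints 130 */
        return 0;
    }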
+@@ -1080,6 +1077,9 @@ mt7996_init_he_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ he_cap_elem->phy_cap_info[2] = IEEE80211_HE_PHY_CAP2_STBC_TX_UNDER_80MHZ |
+ IEEE80211_HE_PHY_CAP2_STBC_RX_UNDER_80MHZ;
+
++ he_cap_elem->phy_cap_info[7] =
++ IEEE80211_HE_PHY_CAP7_HE_SU_MU_PPDU_4XLTF_AND_08_US_GI;
++
+ switch (iftype) {
+ case NL80211_IFTYPE_AP:
+ he_cap_elem->mac_cap_info[0] |= IEEE80211_HE_MAC_CAP0_TWT_RES;
+@@ -1119,8 +1119,7 @@ mt7996_init_he_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ IEEE80211_HE_PHY_CAP6_PARTIAL_BW_EXT_RANGE |
+ IEEE80211_HE_PHY_CAP6_PPE_THRESHOLD_PRESENT;
+ he_cap_elem->phy_cap_info[7] |=
+- IEEE80211_HE_PHY_CAP7_POWER_BOOST_FACTOR_SUPP |
+- IEEE80211_HE_PHY_CAP7_HE_SU_MU_PPDU_4XLTF_AND_08_US_GI;
++ IEEE80211_HE_PHY_CAP7_POWER_BOOST_FACTOR_SUPP;
+ he_cap_elem->phy_cap_info[8] |=
+ IEEE80211_HE_PHY_CAP8_20MHZ_IN_40MHZ_HE_PPDU_IN_2G |
+ IEEE80211_HE_PHY_CAP8_20MHZ_IN_160MHZ_HE_PPDU |
+@@ -1190,7 +1189,9 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+
+ eht_cap_elem->mac_cap_info[0] =
+ IEEE80211_EHT_MAC_CAP0_EPCS_PRIO_ACCESS |
+- IEEE80211_EHT_MAC_CAP0_OM_CONTROL;
++ IEEE80211_EHT_MAC_CAP0_OM_CONTROL |
++ u8_encode_bits(IEEE80211_EHT_MAC_CAP0_MAX_MPDU_LEN_11454,
++ IEEE80211_EHT_MAC_CAP0_MAX_MPDU_LEN_MASK);
+
+ eht_cap_elem->phy_cap_info[0] =
+ IEEE80211_EHT_PHY_CAP0_NDP_4_EHT_LFT_32_GI |
+@@ -1233,21 +1234,20 @@ mt7996_init_eht_caps(struct mt7996_phy *phy, enum nl80211_band band,
+ IEEE80211_EHT_PHY_CAP3_CODEBOOK_7_5_MU_FDBK;
+
+ eht_cap_elem->phy_cap_info[4] =
++ IEEE80211_EHT_PHY_CAP4_EHT_MU_PPDU_4_EHT_LTF_08_GI |
+ u8_encode_bits(min_t(int, sts - 1, 2),
+ IEEE80211_EHT_PHY_CAP4_MAX_NC_MASK);
+
+ eht_cap_elem->phy_cap_info[5] =
+ u8_encode_bits(IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_16US,
+ IEEE80211_EHT_PHY_CAP5_COMMON_NOMINAL_PKT_PAD_MASK) |
+- u8_encode_bits(u8_get_bits(0x11, GENMASK(1, 0)),
++ u8_encode_bits(u8_get_bits(1, GENMASK(1, 0)),
+ IEEE80211_EHT_PHY_CAP5_MAX_NUM_SUPP_EHT_LTF_MASK);
+
+ val = width == NL80211_CHAN_WIDTH_320 ? 0xf :
+ width == NL80211_CHAN_WIDTH_160 ? 0x7 :
+ width == NL80211_CHAN_WIDTH_80 ? 0x3 : 0x1;
+ eht_cap_elem->phy_cap_info[6] =
+- u8_encode_bits(u8_get_bits(0x11, GENMASK(4, 2)),
+- IEEE80211_EHT_PHY_CAP6_MAX_NUM_SUPP_EHT_LTF_MASK) |
+ u8_encode_bits(val, IEEE80211_EHT_PHY_CAP6_MCS15_SUPP_MASK);
+
+ val = u8_encode_bits(nss, IEEE80211_EHT_MCS_NSS_RX) |
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+index 0d21414e2c884a..f590902fdeea37 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mac.c
+@@ -819,6 +819,7 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ struct ieee80211_key_conf *key, int pid,
+ enum mt76_txq_id qid, u32 changed)
+ {
++ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_vif *vif = info->control.vif;
+ u8 band_idx = (info->hw_queue & MT_TX_HW_QUEUE_PHY) >> 2;
+@@ -886,8 +887,9 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ val = MT_TXD6_DIS_MAT | MT_TXD6_DAS;
+ if (is_mt7996(&dev->mt76))
+ val |= FIELD_PREP(MT_TXD6_MSDU_CNT, 1);
+- else
++ else if (is_8023 || !ieee80211_is_mgmt(hdr->frame_control))
+ val |= FIELD_PREP(MT_TXD6_MSDU_CNT_V2, 1);
++
+ txwi[6] = cpu_to_le32(val);
+ txwi[7] = 0;
+
+@@ -897,7 +899,6 @@ void mt7996_mac_write_txwi(struct mt7996_dev *dev, __le32 *txwi,
+ mt7996_mac_write_txwi_80211(dev, txwi, skb, key);
+
+ if (txwi[1] & cpu_to_le32(MT_TXD1_FIXED_RATE)) {
+- struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ bool mcast = ieee80211_is_data(hdr->frame_control) &&
+ is_multicast_ether_addr(hdr->addr1);
+ u8 idx = MT7996_BASIC_RATES_TBL;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/main.c b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+index 39f071ece35e6e..4d11083b86c092 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/main.c
+@@ -496,8 +496,7 @@ static void mt7996_configure_filter(struct ieee80211_hw *hw,
+
+ MT76_FILTER(CONTROL, MT_WF_RFCR_DROP_CTS |
+ MT_WF_RFCR_DROP_RTS |
+- MT_WF_RFCR_DROP_CTL_RSV |
+- MT_WF_RFCR_DROP_NDPA);
++ MT_WF_RFCR_DROP_CTL_RSV);
+
+ *total_flags = flags;
+ mt76_wr(dev, MT_WF_RFCR(phy->mt76->band_idx), phy->rxfilter);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+index 6c445a9dbc03d8..265958f7b78711 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mcu.c
+@@ -2070,7 +2070,7 @@ mt7996_mcu_sta_rate_ctrl_tlv(struct sk_buff *skb, struct mt7996_dev *dev,
+ cap |= STA_CAP_VHT_TX_STBC;
+ if (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXSTBC_1)
+ cap |= STA_CAP_VHT_RX_STBC;
+- if (vif->bss_conf.vht_ldpc &&
++ if ((vif->type != NL80211_IFTYPE_AP || vif->bss_conf.vht_ldpc) &&
+ (sta->deflink.vht_cap.cap & IEEE80211_VHT_CAP_RXLDPC))
+ cap |= STA_CAP_VHT_LDPC;
+
+@@ -3666,6 +3666,13 @@ int mt7996_mcu_get_chip_config(struct mt7996_dev *dev, u32 *cap)
+
+ int mt7996_mcu_get_chan_mib_info(struct mt7996_phy *phy, bool chan_switch)
+ {
++ enum {
++ IDX_TX_TIME,
++ IDX_RX_TIME,
++ IDX_OBSS_AIRTIME,
++ IDX_NON_WIFI_TIME,
++ IDX_NUM
++ };
+ struct {
+ struct {
+ u8 band;
+@@ -3675,16 +3682,15 @@ int mt7996_mcu_get_chan_mib_info(struct mt7996_phy *phy, bool chan_switch)
+ __le16 tag;
+ __le16 len;
+ __le32 offs;
+- } data[4];
++ } data[IDX_NUM];
+ } __packed req = {
+ .hdr.band = phy->mt76->band_idx,
+ };
+- /* strict order */
+ static const u32 offs[] = {
+- UNI_MIB_TX_TIME,
+- UNI_MIB_RX_TIME,
+- UNI_MIB_OBSS_AIRTIME,
+- UNI_MIB_NON_WIFI_TIME,
++ [IDX_TX_TIME] = UNI_MIB_TX_TIME,
++ [IDX_RX_TIME] = UNI_MIB_RX_TIME,
++ [IDX_OBSS_AIRTIME] = UNI_MIB_OBSS_AIRTIME,
++ [IDX_NON_WIFI_TIME] = UNI_MIB_NON_WIFI_TIME,
+ };
+ struct mt76_channel_state *state = phy->mt76->chan_state;
+ struct mt76_channel_state *state_ts = &phy->state_ts;
+@@ -3693,7 +3699,7 @@ int mt7996_mcu_get_chan_mib_info(struct mt7996_phy *phy, bool chan_switch)
+ struct sk_buff *skb;
+ int i, ret;
+
+- for (i = 0; i < 4; i++) {
++ for (i = 0; i < IDX_NUM; i++) {
+ req.data[i].tag = cpu_to_le16(UNI_CMD_MIB_DATA);
+ req.data[i].len = cpu_to_le16(sizeof(req.data[i]));
+ req.data[i].offs = cpu_to_le32(offs[i]);
+@@ -3712,17 +3718,24 @@ int mt7996_mcu_get_chan_mib_info(struct mt7996_phy *phy, bool chan_switch)
+ goto out;
+
+ #define __res_u64(s) le64_to_cpu(res[s].data)
+- state->cc_tx += __res_u64(1) - state_ts->cc_tx;
+- state->cc_bss_rx += __res_u64(2) - state_ts->cc_bss_rx;
+- state->cc_rx += __res_u64(2) + __res_u64(3) - state_ts->cc_rx;
+- state->cc_busy += __res_u64(0) + __res_u64(1) + __res_u64(2) + __res_u64(3) -
++ state->cc_tx += __res_u64(IDX_TX_TIME) - state_ts->cc_tx;
++ state->cc_bss_rx += __res_u64(IDX_RX_TIME) - state_ts->cc_bss_rx;
++ state->cc_rx += __res_u64(IDX_RX_TIME) +
++ __res_u64(IDX_OBSS_AIRTIME) -
++ state_ts->cc_rx;
++ state->cc_busy += __res_u64(IDX_TX_TIME) +
++ __res_u64(IDX_RX_TIME) +
++ __res_u64(IDX_OBSS_AIRTIME) +
++ __res_u64(IDX_NON_WIFI_TIME) -
+ state_ts->cc_busy;
+-
+ out:
+- state_ts->cc_tx = __res_u64(1);
+- state_ts->cc_bss_rx = __res_u64(2);
+- state_ts->cc_rx = __res_u64(2) + __res_u64(3);
+- state_ts->cc_busy = __res_u64(0) + __res_u64(1) + __res_u64(2) + __res_u64(3);
++ state_ts->cc_tx = __res_u64(IDX_TX_TIME);
++ state_ts->cc_bss_rx = __res_u64(IDX_RX_TIME);
++ state_ts->cc_rx = __res_u64(IDX_RX_TIME) + __res_u64(IDX_OBSS_AIRTIME);
++ state_ts->cc_busy = __res_u64(IDX_TX_TIME) +
++ __res_u64(IDX_RX_TIME) +
++ __res_u64(IDX_OBSS_AIRTIME) +
++ __res_u64(IDX_NON_WIFI_TIME);
+ #undef __res_u64
+
+ dev_kfree_skb(skb);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
+index 40e45fb2b62607..442f72450352b0 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7996/mmio.c
+@@ -177,7 +177,7 @@ static u32 __mt7996_reg_addr(struct mt7996_dev *dev, u32 addr)
+ continue;
+
+ ofs = addr - dev->reg.map[i].phys;
+- if (ofs > dev->reg.map[i].size)
++ if (ofs >= dev->reg.map[i].size)
+ continue;
+
+ return dev->reg.map[i].mapped + ofs;
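This register-map hunk and the earlier mt7925 CLC hunk (clc->idx >= ARRAY_SIZE(phy->clc)) fix the same off-by-one class: for a table of N entries, valid indices are 0..N-1, so the reject test must use >=. A tiny self-contained example of the corrected check:

    #include <stdio.h>

    #define MAP_SIZE 16

    static const int map[MAP_SIZE] = { 0, 1, 2, 3, 4, 5, 6, 7,
                                       8, 9, 10, 11, 12, 13, 14, 15 };

    /* Using '>' here would accept ofs == MAP_SIZE and read one element
     * past the end of the table. */
    static int map_lookup(unsigned int ofs)
    {
        if (ofs >= MAP_SIZE)
            return -1;
        return map[ofs];
    }

    int main(void)
    {
        printf("ofs 15 -> %d\n", map_lookup(15)); /* last valid entry */
        printf("ofs 16 -> %d\n", map_lookup(16)); /* rejected: -1 */
        return 0;
    }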
+diff --git a/drivers/net/wireless/mediatek/mt76/usb.c b/drivers/net/wireless/mediatek/mt76/usb.c
+index 58ff068233894e..f9e67b8c3b3c89 100644
+--- a/drivers/net/wireless/mediatek/mt76/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/usb.c
+@@ -33,9 +33,9 @@ int __mt76u_vendor_request(struct mt76_dev *dev, u8 req, u8 req_type,
+
+ ret = usb_control_msg(udev, pipe, req, req_type, val,
+ offset, buf, len, MT_VEND_REQ_TOUT_MS);
+- if (ret == -ENODEV)
++ if (ret == -ENODEV || ret == -EPROTO)
+ set_bit(MT76_REMOVED, &dev->phy.state);
+- if (ret >= 0 || ret == -ENODEV)
++ if (ret >= 0 || ret == -ENODEV || ret == -EPROTO)
+ return ret;
+ usleep_range(5000, 10000);
+ }
+diff --git a/drivers/net/wireless/realtek/rtlwifi/base.c b/drivers/net/wireless/realtek/rtlwifi/base.c
+index aab4605de9c47c..ff61867d142fa4 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/base.c
++++ b/drivers/net/wireless/realtek/rtlwifi/base.c
+@@ -575,9 +575,15 @@ static void rtl_free_entries_from_ack_queue(struct ieee80211_hw *hw,
+
+ void rtl_deinit_core(struct ieee80211_hw *hw)
+ {
++ struct rtl_priv *rtlpriv = rtl_priv(hw);
++
+ rtl_c2hcmd_launcher(hw, 0);
+ rtl_free_entries_from_scan_list(hw);
+ rtl_free_entries_from_ack_queue(hw, false);
++ if (rtlpriv->works.rtl_wq) {
++ destroy_workqueue(rtlpriv->works.rtl_wq);
++ rtlpriv->works.rtl_wq = NULL;
++ }
+ }
+ EXPORT_SYMBOL_GPL(rtl_deinit_core);
+
+@@ -2696,9 +2702,6 @@ MODULE_AUTHOR("Larry Finger <Larry.FInger@lwfinger.net>");
+ MODULE_LICENSE("GPL");
+ MODULE_DESCRIPTION("Realtek 802.11n PCI wireless core");
+
+-struct rtl_global_var rtl_global_var = {};
+-EXPORT_SYMBOL_GPL(rtl_global_var);
+-
+ static int __init rtl_core_module_init(void)
+ {
+ BUILD_BUG_ON(TX_PWR_BY_RATE_NUM_RATE < TX_PWR_BY_RATE_NUM_SECTION);
+@@ -2712,10 +2715,6 @@ static int __init rtl_core_module_init(void)
+ /* add debugfs */
+ rtl_debugfs_add_topdir();
+
+- /* init some global vars */
+- INIT_LIST_HEAD(&rtl_global_var.glb_priv_list);
+- spin_lock_init(&rtl_global_var.glb_list_lock);
+-
+ return 0;
+ }
+
+diff --git a/drivers/net/wireless/realtek/rtlwifi/base.h b/drivers/net/wireless/realtek/rtlwifi/base.h
+index f081a9a90563f5..f3a6a43a42eca8 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/base.h
++++ b/drivers/net/wireless/realtek/rtlwifi/base.h
+@@ -124,7 +124,6 @@ int rtl_send_smps_action(struct ieee80211_hw *hw,
+ u8 *rtl_find_ie(u8 *data, unsigned int len, u8 ie);
+ void rtl_recognize_peer(struct ieee80211_hw *hw, u8 *data, unsigned int len);
+ u8 rtl_tid_to_ac(u8 tid);
+-extern struct rtl_global_var rtl_global_var;
+ void rtl_phy_scan_operation_backup(struct ieee80211_hw *hw, u8 operation);
+
+ #endif
+diff --git a/drivers/net/wireless/realtek/rtlwifi/pci.c b/drivers/net/wireless/realtek/rtlwifi/pci.c
+index 11709b6c83f1aa..0eafc4d125f91d 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/pci.c
++++ b/drivers/net/wireless/realtek/rtlwifi/pci.c
+@@ -295,46 +295,6 @@ static bool rtl_pci_get_amd_l1_patch(struct ieee80211_hw *hw)
+ return status;
+ }
+
+-static bool rtl_pci_check_buddy_priv(struct ieee80211_hw *hw,
+- struct rtl_priv **buddy_priv)
+-{
+- struct rtl_priv *rtlpriv = rtl_priv(hw);
+- struct rtl_pci_priv *pcipriv = rtl_pcipriv(hw);
+- struct rtl_priv *tpriv = NULL, *iter;
+- struct rtl_pci_priv *tpcipriv = NULL;
+-
+- if (!list_empty(&rtlpriv->glb_var->glb_priv_list)) {
+- list_for_each_entry(iter, &rtlpriv->glb_var->glb_priv_list,
+- list) {
+- tpcipriv = (struct rtl_pci_priv *)iter->priv;
+- rtl_dbg(rtlpriv, COMP_INIT, DBG_LOUD,
+- "pcipriv->ndis_adapter.funcnumber %x\n",
+- pcipriv->ndis_adapter.funcnumber);
+- rtl_dbg(rtlpriv, COMP_INIT, DBG_LOUD,
+- "tpcipriv->ndis_adapter.funcnumber %x\n",
+- tpcipriv->ndis_adapter.funcnumber);
+-
+- if (pcipriv->ndis_adapter.busnumber ==
+- tpcipriv->ndis_adapter.busnumber &&
+- pcipriv->ndis_adapter.devnumber ==
+- tpcipriv->ndis_adapter.devnumber &&
+- pcipriv->ndis_adapter.funcnumber !=
+- tpcipriv->ndis_adapter.funcnumber) {
+- tpriv = iter;
+- break;
+- }
+- }
+- }
+-
+- rtl_dbg(rtlpriv, COMP_INIT, DBG_LOUD,
+- "find_buddy_priv %d\n", tpriv != NULL);
+-
+- if (tpriv)
+- *buddy_priv = tpriv;
+-
+- return tpriv != NULL;
+-}
+-
+ static void rtl_pci_parse_configuration(struct pci_dev *pdev,
+ struct ieee80211_hw *hw)
+ {
+@@ -1696,8 +1656,6 @@ static void rtl_pci_deinit(struct ieee80211_hw *hw)
+ synchronize_irq(rtlpci->pdev->irq);
+ tasklet_kill(&rtlpriv->works.irq_tasklet);
+ cancel_work_sync(&rtlpriv->works.lps_change_work);
+-
+- destroy_workqueue(rtlpriv->works.rtl_wq);
+ }
+
+ static int rtl_pci_init(struct ieee80211_hw *hw, struct pci_dev *pdev)
+@@ -2011,7 +1969,6 @@ static bool _rtl_pci_find_adapter(struct pci_dev *pdev,
+ pcipriv->ndis_adapter.amd_l1_patch);
+
+ rtl_pci_parse_configuration(pdev, hw);
+- list_add_tail(&rtlpriv->list, &rtlpriv->glb_var->glb_priv_list);
+
+ return true;
+ }
+@@ -2158,7 +2115,6 @@ int rtl_pci_probe(struct pci_dev *pdev,
+ rtlpriv->rtlhal.interface = INTF_PCI;
+ rtlpriv->cfg = (struct rtl_hal_cfg *)(id->driver_data);
+ rtlpriv->intf_ops = &rtl_pci_ops;
+- rtlpriv->glb_var = &rtl_global_var;
+ rtl_efuse_ops_init(hw);
+
+ /* MEM map */
+@@ -2209,7 +2165,7 @@ int rtl_pci_probe(struct pci_dev *pdev,
+ if (rtlpriv->cfg->ops->init_sw_vars(hw)) {
+ pr_err("Can't init_sw_vars\n");
+ err = -ENODEV;
+- goto fail3;
++ goto fail2;
+ }
+ rtl_init_sw_leds(hw);
+
+@@ -2227,14 +2183,14 @@ int rtl_pci_probe(struct pci_dev *pdev,
+ err = rtl_pci_init(hw, pdev);
+ if (err) {
+ pr_err("Failed to init PCI\n");
+- goto fail3;
++ goto fail4;
+ }
+
+ err = ieee80211_register_hw(hw);
+ if (err) {
+ pr_err("Can't register mac80211 hw.\n");
+ err = -ENODEV;
+- goto fail3;
++ goto fail5;
+ }
+ rtlpriv->mac80211.mac80211_registered = 1;
+
+@@ -2257,16 +2213,19 @@ int rtl_pci_probe(struct pci_dev *pdev,
+ set_bit(RTL_STATUS_INTERFACE_START, &rtlpriv->status);
+ return 0;
+
+-fail3:
+- pci_set_drvdata(pdev, NULL);
++fail5:
++ rtl_pci_deinit(hw);
++fail4:
+ rtl_deinit_core(hw);
++fail3:
++ wait_for_completion(&rtlpriv->firmware_loading_complete);
++ rtlpriv->cfg->ops->deinit_sw_vars(hw);
+
+ fail2:
+ if (rtlpriv->io.pci_mem_start != 0)
+ pci_iounmap(pdev, (void __iomem *)rtlpriv->io.pci_mem_start);
+
+ pci_release_regions(pdev);
+- complete(&rtlpriv->firmware_loading_complete);
+
+ fail1:
+ if (hw)
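The renumbered labels above follow the usual probe() unwind pattern: each failure site jumps to a label that tears down exactly what has been initialized so far, in reverse order. A generic runnable sketch; the step/undo names are hypothetical stand-ins for init_sw_vars(), rtl_pci_init() and ieee80211_register_hw():

    #include <stdio.h>

    static int step_a(void) { return 0; }  /* succeeds */
    static int step_b(void) { return 0; }  /* succeeds */
    static int step_c(void) { return -1; } /* fails, like a register_hw error */

    static void undo_b(void) { puts("undo b"); }
    static void undo_a(void) { puts("undo a"); }

    static int probe(void)
    {
        int err;

        err = step_a();
        if (err)
            goto fail;   /* nothing to undo yet */
        err = step_b();
        if (err)
            goto fail_a; /* only step_a is done */
        err = step_c();
        if (err)
            goto fail_b; /* steps a and b are done */
        return 0;

    fail_b:
        undo_b();        /* unwind in reverse order of setup */
    fail_a:
        undo_a();
    fail:
        return err;
    }

    int main(void)
    {
        printf("probe() = %d\n", probe());
        return 0;
    }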
+@@ -2317,7 +2276,6 @@ void rtl_pci_disconnect(struct pci_dev *pdev)
+ if (rtlpci->using_msi)
+ pci_disable_msi(rtlpci->pdev);
+
+- list_del(&rtlpriv->list);
+ if (rtlpriv->io.pci_mem_start != 0) {
+ pci_iounmap(pdev, (void __iomem *)rtlpriv->io.pci_mem_start);
+ pci_release_regions(pdev);
+@@ -2376,7 +2334,6 @@ EXPORT_SYMBOL(rtl_pci_resume);
+ const struct rtl_intf_ops rtl_pci_ops = {
+ .adapter_start = rtl_pci_start,
+ .adapter_stop = rtl_pci_stop,
+- .check_buddy_priv = rtl_pci_check_buddy_priv,
+ .adapter_tx = rtl_pci_tx,
+ .flush = rtl_pci_flush,
+ .reset_trx_ring = rtl_pci_reset_trx_ring,
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/sw.c b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/sw.c
+index bbf8ff63dcedb4..e63c67b1861b5f 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8192se/sw.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8192se/sw.c
+@@ -64,22 +64,23 @@ static void rtl92se_fw_cb(const struct firmware *firmware, void *context)
+
+ rtl_dbg(rtlpriv, COMP_ERR, DBG_LOUD,
+ "Firmware callback routine entered!\n");
+- complete(&rtlpriv->firmware_loading_complete);
+ if (!firmware) {
+ pr_err("Firmware %s not available\n", fw_name);
+ rtlpriv->max_fw_size = 0;
+- return;
++ goto exit;
+ }
+ if (firmware->size > rtlpriv->max_fw_size) {
+ pr_err("Firmware is too big!\n");
+ rtlpriv->max_fw_size = 0;
+ release_firmware(firmware);
+- return;
++ goto exit;
+ }
+ pfirmware = (struct rt_firmware *)rtlpriv->rtlhal.pfirmware;
+ memcpy(pfirmware->sz_fw_tmpbuffer, firmware->data, firmware->size);
+ pfirmware->sz_fw_tmpbufferlen = firmware->size;
+ release_firmware(firmware);
++exit:
++ complete(&rtlpriv->firmware_loading_complete);
+ }
+
+ static int rtl92s_init_sw_vars(struct ieee80211_hw *hw)
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+index 1be51ea3f3c820..9eddbada8af12c 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c
+@@ -2033,8 +2033,10 @@ static bool _rtl8821ae_phy_config_bb_with_pgheaderfile(struct ieee80211_hw *hw,
+ if (!_rtl8821ae_check_condition(hw, v1)) {
+ i += 2; /* skip the pair of expression*/
+ v2 = array[i+1];
+- while (v2 != 0xDEAD)
++ while (v2 != 0xDEAD) {
+ i += 3;
++ v2 = array[i + 1];
++ }
+ }
+ }
+ }
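The loop fixed above compared v2 against the 0xDEAD terminator but never reloaded it inside the body, so it either exited at once or spun forever. A minimal sketch of the corrected walk; the table layout is a simplified stand-in for the PHY parameter array:

    #include <stdio.h>

    int main(void)
    {
        /* triples of (cond, reg, val), terminated by 0xDEAD in the
         * second slot of the final triple */
        unsigned int array[] = { 1, 0x10, 0x00,
                                 1, 0x14, 0x01,
                                 0, 0xDEAD, 0 };
        unsigned int i = 0, v2;

        v2 = array[i + 1];
        while (v2 != 0xDEAD) {
            i += 3;
            v2 = array[i + 1]; /* the missing re-read: without it the
                                * terminator is never observed */
        }
        printf("terminator found at i = %u\n", i);
        return 0;
    }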
+diff --git a/drivers/net/wireless/realtek/rtlwifi/usb.c b/drivers/net/wireless/realtek/rtlwifi/usb.c
+index d37a017b2b814f..f5718e570011e6 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/usb.c
++++ b/drivers/net/wireless/realtek/rtlwifi/usb.c
+@@ -629,11 +629,6 @@ static void _rtl_usb_cleanup_rx(struct ieee80211_hw *hw)
+ tasklet_kill(&rtlusb->rx_work_tasklet);
+ cancel_work_sync(&rtlpriv->works.lps_change_work);
+
+- if (rtlpriv->works.rtl_wq) {
+- destroy_workqueue(rtlpriv->works.rtl_wq);
+- rtlpriv->works.rtl_wq = NULL;
+- }
+-
+ skb_queue_purge(&rtlusb->rx_queue);
+
+ while ((urb = usb_get_from_anchor(&rtlusb->rx_cleanup_urbs))) {
+@@ -1028,19 +1023,22 @@ int rtl_usb_probe(struct usb_interface *intf,
+ err = ieee80211_register_hw(hw);
+ if (err) {
+ pr_err("Can't register mac80211 hw.\n");
+- goto error_out;
++ goto error_init_vars;
+ }
+ rtlpriv->mac80211.mac80211_registered = 1;
+
+ set_bit(RTL_STATUS_INTERFACE_START, &rtlpriv->status);
+ return 0;
+
++error_init_vars:
++ wait_for_completion(&rtlpriv->firmware_loading_complete);
++ rtlpriv->cfg->ops->deinit_sw_vars(hw);
+ error_out:
++ rtl_usb_deinit(hw);
+ rtl_deinit_core(hw);
+ error_out2:
+ _rtl_usb_io_handler_release(hw);
+ usb_put_dev(udev);
+- complete(&rtlpriv->firmware_loading_complete);
+ kfree(rtlpriv->usb_data);
+ ieee80211_free_hw(hw);
+ return -ENODEV;
+diff --git a/drivers/net/wireless/realtek/rtlwifi/wifi.h b/drivers/net/wireless/realtek/rtlwifi/wifi.h
+index ae6e351bc83c91..f1830ddcdd8c19 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/wifi.h
++++ b/drivers/net/wireless/realtek/rtlwifi/wifi.h
+@@ -2270,8 +2270,6 @@ struct rtl_intf_ops {
+ /*com */
+ int (*adapter_start)(struct ieee80211_hw *hw);
+ void (*adapter_stop)(struct ieee80211_hw *hw);
+- bool (*check_buddy_priv)(struct ieee80211_hw *hw,
+- struct rtl_priv **buddy_priv);
+
+ int (*adapter_tx)(struct ieee80211_hw *hw,
+ struct ieee80211_sta *sta,
+@@ -2514,14 +2512,6 @@ struct dig_t {
+ u32 rssi_max;
+ };
+
+-struct rtl_global_var {
+- /* from this list we can get
+- * other adapter's rtl_priv
+- */
+- struct list_head glb_priv_list;
+- spinlock_t glb_list_lock;
+-};
+-
+ #define IN_4WAY_TIMEOUT_TIME (30 * MSEC_PER_SEC) /* 30 seconds */
+
+ struct rtl_btc_info {
+@@ -2667,9 +2657,7 @@ struct rtl_scan_list {
+ struct rtl_priv {
+ struct ieee80211_hw *hw;
+ struct completion firmware_loading_complete;
+- struct list_head list;
+ struct rtl_priv *buddy_priv;
+- struct rtl_global_var *glb_var;
+ struct rtl_dmsp_ctl dmsp_ctl;
+ struct rtl_locks locks;
+ struct rtl_works works;
+diff --git a/drivers/net/wireless/realtek/rtw89/chan.c b/drivers/net/wireless/realtek/rtw89/chan.c
+index ba6332da8019c1..4df4e04c3e67d7 100644
+--- a/drivers/net/wireless/realtek/rtw89/chan.c
++++ b/drivers/net/wireless/realtek/rtw89/chan.c
+@@ -10,6 +10,10 @@
+ #include "ps.h"
+ #include "util.h"
+
++static void rtw89_swap_chanctx(struct rtw89_dev *rtwdev,
++ enum rtw89_chanctx_idx idx1,
++ enum rtw89_chanctx_idx idx2);
++
+ static enum rtw89_subband rtw89_get_subband_type(enum rtw89_band band,
+ u8 center_chan)
+ {
+@@ -226,11 +230,15 @@ static void rtw89_config_default_chandef(struct rtw89_dev *rtwdev)
+ void rtw89_entity_init(struct rtw89_dev *rtwdev)
+ {
+ struct rtw89_hal *hal = &rtwdev->hal;
++ struct rtw89_entity_mgnt *mgnt = &hal->entity_mgnt;
+
+ hal->entity_pause = false;
+ bitmap_zero(hal->entity_map, NUM_OF_RTW89_CHANCTX);
+ bitmap_zero(hal->changes, NUM_OF_RTW89_CHANCTX_CHANGES);
+ atomic_set(&hal->roc_chanctx_idx, RTW89_CHANCTX_IDLE);
++
++ INIT_LIST_HEAD(&mgnt->active_list);
++
+ rtw89_config_default_chandef(rtwdev);
+ }
+
+@@ -272,6 +280,143 @@ static void rtw89_entity_calculate_weight(struct rtw89_dev *rtwdev,
+ }
+ }
+
++static void rtw89_normalize_link_chanctx(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link)
++{
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
++ struct rtw89_vif_link *cur;
++
++ if (unlikely(!rtwvif_link->chanctx_assigned))
++ return;
++
++ cur = rtw89_vif_get_link_inst(rtwvif, 0);
++ if (!cur || !cur->chanctx_assigned)
++ return;
++
++ if (cur == rtwvif_link)
++ return;
++
++ rtw89_swap_chanctx(rtwdev, rtwvif_link->chanctx_idx, cur->chanctx_idx);
++}
++
++const struct rtw89_chan *__rtw89_mgnt_chan_get(struct rtw89_dev *rtwdev,
++ const char *caller_message,
++ u8 link_index)
++{
++ struct rtw89_hal *hal = &rtwdev->hal;
++ struct rtw89_entity_mgnt *mgnt = &hal->entity_mgnt;
++ enum rtw89_chanctx_idx chanctx_idx;
++ enum rtw89_chanctx_idx roc_idx;
++ enum rtw89_entity_mode mode;
++ u8 role_index;
++
++ lockdep_assert_held(&rtwdev->mutex);
++
++ if (unlikely(link_index >= __RTW89_MLD_MAX_LINK_NUM)) {
++ WARN(1, "link index %u is invalid (max link inst num: %d)\n",
++ link_index, __RTW89_MLD_MAX_LINK_NUM);
++ goto dflt;
++ }
++
++ mode = rtw89_get_entity_mode(rtwdev);
++ switch (mode) {
++ case RTW89_ENTITY_MODE_SCC_OR_SMLD:
++ case RTW89_ENTITY_MODE_MCC:
++ role_index = 0;
++ break;
++ case RTW89_ENTITY_MODE_MCC_PREPARE:
++ role_index = 1;
++ break;
++ default:
++ WARN(1, "Invalid ent mode: %d\n", mode);
++ goto dflt;
++ }
++
++ chanctx_idx = mgnt->chanctx_tbl[role_index][link_index];
++ if (chanctx_idx == RTW89_CHANCTX_IDLE)
++ goto dflt;
++
++ roc_idx = atomic_read(&hal->roc_chanctx_idx);
++ if (roc_idx != RTW89_CHANCTX_IDLE) {
++ /* ROC is ongoing (given ROC runs on RTW89_ROC_BY_LINK_INDEX).
++ * If @link_index is the same as RTW89_ROC_BY_LINK_INDEX, get
++ * the ongoing ROC chanctx.
++ */
++ if (link_index == RTW89_ROC_BY_LINK_INDEX)
++ chanctx_idx = roc_idx;
++ }
++
++ return rtw89_chan_get(rtwdev, chanctx_idx);
++
++dflt:
++ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
++ "%s (%s): prefetch NULL on link index %u\n",
++ __func__, caller_message ?: "", link_index);
++
++ return rtw89_chan_get(rtwdev, RTW89_CHANCTX_0);
++}
++EXPORT_SYMBOL(__rtw89_mgnt_chan_get);
++
++static void rtw89_entity_recalc_mgnt_roles(struct rtw89_dev *rtwdev)
++{
++ struct rtw89_hal *hal = &rtwdev->hal;
++ struct rtw89_entity_mgnt *mgnt = &hal->entity_mgnt;
++ struct rtw89_vif_link *link;
++ struct rtw89_vif *role;
++ u8 pos = 0;
++ int i, j;
++
++ lockdep_assert_held(&rtwdev->mutex);
++
++ for (i = 0; i < RTW89_MAX_INTERFACE_NUM; i++)
++ mgnt->active_roles[i] = NULL;
++
++ for (i = 0; i < RTW89_MAX_INTERFACE_NUM; i++) {
++ for (j = 0; j < __RTW89_MLD_MAX_LINK_NUM; j++)
++ mgnt->chanctx_tbl[i][j] = RTW89_CHANCTX_IDLE;
++ }
++
++ /* To be consistent with legacy behavior, expect the first active role
++ * which uses RTW89_CHANCTX_0 to be put at position 0, and make its first
++ * link instance take RTW89_CHANCTX_0. (normalizing)
++ */
++ list_for_each_entry(role, &mgnt->active_list, mgnt_entry) {
++ for (i = 0; i < role->links_inst_valid_num; i++) {
++ link = rtw89_vif_get_link_inst(role, i);
++ if (!link || !link->chanctx_assigned)
++ continue;
++
++ if (link->chanctx_idx == RTW89_CHANCTX_0) {
++ rtw89_normalize_link_chanctx(rtwdev, link);
++
++ list_del(&role->mgnt_entry);
++ list_add(&role->mgnt_entry, &mgnt->active_list);
++ goto fill;
++ }
++ }
++ }
++
++fill:
++ list_for_each_entry(role, &mgnt->active_list, mgnt_entry) {
++ if (unlikely(pos >= RTW89_MAX_INTERFACE_NUM)) {
++ rtw89_warn(rtwdev,
++ "%s: active roles are over max iface num\n",
++ __func__);
++ break;
++ }
++
++ for (i = 0; i < role->links_inst_valid_num; i++) {
++ link = rtw89_vif_get_link_inst(role, i);
++ if (!link || !link->chanctx_assigned)
++ continue;
++
++ mgnt->chanctx_tbl[pos][i] = link->chanctx_idx;
++ }
++
++ mgnt->active_roles[pos++] = role;
++ }
++}
++
+ enum rtw89_entity_mode rtw89_entity_recalc(struct rtw89_dev *rtwdev)
+ {
+ DECLARE_BITMAP(recalc_map, NUM_OF_RTW89_CHANCTX) = {};
+@@ -298,9 +443,14 @@ enum rtw89_entity_mode rtw89_entity_recalc(struct rtw89_dev *rtwdev)
+ set_bit(RTW89_CHANCTX_0, recalc_map);
+ fallthrough;
+ case 1:
+- mode = RTW89_ENTITY_MODE_SCC;
++ mode = RTW89_ENTITY_MODE_SCC_OR_SMLD;
+ break;
+ case 2 ... NUM_OF_RTW89_CHANCTX:
++ if (w.active_roles == 1) {
++ mode = RTW89_ENTITY_MODE_SCC_OR_SMLD;
++ break;
++ }
++
+ if (w.active_roles != NUM_OF_RTW89_MCC_ROLES) {
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+ "unhandled ent: %d chanctxs %d roles\n",
+@@ -327,6 +477,8 @@ enum rtw89_entity_mode rtw89_entity_recalc(struct rtw89_dev *rtwdev)
+ rtw89_assign_entity_chan(rtwdev, idx, &chan);
+ }
+
++ rtw89_entity_recalc_mgnt_roles(rtwdev);
++
+ if (hal->entity_pause)
+ return rtw89_get_entity_mode(rtwdev);
+
+@@ -650,7 +802,7 @@ static void rtw89_mcc_fill_role_limit(struct rtw89_dev *rtwdev,
+
+ mcc_role->limit.max_toa = max_toa_us / 1024;
+ mcc_role->limit.max_tob = max_tob_us / 1024;
+- mcc_role->limit.max_dur = max_dur_us / 1024;
++ mcc_role->limit.max_dur = mcc_role->limit.max_toa + mcc_role->limit.max_tob;
+ mcc_role->limit.enable = true;
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN,
+@@ -716,6 +868,7 @@ struct rtw89_mcc_fill_role_selector {
+ };
+
+ static_assert((u8)NUM_OF_RTW89_CHANCTX >= NUM_OF_RTW89_MCC_ROLES);
++static_assert(RTW89_MAX_INTERFACE_NUM >= NUM_OF_RTW89_MCC_ROLES);
+
+ static int rtw89_mcc_fill_role_iterator(struct rtw89_dev *rtwdev,
+ struct rtw89_mcc_role *mcc_role,
+@@ -745,14 +898,18 @@ static int rtw89_mcc_fill_role_iterator(struct rtw89_dev *rtwdev,
+
+ static int rtw89_mcc_fill_all_roles(struct rtw89_dev *rtwdev)
+ {
++ struct rtw89_hal *hal = &rtwdev->hal;
++ struct rtw89_entity_mgnt *mgnt = &hal->entity_mgnt;
+ struct rtw89_mcc_fill_role_selector sel = {};
+ struct rtw89_vif_link *rtwvif_link;
+ struct rtw89_vif *rtwvif;
+ int ret;
++ int i;
+
+- rtw89_for_each_rtwvif(rtwdev, rtwvif) {
+- if (!rtw89_vif_is_active_role(rtwvif))
+- continue;
++ for (i = 0; i < NUM_OF_RTW89_MCC_ROLES; i++) {
++ rtwvif = mgnt->active_roles[i];
++ if (!rtwvif)
++ break;
+
+ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
+ if (unlikely(!rtwvif_link)) {
+@@ -760,14 +917,7 @@ static int rtw89_mcc_fill_all_roles(struct rtw89_dev *rtwdev)
+ continue;
+ }
+
+- if (sel.bind_vif[rtwvif_link->chanctx_idx]) {
+- rtw89_warn(rtwdev,
+- "MCC skip extra vif <macid %d> on chanctx[%d]\n",
+- rtwvif_link->mac_id, rtwvif_link->chanctx_idx);
+- continue;
+- }
+-
+- sel.bind_vif[rtwvif_link->chanctx_idx] = rtwvif_link;
++ sel.bind_vif[i] = rtwvif_link;
+ }
+
+ ret = rtw89_iterate_mcc_roles(rtwdev, rtw89_mcc_fill_role_iterator, &sel);
+@@ -2381,7 +2531,25 @@ void rtw89_chanctx_pause(struct rtw89_dev *rtwdev,
+ hal->entity_pause = true;
+ }
+
+-void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev)
++static void rtw89_chanctx_proceed_cb(struct rtw89_dev *rtwdev,
++ const struct rtw89_chanctx_cb_parm *parm)
++{
++ int ret;
++
++ if (!parm || !parm->cb)
++ return;
++
++ ret = parm->cb(rtwdev, parm->data);
++ if (ret)
++ rtw89_warn(rtwdev, "%s (%s): cb failed: %d\n", __func__,
++ parm->caller ?: "unknown", ret);
++}
++
++/* Pass @cb_parm if there is a @cb_parm->cb which needs to be invoked right
++ * after rtw89_set_channel() and right before the entity proceeds according to mode.
++ */
++void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev,
++ const struct rtw89_chanctx_cb_parm *cb_parm)
+ {
+ struct rtw89_hal *hal = &rtwdev->hal;
+ enum rtw89_entity_mode mode;
+@@ -2389,14 +2557,18 @@ void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev)
+
+ lockdep_assert_held(&rtwdev->mutex);
+
+- if (!hal->entity_pause)
++ if (unlikely(!hal->entity_pause)) {
++ rtw89_chanctx_proceed_cb(rtwdev, cb_parm);
+ return;
++ }
+
+ rtw89_debug(rtwdev, RTW89_DBG_CHAN, "chanctx proceed\n");
+
+ hal->entity_pause = false;
+ rtw89_set_channel(rtwdev);
+
++ rtw89_chanctx_proceed_cb(rtwdev, cb_parm);
++
+ mode = rtw89_get_entity_mode(rtwdev);
+ switch (mode) {
+ case RTW89_ENTITY_MODE_MCC:
+@@ -2501,12 +2673,18 @@ int rtw89_chanctx_ops_assign_vif(struct rtw89_dev *rtwdev,
+ struct ieee80211_chanctx_conf *ctx)
+ {
+ struct rtw89_chanctx_cfg *cfg = (struct rtw89_chanctx_cfg *)ctx->drv_priv;
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
++ struct rtw89_hal *hal = &rtwdev->hal;
++ struct rtw89_entity_mgnt *mgnt = &hal->entity_mgnt;
+ struct rtw89_entity_weight w = {};
+
+ rtwvif_link->chanctx_idx = cfg->idx;
+ rtwvif_link->chanctx_assigned = true;
+ cfg->ref_count++;
+
++ if (list_empty(&rtwvif->mgnt_entry))
++ list_add_tail(&rtwvif->mgnt_entry, &mgnt->active_list);
++
+ if (cfg->idx == RTW89_CHANCTX_0)
+ goto out;
+
+@@ -2526,6 +2704,7 @@ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev,
+ struct ieee80211_chanctx_conf *ctx)
+ {
+ struct rtw89_chanctx_cfg *cfg = (struct rtw89_chanctx_cfg *)ctx->drv_priv;
++ struct rtw89_vif *rtwvif = rtwvif_link->rtwvif;
+ struct rtw89_hal *hal = &rtwdev->hal;
+ enum rtw89_chanctx_idx roll;
+ enum rtw89_entity_mode cur;
+@@ -2536,6 +2715,9 @@ void rtw89_chanctx_ops_unassign_vif(struct rtw89_dev *rtwdev,
+ rtwvif_link->chanctx_assigned = false;
+ cfg->ref_count--;
+
++ if (!rtw89_vif_is_active_role(rtwvif))
++ list_del_init(&rtwvif->mgnt_entry);
++
+ if (cfg->ref_count != 0)
+ goto out;
+
+diff --git a/drivers/net/wireless/realtek/rtw89/chan.h b/drivers/net/wireless/realtek/rtw89/chan.h
+index 4ed777ea506485..092a6f676894f5 100644
+--- a/drivers/net/wireless/realtek/rtw89/chan.h
++++ b/drivers/net/wireless/realtek/rtw89/chan.h
+@@ -38,23 +38,32 @@ enum rtw89_chanctx_pause_reasons {
+ RTW89_CHANCTX_PAUSE_REASON_ROC,
+ };
+
++struct rtw89_chanctx_cb_parm {
++ int (*cb)(struct rtw89_dev *rtwdev, void *data);
++ void *data;
++ const char *caller;
++};
++
+ struct rtw89_entity_weight {
+ unsigned int active_chanctxs;
+ unsigned int active_roles;
+ };
+
+-static inline bool rtw89_get_entity_state(struct rtw89_dev *rtwdev)
++static inline bool rtw89_get_entity_state(struct rtw89_dev *rtwdev,
++ enum rtw89_phy_idx phy_idx)
+ {
+ struct rtw89_hal *hal = &rtwdev->hal;
+
+- return READ_ONCE(hal->entity_active);
++ return READ_ONCE(hal->entity_active[phy_idx]);
+ }
+
+-static inline void rtw89_set_entity_state(struct rtw89_dev *rtwdev, bool active)
++static inline void rtw89_set_entity_state(struct rtw89_dev *rtwdev,
++ enum rtw89_phy_idx phy_idx,
++ bool active)
+ {
+ struct rtw89_hal *hal = &rtwdev->hal;
+
+- WRITE_ONCE(hal->entity_active, active);
++ WRITE_ONCE(hal->entity_active[phy_idx], active);
+ }
+
+ static inline
+@@ -97,7 +106,16 @@ void rtw89_queue_chanctx_change(struct rtw89_dev *rtwdev,
+ void rtw89_chanctx_track(struct rtw89_dev *rtwdev);
+ void rtw89_chanctx_pause(struct rtw89_dev *rtwdev,
+ enum rtw89_chanctx_pause_reasons rsn);
+-void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev);
++void rtw89_chanctx_proceed(struct rtw89_dev *rtwdev,
++ const struct rtw89_chanctx_cb_parm *cb_parm);
++
++const struct rtw89_chan *__rtw89_mgnt_chan_get(struct rtw89_dev *rtwdev,
++ const char *caller_message,
++ u8 link_index);
++
++#define rtw89_mgnt_chan_get(rtwdev, link_index) \
++ __rtw89_mgnt_chan_get(rtwdev, __func__, link_index)
++
+ int rtw89_chanctx_ops_add(struct rtw89_dev *rtwdev,
+ struct ieee80211_chanctx_conf *ctx);
+ void rtw89_chanctx_ops_remove(struct rtw89_dev *rtwdev,
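rtw89_chanctx_proceed() now takes an optional struct rtw89_chanctx_cb_parm bundling a callback, its argument, and the caller's name for the failure warning. A userspace analogue of the same pattern (all names here are illustrative, not the driver's API):

    #include <stdio.h>

    struct cb_parm {
        int (*cb)(void *data);
        void *data;
        const char *caller;
    };

    static void proceed(const struct cb_parm *parm)
    {
        int ret;

        /* ... set channel here ... */

        if (!parm || !parm->cb)
            return;

        ret = parm->cb(parm->data);
        if (ret)
            fprintf(stderr, "%s: cb failed: %d\n",
                    parm->caller ? parm->caller : "unknown", ret);
    }

    static int my_cb(void *data)
    {
        printf("post-set-channel work on %s\n", (const char *)data);
        return 0;
    }

    int main(void)
    {
        struct cb_parm parm = { .cb = my_cb, .data = "chanctx 0", .caller = __func__ };

        proceed(&parm);
        proceed(NULL); /* callers with nothing to run just pass NULL */
        return 0;
    }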
+diff --git a/drivers/net/wireless/realtek/rtw89/core.c b/drivers/net/wireless/realtek/rtw89/core.c
+index 5b8e65f6de6a4e..f82a26be6fa82b 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.c
++++ b/drivers/net/wireless/realtek/rtw89/core.c
+@@ -192,13 +192,13 @@ static const struct ieee80211_iface_combination rtw89_iface_combs[] = {
+ {
+ .limits = rtw89_iface_limits,
+ .n_limits = ARRAY_SIZE(rtw89_iface_limits),
+- .max_interfaces = 2,
++ .max_interfaces = RTW89_MAX_INTERFACE_NUM,
+ .num_different_channels = 1,
+ },
+ {
+ .limits = rtw89_iface_limits_mcc,
+ .n_limits = ARRAY_SIZE(rtw89_iface_limits_mcc),
+- .max_interfaces = 2,
++ .max_interfaces = RTW89_MAX_INTERFACE_NUM,
+ .num_different_channels = 2,
+ },
+ };
+@@ -341,83 +341,47 @@ void rtw89_get_channel_params(const struct cfg80211_chan_def *chandef,
+ rtw89_chan_create(chan, center_chan, channel->hw_value, band, bandwidth);
+ }
+
+-void rtw89_core_set_chip_txpwr(struct rtw89_dev *rtwdev)
++static void __rtw89_core_set_chip_txpwr(struct rtw89_dev *rtwdev,
++ const struct rtw89_chan *chan,
++ enum rtw89_phy_idx phy_idx)
+ {
+- struct rtw89_hal *hal = &rtwdev->hal;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+- const struct rtw89_chan *chan;
+- enum rtw89_chanctx_idx chanctx_idx;
+- enum rtw89_chanctx_idx roc_idx;
+- enum rtw89_phy_idx phy_idx;
+- enum rtw89_entity_mode mode;
+ bool entity_active;
+
+- entity_active = rtw89_get_entity_state(rtwdev);
++ entity_active = rtw89_get_entity_state(rtwdev, phy_idx);
+ if (!entity_active)
+ return;
+
+- mode = rtw89_get_entity_mode(rtwdev);
+- switch (mode) {
+- case RTW89_ENTITY_MODE_SCC:
+- case RTW89_ENTITY_MODE_MCC:
+- chanctx_idx = RTW89_CHANCTX_0;
+- break;
+- case RTW89_ENTITY_MODE_MCC_PREPARE:
+- chanctx_idx = RTW89_CHANCTX_1;
+- break;
+- default:
+- WARN(1, "Invalid ent mode: %d\n", mode);
+- return;
+- }
++ chip->ops->set_txpwr(rtwdev, chan, phy_idx);
++}
+
+- roc_idx = atomic_read(&hal->roc_chanctx_idx);
+- if (roc_idx != RTW89_CHANCTX_IDLE)
+- chanctx_idx = roc_idx;
++void rtw89_core_set_chip_txpwr(struct rtw89_dev *rtwdev)
++{
++ const struct rtw89_chan *chan;
+
+- phy_idx = RTW89_PHY_0;
+- chan = rtw89_chan_get(rtwdev, chanctx_idx);
+- chip->ops->set_txpwr(rtwdev, chan, phy_idx);
++ chan = rtw89_mgnt_chan_get(rtwdev, 0);
++ __rtw89_core_set_chip_txpwr(rtwdev, chan, RTW89_PHY_0);
++
++ if (!rtwdev->support_mlo)
++ return;
++
++ chan = rtw89_mgnt_chan_get(rtwdev, 1);
++ __rtw89_core_set_chip_txpwr(rtwdev, chan, RTW89_PHY_1);
+ }
+
+-int rtw89_set_channel(struct rtw89_dev *rtwdev)
++static void __rtw89_set_channel(struct rtw89_dev *rtwdev,
++ const struct rtw89_chan *chan,
++ enum rtw89_mac_idx mac_idx,
++ enum rtw89_phy_idx phy_idx)
+ {
+- struct rtw89_hal *hal = &rtwdev->hal;
+ const struct rtw89_chip_info *chip = rtwdev->chip;
+ const struct rtw89_chan_rcd *chan_rcd;
+- const struct rtw89_chan *chan;
+- enum rtw89_chanctx_idx chanctx_idx;
+- enum rtw89_chanctx_idx roc_idx;
+- enum rtw89_mac_idx mac_idx;
+- enum rtw89_phy_idx phy_idx;
+ struct rtw89_channel_help_params bak;
+- enum rtw89_entity_mode mode;
+ bool entity_active;
+
+- entity_active = rtw89_get_entity_state(rtwdev);
+-
+- mode = rtw89_entity_recalc(rtwdev);
+- switch (mode) {
+- case RTW89_ENTITY_MODE_SCC:
+- case RTW89_ENTITY_MODE_MCC:
+- chanctx_idx = RTW89_CHANCTX_0;
+- break;
+- case RTW89_ENTITY_MODE_MCC_PREPARE:
+- chanctx_idx = RTW89_CHANCTX_1;
+- break;
+- default:
+- WARN(1, "Invalid ent mode: %d\n", mode);
+- return -EINVAL;
+- }
+-
+- roc_idx = atomic_read(&hal->roc_chanctx_idx);
+- if (roc_idx != RTW89_CHANCTX_IDLE)
+- chanctx_idx = roc_idx;
++ entity_active = rtw89_get_entity_state(rtwdev, phy_idx);
+
+- mac_idx = RTW89_MAC_0;
+- phy_idx = RTW89_PHY_0;
+-
+- chan = rtw89_chan_get(rtwdev, chanctx_idx);
+- chan_rcd = rtw89_chan_rcd_get(rtwdev, chanctx_idx);
++ chan_rcd = rtw89_chan_rcd_get_by_chan(chan);
+
+ rtw89_chip_set_channel_prepare(rtwdev, &bak, chan, mac_idx, phy_idx);
+
+@@ -432,7 +396,29 @@ int rtw89_set_channel(struct rtw89_dev *rtwdev)
+ rtw89_chip_rfk_band_changed(rtwdev, phy_idx, chan);
+ }
+
+- rtw89_set_entity_state(rtwdev, true);
++ rtw89_set_entity_state(rtwdev, phy_idx, true);
++}
++
++int rtw89_set_channel(struct rtw89_dev *rtwdev)
++{
++ const struct rtw89_chan *chan;
++ enum rtw89_entity_mode mode;
++
++ mode = rtw89_entity_recalc(rtwdev);
++ if (mode < 0 || mode >= NUM_OF_RTW89_ENTITY_MODE) {
++ WARN(1, "Invalid ent mode: %d\n", mode);
++ return -EINVAL;
++ }
++
++ chan = rtw89_mgnt_chan_get(rtwdev, 0);
++ __rtw89_set_channel(rtwdev, chan, RTW89_MAC_0, RTW89_PHY_0);
++
++ if (!rtwdev->support_mlo)
++ return 0;
++
++ chan = rtw89_mgnt_chan_get(rtwdev, 1);
++ __rtw89_set_channel(rtwdev, chan, RTW89_MAC_1, RTW89_PHY_1);
++
+ return 0;
+ }
+
+@@ -3157,9 +3143,10 @@ void rtw89_roc_start(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ rtw89_leave_ips_by_hwflags(rtwdev);
+ rtw89_leave_lps(rtwdev);
+
+- rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, RTW89_ROC_BY_LINK_INDEX);
+ if (unlikely(!rtwvif_link)) {
+- rtw89_err(rtwdev, "roc start: find no link on HW-0\n");
++ rtw89_err(rtwdev, "roc start: find no link on HW-%u\n",
++ RTW89_ROC_BY_LINK_INDEX);
+ return;
+ }
+
+@@ -3211,9 +3198,10 @@ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+ rtw89_leave_ips_by_hwflags(rtwdev);
+ rtw89_leave_lps(rtwdev);
+
+- rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
++ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, RTW89_ROC_BY_LINK_INDEX);
+ if (unlikely(!rtwvif_link)) {
+- rtw89_err(rtwdev, "roc end: find no link on HW-0\n");
++ rtw89_err(rtwdev, "roc end: find no link on HW-%u\n",
++ RTW89_ROC_BY_LINK_INDEX);
+ return;
+ }
+
+@@ -3224,7 +3212,7 @@ void rtw89_roc_end(struct rtw89_dev *rtwdev, struct rtw89_vif *rtwvif)
+
+ roc->state = RTW89_ROC_IDLE;
+ rtw89_config_roc_chandef(rtwdev, rtwvif_link->chanctx_idx, NULL);
+- rtw89_chanctx_proceed(rtwdev);
++ rtw89_chanctx_proceed(rtwdev, NULL);
+ ret = rtw89_core_send_nullfunc(rtwdev, rtwvif_link, true, false);
+ if (ret)
+ rtw89_debug(rtwdev, RTW89_DBG_TXRX,
+diff --git a/drivers/net/wireless/realtek/rtw89/core.h b/drivers/net/wireless/realtek/rtw89/core.h
+index de33320b1354cd..ff3048d2489f12 100644
+--- a/drivers/net/wireless/realtek/rtw89/core.h
++++ b/drivers/net/wireless/realtek/rtw89/core.h
+@@ -3424,6 +3424,8 @@ enum rtw89_roc_state {
+ RTW89_ROC_MGMT,
+ };
+
++#define RTW89_ROC_BY_LINK_INDEX 0
++
+ struct rtw89_roc {
+ struct ieee80211_channel chan;
+ struct delayed_work roc_work;
+@@ -4619,7 +4621,7 @@ enum rtw89_chanctx_changes {
+ };
+
+ enum rtw89_entity_mode {
+- RTW89_ENTITY_MODE_SCC,
++ RTW89_ENTITY_MODE_SCC_OR_SMLD,
+ RTW89_ENTITY_MODE_MCC_PREPARE,
+ RTW89_ENTITY_MODE_MCC,
+
+@@ -4628,6 +4630,16 @@ enum rtw89_entity_mode {
+ RTW89_ENTITY_MODE_UNHANDLED = -ESRCH,
+ };
+
++#define RTW89_MAX_INTERFACE_NUM 2
++
++/* only valid when running with chanctx_ops */
++struct rtw89_entity_mgnt {
++ struct list_head active_list;
++ struct rtw89_vif *active_roles[RTW89_MAX_INTERFACE_NUM];
++ enum rtw89_chanctx_idx chanctx_tbl[RTW89_MAX_INTERFACE_NUM]
++ [__RTW89_MLD_MAX_LINK_NUM];
++};
++
+ struct rtw89_chanctx {
+ struct cfg80211_chan_def chandef;
+ struct rtw89_chan chan;
+@@ -4668,9 +4680,10 @@ struct rtw89_hal {
+ struct rtw89_chanctx chanctx[NUM_OF_RTW89_CHANCTX];
+ struct cfg80211_chan_def roc_chandef;
+
+- bool entity_active;
++ bool entity_active[RTW89_PHY_MAX];
+ bool entity_pause;
+ enum rtw89_entity_mode entity_mode;
++ struct rtw89_entity_mgnt entity_mgnt;
+
+ struct rtw89_edcca_bak edcca_bak;
+ u32 disabled_dm_bitmap; /* bitmap of enum rtw89_dm_type */
+@@ -5607,6 +5620,7 @@ struct rtw89_dev {
+ struct rtw89_vif {
+ struct rtw89_dev *rtwdev;
+ struct list_head list;
++ struct list_head mgnt_entry;
+
+ u8 mac_addr[ETH_ALEN];
+ __be32 ip_addr;
+@@ -6361,6 +6375,15 @@ const struct rtw89_chan_rcd *rtw89_chan_rcd_get(struct rtw89_dev *rtwdev,
+ return &hal->chanctx[idx].rcd;
+ }
+
++static inline
++const struct rtw89_chan_rcd *rtw89_chan_rcd_get_by_chan(const struct rtw89_chan *chan)
++{
++ const struct rtw89_chanctx *chanctx =
++ container_of_const(chan, struct rtw89_chanctx, chan);
++
++ return &chanctx->rcd;
++}
++
+ static inline
+ const struct rtw89_chan *rtw89_scan_chan_get(struct rtw89_dev *rtwdev)
+ {
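rtw89_chan_rcd_get_by_chan() walks back from the embedded chan member to its enclosing struct rtw89_chanctx via container_of_const(). A standalone sketch of the underlying container_of arithmetic (simplified; the kernel's const-preserving variant adds type checking on top of this):

#include <stddef.h>
#include <stdio.h>

/* Recover the enclosing structure from a pointer to one of its members. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct chan { int freq; };

struct chanctx {
	struct chan chan; /* embedded member handed out to callers */
	int rcd;          /* sibling data we want to reach */
};

int main(void)
{
	struct chanctx ctx = { .chan = { .freq = 2412 }, .rcd = 42 };
	struct chan *c = &ctx.chan;

	/* Step back from the member to the enclosing structure. */
	struct chanctx *outer = container_of(c, struct chanctx, chan);

	printf("rcd=%d\n", outer->rcd); /* prints 42 */
	return 0;
}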
+diff --git a/drivers/net/wireless/realtek/rtw89/fw.c b/drivers/net/wireless/realtek/rtw89/fw.c
+index e6bceef691e9be..620e076d1b597d 100644
+--- a/drivers/net/wireless/realtek/rtw89/fw.c
++++ b/drivers/net/wireless/realtek/rtw89/fw.c
+@@ -6637,21 +6637,24 @@ void rtw89_hw_scan_start(struct rtw89_dev *rtwdev,
+ rtw89_chanctx_pause(rtwdev, RTW89_CHANCTX_PAUSE_REASON_HW_SCAN);
+ }
+
+-void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev,
+- struct rtw89_vif_link *rtwvif_link,
+- bool aborted)
++struct rtw89_hw_scan_complete_cb_data {
++ struct rtw89_vif_link *rtwvif_link;
++ bool aborted;
++};
++
++static int rtw89_hw_scan_complete_cb(struct rtw89_dev *rtwdev, void *data)
+ {
+ const struct rtw89_mac_gen_def *mac = rtwdev->chip->mac_def;
+ struct rtw89_hw_scan_info *scan_info = &rtwdev->scan_info;
++ struct rtw89_hw_scan_complete_cb_data *cb_data = data;
++ struct rtw89_vif_link *rtwvif_link = cb_data->rtwvif_link;
+ struct cfg80211_scan_info info = {
+- .aborted = aborted,
++ .aborted = cb_data->aborted,
+ };
+ struct rtw89_vif *rtwvif;
+
+ if (!rtwvif_link)
+- return;
+-
+- rtw89_chanctx_proceed(rtwdev);
++ return -EINVAL;
+
+ rtwvif = rtwvif_link->rtwvif;
+
+@@ -6672,6 +6675,29 @@ void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev,
+ scan_info->last_chan_idx = 0;
+ scan_info->scanning_vif = NULL;
+ scan_info->abort = false;
++
++ return 0;
++}
++
++void rtw89_hw_scan_complete(struct rtw89_dev *rtwdev,
++ struct rtw89_vif_link *rtwvif_link,
++ bool aborted)
++{
++ struct rtw89_hw_scan_complete_cb_data cb_data = {
++ .rtwvif_link = rtwvif_link,
++ .aborted = aborted,
++ };
++ const struct rtw89_chanctx_cb_parm cb_parm = {
++ .cb = rtw89_hw_scan_complete_cb,
++ .data = &cb_data,
++ .caller = __func__,
++ };
++
++ /* The things here need to be done after setting the channel (for coex)
++ * and before proceeding with entity mode (for MCC). So, pass a callback
++ * for them to ensure the right sequence rather than doing them directly.
++ */
++ rtw89_chanctx_proceed(rtwdev, &cb_parm);
+ }
+
+ void rtw89_hw_scan_abort(struct rtw89_dev *rtwdev,
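rtw89_chanctx_proceed() now accepts a struct rtw89_chanctx_cb_parm so that work which must run after the channel is set but before entity-mode processing can be handed over as a callback. A standalone sketch of that deferred-callback sequencing (illustrative names, not the driver's exact flow):

#include <stdio.h>

struct cb_parm {
	int (*cb)(void *data);
	void *data;
	const char *caller; /* recorded for diagnostics */
};

static int proceed(const struct cb_parm *parm)
{
	printf("step 1: set channel\n");
	if (parm && parm->cb) {
		printf("step 2: run callback queued by %s\n", parm->caller);
		if (parm->cb(parm->data))
			return -1;
	}
	printf("step 3: proceed with entity mode\n");
	return 0;
}

static int scan_done(void *data)
{
	printf("  notify scan complete (aborted=%d)\n", *(int *)data);
	return 0;
}

int main(void)
{
	int aborted = 0;
	struct cb_parm parm = { scan_done, &aborted, __func__ };

	return proceed(&parm);
}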
+diff --git a/drivers/net/wireless/realtek/rtw89/mac.c b/drivers/net/wireless/realtek/rtw89/mac.c
+index 4e15d539e3d1c4..4574aa62839b02 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac.c
++++ b/drivers/net/wireless/realtek/rtw89/mac.c
+@@ -1483,7 +1483,8 @@ static int rtw89_mac_power_switch(struct rtw89_dev *rtwdev, bool on)
+ clear_bit(RTW89_FLAG_CMAC1_FUNC, rtwdev->flags);
+ clear_bit(RTW89_FLAG_FW_RDY, rtwdev->flags);
+ rtw89_write8(rtwdev, R_AX_SCOREBOARD + 3, MAC_AX_NOTIFY_PWR_MAJOR);
+- rtw89_set_entity_state(rtwdev, false);
++ rtw89_set_entity_state(rtwdev, RTW89_PHY_0, false);
++ rtw89_set_entity_state(rtwdev, RTW89_PHY_1, false);
+ }
+
+ return 0;
+diff --git a/drivers/net/wireless/realtek/rtw89/mac80211.c b/drivers/net/wireless/realtek/rtw89/mac80211.c
+index 13fb3cac27016b..8351a70d325d4a 100644
+--- a/drivers/net/wireless/realtek/rtw89/mac80211.c
++++ b/drivers/net/wireless/realtek/rtw89/mac80211.c
+@@ -189,8 +189,10 @@ static int rtw89_ops_add_interface(struct ieee80211_hw *hw,
+
+ rtw89_core_txq_init(rtwdev, vif->txq);
+
+- if (!rtw89_rtwvif_in_list(rtwdev, rtwvif))
++ if (!rtw89_rtwvif_in_list(rtwdev, rtwvif)) {
+ list_add_tail(&rtwvif->list, &rtwdev->rtwvifs_list);
++ INIT_LIST_HEAD(&rtwvif->mgnt_entry);
++ }
+
+ ether_addr_copy(rtwvif->mac_addr, vif->addr);
+
+@@ -1271,11 +1273,11 @@ static void rtw89_ops_cancel_hw_scan(struct ieee80211_hw *hw,
+ if (!RTW89_CHK_FW_FEATURE(SCAN_OFFLOAD, &rtwdev->fw))
+ return;
+
+- if (!rtwdev->scanning)
+- return;
+-
+ mutex_lock(&rtwdev->mutex);
+
++ if (!rtwdev->scanning)
++ goto out;
++
+ rtwvif_link = rtw89_vif_get_link_inst(rtwvif, 0);
+ if (unlikely(!rtwvif_link)) {
+ rtw89_err(rtwdev, "cancel hw scan: find no link on HW-0\n");
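The cancel-hw-scan hunk above moves the rtwdev->scanning test under rtwdev->mutex, closing the window in which the flag could change between the check and the work that depends on it. A standalone pthread sketch of the check-under-lock pattern:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool scanning = true;

static void cancel_scan(void)
{
	pthread_mutex_lock(&lock);

	/* Test the flag only while holding the lock that protects it;
	 * an unlocked early return here would race with the scan path. */
	if (!scanning)
		goto out;

	scanning = false;
	printf("scan cancelled\n");
out:
	pthread_mutex_unlock(&lock);
}

int main(void)
{
	cancel_scan();
	cancel_scan(); /* second call sees scanning == false and does nothing */
	return 0;
}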
+diff --git a/drivers/net/wireless/ti/wlcore/main.c b/drivers/net/wireless/ti/wlcore/main.c
+index 0c77b8524160db..42805ed7ca1202 100644
+--- a/drivers/net/wireless/ti/wlcore/main.c
++++ b/drivers/net/wireless/ti/wlcore/main.c
+@@ -2612,24 +2612,24 @@ static int wl1271_op_add_interface(struct ieee80211_hw *hw,
+ if (test_bit(WL1271_FLAG_RECOVERY_IN_PROGRESS, &wl->flags) ||
+ test_bit(WLVIF_FLAG_INITIALIZED, &wlvif->flags)) {
+ ret = -EBUSY;
+- goto out;
++ goto out_unlock;
+ }
+
+
+ ret = wl12xx_init_vif_data(wl, vif);
+ if (ret < 0)
+- goto out;
++ goto out_unlock;
+
+ wlvif->wl = wl;
+ role_type = wl12xx_get_role_type(wl, wlvif);
+ if (role_type == WL12XX_INVALID_ROLE_TYPE) {
+ ret = -EINVAL;
+- goto out;
++ goto out_unlock;
+ }
+
+ ret = wlcore_allocate_hw_queue_base(wl, wlvif);
+ if (ret < 0)
+- goto out;
++ goto out_unlock;
+
+ /*
+ * TODO: after the nvs issue will be solved, move this block
+@@ -2644,7 +2644,7 @@ static int wl1271_op_add_interface(struct ieee80211_hw *hw,
+
+ ret = wl12xx_init_fw(wl);
+ if (ret < 0)
+- goto out;
++ goto out_unlock;
+ }
+
+ /*
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 249914b90dbfa7..4c409efd8cec17 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3085,7 +3085,7 @@ int nvme_get_log(struct nvme_ctrl *ctrl, u32 nsid, u8 log_page, u8 lsp, u8 csi,
+ static int nvme_get_effects_log(struct nvme_ctrl *ctrl, u8 csi,
+ struct nvme_effects_log **log)
+ {
+- struct nvme_effects_log *cel = xa_load(&ctrl->cels, csi);
++ struct nvme_effects_log *old, *cel = xa_load(&ctrl->cels, csi);
+ int ret;
+
+ if (cel)
+@@ -3102,7 +3102,11 @@ static int nvme_get_effects_log(struct nvme_ctrl *ctrl, u8 csi,
+ return ret;
+ }
+
+- xa_store(&ctrl->cels, csi, cel, GFP_KERNEL);
++ old = xa_store(&ctrl->cels, csi, cel, GFP_KERNEL);
++ if (xa_is_err(old)) {
++ kfree(cel);
++ return xa_err(old);
++ }
+ out:
+ *log = cel;
+ return 0;
+@@ -3164,6 +3168,25 @@ static int nvme_init_non_mdts_limits(struct nvme_ctrl *ctrl)
+ return ret;
+ }
+
++static int nvme_init_effects_log(struct nvme_ctrl *ctrl,
++ u8 csi, struct nvme_effects_log **log)
++{
++ struct nvme_effects_log *effects, *old;
++
++ effects = kzalloc(sizeof(*effects), GFP_KERNEL);
++ if (!effects)
++ return -ENOMEM;
++
++ old = xa_store(&ctrl->cels, csi, effects, GFP_KERNEL);
++ if (xa_is_err(old)) {
++ kfree(effects);
++ return xa_err(old);
++ }
++
++ *log = effects;
++ return 0;
++}
++
+ static void nvme_init_known_nvm_effects(struct nvme_ctrl *ctrl)
+ {
+ struct nvme_effects_log *log = ctrl->effects;
+@@ -3210,10 +3233,9 @@ static int nvme_init_effects(struct nvme_ctrl *ctrl, struct nvme_id_ctrl *id)
+ }
+
+ if (!ctrl->effects) {
+- ctrl->effects = kzalloc(sizeof(*ctrl->effects), GFP_KERNEL);
+- if (!ctrl->effects)
+- return -ENOMEM;
+- xa_store(&ctrl->cels, NVME_CSI_NVM, ctrl->effects, GFP_KERNEL);
++ ret = nvme_init_effects_log(ctrl, NVME_CSI_NVM, &ctrl->effects);
++ if (ret < 0)
++ return ret;
+ }
+
+ nvme_init_known_nvm_effects(ctrl);
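Both nvme hunks above fix the same leak class: xa_store() returns an error-encoded pointer on failure, so the freshly allocated effects log must be freed when the store does not take. A hedged kernel-context sketch of the pattern (not the driver's exact code; it assumes a caller-provided xarray):

/* Sketch: allocate and store an entry in an xarray, releasing it on failure. */
static int store_entry(struct xarray *xa, unsigned long index, size_t size)
{
	void *entry, *old;

	entry = kzalloc(size, GFP_KERNEL);
	if (!entry)
		return -ENOMEM;

	old = xa_store(xa, index, entry, GFP_KERNEL);
	if (xa_is_err(old)) {       /* store failed: nothing owns 'entry' */
		kfree(entry);
		return xa_err(old); /* decode the errno from the pointer */
	}

	return 0;
}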
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 55abfe5e1d2548..8305d3c1280748 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -54,6 +54,8 @@ MODULE_PARM_DESC(tls_handshake_timeout,
+ "nvme TLS handshake timeout in seconds (default 10)");
+ #endif
+
++static atomic_t nvme_tcp_cpu_queues[NR_CPUS];
++
+ #ifdef CONFIG_DEBUG_LOCK_ALLOC
+ /* lockdep can detect a circular dependency of the form
+ * sk_lock -> mmap_lock (page fault) -> fs locks -> sk_lock
+@@ -127,6 +129,7 @@ enum nvme_tcp_queue_flags {
+ NVME_TCP_Q_ALLOCATED = 0,
+ NVME_TCP_Q_LIVE = 1,
+ NVME_TCP_Q_POLLING = 2,
++ NVME_TCP_Q_IO_CPU_SET = 3,
+ };
+
+ enum nvme_tcp_recv_state {
+@@ -1562,23 +1565,56 @@ static bool nvme_tcp_poll_queue(struct nvme_tcp_queue *queue)
+ ctrl->io_queues[HCTX_TYPE_POLL];
+ }
+
++/*
++ * Track the number of queues assigned to each CPU using a global per-cpu
++ * counter and select the least used CPU from the mq_map. The goal is to
++ * spread different controllers' I/O threads across different CPU cores.
++ *
++ * Note that the accounting is not 100% perfect, but it doesn't need to be;
++ * we simply make a best effort to select the best candidate CPU core that
++ * we find at any given point.
++ */
+ static void nvme_tcp_set_queue_io_cpu(struct nvme_tcp_queue *queue)
+ {
+ struct nvme_tcp_ctrl *ctrl = queue->ctrl;
+- int qid = nvme_tcp_queue_id(queue);
+- int n = 0;
++ struct blk_mq_tag_set *set = &ctrl->tag_set;
++ int qid = nvme_tcp_queue_id(queue) - 1;
++ unsigned int *mq_map = NULL;
++ int cpu, min_queues = INT_MAX, io_cpu;
++
++ if (wq_unbound)
++ goto out;
+
+ if (nvme_tcp_default_queue(queue))
+- n = qid - 1;
++ mq_map = set->map[HCTX_TYPE_DEFAULT].mq_map;
+ else if (nvme_tcp_read_queue(queue))
+- n = qid - ctrl->io_queues[HCTX_TYPE_DEFAULT] - 1;
++ mq_map = set->map[HCTX_TYPE_READ].mq_map;
+ else if (nvme_tcp_poll_queue(queue))
+- n = qid - ctrl->io_queues[HCTX_TYPE_DEFAULT] -
+- ctrl->io_queues[HCTX_TYPE_READ] - 1;
+- if (wq_unbound)
+- queue->io_cpu = WORK_CPU_UNBOUND;
+- else
+- queue->io_cpu = cpumask_next_wrap(n - 1, cpu_online_mask, -1, false);
++ mq_map = set->map[HCTX_TYPE_POLL].mq_map;
++
++ if (WARN_ON(!mq_map))
++ goto out;
++
++ /* Search for the least used cpu from the mq_map */
++ io_cpu = WORK_CPU_UNBOUND;
++ for_each_online_cpu(cpu) {
++ int num_queues = atomic_read(&nvme_tcp_cpu_queues[cpu]);
++
++ if (mq_map[cpu] != qid)
++ continue;
++ if (num_queues < min_queues) {
++ io_cpu = cpu;
++ min_queues = num_queues;
++ }
++ }
++ if (io_cpu != WORK_CPU_UNBOUND) {
++ queue->io_cpu = io_cpu;
++ atomic_inc(&nvme_tcp_cpu_queues[io_cpu]);
++ set_bit(NVME_TCP_Q_IO_CPU_SET, &queue->flags);
++ }
++out:
++ dev_dbg(ctrl->ctrl.device, "queue %d: using cpu %d\n",
++ qid, queue->io_cpu);
+ }
+
+ static void nvme_tcp_tls_done(void *data, int status, key_serial_t pskid)
+@@ -1722,7 +1758,7 @@ static int nvme_tcp_alloc_queue(struct nvme_ctrl *nctrl, int qid,
+
+ queue->sock->sk->sk_allocation = GFP_ATOMIC;
+ queue->sock->sk->sk_use_task_frag = false;
+- nvme_tcp_set_queue_io_cpu(queue);
++ queue->io_cpu = WORK_CPU_UNBOUND;
+ queue->request = NULL;
+ queue->data_remaining = 0;
+ queue->ddgst_remaining = 0;
+@@ -1844,6 +1880,9 @@ static void nvme_tcp_stop_queue(struct nvme_ctrl *nctrl, int qid)
+ if (!test_bit(NVME_TCP_Q_ALLOCATED, &queue->flags))
+ return;
+
++ if (test_and_clear_bit(NVME_TCP_Q_IO_CPU_SET, &queue->flags))
++ atomic_dec(&nvme_tcp_cpu_queues[queue->io_cpu]);
++
+ mutex_lock(&queue->queue_lock);
+ if (test_and_clear_bit(NVME_TCP_Q_LIVE, &queue->flags))
+ __nvme_tcp_stop_queue(queue);
+@@ -1878,9 +1917,10 @@ static int nvme_tcp_start_queue(struct nvme_ctrl *nctrl, int idx)
+ nvme_tcp_init_recv_ctx(queue);
+ nvme_tcp_setup_sock_ops(queue);
+
+- if (idx)
++ if (idx) {
++ nvme_tcp_set_queue_io_cpu(queue);
+ ret = nvmf_connect_io_queue(nctrl, idx);
+- else
++ } else
+ ret = nvmf_connect_admin_queue(nctrl);
+
+ if (!ret) {
+@@ -2856,6 +2896,7 @@ static struct nvmf_transport_ops nvme_tcp_transport = {
+ static int __init nvme_tcp_init_module(void)
+ {
+ unsigned int wq_flags = WQ_MEM_RECLAIM | WQ_HIGHPRI | WQ_SYSFS;
++ int cpu;
+
+ BUILD_BUG_ON(sizeof(struct nvme_tcp_hdr) != 8);
+ BUILD_BUG_ON(sizeof(struct nvme_tcp_cmd_pdu) != 72);
+@@ -2873,6 +2914,9 @@ static int __init nvme_tcp_init_module(void)
+ if (!nvme_tcp_wq)
+ return -ENOMEM;
+
++ for_each_possible_cpu(cpu)
++ atomic_set(&nvme_tcp_cpu_queues[cpu], 0);
++
+ nvmf_register_transport(&nvme_tcp_transport);
+ return 0;
+ }
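The reworked nvme_tcp_set_queue_io_cpu() scans the online CPUs mapped to a queue and picks the one with the fewest queues already assigned, bumping a global per-CPU counter on success and dropping it again when the queue stops. A standalone sketch of the least-loaded selection (plain ints stand in for the cpumask, mq_map, and atomic counters):

#include <limits.h>
#include <stdio.h>

#define NR_CPUS 4

static int cpu_queues[NR_CPUS]; /* queues currently assigned to each CPU */

/* Pick the least-loaded CPU among those mapped to 'qid'; -1 if none. */
static int pick_io_cpu(const int *mq_map, int qid)
{
	int cpu, io_cpu = -1, min_queues = INT_MAX;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		if (mq_map[cpu] != qid)
			continue;
		if (cpu_queues[cpu] < min_queues) {
			io_cpu = cpu;
			min_queues = cpu_queues[cpu];
		}
	}
	if (io_cpu >= 0)
		cpu_queues[io_cpu]++; /* account for the new assignment */
	return io_cpu;
}

int main(void)
{
	int mq_map[NR_CPUS] = { 0, 0, 1, 1 }; /* CPUs 0-1 serve qid 0 */

	printf("qid 0 -> cpu %d\n", pick_io_cpu(mq_map, 0)); /* cpu 0 */
	printf("qid 0 -> cpu %d\n", pick_io_cpu(mq_map, 0)); /* cpu 1 */
	return 0;
}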
+diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
+index 546e76ac407cfd..8c80f4dc8b3fae 100644
+--- a/drivers/of/fdt.c
++++ b/drivers/of/fdt.c
+@@ -8,7 +8,6 @@
+
+ #define pr_fmt(fmt) "OF: fdt: " fmt
+
+-#include <linux/acpi.h>
+ #include <linux/crash_dump.h>
+ #include <linux/crc32.h>
+ #include <linux/kernel.h>
+@@ -512,8 +511,6 @@ void __init early_init_fdt_scan_reserved_mem(void)
+ break;
+ memblock_reserve(base, size);
+ }
+-
+- fdt_init_reserved_mem();
+ }
+
+ /**
+@@ -1214,14 +1211,10 @@ void __init unflatten_device_tree(void)
+ {
+ void *fdt = initial_boot_params;
+
+- /* Don't use the bootloader provided DTB if ACPI is enabled */
+- if (!acpi_disabled)
+- fdt = NULL;
++ /* Save the statically-placed regions in the reserved_mem array */
++ fdt_scan_reserved_mem_reg_nodes();
+
+- /*
+- * Populate an empty root node when ACPI is enabled or bootloader
+- * doesn't provide one.
+- */
++ /* Populate an empty root node when bootloader doesn't provide one */
+ if (!fdt) {
+ fdt = (void *) __dtb_empty_root_begin;
+ /* fdt_totalsize() will be used for copy size */
+diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
+index c235d6c909a16a..10698862252572 100644
+--- a/drivers/of/of_private.h
++++ b/drivers/of/of_private.h
+@@ -9,6 +9,7 @@
+ */
+
+ #define FDT_ALIGN_SIZE 8
++#define MAX_RESERVED_REGIONS 64
+
+ /**
+ * struct alias_prop - Alias property in 'aliases' node
+@@ -183,7 +184,7 @@ static inline struct device_node *__of_get_dma_parent(const struct device_node *
+ #endif
+
+ int fdt_scan_reserved_mem(void);
+-void fdt_init_reserved_mem(void);
++void __init fdt_scan_reserved_mem_reg_nodes(void);
+
+ bool of_fdt_device_is_available(const void *blob, unsigned long node);
+
+diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
+index 46e1c3fbc7692c..45445a1600a968 100644
+--- a/drivers/of/of_reserved_mem.c
++++ b/drivers/of/of_reserved_mem.c
+@@ -27,7 +27,6 @@
+
+ #include "of_private.h"
+
+-#define MAX_RESERVED_REGIONS 64
+ static struct reserved_mem reserved_mem[MAX_RESERVED_REGIONS];
+ static int reserved_mem_count;
+
+@@ -51,11 +50,13 @@ static int __init early_init_dt_alloc_reserved_memory_arch(phys_addr_t size,
+ memblock_phys_free(base, size);
+ }
+
+- kmemleak_ignore_phys(base);
++ if (!err)
++ kmemleak_ignore_phys(base);
+
+ return err;
+ }
+
++static void __init fdt_init_reserved_mem_node(struct reserved_mem *rmem);
+ /*
+ * fdt_reserved_mem_save_node() - save fdt node for second pass initialization
+ */
+@@ -74,6 +75,9 @@ static void __init fdt_reserved_mem_save_node(unsigned long node, const char *un
+ rmem->base = base;
+ rmem->size = size;
+
++ /* Call the region specific initialization function */
++ fdt_init_reserved_mem_node(rmem);
++
+ reserved_mem_count++;
+ return;
+ }
+@@ -106,7 +110,6 @@ static int __init __reserved_mem_reserve_reg(unsigned long node,
+ phys_addr_t base, size;
+ int len;
+ const __be32 *prop;
+- int first = 1;
+ bool nomap;
+
+ prop = of_get_flat_dt_prop(node, "reg", &len);
+@@ -134,10 +137,6 @@ static int __init __reserved_mem_reserve_reg(unsigned long node,
+ uname, &base, (unsigned long)(size / SZ_1M));
+
+ len -= t_len;
+- if (first) {
+- fdt_reserved_mem_save_node(node, uname, base, size);
+- first = 0;
+- }
+ }
+ return 0;
+ }
+@@ -165,12 +164,82 @@ static int __init __reserved_mem_check_root(unsigned long node)
+ return 0;
+ }
+
++static void __init __rmem_check_for_overlap(void);
++
++/**
++ * fdt_scan_reserved_mem_reg_nodes() - Store info for the "reg" defined
++ * reserved memory regions.
++ *
++ * This function is used to scan through the DT and store the
++ * information for the reserved memory regions that are defined using
++ * the "reg" property. The region node number, name, base address, and
++ * size are all stored in the reserved_mem array by calling the
++ * fdt_reserved_mem_save_node() function.
++ */
++void __init fdt_scan_reserved_mem_reg_nodes(void)
++{
++ int t_len = (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32);
++ const void *fdt = initial_boot_params;
++ phys_addr_t base, size;
++ const __be32 *prop;
++ int node, child;
++ int len;
++
++ if (!fdt)
++ return;
++
++ node = fdt_path_offset(fdt, "/reserved-memory");
++ if (node < 0) {
++ pr_info("Reserved memory: No reserved-memory node in the DT\n");
++ return;
++ }
++
++ if (__reserved_mem_check_root(node)) {
++ pr_err("Reserved memory: unsupported node format, ignoring\n");
++ return;
++ }
++
++ fdt_for_each_subnode(child, fdt, node) {
++ const char *uname;
++
++ prop = of_get_flat_dt_prop(child, "reg", &len);
++ if (!prop)
++ continue;
++ if (!of_fdt_device_is_available(fdt, child))
++ continue;
++
++ uname = fdt_get_name(fdt, child, NULL);
++ if (len && len % t_len != 0) {
++ pr_err("Reserved memory: invalid reg property in '%s', skipping node.\n",
++ uname);
++ continue;
++ }
++
++ if (len > t_len)
++ pr_warn("%s() ignores %d regions in node '%s'\n",
++ __func__, len / t_len - 1, uname);
++
++ base = dt_mem_next_cell(dt_root_addr_cells, &prop);
++ size = dt_mem_next_cell(dt_root_size_cells, &prop);
++
++ if (size)
++ fdt_reserved_mem_save_node(child, uname, base, size);
++ }
++
++ /* check for overlapping reserved regions */
++ __rmem_check_for_overlap();
++}
++
++static int __init __reserved_mem_alloc_size(unsigned long node, const char *uname);
++
+ /*
+ * fdt_scan_reserved_mem() - scan a single FDT node for reserved memory
+ */
+ int __init fdt_scan_reserved_mem(void)
+ {
+ int node, child;
++ int dynamic_nodes_cnt = 0;
++ int dynamic_nodes[MAX_RESERVED_REGIONS];
+ const void *fdt = initial_boot_params;
+
+ node = fdt_path_offset(fdt, "/reserved-memory");
+@@ -192,8 +261,24 @@ int __init fdt_scan_reserved_mem(void)
+ uname = fdt_get_name(fdt, child, NULL);
+
+ err = __reserved_mem_reserve_reg(child, uname);
+- if (err == -ENOENT && of_get_flat_dt_prop(child, "size", NULL))
+- fdt_reserved_mem_save_node(child, uname, 0, 0);
++ /*
++ * Save the nodes for the dynamically-placed regions
++ * into an array which will be used for allocation right
++ * after all the statically-placed regions are reserved
++ * or marked as no-map. This is done to avoid dynamically
++ * allocating from one of the statically-placed regions.
++ */
++ if (err == -ENOENT && of_get_flat_dt_prop(child, "size", NULL)) {
++ dynamic_nodes[dynamic_nodes_cnt] = child;
++ dynamic_nodes_cnt++;
++ }
++ }
++ for (int i = 0; i < dynamic_nodes_cnt; i++) {
++ const char *uname;
++
++ child = dynamic_nodes[i];
++ uname = fdt_get_name(fdt, child, NULL);
++ __reserved_mem_alloc_size(child, uname);
+ }
+ return 0;
+ }
+@@ -253,8 +338,7 @@ static int __init __reserved_mem_alloc_in_range(phys_addr_t size,
+ * __reserved_mem_alloc_size() - allocate reserved memory described by
+ * 'size', 'alignment' and 'alloc-ranges' properties.
+ */
+-static int __init __reserved_mem_alloc_size(unsigned long node,
+- const char *uname, phys_addr_t *res_base, phys_addr_t *res_size)
++static int __init __reserved_mem_alloc_size(unsigned long node, const char *uname)
+ {
+ int t_len = (dt_root_addr_cells + dt_root_size_cells) * sizeof(__be32);
+ phys_addr_t start = 0, end = 0;
+@@ -334,9 +418,8 @@ static int __init __reserved_mem_alloc_size(unsigned long node,
+ return -ENOMEM;
+ }
+
+- *res_base = base;
+- *res_size = size;
+-
++ /* Save region in the reserved_mem array */
++ fdt_reserved_mem_save_node(node, uname, base, size);
+ return 0;
+ }
+
+@@ -425,48 +508,37 @@ static void __init __rmem_check_for_overlap(void)
+ }
+
+ /**
+- * fdt_init_reserved_mem() - allocate and init all saved reserved memory regions
++ * fdt_init_reserved_mem_node() - Initialize a reserved memory region
++ * @rmem: reserved_mem struct of the memory region to be initialized.
++ *
++ * This function is used to call the region specific initialization
++ * function for a reserved memory region.
+ */
+-void __init fdt_init_reserved_mem(void)
++static void __init fdt_init_reserved_mem_node(struct reserved_mem *rmem)
+ {
+- int i;
+-
+- /* check for overlapping reserved regions */
+- __rmem_check_for_overlap();
+-
+- for (i = 0; i < reserved_mem_count; i++) {
+- struct reserved_mem *rmem = &reserved_mem[i];
+- unsigned long node = rmem->fdt_node;
+- int err = 0;
+- bool nomap;
++ unsigned long node = rmem->fdt_node;
++ int err = 0;
++ bool nomap;
+
+- nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
++ nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
+
+- if (rmem->size == 0)
+- err = __reserved_mem_alloc_size(node, rmem->name,
+- &rmem->base, &rmem->size);
+- if (err == 0) {
+- err = __reserved_mem_init_node(rmem);
+- if (err != 0 && err != -ENOENT) {
+- pr_info("node %s compatible matching fail\n",
+- rmem->name);
+- if (nomap)
+- memblock_clear_nomap(rmem->base, rmem->size);
+- else
+- memblock_phys_free(rmem->base,
+- rmem->size);
+- } else {
+- phys_addr_t end = rmem->base + rmem->size - 1;
+- bool reusable =
+- (of_get_flat_dt_prop(node, "reusable", NULL)) != NULL;
+-
+- pr_info("%pa..%pa (%lu KiB) %s %s %s\n",
+- &rmem->base, &end, (unsigned long)(rmem->size / SZ_1K),
+- nomap ? "nomap" : "map",
+- reusable ? "reusable" : "non-reusable",
+- rmem->name ? rmem->name : "unknown");
+- }
+- }
++ err = __reserved_mem_init_node(rmem);
++ if (err != 0 && err != -ENOENT) {
++ pr_info("node %s compatible matching fail\n", rmem->name);
++ if (nomap)
++ memblock_clear_nomap(rmem->base, rmem->size);
++ else
++ memblock_phys_free(rmem->base, rmem->size);
++ } else {
++ phys_addr_t end = rmem->base + rmem->size - 1;
++ bool reusable =
++ (of_get_flat_dt_prop(node, "reusable", NULL)) != NULL;
++
++ pr_info("%pa..%pa (%lu KiB) %s %s %s\n",
++ &rmem->base, &end, (unsigned long)(rmem->size / SZ_1K),
++ nomap ? "nomap" : "map",
++ reusable ? "reusable" : "non-reusable",
++ rmem->name ? rmem->name : "unknown");
+ }
+ }
+
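The of_reserved_mem rework above claims every statically-placed region first and only then allocates the dynamically-placed ones, so a dynamic allocation can never land inside a region that a later static reservation would have claimed. A standalone sketch of that defer-then-process ordering:

#include <stdio.h>

#define MAX_NODES 8

int main(void)
{
	/* Nodes with a fixed "reg" carry an address >= 0; size-only nodes are -1. */
	int nodes[] = { 10, -1, 20, -1, 30 };
	int dynamic[MAX_NODES], dynamic_cnt = 0;
	int i;

	/* Pass 1: claim every statically-placed region immediately. */
	for (i = 0; i < 5; i++) {
		if (nodes[i] >= 0)
			printf("reserve static region at %d\n", nodes[i]);
		else
			dynamic[dynamic_cnt++] = i; /* defer for pass 2 */
	}

	/* Pass 2: only now allocate the dynamically-placed regions. */
	for (i = 0; i < dynamic_cnt; i++)
		printf("allocate dynamic region for node %d\n", dynamic[i]);

	return 0;
}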
+diff --git a/drivers/of/property.c b/drivers/of/property.c
+index 7bd8390f2fba5e..906a33ae717f78 100644
+--- a/drivers/of/property.c
++++ b/drivers/of/property.c
+@@ -1317,9 +1317,9 @@ static struct device_node *parse_interrupt_map(struct device_node *np,
+ addrcells = of_bus_n_addr_cells(np);
+
+ imap = of_get_property(np, "interrupt-map", &imaplen);
+- imaplen /= sizeof(*imap);
+ if (!imap)
+ return NULL;
++ imaplen /= sizeof(*imap);
+
+ imap_end = imap + imaplen;
+
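The one-line move in parse_interrupt_map() above is a use-before-check fix: imaplen must not be derived until of_get_property()'s return pointer is known to be non-NULL, because the length out-parameter is only meaningful on success. The same ordering in a standalone sketch:

#include <stdio.h>

/* A lookup that writes the byte length only when it succeeds. */
static const int *get_property(int *lenp)
{
	(void)lenp;  /* a real lookup would set *lenp on success */
	return NULL; /* simulate a missing property */
}

int main(void)
{
	int len; /* uninitialized until the lookup succeeds */
	const int *prop = get_property(&len);

	if (!prop)
		return 1; /* bail out before touching 'len' */

	len /= (int)sizeof(*prop); /* safe: the lookup succeeded */
	printf("%d entries\n", len);
	return 0;
}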
+diff --git a/drivers/opp/core.c b/drivers/opp/core.c
+index 3aa18737470fa2..5ac209472c0cf6 100644
+--- a/drivers/opp/core.c
++++ b/drivers/opp/core.c
+@@ -101,11 +101,30 @@ struct opp_table *_find_opp_table(struct device *dev)
+ * representation in the OPP table and manage the clock configuration themselves
+ * in an platform specific way.
+ */
+-static bool assert_single_clk(struct opp_table *opp_table)
++static bool assert_single_clk(struct opp_table *opp_table,
++ unsigned int __always_unused index)
+ {
+ return !WARN_ON(opp_table->clk_count > 1);
+ }
+
++/*
++ * Returns true if clock table is large enough to contain the clock index.
++ */
++static bool assert_clk_index(struct opp_table *opp_table,
++ unsigned int index)
++{
++ return opp_table->clk_count > index;
++}
++
++/*
++ * Returns true if bandwidth table is large enough to contain the bandwidth index.
++ */
++static bool assert_bandwidth_index(struct opp_table *opp_table,
++ unsigned int index)
++{
++ return opp_table->path_count > index;
++}
++
+ /**
+ * dev_pm_opp_get_voltage() - Gets the voltage corresponding to an opp
+ * @opp: opp for which voltage has to be returned for
+@@ -499,12 +518,12 @@ static struct dev_pm_opp *_opp_table_find_key(struct opp_table *opp_table,
+ unsigned long (*read)(struct dev_pm_opp *opp, int index),
+ bool (*compare)(struct dev_pm_opp **opp, struct dev_pm_opp *temp_opp,
+ unsigned long opp_key, unsigned long key),
+- bool (*assert)(struct opp_table *opp_table))
++ bool (*assert)(struct opp_table *opp_table, unsigned int index))
+ {
+ struct dev_pm_opp *temp_opp, *opp = ERR_PTR(-ERANGE);
+
+ /* Assert that the requirement is met */
+- if (assert && !assert(opp_table))
++ if (assert && !assert(opp_table, index))
+ return ERR_PTR(-EINVAL);
+
+ mutex_lock(&opp_table->lock);
+@@ -532,7 +551,7 @@ _find_key(struct device *dev, unsigned long *key, int index, bool available,
+ unsigned long (*read)(struct dev_pm_opp *opp, int index),
+ bool (*compare)(struct dev_pm_opp **opp, struct dev_pm_opp *temp_opp,
+ unsigned long opp_key, unsigned long key),
+- bool (*assert)(struct opp_table *opp_table))
++ bool (*assert)(struct opp_table *opp_table, unsigned int index))
+ {
+ struct opp_table *opp_table;
+ struct dev_pm_opp *opp;
+@@ -555,7 +574,7 @@ _find_key(struct device *dev, unsigned long *key, int index, bool available,
+ static struct dev_pm_opp *_find_key_exact(struct device *dev,
+ unsigned long key, int index, bool available,
+ unsigned long (*read)(struct dev_pm_opp *opp, int index),
+- bool (*assert)(struct opp_table *opp_table))
++ bool (*assert)(struct opp_table *opp_table, unsigned int index))
+ {
+ /*
+ * The value of key will be updated here, but will be ignored as the
+@@ -568,7 +587,7 @@ static struct dev_pm_opp *_find_key_exact(struct device *dev,
+ static struct dev_pm_opp *_opp_table_find_key_ceil(struct opp_table *opp_table,
+ unsigned long *key, int index, bool available,
+ unsigned long (*read)(struct dev_pm_opp *opp, int index),
+- bool (*assert)(struct opp_table *opp_table))
++ bool (*assert)(struct opp_table *opp_table, unsigned int index))
+ {
+ return _opp_table_find_key(opp_table, key, index, available, read,
+ _compare_ceil, assert);
+@@ -577,7 +596,7 @@ static struct dev_pm_opp *_opp_table_find_key_ceil(struct opp_table *opp_table,
+ static struct dev_pm_opp *_find_key_ceil(struct device *dev, unsigned long *key,
+ int index, bool available,
+ unsigned long (*read)(struct dev_pm_opp *opp, int index),
+- bool (*assert)(struct opp_table *opp_table))
++ bool (*assert)(struct opp_table *opp_table, unsigned int index))
+ {
+ return _find_key(dev, key, index, available, read, _compare_ceil,
+ assert);
+@@ -586,7 +605,7 @@ static struct dev_pm_opp *_find_key_ceil(struct device *dev, unsigned long *key,
+ static struct dev_pm_opp *_find_key_floor(struct device *dev,
+ unsigned long *key, int index, bool available,
+ unsigned long (*read)(struct dev_pm_opp *opp, int index),
+- bool (*assert)(struct opp_table *opp_table))
++ bool (*assert)(struct opp_table *opp_table, unsigned int index))
+ {
+ return _find_key(dev, key, index, available, read, _compare_floor,
+ assert);
+@@ -647,7 +666,8 @@ struct dev_pm_opp *
+ dev_pm_opp_find_freq_exact_indexed(struct device *dev, unsigned long freq,
+ u32 index, bool available)
+ {
+- return _find_key_exact(dev, freq, index, available, _read_freq, NULL);
++ return _find_key_exact(dev, freq, index, available, _read_freq,
++ assert_clk_index);
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_exact_indexed);
+
+@@ -707,7 +727,8 @@ struct dev_pm_opp *
+ dev_pm_opp_find_freq_ceil_indexed(struct device *dev, unsigned long *freq,
+ u32 index)
+ {
+- return _find_key_ceil(dev, freq, index, true, _read_freq, NULL);
++ return _find_key_ceil(dev, freq, index, true, _read_freq,
++ assert_clk_index);
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_ceil_indexed);
+
+@@ -760,7 +781,7 @@ struct dev_pm_opp *
+ dev_pm_opp_find_freq_floor_indexed(struct device *dev, unsigned long *freq,
+ u32 index)
+ {
+- return _find_key_floor(dev, freq, index, true, _read_freq, NULL);
++ return _find_key_floor(dev, freq, index, true, _read_freq, assert_clk_index);
+ }
+ EXPORT_SYMBOL_GPL(dev_pm_opp_find_freq_floor_indexed);
+
+@@ -878,7 +899,8 @@ struct dev_pm_opp *dev_pm_opp_find_bw_ceil(struct device *dev, unsigned int *bw,
+ unsigned long temp = *bw;
+ struct dev_pm_opp *opp;
+
+- opp = _find_key_ceil(dev, &temp, index, true, _read_bw, NULL);
++ opp = _find_key_ceil(dev, &temp, index, true, _read_bw,
++ assert_bandwidth_index);
+ *bw = temp;
+ return opp;
+ }
+@@ -909,7 +931,8 @@ struct dev_pm_opp *dev_pm_opp_find_bw_floor(struct device *dev,
+ unsigned long temp = *bw;
+ struct dev_pm_opp *opp;
+
+- opp = _find_key_floor(dev, &temp, index, true, _read_bw, NULL);
++ opp = _find_key_floor(dev, &temp, index, true, _read_bw,
++ assert_bandwidth_index);
+ *bw = temp;
+ return opp;
+ }
+@@ -1702,7 +1725,7 @@ void dev_pm_opp_remove(struct device *dev, unsigned long freq)
+ if (IS_ERR(opp_table))
+ return;
+
+- if (!assert_single_clk(opp_table))
++ if (!assert_single_clk(opp_table, 0))
+ goto put_table;
+
+ mutex_lock(&opp_table->lock);
+@@ -2054,7 +2077,7 @@ int _opp_add_v1(struct opp_table *opp_table, struct device *dev,
+ unsigned long tol, u_volt = data->u_volt;
+ int ret;
+
+- if (!assert_single_clk(opp_table))
++ if (!assert_single_clk(opp_table, 0))
+ return -EINVAL;
+
+ new_opp = _opp_allocate(opp_table);
+@@ -2911,7 +2934,7 @@ static int _opp_set_availability(struct device *dev, unsigned long freq,
+ return r;
+ }
+
+- if (!assert_single_clk(opp_table)) {
++ if (!assert_single_clk(opp_table, 0)) {
+ r = -EINVAL;
+ goto put_table;
+ }
+@@ -2987,7 +3010,7 @@ int dev_pm_opp_adjust_voltage(struct device *dev, unsigned long freq,
+ return r;
+ }
+
+- if (!assert_single_clk(opp_table)) {
++ if (!assert_single_clk(opp_table, 0)) {
+ r = -EINVAL;
+ goto put_table;
+ }
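The OPP hunks above thread an index through the assert callbacks so indexed frequency and bandwidth lookups can be bounds-checked against the clock or interconnect-path table size. A standalone sketch of a predicate callback that receives the index (simplified types; -1 stands in for -EINVAL):

#include <stdbool.h>
#include <stdio.h>

struct table { unsigned int clk_count; };

/* Assert callbacks now see the index so they can bounds-check it. */
typedef bool (*assert_fn)(const struct table *t, unsigned int index);

static bool assert_clk_index(const struct table *t, unsigned int index)
{
	return t->clk_count > index;
}

static int find_key(const struct table *t, unsigned int index, assert_fn a)
{
	if (a && !a(t, index))
		return -1; /* reject out-of-range indexed lookups */
	return 0;          /* ... the actual lookup would happen here ... */
}

int main(void)
{
	struct table t = { .clk_count = 2 };

	printf("%d\n", find_key(&t, 1, assert_clk_index)); /* 0: in range */
	printf("%d\n", find_key(&t, 5, assert_clk_index)); /* -1: out of range */
	return 0;
}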
+diff --git a/drivers/opp/of.c b/drivers/opp/of.c
+index 55c8cfef97d489..dcab0e7ace1068 100644
+--- a/drivers/opp/of.c
++++ b/drivers/opp/of.c
+@@ -959,7 +959,7 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
+
+ ret = _of_opp_alloc_required_opps(opp_table, new_opp);
+ if (ret)
+- goto free_opp;
++ goto put_node;
+
+ if (!of_property_read_u32(np, "clock-latency-ns", &val))
+ new_opp->clock_latency_ns = val;
+@@ -1009,6 +1009,8 @@ static struct dev_pm_opp *_opp_add_static_v2(struct opp_table *opp_table,
+
+ free_required_opps:
+ _of_opp_free_required_opps(opp_table, new_opp);
++put_node:
++ of_node_put(np);
+ free_opp:
+ _opp_free(new_opp);
+
+diff --git a/drivers/pci/controller/dwc/pci-imx6.c b/drivers/pci/controller/dwc/pci-imx6.c
+index c8d5c90aa4d45b..ad3028b755d16a 100644
+--- a/drivers/pci/controller/dwc/pci-imx6.c
++++ b/drivers/pci/controller/dwc/pci-imx6.c
+@@ -598,10 +598,9 @@ static int imx_pcie_attach_pd(struct device *dev)
+
+ static int imx6sx_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
+ {
+- if (enable)
+- regmap_clear_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
+- IMX6SX_GPR12_PCIE_TEST_POWERDOWN);
+-
++ regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
++ IMX6SX_GPR12_PCIE_TEST_POWERDOWN,
++ enable ? 0 : IMX6SX_GPR12_PCIE_TEST_POWERDOWN);
+ return 0;
+ }
+
+@@ -630,19 +629,20 @@ static int imx8mm_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
+ {
+ int offset = imx_pcie_grp_offset(imx_pcie);
+
+- if (enable) {
+- regmap_clear_bits(imx_pcie->iomuxc_gpr, offset, IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE);
+- regmap_set_bits(imx_pcie->iomuxc_gpr, offset, IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN);
+- }
+-
++ regmap_update_bits(imx_pcie->iomuxc_gpr, offset,
++ IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE,
++ enable ? 0 : IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE);
++ regmap_update_bits(imx_pcie->iomuxc_gpr, offset,
++ IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN,
++ enable ? IMX8MQ_GPR_PCIE_CLK_REQ_OVERRIDE_EN : 0);
+ return 0;
+ }
+
+ static int imx7d_pcie_enable_ref_clk(struct imx_pcie *imx_pcie, bool enable)
+ {
+- if (!enable)
+- regmap_set_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
+- IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
++ regmap_update_bits(imx_pcie->iomuxc_gpr, IOMUXC_GPR12,
++ IMX7D_GPR12_PCIE_PHY_REFCLK_SEL,
++ enable ? 0 : IMX7D_GPR12_PCIE_PHY_REFCLK_SEL);
+ return 0;
+ }
+
+@@ -775,6 +775,7 @@ static void imx_pcie_assert_core_reset(struct imx_pcie *imx_pcie)
+ static int imx_pcie_deassert_core_reset(struct imx_pcie *imx_pcie)
+ {
+ reset_control_deassert(imx_pcie->pciephy_reset);
++ reset_control_deassert(imx_pcie->apps_reset);
+
+ if (imx_pcie->drvdata->core_reset)
+ imx_pcie->drvdata->core_reset(imx_pcie, false);
+@@ -966,7 +967,9 @@ static int imx_pcie_host_init(struct dw_pcie_rp *pp)
+ goto err_clk_disable;
+ }
+
+- ret = phy_set_mode_ext(imx_pcie->phy, PHY_MODE_PCIE, PHY_MODE_PCIE_RC);
++ ret = phy_set_mode_ext(imx_pcie->phy, PHY_MODE_PCIE,
++ imx_pcie->drvdata->mode == DW_PCIE_EP_TYPE ?
++ PHY_MODE_PCIE_EP : PHY_MODE_PCIE_RC);
+ if (ret) {
+ dev_err(dev, "unable to set PCIe PHY mode\n");
+ goto err_phy_exit;
+@@ -1391,7 +1394,6 @@ static int imx_pcie_probe(struct platform_device *pdev)
+ switch (imx_pcie->drvdata->variant) {
+ case IMX8MQ:
+ case IMX8MQ_EP:
+- case IMX7D:
+ if (dbi_base->start == IMX8MQ_PCIE2_BASE_ADDR)
+ imx_pcie->controller_id = 1;
+ break;
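Each *_enable_ref_clk() above now performs a single regmap_update_bits() read-modify-write that both sets and clears the field depending on enable, instead of acting on only one edge and leaving stale state behind on the other. A standalone sketch of that update_bits(mask, enable ? ... : ...) pattern (the register and bit name are placeholders):

#include <stdint.h>
#include <stdio.h>

static uint32_t reg; /* stands in for the iomuxc GPR register */

/* Read-modify-write: clear 'mask', then set the bits given in 'val'. */
static void update_bits(uint32_t mask, uint32_t val)
{
	reg = (reg & ~mask) | (val & mask);
}

#define PCIE_TEST_POWERDOWN (1u << 30)

static void enable_ref_clk(int enable)
{
	/* Covers both directions: no stale bit when toggled off and on. */
	update_bits(PCIE_TEST_POWERDOWN,
		    enable ? 0 : PCIE_TEST_POWERDOWN);
}

int main(void)
{
	enable_ref_clk(0);
	printf("disabled: %#x\n", reg); /* powerdown bit set */
	enable_ref_clk(1);
	printf("enabled:  %#x\n", reg); /* powerdown bit cleared */
	return 0;
}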
+diff --git a/drivers/pci/controller/dwc/pcie-designware-host.c b/drivers/pci/controller/dwc/pcie-designware-host.c
+index 3e41865c72904e..120e2aca5164ab 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-host.c
++++ b/drivers/pci/controller/dwc/pcie-designware-host.c
+@@ -946,6 +946,7 @@ int dw_pcie_suspend_noirq(struct dw_pcie *pci)
+ return ret;
+ }
+
++ dw_pcie_stop_link(pci);
+ if (pci->pp.ops->deinit)
+ pci->pp.ops->deinit(&pci->pp);
+
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 6483e1874477ef..4c141e05f84e9c 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1559,6 +1559,8 @@ static irqreturn_t qcom_pcie_global_irq_thread(int irq, void *data)
+ pci_lock_rescan_remove();
+ pci_rescan_bus(pp->bridge->bus);
+ pci_unlock_rescan_remove();
++
++ qcom_pcie_icc_opp_update(pcie);
+ } else {
+ dev_WARN_ONCE(dev, 1, "Received unknown event. INT_STATUS: 0x%08x\n",
+ status);
+diff --git a/drivers/pci/controller/pcie-rcar-ep.c b/drivers/pci/controller/pcie-rcar-ep.c
+index 047e2cef5afcd5..c5e0d025bc4359 100644
+--- a/drivers/pci/controller/pcie-rcar-ep.c
++++ b/drivers/pci/controller/pcie-rcar-ep.c
+@@ -107,7 +107,7 @@ static int rcar_pcie_parse_outbound_ranges(struct rcar_pcie_endpoint *ep,
+ }
+ if (!devm_request_mem_region(&pdev->dev, res->start,
+ resource_size(res),
+- outbound_name)) {
++ res->name)) {
+ dev_err(pcie->dev, "Cannot request memory region %s.\n",
+ outbound_name);
+ return -EIO;
+diff --git a/drivers/pci/controller/plda/pcie-microchip-host.c b/drivers/pci/controller/plda/pcie-microchip-host.c
+index 48f60a04b740ba..3fdfffdf027001 100644
+--- a/drivers/pci/controller/plda/pcie-microchip-host.c
++++ b/drivers/pci/controller/plda/pcie-microchip-host.c
+@@ -7,27 +7,31 @@
+ * Author: Daire McNamara <daire.mcnamara@microchip.com>
+ */
+
++#include <linux/align.h>
++#include <linux/bits.h>
+ #include <linux/bitfield.h>
+ #include <linux/clk.h>
+ #include <linux/irqchip/chained_irq.h>
+ #include <linux/irqdomain.h>
++#include <linux/log2.h>
+ #include <linux/module.h>
+ #include <linux/msi.h>
+ #include <linux/of_address.h>
+ #include <linux/of_pci.h>
+ #include <linux/pci-ecam.h>
+ #include <linux/platform_device.h>
++#include <linux/wordpart.h>
+
+ #include "../../pci.h"
+ #include "pcie-plda.h"
+
++#define MC_MAX_NUM_INBOUND_WINDOWS 8
++#define MPFS_NC_BOUNCE_ADDR 0x80000000
++
+ /* PCIe Bridge Phy and Controller Phy offsets */
+ #define MC_PCIE1_BRIDGE_ADDR 0x00008000u
+ #define MC_PCIE1_CTRL_ADDR 0x0000a000u
+
+-#define MC_PCIE_BRIDGE_ADDR (MC_PCIE1_BRIDGE_ADDR)
+-#define MC_PCIE_CTRL_ADDR (MC_PCIE1_CTRL_ADDR)
+-
+ /* PCIe Controller Phy Regs */
+ #define SEC_ERROR_EVENT_CNT 0x20
+ #define DED_ERROR_EVENT_CNT 0x24
+@@ -128,7 +132,6 @@
+ [EVENT_LOCAL_ ## x] = { __stringify(x), s }
+
+ #define PCIE_EVENT(x) \
+- .base = MC_PCIE_CTRL_ADDR, \
+ .offset = PCIE_EVENT_INT, \
+ .mask_offset = PCIE_EVENT_INT, \
+ .mask_high = 1, \
+@@ -136,7 +139,6 @@
+ .enb_mask = PCIE_EVENT_INT_ENB_MASK
+
+ #define SEC_EVENT(x) \
+- .base = MC_PCIE_CTRL_ADDR, \
+ .offset = SEC_ERROR_INT, \
+ .mask_offset = SEC_ERROR_INT_MASK, \
+ .mask = SEC_ERROR_INT_ ## x ## _INT, \
+@@ -144,7 +146,6 @@
+ .enb_mask = 0
+
+ #define DED_EVENT(x) \
+- .base = MC_PCIE_CTRL_ADDR, \
+ .offset = DED_ERROR_INT, \
+ .mask_offset = DED_ERROR_INT_MASK, \
+ .mask_high = 1, \
+@@ -152,7 +153,6 @@
+ .enb_mask = 0
+
+ #define LOCAL_EVENT(x) \
+- .base = MC_PCIE_BRIDGE_ADDR, \
+ .offset = ISTATUS_LOCAL, \
+ .mask_offset = IMASK_LOCAL, \
+ .mask_high = 0, \
+@@ -179,7 +179,8 @@ struct event_map {
+
+ struct mc_pcie {
+ struct plda_pcie_rp plda;
+- void __iomem *axi_base_addr;
++ void __iomem *bridge_base_addr;
++ void __iomem *ctrl_base_addr;
+ };
+
+ struct cause {
+@@ -253,7 +254,6 @@ static struct event_map local_status_to_event[] = {
+ };
+
+ static struct {
+- u32 base;
+ u32 offset;
+ u32 mask;
+ u32 shift;
+@@ -325,8 +325,7 @@ static inline u32 reg_to_event(u32 reg, struct event_map field)
+
+ static u32 pcie_events(struct mc_pcie *port)
+ {
+- void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+- u32 reg = readl_relaxed(ctrl_base_addr + PCIE_EVENT_INT);
++ u32 reg = readl_relaxed(port->ctrl_base_addr + PCIE_EVENT_INT);
+ u32 val = 0;
+ int i;
+
+@@ -338,8 +337,7 @@ static u32 pcie_events(struct mc_pcie *port)
+
+ static u32 sec_errors(struct mc_pcie *port)
+ {
+- void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+- u32 reg = readl_relaxed(ctrl_base_addr + SEC_ERROR_INT);
++ u32 reg = readl_relaxed(port->ctrl_base_addr + SEC_ERROR_INT);
+ u32 val = 0;
+ int i;
+
+@@ -351,8 +349,7 @@ static u32 sec_errors(struct mc_pcie *port)
+
+ static u32 ded_errors(struct mc_pcie *port)
+ {
+- void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+- u32 reg = readl_relaxed(ctrl_base_addr + DED_ERROR_INT);
++ u32 reg = readl_relaxed(port->ctrl_base_addr + DED_ERROR_INT);
+ u32 val = 0;
+ int i;
+
+@@ -364,8 +361,7 @@ static u32 ded_errors(struct mc_pcie *port)
+
+ static u32 local_events(struct mc_pcie *port)
+ {
+- void __iomem *bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+- u32 reg = readl_relaxed(bridge_base_addr + ISTATUS_LOCAL);
++ u32 reg = readl_relaxed(port->bridge_base_addr + ISTATUS_LOCAL);
+ u32 val = 0;
+ int i;
+
+@@ -412,8 +408,12 @@ static void mc_ack_event_irq(struct irq_data *data)
+ void __iomem *addr;
+ u32 mask;
+
+- addr = mc_port->axi_base_addr + event_descs[event].base +
+- event_descs[event].offset;
++ if (event_descs[event].offset == ISTATUS_LOCAL)
++ addr = mc_port->bridge_base_addr;
++ else
++ addr = mc_port->ctrl_base_addr;
++
++ addr += event_descs[event].offset;
+ mask = event_descs[event].mask;
+ mask |= event_descs[event].enb_mask;
+
+@@ -429,8 +429,12 @@ static void mc_mask_event_irq(struct irq_data *data)
+ u32 mask;
+ u32 val;
+
+- addr = mc_port->axi_base_addr + event_descs[event].base +
+- event_descs[event].mask_offset;
++ if (event_descs[event].offset == ISTATUS_LOCAL)
++ addr = mc_port->bridge_base_addr;
++ else
++ addr = mc_port->ctrl_base_addr;
++
++ addr += event_descs[event].mask_offset;
+ mask = event_descs[event].mask;
+ if (event_descs[event].enb_mask) {
+ mask <<= PCIE_EVENT_INT_ENB_SHIFT;
+@@ -460,8 +464,12 @@ static void mc_unmask_event_irq(struct irq_data *data)
+ u32 mask;
+ u32 val;
+
+- addr = mc_port->axi_base_addr + event_descs[event].base +
+- event_descs[event].mask_offset;
++ if (event_descs[event].offset == ISTATUS_LOCAL)
++ addr = mc_port->bridge_base_addr;
++ else
++ addr = mc_port->ctrl_base_addr;
++
++ addr += event_descs[event].mask_offset;
+ mask = event_descs[event].mask;
+
+ if (event_descs[event].enb_mask)
+@@ -554,26 +562,20 @@ static const struct plda_event mc_event = {
+
+ static inline void mc_clear_secs(struct mc_pcie *port)
+ {
+- void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+-
+- writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT, ctrl_base_addr +
+- SEC_ERROR_INT);
+- writel_relaxed(0, ctrl_base_addr + SEC_ERROR_EVENT_CNT);
++ writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT,
++ port->ctrl_base_addr + SEC_ERROR_INT);
++ writel_relaxed(0, port->ctrl_base_addr + SEC_ERROR_EVENT_CNT);
+ }
+
+ static inline void mc_clear_deds(struct mc_pcie *port)
+ {
+- void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+-
+- writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT, ctrl_base_addr +
+- DED_ERROR_INT);
+- writel_relaxed(0, ctrl_base_addr + DED_ERROR_EVENT_CNT);
++ writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT,
++ port->ctrl_base_addr + DED_ERROR_INT);
++ writel_relaxed(0, port->ctrl_base_addr + DED_ERROR_EVENT_CNT);
+ }
+
+ static void mc_disable_interrupts(struct mc_pcie *port)
+ {
+- void __iomem *bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+- void __iomem *ctrl_base_addr = port->axi_base_addr + MC_PCIE_CTRL_ADDR;
+ u32 val;
+
+ /* Ensure ECC bypass is enabled */
+@@ -581,22 +583,22 @@ static void mc_disable_interrupts(struct mc_pcie *port)
+ ECC_CONTROL_RX_RAM_ECC_BYPASS |
+ ECC_CONTROL_PCIE2AXI_RAM_ECC_BYPASS |
+ ECC_CONTROL_AXI2PCIE_RAM_ECC_BYPASS;
+- writel_relaxed(val, ctrl_base_addr + ECC_CONTROL);
++ writel_relaxed(val, port->ctrl_base_addr + ECC_CONTROL);
+
+ /* Disable SEC errors and clear any outstanding */
+- writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT, ctrl_base_addr +
+- SEC_ERROR_INT_MASK);
++ writel_relaxed(SEC_ERROR_INT_ALL_RAM_SEC_ERR_INT,
++ port->ctrl_base_addr + SEC_ERROR_INT_MASK);
+ mc_clear_secs(port);
+
+ /* Disable DED errors and clear any outstanding */
+- writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT, ctrl_base_addr +
+- DED_ERROR_INT_MASK);
++ writel_relaxed(DED_ERROR_INT_ALL_RAM_DED_ERR_INT,
++ port->ctrl_base_addr + DED_ERROR_INT_MASK);
+ mc_clear_deds(port);
+
+ /* Disable local interrupts and clear any outstanding */
+- writel_relaxed(0, bridge_base_addr + IMASK_LOCAL);
+- writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_LOCAL);
+- writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_MSI);
++ writel_relaxed(0, port->bridge_base_addr + IMASK_LOCAL);
++ writel_relaxed(GENMASK(31, 0), port->bridge_base_addr + ISTATUS_LOCAL);
++ writel_relaxed(GENMASK(31, 0), port->bridge_base_addr + ISTATUS_MSI);
+
+ /* Disable PCIe events and clear any outstanding */
+ val = PCIE_EVENT_INT_L2_EXIT_INT |
+@@ -605,11 +607,96 @@ static void mc_disable_interrupts(struct mc_pcie *port)
+ PCIE_EVENT_INT_L2_EXIT_INT_MASK |
+ PCIE_EVENT_INT_HOTRST_EXIT_INT_MASK |
+ PCIE_EVENT_INT_DLUP_EXIT_INT_MASK;
+- writel_relaxed(val, ctrl_base_addr + PCIE_EVENT_INT);
++ writel_relaxed(val, port->ctrl_base_addr + PCIE_EVENT_INT);
+
+ /* Disable host interrupts and clear any outstanding */
+- writel_relaxed(0, bridge_base_addr + IMASK_HOST);
+- writel_relaxed(GENMASK(31, 0), bridge_base_addr + ISTATUS_HOST);
++ writel_relaxed(0, port->bridge_base_addr + IMASK_HOST);
++ writel_relaxed(GENMASK(31, 0), port->bridge_base_addr + ISTATUS_HOST);
++}
++
++static void mc_pcie_setup_inbound_atr(struct mc_pcie *port, int window_index,
++ u64 axi_addr, u64 pcie_addr, u64 size)
++{
++ u32 table_offset = window_index * ATR_ENTRY_SIZE;
++ void __iomem *table_addr = port->bridge_base_addr + table_offset;
++ u32 atr_sz;
++ u32 val;
++
++ atr_sz = ilog2(size) - 1;
++
++ val = ALIGN_DOWN(lower_32_bits(pcie_addr), SZ_4K);
++ val |= FIELD_PREP(ATR_SIZE_MASK, atr_sz);
++ val |= ATR_IMPL_ENABLE;
++
++ writel(val, table_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
++
++ writel(upper_32_bits(pcie_addr), table_addr + ATR0_PCIE_WIN0_SRC_ADDR);
++
++ writel(lower_32_bits(axi_addr), table_addr + ATR0_PCIE_WIN0_TRSL_ADDR_LSB);
++ writel(upper_32_bits(axi_addr), table_addr + ATR0_PCIE_WIN0_TRSL_ADDR_UDW);
++
++ writel(TRSL_ID_AXI4_MASTER_0, table_addr + ATR0_PCIE_WIN0_TRSL_PARAM);
++}
++
++static int mc_pcie_setup_inbound_ranges(struct platform_device *pdev,
++ struct mc_pcie *port)
++{
++ struct device *dev = &pdev->dev;
++ struct device_node *dn = dev->of_node;
++ struct of_range_parser parser;
++ struct of_range range;
++ int atr_index = 0;
++
++ /*
++ * MPFS PCIe Root Port is 32-bit only, behind a Fabric Interface
++ * Controller FPGA logic block which contains the AXI-S interface.
++ *
++ * From the point of view of the PCIe Root Port, there are only two
++ * supported Root Port configurations:
++ *
++ * Configuration 1: for use with fully coherent designs; supports a
++ * window from 0x0 (CPU space) to the specified PCIe space.
++ *
++ * Configuration 2: for use with non-coherent designs; supports two
++ * 1 GB windows to CPU space; one mapping CPU space 0 to PCIe space
++ * 0x80000000 and a second mapping CPU space 0x40000000 to PCIe
++ * space 0xc0000000. This configuration needs two windows because of
++ * how the MSI space is allocated in the AXI-S range on MPFS.
++ *
++ * The FIC interface outside the PCIe block *must* complete the
++ * inbound address translation as per MCHP MPFS FPGA design
++ * guidelines.
++ */
++ if (device_property_read_bool(dev, "dma-noncoherent")) {
++ /*
++ * Always need the same two tables in this case. Two tables are
++ * required due to hardware interactions between address and size.
++ */
++ mc_pcie_setup_inbound_atr(port, 0, 0,
++ MPFS_NC_BOUNCE_ADDR, SZ_1G);
++ mc_pcie_setup_inbound_atr(port, 1, SZ_1G,
++ MPFS_NC_BOUNCE_ADDR + SZ_1G, SZ_1G);
++ } else {
++ /* Find any DMA ranges */
++ if (of_pci_dma_range_parser_init(&parser, dn)) {
++ /* No DMA range property - setup default */
++ mc_pcie_setup_inbound_atr(port, 0, 0, 0, SZ_4G);
++ return 0;
++ }
++
++ for_each_of_range(&parser, &range) {
++ if (atr_index >= MC_MAX_NUM_INBOUND_WINDOWS) {
++ dev_err(dev, "too many inbound ranges; %d available tables\n",
++ MC_MAX_NUM_INBOUND_WINDOWS);
++ return -EINVAL;
++ }
++ mc_pcie_setup_inbound_atr(port, atr_index, 0,
++ range.pci_addr, range.size);
++ atr_index++;
++ }
++ }
++
++ return 0;
+ }
+
+ static int mc_platform_init(struct pci_config_window *cfg)
+@@ -617,12 +704,10 @@ static int mc_platform_init(struct pci_config_window *cfg)
+ struct device *dev = cfg->parent;
+ struct platform_device *pdev = to_platform_device(dev);
+ struct pci_host_bridge *bridge = platform_get_drvdata(pdev);
+- void __iomem *bridge_base_addr =
+- port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+ int ret;
+
+ /* Configure address translation table 0 for PCIe config space */
+- plda_pcie_setup_window(bridge_base_addr, 0, cfg->res.start,
++ plda_pcie_setup_window(port->bridge_base_addr, 0, cfg->res.start,
+ cfg->res.start,
+ resource_size(&cfg->res));
+
+@@ -634,6 +719,10 @@ static int mc_platform_init(struct pci_config_window *cfg)
+ if (ret)
+ return ret;
+
++ ret = mc_pcie_setup_inbound_ranges(pdev, port);
++ if (ret)
++ return ret;
++
+ port->plda.event_ops = &mc_event_ops;
+ port->plda.event_irq_chip = &mc_event_irq_chip;
+ port->plda.events_bitmap = GENMASK(NUM_EVENTS - 1, 0);
+@@ -649,7 +738,7 @@ static int mc_platform_init(struct pci_config_window *cfg)
+ static int mc_host_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+- void __iomem *bridge_base_addr;
++ void __iomem *apb_base_addr;
+ struct plda_pcie_rp *plda;
+ int ret;
+ u32 val;
+@@ -661,30 +750,45 @@ static int mc_host_probe(struct platform_device *pdev)
+ plda = &port->plda;
+ plda->dev = dev;
+
+- port->axi_base_addr = devm_platform_ioremap_resource(pdev, 1);
+- if (IS_ERR(port->axi_base_addr))
+- return PTR_ERR(port->axi_base_addr);
++ port->bridge_base_addr = devm_platform_ioremap_resource_byname(pdev,
++ "bridge");
++ port->ctrl_base_addr = devm_platform_ioremap_resource_byname(pdev,
++ "ctrl");
++ if (!IS_ERR(port->bridge_base_addr) && !IS_ERR(port->ctrl_base_addr))
++ goto addrs_set;
++
++ /*
++ * The original, incorrect, binding that lumped the control and
++ * bridge addresses together still needs to be handled by the driver.
++ */
++ apb_base_addr = devm_platform_ioremap_resource_byname(pdev, "apb");
++ if (IS_ERR(apb_base_addr))
++ return dev_err_probe(dev, PTR_ERR(apb_base_addr),
++ "both legacy apb register and ctrl/bridge regions missing");
++
++ port->bridge_base_addr = apb_base_addr + MC_PCIE1_BRIDGE_ADDR;
++ port->ctrl_base_addr = apb_base_addr + MC_PCIE1_CTRL_ADDR;
+
++addrs_set:
+ mc_disable_interrupts(port);
+
+- bridge_base_addr = port->axi_base_addr + MC_PCIE_BRIDGE_ADDR;
+- plda->bridge_addr = bridge_base_addr;
++ plda->bridge_addr = port->bridge_base_addr;
+ plda->num_events = NUM_EVENTS;
+
+ /* Allow enabling MSI by disabling MSI-X */
+- val = readl(bridge_base_addr + PCIE_PCI_IRQ_DW0);
++ val = readl(port->bridge_base_addr + PCIE_PCI_IRQ_DW0);
+ val &= ~MSIX_CAP_MASK;
+- writel(val, bridge_base_addr + PCIE_PCI_IRQ_DW0);
++ writel(val, port->bridge_base_addr + PCIE_PCI_IRQ_DW0);
+
+ /* Pick num vectors from bitfile programmed onto FPGA fabric */
+- val = readl(bridge_base_addr + PCIE_PCI_IRQ_DW0);
++ val = readl(port->bridge_base_addr + PCIE_PCI_IRQ_DW0);
+ val &= NUM_MSI_MSGS_MASK;
+ val >>= NUM_MSI_MSGS_SHIFT;
+
+ plda->msi.num_vectors = 1 << val;
+
+ /* Pick vector address from design */
+- plda->msi.vector_phy = readl_relaxed(bridge_base_addr + IMSI_ADDR);
++ plda->msi.vector_phy = readl_relaxed(port->bridge_base_addr + IMSI_ADDR);
+
+ ret = mc_pcie_init_clks(dev);
+ if (ret) {
+diff --git a/drivers/pci/controller/plda/pcie-plda-host.c b/drivers/pci/controller/plda/pcie-plda-host.c
+index 8533dc618d45f0..4153214ca41038 100644
+--- a/drivers/pci/controller/plda/pcie-plda-host.c
++++ b/drivers/pci/controller/plda/pcie-plda-host.c
+@@ -8,11 +8,14 @@
+ * Author: Daire McNamara <daire.mcnamara@microchip.com>
+ */
+
++#include <linux/align.h>
++#include <linux/bitfield.h>
+ #include <linux/irqchip/chained_irq.h>
+ #include <linux/irqdomain.h>
+ #include <linux/msi.h>
+ #include <linux/pci_regs.h>
+ #include <linux/pci-ecam.h>
++#include <linux/wordpart.h>
+
+ #include "pcie-plda.h"
+
+@@ -502,8 +505,9 @@ void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
+ writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+ ATR0_AXI4_SLV0_TRSL_PARAM);
+
+- val = lower_32_bits(axi_addr) | (atr_sz << ATR_SIZE_SHIFT) |
+- ATR_IMPL_ENABLE;
++ val = ALIGN_DOWN(lower_32_bits(axi_addr), SZ_4K);
++ val |= FIELD_PREP(ATR_SIZE_MASK, atr_sz);
++ val |= ATR_IMPL_ENABLE;
+ writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+ ATR0_AXI4_SLV0_SRCADDR_PARAM);
+
+@@ -518,13 +522,20 @@ void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
+ val = upper_32_bits(pci_addr);
+ writel(val, bridge_base_addr + (index * ATR_ENTRY_SIZE) +
+ ATR0_AXI4_SLV0_TRSL_ADDR_UDW);
++}
++EXPORT_SYMBOL_GPL(plda_pcie_setup_window);
++
++void plda_pcie_setup_inbound_address_translation(struct plda_pcie_rp *port)
++{
++ void __iomem *bridge_base_addr = port->bridge_addr;
++ u32 val;
+
+ val = readl(bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
+ val |= (ATR0_PCIE_ATR_SIZE << ATR0_PCIE_ATR_SIZE_SHIFT);
+ writel(val, bridge_base_addr + ATR0_PCIE_WIN0_SRCADDR_PARAM);
+ writel(0, bridge_base_addr + ATR0_PCIE_WIN0_SRC_ADDR);
+ }
+-EXPORT_SYMBOL_GPL(plda_pcie_setup_window);
++EXPORT_SYMBOL_GPL(plda_pcie_setup_inbound_address_translation);
+
+ int plda_pcie_setup_iomems(struct pci_host_bridge *bridge,
+ struct plda_pcie_rp *port)
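The open-coded shift is replaced by GENMASK()/FIELD_PREP() from <linux/bitfield.h>, which records the field's position and width in one place and masks the value so an oversized atr_sz can no longer spill into neighboring bits. Roughly:

	#define ATR_SIZE_MASK	GENMASK(6, 1)

	/* old: val = atr_sz << ATR_SIZE_SHIFT;  (unbounded)              */
	/* new: equivalent to (atr_sz << 1) & 0x7e, i.e. clamped to the   */
	/*      six bits the hardware actually implements                 */
	val = FIELD_PREP(ATR_SIZE_MASK, atr_sz);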
+diff --git a/drivers/pci/controller/plda/pcie-plda.h b/drivers/pci/controller/plda/pcie-plda.h
+index 0e7dc0d8e5ba11..61ece26065ea09 100644
+--- a/drivers/pci/controller/plda/pcie-plda.h
++++ b/drivers/pci/controller/plda/pcie-plda.h
+@@ -89,14 +89,15 @@
+
+ /* PCIe AXI slave table init defines */
+ #define ATR0_AXI4_SLV0_SRCADDR_PARAM 0x800u
+-#define ATR_SIZE_SHIFT 1
+-#define ATR_IMPL_ENABLE 1
++#define ATR_SIZE_MASK GENMASK(6, 1)
++#define ATR_IMPL_ENABLE BIT(0)
+ #define ATR0_AXI4_SLV0_SRC_ADDR 0x804u
+ #define ATR0_AXI4_SLV0_TRSL_ADDR_LSB 0x808u
+ #define ATR0_AXI4_SLV0_TRSL_ADDR_UDW 0x80cu
+ #define ATR0_AXI4_SLV0_TRSL_PARAM 0x810u
+ #define PCIE_TX_RX_INTERFACE 0x00000000u
+ #define PCIE_CONFIG_INTERFACE 0x00000001u
++#define TRSL_ID_AXI4_MASTER_0 0x00000004u
+
+ #define CONFIG_SPACE_ADDR_OFFSET 0x1000u
+
+@@ -204,6 +205,7 @@ int plda_init_interrupts(struct platform_device *pdev,
+ void plda_pcie_setup_window(void __iomem *bridge_base_addr, u32 index,
+ phys_addr_t axi_addr, phys_addr_t pci_addr,
+ size_t size);
++void plda_pcie_setup_inbound_address_translation(struct plda_pcie_rp *port);
+ int plda_pcie_setup_iomems(struct pci_host_bridge *bridge,
+ struct plda_pcie_rp *port);
+ int plda_pcie_host_init(struct plda_pcie_rp *port, struct pci_ops *ops,
+diff --git a/drivers/pci/endpoint/functions/pci-epf-test.c b/drivers/pci/endpoint/functions/pci-epf-test.c
+index 7c2ed6eae53ad1..14b4c68ab4e1a2 100644
+--- a/drivers/pci/endpoint/functions/pci-epf-test.c
++++ b/drivers/pci/endpoint/functions/pci-epf-test.c
+@@ -251,7 +251,7 @@ static int pci_epf_test_init_dma_chan(struct pci_epf_test *epf_test)
+
+ fail_back_rx:
+ dma_release_channel(epf_test->dma_chan_rx);
+- epf_test->dma_chan_tx = NULL;
++ epf_test->dma_chan_rx = NULL;
+
+ fail_back_tx:
+ dma_cap_zero(mask);
+@@ -361,8 +361,8 @@ static void pci_epf_test_copy(struct pci_epf_test *epf_test,
+
+ ktime_get_ts64(&start);
+ if (reg->flags & FLAG_USE_DMA) {
+- if (epf_test->dma_private) {
+- dev_err(dev, "Cannot transfer data using DMA\n");
++ if (!dma_has_cap(DMA_MEMCPY, epf_test->dma_chan_tx->device->cap_mask)) {
++ dev_err(dev, "DMA controller doesn't support MEMCPY\n");
+ ret = -EINVAL;
+ goto err_map_addr;
+ }
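Two independent fixes land here: the DMA error path was clearing dma_chan_tx after releasing dma_chan_rx (a copy-and-paste slip that left a dangling rx pointer), and the copy test now asks the dmaengine directly whether the channel can do memory-to-memory transfers instead of inferring it from the dma_private flag. The capability check, in isolation:

	/* reject channels whose engine cannot perform MEMCPY */
	if (!dma_has_cap(DMA_MEMCPY, chan->device->cap_mask))
		return -EINVAL;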
+diff --git a/drivers/pci/endpoint/pci-epc-core.c b/drivers/pci/endpoint/pci-epc-core.c
+index 62f7dff437309f..de665342dc16d0 100644
+--- a/drivers/pci/endpoint/pci-epc-core.c
++++ b/drivers/pci/endpoint/pci-epc-core.c
+@@ -856,7 +856,7 @@ void devm_pci_epc_destroy(struct device *dev, struct pci_epc *epc)
+ {
+ int r;
+
+- r = devres_destroy(dev, devm_pci_epc_release, devm_pci_epc_match,
++ r = devres_release(dev, devm_pci_epc_release, devm_pci_epc_match,
+ epc);
+ dev_WARN_ONCE(dev, r, "couldn't find PCI EPC resource\n");
+ }
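The one-word change matters: devres_destroy() removes the matching managed resource without invoking its release callback, whereas devres_release() invokes the callback and then frees the entry. Since the actual teardown lives in devm_pci_epc_release(), only the latter really destroys the EPC:

	/* devres_destroy(): drop the entry, skip devm_pci_epc_release() */
	/* devres_release(): run devm_pci_epc_release(), then drop it    */
	r = devres_release(dev, devm_pci_epc_release, devm_pci_epc_match, epc);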
+diff --git a/drivers/pinctrl/nomadik/pinctrl-nomadik.c b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+index f4f10c60c1d23b..dcc662be080004 100644
+--- a/drivers/pinctrl/nomadik/pinctrl-nomadik.c
++++ b/drivers/pinctrl/nomadik/pinctrl-nomadik.c
+@@ -438,9 +438,9 @@ static void nmk_prcm_altcx_set_mode(struct nmk_pinctrl *npct,
+ * - Any spurious wake up event during switch sequence to be ignored and
+ * cleared
+ */
+-static void nmk_gpio_glitch_slpm_init(unsigned int *slpm)
++static int nmk_gpio_glitch_slpm_init(unsigned int *slpm)
+ {
+- int i;
++ int i, j, ret;
+
+ for (i = 0; i < NMK_MAX_BANKS; i++) {
+ struct nmk_gpio_chip *chip = nmk_gpio_chips[i];
+@@ -449,11 +449,21 @@ static void nmk_gpio_glitch_slpm_init(unsigned int *slpm)
+ if (!chip)
+ break;
+
+- clk_enable(chip->clk);
++ ret = clk_enable(chip->clk);
++ if (ret) {
++ for (j = 0; j < i; j++) {
++ chip = nmk_gpio_chips[j];
++ clk_disable(chip->clk);
++ }
++
++ return ret;
++ }
+
+ slpm[i] = readl(chip->addr + NMK_GPIO_SLPC);
+ writel(temp, chip->addr + NMK_GPIO_SLPC);
+ }
++
++ return 0;
+ }
+
+ static void nmk_gpio_glitch_slpm_restore(unsigned int *slpm)
+@@ -923,7 +933,9 @@ static int nmk_pmx_set(struct pinctrl_dev *pctldev, unsigned int function,
+
+ slpm[nmk_chip->bank] &= ~BIT(bit);
+ }
+- nmk_gpio_glitch_slpm_init(slpm);
++ ret = nmk_gpio_glitch_slpm_init(slpm);
++ if (ret)
++ goto out_pre_slpm_init;
+ }
+
+ for (i = 0; i < g->grp.npins; i++) {
+@@ -940,7 +952,10 @@ static int nmk_pmx_set(struct pinctrl_dev *pctldev, unsigned int function,
+ dev_dbg(npct->dev, "setting pin %d to altsetting %d\n",
+ g->grp.pins[i], g->altsetting);
+
+- clk_enable(nmk_chip->clk);
++ ret = clk_enable(nmk_chip->clk);
++ if (ret)
++ goto out_glitch;
++
+ /*
+ * If the pin is switching to altfunc, and there was an
+ * interrupt installed on it which has been lazy disabled,
+@@ -988,6 +1003,7 @@ static int nmk_gpio_request_enable(struct pinctrl_dev *pctldev,
+ struct nmk_gpio_chip *nmk_chip;
+ struct gpio_chip *chip;
+ unsigned int bit;
++ int ret;
+
+ if (!range) {
+ dev_err(npct->dev, "invalid range\n");
+@@ -1004,7 +1020,9 @@ static int nmk_gpio_request_enable(struct pinctrl_dev *pctldev,
+
+ find_nmk_gpio_from_pin(pin, &bit);
+
+- clk_enable(nmk_chip->clk);
++ ret = clk_enable(nmk_chip->clk);
++ if (ret)
++ return ret;
+ /* There is no glitch when converting any pin to GPIO */
+ __nmk_gpio_set_mode(nmk_chip, bit, NMK_GPIO_ALT_GPIO);
+ clk_disable(nmk_chip->clk);
+@@ -1058,6 +1076,7 @@ static int nmk_pin_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ unsigned long cfg;
+ int pull, slpm, output, val, i;
+ bool lowemi, gpiomode, sleep;
++ int ret;
+
+ nmk_chip = find_nmk_gpio_from_pin(pin, &bit);
+ if (!nmk_chip) {
+@@ -1116,7 +1135,9 @@ static int nmk_pin_config_set(struct pinctrl_dev *pctldev, unsigned int pin,
+ output ? (val ? "high" : "low") : "",
+ lowemi ? "on" : "off");
+
+- clk_enable(nmk_chip->clk);
++ ret = clk_enable(nmk_chip->clk);
++ if (ret)
++ return ret;
+ if (gpiomode)
+ /* No glitch when going to GPIO mode */
+ __nmk_gpio_set_mode(nmk_chip, bit, NMK_GPIO_ALT_GPIO);
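clk_enable() can fail, so each call gains a check, and the loop in nmk_gpio_glitch_slpm_init() now unwinds every clock it already enabled before reporting the error. The general enable-or-unwind pattern, as a standalone sketch:

	int i, ret;

	for (i = 0; i < n; i++) {
		ret = clk_enable(clk[i]);
		if (ret)
			goto err_unwind;
	}
	return 0;

err_unwind:
	while (--i >= 0)	/* disable only the clocks already enabled */
		clk_disable(clk[i]);
	return ret;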
+diff --git a/drivers/pinctrl/pinctrl-amd.c b/drivers/pinctrl/pinctrl-amd.c
+index 7f66ec73199a9c..a12766b3bc8a73 100644
+--- a/drivers/pinctrl/pinctrl-amd.c
++++ b/drivers/pinctrl/pinctrl-amd.c
+@@ -908,12 +908,13 @@ static bool amd_gpio_should_save(struct amd_gpio *gpio_dev, unsigned int pin)
+ return false;
+ }
+
+-static int amd_gpio_suspend(struct device *dev)
++static int amd_gpio_suspend_hibernate_common(struct device *dev, bool is_suspend)
+ {
+ struct amd_gpio *gpio_dev = dev_get_drvdata(dev);
+ struct pinctrl_desc *desc = gpio_dev->pctrl->desc;
+ unsigned long flags;
+ int i;
++ u32 wake_mask = is_suspend ? WAKE_SOURCE_SUSPEND : WAKE_SOURCE_HIBERNATE;
+
+ for (i = 0; i < desc->npins; i++) {
+ int pin = desc->pins[i].number;
+@@ -925,11 +926,11 @@ static int amd_gpio_suspend(struct device *dev)
+ gpio_dev->saved_regs[i] = readl(gpio_dev->base + pin * 4) & ~PIN_IRQ_PENDING;
+
+ /* mask any interrupts not intended to be a wake source */
+- if (!(gpio_dev->saved_regs[i] & WAKE_SOURCE)) {
++ if (!(gpio_dev->saved_regs[i] & wake_mask)) {
+ writel(gpio_dev->saved_regs[i] & ~BIT(INTERRUPT_MASK_OFF),
+ gpio_dev->base + pin * 4);
+- pm_pr_dbg("Disabling GPIO #%d interrupt for suspend.\n",
+- pin);
++ pm_pr_dbg("Disabling GPIO #%d interrupt for %s.\n",
++ pin, is_suspend ? "suspend" : "hibernate");
+ }
+
+ raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
+@@ -938,6 +939,16 @@ static int amd_gpio_suspend(struct device *dev)
+ return 0;
+ }
+
++static int amd_gpio_suspend(struct device *dev)
++{
++ return amd_gpio_suspend_hibernate_common(dev, true);
++}
++
++static int amd_gpio_hibernate(struct device *dev)
++{
++ return amd_gpio_suspend_hibernate_common(dev, false);
++}
++
+ static int amd_gpio_resume(struct device *dev)
+ {
+ struct amd_gpio *gpio_dev = dev_get_drvdata(dev);
+@@ -961,8 +972,12 @@ static int amd_gpio_resume(struct device *dev)
+ }
+
+ static const struct dev_pm_ops amd_gpio_pm_ops = {
+- SET_LATE_SYSTEM_SLEEP_PM_OPS(amd_gpio_suspend,
+- amd_gpio_resume)
++ .suspend_late = amd_gpio_suspend,
++ .resume_early = amd_gpio_resume,
++ .freeze_late = amd_gpio_hibernate,
++ .thaw_early = amd_gpio_resume,
++ .poweroff_late = amd_gpio_hibernate,
++ .restore_early = amd_gpio_resume,
+ };
+ #endif
+
+diff --git a/drivers/pinctrl/pinctrl-amd.h b/drivers/pinctrl/pinctrl-amd.h
+index cf59089f277639..c9522c62d7910f 100644
+--- a/drivers/pinctrl/pinctrl-amd.h
++++ b/drivers/pinctrl/pinctrl-amd.h
+@@ -80,10 +80,9 @@
+ #define FUNCTION_MASK GENMASK(1, 0)
+ #define FUNCTION_INVALID GENMASK(7, 0)
+
+-#define WAKE_SOURCE (BIT(WAKE_CNTRL_OFF_S0I3) | \
+- BIT(WAKE_CNTRL_OFF_S3) | \
+- BIT(WAKE_CNTRL_OFF_S4) | \
+- BIT(WAKECNTRL_Z_OFF))
++#define WAKE_SOURCE_SUSPEND (BIT(WAKE_CNTRL_OFF_S0I3) | \
++ BIT(WAKE_CNTRL_OFF_S3))
++#define WAKE_SOURCE_HIBERNATE BIT(WAKE_CNTRL_OFF_S4)
+
+ struct amd_function {
+ const char *name;
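The driver previously used SET_LATE_SYSTEM_SLEEP_PM_OPS(), which cannot tell suspend from hibernation because it points the suspend, freeze and poweroff hooks at the same callback; roughly:

	/* what SET_LATE_SYSTEM_SLEEP_PM_OPS(susp, res) expands to,
	 * give or take the pm_sleep_ptr() wrappers: */
	.suspend_late  = susp, .resume_early  = res,
	.freeze_late   = susp, .thaw_early    = res,
	.poweroff_late = susp, .restore_early = res,

Spelling the six callbacks out lets suspend arm only the S0i3/S3 wake bits while the hibernation paths arm the S4 bit, so a GPIO configured to wake the system from suspend no longer also aborts hibernation.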
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.c b/drivers/pinctrl/samsung/pinctrl-exynos.c
+index b79c211c037496..ac6dc22b37c98e 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos.c
+@@ -636,7 +636,7 @@ static void exynos_irq_demux_eint16_31(struct irq_desc *desc)
+ if (clk_enable(b->drvdata->pclk)) {
+ dev_err(b->gpio_chip.parent,
+ "unable to enable clock for pending IRQs\n");
+- return;
++ goto out;
+ }
+ }
+
+@@ -652,6 +652,7 @@ static void exynos_irq_demux_eint16_31(struct irq_desc *desc)
+ if (eintd->nr_banks)
+ clk_disable(eintd->banks[0]->drvdata->pclk);
+
++out:
+ chained_irq_exit(chip, desc);
+ }
+
+diff --git a/drivers/pinctrl/stm32/pinctrl-stm32.c b/drivers/pinctrl/stm32/pinctrl-stm32.c
+index 5b7fa77c118436..03f3f707d27555 100644
+--- a/drivers/pinctrl/stm32/pinctrl-stm32.c
++++ b/drivers/pinctrl/stm32/pinctrl-stm32.c
+@@ -86,7 +86,6 @@ struct stm32_pinctrl_group {
+
+ struct stm32_gpio_bank {
+ void __iomem *base;
+- struct clk *clk;
+ struct reset_control *rstc;
+ spinlock_t lock;
+ struct gpio_chip gpio_chip;
+@@ -108,6 +107,7 @@ struct stm32_pinctrl {
+ unsigned ngroups;
+ const char **grp_names;
+ struct stm32_gpio_bank *banks;
++ struct clk_bulk_data *clks;
+ unsigned nbanks;
+ const struct stm32_pinctrl_match_data *match_data;
+ struct irq_domain *domain;
+@@ -1308,12 +1308,6 @@ static int stm32_gpiolib_register_bank(struct stm32_pinctrl *pctl, struct fwnode
+ if (IS_ERR(bank->base))
+ return PTR_ERR(bank->base);
+
+- err = clk_prepare_enable(bank->clk);
+- if (err) {
+- dev_err(dev, "failed to prepare_enable clk (%d)\n", err);
+- return err;
+- }
+-
+ bank->gpio_chip = stm32_gpio_template;
+
+ fwnode_property_read_string(fwnode, "st,bank-name", &bank->gpio_chip.label);
+@@ -1360,26 +1354,20 @@ static int stm32_gpiolib_register_bank(struct stm32_pinctrl *pctl, struct fwnode
+ bank->fwnode, &stm32_gpio_domain_ops,
+ bank);
+
+- if (!bank->domain) {
+- err = -ENODEV;
+- goto err_clk;
+- }
++ if (!bank->domain)
++ return -ENODEV;
+ }
+
+ names = devm_kcalloc(dev, npins, sizeof(char *), GFP_KERNEL);
+- if (!names) {
+- err = -ENOMEM;
+- goto err_clk;
+- }
++ if (!names)
++ return -ENOMEM;
+
+ for (i = 0; i < npins; i++) {
+ stm32_pin = stm32_pctrl_get_desc_pin_from_gpio(pctl, bank, i);
+ if (stm32_pin && stm32_pin->pin.name) {
+ names[i] = devm_kasprintf(dev, GFP_KERNEL, "%s", stm32_pin->pin.name);
+- if (!names[i]) {
+- err = -ENOMEM;
+- goto err_clk;
+- }
++ if (!names[i])
++ return -ENOMEM;
+ } else {
+ names[i] = NULL;
+ }
+@@ -1390,15 +1378,11 @@ static int stm32_gpiolib_register_bank(struct stm32_pinctrl *pctl, struct fwnode
+ err = gpiochip_add_data(&bank->gpio_chip, bank);
+ if (err) {
+ dev_err(dev, "Failed to add gpiochip(%d)!\n", bank_nr);
+- goto err_clk;
++ return err;
+ }
+
+ dev_info(dev, "%s bank added\n", bank->gpio_chip.label);
+ return 0;
+-
+-err_clk:
+- clk_disable_unprepare(bank->clk);
+- return err;
+ }
+
+ static struct irq_domain *stm32_pctrl_get_irq_domain(struct platform_device *pdev)
+@@ -1621,6 +1605,11 @@ int stm32_pctl_probe(struct platform_device *pdev)
+ if (!pctl->banks)
+ return -ENOMEM;
+
++ pctl->clks = devm_kcalloc(dev, banks, sizeof(*pctl->clks),
++ GFP_KERNEL);
++ if (!pctl->clks)
++ return -ENOMEM;
++
+ i = 0;
+ for_each_gpiochip_node(dev, child) {
+ struct stm32_gpio_bank *bank = &pctl->banks[i];
+@@ -1632,24 +1621,27 @@ int stm32_pctl_probe(struct platform_device *pdev)
+ return -EPROBE_DEFER;
+ }
+
+- bank->clk = of_clk_get_by_name(np, NULL);
+- if (IS_ERR(bank->clk)) {
++ pctl->clks[i].clk = of_clk_get_by_name(np, NULL);
++ if (IS_ERR(pctl->clks[i].clk)) {
+ fwnode_handle_put(child);
+- return dev_err_probe(dev, PTR_ERR(bank->clk),
++ return dev_err_probe(dev, PTR_ERR(pctl->clks[i].clk),
+ "failed to get clk\n");
+ }
++ pctl->clks[i].id = "pctl";
+ i++;
+ }
+
++ ret = clk_bulk_prepare_enable(banks, pctl->clks);
++ if (ret) {
++ dev_err(dev, "failed to prepare_enable clk (%d)\n", ret);
++ return ret;
++ }
++
+ for_each_gpiochip_node(dev, child) {
+ ret = stm32_gpiolib_register_bank(pctl, child);
+ if (ret) {
+ fwnode_handle_put(child);
+-
+- for (i = 0; i < pctl->nbanks; i++)
+- clk_disable_unprepare(pctl->banks[i].clk);
+-
+- return ret;
++ goto err_register;
+ }
+
+ pctl->nbanks++;
+@@ -1658,6 +1650,15 @@ int stm32_pctl_probe(struct platform_device *pdev)
+ dev_info(dev, "Pinctrl STM32 initialized\n");
+
+ return 0;
++err_register:
++ for (i = 0; i < pctl->nbanks; i++) {
++ struct stm32_gpio_bank *bank = &pctl->banks[i];
++
++ gpiochip_remove(&bank->gpio_chip);
++ }
++
++ clk_bulk_disable_unprepare(banks, pctl->clks);
++ return ret;
+ }
+
+ static int __maybe_unused stm32_pinctrl_restore_gpio_regs(
+@@ -1726,10 +1727,8 @@ static int __maybe_unused stm32_pinctrl_restore_gpio_regs(
+ int __maybe_unused stm32_pinctrl_suspend(struct device *dev)
+ {
+ struct stm32_pinctrl *pctl = dev_get_drvdata(dev);
+- int i;
+
+- for (i = 0; i < pctl->nbanks; i++)
+- clk_disable(pctl->banks[i].clk);
++ clk_bulk_disable(pctl->nbanks, pctl->clks);
+
+ return 0;
+ }
+@@ -1738,10 +1737,11 @@ int __maybe_unused stm32_pinctrl_resume(struct device *dev)
+ {
+ struct stm32_pinctrl *pctl = dev_get_drvdata(dev);
+ struct stm32_pinctrl_group *g = pctl->groups;
+- int i;
++ int i, ret;
+
+- for (i = 0; i < pctl->nbanks; i++)
+- clk_enable(pctl->banks[i].clk);
++ ret = clk_bulk_enable(pctl->nbanks, pctl->clks);
++ if (ret)
++ return ret;
+
+ for (i = 0; i < pctl->ngroups; i++, g++)
+ stm32_pinctrl_restore_gpio_regs(pctl, g->pin);
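Converting the per-bank struct clk pointers into one clk_bulk_data array lets a single call manage all bank clocks and gives partial-failure handling for free: clk_bulk_prepare_enable() disables whatever it already enabled when a later clock fails. Minimal usage, with illustrative ids (the stm32 driver fills the .clk members itself from the device tree instead of using devm_clk_bulk_get()):

	struct clk_bulk_data clks[] = {
		{ .id = "bank0" },
		{ .id = "bank1" },
	};
	int ret;

	ret = devm_clk_bulk_get(dev, ARRAY_SIZE(clks), clks);
	if (ret)
		return ret;

	ret = clk_bulk_prepare_enable(ARRAY_SIZE(clks), clks);
	if (ret)
		return ret;
	/* ... */
	clk_bulk_disable_unprepare(ARRAY_SIZE(clks), clks);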
+diff --git a/drivers/platform/mellanox/mlxbf-pmc.c b/drivers/platform/mellanox/mlxbf-pmc.c
+index 9d18dfca6a673b..9ff7b487dc4892 100644
+--- a/drivers/platform/mellanox/mlxbf-pmc.c
++++ b/drivers/platform/mellanox/mlxbf-pmc.c
+@@ -1168,7 +1168,7 @@ static int mlxbf_pmc_program_l3_counter(unsigned int blk_num, u32 cnt_num, u32 e
+ /* Method to handle crspace counter programming */
+ static int mlxbf_pmc_program_crspace_counter(unsigned int blk_num, u32 cnt_num, u32 evt)
+ {
+- void *addr;
++ void __iomem *addr;
+ u32 word;
+ int ret;
+
+@@ -1192,7 +1192,7 @@ static int mlxbf_pmc_program_crspace_counter(unsigned int blk_num, u32 cnt_num,
+ /* Method to clear crspace counter value */
+ static int mlxbf_pmc_clear_crspace_counter(unsigned int blk_num, u32 cnt_num)
+ {
+- void *addr;
++ void __iomem *addr;
+
+ addr = pmc->block[blk_num].mmio_base +
+ MLXBF_PMC_CRSPACE_PERFMON_VAL0(pmc->block[blk_num].counters) +
+@@ -1405,7 +1405,7 @@ static int mlxbf_pmc_read_l3_event(unsigned int blk_num, u32 cnt_num, u64 *resul
+ static int mlxbf_pmc_read_crspace_event(unsigned int blk_num, u32 cnt_num, u64 *result)
+ {
+ u32 word, evt;
+- void *addr;
++ void __iomem *addr;
+ int ret;
+
+ addr = pmc->block[blk_num].mmio_base +
+diff --git a/drivers/platform/x86/x86-android-tablets/lenovo.c b/drivers/platform/x86/x86-android-tablets/lenovo.c
+index ae087f1471c174..a60efbaf4817fe 100644
+--- a/drivers/platform/x86/x86-android-tablets/lenovo.c
++++ b/drivers/platform/x86/x86-android-tablets/lenovo.c
+@@ -601,7 +601,7 @@ static const struct regulator_init_data lenovo_yoga_tab2_1380_bq24190_vbus_init_
+ .num_consumer_supplies = 1,
+ };
+
+-struct bq24190_platform_data lenovo_yoga_tab2_1380_bq24190_pdata = {
++static struct bq24190_platform_data lenovo_yoga_tab2_1380_bq24190_pdata = {
+ .regulator_init_data = &lenovo_yoga_tab2_1380_bq24190_vbus_init_data,
+ };
+
+@@ -726,7 +726,7 @@ static const struct platform_device_info lenovo_yoga_tab2_1380_pdevs[] __initcon
+ },
+ };
+
+-const char * const lenovo_yoga_tab2_1380_modules[] __initconst = {
++static const char * const lenovo_yoga_tab2_1380_modules[] __initconst = {
+ "bq24190_charger", /* For the Vbus regulator for lc824206xa */
+ NULL
+ };
+diff --git a/drivers/pps/clients/pps-gpio.c b/drivers/pps/clients/pps-gpio.c
+index 791fdc9326dd60..93e662912b5313 100644
+--- a/drivers/pps/clients/pps-gpio.c
++++ b/drivers/pps/clients/pps-gpio.c
+@@ -214,8 +214,8 @@ static int pps_gpio_probe(struct platform_device *pdev)
+ return -EINVAL;
+ }
+
+- dev_info(data->pps->dev, "Registered IRQ %d as PPS source\n",
+- data->irq);
++ dev_dbg(&data->pps->dev, "Registered IRQ %d as PPS source\n",
++ data->irq);
+
+ return 0;
+ }
+diff --git a/drivers/pps/clients/pps-ktimer.c b/drivers/pps/clients/pps-ktimer.c
+index d33106bd7a290f..2f465549b843f7 100644
+--- a/drivers/pps/clients/pps-ktimer.c
++++ b/drivers/pps/clients/pps-ktimer.c
+@@ -56,7 +56,7 @@ static struct pps_source_info pps_ktimer_info = {
+
+ static void __exit pps_ktimer_exit(void)
+ {
+- dev_info(pps->dev, "ktimer PPS source unregistered\n");
++ dev_dbg(&pps->dev, "ktimer PPS source unregistered\n");
+
+ del_timer_sync(&ktimer);
+ pps_unregister_source(pps);
+@@ -74,7 +74,7 @@ static int __init pps_ktimer_init(void)
+ timer_setup(&ktimer, pps_ktimer_event, 0);
+ mod_timer(&ktimer, jiffies + HZ);
+
+- dev_info(pps->dev, "ktimer PPS source registered\n");
++ dev_dbg(&pps->dev, "ktimer PPS source registered\n");
+
+ return 0;
+ }
+diff --git a/drivers/pps/clients/pps-ldisc.c b/drivers/pps/clients/pps-ldisc.c
+index 443d6bae19d14d..fa5660f3c4b707 100644
+--- a/drivers/pps/clients/pps-ldisc.c
++++ b/drivers/pps/clients/pps-ldisc.c
+@@ -32,7 +32,7 @@ static void pps_tty_dcd_change(struct tty_struct *tty, bool active)
+ pps_event(pps, &ts, active ? PPS_CAPTUREASSERT :
+ PPS_CAPTURECLEAR, NULL);
+
+- dev_dbg(pps->dev, "PPS %s at %lu\n",
++ dev_dbg(&pps->dev, "PPS %s at %lu\n",
+ active ? "assert" : "clear", jiffies);
+ }
+
+@@ -69,7 +69,7 @@ static int pps_tty_open(struct tty_struct *tty)
+ goto err_unregister;
+ }
+
+- dev_info(pps->dev, "source \"%s\" added\n", info.path);
++ dev_dbg(&pps->dev, "source \"%s\" added\n", info.path);
+
+ return 0;
+
+@@ -89,7 +89,7 @@ static void pps_tty_close(struct tty_struct *tty)
+ if (WARN_ON(!pps))
+ return;
+
+- dev_info(pps->dev, "removed\n");
++ dev_info(&pps->dev, "removed\n");
+ pps_unregister_source(pps);
+ }
+
+diff --git a/drivers/pps/clients/pps_parport.c b/drivers/pps/clients/pps_parport.c
+index abaffb4e1c1ce9..24db06750297d5 100644
+--- a/drivers/pps/clients/pps_parport.c
++++ b/drivers/pps/clients/pps_parport.c
+@@ -81,7 +81,7 @@ static void parport_irq(void *handle)
+ /* check the signal (no signal means the pulse is lost this time) */
+ if (!signal_is_set(port)) {
+ local_irq_restore(flags);
+- dev_err(dev->pps->dev, "lost the signal\n");
++ dev_err(&dev->pps->dev, "lost the signal\n");
+ goto out_assert;
+ }
+
+@@ -98,7 +98,7 @@ static void parport_irq(void *handle)
+ /* timeout */
+ dev->cw_err++;
+ if (dev->cw_err >= CLEAR_WAIT_MAX_ERRORS) {
+- dev_err(dev->pps->dev, "disabled clear edge capture after %d"
++ dev_err(&dev->pps->dev, "disabled clear edge capture after %d"
+ " timeouts\n", dev->cw_err);
+ dev->cw = 0;
+ dev->cw_err = 0;
+diff --git a/drivers/pps/kapi.c b/drivers/pps/kapi.c
+index d9d566f70ed199..92d1b62ea239d7 100644
+--- a/drivers/pps/kapi.c
++++ b/drivers/pps/kapi.c
+@@ -41,7 +41,7 @@ static void pps_add_offset(struct pps_ktime *ts, struct pps_ktime *offset)
+ static void pps_echo_client_default(struct pps_device *pps, int event,
+ void *data)
+ {
+- dev_info(pps->dev, "echo %s %s\n",
++ dev_info(&pps->dev, "echo %s %s\n",
+ event & PPS_CAPTUREASSERT ? "assert" : "",
+ event & PPS_CAPTURECLEAR ? "clear" : "");
+ }
+@@ -112,7 +112,7 @@ struct pps_device *pps_register_source(struct pps_source_info *info,
+ goto kfree_pps;
+ }
+
+- dev_info(pps->dev, "new PPS source %s\n", info->name);
++ dev_dbg(&pps->dev, "new PPS source %s\n", info->name);
+
+ return pps;
+
+@@ -166,7 +166,7 @@ void pps_event(struct pps_device *pps, struct pps_event_time *ts, int event,
+ /* check event type */
+ BUG_ON((event & (PPS_CAPTUREASSERT | PPS_CAPTURECLEAR)) == 0);
+
+- dev_dbg(pps->dev, "PPS event at %lld.%09ld\n",
++ dev_dbg(&pps->dev, "PPS event at %lld.%09ld\n",
+ (s64)ts->ts_real.tv_sec, ts->ts_real.tv_nsec);
+
+ timespec_to_pps_ktime(&ts_real, ts->ts_real);
+@@ -188,7 +188,7 @@ void pps_event(struct pps_device *pps, struct pps_event_time *ts, int event,
+ /* Save the time stamp */
+ pps->assert_tu = ts_real;
+ pps->assert_sequence++;
+- dev_dbg(pps->dev, "capture assert seq #%u\n",
++ dev_dbg(&pps->dev, "capture assert seq #%u\n",
+ pps->assert_sequence);
+
+ captured = ~0;
+@@ -202,7 +202,7 @@ void pps_event(struct pps_device *pps, struct pps_event_time *ts, int event,
+ /* Save the time stamp */
+ pps->clear_tu = ts_real;
+ pps->clear_sequence++;
+- dev_dbg(pps->dev, "capture clear seq #%u\n",
++ dev_dbg(&pps->dev, "capture clear seq #%u\n",
+ pps->clear_sequence);
+
+ captured = ~0;
+diff --git a/drivers/pps/kc.c b/drivers/pps/kc.c
+index 50dc59af45be24..fbd23295afd7d9 100644
+--- a/drivers/pps/kc.c
++++ b/drivers/pps/kc.c
+@@ -43,11 +43,11 @@ int pps_kc_bind(struct pps_device *pps, struct pps_bind_args *bind_args)
+ pps_kc_hardpps_mode = 0;
+ pps_kc_hardpps_dev = NULL;
+ spin_unlock_irq(&pps_kc_hardpps_lock);
+- dev_info(pps->dev, "unbound kernel"
++ dev_info(&pps->dev, "unbound kernel"
+ " consumer\n");
+ } else {
+ spin_unlock_irq(&pps_kc_hardpps_lock);
+- dev_err(pps->dev, "selected kernel consumer"
++ dev_err(&pps->dev, "selected kernel consumer"
+ " is not bound\n");
+ return -EINVAL;
+ }
+@@ -57,11 +57,11 @@ int pps_kc_bind(struct pps_device *pps, struct pps_bind_args *bind_args)
+ pps_kc_hardpps_mode = bind_args->edge;
+ pps_kc_hardpps_dev = pps;
+ spin_unlock_irq(&pps_kc_hardpps_lock);
+- dev_info(pps->dev, "bound kernel consumer: "
++ dev_info(&pps->dev, "bound kernel consumer: "
+ "edge=0x%x\n", bind_args->edge);
+ } else {
+ spin_unlock_irq(&pps_kc_hardpps_lock);
+- dev_err(pps->dev, "another kernel consumer"
++ dev_err(&pps->dev, "another kernel consumer"
+ " is already bound\n");
+ return -EINVAL;
+ }
+@@ -83,7 +83,7 @@ void pps_kc_remove(struct pps_device *pps)
+ pps_kc_hardpps_mode = 0;
+ pps_kc_hardpps_dev = NULL;
+ spin_unlock_irq(&pps_kc_hardpps_lock);
+- dev_info(pps->dev, "unbound kernel consumer"
++ dev_info(&pps->dev, "unbound kernel consumer"
+ " on device removal\n");
+ } else
+ spin_unlock_irq(&pps_kc_hardpps_lock);
+diff --git a/drivers/pps/pps.c b/drivers/pps/pps.c
+index 25d47907db175e..6a02245ea35fec 100644
+--- a/drivers/pps/pps.c
++++ b/drivers/pps/pps.c
+@@ -25,7 +25,7 @@
+ * Local variables
+ */
+
+-static dev_t pps_devt;
++static int pps_major;
+ static struct class *pps_class;
+
+ static DEFINE_MUTEX(pps_idr_lock);
+@@ -62,7 +62,7 @@ static int pps_cdev_pps_fetch(struct pps_device *pps, struct pps_fdata *fdata)
+ else {
+ unsigned long ticks;
+
+- dev_dbg(pps->dev, "timeout %lld.%09d\n",
++ dev_dbg(&pps->dev, "timeout %lld.%09d\n",
+ (long long) fdata->timeout.sec,
+ fdata->timeout.nsec);
+ ticks = fdata->timeout.sec * HZ;
+@@ -80,7 +80,7 @@ static int pps_cdev_pps_fetch(struct pps_device *pps, struct pps_fdata *fdata)
+
+ /* Check for pending signals */
+ if (err == -ERESTARTSYS) {
+- dev_dbg(pps->dev, "pending signal caught\n");
++ dev_dbg(&pps->dev, "pending signal caught\n");
+ return -EINTR;
+ }
+
+@@ -98,7 +98,7 @@ static long pps_cdev_ioctl(struct file *file,
+
+ switch (cmd) {
+ case PPS_GETPARAMS:
+- dev_dbg(pps->dev, "PPS_GETPARAMS\n");
++ dev_dbg(&pps->dev, "PPS_GETPARAMS\n");
+
+ spin_lock_irq(&pps->lock);
+
+@@ -114,7 +114,7 @@ static long pps_cdev_ioctl(struct file *file,
+ break;
+
+ case PPS_SETPARAMS:
+- dev_dbg(pps->dev, "PPS_SETPARAMS\n");
++ dev_dbg(&pps->dev, "PPS_SETPARAMS\n");
+
+ /* Check the capabilities */
+ if (!capable(CAP_SYS_TIME))
+@@ -124,14 +124,14 @@ static long pps_cdev_ioctl(struct file *file,
+ if (err)
+ return -EFAULT;
+ if (!(params.mode & (PPS_CAPTUREASSERT | PPS_CAPTURECLEAR))) {
+- dev_dbg(pps->dev, "capture mode unspecified (%x)\n",
++ dev_dbg(&pps->dev, "capture mode unspecified (%x)\n",
+ params.mode);
+ return -EINVAL;
+ }
+
+ /* Check for supported capabilities */
+ if ((params.mode & ~pps->info.mode) != 0) {
+- dev_dbg(pps->dev, "unsupported capabilities (%x)\n",
++ dev_dbg(&pps->dev, "unsupported capabilities (%x)\n",
+ params.mode);
+ return -EINVAL;
+ }
+@@ -144,7 +144,7 @@ static long pps_cdev_ioctl(struct file *file,
+ /* Restore the read only parameters */
+ if ((params.mode & (PPS_TSFMT_TSPEC | PPS_TSFMT_NTPFP)) == 0) {
+ /* section 3.3 of RFC 2783 interpreted */
+- dev_dbg(pps->dev, "time format unspecified (%x)\n",
++ dev_dbg(&pps->dev, "time format unspecified (%x)\n",
+ params.mode);
+ pps->params.mode |= PPS_TSFMT_TSPEC;
+ }
+@@ -165,7 +165,7 @@ static long pps_cdev_ioctl(struct file *file,
+ break;
+
+ case PPS_GETCAP:
+- dev_dbg(pps->dev, "PPS_GETCAP\n");
++ dev_dbg(&pps->dev, "PPS_GETCAP\n");
+
+ err = put_user(pps->info.mode, iuarg);
+ if (err)
+@@ -176,7 +176,7 @@ static long pps_cdev_ioctl(struct file *file,
+ case PPS_FETCH: {
+ struct pps_fdata fdata;
+
+- dev_dbg(pps->dev, "PPS_FETCH\n");
++ dev_dbg(&pps->dev, "PPS_FETCH\n");
+
+ err = copy_from_user(&fdata, uarg, sizeof(struct pps_fdata));
+ if (err)
+@@ -206,7 +206,7 @@ static long pps_cdev_ioctl(struct file *file,
+ case PPS_KC_BIND: {
+ struct pps_bind_args bind_args;
+
+- dev_dbg(pps->dev, "PPS_KC_BIND\n");
++ dev_dbg(&pps->dev, "PPS_KC_BIND\n");
+
+ /* Check the capabilities */
+ if (!capable(CAP_SYS_TIME))
+@@ -218,7 +218,7 @@ static long pps_cdev_ioctl(struct file *file,
+
+ /* Check for supported capabilities */
+ if ((bind_args.edge & ~pps->info.mode) != 0) {
+- dev_err(pps->dev, "unsupported capabilities (%x)\n",
++ dev_err(&pps->dev, "unsupported capabilities (%x)\n",
+ bind_args.edge);
+ return -EINVAL;
+ }
+@@ -227,7 +227,7 @@ static long pps_cdev_ioctl(struct file *file,
+ if (bind_args.tsformat != PPS_TSFMT_TSPEC ||
+ (bind_args.edge & ~PPS_CAPTUREBOTH) != 0 ||
+ bind_args.consumer != PPS_KC_HARDPPS) {
+- dev_err(pps->dev, "invalid kernel consumer bind"
++ dev_err(&pps->dev, "invalid kernel consumer bind"
+ " parameters (%x)\n", bind_args.edge);
+ return -EINVAL;
+ }
+@@ -259,7 +259,7 @@ static long pps_cdev_compat_ioctl(struct file *file,
+ struct pps_fdata fdata;
+ int err;
+
+- dev_dbg(pps->dev, "PPS_FETCH\n");
++ dev_dbg(&pps->dev, "PPS_FETCH\n");
+
+ err = copy_from_user(&compat, uarg, sizeof(struct pps_fdata_compat));
+ if (err)
+@@ -296,20 +296,36 @@ static long pps_cdev_compat_ioctl(struct file *file,
+ #define pps_cdev_compat_ioctl NULL
+ #endif
+
++static struct pps_device *pps_idr_get(unsigned long id)
++{
++ struct pps_device *pps;
++
++ mutex_lock(&pps_idr_lock);
++ pps = idr_find(&pps_idr, id);
++ if (pps)
++ get_device(&pps->dev);
++
++ mutex_unlock(&pps_idr_lock);
++ return pps;
++}
++
+ static int pps_cdev_open(struct inode *inode, struct file *file)
+ {
+- struct pps_device *pps = container_of(inode->i_cdev,
+- struct pps_device, cdev);
++ struct pps_device *pps = pps_idr_get(iminor(inode));
++
++ if (!pps)
++ return -ENODEV;
++
+ file->private_data = pps;
+- kobject_get(&pps->dev->kobj);
+ return 0;
+ }
+
+ static int pps_cdev_release(struct inode *inode, struct file *file)
+ {
+- struct pps_device *pps = container_of(inode->i_cdev,
+- struct pps_device, cdev);
+- kobject_put(&pps->dev->kobj);
++ struct pps_device *pps = file->private_data;
++
++ WARN_ON(pps->id != iminor(inode));
++ put_device(&pps->dev);
+ return 0;
+ }
+
+@@ -331,22 +347,13 @@ static void pps_device_destruct(struct device *dev)
+ {
+ struct pps_device *pps = dev_get_drvdata(dev);
+
+- cdev_del(&pps->cdev);
+-
+- /* Now we can release the ID for re-use */
+ pr_debug("deallocating pps%d\n", pps->id);
+- mutex_lock(&pps_idr_lock);
+- idr_remove(&pps_idr, pps->id);
+- mutex_unlock(&pps_idr_lock);
+-
+- kfree(dev);
+ kfree(pps);
+ }
+
+ int pps_register_cdev(struct pps_device *pps)
+ {
+ int err;
+- dev_t devt;
+
+ mutex_lock(&pps_idr_lock);
+ /*
+@@ -363,40 +370,29 @@ int pps_register_cdev(struct pps_device *pps)
+ goto out_unlock;
+ }
+ pps->id = err;
+- mutex_unlock(&pps_idr_lock);
+-
+- devt = MKDEV(MAJOR(pps_devt), pps->id);
+-
+- cdev_init(&pps->cdev, &pps_cdev_fops);
+- pps->cdev.owner = pps->info.owner;
+
+- err = cdev_add(&pps->cdev, devt, 1);
+- if (err) {
+- pr_err("%s: failed to add char device %d:%d\n",
+- pps->info.name, MAJOR(pps_devt), pps->id);
++ pps->dev.class = pps_class;
++ pps->dev.parent = pps->info.dev;
++ pps->dev.devt = MKDEV(pps_major, pps->id);
++ dev_set_drvdata(&pps->dev, pps);
++ dev_set_name(&pps->dev, "pps%d", pps->id);
++ err = device_register(&pps->dev);
++ if (err)
+ goto free_idr;
+- }
+- pps->dev = device_create(pps_class, pps->info.dev, devt, pps,
+- "pps%d", pps->id);
+- if (IS_ERR(pps->dev)) {
+- err = PTR_ERR(pps->dev);
+- goto del_cdev;
+- }
+
+ /* Override the release function with our own */
+- pps->dev->release = pps_device_destruct;
++ pps->dev.release = pps_device_destruct;
+
+- pr_debug("source %s got cdev (%d:%d)\n", pps->info.name,
+- MAJOR(pps_devt), pps->id);
++ pr_debug("source %s got cdev (%d:%d)\n", pps->info.name, pps_major,
++ pps->id);
+
++ get_device(&pps->dev);
++ mutex_unlock(&pps_idr_lock);
+ return 0;
+
+-del_cdev:
+- cdev_del(&pps->cdev);
+-
+ free_idr:
+- mutex_lock(&pps_idr_lock);
+ idr_remove(&pps_idr, pps->id);
++ put_device(&pps->dev);
+ out_unlock:
+ mutex_unlock(&pps_idr_lock);
+ return err;
+@@ -406,7 +402,13 @@ void pps_unregister_cdev(struct pps_device *pps)
+ {
+ pr_debug("unregistering pps%d\n", pps->id);
+ pps->lookup_cookie = NULL;
+- device_destroy(pps_class, pps->dev->devt);
++ device_destroy(pps_class, pps->dev.devt);
++
++ /* Now we can release the ID for re-use */
++ mutex_lock(&pps_idr_lock);
++ idr_remove(&pps_idr, pps->id);
++ put_device(&pps->dev);
++ mutex_unlock(&pps_idr_lock);
+ }
+
+ /*
+@@ -426,6 +428,11 @@ void pps_unregister_cdev(struct pps_device *pps)
+ * so that it will not be used again, even if the pps device cannot
+ * be removed from the idr due to pending references holding the minor
+ * number in use.
++ *
++ * Since pps_idr holds a reference to the device, the returned
++ * pps_device is guaranteed to be valid until pps_unregister_cdev() is
++ * called on it. But after calling pps_unregister_cdev(), it may be
++ * freed at any time.
+ */
+ struct pps_device *pps_lookup_dev(void const *cookie)
+ {
+@@ -448,13 +455,11 @@ EXPORT_SYMBOL(pps_lookup_dev);
+ static void __exit pps_exit(void)
+ {
+ class_destroy(pps_class);
+- unregister_chrdev_region(pps_devt, PPS_MAX_SOURCES);
++ __unregister_chrdev(pps_major, 0, PPS_MAX_SOURCES, "pps");
+ }
+
+ static int __init pps_init(void)
+ {
+- int err;
+-
+ pps_class = class_create("pps");
+ if (IS_ERR(pps_class)) {
+ pr_err("failed to allocate class\n");
+@@ -462,8 +467,9 @@ static int __init pps_init(void)
+ }
+ pps_class->dev_groups = pps_groups;
+
+- err = alloc_chrdev_region(&pps_devt, 0, PPS_MAX_SOURCES, "pps");
+- if (err < 0) {
++ pps_major = __register_chrdev(0, 0, PPS_MAX_SOURCES, "pps",
++ &pps_cdev_fops);
++ if (pps_major < 0) {
+ pr_err("failed to allocate char device region\n");
+ goto remove_class;
+ }
+@@ -476,8 +482,7 @@ static int __init pps_init(void)
+
+ remove_class:
+ class_destroy(pps_class);
+-
+- return err;
++ return pps_major;
+ }
+
+ subsys_initcall(pps_init);
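The rework embeds struct device directly in pps_device and keys the idr by minor number, so open() can no longer rely on container_of() against a cdev whose owner might already be on its way out. pps_idr_get() instead takes a device reference while the idr lock is still held, the standard way to hand out refcounted objects from an idr:

	mutex_lock(&pps_idr_lock);
	pps = idr_find(&pps_idr, id);
	if (pps)
		get_device(&pps->dev);	/* pin it before dropping the lock */
	mutex_unlock(&pps_idr_lock);

	/* ... caller uses pps, then put_device(&pps->dev) when done ... */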
+diff --git a/drivers/ptp/ptp_chardev.c b/drivers/ptp/ptp_chardev.c
+index ea96a14d72d141..bf6468c56419c5 100644
+--- a/drivers/ptp/ptp_chardev.c
++++ b/drivers/ptp/ptp_chardev.c
+@@ -4,6 +4,7 @@
+ *
+ * Copyright (C) 2010 OMICRON electronics GmbH
+ */
++#include <linux/compat.h>
+ #include <linux/module.h>
+ #include <linux/posix-clock.h>
+ #include <linux/poll.h>
+@@ -176,6 +177,9 @@ long ptp_ioctl(struct posix_clock_context *pccontext, unsigned int cmd,
+ struct timespec64 ts;
+ int enable, err = 0;
+
++ if (in_compat_syscall() && cmd != PTP_ENABLE_PPS && cmd != PTP_ENABLE_PPS2)
++ arg = (unsigned long)compat_ptr(arg);
++
+ tsevq = pccontext->private_clkdata;
+
+ switch (cmd) {
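Pointer arguments arriving through a 32-bit compat ioctl have to be widened with compat_ptr() before use; on most architectures that is a plain cast, but on s390 it also clears the top bit of the 31-bit user address. The two PTP_ENABLE_PPS commands pass an integer, not a pointer, which is why they are exempted from the conversion:

	/* widen a 32-bit user pointer safely */
	void __user *p = compat_ptr(arg);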
+diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
+index 5feecaadde8e05..120db96d9e95d6 100644
+--- a/drivers/ptp/ptp_ocp.c
++++ b/drivers/ptp/ptp_ocp.c
+@@ -4420,7 +4420,7 @@ ptp_ocp_complete(struct ptp_ocp *bp)
+
+ pps = pps_lookup_dev(bp->ptp);
+ if (pps)
+- ptp_ocp_symlink(bp, pps->dev, "pps");
++ ptp_ocp_symlink(bp, &pps->dev, "pps");
+
+ ptp_ocp_debugfs_add_device(bp);
+
+diff --git a/drivers/pwm/core.c b/drivers/pwm/core.c
+index 210368099a0642..174939359ae3eb 100644
+--- a/drivers/pwm/core.c
++++ b/drivers/pwm/core.c
+@@ -6,7 +6,7 @@
+ * Copyright (C) 2011-2012 Avionic Design GmbH
+ */
+
+-#define DEFAULT_SYMBOL_NAMESPACE PWM
++#define DEFAULT_SYMBOL_NAMESPACE "PWM"
+
+ #include <linux/acpi.h>
+ #include <linux/module.h>
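These DEFAULT_SYMBOL_NAMESPACE hunks (and the matching ones in sc16is7xx and usb-storage further down) track an upstream change, merged for 6.13 as far as I can tell, that turned symbol namespaces from bare identifiers into string literals. Importers change the same way:

	#define DEFAULT_SYMBOL_NAMESPACE "PWM"	/* was: PWM, unquoted */

	MODULE_IMPORT_NS("PWM");		/* was: MODULE_IMPORT_NS(PWM) */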
+diff --git a/drivers/pwm/pwm-dwc-core.c b/drivers/pwm/pwm-dwc-core.c
+index c8425493b95d85..6dabec93a3c641 100644
+--- a/drivers/pwm/pwm-dwc-core.c
++++ b/drivers/pwm/pwm-dwc-core.c
+@@ -9,7 +9,7 @@
+ * Author: Raymond Tan <raymond.tan@intel.com>
+ */
+
+-#define DEFAULT_SYMBOL_NAMESPACE dwc_pwm
++#define DEFAULT_SYMBOL_NAMESPACE "dwc_pwm"
+
+ #include <linux/bitops.h>
+ #include <linux/export.h>
+diff --git a/drivers/pwm/pwm-lpss.c b/drivers/pwm/pwm-lpss.c
+index 867e2bc8c601c8..3b99feb3bb4918 100644
+--- a/drivers/pwm/pwm-lpss.c
++++ b/drivers/pwm/pwm-lpss.c
+@@ -19,7 +19,7 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/time.h>
+
+-#define DEFAULT_SYMBOL_NAMESPACE PWM_LPSS
++#define DEFAULT_SYMBOL_NAMESPACE "PWM_LPSS"
+
+ #include "pwm-lpss.h"
+
+diff --git a/drivers/pwm/pwm-stm32-lp.c b/drivers/pwm/pwm-stm32-lp.c
+index 989731256f5030..5832dce8ed9d58 100644
+--- a/drivers/pwm/pwm-stm32-lp.c
++++ b/drivers/pwm/pwm-stm32-lp.c
+@@ -167,8 +167,12 @@ static int stm32_pwm_lp_get_state(struct pwm_chip *chip,
+ regmap_read(priv->regmap, STM32_LPTIM_CR, &val);
+ state->enabled = !!FIELD_GET(STM32_LPTIM_ENABLE, val);
+ /* Keep PWM counter clock refcount in sync with PWM initial state */
+- if (state->enabled)
+- clk_enable(priv->clk);
++ if (state->enabled) {
++ int ret = clk_enable(priv->clk);
++
++ if (ret)
++ return ret;
++ }
+
+ regmap_read(priv->regmap, STM32_LPTIM_CFGR, &val);
+ presc = FIELD_GET(STM32_LPTIM_PRESC, val);
+diff --git a/drivers/pwm/pwm-stm32.c b/drivers/pwm/pwm-stm32.c
+index eb24054f972973..4f231f8aae7d4c 100644
+--- a/drivers/pwm/pwm-stm32.c
++++ b/drivers/pwm/pwm-stm32.c
+@@ -688,8 +688,11 @@ static int stm32_pwm_probe(struct platform_device *pdev)
+ chip->ops = &stm32pwm_ops;
+
+ /* Initialize clock refcount to number of enabled PWM channels. */
+- for (i = 0; i < num_enabled; i++)
+- clk_enable(priv->clk);
++ for (i = 0; i < num_enabled; i++) {
++ ret = clk_enable(priv->clk);
++ if (ret)
++ return ret;
++ }
+
+ ret = devm_pwmchip_add(dev, chip);
+ if (ret < 0)
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 1179766811f583..4bb2652740d001 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -4946,7 +4946,7 @@ int _regulator_bulk_get(struct device *dev, int num_consumers,
+ consumers[i].supply, get_type);
+ if (IS_ERR(consumers[i].consumer)) {
+ ret = dev_err_probe(dev, PTR_ERR(consumers[i].consumer),
+- "Failed to get supply '%s'",
++ "Failed to get supply '%s'\n",
+ consumers[i].supply);
+ consumers[i].consumer = NULL;
+ goto err;
+diff --git a/drivers/regulator/of_regulator.c b/drivers/regulator/of_regulator.c
+index 3f490d81abc28f..deab0b95b6637d 100644
+--- a/drivers/regulator/of_regulator.c
++++ b/drivers/regulator/of_regulator.c
+@@ -446,7 +446,7 @@ int of_regulator_match(struct device *dev, struct device_node *node,
+ "failed to parse DT for regulator %pOFn\n",
+ child);
+ of_node_put(child);
+- return -EINVAL;
++ goto err_put;
+ }
+ match->of_node = of_node_get(child);
+ count++;
+@@ -455,6 +455,18 @@ int of_regulator_match(struct device *dev, struct device_node *node,
+ }
+
+ return count;
++
++err_put:
++ for (i = 0; i < num_matches; i++) {
++ struct of_regulator_match *match = &matches[i];
++
++ match->init_data = NULL;
++ if (match->of_node) {
++ of_node_put(match->of_node);
++ match->of_node = NULL;
++ }
++ }
++ return -EINVAL;
+ }
+ EXPORT_SYMBOL_GPL(of_regulator_match);
+
+diff --git a/drivers/remoteproc/mtk_scp.c b/drivers/remoteproc/mtk_scp.c
+index e744c07507eede..f98a11d4cf2920 100644
+--- a/drivers/remoteproc/mtk_scp.c
++++ b/drivers/remoteproc/mtk_scp.c
+@@ -1326,6 +1326,11 @@ static int scp_cluster_init(struct platform_device *pdev, struct mtk_scp_of_clus
+ return ret;
+ }
+
++static const struct of_device_id scp_core_match[] = {
++ { .compatible = "mediatek,scp-core" },
++ {}
++};
++
+ static int scp_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -1357,13 +1362,15 @@ static int scp_probe(struct platform_device *pdev)
+ INIT_LIST_HEAD(&scp_cluster->mtk_scp_list);
+ mutex_init(&scp_cluster->cluster_lock);
+
+- ret = devm_of_platform_populate(dev);
++ ret = of_platform_populate(dev_of_node(dev), scp_core_match, NULL, dev);
+ if (ret)
+ return dev_err_probe(dev, ret, "Failed to populate platform devices\n");
+
+ ret = scp_cluster_init(pdev, scp_cluster);
+- if (ret)
++ if (ret) {
++ of_platform_depopulate(dev);
+ return ret;
++ }
+
+ return 0;
+ }
+@@ -1379,6 +1386,7 @@ static void scp_remove(struct platform_device *pdev)
+ rproc_del(scp->rproc);
+ scp_free(scp);
+ }
++ of_platform_depopulate(&pdev->dev);
+ mutex_destroy(&scp_cluster->cluster_lock);
+ }
+
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index f276956f2c5cec..ef6febe3563307 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -2486,6 +2486,13 @@ struct rproc *rproc_alloc(struct device *dev, const char *name,
+ rproc->dev.driver_data = rproc;
+ idr_init(&rproc->notifyids);
+
++ /* Assign a unique device index and name */
++ rproc->index = ida_alloc(&rproc_dev_index, GFP_KERNEL);
++ if (rproc->index < 0) {
++ dev_err(dev, "ida_alloc failed: %d\n", rproc->index);
++ goto put_device;
++ }
++
+ rproc->name = kstrdup_const(name, GFP_KERNEL);
+ if (!rproc->name)
+ goto put_device;
+@@ -2496,13 +2503,6 @@ struct rproc *rproc_alloc(struct device *dev, const char *name,
+ if (rproc_alloc_ops(rproc, ops))
+ goto put_device;
+
+- /* Assign a unique device index and name */
+- rproc->index = ida_alloc(&rproc_dev_index, GFP_KERNEL);
+- if (rproc->index < 0) {
+- dev_err(dev, "ida_alloc failed: %d\n", rproc->index);
+- goto put_device;
+- }
+-
+ dev_set_name(&rproc->dev, "remoteproc%d", rproc->index);
+
+ atomic_set(&rproc->power, 0);
+diff --git a/drivers/rtc/rtc-loongson.c b/drivers/rtc/rtc-loongson.c
+index e8ffc1ab90b02f..90e9d97a86b487 100644
+--- a/drivers/rtc/rtc-loongson.c
++++ b/drivers/rtc/rtc-loongson.c
+@@ -114,6 +114,13 @@ static irqreturn_t loongson_rtc_isr(int irq, void *id)
+ struct loongson_rtc_priv *priv = (struct loongson_rtc_priv *)id;
+
+ rtc_update_irq(priv->rtcdev, 1, RTC_AF | RTC_IRQF);
++
++ /*
++	 * The TOY_MATCH0_REG should be cleared to 0 here,
++ * otherwise the interrupt cannot be cleared.
++ */
++ regmap_write(priv->regmap, TOY_MATCH0_REG, 0);
++
+ return IRQ_HANDLED;
+ }
+
+@@ -131,11 +138,7 @@ static u32 loongson_rtc_handler(void *id)
+ writel(RTC_STS, priv->pm_base + PM1_STS_REG);
+ spin_unlock(&priv->lock);
+
+- /*
+- * The TOY_MATCH0_REG should be cleared 0 here,
+- * otherwise the interrupt cannot be cleared.
+- */
+- return regmap_write(priv->regmap, TOY_MATCH0_REG, 0);
++ return ACPI_INTERRUPT_HANDLED;
+ }
+
+ static int loongson_rtc_set_enabled(struct device *dev)
+diff --git a/drivers/rtc/rtc-pcf85063.c b/drivers/rtc/rtc-pcf85063.c
+index fdbc07f14036af..905986c616559b 100644
+--- a/drivers/rtc/rtc-pcf85063.c
++++ b/drivers/rtc/rtc-pcf85063.c
+@@ -322,7 +322,16 @@ static const struct rtc_class_ops pcf85063_rtc_ops = {
+ static int pcf85063_nvmem_read(void *priv, unsigned int offset,
+ void *val, size_t bytes)
+ {
+- return regmap_read(priv, PCF85063_REG_RAM, val);
++ unsigned int tmp;
++ int ret;
++
++ ret = regmap_read(priv, PCF85063_REG_RAM, &tmp);
++ if (ret < 0)
++ return ret;
++
++ *(u8 *)val = tmp;
++
++ return 0;
+ }
+
+ static int pcf85063_nvmem_write(void *priv, unsigned int offset,
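regmap_read() always stores a full unsigned int, but this nvmem cell is one byte; passing the caller's buffer straight through made regmap scribble 4 bytes into a 1-byte destination. Bouncing through a properly sized local fixes it:

	unsigned int tmp;
	int ret;

	ret = regmap_read(map, reg, &tmp);	/* fills sizeof(unsigned int) */
	if (ret < 0)
		return ret;
	*(u8 *)val = tmp;			/* copy only the byte requested */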
+diff --git a/drivers/rtc/rtc-tps6594.c b/drivers/rtc/rtc-tps6594.c
+index e696676341378e..7c6246e3f02923 100644
+--- a/drivers/rtc/rtc-tps6594.c
++++ b/drivers/rtc/rtc-tps6594.c
+@@ -37,7 +37,7 @@
+ #define MAX_OFFSET (277774)
+
+ // Number of ticks per hour
+-#define TICKS_PER_HOUR (32768 * 3600)
++#define TICKS_PER_HOUR (32768 * 3600LL)
+
+ // Multiplier for ppb conversions
+ #define PPB_MULT NANO
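32768 * 3600 = 117,964,800 still fits in a 32-bit int, so the constant itself never overflowed; the problem is that any expression multiplying TICKS_PER_HOUR by more than 18 is evaluated in int and wraps (117,964,800 * 19 exceeds INT_MAX = 2,147,483,647). The LL suffix promotes every expression involving the constant to 64-bit arithmetic; for example (variable name illustrative):

	#define TICKS_PER_HOUR	(32768 * 3600LL)

	long long t = offset * TICKS_PER_HOUR;	/* now computed in 64 bits */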
+diff --git a/drivers/s390/char/sclp.c b/drivers/s390/char/sclp.c
+index fbffd451031fdb..45bd001206a2b8 100644
+--- a/drivers/s390/char/sclp.c
++++ b/drivers/s390/char/sclp.c
+@@ -245,7 +245,6 @@ static void sclp_request_timeout(bool force_restart);
+ static void sclp_process_queue(void);
+ static void __sclp_make_read_req(void);
+ static int sclp_init_mask(int calculate);
+-static int sclp_init(void);
+
+ static void
+ __sclp_queue_read_req(void)
+@@ -1251,8 +1250,7 @@ static struct platform_driver sclp_pdrv = {
+
+ /* Initialize SCLP driver. Return zero if driver is operational, non-zero
+ * otherwise. */
+-static int
+-sclp_init(void)
++int sclp_init(void)
+ {
+ unsigned long flags;
+ int rc = 0;
+@@ -1305,13 +1303,7 @@ sclp_init(void)
+
+ static __init int sclp_initcall(void)
+ {
+- int rc;
+-
+- rc = platform_driver_register(&sclp_pdrv);
+- if (rc)
+- return rc;
+-
+- return sclp_init();
++ return platform_driver_register(&sclp_pdrv);
+ }
+
+ arch_initcall(sclp_initcall);
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c
+index 10b8e4dc64f8b0..7589f48aebc80f 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_app.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_app.c
+@@ -2951,6 +2951,7 @@ void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc)
+ .max_hw_sectors = MPI3MR_MAX_APP_XFER_SECTORS,
+ .max_segments = MPI3MR_MAX_APP_XFER_SEGMENTS,
+ };
++ struct request_queue *q;
+
+ device_initialize(bsg_dev);
+
+@@ -2966,14 +2967,17 @@ void mpi3mr_bsg_init(struct mpi3mr_ioc *mrioc)
+ return;
+ }
+
+- mrioc->bsg_queue = bsg_setup_queue(bsg_dev, dev_name(bsg_dev), &lim,
++ q = bsg_setup_queue(bsg_dev, dev_name(bsg_dev), &lim,
+ mpi3mr_bsg_request, NULL, 0);
+- if (IS_ERR(mrioc->bsg_queue)) {
++ if (IS_ERR(q)) {
+ ioc_err(mrioc, "%s: bsg registration failed\n",
+ dev_name(bsg_dev));
+ device_del(bsg_dev);
+ put_device(bsg_dev);
++ return;
+ }
++
++ mrioc->bsg_queue = q;
+ }
+
+ /**
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_base.c b/drivers/scsi/mpt3sas/mpt3sas_base.c
+index 16ac2267c71e19..c1d8f2c91a5e51 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -5629,8 +5629,7 @@ _base_static_config_pages(struct MPT3SAS_ADAPTER *ioc)
+ if (!ioc->is_gen35_ioc && ioc->manu_pg11.EEDPTagMode == 0) {
+ pr_err("%s: overriding NVDATA EEDPTagMode setting\n",
+ ioc->name);
+- ioc->manu_pg11.EEDPTagMode &= ~0x3;
+- ioc->manu_pg11.EEDPTagMode |= 0x1;
++ ioc->manu_pg11.EEDPTagMode = 0x1;
+ mpt3sas_config_set_manufacturing_pg11(ioc, &mpi_reply,
+ &ioc->manu_pg11);
+ }
+diff --git a/drivers/soc/atmel/soc.c b/drivers/soc/atmel/soc.c
+index 2a42b28931c96d..298b542dd1c064 100644
+--- a/drivers/soc/atmel/soc.c
++++ b/drivers/soc/atmel/soc.c
+@@ -399,7 +399,7 @@ static const struct of_device_id at91_soc_allowed_list[] __initconst = {
+
+ static int __init atmel_soc_device_init(void)
+ {
+- struct device_node *np = of_find_node_by_path("/");
++ struct device_node *np __free(device_node) = of_find_node_by_path("/");
+
+ if (!of_match_node(at91_soc_allowed_list, np))
+ return 0;
+diff --git a/drivers/spi/spi-omap2-mcspi.c b/drivers/spi/spi-omap2-mcspi.c
+index 4a2f84c4d22e5f..532b2e9c31d0d3 100644
+--- a/drivers/spi/spi-omap2-mcspi.c
++++ b/drivers/spi/spi-omap2-mcspi.c
+@@ -1561,10 +1561,15 @@ static int omap2_mcspi_probe(struct platform_device *pdev)
+ }
+
+ mcspi->ref_clk = devm_clk_get_optional_enabled(&pdev->dev, NULL);
+- if (IS_ERR(mcspi->ref_clk))
+- mcspi->ref_clk_hz = OMAP2_MCSPI_MAX_FREQ;
+- else
++ if (IS_ERR(mcspi->ref_clk)) {
++ status = PTR_ERR(mcspi->ref_clk);
++ dev_err_probe(&pdev->dev, status, "Failed to get ref_clk");
++ goto free_ctlr;
++ }
++ if (mcspi->ref_clk)
+ mcspi->ref_clk_hz = clk_get_rate(mcspi->ref_clk);
++ else
++ mcspi->ref_clk_hz = OMAP2_MCSPI_MAX_FREQ;
+ ctlr->max_speed_hz = mcspi->ref_clk_hz;
+ ctlr->min_speed_hz = mcspi->ref_clk_hz >> 15;
+
+diff --git a/drivers/spi/spi-zynq-qspi.c b/drivers/spi/spi-zynq-qspi.c
+index b67455bda972b2..de4c182474329d 100644
+--- a/drivers/spi/spi-zynq-qspi.c
++++ b/drivers/spi/spi-zynq-qspi.c
+@@ -379,12 +379,21 @@ static int zynq_qspi_setup_op(struct spi_device *spi)
+ {
+ struct spi_controller *ctlr = spi->controller;
+ struct zynq_qspi *qspi = spi_controller_get_devdata(ctlr);
++ int ret;
+
+ if (ctlr->busy)
+ return -EBUSY;
+
+- clk_enable(qspi->refclk);
+- clk_enable(qspi->pclk);
++ ret = clk_enable(qspi->refclk);
++ if (ret)
++ return ret;
++
++ ret = clk_enable(qspi->pclk);
++ if (ret) {
++ clk_disable(qspi->refclk);
++ return ret;
++ }
++
+ zynq_qspi_write(qspi, ZYNQ_QSPI_ENABLE_OFFSET,
+ ZYNQ_QSPI_ENABLE_ENABLE_MASK);
+
+diff --git a/drivers/staging/media/imx/imx-media-of.c b/drivers/staging/media/imx/imx-media-of.c
+index 118bff988bc7e6..bb28daa4d71334 100644
+--- a/drivers/staging/media/imx/imx-media-of.c
++++ b/drivers/staging/media/imx/imx-media-of.c
+@@ -54,22 +54,18 @@ int imx_media_add_of_subdevs(struct imx_media_dev *imxmd,
+ break;
+
+ ret = imx_media_of_add_csi(imxmd, csi_np);
++ of_node_put(csi_np);
+ if (ret) {
+ /* unavailable or already added is not an error */
+ if (ret == -ENODEV || ret == -EEXIST) {
+- of_node_put(csi_np);
+ continue;
+ }
+
+ /* other error, can't continue */
+- goto err_out;
++ return ret;
+ }
+ }
+
+ return 0;
+-
+-err_out:
+- of_node_put(csi_np);
+- return ret;
+ }
+ EXPORT_SYMBOL_GPL(imx_media_add_of_subdevs);
+diff --git a/drivers/staging/media/max96712/max96712.c b/drivers/staging/media/max96712/max96712.c
+index 6bdbccbee05ac3..b528727ada75c6 100644
+--- a/drivers/staging/media/max96712/max96712.c
++++ b/drivers/staging/media/max96712/max96712.c
+@@ -421,7 +421,6 @@ static int max96712_probe(struct i2c_client *client)
+ return -ENOMEM;
+
+ priv->client = client;
+- i2c_set_clientdata(client, priv);
+
+ priv->regmap = devm_regmap_init_i2c(client, &max96712_i2c_regmap);
+ if (IS_ERR(priv->regmap))
+@@ -454,7 +453,8 @@ static int max96712_probe(struct i2c_client *client)
+
+ static void max96712_remove(struct i2c_client *client)
+ {
+- struct max96712_priv *priv = i2c_get_clientdata(client);
++ struct v4l2_subdev *sd = i2c_get_clientdata(client);
++ struct max96712_priv *priv = container_of(sd, struct max96712_priv, sd);
+
+ v4l2_async_unregister_subdev(&priv->sd);
+
+diff --git a/drivers/tty/mips_ejtag_fdc.c b/drivers/tty/mips_ejtag_fdc.c
+index afbf7738c7c47c..58b28be63c79b1 100644
+--- a/drivers/tty/mips_ejtag_fdc.c
++++ b/drivers/tty/mips_ejtag_fdc.c
+@@ -1154,7 +1154,7 @@ static char kgdbfdc_rbuf[4];
+
+ /* write buffer to allow compaction */
+ static unsigned int kgdbfdc_wbuflen;
+-static char kgdbfdc_wbuf[4];
++static u8 kgdbfdc_wbuf[4];
+
+ static void __iomem *kgdbfdc_setup(void)
+ {
+@@ -1215,7 +1215,7 @@ static int kgdbfdc_read_char(void)
+ /* push an FDC word from write buffer to TX FIFO */
+ static void kgdbfdc_push_one(void)
+ {
+- const char *bufs[1] = { kgdbfdc_wbuf };
++ const u8 *bufs[1] = { kgdbfdc_wbuf };
+ struct fdc_word word;
+ void __iomem *regs;
+ unsigned int i;
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 3509af7dc52b88..11519aa2598a01 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2059,7 +2059,8 @@ static void serial8250_break_ctl(struct uart_port *port, int break_state)
+ serial8250_rpm_put(up);
+ }
+
+-static void wait_for_lsr(struct uart_8250_port *up, int bits)
++/* Returns true if @bits were set, false on timeout */
++static bool wait_for_lsr(struct uart_8250_port *up, int bits)
+ {
+ unsigned int status, tmout = 10000;
+
+@@ -2074,11 +2075,11 @@ static void wait_for_lsr(struct uart_8250_port *up, int bits)
+ udelay(1);
+ touch_nmi_watchdog();
+ }
++
++ return (tmout != 0);
+ }
+
+-/*
+- * Wait for transmitter & holding register to empty
+- */
++/* Wait for transmitter and holding register to empty with timeout */
+ static void wait_for_xmitr(struct uart_8250_port *up, int bits)
+ {
+ unsigned int tmout;
+@@ -3297,6 +3298,16 @@ static void serial8250_console_restore(struct uart_8250_port *up)
+ serial8250_out_MCR(up, up->mcr | UART_MCR_DTR | UART_MCR_RTS);
+ }
+
++static void fifo_wait_for_lsr(struct uart_8250_port *up, unsigned int count)
++{
++ unsigned int i;
++
++ for (i = 0; i < count; i++) {
++ if (wait_for_lsr(up, UART_LSR_THRE))
++ return;
++ }
++}
++
+ /*
+ * Print a string to the serial port using the device FIFO
+ *
+@@ -3306,13 +3317,15 @@ static void serial8250_console_restore(struct uart_8250_port *up)
+ static void serial8250_console_fifo_write(struct uart_8250_port *up,
+ const char *s, unsigned int count)
+ {
+- int i;
+ const char *end = s + count;
+ unsigned int fifosize = up->tx_loadsz;
++ unsigned int tx_count = 0;
+ bool cr_sent = false;
++ unsigned int i;
+
+ while (s != end) {
+- wait_for_lsr(up, UART_LSR_THRE);
++ /* Allow timeout for each byte of a possibly full FIFO */
++ fifo_wait_for_lsr(up, fifosize);
+
+ for (i = 0; i < fifosize && s != end; ++i) {
+ if (*s == '\n' && !cr_sent) {
+@@ -3323,7 +3336,14 @@ static void serial8250_console_fifo_write(struct uart_8250_port *up,
+ cr_sent = false;
+ }
+ }
++ tx_count = i;
+ }
++
++ /*
++ * Allow timeout for each byte written since the caller will only wait
++ * for UART_LSR_BOTH_EMPTY using the timeout of a single character
++ */
++ fifo_wait_for_lsr(up, tx_count);
+ }
+
+ /*
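wait_for_lsr() budgets roughly one character time (its ~10 ms ceiling is generous: at 115200 baud a 10-bit frame takes about 87 us, and even at 1200 baud about 8.3 ms), but a full 16-byte FIFO legitimately needs up to sixteen character times to drain. The console path therefore now allows one single-character timeout per byte that may be in flight instead of declaring the port hung after the first, as fifo_wait_for_lsr() above does:

	/* grant one character-time timeout per byte possibly queued */
	for (i = 0; i < count; i++)
		if (wait_for_lsr(up, UART_LSR_THRE))
			return;		/* THRE set: room in the FIFO again */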
+diff --git a/drivers/tty/serial/sc16is7xx.c b/drivers/tty/serial/sc16is7xx.c
+index ad88a33a504f53..6a0a1cce3a897f 100644
+--- a/drivers/tty/serial/sc16is7xx.c
++++ b/drivers/tty/serial/sc16is7xx.c
+@@ -8,7 +8,7 @@
+ */
+
+ #undef DEFAULT_SYMBOL_NAMESPACE
+-#define DEFAULT_SYMBOL_NAMESPACE SERIAL_NXP_SC16IS7XX
++#define DEFAULT_SYMBOL_NAMESPACE "SERIAL_NXP_SC16IS7XX"
+
+ #include <linux/bits.h>
+ #include <linux/clk.h>
+diff --git a/drivers/ufs/core/ufs_bsg.c b/drivers/ufs/core/ufs_bsg.c
+index 6c09d97ae00658..58023f735c195f 100644
+--- a/drivers/ufs/core/ufs_bsg.c
++++ b/drivers/ufs/core/ufs_bsg.c
+@@ -257,6 +257,7 @@ int ufs_bsg_probe(struct ufs_hba *hba)
+ NULL, 0);
+ if (IS_ERR(q)) {
+ ret = PTR_ERR(q);
++ device_del(bsg_dev);
+ goto out;
+ }
+
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 98114c2827c098..244e3e04e1ad74 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1660,8 +1660,6 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+ u8 tx_thr_num_pkt_prd = 0;
+ u8 tx_max_burst_prd = 0;
+ u8 tx_fifo_resize_max_num;
+- const char *usb_psy_name;
+- int ret;
+
+ /* default to highest possible threshold */
+ lpm_nyet_threshold = 0xf;
+@@ -1696,13 +1694,6 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+
+ dwc->sys_wakeup = device_may_wakeup(dwc->sysdev);
+
+- ret = device_property_read_string(dev, "usb-psy-name", &usb_psy_name);
+- if (ret >= 0) {
+- dwc->usb_psy = power_supply_get_by_name(usb_psy_name);
+- if (!dwc->usb_psy)
+- dev_err(dev, "couldn't get usb power supply\n");
+- }
+-
+ dwc->has_lpm_erratum = device_property_read_bool(dev,
+ "snps,has-lpm-erratum");
+ device_property_read_u8(dev, "snps,lpm-nyet-threshold",
+@@ -2105,6 +2096,23 @@ static int dwc3_get_num_ports(struct dwc3 *dwc)
+ return 0;
+ }
+
++static struct power_supply *dwc3_get_usb_power_supply(struct dwc3 *dwc)
++{
++ struct power_supply *usb_psy;
++ const char *usb_psy_name;
++ int ret;
++
++ ret = device_property_read_string(dwc->dev, "usb-psy-name", &usb_psy_name);
++ if (ret < 0)
++ return NULL;
++
++ usb_psy = power_supply_get_by_name(usb_psy_name);
++ if (!usb_psy)
++ return ERR_PTR(-EPROBE_DEFER);
++
++ return usb_psy;
++}
++
+ static int dwc3_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -2161,6 +2169,10 @@ static int dwc3_probe(struct platform_device *pdev)
+
+ dwc3_get_software_properties(dwc);
+
++ dwc->usb_psy = dwc3_get_usb_power_supply(dwc);
++ if (IS_ERR(dwc->usb_psy))
++ return dev_err_probe(dev, PTR_ERR(dwc->usb_psy), "couldn't get usb power supply\n");
++
+ dwc->reset = devm_reset_control_array_get_optional_shared(dev);
+ if (IS_ERR(dwc->reset)) {
+ ret = PTR_ERR(dwc->reset);
+@@ -2585,12 +2597,15 @@ static int dwc3_resume(struct device *dev)
+ pinctrl_pm_select_default_state(dev);
+
+ pm_runtime_disable(dev);
+- pm_runtime_set_active(dev);
++ ret = pm_runtime_set_active(dev);
++ if (ret)
++ goto out;
+
+ ret = dwc3_resume_common(dwc, PMSG_RESUME);
+ if (ret)
+ pm_runtime_set_suspended(dev);
+
++out:
+ pm_runtime_enable(dev);
+
+ return ret;
+diff --git a/drivers/usb/dwc3/dwc3-am62.c b/drivers/usb/dwc3/dwc3-am62.c
+index 538185a4d1b4fb..c507e576bbe084 100644
+--- a/drivers/usb/dwc3/dwc3-am62.c
++++ b/drivers/usb/dwc3/dwc3-am62.c
+@@ -166,6 +166,7 @@ static int phy_syscon_pll_refclk(struct dwc3_am62 *am62)
+ if (ret)
+ return ret;
+
++ of_node_put(args.np);
+ am62->offset = args.args[0];
+
+ /* Core voltage. PHY_CORE_VOLTAGE bit Recommended to be 0 always */
+diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c
+index 15bb3aa12aa8b4..48dee166e5d89c 100644
+--- a/drivers/usb/gadget/function/f_tcm.c
++++ b/drivers/usb/gadget/function/f_tcm.c
+@@ -1066,7 +1066,6 @@ static void usbg_cmd_work(struct work_struct *work)
+ out:
+ transport_send_check_condition_and_sense(se_cmd,
+ TCM_UNSUPPORTED_SCSI_OPCODE, 1);
+- transport_generic_free_cmd(&cmd->se_cmd, 0);
+ }
+
+ static struct usbg_cmd *usbg_get_cmd(struct f_uas *fu,
+@@ -1195,7 +1194,6 @@ static void bot_cmd_work(struct work_struct *work)
+ out:
+ transport_send_check_condition_and_sense(se_cmd,
+ TCM_UNSUPPORTED_SCSI_OPCODE, 1);
+- transport_generic_free_cmd(&cmd->se_cmd, 0);
+ }
+
+ static int bot_submit_command(struct f_uas *fu,
+@@ -2051,9 +2049,14 @@ static void tcm_delayed_set_alt(struct work_struct *wq)
+
+ static int tcm_get_alt(struct usb_function *f, unsigned intf)
+ {
+- if (intf == bot_intf_desc.bInterfaceNumber)
++ struct f_uas *fu = to_f_uas(f);
++
++ if (fu->iface != intf)
++ return -EOPNOTSUPP;
++
++ if (fu->flags & USBG_IS_BOT)
+ return USB_G_ALT_INT_BBB;
+- if (intf == uasp_intf_desc.bInterfaceNumber)
++ else if (fu->flags & USBG_IS_UAS)
+ return USB_G_ALT_INT_UAS;
+
+ return -EOPNOTSUPP;
+@@ -2063,6 +2066,9 @@ static int tcm_set_alt(struct usb_function *f, unsigned intf, unsigned alt)
+ {
+ struct f_uas *fu = to_f_uas(f);
+
++ if (fu->iface != intf)
++ return -EOPNOTSUPP;
++
+ if ((alt == USB_G_ALT_INT_BBB) || (alt == USB_G_ALT_INT_UAS)) {
+ struct guas_setup_wq *work;
+
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index b267dae14d3904..4384b86ea7b66c 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -422,7 +422,8 @@ static void xhci_handle_stopped_cmd_ring(struct xhci_hcd *xhci,
+ if ((xhci->cmd_ring->dequeue != xhci->cmd_ring->enqueue) &&
+ !(xhci->xhc_state & XHCI_STATE_DYING)) {
+ xhci->current_cmd = cur_cmd;
+- xhci_mod_cmd_timer(xhci);
++ if (cur_cmd)
++ xhci_mod_cmd_timer(xhci);
+ xhci_ring_cmd_db(xhci);
+ }
+ }
+diff --git a/drivers/usb/storage/Makefile b/drivers/usb/storage/Makefile
+index 46635fa4a3405d..28db337f190bf5 100644
+--- a/drivers/usb/storage/Makefile
++++ b/drivers/usb/storage/Makefile
+@@ -8,7 +8,7 @@
+
+ ccflags-y := -I $(srctree)/drivers/scsi
+
+-ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE=USB_STORAGE
++ccflags-y += -DDEFAULT_SYMBOL_NAMESPACE='"USB_STORAGE"'
+
+ obj-$(CONFIG_USB_UAS) += uas.o
+ obj-$(CONFIG_USB_STORAGE) += usb-storage.o
+diff --git a/drivers/usb/typec/tcpm/tcpci.c b/drivers/usb/typec/tcpm/tcpci.c
+index 24a6a4354df8ba..b2c83f552da55d 100644
+--- a/drivers/usb/typec/tcpm/tcpci.c
++++ b/drivers/usb/typec/tcpm/tcpci.c
+@@ -27,6 +27,7 @@
+ #define VPPS_NEW_MIN_PERCENT 95
+ #define VPPS_VALID_MIN_MV 100
+ #define VSINKDISCONNECT_PD_MIN_PERCENT 90
++#define VPPS_SHUTDOWN_MIN_PERCENT 85
+
+ struct tcpci {
+ struct device *dev;
+@@ -366,7 +367,8 @@ static int tcpci_enable_auto_vbus_discharge(struct tcpc_dev *dev, bool enable)
+ }
+
+ static int tcpci_set_auto_vbus_discharge_threshold(struct tcpc_dev *dev, enum typec_pwr_opmode mode,
+- bool pps_active, u32 requested_vbus_voltage_mv)
++ bool pps_active, u32 requested_vbus_voltage_mv,
++ u32 apdo_min_voltage_mv)
+ {
+ struct tcpci *tcpci = tcpc_to_tcpci(dev);
+ unsigned int pwr_ctrl, threshold = 0;
+@@ -388,9 +390,12 @@ static int tcpci_set_auto_vbus_discharge_threshold(struct tcpc_dev *dev, enum ty
+ threshold = AUTO_DISCHARGE_DEFAULT_THRESHOLD_MV;
+ } else if (mode == TYPEC_PWR_MODE_PD) {
+ if (pps_active)
+- threshold = ((VPPS_NEW_MIN_PERCENT * requested_vbus_voltage_mv / 100) -
+- VSINKPD_MIN_IR_DROP_MV - VPPS_VALID_MIN_MV) *
+- VSINKDISCONNECT_PD_MIN_PERCENT / 100;
++ /*
++ * To prevent a disconnect when the source is in Current Limit Mode,
++ * set the threshold to the lowest possible voltage vPpsShutdown (min).
++ */
++ threshold = VPPS_SHUTDOWN_MIN_PERCENT * apdo_min_voltage_mv / 100 -
++ VSINKPD_MIN_IR_DROP_MV;
+ else
+ threshold = ((VSRC_NEW_MIN_PERCENT * requested_vbus_voltage_mv / 100) -
+ VSINKPD_MIN_IR_DROP_MV - VSRC_VALID_MIN_MV) *
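As a worked example of the new threshold formula (assuming VSINKPD_MIN_IR_DROP_MV is 750 mV, as defined earlier in this file but not shown in the hunk): for a PPS APDO whose minimum voltage is 3300 mV, the auto-discharge threshold becomes 85 * 3300 / 100 - 750 = 2805 - 750 = 2055 mV. Because the threshold now tracks the APDO minimum rather than the currently requested voltage, a source that drops its output in Current Limit Mode can sag well below the requested level without falsely triggering a VBUS-removed disconnect.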
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index 7ae341a403424c..48ddf27704619d 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -2928,10 +2928,12 @@ static int tcpm_set_auto_vbus_discharge_threshold(struct tcpm_port *port,
+ return 0;
+
+ ret = port->tcpc->set_auto_vbus_discharge_threshold(port->tcpc, mode, pps_active,
+- requested_vbus_voltage);
++ requested_vbus_voltage,
++ port->pps_data.min_volt);
+ tcpm_log_force(port,
+- "set_auto_vbus_discharge_threshold mode:%d pps_active:%c vbus:%u ret:%d",
+- mode, pps_active ? 'y' : 'n', requested_vbus_voltage, ret);
++ "set_auto_vbus_discharge_threshold mode:%d pps_active:%c vbus:%u pps_apdo_min_volt:%u ret:%d",
++ mode, pps_active ? 'y' : 'n', requested_vbus_voltage,
++ port->pps_data.min_volt, ret);
+
+ return ret;
+ }
+@@ -4757,7 +4759,7 @@ static void run_state_machine(struct tcpm_port *port)
+ port->caps_count = 0;
+ port->pd_capable = true;
+ tcpm_set_state_cond(port, SRC_SEND_CAPABILITIES_TIMEOUT,
+- PD_T_SEND_SOURCE_CAP);
++ PD_T_SENDER_RESPONSE);
+ }
+ break;
+ case SRC_SEND_CAPABILITIES_TIMEOUT:
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/dss-of.c b/drivers/video/fbdev/omap2/omapfb/dss/dss-of.c
+index d5a43b3bf45ec9..c46108a16a9dd3 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/dss-of.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/dss-of.c
+@@ -102,6 +102,7 @@ struct device_node *dss_of_port_get_parent_device(struct device_node *port)
+ np = of_get_next_parent(np);
+ }
+
++ of_node_put(np);
+ return NULL;
+ }
+
+diff --git a/drivers/watchdog/rti_wdt.c b/drivers/watchdog/rti_wdt.c
+index 563d842014dfba..cc239251e19383 100644
+--- a/drivers/watchdog/rti_wdt.c
++++ b/drivers/watchdog/rti_wdt.c
+@@ -301,6 +301,7 @@ static int rti_wdt_probe(struct platform_device *pdev)
+ node = of_parse_phandle(pdev->dev.of_node, "memory-region", 0);
+ if (node) {
+ ret = of_address_to_resource(node, 0, &res);
++ of_node_put(node);
+ if (ret) {
+ dev_err(dev, "No memory address assigned to the region.\n");
+ goto err_iomap;
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index ada363af5aab8e..50edd1cae28ace 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -1472,7 +1472,12 @@ static int afs_rmdir(struct inode *dir, struct dentry *dentry)
+ op->file[1].vnode = vnode;
+ }
+
+- return afs_do_sync_operation(op);
++ ret = afs_do_sync_operation(op);
++
++ /* Not all systems that can host afs servers have ENOTEMPTY. */
++ if (ret == -EEXIST)
++ ret = -ENOTEMPTY;
++ return ret;
+
+ error:
+ return afs_put_operation(op);
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index c9d620175e80ca..d9760b2a8d8de4 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -1346,6 +1346,15 @@ extern void afs_send_simple_reply(struct afs_call *, const void *, size_t);
+ extern int afs_extract_data(struct afs_call *, bool);
+ extern int afs_protocol_error(struct afs_call *, enum afs_eproto_cause);
+
++static inline void afs_see_call(struct afs_call *call, enum afs_call_trace why)
++{
++ int r = refcount_read(&call->ref);
++
++ trace_afs_call(call->debug_id, why, r,
++ atomic_read(&call->net->nr_outstanding_calls),
++ __builtin_return_address(0));
++}
++
+ static inline void afs_make_op_call(struct afs_operation *op, struct afs_call *call,
+ gfp_t gfp)
+ {
+diff --git a/fs/afs/rxrpc.c b/fs/afs/rxrpc.c
+index 9f2a3bb56ec69e..a122c6366ce19f 100644
+--- a/fs/afs/rxrpc.c
++++ b/fs/afs/rxrpc.c
+@@ -430,11 +430,16 @@ void afs_make_call(struct afs_call *call, gfp_t gfp)
+ return;
+
+ error_do_abort:
+- if (ret != -ECONNABORTED) {
++ if (ret != -ECONNABORTED)
+ rxrpc_kernel_abort_call(call->net->socket, rxcall,
+ RX_USER_ABORT, ret,
+ afs_abort_send_data_error);
+- } else {
++ if (call->async) {
++ afs_see_call(call, afs_call_trace_async_abort);
++ return;
++ }
++
++ if (ret == -ECONNABORTED) {
+ len = 0;
+ iov_iter_kvec(&msg.msg_iter, ITER_DEST, NULL, 0, 0);
+ rxrpc_kernel_recv_data(call->net->socket, rxcall,
+@@ -445,6 +450,8 @@ void afs_make_call(struct afs_call *call, gfp_t gfp)
+ call->error = ret;
+ trace_afs_call_done(call);
+ error_kill_call:
++ if (call->async)
++ afs_see_call(call, afs_call_trace_async_kill);
+ if (call->type->done)
+ call->type->done(call);
+
+@@ -602,7 +609,6 @@ static void afs_deliver_to_call(struct afs_call *call)
+ abort_code = 0;
+ call_complete:
+ afs_set_call_complete(call, ret, remote_abort);
+- state = AFS_CALL_COMPLETE;
+ goto done;
+ }
+
+diff --git a/fs/afs/xdr_fs.h b/fs/afs/xdr_fs.h
+index 8ca8681645077d..cc5f143d21a347 100644
+--- a/fs/afs/xdr_fs.h
++++ b/fs/afs/xdr_fs.h
+@@ -88,7 +88,7 @@ union afs_xdr_dir_block {
+
+ struct {
+ struct afs_xdr_dir_hdr hdr;
+- u8 alloc_ctrs[AFS_DIR_MAX_BLOCKS];
++ u8 alloc_ctrs[AFS_DIR_BLOCKS_WITH_CTR];
+ __be16 hashtable[AFS_DIR_HASHTBL_SIZE];
+ } meta;
+
+diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
+index 024227aba4cd5f..362845f9aaaefa 100644
+--- a/fs/afs/yfsclient.c
++++ b/fs/afs/yfsclient.c
+@@ -666,8 +666,9 @@ static int yfs_deliver_fs_remove_file2(struct afs_call *call)
+ static void yfs_done_fs_remove_file2(struct afs_call *call)
+ {
+ if (call->error == -ECONNABORTED &&
+- call->abort_code == RX_INVALID_OPERATION) {
+- set_bit(AFS_SERVER_FL_NO_RM2, &call->server->flags);
++ (call->abort_code == RX_INVALID_OPERATION ||
++ call->abort_code == RXGEN_OPCODE)) {
++ set_bit(AFS_SERVER_FL_NO_RM2, &call->op->server->flags);
+ call->op->flags |= AFS_OPERATION_DOWNGRADE;
+ }
+ }
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index a3c861b2a6d25d..9d9ce308488dd3 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2001,6 +2001,53 @@ static int can_nocow_file_extent(struct btrfs_path *path,
+ return ret < 0 ? ret : can_nocow;
+ }
+
++/*
++ * Cleanup the dirty folios which will never be submitted due to error.
++ *
++ * When running a delalloc range, we may need to split the ranges (due to
++ * fragmentation or NOCOW). If we hit an error in the later part, we will error
++ * out and previously successfully executed range will never be submitted, thus
++ * we have to cleanup those folios by clearing their dirty flag, starting and
++ * finishing the writeback.
++ */
++static void cleanup_dirty_folios(struct btrfs_inode *inode,
++ struct folio *locked_folio,
++ u64 start, u64 end, int error)
++{
++ struct btrfs_fs_info *fs_info = inode->root->fs_info;
++ struct address_space *mapping = inode->vfs_inode.i_mapping;
++ pgoff_t start_index = start >> PAGE_SHIFT;
++ pgoff_t end_index = end >> PAGE_SHIFT;
++ u32 len;
++
++ ASSERT(end + 1 - start < U32_MAX);
++ ASSERT(IS_ALIGNED(start, fs_info->sectorsize) &&
++ IS_ALIGNED(end + 1, fs_info->sectorsize));
++ len = end + 1 - start;
++
++ /*
++ * Handle the locked folio first.
++ * The btrfs_folio_clamp_*() helpers can handle ranges outside the folio.
++ */
++ btrfs_folio_clamp_finish_io(fs_info, locked_folio, start, len);
++
++ for (pgoff_t index = start_index; index <= end_index; index++) {
++ struct folio *folio;
++
++ /* Already handled at the beginning. */
++ if (index == locked_folio->index)
++ continue;
++ folio = __filemap_get_folio(mapping, index, FGP_LOCK, GFP_NOFS);
++ /* Cache already dropped, no need to do any cleanup. */
++ if (IS_ERR(folio))
++ continue;
++ btrfs_folio_clamp_finish_io(fs_info, folio, start, len);
++ folio_unlock(folio);
++ folio_put(folio);
++ }
++ mapping_set_error(mapping, error);
++}
++
+ /*
+ * The nocow writeback callback. This checks for snapshots or COW copies
+ * of the extents that exist in the file, and COWs the file as required.
+@@ -2016,6 +2063,11 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
+ struct btrfs_root *root = inode->root;
+ struct btrfs_path *path;
+ u64 cow_start = (u64)-1;
++ /*
++ * If not 0, represents the inclusive end of the last fallback_to_cow()
++ * range. Only for error handling.
++ */
++ u64 cow_end = 0;
+ u64 cur_offset = start;
+ int ret;
+ bool check_prev = true;
+@@ -2176,6 +2228,7 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
+ found_key.offset - 1);
+ cow_start = (u64)-1;
+ if (ret) {
++ cow_end = found_key.offset - 1;
+ btrfs_dec_nocow_writers(nocow_bg);
+ goto error;
+ }
+@@ -2249,24 +2302,54 @@ static noinline int run_delalloc_nocow(struct btrfs_inode *inode,
+ cow_start = cur_offset;
+
+ if (cow_start != (u64)-1) {
+- cur_offset = end;
+ ret = fallback_to_cow(inode, locked_folio, cow_start, end);
+ cow_start = (u64)-1;
+- if (ret)
++ if (ret) {
++ cow_end = end;
+ goto error;
++ }
+ }
+
+ btrfs_free_path(path);
+ return 0;
+
+ error:
++ /*
++ * There are several error cases:
++ *
++ * 1) Failed without falling back to COW
++ * start cur_offset end
++ * |/////////////| |
++ *
++ * For range [start, cur_offset) the folios are already unlocked (except
++ * @locked_folio), EXTENT_DELALLOC already removed.
++ * Only need to clear the dirty flag as they will never be submitted.
++ * Ordered extent and extent maps are handled by
++ * btrfs_mark_ordered_io_finished() inside run_delalloc_range().
++ *
++ * 2) Failed with error from fallback_to_cow()
++ * start cur_offset cow_end end
++ * |/////////////|-----------| |
++ *
++ * For range [start, cur_offset) it's the same as case 1).
++ * But for range [cur_offset, cow_end), the folios have dirty flag
++ * cleared and unlocked, EXTENT_DELALLOC cleared by cow_file_range().
++ *
++ * Thus we should not call extent_clear_unlock_delalloc() on range
++ * [cur_offset, cow_end), as the folios are already unlocked.
++ *
++ * So clear the folio dirty flags for [start, cur_offset) first.
++ */
++ if (cur_offset > start)
++ cleanup_dirty_folios(inode, locked_folio, start, cur_offset - 1, ret);
++
+ /*
+ * If an error happened while a COW region is outstanding, cur_offset
+- * needs to be reset to cow_start to ensure the COW region is unlocked
+- * as well.
++ * needs to be reset to @cow_end + 1 to skip the COW range, as
++ * cow_file_range() will do the proper cleanup on error.
+ */
+- if (cow_start != (u64)-1)
+- cur_offset = cow_start;
++ if (cow_end)
++ cur_offset = cow_end + 1;
+
+ /*
+ * We need to lock the extent here because we're clearing DELALLOC and
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index e70ed857fc743b..4fcd6cd4c1c244 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1839,9 +1839,19 @@ int btrfs_remove_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ * Thus its reserved space should all be zero, no matter if qgroup
+ * is consistent or the mode.
+ */
+- WARN_ON(qgroup->rsv.values[BTRFS_QGROUP_RSV_DATA] ||
+- qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC] ||
+- qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS]);
++ if (qgroup->rsv.values[BTRFS_QGROUP_RSV_DATA] ||
++ qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC] ||
++ qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS]) {
++ WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
++ btrfs_warn_rl(fs_info,
++"to be deleted qgroup %u/%llu has non-zero numbers, data %llu meta prealloc %llu meta pertrans %llu",
++ btrfs_qgroup_level(qgroup->qgroupid),
++ btrfs_qgroup_subvolid(qgroup->qgroupid),
++ qgroup->rsv.values[BTRFS_QGROUP_RSV_DATA],
++ qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PREALLOC],
++ qgroup->rsv.values[BTRFS_QGROUP_RSV_META_PERTRANS]);
++
++ }
+ /*
+ * The same for rfer/excl numbers, but that's only if our qgroup is
+ * consistent and if it's in regular qgroup mode.
+@@ -1850,8 +1860,9 @@ int btrfs_remove_qgroup(struct btrfs_trans_handle *trans, u64 qgroupid)
+ */
+ if (btrfs_qgroup_mode(fs_info) == BTRFS_QGROUP_MODE_FULL &&
+ !(fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_INCONSISTENT)) {
+- if (WARN_ON(qgroup->rfer || qgroup->excl ||
+- qgroup->rfer_cmpr || qgroup->excl_cmpr)) {
++ if (qgroup->rfer || qgroup->excl ||
++ qgroup->rfer_cmpr || qgroup->excl_cmpr) {
++ WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG));
+ btrfs_warn_rl(fs_info,
+ "to be deleted qgroup %u/%llu has non-zero numbers, rfer %llu rfer_cmpr %llu excl %llu excl_cmpr %llu",
+ btrfs_qgroup_level(qgroup->qgroupid),
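The rewritten checks trade an unconditional WARN_ON() for a rate-limited log message plus a backtrace that only fires on debug builds: WARN_ON(IS_ENABLED(CONFIG_BTRFS_DEBUG)) compiles to WARN_ON(1) when the option is set and to WARN_ON(0) otherwise. A minimal sketch of the idiom, with a hypothetical config symbol and counter:

    if (leaked_bytes) {
            /* Always leave a breadcrumb in the log... */
            pr_warn_ratelimited("myfs: %llu bytes leaked\n", leaked_bytes);
            /* ...but only splat a backtrace on debug builds. */
            WARN_ON(IS_ENABLED(CONFIG_MYFS_DEBUG));
    }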
+diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
+index fe4d719d506bf5..ec7328a6bfd755 100644
+--- a/fs/btrfs/subpage.c
++++ b/fs/btrfs/subpage.c
+@@ -868,6 +868,7 @@ void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info,
+ unsigned long writeback_bitmap;
+ unsigned long ordered_bitmap;
+ unsigned long checked_bitmap;
++ unsigned long locked_bitmap;
+ unsigned long flags;
+
+ ASSERT(folio_test_private(folio) && folio_get_private(folio));
+@@ -880,15 +881,16 @@ void __cold btrfs_subpage_dump_bitmap(const struct btrfs_fs_info *fs_info,
+ GET_SUBPAGE_BITMAP(subpage, fs_info, writeback, &writeback_bitmap);
+ GET_SUBPAGE_BITMAP(subpage, fs_info, ordered, &ordered_bitmap);
+ GET_SUBPAGE_BITMAP(subpage, fs_info, checked, &checked_bitmap);
+- GET_SUBPAGE_BITMAP(subpage, fs_info, locked, &checked_bitmap);
++ GET_SUBPAGE_BITMAP(subpage, fs_info, locked, &locked_bitmap);
+ spin_unlock_irqrestore(&subpage->lock, flags);
+
+ dump_page(folio_page(folio, 0), "btrfs subpage dump");
+ btrfs_warn(fs_info,
+-"start=%llu len=%u page=%llu, bitmaps uptodate=%*pbl dirty=%*pbl writeback=%*pbl ordered=%*pbl checked=%*pbl",
++"start=%llu len=%u page=%llu, bitmaps uptodate=%*pbl dirty=%*pbl locked=%*pbl writeback=%*pbl ordered=%*pbl checked=%*pbl",
+ start, len, folio_pos(folio),
+ sectors_per_page, &uptodate_bitmap,
+ sectors_per_page, &dirty_bitmap,
++ sectors_per_page, &locked_bitmap,
+ sectors_per_page, &writeback_bitmap,
+ sectors_per_page, &ordered_bitmap,
+ sectors_per_page, &checked_bitmap);
+diff --git a/fs/btrfs/subpage.h b/fs/btrfs/subpage.h
+index 4b85d91d0e18b0..cdb554e0d215e2 100644
+--- a/fs/btrfs/subpage.h
++++ b/fs/btrfs/subpage.h
+@@ -152,6 +152,19 @@ DECLARE_BTRFS_SUBPAGE_OPS(writeback);
+ DECLARE_BTRFS_SUBPAGE_OPS(ordered);
+ DECLARE_BTRFS_SUBPAGE_OPS(checked);
+
++/*
++ * Helper for error cleanup, where a folio will have its dirty flag cleared,
++ * with writeback started and finished.
++ */
++static inline void btrfs_folio_clamp_finish_io(struct btrfs_fs_info *fs_info,
++ struct folio *locked_folio,
++ u64 start, u32 len)
++{
++ btrfs_folio_clamp_clear_dirty(fs_info, locked_folio, start, len);
++ btrfs_folio_clamp_set_writeback(fs_info, locked_folio, start, len);
++ btrfs_folio_clamp_clear_writeback(fs_info, locked_folio, start, len);
++}
++
+ bool btrfs_subpage_clear_and_test_dirty(const struct btrfs_fs_info *fs_info,
+ struct folio *folio, u64 start, u32 len);
+
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 8292e488d3d777..73343503ea60e4 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -972,7 +972,7 @@ static int btrfs_fill_super(struct super_block *sb,
+
+ err = open_ctree(sb, fs_devices);
+ if (err) {
+- btrfs_err(fs_info, "open_ctree failed");
++ btrfs_err(fs_info, "open_ctree failed: %d", err);
+ return err;
+ }
+
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index dddedaef5e93dd..0c01e4423ee2a8 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -824,9 +824,12 @@ static int find_rsb_dir(struct dlm_ls *ls, const void *name, int len,
+ r->res_first_lkid = 0;
+ }
+
+- /* A dir record will not be on the scan list. */
+- if (r->res_dir_nodeid != our_nodeid)
+- del_scan(ls, r);
++ /* We always deactivate the scan timer for the rsb when
++ * we move it out of the inactive state, because the rsb
++ * state can change and scan timers are only for inactive
++ * rsbs.
++ */
++ del_scan(ls, r);
+ list_move(&r->res_slow_list, &ls->ls_slow_active);
+ rsb_clear_flag(r, RSB_INACTIVE);
+ kref_init(&r->res_ref); /* ref is now used in active state */
+@@ -989,10 +992,10 @@ static int find_rsb_nodir(struct dlm_ls *ls, const void *name, int len,
+ r->res_nodeid = 0;
+ }
+
++ del_scan(ls, r);
+ list_move(&r->res_slow_list, &ls->ls_slow_active);
+ rsb_clear_flag(r, RSB_INACTIVE);
+ kref_init(&r->res_ref);
+- del_scan(ls, r);
+ write_unlock_bh(&ls->ls_rsbtbl_lock);
+
+ goto out;
+@@ -1337,9 +1340,13 @@ static int _dlm_master_lookup(struct dlm_ls *ls, int from_nodeid, const char *na
+ __dlm_master_lookup(ls, r, our_nodeid, from_nodeid, true, flags,
+ r_nodeid, result);
+
+- /* A dir record rsb should never be on scan list. */
+- /* Try to fix this with del_scan? */
+- WARN_ON(!list_empty(&r->res_scan_list));
++ /* A dir record rsb should never be on the scan list,
++ * except when we are both the dir and the master node.
++ * This function should only be called by the dir
++ * node.
++ */
++ WARN_ON(!list_empty(&r->res_scan_list) &&
++ r->res_master_nodeid != our_nodeid);
+
+ write_unlock_bh(&ls->ls_rsbtbl_lock);
+
+@@ -1430,16 +1437,23 @@ static void deactivate_rsb(struct kref *kref)
+ list_move(&r->res_slow_list, &ls->ls_slow_inactive);
+
+ /*
+- * When the rsb becomes unused:
+- * - If it's not a dir record for a remote master rsb,
+- * then it is put on the scan list to be freed.
+- * - If it's a dir record for a remote master rsb,
+- * then it is kept in the inactive state until
+- * receive_remove() from the master node.
++ * When the rsb becomes unused, there are two possibilities:
++ * 1. Leave the inactive rsb in place (don't remove it).
++ * 2. Add it to the scan list to be removed.
++ *
++ * 1 is done when the rsb is acting as the dir record
++ * for a remotely mastered rsb. The rsb must be left
++ * in place as an inactive rsb to act as the dir record.
++ *
++ * 2 is done when a) the rsb is not the master and not the
++ * dir record, b) when the rsb is both the master and the
++ * dir record, c) when the rsb is master but not dir record.
++ *
++ * (If no directory is used, the rsb can always be removed.)
+ */
+- if (!dlm_no_directory(ls) &&
+- (r->res_master_nodeid != our_nodeid) &&
+- (dlm_dir_nodeid(r) != our_nodeid))
++ if (dlm_no_directory(ls) ||
++ (r->res_master_nodeid == our_nodeid ||
++ dlm_dir_nodeid(r) != our_nodeid))
+ add_scan(ls, r);
+
+ if (r->res_lvbptr) {
+diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
+index cb3a10b041c278..f2d88a3581695a 100644
+--- a/fs/dlm/lowcomms.c
++++ b/fs/dlm/lowcomms.c
+@@ -462,7 +462,8 @@ static bool dlm_lowcomms_con_has_addr(const struct connection *con,
+ int dlm_lowcomms_addr(int nodeid, struct sockaddr_storage *addr)
+ {
+ struct connection *con;
+- bool ret, idx;
++ bool ret;
++ int idx;
+
+ idx = srcu_read_lock(&connections_srcu);
+ con = nodeid2con(nodeid, GFP_NOFS);
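The type change is not cosmetic: srcu_read_lock() returns an int index identifying the reader epoch, and the same value must be passed back to srcu_read_unlock(); storing it in a bool collapses every index to 0 or 1 and can unbalance the SRCU counters. The usual shape, as a sketch:

    #include <linux/srcu.h>

    DEFINE_SRCU(example_srcu);

    static void reader(void)
    {
            int idx;        /* must be int, never bool */

            idx = srcu_read_lock(&example_srcu);
            /* ... dereference SRCU-protected data here ... */
            srcu_read_unlock(&example_srcu, idx);
    }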
+diff --git a/fs/erofs/internal.h b/fs/erofs/internal.h
+index 77e785a6dfa7ff..edbabb3256c9ac 100644
+--- a/fs/erofs/internal.h
++++ b/fs/erofs/internal.h
+@@ -205,12 +205,6 @@ enum {
+ EROFS_ZIP_CACHE_READAROUND
+ };
+
+-/* basic unit of the workstation of a super_block */
+-struct erofs_workgroup {
+- pgoff_t index;
+- struct lockref lockref;
+-};
+-
+ enum erofs_kmap_type {
+ EROFS_NO_KMAP, /* don't map the buffer */
+ EROFS_KMAP, /* use kmap_local_page() to map the buffer */
+@@ -452,20 +446,15 @@ static inline void erofs_pagepool_add(struct page **pagepool, struct page *page)
+ void erofs_release_pages(struct page **pagepool);
+
+ #ifdef CONFIG_EROFS_FS_ZIP
+-void erofs_workgroup_put(struct erofs_workgroup *grp);
+-struct erofs_workgroup *erofs_find_workgroup(struct super_block *sb,
+- pgoff_t index);
+-struct erofs_workgroup *erofs_insert_workgroup(struct super_block *sb,
+- struct erofs_workgroup *grp);
+-void erofs_workgroup_free_rcu(struct erofs_workgroup *grp);
++extern atomic_long_t erofs_global_shrink_cnt;
+ void erofs_shrinker_register(struct super_block *sb);
+ void erofs_shrinker_unregister(struct super_block *sb);
+ int __init erofs_init_shrinker(void);
+ void erofs_exit_shrinker(void);
+ int __init z_erofs_init_subsystem(void);
+ void z_erofs_exit_subsystem(void);
+-int erofs_try_to_free_all_cached_folios(struct erofs_sb_info *sbi,
+- struct erofs_workgroup *egrp);
++unsigned long z_erofs_shrink_scan(struct erofs_sb_info *sbi,
++ unsigned long nr_shrink);
+ int z_erofs_map_blocks_iter(struct inode *inode, struct erofs_map_blocks *map,
+ int flags);
+ void *z_erofs_get_gbuf(unsigned int requiredpages);
+diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
+index 1a00f061798a3c..a8fb4b525f5443 100644
+--- a/fs/erofs/zdata.c
++++ b/fs/erofs/zdata.c
+@@ -44,12 +44,15 @@ __Z_EROFS_BVSET(z_erofs_bvset_inline, Z_EROFS_INLINE_BVECS);
+ * A: Field should be accessed / updated in atomic for parallelized code.
+ */
+ struct z_erofs_pcluster {
+- struct erofs_workgroup obj;
+ struct mutex lock;
++ struct lockref lockref;
+
+ /* A: point to next chained pcluster or TAILs */
+ z_erofs_next_pcluster_t next;
+
++ /* I: start block address of this pcluster */
++ erofs_off_t index;
++
+ /* L: the maximum decompression size of this round */
+ unsigned int length;
+
+@@ -108,7 +111,7 @@ struct z_erofs_decompressqueue {
+
+ static inline bool z_erofs_is_inline_pcluster(struct z_erofs_pcluster *pcl)
+ {
+- return !pcl->obj.index;
++ return !pcl->index;
+ }
+
+ static inline unsigned int z_erofs_pclusterpages(struct z_erofs_pcluster *pcl)
+@@ -548,7 +551,7 @@ static void z_erofs_bind_cache(struct z_erofs_decompress_frontend *fe)
+ if (READ_ONCE(pcl->compressed_bvecs[i].page))
+ continue;
+
+- page = find_get_page(mc, pcl->obj.index + i);
++ page = find_get_page(mc, pcl->index + i);
+ if (!page) {
+ /* I/O is needed, not possible to decompress directly */
+ standalone = false;
+@@ -564,13 +567,13 @@ static void z_erofs_bind_cache(struct z_erofs_decompress_frontend *fe)
+ continue;
+ set_page_private(newpage, Z_EROFS_PREALLOCATED_PAGE);
+ }
+- spin_lock(&pcl->obj.lockref.lock);
++ spin_lock(&pcl->lockref.lock);
+ if (!pcl->compressed_bvecs[i].page) {
+ pcl->compressed_bvecs[i].page = page ? page : newpage;
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+ continue;
+ }
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+
+ if (page)
+ put_page(page);
+@@ -587,11 +590,9 @@ static void z_erofs_bind_cache(struct z_erofs_decompress_frontend *fe)
+ }
+
+ /* (erofs_shrinker) disconnect cached encoded data with pclusters */
+-int erofs_try_to_free_all_cached_folios(struct erofs_sb_info *sbi,
+- struct erofs_workgroup *grp)
++static int erofs_try_to_free_all_cached_folios(struct erofs_sb_info *sbi,
++ struct z_erofs_pcluster *pcl)
+ {
+- struct z_erofs_pcluster *const pcl =
+- container_of(grp, struct z_erofs_pcluster, obj);
+ unsigned int pclusterpages = z_erofs_pclusterpages(pcl);
+ struct folio *folio;
+ int i;
+@@ -626,8 +627,8 @@ static bool z_erofs_cache_release_folio(struct folio *folio, gfp_t gfp)
+ return true;
+
+ ret = false;
+- spin_lock(&pcl->obj.lockref.lock);
+- if (pcl->obj.lockref.count <= 0) {
++ spin_lock(&pcl->lockref.lock);
++ if (pcl->lockref.count <= 0) {
+ DBG_BUGON(z_erofs_is_inline_pcluster(pcl));
+ for (; bvec < end; ++bvec) {
+ if (bvec->page && page_folio(bvec->page) == folio) {
+@@ -638,7 +639,7 @@ static bool z_erofs_cache_release_folio(struct folio *folio, gfp_t gfp)
+ }
+ }
+ }
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+ return ret;
+ }
+
+@@ -689,15 +690,15 @@ static int z_erofs_attach_page(struct z_erofs_decompress_frontend *fe,
+
+ if (exclusive) {
+ /* give priority for inplaceio to use file pages first */
+- spin_lock(&pcl->obj.lockref.lock);
++ spin_lock(&pcl->lockref.lock);
+ while (fe->icur > 0) {
+ if (pcl->compressed_bvecs[--fe->icur].page)
+ continue;
+ pcl->compressed_bvecs[fe->icur] = *bvec;
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+ return 0;
+ }
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+
+ /* otherwise, check if it can be used as a bvpage */
+ if (fe->mode >= Z_EROFS_PCLUSTER_FOLLOWED &&
+@@ -710,13 +711,30 @@ static int z_erofs_attach_page(struct z_erofs_decompress_frontend *fe,
+ return ret;
+ }
+
++static bool z_erofs_get_pcluster(struct z_erofs_pcluster *pcl)
++{
++ if (lockref_get_not_zero(&pcl->lockref))
++ return true;
++
++ spin_lock(&pcl->lockref.lock);
++ if (__lockref_is_dead(&pcl->lockref)) {
++ spin_unlock(&pcl->lockref.lock);
++ return false;
++ }
++
++ if (!pcl->lockref.count++)
++ atomic_long_dec(&erofs_global_shrink_cnt);
++ spin_unlock(&pcl->lockref.lock);
++ return true;
++}
++
+ static int z_erofs_register_pcluster(struct z_erofs_decompress_frontend *fe)
+ {
+ struct erofs_map_blocks *map = &fe->map;
+ struct super_block *sb = fe->inode->i_sb;
++ struct erofs_sb_info *sbi = EROFS_SB(sb);
+ bool ztailpacking = map->m_flags & EROFS_MAP_META;
+- struct z_erofs_pcluster *pcl;
+- struct erofs_workgroup *grp;
++ struct z_erofs_pcluster *pcl, *pre;
+ int err;
+
+ if (!(map->m_flags & EROFS_MAP_ENCODED) ||
+@@ -730,8 +748,8 @@ static int z_erofs_register_pcluster(struct z_erofs_decompress_frontend *fe)
+ if (IS_ERR(pcl))
+ return PTR_ERR(pcl);
+
+- spin_lock_init(&pcl->obj.lockref.lock);
+- pcl->obj.lockref.count = 1; /* one ref for this request */
++ spin_lock_init(&pcl->lockref.lock);
++ pcl->lockref.count = 1; /* one ref for this request */
+ pcl->algorithmformat = map->m_algorithmformat;
+ pcl->length = 0;
+ pcl->partial = true;
+@@ -749,19 +767,26 @@ static int z_erofs_register_pcluster(struct z_erofs_decompress_frontend *fe)
+ DBG_BUGON(!mutex_trylock(&pcl->lock));
+
+ if (ztailpacking) {
+- pcl->obj.index = 0; /* which indicates ztailpacking */
++ pcl->index = 0; /* which indicates ztailpacking */
+ } else {
+- pcl->obj.index = erofs_blknr(sb, map->m_pa);
+-
+- grp = erofs_insert_workgroup(fe->inode->i_sb, &pcl->obj);
+- if (IS_ERR(grp)) {
+- err = PTR_ERR(grp);
+- goto err_out;
++ pcl->index = erofs_blknr(sb, map->m_pa);
++ while (1) {
++ xa_lock(&sbi->managed_pslots);
++ pre = __xa_cmpxchg(&sbi->managed_pslots, pcl->index,
++ NULL, pcl, GFP_KERNEL);
++ if (!pre || xa_is_err(pre) || z_erofs_get_pcluster(pre)) {
++ xa_unlock(&sbi->managed_pslots);
++ break;
++ }
++ /* try to legitimize the current in-tree one */
++ xa_unlock(&sbi->managed_pslots);
++ cond_resched();
+ }
+-
+- if (grp != &pcl->obj) {
+- fe->pcl = container_of(grp,
+- struct z_erofs_pcluster, obj);
++ if (xa_is_err(pre)) {
++ err = xa_err(pre);
++ goto err_out;
++ } else if (pre) {
++ fe->pcl = pre;
+ err = -EEXIST;
+ goto err_out;
+ }
+@@ -781,7 +806,7 @@ static int z_erofs_pcluster_begin(struct z_erofs_decompress_frontend *fe)
+ struct erofs_map_blocks *map = &fe->map;
+ struct super_block *sb = fe->inode->i_sb;
+ erofs_blk_t blknr = erofs_blknr(sb, map->m_pa);
+- struct erofs_workgroup *grp = NULL;
++ struct z_erofs_pcluster *pcl = NULL;
+ int ret;
+
+ DBG_BUGON(fe->pcl);
+@@ -789,14 +814,23 @@ static int z_erofs_pcluster_begin(struct z_erofs_decompress_frontend *fe)
+ DBG_BUGON(fe->owned_head == Z_EROFS_PCLUSTER_NIL);
+
+ if (!(map->m_flags & EROFS_MAP_META)) {
+- grp = erofs_find_workgroup(sb, blknr);
++ while (1) {
++ rcu_read_lock();
++ pcl = xa_load(&EROFS_SB(sb)->managed_pslots, blknr);
++ if (!pcl || z_erofs_get_pcluster(pcl)) {
++ DBG_BUGON(pcl && blknr != pcl->index);
++ rcu_read_unlock();
++ break;
++ }
++ rcu_read_unlock();
++ }
+ } else if ((map->m_pa & ~PAGE_MASK) + map->m_plen > PAGE_SIZE) {
+ DBG_BUGON(1);
+ return -EFSCORRUPTED;
+ }
+
+- if (grp) {
+- fe->pcl = container_of(grp, struct z_erofs_pcluster, obj);
++ if (pcl) {
++ fe->pcl = pcl;
+ ret = -EEXIST;
+ } else {
+ ret = z_erofs_register_pcluster(fe);
+@@ -851,12 +885,72 @@ static void z_erofs_rcu_callback(struct rcu_head *head)
+ struct z_erofs_pcluster, rcu));
+ }
+
+-void erofs_workgroup_free_rcu(struct erofs_workgroup *grp)
++static bool erofs_try_to_release_pcluster(struct erofs_sb_info *sbi,
++ struct z_erofs_pcluster *pcl)
+ {
+- struct z_erofs_pcluster *const pcl =
+- container_of(grp, struct z_erofs_pcluster, obj);
++ int free = false;
++
++ spin_lock(&pcl->lockref.lock);
++ if (pcl->lockref.count)
++ goto out;
++
++ /*
++ * Note that all cached folios should be detached before being deleted
++ * from the XArray. Otherwise some folios could still be attached to the
++ * orphan old pcluster when the new one is available in the tree.
++ */
++ if (erofs_try_to_free_all_cached_folios(sbi, pcl))
++ goto out;
++
++ /*
++ * It's impossible to fail after the pcluster is frozen, but in order
++ * to avoid some race conditions, add a DBG_BUGON to observe this.
++ */
++ DBG_BUGON(__xa_erase(&sbi->managed_pslots, pcl->index) != pcl);
++
++ lockref_mark_dead(&pcl->lockref);
++ free = true;
++out:
++ spin_unlock(&pcl->lockref.lock);
++ if (free) {
++ atomic_long_dec(&erofs_global_shrink_cnt);
++ call_rcu(&pcl->rcu, z_erofs_rcu_callback);
++ }
++ return free;
++}
++
++unsigned long z_erofs_shrink_scan(struct erofs_sb_info *sbi,
++ unsigned long nr_shrink)
++{
++ struct z_erofs_pcluster *pcl;
++ unsigned long index, freed = 0;
++
++ xa_lock(&sbi->managed_pslots);
++ xa_for_each(&sbi->managed_pslots, index, pcl) {
++ /* try to shrink each valid pcluster */
++ if (!erofs_try_to_release_pcluster(sbi, pcl))
++ continue;
++ xa_unlock(&sbi->managed_pslots);
++
++ ++freed;
++ if (!--nr_shrink)
++ return freed;
++ xa_lock(&sbi->managed_pslots);
++ }
++ xa_unlock(&sbi->managed_pslots);
++ return freed;
++}
++
++static void z_erofs_put_pcluster(struct z_erofs_pcluster *pcl)
++{
++ if (lockref_put_or_lock(&pcl->lockref))
++ return;
+
+- call_rcu(&pcl->rcu, z_erofs_rcu_callback);
++ DBG_BUGON(__lockref_is_dead(&pcl->lockref));
++ if (pcl->lockref.count == 1)
++ atomic_long_inc(&erofs_global_shrink_cnt);
++ --pcl->lockref.count;
++ spin_unlock(&pcl->lockref.lock);
+ }
+
+ static void z_erofs_pcluster_end(struct z_erofs_decompress_frontend *fe)
+@@ -877,7 +971,7 @@ static void z_erofs_pcluster_end(struct z_erofs_decompress_frontend *fe)
+ * any longer if the pcluster isn't hosted by ourselves.
+ */
+ if (fe->mode < Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE)
+- erofs_workgroup_put(&pcl->obj);
++ z_erofs_put_pcluster(pcl);
+
+ fe->pcl = NULL;
+ }
+@@ -1309,7 +1403,7 @@ static int z_erofs_decompress_queue(const struct z_erofs_decompressqueue *io,
+ if (z_erofs_is_inline_pcluster(be.pcl))
+ z_erofs_free_pcluster(be.pcl);
+ else
+- erofs_workgroup_put(&be.pcl->obj);
++ z_erofs_put_pcluster(be.pcl);
+ }
+ return err;
+ }
+@@ -1391,9 +1485,9 @@ static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
+ bvec->bv_offset = 0;
+ bvec->bv_len = PAGE_SIZE;
+ repeat:
+- spin_lock(&pcl->obj.lockref.lock);
++ spin_lock(&pcl->lockref.lock);
+ zbv = pcl->compressed_bvecs[nr];
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+ if (!zbv.page)
+ goto out_allocfolio;
+
+@@ -1455,23 +1549,23 @@ static void z_erofs_fill_bio_vec(struct bio_vec *bvec,
+ folio_put(folio);
+ out_allocfolio:
+ page = __erofs_allocpage(&f->pagepool, gfp, true);
+- spin_lock(&pcl->obj.lockref.lock);
++ spin_lock(&pcl->lockref.lock);
+ if (unlikely(pcl->compressed_bvecs[nr].page != zbv.page)) {
+ if (page)
+ erofs_pagepool_add(&f->pagepool, page);
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+ cond_resched();
+ goto repeat;
+ }
+ pcl->compressed_bvecs[nr].page = page ? page : ERR_PTR(-ENOMEM);
+- spin_unlock(&pcl->obj.lockref.lock);
++ spin_unlock(&pcl->lockref.lock);
+ bvec->bv_page = page;
+ if (!page)
+ return;
+ folio = page_folio(page);
+ out_tocache:
+ if (!tocache || bs != PAGE_SIZE ||
+- filemap_add_folio(mc, folio, pcl->obj.index + nr, gfp)) {
++ filemap_add_folio(mc, folio, pcl->index + nr, gfp)) {
+ /* turn into a temporary shortlived folio (1 ref) */
+ folio->private = (void *)Z_EROFS_SHORTLIVED_PAGE;
+ return;
+@@ -1603,7 +1697,7 @@ static void z_erofs_submit_queue(struct z_erofs_decompress_frontend *f,
+
+ /* no device id here, thus it will always succeed */
+ mdev = (struct erofs_map_dev) {
+- .m_pa = erofs_pos(sb, pcl->obj.index),
++ .m_pa = erofs_pos(sb, pcl->index),
+ };
+ (void)erofs_map_dev(sb, &mdev);
+
+diff --git a/fs/erofs/zutil.c b/fs/erofs/zutil.c
+index 37afe202484091..75704f58ecfa92 100644
+--- a/fs/erofs/zutil.c
++++ b/fs/erofs/zutil.c
+@@ -2,6 +2,7 @@
+ /*
+ * Copyright (C) 2018 HUAWEI, Inc.
+ * https://www.huawei.com/
++ * Copyright (C) 2024 Alibaba Cloud
+ */
+ #include "internal.h"
+
+@@ -19,13 +20,12 @@ static unsigned int z_erofs_gbuf_count, z_erofs_gbuf_nrpages,
+ module_param_named(global_buffers, z_erofs_gbuf_count, uint, 0444);
+ module_param_named(reserved_pages, z_erofs_rsv_nrpages, uint, 0444);
+
+-static atomic_long_t erofs_global_shrink_cnt; /* for all mounted instances */
+-/* protected by 'erofs_sb_list_lock' */
+-static unsigned int shrinker_run_no;
++atomic_long_t erofs_global_shrink_cnt; /* for all mounted instances */
+
+-/* protects the mounted 'erofs_sb_list' */
++/* protects `shrinker_run_no` and the mounted `erofs_sb_list` */
+ static DEFINE_SPINLOCK(erofs_sb_list_lock);
+ static LIST_HEAD(erofs_sb_list);
++static unsigned int shrinker_run_no;
+ static struct shrinker *erofs_shrinker_info;
+
+ static unsigned int z_erofs_gbuf_id(void)
+@@ -214,145 +214,6 @@ void erofs_release_pages(struct page **pagepool)
+ }
+ }
+
+-static bool erofs_workgroup_get(struct erofs_workgroup *grp)
+-{
+- if (lockref_get_not_zero(&grp->lockref))
+- return true;
+-
+- spin_lock(&grp->lockref.lock);
+- if (__lockref_is_dead(&grp->lockref)) {
+- spin_unlock(&grp->lockref.lock);
+- return false;
+- }
+-
+- if (!grp->lockref.count++)
+- atomic_long_dec(&erofs_global_shrink_cnt);
+- spin_unlock(&grp->lockref.lock);
+- return true;
+-}
+-
+-struct erofs_workgroup *erofs_find_workgroup(struct super_block *sb,
+- pgoff_t index)
+-{
+- struct erofs_sb_info *sbi = EROFS_SB(sb);
+- struct erofs_workgroup *grp;
+-
+-repeat:
+- rcu_read_lock();
+- grp = xa_load(&sbi->managed_pslots, index);
+- if (grp) {
+- if (!erofs_workgroup_get(grp)) {
+- /* prefer to relax rcu read side */
+- rcu_read_unlock();
+- goto repeat;
+- }
+-
+- DBG_BUGON(index != grp->index);
+- }
+- rcu_read_unlock();
+- return grp;
+-}
+-
+-struct erofs_workgroup *erofs_insert_workgroup(struct super_block *sb,
+- struct erofs_workgroup *grp)
+-{
+- struct erofs_sb_info *const sbi = EROFS_SB(sb);
+- struct erofs_workgroup *pre;
+-
+- DBG_BUGON(grp->lockref.count < 1);
+-repeat:
+- xa_lock(&sbi->managed_pslots);
+- pre = __xa_cmpxchg(&sbi->managed_pslots, grp->index,
+- NULL, grp, GFP_KERNEL);
+- if (pre) {
+- if (xa_is_err(pre)) {
+- pre = ERR_PTR(xa_err(pre));
+- } else if (!erofs_workgroup_get(pre)) {
+- /* try to legitimize the current in-tree one */
+- xa_unlock(&sbi->managed_pslots);
+- cond_resched();
+- goto repeat;
+- }
+- grp = pre;
+- }
+- xa_unlock(&sbi->managed_pslots);
+- return grp;
+-}
+-
+-static void __erofs_workgroup_free(struct erofs_workgroup *grp)
+-{
+- atomic_long_dec(&erofs_global_shrink_cnt);
+- erofs_workgroup_free_rcu(grp);
+-}
+-
+-void erofs_workgroup_put(struct erofs_workgroup *grp)
+-{
+- if (lockref_put_or_lock(&grp->lockref))
+- return;
+-
+- DBG_BUGON(__lockref_is_dead(&grp->lockref));
+- if (grp->lockref.count == 1)
+- atomic_long_inc(&erofs_global_shrink_cnt);
+- --grp->lockref.count;
+- spin_unlock(&grp->lockref.lock);
+-}
+-
+-static bool erofs_try_to_release_workgroup(struct erofs_sb_info *sbi,
+- struct erofs_workgroup *grp)
+-{
+- int free = false;
+-
+- spin_lock(&grp->lockref.lock);
+- if (grp->lockref.count)
+- goto out;
+-
+- /*
+- * Note that all cached pages should be detached before deleted from
+- * the XArray. Otherwise some cached pages could be still attached to
+- * the orphan old workgroup when the new one is available in the tree.
+- */
+- if (erofs_try_to_free_all_cached_folios(sbi, grp))
+- goto out;
+-
+- /*
+- * It's impossible to fail after the workgroup is freezed,
+- * however in order to avoid some race conditions, add a
+- * DBG_BUGON to observe this in advance.
+- */
+- DBG_BUGON(__xa_erase(&sbi->managed_pslots, grp->index) != grp);
+-
+- lockref_mark_dead(&grp->lockref);
+- free = true;
+-out:
+- spin_unlock(&grp->lockref.lock);
+- if (free)
+- __erofs_workgroup_free(grp);
+- return free;
+-}
+-
+-static unsigned long erofs_shrink_workstation(struct erofs_sb_info *sbi,
+- unsigned long nr_shrink)
+-{
+- struct erofs_workgroup *grp;
+- unsigned int freed = 0;
+- unsigned long index;
+-
+- xa_lock(&sbi->managed_pslots);
+- xa_for_each(&sbi->managed_pslots, index, grp) {
+- /* try to shrink each valid workgroup */
+- if (!erofs_try_to_release_workgroup(sbi, grp))
+- continue;
+- xa_unlock(&sbi->managed_pslots);
+-
+- ++freed;
+- if (!--nr_shrink)
+- return freed;
+- xa_lock(&sbi->managed_pslots);
+- }
+- xa_unlock(&sbi->managed_pslots);
+- return freed;
+-}
+-
+ void erofs_shrinker_register(struct super_block *sb)
+ {
+ struct erofs_sb_info *sbi = EROFS_SB(sb);
+@@ -369,8 +230,8 @@ void erofs_shrinker_unregister(struct super_block *sb)
+ struct erofs_sb_info *const sbi = EROFS_SB(sb);
+
+ mutex_lock(&sbi->umount_mutex);
+- /* clean up all remaining workgroups in memory */
+- erofs_shrink_workstation(sbi, ~0UL);
++ /* clean up all remaining pclusters in memory */
++ z_erofs_shrink_scan(sbi, ~0UL);
+
+ spin_lock(&erofs_sb_list_lock);
+ list_del(&sbi->list);
+@@ -418,9 +279,7 @@ static unsigned long erofs_shrink_scan(struct shrinker *shrink,
+
+ spin_unlock(&erofs_sb_list_lock);
+ sbi->shrinker_run_no = run_no;
+-
+- freed += erofs_shrink_workstation(sbi, nr - freed);
+-
++ freed += z_erofs_shrink_scan(sbi, nr - freed);
+ spin_lock(&erofs_sb_list_lock);
+ /* Get the next list element before we move this one */
+ p = p->next;
+diff --git a/fs/f2fs/dir.c b/fs/f2fs/dir.c
+index 47a5c806cf1628..54dd52de7269da 100644
+--- a/fs/f2fs/dir.c
++++ b/fs/f2fs/dir.c
+@@ -175,7 +175,8 @@ static unsigned long dir_block_index(unsigned int level,
+ static struct f2fs_dir_entry *find_in_block(struct inode *dir,
+ struct page *dentry_page,
+ const struct f2fs_filename *fname,
+- int *max_slots)
++ int *max_slots,
++ bool use_hash)
+ {
+ struct f2fs_dentry_block *dentry_blk;
+ struct f2fs_dentry_ptr d;
+@@ -183,7 +184,7 @@ static struct f2fs_dir_entry *find_in_block(struct inode *dir,
+ dentry_blk = (struct f2fs_dentry_block *)page_address(dentry_page);
+
+ make_dentry_ptr_block(dir, &d, dentry_blk);
+- return f2fs_find_target_dentry(&d, fname, max_slots);
++ return f2fs_find_target_dentry(&d, fname, max_slots, use_hash);
+ }
+
+ static inline int f2fs_match_name(const struct inode *dir,
+@@ -208,7 +209,8 @@ static inline int f2fs_match_name(const struct inode *dir,
+ }
+
+ struct f2fs_dir_entry *f2fs_find_target_dentry(const struct f2fs_dentry_ptr *d,
+- const struct f2fs_filename *fname, int *max_slots)
++ const struct f2fs_filename *fname, int *max_slots,
++ bool use_hash)
+ {
+ struct f2fs_dir_entry *de;
+ unsigned long bit_pos = 0;
+@@ -231,7 +233,7 @@ struct f2fs_dir_entry *f2fs_find_target_dentry(const struct f2fs_dentry_ptr *d,
+ continue;
+ }
+
+- if (de->hash_code == fname->hash) {
++ if (!use_hash || de->hash_code == fname->hash) {
+ res = f2fs_match_name(d->inode, fname,
+ d->filename[bit_pos],
+ le16_to_cpu(de->name_len));
+@@ -258,11 +260,12 @@ struct f2fs_dir_entry *f2fs_find_target_dentry(const struct f2fs_dentry_ptr *d,
+ static struct f2fs_dir_entry *find_in_level(struct inode *dir,
+ unsigned int level,
+ const struct f2fs_filename *fname,
+- struct page **res_page)
++ struct page **res_page,
++ bool use_hash)
+ {
+ int s = GET_DENTRY_SLOTS(fname->disk_name.len);
+ unsigned int nbucket, nblock;
+- unsigned int bidx, end_block;
++ unsigned int bidx, end_block, bucket_no;
+ struct page *dentry_page;
+ struct f2fs_dir_entry *de = NULL;
+ pgoff_t next_pgofs;
+@@ -272,8 +275,11 @@ static struct f2fs_dir_entry *find_in_level(struct inode *dir,
+ nbucket = dir_buckets(level, F2FS_I(dir)->i_dir_level);
+ nblock = bucket_blocks(level);
+
++ bucket_no = use_hash ? le32_to_cpu(fname->hash) % nbucket : 0;
++
++start_find_bucket:
+ bidx = dir_block_index(level, F2FS_I(dir)->i_dir_level,
+- le32_to_cpu(fname->hash) % nbucket);
++ bucket_no);
+ end_block = bidx + nblock;
+
+ while (bidx < end_block) {
+@@ -290,7 +296,7 @@ static struct f2fs_dir_entry *find_in_level(struct inode *dir,
+ }
+ }
+
+- de = find_in_block(dir, dentry_page, fname, &max_slots);
++ de = find_in_block(dir, dentry_page, fname, &max_slots, use_hash);
+ if (IS_ERR(de)) {
+ *res_page = ERR_CAST(de);
+ de = NULL;
+@@ -307,12 +313,18 @@ static struct f2fs_dir_entry *find_in_level(struct inode *dir,
+ bidx++;
+ }
+
+- if (!de && room && F2FS_I(dir)->chash != fname->hash) {
+- F2FS_I(dir)->chash = fname->hash;
+- F2FS_I(dir)->clevel = level;
+- }
++ if (de)
++ return de;
+
+- return de;
++ if (likely(use_hash)) {
++ if (room && F2FS_I(dir)->chash != fname->hash) {
++ F2FS_I(dir)->chash = fname->hash;
++ F2FS_I(dir)->clevel = level;
++ }
++ } else if (++bucket_no < nbucket) {
++ goto start_find_bucket;
++ }
++ return NULL;
+ }
+
+ struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir,
+@@ -323,11 +335,15 @@ struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir,
+ struct f2fs_dir_entry *de = NULL;
+ unsigned int max_depth;
+ unsigned int level;
++ bool use_hash = true;
+
+ *res_page = NULL;
+
++#if IS_ENABLED(CONFIG_UNICODE)
++start_find_entry:
++#endif
+ if (f2fs_has_inline_dentry(dir)) {
+- de = f2fs_find_in_inline_dir(dir, fname, res_page);
++ de = f2fs_find_in_inline_dir(dir, fname, res_page, use_hash);
+ goto out;
+ }
+
+@@ -343,11 +359,18 @@ struct f2fs_dir_entry *__f2fs_find_entry(struct inode *dir,
+ }
+
+ for (level = 0; level < max_depth; level++) {
+- de = find_in_level(dir, level, fname, res_page);
++ de = find_in_level(dir, level, fname, res_page, use_hash);
+ if (de || IS_ERR(*res_page))
+ break;
+ }
++
+ out:
++#if IS_ENABLED(CONFIG_UNICODE)
++ if (IS_CASEFOLDED(dir) && !de && use_hash) {
++ use_hash = false;
++ goto start_find_entry;
++ }
++#endif
+ /* This is to increase the speed of f2fs_create */
+ if (!de)
+ F2FS_I(dir)->task = current;
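The net effect on lookup: hashed probing stays the fast path, and only casefolded directories that miss fall back to a linear walk of every bucket with hash matching disabled, which tolerates entries whose stored hash was computed under a different casefolding rule. Condensed control flow of __f2fs_find_entry() after this patch, with do_lookup() as a hypothetical stand-in for the inline/level lookups above:

    struct f2fs_dir_entry *lookup_sketch(struct inode *dir,
                                         const struct f2fs_filename *fname,
                                         struct page **res_page)
    {
            struct f2fs_dir_entry *de;
            bool use_hash = true;
    retry:
            de = do_lookup(dir, fname, res_page, use_hash);
            if (!de && IS_CASEFOLDED(dir) && use_hash) {
                    use_hash = false;       /* stored hashes may be stale */
                    goto retry;             /* scan every bucket instead */
            }
            return de;
    }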
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index cec3dd205b3df8..b52df8aa95350e 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -3579,7 +3579,8 @@ int f2fs_prepare_lookup(struct inode *dir, struct dentry *dentry,
+ struct f2fs_filename *fname);
+ void f2fs_free_filename(struct f2fs_filename *fname);
+ struct f2fs_dir_entry *f2fs_find_target_dentry(const struct f2fs_dentry_ptr *d,
+- const struct f2fs_filename *fname, int *max_slots);
++ const struct f2fs_filename *fname, int *max_slots,
++ bool use_hash);
+ int f2fs_fill_dentries(struct dir_context *ctx, struct f2fs_dentry_ptr *d,
+ unsigned int start_pos, struct fscrypt_str *fstr);
+ void f2fs_do_make_empty_dir(struct inode *inode, struct inode *parent,
+@@ -4199,7 +4200,8 @@ int f2fs_write_inline_data(struct inode *inode, struct folio *folio);
+ int f2fs_recover_inline_data(struct inode *inode, struct page *npage);
+ struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
+ const struct f2fs_filename *fname,
+- struct page **res_page);
++ struct page **res_page,
++ bool use_hash);
+ int f2fs_make_empty_inline_dir(struct inode *inode, struct inode *parent,
+ struct page *ipage);
+ int f2fs_add_inline_entry(struct inode *dir, const struct f2fs_filename *fname,
+diff --git a/fs/f2fs/inline.c b/fs/f2fs/inline.c
+index 005babf1bed1e3..3b91a95d42764f 100644
+--- a/fs/f2fs/inline.c
++++ b/fs/f2fs/inline.c
+@@ -352,7 +352,8 @@ int f2fs_recover_inline_data(struct inode *inode, struct page *npage)
+
+ struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
+ const struct f2fs_filename *fname,
+- struct page **res_page)
++ struct page **res_page,
++ bool use_hash)
+ {
+ struct f2fs_sb_info *sbi = F2FS_SB(dir->i_sb);
+ struct f2fs_dir_entry *de;
+@@ -369,7 +370,7 @@ struct f2fs_dir_entry *f2fs_find_in_inline_dir(struct inode *dir,
+ inline_dentry = inline_data_addr(dir, ipage);
+
+ make_dentry_ptr_inline(dir, &d, inline_dentry);
+- de = f2fs_find_target_dentry(&d, fname, NULL);
++ de = f2fs_find_target_dentry(&d, fname, NULL, use_hash);
+ unlock_page(ipage);
+ if (IS_ERR(de)) {
+ *res_page = ERR_CAST(de);
+diff --git a/fs/file_table.c b/fs/file_table.c
+index eed5ffad9997c2..18735dc8269a10 100644
+--- a/fs/file_table.c
++++ b/fs/file_table.c
+@@ -125,7 +125,7 @@ static struct ctl_table fs_stat_sysctls[] = {
+ .data = &sysctl_nr_open,
+ .maxlen = sizeof(unsigned int),
+ .mode = 0644,
+- .proc_handler = proc_dointvec_minmax,
++ .proc_handler = proc_douintvec_minmax,
+ .extra1 = &sysctl_nr_open_min,
+ .extra2 = &sysctl_nr_open_max,
+ },
+diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c
+index 084f6ed2dd7a69..94f3cc42c74035 100644
+--- a/fs/hostfs/hostfs_kern.c
++++ b/fs/hostfs/hostfs_kern.c
+@@ -94,32 +94,17 @@ __uml_setup("hostfs=", hostfs_args,
+ static char *__dentry_name(struct dentry *dentry, char *name)
+ {
+ char *p = dentry_path_raw(dentry, name, PATH_MAX);
+- char *root;
+- size_t len;
+- struct hostfs_fs_info *fsi;
+-
+- fsi = dentry->d_sb->s_fs_info;
+- root = fsi->host_root_path;
+- len = strlen(root);
+- if (IS_ERR(p)) {
+- __putname(name);
+- return NULL;
+- }
+-
+- /*
+- * This function relies on the fact that dentry_path_raw() will place
+- * the path name at the end of the provided buffer.
+- */
+- BUG_ON(p + strlen(p) + 1 != name + PATH_MAX);
++ struct hostfs_fs_info *fsi = dentry->d_sb->s_fs_info;
++ char *root = fsi->host_root_path;
++ size_t len = strlen(root);
+
+- strscpy(name, root, PATH_MAX);
+- if (len > p - name) {
++ if (IS_ERR(p) || len > p - name) {
+ __putname(name);
+ return NULL;
+ }
+
+- if (p > name + len)
+- strcpy(name + len, p);
++ memcpy(name, root, len);
++ memmove(name + len, p, name + PATH_MAX - p);
+
+ return name;
+ }
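The rewrite leans on the documented behaviour of dentry_path_raw(): it composes the path at the end of the supplied buffer and returns a pointer p into it, so the host root can be copied to the front and the path slid down with memmove(), which (unlike strcpy()) is safe for the overlapping regions involved. A sketch of the buffer layout, assuming a PATH_MAX-sized buffer from __getname():

    /*
     *   name                               p
     *   |                                  |
     *   [ ........ unused ........ ]       [ "/dir/file\0" ] <- name + PATH_MAX
     *
     * memcpy(name, root, len);                       root to the front
     * memmove(name + len, p, name + PATH_MAX - p);   path (incl. NUL) down;
     *                                                regions may overlap
     */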
+diff --git a/fs/nfs/localio.c b/fs/nfs/localio.c
+index 637528e6368ef7..21b2b38fae9f3a 100644
+--- a/fs/nfs/localio.c
++++ b/fs/nfs/localio.c
+@@ -331,7 +331,7 @@ nfs_local_pgio_done(struct nfs_pgio_header *hdr, long status)
+ hdr->res.op_status = NFS4_OK;
+ hdr->task.tk_status = 0;
+ } else {
+- hdr->res.op_status = nfs4_stat_to_errno(status);
++ hdr->res.op_status = nfs_localio_errno_to_nfs4_stat(status);
+ hdr->task.tk_status = status;
+ }
+ }
+@@ -669,7 +669,7 @@ nfs_local_commit_done(struct nfs_commit_data *data, int status)
+ data->task.tk_status = 0;
+ } else {
+ nfs_reset_boot_verifier(data->inode);
+- data->res.op_status = nfs4_stat_to_errno(status);
++ data->res.op_status = nfs_localio_errno_to_nfs4_stat(status);
+ data->task.tk_status = status;
+ }
+ }
+diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c
+index 531c9c20ef1d1b..9f0d69e6526443 100644
+--- a/fs/nfs/nfs42proc.c
++++ b/fs/nfs/nfs42proc.c
+@@ -552,7 +552,7 @@ static int nfs42_do_offload_cancel_async(struct file *dst,
+ .rpc_message = &msg,
+ .callback_ops = &nfs42_offload_cancel_ops,
+ .workqueue = nfsiod_workqueue,
+- .flags = RPC_TASK_ASYNC,
++ .flags = RPC_TASK_ASYNC | RPC_TASK_MOVEABLE,
+ };
+ int status;
+
+diff --git a/fs/nfs/nfs42xdr.c b/fs/nfs/nfs42xdr.c
+index 9e3ae53e220583..becc3149aa9e5c 100644
+--- a/fs/nfs/nfs42xdr.c
++++ b/fs/nfs/nfs42xdr.c
+@@ -144,9 +144,11 @@
+ decode_putfh_maxsz + \
+ decode_offload_cancel_maxsz)
+ #define NFS4_enc_copy_notify_sz (compound_encode_hdr_maxsz + \
++ encode_sequence_maxsz + \
+ encode_putfh_maxsz + \
+ encode_copy_notify_maxsz)
+ #define NFS4_dec_copy_notify_sz (compound_decode_hdr_maxsz + \
++ decode_sequence_maxsz + \
+ decode_putfh_maxsz + \
+ decode_copy_notify_maxsz)
+ #define NFS4_enc_deallocate_sz (compound_encode_hdr_maxsz + \
+diff --git a/fs/nfs_common/common.c b/fs/nfs_common/common.c
+index 34a115176f97eb..af09aed09fd279 100644
+--- a/fs/nfs_common/common.c
++++ b/fs/nfs_common/common.c
+@@ -15,7 +15,7 @@ static const struct {
+ { NFS_OK, 0 },
+ { NFSERR_PERM, -EPERM },
+ { NFSERR_NOENT, -ENOENT },
+- { NFSERR_IO, -errno_NFSERR_IO},
++ { NFSERR_IO, -EIO },
+ { NFSERR_NXIO, -ENXIO },
+ /* { NFSERR_EAGAIN, -EAGAIN }, */
+ { NFSERR_ACCES, -EACCES },
+@@ -45,7 +45,6 @@ static const struct {
+ { NFSERR_SERVERFAULT, -EREMOTEIO },
+ { NFSERR_BADTYPE, -EBADTYPE },
+ { NFSERR_JUKEBOX, -EJUKEBOX },
+- { -1, -EIO }
+ };
+
+ /**
+@@ -59,26 +58,29 @@ int nfs_stat_to_errno(enum nfs_stat status)
+ {
+ int i;
+
+- for (i = 0; nfs_errtbl[i].stat != -1; i++) {
++ for (i = 0; i < ARRAY_SIZE(nfs_errtbl); i++) {
+ if (nfs_errtbl[i].stat == (int)status)
+ return nfs_errtbl[i].errno;
+ }
+- return nfs_errtbl[i].errno;
++ return -EIO;
+ }
+ EXPORT_SYMBOL_GPL(nfs_stat_to_errno);
+
+ /*
+ * We need to translate between nfs v4 status return values and
+ * the local errno values which may not be the same.
++ *
++ * nfs4_errtbl_common[] is consulted before the more specialized
++ * mappings available in nfs4_errtbl[] or nfs4_errtbl_localio[].
+ */
+ static const struct {
+ int stat;
+ int errno;
+-} nfs4_errtbl[] = {
++} nfs4_errtbl_common[] = {
+ { NFS4_OK, 0 },
+ { NFS4ERR_PERM, -EPERM },
+ { NFS4ERR_NOENT, -ENOENT },
+- { NFS4ERR_IO, -errno_NFSERR_IO},
++ { NFS4ERR_IO, -EIO },
+ { NFS4ERR_NXIO, -ENXIO },
+ { NFS4ERR_ACCESS, -EACCES },
+ { NFS4ERR_EXIST, -EEXIST },
+@@ -98,15 +100,20 @@ static const struct {
+ { NFS4ERR_BAD_COOKIE, -EBADCOOKIE },
+ { NFS4ERR_NOTSUPP, -ENOTSUPP },
+ { NFS4ERR_TOOSMALL, -ETOOSMALL },
+- { NFS4ERR_SERVERFAULT, -EREMOTEIO },
+ { NFS4ERR_BADTYPE, -EBADTYPE },
+- { NFS4ERR_LOCKED, -EAGAIN },
+ { NFS4ERR_SYMLINK, -ELOOP },
+- { NFS4ERR_OP_ILLEGAL, -EOPNOTSUPP },
+ { NFS4ERR_DEADLOCK, -EDEADLK },
++};
++
++static const struct {
++ int stat;
++ int errno;
++} nfs4_errtbl[] = {
++ { NFS4ERR_SERVERFAULT, -EREMOTEIO },
++ { NFS4ERR_LOCKED, -EAGAIN },
++ { NFS4ERR_OP_ILLEGAL, -EOPNOTSUPP },
+ { NFS4ERR_NOXATTR, -ENODATA },
+ { NFS4ERR_XATTR2BIG, -E2BIG },
+- { -1, -EIO }
+ };
+
+ /*
+@@ -116,7 +123,14 @@ static const struct {
+ int nfs4_stat_to_errno(int stat)
+ {
+ int i;
+- for (i = 0; nfs4_errtbl[i].stat != -1; i++) {
++
++ /* First check nfs4_errtbl_common */
++ for (i = 0; i < ARRAY_SIZE(nfs4_errtbl_common); i++) {
++ if (nfs4_errtbl_common[i].stat == stat)
++ return nfs4_errtbl_common[i].errno;
++ }
++ /* Then check nfs4_errtbl */
++ for (i = 0; i < ARRAY_SIZE(nfs4_errtbl); i++) {
+ if (nfs4_errtbl[i].stat == stat)
+ return nfs4_errtbl[i].errno;
+ }
+@@ -132,3 +146,56 @@ int nfs4_stat_to_errno(int stat)
+ return -stat;
+ }
+ EXPORT_SYMBOL_GPL(nfs4_stat_to_errno);
++
++/*
++ * This table is useful for conversion from local errno to NFS error.
++ * It provides more logically correct mappings for use with LOCALIO
++ * (which is focused on converting from errno to NFS status).
++ */
++static const struct {
++ int stat;
++ int errno;
++} nfs4_errtbl_localio[] = {
++ /* Map errors differently than nfs4_errtbl */
++ { NFS4ERR_IO, -EREMOTEIO },
++ { NFS4ERR_DELAY, -EAGAIN },
++ { NFS4ERR_FBIG, -E2BIG },
++ /* Map errors not handled by nfs4_errtbl */
++ { NFS4ERR_STALE, -EBADF },
++ { NFS4ERR_STALE, -EOPENSTALE },
++ { NFS4ERR_DELAY, -ETIMEDOUT },
++ { NFS4ERR_DELAY, -ERESTARTSYS },
++ { NFS4ERR_DELAY, -ENOMEM },
++ { NFS4ERR_IO, -ETXTBSY },
++ { NFS4ERR_IO, -EBUSY },
++ { NFS4ERR_SERVERFAULT, -ESERVERFAULT },
++ { NFS4ERR_SERVERFAULT, -ENFILE },
++ { NFS4ERR_IO, -EUCLEAN },
++ { NFS4ERR_PERM, -ENOKEY },
++};
++
++/*
++ * Convert an errno to an NFS error code for LOCALIO.
++ */
++__u32 nfs_localio_errno_to_nfs4_stat(int errno)
++{
++ int i;
++
++ /* First check nfs4_errtbl_common */
++ for (i = 0; i < ARRAY_SIZE(nfs4_errtbl_common); i++) {
++ if (nfs4_errtbl_common[i].errno == errno)
++ return nfs4_errtbl_common[i].stat;
++ }
++ /* Then check nfs4_errtbl_localio */
++ for (i = 0; i < ARRAY_SIZE(nfs4_errtbl_localio); i++) {
++ if (nfs4_errtbl_localio[i].errno == errno)
++ return nfs4_errtbl_localio[i].stat;
++ }
++ /* If we cannot translate the error, the recovery routines should
++ * handle it.
++ * Note: remaining NFSv4 error codes have values > 10000, so should
++ * not conflict with native Linux error codes.
++ */
++ return NFS4ERR_SERVERFAULT;
++}
++EXPORT_SYMBOL_GPL(nfs_localio_errno_to_nfs4_stat);
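Dropping the { -1, -EIO } sentinel rows in favour of ARRAY_SIZE() bounds makes the tables plain data and the fallback explicit in code. The resulting lookup idiom, sketched generically with illustrative entries:

    #include <linux/errno.h>
    #include <linux/kernel.h>       /* ARRAY_SIZE() */

    static const struct { int stat; int errno; } errtbl[] = {
            { 0, 0 },
            { 13, -EACCES },        /* illustrative entries only */
    };

    static int stat_to_errno(int stat)
    {
            size_t i;

            for (i = 0; i < ARRAY_SIZE(errtbl); i++)
                    if (errtbl[i].stat == stat)
                            return errtbl[i].errno;
            return -EIO;            /* explicit default, no sentinel row */
    }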
+diff --git a/fs/nilfs2/dir.c b/fs/nilfs2/dir.c
+index f61c58fbf117d3..0cc32e9c71cbf0 100644
+--- a/fs/nilfs2/dir.c
++++ b/fs/nilfs2/dir.c
+@@ -400,7 +400,7 @@ int nilfs_inode_by_name(struct inode *dir, const struct qstr *qstr, ino_t *ino)
+ return 0;
+ }
+
+-void nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
++int nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
+ struct folio *folio, struct inode *inode)
+ {
+ size_t from = offset_in_folio(folio, de);
+@@ -410,11 +410,15 @@ void nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
+
+ folio_lock(folio);
+ err = nilfs_prepare_chunk(folio, from, to);
+- BUG_ON(err);
++ if (unlikely(err)) {
++ folio_unlock(folio);
++ return err;
++ }
+ de->inode = cpu_to_le64(inode->i_ino);
+ de->file_type = fs_umode_to_ftype(inode->i_mode);
+ nilfs_commit_chunk(folio, mapping, from, to);
+ inode_set_mtime_to_ts(dir, inode_set_ctime_current(dir));
++ return 0;
+ }
+
+ /*
+@@ -543,7 +547,10 @@ int nilfs_delete_entry(struct nilfs_dir_entry *dir, struct folio *folio)
+ from = (char *)pde - kaddr;
+ folio_lock(folio);
+ err = nilfs_prepare_chunk(folio, from, to);
+- BUG_ON(err);
++ if (unlikely(err)) {
++ folio_unlock(folio);
++ goto out;
++ }
+ if (pde)
+ pde->rec_len = nilfs_rec_len_to_disk(to - from);
+ dir->inode = 0;
+diff --git a/fs/nilfs2/namei.c b/fs/nilfs2/namei.c
+index 1d836a5540f3b1..e02fae6757f126 100644
+--- a/fs/nilfs2/namei.c
++++ b/fs/nilfs2/namei.c
+@@ -406,8 +406,10 @@ static int nilfs_rename(struct mnt_idmap *idmap,
+ err = PTR_ERR(new_de);
+ goto out_dir;
+ }
+- nilfs_set_link(new_dir, new_de, new_folio, old_inode);
++ err = nilfs_set_link(new_dir, new_de, new_folio, old_inode);
+ folio_release_kmap(new_folio, new_de);
++ if (unlikely(err))
++ goto out_dir;
+ nilfs_mark_inode_dirty(new_dir);
+ inode_set_ctime_current(new_inode);
+ if (dir_de)
+@@ -430,28 +432,27 @@ static int nilfs_rename(struct mnt_idmap *idmap,
+ */
+ inode_set_ctime_current(old_inode);
+
+- nilfs_delete_entry(old_de, old_folio);
+-
+- if (dir_de) {
+- nilfs_set_link(old_inode, dir_de, dir_folio, new_dir);
+- folio_release_kmap(dir_folio, dir_de);
+- drop_nlink(old_dir);
++ err = nilfs_delete_entry(old_de, old_folio);
++ if (likely(!err)) {
++ if (dir_de) {
++ err = nilfs_set_link(old_inode, dir_de, dir_folio,
++ new_dir);
++ drop_nlink(old_dir);
++ }
++ nilfs_mark_inode_dirty(old_dir);
+ }
+- folio_release_kmap(old_folio, old_de);
+-
+- nilfs_mark_inode_dirty(old_dir);
+ nilfs_mark_inode_dirty(old_inode);
+
+- err = nilfs_transaction_commit(old_dir->i_sb);
+- return err;
+-
+ out_dir:
+ if (dir_de)
+ folio_release_kmap(dir_folio, dir_de);
+ out_old:
+ folio_release_kmap(old_folio, old_de);
+ out:
+- nilfs_transaction_abort(old_dir->i_sb);
++ if (likely(!err))
++ err = nilfs_transaction_commit(old_dir->i_sb);
++ else
++ nilfs_transaction_abort(old_dir->i_sb);
+ return err;
+ }
+
+diff --git a/fs/nilfs2/nilfs.h b/fs/nilfs2/nilfs.h
+index dff241c53fc583..cb6ed54accd7ba 100644
+--- a/fs/nilfs2/nilfs.h
++++ b/fs/nilfs2/nilfs.h
+@@ -261,8 +261,8 @@ struct nilfs_dir_entry *nilfs_find_entry(struct inode *, const struct qstr *,
+ int nilfs_delete_entry(struct nilfs_dir_entry *, struct folio *);
+ int nilfs_empty_dir(struct inode *);
+ struct nilfs_dir_entry *nilfs_dotdot(struct inode *, struct folio **);
+-void nilfs_set_link(struct inode *, struct nilfs_dir_entry *,
+- struct folio *, struct inode *);
++int nilfs_set_link(struct inode *dir, struct nilfs_dir_entry *de,
++ struct folio *folio, struct inode *inode);
+
+ /* file.c */
+ extern int nilfs_sync_file(struct file *, loff_t, loff_t, int);
+diff --git a/fs/nilfs2/page.c b/fs/nilfs2/page.c
+index 6dd8b854cd1f38..06f18fe86407e4 100644
+--- a/fs/nilfs2/page.c
++++ b/fs/nilfs2/page.c
+@@ -392,6 +392,11 @@ void nilfs_clear_dirty_pages(struct address_space *mapping)
+ /**
+ * nilfs_clear_folio_dirty - discard dirty folio
+ * @folio: dirty folio that will be discarded
++ *
++ * nilfs_clear_folio_dirty() clears the working states, including the dirty
++ * state, of the folio and its buffers. If the folio has buffers, they are
++ * cleared only once it is confirmed that none of the buffer heads are busy
++ * (none have valid references and none are locked).
+ */
+ void nilfs_clear_folio_dirty(struct folio *folio)
+ {
+@@ -399,10 +404,6 @@ void nilfs_clear_folio_dirty(struct folio *folio)
+
+ BUG_ON(!folio_test_locked(folio));
+
+- folio_clear_uptodate(folio);
+- folio_clear_mappedtodisk(folio);
+- folio_clear_checked(folio);
+-
+ head = folio_buffers(folio);
+ if (head) {
+ const unsigned long clear_bits =
+@@ -410,6 +411,25 @@ void nilfs_clear_folio_dirty(struct folio *folio)
+ BIT(BH_Async_Write) | BIT(BH_NILFS_Volatile) |
+ BIT(BH_NILFS_Checked) | BIT(BH_NILFS_Redirected) |
+ BIT(BH_Delay));
++ bool busy, invalidated = false;
++
++recheck_buffers:
++ busy = false;
++ bh = head;
++ do {
++ if (atomic_read(&bh->b_count) | buffer_locked(bh)) {
++ busy = true;
++ break;
++ }
++ } while (bh = bh->b_this_page, bh != head);
++
++ if (busy) {
++ if (invalidated)
++ return;
++ invalidate_bh_lrus();
++ invalidated = true;
++ goto recheck_buffers;
++ }
+
+ bh = head;
+ do {
+@@ -419,6 +439,9 @@ void nilfs_clear_folio_dirty(struct folio *folio)
+ } while (bh = bh->b_this_page, bh != head);
+ }
+
++ folio_clear_uptodate(folio);
++ folio_clear_mappedtodisk(folio);
++ folio_clear_checked(folio);
+ __nilfs_clear_folio_dirty(folio);
+ }
+
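[Editor's note] The nilfs_clear_folio_dirty() change above walks the buffer ring once looking for holders or locks, and on finding one it flushes the per-CPU buffer-head LRUs a single time before rechecking. A small standalone sketch of that check/flush-once/recheck idiom (illustrative names; the flush is a stand-in for invalidate_bh_lrus()):

#include <stdbool.h>
#include <stdio.h>

#define NBUF 4

static bool any_busy(const int refs[NBUF])
{
	for (int i = 0; i < NBUF; i++)
		if (refs[i] > 0)
			return true;
	return false;
}

/* Stand-in for invalidate_bh_lrus(): drop cached references. */
static void flush_cached_refs(int refs[NBUF])
{
	for (int i = 0; i < NBUF; i++)
		refs[i] = 0;
}

static bool try_clear(int refs[NBUF])
{
	bool invalidated = false;

recheck:
	if (any_busy(refs)) {
		if (invalidated)
			return false;	/* genuinely busy: bail out */
		flush_cached_refs(refs);
		invalidated = true;
		goto recheck;
	}
	return true;			/* now safe to clear buffer state */
}

int main(void)
{
	int refs[NBUF] = { 0, 1, 0, 0 };	/* one cached reference */

	printf("%s\n", try_clear(refs) ? "cleared" : "busy");
	return 0;
}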
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 58725183089733..58a598b548fa28 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -734,7 +734,6 @@ static size_t nilfs_lookup_dirty_data_buffers(struct inode *inode,
+ if (!head)
+ head = create_empty_buffers(folio,
+ i_blocksize(inode), 0);
+- folio_unlock(folio);
+
+ bh = head;
+ do {
+@@ -744,11 +743,14 @@ static size_t nilfs_lookup_dirty_data_buffers(struct inode *inode,
+ list_add_tail(&bh->b_assoc_buffers, listp);
+ ndirties++;
+ if (unlikely(ndirties >= nlimit)) {
++ folio_unlock(folio);
+ folio_batch_release(&fbatch);
+ cond_resched();
+ return ndirties;
+ }
+ } while (bh = bh->b_this_page, bh != head);
++
++ folio_unlock(folio);
+ }
+ folio_batch_release(&fbatch);
+ cond_resched();
+diff --git a/fs/ocfs2/quota_global.c b/fs/ocfs2/quota_global.c
+index 3404e7a30c330c..15d9acd456ecce 100644
+--- a/fs/ocfs2/quota_global.c
++++ b/fs/ocfs2/quota_global.c
+@@ -761,6 +761,11 @@ static int ocfs2_release_dquot(struct dquot *dquot)
+ handle = ocfs2_start_trans(osb,
+ ocfs2_calc_qdel_credits(dquot->dq_sb, dquot->dq_id.type));
+ if (IS_ERR(handle)) {
++ /*
++ * Mark dquot as inactive to avoid endless cycle in
++ * quota_release_workfn().
++ */
++ clear_bit(DQ_ACTIVE_B, &dquot->dq_flags);
+ status = PTR_ERR(handle);
+ mlog_errno(status);
+ goto out_ilock;
+diff --git a/fs/pstore/blk.c b/fs/pstore/blk.c
+index 65b2473e22ff9c..fa6b8cb788a1f2 100644
+--- a/fs/pstore/blk.c
++++ b/fs/pstore/blk.c
+@@ -89,7 +89,7 @@ static struct pstore_device_info *pstore_device_info;
+ _##name_ = check_size(name, alignsize); \
+ else \
+ _##name_ = 0; \
+- /* Synchronize module parameters with resuls. */ \
++ /* Synchronize module parameters with results. */ \
+ name = _##name_ / 1024; \
+ dev->zone.name = _##name_; \
+ }
+@@ -121,7 +121,7 @@ static int __register_pstore_device(struct pstore_device_info *dev)
+ if (pstore_device_info)
+ return -EBUSY;
+
+- /* zero means not limit on which backends to attempt to store. */
++ /* zero means no limit on which backends attempt to store. */
+ if (!dev->flags)
+ dev->flags = UINT_MAX;
+
+diff --git a/fs/select.c b/fs/select.c
+index a77907faf2b459..834f438296e2ba 100644
+--- a/fs/select.c
++++ b/fs/select.c
+@@ -787,7 +787,7 @@ static inline int get_sigset_argpack(struct sigset_argpack *to,
+ }
+ return 0;
+ Efault:
+- user_access_end();
++ user_read_access_end();
+ return -EFAULT;
+ }
+
+@@ -1361,7 +1361,7 @@ static inline int get_compat_sigset_argpack(struct compat_sigset_argpack *to,
+ }
+ return 0;
+ Efault:
+- user_access_end();
++ user_read_access_end();
+ return -EFAULT;
+ }
+
+diff --git a/fs/smb/client/cifsacl.c b/fs/smb/client/cifsacl.c
+index c68ad526a4de1b..ebe9a7d7c70e86 100644
+--- a/fs/smb/client/cifsacl.c
++++ b/fs/smb/client/cifsacl.c
+@@ -1395,7 +1395,7 @@ static int build_sec_desc(struct smb_ntsd *pntsd, struct smb_ntsd *pnntsd,
+ #ifdef CONFIG_CIFS_ALLOW_INSECURE_LEGACY
+ struct smb_ntsd *get_cifs_acl_by_fid(struct cifs_sb_info *cifs_sb,
+ const struct cifs_fid *cifsfid, u32 *pacllen,
+- u32 __maybe_unused unused)
++ u32 info)
+ {
+ struct smb_ntsd *pntsd = NULL;
+ unsigned int xid;
+@@ -1407,7 +1407,7 @@ struct smb_ntsd *get_cifs_acl_by_fid(struct cifs_sb_info *cifs_sb,
+
+ xid = get_xid();
+ rc = CIFSSMBGetCIFSACL(xid, tlink_tcon(tlink), cifsfid->netfid, &pntsd,
+- pacllen);
++ pacllen, info);
+ free_xid(xid);
+
+ cifs_put_tlink(tlink);
+@@ -1419,7 +1419,7 @@ struct smb_ntsd *get_cifs_acl_by_fid(struct cifs_sb_info *cifs_sb,
+ }
+
+ static struct smb_ntsd *get_cifs_acl_by_path(struct cifs_sb_info *cifs_sb,
+- const char *path, u32 *pacllen)
++ const char *path, u32 *pacllen, u32 info)
+ {
+ struct smb_ntsd *pntsd = NULL;
+ int oplock = 0;
+@@ -1446,9 +1446,12 @@ static struct smb_ntsd *get_cifs_acl_by_path(struct cifs_sb_info *cifs_sb,
+ .fid = &fid,
+ };
+
++ if (info & SACL_SECINFO)
++ oparms.desired_access |= SYSTEM_SECURITY;
++
+ rc = CIFS_open(xid, &oparms, &oplock, NULL);
+ if (!rc) {
+- rc = CIFSSMBGetCIFSACL(xid, tcon, fid.netfid, &pntsd, pacllen);
++ rc = CIFSSMBGetCIFSACL(xid, tcon, fid.netfid, &pntsd, pacllen, info);
+ CIFSSMBClose(xid, tcon, fid.netfid);
+ }
+
+@@ -1472,7 +1475,7 @@ struct smb_ntsd *get_cifs_acl(struct cifs_sb_info *cifs_sb,
+ if (inode)
+ open_file = find_readable_file(CIFS_I(inode), true);
+ if (!open_file)
+- return get_cifs_acl_by_path(cifs_sb, path, pacllen);
++ return get_cifs_acl_by_path(cifs_sb, path, pacllen, info);
+
+ pntsd = get_cifs_acl_by_fid(cifs_sb, &open_file->fid, pacllen, info);
+ cifsFileInfo_put(open_file);
+@@ -1485,7 +1488,7 @@ int set_cifs_acl(struct smb_ntsd *pnntsd, __u32 acllen,
+ {
+ int oplock = 0;
+ unsigned int xid;
+- int rc, access_flags;
++ int rc, access_flags = 0;
+ struct cifs_tcon *tcon;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+ struct tcon_link *tlink = cifs_sb_tlink(cifs_sb);
+@@ -1498,10 +1501,12 @@ int set_cifs_acl(struct smb_ntsd *pnntsd, __u32 acllen,
+ tcon = tlink_tcon(tlink);
+ xid = get_xid();
+
+- if (aclflag == CIFS_ACL_OWNER || aclflag == CIFS_ACL_GROUP)
+- access_flags = WRITE_OWNER;
+- else
+- access_flags = WRITE_DAC;
++ if (aclflag & CIFS_ACL_OWNER || aclflag & CIFS_ACL_GROUP)
++ access_flags |= WRITE_OWNER;
++ if (aclflag & CIFS_ACL_SACL)
++ access_flags |= SYSTEM_SECURITY;
++ if (aclflag & CIFS_ACL_DACL)
++ access_flags |= WRITE_DAC;
+
+ oparms = (struct cifs_open_parms) {
+ .tcon = tcon,
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index a697e53ccee2be..907af3cbffd1bc 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -568,7 +568,7 @@ extern int CIFSSMBSetEA(const unsigned int xid, struct cifs_tcon *tcon,
+ const struct nls_table *nls_codepage,
+ struct cifs_sb_info *cifs_sb);
+ extern int CIFSSMBGetCIFSACL(const unsigned int xid, struct cifs_tcon *tcon,
+- __u16 fid, struct smb_ntsd **acl_inf, __u32 *buflen);
++ __u16 fid, struct smb_ntsd **acl_inf, __u32 *buflen, __u32 info);
+ extern int CIFSSMBSetCIFSACL(const unsigned int, struct cifs_tcon *, __u16,
+ struct smb_ntsd *pntsd, __u32 len, int aclflag);
+ extern int cifs_do_get_acl(const unsigned int xid, struct cifs_tcon *tcon,
+diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c
+index 0eae60731c20c0..e2a14e25da87ce 100644
+--- a/fs/smb/client/cifssmb.c
++++ b/fs/smb/client/cifssmb.c
+@@ -3427,7 +3427,7 @@ validate_ntransact(char *buf, char **ppparm, char **ppdata,
+ /* Get Security Descriptor (by handle) from remote server for a file or dir */
+ int
+ CIFSSMBGetCIFSACL(const unsigned int xid, struct cifs_tcon *tcon, __u16 fid,
+- struct smb_ntsd **acl_inf, __u32 *pbuflen)
++ struct smb_ntsd **acl_inf, __u32 *pbuflen, __u32 info)
+ {
+ int rc = 0;
+ int buf_type = 0;
+@@ -3450,7 +3450,7 @@ CIFSSMBGetCIFSACL(const unsigned int xid, struct cifs_tcon *tcon, __u16 fid,
+ pSMB->MaxSetupCount = 0;
+ pSMB->Fid = fid; /* file handle always le */
+ pSMB->AclFlags = cpu_to_le32(CIFS_ACL_OWNER | CIFS_ACL_GROUP |
+- CIFS_ACL_DACL);
++ CIFS_ACL_DACL | info);
+ pSMB->ByteCount = cpu_to_le16(11); /* 3 bytes pad + 8 bytes parm */
+ inc_rfc1001_len(pSMB, 11);
+ iov[0].iov_base = (char *)pSMB;
+diff --git a/fs/smb/client/readdir.c b/fs/smb/client/readdir.c
+index 273358d20a46c9..50f96259d9adc2 100644
+--- a/fs/smb/client/readdir.c
++++ b/fs/smb/client/readdir.c
+@@ -413,7 +413,7 @@ _initiate_cifs_search(const unsigned int xid, struct file *file,
+ cifsFile->invalidHandle = false;
+ } else if ((rc == -EOPNOTSUPP) &&
+ (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)) {
+- cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_SERVER_INUM;
++ cifs_autodisable_serverino(cifs_sb);
+ goto ffirst_retry;
+ }
+ error_exit:
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index d3abb99cc99094..e56a8df23fec9a 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -674,11 +674,12 @@ int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
+ return parse_reparse_point(buf, plen, cifs_sb, full_path, true, data);
+ }
+
+-static void wsl_to_fattr(struct cifs_open_info_data *data,
++static bool wsl_to_fattr(struct cifs_open_info_data *data,
+ struct cifs_sb_info *cifs_sb,
+ u32 tag, struct cifs_fattr *fattr)
+ {
+ struct smb2_file_full_ea_info *ea;
++ bool have_xattr_dev = false;
+ u32 next = 0;
+
+ switch (tag) {
+@@ -721,13 +722,24 @@ static void wsl_to_fattr(struct cifs_open_info_data *data,
+ fattr->cf_uid = wsl_make_kuid(cifs_sb, v);
+ else if (!strncmp(name, SMB2_WSL_XATTR_GID, nlen))
+ fattr->cf_gid = wsl_make_kgid(cifs_sb, v);
+- else if (!strncmp(name, SMB2_WSL_XATTR_MODE, nlen))
++ else if (!strncmp(name, SMB2_WSL_XATTR_MODE, nlen)) {
++ /* File type in reparse point tag and in xattr mode must match. */
++ if (S_DT(fattr->cf_mode) != S_DT(le32_to_cpu(*(__le32 *)v)))
++ return false;
+ fattr->cf_mode = (umode_t)le32_to_cpu(*(__le32 *)v);
+- else if (!strncmp(name, SMB2_WSL_XATTR_DEV, nlen))
++ } else if (!strncmp(name, SMB2_WSL_XATTR_DEV, nlen)) {
+ fattr->cf_rdev = reparse_mkdev(v);
++ have_xattr_dev = true;
++ }
+ } while (next);
+ out:
++
++ /* Major and minor numbers for char and block devices are mandatory. */
++ if (!have_xattr_dev && (tag == IO_REPARSE_TAG_LX_CHR || tag == IO_REPARSE_TAG_LX_BLK))
++ return false;
++
+ fattr->cf_dtype = S_DT(fattr->cf_mode);
++ return true;
+ }
+
+ static bool posix_reparse_to_fattr(struct cifs_sb_info *cifs_sb,
+@@ -801,7 +813,9 @@ bool cifs_reparse_point_to_fattr(struct cifs_sb_info *cifs_sb,
+ case IO_REPARSE_TAG_AF_UNIX:
+ case IO_REPARSE_TAG_LX_CHR:
+ case IO_REPARSE_TAG_LX_BLK:
+- wsl_to_fattr(data, cifs_sb, tag, fattr);
++ ok = wsl_to_fattr(data, cifs_sb, tag, fattr);
++ if (!ok)
++ return false;
+ break;
+ case IO_REPARSE_TAG_NFS:
+ ok = posix_reparse_to_fattr(cifs_sb, fattr, data);
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 7571fefeb83aa1..6bacf754b57efd 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -658,7 +658,8 @@ parse_server_interfaces(struct network_interface_info_ioctl_rsp *buf,
+
+ while (bytes_left >= (ssize_t)sizeof(*p)) {
+ memset(&tmp_iface, 0, sizeof(tmp_iface));
+- tmp_iface.speed = le64_to_cpu(p->LinkSpeed);
++ /* default to 1Gbps when link speed is unset */
++ tmp_iface.speed = le64_to_cpu(p->LinkSpeed) ?: 1000000000;
+ tmp_iface.rdma_capable = le32_to_cpu(p->Capability & RDMA_CAPABLE) ? 1 : 0;
+ tmp_iface.rss_capable = le32_to_cpu(p->Capability & RSS_CAPABLE) ? 1 : 0;
+
+diff --git a/fs/ubifs/debug.c b/fs/ubifs/debug.c
+index 5cc69beaa62ecf..10a86c02a8b328 100644
+--- a/fs/ubifs/debug.c
++++ b/fs/ubifs/debug.c
+@@ -946,16 +946,20 @@ void ubifs_dump_tnc(struct ubifs_info *c)
+
+ pr_err("\n");
+ pr_err("(pid %d) start dumping TNC tree\n", current->pid);
+- znode = ubifs_tnc_levelorder_next(c, c->zroot.znode, NULL);
+- level = znode->level;
+- pr_err("== Level %d ==\n", level);
+- while (znode) {
+- if (level != znode->level) {
+- level = znode->level;
+- pr_err("== Level %d ==\n", level);
++ if (c->zroot.znode) {
++ znode = ubifs_tnc_levelorder_next(c, c->zroot.znode, NULL);
++ level = znode->level;
++ pr_err("== Level %d ==\n", level);
++ while (znode) {
++ if (level != znode->level) {
++ level = znode->level;
++ pr_err("== Level %d ==\n", level);
++ }
++ ubifs_dump_znode(c, znode);
++ znode = ubifs_tnc_levelorder_next(c, c->zroot.znode, znode);
+ }
+- ubifs_dump_znode(c, znode);
+- znode = ubifs_tnc_levelorder_next(c, c->zroot.znode, znode);
++ } else {
++ pr_err("empty TNC tree in memory\n");
+ }
+ pr_err("(pid %d) finish dumping TNC tree\n", current->pid);
+ }
+diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
+index aa4dbda7b5365e..6bcbdc8bf186da 100644
+--- a/fs/xfs/xfs_buf.c
++++ b/fs/xfs/xfs_buf.c
+@@ -663,9 +663,8 @@ xfs_buf_find_insert(
+ spin_unlock(&bch->bc_lock);
+ goto out_free_buf;
+ }
+- if (bp) {
++ if (bp && atomic_inc_not_zero(&bp->b_hold)) {
+ /* found an existing buffer */
+- atomic_inc(&bp->b_hold);
+ spin_unlock(&bch->bc_lock);
+ error = xfs_buf_find_lock(bp, flags);
+ if (error)
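[Editor's note] The xfs_buf lookup above now takes a hold only via atomic_inc_not_zero(), so a buffer whose last reference is already gone is treated as not found instead of being resurrected. A C11-atomics analogue of that "get a reference only while live" primitive (illustrative, not the kernel implementation):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static bool get_ref_if_live(atomic_int *count)
{
	int old = atomic_load(count);

	while (old != 0) {
		/* On failure 'old' is reloaded, so a drop to zero is seen. */
		if (atomic_compare_exchange_weak(count, &old, old + 1))
			return true;
	}
	return false;	/* count hit zero: teardown is in progress */
}

int main(void)
{
	atomic_int live = 1, dying = 0;

	printf("%d %d\n", get_ref_if_live(&live), get_ref_if_live(&dying));
	return 0;
}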
+diff --git a/fs/xfs/xfs_notify_failure.c b/fs/xfs/xfs_notify_failure.c
+index fa50e5308292d3..0b0b0f31aca274 100644
+--- a/fs/xfs/xfs_notify_failure.c
++++ b/fs/xfs/xfs_notify_failure.c
+@@ -153,6 +153,79 @@ xfs_dax_notify_failure_thaw(
+ thaw_super(sb, FREEZE_HOLDER_USERSPACE);
+ }
+
++static int
++xfs_dax_translate_range(
++ struct xfs_buftarg *btp,
++ u64 offset,
++ u64 len,
++ xfs_daddr_t *daddr,
++ uint64_t *bblen)
++{
++ u64 dev_start = btp->bt_dax_part_off;
++ u64 dev_len = bdev_nr_bytes(btp->bt_bdev);
++ u64 dev_end = dev_start + dev_len - 1;
++
++ /* Notify failure on the whole device. */
++ if (offset == 0 && len == U64_MAX) {
++ offset = dev_start;
++ len = dev_len;
++ }
++
++ /* Ignore the range out of filesystem area */
++ if (offset + len - 1 < dev_start)
++ return -ENXIO;
++ if (offset > dev_end)
++ return -ENXIO;
++
++ /* Calculate the real range when it touches the boundary */
++ if (offset > dev_start)
++ offset -= dev_start;
++ else {
++ len -= dev_start - offset;
++ offset = 0;
++ }
++ if (offset + len - 1 > dev_end)
++ len = dev_end - offset + 1;
++
++ *daddr = BTOBB(offset);
++ *bblen = BTOBB(len);
++ return 0;
++}
++
++static int
++xfs_dax_notify_logdev_failure(
++ struct xfs_mount *mp,
++ u64 offset,
++ u64 len,
++ int mf_flags)
++{
++ xfs_daddr_t daddr;
++ uint64_t bblen;
++ int error;
++
++ /*
++ * Return ENXIO instead of shutting down the filesystem if the failed
++ * region is beyond the end of the log.
++ */
++ error = xfs_dax_translate_range(mp->m_logdev_targp,
++ offset, len, &daddr, &bblen);
++ if (error)
++ return error;
++
++ /*
++ * In the pre-remove case the failure notification is attempting to
++ * trigger a force unmount. The expectation is that the device is
++ * still present, but its removal is in progress and cannot be
++ * cancelled, so proceed with accessing the log device.
++ */
++ if (mf_flags & MF_MEM_PRE_REMOVE)
++ return 0;
++
++ xfs_err(mp, "ondisk log corrupt, shutting down fs!");
++ xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_ONDISK);
++ return -EFSCORRUPTED;
++}
++
+ static int
+ xfs_dax_notify_ddev_failure(
+ struct xfs_mount *mp,
+@@ -263,8 +336,9 @@ xfs_dax_notify_failure(
+ int mf_flags)
+ {
+ struct xfs_mount *mp = dax_holder(dax_dev);
+- u64 ddev_start;
+- u64 ddev_end;
++ xfs_daddr_t daddr;
++ uint64_t bblen;
++ int error;
+
+ if (!(mp->m_super->s_flags & SB_BORN)) {
+ xfs_warn(mp, "filesystem is not ready for notify_failure()!");
+@@ -279,17 +353,7 @@ xfs_dax_notify_failure(
+
+ if (mp->m_logdev_targp && mp->m_logdev_targp->bt_daxdev == dax_dev &&
+ mp->m_logdev_targp != mp->m_ddev_targp) {
+- /*
+- * In the pre-remove case the failure notification is attempting
+- * to trigger a force unmount. The expectation is that the
+- * device is still present, but its removal is in progress and
+- * can not be cancelled, proceed with accessing the log device.
+- */
+- if (mf_flags & MF_MEM_PRE_REMOVE)
+- return 0;
+- xfs_err(mp, "ondisk log corrupt, shutting down fs!");
+- xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_ONDISK);
+- return -EFSCORRUPTED;
++ return xfs_dax_notify_logdev_failure(mp, offset, len, mf_flags);
+ }
+
+ if (!xfs_has_rmapbt(mp)) {
+@@ -297,33 +361,12 @@ xfs_dax_notify_failure(
+ return -EOPNOTSUPP;
+ }
+
+- ddev_start = mp->m_ddev_targp->bt_dax_part_off;
+- ddev_end = ddev_start + bdev_nr_bytes(mp->m_ddev_targp->bt_bdev) - 1;
+-
+- /* Notify failure on the whole device. */
+- if (offset == 0 && len == U64_MAX) {
+- offset = ddev_start;
+- len = bdev_nr_bytes(mp->m_ddev_targp->bt_bdev);
+- }
+-
+- /* Ignore the range out of filesystem area */
+- if (offset + len - 1 < ddev_start)
+- return -ENXIO;
+- if (offset > ddev_end)
+- return -ENXIO;
+-
+- /* Calculate the real range when it touches the boundary */
+- if (offset > ddev_start)
+- offset -= ddev_start;
+- else {
+- len -= ddev_start - offset;
+- offset = 0;
+- }
+- if (offset + len - 1 > ddev_end)
+- len = ddev_end - offset + 1;
++ error = xfs_dax_translate_range(mp->m_ddev_targp, offset, len, &daddr,
++ &bblen);
++ if (error)
++ return error;
+
+- return xfs_dax_notify_ddev_failure(mp, BTOBB(offset), BTOBB(len),
+- mf_flags);
++ return xfs_dax_notify_ddev_failure(mp, daddr, bblen, mf_flags);
+ }
+
+ const struct dax_holder_operations xfs_dax_holder_operations = {
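[Editor's note] xfs_dax_translate_range() above rebases a failure range onto the device and clamps it to the device's extent, returning -ENXIO when the range misses entirely; the same helper then serves both the data and log devices. A simplified standalone version of that clamp (bounds expressed device-relative, illustrative only):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static int translate_range(uint64_t dev_start, uint64_t dev_len,
			   uint64_t offset, uint64_t len,
			   uint64_t *out_off, uint64_t *out_len)
{
	uint64_t dev_end = dev_start + dev_len - 1;

	/* offset == 0, len == U64_MAX means "the whole device". */
	if (offset == 0 && len == UINT64_MAX) {
		offset = dev_start;
		len = dev_len;
	}
	/* Entirely before or after the device: nothing to report. */
	if (offset + len - 1 < dev_start || offset > dev_end)
		return -1;

	/* Rebase to device-relative, trimming a leading overhang. */
	if (offset > dev_start) {
		offset -= dev_start;
	} else {
		len -= dev_start - offset;
		offset = 0;
	}
	/* Trim a trailing overhang. */
	if (offset + len > dev_len)
		len = dev_len - offset;

	*out_off = offset;
	*out_len = len;
	return 0;
}

int main(void)
{
	uint64_t off, len;

	if (!translate_range(4096, 65536, 0, UINT64_MAX, &off, &len))
		printf("off=%" PRIu64 " len=%" PRIu64 "\n", off, len);
	return 0;
}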
+diff --git a/include/acpi/acpixf.h b/include/acpi/acpixf.h
+index d076ebd19a61e8..78b24b09048885 100644
+--- a/include/acpi/acpixf.h
++++ b/include/acpi/acpixf.h
+@@ -763,6 +763,7 @@ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
+ *event_status))
+ ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_dispatch_gpe(acpi_handle gpe_device, u32 gpe_number))
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_hw_disable_all_gpes(void))
++ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_hw_enable_all_wakeup_gpes(void))
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable_all_gpes(void))
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_runtime_gpes(void))
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_wakeup_gpes(void))
+diff --git a/include/dt-bindings/clock/imx93-clock.h b/include/dt-bindings/clock/imx93-clock.h
+index 787c9e74dc96db..c393fad3a3469c 100644
+--- a/include/dt-bindings/clock/imx93-clock.h
++++ b/include/dt-bindings/clock/imx93-clock.h
+@@ -204,6 +204,11 @@
+ #define IMX93_CLK_A55_SEL 199
+ #define IMX93_CLK_A55_CORE 200
+ #define IMX93_CLK_PDM_IPG 201
+-#define IMX93_CLK_END 202
++#define IMX91_CLK_ENET1_QOS_TSN 202
++#define IMX91_CLK_ENET_TIMER 203
++#define IMX91_CLK_ENET2_REGULAR 204
++#define IMX91_CLK_ENET2_REGULAR_GATE 205
++#define IMX91_CLK_ENET1_QOS_TSN_GATE 206
++#define IMX93_CLK_SPDIF_IPG 207
+
+ #endif
+diff --git a/include/dt-bindings/clock/sun50i-a64-ccu.h b/include/dt-bindings/clock/sun50i-a64-ccu.h
+index 175892189e9dcb..4f220ea7a23cc5 100644
+--- a/include/dt-bindings/clock/sun50i-a64-ccu.h
++++ b/include/dt-bindings/clock/sun50i-a64-ccu.h
+@@ -44,7 +44,9 @@
+ #define _DT_BINDINGS_CLK_SUN50I_A64_H_
+
+ #define CLK_PLL_VIDEO0 7
++#define CLK_PLL_VIDEO0_2X 8
+ #define CLK_PLL_PERIPH0 11
++#define CLK_PLL_MIPI 17
+
+ #define CLK_CPUX 21
+ #define CLK_BUS_MIPI_DSI 28
+diff --git a/include/linux/btf.h b/include/linux/btf.h
+index b8a583194c4a97..d99178ce01d21d 100644
+--- a/include/linux/btf.h
++++ b/include/linux/btf.h
+@@ -352,6 +352,11 @@ static inline bool btf_type_is_scalar(const struct btf_type *t)
+ return btf_type_is_int(t) || btf_type_is_enum(t);
+ }
+
++static inline bool btf_type_is_fwd(const struct btf_type *t)
++{
++ return BTF_INFO_KIND(t->info) == BTF_KIND_FWD;
++}
++
+ static inline bool btf_type_is_typedef(const struct btf_type *t)
+ {
+ return BTF_INFO_KIND(t->info) == BTF_KIND_TYPEDEF;
+diff --git a/include/linux/coredump.h b/include/linux/coredump.h
+index 45e598fe34766f..77e6e195d1d687 100644
+--- a/include/linux/coredump.h
++++ b/include/linux/coredump.h
+@@ -52,8 +52,8 @@ extern void do_coredump(const kernel_siginfo_t *siginfo);
+ #define __COREDUMP_PRINTK(Level, Format, ...) \
+ do { \
+ char comm[TASK_COMM_LEN]; \
+- \
+- get_task_comm(comm, current); \
++ /* This will always be NUL terminated. */ \
++ memcpy(comm, current->comm, sizeof(comm)); \
+ printk_ratelimited(Level "coredump: %d(%*pE): " Format "\n", \
+ task_tgid_vnr(current), (int)strlen(comm), comm, ##__VA_ARGS__); \
+ } while (0) \
+diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
+index 12f6dc5675987b..b8b935b526033f 100644
+--- a/include/linux/ethtool.h
++++ b/include/linux/ethtool.h
+@@ -734,6 +734,9 @@ struct kernel_ethtool_ts_info {
+ * @rxfh_per_ctx_key: device supports setting different RSS key for each
+ * additional context. Netlink API should report hfunc, key, and input_xfrm
+ * for every context, not just context 0.
++ * @cap_rss_rxnfc_adds: device supports nonzero ring_cookie in filters with
++ * %FLOW_RSS flag; the queue ID from the filter is added to the value from
++ * the indirection table to determine the delivery queue.
+ * @rxfh_indir_space: max size of RSS indirection tables, if indirection table
+ * size as returned by @get_rxfh_indir_size may change during lifetime
+ * of the device. Leave as 0 if the table size is constant.
+@@ -956,6 +959,7 @@ struct ethtool_ops {
+ u32 cap_rss_ctx_supported:1;
+ u32 cap_rss_sym_xor_supported:1;
+ u32 rxfh_per_ctx_key:1;
++ u32 cap_rss_rxnfc_adds:1;
+ u32 rxfh_indir_space;
+ u16 rxfh_key_space;
+ u16 rxfh_priv_size;
+diff --git a/include/linux/export.h b/include/linux/export.h
+index 0bbd02fd351db9..1e04dbc675c2fa 100644
+--- a/include/linux/export.h
++++ b/include/linux/export.h
+@@ -60,7 +60,7 @@
+ #endif
+
+ #ifdef DEFAULT_SYMBOL_NAMESPACE
+-#define _EXPORT_SYMBOL(sym, license) __EXPORT_SYMBOL(sym, license, __stringify(DEFAULT_SYMBOL_NAMESPACE))
++#define _EXPORT_SYMBOL(sym, license) __EXPORT_SYMBOL(sym, license, DEFAULT_SYMBOL_NAMESPACE)
+ #else
+ #define _EXPORT_SYMBOL(sym, license) __EXPORT_SYMBOL(sym, license, "")
+ #endif
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index a7d60a1c72a09a..dd33423012538d 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -218,6 +218,7 @@ struct hid_item {
+ #define HID_GD_DOWN 0x00010091
+ #define HID_GD_RIGHT 0x00010092
+ #define HID_GD_LEFT 0x00010093
++#define HID_GD_DO_NOT_DISTURB 0x0001009b
+ /* Microsoft Win8 Wireless Radio Controls CA usage codes */
+ #define HID_GD_RFKILL_BTN 0x000100c6
+ #define HID_GD_RFKILL_LED 0x000100c7
+diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
+index 456bca45ff0528..3750e56bfcbb36 100644
+--- a/include/linux/ieee80211.h
++++ b/include/linux/ieee80211.h
+@@ -5053,28 +5053,24 @@ static inline u8 ieee80211_mle_common_size(const u8 *data)
+ {
+ const struct ieee80211_multi_link_elem *mle = (const void *)data;
+ u16 control = le16_to_cpu(mle->control);
+- u8 common = 0;
+
+ switch (u16_get_bits(control, IEEE80211_ML_CONTROL_TYPE)) {
+ case IEEE80211_ML_CONTROL_TYPE_BASIC:
+ case IEEE80211_ML_CONTROL_TYPE_PREQ:
+ case IEEE80211_ML_CONTROL_TYPE_TDLS:
+ case IEEE80211_ML_CONTROL_TYPE_RECONF:
++ case IEEE80211_ML_CONTROL_TYPE_PRIO_ACCESS:
+ /*
+ * The length is the first octet pointed by mle->variable so no
+ * need to add anything
+ */
+ break;
+- case IEEE80211_ML_CONTROL_TYPE_PRIO_ACCESS:
+- if (control & IEEE80211_MLC_PRIO_ACCESS_PRES_AP_MLD_MAC_ADDR)
+- common += ETH_ALEN;
+- return common;
+ default:
+ WARN_ON(1);
+ return 0;
+ }
+
+- return sizeof(*mle) + common + mle->variable[0];
++ return sizeof(*mle) + mle->variable[0];
+ }
+
+ /**
+@@ -5312,8 +5308,7 @@ static inline bool ieee80211_mle_size_ok(const u8 *data, size_t len)
+ check_common_len = true;
+ break;
+ case IEEE80211_ML_CONTROL_TYPE_PRIO_ACCESS:
+- if (control & IEEE80211_MLC_PRIO_ACCESS_PRES_AP_MLD_MAC_ADDR)
+- common += ETH_ALEN;
++ common = ETH_ALEN + 1;
+ break;
+ default:
+ /* we don't know this type */
+diff --git a/include/linux/kallsyms.h b/include/linux/kallsyms.h
+index c3f075e8f60cb6..1c6a6c1704d8d0 100644
+--- a/include/linux/kallsyms.h
++++ b/include/linux/kallsyms.h
+@@ -57,10 +57,10 @@ static inline void *dereference_symbol_descriptor(void *ptr)
+
+ preempt_disable();
+ mod = __module_address((unsigned long)ptr);
+- preempt_enable();
+
+ if (mod)
+ ptr = dereference_module_function_descriptor(mod, ptr);
++ preempt_enable();
+ #endif
+ return ptr;
+ }
+diff --git a/include/linux/mroute_base.h b/include/linux/mroute_base.h
+index 9dd4bf1572553f..58a2401e4b551b 100644
+--- a/include/linux/mroute_base.h
++++ b/include/linux/mroute_base.h
+@@ -146,9 +146,9 @@ struct mr_mfc {
+ unsigned long last_assert;
+ int minvif;
+ int maxvif;
+- unsigned long bytes;
+- unsigned long pkt;
+- unsigned long wrong_if;
++ atomic_long_t bytes;
++ atomic_long_t pkt;
++ atomic_long_t wrong_if;
+ unsigned long lastuse;
+ unsigned char ttls[MAXVIFS];
+ refcount_t refcount;
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 8896705ccd638b..02d3bafebbe77c 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2222,7 +2222,7 @@ struct net_device {
+ void *atalk_ptr;
+ #endif
+ #if IS_ENABLED(CONFIG_AX25)
+- void *ax25_ptr;
++ struct ax25_dev __rcu *ax25_ptr;
+ #endif
+ #if IS_ENABLED(CONFIG_CFG80211)
+ struct wireless_dev *ieee80211_ptr;
+diff --git a/include/linux/nfs_common.h b/include/linux/nfs_common.h
+index 5fc02df882521e..a541c3a0288750 100644
+--- a/include/linux/nfs_common.h
++++ b/include/linux/nfs_common.h
+@@ -9,9 +9,10 @@
+ #include <uapi/linux/nfs.h>
+
+ /* Mapping from NFS error code to "errno" error code. */
+-#define errno_NFSERR_IO EIO
+
+ int nfs_stat_to_errno(enum nfs_stat status);
+ int nfs4_stat_to_errno(int stat);
+
++__u32 nfs_localio_errno_to_nfs4_stat(int errno);
++
+ #endif /* _LINUX_NFS_COMMON_H */
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index fb908843f20928..347901525a46ae 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -1266,12 +1266,18 @@ static inline void perf_sample_save_callchain(struct perf_sample_data *data,
+ }
+
+ static inline void perf_sample_save_raw_data(struct perf_sample_data *data,
++ struct perf_event *event,
+ struct perf_raw_record *raw)
+ {
+ struct perf_raw_frag *frag = &raw->frag;
+ u32 sum = 0;
+ int size;
+
++ if (!(event->attr.sample_type & PERF_SAMPLE_RAW))
++ return;
++ if (WARN_ON_ONCE(data->sample_flags & PERF_SAMPLE_RAW))
++ return;
++
+ do {
+ sum += frag->size;
+ if (perf_raw_frag_last(frag))
+diff --git a/include/linux/pps_kernel.h b/include/linux/pps_kernel.h
+index 78c8ac4951b581..c7abce28ed2995 100644
+--- a/include/linux/pps_kernel.h
++++ b/include/linux/pps_kernel.h
+@@ -56,8 +56,7 @@ struct pps_device {
+
+ unsigned int id; /* PPS source unique ID */
+ void const *lookup_cookie; /* For pps_lookup_dev() only */
+- struct cdev cdev;
+- struct device *dev;
++ struct device dev;
+ struct fasync_struct *async_queue; /* fasync method */
+ spinlock_t lock;
+ };
+diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h
+index fd037c127bb071..551329220e4f34 100644
+--- a/include/linux/ptr_ring.h
++++ b/include/linux/ptr_ring.h
+@@ -615,15 +615,14 @@ static inline int ptr_ring_resize_noprof(struct ptr_ring *r, int size, gfp_t gfp
+ /*
+ * Note: producer lock is nested within consumer lock, so if you
+ * resize you must make sure all uses nest correctly.
+- * In particular if you consume ring in interrupt or BH context, you must
+- * disable interrupts/BH when doing so.
++ * In particular if you consume ring in BH context, you must
++ * disable BH when doing so.
+ */
+-static inline int ptr_ring_resize_multiple_noprof(struct ptr_ring **rings,
+- unsigned int nrings,
+- int size,
+- gfp_t gfp, void (*destroy)(void *))
++static inline int ptr_ring_resize_multiple_bh_noprof(struct ptr_ring **rings,
++ unsigned int nrings,
++ int size, gfp_t gfp,
++ void (*destroy)(void *))
+ {
+- unsigned long flags;
+ void ***queues;
+ int i;
+
+@@ -638,12 +637,12 @@ static inline int ptr_ring_resize_multiple_noprof(struct ptr_ring **rings,
+ }
+
+ for (i = 0; i < nrings; ++i) {
+- spin_lock_irqsave(&(rings[i])->consumer_lock, flags);
++ spin_lock_bh(&(rings[i])->consumer_lock);
+ spin_lock(&(rings[i])->producer_lock);
+ queues[i] = __ptr_ring_swap_queue(rings[i], queues[i],
+ size, gfp, destroy);
+ spin_unlock(&(rings[i])->producer_lock);
+- spin_unlock_irqrestore(&(rings[i])->consumer_lock, flags);
++ spin_unlock_bh(&(rings[i])->consumer_lock);
+ }
+
+ for (i = 0; i < nrings; ++i)
+@@ -662,8 +661,8 @@ static inline int ptr_ring_resize_multiple_noprof(struct ptr_ring **rings,
+ noqueues:
+ return -ENOMEM;
+ }
+-#define ptr_ring_resize_multiple(...) \
+- alloc_hooks(ptr_ring_resize_multiple_noprof(__VA_ARGS__))
++#define ptr_ring_resize_multiple_bh(...) \
++ alloc_hooks(ptr_ring_resize_multiple_bh_noprof(__VA_ARGS__))
+
+ static inline void ptr_ring_cleanup(struct ptr_ring *r, void (*destroy)(void *))
+ {
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 02eaf84c8626f4..8982820dae2131 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -944,6 +944,7 @@ struct task_struct {
+ unsigned sched_reset_on_fork:1;
+ unsigned sched_contributes_to_load:1;
+ unsigned sched_migrated:1;
++ unsigned sched_task_hot:1;
+
+ /* Force alignment to the next boundary: */
+ unsigned :0;
+diff --git a/include/linux/skb_array.h b/include/linux/skb_array.h
+index 926496c9cc9c3b..bf178238a3083d 100644
+--- a/include/linux/skb_array.h
++++ b/include/linux/skb_array.h
+@@ -199,17 +199,18 @@ static inline int skb_array_resize(struct skb_array *a, int size, gfp_t gfp)
+ return ptr_ring_resize(&a->ring, size, gfp, __skb_array_destroy_skb);
+ }
+
+-static inline int skb_array_resize_multiple_noprof(struct skb_array **rings,
+- int nrings, unsigned int size,
+- gfp_t gfp)
++static inline int skb_array_resize_multiple_bh_noprof(struct skb_array **rings,
++ int nrings,
++ unsigned int size,
++ gfp_t gfp)
+ {
+ BUILD_BUG_ON(offsetof(struct skb_array, ring));
+- return ptr_ring_resize_multiple_noprof((struct ptr_ring **)rings,
+- nrings, size, gfp,
+- __skb_array_destroy_skb);
++ return ptr_ring_resize_multiple_bh_noprof((struct ptr_ring **)rings,
++ nrings, size, gfp,
++ __skb_array_destroy_skb);
+ }
+-#define skb_array_resize_multiple(...) \
+- alloc_hooks(skb_array_resize_multiple_noprof(__VA_ARGS__))
++#define skb_array_resize_multiple_bh(...) \
++ alloc_hooks(skb_array_resize_multiple_bh_noprof(__VA_ARGS__))
+
+ static inline void skb_array_cleanup(struct skb_array *a)
+ {
+diff --git a/include/linux/usb/tcpm.h b/include/linux/usb/tcpm.h
+index 061da9546a8131..b22e659f81ba54 100644
+--- a/include/linux/usb/tcpm.h
++++ b/include/linux/usb/tcpm.h
+@@ -163,7 +163,8 @@ struct tcpc_dev {
+ void (*frs_sourcing_vbus)(struct tcpc_dev *dev);
+ int (*enable_auto_vbus_discharge)(struct tcpc_dev *dev, bool enable);
+ int (*set_auto_vbus_discharge_threshold)(struct tcpc_dev *dev, enum typec_pwr_opmode mode,
+- bool pps_active, u32 requested_vbus_voltage);
++ bool pps_active, u32 requested_vbus_voltage,
++ u32 pps_apdo_min_voltage);
+ bool (*is_vbus_vsafe0v)(struct tcpc_dev *dev);
+ void (*set_partner_usb_comm_capable)(struct tcpc_dev *dev, bool enable);
+ void (*check_contaminant)(struct tcpc_dev *dev);
+diff --git a/include/net/ax25.h b/include/net/ax25.h
+index cb622d84cd0cc4..4ee141aae0a29d 100644
+--- a/include/net/ax25.h
++++ b/include/net/ax25.h
+@@ -231,6 +231,7 @@ typedef struct ax25_dev {
+ #endif
+ refcount_t refcount;
+ bool device_up;
++ struct rcu_head rcu;
+ } ax25_dev;
+
+ typedef struct ax25_cb {
+@@ -290,9 +291,8 @@ static inline void ax25_dev_hold(ax25_dev *ax25_dev)
+
+ static inline void ax25_dev_put(ax25_dev *ax25_dev)
+ {
+- if (refcount_dec_and_test(&ax25_dev->refcount)) {
+- kfree(ax25_dev);
+- }
++ if (refcount_dec_and_test(&ax25_dev->refcount))
++ kfree_rcu(ax25_dev, rcu);
+ }
+ static inline __be16 ax25_type_trans(struct sk_buff *skb, struct net_device *dev)
+ {
+@@ -335,9 +335,9 @@ void ax25_digi_invert(const ax25_digi *, ax25_digi *);
+ extern spinlock_t ax25_dev_lock;
+
+ #if IS_ENABLED(CONFIG_AX25)
+-static inline ax25_dev *ax25_dev_ax25dev(struct net_device *dev)
++static inline ax25_dev *ax25_dev_ax25dev(const struct net_device *dev)
+ {
+- return dev->ax25_ptr;
++ return rcu_dereference_rtnl(dev->ax25_ptr);
+ }
+ #endif
+
+diff --git a/include/net/inetpeer.h b/include/net/inetpeer.h
+index 74ff688568a0c6..f475757daafba9 100644
+--- a/include/net/inetpeer.h
++++ b/include/net/inetpeer.h
+@@ -96,30 +96,28 @@ static inline struct in6_addr *inetpeer_get_addr_v6(struct inetpeer_addr *iaddr)
+
+ /* can be called with or without local BH being disabled */
+ struct inet_peer *inet_getpeer(struct inet_peer_base *base,
+- const struct inetpeer_addr *daddr,
+- int create);
++ const struct inetpeer_addr *daddr);
+
+ static inline struct inet_peer *inet_getpeer_v4(struct inet_peer_base *base,
+ __be32 v4daddr,
+- int vif, int create)
++ int vif)
+ {
+ struct inetpeer_addr daddr;
+
+ daddr.a4.addr = v4daddr;
+ daddr.a4.vif = vif;
+ daddr.family = AF_INET;
+- return inet_getpeer(base, &daddr, create);
++ return inet_getpeer(base, &daddr);
+ }
+
+ static inline struct inet_peer *inet_getpeer_v6(struct inet_peer_base *base,
+- const struct in6_addr *v6daddr,
+- int create)
++ const struct in6_addr *v6daddr)
+ {
+ struct inetpeer_addr daddr;
+
+ daddr.a6 = *v6daddr;
+ daddr.family = AF_INET6;
+- return inet_getpeer(base, &daddr, create);
++ return inet_getpeer(base, &daddr);
+ }
+
+ static inline int inetpeer_addr_cmp(const struct inetpeer_addr *a,
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 471c353d32a4a5..788513cc384b7f 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -442,6 +442,9 @@ struct nft_set_ext;
+ * @remove: remove element from set
+ * @walk: iterate over all set elements
+ * @get: get set elements
++ * @ksize: kernel set size
++ * @usize: userspace set size
++ * @adjust_maxsize: delta to adjust maximum set size
+ * @commit: commit set elements
+ * @abort: abort set elements
+ * @privsize: function to return size of set private data
+@@ -495,6 +498,9 @@ struct nft_set_ops {
+ const struct nft_set *set,
+ const struct nft_set_elem *elem,
+ unsigned int flags);
++ u32 (*ksize)(u32 size);
++ u32 (*usize)(u32 size);
++ u32 (*adjust_maxsize)(const struct nft_set *set);
+ void (*commit)(struct nft_set *set);
+ void (*abort)(const struct nft_set *set);
+ u64 (*privsize)(const struct nlattr * const nla[],
+diff --git a/include/net/netns/xfrm.h b/include/net/netns/xfrm.h
+index ae60d66640954c..23dd647fe0248c 100644
+--- a/include/net/netns/xfrm.h
++++ b/include/net/netns/xfrm.h
+@@ -43,6 +43,7 @@ struct netns_xfrm {
+ struct hlist_head __rcu *state_bysrc;
+ struct hlist_head __rcu *state_byspi;
+ struct hlist_head __rcu *state_byseq;
++ struct hlist_head __percpu *state_cache_input;
+ unsigned int state_hmask;
+ unsigned int state_num;
+ struct work_struct state_hash_work;
+diff --git a/include/net/pkt_cls.h b/include/net/pkt_cls.h
+index 4880b3a7aced5b..4229e4fcd2a9ee 100644
+--- a/include/net/pkt_cls.h
++++ b/include/net/pkt_cls.h
+@@ -75,11 +75,11 @@ static inline bool tcf_block_non_null_shared(struct tcf_block *block)
+ }
+
+ #ifdef CONFIG_NET_CLS_ACT
+-DECLARE_STATIC_KEY_FALSE(tcf_bypass_check_needed_key);
++DECLARE_STATIC_KEY_FALSE(tcf_sw_enabled_key);
+
+ static inline bool tcf_block_bypass_sw(struct tcf_block *block)
+ {
+- return block && block->bypass_wanted;
++ return block && !atomic_read(&block->useswcnt);
+ }
+ #endif
+
+@@ -759,6 +759,15 @@ tc_cls_common_offload_init(struct flow_cls_common_offload *cls_common,
+ cls_common->extack = extack;
+ }
+
++static inline void tcf_proto_update_usesw(struct tcf_proto *tp, u32 flags)
++{
++ if (tp->usesw)
++ return;
++ if (tc_skip_sw(flags) && tc_in_hw(flags))
++ return;
++ tp->usesw = true;
++}
++
+ #if IS_ENABLED(CONFIG_NET_TC_SKB_EXT)
+ static inline struct tc_skb_ext *tc_skb_ext_alloc(struct sk_buff *skb)
+ {
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 5d74fa7e694cc8..1e6324f0d4efda 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -425,6 +425,7 @@ struct tcf_proto {
+ spinlock_t lock;
+ bool deleting;
+ bool counted;
++ bool usesw;
+ refcount_t refcnt;
+ struct rcu_head rcu;
+ struct hlist_node destroy_ht_node;
+@@ -474,9 +475,7 @@ struct tcf_block {
+ struct flow_block flow_block;
+ struct list_head owner_list;
+ bool keep_dst;
+- bool bypass_wanted;
+- atomic_t filtercnt; /* Number of filters */
+- atomic_t skipswcnt; /* Number of skip_sw filters */
++ atomic_t useswcnt;
+ atomic_t offloadcnt; /* Number of oddloaded filters */
+ unsigned int nooffloaddevcnt; /* Number of devs unable to do offload */
+ unsigned int lockeddevcnt; /* Number of devs that require rtnl lock. */
+diff --git a/include/net/xfrm.h b/include/net/xfrm.h
+index a0bdd58f401c0f..83e9ef25b8d0d4 100644
+--- a/include/net/xfrm.h
++++ b/include/net/xfrm.h
+@@ -184,10 +184,13 @@ struct xfrm_state {
+ };
+ struct hlist_node byspi;
+ struct hlist_node byseq;
++ struct hlist_node state_cache;
++ struct hlist_node state_cache_input;
+
+ refcount_t refcnt;
+ spinlock_t lock;
+
++ u32 pcpu_num;
+ struct xfrm_id id;
+ struct xfrm_selector sel;
+ struct xfrm_mark mark;
+@@ -536,6 +539,7 @@ struct xfrm_policy_queue {
+ * @xp_net: network namespace the policy lives in
+ * @bydst: hlist node for SPD hash table or rbtree list
+ * @byidx: hlist node for index hash table
++ * @state_cache_list: hlist head for policy cached xfrm states
+ * @lock: serialize changes to policy structure members
+ * @refcnt: reference count, freed once it reaches 0
+ * @pos: kernel internal tie-breaker to determine age of policy
+@@ -566,6 +570,8 @@ struct xfrm_policy {
+ struct hlist_node bydst;
+ struct hlist_node byidx;
+
++ struct hlist_head state_cache_list;
++
+ /* This lock only affects elements except for entry. */
+ rwlock_t lock;
+ refcount_t refcnt;
+@@ -1217,9 +1223,19 @@ static inline int __xfrm_policy_check2(struct sock *sk, int dir,
+
+ if (xo) {
+ x = xfrm_input_state(skb);
+- if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET)
+- return (xo->flags & CRYPTO_DONE) &&
+- (xo->status & CRYPTO_SUCCESS);
++ if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET) {
++ bool check = (xo->flags & CRYPTO_DONE) &&
++ (xo->status & CRYPTO_SUCCESS);
++
++ /* The packets here are plain ones and secpath was
++ * needed to indicate that hardware already handled
++ * them and there is no need to do anything in addition.
++ *
++ * Consume secpath which was set by drivers.
++ */
++ secpath_reset(skb);
++ return check;
++ }
+ }
+
+ return __xfrm_check_nopolicy(net, skb, dir) ||
+@@ -1645,6 +1661,10 @@ int xfrm_state_update(struct xfrm_state *x);
+ struct xfrm_state *xfrm_state_lookup(struct net *net, u32 mark,
+ const xfrm_address_t *daddr, __be32 spi,
+ u8 proto, unsigned short family);
++struct xfrm_state *xfrm_input_state_lookup(struct net *net, u32 mark,
++ const xfrm_address_t *daddr,
++ __be32 spi, u8 proto,
++ unsigned short family);
+ struct xfrm_state *xfrm_state_lookup_byaddr(struct net *net, u32 mark,
+ const xfrm_address_t *daddr,
+ const xfrm_address_t *saddr,
+@@ -1684,7 +1704,7 @@ struct xfrmk_spdinfo {
+ u32 spdhmcnt;
+ };
+
+-struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq);
++struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq, u32 pcpu_num);
+ int xfrm_state_delete(struct xfrm_state *x);
+ int xfrm_state_flush(struct net *net, u8 proto, bool task_valid, bool sync);
+ int xfrm_dev_state_flush(struct net *net, struct net_device *dev, bool task_valid);
+@@ -1796,7 +1816,7 @@ int verify_spi_info(u8 proto, u32 min, u32 max, struct netlink_ext_ack *extack);
+ int xfrm_alloc_spi(struct xfrm_state *x, u32 minspi, u32 maxspi,
+ struct netlink_ext_ack *extack);
+ struct xfrm_state *xfrm_find_acq(struct net *net, const struct xfrm_mark *mark,
+- u8 mode, u32 reqid, u32 if_id, u8 proto,
++ u8 mode, u32 reqid, u32 if_id, u32 pcpu_num, u8 proto,
+ const xfrm_address_t *daddr,
+ const xfrm_address_t *saddr, int create,
+ unsigned short family);
+diff --git a/include/sound/hdaudio_ext.h b/include/sound/hdaudio_ext.h
+index 957295364a5e3c..4c7a40e149a594 100644
+--- a/include/sound/hdaudio_ext.h
++++ b/include/sound/hdaudio_ext.h
+@@ -2,8 +2,6 @@
+ #ifndef __SOUND_HDAUDIO_EXT_H
+ #define __SOUND_HDAUDIO_EXT_H
+
+-#include <linux/io-64-nonatomic-lo-hi.h>
+-#include <linux/iopoll.h>
+ #include <sound/hdaudio.h>
+
+ int snd_hdac_ext_bus_init(struct hdac_bus *bus, struct device *dev,
+@@ -119,49 +117,6 @@ int snd_hdac_ext_bus_link_put(struct hdac_bus *bus, struct hdac_ext_link *hlink)
+
+ void snd_hdac_ext_bus_link_power(struct hdac_device *codec, bool enable);
+
+-#define snd_hdac_adsp_writeb(chip, reg, value) \
+- snd_hdac_reg_writeb(chip, (chip)->dsp_ba + (reg), value)
+-#define snd_hdac_adsp_readb(chip, reg) \
+- snd_hdac_reg_readb(chip, (chip)->dsp_ba + (reg))
+-#define snd_hdac_adsp_writew(chip, reg, value) \
+- snd_hdac_reg_writew(chip, (chip)->dsp_ba + (reg), value)
+-#define snd_hdac_adsp_readw(chip, reg) \
+- snd_hdac_reg_readw(chip, (chip)->dsp_ba + (reg))
+-#define snd_hdac_adsp_writel(chip, reg, value) \
+- snd_hdac_reg_writel(chip, (chip)->dsp_ba + (reg), value)
+-#define snd_hdac_adsp_readl(chip, reg) \
+- snd_hdac_reg_readl(chip, (chip)->dsp_ba + (reg))
+-#define snd_hdac_adsp_writeq(chip, reg, value) \
+- snd_hdac_reg_writeq(chip, (chip)->dsp_ba + (reg), value)
+-#define snd_hdac_adsp_readq(chip, reg) \
+- snd_hdac_reg_readq(chip, (chip)->dsp_ba + (reg))
+-
+-#define snd_hdac_adsp_updateb(chip, reg, mask, val) \
+- snd_hdac_adsp_writeb(chip, reg, \
+- (snd_hdac_adsp_readb(chip, reg) & ~(mask)) | (val))
+-#define snd_hdac_adsp_updatew(chip, reg, mask, val) \
+- snd_hdac_adsp_writew(chip, reg, \
+- (snd_hdac_adsp_readw(chip, reg) & ~(mask)) | (val))
+-#define snd_hdac_adsp_updatel(chip, reg, mask, val) \
+- snd_hdac_adsp_writel(chip, reg, \
+- (snd_hdac_adsp_readl(chip, reg) & ~(mask)) | (val))
+-#define snd_hdac_adsp_updateq(chip, reg, mask, val) \
+- snd_hdac_adsp_writeq(chip, reg, \
+- (snd_hdac_adsp_readq(chip, reg) & ~(mask)) | (val))
+-
+-#define snd_hdac_adsp_readb_poll(chip, reg, val, cond, delay_us, timeout_us) \
+- readb_poll_timeout((chip)->dsp_ba + (reg), val, cond, \
+- delay_us, timeout_us)
+-#define snd_hdac_adsp_readw_poll(chip, reg, val, cond, delay_us, timeout_us) \
+- readw_poll_timeout((chip)->dsp_ba + (reg), val, cond, \
+- delay_us, timeout_us)
+-#define snd_hdac_adsp_readl_poll(chip, reg, val, cond, delay_us, timeout_us) \
+- readl_poll_timeout((chip)->dsp_ba + (reg), val, cond, \
+- delay_us, timeout_us)
+-#define snd_hdac_adsp_readq_poll(chip, reg, val, cond, delay_us, timeout_us) \
+- readq_poll_timeout((chip)->dsp_ba + (reg), val, cond, \
+- delay_us, timeout_us)
+-
+ struct hdac_ext_device;
+
+ /* ops common to all codec drivers */
+diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
+index a0aed1a428a183..9a75590227f262 100644
+--- a/include/trace/events/afs.h
++++ b/include/trace/events/afs.h
+@@ -118,6 +118,8 @@ enum yfs_cm_operation {
+ */
+ #define afs_call_traces \
+ EM(afs_call_trace_alloc, "ALLOC") \
++ EM(afs_call_trace_async_abort, "ASYAB") \
++ EM(afs_call_trace_async_kill, "ASYKL") \
+ EM(afs_call_trace_free, "FREE ") \
+ EM(afs_call_trace_get, "GET ") \
+ EM(afs_call_trace_put, "PUT ") \
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index cc22596c7250cf..666fe1779ccc63 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -117,6 +117,7 @@
+ #define rxrpc_call_poke_traces \
+ EM(rxrpc_call_poke_abort, "Abort") \
+ EM(rxrpc_call_poke_complete, "Compl") \
++ EM(rxrpc_call_poke_conn_abort, "Conn-abort") \
+ EM(rxrpc_call_poke_error, "Error") \
+ EM(rxrpc_call_poke_idle, "Idle") \
+ EM(rxrpc_call_poke_set_timeout, "Set-timo") \
+@@ -282,6 +283,7 @@
+ EM(rxrpc_call_see_activate_client, "SEE act-clnt") \
+ EM(rxrpc_call_see_connect_failed, "SEE con-fail") \
+ EM(rxrpc_call_see_connected, "SEE connect ") \
++ EM(rxrpc_call_see_conn_abort, "SEE conn-abt") \
+ EM(rxrpc_call_see_disconnected, "SEE disconn ") \
+ EM(rxrpc_call_see_distribute_error, "SEE dist-err") \
+ EM(rxrpc_call_see_input, "SEE input ") \
+@@ -956,6 +958,29 @@ TRACE_EVENT(rxrpc_rx_abort,
+ __entry->abort_code)
+ );
+
++TRACE_EVENT(rxrpc_rx_conn_abort,
++ TP_PROTO(const struct rxrpc_connection *conn, const struct sk_buff *skb),
++
++ TP_ARGS(conn, skb),
++
++ TP_STRUCT__entry(
++ __field(unsigned int, conn)
++ __field(rxrpc_serial_t, serial)
++ __field(u32, abort_code)
++ ),
++
++ TP_fast_assign(
++ __entry->conn = conn->debug_id;
++ __entry->serial = rxrpc_skb(skb)->hdr.serial;
++ __entry->abort_code = skb->priority;
++ ),
++
++ TP_printk("C=%08x ABORT %08x ac=%d",
++ __entry->conn,
++ __entry->serial,
++ __entry->abort_code)
++ );
++
+ TRACE_EVENT(rxrpc_rx_challenge,
+ TP_PROTO(struct rxrpc_connection *conn, rxrpc_serial_t serial,
+ u32 version, u32 nonce, u32 min_level),
+diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
+index f28701500714f6..d73a97e3030a86 100644
+--- a/include/uapi/linux/xfrm.h
++++ b/include/uapi/linux/xfrm.h
+@@ -322,6 +322,7 @@ enum xfrm_attr_type_t {
+ XFRMA_MTIMER_THRESH, /* __u32 in seconds for input SA */
+ XFRMA_SA_DIR, /* __u8 */
+ XFRMA_NAT_KEEPALIVE_INTERVAL, /* __u32 in seconds for NAT keepalive */
++ XFRMA_SA_PCPU, /* __u32 */
+ __XFRMA_MAX
+
+ #define XFRMA_OUTPUT_MARK XFRMA_SET_MARK /* Compatibility */
+@@ -437,6 +438,7 @@ struct xfrm_userpolicy_info {
+ #define XFRM_POLICY_LOCALOK 1 /* Allow user to override global policy */
+ /* Automatically expand selector to include matching ICMP payloads. */
+ #define XFRM_POLICY_ICMP 2
++#define XFRM_POLICY_CPU_ACQUIRE 4
+ __u8 share;
+ };
+
+diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
+index 883510a3e8d075..874f9e2defd583 100644
+--- a/io_uring/uring_cmd.c
++++ b/io_uring/uring_cmd.c
+@@ -340,7 +340,7 @@ int io_uring_cmd_sock(struct io_uring_cmd *cmd, unsigned int issue_flags)
+ if (!prot || !prot->ioctl)
+ return -EOPNOTSUPP;
+
+- switch (cmd->sqe->cmd_op) {
++ switch (cmd->cmd_op) {
+ case SOCKET_URING_OP_SIOCINQ:
+ ret = prot->ioctl(sk, SIOCINQ, &arg);
+ if (ret)
+diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
+index e52b3ad231b9c4..93e48c7cad4eff 100644
+--- a/kernel/bpf/arena.c
++++ b/kernel/bpf/arena.c
+@@ -212,7 +212,7 @@ static u64 arena_map_mem_usage(const struct bpf_map *map)
+ struct vma_list {
+ struct vm_area_struct *vma;
+ struct list_head head;
+- atomic_t mmap_count;
++ refcount_t mmap_count;
+ };
+
+ static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
+@@ -222,7 +222,7 @@ static int remember_vma(struct bpf_arena *arena, struct vm_area_struct *vma)
+ vml = kmalloc(sizeof(*vml), GFP_KERNEL);
+ if (!vml)
+ return -ENOMEM;
+- atomic_set(&vml->mmap_count, 1);
++ refcount_set(&vml->mmap_count, 1);
+ vma->vm_private_data = vml;
+ vml->vma = vma;
+ list_add(&vml->head, &arena->vma_list);
+@@ -233,7 +233,7 @@ static void arena_vm_open(struct vm_area_struct *vma)
+ {
+ struct vma_list *vml = vma->vm_private_data;
+
+- atomic_inc(&vml->mmap_count);
++ refcount_inc(&vml->mmap_count);
+ }
+
+ static void arena_vm_close(struct vm_area_struct *vma)
+@@ -242,7 +242,7 @@ static void arena_vm_close(struct vm_area_struct *vma)
+ struct bpf_arena *arena = container_of(map, struct bpf_arena, map);
+ struct vma_list *vml = vma->vm_private_data;
+
+- if (!atomic_dec_and_test(&vml->mmap_count))
++ if (!refcount_dec_and_test(&vml->mmap_count))
+ return;
+ guard(mutex)(&arena->lock);
+ /* update link list under lock */
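[Editor's note] The arena change above moves mmap_count from atomic_t to refcount_t, the idiomatic type when a counter is a pure reference count (refcount_t additionally traps underflow and saturates on overflow). A plain-C11 analogue of the inc/dec_and_test lifecycle, minus those sanity checks (illustrative only):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static void ref_init(atomic_int *r) { atomic_store(r, 1); }
static void ref_get(atomic_int *r) { atomic_fetch_add(r, 1); }
/* True when this call dropped the last reference. */
static bool ref_put(atomic_int *r) { return atomic_fetch_sub(r, 1) == 1; }

int main(void)
{
	atomic_int mmap_count;

	ref_init(&mmap_count);			/* initial mmap */
	ref_get(&mmap_count);			/* vm_open: VMA duplicated */
	printf("%d\n", ref_put(&mmap_count));	/* 0: still referenced */
	printf("%d\n", ref_put(&mmap_count));	/* 1: free the vml here */
	return 0;
}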
+diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
+index c938dea5ddbf3a..050c784498abec 100644
+--- a/kernel/bpf/bpf_local_storage.c
++++ b/kernel/bpf/bpf_local_storage.c
+@@ -797,8 +797,12 @@ bpf_local_storage_map_alloc(union bpf_attr *attr,
+ smap->elem_size = offsetof(struct bpf_local_storage_elem,
+ sdata.data[attr->value_size]);
+
+- smap->bpf_ma = bpf_ma;
+- if (bpf_ma) {
++ /* In PREEMPT_RT, kmalloc(GFP_ATOMIC) is still not safe in a
++ * non-preemptible context. Thus, enforce all storages to use
++ * bpf_mem_alloc when CONFIG_PREEMPT_RT is enabled.
++ */
++ smap->bpf_ma = IS_ENABLED(CONFIG_PREEMPT_RT) ? true : bpf_ma;
++ if (smap->bpf_ma) {
+ err = bpf_mem_alloc_init(&smap->selem_ma, smap->elem_size, false);
+ if (err)
+ goto free_smap;
+diff --git a/kernel/bpf/bpf_struct_ops.c b/kernel/bpf/bpf_struct_ops.c
+index b3a2ce1e5e22ec..b70d0eef8a284d 100644
+--- a/kernel/bpf/bpf_struct_ops.c
++++ b/kernel/bpf/bpf_struct_ops.c
+@@ -311,6 +311,20 @@ void bpf_struct_ops_desc_release(struct bpf_struct_ops_desc *st_ops_desc)
+ kfree(arg_info);
+ }
+
++static bool is_module_member(const struct btf *btf, u32 id)
++{
++ const struct btf_type *t;
++
++ t = btf_type_resolve_ptr(btf, id, NULL);
++ if (!t)
++ return false;
++
++ if (!__btf_type_is_struct(t) && !btf_type_is_fwd(t))
++ return false;
++
++ return !strcmp(btf_name_by_offset(btf, t->name_off), "module");
++}
++
+ int bpf_struct_ops_desc_init(struct bpf_struct_ops_desc *st_ops_desc,
+ struct btf *btf,
+ struct bpf_verifier_log *log)
+@@ -390,6 +404,13 @@ int bpf_struct_ops_desc_init(struct bpf_struct_ops_desc *st_ops_desc,
+ goto errout;
+ }
+
++ if (!st_ops_ids[IDX_MODULE_ID] && is_module_member(btf, member->type)) {
++ pr_warn("'struct module' btf id not found. Is CONFIG_MODULES enabled? bpf_struct_ops '%s' needs module support.\n",
++ st_ops->name);
++ err = -EOPNOTSUPP;
++ goto errout;
++ }
++
+ func_proto = btf_type_resolve_func_ptr(btf,
+ member->type,
+ NULL);
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 41d20b7199c4af..a44f4be592be79 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -498,11 +498,6 @@ bool btf_type_is_void(const struct btf_type *t)
+ return t == &btf_void;
+ }
+
+-static bool btf_type_is_fwd(const struct btf_type *t)
+-{
+- return BTF_INFO_KIND(t->info) == BTF_KIND_FWD;
+-}
+-
+ static bool btf_type_is_datasec(const struct btf_type *t)
+ {
+ return BTF_INFO_KIND(t->info) == BTF_KIND_DATASEC;
+diff --git a/kernel/bpf/helpers.c b/kernel/bpf/helpers.c
+index 3d45ebe8afb48d..a05aeb34589641 100644
+--- a/kernel/bpf/helpers.c
++++ b/kernel/bpf/helpers.c
+@@ -1593,10 +1593,24 @@ void bpf_timer_cancel_and_free(void *val)
+ * To avoid these issues, punt to workqueue context when we are in a
+ * timer callback.
+ */
+- if (this_cpu_read(hrtimer_running))
++ if (this_cpu_read(hrtimer_running)) {
+ queue_work(system_unbound_wq, &t->cb.delete_work);
+- else
++ return;
++ }
++
++ if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
++ /* If the timer is running on another CPU, also use a kworker to
++ * wait for the completion of the timer instead of trying to
++ * acquire a sleepable lock in hrtimer_cancel() to wait for its
++ * completion.
++ */
++ if (hrtimer_try_to_cancel(&t->timer) >= 0)
++ kfree_rcu(t, cb.rcu);
++ else
++ queue_work(system_unbound_wq, &t->cb.delete_work);
++ } else {
+ bpf_timer_delete_work(&t->cb.delete_work);
++ }
+ }
+
+ /* This function is called by map_delete/update_elem for individual element and
+diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
+index ff5683a57f7712..3b2bdca9f1d4b0 100644
+--- a/kernel/dma/coherent.c
++++ b/kernel/dma/coherent.c
+@@ -330,7 +330,8 @@ int dma_init_global_coherent(phys_addr_t phys_addr, size_t size)
+ #include <linux/of_reserved_mem.h>
+
+ #ifdef CONFIG_DMA_GLOBAL_POOL
+-static struct reserved_mem *dma_reserved_default_memory __initdata;
++static phys_addr_t dma_reserved_default_memory_base __initdata;
++static phys_addr_t dma_reserved_default_memory_size __initdata;
+ #endif
+
+ static int rmem_dma_device_init(struct reserved_mem *rmem, struct device *dev)
+@@ -376,9 +377,10 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
+
+ #ifdef CONFIG_DMA_GLOBAL_POOL
+ if (of_get_flat_dt_prop(node, "linux,dma-default", NULL)) {
+- WARN(dma_reserved_default_memory,
++ WARN(dma_reserved_default_memory_size,
+ "Reserved memory: region for default DMA coherent area is redefined\n");
+- dma_reserved_default_memory = rmem;
++ dma_reserved_default_memory_base = rmem->base;
++ dma_reserved_default_memory_size = rmem->size;
+ }
+ #endif
+
+@@ -391,10 +393,10 @@ static int __init rmem_dma_setup(struct reserved_mem *rmem)
+ #ifdef CONFIG_DMA_GLOBAL_POOL
+ static int __init dma_init_reserved_memory(void)
+ {
+- if (!dma_reserved_default_memory)
++ if (!dma_reserved_default_memory_size)
+ return -ENOMEM;
+- return dma_init_global_coherent(dma_reserved_default_memory->base,
+- dma_reserved_default_memory->size);
++ return dma_init_global_coherent(dma_reserved_default_memory_base,
++ dma_reserved_default_memory_size);
+ }
+ core_initcall(dma_init_reserved_memory);
+ #endif /* CONFIG_DMA_GLOBAL_POOL */
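[Editor's note] The dma/coherent hunk above stops caching a pointer to the reserved_mem entry, whose backing storage is __initdata and may be discarded, and instead copies out the two values it needs. A tiny standalone illustration of that copy-out-before-free pattern (illustrative names):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct region { uint64_t base, size; };

static uint64_t saved_base, saved_size;

static void setup(const struct region *r)
{
	saved_base = r->base;	/* copy by value ... */
	saved_size = r->size;	/* ... the region struct may be freed */
}

int main(void)
{
	struct region *r = malloc(sizeof(*r));

	r->base = 0x80000000u;
	r->size = 0x100000u;
	setup(r);
	free(r);	/* like __initdata memory being discarded */
	printf("base=%#llx size=%#llx\n",
	       (unsigned long long)saved_base,
	       (unsigned long long)saved_size);
	return 0;
}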
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index df27d08a723269..501d8c2fedff40 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -10375,9 +10375,9 @@ static struct pmu perf_tracepoint = {
+ };
+
+ static int perf_tp_filter_match(struct perf_event *event,
+- struct perf_sample_data *data)
++ struct perf_raw_record *raw)
+ {
+- void *record = data->raw->frag.data;
++ void *record = raw->frag.data;
+
+ /* only top level events have filters set */
+ if (event->parent)
+@@ -10389,7 +10389,7 @@ static int perf_tp_filter_match(struct perf_event *event,
+ }
+
+ static int perf_tp_event_match(struct perf_event *event,
+- struct perf_sample_data *data,
++ struct perf_raw_record *raw,
+ struct pt_regs *regs)
+ {
+ if (event->hw.state & PERF_HES_STOPPED)
+@@ -10400,7 +10400,7 @@ static int perf_tp_event_match(struct perf_event *event,
+ if (event->attr.exclude_kernel && !user_mode(regs))
+ return 0;
+
+- if (!perf_tp_filter_match(event, data))
++ if (!perf_tp_filter_match(event, raw))
+ return 0;
+
+ return 1;
+@@ -10426,6 +10426,7 @@ EXPORT_SYMBOL_GPL(perf_trace_run_bpf_submit);
+ static void __perf_tp_event_target_task(u64 count, void *record,
+ struct pt_regs *regs,
+ struct perf_sample_data *data,
++ struct perf_raw_record *raw,
+ struct perf_event *event)
+ {
+ struct trace_entry *entry = record;
+@@ -10435,13 +10436,17 @@ static void __perf_tp_event_target_task(u64 count, void *record,
+ /* Cannot deliver synchronous signal to other task. */
+ if (event->attr.sigtrap)
+ return;
+- if (perf_tp_event_match(event, data, regs))
++ if (perf_tp_event_match(event, raw, regs)) {
++ perf_sample_data_init(data, 0, 0);
++ perf_sample_save_raw_data(data, event, raw);
+ perf_swevent_event(event, count, data, regs);
++ }
+ }
+
+ static void perf_tp_event_target_task(u64 count, void *record,
+ struct pt_regs *regs,
+ struct perf_sample_data *data,
++ struct perf_raw_record *raw,
+ struct perf_event_context *ctx)
+ {
+ unsigned int cpu = smp_processor_id();
+@@ -10449,15 +10454,15 @@ static void perf_tp_event_target_task(u64 count, void *record,
+ struct perf_event *event, *sibling;
+
+ perf_event_groups_for_cpu_pmu(event, &ctx->pinned_groups, cpu, pmu) {
+- __perf_tp_event_target_task(count, record, regs, data, event);
++ __perf_tp_event_target_task(count, record, regs, data, raw, event);
+ for_each_sibling_event(sibling, event)
+- __perf_tp_event_target_task(count, record, regs, data, sibling);
++ __perf_tp_event_target_task(count, record, regs, data, raw, sibling);
+ }
+
+ perf_event_groups_for_cpu_pmu(event, &ctx->flexible_groups, cpu, pmu) {
+- __perf_tp_event_target_task(count, record, regs, data, event);
++ __perf_tp_event_target_task(count, record, regs, data, raw, event);
+ for_each_sibling_event(sibling, event)
+- __perf_tp_event_target_task(count, record, regs, data, sibling);
++ __perf_tp_event_target_task(count, record, regs, data, raw, sibling);
+ }
+ }
+
+@@ -10475,15 +10480,10 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
+ },
+ };
+
+- perf_sample_data_init(&data, 0, 0);
+- perf_sample_save_raw_data(&data, &raw);
+-
+ perf_trace_buf_update(record, event_type);
+
+ hlist_for_each_entry_rcu(event, head, hlist_entry) {
+- if (perf_tp_event_match(event, &data, regs)) {
+- perf_swevent_event(event, count, &data, regs);
+-
++ if (perf_tp_event_match(event, &raw, regs)) {
+ /*
+ * Here use the same on-stack perf_sample_data,
+ * some members in data are event-specific and
+@@ -10493,7 +10493,8 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
+ * because data->sample_flags is set.
+ */
+ perf_sample_data_init(&data, 0, 0);
+- perf_sample_save_raw_data(&data, &raw);
++ perf_sample_save_raw_data(&data, event, &raw);
++ perf_swevent_event(event, count, &data, regs);
+ }
+ }
+
+@@ -10510,7 +10511,7 @@ void perf_tp_event(u16 event_type, u64 count, void *record, int entry_size,
+ goto unlock;
+
+ raw_spin_lock(&ctx->lock);
+- perf_tp_event_target_task(count, record, regs, &data, ctx);
++ perf_tp_event_target_task(count, record, regs, &data, &raw, ctx);
+ raw_spin_unlock(&ctx->lock);
+ unlock:
+ rcu_read_unlock();
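[Editor's note] The perf change above re-initializes the sample data per matched event and moves the PERF_SAMPLE_RAW check into perf_sample_save_raw_data() itself, so callers may invoke it unconditionally. A minimal sketch of pushing a consumer-requested guard into the helper (illustrative types and names):

#include <stdbool.h>
#include <stdio.h>

struct event { unsigned want_raw:1; };
struct sample { bool raw_saved; };

/* The helper checks the consumer's request itself, so callers can
 * invoke it once per matched event without their own guards. */
static void save_raw(struct sample *s, const struct event *e)
{
	if (!e->want_raw)
		return;		/* consumer never asked for raw data */
	if (s->raw_saved)
		return;		/* already populated for this sample */
	s->raw_saved = true;
}

int main(void)
{
	struct event e = { .want_raw = 1 };
	struct sample s = { 0 };

	save_raw(&s, &e);
	printf("%d\n", s.raw_saved);
	return 0;
}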
+diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
+index fe0272cd84a51a..a29df4b02a2ed9 100644
+--- a/kernel/irq/internals.h
++++ b/kernel/irq/internals.h
+@@ -441,10 +441,6 @@ static inline struct cpumask *irq_desc_get_pending_mask(struct irq_desc *desc)
+ {
+ return desc->pending_mask;
+ }
+-static inline bool handle_enforce_irqctx(struct irq_data *data)
+-{
+- return irqd_is_handle_enforce_irqctx(data);
+-}
+ bool irq_fixup_move_pending(struct irq_desc *desc, bool force_clear);
+ #else /* CONFIG_GENERIC_PENDING_IRQ */
+ static inline bool irq_can_move_pcntxt(struct irq_data *data)
+@@ -471,11 +467,12 @@ static inline bool irq_fixup_move_pending(struct irq_desc *desc, bool fclear)
+ {
+ return false;
+ }
++#endif /* !CONFIG_GENERIC_PENDING_IRQ */
++
+ static inline bool handle_enforce_irqctx(struct irq_data *data)
+ {
+- return false;
++ return irqd_is_handle_enforce_irqctx(data);
+ }
+-#endif /* !CONFIG_GENERIC_PENDING_IRQ */
+
+ #if !defined(CONFIG_IRQ_DOMAIN) || !defined(CONFIG_IRQ_DOMAIN_HIERARCHY)
+ static inline int irq_domain_activate_irq(struct irq_data *data, bool reserve)
+diff --git a/kernel/module/main.c b/kernel/module/main.c
+index 49b9bca9de12f7..93a07387af3b75 100644
+--- a/kernel/module/main.c
++++ b/kernel/module/main.c
+@@ -2583,7 +2583,10 @@ static noinline int do_init_module(struct module *mod)
+ #endif
+ ret = module_enable_rodata_ro(mod, true);
+ if (ret)
+- goto fail_mutex_unlock;
++ pr_warn("%s: module_enable_rodata_ro_after_init() returned %d, "
++ "ro_after_init data might still be writable\n",
++ mod->name, ret);
++
+ mod_tree_remove_init(mod);
+ module_arch_freeing_init(mod);
+ for_class_mod_mem_type(type, init) {
+@@ -2622,8 +2625,6 @@ static noinline int do_init_module(struct module *mod)
+
+ return 0;
+
+-fail_mutex_unlock:
+- mutex_unlock(&module_mutex);
+ fail_free_freeinit:
+ kfree(freeinit);
+ fail:
+diff --git a/kernel/padata.c b/kernel/padata.c
+index d899f34558afcc..22770372bdf329 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -47,6 +47,22 @@ struct padata_mt_job_state {
+ static void padata_free_pd(struct parallel_data *pd);
+ static void __init padata_mt_helper(struct work_struct *work);
+
++static inline void padata_get_pd(struct parallel_data *pd)
++{
++ refcount_inc(&pd->refcnt);
++}
++
++static inline void padata_put_pd_cnt(struct parallel_data *pd, int cnt)
++{
++ if (refcount_sub_and_test(cnt, &pd->refcnt))
++ padata_free_pd(pd);
++}
++
++static inline void padata_put_pd(struct parallel_data *pd)
++{
++ padata_put_pd_cnt(pd, 1);
++}
++
+ static int padata_index_to_cpu(struct parallel_data *pd, int cpu_index)
+ {
+ int cpu, target_cpu;
+@@ -206,7 +222,7 @@ int padata_do_parallel(struct padata_shell *ps,
+ if ((pinst->flags & PADATA_RESET))
+ goto out;
+
+- refcount_inc(&pd->refcnt);
++ padata_get_pd(pd);
+ padata->pd = pd;
+ padata->cb_cpu = *cb_cpu;
+
+@@ -336,8 +352,14 @@ static void padata_reorder(struct parallel_data *pd)
+ smp_mb();
+
+ reorder = per_cpu_ptr(pd->reorder_list, pd->cpu);
+- if (!list_empty(&reorder->list) && padata_find_next(pd, false))
++ if (!list_empty(&reorder->list) && padata_find_next(pd, false)) {
++ /*
++ * Other contexts (e.g. the padata_serial_worker) can finish the request.
++ * To avoid a use-after-free, take a pd reference here and put it after
++ * the reorder_work finishes.
++ */
++ padata_get_pd(pd);
+ queue_work(pinst->serial_wq, &pd->reorder_work);
++ }
+ }
+
+ static void invoke_padata_reorder(struct work_struct *work)
+@@ -348,6 +370,8 @@ static void invoke_padata_reorder(struct work_struct *work)
+ pd = container_of(work, struct parallel_data, reorder_work);
+ padata_reorder(pd);
+ local_bh_enable();
++ /* Pairs with the reference taken before queueing the reorder_work */
++ padata_put_pd(pd);
+ }
+
+ static void padata_serial_worker(struct work_struct *serial_work)
+@@ -380,8 +404,7 @@ static void padata_serial_worker(struct work_struct *serial_work)
+ }
+ local_bh_enable();
+
+- if (refcount_sub_and_test(cnt, &pd->refcnt))
+- padata_free_pd(pd);
++ padata_put_pd_cnt(pd, cnt);
+ }
+
+ /**
+@@ -688,8 +711,7 @@ static int padata_replace(struct padata_instance *pinst)
+ synchronize_rcu();
+
+ list_for_each_entry_continue_reverse(ps, &pinst->pslist, list)
+- if (refcount_dec_and_test(&ps->opd->refcnt))
+- padata_free_pd(ps->opd);
++ padata_put_pd(ps->opd);
+
+ pinst->flags &= ~PADATA_RESET;
+
+@@ -977,7 +999,7 @@ static ssize_t padata_sysfs_store(struct kobject *kobj, struct attribute *attr,
+
+ pinst = kobj2pinst(kobj);
+ pentry = attr2pentry(attr);
+- if (pentry->show)
++ if (pentry->store)
+ ret = pentry->store(pinst, attr, buf, count);
+
+ return ret;
+@@ -1128,11 +1150,16 @@ void padata_free_shell(struct padata_shell *ps)
+ if (!ps)
+ return;
+
++ /*
++ * Wait for all _do_serial calls to finish to avoid touching
++ * freed pd's and ps's.
++ */
++ synchronize_rcu();
++
+ mutex_lock(&ps->pinst->lock);
+ list_del(&ps->list);
+ pd = rcu_dereference_protected(ps->pd, 1);
+- if (refcount_dec_and_test(&pd->refcnt))
+- padata_free_pd(pd);
++ padata_put_pd(pd);
+ mutex_unlock(&ps->pinst->lock);
+
+ kfree(ps);
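A compact way to read the padata changes above: the raw refcount_inc()/refcount_sub_and_test() call sites become named get/put helpers so every acquire has an obvious matching release, and the reorder_work now holds its own reference for its whole lifetime. A hedged userspace analogue using C11 atomics (names and output are illustrative):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct pd { atomic_int refcnt; };

static struct pd *pd_alloc(void)
{
    struct pd *p = malloc(sizeof(*p));

    if (!p)
        abort();
    atomic_init(&p->refcnt, 1);          /* creator holds one reference */
    return p;
}

static void pd_get(struct pd *p)
{
    atomic_fetch_add(&p->refcnt, 1);
}

static void pd_put_cnt(struct pd *p, int cnt)
{
    if (atomic_fetch_sub(&p->refcnt, cnt) == cnt) {  /* dropped to zero */
        puts("freeing pd");
        free(p);
    }
}

static void pd_put(struct pd *p)
{
    pd_put_cnt(p, 1);
}

int main(void)
{
    struct pd *p = pd_alloc();

    pd_get(p);      /* e.g. taken before queueing the reorder work */
    pd_put(p);      /* dropped when the work finishes */
    pd_put(p);      /* creator's reference; frees here */
    return 0;
}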
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index e35829d360390f..b483fcea811b1a 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -608,7 +608,11 @@ int hibernation_platform_enter(void)
+
+ local_irq_disable();
+ system_state = SYSTEM_SUSPEND;
+- syscore_suspend();
++
++ error = syscore_suspend();
++ if (error)
++ goto Enable_irqs;
++
+ if (pm_wakeup_pending()) {
+ error = -EAGAIN;
+ goto Power_up;
+@@ -620,6 +624,7 @@ int hibernation_platform_enter(void)
+
+ Power_up:
+ syscore_resume();
++ Enable_irqs:
+ system_state = SYSTEM_RUNNING;
+ local_irq_enable();
+
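The hibernate fix above checks syscore_suspend()'s return value and unwinds through the new Enable_irqs label, so a failure no longer leaves interrupts disabled. A shape-only sketch of that label ordering, with stubs standing in for the kernel calls:

#include <stdio.h>

static int syscore_suspend_stub(void) { return -1; }   /* simulate failure */

static int platform_enter(void)
{
    int error;

    /* local_irq_disable() stand-in */
    error = syscore_suspend_stub();
    if (error)
        goto enable_irqs;

    /* ... enter the platform sleep state, then on wakeup: ... */
    /* syscore_resume() stand-in */
enable_irqs:
    /* local_irq_enable() stand-in: runs on both paths */
    printf("irqs re-enabled, error=%d\n", error);
    return error;
}

int main(void) { return platform_enter() ? 1 : 0; }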
+diff --git a/kernel/printk/internal.h b/kernel/printk/internal.h
+index 3fcb48502adbd8..5eef70000b439d 100644
+--- a/kernel/printk/internal.h
++++ b/kernel/printk/internal.h
+@@ -335,3 +335,9 @@ bool printk_get_next_message(struct printk_message *pmsg, u64 seq,
+ void console_prepend_dropped(struct printk_message *pmsg, unsigned long dropped);
+ void console_prepend_replay(struct printk_message *pmsg);
+ #endif
++
++#ifdef CONFIG_SMP
++bool is_printk_cpu_sync_owner(void);
++#else
++static inline bool is_printk_cpu_sync_owner(void) { return false; }
++#endif
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index beb808f4c367b9..7530df62ff7cbc 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -4892,6 +4892,11 @@ void console_try_replay_all(void)
+ static atomic_t printk_cpu_sync_owner = ATOMIC_INIT(-1);
+ static atomic_t printk_cpu_sync_nested = ATOMIC_INIT(0);
+
++bool is_printk_cpu_sync_owner(void)
++{
++ return (atomic_read(&printk_cpu_sync_owner) == raw_smp_processor_id());
++}
++
+ /**
+ * __printk_cpu_sync_wait() - Busy wait until the printk cpu-reentrant
+ * spinning lock is not owned by any CPU.
+diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
+index 2b35a9d3919d8b..e6198da7c7354a 100644
+--- a/kernel/printk/printk_safe.c
++++ b/kernel/printk/printk_safe.c
+@@ -43,10 +43,15 @@ bool is_printk_legacy_deferred(void)
+ /*
+ * The per-CPU variable @printk_context can be read safely in any
+ * context. CPU migration is always disabled when set.
++ *
++ * A context holding the printk_cpu_sync must not spin waiting for
++ * another CPU. For legacy printing, it could be the console_lock
++ * or the port lock.
+ */
+ return (force_legacy_kthread() ||
+ this_cpu_read(printk_context) ||
+- in_nmi());
++ in_nmi() ||
++ is_printk_cpu_sync_owner());
+ }
+
+ asmlinkage int vprintk(const char *fmt, va_list args)
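is_printk_cpu_sync_owner() added above simply compares the cpu-sync lock's recorded owner with the current CPU, and is_printk_legacy_deferred() uses it so a cpu-sync holder never spins on the console or port lock. A minimal userspace analogue of the owner test (names hypothetical, no real locking shown):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_int sync_owner = -1;                 /* -1 means unowned */

static bool is_sync_owner(int this_cpu)
{
    return atomic_load(&sync_owner) == this_cpu;
}

int main(void)
{
    atomic_store(&sync_owner, 3);
    printf("cpu3 owner? %d\n", is_sync_owner(3));  /* 1 */
    printf("cpu0 owner? %d\n", is_sync_owner(0));  /* 0 */
    return 0;
}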
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index d07dc87787dff3..aba41c69f09c42 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -2024,10 +2024,10 @@ void enqueue_task(struct rq *rq, struct task_struct *p, int flags)
+ */
+ uclamp_rq_inc(rq, p);
+
+- if (!(flags & ENQUEUE_RESTORE)) {
++ psi_enqueue(p, flags);
++
++ if (!(flags & ENQUEUE_RESTORE))
+ sched_info_enqueue(rq, p);
+- psi_enqueue(p, flags & ENQUEUE_MIGRATED);
+- }
+
+ if (sched_core_enabled(rq))
+ sched_core_enqueue(rq, p);
+@@ -2044,10 +2044,10 @@ inline bool dequeue_task(struct rq *rq, struct task_struct *p, int flags)
+ if (!(flags & DEQUEUE_NOCLOCK))
+ update_rq_clock(rq);
+
+- if (!(flags & DEQUEUE_SAVE)) {
++ if (!(flags & DEQUEUE_SAVE))
+ sched_info_dequeue(rq, p);
+- psi_dequeue(p, !(flags & DEQUEUE_SLEEP));
+- }
++
++ psi_dequeue(p, flags);
+
+ /*
+ * Must be before ->dequeue_task() because ->dequeue_task() can 'fail'
+@@ -6507,6 +6507,45 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
+ #define SM_PREEMPT 1
+ #define SM_RTLOCK_WAIT 2
+
++/*
++ * Helper function for __schedule()
++ *
++ * If the task has no signals pending, deactivate it;
++ * otherwise mark the task's __state as RUNNING.
++ */
++static bool try_to_block_task(struct rq *rq, struct task_struct *p,
++ unsigned long task_state)
++{
++ int flags = DEQUEUE_NOCLOCK;
++
++ if (signal_pending_state(task_state, p)) {
++ WRITE_ONCE(p->__state, TASK_RUNNING);
++ return false;
++ }
++
++ p->sched_contributes_to_load =
++ (task_state & TASK_UNINTERRUPTIBLE) &&
++ !(task_state & TASK_NOLOAD) &&
++ !(task_state & TASK_FROZEN);
++
++ if (unlikely(is_special_task_state(task_state)))
++ flags |= DEQUEUE_SPECIAL;
++
++ /*
++ * __schedule() ttwu()
++ * prev_state = prev->state; if (p->on_rq && ...)
++ * if (prev_state) goto out;
++ * p->on_rq = 0; smp_acquire__after_ctrl_dep();
++ * p->state = TASK_WAKING
++ *
++ * Where __schedule() and ttwu() have matching control dependencies.
++ *
++ * After this, schedule() must not care about p->state any more.
++ */
++ block_task(rq, p, flags);
++ return true;
++}
++
+ /*
+ * __schedule() is the main scheduler function.
+ *
+@@ -6554,7 +6593,6 @@ static void __sched notrace __schedule(int sched_mode)
+ * as a preemption by schedule_debug() and RCU.
+ */
+ bool preempt = sched_mode > SM_NONE;
+- bool block = false;
+ unsigned long *switch_count;
+ unsigned long prev_state;
+ struct rq_flags rf;
+@@ -6615,33 +6653,7 @@ static void __sched notrace __schedule(int sched_mode)
+ goto picked;
+ }
+ } else if (!preempt && prev_state) {
+- if (signal_pending_state(prev_state, prev)) {
+- WRITE_ONCE(prev->__state, TASK_RUNNING);
+- } else {
+- int flags = DEQUEUE_NOCLOCK;
+-
+- prev->sched_contributes_to_load =
+- (prev_state & TASK_UNINTERRUPTIBLE) &&
+- !(prev_state & TASK_NOLOAD) &&
+- !(prev_state & TASK_FROZEN);
+-
+- if (unlikely(is_special_task_state(prev_state)))
+- flags |= DEQUEUE_SPECIAL;
+-
+- /*
+- * __schedule() ttwu()
+- * prev_state = prev->state; if (p->on_rq && ...)
+- * if (prev_state) goto out;
+- * p->on_rq = 0; smp_acquire__after_ctrl_dep();
+- * p->state = TASK_WAKING
+- *
+- * Where __schedule() and ttwu() have matching control dependencies.
+- *
+- * After this, schedule() must not care about p->state any more.
+- */
+- block_task(rq, prev, flags);
+- block = true;
+- }
++ try_to_block_task(rq, prev, prev_state);
+ switch_count = &prev->nvcsw;
+ }
+
+@@ -6686,7 +6698,8 @@ static void __sched notrace __schedule(int sched_mode)
+
+ migrate_disable_switch(rq, prev);
+ psi_account_irqtime(rq, prev, next);
+- psi_sched_switch(prev, next, block);
++ psi_sched_switch(prev, next, !task_on_rq_queued(prev) ||
++ prev->se.sched_delayed);
+
+ trace_sched_switch(preempt, prev, next, prev_state);
+
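The __schedule() hunks above are a pure refactor: the signal check, the load-contribution bookkeeping, and block_task() move into try_to_block_task(), whose boolean result replaces the old 'block' local (psi_sched_switch() now derives the sleep state from task_on_rq_queued() and sched_delayed instead). A shape-only sketch of the helper's contract, not kernel code:

#include <stdbool.h>
#include <stdio.h>

enum { TASK_RUNNING = 0 };

struct task { int state; bool signal_pending; };

/* Returns true if the task was blocked (dequeued), false if a pending
 * signal forced it back to TASK_RUNNING. */
static bool try_to_block(struct task *t)
{
    if (t->signal_pending) {
        t->state = TASK_RUNNING;
        return false;
    }
    /* ... block_task(): dequeue and account load contribution ... */
    return true;
}

int main(void)
{
    struct task t = { .state = 1, .signal_pending = true };
    printf("blocked? %d\n", try_to_block(&t));   /* 0: signal won */
    return 0;
}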
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index 28c77904ea749f..e51d5ce730be15 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -83,7 +83,7 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
+
+ if (unlikely(sg_policy->limits_changed)) {
+ sg_policy->limits_changed = false;
+- sg_policy->need_freq_update = true;
++ sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS);
+ return true;
+ }
+
+@@ -96,7 +96,7 @@ static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
+ unsigned int next_freq)
+ {
+ if (sg_policy->need_freq_update)
+- sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS);
++ sg_policy->need_freq_update = false;
+ else if (sg_policy->next_freq == next_freq)
+ return false;
+
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 60be5f8bbe7115..65e7be64487202 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -5647,9 +5647,9 @@ static struct sched_entity *
+ pick_next_entity(struct rq *rq, struct cfs_rq *cfs_rq)
+ {
+ /*
+- * Enabling NEXT_BUDDY will affect latency but not fairness.
++ * Picking the ->next buddy will affect latency but not fairness.
+ */
+- if (sched_feat(NEXT_BUDDY) &&
++ if (sched_feat(PICK_BUDDY) &&
+ cfs_rq->next && entity_eligible(cfs_rq, cfs_rq->next)) {
+ /* ->next will never be delayed */
+ SCHED_WARN_ON(cfs_rq->next->sched_delayed);
+@@ -9418,6 +9418,8 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
+ int tsk_cache_hot;
+
+ lockdep_assert_rq_held(env->src_rq);
++ if (p->sched_task_hot)
++ p->sched_task_hot = 0;
+
+ /*
+ * We do not migrate tasks that are:
+@@ -9490,10 +9492,8 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
+
+ if (tsk_cache_hot <= 0 ||
+ env->sd->nr_balance_failed > env->sd->cache_nice_tries) {
+- if (tsk_cache_hot == 1) {
+- schedstat_inc(env->sd->lb_hot_gained[env->idle]);
+- schedstat_inc(p->stats.nr_forced_migrations);
+- }
++ if (tsk_cache_hot == 1)
++ p->sched_task_hot = 1;
+ return 1;
+ }
+
+@@ -9508,6 +9508,12 @@ static void detach_task(struct task_struct *p, struct lb_env *env)
+ {
+ lockdep_assert_rq_held(env->src_rq);
+
++ if (p->sched_task_hot) {
++ p->sched_task_hot = 0;
++ schedstat_inc(env->sd->lb_hot_gained[env->idle]);
++ schedstat_inc(p->stats.nr_forced_migrations);
++ }
++
+ deactivate_task(env->src_rq, p, DEQUEUE_NOCLOCK);
+ set_task_cpu(p, env->dst_cpu);
+ }
+@@ -9668,6 +9674,9 @@ static int detach_tasks(struct lb_env *env)
+
+ continue;
+ next:
++ if (p->sched_task_hot)
++ schedstat_inc(p->stats.nr_failed_migrations_hot);
++
+ list_move(&p->se.group_node, tasks);
+ }
+
+diff --git a/kernel/sched/features.h b/kernel/sched/features.h
+index 290874079f60d9..050d7503064e3a 100644
+--- a/kernel/sched/features.h
++++ b/kernel/sched/features.h
+@@ -31,6 +31,15 @@ SCHED_FEAT(PREEMPT_SHORT, true)
+ */
+ SCHED_FEAT(NEXT_BUDDY, false)
+
++/*
++ * Allow completely ignoring cfs_rq->next, which can be set from various
++ * places:
++ * - NEXT_BUDDY (wakeup preemption)
++ * - yield_to_task()
++ * - cgroup dequeue / pick
++ */
++SCHED_FEAT(PICK_BUDDY, true)
++
+ /*
+ * Consider buddies to be cache hot, decreases the likeliness of a
+ * cache buddy being migrated away, increases cache locality.
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index f2ef520513c4a2..5426969cf478a0 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -2095,34 +2095,6 @@ static inline const struct cpumask *task_user_cpus(struct task_struct *p)
+
+ #endif /* CONFIG_SMP */
+
+-#include "stats.h"
+-
+-#if defined(CONFIG_SCHED_CORE) && defined(CONFIG_SCHEDSTATS)
+-
+-extern void __sched_core_account_forceidle(struct rq *rq);
+-
+-static inline void sched_core_account_forceidle(struct rq *rq)
+-{
+- if (schedstat_enabled())
+- __sched_core_account_forceidle(rq);
+-}
+-
+-extern void __sched_core_tick(struct rq *rq);
+-
+-static inline void sched_core_tick(struct rq *rq)
+-{
+- if (sched_core_enabled(rq) && schedstat_enabled())
+- __sched_core_tick(rq);
+-}
+-
+-#else /* !(CONFIG_SCHED_CORE && CONFIG_SCHEDSTATS): */
+-
+-static inline void sched_core_account_forceidle(struct rq *rq) { }
+-
+-static inline void sched_core_tick(struct rq *rq) { }
+-
+-#endif /* !(CONFIG_SCHED_CORE && CONFIG_SCHEDSTATS) */
+-
+ #ifdef CONFIG_CGROUP_SCHED
+
+ /*
+@@ -3209,6 +3181,34 @@ extern void nohz_run_idle_balance(int cpu);
+ static inline void nohz_run_idle_balance(int cpu) { }
+ #endif
+
++#include "stats.h"
++
++#if defined(CONFIG_SCHED_CORE) && defined(CONFIG_SCHEDSTATS)
++
++extern void __sched_core_account_forceidle(struct rq *rq);
++
++static inline void sched_core_account_forceidle(struct rq *rq)
++{
++ if (schedstat_enabled())
++ __sched_core_account_forceidle(rq);
++}
++
++extern void __sched_core_tick(struct rq *rq);
++
++static inline void sched_core_tick(struct rq *rq)
++{
++ if (sched_core_enabled(rq) && schedstat_enabled())
++ __sched_core_tick(rq);
++}
++
++#else /* !(CONFIG_SCHED_CORE && CONFIG_SCHEDSTATS): */
++
++static inline void sched_core_account_forceidle(struct rq *rq) { }
++
++static inline void sched_core_tick(struct rq *rq) { }
++
++#endif /* !(CONFIG_SCHED_CORE && CONFIG_SCHEDSTATS) */
++
+ #ifdef CONFIG_IRQ_TIME_ACCOUNTING
+
+ struct irqtime {
+diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
+index 767e098a3bd132..6ade91bce63ee3 100644
+--- a/kernel/sched/stats.h
++++ b/kernel/sched/stats.h
+@@ -127,21 +127,29 @@ static inline void psi_account_irqtime(struct rq *rq, struct task_struct *curr,
+ * go through migration requeues. In this case, *sleeping* states need
+ * to be transferred.
+ */
+-static inline void psi_enqueue(struct task_struct *p, bool migrate)
++static inline void psi_enqueue(struct task_struct *p, int flags)
+ {
+ int clear = 0, set = 0;
+
+ if (static_branch_likely(&psi_disabled))
+ return;
+
++ /* Same runqueue, nothing changed for psi */
++ if (flags & ENQUEUE_RESTORE)
++ return;
++
++ /* psi_sched_switch() will handle the flags */
++ if (task_on_cpu(task_rq(p), p))
++ return;
++
+ if (p->se.sched_delayed) {
+ /* CPU migration of "sleeping" task */
+- SCHED_WARN_ON(!migrate);
++ SCHED_WARN_ON(!(flags & ENQUEUE_MIGRATED));
+ if (p->in_memstall)
+ set |= TSK_MEMSTALL;
+ if (p->in_iowait)
+ set |= TSK_IOWAIT;
+- } else if (migrate) {
++ } else if (flags & ENQUEUE_MIGRATED) {
+ /* CPU migration of runnable task */
+ set = TSK_RUNNING;
+ if (p->in_memstall)
+@@ -158,17 +166,14 @@ static inline void psi_enqueue(struct task_struct *p, bool migrate)
+ psi_task_change(p, clear, set);
+ }
+
+-static inline void psi_dequeue(struct task_struct *p, bool migrate)
++static inline void psi_dequeue(struct task_struct *p, int flags)
+ {
+ if (static_branch_likely(&psi_disabled))
+ return;
+
+- /*
+- * When migrating a task to another CPU, clear all psi
+- * state. The enqueue callback above will work it out.
+- */
+- if (migrate)
+- psi_task_change(p, p->psi_flags, 0);
++ /* Same runqueue, nothing changed for psi */
++ if (flags & DEQUEUE_SAVE)
++ return;
+
+ /*
+ * A voluntary sleep is a dequeue followed by a task switch. To
+@@ -176,6 +181,14 @@ static inline void psi_dequeue(struct task_struct *p, bool migrate)
+ * TSK_RUNNING and TSK_IOWAIT for us when it moves TSK_ONCPU.
+ * Do nothing here.
+ */
++ if (flags & DEQUEUE_SLEEP)
++ return;
++
++ /*
++ * When migrating a task to another CPU, clear all psi
++ * state. The enqueue callback above will work it out.
++ */
++ psi_task_change(p, p->psi_flags, 0);
+ }
+
+ static inline void psi_ttwu_dequeue(struct task_struct *p)
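The psi hunks above pass the raw enqueue/dequeue flags down instead of a single 'migrate' bool, so save/restore pairs that don't change psi state can bail out early. A hedged sketch of that flag filtering (the flag values here are illustrative, not necessarily the kernel's):

#include <stdio.h>

#define ENQUEUE_RESTORE  0x02
#define ENQUEUE_MIGRATED 0x40

static void psi_enqueue_sketch(int flags)
{
    if (flags & ENQUEUE_RESTORE)
        return;                 /* save/restore pair: psi state unchanged */
    if (flags & ENQUEUE_MIGRATED)
        puts("transfer runnable/sleeping state to the new CPU");
    else
        puts("account a fresh wakeup");
}

int main(void)
{
    psi_enqueue_sketch(ENQUEUE_RESTORE);    /* no output */
    psi_enqueue_sketch(ENQUEUE_MIGRATED);
    psi_enqueue_sketch(0);
    return 0;
}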
+diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
+index 1784ed1fb3fe5d..f9cb7896c1b966 100644
+--- a/kernel/sched/syscalls.c
++++ b/kernel/sched/syscalls.c
+@@ -1471,7 +1471,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
+ struct rq *rq, *p_rq;
+ int yielded = 0;
+
+- scoped_guard (irqsave) {
++ scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
+ rq = this_rq();
+
+ again:
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 50881898e758d8..449efaaa387a68 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -619,7 +619,8 @@ static const struct bpf_func_proto bpf_perf_event_read_value_proto = {
+
+ static __always_inline u64
+ __bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map,
+- u64 flags, struct perf_sample_data *sd)
++ u64 flags, struct perf_raw_record *raw,
++ struct perf_sample_data *sd)
+ {
+ struct bpf_array *array = container_of(map, struct bpf_array, map);
+ unsigned int cpu = smp_processor_id();
+@@ -644,6 +645,8 @@ __bpf_perf_event_output(struct pt_regs *regs, struct bpf_map *map,
+ if (unlikely(event->oncpu != cpu))
+ return -EOPNOTSUPP;
+
++ perf_sample_save_raw_data(sd, event, raw);
++
+ return perf_event_output(event, sd, regs);
+ }
+
+@@ -687,9 +690,8 @@ BPF_CALL_5(bpf_perf_event_output, struct pt_regs *, regs, struct bpf_map *, map,
+ }
+
+ perf_sample_data_init(sd, 0, 0);
+- perf_sample_save_raw_data(sd, &raw);
+
+- err = __bpf_perf_event_output(regs, map, flags, sd);
++ err = __bpf_perf_event_output(regs, map, flags, &raw, sd);
+ out:
+ this_cpu_dec(bpf_trace_nest_level);
+ preempt_enable();
+@@ -748,9 +750,8 @@ u64 bpf_event_output(struct bpf_map *map, u64 flags, void *meta, u64 meta_size,
+
+ perf_fetch_caller_regs(regs);
+ perf_sample_data_init(sd, 0, 0);
+- perf_sample_save_raw_data(sd, &raw);
+
+- ret = __bpf_perf_event_output(regs, map, flags, sd);
++ ret = __bpf_perf_event_output(regs, map, flags, &raw, sd);
+ out:
+ this_cpu_dec(bpf_event_output_nest_level);
+ preempt_enable();
+@@ -832,7 +833,7 @@ static int bpf_send_signal_common(u32 sig, enum pid_type type)
+ if (unlikely(is_global_init(current)))
+ return -EPERM;
+
+- if (irqs_disabled()) {
++ if (!preemptible()) {
+ /* Do an early check on signal validity. Otherwise,
+ * the error is lost in deferred irq_work.
+ */
+diff --git a/lib/rhashtable.c b/lib/rhashtable.c
+index 6c902639728b76..0e9a1d4cf89be0 100644
+--- a/lib/rhashtable.c
++++ b/lib/rhashtable.c
+@@ -584,10 +584,6 @@ static struct bucket_table *rhashtable_insert_one(
+ */
+ rht_assign_locked(bkt, obj);
+
+- atomic_inc(&ht->nelems);
+- if (rht_grow_above_75(ht, tbl))
+- schedule_work(&ht->run_work);
+-
+ return NULL;
+ }
+
+@@ -615,15 +611,23 @@ static void *rhashtable_try_insert(struct rhashtable *ht, const void *key,
+ new_tbl = rht_dereference_rcu(tbl->future_tbl, ht);
+ data = ERR_PTR(-EAGAIN);
+ } else {
++ bool inserted;
++
+ flags = rht_lock(tbl, bkt);
+ data = rhashtable_lookup_one(ht, bkt, tbl,
+ hash, key, obj);
+ new_tbl = rhashtable_insert_one(ht, bkt, tbl,
+ hash, obj, data);
++ inserted = data && !new_tbl;
++ if (inserted)
++ atomic_inc(&ht->nelems);
+ if (PTR_ERR(new_tbl) != -EEXIST)
+ data = ERR_CAST(new_tbl);
+
+ rht_unlock(tbl, bkt, flags);
++
++ if (inserted && rht_grow_above_75(ht, tbl))
++ schedule_work(&ht->run_work);
+ }
+ } while (!IS_ERR_OR_NULL(new_tbl));
+
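The rhashtable fix above only bumps nelems when the insert actually took effect (data && !new_tbl) and defers the grow check and worker kick until after the bucket lock is dropped. A hedged userspace sketch of that ordering, with a mutex standing in for the bucket lock:

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t bucket_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_int nelems;

static bool grow_above_75(int n) { return n > 75; }    /* stand-in threshold */
static void kick_grow_worker(void) { puts("grow scheduled"); }

static void insert_one(bool insert_succeeded)
{
    bool inserted;

    pthread_mutex_lock(&bucket_lock);
    inserted = insert_succeeded;        /* real code: data && !new_tbl */
    if (inserted)
        atomic_fetch_add(&nelems, 1);
    pthread_mutex_unlock(&bucket_lock);

    if (inserted && grow_above_75(atomic_load(&nelems)))
        kick_grow_worker();             /* deferred until after unlock */
}

int main(void)
{
    atomic_store(&nelems, 75);
    insert_one(true);                   /* crosses the threshold */
    return 0;
}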
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 53db98d2c4a1b3..ae1d184d035a4d 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -1139,6 +1139,7 @@ void mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
+ {
+ struct mem_cgroup *iter;
+ int ret = 0;
++ int i = 0;
+
+ BUG_ON(mem_cgroup_is_root(memcg));
+
+@@ -1147,8 +1148,12 @@ void mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
+ struct task_struct *task;
+
+ css_task_iter_start(&iter->css, CSS_TASK_ITER_PROCS, &it);
+- while (!ret && (task = css_task_iter_next(&it)))
++ while (!ret && (task = css_task_iter_next(&it))) {
++ /* Avoid potential softlockup warning */
++ if ((++i & 1023) == 0)
++ cond_resched();
+ ret = fn(task, arg);
++ }
+ css_task_iter_end(&it);
+ if (ret) {
+ mem_cgroup_iter_break(memcg, iter);
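The memcontrol hunk above (and the oom_kill one just after it) use the same cadence trick: a power-of-two mask keeps the per-iteration cost to one increment and one AND, yielding only every 1024th task. A userspace sketch with sched_yield() standing in for cond_resched():

#include <sched.h>
#include <stdio.h>

static void scan_tasks(int total)
{
    int i = 0;

    for (int t = 0; t < total; t++) {
        if ((++i & 1023) == 0)
            sched_yield();          /* cond_resched() stand-in */
        /* ... per-task work ... */
    }
}

int main(void)
{
    scan_tasks(1 << 20);
    puts("done");
    return 0;
}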
+diff --git a/mm/oom_kill.c b/mm/oom_kill.c
+index 4d7a0004df2cac..8aa712afd8ae1a 100644
+--- a/mm/oom_kill.c
++++ b/mm/oom_kill.c
+@@ -45,6 +45,7 @@
+ #include <linux/init.h>
+ #include <linux/mmu_notifier.h>
+ #include <linux/cred.h>
++#include <linux/nmi.h>
+
+ #include <asm/tlb.h>
+ #include "internal.h"
+@@ -431,10 +432,15 @@ static void dump_tasks(struct oom_control *oc)
+ mem_cgroup_scan_tasks(oc->memcg, dump_task, oc);
+ else {
+ struct task_struct *p;
++ int i = 0;
+
+ rcu_read_lock();
+- for_each_process(p)
++ for_each_process(p) {
++ /* Avoid potential softlockup warning */
++ if ((++i & 1023) == 0)
++ touch_softlockup_watchdog();
+ dump_task(p, oc);
++ }
+ rcu_read_unlock();
+ }
+ }
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index d6f9fae06a9d81..aa6c714892ec9d 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -467,7 +467,7 @@ static int ax25_ctl_ioctl(const unsigned int cmd, void __user *arg)
+ goto out_put;
+ }
+
+-static void ax25_fillin_cb_from_dev(ax25_cb *ax25, ax25_dev *ax25_dev)
++static void ax25_fillin_cb_from_dev(ax25_cb *ax25, const ax25_dev *ax25_dev)
+ {
+ ax25->rtt = msecs_to_jiffies(ax25_dev->values[AX25_VALUES_T1]) / 2;
+ ax25->t1 = msecs_to_jiffies(ax25_dev->values[AX25_VALUES_T1]);
+@@ -677,22 +677,22 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
+- rtnl_lock();
+- dev = __dev_get_by_name(&init_net, devname);
++ rcu_read_lock();
++ dev = dev_get_by_name_rcu(&init_net, devname);
+ if (!dev) {
+- rtnl_unlock();
++ rcu_read_unlock();
+ res = -ENODEV;
+ break;
+ }
+
+ ax25->ax25_dev = ax25_dev_ax25dev(dev);
+ if (!ax25->ax25_dev) {
+- rtnl_unlock();
++ rcu_read_unlock();
+ res = -ENODEV;
+ break;
+ }
+ ax25_fillin_cb(ax25, ax25->ax25_dev);
+- rtnl_unlock();
++ rcu_read_unlock();
+ break;
+
+ default:
+diff --git a/net/ax25/ax25_dev.c b/net/ax25/ax25_dev.c
+index 9efd6690b34436..3733c0254a5084 100644
+--- a/net/ax25/ax25_dev.c
++++ b/net/ax25/ax25_dev.c
+@@ -90,7 +90,7 @@ void ax25_dev_device_up(struct net_device *dev)
+
+ spin_lock_bh(&ax25_dev_lock);
+ list_add(&ax25_dev->list, &ax25_dev_list);
+- dev->ax25_ptr = ax25_dev;
++ rcu_assign_pointer(dev->ax25_ptr, ax25_dev);
+ spin_unlock_bh(&ax25_dev_lock);
+
+ ax25_register_dev_sysctl(ax25_dev);
+@@ -125,7 +125,7 @@ void ax25_dev_device_down(struct net_device *dev)
+ }
+ }
+
+- dev->ax25_ptr = NULL;
++ RCU_INIT_POINTER(dev->ax25_ptr, NULL);
+ spin_unlock_bh(&ax25_dev_lock);
+ netdev_put(dev, &ax25_dev->dev_tracker);
+ ax25_dev_put(ax25_dev);
+diff --git a/net/ax25/ax25_ip.c b/net/ax25/ax25_ip.c
+index 36249776c021e7..215d4ccf12b913 100644
+--- a/net/ax25/ax25_ip.c
++++ b/net/ax25/ax25_ip.c
+@@ -122,6 +122,7 @@ netdev_tx_t ax25_ip_xmit(struct sk_buff *skb)
+ if (dev == NULL)
+ dev = skb->dev;
+
++ rcu_read_lock();
+ if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL) {
+ kfree_skb(skb);
+ goto put;
+@@ -202,7 +203,7 @@ netdev_tx_t ax25_ip_xmit(struct sk_buff *skb)
+ ax25_queue_xmit(skb, dev);
+
+ put:
+-
++ rcu_read_unlock();
+ ax25_route_lock_unuse();
+ return NETDEV_TX_OK;
+ }
+diff --git a/net/ax25/ax25_out.c b/net/ax25/ax25_out.c
+index 3db76d2470e954..8bca2ace98e51b 100644
+--- a/net/ax25/ax25_out.c
++++ b/net/ax25/ax25_out.c
+@@ -39,10 +39,14 @@ ax25_cb *ax25_send_frame(struct sk_buff *skb, int paclen, const ax25_address *sr
+ * specified.
+ */
+ if (paclen == 0) {
+- if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL)
++ rcu_read_lock();
++ ax25_dev = ax25_dev_ax25dev(dev);
++ if (!ax25_dev) {
++ rcu_read_unlock();
+ return NULL;
+-
++ }
+ paclen = ax25_dev->values[AX25_VALUES_PACLEN];
++ rcu_read_unlock();
+ }
+
+ /*
+@@ -53,13 +57,19 @@ ax25_cb *ax25_send_frame(struct sk_buff *skb, int paclen, const ax25_address *sr
+ return ax25; /* It already existed */
+ }
+
+- if ((ax25_dev = ax25_dev_ax25dev(dev)) == NULL)
++ rcu_read_lock();
++ ax25_dev = ax25_dev_ax25dev(dev);
++ if (!ax25_dev) {
++ rcu_read_unlock();
+ return NULL;
++ }
+
+- if ((ax25 = ax25_create_cb()) == NULL)
++ if ((ax25 = ax25_create_cb()) == NULL) {
++ rcu_read_unlock();
+ return NULL;
+-
++ }
+ ax25_fillin_cb(ax25, ax25_dev);
++ rcu_read_unlock();
+
+ ax25->source_addr = *src;
+ ax25->dest_addr = *dest;
+@@ -358,7 +368,9 @@ void ax25_queue_xmit(struct sk_buff *skb, struct net_device *dev)
+ {
+ unsigned char *ptr;
+
++ rcu_read_lock();
+ skb->protocol = ax25_type_trans(skb, ax25_fwd_dev(dev));
++ rcu_read_unlock();
+
+ ptr = skb_push(skb, 1);
+ *ptr = 0x00; /* KISS */
+diff --git a/net/ax25/ax25_route.c b/net/ax25/ax25_route.c
+index b7c4d656a94b71..69de75db0c9c21 100644
+--- a/net/ax25/ax25_route.c
++++ b/net/ax25/ax25_route.c
+@@ -406,6 +406,7 @@ int ax25_rt_autobind(ax25_cb *ax25, ax25_address *addr)
+ ax25_route_lock_unuse();
+ return -EHOSTUNREACH;
+ }
++ rcu_read_lock();
+ if ((ax25->ax25_dev = ax25_dev_ax25dev(ax25_rt->dev)) == NULL) {
+ err = -EHOSTUNREACH;
+ goto put;
+@@ -442,6 +443,7 @@ int ax25_rt_autobind(ax25_cb *ax25, ax25_address *addr)
+ }
+
+ put:
++ rcu_read_unlock();
+ ax25_route_lock_unuse();
+ return err;
+ }
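The ax25 series converts dev->ax25_ptr to RCU: writers publish with rcu_assign_pointer() and retire with RCU_INIT_POINTER(..., NULL), while readers bracket every ax25_dev_ax25dev() dereference with rcu_read_lock()/rcu_read_unlock() and must tolerate NULL. A hedged userspace analogue of the publish/read halves using C11 release/acquire (grace-period handling omitted, names hypothetical):

#include <stdatomic.h>
#include <stdio.h>

struct ax25_priv { int paclen; };

static _Atomic(struct ax25_priv *) dev_ax25_ptr;

static void publish(struct ax25_priv *p)      /* rcu_assign_pointer() analogue */
{
    atomic_store_explicit(&dev_ax25_ptr, p, memory_order_release);
}

static int read_paclen(void)                  /* reader side, may see NULL */
{
    struct ax25_priv *p =
        atomic_load_explicit(&dev_ax25_ptr, memory_order_acquire);
    return p ? p->paclen : -1;
}

int main(void)
{
    static struct ax25_priv priv = { .paclen = 256 };
    printf("%d\n", read_paclen());            /* -1: not published yet */
    publish(&priv);
    printf("%d\n", read_paclen());            /* 256 */
    return 0;
}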
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 1867a6a8d76da9..2e0fe38d0e877d 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -1279,7 +1279,9 @@ int dev_change_name(struct net_device *dev, const char *newname)
+ rollback:
+ ret = device_rename(&dev->dev, dev->name);
+ if (ret) {
++ write_seqlock_bh(&netdev_rename_lock);
+ memcpy(dev->name, oldname, IFNAMSIZ);
++ write_sequnlock_bh(&netdev_rename_lock);
+ WRITE_ONCE(dev->name_assign_type, old_assign_type);
+ up_write(&devnet_rename_sem);
+ return ret;
+@@ -2134,8 +2136,8 @@ EXPORT_SYMBOL_GPL(net_dec_egress_queue);
+ #endif
+
+ #ifdef CONFIG_NET_CLS_ACT
+-DEFINE_STATIC_KEY_FALSE(tcf_bypass_check_needed_key);
+-EXPORT_SYMBOL(tcf_bypass_check_needed_key);
++DEFINE_STATIC_KEY_FALSE(tcf_sw_enabled_key);
++EXPORT_SYMBOL(tcf_sw_enabled_key);
+ #endif
+
+ DEFINE_STATIC_KEY_FALSE(netstamp_needed_key);
+@@ -4028,10 +4030,13 @@ static int tc_run(struct tcx_entry *entry, struct sk_buff *skb,
+ if (!miniq)
+ return ret;
+
+- if (static_branch_unlikely(&tcf_bypass_check_needed_key)) {
+- if (tcf_block_bypass_sw(miniq->block))
+- return ret;
+- }
++ /* Global bypass */
++ if (!static_branch_likely(&tcf_sw_enabled_key))
++ return ret;
++
++ /* Block-wise bypass */
++ if (tcf_block_bypass_sw(miniq->block))
++ return ret;
+
+ tc_skb_cb(skb)->mru = 0;
+ tc_skb_cb(skb)->post_ct = false;
+@@ -9590,6 +9595,10 @@ static int dev_xdp_attach(struct net_device *dev, struct netlink_ext_ack *extack
+ NL_SET_ERR_MSG(extack, "Program bound to different device");
+ return -EINVAL;
+ }
++ if (bpf_prog_is_dev_bound(new_prog->aux) && mode == XDP_MODE_SKB) {
++ NL_SET_ERR_MSG(extack, "Can't attach device-bound programs in generic mode");
++ return -EINVAL;
++ }
+ if (new_prog->expected_attach_type == BPF_XDP_DEVMAP) {
+ NL_SET_ERR_MSG(extack, "BPF_XDP_DEVMAP programs can not be attached to a device");
+ return -EINVAL;
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 46da488ff0703f..a2f990bf51e5e1 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -7662,7 +7662,7 @@ static const struct bpf_func_proto bpf_sock_ops_load_hdr_opt_proto = {
+ .gpl_only = false,
+ .ret_type = RET_INTEGER,
+ .arg1_type = ARG_PTR_TO_CTX,
+- .arg2_type = ARG_PTR_TO_MEM,
++ .arg2_type = ARG_PTR_TO_MEM | MEM_WRITE,
+ .arg3_type = ARG_CONST_SIZE,
+ .arg4_type = ARG_ANYTHING,
+ };
+diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
+index 86a2476678c484..5dd54a81339806 100644
+--- a/net/core/sysctl_net_core.c
++++ b/net/core/sysctl_net_core.c
+@@ -303,7 +303,7 @@ static int proc_do_dev_weight(const struct ctl_table *table, int write,
+ int ret, weight;
+
+ mutex_lock(&dev_weight_mutex);
+- ret = proc_dointvec(table, write, buffer, lenp, ppos);
++ ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos);
+ if (!ret && write) {
+ weight = READ_ONCE(weight_p);
+ WRITE_ONCE(net_hotdata.dev_rx_weight, weight * dev_weight_rx_bias);
+@@ -396,6 +396,7 @@ static struct ctl_table net_core_table[] = {
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_do_dev_weight,
++ .extra1 = SYSCTL_ONE,
+ },
+ {
+ .procname = "dev_weight_rx_bias",
+@@ -403,6 +404,7 @@ static struct ctl_table net_core_table[] = {
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_do_dev_weight,
++ .extra1 = SYSCTL_ONE,
+ },
+ {
+ .procname = "dev_weight_tx_bias",
+@@ -410,6 +412,7 @@ static struct ctl_table net_core_table[] = {
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_do_dev_weight,
++ .extra1 = SYSCTL_ONE,
+ },
+ {
+ .procname = "netdev_max_backlog",
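The sysctl hunk above switches the three dev_weight knobs to proc_dointvec_minmax() with .extra1 = SYSCTL_ONE, so a write of 0 (which would stall packet processing) is rejected instead of silently accepted. A simplified userspace sketch of that floor check:

#include <stdio.h>
#include <stdlib.h>

static int parse_weight(const char *buf, int min, int *out)
{
    char *end;
    long v = strtol(buf, &end, 10);

    if (end == buf || v < min)
        return -1;              /* -EINVAL in the kernel path */
    *out = (int)v;
    return 0;
}

int main(void)
{
    int w;

    printf("%d\n", parse_weight("0", 1, &w));   /* rejected: below floor */
    printf("%d\n", parse_weight("64", 1, &w));  /* accepted */
    return 0;
}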
+diff --git a/net/ethtool/ioctl.c b/net/ethtool/ioctl.c
+index 65cfe76dafbe2e..8b9692c35e7067 100644
+--- a/net/ethtool/ioctl.c
++++ b/net/ethtool/ioctl.c
+@@ -992,7 +992,13 @@ static noinline_for_stack int ethtool_set_rxnfc(struct net_device *dev,
+ if (rc)
+ return rc;
+
+- if (ops->get_rxfh) {
++ /* Nonzero ring with RSS only makes sense if NIC adds them together */
++ if (cmd == ETHTOOL_SRXCLSRLINS && info.fs.flow_type & FLOW_RSS &&
++ !ops->cap_rss_rxnfc_adds &&
++ ethtool_get_flow_spec_ring(info.fs.ring_cookie))
++ return -EINVAL;
++
++ if (cmd == ETHTOOL_SRXFH && ops->get_rxfh) {
+ struct ethtool_rxfh_param rxfh = {};
+
+ rc = ops->get_rxfh(dev, &rxfh);
+diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c
+index e3f0ef6b851bb4..4d18dc29b30438 100644
+--- a/net/ethtool/netlink.c
++++ b/net/ethtool/netlink.c
+@@ -90,7 +90,7 @@ int ethnl_ops_begin(struct net_device *dev)
+ pm_runtime_get_sync(dev->dev.parent);
+
+ if (!netif_device_present(dev) ||
+- dev->reg_state == NETREG_UNREGISTERING) {
++ dev->reg_state >= NETREG_UNREGISTERING) {
+ ret = -ENODEV;
+ goto err;
+ }
+diff --git a/net/hsr/hsr_forward.c b/net/hsr/hsr_forward.c
+index 40c5fbbd155d66..c0217476eb17f9 100644
+--- a/net/hsr/hsr_forward.c
++++ b/net/hsr/hsr_forward.c
+@@ -688,9 +688,12 @@ static int fill_frame_info(struct hsr_frame_info *frame,
+ frame->is_vlan = true;
+
+ if (frame->is_vlan) {
+- if (skb->mac_len < offsetofend(struct hsr_vlan_ethhdr, vlanhdr))
++ /* Note: skb->mac_len might be wrong here. */
++ if (!pskb_may_pull(skb,
++ skb_mac_offset(skb) +
++ offsetofend(struct hsr_vlan_ethhdr, vlanhdr)))
+ return -EINVAL;
+- vlan_hdr = (struct hsr_vlan_ethhdr *)ethhdr;
++ vlan_hdr = (struct hsr_vlan_ethhdr *)skb_mac_header(skb);
+ proto = vlan_hdr->vlanhdr.h_vlan_encapsulated_proto;
+ /* FIXME: */
+ netdev_warn_once(skb->dev, "VLAN not yet supported");
+diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
+index 80c4ea0e12f48a..e0d94270da28a3 100644
+--- a/net/ipv4/esp4_offload.c
++++ b/net/ipv4/esp4_offload.c
+@@ -53,9 +53,9 @@ static struct sk_buff *esp4_gro_receive(struct list_head *head,
+ if (sp->len == XFRM_MAX_DEPTH)
+ goto out_reset;
+
+- x = xfrm_state_lookup(dev_net(skb->dev), skb->mark,
+- (xfrm_address_t *)&ip_hdr(skb)->daddr,
+- spi, IPPROTO_ESP, AF_INET);
++ x = xfrm_input_state_lookup(dev_net(skb->dev), skb->mark,
++ (xfrm_address_t *)&ip_hdr(skb)->daddr,
++ spi, IPPROTO_ESP, AF_INET);
+
+ if (unlikely(x && x->dir && x->dir != XFRM_SA_DIR_IN)) {
+ /* non-offload path will record the error and audit log */
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index c3ad41573b33ea..932bd775fc2682 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -312,7 +312,6 @@ static bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt,
+ struct dst_entry *dst = &rt->dst;
+ struct inet_peer *peer;
+ bool rc = true;
+- int vif;
+
+ if (!apply_ratelimit)
+ return true;
+@@ -321,12 +320,12 @@ static bool icmpv4_xrlim_allow(struct net *net, struct rtable *rt,
+ if (dst->dev && (dst->dev->flags&IFF_LOOPBACK))
+ goto out;
+
+- vif = l3mdev_master_ifindex(dst->dev);
+- peer = inet_getpeer_v4(net->ipv4.peers, fl4->daddr, vif, 1);
++ rcu_read_lock();
++ peer = inet_getpeer_v4(net->ipv4.peers, fl4->daddr,
++ l3mdev_master_ifindex_rcu(dst->dev));
+ rc = inet_peer_xrlim_allow(peer,
+ READ_ONCE(net->ipv4.sysctl_icmp_ratelimit));
+- if (peer)
+- inet_putpeer(peer);
++ rcu_read_unlock();
+ out:
+ if (!rc)
+ __ICMP_INC_STATS(net, ICMP_MIB_RATELIMITHOST);
+diff --git a/net/ipv4/inetpeer.c b/net/ipv4/inetpeer.c
+index 5bd7599634517a..9c5ffe3b5f776f 100644
+--- a/net/ipv4/inetpeer.c
++++ b/net/ipv4/inetpeer.c
+@@ -95,6 +95,7 @@ static struct inet_peer *lookup(const struct inetpeer_addr *daddr,
+ {
+ struct rb_node **pp, *parent, *next;
+ struct inet_peer *p;
++ u32 now;
+
+ pp = &base->rb_root.rb_node;
+ parent = NULL;
+@@ -108,8 +109,9 @@ static struct inet_peer *lookup(const struct inetpeer_addr *daddr,
+ p = rb_entry(parent, struct inet_peer, rb_node);
+ cmp = inetpeer_addr_cmp(daddr, &p->daddr);
+ if (cmp == 0) {
+- if (!refcount_inc_not_zero(&p->refcnt))
+- break;
++ now = jiffies;
++ if (READ_ONCE(p->dtime) != now)
++ WRITE_ONCE(p->dtime, now);
+ return p;
+ }
+ if (gc_stack) {
+@@ -155,9 +157,6 @@ static void inet_peer_gc(struct inet_peer_base *base,
+ for (i = 0; i < gc_cnt; i++) {
+ p = gc_stack[i];
+
+- /* The READ_ONCE() pairs with the WRITE_ONCE()
+- * in inet_putpeer()
+- */
+ delta = (__u32)jiffies - READ_ONCE(p->dtime);
+
+ if (delta < ttl || !refcount_dec_if_one(&p->refcnt))
+@@ -173,31 +172,23 @@ static void inet_peer_gc(struct inet_peer_base *base,
+ }
+ }
+
++/* Must be called under RCU : No refcount change is done here. */
+ struct inet_peer *inet_getpeer(struct inet_peer_base *base,
+- const struct inetpeer_addr *daddr,
+- int create)
++ const struct inetpeer_addr *daddr)
+ {
+ struct inet_peer *p, *gc_stack[PEER_MAX_GC];
+ struct rb_node **pp, *parent;
+ unsigned int gc_cnt, seq;
+- int invalidated;
+
+ /* Attempt a lockless lookup first.
+ * Because of a concurrent writer, we might not find an existing entry.
+ */
+- rcu_read_lock();
+ seq = read_seqbegin(&base->lock);
+ p = lookup(daddr, base, seq, NULL, &gc_cnt, &parent, &pp);
+- invalidated = read_seqretry(&base->lock, seq);
+- rcu_read_unlock();
+
+ if (p)
+ return p;
+
+- /* If no writer did a change during our lookup, we can return early. */
+- if (!create && !invalidated)
+- return NULL;
+-
+ /* retry an exact lookup, taking the lock before.
+ * At least, nodes should be hot in our cache.
+ */
+@@ -206,12 +197,12 @@ struct inet_peer *inet_getpeer(struct inet_peer_base *base,
+
+ gc_cnt = 0;
+ p = lookup(daddr, base, seq, gc_stack, &gc_cnt, &parent, &pp);
+- if (!p && create) {
++ if (!p) {
+ p = kmem_cache_alloc(peer_cachep, GFP_ATOMIC);
+ if (p) {
+ p->daddr = *daddr;
+ p->dtime = (__u32)jiffies;
+- refcount_set(&p->refcnt, 2);
++ refcount_set(&p->refcnt, 1);
+ atomic_set(&p->rid, 0);
+ p->metrics[RTAX_LOCK-1] = INETPEER_METRICS_NEW;
+ p->rate_tokens = 0;
+@@ -236,15 +227,9 @@ EXPORT_SYMBOL_GPL(inet_getpeer);
+
+ void inet_putpeer(struct inet_peer *p)
+ {
+- /* The WRITE_ONCE() pairs with itself (we run lockless)
+- * and the READ_ONCE() in inet_peer_gc()
+- */
+- WRITE_ONCE(p->dtime, (__u32)jiffies);
+-
+ if (refcount_dec_and_test(&p->refcnt))
+ call_rcu(&p->rcu, inetpeer_free_rcu);
+ }
+-EXPORT_SYMBOL_GPL(inet_putpeer);
+
+ /*
+ * Check transmit rate limitation for given message.
+diff --git a/net/ipv4/ip_fragment.c b/net/ipv4/ip_fragment.c
+index a92664a5ef2efe..9ca0a183a55ffa 100644
+--- a/net/ipv4/ip_fragment.c
++++ b/net/ipv4/ip_fragment.c
+@@ -82,15 +82,20 @@ static int ip_frag_reasm(struct ipq *qp, struct sk_buff *skb,
+ static void ip4_frag_init(struct inet_frag_queue *q, const void *a)
+ {
+ struct ipq *qp = container_of(q, struct ipq, q);
+- struct net *net = q->fqdir->net;
+-
+ const struct frag_v4_compare_key *key = a;
++ struct net *net = q->fqdir->net;
++ struct inet_peer *p = NULL;
+
+ q->key.v4 = *key;
+ qp->ecn = 0;
+- qp->peer = q->fqdir->max_dist ?
+- inet_getpeer_v4(net->ipv4.peers, key->saddr, key->vif, 1) :
+- NULL;
++ if (q->fqdir->max_dist) {
++ rcu_read_lock();
++ p = inet_getpeer_v4(net->ipv4.peers, key->saddr, key->vif);
++ if (p && !refcount_inc_not_zero(&p->refcnt))
++ p = NULL;
++ rcu_read_unlock();
++ }
++ qp->peer = p;
+ }
+
+ static void ip4_frag_free(struct inet_frag_queue *q)
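The inetpeer rework above changes the contract of inet_getpeer(): it must now be called under rcu_read_lock(), returns a pointer without taking a reference, and callers that need the peer past the read section (as ip4_frag_init() does) upgrade with refcount_inc_not_zero(). A hedged userspace analogue of that inc-not-zero upgrade:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

struct peer { atomic_int refcnt; };

/* Take a reference only if the object is not already on its way out. */
static bool peer_tryget(struct peer *p)
{
    int old = atomic_load(&p->refcnt);

    while (old != 0) {
        if (atomic_compare_exchange_weak(&p->refcnt, &old, old + 1))
            return true;        /* reference taken */
    }
    return false;               /* refcount already hit zero */
}

int main(void)
{
    struct peer live = { .refcnt = 1 }, dead = { .refcnt = 0 };

    printf("live: %d, dead: %d\n", peer_tryget(&live), peer_tryget(&dead));
    return 0;
}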
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 449a2ac40bdc00..de0d9cc7806a15 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -817,7 +817,7 @@ static void ipmr_update_thresholds(struct mr_table *mrt, struct mr_mfc *cache,
+ cache->mfc_un.res.maxvif = vifi + 1;
+ }
+ }
+- cache->mfc_un.res.lastuse = jiffies;
++ WRITE_ONCE(cache->mfc_un.res.lastuse, jiffies);
+ }
+
+ static int vif_add(struct net *net, struct mr_table *mrt,
+@@ -1667,9 +1667,9 @@ int ipmr_ioctl(struct sock *sk, int cmd, void *arg)
+ rcu_read_lock();
+ c = ipmr_cache_find(mrt, sr->src.s_addr, sr->grp.s_addr);
+ if (c) {
+- sr->pktcnt = c->_c.mfc_un.res.pkt;
+- sr->bytecnt = c->_c.mfc_un.res.bytes;
+- sr->wrong_if = c->_c.mfc_un.res.wrong_if;
++ sr->pktcnt = atomic_long_read(&c->_c.mfc_un.res.pkt);
++ sr->bytecnt = atomic_long_read(&c->_c.mfc_un.res.bytes);
++ sr->wrong_if = atomic_long_read(&c->_c.mfc_un.res.wrong_if);
+ rcu_read_unlock();
+ return 0;
+ }
+@@ -1739,9 +1739,9 @@ int ipmr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
+ rcu_read_lock();
+ c = ipmr_cache_find(mrt, sr.src.s_addr, sr.grp.s_addr);
+ if (c) {
+- sr.pktcnt = c->_c.mfc_un.res.pkt;
+- sr.bytecnt = c->_c.mfc_un.res.bytes;
+- sr.wrong_if = c->_c.mfc_un.res.wrong_if;
++ sr.pktcnt = atomic_long_read(&c->_c.mfc_un.res.pkt);
++ sr.bytecnt = atomic_long_read(&c->_c.mfc_un.res.bytes);
++ sr.wrong_if = atomic_long_read(&c->_c.mfc_un.res.wrong_if);
+ rcu_read_unlock();
+
+ if (copy_to_user(arg, &sr, sizeof(sr)))
+@@ -1974,9 +1974,9 @@ static void ip_mr_forward(struct net *net, struct mr_table *mrt,
+ int vif, ct;
+
+ vif = c->_c.mfc_parent;
+- c->_c.mfc_un.res.pkt++;
+- c->_c.mfc_un.res.bytes += skb->len;
+- c->_c.mfc_un.res.lastuse = jiffies;
++ atomic_long_inc(&c->_c.mfc_un.res.pkt);
++ atomic_long_add(skb->len, &c->_c.mfc_un.res.bytes);
++ WRITE_ONCE(c->_c.mfc_un.res.lastuse, jiffies);
+
+ if (c->mfc_origin == htonl(INADDR_ANY) && true_vifi >= 0) {
+ struct mfc_cache *cache_proxy;
+@@ -2007,7 +2007,7 @@ static void ip_mr_forward(struct net *net, struct mr_table *mrt,
+ goto dont_forward;
+ }
+
+- c->_c.mfc_un.res.wrong_if++;
++ atomic_long_inc(&c->_c.mfc_un.res.wrong_if);
+
+ if (true_vifi >= 0 && mrt->mroute_do_assert &&
+ /* pimsm uses asserts, when switching from RPT to SPT,
+@@ -3015,9 +3015,9 @@ static int ipmr_mfc_seq_show(struct seq_file *seq, void *v)
+
+ if (it->cache != &mrt->mfc_unres_queue) {
+ seq_printf(seq, " %8lu %8lu %8lu",
+- mfc->_c.mfc_un.res.pkt,
+- mfc->_c.mfc_un.res.bytes,
+- mfc->_c.mfc_un.res.wrong_if);
++ atomic_long_read(&mfc->_c.mfc_un.res.pkt),
++ atomic_long_read(&mfc->_c.mfc_un.res.bytes),
++ atomic_long_read(&mfc->_c.mfc_un.res.wrong_if));
+ for (n = mfc->_c.mfc_un.res.minvif;
+ n < mfc->_c.mfc_un.res.maxvif; n++) {
+ if (VIF_EXISTS(mrt, n) &&
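The ipmr hunks convert the per-route pkt/bytes/wrong_if counters to atomic_long_t (and lastuse to WRITE_ONCE) so the RCU-protected readers in the ioctl and /proc paths can read them without the update-side lock. A userspace analogue with C11 atomic_long:

#include <stdatomic.h>
#include <stdio.h>

struct mfc_stats {
    atomic_long pkt, bytes, wrong_if;
};

static void account(struct mfc_stats *s, long len)
{
    atomic_fetch_add(&s->pkt, 1);
    atomic_fetch_add(&s->bytes, len);
}

int main(void)
{
    struct mfc_stats s = { 0, 0, 0 };

    account(&s, 1500);
    printf("%ld pkts, %ld bytes\n",
           (long)atomic_load(&s.pkt), (long)atomic_load(&s.bytes));
    return 0;
}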
+diff --git a/net/ipv4/ipmr_base.c b/net/ipv4/ipmr_base.c
+index f0af12a2f70bcd..28d77d454d442e 100644
+--- a/net/ipv4/ipmr_base.c
++++ b/net/ipv4/ipmr_base.c
+@@ -263,9 +263,9 @@ int mr_fill_mroute(struct mr_table *mrt, struct sk_buff *skb,
+ lastuse = READ_ONCE(c->mfc_un.res.lastuse);
+ lastuse = time_after_eq(jiffies, lastuse) ? jiffies - lastuse : 0;
+
+- mfcs.mfcs_packets = c->mfc_un.res.pkt;
+- mfcs.mfcs_bytes = c->mfc_un.res.bytes;
+- mfcs.mfcs_wrong_if = c->mfc_un.res.wrong_if;
++ mfcs.mfcs_packets = atomic_long_read(&c->mfc_un.res.pkt);
++ mfcs.mfcs_bytes = atomic_long_read(&c->mfc_un.res.bytes);
++ mfcs.mfcs_wrong_if = atomic_long_read(&c->mfc_un.res.wrong_if);
+ if (nla_put_64bit(skb, RTA_MFC_STATS, sizeof(mfcs), &mfcs, RTA_PAD) ||
+ nla_put_u64_64bit(skb, RTA_EXPIRES, jiffies_to_clock_t(lastuse),
+ RTA_PAD))
+@@ -330,9 +330,6 @@ int mr_table_dump(struct mr_table *mrt, struct sk_buff *skb,
+ list_for_each_entry(mfc, &mrt->mfc_unres_queue, list) {
+ if (e < s_e)
+ goto next_entry2;
+- if (filter->dev &&
+- !mr_mfc_uses_dev(mrt, mfc, filter->dev))
+- goto next_entry2;
+
+ err = fill(mrt, skb, NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq, mfc, RTM_NEWROUTE, flags);
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 723ac9181558c3..2a27913588d05a 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -870,11 +870,11 @@ void ip_rt_send_redirect(struct sk_buff *skb)
+ }
+ log_martians = IN_DEV_LOG_MARTIANS(in_dev);
+ vif = l3mdev_master_ifindex_rcu(rt->dst.dev);
+- rcu_read_unlock();
+
+ net = dev_net(rt->dst.dev);
+- peer = inet_getpeer_v4(net->ipv4.peers, ip_hdr(skb)->saddr, vif, 1);
++ peer = inet_getpeer_v4(net->ipv4.peers, ip_hdr(skb)->saddr, vif);
+ if (!peer) {
++ rcu_read_unlock();
+ icmp_send(skb, ICMP_REDIRECT, ICMP_REDIR_HOST,
+ rt_nexthop(rt, ip_hdr(skb)->daddr));
+ return;
+@@ -893,7 +893,7 @@ void ip_rt_send_redirect(struct sk_buff *skb)
+ */
+ if (peer->n_redirects >= ip_rt_redirect_number) {
+ peer->rate_last = jiffies;
+- goto out_put_peer;
++ goto out_unlock;
+ }
+
+ /* Check for load limit; set rate_last to the latest sent
+@@ -914,8 +914,8 @@ void ip_rt_send_redirect(struct sk_buff *skb)
+ &ip_hdr(skb)->saddr, inet_iif(skb),
+ &ip_hdr(skb)->daddr, &gw);
+ }
+-out_put_peer:
+- inet_putpeer(peer);
++out_unlock:
++ rcu_read_unlock();
+ }
+
+ static int ip_error(struct sk_buff *skb)
+@@ -975,9 +975,9 @@ static int ip_error(struct sk_buff *skb)
+ break;
+ }
+
++ rcu_read_lock();
+ peer = inet_getpeer_v4(net->ipv4.peers, ip_hdr(skb)->saddr,
+- l3mdev_master_ifindex(skb->dev), 1);
+-
++ l3mdev_master_ifindex_rcu(skb->dev));
+ send = true;
+ if (peer) {
+ now = jiffies;
+@@ -989,8 +989,9 @@ static int ip_error(struct sk_buff *skb)
+ peer->rate_tokens -= ip_rt_error_cost;
+ else
+ send = false;
+- inet_putpeer(peer);
+ }
++ rcu_read_unlock();
++
+ if (send)
+ icmp_send(skb, ICMP_DEST_UNREACH, code, 0);
+
+diff --git a/net/ipv4/tcp_cubic.c b/net/ipv4/tcp_cubic.c
+index 5dbed91c617825..76c23675ae50ab 100644
+--- a/net/ipv4/tcp_cubic.c
++++ b/net/ipv4/tcp_cubic.c
+@@ -392,6 +392,10 @@ static void hystart_update(struct sock *sk, u32 delay)
+ if (after(tp->snd_una, ca->end_seq))
+ bictcp_hystart_reset(sk);
+
++ /* hystart triggers when cwnd is larger than some threshold */
++ if (tcp_snd_cwnd(tp) < hystart_low_window)
++ return;
++
+ if (hystart_detect & HYSTART_ACK_TRAIN) {
+ u32 now = bictcp_clock_us(sk);
+
+@@ -467,9 +471,7 @@ __bpf_kfunc static void cubictcp_acked(struct sock *sk, const struct ack_sample
+ if (ca->delay_min == 0 || ca->delay_min > delay)
+ ca->delay_min = delay;
+
+- /* hystart triggers when cwnd is larger than some threshold */
+- if (!ca->found && tcp_in_slow_start(tp) && hystart &&
+- tcp_snd_cwnd(tp) >= hystart_low_window)
++ if (!ca->found && tcp_in_slow_start(tp) && hystart)
+ hystart_update(sk, delay);
+ }
+
+diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c
+index 8efc58716ce969..6d5387811c32ad 100644
+--- a/net/ipv4/tcp_output.c
++++ b/net/ipv4/tcp_output.c
+@@ -265,11 +265,14 @@ static u16 tcp_select_window(struct sock *sk)
+ u32 cur_win, new_win;
+
+ /* Make the window 0 if we failed to queue the data because we
+- * are out of memory. The window is temporary, so we don't store
+- * it on the socket.
++ * are out of memory.
+ */
+- if (unlikely(inet_csk(sk)->icsk_ack.pending & ICSK_ACK_NOMEM))
++ if (unlikely(inet_csk(sk)->icsk_ack.pending & ICSK_ACK_NOMEM)) {
++ tp->pred_flags = 0;
++ tp->rcv_wnd = 0;
++ tp->rcv_wup = tp->rcv_nxt;
+ return 0;
++ }
+
+ cur_win = tcp_receive_window(tp);
+ new_win = __tcp_select_window(sk);
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index ff85242720a0a9..d2eeb6fc49b382 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -420,6 +420,49 @@ u32 udp_ehashfn(const struct net *net, const __be32 laddr, const __u16 lport,
+ udp_ehash_secret + net_hash_mix(net));
+ }
+
++/**
++ * udp4_lib_lookup1() - Simplified lookup using primary hash (destination port)
++ * @net: Network namespace
++ * @saddr: Source address, network order
++ * @sport: Source port, network order
++ * @daddr: Destination address, network order
++ * @hnum: Destination port, host order
++ * @dif: Destination interface index
++ * @sdif: Destination bridge port index, if relevant
++ * @udptable: Set of UDP hash tables
++ *
++ * Simplified lookup to be used as fallback if no sockets are found due to a
++ * potential race between (receive) address change, and lookup happening before
++ * the rehash operation. This function ignores SO_REUSEPORT groups while scoring
++ * result sockets, because if we have one, we don't need the fallback at all.
++ *
++ * Called under rcu_read_lock().
++ *
++ * Return: socket with highest matching score if any, NULL if none
++ */
++static struct sock *udp4_lib_lookup1(const struct net *net,
++ __be32 saddr, __be16 sport,
++ __be32 daddr, unsigned int hnum,
++ int dif, int sdif,
++ const struct udp_table *udptable)
++{
++ unsigned int slot = udp_hashfn(net, hnum, udptable->mask);
++ struct udp_hslot *hslot = &udptable->hash[slot];
++ struct sock *sk, *result = NULL;
++ int score, badness = 0;
++
++ sk_for_each_rcu(sk, &hslot->head) {
++ score = compute_score(sk, net,
++ saddr, sport, daddr, hnum, dif, sdif);
++ if (score > badness) {
++ result = sk;
++ badness = score;
++ }
++ }
++
++ return result;
++}
++
+ /* called with rcu_read_lock() */
+ static struct sock *udp4_lib_lookup2(const struct net *net,
+ __be32 saddr, __be16 sport,
+@@ -525,6 +568,19 @@ struct sock *__udp4_lib_lookup(const struct net *net, __be32 saddr,
+ result = udp4_lib_lookup2(net, saddr, sport,
+ htonl(INADDR_ANY), hnum, dif, sdif,
+ hslot2, skb);
++ if (!IS_ERR_OR_NULL(result))
++ goto done;
++
++ /* Primary hash (destination port) lookup as fallback for this race:
++ * 1. __ip4_datagram_connect() sets sk_rcv_saddr
++ * 2. lookup (this function): new sk_rcv_saddr, hashes not updated yet
++ * 3. rehash operation updating _secondary and four-tuple_ hashes
++ * The primary hash doesn't need an update after 1., so, thanks to this
++ * further step, 1. and 3. don't need to be atomic against the lookup.
++ */
++ result = udp4_lib_lookup1(net, saddr, sport, daddr, hnum, dif, sdif,
++ udptable);
++
+ done:
+ if (IS_ERR(result))
+ return NULL;
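The __udp4_lib_lookup() hunk above bolts a third stage onto the lookup: only when both secondary-hash scans come back empty (the rehash race described in the comment) does it fall back to udp4_lib_lookup1()'s coarser primary-hash scan. A shape-only sketch of that try-precise-then-fallback structure, with made-up types:

#include <stddef.h>
#include <stdio.h>

struct sock_stub { const char *name; int score; };

static struct sock_stub *scan(struct sock_stub *slot, int n)
{
    struct sock_stub *best = NULL;
    int badness = 0;

    for (int i = 0; i < n; i++) {
        if (slot[i].score > badness) {
            best = &slot[i];
            badness = slot[i].score;
        }
    }
    return best;
}

int main(void)
{
    struct sock_stub secondary[1];                     /* empty: not rehashed yet */
    struct sock_stub primary[1] = { { "udp-sock", 4 } };
    struct sock_stub *res = scan(secondary, 0);

    if (!res)                                          /* fallback stage */
        res = scan(primary, 1);
    printf("%s\n", res ? res->name : "none");
    return 0;
}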
+diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
+index 919ebfabbe4ee2..7b41fb4f00b587 100644
+--- a/net/ipv6/esp6_offload.c
++++ b/net/ipv6/esp6_offload.c
+@@ -80,9 +80,9 @@ static struct sk_buff *esp6_gro_receive(struct list_head *head,
+ if (sp->len == XFRM_MAX_DEPTH)
+ goto out_reset;
+
+- x = xfrm_state_lookup(dev_net(skb->dev), skb->mark,
+- (xfrm_address_t *)&ipv6_hdr(skb)->daddr,
+- spi, IPPROTO_ESP, AF_INET6);
++ x = xfrm_input_state_lookup(dev_net(skb->dev), skb->mark,
++ (xfrm_address_t *)&ipv6_hdr(skb)->daddr,
++ spi, IPPROTO_ESP, AF_INET6);
+
+ if (unlikely(x && x->dir && x->dir != XFRM_SA_DIR_IN)) {
+ /* non-offload path will record the error and audit log */
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index 071b0bc1179d81..a6984a29fdb9dd 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -222,10 +222,10 @@ static bool icmpv6_xrlim_allow(struct sock *sk, u8 type,
+ if (rt->rt6i_dst.plen < 128)
+ tmo >>= ((128 - rt->rt6i_dst.plen)>>5);
+
+- peer = inet_getpeer_v6(net->ipv6.peers, &fl6->daddr, 1);
++ rcu_read_lock();
++ peer = inet_getpeer_v6(net->ipv6.peers, &fl6->daddr);
+ res = inet_peer_xrlim_allow(peer, tmo);
+- if (peer)
+- inet_putpeer(peer);
++ rcu_read_unlock();
+ }
+ if (!res)
+ __ICMP6_INC_STATS(net, ip6_dst_idev(dst),
+diff --git a/net/ipv6/ip6_output.c b/net/ipv6/ip6_output.c
+index f26841f1490f5c..434ddf263b88a3 100644
+--- a/net/ipv6/ip6_output.c
++++ b/net/ipv6/ip6_output.c
+@@ -613,15 +613,15 @@ int ip6_forward(struct sk_buff *skb)
+ else
+ target = &hdr->daddr;
+
+- peer = inet_getpeer_v6(net->ipv6.peers, &hdr->daddr, 1);
++ rcu_read_lock();
++ peer = inet_getpeer_v6(net->ipv6.peers, &hdr->daddr);
+
+ /* Limit redirects both by destination (here)
+ and by source (inside ndisc_send_redirect)
+ */
+ if (inet_peer_xrlim_allow(peer, 1*HZ))
+ ndisc_send_redirect(skb, target);
+- if (peer)
+- inet_putpeer(peer);
++ rcu_read_unlock();
+ } else {
+ int addrtype = ipv6_addr_type(&hdr->saddr);
+
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index d5057401701c1a..440048d609c37a 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -506,9 +506,9 @@ static int ipmr_mfc_seq_show(struct seq_file *seq, void *v)
+
+ if (it->cache != &mrt->mfc_unres_queue) {
+ seq_printf(seq, " %8lu %8lu %8lu",
+- mfc->_c.mfc_un.res.pkt,
+- mfc->_c.mfc_un.res.bytes,
+- mfc->_c.mfc_un.res.wrong_if);
++ atomic_long_read(&mfc->_c.mfc_un.res.pkt),
++ atomic_long_read(&mfc->_c.mfc_un.res.bytes),
++ atomic_long_read(&mfc->_c.mfc_un.res.wrong_if));
+ for (n = mfc->_c.mfc_un.res.minvif;
+ n < mfc->_c.mfc_un.res.maxvif; n++) {
+ if (VIF_EXISTS(mrt, n) &&
+@@ -870,7 +870,7 @@ static void ip6mr_update_thresholds(struct mr_table *mrt,
+ cache->mfc_un.res.maxvif = vifi + 1;
+ }
+ }
+- cache->mfc_un.res.lastuse = jiffies;
++ WRITE_ONCE(cache->mfc_un.res.lastuse, jiffies);
+ }
+
+ static int mif6_add(struct net *net, struct mr_table *mrt,
+@@ -1928,9 +1928,9 @@ int ip6mr_ioctl(struct sock *sk, int cmd, void *arg)
+ c = ip6mr_cache_find(mrt, &sr->src.sin6_addr,
+ &sr->grp.sin6_addr);
+ if (c) {
+- sr->pktcnt = c->_c.mfc_un.res.pkt;
+- sr->bytecnt = c->_c.mfc_un.res.bytes;
+- sr->wrong_if = c->_c.mfc_un.res.wrong_if;
++ sr->pktcnt = atomic_long_read(&c->_c.mfc_un.res.pkt);
++ sr->bytecnt = atomic_long_read(&c->_c.mfc_un.res.bytes);
++ sr->wrong_if = atomic_long_read(&c->_c.mfc_un.res.wrong_if);
+ rcu_read_unlock();
+ return 0;
+ }
+@@ -2000,9 +2000,9 @@ int ip6mr_compat_ioctl(struct sock *sk, unsigned int cmd, void __user *arg)
+ rcu_read_lock();
+ c = ip6mr_cache_find(mrt, &sr.src.sin6_addr, &sr.grp.sin6_addr);
+ if (c) {
+- sr.pktcnt = c->_c.mfc_un.res.pkt;
+- sr.bytecnt = c->_c.mfc_un.res.bytes;
+- sr.wrong_if = c->_c.mfc_un.res.wrong_if;
++ sr.pktcnt = atomic_long_read(&c->_c.mfc_un.res.pkt);
++ sr.bytecnt = atomic_long_read(&c->_c.mfc_un.res.bytes);
++ sr.wrong_if = atomic_long_read(&c->_c.mfc_un.res.wrong_if);
+ rcu_read_unlock();
+
+ if (copy_to_user(arg, &sr, sizeof(sr)))
+@@ -2125,9 +2125,9 @@ static void ip6_mr_forward(struct net *net, struct mr_table *mrt,
+ int true_vifi = ip6mr_find_vif(mrt, dev);
+
+ vif = c->_c.mfc_parent;
+- c->_c.mfc_un.res.pkt++;
+- c->_c.mfc_un.res.bytes += skb->len;
+- c->_c.mfc_un.res.lastuse = jiffies;
++ atomic_long_inc(&c->_c.mfc_un.res.pkt);
++ atomic_long_add(skb->len, &c->_c.mfc_un.res.bytes);
++ WRITE_ONCE(c->_c.mfc_un.res.lastuse, jiffies);
+
+ if (ipv6_addr_any(&c->mf6c_origin) && true_vifi >= 0) {
+ struct mfc6_cache *cache_proxy;
+@@ -2145,7 +2145,7 @@ static void ip6_mr_forward(struct net *net, struct mr_table *mrt,
+ * Wrong interface: drop packet and (maybe) send PIM assert.
+ */
+ if (rcu_access_pointer(mrt->vif_table[vif].dev) != dev) {
+- c->_c.mfc_un.res.wrong_if++;
++ atomic_long_inc(&c->_c.mfc_un.res.wrong_if);
+
+ if (true_vifi >= 0 && mrt->mroute_do_assert &&
+ /* pimsm uses asserts, when switching from RPT to SPT,
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index aba94a34867379..d044c67019de6d 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -1731,10 +1731,12 @@ void ndisc_send_redirect(struct sk_buff *skb, const struct in6_addr *target)
+ "Redirect: destination is not a neighbour\n");
+ goto release;
+ }
+- peer = inet_getpeer_v6(net->ipv6.peers, &ipv6_hdr(skb)->saddr, 1);
++
++ rcu_read_lock();
++ peer = inet_getpeer_v6(net->ipv6.peers, &ipv6_hdr(skb)->saddr);
+ ret = inet_peer_xrlim_allow(peer, 1*HZ);
+- if (peer)
+- inet_putpeer(peer);
++ rcu_read_unlock();
++
+ if (!ret)
+ goto release;
+
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 0cef8ae5d1ea18..896c9c827a288c 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -159,6 +159,49 @@ static int compute_score(struct sock *sk, const struct net *net,
+ return score;
+ }
+
++/**
++ * udp6_lib_lookup1() - Simplified lookup using primary hash (destination port)
++ * @net: Network namespace
++ * @saddr: Source address, network order
++ * @sport: Source port, network order
++ * @daddr: Destination address, network order
++ * @hnum: Destination port, host order
++ * @dif: Destination interface index
++ * @sdif: Destination bridge port index, if relevant
++ * @udptable: Set of UDP hash tables
++ *
++ * Simplified lookup to be used as fallback if no sockets are found due to a
++ * potential race between (receive) address change and lookup happening before
++ * the rehash operation. This function ignores SO_REUSEPORT groups while scoring
++ * result sockets, because if we have one, we don't need the fallback at all.
++ *
++ * Called under rcu_read_lock().
++ *
++ * Return: socket with highest matching score if any, NULL if none
++ */
++static struct sock *udp6_lib_lookup1(const struct net *net,
++ const struct in6_addr *saddr, __be16 sport,
++ const struct in6_addr *daddr,
++ unsigned int hnum, int dif, int sdif,
++ const struct udp_table *udptable)
++{
++ unsigned int slot = udp_hashfn(net, hnum, udptable->mask);
++ struct udp_hslot *hslot = &udptable->hash[slot];
++ struct sock *sk, *result = NULL;
++ int score, badness = 0;
++
++ sk_for_each_rcu(sk, &hslot->head) {
++ score = compute_score(sk, net,
++ saddr, sport, daddr, hnum, dif, sdif);
++ if (score > badness) {
++ result = sk;
++ badness = score;
++ }
++ }
++
++ return result;
++}
++
+ /* called with rcu_read_lock() */
+ static struct sock *udp6_lib_lookup2(const struct net *net,
+ const struct in6_addr *saddr, __be16 sport,
+@@ -263,6 +306,13 @@ struct sock *__udp6_lib_lookup(const struct net *net,
+ result = udp6_lib_lookup2(net, saddr, sport,
+ &in6addr_any, hnum, dif, sdif,
+ hslot2, skb);
++ if (!IS_ERR_OR_NULL(result))
++ goto done;
++
++ /* Cover address change/lookup/rehash race: see __udp4_lib_lookup() */
++ result = udp6_lib_lookup1(net, saddr, sport, daddr, hnum, dif, sdif,
++ udptable);
++
+ done:
+ if (IS_ERR(result))
+ return NULL;
+diff --git a/net/key/af_key.c b/net/key/af_key.c
+index f79fb99271ed84..c56bb4f451e6de 100644
+--- a/net/key/af_key.c
++++ b/net/key/af_key.c
+@@ -1354,7 +1354,7 @@ static int pfkey_getspi(struct sock *sk, struct sk_buff *skb, const struct sadb_
+ }
+
+ if (hdr->sadb_msg_seq) {
+- x = xfrm_find_acq_byseq(net, DUMMY_MARK, hdr->sadb_msg_seq);
++ x = xfrm_find_acq_byseq(net, DUMMY_MARK, hdr->sadb_msg_seq, UINT_MAX);
+ if (x && !xfrm_addr_equal(&x->id.daddr, xdaddr, family)) {
+ xfrm_state_put(x);
+ x = NULL;
+@@ -1362,7 +1362,8 @@ static int pfkey_getspi(struct sock *sk, struct sk_buff *skb, const struct sadb_
+ }
+
+ if (!x)
+- x = xfrm_find_acq(net, &dummy_mark, mode, reqid, 0, proto, xdaddr, xsaddr, 1, family);
++ x = xfrm_find_acq(net, &dummy_mark, mode, reqid, 0, UINT_MAX,
++ proto, xdaddr, xsaddr, 1, family);
+
+ if (x == NULL)
+ return -ENOENT;
+@@ -1417,7 +1418,7 @@ static int pfkey_acquire(struct sock *sk, struct sk_buff *skb, const struct sadb
+ if (hdr->sadb_msg_seq == 0 || hdr->sadb_msg_errno == 0)
+ return 0;
+
+- x = xfrm_find_acq_byseq(net, DUMMY_MARK, hdr->sadb_msg_seq);
++ x = xfrm_find_acq_byseq(net, DUMMY_MARK, hdr->sadb_msg_seq, UINT_MAX);
+ if (x == NULL)
+ return 0;
+
+diff --git a/net/mac80211/debugfs_netdev.c b/net/mac80211/debugfs_netdev.c
+index 68596ef78b15ee..d0b145888e1398 100644
+--- a/net/mac80211/debugfs_netdev.c
++++ b/net/mac80211/debugfs_netdev.c
+@@ -728,7 +728,7 @@ static ssize_t ieee80211_if_parse_active_links(struct ieee80211_sub_if_data *sda
+ {
+ u16 active_links;
+
+- if (kstrtou16(buf, 0, &active_links))
++ if (kstrtou16(buf, 0, &active_links) || !active_links)
+ return -EINVAL;
+
+ return ieee80211_set_active_links(&sdata->vif, active_links) ?: buflen;
+diff --git a/net/mac80211/driver-ops.h b/net/mac80211/driver-ops.h
+index d382d9729e853f..a06644084d15d1 100644
+--- a/net/mac80211/driver-ops.h
++++ b/net/mac80211/driver-ops.h
+@@ -724,6 +724,9 @@ static inline void drv_flush_sta(struct ieee80211_local *local,
+ if (sdata && !check_sdata_in_driver(sdata))
+ return;
+
++ if (!sta->uploaded)
++ return;
++
+ trace_drv_flush_sta(local, sdata, &sta->sta);
+ if (local->ops->flush_sta)
+ local->ops->flush_sta(&local->hw, &sdata->vif, &sta->sta);
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 694b43091fec6b..6f3a86040cfcd8 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -2994,6 +2994,7 @@ ieee80211_rx_mesh_data(struct ieee80211_sub_if_data *sdata, struct sta_info *sta
+ }
+
+ IEEE80211_IFSTA_MESH_CTR_INC(ifmsh, fwded_frames);
++ ieee80211_set_qos_hdr(sdata, fwd_skb);
+ ieee80211_add_pending_skb(local, fwd_skb);
+
+ rx_accept:
+diff --git a/net/mptcp/ctrl.c b/net/mptcp/ctrl.c
+index b0dd008e2114bc..dd595d9b5e50c7 100644
+--- a/net/mptcp/ctrl.c
++++ b/net/mptcp/ctrl.c
+@@ -405,9 +405,9 @@ void mptcp_active_detect_blackhole(struct sock *ssk, bool expired)
+ MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_MPCAPABLEACTIVEDROP);
+ subflow->mpc_drop = 1;
+ mptcp_subflow_early_fallback(mptcp_sk(subflow->conn), subflow);
+- } else {
+- subflow->mpc_drop = 0;
+ }
++ } else if (ssk->sk_state == TCP_SYN_SENT) {
++ subflow->mpc_drop = 0;
+ }
+ }
+
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index 123f3f2972841a..fd2de185bc939f 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -108,7 +108,6 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ mp_opt->suboptions |= OPTION_MPTCP_DSS;
+ mp_opt->use_map = 1;
+ mp_opt->mpc_map = 1;
+- mp_opt->use_ack = 0;
+ mp_opt->data_len = get_unaligned_be16(ptr);
+ ptr += 2;
+ }
+@@ -157,11 +156,6 @@ static void mptcp_parse_option(const struct sk_buff *skb,
+ pr_debug("DSS\n");
+ ptr++;
+
+- /* we must clear 'mpc_map' be able to detect MP_CAPABLE
+- * map vs DSS map in mptcp_incoming_options(), and reconstruct
+- * map info accordingly
+- */
+- mp_opt->mpc_map = 0;
+ flags = (*ptr++) & MPTCP_DSS_FLAG_MASK;
+ mp_opt->data_fin = (flags & MPTCP_DSS_DATA_FIN) != 0;
+ mp_opt->dsn64 = (flags & MPTCP_DSS_DSN64) != 0;
+@@ -369,8 +363,11 @@ void mptcp_get_options(const struct sk_buff *skb,
+ const unsigned char *ptr;
+ int length;
+
+- /* initialize option status */
+- mp_opt->suboptions = 0;
++ /* Ensure that casting the whole status to u32 is efficient and safe */
++ BUILD_BUG_ON(sizeof_field(struct mptcp_options_received, status) != sizeof(u32));
++ BUILD_BUG_ON(!IS_ALIGNED(offsetof(struct mptcp_options_received, status),
++ sizeof(u32)));
++ *(u32 *)&mp_opt->status = 0;
+
+ length = (th->doff * 4) - sizeof(struct tcphdr);
+ ptr = (const unsigned char *)(th + 1);
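
The matching protocol.h hunk later in this patch wraps the suboption word and
the flag bitfields in struct_group(status, ...) precisely so that the single
aligned 32-bit store above can replace the scattered per-suboption clears
being removed. A minimal sketch of the idea; the union below is a hand-rolled
stand-in for the kernel's struct_group() macro, not its implementation:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct opts {
	uint32_t subflow_seq;
	union {
		struct {	/* field names follow the patch */
			uint16_t suboptions;
			uint16_t use_map : 1, dsn64 : 1, mpc_map : 1;
		};
		uint32_t status;	/* the grouped view */
	};
	uint8_t join_id;
};

int main(void)
{
	struct opts o;

	memset(&o, 0xff, sizeof(o));	/* stale state from a previous parse */

	/* Equivalent of "*(u32 *)&mp_opt->status = 0": one aligned
	 * store wipes suboptions and every flag at once. */
	static_assert(sizeof(o.status) == sizeof(uint32_t), "one word");
	o.status = 0;

	printf("suboptions=%d use_map=%d mpc_map=%d\n",
	       (int)o.suboptions, (int)o.use_map, (int)o.mpc_map);
	return 0;
}
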
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 45a2b5f05d38b0..8c4f934d198cc6 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -2049,7 +2049,8 @@ int mptcp_pm_nl_set_flags(struct sk_buff *skb, struct genl_info *info)
+ return -EINVAL;
+ }
+ if ((addr.flags & MPTCP_PM_ADDR_FLAG_FULLMESH) &&
+- (entry->flags & MPTCP_PM_ADDR_FLAG_SIGNAL)) {
++ (entry->flags & (MPTCP_PM_ADDR_FLAG_SIGNAL |
++ MPTCP_PM_ADDR_FLAG_IMPLICIT))) {
+ spin_unlock_bh(&pernet->lock);
+ GENL_SET_ERR_MSG(info, "invalid addr flags");
+ return -EINVAL;
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 4b9d850ce85a25..fac774825aff39 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -1766,8 +1766,10 @@ static int mptcp_sendmsg_fastopen(struct sock *sk, struct msghdr *msg,
+ * see mptcp_disconnect().
+ * Attempt it again outside the problematic scope.
+ */
+- if (!mptcp_disconnect(sk, 0))
++ if (!mptcp_disconnect(sk, 0)) {
++ sk->sk_disconnects++;
+ sk->sk_socket->state = SS_UNCONNECTED;
++ }
+ }
+ inet_clear_bit(DEFER_CONNECT, sk);
+
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index 73526f1d768fcb..b70a303e082878 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -149,22 +149,24 @@ struct mptcp_options_received {
+ u32 subflow_seq;
+ u16 data_len;
+ __sum16 csum;
+- u16 suboptions;
++ struct_group(status,
++ u16 suboptions;
++ u16 use_map:1,
++ dsn64:1,
++ data_fin:1,
++ use_ack:1,
++ ack64:1,
++ mpc_map:1,
++ reset_reason:4,
++ reset_transient:1,
++ echo:1,
++ backup:1,
++ deny_join_id0:1,
++ __unused:2;
++ );
++ u8 join_id;
+ u32 token;
+ u32 nonce;
+- u16 use_map:1,
+- dsn64:1,
+- data_fin:1,
+- use_ack:1,
+- ack64:1,
+- mpc_map:1,
+- reset_reason:4,
+- reset_transient:1,
+- echo:1,
+- backup:1,
+- deny_join_id0:1,
+- __unused:2;
+- u8 join_id;
+ u64 thmac;
+ u8 hmac[MPTCPOPT_HMAC_LEN];
+ struct mptcp_addr_info addr;
+diff --git a/net/ncsi/ncsi-rsp.c b/net/ncsi/ncsi-rsp.c
+index 14bd66909ca455..4a8ce2949faeac 100644
+--- a/net/ncsi/ncsi-rsp.c
++++ b/net/ncsi/ncsi-rsp.c
+@@ -1089,14 +1089,12 @@ static int ncsi_rsp_handler_netlink(struct ncsi_request *nr)
+ static int ncsi_rsp_handler_gmcma(struct ncsi_request *nr)
+ {
+ struct ncsi_dev_priv *ndp = nr->ndp;
++ struct sockaddr *saddr = &ndp->pending_mac;
+ struct net_device *ndev = ndp->ndev.dev;
+ struct ncsi_rsp_gmcma_pkt *rsp;
+- struct sockaddr saddr;
+- int ret = -1;
+ int i;
+
+ rsp = (struct ncsi_rsp_gmcma_pkt *)skb_network_header(nr->rsp);
+- saddr.sa_family = ndev->type;
+ ndev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+
+ netdev_info(ndev, "NCSI: Received %d provisioned MAC addresses\n",
+@@ -1108,20 +1106,20 @@ static int ncsi_rsp_handler_gmcma(struct ncsi_request *nr)
+ rsp->addresses[i][4], rsp->addresses[i][5]);
+ }
+
++ saddr->sa_family = ndev->type;
+ for (i = 0; i < rsp->address_count; i++) {
+- memcpy(saddr.sa_data, &rsp->addresses[i], ETH_ALEN);
+- ret = ndev->netdev_ops->ndo_set_mac_address(ndev, &saddr);
+- if (ret < 0) {
++ if (!is_valid_ether_addr(rsp->addresses[i])) {
+ netdev_warn(ndev, "NCSI: Unable to assign %pM to device\n",
+- saddr.sa_data);
++ rsp->addresses[i]);
+ continue;
+ }
+- netdev_warn(ndev, "NCSI: Set MAC address to %pM\n", saddr.sa_data);
++ memcpy(saddr->sa_data, rsp->addresses[i], ETH_ALEN);
++ netdev_warn(ndev, "NCSI: Will set MAC address to %pM\n", saddr->sa_data);
+ break;
+ }
+
+- ndp->gma_flag = ret == 0;
+- return ret;
++ ndp->gma_flag = 1;
++ return 0;
+ }
+
+ static struct ncsi_rsp_handler {
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 42dc8cc721ff7b..939510247ef5a6 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -4647,6 +4647,14 @@ static int nf_tables_fill_set_concat(struct sk_buff *skb,
+ return 0;
+ }
+
++static u32 nft_set_userspace_size(const struct nft_set_ops *ops, u32 size)
++{
++ if (ops->usize)
++ return ops->usize(size);
++
++ return size;
++}
++
+ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ const struct nft_set *set, u16 event, u16 flags)
+ {
+@@ -4717,7 +4725,8 @@ static int nf_tables_fill_set(struct sk_buff *skb, const struct nft_ctx *ctx,
+ if (!nest)
+ goto nla_put_failure;
+ if (set->size &&
+- nla_put_be32(skb, NFTA_SET_DESC_SIZE, htonl(set->size)))
++ nla_put_be32(skb, NFTA_SET_DESC_SIZE,
++ htonl(nft_set_userspace_size(set->ops, set->size))))
+ goto nla_put_failure;
+
+ if (set->field_count > 1 &&
+@@ -4959,7 +4968,7 @@ static int nft_set_desc_concat_parse(const struct nlattr *attr,
+ static int nft_set_desc_concat(struct nft_set_desc *desc,
+ const struct nlattr *nla)
+ {
+- u32 num_regs = 0, key_num_regs = 0;
++ u32 len = 0, num_regs;
+ struct nlattr *attr;
+ int rem, err, i;
+
+@@ -4973,12 +4982,12 @@ static int nft_set_desc_concat(struct nft_set_desc *desc,
+ }
+
+ for (i = 0; i < desc->field_count; i++)
+- num_regs += DIV_ROUND_UP(desc->field_len[i], sizeof(u32));
++ len += round_up(desc->field_len[i], sizeof(u32));
+
+- key_num_regs = DIV_ROUND_UP(desc->klen, sizeof(u32));
+- if (key_num_regs != num_regs)
++ if (len != desc->klen)
+ return -EINVAL;
+
++ num_regs = DIV_ROUND_UP(desc->klen, sizeof(u32));
+ if (num_regs > NFT_REG32_COUNT)
+ return -E2BIG;
+
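
The rewritten nft_set_desc_concat() check compares padded byte lengths rather
than register counts: with round_up() each field contributes its 4-byte
aligned size, so a bogus klen that merely rounds to the same register count
no longer slips through. A small worked example; the two macros are userspace
copies of the kernel helpers with their assumed semantics:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define round_up(x, y)		(DIV_ROUND_UP(x, y) * (y))

int main(void)
{
	unsigned int field_len[] = { 6, 2 };	/* e.g. MAC address + port */
	unsigned int len = 0, num_regs = 0;
	unsigned int klen = 10;			/* deliberately bogus total */

	for (int i = 0; i < 2; i++) {
		len += round_up(field_len[i], 4);	   /* new check: bytes */
		num_regs += DIV_ROUND_UP(field_len[i], 4); /* old check: regs */
	}

	/* Old test: DIV_ROUND_UP(10, 4) == 3 == num_regs, bogus klen passes.
	 * New test: len == 12 != 10, so the request is rejected. */
	printf("padded len=%u regs=%u, key regs for klen=%u: %u\n",
	       len, num_regs, klen, DIV_ROUND_UP(klen, 4));
	return 0;
}
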
+@@ -5085,6 +5094,15 @@ static bool nft_set_is_same(const struct nft_set *set,
+ return true;
+ }
+
++static u32 nft_set_kernel_size(const struct nft_set_ops *ops,
++ const struct nft_set_desc *desc)
++{
++ if (ops->ksize)
++ return ops->ksize(desc->size);
++
++ return desc->size;
++}
++
+ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
+ const struct nlattr * const nla[])
+ {
+@@ -5267,6 +5285,9 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
+ if (err < 0)
+ return err;
+
++ if (desc.size)
++ desc.size = nft_set_kernel_size(set->ops, &desc);
++
+ err = 0;
+ if (!nft_set_is_same(set, &desc, exprs, num_exprs, flags)) {
+ NL_SET_BAD_ATTR(extack, nla[NFTA_SET_NAME]);
+@@ -5289,6 +5310,9 @@ static int nf_tables_newset(struct sk_buff *skb, const struct nfnl_info *info,
+ if (IS_ERR(ops))
+ return PTR_ERR(ops);
+
++ if (desc.size)
++ desc.size = nft_set_kernel_size(ops, &desc);
++
+ udlen = 0;
+ if (nla[NFTA_SET_USERDATA])
+ udlen = nla_len(nla[NFTA_SET_USERDATA]);
+@@ -6855,6 +6879,27 @@ static bool nft_setelem_valid_key_end(const struct nft_set *set,
+ return true;
+ }
+
++static u32 nft_set_maxsize(const struct nft_set *set)
++{
++ u32 maxsize, delta;
++
++ if (!set->size)
++ return UINT_MAX;
++
++ if (set->ops->adjust_maxsize)
++ delta = set->ops->adjust_maxsize(set);
++ else
++ delta = 0;
++
++ if (check_add_overflow(set->size, set->ndeact, &maxsize))
++ return UINT_MAX;
++
++ if (check_add_overflow(maxsize, delta, &maxsize))
++ return UINT_MAX;
++
++ return maxsize;
++}
++
+ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ const struct nlattr *attr, u32 nlmsg_flags)
+ {
+@@ -7218,7 +7263,7 @@ static int nft_add_set_elem(struct nft_ctx *ctx, struct nft_set *set,
+ }
+
+ if (!(flags & NFT_SET_ELEM_CATCHALL)) {
+- unsigned int max = set->size ? set->size + set->ndeact : UINT_MAX;
++ unsigned int max = nft_set_maxsize(set);
+
+ if (!atomic_add_unless(&set->nelems, 1, max)) {
+ err = -ENFILE;
+diff --git a/net/netfilter/nft_flow_offload.c b/net/netfilter/nft_flow_offload.c
+index 2f732fae5a831e..da9ebd00b19891 100644
+--- a/net/netfilter/nft_flow_offload.c
++++ b/net/netfilter/nft_flow_offload.c
+@@ -289,6 +289,15 @@ static bool nft_flow_offload_skip(struct sk_buff *skb, int family)
+ return false;
+ }
+
++static void flow_offload_ct_tcp(struct nf_conn *ct)
++{
++ /* conntrack will not see all packets; disable tcp window validation. */
++ spin_lock_bh(&ct->lock);
++ ct->proto.tcp.seen[0].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
++ ct->proto.tcp.seen[1].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
++ spin_unlock_bh(&ct->lock);
++}
++
+ static void nft_flow_offload_eval(const struct nft_expr *expr,
+ struct nft_regs *regs,
+ const struct nft_pktinfo *pkt)
+@@ -356,11 +365,8 @@ static void nft_flow_offload_eval(const struct nft_expr *expr,
+ goto err_flow_alloc;
+
+ flow_offload_route_init(flow, &route);
+-
+- if (tcph) {
+- ct->proto.tcp.seen[0].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
+- ct->proto.tcp.seen[1].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
+- }
++ if (tcph)
++ flow_offload_ct_tcp(ct);
+
+ __set_bit(NF_FLOW_HW_BIDIRECTIONAL, &flow->flags);
+ ret = flow_offload_add(flowtable, flow);
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index b7ea21327549b3..2e8ef16ff191d4 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -750,6 +750,46 @@ static void nft_rbtree_gc_init(const struct nft_set *set)
+ priv->last_gc = jiffies;
+ }
+
++/* rbtree stores ranges as singleton elements, each range is composed of two
++ * elements ...
++ */
++static u32 nft_rbtree_ksize(u32 size)
++{
++ return size * 2;
++}
++
++/* ... hide this detail from userspace. */
++static u32 nft_rbtree_usize(u32 size)
++{
++ if (!size)
++ return 0;
++
++ return size / 2;
++}
++
++static u32 nft_rbtree_adjust_maxsize(const struct nft_set *set)
++{
++ struct nft_rbtree *priv = nft_set_priv(set);
++ struct nft_rbtree_elem *rbe;
++ struct rb_node *node;
++ const void *key;
++
++ node = rb_last(&priv->root);
++ if (!node)
++ return 0;
++
++ rbe = rb_entry(node, struct nft_rbtree_elem, node);
++ if (!nft_rbtree_interval_end(rbe))
++ return 0;
++
++ key = nft_set_ext_key(&rbe->ext);
++ if (memchr(key, 1, set->klen))
++ return 0;
++
++ /* this is the all-zero no-match element. */
++ return 1;
++}
++
+ const struct nft_set_type nft_set_rbtree_type = {
+ .features = NFT_SET_INTERVAL | NFT_SET_MAP | NFT_SET_OBJECT | NFT_SET_TIMEOUT,
+ .ops = {
+@@ -768,5 +808,8 @@ const struct nft_set_type nft_set_rbtree_type = {
+ .lookup = nft_rbtree_lookup,
+ .walk = nft_rbtree_walk,
+ .get = nft_rbtree_get,
++ .ksize = nft_rbtree_ksize,
++ .usize = nft_rbtree_usize,
++ .adjust_maxsize = nft_rbtree_adjust_maxsize,
+ },
+ };
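
The new ksize/usize callbacks keep the two-nodes-per-range bookkeeping out of
userspace view: the configured size is doubled on the way into the kernel and
halved again when dumped, so nft keeps seeing the number of ranges it asked
for. A trivial sketch of the round trip:

#include <stdio.h>

/* Each interval occupies two rbtree nodes (range start + range end). */
static unsigned int rbtree_ksize(unsigned int size) { return size * 2; }
static unsigned int rbtree_usize(unsigned int size) { return size ? size / 2 : 0; }

int main(void)
{
	unsigned int user_size = 1000;	/* e.g. "size 1000" in a set definition */
	unsigned int kernel_size = rbtree_ksize(user_size);

	printf("user asks for %u ranges -> %u nodes, reported back as %u\n",
	       user_size, kernel_size, rbtree_usize(kernel_size));
	return 0;
}
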
+diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
+index 59050caab65c8b..72c65d938a150e 100644
+--- a/net/rose/af_rose.c
++++ b/net/rose/af_rose.c
+@@ -397,15 +397,15 @@ static int rose_setsockopt(struct socket *sock, int level, int optname,
+ {
+ struct sock *sk = sock->sk;
+ struct rose_sock *rose = rose_sk(sk);
+- int opt;
++ unsigned int opt;
+
+ if (level != SOL_ROSE)
+ return -ENOPROTOOPT;
+
+- if (optlen < sizeof(int))
++ if (optlen < sizeof(unsigned int))
+ return -EINVAL;
+
+- if (copy_from_sockptr(&opt, optval, sizeof(int)))
++ if (copy_from_sockptr(&opt, optval, sizeof(unsigned int)))
+ return -EFAULT;
+
+ switch (optname) {
+@@ -414,31 +414,31 @@ static int rose_setsockopt(struct socket *sock, int level, int optname,
+ return 0;
+
+ case ROSE_T1:
+- if (opt < 1)
++ if (opt < 1 || opt > UINT_MAX / HZ)
+ return -EINVAL;
+ rose->t1 = opt * HZ;
+ return 0;
+
+ case ROSE_T2:
+- if (opt < 1)
++ if (opt < 1 || opt > UINT_MAX / HZ)
+ return -EINVAL;
+ rose->t2 = opt * HZ;
+ return 0;
+
+ case ROSE_T3:
+- if (opt < 1)
++ if (opt < 1 || opt > UINT_MAX / HZ)
+ return -EINVAL;
+ rose->t3 = opt * HZ;
+ return 0;
+
+ case ROSE_HOLDBACK:
+- if (opt < 1)
++ if (opt < 1 || opt > UINT_MAX / HZ)
+ return -EINVAL;
+ rose->hb = opt * HZ;
+ return 0;
+
+ case ROSE_IDLE:
+- if (opt < 0)
++ if (opt > UINT_MAX / (60 * HZ))
+ return -EINVAL;
+ rose->idle = opt * 60 * HZ;
+ return 0;
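
Each of the guards above bounds opt so that the multiplication by HZ (or by
60 * HZ for ROSE_IDLE) cannot wrap a 32-bit unsigned value and silently arm a
near-zero timer. A worked example, assuming HZ is 100:

#include <limits.h>
#include <stdio.h>

#define HZ 100	/* assumption: one common CONFIG_HZ value */

int main(void)
{
	unsigned int opt = UINT_MAX / HZ + 1;	/* just past the guard */

	/* Without the check, opt * HZ wraps to a tiny timeout. */
	printf("opt=%u, opt*HZ wraps to %u jiffies\n", opt, opt * HZ);

	if (opt < 1 || opt > UINT_MAX / HZ)
		printf("rejected with -EINVAL, as in the patched setsockopt\n");
	return 0;
}
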
+diff --git a/net/rose/rose_timer.c b/net/rose/rose_timer.c
+index f06ddbed3fed63..1525773e94aa17 100644
+--- a/net/rose/rose_timer.c
++++ b/net/rose/rose_timer.c
+@@ -122,6 +122,10 @@ static void rose_heartbeat_expiry(struct timer_list *t)
+ struct rose_sock *rose = rose_sk(sk);
+
+ bh_lock_sock(sk);
++ if (sock_owned_by_user(sk)) {
++ sk_reset_timer(sk, &sk->sk_timer, jiffies + HZ/20);
++ goto out;
++ }
+ switch (rose->state) {
+ case ROSE_STATE_0:
+ /* Magic here: If we listen() and a new link dies before it
+@@ -152,6 +156,7 @@ static void rose_heartbeat_expiry(struct timer_list *t)
+ }
+
+ rose_start_heartbeat(sk);
++out:
+ bh_unlock_sock(sk);
+ sock_put(sk);
+ }
+@@ -162,6 +167,10 @@ static void rose_timer_expiry(struct timer_list *t)
+ struct sock *sk = &rose->sock;
+
+ bh_lock_sock(sk);
++ if (sock_owned_by_user(sk)) {
++ sk_reset_timer(sk, &rose->timer, jiffies + HZ/20);
++ goto out;
++ }
+ switch (rose->state) {
+ case ROSE_STATE_1: /* T1 */
+ case ROSE_STATE_4: /* T2 */
+@@ -182,6 +191,7 @@ static void rose_timer_expiry(struct timer_list *t)
+ }
+ break;
+ }
++out:
+ bh_unlock_sock(sk);
+ sock_put(sk);
+ }
+@@ -192,6 +202,10 @@ static void rose_idletimer_expiry(struct timer_list *t)
+ struct sock *sk = &rose->sock;
+
+ bh_lock_sock(sk);
++ if (sock_owned_by_user(sk)) {
++ sk_reset_timer(sk, &rose->idletimer, jiffies + HZ/20);
++ goto out;
++ }
+ rose_clear_queues(sk);
+
+ rose_write_internal(sk, ROSE_CLEAR_REQUEST);
+@@ -207,6 +221,7 @@ static void rose_idletimer_expiry(struct timer_list *t)
+ sk->sk_state_change(sk);
+ sock_set_flag(sk, SOCK_DEAD);
+ }
++out:
+ bh_unlock_sock(sk);
+ sock_put(sk);
+ }
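
All three ROSE timer handlers now share one pattern: bh_lock_sock() does not
exclude a process-context owner of the socket, so when sock_owned_by_user()
is true the handler re-arms itself HZ/20 jiffies out instead of mutating
protocol state under the owner's feet. A toy single-threaded model of the
retry, with plain variables standing in for the socket lock and timer:

#include <stdbool.h>
#include <stdio.h>

#define HZ 100	/* assumption, as above */

static bool owned_by_user;		/* models sock_owned_by_user() */
static unsigned long timer_expires;	/* models sk_reset_timer() */

static void timer_expiry(unsigned long now)
{
	if (owned_by_user) {
		timer_expires = now + HZ / 20;	/* back off, retry soon */
		printf("deferred to t=%lu\n", timer_expires);
		return;
	}
	printf("state machine runs at t=%lu\n", now);
}

int main(void)
{
	owned_by_user = true;
	timer_expiry(0);			/* deferred */
	owned_by_user = false;
	timer_expiry(timer_expires);		/* runs on the retry */
	return 0;
}
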
+diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
+index 598b4ee389fc1e..2a1396cd892f30 100644
+--- a/net/rxrpc/conn_event.c
++++ b/net/rxrpc/conn_event.c
+@@ -63,11 +63,12 @@ int rxrpc_abort_conn(struct rxrpc_connection *conn, struct sk_buff *skb,
+ /*
+ * Mark a connection as being remotely aborted.
+ */
+-static bool rxrpc_input_conn_abort(struct rxrpc_connection *conn,
++static void rxrpc_input_conn_abort(struct rxrpc_connection *conn,
+ struct sk_buff *skb)
+ {
+- return rxrpc_set_conn_aborted(conn, skb, skb->priority, -ECONNABORTED,
+- RXRPC_CALL_REMOTELY_ABORTED);
++ trace_rxrpc_rx_conn_abort(conn, skb);
++ rxrpc_set_conn_aborted(conn, skb, skb->priority, -ECONNABORTED,
++ RXRPC_CALL_REMOTELY_ABORTED);
+ }
+
+ /*
+@@ -202,11 +203,14 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn)
+
+ for (i = 0; i < RXRPC_MAXCALLS; i++) {
+ call = conn->channels[i].call;
+- if (call)
++ if (call) {
++ rxrpc_see_call(call, rxrpc_call_see_conn_abort);
+ rxrpc_set_call_completion(call,
+ conn->completion,
+ conn->abort_code,
+ conn->error);
++ rxrpc_poke_call(call, rxrpc_call_poke_conn_abort);
++ }
+ }
+
+ _leave("");
+diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
+index 552ba84a255c43..5d0842efde69ff 100644
+--- a/net/rxrpc/peer_event.c
++++ b/net/rxrpc/peer_event.c
+@@ -238,7 +238,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
+ bool use;
+ int slot;
+
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+
+ while (!list_empty(collector)) {
+ peer = list_entry(collector->next,
+@@ -249,7 +249,7 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
+ continue;
+
+ use = __rxrpc_use_local(peer->local, rxrpc_local_use_peer_keepalive);
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+
+ if (use) {
+ keepalive_at = peer->last_tx_at + RXRPC_KEEPALIVE_TIME;
+@@ -269,17 +269,17 @@ static void rxrpc_peer_keepalive_dispatch(struct rxrpc_net *rxnet,
+ */
+ slot += cursor;
+ slot &= mask;
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+ list_add_tail(&peer->keepalive_link,
+ &rxnet->peer_keepalive[slot & mask]);
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+ rxrpc_unuse_local(peer->local, rxrpc_local_unuse_peer_keepalive);
+ }
+ rxrpc_put_peer(peer, rxrpc_peer_put_keepalive);
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+ }
+
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+ }
+
+ /*
+@@ -309,7 +309,7 @@ void rxrpc_peer_keepalive_worker(struct work_struct *work)
+ * second; the bucket at cursor + 1 goes at now + 1s and so
+ * on...
+ */
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+ list_splice_init(&rxnet->peer_keepalive_new, &collector);
+
+ stop = cursor + ARRAY_SIZE(rxnet->peer_keepalive);
+@@ -321,7 +321,7 @@ void rxrpc_peer_keepalive_worker(struct work_struct *work)
+ }
+
+ base = now;
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+
+ rxnet->peer_keepalive_base = base;
+ rxnet->peer_keepalive_cursor = cursor;
+diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
+index 49dcda67a0d591..956fc7ea4b7346 100644
+--- a/net/rxrpc/peer_object.c
++++ b/net/rxrpc/peer_object.c
+@@ -313,10 +313,10 @@ void rxrpc_new_incoming_peer(struct rxrpc_local *local, struct rxrpc_peer *peer)
+ hash_key = rxrpc_peer_hash_key(local, &peer->srx);
+ rxrpc_init_peer(local, peer, hash_key);
+
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+ hash_add_rcu(rxnet->peer_hash, &peer->hash_link, hash_key);
+ list_add_tail(&peer->keepalive_link, &rxnet->peer_keepalive_new);
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+ }
+
+ /*
+@@ -348,7 +348,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
+ return NULL;
+ }
+
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+
+ /* Need to check that we aren't racing with someone else */
+ peer = __rxrpc_lookup_peer_rcu(local, srx, hash_key);
+@@ -361,7 +361,7 @@ struct rxrpc_peer *rxrpc_lookup_peer(struct rxrpc_local *local,
+ &rxnet->peer_keepalive_new);
+ }
+
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+
+ if (peer)
+ rxrpc_free_peer(candidate);
+@@ -411,10 +411,10 @@ static void __rxrpc_put_peer(struct rxrpc_peer *peer)
+
+ ASSERT(hlist_empty(&peer->error_targets));
+
+- spin_lock(&rxnet->peer_hash_lock);
++ spin_lock_bh(&rxnet->peer_hash_lock);
+ hash_del_rcu(&peer->hash_link);
+ list_del_init(&peer->keepalive_link);
+- spin_unlock(&rxnet->peer_hash_lock);
++ spin_unlock_bh(&rxnet->peer_hash_lock);
+
+ rxrpc_free_peer(peer);
+ }
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index bbc778c233c892..dfa3067084948f 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -390,6 +390,7 @@ static struct tcf_proto *tcf_proto_create(const char *kind, u32 protocol,
+ tp->protocol = protocol;
+ tp->prio = prio;
+ tp->chain = chain;
++ tp->usesw = !tp->ops->reoffload;
+ spin_lock_init(&tp->lock);
+ refcount_set(&tp->refcnt, 1);
+
+@@ -410,39 +411,31 @@ static void tcf_proto_get(struct tcf_proto *tp)
+ refcount_inc(&tp->refcnt);
+ }
+
+-static void tcf_maintain_bypass(struct tcf_block *block)
++static void tcf_proto_count_usesw(struct tcf_proto *tp, bool add)
+ {
+- int filtercnt = atomic_read(&block->filtercnt);
+- int skipswcnt = atomic_read(&block->skipswcnt);
+- bool bypass_wanted = filtercnt > 0 && filtercnt == skipswcnt;
+-
+- if (bypass_wanted != block->bypass_wanted) {
+ #ifdef CONFIG_NET_CLS_ACT
+- if (bypass_wanted)
+- static_branch_inc(&tcf_bypass_check_needed_key);
+- else
+- static_branch_dec(&tcf_bypass_check_needed_key);
+-#endif
+- block->bypass_wanted = bypass_wanted;
++ struct tcf_block *block = tp->chain->block;
++ bool counted = false;
++
++ if (!add) {
++ if (tp->usesw && tp->counted) {
++ if (!atomic_dec_return(&block->useswcnt))
++ static_branch_dec(&tcf_sw_enabled_key);
++ tp->counted = false;
++ }
++ return;
+ }
+-}
+-
+-static void tcf_block_filter_cnt_update(struct tcf_block *block, bool *counted, bool add)
+-{
+- lockdep_assert_not_held(&block->cb_lock);
+
+- down_write(&block->cb_lock);
+- if (*counted != add) {
+- if (add) {
+- atomic_inc(&block->filtercnt);
+- *counted = true;
+- } else {
+- atomic_dec(&block->filtercnt);
+- *counted = false;
+- }
++ spin_lock(&tp->lock);
++ if (tp->usesw && !tp->counted) {
++ counted = true;
++ tp->counted = true;
+ }
+- tcf_maintain_bypass(block);
+- up_write(&block->cb_lock);
++ spin_unlock(&tp->lock);
++
++ if (counted && atomic_inc_return(&block->useswcnt) == 1)
++ static_branch_inc(&tcf_sw_enabled_key);
++#endif
+ }
+
+ static void tcf_chain_put(struct tcf_chain *chain);
+@@ -451,7 +444,7 @@ static void tcf_proto_destroy(struct tcf_proto *tp, bool rtnl_held,
+ bool sig_destroy, struct netlink_ext_ack *extack)
+ {
+ tp->ops->destroy(tp, rtnl_held, extack);
+- tcf_block_filter_cnt_update(tp->chain->block, &tp->counted, false);
++ tcf_proto_count_usesw(tp, false);
+ if (sig_destroy)
+ tcf_proto_signal_destroyed(tp->chain, tp);
+ tcf_chain_put(tp->chain);
+@@ -2404,7 +2397,7 @@ static int tc_new_tfilter(struct sk_buff *skb, struct nlmsghdr *n,
+ tfilter_notify(net, skb, n, tp, block, q, parent, fh,
+ RTM_NEWTFILTER, false, rtnl_held, extack);
+ tfilter_put(tp, fh);
+- tcf_block_filter_cnt_update(block, &tp->counted, true);
++ tcf_proto_count_usesw(tp, true);
+ /* q pointer is NULL for shared blocks */
+ if (q)
+ q->flags &= ~TCQ_F_CAN_BYPASS;
+@@ -3521,8 +3514,6 @@ static void tcf_block_offload_inc(struct tcf_block *block, u32 *flags)
+ if (*flags & TCA_CLS_FLAGS_IN_HW)
+ return;
+ *flags |= TCA_CLS_FLAGS_IN_HW;
+- if (tc_skip_sw(*flags))
+- atomic_inc(&block->skipswcnt);
+ atomic_inc(&block->offloadcnt);
+ }
+
+@@ -3531,8 +3522,6 @@ static void tcf_block_offload_dec(struct tcf_block *block, u32 *flags)
+ if (!(*flags & TCA_CLS_FLAGS_IN_HW))
+ return;
+ *flags &= ~TCA_CLS_FLAGS_IN_HW;
+- if (tc_skip_sw(*flags))
+- atomic_dec(&block->skipswcnt);
+ atomic_dec(&block->offloadcnt);
+ }
+
+diff --git a/net/sched/cls_bpf.c b/net/sched/cls_bpf.c
+index 1941ebec23ff9c..7fbe42f0e5c2b7 100644
+--- a/net/sched/cls_bpf.c
++++ b/net/sched/cls_bpf.c
+@@ -509,6 +509,8 @@ static int cls_bpf_change(struct net *net, struct sk_buff *in_skb,
+ if (!tc_in_hw(prog->gen_flags))
+ prog->gen_flags |= TCA_CLS_FLAGS_NOT_IN_HW;
+
++ tcf_proto_update_usesw(tp, prog->gen_flags);
++
+ if (oldprog) {
+ idr_replace(&head->handle_idr, prog, handle);
+ list_replace_rcu(&oldprog->link, &prog->link);
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 1008ec8a464c93..03505673d5234d 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -2503,6 +2503,8 @@ static int fl_change(struct net *net, struct sk_buff *in_skb,
+ if (!tc_in_hw(fnew->flags))
+ fnew->flags |= TCA_CLS_FLAGS_NOT_IN_HW;
+
++ tcf_proto_update_usesw(tp, fnew->flags);
++
+ spin_lock(&tp->lock);
+
+ /* tp was deleted concurrently. -EAGAIN will cause caller to lookup
+diff --git a/net/sched/cls_matchall.c b/net/sched/cls_matchall.c
+index 9f1e62ca508d04..f03bf5da39ee83 100644
+--- a/net/sched/cls_matchall.c
++++ b/net/sched/cls_matchall.c
+@@ -228,6 +228,8 @@ static int mall_change(struct net *net, struct sk_buff *in_skb,
+ if (!tc_in_hw(new->flags))
+ new->flags |= TCA_CLS_FLAGS_NOT_IN_HW;
+
++ tcf_proto_update_usesw(tp, new->flags);
++
+ *arg = head;
+ rcu_assign_pointer(tp->root, new);
+ return 0;
+diff --git a/net/sched/cls_u32.c b/net/sched/cls_u32.c
+index d3a03c57545bcc..2a1c00048fd6f4 100644
+--- a/net/sched/cls_u32.c
++++ b/net/sched/cls_u32.c
+@@ -951,6 +951,8 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ if (!tc_in_hw(new->flags))
+ new->flags |= TCA_CLS_FLAGS_NOT_IN_HW;
+
++ tcf_proto_update_usesw(tp, new->flags);
++
+ u32_replace_knode(tp, tp_c, new);
+ tcf_unbind_filter(tp, &n->res);
+ tcf_exts_get_net(&n->exts);
+@@ -1164,6 +1166,8 @@ static int u32_change(struct net *net, struct sk_buff *in_skb,
+ if (!tc_in_hw(n->flags))
+ n->flags |= TCA_CLS_FLAGS_NOT_IN_HW;
+
++ tcf_proto_update_usesw(tp, n->flags);
++
+ ins = &ht->ht[TC_U32_HASH(handle)];
+ for (pins = rtnl_dereference(*ins); pins;
+ ins = &pins->next, pins = rtnl_dereference(*ins))
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index a1d27bc039a364..d26ac6bd9b1080 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -1664,6 +1664,10 @@ static int tc_modify_qdisc(struct sk_buff *skb, struct nlmsghdr *n,
+ q = qdisc_lookup(dev, tcm->tcm_handle);
+ if (!q)
+ goto create_n_graft;
++ if (q->parent != tcm->tcm_parent) {
++ NL_SET_ERR_MSG(extack, "Cannot move an existing qdisc to a different parent");
++ return -EINVAL;
++ }
+ if (n->nlmsg_flags & NLM_F_EXCL) {
+ NL_SET_ERR_MSG(extack, "Exclusivity flag on, cannot override");
+ return -EEXIST;
+diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
+index 38ec18f73de43a..8874ae6680952a 100644
+--- a/net/sched/sch_generic.c
++++ b/net/sched/sch_generic.c
+@@ -911,8 +911,8 @@ static int pfifo_fast_change_tx_queue_len(struct Qdisc *sch,
+ bands[prio] = q;
+ }
+
+- return skb_array_resize_multiple(bands, PFIFO_FAST_BANDS, new_len,
+- GFP_KERNEL);
++ return skb_array_resize_multiple_bh(bands, PFIFO_FAST_BANDS, new_len,
++ GFP_KERNEL);
+ }
+
+ struct Qdisc_ops pfifo_fast_ops __read_mostly = {
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index 3b9245a3c767a6..65d5b59da58303 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -77,12 +77,6 @@
+ #define SFQ_EMPTY_SLOT 0xffff
+ #define SFQ_DEFAULT_HASH_DIVISOR 1024
+
+-/* We use 16 bits to store allot, and want to handle packets up to 64K
+- * Scale allot by 8 (1<<3) so that no overflow occurs.
+- */
+-#define SFQ_ALLOT_SHIFT 3
+-#define SFQ_ALLOT_SIZE(X) DIV_ROUND_UP(X, 1 << SFQ_ALLOT_SHIFT)
+-
+ /* This type should contain at least SFQ_MAX_DEPTH + 1 + SFQ_MAX_FLOWS values */
+ typedef u16 sfq_index;
+
+@@ -104,7 +98,7 @@ struct sfq_slot {
+ sfq_index next; /* next slot in sfq RR chain */
+ struct sfq_head dep; /* anchor in dep[] chains */
+ unsigned short hash; /* hash value (index in ht[]) */
+- short allot; /* credit for this slot */
++ int allot; /* credit for this slot */
+
+ unsigned int backlog;
+ struct red_vars vars;
+@@ -120,7 +114,6 @@ struct sfq_sched_data {
+ siphash_key_t perturbation;
+ u8 cur_depth; /* depth of longest slot */
+ u8 flags;
+- unsigned short scaled_quantum; /* SFQ_ALLOT_SIZE(quantum) */
+ struct tcf_proto __rcu *filter_list;
+ struct tcf_block *block;
+ sfq_index *ht; /* Hash table ('divisor' slots) */
+@@ -456,7 +449,7 @@ sfq_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free)
+ */
+ q->tail = slot;
+ /* We could use a bigger initial quantum for new flows */
+- slot->allot = q->scaled_quantum;
++ slot->allot = q->quantum;
+ }
+ if (++sch->q.qlen <= q->limit)
+ return NET_XMIT_SUCCESS;
+@@ -493,7 +486,7 @@ sfq_dequeue(struct Qdisc *sch)
+ slot = &q->slots[a];
+ if (slot->allot <= 0) {
+ q->tail = slot;
+- slot->allot += q->scaled_quantum;
++ slot->allot += q->quantum;
+ goto next_slot;
+ }
+ skb = slot_dequeue_head(slot);
+@@ -512,7 +505,7 @@ sfq_dequeue(struct Qdisc *sch)
+ }
+ q->tail->next = next_a;
+ } else {
+- slot->allot -= SFQ_ALLOT_SIZE(qdisc_pkt_len(skb));
++ slot->allot -= qdisc_pkt_len(skb);
+ }
+ return skb;
+ }
+@@ -595,7 +588,7 @@ static void sfq_rehash(struct Qdisc *sch)
+ q->tail->next = x;
+ }
+ q->tail = slot;
+- slot->allot = q->scaled_quantum;
++ slot->allot = q->quantum;
+ }
+ }
+ sch->q.qlen -= dropped;
+@@ -628,7 +621,8 @@ static void sfq_perturbation(struct timer_list *t)
+ rcu_read_unlock();
+ }
+
+-static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
++static int sfq_change(struct Qdisc *sch, struct nlattr *opt,
++ struct netlink_ext_ack *extack)
+ {
+ struct sfq_sched_data *q = qdisc_priv(sch);
+ struct tc_sfq_qopt *ctl = nla_data(opt);
+@@ -646,14 +640,10 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
+ (!is_power_of_2(ctl->divisor) || ctl->divisor > 65536))
+ return -EINVAL;
+
+- /* slot->allot is a short, make sure quantum is not too big. */
+- if (ctl->quantum) {
+- unsigned int scaled = SFQ_ALLOT_SIZE(ctl->quantum);
+-
+- if (scaled <= 0 || scaled > SHRT_MAX)
+- return -EINVAL;
++ if ((int)ctl->quantum < 0) {
++ NL_SET_ERR_MSG_MOD(extack, "invalid quantum");
++ return -EINVAL;
+ }
+-
+ if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max,
+ ctl_v1->Wlog, ctl_v1->Scell_log, NULL))
+ return -EINVAL;
+@@ -662,11 +652,13 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
+ if (!p)
+ return -ENOMEM;
+ }
++ if (ctl->limit == 1) {
++ NL_SET_ERR_MSG_MOD(extack, "invalid limit");
++ return -EINVAL;
++ }
+ sch_tree_lock(sch);
+- if (ctl->quantum) {
++ if (ctl->quantum)
+ q->quantum = ctl->quantum;
+- q->scaled_quantum = SFQ_ALLOT_SIZE(q->quantum);
+- }
+ WRITE_ONCE(q->perturb_period, ctl->perturb_period * HZ);
+ if (ctl->flows)
+ q->maxflows = min_t(u32, ctl->flows, SFQ_MAX_FLOWS);
+@@ -762,12 +754,11 @@ static int sfq_init(struct Qdisc *sch, struct nlattr *opt,
+ q->divisor = SFQ_DEFAULT_HASH_DIVISOR;
+ q->maxflows = SFQ_DEFAULT_FLOWS;
+ q->quantum = psched_mtu(qdisc_dev(sch));
+- q->scaled_quantum = SFQ_ALLOT_SIZE(q->quantum);
+ q->perturb_period = 0;
+ get_random_bytes(&q->perturbation, sizeof(q->perturbation));
+
+ if (opt) {
+- int err = sfq_change(sch, opt);
++ int err = sfq_change(sch, opt, extack);
+ if (err)
+ return err;
+ }
+@@ -878,7 +869,7 @@ static int sfq_dump_class_stats(struct Qdisc *sch, unsigned long cl,
+ if (idx != SFQ_EMPTY_SLOT) {
+ const struct sfq_slot *slot = &q->slots[idx];
+
+- xstats.allot = slot->allot << SFQ_ALLOT_SHIFT;
++ xstats.allot = slot->allot;
+ qs.qlen = slot->qlen;
+ qs.backlog = slot->backlog;
+ }
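
The SFQ conversion widens slot->allot from short to int and drops the
SFQ_ALLOT_SHIFT scaling: credits are now plain bytes, which removes both the
divide-by-8 rounding on every packet and the SHRT_MAX ceiling the old quantum
validation had to enforce. The arithmetic, side by side:

#include <stdio.h>

int main(void)
{
	/* Old scheme: 16-bit credit, charges scaled down by 8 so a
	 * 64KiB packet still fits: 65536 >> 3 = 8192 <= SHRT_MAX. */
	short old_allot = 0;
	old_allot -= 65536 >> 3;
	printf("old scaled charge: %d\n", old_allot);

	/* The same charge unscaled does not fit in a short at all
	 * (implementation-defined truncation, typically to 0): */
	short broken = 0;
	broken -= 65536;
	printf("unscaled short:    %d\n", broken);

	/* New scheme: int credit in bytes, no shift, no ceiling. */
	int new_allot = 0;
	new_allot -= 65536;
	printf("new byte charge:   %d\n", new_allot);
	return 0;
}
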
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index 6cc7b846cff1bb..ebc41a7b13dbec 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -2738,7 +2738,7 @@ int smc_accept(struct socket *sock, struct socket *new_sock,
+ release_sock(clcsk);
+ } else if (!atomic_read(&smc_sk(nsk)->conn.bytes_to_rcv)) {
+ lock_sock(nsk);
+- smc_rx_wait(smc_sk(nsk), &timeo, smc_rx_data_available);
++ smc_rx_wait(smc_sk(nsk), &timeo, 0, smc_rx_data_available);
+ release_sock(nsk);
+ }
+ }
+diff --git a/net/smc/smc_rx.c b/net/smc/smc_rx.c
+index f0cbe77a80b440..79047721df5110 100644
+--- a/net/smc/smc_rx.c
++++ b/net/smc/smc_rx.c
+@@ -238,22 +238,23 @@ static int smc_rx_splice(struct pipe_inode_info *pipe, char *src, size_t len,
+ return -ENOMEM;
+ }
+
+-static int smc_rx_data_available_and_no_splice_pend(struct smc_connection *conn)
++static int smc_rx_data_available_and_no_splice_pend(struct smc_connection *conn, size_t peeked)
+ {
+- return atomic_read(&conn->bytes_to_rcv) &&
++ return smc_rx_data_available(conn, peeked) &&
+ !atomic_read(&conn->splice_pending);
+ }
+
+ /* blocks rcvbuf consumer until >=len bytes available or timeout or interrupted
+ * @smc smc socket
+ * @timeo pointer to max seconds to wait, pointer to value 0 for no timeout
++ * @peeked number of bytes already peeked
+ * @fcrit add'l criterion to evaluate as function pointer
+ * Returns:
+ * 1 if at least 1 byte available in rcvbuf or if socket error/shutdown.
+ * 0 otherwise (nothing in rcvbuf nor timeout, e.g. interrupted).
+ */
+-int smc_rx_wait(struct smc_sock *smc, long *timeo,
+- int (*fcrit)(struct smc_connection *conn))
++int smc_rx_wait(struct smc_sock *smc, long *timeo, size_t peeked,
++ int (*fcrit)(struct smc_connection *conn, size_t baseline))
+ {
+ DEFINE_WAIT_FUNC(wait, woken_wake_function);
+ struct smc_connection *conn = &smc->conn;
+@@ -262,7 +263,7 @@ int smc_rx_wait(struct smc_sock *smc, long *timeo,
+ struct sock *sk = &smc->sk;
+ int rc;
+
+- if (fcrit(conn))
++ if (fcrit(conn, peeked))
+ return 1;
+ sk_set_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+ add_wait_queue(sk_sleep(sk), &wait);
+@@ -271,7 +272,7 @@ int smc_rx_wait(struct smc_sock *smc, long *timeo,
+ cflags->peer_conn_abort ||
+ READ_ONCE(sk->sk_shutdown) & RCV_SHUTDOWN ||
+ conn->killed ||
+- fcrit(conn),
++ fcrit(conn, peeked),
+ &wait);
+ remove_wait_queue(sk_sleep(sk), &wait);
+ sk_clear_bit(SOCKWQ_ASYNC_WAITDATA, sk);
+@@ -322,11 +323,11 @@ static int smc_rx_recv_urg(struct smc_sock *smc, struct msghdr *msg, int len,
+ return -EAGAIN;
+ }
+
+-static bool smc_rx_recvmsg_data_available(struct smc_sock *smc)
++static bool smc_rx_recvmsg_data_available(struct smc_sock *smc, size_t peeked)
+ {
+ struct smc_connection *conn = &smc->conn;
+
+- if (smc_rx_data_available(conn))
++ if (smc_rx_data_available(conn, peeked))
+ return true;
+ else if (conn->urg_state == SMC_URG_VALID)
+ /* we received a single urgent Byte - skip */
+@@ -344,10 +345,10 @@ static bool smc_rx_recvmsg_data_available(struct smc_sock *smc)
+ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
+ struct pipe_inode_info *pipe, size_t len, int flags)
+ {
+- size_t copylen, read_done = 0, read_remaining = len;
++ size_t copylen, read_done = 0, read_remaining = len, peeked_bytes = 0;
+ size_t chunk_len, chunk_off, chunk_len_sum;
+ struct smc_connection *conn = &smc->conn;
+- int (*func)(struct smc_connection *conn);
++ int (*func)(struct smc_connection *conn, size_t baseline);
+ union smc_host_cursor cons;
+ int readable, chunk;
+ char *rcvbuf_base;
+@@ -384,14 +385,14 @@ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
+ if (conn->killed)
+ break;
+
+- if (smc_rx_recvmsg_data_available(smc))
++ if (smc_rx_recvmsg_data_available(smc, peeked_bytes))
+ goto copy;
+
+ if (sk->sk_shutdown & RCV_SHUTDOWN) {
+ /* smc_cdc_msg_recv_action() could have run after
+ * above smc_rx_recvmsg_data_available()
+ */
+- if (smc_rx_recvmsg_data_available(smc))
++ if (smc_rx_recvmsg_data_available(smc, peeked_bytes))
+ goto copy;
+ break;
+ }
+@@ -425,26 +426,28 @@ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
+ }
+ }
+
+- if (!smc_rx_data_available(conn)) {
+- smc_rx_wait(smc, &timeo, smc_rx_data_available);
++ if (!smc_rx_data_available(conn, peeked_bytes)) {
++ smc_rx_wait(smc, &timeo, peeked_bytes, smc_rx_data_available);
+ continue;
+ }
+
+ copy:
+ /* initialize variables for 1st iteration of subsequent loop */
+ /* could be just 1 byte, even after waiting on data above */
+- readable = atomic_read(&conn->bytes_to_rcv);
++ readable = smc_rx_data_available(conn, peeked_bytes);
+ splbytes = atomic_read(&conn->splice_pending);
+ if (!readable || (msg && splbytes)) {
+ if (splbytes)
+ func = smc_rx_data_available_and_no_splice_pend;
+ else
+ func = smc_rx_data_available;
+- smc_rx_wait(smc, &timeo, func);
++ smc_rx_wait(smc, &timeo, peeked_bytes, func);
+ continue;
+ }
+
+ smc_curs_copy(&cons, &conn->local_tx_ctrl.cons, conn);
++ if ((flags & MSG_PEEK) && peeked_bytes)
++ smc_curs_add(conn->rmb_desc->len, &cons, peeked_bytes);
+ /* subsequent splice() calls pick up where previous left */
+ if (splbytes)
+ smc_curs_add(conn->rmb_desc->len, &cons, splbytes);
+@@ -480,6 +483,8 @@ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
+ }
+ read_remaining -= chunk_len;
+ read_done += chunk_len;
++ if (flags & MSG_PEEK)
++ peeked_bytes += chunk_len;
+
+ if (chunk_len_sum == copylen)
+ break; /* either on 1st or 2nd iteration */
+diff --git a/net/smc/smc_rx.h b/net/smc/smc_rx.h
+index db823c97d824ea..994f5e42d1ba26 100644
+--- a/net/smc/smc_rx.h
++++ b/net/smc/smc_rx.h
+@@ -21,11 +21,11 @@ void smc_rx_init(struct smc_sock *smc);
+
+ int smc_rx_recvmsg(struct smc_sock *smc, struct msghdr *msg,
+ struct pipe_inode_info *pipe, size_t len, int flags);
+-int smc_rx_wait(struct smc_sock *smc, long *timeo,
+- int (*fcrit)(struct smc_connection *conn));
+-static inline int smc_rx_data_available(struct smc_connection *conn)
++int smc_rx_wait(struct smc_sock *smc, long *timeo, size_t peeked,
++ int (*fcrit)(struct smc_connection *conn, size_t baseline));
++static inline int smc_rx_data_available(struct smc_connection *conn, size_t peeked)
+ {
+- return atomic_read(&conn->bytes_to_rcv);
++ return atomic_read(&conn->bytes_to_rcv) - peeked;
+ }
+
+ #endif /* SMC_RX_H */
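
Every availability check in the receive path now takes a peeked-byte
baseline: with MSG_PEEK the consumer cursor never advances, so bytes_to_rcv
alone would keep reporting the already-peeked data as new, and the copy
cursor is instead advanced over it with smc_curs_add(). A minimal sketch of
the adjusted predicate:

#include <stdio.h>

/* Models the patched smc_rx_data_available(conn, peeked). */
static int rx_data_available(int bytes_to_rcv, size_t peeked)
{
	return bytes_to_rcv - (int)peeked;
}

int main(void)
{
	int bytes_to_rcv = 4096;	/* models conn->bytes_to_rcv */
	size_t peeked = 4096;		/* a MSG_PEEK loop saw it all */

	/* Without the baseline the peeking reader would spin on the
	 * same 4096 bytes forever; with it, it blocks for new data. */
	int avail = rx_data_available(bytes_to_rcv, peeked);
	printf("available=%d -> %s\n", avail, avail ? "copy" : "wait");
	return 0;
}
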
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 59e2c46240f5c1..3bfbb789c4beed 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -1083,9 +1083,6 @@ static void svc_tcp_fragment_received(struct svc_sock *svsk)
+ /* If we have more data, signal svc_xprt_enqueue() to try again */
+ svsk->sk_tcplen = 0;
+ svsk->sk_marker = xdr_zero;
+-
+- smp_wmb();
+- tcp_set_rcvlowat(svsk->sk_sk, 1);
+ }
+
+ /**
+@@ -1175,17 +1172,10 @@ static int svc_tcp_recvfrom(struct svc_rqst *rqstp)
+ goto err_delete;
+ if (len == want)
+ svc_tcp_fragment_received(svsk);
+- else {
+- /* Avoid more ->sk_data_ready() calls until the rest
+- * of the message has arrived. This reduces service
+- * thread wake-ups on large incoming messages. */
+- tcp_set_rcvlowat(svsk->sk_sk,
+- svc_sock_reclen(svsk) - svsk->sk_tcplen);
+-
++ else
+ trace_svcsock_tcp_recv_short(&svsk->sk_xprt,
+ svc_sock_reclen(svsk),
+ svsk->sk_tcplen - sizeof(rpc_fraghdr));
+- }
+ goto err_noclose;
+ error:
+ if (len != -EAGAIN)
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 15724f171b0f96..f5d116a1bdea1a 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1519,6 +1519,11 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr,
+ if (err < 0)
+ goto out;
+
++ /* sk_err might have been set as a result of an earlier
++ * (failed) connect attempt.
++ */
++ sk->sk_err = 0;
++
+ /* Mark sock as connecting and set the error code to in
+ * progress in case this is a non-blocking connect.
+ */
+diff --git a/net/wireless/scan.c b/net/wireless/scan.c
+index d0aed41ded2f19..18e132cdea72a8 100644
+--- a/net/wireless/scan.c
++++ b/net/wireless/scan.c
+@@ -763,12 +763,11 @@ static void cfg80211_scan_req_add_chan(struct cfg80211_scan_request *request,
+ }
+ }
+
++ request->n_channels++;
+ request->channels[n_channels] = chan;
+ if (add_to_6ghz)
+ request->scan_6ghz_params[request->n_6ghz_params].channel_idx =
+ n_channels;
+-
+- request->n_channels++;
+ }
+
+ static bool cfg80211_find_ssid_match(struct cfg80211_colocated_ap *ap,
+@@ -858,9 +857,7 @@ static int cfg80211_scan_6ghz(struct cfg80211_registered_device *rdev)
+ if (ret)
+ continue;
+
+- entry = kzalloc(sizeof(*entry) + IEEE80211_MAX_SSID_LEN,
+- GFP_ATOMIC);
+-
++ entry = kzalloc(sizeof(*entry), GFP_ATOMIC);
+ if (!entry)
+ continue;
+
+diff --git a/net/wireless/tests/scan.c b/net/wireless/tests/scan.c
+index 9f458be7165951..79a99cf5e8922f 100644
+--- a/net/wireless/tests/scan.c
++++ b/net/wireless/tests/scan.c
+@@ -810,6 +810,8 @@ static void test_cfg80211_parse_colocated_ap(struct kunit *test)
+ skb_put_data(input, "123", 3);
+
+ ies = kunit_kzalloc(test, struct_size(ies, data, input->len), GFP_KERNEL);
++ KUNIT_ASSERT_NOT_NULL(test, ies);
++
+ ies->len = input->len;
+ memcpy(ies->data, input->data, input->len);
+
+diff --git a/net/xfrm/xfrm_compat.c b/net/xfrm/xfrm_compat.c
+index 91357ccaf4afe3..5b9ee63e30b69d 100644
+--- a/net/xfrm/xfrm_compat.c
++++ b/net/xfrm/xfrm_compat.c
+@@ -132,6 +132,7 @@ static const struct nla_policy compat_policy[XFRMA_MAX+1] = {
+ [XFRMA_MTIMER_THRESH] = { .type = NLA_U32 },
+ [XFRMA_SA_DIR] = NLA_POLICY_RANGE(NLA_U8, XFRM_SA_DIR_IN, XFRM_SA_DIR_OUT),
+ [XFRMA_NAT_KEEPALIVE_INTERVAL] = { .type = NLA_U32 },
++ [XFRMA_SA_PCPU] = { .type = NLA_U32 },
+ };
+
+ static struct nlmsghdr *xfrm_nlmsg_put_compat(struct sk_buff *skb,
+@@ -282,9 +283,10 @@ static int xfrm_xlate64_attr(struct sk_buff *dst, const struct nlattr *src)
+ case XFRMA_MTIMER_THRESH:
+ case XFRMA_SA_DIR:
+ case XFRMA_NAT_KEEPALIVE_INTERVAL:
++ case XFRMA_SA_PCPU:
+ return xfrm_nla_cpy(dst, src, nla_len(src));
+ default:
+- BUILD_BUG_ON(XFRMA_MAX != XFRMA_NAT_KEEPALIVE_INTERVAL);
++ BUILD_BUG_ON(XFRMA_MAX != XFRMA_SA_PCPU);
+ pr_warn_once("unsupported nla_type %d\n", src->nla_type);
+ return -EOPNOTSUPP;
+ }
+@@ -439,7 +441,7 @@ static int xfrm_xlate32_attr(void *dst, const struct nlattr *nla,
+ int err;
+
+ if (type > XFRMA_MAX) {
+- BUILD_BUG_ON(XFRMA_MAX != XFRMA_NAT_KEEPALIVE_INTERVAL);
++ BUILD_BUG_ON(XFRMA_MAX != XFRMA_SA_PCPU);
+ NL_SET_ERR_MSG(extack, "Bad attribute");
+ return -EOPNOTSUPP;
+ }
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index 749e7eea99e465..841a60a6fbfea3 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -572,7 +572,7 @@ int xfrm_input(struct sk_buff *skb, int nexthdr, __be32 spi, int encap_type)
+ goto drop;
+ }
+
+- x = xfrm_state_lookup(net, mark, daddr, spi, nexthdr, family);
++ x = xfrm_input_state_lookup(net, mark, daddr, spi, nexthdr, family);
+ if (x == NULL) {
+ secpath_reset(skb);
+ XFRM_INC_STATS(net, LINUX_MIB_XFRMINNOSTATES);
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index a2ea9dbac90b36..8a1b83191a6cdf 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -434,6 +434,7 @@ struct xfrm_policy *xfrm_policy_alloc(struct net *net, gfp_t gfp)
+ if (policy) {
+ write_pnet(&policy->xp_net, net);
+ INIT_LIST_HEAD(&policy->walk.all);
++ INIT_HLIST_HEAD(&policy->state_cache_list);
+ INIT_HLIST_NODE(&policy->bydst);
+ INIT_HLIST_NODE(&policy->byidx);
+ rwlock_init(&policy->lock);
+@@ -475,6 +476,9 @@ EXPORT_SYMBOL(xfrm_policy_destroy);
+
+ static void xfrm_policy_kill(struct xfrm_policy *policy)
+ {
++ struct net *net = xp_net(policy);
++ struct xfrm_state *x;
++
+ xfrm_dev_policy_delete(policy);
+
+ write_lock_bh(&policy->lock);
+@@ -490,6 +494,13 @@ static void xfrm_policy_kill(struct xfrm_policy *policy)
+ if (del_timer(&policy->timer))
+ xfrm_pol_put(policy);
+
++ /* XXX: Flush state cache */
++ spin_lock_bh(&net->xfrm.xfrm_state_lock);
++ hlist_for_each_entry_rcu(x, &policy->state_cache_list, state_cache) {
++ hlist_del_init_rcu(&x->state_cache);
++ }
++ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
++
+ xfrm_pol_put(policy);
+ }
+
+@@ -3275,6 +3286,7 @@ struct dst_entry *xfrm_lookup_with_ifid(struct net *net,
+ dst_release(dst);
+ dst = dst_orig;
+ }
++
+ ok:
+ xfrm_pols_put(pols, drop_pols);
+ if (dst && dst->xfrm &&
+diff --git a/net/xfrm/xfrm_replay.c b/net/xfrm/xfrm_replay.c
+index bc56c630572527..235bbefc2abae2 100644
+--- a/net/xfrm/xfrm_replay.c
++++ b/net/xfrm/xfrm_replay.c
+@@ -714,10 +714,12 @@ static int xfrm_replay_overflow_offload_esn(struct xfrm_state *x, struct sk_buff
+ oseq += skb_shinfo(skb)->gso_segs;
+ }
+
+- if (unlikely(xo->seq.low < replay_esn->oseq)) {
+- XFRM_SKB_CB(skb)->seq.output.hi = ++oseq_hi;
+- xo->seq.hi = oseq_hi;
+- replay_esn->oseq_hi = oseq_hi;
++ if (unlikely(oseq < replay_esn->oseq)) {
++ replay_esn->oseq_hi = ++oseq_hi;
++ if (xo->seq.low < replay_esn->oseq) {
++ XFRM_SKB_CB(skb)->seq.output.hi = oseq_hi;
++ xo->seq.hi = oseq_hi;
++ }
+ if (replay_esn->oseq_hi == 0) {
+ replay_esn->oseq--;
+ replay_esn->oseq_hi--;
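
The fix keys the high-word bump off the post-GSO sequence value: with
gso_segs > 1 the low 32-bit counter can wrap between the first segment's
value (xo->seq.low) and the final oseq, and only the latter reliably detects
the rollover. Worked through with concrete numbers:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t oseq = UINT32_MAX - 1;	/* replay_esn->oseq before sending */
	uint32_t oseq_hi = 7;		/* replay_esn->oseq_hi */
	uint32_t gso_segs = 3;

	uint32_t new_oseq = oseq + gso_segs;	/* wraps to 1 */

	if (new_oseq < oseq)	/* the patched condition */
		oseq_hi++;

	printf("low=%u high=%u -> 64-bit seq=%llu\n",
	       (unsigned int)new_oseq, (unsigned int)oseq_hi,
	       ((unsigned long long)oseq_hi << 32) | new_oseq);
	return 0;
}
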
+diff --git a/net/xfrm/xfrm_state.c b/net/xfrm/xfrm_state.c
+index 37478d36a8dff7..711e816fc4041e 100644
+--- a/net/xfrm/xfrm_state.c
++++ b/net/xfrm/xfrm_state.c
+@@ -34,6 +34,8 @@
+
+ #define xfrm_state_deref_prot(table, net) \
+ rcu_dereference_protected((table), lockdep_is_held(&(net)->xfrm.xfrm_state_lock))
++#define xfrm_state_deref_check(table, net) \
++ rcu_dereference_check((table), lockdep_is_held(&(net)->xfrm.xfrm_state_lock))
+
+ static void xfrm_state_gc_task(struct work_struct *work);
+
+@@ -62,6 +64,8 @@ static inline unsigned int xfrm_dst_hash(struct net *net,
+ u32 reqid,
+ unsigned short family)
+ {
++ lockdep_assert_held(&net->xfrm.xfrm_state_lock);
++
+ return __xfrm_dst_hash(daddr, saddr, reqid, family, net->xfrm.state_hmask);
+ }
+
+@@ -70,6 +74,8 @@ static inline unsigned int xfrm_src_hash(struct net *net,
+ const xfrm_address_t *saddr,
+ unsigned short family)
+ {
++ lockdep_assert_held(&net->xfrm.xfrm_state_lock);
++
+ return __xfrm_src_hash(daddr, saddr, family, net->xfrm.state_hmask);
+ }
+
+@@ -77,11 +83,15 @@ static inline unsigned int
+ xfrm_spi_hash(struct net *net, const xfrm_address_t *daddr,
+ __be32 spi, u8 proto, unsigned short family)
+ {
++ lockdep_assert_held(&net->xfrm.xfrm_state_lock);
++
+ return __xfrm_spi_hash(daddr, spi, proto, family, net->xfrm.state_hmask);
+ }
+
+ static unsigned int xfrm_seq_hash(struct net *net, u32 seq)
+ {
++ lockdep_assert_held(&net->xfrm.xfrm_state_lock);
++
+ return __xfrm_seq_hash(seq, net->xfrm.state_hmask);
+ }
+
+@@ -665,6 +675,7 @@ struct xfrm_state *xfrm_state_alloc(struct net *net)
+ refcount_set(&x->refcnt, 1);
+ atomic_set(&x->tunnel_users, 0);
+ INIT_LIST_HEAD(&x->km.all);
++ INIT_HLIST_NODE(&x->state_cache);
+ INIT_HLIST_NODE(&x->bydst);
+ INIT_HLIST_NODE(&x->bysrc);
+ INIT_HLIST_NODE(&x->byspi);
+@@ -679,6 +690,7 @@ struct xfrm_state *xfrm_state_alloc(struct net *net)
+ x->lft.hard_packet_limit = XFRM_INF;
+ x->replay_maxage = 0;
+ x->replay_maxdiff = 0;
++ x->pcpu_num = UINT_MAX;
+ spin_lock_init(&x->lock);
+ }
+ return x;
+@@ -743,12 +755,18 @@ int __xfrm_state_delete(struct xfrm_state *x)
+
+ if (x->km.state != XFRM_STATE_DEAD) {
+ x->km.state = XFRM_STATE_DEAD;
++
+ spin_lock(&net->xfrm.xfrm_state_lock);
+ list_del(&x->km.all);
+ hlist_del_rcu(&x->bydst);
+ hlist_del_rcu(&x->bysrc);
+ if (x->km.seq)
+ hlist_del_rcu(&x->byseq);
++ if (!hlist_unhashed(&x->state_cache))
++ hlist_del_rcu(&x->state_cache);
++ if (!hlist_unhashed(&x->state_cache_input))
++ hlist_del_rcu(&x->state_cache_input);
++
+ if (x->id.spi)
+ hlist_del_rcu(&x->byspi);
+ net->xfrm.state_num--;
+@@ -1033,16 +1051,38 @@ xfrm_init_tempstate(struct xfrm_state *x, const struct flowi *fl,
+ x->props.family = tmpl->encap_family;
+ }
+
+-static struct xfrm_state *__xfrm_state_lookup_all(struct net *net, u32 mark,
++struct xfrm_hash_state_ptrs {
++ const struct hlist_head *bydst;
++ const struct hlist_head *bysrc;
++ const struct hlist_head *byspi;
++ unsigned int hmask;
++};
++
++static void xfrm_hash_ptrs_get(const struct net *net, struct xfrm_hash_state_ptrs *ptrs)
++{
++ unsigned int sequence;
++
++ do {
++ sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
++
++ ptrs->bydst = xfrm_state_deref_check(net->xfrm.state_bydst, net);
++ ptrs->bysrc = xfrm_state_deref_check(net->xfrm.state_bysrc, net);
++ ptrs->byspi = xfrm_state_deref_check(net->xfrm.state_byspi, net);
++ ptrs->hmask = net->xfrm.state_hmask;
++ } while (read_seqcount_retry(&net->xfrm.xfrm_state_hash_generation, sequence));
++}
++
++static struct xfrm_state *__xfrm_state_lookup_all(const struct xfrm_hash_state_ptrs *state_ptrs,
++ u32 mark,
+ const xfrm_address_t *daddr,
+ __be32 spi, u8 proto,
+ unsigned short family,
+ struct xfrm_dev_offload *xdo)
+ {
+- unsigned int h = xfrm_spi_hash(net, daddr, spi, proto, family);
++ unsigned int h = __xfrm_spi_hash(daddr, spi, proto, family, state_ptrs->hmask);
+ struct xfrm_state *x;
+
+- hlist_for_each_entry_rcu(x, net->xfrm.state_byspi + h, byspi) {
++ hlist_for_each_entry_rcu(x, state_ptrs->byspi + h, byspi) {
+ #ifdef CONFIG_XFRM_OFFLOAD
+ if (xdo->type == XFRM_DEV_OFFLOAD_PACKET) {
+ if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
+@@ -1076,15 +1116,16 @@ static struct xfrm_state *__xfrm_state_lookup_all(struct net *net, u32 mark,
+ return NULL;
+ }
+
+-static struct xfrm_state *__xfrm_state_lookup(struct net *net, u32 mark,
++static struct xfrm_state *__xfrm_state_lookup(const struct xfrm_hash_state_ptrs *state_ptrs,
++ u32 mark,
+ const xfrm_address_t *daddr,
+ __be32 spi, u8 proto,
+ unsigned short family)
+ {
+- unsigned int h = xfrm_spi_hash(net, daddr, spi, proto, family);
++ unsigned int h = __xfrm_spi_hash(daddr, spi, proto, family, state_ptrs->hmask);
+ struct xfrm_state *x;
+
+- hlist_for_each_entry_rcu(x, net->xfrm.state_byspi + h, byspi) {
++ hlist_for_each_entry_rcu(x, state_ptrs->byspi + h, byspi) {
+ if (x->props.family != family ||
+ x->id.spi != spi ||
+ x->id.proto != proto ||
+@@ -1101,15 +1142,63 @@ static struct xfrm_state *__xfrm_state_lookup(struct net *net, u32 mark,
+ return NULL;
+ }
+
+-static struct xfrm_state *__xfrm_state_lookup_byaddr(struct net *net, u32 mark,
++struct xfrm_state *xfrm_input_state_lookup(struct net *net, u32 mark,
++ const xfrm_address_t *daddr,
++ __be32 spi, u8 proto,
++ unsigned short family)
++{
++ struct xfrm_hash_state_ptrs state_ptrs;
++ struct hlist_head *state_cache_input;
++ struct xfrm_state *x = NULL;
++
++ state_cache_input = raw_cpu_ptr(net->xfrm.state_cache_input);
++
++ rcu_read_lock();
++ hlist_for_each_entry_rcu(x, state_cache_input, state_cache_input) {
++ if (x->props.family != family ||
++ x->id.spi != spi ||
++ x->id.proto != proto ||
++ !xfrm_addr_equal(&x->id.daddr, daddr, family))
++ continue;
++
++ if ((mark & x->mark.m) != x->mark.v)
++ continue;
++ if (!xfrm_state_hold_rcu(x))
++ continue;
++ goto out;
++ }
++
++ xfrm_hash_ptrs_get(net, &state_ptrs);
++
++ x = __xfrm_state_lookup(&state_ptrs, mark, daddr, spi, proto, family);
++
++ if (x && x->km.state == XFRM_STATE_VALID) {
++ spin_lock_bh(&net->xfrm.xfrm_state_lock);
++ if (hlist_unhashed(&x->state_cache_input)) {
++ hlist_add_head_rcu(&x->state_cache_input, state_cache_input);
++ } else {
++ hlist_del_rcu(&x->state_cache_input);
++ hlist_add_head_rcu(&x->state_cache_input, state_cache_input);
++ }
++ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
++ }
++
++out:
++ rcu_read_unlock();
++ return x;
++}
++EXPORT_SYMBOL(xfrm_input_state_lookup);
++
++static struct xfrm_state *__xfrm_state_lookup_byaddr(const struct xfrm_hash_state_ptrs *state_ptrs,
++ u32 mark,
+ const xfrm_address_t *daddr,
+ const xfrm_address_t *saddr,
+ u8 proto, unsigned short family)
+ {
+- unsigned int h = xfrm_src_hash(net, daddr, saddr, family);
++ unsigned int h = __xfrm_src_hash(daddr, saddr, family, state_ptrs->hmask);
+ struct xfrm_state *x;
+
+- hlist_for_each_entry_rcu(x, net->xfrm.state_bysrc + h, bysrc) {
++ hlist_for_each_entry_rcu(x, state_ptrs->bysrc + h, bysrc) {
+ if (x->props.family != family ||
+ x->id.proto != proto ||
+ !xfrm_addr_equal(&x->id.daddr, daddr, family) ||
+@@ -1129,14 +1218,17 @@ static struct xfrm_state *__xfrm_state_lookup_byaddr(struct net *net, u32 mark,
+ static inline struct xfrm_state *
+ __xfrm_state_locate(struct xfrm_state *x, int use_spi, int family)
+ {
++ struct xfrm_hash_state_ptrs state_ptrs;
+ struct net *net = xs_net(x);
+ u32 mark = x->mark.v & x->mark.m;
+
++ xfrm_hash_ptrs_get(net, &state_ptrs);
++
+ if (use_spi)
+- return __xfrm_state_lookup(net, mark, &x->id.daddr,
++ return __xfrm_state_lookup(&state_ptrs, mark, &x->id.daddr,
+ x->id.spi, x->id.proto, family);
+ else
+- return __xfrm_state_lookup_byaddr(net, mark,
++ return __xfrm_state_lookup_byaddr(&state_ptrs, mark,
+ &x->id.daddr,
+ &x->props.saddr,
+ x->id.proto, family);
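
xfrm_hash_ptrs_get() above snapshots all three hash heads plus the mask under
one seqcount read section, so lockless lookups (including the miss path of
the new per-CPU input cache) can never pair a resized table with a stale
mask. A stripped-down userspace model of the retry loop; names are
illustrative and the kernel uses read_seqcount_begin()/read_seqcount_retry():

#include <stdio.h>

static volatile unsigned int seq;	/* even = stable, odd = resize running */
static int *table;
static unsigned int mask;

/* Retry until table and mask are observed from the same generation. */
static void snapshot(int **t, unsigned int *m)
{
	unsigned int s;

	do {
		while ((s = seq) & 1)
			;		/* writer in progress, spin */
		*t = table;
		*m = mask;
		__sync_synchronize();	/* order the reads before re-check */
	} while (seq != s);
}

int main(void)
{
	static int buckets[16];
	int *t;
	unsigned int m;

	table = buckets;
	mask = 15;			/* hmask for a 16-bucket table */
	snapshot(&t, &m);
	printf("table=%p mask=%u\n", (void *)t, m);
	return 0;
}
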
+@@ -1155,6 +1247,12 @@ static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x,
+ struct xfrm_state **best, int *acq_in_progress,
+ int *error)
+ {
++ /* We need the cpu id just as a lookup key;
++ * we don't require it to be stable.
++ */
++ unsigned int pcpu_id = get_cpu();
++ put_cpu();
++
+ /* Resolution logic:
+ * 1. There is a valid state with matching selector. Done.
+ * 2. Valid state with inappropriate selector. Skip.
+@@ -1174,13 +1272,18 @@ static void xfrm_state_look_at(struct xfrm_policy *pol, struct xfrm_state *x,
+ &fl->u.__fl_common))
+ return;
+
++ if (x->pcpu_num != UINT_MAX && x->pcpu_num != pcpu_id)
++ return;
++
+ if (!*best ||
++ ((*best)->pcpu_num == UINT_MAX && x->pcpu_num == pcpu_id) ||
+ (*best)->km.dying > x->km.dying ||
+ ((*best)->km.dying == x->km.dying &&
+ (*best)->curlft.add_time < x->curlft.add_time))
+ *best = x;
+ } else if (x->km.state == XFRM_STATE_ACQ) {
+- *acq_in_progress = 1;
++ if (!*best || x->pcpu_num == pcpu_id)
++ *acq_in_progress = 1;
+ } else if (x->km.state == XFRM_STATE_ERROR ||
+ x->km.state == XFRM_STATE_EXPIRED) {
+ if ((!x->sel.family ||
+@@ -1199,6 +1302,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ unsigned short family, u32 if_id)
+ {
+ static xfrm_address_t saddr_wildcard = { };
++ struct xfrm_hash_state_ptrs state_ptrs;
+ struct net *net = xp_net(pol);
+ unsigned int h, h_wildcard;
+ struct xfrm_state *x, *x0, *to_put;
+@@ -1209,14 +1313,64 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ unsigned short encap_family = tmpl->encap_family;
+ unsigned int sequence;
+ struct km_event c;
++ unsigned int pcpu_id;
++ bool cached = false;
++
++ /* We need the cpu id just as a lookup key;
++ * we don't require it to be stable.
++ */
++ pcpu_id = get_cpu();
++ put_cpu();
+
+ to_put = NULL;
+
+ sequence = read_seqcount_begin(&net->xfrm.xfrm_state_hash_generation);
+
+ rcu_read_lock();
+- h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family);
+- hlist_for_each_entry_rcu(x, net->xfrm.state_bydst + h, bydst) {
++ hlist_for_each_entry_rcu(x, &pol->state_cache_list, state_cache) {
++ if (x->props.family == encap_family &&
++ x->props.reqid == tmpl->reqid &&
++ (mark & x->mark.m) == x->mark.v &&
++ x->if_id == if_id &&
++ !(x->props.flags & XFRM_STATE_WILDRECV) &&
++ xfrm_state_addr_check(x, daddr, saddr, encap_family) &&
++ tmpl->mode == x->props.mode &&
++ tmpl->id.proto == x->id.proto &&
++ (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
++ xfrm_state_look_at(pol, x, fl, encap_family,
++ &best, &acquire_in_progress, &error);
++ }
++
++ if (best)
++ goto cached;
++
++ hlist_for_each_entry_rcu(x, &pol->state_cache_list, state_cache) {
++ if (x->props.family == encap_family &&
++ x->props.reqid == tmpl->reqid &&
++ (mark & x->mark.m) == x->mark.v &&
++ x->if_id == if_id &&
++ !(x->props.flags & XFRM_STATE_WILDRECV) &&
++ xfrm_addr_equal(&x->id.daddr, daddr, encap_family) &&
++ tmpl->mode == x->props.mode &&
++ tmpl->id.proto == x->id.proto &&
++ (tmpl->id.spi == x->id.spi || !tmpl->id.spi))
++ xfrm_state_look_at(pol, x, fl, family,
++ &best, &acquire_in_progress, &error);
++ }
++
++cached:
++ cached = true;
++ if (best)
++ goto found;
++ else if (error)
++ best = NULL;
++ else if (acquire_in_progress) /* XXX: acquire_in_progress should not happen */
++ WARN_ON(1);
++
++ xfrm_hash_ptrs_get(net, &state_ptrs);
++
++ h = __xfrm_dst_hash(daddr, saddr, tmpl->reqid, encap_family, state_ptrs.hmask);
++ hlist_for_each_entry_rcu(x, state_ptrs.bydst + h, bydst) {
+ #ifdef CONFIG_XFRM_OFFLOAD
+ if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET) {
+ if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
+@@ -1249,8 +1403,9 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ if (best || acquire_in_progress)
+ goto found;
+
+- h_wildcard = xfrm_dst_hash(net, daddr, &saddr_wildcard, tmpl->reqid, encap_family);
+- hlist_for_each_entry_rcu(x, net->xfrm.state_bydst + h_wildcard, bydst) {
++ h_wildcard = __xfrm_dst_hash(daddr, &saddr_wildcard, tmpl->reqid,
++ encap_family, state_ptrs.hmask);
++ hlist_for_each_entry_rcu(x, state_ptrs.bydst + h_wildcard, bydst) {
+ #ifdef CONFIG_XFRM_OFFLOAD
+ if (pol->xdo.type == XFRM_DEV_OFFLOAD_PACKET) {
+ if (x->xso.type != XFRM_DEV_OFFLOAD_PACKET)
+@@ -1282,10 +1437,13 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ }
+
+ found:
+- x = best;
++ if (!(pol->flags & XFRM_POLICY_CPU_ACQUIRE) ||
++ (best && (best->pcpu_num == pcpu_id)))
++ x = best;
++
+ if (!x && !error && !acquire_in_progress) {
+ if (tmpl->id.spi &&
+- (x0 = __xfrm_state_lookup_all(net, mark, daddr,
++ (x0 = __xfrm_state_lookup_all(&state_ptrs, mark, daddr,
+ tmpl->id.spi, tmpl->id.proto,
+ encap_family,
+ &pol->xdo)) != NULL) {
+@@ -1314,6 +1472,8 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ xfrm_init_tempstate(x, fl, tmpl, daddr, saddr, family);
+ memcpy(&x->mark, &pol->mark, sizeof(x->mark));
+ x->if_id = if_id;
++ if ((pol->flags & XFRM_POLICY_CPU_ACQUIRE) && best)
++ x->pcpu_num = pcpu_id;
+
+ error = security_xfrm_state_alloc_acquire(x, pol->security, fl->flowi_secid);
+ if (error) {
+@@ -1352,6 +1512,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ x->km.state = XFRM_STATE_ACQ;
+ x->dir = XFRM_SA_DIR_OUT;
+ list_add(&x->km.all, &net->xfrm.state_all);
++ h = xfrm_dst_hash(net, daddr, saddr, tmpl->reqid, encap_family);
+ XFRM_STATE_INSERT(bydst, &x->bydst,
+ net->xfrm.state_bydst + h,
+ x->xso.type);
+@@ -1359,6 +1520,7 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ XFRM_STATE_INSERT(bysrc, &x->bysrc,
+ net->xfrm.state_bysrc + h,
+ x->xso.type);
++ INIT_HLIST_NODE(&x->state_cache);
+ if (x->id.spi) {
+ h = xfrm_spi_hash(net, &x->id.daddr, x->id.spi, x->id.proto, encap_family);
+ XFRM_STATE_INSERT(byspi, &x->byspi,
+@@ -1392,6 +1554,11 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ x = NULL;
+ error = -ESRCH;
+ }
++
++ /* Use the already installed 'fallback' while the CPU-specific
++	 * SA acquire is handled. */
++ if (best)
++ x = best;
+ }
+ out:
+ if (x) {
+@@ -1402,6 +1569,15 @@ xfrm_state_find(const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ } else {
+ *err = acquire_in_progress ? -EAGAIN : error;
+ }
++
++ if (x && x->km.state == XFRM_STATE_VALID && !cached &&
++ (!(pol->flags & XFRM_POLICY_CPU_ACQUIRE) || x->pcpu_num == pcpu_id)) {
++ spin_lock_bh(&net->xfrm.xfrm_state_lock);
++ if (hlist_unhashed(&x->state_cache))
++ hlist_add_head_rcu(&x->state_cache, &pol->state_cache_list);
++ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
++ }
++
+ rcu_read_unlock();
+ if (to_put)
+ xfrm_state_put(to_put);
+@@ -1524,12 +1700,14 @@ static void __xfrm_state_bump_genids(struct xfrm_state *xnew)
+ unsigned int h;
+ u32 mark = xnew->mark.v & xnew->mark.m;
+ u32 if_id = xnew->if_id;
++ u32 cpu_id = xnew->pcpu_num;
+
+ h = xfrm_dst_hash(net, &xnew->id.daddr, &xnew->props.saddr, reqid, family);
+ hlist_for_each_entry(x, net->xfrm.state_bydst+h, bydst) {
+ if (x->props.family == family &&
+ x->props.reqid == reqid &&
+ x->if_id == if_id &&
++ x->pcpu_num == cpu_id &&
+ (mark & x->mark.m) == x->mark.v &&
+ xfrm_addr_equal(&x->id.daddr, &xnew->id.daddr, family) &&
+ xfrm_addr_equal(&x->props.saddr, &xnew->props.saddr, family))
+@@ -1552,7 +1730,7 @@ EXPORT_SYMBOL(xfrm_state_insert);
+ static struct xfrm_state *__find_acq_core(struct net *net,
+ const struct xfrm_mark *m,
+ unsigned short family, u8 mode,
+- u32 reqid, u32 if_id, u8 proto,
++ u32 reqid, u32 if_id, u32 pcpu_num, u8 proto,
+ const xfrm_address_t *daddr,
+ const xfrm_address_t *saddr,
+ int create)
+@@ -1569,6 +1747,7 @@ static struct xfrm_state *__find_acq_core(struct net *net,
+ x->id.spi != 0 ||
+ x->id.proto != proto ||
+ (mark & x->mark.m) != x->mark.v ||
++ x->pcpu_num != pcpu_num ||
+ !xfrm_addr_equal(&x->id.daddr, daddr, family) ||
+ !xfrm_addr_equal(&x->props.saddr, saddr, family))
+ continue;
+@@ -1602,6 +1781,7 @@ static struct xfrm_state *__find_acq_core(struct net *net,
+ break;
+ }
+
++ x->pcpu_num = pcpu_num;
+ x->km.state = XFRM_STATE_ACQ;
+ x->id.proto = proto;
+ x->props.family = family;
+@@ -1630,7 +1810,7 @@ static struct xfrm_state *__find_acq_core(struct net *net,
+ return x;
+ }
+
+-static struct xfrm_state *__xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq);
++static struct xfrm_state *__xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq, u32 pcpu_num);
+
+ int xfrm_state_add(struct xfrm_state *x)
+ {
+@@ -1656,7 +1836,7 @@ int xfrm_state_add(struct xfrm_state *x)
+ }
+
+ if (use_spi && x->km.seq) {
+- x1 = __xfrm_find_acq_byseq(net, mark, x->km.seq);
++ x1 = __xfrm_find_acq_byseq(net, mark, x->km.seq, x->pcpu_num);
+ if (x1 && ((x1->id.proto != x->id.proto) ||
+ !xfrm_addr_equal(&x1->id.daddr, &x->id.daddr, family))) {
+ to_put = x1;
+@@ -1666,7 +1846,7 @@ int xfrm_state_add(struct xfrm_state *x)
+
+ if (use_spi && !x1)
+ x1 = __find_acq_core(net, &x->mark, family, x->props.mode,
+- x->props.reqid, x->if_id, x->id.proto,
++ x->props.reqid, x->if_id, x->pcpu_num, x->id.proto,
+ &x->id.daddr, &x->props.saddr, 0);
+
+ __xfrm_state_bump_genids(x);
+@@ -1791,6 +1971,7 @@ static struct xfrm_state *xfrm_state_clone(struct xfrm_state *orig,
+ x->props.flags = orig->props.flags;
+ x->props.extra_flags = orig->props.extra_flags;
+
++ x->pcpu_num = orig->pcpu_num;
+ x->if_id = orig->if_id;
+ x->tfcpad = orig->tfcpad;
+ x->replay_maxdiff = orig->replay_maxdiff;
+@@ -2041,10 +2222,13 @@ struct xfrm_state *
+ xfrm_state_lookup(struct net *net, u32 mark, const xfrm_address_t *daddr, __be32 spi,
+ u8 proto, unsigned short family)
+ {
++ struct xfrm_hash_state_ptrs state_ptrs;
+ struct xfrm_state *x;
+
+ rcu_read_lock();
+- x = __xfrm_state_lookup(net, mark, daddr, spi, proto, family);
++ xfrm_hash_ptrs_get(net, &state_ptrs);
++
++ x = __xfrm_state_lookup(&state_ptrs, mark, daddr, spi, proto, family);
+ rcu_read_unlock();
+ return x;
+ }
+@@ -2055,10 +2239,14 @@ xfrm_state_lookup_byaddr(struct net *net, u32 mark,
+ const xfrm_address_t *daddr, const xfrm_address_t *saddr,
+ u8 proto, unsigned short family)
+ {
++ struct xfrm_hash_state_ptrs state_ptrs;
+ struct xfrm_state *x;
+
+ spin_lock_bh(&net->xfrm.xfrm_state_lock);
+- x = __xfrm_state_lookup_byaddr(net, mark, daddr, saddr, proto, family);
++
++ xfrm_hash_ptrs_get(net, &state_ptrs);
++
++ x = __xfrm_state_lookup_byaddr(&state_ptrs, mark, daddr, saddr, proto, family);
+ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+ return x;
+ }
+@@ -2066,13 +2254,14 @@ EXPORT_SYMBOL(xfrm_state_lookup_byaddr);
+
+ struct xfrm_state *
+ xfrm_find_acq(struct net *net, const struct xfrm_mark *mark, u8 mode, u32 reqid,
+- u32 if_id, u8 proto, const xfrm_address_t *daddr,
++ u32 if_id, u32 pcpu_num, u8 proto, const xfrm_address_t *daddr,
+ const xfrm_address_t *saddr, int create, unsigned short family)
+ {
+ struct xfrm_state *x;
+
+ spin_lock_bh(&net->xfrm.xfrm_state_lock);
+- x = __find_acq_core(net, mark, family, mode, reqid, if_id, proto, daddr, saddr, create);
++ x = __find_acq_core(net, mark, family, mode, reqid, if_id, pcpu_num,
++ proto, daddr, saddr, create);
+ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+
+ return x;
+@@ -2207,7 +2396,7 @@ xfrm_state_sort(struct xfrm_state **dst, struct xfrm_state **src, int n,
+
+ /* Silly enough, but I'm lazy to build resolution list */
+
+-static struct xfrm_state *__xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq)
++static struct xfrm_state *__xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq, u32 pcpu_num)
+ {
+ unsigned int h = xfrm_seq_hash(net, seq);
+ struct xfrm_state *x;
+@@ -2215,6 +2404,7 @@ static struct xfrm_state *__xfrm_find_acq_byseq(struct net *net, u32 mark, u32 s
+ hlist_for_each_entry_rcu(x, net->xfrm.state_byseq + h, byseq) {
+ if (x->km.seq == seq &&
+ (mark & x->mark.m) == x->mark.v &&
++ x->pcpu_num == pcpu_num &&
+ x->km.state == XFRM_STATE_ACQ) {
+ xfrm_state_hold(x);
+ return x;
+@@ -2224,12 +2414,12 @@ static struct xfrm_state *__xfrm_find_acq_byseq(struct net *net, u32 mark, u32 s
+ return NULL;
+ }
+
+-struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq)
++struct xfrm_state *xfrm_find_acq_byseq(struct net *net, u32 mark, u32 seq, u32 pcpu_num)
+ {
+ struct xfrm_state *x;
+
+ spin_lock_bh(&net->xfrm.xfrm_state_lock);
+- x = __xfrm_find_acq_byseq(net, mark, seq);
++ x = __xfrm_find_acq_byseq(net, mark, seq, pcpu_num);
+ spin_unlock_bh(&net->xfrm.xfrm_state_lock);
+ return x;
+ }
+@@ -2988,6 +3178,11 @@ int __net_init xfrm_state_init(struct net *net)
+ net->xfrm.state_byseq = xfrm_hash_alloc(sz);
+ if (!net->xfrm.state_byseq)
+ goto out_byseq;
++
++ net->xfrm.state_cache_input = alloc_percpu(struct hlist_head);
++ if (!net->xfrm.state_cache_input)
++ goto out_state_cache_input;
++
+ net->xfrm.state_hmask = ((sz / sizeof(struct hlist_head)) - 1);
+
+ net->xfrm.state_num = 0;
+@@ -2997,6 +3192,8 @@ int __net_init xfrm_state_init(struct net *net)
+ &net->xfrm.xfrm_state_lock);
+ return 0;
+
++out_state_cache_input:
++ xfrm_hash_free(net->xfrm.state_byseq, sz);
+ out_byseq:
+ xfrm_hash_free(net->xfrm.state_byspi, sz);
+ out_byspi:
+@@ -3026,6 +3223,7 @@ void xfrm_state_fini(struct net *net)
+ xfrm_hash_free(net->xfrm.state_bysrc, sz);
+ WARN_ON(!hlist_empty(net->xfrm.state_bydst));
+ xfrm_hash_free(net->xfrm.state_bydst, sz);
++ free_percpu(net->xfrm.state_cache_input);
+ }
+
+ #ifdef CONFIG_AUDITSYSCALL
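
For context on the xfrm_state.c hunks above: the series keys SA lookup by the current CPU id, preferring a state pinned to that CPU and keeping an unpinned state (pcpu_num == UINT_MAX) as the fallback. Below is a minimal userspace sketch of that selection rule, not part of the patch; sched_getcpu() stands in for get_cpu()/put_cpu(), and struct sa is hypothetical.

#define _GNU_SOURCE
#include <limits.h>
#include <sched.h>
#include <stdio.h>

struct sa { unsigned int pcpu_num; const char *name; };

static const struct sa *pick_sa(const struct sa *sas, int n)
{
	unsigned int cpu = (unsigned int)sched_getcpu(); /* lookup key only */
	const struct sa *best = NULL;
	int i;

	for (i = 0; i < n; i++) {
		if (sas[i].pcpu_num == cpu)
			return &sas[i];	/* state pinned to this CPU wins */
		if (sas[i].pcpu_num == UINT_MAX)
			best = &sas[i];	/* unpinned state is the fallback */
	}
	return best;
}

int main(void)
{
	const struct sa sas[] = { { UINT_MAX, "fallback" }, { 0, "cpu0" } };
	const struct sa *s = pick_sa(sas, 2);

	printf("selected: %s\n", s ? s->name : "none");
	return 0;
}
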
+diff --git a/net/xfrm/xfrm_user.c b/net/xfrm/xfrm_user.c
+index e3b8ce89831abf..87013623773a2b 100644
+--- a/net/xfrm/xfrm_user.c
++++ b/net/xfrm/xfrm_user.c
+@@ -460,6 +460,12 @@ static int verify_newsa_info(struct xfrm_usersa_info *p,
+ }
+ }
+
++ if (!sa_dir && attrs[XFRMA_SA_PCPU]) {
++ NL_SET_ERR_MSG(extack, "SA_PCPU only supported with SA_DIR");
++ err = -EINVAL;
++ goto out;
++ }
++
+ out:
+ return err;
+ }
+@@ -841,6 +847,12 @@ static struct xfrm_state *xfrm_state_construct(struct net *net,
+ x->nat_keepalive_interval =
+ nla_get_u32(attrs[XFRMA_NAT_KEEPALIVE_INTERVAL]);
+
++ if (attrs[XFRMA_SA_PCPU]) {
++ x->pcpu_num = nla_get_u32(attrs[XFRMA_SA_PCPU]);
++ if (x->pcpu_num >= num_possible_cpus())
++ goto error;
++ }
++
+ err = __xfrm_init_state(x, false, attrs[XFRMA_OFFLOAD_DEV], extack);
+ if (err)
+ goto error;
+@@ -1296,6 +1308,11 @@ static int copy_to_user_state_extra(struct xfrm_state *x,
+ if (ret)
+ goto out;
+ }
++ if (x->pcpu_num != UINT_MAX) {
++ ret = nla_put_u32(skb, XFRMA_SA_PCPU, x->pcpu_num);
++ if (ret)
++ goto out;
++ }
+ if (x->dir)
+ ret = nla_put_u8(skb, XFRMA_SA_DIR, x->dir);
+
+@@ -1700,6 +1717,7 @@ static int xfrm_alloc_userspi(struct sk_buff *skb, struct nlmsghdr *nlh,
+ u32 mark;
+ struct xfrm_mark m;
+ u32 if_id = 0;
++ u32 pcpu_num = UINT_MAX;
+
+ p = nlmsg_data(nlh);
+ err = verify_spi_info(p->info.id.proto, p->min, p->max, extack);
+@@ -1716,8 +1734,16 @@ static int xfrm_alloc_userspi(struct sk_buff *skb, struct nlmsghdr *nlh,
+ if (attrs[XFRMA_IF_ID])
+ if_id = nla_get_u32(attrs[XFRMA_IF_ID]);
+
++ if (attrs[XFRMA_SA_PCPU]) {
++ pcpu_num = nla_get_u32(attrs[XFRMA_SA_PCPU]);
++ if (pcpu_num >= num_possible_cpus()) {
++ err = -EINVAL;
++ goto out_noput;
++ }
++ }
++
+ if (p->info.seq) {
+- x = xfrm_find_acq_byseq(net, mark, p->info.seq);
++ x = xfrm_find_acq_byseq(net, mark, p->info.seq, pcpu_num);
+ if (x && !xfrm_addr_equal(&x->id.daddr, daddr, family)) {
+ xfrm_state_put(x);
+ x = NULL;
+@@ -1726,7 +1752,7 @@ static int xfrm_alloc_userspi(struct sk_buff *skb, struct nlmsghdr *nlh,
+
+ if (!x)
+ x = xfrm_find_acq(net, &m, p->info.mode, p->info.reqid,
+- if_id, p->info.id.proto, daddr,
++ if_id, pcpu_num, p->info.id.proto, daddr,
+ &p->info.saddr, 1,
+ family);
+ err = -ENOENT;
+@@ -2526,7 +2552,8 @@ static inline unsigned int xfrm_aevent_msgsize(struct xfrm_state *x)
+ + nla_total_size(sizeof(struct xfrm_mark))
+ + nla_total_size(4) /* XFRM_AE_RTHR */
+ + nla_total_size(4) /* XFRM_AE_ETHR */
+- + nla_total_size(sizeof(x->dir)); /* XFRMA_SA_DIR */
++ + nla_total_size(sizeof(x->dir)) /* XFRMA_SA_DIR */
++ + nla_total_size(4); /* XFRMA_SA_PCPU */
+ }
+
+ static int build_aevent(struct sk_buff *skb, struct xfrm_state *x, const struct km_event *c)
+@@ -2582,6 +2609,11 @@ static int build_aevent(struct sk_buff *skb, struct xfrm_state *x, const struct
+ err = xfrm_if_id_put(skb, x->if_id);
+ if (err)
+ goto out_cancel;
++ if (x->pcpu_num != UINT_MAX) {
++ err = nla_put_u32(skb, XFRMA_SA_PCPU, x->pcpu_num);
++ if (err)
++ goto out_cancel;
++ }
+
+ if (x->dir) {
+ err = nla_put_u8(skb, XFRMA_SA_DIR, x->dir);
+@@ -2852,6 +2884,13 @@ static int xfrm_add_acquire(struct sk_buff *skb, struct nlmsghdr *nlh,
+
+ xfrm_mark_get(attrs, &mark);
+
++ if (attrs[XFRMA_SA_PCPU]) {
++ x->pcpu_num = nla_get_u32(attrs[XFRMA_SA_PCPU]);
++ err = -EINVAL;
++ if (x->pcpu_num >= num_possible_cpus())
++ goto free_state;
++ }
++
+ err = verify_newpolicy_info(&ua->policy, extack);
+ if (err)
+ goto free_state;
+@@ -3182,6 +3221,7 @@ const struct nla_policy xfrma_policy[XFRMA_MAX+1] = {
+ [XFRMA_MTIMER_THRESH] = { .type = NLA_U32 },
+ [XFRMA_SA_DIR] = NLA_POLICY_RANGE(NLA_U8, XFRM_SA_DIR_IN, XFRM_SA_DIR_OUT),
+ [XFRMA_NAT_KEEPALIVE_INTERVAL] = { .type = NLA_U32 },
++ [XFRMA_SA_PCPU] = { .type = NLA_U32 },
+ };
+ EXPORT_SYMBOL_GPL(xfrma_policy);
+
+@@ -3348,7 +3388,8 @@ static inline unsigned int xfrm_expire_msgsize(void)
+ {
+ return NLMSG_ALIGN(sizeof(struct xfrm_user_expire)) +
+ nla_total_size(sizeof(struct xfrm_mark)) +
+- nla_total_size(sizeof_field(struct xfrm_state, dir));
++ nla_total_size(sizeof_field(struct xfrm_state, dir)) +
++ nla_total_size(4); /* XFRMA_SA_PCPU */
+ }
+
+ static int build_expire(struct sk_buff *skb, struct xfrm_state *x, const struct km_event *c)
+@@ -3374,6 +3415,11 @@ static int build_expire(struct sk_buff *skb, struct xfrm_state *x, const struct
+ err = xfrm_if_id_put(skb, x->if_id);
+ if (err)
+ return err;
++ if (x->pcpu_num != UINT_MAX) {
++ err = nla_put_u32(skb, XFRMA_SA_PCPU, x->pcpu_num);
++ if (err)
++ return err;
++ }
+
+ if (x->dir) {
+ err = nla_put_u8(skb, XFRMA_SA_DIR, x->dir);
+@@ -3481,6 +3527,8 @@ static inline unsigned int xfrm_sa_len(struct xfrm_state *x)
+ }
+ if (x->if_id)
+ l += nla_total_size(sizeof(x->if_id));
++ if (x->pcpu_num)
++ l += nla_total_size(sizeof(x->pcpu_num));
+
+ /* Must count x->lastused as it may become non-zero behind our back. */
+ l += nla_total_size_64bit(sizeof(u64));
+@@ -3587,6 +3635,7 @@ static inline unsigned int xfrm_acquire_msgsize(struct xfrm_state *x,
+ + nla_total_size(sizeof(struct xfrm_user_tmpl) * xp->xfrm_nr)
+ + nla_total_size(sizeof(struct xfrm_mark))
+ + nla_total_size(xfrm_user_sec_ctx_size(x->security))
++ + nla_total_size(4) /* XFRMA_SA_PCPU */
+ + userpolicy_type_attrsize();
+ }
+
+@@ -3623,6 +3672,8 @@ static int build_acquire(struct sk_buff *skb, struct xfrm_state *x,
+ err = xfrm_if_id_put(skb, xp->if_id);
+ if (!err && xp->xdo.dev)
+ err = copy_user_offload(&xp->xdo, skb);
++ if (!err && x->pcpu_num != UINT_MAX)
++ err = nla_put_u32(skb, XFRMA_SA_PCPU, x->pcpu_num);
+ if (err) {
+ nlmsg_cancel(skb, nlh);
+ return err;
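
The xfrm_user.c hunks validate the new XFRMA_SA_PCPU attribute the same way in every handler: a u32 CPU id is accepted only below the possible-CPU count, with UINT_MAX serving as the internal "unset" sentinel. A hedged standalone sketch of that rule follows; sysconf() approximates num_possible_cpus(), and the function name is illustrative.

#include <limits.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static bool parse_pcpu_attr(bool present, unsigned int val, unsigned int *out)
{
	unsigned int ncpus = (unsigned int)sysconf(_SC_NPROCESSORS_CONF);

	*out = UINT_MAX;	/* internal sentinel: no CPU pinning */
	if (!present)
		return true;
	if (val >= ncpus)
		return false;	/* mirrors the -EINVAL paths above */
	*out = val;
	return true;
}

int main(void)
{
	unsigned int pcpu;

	printf("valid=%d pcpu=%u\n", parse_pcpu_attr(true, 0, &pcpu), pcpu);
	printf("valid=%d\n", parse_pcpu_attr(true, UINT_MAX, &pcpu));
	return 0;
}
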
+diff --git a/samples/landlock/sandboxer.c b/samples/landlock/sandboxer.c
+index 57565dfd74a260..07fab2ef534e8d 100644
+--- a/samples/landlock/sandboxer.c
++++ b/samples/landlock/sandboxer.c
+@@ -91,6 +91,9 @@ static int parse_path(char *env_path, const char ***const path_list)
+ }
+ }
+ *path_list = malloc(num_paths * sizeof(**path_list));
++ if (!*path_list)
++ return -1;
++
+ for (i = 0; i < num_paths; i++)
+ (*path_list)[i] = strsep(&env_path, ENV_DELIMITER);
+
+@@ -127,6 +130,10 @@ static int populate_ruleset_fs(const char *const env_var, const int ruleset_fd,
+ env_path_name = strdup(env_path_name);
+ unsetenv(env_var);
+ num_paths = parse_path(env_path_name, &path_list);
++ if (num_paths < 0) {
++ fprintf(stderr, "Failed to allocate memory\n");
++ goto out_free_name;
++ }
+ if (num_paths == 1 && path_list[0][0] == '\0') {
+ /*
+ * Allows to not use all possible restrictions (e.g. use
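
The sandboxer hunk above guards the malloc() for the path array, returning -1 instead of letting strsep() run on a NULL list. A self-contained sketch of the corrected parse_path() shape, simplified to a hardcoded ':' delimiter:

#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int parse_path(char *env_path, const char ***path_list)
{
	size_t i, num_paths = 1;
	const char *p;

	for (p = env_path; *p; p++)
		if (*p == ':')
			num_paths++;

	*path_list = malloc(num_paths * sizeof(**path_list));
	if (!*path_list)
		return -1;	/* the newly handled failure path */

	for (i = 0; i < num_paths; i++)
		(*path_list)[i] = strsep(&env_path, ":");
	return (int)num_paths;
}

int main(void)
{
	char buf[] = "/usr:/etc";
	const char **list = NULL;
	int i, n = parse_path(buf, &list);

	if (n < 0) {
		fprintf(stderr, "Failed to allocate memory\n");
		return 1;
	}
	for (i = 0; i < n; i++)
		printf("%s\n", list[i]);
	free(list);
	return 0;
}
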
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index 01a9f567d5af48..fe5e132fcea89a 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -371,10 +371,10 @@ quiet_cmd_lzo_with_size = LZO $@
+ cmd_lzo_with_size = { cat $(real-prereqs) | $(KLZOP) -9; $(size_append); } > $@
+
+ quiet_cmd_lz4 = LZ4 $@
+- cmd_lz4 = cat $(real-prereqs) | $(LZ4) -l -c1 stdin stdout > $@
++ cmd_lz4 = cat $(real-prereqs) | $(LZ4) -l -9 - - > $@
+
+ quiet_cmd_lz4_with_size = LZ4 $@
+- cmd_lz4_with_size = { cat $(real-prereqs) | $(LZ4) -l -c1 stdin stdout; \
++ cmd_lz4_with_size = { cat $(real-prereqs) | $(LZ4) -l -9 - -; \
+ $(size_append); } > $@
+
+ # U-Boot mkimage
+diff --git a/scripts/genksyms/genksyms.c b/scripts/genksyms/genksyms.c
+index f3901c55df239d..bbc6b7d3088c15 100644
+--- a/scripts/genksyms/genksyms.c
++++ b/scripts/genksyms/genksyms.c
+@@ -239,6 +239,7 @@ static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+ "unchanged\n");
+ }
+ sym->is_declared = 1;
++ free_list(defn, NULL);
+ return sym;
+ } else if (!sym->is_declared) {
+ if (sym->is_override && flag_preserve) {
+@@ -247,6 +248,7 @@ static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+ print_type_name(type, name);
+ fprintf(stderr, " modversion change\n");
+ sym->is_declared = 1;
++ free_list(defn, NULL);
+ return sym;
+ } else {
+ status = is_unknown_symbol(sym) ?
+@@ -254,6 +256,7 @@ static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+ }
+ } else {
+ error_with_pos("redefinition of %s", name);
++ free_list(defn, NULL);
+ return sym;
+ }
+ break;
+@@ -269,11 +272,15 @@ static struct symbol *__add_symbol(const char *name, enum symbol_type type,
+ break;
+ }
+ }
++
++ free_list(sym->defn, NULL);
++ free(sym->name);
++ free(sym);
+ --nsyms;
+ }
+
+ sym = xmalloc(sizeof(*sym));
+- sym->name = name;
++ sym->name = xstrdup(name);
+ sym->type = type;
+ sym->defn = defn;
+ sym->expansion_trail = NULL;
+@@ -480,7 +487,7 @@ static void read_reference(FILE *f)
+ defn = def;
+ def = read_node(f);
+ }
+- subsym = add_reference_symbol(xstrdup(sym->string), sym->tag,
++ subsym = add_reference_symbol(sym->string, sym->tag,
+ defn, is_extern);
+ subsym->is_override = is_override;
+ free_node(sym);
+diff --git a/scripts/genksyms/genksyms.h b/scripts/genksyms/genksyms.h
+index 21ed2ec2d98ca8..5621533dcb8e43 100644
+--- a/scripts/genksyms/genksyms.h
++++ b/scripts/genksyms/genksyms.h
+@@ -32,7 +32,7 @@ struct string_list {
+
+ struct symbol {
+ struct symbol *hash_next;
+- const char *name;
++ char *name;
+ enum symbol_type type;
+ struct string_list *defn;
+ struct symbol *expansion_trail;
+diff --git a/scripts/genksyms/parse.y b/scripts/genksyms/parse.y
+index 8e9b5e69e8f01d..689cb6bb40b657 100644
+--- a/scripts/genksyms/parse.y
++++ b/scripts/genksyms/parse.y
+@@ -152,14 +152,19 @@ simple_declaration:
+ ;
+
+ init_declarator_list_opt:
+- /* empty */ { $$ = NULL; }
+- | init_declarator_list
++ /* empty */ { $$ = NULL; }
++ | init_declarator_list { free_list(decl_spec, NULL); $$ = $1; }
+ ;
+
+ init_declarator_list:
+ init_declarator
+ { struct string_list *decl = *$1;
+ *$1 = NULL;
++
++ /* avoid sharing among multiple init_declarators */
++ if (decl_spec)
++ decl_spec = copy_list_range(decl_spec, NULL);
++
+ add_symbol(current_name,
+ is_typedef ? SYM_TYPEDEF : SYM_NORMAL, decl, is_extern);
+ current_name = NULL;
+@@ -170,6 +175,11 @@ init_declarator_list:
+ *$3 = NULL;
+ free_list(*$2, NULL);
+ *$2 = decl_spec;
++
++ /* avoid sharing among multiple init_declarators */
++ if (decl_spec)
++ decl_spec = copy_list_range(decl_spec, NULL);
++
+ add_symbol(current_name,
+ is_typedef ? SYM_TYPEDEF : SYM_NORMAL, decl, is_extern);
+ current_name = NULL;
+@@ -472,12 +482,12 @@ enumerator_list:
+ enumerator:
+ IDENT
+ {
+- const char *name = strdup((*$1)->string);
++ const char *name = (*$1)->string;
+ add_symbol(name, SYM_ENUM_CONST, NULL, 0);
+ }
+ | IDENT '=' EXPRESSION_PHRASE
+ {
+- const char *name = strdup((*$1)->string);
++ const char *name = (*$1)->string;
+ struct string_list *expr = copy_list_range(*$3, *$2);
+ add_symbol(name, SYM_ENUM_CONST, expr, 0);
+ }
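
The genksyms changes shift name ownership into the symbol itself: the symbol keeps a private copy made with xstrdup() and frees the old copy when a symbol is replaced, so callers drop their own strdup() calls. A hypothetical standalone illustration of that ownership contract:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct symbol {
	char *name;	/* owned copy, matching the header change */
};

static char *xstrdup(const char *s)
{
	char *p = strdup(s);

	if (!p)
		abort();
	return p;
}

static void symbol_set_name(struct symbol *sym, const char *name)
{
	free(sym->name);		/* release the previously owned copy */
	sym->name = xstrdup(name);	/* take ownership of a fresh copy */
}

int main(void)
{
	struct symbol sym = { 0 };

	symbol_set_name(&sym, "first");
	symbol_set_name(&sym, "second");	/* no leak: old name freed */
	printf("%s\n", sym.name);
	free(sym.name);
	return 0;
}
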
+diff --git a/scripts/kconfig/confdata.c b/scripts/kconfig/confdata.c
+index 4286d5e7f95dc1..3b55e7a4131d9a 100644
+--- a/scripts/kconfig/confdata.c
++++ b/scripts/kconfig/confdata.c
+@@ -360,10 +360,12 @@ int conf_read_simple(const char *name, int def)
+
+ *p = '\0';
+
+- in = zconf_fopen(env);
++ name = env;
++
++ in = zconf_fopen(name);
+ if (in) {
+ conf_message("using defaults found in %s",
+- env);
++ name);
+ goto load;
+ }
+
+diff --git a/scripts/kconfig/symbol.c b/scripts/kconfig/symbol.c
+index a3af93aaaf32af..453721e66c4ebc 100644
+--- a/scripts/kconfig/symbol.c
++++ b/scripts/kconfig/symbol.c
+@@ -376,6 +376,7 @@ static void sym_warn_unmet_dep(const struct symbol *sym)
+ " Selected by [m]:\n");
+
+ fputs(str_get(&gs), stderr);
++ str_free(&gs);
+ sym_warnings++;
+ }
+
+diff --git a/security/landlock/fs.c b/security/landlock/fs.c
+index e31b97a9f175aa..7adb25150488fc 100644
+--- a/security/landlock/fs.c
++++ b/security/landlock/fs.c
+@@ -937,10 +937,6 @@ static access_mask_t get_mode_access(const umode_t mode)
+ switch (mode & S_IFMT) {
+ case S_IFLNK:
+ return LANDLOCK_ACCESS_FS_MAKE_SYM;
+- case 0:
+- /* A zero mode translates to S_IFREG. */
+- case S_IFREG:
+- return LANDLOCK_ACCESS_FS_MAKE_REG;
+ case S_IFDIR:
+ return LANDLOCK_ACCESS_FS_MAKE_DIR;
+ case S_IFCHR:
+@@ -951,9 +947,12 @@ static access_mask_t get_mode_access(const umode_t mode)
+ return LANDLOCK_ACCESS_FS_MAKE_FIFO;
+ case S_IFSOCK:
+ return LANDLOCK_ACCESS_FS_MAKE_SOCK;
++ case S_IFREG:
++ case 0:
++ /* A zero mode translates to S_IFREG. */
+ default:
+- WARN_ON_ONCE(1);
+- return 0;
++ /* Treats weird files as regular files. */
++ return LANDLOCK_ACCESS_FS_MAKE_REG;
+ }
+ }
+
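
The Landlock hunk reorders get_mode_access() so that zero and unknown modes fall through to the regular-file right instead of warning and returning an empty mask. A compilable sketch of the resulting switch, with illustrative access bits in place of the real uapi constants:

#include <stdio.h>
#include <sys/stat.h>

#define ACCESS_MAKE_REG	(1U << 0)	/* illustrative bits, not uapi */
#define ACCESS_MAKE_DIR	(1U << 1)
#define ACCESS_MAKE_SYM	(1U << 2)

static unsigned int get_mode_access(mode_t mode)
{
	switch (mode & S_IFMT) {
	case S_IFLNK:
		return ACCESS_MAKE_SYM;
	case S_IFDIR:
		return ACCESS_MAKE_DIR;
	case S_IFREG:
	case 0:	/* a zero mode translates to S_IFREG */
	default:
		/* treat weird files as regular files */
		return ACCESS_MAKE_REG;
	}
}

int main(void)
{
	printf("zero mode -> 0x%x\n", get_mode_access(0));
	printf("directory -> 0x%x\n", get_mode_access(S_IFDIR));
	return 0;
}
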
+diff --git a/sound/core/seq/Kconfig b/sound/core/seq/Kconfig
+index 0374bbf51cd4d3..e4f58cb985d47c 100644
+--- a/sound/core/seq/Kconfig
++++ b/sound/core/seq/Kconfig
+@@ -62,7 +62,7 @@ config SND_SEQ_VIRMIDI
+
+ config SND_SEQ_UMP
+ bool "Support for UMP events"
+- default y if SND_SEQ_UMP_CLIENT
++ default SND_UMP
+ help
+ Say Y here to enable the support for handling UMP (Universal MIDI
+ Packet) events via ALSA sequencer infrastructure, which is an
+@@ -71,6 +71,6 @@ config SND_SEQ_UMP
+ among legacy and UMP clients.
+
+ config SND_SEQ_UMP_CLIENT
+- def_tristate SND_UMP
++ def_tristate SND_UMP && SND_SEQ_UMP
+
+ endif # SND_SEQUENCER
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 8c4de5a253addf..5d99a4ea176a15 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10143,6 +10143,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1025, 0x1308, "Acer Aspire Z24-890", ALC286_FIXUP_ACER_AIO_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1025, 0x132a, "Acer TravelMate B114-21", ALC233_FIXUP_ACER_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1025, 0x1330, "Acer TravelMate X514-51T", ALC255_FIXUP_ACER_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1025, 0x1360, "Acer Aspire A115", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1025, 0x141f, "Acer Spin SP513-54N", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1025, 0x142b, "Acer Swift SF314-42", ALC255_FIXUP_ACER_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1025, 0x1430, "Acer TravelMate B311R-31", ALC256_FIXUP_ACER_MIC_NO_PRESENCE),
+diff --git a/sound/soc/amd/acp/acp-i2s.c b/sound/soc/amd/acp/acp-i2s.c
+index 56ce9e4b6accc7..92c5ff0deea2cd 100644
+--- a/sound/soc/amd/acp/acp-i2s.c
++++ b/sound/soc/amd/acp/acp-i2s.c
+@@ -181,6 +181,7 @@ static int acp_i2s_set_tdm_slot(struct snd_soc_dai *dai, u32 tx_mask, u32 rx_mas
+ break;
+ default:
+ dev_err(dev, "Unknown chip revision %d\n", chip->acp_rev);
++ spin_unlock_irq(&adata->acp_lock);
+ return -EINVAL;
+ }
+ }
+diff --git a/sound/soc/codecs/Makefile b/sound/soc/codecs/Makefile
+index 54cbc3feae3277..69cb0b39f22007 100644
+--- a/sound/soc/codecs/Makefile
++++ b/sound/soc/codecs/Makefile
+@@ -79,7 +79,7 @@ snd-soc-cs35l56-shared-y := cs35l56-shared.o
+ snd-soc-cs35l56-i2c-y := cs35l56-i2c.o
+ snd-soc-cs35l56-spi-y := cs35l56-spi.o
+ snd-soc-cs35l56-sdw-y := cs35l56-sdw.o
+-snd-soc-cs40l50-objs := cs40l50-codec.o
++snd-soc-cs40l50-y := cs40l50-codec.o
+ snd-soc-cs42l42-y := cs42l42.o
+ snd-soc-cs42l42-i2c-y := cs42l42-i2c.o
+ snd-soc-cs42l42-sdw-y := cs42l42-sdw.o
+@@ -324,8 +324,8 @@ snd-soc-wcd-classh-y := wcd-clsh-v2.o
+ snd-soc-wcd-mbhc-y := wcd-mbhc-v2.o
+ snd-soc-wcd9335-y := wcd9335.o
+ snd-soc-wcd934x-y := wcd934x.o
+-snd-soc-wcd937x-objs := wcd937x.o
+-snd-soc-wcd937x-sdw-objs := wcd937x-sdw.o
++snd-soc-wcd937x-y := wcd937x.o
++snd-soc-wcd937x-sdw-y := wcd937x-sdw.o
+ snd-soc-wcd938x-y := wcd938x.o
+ snd-soc-wcd938x-sdw-y := wcd938x-sdw.o
+ snd-soc-wcd939x-y := wcd939x.o
+diff --git a/sound/soc/codecs/da7213.c b/sound/soc/codecs/da7213.c
+index 486db60bf2dd14..f17f02d01f8c0f 100644
+--- a/sound/soc/codecs/da7213.c
++++ b/sound/soc/codecs/da7213.c
+@@ -2191,6 +2191,8 @@ static int da7213_i2c_probe(struct i2c_client *i2c)
+ return ret;
+ }
+
++ mutex_init(&da7213->ctrl_lock);
++
+ pm_runtime_set_autosuspend_delay(&i2c->dev, 100);
+ pm_runtime_use_autosuspend(&i2c->dev);
+ pm_runtime_set_active(&i2c->dev);
+diff --git a/sound/soc/intel/avs/apl.c b/sound/soc/intel/avs/apl.c
+index 27516ef5718591..3dccf0a57a3a11 100644
+--- a/sound/soc/intel/avs/apl.c
++++ b/sound/soc/intel/avs/apl.c
+@@ -12,6 +12,7 @@
+ #include "avs.h"
+ #include "messages.h"
+ #include "path.h"
++#include "registers.h"
+ #include "topology.h"
+
+ static irqreturn_t avs_apl_dsp_interrupt(struct avs_dev *adev)
+@@ -125,7 +126,7 @@ int avs_apl_coredump(struct avs_dev *adev, union avs_notify_msg *msg)
+ struct avs_apl_log_buffer_layout layout;
+ void __iomem *addr, *buf;
+ size_t dump_size;
+- u16 offset = 0;
++ u32 offset = 0;
+ u8 *dump, *pos;
+
+ dump_size = AVS_FW_REGS_SIZE + msg->ext.coredump.stack_dump_size;
+diff --git a/sound/soc/intel/avs/cnl.c b/sound/soc/intel/avs/cnl.c
+index bd3c4bb8bf5a17..03f8fb0dc187f5 100644
+--- a/sound/soc/intel/avs/cnl.c
++++ b/sound/soc/intel/avs/cnl.c
+@@ -9,6 +9,7 @@
+ #include <sound/hdaudio_ext.h>
+ #include "avs.h"
+ #include "messages.h"
++#include "registers.h"
+
+ static void avs_cnl_ipc_interrupt(struct avs_dev *adev)
+ {
+diff --git a/sound/soc/intel/avs/core.c b/sound/soc/intel/avs/core.c
+index 73d4bde9b2f788..82839d0994ee3e 100644
+--- a/sound/soc/intel/avs/core.c
++++ b/sound/soc/intel/avs/core.c
+@@ -829,10 +829,10 @@ static const struct avs_spec jsl_desc = {
+ .hipc = &cnl_hipc_spec,
+ };
+
+-#define AVS_TGL_BASED_SPEC(sname) \
++#define AVS_TGL_BASED_SPEC(sname, min) \
+ static const struct avs_spec sname##_desc = { \
+ .name = #sname, \
+- .min_fw_version = { 10, 29, 0, 5646 }, \
++ .min_fw_version = { 10, min, 0, 5646 }, \
+ .dsp_ops = &avs_tgl_dsp_ops, \
+ .core_init_mask = 1, \
+ .attributes = AVS_PLATATTR_IMR, \
+@@ -840,11 +840,11 @@ static const struct avs_spec sname##_desc = { \
+ .hipc = &cnl_hipc_spec, \
+ }
+
+-AVS_TGL_BASED_SPEC(lkf);
+-AVS_TGL_BASED_SPEC(tgl);
+-AVS_TGL_BASED_SPEC(ehl);
+-AVS_TGL_BASED_SPEC(adl);
+-AVS_TGL_BASED_SPEC(adl_n);
++AVS_TGL_BASED_SPEC(lkf, 28);
++AVS_TGL_BASED_SPEC(tgl, 29);
++AVS_TGL_BASED_SPEC(ehl, 30);
++AVS_TGL_BASED_SPEC(adl, 35);
++AVS_TGL_BASED_SPEC(adl_n, 35);
+
+ static const struct pci_device_id avs_ids[] = {
+ { PCI_DEVICE_DATA(INTEL, HDA_SKL_LP, &skl_desc) },
+diff --git a/sound/soc/intel/avs/loader.c b/sound/soc/intel/avs/loader.c
+index 890efd2f1feabe..37de077a998386 100644
+--- a/sound/soc/intel/avs/loader.c
++++ b/sound/soc/intel/avs/loader.c
+@@ -308,7 +308,7 @@ avs_hda_init_rom(struct avs_dev *adev, unsigned int dma_id, bool purge)
+ }
+
+ /* await ROM init */
+- ret = snd_hdac_adsp_readq_poll(adev, spec->sram->rom_status_offset, reg,
++ ret = snd_hdac_adsp_readl_poll(adev, spec->sram->rom_status_offset, reg,
+ (reg & 0xF) == AVS_ROM_INIT_DONE ||
+ (reg & 0xF) == APL_ROM_FW_ENTERED,
+ AVS_ROM_INIT_POLLING_US, APL_ROM_INIT_TIMEOUT_US);
+diff --git a/sound/soc/intel/avs/registers.h b/sound/soc/intel/avs/registers.h
+index f76e91cff2a9a6..5b6d60eb3c18bd 100644
+--- a/sound/soc/intel/avs/registers.h
++++ b/sound/soc/intel/avs/registers.h
+@@ -9,6 +9,8 @@
+ #ifndef __SOUND_SOC_INTEL_AVS_REGS_H
+ #define __SOUND_SOC_INTEL_AVS_REGS_H
+
++#include <linux/io-64-nonatomic-lo-hi.h>
++#include <linux/iopoll.h>
+ #include <linux/sizes.h>
+
+ #define AZX_PCIREG_PGCTL 0x44
+@@ -98,4 +100,47 @@
+ #define avs_downlink_addr(adev) \
+ avs_sram_addr(adev, AVS_DOWNLINK_WINDOW)
+
++#define snd_hdac_adsp_writeb(adev, reg, value) \
++ snd_hdac_reg_writeb(&(adev)->base.core, (adev)->dsp_ba + (reg), value)
++#define snd_hdac_adsp_readb(adev, reg) \
++ snd_hdac_reg_readb(&(adev)->base.core, (adev)->dsp_ba + (reg))
++#define snd_hdac_adsp_writew(adev, reg, value) \
++ snd_hdac_reg_writew(&(adev)->base.core, (adev)->dsp_ba + (reg), value)
++#define snd_hdac_adsp_readw(adev, reg) \
++ snd_hdac_reg_readw(&(adev)->base.core, (adev)->dsp_ba + (reg))
++#define snd_hdac_adsp_writel(adev, reg, value) \
++ snd_hdac_reg_writel(&(adev)->base.core, (adev)->dsp_ba + (reg), value)
++#define snd_hdac_adsp_readl(adev, reg) \
++ snd_hdac_reg_readl(&(adev)->base.core, (adev)->dsp_ba + (reg))
++#define snd_hdac_adsp_writeq(adev, reg, value) \
++ snd_hdac_reg_writeq(&(adev)->base.core, (adev)->dsp_ba + (reg), value)
++#define snd_hdac_adsp_readq(adev, reg) \
++ snd_hdac_reg_readq(&(adev)->base.core, (adev)->dsp_ba + (reg))
++
++#define snd_hdac_adsp_updateb(adev, reg, mask, val) \
++ snd_hdac_adsp_writeb(adev, reg, \
++ (snd_hdac_adsp_readb(adev, reg) & ~(mask)) | (val))
++#define snd_hdac_adsp_updatew(adev, reg, mask, val) \
++ snd_hdac_adsp_writew(adev, reg, \
++ (snd_hdac_adsp_readw(adev, reg) & ~(mask)) | (val))
++#define snd_hdac_adsp_updatel(adev, reg, mask, val) \
++ snd_hdac_adsp_writel(adev, reg, \
++ (snd_hdac_adsp_readl(adev, reg) & ~(mask)) | (val))
++#define snd_hdac_adsp_updateq(adev, reg, mask, val) \
++ snd_hdac_adsp_writeq(adev, reg, \
++ (snd_hdac_adsp_readq(adev, reg) & ~(mask)) | (val))
++
++#define snd_hdac_adsp_readb_poll(adev, reg, val, cond, delay_us, timeout_us) \
++ readb_poll_timeout((adev)->dsp_ba + (reg), val, cond, \
++ delay_us, timeout_us)
++#define snd_hdac_adsp_readw_poll(adev, reg, val, cond, delay_us, timeout_us) \
++ readw_poll_timeout((adev)->dsp_ba + (reg), val, cond, \
++ delay_us, timeout_us)
++#define snd_hdac_adsp_readl_poll(adev, reg, val, cond, delay_us, timeout_us) \
++ readl_poll_timeout((adev)->dsp_ba + (reg), val, cond, \
++ delay_us, timeout_us)
++#define snd_hdac_adsp_readq_poll(adev, reg, val, cond, delay_us, timeout_us) \
++ readq_poll_timeout((adev)->dsp_ba + (reg), val, cond, \
++ delay_us, timeout_us)
++
+ #endif /* __SOUND_SOC_INTEL_AVS_REGS_H */
+diff --git a/sound/soc/intel/avs/skl.c b/sound/soc/intel/avs/skl.c
+index 34f859d6e5a49a..d66ef000de9ee7 100644
+--- a/sound/soc/intel/avs/skl.c
++++ b/sound/soc/intel/avs/skl.c
+@@ -12,6 +12,7 @@
+ #include "avs.h"
+ #include "cldma.h"
+ #include "messages.h"
++#include "registers.h"
+
+ void avs_skl_ipc_interrupt(struct avs_dev *adev)
+ {
+diff --git a/sound/soc/intel/avs/topology.c b/sound/soc/intel/avs/topology.c
+index 5cda527020c7bf..d612f20ed98937 100644
+--- a/sound/soc/intel/avs/topology.c
++++ b/sound/soc/intel/avs/topology.c
+@@ -1466,7 +1466,7 @@ avs_tplg_path_template_create(struct snd_soc_component *comp, struct avs_tplg *o
+
+ static const struct avs_tplg_token_parser mod_init_config_parsers[] = {
+ {
+- .token = AVS_TKN_MOD_INIT_CONFIG_ID_U32,
++ .token = AVS_TKN_INIT_CONFIG_ID_U32,
+ .type = SND_SOC_TPLG_TUPLE_TYPE_WORD,
+ .offset = offsetof(struct avs_tplg_init_config, id),
+ .parse = avs_parse_word_token,
+@@ -1519,7 +1519,7 @@ static int avs_tplg_parse_initial_configs(struct snd_soc_component *comp,
+ esize = le32_to_cpu(tuples->size) + le32_to_cpu(tmp->size);
+
+ ret = parse_dictionary_entries(comp, tuples, esize, config, 1, sizeof(*config),
+- AVS_TKN_MOD_INIT_CONFIG_ID_U32,
++ AVS_TKN_INIT_CONFIG_ID_U32,
+ mod_init_config_parsers,
+ ARRAY_SIZE(mod_init_config_parsers));
+
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 41042259f2b26e..9f2dc24d44cb54 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -22,6 +22,8 @@ static int quirk_override = -1;
+ module_param_named(quirk, quirk_override, int, 0444);
+ MODULE_PARM_DESC(quirk, "Board-specific quirk override");
+
++#define DMIC_DEFAULT_CHANNELS 2
++
+ static void log_quirks(struct device *dev)
+ {
+ if (SOC_SDW_JACK_JDSRC(sof_sdw_quirk))
+@@ -584,17 +586,32 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "3838")
++ DMI_MATCH(DMI_PRODUCT_NAME, "83JX")
+ },
+- .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
+ },
+ {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "3832")
++ DMI_MATCH(DMI_PRODUCT_NAME, "83LC")
+ },
+- .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
++ },
++ {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83MC")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
++ }, {
++ .callback = sof_sdw_quirk_cb,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83NM")
++ },
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS | SOC_SDW_CODEC_MIC),
+ },
+ {
+ .callback = sof_sdw_quirk_cb,
+@@ -1063,17 +1080,19 @@ static int sof_card_dai_links_create(struct snd_soc_card *card)
+ hdmi_num = SOF_PRE_TGL_HDMI_COUNT;
+
+ /* enable dmic01 & dmic16k */
+- if (sof_sdw_quirk & SOC_SDW_PCH_DMIC || mach_params->dmic_num) {
+- if (ctx->ignore_internal_dmic)
+- dev_warn(dev, "Ignoring PCH DMIC\n");
+- else
+- dmic_num = 2;
++ if (ctx->ignore_internal_dmic) {
++ dev_warn(dev, "Ignoring internal DMIC\n");
++ mach_params->dmic_num = 0;
++ } else if (mach_params->dmic_num) {
++ dmic_num = 2;
++ } else if (sof_sdw_quirk & SOC_SDW_PCH_DMIC) {
++ dmic_num = 2;
++ /*
++ * mach_params->dmic_num will be used to set the cfg-mics value of
++	 * the card->components string. Set it to the default value.
++ */
++ mach_params->dmic_num = DMIC_DEFAULT_CHANNELS;
+ }
+- /*
+- * mach_params->dmic_num will be used to set the cfg-mics value of card->components
+- * string. Overwrite it to the actual number of PCH DMICs used in the device.
+- */
+- mach_params->dmic_num = dmic_num;
+
+ if (sof_sdw_quirk & SOF_SSP_BT_OFFLOAD_PRESENT)
+ bt_num = 1;
+diff --git a/sound/soc/mediatek/mt8365/Makefile b/sound/soc/mediatek/mt8365/Makefile
+index 52ba45a8498a20..b197025e34bb80 100644
+--- a/sound/soc/mediatek/mt8365/Makefile
++++ b/sound/soc/mediatek/mt8365/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+
+ # MTK Platform driver
+-snd-soc-mt8365-pcm-objs := \
++snd-soc-mt8365-pcm-y := \
+ mt8365-afe-clk.o \
+ mt8365-afe-pcm.o \
+ mt8365-dai-adda.o \
+diff --git a/sound/soc/rockchip/rockchip_i2s_tdm.c b/sound/soc/rockchip/rockchip_i2s_tdm.c
+index d1f28699652fe3..acd75e48851fcf 100644
+--- a/sound/soc/rockchip/rockchip_i2s_tdm.c
++++ b/sound/soc/rockchip/rockchip_i2s_tdm.c
+@@ -22,7 +22,6 @@
+
+ #define DRV_NAME "rockchip-i2s-tdm"
+
+-#define DEFAULT_MCLK_FS 256
+ #define CH_GRP_MAX 4 /* The max channel 8 / 2 */
+ #define MULTIPLEX_CH_MAX 10
+
+@@ -70,6 +69,8 @@ struct rk_i2s_tdm_dev {
+ bool has_playback;
+ bool has_capture;
+ struct snd_soc_dai_driver *dai;
++ unsigned int mclk_rx_freq;
++ unsigned int mclk_tx_freq;
+ };
+
+ static int to_ch_num(unsigned int val)
+@@ -645,6 +646,27 @@ static int rockchip_i2s_trcm_mode(struct snd_pcm_substream *substream,
+ return 0;
+ }
+
++static int rockchip_i2s_tdm_set_sysclk(struct snd_soc_dai *cpu_dai, int stream,
++ unsigned int freq, int dir)
++{
++ struct rk_i2s_tdm_dev *i2s_tdm = to_info(cpu_dai);
++
++ if (i2s_tdm->clk_trcm) {
++ i2s_tdm->mclk_tx_freq = freq;
++ i2s_tdm->mclk_rx_freq = freq;
++ } else {
++ if (stream == SNDRV_PCM_STREAM_PLAYBACK)
++ i2s_tdm->mclk_tx_freq = freq;
++ else
++ i2s_tdm->mclk_rx_freq = freq;
++ }
++
++	dev_dbg(i2s_tdm->dev, "The target mclk_%s freq is: %u\n",
++ stream ? "rx" : "tx", freq);
++
++ return 0;
++}
++
+ static int rockchip_i2s_tdm_hw_params(struct snd_pcm_substream *substream,
+ struct snd_pcm_hw_params *params,
+ struct snd_soc_dai *dai)
+@@ -659,15 +681,19 @@ static int rockchip_i2s_tdm_hw_params(struct snd_pcm_substream *substream,
+
+ if (i2s_tdm->clk_trcm == TRCM_TX) {
+ mclk = i2s_tdm->mclk_tx;
++ mclk_rate = i2s_tdm->mclk_tx_freq;
+ } else if (i2s_tdm->clk_trcm == TRCM_RX) {
+ mclk = i2s_tdm->mclk_rx;
++ mclk_rate = i2s_tdm->mclk_rx_freq;
+ } else if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ mclk = i2s_tdm->mclk_tx;
++ mclk_rate = i2s_tdm->mclk_tx_freq;
+ } else {
+ mclk = i2s_tdm->mclk_rx;
++ mclk_rate = i2s_tdm->mclk_rx_freq;
+ }
+
+- err = clk_set_rate(mclk, DEFAULT_MCLK_FS * params_rate(params));
++ err = clk_set_rate(mclk, mclk_rate);
+ if (err)
+ return err;
+
+@@ -827,6 +853,7 @@ static const struct snd_soc_dai_ops rockchip_i2s_tdm_dai_ops = {
+ .hw_params = rockchip_i2s_tdm_hw_params,
+ .set_bclk_ratio = rockchip_i2s_tdm_set_bclk_ratio,
+ .set_fmt = rockchip_i2s_tdm_set_fmt,
++ .set_sysclk = rockchip_i2s_tdm_set_sysclk,
+ .set_tdm_slot = rockchip_dai_tdm_slot,
+ .trigger = rockchip_i2s_tdm_trigger,
+ };
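
The rockchip hunks add a set_sysclk callback that records the requested MCLK rate per direction (mirrored when TX and RX share a clock) for hw_params to consume later, replacing the fixed 256 * fs rate. A simplified sketch of that bookkeeping, with stand-in types and stream constants:

#include <stdbool.h>
#include <stdio.h>

enum { STREAM_PLAYBACK, STREAM_CAPTURE };	/* stand-in constants */

struct i2s_tdm {
	bool clk_trcm;	/* TX and RX share one clock when set */
	unsigned int mclk_tx_freq, mclk_rx_freq;
};

static void set_sysclk(struct i2s_tdm *i2s, int stream, unsigned int freq)
{
	if (i2s->clk_trcm) {
		i2s->mclk_tx_freq = freq;	/* shared clock: keep in sync */
		i2s->mclk_rx_freq = freq;
	} else if (stream == STREAM_PLAYBACK) {
		i2s->mclk_tx_freq = freq;
	} else {
		i2s->mclk_rx_freq = freq;
	}
}

int main(void)
{
	struct i2s_tdm i2s = { .clk_trcm = false };

	set_sysclk(&i2s, STREAM_PLAYBACK, 12288000);
	printf("tx=%u rx=%u\n", i2s.mclk_tx_freq, i2s.mclk_rx_freq);
	return 0;
}
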
+diff --git a/sound/soc/sh/rz-ssi.c b/sound/soc/sh/rz-ssi.c
+index 040ce0431fd2c5..32db2cead8a4ec 100644
+--- a/sound/soc/sh/rz-ssi.c
++++ b/sound/soc/sh/rz-ssi.c
+@@ -258,8 +258,7 @@ static void rz_ssi_stream_quit(struct rz_ssi_priv *ssi,
+ static int rz_ssi_clk_setup(struct rz_ssi_priv *ssi, unsigned int rate,
+ unsigned int channels)
+ {
+- static s8 ckdv[16] = { 1, 2, 4, 8, 16, 32, 64, 128,
+- 6, 12, 24, 48, 96, -1, -1, -1 };
++ static u8 ckdv[] = { 1, 2, 4, 8, 16, 32, 64, 128, 6, 12, 24, 48, 96 };
+ unsigned int channel_bits = 32; /* System Word Length */
+ unsigned long bclk_rate = rate * channels * channel_bits;
+ unsigned int div;
+diff --git a/sound/soc/sunxi/sun4i-spdif.c b/sound/soc/sunxi/sun4i-spdif.c
+index 0aa4164232464e..7cf623cbe9ed4b 100644
+--- a/sound/soc/sunxi/sun4i-spdif.c
++++ b/sound/soc/sunxi/sun4i-spdif.c
+@@ -176,6 +176,7 @@ struct sun4i_spdif_quirks {
+ unsigned int reg_dac_txdata;
+ bool has_reset;
+ unsigned int val_fctl_ftx;
++ unsigned int mclk_multiplier;
+ };
+
+ struct sun4i_spdif_dev {
+@@ -313,6 +314,7 @@ static int sun4i_spdif_hw_params(struct snd_pcm_substream *substream,
+ default:
+ return -EINVAL;
+ }
++ mclk *= host->quirks->mclk_multiplier;
+
+ ret = clk_set_rate(host->spdif_clk, mclk);
+ if (ret < 0) {
+@@ -347,6 +349,7 @@ static int sun4i_spdif_hw_params(struct snd_pcm_substream *substream,
+ default:
+ return -EINVAL;
+ }
++ mclk_div *= host->quirks->mclk_multiplier;
+
+ reg_val = 0;
+ reg_val |= SUN4I_SPDIF_TXCFG_ASS;
+@@ -540,24 +543,28 @@ static struct snd_soc_dai_driver sun4i_spdif_dai = {
+ static const struct sun4i_spdif_quirks sun4i_a10_spdif_quirks = {
+ .reg_dac_txdata = SUN4I_SPDIF_TXFIFO,
+ .val_fctl_ftx = SUN4I_SPDIF_FCTL_FTX,
++ .mclk_multiplier = 1,
+ };
+
+ static const struct sun4i_spdif_quirks sun6i_a31_spdif_quirks = {
+ .reg_dac_txdata = SUN4I_SPDIF_TXFIFO,
+ .val_fctl_ftx = SUN4I_SPDIF_FCTL_FTX,
+ .has_reset = true,
++ .mclk_multiplier = 1,
+ };
+
+ static const struct sun4i_spdif_quirks sun8i_h3_spdif_quirks = {
+ .reg_dac_txdata = SUN8I_SPDIF_TXFIFO,
+ .val_fctl_ftx = SUN4I_SPDIF_FCTL_FTX,
+ .has_reset = true,
++ .mclk_multiplier = 4,
+ };
+
+ static const struct sun4i_spdif_quirks sun50i_h6_spdif_quirks = {
+ .reg_dac_txdata = SUN8I_SPDIF_TXFIFO,
+ .val_fctl_ftx = SUN50I_H6_SPDIF_FCTL_FTX,
+ .has_reset = true,
++ .mclk_multiplier = 1,
+ };
+
+ static const struct of_device_id sun4i_spdif_of_match[] = {
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 7968d6a2f592ac..a97efb7b131ea2 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -2343,6 +2343,8 @@ static const struct usb_audio_quirk_flags_table quirk_flags_table[] = {
+ QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ DEVICE_FLG(0x2d95, 0x8021, /* VIVO USB-C-XE710 HEADSET */
+ QUIRK_FLAG_CTL_MSG_DELAY_1M),
++ DEVICE_FLG(0x2fc6, 0xf0b7, /* iBasso DC07 Pro */
++ QUIRK_FLAG_CTL_MSG_DELAY_1M),
+ DEVICE_FLG(0x30be, 0x0101, /* Schiit Hel */
+ QUIRK_FLAG_IGNORE_CTL_ERROR),
+ DEVICE_FLG(0x413c, 0xa506, /* Dell AE515 sound bar */
+diff --git a/tools/bootconfig/main.c b/tools/bootconfig/main.c
+index 156b62a163c5a6..8a48cc2536f566 100644
+--- a/tools/bootconfig/main.c
++++ b/tools/bootconfig/main.c
+@@ -226,7 +226,7 @@ static int load_xbc_from_initrd(int fd, char **buf)
+ /* Wrong Checksum */
+ rcsum = xbc_calc_checksum(*buf, size);
+ if (csum != rcsum) {
+- pr_err("checksum error: %d != %d\n", csum, rcsum);
++ pr_err("checksum error: %u != %u\n", csum, rcsum);
+ return -EINVAL;
+ }
+
+@@ -395,7 +395,7 @@ static int apply_xbc(const char *path, const char *xbc_path)
+ xbc_get_info(&ret, NULL);
+ printf("\tNumber of nodes: %d\n", ret);
+ printf("\tSize: %u bytes\n", (unsigned int)size);
+- printf("\tChecksum: %d\n", (unsigned int)csum);
++ printf("\tChecksum: %u\n", (unsigned int)csum);
+
+ /* TODO: Check the options by schema */
+ xbc_exit();
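
The bootconfig fix swaps %d for %u when printing unsigned checksums. A two-line illustration of why the specifier matters once the top bit is set:

#include <stdio.h>

int main(void)
{
	unsigned int csum = 0xdeadbeef;

	printf("with %%d: %d\n", (int)csum);	/* misleading negative value */
	printf("with %%u: %u\n", csum);		/* the actual checksum */
	return 0;
}
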
+diff --git a/tools/include/uapi/linux/if_xdp.h b/tools/include/uapi/linux/if_xdp.h
+index 2f082b01ff2284..42ec5ddaab8dc8 100644
+--- a/tools/include/uapi/linux/if_xdp.h
++++ b/tools/include/uapi/linux/if_xdp.h
+@@ -117,12 +117,12 @@ struct xdp_options {
+ ((1ULL << XSK_UNALIGNED_BUF_OFFSET_SHIFT) - 1)
+
+ /* Request transmit timestamp. Upon completion, put it into tx_timestamp
+- * field of union xsk_tx_metadata.
++ * field of struct xsk_tx_metadata.
+ */
+ #define XDP_TXMD_FLAGS_TIMESTAMP (1 << 0)
+
+ /* Request transmit checksum offload. Checksum start position and offset
+- * are communicated via csum_start and csum_offset fields of union
++ * are communicated via csum_start and csum_offset fields of struct
+ * xsk_tx_metadata.
+ */
+ #define XDP_TXMD_FLAGS_CHECKSUM (1 << 1)
+diff --git a/tools/lib/bpf/btf.c b/tools/lib/bpf/btf.c
+index 3c131039c52326..27e7bfae953bd3 100644
+--- a/tools/lib/bpf/btf.c
++++ b/tools/lib/bpf/btf.c
+@@ -1185,6 +1185,7 @@ static struct btf *btf_parse_elf(const char *path, struct btf *base_btf,
+
+ elf = elf_begin(fd, ELF_C_READ, NULL);
+ if (!elf) {
++ err = -LIBBPF_ERRNO__FORMAT;
+ pr_warn("failed to open %s as ELF file\n", path);
+ goto done;
+ }
+diff --git a/tools/lib/bpf/btf_relocate.c b/tools/lib/bpf/btf_relocate.c
+index 4f7399d85eab3d..8ef8003480dac8 100644
+--- a/tools/lib/bpf/btf_relocate.c
++++ b/tools/lib/bpf/btf_relocate.c
+@@ -212,7 +212,7 @@ static int btf_relocate_map_distilled_base(struct btf_relocate *r)
+ * need to match both name and size, otherwise embedding the base
+ * struct/union in the split type is invalid.
+ */
+- for (id = r->nr_dist_base_types; id < r->nr_split_types; id++) {
++ for (id = r->nr_dist_base_types; id < r->nr_dist_base_types + r->nr_split_types; id++) {
+ err = btf_mark_embedded_composite_type_ids(r, id);
+ if (err)
+ goto done;
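
The btf_relocate fix corrects the iteration bound: split-type ids occupy the range [nr_dist_base_types, nr_dist_base_types + nr_split_types), so ending the loop at nr_split_types alone skipped every id whenever the base count exceeded the split count. A small demonstration with illustrative numbers:

#include <stdio.h>

int main(void)
{
	unsigned int nr_base = 100, nr_split = 20, id;

	/* buggy bound: the body never runs because 100 >= 20 */
	for (id = nr_base; id < nr_split; id++)
		printf("buggy visit %u\n", id);

	/* fixed bound: visits exactly the nr_split split-type ids */
	for (id = nr_base; id < nr_base + nr_split; id++)
		if (id == nr_base || id == nr_base + nr_split - 1)
			printf("fixed visit %u\n", id);

	return 0;
}
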
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index 6985ab0f1ca9e8..777600822d8e45 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -567,17 +567,15 @@ static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
+ }
+ obj->elf = elf_begin(obj->fd, ELF_C_READ_MMAP, NULL);
+ if (!obj->elf) {
+- err = -errno;
+ pr_warn_elf("failed to parse ELF file '%s'", filename);
+- return err;
++ return -EINVAL;
+ }
+
+ /* Sanity check ELF file high-level properties */
+ ehdr = elf64_getehdr(obj->elf);
+ if (!ehdr) {
+- err = -errno;
+ pr_warn_elf("failed to get ELF header for %s", filename);
+- return err;
++ return -EINVAL;
+ }
+ if (ehdr->e_ident[EI_DATA] != host_endianness) {
+ err = -EOPNOTSUPP;
+@@ -593,9 +591,8 @@ static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
+ }
+
+ if (elf_getshdrstrndx(obj->elf, &obj->shstrs_sec_idx)) {
+- err = -errno;
+ pr_warn_elf("failed to get SHSTRTAB section index for %s", filename);
+- return err;
++ return -EINVAL;
+ }
+
+ scn = NULL;
+@@ -605,26 +602,23 @@ static int linker_load_obj_file(struct bpf_linker *linker, const char *filename,
+
+ shdr = elf64_getshdr(scn);
+ if (!shdr) {
+- err = -errno;
+ pr_warn_elf("failed to get section #%zu header for %s",
+ sec_idx, filename);
+- return err;
++ return -EINVAL;
+ }
+
+ sec_name = elf_strptr(obj->elf, obj->shstrs_sec_idx, shdr->sh_name);
+ if (!sec_name) {
+- err = -errno;
+ pr_warn_elf("failed to get section #%zu name for %s",
+ sec_idx, filename);
+- return err;
++ return -EINVAL;
+ }
+
+ data = elf_getdata(scn, 0);
+ if (!data) {
+- err = -errno;
+ pr_warn_elf("failed to get section #%zu (%s) data from %s",
+ sec_idx, sec_name, filename);
+- return err;
++ return -EINVAL;
+ }
+
+ sec = add_src_sec(obj, sec_name);
+@@ -2635,14 +2629,14 @@ int bpf_linker__finalize(struct bpf_linker *linker)
+
+ /* Finalize ELF layout */
+ if (elf_update(linker->elf, ELF_C_NULL) < 0) {
+- err = -errno;
++ err = -EINVAL;
+ pr_warn_elf("failed to finalize ELF layout");
+ return libbpf_err(err);
+ }
+
+ /* Write out final ELF contents */
+ if (elf_update(linker->elf, ELF_C_WRITE) < 0) {
+- err = -errno;
++ err = -EINVAL;
+ pr_warn_elf("failed to write ELF contents");
+ return libbpf_err(err);
+ }
+diff --git a/tools/lib/bpf/usdt.c b/tools/lib/bpf/usdt.c
+index 93794f01bb67cb..6ff28e7bf5e3da 100644
+--- a/tools/lib/bpf/usdt.c
++++ b/tools/lib/bpf/usdt.c
+@@ -659,7 +659,7 @@ static int collect_usdt_targets(struct usdt_manager *man, Elf *elf, const char *
+ * [0] https://sourceware.org/systemtap/wiki/UserSpaceProbeImplementation
+ */
+ usdt_abs_ip = note.loc_addr;
+- if (base_addr)
++ if (base_addr && note.base_addr)
+ usdt_abs_ip += base_addr - note.base_addr;
+
+ /* When attaching uprobes (which is what USDTs basically are)
+diff --git a/tools/net/ynl/lib/ynl.c b/tools/net/ynl/lib/ynl.c
+index e16cef160bc2cb..ce32cb35007d6f 100644
+--- a/tools/net/ynl/lib/ynl.c
++++ b/tools/net/ynl/lib/ynl.c
+@@ -95,7 +95,7 @@ ynl_err_walk(struct ynl_sock *ys, void *start, void *end, unsigned int off,
+
+ ynl_attr_for_each_payload(start, data_len, attr) {
+ astart_off = (char *)attr - (char *)start;
+- aend_off = astart_off + ynl_attr_data_len(attr);
++ aend_off = (char *)ynl_attr_data_end(attr) - (char *)start;
+ if (aend_off <= off)
+ continue;
+
+diff --git a/tools/perf/MANIFEST b/tools/perf/MANIFEST
+index dc42de1785cee7..908165fcec7de3 100644
+--- a/tools/perf/MANIFEST
++++ b/tools/perf/MANIFEST
+@@ -1,5 +1,6 @@
+ arch/arm64/tools/gen-sysreg.awk
+ arch/arm64/tools/sysreg
++arch/*/include/uapi/asm/bpf_perf_event.h
+ tools/perf
+ tools/arch
+ tools/scripts
+diff --git a/tools/perf/builtin-inject.c b/tools/perf/builtin-inject.c
+index d6989195a061ff..11e49cafa3af9d 100644
+--- a/tools/perf/builtin-inject.c
++++ b/tools/perf/builtin-inject.c
+@@ -2367,10 +2367,10 @@ int cmd_inject(int argc, const char **argv)
+ };
+ int ret;
+ const char *known_build_ids = NULL;
+- bool build_ids;
+- bool build_id_all;
+- bool mmap2_build_ids;
+- bool mmap2_build_id_all;
++ bool build_ids = false;
++ bool build_id_all = false;
++ bool mmap2_build_ids = false;
++ bool mmap2_build_id_all = false;
+
+ struct option options[] = {
+ OPT_BOOLEAN('b', "build-ids", &build_ids,
+diff --git a/tools/perf/builtin-lock.c b/tools/perf/builtin-lock.c
+index 062e2b56a2ab57..33a456980664a0 100644
+--- a/tools/perf/builtin-lock.c
++++ b/tools/perf/builtin-lock.c
+@@ -1591,8 +1591,8 @@ static const struct {
+ { LCB_F_PERCPU | LCB_F_WRITE, "pcpu-sem:W", "percpu-rwsem" },
+ { LCB_F_MUTEX, "mutex", "mutex" },
+ { LCB_F_MUTEX | LCB_F_SPIN, "mutex", "mutex" },
+- /* alias for get_type_flag() */
+- { LCB_F_MUTEX | LCB_F_SPIN, "mutex-spin", "mutex" },
++ /* alias for optimistic spinning only */
++ { LCB_F_MUTEX | LCB_F_SPIN, "mutex:spin", "mutex-spin" },
+ };
+
+ static const char *get_type_str(unsigned int flags)
+@@ -1617,19 +1617,6 @@ static const char *get_type_name(unsigned int flags)
+ return "unknown";
+ }
+
+-static unsigned int get_type_flag(const char *str)
+-{
+- for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
+- if (!strcmp(lock_type_table[i].name, str))
+- return lock_type_table[i].flags;
+- }
+- for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
+- if (!strcmp(lock_type_table[i].str, str))
+- return lock_type_table[i].flags;
+- }
+- return UINT_MAX;
+-}
+-
+ static void lock_filter_finish(void)
+ {
+ zfree(&filters.types);
+@@ -2350,29 +2337,58 @@ static int parse_lock_type(const struct option *opt __maybe_unused, const char *
+ int unset __maybe_unused)
+ {
+ char *s, *tmp, *tok;
+- int ret = 0;
+
+ s = strdup(str);
+ if (s == NULL)
+ return -1;
+
+ for (tok = strtok_r(s, ", ", &tmp); tok; tok = strtok_r(NULL, ", ", &tmp)) {
+- unsigned int flags = get_type_flag(tok);
++ bool found = false;
+
+- if (flags == -1U) {
+- pr_err("Unknown lock flags: %s\n", tok);
+- ret = -1;
+- break;
++ /* `tok` is `str` in `lock_type_table` if it contains ':'. */
++ if (strchr(tok, ':')) {
++ for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
++ if (!strcmp(lock_type_table[i].str, tok) &&
++ add_lock_type(lock_type_table[i].flags)) {
++ found = true;
++ break;
++ }
++ }
++
++ if (!found) {
++ pr_err("Unknown lock flags name: %s\n", tok);
++ free(s);
++ return -1;
++ }
++
++ continue;
+ }
+
+- if (!add_lock_type(flags)) {
+- ret = -1;
+- break;
++ /*
++ * Otherwise `tok` is `name` in `lock_type_table`.
++		 * A single lock name may map to multiple flag sets.
++ */
++ for (unsigned int i = 0; i < ARRAY_SIZE(lock_type_table); i++) {
++ if (!strcmp(lock_type_table[i].name, tok)) {
++ if (add_lock_type(lock_type_table[i].flags)) {
++ found = true;
++ } else {
++ free(s);
++ return -1;
++ }
++ }
+ }
++
++ if (!found) {
++ pr_err("Unknown lock name: %s\n", tok);
++ free(s);
++ return -1;
++ }
++
+ }
+
+ free(s);
+- return ret;
++ return 0;
+ }
+
+ static bool add_lock_addr(unsigned long addr)
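
The rewritten parse_lock_type() above distinguishes the two table columns: tokens containing ':' match the unique machine-readable str entries, while plain names may match several flag sets. A condensed sketch of that matching logic follows; the table contents are abbreviated stand-ins for lock_type_table.

#include <stdio.h>
#include <string.h>

static const struct {
	unsigned int flags;
	const char *str, *name;
} table[] = {	/* abbreviated stand-in for lock_type_table */
	{ 0x1, "mutex", "mutex" },
	{ 0x3, "mutex", "mutex" },
	{ 0x3, "mutex:spin", "mutex-spin" },
};

static int parse_token(const char *tok, unsigned int *flags, int *n)
{
	int has_colon = strchr(tok, ':') != NULL;
	int found = 0;
	size_t i;

	for (i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
		const char *key = has_colon ? table[i].str : table[i].name;

		if (!strcmp(key, tok)) {
			flags[(*n)++] = table[i].flags;
			found = 1;
			if (has_colon)
				break;	/* ':' keys are unique */
		}
	}
	return found ? 0 : -1;
}

int main(void)
{
	unsigned int flags[4];
	int n = 0;

	if (!parse_token("mutex", flags, &n))
		printf("name matched %d flag set(s)\n", n);	/* prints 2 */
	return 0;
}
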
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 5dc17ffee27a2d..645deec294c842 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -1418,7 +1418,7 @@ int cmd_report(int argc, const char **argv)
+ OPT_STRING(0, "addr2line", &addr2line_path, "path",
+ "addr2line binary to use for line numbers"),
+ OPT_BOOLEAN(0, "demangle", &symbol_conf.demangle,
+- "Disable symbol demangling"),
++ "Symbol demangling. Enabled by default, use --no-demangle to disable."),
+ OPT_BOOLEAN(0, "demangle-kernel", &symbol_conf.demangle_kernel,
+ "Enable kernel symbol demangling"),
+ OPT_BOOLEAN(0, "mem-mode", &report.mem_mode, "mem access profile"),
+diff --git a/tools/perf/builtin-top.c b/tools/perf/builtin-top.c
+index 724a7938632126..ca3e8eca6610e8 100644
+--- a/tools/perf/builtin-top.c
++++ b/tools/perf/builtin-top.c
+@@ -809,7 +809,7 @@ static void perf_event__process_sample(const struct perf_tool *tool,
+ * invalid --vmlinux ;-)
+ */
+ if (!machine->kptr_restrict_warned && !top->vmlinux_warned &&
+- __map__is_kernel(al.map) && map__has_symbols(al.map)) {
++ __map__is_kernel(al.map) && !map__has_symbols(al.map)) {
+ if (symbol_conf.vmlinux_name) {
+ char serr[256];
+
+diff --git a/tools/perf/builtin-trace.c b/tools/perf/builtin-trace.c
+index ffa1295273099e..ecd26e058baf67 100644
+--- a/tools/perf/builtin-trace.c
++++ b/tools/perf/builtin-trace.c
+@@ -2122,8 +2122,12 @@ static int trace__read_syscall_info(struct trace *trace, int id)
+ return PTR_ERR(sc->tp_format);
+ }
+
++ /*
++	 * The tracepoint format contains the __syscall_nr field, so its field
++	 * count is one more than the actual number of syscall arguments.
++ */
+ if (syscall__alloc_arg_fmts(sc, IS_ERR(sc->tp_format) ?
+- RAW_SYSCALL_ARGS_NUM : sc->tp_format->format.nr_fields))
++ RAW_SYSCALL_ARGS_NUM : sc->tp_format->format.nr_fields - 1))
+ return -ENOMEM;
+
+ sc->args = sc->tp_format->format.fields;
+diff --git a/tools/perf/tests/shell/trace_btf_enum.sh b/tools/perf/tests/shell/trace_btf_enum.sh
+index 5a3b8a5a9b5cf2..8d1e6bbeac9068 100755
+--- a/tools/perf/tests/shell/trace_btf_enum.sh
++++ b/tools/perf/tests/shell/trace_btf_enum.sh
+@@ -26,8 +26,12 @@ check_vmlinux() {
+ trace_landlock() {
+ echo "Tracing syscall ${syscall}"
+
+- # test flight just to see if landlock_add_rule and libbpf are available
+- $TESTPROG
++ # test flight just to see if landlock_add_rule is available
++ if ! perf trace $TESTPROG 2>&1 | grep -q landlock
++ then
++ echo "No landlock system call found, skipping to non-syscall tracing."
++ return
++ fi
+
+ if perf trace -e $syscall $TESTPROG 2>&1 | \
+ grep -q -E ".*landlock_add_rule\(ruleset_fd: 11, rule_type: (LANDLOCK_RULE_PATH_BENEATH|LANDLOCK_RULE_NET_PORT), rule_attr: 0x[a-f0-9]+, flags: 45\) = -1.*"
+diff --git a/tools/perf/util/bpf-event.c b/tools/perf/util/bpf-event.c
+index 13608237c50e05..c81444059ad077 100644
+--- a/tools/perf/util/bpf-event.c
++++ b/tools/perf/util/bpf-event.c
+@@ -289,7 +289,10 @@ static int perf_event__synthesize_one_bpf_prog(struct perf_session *session,
+ }
+
+ info_node->info_linear = info_linear;
+- perf_env__insert_bpf_prog_info(env, info_node);
++ if (!perf_env__insert_bpf_prog_info(env, info_node)) {
++ free(info_linear);
++ free(info_node);
++ }
+ info_linear = NULL;
+
+ /*
+@@ -480,7 +483,10 @@ static void perf_env__add_bpf_info(struct perf_env *env, u32 id)
+ info_node = malloc(sizeof(struct bpf_prog_info_node));
+ if (info_node) {
+ info_node->info_linear = info_linear;
+- perf_env__insert_bpf_prog_info(env, info_node);
++ if (!perf_env__insert_bpf_prog_info(env, info_node)) {
++ free(info_linear);
++ free(info_node);
++ }
+ } else
+ free(info_linear);
+
+diff --git a/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c b/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c
+index 4a62ed593e84ed..e4352881e3faa6 100644
+--- a/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c
++++ b/tools/perf/util/bpf_skel/augmented_raw_syscalls.bpf.c
+@@ -431,9 +431,9 @@ static bool pid_filter__has(struct pids_filtered *pids, pid_t pid)
+ static int augment_sys_enter(void *ctx, struct syscall_enter_args *args)
+ {
+ bool augmented, do_output = false;
+- int zero = 0, size, aug_size, index,
+- value_size = sizeof(struct augmented_arg) - offsetof(struct augmented_arg, value);
++ int zero = 0, index, value_size = sizeof(struct augmented_arg) - offsetof(struct augmented_arg, value);
+ u64 output = 0; /* has to be u64, otherwise it won't pass the verifier */
++ s64 aug_size, size;
+ unsigned int nr, *beauty_map;
+ struct beauty_payload_enter *payload;
+ void *arg, *payload_offset;
+@@ -484,14 +484,11 @@ static int augment_sys_enter(void *ctx, struct syscall_enter_args *args)
+ } else if (size > 0 && size <= value_size) { /* struct */
+ if (!bpf_probe_read_user(((struct augmented_arg *)payload_offset)->value, size, arg))
+ augmented = true;
+- } else if (size < 0 && size >= -6) { /* buffer */
++ } else if ((int)size < 0 && size >= -6) { /* buffer */
+ index = -(size + 1);
+ barrier_var(index); // Prevent clang (noticed with v18) from removing the &= 7 trick.
+ index &= 7; // Satisfy the bounds checking with the verifier in some kernels.
+- aug_size = args->args[index];
+-
+- if (aug_size > TRACE_AUG_MAX_BUF)
+- aug_size = TRACE_AUG_MAX_BUF;
++ aug_size = args->args[index] > TRACE_AUG_MAX_BUF ? TRACE_AUG_MAX_BUF : args->args[index];
+
+ if (aug_size > 0) {
+ if (!bpf_probe_read_user(((struct augmented_arg *)payload_offset)->value, aug_size, arg))
+diff --git a/tools/perf/util/env.c b/tools/perf/util/env.c
+index 1edbccfc3281d2..d981b6f4bc5ea2 100644
+--- a/tools/perf/util/env.c
++++ b/tools/perf/util/env.c
+@@ -22,15 +22,19 @@ struct perf_env perf_env;
+ #include "bpf-utils.h"
+ #include <bpf/libbpf.h>
+
+-void perf_env__insert_bpf_prog_info(struct perf_env *env,
++bool perf_env__insert_bpf_prog_info(struct perf_env *env,
+ struct bpf_prog_info_node *info_node)
+ {
++ bool ret;
++
+ down_write(&env->bpf_progs.lock);
+- __perf_env__insert_bpf_prog_info(env, info_node);
++ ret = __perf_env__insert_bpf_prog_info(env, info_node);
+ up_write(&env->bpf_progs.lock);
++
++ return ret;
+ }
+
+-void __perf_env__insert_bpf_prog_info(struct perf_env *env, struct bpf_prog_info_node *info_node)
++bool __perf_env__insert_bpf_prog_info(struct perf_env *env, struct bpf_prog_info_node *info_node)
+ {
+ __u32 prog_id = info_node->info_linear->info.id;
+ struct bpf_prog_info_node *node;
+@@ -48,13 +52,14 @@ void __perf_env__insert_bpf_prog_info(struct perf_env *env, struct bpf_prog_info
+ p = &(*p)->rb_right;
+ } else {
+ pr_debug("duplicated bpf prog info %u\n", prog_id);
+- return;
++ return false;
+ }
+ }
+
+ rb_link_node(&info_node->rb_node, parent, p);
+ rb_insert_color(&info_node->rb_node, &env->bpf_progs.infos);
+ env->bpf_progs.infos_cnt++;
++ return true;
+ }
+
+ struct bpf_prog_info_node *perf_env__find_bpf_prog_info(struct perf_env *env,
+diff --git a/tools/perf/util/env.h b/tools/perf/util/env.h
+index 51b36c36019be6..38de0af2a68081 100644
+--- a/tools/perf/util/env.h
++++ b/tools/perf/util/env.h
+@@ -176,9 +176,9 @@ const char *perf_env__raw_arch(struct perf_env *env);
+ int perf_env__nr_cpus_avail(struct perf_env *env);
+
+ void perf_env__init(struct perf_env *env);
+-void __perf_env__insert_bpf_prog_info(struct perf_env *env,
++bool __perf_env__insert_bpf_prog_info(struct perf_env *env,
+ struct bpf_prog_info_node *info_node);
+-void perf_env__insert_bpf_prog_info(struct perf_env *env,
++bool perf_env__insert_bpf_prog_info(struct perf_env *env,
+ struct bpf_prog_info_node *info_node);
+ struct bpf_prog_info_node *perf_env__find_bpf_prog_info(struct perf_env *env,
+ __u32 prog_id);
+diff --git a/tools/perf/util/expr.c b/tools/perf/util/expr.c
+index b2536a59c44e64..90c6ce2212e4fe 100644
+--- a/tools/perf/util/expr.c
++++ b/tools/perf/util/expr.c
+@@ -288,7 +288,7 @@ struct expr_parse_ctx *expr__ctx_new(void)
+ {
+ struct expr_parse_ctx *ctx;
+
+- ctx = malloc(sizeof(struct expr_parse_ctx));
++ ctx = calloc(1, sizeof(struct expr_parse_ctx));
+ if (!ctx)
+ return NULL;
+
+@@ -297,9 +297,6 @@ struct expr_parse_ctx *expr__ctx_new(void)
+ free(ctx);
+ return NULL;
+ }
+- ctx->sctx.user_requested_cpu_list = NULL;
+- ctx->sctx.runtime = 0;
+- ctx->sctx.system_wide = false;
+
+ return ctx;
+ }
+diff --git a/tools/perf/util/header.c b/tools/perf/util/header.c
+index a6386d12afd729..7b99f58f7bf269 100644
+--- a/tools/perf/util/header.c
++++ b/tools/perf/util/header.c
+@@ -3188,7 +3188,10 @@ static int process_bpf_prog_info(struct feat_fd *ff, void *data __maybe_unused)
+ /* after reading from file, translate offset to address */
+ bpil_offs_to_addr(info_linear);
+ info_node->info_linear = info_linear;
+- __perf_env__insert_bpf_prog_info(env, info_node);
++ if (!__perf_env__insert_bpf_prog_info(env, info_node)) {
++ free(info_linear);
++ free(info_node);
++ }
+ }
+
+ up_write(&env->bpf_progs.lock);
+@@ -3235,7 +3238,8 @@ static int process_bpf_btf(struct feat_fd *ff, void *data __maybe_unused)
+ if (__do_read(ff, node->data, data_size))
+ goto out;
+
+- __perf_env__insert_btf(env, node);
++ if (!__perf_env__insert_btf(env, node))
++ free(node);
+ node = NULL;
+ }
+
+diff --git a/tools/perf/util/machine.c b/tools/perf/util/machine.c
+index 27d5345d2b307a..9be2f4479f5257 100644
+--- a/tools/perf/util/machine.c
++++ b/tools/perf/util/machine.c
+@@ -1003,7 +1003,7 @@ static int machine__get_running_kernel_start(struct machine *machine,
+
+ err = kallsyms__get_symbol_start(filename, "_edata", &addr);
+ if (err)
+- err = kallsyms__get_function_start(filename, "_etext", &addr);
++ err = kallsyms__get_symbol_start(filename, "_etext", &addr);
+ if (!err)
+ *end = addr;
+
+diff --git a/tools/perf/util/maps.c b/tools/perf/util/maps.c
+index 432399cbe5dd39..09c9cc326c08d4 100644
+--- a/tools/perf/util/maps.c
++++ b/tools/perf/util/maps.c
+@@ -1136,8 +1136,13 @@ struct map *maps__find_next_entry(struct maps *maps, struct map *map)
+ struct map *result = NULL;
+
+ down_read(maps__lock(maps));
++ while (!maps__maps_by_address_sorted(maps)) {
++ up_read(maps__lock(maps));
++ maps__sort_by_address(maps);
++ down_read(maps__lock(maps));
++ }
+ i = maps__by_address_index(maps, map);
+- if (i < maps__nr_maps(maps))
++ if (++i < maps__nr_maps(maps))
+ result = map__get(maps__maps_by_address(maps)[i]);
+
+ up_read(maps__lock(maps));
+diff --git a/tools/perf/util/namespaces.c b/tools/perf/util/namespaces.c
+index cb185c5659d6b3..68f5de2d79c72c 100644
+--- a/tools/perf/util/namespaces.c
++++ b/tools/perf/util/namespaces.c
+@@ -266,11 +266,16 @@ pid_t nsinfo__pid(const struct nsinfo *nsi)
+ return RC_CHK_ACCESS(nsi)->pid;
+ }
+
+-pid_t nsinfo__in_pidns(const struct nsinfo *nsi)
++bool nsinfo__in_pidns(const struct nsinfo *nsi)
+ {
+ return RC_CHK_ACCESS(nsi)->in_pidns;
+ }
+
++void nsinfo__set_in_pidns(struct nsinfo *nsi)
++{
++ RC_CHK_ACCESS(nsi)->in_pidns = true;
++}
++
+ void nsinfo__mountns_enter(struct nsinfo *nsi,
+ struct nscookie *nc)
+ {
+diff --git a/tools/perf/util/namespaces.h b/tools/perf/util/namespaces.h
+index 8c0731c6cbb7ee..e95c79b80e27c8 100644
+--- a/tools/perf/util/namespaces.h
++++ b/tools/perf/util/namespaces.h
+@@ -58,7 +58,8 @@ void nsinfo__clear_need_setns(struct nsinfo *nsi);
+ pid_t nsinfo__tgid(const struct nsinfo *nsi);
+ pid_t nsinfo__nstgid(const struct nsinfo *nsi);
+ pid_t nsinfo__pid(const struct nsinfo *nsi);
+-pid_t nsinfo__in_pidns(const struct nsinfo *nsi);
++bool nsinfo__in_pidns(const struct nsinfo *nsi);
++void nsinfo__set_in_pidns(struct nsinfo *nsi);
+
+ void nsinfo__mountns_enter(struct nsinfo *nsi, struct nscookie *nc);
+ void nsinfo__mountns_exit(struct nscookie *nc);
+diff --git a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+index ae6af354a81db5..08a399b0be286c 100644
+--- a/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
++++ b/tools/power/cpupower/utils/idle_monitor/mperf_monitor.c
+@@ -33,7 +33,7 @@ static int mperf_get_count_percent(unsigned int self_id, double *percent,
+ unsigned int cpu);
+ static int mperf_get_count_freq(unsigned int id, unsigned long long *count,
+ unsigned int cpu);
+-static struct timespec time_start, time_end;
++static struct timespec *time_start, *time_end;
+
+ static cstate_t mperf_cstates[MPERF_CSTATE_COUNT] = {
+ {
+@@ -174,7 +174,7 @@ static int mperf_get_count_percent(unsigned int id, double *percent,
+ dprint("%s: TSC Ref - mperf_diff: %llu, tsc_diff: %llu\n",
+ mperf_cstates[id].name, mperf_diff, tsc_diff);
+ } else if (max_freq_mode == MAX_FREQ_SYSFS) {
+- timediff = max_frequency * timespec_diff_us(time_start, time_end);
++ timediff = max_frequency * timespec_diff_us(time_start[cpu], time_end[cpu]);
+ *percent = 100.0 * mperf_diff / timediff;
+ dprint("%s: MAXFREQ - mperf_diff: %llu, time_diff: %llu\n",
+ mperf_cstates[id].name, mperf_diff, timediff);
+@@ -207,7 +207,7 @@ static int mperf_get_count_freq(unsigned int id, unsigned long long *count,
+ if (max_freq_mode == MAX_FREQ_TSC_REF) {
+ /* Calculate max_freq from TSC count */
+ tsc_diff = tsc_at_measure_end[cpu] - tsc_at_measure_start[cpu];
+- time_diff = timespec_diff_us(time_start, time_end);
++ time_diff = timespec_diff_us(time_start[cpu], time_end[cpu]);
+ max_frequency = tsc_diff / time_diff;
+ }
+
+@@ -226,9 +226,8 @@ static int mperf_start(void)
+ {
+ int cpu;
+
+- clock_gettime(CLOCK_REALTIME, &time_start);
+-
+ for (cpu = 0; cpu < cpu_count; cpu++) {
++ clock_gettime(CLOCK_REALTIME, &time_start[cpu]);
+ mperf_get_tsc(&tsc_at_measure_start[cpu]);
+ mperf_init_stats(cpu);
+ }
+@@ -243,9 +242,9 @@ static int mperf_stop(void)
+ for (cpu = 0; cpu < cpu_count; cpu++) {
+ mperf_measure_stats(cpu);
+ mperf_get_tsc(&tsc_at_measure_end[cpu]);
++ clock_gettime(CLOCK_REALTIME, &time_end[cpu]);
+ }
+
+- clock_gettime(CLOCK_REALTIME, &time_end);
+ return 0;
+ }
+
+@@ -349,6 +348,8 @@ struct cpuidle_monitor *mperf_register(void)
+ aperf_current_count = calloc(cpu_count, sizeof(unsigned long long));
+ tsc_at_measure_start = calloc(cpu_count, sizeof(unsigned long long));
+ tsc_at_measure_end = calloc(cpu_count, sizeof(unsigned long long));
++ time_start = calloc(cpu_count, sizeof(struct timespec));
++ time_end = calloc(cpu_count, sizeof(struct timespec));
+ mperf_monitor.name_len = strlen(mperf_monitor.name);
+ return &mperf_monitor;
+ }
+@@ -361,6 +362,8 @@ void mperf_unregister(void)
+ free(aperf_current_count);
+ free(tsc_at_measure_start);
+ free(tsc_at_measure_end);
++ free(time_start);
++ free(time_end);
+ free(is_valid);
+ }
+
+diff --git a/tools/power/x86/turbostat/turbostat.8 b/tools/power/x86/turbostat/turbostat.8
+index 067717bce1d4ab..56c7ff6efcdabc 100644
+--- a/tools/power/x86/turbostat/turbostat.8
++++ b/tools/power/x86/turbostat/turbostat.8
+@@ -33,6 +33,9 @@ name as necessary to disambiguate it from others is necessary. Note that option
+ msr0xXXX is a hex offset, eg. msr0x10
+ /sys/path... is an absolute path to a sysfs attribute
+ <device> is a perf device from /sys/bus/event_source/devices/<device> eg. cstate_core
++ On Intel hybrid platforms, instead of one "cpu" perf device there are two, "cpu_core" and "cpu_atom" devices for P and E cores respectively.
++ In this case, turbostat allows the user to specify the "cpu" device; it automatically detects the type of each CPU and translates the name to "cpu_core" or "cpu_atom" accordingly.
++ For a complete example see "ADD PERF COUNTER EXAMPLE #2 (using virtual "cpu" device)".
+ <event> is a perf event for given device from /sys/bus/event_source/devices/<device>/events/<event> eg. c1-residency
+ perf/cstate_core/c1-residency would then use /sys/bus/event_source/devices/cstate_core/events/c1-residency
+
+@@ -387,6 +390,28 @@ CPU pCPU%c1 CPU%c1
+
+ .fi
+
++.SH ADD PERF COUNTER EXAMPLE #2 (using virtual cpu device)
++Here we run on a hybrid Raptor Lake platform.
++We limit turbostat to show output for just cpu0 (pcore) and cpu12 (ecore).
++We add a counter showing the number of L3 cache misses, using the virtual "cpu" device,
++labeling it with the column header, "VCMISS".
++We add a counter showing the number of L3 cache misses, using the "cpu_core" device,
++labeling it with the column header, "PCMISS". This will fail on ecore cpu12.
++We add a counter showing the number of L3 cache misses, using the "cpu_atom" device,
++labeling it with the column header, "ECMISS". This will fail on pcore cpu0.
++We display it only once, after the conclusion of a 0.1 second sleep.
++.nf
++sudo ./turbostat --quiet --cpu 0,12 --show CPU --add perf/cpu/cache-misses,cpu,delta,raw,VCMISS --add perf/cpu_core/cache-misses,cpu,delta,raw,PCMISS --add perf/cpu_atom/cache-misses,cpu,delta,raw,ECMISS sleep .1
++turbostat: added_perf_counters_init_: perf/cpu_atom/cache-misses: failed to open counter on cpu0
++turbostat: added_perf_counters_init_: perf/cpu_core/cache-misses: failed to open counter on cpu12
++0.104630 sec
++CPU ECMISS PCMISS VCMISS
++- 0x0000000000000000 0x0000000000000000 0x0000000000000000
++0 0x0000000000000000 0x0000000000007951 0x0000000000007796
++12 0x000000000001137a 0x0000000000000000 0x0000000000011392
++
++.fi
++
+ .SH ADD PMT COUNTER EXAMPLE
+ Here we limit turbostat to showing just the CPU number 0.
+ We add two counters, showing crystal clock count and the DC6 residency.
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index a5ebee8b23bbe3..235e82fe7d0a56 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -31,6 +31,9 @@
+ )
+ // end copied section
+
++#define CPUID_LEAF_MODEL_ID 0x1A
++#define CPUID_LEAF_MODEL_ID_CORE_TYPE_SHIFT 24
++
+ #define X86_VENDOR_INTEL 0
+
+ #include INTEL_FAMILY_HEADER
+@@ -89,6 +92,11 @@
+ #define PERF_DEV_NAME_BYTES 32
+ #define PERF_EVT_NAME_BYTES 32
+
++#define INTEL_ECORE_TYPE 0x20
++#define INTEL_PCORE_TYPE 0x40
++
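++/* Round n up to the next multiple of the 4 KiB page size. */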
++#define ROUND_UP_TO_PAGE_SIZE(n) (((n) + 0x1000UL-1UL) & ~(0x1000UL-1UL))
++
+ enum counter_scope { SCOPE_CPU, SCOPE_CORE, SCOPE_PACKAGE };
+ enum counter_type { COUNTER_ITEMS, COUNTER_CYCLES, COUNTER_SECONDS, COUNTER_USEC, COUNTER_K2M };
+ enum counter_format { FORMAT_RAW, FORMAT_DELTA, FORMAT_PERCENT, FORMAT_AVERAGE };
+@@ -1079,8 +1087,8 @@ int backwards_count;
+ char *progname;
+
+ #define CPU_SUBSET_MAXCPUS 1024 /* need to use before probe... */
+-cpu_set_t *cpu_present_set, *cpu_effective_set, *cpu_allowed_set, *cpu_affinity_set, *cpu_subset;
+-size_t cpu_present_setsize, cpu_effective_setsize, cpu_allowed_setsize, cpu_affinity_setsize, cpu_subset_size;
++cpu_set_t *cpu_present_set, *cpu_possible_set, *cpu_effective_set, *cpu_allowed_set, *cpu_affinity_set, *cpu_subset;
++size_t cpu_present_setsize, cpu_possible_setsize, cpu_effective_setsize, cpu_allowed_setsize, cpu_affinity_setsize, cpu_subset_size;
+ #define MAX_ADDED_THREAD_COUNTERS 24
+ #define MAX_ADDED_CORE_COUNTERS 8
+ #define MAX_ADDED_PACKAGE_COUNTERS 16
+@@ -1848,6 +1856,7 @@ struct cpu_topology {
+ int logical_node_id; /* 0-based count within the package */
+ int physical_core_id;
+ int thread_id;
++ int type;
+ cpu_set_t *put_ids; /* Processing Unit/Thread IDs */
+ } *cpus;
+
+@@ -5659,6 +5668,32 @@ int init_thread_id(int cpu)
+ return 0;
+ }
+
++int set_my_cpu_type(void)
++{
++ unsigned int eax, ebx, ecx, edx;
++ unsigned int max_level;
++
++ __cpuid(0, max_level, ebx, ecx, edx);
++
++ if (max_level < CPUID_LEAF_MODEL_ID)
++ return 0;
++
++ __cpuid(CPUID_LEAF_MODEL_ID, eax, ebx, ecx, edx);
++
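++	/* CPUID leaf 0x1A: EAX bits 31:24 hold the core type: */
++	/* 0x20 for Atom (E-core), 0x40 for Core (P-core). */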
++ return (eax >> CPUID_LEAF_MODEL_ID_CORE_TYPE_SHIFT);
++}
++
++int set_cpu_hybrid_type(int cpu)
++{
++ if (cpu_migrate(cpu))
++ return -1;
++
++ int type = set_my_cpu_type();
++
++ cpus[cpu].type = type;
++ return 0;
++}
++
+ /*
+ * snapshot_proc_interrupts()
+ *
+@@ -8188,6 +8223,33 @@ int dir_filter(const struct dirent *dirp)
+ return 0;
+ }
+
++char *possible_file = "/sys/devices/system/cpu/possible";
++char possible_buf[1024];
++
++int initialize_cpu_possible_set(void)
++{
++ FILE *fp;
++
++ fp = fopen(possible_file, "r");
++ if (!fp) {
++ warn("open %s", possible_file);
++ return -1;
++ }
++ if (fread(possible_buf, sizeof(char), 1024, fp) == 0) {
++ warn("read %s", possible_file);
++ goto err;
++ }
++ if (parse_cpu_str(possible_buf, cpu_possible_set, cpu_possible_setsize)) {
++ warnx("%s: cpu str malformat %s\n", possible_file, cpu_effective_str);
++ goto err;
++ }
++ return 0;
++
++err:
++ fclose(fp);
++ return -1;
++}
++
+ void topology_probe(bool startup)
+ {
+ int i;
+@@ -8219,6 +8281,16 @@ void topology_probe(bool startup)
+ CPU_ZERO_S(cpu_present_setsize, cpu_present_set);
+ for_all_proc_cpus(mark_cpu_present);
+
++ /*
++ * Allocate and initialize cpu_possible_set
++ */
++ cpu_possible_set = CPU_ALLOC((topo.max_cpu_num + 1));
++ if (cpu_possible_set == NULL)
++ err(3, "CPU_ALLOC");
++ cpu_possible_setsize = CPU_ALLOC_SIZE((topo.max_cpu_num + 1));
++ CPU_ZERO_S(cpu_possible_setsize, cpu_possible_set);
++ initialize_cpu_possible_set();
++
+ /*
+ * Allocate and initialize cpu_effective_set
+ */
+@@ -8287,6 +8359,8 @@ void topology_probe(bool startup)
+
+ for_all_proc_cpus(init_thread_id);
+
++ for_all_proc_cpus(set_cpu_hybrid_type);
++
+ /*
+ * For online cpus
+ * find max_core_id, max_package_id
+@@ -8551,6 +8625,35 @@ void check_perf_access(void)
+ bic_enabled &= ~BIC_IPC;
+ }
+
++bool perf_has_hybrid_devices(void)
++{
++ /*
++ * 0: unknown
++ * 1: has separate perf device for p and e core
++ * -1: doesn't have separate perf device for p and e core
++ */
++ static int cached;
++
++ if (cached > 0)
++ return true;
++
++ if (cached < 0)
++ return false;
++
++ if (access("/sys/bus/event_source/devices/cpu_core", F_OK)) {
++ cached = -1;
++ return false;
++ }
++
++ if (access("/sys/bus/event_source/devices/cpu_atom", F_OK)) {
++ cached = -1;
++ return false;
++ }
++
++ cached = 1;
++ return true;
++}
++
+ int added_perf_counters_init_(struct perf_counter_info *pinfo)
+ {
+ size_t num_domains = 0;
+@@ -8607,29 +8710,56 @@ int added_perf_counters_init_(struct perf_counter_info *pinfo)
+ if (domain_visited[next_domain])
+ continue;
+
+- perf_type = read_perf_type(pinfo->device);
++ /*
++ * Intel hybrid platforms expose different perf devices for P and E cores.
++ * Instead of one "/sys/bus/event_source/devices/cpu" device, there are
++ * "/sys/bus/event_source/devices/{cpu_core,cpu_atom}" devices.
++ *
++ * This makes things more complicated for the user, because most of the counters
++ * are available on both devices and would otherwise have to be handled manually.
++ *
++ * The code below allows the user to use the old "cpu" name, which is translated accordingly.
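++ *
++ * For example, with "--add perf/cpu/cache-misses", the counter is opened as
++ * perf/cpu_core/cache-misses on P-cores and as perf/cpu_atom/cache-misses on
++ * E-cores.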
++ */
++ const char *perf_device = pinfo->device;
++
++ if (strcmp(perf_device, "cpu") == 0 && perf_has_hybrid_devices()) {
++ switch (cpus[cpu].type) {
++ case INTEL_PCORE_TYPE:
++ perf_device = "cpu_core";
++ break;
++
++ case INTEL_ECORE_TYPE:
++ perf_device = "cpu_atom";
++ break;
++
++ default: /* Don't change, we will probably fail and report a problem soon. */
++ break;
++ }
++ }
++
++ perf_type = read_perf_type(perf_device);
+ if (perf_type == (unsigned int)-1) {
+ warnx("%s: perf/%s/%s: failed to read %s",
+- __func__, pinfo->device, pinfo->event, "type");
++ __func__, perf_device, pinfo->event, "type");
+ continue;
+ }
+
+- perf_config = read_perf_config(pinfo->device, pinfo->event);
++ perf_config = read_perf_config(perf_device, pinfo->event);
+ if (perf_config == (unsigned int)-1) {
+ warnx("%s: perf/%s/%s: failed to read %s",
+- __func__, pinfo->device, pinfo->event, "config");
++ __func__, perf_device, pinfo->event, "config");
+ continue;
+ }
+
+ /* Scale is not required, some counters just don't have it. */
+- perf_scale = read_perf_scale(pinfo->device, pinfo->event);
++ perf_scale = read_perf_scale(perf_device, pinfo->event);
+ if (perf_scale == 0.0)
+ perf_scale = 1.0;
+
+ fd_perf = open_perf_counter(cpu, perf_type, perf_config, -1, 0);
+ if (fd_perf == -1) {
+ warnx("%s: perf/%s/%s: failed to open counter on cpu%d",
+- __func__, pinfo->device, pinfo->event, cpu);
++ __func__, perf_device, pinfo->event, cpu);
+ continue;
+ }
+
+@@ -8639,7 +8769,7 @@ int added_perf_counters_init_(struct perf_counter_info *pinfo)
+
+ if (debug)
+ fprintf(stderr, "Add perf/%s/%s cpu%d: %d\n",
+- pinfo->device, pinfo->event, cpu, pinfo->fd_perf_per_domain[next_domain]);
++ perf_device, pinfo->event, cpu, pinfo->fd_perf_per_domain[next_domain]);
+ }
+
+ pinfo = pinfo->next;
+@@ -8762,7 +8892,7 @@ struct pmt_mmio *pmt_mmio_open(unsigned int target_guid)
+ if (fd_pmt == -1)
+ goto loop_cleanup_and_break;
+
+- mmap_size = (size + 0x1000UL) & (~0x1000UL);
++ mmap_size = ROUND_UP_TO_PAGE_SIZE(size);
+ mmio = mmap(0, mmap_size, PROT_READ, MAP_SHARED, fd_pmt, 0);
+ if (mmio != MAP_FAILED) {
+
+@@ -9001,6 +9131,18 @@ void turbostat_init()
+ }
+ }
+
++void affinitize_child(void)
++{
++ /* Prefer cpu_possible_set, if available */
++ if (sched_setaffinity(0, cpu_possible_setsize, cpu_possible_set)) {
++ warn("sched_setaffinity cpu_possible_set");
++
++ /* Otherwise, allow child to run on same cpu set as turbostat */
++ if (sched_setaffinity(0, cpu_allowed_setsize, cpu_allowed_set))
++ warn("sched_setaffinity cpu_allowed_set");
++ }
++}
++
+ int fork_it(char **argv)
+ {
+ pid_t child_pid;
+@@ -9016,6 +9158,7 @@ int fork_it(char **argv)
+ child_pid = fork();
+ if (!child_pid) {
+ /* child */
++ affinitize_child();
+ execvp(argv[0], argv);
+ err(errno, "exec %s", argv[0]);
+ } else {
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index dacad94e2be42a..c76ad0be54e2ed 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -2419,6 +2419,11 @@ sub get_version {
+ return if ($have_version);
+ doprint "$make kernelrelease ... ";
+ $version = `$make -s kernelrelease | tail -1`;
++ if (!length($version)) {
++ run_command "$make allnoconfig" or return 0;
++ doprint "$make kernelrelease ... ";
++ $version = `$make -s kernelrelease | tail -1`;
++ }
+ chomp($version);
+ doprint "$version\n";
+ $have_version = 1;
+@@ -2960,8 +2965,6 @@ sub run_bisect_test {
+
+ my $failed = 0;
+ my $result;
+- my $output;
+- my $ret;
+
+ $in_bisect = 1;
+
+diff --git a/tools/testing/selftests/bpf/Makefile b/tools/testing/selftests/bpf/Makefile
+index 43a02931847854..6fc29996ae2938 100644
+--- a/tools/testing/selftests/bpf/Makefile
++++ b/tools/testing/selftests/bpf/Makefile
+@@ -193,9 +193,9 @@ ifeq ($(shell expr $(MAKE_VERSION) \>= 4.4), 1)
+ $(let OUTPUT,$(OUTPUT)/,\
+ $(eval include ../../../build/Makefile.feature))
+ else
+-OUTPUT := $(OUTPUT)/
++override OUTPUT := $(OUTPUT)/
+ $(eval include ../../../build/Makefile.feature)
+-OUTPUT := $(patsubst %/,%,$(OUTPUT))
++override OUTPUT := $(patsubst %/,%,$(OUTPUT))
+ endif
+ endif
+
+diff --git a/tools/testing/selftests/bpf/prog_tests/btf_distill.c b/tools/testing/selftests/bpf/prog_tests/btf_distill.c
+index ca84726d5ac1b9..b72b966df77b90 100644
+--- a/tools/testing/selftests/bpf/prog_tests/btf_distill.c
++++ b/tools/testing/selftests/bpf/prog_tests/btf_distill.c
+@@ -385,7 +385,7 @@ static void test_distilled_base_missing_err(void)
+ "[2] INT 'int' size=8 bits_offset=0 nr_bits=64 encoding=SIGNED");
+ btf5 = btf__new_empty();
+ if (!ASSERT_OK_PTR(btf5, "empty_reloc_btf"))
+- return;
++ goto cleanup;
+ btf__add_int(btf5, "int", 4, BTF_INT_SIGNED); /* [1] int */
+ VALIDATE_RAW_BTF(
+ btf5,
+@@ -478,7 +478,7 @@ static void test_distilled_base_multi_err2(void)
+ "[1] INT 'int' size=4 bits_offset=0 nr_bits=32 encoding=SIGNED");
+ btf5 = btf__new_empty();
+ if (!ASSERT_OK_PTR(btf5, "empty_reloc_btf"))
+- return;
++ goto cleanup;
+ btf__add_int(btf5, "int", 4, BTF_INT_SIGNED); /* [1] int */
+ btf__add_int(btf5, "int", 4, BTF_INT_SIGNED); /* [2] int */
+ VALIDATE_RAW_BTF(
+diff --git a/tools/testing/selftests/bpf/prog_tests/fill_link_info.c b/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
+index d50cbd8040d45f..e59af2aa660166 100644
+--- a/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
++++ b/tools/testing/selftests/bpf/prog_tests/fill_link_info.c
+@@ -171,6 +171,10 @@ static void test_kprobe_fill_link_info(struct test_fill_link_info *skel,
+ /* See also arch_adjust_kprobe_addr(). */
+ if (skel->kconfig->CONFIG_X86_KERNEL_IBT)
+ entry_offset = 4;
++ if (skel->kconfig->CONFIG_PPC64 &&
++ skel->kconfig->CONFIG_KPROBES_ON_FTRACE &&
++ !skel->kconfig->CONFIG_PPC_FTRACE_OUT_OF_LINE)
++ entry_offset = 4;
+ err = verify_perf_link_info(link_fd, type, kprobe_addr, 0, entry_offset);
+ ASSERT_OK(err, "verify_perf_link_info");
+ } else {
+diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+index 21c5a37846adea..40f22454cf05b0 100644
+--- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
++++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+@@ -1496,8 +1496,8 @@ static void test_tailcall_bpf2bpf_hierarchy_3(void)
+ RUN_TESTS(tailcall_bpf2bpf_hierarchy3);
+ }
+
+-/* test_tailcall_freplace checks that the attached freplace prog is OK to
+- * update the prog_array map.
++/* test_tailcall_freplace checks that the freplace prog fails to update the
++ * prog_array map, no matter whether the freplace prog attaches to its target.
+ */
+ static void test_tailcall_freplace(void)
+ {
+@@ -1505,7 +1505,7 @@ static void test_tailcall_freplace(void)
+ struct bpf_link *freplace_link = NULL;
+ struct bpf_program *freplace_prog;
+ struct tc_bpf2bpf *tc_skel = NULL;
+- int prog_fd, map_fd;
++ int prog_fd, tc_prog_fd, map_fd;
+ char buff[128] = {};
+ int err, key;
+
+@@ -1523,9 +1523,10 @@ static void test_tailcall_freplace(void)
+ if (!ASSERT_OK_PTR(tc_skel, "tc_bpf2bpf__open_and_load"))
+ goto out;
+
+- prog_fd = bpf_program__fd(tc_skel->progs.entry_tc);
++ tc_prog_fd = bpf_program__fd(tc_skel->progs.entry_tc);
+ freplace_prog = freplace_skel->progs.entry_freplace;
+- err = bpf_program__set_attach_target(freplace_prog, prog_fd, "subprog");
++ err = bpf_program__set_attach_target(freplace_prog, tc_prog_fd,
++ "subprog_tc");
+ if (!ASSERT_OK(err, "set_attach_target"))
+ goto out;
+
+@@ -1533,27 +1534,116 @@ static void test_tailcall_freplace(void)
+ if (!ASSERT_OK(err, "tailcall_freplace__load"))
+ goto out;
+
+- freplace_link = bpf_program__attach_freplace(freplace_prog, prog_fd,
+- "subprog");
++ map_fd = bpf_map__fd(freplace_skel->maps.jmp_table);
++ prog_fd = bpf_program__fd(freplace_prog);
++ key = 0;
++ err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
++ ASSERT_ERR(err, "update jmp_table failure");
++
++ freplace_link = bpf_program__attach_freplace(freplace_prog, tc_prog_fd,
++ "subprog_tc");
+ if (!ASSERT_OK_PTR(freplace_link, "attach_freplace"))
+ goto out;
+
+- map_fd = bpf_map__fd(freplace_skel->maps.jmp_table);
+- prog_fd = bpf_program__fd(freplace_prog);
++ err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
++ ASSERT_ERR(err, "update jmp_table failure");
++
++out:
++ bpf_link__destroy(freplace_link);
++ tailcall_freplace__destroy(freplace_skel);
++ tc_bpf2bpf__destroy(tc_skel);
++}
++
++/* test_tailcall_bpf2bpf_freplace checks the failure that fails to attach a tail
++ * callee prog with freplace prog or fails to update an extended prog to
++ * prog_array map.
++ */
++static void test_tailcall_bpf2bpf_freplace(void)
++{
++ struct tailcall_freplace *freplace_skel = NULL;
++ struct bpf_link *freplace_link = NULL;
++ struct tc_bpf2bpf *tc_skel = NULL;
++ char buff[128] = {};
++ int prog_fd, map_fd;
++ int err, key;
++
++ LIBBPF_OPTS(bpf_test_run_opts, topts,
++ .data_in = buff,
++ .data_size_in = sizeof(buff),
++ .repeat = 1,
++ );
++
++ tc_skel = tc_bpf2bpf__open_and_load();
++ if (!ASSERT_OK_PTR(tc_skel, "tc_bpf2bpf__open_and_load"))
++ goto out;
++
++ prog_fd = bpf_program__fd(tc_skel->progs.entry_tc);
++ freplace_skel = tailcall_freplace__open();
++ if (!ASSERT_OK_PTR(freplace_skel, "tailcall_freplace__open"))
++ goto out;
++
++ err = bpf_program__set_attach_target(freplace_skel->progs.entry_freplace,
++ prog_fd, "subprog_tc");
++ if (!ASSERT_OK(err, "set_attach_target"))
++ goto out;
++
++ err = tailcall_freplace__load(freplace_skel);
++ if (!ASSERT_OK(err, "tailcall_freplace__load"))
++ goto out;
++
++ /* OK to attach then detach freplace prog. */
++
++ freplace_link = bpf_program__attach_freplace(freplace_skel->progs.entry_freplace,
++ prog_fd, "subprog_tc");
++ if (!ASSERT_OK_PTR(freplace_link, "attach_freplace"))
++ goto out;
++
++ err = bpf_link__destroy(freplace_link);
++ if (!ASSERT_OK(err, "destroy link"))
++ goto out;
++
++ /* OK to update prog_array map then delete element from the map. */
++
+ key = 0;
++ map_fd = bpf_map__fd(freplace_skel->maps.jmp_table);
+ err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
+ if (!ASSERT_OK(err, "update jmp_table"))
+ goto out;
+
+- prog_fd = bpf_program__fd(tc_skel->progs.entry_tc);
+- err = bpf_prog_test_run_opts(prog_fd, &topts);
+- ASSERT_OK(err, "test_run");
+- ASSERT_EQ(topts.retval, 34, "test_run retval");
++ err = bpf_map_delete_elem(map_fd, &key);
++ if (!ASSERT_OK(err, "delete_elem from jmp_table"))
++ goto out;
++
++ /* Fail to attach a tail callee prog with freplace prog. */
++
++ err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
++ if (!ASSERT_OK(err, "update jmp_table"))
++ goto out;
++
++ freplace_link = bpf_program__attach_freplace(freplace_skel->progs.entry_freplace,
++ prog_fd, "subprog_tc");
++ if (!ASSERT_ERR_PTR(freplace_link, "attach_freplace failure"))
++ goto out;
++
++ err = bpf_map_delete_elem(map_fd, &key);
++ if (!ASSERT_OK(err, "delete_elem from jmp_table"))
++ goto out;
++
++ /* Fail to update an extended prog to prog_array map. */
++
++ freplace_link = bpf_program__attach_freplace(freplace_skel->progs.entry_freplace,
++ prog_fd, "subprog_tc");
++ if (!ASSERT_OK_PTR(freplace_link, "attach_freplace"))
++ goto out;
++
++ err = bpf_map_update_elem(map_fd, &key, &prog_fd, BPF_ANY);
++ if (!ASSERT_ERR(err, "update jmp_table failure"))
++ goto out;
+
+ out:
+ bpf_link__destroy(freplace_link);
+- tc_bpf2bpf__destroy(tc_skel);
+ tailcall_freplace__destroy(freplace_skel);
++ tc_bpf2bpf__destroy(tc_skel);
+ }
+
+ void test_tailcalls(void)
+@@ -1606,4 +1696,6 @@ void test_tailcalls(void)
+ test_tailcall_bpf2bpf_hierarchy_3();
+ if (test__start_subtest("tailcall_freplace"))
+ test_tailcall_freplace();
++ if (test__start_subtest("tailcall_bpf2bpf_freplace"))
++ test_tailcall_bpf2bpf_freplace();
+ }
+diff --git a/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c b/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c
+index 79f5087dade224..fe6249d99b315b 100644
+--- a/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c
++++ b/tools/testing/selftests/bpf/progs/tc_bpf2bpf.c
+@@ -5,10 +5,11 @@
+ #include "bpf_misc.h"
+
+ __noinline
+-int subprog(struct __sk_buff *skb)
++int subprog_tc(struct __sk_buff *skb)
+ {
+ int ret = 1;
+
++ __sink(skb);
+ __sink(ret);
+ /* let verifier know that 'subprog_tc' can change pointers to skb->data */
+ bpf_skb_change_proto(skb, 0, 0);
+@@ -18,7 +19,7 @@ int subprog(struct __sk_buff *skb)
+ SEC("tc")
+ int entry_tc(struct __sk_buff *skb)
+ {
+- return subprog(skb);
++ return subprog_tc(skb);
+ }
+
+ char __license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/bpf/progs/test_fill_link_info.c b/tools/testing/selftests/bpf/progs/test_fill_link_info.c
+index 6afa834756e9fd..fac33a14f2009c 100644
+--- a/tools/testing/selftests/bpf/progs/test_fill_link_info.c
++++ b/tools/testing/selftests/bpf/progs/test_fill_link_info.c
+@@ -6,13 +6,20 @@
+ #include <stdbool.h>
+
+ extern bool CONFIG_X86_KERNEL_IBT __kconfig __weak;
++extern bool CONFIG_PPC_FTRACE_OUT_OF_LINE __kconfig __weak;
++extern bool CONFIG_KPROBES_ON_FTRACE __kconfig __weak;
++extern bool CONFIG_PPC64 __kconfig __weak;
+
+-/* This function is here to have CONFIG_X86_KERNEL_IBT
+- * used and added to object BTF.
++/* This function is here to have CONFIG_X86_KERNEL_IBT,
++ * CONFIG_PPC_FTRACE_OUT_OF_LINE, CONFIG_KPROBES_ON_FTRACE,
++ * CONFIG_PPC64 used and added to object BTF.
+ */
+ int unused(void)
+ {
+- return CONFIG_X86_KERNEL_IBT ? 0 : 1;
++ return CONFIG_X86_KERNEL_IBT ||
++ CONFIG_PPC_FTRACE_OUT_OF_LINE ||
++ CONFIG_KPROBES_ON_FTRACE ||
++ CONFIG_PPC64 ? 0 : 1;
+ }
+
+ SEC("kprobe")
+diff --git a/tools/testing/selftests/bpf/test_tc_tunnel.sh b/tools/testing/selftests/bpf/test_tc_tunnel.sh
+index 7989ec60845455..cb55a908bb0d70 100755
+--- a/tools/testing/selftests/bpf/test_tc_tunnel.sh
++++ b/tools/testing/selftests/bpf/test_tc_tunnel.sh
+@@ -305,6 +305,7 @@ else
+ client_connect
+ verify_data
+ server_listen
++ wait_for_port ${port} ${netcat_opt}
+ fi
+
+ # serverside, use BPF for decap
+diff --git a/tools/testing/selftests/bpf/xdp_hw_metadata.c b/tools/testing/selftests/bpf/xdp_hw_metadata.c
+index 6f9956eed797f3..ad6c08dfd6c8cc 100644
+--- a/tools/testing/selftests/bpf/xdp_hw_metadata.c
++++ b/tools/testing/selftests/bpf/xdp_hw_metadata.c
+@@ -79,7 +79,7 @@ static int open_xsk(int ifindex, struct xsk *xsk, __u32 queue_id)
+ .fill_size = XSK_RING_PROD__DEFAULT_NUM_DESCS,
+ .comp_size = XSK_RING_CONS__DEFAULT_NUM_DESCS,
+ .frame_size = XSK_UMEM__DEFAULT_FRAME_SIZE,
+- .flags = XSK_UMEM__DEFAULT_FLAGS,
++ .flags = XDP_UMEM_TX_METADATA_LEN,
+ .tx_metadata_len = sizeof(struct xsk_tx_metadata),
+ };
+ __u32 idx = 0;
+diff --git a/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh b/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh
+index 384cfa3d38a6cd..92c2f0376c081d 100755
+--- a/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh
++++ b/tools/testing/selftests/drivers/net/netdevsim/udp_tunnel_nic.sh
+@@ -142,7 +142,7 @@ function pre_ethtool {
+ }
+
+ function check_table {
+- local path=$NSIM_DEV_DFS/ports/$port/udp_ports_table$1
++ local path=$NSIM_DEV_DFS/ports/$port/udp_ports/table$1
+ local -n expected=$2
+ local last=$3
+
+@@ -212,7 +212,7 @@ function check_tables {
+ }
+
+ function print_table {
+- local path=$NSIM_DEV_DFS/ports/$port/udp_ports_table$1
++ local path=$NSIM_DEV_DFS/ports/$port/udp_ports/table$1
+ read -a have < $path
+
+ tree $NSIM_DEV_DFS/
+@@ -641,7 +641,7 @@ for port in 0 1; do
+ NSIM_NETDEV=`get_netdev_name old_netdevs`
+ ip link set dev $NSIM_NETDEV up
+
+- echo 110 > $NSIM_DEV_DFS/ports/$port/udp_ports_inject_error
++ echo 110 > $NSIM_DEV_DFS/ports/$port/udp_ports/inject_error
+
+ msg="1 - create VxLANs v6"
+ exp0=( 0 0 0 0 )
+@@ -663,7 +663,7 @@ for port in 0 1; do
+ new_geneve gnv0 20000
+
+ msg="2 - destroy GENEVE"
+- echo 2 > $NSIM_DEV_DFS/ports/$port/udp_ports_inject_error
++ echo 2 > $NSIM_DEV_DFS/ports/$port/udp_ports/inject_error
+ exp1=( `mke 20000 2` 0 0 0 )
+ del_dev gnv0
+
+@@ -764,7 +764,7 @@ for port in 0 1; do
+ msg="create VxLANs v4"
+ new_vxlan vxlan0 10000 $NSIM_NETDEV
+
+- echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
++ echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
+ check_tables
+
+ msg="NIC device goes down"
+@@ -775,7 +775,7 @@ for port in 0 1; do
+ fi
+ check_tables
+
+- echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
++ echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
+ check_tables
+
+ msg="NIC device goes up again"
+@@ -789,7 +789,7 @@ for port in 0 1; do
+ del_dev vxlan0
+ check_tables
+
+- echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
++ echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
+ check_tables
+
+ msg="destroy NIC"
+@@ -896,7 +896,7 @@ msg="vacate VxLAN in overflow table"
+ exp0=( `mke 10000 1` `mke 10004 1` 0 `mke 10003 1` )
+ del_dev vxlan2
+
+-echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports_reset
++echo 1 > $NSIM_DEV_DFS/ports/$port/udp_ports/reset
+ check_tables
+
+ msg="tunnels destroyed 2"
+diff --git a/tools/testing/selftests/ftrace/test.d/00basic/mount_options.tc b/tools/testing/selftests/ftrace/test.d/00basic/mount_options.tc
+index 35e8d47d607259..8a7ce647a60d1c 100644
+--- a/tools/testing/selftests/ftrace/test.d/00basic/mount_options.tc
++++ b/tools/testing/selftests/ftrace/test.d/00basic/mount_options.tc
+@@ -15,11 +15,11 @@ find_alternate_gid() {
+ tac /etc/group | grep -v ":$original_gid:" | head -1 | cut -d: -f3
+ }
+
+-mount_tracefs_with_options() {
++remount_tracefs_with_options() {
+ local mount_point="$1"
+ local options="$2"
+
+- mount -t tracefs -o "$options" nodev "$mount_point"
++ mount -t tracefs -o "remount,$options" nodev "$mount_point"
+
+ setup
+ }
+@@ -81,7 +81,7 @@ test_gid_mount_option() {
+
+ # Unmount existing tracefs instance and mount with new GID
+ unmount_tracefs "$mount_point"
+- mount_tracefs_with_options "$mount_point" "$new_options"
++ remount_tracefs_with_options "$mount_point" "$new_options"
+
+ check_gid "$mount_point" "$other_group"
+
+@@ -92,7 +92,7 @@ test_gid_mount_option() {
+
+ # Unmount and remount with the original GID
+ unmount_tracefs "$mount_point"
+- mount_tracefs_with_options "$mount_point" "$mount_options"
++ remount_tracefs_with_options "$mount_point" "$mount_options"
+ check_gid "$mount_point" "$original_group"
+ }
+
+diff --git a/tools/testing/selftests/kselftest/ktap_helpers.sh b/tools/testing/selftests/kselftest/ktap_helpers.sh
+index 79a125eb24c2e8..14e7f3ec3f84c3 100644
+--- a/tools/testing/selftests/kselftest/ktap_helpers.sh
++++ b/tools/testing/selftests/kselftest/ktap_helpers.sh
+@@ -40,7 +40,7 @@ ktap_skip_all() {
+ __ktap_test() {
+ result="$1"
+ description="$2"
+- directive="$3" # optional
++ directive="${3:-}" # optional
+
+ local directive_str=
+ [ ! -z "$directive" ] && directive_str="# $directive"
+diff --git a/tools/testing/selftests/kselftest_harness.h b/tools/testing/selftests/kselftest_harness.h
+index a5a72415e37b06..666c9fde76da9d 100644
+--- a/tools/testing/selftests/kselftest_harness.h
++++ b/tools/testing/selftests/kselftest_harness.h
+@@ -760,33 +760,33 @@
+ /* Report with actual signedness to avoid weird output. */ \
+ switch (is_signed_type(__exp) * 2 + is_signed_type(__seen)) { \
+ case 0: { \
+- unsigned long long __exp_print = (uintptr_t)__exp; \
+- unsigned long long __seen_print = (uintptr_t)__seen; \
+- __TH_LOG("Expected %s (%llu) %s %s (%llu)", \
++ uintmax_t __exp_print = (uintmax_t)__exp; \
++ uintmax_t __seen_print = (uintmax_t)__seen; \
++ __TH_LOG("Expected %s (%ju) %s %s (%ju)", \
+ _expected_str, __exp_print, #_t, \
+ _seen_str, __seen_print); \
+ break; \
+ } \
+ case 1: { \
+- unsigned long long __exp_print = (uintptr_t)__exp; \
+- long long __seen_print = (intptr_t)__seen; \
+- __TH_LOG("Expected %s (%llu) %s %s (%lld)", \
++ uintmax_t __exp_print = (uintmax_t)__exp; \
++ intmax_t __seen_print = (intmax_t)__seen; \
++ __TH_LOG("Expected %s (%ju) %s %s (%jd)", \
+ _expected_str, __exp_print, #_t, \
+ _seen_str, __seen_print); \
+ break; \
+ } \
+ case 2: { \
+- long long __exp_print = (intptr_t)__exp; \
+- unsigned long long __seen_print = (uintptr_t)__seen; \
+- __TH_LOG("Expected %s (%lld) %s %s (%llu)", \
++ intmax_t __exp_print = (intmax_t)__exp; \
++ uintmax_t __seen_print = (uintmax_t)__seen; \
++ __TH_LOG("Expected %s (%jd) %s %s (%ju)", \
+ _expected_str, __exp_print, #_t, \
+ _seen_str, __seen_print); \
+ break; \
+ } \
+ case 3: { \
+- long long __exp_print = (intptr_t)__exp; \
+- long long __seen_print = (intptr_t)__seen; \
+- __TH_LOG("Expected %s (%lld) %s %s (%lld)", \
++ intmax_t __exp_print = (intmax_t)__exp; \
++ intmax_t __seen_print = (intmax_t)__seen; \
++ __TH_LOG("Expected %s (%jd) %s %s (%jd)", \
+ _expected_str, __exp_print, #_t, \
+ _seen_str, __seen_print); \
+ break; \
+diff --git a/tools/testing/selftests/landlock/Makefile b/tools/testing/selftests/landlock/Makefile
+index 348e2dbdb4e0b9..480f13e77fcc4b 100644
+--- a/tools/testing/selftests/landlock/Makefile
++++ b/tools/testing/selftests/landlock/Makefile
+@@ -13,11 +13,11 @@ TEST_GEN_PROGS := $(src_test:.c=)
+ TEST_GEN_PROGS_EXTENDED := true
+
+ # Short targets:
+-$(TEST_GEN_PROGS): LDLIBS += -lcap
++$(TEST_GEN_PROGS): LDLIBS += -lcap -lpthread
+ $(TEST_GEN_PROGS_EXTENDED): LDFLAGS += -static
+
+ include ../lib.mk
+
+ # Targets with $(OUTPUT)/ prefix:
+-$(TEST_GEN_PROGS): LDLIBS += -lcap
++$(TEST_GEN_PROGS): LDLIBS += -lcap -lpthread
+ $(TEST_GEN_PROGS_EXTENDED): LDFLAGS += -static
+diff --git a/tools/testing/selftests/landlock/fs_test.c b/tools/testing/selftests/landlock/fs_test.c
+index 6788762188feac..97d360eae4f69e 100644
+--- a/tools/testing/selftests/landlock/fs_test.c
++++ b/tools/testing/selftests/landlock/fs_test.c
+@@ -2003,8 +2003,7 @@ static void test_execute(struct __test_metadata *const _metadata, const int err,
+ ASSERT_EQ(1, WIFEXITED(status));
+ ASSERT_EQ(err ? 2 : 0, WEXITSTATUS(status))
+ {
+- TH_LOG("Unexpected return code for \"%s\": %s", path,
+- strerror(errno));
++ TH_LOG("Unexpected return code for \"%s\"", path);
+ };
+ }
+
+diff --git a/tools/testing/selftests/net/lib/Makefile b/tools/testing/selftests/net/lib/Makefile
+index 82c3264b115eee..704b88b6a8d2a2 100644
+--- a/tools/testing/selftests/net/lib/Makefile
++++ b/tools/testing/selftests/net/lib/Makefile
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+
+-CFLAGS = -Wall -Wl,--no-as-needed -O2 -g
++CFLAGS += -Wall -Wl,--no-as-needed -O2 -g
+ CFLAGS += -I../../../../../usr/include/ $(KHDR_INCLUDES)
+ # Additional include paths needed by kselftest.h
+ CFLAGS += -I../../
+diff --git a/tools/testing/selftests/net/mptcp/Makefile b/tools/testing/selftests/net/mptcp/Makefile
+index 5d796622e73099..580610c46e5aef 100644
+--- a/tools/testing/selftests/net/mptcp/Makefile
++++ b/tools/testing/selftests/net/mptcp/Makefile
+@@ -2,7 +2,7 @@
+
+ top_srcdir = ../../../../..
+
+-CFLAGS = -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
++CFLAGS += -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
+
+ TEST_PROGS := mptcp_connect.sh pm_netlink.sh mptcp_join.sh diag.sh \
+ simult_flows.sh mptcp_sockopt.sh userspace_pm.sh
+diff --git a/tools/testing/selftests/net/openvswitch/Makefile b/tools/testing/selftests/net/openvswitch/Makefile
+index 2f1508abc826b7..3fd1da2ec07d54 100644
+--- a/tools/testing/selftests/net/openvswitch/Makefile
++++ b/tools/testing/selftests/net/openvswitch/Makefile
+@@ -2,7 +2,7 @@
+
+ top_srcdir = ../../../../..
+
+-CFLAGS = -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
++CFLAGS += -Wall -Wl,--no-as-needed -O2 -g -I$(top_srcdir)/usr/include $(KHDR_INCLUDES)
+
+ TEST_PROGS := openvswitch.sh
+
+diff --git a/tools/testing/selftests/powerpc/benchmarks/gettimeofday.c b/tools/testing/selftests/powerpc/benchmarks/gettimeofday.c
+index 580fcac0a09f31..b71ef8a493ed1a 100644
+--- a/tools/testing/selftests/powerpc/benchmarks/gettimeofday.c
++++ b/tools/testing/selftests/powerpc/benchmarks/gettimeofday.c
+@@ -20,7 +20,7 @@ static int test_gettimeofday(void)
+ gettimeofday(&tv_end, NULL);
+ }
+
+- timersub(&tv_start, &tv_end, &tv_diff);
++ timersub(&tv_end, &tv_start, &tv_diff);
+
+ printf("time = %.6f\n", tv_diff.tv_sec + (tv_diff.tv_usec) * 1e-6);
+
+diff --git a/tools/testing/selftests/rseq/rseq.c b/tools/testing/selftests/rseq/rseq.c
+index 5b9772cdf2651b..f6156790c3b4df 100644
+--- a/tools/testing/selftests/rseq/rseq.c
++++ b/tools/testing/selftests/rseq/rseq.c
+@@ -61,7 +61,6 @@ unsigned int rseq_size = -1U;
+ unsigned int rseq_flags;
+
+ static int rseq_ownership;
+-static int rseq_reg_success; /* At least one rseq registration has succeded. */
+
+ /* Allocate a large area for the TLS. */
+ #define RSEQ_THREAD_AREA_ALLOC_SIZE 1024
+@@ -152,14 +151,27 @@ int rseq_register_current_thread(void)
+ }
+ rc = sys_rseq(&__rseq_abi, get_rseq_min_alloc_size(), 0, RSEQ_SIG);
+ if (rc) {
+- if (RSEQ_READ_ONCE(rseq_reg_success)) {
++ /*
++ * After at least one thread has registered successfully
++ * (rseq_size > 0), the registration of other threads should
++ * never fail.
++ */
++ if (RSEQ_READ_ONCE(rseq_size) > 0) {
+ /* Incoherent success/failure within process. */
+ abort();
+ }
+ return -1;
+ }
+ assert(rseq_current_cpu_raw() >= 0);
+- RSEQ_WRITE_ONCE(rseq_reg_success, 1);
++
++ /*
++ * The first thread to register sets the rseq_size to mimic the libc
++ * behavior.
++ */
++ if (RSEQ_READ_ONCE(rseq_size) == 0) {
++ RSEQ_WRITE_ONCE(rseq_size, get_rseq_kernel_feature_size());
++ }
++
+ return 0;
+ }
+
+@@ -235,12 +247,18 @@ void rseq_init(void)
+ return;
+ }
+ rseq_ownership = 1;
+- if (!rseq_available()) {
+- rseq_size = 0;
+- return;
+- }
++
++ /* Calculate the offset of the rseq area from the thread pointer. */
+ rseq_offset = (void *)&__rseq_abi - rseq_thread_pointer();
++
++ /* rseq flags are deprecated, always set to 0. */
+ rseq_flags = 0;
++
++ /*
++ * Set the size to 0 until at least one thread registers to mimic the
++ * libc behavior.
++ */
++ rseq_size = 0;
+ }
+
+ static __attribute__((destructor))
+diff --git a/tools/testing/selftests/rseq/rseq.h b/tools/testing/selftests/rseq/rseq.h
+index 4e217b620e0c7a..062d10925a1011 100644
+--- a/tools/testing/selftests/rseq/rseq.h
++++ b/tools/testing/selftests/rseq/rseq.h
+@@ -60,7 +60,14 @@
+ extern ptrdiff_t rseq_offset;
+
+ /*
+- * Size of the registered rseq area. 0 if the registration was
++ * The rseq ABI is composed of extensible feature fields. The extensions
++ * are done by appending additional fields at the end of the structure.
++ * The rseq_size defines the size of the active feature set which can be
++ * used by the application for the current rseq registration. Features
++ * starting at offset >= rseq_size are inactive and should not be used.
++ *
++ * The rseq_size is the intersection between the available allocation
++ * size for the rseq area and the feature size supported by the kernel.
+ * unsuccessful.
+ */
+ extern unsigned int rseq_size;
+diff --git a/tools/testing/selftests/timers/clocksource-switch.c b/tools/testing/selftests/timers/clocksource-switch.c
+index c5264594064c85..83faa4e354e389 100644
+--- a/tools/testing/selftests/timers/clocksource-switch.c
++++ b/tools/testing/selftests/timers/clocksource-switch.c
+@@ -156,8 +156,8 @@ int main(int argc, char **argv)
+ /* Check everything is sane before we start switching asynchronously */
+ if (do_sanity_check) {
+ for (i = 0; i < count; i++) {
+- printf("Validating clocksource %s\n",
+- clocksource_list[i]);
++ ksft_print_msg("Validating clocksource %s\n",
++ clocksource_list[i]);
+ if (change_clocksource(clocksource_list[i])) {
+ status = -1;
+ goto out;
+@@ -169,7 +169,7 @@ int main(int argc, char **argv)
+ }
+ }
+
+- printf("Running Asynchronous Switching Tests...\n");
++ ksft_print_msg("Running Asynchronous Switching Tests...\n");
+ pid = fork();
+ if (!pid)
+ return run_tests(runtime);
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-16 21:48 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-02-16 21:48 UTC (permalink / raw
To: gentoo-commits
commit: f8e6e0a09a78ef67abed5a29f23c6a2db0d259e9
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Feb 16 21:48:06 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Feb 16 21:48:06 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f8e6e0a0
fortify: Hide run-time copy size from value range tracking
Bug: https://bugs.gentoo.org/947270
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
...ortify-copy-size-value-range-tracking-fix.patch | 161 +++++++++++++++++++++
2 files changed, 165 insertions(+)
diff --git a/0000_README b/0000_README
index ceb862e7..499702fa 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch: 1012_linux-6.12.13.patch
From: https://www.kernel.org
Desc: Linux 6.12.13
+Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
+From: https://git.kernel.org/
+Desc: fortify: Hide run-time copy size from value range tracking
+
Patch: 1510_fs-enable-link-security-restrictions-by-default.patch
From: http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
Desc: Enable link security restrictions by default.
diff --git a/1500_fortify-copy-size-value-range-tracking-fix.patch b/1500_fortify-copy-size-value-range-tracking-fix.patch
new file mode 100644
index 00000000..f751e02c
--- /dev/null
+++ b/1500_fortify-copy-size-value-range-tracking-fix.patch
@@ -0,0 +1,161 @@
+From 239d87327dcd361b0098038995f8908f3296864f Mon Sep 17 00:00:00 2001
+From: Kees Cook <kees@kernel.org>
+Date: Thu, 12 Dec 2024 17:28:06 -0800
+Subject: fortify: Hide run-time copy size from value range tracking
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+GCC performs value range tracking for variables as a way to provide better
+diagnostics. One place this is regularly seen is with warnings associated
+with bounds-checking, e.g. -Wstringop-overflow, -Wstringop-overread,
+-Warray-bounds, etc. In order to keep the signal-to-noise ratio high,
+warnings aren't emitted when a value range spans the entire value range
+representable by a given variable. For example:
+
+ unsigned int len;
+ char dst[8];
+ ...
+ memcpy(dst, src, len);
+
+If len's value is unknown, it has the full "unsigned int" range of [0,
+UINT_MAX], and GCC's compile-time bounds checks against memcpy() will
+be ignored. However, when a code path has been able to narrow the range:
+
+ if (len > 16)
+ return;
+ memcpy(dst, src, len);
+
+Then the range will be updated for the execution path. Above, len is
+now [0, 16] when reading memcpy(), so depending on other optimizations,
+we might see a -Wstringop-overflow warning like:
+
+ error: '__builtin_memcpy' writing between 9 and 16 bytes into region of size 8 [-Werror=stringop-overflow]
+
+When building with CONFIG_FORTIFY_SOURCE, the fortified run-time bounds
+checking can appear to narrow value ranges of lengths for memcpy(),
+depending on how the compiler constructs the execution paths during
+optimization passes, due to the checks against the field sizes. For
+example:
+
+ if (p_size_field != SIZE_MAX &&
+ p_size != p_size_field && p_size_field < size)
+
+As intentionally designed, these checks only affect the kernel warnings
+emitted at run-time and do not block the potentially overflowing memcpy(),
+so GCC thinks it needs to produce a warning about the resulting value
+range that might be reaching the memcpy().
+
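+Schematically (a simplified sketch of the control flow GCC ends up
+analyzing, not the literal kernel code), the fortified call site has the
+shape:
+
+	if (len > dst_size)	/* fortify bounds check */
+		warn();		/* warns at run-time, does not return */
+	__builtin_memcpy(dst, src, len);	/* on the warn path, len > dst_size */
+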
+We have seen this manifest a few times now, with the most recent being
+with cpumasks:
+
+In function ‘bitmap_copy’,
+ inlined from ‘cpumask_copy’ at ./include/linux/cpumask.h:839:2,
+ inlined from ‘__padata_set_cpumasks’ at kernel/padata.c:730:2:
+./include/linux/fortify-string.h:114:33: error: ‘__builtin_memcpy’ reading between 257 and 536870904 bytes from a region of size 256 [-Werror=stringop-overread]
+ 114 | #define __underlying_memcpy __builtin_memcpy
+ | ^
+./include/linux/fortify-string.h:633:9: note: in expansion of macro ‘__underlying_memcpy’
+ 633 | __underlying_##op(p, q, __fortify_size); \
+ | ^~~~~~~~~~~~~
+./include/linux/fortify-string.h:678:26: note: in expansion of macro ‘__fortify_memcpy_chk’
+ 678 | #define memcpy(p, q, s) __fortify_memcpy_chk(p, q, s, \
+ | ^~~~~~~~~~~~~~~~~~~~
+./include/linux/bitmap.h:259:17: note: in expansion of macro ‘memcpy’
+ 259 | memcpy(dst, src, len);
+ | ^~~~~~
+kernel/padata.c: In function ‘__padata_set_cpumasks’:
+kernel/padata.c:713:48: note: source object ‘pcpumask’ of size [0, 256]
+ 713 | cpumask_var_t pcpumask,
+ | ~~~~~~~~~~~~~~^~~~~~~~
+
+This warning is _not_ emitted when CONFIG_FORTIFY_SOURCE is disabled,
+and with the recent -fdiagnostics-details we can confirm the origin of
+the warning is due to FORTIFY's bounds checking:
+
+../include/linux/bitmap.h:259:17: note: in expansion of macro 'memcpy'
+ 259 | memcpy(dst, src, len);
+ | ^~~~~~
+ '__padata_set_cpumasks': events 1-2
+../include/linux/fortify-string.h:613:36:
+ 612 | if (p_size_field != SIZE_MAX &&
+ | ~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ 613 | p_size != p_size_field && p_size_field < size)
+ | ~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~
+ | |
+ | (1) when the condition is evaluated to false
+ | (2) when the condition is evaluated to true
+ '__padata_set_cpumasks': event 3
+ 114 | #define __underlying_memcpy __builtin_memcpy
+ | ^
+ | |
+ | (3) out of array bounds here
+
+Note that the cpumask warning started appearing since bitmap functions
+were recently marked __always_inline in commit ed8cd2b3bd9f ("bitmap:
+Switch from inline to __always_inline"), which allowed GCC to gain
+visibility into the variables as they passed through the FORTIFY
+implementation.
+
+In order to silence these false positives but keep otherwise deterministic
+compile-time warnings intact, hide the length variable from GCC with
+OPTIMIZER_HIDE_VAR() before calling the builtin memcpy.
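+
+For reference, OPTIMIZER_HIDE_VAR() (from include/linux/compiler.h) is
+roughly an empty asm statement that forces the value through a register
+the optimizer cannot see through, along the lines of:
+
+	#define OPTIMIZER_HIDE_VAR(var) \
+		__asm__ ("" : "=r" (var) : "0" (var))
+
+The value is unchanged at run-time, but its range is no longer known to
+GCC afterwards.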
+
+Additionally add a comment about why all the macro args have copies with
+const storage.
+
+Reported-by: "Thomas Weißschuh" <linux@weissschuh.net>
+Closes: https://lore.kernel.org/all/db7190c8-d17f-4a0d-bc2f-5903c79f36c2@t-8ch.de/
+Reported-by: Nilay Shroff <nilay@linux.ibm.com>
+Closes: https://lore.kernel.org/all/20241112124127.1666300-1-nilay@linux.ibm.com/
+Tested-by: Nilay Shroff <nilay@linux.ibm.com>
+Acked-by: Yury Norov <yury.norov@gmail.com>
+Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Kees Cook <kees@kernel.org>
+---
+ include/linux/fortify-string.h | 14 +++++++++++++-
+ 1 file changed, 13 insertions(+), 1 deletion(-)
+
+(limited to 'include/linux/fortify-string.h')
+
+diff --git a/include/linux/fortify-string.h b/include/linux/fortify-string.h
+index 0d99bf11d260a3..e4ce1cae03bf77 100644
+--- a/include/linux/fortify-string.h
++++ b/include/linux/fortify-string.h
+@@ -616,6 +616,12 @@ __FORTIFY_INLINE bool fortify_memcpy_chk(__kernel_size_t size,
+ return false;
+ }
+
++/*
++ * To work around what seems to be an optimizer bug, the macro arguments
++ * need to have const copies or the values end up changed by the time they
++ * reach fortify_warn_once(). See commit 6f7630b1b5bc ("fortify: Capture
++ * __bos() results in const temp vars") for more details.
++ */
+ #define __fortify_memcpy_chk(p, q, size, p_size, q_size, \
+ p_size_field, q_size_field, op) ({ \
+ const size_t __fortify_size = (size_t)(size); \
+@@ -623,6 +629,8 @@ __FORTIFY_INLINE bool fortify_memcpy_chk(__kernel_size_t size,
+ const size_t __q_size = (q_size); \
+ const size_t __p_size_field = (p_size_field); \
+ const size_t __q_size_field = (q_size_field); \
++ /* Keep a mutable version of the size for the final copy. */ \
++ size_t __copy_size = __fortify_size; \
+ fortify_warn_once(fortify_memcpy_chk(__fortify_size, __p_size, \
+ __q_size, __p_size_field, \
+ __q_size_field, FORTIFY_FUNC_ ##op), \
+@@ -630,7 +638,11 @@ __FORTIFY_INLINE bool fortify_memcpy_chk(__kernel_size_t size,
+ __fortify_size, \
+ "field \"" #p "\" at " FILE_LINE, \
+ __p_size_field); \
+- __underlying_##op(p, q, __fortify_size); \
++ /* Hide only the run-time size from value range tracking to */ \
++ /* silence compile-time false positive bounds warnings. */ \
++ if (!__builtin_constant_p(__copy_size)) \
++ OPTIMIZER_HIDE_VAR(__copy_size); \
++ __underlying_##op(p, q, __copy_size); \
+ })
+
+ /*
+--
+cgit 1.2.3-korg
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-17 11:16 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-02-17 11:16 UTC (permalink / raw
To: gentoo-commits
commit: ac1b056c5231ef785a638ac21cacdd2697fd115c
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Feb 17 11:16:10 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Feb 17 11:16:10 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ac1b056c
Linux patch 6.12.14
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1013_linux-6.12.14.patch | 17568 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 17572 insertions(+)
diff --git a/0000_README b/0000_README
index 499702fa..c6c607fe 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch: 1012_linux-6.12.13.patch
From: https://www.kernel.org
Desc: Linux 6.12.13
+Patch: 1013_linux-6.12.14.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.14
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1013_linux-6.12.14.patch b/1013_linux-6.12.14.patch
new file mode 100644
index 00000000..5243c324
--- /dev/null
+++ b/1013_linux-6.12.14.patch
@@ -0,0 +1,17568 @@
+diff --git a/Documentation/arch/arm64/elf_hwcaps.rst b/Documentation/arch/arm64/elf_hwcaps.rst
+index 694f67fa07d196..ab556426c7ac24 100644
+--- a/Documentation/arch/arm64/elf_hwcaps.rst
++++ b/Documentation/arch/arm64/elf_hwcaps.rst
+@@ -174,22 +174,28 @@ HWCAP2_DCPODP
+ Functionality implied by ID_AA64ISAR1_EL1.DPB == 0b0010.
+
+ HWCAP2_SVE2
+- Functionality implied by ID_AA64ZFR0_EL1.SVEver == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.SVEver == 0b0001.
+
+ HWCAP2_SVEAES
+- Functionality implied by ID_AA64ZFR0_EL1.AES == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.AES == 0b0001.
+
+ HWCAP2_SVEPMULL
+- Functionality implied by ID_AA64ZFR0_EL1.AES == 0b0010.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.AES == 0b0010.
+
+ HWCAP2_SVEBITPERM
+- Functionality implied by ID_AA64ZFR0_EL1.BitPerm == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.BitPerm == 0b0001.
+
+ HWCAP2_SVESHA3
+- Functionality implied by ID_AA64ZFR0_EL1.SHA3 == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.SHA3 == 0b0001.
+
+ HWCAP2_SVESM4
+- Functionality implied by ID_AA64ZFR0_EL1.SM4 == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.SM4 == 0b0001.
+
+ HWCAP2_FLAGM2
+ Functionality implied by ID_AA64ISAR0_EL1.TS == 0b0010.
+@@ -198,16 +204,20 @@ HWCAP2_FRINT
+ Functionality implied by ID_AA64ISAR1_EL1.FRINTTS == 0b0001.
+
+ HWCAP2_SVEI8MM
+- Functionality implied by ID_AA64ZFR0_EL1.I8MM == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.I8MM == 0b0001.
+
+ HWCAP2_SVEF32MM
+- Functionality implied by ID_AA64ZFR0_EL1.F32MM == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.F32MM == 0b0001.
+
+ HWCAP2_SVEF64MM
+- Functionality implied by ID_AA64ZFR0_EL1.F64MM == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.F64MM == 0b0001.
+
+ HWCAP2_SVEBF16
+- Functionality implied by ID_AA64ZFR0_EL1.BF16 == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.BF16 == 0b0001.
+
+ HWCAP2_I8MM
+ Functionality implied by ID_AA64ISAR1_EL1.I8MM == 0b0001.
+@@ -273,7 +283,8 @@ HWCAP2_EBF16
+ Functionality implied by ID_AA64ISAR1_EL1.BF16 == 0b0010.
+
+ HWCAP2_SVE_EBF16
+- Functionality implied by ID_AA64ZFR0_EL1.BF16 == 0b0010.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.BF16 == 0b0010.
+
+ HWCAP2_CSSC
+ Functionality implied by ID_AA64ISAR2_EL1.CSSC == 0b0001.
+@@ -282,7 +293,8 @@ HWCAP2_RPRFM
+ Functionality implied by ID_AA64ISAR2_EL1.RPRFM == 0b0001.
+
+ HWCAP2_SVE2P1
+- Functionality implied by ID_AA64ZFR0_EL1.SVEver == 0b0010.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.SVEver == 0b0010.
+
+ HWCAP2_SME2
+ Functionality implied by ID_AA64SMFR0_EL1.SMEver == 0b0001.
+@@ -309,7 +321,8 @@ HWCAP2_HBC
+ Functionality implied by ID_AA64ISAR2_EL1.BC == 0b0001.
+
+ HWCAP2_SVE_B16B16
+- Functionality implied by ID_AA64ZFR0_EL1.B16B16 == 0b0001.
++ Functionality implied by ID_AA64PFR0_EL1.SVE == 0b0001 and
++ ID_AA64ZFR0_EL1.B16B16 == 0b0001.
+
+ HWCAP2_LRCPC3
+ Functionality implied by ID_AA64ISAR1_EL1.LRCPC == 0b0011.
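The elf_hwcaps.rst hunks above tighten the documented conditions for the SVE hwcaps — each now also requires ID_AA64PFR0_EL1.SVE == 0b0001 — without changing how the bits reach userspace. For reference, a minimal sketch of the consumer side, assuming an arm64 Linux toolchain where getauxval() and the HWCAP2_* UAPI constants are available:

    /* Query the SVE2 hwcap documented above via the auxiliary vector. */
    #include <stdio.h>
    #include <sys/auxv.h>     /* getauxval(), AT_HWCAP2 */
    #include <asm/hwcap.h>    /* HWCAP2_* bits (arm64 UAPI header) */

    int main(void)
    {
            unsigned long hwcap2 = getauxval(AT_HWCAP2);

            /* Set only when both ID_AA64PFR0_EL1.SVE and
             * ID_AA64ZFR0_EL1.SVEver report the feature. */
            printf("SVE2: %s\n", (hwcap2 & HWCAP2_SVE2) ? "yes" : "no");
            return 0;
    }

The same pattern applies to any of the HWCAP2_* bits covered by the hunks above.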
+diff --git a/Documentation/gpu/drm-kms-helpers.rst b/Documentation/gpu/drm-kms-helpers.rst
+index c3e58856f75b36..96c03b9a644e4f 100644
+--- a/Documentation/gpu/drm-kms-helpers.rst
++++ b/Documentation/gpu/drm-kms-helpers.rst
+@@ -230,6 +230,9 @@ Panel Helper Reference
+ .. kernel-doc:: drivers/gpu/drm/drm_panel_orientation_quirks.c
+ :export:
+
++.. kernel-doc:: drivers/gpu/drm/drm_panel_backlight_quirks.c
++ :export:
++
+ Panel Self Refresh Helper Reference
+ ===================================
+
+diff --git a/Makefile b/Makefile
+index 5442ff45f963ed..26a471dbed62a5 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/boot/dts/ti/omap/dra7-l4.dtsi b/arch/arm/boot/dts/ti/omap/dra7-l4.dtsi
+index 6e67d99832ac25..ba7fdaae9c6e6d 100644
+--- a/arch/arm/boot/dts/ti/omap/dra7-l4.dtsi
++++ b/arch/arm/boot/dts/ti/omap/dra7-l4.dtsi
+@@ -12,6 +12,7 @@ &l4_cfg { /* 0x4a000000 */
+ ranges = <0x00000000 0x4a000000 0x100000>, /* segment 0 */
+ <0x00100000 0x4a100000 0x100000>, /* segment 1 */
+ <0x00200000 0x4a200000 0x100000>; /* segment 2 */
++ dma-ranges;
+
+ segment@0 { /* 0x4a000000 */
+ compatible = "simple-pm-bus";
+@@ -557,6 +558,7 @@ segment@100000 { /* 0x4a100000 */
+ <0x0007e000 0x0017e000 0x001000>, /* ap 124 */
+ <0x00059000 0x00159000 0x001000>, /* ap 125 */
+ <0x0005a000 0x0015a000 0x001000>; /* ap 126 */
++ dma-ranges;
+
+ target-module@2000 { /* 0x4a102000, ap 27 3c.0 */
+ compatible = "ti,sysc";
+diff --git a/arch/arm/boot/dts/ti/omap/omap3-gta04.dtsi b/arch/arm/boot/dts/ti/omap/omap3-gta04.dtsi
+index 3661340009e7a4..8dca2bed941b64 100644
+--- a/arch/arm/boot/dts/ti/omap/omap3-gta04.dtsi
++++ b/arch/arm/boot/dts/ti/omap/omap3-gta04.dtsi
+@@ -446,6 +446,7 @@ &omap3_pmx_core2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <
+ &hsusb2_2_pins
++ &mcspi3hog_pins
+ >;
+
+ hsusb2_2_pins: hsusb2-2-pins {
+@@ -459,6 +460,15 @@ OMAP3630_CORE2_IOPAD(0x25fa, PIN_INPUT_PULLDOWN | MUX_MODE3) /* etk_d15.hsusb2_d
+ >;
+ };
+
++ mcspi3hog_pins: mcspi3hog-pins {
++ pinctrl-single,pins = <
++ OMAP3630_CORE2_IOPAD(0x25dc, PIN_OUTPUT_PULLDOWN | MUX_MODE4) /* etk_d0 */
++ OMAP3630_CORE2_IOPAD(0x25de, PIN_OUTPUT_PULLDOWN | MUX_MODE4) /* etk_d1 */
++ OMAP3630_CORE2_IOPAD(0x25e0, PIN_OUTPUT_PULLDOWN | MUX_MODE4) /* etk_d2 */
++ OMAP3630_CORE2_IOPAD(0x25e2, PIN_OUTPUT_PULLDOWN | MUX_MODE4) /* etk_d3 */
++ >;
++ };
++
+ spi_gpio_pins: spi-gpio-pinmux-pins {
+ pinctrl-single,pins = <
+ OMAP3630_CORE2_IOPAD(0x25d8, PIN_OUTPUT | MUX_MODE4) /* clk */
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+index 07ae3c8e897b7d..22924f61ec9ed2 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183-kukui.dtsi
+@@ -290,11 +290,6 @@ dsi_out: endpoint {
+ };
+ };
+
+-&dpi0 {
+- /* TODO Re-enable after DP to Type-C port muxing can be described */
+- status = "disabled";
+-};
+-
+ &gic {
+ mediatek,broken-save-restore-fw;
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 9cd5e0cef02a29..5cb6bd3c5acbb0 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -1846,6 +1846,7 @@ dpi0: dpi@14015000 {
+ <&mmsys CLK_MM_DPI_MM>,
+ <&apmixedsys CLK_APMIXED_TVDPLL>;
+ clock-names = "pixel", "engine", "pll";
++ status = "disabled";
+
+ port {
+ dpi_out: endpoint { };
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234.dtsi b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
+index 570331baa09ee3..2601b43b2d8cad 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra234.dtsi
+@@ -3815,7 +3815,7 @@ sce-fabric@b600000 {
+ compatible = "nvidia,tegra234-sce-fabric";
+ reg = <0x0 0xb600000 0x0 0x40000>;
+ interrupts = <GIC_SPI 173 IRQ_TYPE_LEVEL_HIGH>;
+- status = "okay";
++ status = "disabled";
+ };
+
+ rce-fabric@be00000 {
+@@ -3995,7 +3995,7 @@ bpmp-fabric@d600000 {
+ };
+
+ dce-fabric@de00000 {
+- compatible = "nvidia,tegra234-sce-fabric";
++ compatible = "nvidia,tegra234-dce-fabric";
+ reg = <0x0 0xde00000 0x0 0x40000>;
+ interrupts = <GIC_SPI 381 IRQ_TYPE_LEVEL_HIGH>;
+ status = "okay";
+@@ -4018,6 +4018,8 @@ gic: interrupt-controller@f400000 {
+ #redistributor-regions = <1>;
+ #interrupt-cells = <3>;
+ interrupt-controller;
++
++ #address-cells = <0>;
+ };
+
+ smmu_iso: iommu@10000000 {
+diff --git a/arch/arm64/boot/dts/qcom/sdx75.dtsi b/arch/arm64/boot/dts/qcom/sdx75.dtsi
+index dcb925348e3f31..60a5d6d3ca7cc8 100644
+--- a/arch/arm64/boot/dts/qcom/sdx75.dtsi
++++ b/arch/arm64/boot/dts/qcom/sdx75.dtsi
+@@ -893,7 +893,7 @@ tcsr: syscon@1fc0000 {
+
+ remoteproc_mpss: remoteproc@4080000 {
+ compatible = "qcom,sdx75-mpss-pas";
+- reg = <0 0x04080000 0 0x4040>;
++ reg = <0 0x04080000 0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 250 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_modem_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sm6115.dtsi b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+index 41216cc319d65e..4adadfd1e51ae9 100644
+--- a/arch/arm64/boot/dts/qcom/sm6115.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6115.dtsi
+@@ -2027,7 +2027,7 @@ dispcc: clock-controller@5f00000 {
+
+ remoteproc_mpss: remoteproc@6080000 {
+ compatible = "qcom,sm6115-mpss-pas";
+- reg = <0x0 0x06080000 0x0 0x100>;
++ reg = <0x0 0x06080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 307 IRQ_TYPE_EDGE_RISING>,
+ <&modem_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -2670,9 +2670,9 @@ funnel_apss1_in: endpoint {
+ };
+ };
+
+- remoteproc_adsp: remoteproc@ab00000 {
++ remoteproc_adsp: remoteproc@a400000 {
+ compatible = "qcom,sm6115-adsp-pas";
+- reg = <0x0 0x0ab00000 0x0 0x100>;
++ reg = <0x0 0x0a400000 0x0 0x4040>;
+
+ interrupts-extended = <&intc GIC_SPI 282 IRQ_TYPE_EDGE_RISING>,
+ <&adsp_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -2744,7 +2744,7 @@ compute-cb@7 {
+
+ remoteproc_cdsp: remoteproc@b300000 {
+ compatible = "qcom,sm6115-cdsp-pas";
+- reg = <0x0 0x0b300000 0x0 0x100000>;
++ reg = <0x0 0x0b300000 0x0 0x4040>;
+
+ interrupts-extended = <&intc GIC_SPI 265 IRQ_TYPE_EDGE_RISING>,
+ <&cdsp_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sm6350.dtsi b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+index 4f8477de7e1b1e..10418fccfea24f 100644
+--- a/arch/arm64/boot/dts/qcom/sm6350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6350.dtsi
+@@ -936,7 +936,7 @@ uart1: serial@884000 {
+ power-domains = <&rpmhpd SM6350_CX>;
+ operating-points-v2 = <&qup_opp_table>;
+ interconnects = <&clk_virt MASTER_QUP_CORE_0 0 &clk_virt SLAVE_QUP_CORE_0 0>,
+- <&aggre1_noc MASTER_QUP_0 0 &clk_virt SLAVE_EBI_CH0 0>;
++ <&gem_noc MASTER_AMPSS_M0 0 &config_noc SLAVE_QUP_0 0>;
+ interconnect-names = "qup-core", "qup-config";
+ status = "disabled";
+ };
+@@ -1283,7 +1283,7 @@ tcsr_mutex: hwlock@1f40000 {
+
+ adsp: remoteproc@3000000 {
+ compatible = "qcom,sm6350-adsp-pas";
+- reg = <0 0x03000000 0 0x100>;
++ reg = <0x0 0x03000000 0x0 0x10000>;
+
+ interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -1503,7 +1503,7 @@ gpucc: clock-controller@3d90000 {
+
+ mpss: remoteproc@4080000 {
+ compatible = "qcom,sm6350-mpss-pas";
+- reg = <0x0 0x04080000 0x0 0x4040>;
++ reg = <0x0 0x04080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 136 IRQ_TYPE_EDGE_RISING>,
+ <&modem_smp2p_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sm6375.dtsi b/arch/arm64/boot/dts/qcom/sm6375.dtsi
+index 72e01437ded125..01371f41f7906b 100644
+--- a/arch/arm64/boot/dts/qcom/sm6375.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm6375.dtsi
+@@ -1516,9 +1516,9 @@ gpucc: clock-controller@5990000 {
+ #power-domain-cells = <1>;
+ };
+
+- remoteproc_mss: remoteproc@6000000 {
++ remoteproc_mss: remoteproc@6080000 {
+ compatible = "qcom,sm6375-mpss-pas";
+- reg = <0 0x06000000 0 0x4040>;
++ reg = <0x0 0x06080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 307 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_modem_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -1559,7 +1559,7 @@ IPCC_MPROC_SIGNAL_GLINK_QMP
+
+ remoteproc_adsp: remoteproc@a400000 {
+ compatible = "qcom,sm6375-adsp-pas";
+- reg = <0 0x0a400000 0 0x100>;
++ reg = <0 0x0a400000 0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 282 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -1595,9 +1595,9 @@ IPCC_MPROC_SIGNAL_GLINK_QMP
+ };
+ };
+
+- remoteproc_cdsp: remoteproc@b000000 {
++ remoteproc_cdsp: remoteproc@b300000 {
+ compatible = "qcom,sm6375-cdsp-pas";
+- reg = <0x0 0x0b000000 0x0 0x100000>;
++ reg = <0x0 0x0b300000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 265 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sm8350.dtsi b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+index 041750d71e4550..46adf10e5fe4d6 100644
+--- a/arch/arm64/boot/dts/qcom/sm8350.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8350.dtsi
+@@ -1876,6 +1876,142 @@ tcsr: syscon@1fc0000 {
+ reg = <0x0 0x1fc0000 0x0 0x30000>;
+ };
+
++ adsp: remoteproc@3000000 {
++ compatible = "qcom,sm8350-adsp-pas";
++ reg = <0x0 0x03000000 0x0 0x10000>;
++
++ interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
++ interrupt-names = "wdog", "fatal", "ready",
++ "handover", "stop-ack";
++
++ clocks = <&rpmhcc RPMH_CXO_CLK>;
++ clock-names = "xo";
++
++ power-domains = <&rpmhpd RPMHPD_LCX>,
++ <&rpmhpd RPMHPD_LMX>;
++ power-domain-names = "lcx", "lmx";
++
++ memory-region = <&pil_adsp_mem>;
++
++ qcom,qmp = <&aoss_qmp>;
++
++ qcom,smem-states = <&smp2p_adsp_out 0>;
++ qcom,smem-state-names = "stop";
++
++ status = "disabled";
++
++ glink-edge {
++ interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP
++ IRQ_TYPE_EDGE_RISING>;
++ mboxes = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP>;
++
++ label = "lpass";
++ qcom,remote-pid = <2>;
++
++ apr {
++ compatible = "qcom,apr-v2";
++ qcom,glink-channels = "apr_audio_svc";
++ qcom,domain = <APR_DOMAIN_ADSP>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ service@3 {
++ reg = <APR_SVC_ADSP_CORE>;
++ compatible = "qcom,q6core";
++ qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
++ };
++
++ q6afe: service@4 {
++ compatible = "qcom,q6afe";
++ reg = <APR_SVC_AFE>;
++ qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
++
++ q6afedai: dais {
++ compatible = "qcom,q6afe-dais";
++ #address-cells = <1>;
++ #size-cells = <0>;
++ #sound-dai-cells = <1>;
++ };
++
++ q6afecc: clock-controller {
++ compatible = "qcom,q6afe-clocks";
++ #clock-cells = <2>;
++ };
++ };
++
++ q6asm: service@7 {
++ compatible = "qcom,q6asm";
++ reg = <APR_SVC_ASM>;
++ qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
++
++ q6asmdai: dais {
++ compatible = "qcom,q6asm-dais";
++ #address-cells = <1>;
++ #size-cells = <0>;
++ #sound-dai-cells = <1>;
++ iommus = <&apps_smmu 0x1801 0x0>;
++
++ dai@0 {
++ reg = <0>;
++ };
++
++ dai@1 {
++ reg = <1>;
++ };
++
++ dai@2 {
++ reg = <2>;
++ };
++ };
++ };
++
++ q6adm: service@8 {
++ compatible = "qcom,q6adm";
++ reg = <APR_SVC_ADM>;
++ qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
++
++ q6routing: routing {
++ compatible = "qcom,q6adm-routing";
++ #sound-dai-cells = <0>;
++ };
++ };
++ };
++
++ fastrpc {
++ compatible = "qcom,fastrpc";
++ qcom,glink-channels = "fastrpcglink-apps-dsp";
++ label = "adsp";
++ qcom,non-secure-domain;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ compute-cb@3 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <3>;
++ iommus = <&apps_smmu 0x1803 0x0>;
++ };
++
++ compute-cb@4 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <4>;
++ iommus = <&apps_smmu 0x1804 0x0>;
++ };
++
++ compute-cb@5 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <5>;
++ iommus = <&apps_smmu 0x1805 0x0>;
++ };
++ };
++ };
++ };
++
+ lpass_tlmm: pinctrl@33c0000 {
+ compatible = "qcom,sm8350-lpass-lpi-pinctrl";
+ reg = <0 0x033c0000 0 0x20000>,
+@@ -2078,7 +2214,7 @@ lpass_ag_noc: interconnect@3c40000 {
+
+ mpss: remoteproc@4080000 {
+ compatible = "qcom,sm8350-mpss-pas";
+- reg = <0x0 0x04080000 0x0 0x4040>;
++ reg = <0x0 0x04080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 264 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_modem_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -2360,6 +2496,115 @@ compute_noc: interconnect@a0c0000 {
+ qcom,bcm-voters = <&apps_bcm_voter>;
+ };
+
++ cdsp: remoteproc@a300000 {
++ compatible = "qcom,sm8350-cdsp-pas";
++ reg = <0x0 0x0a300000 0x0 0x10000>;
++
++ interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_cdsp_in 1 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_cdsp_in 2 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_cdsp_in 3 IRQ_TYPE_EDGE_RISING>;
++ interrupt-names = "wdog", "fatal", "ready",
++ "handover", "stop-ack";
++
++ clocks = <&rpmhcc RPMH_CXO_CLK>;
++ clock-names = "xo";
++
++ power-domains = <&rpmhpd RPMHPD_CX>,
++ <&rpmhpd RPMHPD_MXC>;
++ power-domain-names = "cx", "mxc";
++
++ interconnects = <&compute_noc MASTER_CDSP_PROC 0 &mc_virt SLAVE_EBI1 0>;
++
++ memory-region = <&pil_cdsp_mem>;
++
++ qcom,qmp = <&aoss_qmp>;
++
++ qcom,smem-states = <&smp2p_cdsp_out 0>;
++ qcom,smem-state-names = "stop";
++
++ status = "disabled";
++
++ glink-edge {
++ interrupts-extended = <&ipcc IPCC_CLIENT_CDSP
++ IPCC_MPROC_SIGNAL_GLINK_QMP
++ IRQ_TYPE_EDGE_RISING>;
++ mboxes = <&ipcc IPCC_CLIENT_CDSP
++ IPCC_MPROC_SIGNAL_GLINK_QMP>;
++
++ label = "cdsp";
++ qcom,remote-pid = <5>;
++
++ fastrpc {
++ compatible = "qcom,fastrpc";
++ qcom,glink-channels = "fastrpcglink-apps-dsp";
++ label = "cdsp";
++ qcom,non-secure-domain;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ compute-cb@1 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <1>;
++ iommus = <&apps_smmu 0x2161 0x0400>,
++ <&apps_smmu 0x1181 0x0420>;
++ };
++
++ compute-cb@2 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <2>;
++ iommus = <&apps_smmu 0x2162 0x0400>,
++ <&apps_smmu 0x1182 0x0420>;
++ };
++
++ compute-cb@3 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <3>;
++ iommus = <&apps_smmu 0x2163 0x0400>,
++ <&apps_smmu 0x1183 0x0420>;
++ };
++
++ compute-cb@4 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <4>;
++ iommus = <&apps_smmu 0x2164 0x0400>,
++ <&apps_smmu 0x1184 0x0420>;
++ };
++
++ compute-cb@5 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <5>;
++ iommus = <&apps_smmu 0x2165 0x0400>,
++ <&apps_smmu 0x1185 0x0420>;
++ };
++
++ compute-cb@6 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <6>;
++ iommus = <&apps_smmu 0x2166 0x0400>,
++ <&apps_smmu 0x1186 0x0420>;
++ };
++
++ compute-cb@7 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <7>;
++ iommus = <&apps_smmu 0x2167 0x0400>,
++ <&apps_smmu 0x1187 0x0420>;
++ };
++
++ compute-cb@8 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <8>;
++ iommus = <&apps_smmu 0x2168 0x0400>,
++ <&apps_smmu 0x1188 0x0420>;
++ };
++
++ /* note: secure cb9 in downstream */
++ };
++ };
++ };
++
+ usb_1: usb@a6f8800 {
+ compatible = "qcom,sm8350-dwc3", "qcom,dwc3";
+ reg = <0 0x0a6f8800 0 0x400>;
+@@ -3284,142 +3529,6 @@ apps_smmu: iommu@15000000 {
+ <GIC_SPI 707 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+- adsp: remoteproc@17300000 {
+- compatible = "qcom,sm8350-adsp-pas";
+- reg = <0 0x17300000 0 0x100>;
+-
+- interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
+- interrupt-names = "wdog", "fatal", "ready",
+- "handover", "stop-ack";
+-
+- clocks = <&rpmhcc RPMH_CXO_CLK>;
+- clock-names = "xo";
+-
+- power-domains = <&rpmhpd RPMHPD_LCX>,
+- <&rpmhpd RPMHPD_LMX>;
+- power-domain-names = "lcx", "lmx";
+-
+- memory-region = <&pil_adsp_mem>;
+-
+- qcom,qmp = <&aoss_qmp>;
+-
+- qcom,smem-states = <&smp2p_adsp_out 0>;
+- qcom,smem-state-names = "stop";
+-
+- status = "disabled";
+-
+- glink-edge {
+- interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP
+- IRQ_TYPE_EDGE_RISING>;
+- mboxes = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP>;
+-
+- label = "lpass";
+- qcom,remote-pid = <2>;
+-
+- apr {
+- compatible = "qcom,apr-v2";
+- qcom,glink-channels = "apr_audio_svc";
+- qcom,domain = <APR_DOMAIN_ADSP>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- service@3 {
+- reg = <APR_SVC_ADSP_CORE>;
+- compatible = "qcom,q6core";
+- qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+- };
+-
+- q6afe: service@4 {
+- compatible = "qcom,q6afe";
+- reg = <APR_SVC_AFE>;
+- qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+-
+- q6afedai: dais {
+- compatible = "qcom,q6afe-dais";
+- #address-cells = <1>;
+- #size-cells = <0>;
+- #sound-dai-cells = <1>;
+- };
+-
+- q6afecc: clock-controller {
+- compatible = "qcom,q6afe-clocks";
+- #clock-cells = <2>;
+- };
+- };
+-
+- q6asm: service@7 {
+- compatible = "qcom,q6asm";
+- reg = <APR_SVC_ASM>;
+- qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+-
+- q6asmdai: dais {
+- compatible = "qcom,q6asm-dais";
+- #address-cells = <1>;
+- #size-cells = <0>;
+- #sound-dai-cells = <1>;
+- iommus = <&apps_smmu 0x1801 0x0>;
+-
+- dai@0 {
+- reg = <0>;
+- };
+-
+- dai@1 {
+- reg = <1>;
+- };
+-
+- dai@2 {
+- reg = <2>;
+- };
+- };
+- };
+-
+- q6adm: service@8 {
+- compatible = "qcom,q6adm";
+- reg = <APR_SVC_ADM>;
+- qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+-
+- q6routing: routing {
+- compatible = "qcom,q6adm-routing";
+- #sound-dai-cells = <0>;
+- };
+- };
+- };
+-
+- fastrpc {
+- compatible = "qcom,fastrpc";
+- qcom,glink-channels = "fastrpcglink-apps-dsp";
+- label = "adsp";
+- qcom,non-secure-domain;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- compute-cb@3 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <3>;
+- iommus = <&apps_smmu 0x1803 0x0>;
+- };
+-
+- compute-cb@4 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <4>;
+- iommus = <&apps_smmu 0x1804 0x0>;
+- };
+-
+- compute-cb@5 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <5>;
+- iommus = <&apps_smmu 0x1805 0x0>;
+- };
+- };
+- };
+- };
+-
+ intc: interrupt-controller@17a00000 {
+ compatible = "arm,gic-v3";
+ #interrupt-cells = <3>;
+@@ -3588,115 +3697,6 @@ cpufreq_hw: cpufreq@18591000 {
+ #freq-domain-cells = <1>;
+ #clock-cells = <1>;
+ };
+-
+- cdsp: remoteproc@98900000 {
+- compatible = "qcom,sm8350-cdsp-pas";
+- reg = <0 0x98900000 0 0x1400000>;
+-
+- interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_cdsp_in 1 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_cdsp_in 2 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_cdsp_in 3 IRQ_TYPE_EDGE_RISING>;
+- interrupt-names = "wdog", "fatal", "ready",
+- "handover", "stop-ack";
+-
+- clocks = <&rpmhcc RPMH_CXO_CLK>;
+- clock-names = "xo";
+-
+- power-domains = <&rpmhpd RPMHPD_CX>,
+- <&rpmhpd RPMHPD_MXC>;
+- power-domain-names = "cx", "mxc";
+-
+- interconnects = <&compute_noc MASTER_CDSP_PROC 0 &mc_virt SLAVE_EBI1 0>;
+-
+- memory-region = <&pil_cdsp_mem>;
+-
+- qcom,qmp = <&aoss_qmp>;
+-
+- qcom,smem-states = <&smp2p_cdsp_out 0>;
+- qcom,smem-state-names = "stop";
+-
+- status = "disabled";
+-
+- glink-edge {
+- interrupts-extended = <&ipcc IPCC_CLIENT_CDSP
+- IPCC_MPROC_SIGNAL_GLINK_QMP
+- IRQ_TYPE_EDGE_RISING>;
+- mboxes = <&ipcc IPCC_CLIENT_CDSP
+- IPCC_MPROC_SIGNAL_GLINK_QMP>;
+-
+- label = "cdsp";
+- qcom,remote-pid = <5>;
+-
+- fastrpc {
+- compatible = "qcom,fastrpc";
+- qcom,glink-channels = "fastrpcglink-apps-dsp";
+- label = "cdsp";
+- qcom,non-secure-domain;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- compute-cb@1 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <1>;
+- iommus = <&apps_smmu 0x2161 0x0400>,
+- <&apps_smmu 0x1181 0x0420>;
+- };
+-
+- compute-cb@2 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <2>;
+- iommus = <&apps_smmu 0x2162 0x0400>,
+- <&apps_smmu 0x1182 0x0420>;
+- };
+-
+- compute-cb@3 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <3>;
+- iommus = <&apps_smmu 0x2163 0x0400>,
+- <&apps_smmu 0x1183 0x0420>;
+- };
+-
+- compute-cb@4 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <4>;
+- iommus = <&apps_smmu 0x2164 0x0400>,
+- <&apps_smmu 0x1184 0x0420>;
+- };
+-
+- compute-cb@5 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <5>;
+- iommus = <&apps_smmu 0x2165 0x0400>,
+- <&apps_smmu 0x1185 0x0420>;
+- };
+-
+- compute-cb@6 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <6>;
+- iommus = <&apps_smmu 0x2166 0x0400>,
+- <&apps_smmu 0x1186 0x0420>;
+- };
+-
+- compute-cb@7 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <7>;
+- iommus = <&apps_smmu 0x2167 0x0400>,
+- <&apps_smmu 0x1187 0x0420>;
+- };
+-
+- compute-cb@8 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <8>;
+- iommus = <&apps_smmu 0x2168 0x0400>,
+- <&apps_smmu 0x1188 0x0420>;
+- };
+-
+- /* note: secure cb9 in downstream */
+- };
+- };
+- };
+ };
+
+ thermal_zones: thermal-zones {
+diff --git a/arch/arm64/boot/dts/qcom/sm8450.dtsi b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+index f7d52e491b694b..d664a88a018efb 100644
+--- a/arch/arm64/boot/dts/qcom/sm8450.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8450.dtsi
+@@ -2492,6 +2492,112 @@ compute-cb@3 {
+ };
+ };
+
++ remoteproc_adsp: remoteproc@3000000 {
++ compatible = "qcom,sm8450-adsp-pas";
++ reg = <0x0 0x03000000 0x0 0x10000>;
++
++ interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
++ interrupt-names = "wdog", "fatal", "ready",
++ "handover", "stop-ack";
++
++ clocks = <&rpmhcc RPMH_CXO_CLK>;
++ clock-names = "xo";
++
++ power-domains = <&rpmhpd RPMHPD_LCX>,
++ <&rpmhpd RPMHPD_LMX>;
++ power-domain-names = "lcx", "lmx";
++
++ memory-region = <&adsp_mem>;
++
++ qcom,qmp = <&aoss_qmp>;
++
++ qcom,smem-states = <&smp2p_adsp_out 0>;
++ qcom,smem-state-names = "stop";
++
++ status = "disabled";
++
++ remoteproc_adsp_glink: glink-edge {
++ interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP
++ IRQ_TYPE_EDGE_RISING>;
++ mboxes = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP>;
++
++ label = "lpass";
++ qcom,remote-pid = <2>;
++
++ gpr {
++ compatible = "qcom,gpr";
++ qcom,glink-channels = "adsp_apps";
++ qcom,domain = <GPR_DOMAIN_ID_ADSP>;
++ qcom,intents = <512 20>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ q6apm: service@1 {
++ compatible = "qcom,q6apm";
++ reg = <GPR_APM_MODULE_IID>;
++ #sound-dai-cells = <0>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6apmdai: dais {
++ compatible = "qcom,q6apm-dais";
++ iommus = <&apps_smmu 0x1801 0x0>;
++ };
++
++ q6apmbedai: bedais {
++ compatible = "qcom,q6apm-lpass-dais";
++ #sound-dai-cells = <1>;
++ };
++ };
++
++ q6prm: service@2 {
++ compatible = "qcom,q6prm";
++ reg = <GPR_PRM_MODULE_IID>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6prmcc: clock-controller {
++ compatible = "qcom,q6prm-lpass-clocks";
++ #clock-cells = <2>;
++ };
++ };
++ };
++
++ fastrpc {
++ compatible = "qcom,fastrpc";
++ qcom,glink-channels = "fastrpcglink-apps-dsp";
++ label = "adsp";
++ qcom,non-secure-domain;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ compute-cb@3 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <3>;
++ iommus = <&apps_smmu 0x1803 0x0>;
++ };
++
++ compute-cb@4 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <4>;
++ iommus = <&apps_smmu 0x1804 0x0>;
++ };
++
++ compute-cb@5 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <5>;
++ iommus = <&apps_smmu 0x1805 0x0>;
++ };
++ };
++ };
++ };
++
+ wsa2macro: codec@31e0000 {
+ compatible = "qcom,sm8450-lpass-wsa-macro";
+ reg = <0 0x031e0000 0 0x1000>;
+@@ -2688,115 +2794,9 @@ vamacro: codec@33f0000 {
+ status = "disabled";
+ };
+
+- remoteproc_adsp: remoteproc@30000000 {
+- compatible = "qcom,sm8450-adsp-pas";
+- reg = <0 0x30000000 0 0x100>;
+-
+- interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
+- interrupt-names = "wdog", "fatal", "ready",
+- "handover", "stop-ack";
+-
+- clocks = <&rpmhcc RPMH_CXO_CLK>;
+- clock-names = "xo";
+-
+- power-domains = <&rpmhpd RPMHPD_LCX>,
+- <&rpmhpd RPMHPD_LMX>;
+- power-domain-names = "lcx", "lmx";
+-
+- memory-region = <&adsp_mem>;
+-
+- qcom,qmp = <&aoss_qmp>;
+-
+- qcom,smem-states = <&smp2p_adsp_out 0>;
+- qcom,smem-state-names = "stop";
+-
+- status = "disabled";
+-
+- remoteproc_adsp_glink: glink-edge {
+- interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP
+- IRQ_TYPE_EDGE_RISING>;
+- mboxes = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP>;
+-
+- label = "lpass";
+- qcom,remote-pid = <2>;
+-
+- gpr {
+- compatible = "qcom,gpr";
+- qcom,glink-channels = "adsp_apps";
+- qcom,domain = <GPR_DOMAIN_ID_ADSP>;
+- qcom,intents = <512 20>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- q6apm: service@1 {
+- compatible = "qcom,q6apm";
+- reg = <GPR_APM_MODULE_IID>;
+- #sound-dai-cells = <0>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6apmdai: dais {
+- compatible = "qcom,q6apm-dais";
+- iommus = <&apps_smmu 0x1801 0x0>;
+- };
+-
+- q6apmbedai: bedais {
+- compatible = "qcom,q6apm-lpass-dais";
+- #sound-dai-cells = <1>;
+- };
+- };
+-
+- q6prm: service@2 {
+- compatible = "qcom,q6prm";
+- reg = <GPR_PRM_MODULE_IID>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6prmcc: clock-controller {
+- compatible = "qcom,q6prm-lpass-clocks";
+- #clock-cells = <2>;
+- };
+- };
+- };
+-
+- fastrpc {
+- compatible = "qcom,fastrpc";
+- qcom,glink-channels = "fastrpcglink-apps-dsp";
+- label = "adsp";
+- qcom,non-secure-domain;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- compute-cb@3 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <3>;
+- iommus = <&apps_smmu 0x1803 0x0>;
+- };
+-
+- compute-cb@4 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <4>;
+- iommus = <&apps_smmu 0x1804 0x0>;
+- };
+-
+- compute-cb@5 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <5>;
+- iommus = <&apps_smmu 0x1805 0x0>;
+- };
+- };
+- };
+- };
+-
+ remoteproc_cdsp: remoteproc@32300000 {
+ compatible = "qcom,sm8450-cdsp-pas";
+- reg = <0 0x32300000 0 0x1400000>;
++ reg = <0 0x32300000 0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -2903,7 +2903,7 @@ compute-cb@8 {
+
+ remoteproc_mpss: remoteproc@4080000 {
+ compatible = "qcom,sm8450-mpss-pas";
+- reg = <0x0 0x04080000 0x0 0x4040>;
++ reg = <0x0 0x04080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 264 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_modem_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sm8550.dtsi b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+index 9dc0ee3eb98f87..9ecf4a7fc3287a 100644
+--- a/arch/arm64/boot/dts/qcom/sm8550.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8550.dtsi
+@@ -2313,7 +2313,7 @@ ipa: ipa@3f40000 {
+
+ remoteproc_mpss: remoteproc@4080000 {
+ compatible = "qcom,sm8550-mpss-pas";
+- reg = <0x0 0x04080000 0x0 0x4040>;
++ reg = <0x0 0x04080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 264 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_modem_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -2353,6 +2353,137 @@ IPCC_MPROC_SIGNAL_GLINK_QMP
+ };
+ };
+
++ remoteproc_adsp: remoteproc@6800000 {
++ compatible = "qcom,sm8550-adsp-pas";
++ reg = <0x0 0x06800000 0x0 0x10000>;
++
++ interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
++ interrupt-names = "wdog", "fatal", "ready",
++ "handover", "stop-ack";
++
++ clocks = <&rpmhcc RPMH_CXO_CLK>;
++ clock-names = "xo";
++
++ power-domains = <&rpmhpd RPMHPD_LCX>,
++ <&rpmhpd RPMHPD_LMX>;
++ power-domain-names = "lcx", "lmx";
++
++ interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC 0 &mc_virt SLAVE_EBI1 0>;
++
++ memory-region = <&adspslpi_mem>, <&q6_adsp_dtb_mem>;
++
++ qcom,qmp = <&aoss_qmp>;
++
++ qcom,smem-states = <&smp2p_adsp_out 0>;
++ qcom,smem-state-names = "stop";
++
++ status = "disabled";
++
++ remoteproc_adsp_glink: glink-edge {
++ interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP
++ IRQ_TYPE_EDGE_RISING>;
++ mboxes = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP>;
++
++ label = "lpass";
++ qcom,remote-pid = <2>;
++
++ fastrpc {
++ compatible = "qcom,fastrpc";
++ qcom,glink-channels = "fastrpcglink-apps-dsp";
++ label = "adsp";
++ qcom,non-secure-domain;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ compute-cb@3 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <3>;
++ iommus = <&apps_smmu 0x1003 0x80>,
++ <&apps_smmu 0x1063 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@4 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <4>;
++ iommus = <&apps_smmu 0x1004 0x80>,
++ <&apps_smmu 0x1064 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@5 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <5>;
++ iommus = <&apps_smmu 0x1005 0x80>,
++ <&apps_smmu 0x1065 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@6 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <6>;
++ iommus = <&apps_smmu 0x1006 0x80>,
++ <&apps_smmu 0x1066 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@7 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <7>;
++ iommus = <&apps_smmu 0x1007 0x80>,
++ <&apps_smmu 0x1067 0x0>;
++ dma-coherent;
++ };
++ };
++
++ gpr {
++ compatible = "qcom,gpr";
++ qcom,glink-channels = "adsp_apps";
++ qcom,domain = <GPR_DOMAIN_ID_ADSP>;
++ qcom,intents = <512 20>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ q6apm: service@1 {
++ compatible = "qcom,q6apm";
++ reg = <GPR_APM_MODULE_IID>;
++ #sound-dai-cells = <0>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6apmdai: dais {
++ compatible = "qcom,q6apm-dais";
++ iommus = <&apps_smmu 0x1001 0x80>,
++ <&apps_smmu 0x1061 0x0>;
++ };
++
++ q6apmbedai: bedais {
++ compatible = "qcom,q6apm-lpass-dais";
++ #sound-dai-cells = <1>;
++ };
++ };
++
++ q6prm: service@2 {
++ compatible = "qcom,q6prm";
++ reg = <GPR_PRM_MODULE_IID>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6prmcc: clock-controller {
++ compatible = "qcom,q6prm-lpass-clocks";
++ #clock-cells = <2>;
++ };
++ };
++ };
++ };
++ };
++
+ lpass_wsa2macro: codec@6aa0000 {
+ compatible = "qcom,sm8550-lpass-wsa-macro";
+ reg = <0 0x06aa0000 0 0x1000>;
+@@ -2871,9 +3002,8 @@ mdss: display-subsystem@ae00000 {
+
+ power-domains = <&dispcc MDSS_GDSC>;
+
+- interconnects = <&mmss_noc MASTER_MDP 0 &gem_noc SLAVE_LLCC 0>,
+- <&mc_virt MASTER_LLCC 0 &mc_virt SLAVE_EBI1 0>;
+- interconnect-names = "mdp0-mem", "mdp1-mem";
++ interconnects = <&mmss_noc MASTER_MDP 0 &mc_virt SLAVE_EBI1 0>;
++ interconnect-names = "mdp0-mem";
+
+ iommus = <&apps_smmu 0x1c00 0x2>;
+
+@@ -4575,137 +4705,6 @@ system-cache-controller@25000000 {
+ interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+- remoteproc_adsp: remoteproc@30000000 {
+- compatible = "qcom,sm8550-adsp-pas";
+- reg = <0x0 0x30000000 0x0 0x100>;
+-
+- interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
+- interrupt-names = "wdog", "fatal", "ready",
+- "handover", "stop-ack";
+-
+- clocks = <&rpmhcc RPMH_CXO_CLK>;
+- clock-names = "xo";
+-
+- power-domains = <&rpmhpd RPMHPD_LCX>,
+- <&rpmhpd RPMHPD_LMX>;
+- power-domain-names = "lcx", "lmx";
+-
+- interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC 0 &mc_virt SLAVE_EBI1 0>;
+-
+- memory-region = <&adspslpi_mem>, <&q6_adsp_dtb_mem>;
+-
+- qcom,qmp = <&aoss_qmp>;
+-
+- qcom,smem-states = <&smp2p_adsp_out 0>;
+- qcom,smem-state-names = "stop";
+-
+- status = "disabled";
+-
+- remoteproc_adsp_glink: glink-edge {
+- interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP
+- IRQ_TYPE_EDGE_RISING>;
+- mboxes = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP>;
+-
+- label = "lpass";
+- qcom,remote-pid = <2>;
+-
+- fastrpc {
+- compatible = "qcom,fastrpc";
+- qcom,glink-channels = "fastrpcglink-apps-dsp";
+- label = "adsp";
+- qcom,non-secure-domain;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- compute-cb@3 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <3>;
+- iommus = <&apps_smmu 0x1003 0x80>,
+- <&apps_smmu 0x1063 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@4 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <4>;
+- iommus = <&apps_smmu 0x1004 0x80>,
+- <&apps_smmu 0x1064 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@5 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <5>;
+- iommus = <&apps_smmu 0x1005 0x80>,
+- <&apps_smmu 0x1065 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@6 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <6>;
+- iommus = <&apps_smmu 0x1006 0x80>,
+- <&apps_smmu 0x1066 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@7 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <7>;
+- iommus = <&apps_smmu 0x1007 0x80>,
+- <&apps_smmu 0x1067 0x0>;
+- dma-coherent;
+- };
+- };
+-
+- gpr {
+- compatible = "qcom,gpr";
+- qcom,glink-channels = "adsp_apps";
+- qcom,domain = <GPR_DOMAIN_ID_ADSP>;
+- qcom,intents = <512 20>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- q6apm: service@1 {
+- compatible = "qcom,q6apm";
+- reg = <GPR_APM_MODULE_IID>;
+- #sound-dai-cells = <0>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6apmdai: dais {
+- compatible = "qcom,q6apm-dais";
+- iommus = <&apps_smmu 0x1001 0x80>,
+- <&apps_smmu 0x1061 0x0>;
+- };
+-
+- q6apmbedai: bedais {
+- compatible = "qcom,q6apm-lpass-dais";
+- #sound-dai-cells = <1>;
+- };
+- };
+-
+- q6prm: service@2 {
+- compatible = "qcom,q6prm";
+- reg = <GPR_PRM_MODULE_IID>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6prmcc: clock-controller {
+- compatible = "qcom,q6prm-lpass-clocks";
+- #clock-cells = <2>;
+- };
+- };
+- };
+- };
+- };
+-
+ nsp_noc: interconnect@320c0000 {
+ compatible = "qcom,sm8550-nsp-noc";
+ reg = <0 0x320c0000 0 0xe080>;
+@@ -4715,7 +4714,7 @@ nsp_noc: interconnect@320c0000 {
+
+ remoteproc_cdsp: remoteproc@32300000 {
+ compatible = "qcom,sm8550-cdsp-pas";
+- reg = <0x0 0x32300000 0x0 0x1400000>;
++ reg = <0x0 0x32300000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/sm8650.dtsi b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+index cd54fd723ce40e..416cfb71878a5f 100644
+--- a/arch/arm64/boot/dts/qcom/sm8650.dtsi
++++ b/arch/arm64/boot/dts/qcom/sm8650.dtsi
+@@ -2853,7 +2853,7 @@ ipa: ipa@3f40000 {
+
+ remoteproc_mpss: remoteproc@4080000 {
+ compatible = "qcom,sm8650-mpss-pas";
+- reg = <0 0x04080000 0 0x4040>;
++ reg = <0x0 0x04080000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 264 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_modem_in 0 IRQ_TYPE_EDGE_RISING>,
+@@ -2904,6 +2904,154 @@ IPCC_MPROC_SIGNAL_GLINK_QMP
+ };
+ };
+
++ remoteproc_adsp: remoteproc@6800000 {
++ compatible = "qcom,sm8650-adsp-pas";
++ reg = <0x0 0x06800000 0x0 0x10000>;
++
++ interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
++ interrupt-names = "wdog",
++ "fatal",
++ "ready",
++ "handover",
++ "stop-ack";
++
++ clocks = <&rpmhcc RPMH_CXO_CLK>;
++ clock-names = "xo";
++
++ interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC QCOM_ICC_TAG_ALWAYS
++ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
++
++ power-domains = <&rpmhpd RPMHPD_LCX>,
++ <&rpmhpd RPMHPD_LMX>;
++ power-domain-names = "lcx",
++ "lmx";
++
++ memory-region = <&adspslpi_mem>, <&q6_adsp_dtb_mem>;
++
++ qcom,qmp = <&aoss_qmp>;
++
++ qcom,smem-states = <&smp2p_adsp_out 0>;
++ qcom,smem-state-names = "stop";
++
++ status = "disabled";
++
++ remoteproc_adsp_glink: glink-edge {
++ interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP
++ IRQ_TYPE_EDGE_RISING>;
++
++ mboxes = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP>;
++
++ qcom,remote-pid = <2>;
++
++ label = "lpass";
++
++ fastrpc {
++ compatible = "qcom,fastrpc";
++
++ qcom,glink-channels = "fastrpcglink-apps-dsp";
++
++ label = "adsp";
++
++ qcom,non-secure-domain;
++
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ compute-cb@3 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <3>;
++
++ iommus = <&apps_smmu 0x1003 0x80>,
++ <&apps_smmu 0x1043 0x20>;
++ dma-coherent;
++ };
++
++ compute-cb@4 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <4>;
++
++ iommus = <&apps_smmu 0x1004 0x80>,
++ <&apps_smmu 0x1044 0x20>;
++ dma-coherent;
++ };
++
++ compute-cb@5 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <5>;
++
++ iommus = <&apps_smmu 0x1005 0x80>,
++ <&apps_smmu 0x1045 0x20>;
++ dma-coherent;
++ };
++
++ compute-cb@6 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <6>;
++
++ iommus = <&apps_smmu 0x1006 0x80>,
++ <&apps_smmu 0x1046 0x20>;
++ dma-coherent;
++ };
++
++ compute-cb@7 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <7>;
++
++ iommus = <&apps_smmu 0x1007 0x40>,
++ <&apps_smmu 0x1067 0x0>,
++ <&apps_smmu 0x1087 0x0>;
++ dma-coherent;
++ };
++ };
++
++ gpr {
++ compatible = "qcom,gpr";
++ qcom,glink-channels = "adsp_apps";
++ qcom,domain = <GPR_DOMAIN_ID_ADSP>;
++ qcom,intents = <512 20>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ q6apm: service@1 {
++ compatible = "qcom,q6apm";
++ reg = <GPR_APM_MODULE_IID>;
++ #sound-dai-cells = <0>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6apmbedai: bedais {
++ compatible = "qcom,q6apm-lpass-dais";
++ #sound-dai-cells = <1>;
++ };
++
++ q6apmdai: dais {
++ compatible = "qcom,q6apm-dais";
++ iommus = <&apps_smmu 0x1001 0x80>,
++ <&apps_smmu 0x1061 0x0>;
++ };
++ };
++
++ q6prm: service@2 {
++ compatible = "qcom,q6prm";
++ reg = <GPR_PRM_MODULE_IID>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6prmcc: clock-controller {
++ compatible = "qcom,q6prm-lpass-clocks";
++ #clock-cells = <2>;
++ };
++ };
++ };
++ };
++ };
++
+ lpass_wsa2macro: codec@6aa0000 {
+ compatible = "qcom,sm8650-lpass-wsa-macro", "qcom,sm8550-lpass-wsa-macro";
+ reg = <0 0x06aa0000 0 0x1000>;
+@@ -3455,11 +3603,8 @@ mdss: display-subsystem@ae00000 {
+ resets = <&dispcc DISP_CC_MDSS_CORE_BCR>;
+
+ interconnects = <&mmss_noc MASTER_MDP QCOM_ICC_TAG_ALWAYS
+- &gem_noc SLAVE_LLCC QCOM_ICC_TAG_ALWAYS>,
+- <&mc_virt MASTER_LLCC QCOM_ICC_TAG_ALWAYS
+ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+- interconnect-names = "mdp0-mem",
+- "mdp1-mem";
++ interconnect-names = "mdp0-mem";
+
+ power-domains = <&dispcc MDSS_GDSC>;
+
+@@ -5324,154 +5469,6 @@ system-cache-controller@25000000 {
+ interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+- remoteproc_adsp: remoteproc@30000000 {
+- compatible = "qcom,sm8650-adsp-pas";
+- reg = <0 0x30000000 0 0x100>;
+-
+- interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
+- interrupt-names = "wdog",
+- "fatal",
+- "ready",
+- "handover",
+- "stop-ack";
+-
+- clocks = <&rpmhcc RPMH_CXO_CLK>;
+- clock-names = "xo";
+-
+- interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC QCOM_ICC_TAG_ALWAYS
+- &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+-
+- power-domains = <&rpmhpd RPMHPD_LCX>,
+- <&rpmhpd RPMHPD_LMX>;
+- power-domain-names = "lcx",
+- "lmx";
+-
+- memory-region = <&adspslpi_mem>, <&q6_adsp_dtb_mem>;
+-
+- qcom,qmp = <&aoss_qmp>;
+-
+- qcom,smem-states = <&smp2p_adsp_out 0>;
+- qcom,smem-state-names = "stop";
+-
+- status = "disabled";
+-
+- remoteproc_adsp_glink: glink-edge {
+- interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP
+- IRQ_TYPE_EDGE_RISING>;
+-
+- mboxes = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP>;
+-
+- qcom,remote-pid = <2>;
+-
+- label = "lpass";
+-
+- fastrpc {
+- compatible = "qcom,fastrpc";
+-
+- qcom,glink-channels = "fastrpcglink-apps-dsp";
+-
+- label = "adsp";
+-
+- qcom,non-secure-domain;
+-
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- compute-cb@3 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <3>;
+-
+- iommus = <&apps_smmu 0x1003 0x80>,
+- <&apps_smmu 0x1043 0x20>;
+- dma-coherent;
+- };
+-
+- compute-cb@4 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <4>;
+-
+- iommus = <&apps_smmu 0x1004 0x80>,
+- <&apps_smmu 0x1044 0x20>;
+- dma-coherent;
+- };
+-
+- compute-cb@5 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <5>;
+-
+- iommus = <&apps_smmu 0x1005 0x80>,
+- <&apps_smmu 0x1045 0x20>;
+- dma-coherent;
+- };
+-
+- compute-cb@6 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <6>;
+-
+- iommus = <&apps_smmu 0x1006 0x80>,
+- <&apps_smmu 0x1046 0x20>;
+- dma-coherent;
+- };
+-
+- compute-cb@7 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <7>;
+-
+- iommus = <&apps_smmu 0x1007 0x40>,
+- <&apps_smmu 0x1067 0x0>,
+- <&apps_smmu 0x1087 0x0>;
+- dma-coherent;
+- };
+- };
+-
+- gpr {
+- compatible = "qcom,gpr";
+- qcom,glink-channels = "adsp_apps";
+- qcom,domain = <GPR_DOMAIN_ID_ADSP>;
+- qcom,intents = <512 20>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- q6apm: service@1 {
+- compatible = "qcom,q6apm";
+- reg = <GPR_APM_MODULE_IID>;
+- #sound-dai-cells = <0>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6apmbedai: bedais {
+- compatible = "qcom,q6apm-lpass-dais";
+- #sound-dai-cells = <1>;
+- };
+-
+- q6apmdai: dais {
+- compatible = "qcom,q6apm-dais";
+- iommus = <&apps_smmu 0x1001 0x80>,
+- <&apps_smmu 0x1061 0x0>;
+- };
+- };
+-
+- q6prm: service@2 {
+- compatible = "qcom,q6prm";
+- reg = <GPR_PRM_MODULE_IID>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6prmcc: clock-controller {
+- compatible = "qcom,q6prm-lpass-clocks";
+- #clock-cells = <2>;
+- };
+- };
+- };
+- };
+- };
+-
+ nsp_noc: interconnect@320c0000 {
+ compatible = "qcom,sm8650-nsp-noc";
+ reg = <0 0x320c0000 0 0xf080>;
+@@ -5483,7 +5480,7 @@ nsp_noc: interconnect@320c0000 {
+
+ remoteproc_cdsp: remoteproc@32300000 {
+ compatible = "qcom,sm8650-cdsp-pas";
+- reg = <0 0x32300000 0 0x1400000>;
++ reg = <0x0 0x32300000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts b/arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts
+index fdde988ae01ebd..b1fa8f3558b3fc 100644
+--- a/arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts
++++ b/arch/arm64/boot/dts/qcom/x1e78100-lenovo-thinkpad-t14s.dts
+@@ -754,7 +754,7 @@ &usb_1_ss0_hsphy {
+ };
+
+ &usb_1_ss0_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+ status = "okay";
+@@ -786,7 +786,7 @@ &usb_1_ss1_hsphy {
+ };
+
+ &usb_1_ss1_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+index 2926a1aba76873..b2cf080cab5622 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-asus-vivobook-s15.dts
+@@ -591,7 +591,7 @@ &usb_1_ss0_hsphy {
+ };
+
+ &usb_1_ss0_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+ status = "okay";
+@@ -623,7 +623,7 @@ &usb_1_ss1_hsphy {
+ };
+
+ &usb_1_ss1_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+index c6e0356ed9a2a2..044a2f1432fe32 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts
+@@ -1147,7 +1147,7 @@ &usb_1_ss0_hsphy {
+ };
+
+ &usb_1_ss0_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+ status = "okay";
+@@ -1179,7 +1179,7 @@ &usb_1_ss1_hsphy {
+ };
+
+ &usb_1_ss1_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+@@ -1211,7 +1211,7 @@ &usb_1_ss2_hsphy {
+ };
+
+ &usb_1_ss2_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+index f22e5c840a2e55..e9ed723f90381a 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-lenovo-yoga-slim7x.dts
+@@ -895,7 +895,7 @@ &usb_1_ss0_hsphy {
+ };
+
+ &usb_1_ss0_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+ status = "okay";
+@@ -927,7 +927,7 @@ &usb_1_ss1_hsphy {
+ };
+
+ &usb_1_ss1_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+@@ -959,7 +959,7 @@ &usb_1_ss2_hsphy {
+ };
+
+ &usb_1_ss2_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi b/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
+index 89e39d55278579..19da90704b7cb9 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100-microsoft-romulus.dtsi
+@@ -782,7 +782,7 @@ &usb_1_ss0_hsphy {
+ };
+
+ &usb_1_ss0_qmpphy {
+- vdda-phy-supply = <&vreg_l3e>;
++ vdda-phy-supply = <&vreg_l2j>;
+ vdda-pll-supply = <&vreg_l1j>;
+
+ status = "okay";
+@@ -814,7 +814,7 @@ &usb_1_ss1_hsphy {
+ };
+
+ &usb_1_ss1_qmpphy {
+- vdda-phy-supply = <&vreg_l3e>;
++ vdda-phy-supply = <&vreg_l2j>;
+ vdda-pll-supply = <&vreg_l2d>;
+
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+index 5ef030c60abe29..af76aa034d0e17 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
++++ b/arch/arm64/boot/dts/qcom/x1e80100-qcp.dts
+@@ -896,7 +896,7 @@ &usb_1_ss0_hsphy {
+ };
+
+ &usb_1_ss0_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l1j_0p8>;
+
+ status = "okay";
+@@ -928,7 +928,7 @@ &usb_1_ss1_hsphy {
+ };
+
+ &usb_1_ss1_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+@@ -960,7 +960,7 @@ &usb_1_ss2_hsphy {
+ };
+
+ &usb_1_ss2_qmpphy {
+- vdda-phy-supply = <&vreg_l3e_1p2>;
++ vdda-phy-supply = <&vreg_l2j_1p2>;
+ vdda-pll-supply = <&vreg_l2d_0p9>;
+
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/qcom/x1e80100.dtsi b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+index f0797df9619b15..91e4fbca19f99c 100644
+--- a/arch/arm64/boot/dts/qcom/x1e80100.dtsi
++++ b/arch/arm64/boot/dts/qcom/x1e80100.dtsi
+@@ -3515,6 +3515,143 @@ nsp_noc: interconnect@320c0000 {
+ #interconnect-cells = <2>;
+ };
+
++ remoteproc_adsp: remoteproc@6800000 {
++ compatible = "qcom,x1e80100-adsp-pas";
++ reg = <0x0 0x06800000 0x0 0x10000>;
++
++ interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
++ <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
++ interrupt-names = "wdog",
++ "fatal",
++ "ready",
++ "handover",
++ "stop-ack";
++
++ clocks = <&rpmhcc RPMH_CXO_CLK>;
++ clock-names = "xo";
++
++ power-domains = <&rpmhpd RPMHPD_LCX>,
++ <&rpmhpd RPMHPD_LMX>;
++ power-domain-names = "lcx",
++ "lmx";
++
++ interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC QCOM_ICC_TAG_ALWAYS
++ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
++
++ memory-region = <&adspslpi_mem>,
++ <&q6_adsp_dtb_mem>;
++
++ qcom,qmp = <&aoss_qmp>;
++
++ qcom,smem-states = <&smp2p_adsp_out 0>;
++ qcom,smem-state-names = "stop";
++
++ status = "disabled";
++
++ glink-edge {
++ interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP
++ IRQ_TYPE_EDGE_RISING>;
++ mboxes = <&ipcc IPCC_CLIENT_LPASS
++ IPCC_MPROC_SIGNAL_GLINK_QMP>;
++
++ label = "lpass";
++ qcom,remote-pid = <2>;
++
++ fastrpc {
++ compatible = "qcom,fastrpc";
++ qcom,glink-channels = "fastrpcglink-apps-dsp";
++ label = "adsp";
++ qcom,non-secure-domain;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ compute-cb@3 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <3>;
++ iommus = <&apps_smmu 0x1003 0x80>,
++ <&apps_smmu 0x1063 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@4 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <4>;
++ iommus = <&apps_smmu 0x1004 0x80>,
++ <&apps_smmu 0x1064 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@5 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <5>;
++ iommus = <&apps_smmu 0x1005 0x80>,
++ <&apps_smmu 0x1065 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@6 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <6>;
++ iommus = <&apps_smmu 0x1006 0x80>,
++ <&apps_smmu 0x1066 0x0>;
++ dma-coherent;
++ };
++
++ compute-cb@7 {
++ compatible = "qcom,fastrpc-compute-cb";
++ reg = <7>;
++ iommus = <&apps_smmu 0x1007 0x80>,
++ <&apps_smmu 0x1067 0x0>;
++ dma-coherent;
++ };
++ };
++
++ gpr {
++ compatible = "qcom,gpr";
++ qcom,glink-channels = "adsp_apps";
++ qcom,domain = <GPR_DOMAIN_ID_ADSP>;
++ qcom,intents = <512 20>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ q6apm: service@1 {
++ compatible = "qcom,q6apm";
++ reg = <GPR_APM_MODULE_IID>;
++ #sound-dai-cells = <0>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6apmbedai: bedais {
++ compatible = "qcom,q6apm-lpass-dais";
++ #sound-dai-cells = <1>;
++ };
++
++ q6apmdai: dais {
++ compatible = "qcom,q6apm-dais";
++ iommus = <&apps_smmu 0x1001 0x80>,
++ <&apps_smmu 0x1061 0x0>;
++ };
++ };
++
++ q6prm: service@2 {
++ compatible = "qcom,q6prm";
++ reg = <GPR_PRM_MODULE_IID>;
++ qcom,protection-domain = "avs/audio",
++ "msm/adsp/audio_pd";
++
++ q6prmcc: clock-controller {
++ compatible = "qcom,q6prm-lpass-clocks";
++ #clock-cells = <2>;
++ };
++ };
++ };
++ };
++ };
++
+ lpass_wsa2macro: codec@6aa0000 {
+ compatible = "qcom,x1e80100-lpass-wsa-macro", "qcom,sm8550-lpass-wsa-macro";
+ reg = <0 0x06aa0000 0 0x1000>;
+@@ -4115,7 +4252,7 @@ usb_2: usb@a2f8800 {
+ <&gcc GCC_USB20_MASTER_CLK>;
+ assigned-clock-rates = <19200000>, <200000000>;
+
+- interrupts-extended = <&intc GIC_SPI 240 IRQ_TYPE_LEVEL_HIGH>,
++ interrupts-extended = <&intc GIC_SPI 245 IRQ_TYPE_LEVEL_HIGH>,
+ <&pdc 50 IRQ_TYPE_EDGE_BOTH>,
+ <&pdc 49 IRQ_TYPE_EDGE_BOTH>;
+ interrupt-names = "pwr_event",
+@@ -4141,7 +4278,7 @@ &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>,
+ usb_2_dwc3: usb@a200000 {
+ compatible = "snps,dwc3";
+ reg = <0 0x0a200000 0 0xcd00>;
+- interrupts = <GIC_SPI 241 IRQ_TYPE_LEVEL_HIGH>;
++ interrupts = <GIC_SPI 240 IRQ_TYPE_LEVEL_HIGH>;
+ iommus = <&apps_smmu 0x14e0 0x0>;
+ phys = <&usb_2_hsphy>;
+ phy-names = "usb2-phy";
+@@ -6108,146 +6245,9 @@ system-cache-controller@25000000 {
+ interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+- remoteproc_adsp: remoteproc@30000000 {
+- compatible = "qcom,x1e80100-adsp-pas";
+- reg = <0 0x30000000 0 0x100>;
+-
+- interrupts-extended = <&pdc 6 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 0 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 1 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 2 IRQ_TYPE_EDGE_RISING>,
+- <&smp2p_adsp_in 3 IRQ_TYPE_EDGE_RISING>;
+- interrupt-names = "wdog",
+- "fatal",
+- "ready",
+- "handover",
+- "stop-ack";
+-
+- clocks = <&rpmhcc RPMH_CXO_CLK>;
+- clock-names = "xo";
+-
+- power-domains = <&rpmhpd RPMHPD_LCX>,
+- <&rpmhpd RPMHPD_LMX>;
+- power-domain-names = "lcx",
+- "lmx";
+-
+- interconnects = <&lpass_lpicx_noc MASTER_LPASS_PROC QCOM_ICC_TAG_ALWAYS
+- &mc_virt SLAVE_EBI1 QCOM_ICC_TAG_ALWAYS>;
+-
+- memory-region = <&adspslpi_mem>,
+- <&q6_adsp_dtb_mem>;
+-
+- qcom,qmp = <&aoss_qmp>;
+-
+- qcom,smem-states = <&smp2p_adsp_out 0>;
+- qcom,smem-state-names = "stop";
+-
+- status = "disabled";
+-
+- glink-edge {
+- interrupts-extended = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP
+- IRQ_TYPE_EDGE_RISING>;
+- mboxes = <&ipcc IPCC_CLIENT_LPASS
+- IPCC_MPROC_SIGNAL_GLINK_QMP>;
+-
+- label = "lpass";
+- qcom,remote-pid = <2>;
+-
+- fastrpc {
+- compatible = "qcom,fastrpc";
+- qcom,glink-channels = "fastrpcglink-apps-dsp";
+- label = "adsp";
+- qcom,non-secure-domain;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- compute-cb@3 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <3>;
+- iommus = <&apps_smmu 0x1003 0x80>,
+- <&apps_smmu 0x1063 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@4 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <4>;
+- iommus = <&apps_smmu 0x1004 0x80>,
+- <&apps_smmu 0x1064 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@5 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <5>;
+- iommus = <&apps_smmu 0x1005 0x80>,
+- <&apps_smmu 0x1065 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@6 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <6>;
+- iommus = <&apps_smmu 0x1006 0x80>,
+- <&apps_smmu 0x1066 0x0>;
+- dma-coherent;
+- };
+-
+- compute-cb@7 {
+- compatible = "qcom,fastrpc-compute-cb";
+- reg = <7>;
+- iommus = <&apps_smmu 0x1007 0x80>,
+- <&apps_smmu 0x1067 0x0>;
+- dma-coherent;
+- };
+- };
+-
+- gpr {
+- compatible = "qcom,gpr";
+- qcom,glink-channels = "adsp_apps";
+- qcom,domain = <GPR_DOMAIN_ID_ADSP>;
+- qcom,intents = <512 20>;
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+- q6apm: service@1 {
+- compatible = "qcom,q6apm";
+- reg = <GPR_APM_MODULE_IID>;
+- #sound-dai-cells = <0>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6apmbedai: bedais {
+- compatible = "qcom,q6apm-lpass-dais";
+- #sound-dai-cells = <1>;
+- };
+-
+- q6apmdai: dais {
+- compatible = "qcom,q6apm-dais";
+- iommus = <&apps_smmu 0x1001 0x80>,
+- <&apps_smmu 0x1061 0x0>;
+- };
+- };
+-
+- q6prm: service@2 {
+- compatible = "qcom,q6prm";
+- reg = <GPR_PRM_MODULE_IID>;
+- qcom,protection-domain = "avs/audio",
+- "msm/adsp/audio_pd";
+-
+- q6prmcc: clock-controller {
+- compatible = "qcom,q6prm-lpass-clocks";
+- #clock-cells = <2>;
+- };
+- };
+- };
+- };
+- };
+-
+ remoteproc_cdsp: remoteproc@32300000 {
+ compatible = "qcom,x1e80100-cdsp-pas";
+- reg = <0 0x32300000 0 0x1400000>;
++ reg = <0x0 0x32300000 0x0 0x10000>;
+
+ interrupts-extended = <&intc GIC_SPI 578 IRQ_TYPE_EDGE_RISING>,
+ <&smp2p_cdsp_in 0 IRQ_TYPE_EDGE_RISING>,
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+index 650b1ba9c19213..257636d0d2cbb0 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi
+@@ -181,7 +181,7 @@ &gmac {
+ snps,reset-active-low;
+ snps,reset-delays-us = <0 10000 50000>;
+ tx_delay = <0x10>;
+- rx_delay = <0x10>;
++ rx_delay = <0x23>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/rockchip/rk3568.dtsi b/arch/arm64/boot/dts/rockchip/rk3568.dtsi
+index 0946310e8c1248..6fd67ae2711746 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3568.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3568.dtsi
+@@ -262,6 +262,7 @@ combphy0: phy@fe820000 {
+ assigned-clocks = <&pmucru CLK_PCIEPHY0_REF>;
+ assigned-clock-rates = <100000000>;
+ resets = <&cru SRST_PIPEPHY0>;
++ reset-names = "phy";
+ rockchip,pipe-grf = <&pipegrf>;
+ rockchip,pipe-phy-grf = <&pipe_phy_grf0>;
+ #phy-cells = <1>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk356x.dtsi b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
+index 0ee0ada6f0ab0f..bc0f57a26c2ff8 100644
+--- a/arch/arm64/boot/dts/rockchip/rk356x.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk356x.dtsi
+@@ -1762,6 +1762,7 @@ combphy1: phy@fe830000 {
+ assigned-clocks = <&pmucru CLK_PCIEPHY1_REF>;
+ assigned-clock-rates = <100000000>;
+ resets = <&cru SRST_PIPEPHY1>;
++ reset-names = "phy";
+ rockchip,pipe-grf = <&pipegrf>;
+ rockchip,pipe-phy-grf = <&pipe_phy_grf1>;
+ #phy-cells = <1>;
+@@ -1778,6 +1779,7 @@ combphy2: phy@fe840000 {
+ assigned-clocks = <&pmucru CLK_PCIEPHY2_REF>;
+ assigned-clock-rates = <100000000>;
+ resets = <&cru SRST_PIPEPHY2>;
++ reset-names = "phy";
+ rockchip,pipe-grf = <&pipegrf>;
+ rockchip,pipe-phy-grf = <&pipe_phy_grf2>;
+ #phy-cells = <1>;
+diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
+index bc0b0d75acef7b..c1f45fd6b3e9a9 100644
+--- a/arch/arm64/include/asm/assembler.h
++++ b/arch/arm64/include/asm/assembler.h
+@@ -350,6 +350,11 @@ alternative_cb_end
+ // Narrow PARange to fit the PS field in TCR_ELx
+ ubfx \tmp0, \tmp0, #ID_AA64MMFR0_EL1_PARANGE_SHIFT, #3
+ mov \tmp1, #ID_AA64MMFR0_EL1_PARANGE_MAX
++#ifdef CONFIG_ARM64_LPA2
++alternative_if_not ARM64_HAS_VA52
++ mov \tmp1, #ID_AA64MMFR0_EL1_PARANGE_48
++alternative_else_nop_endif
++#endif
+ cmp \tmp0, \tmp1
+ csel \tmp0, \tmp1, \tmp0, hi
+ bfi \tcr, \tmp0, \pos, #3
+diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
+index fd330c1db289a6..a970def932aacb 100644
+--- a/arch/arm64/include/asm/pgtable-hwdef.h
++++ b/arch/arm64/include/asm/pgtable-hwdef.h
+@@ -218,12 +218,6 @@
+ */
+ #define S1_TABLE_AP (_AT(pmdval_t, 3) << 61)
+
+-/*
+- * Highest possible physical address supported.
+- */
+-#define PHYS_MASK_SHIFT (CONFIG_ARM64_PA_BITS)
+-#define PHYS_MASK ((UL(1) << PHYS_MASK_SHIFT) - 1)
+-
+ #define TTBR_CNP_BIT (UL(1) << 0)
+
+ /*
+diff --git a/arch/arm64/include/asm/pgtable-prot.h b/arch/arm64/include/asm/pgtable-prot.h
+index 2a11d0c10760b9..3ce7c632fbfbc3 100644
+--- a/arch/arm64/include/asm/pgtable-prot.h
++++ b/arch/arm64/include/asm/pgtable-prot.h
+@@ -78,6 +78,7 @@ extern bool arm64_use_ng_mappings;
+ #define lpa2_is_enabled() false
+ #define PTE_MAYBE_SHARED PTE_SHARED
+ #define PMD_MAYBE_SHARED PMD_SECT_S
++#define PHYS_MASK_SHIFT (CONFIG_ARM64_PA_BITS)
+ #else
+ static inline bool __pure lpa2_is_enabled(void)
+ {
+@@ -86,8 +87,14 @@ static inline bool __pure lpa2_is_enabled(void)
+
+ #define PTE_MAYBE_SHARED (lpa2_is_enabled() ? 0 : PTE_SHARED)
+ #define PMD_MAYBE_SHARED (lpa2_is_enabled() ? 0 : PMD_SECT_S)
++#define PHYS_MASK_SHIFT (lpa2_is_enabled() ? CONFIG_ARM64_PA_BITS : 48)
+ #endif
+
++/*
++ * Highest possible physical address supported.
++ */
++#define PHYS_MASK ((UL(1) << PHYS_MASK_SHIFT) - 1)
++
+ /*
+ * If we have userspace only BTI we don't want to mark kernel pages
+ * guarded even if the system does support BTI.
+diff --git a/arch/arm64/include/asm/sparsemem.h b/arch/arm64/include/asm/sparsemem.h
+index 8a8acc220371cb..84783efdc9d1f7 100644
+--- a/arch/arm64/include/asm/sparsemem.h
++++ b/arch/arm64/include/asm/sparsemem.h
+@@ -5,7 +5,10 @@
+ #ifndef __ASM_SPARSEMEM_H
+ #define __ASM_SPARSEMEM_H
+
+-#define MAX_PHYSMEM_BITS CONFIG_ARM64_PA_BITS
++#include <asm/pgtable-prot.h>
++
++#define MAX_PHYSMEM_BITS PHYS_MASK_SHIFT
++#define MAX_POSSIBLE_PHYSMEM_BITS (52)
+
+ /*
+ * Section size must be at least 512MB for 64K base
+diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
+index db994d1fd97e70..709f2b51be6df3 100644
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1153,12 +1153,6 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
+ id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
+ unsigned long cpacr = cpacr_save_enable_kernel_sme();
+
+- /*
+- * We mask out SMPS since even if the hardware
+- * supports priorities the kernel does not at present
+- * and we block access to them.
+- */
+- info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
+ vec_init_vq_map(ARM64_VEC_SME);
+
+ cpacr_restore(cpacr);
+@@ -1406,13 +1400,6 @@ void update_cpu_features(int cpu,
+ id_aa64pfr1_sme(read_sanitised_ftr_reg(SYS_ID_AA64PFR1_EL1))) {
+ unsigned long cpacr = cpacr_save_enable_kernel_sme();
+
+- /*
+- * We mask out SMPS since even if the hardware
+- * supports priorities the kernel does not at present
+- * and we block access to them.
+- */
+- info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
+-
+ /* Probe vector lengths */
+ if (!system_capabilities_finalized())
+ vec_update_vq_map(ARM64_VEC_SME);
+@@ -2923,6 +2910,13 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
+ .matches = match, \
+ }
+
++#define HWCAP_CAP_MATCH_ID(match, reg, field, min_value, cap_type, cap) \
++ { \
++ __HWCAP_CAP(#cap, cap_type, cap) \
++ HWCAP_CPUID_MATCH(reg, field, min_value) \
++ .matches = match, \
++ }
++
+ #ifdef CONFIG_ARM64_PTR_AUTH
+ static const struct arm64_cpu_capabilities ptr_auth_hwcap_addr_matches[] = {
+ {
+@@ -2951,6 +2945,13 @@ static const struct arm64_cpu_capabilities ptr_auth_hwcap_gen_matches[] = {
+ };
+ #endif
+
++#ifdef CONFIG_ARM64_SVE
++static bool has_sve_feature(const struct arm64_cpu_capabilities *cap, int scope)
++{
++ return system_supports_sve() && has_user_cpuid_feature(cap, scope);
++}
++#endif
++
+ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+ HWCAP_CAP(ID_AA64ISAR0_EL1, AES, PMULL, CAP_HWCAP, KERNEL_HWCAP_PMULL),
+ HWCAP_CAP(ID_AA64ISAR0_EL1, AES, AES, CAP_HWCAP, KERNEL_HWCAP_AES),
+@@ -2993,19 +2994,19 @@ static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
+ HWCAP_CAP(ID_AA64MMFR2_EL1, AT, IMP, CAP_HWCAP, KERNEL_HWCAP_USCAT),
+ #ifdef CONFIG_ARM64_SVE
+ HWCAP_CAP(ID_AA64PFR0_EL1, SVE, IMP, CAP_HWCAP, KERNEL_HWCAP_SVE),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, SVEver, SVE2p1, CAP_HWCAP, KERNEL_HWCAP_SVE2P1),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, SVEver, SVE2, CAP_HWCAP, KERNEL_HWCAP_SVE2),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEAES),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, AES, PMULL128, CAP_HWCAP, KERNEL_HWCAP_SVEPMULL),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, BitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEBITPERM),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SVE_B16B16),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, BF16, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEBF16),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, BF16, EBF16, CAP_HWCAP, KERNEL_HWCAP_SVE_EBF16),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, SHA3, IMP, CAP_HWCAP, KERNEL_HWCAP_SVESHA3),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, SM4, IMP, CAP_HWCAP, KERNEL_HWCAP_SVESM4),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, I8MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEI8MM),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, F32MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF32MM),
+- HWCAP_CAP(ID_AA64ZFR0_EL1, F64MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF64MM),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, SVEver, SVE2p1, CAP_HWCAP, KERNEL_HWCAP_SVE2P1),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, SVEver, SVE2, CAP_HWCAP, KERNEL_HWCAP_SVE2),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, AES, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEAES),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, AES, PMULL128, CAP_HWCAP, KERNEL_HWCAP_SVEPMULL),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, BitPerm, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEBITPERM),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, B16B16, IMP, CAP_HWCAP, KERNEL_HWCAP_SVE_B16B16),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, BF16, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEBF16),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, BF16, EBF16, CAP_HWCAP, KERNEL_HWCAP_SVE_EBF16),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, SHA3, IMP, CAP_HWCAP, KERNEL_HWCAP_SVESHA3),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, SM4, IMP, CAP_HWCAP, KERNEL_HWCAP_SVESM4),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, I8MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEI8MM),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, F32MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF32MM),
++ HWCAP_CAP_MATCH_ID(has_sve_feature, ID_AA64ZFR0_EL1, F64MM, IMP, CAP_HWCAP, KERNEL_HWCAP_SVEF64MM),
+ #endif
+ HWCAP_CAP(ID_AA64PFR1_EL1, SSBS, SSBS2, CAP_HWCAP, KERNEL_HWCAP_SSBS),
+ #ifdef CONFIG_ARM64_BTI
+@@ -3376,7 +3377,7 @@ static void verify_hyp_capabilities(void)
+ return;
+
+ safe_mmfr1 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR1_EL1);
+- mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
++ mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+ mmfr1 = read_cpuid(ID_AA64MMFR1_EL1);
+
+ /* Verify VMID bits */
+diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
+index 44718d0482b3b4..aec5e3947c780a 100644
+--- a/arch/arm64/kernel/cpuinfo.c
++++ b/arch/arm64/kernel/cpuinfo.c
+@@ -478,6 +478,16 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
+ if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0))
+ __cpuinfo_store_cpu_32bit(&info->aarch32);
+
++ if (IS_ENABLED(CONFIG_ARM64_SME) &&
++ id_aa64pfr1_sme(info->reg_id_aa64pfr1)) {
++ /*
++ * We mask out SMPS since even if the hardware
++ * supports priorities the kernel does not at present
++ * and we block access to them.
++ */
++ info->reg_smidr = read_cpuid(SMIDR_EL1) & ~SMIDR_EL1_SMPS;
++ }
++
+ cpuinfo_detect_icache_policy(info);
+ }
+
+diff --git a/arch/arm64/kernel/pi/idreg-override.c b/arch/arm64/kernel/pi/idreg-override.c
+index 29d4b6244a6f63..5c03f5e0d352da 100644
+--- a/arch/arm64/kernel/pi/idreg-override.c
++++ b/arch/arm64/kernel/pi/idreg-override.c
+@@ -74,6 +74,15 @@ static bool __init mmfr2_varange_filter(u64 val)
+ id_aa64mmfr0_override.val |=
+ (ID_AA64MMFR0_EL1_TGRAN_LPA2 - 1) << ID_AA64MMFR0_EL1_TGRAN_SHIFT;
+ id_aa64mmfr0_override.mask |= 0xfU << ID_AA64MMFR0_EL1_TGRAN_SHIFT;
++
++ /*
++ * Override PARange to 48 bits - the override will just be
++ * ignored if the actual PARange is smaller, but this is
++ * unlikely to be the case for LPA2 capable silicon.
++ */
++ id_aa64mmfr0_override.val |=
++ ID_AA64MMFR0_EL1_PARANGE_48 << ID_AA64MMFR0_EL1_PARANGE_SHIFT;
++ id_aa64mmfr0_override.mask |= 0xfU << ID_AA64MMFR0_EL1_PARANGE_SHIFT;
+ }
+ #endif
+ return true;
+diff --git a/arch/arm64/kernel/pi/map_kernel.c b/arch/arm64/kernel/pi/map_kernel.c
+index f374a3e5a5fe10..e57b043f324b51 100644
+--- a/arch/arm64/kernel/pi/map_kernel.c
++++ b/arch/arm64/kernel/pi/map_kernel.c
+@@ -136,6 +136,12 @@ static void noinline __section(".idmap.text") set_ttbr0_for_lpa2(u64 ttbr)
+ {
+ u64 sctlr = read_sysreg(sctlr_el1);
+ u64 tcr = read_sysreg(tcr_el1) | TCR_DS;
++ u64 mmfr0 = read_sysreg(id_aa64mmfr0_el1);
++ u64 parange = cpuid_feature_extract_unsigned_field(mmfr0,
++ ID_AA64MMFR0_EL1_PARANGE_SHIFT);
++
++ tcr &= ~TCR_IPS_MASK;
++ tcr |= parange << TCR_IPS_SHIFT;
+
+ asm(" msr sctlr_el1, %0 ;"
+ " isb ;"
+diff --git a/arch/arm64/kvm/arch_timer.c b/arch/arm64/kvm/arch_timer.c
+index 1215df59041856..754914d9ec6835 100644
+--- a/arch/arm64/kvm/arch_timer.c
++++ b/arch/arm64/kvm/arch_timer.c
+@@ -466,10 +466,8 @@ static void timer_emulate(struct arch_timer_context *ctx)
+
+ trace_kvm_timer_emulate(ctx, should_fire);
+
+- if (should_fire != ctx->irq.level) {
++ if (should_fire != ctx->irq.level)
+ kvm_timer_update_irq(ctx->vcpu, should_fire, ctx);
+- return;
+- }
+
+ /*
+ * If the timer can fire now, we don't need to have a soft timer
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 70ff9a20ef3af3..117702f033218d 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -1998,8 +1998,7 @@ static int kvm_init_vector_slots(void)
+ static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
+ {
+ struct kvm_nvhe_init_params *params = per_cpu_ptr_nvhe_sym(kvm_init_params, cpu);
+- u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+- unsigned long tcr;
++ unsigned long tcr, ips;
+
+ /*
+ * Calculate the raw per-cpu offset without a translation from the
+@@ -2013,6 +2012,7 @@ static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
+ params->mair_el2 = read_sysreg(mair_el1);
+
+ tcr = read_sysreg(tcr_el1);
++ ips = FIELD_GET(TCR_IPS_MASK, tcr);
+ if (cpus_have_final_cap(ARM64_KVM_HVHE)) {
+ tcr |= TCR_EPD1_MASK;
+ } else {
+@@ -2022,8 +2022,8 @@ static void __init cpu_prepare_hyp_mode(int cpu, u32 hyp_va_bits)
+ tcr &= ~TCR_T0SZ_MASK;
+ tcr |= TCR_T0SZ(hyp_va_bits);
+ tcr &= ~TCR_EL2_PS_MASK;
+- tcr |= FIELD_PREP(TCR_EL2_PS_MASK, kvm_get_parange(mmfr0));
+- if (kvm_lpa2_is_enabled())
++ tcr |= FIELD_PREP(TCR_EL2_PS_MASK, ips);
++ if (lpa2_is_enabled())
+ tcr |= TCR_EL2_DS;
+ params->tcr_el2 = tcr;
+
+diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
+index 5f1e2103888b76..0a6956bbfb3269 100644
+--- a/arch/arm64/mm/hugetlbpage.c
++++ b/arch/arm64/mm/hugetlbpage.c
+@@ -508,6 +508,18 @@ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+
+ static int __init hugetlbpage_init(void)
+ {
++ /*
++ * HugeTLB pages are supported on maximum four page table
++ * levels (PUD, CONT PMD, PMD, CONT PTE) for a given base
++ * page size, corresponding to hugetlb_add_hstate() calls
++ * here.
++ *
++ * HUGE_MAX_HSTATE should at least match maximum supported
++ * HugeTLB page sizes on the platform. Any new addition to
++ * supported HugeTLB page sizes will also require changing
++ * HUGE_MAX_HSTATE as well.
++ */
++ BUILD_BUG_ON(HUGE_MAX_HSTATE < 4);
+ if (pud_sect_supported())
+ hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
+
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index 93ba66de160ce4..ea71ef2e343c2c 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -278,7 +278,12 @@ void __init arm64_memblock_init(void)
+
+ if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+ extern u16 memstart_offset_seed;
+- u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
++
++ /*
++ * Use the sanitised version of id_aa64mmfr0_el1 so that linear
++ * map randomization can be enabled by shrinking the IPA space.
++ */
++ u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
+ int parange = cpuid_feature_extract_unsigned_field(
+ mmfr0, ID_AA64MMFR0_EL1_PARANGE_SHIFT);
+ s64 range = linear_region_size -
+diff --git a/arch/loongarch/include/uapi/asm/ptrace.h b/arch/loongarch/include/uapi/asm/ptrace.h
+index ac915f84165053..aafb3cd9e943e5 100644
+--- a/arch/loongarch/include/uapi/asm/ptrace.h
++++ b/arch/loongarch/include/uapi/asm/ptrace.h
+@@ -72,6 +72,16 @@ struct user_watch_state {
+ } dbg_regs[8];
+ };
+
++struct user_watch_state_v2 {
++ uint64_t dbg_info;
++ struct {
++ uint64_t addr;
++ uint64_t mask;
++ uint32_t ctrl;
++ uint32_t pad;
++ } dbg_regs[14];
++};
++
+ #define PTRACE_SYSEMU 0x1f
+ #define PTRACE_SYSEMU_SINGLESTEP 0x20
+
+diff --git a/arch/loongarch/kernel/ptrace.c b/arch/loongarch/kernel/ptrace.c
+index 19dc6eff45ccc8..5e2402cfcab0a1 100644
+--- a/arch/loongarch/kernel/ptrace.c
++++ b/arch/loongarch/kernel/ptrace.c
+@@ -720,7 +720,7 @@ static int hw_break_set(struct task_struct *target,
+ unsigned int note_type = regset->core_note_type;
+
+ /* Resource info */
+- offset = offsetof(struct user_watch_state, dbg_regs);
++ offset = offsetof(struct user_watch_state_v2, dbg_regs);
+ user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf, 0, offset);
+
+ /* (address, mask, ctrl) registers */
+@@ -920,7 +920,7 @@ static const struct user_regset loongarch64_regsets[] = {
+ #ifdef CONFIG_HAVE_HW_BREAKPOINT
+ [REGSET_HW_BREAK] = {
+ .core_note_type = NT_LOONGARCH_HW_BREAK,
+- .n = sizeof(struct user_watch_state) / sizeof(u32),
++ .n = sizeof(struct user_watch_state_v2) / sizeof(u32),
+ .size = sizeof(u32),
+ .align = sizeof(u32),
+ .regset_get = hw_break_get,
+@@ -928,7 +928,7 @@ static const struct user_regset loongarch64_regsets[] = {
+ },
+ [REGSET_HW_WATCH] = {
+ .core_note_type = NT_LOONGARCH_HW_WATCH,
+- .n = sizeof(struct user_watch_state) / sizeof(u32),
++ .n = sizeof(struct user_watch_state_v2) / sizeof(u32),
+ .size = sizeof(u32),
+ .align = sizeof(u32),
+ .regset_get = hw_break_get,
+diff --git a/arch/m68k/include/asm/vga.h b/arch/m68k/include/asm/vga.h
+index 4742e6bc3ab8ea..cdd414fa8710a9 100644
+--- a/arch/m68k/include/asm/vga.h
++++ b/arch/m68k/include/asm/vga.h
+@@ -9,7 +9,7 @@
+ */
+ #ifndef CONFIG_PCI
+
+-#include <asm/raw_io.h>
++#include <asm/io.h>
+ #include <asm/kmap.h>
+
+ /*
+@@ -29,9 +29,9 @@
+ #define inw_p(port) 0
+ #define outb_p(port, val) do { } while (0)
+ #define outw(port, val) do { } while (0)
+-#define readb raw_inb
+-#define writeb raw_outb
+-#define writew raw_outw
++#define readb __raw_readb
++#define writeb __raw_writeb
++#define writew __raw_writew
+
+ #endif /* CONFIG_PCI */
+ #endif /* _ASM_M68K_VGA_H */
+diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
+index 467b10f4361aeb..5078ebf071ec07 100644
+--- a/arch/mips/Kconfig
++++ b/arch/mips/Kconfig
+@@ -1084,7 +1084,6 @@ config CSRC_IOASIC
+
+ config CSRC_R4K
+ select CLOCKSOURCE_WATCHDOG if CPU_FREQ
+- select HAVE_UNSTABLE_SCHED_CLOCK if SMP && 64BIT
+ bool
+
+ config CSRC_SB1250
+diff --git a/arch/mips/kernel/ftrace.c b/arch/mips/kernel/ftrace.c
+index 8c401e42301cbf..f39e85fd58fa99 100644
+--- a/arch/mips/kernel/ftrace.c
++++ b/arch/mips/kernel/ftrace.c
+@@ -248,7 +248,7 @@ int ftrace_disable_ftrace_graph_caller(void)
+ #define S_R_SP (0xafb0 << 16) /* s{d,w} R, offset(sp) */
+ #define OFFSET_MASK 0xffff /* stack offset range: 0 ~ PT_SIZE */
+
+-unsigned long ftrace_get_parent_ra_addr(unsigned long self_ra, unsigned long
++static unsigned long ftrace_get_parent_ra_addr(unsigned long self_ra, unsigned long
+ old_parent_ra, unsigned long parent_ra_addr, unsigned long fp)
+ {
+ unsigned long sp, ip, tmp;
+diff --git a/arch/mips/loongson64/boardinfo.c b/arch/mips/loongson64/boardinfo.c
+index 280989c5a137b5..8bb275c93ac099 100644
+--- a/arch/mips/loongson64/boardinfo.c
++++ b/arch/mips/loongson64/boardinfo.c
+@@ -21,13 +21,11 @@ static ssize_t boardinfo_show(struct kobject *kobj,
+ "BIOS Info\n"
+ "Vendor\t\t\t: %s\n"
+ "Version\t\t\t: %s\n"
+- "ROM Size\t\t: %d KB\n"
+ "Release Date\t\t: %s\n",
+ strsep(&tmp_board_manufacturer, "-"),
+ eboard->name,
+ strsep(&tmp_bios_vendor, "-"),
+ einter->description,
+- einter->size,
+ especial->special_name);
+ }
+ static struct kobj_attribute boardinfo_attr = __ATTR(boardinfo, 0444,
+diff --git a/arch/mips/math-emu/cp1emu.c b/arch/mips/math-emu/cp1emu.c
+index 265bc57819dfb5..c89e70df43d82b 100644
+--- a/arch/mips/math-emu/cp1emu.c
++++ b/arch/mips/math-emu/cp1emu.c
+@@ -1660,7 +1660,7 @@ static int fpux_emu(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
+ break;
+ }
+
+- case 0x3:
++ case 0x7:
+ if (MIPSInst_FUNC(ir) != pfetch_op)
+ return SIGILL;
+
+diff --git a/arch/mips/pci/pci-legacy.c b/arch/mips/pci/pci-legacy.c
+index ec2567f8efd83b..66898fd182dc1f 100644
+--- a/arch/mips/pci/pci-legacy.c
++++ b/arch/mips/pci/pci-legacy.c
+@@ -29,6 +29,14 @@ static LIST_HEAD(controllers);
+
+ static int pci_initialized;
+
++unsigned long pci_address_to_pio(phys_addr_t address)
++{
++ if (address > IO_SPACE_LIMIT)
++ return (unsigned long)-1;
++
++ return (unsigned long) address;
++}
++
+ /*
+ * We need to avoid collisions with `mirrored' VGA ports
+ * and other strange ISA hardware, so we always want the
+diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
+index aa6a3cad275d91..fcc5973f75195a 100644
+--- a/arch/parisc/Kconfig
++++ b/arch/parisc/Kconfig
+@@ -60,8 +60,8 @@ config PARISC
+ select HAVE_ARCH_MMAP_RND_BITS
+ select HAVE_ARCH_AUDITSYSCALL
+ select HAVE_ARCH_HASH
+- select HAVE_ARCH_JUMP_LABEL
+- select HAVE_ARCH_JUMP_LABEL_RELATIVE
++ # select HAVE_ARCH_JUMP_LABEL
++ # select HAVE_ARCH_JUMP_LABEL_RELATIVE
+ select HAVE_ARCH_KFENCE
+ select HAVE_ARCH_SECCOMP_FILTER
+ select HAVE_ARCH_TRACEHOOK
+diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
+index c664fdec75b1ab..6824e8139801c2 100644
+--- a/arch/powerpc/kvm/e500_mmu_host.c
++++ b/arch/powerpc/kvm/e500_mmu_host.c
+@@ -242,7 +242,7 @@ static inline int tlbe_is_writable(struct kvm_book3e_206_tlb_entry *tlbe)
+ return tlbe->mas7_3 & (MAS3_SW|MAS3_UW);
+ }
+
+-static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
++static inline bool kvmppc_e500_ref_setup(struct tlbe_ref *ref,
+ struct kvm_book3e_206_tlb_entry *gtlbe,
+ kvm_pfn_t pfn, unsigned int wimg)
+ {
+@@ -252,11 +252,7 @@ static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
+ /* Use guest supplied MAS2_G and MAS2_E */
+ ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;
+
+- /* Mark the page accessed */
+- kvm_set_pfn_accessed(pfn);
+-
+- if (tlbe_is_writable(gtlbe))
+- kvm_set_pfn_dirty(pfn);
++ return tlbe_is_writable(gtlbe);
+ }
+
+ static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref)
+@@ -326,6 +322,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ {
+ struct kvm_memory_slot *slot;
+ unsigned long pfn = 0; /* silence GCC warning */
++ struct page *page = NULL;
+ unsigned long hva;
+ int pfnmap = 0;
+ int tsize = BOOK3E_PAGESZ_4K;
+@@ -337,6 +334,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ unsigned int wimg = 0;
+ pgd_t *pgdir;
+ unsigned long flags;
++ bool writable = false;
+
+ /* used to check for invalidations in progress */
+ mmu_seq = kvm->mmu_invalidate_seq;
+@@ -446,7 +444,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+
+ if (likely(!pfnmap)) {
+ tsize_pages = 1UL << (tsize + 10 - PAGE_SHIFT);
+- pfn = gfn_to_pfn_memslot(slot, gfn);
++ pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, NULL, &page);
+ if (is_error_noslot_pfn(pfn)) {
+ if (printk_ratelimit())
+ pr_err("%s: real page not found for gfn %lx\n",
+@@ -481,7 +479,6 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ if (pte_present(pte)) {
+ wimg = (pte_val(pte) >> PTE_WIMGE_SHIFT) &
+ MAS2_WIMGE_MASK;
+- local_irq_restore(flags);
+ } else {
+ local_irq_restore(flags);
+ pr_err_ratelimited("%s: pte not present: gfn %lx,pfn %lx\n",
+@@ -490,8 +487,9 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ goto out;
+ }
+ }
+- kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
++ local_irq_restore(flags);
+
++ writable = kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
+ kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
+ ref, gvaddr, stlbe);
+
+@@ -499,11 +497,8 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ kvmppc_mmu_flush_icache(pfn);
+
+ out:
++ kvm_release_faultin_page(kvm, page, !!ret, writable);
+ spin_unlock(&kvm->mmu_lock);
+-
+- /* Drop refcount on page, so that mmu notifiers can clear it */
+- kvm_release_pfn_clean(pfn);
+-
+ return ret;
+ }
+
+diff --git a/arch/powerpc/platforms/pseries/eeh_pseries.c b/arch/powerpc/platforms/pseries/eeh_pseries.c
+index 1893f66371fa43..b12ef382fec709 100644
+--- a/arch/powerpc/platforms/pseries/eeh_pseries.c
++++ b/arch/powerpc/platforms/pseries/eeh_pseries.c
+@@ -580,8 +580,10 @@ static int pseries_eeh_get_state(struct eeh_pe *pe, int *delay)
+
+ switch(rets[0]) {
+ case 0:
+- result = EEH_STATE_MMIO_ACTIVE |
+- EEH_STATE_DMA_ACTIVE;
++ result = EEH_STATE_MMIO_ACTIVE |
++ EEH_STATE_DMA_ACTIVE |
++ EEH_STATE_MMIO_ENABLED |
++ EEH_STATE_DMA_ENABLED;
+ break;
+ case 1:
+ result = EEH_STATE_RESET_ACTIVE |
+diff --git a/arch/s390/include/asm/asm-extable.h b/arch/s390/include/asm/asm-extable.h
+index 4a6b0a8b6412f1..00a67464c44534 100644
+--- a/arch/s390/include/asm/asm-extable.h
++++ b/arch/s390/include/asm/asm-extable.h
+@@ -14,6 +14,7 @@
+ #define EX_TYPE_UA_LOAD_REG 5
+ #define EX_TYPE_UA_LOAD_REGPAIR 6
+ #define EX_TYPE_ZEROPAD 7
++#define EX_TYPE_FPC 8
+
+ #define EX_DATA_REG_ERR_SHIFT 0
+ #define EX_DATA_REG_ERR GENMASK(3, 0)
+@@ -84,4 +85,7 @@
+ #define EX_TABLE_ZEROPAD(_fault, _target, _regdata, _regaddr) \
+ __EX_TABLE(__ex_table, _fault, _target, EX_TYPE_ZEROPAD, _regdata, _regaddr, 0)
+
++#define EX_TABLE_FPC(_fault, _target) \
++ __EX_TABLE(__ex_table, _fault, _target, EX_TYPE_FPC, __stringify(%%r0), __stringify(%%r0), 0)
++
+ #endif /* __ASM_EXTABLE_H */
+diff --git a/arch/s390/include/asm/fpu-insn.h b/arch/s390/include/asm/fpu-insn.h
+index c1e2e521d9af7c..a4c9b4db62ff57 100644
+--- a/arch/s390/include/asm/fpu-insn.h
++++ b/arch/s390/include/asm/fpu-insn.h
+@@ -100,19 +100,12 @@ static __always_inline void fpu_lfpc(unsigned int *fpc)
+ */
+ static inline void fpu_lfpc_safe(unsigned int *fpc)
+ {
+- u32 tmp;
+-
+ instrument_read(fpc, sizeof(*fpc));
+- asm volatile("\n"
+- "0: lfpc %[fpc]\n"
+- "1: nopr %%r7\n"
+- ".pushsection .fixup, \"ax\"\n"
+- "2: lghi %[tmp],0\n"
+- " sfpc %[tmp]\n"
+- " jg 1b\n"
+- ".popsection\n"
+- EX_TABLE(1b, 2b)
+- : [tmp] "=d" (tmp)
++ asm_inline volatile(
++ " lfpc %[fpc]\n"
++ "0: nopr %%r7\n"
++ EX_TABLE_FPC(0b, 0b)
++ :
+ : [fpc] "Q" (*fpc)
+ : "memory");
+ }
+diff --git a/arch/s390/include/asm/futex.h b/arch/s390/include/asm/futex.h
+index eaeaeb3ff0be3e..752a2310f0d6c1 100644
+--- a/arch/s390/include/asm/futex.h
++++ b/arch/s390/include/asm/futex.h
+@@ -44,7 +44,7 @@ static inline int arch_futex_atomic_op_inuser(int op, int oparg, int *oval,
+ break;
+ case FUTEX_OP_ANDN:
+ __futex_atomic_op("lr %2,%1\nnr %2,%5\n",
+- ret, oldval, newval, uaddr, oparg);
++ ret, oldval, newval, uaddr, ~oparg);
+ break;
+ case FUTEX_OP_XOR:
+ __futex_atomic_op("lr %2,%1\nxr %2,%5\n",
+diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
+index 9a5236acc0a860..21ae93cbd8e478 100644
+--- a/arch/s390/include/asm/processor.h
++++ b/arch/s390/include/asm/processor.h
+@@ -162,8 +162,7 @@ static __always_inline void __stackleak_poison(unsigned long erase_low,
+ " la %[addr],256(%[addr])\n"
+ " brctg %[tmp],0b\n"
+ "1: stg %[poison],0(%[addr])\n"
+- " larl %[tmp],3f\n"
+- " ex %[count],0(%[tmp])\n"
++ " exrl %[count],3f\n"
+ " j 4f\n"
+ "2: stg %[poison],0(%[addr])\n"
+ " j 4f\n"
+diff --git a/arch/s390/kernel/vmlinux.lds.S b/arch/s390/kernel/vmlinux.lds.S
+index 377b9aaf8c9248..ff1ddba96352a1 100644
+--- a/arch/s390/kernel/vmlinux.lds.S
++++ b/arch/s390/kernel/vmlinux.lds.S
+@@ -52,7 +52,6 @@ SECTIONS
+ SOFTIRQENTRY_TEXT
+ FTRACE_HOTPATCH_TRAMPOLINES_TEXT
+ *(.text.*_indirect_*)
+- *(.fixup)
+ *(.gnu.warning)
+ . = ALIGN(PAGE_SIZE);
+ _etext = .; /* End of text section */
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index 89cafea4c41f26..caf40665fce96e 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -1362,8 +1362,14 @@ static struct vsie_page *get_vsie_page(struct kvm *kvm, unsigned long addr)
+ page = radix_tree_lookup(&kvm->arch.vsie.addr_to_page, addr >> 9);
+ rcu_read_unlock();
+ if (page) {
+- if (page_ref_inc_return(page) == 2)
+- return page_to_virt(page);
++ if (page_ref_inc_return(page) == 2) {
++ if (page->index == addr)
++ return page_to_virt(page);
++ /*
++ * We raced with someone reusing + putting this vsie
++ * page before we grabbed it.
++ */
++ }
+ page_ref_dec(page);
+ }
+
+@@ -1393,15 +1399,20 @@ static struct vsie_page *get_vsie_page(struct kvm *kvm, unsigned long addr)
+ kvm->arch.vsie.next++;
+ kvm->arch.vsie.next %= nr_vcpus;
+ }
+- radix_tree_delete(&kvm->arch.vsie.addr_to_page, page->index >> 9);
++ if (page->index != ULONG_MAX)
++ radix_tree_delete(&kvm->arch.vsie.addr_to_page,
++ page->index >> 9);
+ }
+- page->index = addr;
+- /* double use of the same address */
++ /* Mark it as invalid until it resides in the tree. */
++ page->index = ULONG_MAX;
++
++ /* Double use of the same address or allocation failure. */
+ if (radix_tree_insert(&kvm->arch.vsie.addr_to_page, addr >> 9, page)) {
+ page_ref_dec(page);
+ mutex_unlock(&kvm->arch.vsie.mutex);
+ return NULL;
+ }
++ page->index = addr;
+ mutex_unlock(&kvm->arch.vsie.mutex);
+
+ vsie_page = page_to_virt(page);
+@@ -1496,7 +1507,9 @@ void kvm_s390_vsie_destroy(struct kvm *kvm)
+ vsie_page = page_to_virt(page);
+ release_gmap_shadow(vsie_page);
+ /* free the radix tree entry */
+- radix_tree_delete(&kvm->arch.vsie.addr_to_page, page->index >> 9);
++ if (page->index != ULONG_MAX)
++ radix_tree_delete(&kvm->arch.vsie.addr_to_page,
++ page->index >> 9);
+ __free_page(page);
+ }
+ kvm->arch.vsie.page_count = 0;
+diff --git a/arch/s390/mm/extable.c b/arch/s390/mm/extable.c
+index 0a0738a473af05..812ec5be129169 100644
+--- a/arch/s390/mm/extable.c
++++ b/arch/s390/mm/extable.c
+@@ -77,6 +77,13 @@ static bool ex_handler_zeropad(const struct exception_table_entry *ex, struct pt
+ return true;
+ }
+
++static bool ex_handler_fpc(const struct exception_table_entry *ex, struct pt_regs *regs)
++{
++ asm volatile("sfpc %[val]\n" : : [val] "d" (0));
++ regs->psw.addr = extable_fixup(ex);
++ return true;
++}
++
+ bool fixup_exception(struct pt_regs *regs)
+ {
+ const struct exception_table_entry *ex;
+@@ -99,6 +106,8 @@ bool fixup_exception(struct pt_regs *regs)
+ return ex_handler_ua_load_reg(ex, true, regs);
+ case EX_TYPE_ZEROPAD:
+ return ex_handler_zeropad(ex, regs);
++ case EX_TYPE_FPC:
++ return ex_handler_fpc(ex, regs);
+ }
+ panic("invalid exception table entry");
+ }
+diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c
+index 1b74a000ff6459..56a786ca7354b9 100644
+--- a/arch/s390/pci/pci_bus.c
++++ b/arch/s390/pci/pci_bus.c
+@@ -171,7 +171,6 @@ void zpci_bus_scan_busses(void)
+ static bool zpci_bus_is_multifunction_root(struct zpci_dev *zdev)
+ {
+ return !s390_pci_no_rid && zdev->rid_available &&
+- zpci_is_device_configured(zdev) &&
+ !zdev->vfn;
+ }
+
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index f2051644de9432..606c74f274593e 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -25,6 +25,7 @@ targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
+ # avoid errors with '-march=i386', and future flags may depend on the target to
+ # be valid.
+ KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS)
++KBUILD_CFLAGS += -std=gnu11
+ KBUILD_CFLAGS += -fno-strict-aliasing -fPIE
+ KBUILD_CFLAGS += -Wundef
+ KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
+diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
+index ae5482a2f0ca0e..ccb8ff37fa9d4b 100644
+--- a/arch/x86/include/asm/kexec.h
++++ b/arch/x86/include/asm/kexec.h
+@@ -16,6 +16,7 @@
+ # define PAGES_NR 4
+ #endif
+
++# define KEXEC_CONTROL_PAGE_SIZE 4096
+ # define KEXEC_CONTROL_CODE_MAX_SIZE 2048
+
+ #ifndef __ASSEMBLY__
+@@ -43,7 +44,6 @@ struct kimage;
+ /* Maximum address we can use for the control code buffer */
+ # define KEXEC_CONTROL_MEMORY_LIMIT TASK_SIZE
+
+-# define KEXEC_CONTROL_PAGE_SIZE 4096
+
+ /* The native architecture */
+ # define KEXEC_ARCH KEXEC_ARCH_386
+@@ -58,9 +58,6 @@ struct kimage;
+ /* Maximum address we can use for the control pages */
+ # define KEXEC_CONTROL_MEMORY_LIMIT (MAXMEM-1)
+
+-/* Allocate one page for the pdp and the second for the code */
+-# define KEXEC_CONTROL_PAGE_SIZE (4096UL + 4096UL)
+-
+ /* The native architecture */
+ # define KEXEC_ARCH KEXEC_ARCH_X86_64
+ #endif
+@@ -145,6 +142,19 @@ struct kimage_arch {
+ };
+ #else
+ struct kimage_arch {
++ /*
++ * This is a kimage control page, as it must not overlap with either
++ * source or destination address ranges.
++ */
++ pgd_t *pgd;
++ /*
++ * The virtual mapping of the control code page itself is used only
++ * during the transition, while the current kernel's pages are all
++ * in place. Thus the intermediate page table pages used to map it
++ * are not control pages, but instead just normal pages obtained
++ * with get_zeroed_page(). And have to be tracked (below) so that
++ * they can be freed.
++ */
+ p4d_t *p4d;
+ pud_t *pud;
+ pmd_t *pmd;
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 6b981868905f5d..5da67e5c00401b 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -27,6 +27,7 @@
+ #include <linux/hyperv.h>
+ #include <linux/kfifo.h>
+ #include <linux/sched/vhost_task.h>
++#include <linux/call_once.h>
+
+ #include <asm/apic.h>
+ #include <asm/pvclock-abi.h>
+@@ -1446,6 +1447,7 @@ struct kvm_arch {
+ struct kvm_x86_pmu_event_filter __rcu *pmu_event_filter;
+ struct vhost_task *nx_huge_page_recovery_thread;
+ u64 nx_huge_page_last;
++ struct once nx_once;
+
+ #ifdef CONFIG_X86_64
+ /* The number of TDP MMU pages across all roots. */
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index 4efecac49863ec..c70b86f1f2954f 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -226,6 +226,28 @@ acpi_parse_x2apic(union acpi_subtable_headers *header, const unsigned long end)
+ return 0;
+ }
+
++static int __init
++acpi_check_lapic(union acpi_subtable_headers *header, const unsigned long end)
++{
++ struct acpi_madt_local_apic *processor = NULL;
++
++ processor = (struct acpi_madt_local_apic *)header;
++
++ if (BAD_MADT_ENTRY(processor, end))
++ return -EINVAL;
++
++ /* Ignore invalid ID */
++ if (processor->id == 0xff)
++ return 0;
++
++ /* Ignore processors that can not be onlined */
++ if (!acpi_is_processor_usable(processor->lapic_flags))
++ return 0;
++
++ has_lapic_cpus = true;
++ return 0;
++}
++
+ static int __init
+ acpi_parse_lapic(union acpi_subtable_headers * header, const unsigned long end)
+ {
+@@ -257,7 +279,6 @@ acpi_parse_lapic(union acpi_subtable_headers * header, const unsigned long end)
+ processor->processor_id, /* ACPI ID */
+ processor->lapic_flags & ACPI_MADT_ENABLED);
+
+- has_lapic_cpus = true;
+ return 0;
+ }
+
+@@ -1029,6 +1050,8 @@ static int __init early_acpi_parse_madt_lapic_addr_ovr(void)
+ static int __init acpi_parse_madt_lapic_entries(void)
+ {
+ int count, x2count = 0;
++ struct acpi_subtable_proc madt_proc[2];
++ int ret;
+
+ if (!boot_cpu_has(X86_FEATURE_APIC))
+ return -ENODEV;
+@@ -1037,10 +1060,27 @@ static int __init acpi_parse_madt_lapic_entries(void)
+ acpi_parse_sapic, MAX_LOCAL_APIC);
+
+ if (!count) {
+- count = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_APIC,
+- acpi_parse_lapic, MAX_LOCAL_APIC);
+- x2count = acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_X2APIC,
+- acpi_parse_x2apic, MAX_LOCAL_APIC);
++ /* Check if there are valid LAPIC entries */
++ acpi_table_parse_madt(ACPI_MADT_TYPE_LOCAL_APIC, acpi_check_lapic, MAX_LOCAL_APIC);
++
++ /*
++ * Enumerate the APIC IDs in the order that they appear in the
++ * MADT, no matter LAPIC entry or x2APIC entry is used.
++ */
++ memset(madt_proc, 0, sizeof(madt_proc));
++ madt_proc[0].id = ACPI_MADT_TYPE_LOCAL_APIC;
++ madt_proc[0].handler = acpi_parse_lapic;
++ madt_proc[1].id = ACPI_MADT_TYPE_LOCAL_X2APIC;
++ madt_proc[1].handler = acpi_parse_x2apic;
++ ret = acpi_table_parse_entries_array(ACPI_SIG_MADT,
++ sizeof(struct acpi_table_madt),
++ madt_proc, ARRAY_SIZE(madt_proc), MAX_LOCAL_APIC);
++ if (ret < 0) {
++ pr_err("Error parsing LAPIC/X2APIC entries\n");
++ return ret;
++ }
++ count = madt_proc[0].count;
++ x2count = madt_proc[1].count;
+ }
+ if (!count && !x2count) {
+ pr_err("No LAPIC entries present\n");
+diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c
+index 9fe9972d2071b9..37b8244899d895 100644
+--- a/arch/x86/kernel/amd_nb.c
++++ b/arch/x86/kernel/amd_nb.c
+@@ -582,6 +582,10 @@ static __init void fix_erratum_688(void)
+
+ static __init int init_amd_nbs(void)
+ {
++ if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD &&
++ boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
++ return 0;
++
+ amd_cache_northbridges();
+ amd_cache_gart();
+
+diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
+index 9c9ac606893e99..7223c38a8708fc 100644
+--- a/arch/x86/kernel/machine_kexec_64.c
++++ b/arch/x86/kernel/machine_kexec_64.c
+@@ -146,7 +146,8 @@ static void free_transition_pgtable(struct kimage *image)
+ image->arch.pte = NULL;
+ }
+
+-static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
++static int init_transition_pgtable(struct kimage *image, pgd_t *pgd,
++ unsigned long control_page)
+ {
+ pgprot_t prot = PAGE_KERNEL_EXEC_NOENC;
+ unsigned long vaddr, paddr;
+@@ -157,7 +158,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
+ pte_t *pte;
+
+ vaddr = (unsigned long)relocate_kernel;
+- paddr = __pa(page_address(image->control_code_page)+PAGE_SIZE);
++ paddr = control_page;
+ pgd += pgd_index(vaddr);
+ if (!pgd_present(*pgd)) {
+ p4d = (p4d_t *)get_zeroed_page(GFP_KERNEL);
+@@ -216,7 +217,7 @@ static void *alloc_pgt_page(void *data)
+ return p;
+ }
+
+-static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
++static int init_pgtable(struct kimage *image, unsigned long control_page)
+ {
+ struct x86_mapping_info info = {
+ .alloc_pgt_page = alloc_pgt_page,
+@@ -225,12 +226,12 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
+ .kernpg_flag = _KERNPG_TABLE_NOENC,
+ };
+ unsigned long mstart, mend;
+- pgd_t *level4p;
+ int result;
+ int i;
+
+- level4p = (pgd_t *)__va(start_pgtable);
+- clear_page(level4p);
++ image->arch.pgd = alloc_pgt_page(image);
++ if (!image->arch.pgd)
++ return -ENOMEM;
+
+ if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
+ info.page_flag |= _PAGE_ENC;
+@@ -244,8 +245,8 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
+ mstart = pfn_mapped[i].start << PAGE_SHIFT;
+ mend = pfn_mapped[i].end << PAGE_SHIFT;
+
+- result = kernel_ident_mapping_init(&info,
+- level4p, mstart, mend);
++ result = kernel_ident_mapping_init(&info, image->arch.pgd,
++ mstart, mend);
+ if (result)
+ return result;
+ }
+@@ -260,8 +261,8 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
+ mstart = image->segment[i].mem;
+ mend = mstart + image->segment[i].memsz;
+
+- result = kernel_ident_mapping_init(&info,
+- level4p, mstart, mend);
++ result = kernel_ident_mapping_init(&info, image->arch.pgd,
++ mstart, mend);
+
+ if (result)
+ return result;
+@@ -271,15 +272,19 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
+ * Prepare EFI systab and ACPI tables for kexec kernel since they are
+ * not covered by pfn_mapped.
+ */
+- result = map_efi_systab(&info, level4p);
++ result = map_efi_systab(&info, image->arch.pgd);
+ if (result)
+ return result;
+
+- result = map_acpi_tables(&info, level4p);
++ result = map_acpi_tables(&info, image->arch.pgd);
+ if (result)
+ return result;
+
+- return init_transition_pgtable(image, level4p);
++ /*
++ * This must be last because the intermediate page table pages it
++ * allocates will not be control pages and may overlap the image.
++ */
++ return init_transition_pgtable(image, image->arch.pgd, control_page);
+ }
+
+ static void load_segments(void)
+@@ -296,14 +301,14 @@ static void load_segments(void)
+
+ int machine_kexec_prepare(struct kimage *image)
+ {
+- unsigned long start_pgtable;
++ unsigned long control_page;
+ int result;
+
+ /* Calculate the offsets */
+- start_pgtable = page_to_pfn(image->control_code_page) << PAGE_SHIFT;
++ control_page = page_to_pfn(image->control_code_page) << PAGE_SHIFT;
+
+ /* Setup the identity mapped 64bit page table */
+- result = init_pgtable(image, start_pgtable);
++ result = init_pgtable(image, control_page);
+ if (result)
+ return result;
+
+@@ -357,13 +362,12 @@ void machine_kexec(struct kimage *image)
+ #endif
+ }
+
+- control_page = page_address(image->control_code_page) + PAGE_SIZE;
++ control_page = page_address(image->control_code_page);
+ __memcpy(control_page, relocate_kernel, KEXEC_CONTROL_CODE_MAX_SIZE);
+
+ page_list[PA_CONTROL_PAGE] = virt_to_phys(control_page);
+ page_list[VA_CONTROL_PAGE] = (unsigned long)control_page;
+- page_list[PA_TABLE_PAGE] =
+- (unsigned long)__pa(page_address(image->control_code_page));
++ page_list[PA_TABLE_PAGE] = (unsigned long)__pa(image->arch.pgd);
+
+ if (image->type == KEXEC_TYPE_DEFAULT)
+ page_list[PA_SWAP_PAGE] = (page_to_pfn(image->swap_page)
+@@ -573,8 +577,7 @@ static void kexec_mark_crashkres(bool protect)
+
+ /* Don't touch the control code page used in crash_kexec().*/
+ control = PFN_PHYS(page_to_pfn(kexec_crash_image->control_code_page));
+- /* Control code page is located in the 2nd page. */
+- kexec_mark_range(crashk_res.start, control + PAGE_SIZE - 1, protect);
++ kexec_mark_range(crashk_res.start, control - 1, protect);
+ control += KEXEC_CONTROL_PAGE_SIZE;
+ kexec_mark_range(control, crashk_res.end, protect);
+ }
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index f63f8fd00a91f3..15507e739c255b 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -838,7 +838,7 @@ void __noreturn stop_this_cpu(void *dummy)
+ #ifdef CONFIG_SMP
+ if (smp_ops.stop_this_cpu) {
+ smp_ops.stop_this_cpu();
+- unreachable();
++ BUG();
+ }
+ #endif
+
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index 615922838c510b..dc1dd3f3e67fcd 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -883,7 +883,7 @@ static int crash_nmi_callback(unsigned int val, struct pt_regs *regs)
+
+ if (smp_ops.stop_this_cpu) {
+ smp_ops.stop_this_cpu();
+- unreachable();
++ BUG();
+ }
+
+ /* Assume hlt works */
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 1b4438e24814b4..9dd3796d075a56 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -7227,6 +7227,19 @@ static void mmu_destroy_caches(void)
+ kmem_cache_destroy(mmu_page_header_cache);
+ }
+
++static void kvm_wake_nx_recovery_thread(struct kvm *kvm)
++{
++ /*
++ * The NX recovery thread is spawned on-demand at the first KVM_RUN and
++ * may not be valid even though the VM is globally visible. Do nothing,
++ * as such a VM can't have any possible NX huge pages.
++ */
++ struct vhost_task *nx_thread = READ_ONCE(kvm->arch.nx_huge_page_recovery_thread);
++
++ if (nx_thread)
++ vhost_task_wake(nx_thread);
++}
++
+ static int get_nx_huge_pages(char *buffer, const struct kernel_param *kp)
+ {
+ if (nx_hugepage_mitigation_hard_disabled)
+@@ -7287,7 +7300,7 @@ static int set_nx_huge_pages(const char *val, const struct kernel_param *kp)
+ kvm_mmu_zap_all_fast(kvm);
+ mutex_unlock(&kvm->slots_lock);
+
+- vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
++ kvm_wake_nx_recovery_thread(kvm);
+ }
+ mutex_unlock(&kvm_lock);
+ }
+@@ -7433,7 +7446,7 @@ static int set_nx_huge_pages_recovery_param(const char *val, const struct kernel
+ mutex_lock(&kvm_lock);
+
+ list_for_each_entry(kvm, &vm_list, vm_list)
+- vhost_task_wake(kvm->arch.nx_huge_page_recovery_thread);
++ kvm_wake_nx_recovery_thread(kvm);
+
+ mutex_unlock(&kvm_lock);
+ }
+@@ -7565,20 +7578,34 @@ static bool kvm_nx_huge_page_recovery_worker(void *data)
+ return true;
+ }
+
++static void kvm_mmu_start_lpage_recovery(struct once *once)
++{
++ struct kvm_arch *ka = container_of(once, struct kvm_arch, nx_once);
++ struct kvm *kvm = container_of(ka, struct kvm, arch);
++ struct vhost_task *nx_thread;
++
++ kvm->arch.nx_huge_page_last = get_jiffies_64();
++ nx_thread = vhost_task_create(kvm_nx_huge_page_recovery_worker,
++ kvm_nx_huge_page_recovery_worker_kill,
++ kvm, "kvm-nx-lpage-recovery");
++
++ if (!nx_thread)
++ return;
++
++ vhost_task_start(nx_thread);
++
++ /* Make the task visible only once it is fully started. */
++ WRITE_ONCE(kvm->arch.nx_huge_page_recovery_thread, nx_thread);
++}
++
+ int kvm_mmu_post_init_vm(struct kvm *kvm)
+ {
+ if (nx_hugepage_mitigation_hard_disabled)
+ return 0;
+
+- kvm->arch.nx_huge_page_last = get_jiffies_64();
+- kvm->arch.nx_huge_page_recovery_thread = vhost_task_create(
+- kvm_nx_huge_page_recovery_worker, kvm_nx_huge_page_recovery_worker_kill,
+- kvm, "kvm-nx-lpage-recovery");
+-
++ call_once(&kvm->arch.nx_once, kvm_mmu_start_lpage_recovery);
+ if (!kvm->arch.nx_huge_page_recovery_thread)
+ return -ENOMEM;
+-
+- vhost_task_start(kvm->arch.nx_huge_page_recovery_thread);
+ return 0;
+ }
+
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index fb854cf20ac3be..e9af87b1281407 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -3833,7 +3833,7 @@ static int snp_begin_psc(struct vcpu_svm *svm, struct psc_buffer *psc)
+ goto next_range;
+ }
+
+- unreachable();
++ BUG();
+ }
+
+ static int __sev_snp_update_protected_guest_state(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index b49e2eb4893080..d760b19d1e513e 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -11478,6 +11478,10 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ struct kvm_run *kvm_run = vcpu->run;
+ int r;
+
++ r = kvm_mmu_post_init_vm(vcpu->kvm);
++ if (r)
++ return r;
++
+ vcpu_load(vcpu);
+ kvm_sigset_activate(vcpu);
+ kvm_run->flags = 0;
+@@ -12751,7 +12755,8 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+
+ int kvm_arch_post_init_vm(struct kvm *kvm)
+ {
+- return kvm_mmu_post_init_vm(kvm);
++ once_init(&kvm->arch.nx_once);
++ return 0;
+ }
+
+ static void kvm_unload_vcpu_mmu(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
+index e6c469b323ccb7..ac52255fab01f4 100644
+--- a/arch/x86/mm/fault.c
++++ b/arch/x86/mm/fault.c
+@@ -678,7 +678,7 @@ page_fault_oops(struct pt_regs *regs, unsigned long error_code,
+ ASM_CALL_ARG3,
+ , [arg1] "r" (regs), [arg2] "r" (address), [arg3] "r" (&info));
+
+- unreachable();
++ BUG();
+ }
+ #endif
+
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index 98a9bb92d75c88..12d5a0f37432ea 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -1010,4 +1010,34 @@ DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x1668, amd_rp_pme_suspend);
+ DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1668, amd_rp_pme_resume);
+ DECLARE_PCI_FIXUP_SUSPEND(PCI_VENDOR_ID_AMD, 0x1669, amd_rp_pme_suspend);
+ DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_AMD, 0x1669, amd_rp_pme_resume);
++
++/*
++ * Putting PCIe root ports on Ryzen SoCs with USB4 controllers into D3hot
++ * may cause problems when the system attempts wake up from s2idle.
++ *
++ * On the TUXEDO Sirius 16 Gen 1 with a specific old BIOS this manifests as
++ * a system hang.
++ */
++static const struct dmi_system_id quirk_tuxeo_rp_d3_dmi_table[] = {
++ {
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "TUXEDO"),
++ DMI_EXACT_MATCH(DMI_BOARD_NAME, "APX958"),
++ DMI_EXACT_MATCH(DMI_BIOS_VERSION, "V1.00A00_20240108"),
++ },
++ },
++ {}
++};
++
++static void quirk_tuxeo_rp_d3(struct pci_dev *pdev)
++{
++ struct pci_dev *root_pdev;
++
++ if (dmi_check_system(quirk_tuxeo_rp_d3_dmi_table)) {
++ root_pdev = pcie_find_root_port(pdev);
++ if (root_pdev)
++ root_pdev->dev_flags |= PCI_DEV_FLAGS_NO_D3;
++ }
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_AMD, 0x1502, quirk_tuxeo_rp_d3);
+ #endif /* CONFIG_SUSPEND */
+diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
+index 721a57700a3b05..55978e0dc17551 100644
+--- a/arch/x86/xen/xen-head.S
++++ b/arch/x86/xen/xen-head.S
+@@ -117,8 +117,8 @@ SYM_FUNC_START(xen_hypercall_hvm)
+ pop %ebx
+ pop %eax
+ #else
+- lea xen_hypercall_amd(%rip), %rbx
+- cmp %rax, %rbx
++ lea xen_hypercall_amd(%rip), %rcx
++ cmp %rax, %rcx
+ #ifdef CONFIG_FRAME_POINTER
+ pop %rax /* Dummy pop. */
+ #endif
+@@ -132,6 +132,7 @@ SYM_FUNC_START(xen_hypercall_hvm)
+ pop %rcx
+ pop %rax
+ #endif
++ FRAME_END
+ /* Use correct hypercall function. */
+ jz xen_hypercall_amd
+ jmp xen_hypercall_intel
+diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
+index 45a395862fbc88..f1cf7f2909f3a7 100644
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -1138,6 +1138,7 @@ static void blkcg_fill_root_iostats(void)
+ blkg_iostat_set(&blkg->iostat.cur, &tmp);
+ u64_stats_update_end_irqrestore(&blkg->iostat.sync, flags);
+ }
++ class_dev_iter_exit(&iter);
+ }
+
+ static void blkcg_print_one_stat(struct blkcg_gq *blkg, struct seq_file *s)
+diff --git a/block/fops.c b/block/fops.c
+index 13a67940d0408d..43983be5a2b3b1 100644
+--- a/block/fops.c
++++ b/block/fops.c
+@@ -758,11 +758,12 @@ static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to)
+ file_accessed(iocb->ki_filp);
+
+ ret = blkdev_direct_IO(iocb, to);
+- if (ret >= 0) {
++ if (ret > 0) {
+ iocb->ki_pos += ret;
+ count -= ret;
+ }
+- iov_iter_revert(to, count - iov_iter_count(to));
++ if (ret != -EIOCBQUEUED)
++ iov_iter_revert(to, count - iov_iter_count(to));
+ if (ret < 0 || !count)
+ goto reexpand;
+ }
+diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
+index 10b7ae0f866c98..ef9a4ba18cb8a8 100644
+--- a/drivers/accel/ivpu/ivpu_pm.c
++++ b/drivers/accel/ivpu/ivpu_pm.c
+@@ -73,8 +73,8 @@ static int ivpu_resume(struct ivpu_device *vdev)
+ int ret;
+
+ retry:
+- pci_restore_state(to_pci_dev(vdev->drm.dev));
+ pci_set_power_state(to_pci_dev(vdev->drm.dev), PCI_D0);
++ pci_restore_state(to_pci_dev(vdev->drm.dev));
+
+ ret = ivpu_hw_power_up(vdev);
+ if (ret) {
+@@ -295,7 +295,10 @@ int ivpu_rpm_get(struct ivpu_device *vdev)
+ int ret;
+
+ ret = pm_runtime_resume_and_get(vdev->drm.dev);
+- drm_WARN_ON(&vdev->drm, ret < 0);
++ if (ret < 0) {
++ ivpu_err(vdev, "Failed to resume NPU: %d\n", ret);
++ pm_runtime_set_suspended(vdev->drm.dev);
++ }
+
+ return ret;
+ }
+diff --git a/drivers/acpi/apei/ghes.c b/drivers/acpi/apei/ghes.c
+index ada93cfde9ba1c..cff6685fa6cc6b 100644
+--- a/drivers/acpi/apei/ghes.c
++++ b/drivers/acpi/apei/ghes.c
+@@ -173,8 +173,6 @@ static struct gen_pool *ghes_estatus_pool;
+ static struct ghes_estatus_cache __rcu *ghes_estatus_caches[GHES_ESTATUS_CACHES_SIZE];
+ static atomic_t ghes_estatus_cache_alloced;
+
+-static int ghes_panic_timeout __read_mostly = 30;
+-
+ static void __iomem *ghes_map(u64 pfn, enum fixed_addresses fixmap_idx)
+ {
+ phys_addr_t paddr;
+@@ -983,14 +981,16 @@ static void __ghes_panic(struct ghes *ghes,
+ struct acpi_hest_generic_status *estatus,
+ u64 buf_paddr, enum fixed_addresses fixmap_idx)
+ {
++ const char *msg = GHES_PFX "Fatal hardware error";
++
+ __ghes_print_estatus(KERN_EMERG, ghes->generic, estatus);
+
+ ghes_clear_estatus(ghes, estatus, buf_paddr, fixmap_idx);
+
+- /* reboot to log the error! */
+ if (!panic_timeout)
+- panic_timeout = ghes_panic_timeout;
+- panic("Fatal hardware error!");
++ pr_emerg("%s but panic disabled\n", msg);
++
++ panic(msg);
+ }
+
+ static int ghes_proc(struct ghes *ghes)
+diff --git a/drivers/acpi/prmt.c b/drivers/acpi/prmt.c
+index 747f83f7114d29..e549914a636c66 100644
+--- a/drivers/acpi/prmt.c
++++ b/drivers/acpi/prmt.c
+@@ -287,9 +287,7 @@ static acpi_status acpi_platformrt_space_handler(u32 function,
+ if (!handler || !module)
+ goto invalid_guid;
+
+- if (!handler->handler_addr ||
+- !handler->static_data_buffer_addr ||
+- !handler->acpi_param_buffer_addr) {
++ if (!handler->handler_addr) {
+ buffer->prm_status = PRM_HANDLER_ERROR;
+ return AE_OK;
+ }
+diff --git a/drivers/acpi/property.c b/drivers/acpi/property.c
+index 80a52a4e66dd16..e9186339f6e6bb 100644
+--- a/drivers/acpi/property.c
++++ b/drivers/acpi/property.c
+@@ -1187,8 +1187,6 @@ static int acpi_data_prop_read(const struct acpi_device_data *data,
+ }
+ break;
+ }
+- if (nval == 0)
+- return -EINVAL;
+
+ if (obj->type == ACPI_TYPE_BUFFER) {
+ if (proptype != DEV_PROP_U8)
+@@ -1212,9 +1210,11 @@ static int acpi_data_prop_read(const struct acpi_device_data *data,
+ ret = acpi_copy_property_array_uint(items, (u64 *)val, nval);
+ break;
+ case DEV_PROP_STRING:
+- ret = acpi_copy_property_array_string(
+- items, (char **)val,
+- min_t(u32, nval, obj->package.count));
++ nval = min_t(u32, nval, obj->package.count);
++ if (nval == 0)
++ return -ENODATA;
++
++ ret = acpi_copy_property_array_string(items, (char **)val, nval);
+ break;
+ default:
+ ret = -EINVAL;
+diff --git a/drivers/ata/libata-sff.c b/drivers/ata/libata-sff.c
+index 67f277e1c3bf31..5a46c066abc365 100644
+--- a/drivers/ata/libata-sff.c
++++ b/drivers/ata/libata-sff.c
+@@ -601,7 +601,7 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
+ {
+ struct ata_port *ap = qc->ap;
+ struct page *page;
+- unsigned int offset;
++ unsigned int offset, count;
+
+ if (!qc->cursg) {
+ qc->curbytes = qc->nbytes;
+@@ -617,25 +617,27 @@ static void ata_pio_sector(struct ata_queued_cmd *qc)
+ page = nth_page(page, (offset >> PAGE_SHIFT));
+ offset %= PAGE_SIZE;
+
+- trace_ata_sff_pio_transfer_data(qc, offset, qc->sect_size);
++ /* don't overrun current sg */
++ count = min(qc->cursg->length - qc->cursg_ofs, qc->sect_size);
++
++ trace_ata_sff_pio_transfer_data(qc, offset, count);
+
+ /*
+ * Split the transfer when it splits a page boundary. Note that the
+ * split still has to be dword aligned like all ATA data transfers.
+ */
+ WARN_ON_ONCE(offset % 4);
+- if (offset + qc->sect_size > PAGE_SIZE) {
++ if (offset + count > PAGE_SIZE) {
+ unsigned int split_len = PAGE_SIZE - offset;
+
+ ata_pio_xfer(qc, page, offset, split_len);
+- ata_pio_xfer(qc, nth_page(page, 1), 0,
+- qc->sect_size - split_len);
++ ata_pio_xfer(qc, nth_page(page, 1), 0, count - split_len);
+ } else {
+- ata_pio_xfer(qc, page, offset, qc->sect_size);
++ ata_pio_xfer(qc, page, offset, count);
+ }
+
+- qc->curbytes += qc->sect_size;
+- qc->cursg_ofs += qc->sect_size;
++ qc->curbytes += count;
++ qc->cursg_ofs += count;
+
+ if (qc->cursg_ofs == qc->cursg->length) {
+ qc->cursg = sg_next(qc->cursg);
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 258a5cb6f27afe..6bc6dd417adf64 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -604,6 +604,8 @@ static const struct usb_device_id quirks_table[] = {
+ /* MediaTek MT7922 Bluetooth devices */
+ { USB_DEVICE(0x13d3, 0x3585), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x13d3, 0x3610), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
+
+ /* MediaTek MT7922A Bluetooth devices */
+ { USB_DEVICE(0x0489, 0xe0d8), .driver_info = BTUSB_MEDIATEK |
+@@ -668,6 +670,8 @@ static const struct usb_device_id quirks_table[] = {
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x13d3, 0x3608), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x13d3, 0x3628), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
+
+ /* Additional Realtek 8723AE Bluetooth devices */
+ { USB_DEVICE(0x0930, 0x021d), .driver_info = BTUSB_REALTEK },
+diff --git a/drivers/char/misc.c b/drivers/char/misc.c
+index 541edc26ec89a1..2cf595d2e10b85 100644
+--- a/drivers/char/misc.c
++++ b/drivers/char/misc.c
+@@ -63,16 +63,30 @@ static DEFINE_MUTEX(misc_mtx);
+ #define DYNAMIC_MINORS 128 /* like dynamic majors */
+ static DEFINE_IDA(misc_minors_ida);
+
+-static int misc_minor_alloc(void)
++static int misc_minor_alloc(int minor)
+ {
+- int ret;
+-
+- ret = ida_alloc_max(&misc_minors_ida, DYNAMIC_MINORS - 1, GFP_KERNEL);
+- if (ret >= 0) {
+- ret = DYNAMIC_MINORS - ret - 1;
++ int ret = 0;
++
++ if (minor == MISC_DYNAMIC_MINOR) {
++ /* allocate free id */
++ ret = ida_alloc_max(&misc_minors_ida, DYNAMIC_MINORS - 1, GFP_KERNEL);
++ if (ret >= 0) {
++ ret = DYNAMIC_MINORS - ret - 1;
++ } else {
++ ret = ida_alloc_range(&misc_minors_ida, MISC_DYNAMIC_MINOR + 1,
++ MINORMASK, GFP_KERNEL);
++ }
+ } else {
+- ret = ida_alloc_range(&misc_minors_ida, MISC_DYNAMIC_MINOR + 1,
+- MINORMASK, GFP_KERNEL);
++ /* specific minor, check if it is in dynamic or misc dynamic range */
++ if (minor < DYNAMIC_MINORS) {
++ minor = DYNAMIC_MINORS - minor - 1;
++ ret = ida_alloc_range(&misc_minors_ida, minor, minor, GFP_KERNEL);
++ } else if (minor > MISC_DYNAMIC_MINOR) {
++ ret = ida_alloc_range(&misc_minors_ida, minor, minor, GFP_KERNEL);
++ } else {
++ /* case of non-dynamic minors, no need to allocate id */
++ ret = 0;
++ }
+ }
+ return ret;
+ }
+@@ -219,7 +233,7 @@ int misc_register(struct miscdevice *misc)
+ mutex_lock(&misc_mtx);
+
+ if (is_dynamic) {
+- int i = misc_minor_alloc();
++ int i = misc_minor_alloc(misc->minor);
+
+ if (i < 0) {
+ err = -EBUSY;
+@@ -228,6 +242,7 @@ int misc_register(struct miscdevice *misc)
+ misc->minor = i;
+ } else {
+ struct miscdevice *c;
++ int i;
+
+ list_for_each_entry(c, &misc_list, list) {
+ if (c->minor == misc->minor) {
+@@ -235,6 +250,12 @@ int misc_register(struct miscdevice *misc)
+ goto out;
+ }
+ }
++
++ i = misc_minor_alloc(misc->minor);
++ if (i < 0) {
++ err = -EBUSY;
++ goto out;
++ }
+ }
+
+ dev = MKDEV(MISC_MAJOR, misc->minor);
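+
+The misc_minor_alloc() rework above folds three reservation policies into one
+IDA: dynamic minors come out of the classic 0..DYNAMIC_MINORS-1 window counted
+from the top (hence the DYNAMIC_MINORS - ret - 1 mirroring), with overflow
+spilling into the range above MISC_DYNAMIC_MINOR; fixed minors that land inside
+the dynamic window get their mirrored slot pinned so dynamic allocation can no
+longer hand them out; and classic static minors need no reservation at all. A
+condensed sketch of that policy, assuming only the standard <linux/idr.h>
+helpers:
+
+  #include <linux/idr.h>
+  #include <linux/kdev_t.h>       /* MINORMASK */
+  #include <linux/miscdevice.h>   /* MISC_DYNAMIC_MINOR */
+
+  #define DYNAMIC_MINORS 128      /* same window as drivers/char/misc.c */
+
+  static DEFINE_IDA(minors);
+
+  /*
+   * For MISC_DYNAMIC_MINOR the reserved minor is returned; for fixed
+   * minors the return value only signals success or failure.
+   */
+  static int misc_minor_reserve(int minor)
+  {
+          int ret = 0;
+
+          if (minor == MISC_DYNAMIC_MINOR) {
+                  /* Dynamic window first, counted down from the top... */
+                  ret = ida_alloc_max(&minors, DYNAMIC_MINORS - 1, GFP_KERNEL);
+                  if (ret >= 0)
+                          return DYNAMIC_MINORS - ret - 1;
+                  /* ...then anything above MISC_DYNAMIC_MINOR as fallback. */
+                  return ida_alloc_range(&minors, MISC_DYNAMIC_MINOR + 1,
+                                         MINORMASK, GFP_KERNEL);
+          }
+
+          if (minor < DYNAMIC_MINORS) {
+                  /* Fixed minor colliding with the dynamic window: pin the
+                   * mirrored slot so dynamic allocation skips it. */
+                  minor = DYNAMIC_MINORS - minor - 1;
+                  ret = ida_alloc_range(&minors, minor, minor, GFP_KERNEL);
+          } else if (minor > MISC_DYNAMIC_MINOR) {
+                  ret = ida_alloc_range(&minors, minor, minor, GFP_KERNEL);
+          }
+
+          return ret;     /* classic static minors: nothing to reserve */
+  }
+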
+diff --git a/drivers/char/tpm/eventlog/acpi.c b/drivers/char/tpm/eventlog/acpi.c
+index 69533d0bfb51e8..cf02ec646f46f0 100644
+--- a/drivers/char/tpm/eventlog/acpi.c
++++ b/drivers/char/tpm/eventlog/acpi.c
+@@ -63,6 +63,11 @@ static bool tpm_is_tpm2_log(void *bios_event_log, u64 len)
+ return n == 0;
+ }
+
++static void tpm_bios_log_free(void *data)
++{
++ kvfree(data);
++}
++
+ /* read binary bios log */
+ int tpm_read_log_acpi(struct tpm_chip *chip)
+ {
+@@ -136,7 +141,7 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
+ }
+
+ /* malloc EventLog space */
+- log->bios_event_log = devm_kmalloc(&chip->dev, len, GFP_KERNEL);
++ log->bios_event_log = kvmalloc(len, GFP_KERNEL);
+ if (!log->bios_event_log)
+ return -ENOMEM;
+
+@@ -161,10 +166,16 @@ int tpm_read_log_acpi(struct tpm_chip *chip)
+ goto err;
+ }
+
++ ret = devm_add_action(&chip->dev, tpm_bios_log_free, log->bios_event_log);
++ if (ret) {
++ log->bios_event_log = NULL;
++ goto err;
++ }
++
+ return format;
+
+ err:
+- devm_kfree(&chip->dev, log->bios_event_log);
++ tpm_bios_log_free(log->bios_event_log);
+ log->bios_event_log = NULL;
+ return ret;
+ }
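+
+The TPM hunk above trades a device-managed allocation for kvmalloc() plus a
+registered devm action: the event log can now be large enough to need vmalloc
+(which devm_kmalloc() cannot provide) while still being released automatically
+when the device goes away. A minimal sketch of the pattern, with made-up names:
+
+  #include <linux/device.h>
+  #include <linux/slab.h>   /* kvmalloc(), kvfree() */
+
+  static void big_buf_free(void *data)
+  {
+          kvfree(data);
+  }
+
+  static int attach_big_buffer(struct device *dev, size_t len, void **out)
+  {
+          void *buf;
+          int ret;
+
+          buf = kvmalloc(len, GFP_KERNEL);   /* may fall back to vmalloc */
+          if (!buf)
+                  return -ENOMEM;
+
+          /* Tie the buffer's lifetime to the device: on unbind the devres
+           * core calls big_buf_free(buf) for us. */
+          ret = devm_add_action(dev, big_buf_free, buf);
+          if (ret) {
+                  kvfree(buf);    /* action not registered, free by hand */
+                  return ret;
+          }
+
+          *out = buf;
+          return 0;
+  }
+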
+diff --git a/drivers/clk/clk-loongson2.c b/drivers/clk/clk-loongson2.c
+index 7082b4309c6f15..0d9485e83938a1 100644
+--- a/drivers/clk/clk-loongson2.c
++++ b/drivers/clk/clk-loongson2.c
+@@ -294,7 +294,7 @@ static int loongson2_clk_probe(struct platform_device *pdev)
+ return -EINVAL;
+
+ for (p = data; p->name; p++)
+- clks_num++;
++ clks_num = max(clks_num, p->id + 1);
+
+ clp = devm_kzalloc(dev, struct_size(clp, clk_data.hws, clks_num),
+ GFP_KERNEL);
+@@ -309,6 +309,9 @@ static int loongson2_clk_probe(struct platform_device *pdev)
+ clp->clk_data.num = clks_num;
+ clp->dev = dev;
+
++ /* Avoid returning NULL for unused id */
++ memset_p((void **)clp->clk_data.hws, ERR_PTR(-ENOENT), clks_num);
++
+ for (i = 0; i < clks_num; i++) {
+ p = &data[i];
+ switch (p->type) {
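+
+The clk-loongson2 change above sizes the clk_hw table by the highest clock id
+instead of the number of table entries, which can leave holes for ids the table
+never defines; pre-filling every slot with ERR_PTR(-ENOENT) turns a lookup of
+such a hole into a clean error instead of a NULL pointer. A sketch of the
+two-pass idea (register_one_clk() is a hypothetical stand-in for the driver's
+per-type switch):
+
+  #include <linux/clk-provider.h>
+  #include <linux/err.h>
+  #include <linux/string.h>       /* memset_p() */
+
+  struct clk_entry { const char *name; unsigned int id; };
+
+  struct clk_hw *register_one_clk(const struct clk_entry *p);  /* hypothetical */
+
+  static unsigned int clk_table_span(const struct clk_entry *tbl)
+  {
+          const struct clk_entry *p;
+          unsigned int num = 0;
+
+          /* Size by the largest id, not the entry count: ids may be sparse. */
+          for (p = tbl; p->name; p++)
+                  if (p->id + 1 > num)
+                          num = p->id + 1;
+          return num;
+  }
+
+  static void clk_table_fill(struct clk_hw **hws, unsigned int num,
+                             const struct clk_entry *tbl)
+  {
+          const struct clk_entry *p;
+
+          /* Unused ids must resolve to an error, never NULL, so consumers
+           * of the onecell table get -ENOENT instead of a NULL deref. */
+          memset_p((void **)hws, ERR_PTR(-ENOENT), num);
+
+          for (p = tbl; p->name; p++)
+                  hws[p->id] = register_one_clk(p);
+  }
+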
+diff --git a/drivers/clk/mediatek/clk-mt2701-aud.c b/drivers/clk/mediatek/clk-mt2701-aud.c
+index 425c69cfb105a6..e103121cf58e77 100644
+--- a/drivers/clk/mediatek/clk-mt2701-aud.c
++++ b/drivers/clk/mediatek/clk-mt2701-aud.c
+@@ -55,10 +55,16 @@ static const struct mtk_gate audio_clks[] = {
+ GATE_DUMMY(CLK_DUMMY, "aud_dummy"),
+ /* AUDIO0 */
+ GATE_AUDIO0(CLK_AUD_AFE, "audio_afe", "aud_intbus_sel", 2),
++ GATE_DUMMY(CLK_AUD_LRCK_DETECT, "audio_lrck_detect_dummy"),
++ GATE_DUMMY(CLK_AUD_I2S, "audio_i2c_dummy"),
++ GATE_DUMMY(CLK_AUD_APLL_TUNER, "audio_apll_tuner_dummy"),
+ GATE_AUDIO0(CLK_AUD_HDMI, "audio_hdmi", "audpll_sel", 20),
+ GATE_AUDIO0(CLK_AUD_SPDF, "audio_spdf", "audpll_sel", 21),
+ GATE_AUDIO0(CLK_AUD_SPDF2, "audio_spdf2", "audpll_sel", 22),
+ GATE_AUDIO0(CLK_AUD_APLL, "audio_apll", "audpll_sel", 23),
++ GATE_DUMMY(CLK_AUD_TML, "audio_tml_dummy"),
++ GATE_DUMMY(CLK_AUD_AHB_IDLE_EXT, "audio_ahb_idle_ext_dummy"),
++ GATE_DUMMY(CLK_AUD_AHB_IDLE_INT, "audio_ahb_idle_int_dummy"),
+ /* AUDIO1 */
+ GATE_AUDIO1(CLK_AUD_I2SIN1, "audio_i2sin1", "aud_mux1_sel", 0),
+ GATE_AUDIO1(CLK_AUD_I2SIN2, "audio_i2sin2", "aud_mux1_sel", 1),
+@@ -76,10 +82,12 @@ static const struct mtk_gate audio_clks[] = {
+ GATE_AUDIO1(CLK_AUD_ASRCI2, "audio_asrci2", "asm_h_sel", 13),
+ GATE_AUDIO1(CLK_AUD_ASRCO1, "audio_asrco1", "asm_h_sel", 14),
+ GATE_AUDIO1(CLK_AUD_ASRCO2, "audio_asrco2", "asm_h_sel", 15),
++ GATE_DUMMY(CLK_AUD_HDMIRX, "audio_hdmirx_dummy"),
+ GATE_AUDIO1(CLK_AUD_INTDIR, "audio_intdir", "intdir_sel", 20),
+ GATE_AUDIO1(CLK_AUD_A1SYS, "audio_a1sys", "aud_mux1_sel", 21),
+ GATE_AUDIO1(CLK_AUD_A2SYS, "audio_a2sys", "aud_mux2_sel", 22),
+ GATE_AUDIO1(CLK_AUD_AFE_CONN, "audio_afe_conn", "aud_mux1_sel", 23),
++ GATE_DUMMY(CLK_AUD_AFE_PCMIF, "audio_afe_pcmif_dummy"),
+ GATE_AUDIO1(CLK_AUD_AFE_MRGIF, "audio_afe_mrgif", "aud_mux1_sel", 25),
+ /* AUDIO2 */
+ GATE_AUDIO2(CLK_AUD_MMIF_UL1, "audio_ul1", "aud_mux1_sel", 0),
+@@ -100,6 +108,8 @@ static const struct mtk_gate audio_clks[] = {
+ GATE_AUDIO2(CLK_AUD_MMIF_AWB2, "audio_awb2", "aud_mux1_sel", 15),
+ GATE_AUDIO2(CLK_AUD_MMIF_DAI, "audio_dai", "aud_mux1_sel", 16),
+ /* AUDIO3 */
++ GATE_DUMMY(CLK_AUD_DMIC1, "audio_dmic1_dummy"),
++ GATE_DUMMY(CLK_AUD_DMIC2, "audio_dmic2_dummy"),
+ GATE_AUDIO3(CLK_AUD_ASRCI3, "audio_asrci3", "asm_h_sel", 2),
+ GATE_AUDIO3(CLK_AUD_ASRCI4, "audio_asrci4", "asm_h_sel", 3),
+ GATE_AUDIO3(CLK_AUD_ASRCI5, "audio_asrci5", "asm_h_sel", 4),
+diff --git a/drivers/clk/mediatek/clk-mt2701-bdp.c b/drivers/clk/mediatek/clk-mt2701-bdp.c
+index 5da3eabffd3e76..f11c7a4fa37b65 100644
+--- a/drivers/clk/mediatek/clk-mt2701-bdp.c
++++ b/drivers/clk/mediatek/clk-mt2701-bdp.c
+@@ -31,6 +31,7 @@ static const struct mtk_gate_regs bdp1_cg_regs = {
+ GATE_MTK(_id, _name, _parent, &bdp1_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+
+ static const struct mtk_gate bdp_clks[] = {
++ GATE_DUMMY(CLK_DUMMY, "bdp_dummy"),
+ GATE_BDP0(CLK_BDP_BRG_BA, "brg_baclk", "mm_sel", 0),
+ GATE_BDP0(CLK_BDP_BRG_DRAM, "brg_dram", "mm_sel", 1),
+ GATE_BDP0(CLK_BDP_LARB_DRAM, "larb_dram", "mm_sel", 2),
+diff --git a/drivers/clk/mediatek/clk-mt2701-img.c b/drivers/clk/mediatek/clk-mt2701-img.c
+index 875594bc9dcba8..c158e54c46526e 100644
+--- a/drivers/clk/mediatek/clk-mt2701-img.c
++++ b/drivers/clk/mediatek/clk-mt2701-img.c
+@@ -22,6 +22,7 @@ static const struct mtk_gate_regs img_cg_regs = {
+ GATE_MTK(_id, _name, _parent, &img_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+
+ static const struct mtk_gate img_clks[] = {
++ GATE_DUMMY(CLK_DUMMY, "img_dummy"),
+ GATE_IMG(CLK_IMG_SMI_COMM, "img_smi_comm", "mm_sel", 0),
+ GATE_IMG(CLK_IMG_RESZ, "img_resz", "mm_sel", 1),
+ GATE_IMG(CLK_IMG_JPGDEC_SMI, "img_jpgdec_smi", "mm_sel", 5),
+diff --git a/drivers/clk/mediatek/clk-mt2701-mm.c b/drivers/clk/mediatek/clk-mt2701-mm.c
+index bc68fa718878f9..474d87d62e8331 100644
+--- a/drivers/clk/mediatek/clk-mt2701-mm.c
++++ b/drivers/clk/mediatek/clk-mt2701-mm.c
+@@ -31,6 +31,7 @@ static const struct mtk_gate_regs disp1_cg_regs = {
+ GATE_MTK(_id, _name, _parent, &disp1_cg_regs, _shift, &mtk_clk_gate_ops_setclr)
+
+ static const struct mtk_gate mm_clks[] = {
++ GATE_DUMMY(CLK_DUMMY, "mm_dummy"),
+ GATE_DISP0(CLK_MM_SMI_COMMON, "mm_smi_comm", "mm_sel", 0),
+ GATE_DISP0(CLK_MM_SMI_LARB0, "mm_smi_larb0", "mm_sel", 1),
+ GATE_DISP0(CLK_MM_CMDQ, "mm_cmdq", "mm_sel", 2),
+diff --git a/drivers/clk/mediatek/clk-mt2701-vdec.c b/drivers/clk/mediatek/clk-mt2701-vdec.c
+index 94db86f8d0a462..5299d92f3aba0f 100644
+--- a/drivers/clk/mediatek/clk-mt2701-vdec.c
++++ b/drivers/clk/mediatek/clk-mt2701-vdec.c
+@@ -31,6 +31,7 @@ static const struct mtk_gate_regs vdec1_cg_regs = {
+ GATE_MTK(_id, _name, _parent, &vdec1_cg_regs, _shift, &mtk_clk_gate_ops_setclr_inv)
+
+ static const struct mtk_gate vdec_clks[] = {
++ GATE_DUMMY(CLK_DUMMY, "vdec_dummy"),
+ GATE_VDEC0(CLK_VDEC_CKGEN, "vdec_cken", "vdec_sel", 0),
+ GATE_VDEC1(CLK_VDEC_LARB, "vdec_larb_cken", "mm_sel", 0),
+ };
+diff --git a/drivers/clk/mmp/pwr-island.c b/drivers/clk/mmp/pwr-island.c
+index edaa2433a472ad..eaf5d2c5e59337 100644
+--- a/drivers/clk/mmp/pwr-island.c
++++ b/drivers/clk/mmp/pwr-island.c
+@@ -106,10 +106,10 @@ struct generic_pm_domain *mmp_pm_domain_register(const char *name,
+ pm_domain->flags = flags;
+ pm_domain->lock = lock;
+
+- pm_genpd_init(&pm_domain->genpd, NULL, true);
+ pm_domain->genpd.name = name;
+ pm_domain->genpd.power_on = mmp_pm_domain_power_on;
+ pm_domain->genpd.power_off = mmp_pm_domain_power_off;
++ pm_genpd_init(&pm_domain->genpd, NULL, true);
+
+ return &pm_domain->genpd;
+ }
+diff --git a/drivers/clk/qcom/Kconfig b/drivers/clk/qcom/Kconfig
+index 9ba675f229b144..16145f74bbc853 100644
+--- a/drivers/clk/qcom/Kconfig
++++ b/drivers/clk/qcom/Kconfig
+@@ -1022,6 +1022,7 @@ config SM_GCC_7150
+ config SM_GCC_8150
+ tristate "SM8150 Global Clock Controller"
+ depends on ARM64 || COMPILE_TEST
++ select QCOM_GDSC
+ help
+ Support for the global clock controller on SM8150 devices.
+ Say Y if you want to use peripheral devices such as UART,
+diff --git a/drivers/clk/qcom/clk-alpha-pll.c b/drivers/clk/qcom/clk-alpha-pll.c
+index 49687512184b92..10e276dabff93d 100644
+--- a/drivers/clk/qcom/clk-alpha-pll.c
++++ b/drivers/clk/qcom/clk-alpha-pll.c
+@@ -432,6 +432,8 @@ void clk_alpha_pll_configure(struct clk_alpha_pll *pll, struct regmap *regmap,
+ mask |= config->pre_div_mask;
+ mask |= config->post_div_mask;
+ mask |= config->vco_mask;
++ mask |= config->alpha_en_mask;
++ mask |= config->alpha_mode_mask;
+
+ regmap_update_bits(regmap, PLL_USER_CTL(pll), mask, val);
+
+diff --git a/drivers/clk/qcom/clk-rpmh.c b/drivers/clk/qcom/clk-rpmh.c
+index eefc322ce36798..e6c33010cfbf69 100644
+--- a/drivers/clk/qcom/clk-rpmh.c
++++ b/drivers/clk/qcom/clk-rpmh.c
+@@ -329,7 +329,7 @@ static unsigned long clk_rpmh_bcm_recalc_rate(struct clk_hw *hw,
+ {
+ struct clk_rpmh *c = to_clk_rpmh(hw);
+
+- return c->aggr_state * c->unit;
++ return (unsigned long)c->aggr_state * c->unit;
+ }
+
+ static const struct clk_ops clk_rpmh_bcm_ops = {
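+
+The clk-rpmh one-liner above is the classic C promotion trap: both operands of
+aggr_state * unit are 32-bit, so the product is computed in 32 bits and only
+then widened for the unsigned long return, silently wrapping for rates past
+2^32 Hz. Casting one operand first moves the whole multiply into the wider type
+(on 64-bit targets). A standalone demonstration with illustrative values:
+
+  #include <stdint.h>
+  #include <stdio.h>
+
+  int main(void)
+  {
+          uint32_t aggr_state = 300;      /* aggregated vote count */
+          uint32_t unit = 19200000;       /* 19.2 MHz per vote */
+
+          /* 300 * 19200000 = 5760000000 > UINT32_MAX: wraps to 1465032704 */
+          unsigned long wrong = aggr_state * unit;
+
+          /* Widening one operand first keeps the product in 64-bit math. */
+          unsigned long right = (unsigned long)aggr_state * unit;
+
+          printf("wrong=%lu right=%lu\n", wrong, right);
+          return 0;
+  }
+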
+diff --git a/drivers/clk/qcom/dispcc-sm6350.c b/drivers/clk/qcom/dispcc-sm6350.c
+index 50facb36701af9..2bc6b5f99f5725 100644
+--- a/drivers/clk/qcom/dispcc-sm6350.c
++++ b/drivers/clk/qcom/dispcc-sm6350.c
+@@ -187,13 +187,12 @@ static struct clk_rcg2 disp_cc_mdss_dp_aux_clk_src = {
+ .cmd_rcgr = 0x1144,
+ .mnd_width = 0,
+ .hid_width = 5,
++ .parent_map = disp_cc_parent_map_6,
+ .freq_tbl = ftbl_disp_cc_mdss_dp_aux_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "disp_cc_mdss_dp_aux_clk_src",
+- .parent_data = &(const struct clk_parent_data){
+- .fw_name = "bi_tcxo",
+- },
+- .num_parents = 1,
++ .parent_data = disp_cc_parent_data_6,
++ .num_parents = ARRAY_SIZE(disp_cc_parent_data_6),
+ .ops = &clk_rcg2_ops,
+ },
+ };
+diff --git a/drivers/clk/qcom/gcc-mdm9607.c b/drivers/clk/qcom/gcc-mdm9607.c
+index 6e6068b168e66e..07f1b78d737a15 100644
+--- a/drivers/clk/qcom/gcc-mdm9607.c
++++ b/drivers/clk/qcom/gcc-mdm9607.c
+@@ -535,7 +535,7 @@ static struct clk_rcg2 blsp1_uart5_apps_clk_src = {
+ };
+
+ static struct clk_rcg2 blsp1_uart6_apps_clk_src = {
+- .cmd_rcgr = 0x6044,
++ .cmd_rcgr = 0x7044,
+ .mnd_width = 16,
+ .hid_width = 5,
+ .parent_map = gcc_xo_gpll0_map,
+diff --git a/drivers/clk/qcom/gcc-sm6350.c b/drivers/clk/qcom/gcc-sm6350.c
+index a811fad2aa2785..74346dc026068a 100644
+--- a/drivers/clk/qcom/gcc-sm6350.c
++++ b/drivers/clk/qcom/gcc-sm6350.c
+@@ -182,6 +182,14 @@ static const struct clk_parent_data gcc_parent_data_2_ao[] = {
+ { .hw = &gpll0_out_odd.clkr.hw },
+ };
+
++static const struct parent_map gcc_parent_map_3[] = {
++ { P_BI_TCXO, 0 },
++};
++
++static const struct clk_parent_data gcc_parent_data_3[] = {
++ { .fw_name = "bi_tcxo" },
++};
++
+ static const struct parent_map gcc_parent_map_4[] = {
+ { P_BI_TCXO, 0 },
+ { P_GPLL0_OUT_MAIN, 1 },
+@@ -701,13 +709,12 @@ static struct clk_rcg2 gcc_ufs_phy_phy_aux_clk_src = {
+ .cmd_rcgr = 0x3a0b0,
+ .mnd_width = 0,
+ .hid_width = 5,
++ .parent_map = gcc_parent_map_3,
+ .freq_tbl = ftbl_gcc_ufs_phy_phy_aux_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_ufs_phy_phy_aux_clk_src",
+- .parent_data = &(const struct clk_parent_data){
+- .fw_name = "bi_tcxo",
+- },
+- .num_parents = 1,
++ .parent_data = gcc_parent_data_3,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_3),
+ .ops = &clk_rcg2_ops,
+ },
+ };
+@@ -764,13 +771,12 @@ static struct clk_rcg2 gcc_usb30_prim_mock_utmi_clk_src = {
+ .cmd_rcgr = 0x1a034,
+ .mnd_width = 0,
+ .hid_width = 5,
++ .parent_map = gcc_parent_map_3,
+ .freq_tbl = ftbl_gcc_usb30_prim_mock_utmi_clk_src,
+ .clkr.hw.init = &(struct clk_init_data){
+ .name = "gcc_usb30_prim_mock_utmi_clk_src",
+- .parent_data = &(const struct clk_parent_data){
+- .fw_name = "bi_tcxo",
+- },
+- .num_parents = 1,
++ .parent_data = gcc_parent_data_3,
++ .num_parents = ARRAY_SIZE(gcc_parent_data_3),
+ .ops = &clk_rcg2_ops,
+ },
+ };
+diff --git a/drivers/clk/qcom/gcc-sm8550.c b/drivers/clk/qcom/gcc-sm8550.c
+index 5abaeddd6afcc5..862a9bf73bcb5d 100644
+--- a/drivers/clk/qcom/gcc-sm8550.c
++++ b/drivers/clk/qcom/gcc-sm8550.c
+@@ -3003,7 +3003,7 @@ static struct gdsc pcie_0_gdsc = {
+ .pd = {
+ .name = "pcie_0_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = VOTABLE | POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -3014,7 +3014,7 @@ static struct gdsc pcie_0_phy_gdsc = {
+ .pd = {
+ .name = "pcie_0_phy_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = VOTABLE | POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -3025,7 +3025,7 @@ static struct gdsc pcie_1_gdsc = {
+ .pd = {
+ .name = "pcie_1_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = VOTABLE | POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -3036,7 +3036,7 @@ static struct gdsc pcie_1_phy_gdsc = {
+ .pd = {
+ .name = "pcie_1_phy_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = VOTABLE | POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+diff --git a/drivers/clk/qcom/gcc-sm8650.c b/drivers/clk/qcom/gcc-sm8650.c
+index fd9d6544bdd53a..9dd5c48f33bed5 100644
+--- a/drivers/clk/qcom/gcc-sm8650.c
++++ b/drivers/clk/qcom/gcc-sm8650.c
+@@ -3437,7 +3437,7 @@ static struct gdsc pcie_0_gdsc = {
+ .pd = {
+ .name = "pcie_0_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE | VOTABLE,
+ };
+
+@@ -3448,7 +3448,7 @@ static struct gdsc pcie_0_phy_gdsc = {
+ .pd = {
+ .name = "pcie_0_phy_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE | VOTABLE,
+ };
+
+@@ -3459,7 +3459,7 @@ static struct gdsc pcie_1_gdsc = {
+ .pd = {
+ .name = "pcie_1_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE | VOTABLE,
+ };
+
+@@ -3470,7 +3470,7 @@ static struct gdsc pcie_1_phy_gdsc = {
+ .pd = {
+ .name = "pcie_1_phy_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE | VOTABLE,
+ };
+
+diff --git a/drivers/clk/sunxi-ng/ccu-sun50i-a100.c b/drivers/clk/sunxi-ng/ccu-sun50i-a100.c
+index bbaa82978716a9..a59e420b195d77 100644
+--- a/drivers/clk/sunxi-ng/ccu-sun50i-a100.c
++++ b/drivers/clk/sunxi-ng/ccu-sun50i-a100.c
+@@ -436,7 +436,7 @@ static SUNXI_CCU_MP_WITH_MUX_GATE_POSTDIV(mmc0_clk, "mmc0", mmc_parents, 0x830,
+ 24, 2, /* mux */
+ BIT(31), /* gate */
+ 2, /* post-div */
+- CLK_SET_RATE_NO_REPARENT);
++ 0);
+
+ static SUNXI_CCU_MP_WITH_MUX_GATE_POSTDIV(mmc1_clk, "mmc1", mmc_parents, 0x834,
+ 0, 4, /* M */
+@@ -444,7 +444,7 @@ static SUNXI_CCU_MP_WITH_MUX_GATE_POSTDIV(mmc1_clk, "mmc1", mmc_parents, 0x834,
+ 24, 2, /* mux */
+ BIT(31), /* gate */
+ 2, /* post-div */
+- CLK_SET_RATE_NO_REPARENT);
++ 0);
+
+ static SUNXI_CCU_MP_WITH_MUX_GATE_POSTDIV(mmc2_clk, "mmc2", mmc_parents, 0x838,
+ 0, 4, /* M */
+@@ -452,7 +452,7 @@ static SUNXI_CCU_MP_WITH_MUX_GATE_POSTDIV(mmc2_clk, "mmc2", mmc_parents, 0x838,
+ 24, 2, /* mux */
+ BIT(31), /* gate */
+ 2, /* post-div */
+- CLK_SET_RATE_NO_REPARENT);
++ 0);
+
+ static SUNXI_CCU_GATE(bus_mmc0_clk, "bus-mmc0", "ahb3", 0x84c, BIT(0), 0);
+ static SUNXI_CCU_GATE(bus_mmc1_clk, "bus-mmc1", "ahb3", 0x84c, BIT(1), 0);
+diff --git a/drivers/cpufreq/Kconfig b/drivers/cpufreq/Kconfig
+index 588ab1cc6d557c..f089a1b9c0c98a 100644
+--- a/drivers/cpufreq/Kconfig
++++ b/drivers/cpufreq/Kconfig
+@@ -218,7 +218,7 @@ config CPUFREQ_DT
+ If in doubt, say N.
+
+ config CPUFREQ_DT_PLATDEV
+- tristate "Generic DT based cpufreq platdev driver"
++ bool "Generic DT based cpufreq platdev driver"
+ depends on OF
+ help
+ This adds a generic DT based cpufreq platdev driver for frequency
+diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
+index 18942bfe9c95f7..78ad3221fe077e 100644
+--- a/drivers/cpufreq/cpufreq-dt-platdev.c
++++ b/drivers/cpufreq/cpufreq-dt-platdev.c
+@@ -234,5 +234,3 @@ static int __init cpufreq_dt_platdev_init(void)
+ sizeof(struct cpufreq_dt_platform_data)));
+ }
+ core_initcall(cpufreq_dt_platdev_init);
+-MODULE_DESCRIPTION("Generic DT based cpufreq platdev driver");
+-MODULE_LICENSE("GPL");
+diff --git a/drivers/cpufreq/s3c64xx-cpufreq.c b/drivers/cpufreq/s3c64xx-cpufreq.c
+index c6bdfc308e9908..9cef7152807626 100644
+--- a/drivers/cpufreq/s3c64xx-cpufreq.c
++++ b/drivers/cpufreq/s3c64xx-cpufreq.c
+@@ -24,6 +24,7 @@ struct s3c64xx_dvfs {
+ unsigned int vddarm_max;
+ };
+
++#ifdef CONFIG_REGULATOR
+ static struct s3c64xx_dvfs s3c64xx_dvfs_table[] = {
+ [0] = { 1000000, 1150000 },
+ [1] = { 1050000, 1150000 },
+@@ -31,6 +32,7 @@ static struct s3c64xx_dvfs s3c64xx_dvfs_table[] = {
+ [3] = { 1200000, 1350000 },
+ [4] = { 1300000, 1350000 },
+ };
++#endif
+
+ static struct cpufreq_frequency_table s3c64xx_freq_table[] = {
+ { 0, 0, 66000 },
+@@ -51,15 +53,16 @@ static struct cpufreq_frequency_table s3c64xx_freq_table[] = {
+ static int s3c64xx_cpufreq_set_target(struct cpufreq_policy *policy,
+ unsigned int index)
+ {
+- struct s3c64xx_dvfs *dvfs;
+- unsigned int old_freq, new_freq;
++ unsigned int new_freq = s3c64xx_freq_table[index].frequency;
+ int ret;
+
++#ifdef CONFIG_REGULATOR
++ struct s3c64xx_dvfs *dvfs;
++ unsigned int old_freq;
++
+ old_freq = clk_get_rate(policy->clk) / 1000;
+- new_freq = s3c64xx_freq_table[index].frequency;
+ dvfs = &s3c64xx_dvfs_table[s3c64xx_freq_table[index].driver_data];
+
+-#ifdef CONFIG_REGULATOR
+ if (vddarm && new_freq > old_freq) {
+ ret = regulator_set_voltage(vddarm,
+ dvfs->vddarm_min,
+diff --git a/drivers/crypto/qce/aead.c b/drivers/crypto/qce/aead.c
+index 7d811728f04782..97b56e92ea33f5 100644
+--- a/drivers/crypto/qce/aead.c
++++ b/drivers/crypto/qce/aead.c
+@@ -786,7 +786,7 @@ static int qce_aead_register_one(const struct qce_aead_def *def, struct qce_devi
+ alg->init = qce_aead_init;
+ alg->exit = qce_aead_exit;
+
+- alg->base.cra_priority = 300;
++ alg->base.cra_priority = 275;
+ alg->base.cra_flags = CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_ALLOCATES_MEMORY |
+ CRYPTO_ALG_KERN_DRIVER_ONLY |
+diff --git a/drivers/crypto/qce/core.c b/drivers/crypto/qce/core.c
+index 28b5fd82382775..3ec8297ed3fff7 100644
+--- a/drivers/crypto/qce/core.c
++++ b/drivers/crypto/qce/core.c
+@@ -51,16 +51,19 @@ static void qce_unregister_algs(struct qce_device *qce)
+ static int qce_register_algs(struct qce_device *qce)
+ {
+ const struct qce_algo_ops *ops;
+- int i, ret = -ENODEV;
++ int i, j, ret = -ENODEV;
+
+ for (i = 0; i < ARRAY_SIZE(qce_ops); i++) {
+ ops = qce_ops[i];
+ ret = ops->register_algs(qce);
+- if (ret)
+- break;
++ if (ret) {
++ for (j = i - 1; j >= 0; j--)
++ ops->unregister_algs(qce);
++ return ret;
++ }
+ }
+
+- return ret;
++ return 0;
+ }
+
+ static int qce_handle_request(struct crypto_async_request *async_req)
+@@ -247,7 +250,7 @@ static int qce_crypto_probe(struct platform_device *pdev)
+
+ ret = qce_check_version(qce);
+ if (ret)
+- goto err_clks;
++ goto err_dma;
+
+ spin_lock_init(&qce->lock);
+ tasklet_init(&qce->done_tasklet, qce_tasklet_req_done,
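+
+The qce_register_algs() change above replaces "stop at the first failure" with
+"unwind everything registered so far, then fail", so a late probe error no
+longer strands earlier algorithms in the crypto core. The general unwind idiom
+looks like this; note that the rollback loop has to walk the previously
+registered table entries in reverse:
+
+  /* Hypothetical ops table mirroring the shape of qce_ops[]. */
+  struct algo_ops {
+          int  (*register_algs)(void *ctx);
+          void (*unregister_algs)(void *ctx);
+  };
+
+  static int register_all(const struct algo_ops **ops, int n, void *ctx)
+  {
+          int i, j, ret;
+
+          for (i = 0; i < n; i++) {
+                  ret = ops[i]->register_algs(ctx);
+                  if (ret) {
+                          /* Undo entries [0, i) in reverse order. */
+                          for (j = i - 1; j >= 0; j--)
+                                  ops[j]->unregister_algs(ctx);
+                          return ret;
+                  }
+          }
+          return 0;
+  }
+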
+diff --git a/drivers/crypto/qce/sha.c b/drivers/crypto/qce/sha.c
+index fc72af8aa9a725..71b748183cfa86 100644
+--- a/drivers/crypto/qce/sha.c
++++ b/drivers/crypto/qce/sha.c
+@@ -482,7 +482,7 @@ static int qce_ahash_register_one(const struct qce_ahash_def *def,
+
+ base = &alg->halg.base;
+ base->cra_blocksize = def->blocksize;
+- base->cra_priority = 300;
++ base->cra_priority = 175;
+ base->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY;
+ base->cra_ctxsize = sizeof(struct qce_sha_ctx);
+ base->cra_alignmask = 0;
+diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
+index 5b493fdc1e747f..ffb334eb5b3461 100644
+--- a/drivers/crypto/qce/skcipher.c
++++ b/drivers/crypto/qce/skcipher.c
+@@ -461,7 +461,7 @@ static int qce_skcipher_register_one(const struct qce_skcipher_def *def,
+ alg->encrypt = qce_skcipher_encrypt;
+ alg->decrypt = qce_skcipher_decrypt;
+
+- alg->base.cra_priority = 300;
++ alg->base.cra_priority = 275;
+ alg->base.cra_flags = CRYPTO_ALG_ASYNC |
+ CRYPTO_ALG_ALLOCATES_MEMORY |
+ CRYPTO_ALG_KERN_DRIVER_ONLY;
+diff --git a/drivers/firmware/Kconfig b/drivers/firmware/Kconfig
+index 71d8b26c4103b9..9f35f69e0f9e2b 100644
+--- a/drivers/firmware/Kconfig
++++ b/drivers/firmware/Kconfig
+@@ -106,7 +106,7 @@ config ISCSI_IBFT
+ select ISCSI_BOOT_SYSFS
+ select ISCSI_IBFT_FIND if X86
+ depends on ACPI && SCSI && SCSI_LOWLEVEL
+- default n
++ default n
+ help
+ This option enables support for detection and exposing of iSCSI
+ Boot Firmware Table (iBFT) via sysfs to userspace. If you wish to
+diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
+index ed4e8ddbe76a50..1141cd06011ff4 100644
+--- a/drivers/firmware/efi/libstub/Makefile
++++ b/drivers/firmware/efi/libstub/Makefile
+@@ -11,7 +11,7 @@ cflags-y := $(KBUILD_CFLAGS)
+
+ cflags-$(CONFIG_X86_32) := -march=i386
+ cflags-$(CONFIG_X86_64) := -mcmodel=small
+-cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ \
++cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ -std=gnu11 \
+ -fPIC -fno-strict-aliasing -mno-red-zone \
+ -mno-mmx -mno-sse -fshort-wchar \
+ -Wno-pointer-sign \
+diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c
+index a6bdedbbf70888..2e093c39b610ae 100644
+--- a/drivers/firmware/qcom/qcom_scm.c
++++ b/drivers/firmware/qcom/qcom_scm.c
+@@ -217,7 +217,10 @@ static DEFINE_SPINLOCK(scm_query_lock);
+
+ struct qcom_tzmem_pool *qcom_scm_get_tzmem_pool(void)
+ {
+- return __scm ? __scm->mempool : NULL;
++ if (!qcom_scm_is_available())
++ return NULL;
++
++ return __scm->mempool;
+ }
+
+ static enum qcom_scm_convention __get_convention(void)
+@@ -1839,7 +1842,8 @@ static int qcom_scm_qseecom_init(struct qcom_scm *scm)
+ */
+ bool qcom_scm_is_available(void)
+ {
+- return !!READ_ONCE(__scm);
++ /* Paired with smp_store_release() in qcom_scm_probe */
++ return !!smp_load_acquire(&__scm);
+ }
+ EXPORT_SYMBOL_GPL(qcom_scm_is_available);
+
+@@ -1996,7 +2000,7 @@ static int qcom_scm_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
+- /* Let all above stores be available after this */
++ /* Paired with smp_load_acquire() in qcom_scm_is_available(). */
+ smp_store_release(&__scm, scm);
+
+ irq = platform_get_irq_optional(pdev, 0);
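+
+The qcom_scm hunks above upgrade a plain READ_ONCE() of the global __scm
+pointer to an acquire load, pairing it with the smp_store_release() in probe:
+any caller that observes a non-NULL pointer is now guaranteed to also observe
+every initialization store made before the pointer was published. A
+stripped-down sketch of the publish/consume pairing:
+
+  #include <linux/atomic.h>  /* smp_store_release(), smp_load_acquire() */
+
+  struct engine {
+          int ready;
+          /* ... more state initialized before publication ... */
+  };
+
+  static struct engine *g_engine;    /* NULL until published */
+
+  static void engine_publish(struct engine *e)
+  {
+          e->ready = 1;    /* initialize everything first... */
+
+          /* ...then publish: the release orders all prior stores before
+           * the pointer becomes visible to acquire loads. */
+          smp_store_release(&g_engine, e);
+  }
+
+  static bool engine_is_available(void)
+  {
+          /* Paired acquire: seeing the pointer implies seeing the
+           * initialization done before the release above. */
+          return !!smp_load_acquire(&g_engine);
+  }
+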
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index e49802f26e07f8..d764a3af634670 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -841,25 +841,6 @@ static bool pca953x_irq_pending(struct pca953x_chip *chip, unsigned long *pendin
+ DECLARE_BITMAP(trigger, MAX_LINE);
+ int ret;
+
+- if (chip->driver_data & PCA_PCAL) {
+- /* Read the current interrupt status from the device */
+- ret = pca953x_read_regs(chip, PCAL953X_INT_STAT, trigger);
+- if (ret)
+- return false;
+-
+- /* Check latched inputs and clear interrupt status */
+- ret = pca953x_read_regs(chip, chip->regs->input, cur_stat);
+- if (ret)
+- return false;
+-
+- /* Apply filter for rising/falling edge selection */
+- bitmap_replace(new_stat, chip->irq_trig_fall, chip->irq_trig_raise, cur_stat, gc->ngpio);
+-
+- bitmap_and(pending, new_stat, trigger, gc->ngpio);
+-
+- return !bitmap_empty(pending, gc->ngpio);
+- }
+-
+ ret = pca953x_read_regs(chip, chip->regs->input, cur_stat);
+ if (ret)
+ return false;
+diff --git a/drivers/gpio/gpio-sim.c b/drivers/gpio/gpio-sim.c
+index deedacdeb23952..f83a8b5a51d0dc 100644
+--- a/drivers/gpio/gpio-sim.c
++++ b/drivers/gpio/gpio-sim.c
+@@ -1036,20 +1036,23 @@ gpio_sim_device_lockup_configfs(struct gpio_sim_device *dev, bool lock)
+ struct configfs_subsystem *subsys = dev->group.cg_subsys;
+ struct gpio_sim_bank *bank;
+ struct gpio_sim_line *line;
++ struct config_item *item;
+
+ /*
+- * The device only needs to depend on leaf line entries. This is
++ * The device only needs to depend on leaf entries. This is
+ * sufficient to lock up all the configfs entries that the
+ * instantiated, alive device depends on.
+ */
+ list_for_each_entry(bank, &dev->bank_list, siblings) {
+ list_for_each_entry(line, &bank->line_list, siblings) {
++ item = line->hog ? &line->hog->item
++ : &line->group.cg_item;
++
+ if (lock)
+- WARN_ON(configfs_depend_item_unlocked(
+- subsys, &line->group.cg_item));
++ WARN_ON(configfs_depend_item_unlocked(subsys,
++ item));
+ else
+- configfs_undepend_item_unlocked(
+- &line->group.cg_item);
++ configfs_undepend_item_unlocked(item);
+ }
+ }
+ }
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index 610e159d362ad6..7408ea8caacc3c 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -486,6 +486,10 @@ config DRM_HYPERV
+ config DRM_EXPORT_FOR_TESTS
+ bool
+
++# Separate option as not all DRM drivers use it
++config DRM_PANEL_BACKLIGHT_QUIRKS
++ tristate
++
+ config DRM_LIB_RANDOM
+ bool
+ default n
+diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
+index 784229d4504dcb..84746054c721a3 100644
+--- a/drivers/gpu/drm/Makefile
++++ b/drivers/gpu/drm/Makefile
+@@ -93,6 +93,7 @@ drm-$(CONFIG_DRM_PANIC_SCREEN_QR_CODE) += drm_panic_qr.o
+ obj-$(CONFIG_DRM) += drm.o
+
+ obj-$(CONFIG_DRM_PANEL_ORIENTATION_QUIRKS) += drm_panel_orientation_quirks.o
++obj-$(CONFIG_DRM_PANEL_BACKLIGHT_QUIRKS) += drm_panel_backlight_quirks.o
+
+ #
+ # Memory-management helpers
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 81d9877c87357d..c27b4c36a7c0f5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -118,9 +118,10 @@
+ * - 3.57.0 - Compute tunneling on GFX10+
+ * - 3.58.0 - Add GFX12 DCC support
+ * - 3.59.0 - Cleared VRAM
++ * - 3.60.0 - Add AMDGPU_TILING_GFX12_DCC_WRITE_COMPRESS_DISABLE (Vulkan requirement)
+ */
+ #define KMS_DRIVER_MAJOR 3
+-#define KMS_DRIVER_MINOR 59
++#define KMS_DRIVER_MINOR 60
+ #define KMS_DRIVER_PATCHLEVEL 0
+
+ /*
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+index 156abd2ba5a6c6..05ebb8216a55a5 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gfx.c
+@@ -1837,6 +1837,7 @@ void amdgpu_gfx_enforce_isolation_ring_begin_use(struct amdgpu_ring *ring)
+ {
+ struct amdgpu_device *adev = ring->adev;
+ u32 idx;
++ bool sched_work = false;
+
+ if (!adev->gfx.enable_cleaner_shader)
+ return;
+@@ -1852,15 +1853,19 @@ void amdgpu_gfx_enforce_isolation_ring_begin_use(struct amdgpu_ring *ring)
+ mutex_lock(&adev->enforce_isolation_mutex);
+ if (adev->enforce_isolation[idx]) {
+ if (adev->kfd.init_complete)
+- amdgpu_gfx_kfd_sch_ctrl(adev, idx, false);
++ sched_work = true;
+ }
+ mutex_unlock(&adev->enforce_isolation_mutex);
++
++ if (sched_work)
++ amdgpu_gfx_kfd_sch_ctrl(adev, idx, false);
+ }
+
+ void amdgpu_gfx_enforce_isolation_ring_end_use(struct amdgpu_ring *ring)
+ {
+ struct amdgpu_device *adev = ring->adev;
+ u32 idx;
++ bool sched_work = false;
+
+ if (!adev->gfx.enable_cleaner_shader)
+ return;
+@@ -1876,7 +1881,10 @@ void amdgpu_gfx_enforce_isolation_ring_end_use(struct amdgpu_ring *ring)
+ mutex_lock(&adev->enforce_isolation_mutex);
+ if (adev->enforce_isolation[idx]) {
+ if (adev->kfd.init_complete)
+- amdgpu_gfx_kfd_sch_ctrl(adev, idx, true);
++ sched_work = true;
+ }
+ mutex_unlock(&adev->enforce_isolation_mutex);
++
++ if (sched_work)
++ amdgpu_gfx_kfd_sch_ctrl(adev, idx, true);
+ }
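+
+Both amdgpu_gfx hunks above make the same transformation: the decision (is
+enforced isolation active?) stays under enforce_isolation_mutex, but the
+action, amdgpu_gfx_kfd_sch_ctrl(), now runs after the mutex is dropped, since
+it can block or take further locks of its own. Shrinking the critical section
+to just the decision is the usual cure for lock-nesting trouble:
+
+  #include <linux/mutex.h>
+
+  static DEFINE_MUTEX(state_lock);
+  static bool isolation_enabled;
+
+  static void heavy_action(bool enable);  /* may sleep or take other locks */
+
+  static void ring_begin_use(void)
+  {
+          bool sched_work = false;
+
+          mutex_lock(&state_lock);
+          if (isolation_enabled)          /* decide under the lock... */
+                  sched_work = true;
+          mutex_unlock(&state_lock);
+
+          if (sched_work)                 /* ...act outside of it */
+                  heavy_action(false);
+  }
+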
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index ae9ca6788df78c..425073d994912f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -309,7 +309,7 @@ int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
+ mutex_lock(&adev->mman.gtt_window_lock);
+ while (src_mm.remaining) {
+ uint64_t from, to, cur_size, tiling_flags;
+- uint32_t num_type, data_format, max_com;
++ uint32_t num_type, data_format, max_com, write_compress_disable;
+ struct dma_fence *next;
+
+ /* Never copy more than 256MiB at once to avoid a timeout */
+@@ -340,9 +340,13 @@ int amdgpu_ttm_copy_mem_to_mem(struct amdgpu_device *adev,
+ max_com = AMDGPU_TILING_GET(tiling_flags, GFX12_DCC_MAX_COMPRESSED_BLOCK);
+ num_type = AMDGPU_TILING_GET(tiling_flags, GFX12_DCC_NUMBER_TYPE);
+ data_format = AMDGPU_TILING_GET(tiling_flags, GFX12_DCC_DATA_FORMAT);
++ write_compress_disable =
++ AMDGPU_TILING_GET(tiling_flags, GFX12_DCC_WRITE_COMPRESS_DISABLE);
+ copy_flags |= (AMDGPU_COPY_FLAGS_SET(MAX_COMPRESSED, max_com) |
+ AMDGPU_COPY_FLAGS_SET(NUMBER_TYPE, num_type) |
+- AMDGPU_COPY_FLAGS_SET(DATA_FORMAT, data_format));
++ AMDGPU_COPY_FLAGS_SET(DATA_FORMAT, data_format) |
++ AMDGPU_COPY_FLAGS_SET(WRITE_COMPRESS_DISABLE,
++ write_compress_disable));
+ }
+
+ r = amdgpu_copy_buffer(ring, from, to, cur_size, resv,
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+index 138d80017f3564..b7742fa74e1de2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.h
+@@ -118,6 +118,8 @@ struct amdgpu_copy_mem {
+ #define AMDGPU_COPY_FLAGS_NUMBER_TYPE_MASK 0x07
+ #define AMDGPU_COPY_FLAGS_DATA_FORMAT_SHIFT 8
+ #define AMDGPU_COPY_FLAGS_DATA_FORMAT_MASK 0x3f
++#define AMDGPU_COPY_FLAGS_WRITE_COMPRESS_DISABLE_SHIFT 14
++#define AMDGPU_COPY_FLAGS_WRITE_COMPRESS_DISABLE_MASK 0x1
+
+ #define AMDGPU_COPY_FLAGS_SET(field, value) \
+ (((__u32)(value) & AMDGPU_COPY_FLAGS_##field##_MASK) << AMDGPU_COPY_FLAGS_##field##_SHIFT)
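+
+The new WRITE_COMPRESS_DISABLE bit slots into the existing
+AMDGPU_COPY_FLAGS_SET()/_GET() scheme, where every field owns a (SHIFT, MASK)
+pair and token-pasting macros pack and unpack it. A self-contained illustration
+of the pattern (field names here are arbitrary, but the layout echoes the
+header above: a 6-bit field at bit 8 and a 1-bit field at bit 14):
+
+  #include <assert.h>
+  #include <stdint.h>
+
+  #define FLAGS_ALPHA_SHIFT  0
+  #define FLAGS_ALPHA_MASK   0xff
+  #define FLAGS_BETA_SHIFT   8
+  #define FLAGS_BETA_MASK    0x3f
+  #define FLAGS_GAMMA_SHIFT  14
+  #define FLAGS_GAMMA_MASK   0x1
+
+  #define FLAGS_SET(field, value) \
+          (((uint32_t)(value) & FLAGS_##field##_MASK) << FLAGS_##field##_SHIFT)
+  #define FLAGS_GET(flags, field) \
+          (((uint32_t)(flags) >> FLAGS_##field##_SHIFT) & FLAGS_##field##_MASK)
+
+  int main(void)
+  {
+          uint32_t flags = FLAGS_SET(ALPHA, 0x12) |
+                           FLAGS_SET(BETA, 0x21) |
+                           FLAGS_SET(GAMMA, 1);
+
+          assert(FLAGS_GET(flags, ALPHA) == 0x12);
+          assert(FLAGS_GET(flags, BETA)  == 0x21);
+          assert(FLAGS_GET(flags, GAMMA) == 1);
+          /* New fields slot in by claiming the next free shift, exactly as
+           * WRITE_COMPRESS_DISABLE claims bit 14 above. */
+          return 0;
+  }
+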
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index 6c19626ec59e9d..ca130880edfd42 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -3981,17 +3981,6 @@ static void gfx_v12_0_update_coarse_grain_clock_gating(struct amdgpu_device *ade
+
+ if (def != data)
+ WREG32_SOC15(GC, 0, regRLC_CGCG_CGLS_CTRL_3D, data);
+-
+- data = RREG32_SOC15(GC, 0, regSDMA0_RLC_CGCG_CTRL);
+- data &= ~SDMA0_RLC_CGCG_CTRL__CGCG_INT_ENABLE_MASK;
+- WREG32_SOC15(GC, 0, regSDMA0_RLC_CGCG_CTRL, data);
+-
+- /* Some ASICs only have one SDMA instance, not need to configure SDMA1 */
+- if (adev->sdma.num_instances > 1) {
+- data = RREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL);
+- data &= ~SDMA1_RLC_CGCG_CTRL__CGCG_INT_ENABLE_MASK;
+- WREG32_SOC15(GC, 0, regSDMA1_RLC_CGCG_CTRL, data);
+- }
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+index c77889040760ad..4dd86c682ee6a2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_4_2.c
+@@ -953,10 +953,12 @@ static int sdma_v4_4_2_inst_start(struct amdgpu_device *adev,
+ /* set utc l1 enable flag always to 1 */
+ temp = RREG32_SDMA(i, regSDMA_CNTL);
+ temp = REG_SET_FIELD(temp, SDMA_CNTL, UTC_L1_ENABLE, 1);
+- /* enable context empty interrupt during initialization */
+- temp = REG_SET_FIELD(temp, SDMA_CNTL, CTXEMPTY_INT_ENABLE, 1);
+- WREG32_SDMA(i, regSDMA_CNTL, temp);
+
++ if (amdgpu_ip_version(adev, SDMA0_HWIP, 0) < IP_VERSION(4, 4, 5)) {
++ /* enable context empty interrupt during initialization */
++ temp = REG_SET_FIELD(temp, SDMA_CNTL, CTXEMPTY_INT_ENABLE, 1);
++ WREG32_SDMA(i, regSDMA_CNTL, temp);
++ }
+ if (!amdgpu_sriov_vf(adev)) {
+ if (adev->firmware.load_type != AMDGPU_FW_LOAD_PSP) {
+ /* unhalt engine */
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+index 9288f37a3cc5c3..843e6b46deee82 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v7_0.c
+@@ -1688,11 +1688,12 @@ static void sdma_v7_0_emit_copy_buffer(struct amdgpu_ib *ib,
+ uint32_t byte_count,
+ uint32_t copy_flags)
+ {
+- uint32_t num_type, data_format, max_com;
++ uint32_t num_type, data_format, max_com, write_cm;
+
+ max_com = AMDGPU_COPY_FLAGS_GET(copy_flags, MAX_COMPRESSED);
+ data_format = AMDGPU_COPY_FLAGS_GET(copy_flags, DATA_FORMAT);
+ num_type = AMDGPU_COPY_FLAGS_GET(copy_flags, NUMBER_TYPE);
++ write_cm = AMDGPU_COPY_FLAGS_GET(copy_flags, WRITE_COMPRESS_DISABLE) ? 2 : 1;
+
+ ib->ptr[ib->length_dw++] = SDMA_PKT_COPY_LINEAR_HEADER_OP(SDMA_OP_COPY) |
+ SDMA_PKT_COPY_LINEAR_HEADER_SUB_OP(SDMA_SUBOP_COPY_LINEAR) |
+@@ -1709,7 +1710,7 @@ static void sdma_v7_0_emit_copy_buffer(struct amdgpu_ib *ib,
+ if ((copy_flags & (AMDGPU_COPY_FLAGS_READ_DECOMPRESSED | AMDGPU_COPY_FLAGS_WRITE_COMPRESSED)))
+ ib->ptr[ib->length_dw++] = SDMA_DCC_DATA_FORMAT(data_format) | SDMA_DCC_NUM_TYPE(num_type) |
+ ((copy_flags & AMDGPU_COPY_FLAGS_READ_DECOMPRESSED) ? SDMA_DCC_READ_CM(2) : 0) |
+- ((copy_flags & AMDGPU_COPY_FLAGS_WRITE_COMPRESSED) ? SDMA_DCC_WRITE_CM(1) : 0) |
++ ((copy_flags & AMDGPU_COPY_FLAGS_WRITE_COMPRESSED) ? SDMA_DCC_WRITE_CM(write_cm) : 0) |
+ SDMA_DCC_MAX_COM(max_com) | SDMA_DCC_MAX_UCOM(1);
+ else
+ ib->ptr[ib->length_dw++] = 0;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index b05be24531e187..d350c7ce35b3d6 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -637,6 +637,14 @@ static void kfd_cleanup_nodes(struct kfd_dev *kfd, unsigned int num_nodes)
+ struct kfd_node *knode;
+ unsigned int i;
+
++ /*
++ * flush_work ensures that there are no outstanding
++ * work-queue items that will access interrupt_ring. New work items
++ * can't be created because we stopped interrupt handling above.
++ */
++ flush_workqueue(kfd->ih_wq);
++ destroy_workqueue(kfd->ih_wq);
++
+ for (i = 0; i < num_nodes; i++) {
+ knode = kfd->nodes[i];
+ device_queue_manager_uninit(knode->dqm);
+@@ -1058,21 +1066,6 @@ static int kfd_resume(struct kfd_node *node)
+ return err;
+ }
+
+-static inline void kfd_queue_work(struct workqueue_struct *wq,
+- struct work_struct *work)
+-{
+- int cpu, new_cpu;
+-
+- cpu = new_cpu = smp_processor_id();
+- do {
+- new_cpu = cpumask_next(new_cpu, cpu_online_mask) % nr_cpu_ids;
+- if (cpu_to_node(new_cpu) == numa_node_id())
+- break;
+- } while (cpu != new_cpu);
+-
+- queue_work_on(new_cpu, wq, work);
+-}
+-
+ /* This is called directly from KGD at ISR. */
+ void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry)
+ {
+@@ -1098,7 +1091,7 @@ void kgd2kfd_interrupt(struct kfd_dev *kfd, const void *ih_ring_entry)
+ patched_ihre, &is_patched)
+ && enqueue_ih_ring_entry(node,
+ is_patched ? patched_ihre : ih_ring_entry)) {
+- kfd_queue_work(node->ih_wq, &node->interrupt_work);
++ queue_work(node->kfd->ih_wq, &node->interrupt_work);
+ spin_unlock_irqrestore(&node->interrupt_lock, flags);
+ return;
+ }
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index f5b3ed20e891b3..3cfb4a38d17c7f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -2290,9 +2290,9 @@ static int unmap_queues_cpsch(struct device_queue_manager *dqm,
+ */
+ mqd_mgr = dqm->mqd_mgrs[KFD_MQD_TYPE_HIQ];
+ if (mqd_mgr->check_preemption_failed(mqd_mgr, dqm->packet_mgr.priv_queue->queue->mqd)) {
++ while (halt_if_hws_hang)
++ schedule();
+ if (reset_queues_on_hws_hang(dqm)) {
+- while (halt_if_hws_hang)
+- schedule();
+ dqm->is_hws_hang = true;
+ kfd_hws_hang(dqm);
+ retval = -ETIME;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_interrupt.c b/drivers/gpu/drm/amd/amdkfd/kfd_interrupt.c
+index 9b6b6e88259348..15b4b70cf19976 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_interrupt.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_interrupt.c
+@@ -62,11 +62,14 @@ int kfd_interrupt_init(struct kfd_node *node)
+ return r;
+ }
+
+- node->ih_wq = alloc_workqueue("KFD IH", WQ_HIGHPRI, 1);
+- if (unlikely(!node->ih_wq)) {
+- kfifo_free(&node->ih_fifo);
+- dev_err(node->adev->dev, "Failed to allocate KFD IH workqueue\n");
+- return -ENOMEM;
++ if (!node->kfd->ih_wq) {
++ node->kfd->ih_wq = alloc_workqueue("KFD IH", WQ_HIGHPRI | WQ_UNBOUND,
++ node->kfd->num_nodes);
++ if (unlikely(!node->kfd->ih_wq)) {
++ kfifo_free(&node->ih_fifo);
++ dev_err(node->adev->dev, "Failed to allocate KFD IH workqueue\n");
++ return -ENOMEM;
++ }
+ }
+ spin_lock_init(&node->interrupt_lock);
+
+@@ -96,16 +99,6 @@ void kfd_interrupt_exit(struct kfd_node *node)
+ spin_lock_irqsave(&node->interrupt_lock, flags);
+ node->interrupts_active = false;
+ spin_unlock_irqrestore(&node->interrupt_lock, flags);
+-
+- /*
+- * flush_work ensures that there are no outstanding
+- * work-queue items that will access interrupt_ring. New work items
+- * can't be created because we stopped interrupt handling above.
+- */
+- flush_workqueue(node->ih_wq);
+-
+- destroy_workqueue(node->ih_wq);
+-
+ kfifo_free(&node->ih_fifo);
+ }
+
+@@ -162,7 +155,7 @@ static void interrupt_wq(struct work_struct *work)
+ /* If we spent more than a second processing signals,
+ * reschedule the worker to avoid soft-lockup warnings
+ */
+- queue_work(dev->ih_wq, &dev->interrupt_work);
++ queue_work(dev->kfd->ih_wq, &dev->interrupt_work);
+ break;
+ }
+ }
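+
+The interrupt rework above also drops the hand-rolled "find a CPU on my NUMA
+node" helper: one WQ_UNBOUND workqueue shared by all KFD nodes of the device
+lets the workqueue core do its own NUMA-aware worker placement, and a
+max_active of num_nodes keeps every node's interrupt work able to run
+concurrently. A sketch of the allocation and the matching teardown ordering,
+assuming the standard workqueue API:
+
+  #include <linux/workqueue.h>
+
+  static struct workqueue_struct *ih_wq;
+
+  static int ih_wq_init(unsigned int num_nodes)
+  {
+          /* WQ_UNBOUND: workers placed NUMA-aware by the wq core;
+           * WQ_HIGHPRI: interrupt work runs ahead of regular items. */
+          ih_wq = alloc_workqueue("KFD IH", WQ_HIGHPRI | WQ_UNBOUND,
+                                  num_nodes);
+          return ih_wq ? 0 : -ENOMEM;
+  }
+
+  static void ih_wq_fini(void)
+  {
+          /* Flush first so no in-flight work touches freed state, then
+           * destroy, matching the new kfd_cleanup_nodes() ordering. */
+          flush_workqueue(ih_wq);
+          destroy_workqueue(ih_wq);
+  }
+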
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+index 26e48fdc872896..75523f30cd38b0 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -273,7 +273,6 @@ struct kfd_node {
+
+ /* Interrupts */
+ struct kfifo ih_fifo;
+- struct workqueue_struct *ih_wq;
+ struct work_struct interrupt_work;
+ spinlock_t interrupt_lock;
+
+@@ -366,6 +365,8 @@ struct kfd_dev {
+ struct kfd_node *nodes[MAX_KFD_NODES];
+ unsigned int num_nodes;
+
++ struct workqueue_struct *ih_wq;
++
+ /* Kernel doorbells for KFD device */
+ struct amdgpu_bo *doorbells;
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index ead4317a21680b..dbb63ce316f11e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -86,9 +86,12 @@ void kfd_process_dequeue_from_device(struct kfd_process_device *pdd)
+
+ if (pdd->already_dequeued)
+ return;
+-
++ /* The MES context flush needs to filter out the case in which the
++ * KFD process was created without setting up the MES context and
++ * queue for creating a compute queue.
++ */
+ dev->dqm->ops.process_termination(dev->dqm, &pdd->qpd);
+- if (dev->kfd->shared_resources.enable_mes &&
++ if (dev->kfd->shared_resources.enable_mes && !!pdd->proc_ctx_gpu_addr &&
+ down_read_trylock(&dev->adev->reset_domain->sem)) {
+ amdgpu_mes_flush_shader_debugger(dev->adev,
+ pdd->proc_ctx_gpu_addr);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 08c58d0315de7f..85e58e0f6059a6 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1036,8 +1036,10 @@ static int amdgpu_dm_audio_component_get_eld(struct device *kdev, int port,
+ continue;
+
+ *enabled = true;
++ mutex_lock(&connector->eld_mutex);
+ ret = drm_eld_size(connector->eld);
+ memcpy(buf, connector->eld, min(max_bytes, ret));
++ mutex_unlock(&connector->eld_mutex);
+
+ break;
+ }
+@@ -5497,8 +5499,7 @@ fill_dc_plane_info_and_addr(struct amdgpu_device *adev,
+ const u64 tiling_flags,
+ struct dc_plane_info *plane_info,
+ struct dc_plane_address *address,
+- bool tmz_surface,
+- bool force_disable_dcc)
++ bool tmz_surface)
+ {
+ const struct drm_framebuffer *fb = plane_state->fb;
+ const struct amdgpu_framebuffer *afb =
+@@ -5597,7 +5598,7 @@ fill_dc_plane_info_and_addr(struct amdgpu_device *adev,
+ &plane_info->tiling_info,
+ &plane_info->plane_size,
+ &plane_info->dcc, address,
+- tmz_surface, force_disable_dcc);
++ tmz_surface);
+ if (ret)
+ return ret;
+
+@@ -5618,7 +5619,6 @@ static int fill_dc_plane_attributes(struct amdgpu_device *adev,
+ struct dc_scaling_info scaling_info;
+ struct dc_plane_info plane_info;
+ int ret;
+- bool force_disable_dcc = false;
+
+ ret = amdgpu_dm_plane_fill_dc_scaling_info(adev, plane_state, &scaling_info);
+ if (ret)
+@@ -5629,13 +5629,11 @@ static int fill_dc_plane_attributes(struct amdgpu_device *adev,
+ dc_plane_state->clip_rect = scaling_info.clip_rect;
+ dc_plane_state->scaling_quality = scaling_info.scaling_quality;
+
+- force_disable_dcc = adev->asic_type == CHIP_RAVEN && adev->in_suspend;
+ ret = fill_dc_plane_info_and_addr(adev, plane_state,
+ afb->tiling_flags,
+ &plane_info,
+ &dc_plane_state->address,
+- afb->tmz_surface,
+- force_disable_dcc);
++ afb->tmz_surface);
+ if (ret)
+ return ret;
+
+@@ -9061,7 +9059,7 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ afb->tiling_flags,
+ &bundle->plane_infos[planes_count],
+ &bundle->flip_addrs[planes_count].address,
+- afb->tmz_surface, false);
++ afb->tmz_surface);
+
+ drm_dbg_state(state->dev, "plane: id=%d dcc_en=%d\n",
+ new_plane_state->plane->index,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+index 754dbc544f03a3..5bdf44c692180c 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c
+@@ -1691,16 +1691,16 @@ int pre_validate_dsc(struct drm_atomic_state *state,
+ return ret;
+ }
+
+-static unsigned int kbps_from_pbn(unsigned int pbn)
++static uint32_t kbps_from_pbn(unsigned int pbn)
+ {
+- unsigned int kbps = pbn;
++ uint64_t kbps = (uint64_t)pbn;
+
+ kbps *= (1000000 / PEAK_FACTOR_X1000);
+ kbps *= 8;
+ kbps *= 54;
+ kbps /= 64;
+
+- return kbps;
++ return (uint32_t)kbps;
+ }
+
+ static bool is_dsc_common_config_possible(struct dc_stream_state *stream,
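+
+kbps_from_pbn() above is the same overflow class as the clk-rpmh fix earlier in
+this patch, cured here with a 64-bit intermediate rather than a widened
+operand: the chained multiplies inflate pbn by five to six orders of magnitude
+before the final division, so moderately large PBN values wrap a 32-bit
+accumulator mid-chain even though the end result fits. A standalone sketch (the
+994 stands in for 1000000/PEAK_FACTOR_X1000 and is illustrative):
+
+  #include <stdint.h>
+  #include <stdio.h>
+
+  static uint32_t kbps_from_pbn(unsigned int pbn)
+  {
+          uint64_t kbps = pbn;    /* widen before the first multiply */
+
+          kbps *= 994;            /* illustrative peak-factor term */
+          kbps *= 8;
+          kbps *= 54;
+          kbps /= 64;
+
+          return (uint32_t)kbps;  /* result fits; intermediates may not */
+  }
+
+  int main(void)
+  {
+          /* In 32-bit math, 50000 * 994 * 8 * 54 wraps before the /64. */
+          printf("%u kbps\n", kbps_from_pbn(50000));
+          return 0;
+  }
+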
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+index 495e3cd70426db..83c7c8853edeca 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+@@ -309,8 +309,7 @@ static int amdgpu_dm_plane_fill_gfx9_plane_attributes_from_modifiers(struct amdg
+ const struct plane_size *plane_size,
+ union dc_tiling_info *tiling_info,
+ struct dc_plane_dcc_param *dcc,
+- struct dc_plane_address *address,
+- const bool force_disable_dcc)
++ struct dc_plane_address *address)
+ {
+ const uint64_t modifier = afb->base.modifier;
+ int ret = 0;
+@@ -318,7 +317,7 @@ static int amdgpu_dm_plane_fill_gfx9_plane_attributes_from_modifiers(struct amdg
+ amdgpu_dm_plane_fill_gfx9_tiling_info_from_modifier(adev, tiling_info, modifier);
+ tiling_info->gfx9.swizzle = amdgpu_dm_plane_modifier_gfx9_swizzle_mode(modifier);
+
+- if (amdgpu_dm_plane_modifier_has_dcc(modifier) && !force_disable_dcc) {
++ if (amdgpu_dm_plane_modifier_has_dcc(modifier)) {
+ uint64_t dcc_address = afb->address + afb->base.offsets[1];
+ bool independent_64b_blks = AMD_FMT_MOD_GET(DCC_INDEPENDENT_64B, modifier);
+ bool independent_128b_blks = AMD_FMT_MOD_GET(DCC_INDEPENDENT_128B, modifier);
+@@ -360,8 +359,7 @@ static int amdgpu_dm_plane_fill_gfx12_plane_attributes_from_modifiers(struct amd
+ const struct plane_size *plane_size,
+ union dc_tiling_info *tiling_info,
+ struct dc_plane_dcc_param *dcc,
+- struct dc_plane_address *address,
+- const bool force_disable_dcc)
++ struct dc_plane_address *address)
+ {
+ const uint64_t modifier = afb->base.modifier;
+ int ret = 0;
+@@ -371,7 +369,7 @@ static int amdgpu_dm_plane_fill_gfx12_plane_attributes_from_modifiers(struct amd
+
+ tiling_info->gfx9.swizzle = amdgpu_dm_plane_modifier_gfx9_swizzle_mode(modifier);
+
+- if (amdgpu_dm_plane_modifier_has_dcc(modifier) && !force_disable_dcc) {
++ if (amdgpu_dm_plane_modifier_has_dcc(modifier)) {
+ int max_compressed_block = AMD_FMT_MOD_GET(DCC_MAX_COMPRESSED_BLOCK, modifier);
+
+ dcc->enable = 1;
+@@ -839,8 +837,7 @@ int amdgpu_dm_plane_fill_plane_buffer_attributes(struct amdgpu_device *adev,
+ struct plane_size *plane_size,
+ struct dc_plane_dcc_param *dcc,
+ struct dc_plane_address *address,
+- bool tmz_surface,
+- bool force_disable_dcc)
++ bool tmz_surface)
+ {
+ const struct drm_framebuffer *fb = &afb->base;
+ int ret;
+@@ -900,16 +897,14 @@ int amdgpu_dm_plane_fill_plane_buffer_attributes(struct amdgpu_device *adev,
+ ret = amdgpu_dm_plane_fill_gfx12_plane_attributes_from_modifiers(adev, afb, format,
+ rotation, plane_size,
+ tiling_info, dcc,
+- address,
+- force_disable_dcc);
++ address);
+ if (ret)
+ return ret;
+ } else if (adev->family >= AMDGPU_FAMILY_AI) {
+ ret = amdgpu_dm_plane_fill_gfx9_plane_attributes_from_modifiers(adev, afb, format,
+ rotation, plane_size,
+ tiling_info, dcc,
+- address,
+- force_disable_dcc);
++ address);
+ if (ret)
+ return ret;
+ } else {
+@@ -1000,14 +995,13 @@ static int amdgpu_dm_plane_helper_prepare_fb(struct drm_plane *plane,
+ dm_plane_state_old->dc_state != dm_plane_state_new->dc_state) {
+ struct dc_plane_state *plane_state =
+ dm_plane_state_new->dc_state;
+- bool force_disable_dcc = !plane_state->dcc.enable;
+
+ amdgpu_dm_plane_fill_plane_buffer_attributes(
+ adev, afb, plane_state->format, plane_state->rotation,
+ afb->tiling_flags,
+ &plane_state->tiling_info, &plane_state->plane_size,
+ &plane_state->dcc, &plane_state->address,
+- afb->tmz_surface, force_disable_dcc);
++ afb->tmz_surface);
+ }
+
+ return 0;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.h b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.h
+index 6498359bff6f68..2eef13b1c05a4b 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.h
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.h
+@@ -51,8 +51,7 @@ int amdgpu_dm_plane_fill_plane_buffer_attributes(struct amdgpu_device *adev,
+ struct plane_size *plane_size,
+ struct dc_plane_dcc_param *dcc,
+ struct dc_plane_address *address,
+- bool tmz_surface,
+- bool force_disable_dcc);
++ bool tmz_surface);
+
+ int amdgpu_dm_plane_init(struct amdgpu_display_manager *dm,
+ struct drm_plane *plane,
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 6d4ee8fe615c38..216b525bd75e79 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -2032,7 +2032,7 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
+
+ dc_enable_stereo(dc, context, dc_streams, context->stream_count);
+
+- if (context->stream_count > get_seamless_boot_stream_count(context) ||
++ if (get_seamless_boot_stream_count(context) == 0 ||
+ context->stream_count == 0) {
+ /* Must wait for no flips to be pending before doing optimize bw */
+ hwss_wait_for_no_pipes_pending(dc, context);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+index 5bb8b78bf250a0..bf636b28e3e16e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+@@ -63,8 +63,7 @@ void dmub_hw_lock_mgr_inbox0_cmd(struct dc_dmub_srv *dmub_srv,
+
+ bool should_use_dmub_lock(struct dc_link *link)
+ {
+- if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1 ||
+- link->psr_settings.psr_version == DC_PSR_VERSION_1)
++ if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1)
+ return true;
+
+ if (link->replay_settings.replay_feature_enabled)
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/Makefile b/drivers/gpu/drm/amd/display/dc/dml2/Makefile
+index c4378e620cbf91..986a69c5bd4bca 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/dml2/Makefile
+@@ -29,7 +29,11 @@ dml2_rcflags := $(CC_FLAGS_NO_FPU)
+
+ ifneq ($(CONFIG_FRAME_WARN),0)
+ ifeq ($(filter y,$(CONFIG_KASAN)$(CONFIG_KCSAN)),y)
++ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_COMPILE_TEST),yy)
++frame_warn_flag := -Wframe-larger-than=4096
++else
+ frame_warn_flag := -Wframe-larger-than=3072
++endif
+ else
+ frame_warn_flag := -Wframe-larger-than=2048
+ endif
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+index 8dabb1ac0b684d..6822b07951204b 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core.c
+@@ -6301,9 +6301,9 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
+ mode_lib->ms.meta_row_bandwidth_this_state,
+ mode_lib->ms.dpte_row_bandwidth_this_state,
+ mode_lib->ms.NoOfDPPThisState,
+- mode_lib->ms.UrgentBurstFactorLuma,
+- mode_lib->ms.UrgentBurstFactorChroma,
+- mode_lib->ms.UrgentBurstFactorCursor);
++ mode_lib->ms.UrgentBurstFactorLuma[j],
++ mode_lib->ms.UrgentBurstFactorChroma[j],
++ mode_lib->ms.UrgentBurstFactorCursor[j]);
+
+ s->VMDataOnlyReturnBWPerState = dml_get_return_bw_mbps_vm_only(
+ &mode_lib->ms.soc,
+@@ -6434,7 +6434,7 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
+ /* Output */
+ &mode_lib->ms.UrgentBurstFactorCursorPre[k],
+ &mode_lib->ms.UrgentBurstFactorLumaPre[k],
+- &mode_lib->ms.UrgentBurstFactorChroma[k],
++ &mode_lib->ms.UrgentBurstFactorChromaPre[k],
+ &mode_lib->ms.NotUrgentLatencyHidingPre[k]);
+
+ mode_lib->ms.cursor_bw_pre[k] = mode_lib->ms.cache_display_cfg.plane.NumberOfCursors[k] * mode_lib->ms.cache_display_cfg.plane.CursorWidth[k] *
+@@ -6458,9 +6458,9 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
+ mode_lib->ms.cursor_bw_pre,
+ mode_lib->ms.prefetch_vmrow_bw,
+ mode_lib->ms.NoOfDPPThisState,
+- mode_lib->ms.UrgentBurstFactorLuma,
+- mode_lib->ms.UrgentBurstFactorChroma,
+- mode_lib->ms.UrgentBurstFactorCursor,
++ mode_lib->ms.UrgentBurstFactorLuma[j],
++ mode_lib->ms.UrgentBurstFactorChroma[j],
++ mode_lib->ms.UrgentBurstFactorCursor[j],
+ mode_lib->ms.UrgentBurstFactorLumaPre,
+ mode_lib->ms.UrgentBurstFactorChromaPre,
+ mode_lib->ms.UrgentBurstFactorCursorPre,
+@@ -6517,9 +6517,9 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
+ mode_lib->ms.cursor_bw,
+ mode_lib->ms.cursor_bw_pre,
+ mode_lib->ms.NoOfDPPThisState,
+- mode_lib->ms.UrgentBurstFactorLuma,
+- mode_lib->ms.UrgentBurstFactorChroma,
+- mode_lib->ms.UrgentBurstFactorCursor,
++ mode_lib->ms.UrgentBurstFactorLuma[j],
++ mode_lib->ms.UrgentBurstFactorChroma[j],
++ mode_lib->ms.UrgentBurstFactorCursor[j],
+ mode_lib->ms.UrgentBurstFactorLumaPre,
+ mode_lib->ms.UrgentBurstFactorChromaPre,
+ mode_lib->ms.UrgentBurstFactorCursorPre);
+@@ -6586,9 +6586,9 @@ static void dml_prefetch_check(struct display_mode_lib_st *mode_lib)
+ mode_lib->ms.cursor_bw_pre,
+ mode_lib->ms.prefetch_vmrow_bw,
+ mode_lib->ms.NoOfDPP[j], // VBA_ERROR DPPPerSurface is not assigned at this point, should use NoOfDpp here
+- mode_lib->ms.UrgentBurstFactorLuma,
+- mode_lib->ms.UrgentBurstFactorChroma,
+- mode_lib->ms.UrgentBurstFactorCursor,
++ mode_lib->ms.UrgentBurstFactorLuma[j],
++ mode_lib->ms.UrgentBurstFactorChroma[j],
++ mode_lib->ms.UrgentBurstFactorCursor[j],
+ mode_lib->ms.UrgentBurstFactorLumaPre,
+ mode_lib->ms.UrgentBurstFactorChromaPre,
+ mode_lib->ms.UrgentBurstFactorCursorPre,
+@@ -7809,9 +7809,9 @@ dml_bool_t dml_core_mode_support(struct display_mode_lib_st *mode_lib)
+ mode_lib->ms.DETBufferSizeYThisState[k],
+ mode_lib->ms.DETBufferSizeCThisState[k],
+ /* Output */
+- &mode_lib->ms.UrgentBurstFactorCursor[k],
+- &mode_lib->ms.UrgentBurstFactorLuma[k],
+- &mode_lib->ms.UrgentBurstFactorChroma[k],
++ &mode_lib->ms.UrgentBurstFactorCursor[j][k],
++ &mode_lib->ms.UrgentBurstFactorLuma[j][k],
++ &mode_lib->ms.UrgentBurstFactorChroma[j][k],
+ &mode_lib->ms.NotUrgentLatencyHiding[k]);
+ }
+
+@@ -9190,6 +9190,8 @@ void dml_core_mode_programming(struct display_mode_lib_st *mode_lib, const struc
+ &locals->FractionOfUrgentBandwidth,
+ &s->dummy_boolean[0]); // dml_bool_t *PrefetchBandwidthSupport
+
++
++
+ if (s->VRatioPrefetchMoreThanMax != false || s->DestinationLineTimesForPrefetchLessThan2 != false) {
+ dml_print("DML::%s: VRatioPrefetchMoreThanMax = %u\n", __func__, s->VRatioPrefetchMoreThanMax);
+ dml_print("DML::%s: DestinationLineTimesForPrefetchLessThan2 = %u\n", __func__, s->DestinationLineTimesForPrefetchLessThan2);
+@@ -9204,6 +9206,7 @@ void dml_core_mode_programming(struct display_mode_lib_st *mode_lib, const struc
+ }
+ }
+
++
+ if (locals->PrefetchModeSupported == true && mode_lib->ms.support.ImmediateFlipSupport == true) {
+ locals->BandwidthAvailableForImmediateFlip = CalculateBandwidthAvailableForImmediateFlip(
+ mode_lib->ms.num_active_planes,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core_structs.h b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core_structs.h
+index f951936bb579e6..504c427b3b3191 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core_structs.h
++++ b/drivers/gpu/drm/amd/display/dc/dml2/display_mode_core_structs.h
+@@ -884,11 +884,11 @@ struct mode_support_st {
+ dml_uint_t meta_row_height[__DML_NUM_PLANES__];
+ dml_uint_t meta_row_height_chroma[__DML_NUM_PLANES__];
+ dml_float_t UrgLatency;
+- dml_float_t UrgentBurstFactorCursor[__DML_NUM_PLANES__];
++ dml_float_t UrgentBurstFactorCursor[2][__DML_NUM_PLANES__];
+ dml_float_t UrgentBurstFactorCursorPre[__DML_NUM_PLANES__];
+- dml_float_t UrgentBurstFactorLuma[__DML_NUM_PLANES__];
++ dml_float_t UrgentBurstFactorLuma[2][__DML_NUM_PLANES__];
+ dml_float_t UrgentBurstFactorLumaPre[__DML_NUM_PLANES__];
+- dml_float_t UrgentBurstFactorChroma[__DML_NUM_PLANES__];
++ dml_float_t UrgentBurstFactorChroma[2][__DML_NUM_PLANES__];
+ dml_float_t UrgentBurstFactorChromaPre[__DML_NUM_PLANES__];
+ dml_float_t MaximumSwathWidthInLineBufferLuma;
+ dml_float_t MaximumSwathWidthInLineBufferChroma;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+index 866b0abcff1bad..4d64c45930da49 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+@@ -533,14 +533,21 @@ static bool optimize_pstate_with_svp_and_drr(struct dml2_context *dml2, struct d
+ static bool call_dml_mode_support_and_programming(struct dc_state *context)
+ {
+ unsigned int result = 0;
+- unsigned int min_state;
++ unsigned int min_state = 0;
+ int min_state_for_g6_temp_read = 0;
++
++
++ if (!context)
++ return false;
++
+ struct dml2_context *dml2 = context->bw_ctx.dml2;
+ struct dml2_wrapper_scratch *s = &dml2->v20.scratch;
+
+- min_state_for_g6_temp_read = calculate_lowest_supported_state_for_temp_read(dml2, context);
++ if (!context->streams[0]->sink->link->dc->caps.is_apu) {
++ min_state_for_g6_temp_read = calculate_lowest_supported_state_for_temp_read(dml2, context);
+
+- ASSERT(min_state_for_g6_temp_read >= 0);
++ ASSERT(min_state_for_g6_temp_read >= 0);
++ }
+
+ if (!dml2->config.use_native_pstate_optimization) {
+ result = optimize_pstate_with_svp_and_drr(dml2, context);
+@@ -551,14 +558,20 @@ static bool call_dml_mode_support_and_programming(struct dc_state *context)
+ /* Upon trying to set certain frequencies in FRL, min_state_for_g6_temp_read is reported as -1. This leads to an invalid value of min_state causing crashes later on.
+ * Use the default logic for min_state only when min_state_for_g6_temp_read is a valid value. In other cases, use the value calculated by the DML directly.
+ */
+- if (min_state_for_g6_temp_read >= 0)
+- min_state = min_state_for_g6_temp_read > s->mode_support_params.out_lowest_state_idx ? min_state_for_g6_temp_read : s->mode_support_params.out_lowest_state_idx;
+- else
+- min_state = s->mode_support_params.out_lowest_state_idx;
+-
+- if (result)
+- result = dml_mode_programming(&dml2->v20.dml_core_ctx, min_state, &s->cur_display_config, true);
++ if (!context->streams[0]->sink->link->dc->caps.is_apu) {
++ if (min_state_for_g6_temp_read >= 0)
++ min_state = min_state_for_g6_temp_read > s->mode_support_params.out_lowest_state_idx ? min_state_for_g6_temp_read : s->mode_support_params.out_lowest_state_idx;
++ else
++ min_state = s->mode_support_params.out_lowest_state_idx;
++ }
+
++ if (result) {
++ if (!context->streams[0]->sink->link->dc->caps.is_apu) {
++ result = dml_mode_programming(&dml2->v20.dml_core_ctx, min_state, &s->cur_display_config, true);
++ } else {
++ result = dml_mode_programming(&dml2->v20.dml_core_ctx, s->mode_support_params.out_lowest_state_idx, &s->cur_display_config, true);
++ }
++ }
+ return result;
+ }
+
+@@ -687,6 +700,8 @@ static bool dml2_validate_only(struct dc_state *context)
+ build_unoptimized_policy_settings(dml2->v20.dml_core_ctx.project, &dml2->v20.dml_core_ctx.policy);
+
+ map_dc_state_into_dml_display_cfg(dml2, context, &dml2->v20.scratch.cur_display_config);
++ if (!dml2->config.skip_hw_state_mapping)
++ dml2_apply_det_buffer_allocation_policy(dml2, &dml2->v20.scratch.cur_display_config);
+
+ result = pack_and_call_dml_mode_support_ex(dml2,
+ &dml2->v20.scratch.cur_display_config,
+diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
+index 961d8936150ab7..75fb77bca83ba2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
++++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn10/dcn10_dpp.c
+@@ -483,10 +483,11 @@ void dpp1_set_cursor_position(
+ if (src_y_offset + cursor_height <= 0)
+ cur_en = 0; /* not visible beyond top edge*/
+
+- REG_UPDATE(CURSOR0_CONTROL,
+- CUR0_ENABLE, cur_en);
++ if (dpp_base->pos.cur0_ctl.bits.cur0_enable != cur_en) {
++ REG_UPDATE(CURSOR0_CONTROL, CUR0_ENABLE, cur_en);
+
+- dpp_base->pos.cur0_ctl.bits.cur0_enable = cur_en;
++ dpp_base->pos.cur0_ctl.bits.cur0_enable = cur_en;
++ }
+ }
+
+ void dpp1_cnv_set_optional_cursor_attributes(
+diff --git a/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp_cm.c b/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp_cm.c
+index 3b6ca7974e188d..1236e0f9a2560c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp_cm.c
++++ b/drivers/gpu/drm/amd/display/dc/dpp/dcn401/dcn401_dpp_cm.c
+@@ -154,9 +154,11 @@ void dpp401_set_cursor_position(
+ struct dcn401_dpp *dpp = TO_DCN401_DPP(dpp_base);
+ uint32_t cur_en = pos->enable ? 1 : 0;
+
+- REG_UPDATE(CURSOR0_CONTROL, CUR0_ENABLE, cur_en);
++ if (dpp_base->pos.cur0_ctl.bits.cur0_enable != cur_en) {
++ REG_UPDATE(CURSOR0_CONTROL, CUR0_ENABLE, cur_en);
+
+- dpp_base->pos.cur0_ctl.bits.cur0_enable = cur_en;
++ dpp_base->pos.cur0_ctl.bits.cur0_enable = cur_en;
++ }
+ }
+
+ void dpp401_set_optional_cursor_attributes(
+diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c
+index fe741100c0f880..d347bb06577ac6 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn30/dcn30_hubbub.c
+@@ -129,7 +129,8 @@ bool hubbub3_program_watermarks(
+ REG_UPDATE(DCHUBBUB_ARB_DF_REQ_OUTSTAND,
+ DCHUBBUB_ARB_MIN_REQ_OUTSTAND, 0x1FF);
+
+- hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
++ if (safe_to_lower || hubbub->ctx->dc->debug.disable_stutter)
++ hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
+
+ return wm_pending;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c
+index 7fb5523f972244..b98505b240a797 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn31/dcn31_hubbub.c
+@@ -750,7 +750,8 @@ static bool hubbub31_program_watermarks(
+ REG_UPDATE(DCHUBBUB_ARB_DF_REQ_OUTSTAND,
+ DCHUBBUB_ARB_MIN_REQ_OUTSTAND, 0x1FF);*/
+
+- hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
++ if (safe_to_lower || hubbub->ctx->dc->debug.disable_stutter)
++ hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
+ return wm_pending;
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn32/dcn32_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn32/dcn32_hubbub.c
+index 5264dc26cce1fa..32a6be543105c1 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn32/dcn32_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn32/dcn32_hubbub.c
+@@ -786,7 +786,8 @@ static bool hubbub32_program_watermarks(
+ REG_UPDATE(DCHUBBUB_ARB_DF_REQ_OUTSTAND,
+ DCHUBBUB_ARB_MIN_REQ_OUTSTAND, 0x1FF);*/
+
+- hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
++ if (safe_to_lower || hubbub->ctx->dc->debug.disable_stutter)
++ hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
+
+ hubbub32_force_usr_retraining_allow(hubbub, hubbub->ctx->dc->debug.force_usr_allow);
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hubbub/dcn35/dcn35_hubbub.c b/drivers/gpu/drm/amd/display/dc/hubbub/dcn35/dcn35_hubbub.c
+index 5eb3da8d5206e9..dce7269959ce74 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubbub/dcn35/dcn35_hubbub.c
++++ b/drivers/gpu/drm/amd/display/dc/hubbub/dcn35/dcn35_hubbub.c
+@@ -326,7 +326,8 @@ static bool hubbub35_program_watermarks(
+ DCHUBBUB_ARB_MIN_REQ_OUTSTAND_COMMIT_THRESHOLD, 0xA);/*hw delta*/
+ REG_UPDATE(DCHUBBUB_ARB_HOSTVM_CNTL, DCHUBBUB_ARB_MAX_QOS_COMMIT_THRESHOLD, 0xF);
+
+- hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
++ if (safe_to_lower || hubbub->ctx->dc->debug.disable_stutter)
++ hubbub1_allow_self_refresh_control(hubbub, !hubbub->ctx->dc->debug.disable_stutter);
+
+ hubbub32_force_usr_retraining_allow(hubbub, hubbub->ctx->dc->debug.force_usr_allow);
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
+index b405fa22f87a9e..c74ee2d50a699a 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn20/dcn20_hubp.c
+@@ -1044,11 +1044,13 @@ void hubp2_cursor_set_position(
+ if (src_y_offset + cursor_height <= 0)
+ cur_en = 0; /* not visible beyond top edge*/
+
+- if (cur_en && REG_READ(CURSOR_SURFACE_ADDRESS) == 0)
+- hubp->funcs->set_cursor_attributes(hubp, &hubp->curs_attr);
++ if (hubp->pos.cur_ctl.bits.cur_enable != cur_en) {
++ if (cur_en && REG_READ(CURSOR_SURFACE_ADDRESS) == 0)
++ hubp->funcs->set_cursor_attributes(hubp, &hubp->curs_attr);
+
+- REG_UPDATE(CURSOR_CONTROL,
++ REG_UPDATE(CURSOR_CONTROL,
+ CURSOR_ENABLE, cur_en);
++ }
+
+ REG_SET_2(CURSOR_POSITION, 0,
+ CURSOR_X_POSITION, pos->x,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
+index c55b1b8be8ffd6..5cf7e6771cb49e 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn30/dcn30_hubp.c
+@@ -484,6 +484,8 @@ void hubp3_init(struct hubp *hubp)
+ //hubp[i].HUBPREQ_DEBUG.HUBPREQ_DEBUG[26] = 1;
+ REG_WRITE(HUBPREQ_DEBUG, 1 << 26);
+
++ REG_UPDATE(DCHUBP_CNTL, HUBP_TTU_DISABLE, 0);
++
+ hubp_reset(hubp);
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
+index 45023fa9b708dc..c4f41350d1b3ce 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn32/dcn32_hubp.c
+@@ -168,6 +168,8 @@ void hubp32_init(struct hubp *hubp)
+ {
+ struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+ REG_WRITE(HUBPREQ_DEBUG_DB, 1 << 8);
++
++ REG_UPDATE(DCHUBP_CNTL, HUBP_TTU_DISABLE, 0);
+ }
+ static struct hubp_funcs dcn32_hubp_funcs = {
+ .hubp_enable_tripleBuffer = hubp2_enable_triplebuffer,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
+index 2d52100510f05f..7013c124efcff8 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn401/dcn401_hubp.c
+@@ -718,11 +718,13 @@ void hubp401_cursor_set_position(
+ dc_fixpt_from_int(dst_x_offset),
+ param->h_scale_ratio));
+
+- if (cur_en && REG_READ(CURSOR_SURFACE_ADDRESS) == 0)
+- hubp->funcs->set_cursor_attributes(hubp, &hubp->curs_attr);
++ if (hubp->pos.cur_ctl.bits.cur_enable != cur_en) {
++ if (cur_en && REG_READ(CURSOR_SURFACE_ADDRESS) == 0)
++ hubp->funcs->set_cursor_attributes(hubp, &hubp->curs_attr);
+
+- REG_UPDATE(CURSOR_CONTROL,
+- CURSOR_ENABLE, cur_en);
++ REG_UPDATE(CURSOR_CONTROL,
++ CURSOR_ENABLE, cur_en);
++ }
+
+ REG_SET_2(CURSOR_POSITION, 0,
+ CURSOR_X_POSITION, x_pos,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+index f6b17bd3f714fa..38755ca771401b 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c
+@@ -236,7 +236,8 @@ void dcn35_init_hw(struct dc *dc)
+ }
+
+ hws->funcs.init_pipes(dc, dc->current_state);
+- if (dc->res_pool->hubbub->funcs->allow_self_refresh_control)
++ if (dc->res_pool->hubbub->funcs->allow_self_refresh_control &&
++ !dc->res_pool->hubbub->ctx->dc->debug.disable_stutter)
+ dc->res_pool->hubbub->funcs->allow_self_refresh_control(dc->res_pool->hubbub,
+ !dc->res_pool->hubbub->ctx->dc->debug.disable_stutter);
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c
+index 7d04739c3ba146..4bbbe07ecde7d0 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn301/dcn301_resource.c
+@@ -671,9 +671,9 @@ static const struct dc_plane_cap plane_cap = {
+
+ /* 6:1 downscaling ratio: 1000/6 = 166.666 */
+ .max_downscale_factor = {
+- .argb8888 = 167,
+- .nv12 = 167,
+- .fp16 = 167
++ .argb8888 = 358,
++ .nv12 = 358,
++ .fp16 = 358
+ },
+ 64,
+ 64
+@@ -694,7 +694,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ .disable_dcc = DCC_ENABLE,
+ .vsr_support = true,
+ .performance_trace = false,
+- .max_downscale_src_width = 7680,/*upto 8K*/
+ .max_downscale_src_width = 4096, /* up to true 4K */
+ .scl_reset_length10 = true,
+ .sanity_checks = false,
+ .underflow_assert_delay_us = 0xFFFFFFFF,
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+index 2c35eb31475ab8..5a1f24438e472a 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/aldebaran_ppt.c
+@@ -1731,7 +1731,6 @@ static ssize_t aldebaran_get_gpu_metrics(struct smu_context *smu,
+
+ gpu_metrics->average_gfx_activity = metrics.AverageGfxActivity;
+ gpu_metrics->average_umc_activity = metrics.AverageUclkActivity;
+- gpu_metrics->average_mm_activity = 0;
+
+ /* Valid power data is available only from primary die */
+ if (aldebaran_is_primary(smu)) {
+diff --git a/drivers/gpu/drm/arm/display/komeda/komeda_wb_connector.c b/drivers/gpu/drm/arm/display/komeda/komeda_wb_connector.c
+index ebccb74306a765..f30b3d5eeca5c5 100644
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_wb_connector.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_wb_connector.c
+@@ -160,6 +160,10 @@ static int komeda_wb_connector_add(struct komeda_kms_dev *kms,
+ formats = komeda_get_layer_fourcc_list(&mdev->fmt_tbl,
+ kwb_conn->wb_layer->layer_type,
+ &n_formats);
++ if (!formats) {
++ kfree(kwb_conn);
++ return -ENOMEM;
++ }
+
+ err = drm_writeback_connector_init(&kms->base, wb_conn,
+ &komeda_wb_connector_funcs,
+diff --git a/drivers/gpu/drm/bridge/analogix/anx7625.c b/drivers/gpu/drm/bridge/analogix/anx7625.c
+index a2675b121fe44b..c036bbc92ba96e 100644
+--- a/drivers/gpu/drm/bridge/analogix/anx7625.c
++++ b/drivers/gpu/drm/bridge/analogix/anx7625.c
+@@ -2002,8 +2002,10 @@ static int anx7625_audio_get_eld(struct device *dev, void *data,
+ memset(buf, 0, len);
+ } else {
+ dev_dbg(dev, "audio copy eld\n");
++ mutex_lock(&ctx->connector->eld_mutex);
+ memcpy(buf, ctx->connector->eld,
+ min(sizeof(ctx->connector->eld), len));
++ mutex_unlock(&ctx->connector->eld_mutex);
+ }
+
+ return 0;
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index cf891e7677c0e2..faee8e2e82a053 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -296,7 +296,7 @@
+ #define MAX_LANE_COUNT 4
+ #define MAX_LINK_RATE HBR
+ #define AUTO_TRAIN_RETRY 3
+-#define MAX_HDCP_DOWN_STREAM_COUNT 10
++#define MAX_HDCP_DOWN_STREAM_COUNT 127
+ #define MAX_CR_LEVEL 0x03
+ #define MAX_EQ_LEVEL 0x03
+ #define AUX_WAIT_TIMEOUT_MS 15
+@@ -2023,7 +2023,7 @@ static bool it6505_hdcp_part2_ksvlist_check(struct it6505 *it6505)
+ {
+ struct device *dev = it6505->dev;
+ u8 av[5][4], bv[5][4];
+- int i, err;
++ int i, err, retry;
+
+ i = it6505_setup_sha1_input(it6505, it6505->sha1_input);
+ if (i <= 0) {
+@@ -2032,22 +2032,28 @@ static bool it6505_hdcp_part2_ksvlist_check(struct it6505 *it6505)
+ }
+
+ it6505_sha1_digest(it6505, it6505->sha1_input, i, (u8 *)av);
++ /* 1B-05 V' must retry 3 times */
++ for (retry = 0; retry < 3; retry++) {
++ err = it6505_get_dpcd(it6505, DP_AUX_HDCP_V_PRIME(0), (u8 *)bv,
++ sizeof(bv));
+
+- err = it6505_get_dpcd(it6505, DP_AUX_HDCP_V_PRIME(0), (u8 *)bv,
+- sizeof(bv));
++ if (err < 0) {
++ dev_err(dev, "Read V' value Fail %d", retry);
++ continue;
++ }
+
+- if (err < 0) {
+- dev_err(dev, "Read V' value Fail");
+- return false;
+- }
++ for (i = 0; i < 5; i++) {
++ if (bv[i][3] != av[i][0] || bv[i][2] != av[i][1] ||
++ bv[i][1] != av[i][2] || bv[i][0] != av[i][3])
++ break;
+
+- for (i = 0; i < 5; i++)
+- if (bv[i][3] != av[i][0] || bv[i][2] != av[i][1] ||
+- bv[i][1] != av[i][2] || bv[i][0] != av[i][3])
+- return false;
++ DRM_DEV_DEBUG_DRIVER(dev, "V' all match!! %d, %d", retry, i);
++ return true;
++ }
++ }
+
+- DRM_DEV_DEBUG_DRIVER(dev, "V' all match!!");
+- return true;
++ DRM_DEV_DEBUG_DRIVER(dev, "V' NOT match!! %d", retry);
++ return false;
+ }
+
+ static void it6505_hdcp_wait_ksv_list(struct work_struct *work)
+@@ -2055,12 +2061,13 @@ static void it6505_hdcp_wait_ksv_list(struct work_struct *work)
+ struct it6505 *it6505 = container_of(work, struct it6505,
+ hdcp_wait_ksv_list);
+ struct device *dev = it6505->dev;
+- unsigned int timeout = 5000;
+- u8 bstatus = 0;
++ u8 bstatus;
+ bool ksv_list_check;
++ /* 1B-04 wait for KSV list for 5s */
++ unsigned long timeout = jiffies +
++ msecs_to_jiffies(5000) + 1;
+
+- timeout /= 20;
+- while (timeout > 0) {
++ for (;;) {
+ if (!it6505_get_sink_hpd_status(it6505))
+ return;
+
+@@ -2069,27 +2076,23 @@ static void it6505_hdcp_wait_ksv_list(struct work_struct *work)
+ if (bstatus & DP_BSTATUS_READY)
+ break;
+
+- msleep(20);
+- timeout--;
+- }
++ if (time_after(jiffies, timeout)) {
++ DRM_DEV_DEBUG_DRIVER(dev, "KSV list wait timeout");
++ goto timeout;
++ }
+
+- if (timeout == 0) {
+- DRM_DEV_DEBUG_DRIVER(dev, "timeout and ksv list wait failed");
+- goto timeout;
++ msleep(20);
+ }
+
+ ksv_list_check = it6505_hdcp_part2_ksvlist_check(it6505);
+ DRM_DEV_DEBUG_DRIVER(dev, "ksv list ready, ksv list check %s",
+ ksv_list_check ? "pass" : "fail");
+- if (ksv_list_check) {
+- it6505_set_bits(it6505, REG_HDCP_TRIGGER,
+- HDCP_TRIGGER_KSV_DONE, HDCP_TRIGGER_KSV_DONE);
++
++ if (ksv_list_check)
+ return;
+- }
++
+ timeout:
+- it6505_set_bits(it6505, REG_HDCP_TRIGGER,
+- HDCP_TRIGGER_KSV_DONE | HDCP_TRIGGER_KSV_FAIL,
+- HDCP_TRIGGER_KSV_DONE | HDCP_TRIGGER_KSV_FAIL);
++ it6505_start_hdcp(it6505);
+ }
+
+ static void it6505_hdcp_work(struct work_struct *work)
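[Editor's note] The rewrite above replaces a fixed iteration countdown with a jiffies deadline, so the wait is bounded by wall time rather than by 250 sleep iterations whose real duration drifts with how long each DPCD read takes. A minimal kernel-context sketch of the pattern; poll_ready() is a hypothetical stand-in for the bstatus read:

/* Sketch only: deadline-based polling bounded by wall time. */
static int wait_ready_5s(void)
{
	unsigned long deadline = jiffies + msecs_to_jiffies(5000) + 1;

	for (;;) {
		if (poll_ready())	/* hypothetical readiness check */
			return 0;
		if (time_after(jiffies, deadline))
			return -ETIMEDOUT;	/* hard wall-clock bound */
		msleep(20);
	}
}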
+@@ -2312,14 +2315,20 @@ static int it6505_process_hpd_irq(struct it6505 *it6505)
+ DRM_DEV_DEBUG_DRIVER(dev, "dp_irq_vector = 0x%02x", dp_irq_vector);
+
+ if (dp_irq_vector & DP_CP_IRQ) {
+- it6505_set_bits(it6505, REG_HDCP_TRIGGER, HDCP_TRIGGER_CPIRQ,
+- HDCP_TRIGGER_CPIRQ);
+-
+ bstatus = it6505_dpcd_read(it6505, DP_AUX_HDCP_BSTATUS);
+ if (bstatus < 0)
+ return bstatus;
+
+ DRM_DEV_DEBUG_DRIVER(dev, "Bstatus = 0x%02x", bstatus);
++
++ /* Check BSTATUS when receiving CP_IRQ */
++ if (bstatus & DP_BSTATUS_R0_PRIME_READY &&
++ it6505->hdcp_status == HDCP_AUTH_GOING)
++ it6505_set_bits(it6505, REG_HDCP_TRIGGER, HDCP_TRIGGER_CPIRQ,
++ HDCP_TRIGGER_CPIRQ);
++ else if (bstatus & (DP_BSTATUS_REAUTH_REQ | DP_BSTATUS_LINK_FAILURE) &&
++ it6505->hdcp_status == HDCP_AUTH_DONE)
++ it6505_start_hdcp(it6505);
+ }
+
+ ret = drm_dp_dpcd_read_link_status(&it6505->aux, link_status);
+@@ -2456,7 +2465,11 @@ static void it6505_irq_hdcp_ksv_check(struct it6505 *it6505)
+ {
+ struct device *dev = it6505->dev;
+
+- DRM_DEV_DEBUG_DRIVER(dev, "HDCP event Interrupt");
++ DRM_DEV_DEBUG_DRIVER(dev, "HDCP repeater R0 event Interrupt");
++ /* 1B-01 HDCP encryption should start when R0 is ready */
++ it6505_set_bits(it6505, REG_HDCP_TRIGGER,
++ HDCP_TRIGGER_KSV_DONE, HDCP_TRIGGER_KSV_DONE);
++
+ schedule_work(&it6505->hdcp_wait_ksv_list);
+ }
+
+diff --git a/drivers/gpu/drm/bridge/ite-it66121.c b/drivers/gpu/drm/bridge/ite-it66121.c
+index 925e42f46cd87f..0f8d3ab30daa68 100644
+--- a/drivers/gpu/drm/bridge/ite-it66121.c
++++ b/drivers/gpu/drm/bridge/ite-it66121.c
+@@ -1452,8 +1452,10 @@ static int it66121_audio_get_eld(struct device *dev, void *data,
+ dev_dbg(dev, "No connector present, passing empty EDID data");
+ memset(buf, 0, len);
+ } else {
++ mutex_lock(&ctx->connector->eld_mutex);
+ memcpy(buf, ctx->connector->eld,
+ min(sizeof(ctx->connector->eld), len));
++ mutex_unlock(&ctx->connector->eld_mutex);
+ }
+ mutex_unlock(&ctx->lock);
+
+diff --git a/drivers/gpu/drm/display/drm_dp_cec.c b/drivers/gpu/drm/display/drm_dp_cec.c
+index 007ceb281d00da..56a4965e518cc2 100644
+--- a/drivers/gpu/drm/display/drm_dp_cec.c
++++ b/drivers/gpu/drm/display/drm_dp_cec.c
+@@ -311,16 +311,6 @@ void drm_dp_cec_attach(struct drm_dp_aux *aux, u16 source_physical_address)
+ if (!aux->transfer)
+ return;
+
+-#ifndef CONFIG_MEDIA_CEC_RC
+- /*
+- * CEC_CAP_RC is part of CEC_CAP_DEFAULTS, but it is stripped by
+- * cec_allocate_adapter() if CONFIG_MEDIA_CEC_RC is undefined.
+- *
+- * Do this here as well to ensure the tests against cec_caps are
+- * correct.
+- */
+- cec_caps &= ~CEC_CAP_RC;
+-#endif
+ cancel_delayed_work_sync(&aux->cec.unregister_work);
+
+ mutex_lock(&aux->cec.lock);
+@@ -337,7 +327,9 @@ void drm_dp_cec_attach(struct drm_dp_aux *aux, u16 source_physical_address)
+ num_las = CEC_MAX_LOG_ADDRS;
+
+ if (aux->cec.adap) {
+- if (aux->cec.adap->capabilities == cec_caps &&
++ /* Check if the adapter properties have changed */
++ if ((aux->cec.adap->capabilities & CEC_CAP_MONITOR_ALL) ==
++ (cec_caps & CEC_CAP_MONITOR_ALL) &&
+ aux->cec.adap->available_log_addrs == num_las) {
+ /* Unchanged, so just set the phys addr */
+ cec_s_phys_addr(aux->cec.adap, source_physical_address, false);
+diff --git a/drivers/gpu/drm/drm_client_modeset.c b/drivers/gpu/drm/drm_client_modeset.c
+index cee5eafbfb81a8..fd620f7db0dd27 100644
+--- a/drivers/gpu/drm/drm_client_modeset.c
++++ b/drivers/gpu/drm/drm_client_modeset.c
+@@ -741,6 +741,15 @@ static bool drm_client_firmware_config(struct drm_client_dev *client,
+ if ((conn_configured & mask) != mask && conn_configured != conn_seq)
+ goto retry;
+
++ for (i = 0; i < count; i++) {
++ struct drm_connector *connector = connectors[i];
++
++ if (connector->has_tile)
++ drm_client_get_tile_offsets(dev, connectors, connector_count,
++ modes, offsets, i,
++ connector->tile_h_loc, connector->tile_v_loc);
++ }
++
+ /*
+ * If the BIOS didn't enable everything it could, fall back to have the
+ * same user experience of lighting up as much as possible like the
+diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
+index ca7f43c8d6f1b3..0e6021235a9304 100644
+--- a/drivers/gpu/drm/drm_connector.c
++++ b/drivers/gpu/drm/drm_connector.c
+@@ -277,6 +277,7 @@ static int __drm_connector_init(struct drm_device *dev,
+ INIT_LIST_HEAD(&connector->probed_modes);
+ INIT_LIST_HEAD(&connector->modes);
+ mutex_init(&connector->mutex);
++ mutex_init(&connector->eld_mutex);
+ mutex_init(&connector->edid_override_mutex);
+ mutex_init(&connector->hdmi.infoframes.lock);
+ connector->edid_blob_ptr = NULL;
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 855beafb76ffbe..13bc4c290b17d5 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -5605,7 +5605,9 @@ EXPORT_SYMBOL(drm_edid_get_monitor_name);
+
+ static void clear_eld(struct drm_connector *connector)
+ {
++ mutex_lock(&connector->eld_mutex);
+ memset(connector->eld, 0, sizeof(connector->eld));
++ mutex_unlock(&connector->eld_mutex);
+
+ connector->latency_present[0] = false;
+ connector->latency_present[1] = false;
+@@ -5657,6 +5659,8 @@ static void drm_edid_to_eld(struct drm_connector *connector,
+ if (!drm_edid)
+ return;
+
++ mutex_lock(&connector->eld_mutex);
++
+ mnl = get_monitor_name(drm_edid, &eld[DRM_ELD_MONITOR_NAME_STRING]);
+ drm_dbg_kms(connector->dev, "[CONNECTOR:%d:%s] ELD monitor %s\n",
+ connector->base.id, connector->name,
+@@ -5717,6 +5721,8 @@ static void drm_edid_to_eld(struct drm_connector *connector,
+ drm_dbg_kms(connector->dev, "[CONNECTOR:%d:%s] ELD size %d, SAD count %d\n",
+ connector->base.id, connector->name,
+ drm_eld_size(eld), total_sad_count);
++
++ mutex_unlock(&connector->eld_mutex);
+ }
+
+ static int _drm_edid_to_sad(const struct drm_edid *drm_edid,
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index 29c53f9f449ca8..eaac2e5726e750 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -1348,14 +1348,14 @@ int drm_fb_helper_set_par(struct fb_info *info)
+ }
+ EXPORT_SYMBOL(drm_fb_helper_set_par);
+
+-static void pan_set(struct drm_fb_helper *fb_helper, int x, int y)
++static void pan_set(struct drm_fb_helper *fb_helper, int dx, int dy)
+ {
+ struct drm_mode_set *mode_set;
+
+ mutex_lock(&fb_helper->client.modeset_mutex);
+ drm_client_for_each_modeset(mode_set, &fb_helper->client) {
+- mode_set->x = x;
+- mode_set->y = y;
++ mode_set->x += dx;
++ mode_set->y += dy;
+ }
+ mutex_unlock(&fb_helper->client.modeset_mutex);
+ }
+@@ -1364,16 +1364,18 @@ static int pan_display_atomic(struct fb_var_screeninfo *var,
+ struct fb_info *info)
+ {
+ struct drm_fb_helper *fb_helper = info->par;
+- int ret;
++ int ret, dx, dy;
+
+- pan_set(fb_helper, var->xoffset, var->yoffset);
++ dx = var->xoffset - info->var.xoffset;
++ dy = var->yoffset - info->var.yoffset;
++ pan_set(fb_helper, dx, dy);
+
+ ret = drm_client_modeset_commit_locked(&fb_helper->client);
+ if (!ret) {
+ info->var.xoffset = var->xoffset;
+ info->var.yoffset = var->yoffset;
+ } else
+- pan_set(fb_helper, info->var.xoffset, info->var.yoffset);
++ pan_set(fb_helper, -dx, -dy);
+
+ return ret;
+ }
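[Editor's note] The change above stores relative deltas instead of absolute offsets, which makes the failure path a clean inverse: re-applying the negated deltas restores every modeset to its prior position. A hedged standalone sketch of the idea; the types and helpers are illustrative, not the DRM API:

struct pan_state { int x, y; };

/* Illustrative: shift by a delta; the inverse is the negated delta. */
static void pan_apply(struct pan_state *s, int dx, int dy)
{
	s->x += dx;
	s->y += dy;
}

static int pan_commit(struct pan_state *s, int new_x, int new_y,
		      int (*try_commit)(struct pan_state *))
{
	int dx = new_x - s->x, dy = new_y - s->y;
	int ret;

	pan_apply(s, dx, dy);
	ret = try_commit(s);
	if (ret)
		pan_apply(s, -dx, -dy); /* roll back on failure */
	return ret;
}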
+diff --git a/drivers/gpu/drm/drm_panel_backlight_quirks.c b/drivers/gpu/drm/drm_panel_backlight_quirks.c
+new file mode 100644
+index 00000000000000..c477d98ade2b41
+--- /dev/null
++++ b/drivers/gpu/drm/drm_panel_backlight_quirks.c
+@@ -0,0 +1,94 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/array_size.h>
++#include <linux/dmi.h>
++#include <linux/mod_devicetable.h>
++#include <linux/module.h>
++#include <drm/drm_edid.h>
++#include <drm/drm_utils.h>
++
++struct drm_panel_min_backlight_quirk {
++ struct {
++ enum dmi_field field;
++ const char * const value;
++ } dmi_match;
++ struct drm_edid_ident ident;
++ u8 min_brightness;
++};
++
++static const struct drm_panel_min_backlight_quirk drm_panel_min_backlight_quirks[] = {
++ /* 13 inch matte panel */
++ {
++ .dmi_match.field = DMI_BOARD_VENDOR,
++ .dmi_match.value = "Framework",
++ .ident.panel_id = drm_edid_encode_panel_id('B', 'O', 'E', 0x0bca),
++ .ident.name = "NE135FBM-N41",
++ .min_brightness = 0,
++ },
++ /* 13 inch glossy panel */
++ {
++ .dmi_match.field = DMI_BOARD_VENDOR,
++ .dmi_match.value = "Framework",
++ .ident.panel_id = drm_edid_encode_panel_id('B', 'O', 'E', 0x095f),
++ .ident.name = "NE135FBM-N41",
++ .min_brightness = 0,
++ },
++ /* 13 inch 2.8k panel */
++ {
++ .dmi_match.field = DMI_BOARD_VENDOR,
++ .dmi_match.value = "Framework",
++ .ident.panel_id = drm_edid_encode_panel_id('B', 'O', 'E', 0x0cb4),
++ .ident.name = "NE135A1M-NY1",
++ .min_brightness = 0,
++ },
++};
++
++static bool drm_panel_min_backlight_quirk_matches(const struct drm_panel_min_backlight_quirk *quirk,
++ const struct drm_edid *edid)
++{
++ if (!dmi_match(quirk->dmi_match.field, quirk->dmi_match.value))
++ return false;
++
++ if (!drm_edid_match(edid, &quirk->ident))
++ return false;
++
++ return true;
++}
++
++/**
++ * drm_get_panel_min_brightness_quirk - Get minimum supported brightness level for a panel.
++ * @edid: EDID of the panel to check
++ *
++ * This function checks for platform specific (e.g. DMI based) quirks
++ * providing info on the minimum backlight brightness for systems where this
++ * cannot be probed correctly from the hard-/firm-ware.
++ *
++ * Returns:
++ * A negative error value or
++ * an override value in the range [0, 255] representing 0-100% to be scaled to
++ * the drivers target range.
++ */
++int drm_get_panel_min_brightness_quirk(const struct drm_edid *edid)
++{
++ const struct drm_panel_min_backlight_quirk *quirk;
++ size_t i;
++
++ if (!IS_ENABLED(CONFIG_DMI))
++ return -ENODATA;
++
++ if (!edid)
++ return -EINVAL;
++
++ for (i = 0; i < ARRAY_SIZE(drm_panel_min_backlight_quirks); i++) {
++ quirk = &drm_panel_min_backlight_quirks[i];
++
++ if (drm_panel_min_backlight_quirk_matches(quirk, edid))
++ return quirk->min_brightness;
++ }
++
++ return -ENODATA;
++}
++EXPORT_SYMBOL(drm_get_panel_min_brightness_quirk);
++
++MODULE_DESCRIPTION("Quirks for panel backlight overrides");
++MODULE_LICENSE("GPL");
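[Editor's note] Callers of the new helper are expected to treat a negative return as "no quirk applies". A hedged usage sketch; the backlight fields are hypothetical and not taken from any driver in this patch:

/* Hypothetical consumer: apply the quirk only when one matched. */
int min_bl = drm_get_panel_min_brightness_quirk(edid);

if (min_bl >= 0)
	backlight->min_input_signal = min_bl; /* 0-255 maps to 0-100% */
/* else (-EINVAL/-ENODATA): keep the value probed from hardware */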
+diff --git a/drivers/gpu/drm/exynos/exynos_hdmi.c b/drivers/gpu/drm/exynos/exynos_hdmi.c
+index 1e26cd4f834798..52059cfff4f0b3 100644
+--- a/drivers/gpu/drm/exynos/exynos_hdmi.c
++++ b/drivers/gpu/drm/exynos/exynos_hdmi.c
+@@ -1643,7 +1643,9 @@ static int hdmi_audio_get_eld(struct device *dev, void *data, uint8_t *buf,
+ struct hdmi_context *hdata = dev_get_drvdata(dev);
+ struct drm_connector *connector = &hdata->connector;
+
++ mutex_lock(&connector->eld_mutex);
+ memcpy(buf, connector->eld, min(sizeof(connector->eld), len));
++ mutex_unlock(&connector->eld_mutex);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index 90fa73575feb13..45cca965c11b48 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -2022,11 +2022,10 @@ icl_dsc_compute_link_config(struct intel_dp *intel_dp,
+ /* Compressed BPP should be less than the Input DSC bpp */
+ dsc_max_bpp = min(dsc_max_bpp, pipe_bpp - 1);
+
+- for (i = 0; i < ARRAY_SIZE(valid_dsc_bpp); i++) {
+- if (valid_dsc_bpp[i] < dsc_min_bpp)
++ for (i = ARRAY_SIZE(valid_dsc_bpp) - 1; i >= 0; i--) {
++ if (valid_dsc_bpp[i] < dsc_min_bpp ||
++ valid_dsc_bpp[i] > dsc_max_bpp)
+ continue;
+- if (valid_dsc_bpp[i] > dsc_max_bpp)
+- break;
+
+ ret = dsc_compute_link_config(intel_dp,
+ pipe_config,
+@@ -2738,7 +2737,6 @@ static void intel_dp_compute_as_sdp(struct intel_dp *intel_dp,
+
+ crtc_state->infoframes.enable |= intel_hdmi_infoframe_enable(DP_SDP_ADAPTIVE_SYNC);
+
+- /* Currently only DP_AS_SDP_AVT_FIXED_VTOTAL mode supported */
+ as_sdp->sdp_type = DP_SDP_ADAPTIVE_SYNC;
+ as_sdp->length = 0x9;
+ as_sdp->duration_incr_ms = 0;
+@@ -2750,7 +2748,7 @@ static void intel_dp_compute_as_sdp(struct intel_dp *intel_dp,
+ as_sdp->target_rr = drm_mode_vrefresh(adjusted_mode);
+ as_sdp->target_rr_divider = true;
+ } else {
+- as_sdp->mode = DP_AS_SDP_AVT_FIXED_VTOTAL;
++ as_sdp->mode = DP_AS_SDP_AVT_DYNAMIC_VTOTAL;
+ as_sdp->vtotal = adjusted_mode->vtotal;
+ as_sdp->target_rr = 0;
+ }
+diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+index c8720d31d1013d..62a5287ea1d9c4 100644
+--- a/drivers/gpu/drm/i915/display/skl_universal_plane.c
++++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+@@ -105,8 +105,6 @@ static const u32 icl_sdr_y_plane_formats[] = {
+ DRM_FORMAT_Y216,
+ DRM_FORMAT_XYUV8888,
+ DRM_FORMAT_XVYU2101010,
+- DRM_FORMAT_XVYU12_16161616,
+- DRM_FORMAT_XVYU16161616,
+ };
+
+ static const u32 icl_sdr_uv_plane_formats[] = {
+@@ -133,8 +131,6 @@ static const u32 icl_sdr_uv_plane_formats[] = {
+ DRM_FORMAT_Y216,
+ DRM_FORMAT_XYUV8888,
+ DRM_FORMAT_XVYU2101010,
+- DRM_FORMAT_XVYU12_16161616,
+- DRM_FORMAT_XVYU16161616,
+ };
+
+ static const u32 icl_hdr_plane_formats[] = {
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+index fe69f2c8527d79..ae3343c81a6455 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_shmem.c
+@@ -209,8 +209,6 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
+ struct address_space *mapping = obj->base.filp->f_mapping;
+ unsigned int max_segment = i915_sg_segment_size(i915->drm.dev);
+ struct sg_table *st;
+- struct sgt_iter sgt_iter;
+- struct page *page;
+ int ret;
+
+ /*
+@@ -239,9 +237,7 @@ static int shmem_get_pages(struct drm_i915_gem_object *obj)
+ * for PAGE_SIZE chunks instead may be helpful.
+ */
+ if (max_segment > PAGE_SIZE) {
+- for_each_sgt_page(page, sgt_iter, st)
+- put_page(page);
+- sg_free_table(st);
++ shmem_sg_free_table(st, mapping, false, false);
+ kfree(st);
+
+ max_segment = PAGE_SIZE;
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index ee12ee0ed41871..b0e94c95940f67 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -5511,12 +5511,20 @@ static inline void guc_log_context(struct drm_printer *p,
+ {
+ drm_printf(p, "GuC lrc descriptor %u:\n", ce->guc_id.id);
+ drm_printf(p, "\tHW Context Desc: 0x%08x\n", ce->lrc.lrca);
+- drm_printf(p, "\t\tLRC Head: Internal %u, Memory %u\n",
+- ce->ring->head,
+- ce->lrc_reg_state[CTX_RING_HEAD]);
+- drm_printf(p, "\t\tLRC Tail: Internal %u, Memory %u\n",
+- ce->ring->tail,
+- ce->lrc_reg_state[CTX_RING_TAIL]);
++ if (intel_context_pin_if_active(ce)) {
++ drm_printf(p, "\t\tLRC Head: Internal %u, Memory %u\n",
++ ce->ring->head,
++ ce->lrc_reg_state[CTX_RING_HEAD]);
++ drm_printf(p, "\t\tLRC Tail: Internal %u, Memory %u\n",
++ ce->ring->tail,
++ ce->lrc_reg_state[CTX_RING_TAIL]);
++ intel_context_unpin(ce);
++ } else {
++ drm_printf(p, "\t\tLRC Head: Internal %u, Memory not pinned\n",
++ ce->ring->head);
++ drm_printf(p, "\t\tLRC Tail: Internal %u, Memory not pinned\n",
++ ce->ring->tail);
++ }
+ drm_printf(p, "\t\tContext Pin Count: %u\n",
+ atomic_read(&ce->pin_count));
+ drm_printf(p, "\t\tGuC ID Ref Count: %u\n",
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+index d586aea3089841..9c83bab0a53091 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/gsp/r535.c
+@@ -121,6 +121,8 @@ r535_gsp_msgq_wait(struct nvkm_gsp *gsp, u32 repc, u32 *prepc, int *ptime)
+ return mqe->data;
+ }
+
++ size = ALIGN(repc + GSP_MSG_HDR_SIZE, GSP_PAGE_SIZE);
++
+ msg = kvmalloc(repc, GFP_KERNEL);
+ if (!msg)
+ return ERR_PTR(-ENOMEM);
+@@ -129,19 +131,15 @@ r535_gsp_msgq_wait(struct nvkm_gsp *gsp, u32 repc, u32 *prepc, int *ptime)
+ len = min_t(u32, repc, len);
+ memcpy(msg, mqe->data, len);
+
+- rptr += DIV_ROUND_UP(len, GSP_PAGE_SIZE);
+- if (rptr == gsp->msgq.cnt)
+- rptr = 0;
+-
+ repc -= len;
+
+ if (repc) {
+ mqe = (void *)((u8 *)gsp->shm.msgq.ptr + 0x1000 + 0 * 0x1000);
+ memcpy(msg + len, mqe, repc);
+-
+- rptr += DIV_ROUND_UP(repc, GSP_PAGE_SIZE);
+ }
+
++ rptr = (rptr + DIV_ROUND_UP(size, GSP_PAGE_SIZE)) % gsp->msgq.cnt;
++
+ mb();
+ (*gsp->msgq.rptr) = rptr;
+ return msg;
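[Editor's note] The corrected read path advances the consumer index once, by the page-rounded size of the full header-plus-payload, and wraps it modulo the queue size; the old per-chunk increments could step past the wrap point. A standalone sketch of the arithmetic, with illustrative names:

#include <stdint.h>

#define GSP_PAGE 0x1000u

/* Illustrative: advance a ring read pointer by a whole message. */
static uint32_t advance_rptr(uint32_t rptr, uint32_t msg_bytes,
			     uint32_t ring_pages)
{
	uint32_t pages = (msg_bytes + GSP_PAGE - 1) / GSP_PAGE;

	return (rptr + pages) % ring_pages; /* single wrap, no drift */
}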
+@@ -163,7 +161,7 @@ r535_gsp_cmdq_push(struct nvkm_gsp *gsp, void *argv)
+ u64 *end;
+ u64 csum = 0;
+ int free, time = 1000000;
+- u32 wptr, size;
++ u32 wptr, size, step;
+ u32 off = 0;
+
+ argc = ALIGN(GSP_MSG_HDR_SIZE + argc, GSP_PAGE_SIZE);
+@@ -197,7 +195,9 @@ r535_gsp_cmdq_push(struct nvkm_gsp *gsp, void *argv)
+ }
+
+ cqe = (void *)((u8 *)gsp->shm.cmdq.ptr + 0x1000 + wptr * 0x1000);
+- size = min_t(u32, argc, (gsp->cmdq.cnt - wptr) * GSP_PAGE_SIZE);
++ step = min_t(u32, free, (gsp->cmdq.cnt - wptr));
++ size = min_t(u32, argc, step * GSP_PAGE_SIZE);
++
+ memcpy(cqe, (u8 *)cmd + off, size);
+
+ wptr += DIV_ROUND_UP(size, 0x1000);
+diff --git a/drivers/gpu/drm/radeon/radeon_audio.c b/drivers/gpu/drm/radeon/radeon_audio.c
+index 5b69cc8011b42b..8d64ba18572ec4 100644
+--- a/drivers/gpu/drm/radeon/radeon_audio.c
++++ b/drivers/gpu/drm/radeon/radeon_audio.c
+@@ -775,8 +775,10 @@ static int radeon_audio_component_get_eld(struct device *kdev, int port,
+ if (!dig->pin || dig->pin->id != port)
+ continue;
+ *enabled = true;
++ mutex_lock(&connector->eld_mutex);
+ ret = drm_eld_size(connector->eld);
+ memcpy(buf, connector->eld, min(max_bytes, ret));
++ mutex_unlock(&connector->eld_mutex);
+ break;
+ }
+
+diff --git a/drivers/gpu/drm/rockchip/cdn-dp-core.c b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+index b04538907f956c..f576b1aa86d143 100644
+--- a/drivers/gpu/drm/rockchip/cdn-dp-core.c
++++ b/drivers/gpu/drm/rockchip/cdn-dp-core.c
+@@ -947,9 +947,6 @@ static void cdn_dp_pd_event_work(struct work_struct *work)
+ {
+ struct cdn_dp_device *dp = container_of(work, struct cdn_dp_device,
+ event_work);
+- struct drm_connector *connector = &dp->connector;
+- enum drm_connector_status old_status;
+-
+ int ret;
+
+ mutex_lock(&dp->lock);
+@@ -1009,11 +1006,7 @@ static void cdn_dp_pd_event_work(struct work_struct *work)
+
+ out:
+ mutex_unlock(&dp->lock);
+-
+- old_status = connector->status;
+- connector->status = connector->funcs->detect(connector, false);
+- if (old_status != connector->status)
+- drm_kms_helper_hotplug_event(dp->drm_dev);
++ drm_connector_helper_hpd_irq_event(&dp->connector);
+ }
+
+ static int cdn_dp_pd_event(struct notifier_block *nb,
+diff --git a/drivers/gpu/drm/sti/sti_hdmi.c b/drivers/gpu/drm/sti/sti_hdmi.c
+index 847470f747c0ef..3c8f3532c79723 100644
+--- a/drivers/gpu/drm/sti/sti_hdmi.c
++++ b/drivers/gpu/drm/sti/sti_hdmi.c
+@@ -1225,7 +1225,9 @@ static int hdmi_audio_get_eld(struct device *dev, void *data, uint8_t *buf, size
+ struct drm_connector *connector = hdmi->drm_connector;
+
+ DRM_DEBUG_DRIVER("\n");
++ mutex_lock(&connector->eld_mutex);
+ memcpy(buf, connector->eld, min(sizeof(connector->eld), len));
++ mutex_unlock(&connector->eld_mutex);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
+index 294773342e710d..4ba869e0e794c7 100644
+--- a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
++++ b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
+@@ -46,7 +46,7 @@ static struct drm_display_mode *find_preferred_mode(struct drm_connector *connec
+ struct drm_display_mode *mode, *preferred;
+
+ mutex_lock(&drm->mode_config.mutex);
+- preferred = list_first_entry(&connector->modes, struct drm_display_mode, head);
++ preferred = list_first_entry_or_null(&connector->modes, struct drm_display_mode, head);
+ list_for_each_entry(mode, &connector->modes, head)
+ if (mode->type & DRM_MODE_TYPE_PREFERRED)
+ preferred = mode;
+@@ -105,9 +105,8 @@ static int set_connector_edid(struct kunit *test, struct drm_connector *connecto
+ mutex_lock(&drm->mode_config.mutex);
+ ret = connector->funcs->fill_modes(connector, 4096, 4096);
+ mutex_unlock(&drm->mode_config.mutex);
+- KUNIT_ASSERT_GT(test, ret, 0);
+
+- return 0;
++ return ret;
+ }
+
+ static const struct drm_connector_hdmi_funcs dummy_connector_hdmi_funcs = {
+@@ -223,7 +222,7 @@ drm_atomic_helper_connector_hdmi_init(struct kunit *test,
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ return priv;
+ }
+@@ -728,7 +727,7 @@ static void drm_test_check_output_bpc_crtc_mode_changed(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+@@ -802,7 +801,7 @@ static void drm_test_check_output_bpc_crtc_mode_not_changed(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+@@ -873,7 +872,7 @@ static void drm_test_check_output_bpc_dvi(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_dvi_1080p,
+ ARRAY_SIZE(test_edid_dvi_1080p));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_FALSE(test, info->is_hdmi);
+@@ -920,7 +919,7 @@ static void drm_test_check_tmds_char_rate_rgb_8bpc(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+@@ -967,7 +966,7 @@ static void drm_test_check_tmds_char_rate_rgb_10bpc(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+@@ -1014,7 +1013,7 @@ static void drm_test_check_tmds_char_rate_rgb_12bpc(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+@@ -1121,7 +1120,7 @@ static void drm_test_check_max_tmds_rate_bpc_fallback(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+@@ -1190,7 +1189,7 @@ static void drm_test_check_max_tmds_rate_format_fallback(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+@@ -1254,7 +1253,7 @@ static void drm_test_check_output_bpc_format_vic_1(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+@@ -1314,7 +1313,7 @@ static void drm_test_check_output_bpc_format_driver_rgb_only(struct kunit *test)
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+@@ -1381,7 +1380,7 @@ static void drm_test_check_output_bpc_format_display_rgb_only(struct kunit *test
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_max_200mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+@@ -1447,7 +1446,7 @@ static void drm_test_check_output_bpc_format_driver_8bpc_only(struct kunit *test
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+@@ -1507,7 +1506,7 @@ static void drm_test_check_output_bpc_format_display_8bpc_only(struct kunit *tes
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_max_340mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_max_340mhz));
+- KUNIT_ASSERT_EQ(test, ret, 0);
++ KUNIT_ASSERT_GT(test, ret, 0);
+
+ info = &conn->display_info;
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index 7e0a5ea7ab859a..6b83d02b5d62a5 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -2192,9 +2192,9 @@ static int vc4_hdmi_audio_get_eld(struct device *dev, void *data,
+ struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
+ struct drm_connector *connector = &vc4_hdmi->connector;
+
+- mutex_lock(&vc4_hdmi->mutex);
++ mutex_lock(&connector->eld_mutex);
+ memcpy(buf, connector->eld, min(sizeof(connector->eld), len));
+- mutex_unlock(&vc4_hdmi->mutex);
++ mutex_unlock(&connector->eld_mutex);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h b/drivers/gpu/drm/virtio/virtgpu_drv.h
+index 64c236169db88a..5dc8eeaf7123c4 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
++++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
+@@ -194,6 +194,13 @@ struct virtio_gpu_framebuffer {
+ #define to_virtio_gpu_framebuffer(x) \
+ container_of(x, struct virtio_gpu_framebuffer, base)
+
++struct virtio_gpu_plane_state {
++ struct drm_plane_state base;
++ struct virtio_gpu_fence *fence;
++};
++#define to_virtio_gpu_plane_state(x) \
++ container_of(x, struct virtio_gpu_plane_state, base)
++
+ struct virtio_gpu_queue {
+ struct virtqueue *vq;
+ spinlock_t qlock;
+diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c b/drivers/gpu/drm/virtio/virtgpu_plane.c
+index a72a2dbda031c2..7acd38b962c621 100644
+--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
++++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
+@@ -66,11 +66,28 @@ uint32_t virtio_gpu_translate_format(uint32_t drm_fourcc)
+ return format;
+ }
+
++static struct
++drm_plane_state *virtio_gpu_plane_duplicate_state(struct drm_plane *plane)
++{
++ struct virtio_gpu_plane_state *new;
++
++ if (WARN_ON(!plane->state))
++ return NULL;
++
++ new = kzalloc(sizeof(*new), GFP_KERNEL);
++ if (!new)
++ return NULL;
++
++ __drm_atomic_helper_plane_duplicate_state(plane, &new->base);
++
++ return &new->base;
++}
++
+ static const struct drm_plane_funcs virtio_gpu_plane_funcs = {
+ .update_plane = drm_atomic_helper_update_plane,
+ .disable_plane = drm_atomic_helper_disable_plane,
+ .reset = drm_atomic_helper_plane_reset,
+- .atomic_duplicate_state = drm_atomic_helper_plane_duplicate_state,
++ .atomic_duplicate_state = virtio_gpu_plane_duplicate_state,
+ .atomic_destroy_state = drm_atomic_helper_plane_destroy_state,
+ };
+
+@@ -138,11 +155,13 @@ static void virtio_gpu_resource_flush(struct drm_plane *plane,
+ struct drm_device *dev = plane->dev;
+ struct virtio_gpu_device *vgdev = dev->dev_private;
+ struct virtio_gpu_framebuffer *vgfb;
++ struct virtio_gpu_plane_state *vgplane_st;
+ struct virtio_gpu_object *bo;
+
+ vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
++ vgplane_st = to_virtio_gpu_plane_state(plane->state);
+ bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+- if (vgfb->fence) {
++ if (vgplane_st->fence) {
+ struct virtio_gpu_object_array *objs;
+
+ objs = virtio_gpu_array_alloc(1);
+@@ -151,13 +170,11 @@ static void virtio_gpu_resource_flush(struct drm_plane *plane,
+ virtio_gpu_array_add_obj(objs, vgfb->base.obj[0]);
+ virtio_gpu_array_lock_resv(objs);
+ virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
+- width, height, objs, vgfb->fence);
++ width, height, objs,
++ vgplane_st->fence);
+ virtio_gpu_notify(vgdev);
+-
+- dma_fence_wait_timeout(&vgfb->fence->f, true,
++ dma_fence_wait_timeout(&vgplane_st->fence->f, true,
+ msecs_to_jiffies(50));
+- dma_fence_put(&vgfb->fence->f);
+- vgfb->fence = NULL;
+ } else {
+ virtio_gpu_cmd_resource_flush(vgdev, bo->hw_res_handle, x, y,
+ width, height, NULL, NULL);
+@@ -247,20 +264,23 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
+ struct drm_device *dev = plane->dev;
+ struct virtio_gpu_device *vgdev = dev->dev_private;
+ struct virtio_gpu_framebuffer *vgfb;
++ struct virtio_gpu_plane_state *vgplane_st;
+ struct virtio_gpu_object *bo;
+
+ if (!new_state->fb)
+ return 0;
+
+ vgfb = to_virtio_gpu_framebuffer(new_state->fb);
++ vgplane_st = to_virtio_gpu_plane_state(new_state);
+ bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+ if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
+ return 0;
+
+- if (bo->dumb && (plane->state->fb != new_state->fb)) {
+- vgfb->fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context,
++ if (bo->dumb) {
++ vgplane_st->fence = virtio_gpu_fence_alloc(vgdev,
++ vgdev->fence_drv.context,
+ 0);
+- if (!vgfb->fence)
++ if (!vgplane_st->fence)
+ return -ENOMEM;
+ }
+
+@@ -270,15 +290,15 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane *plane,
+ static void virtio_gpu_plane_cleanup_fb(struct drm_plane *plane,
+ struct drm_plane_state *state)
+ {
+- struct virtio_gpu_framebuffer *vgfb;
++ struct virtio_gpu_plane_state *vgplane_st;
+
+ if (!state->fb)
+ return;
+
+- vgfb = to_virtio_gpu_framebuffer(state->fb);
+- if (vgfb->fence) {
+- dma_fence_put(&vgfb->fence->f);
+- vgfb->fence = NULL;
++ vgplane_st = to_virtio_gpu_plane_state(state);
++ if (vgplane_st->fence) {
++ dma_fence_put(&vgplane_st->fence->f);
++ vgplane_st->fence = NULL;
+ }
+ }
+
+@@ -291,6 +311,7 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
+ struct virtio_gpu_device *vgdev = dev->dev_private;
+ struct virtio_gpu_output *output = NULL;
+ struct virtio_gpu_framebuffer *vgfb;
++ struct virtio_gpu_plane_state *vgplane_st;
+ struct virtio_gpu_object *bo = NULL;
+ uint32_t handle;
+
+@@ -303,6 +324,7 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
+
+ if (plane->state->fb) {
+ vgfb = to_virtio_gpu_framebuffer(plane->state->fb);
++ vgplane_st = to_virtio_gpu_plane_state(plane->state);
+ bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+ handle = bo->hw_res_handle;
+ } else {
+@@ -322,11 +344,9 @@ static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
+ (vgdev, 0,
+ plane->state->crtc_w,
+ plane->state->crtc_h,
+- 0, 0, objs, vgfb->fence);
++ 0, 0, objs, vgplane_st->fence);
+ virtio_gpu_notify(vgdev);
+- dma_fence_wait(&vgfb->fence->f, true);
+- dma_fence_put(&vgfb->fence->f);
+- vgfb->fence = NULL;
++ dma_fence_wait(&vgplane_st->fence->f, true);
+ }
+
+ if (plane->state->fb != old_state->fb) {
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
+index 85aa3ab0da3b87..8050938389b68f 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump.c
++++ b/drivers/gpu/drm/xe/xe_devcoredump.c
+@@ -104,11 +104,7 @@ static ssize_t __xe_devcoredump_read(char *buffer, size_t count,
+ drm_puts(&p, "\n**** GuC CT ****\n");
+ xe_guc_ct_snapshot_print(ss->ct, &p);
+
+- /*
+- * Don't add a new section header here because the mesa debug decoder
+- * tool expects the context information to be in the 'GuC CT' section.
+- */
+- /* drm_puts(&p, "\n**** Contexts ****\n"); */
++ drm_puts(&p, "\n**** Contexts ****\n");
+ xe_guc_exec_queue_snapshot_print(ss->ge, &p);
+
+ drm_puts(&p, "\n**** Job ****\n");
+@@ -337,42 +333,34 @@ int xe_devcoredump_init(struct xe_device *xe)
+ /**
+ * xe_print_blob_ascii85 - print a BLOB to some useful location in ASCII85
+ *
+- * The output is split to multiple lines because some print targets, e.g. dmesg
+- * cannot handle arbitrarily long lines. Note also that printing to dmesg in
+- * piece-meal fashion is not possible, each separate call to drm_puts() has a
+- * line-feed automatically added! Therefore, the entire output line must be
+- * constructed in a local buffer first, then printed in one atomic output call.
++ * The output is split into multiple calls to drm_puts() because some print
++ * targets, e.g. dmesg, cannot handle arbitrarily long lines. These targets may
++ * add newlines, as is the case with dmesg: each drm_puts() call creates a
++ * separate line.
+ *
+ * There is also a scheduler yield call to prevent the 'task has been stuck for
+ * 120s' kernel hang check feature from firing when printing to a slow target
+ * such as dmesg over a serial port.
+ *
+- * TODO: Add compression prior to the ASCII85 encoding to shrink huge buffers down.
+- *
+ * @p: the printer object to output to
+ * @prefix: optional prefix to add to output string
++ * @suffix: optional suffix to add at the end. 0 disables it and is
++ * not added to the output, which is useful when using multiple calls
++ * to dump data to @p
+ * @blob: the Binary Large OBject to dump out
+ * @offset: offset in bytes to skip from the front of the BLOB, must be a multiple of sizeof(u32)
+ * @size: the size in bytes of the BLOB, must be a multiple of sizeof(u32)
+ */
+-void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
++void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix, char suffix,
+ const void *blob, size_t offset, size_t size)
+ {
+ const u32 *blob32 = (const u32 *)blob;
+ char buff[ASCII85_BUFSZ], *line_buff;
+ size_t line_pos = 0;
+
+- /*
+- * Splitting blobs across multiple lines is not compatible with the mesa
+- * debug decoder tool. Note that even dropping the explicit '\n' below
+- * doesn't help because the GuC log is so big some underlying implementation
+- * still splits the lines at 512K characters. So just bail completely for
+- * the moment.
+- */
+- return;
+-
+ #define DMESG_MAX_LINE_LEN 800
+-#define MIN_SPACE (ASCII85_BUFSZ + 2) /* 85 + "\n\0" */
++ /* Always leave space for the suffix char and the \0 */
++#define MIN_SPACE (ASCII85_BUFSZ + 2) /* 85 + "<suffix>\0" */
+
+ if (size & 3)
+ drm_printf(p, "Size not word aligned: %zu", size);
+@@ -404,7 +392,6 @@ void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
+ line_pos += strlen(line_buff + line_pos);
+
+ if ((line_pos + MIN_SPACE) >= DMESG_MAX_LINE_LEN) {
+- line_buff[line_pos++] = '\n';
+ line_buff[line_pos++] = 0;
+
+ drm_puts(p, line_buff);
+@@ -416,10 +403,11 @@ void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
+ }
+ }
+
++ if (suffix)
++ line_buff[line_pos++] = suffix;
++
+ if (line_pos) {
+- line_buff[line_pos++] = '\n';
+ line_buff[line_pos++] = 0;
+-
+ drm_puts(p, line_buff);
+ }
+
+diff --git a/drivers/gpu/drm/xe/xe_devcoredump.h b/drivers/gpu/drm/xe/xe_devcoredump.h
+index a4eebc285fc837..b231c8ad799f69 100644
+--- a/drivers/gpu/drm/xe/xe_devcoredump.h
++++ b/drivers/gpu/drm/xe/xe_devcoredump.h
+@@ -26,7 +26,7 @@ static inline int xe_devcoredump_init(struct xe_device *xe)
+ }
+ #endif
+
+-void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix,
++void xe_print_blob_ascii85(struct drm_printer *p, const char *prefix, char suffix,
+ const void *blob, size_t offset, size_t size);
+
+ #endif
+diff --git a/drivers/gpu/drm/xe/xe_guc_log.c b/drivers/gpu/drm/xe/xe_guc_log.c
+index be47780ec2a7e7..50851638003b9b 100644
+--- a/drivers/gpu/drm/xe/xe_guc_log.c
++++ b/drivers/gpu/drm/xe/xe_guc_log.c
+@@ -78,7 +78,7 @@ void xe_guc_log_print(struct xe_guc_log *log, struct drm_printer *p)
+
+ xe_map_memcpy_from(xe, copy, &log->bo->vmap, 0, size);
+
+- xe_print_blob_ascii85(p, "Log data", copy, 0, size);
++ xe_print_blob_ascii85(p, "Log data", '\n', copy, 0, size);
+
+ vfree(copy);
+ }
+diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
+index a4b47319ad8ead..bcdd168cdc6d79 100644
+--- a/drivers/hid/hid-asus.c
++++ b/drivers/hid/hid-asus.c
+@@ -432,6 +432,26 @@ static int asus_kbd_get_functions(struct hid_device *hdev,
+ return ret;
+ }
+
++static int asus_kbd_disable_oobe(struct hid_device *hdev)
++{
++ const u8 init[][6] = {
++ { FEATURE_KBD_REPORT_ID, 0x05, 0x20, 0x31, 0x00, 0x08 },
++ { FEATURE_KBD_REPORT_ID, 0xBA, 0xC5, 0xC4 },
++ { FEATURE_KBD_REPORT_ID, 0xD0, 0x8F, 0x01 },
++ { FEATURE_KBD_REPORT_ID, 0xD0, 0x85, 0xFF }
++ };
++ int ret;
++
++ for (size_t i = 0; i < ARRAY_SIZE(init); i++) {
++ ret = asus_kbd_set_report(hdev, init[i], sizeof(init[i]));
++ if (ret < 0)
++ return ret;
++ }
++
++ hid_info(hdev, "Disabled OOBE for keyboard\n");
++ return 0;
++}
++
+ static void asus_schedule_work(struct asus_kbd_leds *led)
+ {
+ unsigned long flags;
+@@ -534,6 +554,12 @@ static int asus_kbd_register_leds(struct hid_device *hdev)
+ ret = asus_kbd_init(hdev, FEATURE_KBD_LED_REPORT_ID2);
+ if (ret < 0)
+ return ret;
++
++ if (dmi_match(DMI_PRODUCT_FAMILY, "ProArt P16")) {
++ ret = asus_kbd_disable_oobe(hdev);
++ if (ret < 0)
++ return ret;
++ }
+ } else {
+ /* Initialize keyboard */
+ ret = asus_kbd_init(hdev, FEATURE_KBD_REPORT_ID);
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index e07d63db5e1f47..369414c92fccbe 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -2308,6 +2308,11 @@ static const struct hid_device_id mt_devices[] = {
+ HID_DEVICE(HID_BUS_ANY, HID_GROUP_ANY, USB_VENDOR_ID_SIS_TOUCH,
+ HID_ANY_ID) },
+
++ /* Hantick */
++ { .driver_data = MT_CLS_NSMU,
++ HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++ I2C_VENDOR_ID_HANTICK, I2C_PRODUCT_ID_HANTICK_5288) },
++
+ /* Generic MT device */
+ { HID_DEVICE(HID_BUS_ANY, HID_GROUP_MULTITOUCH, HID_ANY_ID, HID_ANY_ID) },
+
+diff --git a/drivers/hid/hid-sensor-hub.c b/drivers/hid/hid-sensor-hub.c
+index 7bd86eef6ec761..4c94c03cb57396 100644
+--- a/drivers/hid/hid-sensor-hub.c
++++ b/drivers/hid/hid-sensor-hub.c
+@@ -730,23 +730,30 @@ static int sensor_hub_probe(struct hid_device *hdev,
+ return ret;
+ }
+
++static int sensor_hub_finalize_pending_fn(struct device *dev, void *data)
++{
++ struct hid_sensor_hub_device *hsdev = dev->platform_data;
++
++ if (hsdev->pending.status)
++ complete(&hsdev->pending.ready);
++
++ return 0;
++}
++
+ static void sensor_hub_remove(struct hid_device *hdev)
+ {
+ struct sensor_hub_data *data = hid_get_drvdata(hdev);
+ unsigned long flags;
+- int i;
+
+ hid_dbg(hdev, " hardware removed\n");
+ hid_hw_close(hdev);
+ hid_hw_stop(hdev);
++
+ spin_lock_irqsave(&data->lock, flags);
+- for (i = 0; i < data->hid_sensor_client_cnt; ++i) {
+- struct hid_sensor_hub_device *hsdev =
+- data->hid_sensor_hub_client_devs[i].platform_data;
+- if (hsdev->pending.status)
+- complete(&hsdev->pending.ready);
+- }
++ device_for_each_child(&hdev->dev, NULL,
++ sensor_hub_finalize_pending_fn);
+ spin_unlock_irqrestore(&data->lock, flags);
++
+ mfd_remove_devices(&hdev->dev);
+ mutex_destroy(&data->mutex);
+ }
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index 5a599c90e7a2c7..c7033ffaba3919 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -4943,6 +4943,10 @@ static const struct wacom_features wacom_features_0x94 =
+ HID_DEVICE(BUS_I2C, HID_GROUP_WACOM, USB_VENDOR_ID_WACOM, prod),\
+ .driver_data = (kernel_ulong_t)&wacom_features_##prod
+
++#define PCI_DEVICE_WACOM(prod) \
++ HID_DEVICE(BUS_PCI, HID_GROUP_WACOM, USB_VENDOR_ID_WACOM, prod),\
++ .driver_data = (kernel_ulong_t)&wacom_features_##prod
++
+ #define USB_DEVICE_LENOVO(prod) \
+ HID_USB_DEVICE(USB_VENDOR_ID_LENOVO, prod), \
+ .driver_data = (kernel_ulong_t)&wacom_features_##prod
+@@ -5112,6 +5116,7 @@ const struct hid_device_id wacom_ids[] = {
+
+ { USB_DEVICE_WACOM(HID_ANY_ID) },
+ { I2C_DEVICE_WACOM(HID_ANY_ID) },
++ { PCI_DEVICE_WACOM(HID_ANY_ID) },
+ { BT_DEVICE_WACOM(HID_ANY_ID) },
+ { }
+ };
+diff --git a/drivers/i2c/i2c-core-acpi.c b/drivers/i2c/i2c-core-acpi.c
+index 14ae0cfc325efb..d2499f302b5083 100644
+--- a/drivers/i2c/i2c-core-acpi.c
++++ b/drivers/i2c/i2c-core-acpi.c
+@@ -355,6 +355,25 @@ static const struct acpi_device_id i2c_acpi_force_400khz_device_ids[] = {
+ {}
+ };
+
++static const struct acpi_device_id i2c_acpi_force_100khz_device_ids[] = {
++ /*
++ * When a 400KHz freq is used on this model of ELAN touchpad in Linux,
++ * excessive smoothing (similar to when the touchpad's firmware detects
++ * a noisy signal) is sometimes applied. As some devices' (e.g., Lenovo
++ * V15 G4) ACPI tables specify a 400KHz frequency for this device and
++ * some I2C busses (e.g., Designware I2C) default to a 400KHz freq,
++ * force the speed to 100KHz as a workaround.
++ *
++ * For future investigation: This problem may be related to the default
++ * HCNT/LCNT values given by some busses' drivers, because they are not
++ * specified in the aforementioned devices' ACPI tables, and because
++ * the device works without issues on Windows at what is expected to be
++ * a 400KHz frequency. The root cause of the issue is not known.
++ */
++ { "ELAN06FA", 0 },
++ {}
++};
++
+ static acpi_status i2c_acpi_lookup_speed(acpi_handle handle, u32 level,
+ void *data, void **return_value)
+ {
+@@ -373,6 +392,9 @@ static acpi_status i2c_acpi_lookup_speed(acpi_handle handle, u32 level,
+ if (acpi_match_device_ids(adev, i2c_acpi_force_400khz_device_ids) == 0)
+ lookup->force_speed = I2C_MAX_FAST_MODE_FREQ;
+
++ if (acpi_match_device_ids(adev, i2c_acpi_force_100khz_device_ids) == 0)
++ lookup->force_speed = I2C_MAX_STANDARD_MODE_FREQ;
++
+ return AE_OK;
+ }
+
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 42310c9a00c2d1..53ab814b676ffd 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -1919,7 +1919,7 @@ static int i3c_master_bus_init(struct i3c_master_controller *master)
+ goto err_bus_cleanup;
+
+ if (master->ops->set_speed) {
+- master->ops->set_speed(master, I3C_OPEN_DRAIN_NORMAL_SPEED);
++ ret = master->ops->set_speed(master, I3C_OPEN_DRAIN_NORMAL_SPEED);
+ if (ret)
+ goto err_bus_cleanup;
+ }
+diff --git a/drivers/iio/light/as73211.c b/drivers/iio/light/as73211.c
+index be0068081ebbbb..11fbdcdd26d656 100644
+--- a/drivers/iio/light/as73211.c
++++ b/drivers/iio/light/as73211.c
+@@ -177,6 +177,12 @@ struct as73211_data {
+ BIT(AS73211_SCAN_INDEX_TEMP) | \
+ AS73211_SCAN_MASK_COLOR)
+
++static const unsigned long as73211_scan_masks[] = {
++ AS73211_SCAN_MASK_COLOR,
++ AS73211_SCAN_MASK_ALL,
++ 0
++};
++
+ static const struct iio_chan_spec as73211_channels[] = {
+ {
+ .type = IIO_TEMP,
+@@ -672,9 +678,12 @@ static irqreturn_t as73211_trigger_handler(int irq __always_unused, void *p)
+
+ /* AS73211 starts reading at address 2 */
+ ret = i2c_master_recv(data->client,
+- (char *)&scan.chan[1], 3 * sizeof(scan.chan[1]));
++ (char *)&scan.chan[0], 3 * sizeof(scan.chan[0]));
+ if (ret < 0)
+ goto done;
++
++ /* Avoid pushing uninitialized data */
++ scan.chan[3] = 0;
+ }
+
+ if (data_result) {
+@@ -682,9 +691,15 @@ static irqreturn_t as73211_trigger_handler(int irq __always_unused, void *p)
+ * Saturate all channels (in case of overflows). Temperature channel
+ * is not affected by overflows.
+ */
+- scan.chan[1] = cpu_to_le16(U16_MAX);
+- scan.chan[2] = cpu_to_le16(U16_MAX);
+- scan.chan[3] = cpu_to_le16(U16_MAX);
++ if (*indio_dev->active_scan_mask == AS73211_SCAN_MASK_ALL) {
++ scan.chan[1] = cpu_to_le16(U16_MAX);
++ scan.chan[2] = cpu_to_le16(U16_MAX);
++ scan.chan[3] = cpu_to_le16(U16_MAX);
++ } else {
++ scan.chan[0] = cpu_to_le16(U16_MAX);
++ scan.chan[1] = cpu_to_le16(U16_MAX);
++ scan.chan[2] = cpu_to_le16(U16_MAX);
++ }
+ }
+
+ iio_push_to_buffers_with_timestamp(indio_dev, &scan, iio_get_time_ns(indio_dev));
+@@ -758,6 +773,7 @@ static int as73211_probe(struct i2c_client *client)
+ indio_dev->channels = data->spec_dev->channels;
+ indio_dev->num_channels = data->spec_dev->num_channels;
+ indio_dev->modes = INDIO_DIRECT_MODE;
++ indio_dev->available_scan_masks = as73211_scan_masks;
+
+ ret = i2c_smbus_read_byte_data(data->client, AS73211_REG_OSR);
+ if (ret < 0)
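The as73211 fix leans on how IIO's available_scan_masks works: the driver
supplies a zero-terminated array of supported channel masks, and the core maps
whatever subset of channels userspace enables onto the first listed mask that
covers it, so the trigger handler only ever runs with one of the listed masks
active. A sketch with illustrative bit positions:

    /* zero-terminated; the IIO core picks the first mask that is a
     * superset of the channels enabled by userspace */
    static const unsigned long my_scan_masks[] = {
            BIT(0) | BIT(1) | BIT(2),            /* colour channels only */
            BIT(0) | BIT(1) | BIT(2) | BIT(3),   /* colour + temperature */
            0
    };

    indio_dev->available_scan_masks = my_scan_masks;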
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 45d9dc9c6c8fda..bb02b6adbf2c21 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -2021,6 +2021,11 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ {
+ struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
+ struct mlx5_cache_ent *ent = mr->mmkey.cache_ent;
++ bool is_odp = is_odp_mr(mr);
++ int ret = 0;
++
++ if (is_odp)
++ mutex_lock(&to_ib_umem_odp(mr->umem)->umem_mutex);
+
+ if (mr->mmkey.cacheable && !mlx5r_umr_revoke_mr(mr) && !cache_ent_find_and_store(dev, mr)) {
+ ent = mr->mmkey.cache_ent;
+@@ -2032,7 +2037,7 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ ent->tmp_cleanup_scheduled = true;
+ }
+ spin_unlock_irq(&ent->mkeys_queue.lock);
+- return 0;
++ goto out;
+ }
+
+ if (ent) {
+@@ -2041,7 +2046,15 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ mr->mmkey.cache_ent = NULL;
+ spin_unlock_irq(&ent->mkeys_queue.lock);
+ }
+- return destroy_mkey(dev, mr);
++ ret = destroy_mkey(dev, mr);
++out:
++ if (is_odp) {
++ if (!ret)
++ to_ib_umem_odp(mr->umem)->private = NULL;
++ mutex_unlock(&to_ib_umem_odp(mr->umem)->umem_mutex);
++ }
++
++ return ret;
+ }
+
+ static int __mlx5_ib_dereg_mr(struct ib_mr *ibmr)
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 64b441542cd5dd..1d3bf56157702d 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -282,6 +282,8 @@ static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni,
+ if (!umem_odp->npages)
+ goto out;
+ mr = umem_odp->private;
++ if (!mr)
++ goto out;
+
+ start = max_t(u64, ib_umem_start(umem_odp), range->start);
+ end = min_t(u64, ib_umem_end(umem_odp), range->end);
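Taken together, the two mlx5 hunks form a small handshake around umem_mutex:
the destroy path clears umem_odp->private only after the MR is successfully
revoked, and the invalidation path re-checks the pointer under the same mutex
before dereferencing it. Schematically (names illustrative):

    /* destroy side, under umem_mutex */
    ret = destroy_object(obj);
    if (!ret)
            odp->private = NULL;    /* invalidation must no longer use obj */

    /* invalidation side, also under umem_mutex */
    obj = odp->private;
    if (!obj)
            goto out;               /* raced with destroy; nothing to do */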
+diff --git a/drivers/input/misc/nxp-bbnsm-pwrkey.c b/drivers/input/misc/nxp-bbnsm-pwrkey.c
+index eb4173f9c82044..7ba8d166d68c18 100644
+--- a/drivers/input/misc/nxp-bbnsm-pwrkey.c
++++ b/drivers/input/misc/nxp-bbnsm-pwrkey.c
+@@ -187,6 +187,12 @@ static int bbnsm_pwrkey_probe(struct platform_device *pdev)
+ return 0;
+ }
+
++static void bbnsm_pwrkey_remove(struct platform_device *pdev)
++{
++ dev_pm_clear_wake_irq(&pdev->dev);
++ device_init_wakeup(&pdev->dev, false);
++}
++
+ static int __maybe_unused bbnsm_pwrkey_suspend(struct device *dev)
+ {
+ struct platform_device *pdev = to_platform_device(dev);
+@@ -223,6 +229,8 @@ static struct platform_driver bbnsm_pwrkey_driver = {
+ .of_match_table = bbnsm_pwrkey_ids,
+ },
+ .probe = bbnsm_pwrkey_probe,
++ .remove = bbnsm_pwrkey_remove,
++
+ };
+ module_platform_driver(bbnsm_pwrkey_driver);
+
+diff --git a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+index f1a8f8c75cb0e9..6bf8ecbbe0c263 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
++++ b/drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3.c
+@@ -4616,7 +4616,7 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
+ /* Initialise in-memory data structures */
+ ret = arm_smmu_init_structures(smmu);
+ if (ret)
+- return ret;
++ goto err_free_iopf;
+
+ /* Record our private device structure */
+ platform_set_drvdata(pdev, smmu);
+@@ -4627,22 +4627,29 @@ static int arm_smmu_device_probe(struct platform_device *pdev)
+ /* Reset the device */
+ ret = arm_smmu_device_reset(smmu);
+ if (ret)
+- return ret;
++ goto err_disable;
+
+ /* And we're up. Go go go! */
+ ret = iommu_device_sysfs_add(&smmu->iommu, dev, NULL,
+ "smmu3.%pa", &ioaddr);
+ if (ret)
+- return ret;
++ goto err_disable;
+
+ ret = iommu_device_register(&smmu->iommu, &arm_smmu_ops, dev);
+ if (ret) {
+ dev_err(dev, "Failed to register iommu\n");
+- iommu_device_sysfs_remove(&smmu->iommu);
+- return ret;
++ goto err_free_sysfs;
+ }
+
+ return 0;
++
++err_free_sysfs:
++ iommu_device_sysfs_remove(&smmu->iommu);
++err_disable:
++ arm_smmu_device_disable(smmu);
++err_free_iopf:
++ iopf_queue_free(smmu->evtq.iopf);
++ return ret;
+ }
+
+ static void arm_smmu_device_remove(struct platform_device *pdev)
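The arm-smmu-v3 hunk converts early returns into the conventional probe()
goto ladder: labels are ordered so each failure point unwinds exactly the
steps that succeeded before it, in reverse order. A generic sketch of the
idiom (step/undo names are placeholders):

    static int my_probe(struct platform_device *pdev)
    {
            int ret;

            ret = step_a(pdev);
            if (ret)
                    return ret;             /* nothing to undo yet */

            ret = step_b(pdev);
            if (ret)
                    goto err_undo_a;

            ret = step_c(pdev);
            if (ret)
                    goto err_undo_b;

            return 0;

    err_undo_b:                             /* unwind in reverse order */
            undo_b(pdev);
    err_undo_a:
            undo_a(pdev);
            return ret;
    }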
+diff --git a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+index 6e41ddaa24d636..d525ab43a4aebf 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
++++ b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+@@ -79,7 +79,6 @@
+ #define TEGRA241_VCMDQ_PAGE1(q) (TEGRA241_VCMDQ_PAGE1_BASE + 0x80*(q))
+ #define VCMDQ_ADDR GENMASK(47, 5)
+ #define VCMDQ_LOG2SIZE GENMASK(4, 0)
+-#define VCMDQ_LOG2SIZE_MAX 19
+
+ #define TEGRA241_VCMDQ_BASE 0x00000
+ #define TEGRA241_VCMDQ_CONS_INDX_BASE 0x00008
+@@ -505,12 +504,15 @@ static int tegra241_vcmdq_alloc_smmu_cmdq(struct tegra241_vcmdq *vcmdq)
+ struct arm_smmu_cmdq *cmdq = &vcmdq->cmdq;
+ struct arm_smmu_queue *q = &cmdq->q;
+ char name[16];
++ u32 regval;
+ int ret;
+
+ snprintf(name, 16, "vcmdq%u", vcmdq->idx);
+
+- /* Queue size, capped to ensure natural alignment */
+- q->llq.max_n_shift = min_t(u32, CMDQ_MAX_SZ_SHIFT, VCMDQ_LOG2SIZE_MAX);
++ /* Cap queue size to SMMU's IDR1.CMDQS and ensure natural alignment */
++ regval = readl_relaxed(smmu->base + ARM_SMMU_IDR1);
++ q->llq.max_n_shift =
++ min_t(u32, CMDQ_MAX_SZ_SHIFT, FIELD_GET(IDR1_CMDQS, regval));
+
+ /* Use the common helper to init the VCMDQ, and then... */
+ ret = arm_smmu_init_one_queue(smmu, q, vcmdq->page0,
+diff --git a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+index 6372f3e25c4bc2..601fb878d0ef25 100644
+--- a/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
++++ b/drivers/iommu/arm/arm-smmu/arm-smmu-qcom.c
+@@ -567,6 +567,7 @@ static const struct of_device_id __maybe_unused qcom_smmu_impl_of_match[] = {
+ { .compatible = "qcom,sc8180x-smmu-500", .data = &qcom_smmu_500_impl0_data },
+ { .compatible = "qcom,sc8280xp-smmu-500", .data = &qcom_smmu_500_impl0_data },
+ { .compatible = "qcom,sdm630-smmu-v2", .data = &qcom_smmu_v2_data },
++ { .compatible = "qcom,sdm670-smmu-v2", .data = &qcom_smmu_v2_data },
+ { .compatible = "qcom,sdm845-smmu-v2", .data = &qcom_smmu_v2_data },
+ { .compatible = "qcom,sdm845-smmu-500", .data = &sdm845_smmu_500_data },
+ { .compatible = "qcom,sm6115-smmu-500", .data = &qcom_smmu_500_impl0_data},
+diff --git a/drivers/iommu/iommufd/fault.c b/drivers/iommu/iommufd/fault.c
+index b8393a8c075396..95e2e99ab27241 100644
+--- a/drivers/iommu/iommufd/fault.c
++++ b/drivers/iommu/iommufd/fault.c
+@@ -98,15 +98,23 @@ static void iommufd_auto_response_faults(struct iommufd_hw_pagetable *hwpt,
+ {
+ struct iommufd_fault *fault = hwpt->fault;
+ struct iopf_group *group, *next;
++ struct list_head free_list;
+ unsigned long index;
+
+ if (!fault)
+ return;
++ INIT_LIST_HEAD(&free_list);
+
+ mutex_lock(&fault->mutex);
++ spin_lock(&fault->lock);
+ list_for_each_entry_safe(group, next, &fault->deliver, node) {
+ if (group->attach_handle != &handle->handle)
+ continue;
++ list_move(&group->node, &free_list);
++ }
++ spin_unlock(&fault->lock);
++
++ list_for_each_entry_safe(group, next, &free_list, node) {
+ list_del(&group->node);
+ iopf_group_response(group, IOMMU_PAGE_RESP_INVALID);
+ iopf_free_group(group);
+@@ -208,6 +216,7 @@ void iommufd_fault_destroy(struct iommufd_object *obj)
+ {
+ struct iommufd_fault *fault = container_of(obj, struct iommufd_fault, obj);
+ struct iopf_group *group, *next;
++ unsigned long index;
+
+ /*
+ * The iommufd object's reference count is zero at this point.
+@@ -220,6 +229,13 @@ void iommufd_fault_destroy(struct iommufd_object *obj)
+ iopf_group_response(group, IOMMU_PAGE_RESP_INVALID);
+ iopf_free_group(group);
+ }
++ xa_for_each(&fault->response, index, group) {
++ xa_erase(&fault->response, index);
++ iopf_group_response(group, IOMMU_PAGE_RESP_INVALID);
++ iopf_free_group(group);
++ }
++ xa_destroy(&fault->response);
++ mutex_destroy(&fault->mutex);
+ }
+
+ static void iommufd_compose_fault_message(struct iommu_fault *fault,
+@@ -242,7 +258,7 @@ static ssize_t iommufd_fault_fops_read(struct file *filep, char __user *buf,
+ {
+ size_t fault_size = sizeof(struct iommu_hwpt_pgfault);
+ struct iommufd_fault *fault = filep->private_data;
+- struct iommu_hwpt_pgfault data;
++ struct iommu_hwpt_pgfault data = {};
+ struct iommufd_device *idev;
+ struct iopf_group *group;
+ struct iopf_fault *iopf;
+@@ -253,17 +269,19 @@ static ssize_t iommufd_fault_fops_read(struct file *filep, char __user *buf,
+ return -ESPIPE;
+
+ mutex_lock(&fault->mutex);
+- while (!list_empty(&fault->deliver) && count > done) {
+- group = list_first_entry(&fault->deliver,
+- struct iopf_group, node);
+-
+- if (group->fault_count * fault_size > count - done)
++ while ((group = iommufd_fault_deliver_fetch(fault))) {
++ if (done >= count ||
++ group->fault_count * fault_size > count - done) {
++ iommufd_fault_deliver_restore(fault, group);
+ break;
++ }
+
+ rc = xa_alloc(&fault->response, &group->cookie, group,
+ xa_limit_32b, GFP_KERNEL);
+- if (rc)
++ if (rc) {
++ iommufd_fault_deliver_restore(fault, group);
+ break;
++ }
+
+ idev = to_iommufd_handle(group->attach_handle)->idev;
+ list_for_each_entry(iopf, &group->faults, list) {
+@@ -272,13 +290,12 @@ static ssize_t iommufd_fault_fops_read(struct file *filep, char __user *buf,
+ group->cookie);
+ if (copy_to_user(buf + done, &data, fault_size)) {
+ xa_erase(&fault->response, group->cookie);
++ iommufd_fault_deliver_restore(fault, group);
+ rc = -EFAULT;
+ break;
+ }
+ done += fault_size;
+ }
+-
+- list_del(&group->node);
+ }
+ mutex_unlock(&fault->mutex);
+
+@@ -336,10 +353,10 @@ static __poll_t iommufd_fault_fops_poll(struct file *filep,
+ __poll_t pollflags = EPOLLOUT;
+
+ poll_wait(filep, &fault->wait_queue, wait);
+- mutex_lock(&fault->mutex);
++ spin_lock(&fault->lock);
+ if (!list_empty(&fault->deliver))
+ pollflags |= EPOLLIN | EPOLLRDNORM;
+- mutex_unlock(&fault->mutex);
++ spin_unlock(&fault->lock);
+
+ return pollflags;
+ }
+@@ -381,6 +398,7 @@ int iommufd_fault_alloc(struct iommufd_ucmd *ucmd)
+ INIT_LIST_HEAD(&fault->deliver);
+ xa_init_flags(&fault->response, XA_FLAGS_ALLOC1);
+ mutex_init(&fault->mutex);
++ spin_lock_init(&fault->lock);
+ init_waitqueue_head(&fault->wait_queue);
+
+ filep = anon_inode_getfile("[iommufd-pgfault]", &iommufd_fault_fops,
+@@ -429,9 +447,9 @@ int iommufd_fault_iopf_handler(struct iopf_group *group)
+ hwpt = group->attach_handle->domain->fault_data;
+ fault = hwpt->fault;
+
+- mutex_lock(&fault->mutex);
++ spin_lock(&fault->lock);
+ list_add_tail(&group->node, &fault->deliver);
+- mutex_unlock(&fault->mutex);
++ spin_unlock(&fault->lock);
+
+ wake_up_interruptible(&fault->wait_queue);
+
+diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
+index f1d865e6fab66a..c1f82cb6824256 100644
+--- a/drivers/iommu/iommufd/iommufd_private.h
++++ b/drivers/iommu/iommufd/iommufd_private.h
+@@ -462,14 +462,39 @@ struct iommufd_fault {
+ struct iommufd_ctx *ictx;
+ struct file *filep;
+
+- /* The lists of outstanding faults protected by below mutex. */
+- struct mutex mutex;
++ spinlock_t lock; /* protects the deliver list */
+ struct list_head deliver;
++ struct mutex mutex; /* serializes response flows */
+ struct xarray response;
+
+ struct wait_queue_head wait_queue;
+ };
+
++/* Fetch the first node out of the fault->deliver list */
++static inline struct iopf_group *
++iommufd_fault_deliver_fetch(struct iommufd_fault *fault)
++{
++ struct list_head *list = &fault->deliver;
++ struct iopf_group *group = NULL;
++
++ spin_lock(&fault->lock);
++ if (!list_empty(list)) {
++ group = list_first_entry(list, struct iopf_group, node);
++ list_del(&group->node);
++ }
++ spin_unlock(&fault->lock);
++ return group;
++}
++
++/* Restore a node back to the head of the fault->deliver list */
++static inline void iommufd_fault_deliver_restore(struct iommufd_fault *fault,
++ struct iopf_group *group)
++{
++ spin_lock(&fault->lock);
++ list_add(&group->node, &fault->deliver);
++ spin_unlock(&fault->lock);
++}
++
+ struct iommufd_attach_handle {
+ struct iommu_attach_handle handle;
+ struct iommufd_device *idev;
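With the deliver list now guarded by its own spinlock, consumers use the
fetch/restore helpers above rather than walking the list directly, which keeps
the lock held for a single list operation at a time. The read path then takes
this shape (have_room_for() and consume() are placeholders):

    while ((group = iommufd_fault_deliver_fetch(fault))) {
            if (!have_room_for(group)) {
                    /* put it back at the head and stop; nothing is lost */
                    iommufd_fault_deliver_restore(fault, group);
                    break;
            }
            consume(group);
    }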
+diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
+index 66ce15027f28d7..c1f30483600859 100644
+--- a/drivers/irqchip/Kconfig
++++ b/drivers/irqchip/Kconfig
+@@ -169,6 +169,7 @@ config IXP4XX_IRQ
+
+ config LAN966X_OIC
+ tristate "Microchip LAN966x OIC Support"
++ depends on MCHP_LAN966X_PCI || COMPILE_TEST
+ select GENERIC_IRQ_CHIP
+ select IRQ_DOMAIN
+ help
+diff --git a/drivers/irqchip/irq-apple-aic.c b/drivers/irqchip/irq-apple-aic.c
+index da5250f0155cfa..2b1684c60e3cac 100644
+--- a/drivers/irqchip/irq-apple-aic.c
++++ b/drivers/irqchip/irq-apple-aic.c
+@@ -577,7 +577,8 @@ static void __exception_irq_entry aic_handle_fiq(struct pt_regs *regs)
+ AIC_FIQ_HWIRQ(AIC_TMR_EL02_VIRT));
+ }
+
+- if (read_sysreg_s(SYS_IMP_APL_PMCR0_EL1) & PMCR0_IACT) {
++ if ((read_sysreg_s(SYS_IMP_APL_PMCR0_EL1) & (PMCR0_IMODE | PMCR0_IACT)) ==
++ (FIELD_PREP(PMCR0_IMODE, PMCR0_IMODE_FIQ) | PMCR0_IACT)) {
+ int irq;
+ if (cpumask_test_cpu(smp_processor_id(),
+ &aic_irqc->fiq_aff[AIC_CPU_PMU_P]->aff))
+diff --git a/drivers/irqchip/irq-mvebu-icu.c b/drivers/irqchip/irq-mvebu-icu.c
+index b337f6c05f184f..4eebed39880a5b 100644
+--- a/drivers/irqchip/irq-mvebu-icu.c
++++ b/drivers/irqchip/irq-mvebu-icu.c
+@@ -68,7 +68,8 @@ static int mvebu_icu_translate(struct irq_domain *d, struct irq_fwspec *fwspec,
+ unsigned long *hwirq, unsigned int *type)
+ {
+ unsigned int param_count = static_branch_unlikely(&legacy_bindings) ? 3 : 2;
+- struct mvebu_icu_msi_data *msi_data = d->host_data;
++ struct msi_domain_info *info = d->host_data;
++ struct mvebu_icu_msi_data *msi_data = info->chip_data;
+ struct mvebu_icu *icu = msi_data->icu;
+
+ /* Check the count of the parameters in dt */
+diff --git a/drivers/leds/leds-lp8860.c b/drivers/leds/leds-lp8860.c
+index 7a136fd8172061..06196d851ade71 100644
+--- a/drivers/leds/leds-lp8860.c
++++ b/drivers/leds/leds-lp8860.c
+@@ -265,7 +265,7 @@ static int lp8860_init(struct lp8860_led *led)
+ goto out;
+ }
+
+- reg_count = ARRAY_SIZE(lp8860_eeprom_disp_regs) / sizeof(lp8860_eeprom_disp_regs[0]);
++ reg_count = ARRAY_SIZE(lp8860_eeprom_disp_regs);
+ for (i = 0; i < reg_count; i++) {
+ ret = regmap_write(led->eeprom_regmap,
+ lp8860_eeprom_disp_regs[i].reg,
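The lp8860 bug was a double division: ARRAY_SIZE() already expands to
sizeof(arr) / sizeof((arr)[0]), so dividing its result by the element size
again undercounts the table. For an illustrative 10-entry array of 2-byte
elements:

    struct reg_val { u8 reg; u8 val; };          /* 2 bytes per entry */
    static const struct reg_val regs[10];
    size_t n;

    n = ARRAY_SIZE(regs);                        /* 10 - correct count */
    n = ARRAY_SIZE(regs) / sizeof(regs[0]);      /* 5  - the old bug   */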
+diff --git a/drivers/mailbox/tegra-hsp.c b/drivers/mailbox/tegra-hsp.c
+index 19ef56cbcfd39d..46c921000a34cf 100644
+--- a/drivers/mailbox/tegra-hsp.c
++++ b/drivers/mailbox/tegra-hsp.c
+@@ -388,7 +388,6 @@ static void tegra_hsp_sm_recv32(struct tegra_hsp_channel *channel)
+ value = tegra_hsp_channel_readl(channel, HSP_SM_SHRD_MBOX);
+ value &= ~HSP_SM_SHRD_MBOX_FULL;
+ msg = (void *)(unsigned long)value;
+- mbox_chan_received_data(channel->chan, msg);
+
+ /*
+ * Need to clear all bits here since some producers, such as TCU, depend
+@@ -398,6 +397,8 @@ static void tegra_hsp_sm_recv32(struct tegra_hsp_channel *channel)
+ * explicitly, so we have to make sure we cover all possible cases.
+ */
+ tegra_hsp_channel_writel(channel, 0x0, HSP_SM_SHRD_MBOX);
++
++ mbox_chan_received_data(channel->chan, msg);
+ }
+
+ static const struct tegra_hsp_sm_ops tegra_hsp_sm_32bit_ops = {
+@@ -433,7 +434,6 @@ static void tegra_hsp_sm_recv128(struct tegra_hsp_channel *channel)
+ value[3] = tegra_hsp_channel_readl(channel, HSP_SHRD_MBOX_TYPE1_DATA3);
+
+ msg = (void *)(unsigned long)value;
+- mbox_chan_received_data(channel->chan, msg);
+
+ /*
+ * Clear data registers and tag.
+@@ -443,6 +443,8 @@ static void tegra_hsp_sm_recv128(struct tegra_hsp_channel *channel)
+ tegra_hsp_channel_writel(channel, 0x0, HSP_SHRD_MBOX_TYPE1_DATA2);
+ tegra_hsp_channel_writel(channel, 0x0, HSP_SHRD_MBOX_TYPE1_DATA3);
+ tegra_hsp_channel_writel(channel, 0x0, HSP_SHRD_MBOX_TYPE1_TAG);
++
++ mbox_chan_received_data(channel->chan, msg);
+ }
+
+ static const struct tegra_hsp_sm_ops tegra_hsp_sm_128bit_ops = {
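Both tegra-hsp hunks apply the same ordering rule: finish clearing the
hardware mailbox before calling back into the client, since the callback may
immediately trigger a new message that would otherwise race with the register
writes. In outline (helper names illustrative):

    value = read_mbox(channel);     /* 1. capture the payload           */
    clear_mbox(channel);            /* 2. release the hardware slot     */
    notify_client(channel, value);  /* 3. only now run client callbacks */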
+diff --git a/drivers/mailbox/zynqmp-ipi-mailbox.c b/drivers/mailbox/zynqmp-ipi-mailbox.c
+index 521d08b9ab47e3..d59fcb74b34794 100644
+--- a/drivers/mailbox/zynqmp-ipi-mailbox.c
++++ b/drivers/mailbox/zynqmp-ipi-mailbox.c
+@@ -905,7 +905,7 @@ static int zynqmp_ipi_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+ struct device_node *nc, *np = pdev->dev.of_node;
+- struct zynqmp_ipi_pdata __percpu *pdata;
++ struct zynqmp_ipi_pdata *pdata;
+ struct of_phandle_args out_irq;
+ struct zynqmp_ipi_mbox *mbox;
+ int num_mboxes, ret = -EINVAL;
+diff --git a/drivers/md/Kconfig b/drivers/md/Kconfig
+index 1e9db8e4acdf65..0b1870a09e1fdc 100644
+--- a/drivers/md/Kconfig
++++ b/drivers/md/Kconfig
+@@ -61,6 +61,19 @@ config MD_BITMAP_FILE
+ various kernel APIs and can only work with files on a file system not
+ actually sitting on the MD device.
+
++config MD_LINEAR
++ tristate "Linear (append) mode"
++ depends on BLK_DEV_MD
++ help
++ If you say Y here, then your multiple devices driver will be able to
++ use the so-called linear mode, i.e. it will combine the hard disk
++ partitions by simply appending one to the other.
++
++ To compile this as a module, choose M here: the module
++ will be called linear.
++
++ If unsure, say Y.
++
+ config MD_RAID0
+ tristate "RAID-0 (striping) mode"
+ depends on BLK_DEV_MD
+diff --git a/drivers/md/Makefile b/drivers/md/Makefile
+index 476a214e4bdc26..87bdfc9fe14c55 100644
+--- a/drivers/md/Makefile
++++ b/drivers/md/Makefile
+@@ -29,12 +29,14 @@ dm-zoned-y += dm-zoned-target.o dm-zoned-metadata.o dm-zoned-reclaim.o
+
+ md-mod-y += md.o md-bitmap.o
+ raid456-y += raid5.o raid5-cache.o raid5-ppl.o
++linear-y += md-linear.o
+
+ # Note: link order is important. All raid personalities
+ # must come before md.o, as they each initialise
+ # themselves, and md.o may use the personalities when it
+ # is auto-initialised.
+
++obj-$(CONFIG_MD_LINEAR) += linear.o
+ obj-$(CONFIG_MD_RAID0) += raid0.o
+ obj-$(CONFIG_MD_RAID1) += raid1.o
+ obj-$(CONFIG_MD_RAID10) += raid10.o
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index 1ae2c71bb383b7..78c975d7cd5f42 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -59,6 +59,7 @@ struct convert_context {
+ struct bio *bio_out;
+ struct bvec_iter iter_out;
+ atomic_t cc_pending;
++ unsigned int tag_offset;
+ u64 cc_sector;
+ union {
+ struct skcipher_request *req;
+@@ -1256,6 +1257,7 @@ static void crypt_convert_init(struct crypt_config *cc,
+ if (bio_out)
+ ctx->iter_out = bio_out->bi_iter;
+ ctx->cc_sector = sector + cc->iv_offset;
++ ctx->tag_offset = 0;
+ init_completion(&ctx->restart);
+ }
+
+@@ -1588,7 +1590,6 @@ static void crypt_free_req(struct crypt_config *cc, void *req, struct bio *base_
+ static blk_status_t crypt_convert(struct crypt_config *cc,
+ struct convert_context *ctx, bool atomic, bool reset_pending)
+ {
+- unsigned int tag_offset = 0;
+ unsigned int sector_step = cc->sector_size >> SECTOR_SHIFT;
+ int r;
+
+@@ -1611,9 +1612,9 @@ static blk_status_t crypt_convert(struct crypt_config *cc,
+ atomic_inc(&ctx->cc_pending);
+
+ if (crypt_integrity_aead(cc))
+- r = crypt_convert_block_aead(cc, ctx, ctx->r.req_aead, tag_offset);
++ r = crypt_convert_block_aead(cc, ctx, ctx->r.req_aead, ctx->tag_offset);
+ else
+- r = crypt_convert_block_skcipher(cc, ctx, ctx->r.req, tag_offset);
++ r = crypt_convert_block_skcipher(cc, ctx, ctx->r.req, ctx->tag_offset);
+
+ switch (r) {
+ /*
+@@ -1633,8 +1634,8 @@ static blk_status_t crypt_convert(struct crypt_config *cc,
+ * exit and continue processing in a workqueue
+ */
+ ctx->r.req = NULL;
++ ctx->tag_offset++;
+ ctx->cc_sector += sector_step;
+- tag_offset++;
+ return BLK_STS_DEV_RESOURCE;
+ }
+ } else {
+@@ -1648,8 +1649,8 @@ static blk_status_t crypt_convert(struct crypt_config *cc,
+ */
+ case -EINPROGRESS:
+ ctx->r.req = NULL;
++ ctx->tag_offset++;
+ ctx->cc_sector += sector_step;
+- tag_offset++;
+ continue;
+ /*
+ * The request was already processed (synchronously).
+@@ -1657,7 +1658,7 @@ static blk_status_t crypt_convert(struct crypt_config *cc,
+ case 0:
+ atomic_dec(&ctx->cc_pending);
+ ctx->cc_sector += sector_step;
+- tag_offset++;
++ ctx->tag_offset++;
+ if (!atomic)
+ cond_resched();
+ continue;
+@@ -2092,7 +2093,6 @@ static void kcryptd_crypt_write_continue(struct work_struct *work)
+ struct crypt_config *cc = io->cc;
+ struct convert_context *ctx = &io->ctx;
+ int crypt_finished;
+- sector_t sector = io->sector;
+ blk_status_t r;
+
+ wait_for_completion(&ctx->restart);
+@@ -2109,10 +2109,8 @@ static void kcryptd_crypt_write_continue(struct work_struct *work)
+ }
+
+ /* Encryption was already finished, submit io now */
+- if (crypt_finished) {
++ if (crypt_finished)
+ kcryptd_crypt_write_io_submit(io, 0);
+- io->sector = sector;
+- }
+
+ crypt_dec_pending(io);
+ }
+@@ -2123,14 +2121,13 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
+ struct convert_context *ctx = &io->ctx;
+ struct bio *clone;
+ int crypt_finished;
+- sector_t sector = io->sector;
+ blk_status_t r;
+
+ /*
+ * Prevent io from disappearing until this function completes.
+ */
+ crypt_inc_pending(io);
+- crypt_convert_init(cc, ctx, NULL, io->base_bio, sector);
++ crypt_convert_init(cc, ctx, NULL, io->base_bio, io->sector);
+
+ clone = crypt_alloc_buffer(io, io->base_bio->bi_iter.bi_size);
+ if (unlikely(!clone)) {
+@@ -2147,8 +2144,6 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
+ io->ctx.iter_in = clone->bi_iter;
+ }
+
+- sector += bio_sectors(clone);
+-
+ crypt_inc_pending(io);
+ r = crypt_convert(cc, ctx,
+ test_bit(DM_CRYPT_NO_WRITE_WORKQUEUE, &cc->flags), true);
+@@ -2172,10 +2167,8 @@ static void kcryptd_crypt_write_convert(struct dm_crypt_io *io)
+ }
+
+ /* Encryption was already finished, submit io now */
+- if (crypt_finished) {
++ if (crypt_finished)
+ kcryptd_crypt_write_io_submit(io, 0);
+- io->sector = sector;
+- }
+
+ dec:
+ crypt_dec_pending(io);
+diff --git a/drivers/md/md-autodetect.c b/drivers/md/md-autodetect.c
+index b2a00f213c2cd7..4b80165afd2331 100644
+--- a/drivers/md/md-autodetect.c
++++ b/drivers/md/md-autodetect.c
+@@ -49,6 +49,7 @@ static int md_setup_ents __initdata;
+ * instead of just one. -- KTK
+ * 18May2000: Added support for persistent-superblock arrays:
+ * md=n,0,factor,fault,device-list uses RAID0 for device n
++ * md=n,-1,factor,fault,device-list uses LINEAR for device n
+ * md=n,device-list reads a RAID superblock from the devices
+ * elements in device-list are read by name_to_kdev_t so can be
+ * a hex number or something like /dev/hda1 /dev/sdb
+@@ -87,7 +88,7 @@ static int __init md_setup(char *str)
+ md_setup_ents++;
+ switch (get_option(&str, &level)) { /* RAID level */
+ case 2: /* could be 0 or -1.. */
+- if (level == 0) {
++ if (level == 0 || level == LEVEL_LINEAR) {
+ if (get_option(&str, &factor) != 2 || /* Chunk Size */
+ get_option(&str, &fault) != 2) {
+ printk(KERN_WARNING "md: Too few arguments supplied to md=.\n");
+@@ -95,7 +96,10 @@ static int __init md_setup(char *str)
+ }
+ md_setup_args[ent].level = level;
+ md_setup_args[ent].chunk = 1 << (factor+12);
+- pername = "raid0";
++ if (level == LEVEL_LINEAR)
++ pername = "linear";
++ else
++ pername = "raid0";
+ break;
+ }
+ fallthrough;
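With this change, the legacy md= boot syntax selects the restored linear
personality via a level of -1, mirroring the RAID0 case. For example (device
names illustrative), a chunk factor of 4 gives 1 << (4+12) = 64 KiB rounding:

    md=0,-1,4,0,/dev/sda1,/dev/sdb1    # md0: linear, 64 KiB rounding
    md=1,0,4,0,/dev/sdc1,/dev/sdd1     # md1: raid0, same chunk factor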
+diff --git a/drivers/md/md-linear.c b/drivers/md/md-linear.c
+new file mode 100644
+index 00000000000000..369aed044b409f
+--- /dev/null
++++ b/drivers/md/md-linear.c
+@@ -0,0 +1,352 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/*
++ * linear.c : Multiple Devices driver for Linux
++ * Copyright (C) 1994-96 Marc ZYNGIER <zyngier@ufr-info-p7.ibp.fr> or <maz@gloups.fdn.fr>
++ */
++
++#include <linux/blkdev.h>
++#include <linux/raid/md_u.h>
++#include <linux/seq_file.h>
++#include <linux/module.h>
++#include <linux/slab.h>
++#include <trace/events/block.h>
++#include "md.h"
++
++struct dev_info {
++ struct md_rdev *rdev;
++ sector_t end_sector;
++};
++
++struct linear_conf {
++ struct rcu_head rcu;
++ sector_t array_sectors;
++ /* a copy of mddev->raid_disks */
++ int raid_disks;
++ struct dev_info disks[] __counted_by(raid_disks);
++};
++
++/*
++ * find which device holds a particular offset
++ */
++static inline struct dev_info *which_dev(struct mddev *mddev, sector_t sector)
++{
++ int lo, mid, hi;
++ struct linear_conf *conf;
++
++ lo = 0;
++ hi = mddev->raid_disks - 1;
++ conf = mddev->private;
++
++ /*
++ * Binary Search
++ */
++
++ while (hi > lo) {
++
++ mid = (hi + lo) / 2;
++ if (sector < conf->disks[mid].end_sector)
++ hi = mid;
++ else
++ lo = mid + 1;
++ }
++
++ return conf->disks + lo;
++}
++
++static sector_t linear_size(struct mddev *mddev, sector_t sectors, int raid_disks)
++{
++ struct linear_conf *conf;
++ sector_t array_sectors;
++
++ conf = mddev->private;
++ WARN_ONCE(sectors || raid_disks,
++ "%s does not support generic reshape\n", __func__);
++ array_sectors = conf->array_sectors;
++
++ return array_sectors;
++}
++
++static int linear_set_limits(struct mddev *mddev)
++{
++ struct queue_limits lim;
++ int err;
++
++ md_init_stacking_limits(&lim);
++ lim.max_hw_sectors = mddev->chunk_sectors;
++ lim.max_write_zeroes_sectors = mddev->chunk_sectors;
++ lim.io_min = mddev->chunk_sectors << 9;
++ err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
++ if (err)
++ return err;
++
++ return queue_limits_set(mddev->gendisk->queue, &lim);
++}
++
++static struct linear_conf *linear_conf(struct mddev *mddev, int raid_disks)
++{
++ struct linear_conf *conf;
++ struct md_rdev *rdev;
++ int ret = -EINVAL;
++ int cnt;
++ int i;
++
++ conf = kzalloc(struct_size(conf, disks, raid_disks), GFP_KERNEL);
++ if (!conf)
++ return ERR_PTR(-ENOMEM);
++
++ /*
++ * conf->raid_disks is copy of mddev->raid_disks. The reason to
++ * keep a copy of mddev->raid_disks in struct linear_conf is,
++ * mddev->raid_disks may not be consistent with pointers number of
++ * conf->disks[] when it is updated in linear_add() and used to
++ * iterate old conf->disks[] earray in linear_congested().
++ * Here conf->raid_disks is always consitent with number of
++ * pointers in conf->disks[] array, and mddev->private is updated
++ * with rcu_assign_pointer() in linear_addr(), such race can be
++ * avoided.
++ */
++ conf->raid_disks = raid_disks;
++
++ cnt = 0;
++ conf->array_sectors = 0;
++
++ rdev_for_each(rdev, mddev) {
++ int j = rdev->raid_disk;
++ struct dev_info *disk = conf->disks + j;
++ sector_t sectors;
++
++ if (j < 0 || j >= raid_disks || disk->rdev) {
++ pr_warn("md/linear:%s: disk numbering problem. Aborting!\n",
++ mdname(mddev));
++ goto out;
++ }
++
++ disk->rdev = rdev;
++ if (mddev->chunk_sectors) {
++ sectors = rdev->sectors;
++ sector_div(sectors, mddev->chunk_sectors);
++ rdev->sectors = sectors * mddev->chunk_sectors;
++ }
++
++ conf->array_sectors += rdev->sectors;
++ cnt++;
++ }
++ if (cnt != raid_disks) {
++ pr_warn("md/linear:%s: not enough drives present. Aborting!\n",
++ mdname(mddev));
++ goto out;
++ }
++
++ /*
++ * Here we calculate the device offsets.
++ */
++ conf->disks[0].end_sector = conf->disks[0].rdev->sectors;
++
++ for (i = 1; i < raid_disks; i++)
++ conf->disks[i].end_sector =
++ conf->disks[i-1].end_sector +
++ conf->disks[i].rdev->sectors;
++
++ if (!mddev_is_dm(mddev)) {
++ ret = linear_set_limits(mddev);
++ if (ret)
++ goto out;
++ }
++
++ return conf;
++
++out:
++ kfree(conf);
++ return ERR_PTR(ret);
++}
++
++static int linear_run(struct mddev *mddev)
++{
++ struct linear_conf *conf;
++ int ret;
++
++ if (md_check_no_bitmap(mddev))
++ return -EINVAL;
++
++ conf = linear_conf(mddev, mddev->raid_disks);
++ if (IS_ERR(conf))
++ return PTR_ERR(conf);
++
++ mddev->private = conf;
++ md_set_array_sectors(mddev, linear_size(mddev, 0, 0));
++
++ ret = md_integrity_register(mddev);
++ if (ret) {
++ kfree(conf);
++ mddev->private = NULL;
++ }
++ return ret;
++}
++
++static int linear_add(struct mddev *mddev, struct md_rdev *rdev)
++{
++ /* Adding a drive to a linear array allows the array to grow.
++ * It is permitted if the new drive has a matching superblock
++ * already on it, with raid_disk equal to raid_disks.
++ * It is achieved by creating a new linear_private_data structure
++ * and swapping it in, in place of the current one.
++ * The current one is never freed until the array is stopped.
++ * This avoids races.
++ */
++ struct linear_conf *newconf, *oldconf;
++
++ if (rdev->saved_raid_disk != mddev->raid_disks)
++ return -EINVAL;
++
++ rdev->raid_disk = rdev->saved_raid_disk;
++ rdev->saved_raid_disk = -1;
++
++ newconf = linear_conf(mddev, mddev->raid_disks + 1);
++ if (IS_ERR(newconf))
++ return PTR_ERR(newconf);
++
++ /* newconf->raid_disks already keeps a copy of * the increased
++ * value of mddev->raid_disks, WARN_ONCE() is just used to make
++ * sure of this. It is possible that oldconf is still referenced
++ * in linear_congested(), therefore kfree_rcu() is used to free
++ * oldconf until no one uses it anymore.
++ */
++ oldconf = rcu_dereference_protected(mddev->private,
++ lockdep_is_held(&mddev->reconfig_mutex));
++ mddev->raid_disks++;
++ WARN_ONCE(mddev->raid_disks != newconf->raid_disks,
++ "copied raid_disks doesn't match mddev->raid_disks");
++ rcu_assign_pointer(mddev->private, newconf);
++ md_set_array_sectors(mddev, linear_size(mddev, 0, 0));
++ set_capacity_and_notify(mddev->gendisk, mddev->array_sectors);
++ kfree_rcu(oldconf, rcu);
++ return 0;
++}
++
++static void linear_free(struct mddev *mddev, void *priv)
++{
++ struct linear_conf *conf = priv;
++
++ kfree(conf);
++}
++
++static bool linear_make_request(struct mddev *mddev, struct bio *bio)
++{
++ struct dev_info *tmp_dev;
++ sector_t start_sector, end_sector, data_offset;
++ sector_t bio_sector = bio->bi_iter.bi_sector;
++
++ if (unlikely(bio->bi_opf & REQ_PREFLUSH)
++ && md_flush_request(mddev, bio))
++ return true;
++
++ tmp_dev = which_dev(mddev, bio_sector);
++ start_sector = tmp_dev->end_sector - tmp_dev->rdev->sectors;
++ end_sector = tmp_dev->end_sector;
++ data_offset = tmp_dev->rdev->data_offset;
++
++ if (unlikely(bio_sector >= end_sector ||
++ bio_sector < start_sector))
++ goto out_of_bounds;
++
++ if (unlikely(is_rdev_broken(tmp_dev->rdev))) {
++ md_error(mddev, tmp_dev->rdev);
++ bio_io_error(bio);
++ return true;
++ }
++
++ if (unlikely(bio_end_sector(bio) > end_sector)) {
++ /* This bio crosses a device boundary, so we have to split it */
++ struct bio *split = bio_split(bio, end_sector - bio_sector,
++ GFP_NOIO, &mddev->bio_set);
++
++ if (IS_ERR(split)) {
++ bio->bi_status = errno_to_blk_status(PTR_ERR(split));
++ bio_endio(bio);
++ return true;
++ }
++
++ bio_chain(split, bio);
++ submit_bio_noacct(bio);
++ bio = split;
++ }
++
++ md_account_bio(mddev, &bio);
++ bio_set_dev(bio, tmp_dev->rdev->bdev);
++ bio->bi_iter.bi_sector = bio->bi_iter.bi_sector -
++ start_sector + data_offset;
++
++ if (unlikely((bio_op(bio) == REQ_OP_DISCARD) &&
++ !bdev_max_discard_sectors(bio->bi_bdev))) {
++ /* Just ignore it */
++ bio_endio(bio);
++ } else {
++ if (mddev->gendisk)
++ trace_block_bio_remap(bio, disk_devt(mddev->gendisk),
++ bio_sector);
++ mddev_check_write_zeroes(mddev, bio);
++ submit_bio_noacct(bio);
++ }
++ return true;
++
++out_of_bounds:
++ pr_err("md/linear:%s: make_request: Sector %llu out of bounds on dev %pg: %llu sectors, offset %llu\n",
++ mdname(mddev),
++ (unsigned long long)bio->bi_iter.bi_sector,
++ tmp_dev->rdev->bdev,
++ (unsigned long long)tmp_dev->rdev->sectors,
++ (unsigned long long)start_sector);
++ bio_io_error(bio);
++ return true;
++}
++
++static void linear_status(struct seq_file *seq, struct mddev *mddev)
++{
++ seq_printf(seq, " %dk rounding", mddev->chunk_sectors / 2);
++}
++
++static void linear_error(struct mddev *mddev, struct md_rdev *rdev)
++{
++ if (!test_and_set_bit(MD_BROKEN, &mddev->flags)) {
++ char *md_name = mdname(mddev);
++
++ pr_crit("md/linear%s: Disk failure on %pg detected, failing array.\n",
++ md_name, rdev->bdev);
++ }
++}
++
++static void linear_quiesce(struct mddev *mddev, int state)
++{
++}
++
++static struct md_personality linear_personality = {
++ .name = "linear",
++ .level = LEVEL_LINEAR,
++ .owner = THIS_MODULE,
++ .make_request = linear_make_request,
++ .run = linear_run,
++ .free = linear_free,
++ .status = linear_status,
++ .hot_add_disk = linear_add,
++ .size = linear_size,
++ .quiesce = linear_quiesce,
++ .error_handler = linear_error,
++};
++
++static int __init linear_init(void)
++{
++ return register_md_personality(&linear_personality);
++}
++
++static void linear_exit(void)
++{
++ unregister_md_personality(&linear_personality);
++}
++
++module_init(linear_init);
++module_exit(linear_exit);
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Linear device concatenation personality for MD (deprecated)");
++MODULE_ALIAS("md-personality-1"); /* LINEAR - deprecated*/
++MODULE_ALIAS("md-linear");
++MODULE_ALIAS("md-level--1");
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 44c4c518430d9b..fff28aea23c89e 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -8124,7 +8124,7 @@ void md_error(struct mddev *mddev, struct md_rdev *rdev)
+ return;
+ mddev->pers->error_handler(mddev, rdev);
+
+- if (mddev->pers->level == 0)
++ if (mddev->pers->level == 0 || mddev->pers->level == LEVEL_LINEAR)
+ return;
+
+ if (mddev->degraded && !test_bit(MD_BROKEN, &mddev->flags))
+diff --git a/drivers/media/i2c/ccs/ccs-core.c b/drivers/media/i2c/ccs/ccs-core.c
+index e1ae0f9fad4326..cb21df46bab169 100644
+--- a/drivers/media/i2c/ccs/ccs-core.c
++++ b/drivers/media/i2c/ccs/ccs-core.c
+@@ -3566,15 +3566,15 @@ static int ccs_probe(struct i2c_client *client)
+ out_cleanup:
+ ccs_cleanup(sensor);
+
++out_free_ccs_limits:
++ kfree(sensor->ccs_limits);
++
+ out_release_mdata:
+ kvfree(sensor->mdata.backing);
+
+ out_release_sdata:
+ kvfree(sensor->sdata.backing);
+
+-out_free_ccs_limits:
+- kfree(sensor->ccs_limits);
+-
+ out_power_off:
+ ccs_power_off(&client->dev);
+ mutex_destroy(&sensor->mutex);
+diff --git a/drivers/media/i2c/ccs/ccs-data.c b/drivers/media/i2c/ccs/ccs-data.c
+index 08400edf77ced1..2591dba51e17e2 100644
+--- a/drivers/media/i2c/ccs/ccs-data.c
++++ b/drivers/media/i2c/ccs/ccs-data.c
+@@ -10,6 +10,7 @@
+ #include <linux/limits.h>
+ #include <linux/mm.h>
+ #include <linux/slab.h>
++#include <linux/string.h>
+
+ #include "ccs-data-defs.h"
+
+@@ -97,7 +98,7 @@ ccs_data_parse_length_specifier(const struct __ccs_data_length_specifier *__len,
+ plen = ((size_t)
+ (__len3->length[0] &
+ ((1 << CCS_DATA_LENGTH_SPECIFIER_SIZE_SHIFT) - 1))
+- << 16) + (__len3->length[0] << 8) + __len3->length[1];
++ << 16) + (__len3->length[1] << 8) + __len3->length[2];
+ break;
+ }
+ default:
+@@ -948,15 +949,15 @@ int ccs_data_parse(struct ccs_data_container *ccsdata, const void *data,
+
+ rval = __ccs_data_parse(&bin, ccsdata, data, len, dev, verbose);
+ if (rval)
+- return rval;
++ goto out_cleanup;
+
+ rval = bin_backing_alloc(&bin);
+ if (rval)
+- return rval;
++ goto out_cleanup;
+
+ rval = __ccs_data_parse(&bin, ccsdata, data, len, dev, false);
+ if (rval)
+- goto out_free;
++ goto out_cleanup;
+
+ if (verbose && ccsdata->version)
+ print_ccs_data_version(dev, ccsdata->version);
+@@ -965,15 +966,16 @@ int ccs_data_parse(struct ccs_data_container *ccsdata, const void *data,
+ rval = -EPROTO;
+ dev_dbg(dev, "parsing mismatch; base %p; now %p; end %p\n",
+ bin.base, bin.now, bin.end);
+- goto out_free;
++ goto out_cleanup;
+ }
+
+ ccsdata->backing = bin.base;
+
+ return 0;
+
+-out_free:
++out_cleanup:
+ kvfree(bin.base);
++ memset(ccsdata, 0, sizeof(*ccsdata));
+
+ return rval;
+ }
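The ccs-data fix is pure byte indexing: for a three-byte length specifier, the
masked low bits of byte 0 supply the most significant byte, followed by bytes
1 and 2; the old code reused byte 0 and dropped byte 2 entirely. With the
bytes { 0x01, 0x02, 0x03 }:

    plen = (0x01 << 16) + (0x02 << 8) + 0x03;   /* 0x010203 - fixed   */
    plen = (0x01 << 16) + (0x01 << 8) + 0x02;   /* 0x010102 - old bug */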
+diff --git a/drivers/media/i2c/ds90ub913.c b/drivers/media/i2c/ds90ub913.c
+index 8eed4a200fd89b..b5375d73662996 100644
+--- a/drivers/media/i2c/ds90ub913.c
++++ b/drivers/media/i2c/ds90ub913.c
+@@ -793,7 +793,6 @@ static void ub913_subdev_uninit(struct ub913_data *priv)
+ v4l2_async_unregister_subdev(&priv->sd);
+ ub913_v4l2_nf_unregister(priv);
+ v4l2_subdev_cleanup(&priv->sd);
+- fwnode_handle_put(priv->sd.fwnode);
+ media_entity_cleanup(&priv->sd.entity);
+ }
+
+diff --git a/drivers/media/i2c/ds90ub953.c b/drivers/media/i2c/ds90ub953.c
+index 16f88db1498162..10daecf6f45798 100644
+--- a/drivers/media/i2c/ds90ub953.c
++++ b/drivers/media/i2c/ds90ub953.c
+@@ -1291,7 +1291,6 @@ static void ub953_subdev_uninit(struct ub953_data *priv)
+ v4l2_async_unregister_subdev(&priv->sd);
+ ub953_v4l2_notifier_unregister(priv);
+ v4l2_subdev_cleanup(&priv->sd);
+- fwnode_handle_put(priv->sd.fwnode);
+ media_entity_cleanup(&priv->sd.entity);
+ }
+
+diff --git a/drivers/media/i2c/ds90ub960.c b/drivers/media/i2c/ds90ub960.c
+index 58424d8f72af03..432457a761b116 100644
+--- a/drivers/media/i2c/ds90ub960.c
++++ b/drivers/media/i2c/ds90ub960.c
+@@ -352,6 +352,8 @@
+
+ #define UB960_SR_I2C_RX_ID(n) (0xf8 + (n)) /* < UB960_FPD_RX_NPORTS */
+
++#define UB9702_SR_REFCLK_FREQ 0x3d
++
+ /* Indirect register blocks */
+ #define UB960_IND_TARGET_PAT_GEN 0x00
+ #define UB960_IND_TARGET_RX_ANA(n) (0x01 + (n))
+@@ -1575,16 +1577,24 @@ static int ub960_rxport_wait_locks(struct ub960_data *priv,
+
+ ub960_rxport_read16(priv, nport, UB960_RR_RX_FREQ_HIGH, &v);
+
+- ret = ub960_rxport_get_strobe_pos(priv, nport, &strobe_pos);
+- if (ret)
+- return ret;
++ if (priv->hw_data->is_ub9702) {
++ dev_dbg(dev, "\trx%u: locked, freq %llu Hz\n",
++ nport, (v * 1000000ULL) >> 8);
++ } else {
++ ret = ub960_rxport_get_strobe_pos(priv, nport,
++ &strobe_pos);
++ if (ret)
++ return ret;
+
+- ret = ub960_rxport_get_eq_level(priv, nport, &eq_level);
+- if (ret)
+- return ret;
++ ret = ub960_rxport_get_eq_level(priv, nport, &eq_level);
++ if (ret)
++ return ret;
+
+- dev_dbg(dev, "\trx%u: locked, SP: %d, EQ: %u, freq %llu Hz\n",
+- nport, strobe_pos, eq_level, (v * 1000000ULL) >> 8);
++ dev_dbg(dev,
++ "\trx%u: locked, SP: %d, EQ: %u, freq %llu Hz\n",
++ nport, strobe_pos, eq_level,
++ (v * 1000000ULL) >> 8);
++ }
+ }
+
+ return 0;
+@@ -2523,7 +2533,7 @@ static int ub960_configure_ports_for_streaming(struct ub960_data *priv,
+ for (i = 0; i < 8; i++)
+ ub960_rxport_write(priv, nport,
+ UB960_RR_VC_ID_MAP(i),
+- nport);
++ (nport << 4) | nport);
+ }
+
+ break;
+@@ -2940,6 +2950,54 @@ static const struct v4l2_subdev_pad_ops ub960_pad_ops = {
+ .set_fmt = ub960_set_fmt,
+ };
+
++static void ub960_log_status_ub960_sp_eq(struct ub960_data *priv,
++ unsigned int nport)
++{
++ struct device *dev = &priv->client->dev;
++ u8 eq_level;
++ s8 strobe_pos;
++ u8 v = 0;
++
++ /* Strobe */
++
++ ub960_read(priv, UB960_XR_AEQ_CTL1, &v);
++
++ dev_info(dev, "\t%s strobe\n",
++ (v & UB960_XR_AEQ_CTL1_AEQ_SFILTER_EN) ? "Adaptive" :
++ "Manual");
++
++ if (v & UB960_XR_AEQ_CTL1_AEQ_SFILTER_EN) {
++ ub960_read(priv, UB960_XR_SFILTER_CFG, &v);
++
++ dev_info(dev, "\tStrobe range [%d, %d]\n",
++ ((v >> UB960_XR_SFILTER_CFG_SFILTER_MIN_SHIFT) & 0xf) - 7,
++ ((v >> UB960_XR_SFILTER_CFG_SFILTER_MAX_SHIFT) & 0xf) - 7);
++ }
++
++ ub960_rxport_get_strobe_pos(priv, nport, &strobe_pos);
++
++ dev_info(dev, "\tStrobe pos %d\n", strobe_pos);
++
++ /* EQ */
++
++ ub960_rxport_read(priv, nport, UB960_RR_AEQ_BYPASS, &v);
++
++ dev_info(dev, "\t%s EQ\n",
++ (v & UB960_RR_AEQ_BYPASS_ENABLE) ? "Manual" :
++ "Adaptive");
++
++ if (!(v & UB960_RR_AEQ_BYPASS_ENABLE)) {
++ ub960_rxport_read(priv, nport, UB960_RR_AEQ_MIN_MAX, &v);
++
++ dev_info(dev, "\tEQ range [%u, %u]\n",
++ (v >> UB960_RR_AEQ_MIN_MAX_AEQ_FLOOR_SHIFT) & 0xf,
++ (v >> UB960_RR_AEQ_MIN_MAX_AEQ_MAX_SHIFT) & 0xf);
++ }
++
++ if (ub960_rxport_get_eq_level(priv, nport, &eq_level) == 0)
++ dev_info(dev, "\tEQ level %u\n", eq_level);
++}
++
+ static int ub960_log_status(struct v4l2_subdev *sd)
+ {
+ struct ub960_data *priv = sd_to_ub960(sd);
+@@ -2987,8 +3045,6 @@ static int ub960_log_status(struct v4l2_subdev *sd)
+
+ for (nport = 0; nport < priv->hw_data->num_rxports; nport++) {
+ struct ub960_rxport *rxport = priv->rxports[nport];
+- u8 eq_level;
+- s8 strobe_pos;
+ unsigned int i;
+
+ dev_info(dev, "RX %u\n", nport);
+@@ -3024,44 +3080,8 @@ static int ub960_log_status(struct v4l2_subdev *sd)
+ ub960_rxport_read(priv, nport, UB960_RR_CSI_ERR_COUNTER, &v);
+ dev_info(dev, "\tcsi_err_counter %u\n", v);
+
+- /* Strobe */
+-
+- ub960_read(priv, UB960_XR_AEQ_CTL1, &v);
+-
+- dev_info(dev, "\t%s strobe\n",
+- (v & UB960_XR_AEQ_CTL1_AEQ_SFILTER_EN) ? "Adaptive" :
+- "Manual");
+-
+- if (v & UB960_XR_AEQ_CTL1_AEQ_SFILTER_EN) {
+- ub960_read(priv, UB960_XR_SFILTER_CFG, &v);
+-
+- dev_info(dev, "\tStrobe range [%d, %d]\n",
+- ((v >> UB960_XR_SFILTER_CFG_SFILTER_MIN_SHIFT) & 0xf) - 7,
+- ((v >> UB960_XR_SFILTER_CFG_SFILTER_MAX_SHIFT) & 0xf) - 7);
+- }
+-
+- ub960_rxport_get_strobe_pos(priv, nport, &strobe_pos);
+-
+- dev_info(dev, "\tStrobe pos %d\n", strobe_pos);
+-
+- /* EQ */
+-
+- ub960_rxport_read(priv, nport, UB960_RR_AEQ_BYPASS, &v);
+-
+- dev_info(dev, "\t%s EQ\n",
+- (v & UB960_RR_AEQ_BYPASS_ENABLE) ? "Manual" :
+- "Adaptive");
+-
+- if (!(v & UB960_RR_AEQ_BYPASS_ENABLE)) {
+- ub960_rxport_read(priv, nport, UB960_RR_AEQ_MIN_MAX, &v);
+-
+- dev_info(dev, "\tEQ range [%u, %u]\n",
+- (v >> UB960_RR_AEQ_MIN_MAX_AEQ_FLOOR_SHIFT) & 0xf,
+- (v >> UB960_RR_AEQ_MIN_MAX_AEQ_MAX_SHIFT) & 0xf);
+- }
+-
+- if (ub960_rxport_get_eq_level(priv, nport, &eq_level) == 0)
+- dev_info(dev, "\tEQ level %u\n", eq_level);
++ if (!priv->hw_data->is_ub9702)
++ ub960_log_status_ub960_sp_eq(priv, nport);
+
+ /* GPIOs */
+ for (i = 0; i < UB960_NUM_BC_GPIOS; i++) {
+@@ -3837,7 +3857,10 @@ static int ub960_enable_core_hw(struct ub960_data *priv)
+ if (ret)
+ goto err_pd_gpio;
+
+- ret = ub960_read(priv, UB960_XR_REFCLK_FREQ, &refclk_freq);
++ if (priv->hw_data->is_ub9702)
++ ret = ub960_read(priv, UB9702_SR_REFCLK_FREQ, &refclk_freq);
++ else
++ ret = ub960_read(priv, UB960_XR_REFCLK_FREQ, &refclk_freq);
+ if (ret)
+ goto err_pd_gpio;
+
+diff --git a/drivers/media/i2c/imx296.c b/drivers/media/i2c/imx296.c
+index 83149fa729c424..f3bec16b527c44 100644
+--- a/drivers/media/i2c/imx296.c
++++ b/drivers/media/i2c/imx296.c
+@@ -954,6 +954,8 @@ static int imx296_identify_model(struct imx296 *sensor)
+ return ret;
+ }
+
++ usleep_range(2000, 5000);
++
+ ret = imx296_read(sensor, IMX296_SENSOR_INFO);
+ if (ret < 0) {
+ dev_err(sensor->dev, "failed to read sensor information (%d)\n",
+diff --git a/drivers/media/i2c/ov5640.c b/drivers/media/i2c/ov5640.c
+index c1d3fce4a7d383..8566bc2edde978 100644
+--- a/drivers/media/i2c/ov5640.c
++++ b/drivers/media/i2c/ov5640.c
+@@ -1982,6 +1982,7 @@ static int ov5640_get_light_freq(struct ov5640_dev *sensor)
+ light_freq = 50;
+ } else {
+ /* 60Hz */
++ light_freq = 60;
+ }
+ }
+
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys.c b/drivers/media/pci/intel/ipu6/ipu6-isys.c
+index c85e056cb904b2..17bc8cabcbdb59 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-isys.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-isys.c
+@@ -1133,6 +1133,7 @@ static int isys_probe(struct auxiliary_device *auxdev,
+ free_fw_msg_bufs:
+ free_fw_msg_bufs(isys);
+ out_remove_pkg_dir_shared_buffer:
++ cpu_latency_qos_remove_request(&isys->pm_qos);
+ if (!isp->secure_mode)
+ ipu6_cpd_free_pkg_dir(adev);
+ remove_shared_buffer:
+diff --git a/drivers/media/platform/marvell/mmp-driver.c b/drivers/media/platform/marvell/mmp-driver.c
+index ff9d151121d5eb..4fa171d469cabc 100644
+--- a/drivers/media/platform/marvell/mmp-driver.c
++++ b/drivers/media/platform/marvell/mmp-driver.c
+@@ -231,13 +231,23 @@ static int mmpcam_probe(struct platform_device *pdev)
+
+ mcam_init_clk(mcam);
+
++ /*
++ * Register with V4L.
++ */
++
++ ret = v4l2_device_register(mcam->dev, &mcam->v4l2_dev);
++ if (ret)
++ return ret;
++
+ /*
+ * Create a match of the sensor against its OF node.
+ */
+ ep = fwnode_graph_get_next_endpoint(of_fwnode_handle(pdev->dev.of_node),
+ NULL);
+- if (!ep)
+- return -ENODEV;
++ if (!ep) {
++ ret = -ENODEV;
++ goto out_v4l2_device_unregister;
++ }
+
+ v4l2_async_nf_init(&mcam->notifier, &mcam->v4l2_dev);
+
+@@ -246,7 +256,7 @@ static int mmpcam_probe(struct platform_device *pdev)
+ fwnode_handle_put(ep);
+ if (IS_ERR(asd)) {
+ ret = PTR_ERR(asd);
+- goto out;
++ goto out_v4l2_device_unregister;
+ }
+
+ /*
+@@ -254,7 +264,7 @@ static int mmpcam_probe(struct platform_device *pdev)
+ */
+ ret = mccic_register(mcam);
+ if (ret)
+- goto out;
++ goto out_v4l2_device_unregister;
+
+ /*
+ * Add OF clock provider.
+@@ -283,6 +293,8 @@ static int mmpcam_probe(struct platform_device *pdev)
+ return 0;
+ out:
+ mccic_shutdown(mcam);
++out_v4l2_device_unregister:
++ v4l2_device_unregister(&mcam->v4l2_dev);
+
+ return ret;
+ }
+@@ -293,6 +305,7 @@ static void mmpcam_remove(struct platform_device *pdev)
+ struct mcam_camera *mcam = &cam->mcam;
+
+ mccic_shutdown(mcam);
++ v4l2_device_unregister(&mcam->v4l2_dev);
+ pm_runtime_force_suspend(mcam->dev);
+ }
+
+diff --git a/drivers/media/platform/nuvoton/npcm-video.c b/drivers/media/platform/nuvoton/npcm-video.c
+index 60fbb91400355c..db454c9d2641f8 100644
+--- a/drivers/media/platform/nuvoton/npcm-video.c
++++ b/drivers/media/platform/nuvoton/npcm-video.c
+@@ -1667,9 +1667,9 @@ static int npcm_video_ece_init(struct npcm_video *video)
+ dev_info(dev, "Support HEXTILE pixel format\n");
+
+ ece_pdev = of_find_device_by_node(ece_node);
+- if (IS_ERR(ece_pdev)) {
++ if (!ece_pdev) {
+ dev_err(dev, "Failed to find ECE device\n");
+- return PTR_ERR(ece_pdev);
++ return -ENODEV;
+ }
+ of_node_put(ece_node);
+
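The npcm-video fix is a reminder that kernel lookup functions signal failure
in one of two incompatible ways, and the check must match the API:
of_find_device_by_node() returns NULL on failure, never an ERR_PTR() value, so
the old IS_ERR()/PTR_ERR() pair could never fire. Compare:

    pdev = of_find_device_by_node(node);    /* NULL-returning API */
    if (!pdev)
            return -ENODEV;                 /* pick the errno ourselves */

    clk = clk_get(dev, NULL);               /* ERR_PTR-returning API */
    if (IS_ERR(clk))
            return PTR_ERR(clk);            /* errno is encoded in ptr */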
+diff --git a/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-bytecap.c b/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-bytecap.c
+index 9f768f011fa25a..0f6918f4db383f 100644
+--- a/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-bytecap.c
++++ b/drivers/media/platform/st/stm32/stm32-dcmipp/dcmipp-bytecap.c
+@@ -893,7 +893,7 @@ struct dcmipp_ent_device *dcmipp_bytecap_ent_init(struct device *dev,
+ q->dev = dev;
+
+ /* DCMIPP requires 16 bytes aligned buffers */
+- ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32) & ~0x0f);
++ ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
+ if (ret) {
+ dev_err(dev, "Failed to set DMA mask\n");
+ goto err_mutex_destroy;
+diff --git a/drivers/media/usb/uvc/uvc_ctrl.c b/drivers/media/usb/uvc/uvc_ctrl.c
+index 4fe26e82e3d1c1..4837d8df9c0386 100644
+--- a/drivers/media/usb/uvc/uvc_ctrl.c
++++ b/drivers/media/usb/uvc/uvc_ctrl.c
+@@ -1579,6 +1579,40 @@ static void uvc_ctrl_send_slave_event(struct uvc_video_chain *chain,
+ uvc_ctrl_send_event(chain, handle, ctrl, mapping, val, changes);
+ }
+
++static void uvc_ctrl_set_handle(struct uvc_fh *handle, struct uvc_control *ctrl,
++ struct uvc_fh *new_handle)
++{
++ lockdep_assert_held(&handle->chain->ctrl_mutex);
++
++ if (new_handle) {
++ if (ctrl->handle)
++ dev_warn_ratelimited(&handle->stream->dev->udev->dev,
++ "UVC non compliance: Setting an async control with a pending operation.");
++
++ if (new_handle == ctrl->handle)
++ return;
++
++ if (ctrl->handle) {
++ WARN_ON(!ctrl->handle->pending_async_ctrls);
++ if (ctrl->handle->pending_async_ctrls)
++ ctrl->handle->pending_async_ctrls--;
++ }
++
++ ctrl->handle = new_handle;
++ handle->pending_async_ctrls++;
++ return;
++ }
++
++ /* Cannot clear the handle for a control not owned by us. */
++ if (WARN_ON(ctrl->handle != handle))
++ return;
++
++ ctrl->handle = NULL;
++ if (WARN_ON(!handle->pending_async_ctrls))
++ return;
++ handle->pending_async_ctrls--;
++}
++
+ void uvc_ctrl_status_event(struct uvc_video_chain *chain,
+ struct uvc_control *ctrl, const u8 *data)
+ {
+@@ -1589,7 +1623,8 @@ void uvc_ctrl_status_event(struct uvc_video_chain *chain,
+ mutex_lock(&chain->ctrl_mutex);
+
+ handle = ctrl->handle;
+- ctrl->handle = NULL;
++ if (handle)
++ uvc_ctrl_set_handle(handle, ctrl, NULL);
+
+ list_for_each_entry(mapping, &ctrl->info.mappings, list) {
+ s32 value = __uvc_ctrl_get_value(mapping, data);
+@@ -1640,10 +1675,8 @@ bool uvc_ctrl_status_event_async(struct urb *urb, struct uvc_video_chain *chain,
+ struct uvc_device *dev = chain->dev;
+ struct uvc_ctrl_work *w = &dev->async_ctrl;
+
+- if (list_empty(&ctrl->info.mappings)) {
+- ctrl->handle = NULL;
++ if (list_empty(&ctrl->info.mappings))
+ return false;
+- }
+
+ w->data = data;
+ w->urb = urb;
+@@ -1673,13 +1706,13 @@ static void uvc_ctrl_send_events(struct uvc_fh *handle,
+ {
+ struct uvc_control_mapping *mapping;
+ struct uvc_control *ctrl;
+- u32 changes = V4L2_EVENT_CTRL_CH_VALUE;
+ unsigned int i;
+ unsigned int j;
+
+ for (i = 0; i < xctrls_count; ++i) {
+- ctrl = uvc_find_control(handle->chain, xctrls[i].id, &mapping);
++ u32 changes = V4L2_EVENT_CTRL_CH_VALUE;
+
++ ctrl = uvc_find_control(handle->chain, xctrls[i].id, &mapping);
+ if (ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
+ /* Notification will be sent from an Interrupt event. */
+ continue;
+@@ -1811,7 +1844,10 @@ int uvc_ctrl_begin(struct uvc_video_chain *chain)
+ }
+
+ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
+- struct uvc_entity *entity, int rollback, struct uvc_control **err_ctrl)
++ struct uvc_fh *handle,
++ struct uvc_entity *entity,
++ int rollback,
++ struct uvc_control **err_ctrl)
+ {
+ struct uvc_control *ctrl;
+ unsigned int i;
+@@ -1859,6 +1895,10 @@ static int uvc_ctrl_commit_entity(struct uvc_device *dev,
+ *err_ctrl = ctrl;
+ return ret;
+ }
++
++ if (!rollback && handle &&
++ ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
++ uvc_ctrl_set_handle(handle, ctrl, handle);
+ }
+
+ return 0;
+@@ -1895,8 +1935,8 @@ int __uvc_ctrl_commit(struct uvc_fh *handle, int rollback,
+
+ /* Find the control. */
+ list_for_each_entry(entity, &chain->entities, chain) {
+- ret = uvc_ctrl_commit_entity(chain->dev, entity, rollback,
+- &err_ctrl);
++ ret = uvc_ctrl_commit_entity(chain->dev, handle, entity,
++ rollback, &err_ctrl);
+ if (ret < 0) {
+ if (ctrls)
+ ctrls->error_idx =
+@@ -2046,9 +2086,6 @@ int uvc_ctrl_set(struct uvc_fh *handle,
+ mapping->set(mapping, value,
+ uvc_ctrl_data(ctrl, UVC_CTRL_DATA_CURRENT));
+
+- if (ctrl->info.flags & UVC_CTRL_FLAG_ASYNCHRONOUS)
+- ctrl->handle = handle;
+-
+ ctrl->dirty = 1;
+ ctrl->modified = 1;
+ return 0;
+@@ -2377,7 +2414,7 @@ int uvc_ctrl_restore_values(struct uvc_device *dev)
+ ctrl->dirty = 1;
+ }
+
+- ret = uvc_ctrl_commit_entity(dev, entity, 0, NULL);
++ ret = uvc_ctrl_commit_entity(dev, NULL, entity, 0, NULL);
+ if (ret < 0)
+ return ret;
+ }
+@@ -2770,6 +2807,26 @@ int uvc_ctrl_init_device(struct uvc_device *dev)
+ return 0;
+ }
+
++void uvc_ctrl_cleanup_fh(struct uvc_fh *handle)
++{
++ struct uvc_entity *entity;
++
++ guard(mutex)(&handle->chain->ctrl_mutex);
++
++ if (!handle->pending_async_ctrls)
++ return;
++
++ list_for_each_entry(entity, &handle->chain->dev->entities, list) {
++ for (unsigned int i = 0; i < entity->ncontrols; ++i) {
++ if (entity->controls[i].handle != handle)
++ continue;
++ uvc_ctrl_set_handle(handle, &entity->controls[i], NULL);
++ }
++ }
++
++ WARN_ON(handle->pending_async_ctrls);
++}
++
+ /*
+ * Cleanup device controls.
+ */
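The uvc_ctrl changes centralise async-control ownership in one helper:
uvc_ctrl_set_handle() is the only place the per-handle pending_async_ctrls
counter moves, so the lifecycle reduces to three events:

    /* commit: the file handle takes ownership of the async control */
    uvc_ctrl_set_handle(handle, ctrl, handle);  /* counter++ */

    /* completion event arrives: ownership is released */
    uvc_ctrl_set_handle(handle, ctrl, NULL);    /* counter-- */

    /* file release: uvc_ctrl_cleanup_fh() drops anything an event
     * never delivered, then WARNs if the counter is not back to zero */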
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 9f38a9b23c0181..d832aa55056f39 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -775,27 +775,14 @@ static const u8 uvc_media_transport_input_guid[16] =
+ UVC_GUID_UVC_MEDIA_TRANSPORT_INPUT;
+ static const u8 uvc_processing_guid[16] = UVC_GUID_UVC_PROCESSING;
+
+-static struct uvc_entity *uvc_alloc_new_entity(struct uvc_device *dev, u16 type,
+- u16 id, unsigned int num_pads,
+- unsigned int extra_size)
++static struct uvc_entity *uvc_alloc_entity(u16 type, u16 id,
++ unsigned int num_pads, unsigned int extra_size)
+ {
+ struct uvc_entity *entity;
+ unsigned int num_inputs;
+ unsigned int size;
+ unsigned int i;
+
+- /* Per UVC 1.1+ spec 3.7.2, the ID should be non-zero. */
+- if (id == 0) {
+- dev_err(&dev->udev->dev, "Found Unit with invalid ID 0.\n");
+- return ERR_PTR(-EINVAL);
+- }
+-
+- /* Per UVC 1.1+ spec 3.7.2, the ID is unique. */
+- if (uvc_entity_by_id(dev, id)) {
+- dev_err(&dev->udev->dev, "Found multiple Units with ID %u\n", id);
+- return ERR_PTR(-EINVAL);
+- }
+-
+ extra_size = roundup(extra_size, sizeof(*entity->pads));
+ if (num_pads)
+ num_inputs = type & UVC_TERM_OUTPUT ? num_pads : num_pads - 1;
+@@ -805,7 +792,7 @@ static struct uvc_entity *uvc_alloc_new_entity(struct uvc_device *dev, u16 type,
+ + num_inputs;
+ entity = kzalloc(size, GFP_KERNEL);
+ if (entity == NULL)
+- return ERR_PTR(-ENOMEM);
++ return NULL;
+
+ entity->id = id;
+ entity->type = type;
+@@ -917,10 +904,10 @@ static int uvc_parse_vendor_control(struct uvc_device *dev,
+ break;
+ }
+
+- unit = uvc_alloc_new_entity(dev, UVC_VC_EXTENSION_UNIT,
+- buffer[3], p + 1, 2 * n);
+- if (IS_ERR(unit))
+- return PTR_ERR(unit);
++ unit = uvc_alloc_entity(UVC_VC_EXTENSION_UNIT, buffer[3],
++ p + 1, 2*n);
++ if (unit == NULL)
++ return -ENOMEM;
+
+ memcpy(unit->guid, &buffer[4], 16);
+ unit->extension.bNumControls = buffer[20];
+@@ -1029,10 +1016,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- term = uvc_alloc_new_entity(dev, type | UVC_TERM_INPUT,
+- buffer[3], 1, n + p);
+- if (IS_ERR(term))
+- return PTR_ERR(term);
++ term = uvc_alloc_entity(type | UVC_TERM_INPUT, buffer[3],
++ 1, n + p);
++ if (term == NULL)
++ return -ENOMEM;
+
+ if (UVC_ENTITY_TYPE(term) == UVC_ITT_CAMERA) {
+ term->camera.bControlSize = n;
+@@ -1088,10 +1075,10 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return 0;
+ }
+
+- term = uvc_alloc_new_entity(dev, type | UVC_TERM_OUTPUT,
+- buffer[3], 1, 0);
+- if (IS_ERR(term))
+- return PTR_ERR(term);
++ term = uvc_alloc_entity(type | UVC_TERM_OUTPUT, buffer[3],
++ 1, 0);
++ if (term == NULL)
++ return -ENOMEM;
+
+ memcpy(term->baSourceID, &buffer[7], 1);
+
+@@ -1110,10 +1097,9 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3],
+- p + 1, 0);
+- if (IS_ERR(unit))
+- return PTR_ERR(unit);
++ unit = uvc_alloc_entity(buffer[2], buffer[3], p + 1, 0);
++ if (unit == NULL)
++ return -ENOMEM;
+
+ memcpy(unit->baSourceID, &buffer[5], p);
+
+@@ -1133,9 +1119,9 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3], 2, n);
+- if (IS_ERR(unit))
+- return PTR_ERR(unit);
++ unit = uvc_alloc_entity(buffer[2], buffer[3], 2, n);
++ if (unit == NULL)
++ return -ENOMEM;
+
+ memcpy(unit->baSourceID, &buffer[4], 1);
+ unit->processing.wMaxMultiplier =
+@@ -1162,10 +1148,9 @@ static int uvc_parse_standard_control(struct uvc_device *dev,
+ return -EINVAL;
+ }
+
+- unit = uvc_alloc_new_entity(dev, buffer[2], buffer[3],
+- p + 1, n);
+- if (IS_ERR(unit))
+- return PTR_ERR(unit);
++ unit = uvc_alloc_entity(buffer[2], buffer[3], p + 1, n);
++ if (unit == NULL)
++ return -ENOMEM;
+
+ memcpy(unit->guid, &buffer[4], 16);
+ unit->extension.bNumControls = buffer[20];
+@@ -1295,20 +1280,19 @@ static int uvc_gpio_parse(struct uvc_device *dev)
+ struct gpio_desc *gpio_privacy;
+ int irq;
+
+- gpio_privacy = devm_gpiod_get_optional(&dev->udev->dev, "privacy",
++ gpio_privacy = devm_gpiod_get_optional(&dev->intf->dev, "privacy",
+ GPIOD_IN);
+ if (IS_ERR_OR_NULL(gpio_privacy))
+ return PTR_ERR_OR_ZERO(gpio_privacy);
+
+ irq = gpiod_to_irq(gpio_privacy);
+ if (irq < 0)
+- return dev_err_probe(&dev->udev->dev, irq,
++ return dev_err_probe(&dev->intf->dev, irq,
+ "No IRQ for privacy GPIO\n");
+
+- unit = uvc_alloc_new_entity(dev, UVC_EXT_GPIO_UNIT,
+- UVC_EXT_GPIO_UNIT_ID, 0, 1);
+- if (IS_ERR(unit))
+- return PTR_ERR(unit);
++ unit = uvc_alloc_entity(UVC_EXT_GPIO_UNIT, UVC_EXT_GPIO_UNIT_ID, 0, 1);
++ if (!unit)
++ return -ENOMEM;
+
+ unit->gpio.gpio_privacy = gpio_privacy;
+ unit->gpio.irq = irq;
+@@ -1329,15 +1313,27 @@ static int uvc_gpio_parse(struct uvc_device *dev)
+ static int uvc_gpio_init_irq(struct uvc_device *dev)
+ {
+ struct uvc_entity *unit = dev->gpio_unit;
++ int ret;
+
+ if (!unit || unit->gpio.irq < 0)
+ return 0;
+
+- return devm_request_threaded_irq(&dev->udev->dev, unit->gpio.irq, NULL,
+- uvc_gpio_irq,
+- IRQF_ONESHOT | IRQF_TRIGGER_FALLING |
+- IRQF_TRIGGER_RISING,
+- "uvc_privacy_gpio", dev);
++ ret = request_threaded_irq(unit->gpio.irq, NULL, uvc_gpio_irq,
++ IRQF_ONESHOT | IRQF_TRIGGER_FALLING |
++ IRQF_TRIGGER_RISING,
++ "uvc_privacy_gpio", dev);
++
++ unit->gpio.initialized = !ret;
++
++ return ret;
++}
++
++static void uvc_gpio_deinit(struct uvc_device *dev)
++{
++ if (!dev->gpio_unit || !dev->gpio_unit->gpio.initialized)
++ return;
++
++ free_irq(dev->gpio_unit->gpio.irq, dev);
+ }
+
+ /* ------------------------------------------------------------------------
+@@ -1934,6 +1930,8 @@ static void uvc_unregister_video(struct uvc_device *dev)
+ {
+ struct uvc_streaming *stream;
+
++ uvc_gpio_deinit(dev);
++
+ list_for_each_entry(stream, &dev->streams, list) {
+ /* Nothing to do here, continue. */
+ if (!video_is_registered(&stream->vdev))
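
The hunk above reverts the allocator to the plain-NULL error convention: the ERR_PTR()-based variant could distinguish -EINVAL (bad or duplicate unit ID) from -ENOMEM, while the restored uvc_alloc_entity() only signals allocation failure. A minimal sketch of the two conventions using <linux/err.h> (illustrative struct and names, not from the driver):

#include <linux/err.h>
#include <linux/slab.h>
#include <linux/types.h>

struct foo {
	int dummy;
};

/* NULL convention: the only reportable failure is -ENOMEM. */
static struct foo *alloc_null(void)
{
	return kzalloc(sizeof(struct foo), GFP_KERNEL);
}

/* ERR_PTR convention: an errno is encoded in the pointer itself. */
static struct foo *alloc_errptr(u16 id)
{
	struct foo *f;

	if (id == 0)
		return ERR_PTR(-EINVAL);	/* e.g. invalid descriptor */

	f = kzalloc(sizeof(struct foo), GFP_KERNEL);
	if (!f)
		return ERR_PTR(-ENOMEM);	/* allocation failure */

	return f;
}

Callers of the ERR_PTR variant test IS_ERR() and propagate PTR_ERR(); callers of the NULL variant can only assume -ENOMEM, which is exactly the behavioral difference this revert reintroduces.
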
+diff --git a/drivers/media/usb/uvc/uvc_v4l2.c b/drivers/media/usb/uvc/uvc_v4l2.c
+index f4988f03640aec..7bcd706281daf3 100644
+--- a/drivers/media/usb/uvc/uvc_v4l2.c
++++ b/drivers/media/usb/uvc/uvc_v4l2.c
+@@ -659,6 +659,8 @@ static int uvc_v4l2_release(struct file *file)
+
+ uvc_dbg(stream->dev, CALLS, "%s\n", __func__);
+
++ uvc_ctrl_cleanup_fh(handle);
++
+ /* Only free resources if this is a privileged handle. */
+ if (uvc_has_privileges(handle))
+ uvc_queue_release(&stream->queue);
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index e00f38dd07d935..d2fe01bcd209e5 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -79,6 +79,27 @@ int uvc_query_ctrl(struct uvc_device *dev, u8 query, u8 unit,
+ if (likely(ret == size))
+ return 0;
+
++ /*
++ * Some devices return shorter USB control packets than expected if the
++ * returned value can fit in less bytes. Zero all the bytes that the
++ * device has not written.
++ *
++ * This quirk is applied to all controls, regardless of their data type.
++ * Most controls are little-endian integers, in which case the missing
++ * bytes become 0 MSBs. For other data types, a different heuristic
++ * could be implemented if a device is found needing it.
++ *
++ * We exclude UVC_GET_INFO from the quirk. UVC_GET_LEN does not need
++ * to be excluded because its size is always 1.
++ */
++ if (ret > 0 && query != UVC_GET_INFO) {
++ memset(data + ret, 0, size - ret);
++ dev_warn_once(&dev->udev->dev,
++			      "UVC non-compliance: %s control %u on unit %u returned %d bytes when we expected %u.\n",
++ uvc_query_name(query), cs, unit, ret, size);
++ return 0;
++ }
++
+ if (ret != -EPIPE) {
+ dev_err(&dev->udev->dev,
+ "Failed to query (%s) UVC control %u on unit %u: %d (exp. %u).\n",
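
The comment above explains the quirk: some devices answer with fewer bytes than the control's size, and the unwritten tail is zeroed so little-endian integer values are preserved. The padding step in isolation (hypothetical helper, not the driver's code):

#include <string.h>

/*
 * Zero the unwritten tail of a short control read. For little-endian
 * integer payloads this is equivalent to treating the missing bytes
 * as zero most-significant bytes.
 */
static void pad_short_read(unsigned char *data, int received, int expected)
{
	if (received > 0 && received < expected)
		memset(data + received, 0, expected - received);
}

For example, a device answering 2 bytes (0x34 0x12) for a 4-byte little-endian control yields 0x00001234 after padding.
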
+diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
+index b7d24a853ce4f1..272dc9cf01ee7d 100644
+--- a/drivers/media/usb/uvc/uvcvideo.h
++++ b/drivers/media/usb/uvc/uvcvideo.h
+@@ -234,6 +234,7 @@ struct uvc_entity {
+ u8 *bmControls;
+ struct gpio_desc *gpio_privacy;
+ int irq;
++ bool initialized;
+ } gpio;
+ };
+
+@@ -337,7 +338,11 @@ struct uvc_video_chain {
+ struct uvc_entity *processing; /* Processing unit */
+ struct uvc_entity *selector; /* Selector unit */
+
+- struct mutex ctrl_mutex; /* Protects ctrl.info */
++ struct mutex ctrl_mutex; /*
++ * Protects ctrl.info,
++ * ctrl.handle and
++ * uvc_fh.pending_async_ctrls
++ */
+
+ struct v4l2_prio_state prio; /* V4L2 priority state */
+ u32 caps; /* V4L2 chain-wide caps */
+@@ -612,6 +617,7 @@ struct uvc_fh {
+ struct uvc_video_chain *chain;
+ struct uvc_streaming *stream;
+ enum uvc_handle_state state;
++ unsigned int pending_async_ctrls;
+ };
+
+ struct uvc_driver {
+@@ -795,6 +801,8 @@ int uvc_ctrl_is_accessible(struct uvc_video_chain *chain, u32 v4l2_id,
+ int uvc_xu_ctrl_query(struct uvc_video_chain *chain,
+ struct uvc_xu_control_query *xqry);
+
++void uvc_ctrl_cleanup_fh(struct uvc_fh *handle);
++
+ /* Utility functions */
+ struct usb_host_endpoint *uvc_find_endpoint(struct usb_host_interface *alts,
+ u8 epaddr);
+diff --git a/drivers/media/v4l2-core/v4l2-mc.c b/drivers/media/v4l2-core/v4l2-mc.c
+index 4bb91359e3a9a7..937d358697e19a 100644
+--- a/drivers/media/v4l2-core/v4l2-mc.c
++++ b/drivers/media/v4l2-core/v4l2-mc.c
+@@ -329,7 +329,7 @@ int v4l2_create_fwnode_links_to_pad(struct v4l2_subdev *src_sd,
+ if (!(sink->flags & MEDIA_PAD_FL_SINK))
+ return -EINVAL;
+
+- fwnode_graph_for_each_endpoint(dev_fwnode(src_sd->dev), endpoint) {
++ fwnode_graph_for_each_endpoint(src_sd->fwnode, endpoint) {
+ struct fwnode_handle *remote_ep;
+ int src_idx, sink_idx, ret;
+ struct media_pad *src;
+diff --git a/drivers/mfd/lpc_ich.c b/drivers/mfd/lpc_ich.c
+index f14901660147f5..4b7d0cb9340f1a 100644
+--- a/drivers/mfd/lpc_ich.c
++++ b/drivers/mfd/lpc_ich.c
+@@ -834,8 +834,9 @@ static const struct pci_device_id lpc_ich_ids[] = {
+ { PCI_VDEVICE(INTEL, 0x2917), LPC_ICH9ME},
+ { PCI_VDEVICE(INTEL, 0x2918), LPC_ICH9},
+ { PCI_VDEVICE(INTEL, 0x2919), LPC_ICH9M},
+- { PCI_VDEVICE(INTEL, 0x3197), LPC_GLK},
+ { PCI_VDEVICE(INTEL, 0x2b9c), LPC_COUGARMOUNTAIN},
++ { PCI_VDEVICE(INTEL, 0x3197), LPC_GLK},
++ { PCI_VDEVICE(INTEL, 0x31e8), LPC_GLK},
+ { PCI_VDEVICE(INTEL, 0x3a14), LPC_ICH10DO},
+ { PCI_VDEVICE(INTEL, 0x3a16), LPC_ICH10R},
+ { PCI_VDEVICE(INTEL, 0x3a18), LPC_ICH10},
+diff --git a/drivers/misc/fastrpc.c b/drivers/misc/fastrpc.c
+index 74181b8c386b78..e567a36275afc5 100644
+--- a/drivers/misc/fastrpc.c
++++ b/drivers/misc/fastrpc.c
+@@ -992,7 +992,7 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx)
+ mmap_read_lock(current->mm);
+ vma = find_vma(current->mm, ctx->args[i].ptr);
+ if (vma)
+- pages[i].addr += ctx->args[i].ptr -
++ pages[i].addr += (ctx->args[i].ptr & PAGE_MASK) -
+ vma->vm_start;
+ mmap_read_unlock(current->mm);
+
+@@ -1019,8 +1019,8 @@ static int fastrpc_get_args(u32 kernel, struct fastrpc_invoke_ctx *ctx)
+ (pkt_size - rlen);
+ pages[i].addr = pages[i].addr & PAGE_MASK;
+
+- pg_start = (args & PAGE_MASK) >> PAGE_SHIFT;
+- pg_end = ((args + len - 1) & PAGE_MASK) >> PAGE_SHIFT;
++ pg_start = (rpra[i].buf.pv & PAGE_MASK) >> PAGE_SHIFT;
++ pg_end = ((rpra[i].buf.pv + len - 1) & PAGE_MASK) >> PAGE_SHIFT;
+ pages[i].size = (pg_end - pg_start + 1) * PAGE_SIZE;
+ args = args + mlen;
+ rlen -= mlen;
+@@ -2344,7 +2344,7 @@ static int fastrpc_rpmsg_probe(struct rpmsg_device *rpdev)
+
+ err = fastrpc_device_register(rdev, data, false, domains[domain_id]);
+ if (err)
+- goto fdev_error;
++ goto populate_error;
+ break;
+ default:
+ err = -EINVAL;
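
The fastrpc hunks above fix two pieces of page arithmetic: the in-page offset is now derived from the page-aligned user pointer, and the page span from the address actually handed to the DSP (rpra[i].buf.pv) rather than the raw args cursor. A self-contained sketch of the span computation under the usual 4 KiB page definitions (userspace re-declarations for illustration):

#define PAGE_SHIFT	12	/* 4 KiB pages, as on common x86/arm64 configs */
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))

/* Number of pages touched by a buffer [addr, addr + len), len > 0. */
static unsigned long pages_spanned(unsigned long addr, unsigned long len)
{
	unsigned long pg_start = (addr & PAGE_MASK) >> PAGE_SHIFT;
	unsigned long pg_end = ((addr + len - 1) & PAGE_MASK) >> PAGE_SHIFT;

	return pg_end - pg_start + 1;
}

A 2-byte buffer starting at offset 0xFFF spans pages 0 and 1 (pages_spanned() returns 2); computing the span from the wrong base address produces exactly the kind of off-by-one-page error the fix avoids.
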
+diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
+index 9566837c9848e6..4b19b8a16b0968 100644
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -458,6 +458,8 @@ static unsigned mmc_sdio_get_max_clock(struct mmc_card *card)
+ if (mmc_card_sd_combo(card))
+ max_dtr = min(max_dtr, mmc_sd_get_max_clock(card));
+
++ max_dtr = min_not_zero(max_dtr, card->quirk_max_rate);
++
+ return max_dtr;
+ }
+
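
min_not_zero() used above treats zero as "no limit", so an unset card->quirk_max_rate leaves the negotiated clock untouched while a non-zero quirk caps it. A simplified model of its semantics (the real kernel macro, in include/linux/minmax.h, is expression-based and type-generic):

/* Simplified model of the kernel's min_not_zero(): zero means "no limit". */
static unsigned int min_not_zero_u(unsigned int a, unsigned int b)
{
	if (a == 0)
		return b;
	if (b == 0)
		return a;
	return a < b ? a : b;
}
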
+diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
+index ef3a44f2dff16d..d84aa20f035894 100644
+--- a/drivers/mmc/host/sdhci-esdhc-imx.c
++++ b/drivers/mmc/host/sdhci-esdhc-imx.c
+@@ -303,6 +303,7 @@ static struct esdhc_soc_data usdhc_s32g2_data = {
+ | ESDHC_FLAG_HAVE_CAP1 | ESDHC_FLAG_HS200
+ | ESDHC_FLAG_HS400 | ESDHC_FLAG_HS400_ES
+ | ESDHC_FLAG_SKIP_ERR004536 | ESDHC_FLAG_SKIP_CD_WAKE,
++ .quirks = SDHCI_QUIRK_NO_LED,
+ };
+
+ static struct esdhc_soc_data usdhc_imx7ulp_data = {
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index 8716004fcf6c90..945d08531de376 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -134,9 +134,18 @@
+ /* Timeout value to avoid infinite waiting for pwr_irq */
+ #define MSM_PWR_IRQ_TIMEOUT_MS 5000
+
++/* Max load for eMMC Vdd supply */
++#define MMC_VMMC_MAX_LOAD_UA 570000
++
+ /* Max load for eMMC Vdd-io supply */
+ #define MMC_VQMMC_MAX_LOAD_UA 325000
+
++/* Max load for SD Vdd supply */
++#define SD_VMMC_MAX_LOAD_UA 800000
++
++/* Max load for SD Vdd-io supply */
++#define SD_VQMMC_MAX_LOAD_UA 22000
++
+ #define msm_host_readl(msm_host, host, offset) \
+ msm_host->var_ops->msm_readl_relaxed(host, offset)
+
+@@ -1403,11 +1412,48 @@ static int sdhci_msm_set_pincfg(struct sdhci_msm_host *msm_host, bool level)
+ return ret;
+ }
+
+-static int sdhci_msm_set_vmmc(struct mmc_host *mmc)
++static void msm_config_vmmc_regulator(struct mmc_host *mmc, bool hpm)
++{
++ int load;
++
++ if (!hpm)
++ load = 0;
++ else if (!mmc->card)
++ load = max(MMC_VMMC_MAX_LOAD_UA, SD_VMMC_MAX_LOAD_UA);
++ else if (mmc_card_mmc(mmc->card))
++ load = MMC_VMMC_MAX_LOAD_UA;
++ else if (mmc_card_sd(mmc->card))
++ load = SD_VMMC_MAX_LOAD_UA;
++ else
++ return;
++
++ regulator_set_load(mmc->supply.vmmc, load);
++}
++
++static void msm_config_vqmmc_regulator(struct mmc_host *mmc, bool hpm)
++{
++ int load;
++
++ if (!hpm)
++ load = 0;
++ else if (!mmc->card)
++ load = max(MMC_VQMMC_MAX_LOAD_UA, SD_VQMMC_MAX_LOAD_UA);
++ else if (mmc_card_sd(mmc->card))
++ load = SD_VQMMC_MAX_LOAD_UA;
++ else
++ return;
++
++ regulator_set_load(mmc->supply.vqmmc, load);
++}
++
++static int sdhci_msm_set_vmmc(struct sdhci_msm_host *msm_host,
++ struct mmc_host *mmc, bool hpm)
+ {
+ if (IS_ERR(mmc->supply.vmmc))
+ return 0;
+
++ msm_config_vmmc_regulator(mmc, hpm);
++
+ return mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, mmc->ios.vdd);
+ }
+
+@@ -1420,6 +1466,8 @@ static int msm_toggle_vqmmc(struct sdhci_msm_host *msm_host,
+ if (msm_host->vqmmc_enabled == level)
+ return 0;
+
++ msm_config_vqmmc_regulator(mmc, level);
++
+ if (level) {
+ /* Set the IO voltage regulator to default voltage level */
+ if (msm_host->caps_0 & CORE_3_0V_SUPPORT)
+@@ -1642,7 +1690,8 @@ static void sdhci_msm_handle_pwr_irq(struct sdhci_host *host, int irq)
+ }
+
+ if (pwr_state) {
+- ret = sdhci_msm_set_vmmc(mmc);
++ ret = sdhci_msm_set_vmmc(msm_host, mmc,
++ pwr_state & REQ_BUS_ON);
+ if (!ret)
+ ret = sdhci_msm_set_vqmmc(msm_host, mmc,
+ pwr_state & REQ_BUS_ON);
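
The helpers added above choose a load figure from the attached card type and pass it to regulator_set_load(), which lets the PMIC move the supply between low- and high-power regulator modes. A minimal usage sketch (regulator_set_load() is the real consumer API; the 570000 uA figure mirrors MMC_VMMC_MAX_LOAD_UA from the hunk, and the helper name is hypothetical):

#include <linux/regulator/consumer.h>

/* Request high-power mode before heavy I/O, low-power mode when idle.
 * 570000 uA mirrors MMC_VMMC_MAX_LOAD_UA above; a load of 0 lets the
 * regulator fall back to its low-power mode.
 */
static void set_supply_load(struct regulator *supply, bool busy)
{
	regulator_set_load(supply, busy ? 570000 : 0);
}
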
+diff --git a/drivers/mtd/nand/onenand/onenand_base.c b/drivers/mtd/nand/onenand/onenand_base.c
+index f66385faf631cd..0dc2ea4fc857b7 100644
+--- a/drivers/mtd/nand/onenand/onenand_base.c
++++ b/drivers/mtd/nand/onenand/onenand_base.c
+@@ -2923,6 +2923,7 @@ static int do_otp_read(struct mtd_info *mtd, loff_t from, size_t len,
+ ret = ONENAND_IS_4KB_PAGE(this) ?
+ onenand_mlc_read_ops_nolock(mtd, from, &ops) :
+ onenand_read_ops_nolock(mtd, from, &ops);
++ *retlen = ops.retlen;
+
+ /* Exit OTP access mode */
+ this->command(mtd, ONENAND_CMD_RESET, 0, 0);
+diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
+index 30be4ed68fad29..ef6a22f372f95c 100644
+--- a/drivers/mtd/ubi/build.c
++++ b/drivers/mtd/ubi/build.c
+@@ -1537,7 +1537,7 @@ static int ubi_mtd_param_parse(const char *val, const struct kernel_param *kp)
+ if (token) {
+ int err = kstrtoint(token, 10, &p->ubi_num);
+
+- if (err) {
++ if (err || p->ubi_num < UBI_DEV_NUM_AUTO) {
+ pr_err("UBI error: bad value for ubi_num parameter: %s\n",
+ token);
+ return -EINVAL;
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+index fe0e3e2a811718..71e50fc65c1478 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_nic.c
+@@ -1441,7 +1441,9 @@ void aq_nic_deinit(struct aq_nic_s *self, bool link_down)
+ aq_ptp_ring_free(self);
+ aq_ptp_free(self);
+
+- if (likely(self->aq_fw_ops->deinit) && link_down) {
++ /* May be invoked during hot unplug. */
++ if (pci_device_is_present(self->pdev) &&
++ likely(self->aq_fw_ops->deinit) && link_down) {
+ mutex_lock(&self->fwreq_mutex);
+ self->aq_fw_ops->deinit(self->aq_hw);
+ mutex_unlock(&self->fwreq_mutex);
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+index 0715ea5bf13ed9..3b082114f2e538 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet_wol.c
+@@ -41,9 +41,12 @@ void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ {
+ struct bcmgenet_priv *priv = netdev_priv(dev);
+ struct device *kdev = &priv->pdev->dev;
++ u32 phy_wolopts = 0;
+
+- if (dev->phydev)
++ if (dev->phydev) {
+ phy_ethtool_get_wol(dev->phydev, wol);
++ phy_wolopts = wol->wolopts;
++ }
+
+ /* MAC is not wake-up capable, return what the PHY does */
+ if (!device_can_wakeup(kdev))
+@@ -51,9 +54,14 @@ void bcmgenet_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+
+ /* Overlay MAC capabilities with that of the PHY queried before */
+ wol->supported |= WAKE_MAGIC | WAKE_MAGICSECURE | WAKE_FILTER;
+- wol->wolopts = priv->wolopts;
+- memset(wol->sopass, 0, sizeof(wol->sopass));
++ wol->wolopts |= priv->wolopts;
+
++ /* Return the PHY configured magic password */
++ if (phy_wolopts & WAKE_MAGICSECURE)
++ return;
++
++ /* Otherwise the MAC one */
++ memset(wol->sopass, 0, sizeof(wol->sopass));
+ if (wol->wolopts & WAKE_MAGICSECURE)
+ memcpy(wol->sopass, priv->sopass, sizeof(priv->sopass));
+ }
+@@ -70,7 +78,7 @@ int bcmgenet_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+ /* Try Wake-on-LAN from the PHY first */
+ if (dev->phydev) {
+ ret = phy_ethtool_set_wol(dev->phydev, wol);
+- if (ret != -EOPNOTSUPP)
++ if (ret != -EOPNOTSUPP && wol->wolopts)
+ return ret;
+ }
+
+diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
+index d178138981a967..717e110d23c914 100644
+--- a/drivers/net/ethernet/broadcom/tg3.c
++++ b/drivers/net/ethernet/broadcom/tg3.c
+@@ -55,6 +55,7 @@
+ #include <linux/hwmon.h>
+ #include <linux/hwmon-sysfs.h>
+ #include <linux/crc32poly.h>
++#include <linux/dmi.h>
+
+ #include <net/checksum.h>
+ #include <net/gso.h>
+@@ -18154,6 +18155,50 @@ static int tg3_resume(struct device *device)
+
+ static SIMPLE_DEV_PM_OPS(tg3_pm_ops, tg3_suspend, tg3_resume);
+
++/* Systems where ACPI _PTS (Prepare To Sleep) S5 will result in a fatal
++ * PCIe AER event on the tg3 device if the tg3 device is not, or cannot
++ * be, powered down.
++ */
++static const struct dmi_system_id tg3_restart_aer_quirk_table[] = {
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R440"),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R540"),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R640"),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R650"),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R740"),
++ },
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "PowerEdge R750"),
++ },
++ },
++ {}
++};
++
+ static void tg3_shutdown(struct pci_dev *pdev)
+ {
+ struct net_device *dev = pci_get_drvdata(pdev);
+@@ -18170,6 +18215,19 @@ static void tg3_shutdown(struct pci_dev *pdev)
+
+ if (system_state == SYSTEM_POWER_OFF)
+ tg3_power_down(tp);
++ else if (system_state == SYSTEM_RESTART &&
++ dmi_first_match(tg3_restart_aer_quirk_table) &&
++ pdev->current_state != PCI_D3cold &&
++ pdev->current_state != PCI_UNKNOWN) {
++ /* Disable PCIe AER on the tg3 to avoid a fatal
++ * error during this system restart.
++ */
++ pcie_capability_clear_word(pdev, PCI_EXP_DEVCTL,
++ PCI_EXP_DEVCTL_CERE |
++ PCI_EXP_DEVCTL_NFERE |
++ PCI_EXP_DEVCTL_FERE |
++ PCI_EXP_DEVCTL_URRE);
++ }
+
+ rtnl_unlock();
+
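
The table above follows the standard DMI quirk pattern: dmi_first_match() scans a terminated list and returns the first entry whose .matches entries all hold, or NULL. A minimal sketch of the pattern with hypothetical vendor/product strings:

#include <linux/dmi.h>

static const struct dmi_system_id example_quirks[] = {
	{
		.matches = {
			DMI_MATCH(DMI_SYS_VENDOR, "Example Inc."),
			DMI_MATCH(DMI_PRODUCT_NAME, "Example Board"),
		},
	},
	{ }	/* terminating empty entry */
};

static bool machine_needs_quirk(void)
{
	return dmi_first_match(example_quirks) != NULL;
}
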
+diff --git a/drivers/net/ethernet/intel/ice/devlink/devlink.c b/drivers/net/ethernet/intel/ice/devlink/devlink.c
+index 415445cefdb2aa..b1efd287b3309c 100644
+--- a/drivers/net/ethernet/intel/ice/devlink/devlink.c
++++ b/drivers/net/ethernet/intel/ice/devlink/devlink.c
+@@ -977,6 +977,9 @@ static int ice_devlink_rate_node_new(struct devlink_rate *rate_node, void **priv
+
+ /* preallocate memory for ice_sched_node */
+ node = devm_kzalloc(ice_hw_to_dev(pi->hw), sizeof(*node), GFP_KERNEL);
++ if (!node)
++ return -ENOMEM;
++
+ *priv = node;
+
+ return 0;
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index 8208055d6e7fc5..f12fb3a2b6ad94 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -527,15 +527,14 @@ int ice_setup_rx_ring(struct ice_rx_ring *rx_ring)
+ * @xdp: xdp_buff used as input to the XDP program
+ * @xdp_prog: XDP program to run
+ * @xdp_ring: ring to be used for XDP_TX action
+- * @rx_buf: Rx buffer to store the XDP action
+ * @eop_desc: Last descriptor in packet to read metadata from
+ *
+ * Returns any of ICE_XDP_{PASS, CONSUMED, TX, REDIR}
+ */
+-static void
++static u32
+ ice_run_xdp(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+ struct bpf_prog *xdp_prog, struct ice_tx_ring *xdp_ring,
+- struct ice_rx_buf *rx_buf, union ice_32b_rx_flex_desc *eop_desc)
++ union ice_32b_rx_flex_desc *eop_desc)
+ {
+ unsigned int ret = ICE_XDP_PASS;
+ u32 act;
+@@ -574,7 +573,7 @@ ice_run_xdp(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+ ret = ICE_XDP_CONSUMED;
+ }
+ exit:
+- ice_set_rx_bufs_act(xdp, rx_ring, ret);
++ return ret;
+ }
+
+ /**
+@@ -860,10 +859,8 @@ ice_add_xdp_frag(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
+ xdp_buff_set_frags_flag(xdp);
+ }
+
+- if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS)) {
+- ice_set_rx_bufs_act(xdp, rx_ring, ICE_XDP_CONSUMED);
++ if (unlikely(sinfo->nr_frags == MAX_SKB_FRAGS))
+ return -ENOMEM;
+- }
+
+ __skb_fill_page_desc_noacc(sinfo, sinfo->nr_frags++, rx_buf->page,
+ rx_buf->page_offset, size);
+@@ -924,7 +921,6 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
+ struct ice_rx_buf *rx_buf;
+
+ rx_buf = &rx_ring->rx_buf[ntc];
+- rx_buf->pgcnt = page_count(rx_buf->page);
+ prefetchw(rx_buf->page);
+
+ if (!size)
+@@ -940,6 +936,31 @@ ice_get_rx_buf(struct ice_rx_ring *rx_ring, const unsigned int size,
+ return rx_buf;
+ }
+
++/**
++ * ice_get_pgcnts - grab page_count() for gathered fragments
++ * @rx_ring: Rx descriptor ring to store the page counts on
++ *
++ * This function is intended to be called right before running the XDP
++ * program so that the page recycling mechanism can make a correct
++ * decision about the underlying pages; this matters because the XDP
++ * program can change a page's refcount.
++ */
++static void ice_get_pgcnts(struct ice_rx_ring *rx_ring)
++{
++ u32 nr_frags = rx_ring->nr_frags + 1;
++ u32 idx = rx_ring->first_desc;
++ struct ice_rx_buf *rx_buf;
++ u32 cnt = rx_ring->count;
++
++ for (int i = 0; i < nr_frags; i++) {
++ rx_buf = &rx_ring->rx_buf[idx];
++ rx_buf->pgcnt = page_count(rx_buf->page);
++
++ if (++idx == cnt)
++ idx = 0;
++ }
++}
++
+ /**
+ * ice_build_skb - Build skb around an existing buffer
+ * @rx_ring: Rx descriptor ring to transact packets on
+@@ -1051,12 +1072,12 @@ ice_construct_skb(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp)
+ rx_buf->page_offset + headlen, size,
+ xdp->frame_sz);
+ } else {
+- /* buffer is unused, change the act that should be taken later
+- * on; data was copied onto skb's linear part so there's no
++ /* buffer is unused, restore biased page count in Rx buffer;
++ * data was copied onto skb's linear part so there's no
+ * need for adjusting page offset and we can reuse this buffer
+ * as-is
+ */
+- rx_buf->act = ICE_SKB_CONSUMED;
++ rx_buf->pagecnt_bias++;
+ }
+
+ if (unlikely(xdp_buff_has_frags(xdp))) {
+@@ -1103,6 +1124,65 @@ ice_put_rx_buf(struct ice_rx_ring *rx_ring, struct ice_rx_buf *rx_buf)
+ rx_buf->page = NULL;
+ }
+
++/**
++ * ice_put_rx_mbuf - ice_put_rx_buf() caller, for all frame frags
++ * @rx_ring: Rx ring with all the auxiliary data
++ * @xdp: XDP buffer carrying linear + frags part
++ * @xdp_xmit: XDP_TX/XDP_REDIRECT verdict storage
++ * @ntc: the current next_to_clean value, to be stored on the rx_ring
++ * @verdict: return code from XDP program execution
++ *
++ * Walk through the gathered fragments and satisfy the internal page
++ * recycling mechanism; the action taken for each buffer depends on the
++ * verdict returned by the XDP program.
++ */
++static void ice_put_rx_mbuf(struct ice_rx_ring *rx_ring, struct xdp_buff *xdp,
++ u32 *xdp_xmit, u32 ntc, u32 verdict)
++{
++ u32 nr_frags = rx_ring->nr_frags + 1;
++ u32 idx = rx_ring->first_desc;
++ u32 cnt = rx_ring->count;
++ u32 post_xdp_frags = 1;
++ struct ice_rx_buf *buf;
++ int i;
++
++ if (unlikely(xdp_buff_has_frags(xdp)))
++ post_xdp_frags += xdp_get_shared_info_from_buff(xdp)->nr_frags;
++
++ for (i = 0; i < post_xdp_frags; i++) {
++ buf = &rx_ring->rx_buf[idx];
++
++ if (verdict & (ICE_XDP_TX | ICE_XDP_REDIR)) {
++ ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
++ *xdp_xmit |= verdict;
++ } else if (verdict & ICE_XDP_CONSUMED) {
++ buf->pagecnt_bias++;
++ } else if (verdict == ICE_XDP_PASS) {
++ ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
++ }
++
++ ice_put_rx_buf(rx_ring, buf);
++
++ if (++idx == cnt)
++ idx = 0;
++ }
++	/* Handle buffers that represented frags released by the XDP prog;
++	 * for these we keep pagecnt_bias as-is, since the refcount in
++	 * struct page has already been decremented within the XDP prog and
++	 * we do not have to increase the biased refcnt.
++	 */
++ for (; i < nr_frags; i++) {
++ buf = &rx_ring->rx_buf[idx];
++ ice_put_rx_buf(rx_ring, buf);
++ if (++idx == cnt)
++ idx = 0;
++ }
++
++ xdp->data = NULL;
++ rx_ring->first_desc = ntc;
++ rx_ring->nr_frags = 0;
++}
++
+ /**
+ * ice_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
+ * @rx_ring: Rx descriptor ring to transact packets on
+@@ -1120,15 +1200,13 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ unsigned int total_rx_bytes = 0, total_rx_pkts = 0;
+ unsigned int offset = rx_ring->rx_offset;
+ struct xdp_buff *xdp = &rx_ring->xdp;
+- u32 cached_ntc = rx_ring->first_desc;
+ struct ice_tx_ring *xdp_ring = NULL;
+ struct bpf_prog *xdp_prog = NULL;
+ u32 ntc = rx_ring->next_to_clean;
++ u32 cached_ntu, xdp_verdict;
+ u32 cnt = rx_ring->count;
+ u32 xdp_xmit = 0;
+- u32 cached_ntu;
+ bool failure;
+- u32 first;
+
+ xdp_prog = READ_ONCE(rx_ring->xdp_prog);
+ if (xdp_prog) {
+@@ -1190,6 +1268,7 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ xdp_prepare_buff(xdp, hard_start, offset, size, !!offset);
+ xdp_buff_clear_frags_flag(xdp);
+ } else if (ice_add_xdp_frag(rx_ring, xdp, rx_buf, size)) {
++ ice_put_rx_mbuf(rx_ring, xdp, NULL, ntc, ICE_XDP_CONSUMED);
+ break;
+ }
+ if (++ntc == cnt)
+@@ -1199,15 +1278,15 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ if (ice_is_non_eop(rx_ring, rx_desc))
+ continue;
+
+- ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_buf, rx_desc);
+- if (rx_buf->act == ICE_XDP_PASS)
++ ice_get_pgcnts(rx_ring);
++ xdp_verdict = ice_run_xdp(rx_ring, xdp, xdp_prog, xdp_ring, rx_desc);
++ if (xdp_verdict == ICE_XDP_PASS)
+ goto construct_skb;
+ total_rx_bytes += xdp_get_buff_len(xdp);
+ total_rx_pkts++;
+
+- xdp->data = NULL;
+- rx_ring->first_desc = ntc;
+- rx_ring->nr_frags = 0;
++ ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
++
+ continue;
+ construct_skb:
+ if (likely(ice_ring_uses_build_skb(rx_ring)))
+@@ -1217,18 +1296,12 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ /* exit if we failed to retrieve a buffer */
+ if (!skb) {
+ rx_ring->ring_stats->rx_stats.alloc_page_failed++;
+- rx_buf->act = ICE_XDP_CONSUMED;
+- if (unlikely(xdp_buff_has_frags(xdp)))
+- ice_set_rx_bufs_act(xdp, rx_ring,
+- ICE_XDP_CONSUMED);
+- xdp->data = NULL;
+- rx_ring->first_desc = ntc;
+- rx_ring->nr_frags = 0;
+- break;
++ xdp_verdict = ICE_XDP_CONSUMED;
+ }
+- xdp->data = NULL;
+- rx_ring->first_desc = ntc;
+- rx_ring->nr_frags = 0;
++ ice_put_rx_mbuf(rx_ring, xdp, &xdp_xmit, ntc, xdp_verdict);
++
++ if (!skb)
++ break;
+
+ stat_err_bits = BIT(ICE_RX_FLEX_DESC_STATUS0_RXE_S);
+ if (unlikely(ice_test_staterr(rx_desc->wb.status_error0,
+@@ -1257,23 +1330,6 @@ int ice_clean_rx_irq(struct ice_rx_ring *rx_ring, int budget)
+ total_rx_pkts++;
+ }
+
+- first = rx_ring->first_desc;
+- while (cached_ntc != first) {
+- struct ice_rx_buf *buf = &rx_ring->rx_buf[cached_ntc];
+-
+- if (buf->act & (ICE_XDP_TX | ICE_XDP_REDIR)) {
+- ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
+- xdp_xmit |= buf->act;
+- } else if (buf->act & ICE_XDP_CONSUMED) {
+- buf->pagecnt_bias++;
+- } else if (buf->act == ICE_XDP_PASS) {
+- ice_rx_buf_adjust_pg_offset(buf, xdp->frame_sz);
+- }
+-
+- ice_put_rx_buf(rx_ring, buf);
+- if (++cached_ntc >= cnt)
+- cached_ntc = 0;
+- }
+ rx_ring->next_to_clean = ntc;
+ /* return up to cleaned_count buffers to hardware */
+ failure = ice_alloc_rx_bufs(rx_ring, ICE_RX_DESC_UNUSED(rx_ring));
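
Both ice_get_pgcnts() and ice_put_rx_mbuf() above walk rx_ring->rx_buf[] starting at first_desc with explicit wraparound at rx_ring->count. The traversal pattern in isolation (generic types, not the ice structures):

struct entry {
	int val;
};

/* Visit n entries of a ring of size cnt starting at idx, wrapping around. */
static void walk_ring(struct entry *ring, unsigned int cnt, unsigned int idx,
		      unsigned int n, void (*visit)(struct entry *))
{
	unsigned int i;

	for (i = 0; i < n; i++) {
		visit(&ring[idx]);
		if (++idx == cnt)	/* wrap to the start of the ring */
			idx = 0;
	}
}
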
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
+index feba314a3fe441..7130992d417798 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
+@@ -201,7 +201,6 @@ struct ice_rx_buf {
+ struct page *page;
+ unsigned int page_offset;
+ unsigned int pgcnt;
+- unsigned int act;
+ unsigned int pagecnt_bias;
+ };
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
+index afcead4baef4b1..f6c2b16ab45674 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
++++ b/drivers/net/ethernet/intel/ice/ice_txrx_lib.h
+@@ -5,49 +5,6 @@
+ #define _ICE_TXRX_LIB_H_
+ #include "ice.h"
+
+-/**
+- * ice_set_rx_bufs_act - propagate Rx buffer action to frags
+- * @xdp: XDP buffer representing frame (linear and frags part)
+- * @rx_ring: Rx ring struct
+- * act: action to store onto Rx buffers related to XDP buffer parts
+- *
+- * Set action that should be taken before putting Rx buffer from first frag
+- * to the last.
+- */
+-static inline void
+-ice_set_rx_bufs_act(struct xdp_buff *xdp, const struct ice_rx_ring *rx_ring,
+- const unsigned int act)
+-{
+- u32 sinfo_frags = xdp_get_shared_info_from_buff(xdp)->nr_frags;
+- u32 nr_frags = rx_ring->nr_frags + 1;
+- u32 idx = rx_ring->first_desc;
+- u32 cnt = rx_ring->count;
+- struct ice_rx_buf *buf;
+-
+- for (int i = 0; i < nr_frags; i++) {
+- buf = &rx_ring->rx_buf[idx];
+- buf->act = act;
+-
+- if (++idx == cnt)
+- idx = 0;
+- }
+-
+- /* adjust pagecnt_bias on frags freed by XDP prog */
+- if (sinfo_frags < rx_ring->nr_frags && act == ICE_XDP_CONSUMED) {
+- u32 delta = rx_ring->nr_frags - sinfo_frags;
+-
+- while (delta) {
+- if (idx == 0)
+- idx = cnt - 1;
+- else
+- idx--;
+- buf = &rx_ring->rx_buf[idx];
+- buf->pagecnt_bias--;
+- delta--;
+- }
+- }
+-}
+-
+ /**
+ * ice_test_staterr - tests bits in Rx descriptor status and error fields
+ * @status_err_n: Rx descriptor status_error0 or status_error1 bits
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_ethtool.c b/drivers/net/ethernet/marvell/octeon_ep/octep_ethtool.c
+index 7d0124b283dace..d7a3465e6a7241 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_ethtool.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_ethtool.c
+@@ -157,17 +157,14 @@ octep_get_ethtool_stats(struct net_device *netdev,
+ iface_rx_stats,
+ iface_tx_stats);
+
+- for (q = 0; q < oct->num_oqs; q++) {
+- struct octep_iq *iq = oct->iq[q];
+- struct octep_oq *oq = oct->oq[q];
+-
+- tx_packets += iq->stats.instr_completed;
+- tx_bytes += iq->stats.bytes_sent;
+- tx_busy_errors += iq->stats.tx_busy;
+-
+- rx_packets += oq->stats.packets;
+- rx_bytes += oq->stats.bytes;
+- rx_alloc_errors += oq->stats.alloc_failures;
++ for (q = 0; q < OCTEP_MAX_QUEUES; q++) {
++ tx_packets += oct->stats_iq[q].instr_completed;
++ tx_bytes += oct->stats_iq[q].bytes_sent;
++ tx_busy_errors += oct->stats_iq[q].tx_busy;
++
++ rx_packets += oct->stats_oq[q].packets;
++ rx_bytes += oct->stats_oq[q].bytes;
++ rx_alloc_errors += oct->stats_oq[q].alloc_failures;
+ }
+ i = 0;
+ data[i++] = rx_packets;
+@@ -205,22 +202,18 @@ octep_get_ethtool_stats(struct net_device *netdev,
+ data[i++] = iface_rx_stats->err_pkts;
+
+ /* Per Tx Queue stats */
+- for (q = 0; q < oct->num_iqs; q++) {
+- struct octep_iq *iq = oct->iq[q];
+-
+- data[i++] = iq->stats.instr_posted;
+- data[i++] = iq->stats.instr_completed;
+- data[i++] = iq->stats.bytes_sent;
+- data[i++] = iq->stats.tx_busy;
++ for (q = 0; q < OCTEP_MAX_QUEUES; q++) {
++ data[i++] = oct->stats_iq[q].instr_posted;
++ data[i++] = oct->stats_iq[q].instr_completed;
++ data[i++] = oct->stats_iq[q].bytes_sent;
++ data[i++] = oct->stats_iq[q].tx_busy;
+ }
+
+ /* Per Rx Queue stats */
+- for (q = 0; q < oct->num_oqs; q++) {
+- struct octep_oq *oq = oct->oq[q];
+-
+- data[i++] = oq->stats.packets;
+- data[i++] = oq->stats.bytes;
+- data[i++] = oq->stats.alloc_failures;
++ for (q = 0; q < OCTEP_MAX_QUEUES; q++) {
++ data[i++] = oct->stats_oq[q].packets;
++ data[i++] = oct->stats_oq[q].bytes;
++ data[i++] = oct->stats_oq[q].alloc_failures;
+ }
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+index 730aa5632cceee..a89f80bac39b8d 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.c
+@@ -822,7 +822,7 @@ static inline int octep_iq_full_check(struct octep_iq *iq)
+ if (unlikely(IQ_INSTR_SPACE(iq) >
+ OCTEP_WAKE_QUEUE_THRESHOLD)) {
+ netif_start_subqueue(iq->netdev, iq->q_no);
+- iq->stats.restart_cnt++;
++ iq->stats->restart_cnt++;
+ return 0;
+ }
+
+@@ -960,7 +960,7 @@ static netdev_tx_t octep_start_xmit(struct sk_buff *skb,
+ wmb();
+ /* Ring Doorbell to notify the NIC of new packets */
+ writel(iq->fill_cnt, iq->doorbell_reg);
+- iq->stats.instr_posted += iq->fill_cnt;
++ iq->stats->instr_posted += iq->fill_cnt;
+ iq->fill_cnt = 0;
+ return NETDEV_TX_OK;
+
+@@ -991,22 +991,19 @@ static netdev_tx_t octep_start_xmit(struct sk_buff *skb,
+ static void octep_get_stats64(struct net_device *netdev,
+ struct rtnl_link_stats64 *stats)
+ {
+- u64 tx_packets, tx_bytes, rx_packets, rx_bytes;
+ struct octep_device *oct = netdev_priv(netdev);
++ u64 tx_packets, tx_bytes, rx_packets, rx_bytes;
+ int q;
+
+ tx_packets = 0;
+ tx_bytes = 0;
+ rx_packets = 0;
+ rx_bytes = 0;
+- for (q = 0; q < oct->num_oqs; q++) {
+- struct octep_iq *iq = oct->iq[q];
+- struct octep_oq *oq = oct->oq[q];
+-
+- tx_packets += iq->stats.instr_completed;
+- tx_bytes += iq->stats.bytes_sent;
+- rx_packets += oq->stats.packets;
+- rx_bytes += oq->stats.bytes;
++ for (q = 0; q < OCTEP_MAX_QUEUES; q++) {
++ tx_packets += oct->stats_iq[q].instr_completed;
++ tx_bytes += oct->stats_iq[q].bytes_sent;
++ rx_packets += oct->stats_oq[q].packets;
++ rx_bytes += oct->stats_oq[q].bytes;
+ }
+ stats->tx_packets = tx_packets;
+ stats->tx_bytes = tx_bytes;
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_main.h b/drivers/net/ethernet/marvell/octeon_ep/octep_main.h
+index fee59e0e0138fe..936b786f428168 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_main.h
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_main.h
+@@ -257,11 +257,17 @@ struct octep_device {
+ /* Pointers to Octeon Tx queues */
+ struct octep_iq *iq[OCTEP_MAX_IQ];
+
++ /* Per iq stats */
++ struct octep_iq_stats stats_iq[OCTEP_MAX_IQ];
++
+ /* Rx queues (OQ: Output Queue) */
+ u16 num_oqs;
+ /* Pointers to Octeon Rx queues */
+ struct octep_oq *oq[OCTEP_MAX_OQ];
+
++ /* Per oq stats */
++ struct octep_oq_stats stats_oq[OCTEP_MAX_OQ];
++
+ /* Hardware port number of the PCIe interface */
+ u16 pcie_port;
+
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
+index 8af75cb37c3ee8..82b6b19e76b47a 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.c
+@@ -87,7 +87,7 @@ static int octep_oq_refill(struct octep_device *oct, struct octep_oq *oq)
+ page = dev_alloc_page();
+ if (unlikely(!page)) {
+ dev_err(oq->dev, "refill: rx buffer alloc failed\n");
+- oq->stats.alloc_failures++;
++ oq->stats->alloc_failures++;
+ break;
+ }
+
+@@ -98,7 +98,7 @@ static int octep_oq_refill(struct octep_device *oct, struct octep_oq *oq)
+ "OQ-%d buffer refill: DMA mapping error!\n",
+ oq->q_no);
+ put_page(page);
+- oq->stats.alloc_failures++;
++ oq->stats->alloc_failures++;
+ break;
+ }
+ oq->buff_info[refill_idx].page = page;
+@@ -134,6 +134,7 @@ static int octep_setup_oq(struct octep_device *oct, int q_no)
+ oq->netdev = oct->netdev;
+ oq->dev = &oct->pdev->dev;
+ oq->q_no = q_no;
++ oq->stats = &oct->stats_oq[q_no];
+ oq->max_count = CFG_GET_OQ_NUM_DESC(oct->conf);
+ oq->ring_size_mask = oq->max_count - 1;
+ oq->buffer_size = CFG_GET_OQ_BUF_SIZE(oct->conf);
+@@ -443,7 +444,7 @@ static int __octep_oq_process_rx(struct octep_device *oct,
+ if (!skb) {
+ octep_oq_drop_rx(oq, buff_info,
+ &read_idx, &desc_used);
+- oq->stats.alloc_failures++;
++ oq->stats->alloc_failures++;
+ continue;
+ }
+ skb_reserve(skb, data_offset);
+@@ -494,8 +495,8 @@ static int __octep_oq_process_rx(struct octep_device *oct,
+
+ oq->host_read_idx = read_idx;
+ oq->refill_count += desc_used;
+- oq->stats.packets += pkt;
+- oq->stats.bytes += rx_bytes;
++ oq->stats->packets += pkt;
++ oq->stats->bytes += rx_bytes;
+
+ return pkt;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.h b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.h
+index 3b08e2d560dc39..b4696c93d0e6a9 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_rx.h
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_rx.h
+@@ -186,8 +186,8 @@ struct octep_oq {
+ */
+ u8 __iomem *pkts_sent_reg;
+
+- /* Statistics for this OQ. */
+- struct octep_oq_stats stats;
++ /* Pointer to statistics for this OQ. */
++ struct octep_oq_stats *stats;
+
+ /* Packets pending to be processed */
+ u32 pkts_pending;
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c
+index 06851b78aa28c8..08ee90013fef3b 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.c
+@@ -81,9 +81,9 @@ int octep_iq_process_completions(struct octep_iq *iq, u16 budget)
+ }
+
+ iq->pkts_processed += compl_pkts;
+- iq->stats.instr_completed += compl_pkts;
+- iq->stats.bytes_sent += compl_bytes;
+- iq->stats.sgentry_sent += compl_sg;
++ iq->stats->instr_completed += compl_pkts;
++ iq->stats->bytes_sent += compl_bytes;
++ iq->stats->sgentry_sent += compl_sg;
+ iq->flush_index = fi;
+
+ netdev_tx_completed_queue(iq->netdev_q, compl_pkts, compl_bytes);
+@@ -187,6 +187,7 @@ static int octep_setup_iq(struct octep_device *oct, int q_no)
+ iq->netdev = oct->netdev;
+ iq->dev = &oct->pdev->dev;
+ iq->q_no = q_no;
++ iq->stats = &oct->stats_iq[q_no];
+ iq->max_count = CFG_GET_IQ_NUM_DESC(oct->conf);
+ iq->ring_size_mask = iq->max_count - 1;
+ iq->fill_threshold = CFG_GET_IQ_DB_MIN(oct->conf);
+diff --git a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h
+index 875a2c34091ffe..58fb39dda977c0 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h
++++ b/drivers/net/ethernet/marvell/octeon_ep/octep_tx.h
+@@ -170,8 +170,8 @@ struct octep_iq {
+ */
+ u16 flush_index;
+
+- /* Statistics for this input queue. */
+- struct octep_iq_stats stats;
++ /* Pointer to statistics for this input queue. */
++ struct octep_iq_stats *stats;
+
+ /* Pointer to the Virtual Base addr of the input ring. */
+ struct octep_tx_desc_hw *desc_ring;
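
The octeon_ep restructuring above moves per-queue counters out of the queue structures into fixed-size arrays in the device structure, with each queue keeping only a pointer; the counters then survive queue teardown, and the ethtool/stats paths can iterate over the compile-time maximum instead of the current queue count. The layout change in miniature (abbreviated, hypothetical names):

#define MAX_Q	8

struct q_stats {
	unsigned long long packets;
	unsigned long long bytes;
};

struct queue {
	struct q_stats *stats;		/* points into stats_q[] below */
};

struct device_priv {
	struct queue *q[MAX_Q];		/* freed/reallocated on reconfig */
	struct q_stats stats_q[MAX_Q];	/* lives as long as the device */
};

Because stats_q has the device's lifetime, freeing and reallocating a queue no longer resets its counters, which is what the switch from iq->stats.x to iq->stats->x buys.
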
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_ethtool.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_ethtool.c
+index a1979b45e355c6..12ddb77141cc35 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_ethtool.c
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_ethtool.c
+@@ -121,12 +121,9 @@ static void octep_vf_get_ethtool_stats(struct net_device *netdev,
+ iface_tx_stats = &oct->iface_tx_stats;
+ iface_rx_stats = &oct->iface_rx_stats;
+
+- for (q = 0; q < oct->num_oqs; q++) {
+- struct octep_vf_iq *iq = oct->iq[q];
+- struct octep_vf_oq *oq = oct->oq[q];
+-
+- tx_busy_errors += iq->stats.tx_busy;
+- rx_alloc_errors += oq->stats.alloc_failures;
++ for (q = 0; q < OCTEP_VF_MAX_QUEUES; q++) {
++ tx_busy_errors += oct->stats_iq[q].tx_busy;
++ rx_alloc_errors += oct->stats_oq[q].alloc_failures;
+ }
+ i = 0;
+ data[i++] = rx_alloc_errors;
+@@ -141,22 +138,18 @@ static void octep_vf_get_ethtool_stats(struct net_device *netdev,
+ data[i++] = iface_rx_stats->dropped_octets_fifo_full;
+
+ /* Per Tx Queue stats */
+- for (q = 0; q < oct->num_iqs; q++) {
+- struct octep_vf_iq *iq = oct->iq[q];
+-
+- data[i++] = iq->stats.instr_posted;
+- data[i++] = iq->stats.instr_completed;
+- data[i++] = iq->stats.bytes_sent;
+- data[i++] = iq->stats.tx_busy;
++ for (q = 0; q < OCTEP_VF_MAX_QUEUES; q++) {
++ data[i++] = oct->stats_iq[q].instr_posted;
++ data[i++] = oct->stats_iq[q].instr_completed;
++ data[i++] = oct->stats_iq[q].bytes_sent;
++ data[i++] = oct->stats_iq[q].tx_busy;
+ }
+
+ /* Per Rx Queue stats */
+ for (q = 0; q < oct->num_oqs; q++) {
+- struct octep_vf_oq *oq = oct->oq[q];
+-
+- data[i++] = oq->stats.packets;
+- data[i++] = oq->stats.bytes;
+- data[i++] = oq->stats.alloc_failures;
++ data[i++] = oct->stats_oq[q].packets;
++ data[i++] = oct->stats_oq[q].bytes;
++ data[i++] = oct->stats_oq[q].alloc_failures;
+ }
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
+index 4c699514fd57a0..18c922dd5fc64d 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.c
+@@ -574,7 +574,7 @@ static int octep_vf_iq_full_check(struct octep_vf_iq *iq)
+ * caused queues to get re-enabled after
+ * being stopped
+ */
+- iq->stats.restart_cnt++;
++ iq->stats->restart_cnt++;
+ fallthrough;
+ case 1: /* Queue left enabled, since IQ is not yet full*/
+ return 0;
+@@ -731,7 +731,7 @@ static netdev_tx_t octep_vf_start_xmit(struct sk_buff *skb,
+ /* Flush the hw descriptors before writing to doorbell */
+ smp_wmb();
+ writel(iq->fill_cnt, iq->doorbell_reg);
+- iq->stats.instr_posted += iq->fill_cnt;
++ iq->stats->instr_posted += iq->fill_cnt;
+ iq->fill_cnt = 0;
+ return NETDEV_TX_OK;
+ }
+@@ -786,14 +786,11 @@ static void octep_vf_get_stats64(struct net_device *netdev,
+ tx_bytes = 0;
+ rx_packets = 0;
+ rx_bytes = 0;
+- for (q = 0; q < oct->num_oqs; q++) {
+- struct octep_vf_iq *iq = oct->iq[q];
+- struct octep_vf_oq *oq = oct->oq[q];
+-
+- tx_packets += iq->stats.instr_completed;
+- tx_bytes += iq->stats.bytes_sent;
+- rx_packets += oq->stats.packets;
+- rx_bytes += oq->stats.bytes;
++ for (q = 0; q < OCTEP_VF_MAX_QUEUES; q++) {
++ tx_packets += oct->stats_iq[q].instr_completed;
++ tx_bytes += oct->stats_iq[q].bytes_sent;
++ rx_packets += oct->stats_oq[q].packets;
++ rx_bytes += oct->stats_oq[q].bytes;
+ }
+ stats->tx_packets = tx_packets;
+ stats->tx_bytes = tx_bytes;
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.h b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.h
+index 5769f62545cd44..1a352f41f823cd 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.h
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_main.h
+@@ -246,11 +246,17 @@ struct octep_vf_device {
+ /* Pointers to Octeon Tx queues */
+ struct octep_vf_iq *iq[OCTEP_VF_MAX_IQ];
+
++ /* Per iq stats */
++ struct octep_vf_iq_stats stats_iq[OCTEP_VF_MAX_IQ];
++
+ /* Rx queues (OQ: Output Queue) */
+ u16 num_oqs;
+ /* Pointers to Octeon Rx queues */
+ struct octep_vf_oq *oq[OCTEP_VF_MAX_OQ];
+
++ /* Per oq stats */
++ struct octep_vf_oq_stats stats_oq[OCTEP_VF_MAX_OQ];
++
+ /* Hardware port number of the PCIe interface */
+ u16 pcie_port;
+
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c
+index 82821bc28634b6..d70c8be3cfc40b 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.c
+@@ -87,7 +87,7 @@ static int octep_vf_oq_refill(struct octep_vf_device *oct, struct octep_vf_oq *o
+ page = dev_alloc_page();
+ if (unlikely(!page)) {
+ dev_err(oq->dev, "refill: rx buffer alloc failed\n");
+- oq->stats.alloc_failures++;
++ oq->stats->alloc_failures++;
+ break;
+ }
+
+@@ -98,7 +98,7 @@ static int octep_vf_oq_refill(struct octep_vf_device *oct, struct octep_vf_oq *o
+ "OQ-%d buffer refill: DMA mapping error!\n",
+ oq->q_no);
+ put_page(page);
+- oq->stats.alloc_failures++;
++ oq->stats->alloc_failures++;
+ break;
+ }
+ oq->buff_info[refill_idx].page = page;
+@@ -134,6 +134,7 @@ static int octep_vf_setup_oq(struct octep_vf_device *oct, int q_no)
+ oq->netdev = oct->netdev;
+ oq->dev = &oct->pdev->dev;
+ oq->q_no = q_no;
++ oq->stats = &oct->stats_oq[q_no];
+ oq->max_count = CFG_GET_OQ_NUM_DESC(oct->conf);
+ oq->ring_size_mask = oq->max_count - 1;
+ oq->buffer_size = CFG_GET_OQ_BUF_SIZE(oct->conf);
+@@ -458,8 +459,8 @@ static int __octep_vf_oq_process_rx(struct octep_vf_device *oct,
+
+ oq->host_read_idx = read_idx;
+ oq->refill_count += desc_used;
+- oq->stats.packets += pkt;
+- oq->stats.bytes += rx_bytes;
++ oq->stats->packets += pkt;
++ oq->stats->bytes += rx_bytes;
+
+ return pkt;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.h b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.h
+index fe46838b5200ff..9e296b7d7e3494 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.h
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_rx.h
+@@ -187,7 +187,7 @@ struct octep_vf_oq {
+ u8 __iomem *pkts_sent_reg;
+
+ /* Statistics for this OQ. */
+- struct octep_vf_oq_stats stats;
++ struct octep_vf_oq_stats *stats;
+
+ /* Packets pending to be processed */
+ u32 pkts_pending;
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.c b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.c
+index 47a5c054fdb636..8180e5ce3d7efe 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.c
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.c
+@@ -82,9 +82,9 @@ int octep_vf_iq_process_completions(struct octep_vf_iq *iq, u16 budget)
+ }
+
+ iq->pkts_processed += compl_pkts;
+- iq->stats.instr_completed += compl_pkts;
+- iq->stats.bytes_sent += compl_bytes;
+- iq->stats.sgentry_sent += compl_sg;
++ iq->stats->instr_completed += compl_pkts;
++ iq->stats->bytes_sent += compl_bytes;
++ iq->stats->sgentry_sent += compl_sg;
+ iq->flush_index = fi;
+
+ netif_subqueue_completed_wake(iq->netdev, iq->q_no, compl_pkts,
+@@ -186,6 +186,7 @@ static int octep_vf_setup_iq(struct octep_vf_device *oct, int q_no)
+ iq->netdev = oct->netdev;
+ iq->dev = &oct->pdev->dev;
+ iq->q_no = q_no;
++ iq->stats = &oct->stats_iq[q_no];
+ iq->max_count = CFG_GET_IQ_NUM_DESC(oct->conf);
+ iq->ring_size_mask = iq->max_count - 1;
+ iq->fill_threshold = CFG_GET_IQ_DB_MIN(oct->conf);
+diff --git a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.h b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.h
+index f338b975103c30..1cede90e3a5fae 100644
+--- a/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.h
++++ b/drivers/net/ethernet/marvell/octeon_ep_vf/octep_vf_tx.h
+@@ -129,7 +129,7 @@ struct octep_vf_iq {
+ u16 flush_index;
+
+ /* Statistics for this input queue. */
+- struct octep_vf_iq_stats stats;
++ struct octep_vf_iq_stats *stats;
+
+ /* Pointer to the Virtual Base addr of the input ring. */
+ struct octep_vf_tx_desc_hw *desc_ring;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+index b306ae79bf97a6..863196ad0ddc73 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/clock.c
+@@ -322,17 +322,16 @@ static void mlx5_pps_out(struct work_struct *work)
+ }
+ }
+
+-static void mlx5_timestamp_overflow(struct work_struct *work)
++static long mlx5_timestamp_overflow(struct ptp_clock_info *ptp_info)
+ {
+- struct delayed_work *dwork = to_delayed_work(work);
+ struct mlx5_core_dev *mdev;
+ struct mlx5_timer *timer;
+ struct mlx5_clock *clock;
+ unsigned long flags;
+
+- timer = container_of(dwork, struct mlx5_timer, overflow_work);
+- clock = container_of(timer, struct mlx5_clock, timer);
++ clock = container_of(ptp_info, struct mlx5_clock, ptp_info);
+ mdev = container_of(clock, struct mlx5_core_dev, clock);
++ timer = &clock->timer;
+
+ if (mdev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
+ goto out;
+@@ -343,7 +342,7 @@ static void mlx5_timestamp_overflow(struct work_struct *work)
+ write_sequnlock_irqrestore(&clock->lock, flags);
+
+ out:
+- schedule_delayed_work(&timer->overflow_work, timer->overflow_period);
++ return timer->overflow_period;
+ }
+
+ static int mlx5_ptp_settime_real_time(struct mlx5_core_dev *mdev,
+@@ -521,6 +520,7 @@ static int mlx5_ptp_adjfine(struct ptp_clock_info *ptp, long scaled_ppm)
+ timer->cycles.mult = mult;
+ mlx5_update_clock_info_page(mdev);
+ write_sequnlock_irqrestore(&clock->lock, flags);
++ ptp_schedule_worker(clock->ptp, timer->overflow_period);
+
+ return 0;
+ }
+@@ -856,6 +856,7 @@ static const struct ptp_clock_info mlx5_ptp_clock_info = {
+ .settime64 = mlx5_ptp_settime,
+ .enable = NULL,
+ .verify = NULL,
++ .do_aux_work = mlx5_timestamp_overflow,
+ };
+
+ static int mlx5_query_mtpps_pin_mode(struct mlx5_core_dev *mdev, u8 pin,
+@@ -1056,12 +1057,11 @@ static void mlx5_init_overflow_period(struct mlx5_clock *clock)
+ do_div(ns, NSEC_PER_SEC / HZ);
+ timer->overflow_period = ns;
+
+- INIT_DELAYED_WORK(&timer->overflow_work, mlx5_timestamp_overflow);
+- if (timer->overflow_period)
+- schedule_delayed_work(&timer->overflow_work, 0);
+- else
++ if (!timer->overflow_period) {
++ timer->overflow_period = HZ;
+ mlx5_core_warn(mdev,
+- "invalid overflow period, overflow_work is not scheduled\n");
++ "invalid overflow period, overflow_work is scheduled once per second\n");
++ }
+
+ if (clock_info)
+ clock_info->overflow_period = timer->overflow_period;
+@@ -1176,6 +1176,9 @@ void mlx5_init_clock(struct mlx5_core_dev *mdev)
+
+ MLX5_NB_INIT(&clock->pps_nb, mlx5_pps_event, PPS_EVENT);
+ mlx5_eq_notifier_register(mdev, &clock->pps_nb);
++
++ if (clock->ptp)
++ ptp_schedule_worker(clock->ptp, 0);
+ }
+
+ void mlx5_cleanup_clock(struct mlx5_core_dev *mdev)
+@@ -1192,7 +1195,6 @@ void mlx5_cleanup_clock(struct mlx5_core_dev *mdev)
+ }
+
+ cancel_work_sync(&clock->pps_info.out_work);
+- cancel_delayed_work_sync(&clock->timer.overflow_work);
+
+ if (mdev->clock_info) {
+ free_page((unsigned long)mdev->clock_info);
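
The mlx5 hunks above retire the driver-private delayed work in favour of the PTP core's aux worker: a .do_aux_work callback returns the delay in jiffies until its next run, and ptp_schedule_worker() arms it. A minimal sketch of that contract (ptp_clock_info, .do_aux_work and ptp_schedule_worker() are the real kernel PTP API; the callback body is a placeholder):

#include <linux/module.h>
#include <linux/ptp_clock_kernel.h>

/* Periodic overflow check; the return value is the delay (in jiffies)
 * until the PTP core invokes this callback again.
 */
static long my_overflow_check(struct ptp_clock_info *info)
{
	/* ... read the hardware counter to fold in any wraparound ... */

	return HZ;	/* run again in roughly one second */
}

static const struct ptp_clock_info my_ptp_info = {
	.owner = THIS_MODULE,
	.name = "example",
	.do_aux_work = my_overflow_check,
};

/* After ptp_clock_register() succeeds:
 *	ptp_schedule_worker(ptp, 0);	// start the worker immediately
 */
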
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index b13c7e958e6b4e..3c0d067c360992 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2201,8 +2201,6 @@ static void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common)
+ struct device *dev = common->dev;
+ int i;
+
+- devm_remove_action(dev, am65_cpsw_nuss_free_tx_chns, common);
+-
+ common->tx_ch_rate_msk = 0;
+ for (i = 0; i < common->tx_ch_num; i++) {
+ struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
+@@ -2224,8 +2222,6 @@ static int am65_cpsw_nuss_ndev_add_tx_napi(struct am65_cpsw_common *common)
+ for (i = 0; i < common->tx_ch_num; i++) {
+ struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
+
+- netif_napi_add_tx(common->dma_ndev, &tx_chn->napi_tx,
+- am65_cpsw_nuss_tx_poll);
+ hrtimer_init(&tx_chn->tx_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
+ tx_chn->tx_hrtimer.function = &am65_cpsw_nuss_tx_timer_callback;
+
+@@ -2238,9 +2234,21 @@ static int am65_cpsw_nuss_ndev_add_tx_napi(struct am65_cpsw_common *common)
+ tx_chn->id, tx_chn->irq, ret);
+ goto err;
+ }
++
++ netif_napi_add_tx(common->dma_ndev, &tx_chn->napi_tx,
++ am65_cpsw_nuss_tx_poll);
+ }
+
++ return 0;
++
+ err:
++ for (--i ; i >= 0 ; i--) {
++ struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
++
++ netif_napi_del(&tx_chn->napi_tx);
++ devm_free_irq(dev, tx_chn->irq, tx_chn);
++ }
++
+ return ret;
+ }
+
+@@ -2321,12 +2329,10 @@ static int am65_cpsw_nuss_init_tx_chns(struct am65_cpsw_common *common)
+ goto err;
+ }
+
++ return 0;
++
+ err:
+- i = devm_add_action(dev, am65_cpsw_nuss_free_tx_chns, common);
+- if (i) {
+- dev_err(dev, "Failed to add free_tx_chns action %d\n", i);
+- return i;
+- }
++ am65_cpsw_nuss_free_tx_chns(common);
+
+ return ret;
+ }
+@@ -2354,7 +2360,6 @@ static void am65_cpsw_nuss_remove_rx_chns(struct am65_cpsw_common *common)
+
+ rx_chn = &common->rx_chns;
+ flows = rx_chn->flows;
+- devm_remove_action(dev, am65_cpsw_nuss_free_rx_chns, common);
+
+ for (i = 0; i < common->rx_ch_num_flows; i++) {
+ if (!(flows[i].irq < 0))
+@@ -2453,7 +2458,7 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
+ i, &rx_flow_cfg);
+ if (ret) {
+ dev_err(dev, "Failed to init rx flow%d %d\n", i, ret);
+- goto err;
++ goto err_flow;
+ }
+ if (!i)
+ fdqring_id =
+@@ -2465,14 +2470,12 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
+ dev_err(dev, "Failed to get rx dma irq %d\n",
+ flow->irq);
+ ret = flow->irq;
+- goto err;
++ goto err_flow;
+ }
+
+ snprintf(flow->name,
+ sizeof(flow->name), "%s-rx%d",
+ dev_name(dev), i);
+- netif_napi_add(common->dma_ndev, &flow->napi_rx,
+- am65_cpsw_nuss_rx_poll);
+ hrtimer_init(&flow->rx_hrtimer, CLOCK_MONOTONIC,
+ HRTIMER_MODE_REL_PINNED);
+ flow->rx_hrtimer.function = &am65_cpsw_nuss_rx_timer_callback;
+@@ -2485,20 +2488,28 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
+ dev_err(dev, "failure requesting rx %d irq %u, %d\n",
+ i, flow->irq, ret);
+ flow->irq = -EINVAL;
+- goto err;
++ goto err_flow;
+ }
++
++ netif_napi_add(common->dma_ndev, &flow->napi_rx,
++ am65_cpsw_nuss_rx_poll);
+ }
+
+ /* setup classifier to route priorities to flows */
+ cpsw_ale_classifier_setup_default(common->ale, common->rx_ch_num_flows);
+
+-err:
+- i = devm_add_action(dev, am65_cpsw_nuss_free_rx_chns, common);
+- if (i) {
+- dev_err(dev, "Failed to add free_rx_chns action %d\n", i);
+- return i;
++ return 0;
++
++err_flow:
++ for (--i; i >= 0 ; i--) {
++ flow = &rx_chn->flows[i];
++ netif_napi_del(&flow->napi_rx);
++ devm_free_irq(dev, flow->irq, flow);
+ }
+
++err:
++ am65_cpsw_nuss_free_rx_chns(common);
++
+ return ret;
+ }
+
+@@ -3324,7 +3335,7 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
+ return ret;
+ ret = am65_cpsw_nuss_init_rx_chns(common);
+ if (ret)
+- return ret;
++ goto err_remove_tx;
+
+ /* The DMA Channels are not guaranteed to be in a clean state.
+ * Reset and disable them to ensure that they are back to the
+@@ -3345,7 +3356,7 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
+
+ ret = am65_cpsw_nuss_register_devlink(common);
+ if (ret)
+- return ret;
++ goto err_remove_rx;
+
+ for (i = 0; i < common->port_num; i++) {
+ port = &common->ports[i];
+@@ -3376,6 +3387,10 @@ static int am65_cpsw_nuss_register_ndevs(struct am65_cpsw_common *common)
+ err_cleanup_ndev:
+ am65_cpsw_nuss_cleanup_ndev(common);
+ am65_cpsw_unregister_devlink(common);
++err_remove_rx:
++ am65_cpsw_nuss_remove_rx_chns(common);
++err_remove_tx:
++ am65_cpsw_nuss_remove_tx_chns(common);
+
+ return ret;
+ }
+@@ -3395,6 +3410,8 @@ int am65_cpsw_nuss_update_tx_rx_chns(struct am65_cpsw_common *common,
+ return ret;
+
+ ret = am65_cpsw_nuss_init_rx_chns(common);
++ if (ret)
++ am65_cpsw_nuss_remove_tx_chns(common);
+
+ return ret;
+ }
+@@ -3652,6 +3669,8 @@ static void am65_cpsw_nuss_remove(struct platform_device *pdev)
+ */
+ am65_cpsw_nuss_cleanup_ndev(common);
+ am65_cpsw_unregister_devlink(common);
++ am65_cpsw_nuss_remove_rx_chns(common);
++ am65_cpsw_nuss_remove_tx_chns(common);
+ am65_cpsw_nuss_phylink_cleanup(common);
+ am65_cpts_release(common->cpts);
+ am65_cpsw_disable_serdes_phy(common);
+@@ -3713,8 +3732,10 @@ static int am65_cpsw_nuss_resume(struct device *dev)
+ if (ret)
+ return ret;
+ ret = am65_cpsw_nuss_init_rx_chns(common);
+- if (ret)
++ if (ret) {
++ am65_cpsw_nuss_remove_tx_chns(common);
+ return ret;
++ }
+
+ /* If RX IRQ was disabled before suspend, keep it disabled */
+ for (i = 0; i < common->rx_ch_num_flows; i++) {
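
The am65-cpsw changes above replace the devm cleanup actions with explicit goto-based unwinding: each init step that fails jumps to a label that tears down everything set up so far, in reverse order. The canonical shape of the pattern, reduced to stubs:

static int setup_tx(void)	{ return 0; }	/* stubs for illustration */
static void remove_tx(void)	{ }
static int setup_rx(void)	{ return 0; }
static void remove_rx(void)	{ }
static int register_dev(void)	{ return 0; }

static int setup_all(void)
{
	int ret;

	ret = setup_tx();
	if (ret)
		return ret;		/* nothing to unwind yet */

	ret = setup_rx();
	if (ret)
		goto err_remove_tx;

	ret = register_dev();
	if (ret)
		goto err_remove_rx;

	return 0;

err_remove_rx:				/* unwind in reverse order */
	remove_rx();
err_remove_tx:
	remove_tx();
	return ret;
}
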
+diff --git a/drivers/net/phy/nxp-c45-tja11xx.c b/drivers/net/phy/nxp-c45-tja11xx.c
+index 5af5ade4fc6418..ae43103c76cbd8 100644
+--- a/drivers/net/phy/nxp-c45-tja11xx.c
++++ b/drivers/net/phy/nxp-c45-tja11xx.c
+@@ -1296,6 +1296,8 @@ static int nxp_c45_soft_reset(struct phy_device *phydev)
+ if (ret)
+ return ret;
+
++ usleep_range(2000, 2050);
++
+ return phy_read_mmd_poll_timeout(phydev, MDIO_MMD_VEND1,
+ VEND1_DEVICE_CONTROL, ret,
+ !(ret & DEVICE_CONTROL_RESET), 20000,
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 6fc60950100c7c..fae1a0ab36bdfe 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -580,7 +580,7 @@ static inline bool tun_not_capable(struct tun_struct *tun)
+ struct net *net = dev_net(tun->dev);
+
+ return ((uid_valid(tun->owner) && !uid_eq(cred->euid, tun->owner)) ||
+- (gid_valid(tun->group) && !in_egroup_p(tun->group))) &&
++ (gid_valid(tun->group) && !in_egroup_p(tun->group))) &&
+ !ns_capable(net->user_ns, CAP_NET_ADMIN);
+ }
+
+diff --git a/drivers/net/usb/ipheth.c b/drivers/net/usb/ipheth.c
+index 46afb95ffabe3b..a19789b571905a 100644
+--- a/drivers/net/usb/ipheth.c
++++ b/drivers/net/usb/ipheth.c
+@@ -61,7 +61,18 @@
+ #define IPHETH_USBINTF_PROTO 1
+
+ #define IPHETH_IP_ALIGN 2 /* padding at front of URB */
+-#define IPHETH_NCM_HEADER_SIZE (12 + 96) /* NCMH + NCM0 */
++/* On iOS devices, NCM headers in RX have a fixed size regardless of DPE count:
++ * - NTH16 (NCMH): 12 bytes, as per CDC NCM 1.0 spec
++ * - NDP16 (NCM0): 96 bytes, of which
++ * - NDP16 fixed header: 8 bytes
++ * - maximum of 22 DPEs (21 datagrams + trailer), 4 bytes each
++ */
++#define IPHETH_NDP16_MAX_DPE 22
++#define IPHETH_NDP16_HEADER_SIZE (sizeof(struct usb_cdc_ncm_ndp16) + \
++ IPHETH_NDP16_MAX_DPE * \
++ sizeof(struct usb_cdc_ncm_dpe16))
++#define IPHETH_NCM_HEADER_SIZE (sizeof(struct usb_cdc_ncm_nth16) + \
++ IPHETH_NDP16_HEADER_SIZE)
+ #define IPHETH_TX_BUF_SIZE ETH_FRAME_LEN
+ #define IPHETH_RX_BUF_SIZE_LEGACY (IPHETH_IP_ALIGN + ETH_FRAME_LEN)
+ #define IPHETH_RX_BUF_SIZE_NCM 65536
+@@ -207,15 +218,23 @@ static int ipheth_rcvbulk_callback_legacy(struct urb *urb)
+ return ipheth_consume_skb(buf, len, dev);
+ }
+
++/* In "NCM mode", the iOS device encapsulates RX (phone->computer) traffic
++ * in NCM Transfer Blocks (similarly to CDC NCM). However, unlike reverse
++ * tethering (handled by the `cdc_ncm` driver), regular tethering is not
++ * compliant with the CDC NCM spec, as the device is missing the necessary
++ * descriptors, and TX (computer->phone) traffic is not encapsulated
++ * at all. Thus `ipheth` implements a very limited subset of the spec with
++ * the sole purpose of parsing RX URBs.
++ */
+ static int ipheth_rcvbulk_callback_ncm(struct urb *urb)
+ {
+ struct usb_cdc_ncm_nth16 *ncmh;
+ struct usb_cdc_ncm_ndp16 *ncm0;
+ struct usb_cdc_ncm_dpe16 *dpe;
+ struct ipheth_device *dev;
++ u16 dg_idx, dg_len;
+ int retval = -EINVAL;
+ char *buf;
+- int len;
+
+ dev = urb->context;
+
+@@ -226,40 +245,42 @@ static int ipheth_rcvbulk_callback_ncm(struct urb *urb)
+
+ ncmh = urb->transfer_buffer;
+ if (ncmh->dwSignature != cpu_to_le32(USB_CDC_NCM_NTH16_SIGN) ||
+- le16_to_cpu(ncmh->wNdpIndex) >= urb->actual_length) {
+- dev->net->stats.rx_errors++;
+- return retval;
+- }
++ /* On iOS, NDP16 directly follows NTH16 */
++ ncmh->wNdpIndex != cpu_to_le16(sizeof(struct usb_cdc_ncm_nth16)))
++ goto rx_error;
+
+- ncm0 = urb->transfer_buffer + le16_to_cpu(ncmh->wNdpIndex);
+- if (ncm0->dwSignature != cpu_to_le32(USB_CDC_NCM_NDP16_NOCRC_SIGN) ||
+- le16_to_cpu(ncmh->wHeaderLength) + le16_to_cpu(ncm0->wLength) >=
+- urb->actual_length) {
+- dev->net->stats.rx_errors++;
+- return retval;
+- }
++ ncm0 = urb->transfer_buffer + sizeof(struct usb_cdc_ncm_nth16);
++ if (ncm0->dwSignature != cpu_to_le32(USB_CDC_NCM_NDP16_NOCRC_SIGN))
++ goto rx_error;
+
+ dpe = ncm0->dpe16;
+- while (le16_to_cpu(dpe->wDatagramIndex) != 0 &&
+- le16_to_cpu(dpe->wDatagramLength) != 0) {
+- if (le16_to_cpu(dpe->wDatagramIndex) >= urb->actual_length ||
+- le16_to_cpu(dpe->wDatagramIndex) +
+- le16_to_cpu(dpe->wDatagramLength) > urb->actual_length) {
++ for (int dpe_i = 0; dpe_i < IPHETH_NDP16_MAX_DPE; ++dpe_i, ++dpe) {
++ dg_idx = le16_to_cpu(dpe->wDatagramIndex);
++ dg_len = le16_to_cpu(dpe->wDatagramLength);
++
++ /* Null DPE must be present after last datagram pointer entry
++ * (3.3.1 USB CDC NCM spec v1.0)
++ */
++ if (dg_idx == 0 && dg_len == 0)
++ return 0;
++
++ if (dg_idx < IPHETH_NCM_HEADER_SIZE ||
++ dg_idx >= urb->actual_length ||
++ dg_len > urb->actual_length - dg_idx) {
+ dev->net->stats.rx_length_errors++;
+ return retval;
+ }
+
+- buf = urb->transfer_buffer + le16_to_cpu(dpe->wDatagramIndex);
+- len = le16_to_cpu(dpe->wDatagramLength);
++ buf = urb->transfer_buffer + dg_idx;
+
+- retval = ipheth_consume_skb(buf, len, dev);
++ retval = ipheth_consume_skb(buf, dg_len, dev);
+ if (retval != 0)
+ return retval;
+-
+- dpe++;
+ }
+
+- return 0;
++rx_error:
++ dev->net->stats.rx_errors++;
++ return retval;
+ }
+
+ static void ipheth_rcvbulk_callback(struct urb *urb)
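
A quick way to sanity-check the IPHETH_NCM_HEADER_SIZE arithmetic above: NTH16 is 12 bytes, NDP16 is an 8-byte fixed header followed by 4-byte datagram pointer entries, and with the 22-entry maximum (21 datagrams plus the mandatory null trailer) the fixed RX header comes to 108 bytes. A standalone sketch using packed mirrors of the CDC NCM 1.0 structures (local stand-ins, not the <linux/usb/cdc.h> definitions):

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    struct nth16 {                      /* usb_cdc_ncm_nth16 equivalent */
        uint32_t dwSignature;
        uint16_t wHeaderLength, wSequence, wBlockLength, wNdpIndex;
    } __attribute__((packed));

    struct dpe16 {                      /* usb_cdc_ncm_dpe16 equivalent */
        uint16_t wDatagramIndex, wDatagramLength;
    } __attribute__((packed));

    struct ndp16 {                      /* usb_cdc_ncm_ndp16 equivalent */
        uint32_t dwSignature;
        uint16_t wLength, wNextNdpIndex;
        struct dpe16 dpe16[];           /* flexible DPE array */
    } __attribute__((packed));

    #define MAX_DPE 22                  /* 21 datagrams + null trailer */

    int main(void)
    {
        size_t ndp = sizeof(struct ndp16) + MAX_DPE * sizeof(struct dpe16);

        assert(sizeof(struct nth16) == 12 && ndp == 96);
        printf("fixed NCM RX header: %zu bytes\n", sizeof(struct nth16) + ndp); /* 108 */
        return 0;
    }
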
+diff --git a/drivers/net/vmxnet3/vmxnet3_xdp.c b/drivers/net/vmxnet3/vmxnet3_xdp.c
+index 1341374a4588a0..616ecc38d1726c 100644
+--- a/drivers/net/vmxnet3/vmxnet3_xdp.c
++++ b/drivers/net/vmxnet3/vmxnet3_xdp.c
+@@ -28,7 +28,7 @@ vmxnet3_xdp_get_tq(struct vmxnet3_adapter *adapter)
+ if (likely(cpu < tq_number))
+ tq = &adapter->tx_queue[cpu];
+ else
+- tq = &adapter->tx_queue[reciprocal_scale(cpu, tq_number)];
++ tq = &adapter->tx_queue[cpu % tq_number];
+
+ return tq;
+ }
+@@ -124,6 +124,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
+ u32 buf_size;
+ u32 dw2;
+
++ spin_lock_irq(&tq->tx_lock);
+ dw2 = (tq->tx_ring.gen ^ 0x1) << VMXNET3_TXD_GEN_SHIFT;
+ dw2 |= xdpf->len;
+ ctx.sop_txd = tq->tx_ring.base + tq->tx_ring.next2fill;
+@@ -134,6 +135,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
+
+ if (vmxnet3_cmd_ring_desc_avail(&tq->tx_ring) == 0) {
+ tq->stats.tx_ring_full++;
++ spin_unlock_irq(&tq->tx_lock);
+ return -ENOSPC;
+ }
+
+@@ -142,8 +144,10 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
+ tbi->dma_addr = dma_map_single(&adapter->pdev->dev,
+ xdpf->data, buf_size,
+ DMA_TO_DEVICE);
+- if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr))
++ if (dma_mapping_error(&adapter->pdev->dev, tbi->dma_addr)) {
++ spin_unlock_irq(&tq->tx_lock);
+ return -EFAULT;
++ }
+ tbi->map_type |= VMXNET3_MAP_SINGLE;
+ } else { /* XDP buffer from page pool */
+ page = virt_to_page(xdpf->data);
+@@ -182,6 +186,7 @@ vmxnet3_xdp_xmit_frame(struct vmxnet3_adapter *adapter,
+ dma_wmb();
+ gdesc->dword[2] = cpu_to_le32(le32_to_cpu(gdesc->dword[2]) ^
+ VMXNET3_TXD_GEN);
++ spin_unlock_irq(&tq->tx_lock);
+
+ /* No need to handle the case when tx_num_deferred doesn't reach
+ * threshold. Backend driver at hypervisor side will poll and reset
+@@ -225,6 +230,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
+ {
+ struct vmxnet3_adapter *adapter = netdev_priv(dev);
+ struct vmxnet3_tx_queue *tq;
++ struct netdev_queue *nq;
+ int i;
+
+ if (unlikely(test_bit(VMXNET3_STATE_BIT_QUIESCED, &adapter->state)))
+@@ -236,6 +242,9 @@ vmxnet3_xdp_xmit(struct net_device *dev,
+ if (tq->stopped)
+ return -ENETDOWN;
+
++ nq = netdev_get_tx_queue(adapter->netdev, tq->qid);
++
++ __netif_tx_lock(nq, smp_processor_id());
+ for (i = 0; i < n; i++) {
+ if (vmxnet3_xdp_xmit_frame(adapter, frames[i], tq, true)) {
+ tq->stats.xdp_xmit_err++;
+@@ -243,6 +252,7 @@ vmxnet3_xdp_xmit(struct net_device *dev,
+ }
+ }
+ tq->stats.xdp_xmit += i;
++ __netif_tx_unlock(nq);
+
+ return i;
+ }
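
The queue-selection change above is subtler than it looks: reciprocal_scale(val, n) computes ((u64)val * n) >> 32, which maps hash-sized inputs uniformly onto [0, n) but sends small integers such as CPU ids straight to bucket 0. A minimal demonstration (the helper is reimplemented locally; in the kernel it lives in <linux/kernel.h>):

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t reciprocal_scale(uint32_t val, uint32_t ep_ro)
    {
        return (uint32_t)(((uint64_t)val * ep_ro) >> 32);
    }

    int main(void)
    {
        const uint32_t tq_number = 8;                 /* e.g. 8 tx queues */
        const uint32_t cpus[] = { 9, 33, 127, 1023 };

        for (unsigned int i = 0; i < sizeof(cpus) / sizeof(cpus[0]); i++)
            printf("cpu=%4u  reciprocal_scale=%u  cpu%%n=%u\n",
                   cpus[i], reciprocal_scale(cpus[i], tq_number),
                   cpus[i] % tq_number);
        return 0;
    }

Every reciprocal_scale() result here is 0, so all overflow CPUs would have piled onto tx queue 0; the plain modulo spreads them across the ring as intended.
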
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index da72fd2d541ff7..20ab9b1eea2836 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -540,6 +540,11 @@ void brcmf_txfinalize(struct brcmf_if *ifp, struct sk_buff *txp, bool success)
+ struct ethhdr *eh;
+ u16 type;
+
++ if (!ifp) {
++ brcmu_pkt_buf_free_skb(txp);
++ return;
++ }
++
+ eh = (struct ethhdr *)(txp->data);
+ type = ntohs(eh->h_proto);
+
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
+index af930e34c21f8a..22c064848124d8 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/of.c
+@@ -97,13 +97,13 @@ void brcmf_of_probe(struct device *dev, enum brcmf_bus_type bus_type,
+ /* Set board-type to the first string of the machine compatible prop */
+ root = of_find_node_by_path("/");
+ if (root && err) {
+- char *board_type;
++ char *board_type = NULL;
+ const char *tmp;
+
+- of_property_read_string_index(root, "compatible", 0, &tmp);
+-
+ /* get rid of '/' in the compatible string to be able to find the FW */
+- board_type = devm_kstrdup(dev, tmp, GFP_KERNEL);
++ if (!of_property_read_string_index(root, "compatible", 0, &tmp))
++ board_type = devm_kstrdup(dev, tmp, GFP_KERNEL);
++
+ if (!board_type) {
+ of_node_put(root);
+ return;
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c
+index d69879e1bd870c..d362c4337616b4 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmsmac/phy/phy_n.c
+@@ -23423,6 +23423,9 @@ wlc_phy_iqcal_gainparams_nphy(struct brcms_phy *pi, u16 core_no,
+ break;
+ }
+
++ if (WARN_ON(k == NPHY_IQCAL_NUMGAINS))
++ return;
++
+ params->txgm = tbl_iqcal_gainparams_nphy[band_idx][k][1];
+ params->pga = tbl_iqcal_gainparams_nphy[band_idx][k][2];
+ params->pad = tbl_iqcal_gainparams_nphy[band_idx][k][3];
+diff --git a/drivers/net/wireless/intel/iwlwifi/Makefile b/drivers/net/wireless/intel/iwlwifi/Makefile
+index 64c1233142451a..a3052684b341f2 100644
+--- a/drivers/net/wireless/intel/iwlwifi/Makefile
++++ b/drivers/net/wireless/intel/iwlwifi/Makefile
+@@ -11,7 +11,7 @@ iwlwifi-objs += pcie/ctxt-info.o pcie/ctxt-info-gen3.o
+ iwlwifi-objs += pcie/trans-gen2.o pcie/tx-gen2.o
+ iwlwifi-$(CONFIG_IWLDVM) += cfg/1000.o cfg/2000.o cfg/5000.o cfg/6000.o
+ iwlwifi-$(CONFIG_IWLMVM) += cfg/7000.o cfg/8000.o cfg/9000.o cfg/22000.o
+-iwlwifi-$(CONFIG_IWLMVM) += cfg/ax210.o cfg/bz.o cfg/sc.o
++iwlwifi-$(CONFIG_IWLMVM) += cfg/ax210.o cfg/bz.o cfg/sc.o cfg/dr.o
+ iwlwifi-objs += iwl-dbg-tlv.o
+ iwlwifi-objs += iwl-trans.o
+
+diff --git a/drivers/net/wireless/intel/iwlwifi/cfg/dr.c b/drivers/net/wireless/intel/iwlwifi/cfg/dr.c
+new file mode 100644
+index 00000000000000..ab7c0f8d54f425
+--- /dev/null
++++ b/drivers/net/wireless/intel/iwlwifi/cfg/dr.c
+@@ -0,0 +1,167 @@
++// SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
++/*
++ * Copyright (C) 2024 Intel Corporation
++ */
++#include <linux/module.h>
++#include <linux/stringify.h>
++#include "iwl-config.h"
++#include "iwl-prph.h"
++#include "fw/api/txq.h"
++
++/* Highest firmware API version supported */
++#define IWL_DR_UCODE_API_MAX 96
++
++/* Lowest firmware API version supported */
++#define IWL_DR_UCODE_API_MIN 96
++
++/* NVM versions */
++#define IWL_DR_NVM_VERSION 0x0a1d
++
++/* Memory offsets and lengths */
++#define IWL_DR_DCCM_OFFSET 0x800000 /* LMAC1 */
++#define IWL_DR_DCCM_LEN 0x10000 /* LMAC1 */
++#define IWL_DR_DCCM2_OFFSET 0x880000
++#define IWL_DR_DCCM2_LEN 0x8000
++#define IWL_DR_SMEM_OFFSET 0x400000
++#define IWL_DR_SMEM_LEN 0xD0000
++
++#define IWL_DR_A_PE_A_FW_PRE "iwlwifi-dr-a0-pe-a0"
++#define IWL_BR_A_PET_A_FW_PRE "iwlwifi-br-a0-petc-a0"
++#define IWL_BR_A_PE_A_FW_PRE "iwlwifi-br-a0-pe-a0"
++
++#define IWL_DR_A_PE_A_FW_MODULE_FIRMWARE(api) \
++ IWL_DR_A_PE_A_FW_PRE "-" __stringify(api) ".ucode"
++#define IWL_BR_A_PET_A_FW_MODULE_FIRMWARE(api) \
++ IWL_BR_A_PET_A_FW_PRE "-" __stringify(api) ".ucode"
++#define IWL_BR_A_PE_A_FW_MODULE_FIRMWARE(api) \
++ IWL_BR_A_PE_A_FW_PRE "-" __stringify(api) ".ucode"
++
++static const struct iwl_base_params iwl_dr_base_params = {
++ .eeprom_size = OTP_LOW_IMAGE_SIZE_32K,
++ .num_of_queues = 512,
++ .max_tfd_queue_size = 65536,
++ .shadow_ram_support = true,
++ .led_compensation = 57,
++ .wd_timeout = IWL_LONG_WD_TIMEOUT,
++ .max_event_log_size = 512,
++ .shadow_reg_enable = true,
++ .pcie_l1_allowed = true,
++};
++
++#define IWL_DEVICE_DR_COMMON \
++ .ucode_api_max = IWL_DR_UCODE_API_MAX, \
++ .ucode_api_min = IWL_DR_UCODE_API_MIN, \
++ .led_mode = IWL_LED_RF_STATE, \
++ .nvm_hw_section_num = 10, \
++ .non_shared_ant = ANT_B, \
++ .dccm_offset = IWL_DR_DCCM_OFFSET, \
++ .dccm_len = IWL_DR_DCCM_LEN, \
++ .dccm2_offset = IWL_DR_DCCM2_OFFSET, \
++ .dccm2_len = IWL_DR_DCCM2_LEN, \
++ .smem_offset = IWL_DR_SMEM_OFFSET, \
++ .smem_len = IWL_DR_SMEM_LEN, \
++ .apmg_not_supported = true, \
++ .trans.mq_rx_supported = true, \
++ .vht_mu_mimo_supported = true, \
++ .mac_addr_from_csr = 0x30, \
++ .nvm_ver = IWL_DR_NVM_VERSION, \
++ .trans.rf_id = true, \
++ .trans.gen2 = true, \
++ .nvm_type = IWL_NVM_EXT, \
++ .dbgc_supported = true, \
++ .min_umac_error_event_table = 0xD0000, \
++ .d3_debug_data_base_addr = 0x401000, \
++ .d3_debug_data_length = 60 * 1024, \
++ .mon_smem_regs = { \
++ .write_ptr = { \
++ .addr = LDBG_M2S_BUF_WPTR, \
++ .mask = LDBG_M2S_BUF_WPTR_VAL_MSK, \
++ }, \
++ .cycle_cnt = { \
++ .addr = LDBG_M2S_BUF_WRAP_CNT, \
++ .mask = LDBG_M2S_BUF_WRAP_CNT_VAL_MSK, \
++ }, \
++ }, \
++ .trans.umac_prph_offset = 0x300000, \
++ .trans.device_family = IWL_DEVICE_FAMILY_DR, \
++ .trans.base_params = &iwl_dr_base_params, \
++ .min_txq_size = 128, \
++ .gp2_reg_addr = 0xd02c68, \
++ .min_ba_txq_size = IWL_DEFAULT_QUEUE_SIZE_EHT, \
++ .mon_dram_regs = { \
++ .write_ptr = { \
++ .addr = DBGC_CUR_DBGBUF_STATUS, \
++ .mask = DBGC_CUR_DBGBUF_STATUS_OFFSET_MSK, \
++ }, \
++ .cycle_cnt = { \
++ .addr = DBGC_DBGBUF_WRAP_AROUND, \
++ .mask = 0xffffffff, \
++ }, \
++ .cur_frag = { \
++ .addr = DBGC_CUR_DBGBUF_STATUS, \
++ .mask = DBGC_CUR_DBGBUF_STATUS_IDX_MSK, \
++ }, \
++ }, \
++ .mon_dbgi_regs = { \
++ .write_ptr = { \
++ .addr = DBGI_SRAM_FIFO_POINTERS, \
++ .mask = DBGI_SRAM_FIFO_POINTERS_WR_PTR_MSK, \
++ }, \
++ }
++
++#define IWL_DEVICE_DR \
++ IWL_DEVICE_DR_COMMON, \
++ .uhb_supported = true, \
++ .features = IWL_TX_CSUM_NETIF_FLAGS | NETIF_F_RXCSUM, \
++ .num_rbds = IWL_NUM_RBDS_DR_EHT, \
++ .ht_params = &iwl_22000_ht_params
++
++/*
++ * This size was picked according to 8 MSDUs inside 512 A-MSDUs in an
++ * A-MPDU, with additional overhead to account for processing time.
++ */
++#define IWL_NUM_RBDS_DR_EHT (512 * 16)
++
++const struct iwl_cfg_trans_params iwl_dr_trans_cfg = {
++ .device_family = IWL_DEVICE_FAMILY_DR,
++ .base_params = &iwl_dr_base_params,
++ .mq_rx_supported = true,
++ .rf_id = true,
++ .gen2 = true,
++ .integrated = true,
++ .umac_prph_offset = 0x300000,
++ .xtal_latency = 12000,
++ .low_latency_xtal = true,
++ .ltr_delay = IWL_CFG_TRANS_LTR_DELAY_2500US,
++};
++
++const char iwl_dr_name[] = "Intel(R) TBD Dr device";
++
++const struct iwl_cfg iwl_cfg_dr = {
++ .fw_name_mac = "dr",
++ IWL_DEVICE_DR,
++};
++
++const struct iwl_cfg_trans_params iwl_br_trans_cfg = {
++ .device_family = IWL_DEVICE_FAMILY_DR,
++ .base_params = &iwl_dr_base_params,
++ .mq_rx_supported = true,
++ .rf_id = true,
++ .gen2 = true,
++ .integrated = true,
++ .umac_prph_offset = 0x300000,
++ .xtal_latency = 12000,
++ .low_latency_xtal = true,
++ .ltr_delay = IWL_CFG_TRANS_LTR_DELAY_2500US,
++};
++
++const char iwl_br_name[] = "Intel(R) TBD Br device";
++
++const struct iwl_cfg iwl_cfg_br = {
++ .fw_name_mac = "br",
++ IWL_DEVICE_DR,
++};
++
++MODULE_FIRMWARE(IWL_DR_A_PE_A_FW_MODULE_FIRMWARE(IWL_DR_UCODE_API_MAX));
++MODULE_FIRMWARE(IWL_BR_A_PET_A_FW_MODULE_FIRMWARE(IWL_DR_UCODE_API_MAX));
++MODULE_FIRMWARE(IWL_BR_A_PE_A_FW_MODULE_FIRMWARE(IWL_DR_UCODE_API_MAX));
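
The MODULE_FIRMWARE() lines rely on two-level stringification so the api argument is macro-expanded before '#' is applied; IWL_DR_A_PE_A_FW_MODULE_FIRMWARE(IWL_DR_UCODE_API_MAX) therefore becomes the literal "iwlwifi-dr-a0-pe-a0-96.ucode". A userspace sketch of the same trick used by <linux/stringify.h>:

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    /* Two levels so macro arguments are expanded before '#' is applied. */
    #define __stringify_1(x...) #x
    #define __stringify(x...)   __stringify_1(x)

    #define IWL_DR_A_PE_A_FW_PRE "iwlwifi-dr-a0-pe-a0"
    #define IWL_DR_A_PE_A_FW_MODULE_FIRMWARE(api) \
        IWL_DR_A_PE_A_FW_PRE "-" __stringify(api) ".ucode"

    int main(void)
    {
        const char *fw = IWL_DR_A_PE_A_FW_MODULE_FIRMWARE(96);

        assert(strcmp(fw, "iwlwifi-dr-a0-pe-a0-96.ucode") == 0);
        puts(fw);
        return 0;
    }
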
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index 0bc32291815e1b..a26c5573d20916 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -108,7 +108,7 @@ static int iwl_acpi_get_dsm_integer(struct device *dev, int rev, int func,
+ size_t expected_size)
+ {
+ union acpi_object *obj;
+- int ret = 0;
++ int ret;
+
+ obj = iwl_acpi_get_dsm_object(dev, rev, func, NULL, guid);
+ if (IS_ERR(obj)) {
+@@ -123,8 +123,10 @@ static int iwl_acpi_get_dsm_integer(struct device *dev, int rev, int func,
+ } else if (obj->type == ACPI_TYPE_BUFFER) {
+ __le64 le_value = 0;
+
+- if (WARN_ON_ONCE(expected_size > sizeof(le_value)))
+- return -EINVAL;
++ if (WARN_ON_ONCE(expected_size > sizeof(le_value))) {
++ ret = -EINVAL;
++ goto out;
++ }
+
+ /* if the buffer size doesn't match the expected size */
+ if (obj->buffer.length != expected_size)
+@@ -145,8 +147,9 @@ static int iwl_acpi_get_dsm_integer(struct device *dev, int rev, int func,
+ }
+
+ IWL_DEBUG_DEV_RADIO(dev,
+- "ACPI: DSM method evaluated: func=%d, ret=%d\n",
+- func, ret);
++ "ACPI: DSM method evaluated: func=%d, value=%lld\n",
++ func, *value);
++ ret = 0;
+ out:
+ ACPI_FREE(obj);
+ return ret;
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-config.h b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+index 17721bb47e2511..89744dbedb4a5a 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-config.h
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-config.h
+@@ -38,6 +38,7 @@ enum iwl_device_family {
+ IWL_DEVICE_FAMILY_AX210,
+ IWL_DEVICE_FAMILY_BZ,
+ IWL_DEVICE_FAMILY_SC,
++ IWL_DEVICE_FAMILY_DR,
+ };
+
+ /*
+@@ -424,6 +425,8 @@ struct iwl_cfg {
+ #define IWL_CFG_MAC_TYPE_SC2 0x49
+ #define IWL_CFG_MAC_TYPE_SC2F 0x4A
+ #define IWL_CFG_MAC_TYPE_BZ_W 0x4B
++#define IWL_CFG_MAC_TYPE_BR 0x4C
++#define IWL_CFG_MAC_TYPE_DR 0x4D
+
+ #define IWL_CFG_RF_TYPE_TH 0x105
+ #define IWL_CFG_RF_TYPE_TH1 0x108
+@@ -434,6 +437,7 @@ struct iwl_cfg {
+ #define IWL_CFG_RF_TYPE_GF 0x10D
+ #define IWL_CFG_RF_TYPE_FM 0x112
+ #define IWL_CFG_RF_TYPE_WH 0x113
++#define IWL_CFG_RF_TYPE_PE 0x114
+
+ #define IWL_CFG_RF_ID_TH 0x1
+ #define IWL_CFG_RF_ID_TH1 0x1
+@@ -506,6 +510,8 @@ extern const struct iwl_cfg_trans_params iwl_ma_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_bz_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_gl_trans_cfg;
+ extern const struct iwl_cfg_trans_params iwl_sc_trans_cfg;
++extern const struct iwl_cfg_trans_params iwl_dr_trans_cfg;
++extern const struct iwl_cfg_trans_params iwl_br_trans_cfg;
+ extern const char iwl9162_name[];
+ extern const char iwl9260_name[];
+ extern const char iwl9260_1_name[];
+@@ -551,6 +557,8 @@ extern const char iwl_mtp_name[];
+ extern const char iwl_sc_name[];
+ extern const char iwl_sc2_name[];
+ extern const char iwl_sc2f_name[];
++extern const char iwl_dr_name[];
++extern const char iwl_br_name[];
+ #if IS_ENABLED(CONFIG_IWLDVM)
+ extern const struct iwl_cfg iwl5300_agn_cfg;
+ extern const struct iwl_cfg iwl5100_agn_cfg;
+@@ -658,6 +666,8 @@ extern const struct iwl_cfg iwl_cfg_gl;
+ extern const struct iwl_cfg iwl_cfg_sc;
+ extern const struct iwl_cfg iwl_cfg_sc2;
+ extern const struct iwl_cfg iwl_cfg_sc2f;
++extern const struct iwl_cfg iwl_cfg_dr;
++extern const struct iwl_cfg iwl_cfg_br;
+ #endif /* CONFIG_IWLMVM */
+
+ #endif /* __IWL_CONFIG_H__ */
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+index 8fb2aa28224212..9dd0e0a51ce5cc 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/drv.c
+@@ -540,6 +540,9 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct pci_device_id iwl_hw_card_ids[] = {
+ {IWL_PCI_DEVICE(0xE340, PCI_ANY_ID, iwl_sc_trans_cfg)},
+ {IWL_PCI_DEVICE(0xD340, PCI_ANY_ID, iwl_sc_trans_cfg)},
+ {IWL_PCI_DEVICE(0x6E70, PCI_ANY_ID, iwl_sc_trans_cfg)},
++
++/* Dr devices */
++ {IWL_PCI_DEVICE(0x272F, PCI_ANY_ID, iwl_dr_trans_cfg)},
+ #endif /* CONFIG_IWLMVM */
+
+ {0}
+@@ -1182,6 +1185,19 @@ VISIBLE_IF_IWLWIFI_KUNIT const struct iwl_dev_info iwl_dev_info_table[] = {
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
+ iwl_cfg_sc2f, iwl_sc2f_name),
++/* Dr */
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_DR, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_dr, iwl_dr_name),
++
++/* Br */
++ _IWL_DEV_INFO(IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_MAC_TYPE_BR, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ IWL_CFG_ANY, IWL_CFG_ANY, IWL_CFG_ANY,
++ iwl_cfg_br, iwl_br_name),
+ #endif /* CONFIG_IWLMVM */
+ };
+ EXPORT_SYMBOL_IF_IWLWIFI_KUNIT(iwl_dev_info_table);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+index bfdbc15abaa9a7..928e0b07a9bf18 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/eeprom.c
+@@ -2,9 +2,14 @@
+ /* Copyright (C) 2020 MediaTek Inc. */
+
+ #include <linux/firmware.h>
++#include <linux/moduleparam.h>
+ #include "mt7915.h"
+ #include "eeprom.h"
+
++static bool enable_6ghz;
++module_param(enable_6ghz, bool, 0644);
++MODULE_PARM_DESC(enable_6ghz, "Enable 6 GHz instead of 5 GHz on hardware that supports both");
++
+ static int mt7915_eeprom_load_precal(struct mt7915_dev *dev)
+ {
+ struct mt76_dev *mdev = &dev->mt76;
+@@ -170,8 +175,20 @@ static void mt7915_eeprom_parse_band_config(struct mt7915_phy *phy)
+ phy->mt76->cap.has_6ghz = true;
+ return;
+ case MT_EE_V2_BAND_SEL_5GHZ_6GHZ:
+- phy->mt76->cap.has_5ghz = true;
+- phy->mt76->cap.has_6ghz = true;
++ if (enable_6ghz) {
++ phy->mt76->cap.has_6ghz = true;
++ u8p_replace_bits(&eeprom[MT_EE_WIFI_CONF + band],
++ MT_EE_V2_BAND_SEL_6GHZ,
++ MT_EE_WIFI_CONF0_BAND_SEL);
++ } else {
++ phy->mt76->cap.has_5ghz = true;
++ u8p_replace_bits(&eeprom[MT_EE_WIFI_CONF + band],
++ MT_EE_V2_BAND_SEL_5GHZ,
++ MT_EE_WIFI_CONF0_BAND_SEL);
++ }
++ /* force to buffer mode */
++ dev->flash_mode = true;
++
+ return;
+ default:
+ phy->mt76->cap.has_2ghz = true;
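
Because enable_6ghz is a plain module parameter (mode 0644), the band choice on dual-strap hardware can be made at load time, e.g. modprobe mt7915e enable_6ghz=1 (module name assumed here). Note that the value is only consulted while the EEPROM band configuration is parsed at probe, so changing it later through /sys/module/.../parameters takes effect only after the driver is reloaded.
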
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7915/init.c b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+index 77d82ccd73079d..bc983ab10b0c7a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7915/init.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7915/init.c
+@@ -1239,14 +1239,14 @@ int mt7915_register_device(struct mt7915_dev *dev)
+ if (ret)
+ goto unreg_dev;
+
+- ieee80211_queue_work(mt76_hw(dev), &dev->init_work);
+-
+ if (phy2) {
+ ret = mt7915_register_ext_phy(dev, phy2);
+ if (ret)
+ goto unreg_thermal;
+ }
+
++ ieee80211_queue_work(mt76_hw(dev), &dev->init_work);
++
+ dev->recovery.hw_init_done = true;
+
+ ret = mt7915_init_debugfs(&dev->phy);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
+index 8aa4f0203208ab..e3459295ad884e 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/usb.c
+@@ -21,6 +21,9 @@ static const struct usb_device_id mt7921u_device_table[] = {
+ /* Netgear, Inc. [A8000,AXE3000] */
+ { USB_DEVICE_AND_INTERFACE_INFO(0x0846, 0x9060, 0xff, 0xff, 0xff),
+ .driver_info = (kernel_ulong_t)MT7921_FIRMWARE_WM },
++ /* TP-Link TXE50UH */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x35bc, 0x0107, 0xff, 0xff, 0xff),
++ .driver_info = (kernel_ulong_t)MT7921_FIRMWARE_WM },
+ { },
+ };
+
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.h b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.h
+index c269942b3f4ab1..af8d17b9e012ca 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.h
++++ b/drivers/net/wireless/realtek/rtlwifi/rtl8821ae/fw.h
+@@ -197,9 +197,9 @@ enum rtl8821a_h2c_cmd {
+
+ /* _MEDIA_STATUS_RPT_PARM_CMD1 */
+ #define SET_H2CCMD_MSRRPT_PARM_OPMODE(__cmd, __value) \
+- u8p_replace_bits(__cmd + 1, __value, BIT(0))
++ u8p_replace_bits(__cmd, __value, BIT(0))
+ #define SET_H2CCMD_MSRRPT_PARM_MACID_IND(__cmd, __value) \
+- u8p_replace_bits(__cmd + 1, __value, BIT(1))
++ u8p_replace_bits(__cmd, __value, BIT(1))
+
+ /* AP_OFFLOAD */
+ #define SET_H2CCMD_AP_OFFLOAD_ON(__cmd, __value) \
+diff --git a/drivers/net/wireless/realtek/rtw88/main.h b/drivers/net/wireless/realtek/rtw88/main.h
+index 945117afe1438b..c808bb271e9d0f 100644
+--- a/drivers/net/wireless/realtek/rtw88/main.h
++++ b/drivers/net/wireless/realtek/rtw88/main.h
+@@ -508,12 +508,12 @@ struct rtw_5g_txpwr_idx {
+ struct rtw_5g_vht_ns_pwr_idx_diff vht_2s_diff;
+ struct rtw_5g_vht_ns_pwr_idx_diff vht_3s_diff;
+ struct rtw_5g_vht_ns_pwr_idx_diff vht_4s_diff;
+-};
++} __packed;
+
+ struct rtw_txpwr_idx {
+ struct rtw_2g_txpwr_idx pwr_idx_2g;
+ struct rtw_5g_txpwr_idx pwr_idx_5g;
+-};
++} __packed;
+
+ struct rtw_channel_params {
+ u8 center_chan;
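
The __packed annotations added above matter because these structs are overlaid byte-for-byte onto the eFuse/TX-power tables read from the device, where every field position is fixed; without the attribute the compiler is free to insert alignment padding. A generic illustration (not the actual rtw88 layouts):

    #include <stdint.h>
    #include <stdio.h>

    struct plain_s {
        uint8_t  a;
        uint16_t b;     /* usually padded up to a 2-byte boundary */
        uint8_t  c;
    };

    struct packed_s {
        uint8_t  a;
        uint16_t b;
        uint8_t  c;
    } __attribute__((packed));

    int main(void)
    {
        /* typically prints 6 vs 4 on common ABIs */
        printf("plain=%zu packed=%zu\n",
               sizeof(struct plain_s), sizeof(struct packed_s));
        return 0;
    }
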
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8703b.c b/drivers/net/wireless/realtek/rtw88/rtw8703b.c
+index 222608de33cdec..a977aad9c650f5 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8703b.c
++++ b/drivers/net/wireless/realtek/rtw88/rtw8703b.c
+@@ -911,7 +911,7 @@ static void rtw8703b_set_channel_bb(struct rtw_dev *rtwdev, u8 channel, u8 bw,
+ rtw_write32_mask(rtwdev, REG_FPGA0_RFMOD, BIT_MASK_RFMOD, 0x0);
+ rtw_write32_mask(rtwdev, REG_FPGA1_RFMOD, BIT_MASK_RFMOD, 0x0);
+ rtw_write32_mask(rtwdev, REG_OFDM0_TX_PSD_NOISE,
+- GENMASK(31, 20), 0x0);
++ GENMASK(31, 30), 0x0);
+ rtw_write32(rtwdev, REG_BBRX_DFIR, 0x4A880000);
+ rtw_write32(rtwdev, REG_OFDM0_A_TX_AFE, 0x19F60000);
+ break;
+@@ -1257,9 +1257,9 @@ static u8 rtw8703b_iqk_rx_path(struct rtw_dev *rtwdev,
+ rtw_write32(rtwdev, REG_RXIQK_TONE_A_11N, 0x38008c1c);
+ rtw_write32(rtwdev, REG_TX_IQK_TONE_B, 0x38008c1c);
+ rtw_write32(rtwdev, REG_RX_IQK_TONE_B, 0x38008c1c);
+- rtw_write32(rtwdev, REG_TXIQK_PI_A_11N, 0x8216000f);
++ rtw_write32(rtwdev, REG_TXIQK_PI_A_11N, 0x8214030f);
+ rtw_write32(rtwdev, REG_RXIQK_PI_A_11N, 0x28110000);
+- rtw_write32(rtwdev, REG_TXIQK_PI_B, 0x28110000);
++ rtw_write32(rtwdev, REG_TXIQK_PI_B, 0x82110000);
+ rtw_write32(rtwdev, REG_RXIQK_PI_B, 0x28110000);
+
+ /* LOK setting */
+@@ -1431,7 +1431,7 @@ void rtw8703b_iqk_fill_a_matrix(struct rtw_dev *rtwdev, const s32 result[])
+ return;
+
+ tmp_rx_iqi |= FIELD_PREP(BIT_MASK_RXIQ_S1_X, result[IQK_S1_RX_X]);
+- tmp_rx_iqi |= FIELD_PREP(BIT_MASK_RXIQ_S1_Y1, result[IQK_S1_RX_X]);
++ tmp_rx_iqi |= FIELD_PREP(BIT_MASK_RXIQ_S1_Y1, result[IQK_S1_RX_Y]);
+ rtw_write32(rtwdev, REG_A_RXIQI, tmp_rx_iqi);
+ rtw_write32_mask(rtwdev, REG_RXIQK_MATRIX_LSB_11N, BIT_MASK_RXIQ_S1_Y2,
+ BIT_SET_RXIQ_S1_Y2(result[IQK_S1_RX_Y]));
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8723x.h b/drivers/net/wireless/realtek/rtw88/rtw8723x.h
+index e93bfce994bf82..a99af527c92cfb 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8723x.h
++++ b/drivers/net/wireless/realtek/rtw88/rtw8723x.h
+@@ -47,7 +47,7 @@ struct rtw8723xe_efuse {
+ u8 device_id[2];
+ u8 sub_vendor_id[2];
+ u8 sub_device_id[2];
+-};
++} __packed;
+
+ struct rtw8723xu_efuse {
+ u8 res4[48]; /* 0xd0 */
+@@ -56,12 +56,12 @@ struct rtw8723xu_efuse {
+ u8 usb_option; /* 0x104 */
+ u8 res5[2]; /* 0x105 */
+ u8 mac_addr[ETH_ALEN]; /* 0x107 */
+-};
++} __packed;
+
+ struct rtw8723xs_efuse {
+ u8 res4[0x4a]; /* 0xd0 */
+ u8 mac_addr[ETH_ALEN]; /* 0x11a */
+-};
++} __packed;
+
+ struct rtw8723x_efuse {
+ __le16 rtl_id;
+@@ -96,7 +96,7 @@ struct rtw8723x_efuse {
+ struct rtw8723xu_efuse u;
+ struct rtw8723xs_efuse s;
+ };
+-};
++} __packed;
+
+ #define RTW8723X_IQK_ADDA_REG_NUM 16
+ #define RTW8723X_IQK_MAC8_REG_NUM 3
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8821c.h b/drivers/net/wireless/realtek/rtw88/rtw8821c.h
+index 91ed921407bbe7..10172f4d74bf28 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8821c.h
++++ b/drivers/net/wireless/realtek/rtw88/rtw8821c.h
+@@ -27,7 +27,7 @@ struct rtw8821cu_efuse {
+ u8 res11[0xcf];
+ u8 package_type; /* 0x1fb */
+ u8 res12[0x4];
+-};
++} __packed;
+
+ struct rtw8821ce_efuse {
+ u8 mac_addr[ETH_ALEN]; /* 0xd0 */
+@@ -47,7 +47,8 @@ struct rtw8821ce_efuse {
+ u8 ltr_en:1;
+ u8 res1:2;
+ u8 obff:2;
+- u8 res2:3;
++ u8 res2_1:1;
++ u8 res2_2:2;
+ u8 obff_cap:2;
+ u8 res3:4;
+ u8 res4[3];
+@@ -63,7 +64,7 @@ struct rtw8821ce_efuse {
+ u8 res6:1;
+ u8 port_t_power_on_value:5;
+ u8 res7;
+-};
++} __packed;
+
+ struct rtw8821cs_efuse {
+ u8 res4[0x4a]; /* 0xd0 */
+@@ -101,7 +102,7 @@ struct rtw8821c_efuse {
+ struct rtw8821cu_efuse u;
+ struct rtw8821cs_efuse s;
+ };
+-};
++} __packed;
+
+ static inline void
+ _rtw_write32s_mask(struct rtw_dev *rtwdev, u32 addr, u32 mask, u32 data)
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822b.h b/drivers/net/wireless/realtek/rtw88/rtw8822b.h
+index cf85e63966a1c7..e815bc97c218af 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822b.h
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822b.h
+@@ -27,7 +27,7 @@ struct rtw8822bu_efuse {
+ u8 res11[0xcf];
+ u8 package_type; /* 0x1fb */
+ u8 res12[0x4];
+-};
++} __packed;
+
+ struct rtw8822be_efuse {
+ u8 mac_addr[ETH_ALEN]; /* 0xd0 */
+@@ -47,7 +47,8 @@ struct rtw8822be_efuse {
+ u8 ltr_en:1;
+ u8 res1:2;
+ u8 obff:2;
+- u8 res2:3;
++ u8 res2_1:1;
++ u8 res2_2:2;
+ u8 obff_cap:2;
+ u8 res3:4;
+ u8 res4[3];
+@@ -63,7 +64,7 @@ struct rtw8822be_efuse {
+ u8 res6:1;
+ u8 port_t_power_on_value:5;
+ u8 res7;
+-};
++} __packed;
+
+ struct rtw8822bs_efuse {
+ u8 res4[0x4a]; /* 0xd0 */
+@@ -103,7 +104,7 @@ struct rtw8822b_efuse {
+ struct rtw8822bu_efuse u;
+ struct rtw8822bs_efuse s;
+ };
+-};
++} __packed;
+
+ static inline void
+ _rtw_write32s_mask(struct rtw_dev *rtwdev, u32 addr, u32 mask, u32 data)
+diff --git a/drivers/net/wireless/realtek/rtw88/rtw8822c.h b/drivers/net/wireless/realtek/rtw88/rtw8822c.h
+index e2b383d633cd23..fc62b67a15f216 100644
+--- a/drivers/net/wireless/realtek/rtw88/rtw8822c.h
++++ b/drivers/net/wireless/realtek/rtw88/rtw8822c.h
+@@ -14,7 +14,7 @@ struct rtw8822cu_efuse {
+ u8 res1[3];
+ u8 mac_addr[ETH_ALEN]; /* 0x157 */
+ u8 res2[0x3d];
+-};
++} __packed;
+
+ struct rtw8822cs_efuse {
+ u8 res0[0x4a]; /* 0x120 */
+@@ -39,7 +39,8 @@ struct rtw8822ce_efuse {
+ u8 ltr_en:1;
+ u8 res1:2;
+ u8 obff:2;
+- u8 res2:3;
++ u8 res2_1:1;
++ u8 res2_2:2;
+ u8 obff_cap:2;
+ u8 res3:4;
+ u8 class_code[3];
+@@ -55,7 +56,7 @@ struct rtw8822ce_efuse {
+ u8 res6:1;
+ u8 port_t_power_on_value:5;
+ u8 res7;
+-};
++} __packed;
+
+ struct rtw8822c_efuse {
+ __le16 rtl_id;
+@@ -102,7 +103,7 @@ struct rtw8822c_efuse {
+ struct rtw8822cu_efuse u;
+ struct rtw8822cs_efuse s;
+ };
+-};
++} __packed;
+
+ enum rtw8822c_dpk_agc_phase {
+ RTW_DPK_GAIN_CHECK,
+diff --git a/drivers/net/wireless/realtek/rtw88/sdio.c b/drivers/net/wireless/realtek/rtw88/sdio.c
+index b67e551fcee3ef..1d62b38526c486 100644
+--- a/drivers/net/wireless/realtek/rtw88/sdio.c
++++ b/drivers/net/wireless/realtek/rtw88/sdio.c
+@@ -1193,6 +1193,8 @@ static void rtw_sdio_indicate_tx_status(struct rtw_dev *rtwdev,
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ struct ieee80211_hw *hw = rtwdev->hw;
+
++ skb_pull(skb, rtwdev->chip->tx_pkt_desc_sz);
++
+ /* enqueue to wait for tx report */
+ if (info->flags & IEEE80211_TX_CTL_REQ_TX_STATUS) {
+ rtw_tx_report_enqueue(rtwdev, skb, tx_data->sn);
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.c b/drivers/net/wireless/realtek/rtw89/phy.c
+index 4b47b45f897cbc..5c31639b4cade9 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.c
++++ b/drivers/net/wireless/realtek/rtw89/phy.c
+@@ -3892,7 +3892,6 @@ static void rtw89_phy_cfo_set_crystal_cap(struct rtw89_dev *rtwdev,
+
+ if (!force && cfo->crystal_cap == crystal_cap)
+ return;
+- crystal_cap = clamp_t(u8, crystal_cap, 0, 127);
+ if (chip->chip_id == RTL8852A || chip->chip_id == RTL8851B) {
+ rtw89_phy_cfo_set_xcap_reg(rtwdev, true, crystal_cap);
+ rtw89_phy_cfo_set_xcap_reg(rtwdev, false, crystal_cap);
+@@ -4015,7 +4014,7 @@ static void rtw89_phy_cfo_crystal_cap_adjust(struct rtw89_dev *rtwdev,
+ s32 curr_cfo)
+ {
+ struct rtw89_cfo_tracking_info *cfo = &rtwdev->cfo_tracking;
+- s8 crystal_cap = cfo->crystal_cap;
++ int crystal_cap = cfo->crystal_cap;
+ s32 cfo_abs = abs(curr_cfo);
+ int sign;
+
+@@ -4036,15 +4035,17 @@ static void rtw89_phy_cfo_crystal_cap_adjust(struct rtw89_dev *rtwdev,
+ }
+ sign = curr_cfo > 0 ? 1 : -1;
+ if (cfo_abs > CFO_TRK_STOP_TH_4)
+- crystal_cap += 7 * sign;
++ crystal_cap += 3 * sign;
+ else if (cfo_abs > CFO_TRK_STOP_TH_3)
+- crystal_cap += 5 * sign;
+- else if (cfo_abs > CFO_TRK_STOP_TH_2)
+ crystal_cap += 3 * sign;
++ else if (cfo_abs > CFO_TRK_STOP_TH_2)
++ crystal_cap += 1 * sign;
+ else if (cfo_abs > CFO_TRK_STOP_TH_1)
+ crystal_cap += 1 * sign;
+ else
+ return;
++
++ crystal_cap = clamp(crystal_cap, 0, 127);
+ rtw89_phy_cfo_set_crystal_cap(rtwdev, (u8)crystal_cap, false);
+ rtw89_debug(rtwdev, RTW89_DBG_CFO,
+ "X_cap{Curr,Default}={0x%x,0x%x}\n",
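
Two things change in the crystal-cap adjustment above: the working variable is widened from s8 to int, the clamp to the valid [0, 127] range now happens after the step is applied, and the larger step sizes are reduced. With the old s8 arithmetic, a capacitor code near the top of the range could wrap negative before any clamping. A small sketch of the difference (illustrative values):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int8_t cap_s8 = 125;    /* old: s8 working variable */
        int    cap    = 125;    /* new: int, clamped afterwards */

        cap_s8 += 7;            /* 132 doesn't fit in s8: wraps to -124
                                 * on typical two's-complement targets */
        cap += 7;
        if (cap > 127)          /* clamp(crystal_cap, 0, 127) */
            cap = 127;
        if (cap < 0)
            cap = 0;

        printf("s8: %d   int+clamp: %d\n", cap_s8, cap);
        return 0;
    }
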
+diff --git a/drivers/net/wireless/realtek/rtw89/phy.h b/drivers/net/wireless/realtek/rtw89/phy.h
+index 7e335c02ee6fbf..9bb9c9c8e7a1b0 100644
+--- a/drivers/net/wireless/realtek/rtw89/phy.h
++++ b/drivers/net/wireless/realtek/rtw89/phy.h
+@@ -57,7 +57,7 @@
+ #define CFO_TRK_STOP_TH_4 (30 << 2)
+ #define CFO_TRK_STOP_TH_3 (20 << 2)
+ #define CFO_TRK_STOP_TH_2 (10 << 2)
+-#define CFO_TRK_STOP_TH_1 (00 << 2)
++#define CFO_TRK_STOP_TH_1 (03 << 2)
+ #define CFO_TRK_STOP_TH (2 << 2)
+ #define CFO_SW_COMP_FINE_TUNE (2 << 2)
+ #define CFO_PERIOD_CNT 15
+diff --git a/drivers/net/wwan/iosm/iosm_ipc_pcie.c b/drivers/net/wwan/iosm/iosm_ipc_pcie.c
+index 04517bd3325a2a..a066977af0be5c 100644
+--- a/drivers/net/wwan/iosm/iosm_ipc_pcie.c
++++ b/drivers/net/wwan/iosm/iosm_ipc_pcie.c
+@@ -6,6 +6,7 @@
+ #include <linux/acpi.h>
+ #include <linux/bitfield.h>
+ #include <linux/module.h>
++#include <linux/suspend.h>
+ #include <net/rtnetlink.h>
+
+ #include "iosm_ipc_imem.h"
+@@ -18,6 +19,7 @@ MODULE_LICENSE("GPL v2");
+ /* WWAN GUID */
+ static guid_t wwan_acpi_guid = GUID_INIT(0xbad01b75, 0x22a8, 0x4f48, 0x87, 0x92,
+ 0xbd, 0xde, 0x94, 0x67, 0x74, 0x7d);
++static bool pci_registered;
+
+ static void ipc_pcie_resources_release(struct iosm_pcie *ipc_pcie)
+ {
+@@ -448,7 +450,6 @@ static struct pci_driver iosm_ipc_driver = {
+ },
+ .id_table = iosm_ipc_ids,
+ };
+-module_pci_driver(iosm_ipc_driver);
+
+ int ipc_pcie_addr_map(struct iosm_pcie *ipc_pcie, unsigned char *data,
+ size_t size, dma_addr_t *mapping, int direction)
+@@ -530,3 +531,56 @@ void ipc_pcie_kfree_skb(struct iosm_pcie *ipc_pcie, struct sk_buff *skb)
+ IPC_CB(skb)->mapping = 0;
+ dev_kfree_skb(skb);
+ }
++
++static int pm_notify(struct notifier_block *nb, unsigned long mode, void *_unused)
++{
++ if (mode == PM_HIBERNATION_PREPARE || mode == PM_RESTORE_PREPARE) {
++ if (pci_registered) {
++ pci_unregister_driver(&iosm_ipc_driver);
++ pci_registered = false;
++ }
++ } else if (mode == PM_POST_HIBERNATION || mode == PM_POST_RESTORE) {
++ if (!pci_registered) {
++ int ret;
++
++ ret = pci_register_driver(&iosm_ipc_driver);
++ if (ret) {
++ pr_err(KBUILD_MODNAME ": unable to re-register PCI driver: %d\n",
++ ret);
++ } else {
++ pci_registered = true;
++ }
++ }
++ }
++
++ return 0;
++}
++
++static struct notifier_block pm_notifier = {
++ .notifier_call = pm_notify,
++};
++
++static int __init iosm_ipc_driver_init(void)
++{
++ int ret;
++
++ ret = pci_register_driver(&iosm_ipc_driver);
++ if (ret)
++ return ret;
++
++ pci_registered = true;
++
++ register_pm_notifier(&pm_notifier);
++
++ return 0;
++}
++module_init(iosm_ipc_driver_init);
++
++static void __exit iosm_ipc_driver_exit(void)
++{
++ unregister_pm_notifier(&pm_notifier);
++
++ if (pci_registered)
++ pci_unregister_driver(&iosm_ipc_driver);
++}
++module_exit(iosm_ipc_driver_exit);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 4c409efd8cec17..8da50df56b0795 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1691,7 +1691,13 @@ int nvme_set_queue_count(struct nvme_ctrl *ctrl, int *count)
+
+ status = nvme_set_features(ctrl, NVME_FEAT_NUM_QUEUES, q_count, NULL, 0,
+ &result);
+- if (status < 0)
++
++ /*
++ * It's either a kernel error or the host observed a connection
++	 * loss. In either case it's not possible to communicate with the
++	 * controller, and thus we enter the error code path.
++ */
++ if (status < 0 || status == NVME_SC_HOST_PATH_ERROR)
+ return status;
+
+ /*
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index b81af7919e94c4..682234da2fabe0 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -2080,7 +2080,8 @@ nvme_fc_fcpio_done(struct nvmefc_fcp_req *req)
+ nvme_fc_complete_rq(rq);
+
+ check_error:
+- if (terminate_assoc && ctrl->ctrl.state != NVME_CTRL_RESETTING)
++ if (terminate_assoc &&
++ nvme_ctrl_state(&ctrl->ctrl) != NVME_CTRL_RESETTING)
+ queue_work(nvme_reset_wq, &ctrl->ioerr_work);
+ }
+
+@@ -2534,6 +2535,8 @@ __nvme_fc_abort_outstanding_ios(struct nvme_fc_ctrl *ctrl, bool start_queues)
+ static void
+ nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
+ {
++ enum nvme_ctrl_state state = nvme_ctrl_state(&ctrl->ctrl);
++
+ /*
+	 * if an error (io timeout, etc) while (re)connecting, the remote
+	 * port requested terminating of the association (disconnect_ls),
+	 * or another error occurred, we must restart the association and
+	 * the controller. Abort any ios on the association and let the
+ * the controller. Abort any ios on the association and let the
+ * create_association error path resolve things.
+ */
+- if (ctrl->ctrl.state == NVME_CTRL_CONNECTING) {
++ if (state == NVME_CTRL_CONNECTING) {
+ __nvme_fc_abort_outstanding_ios(ctrl, true);
+ set_bit(ASSOC_FAILED, &ctrl->flags);
+ dev_warn(ctrl->ctrl.device,
+@@ -2551,7 +2554,7 @@ nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
+ }
+
+ /* Otherwise, only proceed if in LIVE state - e.g. on first error */
+- if (ctrl->ctrl.state != NVME_CTRL_LIVE)
++ if (state != NVME_CTRL_LIVE)
+ return;
+
+ dev_warn(ctrl->ctrl.device,
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 76b3f7b396c86b..cc74682dc0d4e9 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -2987,7 +2987,9 @@ static unsigned long check_vendor_combination_bug(struct pci_dev *pdev)
+ * because of high power consumption (> 2 Watt) in s2idle
+ * sleep. Only some boards with Intel CPU are affected.
+ */
+- if (dmi_match(DMI_BOARD_NAME, "GMxPXxx") ||
++ if (dmi_match(DMI_BOARD_NAME, "DN50Z-140HC-YD") ||
++ dmi_match(DMI_BOARD_NAME, "GMxPXxx") ||
++ dmi_match(DMI_BOARD_NAME, "GXxMRXx") ||
+ dmi_match(DMI_BOARD_NAME, "PH4PG31") ||
+ dmi_match(DMI_BOARD_NAME, "PH4PRX1_PH6PRX1") ||
+ dmi_match(DMI_BOARD_NAME, "PH6PG01_PH6PG71"))
+diff --git a/drivers/nvme/host/sysfs.c b/drivers/nvme/host/sysfs.c
+index b68a9e5f1ea395..3a41b9ab0f13c4 100644
+--- a/drivers/nvme/host/sysfs.c
++++ b/drivers/nvme/host/sysfs.c
+@@ -792,7 +792,7 @@ static umode_t nvme_tls_attrs_are_visible(struct kobject *kobj,
+ return a->mode;
+ }
+
+-const struct attribute_group nvme_tls_attrs_group = {
++static const struct attribute_group nvme_tls_attrs_group = {
+ .attrs = nvme_tls_attrs,
+ .is_visible = nvme_tls_attrs_are_visible,
+ };
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index e1a15fbc6ad025..d00a3b015635c2 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -1780,6 +1780,8 @@ static int __nvmem_cell_entry_write(struct nvmem_cell_entry *cell, void *buf, si
+ return -EINVAL;
+
+ if (cell->bit_offset || cell->nbits) {
++ if (len != BITS_TO_BYTES(cell->nbits) && len != cell->bytes)
++ return -EINVAL;
+ buf = nvmem_cell_prepare_write_buffer(cell, buf, len);
+ if (IS_ERR(buf))
+ return PTR_ERR(buf);
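
The new guard accepts exactly two write lengths for a bit-level cell: the packed width rounded up to whole bytes, or the cell's full byte span. Worked through for an assumed 23-bit cell occupying a 4-byte window:

    #include <stdio.h>

    #define BITS_TO_BYTES(n) (((n) + 7) / 8)

    /* mirrors the new length guard in __nvmem_cell_entry_write() */
    static int write_len_ok(unsigned int nbits, unsigned int cell_bytes,
                            unsigned int len)
    {
        return len == BITS_TO_BYTES(nbits) || len == cell_bytes;
    }

    int main(void)
    {
        /* e.g. a 23-bit cell starting a few bits into a 4-byte window
         * (assumed numbers, purely for illustration) */
        unsigned int nbits = 23, cell_bytes = 4;

        for (unsigned int len = 1; len <= 5; len++)
            printf("len=%u -> %s\n", len,
                   write_len_ok(nbits, cell_bytes, len) ? "ok" : "-EINVAL");
        return 0;    /* only len=3 (BITS_TO_BYTES) and len=4 pass */
    }
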
+diff --git a/drivers/nvmem/imx-ocotp-ele.c b/drivers/nvmem/imx-ocotp-ele.c
+index 1ba49449769874..ca6dd71d8a2e29 100644
+--- a/drivers/nvmem/imx-ocotp-ele.c
++++ b/drivers/nvmem/imx-ocotp-ele.c
+@@ -71,13 +71,15 @@ static int imx_ocotp_reg_read(void *context, unsigned int offset, void *val, siz
+ u32 *buf;
+ void *p;
+ int i;
++ u8 skipbytes;
+
+- index = offset;
+- num_bytes = round_up(bytes, 4);
+- count = num_bytes >> 2;
++ if (offset + bytes > priv->data->size)
++ bytes = priv->data->size - offset;
+
+- if (count > ((priv->data->size >> 2) - index))
+- count = (priv->data->size >> 2) - index;
++ index = offset >> 2;
++ skipbytes = offset - (index << 2);
++ num_bytes = round_up(bytes + skipbytes, 4);
++ count = num_bytes >> 2;
+
+ p = kzalloc(num_bytes, GFP_KERNEL);
+ if (!p)
+@@ -100,7 +102,7 @@ static int imx_ocotp_reg_read(void *context, unsigned int offset, void *val, siz
+ *buf++ = readl_relaxed(reg + (i << 2));
+ }
+
+- memcpy(val, (u8 *)p, bytes);
++ memcpy(val, ((u8 *)p) + skipbytes, bytes);
+
+ mutex_unlock(&priv->lock);
+
+@@ -109,6 +111,26 @@ static int imx_ocotp_reg_read(void *context, unsigned int offset, void *val, siz
+ return 0;
+ };
+
++static int imx_ocotp_cell_pp(void *context, const char *id, int index,
++ unsigned int offset, void *data, size_t bytes)
++{
++ u8 *buf = data;
++ int i;
++
++ /* Deal with some post processing of nvmem cell data */
++ if (id && !strcmp(id, "mac-address"))
++ for (i = 0; i < bytes / 2; i++)
++ swap(buf[i], buf[bytes - i - 1]);
++
++ return 0;
++}
++
++static void imx_ocotp_fixup_dt_cell_info(struct nvmem_device *nvmem,
++ struct nvmem_cell_info *cell)
++{
++ cell->read_post_process = imx_ocotp_cell_pp;
++}
++
+ static int imx_ele_ocotp_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -131,10 +153,12 @@ static int imx_ele_ocotp_probe(struct platform_device *pdev)
+ priv->config.owner = THIS_MODULE;
+ priv->config.size = priv->data->size;
+ priv->config.reg_read = priv->data->reg_read;
+- priv->config.word_size = 4;
++ priv->config.word_size = 1;
+ priv->config.stride = 1;
+ priv->config.priv = priv;
+ priv->config.read_only = true;
++ priv->config.add_legacy_fixed_of_cells = true;
++ priv->config.fixup_dt_cell_info = imx_ocotp_fixup_dt_cell_info;
+ mutex_init(&priv->lock);
+
+ nvmem = devm_nvmem_register(dev, &priv->config);
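
The reworked read path converts the byte offset into a 32-bit word index plus a leading-byte skip, rounds the transfer up to whole fuse words, and copies from p + skipbytes. The arithmetic, worked through for a 3-byte read at byte offset 6:

    #include <stdio.h>

    /* simplified round_up; the kernel macro requires a power-of-2 step */
    #define round_up(x, y) ((((x) + (y) - 1) / (y)) * (y))

    int main(void)
    {
        unsigned int offset = 6, bytes = 3;

        unsigned int index     = offset >> 2;                  /* word 1 */
        unsigned int skipbytes = offset - (index << 2);        /* 2 bytes in */
        unsigned int num_bytes = round_up(bytes + skipbytes, 4); /* 8 */
        unsigned int count     = num_bytes >> 2;               /* 2 words */

        printf("index=%u skip=%u words=%u\n", index, skipbytes, count);
        /* the memcpy starts at p + skipbytes, i.e. fuse byte 6 */
        return 0;
    }
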
+diff --git a/drivers/nvmem/qcom-spmi-sdam.c b/drivers/nvmem/qcom-spmi-sdam.c
+index 9aa8f42faa4c93..4f1cca6eab71e1 100644
+--- a/drivers/nvmem/qcom-spmi-sdam.c
++++ b/drivers/nvmem/qcom-spmi-sdam.c
+@@ -144,6 +144,7 @@ static int sdam_probe(struct platform_device *pdev)
+ sdam->sdam_config.owner = THIS_MODULE;
+ sdam->sdam_config.add_legacy_fixed_of_cells = true;
+ sdam->sdam_config.stride = 1;
++ sdam->sdam_config.size = sdam->size;
+ sdam->sdam_config.word_size = 1;
+ sdam->sdam_config.reg_read = sdam_read;
+ sdam->sdam_config.reg_write = sdam_write;
+diff --git a/drivers/of/address.c b/drivers/of/address.c
+index a565b8c91da593..0e708a863e4aa3 100644
+--- a/drivers/of/address.c
++++ b/drivers/of/address.c
+@@ -200,17 +200,15 @@ static u64 of_bus_pci_map(__be32 *addr, const __be32 *range, int na, int ns,
+
+ static int __of_address_resource_bounds(struct resource *r, u64 start, u64 size)
+ {
+- u64 end = start;
+-
+ if (overflows_type(start, r->start))
+ return -EOVERFLOW;
+- if (size && check_add_overflow(end, size - 1, &end))
+- return -EOVERFLOW;
+- if (overflows_type(end, r->end))
+- return -EOVERFLOW;
+
+ r->start = start;
+- r->end = end;
++
++ if (!size)
++ r->end = wrapping_sub(typeof(r->end), r->start, 1);
++ else if (size && check_add_overflow(r->start, size - 1, &r->end))
++ return -EOVERFLOW;
+
+ return 0;
+ }
+diff --git a/drivers/of/base.c b/drivers/of/base.c
+index 63161d0f72b4e8..4bb87e0cbaf179 100644
+--- a/drivers/of/base.c
++++ b/drivers/of/base.c
+@@ -841,10 +841,10 @@ struct device_node *of_find_node_opts_by_path(const char *path, const char **opt
+ /* The path could begin with an alias */
+ if (*path != '/') {
+ int len;
+- const char *p = separator;
++ const char *p = strchrnul(path, '/');
+
+- if (!p)
+- p = strchrnul(path, '/');
++ if (separator && separator < p)
++ p = separator;
+ len = p - path;
+
+ /* of_aliases must not be NULL */
+@@ -1493,7 +1493,6 @@ int of_parse_phandle_with_args_map(const struct device_node *np,
+ * specifier into the out_args structure, keeping the
+ * bits specified in <list>-map-pass-thru.
+ */
+- match_array = map - new_size;
+ for (i = 0; i < new_size; i++) {
+ __be32 val = *(map - new_size + i);
+
+@@ -1502,6 +1501,7 @@ int of_parse_phandle_with_args_map(const struct device_node *np,
+ val |= cpu_to_be32(out_args->args[i]) & pass[i];
+ }
+
++ initial_match_array[i] = val;
+ out_args->args[i] = be32_to_cpu(val);
+ }
+ out_args->args_count = list_size = new_size;
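
The alias fix above picks whichever terminator comes first: the ':' options separator or the first '/'. With the old code, a path such as "alias/child:opt" used the later ':' and the alias lookup wrongly included "/child". A compact model of the corrected selection (strchrnul is reimplemented locally for portability):

    #include <stdio.h>
    #include <string.h>

    /* like strchr(), but returns the terminating NUL instead of NULL */
    static const char *my_strchrnul(const char *s, int c)
    {
        while (*s && *s != c)
            s++;
        return s;
    }

    int main(void)
    {
        const char *path = "alias/child:opt";  /* alias + subpath + options */
        const char *separator = strchr(path, ':');
        const char *p = my_strchrnul(path, '/');

        if (separator && separator < p)        /* the fix: take the earlier one */
            p = separator;

        printf("alias = \"%.*s\"\n", (int)(p - path), path); /* "alias" */
        return 0;
    }
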
+diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
+index 45445a1600a968..e45d6d3a8dc678 100644
+--- a/drivers/of/of_reserved_mem.c
++++ b/drivers/of/of_reserved_mem.c
+@@ -360,12 +360,12 @@ static int __init __reserved_mem_alloc_size(unsigned long node, const char *unam
+
+ prop = of_get_flat_dt_prop(node, "alignment", &len);
+ if (prop) {
+- if (len != dt_root_addr_cells * sizeof(__be32)) {
++ if (len != dt_root_size_cells * sizeof(__be32)) {
+ pr_err("invalid alignment property in '%s' node.\n",
+ uname);
+ return -EINVAL;
+ }
+- align = dt_mem_next_cell(dt_root_addr_cells, &prop);
++ align = dt_mem_next_cell(dt_root_size_cells, &prop);
+ }
+
+ nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index cc8ff4a014368c..b58e89ea566b8d 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -222,19 +222,30 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
+ if ((flags & PCI_BASE_ADDRESS_MEM_TYPE_64) && (bar & 1))
+ return -EINVAL;
+
+- reg = PCI_BASE_ADDRESS_0 + (4 * bar);
+-
+- if (!(flags & PCI_BASE_ADDRESS_SPACE))
+- type = PCIE_ATU_TYPE_MEM;
+- else
+- type = PCIE_ATU_TYPE_IO;
++ /*
++ * Certain EPF drivers dynamically change the physical address of a BAR
++ * (i.e. they call set_bar() twice, without ever calling clear_bar(), as
++ * calling clear_bar() would clear the BAR's PCI address assigned by the
++ * host).
++ */
++ if (ep->epf_bar[bar]) {
++ /*
++ * We can only dynamically change a BAR if the new BAR size and
++ * BAR flags do not differ from the existing configuration.
++ */
++ if (ep->epf_bar[bar]->barno != bar ||
++ ep->epf_bar[bar]->size != size ||
++ ep->epf_bar[bar]->flags != flags)
++ return -EINVAL;
+
+- ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar);
+- if (ret)
+- return ret;
++ /*
++ * When dynamically changing a BAR, skip writing the BAR reg, as
++ * that would clear the BAR's PCI address assigned by the host.
++ */
++ goto config_atu;
++ }
+
+- if (ep->epf_bar[bar])
+- return 0;
++ reg = PCI_BASE_ADDRESS_0 + (4 * bar);
+
+ dw_pcie_dbi_ro_wr_en(pci);
+
+@@ -246,9 +257,20 @@ static int dw_pcie_ep_set_bar(struct pci_epc *epc, u8 func_no, u8 vfunc_no,
+ dw_pcie_ep_writel_dbi(ep, func_no, reg + 4, 0);
+ }
+
+- ep->epf_bar[bar] = epf_bar;
+ dw_pcie_dbi_ro_wr_dis(pci);
+
++config_atu:
++ if (!(flags & PCI_BASE_ADDRESS_SPACE))
++ type = PCIE_ATU_TYPE_MEM;
++ else
++ type = PCIE_ATU_TYPE_IO;
++
++ ret = dw_pcie_ep_inbound_atu(ep, func_no, type, epf_bar->phys_addr, bar);
++ if (ret)
++ return ret;
++
++ ep->epf_bar[bar] = epf_bar;
++
+ return 0;
+ }
+
+diff --git a/drivers/pci/endpoint/pci-epf-core.c b/drivers/pci/endpoint/pci-epf-core.c
+index 8fa2797d4169a9..50bc2892a36c54 100644
+--- a/drivers/pci/endpoint/pci-epf-core.c
++++ b/drivers/pci/endpoint/pci-epf-core.c
+@@ -202,6 +202,7 @@ void pci_epf_remove_vepf(struct pci_epf *epf_pf, struct pci_epf *epf_vf)
+
+ mutex_lock(&epf_pf->lock);
+ clear_bit(epf_vf->vfunc_no, &epf_pf->vfunction_num_map);
++ epf_vf->epf_pf = NULL;
+ list_del(&epf_vf->list);
+ mutex_unlock(&epf_pf->lock);
+ }
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+index 3a81837b5e623b..5081c7d8064fae 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+@@ -155,7 +155,7 @@
+ #define PWPR_REGWE_B BIT(5) /* OEN Register Write Enable, known only in RZ/V2H(P) */
+
+ #define PM_MASK 0x03
+-#define PFC_MASK 0x07
++#define PFC_MASK 0x0f
+ #define IEN_MASK 0x01
+ #define IOLH_MASK 0x03
+ #define SR_MASK 0x01
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index 675efa5d86a9af..c142cd7920307f 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -1272,7 +1272,7 @@ static int samsung_pinctrl_probe(struct platform_device *pdev)
+
+ ret = platform_get_irq_optional(pdev, 0);
+ if (ret < 0 && ret != -ENXIO)
+- return ret;
++ goto err_put_banks;
+ if (ret > 0)
+ drvdata->irq = ret;
+
+diff --git a/drivers/platform/x86/acer-wmi.c b/drivers/platform/x86/acer-wmi.c
+index 7169b84ccdb6e2..c5679e4a58a76e 100644
+--- a/drivers/platform/x86/acer-wmi.c
++++ b/drivers/platform/x86/acer-wmi.c
+@@ -95,6 +95,7 @@ enum acer_wmi_event_ids {
+ WMID_HOTKEY_EVENT = 0x1,
+ WMID_ACCEL_OR_KBD_DOCK_EVENT = 0x5,
+ WMID_GAMING_TURBO_KEY_EVENT = 0x7,
++ WMID_AC_EVENT = 0x8,
+ };
+
+ enum acer_wmi_predator_v4_sys_info_command {
+@@ -398,6 +399,20 @@ static struct quirk_entry quirk_acer_predator_ph315_53 = {
+ .gpu_fans = 1,
+ };
+
++static struct quirk_entry quirk_acer_predator_ph16_72 = {
++ .turbo = 1,
++ .cpu_fans = 1,
++ .gpu_fans = 1,
++ .predator_v4 = 1,
++};
++
++static struct quirk_entry quirk_acer_predator_pt14_51 = {
++ .turbo = 1,
++ .cpu_fans = 1,
++ .gpu_fans = 1,
++ .predator_v4 = 1,
++};
++
+ static struct quirk_entry quirk_acer_predator_v4 = {
+ .predator_v4 = 1,
+ };
+@@ -569,6 +584,15 @@ static const struct dmi_system_id acer_quirks[] __initconst = {
+ },
+ .driver_data = &quirk_acer_travelmate_2490,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "Acer Nitro AN515-58",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Nitro AN515-58"),
++ },
++ .driver_data = &quirk_acer_predator_v4,
++ },
+ {
+ .callback = dmi_matched,
+ .ident = "Acer Predator PH315-53",
+@@ -596,6 +620,15 @@ static const struct dmi_system_id acer_quirks[] __initconst = {
+ },
+ .driver_data = &quirk_acer_predator_v4,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "Acer Predator PH16-72",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Predator PH16-72"),
++ },
++ .driver_data = &quirk_acer_predator_ph16_72,
++ },
+ {
+ .callback = dmi_matched,
+ .ident = "Acer Predator PH18-71",
+@@ -605,6 +638,15 @@ static const struct dmi_system_id acer_quirks[] __initconst = {
+ },
+ .driver_data = &quirk_acer_predator_v4,
+ },
++ {
++ .callback = dmi_matched,
++ .ident = "Acer Predator PT14-51",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Predator PT14-51"),
++ },
++ .driver_data = &quirk_acer_predator_pt14_51,
++ },
+ {
+ .callback = set_force_caps,
+ .ident = "Acer Aspire Switch 10E SW3-016",
+@@ -2285,6 +2327,9 @@ static void acer_wmi_notify(union acpi_object *obj, void *context)
+ if (return_value.key_num == 0x5 && has_cap(ACER_CAP_PLATFORM_PROFILE))
+ acer_thermal_profile_change();
+ break;
++ case WMID_AC_EVENT:
++ /* We ignore AC events here */
++ break;
+ default:
+ pr_warn("Unknown function number - %d - %d\n",
+ return_value.function, return_value.key_num);
+diff --git a/drivers/platform/x86/intel/int3472/discrete.c b/drivers/platform/x86/intel/int3472/discrete.c
+index 3de463c3d13b8e..15678508ee5019 100644
+--- a/drivers/platform/x86/intel/int3472/discrete.c
++++ b/drivers/platform/x86/intel/int3472/discrete.c
+@@ -336,6 +336,9 @@ static int skl_int3472_discrete_probe(struct platform_device *pdev)
+ struct int3472_cldb cldb;
+ int ret;
+
++ if (!adev)
++ return -ENODEV;
++
+ ret = skl_int3472_fill_cldb(adev, &cldb);
+ if (ret) {
+ dev_err(&pdev->dev, "Couldn't fill CLDB structure\n");
+diff --git a/drivers/platform/x86/intel/int3472/tps68470.c b/drivers/platform/x86/intel/int3472/tps68470.c
+index 1e107fd49f828c..81ac4c69196309 100644
+--- a/drivers/platform/x86/intel/int3472/tps68470.c
++++ b/drivers/platform/x86/intel/int3472/tps68470.c
+@@ -152,6 +152,9 @@ static int skl_int3472_tps68470_probe(struct i2c_client *client)
+ int ret;
+ int i;
+
++ if (!adev)
++ return -ENODEV;
++
+ n_consumers = skl_int3472_fill_clk_pdata(&client->dev, &clk_pdata);
+ if (n_consumers < 0)
+ return n_consumers;
+diff --git a/drivers/platform/x86/serdev_helpers.h b/drivers/platform/x86/serdev_helpers.h
+index bcf3a0c356ea1b..3bc7fd8e1e1972 100644
+--- a/drivers/platform/x86/serdev_helpers.h
++++ b/drivers/platform/x86/serdev_helpers.h
+@@ -35,7 +35,7 @@ get_serdev_controller(const char *serial_ctrl_hid,
+ ctrl_adev = acpi_dev_get_first_match_dev(serial_ctrl_hid, serial_ctrl_uid, -1);
+ if (!ctrl_adev) {
+ pr_err("error could not get %s/%s serial-ctrl adev\n",
+- serial_ctrl_hid, serial_ctrl_uid);
++ serial_ctrl_hid, serial_ctrl_uid ?: "*");
+ return ERR_PTR(-ENODEV);
+ }
+
+@@ -43,7 +43,7 @@ get_serdev_controller(const char *serial_ctrl_hid,
+ ctrl_dev = get_device(acpi_get_first_physical_node(ctrl_adev));
+ if (!ctrl_dev) {
+ pr_err("error could not get %s/%s serial-ctrl physical node\n",
+- serial_ctrl_hid, serial_ctrl_uid);
++ serial_ctrl_hid, serial_ctrl_uid ?: "*");
+ ctrl_dev = ERR_PTR(-ENODEV);
+ goto put_ctrl_adev;
+ }
+diff --git a/drivers/ptp/ptp_clock.c b/drivers/ptp/ptp_clock.c
+index 77a36e7bddd54e..1a1edd87122d3d 100644
+--- a/drivers/ptp/ptp_clock.c
++++ b/drivers/ptp/ptp_clock.c
+@@ -217,6 +217,11 @@ static int ptp_getcycles64(struct ptp_clock_info *info, struct timespec64 *ts)
+ return info->gettime64(info, ts);
+ }
+
++static int ptp_enable(struct ptp_clock_info *ptp, struct ptp_clock_request *request, int on)
++{
++ return -EOPNOTSUPP;
++}
++
+ static void ptp_aux_kworker(struct kthread_work *work)
+ {
+ struct ptp_clock *ptp = container_of(work, struct ptp_clock,
+@@ -294,6 +299,9 @@ struct ptp_clock *ptp_clock_register(struct ptp_clock_info *info,
+ ptp->info->getcrosscycles = ptp->info->getcrosststamp;
+ }
+
++ if (!ptp->info->enable)
++ ptp->info->enable = ptp_enable;
++
+ if (ptp->info->do_aux_work) {
+ kthread_init_delayed_work(&ptp->aux_work, ptp_aux_kworker);
+ ptp->kworker = kthread_create_worker(0, "ptp%d", ptp->index);
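
Registering a default .enable stub means the chardev/ioctl paths can invoke info->enable() unconditionally instead of NULL-checking a driver-supplied hook at every call site. The same fill-in-safe-defaults pattern in miniature:

    #include <errno.h>
    #include <stdio.h>

    struct ops { int (*enable)(int on); };

    static int default_enable(int on) { (void)on; return -EOPNOTSUPP; }

    static void register_ops(struct ops *o)
    {
        if (!o->enable)          /* fill in a safe stub once, at registration */
            o->enable = default_enable;
    }

    int main(void)
    {
        struct ops o = { 0 };    /* driver left .enable unset */

        register_ops(&o);
        printf("enable() -> %d\n", o.enable(1)); /* -EOPNOTSUPP, no NULL deref */
        return 0;
    }
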
+diff --git a/drivers/pwm/pwm-microchip-core.c b/drivers/pwm/pwm-microchip-core.c
+index c1f2287b8e9748..12821b4bbf9756 100644
+--- a/drivers/pwm/pwm-microchip-core.c
++++ b/drivers/pwm/pwm-microchip-core.c
+@@ -327,7 +327,7 @@ static int mchp_core_pwm_apply_locked(struct pwm_chip *chip, struct pwm_device *
+ * mchp_core_pwm_calc_period().
+ * The period is locked and we cannot change this, so we abort.
+ */
+- if (hw_period_steps == MCHPCOREPWM_PERIOD_STEPS_MAX)
++ if (hw_period_steps > MCHPCOREPWM_PERIOD_STEPS_MAX)
+ return -EINVAL;
+
+ prescale = hw_prescale;
+diff --git a/drivers/remoteproc/omap_remoteproc.c b/drivers/remoteproc/omap_remoteproc.c
+index 9ae2e831456d57..3260dd512491e8 100644
+--- a/drivers/remoteproc/omap_remoteproc.c
++++ b/drivers/remoteproc/omap_remoteproc.c
+@@ -37,6 +37,10 @@
+
+ #include <linux/platform_data/dmtimer-omap.h>
+
++#ifdef CONFIG_ARM_DMA_USE_IOMMU
++#include <asm/dma-iommu.h>
++#endif
++
+ #include "omap_remoteproc.h"
+ #include "remoteproc_internal.h"
+
+@@ -1323,6 +1327,19 @@ static int omap_rproc_probe(struct platform_device *pdev)
+ /* All existing OMAP IPU and DSP processors have an MMU */
+ rproc->has_iommu = true;
+
++#ifdef CONFIG_ARM_DMA_USE_IOMMU
++ /*
++ * Throw away the ARM DMA mapping that we'll never use, so it doesn't
++ * interfere with the core rproc->domain and we get the right DMA ops.
++ */
++ if (pdev->dev.archdata.mapping) {
++ struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(&pdev->dev);
++
++ arm_iommu_detach_device(&pdev->dev);
++ arm_iommu_release_mapping(mapping);
++ }
++#endif
++
+ ret = omap_rproc_of_get_internal_memories(pdev, rproc);
+ if (ret)
+ return ret;
+diff --git a/drivers/rtc/rtc-zynqmp.c b/drivers/rtc/rtc-zynqmp.c
+index 08ed171bdab43a..b6f96c10196ae3 100644
+--- a/drivers/rtc/rtc-zynqmp.c
++++ b/drivers/rtc/rtc-zynqmp.c
+@@ -318,8 +318,8 @@ static int xlnx_rtc_probe(struct platform_device *pdev)
+ return ret;
+ }
+
+- /* Getting the rtc_clk info */
+- xrtcdev->rtc_clk = devm_clk_get_optional(&pdev->dev, "rtc_clk");
++ /* Getting the rtc info */
++ xrtcdev->rtc_clk = devm_clk_get_optional(&pdev->dev, "rtc");
+ if (IS_ERR(xrtcdev->rtc_clk)) {
+ if (PTR_ERR(xrtcdev->rtc_clk) != -EPROBE_DEFER)
+ dev_warn(&pdev->dev, "Device clock not found.\n");
+diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
+index 15066c112817a8..cb95b7b12051da 100644
+--- a/drivers/scsi/qla2xxx/qla_def.h
++++ b/drivers/scsi/qla2xxx/qla_def.h
+@@ -4098,6 +4098,8 @@ struct qla_hw_data {
+ uint32_t npiv_supported :1;
+ uint32_t pci_channel_io_perm_failure :1;
+ uint32_t fce_enabled :1;
++ uint32_t user_enabled_fce :1;
++ uint32_t fce_dump_buf_alloced :1;
+ uint32_t fac_supported :1;
+
+ uint32_t chip_reset_done :1;
+diff --git a/drivers/scsi/qla2xxx/qla_dfs.c b/drivers/scsi/qla2xxx/qla_dfs.c
+index a1545dad0c0ce2..08273520c77793 100644
+--- a/drivers/scsi/qla2xxx/qla_dfs.c
++++ b/drivers/scsi/qla2xxx/qla_dfs.c
+@@ -409,26 +409,31 @@ qla2x00_dfs_fce_show(struct seq_file *s, void *unused)
+
+ mutex_lock(&ha->fce_mutex);
+
+- seq_puts(s, "FCE Trace Buffer\n");
+- seq_printf(s, "In Pointer = %llx\n\n", (unsigned long long)ha->fce_wr);
+- seq_printf(s, "Base = %llx\n\n", (unsigned long long) ha->fce_dma);
+- seq_puts(s, "FCE Enable Registers\n");
+- seq_printf(s, "%08x %08x %08x %08x %08x %08x\n",
+- ha->fce_mb[0], ha->fce_mb[2], ha->fce_mb[3], ha->fce_mb[4],
+- ha->fce_mb[5], ha->fce_mb[6]);
+-
+- fce = (uint32_t *) ha->fce;
+- fce_start = (unsigned long long) ha->fce_dma;
+- for (cnt = 0; cnt < fce_calc_size(ha->fce_bufs) / 4; cnt++) {
+- if (cnt % 8 == 0)
+- seq_printf(s, "\n%llx: ",
+- (unsigned long long)((cnt * 4) + fce_start));
+- else
+- seq_putc(s, ' ');
+- seq_printf(s, "%08x", *fce++);
+- }
++ if (ha->flags.user_enabled_fce) {
++ seq_puts(s, "FCE Trace Buffer\n");
++ seq_printf(s, "In Pointer = %llx\n\n", (unsigned long long)ha->fce_wr);
++ seq_printf(s, "Base = %llx\n\n", (unsigned long long)ha->fce_dma);
++ seq_puts(s, "FCE Enable Registers\n");
++ seq_printf(s, "%08x %08x %08x %08x %08x %08x\n",
++ ha->fce_mb[0], ha->fce_mb[2], ha->fce_mb[3], ha->fce_mb[4],
++ ha->fce_mb[5], ha->fce_mb[6]);
++
++ fce = (uint32_t *)ha->fce;
++ fce_start = (unsigned long long)ha->fce_dma;
++ for (cnt = 0; cnt < fce_calc_size(ha->fce_bufs) / 4; cnt++) {
++ if (cnt % 8 == 0)
++ seq_printf(s, "\n%llx: ",
++ (unsigned long long)((cnt * 4) + fce_start));
++ else
++ seq_putc(s, ' ');
++ seq_printf(s, "%08x", *fce++);
++ }
+
+- seq_puts(s, "\nEnd\n");
++ seq_puts(s, "\nEnd\n");
++ } else {
++ seq_puts(s, "FCE Trace is currently not enabled\n");
++ seq_puts(s, "\techo [ 1 | 0 ] > fce\n");
++ }
+
+ mutex_unlock(&ha->fce_mutex);
+
+@@ -467,7 +472,7 @@ qla2x00_dfs_fce_release(struct inode *inode, struct file *file)
+ struct qla_hw_data *ha = vha->hw;
+ int rval;
+
+- if (ha->flags.fce_enabled)
++ if (ha->flags.fce_enabled || !ha->fce)
+ goto out;
+
+ mutex_lock(&ha->fce_mutex);
+@@ -488,11 +493,88 @@ qla2x00_dfs_fce_release(struct inode *inode, struct file *file)
+ return single_release(inode, file);
+ }
+
++static ssize_t
++qla2x00_dfs_fce_write(struct file *file, const char __user *buffer,
++ size_t count, loff_t *pos)
++{
++ struct seq_file *s = file->private_data;
++ struct scsi_qla_host *vha = s->private;
++ struct qla_hw_data *ha = vha->hw;
++ char *buf;
++ int rc = 0;
++ unsigned long enable;
++
++ if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
++ !IS_QLA27XX(ha) && !IS_QLA28XX(ha)) {
++ ql_dbg(ql_dbg_user, vha, 0xd034,
++ "this adapter does not support FCE.");
++ return -EINVAL;
++ }
++
++ buf = memdup_user_nul(buffer, count);
++ if (IS_ERR(buf)) {
++ ql_dbg(ql_dbg_user, vha, 0xd037,
++		    "failed to copy user buffer.");
++ return PTR_ERR(buf);
++ }
++
++	rc = kstrtoul(buf, 0, &enable);
++	if (rc)
++		goto out_free;
++	rc = count;
++
++ mutex_lock(&ha->fce_mutex);
++
++ if (enable) {
++ if (ha->flags.user_enabled_fce) {
++ mutex_unlock(&ha->fce_mutex);
++ goto out_free;
++ }
++ ha->flags.user_enabled_fce = 1;
++ if (!ha->fce) {
++ rc = qla2x00_alloc_fce_trace(vha);
++ if (rc) {
++ ha->flags.user_enabled_fce = 0;
++ mutex_unlock(&ha->fce_mutex);
++ goto out_free;
++ }
++
++ /* adjust fw dump buffer to take into account of this feature */
++ if (!ha->flags.fce_dump_buf_alloced)
++ qla2x00_alloc_fw_dump(vha);
++ }
++
++ if (!ha->flags.fce_enabled)
++ qla_enable_fce_trace(vha);
++
++ ql_dbg(ql_dbg_user, vha, 0xd045, "User enabled FCE.\n");
++ } else {
++ if (!ha->flags.user_enabled_fce) {
++ mutex_unlock(&ha->fce_mutex);
++ goto out_free;
++ }
++ ha->flags.user_enabled_fce = 0;
++ if (ha->flags.fce_enabled) {
++ qla2x00_disable_fce_trace(vha, NULL, NULL);
++ ha->flags.fce_enabled = 0;
++ }
++
++ qla2x00_free_fce_trace(ha);
++ /* no need to re-adjust fw dump buffer */
++
++ ql_dbg(ql_dbg_user, vha, 0xd04f, "User disabled FCE.\n");
++ }
++
++ mutex_unlock(&ha->fce_mutex);
++out_free:
++ kfree(buf);
++ return rc;
++}
++
+ static const struct file_operations dfs_fce_ops = {
+ .open = qla2x00_dfs_fce_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = qla2x00_dfs_fce_release,
++ .write = qla2x00_dfs_fce_write,
+ };
+
+ static int
+@@ -626,8 +708,6 @@ qla2x00_dfs_setup(scsi_qla_host_t *vha)
+ if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
+ !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
+ goto out;
+- if (!ha->fce)
+- goto out;
+
+ if (qla2x00_dfs_root)
+ goto create_dir;
+diff --git a/drivers/scsi/qla2xxx/qla_gbl.h b/drivers/scsi/qla2xxx/qla_gbl.h
+index cededfda9d0e31..e556f57c91af62 100644
+--- a/drivers/scsi/qla2xxx/qla_gbl.h
++++ b/drivers/scsi/qla2xxx/qla_gbl.h
+@@ -11,6 +11,9 @@
+ /*
+ * Global Function Prototypes in qla_init.c source file.
+ */
++int qla2x00_alloc_fce_trace(scsi_qla_host_t *);
++void qla2x00_free_fce_trace(struct qla_hw_data *ha);
++void qla_enable_fce_trace(scsi_qla_host_t *);
+ extern int qla2x00_initialize_adapter(scsi_qla_host_t *);
+ extern int qla24xx_post_prli_work(struct scsi_qla_host *vha, fc_port_t *fcport);
+
+diff --git a/drivers/scsi/qla2xxx/qla_init.c b/drivers/scsi/qla2xxx/qla_init.c
+index 31fc6a0eca3e80..79cdfec2bca356 100644
+--- a/drivers/scsi/qla2xxx/qla_init.c
++++ b/drivers/scsi/qla2xxx/qla_init.c
+@@ -2681,7 +2681,7 @@ qla83xx_nic_core_fw_load(scsi_qla_host_t *vha)
+ return rval;
+ }
+
+-static void qla_enable_fce_trace(scsi_qla_host_t *vha)
++void qla_enable_fce_trace(scsi_qla_host_t *vha)
+ {
+ int rval;
+ struct qla_hw_data *ha = vha->hw;
+@@ -3717,25 +3717,24 @@ qla24xx_chip_diag(scsi_qla_host_t *vha)
+ return rval;
+ }
+
+-static void
+-qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
++int qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
+ {
+ dma_addr_t tc_dma;
+ void *tc;
+ struct qla_hw_data *ha = vha->hw;
+
+ if (!IS_FWI2_CAPABLE(ha))
+- return;
++ return -EINVAL;
+
+ if (!IS_QLA25XX(ha) && !IS_QLA81XX(ha) && !IS_QLA83XX(ha) &&
+ !IS_QLA27XX(ha) && !IS_QLA28XX(ha))
+- return;
++ return -EINVAL;
+
+ if (ha->fce) {
+ ql_dbg(ql_dbg_init, vha, 0x00bd,
+ "%s: FCE Mem is already allocated.\n",
+ __func__);
+- return;
++ return -EIO;
+ }
+
+ /* Allocate memory for Fibre Channel Event Buffer. */
+@@ -3745,7 +3744,7 @@ qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
+ ql_log(ql_log_warn, vha, 0x00be,
+ "Unable to allocate (%d KB) for FCE.\n",
+ FCE_SIZE / 1024);
+- return;
++ return -ENOMEM;
+ }
+
+ ql_dbg(ql_dbg_init, vha, 0x00c0,
+@@ -3754,6 +3753,16 @@ qla2x00_alloc_fce_trace(scsi_qla_host_t *vha)
+ ha->fce_dma = tc_dma;
+ ha->fce = tc;
+ ha->fce_bufs = FCE_NUM_BUFFERS;
++ return 0;
++}
++
++void qla2x00_free_fce_trace(struct qla_hw_data *ha)
++{
++ if (!ha->fce)
++ return;
++ dma_free_coherent(&ha->pdev->dev, FCE_SIZE, ha->fce, ha->fce_dma);
++ ha->fce = NULL;
++ ha->fce_dma = 0;
+ }
+
+ static void
+@@ -3844,9 +3853,10 @@ qla2x00_alloc_fw_dump(scsi_qla_host_t *vha)
+ if (ha->tgt.atio_ring)
+ mq_size += ha->tgt.atio_q_length * sizeof(request_t);
+
+- qla2x00_alloc_fce_trace(vha);
+- if (ha->fce)
++ if (ha->fce) {
+ fce_size = sizeof(struct qla2xxx_fce_chain) + FCE_SIZE;
++ ha->flags.fce_dump_buf_alloced = 1;
++ }
+ qla2x00_alloc_eft_trace(vha);
+ if (ha->eft)
+ eft_size = EFT_SIZE;
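
The allocation/free pair that qla_init.c now exports follows a conventional
coherent-DMA pattern: the allocator reports an errno instead of failing
silently, and the free side clears the pointers so it is safe to call more
than once. A generic sketch of that pairing (demo_hw and friends are
invented names, not qla2xxx code):

  #include <linux/dma-mapping.h>
  #include <linux/errno.h>

  #define DEMO_BUF_SIZE  (64 * 1024)

  struct demo_hw {
      struct device *dev;
      void *trace_buf;
      dma_addr_t trace_dma;
  };

  static int demo_alloc_trace(struct demo_hw *hw)
  {
      if (hw->trace_buf)
          return -EEXIST;            /* already allocated */

      hw->trace_buf = dma_alloc_coherent(hw->dev, DEMO_BUF_SIZE,
                                         &hw->trace_dma, GFP_KERNEL);
      if (!hw->trace_buf)
          return -ENOMEM;
      return 0;
  }

  static void demo_free_trace(struct demo_hw *hw)
  {
      if (!hw->trace_buf)
          return;                    /* idempotent, like the free above */
      dma_free_coherent(hw->dev, DEMO_BUF_SIZE, hw->trace_buf,
                        hw->trace_dma);
      hw->trace_buf = NULL;
      hw->trace_dma = 0;
  }
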
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index adee6f60c96655..c9dde1ac9523e8 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -865,13 +865,18 @@ static void scsi_io_completion_action(struct scsi_cmnd *cmd, int result)
+ case 0x1a: /* start stop unit in progress */
+ case 0x1b: /* sanitize in progress */
+ case 0x1d: /* configuration in progress */
+- case 0x24: /* depopulation in progress */
+- case 0x25: /* depopulation restore in progress */
+ action = ACTION_DELAYED_RETRY;
+ break;
+ case 0x0a: /* ALUA state transition */
+ action = ACTION_DELAYED_REPREP;
+ break;
++ /*
++ * Depopulation might take many hours,
++ * thus it is not worthwhile to retry.
++ */
++ case 0x24: /* depopulation in progress */
++ case 0x25: /* depopulation restore in progress */
++ fallthrough;
+ default:
+ action = ACTION_FAIL;
+ break;
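
The scsi_lib.c hunk moves the two depopulation sense codes out of the retry
group and into the failure path. Reduced to a standalone function, the
regrouped switch behaves like this (enum and helper are invented for
illustration):

  enum demo_action { DEMO_FAIL, DEMO_DELAYED_RETRY };

  static enum demo_action demo_pick_action(unsigned char asc)
  {
      switch (asc) {
      case 0x1a:    /* short-lived states: worth a delayed retry */
      case 0x1b:
      case 0x1d:
          return DEMO_DELAYED_RETRY;
      /*
       * Depopulation can run for hours; a delayed retry would pin the
       * command for that whole time, so fail it like the default.
       */
      case 0x24:
      case 0x25:
      default:
          return DEMO_FAIL;
      }
  }
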
+diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
+index c9038284bc893d..0dc37fc6f23678 100644
+--- a/drivers/scsi/st.c
++++ b/drivers/scsi/st.c
+@@ -1027,6 +1027,11 @@ static int test_ready(struct scsi_tape *STp, int do_wait)
+ retval = new_session ? CHKRES_NEW_SESSION : CHKRES_READY;
+ break;
+ }
++ if (STp->first_tur) {
++ /* Don't set pos_unknown right after device recognition */
++ STp->pos_unknown = 0;
++ STp->first_tur = 0;
++ }
+
+ if (SRpnt != NULL)
+ st_release_request(SRpnt);
+@@ -4325,6 +4330,7 @@ static int st_probe(struct device *dev)
+ blk_queue_rq_timeout(tpnt->device->request_queue, ST_TIMEOUT);
+ tpnt->long_timeout = ST_LONG_TIMEOUT;
+ tpnt->try_dio = try_direct_io;
++ tpnt->first_tur = 1;
+
+ for (i = 0; i < ST_NBR_MODES; i++) {
+ STm = &(tpnt->modes[i]);
+diff --git a/drivers/scsi/st.h b/drivers/scsi/st.h
+index 7a68eaba7e810c..1aaaf5369a40fc 100644
+--- a/drivers/scsi/st.h
++++ b/drivers/scsi/st.h
+@@ -170,6 +170,7 @@ struct scsi_tape {
+ unsigned char rew_at_close; /* rewind necessary at close */
+ unsigned char inited;
+ unsigned char cleaning_req; /* cleaning requested? */
++ unsigned char first_tur; /* first TEST UNIT READY */
+ int block_size;
+ int min_block;
+ int max_block;
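
The st change is a one-shot flag: the first TEST UNIT READY after probe must
not flip pos_unknown, and the flag disarms itself afterwards. In isolation
(invented names, not the st driver itself):

  struct demo_tape {
      unsigned char first_tur;
      unsigned char pos_unknown;
  };

  static void demo_on_tur(struct demo_tape *t, int media_changed)
  {
      if (t->first_tur) {
          /* Ignore the initial unit attention from device recognition. */
          t->pos_unknown = 0;
          t->first_tur = 0;
      } else if (media_changed) {
          t->pos_unknown = 1;
      }
  }
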
+diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
+index b3c588b102d900..b8186feccdf5aa 100644
+--- a/drivers/scsi/storvsc_drv.c
++++ b/drivers/scsi/storvsc_drv.c
+@@ -1800,6 +1800,7 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
+
+ length = scsi_bufflen(scmnd);
+ payload = (struct vmbus_packet_mpb_array *)&cmd_request->mpb;
++ payload->range.len = 0;
+ payload_sz = 0;
+
+ if (scsi_sg_count(scmnd)) {
+diff --git a/drivers/soc/mediatek/mtk-devapc.c b/drivers/soc/mediatek/mtk-devapc.c
+index 56cc345552a430..d83a46334adbbe 100644
+--- a/drivers/soc/mediatek/mtk-devapc.c
++++ b/drivers/soc/mediatek/mtk-devapc.c
+@@ -273,23 +273,31 @@ static int mtk_devapc_probe(struct platform_device *pdev)
+ return -EINVAL;
+
+ devapc_irq = irq_of_parse_and_map(node, 0);
+- if (!devapc_irq)
+- return -EINVAL;
++ if (!devapc_irq) {
++ ret = -EINVAL;
++ goto err;
++ }
+
+ ctx->infra_clk = devm_clk_get_enabled(&pdev->dev, "devapc-infra-clock");
+- if (IS_ERR(ctx->infra_clk))
+- return -EINVAL;
++ if (IS_ERR(ctx->infra_clk)) {
++ ret = -EINVAL;
++ goto err;
++ }
+
+ ret = devm_request_irq(&pdev->dev, devapc_irq, devapc_violation_irq,
+ IRQF_TRIGGER_NONE, "devapc", ctx);
+ if (ret)
+- return ret;
++ goto err;
+
+ platform_set_drvdata(pdev, ctx);
+
+ start_devapc(ctx);
+
+ return 0;
++
++err:
++ iounmap(ctx->infra_base);
++ return ret;
+ }
+
+ static void mtk_devapc_remove(struct platform_device *pdev)
+@@ -297,6 +305,7 @@ static void mtk_devapc_remove(struct platform_device *pdev)
+ struct mtk_devapc_context *ctx = platform_get_drvdata(pdev);
+
+ stop_devapc(ctx);
++ iounmap(ctx->infra_base);
+ }
+
+ static struct platform_driver mtk_devapc_driver = {
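
The mtk-devapc fix applies the usual probe-unwind rule: once a resource is
mapped by hand (not devm-managed), every later failure path and the remove
callback must release it. A condensed sketch of that rule, with invented
names throughout:

  #include <linux/io.h>
  #include <linux/of_address.h>
  #include <linux/platform_device.h>

  static int demo_probe(struct platform_device *pdev)
  {
      void __iomem *base;
      int irq, ret;

      base = of_iomap(pdev->dev.of_node, 0);   /* not devm-managed */
      if (!base)
          return -EINVAL;

      irq = platform_get_irq(pdev, 0);
      if (irq < 0) {
          ret = irq;
          goto err_unmap;                      /* undo the ioremap */
      }

      platform_set_drvdata(pdev, base);
      return 0;

  err_unmap:
      iounmap(base);
      return ret;
  }

  static void demo_remove(struct platform_device *pdev)
  {
      iounmap(platform_get_drvdata(pdev));     /* mirror of probe */
  }
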
+diff --git a/drivers/soc/qcom/llcc-qcom.c b/drivers/soc/qcom/llcc-qcom.c
+index a470285f54a875..133dc483331352 100644
+--- a/drivers/soc/qcom/llcc-qcom.c
++++ b/drivers/soc/qcom/llcc-qcom.c
+@@ -2511,6 +2511,7 @@ static const struct llcc_slice_config x1e80100_data[] = {
+ .fixed_size = true,
+ .bonus_ways = 0xfff,
+ .cache_mode = 0,
++ .activate_on_init = true,
+ }, {
+ .usecase_id = LLCC_CAMEXP0,
+ .slice_id = 4,
+diff --git a/drivers/soc/qcom/smem_state.c b/drivers/soc/qcom/smem_state.c
+index e848cc9a3cf801..a8be3a2f33824f 100644
+--- a/drivers/soc/qcom/smem_state.c
++++ b/drivers/soc/qcom/smem_state.c
+@@ -116,7 +116,8 @@ struct qcom_smem_state *qcom_smem_state_get(struct device *dev,
+
+ if (args.args_count != 1) {
+ dev_err(dev, "invalid #qcom,smem-state-cells\n");
+- return ERR_PTR(-EINVAL);
++ state = ERR_PTR(-EINVAL);
++ goto put;
+ }
+
+ state = of_node_to_state(args.np);
+diff --git a/drivers/soc/qcom/socinfo.c b/drivers/soc/qcom/socinfo.c
+index ecfd3da9d5e877..c2f2a1ce4194b3 100644
+--- a/drivers/soc/qcom/socinfo.c
++++ b/drivers/soc/qcom/socinfo.c
+@@ -789,7 +789,7 @@ static int qcom_socinfo_probe(struct platform_device *pdev)
+ if (!qs->attr.soc_id || !qs->attr.revision)
+ return -ENOMEM;
+
+- if (offsetof(struct socinfo, serial_num) <= item_size) {
++ if (offsetofend(struct socinfo, serial_num) <= item_size) {
+ qs->attr.serial_number = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "%u",
+ le32_to_cpu(info->serial_num));
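
The socinfo one-liner is worth a second look: offsetof() only proves the
field *starts* inside the item, while offsetofend() proves the whole field
fits. A standalone demonstration (the struct and sizes are stand-ins, not
the real SMEM layout):

  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  #define offsetofend(T, m)  (offsetof(T, m) + sizeof(((T *)0)->m))

  struct demo_info {
      uint32_t fmt;
      uint32_t id;
      uint32_t serial_num;
  };

  int main(void)
  {
      size_t item_size = 10;  /* pretend the item is only 10 bytes */

      /* offsetof() passes even though only 2 of the 4 bytes exist;
       * offsetofend() requires the whole field to be present. */
      printf("offsetof    = %zu (<= %zu: %d)\n",
             offsetof(struct demo_info, serial_num), item_size,
             offsetof(struct demo_info, serial_num) <= item_size);
      printf("offsetofend = %zu (<= %zu: %d)\n",
             offsetofend(struct demo_info, serial_num), item_size,
             offsetofend(struct demo_info, serial_num) <= item_size);
      return 0;
  }
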
+diff --git a/drivers/soc/samsung/exynos-pmu.c b/drivers/soc/samsung/exynos-pmu.c
+index d8c53cec7f37ad..dd5256e5aae1ae 100644
+--- a/drivers/soc/samsung/exynos-pmu.c
++++ b/drivers/soc/samsung/exynos-pmu.c
+@@ -126,7 +126,7 @@ static int tensor_set_bits_atomic(void *ctx, unsigned int offset, u32 val,
+ if (ret)
+ return ret;
+ }
+- return ret;
++ return 0;
+ }
+
+ static bool tensor_is_atomic(unsigned int reg)
+diff --git a/drivers/spi/atmel-quadspi.c b/drivers/spi/atmel-quadspi.c
+index caecb2ad2a150d..02d83e2956cdf7 100644
+--- a/drivers/spi/atmel-quadspi.c
++++ b/drivers/spi/atmel-quadspi.c
+@@ -138,11 +138,15 @@
+ #define QSPI_WPSR_WPVSRC_MASK GENMASK(15, 8)
+ #define QSPI_WPSR_WPVSRC(src) (((src) << 8) & QSPI_WPSR_WPVSRC)
+
++#define ATMEL_QSPI_TIMEOUT 1000 /* ms */
++
+ struct atmel_qspi_caps {
+ bool has_qspick;
+ bool has_ricr;
+ };
+
++struct atmel_qspi_ops;
++
+ struct atmel_qspi {
+ void __iomem *regs;
+ void __iomem *mem;
+@@ -150,13 +154,22 @@ struct atmel_qspi {
+ struct clk *qspick;
+ struct platform_device *pdev;
+ const struct atmel_qspi_caps *caps;
++ const struct atmel_qspi_ops *ops;
+ resource_size_t mmap_size;
+ u32 pending;
++ u32 irq_mask;
+ u32 mr;
+ u32 scr;
+ struct completion cmd_completion;
+ };
+
++struct atmel_qspi_ops {
++ int (*set_cfg)(struct atmel_qspi *aq, const struct spi_mem_op *op,
++ u32 *offset);
++ int (*transfer)(struct spi_mem *mem, const struct spi_mem_op *op,
++ u32 offset);
++};
++
+ struct atmel_qspi_mode {
+ u8 cmd_buswidth;
+ u8 addr_buswidth;
+@@ -404,10 +417,67 @@ static int atmel_qspi_set_cfg(struct atmel_qspi *aq,
+ return 0;
+ }
+
++static int atmel_qspi_wait_for_completion(struct atmel_qspi *aq, u32 irq_mask)
++{
++ int err = 0;
++ u32 sr;
++
++ /* Poll INSTRuction End status */
++ sr = atmel_qspi_read(aq, QSPI_SR);
++ if ((sr & irq_mask) == irq_mask)
++ return 0;
++
++ /* Wait for INSTRuction End interrupt */
++ reinit_completion(&aq->cmd_completion);
++ aq->pending = sr & irq_mask;
++ aq->irq_mask = irq_mask;
++ atmel_qspi_write(irq_mask, aq, QSPI_IER);
++ if (!wait_for_completion_timeout(&aq->cmd_completion,
++ msecs_to_jiffies(ATMEL_QSPI_TIMEOUT)))
++ err = -ETIMEDOUT;
++ atmel_qspi_write(irq_mask, aq, QSPI_IDR);
++
++ return err;
++}
++
++static int atmel_qspi_transfer(struct spi_mem *mem,
++ const struct spi_mem_op *op, u32 offset)
++{
++ struct atmel_qspi *aq = spi_controller_get_devdata(mem->spi->controller);
++
++ /* Skip to the final steps if there is no data */
++ if (!op->data.nbytes)
++ return atmel_qspi_wait_for_completion(aq,
++ QSPI_SR_CMD_COMPLETED);
++
++ /* Dummy read of QSPI_IFR to synchronize APB and AHB accesses */
++ (void)atmel_qspi_read(aq, QSPI_IFR);
++
++ /* Send/Receive data */
++ if (op->data.dir == SPI_MEM_DATA_IN) {
++ memcpy_fromio(op->data.buf.in, aq->mem + offset,
++ op->data.nbytes);
++
++ /* Synchronize AHB and APB accesses again */
++ rmb();
++ } else {
++ memcpy_toio(aq->mem + offset, op->data.buf.out,
++ op->data.nbytes);
++
++ /* Synchronize AHB and APB accesses again */
++ wmb();
++ }
++
++ /* Release the chip-select */
++ atmel_qspi_write(QSPI_CR_LASTXFER, aq, QSPI_CR);
++
++ return atmel_qspi_wait_for_completion(aq, QSPI_SR_CMD_COMPLETED);
++}
++
+ static int atmel_qspi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
+ {
+ struct atmel_qspi *aq = spi_controller_get_devdata(mem->spi->controller);
+- u32 sr, offset;
++ u32 offset;
+ int err;
+
+ /*
+@@ -416,46 +486,20 @@ static int atmel_qspi_exec_op(struct spi_mem *mem, const struct spi_mem_op *op)
+ * when the flash memories overrun the controller's memory space.
+ */
+ if (op->addr.val + op->data.nbytes > aq->mmap_size)
+- return -ENOTSUPP;
++ return -EOPNOTSUPP;
++
++ if (op->addr.nbytes > 4)
++ return -EOPNOTSUPP;
+
+ err = pm_runtime_resume_and_get(&aq->pdev->dev);
+ if (err < 0)
+ return err;
+
+- err = atmel_qspi_set_cfg(aq, op, &offset);
++ err = aq->ops->set_cfg(aq, op, &offset);
+ if (err)
+ goto pm_runtime_put;
+
+- /* Skip to the final steps if there is no data */
+- if (op->data.nbytes) {
+- /* Dummy read of QSPI_IFR to synchronize APB and AHB accesses */
+- (void)atmel_qspi_read(aq, QSPI_IFR);
+-
+- /* Send/Receive data */
+- if (op->data.dir == SPI_MEM_DATA_IN)
+- memcpy_fromio(op->data.buf.in, aq->mem + offset,
+- op->data.nbytes);
+- else
+- memcpy_toio(aq->mem + offset, op->data.buf.out,
+- op->data.nbytes);
+-
+- /* Release the chip-select */
+- atmel_qspi_write(QSPI_CR_LASTXFER, aq, QSPI_CR);
+- }
+-
+- /* Poll INSTRuction End status */
+- sr = atmel_qspi_read(aq, QSPI_SR);
+- if ((sr & QSPI_SR_CMD_COMPLETED) == QSPI_SR_CMD_COMPLETED)
+- goto pm_runtime_put;
+-
+- /* Wait for INSTRuction End interrupt */
+- reinit_completion(&aq->cmd_completion);
+- aq->pending = sr & QSPI_SR_CMD_COMPLETED;
+- atmel_qspi_write(QSPI_SR_CMD_COMPLETED, aq, QSPI_IER);
+- if (!wait_for_completion_timeout(&aq->cmd_completion,
+- msecs_to_jiffies(1000)))
+- err = -ETIMEDOUT;
+- atmel_qspi_write(QSPI_SR_CMD_COMPLETED, aq, QSPI_IDR);
++ err = aq->ops->transfer(mem, op, offset);
+
+ pm_runtime_put:
+ pm_runtime_mark_last_busy(&aq->pdev->dev);
+@@ -571,12 +615,17 @@ static irqreturn_t atmel_qspi_interrupt(int irq, void *dev_id)
+ return IRQ_NONE;
+
+ aq->pending |= pending;
+- if ((aq->pending & QSPI_SR_CMD_COMPLETED) == QSPI_SR_CMD_COMPLETED)
++ if ((aq->pending & aq->irq_mask) == aq->irq_mask)
+ complete(&aq->cmd_completion);
+
+ return IRQ_HANDLED;
+ }
+
++static const struct atmel_qspi_ops atmel_qspi_ops = {
++ .set_cfg = atmel_qspi_set_cfg,
++ .transfer = atmel_qspi_transfer,
++};
++
+ static int atmel_qspi_probe(struct platform_device *pdev)
+ {
+ struct spi_controller *ctrl;
+@@ -601,6 +650,7 @@ static int atmel_qspi_probe(struct platform_device *pdev)
+
+ init_completion(&aq->cmd_completion);
+ aq->pdev = pdev;
++ aq->ops = &atmel_qspi_ops;
+
+ /* Map the registers */
+ aq->regs = devm_platform_ioremap_resource_byname(pdev, "qspi_base");
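
The atmel-quadspi rework factors the poll-then-sleep logic into
atmel_qspi_wait_for_completion() and lets the ISR complete on a
caller-chosen mask. The skeleton of that pattern, with the hardware
accesses stubbed out (demo_read_status() and all other names here are
assumptions, not the driver's API):

  #include <linux/completion.h>
  #include <linux/errno.h>
  #include <linux/interrupt.h>
  #include <linux/jiffies.h>

  #define DEMO_TIMEOUT_MS  1000

  struct demo_ctrl {
      struct completion done;
      u32 pending;
      u32 irq_mask;
  };

  static u32 demo_read_status(struct demo_ctrl *c);  /* assumed MMIO read */

  static int demo_wait_for_irq(struct demo_ctrl *c, u32 mask)
  {
      u32 sr = demo_read_status(c);

      /* Fast path: the bits may already be set, no sleep needed. */
      if ((sr & mask) == mask)
          return 0;

      reinit_completion(&c->done);
      c->pending = sr & mask;
      c->irq_mask = mask;    /* the ISR completes on this mask */
      /* ...enable the interrupt sources in `mask` here (IER write),
       * and mask them again after the wait (IDR write)... */
      if (!wait_for_completion_timeout(&c->done,
                                       msecs_to_jiffies(DEMO_TIMEOUT_MS)))
          return -ETIMEDOUT;
      return 0;
  }

  static irqreturn_t demo_isr(int irq, void *dev_id)
  {
      struct demo_ctrl *c = dev_id;
      u32 pending = demo_read_status(c);

      if (!pending)
          return IRQ_NONE;

      c->pending |= pending;
      if ((c->pending & c->irq_mask) == c->irq_mask)
          complete(&c->done);
      return IRQ_HANDLED;
  }
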
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index bdf17eafd3598d..f43059e1b5c28e 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -165,6 +165,7 @@ struct sci_port {
+ static struct sci_port sci_ports[SCI_NPORTS];
+ static unsigned long sci_ports_in_use;
+ static struct uart_driver sci_uart_driver;
++static bool sci_uart_earlycon;
+
+ static inline struct sci_port *
+ to_sci_port(struct uart_port *uart)
+@@ -3450,6 +3451,7 @@ static int sci_probe_single(struct platform_device *dev,
+ static int sci_probe(struct platform_device *dev)
+ {
+ struct plat_sci_port *p;
++ struct resource *res;
+ struct sci_port *sp;
+ unsigned int dev_id;
+ int ret;
+@@ -3479,6 +3481,26 @@ static int sci_probe(struct platform_device *dev)
+ }
+
+ sp = &sci_ports[dev_id];
++
++ /*
++ * In case:
++ * - the probed port alias is zero (as the one used by earlycon), and
++ * - the earlycon is still active (e.g., "earlycon keep_bootcon" in
++ * bootargs)
++ *
++ * defer probing this serial port. This is a debug scenario and the user
++ * must be aware of it.
++ *
++ * Except when the probed port is the same as the earlycon port.
++ */
++
++ res = platform_get_resource(dev, IORESOURCE_MEM, 0);
++ if (!res)
++ return -ENODEV;
++
++ if (sci_uart_earlycon && sp == &sci_ports[0] && sp->port.mapbase != res->start)
++ return dev_err_probe(&dev->dev, -EBUSY, "sci_port[0] is used by earlycon!\n");
++
+ platform_set_drvdata(dev, sp);
+
+ ret = sci_probe_single(dev, dev_id, p, sp);
+@@ -3562,7 +3584,7 @@ sh_early_platform_init_buffer("earlyprintk", &sci_driver,
+ early_serial_buf, ARRAY_SIZE(early_serial_buf));
+ #endif
+ #ifdef CONFIG_SERIAL_SH_SCI_EARLYCON
+-static struct plat_sci_port port_cfg __initdata;
++static struct plat_sci_port port_cfg;
+
+ static int __init early_console_setup(struct earlycon_device *device,
+ int type)
+@@ -3575,6 +3597,7 @@ static int __init early_console_setup(struct earlycon_device *device,
+ port_cfg.type = type;
+ sci_ports[0].cfg = &port_cfg;
+ sci_ports[0].params = sci_probe_regmap(&port_cfg);
++ sci_uart_earlycon = true;
+ port_cfg.scscr = sci_serial_in(&sci_ports[0].port, SCSCR);
+ sci_serial_out(&sci_ports[0].port, SCSCR,
+ SCSCR_RE | SCSCR_TE | port_cfg.scscr);
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index 777392914819d7..1d636578c1efc5 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -287,7 +287,7 @@ static void cdns_uart_handle_rx(void *dev_id, unsigned int isrstatus)
+ continue;
+ }
+
+- if (uart_handle_sysrq_char(port, data))
++ if (uart_prepare_sysrq_char(port, data))
+ continue;
+
+ if (is_rxbs_support) {
+@@ -495,7 +495,7 @@ static irqreturn_t cdns_uart_isr(int irq, void *dev_id)
+ !(readl(port->membase + CDNS_UART_CR) & CDNS_UART_CR_RX_DIS))
+ cdns_uart_handle_rx(dev_id, isrstatus);
+
+- uart_port_unlock(port);
++ uart_unlock_and_check_sysrq(port);
+ return IRQ_HANDLED;
+ }
+
+@@ -1380,9 +1380,7 @@ static void cdns_uart_console_write(struct console *co, const char *s,
+ unsigned int imr, ctrl;
+ int locked = 1;
+
+- if (port->sysrq)
+- locked = 0;
+- else if (oops_in_progress)
++ if (oops_in_progress)
+ locked = uart_port_trylock_irqsave(port, &flags);
+ else
+ uart_port_lock_irqsave(port, &flags);
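
The xilinx_uartps change switches to the deferred-sysrq helpers, which queue
the magic character while the port lock is held and only run handle_sysrq()
after the lock is dropped. Roughly, an RX interrupt handler using that
pattern looks like the sketch below (demo_read_status()/demo_read_char()
are assumed register accessors, not real APIs):

  #include <linux/interrupt.h>
  #include <linux/serial_core.h>
  #include <linux/tty_flip.h>

  #define DEMO_RX_READY  BIT(0)

  static u32 demo_read_status(struct uart_port *port);  /* assumed */
  static u8 demo_read_char(struct uart_port *port);     /* assumed */

  static irqreturn_t demo_uart_isr(int irq, void *dev_id)
  {
      struct uart_port *port = dev_id;
      u32 status;

      uart_port_lock(port);
      for (status = demo_read_status(port); status & DEMO_RX_READY;
           status = demo_read_status(port)) {
          u8 ch = demo_read_char(port);

          /* Stash a possible sysrq char; never run handlers under
           * the port lock. */
          if (uart_prepare_sysrq_char(port, ch))
              continue;
          uart_insert_char(port, status, 0, ch, TTY_NORMAL);
      }
      tty_flip_buffer_push(&port->state->port);
      /* Drops the port lock, then invokes handle_sysrq() if queued. */
      uart_unlock_and_check_sysrq(port);
      return IRQ_HANDLED;
  }
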
+diff --git a/drivers/tty/vt/selection.c b/drivers/tty/vt/selection.c
+index 564341f1a74f3f..0bd6544e30a6b3 100644
+--- a/drivers/tty/vt/selection.c
++++ b/drivers/tty/vt/selection.c
+@@ -192,6 +192,20 @@ int set_selection_user(const struct tiocl_selection __user *sel,
+ if (copy_from_user(&v, sel, sizeof(*sel)))
+ return -EFAULT;
+
++ /*
++ * TIOCL_SELCLEAR, TIOCL_SELPOINTER and TIOCL_SELMOUSEREPORT are OK to
++ * use without CAP_SYS_ADMIN as they do not modify the selection.
++ */
++ switch (v.sel_mode) {
++ case TIOCL_SELCLEAR:
++ case TIOCL_SELPOINTER:
++ case TIOCL_SELMOUSEREPORT:
++ break;
++ default:
++ if (!capable(CAP_SYS_ADMIN))
++ return -EPERM;
++ }
++
+ return set_selection_kernel(&v, tty);
+ }
+
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 96842ce817af47..be5564ed8c018a 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -3345,8 +3345,6 @@ int tioclinux(struct tty_struct *tty, unsigned long arg)
+
+ switch (type) {
+ case TIOCL_SETSEL:
+- if (!capable(CAP_SYS_ADMIN))
+- return -EPERM;
+ return set_selection_user(param, tty);
+ case TIOCL_PASTESEL:
+ if (!capable(CAP_SYS_ADMIN))
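
Between them, the selection.c and vt.c hunks move the CAP_SYS_ADMIN check
down so that read-only selection operations no longer require privilege.
The resulting permission split, as a stand-alone helper (illustrative only,
not the vt code itself):

  #include <linux/capability.h>
  #include <linux/errno.h>
  #include <linux/tiocl.h>

  static int demo_check_sel_perm(int sel_mode)
  {
      switch (sel_mode) {
      case TIOCL_SELCLEAR:
      case TIOCL_SELPOINTER:
      case TIOCL_SELMOUSEREPORT:
          return 0;      /* these do not modify the selection */
      default:
          return capable(CAP_SYS_ADMIN) ? 0 : -EPERM;
      }
  }
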
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 6cc9e61cca07de..b786cba9a270f4 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -10323,16 +10323,6 @@ int ufshcd_system_thaw(struct device *dev)
+ EXPORT_SYMBOL_GPL(ufshcd_system_thaw);
+ #endif /* CONFIG_PM_SLEEP */
+
+-/**
+- * ufshcd_dealloc_host - deallocate Host Bus Adapter (HBA)
+- * @hba: pointer to Host Bus Adapter (HBA)
+- */
+-void ufshcd_dealloc_host(struct ufs_hba *hba)
+-{
+- scsi_host_put(hba->host);
+-}
+-EXPORT_SYMBOL_GPL(ufshcd_dealloc_host);
+-
+ /**
+ * ufshcd_set_dma_mask - Set dma mask based on the controller
+ * addressing capability
+@@ -10351,12 +10341,26 @@ static int ufshcd_set_dma_mask(struct ufs_hba *hba)
+ return dma_set_mask_and_coherent(hba->dev, DMA_BIT_MASK(32));
+ }
+
++/**
++ * ufshcd_devres_release - devres cleanup handler, invoked during release of
++ * hba->dev
++ * @host: pointer to SCSI host
++ */
++static void ufshcd_devres_release(void *host)
++{
++ scsi_host_put(host);
++}
++
+ /**
+ * ufshcd_alloc_host - allocate Host Bus Adapter (HBA)
+ * @dev: pointer to device handle
+ * @hba_handle: driver private handle
+ *
+ * Return: 0 on success, non-zero value on failure.
++ *
++ * NOTE: There is no corresponding ufshcd_dealloc_host() because this function
++ * keeps track of its allocations using devres and deallocates everything on
++ * device removal automatically.
+ */
+ int ufshcd_alloc_host(struct device *dev, struct ufs_hba **hba_handle)
+ {
+@@ -10378,6 +10382,13 @@ int ufshcd_alloc_host(struct device *dev, struct ufs_hba **hba_handle)
+ err = -ENOMEM;
+ goto out_error;
+ }
++
++ err = devm_add_action_or_reset(dev, ufshcd_devres_release,
++ host);
++ if (err)
++ return dev_err_probe(dev, err,
++ "failed to add ufshcd dealloc action\n");
++
+ host->nr_maps = HCTX_TYPE_POLL + 1;
+ hba = shost_priv(host);
+ hba->host = host;
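
The ufshcd conversion replaces a manually exported ufshcd_dealloc_host()
with a devres action tied to the device's lifetime. The idiom in miniature
(all names invented; kfree() stands in for the scsi_host_put() above):

  #include <linux/device.h>
  #include <linux/slab.h>

  struct demo_obj {
      int payload;
  };

  static void demo_obj_release(void *data)
  {
      kfree(data);    /* stand-in for the real put/release call */
  }

  static int demo_obj_alloc(struct device *dev, struct demo_obj **out)
  {
      struct demo_obj *obj = kzalloc(sizeof(*obj), GFP_KERNEL);
      int err;

      if (!obj)
          return -ENOMEM;

      /* If registration fails, the action runs immediately, so there
       * is no manual error unwinding -- and no demo_obj_free() export. */
      err = devm_add_action_or_reset(dev, demo_obj_release, obj);
      if (err)
          return err;

      *out = obj;
      return 0;
  }
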
+diff --git a/drivers/ufs/host/ufs-qcom.c b/drivers/ufs/host/ufs-qcom.c
+index 989692fb91083f..e12c5f9f795638 100644
+--- a/drivers/ufs/host/ufs-qcom.c
++++ b/drivers/ufs/host/ufs-qcom.c
+@@ -155,8 +155,9 @@ static int ufs_qcom_ice_program_key(struct ufs_hba *hba,
+ {
+ struct ufs_qcom_host *host = ufshcd_get_variant(hba);
+ union ufs_crypto_cap_entry cap;
+- bool config_enable =
+- cfg->config_enable & UFS_CRYPTO_CONFIGURATION_ENABLE;
++
++ if (!(cfg->config_enable & UFS_CRYPTO_CONFIGURATION_ENABLE))
++ return qcom_ice_evict_key(host->ice, slot);
+
+ /* Only AES-256-XTS has been tested so far. */
+ cap = hba->crypto_cap_array[cfg->crypto_cap_idx];
+@@ -164,14 +165,11 @@ static int ufs_qcom_ice_program_key(struct ufs_hba *hba,
+ cap.key_size != UFS_CRYPTO_KEY_SIZE_256)
+ return -EOPNOTSUPP;
+
+- if (config_enable)
+- return qcom_ice_program_key(host->ice,
+- QCOM_ICE_CRYPTO_ALG_AES_XTS,
+- QCOM_ICE_CRYPTO_KEY_SIZE_256,
+- cfg->crypto_key,
+- cfg->data_unit_size, slot);
+- else
+- return qcom_ice_evict_key(host->ice, slot);
++ return qcom_ice_program_key(host->ice,
++ QCOM_ICE_CRYPTO_ALG_AES_XTS,
++ QCOM_ICE_CRYPTO_KEY_SIZE_256,
++ cfg->crypto_key,
++ cfg->data_unit_size, slot);
+ }
+
+ #else
+diff --git a/drivers/ufs/host/ufshcd-pci.c b/drivers/ufs/host/ufshcd-pci.c
+index 54e0cc0653a247..850ff71130d5e4 100644
+--- a/drivers/ufs/host/ufshcd-pci.c
++++ b/drivers/ufs/host/ufshcd-pci.c
+@@ -562,7 +562,6 @@ static void ufshcd_pci_remove(struct pci_dev *pdev)
+ pm_runtime_forbid(&pdev->dev);
+ pm_runtime_get_noresume(&pdev->dev);
+ ufshcd_remove(hba);
+- ufshcd_dealloc_host(hba);
+ }
+
+ /**
+@@ -607,7 +606,6 @@ ufshcd_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ err = ufshcd_init(hba, mmio_base, pdev->irq);
+ if (err) {
+ dev_err(&pdev->dev, "Initialization failed\n");
+- ufshcd_dealloc_host(hba);
+ return err;
+ }
+
+diff --git a/drivers/ufs/host/ufshcd-pltfrm.c b/drivers/ufs/host/ufshcd-pltfrm.c
+index 505572d4fa878c..ffe5d1d2b21588 100644
+--- a/drivers/ufs/host/ufshcd-pltfrm.c
++++ b/drivers/ufs/host/ufshcd-pltfrm.c
+@@ -465,21 +465,17 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ struct device *dev = &pdev->dev;
+
+ mmio_base = devm_platform_ioremap_resource(pdev, 0);
+- if (IS_ERR(mmio_base)) {
+- err = PTR_ERR(mmio_base);
+- goto out;
+- }
++ if (IS_ERR(mmio_base))
++ return PTR_ERR(mmio_base);
+
+ irq = platform_get_irq(pdev, 0);
+- if (irq < 0) {
+- err = irq;
+- goto out;
+- }
++ if (irq < 0)
++ return irq;
+
+ err = ufshcd_alloc_host(dev, &hba);
+ if (err) {
+ dev_err(dev, "Allocation failed\n");
+- goto out;
++ return err;
+ }
+
+ hba->vops = vops;
+@@ -488,13 +484,13 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ if (err) {
+ dev_err(dev, "%s: clock parse failed %d\n",
+ __func__, err);
+- goto dealloc_host;
++ return err;
+ }
+ err = ufshcd_parse_regulator_info(hba);
+ if (err) {
+ dev_err(dev, "%s: regulator init failed %d\n",
+ __func__, err);
+- goto dealloc_host;
++ return err;
+ }
+
+ ufshcd_init_lanes_per_dir(hba);
+@@ -502,25 +498,20 @@ int ufshcd_pltfrm_init(struct platform_device *pdev,
+ err = ufshcd_parse_operating_points(hba);
+ if (err) {
+ dev_err(dev, "%s: OPP parse failed %d\n", __func__, err);
+- goto dealloc_host;
++ return err;
+ }
+
+ err = ufshcd_init(hba, mmio_base, irq);
+ if (err) {
+ dev_err_probe(dev, err, "Initialization failed with error %d\n",
+ err);
+- goto dealloc_host;
++ return err;
+ }
+
+ pm_runtime_set_active(dev);
+ pm_runtime_enable(dev);
+
+ return 0;
+-
+-dealloc_host:
+- ufshcd_dealloc_host(hba);
+-out:
+- return err;
+ }
+ EXPORT_SYMBOL_GPL(ufshcd_pltfrm_init);
+
+@@ -534,7 +525,6 @@ void ufshcd_pltfrm_remove(struct platform_device *pdev)
+
+ pm_runtime_get_sync(&pdev->dev);
+ ufshcd_remove(hba);
+- ufshcd_dealloc_host(hba);
+ pm_runtime_disable(&pdev->dev);
+ pm_runtime_put_noidle(&pdev->dev);
+ }
+diff --git a/drivers/usb/gadget/function/f_tcm.c b/drivers/usb/gadget/function/f_tcm.c
+index 48dee166e5d89c..7b23631f47449b 100644
+--- a/drivers/usb/gadget/function/f_tcm.c
++++ b/drivers/usb/gadget/function/f_tcm.c
+@@ -245,7 +245,6 @@ static int bot_send_write_request(struct usbg_cmd *cmd)
+ {
+ struct f_uas *fu = cmd->fu;
+ struct se_cmd *se_cmd = &cmd->se_cmd;
+- struct usb_gadget *gadget = fuas_to_gadget(fu);
+ int ret;
+
+ init_completion(&cmd->write_complete);
+@@ -256,22 +255,6 @@ static int bot_send_write_request(struct usbg_cmd *cmd)
+ return -EINVAL;
+ }
+
+- if (!gadget->sg_supported) {
+- cmd->data_buf = kmalloc(se_cmd->data_length, GFP_KERNEL);
+- if (!cmd->data_buf)
+- return -ENOMEM;
+-
+- fu->bot_req_out->buf = cmd->data_buf;
+- } else {
+- fu->bot_req_out->buf = NULL;
+- fu->bot_req_out->num_sgs = se_cmd->t_data_nents;
+- fu->bot_req_out->sg = se_cmd->t_data_sg;
+- }
+-
+- fu->bot_req_out->complete = usbg_data_write_cmpl;
+- fu->bot_req_out->length = se_cmd->data_length;
+- fu->bot_req_out->context = cmd;
+-
+ ret = usbg_prepare_w_request(cmd, fu->bot_req_out);
+ if (ret)
+ goto cleanup;
+@@ -973,6 +956,7 @@ static void usbg_data_write_cmpl(struct usb_ep *ep, struct usb_request *req)
+ return;
+
+ cleanup:
++ target_put_sess_cmd(se_cmd);
+ transport_generic_free_cmd(&cmd->se_cmd, 0);
+ }
+
+@@ -1065,7 +1049,7 @@ static void usbg_cmd_work(struct work_struct *work)
+
+ out:
+ transport_send_check_condition_and_sense(se_cmd,
+- TCM_UNSUPPORTED_SCSI_OPCODE, 1);
++ TCM_UNSUPPORTED_SCSI_OPCODE, 0);
+ }
+
+ static struct usbg_cmd *usbg_get_cmd(struct f_uas *fu,
+@@ -1193,7 +1177,7 @@ static void bot_cmd_work(struct work_struct *work)
+
+ out:
+ transport_send_check_condition_and_sense(se_cmd,
+- TCM_UNSUPPORTED_SCSI_OPCODE, 1);
++ TCM_UNSUPPORTED_SCSI_OPCODE, 0);
+ }
+
+ static int bot_submit_command(struct f_uas *fu,
+@@ -1969,43 +1953,39 @@ static int tcm_bind(struct usb_configuration *c, struct usb_function *f)
+ bot_intf_desc.bInterfaceNumber = iface;
+ uasp_intf_desc.bInterfaceNumber = iface;
+ fu->iface = iface;
+- ep = usb_ep_autoconfig_ss(gadget, &uasp_ss_bi_desc,
+- &uasp_bi_ep_comp_desc);
++ ep = usb_ep_autoconfig(gadget, &uasp_fs_bi_desc);
+ if (!ep)
+ goto ep_fail;
+
+ fu->ep_in = ep;
+
+- ep = usb_ep_autoconfig_ss(gadget, &uasp_ss_bo_desc,
+- &uasp_bo_ep_comp_desc);
++ ep = usb_ep_autoconfig(gadget, &uasp_fs_bo_desc);
+ if (!ep)
+ goto ep_fail;
+ fu->ep_out = ep;
+
+- ep = usb_ep_autoconfig_ss(gadget, &uasp_ss_status_desc,
+- &uasp_status_in_ep_comp_desc);
++ ep = usb_ep_autoconfig(gadget, &uasp_fs_status_desc);
+ if (!ep)
+ goto ep_fail;
+ fu->ep_status = ep;
+
+- ep = usb_ep_autoconfig_ss(gadget, &uasp_ss_cmd_desc,
+- &uasp_cmd_comp_desc);
++ ep = usb_ep_autoconfig(gadget, &uasp_fs_cmd_desc);
+ if (!ep)
+ goto ep_fail;
+ fu->ep_cmd = ep;
+
+ /* Assume endpoint addresses are the same for both speeds */
+- uasp_bi_desc.bEndpointAddress = uasp_ss_bi_desc.bEndpointAddress;
+- uasp_bo_desc.bEndpointAddress = uasp_ss_bo_desc.bEndpointAddress;
++ uasp_bi_desc.bEndpointAddress = uasp_fs_bi_desc.bEndpointAddress;
++ uasp_bo_desc.bEndpointAddress = uasp_fs_bo_desc.bEndpointAddress;
+ uasp_status_desc.bEndpointAddress =
+- uasp_ss_status_desc.bEndpointAddress;
+- uasp_cmd_desc.bEndpointAddress = uasp_ss_cmd_desc.bEndpointAddress;
+-
+- uasp_fs_bi_desc.bEndpointAddress = uasp_ss_bi_desc.bEndpointAddress;
+- uasp_fs_bo_desc.bEndpointAddress = uasp_ss_bo_desc.bEndpointAddress;
+- uasp_fs_status_desc.bEndpointAddress =
+- uasp_ss_status_desc.bEndpointAddress;
+- uasp_fs_cmd_desc.bEndpointAddress = uasp_ss_cmd_desc.bEndpointAddress;
++ uasp_fs_status_desc.bEndpointAddress;
++ uasp_cmd_desc.bEndpointAddress = uasp_fs_cmd_desc.bEndpointAddress;
++
++ uasp_ss_bi_desc.bEndpointAddress = uasp_fs_bi_desc.bEndpointAddress;
++ uasp_ss_bo_desc.bEndpointAddress = uasp_fs_bo_desc.bEndpointAddress;
++ uasp_ss_status_desc.bEndpointAddress =
++ uasp_fs_status_desc.bEndpointAddress;
++ uasp_ss_cmd_desc.bEndpointAddress = uasp_fs_cmd_desc.bEndpointAddress;
+
+ ret = usb_assign_descriptors(f, uasp_fs_function_desc,
+ uasp_hs_function_desc, uasp_ss_function_desc,
+diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
+index 3bf1043cd7957c..d63c2d266d0735 100644
+--- a/drivers/vfio/platform/vfio_platform_common.c
++++ b/drivers/vfio/platform/vfio_platform_common.c
+@@ -393,6 +393,11 @@ static ssize_t vfio_platform_read_mmio(struct vfio_platform_region *reg,
+
+ count = min_t(size_t, count, reg->size - off);
+
++ if (off >= reg->size)
++ return -EINVAL;
++
++ count = min_t(size_t, count, reg->size - off);
++
+ if (!reg->ioaddr) {
+ reg->ioaddr =
+ ioremap(reg->addr, reg->size);
+@@ -477,6 +482,11 @@ static ssize_t vfio_platform_write_mmio(struct vfio_platform_region *reg,
+
+ count = min_t(size_t, count, reg->size - off);
+
++ if (off >= reg->size)
++ return -EINVAL;
++
++ count = min_t(size_t, count, reg->size - off);
++
+ if (!reg->ioaddr) {
+ reg->ioaddr =
+ ioremap(reg->addr, reg->size);
+diff --git a/fs/binfmt_flat.c b/fs/binfmt_flat.c
+index 390808ce935d50..b5b5ca1a44f70b 100644
+--- a/fs/binfmt_flat.c
++++ b/fs/binfmt_flat.c
+@@ -478,7 +478,7 @@ static int load_flat_file(struct linux_binprm *bprm,
+ * 28 bits (256 MB) is way more than reasonable in this case.
+ * If some top bits are set we have probable binary corruption.
+ */
+- if ((text_len | data_len | bss_len | stack_len | full_data) >> 28) {
++ if ((text_len | data_len | bss_len | stack_len | relocs | full_data) >> 28) {
+ pr_err("bad header\n");
+ ret = -ENOEXEC;
+ goto err;
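
The binfmt_flat fix extends a compact overflow guard: OR all the size fields
together and shift once, so a single comparison rejects any field with bits
at or above bit 28. Standalone, with made-up values:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint32_t text_len = 0x100000, data_len = 0x200000, bss_len = 0x1000,
               stack_len = 0x4000, relocs = 0x20000000, full_data = 0x201000;

      /* If any field is >= 1 << 28, some bit survives the shift and the
       * header is rejected; here relocs has bit 29 set. */
      if ((text_len | data_len | bss_len | stack_len | relocs | full_data) >> 28)
          puts("bad header");
      else
          puts("header sizes ok");
      return 0;
  }
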
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 4fb521d91b0612..559c177456e6a0 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -242,7 +242,7 @@ int btrfs_drop_extents(struct btrfs_trans_handle *trans,
+ if (args->drop_cache)
+ btrfs_drop_extent_map_range(inode, args->start, args->end - 1, false);
+
+- if (args->start >= inode->disk_i_size && !args->replace_extent)
++ if (data_race(args->start >= inode->disk_i_size) && !args->replace_extent)
+ modify_tree = 0;
+
+ update_refs = (btrfs_root_id(root) != BTRFS_TREE_LOG_OBJECTID);
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index 9d9ce308488dd3..f7e7d864f41440 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -7200,8 +7200,6 @@ noinline int can_nocow_extent(struct inode *inode, u64 offset, u64 *len,
+ ret = -EAGAIN;
+ goto out;
+ }
+-
+- cond_resched();
+ }
+
+ if (file_extent)
+@@ -10144,6 +10142,8 @@ static int btrfs_swap_activate(struct swap_info_struct *sis, struct file *file,
+ ret = -EINTR;
+ goto out;
+ }
++
++ cond_resched();
+ }
+
+ if (bsi.block_len)
+diff --git a/fs/btrfs/ordered-data.c b/fs/btrfs/ordered-data.c
+index 2104d60c216166..4ed11b089ea95a 100644
+--- a/fs/btrfs/ordered-data.c
++++ b/fs/btrfs/ordered-data.c
+@@ -1229,6 +1229,18 @@ struct btrfs_ordered_extent *btrfs_split_ordered_extent(
+ */
+ if (WARN_ON_ONCE(len >= ordered->num_bytes))
+ return ERR_PTR(-EINVAL);
++ /*
++ * If our ordered extent had an error there's no point in continuing.
++ * The error may have come from a transaction abort done either by this
++ * task or some other concurrent task, and the transaction abort path
++ * iterates over all existing ordered extents and sets the flag
++ * BTRFS_ORDERED_IOERR on them.
++ */
++ if (unlikely(flags & (1U << BTRFS_ORDERED_IOERR))) {
++ const int fs_error = BTRFS_FS_ERROR(fs_info);
++
++ return fs_error ? ERR_PTR(fs_error) : ERR_PTR(-EIO);
++ }
+ /* We cannot split partially completed ordered extents. */
+ if (ordered->bytes_left) {
+ ASSERT(!(flags & ~BTRFS_ORDERED_TYPE_FLAGS));
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index 4fcd6cd4c1c244..fa9025c05d4e29 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1916,8 +1916,11 @@ int btrfs_qgroup_cleanup_dropped_subvolume(struct btrfs_fs_info *fs_info, u64 su
+ /*
+ * It's squota and the subvolume still has numbers needed for future
+ * accounting, in this case we can not delete it. Just skip it.
++ *
++ * Or the qgroup was already removed by a qgroup rescan. In both cases
++ * it is safe to ignore the error.
+ */
+- if (ret == -EBUSY)
++ if (ret == -EBUSY || ret == -ENOENT)
+ ret = 0;
+ return ret;
+ }
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index adcbdc970f9ea4..f24a80857cd600 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -4405,8 +4405,18 @@ int btrfs_reloc_cow_block(struct btrfs_trans_handle *trans,
+ WARN_ON(!first_cow && level == 0);
+
+ node = rc->backref_cache.path[level];
+- BUG_ON(node->bytenr != buf->start &&
+- node->new_bytenr != buf->start);
++
++ /*
++ * If node->bytenr != buf->start and node->new_bytenr !=
++ * buf->start then we've got the wrong backref node for what we
++ * expected to see here and the cache is incorrect.
++ */
++ if (unlikely(node->bytenr != buf->start && node->new_bytenr != buf->start)) {
++ btrfs_err(fs_info,
++"bytenr %llu was found but our backref cache was expecting %llu or %llu",
++ buf->start, node->bytenr, node->new_bytenr);
++ return -EUCLEAN;
++ }
+
+ btrfs_backref_drop_node_buffer(node);
+ atomic_inc(&cow->refs);
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 0fc873af891f65..82dd9ee89fbc5b 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -275,8 +275,10 @@ static noinline int join_transaction(struct btrfs_fs_info *fs_info,
+ cur_trans = fs_info->running_transaction;
+ if (cur_trans) {
+ if (TRANS_ABORTED(cur_trans)) {
++ const int abort_error = cur_trans->aborted;
++
+ spin_unlock(&fs_info->trans_lock);
+- return cur_trans->aborted;
++ return abort_error;
+ }
+ if (btrfs_blocked_trans_types[cur_trans->state] & type) {
+ spin_unlock(&fs_info->trans_lock);
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index f48242262b2177..f3af6330d74a7d 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -5698,18 +5698,18 @@ static int ceph_mds_auth_match(struct ceph_mds_client *mdsc,
+ *
+ * All the other cases --> mismatch
+ */
++ bool path_matched = true;
+ char *first = strstr(_tpath, auth->match.path);
+- if (first != _tpath) {
+- if (free_tpath)
+- kfree(_tpath);
+- return 0;
++ if (first != _tpath ||
++ (tlen > len && _tpath[len] != '/')) {
++ path_matched = false;
+ }
+
+- if (tlen > len && _tpath[len] != '/') {
+- if (free_tpath)
+- kfree(_tpath);
++ if (free_tpath)
++ kfree(_tpath);
++
++ if (!path_matched)
+ return 0;
+- }
+ }
+ }
+
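The ceph rewrite is a leak fix by restructuring: compute the match verdict
first, free the duplicated path exactly once, then act on the verdict. The
same single-free shape in generic userspace C (invented names):

  #include <stdbool.h>
  #include <stdlib.h>
  #include <string.h>

  /* Returns 1 on match, 0 on mismatch, -1 on allocation failure. */
  static int demo_path_match(const char *tpath, const char *want)
  {
      bool matched = true;
      char *dup = strdup(tpath);
      size_t len, tlen;

      if (!dup)
          return -1;

      len = strlen(want);
      tlen = strlen(dup);
      /* must match at the start, and on a path-component boundary */
      if (strstr(dup, want) != dup || (tlen > len && dup[len] != '/'))
          matched = false;

      free(dup);    /* single free point: no early return can leak it */

      return matched ? 1 : 0;
  }
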
+diff --git a/fs/exec.c b/fs/exec.c
+index 9c349a74f38589..67513bd606c249 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1351,7 +1351,28 @@ int begin_new_exec(struct linux_binprm * bprm)
+ set_dumpable(current->mm, SUID_DUMP_USER);
+
+ perf_event_exec();
+- __set_task_comm(me, kbasename(bprm->filename), true);
++
++ /*
++ * If the original filename was empty, alloc_bprm() made up a path
++ * that will probably not be useful to admins running ps or similar.
++ * Let's fix it up to be something reasonable.
++ */
++ if (bprm->comm_from_dentry) {
++ /*
++ * Hold RCU lock to keep the name from being freed behind our back.
++ * Use acquire semantics to make sure the terminating NUL from
++ * __d_alloc() is seen.
++ *
++ * Note, we're deliberately sloppy here. We don't need to care about
++ * detecting a concurrent rename and just want a terminated name.
++ */
++ rcu_read_lock();
++ __set_task_comm(me, smp_load_acquire(&bprm->file->f_path.dentry->d_name.name),
++ true);
++ rcu_read_unlock();
++ } else {
++ __set_task_comm(me, kbasename(bprm->filename), true);
++ }
+
+ /* An exec changes our domain. We are no longer part of the thread
+ group */
+@@ -1527,11 +1548,13 @@ static struct linux_binprm *alloc_bprm(int fd, struct filename *filename, int fl
+ if (fd == AT_FDCWD || filename->name[0] == '/') {
+ bprm->filename = filename->name;
+ } else {
+- if (filename->name[0] == '\0')
++ if (filename->name[0] == '\0') {
+ bprm->fdpath = kasprintf(GFP_KERNEL, "/dev/fd/%d", fd);
+- else
++ bprm->comm_from_dentry = 1;
++ } else {
+ bprm->fdpath = kasprintf(GFP_KERNEL, "/dev/fd/%d/%s",
+ fd, filename->name);
++ }
+ if (!bprm->fdpath)
+ goto out_free;
+
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 5ea644b679add5..73da51ac5a0349 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -5020,26 +5020,29 @@ static int statmount_mnt_opts(struct kstatmount *s, struct seq_file *seq)
+ {
+ struct vfsmount *mnt = s->mnt;
+ struct super_block *sb = mnt->mnt_sb;
++ size_t start = seq->count;
+ int err;
+
+- if (sb->s_op->show_options) {
+- size_t start = seq->count;
++ err = security_sb_show_options(seq, sb);
++ if (err)
++ return err;
+
++ if (sb->s_op->show_options) {
+ err = sb->s_op->show_options(seq, mnt->mnt_root);
+ if (err)
+ return err;
++ }
+
+- if (unlikely(seq_has_overflowed(seq)))
+- return -EAGAIN;
++ if (unlikely(seq_has_overflowed(seq)))
++ return -EAGAIN;
+
+- if (seq->count == start)
+- return 0;
++ if (seq->count == start)
++ return 0;
+
+- /* skip leading comma */
+- memmove(seq->buf + start, seq->buf + start + 1,
+- seq->count - start - 1);
+- seq->count--;
+- }
++ /* skip leading comma */
++ memmove(seq->buf + start, seq->buf + start + 1,
++ seq->count - start - 1);
++ seq->count--;
+
+ return 0;
+ }
+@@ -5050,22 +5053,29 @@ static int statmount_string(struct kstatmount *s, u64 flag)
+ size_t kbufsize;
+ struct seq_file *seq = &s->seq;
+ struct statmount *sm = &s->sm;
++ u32 start, *offp;
++
++ /* Reserve an empty string at the beginning for any unset offsets */
++ if (!seq->count)
++ seq_putc(seq, 0);
++
++ start = seq->count;
+
+ switch (flag) {
+ case STATMOUNT_FS_TYPE:
+- sm->fs_type = seq->count;
++ offp = &sm->fs_type;
+ ret = statmount_fs_type(s, seq);
+ break;
+ case STATMOUNT_MNT_ROOT:
+- sm->mnt_root = seq->count;
++ offp = &sm->mnt_root;
+ ret = statmount_mnt_root(s, seq);
+ break;
+ case STATMOUNT_MNT_POINT:
+- sm->mnt_point = seq->count;
++ offp = &sm->mnt_point;
+ ret = statmount_mnt_point(s, seq);
+ break;
+ case STATMOUNT_MNT_OPTS:
+- sm->mnt_opts = seq->count;
++ offp = &sm->mnt_opts;
+ ret = statmount_mnt_opts(s, seq);
+ break;
+ default:
+@@ -5087,6 +5097,7 @@ static int statmount_string(struct kstatmount *s, u64 flag)
+
+ seq->buf[seq->count++] = '\0';
+ sm->mask |= flag;
++ *offp = start;
+ return 0;
+ }
+
+diff --git a/fs/nfs/Kconfig b/fs/nfs/Kconfig
+index 0eb20012792f07..d3f76101ad4b91 100644
+--- a/fs/nfs/Kconfig
++++ b/fs/nfs/Kconfig
+@@ -170,7 +170,8 @@ config ROOT_NFS
+
+ config NFS_FSCACHE
+ bool "Provide NFS client caching support"
+- depends on NFS_FS=m && NETFS_SUPPORT || NFS_FS=y && NETFS_SUPPORT=y
++ depends on NFS_FS
++ select NETFS_SUPPORT
+ select FSCACHE
+ help
+ Say Y here if you want NFS data to be cached locally on disc through
+diff --git a/fs/nfs/flexfilelayout/flexfilelayout.c b/fs/nfs/flexfilelayout/flexfilelayout.c
+index f78115c6c2c12a..a1cfe4cc60c4b1 100644
+--- a/fs/nfs/flexfilelayout/flexfilelayout.c
++++ b/fs/nfs/flexfilelayout/flexfilelayout.c
+@@ -847,6 +847,9 @@ ff_layout_pg_init_read(struct nfs_pageio_descriptor *pgio,
+ struct nfs4_pnfs_ds *ds;
+ u32 ds_idx;
+
++ if (NFS_SERVER(pgio->pg_inode)->flags &
++ (NFS_MOUNT_SOFT|NFS_MOUNT_SOFTERR))
++ pgio->pg_maxretrans = io_maxretrans;
+ retry:
+ pnfs_generic_pg_check_layout(pgio, req);
+ /* Use full layout for now */
+@@ -860,6 +863,8 @@ ff_layout_pg_init_read(struct nfs_pageio_descriptor *pgio,
+ if (!pgio->pg_lseg)
+ goto out_nolseg;
+ }
++ /* Reset wb_nio, since getting layout segment was successful */
++ req->wb_nio = 0;
+
+ ds = ff_layout_get_ds_for_read(pgio, &ds_idx);
+ if (!ds) {
+@@ -876,14 +881,24 @@ ff_layout_pg_init_read(struct nfs_pageio_descriptor *pgio,
+ pgm->pg_bsize = mirror->mirror_ds->ds_versions[0].rsize;
+
+ pgio->pg_mirror_idx = ds_idx;
+-
+- if (NFS_SERVER(pgio->pg_inode)->flags &
+- (NFS_MOUNT_SOFT|NFS_MOUNT_SOFTERR))
+- pgio->pg_maxretrans = io_maxretrans;
+ return;
+ out_nolseg:
+- if (pgio->pg_error < 0)
+- return;
++ if (pgio->pg_error < 0) {
++ if (pgio->pg_error != -EAGAIN)
++ return;
++ /* Retry getting layout segment if lower layer returned -EAGAIN */
++ if (pgio->pg_maxretrans && req->wb_nio++ > pgio->pg_maxretrans) {
++ if (NFS_SERVER(pgio->pg_inode)->flags & NFS_MOUNT_SOFTERR)
++ pgio->pg_error = -ETIMEDOUT;
++ else
++ pgio->pg_error = -EIO;
++ return;
++ }
++ pgio->pg_error = 0;
++ /* Sleep for 1 second before retrying */
++ ssleep(1);
++ goto retry;
++ }
+ out_mds:
+ trace_pnfs_mds_fallback_pg_init_read(pgio->pg_inode,
+ 0, NFS4_MAX_UINT64, IOMODE_READ,
+diff --git a/fs/nfsd/nfs4xdr.c b/fs/nfsd/nfs4xdr.c
+index 8d25aef51ad150..2fc1919dd3c09f 100644
+--- a/fs/nfsd/nfs4xdr.c
++++ b/fs/nfsd/nfs4xdr.c
+@@ -5747,15 +5747,14 @@ nfsd4_encode_operation(struct nfsd4_compoundres *resp, struct nfsd4_op *op)
+ struct nfs4_stateowner *so = resp->cstate.replay_owner;
+ struct svc_rqst *rqstp = resp->rqstp;
+ const struct nfsd4_operation *opdesc = op->opdesc;
+- int post_err_offset;
++ unsigned int op_status_offset;
+ nfsd4_enc encoder;
+- __be32 *p;
+
+- p = xdr_reserve_space(xdr, 8);
+- if (!p)
++ if (xdr_stream_encode_u32(xdr, op->opnum) != XDR_UNIT)
++ goto release;
++ op_status_offset = xdr->buf->len;
++ if (!xdr_reserve_space(xdr, XDR_UNIT))
+ goto release;
+- *p++ = cpu_to_be32(op->opnum);
+- post_err_offset = xdr->buf->len;
+
+ if (op->opnum == OP_ILLEGAL)
+ goto status;
+@@ -5796,20 +5795,21 @@ nfsd4_encode_operation(struct nfsd4_compoundres *resp, struct nfsd4_op *op)
+ * bug if we had to do this on a non-idempotent op:
+ */
+ warn_on_nonidempotent_op(op);
+- xdr_truncate_encode(xdr, post_err_offset);
++ xdr_truncate_encode(xdr, op_status_offset + XDR_UNIT);
+ }
+ if (so) {
+- int len = xdr->buf->len - post_err_offset;
++ int len = xdr->buf->len - (op_status_offset + XDR_UNIT);
+
+ so->so_replay.rp_status = op->status;
+ so->so_replay.rp_buflen = len;
+- read_bytes_from_xdr_buf(xdr->buf, post_err_offset,
++ read_bytes_from_xdr_buf(xdr->buf, op_status_offset + XDR_UNIT,
+ so->so_replay.rp_buf, len);
+ }
+ status:
+ op->status = nfsd4_map_status(op->status,
+ resp->cstate.minorversion);
+- *p = op->status;
++ write_bytes_to_xdr_buf(xdr->buf, op_status_offset,
++ &op->status, XDR_UNIT);
+ release:
+ if (opdesc && opdesc->op_release)
+ opdesc->op_release(&op->u);
+diff --git a/fs/nilfs2/inode.c b/fs/nilfs2/inode.c
+index aaca34ec678f26..fd2de4b2bef1a8 100644
+--- a/fs/nilfs2/inode.c
++++ b/fs/nilfs2/inode.c
+@@ -1219,7 +1219,7 @@ int nilfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ if (size) {
+ if (phys && blkphy << blkbits == phys + size) {
+ /* The current extent goes on */
+- size += n << blkbits;
++ size += (u64)n << blkbits;
+ } else {
+ /* Terminate the current extent */
+ ret = fiemap_fill_next_extent(
+@@ -1232,14 +1232,14 @@ int nilfs_fiemap(struct inode *inode, struct fiemap_extent_info *fieinfo,
+ flags = FIEMAP_EXTENT_MERGED;
+ logical = blkoff << blkbits;
+ phys = blkphy << blkbits;
+- size = n << blkbits;
++ size = (u64)n << blkbits;
+ }
+ } else {
+ /* Start a new extent */
+ flags = FIEMAP_EXTENT_MERGED;
+ logical = blkoff << blkbits;
+ phys = blkphy << blkbits;
+- size = n << blkbits;
++ size = (u64)n << blkbits;
+ }
+ blkoff += n;
+ }
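
The nilfs2 fiemap fix is the classic 32-bit shift trap: the shift happens in
the type of n, not in the type of the destination, so the cast must come
first. A two-line demonstration:

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
      uint32_t n = 0x00400000;       /* 4M blocks */
      unsigned int blkbits = 12;     /* 4 KiB block size */
      uint64_t bad, good;

      bad  = n << blkbits;           /* shift done in 32 bits: wraps to 0 */
      good = (uint64_t)n << blkbits; /* widened first, as in the fix */

      printf("bad  = 0x%llx\n", (unsigned long long)bad);
      printf("good = 0x%llx\n", (unsigned long long)good);
      return 0;
  }
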
+diff --git a/fs/ocfs2/dir.c b/fs/ocfs2/dir.c
+index 213206ebdd5810..7799f4d16ce999 100644
+--- a/fs/ocfs2/dir.c
++++ b/fs/ocfs2/dir.c
+@@ -1065,26 +1065,39 @@ int ocfs2_find_entry(const char *name, int namelen,
+ {
+ struct buffer_head *bh;
+ struct ocfs2_dir_entry *res_dir = NULL;
++ int ret = 0;
+
+ if (ocfs2_dir_indexed(dir))
+ return ocfs2_find_entry_dx(name, namelen, dir, lookup);
+
++ if (unlikely(i_size_read(dir) <= 0)) {
++ ret = -EFSCORRUPTED;
++ mlog_errno(ret);
++ goto out;
++ }
+ /*
+ * The unindexed dir code only uses part of the lookup
+ * structure, so there's no reason to push it down further
+ * than this.
+ */
+- if (OCFS2_I(dir)->ip_dyn_features & OCFS2_INLINE_DATA_FL)
++ if (OCFS2_I(dir)->ip_dyn_features & OCFS2_INLINE_DATA_FL) {
++ if (unlikely(i_size_read(dir) > dir->i_sb->s_blocksize)) {
++ ret = -EFSCORRUPTED;
++ mlog_errno(ret);
++ goto out;
++ }
+ bh = ocfs2_find_entry_id(name, namelen, dir, &res_dir);
+- else
++ } else {
+ bh = ocfs2_find_entry_el(name, namelen, dir, &res_dir);
++ }
+
+ if (bh == NULL)
+ return -ENOENT;
+
+ lookup->dl_leaf_bh = bh;
+ lookup->dl_entry = res_dir;
+- return 0;
++out:
++ return ret;
+ }
+
+ /*
+@@ -2010,6 +2023,7 @@ int ocfs2_lookup_ino_from_name(struct inode *dir, const char *name,
+ *
+ * Return 0 if the name does not exist
+ * Return -EEXIST if the directory contains the name
++ * Return -EFSCORRUPTED if corruption is found
+ *
+ * Callers should have i_rwsem + a cluster lock on dir
+ */
+@@ -2023,9 +2037,12 @@ int ocfs2_check_dir_for_entry(struct inode *dir,
+ trace_ocfs2_check_dir_for_entry(
+ (unsigned long long)OCFS2_I(dir)->ip_blkno, namelen, name);
+
+- if (ocfs2_find_entry(name, namelen, dir, &lookup) == 0) {
++ ret = ocfs2_find_entry(name, namelen, dir, &lookup);
++ if (ret == 0) {
+ ret = -EEXIST;
+ mlog_errno(ret);
++ } else if (ret == -ENOENT) {
++ ret = 0;
+ }
+
+ ocfs2_free_dir_lookup_result(&lookup);
+diff --git a/fs/ocfs2/super.c b/fs/ocfs2/super.c
+index c79b4291777f63..1e87554f6f4104 100644
+--- a/fs/ocfs2/super.c
++++ b/fs/ocfs2/super.c
+@@ -2340,7 +2340,7 @@ static int ocfs2_verify_volume(struct ocfs2_dinode *di,
+ mlog(ML_ERROR, "found superblock with incorrect block "
+ "size bits: found %u, should be 9, 10, 11, or 12\n",
+ blksz_bits);
+- } else if ((1 << le32_to_cpu(blksz_bits)) != blksz) {
++ } else if ((1 << blksz_bits) != blksz) {
+ mlog(ML_ERROR, "found superblock with incorrect block "
+ "size: found %u, should be %u\n", 1 << blksz_bits, blksz);
+ } else if (le16_to_cpu(di->id2.i_super.s_major_rev_level) !=
+diff --git a/fs/ocfs2/symlink.c b/fs/ocfs2/symlink.c
+index d4c5fdcfa1e464..f5cf2255dc0972 100644
+--- a/fs/ocfs2/symlink.c
++++ b/fs/ocfs2/symlink.c
+@@ -65,7 +65,7 @@ static int ocfs2_fast_symlink_read_folio(struct file *f, struct folio *folio)
+
+ if (status < 0) {
+ mlog_errno(status);
+- return status;
++ goto out;
+ }
+
+ fe = (struct ocfs2_dinode *) bh->b_data;
+@@ -76,9 +76,10 @@ static int ocfs2_fast_symlink_read_folio(struct file *f, struct folio *folio)
+ memcpy(kaddr, link, len + 1);
+ kunmap_atomic(kaddr);
+ SetPageUptodate(page);
++out:
+ unlock_page(page);
+ brelse(bh);
+- return 0;
++ return status;
+ }
+
+ const struct address_space_operations ocfs2_fast_symlink_aops = {
+diff --git a/fs/proc/array.c b/fs/proc/array.c
+index 34a47fb0c57f25..5e4f7b411fbdb9 100644
+--- a/fs/proc/array.c
++++ b/fs/proc/array.c
+@@ -500,7 +500,7 @@ static int do_task_stat(struct seq_file *m, struct pid_namespace *ns,
+ * a program is not able to use ptrace(2) in that case. It is
+ * safe because the task has stopped executing permanently.
+ */
+- if (permitted && (task->flags & (PF_EXITING|PF_DUMPCORE))) {
++ if (permitted && (task->flags & (PF_EXITING|PF_DUMPCORE|PF_POSTCOREDUMP))) {
+ if (try_get_task_stack(task)) {
+ eip = KSTK_EIP(task);
+ esp = KSTK_ESP(task);
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 9a4b3608b7d6f3..94785abc9b1b2d 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -320,7 +320,7 @@ struct smb_version_operations {
+ int (*handle_cancelled_mid)(struct mid_q_entry *, struct TCP_Server_Info *);
+ void (*downgrade_oplock)(struct TCP_Server_Info *server,
+ struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache);
++ __u16 epoch, bool *purge_cache);
+ /* process transaction2 response */
+ bool (*check_trans2)(struct mid_q_entry *, struct TCP_Server_Info *,
+ char *, int);
+@@ -515,12 +515,12 @@ struct smb_version_operations {
+ /* if we can do cache read operations */
+ bool (*is_read_op)(__u32);
+ /* set oplock level for the inode */
+- void (*set_oplock_level)(struct cifsInodeInfo *, __u32, unsigned int,
+- bool *);
++ void (*set_oplock_level)(struct cifsInodeInfo *cinode, __u32 oplock, __u16 epoch,
++ bool *purge_cache);
+ /* create lease context buffer for CREATE request */
+ char * (*create_lease_buf)(u8 *lease_key, u8 oplock);
+ /* parse lease context buffer and return oplock/epoch info */
+- __u8 (*parse_lease_buf)(void *buf, unsigned int *epoch, char *lkey);
++ __u8 (*parse_lease_buf)(void *buf, __u16 *epoch, char *lkey);
+ ssize_t (*copychunk_range)(const unsigned int,
+ struct cifsFileInfo *src_file,
+ struct cifsFileInfo *target_file,
+@@ -1415,7 +1415,7 @@ struct cifs_fid {
+ __u8 create_guid[16];
+ __u32 access;
+ struct cifs_pending_open *pending_open;
+- unsigned int epoch;
++ __u16 epoch;
+ #ifdef CONFIG_CIFS_DEBUG2
+ __u64 mid;
+ #endif /* CIFS_DEBUG2 */
+@@ -1448,7 +1448,7 @@ struct cifsFileInfo {
+ bool oplock_break_cancelled:1;
+ bool status_file_deleted:1; /* file has been deleted */
+ bool offload:1; /* offload final part of _put to a wq */
+- unsigned int oplock_epoch; /* epoch from the lease break */
++ __u16 oplock_epoch; /* epoch from the lease break */
+ __u32 oplock_level; /* oplock/lease level from the lease break */
+ int count;
+ spinlock_t file_info_lock; /* protects four flag/count fields above */
+@@ -1545,7 +1545,7 @@ struct cifsInodeInfo {
+ spinlock_t open_file_lock; /* protects openFileList */
+ __u32 cifsAttrs; /* e.g. DOS archive bit, sparse, compressed, system */
+ unsigned int oplock; /* oplock/lease level we have */
+- unsigned int epoch; /* used to track lease state changes */
++ __u16 epoch; /* used to track lease state changes */
+ #define CIFS_INODE_PENDING_OPLOCK_BREAK (0) /* oplock break in progress */
+ #define CIFS_INODE_PENDING_WRITERS (1) /* Writes in progress */
+ #define CIFS_INODE_FLAG_UNUSED (2) /* Unused flag */
+diff --git a/fs/smb/client/dir.c b/fs/smb/client/dir.c
+index 864b194dbaa0a0..1822493dd0842e 100644
+--- a/fs/smb/client/dir.c
++++ b/fs/smb/client/dir.c
+@@ -627,7 +627,7 @@ int cifs_mknod(struct mnt_idmap *idmap, struct inode *inode,
+ goto mknod_out;
+ }
+
+- trace_smb3_mknod_enter(xid, tcon->ses->Suid, tcon->tid, full_path);
++ trace_smb3_mknod_enter(xid, tcon->tid, tcon->ses->Suid, full_path);
+
+ rc = tcon->ses->server->ops->make_node(xid, inode, direntry, tcon,
+ full_path, mode,
+@@ -635,9 +635,9 @@ int cifs_mknod(struct mnt_idmap *idmap, struct inode *inode,
+
+ mknod_out:
+ if (rc)
+- trace_smb3_mknod_err(xid, tcon->ses->Suid, tcon->tid, rc);
++ trace_smb3_mknod_err(xid, tcon->tid, tcon->ses->Suid, rc);
+ else
+- trace_smb3_mknod_done(xid, tcon->ses->Suid, tcon->tid);
++ trace_smb3_mknod_done(xid, tcon->tid, tcon->ses->Suid);
+
+ free_dentry_path(page);
+ free_xid(xid);
+diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c
+index db3695eddcf9d5..c70f4961c4eb78 100644
+--- a/fs/smb/client/smb1ops.c
++++ b/fs/smb/client/smb1ops.c
+@@ -377,7 +377,7 @@ coalesce_t2(char *second_buf, struct smb_hdr *target_hdr)
+ static void
+ cifs_downgrade_oplock(struct TCP_Server_Info *server,
+ struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache)
++ __u16 epoch, bool *purge_cache)
+ {
+ cifs_set_oplock_level(cinode, oplock);
+ }
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index b935c1a62d10cf..7dfd3eb3847b33 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -298,8 +298,8 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_query_info_compound_enter(xid, ses->Suid,
+- tcon->tid, full_path);
++ trace_smb3_query_info_compound_enter(xid, tcon->tid,
++ ses->Suid, full_path);
+ break;
+ case SMB2_OP_POSIX_QUERY_INFO:
+ rqst[num_rqst].rq_iov = &vars->qi_iov;
+@@ -334,18 +334,18 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_posix_query_info_compound_enter(xid, ses->Suid,
+- tcon->tid, full_path);
++ trace_smb3_posix_query_info_compound_enter(xid, tcon->tid,
++ ses->Suid, full_path);
+ break;
+ case SMB2_OP_DELETE:
+- trace_smb3_delete_enter(xid, ses->Suid, tcon->tid, full_path);
++ trace_smb3_delete_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+ case SMB2_OP_MKDIR:
+ /*
+ * Directories are created through parameters in the
+ * SMB2_open() call.
+ */
+- trace_smb3_mkdir_enter(xid, ses->Suid, tcon->tid, full_path);
++ trace_smb3_mkdir_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+ case SMB2_OP_RMDIR:
+ rqst[num_rqst].rq_iov = &vars->si_iov[0];
+@@ -363,7 +363,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ smb2_set_next_command(tcon, &rqst[num_rqst]);
+ smb2_set_related(&rqst[num_rqst++]);
+- trace_smb3_rmdir_enter(xid, ses->Suid, tcon->tid, full_path);
++ trace_smb3_rmdir_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+ case SMB2_OP_SET_EOF:
+ rqst[num_rqst].rq_iov = &vars->si_iov[0];
+@@ -398,7 +398,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_set_eof_enter(xid, ses->Suid, tcon->tid, full_path);
++ trace_smb3_set_eof_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+ case SMB2_OP_SET_INFO:
+ rqst[num_rqst].rq_iov = &vars->si_iov[0];
+@@ -429,8 +429,8 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_set_info_compound_enter(xid, ses->Suid,
+- tcon->tid, full_path);
++ trace_smb3_set_info_compound_enter(xid, tcon->tid,
++ ses->Suid, full_path);
+ break;
+ case SMB2_OP_RENAME:
+ rqst[num_rqst].rq_iov = &vars->si_iov[0];
+@@ -469,7 +469,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_rename_enter(xid, ses->Suid, tcon->tid, full_path);
++ trace_smb3_rename_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+ case SMB2_OP_HARDLINK:
+ rqst[num_rqst].rq_iov = &vars->si_iov[0];
+@@ -496,7 +496,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ smb2_set_next_command(tcon, &rqst[num_rqst]);
+ smb2_set_related(&rqst[num_rqst++]);
+- trace_smb3_hardlink_enter(xid, ses->Suid, tcon->tid, full_path);
++ trace_smb3_hardlink_enter(xid, tcon->tid, ses->Suid, full_path);
+ break;
+ case SMB2_OP_SET_REPARSE:
+ rqst[num_rqst].rq_iov = vars->io_iov;
+@@ -523,8 +523,8 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_set_reparse_compound_enter(xid, ses->Suid,
+- tcon->tid, full_path);
++ trace_smb3_set_reparse_compound_enter(xid, tcon->tid,
++ ses->Suid, full_path);
+ break;
+ case SMB2_OP_GET_REPARSE:
+ rqst[num_rqst].rq_iov = vars->io_iov;
+@@ -549,8 +549,8 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ goto finished;
+ }
+ num_rqst++;
+- trace_smb3_get_reparse_compound_enter(xid, ses->Suid,
+- tcon->tid, full_path);
++ trace_smb3_get_reparse_compound_enter(xid, tcon->tid,
++ ses->Suid, full_path);
+ break;
+ case SMB2_OP_QUERY_WSL_EA:
+ rqst[num_rqst].rq_iov = &vars->ea_iov;
+@@ -663,11 +663,11 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ }
+ SMB2_query_info_free(&rqst[num_rqst++]);
+ if (rc)
+- trace_smb3_query_info_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_query_info_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ else
+- trace_smb3_query_info_compound_done(xid, ses->Suid,
+- tcon->tid);
++ trace_smb3_query_info_compound_done(xid, tcon->tid,
++ ses->Suid);
+ break;
+ case SMB2_OP_POSIX_QUERY_INFO:
+ idata = in_iov[i].iov_base;
+@@ -690,15 +690,15 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+
+ SMB2_query_info_free(&rqst[num_rqst++]);
+ if (rc)
+- trace_smb3_posix_query_info_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_posix_query_info_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ else
+- trace_smb3_posix_query_info_compound_done(xid, ses->Suid,
+- tcon->tid);
++ trace_smb3_posix_query_info_compound_done(xid, tcon->tid,
++ ses->Suid);
+ break;
+ case SMB2_OP_DELETE:
+ if (rc)
+- trace_smb3_delete_err(xid, ses->Suid, tcon->tid, rc);
++ trace_smb3_delete_err(xid, tcon->tid, ses->Suid, rc);
+ else {
+ /*
+ * If dentry (hence, inode) is NULL, lease break is going to
+@@ -706,59 +706,59 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ */
+ if (inode)
+ cifs_mark_open_handles_for_deleted_file(inode, full_path);
+- trace_smb3_delete_done(xid, ses->Suid, tcon->tid);
++ trace_smb3_delete_done(xid, tcon->tid, ses->Suid);
+ }
+ break;
+ case SMB2_OP_MKDIR:
+ if (rc)
+- trace_smb3_mkdir_err(xid, ses->Suid, tcon->tid, rc);
++ trace_smb3_mkdir_err(xid, tcon->tid, ses->Suid, rc);
+ else
+- trace_smb3_mkdir_done(xid, ses->Suid, tcon->tid);
++ trace_smb3_mkdir_done(xid, tcon->tid, ses->Suid);
+ break;
+ case SMB2_OP_HARDLINK:
+ if (rc)
+- trace_smb3_hardlink_err(xid, ses->Suid, tcon->tid, rc);
++ trace_smb3_hardlink_err(xid, tcon->tid, ses->Suid, rc);
+ else
+- trace_smb3_hardlink_done(xid, ses->Suid, tcon->tid);
++ trace_smb3_hardlink_done(xid, tcon->tid, ses->Suid);
+ SMB2_set_info_free(&rqst[num_rqst++]);
+ break;
+ case SMB2_OP_RENAME:
+ if (rc)
+- trace_smb3_rename_err(xid, ses->Suid, tcon->tid, rc);
++ trace_smb3_rename_err(xid, tcon->tid, ses->Suid, rc);
+ else
+- trace_smb3_rename_done(xid, ses->Suid, tcon->tid);
++ trace_smb3_rename_done(xid, tcon->tid, ses->Suid);
+ SMB2_set_info_free(&rqst[num_rqst++]);
+ break;
+ case SMB2_OP_RMDIR:
+ if (rc)
+- trace_smb3_rmdir_err(xid, ses->Suid, tcon->tid, rc);
++ trace_smb3_rmdir_err(xid, tcon->tid, ses->Suid, rc);
+ else
+- trace_smb3_rmdir_done(xid, ses->Suid, tcon->tid);
++ trace_smb3_rmdir_done(xid, tcon->tid, ses->Suid);
+ SMB2_set_info_free(&rqst[num_rqst++]);
+ break;
+ case SMB2_OP_SET_EOF:
+ if (rc)
+- trace_smb3_set_eof_err(xid, ses->Suid, tcon->tid, rc);
++ trace_smb3_set_eof_err(xid, tcon->tid, ses->Suid, rc);
+ else
+- trace_smb3_set_eof_done(xid, ses->Suid, tcon->tid);
++ trace_smb3_set_eof_done(xid, tcon->tid, ses->Suid);
+ SMB2_set_info_free(&rqst[num_rqst++]);
+ break;
+ case SMB2_OP_SET_INFO:
+ if (rc)
+- trace_smb3_set_info_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_set_info_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ else
+- trace_smb3_set_info_compound_done(xid, ses->Suid,
+- tcon->tid);
++ trace_smb3_set_info_compound_done(xid, tcon->tid,
++ ses->Suid);
+ SMB2_set_info_free(&rqst[num_rqst++]);
+ break;
+ case SMB2_OP_SET_REPARSE:
+ if (rc) {
+- trace_smb3_set_reparse_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_set_reparse_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ } else {
+- trace_smb3_set_reparse_compound_done(xid, ses->Suid,
+- tcon->tid);
++ trace_smb3_set_reparse_compound_done(xid, tcon->tid,
++ ses->Suid);
+ }
+ SMB2_ioctl_free(&rqst[num_rqst++]);
+ break;
+@@ -771,18 +771,18 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ rbuf = reparse_buf_ptr(iov);
+ if (IS_ERR(rbuf)) {
+ rc = PTR_ERR(rbuf);
+- trace_smb3_set_reparse_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_get_reparse_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ } else {
+ idata->reparse.tag = le32_to_cpu(rbuf->ReparseTag);
+- trace_smb3_set_reparse_compound_done(xid, ses->Suid,
+- tcon->tid);
++ trace_smb3_get_reparse_compound_done(xid, tcon->tid,
++ ses->Suid);
+ }
+ memset(iov, 0, sizeof(*iov));
+ resp_buftype[i + 1] = CIFS_NO_BUFFER;
+ } else {
+- trace_smb3_set_reparse_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_get_reparse_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ }
+ SMB2_ioctl_free(&rqst[num_rqst++]);
+ break;
+@@ -799,11 +799,11 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ }
+ }
+ if (!rc) {
+- trace_smb3_query_wsl_ea_compound_done(xid, ses->Suid,
+- tcon->tid);
++ trace_smb3_query_wsl_ea_compound_done(xid, tcon->tid,
++ ses->Suid);
+ } else {
+- trace_smb3_query_wsl_ea_compound_err(xid, ses->Suid,
+- tcon->tid, rc);
++ trace_smb3_query_wsl_ea_compound_err(xid, tcon->tid,
++ ses->Suid, rc);
+ }
+ SMB2_query_info_free(&rqst[num_rqst++]);
+ break;
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 6bacf754b57efd..44952727fef9ef 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -3931,22 +3931,22 @@ static long smb3_fallocate(struct file *file, struct cifs_tcon *tcon, int mode,
+ static void
+ smb2_downgrade_oplock(struct TCP_Server_Info *server,
+ struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache)
++ __u16 epoch, bool *purge_cache)
+ {
+ server->ops->set_oplock_level(cinode, oplock, 0, NULL);
+ }
+
+ static void
+ smb21_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache);
++ __u16 epoch, bool *purge_cache);
+
+ static void
+ smb3_downgrade_oplock(struct TCP_Server_Info *server,
+ struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache)
++ __u16 epoch, bool *purge_cache)
+ {
+ unsigned int old_state = cinode->oplock;
+- unsigned int old_epoch = cinode->epoch;
++ __u16 old_epoch = cinode->epoch;
+ unsigned int new_state;
+
+ if (epoch > old_epoch) {
+@@ -3966,7 +3966,7 @@ smb3_downgrade_oplock(struct TCP_Server_Info *server,
+
+ static void
+ smb2_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache)
++ __u16 epoch, bool *purge_cache)
+ {
+ oplock &= 0xFF;
+ cinode->lease_granted = false;
+@@ -3990,7 +3990,7 @@ smb2_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+
+ static void
+ smb21_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache)
++ __u16 epoch, bool *purge_cache)
+ {
+ char message[5] = {0};
+ unsigned int new_oplock = 0;
+@@ -4027,7 +4027,7 @@ smb21_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+
+ static void
+ smb3_set_oplock_level(struct cifsInodeInfo *cinode, __u32 oplock,
+- unsigned int epoch, bool *purge_cache)
++ __u16 epoch, bool *purge_cache)
+ {
+ unsigned int old_oplock = cinode->oplock;
+
+@@ -4141,7 +4141,7 @@ smb3_create_lease_buf(u8 *lease_key, u8 oplock)
+ }
+
+ static __u8
+-smb2_parse_lease_buf(void *buf, unsigned int *epoch, char *lease_key)
++smb2_parse_lease_buf(void *buf, __u16 *epoch, char *lease_key)
+ {
+ struct create_lease *lc = (struct create_lease *)buf;
+
+@@ -4152,7 +4152,7 @@ smb2_parse_lease_buf(void *buf, unsigned int *epoch, char *lease_key)
+ }
+
+ static __u8
+-smb3_parse_lease_buf(void *buf, unsigned int *epoch, char *lease_key)
++smb3_parse_lease_buf(void *buf, __u16 *epoch, char *lease_key)
+ {
+ struct create_lease_v2 *lc = (struct create_lease_v2 *)buf;
+
+@@ -5104,6 +5104,7 @@ int __cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ {
+ struct TCP_Server_Info *server = tcon->ses->server;
+ struct cifs_open_parms oparms;
++ struct cifs_open_info_data idata;
+ struct cifs_io_parms io_parms = {};
+ struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+ struct cifs_fid fid;
+@@ -5173,10 +5174,20 @@ int __cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ CREATE_OPTION_SPECIAL, ACL_NO_MODE);
+ oparms.fid = &fid;
+
+- rc = server->ops->open(xid, &oparms, &oplock, NULL);
++ rc = server->ops->open(xid, &oparms, &oplock, &idata);
+ if (rc)
+ goto out;
+
++ /*
++	 * Check whether the server honored the ATTR_SYSTEM flag requested via
++	 * CREATE_OPTION_SPECIAL. If not, the server does not support ATTR_SYSTEM,
++	 * the newly created file is not SFU compatible, and the call failed.
++ */
++ if (!(le32_to_cpu(idata.fi.Attributes) & ATTR_SYSTEM)) {
++ rc = -EOPNOTSUPP;
++ goto out_close;
++ }
++
+ if (type_len + data_len > 0) {
+ io_parms.pid = current->tgid;
+ io_parms.tcon = tcon;
+@@ -5191,8 +5202,18 @@ int __cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ iov, ARRAY_SIZE(iov)-1);
+ }
+
++out_close:
+ server->ops->close(xid, tcon, &fid);
+
++ /*
++	 * If CREATE was successful but either setting ATTR_SYSTEM failed or
++	 * writing the type/data information failed, remove the intermediate
++	 * object created by CREATE. Otherwise the intermediate empty object
++	 * stays on the server.
++ */
++ if (rc)
++ server->ops->unlink(xid, tcon, full_path, cifs_sb, NULL);
++
+ out:
+ kfree(symname_utf16);
+ return rc;
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 4750505465ae63..2e3f78fe9210ff 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -2335,7 +2335,7 @@ parse_posix_ctxt(struct create_context *cc, struct smb2_file_all_info *info,
+
+ int smb2_parse_contexts(struct TCP_Server_Info *server,
+ struct kvec *rsp_iov,
+- unsigned int *epoch,
++ __u16 *epoch,
+ char *lease_key, __u8 *oplock,
+ struct smb2_file_all_info *buf,
+ struct create_posix_rsp *posix)
+diff --git a/fs/smb/client/smb2proto.h b/fs/smb/client/smb2proto.h
+index 09349fa8da039a..51d890f74e36f3 100644
+--- a/fs/smb/client/smb2proto.h
++++ b/fs/smb/client/smb2proto.h
+@@ -282,7 +282,7 @@ extern enum securityEnum smb2_select_sectype(struct TCP_Server_Info *,
+ enum securityEnum);
+ int smb2_parse_contexts(struct TCP_Server_Info *server,
+ struct kvec *rsp_iov,
+- unsigned int *epoch,
++ __u16 *epoch,
+ char *lease_key, __u8 *oplock,
+ struct smb2_file_all_info *buf,
+ struct create_posix_rsp *posix);
+diff --git a/fs/smb/server/transport_ipc.c b/fs/smb/server/transport_ipc.c
+index 6de351cc2b60e0..69bac122adbe06 100644
+--- a/fs/smb/server/transport_ipc.c
++++ b/fs/smb/server/transport_ipc.c
+@@ -626,6 +626,9 @@ ksmbd_ipc_spnego_authen_request(const char *spnego_blob, int blob_len)
+ struct ksmbd_spnego_authen_request *req;
+ struct ksmbd_spnego_authen_response *resp;
+
++ if (blob_len > KSMBD_IPC_MAX_PAYLOAD)
++ return NULL;
++
+ msg = ipc_msg_alloc(sizeof(struct ksmbd_spnego_authen_request) +
+ blob_len + 1);
+ if (!msg)
+@@ -805,6 +808,9 @@ struct ksmbd_rpc_command *ksmbd_rpc_write(struct ksmbd_session *sess, int handle
+ struct ksmbd_rpc_command *req;
+ struct ksmbd_rpc_command *resp;
+
++ if (payload_sz > KSMBD_IPC_MAX_PAYLOAD)
++ return NULL;
++
+ msg = ipc_msg_alloc(sizeof(struct ksmbd_rpc_command) + payload_sz + 1);
+ if (!msg)
+ return NULL;
+@@ -853,6 +859,9 @@ struct ksmbd_rpc_command *ksmbd_rpc_ioctl(struct ksmbd_session *sess, int handle
+ struct ksmbd_rpc_command *req;
+ struct ksmbd_rpc_command *resp;
+
++ if (payload_sz > KSMBD_IPC_MAX_PAYLOAD)
++ return NULL;
++
+ msg = ipc_msg_alloc(sizeof(struct ksmbd_rpc_command) + payload_sz + 1);
+ if (!msg)
+ return NULL;
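
The three ksmbd hunks above all apply the same guard: a caller-supplied payload length is checked against KSMBD_IPC_MAX_PAYLOAD before it enters the allocation arithmetic, so an oversized length can no longer inflate the header + payload + 1 size computation. A minimal userspace C sketch of the same pattern (the cap value and all names here are illustrative, not ksmbd's):

#include <stdlib.h>
#include <string.h>

#define IPC_MAX_PAYLOAD	(64 * 1024)	/* illustrative cap */

struct ipc_msg {
	size_t len;
	unsigned char payload[];	/* flexible array member */
};

static struct ipc_msg *ipc_msg_alloc_checked(const void *payload, size_t payload_sz)
{
	struct ipc_msg *msg;

	/* Reject the length before it participates in size arithmetic. */
	if (payload_sz > IPC_MAX_PAYLOAD)
		return NULL;

	msg = malloc(sizeof(*msg) + payload_sz + 1);	/* +1 mirrors the hunks */
	if (!msg)
		return NULL;

	msg->len = payload_sz;
	memcpy(msg->payload, payload, payload_sz);
	msg->payload[payload_sz] = '\0';
	return msg;
}
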
+diff --git a/fs/xfs/xfs_buf_item_recover.c b/fs/xfs/xfs_buf_item_recover.c
+index 5180cbf5a90b4b..0185c92df8c2ea 100644
+--- a/fs/xfs/xfs_buf_item_recover.c
++++ b/fs/xfs/xfs_buf_item_recover.c
+@@ -1036,11 +1036,20 @@ xlog_recover_buf_commit_pass2(
+ error = xlog_recover_do_primary_sb_buffer(mp, item, bp, buf_f,
+ current_lsn);
+ if (error)
+- goto out_release;
++ goto out_writebuf;
+ } else {
+ xlog_recover_do_reg_buffer(mp, item, bp, buf_f, current_lsn);
+ }
+
++ /*
++	 * A buffer held by the buf log item during 'normal' buffer recovery
++	 * must be committed through the buffer I/O submission path to ensure
++	 * proper release. When an error occurs during sb buffer recovery, log
++	 * shutdown is done before submitting the buffer list so that the
++	 * buffers can be released correctly through the ioend failure path.
++ */
++out_writebuf:
++
+ /*
+ * Perform delayed write on the buffer. Asynchronous writes will be
+ * slower when taking into account all the buffers to be flushed.
+diff --git a/fs/xfs/xfs_dquot.c b/fs/xfs/xfs_dquot.c
+index c1b211c260a9d5..0d73b59f1c9e57 100644
+--- a/fs/xfs/xfs_dquot.c
++++ b/fs/xfs/xfs_dquot.c
+@@ -68,6 +68,31 @@ xfs_dquot_mark_sick(
+ }
+ }
+
++/*
++ * Detach the dquot buffer if it's still attached, because we can get called
++ * through dqpurge after a log shutdown. Caller must hold the dqflock or have
++ * otherwise isolated the dquot.
++ */
++void
++xfs_dquot_detach_buf(
++ struct xfs_dquot *dqp)
++{
++ struct xfs_dq_logitem *qlip = &dqp->q_logitem;
++ struct xfs_buf *bp = NULL;
++
++ spin_lock(&qlip->qli_lock);
++ if (qlip->qli_item.li_buf) {
++ bp = qlip->qli_item.li_buf;
++ qlip->qli_item.li_buf = NULL;
++ }
++ spin_unlock(&qlip->qli_lock);
++ if (bp) {
++ xfs_buf_lock(bp);
++ list_del_init(&qlip->qli_item.li_bio_list);
++ xfs_buf_relse(bp);
++ }
++}
++
+ /*
+ * This is called to free all the memory associated with a dquot
+ */
+@@ -76,6 +101,7 @@ xfs_qm_dqdestroy(
+ struct xfs_dquot *dqp)
+ {
+ ASSERT(list_empty(&dqp->q_lru));
++ ASSERT(dqp->q_logitem.qli_item.li_buf == NULL);
+
+ kvfree(dqp->q_logitem.qli_item.li_lv_shadow);
+ mutex_destroy(&dqp->q_qlock);
+@@ -1136,9 +1162,11 @@ static void
+ xfs_qm_dqflush_done(
+ struct xfs_log_item *lip)
+ {
+- struct xfs_dq_logitem *qip = (struct xfs_dq_logitem *)lip;
+- struct xfs_dquot *dqp = qip->qli_dquot;
++ struct xfs_dq_logitem *qlip =
++ container_of(lip, struct xfs_dq_logitem, qli_item);
++ struct xfs_dquot *dqp = qlip->qli_dquot;
+ struct xfs_ail *ailp = lip->li_ailp;
++ struct xfs_buf *bp = NULL;
+ xfs_lsn_t tail_lsn;
+
+ /*
+@@ -1150,12 +1178,12 @@ xfs_qm_dqflush_done(
+ * holding the lock before removing the dquot from the AIL.
+ */
+ if (test_bit(XFS_LI_IN_AIL, &lip->li_flags) &&
+- ((lip->li_lsn == qip->qli_flush_lsn) ||
++ ((lip->li_lsn == qlip->qli_flush_lsn) ||
+ test_bit(XFS_LI_FAILED, &lip->li_flags))) {
+
+ spin_lock(&ailp->ail_lock);
+ xfs_clear_li_failed(lip);
+- if (lip->li_lsn == qip->qli_flush_lsn) {
++ if (lip->li_lsn == qlip->qli_flush_lsn) {
+ /* xfs_ail_update_finish() drops the AIL lock */
+ tail_lsn = xfs_ail_delete_one(ailp, lip);
+ xfs_ail_update_finish(ailp, tail_lsn);
+@@ -1168,6 +1196,19 @@ xfs_qm_dqflush_done(
+ * Release the dq's flush lock since we're done with it.
+ */
+ xfs_dqfunlock(dqp);
++
++ /*
++ * If this dquot hasn't been dirtied since initiating the last dqflush,
++ * release the buffer reference.
++ */
++ spin_lock(&qlip->qli_lock);
++ if (!qlip->qli_dirty) {
++ bp = lip->li_buf;
++ lip->li_buf = NULL;
++ }
++ spin_unlock(&qlip->qli_lock);
++ if (bp)
++ xfs_buf_rele(bp);
+ }
+
+ void
+@@ -1190,7 +1231,7 @@ xfs_buf_dquot_io_fail(
+
+ spin_lock(&bp->b_mount->m_ail->ail_lock);
+ list_for_each_entry(lip, &bp->b_li_list, li_bio_list)
+- xfs_set_li_failed(lip, bp);
++ set_bit(XFS_LI_FAILED, &lip->li_flags);
+ spin_unlock(&bp->b_mount->m_ail->ail_lock);
+ }
+
+@@ -1232,6 +1273,115 @@ xfs_qm_dqflush_check(
+ return NULL;
+ }
+
++/*
++ * Get the buffer containing the on-disk dquot.
++ *
++ * Requires the dquot flush lock. On error, clears the dirty flag, deletes
++ * the quota log item from the AIL, and shuts down the filesystem.
++ */
++static int
++xfs_dquot_read_buf(
++ struct xfs_trans *tp,
++ struct xfs_dquot *dqp,
++ struct xfs_buf **bpp)
++{
++ struct xfs_mount *mp = dqp->q_mount;
++ struct xfs_buf *bp = NULL;
++ int error;
++
++ error = xfs_trans_read_buf(mp, tp, mp->m_ddev_targp, dqp->q_blkno,
++ mp->m_quotainfo->qi_dqchunklen, 0,
++ &bp, &xfs_dquot_buf_ops);
++ if (xfs_metadata_is_sick(error))
++ xfs_dquot_mark_sick(dqp);
++ if (error)
++ goto out_abort;
++
++ *bpp = bp;
++ return 0;
++
++out_abort:
++ dqp->q_flags &= ~XFS_DQFLAG_DIRTY;
++ xfs_trans_ail_delete(&dqp->q_logitem.qli_item, 0);
++ xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
++ return error;
++}
++
++/*
++ * Attach a dquot buffer to this dquot to avoid allocating a buffer during a
++ * dqflush, since dqflush can be called from reclaim context. Caller must hold
++ * the dqlock.
++ */
++int
++xfs_dquot_attach_buf(
++ struct xfs_trans *tp,
++ struct xfs_dquot *dqp)
++{
++ struct xfs_dq_logitem *qlip = &dqp->q_logitem;
++ struct xfs_log_item *lip = &qlip->qli_item;
++ int error;
++
++ spin_lock(&qlip->qli_lock);
++ if (!lip->li_buf) {
++ struct xfs_buf *bp = NULL;
++
++ spin_unlock(&qlip->qli_lock);
++ error = xfs_dquot_read_buf(tp, dqp, &bp);
++ if (error)
++ return error;
++
++ /*
++ * Hold the dquot buffer so that we retain our ref to it after
++ * detaching it from the transaction, then give that ref to the
++ * dquot log item so that the AIL does not have to read the
++ * dquot buffer to push this item.
++ */
++ xfs_buf_hold(bp);
++ xfs_trans_brelse(tp, bp);
++
++ spin_lock(&qlip->qli_lock);
++ lip->li_buf = bp;
++ }
++ qlip->qli_dirty = true;
++ spin_unlock(&qlip->qli_lock);
++
++ return 0;
++}
++
++/*
++ * Get a new reference to the dquot buffer attached to this dquot for a dqflush
++ * operation.
++ *
++ * Returns 0 and a NULL bp if none was attached to the dquot; 0 and a locked
++ * bp; or -EAGAIN if the buffer could not be locked.
++ */
++int
++xfs_dquot_use_attached_buf(
++ struct xfs_dquot *dqp,
++ struct xfs_buf **bpp)
++{
++ struct xfs_buf *bp = dqp->q_logitem.qli_item.li_buf;
++
++ /*
++ * A NULL buffer can happen if the dquot dirty flag was set but the
++ * filesystem shut down before transaction commit happened. In that
++ * case we're not going to flush anyway.
++ */
++ if (!bp) {
++ ASSERT(xfs_is_shutdown(dqp->q_mount));
++
++ *bpp = NULL;
++ return 0;
++ }
++
++ if (!xfs_buf_trylock(bp))
++ return -EAGAIN;
++
++ xfs_buf_hold(bp);
++ *bpp = bp;
++ return 0;
++}
++
+ /*
+ * Write a modified dquot to disk.
+ * The dquot must be locked and the flush lock too taken by caller.
+@@ -1243,11 +1393,11 @@ xfs_qm_dqflush_check(
+ int
+ xfs_qm_dqflush(
+ struct xfs_dquot *dqp,
+- struct xfs_buf **bpp)
++ struct xfs_buf *bp)
+ {
+ struct xfs_mount *mp = dqp->q_mount;
+- struct xfs_log_item *lip = &dqp->q_logitem.qli_item;
+- struct xfs_buf *bp;
++ struct xfs_dq_logitem *qlip = &dqp->q_logitem;
++ struct xfs_log_item *lip = &qlip->qli_item;
+ struct xfs_dqblk *dqblk;
+ xfs_failaddr_t fa;
+ int error;
+@@ -1257,28 +1407,12 @@ xfs_qm_dqflush(
+
+ trace_xfs_dqflush(dqp);
+
+- *bpp = NULL;
+-
+ xfs_qm_dqunpin_wait(dqp);
+
+- /*
+- * Get the buffer containing the on-disk dquot
+- */
+- error = xfs_trans_read_buf(mp, NULL, mp->m_ddev_targp, dqp->q_blkno,
+- mp->m_quotainfo->qi_dqchunklen, XBF_TRYLOCK,
+- &bp, &xfs_dquot_buf_ops);
+- if (error == -EAGAIN)
+- goto out_unlock;
+- if (xfs_metadata_is_sick(error))
+- xfs_dquot_mark_sick(dqp);
+- if (error)
+- goto out_abort;
+-
+ fa = xfs_qm_dqflush_check(dqp);
+ if (fa) {
+ xfs_alert(mp, "corrupt dquot ID 0x%x in memory at %pS",
+ dqp->q_id, fa);
+- xfs_buf_relse(bp);
+ xfs_dquot_mark_sick(dqp);
+ error = -EFSCORRUPTED;
+ goto out_abort;
+@@ -1293,8 +1427,15 @@ xfs_qm_dqflush(
+ */
+ dqp->q_flags &= ~XFS_DQFLAG_DIRTY;
+
+- xfs_trans_ail_copy_lsn(mp->m_ail, &dqp->q_logitem.qli_flush_lsn,
+- &dqp->q_logitem.qli_item.li_lsn);
++ /*
++ * We hold the dquot lock, so nobody can dirty it while we're
++ * scheduling the write out. Clear the dirty-since-flush flag.
++ */
++ spin_lock(&qlip->qli_lock);
++ qlip->qli_dirty = false;
++ spin_unlock(&qlip->qli_lock);
++
++ xfs_trans_ail_copy_lsn(mp->m_ail, &qlip->qli_flush_lsn, &lip->li_lsn);
+
+ /*
+ * copy the lsn into the on-disk dquot now while we have the in memory
+@@ -1306,7 +1447,7 @@ xfs_qm_dqflush(
+ * of a dquot without an up-to-date CRC getting to disk.
+ */
+ if (xfs_has_crc(mp)) {
+- dqblk->dd_lsn = cpu_to_be64(dqp->q_logitem.qli_item.li_lsn);
++ dqblk->dd_lsn = cpu_to_be64(lip->li_lsn);
+ xfs_update_cksum((char *)dqblk, sizeof(struct xfs_dqblk),
+ XFS_DQUOT_CRC_OFF);
+ }
+@@ -1316,7 +1457,7 @@ xfs_qm_dqflush(
+ * the AIL and release the flush lock once the dquot is synced to disk.
+ */
+ bp->b_flags |= _XBF_DQUOTS;
+- list_add_tail(&dqp->q_logitem.qli_item.li_bio_list, &bp->b_li_list);
++ list_add_tail(&lip->li_bio_list, &bp->b_li_list);
+
+ /*
+ * If the buffer is pinned then push on the log so we won't
+@@ -1328,14 +1469,12 @@ xfs_qm_dqflush(
+ }
+
+ trace_xfs_dqflush_done(dqp);
+- *bpp = bp;
+ return 0;
+
+ out_abort:
+ dqp->q_flags &= ~XFS_DQFLAG_DIRTY;
+ xfs_trans_ail_delete(lip, 0);
+ xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
+-out_unlock:
+ xfs_dqfunlock(dqp);
+ return error;
+ }
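
Together, xfs_dquot_attach_buf(), xfs_dquot_use_attached_buf() and xfs_dquot_detach_buf() implement a pin-early/use-later discipline: the backing buffer is referenced at transaction precommit time, when blocking and I/O are still permitted, so the flush path running in reclaim context only has to trylock what is already pinned. A rough userspace C analogue of that discipline, with every name invented for illustration:

#include <errno.h>
#include <pthread.h>
#include <stddef.h>

struct buf {
	pthread_mutex_t lock;
	int refcount;			/* toy refcount; XFS uses xfs_buf_hold/rele */
};

struct logitem {
	pthread_mutex_t li_lock;	/* plays the role of qli_lock */
	struct buf *li_buf;		/* buffer pinned for the flush path, or NULL */
};

/* Precommit time: pin the buffer while blocking is still allowed. */
static void attach_buf(struct logitem *li, struct buf *bp)
{
	pthread_mutex_lock(&li->li_lock);
	if (!li->li_buf) {
		bp->refcount++;		/* reference owned by the log item */
		li->li_buf = bp;
	}
	pthread_mutex_unlock(&li->li_lock);
}

/* Flush time (reclaim context): never block, never allocate. */
static int use_attached_buf(struct logitem *li, struct buf **bpp)
{
	struct buf *bp = li->li_buf;

	if (!bp) {			/* nothing pinned: nothing to flush */
		*bpp = NULL;
		return 0;
	}
	if (pthread_mutex_trylock(&bp->lock) != 0)
		return -EAGAIN;		/* caller backs off, as the dqflush callers do */
	bp->refcount++;			/* temporary reference for the caller */
	*bpp = bp;
	return 0;
}
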
+diff --git a/fs/xfs/xfs_dquot.h b/fs/xfs/xfs_dquot.h
+index 677bb2dc9ac913..bd7bfd9e402e5b 100644
+--- a/fs/xfs/xfs_dquot.h
++++ b/fs/xfs/xfs_dquot.h
+@@ -204,7 +204,7 @@ void xfs_dquot_to_disk(struct xfs_disk_dquot *ddqp, struct xfs_dquot *dqp);
+ #define XFS_DQ_IS_DIRTY(dqp) ((dqp)->q_flags & XFS_DQFLAG_DIRTY)
+
+ void xfs_qm_dqdestroy(struct xfs_dquot *dqp);
+-int xfs_qm_dqflush(struct xfs_dquot *dqp, struct xfs_buf **bpp);
++int xfs_qm_dqflush(struct xfs_dquot *dqp, struct xfs_buf *bp);
+ void xfs_qm_dqunpin_wait(struct xfs_dquot *dqp);
+ void xfs_qm_adjust_dqtimers(struct xfs_dquot *d);
+ void xfs_qm_adjust_dqlimits(struct xfs_dquot *d);
+@@ -227,6 +227,10 @@ void xfs_dqlockn(struct xfs_dqtrx *q);
+
+ void xfs_dquot_set_prealloc_limits(struct xfs_dquot *);
+
++int xfs_dquot_attach_buf(struct xfs_trans *tp, struct xfs_dquot *dqp);
++int xfs_dquot_use_attached_buf(struct xfs_dquot *dqp, struct xfs_buf **bpp);
++void xfs_dquot_detach_buf(struct xfs_dquot *dqp);
++
+ static inline struct xfs_dquot *xfs_qm_dqhold(struct xfs_dquot *dqp)
+ {
+ xfs_dqlock(dqp);
+diff --git a/fs/xfs/xfs_dquot_item.c b/fs/xfs/xfs_dquot_item.c
+index 7d19091215b080..271b195ebb9326 100644
+--- a/fs/xfs/xfs_dquot_item.c
++++ b/fs/xfs/xfs_dquot_item.c
+@@ -123,8 +123,9 @@ xfs_qm_dquot_logitem_push(
+ __releases(&lip->li_ailp->ail_lock)
+ __acquires(&lip->li_ailp->ail_lock)
+ {
+- struct xfs_dquot *dqp = DQUOT_ITEM(lip)->qli_dquot;
+- struct xfs_buf *bp = lip->li_buf;
++ struct xfs_dq_logitem *qlip = DQUOT_ITEM(lip);
++ struct xfs_dquot *dqp = qlip->qli_dquot;
++ struct xfs_buf *bp;
+ uint rval = XFS_ITEM_SUCCESS;
+ int error;
+
+@@ -155,14 +156,25 @@ xfs_qm_dquot_logitem_push(
+
+ spin_unlock(&lip->li_ailp->ail_lock);
+
+- error = xfs_qm_dqflush(dqp, &bp);
++ error = xfs_dquot_use_attached_buf(dqp, &bp);
++ if (error == -EAGAIN) {
++ xfs_dqfunlock(dqp);
++ rval = XFS_ITEM_LOCKED;
++ goto out_relock_ail;
++ }
++
++ /*
++ * dqflush completes dqflock on error, and the delwri ioend does it on
++ * success.
++ */
++ error = xfs_qm_dqflush(dqp, bp);
+ if (!error) {
+ if (!xfs_buf_delwri_queue(bp, buffer_list))
+ rval = XFS_ITEM_FLUSHING;
+- xfs_buf_relse(bp);
+- } else if (error == -EAGAIN)
+- rval = XFS_ITEM_LOCKED;
++ }
++ xfs_buf_relse(bp);
+
++out_relock_ail:
+ spin_lock(&lip->li_ailp->ail_lock);
+ out_unlock:
+ xfs_dqunlock(dqp);
+@@ -195,12 +207,10 @@ xfs_qm_dquot_logitem_committing(
+ }
+
+ #ifdef DEBUG_EXPENSIVE
+-static int
+-xfs_qm_dquot_logitem_precommit(
+- struct xfs_trans *tp,
+- struct xfs_log_item *lip)
++static void
++xfs_qm_dquot_logitem_precommit_check(
++ struct xfs_dquot *dqp)
+ {
+- struct xfs_dquot *dqp = DQUOT_ITEM(lip)->qli_dquot;
+ struct xfs_mount *mp = dqp->q_mount;
+ struct xfs_disk_dquot ddq = { };
+ xfs_failaddr_t fa;
+@@ -216,13 +226,24 @@ xfs_qm_dquot_logitem_precommit(
+ xfs_force_shutdown(mp, SHUTDOWN_CORRUPT_INCORE);
+ ASSERT(fa == NULL);
+ }
+-
+- return 0;
+ }
+ #else
+-# define xfs_qm_dquot_logitem_precommit NULL
++# define xfs_qm_dquot_logitem_precommit_check(...) ((void)0)
+ #endif
+
++static int
++xfs_qm_dquot_logitem_precommit(
++ struct xfs_trans *tp,
++ struct xfs_log_item *lip)
++{
++ struct xfs_dq_logitem *qlip = DQUOT_ITEM(lip);
++ struct xfs_dquot *dqp = qlip->qli_dquot;
++
++ xfs_qm_dquot_logitem_precommit_check(dqp);
++
++ return xfs_dquot_attach_buf(tp, dqp);
++}
++
+ static const struct xfs_item_ops xfs_dquot_item_ops = {
+ .iop_size = xfs_qm_dquot_logitem_size,
+ .iop_precommit = xfs_qm_dquot_logitem_precommit,
+@@ -247,5 +268,7 @@ xfs_qm_dquot_logitem_init(
+
+ xfs_log_item_init(dqp->q_mount, &lp->qli_item, XFS_LI_DQUOT,
+ &xfs_dquot_item_ops);
++ spin_lock_init(&lp->qli_lock);
+ lp->qli_dquot = dqp;
++ lp->qli_dirty = false;
+ }
+diff --git a/fs/xfs/xfs_dquot_item.h b/fs/xfs/xfs_dquot_item.h
+index 794710c2447493..d66e52807d76d5 100644
+--- a/fs/xfs/xfs_dquot_item.h
++++ b/fs/xfs/xfs_dquot_item.h
+@@ -14,6 +14,13 @@ struct xfs_dq_logitem {
+ struct xfs_log_item qli_item; /* common portion */
+ struct xfs_dquot *qli_dquot; /* dquot ptr */
+ xfs_lsn_t qli_flush_lsn; /* lsn at last flush */
++
++ /*
++ * We use this spinlock to coordinate access to the li_buf pointer in
++ * the log item and the qli_dirty flag.
++ */
++ spinlock_t qli_lock;
++ bool qli_dirty; /* dirtied since last flush? */
+ };
+
+ void xfs_qm_dquot_logitem_init(struct xfs_dquot *dqp);
+diff --git a/fs/xfs/xfs_exchrange.c b/fs/xfs/xfs_exchrange.c
+index 75cb53f090d1f7..7c8195895a734e 100644
+--- a/fs/xfs/xfs_exchrange.c
++++ b/fs/xfs/xfs_exchrange.c
+@@ -326,22 +326,6 @@ xfs_exchrange_mappings(
+ * successfully but before locks are dropped.
+ */
+
+-/* Verify that we have security clearance to perform this operation. */
+-static int
+-xfs_exchange_range_verify_area(
+- struct xfs_exchrange *fxr)
+-{
+- int ret;
+-
+- ret = remap_verify_area(fxr->file1, fxr->file1_offset, fxr->length,
+- true);
+- if (ret)
+- return ret;
+-
+- return remap_verify_area(fxr->file2, fxr->file2_offset, fxr->length,
+- true);
+-}
+-
+ /*
+ * Performs necessary checks before doing a range exchange, having stabilized
+ * mutable inode attributes via i_rwsem.
+@@ -352,11 +336,13 @@ xfs_exchange_range_checks(
+ unsigned int alloc_unit)
+ {
+ struct inode *inode1 = file_inode(fxr->file1);
++ loff_t size1 = i_size_read(inode1);
+ struct inode *inode2 = file_inode(fxr->file2);
++ loff_t size2 = i_size_read(inode2);
+ uint64_t allocmask = alloc_unit - 1;
+ int64_t test_len;
+ uint64_t blen;
+- loff_t size1, size2, tmp;
++ loff_t tmp;
+ int error;
+
+ /* Don't touch certain kinds of inodes */
+@@ -365,24 +351,25 @@ xfs_exchange_range_checks(
+ if (IS_SWAPFILE(inode1) || IS_SWAPFILE(inode2))
+ return -ETXTBSY;
+
+- size1 = i_size_read(inode1);
+- size2 = i_size_read(inode2);
+-
+ /* Ranges cannot start after EOF. */
+ if (fxr->file1_offset > size1 || fxr->file2_offset > size2)
+ return -EINVAL;
+
+- /*
+- * If the caller said to exchange to EOF, we set the length of the
+- * request large enough to cover everything to the end of both files.
+- */
+ if (fxr->flags & XFS_EXCHANGE_RANGE_TO_EOF) {
++ /*
++ * If the caller said to exchange to EOF, we set the length of
++ * the request large enough to cover everything to the end of
++ * both files.
++ */
+ fxr->length = max_t(int64_t, size1 - fxr->file1_offset,
+ size2 - fxr->file2_offset);
+-
+- error = xfs_exchange_range_verify_area(fxr);
+- if (error)
+- return error;
++ } else {
++ /*
++ * Otherwise we require both ranges to end within EOF.
++ */
++ if (fxr->file1_offset + fxr->length > size1 ||
++ fxr->file2_offset + fxr->length > size2)
++ return -EINVAL;
+ }
+
+ /*
+@@ -398,15 +385,6 @@ xfs_exchange_range_checks(
+ check_add_overflow(fxr->file2_offset, fxr->length, &tmp))
+ return -EINVAL;
+
+- /*
+- * We require both ranges to end within EOF, unless we're exchanging
+- * to EOF.
+- */
+- if (!(fxr->flags & XFS_EXCHANGE_RANGE_TO_EOF) &&
+- (fxr->file1_offset + fxr->length > size1 ||
+- fxr->file2_offset + fxr->length > size2))
+- return -EINVAL;
+-
+ /*
+ * Make sure we don't hit any file size limits. If we hit any size
+ * limits such that test_length was adjusted, we abort the whole
+@@ -744,6 +722,7 @@ xfs_exchange_range(
+ {
+ struct inode *inode1 = file_inode(fxr->file1);
+ struct inode *inode2 = file_inode(fxr->file2);
++ loff_t check_len = fxr->length;
+ int ret;
+
+ BUILD_BUG_ON(XFS_EXCHANGE_RANGE_ALL_FLAGS &
+@@ -776,14 +755,18 @@ xfs_exchange_range(
+ return -EBADF;
+
+ /*
+- * If we're not exchanging to EOF, we can check the areas before
+- * stabilizing both files' i_size.
++	 * If we're exchanging to EOF, we can't calculate the length until we
++	 * take the iolock. Pass a 0 length to remap_verify_area, as the
++	 * FICLONE and FICLONERANGE ioctls that support cloning to EOF do.
+ */
+- if (!(fxr->flags & XFS_EXCHANGE_RANGE_TO_EOF)) {
+- ret = xfs_exchange_range_verify_area(fxr);
+- if (ret)
+- return ret;
+- }
++ if (fxr->flags & XFS_EXCHANGE_RANGE_TO_EOF)
++ check_len = 0;
++ ret = remap_verify_area(fxr->file1, fxr->file1_offset, check_len, true);
++ if (ret)
++ return ret;
++ ret = remap_verify_area(fxr->file2, fxr->file2_offset, check_len, true);
++ if (ret)
++ return ret;
+
+ /* Update cmtime if the fd/inode don't forbid it. */
+ if (!(fxr->file1->f_mode & FMODE_NOCMTIME) && !IS_NOCMTIME(inode1))
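
The reordered checks above make the length handling explicit: with XFS_EXCHANGE_RANGE_TO_EOF the request is widened to cover both files' tails, otherwise both ranges must already end within EOF, and offset + length must not overflow. A compact, standalone C sketch of that validation (names invented; the kernel splits these checks across two functions and orders them slightly differently, and __builtin_add_overflow is the GCC/Clang builtin behind check_add_overflow()):

#include <errno.h>
#include <stdint.h>

#define EXCHANGE_TO_EOF	(1u << 0)

static int check_exchange_range(unsigned int flags, int64_t size1, int64_t size2,
				int64_t off1, int64_t off2, int64_t *length)
{
	int64_t end;

	/* Ranges cannot start after EOF on either file. */
	if (off1 > size1 || off2 > size2)
		return -EINVAL;

	if (flags & EXCHANGE_TO_EOF) {
		/* Widen the request to cover everything to both EOFs. */
		int64_t rem1 = size1 - off1;
		int64_t rem2 = size2 - off2;

		*length = rem1 > rem2 ? rem1 : rem2;
		return 0;
	}

	/* offset + length must not overflow, and must end within EOF. */
	if (__builtin_add_overflow(off1, *length, &end) || end > size1)
		return -EINVAL;
	if (__builtin_add_overflow(off2, *length, &end) || end > size2)
		return -EINVAL;

	return 0;
}
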
+diff --git a/fs/xfs/xfs_inode.c b/fs/xfs/xfs_inode.c
+index 19dcb569a3e7f8..ed09b4a3084e1c 100644
+--- a/fs/xfs/xfs_inode.c
++++ b/fs/xfs/xfs_inode.c
+@@ -1392,8 +1392,11 @@ xfs_inactive(
+ goto out;
+
+ /* Try to clean out the cow blocks if there are any. */
+- if (xfs_inode_has_cow_data(ip))
+- xfs_reflink_cancel_cow_range(ip, 0, NULLFILEOFF, true);
++ if (xfs_inode_has_cow_data(ip)) {
++ error = xfs_reflink_cancel_cow_range(ip, 0, NULLFILEOFF, true);
++ if (error)
++ goto out;
++ }
+
+ if (VFS_I(ip)->i_nlink != 0) {
+ /*
+diff --git a/fs/xfs/xfs_iomap.c b/fs/xfs/xfs_iomap.c
+index 86da16f54be9d7..6335b122486fee 100644
+--- a/fs/xfs/xfs_iomap.c
++++ b/fs/xfs/xfs_iomap.c
+@@ -942,10 +942,8 @@ xfs_dax_write_iomap_end(
+ if (!xfs_is_cow_inode(ip))
+ return 0;
+
+- if (!written) {
+- xfs_reflink_cancel_cow_range(ip, pos, length, true);
+- return 0;
+- }
++ if (!written)
++ return xfs_reflink_cancel_cow_range(ip, pos, length, true);
+
+ return xfs_reflink_end_cow(ip, pos, written);
+ }
+diff --git a/fs/xfs/xfs_qm.c b/fs/xfs/xfs_qm.c
+index 7e2307921deb2f..3212b5bf3fb3c6 100644
+--- a/fs/xfs/xfs_qm.c
++++ b/fs/xfs/xfs_qm.c
+@@ -146,17 +146,29 @@ xfs_qm_dqpurge(
+ * We don't care about getting disk errors here. We need
+ * to purge this dquot anyway, so we go ahead regardless.
+ */
+- error = xfs_qm_dqflush(dqp, &bp);
++ error = xfs_dquot_use_attached_buf(dqp, &bp);
++ if (error == -EAGAIN) {
++ xfs_dqfunlock(dqp);
++ dqp->q_flags &= ~XFS_DQFLAG_FREEING;
++ goto out_unlock;
++ }
++ if (!bp)
++ goto out_funlock;
++
++ /*
++ * dqflush completes dqflock on error, and the bwrite ioend
++ * does it on success.
++ */
++ error = xfs_qm_dqflush(dqp, bp);
+ if (!error) {
+ error = xfs_bwrite(bp);
+ xfs_buf_relse(bp);
+- } else if (error == -EAGAIN) {
+- dqp->q_flags &= ~XFS_DQFLAG_FREEING;
+- goto out_unlock;
+ }
+ xfs_dqflock(dqp);
+ }
++ xfs_dquot_detach_buf(dqp);
+
++out_funlock:
+ ASSERT(atomic_read(&dqp->q_pincount) == 0);
+ ASSERT(xlog_is_shutdown(dqp->q_logitem.qli_item.li_log) ||
+ !test_bit(XFS_LI_IN_AIL, &dqp->q_logitem.qli_item.li_flags));
+@@ -462,7 +474,17 @@ xfs_qm_dquot_isolate(
+ /* we have to drop the LRU lock to flush the dquot */
+ spin_unlock(lru_lock);
+
+- error = xfs_qm_dqflush(dqp, &bp);
++ error = xfs_dquot_use_attached_buf(dqp, &bp);
++ if (!bp || error == -EAGAIN) {
++ xfs_dqfunlock(dqp);
++ goto out_unlock_dirty;
++ }
++
++ /*
++ * dqflush completes dqflock on error, and the delwri ioend
++ * does it on success.
++ */
++ error = xfs_qm_dqflush(dqp, bp);
+ if (error)
+ goto out_unlock_dirty;
+
+@@ -470,6 +492,8 @@ xfs_qm_dquot_isolate(
+ xfs_buf_relse(bp);
+ goto out_unlock_dirty;
+ }
++
++ xfs_dquot_detach_buf(dqp);
+ xfs_dqfunlock(dqp);
+
+ /*
+@@ -1108,6 +1132,10 @@ xfs_qm_quotacheck_dqadjust(
+ return error;
+ }
+
++ error = xfs_dquot_attach_buf(NULL, dqp);
++ if (error)
++ return error;
++
+ trace_xfs_dqadjust(dqp);
+
+ /*
+@@ -1287,11 +1315,17 @@ xfs_qm_flush_one(
+ goto out_unlock;
+ }
+
+- error = xfs_qm_dqflush(dqp, &bp);
++ error = xfs_dquot_use_attached_buf(dqp, &bp);
+ if (error)
+ goto out_unlock;
++ if (!bp) {
++ error = -EFSCORRUPTED;
++ goto out_unlock;
++ }
+
+- xfs_buf_delwri_queue(bp, buffer_list);
++ error = xfs_qm_dqflush(dqp, bp);
++ if (!error)
++ xfs_buf_delwri_queue(bp, buffer_list);
+ xfs_buf_relse(bp);
+ out_unlock:
+ xfs_dqunlock(dqp);
+diff --git a/fs/xfs/xfs_qm_bhv.c b/fs/xfs/xfs_qm_bhv.c
+index a11436579877d5..ed1d597c30ca25 100644
+--- a/fs/xfs/xfs_qm_bhv.c
++++ b/fs/xfs/xfs_qm_bhv.c
+@@ -19,28 +19,41 @@
+ STATIC void
+ xfs_fill_statvfs_from_dquot(
+ struct kstatfs *statp,
++ struct xfs_inode *ip,
+ struct xfs_dquot *dqp)
+ {
++ struct xfs_dquot_res *blkres = &dqp->q_blk;
+ uint64_t limit;
+
+- limit = dqp->q_blk.softlimit ?
+- dqp->q_blk.softlimit :
+- dqp->q_blk.hardlimit;
+- if (limit && statp->f_blocks > limit) {
+- statp->f_blocks = limit;
+- statp->f_bfree = statp->f_bavail =
+- (statp->f_blocks > dqp->q_blk.reserved) ?
+- (statp->f_blocks - dqp->q_blk.reserved) : 0;
++ if (XFS_IS_REALTIME_MOUNT(ip->i_mount) &&
++ (ip->i_diflags & (XFS_DIFLAG_RTINHERIT | XFS_DIFLAG_REALTIME)))
++ blkres = &dqp->q_rtb;
++
++ limit = blkres->softlimit ?
++ blkres->softlimit :
++ blkres->hardlimit;
++ if (limit) {
++ uint64_t remaining = 0;
++
++ if (limit > blkres->reserved)
++ remaining = limit - blkres->reserved;
++
++ statp->f_blocks = min(statp->f_blocks, limit);
++ statp->f_bfree = min(statp->f_bfree, remaining);
++ statp->f_bavail = min(statp->f_bavail, remaining);
+ }
+
+ limit = dqp->q_ino.softlimit ?
+ dqp->q_ino.softlimit :
+ dqp->q_ino.hardlimit;
+- if (limit && statp->f_files > limit) {
+- statp->f_files = limit;
+- statp->f_ffree =
+- (statp->f_files > dqp->q_ino.reserved) ?
+- (statp->f_files - dqp->q_ino.reserved) : 0;
++ if (limit) {
++ uint64_t remaining = 0;
++
++ if (limit > dqp->q_ino.reserved)
++ remaining = limit - dqp->q_ino.reserved;
++
++ statp->f_files = min(statp->f_files, limit);
++ statp->f_ffree = min(statp->f_ffree, remaining);
+ }
+ }
+
+@@ -61,7 +74,7 @@ xfs_qm_statvfs(
+ struct xfs_dquot *dqp;
+
+ if (!xfs_qm_dqget(mp, ip->i_projid, XFS_DQTYPE_PROJ, false, &dqp)) {
+- xfs_fill_statvfs_from_dquot(statp, dqp);
++ xfs_fill_statvfs_from_dquot(statp, ip, dqp);
+ xfs_qm_dqput(dqp);
+ }
+ }
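
The rewritten xfs_fill_statvfs_from_dquot() derives a saturating "remaining" count from the effective limit and clamps every statfs field with min(), rather than overwriting the fields only when the limit was already exceeded; it also selects the realtime block counters for realtime inodes. The clamping arithmetic in isolation, as a standalone C sketch:

#include <stdint.h>

struct quota_res {
	uint64_t softlimit;
	uint64_t hardlimit;
	uint64_t reserved;
};

static uint64_t min_u64(uint64_t a, uint64_t b)
{
	return a < b ? a : b;
}

/* Clamp statfs block counts against a quota limit, saturating at zero. */
static void clamp_statfs_blocks(uint64_t *blocks, uint64_t *bfree,
				uint64_t *bavail, const struct quota_res *res)
{
	uint64_t limit = res->softlimit ? res->softlimit : res->hardlimit;
	uint64_t remaining = 0;

	if (!limit)
		return;			/* no limit configured: leave fields alone */

	if (limit > res->reserved)
		remaining = limit - res->reserved;

	*blocks = min_u64(*blocks, limit);
	*bfree = min_u64(*bfree, remaining);
	*bavail = min_u64(*bavail, remaining);
}
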
+diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
+index fbb3a1594c0dcc..8f7c9eaeb36090 100644
+--- a/fs/xfs/xfs_super.c
++++ b/fs/xfs/xfs_super.c
+@@ -873,12 +873,6 @@ xfs_fs_statfs(
+ ffree = statp->f_files - (icount - ifree);
+ statp->f_ffree = max_t(int64_t, ffree, 0);
+
+-
+- if ((ip->i_diflags & XFS_DIFLAG_PROJINHERIT) &&
+- ((mp->m_qflags & (XFS_PQUOTA_ACCT|XFS_PQUOTA_ENFD))) ==
+- (XFS_PQUOTA_ACCT|XFS_PQUOTA_ENFD))
+- xfs_qm_statvfs(ip, statp);
+-
+ if (XFS_IS_REALTIME_MOUNT(mp) &&
+ (ip->i_diflags & (XFS_DIFLAG_RTINHERIT | XFS_DIFLAG_REALTIME))) {
+ s64 freertx;
+@@ -888,6 +882,11 @@ xfs_fs_statfs(
+ statp->f_bavail = statp->f_bfree = xfs_rtx_to_rtb(mp, freertx);
+ }
+
++ if ((ip->i_diflags & XFS_DIFLAG_PROJINHERIT) &&
++ ((mp->m_qflags & (XFS_PQUOTA_ACCT|XFS_PQUOTA_ENFD))) ==
++ (XFS_PQUOTA_ACCT|XFS_PQUOTA_ENFD))
++ xfs_qm_statvfs(ip, statp);
++
+ return 0;
+ }
+
+diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c
+index 30e03342287a94..ee46051db12dde 100644
+--- a/fs/xfs/xfs_trans.c
++++ b/fs/xfs/xfs_trans.c
+@@ -835,16 +835,11 @@ __xfs_trans_commit(
+ trace_xfs_trans_commit(tp, _RET_IP_);
+
+ /*
+- * Finish deferred items on final commit. Only permanent transactions
+- * should ever have deferred ops.
++ * Commit per-transaction changes that are not already tracked through
++ * log items. This can add dirty log items to the transaction.
+ */
+- WARN_ON_ONCE(!list_empty(&tp->t_dfops) &&
+- !(tp->t_flags & XFS_TRANS_PERM_LOG_RES));
+- if (!regrant && (tp->t_flags & XFS_TRANS_PERM_LOG_RES)) {
+- error = xfs_defer_finish_noroll(&tp);
+- if (error)
+- goto out_unreserve;
+- }
++ if (tp->t_flags & XFS_TRANS_SB_DIRTY)
++ xfs_trans_apply_sb_deltas(tp);
+
+ error = xfs_trans_run_precommits(tp);
+ if (error)
+@@ -876,8 +871,6 @@ __xfs_trans_commit(
+ /*
+ * If we need to update the superblock, then do it now.
+ */
+- if (tp->t_flags & XFS_TRANS_SB_DIRTY)
+- xfs_trans_apply_sb_deltas(tp);
+ xfs_trans_apply_dquot_deltas(tp);
+
+ xlog_cil_commit(log, tp, &commit_seq, regrant);
+@@ -924,6 +917,20 @@ int
+ xfs_trans_commit(
+ struct xfs_trans *tp)
+ {
++ /*
++ * Finish deferred items on final commit. Only permanent transactions
++ * should ever have deferred ops.
++ */
++ WARN_ON_ONCE(!list_empty(&tp->t_dfops) &&
++ !(tp->t_flags & XFS_TRANS_PERM_LOG_RES));
++ if (tp->t_flags & XFS_TRANS_PERM_LOG_RES) {
++ int error = xfs_defer_finish_noroll(&tp);
++ if (error) {
++ xfs_trans_cancel(tp);
++ return error;
++ }
++ }
++
+ return __xfs_trans_commit(tp, false);
+ }
+
+diff --git a/fs/xfs/xfs_trans_ail.c b/fs/xfs/xfs_trans_ail.c
+index 8ede9d099d1fea..f56d62dced97b1 100644
+--- a/fs/xfs/xfs_trans_ail.c
++++ b/fs/xfs/xfs_trans_ail.c
+@@ -360,7 +360,7 @@ xfsaild_resubmit_item(
+
+ /* protected by ail_lock */
+ list_for_each_entry(lip, &bp->b_li_list, li_bio_list) {
+- if (bp->b_flags & _XBF_INODES)
++ if (bp->b_flags & (_XBF_INODES | _XBF_DQUOTS))
+ clear_bit(XFS_LI_FAILED, &lip->li_flags);
+ else
+ xfs_clear_li_failed(lip);
+diff --git a/include/drm/drm_connector.h b/include/drm/drm_connector.h
+index e3fa43291f449d..1e2b25e204cb52 100644
+--- a/include/drm/drm_connector.h
++++ b/include/drm/drm_connector.h
+@@ -2001,8 +2001,11 @@ struct drm_connector {
+ struct drm_encoder *encoder;
+
+ #define MAX_ELD_BYTES 128
+- /** @eld: EDID-like data, if present */
++ /** @eld: EDID-like data, if present, protected by @eld_mutex */
+ uint8_t eld[MAX_ELD_BYTES];
++	/** @eld_mutex: protection for concurrent access to @eld */
++ struct mutex eld_mutex;
++
+ /** @latency_present: AV delay info from ELD, if found */
+ bool latency_present[2];
+ /**
+diff --git a/include/drm/drm_utils.h b/include/drm/drm_utils.h
+index 70775748d243b0..15fa9b6865f448 100644
+--- a/include/drm/drm_utils.h
++++ b/include/drm/drm_utils.h
+@@ -12,8 +12,12 @@
+
+ #include <linux/types.h>
+
++struct drm_edid;
++
+ int drm_get_panel_orientation_quirk(int width, int height);
+
++int drm_get_panel_min_brightness_quirk(const struct drm_edid *edid);
++
+ signed long drm_timeout_abs_to_jiffies(int64_t timeout_nsec);
+
+ #endif
+diff --git a/include/linux/binfmts.h b/include/linux/binfmts.h
+index e6c00e860951ae..3305c849abd66a 100644
+--- a/include/linux/binfmts.h
++++ b/include/linux/binfmts.h
+@@ -42,7 +42,9 @@ struct linux_binprm {
+ * Set when errors can no longer be returned to the
+ * original userspace.
+ */
+- point_of_no_return:1;
++ point_of_no_return:1,
++ /* Set when "comm" must come from the dentry. */
++ comm_from_dentry:1;
+ struct file *executable; /* Executable to pass to the interpreter */
+ struct file *interpreter;
+ struct file *file;
+diff --git a/include/linux/call_once.h b/include/linux/call_once.h
+new file mode 100644
+index 00000000000000..6261aa0b3fb00d
+--- /dev/null
++++ b/include/linux/call_once.h
+@@ -0,0 +1,45 @@
++#ifndef _LINUX_CALL_ONCE_H
++#define _LINUX_CALL_ONCE_H
++
++#include <linux/types.h>
++#include <linux/mutex.h>
++
++#define ONCE_NOT_STARTED 0
++#define ONCE_RUNNING 1
++#define ONCE_COMPLETED 2
++
++struct once {
++ atomic_t state;
++ struct mutex lock;
++};
++
++static inline void __once_init(struct once *once, const char *name,
++ struct lock_class_key *key)
++{
++ atomic_set(&once->state, ONCE_NOT_STARTED);
++ __mutex_init(&once->lock, name, key);
++}
++
++#define once_init(once) \
++do { \
++ static struct lock_class_key __key; \
++ __once_init((once), #once, &__key); \
++} while (0)
++
++static inline void call_once(struct once *once, void (*cb)(struct once *))
++{
++ /* Pairs with atomic_set_release() below. */
++ if (atomic_read_acquire(&once->state) == ONCE_COMPLETED)
++ return;
++
++ guard(mutex)(&once->lock);
++ WARN_ON(atomic_read(&once->state) == ONCE_RUNNING);
++ if (atomic_read(&once->state) != ONCE_NOT_STARTED)
++ return;
++
++ atomic_set(&once->state, ONCE_RUNNING);
++ cb(once);
++ atomic_set_release(&once->state, ONCE_COMPLETED);
++}
++
++#endif /* _LINUX_CALL_ONCE_H */
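
The header above pairs an acquire-ordered fast path with a mutex-serialized slow path: a completed state is observed without taking the lock, and the release store publishes everything the callback wrote before later acquire loads can see ONCE_COMPLETED. The same pattern as a self-contained C11 userspace analogue (a sketch only; pthread stands in for the kernel mutex and guard()):

#include <pthread.h>
#include <stdatomic.h>

enum { NOT_STARTED, RUNNING, COMPLETED };

struct once {
	atomic_int state;
	pthread_mutex_t lock;
};

#define ONCE_INIT { NOT_STARTED, PTHREAD_MUTEX_INITIALIZER }

static void call_once(struct once *once, void (*cb)(struct once *))
{
	/* Lock-free fast path; pairs with the release store below. */
	if (atomic_load_explicit(&once->state, memory_order_acquire) == COMPLETED)
		return;

	pthread_mutex_lock(&once->lock);
	if (atomic_load_explicit(&once->state, memory_order_relaxed) != NOT_STARTED) {
		pthread_mutex_unlock(&once->lock);
		return;
	}
	atomic_store_explicit(&once->state, RUNNING, memory_order_relaxed);
	cb(once);	/* runs exactly once, under the lock */
	atomic_store_explicit(&once->state, COMPLETED, memory_order_release);
	pthread_mutex_unlock(&once->lock);
}
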
+diff --git a/include/linux/hrtimer_defs.h b/include/linux/hrtimer_defs.h
+index c3b4b7ed7c163f..84a5045f80f36f 100644
+--- a/include/linux/hrtimer_defs.h
++++ b/include/linux/hrtimer_defs.h
+@@ -125,6 +125,7 @@ struct hrtimer_cpu_base {
+ ktime_t softirq_expires_next;
+ struct hrtimer *softirq_next_timer;
+ struct hrtimer_clock_base clock_base[HRTIMER_MAX_CLOCK_BASES];
++ call_single_data_t csd;
+ } ____cacheline_aligned;
+
+
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 85fe9d0ebb9152..2c66ca21801c17 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -969,6 +969,15 @@ static inline struct kvm_io_bus *kvm_get_bus(struct kvm *kvm, enum kvm_bus idx)
+ static inline struct kvm_vcpu *kvm_get_vcpu(struct kvm *kvm, int i)
+ {
+ int num_vcpus = atomic_read(&kvm->online_vcpus);
++
++ /*
++ * Explicitly verify the target vCPU is online, as the anti-speculation
++ * logic only limits the CPU's ability to speculate, e.g. given a "bad"
++ * index, clamping the index to 0 would return vCPU0, not NULL.
++ */
++ if (i >= num_vcpus)
++ return NULL;
++
+ i = array_index_nospec(i, num_vcpus);
+
+ /* Pairs with smp_wmb() in kvm_vm_ioctl_create_vcpu. */
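
The kvm_get_vcpu() comment is worth restating: array_index_nospec() only bounds what the CPU may speculate about, by clamping the index into [0, num_vcpus); it never signals "out of range". Without the explicit i >= num_vcpus test, a bad index would clamp to 0 and silently return vCPU0 instead of NULL. A tiny C illustration of the hazard (clamp_index() is an invented stand-in for array_index_nospec(), which has no userspace equivalent):

#include <stddef.h>

/* Invented stand-in: clamp out-of-range indices to 0, which is all the
 * anti-speculation helper guarantees architecturally. */
static size_t clamp_index(size_t index, size_t size)
{
	return index < size ? index : 0;
}

static void *lookup(void *const *table, size_t nr_entries, size_t i)
{
	/*
	 * Verify the index first: clamping alone would turn an
	 * out-of-range lookup into a valid reference to entry 0.
	 */
	if (i >= nr_entries)
		return NULL;

	return table[clamp_index(i, nr_entries)];
}
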
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 82c7056e27599e..d4b2c09cd5fec4 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -722,7 +722,6 @@ struct mlx5_timer {
+ struct timecounter tc;
+ u32 nominal_c_mult;
+ unsigned long overflow_period;
+- struct delayed_work overflow_work;
+ };
+
+ struct mlx5_clock {
+diff --git a/include/linux/platform_data/x86/asus-wmi.h b/include/linux/platform_data/x86/asus-wmi.h
+index 365e119bebaa23..783e2a336861b7 100644
+--- a/include/linux/platform_data/x86/asus-wmi.h
++++ b/include/linux/platform_data/x86/asus-wmi.h
+@@ -184,6 +184,11 @@ static const struct dmi_system_id asus_use_hid_led_dmi_ids[] = {
+ DMI_MATCH(DMI_PRODUCT_FAMILY, "ROG Flow"),
+ },
+ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_PRODUCT_FAMILY, "ProArt P16"),
++ },
++ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "GA403U"),
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index 1e6324f0d4efda..24e48af7e8f74a 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -851,7 +851,7 @@ static inline int qdisc_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ }
+
+ static inline void _bstats_update(struct gnet_stats_basic_sync *bstats,
+- __u64 bytes, __u32 packets)
++ __u64 bytes, __u64 packets)
+ {
+ u64_stats_update_begin(&bstats->syncp);
+ u64_stats_add(&bstats->bytes, bytes);
+diff --git a/include/rv/da_monitor.h b/include/rv/da_monitor.h
+index 9705b2a98e49e1..510c88bfabd433 100644
+--- a/include/rv/da_monitor.h
++++ b/include/rv/da_monitor.h
+@@ -14,6 +14,7 @@
+ #include <rv/automata.h>
+ #include <linux/rv.h>
+ #include <linux/bug.h>
++#include <linux/sched.h>
+
+ #ifdef CONFIG_RV_REACTORS
+
+@@ -324,10 +325,13 @@ static inline struct da_monitor *da_get_monitor_##name(struct task_struct *tsk)
+ static void da_monitor_reset_all_##name(void) \
+ { \
+ struct task_struct *g, *p; \
++ int cpu; \
+ \
+ read_lock(&tasklist_lock); \
+ for_each_process_thread(g, p) \
+ da_monitor_reset_##name(da_get_monitor_##name(p)); \
++ for_each_present_cpu(cpu) \
++ da_monitor_reset_##name(da_get_monitor_##name(idle_task(cpu))); \
+ read_unlock(&tasklist_lock); \
+ } \
+ \
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 666fe1779ccc63..e1a37e9c2d42d5 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -218,6 +218,7 @@
+ EM(rxrpc_conn_get_conn_input, "GET inp-conn") \
+ EM(rxrpc_conn_get_idle, "GET idle ") \
+ EM(rxrpc_conn_get_poke_abort, "GET pk-abort") \
++ EM(rxrpc_conn_get_poke_secured, "GET secured ") \
+ EM(rxrpc_conn_get_poke_timer, "GET poke ") \
+ EM(rxrpc_conn_get_service_conn, "GET svc-conn") \
+ EM(rxrpc_conn_new_client, "NEW client ") \
+diff --git a/include/uapi/drm/amdgpu_drm.h b/include/uapi/drm/amdgpu_drm.h
+index efe5de6ce208a1..aaa4f3bc688b57 100644
+--- a/include/uapi/drm/amdgpu_drm.h
++++ b/include/uapi/drm/amdgpu_drm.h
+@@ -411,13 +411,20 @@ struct drm_amdgpu_gem_userptr {
+ /* GFX12 and later: */
+ #define AMDGPU_TILING_GFX12_SWIZZLE_MODE_SHIFT 0
+ #define AMDGPU_TILING_GFX12_SWIZZLE_MODE_MASK 0x7
+-/* These are DCC recompression setting for memory management: */
++/* These are DCC recompression settings for memory management: */
+ #define AMDGPU_TILING_GFX12_DCC_MAX_COMPRESSED_BLOCK_SHIFT 3
+ #define AMDGPU_TILING_GFX12_DCC_MAX_COMPRESSED_BLOCK_MASK 0x3 /* 0:64B, 1:128B, 2:256B */
+ #define AMDGPU_TILING_GFX12_DCC_NUMBER_TYPE_SHIFT 5
+ #define AMDGPU_TILING_GFX12_DCC_NUMBER_TYPE_MASK 0x7 /* CB_COLOR0_INFO.NUMBER_TYPE */
+ #define AMDGPU_TILING_GFX12_DCC_DATA_FORMAT_SHIFT 8
+ #define AMDGPU_TILING_GFX12_DCC_DATA_FORMAT_MASK 0x3f /* [0:4]:CB_COLOR0_INFO.FORMAT, [5]:MM */
++/* When clearing the buffer or moving it from VRAM to GTT, don't compress and set DCC metadata
++ * to uncompressed. Set when parts of an allocation bypass DCC and read raw data. */
++#define AMDGPU_TILING_GFX12_DCC_WRITE_COMPRESS_DISABLE_SHIFT 14
++#define AMDGPU_TILING_GFX12_DCC_WRITE_COMPRESS_DISABLE_MASK 0x1
++/* bit gap */
++#define AMDGPU_TILING_GFX12_SCANOUT_SHIFT 63
++#define AMDGPU_TILING_GFX12_SCANOUT_MASK 0x1
+
+ /* Set/Get helpers for tiling flags. */
+ #define AMDGPU_TILING_SET(field, value) \
+diff --git a/include/uapi/linux/input-event-codes.h b/include/uapi/linux/input-event-codes.h
+index a4206723f50333..5a199f3d4a26a2 100644
+--- a/include/uapi/linux/input-event-codes.h
++++ b/include/uapi/linux/input-event-codes.h
+@@ -519,6 +519,7 @@
+ #define KEY_NOTIFICATION_CENTER 0x1bc /* Show/hide the notification center */
+ #define KEY_PICKUP_PHONE 0x1bd /* Answer incoming call */
+ #define KEY_HANGUP_PHONE 0x1be /* Decline incoming call */
++#define KEY_LINK_PHONE 0x1bf /* AL Phone Syncing */
+
+ #define KEY_DEL_EOL 0x1c0
+ #define KEY_DEL_EOS 0x1c1
+diff --git a/include/uapi/linux/iommufd.h b/include/uapi/linux/iommufd.h
+index 72010f71c5e479..8c4470742dcd99 100644
+--- a/include/uapi/linux/iommufd.h
++++ b/include/uapi/linux/iommufd.h
+@@ -737,6 +737,7 @@ enum iommu_hwpt_pgfault_perm {
+ * @pasid: Process Address Space ID
+ * @grpid: Page Request Group Index
+ * @perm: Combination of enum iommu_hwpt_pgfault_perm
++ * @__reserved: Must be 0.
+ * @addr: Fault address
+ * @length: a hint of how much data the requestor is expecting to fetch. For
+ * example, if the PRI initiator knows it is going to do a 10MB
+@@ -752,7 +753,8 @@ struct iommu_hwpt_pgfault {
+ __u32 pasid;
+ __u32 grpid;
+ __u32 perm;
+- __u64 addr;
++ __u32 __reserved;
++ __aligned_u64 addr;
+ __u32 length;
+ __u32 cookie;
+ };
+diff --git a/include/uapi/linux/raid/md_p.h b/include/uapi/linux/raid/md_p.h
+index 5a43c23f53bfbf..ff47b6f0ba0f5f 100644
+--- a/include/uapi/linux/raid/md_p.h
++++ b/include/uapi/linux/raid/md_p.h
+@@ -233,7 +233,7 @@ struct mdp_superblock_1 {
+ char set_name[32]; /* set and interpreted by user-space */
+
+ __le64 ctime; /* lo 40 bits are seconds, top 24 are microseconds or 0*/
+- __le32 level; /* 0,1,4,5 */
++ __le32 level; /* 0,1,4,5, -1 (linear) */
+ __le32 layout; /* only for raid5 and raid10 currently */
+ __le64 size; /* used size of component devices, in 512byte sectors */
+
+diff --git a/include/uapi/linux/raid/md_u.h b/include/uapi/linux/raid/md_u.h
+index 7be89a4906e73e..a893010735fbad 100644
+--- a/include/uapi/linux/raid/md_u.h
++++ b/include/uapi/linux/raid/md_u.h
+@@ -103,6 +103,8 @@ typedef struct mdu_array_info_s {
+
+ } mdu_array_info_t;
+
++#define LEVEL_LINEAR (-1)
++
+ /* we need a value for 'no level specified' and 0
+ * means 'raid0', so we need something else. This is
+ * for internal use only
+diff --git a/include/ufs/ufs.h b/include/ufs/ufs.h
+index e594abe5d05fed..f0c6111160e7af 100644
+--- a/include/ufs/ufs.h
++++ b/include/ufs/ufs.h
+@@ -386,8 +386,8 @@ enum {
+
+ /* Possible values for dExtendedUFSFeaturesSupport */
+ enum {
+- UFS_DEV_LOW_TEMP_NOTIF = BIT(4),
+- UFS_DEV_HIGH_TEMP_NOTIF = BIT(5),
++ UFS_DEV_HIGH_TEMP_NOTIF = BIT(4),
++ UFS_DEV_LOW_TEMP_NOTIF = BIT(5),
+ UFS_DEV_EXT_TEMP_NOTIF = BIT(6),
+ UFS_DEV_HPB_SUPPORT = BIT(7),
+ UFS_DEV_WRITE_BOOSTER_SUP = BIT(8),
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index 20c5374e922ef5..d5e43a1dcff226 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -1297,7 +1297,6 @@ static inline void ufshcd_rmwl(struct ufs_hba *hba, u32 mask, u32 val, u32 reg)
+ void ufshcd_enable_irq(struct ufs_hba *hba);
+ void ufshcd_disable_irq(struct ufs_hba *hba);
+ int ufshcd_alloc_host(struct device *, struct ufs_hba **);
+-void ufshcd_dealloc_host(struct ufs_hba *);
+ int ufshcd_hba_enable(struct ufs_hba *hba);
+ int ufshcd_init(struct ufs_hba *, void __iomem *, unsigned int);
+ int ufshcd_link_recovery(struct ufs_hba *hba);
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 7f549be9abd1e6..3974c417fe2644 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -1697,6 +1697,11 @@ int io_connect(struct io_kiocb *req, unsigned int issue_flags)
+ int ret;
+ bool force_nonblock = issue_flags & IO_URING_F_NONBLOCK;
+
++ if (unlikely(req->flags & REQ_F_FAIL)) {
++ ret = -ECONNRESET;
++ goto out;
++ }
++
+ file_flags = force_nonblock ? O_NONBLOCK : 0;
+
+ ret = __sys_connect_file(req->file, &io->addr, connect->addr_len,
+diff --git a/io_uring/poll.c b/io_uring/poll.c
+index 1f63b60e85e7c0..b93e9ebdd87c8f 100644
+--- a/io_uring/poll.c
++++ b/io_uring/poll.c
+@@ -315,6 +315,8 @@ static int io_poll_check_events(struct io_kiocb *req, struct io_tw_state *ts)
+ return IOU_POLL_REISSUE;
+ }
+ }
++ if (unlikely(req->cqe.res & EPOLLERR))
++ req_set_fail(req);
+ if (req->apoll_events & EPOLLONESHOT)
+ return IOU_POLL_DONE;
+
+@@ -357,8 +359,10 @@ void io_poll_task_func(struct io_kiocb *req, struct io_tw_state *ts)
+
+ ret = io_poll_check_events(req, ts);
+ if (ret == IOU_POLL_NO_ACTION) {
++ io_kbuf_recycle(req, 0);
+ return;
+ } else if (ret == IOU_POLL_REQUEUE) {
++ io_kbuf_recycle(req, 0);
+ __io_poll_execute(req, 0);
+ return;
+ }
+diff --git a/kernel/locking/test-ww_mutex.c b/kernel/locking/test-ww_mutex.c
+index 10a5736a21c222..b5c2a2de457888 100644
+--- a/kernel/locking/test-ww_mutex.c
++++ b/kernel/locking/test-ww_mutex.c
+@@ -402,7 +402,7 @@ static inline u32 prandom_u32_below(u32 ceil)
+ static int *get_random_order(int count)
+ {
+ int *order;
+- int n, r, tmp;
++ int n, r;
+
+ order = kmalloc_array(count, sizeof(*order), GFP_KERNEL);
+ if (!order)
+@@ -413,11 +413,8 @@ static int *get_random_order(int count)
+
+ for (n = count - 1; n > 1; n--) {
+ r = prandom_u32_below(n + 1);
+- if (r != n) {
+- tmp = order[n];
+- order[n] = order[r];
+- order[r] = tmp;
+- }
++ if (r != n)
++ swap(order[n], order[r]);
+ }
+
+ return order;
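
The get_random_order() cleanup is a textbook Fisher-Yates shuffle with the manual three-variable swap replaced by the kernel's swap() macro. The same algorithm as standalone C for reference (rand() % n is only approximately uniform, unlike prandom_u32_below(); the classic loop also runs down to index 1, one step further than the test code above):

#include <stdlib.h>

#define swap(a, b) do { int tmp_ = (a); (a) = (b); (b) = tmp_; } while (0)

/* Fisher-Yates: with a uniform source, every permutation is equally likely. */
static void shuffle(int *order, int count)
{
	for (int n = count - 1; n > 0; n--) {
		int r = rand() % (n + 1);	/* stand-in for prandom_u32_below() */

		if (r != n)
			swap(order[n], order[r]);
	}
}
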
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 7530df62ff7cbc..3b75f6e8410b9d 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -523,7 +523,7 @@ static struct latched_seq clear_seq = {
+ /* record buffer */
+ #define LOG_ALIGN __alignof__(unsigned long)
+ #define __LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT)
+-#define LOG_BUF_LEN_MAX (u32)(1 << 31)
++#define LOG_BUF_LEN_MAX ((u32)1 << 31)
+ static char __log_buf[__LOG_BUF_LEN] __aligned(LOG_ALIGN);
+ static char *log_buf = __log_buf;
+ static u32 log_buf_len = __LOG_BUF_LEN;
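
The LOG_BUF_LEN_MAX change fixes a precedence problem: in (u32)(1 << 31) the shift is evaluated first on a signed 32-bit int, which overflows and is undefined behavior, and the cast only applies to the already-poisoned result. Moving the cast inside, ((u32)1 << 31), makes the shift operand unsigned so the whole expression is well defined:

#include <stdint.h>

/* Undefined: 1 << 31 overflows signed int before the cast applies. */
/* #define LOG_BUF_LEN_MAX (uint32_t)(1 << 31) */

/* Well defined: the operand is unsigned before the shift happens. */
#define LOG_BUF_LEN_MAX ((uint32_t)1 << 31)
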
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index aba41c69f09c42..5d67f41d05d40b 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -766,13 +766,15 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
+ #endif
+ #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
+ if (static_key_false((¶virt_steal_rq_enabled))) {
+- steal = paravirt_steal_clock(cpu_of(rq));
++ u64 prev_steal;
++
++ steal = prev_steal = paravirt_steal_clock(cpu_of(rq));
+ steal -= rq->prev_steal_time_rq;
+
+ if (unlikely(steal > delta))
+ steal = delta;
+
+- rq->prev_steal_time_rq += steal;
++ rq->prev_steal_time_rq = prev_steal;
+ delta -= steal;
+ }
+ #endif
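
The steal-time hunk changes how the bookmark rq->prev_steal_time_rq advances. Previously it accumulated the clamped delta, so any steal time trimmed by the steal > delta clamp stayed pending and was deducted again from subsequent updates, which could zero a task's clock delta for many ticks after a large steal burst; storing the raw counter value instead forgets the excess at once. A small runnable C model contrasting the two strategies (all values invented):

#include <stdint.h>
#include <stdio.h>

/* Returns the time credited to the task after steal accounting. */
static uint64_t account(uint64_t *prev, uint64_t steal_clock, uint64_t delta,
			int drop_excess)
{
	uint64_t steal = steal_clock - *prev;

	if (steal > delta)
		steal = delta;		/* never steal more than this update's delta */

	if (drop_excess)
		*prev = steal_clock;	/* new behavior: excess steal is forgotten */
	else
		*prev += steal;		/* old behavior: excess stays pending */

	return delta - steal;
}

int main(void)
{
	uint64_t prev_carry = 0, prev_drop = 0;

	/* One burst of 1000 units of steal, then quiet updates of delta 10. */
	for (int tick = 1; tick <= 3; tick++)
		printf("tick %d: carry-over credits %llu, drop-excess credits %llu\n",
		       tick,
		       (unsigned long long)account(&prev_carry, 1000, 10, 0),
		       (unsigned long long)account(&prev_drop, 1000, 10, 1));

	/* carry-over keeps printing 0 for ~100 ticks; drop-excess recovers
	 * right after the first update. */
	return 0;
}
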
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 65e7be64487202..ddc096d6b0c203 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -5481,6 +5481,15 @@ static __always_inline void return_cfs_rq_runtime(struct cfs_rq *cfs_rq);
+ static void set_delayed(struct sched_entity *se)
+ {
+ se->sched_delayed = 1;
++
++ /*
++	 * A delayed se representing a cfs_rq (not a task) has no tasks
++	 * queued on it. Do not adjust h_nr_runnable, since
++	 * dequeue_entities() does that accounting for blocked tasks.
++ */
++ if (!entity_is_task(se))
++ return;
++
+ for_each_sched_entity(se) {
+ struct cfs_rq *cfs_rq = cfs_rq_of(se);
+
+@@ -5493,6 +5502,16 @@ static void set_delayed(struct sched_entity *se)
+ static void clear_delayed(struct sched_entity *se)
+ {
+ se->sched_delayed = 0;
++
++ /*
++	 * A delayed se representing a cfs_rq (not a task) has no tasks
++	 * queued on it. Do not adjust h_nr_runnable, since a dequeue has
++	 * already accounted for it or an enqueue of a task below it
++	 * will account for it in enqueue_task_fair().
++ */
++ if (!entity_is_task(se))
++ return;
++
+ for_each_sched_entity(se) {
+ struct cfs_rq *cfs_rq = cfs_rq_of(se);
+
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index 385d48293a5fa1..0cd1f8b5a102ee 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -749,6 +749,15 @@ static bool seccomp_is_const_allow(struct sock_fprog_kern *fprog,
+ if (WARN_ON_ONCE(!fprog))
+ return false;
+
++ /* Our single exception to filtering. */
++#ifdef __NR_uretprobe
++#ifdef SECCOMP_ARCH_COMPAT
++ if (sd->arch == SECCOMP_ARCH_NATIVE)
++#endif
++ if (sd->nr == __NR_uretprobe)
++ return true;
++#endif
++
+ for (pc = 0; pc < fprog->len; pc++) {
+ struct sock_filter *insn = &fprog->filter[pc];
+ u16 code = insn->code;
+@@ -1023,6 +1032,9 @@ static inline void seccomp_log(unsigned long syscall, long signr, u32 action,
+ */
+ static const int mode1_syscalls[] = {
+ __NR_seccomp_read, __NR_seccomp_write, __NR_seccomp_exit, __NR_seccomp_sigreturn,
++#ifdef __NR_uretprobe
++ __NR_uretprobe,
++#endif
+ -1, /* negative terminated */
+ };
+
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index ee20f5032a0366..d116c28564f26c 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -58,6 +58,8 @@
+ #define HRTIMER_ACTIVE_SOFT (HRTIMER_ACTIVE_HARD << MASK_SHIFT)
+ #define HRTIMER_ACTIVE_ALL (HRTIMER_ACTIVE_SOFT | HRTIMER_ACTIVE_HARD)
+
++static void retrigger_next_event(void *arg);
++
+ /*
+ * The timer bases:
+ *
+@@ -111,7 +113,8 @@ DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) =
+ .clockid = CLOCK_TAI,
+ .get_time = &ktime_get_clocktai,
+ },
+- }
++ },
++ .csd = CSD_INIT(retrigger_next_event, NULL)
+ };
+
+ static const int hrtimer_clock_to_base_table[MAX_CLOCKS] = {
+@@ -124,6 +127,14 @@ static const int hrtimer_clock_to_base_table[MAX_CLOCKS] = {
+ [CLOCK_TAI] = HRTIMER_BASE_TAI,
+ };
+
++static inline bool hrtimer_base_is_online(struct hrtimer_cpu_base *base)
++{
++ if (!IS_ENABLED(CONFIG_HOTPLUG_CPU))
++ return true;
++ else
++ return likely(base->online);
++}
++
+ /*
+ * Functions and macros which are different for UP/SMP systems are kept in a
+ * single place
+@@ -183,27 +194,54 @@ struct hrtimer_clock_base *lock_hrtimer_base(const struct hrtimer *timer,
+ }
+
+ /*
+- * We do not migrate the timer when it is expiring before the next
+- * event on the target cpu. When high resolution is enabled, we cannot
+- * reprogram the target cpu hardware and we would cause it to fire
+- * late. To keep it simple, we handle the high resolution enabled and
+- * disabled case similar.
++ * Check if the elected target is suitable considering its next
++ * event and the hotplug state of the current CPU.
++ *
++ * If the elected target is remote and its next event is after the timer
++ * to queue, then a remote reprogram is necessary. However there is no
++ * guarantee the IPI handling the operation would arrive in time to meet
++ * the high resolution deadline. In this case the local CPU becomes a
++ * preferred target, unless it is offline.
++ *
++ * High and low resolution modes are handled the same way for simplicity.
+ *
+ * Called with cpu_base->lock of target cpu held.
+ */
+-static int
+-hrtimer_check_target(struct hrtimer *timer, struct hrtimer_clock_base *new_base)
++static bool hrtimer_suitable_target(struct hrtimer *timer, struct hrtimer_clock_base *new_base,
++ struct hrtimer_cpu_base *new_cpu_base,
++ struct hrtimer_cpu_base *this_cpu_base)
+ {
+ ktime_t expires;
+
++ /*
++ * The local CPU clockevent can be reprogrammed. Also get_target_base()
++ * guarantees it is online.
++ */
++ if (new_cpu_base == this_cpu_base)
++ return true;
++
++ /*
++ * The offline local CPU can't be the default target if the
++ * next remote target event is after this timer. Keep the
++	 * elected new base. An IPI will be issued to reprogram
++ * it as a last resort.
++ */
++ if (!hrtimer_base_is_online(this_cpu_base))
++ return true;
++
+ expires = ktime_sub(hrtimer_get_expires(timer), new_base->offset);
+- return expires < new_base->cpu_base->expires_next;
++
++ return expires >= new_base->cpu_base->expires_next;
+ }
+
+-static inline
+-struct hrtimer_cpu_base *get_target_base(struct hrtimer_cpu_base *base,
+- int pinned)
++static inline struct hrtimer_cpu_base *get_target_base(struct hrtimer_cpu_base *base, int pinned)
+ {
++ if (!hrtimer_base_is_online(base)) {
++ int cpu = cpumask_any_and(cpu_online_mask, housekeeping_cpumask(HK_TYPE_TIMER));
++
++ return &per_cpu(hrtimer_bases, cpu);
++ }
++
+ #if defined(CONFIG_SMP) && defined(CONFIG_NO_HZ_COMMON)
+ if (static_branch_likely(&timers_migration_enabled) && !pinned)
+ return &per_cpu(hrtimer_bases, get_nohz_timer_target());
+@@ -254,8 +292,8 @@ switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base,
+ raw_spin_unlock(&base->cpu_base->lock);
+ raw_spin_lock(&new_base->cpu_base->lock);
+
+- if (new_cpu_base != this_cpu_base &&
+- hrtimer_check_target(timer, new_base)) {
++ if (!hrtimer_suitable_target(timer, new_base, new_cpu_base,
++ this_cpu_base)) {
+ raw_spin_unlock(&new_base->cpu_base->lock);
+ raw_spin_lock(&base->cpu_base->lock);
+ new_cpu_base = this_cpu_base;
+@@ -264,8 +302,7 @@ switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base,
+ }
+ WRITE_ONCE(timer->base, new_base);
+ } else {
+- if (new_cpu_base != this_cpu_base &&
+- hrtimer_check_target(timer, new_base)) {
++ if (!hrtimer_suitable_target(timer, new_base, new_cpu_base, this_cpu_base)) {
+ new_cpu_base = this_cpu_base;
+ goto again;
+ }
+@@ -725,8 +762,6 @@ static inline int hrtimer_is_hres_enabled(void)
+ return hrtimer_hres_enabled;
+ }
+
+-static void retrigger_next_event(void *arg);
+-
+ /*
+ * Switch to high resolution mode
+ */
+@@ -1215,6 +1250,7 @@ static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ u64 delta_ns, const enum hrtimer_mode mode,
+ struct hrtimer_clock_base *base)
+ {
++ struct hrtimer_cpu_base *this_cpu_base = this_cpu_ptr(&hrtimer_bases);
+ struct hrtimer_clock_base *new_base;
+ bool force_local, first;
+
+@@ -1226,9 +1262,15 @@ static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ * and enforce reprogramming after it is queued no matter whether
+ * it is the new first expiring timer again or not.
+ */
+- force_local = base->cpu_base == this_cpu_ptr(&hrtimer_bases);
++ force_local = base->cpu_base == this_cpu_base;
+ force_local &= base->cpu_base->next_timer == timer;
+
++ /*
++ * Don't force local queuing if this enqueue happens on an unplugged
++ * CPU after hrtimer_cpu_dying() has been invoked.
++ */
++ force_local &= this_cpu_base->online;
++
+ /*
+ * Remove an active timer from the queue. In case it is not queued
+ * on the current CPU, make sure that remove_hrtimer() updates the
+@@ -1258,8 +1300,27 @@ static int __hrtimer_start_range_ns(struct hrtimer *timer, ktime_t tim,
+ }
+
+ first = enqueue_hrtimer(timer, new_base, mode);
+- if (!force_local)
+- return first;
++ if (!force_local) {
++ /*
++ * If the current CPU base is online, then the timer is
++ * never queued on a remote CPU if it would be the first
++ * expiring timer there.
++ */
++ if (hrtimer_base_is_online(this_cpu_base))
++ return first;
++
++ /*
++ * Timer was enqueued remotely because the current base is
++ * already offline. If the timer is the first to expire,
++ * kick the remote CPU to reprogram the clock event.
++ */
++ if (first) {
++ struct hrtimer_cpu_base *new_cpu_base = new_base->cpu_base;
++
++ smp_call_function_single_async(new_cpu_base->cpu, &new_cpu_base->csd);
++ }
++ return 0;
++ }
+
+ /*
+ * Timer was forced to stay on the current CPU to avoid
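
[The rewrite above flips the old negative check in hrtimer_check_target()
("expires before the remote base's next event, so reject") into a positive
hrtimer_suitable_target() with an explicit escape hatch for an offline local
CPU. The decision, distilled into a standalone sketch (illustrative C, with
the kernel's structures simplified away):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Simplified restatement of hrtimer_suitable_target():
     *  - the local base is always suitable (its clockevent can be
     *    reprogrammed directly, and get_target_base() only hands out
     *    an online local base);
     *  - if the local CPU is offline it cannot serve as the fallback,
     *    so the elected remote base is kept and an IPI reprograms it;
     *  - otherwise a remote base is suitable only when the timer does
     *    not expire before that base's next event.
     */
    static bool suitable_target(bool target_is_local, bool local_online,
                                int64_t expires, int64_t target_next_event)
    {
        if (target_is_local)
            return true;
        if (!local_online)
            return true;
        return expires >= target_next_event;
    }
]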
+diff --git a/kernel/time/timer_migration.c b/kernel/time/timer_migration.c
+index 371a62a749aad3..72538baa7a1fb0 100644
+--- a/kernel/time/timer_migration.c
++++ b/kernel/time/timer_migration.c
+@@ -1668,6 +1668,9 @@ static int tmigr_setup_groups(unsigned int cpu, unsigned int node)
+
+ } while (i < tmigr_hierarchy_levels);
+
++ /* Assert single root */
++ WARN_ON_ONCE(!err && !group->parent && !list_is_singular(&tmigr_level_list[top]));
++
+ while (i > 0) {
+ group = stack[--i];
+
+@@ -1709,7 +1712,12 @@ static int tmigr_setup_groups(unsigned int cpu, unsigned int node)
+ WARN_ON_ONCE(top == 0);
+
+ lvllist = &tmigr_level_list[top];
+- if (group->num_children == 1 && list_is_singular(lvllist)) {
++
++ /*
++ * Newly created root level should have accounted the upcoming
++ * CPU's child group and pre-accounted the old root.
++ */
++ if (group->num_children == 2 && list_is_singular(lvllist)) {
+ /*
+ * The target CPU must never do the prepare work, except
+ * on early boot when the boot CPU is the target. Otherwise
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 703978b2d557d7..0f8f3ffc6f0904 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -4398,8 +4398,13 @@ rb_reserve_next_event(struct trace_buffer *buffer,
+ int nr_loops = 0;
+ int add_ts_default;
+
+- /* ring buffer does cmpxchg, make sure it is safe in NMI context */
+- if (!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) &&
++ /*
++ * The ring buffer does cmpxchg as well as atomic64 operations
++ * (for which some archs fall back to locking), so make sure
++ * this is safe in NMI context
++ */
++ if ((!IS_ENABLED(CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG) ||
++ IS_ENABLED(CONFIG_GENERIC_ATOMIC64)) &&
+ (unlikely(in_nmi()))) {
+ return NULL;
+ }
+@@ -7059,7 +7064,7 @@ static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
+ }
+
+ while (p < nr_pages) {
+- struct page *page = virt_to_page((void *)cpu_buffer->subbuf_ids[s]);
++ struct page *page;
+ int off = 0;
+
+ if (WARN_ON_ONCE(s >= nr_subbufs)) {
+@@ -7067,6 +7072,8 @@ static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
+ goto out;
+ }
+
++ page = virt_to_page((void *)cpu_buffer->subbuf_ids[s]);
++
+ for (; off < (1 << (subbuf_order)); off++, page++) {
+ if (p >= nr_pages)
+ break;
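
[The second ring_buffer hunk is a pure check-before-use reorder:
virt_to_page() was evaluated on subbuf_ids[s] before s was validated against
nr_subbufs. The same shape in a self-contained sketch (hypothetical table,
not kernel code):

    #include <stdio.h>

    static const char *names[] = { "sub0", "sub1", "sub2" };

    /* Validate the index first, then translate it. */
    static const char *subbuf_name(unsigned int s, unsigned int nr)
    {
        if (s >= nr)
            return NULL;
        return names[s];
    }

    int main(void)
    {
        printf("%s\n", subbuf_name(1, 3));         /* "sub1" */
        printf("%d\n", subbuf_name(7, 3) == NULL); /* 1: caught, no UB */
        return 0;
    }
]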
+diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
+index a569daaac4c4ff..ebb61ddca749d8 100644
+--- a/kernel/trace/trace_functions_graph.c
++++ b/kernel/trace/trace_functions_graph.c
+@@ -150,7 +150,7 @@ int trace_graph_entry(struct ftrace_graph_ent *trace,
+ * returning from the function.
+ */
+ if (ftrace_graph_notrace_addr(trace->func)) {
+- *task_var |= TRACE_GRAPH_NOTRACE_BIT;
++ *task_var |= TRACE_GRAPH_NOTRACE;
+ /*
+ * Need to return 1 to have the return called
+ * that will clear the NOTRACE bit.
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index a50ed23bee777b..032fdeba37d350 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -1235,6 +1235,8 @@ static void trace_sched_migrate_callback(void *data, struct task_struct *p, int
+ }
+ }
+
++static bool monitor_enabled;
++
+ static int register_migration_monitor(void)
+ {
+ int ret = 0;
+@@ -1243,16 +1245,25 @@ static int register_migration_monitor(void)
+ * Timerlat thread migration check is only required when running timerlat in user-space.
+ * Thus, enable callback only if timerlat is set with no workload.
+ */
+- if (timerlat_enabled() && !test_bit(OSN_WORKLOAD, &osnoise_options))
++ if (timerlat_enabled() && !test_bit(OSN_WORKLOAD, &osnoise_options)) {
++ if (WARN_ON_ONCE(monitor_enabled))
++ return 0;
++
+ ret = register_trace_sched_migrate_task(trace_sched_migrate_callback, NULL);
++ if (!ret)
++ monitor_enabled = true;
++ }
+
+ return ret;
+ }
+
+ static void unregister_migration_monitor(void)
+ {
+- if (timerlat_enabled() && !test_bit(OSN_WORKLOAD, &osnoise_options))
+- unregister_trace_sched_migrate_task(trace_sched_migrate_callback, NULL);
++ if (!monitor_enabled)
++ return;
++
++ unregister_trace_sched_migrate_task(trace_sched_migrate_callback, NULL);
++ monitor_enabled = false;
+ }
+ #else
+ static int register_migration_monitor(void)
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 3f9c238bb58ea3..e48375fe5a50ce 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -1511,7 +1511,7 @@ config LOCKDEP_SMALL
+ config LOCKDEP_BITS
+ int "Bitsize for MAX_LOCKDEP_ENTRIES"
+ depends on LOCKDEP && !LOCKDEP_SMALL
+- range 10 30
++ range 10 24
+ default 15
+ help
+ Try increasing this value if you hit "BUG: MAX_LOCKDEP_ENTRIES too low!" message.
+@@ -1527,7 +1527,7 @@ config LOCKDEP_CHAINS_BITS
+ config LOCKDEP_STACK_TRACE_BITS
+ int "Bitsize for MAX_STACK_TRACE_ENTRIES"
+ depends on LOCKDEP && !LOCKDEP_SMALL
+- range 10 30
++ range 10 26
+ default 19
+ help
+ Try increasing this value if you hit "BUG: MAX_STACK_TRACE_ENTRIES too low!" message.
+@@ -1535,7 +1535,7 @@ config LOCKDEP_STACK_TRACE_BITS
+ config LOCKDEP_STACK_TRACE_HASH_BITS
+ int "Bitsize for STACK_TRACE_HASH_SIZE"
+ depends on LOCKDEP && !LOCKDEP_SMALL
+- range 10 30
++ range 10 26
+ default 14
+ help
+ Try increasing this value if you need large STACK_TRACE_HASH_SIZE.
+@@ -1543,7 +1543,7 @@ config LOCKDEP_STACK_TRACE_HASH_BITS
+ config LOCKDEP_CIRCULAR_QUEUE_BITS
+ int "Bitsize for elements in circular_queue struct"
+ depends on LOCKDEP
+- range 10 30
++ range 10 26
+ default 12
+ help
+ Try increasing this value if you hit "lockdep bfs error:-1" warning due to __cq_enqueue() failure.
+diff --git a/lib/atomic64.c b/lib/atomic64.c
+index caf895789a1ee6..1a72bba36d2430 100644
+--- a/lib/atomic64.c
++++ b/lib/atomic64.c
+@@ -25,15 +25,15 @@
+ * Ensure each lock is in a separate cacheline.
+ */
+ static union {
+- raw_spinlock_t lock;
++ arch_spinlock_t lock;
+ char pad[L1_CACHE_BYTES];
+ } atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp = {
+ [0 ... (NR_LOCKS - 1)] = {
+- .lock = __RAW_SPIN_LOCK_UNLOCKED(atomic64_lock.lock),
++ .lock = __ARCH_SPIN_LOCK_UNLOCKED,
+ },
+ };
+
+-static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
++static inline arch_spinlock_t *lock_addr(const atomic64_t *v)
+ {
+ unsigned long addr = (unsigned long) v;
+
+@@ -45,12 +45,14 @@ static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
+ s64 generic_atomic64_read(const atomic64_t *v)
+ {
+ unsigned long flags;
+- raw_spinlock_t *lock = lock_addr(v);
++ arch_spinlock_t *lock = lock_addr(v);
+ s64 val;
+
+- raw_spin_lock_irqsave(lock, flags);
++ local_irq_save(flags);
++ arch_spin_lock(lock);
+ val = v->counter;
+- raw_spin_unlock_irqrestore(lock, flags);
++ arch_spin_unlock(lock);
++ local_irq_restore(flags);
+ return val;
+ }
+ EXPORT_SYMBOL(generic_atomic64_read);
+@@ -58,11 +60,13 @@ EXPORT_SYMBOL(generic_atomic64_read);
+ void generic_atomic64_set(atomic64_t *v, s64 i)
+ {
+ unsigned long flags;
+- raw_spinlock_t *lock = lock_addr(v);
++ arch_spinlock_t *lock = lock_addr(v);
+
+- raw_spin_lock_irqsave(lock, flags);
++ local_irq_save(flags);
++ arch_spin_lock(lock);
+ v->counter = i;
+- raw_spin_unlock_irqrestore(lock, flags);
++ arch_spin_unlock(lock);
++ local_irq_restore(flags);
+ }
+ EXPORT_SYMBOL(generic_atomic64_set);
+
+@@ -70,11 +74,13 @@ EXPORT_SYMBOL(generic_atomic64_set);
+ void generic_atomic64_##op(s64 a, atomic64_t *v) \
+ { \
+ unsigned long flags; \
+- raw_spinlock_t *lock = lock_addr(v); \
++ arch_spinlock_t *lock = lock_addr(v); \
+ \
+- raw_spin_lock_irqsave(lock, flags); \
++ local_irq_save(flags); \
++ arch_spin_lock(lock); \
+ v->counter c_op a; \
+- raw_spin_unlock_irqrestore(lock, flags); \
++ arch_spin_unlock(lock); \
++ local_irq_restore(flags); \
+ } \
+ EXPORT_SYMBOL(generic_atomic64_##op);
+
+@@ -82,12 +88,14 @@ EXPORT_SYMBOL(generic_atomic64_##op);
+ s64 generic_atomic64_##op##_return(s64 a, atomic64_t *v) \
+ { \
+ unsigned long flags; \
+- raw_spinlock_t *lock = lock_addr(v); \
++ arch_spinlock_t *lock = lock_addr(v); \
+ s64 val; \
+ \
+- raw_spin_lock_irqsave(lock, flags); \
++ local_irq_save(flags); \
++ arch_spin_lock(lock); \
+ val = (v->counter c_op a); \
+- raw_spin_unlock_irqrestore(lock, flags); \
++ arch_spin_unlock(lock); \
++ local_irq_restore(flags); \
+ return val; \
+ } \
+ EXPORT_SYMBOL(generic_atomic64_##op##_return);
+@@ -96,13 +104,15 @@ EXPORT_SYMBOL(generic_atomic64_##op##_return);
+ s64 generic_atomic64_fetch_##op(s64 a, atomic64_t *v) \
+ { \
+ unsigned long flags; \
+- raw_spinlock_t *lock = lock_addr(v); \
++ arch_spinlock_t *lock = lock_addr(v); \
+ s64 val; \
+ \
+- raw_spin_lock_irqsave(lock, flags); \
++ local_irq_save(flags); \
++ arch_spin_lock(lock); \
+ val = v->counter; \
+ v->counter c_op a; \
+- raw_spin_unlock_irqrestore(lock, flags); \
++ arch_spin_unlock(lock); \
++ local_irq_restore(flags); \
+ return val; \
+ } \
+ EXPORT_SYMBOL(generic_atomic64_fetch_##op);
+@@ -131,14 +141,16 @@ ATOMIC64_OPS(xor, ^=)
+ s64 generic_atomic64_dec_if_positive(atomic64_t *v)
+ {
+ unsigned long flags;
+- raw_spinlock_t *lock = lock_addr(v);
++ arch_spinlock_t *lock = lock_addr(v);
+ s64 val;
+
+- raw_spin_lock_irqsave(lock, flags);
++ local_irq_save(flags);
++ arch_spin_lock(lock);
+ val = v->counter - 1;
+ if (val >= 0)
+ v->counter = val;
+- raw_spin_unlock_irqrestore(lock, flags);
++ arch_spin_unlock(lock);
++ local_irq_restore(flags);
+ return val;
+ }
+ EXPORT_SYMBOL(generic_atomic64_dec_if_positive);
+@@ -146,14 +158,16 @@ EXPORT_SYMBOL(generic_atomic64_dec_if_positive);
+ s64 generic_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
+ {
+ unsigned long flags;
+- raw_spinlock_t *lock = lock_addr(v);
++ arch_spinlock_t *lock = lock_addr(v);
+ s64 val;
+
+- raw_spin_lock_irqsave(lock, flags);
++ local_irq_save(flags);
++ arch_spin_lock(lock);
+ val = v->counter;
+ if (val == o)
+ v->counter = n;
+- raw_spin_unlock_irqrestore(lock, flags);
++ arch_spin_unlock(lock);
++ local_irq_restore(flags);
+ return val;
+ }
+ EXPORT_SYMBOL(generic_atomic64_cmpxchg);
+@@ -161,13 +175,15 @@ EXPORT_SYMBOL(generic_atomic64_cmpxchg);
+ s64 generic_atomic64_xchg(atomic64_t *v, s64 new)
+ {
+ unsigned long flags;
+- raw_spinlock_t *lock = lock_addr(v);
++ arch_spinlock_t *lock = lock_addr(v);
+ s64 val;
+
+- raw_spin_lock_irqsave(lock, flags);
++ local_irq_save(flags);
++ arch_spin_lock(lock);
+ val = v->counter;
+ v->counter = new;
+- raw_spin_unlock_irqrestore(lock, flags);
++ arch_spin_unlock(lock);
++ local_irq_restore(flags);
+ return val;
+ }
+ EXPORT_SYMBOL(generic_atomic64_xchg);
+@@ -175,14 +191,16 @@ EXPORT_SYMBOL(generic_atomic64_xchg);
+ s64 generic_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+ {
+ unsigned long flags;
+- raw_spinlock_t *lock = lock_addr(v);
++ arch_spinlock_t *lock = lock_addr(v);
+ s64 val;
+
+- raw_spin_lock_irqsave(lock, flags);
++ local_irq_save(flags);
++ arch_spin_lock(lock);
+ val = v->counter;
+ if (val != u)
+ v->counter += a;
+- raw_spin_unlock_irqrestore(lock, flags);
++ arch_spin_unlock(lock);
++ local_irq_restore(flags);
+
+ return val;
+ }
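
[The conversion above swaps raw_spinlock_t for the lower-level
arch_spinlock_t plus explicit local_irq_save()/restore(), presumably to keep
lockdep and tracing hooks out of these last-resort atomic64 paths; the
address-hashed array of cacheline-padded locks is unchanged. A userspace
sketch of that lock-striping idea (illustrative; pthreads, constants made up,
and it uses the same GCC range-designator extension as the kernel code
above):

    #include <pthread.h>
    #include <stdint.h>

    #define NR_LOCKS  16
    #define CACHELINE 64

    /* One lock per stripe, padded so stripes never share a cache line. */
    static union {
        pthread_mutex_t lock;
        char pad[CACHELINE];
    } stripe[NR_LOCKS] = {
        [0 ... NR_LOCKS - 1] = { .lock = PTHREAD_MUTEX_INITIALIZER },
    };

    /* Hash the variable's address to a stable stripe. */
    static pthread_mutex_t *lock_for(const void *v)
    {
        uintptr_t addr = (uintptr_t)v;

        addr >>= 6; /* drop the bits shared within one cache line */
        return &stripe[addr % NR_LOCKS].lock;
    }

    static int64_t striped_add_return(int64_t *v, int64_t a)
    {
        pthread_mutex_t *l = lock_for(v);
        int64_t val;

        pthread_mutex_lock(l);
        val = (*v += a);
        pthread_mutex_unlock(l);
        return val;
    }

Unrelated variables almost always hash to different stripes, so they rarely
contend, while the same variable always maps to the same lock.]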
+diff --git a/lib/maple_tree.c b/lib/maple_tree.c
+index 0cbe913634be4b..8d73ccf66f3aa0 100644
+--- a/lib/maple_tree.c
++++ b/lib/maple_tree.c
+@@ -1849,11 +1849,11 @@ static inline int mab_no_null_split(struct maple_big_node *b_node,
+ * Return: The first split location. The middle split is set in @mid_split.
+ */
+ static inline int mab_calc_split(struct ma_state *mas,
+- struct maple_big_node *bn, unsigned char *mid_split, unsigned long min)
++ struct maple_big_node *bn, unsigned char *mid_split)
+ {
+ unsigned char b_end = bn->b_end;
+ int split = b_end / 2; /* Assume equal split. */
+- unsigned char slot_min, slot_count = mt_slots[bn->type];
++ unsigned char slot_count = mt_slots[bn->type];
+
+ /*
+ * To support gap tracking, all NULL entries are kept together and a node cannot
+@@ -1886,18 +1886,7 @@ static inline int mab_calc_split(struct ma_state *mas,
+ split = b_end / 3;
+ *mid_split = split * 2;
+ } else {
+- slot_min = mt_min_slots[bn->type];
+-
+ *mid_split = 0;
+- /*
+- * Avoid having a range less than the slot count unless it
+- * causes one node to be deficient.
+- * NOTE: mt_min_slots is 1 based, b_end and split are zero.
+- */
+- while ((split < slot_count - 1) &&
+- ((bn->pivot[split] - min) < slot_count - 1) &&
+- (b_end - split > slot_min))
+- split++;
+ }
+
+ /* Avoid ending a node on a NULL entry */
+@@ -2366,7 +2355,7 @@ static inline struct maple_enode
+ static inline unsigned char mas_mab_to_node(struct ma_state *mas,
+ struct maple_big_node *b_node, struct maple_enode **left,
+ struct maple_enode **right, struct maple_enode **middle,
+- unsigned char *mid_split, unsigned long min)
++ unsigned char *mid_split)
+ {
+ unsigned char split = 0;
+ unsigned char slot_count = mt_slots[b_node->type];
+@@ -2379,7 +2368,7 @@ static inline unsigned char mas_mab_to_node(struct ma_state *mas,
+ if (b_node->b_end < slot_count) {
+ split = b_node->b_end;
+ } else {
+- split = mab_calc_split(mas, b_node, mid_split, min);
++ split = mab_calc_split(mas, b_node, mid_split);
+ *right = mas_new_ma_node(mas, b_node);
+ }
+
+@@ -2866,7 +2855,7 @@ static void mas_spanning_rebalance(struct ma_state *mas,
+ mast->bn->b_end--;
+ mast->bn->type = mte_node_type(mast->orig_l->node);
+ split = mas_mab_to_node(mas, mast->bn, &left, &right, &middle,
+- &mid_split, mast->orig_l->min);
++ &mid_split);
+ mast_set_split_parents(mast, left, middle, right, split,
+ mid_split);
+ mast_cp_to_nodes(mast, left, middle, right, split, mid_split);
+@@ -3357,7 +3346,7 @@ static void mas_split(struct ma_state *mas, struct maple_big_node *b_node)
+ if (mas_push_data(mas, height, &mast, false))
+ break;
+
+- split = mab_calc_split(mas, b_node, &mid_split, prev_l_mas.min);
++ split = mab_calc_split(mas, b_node, &mid_split);
+ mast_split_data(&mast, mas, split);
+ /*
+ * Usually correct, mab_mas_cp in the above call overwrites
+diff --git a/mm/compaction.c b/mm/compaction.c
+index a2b16b08cbbff7..384e4672998e55 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -630,7 +630,8 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
+ if (PageCompound(page)) {
+ const unsigned int order = compound_order(page);
+
+- if (blockpfn + (1UL << order) <= end_pfn) {
++ if ((order <= MAX_PAGE_ORDER) &&
++ (blockpfn + (1UL << order) <= end_pfn)) {
+ blockpfn += (1UL << order) - 1;
+ page += (1UL << order) - 1;
+ nr_scanned += (1UL << order) - 1;
+diff --git a/mm/gup.c b/mm/gup.c
+index 7053f8114e0127..44c536904a83bb 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -2323,13 +2323,13 @@ static void pofs_unpin(struct pages_or_folios *pofs)
+ /*
+ * Returns the number of collected folios. Return value is always >= 0.
+ */
+-static unsigned long collect_longterm_unpinnable_folios(
++static void collect_longterm_unpinnable_folios(
+ struct list_head *movable_folio_list,
+ struct pages_or_folios *pofs)
+ {
+- unsigned long i, collected = 0;
+ struct folio *prev_folio = NULL;
+ bool drain_allow = true;
++ unsigned long i;
+
+ for (i = 0; i < pofs->nr_entries; i++) {
+ struct folio *folio = pofs_get_folio(pofs, i);
+@@ -2341,8 +2341,6 @@ static unsigned long collect_longterm_unpinnable_folios(
+ if (folio_is_longterm_pinnable(folio))
+ continue;
+
+- collected++;
+-
+ if (folio_is_device_coherent(folio))
+ continue;
+
+@@ -2364,8 +2362,6 @@ static unsigned long collect_longterm_unpinnable_folios(
+ NR_ISOLATED_ANON + folio_is_file_lru(folio),
+ folio_nr_pages(folio));
+ }
+-
+- return collected;
+ }
+
+ /*
+@@ -2442,11 +2438,9 @@ static long
+ check_and_migrate_movable_pages_or_folios(struct pages_or_folios *pofs)
+ {
+ LIST_HEAD(movable_folio_list);
+- unsigned long collected;
+
+- collected = collect_longterm_unpinnable_folios(&movable_folio_list,
+- pofs);
+- if (!collected)
++ collect_longterm_unpinnable_folios(&movable_folio_list, pofs);
++ if (list_empty(&movable_folio_list))
+ return 0;
+
+ return migrate_longterm_unpinnable_folios(&movable_folio_list, pofs);
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 4a8a4f3535caf7..bdee6d3ab0e7e3 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -1394,8 +1394,7 @@ static unsigned long available_huge_pages(struct hstate *h)
+
+ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
+ struct vm_area_struct *vma,
+- unsigned long address, int avoid_reserve,
+- long chg)
++ unsigned long address, long chg)
+ {
+ struct folio *folio = NULL;
+ struct mempolicy *mpol;
+@@ -1411,10 +1410,6 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
+ if (!vma_has_reserves(vma, chg) && !available_huge_pages(h))
+ goto err;
+
+- /* If reserves cannot be used, ensure enough pages are in the pool */
+- if (avoid_reserve && !available_huge_pages(h))
+- goto err;
+-
+ gfp_mask = htlb_alloc_mask(h);
+ nid = huge_node(vma, address, gfp_mask, &mpol, &nodemask);
+
+@@ -1430,7 +1425,7 @@ static struct folio *dequeue_hugetlb_folio_vma(struct hstate *h,
+ folio = dequeue_hugetlb_folio_nodemask(h, gfp_mask,
+ nid, nodemask);
+
+- if (folio && !avoid_reserve && vma_has_reserves(vma, chg)) {
++ if (folio && vma_has_reserves(vma, chg)) {
+ folio_set_hugetlb_restore_reserve(folio);
+ h->resv_huge_pages--;
+ }
+@@ -3006,17 +3001,6 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+ gbl_chg = hugepage_subpool_get_pages(spool, 1);
+ if (gbl_chg < 0)
+ goto out_end_reservation;
+-
+- /*
+- * Even though there was no reservation in the region/reserve
+- * map, there could be reservations associated with the
+- * subpool that can be used. This would be indicated if the
+- * return value of hugepage_subpool_get_pages() is zero.
+- * However, if avoid_reserve is specified we still avoid even
+- * the subpool reservations.
+- */
+- if (avoid_reserve)
+- gbl_chg = 1;
+ }
+
+ /* If this allocation is not consuming a reservation, charge it now.
+@@ -3039,7 +3023,7 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+ * from the global free pool (global change). gbl_chg == 0 indicates
+ * a reservation exists for the allocation.
+ */
+- folio = dequeue_hugetlb_folio_vma(h, vma, addr, avoid_reserve, gbl_chg);
++ folio = dequeue_hugetlb_folio_vma(h, vma, addr, gbl_chg);
+ if (!folio) {
+ spin_unlock_irq(&hugetlb_lock);
+ folio = alloc_buddy_hugetlb_folio_with_mpol(h, vma, addr);
+@@ -3287,7 +3271,7 @@ static void __init gather_bootmem_prealloc(void)
+ .thread_fn = gather_bootmem_prealloc_parallel,
+ .fn_arg = NULL,
+ .start = 0,
+- .size = num_node_state(N_MEMORY),
++ .size = nr_node_ids,
+ .align = 1,
+ .min_chunk = 1,
+ .max_threads = num_node_state(N_MEMORY),
+diff --git a/mm/kfence/core.c b/mm/kfence/core.c
+index 67fc321db79b7e..102048821c222a 100644
+--- a/mm/kfence/core.c
++++ b/mm/kfence/core.c
+@@ -21,6 +21,7 @@
+ #include <linux/log2.h>
+ #include <linux/memblock.h>
+ #include <linux/moduleparam.h>
++#include <linux/nodemask.h>
+ #include <linux/notifier.h>
+ #include <linux/panic_notifier.h>
+ #include <linux/random.h>
+@@ -1084,6 +1085,7 @@ void *__kfence_alloc(struct kmem_cache *s, size_t size, gfp_t flags)
+ * properties (e.g. reside in DMAable memory).
+ */
+ if ((flags & GFP_ZONEMASK) ||
++ ((flags & __GFP_THISNODE) && num_online_nodes() > 1) ||
+ (s->flags & (SLAB_CACHE_DMA | SLAB_CACHE_DMA32))) {
+ atomic_long_inc(&counters[KFENCE_COUNTER_SKIP_INCOMPAT]);
+ return NULL;
+diff --git a/mm/kmemleak.c b/mm/kmemleak.c
+index 5f878ee05ff80b..44bb798423dd39 100644
+--- a/mm/kmemleak.c
++++ b/mm/kmemleak.c
+@@ -1650,7 +1650,7 @@ static void kmemleak_scan(void)
+ unsigned long phys = object->pointer;
+
+ if (PHYS_PFN(phys) < min_low_pfn ||
+- PHYS_PFN(phys + object->size) >= max_low_pfn)
++ PHYS_PFN(phys + object->size) > max_low_pfn)
+ __paint_it(object, KMEMLEAK_BLACK);
+ }
+
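
[The one-character kmemleak change fixes a half-open-interval boundary test:
phys + object->size is the exclusive end of the object, so an object ending
exactly at the boundary is still valid and only ">" signals an overrun. A
generic illustration (made-up numbers, not kmemleak's exact PFN arithmetic):

    #include <stdbool.h>
    #include <stdio.h>

    /* An object occupies [start, start + size); the limit is exclusive. */
    static bool out_of_range(unsigned long start, unsigned long size,
                             unsigned long limit)
    {
        return start + size > limit; /* the old ">=" rejected ends at limit */
    }

    int main(void)
    {
        printf("%d\n", out_of_range(90, 10, 100)); /* 0: ends exactly at limit */
        printf("%d\n", out_of_range(95, 10, 100)); /* 1: overruns the limit */
        return 0;
    }
]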
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index d81d667907448c..77d015d5db0c5b 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -1053,7 +1053,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
+ struct folio_batch free_folios;
+ LIST_HEAD(ret_folios);
+ LIST_HEAD(demote_folios);
+- unsigned int nr_reclaimed = 0;
++ unsigned int nr_reclaimed = 0, nr_demoted = 0;
+ unsigned int pgactivate = 0;
+ bool do_demote_pass;
+ struct swap_iocb *plug = NULL;
+@@ -1522,8 +1522,9 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
+ /* 'folio_list' is always empty here */
+
+ /* Migrate folios selected for demotion */
+- stat->nr_demoted = demote_folio_list(&demote_folios, pgdat);
+- nr_reclaimed += stat->nr_demoted;
++ nr_demoted = demote_folio_list(&demote_folios, pgdat);
++ nr_reclaimed += nr_demoted;
++ stat->nr_demoted += nr_demoted;
+ /* Folios that could not be demoted are still in @demote_folios */
+ if (!list_empty(&demote_folios)) {
+ /* Folios which weren't demoted go back on @folio_list */
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 3d2553dcdb1b3c..46ea0bee2259f8 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -710,12 +710,12 @@ static bool l2cap_valid_mtu(struct l2cap_chan *chan, u16 mtu)
+ {
+ switch (chan->scid) {
+ case L2CAP_CID_ATT:
+- if (mtu < L2CAP_LE_MIN_MTU)
++ if (mtu && mtu < L2CAP_LE_MIN_MTU)
+ return false;
+ break;
+
+ default:
+- if (mtu < L2CAP_DEFAULT_MIN_MTU)
++ if (mtu && mtu < L2CAP_DEFAULT_MIN_MTU)
+ return false;
+ }
+
+@@ -1888,7 +1888,8 @@ static struct sock *l2cap_sock_alloc(struct net *net, struct socket *sock,
+ chan = l2cap_chan_create();
+ if (!chan) {
+ sk_free(sk);
+- sock->sk = NULL;
++ if (sock)
++ sock->sk = NULL;
+ return NULL;
+ }
+
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 7dc315c1658e7d..90c21b3edcd80e 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -5460,10 +5460,16 @@ static void mgmt_remove_adv_monitor_complete(struct hci_dev *hdev,
+ {
+ struct mgmt_rp_remove_adv_monitor rp;
+ struct mgmt_pending_cmd *cmd = data;
+- struct mgmt_cp_remove_adv_monitor *cp = cmd->param;
++ struct mgmt_cp_remove_adv_monitor *cp;
++
++ if (status == -ECANCELED ||
++ cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev))
++ return;
+
+ hci_dev_lock(hdev);
+
++ cp = cmd->param;
++
+ rp.monitor_handle = cp->monitor_handle;
+
+ if (!status)
+@@ -5481,6 +5487,10 @@ static void mgmt_remove_adv_monitor_complete(struct hci_dev *hdev,
+ static int mgmt_remove_adv_monitor_sync(struct hci_dev *hdev, void *data)
+ {
+ struct mgmt_pending_cmd *cmd = data;
++
++ if (cmd != pending_find(MGMT_OP_REMOVE_ADV_MONITOR, hdev))
++ return -ECANCELED;
++
+ struct mgmt_cp_remove_adv_monitor *cp = cmd->param;
+ u16 handle = __le16_to_cpu(cp->monitor_handle);
+
+diff --git a/net/ethtool/rss.c b/net/ethtool/rss.c
+index e07386275e142d..8aa45f3fdfdf08 100644
+--- a/net/ethtool/rss.c
++++ b/net/ethtool/rss.c
+@@ -107,6 +107,8 @@ rss_prepare_ctx(const struct rss_req_info *request, struct net_device *dev,
+ u32 total_size, indir_bytes;
+ u8 *rss_config;
+
++ data->no_key_fields = !dev->ethtool_ops->rxfh_per_ctx_key;
++
+ ctx = xa_load(&dev->ethtool->rss_ctx, request->rss_context);
+ if (!ctx)
+ return -ENOENT;
+@@ -153,7 +155,6 @@ rss_prepare_data(const struct ethnl_req_info *req_base,
+ if (!ops->cap_rss_ctx_supported && !ops->create_rxfh_context)
+ return -EOPNOTSUPP;
+
+- data->no_key_fields = !ops->rxfh_per_ctx_key;
+ return rss_prepare_ctx(request, dev, data, info);
+ }
+
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index d2eeb6fc49b382..8da74dc63061c0 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -985,9 +985,9 @@ static int udp_send_skb(struct sk_buff *skb, struct flowi4 *fl4,
+ const int hlen = skb_network_header_len(skb) +
+ sizeof(struct udphdr);
+
+- if (hlen + cork->gso_size > cork->fragsize) {
++ if (hlen + min(datalen, cork->gso_size) > cork->fragsize) {
+ kfree_skb(skb);
+- return -EINVAL;
++ return -EMSGSIZE;
+ }
+ if (datalen > cork->gso_size * UDP_MAX_SEGMENTS) {
+ kfree_skb(skb);
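
[The GSO check now compares the headers plus min(datalen, gso_size) against
the fragment size, because a segment can never carry more payload than the
send actually provides; a small send with a large requested gso_size used to
fail spuriously. The error code also becomes -EMSGSIZE, matching what an
oversized datagram normally returns. Worked numbers under made-up values (the
udpgso selftest cases added later in this patch encode the same semantics):

    #include <stdio.h>

    #define MIN(a, b) ((a) < (b) ? (a) : (b))

    int main(void)
    {
        int hlen     = 28;   /* hypothetical IPv4 + UDP header bytes */
        int fragsize = 1500; /* hypothetical cork->fragsize          */
        int gso_size = 1480; /* requested per-segment payload        */
        int datalen  = 100;  /* this send's actual payload           */

        /* old: 28 + 1480 = 1508 > 1500, rejected although 100 bytes fit */
        printf("old: %s\n", hlen + gso_size > fragsize ? "reject" : "ok");
        /* new: 28 + min(100, 1480) = 128 <= 1500, accepted */
        printf("new: %s\n",
               hlen + MIN(datalen, gso_size) > fragsize ? "reject" : "ok");
        return 0;
    }
]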
+diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
+index 896c9c827a288c..197d0ac47592ad 100644
+--- a/net/ipv6/udp.c
++++ b/net/ipv6/udp.c
+@@ -1294,9 +1294,9 @@ static int udp_v6_send_skb(struct sk_buff *skb, struct flowi6 *fl6,
+ const int hlen = skb_network_header_len(skb) +
+ sizeof(struct udphdr);
+
+- if (hlen + cork->gso_size > cork->fragsize) {
++ if (hlen + min(datalen, cork->gso_size) > cork->fragsize) {
+ kfree_skb(skb);
+- return -EINVAL;
++ return -EMSGSIZE;
+ }
+ if (datalen > cork->gso_size * UDP_MAX_SEGMENTS) {
+ kfree_skb(skb);
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index fac774825aff39..42b239d9b2b3cf 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -136,6 +136,7 @@ static bool mptcp_try_coalesce(struct sock *sk, struct sk_buff *to,
+ int delta;
+
+ if (MPTCP_SKB_CB(from)->offset ||
++ ((to->len + from->len) > (sk->sk_rcvbuf >> 3)) ||
+ !skb_try_coalesce(to, from, &fragstolen, &delta))
+ return false;
+
+diff --git a/net/ncsi/ncsi-manage.c b/net/ncsi/ncsi-manage.c
+index bf276eaf933075..7891a537bddd11 100644
+--- a/net/ncsi/ncsi-manage.c
++++ b/net/ncsi/ncsi-manage.c
+@@ -1385,6 +1385,12 @@ static void ncsi_probe_channel(struct ncsi_dev_priv *ndp)
+ nd->state = ncsi_dev_state_probe_package;
+ break;
+ case ncsi_dev_state_probe_package:
++ if (ndp->package_probe_id >= 8) {
++ /* Last package probed, finishing */
++ ndp->flags |= NCSI_DEV_PROBED;
++ break;
++ }
++
+ ndp->pending_req_num = 1;
+
+ nca.type = NCSI_PKT_CMD_SP;
+@@ -1501,13 +1507,8 @@ static void ncsi_probe_channel(struct ncsi_dev_priv *ndp)
+ if (ret)
+ goto error;
+
+- /* Probe next package */
++ /* Probe next package after receiving response */
+ ndp->package_probe_id++;
+- if (ndp->package_probe_id >= 8) {
+- /* Probe finished */
+- ndp->flags |= NCSI_DEV_PROBED;
+- break;
+- }
+ nd->state = ncsi_dev_state_probe_package;
+ ndp->active_package = NULL;
+ break;
+diff --git a/net/nfc/nci/hci.c b/net/nfc/nci/hci.c
+index de175318a3a0f3..082ab66f120b73 100644
+--- a/net/nfc/nci/hci.c
++++ b/net/nfc/nci/hci.c
+@@ -542,6 +542,8 @@ static u8 nci_hci_create_pipe(struct nci_dev *ndev, u8 dest_host,
+
+ pr_debug("pipe created=%d\n", pipe);
+
++ if (pipe >= NCI_HCI_MAX_PIPES)
++ pipe = NCI_HCI_INVALID_PIPE;
+ return pipe;
+ }
+
+diff --git a/net/rose/af_rose.c b/net/rose/af_rose.c
+index 72c65d938a150e..a4a668b88a8f27 100644
+--- a/net/rose/af_rose.c
++++ b/net/rose/af_rose.c
+@@ -701,11 +701,9 @@ static int rose_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ struct net_device *dev;
+ ax25_address *source;
+ ax25_uid_assoc *user;
++ int err = -EINVAL;
+ int n;
+
+- if (!sock_flag(sk, SOCK_ZAPPED))
+- return -EINVAL;
+-
+ if (addr_len != sizeof(struct sockaddr_rose) && addr_len != sizeof(struct full_sockaddr_rose))
+ return -EINVAL;
+
+@@ -718,8 +716,15 @@ static int rose_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ if ((unsigned int) addr->srose_ndigis > ROSE_MAX_DIGIS)
+ return -EINVAL;
+
+- if ((dev = rose_dev_get(&addr->srose_addr)) == NULL)
+- return -EADDRNOTAVAIL;
++ lock_sock(sk);
++
++ if (!sock_flag(sk, SOCK_ZAPPED))
++ goto out_release;
++
++ err = -EADDRNOTAVAIL;
++ dev = rose_dev_get(&addr->srose_addr);
++ if (!dev)
++ goto out_release;
+
+ source = &addr->srose_call;
+
+@@ -730,7 +735,8 @@ static int rose_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ } else {
+ if (ax25_uid_policy && !capable(CAP_NET_BIND_SERVICE)) {
+ dev_put(dev);
+- return -EACCES;
++ err = -EACCES;
++ goto out_release;
+ }
+ rose->source_call = *source;
+ }
+@@ -753,8 +759,10 @@ static int rose_bind(struct socket *sock, struct sockaddr *uaddr, int addr_len)
+ rose_insert_socket(sk);
+
+ sock_reset_flag(sk, SOCK_ZAPPED);
+-
+- return 0;
++ err = 0;
++out_release:
++ release_sock(sk);
++ return err;
+ }
+
+ static int rose_connect(struct socket *sock, struct sockaddr *uaddr, int addr_len, int flags)
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index d0fd37bdcfe9c8..6b036c0564c7a8 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -567,6 +567,7 @@ enum rxrpc_call_flag {
+ RXRPC_CALL_EXCLUSIVE, /* The call uses a once-only connection */
+ RXRPC_CALL_RX_IS_IDLE, /* recvmsg() is idle - send an ACK */
+ RXRPC_CALL_RECVMSG_READ_ALL, /* recvmsg() read all of the received data */
++ RXRPC_CALL_CONN_CHALLENGING, /* The connection is being challenged */
+ };
+
+ /*
+@@ -587,7 +588,6 @@ enum rxrpc_call_state {
+ RXRPC_CALL_CLIENT_AWAIT_REPLY, /* - client awaiting reply */
+ RXRPC_CALL_CLIENT_RECV_REPLY, /* - client receiving reply phase */
+ RXRPC_CALL_SERVER_PREALLOC, /* - service preallocation */
+- RXRPC_CALL_SERVER_SECURING, /* - server securing request connection */
+ RXRPC_CALL_SERVER_RECV_REQUEST, /* - server receiving request */
+ RXRPC_CALL_SERVER_ACK_REQUEST, /* - server pending ACK of request */
+ RXRPC_CALL_SERVER_SEND_REPLY, /* - server sending reply */
+diff --git a/net/rxrpc/call_object.c b/net/rxrpc/call_object.c
+index f9e983a12c1492..e379a2a9375ae0 100644
+--- a/net/rxrpc/call_object.c
++++ b/net/rxrpc/call_object.c
+@@ -22,7 +22,6 @@ const char *const rxrpc_call_states[NR__RXRPC_CALL_STATES] = {
+ [RXRPC_CALL_CLIENT_AWAIT_REPLY] = "ClAwtRpl",
+ [RXRPC_CALL_CLIENT_RECV_REPLY] = "ClRcvRpl",
+ [RXRPC_CALL_SERVER_PREALLOC] = "SvPrealc",
+- [RXRPC_CALL_SERVER_SECURING] = "SvSecure",
+ [RXRPC_CALL_SERVER_RECV_REQUEST] = "SvRcvReq",
+ [RXRPC_CALL_SERVER_ACK_REQUEST] = "SvAckReq",
+ [RXRPC_CALL_SERVER_SEND_REPLY] = "SvSndRpl",
+@@ -453,17 +452,16 @@ void rxrpc_incoming_call(struct rxrpc_sock *rx,
+ call->cong_tstamp = skb->tstamp;
+
+ __set_bit(RXRPC_CALL_EXPOSED, &call->flags);
+- rxrpc_set_call_state(call, RXRPC_CALL_SERVER_SECURING);
++ rxrpc_set_call_state(call, RXRPC_CALL_SERVER_RECV_REQUEST);
+
+ spin_lock(&conn->state_lock);
+
+ switch (conn->state) {
+ case RXRPC_CONN_SERVICE_UNSECURED:
+ case RXRPC_CONN_SERVICE_CHALLENGING:
+- rxrpc_set_call_state(call, RXRPC_CALL_SERVER_SECURING);
++ __set_bit(RXRPC_CALL_CONN_CHALLENGING, &call->flags);
+ break;
+ case RXRPC_CONN_SERVICE:
+- rxrpc_set_call_state(call, RXRPC_CALL_SERVER_RECV_REQUEST);
+ break;
+
+ case RXRPC_CONN_ABORTED:
+diff --git a/net/rxrpc/conn_event.c b/net/rxrpc/conn_event.c
+index 2a1396cd892f30..c4eb7986efddf8 100644
+--- a/net/rxrpc/conn_event.c
++++ b/net/rxrpc/conn_event.c
+@@ -222,10 +222,8 @@ static void rxrpc_abort_calls(struct rxrpc_connection *conn)
+ */
+ static void rxrpc_call_is_secure(struct rxrpc_call *call)
+ {
+- if (call && __rxrpc_call_state(call) == RXRPC_CALL_SERVER_SECURING) {
+- rxrpc_set_call_state(call, RXRPC_CALL_SERVER_RECV_REQUEST);
++ if (call && __test_and_clear_bit(RXRPC_CALL_CONN_CHALLENGING, &call->flags))
+ rxrpc_notify_socket(call);
+- }
+ }
+
+ /*
+@@ -266,6 +264,7 @@ static int rxrpc_process_event(struct rxrpc_connection *conn,
+ * we've already received the packet, put it on the
+ * front of the queue.
+ */
++ sp->conn = rxrpc_get_connection(conn, rxrpc_conn_get_poke_secured);
+ skb->mark = RXRPC_SKB_MARK_SERVICE_CONN_SECURED;
+ rxrpc_get_skb(skb, rxrpc_skb_get_conn_secured);
+ skb_queue_head(&conn->local->rx_queue, skb);
+@@ -431,14 +430,16 @@ void rxrpc_input_conn_event(struct rxrpc_connection *conn, struct sk_buff *skb)
+ if (test_and_clear_bit(RXRPC_CONN_EV_ABORT_CALLS, &conn->events))
+ rxrpc_abort_calls(conn);
+
+- switch (skb->mark) {
+- case RXRPC_SKB_MARK_SERVICE_CONN_SECURED:
+- if (conn->state != RXRPC_CONN_SERVICE)
+- break;
++ if (skb) {
++ switch (skb->mark) {
++ case RXRPC_SKB_MARK_SERVICE_CONN_SECURED:
++ if (conn->state != RXRPC_CONN_SERVICE)
++ break;
+
+- for (loop = 0; loop < RXRPC_MAXCALLS; loop++)
+- rxrpc_call_is_secure(conn->channels[loop].call);
+- break;
++ for (loop = 0; loop < RXRPC_MAXCALLS; loop++)
++ rxrpc_call_is_secure(conn->channels[loop].call);
++ break;
++ }
+ }
+
+ /* Process delayed ACKs whose time has come. */
+diff --git a/net/rxrpc/conn_object.c b/net/rxrpc/conn_object.c
+index 1539d315afe74a..7bc68135966e24 100644
+--- a/net/rxrpc/conn_object.c
++++ b/net/rxrpc/conn_object.c
+@@ -67,6 +67,7 @@ struct rxrpc_connection *rxrpc_alloc_connection(struct rxrpc_net *rxnet,
+ INIT_WORK(&conn->destructor, rxrpc_clean_up_connection);
+ INIT_LIST_HEAD(&conn->proc_link);
+ INIT_LIST_HEAD(&conn->link);
++ INIT_LIST_HEAD(&conn->attend_link);
+ mutex_init(&conn->security_lock);
+ mutex_init(&conn->tx_data_alloc_lock);
+ skb_queue_head_init(&conn->rx_queue);
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index 16d49a861dbb58..6a075a7c190db3 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -573,7 +573,7 @@ static bool rxrpc_input_split_jumbo(struct rxrpc_call *call, struct sk_buff *skb
+ rxrpc_propose_delay_ACK(call, sp->hdr.serial,
+ rxrpc_propose_ack_input_data);
+ }
+- if (notify) {
++ if (notify && !test_bit(RXRPC_CALL_CONN_CHALLENGING, &call->flags)) {
+ trace_rxrpc_notify_socket(call->debug_id, sp->hdr.serial);
+ rxrpc_notify_socket(call);
+ }
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index 23d18fe5de9f0d..154f650efb0ab6 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -654,7 +654,7 @@ int rxrpc_do_sendmsg(struct rxrpc_sock *rx, struct msghdr *msg, size_t len)
+ } else {
+ switch (rxrpc_call_state(call)) {
+ case RXRPC_CALL_CLIENT_AWAIT_CONN:
+- case RXRPC_CALL_SERVER_SECURING:
++ case RXRPC_CALL_SERVER_RECV_REQUEST:
+ if (p.command == RXRPC_CMD_SEND_ABORT)
+ break;
+ fallthrough;
+diff --git a/net/sched/sch_fifo.c b/net/sched/sch_fifo.c
+index b50b2c2cc09bc6..e6bfd39ff33965 100644
+--- a/net/sched/sch_fifo.c
++++ b/net/sched/sch_fifo.c
+@@ -40,6 +40,9 @@ static int pfifo_tail_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ {
+ unsigned int prev_backlog;
+
++ if (unlikely(READ_ONCE(sch->limit) == 0))
++ return qdisc_drop(skb, sch, to_free);
++
+ if (likely(sch->q.qlen < READ_ONCE(sch->limit)))
+ return qdisc_enqueue_tail(skb, sch);
+
+diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c
+index 3b519adc01259f..68a08f6d1fbce2 100644
+--- a/net/sched/sch_netem.c
++++ b/net/sched/sch_netem.c
+@@ -748,9 +748,9 @@ static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+ if (err != NET_XMIT_SUCCESS) {
+ if (net_xmit_drop_count(err))
+ qdisc_qstats_drop(sch);
+- qdisc_tree_reduce_backlog(sch, 1, pkt_len);
+ sch->qstats.backlog -= pkt_len;
+ sch->q.qlen--;
++ qdisc_tree_reduce_backlog(sch, 1, pkt_len);
+ }
+ goto tfifo_dequeue;
+ }
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index 43c3f1c971b8fd..c524421ec65252 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -2293,8 +2293,8 @@ static bool tipc_crypto_key_rcv(struct tipc_crypto *rx, struct tipc_msg *hdr)
+ keylen = ntohl(*((__be32 *)(data + TIPC_AEAD_ALG_NAME)));
+
+ /* Verify the supplied size values */
+- if (unlikely(size != keylen + sizeof(struct tipc_aead_key) ||
+- keylen > TIPC_AEAD_KEY_SIZE_MAX)) {
++ if (unlikely(keylen > TIPC_AEAD_KEY_SIZE_MAX ||
++ size != keylen + sizeof(struct tipc_aead_key))) {
+ pr_debug("%s: invalid MSG_CRYPTO key size\n", rx->name);
+ goto exit;
+ }
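
[Reordering the two comparisons makes the cheap bound on keylen the first
test, so the sum keylen + sizeof(struct tipc_aead_key) is only formed for
sane values and cannot wrap on narrow size types. The pattern in a standalone
sketch (hypothetical cap and header, not TIPC's definitions):

    #include <stdbool.h>
    #include <stdint.h>

    #define KEY_SIZE_MAX 64u          /* hypothetical cap */

    struct key_hdr { uint32_t alg; }; /* stand-in for the real header */

    /* Bound the attacker-controlled operand before forming the sum. */
    static bool key_size_valid(uint32_t size, uint32_t keylen)
    {
        if (keylen > KEY_SIZE_MAX)
            return false;
        return size == keylen + sizeof(struct key_hdr); /* cannot wrap now */
    }
]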
+diff --git a/rust/kernel/init.rs b/rust/kernel/init.rs
+index a17ac8762d8f9f..789f80f71ca7e1 100644
+--- a/rust/kernel/init.rs
++++ b/rust/kernel/init.rs
+@@ -858,7 +858,7 @@ pub unsafe trait PinInit<T: ?Sized, E = Infallible>: Sized {
+ /// use kernel::{types::Opaque, init::pin_init_from_closure};
+ /// #[repr(C)]
+ /// struct RawFoo([u8; 16]);
+- /// extern {
++ /// extern "C" {
+ /// fn init_foo(_: *mut RawFoo);
+ /// }
+ ///
+diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
+index 1d13cecc7cc780..04faf15ed316a9 100644
+--- a/scripts/Makefile.extrawarn
++++ b/scripts/Makefile.extrawarn
+@@ -130,7 +130,6 @@ KBUILD_CFLAGS += $(call cc-disable-warning, pointer-to-enum-cast)
+ KBUILD_CFLAGS += -Wno-tautological-constant-out-of-range-compare
+ KBUILD_CFLAGS += $(call cc-disable-warning, unaligned-access)
+ KBUILD_CFLAGS += -Wno-enum-compare-conditional
+-KBUILD_CFLAGS += -Wno-enum-enum-conversion
+ endif
+
+ endif
+@@ -154,6 +153,10 @@ KBUILD_CFLAGS += -Wno-missing-field-initializers
+ KBUILD_CFLAGS += -Wno-type-limits
+ KBUILD_CFLAGS += -Wno-shift-negative-value
+
++ifdef CONFIG_CC_IS_CLANG
++KBUILD_CFLAGS += -Wno-enum-enum-conversion
++endif
++
+ ifdef CONFIG_CC_IS_GCC
+ KBUILD_CFLAGS += -Wno-maybe-uninitialized
+ endif
+diff --git a/scripts/gdb/linux/cpus.py b/scripts/gdb/linux/cpus.py
+index 2f11c4f9c345a0..13eb8b3901b8fc 100644
+--- a/scripts/gdb/linux/cpus.py
++++ b/scripts/gdb/linux/cpus.py
+@@ -167,7 +167,7 @@ def get_current_task(cpu):
+ var_ptr = gdb.parse_and_eval("&pcpu_hot.current_task")
+ return per_cpu(var_ptr, cpu).dereference()
+ elif utils.is_target_arch("aarch64"):
+- current_task_addr = gdb.parse_and_eval("$SP_EL0")
++ current_task_addr = gdb.parse_and_eval("(unsigned long)$SP_EL0")
+ if (current_task_addr >> 63) != 0:
+ current_task = current_task_addr.cast(task_ptr_type)
+ return current_task.dereference()
+diff --git a/scripts/generate_rust_target.rs b/scripts/generate_rust_target.rs
+index 0d00ac3723b5e5..4fd6b6ab3e329d 100644
+--- a/scripts/generate_rust_target.rs
++++ b/scripts/generate_rust_target.rs
+@@ -165,6 +165,18 @@ fn has(&self, option: &str) -> bool {
+ let option = "CONFIG_".to_owned() + option;
+ self.0.contains_key(&option)
+ }
++
++ /// Is the rustc version at least `major.minor.patch`?
++ fn rustc_version_atleast(&self, major: u32, minor: u32, patch: u32) -> bool {
++ let check_version = 100000 * major + 100 * minor + patch;
++ let actual_version = self
++ .0
++ .get("CONFIG_RUSTC_VERSION")
++ .unwrap()
++ .parse::<u32>()
++ .unwrap();
++ check_version <= actual_version
++ }
+ }
+
+ fn main() {
+@@ -182,6 +194,9 @@ fn main() {
+ }
+ } else if cfg.has("X86_64") {
+ ts.push("arch", "x86_64");
++ if cfg.rustc_version_atleast(1, 86, 0) {
++ ts.push("rustc-abi", "x86-softfloat");
++ }
+ ts.push(
+ "data-layout",
+ "e-m:e-p270:32:32-p271:32:32-p272:64:64-i64:64-i128:128-f80:128-n8:16:32:64-S128",
+@@ -215,6 +230,9 @@ fn main() {
+ panic!("32-bit x86 only works under UML");
+ }
+ ts.push("arch", "x86");
++ if cfg.rustc_version_atleast(1, 86, 0) {
++ ts.push("rustc-abi", "x86-softfloat");
++ }
+ ts.push(
+ "data-layout",
+ "e-m:e-p:32:32-p270:32:32-p271:32:32-p272:64:64-i128:128-f64:32:64-f80:32-n8:16:32-S128",
+diff --git a/security/keys/trusted-keys/trusted_dcp.c b/security/keys/trusted-keys/trusted_dcp.c
+index e908c53a803c4b..7b6eb655df0cbf 100644
+--- a/security/keys/trusted-keys/trusted_dcp.c
++++ b/security/keys/trusted-keys/trusted_dcp.c
+@@ -201,12 +201,16 @@ static int trusted_dcp_seal(struct trusted_key_payload *p, char *datablob)
+ {
+ struct dcp_blob_fmt *b = (struct dcp_blob_fmt *)p->blob;
+ int blen, ret;
+- u8 plain_blob_key[AES_KEYSIZE_128];
++ u8 *plain_blob_key;
+
+ blen = calc_blob_len(p->key_len);
+ if (blen > MAX_BLOB_SIZE)
+ return -E2BIG;
+
++ plain_blob_key = kmalloc(AES_KEYSIZE_128, GFP_KERNEL);
++ if (!plain_blob_key)
++ return -ENOMEM;
++
+ b->fmt_version = DCP_BLOB_VERSION;
+ get_random_bytes(b->nonce, AES_KEYSIZE_128);
+ get_random_bytes(plain_blob_key, AES_KEYSIZE_128);
+@@ -229,7 +233,8 @@ static int trusted_dcp_seal(struct trusted_key_payload *p, char *datablob)
+ ret = 0;
+
+ out:
+- memzero_explicit(plain_blob_key, sizeof(plain_blob_key));
++ memzero_explicit(plain_blob_key, AES_KEYSIZE_128);
++ kfree(plain_blob_key);
+
+ return ret;
+ }
+@@ -238,7 +243,7 @@ static int trusted_dcp_unseal(struct trusted_key_payload *p, char *datablob)
+ {
+ struct dcp_blob_fmt *b = (struct dcp_blob_fmt *)p->blob;
+ int blen, ret;
+- u8 plain_blob_key[AES_KEYSIZE_128];
++ u8 *plain_blob_key = NULL;
+
+ if (b->fmt_version != DCP_BLOB_VERSION) {
+ pr_err("DCP blob has bad version: %i, expected %i\n",
+@@ -256,6 +261,12 @@ static int trusted_dcp_unseal(struct trusted_key_payload *p, char *datablob)
+ goto out;
+ }
+
++ plain_blob_key = kmalloc(AES_KEYSIZE_128, GFP_KERNEL);
++ if (!plain_blob_key) {
++ ret = -ENOMEM;
++ goto out;
++ }
++
+ ret = decrypt_blob_key(b->blob_key, plain_blob_key);
+ if (ret) {
+ pr_err("Unable to decrypt blob key: %i\n", ret);
+@@ -271,7 +282,10 @@ static int trusted_dcp_unseal(struct trusted_key_payload *p, char *datablob)
+
+ ret = 0;
+ out:
+- memzero_explicit(plain_blob_key, sizeof(plain_blob_key));
++ if (plain_blob_key) {
++ memzero_explicit(plain_blob_key, AES_KEYSIZE_128);
++ kfree(plain_blob_key);
++ }
+
+ return ret;
+ }
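
[With the key buffer moved to the heap, sizeof(plain_blob_key) would now
measure a pointer, so both paths wipe exactly AES_KEYSIZE_128 bytes, and
memzero_explicit() keeps the compiler from optimizing the wipe away before
kfree(). A userspace analogue of the same zeroize-then-free discipline
(illustrative; explicit_bzero() is glibc >= 2.25 / BSD):

    #include <stdlib.h>
    #include <string.h>

    #define KEY_BYTES 16 /* AES-128 key size */

    /* Allocate key material, use it, then wipe it before freeing. */
    static int with_secret_key(int (*use)(unsigned char *key, size_t len))
    {
        unsigned char *key = malloc(KEY_BYTES);
        int ret;

        if (!key)
            return -1;

        ret = use(key, KEY_BYTES);

        explicit_bzero(key, KEY_BYTES); /* unlike memset, never elided */
        free(key);
        return ret;
    }
]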
+diff --git a/security/safesetid/securityfs.c b/security/safesetid/securityfs.c
+index 25310468bcddff..8e1ffd70b18ab4 100644
+--- a/security/safesetid/securityfs.c
++++ b/security/safesetid/securityfs.c
+@@ -143,6 +143,9 @@ static ssize_t handle_policy_update(struct file *file,
+ char *buf, *p, *end;
+ int err;
+
++ if (len >= KMALLOC_MAX_SIZE)
++ return -EINVAL;
++
+ pol = kmalloc(sizeof(struct setid_ruleset), GFP_KERNEL);
+ if (!pol)
+ return -ENOMEM;
+diff --git a/security/tomoyo/common.c b/security/tomoyo/common.c
+index 5c7b059a332aac..972664962e8f67 100644
+--- a/security/tomoyo/common.c
++++ b/security/tomoyo/common.c
+@@ -2665,7 +2665,7 @@ ssize_t tomoyo_write_control(struct tomoyo_io_buffer *head,
+
+ if (head->w.avail >= head->writebuf_size - 1) {
+ const int len = head->writebuf_size * 2;
+- char *cp = kzalloc(len, GFP_NOFS);
++ char *cp = kzalloc(len, GFP_NOFS | __GFP_NOWARN);
+
+ if (!cp) {
+ error = -ENOMEM;
+diff --git a/sound/pci/hda/hda_auto_parser.c b/sound/pci/hda/hda_auto_parser.c
+index 8e74be038b0fad..0091ab3f2bd56b 100644
+--- a/sound/pci/hda/hda_auto_parser.c
++++ b/sound/pci/hda/hda_auto_parser.c
+@@ -80,7 +80,11 @@ static int compare_input_type(const void *ap, const void *bp)
+
+ /* In case one has boost and the other one has not,
+ pick the one with boost first. */
+- return (int)(b->has_boost_on_pin - a->has_boost_on_pin);
++ if (a->has_boost_on_pin != b->has_boost_on_pin)
++ return (int)(b->has_boost_on_pin - a->has_boost_on_pin);
++
++ /* Keep the original order */
++ return a->order - b->order;
+ }
+
+ /* Reorder the surround channels
+@@ -400,6 +404,8 @@ int snd_hda_parse_pin_defcfg(struct hda_codec *codec,
+ reorder_outputs(cfg->speaker_outs, cfg->speaker_pins);
+
+ /* sort inputs in the order of AUTO_PIN_* type */
++ for (i = 0; i < cfg->num_inputs; i++)
++ cfg->inputs[i].order = i;
+ sort(cfg->inputs, cfg->num_inputs, sizeof(cfg->inputs[0]),
+ compare_input_type, NULL);
+
+diff --git a/sound/pci/hda/hda_auto_parser.h b/sound/pci/hda/hda_auto_parser.h
+index 579b11beac718e..87af3d8c02f7f6 100644
+--- a/sound/pci/hda/hda_auto_parser.h
++++ b/sound/pci/hda/hda_auto_parser.h
+@@ -37,6 +37,7 @@ struct auto_pin_cfg_item {
+ unsigned int is_headset_mic:1;
+ unsigned int is_headphone_mic:1; /* Mic-only in headphone jack */
+ unsigned int has_boost_on_pin:1;
++ int order;
+ };
+
+ struct auto_pin_cfg;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 5d99a4ea176a15..f3f849b96402d1 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10374,6 +10374,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8870, "HP ZBook Fury 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ SND_PCI_QUIRK(0x103c, 0x8873, "HP ZBook Studio 15.6 Inch G8 Mobile Workstation PC", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ SND_PCI_QUIRK(0x103c, 0x887a, "HP Laptop 15s-eq2xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
++ SND_PCI_QUIRK(0x103c, 0x887c, "HP Laptop 14s-fq1xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ SND_PCI_QUIRK(0x103c, 0x888a, "HP ENVY x360 Convertible 15-eu0xxx", ALC245_FIXUP_HP_X360_MUTE_LEDS),
+ SND_PCI_QUIRK(0x103c, 0x888d, "HP ZBook Power 15.6 inch G8 Mobile Workstation PC", ALC236_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8895, "HP EliteBook 855 G8 Notebook PC", ALC285_FIXUP_HP_SPEAKERS_MICMUTE_LED),
+@@ -10873,7 +10874,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3869, "Lenovo Yoga7 14IAL7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
+ HDA_CODEC_QUIRK(0x17aa, 0x386e, "Legion Y9000X 2022 IAH7", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x386e, "Yoga Pro 7 14ARP8", ALC285_FIXUP_SPEAKER2_TO_DAC1),
+- HDA_CODEC_QUIRK(0x17aa, 0x386f, "Legion Pro 7 16ARX8H", ALC287_FIXUP_TAS2781_I2C),
++ HDA_CODEC_QUIRK(0x17aa, 0x38a8, "Legion Pro 7 16ARX8H", ALC287_FIXUP_TAS2781_I2C), /* this must match before PCI SSID 17aa:386f below */
+ SND_PCI_QUIRK(0x17aa, 0x386f, "Legion Pro 7i 16IAX7", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x17aa, 0x3870, "Lenovo Yoga 7 14ARB7", ALC287_FIXUP_YOGA7_14ARB7_I2C),
+ SND_PCI_QUIRK(0x17aa, 0x3877, "Lenovo Legion 7 Slim 16ARHA7", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10948,6 +10949,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x511f, "Thinkpad", ALC298_FIXUP_TPT470_DOCK),
+ SND_PCI_QUIRK(0x17aa, 0x9e54, "LENOVO NB", ALC269_FIXUP_LENOVO_EAPD),
+ SND_PCI_QUIRK(0x17aa, 0x9e56, "Lenovo ZhaoYang CF4620Z", ALC286_FIXUP_SONY_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1849, 0x0269, "Positivo Master C6400", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ SND_PCI_QUIRK(0x1849, 0x1233, "ASRock NUC Box 1100", ALC233_FIXUP_NO_AUDIO_JACK),
+ SND_PCI_QUIRK(0x1849, 0xa233, "Positivo Master C6300", ALC269_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1854, 0x0440, "LG CQ6", ALC256_FIXUP_HEADPHONE_AMP_VOL),
+diff --git a/sound/soc/amd/Kconfig b/sound/soc/amd/Kconfig
+index 6dec44f516c13f..c2a5671ba96b07 100644
+--- a/sound/soc/amd/Kconfig
++++ b/sound/soc/amd/Kconfig
+@@ -105,7 +105,7 @@ config SND_SOC_AMD_ACP6x
+ config SND_SOC_AMD_YC_MACH
+ tristate "AMD YC support for DMIC"
+ select SND_SOC_DMIC
+- depends on SND_SOC_AMD_ACP6x
++ depends on SND_SOC_AMD_ACP6x && ACPI
+ help
+ This option enables machine driver for Yellow Carp platform
+ using dmic. ACP IP has PDM Decoder block with DMA controller.
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index ecf57a6cb7c37d..b16587d8f97a89 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -304,6 +304,34 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "83AS"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83L3"),
++ }
++ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83N6"),
++ }
++ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83Q2"),
++ }
++ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83Q3"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 9f2dc24d44cb54..84fc35d88b9267 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -617,9 +617,10 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ .callback = sof_sdw_quirk_cb,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_SKU, "380E")
++ DMI_MATCH(DMI_PRODUCT_NAME, "83HM")
+ },
+- .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS),
++ .driver_data = (void *)(SOC_SDW_SIDECAR_AMPS |
++ SOC_SDW_CODEC_MIC),
+ },
+ {
+ .callback = sof_sdw_quirk_cb,
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 7a59121fc323c3..1102599403c534 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -38,7 +38,6 @@ static inline int _soc_pcm_ret(struct snd_soc_pcm_runtime *rtd,
+ switch (ret) {
+ case -EPROBE_DEFER:
+ case -ENOTSUPP:
+- case -EINVAL:
+ break;
+ default:
+ dev_err(rtd->dev,
+@@ -1001,7 +1000,13 @@ static int __soc_pcm_prepare(struct snd_soc_pcm_runtime *rtd,
+ }
+
+ out:
+- return soc_pcm_ret(rtd, ret);
++ /*
++ * Don't use soc_pcm_ret() on .prepare callback to lower error log severity
++ *
++ * We don't want to log an error since we do not want to give userspace a way to do a
++ * denial-of-service attack on the syslog / diskspace.
++ */
++ return ret;
+ }
+
+ /* PCM prepare ops for non-DPCM streams */
+@@ -1013,6 +1018,13 @@ static int soc_pcm_prepare(struct snd_pcm_substream *substream)
+ snd_soc_dpcm_mutex_lock(rtd);
+ ret = __soc_pcm_prepare(rtd, substream);
+ snd_soc_dpcm_mutex_unlock(rtd);
++
++ /*
++ * Don't use soc_pcm_ret() on .prepare callback to lower error log severity
++ *
++ * We don't want to log an error since we do not want to give userspace a way to do a
++ * denial-of-service attack on the syslog / diskspace.
++ */
+ return ret;
+ }
+
+@@ -2554,7 +2566,13 @@ int dpcm_be_dai_prepare(struct snd_soc_pcm_runtime *fe, int stream)
+ be->dpcm[stream].state = SND_SOC_DPCM_STATE_PREPARE;
+ }
+
+- return soc_pcm_ret(fe, ret);
++ /*
++ * Don't use soc_pcm_ret() on .prepare callback to lower error log severity
++ *
++ * We don't want to log an error since we do not want to give userspace a way to do a
++ * denial-of-service attack on the syslog / diskspace.
++ */
++ return ret;
+ }
+
+ static int dpcm_fe_dai_prepare(struct snd_pcm_substream *substream)
+@@ -2594,7 +2612,13 @@ static int dpcm_fe_dai_prepare(struct snd_pcm_substream *substream)
+ dpcm_set_fe_update_state(fe, stream, SND_SOC_DPCM_UPDATE_NO);
+ snd_soc_dpcm_mutex_unlock(fe);
+
+- return soc_pcm_ret(fe, ret);
++ /*
++ * Don't use soc_pcm_ret() on .prepare callback to lower error log severity
++ *
++ * We don't want to log an error since we do not want to give userspace a way to do a
++ * denial-of-service attack on the syslog / diskspace.
++ */
++ return ret;
+ }
+
+ static int dpcm_run_update_shutdown(struct snd_soc_pcm_runtime *fe, int stream)
+diff --git a/sound/soc/sof/intel/hda-dai.c b/sound/soc/sof/intel/hda-dai.c
+index 82f46ecd94301e..2e58a264da5566 100644
+--- a/sound/soc/sof/intel/hda-dai.c
++++ b/sound/soc/sof/intel/hda-dai.c
+@@ -503,6 +503,12 @@ int sdw_hda_dai_hw_params(struct snd_pcm_substream *substream,
+ int ret;
+ int i;
+
++ if (!w) {
++ dev_err(cpu_dai->dev, "%s widget not found, check amp link num in the topology\n",
++ cpu_dai->name);
++ return -EINVAL;
++ }
++
+ ops = hda_dai_get_ops(substream, cpu_dai);
+ if (!ops) {
+ dev_err(cpu_dai->dev, "DAI widget ops not set\n");
+@@ -582,6 +588,12 @@ int sdw_hda_dai_hw_params(struct snd_pcm_substream *substream,
+ */
+ for_each_rtd_cpu_dais(rtd, i, dai) {
+ w = snd_soc_dai_get_widget(dai, substream->stream);
++ if (!w) {
++ dev_err(cpu_dai->dev,
++ "%s widget not found, check amp link num in the topology\n",
++ dai->name);
++ return -EINVAL;
++ }
+ ipc4_copier = widget_to_copier(w);
+ memcpy(&ipc4_copier->dma_config_tlv[cpu_dai_id], dma_config_tlv,
+ sizeof(*dma_config_tlv));
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index 70fc08c8fc99e2..f10ed4d1025016 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -63,6 +63,11 @@ static int sdw_params_stream(struct device *dev,
+ struct snd_soc_dapm_widget *w = snd_soc_dai_get_widget(d, params_data->substream->stream);
+ struct snd_sof_dai_config_data data = { 0 };
+
++ if (!w) {
++ dev_err(dev, "%s widget not found, check amp link num in the topology\n",
++ d->name);
++ return -EINVAL;
++ }
+ data.dai_index = (params_data->link_id << 8) | d->id;
+ data.dai_data = params_data->alh_stream_id;
+ data.dai_node_id = data.dai_data;
+diff --git a/tools/perf/bench/epoll-wait.c b/tools/perf/bench/epoll-wait.c
+index ef5c4257844d13..20fe4f72b4afcc 100644
+--- a/tools/perf/bench/epoll-wait.c
++++ b/tools/perf/bench/epoll-wait.c
+@@ -420,7 +420,12 @@ static int cmpworker(const void *p1, const void *p2)
+
+ struct worker *w1 = (struct worker *) p1;
+ struct worker *w2 = (struct worker *) p2;
+- return w1->tid > w2->tid;
++
++ if (w1->tid > w2->tid)
++ return 1;
++ if (w1->tid < w2->tid)
++ return -1;
++ return 0;
+ }
+
+ int bench_epoll_wait(int argc, const char **argv)
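
[The original comparator returned w1->tid > w2->tid, which collapses "less
than" and "equal" into 0 and violates the three-way contract qsort() expects,
so the resulting order could be inconsistent. The explicit branches also
avoid the classic overflow of "return a - b;". A self-contained comparator
under the same contract:

    #include <stdio.h>
    #include <stdlib.h>

    /* qsort contract: negative for a < b, zero for equal, positive for a > b. */
    static int cmp_int(const void *p1, const void *p2)
    {
        int a = *(const int *)p1, b = *(const int *)p2;

        if (a > b)
            return 1;
        if (a < b)
            return -1;
        return 0; /* never "a - b": that can overflow */
    }

    int main(void)
    {
        int v[] = { 3, 1, 2 };

        qsort(v, 3, sizeof(v[0]), cmp_int);
        printf("%d %d %d\n", v[0], v[1], v[2]); /* 1 2 3 */
        return 0;
    }
]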
+diff --git a/tools/testing/selftests/net/ipsec.c b/tools/testing/selftests/net/ipsec.c
+index be4a30a0d02aef..9b44a091802cbb 100644
+--- a/tools/testing/selftests/net/ipsec.c
++++ b/tools/testing/selftests/net/ipsec.c
+@@ -227,7 +227,8 @@ static int rtattr_pack(struct nlmsghdr *nh, size_t req_sz,
+
+ attr->rta_len = RTA_LENGTH(size);
+ attr->rta_type = rta_type;
+- memcpy(RTA_DATA(attr), payload, size);
++ if (payload)
++ memcpy(RTA_DATA(attr), payload, size);
+
+ return 0;
+ }
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+index 414addef9a4514..d240d02fa443a1 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+@@ -1302,7 +1302,7 @@ int main_loop(void)
+ return ret;
+
+ if (cfg_truncate > 0) {
+- xdisconnect(fd);
++ shutdown(fd, SHUT_WR);
+ } else if (--cfg_repeat > 0) {
+ xdisconnect(fd);
+
+diff --git a/tools/testing/selftests/net/udpgso.c b/tools/testing/selftests/net/udpgso.c
+index 3f2fca02fec53f..36ff28af4b1905 100644
+--- a/tools/testing/selftests/net/udpgso.c
++++ b/tools/testing/selftests/net/udpgso.c
+@@ -102,6 +102,19 @@ struct testcase testcases_v4[] = {
+ .gso_len = CONST_MSS_V4,
+ .r_num_mss = 1,
+ },
++ {
++ /* datalen <= MSS < gso_len: will fall back to no GSO */
++ .tlen = CONST_MSS_V4,
++ .gso_len = CONST_MSS_V4 + 1,
++ .r_num_mss = 0,
++ .r_len_last = CONST_MSS_V4,
++ },
++ {
++ /* MSS < datalen < gso_len: fail */
++ .tlen = CONST_MSS_V4 + 1,
++ .gso_len = CONST_MSS_V4 + 2,
++ .tfail = true,
++ },
+ {
+ /* send a single MSS + 1B */
+ .tlen = CONST_MSS_V4 + 1,
+@@ -205,6 +218,19 @@ struct testcase testcases_v6[] = {
+ .gso_len = CONST_MSS_V6,
+ .r_num_mss = 1,
+ },
++ {
++ /* datalen <= MSS < gso_len: will fall back to no GSO */
++ .tlen = CONST_MSS_V6,
++ .gso_len = CONST_MSS_V6 + 1,
++ .r_num_mss = 0,
++ .r_len_last = CONST_MSS_V6,
++ },
++ {
++ /* MSS < datalen < gso_len: fail */
++ .tlen = CONST_MSS_V6 + 1,
++ .gso_len = CONST_MSS_V6 + 2,
++ .tfail = true
++ },
+ {
+ /* send a single MSS + 1B */
+ .tlen = CONST_MSS_V6 + 1,
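
Both new testcase pairs above probe gso_len values larger than the MSS: with the payload no bigger than one MSS the kernel quietly falls back to a plain send, while a payload between MSS and gso_len must fail. As a sketch of the socket knob these cases exercise (assuming Linux's UDP_SEGMENT option from the uapi linux/udp.h; the selftest wires this up through its own helpers):

#include <linux/udp.h>		/* UDP_SEGMENT */
#include <netinet/in.h>		/* IPPROTO_UDP */
#include <sys/socket.h>

/* Ask the kernel to slice each send into gso_len-byte datagrams. */
static int enable_udp_gso(int fd, int gso_len)
{
	return setsockopt(fd, IPPROTO_UDP, UDP_SEGMENT,
			  &gso_len, sizeof(gso_len));
}
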
+diff --git a/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c b/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
+index 6f4c3f5a1c5d99..37d9bf6fb7458d 100644
+--- a/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
++++ b/tools/testing/selftests/sched_ext/ddsp_bogus_dsq_fail.bpf.c
+@@ -20,7 +20,7 @@ s32 BPF_STRUCT_OPS(ddsp_bogus_dsq_fail_select_cpu, struct task_struct *p,
+ * If we dispatch to a bogus DSQ that will fall back to the
+ * builtin global DSQ, we fail gracefully.
+ */
+- scx_bpf_dsq_insert_vtime(p, 0xcafef00d, SCX_SLICE_DFL,
++ scx_bpf_dispatch_vtime(p, 0xcafef00d, SCX_SLICE_DFL,
+ p->scx.dsq_vtime, 0);
+ return cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c b/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
+index e4a55027778fd0..dffc97d9cdf141 100644
+--- a/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
++++ b/tools/testing/selftests/sched_ext/ddsp_vtimelocal_fail.bpf.c
+@@ -17,8 +17,8 @@ s32 BPF_STRUCT_OPS(ddsp_vtimelocal_fail_select_cpu, struct task_struct *p,
+
+ if (cpu >= 0) {
+ /* Shouldn't be allowed to vtime dispatch to a builtin DSQ. */
+- scx_bpf_dsq_insert_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL,
+- p->scx.dsq_vtime, 0);
++ scx_bpf_dispatch_vtime(p, SCX_DSQ_LOCAL, SCX_SLICE_DFL,
++ p->scx.dsq_vtime, 0);
+ return cpu;
+ }
+
+diff --git a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
+index fbda6bf5467128..c9a2da0575a0fa 100644
+--- a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
++++ b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
+@@ -48,7 +48,7 @@ void BPF_STRUCT_OPS(dsp_local_on_dispatch, s32 cpu, struct task_struct *prev)
+ else
+ target = scx_bpf_task_cpu(p);
+
+- scx_bpf_dsq_insert(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0);
++ scx_bpf_dispatch(p, SCX_DSQ_LOCAL_ON | target, SCX_SLICE_DFL, 0);
+ bpf_task_release(p);
+ }
+
+diff --git a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
+index a7cf868d5e311d..1efb50d61040ad 100644
+--- a/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
++++ b/tools/testing/selftests/sched_ext/enq_select_cpu_fails.bpf.c
+@@ -31,7 +31,7 @@ void BPF_STRUCT_OPS(enq_select_cpu_fails_enqueue, struct task_struct *p,
+ /* Can only call from ops.select_cpu() */
+ scx_bpf_select_cpu_dfl(p, 0, 0, &found);
+
+- scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
+ }
+
+ SEC(".struct_ops.link")
+diff --git a/tools/testing/selftests/sched_ext/exit.bpf.c b/tools/testing/selftests/sched_ext/exit.bpf.c
+index 4bc36182d3ffc2..d75d4faf07f6d5 100644
+--- a/tools/testing/selftests/sched_ext/exit.bpf.c
++++ b/tools/testing/selftests/sched_ext/exit.bpf.c
+@@ -33,7 +33,7 @@ void BPF_STRUCT_OPS(exit_enqueue, struct task_struct *p, u64 enq_flags)
+ if (exit_point == EXIT_ENQUEUE)
+ EXIT_CLEANLY();
+
+- scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dispatch(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
+ }
+
+ void BPF_STRUCT_OPS(exit_dispatch, s32 cpu, struct task_struct *p)
+@@ -41,7 +41,7 @@ void BPF_STRUCT_OPS(exit_dispatch, s32 cpu, struct task_struct *p)
+ if (exit_point == EXIT_DISPATCH)
+ EXIT_CLEANLY();
+
+- scx_bpf_dsq_move_to_local(DSQ_ID);
++ scx_bpf_consume(DSQ_ID);
+ }
+
+ void BPF_STRUCT_OPS(exit_enable, struct task_struct *p)
+diff --git a/tools/testing/selftests/sched_ext/maximal.bpf.c b/tools/testing/selftests/sched_ext/maximal.bpf.c
+index 430f5e13bf5544..361797e10ed5d5 100644
+--- a/tools/testing/selftests/sched_ext/maximal.bpf.c
++++ b/tools/testing/selftests/sched_ext/maximal.bpf.c
+@@ -22,7 +22,7 @@ s32 BPF_STRUCT_OPS(maximal_select_cpu, struct task_struct *p, s32 prev_cpu,
+
+ void BPF_STRUCT_OPS(maximal_enqueue, struct task_struct *p, u64 enq_flags)
+ {
+- scx_bpf_dsq_insert(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dispatch(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
+ }
+
+ void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags)
+@@ -30,7 +30,7 @@ void BPF_STRUCT_OPS(maximal_dequeue, struct task_struct *p, u64 deq_flags)
+
+ void BPF_STRUCT_OPS(maximal_dispatch, s32 cpu, struct task_struct *prev)
+ {
+- scx_bpf_dsq_move_to_local(DSQ_ID);
++ scx_bpf_consume(DSQ_ID);
+ }
+
+ void BPF_STRUCT_OPS(maximal_runnable, struct task_struct *p, u64 enq_flags)
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
+index 13d0f5be788d12..f171ac47097060 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dfl.bpf.c
+@@ -30,7 +30,7 @@ void BPF_STRUCT_OPS(select_cpu_dfl_enqueue, struct task_struct *p,
+ }
+ scx_bpf_put_idle_cpumask(idle_mask);
+
+- scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, enq_flags);
+ }
+
+ SEC(".struct_ops.link")
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
+index 815f1d5d61ac43..9efdbb7da92887 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dfl_nodispatch.bpf.c
+@@ -67,7 +67,7 @@ void BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_enqueue, struct task_struct *p,
+ saw_local = true;
+ }
+
+- scx_bpf_dsq_insert(p, dsq_id, SCX_SLICE_DFL, enq_flags);
++ scx_bpf_dispatch(p, dsq_id, SCX_SLICE_DFL, enq_flags);
+ }
+
+ s32 BPF_STRUCT_OPS(select_cpu_dfl_nodispatch_init_task,
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
+index 4bb99699e9209c..59bfc4f36167a7 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch.bpf.c
+@@ -29,7 +29,7 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_select_cpu, struct task_struct *p,
+ cpu = prev_cpu;
+
+ dispatch:
+- scx_bpf_dsq_insert(p, dsq_id, SCX_SLICE_DFL, 0);
++ scx_bpf_dispatch(p, dsq_id, SCX_SLICE_DFL, 0);
+ return cpu;
+ }
+
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
+index 2a75de11b2cfd5..3bbd5fcdfb18e0 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch_bad_dsq.bpf.c
+@@ -18,7 +18,7 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_bad_dsq_select_cpu, struct task_struct *p
+ s32 prev_cpu, u64 wake_flags)
+ {
+ /* Dispatching to a random DSQ should fail. */
+- scx_bpf_dsq_insert(p, 0xcafef00d, SCX_SLICE_DFL, 0);
++ scx_bpf_dispatch(p, 0xcafef00d, SCX_SLICE_DFL, 0);
+
+ return prev_cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
+index 99d075695c9743..0fda57fe0ecfae 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_dispatch_dbl_dsp.bpf.c
+@@ -18,8 +18,8 @@ s32 BPF_STRUCT_OPS(select_cpu_dispatch_dbl_dsp_select_cpu, struct task_struct *p
+ s32 prev_cpu, u64 wake_flags)
+ {
+ /* Dispatching twice in a row is disallowed. */
+- scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
+- scx_bpf_dsq_insert(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
++ scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
++ scx_bpf_dispatch(p, SCX_DSQ_GLOBAL, SCX_SLICE_DFL, 0);
+
+ return prev_cpu;
+ }
+diff --git a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
+index bfcb96cd4954bd..e6c67bcf5e6e35 100644
+--- a/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
++++ b/tools/testing/selftests/sched_ext/select_cpu_vtime.bpf.c
+@@ -2,8 +2,8 @@
+ /*
+ * A scheduler that validates that enqueue flags are properly stored and
+ * applied at dispatch time when a task is directly dispatched from
+- * ops.select_cpu(). We validate this by using scx_bpf_dsq_insert_vtime(),
+- * and making the test a very basic vtime scheduler.
++ * ops.select_cpu(). We validate this by using scx_bpf_dispatch_vtime(), and
++ * making the test a very basic vtime scheduler.
+ *
+ * Copyright (c) 2024 Meta Platforms, Inc. and affiliates.
+ * Copyright (c) 2024 David Vernet <dvernet@meta.com>
+@@ -47,13 +47,13 @@ s32 BPF_STRUCT_OPS(select_cpu_vtime_select_cpu, struct task_struct *p,
+ cpu = prev_cpu;
+ scx_bpf_test_and_clear_cpu_idle(cpu);
+ ddsp:
+- scx_bpf_dsq_insert_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0);
++ scx_bpf_dispatch_vtime(p, VTIME_DSQ, SCX_SLICE_DFL, task_vtime(p), 0);
+ return cpu;
+ }
+
+ void BPF_STRUCT_OPS(select_cpu_vtime_dispatch, s32 cpu, struct task_struct *p)
+ {
+- if (scx_bpf_dsq_move_to_local(VTIME_DSQ))
++ if (scx_bpf_consume(VTIME_DSQ))
+ consumed = true;
+ }
+
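
Every sched_ext selftest hunk above applies the same rename in reverse, restoring the kfunc names that 6.12 actually ships: scx_bpf_dsq_insert() becomes scx_bpf_dispatch(), scx_bpf_dsq_insert_vtime() becomes scx_bpf_dispatch_vtime(), and scx_bpf_dsq_move_to_local() becomes scx_bpf_consume(). A minimal sketch of the restored pair, assuming the selftests' usual scx/common.bpf.h environment and a DSQ_ID created beforehand:

#include <scx/common.bpf.h>

#define DSQ_ID 0	/* assumed created earlier via scx_bpf_create_dsq() */

void BPF_STRUCT_OPS(sketch_enqueue, struct task_struct *p, u64 enq_flags)
{
	/* Old name for inserting: queue p on our DSQ with the default slice. */
	scx_bpf_dispatch(p, DSQ_ID, SCX_SLICE_DFL, enq_flags);
}

void BPF_STRUCT_OPS(sketch_dispatch, s32 cpu, struct task_struct *prev)
{
	/* Old name for moving to local: pull one queued task onto this CPU. */
	scx_bpf_consume(DSQ_ID);
}
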
+diff --git a/tools/tracing/rtla/src/osnoise.c b/tools/tracing/rtla/src/osnoise.c
+index 245e9344932bc4..699a83f538a8e8 100644
+--- a/tools/tracing/rtla/src/osnoise.c
++++ b/tools/tracing/rtla/src/osnoise.c
+@@ -867,7 +867,7 @@ int osnoise_set_workload(struct osnoise_context *context, bool onoff)
+
+ retval = osnoise_options_set_option("OSNOISE_WORKLOAD", onoff);
+ if (retval < 0)
+- return -1;
++ return -2;
+
+ context->opt_workload = onoff;
+
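
The rtla hunks that follow rely on osnoise_set_workload() now distinguishing its failure modes; the convention assumed here is 0 on success, -1 when the OSNOISE_WORKLOAD option does not exist on the running kernel, and -2 (the change above) when it exists but could not be written. Only the last case is fatal, which is why the callers below test retval < -1. A hedged sketch of that call-site shape:

#include <stdbool.h>
#include "osnoise.h"	/* rtla's osnoise_set_workload() */

static int apply_workload(struct osnoise_context *ctx, bool kernel_workload)
{
	int ret = osnoise_set_workload(ctx, kernel_workload);

	if (ret < -1)
		return ret;	/* option present but the write failed: fatal */

	return 0;		/* success, or an old kernel without the option */
}
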
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index 2cc3ffcbc983d3..4cbd2d8ebb0461 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -1091,12 +1091,15 @@ timerlat_hist_apply_config(struct osnoise_tool *tool, struct timerlat_hist_param
+ }
+ }
+
+- if (params->user_hist) {
+- retval = osnoise_set_workload(tool->context, 0);
+- if (retval) {
+- err_msg("Failed to set OSNOISE_WORKLOAD option\n");
+- goto out_err;
+- }
++ /*
++ * Set workload according to type of thread if the kernel supports it.
++ * On kernels without support, user threads will have already failed
++ * on missing timerlat_fd, and kernel threads do not need it.
++ */
++ retval = osnoise_set_workload(tool->context, params->kernel_workload);
++ if (retval < -1) {
++ err_msg("Failed to set OSNOISE_WORKLOAD option\n");
++ goto out_err;
+ }
+
+ return 0;
+@@ -1137,9 +1140,12 @@ static struct osnoise_tool
+ }
+
+ static int stop_tracing;
++static struct trace_instance *hist_inst = NULL;
+ static void stop_hist(int sig)
+ {
+ stop_tracing = 1;
++ if (hist_inst)
++ trace_instance_stop(hist_inst);
+ }
+
+ /*
+@@ -1185,6 +1191,12 @@ int timerlat_hist_main(int argc, char *argv[])
+ }
+
+ trace = &tool->trace;
++ /*
++ * Save trace instance into global variable so that SIGINT can stop
++ * the timerlat tracer.
++ * Otherwise, rtla could loop indefinitely when overloaded.
++ */
++ hist_inst = trace;
+
+ retval = enable_timerlat(trace);
+ if (retval) {
+@@ -1331,7 +1343,7 @@ int timerlat_hist_main(int argc, char *argv[])
+
+ return_value = 0;
+
+- if (trace_is_off(&tool->trace, &record->trace)) {
++ if (trace_is_off(&tool->trace, &record->trace) && !stop_tracing) {
+ printf("rtla timerlat hit stop tracing\n");
+
+ if (!params->no_aa)
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index ac2ff38a57ee55..d13be28dacd599 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -842,12 +842,15 @@ timerlat_top_apply_config(struct osnoise_tool *top, struct timerlat_top_params *
+ }
+ }
+
+- if (params->user_top) {
+- retval = osnoise_set_workload(top->context, 0);
+- if (retval) {
+- err_msg("Failed to set OSNOISE_WORKLOAD option\n");
+- goto out_err;
+- }
++ /*
++ * Set workload according to type of thread if the kernel supports it.
++ * On kernels without support, user threads will have already failed
++ * on missing timerlat_fd, and kernel threads do not need it.
++ */
++ retval = osnoise_set_workload(top->context, params->kernel_workload);
++ if (retval < -1) {
++ err_msg("Failed to set OSNOISE_WORKLOAD option\n");
++ goto out_err;
+ }
+
+ if (isatty(1) && !params->quiet)
+@@ -891,9 +894,12 @@ static struct osnoise_tool
+ }
+
+ static int stop_tracing;
++static struct trace_instance *top_inst = NULL;
+ static void stop_top(int sig)
+ {
+ stop_tracing = 1;
++ if (top_inst)
++ trace_instance_stop(top_inst);
+ }
+
+ /*
+@@ -940,6 +946,13 @@ int timerlat_top_main(int argc, char *argv[])
+ }
+
+ trace = &top->trace;
++ /*
++ * Save trace instance into global variable so that SIGINT can stop
++ * the timerlat tracer.
++ * Otherwise, rtla could loop indefinitely when overloaded.
++ */
++ top_inst = trace;
++
+
+ retval = enable_timerlat(trace);
+ if (retval) {
+@@ -1099,7 +1112,7 @@ int timerlat_top_main(int argc, char *argv[])
+
+ return_value = 0;
+
+- if (trace_is_off(&top->trace, &record->trace)) {
++ if (trace_is_off(&top->trace, &record->trace) && !stop_tracing) {
+ printf("rtla timerlat hit stop tracing\n");
+
+ if (!params->no_aa)
+diff --git a/tools/tracing/rtla/src/trace.c b/tools/tracing/rtla/src/trace.c
+index 170a706248abff..440323a997c621 100644
+--- a/tools/tracing/rtla/src/trace.c
++++ b/tools/tracing/rtla/src/trace.c
+@@ -196,6 +196,14 @@ int trace_instance_start(struct trace_instance *trace)
+ return tracefs_trace_on(trace->inst);
+ }
+
++/*
++ * trace_instance_stop - stop tracing a given rtla instance
++ */
++int trace_instance_stop(struct trace_instance *trace)
++{
++ return tracefs_trace_off(trace->inst);
++}
++
+ /*
+ * trace_events_free - free a list of trace events
+ */
+diff --git a/tools/tracing/rtla/src/trace.h b/tools/tracing/rtla/src/trace.h
+index c7c92dc9a18a61..76e1b77291ba2a 100644
+--- a/tools/tracing/rtla/src/trace.h
++++ b/tools/tracing/rtla/src/trace.h
+@@ -21,6 +21,7 @@ struct trace_instance {
+
+ int trace_instance_init(struct trace_instance *trace, char *tool_name);
+ int trace_instance_start(struct trace_instance *trace);
++int trace_instance_stop(struct trace_instance *trace);
+ void trace_instance_destroy(struct trace_instance *trace);
+
+ struct trace_seq *get_trace_seq(void);
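
The new trace_instance_stop() exists so the SIGINT handlers above can turn the tracer off directly; without it an overloaded rtla run could keep receiving trace data and never notice stop_tracing. A sketch of the wiring, mirroring timerlat_{hist,top}_main() (hypothetical names; rtla's trace.h provides the helper):

#include <signal.h>
#include "trace.h"	/* trace_instance_stop() wraps tracefs_trace_off() */

static int stop_tracing;
static struct trace_instance *inst;	/* saved once tracing is set up */

static void on_sigint(int sig)
{
	stop_tracing = 1;
	if (inst)
		trace_instance_stop(inst);
}

/* in main(): inst = &tool->trace; signal(SIGINT, on_sigint); */
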
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-17 11:25 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-02-17 11:25 UTC
To: gentoo-commits
commit: e35e34f8774b096f3213e24cfcbf45e4240ae613
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Feb 17 11:24:59 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Feb 17 11:24:59 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e35e34f8
Removed redundant patch
Removed
2980_GCC15-gnu23-to-gnu11-fix.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 --
2980_GCC15-gnu23-to-gnu11-fix.patch | 105 ------------------------------------
2 files changed, 109 deletions(-)
diff --git a/0000_README b/0000_README
index c6c607fe..54f48e7e 100644
--- a/0000_README
+++ b/0000_README
@@ -131,10 +131,6 @@ Patch: 2920_sign-file-patch-for-libressl.patch
From: https://bugs.gentoo.org/717166
Desc: sign-file: full functionality with modern LibreSSL
-Patch: 2980_GCC15-gnu23-to-gnu11-fix.patch
-From: https://lore.kernel.org/linux-kbuild/20241119044724.GA2246422@thelio-3990X/
-Desc: GCC 15 defaults to -std=gnu23. Hack in CSTD_FLAG to pass -std=gnu11 everywhere.
-
Patch: 2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
From: https://lore.kernel.org/bpf/
Desc: libbpf: workaround -Wmaybe-uninitialized false positive
diff --git a/2980_GCC15-gnu23-to-gnu11-fix.patch b/2980_GCC15-gnu23-to-gnu11-fix.patch
deleted file mode 100644
index c74b6180..00000000
--- a/2980_GCC15-gnu23-to-gnu11-fix.patch
+++ /dev/null
@@ -1,105 +0,0 @@
-GCC 15 defaults to -std=gnu23. While most of the kernel builds with -std=gnu11,
-some of it forgets to pass that flag. Hack in CSTD_FLAG to pass -std=gnu11
-everywhere.
-
-https://lore.kernel.org/linux-kbuild/20241119044724.GA2246422@thelio-3990X/
---- a/Makefile
-+++ b/Makefile
-@@ -416,6 +416,8 @@ export KCONFIG_CONFIG
- # SHELL used by kbuild
- CONFIG_SHELL := sh
-
-+CSTD_FLAG := -std=gnu11
-+
- HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS 2>/dev/null)
- HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS 2>/dev/null)
- HOST_LFS_LIBS := $(shell getconf LFS_LIBS 2>/dev/null)
-@@ -437,7 +439,7 @@ HOSTRUSTC = rustc
- HOSTPKG_CONFIG = pkg-config
-
- KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
-- -O2 -fomit-frame-pointer -std=gnu11
-+ -O2 -fomit-frame-pointer $(CSTD_FLAG)
- KBUILD_USERCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS)
- KBUILD_USERLDFLAGS := $(USERLDFLAGS)
-
-@@ -545,7 +547,7 @@ LINUXINCLUDE := \
- KBUILD_AFLAGS := -D__ASSEMBLY__ -fno-PIE
-
- KBUILD_CFLAGS :=
--KBUILD_CFLAGS += -std=gnu11
-+KBUILD_CFLAGS += $(CSTD_FLAG)
- KBUILD_CFLAGS += -fshort-wchar
- KBUILD_CFLAGS += -funsigned-char
- KBUILD_CFLAGS += -fno-common
-@@ -589,7 +591,7 @@ export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AW
- export PERL PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
- export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
- export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE
--export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS
-+export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS CSTD_FLAG
-
- export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS
- export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE
---- a/arch/arm64/kernel/vdso32/Makefile
-+++ b/arch/arm64/kernel/vdso32/Makefile
-@@ -65,7 +65,7 @@ VDSO_CFLAGS += -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
- -fno-strict-aliasing -fno-common \
- -Werror-implicit-function-declaration \
- -Wno-format-security \
-- -std=gnu11
-+ $(CSTD_FLAG)
- VDSO_CFLAGS += -O2
- # Some useful compiler-dependent flags from top-level Makefile
- VDSO_CFLAGS += $(call cc32-option,-Wno-pointer-sign)
---- a/arch/x86/Makefile
-+++ b/arch/x86/Makefile
-@@ -47,7 +47,7 @@ endif
-
- # How to compile the 16-bit code. Note we always compile for -march=i386;
- # that way we can complain to the user if the CPU is insufficient.
--REALMODE_CFLAGS := -std=gnu11 -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
-+REALMODE_CFLAGS := $(CSTD_FLAG) -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
- -Wall -Wstrict-prototypes -march=i386 -mregparm=3 \
- -fno-strict-aliasing -fomit-frame-pointer -fno-pic \
- -mno-mmx -mno-sse $(call cc-option,-fcf-protection=none)
---- a/drivers/firmware/efi/libstub/Makefile
-+++ b/drivers/firmware/efi/libstub/Makefile
-@@ -7,7 +7,7 @@
- #
-
- # non-x86 reuses KBUILD_CFLAGS, x86 does not
--cflags-y := $(KBUILD_CFLAGS)
-+cflags-y := $(KBUILD_CFLAGS) $(CSTD_FLAG)
-
- cflags-$(CONFIG_X86_32) := -march=i386
- cflags-$(CONFIG_X86_64) := -mcmodel=small
-@@ -18,7 +18,7 @@ cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ \
- $(call cc-disable-warning, address-of-packed-member) \
- $(call cc-disable-warning, gnu) \
- -fno-asynchronous-unwind-tables \
-- $(CLANG_FLAGS)
-+ $(CLANG_FLAGS) $(CSTD_FLAG)
-
- # arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
- # disable the stackleak plugin
-@@ -42,7 +42,7 @@ KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(cflags-y)) \
- -ffreestanding \
- -fno-stack-protector \
- $(call cc-option,-fno-addrsig) \
-- -D__DISABLE_EXPORTS
-+ -D__DISABLE_EXPORTS $(CSTD_FLAG)
-
- #
- # struct randomization only makes sense for Linux internal types, which the EFI
---- a/arch/x86/boot/compressed/Makefile
-+++ b/arch/x86/boot/compressed/Makefile
-@@ -24,7 +24,7 @@ targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
- # case of cross compiling, as it has the '--target=' flag, which is needed to
- # avoid errors with '-march=i386', and future flags may depend on the target to
- # be valid.
--KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS)
-+KBUILD_CFLAGS := -m$(BITS) -O2 $(CLANG_FLAGS) $(CSTD_FLAG)
- KBUILD_CFLAGS += -fno-strict-aliasing -fPIE
- KBUILD_CFLAGS += -Wundef
- KBUILD_CFLAGS += -DDISABLE_BRANCH_PROFILING
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-17 15:44 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-02-17 15:44 UTC
To: gentoo-commits
commit: 86cb22103e9d9da37a098e62cada3e3a279169a4
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Feb 17 15:44:27 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Feb 17 15:44:27 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=86cb2210
kbuild gcc15 fixes, thanks to Holger
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
2980_kbuild-gcc15-gnu23-to-gnu11-fix.patch | 94 ++++++++++++++++++++++++++++++
2 files changed, 98 insertions(+)
diff --git a/0000_README b/0000_README
index 54f48e7e..8a136823 100644
--- a/0000_README
+++ b/0000_README
@@ -131,6 +131,10 @@ Patch: 2920_sign-file-patch-for-libressl.patch
From: https://bugs.gentoo.org/717166
Desc: sign-file: full functionality with modern LibreSSL
+Patch: 2980_kbuild-gcc15-gnu23-to-gnu11-fix.patch
+From: https://github.com/hhoffstaette/kernel-patches/
+Desc: gcc 15 kbuild fixes
+
Patch: 2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
From: https://lore.kernel.org/bpf/
Desc: libbpf: workaround -Wmaybe-uninitialized false positive
diff --git a/2980_kbuild-gcc15-gnu23-to-gnu11-fix.patch b/2980_kbuild-gcc15-gnu23-to-gnu11-fix.patch
new file mode 100644
index 00000000..e55dc3ed
--- /dev/null
+++ b/2980_kbuild-gcc15-gnu23-to-gnu11-fix.patch
@@ -0,0 +1,94 @@
+GCC 15 defaults to -std=gnu23. While most of the kernel builds with -std=gnu11,
+some of it forgets to pass that flag. Hack in CSTD_FLAG to pass -std=gnu11
+everywhere.
+
+https://lore.kernel.org/linux-kbuild/20241119044724.GA2246422@thelio-3990X/
+--- a/Makefile
++++ b/Makefile
+@@ -416,6 +416,8 @@ export KCONFIG_CONFIG
+ # SHELL used by kbuild
+ CONFIG_SHELL := sh
+
++CSTD_FLAG := -std=gnu11
++
+ HOST_LFS_CFLAGS := $(shell getconf LFS_CFLAGS 2>/dev/null)
+ HOST_LFS_LDFLAGS := $(shell getconf LFS_LDFLAGS 2>/dev/null)
+ HOST_LFS_LIBS := $(shell getconf LFS_LIBS 2>/dev/null)
+@@ -437,7 +439,7 @@ HOSTRUSTC = rustc
+ HOSTPKG_CONFIG = pkg-config
+
+ KBUILD_USERHOSTCFLAGS := -Wall -Wmissing-prototypes -Wstrict-prototypes \
+- -O2 -fomit-frame-pointer -std=gnu11
++ -O2 -fomit-frame-pointer $(CSTD_FLAG)
+ KBUILD_USERCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(USERCFLAGS)
+ KBUILD_USERLDFLAGS := $(USERLDFLAGS)
+
+@@ -545,7 +547,7 @@ LINUXINCLUDE := \
+ KBUILD_AFLAGS := -D__ASSEMBLY__ -fno-PIE
+
+ KBUILD_CFLAGS :=
+-KBUILD_CFLAGS += -std=gnu11
++KBUILD_CFLAGS += $(CSTD_FLAG)
+ KBUILD_CFLAGS += -fshort-wchar
+ KBUILD_CFLAGS += -funsigned-char
+ KBUILD_CFLAGS += -fno-common
+@@ -589,7 +591,7 @@ export CPP AR NM STRIP OBJCOPY OBJDUMP READELF PAHOLE RESOLVE_BTFIDS LEX YACC AW
+ export PERL PYTHON3 CHECK CHECKFLAGS MAKE UTS_MACHINE HOSTCXX
+ export KGZIP KBZIP2 KLZOP LZMA LZ4 XZ ZSTD
+ export KBUILD_HOSTCXXFLAGS KBUILD_HOSTLDFLAGS KBUILD_HOSTLDLIBS LDFLAGS_MODULE
+-export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS
++export KBUILD_USERCFLAGS KBUILD_USERLDFLAGS CSTD_FLAG
+
+ export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS KBUILD_LDFLAGS
+ export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE
+--- a/arch/arm64/kernel/vdso32/Makefile
++++ b/arch/arm64/kernel/vdso32/Makefile
+@@ -65,7 +65,7 @@ VDSO_CFLAGS += -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
+ -fno-strict-aliasing -fno-common \
+ -Werror-implicit-function-declaration \
+ -Wno-format-security \
+- -std=gnu11
++ $(CSTD_FLAG)
+ VDSO_CFLAGS += -O2
+ # Some useful compiler-dependent flags from top-level Makefile
+ VDSO_CFLAGS += $(call cc32-option,-Wno-pointer-sign)
+--- a/arch/x86/Makefile
++++ b/arch/x86/Makefile
+@@ -47,7 +47,7 @@ endif
+
+ # How to compile the 16-bit code. Note we always compile for -march=i386;
+ # that way we can complain to the user if the CPU is insufficient.
+-REALMODE_CFLAGS := -std=gnu11 -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
++REALMODE_CFLAGS := $(CSTD_FLAG) -m16 -g -Os -DDISABLE_BRANCH_PROFILING -D__DISABLE_EXPORTS \
+ -Wall -Wstrict-prototypes -march=i386 -mregparm=3 \
+ -fno-strict-aliasing -fomit-frame-pointer -fno-pic \
+ -mno-mmx -mno-sse $(call cc-option,-fcf-protection=none)
+--- a/drivers/firmware/efi/libstub/Makefile
++++ b/drivers/firmware/efi/libstub/Makefile
+@@ -7,7 +7,7 @@
+ #
+
+ # non-x86 reuses KBUILD_CFLAGS, x86 does not
+-cflags-y := $(KBUILD_CFLAGS)
++cflags-y := $(KBUILD_CFLAGS) $(CSTD_FLAG)
+
+ cflags-$(CONFIG_X86_32) := -march=i386
+ cflags-$(CONFIG_X86_64) := -mcmodel=small
+@@ -18,7 +18,7 @@ cflags-$(CONFIG_X86) += -m$(BITS) -D__KERNEL__ \
+ $(call cc-disable-warning, address-of-packed-member) \
+ $(call cc-disable-warning, gnu) \
+ -fno-asynchronous-unwind-tables \
+- $(CLANG_FLAGS)
++ $(CLANG_FLAGS) $(CSTD_FLAG)
+
+ # arm64 uses the full KBUILD_CFLAGS so it's necessary to explicitly
+ # disable the stackleak plugin
+@@ -42,7 +42,7 @@ KBUILD_CFLAGS := $(subst $(CC_FLAGS_FTRACE),,$(cflags-y)) \
+ -ffreestanding \
+ -fno-stack-protector \
+ $(call cc-option,-fno-addrsig) \
+- -D__DISABLE_EXPORTS
++ -D__DISABLE_EXPORTS $(CSTD_FLAG)
+
+ #
+ # struct randomization only makes sense for Linux internal types, which the EFI
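
The whole CSTD_FLAG exercise exists because GCC 15 compiles with -std=gnu23 by default while parts of the kernel still assume gnu11. A quick standalone probe (not part of the patch) shows which standard a given invocation actually gets; gnu11 reports 201112L, a C23-era default reports a larger value:

#include <stdio.h>

int main(void)
{
	printf("__STDC_VERSION__ = %ldL\n", (long)__STDC_VERSION__);
	return 0;
}

Compiling it once as `gcc probe.c` and once as `gcc -std=gnu11 probe.c` makes the difference visible on a GCC 15 toolchain.
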
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-18 11:26 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-02-18 11:26 UTC
To: gentoo-commits
commit: 74d0366e3c6bc166d44d782e2f233740da1a9a16
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Feb 18 11:26:15 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Feb 18 11:26:15 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=74d0366e
Linux patch 6.12.15
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
1014_linux-6.12.15.patch | 142 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 146 insertions(+)
diff --git a/0000_README b/0000_README
index 8a136823..f6cd3204 100644
--- a/0000_README
+++ b/0000_README
@@ -99,6 +99,10 @@ Patch: 1013_linux-6.12.14.patch
From: https://www.kernel.org
Desc: Linux 6.12.14
+Patch: 1014_linux-6.12.15.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.15
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1014_linux-6.12.15.patch b/1014_linux-6.12.15.patch
new file mode 100644
index 00000000..8fb3146b
--- /dev/null
+++ b/1014_linux-6.12.15.patch
@@ -0,0 +1,142 @@
+diff --git a/Makefile b/Makefile
+index 26a471dbed62a5..c6918c620bc368 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/fs/xfs/xfs_quota.h b/fs/xfs/xfs_quota.h
+index 23d71a55bbc006..032f3a70f21ddd 100644
+--- a/fs/xfs/xfs_quota.h
++++ b/fs/xfs/xfs_quota.h
+@@ -96,7 +96,8 @@ extern void xfs_trans_free_dqinfo(struct xfs_trans *);
+ extern void xfs_trans_mod_dquot_byino(struct xfs_trans *, struct xfs_inode *,
+ uint, int64_t);
+ extern void xfs_trans_apply_dquot_deltas(struct xfs_trans *);
+-extern void xfs_trans_unreserve_and_mod_dquots(struct xfs_trans *);
++void xfs_trans_unreserve_and_mod_dquots(struct xfs_trans *tp,
++ bool already_locked);
+ int xfs_trans_reserve_quota_nblks(struct xfs_trans *tp, struct xfs_inode *ip,
+ int64_t dblocks, int64_t rblocks, bool force);
+ extern int xfs_trans_reserve_quota_bydquots(struct xfs_trans *,
+@@ -166,7 +167,7 @@ static inline void xfs_trans_mod_dquot_byino(struct xfs_trans *tp,
+ {
+ }
+ #define xfs_trans_apply_dquot_deltas(tp)
+-#define xfs_trans_unreserve_and_mod_dquots(tp)
++#define xfs_trans_unreserve_and_mod_dquots(tp, a)
+ static inline int xfs_trans_reserve_quota_nblks(struct xfs_trans *tp,
+ struct xfs_inode *ip, int64_t dblocks, int64_t rblocks,
+ bool force)
+diff --git a/fs/xfs/xfs_trans.c b/fs/xfs/xfs_trans.c
+index ee46051db12dde..39cd11cbe21fcb 100644
+--- a/fs/xfs/xfs_trans.c
++++ b/fs/xfs/xfs_trans.c
+@@ -840,6 +840,7 @@ __xfs_trans_commit(
+ */
+ if (tp->t_flags & XFS_TRANS_SB_DIRTY)
+ xfs_trans_apply_sb_deltas(tp);
++ xfs_trans_apply_dquot_deltas(tp);
+
+ error = xfs_trans_run_precommits(tp);
+ if (error)
+@@ -868,11 +869,6 @@ __xfs_trans_commit(
+
+ ASSERT(tp->t_ticket != NULL);
+
+- /*
+- * If we need to update the superblock, then do it now.
+- */
+- xfs_trans_apply_dquot_deltas(tp);
+-
+ xlog_cil_commit(log, tp, &commit_seq, regrant);
+
+ xfs_trans_free(tp);
+@@ -898,7 +894,7 @@ __xfs_trans_commit(
+ * the dqinfo portion to be. All that means is that we have some
+ * (non-persistent) quota reservations that need to be unreserved.
+ */
+- xfs_trans_unreserve_and_mod_dquots(tp);
++ xfs_trans_unreserve_and_mod_dquots(tp, true);
+ if (tp->t_ticket) {
+ if (regrant && !xlog_is_shutdown(log))
+ xfs_log_ticket_regrant(log, tp->t_ticket);
+@@ -992,7 +988,7 @@ xfs_trans_cancel(
+ }
+ #endif
+ xfs_trans_unreserve_and_mod_sb(tp);
+- xfs_trans_unreserve_and_mod_dquots(tp);
++ xfs_trans_unreserve_and_mod_dquots(tp, false);
+
+ if (tp->t_ticket) {
+ xfs_log_ticket_ungrant(log, tp->t_ticket);
+diff --git a/fs/xfs/xfs_trans_dquot.c b/fs/xfs/xfs_trans_dquot.c
+index b368e13424c4f4..b92eeaa1a2a9e7 100644
+--- a/fs/xfs/xfs_trans_dquot.c
++++ b/fs/xfs/xfs_trans_dquot.c
+@@ -602,6 +602,24 @@ xfs_trans_apply_dquot_deltas(
+ ASSERT(dqp->q_blk.reserved >= dqp->q_blk.count);
+ ASSERT(dqp->q_ino.reserved >= dqp->q_ino.count);
+ ASSERT(dqp->q_rtb.reserved >= dqp->q_rtb.count);
++
++ /*
++ * We've applied the count changes and given back
++ * whatever reservation we didn't use. Zero out the
++ * dqtrx fields.
++ */
++ qtrx->qt_blk_res = 0;
++ qtrx->qt_bcount_delta = 0;
++ qtrx->qt_delbcnt_delta = 0;
++
++ qtrx->qt_rtblk_res = 0;
++ qtrx->qt_rtblk_res_used = 0;
++ qtrx->qt_rtbcount_delta = 0;
++ qtrx->qt_delrtb_delta = 0;
++
++ qtrx->qt_ino_res = 0;
++ qtrx->qt_ino_res_used = 0;
++ qtrx->qt_icount_delta = 0;
+ }
+ }
+ }
+@@ -638,7 +656,8 @@ xfs_trans_unreserve_and_mod_dquots_hook(
+ */
+ void
+ xfs_trans_unreserve_and_mod_dquots(
+- struct xfs_trans *tp)
++ struct xfs_trans *tp,
++ bool already_locked)
+ {
+ int i, j;
+ struct xfs_dquot *dqp;
+@@ -667,10 +686,12 @@ xfs_trans_unreserve_and_mod_dquots(
+ * about the number of blocks used field, or deltas.
+ * Also we don't bother to zero the fields.
+ */
+- locked = false;
++ locked = already_locked;
+ if (qtrx->qt_blk_res) {
+- xfs_dqlock(dqp);
+- locked = true;
++ if (!locked) {
++ xfs_dqlock(dqp);
++ locked = true;
++ }
+ dqp->q_blk.reserved -=
+ (xfs_qcnt_t)qtrx->qt_blk_res;
+ }
+@@ -691,7 +712,7 @@ xfs_trans_unreserve_and_mod_dquots(
+ dqp->q_rtb.reserved -=
+ (xfs_qcnt_t)qtrx->qt_rtblk_res;
+ }
+- if (locked)
++ if (locked && !already_locked)
+ xfs_dqunlock(dqp);
+
+ }
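
The already_locked plumbing above follows the usual conditional-locking shape: the commit path now applies the dquot deltas earlier, while the dquots are still locked, so the unreserve path must neither re-take nor prematurely drop a lock its caller owns. A generic sketch of that shape with the XFS details elided (dq_lock()/dq_unlock() stand in for xfs_dqlock()/xfs_dqunlock()):

#include <stdbool.h>

struct dquot_sketch;
void dq_lock(struct dquot_sketch *dqp);
void dq_unlock(struct dquot_sketch *dqp);

static void unreserve_one(struct dquot_sketch *dqp, bool already_locked)
{
	bool locked = already_locked;

	if (!locked) {
		dq_lock(dqp);
		locked = true;
	}

	/* ... hand back the unused block/inode reservations ... */

	if (locked && !already_locked)
		dq_unlock(dqp);
}
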
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-21 13:31 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-02-21 13:31 UTC
To: gentoo-commits
commit: ce2243f5071849f131d7ebfffb21858a8b0fb12a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Feb 21 13:30:53 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Feb 21 13:30:53 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ce2243f5
Linux patch 6.12.16
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1015_linux-6.12.16.patch | 9009 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 9013 insertions(+)
diff --git a/0000_README b/0000_README
index f6cd3204..9f0c3a67 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch: 1014_linux-6.12.15.patch
From: https://www.kernel.org
Desc: Linux 6.12.15
+Patch: 1015_linux-6.12.16.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.16
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1015_linux-6.12.16.patch b/1015_linux-6.12.16.patch
new file mode 100644
index 00000000..6d524d6b
--- /dev/null
+++ b/1015_linux-6.12.16.patch
@@ -0,0 +1,9009 @@
+diff --git a/Documentation/devicetree/bindings/regulator/qcom,smd-rpm-regulator.yaml b/Documentation/devicetree/bindings/regulator/qcom,smd-rpm-regulator.yaml
+index f2fd2df68a9ed9..b7241ce975b961 100644
+--- a/Documentation/devicetree/bindings/regulator/qcom,smd-rpm-regulator.yaml
++++ b/Documentation/devicetree/bindings/regulator/qcom,smd-rpm-regulator.yaml
+@@ -22,7 +22,7 @@ description:
+ Each sub-node is identified using the node's name, with valid values listed
+ for each of the pmics below.
+
+- For mp5496, s1, s2
++ For mp5496, s1, s2, l2, l5
+
+ For pm2250, s1, s2, s3, s4, l1, l2, l3, l4, l5, l6, l7, l8, l9, l10, l11,
+ l12, l13, l14, l15, l16, l17, l18, l19, l20, l21, l22
+diff --git a/Documentation/networking/iso15765-2.rst b/Documentation/networking/iso15765-2.rst
+index 0e9d960741783b..37ebb2c417cb44 100644
+--- a/Documentation/networking/iso15765-2.rst
++++ b/Documentation/networking/iso15765-2.rst
+@@ -369,8 +369,8 @@ to their default.
+
+ addr.can_family = AF_CAN;
+ addr.can_ifindex = if_nametoindex("can0");
+- addr.tp.tx_id = 0x18DA42F1 | CAN_EFF_FLAG;
+- addr.tp.rx_id = 0x18DAF142 | CAN_EFF_FLAG;
++ addr.can_addr.tp.tx_id = 0x18DA42F1 | CAN_EFF_FLAG;
++ addr.can_addr.tp.rx_id = 0x18DAF142 | CAN_EFF_FLAG;
+
+ ret = bind(s, (struct sockaddr *)&addr, sizeof(addr));
+ if (ret < 0)
+diff --git a/Makefile b/Makefile
+index c6918c620bc368..340da922fa4f2c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -1057,8 +1057,8 @@ LDFLAGS_vmlinux += --orphan-handling=$(CONFIG_LD_ORPHAN_WARN_LEVEL)
+ endif
+
+ # Align the bit size of userspace programs with the kernel
+-KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS))
+-KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS))
++KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
++KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
+
+ # make the checker run with the right architecture
+ CHECKFLAGS += --arch=$(ARCH)
+@@ -1357,18 +1357,13 @@ ifneq ($(wildcard $(resolve_btfids_O)),)
+ $(Q)$(MAKE) -sC $(srctree)/tools/bpf/resolve_btfids O=$(resolve_btfids_O) clean
+ endif
+
+-# Clear a bunch of variables before executing the submake
+-ifeq ($(quiet),silent_)
+-tools_silent=s
+-endif
+-
+ tools/: FORCE
+ $(Q)mkdir -p $(objtree)/tools
+- $(Q)$(MAKE) LDFLAGS= MAKEFLAGS="$(tools_silent) $(filter --j% -j,$(MAKEFLAGS))" O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/
++ $(Q)$(MAKE) LDFLAGS= O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/
+
+ tools/%: FORCE
+ $(Q)mkdir -p $(objtree)/tools
+- $(Q)$(MAKE) LDFLAGS= MAKEFLAGS="$(tools_silent) $(filter --j% -j,$(MAKEFLAGS))" O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/ $*
++ $(Q)$(MAKE) LDFLAGS= O=$(abspath $(objtree)) subdir=tools -C $(srctree)/tools/ $*
+
+ # ---------------------------------------------------------------------------
+ # Kernel selftest
+diff --git a/arch/alpha/include/uapi/asm/ptrace.h b/arch/alpha/include/uapi/asm/ptrace.h
+index 5ca45934fcbb82..72ed913a910f25 100644
+--- a/arch/alpha/include/uapi/asm/ptrace.h
++++ b/arch/alpha/include/uapi/asm/ptrace.h
+@@ -42,6 +42,8 @@ struct pt_regs {
+ unsigned long trap_a0;
+ unsigned long trap_a1;
+ unsigned long trap_a2;
++/* This makes the stack 16-byte aligned as GCC expects */
++ unsigned long __pad0;
+ /* These are saved by PAL-code: */
+ unsigned long ps;
+ unsigned long pc;
+diff --git a/arch/alpha/kernel/asm-offsets.c b/arch/alpha/kernel/asm-offsets.c
+index 4cfeae42c79ac7..e9dad60b147f33 100644
+--- a/arch/alpha/kernel/asm-offsets.c
++++ b/arch/alpha/kernel/asm-offsets.c
+@@ -19,9 +19,13 @@ static void __used foo(void)
+ DEFINE(TI_STATUS, offsetof(struct thread_info, status));
+ BLANK();
+
++ DEFINE(SP_OFF, offsetof(struct pt_regs, ps));
+ DEFINE(SIZEOF_PT_REGS, sizeof(struct pt_regs));
+ BLANK();
+
++ DEFINE(SWITCH_STACK_SIZE, sizeof(struct switch_stack));
++ BLANK();
++
+ DEFINE(HAE_CACHE, offsetof(struct alpha_machine_vector, hae_cache));
+ DEFINE(HAE_REG, offsetof(struct alpha_machine_vector, hae_register));
+ }
+diff --git a/arch/alpha/kernel/entry.S b/arch/alpha/kernel/entry.S
+index dd26062d75b3c5..f4d41b4538c2e8 100644
+--- a/arch/alpha/kernel/entry.S
++++ b/arch/alpha/kernel/entry.S
+@@ -15,10 +15,6 @@
+ .set noat
+ .cfi_sections .debug_frame
+
+-/* Stack offsets. */
+-#define SP_OFF 184
+-#define SWITCH_STACK_SIZE 64
+-
+ .macro CFI_START_OSF_FRAME func
+ .align 4
+ .globl \func
+@@ -198,8 +194,8 @@ CFI_END_OSF_FRAME entArith
+ CFI_START_OSF_FRAME entMM
+ SAVE_ALL
+ /* save $9 - $15 so the inline exception code can manipulate them. */
+- subq $sp, 56, $sp
+- .cfi_adjust_cfa_offset 56
++ subq $sp, 64, $sp
++ .cfi_adjust_cfa_offset 64
+ stq $9, 0($sp)
+ stq $10, 8($sp)
+ stq $11, 16($sp)
+@@ -214,7 +210,7 @@ CFI_START_OSF_FRAME entMM
+ .cfi_rel_offset $13, 32
+ .cfi_rel_offset $14, 40
+ .cfi_rel_offset $15, 48
+- addq $sp, 56, $19
++ addq $sp, 64, $19
+ /* handle the fault */
+ lda $8, 0x3fff
+ bic $sp, $8, $8
+@@ -227,7 +223,7 @@ CFI_START_OSF_FRAME entMM
+ ldq $13, 32($sp)
+ ldq $14, 40($sp)
+ ldq $15, 48($sp)
+- addq $sp, 56, $sp
++ addq $sp, 64, $sp
+ .cfi_restore $9
+ .cfi_restore $10
+ .cfi_restore $11
+@@ -235,7 +231,7 @@ CFI_START_OSF_FRAME entMM
+ .cfi_restore $13
+ .cfi_restore $14
+ .cfi_restore $15
+- .cfi_adjust_cfa_offset -56
++ .cfi_adjust_cfa_offset -64
+ /* finish up the syscall as normal. */
+ br ret_from_sys_call
+ CFI_END_OSF_FRAME entMM
+@@ -382,8 +378,8 @@ entUnaUser:
+ .cfi_restore $0
+ .cfi_adjust_cfa_offset -256
+ SAVE_ALL /* setup normal kernel stack */
+- lda $sp, -56($sp)
+- .cfi_adjust_cfa_offset 56
++ lda $sp, -64($sp)
++ .cfi_adjust_cfa_offset 64
+ stq $9, 0($sp)
+ stq $10, 8($sp)
+ stq $11, 16($sp)
+@@ -399,7 +395,7 @@ entUnaUser:
+ .cfi_rel_offset $14, 40
+ .cfi_rel_offset $15, 48
+ lda $8, 0x3fff
+- addq $sp, 56, $19
++ addq $sp, 64, $19
+ bic $sp, $8, $8
+ jsr $26, do_entUnaUser
+ ldq $9, 0($sp)
+@@ -409,7 +405,7 @@ entUnaUser:
+ ldq $13, 32($sp)
+ ldq $14, 40($sp)
+ ldq $15, 48($sp)
+- lda $sp, 56($sp)
++ lda $sp, 64($sp)
+ .cfi_restore $9
+ .cfi_restore $10
+ .cfi_restore $11
+@@ -417,7 +413,7 @@ entUnaUser:
+ .cfi_restore $13
+ .cfi_restore $14
+ .cfi_restore $15
+- .cfi_adjust_cfa_offset -56
++ .cfi_adjust_cfa_offset -64
+ br ret_from_sys_call
+ CFI_END_OSF_FRAME entUna
+
+diff --git a/arch/alpha/kernel/traps.c b/arch/alpha/kernel/traps.c
+index a9a38c80c4a7af..7004397937cfda 100644
+--- a/arch/alpha/kernel/traps.c
++++ b/arch/alpha/kernel/traps.c
+@@ -649,7 +649,7 @@ s_reg_to_mem (unsigned long s_reg)
+ static int unauser_reg_offsets[32] = {
+ R(r0), R(r1), R(r2), R(r3), R(r4), R(r5), R(r6), R(r7), R(r8),
+ /* r9 ... r15 are stored in front of regs. */
+- -56, -48, -40, -32, -24, -16, -8,
++ -64, -56, -48, -40, -32, -24, -16, /* padding at -8 */
+ R(r16), R(r17), R(r18),
+ R(r19), R(r20), R(r21), R(r22), R(r23), R(r24), R(r25), R(r26),
+ R(r27), R(r28), R(gp),
+diff --git a/arch/alpha/mm/fault.c b/arch/alpha/mm/fault.c
+index 8c9850437e6744..a9816bbc9f34d3 100644
+--- a/arch/alpha/mm/fault.c
++++ b/arch/alpha/mm/fault.c
+@@ -78,8 +78,8 @@ __load_new_mm_context(struct mm_struct *next_mm)
+
+ /* Macro for exception fixup code to access integer registers. */
+ #define dpf_reg(r) \
+- (((unsigned long *)regs)[(r) <= 8 ? (r) : (r) <= 15 ? (r)-16 : \
+- (r) <= 18 ? (r)+10 : (r)-10])
++ (((unsigned long *)regs)[(r) <= 8 ? (r) : (r) <= 15 ? (r)-17 : \
++ (r) <= 18 ? (r)+11 : (r)-10])
+
+ asmlinkage void
+ do_page_fault(unsigned long address, unsigned long mmcsr,
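
The offset arithmetic behind the entry.S and fault.c hunks follows directly from the added padding word: r9-r15 are spilled in front of pt_regs, and that spill area grows from 7 * 8 = 56 bytes to 64 so the frame stays 16-byte aligned. Counting in 8-byte words back from the pt_regs base, r9 moves from index -7 to -8, which is exactly why dpf_reg() changes (r)-16 to (r)-17 for r = 9..15 (9 - 16 = -7 versus 9 - 17 = -8), with the padding itself sitting unused at -8 bytes.
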
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index 9efd3f37c2fd9d..19a4988621ac9a 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -48,7 +48,11 @@ KBUILD_CFLAGS += $(CC_FLAGS_NO_FPU) \
+ KBUILD_CFLAGS += $(call cc-disable-warning, psabi)
+ KBUILD_AFLAGS += $(compat_vdso)
+
++ifeq ($(call test-ge, $(CONFIG_RUSTC_VERSION), 108500),y)
++KBUILD_RUSTFLAGS += --target=aarch64-unknown-none-softfloat
++else
+ KBUILD_RUSTFLAGS += --target=aarch64-unknown-none -Ctarget-feature="-neon"
++endif
+
+ KBUILD_CFLAGS += $(call cc-option,-mabi=lp64)
+ KBUILD_AFLAGS += $(call cc-option,-mabi=lp64)
+diff --git a/arch/arm64/kernel/cacheinfo.c b/arch/arm64/kernel/cacheinfo.c
+index d9c9218fa1fddc..309942b06c5bc2 100644
+--- a/arch/arm64/kernel/cacheinfo.c
++++ b/arch/arm64/kernel/cacheinfo.c
+@@ -101,16 +101,18 @@ int populate_cache_leaves(unsigned int cpu)
+ unsigned int level, idx;
+ enum cache_type type;
+ struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);
+- struct cacheinfo *this_leaf = this_cpu_ci->info_list;
++ struct cacheinfo *infos = this_cpu_ci->info_list;
+
+ for (idx = 0, level = 1; level <= this_cpu_ci->num_levels &&
+- idx < this_cpu_ci->num_leaves; idx++, level++) {
++ idx < this_cpu_ci->num_leaves; level++) {
+ type = get_cache_type(level);
+ if (type == CACHE_TYPE_SEPARATE) {
+- ci_leaf_init(this_leaf++, CACHE_TYPE_DATA, level);
+- ci_leaf_init(this_leaf++, CACHE_TYPE_INST, level);
++ if (idx + 1 >= this_cpu_ci->num_leaves)
++ break;
++ ci_leaf_init(&infos[idx++], CACHE_TYPE_DATA, level);
++ ci_leaf_init(&infos[idx++], CACHE_TYPE_INST, level);
+ } else {
+- ci_leaf_init(this_leaf++, type, level);
++ ci_leaf_init(&infos[idx++], type, level);
+ }
+ }
+ return 0;
+diff --git a/arch/arm64/kernel/vdso/vdso.lds.S b/arch/arm64/kernel/vdso/vdso.lds.S
+index f204a9ddc83359..a3f1e895e2a670 100644
+--- a/arch/arm64/kernel/vdso/vdso.lds.S
++++ b/arch/arm64/kernel/vdso/vdso.lds.S
+@@ -41,6 +41,7 @@ SECTIONS
+ */
+ /DISCARD/ : {
+ *(.note.GNU-stack .note.gnu.property)
++ *(.ARM.attributes)
+ }
+ .note : { *(.note.*) } :text :note
+
+diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
+index f84c71f04d9ea9..e73326bd3ff7e9 100644
+--- a/arch/arm64/kernel/vmlinux.lds.S
++++ b/arch/arm64/kernel/vmlinux.lds.S
+@@ -162,6 +162,7 @@ SECTIONS
+ /DISCARD/ : {
+ *(.interp .dynamic)
+ *(.dynsym .dynstr .hash .gnu.hash)
++ *(.ARM.attributes)
+ }
+
+ . = KIMAGE_VADDR;
+diff --git a/arch/loongarch/kernel/genex.S b/arch/loongarch/kernel/genex.S
+index 86d5d90ebefe5b..4f09121417818d 100644
+--- a/arch/loongarch/kernel/genex.S
++++ b/arch/loongarch/kernel/genex.S
+@@ -18,16 +18,19 @@
+
+ .align 5
+ SYM_FUNC_START(__arch_cpu_idle)
+- /* start of rollback region */
+- LONG_L t0, tp, TI_FLAGS
+- nop
+- andi t0, t0, _TIF_NEED_RESCHED
+- bnez t0, 1f
+- nop
+- nop
+- nop
++ /* start of idle interrupt region */
++ ori t0, zero, CSR_CRMD_IE
++ /* idle instruction needs irq enabled */
++ csrxchg t0, t0, LOONGARCH_CSR_CRMD
++ /*
++ * If an interrupt lands here; between enabling interrupts above and
++ * going idle on the next instruction, we must *NOT* go idle since the
++ * interrupt could have set TIF_NEED_RESCHED or caused a timer to need
++ * reprogramming. Fall through -- see handle_vint() below -- and have
++ * the idle loop take care of things.
++ */
+ idle 0
+- /* end of rollback region */
++ /* end of idle interrupt region */
+ 1: jr ra
+ SYM_FUNC_END(__arch_cpu_idle)
+
+@@ -35,11 +38,10 @@ SYM_CODE_START(handle_vint)
+ UNWIND_HINT_UNDEFINED
+ BACKUP_T0T1
+ SAVE_ALL
+- la_abs t1, __arch_cpu_idle
++ la_abs t1, 1b
+ LONG_L t0, sp, PT_ERA
+- /* 32 byte rollback region */
+- ori t0, t0, 0x1f
+- xori t0, t0, 0x1f
++ /* 3 instructions idle interrupt region */
++ ori t0, t0, 0b1100
+ bne t0, t1, 1f
+ LONG_S t0, sp, PT_ERA
+ 1: move a0, sp
+diff --git a/arch/loongarch/kernel/idle.c b/arch/loongarch/kernel/idle.c
+index 0b5dd2faeb90b8..54b247d8cdb695 100644
+--- a/arch/loongarch/kernel/idle.c
++++ b/arch/loongarch/kernel/idle.c
+@@ -11,7 +11,6 @@
+
+ void __cpuidle arch_cpu_idle(void)
+ {
+- raw_local_irq_enable();
+- __arch_cpu_idle(); /* idle instruction needs irq enabled */
++ __arch_cpu_idle();
+ raw_local_irq_disable();
+ }
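
The rewritten sequence closes a classic idle race: an interrupt that fires between enabling IRQs and executing `idle` could set TIF_NEED_RESCHED and still be slept through. Instead of the old 32-byte rollback region, handle_vint now rounds a saved PC that lands inside the three-instruction window forward past the idle instruction (the `ori t0, t0, 0b1100`). In hedged pseudo-C, the window looks like:

/* Pseudo-C sketch of the hazard window; the real fix is in assembly. */
void local_irq_enable(void);
void local_irq_disable(void);
void wait_for_interrupt(void);	/* stands in for the LoongArch `idle 0` */

void idle_sketch(void)
{
	local_irq_enable();	/* an IRQ may land exactly here ...         */
	wait_for_interrupt();	/* ... so the handler advances the saved PC */
				/* past this point instead of resuming into */
				/* the idle instruction.                    */
	local_irq_disable();
}
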
+diff --git a/arch/loongarch/kernel/reset.c b/arch/loongarch/kernel/reset.c
+index 1ef8c63835351b..de8fa5a8a825cd 100644
+--- a/arch/loongarch/kernel/reset.c
++++ b/arch/loongarch/kernel/reset.c
+@@ -33,7 +33,7 @@ void machine_halt(void)
+ console_flush_on_panic(CONSOLE_FLUSH_PENDING);
+
+ while (true) {
+- __arch_cpu_idle();
++ __asm__ __volatile__("idle 0" : : : "memory");
+ }
+ }
+
+@@ -53,7 +53,7 @@ void machine_power_off(void)
+ #endif
+
+ while (true) {
+- __arch_cpu_idle();
++ __asm__ __volatile__("idle 0" : : : "memory");
+ }
+ }
+
+@@ -74,6 +74,6 @@ void machine_restart(char *command)
+ acpi_reboot();
+
+ while (true) {
+- __arch_cpu_idle();
++ __asm__ __volatile__("idle 0" : : : "memory");
+ }
+ }
+diff --git a/arch/loongarch/kvm/main.c b/arch/loongarch/kvm/main.c
+index 27e9b94c0a0b6e..7e8f5d6829ef0c 100644
+--- a/arch/loongarch/kvm/main.c
++++ b/arch/loongarch/kvm/main.c
+@@ -283,9 +283,9 @@ int kvm_arch_enable_virtualization_cpu(void)
+ * TOE=0: Trap on Exception.
+ * TIT=0: Trap on Timer.
+ */
+- if (env & CSR_GCFG_GCIP_ALL)
++ if (env & CSR_GCFG_GCIP_SECURE)
+ gcfg |= CSR_GCFG_GCI_SECURE;
+- if (env & CSR_GCFG_MATC_ROOT)
++ if (env & CSR_GCFG_MATP_ROOT)
+ gcfg |= CSR_GCFG_MATC_ROOT;
+
+ write_csr_gcfg(gcfg);
+diff --git a/arch/loongarch/lib/csum.c b/arch/loongarch/lib/csum.c
+index a5e84b403c3b34..df309ae4045dee 100644
+--- a/arch/loongarch/lib/csum.c
++++ b/arch/loongarch/lib/csum.c
+@@ -25,7 +25,7 @@ unsigned int __no_sanitize_address do_csum(const unsigned char *buff, int len)
+ const u64 *ptr;
+ u64 data, sum64 = 0;
+
+- if (unlikely(len == 0))
++ if (unlikely(len <= 0))
+ return 0;
+
+ offset = (unsigned long)buff & 7;
+diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c
+index 56a786ca7354b9..c3854682934557 100644
+--- a/arch/s390/pci/pci_bus.c
++++ b/arch/s390/pci/pci_bus.c
+@@ -331,6 +331,17 @@ static int zpci_bus_add_device(struct zpci_bus *zbus, struct zpci_dev *zdev)
+ return rc;
+ }
+
++static bool zpci_bus_is_isolated_vf(struct zpci_bus *zbus, struct zpci_dev *zdev)
++{
++ struct pci_dev *pdev;
++
++ pdev = zpci_iov_find_parent_pf(zbus, zdev);
++ if (!pdev)
++ return true;
++ pci_dev_put(pdev);
++ return false;
++}
++
+ int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops)
+ {
+ bool topo_is_tid = zdev->tid_avail;
+@@ -345,6 +356,15 @@ int zpci_bus_device_register(struct zpci_dev *zdev, struct pci_ops *ops)
+
+ topo = topo_is_tid ? zdev->tid : zdev->pchid;
+ zbus = zpci_bus_get(topo, topo_is_tid);
++ /*
++ * An isolated VF gets its own domain/bus even if there exists
++ * a matching domain/bus already
++ */
++ if (zbus && zpci_bus_is_isolated_vf(zbus, zdev)) {
++ zpci_bus_put(zbus);
++ zbus = NULL;
++ }
++
+ if (!zbus) {
+ zbus = zpci_bus_alloc(topo, topo_is_tid);
+ if (!zbus)
+diff --git a/arch/s390/pci/pci_iov.c b/arch/s390/pci/pci_iov.c
+index ead062bf2b41cc..191e56a623f62c 100644
+--- a/arch/s390/pci/pci_iov.c
++++ b/arch/s390/pci/pci_iov.c
+@@ -60,18 +60,35 @@ static int zpci_iov_link_virtfn(struct pci_dev *pdev, struct pci_dev *virtfn, in
+ return 0;
+ }
+
+-int zpci_iov_setup_virtfn(struct zpci_bus *zbus, struct pci_dev *virtfn, int vfn)
++/**
++ * zpci_iov_find_parent_pf - Find the parent PF, if any, of the given function
++ * @zbus: The bus that the PCI function is on, or would be added on
++ * @zdev: The PCI function
++ *
++ * Finds the parent PF, if it exists and is configured, of the given PCI function
++ * and increments its refcount. The PF is searched for on the provided bus so the
++ * caller has to ensure that this is the correct bus to search. This function may
++ * be used before adding the PCI function to a zbus.
++ *
++ * Return: Pointer to the struct pci_dev of the parent PF or NULL if it is not
++ * found. If the function is not a VF or has no RequesterID information,
++ * NULL is returned as well.
++ */
++struct pci_dev *zpci_iov_find_parent_pf(struct zpci_bus *zbus, struct zpci_dev *zdev)
+ {
+- int i, cand_devfn;
+- struct zpci_dev *zdev;
++ int i, vfid, devfn, cand_devfn;
+ struct pci_dev *pdev;
+- int vfid = vfn - 1; /* Linux' vfid's start at 0 vfn at 1*/
+- int rc = 0;
+
+ if (!zbus->multifunction)
+- return 0;
+-
+- /* If the parent PF for the given VF is also configured in the
++ return NULL;
++ /* Non-VFs and VFs without RID available don't have a parent */
++ if (!zdev->vfn || !zdev->rid_available)
++ return NULL;
++ /* Linux vfid starts at 0 vfn at 1 */
++ vfid = zdev->vfn - 1;
++ devfn = zdev->rid & ZPCI_RID_MASK_DEVFN;
++ /*
++ * If the parent PF for the given VF is also configured in the
+ * instance, it must be on the same zbus.
+ * We can then identify the parent PF by checking what
+ * devfn the VF would have if it belonged to that PF using the PF's
+@@ -85,15 +102,26 @@ int zpci_iov_setup_virtfn(struct zpci_bus *zbus, struct pci_dev *virtfn, int vfn
+ if (!pdev)
+ continue;
+ cand_devfn = pci_iov_virtfn_devfn(pdev, vfid);
+- if (cand_devfn == virtfn->devfn) {
+- rc = zpci_iov_link_virtfn(pdev, virtfn, vfid);
+- /* balance pci_get_slot() */
+- pci_dev_put(pdev);
+- break;
+- }
++ if (cand_devfn == devfn)
++ return pdev;
+ /* balance pci_get_slot() */
+ pci_dev_put(pdev);
+ }
+ }
++ return NULL;
++}
++
++int zpci_iov_setup_virtfn(struct zpci_bus *zbus, struct pci_dev *virtfn, int vfn)
++{
++ struct zpci_dev *zdev = to_zpci(virtfn);
++ struct pci_dev *pdev_pf;
++ int rc = 0;
++
++ pdev_pf = zpci_iov_find_parent_pf(zbus, zdev);
++ if (pdev_pf) {
++ /* Linux' vfids start at 0 while zdev->vfn starts at 1 */
++ rc = zpci_iov_link_virtfn(pdev_pf, virtfn, zdev->vfn - 1);
++ pci_dev_put(pdev_pf);
++ }
+ return rc;
+ }
+diff --git a/arch/s390/pci/pci_iov.h b/arch/s390/pci/pci_iov.h
+index b2c828003bad0a..05df728f980ca4 100644
+--- a/arch/s390/pci/pci_iov.h
++++ b/arch/s390/pci/pci_iov.h
+@@ -17,6 +17,8 @@ void zpci_iov_map_resources(struct pci_dev *pdev);
+
+ int zpci_iov_setup_virtfn(struct zpci_bus *zbus, struct pci_dev *virtfn, int vfn);
+
++struct pci_dev *zpci_iov_find_parent_pf(struct zpci_bus *zbus, struct zpci_dev *zdev);
++
+ #else /* CONFIG_PCI_IOV */
+ static inline void zpci_iov_remove_virtfn(struct pci_dev *pdev, int vfn) {}
+
+@@ -26,5 +28,10 @@ static inline int zpci_iov_setup_virtfn(struct zpci_bus *zbus, struct pci_dev *v
+ {
+ return 0;
+ }
++
++static inline struct pci_dev *zpci_iov_find_parent_pf(struct zpci_bus *zbus, struct zpci_dev *zdev)
++{
++ return NULL;
++}
+ #endif /* CONFIG_PCI_IOV */
+ #endif /* __S390_PCI_IOV_h */
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 171be04eca1f5d..1b0c2397d65753 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2582,7 +2582,8 @@ config MITIGATION_IBPB_ENTRY
+ depends on CPU_SUP_AMD && X86_64
+ default y
+ help
+- Compile the kernel with support for the retbleed=ibpb mitigation.
++ Compile the kernel with support for the retbleed=ibpb and
++ spec_rstack_overflow={ibpb,ibpb-vmexit} mitigations.
+
+ config MITIGATION_IBRS_ENTRY
+ bool "Enable IBRS on kernel entry"
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index f558be868a50b6..f5bf400f6a2833 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -4865,20 +4865,22 @@ static inline bool intel_pmu_broken_perf_cap(void)
+
+ static void update_pmu_cap(struct x86_hybrid_pmu *pmu)
+ {
+- unsigned int sub_bitmaps, eax, ebx, ecx, edx;
++ unsigned int cntr, fixed_cntr, ecx, edx;
++ union cpuid35_eax eax;
++ union cpuid35_ebx ebx;
+
+- cpuid(ARCH_PERFMON_EXT_LEAF, &sub_bitmaps, &ebx, &ecx, &edx);
++ cpuid(ARCH_PERFMON_EXT_LEAF, &eax.full, &ebx.full, &ecx, &edx);
+
+- if (ebx & ARCH_PERFMON_EXT_UMASK2)
++ if (ebx.split.umask2)
+ pmu->config_mask |= ARCH_PERFMON_EVENTSEL_UMASK2;
+- if (ebx & ARCH_PERFMON_EXT_EQ)
++ if (ebx.split.eq)
+ pmu->config_mask |= ARCH_PERFMON_EVENTSEL_EQ;
+
+- if (sub_bitmaps & ARCH_PERFMON_NUM_COUNTER_LEAF_BIT) {
++ if (eax.split.cntr_subleaf) {
+ cpuid_count(ARCH_PERFMON_EXT_LEAF, ARCH_PERFMON_NUM_COUNTER_LEAF,
+- &eax, &ebx, &ecx, &edx);
+- pmu->cntr_mask64 = eax;
+- pmu->fixed_cntr_mask64 = ebx;
++ &cntr, &fixed_cntr, &ecx, &edx);
++ pmu->cntr_mask64 = cntr;
++ pmu->fixed_cntr_mask64 = fixed_cntr;
+ }
+
+ if (!intel_pmu_broken_perf_cap()) {
+@@ -4901,11 +4903,6 @@ static void intel_pmu_check_hybrid_pmus(struct x86_hybrid_pmu *pmu)
+ else
+ pmu->intel_ctrl &= ~(1ULL << GLOBAL_CTRL_EN_PERF_METRICS);
+
+- if (pmu->intel_cap.pebs_output_pt_available)
+- pmu->pmu.capabilities |= PERF_PMU_CAP_AUX_OUTPUT;
+- else
+- pmu->pmu.capabilities &= ~PERF_PMU_CAP_AUX_OUTPUT;
+-
+ intel_pmu_check_event_constraints(pmu->event_constraints,
+ pmu->cntr_mask64,
+ pmu->fixed_cntr_mask64,
+@@ -4974,9 +4971,6 @@ static bool init_hybrid_pmu(int cpu)
+
+ pr_info("%s PMU driver: ", pmu->name);
+
+- if (pmu->intel_cap.pebs_output_pt_available)
+- pr_cont("PEBS-via-PT ");
+-
+ pr_cont("\n");
+
+ x86_pmu_show_pmu_cap(&pmu->pmu);
+@@ -4999,8 +4993,11 @@ static void intel_pmu_cpu_starting(int cpu)
+
+ init_debug_store_on_cpu(cpu);
+ /*
+- * Deal with CPUs that don't clear their LBRs on power-up.
++ * Deal with CPUs that don't clear their LBRs on power-up, and that may
++ * even boot with LBRs enabled.
+ */
++ if (!static_cpu_has(X86_FEATURE_ARCH_LBR) && x86_pmu.lbr_nr)
++ msr_clear_bit(MSR_IA32_DEBUGCTLMSR, DEBUGCTLMSR_LBR_BIT);
+ intel_pmu_lbr_reset();
+
+ cpuc->lbr_sel = NULL;
+@@ -6284,11 +6281,9 @@ static __always_inline int intel_pmu_init_hybrid(enum hybrid_pmu_type pmus)
+ pmu->intel_cap.capabilities = x86_pmu.intel_cap.capabilities;
+ if (pmu->pmu_type & hybrid_small) {
+ pmu->intel_cap.perf_metrics = 0;
+- pmu->intel_cap.pebs_output_pt_available = 1;
+ pmu->mid_ack = true;
+ } else if (pmu->pmu_type & hybrid_big) {
+ pmu->intel_cap.perf_metrics = 1;
+- pmu->intel_cap.pebs_output_pt_available = 0;
+ pmu->late_ack = true;
+ }
+ }
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 19a9fd974e3e1d..b6303b0224531b 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -2523,7 +2523,15 @@ void __init intel_ds_init(void)
+ }
+ pr_cont("PEBS fmt4%c%s, ", pebs_type, pebs_qual);
+
+- if (!is_hybrid() && x86_pmu.intel_cap.pebs_output_pt_available) {
++ /*
++ * The PEBS-via-PT is not supported on hybrid platforms,
++ * because not all CPUs of a hybrid machine support it.
++ * The global x86_pmu.intel_cap, which only contains the
++ * common capabilities, is used to check the availability
++ * of the feature. The per-PMU pebs_output_pt_available
++ * in a hybrid machine should be ignored.
++ */
++ if (x86_pmu.intel_cap.pebs_output_pt_available) {
+ pr_cont("PEBS-via-PT, ");
+ x86_get_pmu(smp_processor_id())->capabilities |= PERF_PMU_CAP_AUX_OUTPUT;
+ }
+diff --git a/arch/x86/include/asm/kvm-x86-ops.h b/arch/x86/include/asm/kvm-x86-ops.h
+index 861d080ed4c6ab..cfb22f8c451a7f 100644
+--- a/arch/x86/include/asm/kvm-x86-ops.h
++++ b/arch/x86/include/asm/kvm-x86-ops.h
+@@ -47,6 +47,7 @@ KVM_X86_OP(set_idt)
+ KVM_X86_OP(get_gdt)
+ KVM_X86_OP(set_gdt)
+ KVM_X86_OP(sync_dirty_debug_regs)
++KVM_X86_OP(set_dr6)
+ KVM_X86_OP(set_dr7)
+ KVM_X86_OP(cache_reg)
+ KVM_X86_OP(get_rflags)
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 5da67e5c00401b..8499b9cb9c8263 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1674,6 +1674,7 @@ struct kvm_x86_ops {
+ void (*get_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+ void (*set_gdt)(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+ void (*sync_dirty_debug_regs)(struct kvm_vcpu *vcpu);
++ void (*set_dr6)(struct kvm_vcpu *vcpu, unsigned long value);
+ void (*set_dr7)(struct kvm_vcpu *vcpu, unsigned long value);
+ void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
+ unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
+diff --git a/arch/x86/include/asm/mmu.h b/arch/x86/include/asm/mmu.h
+index ce4677b8b7356c..3b496cdcb74b3c 100644
+--- a/arch/x86/include/asm/mmu.h
++++ b/arch/x86/include/asm/mmu.h
+@@ -37,6 +37,8 @@ typedef struct {
+ */
+ atomic64_t tlb_gen;
+
++ unsigned long next_trim_cpumask;
++
+ #ifdef CONFIG_MODIFY_LDT_SYSCALL
+ struct rw_semaphore ldt_usr_sem;
+ struct ldt_struct *ldt;
+diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
+index 2886cb668d7fae..795fdd53bd0a6d 100644
+--- a/arch/x86/include/asm/mmu_context.h
++++ b/arch/x86/include/asm/mmu_context.h
+@@ -151,6 +151,7 @@ static inline int init_new_context(struct task_struct *tsk,
+
+ mm->context.ctx_id = atomic64_inc_return(&last_mm_ctx_id);
+ atomic64_set(&mm->context.tlb_gen, 0);
++ mm->context.next_trim_cpumask = jiffies + HZ;
+
+ #ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
+ if (cpu_feature_enabled(X86_FEATURE_OSPKE)) {
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index 3ae84c3b8e6dba..61e991507353eb 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -395,7 +395,8 @@
+ #define MSR_IA32_PASID_VALID BIT_ULL(31)
+
+ /* DEBUGCTLMSR bits (others vary by model): */
+-#define DEBUGCTLMSR_LBR (1UL << 0) /* last branch recording */
++#define DEBUGCTLMSR_LBR_BIT 0 /* last branch recording */
++#define DEBUGCTLMSR_LBR (1UL << DEBUGCTLMSR_LBR_BIT)
+ #define DEBUGCTLMSR_BTF_SHIFT 1
+ #define DEBUGCTLMSR_BTF (1UL << 1) /* single-step on branches */
+ #define DEBUGCTLMSR_BUS_LOCK_DETECT (1UL << 2)
+diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
+index 91b73571412f16..7505bb5d260ab4 100644
+--- a/arch/x86/include/asm/perf_event.h
++++ b/arch/x86/include/asm/perf_event.h
+@@ -187,11 +187,33 @@ union cpuid10_edx {
+ * detection/enumeration details:
+ */
+ #define ARCH_PERFMON_EXT_LEAF 0x00000023
+-#define ARCH_PERFMON_EXT_UMASK2 0x1
+-#define ARCH_PERFMON_EXT_EQ 0x2
+-#define ARCH_PERFMON_NUM_COUNTER_LEAF_BIT 0x1
+ #define ARCH_PERFMON_NUM_COUNTER_LEAF 0x1
+
++union cpuid35_eax {
++ struct {
++ unsigned int leaf0:1;
++ /* Counters Sub-Leaf */
++ unsigned int cntr_subleaf:1;
++ /* Auto Counter Reload Sub-Leaf */
++ unsigned int acr_subleaf:1;
++ /* Events Sub-Leaf */
++ unsigned int events_subleaf:1;
++ unsigned int reserved:28;
++ } split;
++ unsigned int full;
++};
++
++union cpuid35_ebx {
++ struct {
++ /* UnitMask2 Supported */
++ unsigned int umask2:1;
++ /* EQ-bit Supported */
++ unsigned int eq:1;
++ unsigned int reserved:30;
++ } split;
++ unsigned int full;
++};
++
+ /*
+ * Intel Architectural LBR CPUID detection/enumeration details:
+ */
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 69e79fff41b800..02fc2aa06e9e0e 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -222,6 +222,7 @@ struct flush_tlb_info {
+ unsigned int initiating_cpu;
+ u8 stride_shift;
+ u8 freed_tables;
++ u8 trim_cpumask;
+ };
+
+ void flush_tlb_local(void);
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 47a01d4028f60e..5fba44a4f988c0 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -1115,6 +1115,8 @@ static void __init retbleed_select_mitigation(void)
+
+ case RETBLEED_MITIGATION_IBPB:
+ setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
++ setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
++ mitigate_smt = true;
+
+ /*
+ * IBPB on entry already obviates the need for
+@@ -1124,9 +1126,6 @@ static void __init retbleed_select_mitigation(void)
+ setup_clear_cpu_cap(X86_FEATURE_UNRET);
+ setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
+
+- setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+- mitigate_smt = true;
+-
+ /*
+ * There is no need for RSB filling: entry_ibpb() ensures
+ * all predictions, including the RSB, are invalidated,
+@@ -2643,6 +2642,7 @@ static void __init srso_select_mitigation(void)
+ if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
+ if (has_microcode) {
+ setup_force_cpu_cap(X86_FEATURE_ENTRY_IBPB);
++ setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+ srso_mitigation = SRSO_MITIGATION_IBPB;
+
+ /*
+@@ -2652,6 +2652,13 @@ static void __init srso_select_mitigation(void)
+ */
+ setup_clear_cpu_cap(X86_FEATURE_UNRET);
+ setup_clear_cpu_cap(X86_FEATURE_RETHUNK);
++
++ /*
++ * There is no need for RSB filling: entry_ibpb() ensures
++ * all predictions, including the RSB, are invalidated,
++ * regardless of IBPB implementation.
++ */
++ setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+ }
+ } else {
+ pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
+@@ -2659,8 +2666,8 @@ static void __init srso_select_mitigation(void)
+ break;
+
+ case SRSO_CMD_IBPB_ON_VMEXIT:
+- if (IS_ENABLED(CONFIG_MITIGATION_SRSO)) {
+- if (!boot_cpu_has(X86_FEATURE_ENTRY_IBPB) && has_microcode) {
++ if (IS_ENABLED(CONFIG_MITIGATION_IBPB_ENTRY)) {
++ if (has_microcode) {
+ setup_force_cpu_cap(X86_FEATURE_IBPB_ON_VMEXIT);
+ srso_mitigation = SRSO_MITIGATION_IBPB_ON_VMEXIT;
+
+@@ -2672,8 +2679,8 @@ static void __init srso_select_mitigation(void)
+ setup_clear_cpu_cap(X86_FEATURE_RSB_VMEXIT);
+ }
+ } else {
+- pr_err("WARNING: kernel not compiled with MITIGATION_SRSO.\n");
+- }
++ pr_err("WARNING: kernel not compiled with MITIGATION_IBPB_ENTRY.\n");
++ }
+ break;
+ default:
+ break;
+diff --git a/arch/x86/kernel/static_call.c b/arch/x86/kernel/static_call.c
+index 9eed0c144dad51..9e51242ed125ee 100644
+--- a/arch/x86/kernel/static_call.c
++++ b/arch/x86/kernel/static_call.c
+@@ -175,7 +175,6 @@ EXPORT_SYMBOL_GPL(arch_static_call_transform);
+ noinstr void __static_call_update_early(void *tramp, void *func)
+ {
+ BUG_ON(system_state != SYSTEM_BOOTING);
+- BUG_ON(!early_boot_irqs_disabled);
+ BUG_ON(static_call_initialized);
+ __text_gen_insn(tramp, JMP32_INSN_OPCODE, tramp, func, JMP32_INSN_SIZE);
+ sync_core();
+diff --git a/arch/x86/kvm/hyperv.c b/arch/x86/kvm/hyperv.c
+index 4f0a94346d0094..44c88537448c74 100644
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -2226,6 +2226,9 @@ static u64 kvm_hv_send_ipi(struct kvm_vcpu *vcpu, struct kvm_hv_hcall *hc)
+ u32 vector;
+ bool all_cpus;
+
++ if (!lapic_in_kernel(vcpu))
++ return HV_STATUS_INVALID_HYPERCALL_INPUT;
++
+ if (hc->code == HVCALL_SEND_IPI) {
+ if (!hc->fast) {
+ if (unlikely(kvm_read_guest(kvm, hc->ingpa, &send_ipi,
+@@ -2852,7 +2855,8 @@ int kvm_get_hv_cpuid(struct kvm_vcpu *vcpu, struct kvm_cpuid2 *cpuid,
+ ent->eax |= HV_X64_REMOTE_TLB_FLUSH_RECOMMENDED;
+ ent->eax |= HV_X64_APIC_ACCESS_RECOMMENDED;
+ ent->eax |= HV_X64_RELAXED_TIMING_RECOMMENDED;
+- ent->eax |= HV_X64_CLUSTER_IPI_RECOMMENDED;
++ if (!vcpu || lapic_in_kernel(vcpu))
++ ent->eax |= HV_X64_CLUSTER_IPI_RECOMMENDED;
+ ent->eax |= HV_X64_EX_PROCESSOR_MASKS_RECOMMENDED;
+ if (evmcs_ver)
+ ent->eax |= HV_X64_ENLIGHTENED_VMCS_RECOMMENDED;
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 9dd3796d075a56..19c96278ba755d 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -5591,7 +5591,7 @@ void kvm_init_shadow_npt_mmu(struct kvm_vcpu *vcpu, unsigned long cr0,
+ union kvm_mmu_page_role root_role;
+
+ /* NPT requires CR0.PG=1. */
+- WARN_ON_ONCE(cpu_role.base.direct);
++ WARN_ON_ONCE(cpu_role.base.direct || !cpu_role.base.guest_mode);
+
+ root_role = cpu_role.base;
+ root_role.level = kvm_mmu_get_tdp_level(vcpu);
+diff --git a/arch/x86/kvm/svm/nested.c b/arch/x86/kvm/svm/nested.c
+index cf84103ce38b97..2dcb9c870d5a22 100644
+--- a/arch/x86/kvm/svm/nested.c
++++ b/arch/x86/kvm/svm/nested.c
+@@ -646,6 +646,11 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
+ u32 pause_count12;
+ u32 pause_thresh12;
+
++ nested_svm_transition_tlb_flush(vcpu);
++
++ /* Enter Guest-Mode */
++ enter_guest_mode(vcpu);
++
+ /*
+ * Filled at exit: exit_code, exit_code_hi, exit_info_1, exit_info_2,
+ * exit_int_info, exit_int_info_err, next_rip, insn_len, insn_bytes.
+@@ -762,11 +767,6 @@ static void nested_vmcb02_prepare_control(struct vcpu_svm *svm,
+ }
+ }
+
+- nested_svm_transition_tlb_flush(vcpu);
+-
+- /* Enter Guest-Mode */
+- enter_guest_mode(vcpu);
+-
+ /*
+ * Merge guest and host intercepts - must be called with vcpu in
+ * guest-mode to take effect.
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index 4543dd6bcab2cb..a7cb7c82b38e39 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -1993,11 +1993,11 @@ static void new_asid(struct vcpu_svm *svm, struct svm_cpu_data *sd)
+ svm->asid = sd->next_asid++;
+ }
+
+-static void svm_set_dr6(struct vcpu_svm *svm, unsigned long value)
++static void svm_set_dr6(struct kvm_vcpu *vcpu, unsigned long value)
+ {
+- struct vmcb *vmcb = svm->vmcb;
++ struct vmcb *vmcb = to_svm(vcpu)->vmcb;
+
+- if (svm->vcpu.arch.guest_state_protected)
++ if (vcpu->arch.guest_state_protected)
+ return;
+
+ if (unlikely(value != vmcb->save.dr6)) {
+@@ -4234,10 +4234,8 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
+ * Run with all-zero DR6 unless needed, so that we can get the exact cause
+ * of a #DB.
+ */
+- if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
+- svm_set_dr6(svm, vcpu->arch.dr6);
+- else
+- svm_set_dr6(svm, DR6_ACTIVE_LOW);
++ if (likely(!(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT)))
++ svm_set_dr6(vcpu, DR6_ACTIVE_LOW);
+
+ clgi();
+ kvm_load_guest_xsave_state(vcpu);
+@@ -5033,6 +5031,7 @@ static struct kvm_x86_ops svm_x86_ops __initdata = {
+ .set_idt = svm_set_idt,
+ .get_gdt = svm_get_gdt,
+ .set_gdt = svm_set_gdt,
++ .set_dr6 = svm_set_dr6,
+ .set_dr7 = svm_set_dr7,
+ .sync_dirty_debug_regs = svm_sync_dirty_debug_regs,
+ .cache_reg = svm_cache_reg,
+diff --git a/arch/x86/kvm/vmx/main.c b/arch/x86/kvm/vmx/main.c
+index 7668e2fb8043ef..47476fcc179a52 100644
+--- a/arch/x86/kvm/vmx/main.c
++++ b/arch/x86/kvm/vmx/main.c
+@@ -60,6 +60,7 @@ struct kvm_x86_ops vt_x86_ops __initdata = {
+ .set_idt = vmx_set_idt,
+ .get_gdt = vmx_get_gdt,
+ .set_gdt = vmx_set_gdt,
++ .set_dr6 = vmx_set_dr6,
+ .set_dr7 = vmx_set_dr7,
+ .sync_dirty_debug_regs = vmx_sync_dirty_debug_regs,
+ .cache_reg = vmx_cache_reg,
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 968ddf71405446..f06d443ec3c68d 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -5631,6 +5631,12 @@ void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu)
+ set_debugreg(DR6_RESERVED, 6);
+ }
+
++void vmx_set_dr6(struct kvm_vcpu *vcpu, unsigned long val)
++{
++ lockdep_assert_irqs_disabled();
++ set_debugreg(vcpu->arch.dr6, 6);
++}
++
+ void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
+ {
+ vmcs_writel(GUEST_DR7, val);
+@@ -7392,10 +7398,6 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
+ vmx->loaded_vmcs->host_state.cr4 = cr4;
+ }
+
+- /* When KVM_DEBUGREG_WONT_EXIT, dr6 is accessible in guest. */
+- if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
+- set_debugreg(vcpu->arch.dr6, 6);
+-
+ /* When single-stepping over STI and MOV SS, we must clear the
+ * corresponding interruptibility bits in the guest state. Otherwise
+ * vmentry fails as it then expects bit 14 (BS) in pending debug
+diff --git a/arch/x86/kvm/vmx/x86_ops.h b/arch/x86/kvm/vmx/x86_ops.h
+index 48dc76bf0ec03a..4aba200f435d42 100644
+--- a/arch/x86/kvm/vmx/x86_ops.h
++++ b/arch/x86/kvm/vmx/x86_ops.h
+@@ -74,6 +74,7 @@ void vmx_get_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+ void vmx_set_idt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+ void vmx_get_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
+ void vmx_set_gdt(struct kvm_vcpu *vcpu, struct desc_ptr *dt);
++void vmx_set_dr6(struct kvm_vcpu *vcpu, unsigned long val);
+ void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val);
+ void vmx_sync_dirty_debug_regs(struct kvm_vcpu *vcpu);
+ void vmx_cache_reg(struct kvm_vcpu *vcpu, enum kvm_reg reg);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index d760b19d1e513e..0846e3af5f6c5a 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10968,6 +10968,9 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ set_debugreg(vcpu->arch.eff_db[1], 1);
+ set_debugreg(vcpu->arch.eff_db[2], 2);
+ set_debugreg(vcpu->arch.eff_db[3], 3);
++ /* When KVM_DEBUGREG_WONT_EXIT, dr6 is accessible in guest. */
++ if (unlikely(vcpu->arch.switch_db_regs & KVM_DEBUGREG_WONT_EXIT))
++ kvm_x86_call(set_dr6)(vcpu, vcpu->arch.dr6);
+ } else if (unlikely(hw_breakpoint_active())) {
+ set_debugreg(0, 7);
+ }
+diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
+index b0678d59ebdb4a..00ffa74d0dd0bf 100644
+--- a/arch/x86/mm/tlb.c
++++ b/arch/x86/mm/tlb.c
+@@ -893,9 +893,36 @@ static void flush_tlb_func(void *info)
+ nr_invalidate);
+ }
+
+-static bool tlb_is_not_lazy(int cpu, void *data)
++static bool should_flush_tlb(int cpu, void *data)
+ {
+- return !per_cpu(cpu_tlbstate_shared.is_lazy, cpu);
++ struct flush_tlb_info *info = data;
++
++ /* Lazy TLB will get flushed at the next context switch. */
++ if (per_cpu(cpu_tlbstate_shared.is_lazy, cpu))
++ return false;
++
++ /* No mm means kernel memory flush. */
++ if (!info->mm)
++ return true;
++
++ /* The target mm is loaded, and the CPU is not lazy. */
++ if (per_cpu(cpu_tlbstate.loaded_mm, cpu) == info->mm)
++ return true;
++
++ /* In cpumask, but not the loaded mm? Periodically remove by flushing. */
++ if (info->trim_cpumask)
++ return true;
++
++ return false;
++}
++
++static bool should_trim_cpumask(struct mm_struct *mm)
++{
++ if (time_after(jiffies, READ_ONCE(mm->context.next_trim_cpumask))) {
++ WRITE_ONCE(mm->context.next_trim_cpumask, jiffies + HZ);
++ return true;
++ }
++ return false;
+ }
+
+ DEFINE_PER_CPU_SHARED_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);
+@@ -929,7 +956,7 @@ STATIC_NOPV void native_flush_tlb_multi(const struct cpumask *cpumask,
+ if (info->freed_tables)
+ on_each_cpu_mask(cpumask, flush_tlb_func, (void *)info, true);
+ else
+- on_each_cpu_cond_mask(tlb_is_not_lazy, flush_tlb_func,
++ on_each_cpu_cond_mask(should_flush_tlb, flush_tlb_func,
+ (void *)info, 1, cpumask);
+ }
+
+@@ -980,6 +1007,7 @@ static struct flush_tlb_info *get_flush_tlb_info(struct mm_struct *mm,
+ info->freed_tables = freed_tables;
+ info->new_tlb_gen = new_tlb_gen;
+ info->initiating_cpu = smp_processor_id();
++ info->trim_cpumask = 0;
+
+ return info;
+ }
+@@ -1022,6 +1050,7 @@ void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
+ * flush_tlb_func_local() directly in this case.
+ */
+ if (cpumask_any_but(mm_cpumask(mm), cpu) < nr_cpu_ids) {
++ info->trim_cpumask = should_trim_cpumask(mm);
+ flush_tlb_multi(mm_cpumask(mm), info);
+ } else if (mm == this_cpu_read(cpu_tlbstate.loaded_mm)) {
+ lockdep_assert_irqs_enabled();
+diff --git a/arch/x86/xen/mmu_pv.c b/arch/x86/xen/mmu_pv.c
+index 55a4996d0c04f1..d078de2c952b37 100644
+--- a/arch/x86/xen/mmu_pv.c
++++ b/arch/x86/xen/mmu_pv.c
+@@ -111,6 +111,51 @@ static pud_t level3_user_vsyscall[PTRS_PER_PUD] __page_aligned_bss;
+ */
+ static DEFINE_SPINLOCK(xen_reservation_lock);
+
++/* Protected by xen_reservation_lock. */
++#define MIN_CONTIG_ORDER 9 /* 2MB */
++static unsigned int discontig_frames_order = MIN_CONTIG_ORDER;
++static unsigned long discontig_frames_early[1UL << MIN_CONTIG_ORDER] __initdata;
++static unsigned long *discontig_frames __refdata = discontig_frames_early;
++static bool discontig_frames_dyn;
++
++static int alloc_discontig_frames(unsigned int order)
++{
++ unsigned long *new_array, *old_array;
++ unsigned int old_order;
++ unsigned long flags;
++
++ BUG_ON(order < MIN_CONTIG_ORDER);
++ BUILD_BUG_ON(sizeof(discontig_frames_early) != PAGE_SIZE);
++
++ new_array = (unsigned long *)__get_free_pages(GFP_KERNEL,
++ order - MIN_CONTIG_ORDER);
++ if (!new_array)
++ return -ENOMEM;
++
++ spin_lock_irqsave(&xen_reservation_lock, flags);
++
++ old_order = discontig_frames_order;
++
++ if (order > discontig_frames_order || !discontig_frames_dyn) {
++ if (!discontig_frames_dyn)
++ old_array = NULL;
++ else
++ old_array = discontig_frames;
++
++ discontig_frames = new_array;
++ discontig_frames_order = order;
++ discontig_frames_dyn = true;
++ } else {
++ old_array = new_array;
++ }
++
++ spin_unlock_irqrestore(&xen_reservation_lock, flags);
++
++ free_pages((unsigned long)old_array, old_order - MIN_CONTIG_ORDER);
++
++ return 0;
++}
++
+ /*
+ * Note about cr3 (pagetable base) values:
+ *
+@@ -781,6 +826,7 @@ void xen_mm_pin_all(void)
+ {
+ struct page *page;
+
++ spin_lock(&init_mm.page_table_lock);
+ spin_lock(&pgd_lock);
+
+ list_for_each_entry(page, &pgd_list, lru) {
+@@ -791,6 +837,7 @@ void xen_mm_pin_all(void)
+ }
+
+ spin_unlock(&pgd_lock);
++ spin_unlock(&init_mm.page_table_lock);
+ }
+
+ static void __init xen_mark_pinned(struct mm_struct *mm, struct page *page,
+@@ -812,6 +859,9 @@ static void __init xen_after_bootmem(void)
+ SetPagePinned(virt_to_page(level3_user_vsyscall));
+ #endif
+ xen_pgd_walk(&init_mm, xen_mark_pinned, FIXADDR_TOP);
++
++ if (alloc_discontig_frames(MIN_CONTIG_ORDER))
++ BUG();
+ }
+
+ static void xen_unpin_page(struct mm_struct *mm, struct page *page,
+@@ -887,6 +937,7 @@ void xen_mm_unpin_all(void)
+ {
+ struct page *page;
+
++ spin_lock(&init_mm.page_table_lock);
+ spin_lock(&pgd_lock);
+
+ list_for_each_entry(page, &pgd_list, lru) {
+@@ -898,6 +949,7 @@ void xen_mm_unpin_all(void)
+ }
+
+ spin_unlock(&pgd_lock);
++ spin_unlock(&init_mm.page_table_lock);
+ }
+
+ static void xen_enter_mmap(struct mm_struct *mm)
+@@ -2199,10 +2251,6 @@ void __init xen_init_mmu_ops(void)
+ memset(dummy_mapping, 0xff, PAGE_SIZE);
+ }
+
+-/* Protected by xen_reservation_lock. */
+-#define MAX_CONTIG_ORDER 9 /* 2MB */
+-static unsigned long discontig_frames[1<<MAX_CONTIG_ORDER];
+-
+ #define VOID_PTE (mfn_pte(0, __pgprot(0)))
+ static void xen_zap_pfn_range(unsigned long vaddr, unsigned int order,
+ unsigned long *in_frames,
+@@ -2319,18 +2367,25 @@ int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
+ unsigned int address_bits,
+ dma_addr_t *dma_handle)
+ {
+- unsigned long *in_frames = discontig_frames, out_frame;
++ unsigned long *in_frames, out_frame;
+ unsigned long flags;
+ int success;
+ unsigned long vstart = (unsigned long)phys_to_virt(pstart);
+
+- if (unlikely(order > MAX_CONTIG_ORDER))
+- return -ENOMEM;
++ if (unlikely(order > discontig_frames_order)) {
++ if (!discontig_frames_dyn)
++ return -ENOMEM;
++
++ if (alloc_discontig_frames(order))
++ return -ENOMEM;
++ }
+
+ memset((void *) vstart, 0, PAGE_SIZE << order);
+
+ spin_lock_irqsave(&xen_reservation_lock, flags);
+
++ in_frames = discontig_frames;
++
+ /* 1. Zap current PTEs, remembering MFNs. */
+ xen_zap_pfn_range(vstart, order, in_frames, NULL);
+
+@@ -2354,12 +2409,12 @@ int xen_create_contiguous_region(phys_addr_t pstart, unsigned int order,
+
+ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
+ {
+- unsigned long *out_frames = discontig_frames, in_frame;
++ unsigned long *out_frames, in_frame;
+ unsigned long flags;
+ int success;
+ unsigned long vstart;
+
+- if (unlikely(order > MAX_CONTIG_ORDER))
++ if (unlikely(order > discontig_frames_order))
+ return;
+
+ vstart = (unsigned long)phys_to_virt(pstart);
+@@ -2367,6 +2422,8 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
+
+ spin_lock_irqsave(&xen_reservation_lock, flags);
+
++ out_frames = discontig_frames;
++
+ /* 1. Find start MFN of contiguous extent. */
+ in_frame = virt_to_mfn((void *)vstart);
+
+diff --git a/block/partitions/mac.c b/block/partitions/mac.c
+index c80183156d6802..b02530d9862970 100644
+--- a/block/partitions/mac.c
++++ b/block/partitions/mac.c
+@@ -53,13 +53,25 @@ int mac_partition(struct parsed_partitions *state)
+ }
+ secsize = be16_to_cpu(md->block_size);
+ put_dev_sector(sect);
++
++ /*
++ * If the "block size" is not a power of 2, things get weird - we might
++ * end up with a partition straddling a sector boundary, so we wouldn't
++ * be able to read a partition entry with read_part_sector().
++ * Real block sizes are probably (?) powers of two, so just require
++ * that.
++ */
++ if (!is_power_of_2(secsize))
++ return -1;
+ datasize = round_down(secsize, 512);
+ data = read_part_sector(state, datasize / 512, &sect);
+ if (!data)
+ return -1;
+ partoffset = secsize % 512;
+- if (partoffset + sizeof(*part) > datasize)
++ if (partoffset + sizeof(*part) > datasize) {
++ put_dev_sector(sect);
+ return -1;
++ }
+ part = (struct mac_partition *) (data + partoffset);
+ if (be16_to_cpu(part->signature) != MAC_PARTITION_MAGIC) {
+ put_dev_sector(sect);
+@@ -112,8 +124,8 @@ int mac_partition(struct parsed_partitions *state)
+ int i, l;
+
+ goodness++;
+- l = strlen(part->name);
+- if (strcmp(part->name, "/") == 0)
++ l = strnlen(part->name, sizeof(part->name));
++ if (strncmp(part->name, "/", sizeof(part->name)) == 0)
+ goodness++;
+ for (i = 0; i <= l - 4; ++i) {
+ if (strncasecmp(part->name + i, "root",
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index cb45ef5240dab6..068c1612660bc0 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -407,6 +407,19 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
+ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
+ },
++ {
++ /* Vexia Edu Atla 10 tablet 5V version */
++ .matches = {
++ /* Having all 3 of these not set is somewhat unique */
++ DMI_MATCH(DMI_SYS_VENDOR, "To be filled by O.E.M."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "To be filled by O.E.M."),
++ DMI_MATCH(DMI_BOARD_NAME, "To be filled by O.E.M."),
++ /* Above strings are too generic, also match on BIOS date */
++ DMI_MATCH(DMI_BIOS_DATE, "05/14/2015"),
++ },
++ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
++ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
++ },
+ {
+ /* Vexia Edu Atla 10 tablet 9V version */
+ .matches = {
+diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c
+index 6981e5f974e9a4..ff7d0b14a6468b 100644
+--- a/drivers/base/regmap/regmap-irq.c
++++ b/drivers/base/regmap/regmap-irq.c
+@@ -909,6 +909,7 @@ int regmap_add_irq_chip_fwnode(struct fwnode_handle *fwnode,
+ kfree(d->wake_buf);
+ kfree(d->mask_buf_def);
+ kfree(d->mask_buf);
++ kfree(d->main_status_buf);
+ kfree(d->status_buf);
+ kfree(d->status_reg_buf);
+ if (d->config_buf) {
+@@ -984,6 +985,7 @@ void regmap_del_irq_chip(int irq, struct regmap_irq_chip_data *d)
+ kfree(d->wake_buf);
+ kfree(d->mask_buf_def);
+ kfree(d->mask_buf);
++ kfree(d->main_status_buf);
+ kfree(d->status_reg_buf);
+ kfree(d->status_buf);
+ if (d->config_buf) {
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index 8bd663f4bac1b7..53f6b4f76bccdd 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -1312,6 +1312,10 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ if (opcode == 0xfc01)
+ btintel_pcie_inject_cmd_complete(hdev, opcode);
+ }
++ /* Firmware raises alive interrupt on HCI_OP_RESET */
++ if (opcode == HCI_OP_RESET)
++ data->gp0_received = false;
++
+ hdev->stat.cmd_tx++;
+ break;
+ case HCI_ACLDATA_PKT:
+@@ -1349,7 +1353,6 @@ static int btintel_pcie_send_frame(struct hci_dev *hdev,
+ opcode, btintel_pcie_alivectxt_state2str(old_ctxt),
+ btintel_pcie_alivectxt_state2str(data->alive_intr_ctxt));
+ if (opcode == HCI_OP_RESET) {
+- data->gp0_received = false;
+ ret = wait_event_timeout(data->gp0_wait_q,
+ data->gp0_received,
+ msecs_to_jiffies(BTINTEL_DEFAULT_INTR_TIMEOUT_MS));
+diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
+index 91d3c3b1c2d3bf..9db5354fdb0271 100644
+--- a/drivers/cpufreq/amd-pstate.c
++++ b/drivers/cpufreq/amd-pstate.c
+@@ -696,12 +696,12 @@ static int amd_pstate_set_boost(struct cpufreq_policy *policy, int state)
+ pr_err("Boost mode is not supported by this processor or SBIOS\n");
+ return -EOPNOTSUPP;
+ }
+- mutex_lock(&amd_pstate_driver_lock);
++ guard(mutex)(&amd_pstate_driver_lock);
++
+ ret = amd_pstate_cpu_boost_update(policy, state);
+ WRITE_ONCE(cpudata->boost_state, !ret ? state : false);
+ policy->boost_enabled = !ret ? state : false;
+ refresh_frequency_limits(policy);
+- mutex_unlock(&amd_pstate_driver_lock);
+
+ return ret;
+ }
+@@ -778,24 +778,28 @@ static void amd_pstate_init_prefcore(struct amd_cpudata *cpudata)
+
+ static void amd_pstate_update_limits(unsigned int cpu)
+ {
+- struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
++ struct cpufreq_policy *policy = NULL;
+ struct amd_cpudata *cpudata;
+ u32 prev_high = 0, cur_high = 0;
+ int ret;
+ bool highest_perf_changed = false;
+
++ if (!amd_pstate_prefcore)
++ return;
++
++ policy = cpufreq_cpu_get(cpu);
+ if (!policy)
+ return;
+
+ cpudata = policy->driver_data;
+
+- if (!amd_pstate_prefcore)
+- return;
++ guard(mutex)(&amd_pstate_driver_lock);
+
+- mutex_lock(&amd_pstate_driver_lock);
+ ret = amd_get_highest_perf(cpu, &cur_high);
+- if (ret)
+- goto free_cpufreq_put;
++ if (ret) {
++ cpufreq_cpu_put(policy);
++ return;
++ }
+
+ prev_high = READ_ONCE(cpudata->prefcore_ranking);
+ highest_perf_changed = (prev_high != cur_high);
+@@ -805,14 +809,11 @@ static void amd_pstate_update_limits(unsigned int cpu)
+ if (cur_high < CPPC_MAX_PERF)
+ sched_set_itmt_core_prio((int)cur_high, cpu);
+ }
+-
+-free_cpufreq_put:
+ cpufreq_cpu_put(policy);
+
+ if (!highest_perf_changed)
+ cpufreq_update_policy(cpu);
+
+- mutex_unlock(&amd_pstate_driver_lock);
+ }
+
+ /*
+@@ -1145,11 +1146,11 @@ static ssize_t store_energy_performance_preference(
+ if (ret < 0)
+ return -EINVAL;
+
+- mutex_lock(&amd_pstate_limits_lock);
++ guard(mutex)(&amd_pstate_limits_lock);
++
+ ret = amd_pstate_set_energy_pref_index(cpudata, ret);
+- mutex_unlock(&amd_pstate_limits_lock);
+
+- return ret ?: count;
++ return ret ? ret : count;
+ }
+
+ static ssize_t show_energy_performance_preference(
+@@ -1297,13 +1298,10 @@ EXPORT_SYMBOL_GPL(amd_pstate_update_status);
+ static ssize_t status_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+- ssize_t ret;
+
+- mutex_lock(&amd_pstate_driver_lock);
+- ret = amd_pstate_show_status(buf);
+- mutex_unlock(&amd_pstate_driver_lock);
++ guard(mutex)(&amd_pstate_driver_lock);
+
+- return ret;
++ return amd_pstate_show_status(buf);
+ }
+
+ static ssize_t status_store(struct device *a, struct device_attribute *b,
+@@ -1312,9 +1310,8 @@ static ssize_t status_store(struct device *a, struct device_attribute *b,
+ char *p = memchr(buf, '\n', count);
+ int ret;
+
+- mutex_lock(&amd_pstate_driver_lock);
++ guard(mutex)(&amd_pstate_driver_lock);
+ ret = amd_pstate_update_status(buf, p ? p - buf : count);
+- mutex_unlock(&amd_pstate_driver_lock);
+
+ return ret < 0 ? ret : count;
+ }
+@@ -1579,24 +1576,17 @@ static int amd_pstate_epp_set_policy(struct cpufreq_policy *policy)
+
+ static void amd_pstate_epp_reenable(struct amd_cpudata *cpudata)
+ {
+- struct cppc_perf_ctrls perf_ctrls;
+- u64 value, max_perf;
++ u64 max_perf;
+ int ret;
+
+ ret = amd_pstate_enable(true);
+ if (ret)
+ pr_err("failed to enable amd pstate during resume, return %d\n", ret);
+
+- value = READ_ONCE(cpudata->cppc_req_cached);
+ max_perf = READ_ONCE(cpudata->highest_perf);
+
+- if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
+- wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+- } else {
+- perf_ctrls.max_perf = max_perf;
+- perf_ctrls.energy_perf = AMD_CPPC_ENERGY_PERF_PREF(cpudata->epp_cached);
+- cppc_set_perf(cpudata->cpu, &perf_ctrls);
+- }
++ amd_pstate_update_perf(cpudata, 0, 0, max_perf, false);
++ amd_pstate_set_epp(cpudata, cpudata->epp_cached);
+ }
+
+ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
+@@ -1605,54 +1595,26 @@ static int amd_pstate_epp_cpu_online(struct cpufreq_policy *policy)
+
+ pr_debug("AMD CPU Core %d going online\n", cpudata->cpu);
+
+- if (cppc_state == AMD_PSTATE_ACTIVE) {
+- amd_pstate_epp_reenable(cpudata);
+- cpudata->suspended = false;
+- }
++ amd_pstate_epp_reenable(cpudata);
++ cpudata->suspended = false;
+
+ return 0;
+ }
+
+-static void amd_pstate_epp_offline(struct cpufreq_policy *policy)
+-{
+- struct amd_cpudata *cpudata = policy->driver_data;
+- struct cppc_perf_ctrls perf_ctrls;
+- int min_perf;
+- u64 value;
+-
+- min_perf = READ_ONCE(cpudata->lowest_perf);
+- value = READ_ONCE(cpudata->cppc_req_cached);
+-
+- mutex_lock(&amd_pstate_limits_lock);
+- if (cpu_feature_enabled(X86_FEATURE_CPPC)) {
+- cpudata->epp_policy = CPUFREQ_POLICY_UNKNOWN;
+-
+- /* Set max perf same as min perf */
+- value &= ~AMD_CPPC_MAX_PERF(~0L);
+- value |= AMD_CPPC_MAX_PERF(min_perf);
+- value &= ~AMD_CPPC_MIN_PERF(~0L);
+- value |= AMD_CPPC_MIN_PERF(min_perf);
+- wrmsrl_on_cpu(cpudata->cpu, MSR_AMD_CPPC_REQ, value);
+- } else {
+- perf_ctrls.desired_perf = 0;
+- perf_ctrls.max_perf = min_perf;
+- perf_ctrls.energy_perf = AMD_CPPC_ENERGY_PERF_PREF(HWP_EPP_BALANCE_POWERSAVE);
+- cppc_set_perf(cpudata->cpu, &perf_ctrls);
+- }
+- mutex_unlock(&amd_pstate_limits_lock);
+-}
+-
+ static int amd_pstate_epp_cpu_offline(struct cpufreq_policy *policy)
+ {
+ struct amd_cpudata *cpudata = policy->driver_data;
+-
+- pr_debug("AMD CPU Core %d going offline\n", cpudata->cpu);
++ int min_perf;
+
+ if (cpudata->suspended)
+ return 0;
+
+- if (cppc_state == AMD_PSTATE_ACTIVE)
+- amd_pstate_epp_offline(policy);
++ min_perf = READ_ONCE(cpudata->lowest_perf);
++
++ guard(mutex)(&amd_pstate_limits_lock);
++
++ amd_pstate_update_perf(cpudata, min_perf, 0, min_perf, false);
++ amd_pstate_set_epp(cpudata, AMD_CPPC_EPP_BALANCE_POWERSAVE);
+
+ return 0;
+ }
+@@ -1689,13 +1651,11 @@ static int amd_pstate_epp_resume(struct cpufreq_policy *policy)
+ struct amd_cpudata *cpudata = policy->driver_data;
+
+ if (cpudata->suspended) {
+- mutex_lock(&amd_pstate_limits_lock);
++ guard(mutex)(&amd_pstate_limits_lock);
+
+ /* enable amd pstate from suspend state*/
+ amd_pstate_epp_reenable(cpudata);
+
+- mutex_unlock(&amd_pstate_limits_lock);
+-
+ cpudata->suspended = false;
+ }
+
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 70490bf2697b16..acabc856fe8a58 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -922,13 +922,15 @@ char * __init efi_md_typeattr_format(char *buf, size_t size,
+ EFI_MEMORY_WB | EFI_MEMORY_UCE | EFI_MEMORY_RO |
+ EFI_MEMORY_WP | EFI_MEMORY_RP | EFI_MEMORY_XP |
+ EFI_MEMORY_NV | EFI_MEMORY_SP | EFI_MEMORY_CPU_CRYPTO |
+- EFI_MEMORY_RUNTIME | EFI_MEMORY_MORE_RELIABLE))
++ EFI_MEMORY_MORE_RELIABLE | EFI_MEMORY_HOT_PLUGGABLE |
++ EFI_MEMORY_RUNTIME))
+ snprintf(pos, size, "|attr=0x%016llx]",
+ (unsigned long long)attr);
+ else
+ snprintf(pos, size,
+- "|%3s|%2s|%2s|%2s|%2s|%2s|%2s|%2s|%2s|%3s|%2s|%2s|%2s|%2s]",
++ "|%3s|%2s|%2s|%2s|%2s|%2s|%2s|%2s|%2s|%2s|%3s|%2s|%2s|%2s|%2s]",
+ attr & EFI_MEMORY_RUNTIME ? "RUN" : "",
++ attr & EFI_MEMORY_HOT_PLUGGABLE ? "HP" : "",
+ attr & EFI_MEMORY_MORE_RELIABLE ? "MR" : "",
+ attr & EFI_MEMORY_CPU_CRYPTO ? "CC" : "",
+ attr & EFI_MEMORY_SP ? "SP" : "",
+diff --git a/drivers/firmware/efi/libstub/randomalloc.c b/drivers/firmware/efi/libstub/randomalloc.c
+index c41e7b2091cdd1..8ad3efb9b1ff16 100644
+--- a/drivers/firmware/efi/libstub/randomalloc.c
++++ b/drivers/firmware/efi/libstub/randomalloc.c
+@@ -25,6 +25,9 @@ static unsigned long get_entry_num_slots(efi_memory_desc_t *md,
+ if (md->type != EFI_CONVENTIONAL_MEMORY)
+ return 0;
+
++ if (md->attribute & EFI_MEMORY_HOT_PLUGGABLE)
++ return 0;
++
+ if (efi_soft_reserve_enabled() &&
+ (md->attribute & EFI_MEMORY_SP))
+ return 0;
+diff --git a/drivers/firmware/efi/libstub/relocate.c b/drivers/firmware/efi/libstub/relocate.c
+index d694bcfa1074e9..bf676dd127a143 100644
+--- a/drivers/firmware/efi/libstub/relocate.c
++++ b/drivers/firmware/efi/libstub/relocate.c
+@@ -53,6 +53,9 @@ efi_status_t efi_low_alloc_above(unsigned long size, unsigned long align,
+ if (desc->type != EFI_CONVENTIONAL_MEMORY)
+ continue;
+
++ if (desc->attribute & EFI_MEMORY_HOT_PLUGGABLE)
++ continue;
++
+ if (efi_soft_reserve_enabled() &&
+ (desc->attribute & EFI_MEMORY_SP))
+ continue;
+diff --git a/drivers/firmware/qcom/qcom_scm-smc.c b/drivers/firmware/qcom/qcom_scm-smc.c
+index 2b4c2826f57251..3f10b23ec941b5 100644
+--- a/drivers/firmware/qcom/qcom_scm-smc.c
++++ b/drivers/firmware/qcom/qcom_scm-smc.c
+@@ -173,6 +173,9 @@ int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc,
+ smc.args[i + SCM_SMC_FIRST_REG_IDX] = desc->args[i];
+
+ if (unlikely(arglen > SCM_SMC_N_REG_ARGS)) {
++ if (!mempool)
++ return -EINVAL;
++
+ args_virt = qcom_tzmem_alloc(mempool,
+ SCM_SMC_N_EXT_ARGS * sizeof(u64),
+ flag);
+diff --git a/drivers/gpio/gpio-bcm-kona.c b/drivers/gpio/gpio-bcm-kona.c
+index 5321ef98f4427d..64908f1a5e7f9b 100644
+--- a/drivers/gpio/gpio-bcm-kona.c
++++ b/drivers/gpio/gpio-bcm-kona.c
+@@ -69,6 +69,22 @@ struct bcm_kona_gpio {
+ struct bcm_kona_gpio_bank {
+ int id;
+ int irq;
++ /*
++ * Used to keep track of lock/unlock operations for each GPIO in the
++ * bank.
++ *
++ * All GPIOs are locked by default (see bcm_kona_gpio_reset), and the
++ * unlock count for all GPIOs is 0 by default. Each unlock increments
++ * the counter, and each lock decrements the counter.
++ *
++ * The lock function only locks the GPIO once its unlock counter is
++ * down to 0. This is necessary because the GPIO is unlocked in two
++ * places in this driver: once for requested GPIOs, and once for
++ * requested IRQs. Since it is possible for a GPIO to be requested
++ * as both a GPIO and an IRQ, we need to ensure that we don't lock it
++ * too early.
++ */
++ u8 gpio_unlock_count[GPIO_PER_BANK];
+ /* Used in the interrupt handler */
+ struct bcm_kona_gpio *kona_gpio;
+ };
+@@ -86,14 +102,24 @@ static void bcm_kona_gpio_lock_gpio(struct bcm_kona_gpio *kona_gpio,
+ u32 val;
+ unsigned long flags;
+ int bank_id = GPIO_BANK(gpio);
++ int bit = GPIO_BIT(gpio);
++ struct bcm_kona_gpio_bank *bank = &kona_gpio->banks[bank_id];
+
+- raw_spin_lock_irqsave(&kona_gpio->lock, flags);
++ if (bank->gpio_unlock_count[bit] == 0) {
++ dev_err(kona_gpio->gpio_chip.parent,
++ "Unbalanced locks for GPIO %u\n", gpio);
++ return;
++ }
+
+- val = readl(kona_gpio->reg_base + GPIO_PWD_STATUS(bank_id));
+- val |= BIT(gpio);
+- bcm_kona_gpio_write_lock_regs(kona_gpio->reg_base, bank_id, val);
++ if (--bank->gpio_unlock_count[bit] == 0) {
++ raw_spin_lock_irqsave(&kona_gpio->lock, flags);
+
+- raw_spin_unlock_irqrestore(&kona_gpio->lock, flags);
++ val = readl(kona_gpio->reg_base + GPIO_PWD_STATUS(bank_id));
++ val |= BIT(bit);
++ bcm_kona_gpio_write_lock_regs(kona_gpio->reg_base, bank_id, val);
++
++ raw_spin_unlock_irqrestore(&kona_gpio->lock, flags);
++ }
+ }
+
+ static void bcm_kona_gpio_unlock_gpio(struct bcm_kona_gpio *kona_gpio,
+@@ -102,14 +128,20 @@ static void bcm_kona_gpio_unlock_gpio(struct bcm_kona_gpio *kona_gpio,
+ u32 val;
+ unsigned long flags;
+ int bank_id = GPIO_BANK(gpio);
++ int bit = GPIO_BIT(gpio);
++ struct bcm_kona_gpio_bank *bank = &kona_gpio->banks[bank_id];
+
+- raw_spin_lock_irqsave(&kona_gpio->lock, flags);
++ if (bank->gpio_unlock_count[bit] == 0) {
++ raw_spin_lock_irqsave(&kona_gpio->lock, flags);
+
+- val = readl(kona_gpio->reg_base + GPIO_PWD_STATUS(bank_id));
+- val &= ~BIT(gpio);
+- bcm_kona_gpio_write_lock_regs(kona_gpio->reg_base, bank_id, val);
++ val = readl(kona_gpio->reg_base + GPIO_PWD_STATUS(bank_id));
++ val &= ~BIT(bit);
++ bcm_kona_gpio_write_lock_regs(kona_gpio->reg_base, bank_id, val);
+
+- raw_spin_unlock_irqrestore(&kona_gpio->lock, flags);
++ raw_spin_unlock_irqrestore(&kona_gpio->lock, flags);
++ }
++
++ ++bank->gpio_unlock_count[bit];
+ }
+
+ static int bcm_kona_gpio_get_dir(struct gpio_chip *chip, unsigned gpio)
+@@ -360,6 +392,7 @@ static void bcm_kona_gpio_irq_mask(struct irq_data *d)
+
+ kona_gpio = irq_data_get_irq_chip_data(d);
+ reg_base = kona_gpio->reg_base;
++
+ raw_spin_lock_irqsave(&kona_gpio->lock, flags);
+
+ val = readl(reg_base + GPIO_INT_MASK(bank_id));
+@@ -382,6 +415,7 @@ static void bcm_kona_gpio_irq_unmask(struct irq_data *d)
+
+ kona_gpio = irq_data_get_irq_chip_data(d);
+ reg_base = kona_gpio->reg_base;
++
+ raw_spin_lock_irqsave(&kona_gpio->lock, flags);
+
+ val = readl(reg_base + GPIO_INT_MSKCLR(bank_id));
+@@ -477,15 +511,26 @@ static void bcm_kona_gpio_irq_handler(struct irq_desc *desc)
+ static int bcm_kona_gpio_irq_reqres(struct irq_data *d)
+ {
+ struct bcm_kona_gpio *kona_gpio = irq_data_get_irq_chip_data(d);
++ unsigned int gpio = d->hwirq;
+
+- return gpiochip_reqres_irq(&kona_gpio->gpio_chip, d->hwirq);
++ /*
++ * We need to unlock the GPIO before any other operations are performed
++ * on the relevant GPIO configuration registers
++ */
++ bcm_kona_gpio_unlock_gpio(kona_gpio, gpio);
++
++ return gpiochip_reqres_irq(&kona_gpio->gpio_chip, gpio);
+ }
+
+ static void bcm_kona_gpio_irq_relres(struct irq_data *d)
+ {
+ struct bcm_kona_gpio *kona_gpio = irq_data_get_irq_chip_data(d);
++ unsigned int gpio = d->hwirq;
++
++ /* Once we no longer use it, lock the GPIO again */
++ bcm_kona_gpio_lock_gpio(kona_gpio, gpio);
+
+- gpiochip_relres_irq(&kona_gpio->gpio_chip, d->hwirq);
++ gpiochip_relres_irq(&kona_gpio->gpio_chip, gpio);
+ }
+
+ static struct irq_chip bcm_gpio_irq_chip = {
+@@ -614,7 +659,7 @@ static int bcm_kona_gpio_probe(struct platform_device *pdev)
+ bank->irq = platform_get_irq(pdev, i);
+ bank->kona_gpio = kona_gpio;
+ if (bank->irq < 0) {
+- dev_err(dev, "Couldn't get IRQ for bank %d", i);
++ dev_err(dev, "Couldn't get IRQ for bank %d\n", i);
+ ret = -ENOENT;
+ goto err_irq_domain;
+ }
+diff --git a/drivers/gpio/gpio-stmpe.c b/drivers/gpio/gpio-stmpe.c
+index 75a3633ceddbb8..222279a9d82b2d 100644
+--- a/drivers/gpio/gpio-stmpe.c
++++ b/drivers/gpio/gpio-stmpe.c
+@@ -191,7 +191,7 @@ static void stmpe_gpio_irq_sync_unlock(struct irq_data *d)
+ [REG_IE][CSB] = STMPE_IDX_IEGPIOR_CSB,
+ [REG_IE][MSB] = STMPE_IDX_IEGPIOR_MSB,
+ };
+- int i, j;
++ int ret, i, j;
+
+ /*
+ * STMPE1600: to be able to get IRQ from pins,
+@@ -199,8 +199,16 @@ static void stmpe_gpio_irq_sync_unlock(struct irq_data *d)
+ * GPSR or GPCR registers
+ */
+ if (stmpe->partnum == STMPE1600) {
+- stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_LSB]);
+- stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_CSB]);
++ ret = stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_LSB]);
++ if (ret < 0) {
++ dev_err(stmpe->dev, "Failed to read GPMR_LSB: %d\n", ret);
++ goto err;
++ }
++ ret = stmpe_reg_read(stmpe, stmpe->regs[STMPE_IDX_GPMR_CSB]);
++ if (ret < 0) {
++ dev_err(stmpe->dev, "Failed to read GPMR_CSB: %d\n", ret);
++ goto err;
++ }
+ }
+
+ for (i = 0; i < CACHE_NR_REGS; i++) {
+@@ -222,6 +230,7 @@ static void stmpe_gpio_irq_sync_unlock(struct irq_data *d)
+ }
+ }
+
++err:
+ mutex_unlock(&stmpe_gpio->irq_lock);
+ }
+
+diff --git a/drivers/gpio/gpiolib-acpi.c b/drivers/gpio/gpiolib-acpi.c
+index 78ecd56123a3b6..148b4d1788a219 100644
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -1691,6 +1691,20 @@ static const struct dmi_system_id gpiolib_acpi_quirks[] __initconst = {
+ .ignore_wake = "PNP0C50:00@8",
+ },
+ },
++ {
++ /*
++ * Spurious wakeups from GPIO 11
++ * Found in BIOS 1.04
++ * https://gitlab.freedesktop.org/drm/amd/-/issues/3954
++ */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++ DMI_MATCH(DMI_PRODUCT_FAMILY, "Acer Nitro V 14"),
++ },
++ .driver_data = &(struct acpi_gpiolib_dmi_quirk) {
++ .ignore_interrupt = "AMDI0030:00@11",
++ },
++ },
+ {} /* Terminating entry */
+ };
+
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 44372f8647d51a..1e8f0bdb6ae3b4 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -905,13 +905,13 @@ int gpiochip_get_ngpios(struct gpio_chip *gc, struct device *dev)
+ }
+
+ if (gc->ngpio == 0) {
+- chip_err(gc, "tried to insert a GPIO chip with zero lines\n");
++ dev_err(dev, "tried to insert a GPIO chip with zero lines\n");
+ return -EINVAL;
+ }
+
+ if (gc->ngpio > FASTPATH_NGPIO)
+- chip_warn(gc, "line cnt %u is greater than fast path cnt %u\n",
+- gc->ngpio, FASTPATH_NGPIO);
++ dev_warn(dev, "line cnt %u is greater than fast path cnt %u\n",
++ gc->ngpio, FASTPATH_NGPIO);
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+index 0b28b2cf1517d1..d70855d7c61c1d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c
+@@ -3713,9 +3713,10 @@ int psp_init_cap_microcode(struct psp_context *psp, const char *chip_name)
+ if (err == -ENODEV) {
+ dev_warn(adev->dev, "cap microcode does not exist, skip\n");
+ err = 0;
+- goto out;
++ } else {
++ dev_err(adev->dev, "fail to initialize cap microcode\n");
+ }
+- dev_err(adev->dev, "fail to initialize cap microcode\n");
++ goto out;
+ }
+
+ info = &adev->firmware.ucode[AMDGPU_UCODE_ID_CAP];
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index dbb63ce316f11e..42fd7669ac7d37 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -298,7 +298,7 @@ static int init_user_queue(struct process_queue_manager *pqm,
+ return 0;
+
+ free_gang_ctx_bo:
+- amdgpu_amdkfd_free_gtt_mem(dev->adev, (*q)->gang_ctx_bo);
++ amdgpu_amdkfd_free_gtt_mem(dev->adev, &(*q)->gang_ctx_bo);
+ cleanup:
+ uninit_queue(*q);
+ *q = NULL;
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+index 0c0b9aa44dfa3a..99d2d3092ea540 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
+@@ -607,7 +607,8 @@ static int smu_sys_set_pp_table(void *handle,
+ return -EIO;
+ }
+
+- if (!smu_table->hardcode_pptable) {
++ if (!smu_table->hardcode_pptable || smu_table->power_play_table_size < size) {
++ kfree(smu_table->hardcode_pptable);
+ smu_table->hardcode_pptable = kzalloc(size, GFP_KERNEL);
+ if (!smu_table->hardcode_pptable)
+ return -ENOMEM;
+diff --git a/drivers/gpu/drm/display/drm_dp_helper.c b/drivers/gpu/drm/display/drm_dp_helper.c
+index 6ee51003de3ce6..9fa13da513d24e 100644
+--- a/drivers/gpu/drm/display/drm_dp_helper.c
++++ b/drivers/gpu/drm/display/drm_dp_helper.c
+@@ -2421,7 +2421,7 @@ u8 drm_dp_dsc_sink_bpp_incr(const u8 dsc_dpcd[DP_DSC_RECEIVER_CAP_SIZE])
+ {
+ u8 bpp_increment_dpcd = dsc_dpcd[DP_DSC_BITS_PER_PIXEL_INC - DP_DSC_SUPPORT];
+
+- switch (bpp_increment_dpcd) {
++ switch (bpp_increment_dpcd & DP_DSC_BITS_PER_PIXEL_MASK) {
+ case DP_DSC_BITS_PER_PIXEL_1_16:
+ return 16;
+ case DP_DSC_BITS_PER_PIXEL_1_8:
+diff --git a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+index 5c397a2df70e28..5d27e1c733c527 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
++++ b/drivers/gpu/drm/i915/selftests/i915_gem_gtt.c
+@@ -168,7 +168,7 @@ static int igt_ppgtt_alloc(void *arg)
+ return PTR_ERR(ppgtt);
+
+ if (!ppgtt->vm.allocate_va_range)
+- goto err_ppgtt_cleanup;
++ goto ppgtt_vm_put;
+
+ /*
+ * While we only allocate the page tables here and so we could
+@@ -236,7 +236,7 @@ static int igt_ppgtt_alloc(void *arg)
+ goto retry;
+ }
+ i915_gem_ww_ctx_fini(&ww);
+-
++ppgtt_vm_put:
+ i915_vm_put(&ppgtt->vm);
+ return err;
+ }
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
+index e084406ebb0711..4f110be6b750d3 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_9_2_x1e80100.h
+@@ -391,8 +391,8 @@ static const struct dpu_intf_cfg x1e80100_intf[] = {
+ .type = INTF_DP,
+ .controller_id = MSM_DP_CONTROLLER_2,
+ .prog_fetch_lines_worst_case = 24,
+- .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 17),
+- .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 16),
++ .intr_underrun = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 16),
++ .intr_vsync = DPU_IRQ_IDX(MDP_SSPP_TOP0_INTR, 17),
+ }, {
+ .name = "intf_7", .id = INTF_7,
+ .base = 0x3b000, .len = 0x280,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c
+index 16f144cbc0c986..8ff496082902b1 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_writeback.c
+@@ -42,9 +42,6 @@ static int dpu_wb_conn_atomic_check(struct drm_connector *connector,
+ if (!conn_state || !conn_state->connector) {
+ DPU_ERROR("invalid connector state\n");
+ return -EINVAL;
+- } else if (conn_state->connector->status != connector_status_connected) {
+- DPU_ERROR("connector not connected %d\n", conn_state->connector->status);
+- return -EINVAL;
+ }
+
+ crtc = conn_state->crtc;
+diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
+index fba78193127dee..f775638d239a5c 100644
+--- a/drivers/gpu/drm/msm/msm_gem_submit.c
++++ b/drivers/gpu/drm/msm/msm_gem_submit.c
+@@ -787,8 +787,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
+ goto out;
+
+ if (!submit->cmd[i].size ||
+- ((submit->cmd[i].size + submit->cmd[i].offset) >
+- obj->size / 4)) {
++ (size_add(submit->cmd[i].size, submit->cmd[i].offset) > obj->size / 4)) {
+ SUBMIT_ERROR(submit, "invalid cmdstream size: %u\n", submit->cmd[i].size * 4);
+ ret = -EINVAL;
+ goto out;
+diff --git a/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi.c b/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi.c
+index 2dba7c5ffd2c62..92f4261305bd9d 100644
+--- a/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi.c
++++ b/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi.c
+@@ -587,7 +587,7 @@ static int rcar_mipi_dsi_startup(struct rcar_mipi_dsi *dsi,
+ for (timeout = 10; timeout > 0; --timeout) {
+ if ((rcar_mipi_dsi_read(dsi, PPICLSR) & PPICLSR_STPST) &&
+ (rcar_mipi_dsi_read(dsi, PPIDLSR) & PPIDLSR_STPST) &&
+- (rcar_mipi_dsi_read(dsi, CLOCKSET1) & CLOCKSET1_LOCK))
++ (rcar_mipi_dsi_read(dsi, CLOCKSET1) & CLOCKSET1_LOCK_PHY))
+ break;
+
+ usleep_range(1000, 2000);
+diff --git a/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi_regs.h b/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi_regs.h
+index f8114d11f2d158..a6b276f1d6ee15 100644
+--- a/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi_regs.h
++++ b/drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi_regs.h
+@@ -142,7 +142,6 @@
+
+ #define CLOCKSET1 0x101c
+ #define CLOCKSET1_LOCK_PHY (1 << 17)
+-#define CLOCKSET1_LOCK (1 << 16)
+ #define CLOCKSET1_CLKSEL (1 << 8)
+ #define CLOCKSET1_CLKINSEL_EXTAL (0 << 2)
+ #define CLOCKSET1_CLKINSEL_DIG (1 << 2)
+diff --git a/drivers/gpu/drm/renesas/rz-du/rzg2l_du_kms.c b/drivers/gpu/drm/renesas/rz-du/rzg2l_du_kms.c
+index b99217b4e05d7d..90c6269ccd2920 100644
+--- a/drivers/gpu/drm/renesas/rz-du/rzg2l_du_kms.c
++++ b/drivers/gpu/drm/renesas/rz-du/rzg2l_du_kms.c
+@@ -311,11 +311,11 @@ int rzg2l_du_modeset_init(struct rzg2l_du_device *rcdu)
+ dev->mode_config.helper_private = &rzg2l_du_mode_config_helper;
+
+ /*
+- * The RZ DU uses the VSP1 for memory access, and is limited
+- * to frame sizes of 1920x1080.
++ * The RZ DU was designed to support a frame size of 1920x1200 (landscape)
++ * or 1200x1920 (portrait).
+ */
+ dev->mode_config.max_width = 1920;
+- dev->mode_config.max_height = 1080;
++ dev->mode_config.max_height = 1920;
+
+ rcdu->num_crtcs = hweight8(rcdu->info->channels_mask);
+
+diff --git a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
+index 4ba869e0e794c7..cbd9584af32995 100644
+--- a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
++++ b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
+@@ -70,10 +70,17 @@ static int light_up_connector(struct kunit *test,
+ state = drm_kunit_helper_atomic_state_alloc(test, drm, ctx);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, state);
+
++retry:
+ conn_state = drm_atomic_get_connector_state(state, connector);
+ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, conn_state);
+
+ ret = drm_atomic_set_crtc_for_connector(conn_state, crtc);
++ if (ret == -EDEADLK) {
++ drm_atomic_state_clear(state);
++ ret = drm_modeset_backoff(ctx);
++ if (!ret)
++ goto retry;
++ }
+ KUNIT_EXPECT_EQ(test, ret, 0);
+
+ crtc_state = drm_atomic_get_crtc_state(state, crtc);
+diff --git a/drivers/gpu/drm/tidss/tidss_dispc.c b/drivers/gpu/drm/tidss/tidss_dispc.c
+index 1ad711f8d2a8bf..45f22ead3e61d3 100644
+--- a/drivers/gpu/drm/tidss/tidss_dispc.c
++++ b/drivers/gpu/drm/tidss/tidss_dispc.c
+@@ -700,7 +700,7 @@ void dispc_k2g_set_irqenable(struct dispc_device *dispc, dispc_irq_t mask)
+ {
+ dispc_irq_t old_mask = dispc_k2g_read_irqenable(dispc);
+
+- /* clear the irqstatus for newly enabled irqs */
++ /* clear the irqstatus for irqs that will be enabled */
+ dispc_k2g_clear_irqstatus(dispc, (mask ^ old_mask) & mask);
+
+ dispc_k2g_vp_set_irqenable(dispc, 0, mask);
+@@ -708,6 +708,9 @@ void dispc_k2g_set_irqenable(struct dispc_device *dispc, dispc_irq_t mask)
+
+ dispc_write(dispc, DISPC_IRQENABLE_SET, (1 << 0) | (1 << 7));
+
++ /* clear the irqstatus for irqs that were disabled */
++ dispc_k2g_clear_irqstatus(dispc, (mask ^ old_mask) & old_mask);
++
+ /* flush posted write */
+ dispc_k2g_read_irqenable(dispc);
+ }
+@@ -780,24 +783,20 @@ static
+ void dispc_k3_clear_irqstatus(struct dispc_device *dispc, dispc_irq_t clearmask)
+ {
+ unsigned int i;
+- u32 top_clear = 0;
+
+ for (i = 0; i < dispc->feat->num_vps; ++i) {
+- if (clearmask & DSS_IRQ_VP_MASK(i)) {
++ if (clearmask & DSS_IRQ_VP_MASK(i))
+ dispc_k3_vp_write_irqstatus(dispc, i, clearmask);
+- top_clear |= BIT(i);
+- }
+ }
+ for (i = 0; i < dispc->feat->num_planes; ++i) {
+- if (clearmask & DSS_IRQ_PLANE_MASK(i)) {
++ if (clearmask & DSS_IRQ_PLANE_MASK(i))
+ dispc_k3_vid_write_irqstatus(dispc, i, clearmask);
+- top_clear |= BIT(4 + i);
+- }
+ }
+ if (dispc->feat->subrev == DISPC_K2G)
+ return;
+
+- dispc_write(dispc, DISPC_IRQSTATUS, top_clear);
++ /* always clear the top level irqstatus */
++ dispc_write(dispc, DISPC_IRQSTATUS, dispc_read(dispc, DISPC_IRQSTATUS));
+
+ /* Flush posted writes */
+ dispc_read(dispc, DISPC_IRQSTATUS);
+@@ -843,7 +842,7 @@ static void dispc_k3_set_irqenable(struct dispc_device *dispc,
+
+ old_mask = dispc_k3_read_irqenable(dispc);
+
+- /* clear the irqstatus for newly enabled irqs */
++ /* clear the irqstatus for irqs that will be enabled */
+ dispc_k3_clear_irqstatus(dispc, (old_mask ^ mask) & mask);
+
+ for (i = 0; i < dispc->feat->num_vps; ++i) {
+@@ -868,6 +867,9 @@ static void dispc_k3_set_irqenable(struct dispc_device *dispc,
+ if (main_disable)
+ dispc_write(dispc, DISPC_IRQENABLE_CLR, main_disable);
+
++ /* clear the irqstatus for irqs that were disabled */
++ dispc_k3_clear_irqstatus(dispc, (old_mask ^ mask) & old_mask);
++
+ /* Flush posted writes */
+ dispc_read(dispc, DISPC_IRQENABLE_SET);
+ }
+@@ -2767,8 +2769,12 @@ static void dispc_init_errata(struct dispc_device *dispc)
+ */
+ static void dispc_softreset_k2g(struct dispc_device *dispc)
+ {
++ unsigned long flags;
++
++ spin_lock_irqsave(&dispc->tidss->wait_lock, flags);
+ dispc_set_irqenable(dispc, 0);
+ dispc_read_and_clear_irqstatus(dispc);
++ spin_unlock_irqrestore(&dispc->tidss->wait_lock, flags);
+
+ for (unsigned int vp_idx = 0; vp_idx < dispc->feat->num_vps; ++vp_idx)
+ VP_REG_FLD_MOD(dispc, vp_idx, DISPC_VP_CONTROL, 0, 0, 0);
+diff --git a/drivers/gpu/drm/tidss/tidss_irq.c b/drivers/gpu/drm/tidss/tidss_irq.c
+index 604334ef526a04..d053dbb9d28c5d 100644
+--- a/drivers/gpu/drm/tidss/tidss_irq.c
++++ b/drivers/gpu/drm/tidss/tidss_irq.c
+@@ -60,7 +60,9 @@ static irqreturn_t tidss_irq_handler(int irq, void *arg)
+ unsigned int id;
+ dispc_irq_t irqstatus;
+
++ spin_lock(&tidss->wait_lock);
+ irqstatus = dispc_read_and_clear_irqstatus(tidss->dispc);
++ spin_unlock(&tidss->wait_lock);
+
+ for (id = 0; id < tidss->num_crtcs; id++) {
+ struct drm_crtc *crtc = tidss->crtcs[id];
+diff --git a/drivers/gpu/drm/v3d/v3d_perfmon.c b/drivers/gpu/drm/v3d/v3d_perfmon.c
+index e3013ac3a5c2a6..1abfd738a6017d 100644
+--- a/drivers/gpu/drm/v3d/v3d_perfmon.c
++++ b/drivers/gpu/drm/v3d/v3d_perfmon.c
+@@ -384,6 +384,7 @@ int v3d_perfmon_destroy_ioctl(struct drm_device *dev, void *data,
+ {
+ struct v3d_file_priv *v3d_priv = file_priv->driver_priv;
+ struct drm_v3d_perfmon_destroy *req = data;
++ struct v3d_dev *v3d = v3d_priv->v3d;
+ struct v3d_perfmon *perfmon;
+
+ mutex_lock(&v3d_priv->perfmon.lock);
+@@ -393,6 +394,10 @@ int v3d_perfmon_destroy_ioctl(struct drm_device *dev, void *data,
+ if (!perfmon)
+ return -EINVAL;
+
++ /* If the active perfmon is being destroyed, stop it first */
++ if (perfmon == v3d->active_perfmon)
++ v3d_perfmon_stop(v3d, perfmon, false);
++
+ v3d_perfmon_put(perfmon);
+
+ return 0;
+diff --git a/drivers/gpu/drm/xe/xe_drm_client.c b/drivers/gpu/drm/xe/xe_drm_client.c
+index fb52a23e28f84e..a89fbfbdab329f 100644
+--- a/drivers/gpu/drm/xe/xe_drm_client.c
++++ b/drivers/gpu/drm/xe/xe_drm_client.c
+@@ -135,8 +135,8 @@ void xe_drm_client_add_bo(struct xe_drm_client *client,
+ XE_WARN_ON(bo->client);
+ XE_WARN_ON(!list_empty(&bo->client_link));
+
+- spin_lock(&client->bos_lock);
+ bo->client = xe_drm_client_get(client);
++ spin_lock(&client->bos_lock);
+ list_add_tail(&bo->client_link, &client->bos_list);
+ spin_unlock(&client->bos_lock);
+ }
+diff --git a/drivers/gpu/drm/xe/xe_trace_bo.h b/drivers/gpu/drm/xe/xe_trace_bo.h
+index 9b1a1d4304ae18..ba0f61e7d2d6b9 100644
+--- a/drivers/gpu/drm/xe/xe_trace_bo.h
++++ b/drivers/gpu/drm/xe/xe_trace_bo.h
+@@ -55,8 +55,8 @@ TRACE_EVENT(xe_bo_move,
+ TP_STRUCT__entry(
+ __field(struct xe_bo *, bo)
+ __field(size_t, size)
+- __field(u32, new_placement)
+- __field(u32, old_placement)
++ __string(new_placement_name, xe_mem_type_to_name[new_placement])
++ __string(old_placement_name, xe_mem_type_to_name[old_placement])
+ __string(device_id, __dev_name_bo(bo))
+ __field(bool, move_lacks_source)
+ ),
+@@ -64,15 +64,15 @@ TRACE_EVENT(xe_bo_move,
+ TP_fast_assign(
+ __entry->bo = bo;
+ __entry->size = bo->size;
+- __entry->new_placement = new_placement;
+- __entry->old_placement = old_placement;
++ __assign_str(new_placement_name);
++ __assign_str(old_placement_name);
+ __assign_str(device_id);
+ __entry->move_lacks_source = move_lacks_source;
+ ),
+ TP_printk("move_lacks_source:%s, migrate object %p [size %zu] from %s to %s device_id:%s",
+ __entry->move_lacks_source ? "yes" : "no", __entry->bo, __entry->size,
+- xe_mem_type_to_name[__entry->old_placement],
+- xe_mem_type_to_name[__entry->new_placement], __get_str(device_id))
++ __get_str(old_placement_name),
++ __get_str(new_placement_name), __get_str(device_id))
+ );
+
+ DECLARE_EVENT_CLASS(xe_vma,
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index e98528777faaec..710674ef40a973 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -625,6 +625,8 @@ static int host1x_probe(struct platform_device *pdev)
+ goto free_contexts;
+ }
+
++ mutex_init(&host->intr_mutex);
++
+ pm_runtime_enable(&pdev->dev);
+
+ err = devm_tegra_core_dev_init_opp_table_common(&pdev->dev);
+diff --git a/drivers/gpu/host1x/intr.c b/drivers/gpu/host1x/intr.c
+index b3285dd101804c..f77a678949e96b 100644
+--- a/drivers/gpu/host1x/intr.c
++++ b/drivers/gpu/host1x/intr.c
+@@ -104,8 +104,6 @@ int host1x_intr_init(struct host1x *host)
+ unsigned int id;
+ int i, err;
+
+- mutex_init(&host->intr_mutex);
+-
+ for (id = 0; id < host1x_syncpt_nb_pts(host); ++id) {
+ struct host1x_syncpt *syncpt = &host->syncpt[id];
+
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 369414c92fccbe..93b5c648ef82c9 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1673,9 +1673,12 @@ static int mt_input_configured(struct hid_device *hdev, struct hid_input *hi)
+ break;
+ }
+
+- if (suffix)
++ if (suffix) {
+ hi->input->name = devm_kasprintf(&hdev->dev, GFP_KERNEL,
+ "%s %s", hdev->name, suffix);
++ if (!hi->input->name)
++ return -ENOMEM;
++ }
+
+ return 0;
+ }
+diff --git a/drivers/hid/hid-steam.c b/drivers/hid/hid-steam.c
+index bf8b633114be6a..7b359668987854 100644
+--- a/drivers/hid/hid-steam.c
++++ b/drivers/hid/hid-steam.c
+@@ -313,6 +313,7 @@ struct steam_device {
+ u16 rumble_left;
+ u16 rumble_right;
+ unsigned int sensor_timestamp_us;
++ struct work_struct unregister_work;
+ };
+
+ static int steam_recv_report(struct steam_device *steam,
+@@ -1072,6 +1073,31 @@ static void steam_mode_switch_cb(struct work_struct *work)
+ }
+ }
+
++static void steam_work_unregister_cb(struct work_struct *work)
++{
++ struct steam_device *steam = container_of(work, struct steam_device,
++ unregister_work);
++ unsigned long flags;
++ bool connected;
++ bool opened;
++
++ spin_lock_irqsave(&steam->lock, flags);
++ opened = steam->client_opened;
++ connected = steam->connected;
++ spin_unlock_irqrestore(&steam->lock, flags);
++
++ if (connected) {
++ if (opened) {
++ steam_sensors_unregister(steam);
++ steam_input_unregister(steam);
++ } else {
++ steam_set_lizard_mode(steam, lizard_mode);
++ steam_input_register(steam);
++ steam_sensors_register(steam);
++ }
++ }
++}
++
+ static bool steam_is_valve_interface(struct hid_device *hdev)
+ {
+ struct hid_report_enum *rep_enum;
+@@ -1117,8 +1143,7 @@ static int steam_client_ll_open(struct hid_device *hdev)
+ steam->client_opened++;
+ spin_unlock_irqrestore(&steam->lock, flags);
+
+- steam_sensors_unregister(steam);
+- steam_input_unregister(steam);
++ schedule_work(&steam->unregister_work);
+
+ return 0;
+ }
+@@ -1135,11 +1160,7 @@ static void steam_client_ll_close(struct hid_device *hdev)
+ connected = steam->connected && !steam->client_opened;
+ spin_unlock_irqrestore(&steam->lock, flags);
+
+- if (connected) {
+- steam_set_lizard_mode(steam, lizard_mode);
+- steam_input_register(steam);
+- steam_sensors_register(steam);
+- }
++ schedule_work(&steam->unregister_work);
+ }
+
+ static int steam_client_ll_raw_request(struct hid_device *hdev,
+@@ -1231,6 +1252,7 @@ static int steam_probe(struct hid_device *hdev,
+ INIT_LIST_HEAD(&steam->list);
+ INIT_WORK(&steam->rumble_work, steam_haptic_rumble_cb);
+ steam->sensor_timestamp_us = 0;
++ INIT_WORK(&steam->unregister_work, steam_work_unregister_cb);
+
+ /*
+ * With the real steam controller interface, do not connect hidraw.
+@@ -1291,6 +1313,7 @@ static int steam_probe(struct hid_device *hdev,
+ cancel_work_sync(&steam->work_connect);
+ cancel_delayed_work_sync(&steam->mode_switch);
+ cancel_work_sync(&steam->rumble_work);
++ cancel_work_sync(&steam->unregister_work);
+
+ return ret;
+ }
+@@ -1306,6 +1329,8 @@ static void steam_remove(struct hid_device *hdev)
+
+ cancel_delayed_work_sync(&steam->mode_switch);
+ cancel_work_sync(&steam->work_connect);
++ cancel_work_sync(&steam->rumble_work);
++ cancel_work_sync(&steam->unregister_work);
+ hid_destroy_device(steam->client_hdev);
+ steam->client_hdev = NULL;
+ steam->client_opened = 0;
+@@ -1592,7 +1617,7 @@ static void steam_do_deck_input_event(struct steam_device *steam,
+
+ if (!(b9 & BIT(6)) && steam->did_mode_switch) {
+ steam->did_mode_switch = false;
+- cancel_delayed_work_sync(&steam->mode_switch);
++ cancel_delayed_work(&steam->mode_switch);
+ } else if (!steam->client_opened && (b9 & BIT(6)) && !steam->did_mode_switch) {
+ steam->did_mode_switch = true;
+ schedule_delayed_work(&steam->mode_switch, 45 * HZ / 100);
+diff --git a/drivers/hid/hid-thrustmaster.c b/drivers/hid/hid-thrustmaster.c
+index 6c3e758bbb09e3..3b81468a1df297 100644
+--- a/drivers/hid/hid-thrustmaster.c
++++ b/drivers/hid/hid-thrustmaster.c
+@@ -171,7 +171,7 @@ static void thrustmaster_interrupts(struct hid_device *hdev)
+ b_ep = ep->desc.bEndpointAddress;
+
+ /* Are the expected endpoints present? */
+- u8 ep_addr[1] = {b_ep};
++ u8 ep_addr[2] = {b_ep, 0};
+
+ if (!usb_check_int_endpoints(usbif, ep_addr)) {
+ hid_err(hdev, "Unexpected non-int endpoint\n");
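+
The hid-thrustmaster fix works because usb_check_int_endpoints() walks a zero-terminated array of endpoint addresses, so a one-element array without the sentinel runs past its own end. Below is a stand-alone C sketch of that sentinel convention; check_list() is a hypothetical stand-in for such a helper, not the kernel function.

/* Walk a zero-terminated list of endpoint addresses, as the USB
 * helper does; the trailing 0 is what stops the loop. */
#include <stdint.h>
#include <stdio.h>

static int check_list(const uint8_t *ep_addr)
{
	for (; *ep_addr; ep_addr++)
		printf("checking endpoint 0x%02x\n", *ep_addr);
	return 1;
}

int main(void)
{
	uint8_t ep_addr[2] = { 0x81, 0 };	/* address + terminator */

	return !check_list(ep_addr);
}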
+diff --git a/drivers/hid/hid-winwing.c b/drivers/hid/hid-winwing.c
+index 831b760c66ea72..d4afbbd2780797 100644
+--- a/drivers/hid/hid-winwing.c
++++ b/drivers/hid/hid-winwing.c
+@@ -106,6 +106,8 @@ static int winwing_init_led(struct hid_device *hdev,
+ "%s::%s",
+ dev_name(&input->dev),
+ info->led_name);
++ if (!led->cdev.name)
++ return -ENOMEM;
+
+ ret = devm_led_classdev_register(&hdev->dev, &led->cdev);
+ if (ret)
+diff --git a/drivers/i3c/master/Kconfig b/drivers/i3c/master/Kconfig
+index 90dee3ec552097..77da199c7413e6 100644
+--- a/drivers/i3c/master/Kconfig
++++ b/drivers/i3c/master/Kconfig
+@@ -57,3 +57,14 @@ config MIPI_I3C_HCI
+
+ This driver can also be built as a module. If so, the module will be
+ called mipi-i3c-hci.
++
++config MIPI_I3C_HCI_PCI
++ tristate "MIPI I3C Host Controller Interface PCI support"
++ depends on MIPI_I3C_HCI
++ depends on PCI
++ help
++ Support for MIPI I3C Host Controller Interface compatible hardware
++ on the PCI bus.
++
++ This driver can also be built as a module. If so, the module will be
++ called mipi-i3c-hci-pci.
+diff --git a/drivers/i3c/master/mipi-i3c-hci/Makefile b/drivers/i3c/master/mipi-i3c-hci/Makefile
+index 1f8cd5c48fdef3..e3d3ef757035f0 100644
+--- a/drivers/i3c/master/mipi-i3c-hci/Makefile
++++ b/drivers/i3c/master/mipi-i3c-hci/Makefile
+@@ -5,3 +5,4 @@ mipi-i3c-hci-y := core.o ext_caps.o pio.o dma.o \
+ cmd_v1.o cmd_v2.o \
+ dat_v1.o dct_v1.o \
+ hci_quirks.o
++obj-$(CONFIG_MIPI_I3C_HCI_PCI) += mipi-i3c-hci-pci.o
+diff --git a/drivers/i3c/master/mipi-i3c-hci/dma.c b/drivers/i3c/master/mipi-i3c-hci/dma.c
+index 13adc584009429..fe955703e59b58 100644
+--- a/drivers/i3c/master/mipi-i3c-hci/dma.c
++++ b/drivers/i3c/master/mipi-i3c-hci/dma.c
+@@ -762,9 +762,26 @@ static bool hci_dma_irq_handler(struct i3c_hci *hci, unsigned int mask)
+ complete(&rh->op_done);
+
+ if (status & INTR_TRANSFER_ABORT) {
++ u32 ring_status;
++
+ dev_notice_ratelimited(&hci->master.dev,
+ "ring %d: Transfer Aborted\n", i);
+ mipi_i3c_hci_resume(hci);
++ ring_status = rh_reg_read(RING_STATUS);
++ if (!(ring_status & RING_STATUS_RUNNING) &&
++ status & INTR_TRANSFER_COMPLETION &&
++ status & INTR_TRANSFER_ERR) {
++ /*
++			 * A ring stop followed by a run is an
++			 * Intel-specific quirk required after
++			 * resuming the halted controller. Apply
++			 * it only when the ring is not running
++			 * after a transfer error.
++ */
++ rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE);
++ rh_reg_write(RING_CONTROL, RING_CTRL_ENABLE |
++ RING_CTRL_RUN_STOP);
++ }
+ }
+ if (status & INTR_WARN_INS_STOP_MODE)
+ dev_warn_ratelimited(&hci->master.dev,
+diff --git a/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
+new file mode 100644
+index 00000000000000..c6c3a3ec11eae3
+--- /dev/null
++++ b/drivers/i3c/master/mipi-i3c-hci/mipi-i3c-hci-pci.c
+@@ -0,0 +1,148 @@
++// SPDX-License-Identifier: GPL-2.0
++/*
++ * PCI glue code for MIPI I3C HCI driver
++ *
++ * Copyright (C) 2024 Intel Corporation
++ *
++ * Author: Jarkko Nikula <jarkko.nikula@linux.intel.com>
++ */
++#include <linux/acpi.h>
++#include <linux/idr.h>
++#include <linux/kernel.h>
++#include <linux/module.h>
++#include <linux/pci.h>
++#include <linux/platform_device.h>
++
++struct mipi_i3c_hci_pci_info {
++ int (*init)(struct pci_dev *pci);
++};
++
++#define INTEL_PRIV_OFFSET 0x2b0
++#define INTEL_PRIV_SIZE 0x28
++#define INTEL_PRIV_RESETS 0x04
++#define INTEL_PRIV_RESETS_RESET BIT(0)
++#define INTEL_PRIV_RESETS_RESET_DONE BIT(1)
++
++static DEFINE_IDA(mipi_i3c_hci_pci_ida);
++
++static int mipi_i3c_hci_pci_intel_init(struct pci_dev *pci)
++{
++ unsigned long timeout;
++ void __iomem *priv;
++
++ priv = devm_ioremap(&pci->dev,
++ pci_resource_start(pci, 0) + INTEL_PRIV_OFFSET,
++ INTEL_PRIV_SIZE);
++ if (!priv)
++ return -ENOMEM;
++
++ /* Assert reset, wait for completion and release reset */
++ writel(0, priv + INTEL_PRIV_RESETS);
++ timeout = jiffies + msecs_to_jiffies(10);
++ while (!(readl(priv + INTEL_PRIV_RESETS) &
++ INTEL_PRIV_RESETS_RESET_DONE)) {
++ if (time_after(jiffies, timeout))
++ break;
++ cpu_relax();
++ }
++ writel(INTEL_PRIV_RESETS_RESET, priv + INTEL_PRIV_RESETS);
++
++ return 0;
++}
++
++static struct mipi_i3c_hci_pci_info intel_info = {
++ .init = mipi_i3c_hci_pci_intel_init,
++};
++
++static int mipi_i3c_hci_pci_probe(struct pci_dev *pci,
++ const struct pci_device_id *id)
++{
++ struct mipi_i3c_hci_pci_info *info;
++ struct platform_device *pdev;
++ struct resource res[2];
++ int dev_id, ret;
++
++ ret = pcim_enable_device(pci);
++ if (ret)
++ return ret;
++
++ pci_set_master(pci);
++
++ memset(&res, 0, sizeof(res));
++
++ res[0].flags = IORESOURCE_MEM;
++ res[0].start = pci_resource_start(pci, 0);
++ res[0].end = pci_resource_end(pci, 0);
++
++ res[1].flags = IORESOURCE_IRQ;
++ res[1].start = pci->irq;
++ res[1].end = pci->irq;
++
++ dev_id = ida_alloc(&mipi_i3c_hci_pci_ida, GFP_KERNEL);
++ if (dev_id < 0)
++ return dev_id;
++
++ pdev = platform_device_alloc("mipi-i3c-hci", dev_id);
++ if (!pdev)
++ return -ENOMEM;
++
++ pdev->dev.parent = &pci->dev;
++ device_set_node(&pdev->dev, dev_fwnode(&pci->dev));
++
++ ret = platform_device_add_resources(pdev, res, ARRAY_SIZE(res));
++ if (ret)
++ goto err;
++
++ info = (struct mipi_i3c_hci_pci_info *)id->driver_data;
++ if (info && info->init) {
++ ret = info->init(pci);
++ if (ret)
++ goto err;
++ }
++
++ ret = platform_device_add(pdev);
++ if (ret)
++ goto err;
++
++ pci_set_drvdata(pci, pdev);
++
++ return 0;
++
++err:
++ platform_device_put(pdev);
++ ida_free(&mipi_i3c_hci_pci_ida, dev_id);
++ return ret;
++}
++
++static void mipi_i3c_hci_pci_remove(struct pci_dev *pci)
++{
++ struct platform_device *pdev = pci_get_drvdata(pci);
++ int dev_id = pdev->id;
++
++ platform_device_unregister(pdev);
++ ida_free(&mipi_i3c_hci_pci_ida, dev_id);
++}
++
++static const struct pci_device_id mipi_i3c_hci_pci_devices[] = {
++ /* Panther Lake-H */
++ { PCI_VDEVICE(INTEL, 0xe37c), (kernel_ulong_t)&intel_info},
++ { PCI_VDEVICE(INTEL, 0xe36f), (kernel_ulong_t)&intel_info},
++ /* Panther Lake-P */
++ { PCI_VDEVICE(INTEL, 0xe47c), (kernel_ulong_t)&intel_info},
++ { PCI_VDEVICE(INTEL, 0xe46f), (kernel_ulong_t)&intel_info},
++ { },
++};
++MODULE_DEVICE_TABLE(pci, mipi_i3c_hci_pci_devices);
++
++static struct pci_driver mipi_i3c_hci_pci_driver = {
++ .name = "mipi_i3c_hci_pci",
++ .id_table = mipi_i3c_hci_pci_devices,
++ .probe = mipi_i3c_hci_pci_probe,
++ .remove = mipi_i3c_hci_pci_remove,
++};
++
++module_pci_driver(mipi_i3c_hci_pci_driver);
++
++MODULE_AUTHOR("Jarkko Nikula <jarkko.nikula@intel.com>");
++MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("MIPI I3C HCI driver on PCI bus");
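+
The Intel init function above uses the usual bounded-poll idiom: spin on a status register until a bit sets, but never past a deadline. The stand-alone C sketch below models that loop in user space; fake_reg, poll_reset_done() and the 10 ms budget are illustrative stand-ins, not kernel API.

/* Bounded poll: wait for RESET_DONE, give up after timeout_ms.
 * fake_reg stands in for the readl() of the private MMIO window. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define RESET_DONE (1u << 1)

static volatile uint32_t fake_reg;

static int poll_reset_done(long timeout_ms)
{
	struct timespec start, now;

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (;;) {
		if (fake_reg & RESET_DONE)
			return 0;
		clock_gettime(CLOCK_MONOTONIC, &now);
		if ((now.tv_sec - start.tv_sec) * 1000 +
		    (now.tv_nsec - start.tv_nsec) / 1000000 > timeout_ms)
			return -1;	/* timed out; caller decides policy */
	}
}

int main(void)
{
	fake_reg = RESET_DONE;	/* pretend the hardware finished */
	printf("reset %s\n", poll_reset_done(10) ? "timed out" : "done");
	return 0;
}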
+diff --git a/drivers/infiniband/hw/efa/efa_main.c b/drivers/infiniband/hw/efa/efa_main.c
+index ad225823e6f2fe..45a4564c670c01 100644
+--- a/drivers/infiniband/hw/efa/efa_main.c
++++ b/drivers/infiniband/hw/efa/efa_main.c
+@@ -470,7 +470,6 @@ static void efa_ib_device_remove(struct efa_dev *dev)
+ ibdev_info(&dev->ibdev, "Unregister ib device\n");
+ ib_unregister_device(&dev->ibdev);
+ efa_destroy_eqs(dev);
+- efa_com_dev_reset(&dev->edev, EFA_REGS_RESET_NORMAL);
+ efa_release_doorbell_bar(dev);
+ }
+
+@@ -643,12 +642,14 @@ static struct efa_dev *efa_probe_device(struct pci_dev *pdev)
+ return ERR_PTR(err);
+ }
+
+-static void efa_remove_device(struct pci_dev *pdev)
++static void efa_remove_device(struct pci_dev *pdev,
++ enum efa_regs_reset_reason_types reset_reason)
+ {
+ struct efa_dev *dev = pci_get_drvdata(pdev);
+ struct efa_com_dev *edev;
+
+ edev = &dev->edev;
++ efa_com_dev_reset(edev, reset_reason);
+ efa_com_admin_destroy(edev);
+ efa_free_irq(dev, &dev->admin_irq);
+ efa_disable_msix(dev);
+@@ -676,7 +677,7 @@ static int efa_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ return 0;
+
+ err_remove_device:
+- efa_remove_device(pdev);
++ efa_remove_device(pdev, EFA_REGS_RESET_INIT_ERR);
+ return err;
+ }
+
+@@ -685,7 +686,7 @@ static void efa_remove(struct pci_dev *pdev)
+ struct efa_dev *dev = pci_get_drvdata(pdev);
+
+ efa_ib_device_remove(dev);
+- efa_remove_device(pdev);
++ efa_remove_device(pdev, EFA_REGS_RESET_NORMAL);
+ }
+
+ static void efa_shutdown(struct pci_dev *pdev)
+diff --git a/drivers/iommu/amd/amd_iommu_types.h b/drivers/iommu/amd/amd_iommu_types.h
+index 601fb4ee69009e..6fb2f2919ab1ff 100644
+--- a/drivers/iommu/amd/amd_iommu_types.h
++++ b/drivers/iommu/amd/amd_iommu_types.h
+@@ -175,6 +175,7 @@
+ #define CONTROL_GAM_EN 25
+ #define CONTROL_GALOG_EN 28
+ #define CONTROL_GAINT_EN 29
++#define CONTROL_EPH_EN 45
+ #define CONTROL_XT_EN 50
+ #define CONTROL_INTCAPXT_EN 51
+ #define CONTROL_IRTCACHEDIS 59
+diff --git a/drivers/iommu/amd/init.c b/drivers/iommu/amd/init.c
+index 43131c3a21726f..dbe2d13972feff 100644
+--- a/drivers/iommu/amd/init.c
++++ b/drivers/iommu/amd/init.c
+@@ -2647,6 +2647,10 @@ static void iommu_init_flags(struct amd_iommu *iommu)
+
+ /* Set IOTLB invalidation timeout to 1s */
+ iommu_set_inv_tlb_timeout(iommu, CTRL_INV_TO_1S);
++
++ /* Enable Enhanced Peripheral Page Request Handling */
++ if (check_feature(FEATURE_EPHSUP))
++ iommu_feature_enable(iommu, CONTROL_EPH_EN);
+ }
+
+ static void iommu_apply_resume_quirks(struct amd_iommu *iommu)
+diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
+index 4674e618797c15..8b5926c1452edb 100644
+--- a/drivers/iommu/io-pgfault.c
++++ b/drivers/iommu/io-pgfault.c
+@@ -478,6 +478,7 @@ void iopf_queue_remove_device(struct iopf_queue *queue, struct device *dev)
+
+ ops->page_response(dev, iopf, &resp);
+ list_del_init(&group->pending_node);
++ iopf_free_group(group);
+ }
+ mutex_unlock(&fault_param->lock);
+
+diff --git a/drivers/media/dvb-frontends/cxd2841er.c b/drivers/media/dvb-frontends/cxd2841er.c
+index d925ca24183b50..415f1f91cc3072 100644
+--- a/drivers/media/dvb-frontends/cxd2841er.c
++++ b/drivers/media/dvb-frontends/cxd2841er.c
+@@ -311,12 +311,8 @@ static int cxd2841er_set_reg_bits(struct cxd2841er_priv *priv,
+
+ static u32 cxd2841er_calc_iffreq_xtal(enum cxd2841er_xtal xtal, u32 ifhz)
+ {
+- u64 tmp;
+-
+- tmp = (u64) ifhz * 16777216;
+- do_div(tmp, ((xtal == SONY_XTAL_24000) ? 48000000 : 41000000));
+-
+- return (u32) tmp;
++ return div_u64(ifhz * 16777216ull,
++ (xtal == SONY_XTAL_24000) ? 48000000 : 41000000);
+ }
+
+ static u32 cxd2841er_calc_iffreq(u32 ifhz)
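+
The cxd2841er change swaps an open-coded do_div() for div_u64(); either way the key point is that ifhz * 16777216 (ifhz << 24) overflows 32 bits for any realistic IF, so the product must be computed in 64 bits before dividing by the crystal rate. A minimal user-space check of the same math, with a plain 64-bit division standing in for div_u64():

/* Fixed-point IF frequency: (ifhz * 2^24) / xtal, widened to 64 bits
 * before the divide, mirroring the patched helper. */
#include <stdint.h>
#include <stdio.h>

static uint32_t calc_iffreq_xtal(int xtal_is_24mhz, uint32_t ifhz)
{
	uint64_t tmp = (uint64_t)ifhz * 16777216ull;

	return (uint32_t)(tmp / (xtal_is_24mhz ? 48000000 : 41000000));
}

int main(void)
{
	/* 3.55 MHz IF on the 24 MHz crystal */
	printf("%u\n", calc_iffreq_xtal(1, 3550000));
	return 0;
}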
+diff --git a/drivers/media/i2c/ds90ub913.c b/drivers/media/i2c/ds90ub913.c
+index b5375d73662996..7670d6c82d923e 100644
+--- a/drivers/media/i2c/ds90ub913.c
++++ b/drivers/media/i2c/ds90ub913.c
+@@ -8,6 +8,7 @@
+ * Copyright (c) 2023 Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
+ */
+
++#include <linux/bitfield.h>
+ #include <linux/clk-provider.h>
+ #include <linux/clk.h>
+ #include <linux/delay.h>
+@@ -146,6 +147,19 @@ static int ub913_write(const struct ub913_data *priv, u8 reg, u8 val)
+ return ret;
+ }
+
++static int ub913_update_bits(const struct ub913_data *priv, u8 reg, u8 mask,
++ u8 val)
++{
++ int ret;
++
++ ret = regmap_update_bits(priv->regmap, reg, mask, val);
++ if (ret < 0)
++ dev_err(&priv->client->dev,
++ "Cannot update register 0x%02x %d!\n", reg, ret);
++
++ return ret;
++}
++
+ /*
+ * GPIO chip
+ */
+@@ -733,10 +747,13 @@ static int ub913_hw_init(struct ub913_data *priv)
+ if (ret)
+ return dev_err_probe(dev, ret, "i2c master init failed\n");
+
+- ub913_read(priv, UB913_REG_GENERAL_CFG, &v);
+- v &= ~UB913_REG_GENERAL_CFG_PCLK_RISING;
+- v |= priv->pclk_polarity_rising ? UB913_REG_GENERAL_CFG_PCLK_RISING : 0;
+- ub913_write(priv, UB913_REG_GENERAL_CFG, v);
++ ret = ub913_update_bits(priv, UB913_REG_GENERAL_CFG,
++ UB913_REG_GENERAL_CFG_PCLK_RISING,
++ FIELD_PREP(UB913_REG_GENERAL_CFG_PCLK_RISING,
++ priv->pclk_polarity_rising));
++
++ if (ret)
++ return ret;
+
+ return 0;
+ }
+diff --git a/drivers/media/i2c/ds90ub953.c b/drivers/media/i2c/ds90ub953.c
+index 10daecf6f45798..f0bad3e64f23dc 100644
+--- a/drivers/media/i2c/ds90ub953.c
++++ b/drivers/media/i2c/ds90ub953.c
+@@ -398,8 +398,13 @@ static int ub953_gpiochip_probe(struct ub953_data *priv)
+ int ret;
+
+ /* Set all GPIOs to local input mode */
+- ub953_write(priv, UB953_REG_LOCAL_GPIO_DATA, 0);
+- ub953_write(priv, UB953_REG_GPIO_INPUT_CTRL, 0xf);
++ ret = ub953_write(priv, UB953_REG_LOCAL_GPIO_DATA, 0);
++ if (ret)
++ return ret;
++
++ ret = ub953_write(priv, UB953_REG_GPIO_INPUT_CTRL, 0xf);
++ if (ret)
++ return ret;
+
+ gc->label = dev_name(dev);
+ gc->parent = dev;
+@@ -961,10 +966,11 @@ static void ub953_calc_clkout_params(struct ub953_data *priv,
+ clkout_data->rate = clkout_rate;
+ }
+
+-static void ub953_write_clkout_regs(struct ub953_data *priv,
+- const struct ub953_clkout_data *clkout_data)
++static int ub953_write_clkout_regs(struct ub953_data *priv,
++ const struct ub953_clkout_data *clkout_data)
+ {
+ u8 clkout_ctrl0, clkout_ctrl1;
++ int ret;
+
+ if (priv->hw_data->is_ub971)
+ clkout_ctrl0 = clkout_data->m;
+@@ -974,8 +980,15 @@ static void ub953_write_clkout_regs(struct ub953_data *priv,
+
+ clkout_ctrl1 = clkout_data->n;
+
+- ub953_write(priv, UB953_REG_CLKOUT_CTRL0, clkout_ctrl0);
+- ub953_write(priv, UB953_REG_CLKOUT_CTRL1, clkout_ctrl1);
++ ret = ub953_write(priv, UB953_REG_CLKOUT_CTRL0, clkout_ctrl0);
++ if (ret)
++ return ret;
++
++ ret = ub953_write(priv, UB953_REG_CLKOUT_CTRL1, clkout_ctrl1);
++ if (ret)
++ return ret;
++
++ return 0;
+ }
+
+ static unsigned long ub953_clkout_recalc_rate(struct clk_hw *hw,
+@@ -1055,9 +1068,7 @@ static int ub953_clkout_set_rate(struct clk_hw *hw, unsigned long rate,
+ dev_dbg(&priv->client->dev, "%s %lu (requested %lu)\n", __func__,
+ clkout_data.rate, rate);
+
+- ub953_write_clkout_regs(priv, &clkout_data);
+-
+- return 0;
++ return ub953_write_clkout_regs(priv, &clkout_data);
+ }
+
+ static const struct clk_ops ub953_clkout_ops = {
+@@ -1082,7 +1093,9 @@ static int ub953_register_clkout(struct ub953_data *priv)
+
+ /* Initialize clkout to 25MHz by default */
+ ub953_calc_clkout_params(priv, UB953_DEFAULT_CLKOUT_RATE, &clkout_data);
+- ub953_write_clkout_regs(priv, &clkout_data);
++ ret = ub953_write_clkout_regs(priv, &clkout_data);
++ if (ret)
++ return ret;
+
+ priv->clkout_clk_hw.init = &init;
+
+@@ -1229,10 +1242,15 @@ static int ub953_hw_init(struct ub953_data *priv)
+ if (ret)
+ return dev_err_probe(dev, ret, "i2c init failed\n");
+
+- ub953_write(priv, UB953_REG_GENERAL_CFG,
+- (priv->non_continous_clk ? 0 : UB953_REG_GENERAL_CFG_CONT_CLK) |
+- ((priv->num_data_lanes - 1) << UB953_REG_GENERAL_CFG_CSI_LANE_SEL_SHIFT) |
+- UB953_REG_GENERAL_CFG_CRC_TX_GEN_ENABLE);
++ v = 0;
++ v |= priv->non_continous_clk ? 0 : UB953_REG_GENERAL_CFG_CONT_CLK;
++ v |= (priv->num_data_lanes - 1) <<
++ UB953_REG_GENERAL_CFG_CSI_LANE_SEL_SHIFT;
++ v |= UB953_REG_GENERAL_CFG_CRC_TX_GEN_ENABLE;
++
++ ret = ub953_write(priv, UB953_REG_GENERAL_CFG, v);
++ if (ret)
++ return ret;
+
+ return 0;
+ }
+diff --git a/drivers/media/platform/broadcom/bcm2835-unicam.c b/drivers/media/platform/broadcom/bcm2835-unicam.c
+index a1d93c14553d80..9f81e1582a3005 100644
+--- a/drivers/media/platform/broadcom/bcm2835-unicam.c
++++ b/drivers/media/platform/broadcom/bcm2835-unicam.c
+@@ -816,11 +816,6 @@ static irqreturn_t unicam_isr(int irq, void *dev)
+ }
+ }
+
+- if (unicam_reg_read(unicam, UNICAM_ICTL) & UNICAM_FCM) {
+- /* Switch out of trigger mode if selected */
+- unicam_reg_write_field(unicam, UNICAM_ICTL, 1, UNICAM_TFC);
+- unicam_reg_write_field(unicam, UNICAM_ICTL, 0, UNICAM_FCM);
+- }
+ return IRQ_HANDLED;
+ }
+
+@@ -984,8 +979,7 @@ static void unicam_start_rx(struct unicam_device *unicam,
+
+ unicam_reg_write_field(unicam, UNICAM_ANA, 0, UNICAM_DDL);
+
+- /* Always start in trigger frame capture mode (UNICAM_FCM set) */
+- val = UNICAM_FSIE | UNICAM_FEIE | UNICAM_FCM | UNICAM_IBOB;
++ val = UNICAM_FSIE | UNICAM_FEIE | UNICAM_IBOB;
+ line_int_freq = max(fmt->height >> 2, 128);
+ unicam_set_field(&val, line_int_freq, UNICAM_LCIE_MASK);
+ unicam_reg_write(unicam, UNICAM_ICTL, val);
+diff --git a/drivers/media/test-drivers/vidtv/vidtv_bridge.c b/drivers/media/test-drivers/vidtv/vidtv_bridge.c
+index 613949df897d34..6d964e392d3130 100644
+--- a/drivers/media/test-drivers/vidtv/vidtv_bridge.c
++++ b/drivers/media/test-drivers/vidtv/vidtv_bridge.c
+@@ -191,10 +191,11 @@ static int vidtv_start_streaming(struct vidtv_dvb *dvb)
+
+ mux_args.mux_buf_sz = mux_buf_sz;
+
+- dvb->streaming = true;
+ dvb->mux = vidtv_mux_init(dvb->fe[0], dev, &mux_args);
+ if (!dvb->mux)
+ return -ENOMEM;
++
++ dvb->streaming = true;
+ vidtv_mux_start_thread(dvb->mux);
+
+ dev_dbg_ratelimited(dev, "Started streaming\n");
+@@ -205,6 +206,11 @@ static int vidtv_stop_streaming(struct vidtv_dvb *dvb)
+ {
+ struct device *dev = &dvb->pdev->dev;
+
++ if (!dvb->streaming) {
++ dev_warn_ratelimited(dev, "No streaming. Skipping.\n");
++ return 0;
++ }
++
+ dvb->streaming = false;
+ vidtv_mux_stop_thread(dvb->mux);
+ vidtv_mux_destroy(dvb->mux);
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index d832aa55056f39..4d8e00b425f443 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -2809,6 +2809,15 @@ static const struct usb_device_id uvc_ids[] = {
+ .bInterfaceSubClass = 1,
+ .bInterfaceProtocol = 0,
+ .driver_info = (kernel_ulong_t)&uvc_quirk_probe_minmax },
++ /* Sonix Technology Co. Ltd. - 292A IPC AR0330 */
++ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
++ | USB_DEVICE_ID_MATCH_INT_INFO,
++ .idVendor = 0x0c45,
++ .idProduct = 0x6366,
++ .bInterfaceClass = USB_CLASS_VIDEO,
++ .bInterfaceSubClass = 1,
++ .bInterfaceProtocol = 0,
++ .driver_info = UVC_INFO_QUIRK(UVC_QUIRK_MJPEG_NO_EOF) },
+ /* MT6227 */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+@@ -2837,6 +2846,15 @@ static const struct usb_device_id uvc_ids[] = {
+ .bInterfaceSubClass = 1,
+ .bInterfaceProtocol = 0,
+ .driver_info = (kernel_ulong_t)&uvc_quirk_probe_minmax },
++ /* Kurokesu C1 PRO */
++ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
++ | USB_DEVICE_ID_MATCH_INT_INFO,
++ .idVendor = 0x16d0,
++ .idProduct = 0x0ed1,
++ .bInterfaceClass = USB_CLASS_VIDEO,
++ .bInterfaceSubClass = 1,
++ .bInterfaceProtocol = 0,
++ .driver_info = UVC_INFO_QUIRK(UVC_QUIRK_MJPEG_NO_EOF) },
+ /* Syntek (HP Spartan) */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+diff --git a/drivers/media/usb/uvc/uvc_video.c b/drivers/media/usb/uvc/uvc_video.c
+index d2fe01bcd209e5..eab7b8f5573057 100644
+--- a/drivers/media/usb/uvc/uvc_video.c
++++ b/drivers/media/usb/uvc/uvc_video.c
+@@ -20,6 +20,7 @@
+ #include <linux/atomic.h>
+ #include <linux/unaligned.h>
+
++#include <media/jpeg.h>
+ #include <media/v4l2-common.h>
+
+ #include "uvcvideo.h"
+@@ -1137,6 +1138,7 @@ static void uvc_video_stats_stop(struct uvc_streaming *stream)
+ static int uvc_video_decode_start(struct uvc_streaming *stream,
+ struct uvc_buffer *buf, const u8 *data, int len)
+ {
++ u8 header_len;
+ u8 fid;
+
+ /*
+@@ -1150,6 +1152,7 @@ static int uvc_video_decode_start(struct uvc_streaming *stream,
+ return -EINVAL;
+ }
+
++ header_len = data[0];
+ fid = data[1] & UVC_STREAM_FID;
+
+ /*
+@@ -1231,9 +1234,31 @@ static int uvc_video_decode_start(struct uvc_streaming *stream,
+ return -EAGAIN;
+ }
+
++ /*
++ * Some cameras, when running two parallel streams (one MJPEG alongside
++ * another non-MJPEG stream), are known to lose the EOF packet for a frame.
++ * We can detect the end of a frame by checking for a new SOI marker, as
++ * the SOI always lies on the packet boundary between two frames for
++ * these devices.
++ */
++ if (stream->dev->quirks & UVC_QUIRK_MJPEG_NO_EOF &&
++ (stream->cur_format->fcc == V4L2_PIX_FMT_MJPEG ||
++ stream->cur_format->fcc == V4L2_PIX_FMT_JPEG)) {
++ const u8 *packet = data + header_len;
++
++ if (len >= header_len + 2 &&
++ packet[0] == 0xff && packet[1] == JPEG_MARKER_SOI &&
++ buf->bytesused != 0) {
++ buf->state = UVC_BUF_STATE_READY;
++ buf->error = 1;
++ stream->last_fid ^= UVC_STREAM_FID;
++ return -EAGAIN;
++ }
++ }
++
+ stream->last_fid = fid;
+
+- return data[0];
++ return header_len;
+ }
+
+ static inline enum dma_data_direction uvc_stream_dir(
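+
The UVC_QUIRK_MJPEG_NO_EOF path above declares a frame finished when the payload right after the UVC header starts with a JPEG SOI marker (0xff 0xd8) while the current buffer already holds data. A user-space sketch of that boundary test, with names loosely mirroring the patch (frame_boundary() itself is illustrative):

/* Does this packet begin a new JPEG frame?  SOI directly after the
 * UVC header, with data already accumulated, means the EOF for the
 * previous frame was lost. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define JPEG_MARKER_SOI 0xd8

static int frame_boundary(const uint8_t *data, size_t len, size_t bytesused)
{
	uint8_t header_len = data[0];
	const uint8_t *packet = data + header_len;

	return len >= (size_t)header_len + 2 &&
	       packet[0] == 0xff && packet[1] == JPEG_MARKER_SOI &&
	       bytesused != 0;
}

int main(void)
{
	const uint8_t pkt[] = { 2, 0x00, 0xff, 0xd8, 0xff, 0xe0 };

	printf("boundary: %d\n", frame_boundary(pkt, sizeof(pkt), 4096));
	return 0;
}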
+diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
+index 272dc9cf01ee7d..74ac2106f08e2c 100644
+--- a/drivers/media/usb/uvc/uvcvideo.h
++++ b/drivers/media/usb/uvc/uvcvideo.h
+@@ -76,6 +76,7 @@
+ #define UVC_QUIRK_NO_RESET_RESUME 0x00004000
+ #define UVC_QUIRK_DISABLE_AUTOSUSPEND 0x00008000
+ #define UVC_QUIRK_INVALID_DEVICE_SOF 0x00010000
++#define UVC_QUIRK_MJPEG_NO_EOF 0x00020000
+
+ /* Format flags */
+ #define UVC_FMT_FLAG_COMPRESSED 0x00000001
+diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
+index 6e62415de2e5ec..d5d868cb4edc7b 100644
+--- a/drivers/mmc/host/mtk-sd.c
++++ b/drivers/mmc/host/mtk-sd.c
+@@ -263,6 +263,7 @@
+ #define MSDC_PAD_TUNE_CMD2_SEL BIT(21) /* RW */
+
+ #define PAD_DS_TUNE_DLY_SEL BIT(0) /* RW */
++#define PAD_DS_TUNE_DLY2_SEL BIT(1) /* RW */
+ #define PAD_DS_TUNE_DLY1 GENMASK(6, 2) /* RW */
+ #define PAD_DS_TUNE_DLY2 GENMASK(11, 7) /* RW */
+ #define PAD_DS_TUNE_DLY3 GENMASK(16, 12) /* RW */
+@@ -308,6 +309,7 @@
+
+ /* EMMC50_PAD_DS_TUNE mask */
+ #define PAD_DS_DLY_SEL BIT(16) /* RW */
++#define PAD_DS_DLY2_SEL BIT(15) /* RW */
+ #define PAD_DS_DLY1 GENMASK(14, 10) /* RW */
+ #define PAD_DS_DLY3 GENMASK(4, 0) /* RW */
+
+@@ -2361,13 +2363,23 @@ static int msdc_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ static int msdc_prepare_hs400_tuning(struct mmc_host *mmc, struct mmc_ios *ios)
+ {
+ struct msdc_host *host = mmc_priv(mmc);
++
+ host->hs400_mode = true;
+
+- if (host->top_base)
+- writel(host->hs400_ds_delay,
+- host->top_base + EMMC50_PAD_DS_TUNE);
+- else
+- writel(host->hs400_ds_delay, host->base + PAD_DS_TUNE);
++ if (host->top_base) {
++ if (host->hs400_ds_dly3)
++ sdr_set_field(host->top_base + EMMC50_PAD_DS_TUNE,
++ PAD_DS_DLY3, host->hs400_ds_dly3);
++ if (host->hs400_ds_delay)
++ writel(host->hs400_ds_delay,
++ host->top_base + EMMC50_PAD_DS_TUNE);
++ } else {
++ if (host->hs400_ds_dly3)
++ sdr_set_field(host->base + PAD_DS_TUNE,
++ PAD_DS_TUNE_DLY3, host->hs400_ds_dly3);
++ if (host->hs400_ds_delay)
++ writel(host->hs400_ds_delay, host->base + PAD_DS_TUNE);
++ }
+ /* hs400 mode must set it to 0 */
+ sdr_clr_bits(host->base + MSDC_PATCH_BIT2, MSDC_PATCH_BIT2_CFGCRCSTS);
+ /* to improve read performance, set outstanding to 2 */
+@@ -2387,14 +2399,11 @@ static int msdc_execute_hs400_tuning(struct mmc_host *mmc, struct mmc_card *card
+ if (host->top_base) {
+ sdr_set_bits(host->top_base + EMMC50_PAD_DS_TUNE,
+ PAD_DS_DLY_SEL);
+- if (host->hs400_ds_dly3)
+- sdr_set_field(host->top_base + EMMC50_PAD_DS_TUNE,
+- PAD_DS_DLY3, host->hs400_ds_dly3);
++ sdr_clr_bits(host->top_base + EMMC50_PAD_DS_TUNE,
++ PAD_DS_DLY2_SEL);
+ } else {
+ sdr_set_bits(host->base + PAD_DS_TUNE, PAD_DS_TUNE_DLY_SEL);
+- if (host->hs400_ds_dly3)
+- sdr_set_field(host->base + PAD_DS_TUNE,
+- PAD_DS_TUNE_DLY3, host->hs400_ds_dly3);
++ sdr_clr_bits(host->base + PAD_DS_TUNE, PAD_DS_TUNE_DLY2_SEL);
+ }
+
+ host->hs400_tuning = true;
+diff --git a/drivers/net/can/c_can/c_can_platform.c b/drivers/net/can/c_can/c_can_platform.c
+index 6cba9717a6d87d..399844809bbeaa 100644
+--- a/drivers/net/can/c_can/c_can_platform.c
++++ b/drivers/net/can/c_can/c_can_platform.c
+@@ -385,15 +385,16 @@ static int c_can_plat_probe(struct platform_device *pdev)
+ if (ret) {
+ dev_err(&pdev->dev, "registering %s failed (err=%d)\n",
+ KBUILD_MODNAME, ret);
+- goto exit_free_device;
++ goto exit_pm_runtime;
+ }
+
+ dev_info(&pdev->dev, "%s device registered (regs=%p, irq=%d)\n",
+ KBUILD_MODNAME, priv->base, dev->irq);
+ return 0;
+
+-exit_free_device:
++exit_pm_runtime:
+ pm_runtime_disable(priv->device);
++exit_free_device:
+ free_c_can_dev(dev);
+ exit:
+ dev_err(&pdev->dev, "probe failed\n");
+diff --git a/drivers/net/can/ctucanfd/ctucanfd_base.c b/drivers/net/can/ctucanfd/ctucanfd_base.c
+index 64c349fd46007f..f65c1a1e05ccdf 100644
+--- a/drivers/net/can/ctucanfd/ctucanfd_base.c
++++ b/drivers/net/can/ctucanfd/ctucanfd_base.c
+@@ -867,10 +867,12 @@ static void ctucan_err_interrupt(struct net_device *ndev, u32 isr)
+ }
+ break;
+ case CAN_STATE_ERROR_ACTIVE:
+- cf->can_id |= CAN_ERR_CNT;
+- cf->data[1] = CAN_ERR_CRTL_ACTIVE;
+- cf->data[6] = bec.txerr;
+- cf->data[7] = bec.rxerr;
++ if (skb) {
++ cf->can_id |= CAN_ERR_CNT;
++ cf->data[1] = CAN_ERR_CRTL_ACTIVE;
++ cf->data[6] = bec.txerr;
++ cf->data[7] = bec.rxerr;
++ }
+ break;
+ default:
+ netdev_warn(ndev, "unhandled error state (%d:%s)!\n",
+diff --git a/drivers/net/can/rockchip/rockchip_canfd-core.c b/drivers/net/can/rockchip/rockchip_canfd-core.c
+index df18c85fc07841..d9a937ba126c3c 100644
+--- a/drivers/net/can/rockchip/rockchip_canfd-core.c
++++ b/drivers/net/can/rockchip/rockchip_canfd-core.c
+@@ -622,7 +622,7 @@ rkcanfd_handle_rx_fifo_overflow_int(struct rkcanfd_priv *priv)
+ netdev_dbg(priv->ndev, "RX-FIFO overflow\n");
+
+ skb = rkcanfd_alloc_can_err_skb(priv, &cf, &timestamp);
+- if (skb)
++ if (!skb)
+ return 0;
+
+ rkcanfd_get_berr_counter_corrected(priv, &bec);
+diff --git a/drivers/net/can/usb/etas_es58x/es58x_devlink.c b/drivers/net/can/usb/etas_es58x/es58x_devlink.c
+index eee20839d96fd4..0d155eb1b9e999 100644
+--- a/drivers/net/can/usb/etas_es58x/es58x_devlink.c
++++ b/drivers/net/can/usb/etas_es58x/es58x_devlink.c
+@@ -248,7 +248,11 @@ static int es58x_devlink_info_get(struct devlink *devlink,
+ return ret;
+ }
+
+- return devlink_info_serial_number_put(req, es58x_dev->udev->serial);
++ if (es58x_dev->udev->serial)
++ ret = devlink_info_serial_number_put(req,
++ es58x_dev->udev->serial);
++
++ return ret;
+ }
+
+ const struct devlink_ops es58x_dl_ops = {
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+index b4fbb99bfad208..a3d6b8f198a86a 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c
+@@ -2159,8 +2159,13 @@ static int idpf_open(struct net_device *netdev)
+ idpf_vport_ctrl_lock(netdev);
+ vport = idpf_netdev_to_vport(netdev);
+
++ err = idpf_set_real_num_queues(vport);
++ if (err)
++ goto unlock;
++
+ err = idpf_vport_open(vport);
+
++unlock:
+ idpf_vport_ctrl_unlock(netdev);
+
+ return err;
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index 60d15b3e6e2faa..1e0d1f9b07fbcf 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -3008,8 +3008,6 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+ return -EINVAL;
+
+ rsc_segments = DIV_ROUND_UP(skb->data_len, rsc_seg_len);
+- if (unlikely(rsc_segments == 1))
+- return 0;
+
+ NAPI_GRO_CB(skb)->count = rsc_segments;
+ skb_shinfo(skb)->gso_size = rsc_seg_len;
+@@ -3072,6 +3070,7 @@ idpf_rx_process_skb_fields(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+ idpf_rx_hash(rxq, skb, rx_desc, decoded);
+
+ skb->protocol = eth_type_trans(skb, rxq->netdev);
++ skb_record_rx_queue(skb, rxq->idx);
+
+ if (le16_get_bits(rx_desc->hdrlen_flags,
+ VIRTCHNL2_RX_FLEX_DESC_ADV_RSC_M))
+@@ -3080,8 +3079,6 @@ idpf_rx_process_skb_fields(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+ csum_bits = idpf_rx_splitq_extract_csum_bits(rx_desc);
+ idpf_rx_csum(rxq, skb, csum_bits, decoded);
+
+- skb_record_rx_queue(skb, rxq->idx);
+-
+ return 0;
+ }
+
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 6e70bca15db1d8..1ec9e8cc99d947 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -1096,6 +1096,7 @@ static int igc_init_empty_frame(struct igc_ring *ring,
+ return -ENOMEM;
+ }
+
++ buffer->type = IGC_TX_BUFFER_TYPE_SKB;
+ buffer->skb = skb;
+ buffer->protocol = 0;
+ buffer->bytecount = skb->len;
+@@ -2707,8 +2708,9 @@ static int igc_clean_rx_irq(struct igc_q_vector *q_vector, const int budget)
+ }
+
+ static struct sk_buff *igc_construct_skb_zc(struct igc_ring *ring,
+- struct xdp_buff *xdp)
++ struct igc_xdp_buff *ctx)
+ {
++ struct xdp_buff *xdp = &ctx->xdp;
+ unsigned int totalsize = xdp->data_end - xdp->data_meta;
+ unsigned int metasize = xdp->data - xdp->data_meta;
+ struct sk_buff *skb;
+@@ -2727,27 +2729,28 @@ static struct sk_buff *igc_construct_skb_zc(struct igc_ring *ring,
+ __skb_pull(skb, metasize);
+ }
+
++ if (ctx->rx_ts) {
++ skb_shinfo(skb)->tx_flags |= SKBTX_HW_TSTAMP_NETDEV;
++ skb_hwtstamps(skb)->netdev_data = ctx->rx_ts;
++ }
++
+ return skb;
+ }
+
+ static void igc_dispatch_skb_zc(struct igc_q_vector *q_vector,
+ union igc_adv_rx_desc *desc,
+- struct xdp_buff *xdp,
+- ktime_t timestamp)
++ struct igc_xdp_buff *ctx)
+ {
+ struct igc_ring *ring = q_vector->rx.ring;
+ struct sk_buff *skb;
+
+- skb = igc_construct_skb_zc(ring, xdp);
++ skb = igc_construct_skb_zc(ring, ctx);
+ if (!skb) {
+ ring->rx_stats.alloc_failed++;
+ set_bit(IGC_RING_FLAG_RX_ALLOC_FAILED, &ring->flags);
+ return;
+ }
+
+- if (timestamp)
+- skb_hwtstamps(skb)->hwtstamp = timestamp;
+-
+ if (igc_cleanup_headers(ring, desc, skb))
+ return;
+
+@@ -2783,7 +2786,6 @@ static int igc_clean_rx_irq_zc(struct igc_q_vector *q_vector, const int budget)
+ union igc_adv_rx_desc *desc;
+ struct igc_rx_buffer *bi;
+ struct igc_xdp_buff *ctx;
+- ktime_t timestamp = 0;
+ unsigned int size;
+ int res;
+
+@@ -2813,6 +2815,8 @@ static int igc_clean_rx_irq_zc(struct igc_q_vector *q_vector, const int budget)
+ */
+ bi->xdp->data_meta += IGC_TS_HDR_LEN;
+ size -= IGC_TS_HDR_LEN;
++ } else {
++ ctx->rx_ts = NULL;
+ }
+
+ bi->xdp->data_end = bi->xdp->data + size;
+@@ -2821,7 +2825,7 @@ static int igc_clean_rx_irq_zc(struct igc_q_vector *q_vector, const int budget)
+ res = __igc_xdp_run_prog(adapter, prog, bi->xdp);
+ switch (res) {
+ case IGC_XDP_PASS:
+- igc_dispatch_skb_zc(q_vector, desc, bi->xdp, timestamp);
++ igc_dispatch_skb_zc(q_vector, desc, ctx);
+ fallthrough;
+ case IGC_XDP_CONSUMED:
+ xsk_buff_free(bi->xdp);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
+index 2bed8c86b7cfc5..3f64cdbabfa3c1 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_ethtool.c
+@@ -768,7 +768,9 @@ static void __mlxsw_sp_port_get_stats(struct net_device *dev,
+ err = mlxsw_sp_get_hw_stats_by_group(&hw_stats, &len, grp);
+ if (err)
+ return;
+- mlxsw_sp_port_get_stats_raw(dev, grp, prio, ppcnt_pl);
++ err = mlxsw_sp_port_get_stats_raw(dev, grp, prio, ppcnt_pl);
++ if (err)
++ return;
+ for (i = 0; i < len; i++) {
+ data[data_index + i] = hw_stats[i].getter(ppcnt_pl);
+ if (!hw_stats[i].cells_bytes)
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 3c0d067c360992..3e090f87f97ebd 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -585,21 +585,30 @@ static void am65_cpsw_nuss_xmit_free(struct am65_cpsw_tx_chn *tx_chn,
+ static void am65_cpsw_nuss_tx_cleanup(void *data, dma_addr_t desc_dma)
+ {
+ struct am65_cpsw_tx_chn *tx_chn = data;
++ enum am65_cpsw_tx_buf_type buf_type;
+ struct cppi5_host_desc_t *desc_tx;
++ struct xdp_frame *xdpf;
+ struct sk_buff *skb;
+ void **swdata;
+
+ desc_tx = k3_cppi_desc_pool_dma2virt(tx_chn->desc_pool, desc_dma);
+ swdata = cppi5_hdesc_get_swdata(desc_tx);
+- skb = *(swdata);
+- am65_cpsw_nuss_xmit_free(tx_chn, desc_tx);
++ buf_type = am65_cpsw_nuss_buf_type(tx_chn, desc_dma);
++ if (buf_type == AM65_CPSW_TX_BUF_TYPE_SKB) {
++ skb = *(swdata);
++ dev_kfree_skb_any(skb);
++ } else {
++ xdpf = *(swdata);
++ xdp_return_frame(xdpf);
++ }
+
+- dev_kfree_skb_any(skb);
++ am65_cpsw_nuss_xmit_free(tx_chn, desc_tx);
+ }
+
+ static struct sk_buff *am65_cpsw_build_skb(void *page_addr,
+ struct net_device *ndev,
+- unsigned int len)
++ unsigned int len,
++ unsigned int headroom)
+ {
+ struct sk_buff *skb;
+
+@@ -609,7 +618,7 @@ static struct sk_buff *am65_cpsw_build_skb(void *page_addr,
+ if (unlikely(!skb))
+ return NULL;
+
+- skb_reserve(skb, AM65_CPSW_HEADROOM);
++ skb_reserve(skb, headroom);
+ skb->dev = ndev;
+
+ return skb;
+@@ -1191,16 +1200,8 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_rx_flow *flow,
+ dev_dbg(dev, "%s rx csum_info:%#x\n", __func__, csum_info);
+
+ dma_unmap_single(rx_chn->dma_dev, buf_dma, buf_dma_len, DMA_FROM_DEVICE);
+-
+ k3_cppi_desc_pool_free(rx_chn->desc_pool, desc_rx);
+
+- skb = am65_cpsw_build_skb(page_addr, ndev,
+- AM65_CPSW_MAX_PACKET_SIZE);
+- if (unlikely(!skb)) {
+- new_page = page;
+- goto requeue;
+- }
+-
+ if (port->xdp_prog) {
+ xdp_init_buff(&xdp, PAGE_SIZE, &port->xdp_rxq[flow->id]);
+ xdp_prepare_buff(&xdp, page_addr, AM65_CPSW_HEADROOM,
+@@ -1210,9 +1211,16 @@ static int am65_cpsw_nuss_rx_packets(struct am65_cpsw_rx_flow *flow,
+ if (*xdp_state != AM65_CPSW_XDP_PASS)
+ goto allocate;
+
+- /* Compute additional headroom to be reserved */
+- headroom = (xdp.data - xdp.data_hard_start) - skb_headroom(skb);
+- skb_reserve(skb, headroom);
++ headroom = xdp.data - xdp.data_hard_start;
++ } else {
++ headroom = AM65_CPSW_HEADROOM;
++ }
++
++ skb = am65_cpsw_build_skb(page_addr, ndev,
++ AM65_CPSW_MAX_PACKET_SIZE, headroom);
++ if (unlikely(!skb)) {
++ new_page = page;
++ goto requeue;
+ }
+
+ ndev_priv = netdev_priv(ndev);
+diff --git a/drivers/net/netdevsim/ipsec.c b/drivers/net/netdevsim/ipsec.c
+index 3612b0633bd177..88187dd4eb2d40 100644
+--- a/drivers/net/netdevsim/ipsec.c
++++ b/drivers/net/netdevsim/ipsec.c
+@@ -39,10 +39,14 @@ static ssize_t nsim_dbg_netdev_ops_read(struct file *filp,
+ if (!sap->used)
+ continue;
+
+- p += scnprintf(p, bufsize - (p - buf),
+- "sa[%i] %cx ipaddr=0x%08x %08x %08x %08x\n",
+- i, (sap->rx ? 'r' : 't'), sap->ipaddr[0],
+- sap->ipaddr[1], sap->ipaddr[2], sap->ipaddr[3]);
++ if (sap->xs->props.family == AF_INET6)
++ p += scnprintf(p, bufsize - (p - buf),
++ "sa[%i] %cx ipaddr=%pI6c\n",
++ i, (sap->rx ? 'r' : 't'), &sap->ipaddr);
++ else
++ p += scnprintf(p, bufsize - (p - buf),
++ "sa[%i] %cx ipaddr=%pI4\n",
++ i, (sap->rx ? 'r' : 't'), &sap->ipaddr[3]);
+ p += scnprintf(p, bufsize - (p - buf),
+ "sa[%i] spi=0x%08x proto=0x%x salt=0x%08x crypt=%d\n",
+ i, be32_to_cpu(sap->xs->id.spi),
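+
The netdevsim change prints IPv4 SAs from the last word of the four-word address array (%pI4 on &sap->ipaddr[3]) and IPv6 SAs from the whole array (%pI6c). The same layout can be exercised in user space with inet_ntop() standing in for the kernel format specifiers:

/* IPv4 occupies only the last 32-bit word of an xfrm-style 4-word
 * address; IPv6 uses all 16 bytes. */
#include <arpa/inet.h>
#include <stdio.h>

int main(void)
{
	unsigned int addr[4] = { 0, 0, 0, 0 };
	char buf[INET6_ADDRSTRLEN];

	inet_pton(AF_INET, "192.0.2.1", &addr[3]);
	printf("v4: %s\n", inet_ntop(AF_INET, &addr[3], buf, sizeof(buf)));

	inet_pton(AF_INET6, "2001:db8::1", addr);
	printf("v6: %s\n", inet_ntop(AF_INET6, addr, buf, sizeof(buf)));
	return 0;
}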
+diff --git a/drivers/net/team/team_core.c b/drivers/net/team/team_core.c
+index 7f4ef219eee44f..6cfafaac1b4fb6 100644
+--- a/drivers/net/team/team_core.c
++++ b/drivers/net/team/team_core.c
+@@ -2640,7 +2640,9 @@ int team_nl_options_set_doit(struct sk_buff *skb, struct genl_info *info)
+ ctx.data.u32_val = nla_get_u32(attr_data);
+ break;
+ case TEAM_OPTION_TYPE_STRING:
+- if (nla_len(attr_data) > TEAM_STRING_MAX_LEN) {
++ if (nla_len(attr_data) > TEAM_STRING_MAX_LEN ||
++ !memchr(nla_data(attr_data), '\0',
++ nla_len(attr_data))) {
+ err = -EINVAL;
+ goto team_put;
+ }
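+
The team option fix accepts a TEAM_OPTION_TYPE_STRING attribute only if a NUL byte occurs within its netlink length, which memchr() verifies in one call. A minimal stand-alone version of that validation:

/* A buffer is a safe C string only if it contains a terminator
 * within its declared length. */
#include <stdio.h>
#include <string.h>

static int string_ok(const void *data, size_t len, size_t max)
{
	return len <= max && memchr(data, '\0', len) != NULL;
}

int main(void)
{
	char good[] = "hash";			/* includes trailing NUL */
	char bad[4] = { 'h', 'a', 's', 'h' };	/* no terminator */

	printf("%d %d\n", string_ok(good, sizeof(good), 32),
	       string_ok(bad, sizeof(bad), 32));
	return 0;
}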
+diff --git a/drivers/net/vxlan/vxlan_core.c b/drivers/net/vxlan/vxlan_core.c
+index 6e9a3795846aa3..5e7cdd1b806fbd 100644
+--- a/drivers/net/vxlan/vxlan_core.c
++++ b/drivers/net/vxlan/vxlan_core.c
+@@ -2871,8 +2871,11 @@ static int vxlan_init(struct net_device *dev)
+ struct vxlan_dev *vxlan = netdev_priv(dev);
+ int err;
+
+- if (vxlan->cfg.flags & VXLAN_F_VNIFILTER)
+- vxlan_vnigroup_init(vxlan);
++ if (vxlan->cfg.flags & VXLAN_F_VNIFILTER) {
++ err = vxlan_vnigroup_init(vxlan);
++ if (err)
++ return err;
++ }
+
+ err = gro_cells_init(&vxlan->gro_cells, dev);
+ if (err)
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.c b/drivers/net/wireless/ath/ath12k/wmi.c
+index 2cd3ff9b0164c8..a6ba97949440e4 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.c
++++ b/drivers/net/wireless/ath/ath12k/wmi.c
+@@ -4681,6 +4681,22 @@ static struct ath12k_reg_rule
+ return reg_rule_ptr;
+ }
+
++static u8 ath12k_wmi_ignore_num_extra_rules(struct ath12k_wmi_reg_rule_ext_params *rule,
++ u32 num_reg_rules)
++{
++ u8 num_invalid_5ghz_rules = 0;
++ u32 count, start_freq;
++
++ for (count = 0; count < num_reg_rules; count++) {
++ start_freq = le32_get_bits(rule[count].freq_info, REG_RULE_START_FREQ);
++
++ if (start_freq >= ATH12K_MIN_6G_FREQ)
++ num_invalid_5ghz_rules++;
++ }
++
++ return num_invalid_5ghz_rules;
++}
++
+ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+ struct sk_buff *skb,
+ struct ath12k_reg_info *reg_info)
+@@ -4691,6 +4707,7 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+ u32 num_2g_reg_rules, num_5g_reg_rules;
+ u32 num_6g_reg_rules_ap[WMI_REG_CURRENT_MAX_AP_TYPE];
+ u32 num_6g_reg_rules_cl[WMI_REG_CURRENT_MAX_AP_TYPE][WMI_REG_MAX_CLIENT_TYPE];
++ u8 num_invalid_5ghz_ext_rules;
+ u32 total_reg_rules = 0;
+ int ret, i, j;
+
+@@ -4784,20 +4801,6 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+
+ memcpy(reg_info->alpha2, &ev->alpha2, REG_ALPHA2_LEN);
+
+- /* FIXME: Currently FW includes 6G reg rule also in 5G rule
+- * list for country US.
+- * Having same 6G reg rule in 5G and 6G rules list causes
+- * intersect check to be true, and same rules will be shown
+- * multiple times in iw cmd. So added hack below to avoid
+- * parsing 6G rule from 5G reg rule list, and this can be
+- * removed later, after FW updates to remove 6G reg rule
+- * from 5G rules list.
+- */
+- if (memcmp(reg_info->alpha2, "US", 2) == 0) {
+- reg_info->num_5g_reg_rules = REG_US_5G_NUM_REG_RULES;
+- num_5g_reg_rules = reg_info->num_5g_reg_rules;
+- }
+-
+ reg_info->dfs_region = le32_to_cpu(ev->dfs_region);
+ reg_info->phybitmap = le32_to_cpu(ev->phybitmap);
+ reg_info->num_phy = le32_to_cpu(ev->num_phy);
+@@ -4900,8 +4903,29 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+ }
+ }
+
++ ext_wmi_reg_rule += num_2g_reg_rules;
++
++	/* For a few countries the firmware includes 6 GHz reg rules in the
++	 * 5 GHz rule list in addition to the separate 6 GHz rules. Having
++	 * the same 6 GHz rule in both lists makes the intersect check true,
++	 * so the same rules are shown multiple times in the iw output.
++	 * Hence, avoid parsing 6 GHz rules from the 5 GHz reg rule list.
++ */
++ num_invalid_5ghz_ext_rules = ath12k_wmi_ignore_num_extra_rules(ext_wmi_reg_rule,
++ num_5g_reg_rules);
++
++ if (num_invalid_5ghz_ext_rules) {
++ ath12k_dbg(ab, ATH12K_DBG_WMI,
++ "CC: %s 5 GHz reg rules number %d from fw, %d number of invalid 5 GHz rules",
++ reg_info->alpha2, reg_info->num_5g_reg_rules,
++ num_invalid_5ghz_ext_rules);
++
++ num_5g_reg_rules = num_5g_reg_rules - num_invalid_5ghz_ext_rules;
++ reg_info->num_5g_reg_rules = num_5g_reg_rules;
++ }
++
+ if (num_5g_reg_rules) {
+- ext_wmi_reg_rule += num_2g_reg_rules;
+ reg_info->reg_rules_5g_ptr =
+ create_ext_reg_rules_from_wmi(num_5g_reg_rules,
+ ext_wmi_reg_rule);
+@@ -4913,7 +4937,12 @@ static int ath12k_pull_reg_chan_list_ext_update_ev(struct ath12k_base *ab,
+ }
+ }
+
+- ext_wmi_reg_rule += num_5g_reg_rules;
++	/* The number of 5 GHz reg rules was reduced above, but the pointer
++	 * into ext_wmi_reg_rule must still advance past every rule the
++	 * firmware sent, including the invalid ones.
++	 *
++	 * NOTE: num_invalid_5ghz_ext_rules is 0 in all other cases.
++ */
++ ext_wmi_reg_rule += (num_5g_reg_rules + num_invalid_5ghz_ext_rules);
+
+ for (i = 0; i < WMI_REG_CURRENT_MAX_AP_TYPE; i++) {
+ reg_info->reg_rules_6g_ap_ptr[i] =
+diff --git a/drivers/net/wireless/ath/ath12k/wmi.h b/drivers/net/wireless/ath/ath12k/wmi.h
+index 6a913f9b831580..b495cdea7111c3 100644
+--- a/drivers/net/wireless/ath/ath12k/wmi.h
++++ b/drivers/net/wireless/ath/ath12k/wmi.h
+@@ -3943,7 +3943,6 @@ struct ath12k_wmi_eht_rate_set_params {
+ #define MAX_REG_RULES 10
+ #define REG_ALPHA2_LEN 2
+ #define MAX_6G_REG_RULES 5
+-#define REG_US_5G_NUM_REG_RULES 4
+
+ enum wmi_start_event_param {
+ WMI_VDEV_START_RESP_EVENT = 0,
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.c b/drivers/net/wireless/realtek/rtw89/pci.c
+index 5aef7fa378788c..0ac84f968994b4 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.c
++++ b/drivers/net/wireless/realtek/rtw89/pci.c
+@@ -2492,7 +2492,7 @@ static int rtw89_pci_dphy_delay(struct rtw89_dev *rtwdev)
+ PCIE_DPHY_DLY_25US, PCIE_PHY_GEN1);
+ }
+
+-static void rtw89_pci_power_wake(struct rtw89_dev *rtwdev, bool pwr_up)
++static void rtw89_pci_power_wake_ax(struct rtw89_dev *rtwdev, bool pwr_up)
+ {
+ if (pwr_up)
+ rtw89_write32_set(rtwdev, R_AX_HCI_OPT_CTRL, BIT_WAKE_CTRL);
+@@ -2799,6 +2799,8 @@ static int rtw89_pci_ops_deinit(struct rtw89_dev *rtwdev)
+ {
+ const struct rtw89_pci_info *info = rtwdev->pci_info;
+
++ rtw89_pci_power_wake(rtwdev, false);
++
+ if (rtwdev->chip->chip_id == RTL8852A) {
+ /* ltr sw trigger */
+ rtw89_write32_set(rtwdev, R_AX_LTR_CTRL_0, B_AX_APP_LTR_IDLE);
+@@ -2841,7 +2843,7 @@ static int rtw89_pci_ops_mac_pre_init_ax(struct rtw89_dev *rtwdev)
+ return ret;
+ }
+
+- rtw89_pci_power_wake(rtwdev, true);
++ rtw89_pci_power_wake_ax(rtwdev, true);
+ rtw89_pci_autoload_hang(rtwdev);
+ rtw89_pci_l12_vmain(rtwdev);
+ rtw89_pci_gen2_force_ib(rtwdev);
+@@ -2886,6 +2888,13 @@ static int rtw89_pci_ops_mac_pre_init_ax(struct rtw89_dev *rtwdev)
+ return 0;
+ }
+
++static int rtw89_pci_ops_mac_pre_deinit_ax(struct rtw89_dev *rtwdev)
++{
++ rtw89_pci_power_wake_ax(rtwdev, false);
++
++ return 0;
++}
++
+ int rtw89_pci_ltr_set(struct rtw89_dev *rtwdev, bool en)
+ {
+ u32 val;
+@@ -4264,7 +4273,7 @@ const struct rtw89_pci_gen_def rtw89_pci_gen_ax = {
+ B_AX_RDU_INT},
+
+ .mac_pre_init = rtw89_pci_ops_mac_pre_init_ax,
+- .mac_pre_deinit = NULL,
++ .mac_pre_deinit = rtw89_pci_ops_mac_pre_deinit_ax,
+ .mac_post_init = rtw89_pci_ops_mac_post_init_ax,
+
+ .clr_idx_all = rtw89_pci_clr_idx_all_ax,
+@@ -4280,6 +4289,8 @@ const struct rtw89_pci_gen_def rtw89_pci_gen_ax = {
+ .aspm_set = rtw89_pci_aspm_set_ax,
+ .clkreq_set = rtw89_pci_clkreq_set_ax,
+ .l1ss_set = rtw89_pci_l1ss_set_ax,
++
++ .power_wake = rtw89_pci_power_wake_ax,
+ };
+ EXPORT_SYMBOL(rtw89_pci_gen_ax);
+
+diff --git a/drivers/net/wireless/realtek/rtw89/pci.h b/drivers/net/wireless/realtek/rtw89/pci.h
+index 48c3ab735db2a7..0ea4dcb84dd862 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci.h
++++ b/drivers/net/wireless/realtek/rtw89/pci.h
+@@ -1276,6 +1276,8 @@ struct rtw89_pci_gen_def {
+ void (*aspm_set)(struct rtw89_dev *rtwdev, bool enable);
+ void (*clkreq_set)(struct rtw89_dev *rtwdev, bool enable);
+ void (*l1ss_set)(struct rtw89_dev *rtwdev, bool enable);
++
++ void (*power_wake)(struct rtw89_dev *rtwdev, bool pwr_up);
+ };
+
+ struct rtw89_pci_info {
+@@ -1766,4 +1768,13 @@ static inline int rtw89_pci_poll_txdma_ch_idle(struct rtw89_dev *rtwdev)
+
+ return gen_def->poll_txdma_ch_idle(rtwdev);
+ }
++
++static inline void rtw89_pci_power_wake(struct rtw89_dev *rtwdev, bool pwr_up)
++{
++ const struct rtw89_pci_info *info = rtwdev->pci_info;
++ const struct rtw89_pci_gen_def *gen_def = info->gen_def;
++
++ gen_def->power_wake(rtwdev, pwr_up);
++}
++
+ #endif
+diff --git a/drivers/net/wireless/realtek/rtw89/pci_be.c b/drivers/net/wireless/realtek/rtw89/pci_be.c
+index 7cc32822296528..2f0d9ff25ba520 100644
+--- a/drivers/net/wireless/realtek/rtw89/pci_be.c
++++ b/drivers/net/wireless/realtek/rtw89/pci_be.c
+@@ -614,5 +614,7 @@ const struct rtw89_pci_gen_def rtw89_pci_gen_be = {
+ .aspm_set = rtw89_pci_aspm_set_be,
+ .clkreq_set = rtw89_pci_clkreq_set_be,
+ .l1ss_set = rtw89_pci_l1ss_set_be,
++
++ .power_wake = _patch_pcie_power_wake_be,
+ };
+ EXPORT_SYMBOL(rtw89_pci_gen_be);
+diff --git a/drivers/parport/parport_serial.c b/drivers/parport/parport_serial.c
+index 3644997a834255..24d4f3a3ec3d0e 100644
+--- a/drivers/parport/parport_serial.c
++++ b/drivers/parport/parport_serial.c
+@@ -266,10 +266,14 @@ static struct pci_device_id parport_serial_pci_tbl[] = {
+ { 0x1409, 0x7168, 0x1409, 0xd079, 0, 0, timedia_9079c },
+
+ /* WCH CARDS */
+- { 0x4348, 0x5053, PCI_ANY_ID, PCI_ANY_ID, 0, 0, wch_ch353_1s1p},
+- { 0x4348, 0x7053, 0x4348, 0x3253, 0, 0, wch_ch353_2s1p},
+- { 0x1c00, 0x3050, 0x1c00, 0x3050, 0, 0, wch_ch382_0s1p},
+- { 0x1c00, 0x3250, 0x1c00, 0x3250, 0, 0, wch_ch382_2s1p},
++ { PCI_VENDOR_ID_WCHCN, PCI_DEVICE_ID_WCHCN_CH353_1S1P,
++ PCI_ANY_ID, PCI_ANY_ID, 0, 0, wch_ch353_1s1p },
++ { PCI_VENDOR_ID_WCHCN, PCI_DEVICE_ID_WCHCN_CH353_2S1P,
++ 0x4348, 0x3253, 0, 0, wch_ch353_2s1p },
++ { PCI_VENDOR_ID_WCHIC, PCI_DEVICE_ID_WCHIC_CH382_0S1P,
++ 0x1c00, 0x3050, 0, 0, wch_ch382_0s1p },
++ { PCI_VENDOR_ID_WCHIC, PCI_DEVICE_ID_WCHIC_CH382_2S1P,
++ 0x1c00, 0x3250, 0, 0, wch_ch382_2s1p },
+
+ /* BrainBoxes PX272/PX306 MIO card */
+ { PCI_VENDOR_ID_INTASHIELD, 0x4100,
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 8103bc24a54ea4..064067d9c8b529 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -5522,7 +5522,7 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x443, quirk_intel_qat_vf_cap);
+ * AMD Matisse USB 3.0 Host Controller 0x149c
+ * Intel 82579LM Gigabit Ethernet Controller 0x1502
+ * Intel 82579V Gigabit Ethernet Controller 0x1503
+- *
++ * Mediatek MT7922 802.11ax PCI Express Wireless Network Adapter
+ */
+ static void quirk_no_flr(struct pci_dev *dev)
+ {
+@@ -5534,6 +5534,7 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x149c, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_AMD, 0x7901, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1502, quirk_no_flr);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x1503, quirk_no_flr);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_MEDIATEK, 0x0616, quirk_no_flr);
+
+ /* FLR may cause the SolidRun SNET DPU (rev 0x1) to hang */
+ static void quirk_no_flr_snet(struct pci_dev *dev)
+@@ -5985,6 +5986,17 @@ SWITCHTEC_QUIRK(0x5552); /* PAXA 52XG5 */
+ SWITCHTEC_QUIRK(0x5536); /* PAXA 36XG5 */
+ SWITCHTEC_QUIRK(0x5528); /* PAXA 28XG5 */
+
++#define SWITCHTEC_PCI100X_QUIRK(vid) \
++ DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_EFAR, vid, \
++ PCI_CLASS_BRIDGE_OTHER, 8, quirk_switchtec_ntb_dma_alias)
++SWITCHTEC_PCI100X_QUIRK(0x1001); /* PCI1001XG4 */
++SWITCHTEC_PCI100X_QUIRK(0x1002); /* PCI1002XG4 */
++SWITCHTEC_PCI100X_QUIRK(0x1003); /* PCI1003XG4 */
++SWITCHTEC_PCI100X_QUIRK(0x1004); /* PCI1004XG4 */
++SWITCHTEC_PCI100X_QUIRK(0x1005); /* PCI1005XG4 */
++SWITCHTEC_PCI100X_QUIRK(0x1006); /* PCI1006XG4 */
++
+ /*
+ * The PLX NTB uses devfn proxy IDs to move TLPs between NT endpoints.
+ * These IDs are used to forward responses to the originator on the other
+@@ -6254,6 +6266,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2b, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2d, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a2f, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x9a31, dpc_log_size);
++DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa72f, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa73f, dpc_log_size);
+ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0xa76e, dpc_log_size);
+ #endif
+diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
+index c7e1089ffdafcb..b14dfab04d846c 100644
+--- a/drivers/pci/switch/switchtec.c
++++ b/drivers/pci/switch/switchtec.c
+@@ -1739,6 +1739,26 @@ static void switchtec_pci_remove(struct pci_dev *pdev)
+ .driver_data = gen, \
+ }
+
++#define SWITCHTEC_PCI100X_DEVICE(device_id, gen) \
++ { \
++ .vendor = PCI_VENDOR_ID_EFAR, \
++ .device = device_id, \
++ .subvendor = PCI_ANY_ID, \
++ .subdevice = PCI_ANY_ID, \
++ .class = (PCI_CLASS_MEMORY_OTHER << 8), \
++ .class_mask = 0xFFFFFFFF, \
++ .driver_data = gen, \
++ }, \
++ { \
++ .vendor = PCI_VENDOR_ID_EFAR, \
++ .device = device_id, \
++ .subvendor = PCI_ANY_ID, \
++ .subdevice = PCI_ANY_ID, \
++ .class = (PCI_CLASS_BRIDGE_OTHER << 8), \
++ .class_mask = 0xFFFFFFFF, \
++ .driver_data = gen, \
++ }
++
+ static const struct pci_device_id switchtec_pci_tbl[] = {
+ SWITCHTEC_PCI_DEVICE(0x8531, SWITCHTEC_GEN3), /* PFX 24xG3 */
+ SWITCHTEC_PCI_DEVICE(0x8532, SWITCHTEC_GEN3), /* PFX 32xG3 */
+@@ -1833,6 +1853,12 @@ static const struct pci_device_id switchtec_pci_tbl[] = {
+ SWITCHTEC_PCI_DEVICE(0x5552, SWITCHTEC_GEN5), /* PAXA 52XG5 */
+ SWITCHTEC_PCI_DEVICE(0x5536, SWITCHTEC_GEN5), /* PAXA 36XG5 */
+ SWITCHTEC_PCI_DEVICE(0x5528, SWITCHTEC_GEN5), /* PAXA 28XG5 */
++ SWITCHTEC_PCI100X_DEVICE(0x1001, SWITCHTEC_GEN4), /* PCI1001 16XG4 */
++ SWITCHTEC_PCI100X_DEVICE(0x1002, SWITCHTEC_GEN4), /* PCI1002 12XG4 */
++ SWITCHTEC_PCI100X_DEVICE(0x1003, SWITCHTEC_GEN4), /* PCI1003 16XG4 */
++ SWITCHTEC_PCI100X_DEVICE(0x1004, SWITCHTEC_GEN4), /* PCI1004 16XG4 */
++ SWITCHTEC_PCI100X_DEVICE(0x1005, SWITCHTEC_GEN4), /* PCI1005 16XG4 */
++ SWITCHTEC_PCI100X_DEVICE(0x1006, SWITCHTEC_GEN4), /* PCI1006 16XG4 */
+ {0}
+ };
+ MODULE_DEVICE_TABLE(pci, switchtec_pci_tbl);
+diff --git a/drivers/pinctrl/pinconf-generic.c b/drivers/pinctrl/pinconf-generic.c
+index 0b13d7f17b3256..42547f64453e85 100644
+--- a/drivers/pinctrl/pinconf-generic.c
++++ b/drivers/pinctrl/pinconf-generic.c
+@@ -89,12 +89,12 @@ static void pinconf_generic_dump_one(struct pinctrl_dev *pctldev,
+ seq_puts(s, items[i].display);
+ /* Print unit if available */
+ if (items[i].has_arg) {
+- seq_printf(s, " (0x%x",
+- pinconf_to_config_argument(config));
++ u32 val = pinconf_to_config_argument(config);
++
+ if (items[i].format)
+- seq_printf(s, " %s)", items[i].format);
++ seq_printf(s, " (%u %s)", val, items[i].format);
+ else
+- seq_puts(s, ")");
++ seq_printf(s, " (0x%x)", val);
+ }
+ }
+ }
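+
After the pinconf-generic change, an argument that has a unit string is printed in decimal next to that unit, while unit-less arguments stay in hex. A tiny printf() model of the new output (dump_arg() is illustrative; the kernel uses seq_printf()):

/* With a unit: " (4 ms)"; without: " (0x4)". */
#include <stdio.h>

static void dump_arg(unsigned int val, const char *format)
{
	if (format)
		printf(" (%u %s)", val, format);
	else
		printf(" (0x%x)", val);
}

int main(void)
{
	dump_arg(4, "ms");
	dump_arg(4, NULL);
	printf("\n");
	return 0;
}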
+diff --git a/drivers/pinctrl/pinctrl-cy8c95x0.c b/drivers/pinctrl/pinctrl-cy8c95x0.c
+index 5096ccdd459ea4..7a6a1434ae7f4b 100644
+--- a/drivers/pinctrl/pinctrl-cy8c95x0.c
++++ b/drivers/pinctrl/pinctrl-cy8c95x0.c
+@@ -42,7 +42,7 @@
+ #define CY8C95X0_PORTSEL 0x18
+ /* Port settings, write PORTSEL first */
+ #define CY8C95X0_INTMASK 0x19
+-#define CY8C95X0_PWMSEL 0x1A
++#define CY8C95X0_SELPWM 0x1A
+ #define CY8C95X0_INVERT 0x1B
+ #define CY8C95X0_DIRECTION 0x1C
+ /* Drive mode register change state on writing '1' */
+@@ -330,14 +330,14 @@ static int cypress_get_pin_mask(struct cy8c95x0_pinctrl *chip, unsigned int pin)
+ static bool cy8c95x0_readable_register(struct device *dev, unsigned int reg)
+ {
+ /*
+- * Only 12 registers are present per port (see Table 6 in the
+- * datasheet).
++ * Only 12 registers are present per port (see Table 6 in the datasheet).
+ */
+- if (reg >= CY8C95X0_VIRTUAL && (reg % MUXED_STRIDE) < 12)
+- return true;
++ if (reg >= CY8C95X0_VIRTUAL && (reg % MUXED_STRIDE) >= 12)
++ return false;
+
+ switch (reg) {
+ case 0x24 ... 0x27:
++ case 0x31 ... 0x3f:
+ return false;
+ default:
+ return true;
+@@ -346,8 +346,11 @@ static bool cy8c95x0_readable_register(struct device *dev, unsigned int reg)
+
+ static bool cy8c95x0_writeable_register(struct device *dev, unsigned int reg)
+ {
+- if (reg >= CY8C95X0_VIRTUAL)
+- return true;
++ /*
++ * Only 12 registers are present per port (see Table 6 in the datasheet).
++ */
++ if (reg >= CY8C95X0_VIRTUAL && (reg % MUXED_STRIDE) >= 12)
++ return false;
+
+ switch (reg) {
+ case CY8C95X0_INPUT_(0) ... CY8C95X0_INPUT_(7):
+@@ -355,6 +358,7 @@ static bool cy8c95x0_writeable_register(struct device *dev, unsigned int reg)
+ case CY8C95X0_DEVID:
+ return false;
+ case 0x24 ... 0x27:
++ case 0x31 ... 0x3f:
+ return false;
+ default:
+ return true;
+@@ -367,8 +371,8 @@ static bool cy8c95x0_volatile_register(struct device *dev, unsigned int reg)
+ case CY8C95X0_INPUT_(0) ... CY8C95X0_INPUT_(7):
+ case CY8C95X0_INTSTATUS_(0) ... CY8C95X0_INTSTATUS_(7):
+ case CY8C95X0_INTMASK:
++ case CY8C95X0_SELPWM:
+ case CY8C95X0_INVERT:
+- case CY8C95X0_PWMSEL:
+ case CY8C95X0_DIRECTION:
+ case CY8C95X0_DRV_PU:
+ case CY8C95X0_DRV_PD:
+@@ -397,7 +401,7 @@ static bool cy8c95x0_muxed_register(unsigned int reg)
+ {
+ switch (reg) {
+ case CY8C95X0_INTMASK:
+- case CY8C95X0_PWMSEL:
++ case CY8C95X0_SELPWM:
+ case CY8C95X0_INVERT:
+ case CY8C95X0_DIRECTION:
+ case CY8C95X0_DRV_PU:
+@@ -468,7 +472,11 @@ static const struct regmap_config cy8c9520_i2c_regmap = {
+ .max_register = 0, /* Updated at runtime */
+ .num_reg_defaults_raw = 0, /* Updated at runtime */
+ .use_single_read = true, /* Workaround for regcache bug */
++#if IS_ENABLED(CONFIG_DEBUG_PINCTRL)
++ .disable_locking = false,
++#else
+ .disable_locking = true,
++#endif
+ };
+
+ static inline int cy8c95x0_regmap_update_bits_base(struct cy8c95x0_pinctrl *chip,
+@@ -799,7 +807,7 @@ static int cy8c95x0_gpio_get_pincfg(struct cy8c95x0_pinctrl *chip,
+ reg = CY8C95X0_DIRECTION;
+ break;
+ case PIN_CONFIG_MODE_PWM:
+- reg = CY8C95X0_PWMSEL;
++ reg = CY8C95X0_SELPWM;
+ break;
+ case PIN_CONFIG_OUTPUT:
+ reg = CY8C95X0_OUTPUT;
+@@ -881,7 +889,7 @@ static int cy8c95x0_gpio_set_pincfg(struct cy8c95x0_pinctrl *chip,
+ reg = CY8C95X0_DRV_PP_FAST;
+ break;
+ case PIN_CONFIG_MODE_PWM:
+- reg = CY8C95X0_PWMSEL;
++ reg = CY8C95X0_SELPWM;
+ break;
+ case PIN_CONFIG_OUTPUT_ENABLE:
+ ret = cy8c95x0_pinmux_direction(chip, off, !arg);
+@@ -1171,7 +1179,7 @@ static void cy8c95x0_pin_dbg_show(struct pinctrl_dev *pctldev, struct seq_file *
+ bitmap_zero(mask, MAX_LINE);
+ __set_bit(pin, mask);
+
+- if (cy8c95x0_read_regs_mask(chip, CY8C95X0_PWMSEL, pwm, mask)) {
++ if (cy8c95x0_read_regs_mask(chip, CY8C95X0_SELPWM, pwm, mask)) {
+ seq_puts(s, "not available");
+ return;
+ }
+@@ -1216,7 +1224,7 @@ static int cy8c95x0_set_mode(struct cy8c95x0_pinctrl *chip, unsigned int off, bo
+ u8 port = cypress_get_port(chip, off);
+ u8 bit = cypress_get_pin_mask(chip, off);
+
+- return cy8c95x0_regmap_write_bits(chip, CY8C95X0_PWMSEL, port, bit, mode ? bit : 0);
++ return cy8c95x0_regmap_write_bits(chip, CY8C95X0_SELPWM, port, bit, mode ? bit : 0);
+ }
+
+ static int cy8c95x0_pinmux_mode(struct cy8c95x0_pinctrl *chip,
+@@ -1365,7 +1373,7 @@ static int cy8c95x0_irq_setup(struct cy8c95x0_pinctrl *chip, int irq)
+
+ ret = devm_request_threaded_irq(chip->dev, irq,
+ NULL, cy8c95x0_irq_handler,
+- IRQF_ONESHOT | IRQF_SHARED | IRQF_TRIGGER_HIGH,
++ IRQF_ONESHOT | IRQF_SHARED,
+ dev_name(chip->dev), chip);
+ if (ret) {
+ dev_err(chip->dev, "failed to request irq %d\n", irq);
+diff --git a/drivers/soc/tegra/fuse/fuse-tegra30.c b/drivers/soc/tegra/fuse/fuse-tegra30.c
+index eb14e5ff5a0aa8..e24ab5f7d2bf10 100644
+--- a/drivers/soc/tegra/fuse/fuse-tegra30.c
++++ b/drivers/soc/tegra/fuse/fuse-tegra30.c
+@@ -647,15 +647,20 @@ static const struct nvmem_cell_lookup tegra234_fuse_lookups[] = {
+ };
+
+ static const struct nvmem_keepout tegra234_fuse_keepouts[] = {
+- { .start = 0x01c, .end = 0x0c8 },
+- { .start = 0x12c, .end = 0x184 },
++ { .start = 0x01c, .end = 0x064 },
++ { .start = 0x084, .end = 0x0a0 },
++ { .start = 0x0a4, .end = 0x0c8 },
++ { .start = 0x12c, .end = 0x164 },
++ { .start = 0x16c, .end = 0x184 },
+ { .start = 0x190, .end = 0x198 },
+ { .start = 0x1a0, .end = 0x204 },
+- { .start = 0x21c, .end = 0x250 },
+- { .start = 0x25c, .end = 0x2f0 },
++ { .start = 0x21c, .end = 0x2f0 },
+ { .start = 0x310, .end = 0x3d8 },
+- { .start = 0x400, .end = 0x4f0 },
+- { .start = 0x4f8, .end = 0x7e8 },
++ { .start = 0x400, .end = 0x420 },
++ { .start = 0x444, .end = 0x490 },
++ { .start = 0x4bc, .end = 0x4f0 },
++ { .start = 0x4f8, .end = 0x54c },
++ { .start = 0x57c, .end = 0x7e8 },
+ { .start = 0x8d0, .end = 0x8d8 },
+ { .start = 0xacc, .end = 0xf00 }
+ };
+diff --git a/drivers/spi/spi-sn-f-ospi.c b/drivers/spi/spi-sn-f-ospi.c
+index a7c3b3923b4af7..fd8c8eb37d01d6 100644
+--- a/drivers/spi/spi-sn-f-ospi.c
++++ b/drivers/spi/spi-sn-f-ospi.c
+@@ -116,6 +116,9 @@ struct f_ospi {
+
+ static u32 f_ospi_get_dummy_cycle(const struct spi_mem_op *op)
+ {
++ if (!op->dummy.nbytes)
++ return 0;
++
+ return (op->dummy.nbytes * 8) / op->dummy.buswidth;
+ }
+
+diff --git a/drivers/tty/serial/8250/8250.h b/drivers/tty/serial/8250/8250.h
+index e5310c65cf52b3..10a706fe4b247d 100644
+--- a/drivers/tty/serial/8250/8250.h
++++ b/drivers/tty/serial/8250/8250.h
+@@ -374,6 +374,7 @@ static inline int is_omap1510_8250(struct uart_8250_port *pt)
+
+ #ifdef CONFIG_SERIAL_8250_DMA
+ extern int serial8250_tx_dma(struct uart_8250_port *);
++extern void serial8250_tx_dma_flush(struct uart_8250_port *);
+ extern int serial8250_rx_dma(struct uart_8250_port *);
+ extern void serial8250_rx_dma_flush(struct uart_8250_port *);
+ extern int serial8250_request_dma(struct uart_8250_port *);
+@@ -406,6 +407,7 @@ static inline int serial8250_tx_dma(struct uart_8250_port *p)
+ {
+ return -1;
+ }
++static inline void serial8250_tx_dma_flush(struct uart_8250_port *p) { }
+ static inline int serial8250_rx_dma(struct uart_8250_port *p)
+ {
+ return -1;
+diff --git a/drivers/tty/serial/8250/8250_dma.c b/drivers/tty/serial/8250/8250_dma.c
+index d215c494ee24c1..f245a84f4a508d 100644
+--- a/drivers/tty/serial/8250/8250_dma.c
++++ b/drivers/tty/serial/8250/8250_dma.c
+@@ -149,6 +149,22 @@ int serial8250_tx_dma(struct uart_8250_port *p)
+ return ret;
+ }
+
++void serial8250_tx_dma_flush(struct uart_8250_port *p)
++{
++ struct uart_8250_dma *dma = p->dma;
++
++ if (!dma->tx_running)
++ return;
++
++ /*
++ * kfifo_reset() has been called by the serial core, avoid
++	 * kfifo_reset() has been called by the serial core; avoid
++ */
++ dma->tx_size = 0;
++
++	dmaengine_terminate_async(dma->txchan);
++}
++
+ int serial8250_rx_dma(struct uart_8250_port *p)
+ {
+ struct uart_8250_dma *dma = p->dma;
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index 6709b6a5f3011d..de6d90bf0d70a2 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -64,23 +64,17 @@
+ #define PCIE_DEVICE_ID_NEO_2_OX_IBM 0x00F6
+ #define PCI_DEVICE_ID_PLX_CRONYX_OMEGA 0xc001
+ #define PCI_DEVICE_ID_INTEL_PATSBURG_KT 0x1d3d
+-#define PCI_VENDOR_ID_WCH 0x4348
+-#define PCI_DEVICE_ID_WCH_CH352_2S 0x3253
+-#define PCI_DEVICE_ID_WCH_CH353_4S 0x3453
+-#define PCI_DEVICE_ID_WCH_CH353_2S1PF 0x5046
+-#define PCI_DEVICE_ID_WCH_CH353_1S1P 0x5053
+-#define PCI_DEVICE_ID_WCH_CH353_2S1P 0x7053
+-#define PCI_DEVICE_ID_WCH_CH355_4S 0x7173
++
++#define PCI_DEVICE_ID_WCHCN_CH352_2S 0x3253
++#define PCI_DEVICE_ID_WCHCN_CH355_4S 0x7173
++
+ #define PCI_VENDOR_ID_AGESTAR 0x5372
+ #define PCI_DEVICE_ID_AGESTAR_9375 0x6872
+ #define PCI_DEVICE_ID_BROADCOM_TRUMANAGE 0x160a
+ #define PCI_DEVICE_ID_AMCC_ADDIDATA_APCI7800 0x818e
+
+-#define PCIE_VENDOR_ID_WCH 0x1c00
+-#define PCIE_DEVICE_ID_WCH_CH382_2S1P 0x3250
+-#define PCIE_DEVICE_ID_WCH_CH384_4S 0x3470
+-#define PCIE_DEVICE_ID_WCH_CH384_8S 0x3853
+-#define PCIE_DEVICE_ID_WCH_CH382_2S 0x3253
++#define PCI_DEVICE_ID_WCHIC_CH384_4S 0x3470
++#define PCI_DEVICE_ID_WCHIC_CH384_8S 0x3853
+
+ #define PCI_DEVICE_ID_MOXA_CP102E 0x1024
+ #define PCI_DEVICE_ID_MOXA_CP102EL 0x1025
+@@ -2777,80 +2771,80 @@ static struct pci_serial_quirk pci_serial_quirks[] = {
+ },
+ /* WCH CH353 1S1P card (16550 clone) */
+ {
+- .vendor = PCI_VENDOR_ID_WCH,
+- .device = PCI_DEVICE_ID_WCH_CH353_1S1P,
++ .vendor = PCI_VENDOR_ID_WCHCN,
++ .device = PCI_DEVICE_ID_WCHCN_CH353_1S1P,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch353_setup,
+ },
+ /* WCH CH353 2S1P card (16550 clone) */
+ {
+- .vendor = PCI_VENDOR_ID_WCH,
+- .device = PCI_DEVICE_ID_WCH_CH353_2S1P,
++ .vendor = PCI_VENDOR_ID_WCHCN,
++ .device = PCI_DEVICE_ID_WCHCN_CH353_2S1P,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch353_setup,
+ },
+ /* WCH CH353 4S card (16550 clone) */
+ {
+- .vendor = PCI_VENDOR_ID_WCH,
+- .device = PCI_DEVICE_ID_WCH_CH353_4S,
++ .vendor = PCI_VENDOR_ID_WCHCN,
++ .device = PCI_DEVICE_ID_WCHCN_CH353_4S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch353_setup,
+ },
+ /* WCH CH353 2S1PF card (16550 clone) */
+ {
+- .vendor = PCI_VENDOR_ID_WCH,
+- .device = PCI_DEVICE_ID_WCH_CH353_2S1PF,
++ .vendor = PCI_VENDOR_ID_WCHCN,
++ .device = PCI_DEVICE_ID_WCHCN_CH353_2S1PF,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch353_setup,
+ },
+ /* WCH CH352 2S card (16550 clone) */
+ {
+- .vendor = PCI_VENDOR_ID_WCH,
+- .device = PCI_DEVICE_ID_WCH_CH352_2S,
++ .vendor = PCI_VENDOR_ID_WCHCN,
++ .device = PCI_DEVICE_ID_WCHCN_CH352_2S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch353_setup,
+ },
+ /* WCH CH355 4S card (16550 clone) */
+ {
+- .vendor = PCI_VENDOR_ID_WCH,
+- .device = PCI_DEVICE_ID_WCH_CH355_4S,
++ .vendor = PCI_VENDOR_ID_WCHCN,
++ .device = PCI_DEVICE_ID_WCHCN_CH355_4S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch355_setup,
+ },
+ /* WCH CH382 2S card (16850 clone) */
+ {
+- .vendor = PCIE_VENDOR_ID_WCH,
+- .device = PCIE_DEVICE_ID_WCH_CH382_2S,
++ .vendor = PCI_VENDOR_ID_WCHIC,
++ .device = PCI_DEVICE_ID_WCHIC_CH382_2S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch38x_setup,
+ },
+ /* WCH CH382 2S1P card (16850 clone) */
+ {
+- .vendor = PCIE_VENDOR_ID_WCH,
+- .device = PCIE_DEVICE_ID_WCH_CH382_2S1P,
++ .vendor = PCI_VENDOR_ID_WCHIC,
++ .device = PCI_DEVICE_ID_WCHIC_CH382_2S1P,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch38x_setup,
+ },
+ /* WCH CH384 4S card (16850 clone) */
+ {
+- .vendor = PCIE_VENDOR_ID_WCH,
+- .device = PCIE_DEVICE_ID_WCH_CH384_4S,
++ .vendor = PCI_VENDOR_ID_WCHIC,
++ .device = PCI_DEVICE_ID_WCHIC_CH384_4S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .setup = pci_wch_ch38x_setup,
+ },
+ /* WCH CH384 8S card (16850 clone) */
+ {
+- .vendor = PCIE_VENDOR_ID_WCH,
+- .device = PCIE_DEVICE_ID_WCH_CH384_8S,
++ .vendor = PCI_VENDOR_ID_WCHIC,
++ .device = PCI_DEVICE_ID_WCHIC_CH384_8S,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ .init = pci_wch_ch38x_init,
+@@ -3927,11 +3921,11 @@ static const struct pci_device_id blacklist[] = {
+
+ /* multi-io cards handled by parport_serial */
+ /* WCH CH353 2S1P */
+- { PCI_DEVICE(0x4348, 0x7053), 0, 0, REPORT_CONFIG(PARPORT_SERIAL), },
++ { PCI_VDEVICE(WCHCN, 0x7053), REPORT_CONFIG(PARPORT_SERIAL), },
+ /* WCH CH353 1S1P */
+- { PCI_DEVICE(0x4348, 0x5053), 0, 0, REPORT_CONFIG(PARPORT_SERIAL), },
++ { PCI_VDEVICE(WCHCN, 0x5053), REPORT_CONFIG(PARPORT_SERIAL), },
+ /* WCH CH382 2S1P */
+- { PCI_DEVICE(0x1c00, 0x3250), 0, 0, REPORT_CONFIG(PARPORT_SERIAL), },
++ { PCI_VDEVICE(WCHIC, 0x3250), REPORT_CONFIG(PARPORT_SERIAL), },
+
+ /* Intel platforms with MID UART */
+ { PCI_VDEVICE(INTEL, 0x081b), REPORT_8250_CONFIG(MID), },
+@@ -6004,27 +5998,27 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ * WCH CH353 series devices: The 2S1P is handled by parport_serial
+ * so not listed here.
+ */
+- { PCI_VENDOR_ID_WCH, PCI_DEVICE_ID_WCH_CH353_4S,
++ { PCI_VENDOR_ID_WCHCN, PCI_DEVICE_ID_WCHCN_CH353_4S,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_b0_bt_4_115200 },
+
+- { PCI_VENDOR_ID_WCH, PCI_DEVICE_ID_WCH_CH353_2S1PF,
++ { PCI_VENDOR_ID_WCHCN, PCI_DEVICE_ID_WCHCN_CH353_2S1PF,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_b0_bt_2_115200 },
+
+- { PCI_VENDOR_ID_WCH, PCI_DEVICE_ID_WCH_CH355_4S,
++ { PCI_VENDOR_ID_WCHCN, PCI_DEVICE_ID_WCHCN_CH355_4S,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_b0_bt_4_115200 },
+
+- { PCIE_VENDOR_ID_WCH, PCIE_DEVICE_ID_WCH_CH382_2S,
++ { PCI_VENDOR_ID_WCHIC, PCI_DEVICE_ID_WCHIC_CH382_2S,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_wch382_2 },
+
+- { PCIE_VENDOR_ID_WCH, PCIE_DEVICE_ID_WCH_CH384_4S,
++ { PCI_VENDOR_ID_WCHIC, PCI_DEVICE_ID_WCHIC_CH384_4S,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_wch384_4 },
+
+- { PCIE_VENDOR_ID_WCH, PCIE_DEVICE_ID_WCH_CH384_8S,
++ { PCI_VENDOR_ID_WCHIC, PCI_DEVICE_ID_WCHIC_CH384_8S,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0, pbn_wch384_8 },
+ /*
+diff --git a/drivers/tty/serial/8250/8250_pci1xxxx.c b/drivers/tty/serial/8250/8250_pci1xxxx.c
+index d3930bf32fe4c4..f462b3d1c104ce 100644
+--- a/drivers/tty/serial/8250/8250_pci1xxxx.c
++++ b/drivers/tty/serial/8250/8250_pci1xxxx.c
+@@ -78,6 +78,12 @@
+ #define UART_TX_BYTE_FIFO 0x00
+ #define UART_FIFO_CTL 0x02
+
++#define UART_MODEM_CTL_REG 0x04
++#define UART_MODEM_CTL_RTS_SET BIT(1)
++
++#define UART_LINE_STAT_REG 0x05
++#define UART_LINE_XMIT_CHECK_MASK GENMASK(6, 5)
++
+ #define UART_ACTV_REG 0x11
+ #define UART_BLOCK_SET_ACTIVE BIT(0)
+
+@@ -94,6 +100,7 @@
+ #define UART_BIT_SAMPLE_CNT_16 16
+ #define BAUD_CLOCK_DIV_INT_MSK GENMASK(31, 8)
+ #define ADCL_CFG_RTS_DELAY_MASK GENMASK(11, 8)
++#define FRAC_DIV_TX_END_POINT_MASK GENMASK(23, 20)
+
+ #define UART_WAKE_REG 0x8C
+ #define UART_WAKE_MASK_REG 0x90
+@@ -134,6 +141,11 @@
+ #define UART_BST_STAT_LSR_FRAME_ERR 0x8000000
+ #define UART_BST_STAT_LSR_THRE 0x20000000
+
++#define GET_MODEM_CTL_RTS_STATUS(reg) ((reg) & UART_MODEM_CTL_RTS_SET)
++#define GET_RTS_PIN_STATUS(val) (((val) & TIOCM_RTS) >> 1)
++#define RTS_TOGGLE_STATUS_MASK(val, reg) (GET_MODEM_CTL_RTS_STATUS(reg) \
++ != GET_RTS_PIN_STATUS(val))
++
+ struct pci1xxxx_8250 {
+ unsigned int nr;
+ u8 dev_rev;
+@@ -254,6 +266,47 @@ static void pci1xxxx_set_divisor(struct uart_port *port, unsigned int baud,
+ port->membase + UART_BAUD_CLK_DIVISOR_REG);
+ }
+
++static void pci1xxxx_set_mctrl(struct uart_port *port, unsigned int mctrl)
++{
++ u32 fract_div_cfg_reg;
++ u32 line_stat_reg;
++ u32 modem_ctl_reg;
++ u32 adcl_cfg_reg;
++
++ adcl_cfg_reg = readl(port->membase + ADCL_CFG_REG);
++
++	/* HW is responsible in the ADCL_EN case */
++ if ((adcl_cfg_reg & (ADCL_CFG_EN | ADCL_CFG_PIN_SEL)))
++ return;
++
++ modem_ctl_reg = readl(port->membase + UART_MODEM_CTL_REG);
++
++ serial8250_do_set_mctrl(port, mctrl);
++
++ if (RTS_TOGGLE_STATUS_MASK(mctrl, modem_ctl_reg)) {
++ line_stat_reg = readl(port->membase + UART_LINE_STAT_REG);
++ if (line_stat_reg & UART_LINE_XMIT_CHECK_MASK) {
++ fract_div_cfg_reg = readl(port->membase +
++ FRAC_DIV_CFG_REG);
++
++ writel((fract_div_cfg_reg &
++ ~(FRAC_DIV_TX_END_POINT_MASK)),
++ port->membase + FRAC_DIV_CFG_REG);
++
++ /* Enable ADC and set the nRTS pin */
++ writel((adcl_cfg_reg | (ADCL_CFG_EN |
++ ADCL_CFG_PIN_SEL)),
++ port->membase + ADCL_CFG_REG);
++
++ /* Revert to the original settings */
++ writel(adcl_cfg_reg, port->membase + ADCL_CFG_REG);
++
++ writel(fract_div_cfg_reg, port->membase +
++ FRAC_DIV_CFG_REG);
++ }
++ }
++}
++
+ static int pci1xxxx_rs485_config(struct uart_port *port,
+ struct ktermios *termios,
+ struct serial_rs485 *rs485)
+@@ -631,9 +684,14 @@ static int pci1xxxx_setup(struct pci_dev *pdev,
+ port->port.rs485_config = pci1xxxx_rs485_config;
+ port->port.rs485_supported = pci1xxxx_rs485_supported;
+
+- /* From C0 rev Burst operation is supported */
++ /*
++ * C0 and later revisions support Burst operation.
++	 * The RTS workaround in set_mctrl is applicable only to B0.
++ */
+ if (rev >= 0xC0)
+ port->port.handle_irq = pci1xxxx_handle_irq;
++ else if (rev == 0xB0)
++ port->port.set_mctrl = pci1xxxx_set_mctrl;
+
+ ret = serial8250_pci_setup_port(pdev, port, 0, PORT_OFFSET * port_idx, 0);
+ if (ret < 0)
+diff --git a/drivers/tty/serial/8250/8250_port.c b/drivers/tty/serial/8250/8250_port.c
+index 11519aa2598a01..c1376727642a71 100644
+--- a/drivers/tty/serial/8250/8250_port.c
++++ b/drivers/tty/serial/8250/8250_port.c
+@@ -2524,6 +2524,14 @@ static void serial8250_shutdown(struct uart_port *port)
+ serial8250_do_shutdown(port);
+ }
+
++static void serial8250_flush_buffer(struct uart_port *port)
++{
++ struct uart_8250_port *up = up_to_u8250p(port);
++
++ if (up->dma)
++ serial8250_tx_dma_flush(up);
++}
++
+ static unsigned int serial8250_do_get_divisor(struct uart_port *port,
+ unsigned int baud,
+ unsigned int *frac)
+@@ -3207,6 +3215,7 @@ static const struct uart_ops serial8250_pops = {
+ .break_ctl = serial8250_break_ctl,
+ .startup = serial8250_startup,
+ .shutdown = serial8250_shutdown,
++ .flush_buffer = serial8250_flush_buffer,
+ .set_termios = serial8250_set_termios,
+ .set_ldisc = serial8250_set_ldisc,
+ .pm = serial8250_pm,
+diff --git a/drivers/tty/serial/serial_port.c b/drivers/tty/serial/serial_port.c
+index d35f1d24156c22..85285c56fabff4 100644
+--- a/drivers/tty/serial/serial_port.c
++++ b/drivers/tty/serial/serial_port.c
+@@ -173,6 +173,7 @@ EXPORT_SYMBOL(uart_remove_one_port);
+ * The caller is responsible to initialize the following fields of the @port
+ * ->dev (must be valid)
+ * ->flags
++ * ->iobase
+ * ->mapbase
+ * ->mapsize
+ * ->regshift (if @use_defaults is false)
+@@ -214,7 +215,7 @@ static int __uart_read_properties(struct uart_port *port, bool use_defaults)
+ /* Read the registers I/O access type (default: MMIO 8-bit) */
+ ret = device_property_read_u32(dev, "reg-io-width", &value);
+ if (ret) {
+- port->iotype = UPIO_MEM;
++ port->iotype = port->iobase ? UPIO_PORT : UPIO_MEM;
+ } else {
+ switch (value) {
+ case 1:
+@@ -227,11 +228,11 @@ static int __uart_read_properties(struct uart_port *port, bool use_defaults)
+ port->iotype = device_is_big_endian(dev) ? UPIO_MEM32BE : UPIO_MEM32;
+ break;
+ default:
++ port->iotype = UPIO_UNKNOWN;
+ if (!use_defaults) {
+ dev_err(dev, "Unsupported reg-io-width (%u)\n", value);
+ return -EINVAL;
+ }
+- port->iotype = UPIO_UNKNOWN;
+ break;
+ }
+ }
+diff --git a/drivers/ufs/core/ufs_bsg.c b/drivers/ufs/core/ufs_bsg.c
+index 58023f735c195f..8d4ad0a3f2cf02 100644
+--- a/drivers/ufs/core/ufs_bsg.c
++++ b/drivers/ufs/core/ufs_bsg.c
+@@ -216,6 +216,7 @@ void ufs_bsg_remove(struct ufs_hba *hba)
+ return;
+
+ bsg_remove_queue(hba->bsg_queue);
++ hba->bsg_queue = NULL;
+
+ device_del(bsg_dev);
+ put_device(bsg_dev);
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index b786cba9a270f4..67410c4cebee6d 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -258,10 +258,15 @@ ufs_get_desired_pm_lvl_for_dev_link_state(enum ufs_dev_pwr_mode dev_state,
+ return UFS_PM_LVL_0;
+ }
+
++static bool ufshcd_has_pending_tasks(struct ufs_hba *hba)
++{
++ return hba->outstanding_tasks || hba->active_uic_cmd ||
++ hba->uic_async_done;
++}
++
+ static bool ufshcd_is_ufs_dev_busy(struct ufs_hba *hba)
+ {
+- return (hba->clk_gating.active_reqs || hba->outstanding_reqs || hba->outstanding_tasks ||
+- hba->active_uic_cmd || hba->uic_async_done);
++ return hba->outstanding_reqs || ufshcd_has_pending_tasks(hba);
+ }
+
+ static const struct ufs_dev_quirk ufs_fixups[] = {
+@@ -1835,19 +1840,16 @@ static void ufshcd_exit_clk_scaling(struct ufs_hba *hba)
+ static void ufshcd_ungate_work(struct work_struct *work)
+ {
+ int ret;
+- unsigned long flags;
+ struct ufs_hba *hba = container_of(work, struct ufs_hba,
+ clk_gating.ungate_work);
+
+ cancel_delayed_work_sync(&hba->clk_gating.gate_work);
+
+- spin_lock_irqsave(hba->host->host_lock, flags);
+- if (hba->clk_gating.state == CLKS_ON) {
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
+- return;
++ scoped_guard(spinlock_irqsave, &hba->clk_gating.lock) {
++ if (hba->clk_gating.state == CLKS_ON)
++ return;
+ }
+
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
+ ufshcd_hba_vreg_set_hpm(hba);
+ ufshcd_setup_clocks(hba, true);
+
+@@ -1882,7 +1884,7 @@ void ufshcd_hold(struct ufs_hba *hba)
+ if (!ufshcd_is_clkgating_allowed(hba) ||
+ !hba->clk_gating.is_initialized)
+ return;
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ spin_lock_irqsave(&hba->clk_gating.lock, flags);
+ hba->clk_gating.active_reqs++;
+
+ start:
+@@ -1898,11 +1900,11 @@ void ufshcd_hold(struct ufs_hba *hba)
+ */
+ if (ufshcd_can_hibern8_during_gating(hba) &&
+ ufshcd_is_link_hibern8(hba)) {
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
++ spin_unlock_irqrestore(&hba->clk_gating.lock, flags);
+ flush_result = flush_work(&hba->clk_gating.ungate_work);
+ if (hba->clk_gating.is_suspended && !flush_result)
+ return;
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ spin_lock_irqsave(&hba->clk_gating.lock, flags);
+ goto start;
+ }
+ break;
+@@ -1931,17 +1933,17 @@ void ufshcd_hold(struct ufs_hba *hba)
+ */
+ fallthrough;
+ case REQ_CLKS_ON:
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
++ spin_unlock_irqrestore(&hba->clk_gating.lock, flags);
+ flush_work(&hba->clk_gating.ungate_work);
+ /* Make sure state is CLKS_ON before returning */
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ spin_lock_irqsave(&hba->clk_gating.lock, flags);
+ goto start;
+ default:
+ dev_err(hba->dev, "%s: clk gating is in invalid state %d\n",
+ __func__, hba->clk_gating.state);
+ break;
+ }
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
++ spin_unlock_irqrestore(&hba->clk_gating.lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(ufshcd_hold);
+
+@@ -1949,28 +1951,32 @@ static void ufshcd_gate_work(struct work_struct *work)
+ {
+ struct ufs_hba *hba = container_of(work, struct ufs_hba,
+ clk_gating.gate_work.work);
+- unsigned long flags;
+ int ret;
+
+- spin_lock_irqsave(hba->host->host_lock, flags);
+- /*
+- * In case you are here to cancel this work the gating state
+- * would be marked as REQ_CLKS_ON. In this case save time by
+- * skipping the gating work and exit after changing the clock
+- * state to CLKS_ON.
+- */
+- if (hba->clk_gating.is_suspended ||
+- (hba->clk_gating.state != REQ_CLKS_OFF)) {
+- hba->clk_gating.state = CLKS_ON;
+- trace_ufshcd_clk_gating(dev_name(hba->dev),
+- hba->clk_gating.state);
+- goto rel_lock;
+- }
++ scoped_guard(spinlock_irqsave, &hba->clk_gating.lock) {
++ /*
++		 * In case you are here to cancel this work, the gating state
++		 * would be marked as REQ_CLKS_ON. In this case, save time by
++ * skipping the gating work and exit after changing the clock
++ * state to CLKS_ON.
++ */
++ if (hba->clk_gating.is_suspended ||
++ hba->clk_gating.state != REQ_CLKS_OFF) {
++ hba->clk_gating.state = CLKS_ON;
++ trace_ufshcd_clk_gating(dev_name(hba->dev),
++ hba->clk_gating.state);
++ return;
++ }
+
+- if (ufshcd_is_ufs_dev_busy(hba) || hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
+- goto rel_lock;
++ if (hba->clk_gating.active_reqs)
++ return;
++ }
+
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
++ scoped_guard(spinlock_irqsave, hba->host->host_lock) {
++ if (ufshcd_is_ufs_dev_busy(hba) ||
++ hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
++ return;
++ }
+
+ /* put the link into hibern8 mode before turning off clocks */
+ if (ufshcd_can_hibern8_during_gating(hba)) {
+@@ -1981,7 +1987,7 @@ static void ufshcd_gate_work(struct work_struct *work)
+ __func__, ret);
+ trace_ufshcd_clk_gating(dev_name(hba->dev),
+ hba->clk_gating.state);
+- goto out;
++ return;
+ }
+ ufshcd_set_link_hibern8(hba);
+ }
+@@ -2001,33 +2007,34 @@ static void ufshcd_gate_work(struct work_struct *work)
+ * prevent from doing cancel work multiple times when there are
+ * new requests arriving before the current cancel work is done.
+ */
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ guard(spinlock_irqsave)(&hba->clk_gating.lock);
+ if (hba->clk_gating.state == REQ_CLKS_OFF) {
+ hba->clk_gating.state = CLKS_OFF;
+ trace_ufshcd_clk_gating(dev_name(hba->dev),
+ hba->clk_gating.state);
+ }
+-rel_lock:
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
+-out:
+- return;
+ }
+
+-/* host lock must be held before calling this variant */
+ static void __ufshcd_release(struct ufs_hba *hba)
+ {
++ lockdep_assert_held(&hba->clk_gating.lock);
++
+ if (!ufshcd_is_clkgating_allowed(hba))
+ return;
+
+ hba->clk_gating.active_reqs--;
+
+ if (hba->clk_gating.active_reqs || hba->clk_gating.is_suspended ||
+- hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL ||
+- hba->outstanding_tasks || !hba->clk_gating.is_initialized ||
+- hba->active_uic_cmd || hba->uic_async_done ||
++ !hba->clk_gating.is_initialized ||
+ hba->clk_gating.state == CLKS_OFF)
+ return;
+
++ scoped_guard(spinlock_irqsave, hba->host->host_lock) {
++ if (ufshcd_has_pending_tasks(hba) ||
++ hba->ufshcd_state != UFSHCD_STATE_OPERATIONAL)
++ return;
++ }
++
+ hba->clk_gating.state = REQ_CLKS_OFF;
+ trace_ufshcd_clk_gating(dev_name(hba->dev), hba->clk_gating.state);
+ queue_delayed_work(hba->clk_gating.clk_gating_workq,
+@@ -2037,11 +2044,8 @@ static void __ufshcd_release(struct ufs_hba *hba)
+
+ void ufshcd_release(struct ufs_hba *hba)
+ {
+- unsigned long flags;
+-
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ guard(spinlock_irqsave)(&hba->clk_gating.lock);
+ __ufshcd_release(hba);
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(ufshcd_release);
+
+@@ -2056,11 +2060,9 @@ static ssize_t ufshcd_clkgate_delay_show(struct device *dev,
+ void ufshcd_clkgate_delay_set(struct device *dev, unsigned long value)
+ {
+ struct ufs_hba *hba = dev_get_drvdata(dev);
+- unsigned long flags;
+
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ guard(spinlock_irqsave)(&hba->clk_gating.lock);
+ hba->clk_gating.delay_ms = value;
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(ufshcd_clkgate_delay_set);
+
+@@ -2088,7 +2090,6 @@ static ssize_t ufshcd_clkgate_enable_store(struct device *dev,
+ struct device_attribute *attr, const char *buf, size_t count)
+ {
+ struct ufs_hba *hba = dev_get_drvdata(dev);
+- unsigned long flags;
+ u32 value;
+
+ if (kstrtou32(buf, 0, &value))
+@@ -2096,9 +2097,10 @@ static ssize_t ufshcd_clkgate_enable_store(struct device *dev,
+
+ value = !!value;
+
+- spin_lock_irqsave(hba->host->host_lock, flags);
++ guard(spinlock_irqsave)(&hba->clk_gating.lock);
++
+ if (value == hba->clk_gating.is_enabled)
+- goto out;
++ return count;
+
+ if (value)
+ __ufshcd_release(hba);
+@@ -2106,8 +2108,7 @@ static ssize_t ufshcd_clkgate_enable_store(struct device *dev,
+ hba->clk_gating.active_reqs++;
+
+ hba->clk_gating.is_enabled = value;
+-out:
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
++
+ return count;
+ }
+
+@@ -8267,7 +8268,9 @@ static void ufshcd_rtc_work(struct work_struct *work)
+ hba = container_of(to_delayed_work(work), struct ufs_hba, ufs_rtc_update_work);
+
+ /* Update RTC only when there are no requests in progress and UFSHCI is operational */
+- if (!ufshcd_is_ufs_dev_busy(hba) && hba->ufshcd_state == UFSHCD_STATE_OPERATIONAL)
++ if (!ufshcd_is_ufs_dev_busy(hba) &&
++ hba->ufshcd_state == UFSHCD_STATE_OPERATIONAL &&
++ !hba->clk_gating.active_reqs)
+ ufshcd_update_rtc(hba);
+
+ if (ufshcd_is_ufs_dev_active(hba) && hba->dev_info.rtc_update_period)
+@@ -9186,7 +9189,6 @@ static int ufshcd_setup_clocks(struct ufs_hba *hba, bool on)
+ int ret = 0;
+ struct ufs_clk_info *clki;
+ struct list_head *head = &hba->clk_list_head;
+- unsigned long flags;
+ ktime_t start = ktime_get();
+ bool clk_state_changed = false;
+
+@@ -9236,12 +9238,11 @@ static int ufshcd_setup_clocks(struct ufs_hba *hba, bool on)
+ if (!IS_ERR_OR_NULL(clki->clk) && clki->enabled)
+ clk_disable_unprepare(clki->clk);
+ }
+- } else if (!ret && on) {
+- spin_lock_irqsave(hba->host->host_lock, flags);
+- hba->clk_gating.state = CLKS_ON;
++ } else if (!ret && on && hba->clk_gating.is_initialized) {
++ scoped_guard(spinlock_irqsave, &hba->clk_gating.lock)
++ hba->clk_gating.state = CLKS_ON;
+ trace_ufshcd_clk_gating(dev_name(hba->dev),
+ hba->clk_gating.state);
+- spin_unlock_irqrestore(hba->host->host_lock, flags);
+ }
+
+ if (clk_state_changed)
+@@ -10450,6 +10451,12 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ hba->irq = irq;
+ hba->vps = &ufs_hba_vps;
+
++ /*
++ * Initialize clk_gating.lock early since it is being used in
++ * ufshcd_setup_clocks()
++ */
++ spin_lock_init(&hba->clk_gating.lock);
++
+ err = ufshcd_hba_init(hba);
+ if (err)
+ goto out_error;
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 6b37d1c47fce13..c2ecfa3c83496f 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -371,7 +371,7 @@ static void acm_process_notification(struct acm *acm, unsigned char *buf)
+ static void acm_ctrl_irq(struct urb *urb)
+ {
+ struct acm *acm = urb->context;
+- struct usb_cdc_notification *dr = urb->transfer_buffer;
++ struct usb_cdc_notification *dr;
+ unsigned int current_size = urb->actual_length;
+ unsigned int expected_size, copy_size, alloc_size;
+ int retval;
+@@ -398,14 +398,25 @@ static void acm_ctrl_irq(struct urb *urb)
+
+ usb_mark_last_busy(acm->dev);
+
+- if (acm->nb_index)
++ if (acm->nb_index == 0) {
++ /*
++ * The first chunk of a message must contain at least the
++ * notification header with the length field, otherwise we
++ * can't get an expected_size.
++ */
++ if (current_size < sizeof(struct usb_cdc_notification)) {
++ dev_dbg(&acm->control->dev, "urb too short\n");
++ goto exit;
++ }
++ dr = urb->transfer_buffer;
++ } else {
+ dr = (struct usb_cdc_notification *)acm->notification_buffer;
+-
++ }
+ /* size = notification-header + (optional) data */
+ expected_size = sizeof(struct usb_cdc_notification) +
+ le16_to_cpu(dr->wLength);
+
+- if (current_size < expected_size) {
++ if (acm->nb_index != 0 || current_size < expected_size) {
+ /* notification is transmitted fragmented, reassemble */
+ if (acm->nb_size < expected_size) {
+ u8 *new_buffer;
+@@ -1727,13 +1738,16 @@ static const struct usb_device_id acm_ids[] = {
+ { USB_DEVICE(0x0870, 0x0001), /* Metricom GS Modem */
+ .driver_info = NO_UNION_NORMAL, /* has no union descriptor */
+ },
+- { USB_DEVICE(0x045b, 0x023c), /* Renesas USB Download mode */
++ { USB_DEVICE(0x045b, 0x023c), /* Renesas R-Car H3 USB Download mode */
++ .driver_info = DISABLE_ECHO, /* Don't echo banner */
++ },
++ { USB_DEVICE(0x045b, 0x0247), /* Renesas R-Car D3 USB Download mode */
+ .driver_info = DISABLE_ECHO, /* Don't echo banner */
+ },
+- { USB_DEVICE(0x045b, 0x0248), /* Renesas USB Download mode */
++ { USB_DEVICE(0x045b, 0x0248), /* Renesas R-Car M3-N USB Download mode */
+ .driver_info = DISABLE_ECHO, /* Don't echo banner */
+ },
+- { USB_DEVICE(0x045b, 0x024D), /* Renesas USB Download mode */
++ { USB_DEVICE(0x045b, 0x024D), /* Renesas R-Car E3 USB Download mode */
+ .driver_info = DISABLE_ECHO, /* Don't echo banner */
+ },
+ { USB_DEVICE(0x0e8d, 0x0003), /* FIREFLY, MediaTek Inc; andrey.arapov@gmail.com */
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 21ac9b464696f5..906daf423cb02b 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -1847,6 +1847,17 @@ static int hub_probe(struct usb_interface *intf, const struct usb_device_id *id)
+ desc = intf->cur_altsetting;
+ hdev = interface_to_usbdev(intf);
+
++ /*
++ * The USB 2.0 spec prohibits hubs from having more than one
++ * configuration or interface, and we rely on this prohibition.
++ * Refuse to accept a device that violates it.
++ */
++ if (hdev->descriptor.bNumConfigurations > 1 ||
++ hdev->actconfig->desc.bNumInterfaces > 1) {
++ dev_err(&intf->dev, "Invalid hub with more than one config or interface\n");
++ return -EINVAL;
++ }
++
+ /*
+ * Set default autosuspend delay as 0 to speedup bus suspend,
+ * based on the below considerations:
+@@ -4698,7 +4709,6 @@ void usb_ep0_reinit(struct usb_device *udev)
+ EXPORT_SYMBOL_GPL(usb_ep0_reinit);
+
+ #define usb_sndaddr0pipe() (PIPE_CONTROL << 30)
+-#define usb_rcvaddr0pipe() ((PIPE_CONTROL << 30) | USB_DIR_IN)
+
+ static int hub_set_address(struct usb_device *udev, int devnum)
+ {
+@@ -4804,7 +4814,7 @@ static int get_bMaxPacketSize0(struct usb_device *udev,
+ for (i = 0; i < GET_MAXPACKET0_TRIES; ++i) {
+ /* Start with invalid values in case the transfer fails */
+ buf->bDescriptorType = buf->bMaxPacketSize0 = 0;
+- rc = usb_control_msg(udev, usb_rcvaddr0pipe(),
++ rc = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
+ USB_REQ_GET_DESCRIPTOR, USB_DIR_IN,
+ USB_DT_DEVICE << 8, 0,
+ buf, size,
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 13171454f9591a..027479179f09e9 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -432,6 +432,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x0c45, 0x7056), .driver_info =
+ USB_QUIRK_IGNORE_REMOTE_WAKEUP },
+
++ /* Sony Xperia XZ1 Compact (lilac) smartphone in fastboot mode */
++ { USB_DEVICE(0x0fce, 0x0dde), .driver_info = USB_QUIRK_NO_LPM },
++
+ /* Action Semiconductor flash disk */
+ { USB_DEVICE(0x10d6, 0x2200), .driver_info =
+ USB_QUIRK_STRING_FETCH_255 },
+@@ -522,6 +525,9 @@ static const struct usb_device_id usb_quirk_list[] = {
+ /* Blackmagic Design UltraStudio SDI */
+ { USB_DEVICE(0x1edb, 0xbd4f), .driver_info = USB_QUIRK_NO_LPM },
+
++ /* Teclast disk */
++ { USB_DEVICE(0x1f75, 0x0917), .driver_info = USB_QUIRK_NO_LPM },
++
+ /* Hauppauge HVR-950q */
+ { USB_DEVICE(0x2040, 0x7200), .driver_info =
+ USB_QUIRK_CONFIG_INTF_STRINGS },
+diff --git a/drivers/usb/dwc2/gadget.c b/drivers/usb/dwc2/gadget.c
+index e7bf9cc635be6f..bd4c788f03bc14 100644
+--- a/drivers/usb/dwc2/gadget.c
++++ b/drivers/usb/dwc2/gadget.c
+@@ -4615,6 +4615,7 @@ static int dwc2_hsotg_udc_stop(struct usb_gadget *gadget)
+ spin_lock_irqsave(&hsotg->lock, flags);
+
+ hsotg->driver = NULL;
++ hsotg->gadget.dev.of_node = NULL;
+ hsotg->gadget.speed = USB_SPEED_UNKNOWN;
+ hsotg->enabled = 0;
+
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index a5d75d7d0a8707..8c80bb4a467bff 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2618,10 +2618,38 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on)
+ {
+ u32 reg;
+ u32 timeout = 2000;
++ u32 saved_config = 0;
+
+ if (pm_runtime_suspended(dwc->dev))
+ return 0;
+
++ /*
++ * When operating in USB 2.0 speeds (HS/FS), ensure that
++ * GUSB2PHYCFG.ENBLSLPM and GUSB2PHYCFG.SUSPHY are cleared before starting
++ * or stopping the controller. This resolves timeout issues that occur
++ * during frequent role switches between host and device modes.
++ *
++ * Save and clear these settings, then restore them after completing the
++ * controller start or stop sequence.
++ *
++ * This solution was discovered through experimentation as it is not
++	 * mentioned in the dwc3 programming guide. It has been tested on
++	 * Exynos platforms.
++ */
++ reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
++ if (reg & DWC3_GUSB2PHYCFG_SUSPHY) {
++ saved_config |= DWC3_GUSB2PHYCFG_SUSPHY;
++ reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
++ }
++
++ if (reg & DWC3_GUSB2PHYCFG_ENBLSLPM) {
++ saved_config |= DWC3_GUSB2PHYCFG_ENBLSLPM;
++ reg &= ~DWC3_GUSB2PHYCFG_ENBLSLPM;
++ }
++
++ if (saved_config)
++ dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
++
+ reg = dwc3_readl(dwc->regs, DWC3_DCTL);
+ if (is_on) {
+ if (DWC3_VER_IS_WITHIN(DWC3, ANY, 187A)) {
+@@ -2649,6 +2677,12 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on)
+ reg &= DWC3_DSTS_DEVCTRLHLT;
+ } while (--timeout && !(!is_on ^ !reg));
+
++ if (saved_config) {
++ reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
++ reg |= saved_config;
++ dwc3_writel(dwc->regs, DWC3_GUSB2PHYCFG(0), reg);
++ }
++
+ if (!timeout)
+ return -ETIMEDOUT;
+
+diff --git a/drivers/usb/gadget/function/f_midi.c b/drivers/usb/gadget/function/f_midi.c
+index 1067847cc07995..4153643c67dcec 100644
+--- a/drivers/usb/gadget/function/f_midi.c
++++ b/drivers/usb/gadget/function/f_midi.c
+@@ -907,6 +907,15 @@ static int f_midi_bind(struct usb_configuration *c, struct usb_function *f)
+
+ status = -ENODEV;
+
++ /*
++	 * Reset wMaxPacketSize to the maximum packet size of an FS bulk transfer
++	 * before the endpoint claim. This ensures that wMaxPacketSize does not
++	 * exceed the limit during bind retries, where the dwc3 TX/RX FIFOs are
++	 * configured with a 512-byte maxpacket that supports HS speed only.
++ */
++ bulk_in_desc.wMaxPacketSize = cpu_to_le16(64);
++ bulk_out_desc.wMaxPacketSize = cpu_to_le16(64);
++
+ /* allocate instance-specific endpoints */
+ midi->in_ep = usb_ep_autoconfig(cdev->gadget, &bulk_in_desc);
+ if (!midi->in_ep)
+@@ -1000,11 +1009,11 @@ static int f_midi_bind(struct usb_configuration *c, struct usb_function *f)
+ }
+
+ /* configure the endpoint descriptors ... */
+- ms_out_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->in_ports);
+- ms_out_desc.bNumEmbMIDIJack = midi->in_ports;
++ ms_out_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->out_ports);
++ ms_out_desc.bNumEmbMIDIJack = midi->out_ports;
+
+- ms_in_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->out_ports);
+- ms_in_desc.bNumEmbMIDIJack = midi->out_ports;
++ ms_in_desc.bLength = USB_DT_MS_ENDPOINT_SIZE(midi->in_ports);
++ ms_in_desc.bNumEmbMIDIJack = midi->in_ports;
+
+ /* ... and add them to the list */
+ endpoint_descriptor_index = i;
+diff --git a/drivers/usb/gadget/udc/core.c b/drivers/usb/gadget/udc/core.c
+index a6f46364be65f0..4b3d5075621aa0 100644
+--- a/drivers/usb/gadget/udc/core.c
++++ b/drivers/usb/gadget/udc/core.c
+@@ -1543,8 +1543,8 @@ void usb_del_gadget(struct usb_gadget *gadget)
+
+ kobject_uevent(&udc->dev.kobj, KOBJ_REMOVE);
+ sysfs_remove_link(&udc->dev.kobj, "gadget");
+- flush_work(&gadget->work);
+ device_del(&gadget->dev);
++ flush_work(&gadget->work);
+ ida_free(&gadget_id_numbers, gadget->id_number);
+ cancel_work_sync(&udc->vbus_work);
+ device_unregister(&udc->dev);
+diff --git a/drivers/usb/gadget/udc/renesas_usb3.c b/drivers/usb/gadget/udc/renesas_usb3.c
+index 3b01734ce1b7e5..a93ad93390ba17 100644
+--- a/drivers/usb/gadget/udc/renesas_usb3.c
++++ b/drivers/usb/gadget/udc/renesas_usb3.c
+@@ -310,7 +310,7 @@ struct renesas_usb3_request {
+ struct list_head queue;
+ };
+
+-#define USB3_EP_NAME_SIZE 8
++#define USB3_EP_NAME_SIZE 16
+ struct renesas_usb3_ep {
+ struct usb_ep ep;
+ struct renesas_usb3 *usb3;
+diff --git a/drivers/usb/host/pci-quirks.c b/drivers/usb/host/pci-quirks.c
+index 1f9c1b1435d862..0404489c2f6a9c 100644
+--- a/drivers/usb/host/pci-quirks.c
++++ b/drivers/usb/host/pci-quirks.c
+@@ -958,6 +958,15 @@ static void quirk_usb_disable_ehci(struct pci_dev *pdev)
+ * booting from USB disk or using a usb keyboard
+ */
+ hcc_params = readl(base + EHCI_HCC_PARAMS);
++
++	/* The LS7A EHCI controller doesn't have extended capabilities; the
++	 * EECP (EHCI Extended Capabilities Pointer) field of the HCCPARAMS
++	 * register should be 0x0, but it reads as 0xa0. Clear it to
++	 * avoid error messages on boot.
++ */
++ if (pdev->vendor == PCI_VENDOR_ID_LOONGSON && pdev->device == 0x7a14)
++ hcc_params &= ~(0xffL << 8);
++
+ offset = (hcc_params >> 8) & 0xff;
+ while (offset && --count) {
+ pci_read_config_dword(pdev, offset, &cap);
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index 3ba9902dd2093c..deb3c98c9beaf6 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -656,8 +656,8 @@ int xhci_pci_common_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ }
+ EXPORT_SYMBOL_NS_GPL(xhci_pci_common_probe, xhci);
+
+-static const struct pci_device_id pci_ids_reject[] = {
+- /* handled by xhci-pci-renesas */
++/* handled by xhci-pci-renesas if enabled */
++static const struct pci_device_id pci_ids_renesas[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, 0x0014) },
+ { PCI_DEVICE(PCI_VENDOR_ID_RENESAS, 0x0015) },
+ { /* end: all zeroes */ }
+@@ -665,7 +665,8 @@ static const struct pci_device_id pci_ids_reject[] = {
+
+ static int xhci_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ {
+- if (pci_match_id(pci_ids_reject, dev))
++ if (IS_ENABLED(CONFIG_USB_XHCI_PCI_RENESAS) &&
++ pci_match_id(pci_ids_renesas, dev))
+ return -ENODEV;
+
+ return xhci_pci_common_probe(dev, id);
+diff --git a/drivers/usb/roles/class.c b/drivers/usb/roles/class.c
+index c58a12c147f451..30482d4cf82678 100644
+--- a/drivers/usb/roles/class.c
++++ b/drivers/usb/roles/class.c
+@@ -387,8 +387,11 @@ usb_role_switch_register(struct device *parent,
+ dev_set_name(&sw->dev, "%s-role-switch",
+ desc->name ? desc->name : dev_name(parent));
+
++ sw->registered = true;
++
+ ret = device_register(&sw->dev);
+ if (ret) {
++ sw->registered = false;
+ put_device(&sw->dev);
+ return ERR_PTR(ret);
+ }
+@@ -399,8 +402,6 @@ usb_role_switch_register(struct device *parent,
+ dev_warn(&sw->dev, "failed to add component\n");
+ }
+
+- sw->registered = true;
+-
+ /* TODO: Symlinks for the host port and the device controller. */
+
+ return sw;
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 1e2ae0c6c41c79..58bd54e8c483a2 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -619,15 +619,6 @@ static void option_instat_callback(struct urb *urb);
+ /* Luat Air72*U series based on UNISOC UIS8910 uses UNISOC's vendor ID */
+ #define LUAT_PRODUCT_AIR720U 0x4e00
+
+-/* MeiG Smart Technology products */
+-#define MEIGSMART_VENDOR_ID 0x2dee
+-/* MeiG Smart SRM815/SRM825L based on Qualcomm 315 */
+-#define MEIGSMART_PRODUCT_SRM825L 0x4d22
+-/* MeiG Smart SLM320 based on UNISOC UIS8910 */
+-#define MEIGSMART_PRODUCT_SLM320 0x4d41
+-/* MeiG Smart SLM770A based on ASR1803 */
+-#define MEIGSMART_PRODUCT_SLM770A 0x4d57
+-
+ /* Device flags */
+
+ /* Highest interface number which can be used with NCTRL() and RSVD() */
+@@ -1367,15 +1358,15 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(2) | RSVD(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1063, 0xff), /* Telit LN920 (ECM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1070, 0xff), /* Telit FN990 (rmnet) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1070, 0xff), /* Telit FN990A (rmnet) */
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1071, 0xff), /* Telit FN990 (MBIM) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1071, 0xff), /* Telit FN990A (MBIM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1072, 0xff), /* Telit FN990 (RNDIS) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1072, 0xff), /* Telit FN990A (RNDIS) */
+ .driver_info = NCTRL(2) | RSVD(3) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1073, 0xff), /* Telit FN990 (ECM) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1073, 0xff), /* Telit FN990A (ECM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990 (PCIe) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990A (PCIe) */
+ .driver_info = RSVD(0) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990 (rmnet) */
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+@@ -1403,6 +1394,22 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(0) | NCTRL(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c8, 0xff), /* Telit FE910C04 (rmnet) */
+ .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x60) }, /* Telit FN990B (rmnet) */
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x40) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x30),
++ .driver_info = NCTRL(5) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x60) }, /* Telit FN990B (MBIM) */
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x40) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x30),
++ .driver_info = NCTRL(6) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x60) }, /* Telit FN990B (RNDIS) */
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x40) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x30),
++ .driver_info = NCTRL(6) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x60) }, /* Telit FN990B (ECM) */
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x40) },
++ { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x30),
++ .driver_info = NCTRL(6) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+@@ -2347,6 +2354,14 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a05, 0xff) }, /* Fibocom FM650-CN (NCM mode) */
+ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a06, 0xff) }, /* Fibocom FM650-CN (RNDIS mode) */
+ { USB_DEVICE_INTERFACE_CLASS(0x2cb7, 0x0a07, 0xff) }, /* Fibocom FM650-CN (MBIM mode) */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d41, 0xff, 0, 0) }, /* MeiG Smart SLM320 */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d57, 0xff, 0, 0) }, /* MeiG Smart SLM770A */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0, 0) }, /* MeiG Smart SRM815 */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0x10, 0x02) }, /* MeiG Smart SLM828 */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0x10, 0x03) }, /* MeiG Smart SLM828 */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0xff, 0x30) }, /* MeiG Smart SRM815 and SRM825L */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0xff, 0x40) }, /* MeiG Smart SRM825L */
++ { USB_DEVICE_AND_INTERFACE_INFO(0x2dee, 0x4d22, 0xff, 0xff, 0x60) }, /* MeiG Smart SRM825L */
+ { USB_DEVICE_INTERFACE_CLASS(0x2df3, 0x9d03, 0xff) }, /* LongSung M5710 */
+ { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1404, 0xff) }, /* GosunCn GM500 RNDIS */
+ { USB_DEVICE_INTERFACE_CLASS(0x305a, 0x1405, 0xff) }, /* GosunCn GM500 MBIM */
+@@ -2403,12 +2418,6 @@ static const struct usb_device_id option_ids[] = {
+ { USB_DEVICE_AND_INTERFACE_INFO(SIERRA_VENDOR_ID, SIERRA_PRODUCT_EM9191, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, TOZED_PRODUCT_LT70C, 0xff, 0, 0) },
+ { USB_DEVICE_AND_INTERFACE_INFO(UNISOC_VENDOR_ID, LUAT_PRODUCT_AIR720U, 0xff, 0, 0) },
+- { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM320, 0xff, 0, 0) },
+- { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SLM770A, 0xff, 0, 0) },
+- { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0, 0) },
+- { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x30) },
+- { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x40) },
+- { USB_DEVICE_AND_INTERFACE_INFO(MEIGSMART_VENDOR_ID, MEIGSMART_PRODUCT_SRM825L, 0xff, 0xff, 0x60) },
+ { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0530, 0xff), /* TCL IK512 MBIM */
+ .driver_info = NCTRL(1) },
+ { USB_DEVICE_INTERFACE_CLASS(0x1bbb, 0x0640, 0xff), /* TCL IK512 ECM */
+diff --git a/drivers/vfio/pci/nvgrace-gpu/main.c b/drivers/vfio/pci/nvgrace-gpu/main.c
+index a7fd018aa54836..9e1c57baab64a2 100644
+--- a/drivers/vfio/pci/nvgrace-gpu/main.c
++++ b/drivers/vfio/pci/nvgrace-gpu/main.c
+@@ -17,12 +17,14 @@
+ #define RESMEM_REGION_INDEX VFIO_PCI_BAR2_REGION_INDEX
+ #define USEMEM_REGION_INDEX VFIO_PCI_BAR4_REGION_INDEX
+
+-/* Memory size expected as non cached and reserved by the VM driver */
+-#define RESMEM_SIZE SZ_1G
+-
+ /* A hardwired and constant ABI value between the GPU FW and VFIO driver. */
+ #define MEMBLK_SIZE SZ_512M
+
++#define DVSEC_BITMAP_OFFSET 0xA
++#define MIG_SUPPORTED_WITH_CACHED_RESMEM BIT(0)
++
++#define GPU_CAP_DVSEC_REGISTER 3
++
+ /*
+ * The state of the two device memory region - resmem and usemem - is
+ * saved as struct mem_region.
+@@ -46,6 +48,7 @@ struct nvgrace_gpu_pci_core_device {
+ struct mem_region resmem;
+ /* Lock to control device memory kernel mapping */
+ struct mutex remap_lock;
++ bool has_mig_hw_bug;
+ };
+
+ static void nvgrace_gpu_init_fake_bar_emu_regs(struct vfio_device *core_vdev)
+@@ -66,7 +69,7 @@ nvgrace_gpu_memregion(int index,
+ if (index == USEMEM_REGION_INDEX)
+ return &nvdev->usemem;
+
+- if (index == RESMEM_REGION_INDEX)
++ if (nvdev->resmem.memlength && index == RESMEM_REGION_INDEX)
+ return &nvdev->resmem;
+
+ return NULL;
+@@ -751,40 +754,67 @@ nvgrace_gpu_init_nvdev_struct(struct pci_dev *pdev,
+ u64 memphys, u64 memlength)
+ {
+ int ret = 0;
++ u64 resmem_size = 0;
+
+ /*
+- * The VM GPU device driver needs a non-cacheable region to support
+- * the MIG feature. Since the device memory is mapped as NORMAL cached,
+- * carve out a region from the end with a different NORMAL_NC
+- * property (called as reserved memory and represented as resmem). This
+- * region then is exposed as a 64b BAR (region 2 and 3) to the VM, while
+- * exposing the rest (termed as usable memory and represented using usemem)
+- * as cacheable 64b BAR (region 4 and 5).
++ * On Grace Hopper systems, the VM GPU device driver needs a non-cacheable
++ * region to support the MIG feature owing to a hardware bug. Since the
++ * device memory is mapped as NORMAL cached, carve out a region from the end
++	 * with a different NORMAL_NC property (called reserved memory and
++	 * represented as resmem). This region is then exposed as a 64b BAR
++	 * (region 2 and 3) to the VM, while exposing the rest (termed usable
++	 * memory and represented as usemem) as a cacheable 64b BAR (region 4 and 5).
+ *
+ * devmem (memlength)
+ * |-------------------------------------------------|
+ * | |
+ * usemem.memphys resmem.memphys
++ *
++ * This hardware bug is fixed on the Grace Blackwell platforms and the
++ * presence of the bug can be determined through nvdev->has_mig_hw_bug.
++ * Thus on systems with the hardware fix, there is no need to partition
++ * the GPU device memory and the entire memory is usable and mapped as
++ * NORMAL cached (i.e. resmem size is 0).
+ */
++ if (nvdev->has_mig_hw_bug)
++ resmem_size = SZ_1G;
++
+ nvdev->usemem.memphys = memphys;
+
+ /*
+ * The device memory exposed to the VM is added to the kernel by the
+- * VM driver module in chunks of memory block size. Only the usable
+- * memory (usemem) is added to the kernel for usage by the VM
+- * workloads. Make the usable memory size memblock aligned.
++ * VM driver module in chunks of memory block size. Note that only the
++ * usable memory (usemem) is added to the kernel for usage by the VM
++ * workloads.
+ */
+- if (check_sub_overflow(memlength, RESMEM_SIZE,
++ if (check_sub_overflow(memlength, resmem_size,
+ &nvdev->usemem.memlength)) {
+ ret = -EOVERFLOW;
+ goto done;
+ }
+
+ /*
+- * The USEMEM part of the device memory has to be MEMBLK_SIZE
+- * aligned. This is a hardwired ABI value between the GPU FW and
+- * VFIO driver. The VM device driver is also aware of it and make
+- * use of the value for its calculation to determine USEMEM size.
++	 * The usemem region is exposed as a 64b BAR composed of region 4 and 5.
++ * Calculate and save the BAR size for the region.
++ */
++ nvdev->usemem.bar_size = roundup_pow_of_two(nvdev->usemem.memlength);
++
++ /*
++ * If the hardware has the fix for MIG, there is no requirement
++ * for splitting the device memory to create RESMEM. The entire
++	 * device memory is usable and will be USEMEM. Return early in
++	 * that case.
++ */
++ if (!nvdev->has_mig_hw_bug)
++ goto done;
++
++ /*
++ * When the device memory is split to workaround the MIG bug on
++ * Grace Hopper, the USEMEM part of the device memory has to be
++ * MEMBLK_SIZE aligned. This is a hardwired ABI value between the
++ * GPU FW and VFIO driver. The VM device driver is also aware of it
++	 * and makes use of the value in its calculation of the USEMEM
++ * size. Note that the device memory may not be 512M aligned.
+ */
+ nvdev->usemem.memlength = round_down(nvdev->usemem.memlength,
+ MEMBLK_SIZE);
+@@ -803,15 +833,34 @@ nvgrace_gpu_init_nvdev_struct(struct pci_dev *pdev,
+ }
+
+ /*
+- * The memory regions are exposed as BARs. Calculate and save
+- * the BAR size for them.
++ * The resmem region is exposed as a 64b BAR composed of region 2 and 3
++ * for Grace Hopper. Calculate and save the BAR size for the region.
+ */
+- nvdev->usemem.bar_size = roundup_pow_of_two(nvdev->usemem.memlength);
+ nvdev->resmem.bar_size = roundup_pow_of_two(nvdev->resmem.memlength);
+ done:
+ return ret;
+ }
+
++static bool nvgrace_gpu_has_mig_hw_bug(struct pci_dev *pdev)
++{
++ int pcie_dvsec;
++ u16 dvsec_ctrl16;
++
++ pcie_dvsec = pci_find_dvsec_capability(pdev, PCI_VENDOR_ID_NVIDIA,
++ GPU_CAP_DVSEC_REGISTER);
++
++ if (pcie_dvsec) {
++ pci_read_config_word(pdev,
++ pcie_dvsec + DVSEC_BITMAP_OFFSET,
++ &dvsec_ctrl16);
++
++ if (dvsec_ctrl16 & MIG_SUPPORTED_WITH_CACHED_RESMEM)
++ return false;
++ }
++
++ return true;
++}
++
+ static int nvgrace_gpu_probe(struct pci_dev *pdev,
+ const struct pci_device_id *id)
+ {
+@@ -832,6 +881,8 @@ static int nvgrace_gpu_probe(struct pci_dev *pdev,
+ dev_set_drvdata(&pdev->dev, &nvdev->core_device);
+
+ if (ops == &nvgrace_gpu_pci_ops) {
++ nvdev->has_mig_hw_bug = nvgrace_gpu_has_mig_hw_bug(pdev);
++
+ /*
+ * Device memory properties are identified in the host ACPI
+ * table. Set the nvgrace_gpu_pci_core_device structure.
+diff --git a/drivers/vfio/pci/vfio_pci_rdwr.c b/drivers/vfio/pci/vfio_pci_rdwr.c
+index 66b72c2892841d..a0595c745732a3 100644
+--- a/drivers/vfio/pci/vfio_pci_rdwr.c
++++ b/drivers/vfio/pci/vfio_pci_rdwr.c
+@@ -16,6 +16,7 @@
+ #include <linux/io.h>
+ #include <linux/vfio.h>
+ #include <linux/vgaarb.h>
++#include <linux/io-64-nonatomic-lo-hi.h>
+
+ #include "vfio_pci_priv.h"
+
+diff --git a/drivers/vfio/platform/vfio_platform_common.c b/drivers/vfio/platform/vfio_platform_common.c
+index d63c2d266d0735..3bf1043cd7957c 100644
+--- a/drivers/vfio/platform/vfio_platform_common.c
++++ b/drivers/vfio/platform/vfio_platform_common.c
+@@ -393,11 +393,6 @@ static ssize_t vfio_platform_read_mmio(struct vfio_platform_region *reg,
+
+ count = min_t(size_t, count, reg->size - off);
+
+- if (off >= reg->size)
+- return -EINVAL;
+-
+- count = min_t(size_t, count, reg->size - off);
+-
+ if (!reg->ioaddr) {
+ reg->ioaddr =
+ ioremap(reg->addr, reg->size);
+@@ -482,11 +477,6 @@ static ssize_t vfio_platform_write_mmio(struct vfio_platform_region *reg,
+
+ count = min_t(size_t, count, reg->size - off);
+
+- if (off >= reg->size)
+- return -EINVAL;
+-
+- count = min_t(size_t, count, reg->size - off);
+-
+ if (!reg->ioaddr) {
+ reg->ioaddr =
+ ioremap(reg->addr, reg->size);
+diff --git a/drivers/video/fbdev/omap/lcd_dma.c b/drivers/video/fbdev/omap/lcd_dma.c
+index f85817635a8c2c..0da23c57e4757e 100644
+--- a/drivers/video/fbdev/omap/lcd_dma.c
++++ b/drivers/video/fbdev/omap/lcd_dma.c
+@@ -432,8 +432,8 @@ static int __init omap_init_lcd_dma(void)
+
+ spin_lock_init(&lcd_dma.lock);
+
+- r = request_irq(INT_DMA_LCD, lcd_dma_irq_handler, 0,
+- "LCD DMA", NULL);
++ r = request_threaded_irq(INT_DMA_LCD, NULL, lcd_dma_irq_handler,
++ IRQF_ONESHOT, "LCD DMA", NULL);
+ if (r != 0)
+ pr_err("unable to request IRQ for LCD DMA (error %d)\n", r);
+
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index a337edcf8faf71..26c62e0d34e98b 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -74,19 +74,21 @@ static inline phys_addr_t xen_dma_to_phys(struct device *dev,
+ return xen_bus_to_phys(dev, dma_to_phys(dev, dma_addr));
+ }
+
++static inline bool range_requires_alignment(phys_addr_t p, size_t size)
++{
++ phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
++ phys_addr_t bus_addr = pfn_to_bfn(XEN_PFN_DOWN(p)) << XEN_PAGE_SHIFT;
++
++ return IS_ALIGNED(p, algn) && !IS_ALIGNED(bus_addr, algn);
++}
++
+ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
+ {
+ unsigned long next_bfn, xen_pfn = XEN_PFN_DOWN(p);
+ unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);
+- phys_addr_t algn = 1ULL << (get_order(size) + PAGE_SHIFT);
+
+ next_bfn = pfn_to_bfn(xen_pfn);
+
+- /* If buffer is physically aligned, ensure DMA alignment. */
+- if (IS_ALIGNED(p, algn) &&
+- !IS_ALIGNED((phys_addr_t)next_bfn << XEN_PAGE_SHIFT, algn))
+- return 1;
+-
+ for (i = 1; i < nr_pages; i++)
+ if (pfn_to_bfn(++xen_pfn) != ++next_bfn)
+ return 1;
+@@ -156,7 +158,8 @@ xen_swiotlb_alloc_coherent(struct device *dev, size_t size,
+
+ *dma_handle = xen_phys_to_dma(dev, phys);
+ if (*dma_handle + size - 1 > dma_mask ||
+- range_straddles_page_boundary(phys, size)) {
++ range_straddles_page_boundary(phys, size) ||
++ range_requires_alignment(phys, size)) {
+ if (xen_create_contiguous_region(phys, order, fls64(dma_mask),
+ dma_handle) != 0)
+ goto out_free_pages;
+@@ -182,7 +185,8 @@ xen_swiotlb_free_coherent(struct device *dev, size_t size, void *vaddr,
+ size = ALIGN(size, XEN_PAGE_SIZE);
+
+ if (WARN_ON_ONCE(dma_handle + size - 1 > dev->coherent_dma_mask) ||
+- WARN_ON_ONCE(range_straddles_page_boundary(phys, size)))
++ WARN_ON_ONCE(range_straddles_page_boundary(phys, size) ||
++ range_requires_alignment(phys, size)))
+ return;
+
+ if (TestClearPageXenRemapped(virt_to_page(vaddr)))
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 42c9899d9241c9..fe08c983d5bb4b 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -901,12 +901,11 @@ void clear_folio_extent_mapped(struct folio *folio)
+ folio_detach_private(folio);
+ }
+
+-static struct extent_map *__get_extent_map(struct inode *inode,
+- struct folio *folio, u64 start,
+- u64 len, struct extent_map **em_cached)
++static struct extent_map *get_extent_map(struct btrfs_inode *inode,
++ struct folio *folio, u64 start,
++ u64 len, struct extent_map **em_cached)
+ {
+ struct extent_map *em;
+- struct extent_state *cached_state = NULL;
+
+ ASSERT(em_cached);
+
+@@ -922,14 +921,12 @@ static struct extent_map *__get_extent_map(struct inode *inode,
+ *em_cached = NULL;
+ }
+
+- btrfs_lock_and_flush_ordered_range(BTRFS_I(inode), start, start + len - 1, &cached_state);
+- em = btrfs_get_extent(BTRFS_I(inode), folio, start, len);
++ em = btrfs_get_extent(inode, folio, start, len);
+ if (!IS_ERR(em)) {
+ BUG_ON(*em_cached);
+ refcount_inc(&em->refs);
+ *em_cached = em;
+ }
+- unlock_extent(&BTRFS_I(inode)->io_tree, start, start + len - 1, &cached_state);
+
+ return em;
+ }
+@@ -985,8 +982,7 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+ end_folio_read(folio, true, cur, iosize);
+ break;
+ }
+- em = __get_extent_map(inode, folio, cur, end - cur + 1,
+- em_cached);
++ em = get_extent_map(BTRFS_I(inode), folio, cur, end - cur + 1, em_cached);
+ if (IS_ERR(em)) {
+ end_folio_read(folio, false, cur, end + 1 - cur);
+ return PTR_ERR(em);
+@@ -1087,11 +1083,18 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+
+ int btrfs_read_folio(struct file *file, struct folio *folio)
+ {
++ struct btrfs_inode *inode = folio_to_inode(folio);
++ const u64 start = folio_pos(folio);
++ const u64 end = start + folio_size(folio) - 1;
++ struct extent_state *cached_state = NULL;
+ struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ };
+ struct extent_map *em_cached = NULL;
+ int ret;
+
++ btrfs_lock_and_flush_ordered_range(inode, start, end, &cached_state);
+ ret = btrfs_do_readpage(folio, &em_cached, &bio_ctrl, NULL);
++ unlock_extent(&inode->io_tree, start, end, &cached_state);
++
+ free_extent_map(em_cached);
+
+ /*
+@@ -2268,12 +2271,20 @@ void btrfs_readahead(struct readahead_control *rac)
+ {
+ struct btrfs_bio_ctrl bio_ctrl = { .opf = REQ_OP_READ | REQ_RAHEAD };
+ struct folio *folio;
++ struct btrfs_inode *inode = BTRFS_I(rac->mapping->host);
++ const u64 start = readahead_pos(rac);
++ const u64 end = start + readahead_length(rac) - 1;
++ struct extent_state *cached_state = NULL;
+ struct extent_map *em_cached = NULL;
+ u64 prev_em_start = (u64)-1;
+
++ btrfs_lock_and_flush_ordered_range(inode, start, end, &cached_state);
++
+ while ((folio = readahead_folio(rac)) != NULL)
+ btrfs_do_readpage(folio, &em_cached, &bio_ctrl, &prev_em_start);
+
++ unlock_extent(&inode->io_tree, start, end, &cached_state);
++
+ if (em_cached)
+ free_extent_map(em_cached);
+ submit_one_bio(&bio_ctrl);
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 559c177456e6a0..848cb2c3d9ddeb 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1148,7 +1148,6 @@ int btrfs_write_check(struct kiocb *iocb, struct iov_iter *from, size_t count)
+ loff_t pos = iocb->ki_pos;
+ int ret;
+ loff_t oldsize;
+- loff_t start_pos;
+
+ /*
+ * Quickly bail out on NOWAIT writes if we don't have the nodatacow or
+@@ -1172,9 +1171,8 @@ int btrfs_write_check(struct kiocb *iocb, struct iov_iter *from, size_t count)
+ */
+ update_time_for_write(inode);
+
+- start_pos = round_down(pos, fs_info->sectorsize);
+ oldsize = i_size_read(inode);
+- if (start_pos > oldsize) {
++ if (pos > oldsize) {
+ /* Expand hole size to cover write data, preventing empty gap */
+ loff_t end_pos = round_up(pos + count, fs_info->sectorsize);
+
+diff --git a/fs/nfs/sysfs.c b/fs/nfs/sysfs.c
+index bf378ecd5d9fdd..7b59a40d40c061 100644
+--- a/fs/nfs/sysfs.c
++++ b/fs/nfs/sysfs.c
+@@ -280,9 +280,9 @@ void nfs_sysfs_link_rpc_client(struct nfs_server *server,
+ char name[RPC_CLIENT_NAME_SIZE];
+ int ret;
+
+- strcpy(name, clnt->cl_program->name);
+- strcat(name, uniq ? uniq : "");
+- strcat(name, "_client");
++ strscpy(name, clnt->cl_program->name, sizeof(name));
++ strncat(name, uniq ? uniq : "", sizeof(name) - strlen(name) - 1);
++ strncat(name, "_client", sizeof(name) - strlen(name) - 1);
+
+ ret = sysfs_create_link_nowarn(&server->kobj,
+ &clnt->cl_sysfs->kobject, name);
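+
The conversion above replaces unbounded strcpy()/strcat() with strscpy()/strncat() calls bounded by the destination size. A userspace sketch of the same bounded assembly, with snprintf() standing in for the kernel helpers and an assumed buffer size:

#include <stdio.h>

#define RPC_CLIENT_NAME_SIZE 64 /* illustrative, not the kernel value */

/* build "<program><uniq>_client" without overrunning name[] */
static void build_client_name(char *name, size_t size,
			      const char *prog, const char *uniq)
{
	snprintf(name, size, "%s%s_client", prog, uniq ? uniq : "");
}

int main(void)
{
	char name[RPC_CLIENT_NAME_SIZE];

	build_client_name(name, sizeof(name), "nfs", "4");
	puts(name); /* prints "nfs4_client" */
	return 0;
}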
+diff --git a/fs/nfsd/filecache.c b/fs/nfsd/filecache.c
+index 146a9463c3c230..d199688818557d 100644
+--- a/fs/nfsd/filecache.c
++++ b/fs/nfsd/filecache.c
+@@ -445,11 +445,20 @@ nfsd_file_dispose_list_delayed(struct list_head *dispose)
+ struct nfsd_file, nf_gc);
+ struct nfsd_net *nn = net_generic(nf->nf_net, nfsd_net_id);
+ struct nfsd_fcache_disposal *l = nn->fcache_disposal;
++ struct svc_serv *serv;
+
+ spin_lock(&l->lock);
+ list_move_tail(&nf->nf_gc, &l->freeme);
+ spin_unlock(&l->lock);
+- svc_wake_up(nn->nfsd_serv);
++
++ /*
++ * The filecache laundrette is shut down after the
++ * nn->nfsd_serv pointer is cleared, but before the
++ * svc_serv is freed.
++ */
++ serv = nn->nfsd_serv;
++ if (serv)
++ svc_wake_up(serv);
+ }
+ }
+
+diff --git a/fs/nfsd/nfs2acl.c b/fs/nfsd/nfs2acl.c
+index 4e3be7201b1c43..5fb202acb0fd00 100644
+--- a/fs/nfsd/nfs2acl.c
++++ b/fs/nfsd/nfs2acl.c
+@@ -84,6 +84,8 @@ static __be32 nfsacld_proc_getacl(struct svc_rqst *rqstp)
+ fail:
+ posix_acl_release(resp->acl_access);
+ posix_acl_release(resp->acl_default);
++ resp->acl_access = NULL;
++ resp->acl_default = NULL;
+ goto out;
+ }
+
+diff --git a/fs/nfsd/nfs3acl.c b/fs/nfsd/nfs3acl.c
+index 5e34e98db969db..7b5433bd301974 100644
+--- a/fs/nfsd/nfs3acl.c
++++ b/fs/nfsd/nfs3acl.c
+@@ -76,6 +76,8 @@ static __be32 nfsd3_proc_getacl(struct svc_rqst *rqstp)
+ fail:
+ posix_acl_release(resp->acl_access);
+ posix_acl_release(resp->acl_default);
++ resp->acl_access = NULL;
++ resp->acl_default = NULL;
+ goto out;
+ }
+
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index de076365254978..88c03e18257323 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -1486,8 +1486,11 @@ nfsd4_run_cb_work(struct work_struct *work)
+ nfsd4_process_cb_update(cb);
+
+ clnt = clp->cl_cb_client;
+- if (!clnt) {
+- /* Callback channel broken, or client killed; give up: */
++ if (!clnt || clp->cl_state == NFSD4_COURTESY) {
++ /*
++ * Callback channel broken, client killed or
++ * nfs4_client in courtesy state; give up.
++ */
+ nfsd41_destroy_cb(cb);
+ return;
+ }
+diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
+index 8d789b017fa9b6..da1a9312e61a0e 100644
+--- a/fs/ntfs3/attrib.c
++++ b/fs/ntfs3/attrib.c
+@@ -1406,7 +1406,7 @@ int attr_wof_frame_info(struct ntfs_inode *ni, struct ATTRIB *attr,
+ */
+ if (!attr->non_res) {
+ if (vbo[1] + bytes_per_off > le32_to_cpu(attr->res.data_size)) {
+- ntfs_inode_err(&ni->vfs_inode, "is corrupted");
++ _ntfs_bad_inode(&ni->vfs_inode);
+ return -EINVAL;
+ }
+ addr = resident_data(attr);
+@@ -2587,7 +2587,7 @@ int attr_force_nonresident(struct ntfs_inode *ni)
+
+ attr = ni_find_attr(ni, NULL, &le, ATTR_DATA, NULL, 0, NULL, &mi);
+ if (!attr) {
+- ntfs_bad_inode(&ni->vfs_inode, "no data attribute");
++ _ntfs_bad_inode(&ni->vfs_inode);
+ return -ENOENT;
+ }
+
+diff --git a/fs/ntfs3/dir.c b/fs/ntfs3/dir.c
+index fc6a8aa29e3afe..b6da80c69ca634 100644
+--- a/fs/ntfs3/dir.c
++++ b/fs/ntfs3/dir.c
+@@ -512,7 +512,7 @@ static int ntfs_readdir(struct file *file, struct dir_context *ctx)
+ ctx->pos = pos;
+ } else if (err < 0) {
+ if (err == -EINVAL)
+- ntfs_inode_err(dir, "directory corrupted");
++ _ntfs_bad_inode(dir);
+ ctx->pos = eod;
+ }
+
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index c33e818b3164cd..175662acd5eaf0 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -148,8 +148,10 @@ int ni_load_mi_ex(struct ntfs_inode *ni, CLST rno, struct mft_inode **mi)
+ goto out;
+
+ err = mi_get(ni->mi.sbi, rno, &r);
+- if (err)
++ if (err) {
++ _ntfs_bad_inode(&ni->vfs_inode);
+ return err;
++ }
+
+ ni_add_mi(ni, r);
+
+@@ -238,8 +240,7 @@ struct ATTRIB *ni_find_attr(struct ntfs_inode *ni, struct ATTRIB *attr,
+ return attr;
+
+ out:
+- ntfs_inode_err(&ni->vfs_inode, "failed to parse mft record");
+- ntfs_set_state(ni->mi.sbi, NTFS_DIRTY_ERROR);
++ _ntfs_bad_inode(&ni->vfs_inode);
+ return NULL;
+ }
+
+@@ -330,6 +331,7 @@ struct ATTRIB *ni_load_attr(struct ntfs_inode *ni, enum ATTR_TYPE type,
+ vcn <= le64_to_cpu(attr->nres.evcn))
+ return attr;
+
++ _ntfs_bad_inode(&ni->vfs_inode);
+ return NULL;
+ }
+
+@@ -1604,8 +1606,8 @@ int ni_delete_all(struct ntfs_inode *ni)
+ roff = le16_to_cpu(attr->nres.run_off);
+
+ if (roff > asize) {
+- _ntfs_bad_inode(&ni->vfs_inode);
+- return -EINVAL;
++ /* ni_enum_attr_ex checks this case. */
++ continue;
+ }
+
+ /* run==1 means unpack and deallocate. */
+diff --git a/fs/ntfs3/fsntfs.c b/fs/ntfs3/fsntfs.c
+index 0fa636038b4e4d..6c73e93afb478c 100644
+--- a/fs/ntfs3/fsntfs.c
++++ b/fs/ntfs3/fsntfs.c
+@@ -908,7 +908,11 @@ void ntfs_bad_inode(struct inode *inode, const char *hint)
+
+ ntfs_inode_err(inode, "%s", hint);
+ make_bad_inode(inode);
+- ntfs_set_state(sbi, NTFS_DIRTY_ERROR);
++ /* Avoid recursion if bad inode is $Volume. */
++ if (inode->i_ino != MFT_REC_VOL &&
++ !(sbi->flags & NTFS_FLAGS_LOG_REPLAYING)) {
++ ntfs_set_state(sbi, NTFS_DIRTY_ERROR);
++ }
+ }
+
+ /*
+diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
+index 9089c58a005ce1..7eb9fae22f8da6 100644
+--- a/fs/ntfs3/index.c
++++ b/fs/ntfs3/index.c
+@@ -1094,8 +1094,7 @@ int indx_read(struct ntfs_index *indx, struct ntfs_inode *ni, CLST vbn,
+
+ ok:
+ if (!index_buf_check(ib, bytes, &vbn)) {
+- ntfs_inode_err(&ni->vfs_inode, "directory corrupted");
+- ntfs_set_state(ni->mi.sbi, NTFS_DIRTY_ERROR);
++ _ntfs_bad_inode(&ni->vfs_inode);
+ err = -EINVAL;
+ goto out;
+ }
+@@ -1117,8 +1116,7 @@ int indx_read(struct ntfs_index *indx, struct ntfs_inode *ni, CLST vbn,
+
+ out:
+ if (err == -E_NTFS_CORRUPT) {
+- ntfs_inode_err(&ni->vfs_inode, "directory corrupted");
+- ntfs_set_state(ni->mi.sbi, NTFS_DIRTY_ERROR);
++ _ntfs_bad_inode(&ni->vfs_inode);
+ err = -EINVAL;
+ }
+
+diff --git a/fs/ntfs3/inode.c b/fs/ntfs3/inode.c
+index be04d2845bb7bc..a1e11228dafd02 100644
+--- a/fs/ntfs3/inode.c
++++ b/fs/ntfs3/inode.c
+@@ -410,6 +410,9 @@ static struct inode *ntfs_read_mft(struct inode *inode,
+ if (!std5)
+ goto out;
+
++ if (is_bad_inode(inode))
++ goto out;
++
+ if (!is_match && name) {
+ err = -ENOENT;
+ goto out;
+diff --git a/fs/orangefs/orangefs-debugfs.c b/fs/orangefs/orangefs-debugfs.c
+index 1b508f5433846e..fa41db08848802 100644
+--- a/fs/orangefs/orangefs-debugfs.c
++++ b/fs/orangefs/orangefs-debugfs.c
+@@ -393,9 +393,9 @@ static ssize_t orangefs_debug_write(struct file *file,
+ * Thwart users who try to jamb a ridiculous number
+ * of bytes into the debug file...
+ */
+- if (count > ORANGEFS_MAX_DEBUG_STRING_LEN + 1) {
++ if (count > ORANGEFS_MAX_DEBUG_STRING_LEN) {
+ silly = count;
+- count = ORANGEFS_MAX_DEBUG_STRING_LEN + 1;
++ count = ORANGEFS_MAX_DEBUG_STRING_LEN;
+ }
+
+ buf = kzalloc(ORANGEFS_MAX_DEBUG_STRING_LEN, GFP_KERNEL);
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 94785abc9b1b2d..05274121e46f04 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -1476,7 +1476,6 @@ struct cifs_io_parms {
+ struct cifs_io_request {
+ struct netfs_io_request rreq;
+ struct cifsFileInfo *cfile;
+- struct TCP_Server_Info *server;
+ pid_t pid;
+ };
+
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index a58a3333ecc300..313c851fc1c122 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -147,7 +147,7 @@ static int cifs_prepare_read(struct netfs_io_subrequest *subreq)
+ struct netfs_io_request *rreq = subreq->rreq;
+ struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq);
+ struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq);
+- struct TCP_Server_Info *server = req->server;
++ struct TCP_Server_Info *server;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(rreq->inode->i_sb);
+ size_t size;
+ int rc = 0;
+@@ -156,6 +156,8 @@ static int cifs_prepare_read(struct netfs_io_subrequest *subreq)
+ rdata->xid = get_xid();
+ rdata->have_xid = true;
+ }
++
++ server = cifs_pick_channel(tlink_tcon(req->cfile->tlink)->ses);
+ rdata->server = server;
+
+ if (cifs_sb->ctx->rsize == 0)
+@@ -198,7 +200,7 @@ static void cifs_issue_read(struct netfs_io_subrequest *subreq)
+ struct netfs_io_request *rreq = subreq->rreq;
+ struct cifs_io_subrequest *rdata = container_of(subreq, struct cifs_io_subrequest, subreq);
+ struct cifs_io_request *req = container_of(subreq->rreq, struct cifs_io_request, rreq);
+- struct TCP_Server_Info *server = req->server;
++ struct TCP_Server_Info *server = rdata->server;
+ int rc = 0;
+
+ cifs_dbg(FYI, "%s: op=%08x[%x] mapping=%p len=%zu/%zu\n",
+@@ -265,7 +267,6 @@ static int cifs_init_request(struct netfs_io_request *rreq, struct file *file)
+ open_file = file->private_data;
+ rreq->netfs_priv = file->private_data;
+ req->cfile = cifsFileInfo_get(open_file);
+- req->server = cifs_pick_channel(tlink_tcon(req->cfile->tlink)->ses);
+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_RWPIDFORWARD)
+ req->pid = req->cfile->pid;
+ } else if (rreq->origin != NETFS_WRITEBACK) {
+diff --git a/include/drm/display/drm_dp.h b/include/drm/display/drm_dp.h
+index a6f8b098c56f14..3bd9f482f0c3e6 100644
+--- a/include/drm/display/drm_dp.h
++++ b/include/drm/display/drm_dp.h
+@@ -359,6 +359,7 @@
+ # define DP_DSC_BITS_PER_PIXEL_1_4 0x2
+ # define DP_DSC_BITS_PER_PIXEL_1_2 0x3
+ # define DP_DSC_BITS_PER_PIXEL_1_1 0x4
++# define DP_DSC_BITS_PER_PIXEL_MASK 0x7
+
+ #define DP_PSR_SUPPORT 0x070 /* XXX 1.2? */
+ # define DP_PSR_IS_SUPPORTED 1
+diff --git a/include/kunit/platform_device.h b/include/kunit/platform_device.h
+index 0fc0999d2420aa..f8236a8536f7eb 100644
+--- a/include/kunit/platform_device.h
++++ b/include/kunit/platform_device.h
+@@ -2,6 +2,7 @@
+ #ifndef _KUNIT_PLATFORM_DRIVER_H
+ #define _KUNIT_PLATFORM_DRIVER_H
+
++struct completion;
+ struct kunit;
+ struct platform_device;
+ struct platform_driver;
+diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
+index c5063e0a38a058..a53cbe25691043 100644
+--- a/include/linux/blk-mq.h
++++ b/include/linux/blk-mq.h
+@@ -880,12 +880,22 @@ static inline bool blk_mq_add_to_batch(struct request *req,
+ void (*complete)(struct io_comp_batch *))
+ {
+ /*
+- * blk_mq_end_request_batch() can't end request allocated from
+- * sched tags
++ * Check various conditions that exclude batch processing:
++ * 1) No batch container
++ * 2) Has scheduler data attached
++ * 3) Not a passthrough request and end_io set
++ * 4) Not a passthrough request and ioerror is negative
+ */
+- if (!iob || (req->rq_flags & RQF_SCHED_TAGS) || ioerror ||
+- (req->end_io && !blk_rq_is_passthrough(req)))
++ if (!iob)
+ return false;
++ if (req->rq_flags & RQF_SCHED_TAGS)
++ return false;
++ if (!blk_rq_is_passthrough(req)) {
++ if (req->end_io)
++ return false;
++ if (ioerror < 0)
++ return false;
++ }
+
+ if (!iob->complete)
+ iob->complete = complete;
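+
Restated as a standalone predicate, the four conditions from the comment above read as follows (the struct and flag value are simplified stand-ins, not the kernel definitions):

#include <stdbool.h>

#define RQF_SCHED_TAGS 0x1 /* illustrative flag value */

struct req {
	unsigned int rq_flags;
	bool passthrough;
	void (*end_io)(void);
};

static bool can_batch(const struct req *req, void *iob, int ioerror)
{
	if (!iob)				/* 1) no batch container */
		return false;
	if (req->rq_flags & RQF_SCHED_TAGS)	/* 2) scheduler data */
		return false;
	if (!req->passthrough) {
		if (req->end_io)		/* 3) end_io set */
			return false;
		if (ioerror < 0)		/* 4) negative ioerror */
			return false;
	}
	return true;
}

int main(void)
{
	struct req r = { 0 };
	int iob;

	return can_batch(&r, &iob, 0) ? 0 : 1;
}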
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 47ae4c4d924c28..a32eebcd23da47 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -71,9 +71,6 @@ enum {
+
+ /* Cgroup is frozen. */
+ CGRP_FROZEN,
+-
+- /* Control group has to be killed. */
+- CGRP_KILL,
+ };
+
+ /* cgroup_root->flags */
+@@ -460,6 +457,9 @@ struct cgroup {
+
+ int nr_threaded_children; /* # of live threaded child cgroups */
+
++ /* sequence number for cgroup.kill, serialized by css_set_lock. */
++ unsigned int kill_seq;
++
+ struct kernfs_node *kn; /* cgroup kernfs entry */
+ struct cgroup_file procs_file; /* handle for "cgroup.procs" */
+ struct cgroup_file events_file; /* handle for "cgroup.events" */
+diff --git a/include/linux/efi.h b/include/linux/efi.h
+index e28d8806603376..2f1bfd7562eb2b 100644
+--- a/include/linux/efi.h
++++ b/include/linux/efi.h
+@@ -128,6 +128,7 @@ typedef struct {
+ #define EFI_MEMORY_RO ((u64)0x0000000000020000ULL) /* read-only */
+ #define EFI_MEMORY_SP ((u64)0x0000000000040000ULL) /* soft reserved */
+ #define EFI_MEMORY_CPU_CRYPTO ((u64)0x0000000000080000ULL) /* supports encryption */
++#define EFI_MEMORY_HOT_PLUGGABLE BIT_ULL(20) /* supports unplugging at runtime */
+ #define EFI_MEMORY_RUNTIME ((u64)0x8000000000000000ULL) /* range requires runtime mapping */
+ #define EFI_MEMORY_DESCRIPTOR_VERSION 1
+
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 02d3bafebbe77c..4f17b786828af7 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -2577,6 +2577,12 @@ struct net *dev_net(const struct net_device *dev)
+ return read_pnet(&dev->nd_net);
+ }
+
++static inline
++struct net *dev_net_rcu(const struct net_device *dev)
++{
++ return read_pnet_rcu(&dev->nd_net);
++}
++
+ static inline
+ void dev_net_set(struct net_device *dev, struct net *net)
+ {
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 4cf6aaed5f35db..22f6b018cff8de 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -2589,6 +2589,11 @@
+
+ #define PCI_VENDOR_ID_REDHAT 0x1b36
+
++#define PCI_VENDOR_ID_WCHIC 0x1c00
++#define PCI_DEVICE_ID_WCHIC_CH382_0S1P 0x3050
++#define PCI_DEVICE_ID_WCHIC_CH382_2S1P 0x3250
++#define PCI_DEVICE_ID_WCHIC_CH382_2S 0x3253
++
+ #define PCI_VENDOR_ID_SILICOM_DENMARK 0x1c2c
+
+ #define PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS 0x1c36
+@@ -2643,6 +2648,12 @@
+ #define PCI_VENDOR_ID_AKS 0x416c
+ #define PCI_DEVICE_ID_AKS_ALADDINCARD 0x0100
+
++#define PCI_VENDOR_ID_WCHCN 0x4348
++#define PCI_DEVICE_ID_WCHCN_CH353_4S 0x3453
++#define PCI_DEVICE_ID_WCHCN_CH353_2S1PF 0x5046
++#define PCI_DEVICE_ID_WCHCN_CH353_1S1P 0x5053
++#define PCI_DEVICE_ID_WCHCN_CH353_2S1P 0x7053
++
+ #define PCI_VENDOR_ID_ACCESSIO 0x494f
+ #define PCI_DEVICE_ID_ACCESSIO_WDG_CSM 0x22c0
+
+diff --git a/include/linux/sched/task.h b/include/linux/sched/task.h
+index 0f2aeb37bbb047..ca1db4b92c3244 100644
+--- a/include/linux/sched/task.h
++++ b/include/linux/sched/task.h
+@@ -43,6 +43,7 @@ struct kernel_clone_args {
+ void *fn_arg;
+ struct cgroup *cgrp;
+ struct css_set *cset;
++ unsigned int kill_seq;
+ };
+
+ /*
+diff --git a/include/net/dst.h b/include/net/dst.h
+index 0f303cc602520e..08647c99d79c9a 100644
+--- a/include/net/dst.h
++++ b/include/net/dst.h
+@@ -440,6 +440,15 @@ static inline void dst_set_expires(struct dst_entry *dst, int timeout)
+ dst->expires = expires;
+ }
+
++static inline unsigned int dst_dev_overhead(struct dst_entry *dst,
++ struct sk_buff *skb)
++{
++ if (likely(dst))
++ return LL_RESERVED_SPACE(dst->dev);
++
++ return skb->mac_len;
++}
++
+ INDIRECT_CALLABLE_DECLARE(int ip6_output(struct net *, struct sock *,
+ struct sk_buff *));
+ INDIRECT_CALLABLE_DECLARE(int ip_output(struct net *, struct sock *,
+diff --git a/include/net/ip.h b/include/net/ip.h
+index d92d3bc3ec0e25..fe4f8543811433 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -465,9 +465,12 @@ static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
+ bool forwarding)
+ {
+ const struct rtable *rt = dst_rtable(dst);
+- struct net *net = dev_net(dst->dev);
+- unsigned int mtu;
++ unsigned int mtu, res;
++ struct net *net;
++
++ rcu_read_lock();
+
++ net = dev_net_rcu(dst->dev);
+ if (READ_ONCE(net->ipv4.sysctl_ip_fwd_use_pmtu) ||
+ ip_mtu_locked(dst) ||
+ !forwarding) {
+@@ -491,7 +494,11 @@ static inline unsigned int ip_dst_mtu_maybe_forward(const struct dst_entry *dst,
+ out:
+ mtu = min_t(unsigned int, mtu, IP_MAX_MTU);
+
+- return mtu - lwtunnel_headroom(dst->lwtstate, mtu);
++ res = mtu - lwtunnel_headroom(dst->lwtstate, mtu);
++
++ rcu_read_unlock();
++
++ return res;
+ }
+
+ static inline unsigned int ip_skb_dst_mtu(struct sock *sk,
+diff --git a/include/net/l3mdev.h b/include/net/l3mdev.h
+index 031c661aa14df7..bdfa9d414360c7 100644
+--- a/include/net/l3mdev.h
++++ b/include/net/l3mdev.h
+@@ -198,10 +198,12 @@ struct sk_buff *l3mdev_l3_out(struct sock *sk, struct sk_buff *skb, u16 proto)
+ if (netif_is_l3_slave(dev)) {
+ struct net_device *master;
+
++ rcu_read_lock();
+ master = netdev_master_upper_dev_get_rcu(dev);
+ if (master && master->l3mdev_ops->l3mdev_l3_out)
+ skb = master->l3mdev_ops->l3mdev_l3_out(master, sk,
+ skb, proto);
++ rcu_read_unlock();
+ }
+
+ return skb;
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index 9398c8f4995368..da93873df4dbd7 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -387,7 +387,7 @@ static inline struct net *read_pnet(const possible_net_t *pnet)
+ #endif
+ }
+
+-static inline struct net *read_pnet_rcu(possible_net_t *pnet)
++static inline struct net *read_pnet_rcu(const possible_net_t *pnet)
+ {
+ #ifdef CONFIG_NET_NS
+ return rcu_dereference(pnet->net);
+diff --git a/include/net/route.h b/include/net/route.h
+index 1789f1e6640b46..da34b6fa9862dc 100644
+--- a/include/net/route.h
++++ b/include/net/route.h
+@@ -363,10 +363,15 @@ static inline int inet_iif(const struct sk_buff *skb)
+ static inline int ip4_dst_hoplimit(const struct dst_entry *dst)
+ {
+ int hoplimit = dst_metric_raw(dst, RTAX_HOPLIMIT);
+- struct net *net = dev_net(dst->dev);
+
+- if (hoplimit == 0)
++ if (hoplimit == 0) {
++ const struct net *net;
++
++ rcu_read_lock();
++ net = dev_net_rcu(dst->dev);
+ hoplimit = READ_ONCE(net->ipv4.sysctl_ip_default_ttl);
++ rcu_read_unlock();
++ }
+ return hoplimit;
+ }
+
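The dev_net_rcu() conversions above only resolve the device's netns pointer inside an RCU read-side critical section. A userspace analogue of that read pattern, assuming the liburcu library (build with -lurcu); the kernel API differs in detail:

#include <urcu.h>
#include <stdio.h>

struct net { int default_ttl; };

static struct net *global_net; /* RCU-protected pointer */

static int read_default_ttl(void)
{
	struct net *net;
	int ttl;

	rcu_read_lock();
	net = rcu_dereference(global_net); /* valid only inside the section */
	ttl = net ? net->default_ttl : 64;
	rcu_read_unlock();

	return ttl;
}

int main(void)
{
	static struct net n = { .default_ttl = 64 };

	rcu_register_thread();
	rcu_assign_pointer(global_net, &n);
	printf("ttl=%d\n", read_default_ttl());
	rcu_unregister_thread();
	return 0;
}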
+diff --git a/include/ufs/ufshcd.h b/include/ufs/ufshcd.h
+index d5e43a1dcff226..47cba116f87b81 100644
+--- a/include/ufs/ufshcd.h
++++ b/include/ufs/ufshcd.h
+@@ -402,6 +402,9 @@ enum clk_gating_state {
+ * delay_ms
+ * @ungate_work: worker to turn on clocks that will be used in case of
+ * interrupt context
++ * @clk_gating_workq: workqueue for clock gating work.
++ * @lock: serialize access to some struct ufs_clk_gating members. An outer
++ * lock relative to (i.e. taken before) the host lock
+ * @state: the current clocks state
+ * @delay_ms: gating delay in ms
+ * @is_suspended: clk gating is suspended when set to 1 which can be used
+@@ -412,11 +415,14 @@ enum clk_gating_state {
+ * @is_initialized: Indicates whether clock gating is initialized or not
+ * @active_reqs: number of requests that are pending and should be waited for
+ * completion before gating clocks.
+- * @clk_gating_workq: workqueue for clock gating work.
+ */
+ struct ufs_clk_gating {
+ struct delayed_work gate_work;
+ struct work_struct ungate_work;
++ struct workqueue_struct *clk_gating_workq;
++
++ spinlock_t lock;
++
+ enum clk_gating_state state;
+ unsigned long delay_ms;
+ bool is_suspended;
+@@ -425,7 +431,6 @@ struct ufs_clk_gating {
+ bool is_enabled;
+ bool is_initialized;
+ int active_reqs;
+- struct workqueue_struct *clk_gating_workq;
+ };
+
+ /**
+diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
+index eec5eb7de8430e..e1895952066eeb 100644
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -420,6 +420,12 @@ void io_destroy_buffers(struct io_ring_ctx *ctx)
+ }
+ }
+
++static void io_destroy_bl(struct io_ring_ctx *ctx, struct io_buffer_list *bl)
++{
++ xa_erase(&ctx->io_bl_xa, bl->bgid);
++ io_put_bl(ctx, bl);
++}
++
+ int io_remove_buffers_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ {
+ struct io_provide_buf *p = io_kiocb_to_cmd(req, struct io_provide_buf);
+@@ -717,12 +723,13 @@ int io_register_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg)
+ /* if mapped buffer ring OR classic exists, don't allow */
+ if (bl->flags & IOBL_BUF_RING || !list_empty(&bl->buf_list))
+ return -EEXIST;
+- } else {
+- free_bl = bl = kzalloc(sizeof(*bl), GFP_KERNEL);
+- if (!bl)
+- return -ENOMEM;
++ io_destroy_bl(ctx, bl);
+ }
+
++ free_bl = bl = kzalloc(sizeof(*bl), GFP_KERNEL);
++ if (!bl)
++ return -ENOMEM;
++
+ if (!(reg.flags & IOU_PBUF_RING_MMAP))
+ ret = io_pin_pbuf_ring(&reg, bl);
+ else
+diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
+index 874f9e2defd583..b2ce4b56100271 100644
+--- a/io_uring/uring_cmd.c
++++ b/io_uring/uring_cmd.c
+@@ -65,9 +65,6 @@ bool io_uring_try_cancel_uring_cmd(struct io_ring_ctx *ctx,
+ continue;
+
+ if (cmd->flags & IORING_URING_CMD_CANCELABLE) {
+- /* ->sqe isn't available if no async data */
+- if (!req_has_async_data(req))
+- cmd->sqe = NULL;
+ file->f_op->uring_cmd(cmd, IO_URING_F_CANCEL |
+ IO_URING_F_COMPLETE_DEFER);
+ ret = true;
+diff --git a/io_uring/waitid.c b/io_uring/waitid.c
+index 6362ec20abc0cf..2f7b5eeab845e9 100644
+--- a/io_uring/waitid.c
++++ b/io_uring/waitid.c
+@@ -118,7 +118,6 @@ static int io_waitid_finish(struct io_kiocb *req, int ret)
+ static void io_waitid_complete(struct io_kiocb *req, int ret)
+ {
+ struct io_waitid *iw = io_kiocb_to_cmd(req, struct io_waitid);
+- struct io_tw_state ts = {};
+
+ /* anyone completing better be holding a reference */
+ WARN_ON_ONCE(!(atomic_read(&iw->refs) & IO_WAITID_REF_MASK));
+@@ -131,7 +130,6 @@ static void io_waitid_complete(struct io_kiocb *req, int ret)
+ if (ret < 0)
+ req_set_fail(req);
+ io_req_set_res(req, ret, 0);
+- io_req_task_complete(req, &ts);
+ }
+
+ static bool __io_waitid_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req)
+@@ -153,6 +151,7 @@ static bool __io_waitid_cancel(struct io_ring_ctx *ctx, struct io_kiocb *req)
+ list_del_init(&iwa->wo.child_wait.entry);
+ spin_unlock_irq(&iw->head->lock);
+ io_waitid_complete(req, -ECANCELED);
++ io_req_queue_tw_complete(req, -ECANCELED);
+ return true;
+ }
+
+@@ -258,6 +257,7 @@ static void io_waitid_cb(struct io_kiocb *req, struct io_tw_state *ts)
+ }
+
+ io_waitid_complete(req, ret);
++ io_req_task_complete(req, ts);
+ }
+
+ static int io_waitid_wait(struct wait_queue_entry *wait, unsigned mode,
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index e275eaf2de7f8f..216535e055e112 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -4013,7 +4013,7 @@ static void __cgroup_kill(struct cgroup *cgrp)
+ lockdep_assert_held(&cgroup_mutex);
+
+ spin_lock_irq(&css_set_lock);
+- set_bit(CGRP_KILL, &cgrp->flags);
++ cgrp->kill_seq++;
+ spin_unlock_irq(&css_set_lock);
+
+ css_task_iter_start(&cgrp->self, CSS_TASK_ITER_PROCS | CSS_TASK_ITER_THREADED, &it);
+@@ -4029,10 +4029,6 @@ static void __cgroup_kill(struct cgroup *cgrp)
+ send_sig(SIGKILL, task, 0);
+ }
+ css_task_iter_end(&it);
+-
+- spin_lock_irq(&css_set_lock);
+- clear_bit(CGRP_KILL, &cgrp->flags);
+- spin_unlock_irq(&css_set_lock);
+ }
+
+ static void cgroup_kill(struct cgroup *cgrp)
+@@ -6489,6 +6485,10 @@ static int cgroup_css_set_fork(struct kernel_clone_args *kargs)
+ spin_lock_irq(&css_set_lock);
+ cset = task_css_set(current);
+ get_css_set(cset);
++ if (kargs->cgrp)
++ kargs->kill_seq = kargs->cgrp->kill_seq;
++ else
++ kargs->kill_seq = cset->dfl_cgrp->kill_seq;
+ spin_unlock_irq(&css_set_lock);
+
+ if (!(kargs->flags & CLONE_INTO_CGROUP)) {
+@@ -6672,6 +6672,7 @@ void cgroup_post_fork(struct task_struct *child,
+ struct kernel_clone_args *kargs)
+ __releases(&cgroup_threadgroup_rwsem) __releases(&cgroup_mutex)
+ {
++ unsigned int cgrp_kill_seq = 0;
+ unsigned long cgrp_flags = 0;
+ bool kill = false;
+ struct cgroup_subsys *ss;
+@@ -6685,10 +6686,13 @@ void cgroup_post_fork(struct task_struct *child,
+
+ /* init tasks are special, only link regular threads */
+ if (likely(child->pid)) {
+- if (kargs->cgrp)
++ if (kargs->cgrp) {
+ cgrp_flags = kargs->cgrp->flags;
+- else
++ cgrp_kill_seq = kargs->cgrp->kill_seq;
++ } else {
+ cgrp_flags = cset->dfl_cgrp->flags;
++ cgrp_kill_seq = cset->dfl_cgrp->kill_seq;
++ }
+
+ WARN_ON_ONCE(!list_empty(&child->cg_list));
+ cset->nr_tasks++;
+@@ -6723,7 +6727,7 @@ void cgroup_post_fork(struct task_struct *child,
+ * child down right after we finished preparing it for
+ * userspace.
+ */
+- kill = test_bit(CGRP_KILL, &cgrp_flags);
++ kill = kargs->kill_seq != cgrp_kill_seq;
+ }
+
+ spin_unlock_irq(&css_set_lock);
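+
Replacing the CGRP_KILL bit with a kill_seq counter means a fork can no longer miss a kill that both starts and finishes while the child is being set up: the snapshot taken at fork preparation will differ from the current counter. A compact sketch of that comparison, with simplified types:

#include <stdbool.h>
#include <stdio.h>

struct cgroup { unsigned int kill_seq; };
struct clone_args { unsigned int kill_seq; };

/* fork preparation: snapshot the counter (under css_set_lock above) */
static void fork_prepare(struct clone_args *args, const struct cgroup *cgrp)
{
	args->kill_seq = cgrp->kill_seq;
}

/* post fork: any intervening cgroup.kill bumped the counter */
static bool fork_must_kill(const struct clone_args *args,
			   const struct cgroup *cgrp)
{
	return args->kill_seq != cgrp->kill_seq;
}

int main(void)
{
	struct cgroup cg = { .kill_seq = 0 };
	struct clone_args args;

	fork_prepare(&args, &cg);
	cg.kill_seq++; /* a kill raced with the fork */
	printf("kill child: %d\n", fork_must_kill(&args, &cg)); /* 1 */
	return 0;
}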
+diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
+index a06b452724118a..ce295b73c0a366 100644
+--- a/kernel/cgroup/rstat.c
++++ b/kernel/cgroup/rstat.c
+@@ -586,7 +586,6 @@ static void root_cgroup_cputime(struct cgroup_base_stat *bstat)
+
+ cputime->sum_exec_runtime += user;
+ cputime->sum_exec_runtime += sys;
+- cputime->sum_exec_runtime += cpustat[CPUTIME_STEAL];
+
+ #ifdef CONFIG_SCHED_CORE
+ bstat->forceidle_sum += cpustat[CPUTIME_FORCEIDLE];
+diff --git a/kernel/sched/autogroup.c b/kernel/sched/autogroup.c
+index db68a964e34e26..c4a3ccf6a8ace4 100644
+--- a/kernel/sched/autogroup.c
++++ b/kernel/sched/autogroup.c
+@@ -150,7 +150,7 @@ void sched_autogroup_exit_task(struct task_struct *p)
+ * see this thread after that: we can no longer use signal->autogroup.
+ * See the PF_EXITING check in task_wants_autogroup().
+ */
+- sched_move_task(p);
++ sched_move_task(p, true);
+ }
+
+ static void
+@@ -182,7 +182,7 @@ autogroup_move_group(struct task_struct *p, struct autogroup *ag)
+ * sched_autogroup_exit_task().
+ */
+ for_each_thread(p, t)
+- sched_move_task(t);
++ sched_move_task(t, true);
+
+ unlock_task_sighand(p, &flags);
+ autogroup_kref_put(prev);
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 5d67f41d05d40b..c72356836eb628 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -8953,7 +8953,7 @@ static void sched_change_group(struct task_struct *tsk, struct task_group *group
+ * now. This function just updates tsk->se.cfs_rq and tsk->se.parent to reflect
+ * its new group.
+ */
+-void sched_move_task(struct task_struct *tsk)
++void sched_move_task(struct task_struct *tsk, bool for_autogroup)
+ {
+ int queued, running, queue_flags =
+ DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
+@@ -8982,7 +8982,8 @@ void sched_move_task(struct task_struct *tsk)
+ put_prev_task(rq, tsk);
+
+ sched_change_group(tsk, group);
+- scx_move_task(tsk);
++ if (!for_autogroup)
++ scx_cgroup_move_task(tsk);
+
+ if (queued)
+ enqueue_task(rq, tsk, queue_flags);
+@@ -9083,7 +9084,7 @@ static void cpu_cgroup_attach(struct cgroup_taskset *tset)
+ struct cgroup_subsys_state *css;
+
+ cgroup_taskset_for_each(task, css, tset)
+- sched_move_task(task);
++ sched_move_task(task, false);
+
+ scx_cgroup_finish_attach();
+ }
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 4c4681cb9337b4..689f7e8f69f54d 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -2458,6 +2458,9 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
+ {
+ struct rq *src_rq = task_rq(p);
+ struct rq *dst_rq = container_of(dst_dsq, struct rq, scx.local_dsq);
++#ifdef CONFIG_SMP
++ struct rq *locked_rq = rq;
++#endif
+
+ /*
+ * We're synchronized against dequeue through DISPATCHING. As @p can't
+@@ -2494,8 +2497,9 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
+ atomic_long_set_release(&p->scx.ops_state, SCX_OPSS_NONE);
+
+ /* switch to @src_rq lock */
+- if (rq != src_rq) {
+- raw_spin_rq_unlock(rq);
++ if (locked_rq != src_rq) {
++ raw_spin_rq_unlock(locked_rq);
++ locked_rq = src_rq;
+ raw_spin_rq_lock(src_rq);
+ }
+
+@@ -2513,6 +2517,8 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
+ } else {
+ move_remote_task_to_local_dsq(p, enq_flags,
+ src_rq, dst_rq);
++ /* task has been moved to dst_rq, which is now locked */
++ locked_rq = dst_rq;
+ }
+
+ /* if the destination CPU is idle, wake it up */
+@@ -2521,8 +2527,8 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
+ }
+
+ /* switch back to @rq lock */
+- if (rq != dst_rq) {
+- raw_spin_rq_unlock(dst_rq);
++ if (locked_rq != rq) {
++ raw_spin_rq_unlock(locked_rq);
+ raw_spin_rq_lock(rq);
+ }
+ #else /* CONFIG_SMP */
+@@ -3441,7 +3447,7 @@ static void task_tick_scx(struct rq *rq, struct task_struct *curr, int queued)
+ curr->scx.slice = 0;
+ touch_core_sched(rq, curr);
+ } else if (SCX_HAS_OP(tick)) {
+- SCX_CALL_OP(SCX_KF_REST, tick, curr);
++ SCX_CALL_OP_TASK(SCX_KF_REST, tick, curr);
+ }
+
+ if (!curr->scx.slice)
+@@ -3588,7 +3594,7 @@ static void scx_ops_disable_task(struct task_struct *p)
+ WARN_ON_ONCE(scx_get_task_state(p) != SCX_TASK_ENABLED);
+
+ if (SCX_HAS_OP(disable))
+- SCX_CALL_OP(SCX_KF_REST, disable, p);
++ SCX_CALL_OP_TASK(SCX_KF_REST, disable, p);
+ scx_set_task_state(p, SCX_TASK_READY);
+ }
+
+@@ -3617,7 +3623,7 @@ static void scx_ops_exit_task(struct task_struct *p)
+ }
+
+ if (SCX_HAS_OP(exit_task))
+- SCX_CALL_OP(SCX_KF_REST, exit_task, p, &args);
++ SCX_CALL_OP_TASK(SCX_KF_REST, exit_task, p, &args);
+ scx_set_task_state(p, SCX_TASK_NONE);
+ }
+
+@@ -3913,24 +3919,11 @@ int scx_cgroup_can_attach(struct cgroup_taskset *tset)
+ return ops_sanitize_err("cgroup_prep_move", ret);
+ }
+
+-void scx_move_task(struct task_struct *p)
++void scx_cgroup_move_task(struct task_struct *p)
+ {
+ if (!scx_cgroup_enabled)
+ return;
+
+- /*
+- * We're called from sched_move_task() which handles both cgroup and
+- * autogroup moves. Ignore the latter.
+- *
+- * Also ignore exiting tasks, because in the exit path tasks transition
+- * from the autogroup to the root group, so task_group_is_autogroup()
+- * alone isn't able to catch exiting autogroup tasks. This is safe for
+- * cgroup_move(), because cgroup migrations never happen for PF_EXITING
+- * tasks.
+- */
+- if (task_group_is_autogroup(task_group(p)) || (p->flags & PF_EXITING))
+- return;
+-
+ /*
+ * @p must have ops.cgroup_prep_move() called on it and thus
+ * cgrp_moving_from set.
+diff --git a/kernel/sched/ext.h b/kernel/sched/ext.h
+index 4d022d17ac7dd6..1079b56b0f7aea 100644
+--- a/kernel/sched/ext.h
++++ b/kernel/sched/ext.h
+@@ -73,7 +73,7 @@ static inline void scx_update_idle(struct rq *rq, bool idle, bool do_notify) {}
+ int scx_tg_online(struct task_group *tg);
+ void scx_tg_offline(struct task_group *tg);
+ int scx_cgroup_can_attach(struct cgroup_taskset *tset);
+-void scx_move_task(struct task_struct *p);
++void scx_cgroup_move_task(struct task_struct *p);
+ void scx_cgroup_finish_attach(void);
+ void scx_cgroup_cancel_attach(struct cgroup_taskset *tset);
+ void scx_group_set_weight(struct task_group *tg, unsigned long cgrp_weight);
+@@ -82,7 +82,7 @@ void scx_group_set_idle(struct task_group *tg, bool idle);
+ static inline int scx_tg_online(struct task_group *tg) { return 0; }
+ static inline void scx_tg_offline(struct task_group *tg) {}
+ static inline int scx_cgroup_can_attach(struct cgroup_taskset *tset) { return 0; }
+-static inline void scx_move_task(struct task_struct *p) {}
++static inline void scx_cgroup_move_task(struct task_struct *p) {}
+ static inline void scx_cgroup_finish_attach(void) {}
+ static inline void scx_cgroup_cancel_attach(struct cgroup_taskset *tset) {}
+ static inline void scx_group_set_weight(struct task_group *tg, unsigned long cgrp_weight) {}
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 5426969cf478a0..d79de755c1c269 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -572,7 +572,7 @@ extern void sched_online_group(struct task_group *tg,
+ extern void sched_destroy_group(struct task_group *tg);
+ extern void sched_release_group(struct task_group *tg);
+
+-extern void sched_move_task(struct task_struct *tsk);
++extern void sched_move_task(struct task_struct *tsk, bool for_autogroup);
+
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
+diff --git a/kernel/time/clocksource.c b/kernel/time/clocksource.c
+index 8a40a616288b81..58fb7280cabbe6 100644
+--- a/kernel/time/clocksource.c
++++ b/kernel/time/clocksource.c
+@@ -365,16 +365,18 @@ void clocksource_verify_percpu(struct clocksource *cs)
+ cpumask_clear(&cpus_ahead);
+ cpumask_clear(&cpus_behind);
+ cpus_read_lock();
+- preempt_disable();
++ migrate_disable();
+ clocksource_verify_choose_cpus();
+ if (cpumask_empty(&cpus_chosen)) {
+- preempt_enable();
++ migrate_enable();
+ cpus_read_unlock();
+ pr_warn("Not enough CPUs to check clocksource '%s'.\n", cs->name);
+ return;
+ }
+ testcpu = smp_processor_id();
+- pr_warn("Checking clocksource %s synchronization from CPU %d to CPUs %*pbl.\n", cs->name, testcpu, cpumask_pr_args(&cpus_chosen));
++ pr_info("Checking clocksource %s synchronization from CPU %d to CPUs %*pbl.\n",
++ cs->name, testcpu, cpumask_pr_args(&cpus_chosen));
++ preempt_disable();
+ for_each_cpu(cpu, &cpus_chosen) {
+ if (cpu == testcpu)
+ continue;
+@@ -394,6 +396,7 @@ void clocksource_verify_percpu(struct clocksource *cs)
+ cs_nsec_min = cs_nsec;
+ }
+ preempt_enable();
++ migrate_enable();
+ cpus_read_unlock();
+ if (!cpumask_empty(&cpus_ahead))
+ pr_warn(" CPUs %*pbl ahead of CPU %d for clocksource %s.\n",
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 0f8f3ffc6f0904..ea8ad5480e286d 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -1672,7 +1672,8 @@ static void *rb_range_buffer(struct ring_buffer_per_cpu *cpu_buffer, int idx)
+ * must be the same.
+ */
+ static bool rb_meta_valid(struct ring_buffer_meta *meta, int cpu,
+- struct trace_buffer *buffer, int nr_pages)
++ struct trace_buffer *buffer, int nr_pages,
++ unsigned long *subbuf_mask)
+ {
+ int subbuf_size = PAGE_SIZE;
+ struct buffer_data_page *subbuf;
+@@ -1680,6 +1681,9 @@ static bool rb_meta_valid(struct ring_buffer_meta *meta, int cpu,
+ unsigned long buffers_end;
+ int i;
+
++ if (!subbuf_mask)
++ return false;
++
+ /* Check the meta magic and meta struct size */
+ if (meta->magic != RING_BUFFER_META_MAGIC ||
+ meta->struct_size != sizeof(*meta)) {
+@@ -1712,6 +1716,8 @@ static bool rb_meta_valid(struct ring_buffer_meta *meta, int cpu,
+
+ subbuf = rb_subbufs_from_meta(meta);
+
++ bitmap_clear(subbuf_mask, 0, meta->nr_subbufs);
++
+ /* Is the meta buffers and the subbufs themselves have correct data? */
+ for (i = 0; i < meta->nr_subbufs; i++) {
+ if (meta->buffers[i] < 0 ||
+@@ -1725,6 +1731,12 @@ static bool rb_meta_valid(struct ring_buffer_meta *meta, int cpu,
+ return false;
+ }
+
++ if (test_bit(meta->buffers[i], subbuf_mask)) {
++ pr_info("Ring buffer boot meta [%d] array has duplicates\n", cpu);
++ return false;
++ }
++
++ set_bit(meta->buffers[i], subbuf_mask);
+ subbuf = (void *)subbuf + subbuf_size;
+ }
+
+@@ -1838,6 +1850,11 @@ static void rb_meta_validate_events(struct ring_buffer_per_cpu *cpu_buffer)
+ cpu_buffer->cpu);
+ goto invalid;
+ }
++
++ /* If the buffer has content, update pages_touched */
++ if (ret)
++ local_inc(&cpu_buffer->pages_touched);
++
+ entries += ret;
+ entry_bytes += local_read(&head_page->page->commit);
+ local_set(&cpu_buffer->head_page->entries, ret);
+@@ -1889,17 +1906,22 @@ static void rb_meta_init_text_addr(struct ring_buffer_meta *meta)
+ static void rb_range_meta_init(struct trace_buffer *buffer, int nr_pages)
+ {
+ struct ring_buffer_meta *meta;
++ unsigned long *subbuf_mask;
+ unsigned long delta;
+ void *subbuf;
+ int cpu;
+ int i;
+
++ /* Create a mask to test the subbuf array */
++ subbuf_mask = bitmap_alloc(nr_pages + 1, GFP_KERNEL);
++ /* If subbuf_mask fails to allocate, then rb_meta_valid() will return false */
++
+ for (cpu = 0; cpu < nr_cpu_ids; cpu++) {
+ void *next_meta;
+
+ meta = rb_range_meta(buffer, nr_pages, cpu);
+
+- if (rb_meta_valid(meta, cpu, buffer, nr_pages)) {
++ if (rb_meta_valid(meta, cpu, buffer, nr_pages, subbuf_mask)) {
+ /* Make the mappings match the current address */
+ subbuf = rb_subbufs_from_meta(meta);
+ delta = (unsigned long)subbuf - meta->first_buffer;
+@@ -1943,6 +1965,7 @@ static void rb_range_meta_init(struct trace_buffer *buffer, int nr_pages)
+ subbuf += meta->subbuf_size;
+ }
+ }
++ bitmap_free(subbuf_mask);
+ }
+
+ static void *rbm_start(struct seq_file *m, loff_t *pos)
+@@ -7157,6 +7180,7 @@ int ring_buffer_map(struct trace_buffer *buffer, int cpu,
+ kfree(cpu_buffer->subbuf_ids);
+ cpu_buffer->subbuf_ids = NULL;
+ rb_free_meta_page(cpu_buffer);
++ atomic_dec(&cpu_buffer->resize_disabled);
+ }
+
+ unlock:
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index b04990385a6a87..bfc4ac265c2c33 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -8364,6 +8364,10 @@ static int tracing_buffers_mmap(struct file *filp, struct vm_area_struct *vma)
+ struct trace_iterator *iter = &info->iter;
+ int ret = 0;
+
++ /* Currently the boot mapped buffer is not supported for mmap */
++ if (iter->tr->flags & TRACE_ARRAY_FL_BOOT)
++ return -ENODEV;
++
+ ret = get_snapshot_map(iter->tr);
+ if (ret)
+ return ret;
+diff --git a/kernel/workqueue.c b/kernel/workqueue.c
+index cee65cb4310816..a9d64e08dffc7c 100644
+--- a/kernel/workqueue.c
++++ b/kernel/workqueue.c
+@@ -3509,12 +3509,6 @@ static int rescuer_thread(void *__rescuer)
+ }
+ }
+
+- /*
+- * Put the reference grabbed by send_mayday(). @pool won't
+- * go away while we're still attached to it.
+- */
+- put_pwq(pwq);
+-
+ /*
+ * Leave this pool. Notify regular workers; otherwise, we end up
+ * with 0 concurrency and stalling the execution.
+@@ -3525,6 +3519,12 @@ static int rescuer_thread(void *__rescuer)
+
+ worker_detach_from_pool(rescuer);
+
++ /*
++ * Put the reference grabbed by send_mayday(). @pool might
++ * go away any time after the reference is dropped.
++ */
++ put_pwq_unlocked(pwq);
++
+ raw_spin_lock_irq(&wq_mayday_lock);
+ }
+
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index aa6c714892ec9d..9f3b8b682adb29 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -685,6 +685,15 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+
++ if (ax25->ax25_dev) {
++ if (dev == ax25->ax25_dev->dev) {
++ rcu_read_unlock();
++ break;
++ }
++ netdev_put(ax25->ax25_dev->dev, &ax25->dev_tracker);
++ ax25_dev_put(ax25->ax25_dev);
++ }
++
+ ax25->ax25_dev = ax25_dev_ax25dev(dev);
+ if (!ax25->ax25_dev) {
+ rcu_read_unlock();
+@@ -692,6 +701,8 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ break;
+ }
+ ax25_fillin_cb(ax25, ax25->ax25_dev);
++ netdev_hold(dev, &ax25->dev_tracker, GFP_ATOMIC);
++ ax25_dev_hold(ax25->ax25_dev);
+ rcu_read_unlock();
+ break;
+
+diff --git a/net/batman-adv/bat_v.c b/net/batman-adv/bat_v.c
+index ac11f1f08db0f9..d35479c465e2c4 100644
+--- a/net/batman-adv/bat_v.c
++++ b/net/batman-adv/bat_v.c
+@@ -113,8 +113,6 @@ static void
+ batadv_v_hardif_neigh_init(struct batadv_hardif_neigh_node *hardif_neigh)
+ {
+ ewma_throughput_init(&hardif_neigh->bat_v.throughput);
+- INIT_WORK(&hardif_neigh->bat_v.metric_work,
+- batadv_v_elp_throughput_metric_update);
+ }
+
+ /**
+diff --git a/net/batman-adv/bat_v_elp.c b/net/batman-adv/bat_v_elp.c
+index 1d704574e6bf54..b065578b4436ee 100644
+--- a/net/batman-adv/bat_v_elp.c
++++ b/net/batman-adv/bat_v_elp.c
+@@ -18,6 +18,7 @@
+ #include <linux/if_ether.h>
+ #include <linux/jiffies.h>
+ #include <linux/kref.h>
++#include <linux/list.h>
+ #include <linux/minmax.h>
+ #include <linux/netdevice.h>
+ #include <linux/nl80211.h>
+@@ -26,6 +27,7 @@
+ #include <linux/rcupdate.h>
+ #include <linux/rtnetlink.h>
+ #include <linux/skbuff.h>
++#include <linux/slab.h>
+ #include <linux/stddef.h>
+ #include <linux/string.h>
+ #include <linux/types.h>
+@@ -41,6 +43,18 @@
+ #include "routing.h"
+ #include "send.h"
+
++/**
++ * struct batadv_v_metric_queue_entry - list of hardif neighbors which require
++ * a metric update
++ */
++struct batadv_v_metric_queue_entry {
++ /** @hardif_neigh: hardif neighbor scheduled for metric update */
++ struct batadv_hardif_neigh_node *hardif_neigh;
++
++ /** @list: list node for metric_queue */
++ struct list_head list;
++};
++
+ /**
+ * batadv_v_elp_start_timer() - restart timer for ELP periodic work
+ * @hard_iface: the interface for which the timer has to be reset
+@@ -59,25 +73,36 @@ static void batadv_v_elp_start_timer(struct batadv_hard_iface *hard_iface)
+ /**
+ * batadv_v_elp_get_throughput() - get the throughput towards a neighbour
+ * @neigh: the neighbour for which the throughput has to be obtained
++ * @pthroughput: calculated throughput towards the given neighbour in multiples
++ * of 100kbps (a value of '1' equals 0.1Mbps, '10' equals 1Mbps, etc).
+ *
+- * Return: The throughput towards the given neighbour in multiples of 100kpbs
+- * (a value of '1' equals 0.1Mbps, '10' equals 1Mbps, etc).
++ * Return: true when the value behind @pthroughput was set
+ */
+-static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
++static bool batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh,
++ u32 *pthroughput)
+ {
+ struct batadv_hard_iface *hard_iface = neigh->if_incoming;
++ struct net_device *soft_iface = hard_iface->soft_iface;
+ struct ethtool_link_ksettings link_settings;
+ struct net_device *real_netdev;
+ struct station_info sinfo;
+ u32 throughput;
+ int ret;
+
++ /* don't query throughput when no longer associated with any
++ * batman-adv interface
++ */
++ if (!soft_iface)
++ return false;
++
+ /* if the user specified a customised value for this interface, then
+ * return it directly
+ */
+ throughput = atomic_read(&hard_iface->bat_v.throughput_override);
+- if (throughput != 0)
+- return throughput;
++ if (throughput != 0) {
++ *pthroughput = throughput;
++ return true;
++ }
+
+ /* if this is a wireless device, then ask its throughput through
+ * cfg80211 API
+@@ -104,27 +129,39 @@ static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
+ * possible to delete this neighbor. For now set
+ * the throughput metric to 0.
+ */
+- return 0;
++ *pthroughput = 0;
++ return true;
+ }
+ if (ret)
+ goto default_throughput;
+
+- if (sinfo.filled & BIT(NL80211_STA_INFO_EXPECTED_THROUGHPUT))
+- return sinfo.expected_throughput / 100;
++ if (sinfo.filled & BIT(NL80211_STA_INFO_EXPECTED_THROUGHPUT)) {
++ *pthroughput = sinfo.expected_throughput / 100;
++ return true;
++ }
+
+ /* try to estimate the expected throughput based on reported tx
+ * rates
+ */
+- if (sinfo.filled & BIT(NL80211_STA_INFO_TX_BITRATE))
+- return cfg80211_calculate_bitrate(&sinfo.txrate) / 3;
++ if (sinfo.filled & BIT(NL80211_STA_INFO_TX_BITRATE)) {
++ *pthroughput = cfg80211_calculate_bitrate(&sinfo.txrate) / 3;
++ return true;
++ }
+
+ goto default_throughput;
+ }
+
++ /* Only use rtnl_trylock() because the ELP worker is cancelled while
++ * the rtnl_lock is held. cancel_delayed_work_sync() would otherwise
++ * wait forever when the ELP work item has started and is itself
++ * trying to take the rtnl_lock.
++ */
++ if (!rtnl_trylock())
++ return false;
++
+ /* if not a wifi interface, check if this device provides data via
+ * ethtool (e.g. an Ethernet adapter)
+ */
+- rtnl_lock();
+ ret = __ethtool_get_link_ksettings(hard_iface->net_dev, &link_settings);
+ rtnl_unlock();
+ if (ret == 0) {
+@@ -135,13 +172,15 @@ static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
+ hard_iface->bat_v.flags &= ~BATADV_FULL_DUPLEX;
+
+ throughput = link_settings.base.speed;
+- if (throughput && throughput != SPEED_UNKNOWN)
+- return throughput * 10;
++ if (throughput && throughput != SPEED_UNKNOWN) {
++ *pthroughput = throughput * 10;
++ return true;
++ }
+ }
+
+ default_throughput:
+ if (!(hard_iface->bat_v.flags & BATADV_WARNING_DEFAULT)) {
+- batadv_info(hard_iface->soft_iface,
++ batadv_info(soft_iface,
+ "WiFi driver or ethtool info does not provide information about link speeds on interface %s, therefore defaulting to hardcoded throughput values of %u.%1u Mbps. Consider overriding the throughput manually or checking your driver.\n",
+ hard_iface->net_dev->name,
+ BATADV_THROUGHPUT_DEFAULT_VALUE / 10,
+@@ -150,31 +189,26 @@ static u32 batadv_v_elp_get_throughput(struct batadv_hardif_neigh_node *neigh)
+ }
+
+ /* if none of the above cases apply, return the base_throughput */
+- return BATADV_THROUGHPUT_DEFAULT_VALUE;
++ *pthroughput = BATADV_THROUGHPUT_DEFAULT_VALUE;
++ return true;
+ }
+
+ /**
+ * batadv_v_elp_throughput_metric_update() - worker updating the throughput
+ * metric of a single hop neighbour
+- * @work: the work queue item
++ * @neigh: the neighbour to probe
+ */
+-void batadv_v_elp_throughput_metric_update(struct work_struct *work)
++static void
++batadv_v_elp_throughput_metric_update(struct batadv_hardif_neigh_node *neigh)
+ {
+- struct batadv_hardif_neigh_node_bat_v *neigh_bat_v;
+- struct batadv_hardif_neigh_node *neigh;
+-
+- neigh_bat_v = container_of(work, struct batadv_hardif_neigh_node_bat_v,
+- metric_work);
+- neigh = container_of(neigh_bat_v, struct batadv_hardif_neigh_node,
+- bat_v);
++ u32 throughput;
++ bool valid;
+
+- ewma_throughput_add(&neigh->bat_v.throughput,
+- batadv_v_elp_get_throughput(neigh));
++ valid = batadv_v_elp_get_throughput(neigh, &throughput);
++ if (!valid)
++ return;
+
+- /* decrement refcounter to balance increment performed before scheduling
+- * this task
+- */
+- batadv_hardif_neigh_put(neigh);
++ ewma_throughput_add(&neigh->bat_v.throughput, throughput);
+ }
+
+ /**
+@@ -248,14 +282,16 @@ batadv_v_elp_wifi_neigh_probe(struct batadv_hardif_neigh_node *neigh)
+ */
+ static void batadv_v_elp_periodic_work(struct work_struct *work)
+ {
++ struct batadv_v_metric_queue_entry *metric_entry;
++ struct batadv_v_metric_queue_entry *metric_safe;
+ struct batadv_hardif_neigh_node *hardif_neigh;
+ struct batadv_hard_iface *hard_iface;
+ struct batadv_hard_iface_bat_v *bat_v;
+ struct batadv_elp_packet *elp_packet;
++ struct list_head metric_queue;
+ struct batadv_priv *bat_priv;
+ struct sk_buff *skb;
+ u32 elp_interval;
+- bool ret;
+
+ bat_v = container_of(work, struct batadv_hard_iface_bat_v, elp_wq.work);
+ hard_iface = container_of(bat_v, struct batadv_hard_iface, bat_v);
+@@ -291,6 +327,8 @@ static void batadv_v_elp_periodic_work(struct work_struct *work)
+
+ atomic_inc(&hard_iface->bat_v.elp_seqno);
+
++ INIT_LIST_HEAD(&metric_queue);
++
+ /* The throughput metric is updated on each sent packet. This way, if a
+ * node is dead and no longer sends packets, batman-adv is still able to
+ * react timely to its death.
+@@ -315,16 +353,28 @@ static void batadv_v_elp_periodic_work(struct work_struct *work)
+
+ /* Reading the estimated throughput from cfg80211 is a task that
+ * may sleep and that is not allowed in an rcu protected
+- * context. Therefore schedule a task for that.
++ * context. Therefore add it to metric_queue and process it
++ * outside rcu protected context.
+ */
+- ret = queue_work(batadv_event_workqueue,
+- &hardif_neigh->bat_v.metric_work);
+-
+- if (!ret)
++ metric_entry = kzalloc(sizeof(*metric_entry), GFP_ATOMIC);
++ if (!metric_entry) {
+ batadv_hardif_neigh_put(hardif_neigh);
++ continue;
++ }
++
++ metric_entry->hardif_neigh = hardif_neigh;
++ list_add(&metric_entry->list, &metric_queue);
+ }
+ rcu_read_unlock();
+
++ list_for_each_entry_safe(metric_entry, metric_safe, &metric_queue, list) {
++ batadv_v_elp_throughput_metric_update(metric_entry->hardif_neigh);
++
++ batadv_hardif_neigh_put(metric_entry->hardif_neigh);
++ list_del(&metric_entry->list);
++ kfree(metric_entry);
++ }
++
+ restart_timer:
+ batadv_v_elp_start_timer(hard_iface);
+ out:
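+
The rework above walks the neighbour list under RCU but only queues entries, running the sleepable throughput query after rcu_read_unlock(). The shape of that collect-then-process pattern, reduced to plain C with illustrative names:

#include <stdlib.h>

struct neigh { int id; };

struct queue_entry {
	struct neigh *neigh;
	struct queue_entry *next;
};

static void metric_update(struct neigh *n)
{
	(void)n; /* stands in for the sleepable throughput query */
}

/* phase 1 (non-sleeping context): collect entries, don't process them */
static struct queue_entry *collect(struct neigh *neighs, int nr)
{
	struct queue_entry *head = NULL;

	for (int i = 0; i < nr; i++) {
		struct queue_entry *e = malloc(sizeof(*e));

		if (!e)
			continue; /* skip this neighbour, as above */
		e->neigh = &neighs[i];
		e->next = head;
		head = e;
	}
	return head;
}

/* phase 2 (sleeping allowed): process and free the queue */
static void process_queue(struct queue_entry *head)
{
	while (head) {
		struct queue_entry *e = head;

		head = e->next;
		metric_update(e->neigh);
		free(e);
	}
}

int main(void)
{
	struct neigh neighs[3] = { { 1 }, { 2 }, { 3 } };

	process_queue(collect(neighs, 3));
	return 0;
}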
+diff --git a/net/batman-adv/bat_v_elp.h b/net/batman-adv/bat_v_elp.h
+index 9e2740195fa2d4..c9cb0a30710045 100644
+--- a/net/batman-adv/bat_v_elp.h
++++ b/net/batman-adv/bat_v_elp.h
+@@ -10,7 +10,6 @@
+ #include "main.h"
+
+ #include <linux/skbuff.h>
+-#include <linux/workqueue.h>
+
+ int batadv_v_elp_iface_enable(struct batadv_hard_iface *hard_iface);
+ void batadv_v_elp_iface_disable(struct batadv_hard_iface *hard_iface);
+@@ -19,6 +18,5 @@ void batadv_v_elp_iface_activate(struct batadv_hard_iface *primary_iface,
+ void batadv_v_elp_primary_iface_set(struct batadv_hard_iface *primary_iface);
+ int batadv_v_elp_packet_recv(struct sk_buff *skb,
+ struct batadv_hard_iface *if_incoming);
+-void batadv_v_elp_throughput_metric_update(struct work_struct *work);
+
+ #endif /* _NET_BATMAN_ADV_BAT_V_ELP_H_ */
+diff --git a/net/batman-adv/types.h b/net/batman-adv/types.h
+index 04f6398b3a40e8..85a50096f5b24d 100644
+--- a/net/batman-adv/types.h
++++ b/net/batman-adv/types.h
+@@ -596,9 +596,6 @@ struct batadv_hardif_neigh_node_bat_v {
+ * neighbor
+ */
+ unsigned long last_unicast_tx;
+-
+- /** @metric_work: work queue callback item for metric update */
+- struct work_struct metric_work;
+ };
+
+ /**
+diff --git a/net/can/j1939/socket.c b/net/can/j1939/socket.c
+index 305dd72c844c70..17226b2341d03d 100644
+--- a/net/can/j1939/socket.c
++++ b/net/can/j1939/socket.c
+@@ -1132,7 +1132,7 @@ static int j1939_sk_send_loop(struct j1939_priv *priv, struct sock *sk,
+
+ todo_size = size;
+
+- while (todo_size) {
++ do {
+ struct j1939_sk_buff_cb *skcb;
+
+ segment_size = min_t(size_t, J1939_MAX_TP_PACKET_SIZE,
+@@ -1177,7 +1177,7 @@ static int j1939_sk_send_loop(struct j1939_priv *priv, struct sock *sk,
+
+ todo_size -= segment_size;
+ session->total_queued_size += segment_size;
+- }
++ } while (todo_size);
+
+ switch (ret) {
+ case 0: /* OK */
+diff --git a/net/can/j1939/transport.c b/net/can/j1939/transport.c
+index 95f7a7e65a73fa..9b72d118d756dd 100644
+--- a/net/can/j1939/transport.c
++++ b/net/can/j1939/transport.c
+@@ -382,8 +382,9 @@ sk_buff *j1939_session_skb_get_by_offset(struct j1939_session *session,
+ skb_queue_walk(&session->skb_queue, do_skb) {
+ do_skcb = j1939_skb_to_cb(do_skb);
+
+- if (offset_start >= do_skcb->offset &&
+- offset_start < (do_skcb->offset + do_skb->len)) {
++ if ((offset_start >= do_skcb->offset &&
++ offset_start < (do_skcb->offset + do_skb->len)) ||
++ (offset_start == 0 && do_skcb->offset == 0 && do_skb->len == 0)) {
+ skb = do_skb;
+ }
+ }
+diff --git a/net/core/fib_rules.c b/net/core/fib_rules.c
+index 154a2681f55cc6..388ff1d6d86b7a 100644
+--- a/net/core/fib_rules.c
++++ b/net/core/fib_rules.c
+@@ -37,8 +37,8 @@ static const struct fib_kuid_range fib_kuid_range_unset = {
+
+ bool fib_rule_matchall(const struct fib_rule *rule)
+ {
+- if (rule->iifindex || rule->oifindex || rule->mark || rule->tun_id ||
+- rule->flags)
++ if (READ_ONCE(rule->iifindex) || READ_ONCE(rule->oifindex) ||
++ rule->mark || rule->tun_id || rule->flags)
+ return false;
+ if (rule->suppress_ifgroup != -1 || rule->suppress_prefixlen != -1)
+ return false;
+@@ -260,12 +260,14 @@ static int fib_rule_match(struct fib_rule *rule, struct fib_rules_ops *ops,
+ struct flowi *fl, int flags,
+ struct fib_lookup_arg *arg)
+ {
+- int ret = 0;
++ int iifindex, oifindex, ret = 0;
+
+- if (rule->iifindex && (rule->iifindex != fl->flowi_iif))
++ iifindex = READ_ONCE(rule->iifindex);
++ if (iifindex && (iifindex != fl->flowi_iif))
+ goto out;
+
+- if (rule->oifindex && (rule->oifindex != fl->flowi_oif))
++ oifindex = READ_ONCE(rule->oifindex);
++ if (oifindex && (oifindex != fl->flowi_oif))
+ goto out;
+
+ if ((rule->mark ^ fl->flowi_mark) & rule->mark_mask)
+@@ -1038,14 +1040,14 @@ static int fib_nl_fill_rule(struct sk_buff *skb, struct fib_rule *rule,
+ if (rule->iifname[0]) {
+ if (nla_put_string(skb, FRA_IIFNAME, rule->iifname))
+ goto nla_put_failure;
+- if (rule->iifindex == -1)
++ if (READ_ONCE(rule->iifindex) == -1)
+ frh->flags |= FIB_RULE_IIF_DETACHED;
+ }
+
+ if (rule->oifname[0]) {
+ if (nla_put_string(skb, FRA_OIFNAME, rule->oifname))
+ goto nla_put_failure;
+- if (rule->oifindex == -1)
++ if (READ_ONCE(rule->oifindex) == -1)
+ frh->flags |= FIB_RULE_OIF_DETACHED;
+ }
+
+@@ -1217,10 +1219,10 @@ static void attach_rules(struct list_head *rules, struct net_device *dev)
+ list_for_each_entry(rule, rules, list) {
+ if (rule->iifindex == -1 &&
+ strcmp(dev->name, rule->iifname) == 0)
+- rule->iifindex = dev->ifindex;
++ WRITE_ONCE(rule->iifindex, dev->ifindex);
+ if (rule->oifindex == -1 &&
+ strcmp(dev->name, rule->oifname) == 0)
+- rule->oifindex = dev->ifindex;
++ WRITE_ONCE(rule->oifindex, dev->ifindex);
+ }
+ }
+
+@@ -1230,9 +1232,9 @@ static void detach_rules(struct list_head *rules, struct net_device *dev)
+
+ list_for_each_entry(rule, rules, list) {
+ if (rule->iifindex == dev->ifindex)
+- rule->iifindex = -1;
++ WRITE_ONCE(rule->iifindex, -1);
+ if (rule->oifindex == dev->ifindex)
+- rule->oifindex = -1;
++ WRITE_ONCE(rule->oifindex, -1);
+ }
+ }
+
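The READ_ONCE()/WRITE_ONCE() annotations above let rule->iifindex and rule->oifindex be read by lockless rule matching while attach/detach updates them. A rough C11 analogue using relaxed atomics (the kernel macros are volatile accesses, not C11 atomics):

#include <stdatomic.h>
#include <stdio.h>

static _Atomic int iifindex; /* stands in for rule->iifindex */

/* writer side (attach_rules()/detach_rules()) */
static void set_ifindex(int idx)
{
	atomic_store_explicit(&iifindex, idx, memory_order_relaxed);
}

/* reader side (fib_rule_match()): load once, then test the snapshot */
static int match_iif(int flowi_iif)
{
	int idx = atomic_load_explicit(&iifindex, memory_order_relaxed);

	return !idx || idx == flowi_iif;
}

int main(void)
{
	set_ifindex(3);
	printf("%d %d\n", match_iif(3), match_iif(4)); /* prints "1 0" */
	return 0;
}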
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 0e638a37aa0961..5db41bf2ed93e0 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -1108,10 +1108,12 @@ bool __skb_flow_dissect(const struct net *net,
+ FLOW_DISSECTOR_KEY_BASIC,
+ target_container);
+
++ rcu_read_lock();
++
+ if (skb) {
+ if (!net) {
+ if (skb->dev)
+- net = dev_net(skb->dev);
++ net = dev_net_rcu(skb->dev);
+ else if (skb->sk)
+ net = sock_net(skb->sk);
+ }
+@@ -1122,7 +1124,6 @@ bool __skb_flow_dissect(const struct net *net,
+ enum netns_bpf_attach_type type = NETNS_BPF_FLOW_DISSECTOR;
+ struct bpf_prog_array *run_array;
+
+- rcu_read_lock();
+ run_array = rcu_dereference(init_net.bpf.run_array[type]);
+ if (!run_array)
+ run_array = rcu_dereference(net->bpf.run_array[type]);
+@@ -1150,17 +1151,17 @@ bool __skb_flow_dissect(const struct net *net,
+ prog = READ_ONCE(run_array->items[0].prog);
+ result = bpf_flow_dissect(prog, &ctx, n_proto, nhoff,
+ hlen, flags);
+- if (result == BPF_FLOW_DISSECTOR_CONTINUE)
+- goto dissect_continue;
+- __skb_flow_bpf_to_target(&flow_keys, flow_dissector,
+- target_container);
+- rcu_read_unlock();
+- return result == BPF_OK;
++ if (result != BPF_FLOW_DISSECTOR_CONTINUE) {
++ __skb_flow_bpf_to_target(&flow_keys, flow_dissector,
++ target_container);
++ rcu_read_unlock();
++ return result == BPF_OK;
++ }
+ }
+-dissect_continue:
+- rcu_read_unlock();
+ }
+
++ rcu_read_unlock();
++
+ if (dissector_uses_key(flow_dissector,
+ FLOW_DISSECTOR_KEY_ETH_ADDRS)) {
+ struct ethhdr *eth = eth_hdr(skb);
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index cc58315a40a79c..c7f7ea61b524a2 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -3513,10 +3513,12 @@ static const struct seq_operations neigh_stat_seq_ops = {
+ static void __neigh_notify(struct neighbour *n, int type, int flags,
+ u32 pid)
+ {
+- struct net *net = dev_net(n->dev);
+ struct sk_buff *skb;
+ int err = -ENOBUFS;
++ struct net *net;
+
++ rcu_read_lock();
++ net = dev_net_rcu(n->dev);
+ skb = nlmsg_new(neigh_nlmsg_size(), GFP_ATOMIC);
+ if (skb == NULL)
+ goto errout;
+@@ -3529,9 +3531,11 @@ static void __neigh_notify(struct neighbour *n, int type, int flags,
+ goto errout;
+ }
+ rtnl_notify(skb, net, 0, RTNLGRP_NEIGH, NULL, GFP_ATOMIC);
+- return;
++ goto out;
+ errout:
+ rtnl_set_sk_err(net, RTNLGRP_NEIGH, err);
++out:
++ rcu_read_unlock();
+ }
+
+ void neigh_app_ns(struct neighbour *n)
+diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
+index 11c1519b36993d..59ffaa89d7b05f 100644
+--- a/net/ipv4/arp.c
++++ b/net/ipv4/arp.c
+@@ -659,10 +659,12 @@ static int arp_xmit_finish(struct net *net, struct sock *sk, struct sk_buff *skb
+ */
+ void arp_xmit(struct sk_buff *skb)
+ {
++ rcu_read_lock();
+ /* Send it off, maybe filter it using firewalling first. */
+ NF_HOOK(NFPROTO_ARP, NF_ARP_OUT,
+- dev_net(skb->dev), NULL, skb, NULL, skb->dev,
++ dev_net_rcu(skb->dev), NULL, skb, NULL, skb->dev,
+ arp_xmit_finish);
++ rcu_read_unlock();
+ }
+ EXPORT_SYMBOL(arp_xmit);
+
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index 7cf5f7d0d0de23..a55e95046984da 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -1351,10 +1351,11 @@ __be32 inet_select_addr(const struct net_device *dev, __be32 dst, int scope)
+ __be32 addr = 0;
+ unsigned char localnet_scope = RT_SCOPE_HOST;
+ struct in_device *in_dev;
+- struct net *net = dev_net(dev);
++ struct net *net;
+ int master_idx;
+
+ rcu_read_lock();
++ net = dev_net_rcu(dev);
+ in_dev = __in_dev_get_rcu(dev);
+ if (!in_dev)
+ goto no_in_dev;
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index 932bd775fc2682..f45bc187a92a7e 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -399,10 +399,10 @@ static void icmp_push_reply(struct sock *sk,
+
+ static void icmp_reply(struct icmp_bxm *icmp_param, struct sk_buff *skb)
+ {
+- struct ipcm_cookie ipc;
+ struct rtable *rt = skb_rtable(skb);
+- struct net *net = dev_net(rt->dst.dev);
++ struct net *net = dev_net_rcu(rt->dst.dev);
+ bool apply_ratelimit = false;
++ struct ipcm_cookie ipc;
+ struct flowi4 fl4;
+ struct sock *sk;
+ struct inet_sock *inet;
+@@ -610,12 +610,14 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ struct sock *sk;
+
+ if (!rt)
+- goto out;
++ return;
++
++ rcu_read_lock();
+
+ if (rt->dst.dev)
+- net = dev_net(rt->dst.dev);
++ net = dev_net_rcu(rt->dst.dev);
+ else if (skb_in->dev)
+- net = dev_net(skb_in->dev);
++ net = dev_net_rcu(skb_in->dev);
+ else
+ goto out;
+
+@@ -786,7 +788,8 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ icmp_xmit_unlock(sk);
+ out_bh_enable:
+ local_bh_enable();
+-out:;
++out:
++ rcu_read_unlock();
+ }
+ EXPORT_SYMBOL(__icmp_send);
+
+@@ -835,7 +838,7 @@ static void icmp_socket_deliver(struct sk_buff *skb, u32 info)
+ * avoid additional coding at protocol handlers.
+ */
+ if (!pskb_may_pull(skb, iph->ihl * 4 + 8)) {
+- __ICMP_INC_STATS(dev_net(skb->dev), ICMP_MIB_INERRORS);
++ __ICMP_INC_STATS(dev_net_rcu(skb->dev), ICMP_MIB_INERRORS);
+ return;
+ }
+
+@@ -869,7 +872,7 @@ static enum skb_drop_reason icmp_unreach(struct sk_buff *skb)
+ struct net *net;
+ u32 info = 0;
+
+- net = dev_net(skb_dst(skb)->dev);
++ net = dev_net_rcu(skb_dst(skb)->dev);
+
+ /*
+ * Incomplete header ?
+@@ -980,7 +983,7 @@ static enum skb_drop_reason icmp_unreach(struct sk_buff *skb)
+ static enum skb_drop_reason icmp_redirect(struct sk_buff *skb)
+ {
+ if (skb->len < sizeof(struct iphdr)) {
+- __ICMP_INC_STATS(dev_net(skb->dev), ICMP_MIB_INERRORS);
++ __ICMP_INC_STATS(dev_net_rcu(skb->dev), ICMP_MIB_INERRORS);
+ return SKB_DROP_REASON_PKT_TOO_SMALL;
+ }
+
+@@ -1012,7 +1015,7 @@ static enum skb_drop_reason icmp_echo(struct sk_buff *skb)
+ struct icmp_bxm icmp_param;
+ struct net *net;
+
+- net = dev_net(skb_dst(skb)->dev);
++ net = dev_net_rcu(skb_dst(skb)->dev);
+ /* should there be an ICMP stat for ignored echos? */
+ if (READ_ONCE(net->ipv4.sysctl_icmp_echo_ignore_all))
+ return SKB_NOT_DROPPED_YET;
+@@ -1041,9 +1044,9 @@ static enum skb_drop_reason icmp_echo(struct sk_buff *skb)
+
+ bool icmp_build_probe(struct sk_buff *skb, struct icmphdr *icmphdr)
+ {
++ struct net *net = dev_net_rcu(skb->dev);
+ struct icmp_ext_hdr *ext_hdr, _ext_hdr;
+ struct icmp_ext_echo_iio *iio, _iio;
+- struct net *net = dev_net(skb->dev);
+ struct inet6_dev *in6_dev;
+ struct in_device *in_dev;
+ struct net_device *dev;
+@@ -1182,7 +1185,7 @@ static enum skb_drop_reason icmp_timestamp(struct sk_buff *skb)
+ return SKB_NOT_DROPPED_YET;
+
+ out_err:
+- __ICMP_INC_STATS(dev_net(skb_dst(skb)->dev), ICMP_MIB_INERRORS);
++ __ICMP_INC_STATS(dev_net_rcu(skb_dst(skb)->dev), ICMP_MIB_INERRORS);
+ return SKB_DROP_REASON_PKT_TOO_SMALL;
+ }
+
+@@ -1199,7 +1202,7 @@ int icmp_rcv(struct sk_buff *skb)
+ {
+ enum skb_drop_reason reason = SKB_DROP_REASON_NOT_SPECIFIED;
+ struct rtable *rt = skb_rtable(skb);
+- struct net *net = dev_net(rt->dst.dev);
++ struct net *net = dev_net_rcu(rt->dst.dev);
+ struct icmphdr *icmph;
+
+ if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb)) {
+@@ -1372,9 +1375,9 @@ int icmp_err(struct sk_buff *skb, u32 info)
+ struct iphdr *iph = (struct iphdr *)skb->data;
+ int offset = iph->ihl<<2;
+ struct icmphdr *icmph = (struct icmphdr *)(skb->data + offset);
++ struct net *net = dev_net_rcu(skb->dev);
+ int type = icmp_hdr(skb)->type;
+ int code = icmp_hdr(skb)->code;
+- struct net *net = dev_net(skb->dev);
+
+ /*
+ * Use ping_err to handle all icmp errors except those
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index 2a27913588d05a..41b320f0c20ebf 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -390,7 +390,13 @@ static inline int ip_rt_proc_init(void)
+
+ static inline bool rt_is_expired(const struct rtable *rth)
+ {
+- return rth->rt_genid != rt_genid_ipv4(dev_net(rth->dst.dev));
++ bool res;
++
++ rcu_read_lock();
++ res = rth->rt_genid != rt_genid_ipv4(dev_net_rcu(rth->dst.dev));
++ rcu_read_unlock();
++
++ return res;
+ }
+
+ void rt_cache_flush(struct net *net)
+@@ -1002,9 +1008,9 @@ out: kfree_skb_reason(skb, reason);
+ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+ {
+ struct dst_entry *dst = &rt->dst;
+- struct net *net = dev_net(dst->dev);
+ struct fib_result res;
+ bool lock = false;
++ struct net *net;
+ u32 old_mtu;
+
+ if (ip_mtu_locked(dst))
+@@ -1014,6 +1020,8 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+ if (old_mtu < mtu)
+ return;
+
++ rcu_read_lock();
++ net = dev_net_rcu(dst->dev);
+ if (mtu < net->ipv4.ip_rt_min_pmtu) {
+ lock = true;
+ mtu = min(old_mtu, net->ipv4.ip_rt_min_pmtu);
+@@ -1021,17 +1029,29 @@ static void __ip_rt_update_pmtu(struct rtable *rt, struct flowi4 *fl4, u32 mtu)
+
+ if (rt->rt_pmtu == mtu && !lock &&
+ time_before(jiffies, dst->expires - net->ipv4.ip_rt_mtu_expires / 2))
+- return;
++ goto out;
+
+- rcu_read_lock();
+ if (fib_lookup(net, fl4, &res, 0) == 0) {
+ struct fib_nh_common *nhc;
+
+ fib_select_path(net, &res, fl4, NULL);
++#ifdef CONFIG_IP_ROUTE_MULTIPATH
++ if (fib_info_num_path(res.fi) > 1) {
++ int nhsel;
++
++ for (nhsel = 0; nhsel < fib_info_num_path(res.fi); nhsel++) {
++ nhc = fib_info_nhc(res.fi, nhsel);
++ update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
++ jiffies + net->ipv4.ip_rt_mtu_expires);
++ }
++ goto out;
++ }
++#endif /* CONFIG_IP_ROUTE_MULTIPATH */
+ nhc = FIB_RES_NHC(res);
+ update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
+ jiffies + net->ipv4.ip_rt_mtu_expires);
+ }
++out:
+ rcu_read_unlock();
+ }
+
+@@ -1294,10 +1314,15 @@ static void set_class_tag(struct rtable *rt, u32 tag)
+
+ static unsigned int ipv4_default_advmss(const struct dst_entry *dst)
+ {
+- struct net *net = dev_net(dst->dev);
+ unsigned int header_size = sizeof(struct tcphdr) + sizeof(struct iphdr);
+- unsigned int advmss = max_t(unsigned int, ipv4_mtu(dst) - header_size,
+- net->ipv4.ip_rt_min_advmss);
++ unsigned int advmss;
++ struct net *net;
++
++ rcu_read_lock();
++ net = dev_net_rcu(dst->dev);
++ advmss = max_t(unsigned int, ipv4_mtu(dst) - header_size,
++ net->ipv4.ip_rt_min_advmss);
++ rcu_read_unlock();
+
+ return min(advmss, IPV4_MAX_PMTU - header_size);
+ }
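One point worth spelling out for the __ip_rt_update_pmtu() hunk above: fib_select_path() hashes the flow onto a single nexthop, so the old code refreshed a PMTU exception (fnhe) only for whichever path the ICMP reply happened to select. With CONFIG_IP_ROUTE_MULTIPATH the fix walks every path instead — condensed here, using only helpers that appear in the hunk:

	/* one fnhe per nexthop, so traffic hashed onto either path sees
	 * the lowered MTU (exercised by pmtu_ipv4_mp_exceptions below) */
	for (nhsel = 0; nhsel < fib_info_num_path(res.fi); nhsel++) {
		nhc = fib_info_nhc(res.fi, nhsel);
		update_or_create_fnhe(nhc, fl4->daddr, 0, mtu, lock,
				      jiffies + net->ipv4.ip_rt_mtu_expires);
	}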
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index a6984a29fdb9dd..4d14ab7f7e99f1 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -76,7 +76,7 @@ static int icmpv6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ {
+ /* icmpv6_notify checks 8 bytes can be pulled, icmp6hdr is 8 bytes */
+ struct icmp6hdr *icmp6 = (struct icmp6hdr *) (skb->data + offset);
+- struct net *net = dev_net(skb->dev);
++ struct net *net = dev_net_rcu(skb->dev);
+
+ if (type == ICMPV6_PKT_TOOBIG)
+ ip6_update_pmtu(skb, net, info, skb->dev->ifindex, 0, sock_net_uid(net, NULL));
+@@ -473,7 +473,10 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+
+ if (!skb->dev)
+ return;
+- net = dev_net(skb->dev);
++
++ rcu_read_lock();
++
++ net = dev_net_rcu(skb->dev);
+ mark = IP6_REPLY_MARK(net, skb->mark);
+ /*
+ * Make sure we respect the rules
+@@ -496,7 +499,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ !(type == ICMPV6_PARAMPROB &&
+ code == ICMPV6_UNK_OPTION &&
+ (opt_unrec(skb, info))))
+- return;
++ goto out;
+
+ saddr = NULL;
+ }
+@@ -526,7 +529,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ if ((addr_type == IPV6_ADDR_ANY) || (addr_type & IPV6_ADDR_MULTICAST)) {
+ net_dbg_ratelimited("icmp6_send: addr_any/mcast source [%pI6c > %pI6c]\n",
+ &hdr->saddr, &hdr->daddr);
+- return;
++ goto out;
+ }
+
+ /*
+@@ -535,7 +538,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ if (is_ineligible(skb)) {
+ net_dbg_ratelimited("icmp6_send: no reply to icmp error [%pI6c > %pI6c]\n",
+ &hdr->saddr, &hdr->daddr);
+- return;
++ goto out;
+ }
+
+ /* Needed by both icmpv6_global_allow and icmpv6_xmit_lock */
+@@ -582,7 +585,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ np = inet6_sk(sk);
+
+ if (!icmpv6_xrlim_allow(sk, type, &fl6, apply_ratelimit))
+- goto out;
++ goto out_unlock;
+
+ tmp_hdr.icmp6_type = type;
+ tmp_hdr.icmp6_code = code;
+@@ -600,7 +603,7 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+
+ dst = icmpv6_route_lookup(net, skb, sk, &fl6);
+ if (IS_ERR(dst))
+- goto out;
++ goto out_unlock;
+
+ ipc6.hlimit = ip6_sk_dst_hoplimit(np, &fl6, dst);
+
+@@ -616,7 +619,6 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ goto out_dst_release;
+ }
+
+- rcu_read_lock();
+ idev = __in6_dev_get(skb->dev);
+
+ if (ip6_append_data(sk, icmpv6_getfrag, &msg,
+@@ -630,13 +632,15 @@ void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ icmpv6_push_pending_frames(sk, &fl6, &tmp_hdr,
+ len + sizeof(struct icmp6hdr));
+ }
+- rcu_read_unlock();
++
+ out_dst_release:
+ dst_release(dst);
+-out:
++out_unlock:
+ icmpv6_xmit_unlock(sk);
+ out_bh_enable:
+ local_bh_enable();
++out:
++ rcu_read_unlock();
+ }
+ EXPORT_SYMBOL(icmp6_send);
+
+@@ -679,8 +683,8 @@ int ip6_err_gen_icmpv6_unreach(struct sk_buff *skb, int nhs, int type,
+ skb_pull(skb2, nhs);
+ skb_reset_network_header(skb2);
+
+- rt = rt6_lookup(dev_net(skb->dev), &ipv6_hdr(skb2)->saddr, NULL, 0,
+- skb, 0);
++ rt = rt6_lookup(dev_net_rcu(skb->dev), &ipv6_hdr(skb2)->saddr,
++ NULL, 0, skb, 0);
+
+ if (rt && rt->dst.dev)
+ skb2->dev = rt->dst.dev;
+@@ -717,7 +721,7 @@ EXPORT_SYMBOL(ip6_err_gen_icmpv6_unreach);
+
+ static enum skb_drop_reason icmpv6_echo_reply(struct sk_buff *skb)
+ {
+- struct net *net = dev_net(skb->dev);
++ struct net *net = dev_net_rcu(skb->dev);
+ struct sock *sk;
+ struct inet6_dev *idev;
+ struct ipv6_pinfo *np;
+@@ -832,7 +836,7 @@ enum skb_drop_reason icmpv6_notify(struct sk_buff *skb, u8 type,
+ u8 code, __be32 info)
+ {
+ struct inet6_skb_parm *opt = IP6CB(skb);
+- struct net *net = dev_net(skb->dev);
++ struct net *net = dev_net_rcu(skb->dev);
+ const struct inet6_protocol *ipprot;
+ enum skb_drop_reason reason;
+ int inner_offset;
+@@ -889,7 +893,7 @@ enum skb_drop_reason icmpv6_notify(struct sk_buff *skb, u8 type,
+ static int icmpv6_rcv(struct sk_buff *skb)
+ {
+ enum skb_drop_reason reason = SKB_DROP_REASON_NOT_SPECIFIED;
+- struct net *net = dev_net(skb->dev);
++ struct net *net = dev_net_rcu(skb->dev);
+ struct net_device *dev = icmp6_dev(skb);
+ struct inet6_dev *idev = __in6_dev_get(dev);
+ const struct in6_addr *saddr, *daddr;
+@@ -921,7 +925,7 @@ static int icmpv6_rcv(struct sk_buff *skb)
+ skb_set_network_header(skb, nh);
+ }
+
+- __ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_INMSGS);
++ __ICMP6_INC_STATS(dev_net_rcu(dev), idev, ICMP6_MIB_INMSGS);
+
+ saddr = &ipv6_hdr(skb)->saddr;
+ daddr = &ipv6_hdr(skb)->daddr;
+@@ -939,7 +943,7 @@ static int icmpv6_rcv(struct sk_buff *skb)
+
+ type = hdr->icmp6_type;
+
+- ICMP6MSGIN_INC_STATS(dev_net(dev), idev, type);
++ ICMP6MSGIN_INC_STATS(dev_net_rcu(dev), idev, type);
+
+ switch (type) {
+ case ICMPV6_ECHO_REQUEST:
+@@ -1034,9 +1038,9 @@ static int icmpv6_rcv(struct sk_buff *skb)
+
+ csum_error:
+ reason = SKB_DROP_REASON_ICMP_CSUM;
+- __ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_CSUMERRORS);
++ __ICMP6_INC_STATS(dev_net_rcu(dev), idev, ICMP6_MIB_CSUMERRORS);
+ discard_it:
+- __ICMP6_INC_STATS(dev_net(dev), idev, ICMP6_MIB_INERRORS);
++ __ICMP6_INC_STATS(dev_net_rcu(dev), idev, ICMP6_MIB_INERRORS);
+ drop_no_count:
+ kfree_skb_reason(skb, reason);
+ return 0;
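A reading aid for the icmp6_send() hunks above: the function now takes rcu_read_lock() right after the skb->dev check, so every early return inside the RCU section becomes a goto and the error labels unwind in reverse acquisition order. Condensed from the hunks themselves:

	out_dst_release:
		dst_release(dst);
	out_unlock:
		icmpv6_xmit_unlock(sk);
	out_bh_enable:
		local_bh_enable();
	out:
		rcu_read_unlock();	/* pairs with the lock taken at entry */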
+diff --git a/net/ipv6/ioam6_iptunnel.c b/net/ipv6/ioam6_iptunnel.c
+index beb6b4cfc551cf..4215cebe7d85a9 100644
+--- a/net/ipv6/ioam6_iptunnel.c
++++ b/net/ipv6/ioam6_iptunnel.c
+@@ -255,14 +255,15 @@ static int ioam6_do_fill(struct net *net, struct sk_buff *skb)
+ }
+
+ static int ioam6_do_inline(struct net *net, struct sk_buff *skb,
+- struct ioam6_lwt_encap *tuninfo)
++ struct ioam6_lwt_encap *tuninfo,
++ struct dst_entry *cache_dst)
+ {
+ struct ipv6hdr *oldhdr, *hdr;
+ int hdrlen, err;
+
+ hdrlen = (tuninfo->eh.hdrlen + 1) << 3;
+
+- err = skb_cow_head(skb, hdrlen + skb->mac_len);
++ err = skb_cow_head(skb, hdrlen + dst_dev_overhead(cache_dst, skb));
+ if (unlikely(err))
+ return err;
+
+@@ -293,7 +294,8 @@ static int ioam6_do_encap(struct net *net, struct sk_buff *skb,
+ struct ioam6_lwt_encap *tuninfo,
+ bool has_tunsrc,
+ struct in6_addr *tunsrc,
+- struct in6_addr *tundst)
++ struct in6_addr *tundst,
++ struct dst_entry *cache_dst)
+ {
+ struct dst_entry *dst = skb_dst(skb);
+ struct ipv6hdr *hdr, *inner_hdr;
+@@ -302,7 +304,7 @@ static int ioam6_do_encap(struct net *net, struct sk_buff *skb,
+ hdrlen = (tuninfo->eh.hdrlen + 1) << 3;
+ len = sizeof(*hdr) + hdrlen;
+
+- err = skb_cow_head(skb, len + skb->mac_len);
++ err = skb_cow_head(skb, len + dst_dev_overhead(cache_dst, skb));
+ if (unlikely(err))
+ return err;
+
+@@ -336,7 +338,7 @@ static int ioam6_do_encap(struct net *net, struct sk_buff *skb,
+
+ static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+- struct dst_entry *dst = skb_dst(skb);
++ struct dst_entry *dst = skb_dst(skb), *cache_dst = NULL;
+ struct in6_addr orig_daddr;
+ struct ioam6_lwt *ilwt;
+ int err = -EINVAL;
+@@ -354,6 +356,10 @@ static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+
+ orig_daddr = ipv6_hdr(skb)->daddr;
+
++ local_bh_disable();
++ cache_dst = dst_cache_get(&ilwt->cache);
++ local_bh_enable();
++
+ switch (ilwt->mode) {
+ case IOAM6_IPTUNNEL_MODE_INLINE:
+ do_inline:
+@@ -361,7 +367,7 @@ static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ if (ipv6_hdr(skb)->nexthdr == NEXTHDR_HOP)
+ goto out;
+
+- err = ioam6_do_inline(net, skb, &ilwt->tuninfo);
++ err = ioam6_do_inline(net, skb, &ilwt->tuninfo, cache_dst);
+ if (unlikely(err))
+ goto drop;
+
+@@ -371,7 +377,7 @@ static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ /* Encapsulation (ip6ip6) */
+ err = ioam6_do_encap(net, skb, &ilwt->tuninfo,
+ ilwt->has_tunsrc, &ilwt->tunsrc,
+- &ilwt->tundst);
++ &ilwt->tundst, cache_dst);
+ if (unlikely(err))
+ goto drop;
+
+@@ -389,46 +395,45 @@ static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ goto drop;
+ }
+
+- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+- if (unlikely(err))
+- goto drop;
++ if (unlikely(!cache_dst)) {
++ struct ipv6hdr *hdr = ipv6_hdr(skb);
++ struct flowi6 fl6;
+
+- if (!ipv6_addr_equal(&orig_daddr, &ipv6_hdr(skb)->daddr)) {
+- local_bh_disable();
+- dst = dst_cache_get(&ilwt->cache);
+- local_bh_enable();
+-
+- if (unlikely(!dst)) {
+- struct ipv6hdr *hdr = ipv6_hdr(skb);
+- struct flowi6 fl6;
+-
+- memset(&fl6, 0, sizeof(fl6));
+- fl6.daddr = hdr->daddr;
+- fl6.saddr = hdr->saddr;
+- fl6.flowlabel = ip6_flowinfo(hdr);
+- fl6.flowi6_mark = skb->mark;
+- fl6.flowi6_proto = hdr->nexthdr;
+-
+- dst = ip6_route_output(net, NULL, &fl6);
+- if (dst->error) {
+- err = dst->error;
+- dst_release(dst);
+- goto drop;
+- }
++ memset(&fl6, 0, sizeof(fl6));
++ fl6.daddr = hdr->daddr;
++ fl6.saddr = hdr->saddr;
++ fl6.flowlabel = ip6_flowinfo(hdr);
++ fl6.flowi6_mark = skb->mark;
++ fl6.flowi6_proto = hdr->nexthdr;
++
++ cache_dst = ip6_route_output(net, NULL, &fl6);
++ if (cache_dst->error) {
++ err = cache_dst->error;
++ goto drop;
++ }
+
++ /* cache only if we don't create a dst reference loop */
++ if (dst->lwtstate != cache_dst->lwtstate) {
+ local_bh_disable();
+- dst_cache_set_ip6(&ilwt->cache, dst, &fl6.saddr);
++ dst_cache_set_ip6(&ilwt->cache, cache_dst, &fl6.saddr);
+ local_bh_enable();
+ }
+
+- skb_dst_drop(skb);
+- skb_dst_set(skb, dst);
++ err = skb_cow_head(skb, LL_RESERVED_SPACE(cache_dst->dev));
++ if (unlikely(err))
++ goto drop;
++ }
+
++ if (!ipv6_addr_equal(&orig_daddr, &ipv6_hdr(skb)->daddr)) {
++ skb_dst_drop(skb);
++ skb_dst_set(skb, cache_dst);
+ return dst_output(net, sk, skb);
+ }
+ out:
++ dst_release(cache_dst);
+ return dst->lwtstate->orig_output(net, sk, skb);
+ drop:
++ dst_release(cache_dst);
+ kfree_skb(skb);
+ return err;
+ }
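The ioam6 hunks above (and the rpl/seg6 ones below) size headroom via dst_dev_overhead(cache_dst, skb), whose definition is not part of this patch. From the same upstream series it is approximately the following — reproduced from memory, so treat it as a sketch rather than the authoritative include/net/dst.h text:

	/* use the cached route's device headroom when one exists, else
	 * fall back to the MAC length the skb already carries */
	static inline unsigned int dst_dev_overhead(struct dst_entry *dst,
						    struct sk_buff *skb)
	{
		if (likely(dst))
			return LL_RESERVED_SPACE(dst->dev);

		return skb->mac_len;
	}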
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index b244dbf61d5f39..b7b62e5a562e5d 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -1730,21 +1730,19 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
+ struct net_device *dev = idev->dev;
+ int hlen = LL_RESERVED_SPACE(dev);
+ int tlen = dev->needed_tailroom;
+- struct net *net = dev_net(dev);
+ const struct in6_addr *saddr;
+ struct in6_addr addr_buf;
+ struct mld2_report *pmr;
+ struct sk_buff *skb;
+ unsigned int size;
+ struct sock *sk;
+- int err;
++ struct net *net;
+
+- sk = net->ipv6.igmp_sk;
+ /* we assume size > sizeof(ra) here
+ * Also try to not allocate high-order pages for big MTU
+ */
+ size = min_t(int, mtu, PAGE_SIZE / 2) + hlen + tlen;
+- skb = sock_alloc_send_skb(sk, size, 1, &err);
++ skb = alloc_skb(size, GFP_KERNEL);
+ if (!skb)
+ return NULL;
+
+@@ -1752,6 +1750,12 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
+ skb_reserve(skb, hlen);
+ skb_tailroom_reserve(skb, mtu, tlen);
+
++ rcu_read_lock();
++
++ net = dev_net_rcu(dev);
++ sk = net->ipv6.igmp_sk;
++ skb_set_owner_w(skb, sk);
++
+ if (ipv6_get_lladdr(dev, &addr_buf, IFA_F_TENTATIVE)) {
+ /* <draft-ietf-magma-mld-source-05.txt>:
+ * use unspecified address as the source address
+@@ -1763,6 +1767,8 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
+
+ ip6_mc_hdr(sk, skb, dev, saddr, &mld2_all_mcr, NEXTHDR_HOP, 0);
+
++ rcu_read_unlock();
++
+ skb_put_data(skb, ra, sizeof(ra));
+
+ skb_set_transport_header(skb, skb_tail_pointer(skb) - skb->data);
+@@ -2122,21 +2128,21 @@ static void mld_send_cr(struct inet6_dev *idev)
+
+ static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type)
+ {
+- struct net *net = dev_net(dev);
+- struct sock *sk = net->ipv6.igmp_sk;
++ const struct in6_addr *snd_addr, *saddr;
++ int err, len, payload_len, full_len;
++ struct in6_addr addr_buf;
+ struct inet6_dev *idev;
+ struct sk_buff *skb;
+ struct mld_msg *hdr;
+- const struct in6_addr *snd_addr, *saddr;
+- struct in6_addr addr_buf;
+ int hlen = LL_RESERVED_SPACE(dev);
+ int tlen = dev->needed_tailroom;
+- int err, len, payload_len, full_len;
+ u8 ra[8] = { IPPROTO_ICMPV6, 0,
+ IPV6_TLV_ROUTERALERT, 2, 0, 0,
+ IPV6_TLV_PADN, 0 };
+- struct flowi6 fl6;
+ struct dst_entry *dst;
++ struct flowi6 fl6;
++ struct net *net;
++ struct sock *sk;
+
+ if (type == ICMPV6_MGM_REDUCTION)
+ snd_addr = &in6addr_linklocal_allrouters;
+@@ -2147,19 +2153,21 @@ static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type)
+ payload_len = len + sizeof(ra);
+ full_len = sizeof(struct ipv6hdr) + payload_len;
+
+- rcu_read_lock();
+- IP6_INC_STATS(net, __in6_dev_get(dev), IPSTATS_MIB_OUTREQUESTS);
+- rcu_read_unlock();
++ skb = alloc_skb(hlen + tlen + full_len, GFP_KERNEL);
+
+- skb = sock_alloc_send_skb(sk, hlen + tlen + full_len, 1, &err);
++ rcu_read_lock();
+
++ net = dev_net_rcu(dev);
++ idev = __in6_dev_get(dev);
++ IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTREQUESTS);
+ if (!skb) {
+- rcu_read_lock();
+- IP6_INC_STATS(net, __in6_dev_get(dev),
+- IPSTATS_MIB_OUTDISCARDS);
++ IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTDISCARDS);
+ rcu_read_unlock();
+ return;
+ }
++ sk = net->ipv6.igmp_sk;
++ skb_set_owner_w(skb, sk);
++
+ skb->priority = TC_PRIO_CONTROL;
+ skb_reserve(skb, hlen);
+
+@@ -2184,9 +2192,6 @@ static void igmp6_send(struct in6_addr *addr, struct net_device *dev, int type)
+ IPPROTO_ICMPV6,
+ csum_partial(hdr, len, 0));
+
+- rcu_read_lock();
+- idev = __in6_dev_get(skb->dev);
+-
+ icmpv6_flow_init(sk, &fl6, type,
+ &ipv6_hdr(skb)->saddr, &ipv6_hdr(skb)->daddr,
+ skb->dev->ifindex);
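The mld_newpack()/igmp6_send() rework above is about sleep context as much as about dev_net(): GFP_KERNEL allocations may sleep, and sleeping is illegal under rcu_read_lock(), so the allocation is hoisted out of the critical section and only the per-netns socket lookup plus skb_set_owner_w() remain under RCU. The resulting shape, compressed:

	skb = alloc_skb(size, GFP_KERNEL);	/* may sleep: outside RCU */
	if (!skb)
		return NULL;

	rcu_read_lock();
	net = dev_net_rcu(dev);
	skb_set_owner_w(skb, net->ipv6.igmp_sk);	/* charge the netns socket */
	rcu_read_unlock();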
+diff --git a/net/ipv6/ndisc.c b/net/ipv6/ndisc.c
+index d044c67019de6d..8699d1a188dc4a 100644
+--- a/net/ipv6/ndisc.c
++++ b/net/ipv6/ndisc.c
+@@ -418,15 +418,11 @@ static struct sk_buff *ndisc_alloc_skb(struct net_device *dev,
+ {
+ int hlen = LL_RESERVED_SPACE(dev);
+ int tlen = dev->needed_tailroom;
+- struct sock *sk = dev_net(dev)->ipv6.ndisc_sk;
+ struct sk_buff *skb;
+
+ skb = alloc_skb(hlen + sizeof(struct ipv6hdr) + len + tlen, GFP_ATOMIC);
+- if (!skb) {
+- ND_PRINTK(0, err, "ndisc: %s failed to allocate an skb\n",
+- __func__);
++ if (!skb)
+ return NULL;
+- }
+
+ skb->protocol = htons(ETH_P_IPV6);
+ skb->dev = dev;
+@@ -437,7 +433,9 @@ static struct sk_buff *ndisc_alloc_skb(struct net_device *dev,
+ /* Manually assign socket ownership as we avoid calling
+ * sock_alloc_send_pskb() to bypass wmem buffer limits
+ */
+- skb_set_owner_w(skb, sk);
++ rcu_read_lock();
++ skb_set_owner_w(skb, dev_net_rcu(dev)->ipv6.ndisc_sk);
++ rcu_read_unlock();
+
+ return skb;
+ }
+@@ -473,16 +471,20 @@ static void ip6_nd_hdr(struct sk_buff *skb,
+ void ndisc_send_skb(struct sk_buff *skb, const struct in6_addr *daddr,
+ const struct in6_addr *saddr)
+ {
++ struct icmp6hdr *icmp6h = icmp6_hdr(skb);
+ struct dst_entry *dst = skb_dst(skb);
+- struct net *net = dev_net(skb->dev);
+- struct sock *sk = net->ipv6.ndisc_sk;
+ struct inet6_dev *idev;
++ struct net *net;
++ struct sock *sk;
+ int err;
+- struct icmp6hdr *icmp6h = icmp6_hdr(skb);
+ u8 type;
+
+ type = icmp6h->icmp6_type;
+
++ rcu_read_lock();
++
++ net = dev_net_rcu(skb->dev);
++ sk = net->ipv6.ndisc_sk;
+ if (!dst) {
+ struct flowi6 fl6;
+ int oif = skb->dev->ifindex;
+@@ -490,6 +492,7 @@ void ndisc_send_skb(struct sk_buff *skb, const struct in6_addr *daddr,
+ icmpv6_flow_init(sk, &fl6, type, saddr, daddr, oif);
+ dst = icmp6_dst_alloc(skb->dev, &fl6);
+ if (IS_ERR(dst)) {
++ rcu_read_unlock();
+ kfree_skb(skb);
+ return;
+ }
+@@ -504,7 +507,6 @@ void ndisc_send_skb(struct sk_buff *skb, const struct in6_addr *daddr,
+
+ ip6_nd_hdr(skb, saddr, daddr, READ_ONCE(inet6_sk(sk)->hop_limit), skb->len);
+
+- rcu_read_lock();
+ idev = __in6_dev_get(dst->dev);
+ IP6_INC_STATS(net, idev, IPSTATS_MIB_OUTREQUESTS);
+
+@@ -1694,7 +1696,7 @@ void ndisc_send_redirect(struct sk_buff *skb, const struct in6_addr *target)
+ bool ret;
+
+ if (netif_is_l3_master(skb->dev)) {
+- dev = __dev_get_by_index(dev_net(skb->dev), IPCB(skb)->iif);
++ dev = dev_get_by_index_rcu(dev_net(skb->dev), IPCB(skb)->iif);
+ if (!dev)
+ return;
+ }
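The one-liner in ndisc_send_redirect() swaps lookup flavors for the same locking reason: __dev_get_by_index() is documented to require RTNL, while this path runs in an RCU read-side context where dev_get_by_index_rcu() is the correct call. The two contracts side by side (a sketch, assuming nothing beyond the two exported lookups):

	rtnl_lock();
	dev = __dev_get_by_index(net, ifindex);	  /* RTNL-protected, no ref taken */
	rtnl_unlock();

	rcu_read_lock();
	dev = dev_get_by_index_rcu(net, ifindex); /* valid until rcu_read_unlock() */
	rcu_read_unlock();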
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 8ebfed5d63232e..2736dea77575b5 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -3196,13 +3196,18 @@ static unsigned int ip6_default_advmss(const struct dst_entry *dst)
+ {
+ struct net_device *dev = dst->dev;
+ unsigned int mtu = dst_mtu(dst);
+- struct net *net = dev_net(dev);
++ struct net *net;
+
+ mtu -= sizeof(struct ipv6hdr) + sizeof(struct tcphdr);
+
++ rcu_read_lock();
++
++ net = dev_net_rcu(dev);
+ if (mtu < net->ipv6.sysctl.ip6_rt_min_advmss)
+ mtu = net->ipv6.sysctl.ip6_rt_min_advmss;
+
++ rcu_read_unlock();
++
+ /*
+ * Maximal non-jumbo IPv6 payload is IPV6_MAXPLEN and
+ * corresponding MSS is IPV6_MAXPLEN - tcp_header_size.
+diff --git a/net/ipv6/rpl_iptunnel.c b/net/ipv6/rpl_iptunnel.c
+index db3c19a42e1ca7..0ac4283acdf20c 100644
+--- a/net/ipv6/rpl_iptunnel.c
++++ b/net/ipv6/rpl_iptunnel.c
+@@ -125,7 +125,8 @@ static void rpl_destroy_state(struct lwtunnel_state *lwt)
+ }
+
+ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
+- const struct ipv6_rpl_sr_hdr *srh)
++ const struct ipv6_rpl_sr_hdr *srh,
++ struct dst_entry *cache_dst)
+ {
+ struct ipv6_rpl_sr_hdr *isrh, *csrh;
+ const struct ipv6hdr *oldhdr;
+@@ -153,7 +154,7 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
+
+ hdrlen = ((csrh->hdrlen + 1) << 3);
+
+- err = skb_cow_head(skb, hdrlen + skb->mac_len);
++ err = skb_cow_head(skb, hdrlen + dst_dev_overhead(cache_dst, skb));
+ if (unlikely(err)) {
+ kfree(buf);
+ return err;
+@@ -186,7 +187,8 @@ static int rpl_do_srh_inline(struct sk_buff *skb, const struct rpl_lwt *rlwt,
+ return 0;
+ }
+
+-static int rpl_do_srh(struct sk_buff *skb, const struct rpl_lwt *rlwt)
++static int rpl_do_srh(struct sk_buff *skb, const struct rpl_lwt *rlwt,
++ struct dst_entry *cache_dst)
+ {
+ struct dst_entry *dst = skb_dst(skb);
+ struct rpl_iptunnel_encap *tinfo;
+@@ -196,7 +198,7 @@ static int rpl_do_srh(struct sk_buff *skb, const struct rpl_lwt *rlwt)
+
+ tinfo = rpl_encap_lwtunnel(dst->lwtstate);
+
+- return rpl_do_srh_inline(skb, rlwt, tinfo->srh);
++ return rpl_do_srh_inline(skb, rlwt, tinfo->srh, cache_dst);
+ }
+
+ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+@@ -208,14 +210,14 @@ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+
+ rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate);
+
+- err = rpl_do_srh(skb, rlwt);
+- if (unlikely(err))
+- goto drop;
+-
+ local_bh_disable();
+ dst = dst_cache_get(&rlwt->cache);
+ local_bh_enable();
+
++ err = rpl_do_srh(skb, rlwt, dst);
++ if (unlikely(err))
++ goto drop;
++
+ if (unlikely(!dst)) {
+ struct ipv6hdr *hdr = ipv6_hdr(skb);
+ struct flowi6 fl6;
+@@ -230,25 +232,28 @@ static int rpl_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ dst = ip6_route_output(net, NULL, &fl6);
+ if (dst->error) {
+ err = dst->error;
+- dst_release(dst);
+ goto drop;
+ }
+
+- local_bh_disable();
+- dst_cache_set_ip6(&rlwt->cache, dst, &fl6.saddr);
+- local_bh_enable();
++ /* cache only if we don't create a dst reference loop */
++ if (orig_dst->lwtstate != dst->lwtstate) {
++ local_bh_disable();
++ dst_cache_set_ip6(&rlwt->cache, dst, &fl6.saddr);
++ local_bh_enable();
++ }
++
++ err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
++ if (unlikely(err))
++ goto drop;
+ }
+
+ skb_dst_drop(skb);
+ skb_dst_set(skb, dst);
+
+- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+- if (unlikely(err))
+- goto drop;
+-
+ return dst_output(net, sk, skb);
+
+ drop:
++ dst_release(dst);
+ kfree_skb(skb);
+ return err;
+ }
+@@ -262,29 +267,33 @@ static int rpl_input(struct sk_buff *skb)
+
+ rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate);
+
+- err = rpl_do_srh(skb, rlwt);
+- if (unlikely(err))
+- goto drop;
+-
+ local_bh_disable();
+ dst = dst_cache_get(&rlwt->cache);
++ local_bh_enable();
++
++ err = rpl_do_srh(skb, rlwt, dst);
++ if (unlikely(err)) {
++ dst_release(dst);
++ goto drop;
++ }
+
+ if (!dst) {
+ ip6_route_input(skb);
+ dst = skb_dst(skb);
+ if (!dst->error) {
++ local_bh_disable();
+ dst_cache_set_ip6(&rlwt->cache, dst,
+ &ipv6_hdr(skb)->saddr);
++ local_bh_enable();
+ }
++
++ err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
++ if (unlikely(err))
++ goto drop;
+ } else {
+ skb_dst_drop(skb);
+ skb_dst_set(skb, dst);
+ }
+- local_bh_enable();
+-
+- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+- if (unlikely(err))
+- goto drop;
+
+ return dst_input(skb);
+
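rpl_output() above, like ioam6_output() earlier and seg6_output_core() below, now guards the dst_cache write: if the freshly looked-up route carries the same lwtstate as the input route, caching it would pin a dst that in turn pins the lwtstate owning the cache — a reference loop, per the comment in the hunks — so such routes are deliberately left uncached. The shared guard, isolated:

	/* cache only if we don't create a dst reference loop */
	if (orig_dst->lwtstate != dst->lwtstate) {
		local_bh_disable();
		dst_cache_set_ip6(&rlwt->cache, dst, &fl6.saddr);
		local_bh_enable();
	}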
+diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c
+index 098632adc9b5af..33833b2064c072 100644
+--- a/net/ipv6/seg6_iptunnel.c
++++ b/net/ipv6/seg6_iptunnel.c
+@@ -124,8 +124,8 @@ static __be32 seg6_make_flowlabel(struct net *net, struct sk_buff *skb,
+ return flowlabel;
+ }
+
+-/* encapsulate an IPv6 packet within an outer IPv6 header with a given SRH */
+-int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
++static int __seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh,
++ int proto, struct dst_entry *cache_dst)
+ {
+ struct dst_entry *dst = skb_dst(skb);
+ struct net *net = dev_net(dst->dev);
+@@ -137,7 +137,7 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
+ hdrlen = (osrh->hdrlen + 1) << 3;
+ tot_len = hdrlen + sizeof(*hdr);
+
+- err = skb_cow_head(skb, tot_len + skb->mac_len);
++ err = skb_cow_head(skb, tot_len + dst_dev_overhead(cache_dst, skb));
+ if (unlikely(err))
+ return err;
+
+@@ -197,11 +197,18 @@ int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
+
+ return 0;
+ }
++
++/* encapsulate an IPv6 packet within an outer IPv6 header with a given SRH */
++int seg6_do_srh_encap(struct sk_buff *skb, struct ipv6_sr_hdr *osrh, int proto)
++{
++ return __seg6_do_srh_encap(skb, osrh, proto, NULL);
++}
+ EXPORT_SYMBOL_GPL(seg6_do_srh_encap);
+
+ /* encapsulate an IPv6 packet within an outer IPv6 header with reduced SRH */
+ static int seg6_do_srh_encap_red(struct sk_buff *skb,
+- struct ipv6_sr_hdr *osrh, int proto)
++ struct ipv6_sr_hdr *osrh, int proto,
++ struct dst_entry *cache_dst)
+ {
+ __u8 first_seg = osrh->first_segment;
+ struct dst_entry *dst = skb_dst(skb);
+@@ -230,7 +237,7 @@ static int seg6_do_srh_encap_red(struct sk_buff *skb,
+
+ tot_len = red_hdrlen + sizeof(struct ipv6hdr);
+
+- err = skb_cow_head(skb, tot_len + skb->mac_len);
++ err = skb_cow_head(skb, tot_len + dst_dev_overhead(cache_dst, skb));
+ if (unlikely(err))
+ return err;
+
+@@ -317,8 +324,8 @@ static int seg6_do_srh_encap_red(struct sk_buff *skb,
+ return 0;
+ }
+
+-/* insert an SRH within an IPv6 packet, just after the IPv6 header */
+-int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
++static int __seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh,
++ struct dst_entry *cache_dst)
+ {
+ struct ipv6hdr *hdr, *oldhdr;
+ struct ipv6_sr_hdr *isrh;
+@@ -326,7 +333,7 @@ int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
+
+ hdrlen = (osrh->hdrlen + 1) << 3;
+
+- err = skb_cow_head(skb, hdrlen + skb->mac_len);
++ err = skb_cow_head(skb, hdrlen + dst_dev_overhead(cache_dst, skb));
+ if (unlikely(err))
+ return err;
+
+@@ -369,9 +376,8 @@ int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
+
+ return 0;
+ }
+-EXPORT_SYMBOL_GPL(seg6_do_srh_inline);
+
+-static int seg6_do_srh(struct sk_buff *skb)
++static int seg6_do_srh(struct sk_buff *skb, struct dst_entry *cache_dst)
+ {
+ struct dst_entry *dst = skb_dst(skb);
+ struct seg6_iptunnel_encap *tinfo;
+@@ -384,7 +390,7 @@ static int seg6_do_srh(struct sk_buff *skb)
+ if (skb->protocol != htons(ETH_P_IPV6))
+ return -EINVAL;
+
+- err = seg6_do_srh_inline(skb, tinfo->srh);
++ err = __seg6_do_srh_inline(skb, tinfo->srh, cache_dst);
+ if (err)
+ return err;
+ break;
+@@ -402,9 +408,11 @@ static int seg6_do_srh(struct sk_buff *skb)
+ return -EINVAL;
+
+ if (tinfo->mode == SEG6_IPTUN_MODE_ENCAP)
+- err = seg6_do_srh_encap(skb, tinfo->srh, proto);
++ err = __seg6_do_srh_encap(skb, tinfo->srh,
++ proto, cache_dst);
+ else
+- err = seg6_do_srh_encap_red(skb, tinfo->srh, proto);
++ err = seg6_do_srh_encap_red(skb, tinfo->srh,
++ proto, cache_dst);
+
+ if (err)
+ return err;
+@@ -425,11 +433,13 @@ static int seg6_do_srh(struct sk_buff *skb)
+ skb_push(skb, skb->mac_len);
+
+ if (tinfo->mode == SEG6_IPTUN_MODE_L2ENCAP)
+- err = seg6_do_srh_encap(skb, tinfo->srh,
+- IPPROTO_ETHERNET);
++ err = __seg6_do_srh_encap(skb, tinfo->srh,
++ IPPROTO_ETHERNET,
++ cache_dst);
+ else
+ err = seg6_do_srh_encap_red(skb, tinfo->srh,
+- IPPROTO_ETHERNET);
++ IPPROTO_ETHERNET,
++ cache_dst);
+
+ if (err)
+ return err;
+@@ -444,6 +454,13 @@ static int seg6_do_srh(struct sk_buff *skb)
+ return 0;
+ }
+
++/* insert an SRH within an IPv6 packet, just after the IPv6 header */
++int seg6_do_srh_inline(struct sk_buff *skb, struct ipv6_sr_hdr *osrh)
++{
++ return __seg6_do_srh_inline(skb, osrh, NULL);
++}
++EXPORT_SYMBOL_GPL(seg6_do_srh_inline);
++
+ static int seg6_input_finish(struct net *net, struct sock *sk,
+ struct sk_buff *skb)
+ {
+@@ -458,31 +475,35 @@ static int seg6_input_core(struct net *net, struct sock *sk,
+ struct seg6_lwt *slwt;
+ int err;
+
+- err = seg6_do_srh(skb);
+- if (unlikely(err))
+- goto drop;
+-
+ slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
+
+ local_bh_disable();
+ dst = dst_cache_get(&slwt->cache);
++ local_bh_enable();
++
++ err = seg6_do_srh(skb, dst);
++ if (unlikely(err)) {
++ dst_release(dst);
++ goto drop;
++ }
+
+ if (!dst) {
+ ip6_route_input(skb);
+ dst = skb_dst(skb);
+ if (!dst->error) {
++ local_bh_disable();
+ dst_cache_set_ip6(&slwt->cache, dst,
+ &ipv6_hdr(skb)->saddr);
++ local_bh_enable();
+ }
++
++ err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
++ if (unlikely(err))
++ goto drop;
+ } else {
+ skb_dst_drop(skb);
+ skb_dst_set(skb, dst);
+ }
+- local_bh_enable();
+-
+- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+- if (unlikely(err))
+- goto drop;
+
+ if (static_branch_unlikely(&nf_hooks_lwtunnel_enabled))
+ return NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT,
+@@ -528,16 +549,16 @@ static int seg6_output_core(struct net *net, struct sock *sk,
+ struct seg6_lwt *slwt;
+ int err;
+
+- err = seg6_do_srh(skb);
+- if (unlikely(err))
+- goto drop;
+-
+ slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
+
+ local_bh_disable();
+ dst = dst_cache_get(&slwt->cache);
+ local_bh_enable();
+
++ err = seg6_do_srh(skb, dst);
++ if (unlikely(err))
++ goto drop;
++
+ if (unlikely(!dst)) {
+ struct ipv6hdr *hdr = ipv6_hdr(skb);
+ struct flowi6 fl6;
+@@ -552,28 +573,31 @@ static int seg6_output_core(struct net *net, struct sock *sk,
+ dst = ip6_route_output(net, NULL, &fl6);
+ if (dst->error) {
+ err = dst->error;
+- dst_release(dst);
+ goto drop;
+ }
+
+- local_bh_disable();
+- dst_cache_set_ip6(&slwt->cache, dst, &fl6.saddr);
+- local_bh_enable();
++ /* cache only if we don't create a dst reference loop */
++ if (orig_dst->lwtstate != dst->lwtstate) {
++ local_bh_disable();
++ dst_cache_set_ip6(&slwt->cache, dst, &fl6.saddr);
++ local_bh_enable();
++ }
++
++ err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
++ if (unlikely(err))
++ goto drop;
+ }
+
+ skb_dst_drop(skb);
+ skb_dst_set(skb, dst);
+
+- err = skb_cow_head(skb, LL_RESERVED_SPACE(dst->dev));
+- if (unlikely(err))
+- goto drop;
+-
+ if (static_branch_unlikely(&nf_hooks_lwtunnel_enabled))
+ return NF_HOOK(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk, skb,
+ NULL, skb_dst(skb)->dev, dst_output);
+
+ return dst_output(net, sk, skb);
+ drop:
++ dst_release(dst);
+ kfree_skb(skb);
+ return err;
+ }
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index 78d9961fcd446d..8d3c01f0e2aa19 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -2102,6 +2102,7 @@ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
+ {
+ struct ovs_header *ovs_header;
+ struct ovs_vport_stats vport_stats;
++ struct net *net_vport;
+ int err;
+
+ ovs_header = genlmsg_put(skb, portid, seq, &dp_vport_genl_family,
+@@ -2118,12 +2119,15 @@ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
+ nla_put_u32(skb, OVS_VPORT_ATTR_IFINDEX, vport->dev->ifindex))
+ goto nla_put_failure;
+
+- if (!net_eq(net, dev_net(vport->dev))) {
+- int id = peernet2id_alloc(net, dev_net(vport->dev), gfp);
++ rcu_read_lock();
++ net_vport = dev_net_rcu(vport->dev);
++ if (!net_eq(net, net_vport)) {
++ int id = peernet2id_alloc(net, net_vport, GFP_ATOMIC);
+
+ if (nla_put_s32(skb, OVS_VPORT_ATTR_NETNSID, id))
+- goto nla_put_failure;
++ goto nla_put_failure_unlock;
+ }
++ rcu_read_unlock();
+
+ ovs_vport_get_stats(vport, &vport_stats);
+ if (nla_put_64bit(skb, OVS_VPORT_ATTR_STATS,
+@@ -2144,6 +2148,8 @@ static int ovs_vport_cmd_fill_info(struct vport *vport, struct sk_buff *skb,
+ genlmsg_end(skb, ovs_header);
+ return 0;
+
++nla_put_failure_unlock:
++ rcu_read_unlock();
+ nla_put_failure:
+ err = -EMSGSIZE;
+ error:
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index f5d116a1bdea1a..37299a7ca1876e 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -337,7 +337,10 @@ EXPORT_SYMBOL_GPL(vsock_find_connected_socket);
+
+ void vsock_remove_sock(struct vsock_sock *vsk)
+ {
+- vsock_remove_bound(vsk);
++ /* Transport reassignment must not remove the binding. */
++ if (sock_flag(sk_vsock(vsk), SOCK_DEAD))
++ vsock_remove_bound(vsk);
++
+ vsock_remove_connected(vsk);
+ }
+ EXPORT_SYMBOL_GPL(vsock_remove_sock);
+@@ -821,6 +824,13 @@ static void __vsock_release(struct sock *sk, int level)
+ */
+ lock_sock_nested(sk, level);
+
++ /* Indicate to vsock_remove_sock() that the socket is being released and
++ * can be removed from the bound_table. Unlike transport reassignment
++ * case, where the socket must remain bound despite vsock_remove_sock()
++ * being called from the transport release() callback.
++ */
++ sock_set_flag(sk, SOCK_DEAD);
++
+ if (vsk->transport)
+ vsk->transport->release(vsk);
+ else if (sock_type_connectible(sk->sk_type))
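To make the vsock fix concrete: vsock_remove_sock() is reached from two different transport release() invocations, and the new SOCK_DEAD test is what tells them apart. A simplified sketch of the two paths (the reassignment caller is named from memory and should be treated as an assumption):

	/* __vsock_release(): a real close, so the binding must go */
	sock_set_flag(sk, SOCK_DEAD);
	vsk->transport->release(vsk);	/* -> vsock_remove_sock() unbinds */

	/* vsock_assign_transport(): old transport torn down, socket
	 * lives on -- SOCK_DEAD unset, so the binding is preserved */
	vsk->transport->release(vsk);	/* -> vsock_remove_sock() keeps it */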
+diff --git a/rust/Makefile b/rust/Makefile
+index 9f59baacaf7730..45779a064fa4f4 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -229,6 +229,7 @@ bindgen_skip_c_flags := -mno-fp-ret-in-387 -mpreferred-stack-boundary=% \
+ -fzero-call-used-regs=% -fno-stack-clash-protection \
+ -fno-inline-functions-called-once -fsanitize=bounds-strict \
+ -fstrict-flex-arrays=% -fmin-function-alignment=% \
++ -fzero-init-padding-bits=% \
+ --param=% --param asan-%
+
+ # Derived from `scripts/Makefile.clang`.
+diff --git a/rust/kernel/rbtree.rs b/rust/kernel/rbtree.rs
+index d03e4aa1f4812b..7543378d372927 100644
+--- a/rust/kernel/rbtree.rs
++++ b/rust/kernel/rbtree.rs
+@@ -1147,7 +1147,7 @@ pub struct VacantEntry<'a, K, V> {
+ /// # Invariants
+ /// - `parent` may be null if the new node becomes the root.
+ /// - `child_field_of_parent` is a valid pointer to the left-child or right-child of `parent`. If `parent` is
+-/// null, it is a pointer to the root of the [`RBTree`].
++/// null, it is a pointer to the root of the [`RBTree`].
+ struct RawVacantEntry<'a, K, V> {
+ rbtree: *mut RBTree<K, V>,
+ /// The node that will become the parent of the new node if we insert one.
+diff --git a/scripts/Makefile.defconf b/scripts/Makefile.defconf
+index 226ea3df3b4b4c..a44307f08e9d68 100644
+--- a/scripts/Makefile.defconf
++++ b/scripts/Makefile.defconf
+@@ -1,6 +1,11 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Configuration heplers
+
++cmd_merge_fragments = \
++ $(srctree)/scripts/kconfig/merge_config.sh \
++ $4 -m -O $(objtree) $(srctree)/arch/$(SRCARCH)/configs/$2 \
++ $(foreach config,$3,$(srctree)/arch/$(SRCARCH)/configs/$(config).config)
++
+ # Creates 'merged defconfigs'
+ # ---------------------------------------------------------------------------
+ # Usage:
+@@ -8,9 +13,7 @@
+ #
+ # Input config fragments without '.config' suffix
+ define merge_into_defconfig
+- $(Q)$(CONFIG_SHELL) $(srctree)/scripts/kconfig/merge_config.sh \
+- -m -O $(objtree) $(srctree)/arch/$(SRCARCH)/configs/$(1) \
+- $(foreach config,$(2),$(srctree)/arch/$(SRCARCH)/configs/$(config).config)
++ $(call cmd,merge_fragments,$1,$2)
+ +$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig
+ endef
+
+@@ -22,8 +25,6 @@ endef
+ #
+ # Input config fragments without '.config' suffix
+ define merge_into_defconfig_override
+- $(Q)$(CONFIG_SHELL) $(srctree)/scripts/kconfig/merge_config.sh \
+- -Q -m -O $(objtree) $(srctree)/arch/$(SRCARCH)/configs/$(1) \
+- $(foreach config,$(2),$(srctree)/arch/$(SRCARCH)/configs/$(config).config)
++ $(call cmd,merge_fragments,$1,$2,-Q)
+ +$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig
+ endef
+diff --git a/scripts/Makefile.extrawarn b/scripts/Makefile.extrawarn
+index 04faf15ed316a9..dc081cf46d211c 100644
+--- a/scripts/Makefile.extrawarn
++++ b/scripts/Makefile.extrawarn
+@@ -31,6 +31,11 @@ KBUILD_CFLAGS-$(CONFIG_CC_NO_ARRAY_BOUNDS) += -Wno-array-bounds
+ ifdef CONFIG_CC_IS_CLANG
+ # The kernel builds with '-std=gnu11' so use of GNU extensions is acceptable.
+ KBUILD_CFLAGS += -Wno-gnu
++
++# Clang checks for overflow/truncation with '%p', while GCC does not:
++# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111219
++KBUILD_CFLAGS += $(call cc-disable-warning, format-overflow-non-kprintf)
++KBUILD_CFLAGS += $(call cc-disable-warning, format-truncation-non-kprintf)
+ else
+
+ # gcc inanely warns about local variables called 'main'
+@@ -77,6 +82,9 @@ KBUILD_CFLAGS += $(call cc-option,-Werror=designated-init)
+ # Warn if there is an enum types mismatch
+ KBUILD_CFLAGS += $(call cc-option,-Wenum-conversion)
+
++# Explicitly clear padding bits during variable initialization
++KBUILD_CFLAGS += $(call cc-option,-fzero-init-padding-bits=all)
++
+ KBUILD_CFLAGS += -Wextra
+ KBUILD_CFLAGS += -Wunused
+
+@@ -102,11 +110,6 @@ KBUILD_CFLAGS += $(call cc-disable-warning, packed-not-aligned)
+ KBUILD_CFLAGS += $(call cc-disable-warning, format-overflow)
+ ifdef CONFIG_CC_IS_GCC
+ KBUILD_CFLAGS += $(call cc-disable-warning, format-truncation)
+-else
+-# Clang checks for overflow/truncation with '%p', while GCC does not:
+-# https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111219
+-KBUILD_CFLAGS += $(call cc-disable-warning, format-overflow-non-kprintf)
+-KBUILD_CFLAGS += $(call cc-disable-warning, format-truncation-non-kprintf)
+ endif
+ KBUILD_CFLAGS += $(call cc-disable-warning, stringop-truncation)
+
+diff --git a/scripts/kconfig/Makefile b/scripts/kconfig/Makefile
+index a0a0be38cbdc14..fb50bd4f4103f2 100644
+--- a/scripts/kconfig/Makefile
++++ b/scripts/kconfig/Makefile
+@@ -105,9 +105,11 @@ configfiles = $(wildcard $(srctree)/kernel/configs/$(1) $(srctree)/arch/$(SRCARC
+ all-config-fragments = $(call configfiles,*.config)
+ config-fragments = $(call configfiles,$@)
+
++cmd_merge_fragments = $(srctree)/scripts/kconfig/merge_config.sh -m $(KCONFIG_CONFIG) $(config-fragments)
++
+ %.config: $(obj)/conf
+ $(if $(config-fragments),, $(error $@ fragment does not exists on this architecture))
+- $(Q)$(CONFIG_SHELL) $(srctree)/scripts/kconfig/merge_config.sh -m $(KCONFIG_CONFIG) $(config-fragments)
++ $(call cmd,merge_fragments)
+ $(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig
+
+ PHONY += tinyconfig
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 54f77f57ec8e25..1148e9498d8e83 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -1132,7 +1132,22 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ BYT_RT5640_SSP0_AIF2 |
+ BYT_RT5640_MCLK_EN),
+ },
+- { /* Vexia Edu Atla 10 tablet */
++ {
++ /* Vexia Edu Atla 10 tablet 5V version */
++ .matches = {
++ /* Having all 3 of these not set is somewhat unique */
++ DMI_MATCH(DMI_SYS_VENDOR, "To be filled by O.E.M."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "To be filled by O.E.M."),
++ DMI_MATCH(DMI_BOARD_NAME, "To be filled by O.E.M."),
++ /* Above strings are too generic, also match on BIOS date */
++ DMI_MATCH(DMI_BIOS_DATE, "05/14/2015"),
++ },
++ .driver_data = (void *)(BYTCR_INPUT_DEFAULTS |
++ BYT_RT5640_JD_NOT_INV |
++ BYT_RT5640_SSP0_AIF1 |
++ BYT_RT5640_MCLK_EN),
++ },
++ { /* Vexia Edu Atla 10 tablet 9V version */
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "AMI Corporation"),
+ DMI_MATCH(DMI_BOARD_NAME, "Aptio CRB"),
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index f0d8796b984a80..8e02db7e83323b 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -218,6 +218,7 @@ static bool is_rust_noreturn(const struct symbol *func)
+ str_ends_with(func->name, "_4core9panicking18panic_bounds_check") ||
+ str_ends_with(func->name, "_4core9panicking19assert_failed_inner") ||
+ str_ends_with(func->name, "_4core9panicking36panic_misaligned_pointer_dereference") ||
++ strstr(func->name, "_4core9panicking13assert_failed") ||
+ strstr(func->name, "_4core9panicking11panic_const24panic_const_") ||
+ (strstr(func->name, "_4core5slice5index24slice_") &&
+ str_ends_with(func->name, "_fail"));
+diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
+index 248ab790d143ed..f7206374a73dd8 100644
+--- a/tools/sched_ext/include/scx/common.bpf.h
++++ b/tools/sched_ext/include/scx/common.bpf.h
+@@ -251,8 +251,16 @@ void bpf_obj_drop_impl(void *kptr, void *meta) __ksym;
+ #define bpf_obj_new(type) ((type *)bpf_obj_new_impl(bpf_core_type_id_local(type), NULL))
+ #define bpf_obj_drop(kptr) bpf_obj_drop_impl(kptr, NULL)
+
+-void bpf_list_push_front(struct bpf_list_head *head, struct bpf_list_node *node) __ksym;
+-void bpf_list_push_back(struct bpf_list_head *head, struct bpf_list_node *node) __ksym;
++int bpf_list_push_front_impl(struct bpf_list_head *head,
++ struct bpf_list_node *node,
++ void *meta, __u64 off) __ksym;
++#define bpf_list_push_front(head, node) bpf_list_push_front_impl(head, node, NULL, 0)
++
++int bpf_list_push_back_impl(struct bpf_list_head *head,
++ struct bpf_list_node *node,
++ void *meta, __u64 off) __ksym;
++#define bpf_list_push_back(head, node) bpf_list_push_back_impl(head, node, NULL, 0)
++
+ struct bpf_list_node *bpf_list_pop_front(struct bpf_list_head *head) __ksym;
+ struct bpf_list_node *bpf_list_pop_back(struct bpf_list_head *head) __ksym;
+ struct bpf_rb_node *bpf_rbtree_remove(struct bpf_rb_root *root,
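The common.bpf.h change tracks the kernel-side migration of these kfuncs to failable signatures: the push kfuncs return int, and the old void-returning names become macros over the *_impl variants. A hypothetical call site follows (ghead, glock and the node layout are illustrative, not from the patch); as I recall the migration commit, a failing push drops the node internally, so the caller must not bpf_obj_drop() it again:

	struct node *n = bpf_obj_new(typeof(*n));
	if (!n)
		return 0;

	bpf_spin_lock(&glock);
	if (bpf_list_push_back(&ghead, &n->link)) {
		/* insertion failed; the kfunc consumed the node and the
		 * verifier treats the reference as released either way */
	}
	bpf_spin_unlock(&glock);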
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+index f0a3a9c18e9ef5..9006549a12945f 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+@@ -226,7 +226,7 @@ static void test_task_common_nocheck(struct bpf_iter_attach_opts *opts,
+ ASSERT_OK(pthread_create(&thread_id, NULL, &do_nothing_wait, NULL),
+ "pthread_create");
+
+- skel->bss->tid = gettid();
++ skel->bss->tid = syscall(SYS_gettid);
+
+ do_dummy_read_opts(skel->progs.dump_task, opts);
+
+@@ -255,10 +255,10 @@ static void *run_test_task_tid(void *arg)
+ union bpf_iter_link_info linfo;
+ int num_unknown_tid, num_known_tid;
+
+- ASSERT_NEQ(getpid(), gettid(), "check_new_thread_id");
++ ASSERT_NEQ(getpid(), syscall(SYS_gettid), "check_new_thread_id");
+
+ memset(&linfo, 0, sizeof(linfo));
+- linfo.task.tid = gettid();
++ linfo.task.tid = syscall(SYS_gettid);
+ opts.link_info = &linfo;
+ opts.link_info_len = sizeof(linfo);
+ test_task_common(&opts, 0, 1);
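The gettid() swap above is a build-portability fix: glibc only gained a gettid() wrapper in 2.30, so invoking the raw syscall keeps the selftest compiling against older C libraries. A self-contained helper in the same spirit:

	#include <sys/syscall.h>
	#include <sys/types.h>
	#include <unistd.h>

	/* works on any glibc, unlike gettid() which needs glibc >= 2.30 */
	static pid_t portable_gettid(void)
	{
		return (pid_t)syscall(SYS_gettid);
	}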
+diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
+index 844f6fc8487b67..c1ac813ff9bae3 100644
+--- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
++++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
+@@ -869,21 +869,14 @@ static void consumer_test(struct uprobe_multi_consumers *skel,
+ fmt = "prog 0/1: uprobe";
+ } else {
+ /*
+- * uprobe return is tricky ;-)
+- *
+ * to trigger uretprobe consumer, the uretprobe needs to be installed,
+ * which means one of the 'return' uprobes was alive when probe was hit:
+ *
+ * idxs: 2/3 uprobe return in 'installed' mask
+- *
+- * in addition if 'after' state removes everything that was installed in
+- * 'before' state, then uprobe kernel object goes away and return uprobe
+- * is not installed and we won't hit it even if it's in 'after' state.
+ */
+ unsigned long had_uretprobes = before & 0b1100; /* is uretprobe installed */
+- unsigned long probe_preserved = before & after; /* did uprobe go away */
+
+- if (had_uretprobes && probe_preserved && test_bit(idx, after))
++ if (had_uretprobes && test_bit(idx, after))
+ val++;
+ fmt = "idx 2/3: uretprobe";
+ }
+diff --git a/tools/testing/selftests/gpio/gpio-sim.sh b/tools/testing/selftests/gpio/gpio-sim.sh
+index 6fb66a687f1737..bbc29ed9c60a91 100755
+--- a/tools/testing/selftests/gpio/gpio-sim.sh
++++ b/tools/testing/selftests/gpio/gpio-sim.sh
+@@ -46,12 +46,6 @@ remove_chip() {
+ rmdir $CONFIGFS_DIR/$CHIP || fail "Unable to remove the chip"
+ }
+
+-configfs_cleanup() {
+- for CHIP in `ls $CONFIGFS_DIR/`; do
+- remove_chip $CHIP
+- done
+-}
+-
+ create_chip() {
+ local CHIP=$1
+
+@@ -105,6 +99,13 @@ disable_chip() {
+ echo 0 > $CONFIGFS_DIR/$CHIP/live || fail "Unable to disable the chip"
+ }
+
++configfs_cleanup() {
++ for CHIP in `ls $CONFIGFS_DIR/`; do
++ disable_chip $CHIP
++ remove_chip $CHIP
++ done
++}
++
+ configfs_chip_name() {
+ local CHIP=$1
+ local BANK=$2
+@@ -181,6 +182,7 @@ create_chip chip
+ create_bank chip bank
+ enable_chip chip
+ test -n `cat $CONFIGFS_DIR/chip/bank/chip_name` || fail "chip_name doesn't work"
++disable_chip chip
+ remove_chip chip
+
+ echo "1.2. chip_name returns 'none' if the chip is still pending"
+@@ -195,6 +197,7 @@ create_chip chip
+ create_bank chip bank
+ enable_chip chip
+ test -n `cat $CONFIGFS_DIR/chip/dev_name` || fail "dev_name doesn't work"
++disable_chip chip
+ remove_chip chip
+
+ echo "2. Creating and configuring simulated chips"
+@@ -204,6 +207,7 @@ create_chip chip
+ create_bank chip bank
+ enable_chip chip
+ test "`get_chip_num_lines chip bank`" = "1" || fail "default number of lines is not 1"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.2. Number of lines can be specified"
+@@ -212,6 +216,7 @@ create_bank chip bank
+ set_num_lines chip bank 16
+ enable_chip chip
+ test "`get_chip_num_lines chip bank`" = "16" || fail "number of lines is not 16"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.3. Label can be set"
+@@ -220,6 +225,7 @@ create_bank chip bank
+ set_label chip bank foobar
+ enable_chip chip
+ test "`get_chip_label chip bank`" = "foobar" || fail "label is incorrect"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.4. Label can be left empty"
+@@ -227,6 +233,7 @@ create_chip chip
+ create_bank chip bank
+ enable_chip chip
+ test -z "`cat $CONFIGFS_DIR/chip/bank/label`" || fail "label is not empty"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.5. Line names can be configured"
+@@ -238,6 +245,7 @@ set_line_name chip bank 2 bar
+ enable_chip chip
+ test "`get_line_name chip bank 0`" = "foo" || fail "line name is incorrect"
+ test "`get_line_name chip bank 2`" = "bar" || fail "line name is incorrect"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.6. Line config can remain unused if offset is greater than number of lines"
+@@ -248,6 +256,7 @@ set_line_name chip bank 5 foobar
+ enable_chip chip
+ test "`get_line_name chip bank 0`" = "" || fail "line name is incorrect"
+ test "`get_line_name chip bank 1`" = "" || fail "line name is incorrect"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.7. Line configfs directory names are sanitized"
+@@ -267,6 +276,7 @@ for CHIP in $CHIPS; do
+ enable_chip $CHIP
+ done
+ for CHIP in $CHIPS; do
++ disable_chip $CHIP
+ remove_chip $CHIP
+ done
+
+@@ -278,6 +288,7 @@ echo foobar > $CONFIGFS_DIR/chip/bank/label 2> /dev/null && \
+ fail "Setting label of a live chip should fail"
+ echo 8 > $CONFIGFS_DIR/chip/bank/num_lines 2> /dev/null && \
+ fail "Setting number of lines of a live chip should fail"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.10. Can't create line items when chip is live"
+@@ -285,6 +296,7 @@ create_chip chip
+ create_bank chip bank
+ enable_chip chip
+ mkdir $CONFIGFS_DIR/chip/bank/line0 2> /dev/null && fail "Creating line item should fail"
++disable_chip chip
+ remove_chip chip
+
+ echo "2.11. Probe errors are propagated to user-space"
+@@ -316,6 +328,7 @@ mkdir -p $CONFIGFS_DIR/chip/bank/line4/hog
+ enable_chip chip
+ $BASE_DIR/gpio-mockup-cdev -s 1 /dev/`configfs_chip_name chip bank` 4 2> /dev/null && \
+ fail "Setting the value of a hogged line shouldn't succeed"
++disable_chip chip
+ remove_chip chip
+
+ echo "3. Controlling simulated chips"
+@@ -331,6 +344,7 @@ test "$?" = "1" || fail "pull set incorrectly"
+ sysfs_set_pull chip bank 0 pull-down
+ $BASE_DIR/gpio-mockup-cdev /dev/`configfs_chip_name chip bank` 1
+ test "$?" = "0" || fail "pull set incorrectly"
++disable_chip chip
+ remove_chip chip
+
+ echo "3.2. Pull can be read from sysfs"
+@@ -344,6 +358,7 @@ SYSFS_PATH=/sys/devices/platform/$DEVNAME/$CHIPNAME/sim_gpio0/pull
+ test `cat $SYSFS_PATH` = "pull-down" || fail "reading the pull failed"
+ sysfs_set_pull chip bank 0 pull-up
+ test `cat $SYSFS_PATH` = "pull-up" || fail "reading the pull failed"
++disable_chip chip
+ remove_chip chip
+
+ echo "3.3. Incorrect input in sysfs is rejected"
+@@ -355,6 +370,7 @@ DEVNAME=`configfs_dev_name chip`
+ CHIPNAME=`configfs_chip_name chip bank`
+ SYSFS_PATH="/sys/devices/platform/$DEVNAME/$CHIPNAME/sim_gpio0/pull"
+ echo foobar > $SYSFS_PATH 2> /dev/null && fail "invalid input not detected"
++disable_chip chip
+ remove_chip chip
+
+ echo "3.4. Can't write to value"
+@@ -365,6 +381,7 @@ DEVNAME=`configfs_dev_name chip`
+ CHIPNAME=`configfs_chip_name chip bank`
+ SYSFS_PATH="/sys/devices/platform/$DEVNAME/$CHIPNAME/sim_gpio0/value"
+ echo 1 > $SYSFS_PATH 2> /dev/null && fail "writing to 'value' succeeded unexpectedly"
++disable_chip chip
+ remove_chip chip
+
+ echo "4. Simulated GPIO chips are functional"
+@@ -382,6 +399,7 @@ $BASE_DIR/gpio-mockup-cdev -s 1 /dev/`configfs_chip_name chip bank` 0 &
+ sleep 0.1 # FIXME Any better way?
+ test `cat $SYSFS_PATH` = "1" || fail "incorrect value read from sysfs"
+ kill $!
++disable_chip chip
+ remove_chip chip
+
+ echo "4.2. Bias settings work correctly"
+@@ -394,6 +412,7 @@ CHIPNAME=`configfs_chip_name chip bank`
+ SYSFS_PATH="/sys/devices/platform/$DEVNAME/$CHIPNAME/sim_gpio0/value"
+ $BASE_DIR/gpio-mockup-cdev -b pull-up /dev/`configfs_chip_name chip bank` 0
+ test `cat $SYSFS_PATH` = "1" || fail "bias setting does not work"
++disable_chip chip
+ remove_chip chip
+
+ echo "GPIO $MODULE test PASS"
+diff --git a/tools/testing/selftests/net/pmtu.sh b/tools/testing/selftests/net/pmtu.sh
+index 6c651c880fe83d..66be7699c72c9a 100755
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -197,6 +197,12 @@
+ #
+ # - pmtu_ipv6_route_change
+ # Same as above but with IPv6
++#
++# - pmtu_ipv4_mp_exceptions
++# Use the same topology as in pmtu_ipv4, but add routeable addresses
++# on host A and B on lo reachable via both routers. Host A and B
++# addresses have multipath routes to each other, b_r1 mtu = 1500.
++# Check that PMTU exceptions are created for both paths.
+
+ source lib.sh
+ source net_helper.sh
+@@ -266,7 +272,8 @@ tests="
+ list_flush_ipv4_exception ipv4: list and flush cached exceptions 1
+ list_flush_ipv6_exception ipv6: list and flush cached exceptions 1
+ pmtu_ipv4_route_change ipv4: PMTU exception w/route replace 1
+- pmtu_ipv6_route_change ipv6: PMTU exception w/route replace 1"
++ pmtu_ipv6_route_change ipv6: PMTU exception w/route replace 1
++ pmtu_ipv4_mp_exceptions ipv4: PMTU multipath nh exceptions 1"
+
+ # Addressing and routing for tests with routers: four network segments, with
+ # index SEGMENT between 1 and 4, a common prefix (PREFIX4 or PREFIX6) and an
+@@ -343,6 +350,9 @@ tunnel6_a_addr="fd00:2::a"
+ tunnel6_b_addr="fd00:2::b"
+ tunnel6_mask="64"
+
++host4_a_addr="192.168.99.99"
++host4_b_addr="192.168.88.88"
++
+ dummy6_0_prefix="fc00:1000::"
+ dummy6_1_prefix="fc00:1001::"
+ dummy6_mask="64"
+@@ -984,6 +994,52 @@ setup_ovs_bridge() {
+ run_cmd ip route add ${prefix6}:${b_r1}::1 via ${prefix6}:${a_r1}::2
+ }
+
++setup_multipath_new() {
++ # Set up host A with multipath routes to host B host4_b_addr
++ run_cmd ${ns_a} ip addr add ${host4_a_addr} dev lo
++ run_cmd ${ns_a} ip nexthop add id 401 via ${prefix4}.${a_r1}.2 dev veth_A-R1
++ run_cmd ${ns_a} ip nexthop add id 402 via ${prefix4}.${a_r2}.2 dev veth_A-R2
++ run_cmd ${ns_a} ip nexthop add id 403 group 401/402
++ run_cmd ${ns_a} ip route add ${host4_b_addr} src ${host4_a_addr} nhid 403
++
++ # Set up host B with multipath routes to host A host4_a_addr
++ run_cmd ${ns_b} ip addr add ${host4_b_addr} dev lo
++ run_cmd ${ns_b} ip nexthop add id 401 via ${prefix4}.${b_r1}.2 dev veth_B-R1
++ run_cmd ${ns_b} ip nexthop add id 402 via ${prefix4}.${b_r2}.2 dev veth_B-R2
++ run_cmd ${ns_b} ip nexthop add id 403 group 401/402
++ run_cmd ${ns_b} ip route add ${host4_a_addr} src ${host4_b_addr} nhid 403
++}
++
++setup_multipath_old() {
++ # Set up host A with multipath routes to host B host4_b_addr
++ run_cmd ${ns_a} ip addr add ${host4_a_addr} dev lo
++ run_cmd ${ns_a} ip route add ${host4_b_addr} \
++ src ${host4_a_addr} \
++ nexthop via ${prefix4}.${a_r1}.2 weight 1 \
++ nexthop via ${prefix4}.${a_r2}.2 weight 1
++
++ # Set up host B with multipath routes to host A host4_a_addr
++ run_cmd ${ns_b} ip addr add ${host4_b_addr} dev lo
++ run_cmd ${ns_b} ip route add ${host4_a_addr} \
++ src ${host4_b_addr} \
++ nexthop via ${prefix4}.${b_r1}.2 weight 1 \
++ nexthop via ${prefix4}.${b_r2}.2 weight 1
++}
++
++setup_multipath() {
++ if [ "$USE_NH" = "yes" ]; then
++ setup_multipath_new
++ else
++ setup_multipath_old
++ fi
++
++ # Set up routers with routes to dummies
++ run_cmd ${ns_r1} ip route add ${host4_a_addr} via ${prefix4}.${a_r1}.1
++ run_cmd ${ns_r2} ip route add ${host4_a_addr} via ${prefix4}.${a_r2}.1
++ run_cmd ${ns_r1} ip route add ${host4_b_addr} via ${prefix4}.${b_r1}.1
++ run_cmd ${ns_r2} ip route add ${host4_b_addr} via ${prefix4}.${b_r2}.1
++}
++
+ setup() {
+ [ "$(id -u)" -ne 0 ] && echo " need to run as root" && return $ksft_skip
+
+@@ -1076,23 +1132,15 @@ link_get_mtu() {
+ }
+
+ route_get_dst_exception() {
+- ns_cmd="${1}"
+- dst="${2}"
+- dsfield="${3}"
++ ns_cmd="${1}"; shift
+
+- if [ -z "${dsfield}" ]; then
+- dsfield=0
+- fi
+-
+- ${ns_cmd} ip route get "${dst}" dsfield "${dsfield}"
++ ${ns_cmd} ip route get "$@"
+ }
+
+ route_get_dst_pmtu_from_exception() {
+- ns_cmd="${1}"
+- dst="${2}"
+- dsfield="${3}"
++ ns_cmd="${1}"; shift
+
+- mtu_parse "$(route_get_dst_exception "${ns_cmd}" "${dst}" "${dsfield}")"
++ mtu_parse "$(route_get_dst_exception "${ns_cmd}" "$@")"
+ }
+
+ check_pmtu_value() {
+@@ -1235,10 +1283,10 @@ test_pmtu_ipv4_dscp_icmp_exception() {
+ run_cmd "${ns_a}" ping -q -M want -Q "${dsfield}" -c 1 -w 1 -s "${len}" "${dst2}"
+
+ # Check that exceptions have been created with the correct PMTU
+- pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst1}" "${policy_mark}")"
++ pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst1}" dsfield "${policy_mark}")"
+ check_pmtu_value "1400" "${pmtu_1}" "exceeding MTU" || return 1
+
+- pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst2}" "${policy_mark}")"
++ pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst2}" dsfield "${policy_mark}")"
+ check_pmtu_value "1500" "${pmtu_2}" "exceeding MTU" || return 1
+ }
+
+@@ -1285,9 +1333,9 @@ test_pmtu_ipv4_dscp_udp_exception() {
+ UDP:"${dst2}":50000,tos="${dsfield}"
+
+ # Check that exceptions have been created with the correct PMTU
+- pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst1}" "${policy_mark}")"
++ pmtu_1="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst1}" dsfield "${policy_mark}")"
+ check_pmtu_value "1400" "${pmtu_1}" "exceeding MTU" || return 1
+- pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst2}" "${policy_mark}")"
++ pmtu_2="$(route_get_dst_pmtu_from_exception "${ns_a}" "${dst2}" dsfield "${policy_mark}")"
+ check_pmtu_value "1500" "${pmtu_2}" "exceeding MTU" || return 1
+ }
+
+@@ -2329,6 +2377,36 @@ test_pmtu_ipv6_route_change() {
+ test_pmtu_ipvX_route_change 6
+ }
+
++test_pmtu_ipv4_mp_exceptions() {
++ setup namespaces routing multipath || return $ksft_skip
++
++ trace "${ns_a}" veth_A-R1 "${ns_r1}" veth_R1-A \
++ "${ns_r1}" veth_R1-B "${ns_b}" veth_B-R1 \
++ "${ns_a}" veth_A-R2 "${ns_r2}" veth_R2-A \
++ "${ns_r2}" veth_R2-B "${ns_b}" veth_B-R2
++
++ # Set up initial MTU values
++ mtu "${ns_a}" veth_A-R1 2000
++ mtu "${ns_r1}" veth_R1-A 2000
++ mtu "${ns_r1}" veth_R1-B 1500
++ mtu "${ns_b}" veth_B-R1 1500
++
++ mtu "${ns_a}" veth_A-R2 2000
++ mtu "${ns_r2}" veth_R2-A 2000
++ mtu "${ns_r2}" veth_R2-B 1500
++ mtu "${ns_b}" veth_B-R2 1500
++
++ # Ping and expect two nexthop exceptions for two routes
++ run_cmd ${ns_a} ping -q -M want -i 0.1 -c 1 -s 1800 "${host4_b_addr}"
++
++ # Check that exceptions have been created with the correct PMTU
++ pmtu_a_R1="$(route_get_dst_pmtu_from_exception "${ns_a}" "${host4_b_addr}" oif veth_A-R1)"
++ pmtu_a_R2="$(route_get_dst_pmtu_from_exception "${ns_a}" "${host4_b_addr}" oif veth_A-R2)"
++
++ check_pmtu_value "1500" "${pmtu_a_R1}" "exceeding MTU (veth_A-R1)" || return 1
++ check_pmtu_value "1500" "${pmtu_a_R2}" "exceeding MTU (veth_A-R2)" || return 1
++}
++
+ usage() {
+ echo
+ echo "$0 [OPTIONS] [TEST]..."
+diff --git a/tools/testing/selftests/net/rtnetlink.sh b/tools/testing/selftests/net/rtnetlink.sh
+index bdf6f10d055891..87dce3efe31e4a 100755
+--- a/tools/testing/selftests/net/rtnetlink.sh
++++ b/tools/testing/selftests/net/rtnetlink.sh
+@@ -809,10 +809,10 @@ kci_test_ipsec_offload()
+ # does driver have correct offload info
+ run_cmd diff $sysfsf - << EOF
+ SA count=2 tx=3
+-sa[0] tx ipaddr=0x00000000 00000000 00000000 00000000
++sa[0] tx ipaddr=$dstip
+ sa[0] spi=0x00000009 proto=0x32 salt=0x61626364 crypt=1
+ sa[0] key=0x34333231 38373635 32313039 36353433
+-sa[1] rx ipaddr=0x00000000 00000000 00000000 037ba8c0
++sa[1] rx ipaddr=$srcip
+ sa[1] spi=0x00000009 proto=0x32 salt=0x61626364 crypt=1
+ sa[1] key=0x34333231 38373635 32313039 36353433
+ EOF
+diff --git a/tools/tracing/rtla/src/timerlat_hist.c b/tools/tracing/rtla/src/timerlat_hist.c
+index 4cbd2d8ebb0461..397dc962f5e2ad 100644
+--- a/tools/tracing/rtla/src/timerlat_hist.c
++++ b/tools/tracing/rtla/src/timerlat_hist.c
+@@ -1143,6 +1143,14 @@ static int stop_tracing;
+ static struct trace_instance *hist_inst = NULL;
+ static void stop_hist(int sig)
+ {
++ if (stop_tracing) {
++ /*
++ * Stop requested twice in a row; abort event processing and
++ * exit immediately
++ */
++ tracefs_iterate_stop(hist_inst->inst);
++ return;
++ }
+ stop_tracing = 1;
+ if (hist_inst)
+ trace_instance_stop(hist_inst);
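This handler and its twin in timerlat_top.c below share one idiom: the first signal requests a graceful stop, and a second signal aborts the blocking event iteration so the tool cannot hang. A minimal self-contained sketch of that idiom (generic C, not rtla code; the write/_exit path stands in for tracefs_iterate_stop()):

#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t stop_requested;

static void on_stop_signal(int sig)
{
	(void)sig;
	if (stop_requested) {
		/* Second signal in a row: abort blocking work and bail
		 * out, analogous to tracefs_iterate_stop() above. */
		(void)write(STDERR_FILENO, "aborting\n", 9);
		_exit(1);
	}
	/* First signal: only request a graceful stop. */
	stop_requested = 1;
}

int main(void)
{
	signal(SIGINT, on_stop_signal);
	while (!stop_requested)
		pause();	/* stand-in for the event-processing loop */
	return 0;
}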
+diff --git a/tools/tracing/rtla/src/timerlat_top.c b/tools/tracing/rtla/src/timerlat_top.c
+index d13be28dacd599..0def5fec51ed7a 100644
+--- a/tools/tracing/rtla/src/timerlat_top.c
++++ b/tools/tracing/rtla/src/timerlat_top.c
+@@ -897,6 +897,14 @@ static int stop_tracing;
+ static struct trace_instance *top_inst = NULL;
+ static void stop_top(int sig)
+ {
++ if (stop_tracing) {
++ /*
++ * Stop requested twice in a row; abort event processing and
++ * exit immediately
++ */
++ tracefs_iterate_stop(top_inst->inst);
++ return;
++ }
+ stop_tracing = 1;
+ if (top_inst)
+ trace_instance_stop(top_inst);
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-02-27 13:22 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-02-27 13:22 UTC (permalink / raw
To: gentoo-commits
commit: 43affa7d97bc920177a436c3997d9b1fb0cf1521
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Feb 27 13:22:00 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Feb 27 13:22:00 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=43affa7d
Linux patch 6.12.17
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1016_linux-6.12.17.patch | 9620 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 9624 insertions(+)
diff --git a/0000_README b/0000_README
index 9f0c3a67..8efc8938 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch: 1015_linux-6.12.16.patch
From: https://www.kernel.org
Desc: Linux 6.12.16
+Patch: 1016_linux-6.12.17.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.17
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1016_linux-6.12.17.patch b/1016_linux-6.12.17.patch
new file mode 100644
index 00000000..cebbb158
--- /dev/null
+++ b/1016_linux-6.12.17.patch
@@ -0,0 +1,9620 @@
+diff --git a/Documentation/networking/strparser.rst b/Documentation/networking/strparser.rst
+index 6cab1f74ae05a3..7f623d1db72aae 100644
+--- a/Documentation/networking/strparser.rst
++++ b/Documentation/networking/strparser.rst
+@@ -112,7 +112,7 @@ Functions
+ Callbacks
+ =========
+
+-There are six callbacks:
++There are seven callbacks:
+
+ ::
+
+@@ -182,6 +182,13 @@ There are six callbacks:
+ the length of the message. skb->len - offset may be greater
+ than full_len since strparser does not trim the skb.
+
++ ::
++
++ int (*read_sock)(struct strparser *strp, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor);
++
++ The read_sock callback is used by strparser instead of
++ sock->ops->read_sock, if provided.
+ ::
+
+ int (*read_sock_done)(struct strparser *strp, int err);
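To make the new hook concrete, a hedged sketch of a strparser user wiring it up (the my_* names are invented for illustration; the strp_init()/strp_data_ready() plumbing is omitted):

#include <linux/skbuff.h>
#include <net/strparser.h>
#include <net/tcp.h>

/* Illustrative callbacks only; my_* is not strparser API. */
static int my_parse_msg(struct strparser *strp, struct sk_buff *skb)
{
	return skb->len;	/* treat every skb as one whole message */
}

static void my_rcv_msg(struct strparser *strp, struct sk_buff *skb)
{
	kfree_skb(skb);		/* consume the parsed message */
}

static int my_read_sock(struct strparser *strp, read_descriptor_t *desc,
			sk_read_actor_t recv_actor)
{
	/* Called by strparser in place of sock->ops->read_sock. */
	return tcp_read_sock(strp->sk, desc, recv_actor);
}

static const struct strp_callbacks my_cbs = {
	.rcv_msg   = my_rcv_msg,
	.parse_msg = my_parse_msg,
	.read_sock = my_read_sock,	/* passed to strp_init() by a real user */
};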
+diff --git a/Makefile b/Makefile
+index 340da922fa4f2c..e8b8c5b3840505 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183-pumpkin.dts b/arch/arm64/boot/dts/mediatek/mt8183-pumpkin.dts
+index 1aa668c3ccf928..dbdee604edab43 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183-pumpkin.dts
++++ b/arch/arm64/boot/dts/mediatek/mt8183-pumpkin.dts
+@@ -63,6 +63,18 @@ thermistor {
+ pulldown-ohm = <0>;
+ io-channels = <&auxadc 0>;
+ };
++
++ connector {
++ compatible = "hdmi-connector";
++ label = "hdmi";
++ type = "d";
++
++ port {
++ hdmi_connector_in: endpoint {
++ remote-endpoint = <&hdmi_connector_out>;
++ };
++ };
++ };
+ };
+
+ &auxadc {
+@@ -120,6 +132,43 @@ &i2c6 {
+ pinctrl-0 = <&i2c6_pins>;
+ status = "okay";
+ clock-frequency = <100000>;
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ it66121hdmitx: hdmitx@4c {
++ compatible = "ite,it66121";
++ reg = <0x4c>;
++ pinctrl-names = "default";
++ pinctrl-0 = <&ite_pins>;
++ reset-gpios = <&pio 160 GPIO_ACTIVE_LOW>;
++ interrupt-parent = <&pio>;
++ interrupts = <4 IRQ_TYPE_LEVEL_LOW>;
++ vcn33-supply = <&mt6358_vcn33_reg>;
++ vcn18-supply = <&mt6358_vcn18_reg>;
++ vrf12-supply = <&mt6358_vrf12_reg>;
++
++ ports {
++ #address-cells = <1>;
++ #size-cells = <0>;
++
++ port@0 {
++ reg = <0>;
++
++ it66121_in: endpoint {
++ bus-width = <12>;
++ remote-endpoint = <&dpi_out>;
++ };
++ };
++
++ port@1 {
++ reg = <1>;
++
++ hdmi_connector_out: endpoint {
++ remote-endpoint = <&hdmi_connector_in>;
++ };
++ };
++ };
++ };
+ };
+
+ &keyboard {
+@@ -362,6 +411,67 @@ pins_clk {
+ input-enable;
+ };
+ };
++
++ ite_pins: ite-pins {
++ pins-irq {
++ pinmux = <PINMUX_GPIO4__FUNC_GPIO4>;
++ input-enable;
++ bias-pull-up;
++ };
++
++ pins-rst {
++ pinmux = <PINMUX_GPIO160__FUNC_GPIO160>;
++ output-high;
++ };
++ };
++
++ dpi_func_pins: dpi-func-pins {
++ pins-dpi {
++ pinmux = <PINMUX_GPIO12__FUNC_I2S5_BCK>,
++ <PINMUX_GPIO46__FUNC_I2S5_LRCK>,
++ <PINMUX_GPIO47__FUNC_I2S5_DO>,
++ <PINMUX_GPIO13__FUNC_DBPI_D0>,
++ <PINMUX_GPIO14__FUNC_DBPI_D1>,
++ <PINMUX_GPIO15__FUNC_DBPI_D2>,
++ <PINMUX_GPIO16__FUNC_DBPI_D3>,
++ <PINMUX_GPIO17__FUNC_DBPI_D4>,
++ <PINMUX_GPIO18__FUNC_DBPI_D5>,
++ <PINMUX_GPIO19__FUNC_DBPI_D6>,
++ <PINMUX_GPIO20__FUNC_DBPI_D7>,
++ <PINMUX_GPIO21__FUNC_DBPI_D8>,
++ <PINMUX_GPIO22__FUNC_DBPI_D9>,
++ <PINMUX_GPIO23__FUNC_DBPI_D10>,
++ <PINMUX_GPIO24__FUNC_DBPI_D11>,
++ <PINMUX_GPIO25__FUNC_DBPI_HSYNC>,
++ <PINMUX_GPIO26__FUNC_DBPI_VSYNC>,
++ <PINMUX_GPIO27__FUNC_DBPI_DE>,
++ <PINMUX_GPIO28__FUNC_DBPI_CK>;
++ };
++ };
++
++ dpi_idle_pins: dpi-idle-pins {
++ pins-idle {
++ pinmux = <PINMUX_GPIO12__FUNC_GPIO12>,
++ <PINMUX_GPIO46__FUNC_GPIO46>,
++ <PINMUX_GPIO47__FUNC_GPIO47>,
++ <PINMUX_GPIO13__FUNC_GPIO13>,
++ <PINMUX_GPIO14__FUNC_GPIO14>,
++ <PINMUX_GPIO15__FUNC_GPIO15>,
++ <PINMUX_GPIO16__FUNC_GPIO16>,
++ <PINMUX_GPIO17__FUNC_GPIO17>,
++ <PINMUX_GPIO18__FUNC_GPIO18>,
++ <PINMUX_GPIO19__FUNC_GPIO19>,
++ <PINMUX_GPIO20__FUNC_GPIO20>,
++ <PINMUX_GPIO21__FUNC_GPIO21>,
++ <PINMUX_GPIO22__FUNC_GPIO22>,
++ <PINMUX_GPIO23__FUNC_GPIO23>,
++ <PINMUX_GPIO24__FUNC_GPIO24>,
++ <PINMUX_GPIO25__FUNC_GPIO25>,
++ <PINMUX_GPIO26__FUNC_GPIO26>,
++ <PINMUX_GPIO27__FUNC_GPIO27>,
++ <PINMUX_GPIO28__FUNC_GPIO28>;
++ };
++ };
+ };
+
+ &pmic {
+@@ -412,6 +522,15 @@ &scp {
+ status = "okay";
+ };
+
+-&dsi0 {
+- status = "disabled";
++&dpi0 {
++ pinctrl-names = "default", "sleep";
++ pinctrl-0 = <&dpi_func_pins>;
++ pinctrl-1 = <&dpi_idle_pins>;
++ status = "okay";
++
++ port {
++ dpi_out: endpoint {
++ remote-endpoint = <&it66121_in>;
++ };
++ };
+ };
+diff --git a/arch/arm64/boot/dts/mediatek/mt8183.dtsi b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+index 5cb6bd3c5acbb0..92c41463d10e37 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8183.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8183.dtsi
+@@ -1835,6 +1835,7 @@ dsi0: dsi@14014000 {
+ resets = <&mmsys MT8183_MMSYS_SW0_RST_B_DISP_DSI0>;
+ phys = <&mipi_tx0>;
+ phy-names = "dphy";
++ status = "disabled";
+ };
+
+ dpi0: dpi@14015000 {
+diff --git a/arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts b/arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts
+index ae398acdcf45e6..0905668cbe1f4e 100644
+--- a/arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts
++++ b/arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts
+@@ -226,7 +226,6 @@ &uart0 {
+ };
+
+ &uart5 {
+- pinctrl-0 = <&uart5_xfer>;
+ rts-gpios = <&gpio0 RK_PB5 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi b/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi
+index b7163ed74232d7..f743aaf78359d2 100644
+--- a/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi
++++ b/arch/arm64/boot/dts/rockchip/px30-ringneck.dtsi
+@@ -373,6 +373,12 @@ &u2phy_host {
+ status = "okay";
+ };
+
++&uart5 {
++ /delete-property/ dmas;
++ /delete-property/ dma-names;
++ pinctrl-0 = <&uart5_xfer>;
++};
++
+ /* Mule UCAN */
+ &usb_host0_ehci {
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus-lts.dts b/arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus-lts.dts
+index 4237f2ee8fee33..f57d4acd9807cb 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus-lts.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-orangepi-r1-plus-lts.dts
+@@ -15,9 +15,11 @@ / {
+ };
+
+ &gmac2io {
++ /delete-property/ tx_delay;
++ /delete-property/ rx_delay;
++
+ phy-handle = <&yt8531c>;
+- tx_delay = <0x19>;
+- rx_delay = <0x05>;
++ phy-mode = "rgmii-id";
+
+ mdio {
+ /delete-node/ ethernet-phy@1;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
+index fc67585b64b7ba..83e7e0fbe7839e 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588-base.dtsi
+@@ -549,10 +549,10 @@ usb_host2_xhci: usb@fcd00000 {
+ mmu600_pcie: iommu@fc900000 {
+ compatible = "arm,smmu-v3";
+ reg = <0x0 0xfc900000 0x0 0x200000>;
+- interrupts = <GIC_SPI 369 IRQ_TYPE_LEVEL_HIGH 0>,
+- <GIC_SPI 371 IRQ_TYPE_LEVEL_HIGH 0>,
+- <GIC_SPI 374 IRQ_TYPE_LEVEL_HIGH 0>,
+- <GIC_SPI 367 IRQ_TYPE_LEVEL_HIGH 0>;
++ interrupts = <GIC_SPI 369 IRQ_TYPE_EDGE_RISING 0>,
++ <GIC_SPI 371 IRQ_TYPE_EDGE_RISING 0>,
++ <GIC_SPI 374 IRQ_TYPE_EDGE_RISING 0>,
++ <GIC_SPI 367 IRQ_TYPE_EDGE_RISING 0>;
+ interrupt-names = "eventq", "gerror", "priq", "cmdq-sync";
+ #iommu-cells = <1>;
+ status = "disabled";
+@@ -561,10 +561,10 @@ mmu600_pcie: iommu@fc900000 {
+ mmu600_php: iommu@fcb00000 {
+ compatible = "arm,smmu-v3";
+ reg = <0x0 0xfcb00000 0x0 0x200000>;
+- interrupts = <GIC_SPI 381 IRQ_TYPE_LEVEL_HIGH 0>,
+- <GIC_SPI 383 IRQ_TYPE_LEVEL_HIGH 0>,
+- <GIC_SPI 386 IRQ_TYPE_LEVEL_HIGH 0>,
+- <GIC_SPI 379 IRQ_TYPE_LEVEL_HIGH 0>;
++ interrupts = <GIC_SPI 381 IRQ_TYPE_EDGE_RISING 0>,
++ <GIC_SPI 383 IRQ_TYPE_EDGE_RISING 0>,
++ <GIC_SPI 386 IRQ_TYPE_EDGE_RISING 0>,
++ <GIC_SPI 379 IRQ_TYPE_EDGE_RISING 0>;
+ interrupt-names = "eventq", "gerror", "priq", "cmdq-sync";
+ #iommu-cells = <1>;
+ status = "disabled";
+@@ -2626,9 +2626,9 @@ tsadc: tsadc@fec00000 {
+ rockchip,hw-tshut-temp = <120000>;
+ rockchip,hw-tshut-mode = <0>; /* tshut mode 0:CRU 1:GPIO */
+ rockchip,hw-tshut-polarity = <0>; /* tshut polarity 0:LOW 1:HIGH */
+- pinctrl-0 = <&tsadc_gpio_func>;
+- pinctrl-1 = <&tsadc_shut>;
+- pinctrl-names = "gpio", "otpout";
++ pinctrl-0 = <&tsadc_shut_org>;
++ pinctrl-1 = <&tsadc_gpio_func>;
++ pinctrl-names = "default", "sleep";
+ #thermal-sensor-cells = <1>;
+ status = "disabled";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5-genbook.dts b/arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5-genbook.dts
+index 6418286efe40d3..762d36ad733ab2 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5-genbook.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588-coolpi-cm5-genbook.dts
+@@ -101,7 +101,7 @@ vcc3v3_lcd: vcc3v3-lcd-regulator {
+ compatible = "regulator-fixed";
+ regulator-name = "vcc3v3_lcd";
+ enable-active-high;
+- gpio = <&gpio1 RK_PC4 GPIO_ACTIVE_HIGH>;
++ gpio = <&gpio0 RK_PC4 GPIO_ACTIVE_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&lcdpwr_en>;
+ vin-supply = <&vcc3v3_sys>;
+@@ -207,7 +207,7 @@ &pcie3x4 {
+ &pinctrl {
+ lcd {
+ lcdpwr_en: lcdpwr-en {
+- rockchip,pins = <1 RK_PC4 RK_FUNC_GPIO &pcfg_pull_down>;
++ rockchip,pins = <0 RK_PC4 RK_FUNC_GPIO &pcfg_pull_down>;
+ };
+
+ bl_en: bl-en {
+diff --git a/arch/arm64/include/asm/mman.h b/arch/arm64/include/asm/mman.h
+index 798d965760d434..5a280ac7570cdd 100644
+--- a/arch/arm64/include/asm/mman.h
++++ b/arch/arm64/include/asm/mman.h
+@@ -41,9 +41,12 @@ static inline unsigned long arch_calc_vm_flag_bits(struct file *file,
+ * backed by tags-capable memory. The vm_flags may be overridden by a
+ * filesystem supporting MTE (RAM-based).
+ */
+- if (system_supports_mte() &&
+- ((flags & MAP_ANONYMOUS) || shmem_file(file)))
+- return VM_MTE_ALLOWED;
++ if (system_supports_mte()) {
++ if ((flags & MAP_ANONYMOUS) && !(flags & MAP_HUGETLB))
++ return VM_MTE_ALLOWED;
++ if (shmem_file(file))
++ return VM_MTE_ALLOWED;
++ }
+
+ return 0;
+ }
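The user-visible effect of the hunk above, as a hedged test sketch: with VM_MTE_ALLOWED no longer set on hugetlb anonymous mappings, requesting tagging on them should be rejected (the expected outcome is inferred from this change, not part of the patch; PROT_MTE's value is the arm64 uapi constant, defined here only as a fallback):

#include <stdio.h>
#include <sys/mman.h>

#ifndef PROT_MTE
#define PROT_MTE 0x20	/* arm64 uapi value, fallback definition */
#endif

int main(void)
{
	size_t len = 2UL << 20;	/* one 2M hugetlb page */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");	/* needs hugetlb pages reserved */
		return 1;
	}
	/* With the fix, hugetlb memory is not VM_MTE_ALLOWED, so
	 * enabling tagging on it should now fail with EINVAL. */
	if (mprotect(p, len, PROT_READ | PROT_WRITE | PROT_MTE) == 0)
		puts("unexpected: PROT_MTE accepted on hugetlb");
	else
		puts("PROT_MTE refused on hugetlb, as expected");
	return 0;
}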
+diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
+index c3efacab4b9412..aa90a048f319a3 100644
+--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
++++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
+@@ -77,9 +77,17 @@
+ /*
+ * With 4K page size the real_pte machinery is all nops.
+ */
+-#define __real_pte(e, p, o) ((real_pte_t){(e)})
++static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep, int offset)
++{
++ return (real_pte_t){pte};
++}
++
+ #define __rpte_to_pte(r) ((r).pte)
+-#define __rpte_to_hidx(r,index) (pte_val(__rpte_to_pte(r)) >> H_PAGE_F_GIX_SHIFT)
++
++static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
++{
++ return pte_val(__rpte_to_pte(rpte)) >> H_PAGE_F_GIX_SHIFT;
++}
+
+ #define pte_iterate_hashed_subpages(rpte, psize, va, index, shift) \
+ do { \
+diff --git a/arch/powerpc/lib/code-patching.c b/arch/powerpc/lib/code-patching.c
+index acdab294b340a8..c1d9b031f0d578 100644
+--- a/arch/powerpc/lib/code-patching.c
++++ b/arch/powerpc/lib/code-patching.c
+@@ -108,7 +108,7 @@ static int text_area_cpu_up(unsigned int cpu)
+ unsigned long addr;
+ int err;
+
+- area = get_vm_area(PAGE_SIZE, VM_ALLOC);
++ area = get_vm_area(PAGE_SIZE, 0);
+ if (!area) {
+ WARN_ONCE(1, "Failed to create text area for cpu %d\n",
+ cpu);
+@@ -493,7 +493,9 @@ static int __do_patch_instructions_mm(u32 *addr, u32 *code, size_t len, bool rep
+
+ orig_mm = start_using_temp_mm(patching_mm);
+
++ kasan_disable_current();
+ err = __patch_instructions(patch_addr, code, len, repeat_instr);
++ kasan_enable_current();
+
+ /* context synchronisation performed by __patch_instructions */
+ stop_using_temp_mm(patching_mm, orig_mm);
+diff --git a/arch/s390/boot/startup.c b/arch/s390/boot/startup.c
+index c2ee0745f59edc..7b69be63d5d20a 100644
+--- a/arch/s390/boot/startup.c
++++ b/arch/s390/boot/startup.c
+@@ -75,7 +75,7 @@ static int cmma_test_essa(void)
+ : [reg1] "=&d" (reg1),
+ [reg2] "=&a" (reg2),
+ [rc] "+&d" (rc),
+- [tmp] "=&d" (tmp),
++ [tmp] "+&d" (tmp),
+ "+Q" (get_lowcore()->program_new_psw),
+ "=Q" (old)
+ : [psw_old] "a" (&old),
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index f5bf400f6a2833..9ec3170c18f925 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -397,34 +397,28 @@ static struct event_constraint intel_lnc_event_constraints[] = {
+ METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_FETCH_LAT, 6),
+ METRIC_EVENT_CONSTRAINT(INTEL_TD_METRIC_MEM_BOUND, 7),
+
++ INTEL_EVENT_CONSTRAINT(0x20, 0xf),
++
++ INTEL_UEVENT_CONSTRAINT(0x012a, 0xf),
++ INTEL_UEVENT_CONSTRAINT(0x012b, 0xf),
+ INTEL_UEVENT_CONSTRAINT(0x0148, 0x4),
+ INTEL_UEVENT_CONSTRAINT(0x0175, 0x4),
+
+ INTEL_EVENT_CONSTRAINT(0x2e, 0x3ff),
+ INTEL_EVENT_CONSTRAINT(0x3c, 0x3ff),
+- /*
+- * Generally event codes < 0x90 are restricted to counters 0-3.
+- * The 0x2E and 0x3C are exception, which has no restriction.
+- */
+- INTEL_EVENT_CONSTRAINT_RANGE(0x01, 0x8f, 0xf),
+
+- INTEL_UEVENT_CONSTRAINT(0x01a3, 0xf),
+- INTEL_UEVENT_CONSTRAINT(0x02a3, 0xf),
+ INTEL_UEVENT_CONSTRAINT(0x08a3, 0x4),
+ INTEL_UEVENT_CONSTRAINT(0x0ca3, 0x4),
+ INTEL_UEVENT_CONSTRAINT(0x04a4, 0x1),
+ INTEL_UEVENT_CONSTRAINT(0x08a4, 0x1),
+ INTEL_UEVENT_CONSTRAINT(0x10a4, 0x1),
+ INTEL_UEVENT_CONSTRAINT(0x01b1, 0x8),
++ INTEL_UEVENT_CONSTRAINT(0x01cd, 0x3fc),
+ INTEL_UEVENT_CONSTRAINT(0x02cd, 0x3),
+- INTEL_EVENT_CONSTRAINT(0xce, 0x1),
+
+ INTEL_EVENT_CONSTRAINT_RANGE(0xd0, 0xdf, 0xf),
+- /*
+- * Generally event codes >= 0x90 are likely to have no restrictions.
+- * The exception are defined as above.
+- */
+- INTEL_EVENT_CONSTRAINT_RANGE(0x90, 0xfe, 0x3ff),
++
++ INTEL_UEVENT_CONSTRAINT(0x00e0, 0xf),
+
+ EVENT_CONSTRAINT_END
+ };
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index b6303b0224531b..c07ca43e67e7f1 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -1178,7 +1178,7 @@ struct event_constraint intel_lnc_pebs_event_constraints[] = {
+ INTEL_FLAGS_UEVENT_CONSTRAINT(0x100, 0x100000000ULL), /* INST_RETIRED.PREC_DIST */
+ INTEL_FLAGS_UEVENT_CONSTRAINT(0x0400, 0x800000000ULL),
+
+- INTEL_HYBRID_LDLAT_CONSTRAINT(0x1cd, 0x3ff),
++ INTEL_HYBRID_LDLAT_CONSTRAINT(0x1cd, 0x3fc),
+ INTEL_HYBRID_STLAT_CONSTRAINT(0x2cd, 0x3),
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_LD(0x11d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_LOADS */
+ INTEL_FLAGS_UEVENT_CONSTRAINT_DATALA_ST(0x12d0, 0xf), /* MEM_INST_RETIRED.STLB_MISS_STORES */
+diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
+index 375bbb9600d3c1..1a8148dec4afe9 100644
+--- a/arch/x86/kvm/lapic.c
++++ b/arch/x86/kvm/lapic.c
+@@ -816,6 +816,17 @@ static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
+ }
+ }
+
++void kvm_apic_update_hwapic_isr(struct kvm_vcpu *vcpu)
++{
++ struct kvm_lapic *apic = vcpu->arch.apic;
++
++ if (WARN_ON_ONCE(!lapic_in_kernel(vcpu)) || !apic->apicv_active)
++ return;
++
++ kvm_x86_call(hwapic_isr_update)(vcpu, apic_find_highest_isr(apic));
++}
++EXPORT_SYMBOL_GPL(kvm_apic_update_hwapic_isr);
++
+ int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu)
+ {
+ /* This may race with setting of irr in __apic_accept_irq() and
+diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
+index 1b8ef9856422a4..3aa599db779689 100644
+--- a/arch/x86/kvm/lapic.h
++++ b/arch/x86/kvm/lapic.h
+@@ -117,11 +117,10 @@ bool kvm_irq_delivery_to_apic_fast(struct kvm *kvm, struct kvm_lapic *src,
+ struct kvm_lapic_irq *irq, int *r, struct dest_map *dest_map);
+ void kvm_apic_send_ipi(struct kvm_lapic *apic, u32 icr_low, u32 icr_high);
+
+-u64 kvm_get_apic_base(struct kvm_vcpu *vcpu);
+ int kvm_set_apic_base(struct kvm_vcpu *vcpu, struct msr_data *msr_info);
+ int kvm_apic_get_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s);
+ int kvm_apic_set_state(struct kvm_vcpu *vcpu, struct kvm_lapic_state *s);
+-enum lapic_mode kvm_get_apic_mode(struct kvm_vcpu *vcpu);
++void kvm_apic_update_hwapic_isr(struct kvm_vcpu *vcpu);
+ int kvm_lapic_find_highest_irr(struct kvm_vcpu *vcpu);
+
+ u64 kvm_get_lapic_tscdeadline_msr(struct kvm_vcpu *vcpu);
+@@ -271,6 +270,11 @@ static inline enum lapic_mode kvm_apic_mode(u64 apic_base)
+ return apic_base & (MSR_IA32_APICBASE_ENABLE | X2APIC_ENABLE);
+ }
+
++static inline enum lapic_mode kvm_get_apic_mode(struct kvm_vcpu *vcpu)
++{
++ return kvm_apic_mode(vcpu->arch.apic_base);
++}
++
+ static inline u8 kvm_xapic_id(struct kvm_lapic *apic)
+ {
+ return kvm_lapic_get_reg(apic, APIC_ID) >> 24;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 931a7361c30f2d..22bee8a711442d 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -5043,6 +5043,11 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 vm_exit_reason,
+ kvm_make_request(KVM_REQ_APICV_UPDATE, vcpu);
+ }
+
++ if (vmx->nested.update_vmcs01_hwapic_isr) {
++ vmx->nested.update_vmcs01_hwapic_isr = false;
++ kvm_apic_update_hwapic_isr(vcpu);
++ }
++
+ if ((vm_exit_reason != -1) &&
+ (enable_shadow_vmcs || nested_vmx_is_evmptr12_valid(vmx)))
+ vmx->nested.need_vmcs12_to_shadow_sync = true;
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index f06d443ec3c68d..1af30e3472cdd9 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -6858,6 +6858,27 @@ void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
+ u16 status;
+ u8 old;
+
++ /*
++ * If L2 is active, defer the SVI update until vmcs01 is loaded, as SVI
++ * is only relevant if and only if Virtual Interrupt Delivery is
++ * enabled in vmcs12, and if VID is enabled then L2 EOIs affect L2's
++ * vAPIC, not L1's vAPIC. KVM must update vmcs01 on the next nested
++ * VM-Exit, otherwise L1 will run with a stale SVI.
++ */
++ if (is_guest_mode(vcpu)) {
++ /*
++ * KVM is supposed to forward intercepted L2 EOIs to L1 if VID
++ * is enabled in vmcs12; as above, the EOIs affect L2's vAPIC.
++ * Note, userspace can stuff state while L2 is active; assert
++ * that VID is disabled if and only if the vCPU is in KVM_RUN
++ * to avoid false positives if userspace is setting APIC state.
++ */
++ WARN_ON_ONCE(vcpu->wants_to_run &&
++ nested_cpu_has_vid(get_vmcs12(vcpu)));
++ to_vmx(vcpu)->nested.update_vmcs01_hwapic_isr = true;
++ return;
++ }
++
+ if (max_isr == -1)
+ max_isr = 0;
+
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 2325f773a20be0..41bf59bbc6426c 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -176,6 +176,7 @@ struct nested_vmx {
+ bool reload_vmcs01_apic_access_page;
+ bool update_vmcs01_cpu_dirty_logging;
+ bool update_vmcs01_apicv_status;
++ bool update_vmcs01_hwapic_isr;
+
+ /*
+ * Enlightened VMCS has been enabled. It does not mean that L1 has to
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 0846e3af5f6c5a..b67a2f46e40b05 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -667,17 +667,6 @@ static void drop_user_return_notifiers(void)
+ kvm_on_user_return(&msrs->urn);
+ }
+
+-u64 kvm_get_apic_base(struct kvm_vcpu *vcpu)
+-{
+- return vcpu->arch.apic_base;
+-}
+-
+-enum lapic_mode kvm_get_apic_mode(struct kvm_vcpu *vcpu)
+-{
+- return kvm_apic_mode(kvm_get_apic_base(vcpu));
+-}
+-EXPORT_SYMBOL_GPL(kvm_get_apic_mode);
+-
+ int kvm_set_apic_base(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ {
+ enum lapic_mode old_mode = kvm_get_apic_mode(vcpu);
+@@ -4314,7 +4303,7 @@ int kvm_get_msr_common(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
+ msr_info->data = 1 << 24;
+ break;
+ case MSR_IA32_APICBASE:
+- msr_info->data = kvm_get_apic_base(vcpu);
++ msr_info->data = vcpu->arch.apic_base;
+ break;
+ case APIC_BASE_MSR ... APIC_BASE_MSR + 0xff:
+ return kvm_x2apic_msr_read(vcpu, msr_info->index, &msr_info->data);
+@@ -10159,7 +10148,7 @@ static void post_kvm_run_save(struct kvm_vcpu *vcpu)
+
+ kvm_run->if_flag = kvm_x86_call(get_if_flag)(vcpu);
+ kvm_run->cr8 = kvm_get_cr8(vcpu);
+- kvm_run->apic_base = kvm_get_apic_base(vcpu);
++ kvm_run->apic_base = vcpu->arch.apic_base;
+
+ kvm_run->ready_for_interrupt_injection =
+ pic_in_kernel(vcpu->kvm) ||
+@@ -11718,7 +11707,7 @@ static void __get_sregs_common(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+ sregs->cr4 = kvm_read_cr4(vcpu);
+ sregs->cr8 = kvm_get_cr8(vcpu);
+ sregs->efer = vcpu->arch.efer;
+- sregs->apic_base = kvm_get_apic_base(vcpu);
++ sregs->apic_base = vcpu->arch.apic_base;
+ }
+
+ static void __get_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+diff --git a/drivers/accel/ivpu/Kconfig b/drivers/accel/ivpu/Kconfig
+index 682c532452863e..e4d418b44626ed 100644
+--- a/drivers/accel/ivpu/Kconfig
++++ b/drivers/accel/ivpu/Kconfig
+@@ -8,6 +8,7 @@ config DRM_ACCEL_IVPU
+ select FW_LOADER
+ select DRM_GEM_SHMEM_HELPER
+ select GENERIC_ALLOCATOR
++ select WANT_DEV_COREDUMP
+ help
+ Choose this option if you have a system with a 14th generation
+ Intel CPU (Meteor Lake) or newer. Intel NPU (formerly called Intel VPU)
+diff --git a/drivers/accel/ivpu/Makefile b/drivers/accel/ivpu/Makefile
+index ebd682a42eb124..232ea6d28c6e25 100644
+--- a/drivers/accel/ivpu/Makefile
++++ b/drivers/accel/ivpu/Makefile
+@@ -19,5 +19,6 @@ intel_vpu-y := \
+ ivpu_sysfs.o
+
+ intel_vpu-$(CONFIG_DEBUG_FS) += ivpu_debugfs.o
++intel_vpu-$(CONFIG_DEV_COREDUMP) += ivpu_coredump.o
+
+ obj-$(CONFIG_DRM_ACCEL_IVPU) += intel_vpu.o
+diff --git a/drivers/accel/ivpu/ivpu_coredump.c b/drivers/accel/ivpu/ivpu_coredump.c
+new file mode 100644
+index 00000000000000..16ad0c30818ccf
+--- /dev/null
++++ b/drivers/accel/ivpu/ivpu_coredump.c
+@@ -0,0 +1,39 @@
++// SPDX-License-Identifier: GPL-2.0-only
++/*
++ * Copyright (C) 2020-2024 Intel Corporation
++ */
++
++#include <linux/devcoredump.h>
++#include <linux/firmware.h>
++
++#include "ivpu_coredump.h"
++#include "ivpu_fw.h"
++#include "ivpu_gem.h"
++#include "vpu_boot_api.h"
++
++#define CRASH_DUMP_HEADER "Intel NPU crash dump"
++#define CRASH_DUMP_HEADERS_SIZE SZ_4K
++
++void ivpu_dev_coredump(struct ivpu_device *vdev)
++{
++ struct drm_print_iterator pi = {};
++ struct drm_printer p;
++ size_t coredump_size;
++ char *coredump;
++
++ coredump_size = CRASH_DUMP_HEADERS_SIZE + FW_VERSION_HEADER_SIZE +
++ ivpu_bo_size(vdev->fw->mem_log_crit) + ivpu_bo_size(vdev->fw->mem_log_verb);
++ coredump = vmalloc(coredump_size);
++ if (!coredump)
++ return;
++
++ pi.data = coredump;
++ pi.remain = coredump_size;
++ p = drm_coredump_printer(&pi);
++
++ drm_printf(&p, "%s\n", CRASH_DUMP_HEADER);
++ drm_printf(&p, "FW version: %s\n", vdev->fw->version);
++ ivpu_fw_log_print(vdev, false, &p);
++
++ dev_coredumpv(vdev->drm.dev, coredump, pi.offset, GFP_KERNEL);
++}
+diff --git a/drivers/accel/ivpu/ivpu_coredump.h b/drivers/accel/ivpu/ivpu_coredump.h
+new file mode 100644
+index 00000000000000..8efb09d0244115
+--- /dev/null
++++ b/drivers/accel/ivpu/ivpu_coredump.h
+@@ -0,0 +1,25 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Copyright (C) 2020-2024 Intel Corporation
++ */
++
++#ifndef __IVPU_COREDUMP_H__
++#define __IVPU_COREDUMP_H__
++
++#include <drm/drm_print.h>
++
++#include "ivpu_drv.h"
++#include "ivpu_fw_log.h"
++
++#ifdef CONFIG_DEV_COREDUMP
++void ivpu_dev_coredump(struct ivpu_device *vdev);
++#else
++static inline void ivpu_dev_coredump(struct ivpu_device *vdev)
++{
++ struct drm_printer p = drm_info_printer(vdev->drm.dev);
++
++ ivpu_fw_log_print(vdev, false, &p);
++}
++#endif
++
++#endif /* __IVPU_COREDUMP_H__ */
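For orientation, dev_coredumpv() hands the buffer to the devcoredump class, which publishes it to userspace; a hedged reader sketch (the devcd node number varies per dump, so the exact path here is an assumption):

#include <stdio.h>

int main(void)
{
	/* Each dump appears as /sys/class/devcoredump/devcdN/data and
	 * is discarded after a timeout or when data is written back. */
	const char *path = "/sys/class/devcoredump/devcd1/data";
	FILE *f = fopen(path, "r");
	char buf[4096];
	size_t n;

	if (!f) {
		perror(path);
		return 1;
	}
	/* The ivpu dump begins with the "Intel NPU crash dump" header
	 * and FW version line written by ivpu_dev_coredump() above. */
	while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
		fwrite(buf, 1, n, stdout);
	fclose(f);
	return 0;
}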
+diff --git a/drivers/accel/ivpu/ivpu_drv.c b/drivers/accel/ivpu/ivpu_drv.c
+index c91400ecf92651..38b4158f52784b 100644
+--- a/drivers/accel/ivpu/ivpu_drv.c
++++ b/drivers/accel/ivpu/ivpu_drv.c
+@@ -14,7 +14,7 @@
+ #include <drm/drm_ioctl.h>
+ #include <drm/drm_prime.h>
+
+-#include "vpu_boot_api.h"
++#include "ivpu_coredump.h"
+ #include "ivpu_debugfs.h"
+ #include "ivpu_drv.h"
+ #include "ivpu_fw.h"
+@@ -29,6 +29,7 @@
+ #include "ivpu_ms.h"
+ #include "ivpu_pm.h"
+ #include "ivpu_sysfs.h"
++#include "vpu_boot_api.h"
+
+ #ifndef DRIVER_VERSION_STR
+ #define DRIVER_VERSION_STR __stringify(DRM_IVPU_DRIVER_MAJOR) "." \
+@@ -382,7 +383,7 @@ int ivpu_boot(struct ivpu_device *vdev)
+ ivpu_err(vdev, "Failed to boot the firmware: %d\n", ret);
+ ivpu_hw_diagnose_failure(vdev);
+ ivpu_mmu_evtq_dump(vdev);
+- ivpu_fw_log_dump(vdev);
++ ivpu_dev_coredump(vdev);
+ return ret;
+ }
+
+diff --git a/drivers/accel/ivpu/ivpu_drv.h b/drivers/accel/ivpu/ivpu_drv.h
+index 63f13b697eed71..2b30cc2e9272e4 100644
+--- a/drivers/accel/ivpu/ivpu_drv.h
++++ b/drivers/accel/ivpu/ivpu_drv.h
+@@ -152,6 +152,7 @@ struct ivpu_device {
+ int tdr;
+ int autosuspend;
+ int d0i3_entry_msg;
++ int state_dump_msg;
+ } timeout;
+ };
+
+diff --git a/drivers/accel/ivpu/ivpu_fw.c b/drivers/accel/ivpu/ivpu_fw.c
+index ede6165e09d90d..b2b6d89f06537f 100644
+--- a/drivers/accel/ivpu/ivpu_fw.c
++++ b/drivers/accel/ivpu/ivpu_fw.c
+@@ -25,7 +25,6 @@
+ #define FW_SHAVE_NN_MAX_SIZE SZ_2M
+ #define FW_RUNTIME_MIN_ADDR (FW_GLOBAL_MEM_START)
+ #define FW_RUNTIME_MAX_ADDR (FW_GLOBAL_MEM_END - FW_SHARED_MEM_SIZE)
+-#define FW_VERSION_HEADER_SIZE SZ_4K
+ #define FW_FILE_IMAGE_OFFSET (VPU_FW_HEADER_SIZE + FW_VERSION_HEADER_SIZE)
+
+ #define WATCHDOG_MSS_REDIRECT 32
+@@ -191,8 +190,10 @@ static int ivpu_fw_parse(struct ivpu_device *vdev)
+ ivpu_dbg(vdev, FW_BOOT, "Header version: 0x%x, format 0x%x\n",
+ fw_hdr->header_version, fw_hdr->image_format);
+
+- ivpu_info(vdev, "Firmware: %s, version: %s", fw->name,
+- (const char *)fw_hdr + VPU_FW_HEADER_SIZE);
++ if (!scnprintf(fw->version, sizeof(fw->version), "%s", fw->file->data + VPU_FW_HEADER_SIZE))
++ ivpu_warn(vdev, "Missing firmware version\n");
++
++ ivpu_info(vdev, "Firmware: %s, version: %s\n", fw->name, fw->version);
+
+ if (IVPU_FW_CHECK_API_COMPAT(vdev, fw_hdr, BOOT, 3))
+ return -EINVAL;
+diff --git a/drivers/accel/ivpu/ivpu_fw.h b/drivers/accel/ivpu/ivpu_fw.h
+index 40d9d17be3f528..5e8eb608b70f1f 100644
+--- a/drivers/accel/ivpu/ivpu_fw.h
++++ b/drivers/accel/ivpu/ivpu_fw.h
+@@ -1,11 +1,14 @@
+ /* SPDX-License-Identifier: GPL-2.0-only */
+ /*
+- * Copyright (C) 2020-2023 Intel Corporation
++ * Copyright (C) 2020-2024 Intel Corporation
+ */
+
+ #ifndef __IVPU_FW_H__
+ #define __IVPU_FW_H__
+
++#define FW_VERSION_HEADER_SIZE SZ_4K
++#define FW_VERSION_STR_SIZE SZ_256
++
+ struct ivpu_device;
+ struct ivpu_bo;
+ struct vpu_boot_params;
+@@ -13,6 +16,7 @@ struct vpu_boot_params;
+ struct ivpu_fw_info {
+ const struct firmware *file;
+ const char *name;
++ char version[FW_VERSION_STR_SIZE];
+ struct ivpu_bo *mem;
+ struct ivpu_bo *mem_shave_nn;
+ struct ivpu_bo *mem_log_crit;
+diff --git a/drivers/accel/ivpu/ivpu_fw_log.h b/drivers/accel/ivpu/ivpu_fw_log.h
+index 0b2573f6f31519..4b390a99699d66 100644
+--- a/drivers/accel/ivpu/ivpu_fw_log.h
++++ b/drivers/accel/ivpu/ivpu_fw_log.h
+@@ -8,8 +8,6 @@
+
+ #include <linux/types.h>
+
+-#include <drm/drm_print.h>
+-
+ #include "ivpu_drv.h"
+
+ #define IVPU_FW_LOG_DEFAULT 0
+@@ -28,11 +26,5 @@ extern unsigned int ivpu_log_level;
+ void ivpu_fw_log_print(struct ivpu_device *vdev, bool only_new_msgs, struct drm_printer *p);
+ void ivpu_fw_log_clear(struct ivpu_device *vdev);
+
+-static inline void ivpu_fw_log_dump(struct ivpu_device *vdev)
+-{
+- struct drm_printer p = drm_info_printer(vdev->drm.dev);
+-
+- ivpu_fw_log_print(vdev, false, &p);
+-}
+
+ #endif /* __IVPU_FW_LOG_H__ */
+diff --git a/drivers/accel/ivpu/ivpu_hw.c b/drivers/accel/ivpu/ivpu_hw.c
+index e69c0613513f11..08b3cef58fd2d7 100644
+--- a/drivers/accel/ivpu/ivpu_hw.c
++++ b/drivers/accel/ivpu/ivpu_hw.c
+@@ -89,12 +89,14 @@ static void timeouts_init(struct ivpu_device *vdev)
+ vdev->timeout.tdr = 2000000;
+ vdev->timeout.autosuspend = -1;
+ vdev->timeout.d0i3_entry_msg = 500;
++ vdev->timeout.state_dump_msg = 10;
+ } else if (ivpu_is_simics(vdev)) {
+ vdev->timeout.boot = 50;
+ vdev->timeout.jsm = 500;
+ vdev->timeout.tdr = 10000;
+ vdev->timeout.autosuspend = -1;
+ vdev->timeout.d0i3_entry_msg = 100;
++ vdev->timeout.state_dump_msg = 10;
+ } else {
+ vdev->timeout.boot = 1000;
+ vdev->timeout.jsm = 500;
+@@ -104,6 +106,7 @@ static void timeouts_init(struct ivpu_device *vdev)
+ else
+ vdev->timeout.autosuspend = 100;
+ vdev->timeout.d0i3_entry_msg = 5;
++ vdev->timeout.state_dump_msg = 10;
+ }
+ }
+
+diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c
+index 29b723039a3459..13c8a12162e89e 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.c
++++ b/drivers/accel/ivpu/ivpu_ipc.c
+@@ -353,6 +353,32 @@ int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+ return ret;
+ }
+
++int ivpu_ipc_send_and_wait(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
++ u32 channel, unsigned long timeout_ms)
++{
++ struct ivpu_ipc_consumer cons;
++ int ret;
++
++ ret = ivpu_rpm_get(vdev);
++ if (ret < 0)
++ return ret;
++
++ ivpu_ipc_consumer_add(vdev, &cons, channel, NULL);
++
++ ret = ivpu_ipc_send(vdev, &cons, req);
++ if (ret) {
++ ivpu_warn_ratelimited(vdev, "IPC send failed: %d\n", ret);
++ goto consumer_del;
++ }
++
++ msleep(timeout_ms);
++
++consumer_del:
++ ivpu_ipc_consumer_del(vdev, &cons);
++ ivpu_rpm_put(vdev);
++ return ret;
++}
++
+ static bool
+ ivpu_ipc_match_consumer(struct ivpu_device *vdev, struct ivpu_ipc_consumer *cons,
+ struct ivpu_ipc_hdr *ipc_hdr, struct vpu_jsm_msg *jsm_msg)
+diff --git a/drivers/accel/ivpu/ivpu_ipc.h b/drivers/accel/ivpu/ivpu_ipc.h
+index fb4de7fb8210ea..b4dfb504679bac 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.h
++++ b/drivers/accel/ivpu/ivpu_ipc.h
+@@ -107,5 +107,7 @@ int ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg
+ int ivpu_ipc_send_receive(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
+ enum vpu_ipc_msg_type expected_resp, struct vpu_jsm_msg *resp,
+ u32 channel, unsigned long timeout_ms);
++int ivpu_ipc_send_and_wait(struct ivpu_device *vdev, struct vpu_jsm_msg *req,
++ u32 channel, unsigned long timeout_ms);
+
+ #endif /* __IVPU_IPC_H__ */
+diff --git a/drivers/accel/ivpu/ivpu_jsm_msg.c b/drivers/accel/ivpu/ivpu_jsm_msg.c
+index 88105963c1b288..f7618b605f0219 100644
+--- a/drivers/accel/ivpu/ivpu_jsm_msg.c
++++ b/drivers/accel/ivpu/ivpu_jsm_msg.c
+@@ -555,3 +555,11 @@ int ivpu_jsm_dct_disable(struct ivpu_device *vdev)
+ return ivpu_ipc_send_receive_internal(vdev, &req, VPU_JSM_MSG_DCT_DISABLE_DONE, &resp,
+ VPU_IPC_CHAN_ASYNC_CMD, vdev->timeout.jsm);
+ }
++
++int ivpu_jsm_state_dump(struct ivpu_device *vdev)
++{
++ struct vpu_jsm_msg req = { .type = VPU_JSM_MSG_STATE_DUMP };
++
++ return ivpu_ipc_send_and_wait(vdev, &req, VPU_IPC_CHAN_ASYNC_CMD,
++ vdev->timeout.state_dump_msg);
++}
+diff --git a/drivers/accel/ivpu/ivpu_jsm_msg.h b/drivers/accel/ivpu/ivpu_jsm_msg.h
+index e4e42c0ff6e656..9e84d3526a1463 100644
+--- a/drivers/accel/ivpu/ivpu_jsm_msg.h
++++ b/drivers/accel/ivpu/ivpu_jsm_msg.h
+@@ -43,4 +43,6 @@ int ivpu_jsm_metric_streamer_info(struct ivpu_device *vdev, u64 metric_group_mas
+ u64 buffer_size, u32 *sample_size, u64 *info_size);
+ int ivpu_jsm_dct_enable(struct ivpu_device *vdev, u32 active_us, u32 inactive_us);
+ int ivpu_jsm_dct_disable(struct ivpu_device *vdev);
++int ivpu_jsm_state_dump(struct ivpu_device *vdev);
++
+ #endif
+diff --git a/drivers/accel/ivpu/ivpu_pm.c b/drivers/accel/ivpu/ivpu_pm.c
+index ef9a4ba18cb8a8..fbb61a2c3b19ce 100644
+--- a/drivers/accel/ivpu/ivpu_pm.c
++++ b/drivers/accel/ivpu/ivpu_pm.c
+@@ -9,17 +9,18 @@
+ #include <linux/pm_runtime.h>
+ #include <linux/reboot.h>
+
+-#include "vpu_boot_api.h"
++#include "ivpu_coredump.h"
+ #include "ivpu_drv.h"
+-#include "ivpu_hw.h"
+ #include "ivpu_fw.h"
+ #include "ivpu_fw_log.h"
++#include "ivpu_hw.h"
+ #include "ivpu_ipc.h"
+ #include "ivpu_job.h"
+ #include "ivpu_jsm_msg.h"
+ #include "ivpu_mmu.h"
+ #include "ivpu_ms.h"
+ #include "ivpu_pm.h"
++#include "vpu_boot_api.h"
+
+ static bool ivpu_disable_recovery;
+ module_param_named_unsafe(disable_recovery, ivpu_disable_recovery, bool, 0644);
+@@ -110,40 +111,57 @@ static int ivpu_resume(struct ivpu_device *vdev)
+ return ret;
+ }
+
+-static void ivpu_pm_recovery_work(struct work_struct *work)
++static void ivpu_pm_reset_begin(struct ivpu_device *vdev)
+ {
+- struct ivpu_pm_info *pm = container_of(work, struct ivpu_pm_info, recovery_work);
+- struct ivpu_device *vdev = pm->vdev;
+- char *evt[2] = {"IVPU_PM_EVENT=IVPU_RECOVER", NULL};
+- int ret;
+-
+- ivpu_err(vdev, "Recovering the NPU (reset #%d)\n", atomic_read(&vdev->pm->reset_counter));
+-
+- ret = pm_runtime_resume_and_get(vdev->drm.dev);
+- if (ret)
+- ivpu_err(vdev, "Failed to resume NPU: %d\n", ret);
+-
+- ivpu_fw_log_dump(vdev);
++ pm_runtime_disable(vdev->drm.dev);
+
+ atomic_inc(&vdev->pm->reset_counter);
+ atomic_set(&vdev->pm->reset_pending, 1);
+ down_write(&vdev->pm->reset_lock);
++}
++
++static void ivpu_pm_reset_complete(struct ivpu_device *vdev)
++{
++ int ret;
+
+- ivpu_suspend(vdev);
+ ivpu_pm_prepare_cold_boot(vdev);
+ ivpu_jobs_abort_all(vdev);
+ ivpu_ms_cleanup_all(vdev);
+
+ ret = ivpu_resume(vdev);
+- if (ret)
++ if (ret) {
+ ivpu_err(vdev, "Failed to resume NPU: %d\n", ret);
++ pm_runtime_set_suspended(vdev->drm.dev);
++ } else {
++ pm_runtime_set_active(vdev->drm.dev);
++ }
+
+ up_write(&vdev->pm->reset_lock);
+ atomic_set(&vdev->pm->reset_pending, 0);
+
+- kobject_uevent_env(&vdev->drm.dev->kobj, KOBJ_CHANGE, evt);
+ pm_runtime_mark_last_busy(vdev->drm.dev);
+- pm_runtime_put_autosuspend(vdev->drm.dev);
++ pm_runtime_enable(vdev->drm.dev);
++}
++
++static void ivpu_pm_recovery_work(struct work_struct *work)
++{
++ struct ivpu_pm_info *pm = container_of(work, struct ivpu_pm_info, recovery_work);
++ struct ivpu_device *vdev = pm->vdev;
++ char *evt[2] = {"IVPU_PM_EVENT=IVPU_RECOVER", NULL};
++
++ ivpu_err(vdev, "Recovering the NPU (reset #%d)\n", atomic_read(&vdev->pm->reset_counter));
++
++ ivpu_pm_reset_begin(vdev);
++
++ if (!pm_runtime_status_suspended(vdev->drm.dev)) {
++ ivpu_jsm_state_dump(vdev);
++ ivpu_dev_coredump(vdev);
++ ivpu_suspend(vdev);
++ }
++
++ ivpu_pm_reset_complete(vdev);
++
++ kobject_uevent_env(&vdev->drm.dev->kobj, KOBJ_CHANGE, evt);
+ }
+
+ void ivpu_pm_trigger_recovery(struct ivpu_device *vdev, const char *reason)
+@@ -262,7 +280,7 @@ int ivpu_pm_runtime_suspend_cb(struct device *dev)
+ if (!is_idle || ret_d0i3) {
+ ivpu_err(vdev, "Forcing cold boot due to previous errors\n");
+ atomic_inc(&vdev->pm->reset_counter);
+- ivpu_fw_log_dump(vdev);
++ ivpu_dev_coredump(vdev);
+ ivpu_pm_prepare_cold_boot(vdev);
+ } else {
+ ivpu_pm_prepare_warm_boot(vdev);
+@@ -314,16 +332,13 @@ void ivpu_pm_reset_prepare_cb(struct pci_dev *pdev)
+ struct ivpu_device *vdev = pci_get_drvdata(pdev);
+
+ ivpu_dbg(vdev, PM, "Pre-reset..\n");
+- atomic_inc(&vdev->pm->reset_counter);
+- atomic_set(&vdev->pm->reset_pending, 1);
+
+- pm_runtime_get_sync(vdev->drm.dev);
+- down_write(&vdev->pm->reset_lock);
+- ivpu_prepare_for_reset(vdev);
+- ivpu_hw_reset(vdev);
+- ivpu_pm_prepare_cold_boot(vdev);
+- ivpu_jobs_abort_all(vdev);
+- ivpu_ms_cleanup_all(vdev);
++ ivpu_pm_reset_begin(vdev);
++
++ if (!pm_runtime_status_suspended(vdev->drm.dev)) {
++ ivpu_prepare_for_reset(vdev);
++ ivpu_hw_reset(vdev);
++ }
+
+ ivpu_dbg(vdev, PM, "Pre-reset done.\n");
+ }
+@@ -331,18 +346,12 @@ void ivpu_pm_reset_prepare_cb(struct pci_dev *pdev)
+ void ivpu_pm_reset_done_cb(struct pci_dev *pdev)
+ {
+ struct ivpu_device *vdev = pci_get_drvdata(pdev);
+- int ret;
+
+ ivpu_dbg(vdev, PM, "Post-reset..\n");
+- ret = ivpu_resume(vdev);
+- if (ret)
+- ivpu_err(vdev, "Failed to set RESUME state: %d\n", ret);
+- up_write(&vdev->pm->reset_lock);
+- atomic_set(&vdev->pm->reset_pending, 0);
+- ivpu_dbg(vdev, PM, "Post-reset done.\n");
+
+- pm_runtime_mark_last_busy(vdev->drm.dev);
+- pm_runtime_put_autosuspend(vdev->drm.dev);
++ ivpu_pm_reset_complete(vdev);
++
++ ivpu_dbg(vdev, PM, "Post-reset done.\n");
+ }
+
+ void ivpu_pm_init(struct ivpu_device *vdev)
+diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
+index dfbbac92242a84..04d02c746ec0fd 100644
+--- a/drivers/bluetooth/btqca.c
++++ b/drivers/bluetooth/btqca.c
+@@ -272,6 +272,39 @@ int qca_send_pre_shutdown_cmd(struct hci_dev *hdev)
+ }
+ EXPORT_SYMBOL_GPL(qca_send_pre_shutdown_cmd);
+
++static bool qca_filename_has_extension(const char *filename)
++{
++ const char *suffix = strrchr(filename, '.');
++
++ /* File extensions require a dot, but not as the first or last character */
++ if (!suffix || suffix == filename || *(suffix + 1) == '\0')
++ return 0;
++
++ /* Avoid matching directories with names that look like files with extensions */
++ return !strchr(suffix, '/');
++}
++
++static bool qca_get_alt_nvm_file(char *filename, size_t max_size)
++{
++ char fwname[64];
++ const char *suffix;
++
++ /* nvm file name has an extension, replace with .bin */
++ if (qca_filename_has_extension(filename)) {
++ suffix = strrchr(filename, '.');
++ strscpy(fwname, filename, suffix - filename + 1);
++ snprintf(fwname + (suffix - filename),
++ sizeof(fwname) - (suffix - filename), ".bin");
++ /* If nvm file is already the default one, return false to skip the retry. */
++ if (strcmp(fwname, filename) == 0)
++ return false;
++
++ snprintf(filename, max_size, "%s", fwname);
++ return true;
++ }
++ return false;
++}
++
+ static int qca_tlv_check_data(struct hci_dev *hdev,
+ struct qca_fw_config *config,
+ u8 *fw_data, size_t fw_size,
+@@ -564,6 +597,19 @@ static int qca_download_firmware(struct hci_dev *hdev,
+ config->fwname, ret);
+ return ret;
+ }
++ }
++ /* If the board-specific file is missing, try loading the default
++ * one, unless that was attempted already.
++ */
++ else if (config->type == TLV_TYPE_NVM &&
++ qca_get_alt_nvm_file(config->fwname, sizeof(config->fwname))) {
++ bt_dev_info(hdev, "QCA Downloading %s", config->fwname);
++ ret = request_firmware(&fw, config->fwname, &hdev->dev);
++ if (ret) {
++ bt_dev_err(hdev, "QCA Failed to request file: %s (%d)",
++ config->fwname, ret);
++ return ret;
++ }
+ } else {
+ bt_dev_err(hdev, "QCA Failed to request file: %s (%d)",
+ config->fwname, ret);
+@@ -700,34 +746,38 @@ static int qca_check_bdaddr(struct hci_dev *hdev, const struct qca_fw_config *co
+ return 0;
+ }
+
+-static void qca_generate_hsp_nvm_name(char *fwname, size_t max_size,
++static void qca_get_nvm_name_by_board(char *fwname, size_t max_size,
++ const char *stem, enum qca_btsoc_type soc_type,
+ struct qca_btsoc_version ver, u8 rom_ver, u16 bid)
+ {
+ const char *variant;
++ const char *prefix;
+
+- /* hsp gf chip */
+- if ((le32_to_cpu(ver.soc_id) & QCA_HSP_GF_SOC_MASK) == QCA_HSP_GF_SOC_ID)
+- variant = "g";
+- else
+- variant = "";
++ /* Set the default value to variant and prefix */
++ variant = "";
++ prefix = "b";
+
+- if (bid == 0x0)
+- snprintf(fwname, max_size, "qca/hpnv%02x%s.bin", rom_ver, variant);
+- else
+- snprintf(fwname, max_size, "qca/hpnv%02x%s.%x", rom_ver, variant, bid);
+-}
++ if (soc_type == QCA_QCA2066)
++ prefix = "";
+
+-static inline void qca_get_nvm_name_generic(struct qca_fw_config *cfg,
+- const char *stem, u8 rom_ver, u16 bid)
+-{
+- if (bid == 0x0)
+- snprintf(cfg->fwname, sizeof(cfg->fwname), "qca/%snv%02x.bin", stem, rom_ver);
+- else if (bid & 0xff00)
+- snprintf(cfg->fwname, sizeof(cfg->fwname),
+- "qca/%snv%02x.b%x", stem, rom_ver, bid);
+- else
+- snprintf(cfg->fwname, sizeof(cfg->fwname),
+- "qca/%snv%02x.b%02x", stem, rom_ver, bid);
++ if (soc_type == QCA_WCN6855 || soc_type == QCA_QCA2066) {
++ /* If the chip is manufactured by GlobalFoundries */
++ if ((le32_to_cpu(ver.soc_id) & QCA_HSP_GF_SOC_MASK) == QCA_HSP_GF_SOC_ID)
++ variant = "g";
++ }
++
++ if (rom_ver != 0) {
++ if (bid == 0x0 || bid == 0xffff)
++ snprintf(fwname, max_size, "qca/%s%02x%s.bin", stem, rom_ver, variant);
++ else
++ snprintf(fwname, max_size, "qca/%s%02x%s.%s%02x", stem, rom_ver,
++ variant, prefix, bid);
++ } else {
++ if (bid == 0x0 || bid == 0xffff)
++ snprintf(fwname, max_size, "qca/%s%s.bin", stem, variant);
++ else
++ snprintf(fwname, max_size, "qca/%s%s.%s%02x", stem, variant, prefix, bid);
++ }
+ }
+
+ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+@@ -816,8 +866,14 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+ /* Download NVM configuration */
+ config.type = TLV_TYPE_NVM;
+ if (firmware_name) {
+- snprintf(config.fwname, sizeof(config.fwname),
+- "qca/%s", firmware_name);
++ /* The firmware name has an extension, use it directly */
++ if (qca_filename_has_extension(firmware_name)) {
++ snprintf(config.fwname, sizeof(config.fwname), "qca/%s", firmware_name);
++ } else {
++ qca_read_fw_board_id(hdev, &boardid);
++ qca_get_nvm_name_by_board(config.fwname, sizeof(config.fwname),
++ firmware_name, soc_type, ver, 0, boardid);
++ }
+ } else {
+ switch (soc_type) {
+ case QCA_WCN3990:
+@@ -836,8 +892,9 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+ "qca/apnv%02x.bin", rom_ver);
+ break;
+ case QCA_QCA2066:
+- qca_generate_hsp_nvm_name(config.fwname,
+- sizeof(config.fwname), ver, rom_ver, boardid);
++ qca_get_nvm_name_by_board(config.fwname,
++ sizeof(config.fwname), "hpnv", soc_type, ver,
++ rom_ver, boardid);
+ break;
+ case QCA_QCA6390:
+ snprintf(config.fwname, sizeof(config.fwname),
+@@ -848,13 +905,14 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+ "qca/msnv%02x.bin", rom_ver);
+ break;
+ case QCA_WCN6855:
+- snprintf(config.fwname, sizeof(config.fwname),
+- "qca/hpnv%02x.bin", rom_ver);
++ qca_read_fw_board_id(hdev, &boardid);
++ qca_get_nvm_name_by_board(config.fwname, sizeof(config.fwname),
++ "hpnv", soc_type, ver, rom_ver, boardid);
+ break;
+ case QCA_WCN7850:
+- qca_get_nvm_name_generic(&config, "hmt", rom_ver, boardid);
++ qca_get_nvm_name_by_board(config.fwname, sizeof(config.fwname),
++ "hmtnv", soc_type, ver, rom_ver, boardid);
+ break;
+-
+ default:
+ snprintf(config.fwname, sizeof(config.fwname),
+ "qca/nvm_%08x.bin", soc_ver);
+diff --git a/drivers/clocksource/jcore-pit.c b/drivers/clocksource/jcore-pit.c
+index a3fe98cd383820..82815428f8f925 100644
+--- a/drivers/clocksource/jcore-pit.c
++++ b/drivers/clocksource/jcore-pit.c
+@@ -114,6 +114,18 @@ static int jcore_pit_local_init(unsigned cpu)
+ pit->periodic_delta = DIV_ROUND_CLOSEST(NSEC_PER_SEC, HZ * buspd);
+
+ clockevents_config_and_register(&pit->ced, freq, 1, ULONG_MAX);
++ enable_percpu_irq(pit->ced.irq, IRQ_TYPE_NONE);
++
++ return 0;
++}
++
++static int jcore_pit_local_teardown(unsigned cpu)
++{
++ struct jcore_pit *pit = this_cpu_ptr(jcore_pit_percpu);
++
++ pr_info("Local J-Core PIT teardown on cpu %u\n", cpu);
++
++ disable_percpu_irq(pit->ced.irq);
+
+ return 0;
+ }
+@@ -168,6 +180,7 @@ static int __init jcore_pit_init(struct device_node *node)
+ return -ENOMEM;
+ }
+
++ irq_set_percpu_devid(pit_irq);
+ err = request_percpu_irq(pit_irq, jcore_timer_interrupt,
+ "jcore_pit", jcore_pit_percpu);
+ if (err) {
+@@ -237,7 +250,7 @@ static int __init jcore_pit_init(struct device_node *node)
+
+ cpuhp_setup_state(CPUHP_AP_JCORE_TIMER_STARTING,
+ "clockevents/jcore:starting",
+- jcore_pit_local_init, NULL);
++ jcore_pit_local_init, jcore_pit_local_teardown);
+
+ return 0;
+ }
+diff --git a/drivers/edac/qcom_edac.c b/drivers/edac/qcom_edac.c
+index a9a8ba067007a9..0fd7a777fe7d27 100644
+--- a/drivers/edac/qcom_edac.c
++++ b/drivers/edac/qcom_edac.c
+@@ -95,7 +95,7 @@ static int qcom_llcc_core_setup(struct llcc_drv_data *drv, struct regmap *llcc_b
+ * Configure interrupt enable registers such that Tag, Data RAM related
+ * interrupts are propagated to interrupt controller for servicing
+ */
+- ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable,
++ ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_0_enable,
+ TRP0_INTERRUPT_ENABLE,
+ TRP0_INTERRUPT_ENABLE);
+ if (ret)
+@@ -113,7 +113,7 @@ static int qcom_llcc_core_setup(struct llcc_drv_data *drv, struct regmap *llcc_b
+ if (ret)
+ return ret;
+
+- ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_2_enable,
++ ret = regmap_update_bits(llcc_bcast_regmap, drv->edac_reg_offset->cmn_interrupt_0_enable,
+ DRP0_INTERRUPT_ENABLE,
+ DRP0_INTERRUPT_ENABLE);
+ if (ret)
+diff --git a/drivers/firmware/arm_scmi/vendors/imx/imx-sm-misc.c b/drivers/firmware/arm_scmi/vendors/imx/imx-sm-misc.c
+index a86ab9b35953f7..2641faa329cdd0 100644
+--- a/drivers/firmware/arm_scmi/vendors/imx/imx-sm-misc.c
++++ b/drivers/firmware/arm_scmi/vendors/imx/imx-sm-misc.c
+@@ -254,8 +254,8 @@ static int scmi_imx_misc_ctrl_set(const struct scmi_protocol_handle *ph,
+ if (num > max_num)
+ return -EINVAL;
+
+- ret = ph->xops->xfer_get_init(ph, SCMI_IMX_MISC_CTRL_SET, sizeof(*in),
+- 0, &t);
++ ret = ph->xops->xfer_get_init(ph, SCMI_IMX_MISC_CTRL_SET,
++ sizeof(*in) + num * sizeof(__le32), 0, &t);
+ if (ret)
+ return ret;
+
+diff --git a/drivers/firmware/imx/Kconfig b/drivers/firmware/imx/Kconfig
+index 907cd149c40a8b..c964f4924359fc 100644
+--- a/drivers/firmware/imx/Kconfig
++++ b/drivers/firmware/imx/Kconfig
+@@ -25,6 +25,7 @@ config IMX_SCU
+
+ config IMX_SCMI_MISC_DRV
+ tristate "IMX SCMI MISC Protocol driver"
++ depends on ARCH_MXC || COMPILE_TEST
+ default y if ARCH_MXC
+ help
+ The System Controller Management Interface firmware (SCMI FW) is
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 1e8f0bdb6ae3b4..209871c219d697 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -3068,6 +3068,8 @@ static int gpiod_get_raw_value_commit(const struct gpio_desc *desc)
+ static int gpio_chip_get_multiple(struct gpio_chip *gc,
+ unsigned long *mask, unsigned long *bits)
+ {
++ lockdep_assert_held(&gc->gpiodev->srcu);
++
+ if (gc->get_multiple)
+ return gc->get_multiple(gc, mask, bits);
+ if (gc->get) {
+@@ -3098,6 +3100,7 @@ int gpiod_get_array_value_complex(bool raw, bool can_sleep,
+ struct gpio_array *array_info,
+ unsigned long *value_bitmap)
+ {
++ struct gpio_chip *gc;
+ int ret, i = 0;
+
+ /*
+@@ -3109,10 +3112,15 @@ int gpiod_get_array_value_complex(bool raw, bool can_sleep,
+ array_size <= array_info->size &&
+ (void *)array_info == desc_array + array_info->size) {
+ if (!can_sleep)
+- WARN_ON(array_info->chip->can_sleep);
++ WARN_ON(array_info->gdev->can_sleep);
++
++ guard(srcu)(&array_info->gdev->srcu);
++ gc = srcu_dereference(array_info->gdev->chip,
++ &array_info->gdev->srcu);
++ if (!gc)
++ return -ENODEV;
+
+- ret = gpio_chip_get_multiple(array_info->chip,
+- array_info->get_mask,
++ ret = gpio_chip_get_multiple(gc, array_info->get_mask,
+ value_bitmap);
+ if (ret)
+ return ret;
+@@ -3393,6 +3401,8 @@ static void gpiod_set_raw_value_commit(struct gpio_desc *desc, bool value)
+ static void gpio_chip_set_multiple(struct gpio_chip *gc,
+ unsigned long *mask, unsigned long *bits)
+ {
++ lockdep_assert_held(&gc->gpiodev->srcu);
++
+ if (gc->set_multiple) {
+ gc->set_multiple(gc, mask, bits);
+ } else {
+@@ -3410,6 +3420,7 @@ int gpiod_set_array_value_complex(bool raw, bool can_sleep,
+ struct gpio_array *array_info,
+ unsigned long *value_bitmap)
+ {
++ struct gpio_chip *gc;
+ int i = 0;
+
+ /*
+@@ -3421,14 +3432,19 @@ int gpiod_set_array_value_complex(bool raw, bool can_sleep,
+ array_size <= array_info->size &&
+ (void *)array_info == desc_array + array_info->size) {
+ if (!can_sleep)
+- WARN_ON(array_info->chip->can_sleep);
++ WARN_ON(array_info->gdev->can_sleep);
++
++ guard(srcu)(&array_info->gdev->srcu);
++ gc = srcu_dereference(array_info->gdev->chip,
++ &array_info->gdev->srcu);
++ if (!gc)
++ return -ENODEV;
+
+ if (!raw && !bitmap_empty(array_info->invert_mask, array_size))
+ bitmap_xor(value_bitmap, value_bitmap,
+ array_info->invert_mask, array_size);
+
+- gpio_chip_set_multiple(array_info->chip, array_info->set_mask,
+- value_bitmap);
++ gpio_chip_set_multiple(gc, array_info->set_mask, value_bitmap);
+
+ i = find_first_zero_bit(array_info->set_mask, array_size);
+ if (i == array_size)
+@@ -4684,9 +4700,10 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+ {
+ struct gpio_desc *desc;
+ struct gpio_descs *descs;
++ struct gpio_device *gdev;
+ struct gpio_array *array_info = NULL;
+- struct gpio_chip *gc;
+ int count, bitmap_size;
++ unsigned long dflags;
+ size_t descs_size;
+
+ count = gpiod_count(dev, con_id);
+@@ -4707,7 +4724,7 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+
+ descs->desc[descs->ndescs] = desc;
+
+- gc = gpiod_to_chip(desc);
++ gdev = gpiod_to_gpio_device(desc);
+ /*
+ * If pin hardware number of array member 0 is also 0, select
+ * its chip as a candidate for fast bitmap processing path.
+@@ -4715,8 +4732,8 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+ if (descs->ndescs == 0 && gpio_chip_hwgpio(desc) == 0) {
+ struct gpio_descs *array;
+
+- bitmap_size = BITS_TO_LONGS(gc->ngpio > count ?
+- gc->ngpio : count);
++ bitmap_size = BITS_TO_LONGS(gdev->ngpio > count ?
++ gdev->ngpio : count);
+
+ array = krealloc(descs, descs_size +
+ struct_size(array_info, invert_mask, 3 * bitmap_size),
+@@ -4736,7 +4753,7 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+
+ array_info->desc = descs->desc;
+ array_info->size = count;
+- array_info->chip = gc;
++ array_info->gdev = gdev;
+ bitmap_set(array_info->get_mask, descs->ndescs,
+ count - descs->ndescs);
+ bitmap_set(array_info->set_mask, descs->ndescs,
+@@ -4749,7 +4766,7 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+ continue;
+
+ /* Unmark array members which don't belong to the 'fast' chip */
+- if (array_info->chip != gc) {
++ if (array_info->gdev != gdev) {
+ __clear_bit(descs->ndescs, array_info->get_mask);
+ __clear_bit(descs->ndescs, array_info->set_mask);
+ }
+@@ -4772,9 +4789,10 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+ array_info->set_mask);
+ }
+ } else {
++ dflags = READ_ONCE(desc->flags);
+ /* Exclude open drain or open source from fast output */
+- if (gpiochip_line_is_open_drain(gc, descs->ndescs) ||
+- gpiochip_line_is_open_source(gc, descs->ndescs))
++ if (test_bit(FLAG_OPEN_DRAIN, &dflags) ||
++ test_bit(FLAG_OPEN_SOURCE, &dflags))
+ __clear_bit(descs->ndescs,
+ array_info->set_mask);
+ /* Identify 'fast' pins which require invertion */
+@@ -4786,7 +4804,7 @@ struct gpio_descs *__must_check gpiod_get_array(struct device *dev,
+ if (array_info)
+ dev_dbg(dev,
+ "GPIO array info: chip=%s, size=%d, get_mask=%lx, set_mask=%lx, invert_mask=%lx\n",
+- array_info->chip->label, array_info->size,
++ array_info->gdev->label, array_info->size,
+ *array_info->get_mask, *array_info->set_mask,
+ *array_info->invert_mask);
+ return descs;
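The gpiolib change above swaps a cached chip pointer for the longer-lived gpio_device plus SRCU revalidation; the core of that pattern as a kernel-idiom sketch (my_* names invented, not gpiolib API):

#include <linux/cleanup.h>
#include <linux/errno.h>
#include <linux/srcu.h>

struct my_chip;			/* the removable object, opaque here */

struct my_dev {			/* the long-lived container */
	struct srcu_struct srcu;
	struct my_chip __rcu *chip;	/* cleared when unregistered */
};

static int my_dev_do_io(struct my_dev *dev)
{
	struct my_chip *chip;

	/* Re-derive the chip under the SRCU read lock on every use
	 * instead of caching it, so concurrent removal is detected. */
	guard(srcu)(&dev->srcu);
	chip = srcu_dereference(dev->chip, &dev->srcu);
	if (!chip)
		return -ENODEV;		/* chip went away meanwhile */

	return 0;		/* ... operate on chip under the lock */
}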
+diff --git a/drivers/gpio/gpiolib.h b/drivers/gpio/gpiolib.h
+index 067197d61d57e4..87ce3753500e4b 100644
+--- a/drivers/gpio/gpiolib.h
++++ b/drivers/gpio/gpiolib.h
+@@ -110,7 +110,7 @@ extern const char *const gpio_suffixes[];
+ *
+ * @desc: Array of pointers to the GPIO descriptors
+ * @size: Number of elements in desc
+- * @chip: Parent GPIO chip
++ * @gdev: Parent GPIO device
+ * @get_mask: Get mask used in fastpath
+ * @set_mask: Set mask used in fastpath
+ * @invert_mask: Invert mask used in fastpath
+@@ -122,7 +122,7 @@ extern const char *const gpio_suffixes[];
+ struct gpio_array {
+ struct gpio_desc **desc;
+ unsigned int size;
+- struct gpio_chip *chip;
++ struct gpio_device *gdev;
+ unsigned long *get_mask;
+ unsigned long *set_mask;
+ unsigned long invert_mask[];
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index c27b4c36a7c0f5..32afcf9485245e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -119,9 +119,10 @@
+ * - 3.58.0 - Add GFX12 DCC support
+ * - 3.59.0 - Cleared VRAM
+ * - 3.60.0 - Add AMDGPU_TILING_GFX12_DCC_WRITE_COMPRESS_DISABLE (Vulkan requirement)
++ * - 3.61.0 - Contains fix for RV/PCO compute queues
+ */
+ #define KMS_DRIVER_MAJOR 3
+-#define KMS_DRIVER_MINOR 60
++#define KMS_DRIVER_MINOR 61
+ #define KMS_DRIVER_PATCHLEVEL 0
+
+ /*
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index e2501c98e107d3..05d1ae2ef84b4e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -7415,6 +7415,34 @@ static void gfx_v9_0_ring_emit_cleaner_shader(struct amdgpu_ring *ring)
+ amdgpu_ring_write(ring, 0); /* RESERVED field, programmed to zero */
+ }
+
++static void gfx_v9_0_ring_begin_use_compute(struct amdgpu_ring *ring)
++{
++ struct amdgpu_device *adev = ring->adev;
++
++ amdgpu_gfx_enforce_isolation_ring_begin_use(ring);
++
++ /* Raven and PCO APUs seem to have stability issues
++ * with compute and gfxoff and gfx pg. Disable gfx pg during
++	 * submission and re-enable it afterwards.
++ */
++ if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 1, 0))
++ gfx_v9_0_set_powergating_state(adev, AMD_PG_STATE_UNGATE);
++}
++
++static void gfx_v9_0_ring_end_use_compute(struct amdgpu_ring *ring)
++{
++ struct amdgpu_device *adev = ring->adev;
++
++ /* Raven and PCO APUs seem to have stability issues
++ * with compute and gfxoff and gfx pg. Disable gfx pg during
++	 * submission and re-enable it afterwards.
++ */
++ if (amdgpu_ip_version(adev, GC_HWIP, 0) == IP_VERSION(9, 1, 0))
++ gfx_v9_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
++
++ amdgpu_gfx_enforce_isolation_ring_end_use(ring);
++}
++
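++/*
++ * Note: these helpers are wired up below as the compute ring's
++ * .begin_use and .end_use callbacks, so the powergating toggle
++ * brackets every compute submission on GC 9.1.0 (Raven/PCO).
++ */
++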
+ static const struct amd_ip_funcs gfx_v9_0_ip_funcs = {
+ .name = "gfx_v9_0",
+ .early_init = gfx_v9_0_early_init,
+@@ -7591,8 +7619,8 @@ static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_compute = {
+ .emit_wave_limit = gfx_v9_0_emit_wave_limit,
+ .reset = gfx_v9_0_reset_kcq,
+ .emit_cleaner_shader = gfx_v9_0_ring_emit_cleaner_shader,
+- .begin_use = amdgpu_gfx_enforce_isolation_ring_begin_use,
+- .end_use = amdgpu_gfx_enforce_isolation_ring_end_use,
++ .begin_use = gfx_v9_0_ring_begin_use_compute,
++ .end_use = gfx_v9_0_ring_end_use_compute,
+ };
+
+ static const struct amdgpu_ring_funcs gfx_v9_0_ring_funcs_kiq = {
+diff --git a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
+index 02f7ba8c93cd45..7062f12b5b7511 100644
+--- a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
++++ b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler.h
+@@ -4117,7 +4117,8 @@ static const uint32_t cwsr_trap_gfx12_hex[] = {
+ 0x0000ffff, 0x8bfe7e7e,
+ 0x8bea6a6a, 0xb97af804,
+ 0xbe804ec2, 0xbf94fffe,
+- 0xbe804a6c, 0xbfb10000,
++ 0xbe804a6c, 0xbe804ec2,
++ 0xbf94fffe, 0xbfb10000,
+ 0xbf9f0000, 0xbf9f0000,
+ 0xbf9f0000, 0xbf9f0000,
+ 0xbf9f0000, 0x00000000,
+diff --git a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm
+index 44772eec9ef4df..96fbb16ceb216d 100644
+--- a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm
++++ b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx10.asm
+@@ -34,41 +34,24 @@
+ * cpp -DASIC_FAMILY=CHIP_PLUM_BONITO cwsr_trap_handler_gfx10.asm -P -o gfx11.sp3
+ * sp3 gfx11.sp3 -hex gfx11.hex
+ *
+- * gfx12:
+- * cpp -DASIC_FAMILY=CHIP_GFX12 cwsr_trap_handler_gfx10.asm -P -o gfx12.sp3
+- * sp3 gfx12.sp3 -hex gfx12.hex
+ */
+
+ #define CHIP_NAVI10 26
+ #define CHIP_SIENNA_CICHLID 30
+ #define CHIP_PLUM_BONITO 36
+-#define CHIP_GFX12 37
+
+ #define NO_SQC_STORE (ASIC_FAMILY >= CHIP_SIENNA_CICHLID)
+ #define HAVE_XNACK (ASIC_FAMILY < CHIP_SIENNA_CICHLID)
+ #define HAVE_SENDMSG_RTN (ASIC_FAMILY >= CHIP_PLUM_BONITO)
+ #define HAVE_BUFFER_LDS_LOAD (ASIC_FAMILY < CHIP_PLUM_BONITO)
+-#define SW_SA_TRAP (ASIC_FAMILY >= CHIP_PLUM_BONITO && ASIC_FAMILY < CHIP_GFX12)
++#define SW_SA_TRAP (ASIC_FAMILY == CHIP_PLUM_BONITO)
+ #define SAVE_AFTER_XNACK_ERROR (HAVE_XNACK && !NO_SQC_STORE) // workaround for TCP store failure after XNACK error when ALLOW_REPLAY=0, for debugger
+ #define SINGLE_STEP_MISSED_WORKAROUND 1 //workaround for lost MODE.DEBUG_EN exception when SAVECTX raised
+
+-#if ASIC_FAMILY < CHIP_GFX12
+ #define S_COHERENCE glc:1
+ #define V_COHERENCE slc:1 glc:1
+ #define S_WAITCNT_0 s_waitcnt 0
+-#else
+-#define S_COHERENCE scope:SCOPE_SYS
+-#define V_COHERENCE scope:SCOPE_SYS
+-#define S_WAITCNT_0 s_wait_idle
+-
+-#define HW_REG_SHADER_FLAT_SCRATCH_LO HW_REG_WAVE_SCRATCH_BASE_LO
+-#define HW_REG_SHADER_FLAT_SCRATCH_HI HW_REG_WAVE_SCRATCH_BASE_HI
+-#define HW_REG_GPR_ALLOC HW_REG_WAVE_GPR_ALLOC
+-#define HW_REG_LDS_ALLOC HW_REG_WAVE_LDS_ALLOC
+-#define HW_REG_MODE HW_REG_WAVE_MODE
+-#endif
+
+-#if ASIC_FAMILY < CHIP_GFX12
+ var SQ_WAVE_STATUS_SPI_PRIO_MASK = 0x00000006
+ var SQ_WAVE_STATUS_HALT_MASK = 0x2000
+ var SQ_WAVE_STATUS_ECC_ERR_MASK = 0x20000
+@@ -81,21 +64,6 @@ var S_STATUS_ALWAYS_CLEAR_MASK = SQ_WAVE_STATUS_SPI_PRIO_MASK|SQ_WAVE_STATUS_E
+ var S_STATUS_HALT_MASK = SQ_WAVE_STATUS_HALT_MASK
+ var S_SAVE_PC_HI_TRAP_ID_MASK = 0x00FF0000
+ var S_SAVE_PC_HI_HT_MASK = 0x01000000
+-#else
+-var SQ_WAVE_STATE_PRIV_BARRIER_COMPLETE_MASK = 0x4
+-var SQ_WAVE_STATE_PRIV_SCC_SHIFT = 9
+-var SQ_WAVE_STATE_PRIV_SYS_PRIO_MASK = 0xC00
+-var SQ_WAVE_STATE_PRIV_HALT_MASK = 0x4000
+-var SQ_WAVE_STATE_PRIV_POISON_ERR_MASK = 0x8000
+-var SQ_WAVE_STATE_PRIV_POISON_ERR_SHIFT = 15
+-var SQ_WAVE_STATUS_WAVE64_SHIFT = 29
+-var SQ_WAVE_STATUS_WAVE64_SIZE = 1
+-var SQ_WAVE_LDS_ALLOC_GRANULARITY = 9
+-var S_STATUS_HWREG = HW_REG_WAVE_STATE_PRIV
+-var S_STATUS_ALWAYS_CLEAR_MASK = SQ_WAVE_STATE_PRIV_SYS_PRIO_MASK|SQ_WAVE_STATE_PRIV_POISON_ERR_MASK
+-var S_STATUS_HALT_MASK = SQ_WAVE_STATE_PRIV_HALT_MASK
+-var S_SAVE_PC_HI_TRAP_ID_MASK = 0xF0000000
+-#endif
+
+ var SQ_WAVE_STATUS_NO_VGPRS_SHIFT = 24
+ var SQ_WAVE_LDS_ALLOC_LDS_SIZE_SHIFT = 12
+@@ -110,7 +78,6 @@ var SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT = 8
+ var SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT = 12
+ #endif
+
+-#if ASIC_FAMILY < CHIP_GFX12
+ var SQ_WAVE_TRAPSTS_SAVECTX_MASK = 0x400
+ var SQ_WAVE_TRAPSTS_EXCP_MASK = 0x1FF
+ var SQ_WAVE_TRAPSTS_SAVECTX_SHIFT = 10
+@@ -161,39 +128,6 @@ var S_TRAPSTS_RESTORE_PART_3_SIZE = 32 - S_TRAPSTS_RESTORE_PART_3_SHIFT
+ var S_TRAPSTS_HWREG = HW_REG_TRAPSTS
+ var S_TRAPSTS_SAVE_CONTEXT_MASK = SQ_WAVE_TRAPSTS_SAVECTX_MASK
+ var S_TRAPSTS_SAVE_CONTEXT_SHIFT = SQ_WAVE_TRAPSTS_SAVECTX_SHIFT
+-#else
+-var SQ_WAVE_EXCP_FLAG_PRIV_ADDR_WATCH_MASK = 0xF
+-var SQ_WAVE_EXCP_FLAG_PRIV_MEM_VIOL_MASK = 0x10
+-var SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_SHIFT = 5
+-var SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_MASK = 0x20
+-var SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_MASK = 0x40
+-var SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_SHIFT = 6
+-var SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK = 0x80
+-var SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_SHIFT = 7
+-var SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_MASK = 0x100
+-var SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_SHIFT = 8
+-var SQ_WAVE_EXCP_FLAG_PRIV_WAVE_END_MASK = 0x200
+-var SQ_WAVE_EXCP_FLAG_PRIV_TRAP_AFTER_INST_MASK = 0x800
+-var SQ_WAVE_TRAP_CTRL_ADDR_WATCH_MASK = 0x80
+-var SQ_WAVE_TRAP_CTRL_TRAP_AFTER_INST_MASK = 0x200
+-
+-var S_TRAPSTS_HWREG = HW_REG_WAVE_EXCP_FLAG_PRIV
+-var S_TRAPSTS_SAVE_CONTEXT_MASK = SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_MASK
+-var S_TRAPSTS_SAVE_CONTEXT_SHIFT = SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_SHIFT
+-var S_TRAPSTS_NON_MASKABLE_EXCP_MASK = SQ_WAVE_EXCP_FLAG_PRIV_MEM_VIOL_MASK |\
+- SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_MASK |\
+- SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK |\
+- SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_MASK |\
+- SQ_WAVE_EXCP_FLAG_PRIV_WAVE_END_MASK |\
+- SQ_WAVE_EXCP_FLAG_PRIV_TRAP_AFTER_INST_MASK
+-var S_TRAPSTS_RESTORE_PART_1_SIZE = SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_SHIFT
+-var S_TRAPSTS_RESTORE_PART_2_SHIFT = SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_SHIFT
+-var S_TRAPSTS_RESTORE_PART_2_SIZE = SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_SHIFT - SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_SHIFT
+-var S_TRAPSTS_RESTORE_PART_3_SHIFT = SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_SHIFT
+-var S_TRAPSTS_RESTORE_PART_3_SIZE = 32 - S_TRAPSTS_RESTORE_PART_3_SHIFT
+-var BARRIER_STATE_SIGNAL_OFFSET = 16
+-var BARRIER_STATE_VALID_OFFSET = 0
+-#endif
+
+ // bits [31:24] unused by SPI debug data
+ var TTMP11_SAVE_REPLAY_W64H_SHIFT = 31
+@@ -305,11 +239,7 @@ L_TRAP_NO_BARRIER:
+
+ L_HALTED:
+ // Host trap may occur while wave is halted.
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_and_b32 ttmp2, s_save_pc_hi, S_SAVE_PC_HI_TRAP_ID_MASK
+-#else
+- s_and_b32 ttmp2, s_save_trapsts, SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK
+-#endif
+ s_cbranch_scc1 L_FETCH_2ND_TRAP
+
+ L_CHECK_SAVE:
+@@ -336,7 +266,6 @@ L_NOT_HALTED:
+ // Check for maskable exceptions in trapsts.excp and trapsts.excp_hi.
+ // Maskable exceptions only cause the wave to enter the trap handler if
+ // their respective bit in mode.excp_en is set.
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_and_b32 ttmp2, s_save_trapsts, SQ_WAVE_TRAPSTS_EXCP_MASK|SQ_WAVE_TRAPSTS_EXCP_HI_MASK
+ s_cbranch_scc0 L_CHECK_TRAP_ID
+
+@@ -349,17 +278,6 @@ L_NOT_ADDR_WATCH:
+ s_lshl_b32 ttmp2, ttmp2, SQ_WAVE_MODE_EXCP_EN_SHIFT
+ s_and_b32 ttmp2, ttmp2, ttmp3
+ s_cbranch_scc1 L_FETCH_2ND_TRAP
+-#else
+- s_getreg_b32 ttmp2, hwreg(HW_REG_WAVE_EXCP_FLAG_USER)
+- s_and_b32 ttmp3, s_save_trapsts, SQ_WAVE_EXCP_FLAG_PRIV_ADDR_WATCH_MASK
+- s_cbranch_scc0 L_NOT_ADDR_WATCH
+- s_or_b32 ttmp2, ttmp2, SQ_WAVE_TRAP_CTRL_ADDR_WATCH_MASK
+-
+-L_NOT_ADDR_WATCH:
+- s_getreg_b32 ttmp3, hwreg(HW_REG_WAVE_TRAP_CTRL)
+- s_and_b32 ttmp2, ttmp3, ttmp2
+- s_cbranch_scc1 L_FETCH_2ND_TRAP
+-#endif
+
+ L_CHECK_TRAP_ID:
+ // Check trap_id != 0
+@@ -369,13 +287,8 @@ L_CHECK_TRAP_ID:
+ #if SINGLE_STEP_MISSED_WORKAROUND
+ // Prioritize single step exception over context save.
+ // Second-level trap will halt wave and RFE, re-entering for SAVECTX.
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_getreg_b32 ttmp2, hwreg(HW_REG_MODE)
+ s_and_b32 ttmp2, ttmp2, SQ_WAVE_MODE_DEBUG_EN_MASK
+-#else
+- // WAVE_TRAP_CTRL is already in ttmp3.
+- s_and_b32 ttmp3, ttmp3, SQ_WAVE_TRAP_CTRL_TRAP_AFTER_INST_MASK
+-#endif
+ s_cbranch_scc1 L_FETCH_2ND_TRAP
+ #endif
+
+@@ -425,12 +338,7 @@ L_NO_NEXT_TRAP:
+ s_cbranch_scc1 L_TRAP_CASE
+
+ // Host trap will not cause trap re-entry.
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_and_b32 ttmp2, s_save_pc_hi, S_SAVE_PC_HI_HT_MASK
+-#else
+- s_getreg_b32 ttmp2, hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV)
+- s_and_b32 ttmp2, ttmp2, SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK
+-#endif
+ s_cbranch_scc1 L_EXIT_TRAP
+ s_or_b32 s_save_status, s_save_status, S_STATUS_HALT_MASK
+
+@@ -457,16 +365,7 @@ L_EXIT_TRAP:
+ s_and_b64 exec, exec, exec // Restore STATUS.EXECZ, not writable by s_setreg_b32
+ s_and_b64 vcc, vcc, vcc // Restore STATUS.VCCZ, not writable by s_setreg_b32
+
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_setreg_b32 hwreg(S_STATUS_HWREG), s_save_status
+-#else
+- // STATE_PRIV.BARRIER_COMPLETE may have changed since we read it.
+- // Only restore fields which the trap handler changes.
+- s_lshr_b32 s_save_status, s_save_status, SQ_WAVE_STATE_PRIV_SCC_SHIFT
+- s_setreg_b32 hwreg(S_STATUS_HWREG, SQ_WAVE_STATE_PRIV_SCC_SHIFT, \
+- SQ_WAVE_STATE_PRIV_POISON_ERR_SHIFT - SQ_WAVE_STATE_PRIV_SCC_SHIFT + 1), s_save_status
+-#endif
+-
+ s_rfe_b64 [ttmp0, ttmp1]
+
+ L_SAVE:
+@@ -478,14 +377,6 @@ L_SAVE:
+ s_endpgm
+ L_HAVE_VGPRS:
+ #endif
+-#if ASIC_FAMILY >= CHIP_GFX12
+- s_getreg_b32 s_save_tmp, hwreg(HW_REG_WAVE_STATUS)
+- s_bitcmp1_b32 s_save_tmp, SQ_WAVE_STATUS_NO_VGPRS_SHIFT
+- s_cbranch_scc0 L_HAVE_VGPRS
+- s_endpgm
+-L_HAVE_VGPRS:
+-#endif
+-
+ s_and_b32 s_save_pc_hi, s_save_pc_hi, 0x0000ffff //pc[47:32]
+ s_mov_b32 s_save_tmp, 0
+ s_setreg_b32 hwreg(S_TRAPSTS_HWREG, S_TRAPSTS_SAVE_CONTEXT_SHIFT, 1), s_save_tmp //clear saveCtx bit
+@@ -671,19 +562,6 @@ L_SAVE_HWREG:
+ s_mov_b32 m0, 0x0 //Next lane of v2 to write to
+ #endif
+
+-#if ASIC_FAMILY >= CHIP_GFX12
+- // Ensure no further changes to barrier or LDS state.
+- // STATE_PRIV.BARRIER_COMPLETE may change up to this point.
+- s_barrier_signal -2
+- s_barrier_wait -2
+-
+- // Re-read final state of BARRIER_COMPLETE field for save.
+- s_getreg_b32 s_save_tmp, hwreg(S_STATUS_HWREG)
+- s_and_b32 s_save_tmp, s_save_tmp, SQ_WAVE_STATE_PRIV_BARRIER_COMPLETE_MASK
+- s_andn2_b32 s_save_status, s_save_status, SQ_WAVE_STATE_PRIV_BARRIER_COMPLETE_MASK
+- s_or_b32 s_save_status, s_save_status, s_save_tmp
+-#endif
+-
+ write_hwreg_to_mem(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset)
+ write_hwreg_to_mem(s_save_pc_lo, s_save_buf_rsrc0, s_save_mem_offset)
+ s_andn2_b32 s_save_tmp, s_save_pc_hi, S_SAVE_PC_HI_FIRST_WAVE_MASK
+@@ -707,21 +585,6 @@ L_SAVE_HWREG:
+ s_getreg_b32 s_save_m0, hwreg(HW_REG_SHADER_FLAT_SCRATCH_HI)
+ write_hwreg_to_mem(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset)
+
+-#if ASIC_FAMILY >= CHIP_GFX12
+- s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_EXCP_FLAG_USER)
+- write_hwreg_to_mem(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset)
+-
+- s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_TRAP_CTRL)
+- write_hwreg_to_mem(s_save_m0, s_save_buf_rsrc0, s_save_mem_offset)
+-
+- s_getreg_b32 s_save_tmp, hwreg(HW_REG_WAVE_STATUS)
+- write_hwreg_to_mem(s_save_tmp, s_save_buf_rsrc0, s_save_mem_offset)
+-
+- s_get_barrier_state s_save_tmp, -1
+- s_wait_kmcnt (0)
+- write_hwreg_to_mem(s_save_tmp, s_save_buf_rsrc0, s_save_mem_offset)
+-#endif
+-
+ #if NO_SQC_STORE
+ // Write HWREGs with 16 VGPR lanes. TTMPs occupy space after this.
+ s_mov_b32 exec_lo, 0xFFFF
+@@ -814,9 +677,7 @@ L_SAVE_LDS_NORMAL:
+ s_and_b32 s_save_alloc_size, s_save_alloc_size, 0xFFFFFFFF //lds_size is zero?
+ 	s_cbranch_scc0	L_SAVE_LDS_DONE											//no lds used? jump to L_SAVE_LDS_DONE
+
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_barrier //LDS is used? wait for other waves in the same TG
+-#endif
+ s_and_b32 s_save_tmp, s_save_pc_hi, S_SAVE_PC_HI_FIRST_WAVE_MASK
+ s_cbranch_scc0 L_SAVE_LDS_DONE
+
+@@ -1081,11 +942,6 @@ L_RESTORE:
+ s_mov_b32 s_restore_buf_rsrc2, 0 //NUM_RECORDS initial value = 0 (in bytes)
+ s_mov_b32 s_restore_buf_rsrc3, S_RESTORE_BUF_RSRC_WORD3_MISC
+
+-#if ASIC_FAMILY >= CHIP_GFX12
+- // Save s_restore_spi_init_hi for later use.
+- s_mov_b32 s_restore_spi_init_hi_save, s_restore_spi_init_hi
+-#endif
+-
+ //determine it is wave32 or wave64
+ get_wave_size2(s_restore_size)
+
+@@ -1320,9 +1176,7 @@ L_RESTORE_SGPR:
+ // s_barrier with MODE.DEBUG_EN=1, STATUS.PRIV=1 incorrectly asserts debug exception.
+ // Clear DEBUG_EN before and restore MODE after the barrier.
+ s_setreg_imm32_b32 hwreg(HW_REG_MODE), 0
+-#if ASIC_FAMILY < CHIP_GFX12
+ 	s_barrier														//barrier to ensure the readiness of LDS before access attempts from any other wave in the same TG
+-#endif
+
+ /* restore HW registers */
+ L_RESTORE_HWREG:
+@@ -1334,11 +1188,6 @@ L_RESTORE_HWREG:
+
+ s_mov_b32 s_restore_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
+
+-#if ASIC_FAMILY >= CHIP_GFX12
+- // Restore s_restore_spi_init_hi before the saved value gets clobbered.
+- s_mov_b32 s_restore_spi_init_hi, s_restore_spi_init_hi_save
+-#endif
+-
+ read_hwreg_from_mem(s_restore_m0, s_restore_buf_rsrc0, s_restore_mem_offset)
+ read_hwreg_from_mem(s_restore_pc_lo, s_restore_buf_rsrc0, s_restore_mem_offset)
+ read_hwreg_from_mem(s_restore_pc_hi, s_restore_buf_rsrc0, s_restore_mem_offset)
+@@ -1358,44 +1207,6 @@ L_RESTORE_HWREG:
+
+ s_setreg_b32 hwreg(HW_REG_SHADER_FLAT_SCRATCH_HI), s_restore_flat_scratch
+
+-#if ASIC_FAMILY >= CHIP_GFX12
+- read_hwreg_from_mem(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset)
+- S_WAITCNT_0
+- s_setreg_b32 hwreg(HW_REG_WAVE_EXCP_FLAG_USER), s_restore_tmp
+-
+- read_hwreg_from_mem(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset)
+- S_WAITCNT_0
+- s_setreg_b32 hwreg(HW_REG_WAVE_TRAP_CTRL), s_restore_tmp
+-
+- // Only the first wave needs to restore the workgroup barrier.
+- s_and_b32 s_restore_tmp, s_restore_spi_init_hi, S_RESTORE_SPI_INIT_FIRST_WAVE_MASK
+- s_cbranch_scc0 L_SKIP_BARRIER_RESTORE
+-
+- // Skip over WAVE_STATUS, since there is no state to restore from it
+- s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 4
+-
+- read_hwreg_from_mem(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset)
+- S_WAITCNT_0
+-
+- s_bitcmp1_b32 s_restore_tmp, BARRIER_STATE_VALID_OFFSET
+- s_cbranch_scc0 L_SKIP_BARRIER_RESTORE
+-
+- // extract the saved signal count from s_restore_tmp
+- s_lshr_b32 s_restore_tmp, s_restore_tmp, BARRIER_STATE_SIGNAL_OFFSET
+-
+- // We need to call s_barrier_signal repeatedly to restore the signal
+- // count of the work group barrier. The member count is already
+- // initialized with the number of waves in the work group.
+-L_BARRIER_RESTORE_LOOP:
+- s_and_b32 s_restore_tmp, s_restore_tmp, s_restore_tmp
+- s_cbranch_scc0 L_SKIP_BARRIER_RESTORE
+- s_barrier_signal -1
+- s_add_i32 s_restore_tmp, s_restore_tmp, -1
+- s_branch L_BARRIER_RESTORE_LOOP
+-
+-L_SKIP_BARRIER_RESTORE:
+-#endif
+-
+ s_mov_b32 m0, s_restore_m0
+ s_mov_b32 exec_lo, s_restore_exec_lo
+ s_mov_b32 exec_hi, s_restore_exec_hi
+@@ -1453,13 +1264,6 @@ L_RETURN_WITHOUT_PRIV:
+
+ s_setreg_b32 hwreg(S_STATUS_HWREG), s_restore_status // SCC is included, which is changed by previous salu
+
+-#if ASIC_FAMILY >= CHIP_GFX12
+- // Make barrier and LDS state visible to all waves in the group.
+- // STATE_PRIV.BARRIER_COMPLETE may change after this point.
+- s_barrier_signal -2
+- s_barrier_wait -2
+-#endif
+-
+ s_rfe_b64 s_restore_pc_lo //Return to the main shader program and resume execution
+
+ L_END_PGM:
+@@ -1598,11 +1402,7 @@ function get_hwreg_size_bytes
+ end
+
+ function get_wave_size2(s_reg)
+-#if ASIC_FAMILY < CHIP_GFX12
+ s_getreg_b32 s_reg, hwreg(HW_REG_IB_STS2,SQ_WAVE_IB_STS2_WAVE64_SHIFT,SQ_WAVE_IB_STS2_WAVE64_SIZE)
+-#else
+- s_getreg_b32 s_reg, hwreg(HW_REG_WAVE_STATUS,SQ_WAVE_STATUS_WAVE64_SHIFT,SQ_WAVE_STATUS_WAVE64_SIZE)
+-#endif
+ s_lshl_b32 s_reg, s_reg, S_WAVE_SIZE
+ end
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx12.asm b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx12.asm
+new file mode 100644
+index 00000000000000..7b9d36e5fa4372
+--- /dev/null
++++ b/drivers/gpu/drm/amd/amdkfd/cwsr_trap_handler_gfx12.asm
+@@ -0,0 +1,1130 @@
++/*
++ * Copyright 2018 Advanced Micro Devices, Inc.
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be included in
++ * all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
++ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
++ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
++ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
++ * OTHER DEALINGS IN THE SOFTWARE.
++ */
++
++/* To compile this assembly code:
++ *
++ * gfx12:
++ * cpp -DASIC_FAMILY=CHIP_GFX12 cwsr_trap_handler_gfx12.asm -P -o gfx12.sp3
++ * sp3 gfx12.sp3 -hex gfx12.hex
++ */
++
++#define CHIP_GFX12 37
++
++#define SINGLE_STEP_MISSED_WORKAROUND 1 //workaround for lost TRAP_AFTER_INST exception when SAVECTX raised
++
++var SQ_WAVE_STATE_PRIV_BARRIER_COMPLETE_MASK = 0x4
++var SQ_WAVE_STATE_PRIV_SCC_SHIFT = 9
++var SQ_WAVE_STATE_PRIV_SYS_PRIO_MASK = 0xC00
++var SQ_WAVE_STATE_PRIV_HALT_MASK = 0x4000
++var SQ_WAVE_STATE_PRIV_POISON_ERR_MASK = 0x8000
++var SQ_WAVE_STATE_PRIV_POISON_ERR_SHIFT = 15
++var SQ_WAVE_STATUS_WAVE64_SHIFT = 29
++var SQ_WAVE_STATUS_WAVE64_SIZE = 1
++var SQ_WAVE_STATUS_NO_VGPRS_SHIFT = 24
++var SQ_WAVE_STATE_PRIV_ALWAYS_CLEAR_MASK = SQ_WAVE_STATE_PRIV_SYS_PRIO_MASK|SQ_WAVE_STATE_PRIV_POISON_ERR_MASK
++var S_SAVE_PC_HI_TRAP_ID_MASK = 0xF0000000
++
++var SQ_WAVE_LDS_ALLOC_LDS_SIZE_SHIFT = 12
++var SQ_WAVE_LDS_ALLOC_LDS_SIZE_SIZE = 9
++var SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SIZE = 8
++var SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT = 12
++var SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SHIFT = 24
++var SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SIZE = 4
++var SQ_WAVE_LDS_ALLOC_GRANULARITY = 9
++
++var SQ_WAVE_EXCP_FLAG_PRIV_ADDR_WATCH_MASK = 0xF
++var SQ_WAVE_EXCP_FLAG_PRIV_MEM_VIOL_MASK = 0x10
++var SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_SHIFT = 5
++var SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_MASK = 0x20
++var SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_MASK = 0x40
++var SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_SHIFT = 6
++var SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK = 0x80
++var SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_SHIFT = 7
++var SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_MASK = 0x100
++var SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_SHIFT = 8
++var SQ_WAVE_EXCP_FLAG_PRIV_WAVE_END_MASK = 0x200
++var SQ_WAVE_EXCP_FLAG_PRIV_TRAP_AFTER_INST_MASK = 0x800
++var SQ_WAVE_TRAP_CTRL_ADDR_WATCH_MASK = 0x80
++var SQ_WAVE_TRAP_CTRL_TRAP_AFTER_INST_MASK = 0x200
++
++var SQ_WAVE_EXCP_FLAG_PRIV_NON_MASKABLE_EXCP_MASK= SQ_WAVE_EXCP_FLAG_PRIV_MEM_VIOL_MASK |\
++ SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_MASK |\
++ SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK |\
++ SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_MASK |\
++ SQ_WAVE_EXCP_FLAG_PRIV_WAVE_END_MASK |\
++ SQ_WAVE_EXCP_FLAG_PRIV_TRAP_AFTER_INST_MASK
++var SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_1_SIZE = SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_SHIFT
++var SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_2_SHIFT = SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_SHIFT
++var SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_2_SIZE = SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_SHIFT - SQ_WAVE_EXCP_FLAG_PRIV_ILLEGAL_INST_SHIFT
++var SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_3_SHIFT = SQ_WAVE_EXCP_FLAG_PRIV_WAVE_START_SHIFT
++var SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_3_SIZE = 32 - SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_3_SHIFT
++var BARRIER_STATE_SIGNAL_OFFSET = 16
++var BARRIER_STATE_VALID_OFFSET = 0
++
++var TTMP11_DEBUG_TRAP_ENABLED_SHIFT = 23
++var TTMP11_DEBUG_TRAP_ENABLED_MASK = 0x800000
++
++// SQ_SEL_X/Y/Z/W, BUF_NUM_FORMAT_FLOAT, (0 for MUBUF stride[17:14]
++// when ADD_TID_ENABLE and BUF_DATA_FORMAT_32 for MTBUF), ADD_TID_ENABLE
++var S_SAVE_BUF_RSRC_WORD1_STRIDE = 0x00040000
++var S_SAVE_BUF_RSRC_WORD3_MISC = 0x10807FAC
++var S_SAVE_SPI_INIT_FIRST_WAVE_MASK = 0x04000000
++var S_SAVE_SPI_INIT_FIRST_WAVE_SHIFT = 26
++
++var S_SAVE_PC_HI_FIRST_WAVE_MASK = 0x80000000
++var S_SAVE_PC_HI_FIRST_WAVE_SHIFT = 31
++
++var s_sgpr_save_num = 108
++
++var s_save_spi_init_lo = exec_lo
++var s_save_spi_init_hi = exec_hi
++var s_save_pc_lo = ttmp0
++var s_save_pc_hi = ttmp1
++var s_save_exec_lo = ttmp2
++var s_save_exec_hi = ttmp3
++var s_save_state_priv = ttmp12
++var s_save_excp_flag_priv = ttmp15
++var s_save_xnack_mask = s_save_excp_flag_priv
++var s_wave_size = ttmp7
++var s_save_buf_rsrc0 = ttmp8
++var s_save_buf_rsrc1 = ttmp9
++var s_save_buf_rsrc2 = ttmp10
++var s_save_buf_rsrc3 = ttmp11
++var s_save_mem_offset = ttmp4
++var s_save_alloc_size = s_save_excp_flag_priv
++var s_save_tmp = ttmp14
++var s_save_m0 = ttmp5
++var s_save_ttmps_lo = s_save_tmp
++var s_save_ttmps_hi = s_save_excp_flag_priv
++
++var S_RESTORE_BUF_RSRC_WORD1_STRIDE = S_SAVE_BUF_RSRC_WORD1_STRIDE
++var S_RESTORE_BUF_RSRC_WORD3_MISC = S_SAVE_BUF_RSRC_WORD3_MISC
++
++var S_RESTORE_SPI_INIT_FIRST_WAVE_MASK = 0x04000000
++var S_RESTORE_SPI_INIT_FIRST_WAVE_SHIFT = 26
++var S_WAVE_SIZE = 25
++
++var s_restore_spi_init_lo = exec_lo
++var s_restore_spi_init_hi = exec_hi
++var s_restore_mem_offset = ttmp12
++var s_restore_alloc_size = ttmp3
++var s_restore_tmp = ttmp2
++var s_restore_mem_offset_save = s_restore_tmp
++var s_restore_m0 = s_restore_alloc_size
++var s_restore_mode = ttmp7
++var s_restore_flat_scratch = s_restore_tmp
++var s_restore_pc_lo = ttmp0
++var s_restore_pc_hi = ttmp1
++var s_restore_exec_lo = ttmp4
++var s_restore_exec_hi = ttmp5
++var s_restore_state_priv = ttmp14
++var s_restore_excp_flag_priv = ttmp15
++var s_restore_xnack_mask = ttmp13
++var s_restore_buf_rsrc0 = ttmp8
++var s_restore_buf_rsrc1 = ttmp9
++var s_restore_buf_rsrc2 = ttmp10
++var s_restore_buf_rsrc3 = ttmp11
++var s_restore_size = ttmp6
++var s_restore_ttmps_lo = s_restore_tmp
++var s_restore_ttmps_hi = s_restore_alloc_size
++var s_restore_spi_init_hi_save = s_restore_exec_hi
++
++shader main
++ asic(DEFAULT)
++ type(CS)
++ wave_size(32)
++
++ s_branch L_SKIP_RESTORE //NOT restore. might be a regular trap or save
++
++L_JUMP_TO_RESTORE:
++ s_branch L_RESTORE
++
++L_SKIP_RESTORE:
++ s_getreg_b32 s_save_state_priv, hwreg(HW_REG_WAVE_STATE_PRIV) //save STATUS since we will change SCC
++
++ // Clear SPI_PRIO: do not save with elevated priority.
++ // Clear ECC_ERR: prevents SQC store and triggers FATAL_HALT if setreg'd.
++ s_andn2_b32 s_save_state_priv, s_save_state_priv, SQ_WAVE_STATE_PRIV_ALWAYS_CLEAR_MASK
++
++ s_getreg_b32 s_save_excp_flag_priv, hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV)
++
++ s_and_b32 ttmp2, s_save_state_priv, SQ_WAVE_STATE_PRIV_HALT_MASK
++ s_cbranch_scc0 L_NOT_HALTED
++
++L_HALTED:
++ // Host trap may occur while wave is halted.
++ s_and_b32 ttmp2, s_save_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK
++ s_cbranch_scc1 L_FETCH_2ND_TRAP
++
++L_CHECK_SAVE:
++ s_and_b32 ttmp2, s_save_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_MASK
++ s_cbranch_scc1 L_SAVE
++
++ // Wave is halted but neither host trap nor SAVECTX is raised.
++ // Caused by instruction fetch memory violation.
++ // Spin wait until context saved to prevent interrupt storm.
++ s_sleep 0x10
++ s_getreg_b32 s_save_excp_flag_priv, hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV)
++ s_branch L_CHECK_SAVE
++
++L_NOT_HALTED:
++ // Let second-level handle non-SAVECTX exception or trap.
++ // Any concurrent SAVECTX will be handled upon re-entry once halted.
++
++ // Check non-maskable exceptions. memory_violation, illegal_instruction
++ // and xnack_error exceptions always cause the wave to enter the trap
++ // handler.
++ s_and_b32 ttmp2, s_save_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_NON_MASKABLE_EXCP_MASK
++ s_cbranch_scc1 L_FETCH_2ND_TRAP
++
++ // Check for maskable exceptions in trapsts.excp and trapsts.excp_hi.
++ // Maskable exceptions only cause the wave to enter the trap handler if
++ // their respective bit in mode.excp_en is set.
++ s_getreg_b32 ttmp2, hwreg(HW_REG_WAVE_EXCP_FLAG_USER)
++ s_and_b32 ttmp3, s_save_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_ADDR_WATCH_MASK
++ s_cbranch_scc0 L_NOT_ADDR_WATCH
++ s_or_b32 ttmp2, ttmp2, SQ_WAVE_TRAP_CTRL_ADDR_WATCH_MASK
++
++L_NOT_ADDR_WATCH:
++ s_getreg_b32 ttmp3, hwreg(HW_REG_WAVE_TRAP_CTRL)
++ s_and_b32 ttmp2, ttmp3, ttmp2
++ s_cbranch_scc1 L_FETCH_2ND_TRAP
++
++L_CHECK_TRAP_ID:
++ // Check trap_id != 0
++ s_and_b32 ttmp2, s_save_pc_hi, S_SAVE_PC_HI_TRAP_ID_MASK
++ s_cbranch_scc1 L_FETCH_2ND_TRAP
++
++#if SINGLE_STEP_MISSED_WORKAROUND
++ // Prioritize single step exception over context save.
++ // Second-level trap will halt wave and RFE, re-entering for SAVECTX.
++ // WAVE_TRAP_CTRL is already in ttmp3.
++ s_and_b32 ttmp3, ttmp3, SQ_WAVE_TRAP_CTRL_TRAP_AFTER_INST_MASK
++ s_cbranch_scc1 L_FETCH_2ND_TRAP
++#endif
++
++ s_and_b32 ttmp2, s_save_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_MASK
++ s_cbranch_scc1 L_SAVE
++
++L_FETCH_2ND_TRAP:
++ // Read second-level TBA/TMA from first-level TMA and jump if available.
++ // ttmp[2:5] and ttmp12 can be used (others hold SPI-initialized debug data)
++ // ttmp12 holds SQ_WAVE_STATUS
++ s_sendmsg_rtn_b64 [ttmp14, ttmp15], sendmsg(MSG_RTN_GET_TMA)
++ s_wait_idle
++ s_lshl_b64 [ttmp14, ttmp15], [ttmp14, ttmp15], 0x8
++
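++	// Sign-extend the 48-bit TMA address: after the 8-bit shift, address
++	// bit 47 sits at ttmp15 bit 15, so propagate it into bits 31:16.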
++ s_bitcmp1_b32 ttmp15, 0xF
++ s_cbranch_scc0 L_NO_SIGN_EXTEND_TMA
++ s_or_b32 ttmp15, ttmp15, 0xFFFF0000
++L_NO_SIGN_EXTEND_TMA:
++
++ s_load_dword ttmp2, [ttmp14, ttmp15], 0x10 scope:SCOPE_SYS // debug trap enabled flag
++ s_wait_idle
++ s_lshl_b32 ttmp2, ttmp2, TTMP11_DEBUG_TRAP_ENABLED_SHIFT
++ s_andn2_b32 ttmp11, ttmp11, TTMP11_DEBUG_TRAP_ENABLED_MASK
++ s_or_b32 ttmp11, ttmp11, ttmp2
++
++ s_load_dwordx2 [ttmp2, ttmp3], [ttmp14, ttmp15], 0x0 scope:SCOPE_SYS // second-level TBA
++ s_wait_idle
++ s_load_dwordx2 [ttmp14, ttmp15], [ttmp14, ttmp15], 0x8 scope:SCOPE_SYS // second-level TMA
++ s_wait_idle
++
++ s_and_b64 [ttmp2, ttmp3], [ttmp2, ttmp3], [ttmp2, ttmp3]
++	s_cbranch_scc0	L_NO_NEXT_TRAP						// second-level trap handler has not been set
++ s_setpc_b64 [ttmp2, ttmp3] // jump to second-level trap handler
++
++L_NO_NEXT_TRAP:
++ // If not caused by trap then halt wave to prevent re-entry.
++ s_and_b32 ttmp2, s_save_pc_hi, S_SAVE_PC_HI_TRAP_ID_MASK
++ s_cbranch_scc1 L_TRAP_CASE
++
++ // Host trap will not cause trap re-entry.
++ s_getreg_b32 ttmp2, hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV)
++ s_and_b32 ttmp2, ttmp2, SQ_WAVE_EXCP_FLAG_PRIV_HOST_TRAP_MASK
++ s_cbranch_scc1 L_EXIT_TRAP
++ s_or_b32 s_save_state_priv, s_save_state_priv, SQ_WAVE_STATE_PRIV_HALT_MASK
++
++ // If the PC points to S_ENDPGM then context save will fail if STATE_PRIV.HALT is set.
++ // Rewind the PC to prevent this from occurring.
++ s_sub_u32 ttmp0, ttmp0, 0x8
++ s_subb_u32 ttmp1, ttmp1, 0x0
++
++ s_branch L_EXIT_TRAP
++
++L_TRAP_CASE:
++ // Advance past trap instruction to prevent re-entry.
++ s_add_u32 ttmp0, ttmp0, 0x4
++ s_addc_u32 ttmp1, ttmp1, 0x0
++
++L_EXIT_TRAP:
++ s_and_b32 ttmp1, ttmp1, 0xFFFF
++
++ // Restore SQ_WAVE_STATUS.
++ s_and_b64 exec, exec, exec // Restore STATUS.EXECZ, not writable by s_setreg_b32
++ s_and_b64 vcc, vcc, vcc // Restore STATUS.VCCZ, not writable by s_setreg_b32
++
++ // STATE_PRIV.BARRIER_COMPLETE may have changed since we read it.
++ // Only restore fields which the trap handler changes.
++ s_lshr_b32 s_save_state_priv, s_save_state_priv, SQ_WAVE_STATE_PRIV_SCC_SHIFT
++ s_setreg_b32 hwreg(HW_REG_WAVE_STATE_PRIV, SQ_WAVE_STATE_PRIV_SCC_SHIFT, \
++ SQ_WAVE_STATE_PRIV_POISON_ERR_SHIFT - SQ_WAVE_STATE_PRIV_SCC_SHIFT + 1), s_save_state_priv
++
++ s_rfe_b64 [ttmp0, ttmp1]
++
++L_SAVE:
++ // If VGPRs have been deallocated then terminate the wavefront.
++ // It has no remaining program to run and cannot save without VGPRs.
++ s_getreg_b32 s_save_tmp, hwreg(HW_REG_WAVE_STATUS)
++ s_bitcmp1_b32 s_save_tmp, SQ_WAVE_STATUS_NO_VGPRS_SHIFT
++ s_cbranch_scc0 L_HAVE_VGPRS
++ s_endpgm
++L_HAVE_VGPRS:
++
++ s_and_b32 s_save_pc_hi, s_save_pc_hi, 0x0000ffff //pc[47:32]
++ s_mov_b32 s_save_tmp, 0
++ s_setreg_b32 hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV, SQ_WAVE_EXCP_FLAG_PRIV_SAVE_CONTEXT_SHIFT, 1), s_save_tmp //clear saveCtx bit
++
++	/* inform SPI of readiness and wait for SPI's go signal */
++ s_mov_b32 s_save_exec_lo, exec_lo //save EXEC and use EXEC for the go signal from SPI
++ s_mov_b32 s_save_exec_hi, exec_hi
++ s_mov_b64 exec, 0x0 //clear EXEC to get ready to receive
++
++ s_sendmsg_rtn_b64 [exec_lo, exec_hi], sendmsg(MSG_RTN_SAVE_WAVE)
++ s_wait_idle
++
++ // Save first_wave flag so we can clear high bits of save address.
++ s_and_b32 s_save_tmp, s_save_spi_init_hi, S_SAVE_SPI_INIT_FIRST_WAVE_MASK
++ s_lshl_b32 s_save_tmp, s_save_tmp, (S_SAVE_PC_HI_FIRST_WAVE_SHIFT - S_SAVE_SPI_INIT_FIRST_WAVE_SHIFT)
++ s_or_b32 s_save_pc_hi, s_save_pc_hi, s_save_tmp
++
++ // Trap temporaries must be saved via VGPR but all VGPRs are in use.
++ // There is no ttmp space to hold the resource constant for VGPR save.
++ // Save v0 by itself since it requires only two SGPRs.
++ s_mov_b32 s_save_ttmps_lo, exec_lo
++ s_and_b32 s_save_ttmps_hi, exec_hi, 0xFFFF
++ s_mov_b32 exec_lo, 0xFFFFFFFF
++ s_mov_b32 exec_hi, 0xFFFFFFFF
++ global_store_dword_addtid v0, [s_save_ttmps_lo, s_save_ttmps_hi] scope:SCOPE_SYS
++ v_mov_b32 v0, 0x0
++ s_mov_b32 exec_lo, s_save_ttmps_lo
++ s_mov_b32 exec_hi, s_save_ttmps_hi
++
++ // Save trap temporaries 4-11, 13 initialized by SPI debug dispatch logic
++ // ttmp SR memory offset : size(VGPR)+size(SVGPR)+size(SGPR)+0x40
++ get_wave_size2(s_save_ttmps_hi)
++ get_vgpr_size_bytes(s_save_ttmps_lo, s_save_ttmps_hi)
++ get_svgpr_size_bytes(s_save_ttmps_hi)
++ s_add_u32 s_save_ttmps_lo, s_save_ttmps_lo, s_save_ttmps_hi
++ s_and_b32 s_save_ttmps_hi, s_save_spi_init_hi, 0xFFFF
++ s_add_u32 s_save_ttmps_lo, s_save_ttmps_lo, get_sgpr_size_bytes()
++ s_add_u32 s_save_ttmps_lo, s_save_ttmps_lo, s_save_spi_init_lo
++ s_addc_u32 s_save_ttmps_hi, s_save_ttmps_hi, 0x0
++
++ v_writelane_b32 v0, ttmp4, 0x4
++ v_writelane_b32 v0, ttmp5, 0x5
++ v_writelane_b32 v0, ttmp6, 0x6
++ v_writelane_b32 v0, ttmp7, 0x7
++ v_writelane_b32 v0, ttmp8, 0x8
++ v_writelane_b32 v0, ttmp9, 0x9
++ v_writelane_b32 v0, ttmp10, 0xA
++ v_writelane_b32 v0, ttmp11, 0xB
++ v_writelane_b32 v0, ttmp13, 0xD
++ v_writelane_b32 v0, exec_lo, 0xE
++ v_writelane_b32 v0, exec_hi, 0xF
++
++ s_mov_b32 exec_lo, 0x3FFF
++ s_mov_b32 exec_hi, 0x0
++ global_store_dword_addtid v0, [s_save_ttmps_lo, s_save_ttmps_hi] offset:0x40 scope:SCOPE_SYS
++ v_readlane_b32 ttmp14, v0, 0xE
++ v_readlane_b32 ttmp15, v0, 0xF
++ s_mov_b32 exec_lo, ttmp14
++ s_mov_b32 exec_hi, ttmp15
++
++	/* setup Resource Constants */
++ s_mov_b32 s_save_buf_rsrc0, s_save_spi_init_lo //base_addr_lo
++ s_and_b32 s_save_buf_rsrc1, s_save_spi_init_hi, 0x0000FFFF //base_addr_hi
++ s_or_b32 s_save_buf_rsrc1, s_save_buf_rsrc1, S_SAVE_BUF_RSRC_WORD1_STRIDE
++	s_mov_b32	s_save_buf_rsrc2, 0					//NUM_RECORDS initial value = 0 (in bytes) although not necessarily initialized
++ s_mov_b32 s_save_buf_rsrc3, S_SAVE_BUF_RSRC_WORD3_MISC
++
++ s_mov_b32 s_save_m0, m0
++
++ /* global mem offset */
++ s_mov_b32 s_save_mem_offset, 0x0
++ get_wave_size2(s_wave_size)
++
++ /* save first 4 VGPRs, needed for SGPR save */
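++	// s_wave_size carries the wave64 flag at bit S_WAVE_SIZE (25), as
++	// placed there by get_wave_size2(); shifting it back down selects the
++	// wave32 or wave64 path and decides whether exec_hi is enabled.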
++ s_mov_b32 exec_lo, 0xFFFFFFFF //need every thread from now on
++ s_lshr_b32 m0, s_wave_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_ENABLE_SAVE_4VGPR_EXEC_HI
++ s_mov_b32 exec_hi, 0x00000000
++ s_branch L_SAVE_4VGPR_WAVE32
++L_ENABLE_SAVE_4VGPR_EXEC_HI:
++ s_mov_b32 exec_hi, 0xFFFFFFFF
++ s_branch L_SAVE_4VGPR_WAVE64
++L_SAVE_4VGPR_WAVE32:
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // VGPR Allocated in 4-GPR granularity
++
++ buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:128
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:128*2
++ buffer_store_dword v3, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:128*3
++ s_branch L_SAVE_HWREG
++
++L_SAVE_4VGPR_WAVE64:
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // VGPR Allocated in 4-GPR granularity
++
++ buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:256
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:256*2
++ buffer_store_dword v3, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:256*3
++
++ /* save HW registers */
++
++L_SAVE_HWREG:
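++	// Save-area layout implied by the offset math in this file:
++	// [VGPRs][shared VGPRs][SGPRs][HWREGs][LDS], with the ttmps stored
++	// 0x40 bytes past the start of the HWREG block.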
++ // HWREG SR memory offset : size(VGPR)+size(SVGPR)+size(SGPR)
++ get_vgpr_size_bytes(s_save_mem_offset, s_wave_size)
++ get_svgpr_size_bytes(s_save_tmp)
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, s_save_tmp
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, get_sgpr_size_bytes()
++
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ v_mov_b32 v0, 0x0 //Offset[31:0] from buffer resource
++ v_mov_b32 v1, 0x0 //Offset[63:32] from buffer resource
++ v_mov_b32 v2, 0x0 //Set of SGPRs for TCP store
++ s_mov_b32 m0, 0x0 //Next lane of v2 to write to
++
++ // Ensure no further changes to barrier or LDS state.
++ // STATE_PRIV.BARRIER_COMPLETE may change up to this point.
++ s_barrier_signal -2
++ s_barrier_wait -2
++
++ // Re-read final state of BARRIER_COMPLETE field for save.
++ s_getreg_b32 s_save_tmp, hwreg(HW_REG_WAVE_STATE_PRIV)
++ s_and_b32 s_save_tmp, s_save_tmp, SQ_WAVE_STATE_PRIV_BARRIER_COMPLETE_MASK
++ s_andn2_b32 s_save_state_priv, s_save_state_priv, SQ_WAVE_STATE_PRIV_BARRIER_COMPLETE_MASK
++ s_or_b32 s_save_state_priv, s_save_state_priv, s_save_tmp
++
++ write_hwreg_to_v2(s_save_m0)
++ write_hwreg_to_v2(s_save_pc_lo)
++ s_andn2_b32 s_save_tmp, s_save_pc_hi, S_SAVE_PC_HI_FIRST_WAVE_MASK
++ write_hwreg_to_v2(s_save_tmp)
++ write_hwreg_to_v2(s_save_exec_lo)
++ write_hwreg_to_v2(s_save_exec_hi)
++ write_hwreg_to_v2(s_save_state_priv)
++
++ s_getreg_b32 s_save_tmp, hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV)
++ write_hwreg_to_v2(s_save_tmp)
++
++ write_hwreg_to_v2(s_save_xnack_mask)
++
++ s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_MODE)
++ write_hwreg_to_v2(s_save_m0)
++
++ s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_SCRATCH_BASE_LO)
++ write_hwreg_to_v2(s_save_m0)
++
++ s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_SCRATCH_BASE_HI)
++ write_hwreg_to_v2(s_save_m0)
++
++ s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_EXCP_FLAG_USER)
++ write_hwreg_to_v2(s_save_m0)
++
++ s_getreg_b32 s_save_m0, hwreg(HW_REG_WAVE_TRAP_CTRL)
++ write_hwreg_to_v2(s_save_m0)
++
++ s_getreg_b32 s_save_tmp, hwreg(HW_REG_WAVE_STATUS)
++ write_hwreg_to_v2(s_save_tmp)
++
++ s_get_barrier_state s_save_tmp, -1
++ s_wait_kmcnt (0)
++ write_hwreg_to_v2(s_save_tmp)
++
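++	// The restore path (L_RESTORE_HWREG) reads these registers back in the
++	// same order, so the write sequence above defines the HWREG block layout.
++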
++ // Write HWREGs with 16 VGPR lanes. TTMPs occupy space after this.
++ s_mov_b32 exec_lo, 0xFFFF
++ s_mov_b32 exec_hi, 0x0
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++
++ // Write SGPRs with 32 VGPR lanes. This works in wave32 and wave64 mode.
++ s_mov_b32 exec_lo, 0xFFFFFFFF
++
++ /* save SGPRs */
++	// Save SGPRs before the LDS save so that s0 to s4 can be used during the LDS save...
++
++ // SGPR SR memory offset : size(VGPR)+size(SVGPR)
++ get_vgpr_size_bytes(s_save_mem_offset, s_wave_size)
++ get_svgpr_size_bytes(s_save_tmp)
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, s_save_tmp
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ s_mov_b32 ttmp13, 0x0 //next VGPR lane to copy SGPR into
++
++ s_mov_b32 m0, 0x0 //SGPR initial index value =0
++ s_nop 0x0 //Manually inserted wait states
++L_SAVE_SGPR_LOOP:
++ // SGPR is allocated in 16 SGPR granularity
++ s_movrels_b64 s0, s0 //s0 = s[0+m0], s1 = s[1+m0]
++ s_movrels_b64 s2, s2 //s2 = s[2+m0], s3 = s[3+m0]
++ s_movrels_b64 s4, s4 //s4 = s[4+m0], s5 = s[5+m0]
++ s_movrels_b64 s6, s6 //s6 = s[6+m0], s7 = s[7+m0]
++ s_movrels_b64 s8, s8 //s8 = s[8+m0], s9 = s[9+m0]
++ s_movrels_b64 s10, s10 //s10 = s[10+m0], s11 = s[11+m0]
++ s_movrels_b64 s12, s12 //s12 = s[12+m0], s13 = s[13+m0]
++ s_movrels_b64 s14, s14 //s14 = s[14+m0], s15 = s[15+m0]
++
++ write_16sgpr_to_v2(s0)
++
++ s_cmp_eq_u32 ttmp13, 0x20 //have 32 VGPR lanes filled?
++ s_cbranch_scc0 L_SAVE_SGPR_SKIP_TCP_STORE
++
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, 0x80
++ s_mov_b32 ttmp13, 0x0
++ v_mov_b32 v2, 0x0
++L_SAVE_SGPR_SKIP_TCP_STORE:
++
++ s_add_u32 m0, m0, 16 //next sgpr index
++ s_cmp_lt_u32 m0, 96 //scc = (m0 < first 96 SGPR) ? 1 : 0
++ s_cbranch_scc1 L_SAVE_SGPR_LOOP //first 96 SGPR save is complete?
++
++	//save the remaining 12 SGPRs
++ s_movrels_b64 s0, s0 //s0 = s[0+m0], s1 = s[1+m0]
++ s_movrels_b64 s2, s2 //s2 = s[2+m0], s3 = s[3+m0]
++ s_movrels_b64 s4, s4 //s4 = s[4+m0], s5 = s[5+m0]
++ s_movrels_b64 s6, s6 //s6 = s[6+m0], s7 = s[7+m0]
++ s_movrels_b64 s8, s8 //s8 = s[8+m0], s9 = s[9+m0]
++ s_movrels_b64 s10, s10 //s10 = s[10+m0], s11 = s[11+m0]
++ write_12sgpr_to_v2(s0)
++
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++
++ /* save LDS */
++
++L_SAVE_LDS:
++ // Change EXEC to all threads...
++ s_mov_b32 exec_lo, 0xFFFFFFFF //need every thread from now on
++ s_lshr_b32 m0, s_wave_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_ENABLE_SAVE_LDS_EXEC_HI
++ s_mov_b32 exec_hi, 0x00000000
++ s_branch L_SAVE_LDS_NORMAL
++L_ENABLE_SAVE_LDS_EXEC_HI:
++ s_mov_b32 exec_hi, 0xFFFFFFFF
++L_SAVE_LDS_NORMAL:
++ s_getreg_b32 s_save_alloc_size, hwreg(HW_REG_WAVE_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_LDS_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_LDS_SIZE_SIZE)
++ s_and_b32 s_save_alloc_size, s_save_alloc_size, 0xFFFFFFFF //lds_size is zero?
++	s_cbranch_scc0	L_SAVE_LDS_DONE						//no lds used? jump to L_SAVE_LDS_DONE
++
++ s_and_b32 s_save_tmp, s_save_pc_hi, S_SAVE_PC_HI_FIRST_WAVE_MASK
++ s_cbranch_scc0 L_SAVE_LDS_DONE
++
++ // first wave do LDS save;
++
++ s_lshl_b32 s_save_alloc_size, s_save_alloc_size, SQ_WAVE_LDS_ALLOC_GRANULARITY
++ s_mov_b32 s_save_buf_rsrc2, s_save_alloc_size //NUM_RECORDS in bytes
++
++ // LDS at offset: size(VGPR)+size(SVGPR)+SIZE(SGPR)+SIZE(HWREG)
++ //
++ get_vgpr_size_bytes(s_save_mem_offset, s_wave_size)
++ get_svgpr_size_bytes(s_save_tmp)
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, s_save_tmp
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, get_sgpr_size_bytes()
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, get_hwreg_size_bytes()
++
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++	//load byte addresses 0~63*4 into vgpr v0
++ v_mbcnt_lo_u32_b32 v0, -1, 0
++ v_mbcnt_hi_u32_b32 v0, -1, v0
++ v_mul_u32_u24 v0, 4, v0
++
++ s_lshr_b32 m0, s_wave_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_mov_b32 m0, 0x0
++ s_cbranch_scc1 L_SAVE_LDS_W64
++
++L_SAVE_LDS_W32:
++ s_mov_b32 s3, 128
++ s_nop 0
++ s_nop 0
++ s_nop 0
++L_SAVE_LDS_LOOP_W32:
++ ds_read_b32 v1, v0
++ s_wait_idle
++ buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++
++ s_add_u32 m0, m0, s3 //every buffer_store_lds does 128 bytes
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, s3
++ v_add_nc_u32 v0, v0, 128 //mem offset increased by 128 bytes
++ s_cmp_lt_u32 m0, s_save_alloc_size //scc=(m0 < s_save_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_SAVE_LDS_LOOP_W32 //LDS save is complete?
++
++ s_branch L_SAVE_LDS_DONE
++
++L_SAVE_LDS_W64:
++ s_mov_b32 s3, 256
++ s_nop 0
++ s_nop 0
++ s_nop 0
++L_SAVE_LDS_LOOP_W64:
++ ds_read_b32 v1, v0
++ s_wait_idle
++ buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++
++ s_add_u32 m0, m0, s3 //every buffer_store_lds does 256 bytes
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, s3
++ v_add_nc_u32 v0, v0, 256 //mem offset increased by 256 bytes
++ s_cmp_lt_u32 m0, s_save_alloc_size //scc=(m0 < s_save_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_SAVE_LDS_LOOP_W64 //LDS save is complete?
++
++L_SAVE_LDS_DONE:
++	/* save the remaining VGPRs */
++L_SAVE_VGPR:
++ // VGPR SR memory offset: 0
++ s_mov_b32 exec_lo, 0xFFFFFFFF //need every thread from now on
++ s_lshr_b32 m0, s_wave_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_ENABLE_SAVE_VGPR_EXEC_HI
++	s_mov_b32	s_save_mem_offset, (0+128*4)				// for the remaining VGPRs
++ s_mov_b32 exec_hi, 0x00000000
++ s_branch L_SAVE_VGPR_NORMAL
++L_ENABLE_SAVE_VGPR_EXEC_HI:
++	s_mov_b32	s_save_mem_offset, (0+256*4)				// for the remaining VGPRs
++ s_mov_b32 exec_hi, 0xFFFFFFFF
++L_SAVE_VGPR_NORMAL:
++ s_getreg_b32 s_save_alloc_size, hwreg(HW_REG_WAVE_GPR_ALLOC,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SIZE)
++ s_add_u32 s_save_alloc_size, s_save_alloc_size, 1
++ s_lshl_b32 s_save_alloc_size, s_save_alloc_size, 2 //Number of VGPRs = (vgpr_size + 1) * 4 (non-zero value)
++ //determine it is wave32 or wave64
++ s_lshr_b32 m0, s_wave_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_SAVE_VGPR_WAVE64
++
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // VGPR Allocated in 4-GPR granularity
++
++ // VGPR store using dw burst
++ s_mov_b32 m0, 0x4 //VGPR initial index value =4
++ s_cmp_lt_u32 m0, s_save_alloc_size
++ s_cbranch_scc0 L_SAVE_VGPR_END
++
++L_SAVE_VGPR_W32_LOOP:
++ v_movrels_b32 v0, v0 //v0 = v[0+m0]
++ v_movrels_b32 v1, v1 //v1 = v[1+m0]
++ v_movrels_b32 v2, v2 //v2 = v[2+m0]
++ v_movrels_b32 v3, v3 //v3 = v[3+m0]
++
++ buffer_store_dword v0, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++ buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:128
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:128*2
++ buffer_store_dword v3, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:128*3
++
++ s_add_u32 m0, m0, 4 //next vgpr index
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, 128*4 //every buffer_store_dword does 128 bytes
++ s_cmp_lt_u32 m0, s_save_alloc_size //scc = (m0 < s_save_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_SAVE_VGPR_W32_LOOP //VGPR save is complete?
++
++ s_branch L_SAVE_VGPR_END
++
++L_SAVE_VGPR_WAVE64:
++ s_mov_b32 s_save_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // VGPR store using dw burst
++ s_mov_b32 m0, 0x4 //VGPR initial index value =4
++ s_cmp_lt_u32 m0, s_save_alloc_size
++ s_cbranch_scc0 L_SAVE_SHARED_VGPR
++
++L_SAVE_VGPR_W64_LOOP:
++ v_movrels_b32 v0, v0 //v0 = v[0+m0]
++ v_movrels_b32 v1, v1 //v1 = v[1+m0]
++ v_movrels_b32 v2, v2 //v2 = v[2+m0]
++ v_movrels_b32 v3, v3 //v3 = v[3+m0]
++
++ buffer_store_dword v0, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++ buffer_store_dword v1, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:256
++ buffer_store_dword v2, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:256*2
++ buffer_store_dword v3, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS offset:256*3
++
++ s_add_u32 m0, m0, 4 //next vgpr index
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, 256*4 //every buffer_store_dword does 256 bytes
++ s_cmp_lt_u32 m0, s_save_alloc_size //scc = (m0 < s_save_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_SAVE_VGPR_W64_LOOP //VGPR save is complete?
++
++L_SAVE_SHARED_VGPR:
++ s_getreg_b32 s_save_alloc_size, hwreg(HW_REG_WAVE_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SIZE)
++ s_and_b32 s_save_alloc_size, s_save_alloc_size, 0xFFFFFFFF //shared_vgpr_size is zero?
++	s_cbranch_scc0	L_SAVE_VGPR_END						//no shared_vgpr used? jump to L_SAVE_VGPR_END
++ s_lshl_b32 s_save_alloc_size, s_save_alloc_size, 3 //Number of SHARED_VGPRs = shared_vgpr_size * 8 (non-zero value)
++	//m0 now holds the normal vgpr count; add the shared_vgpr count to it to get the total count.
++	//shared_vgpr save will start from the index in m0
++ s_add_u32 s_save_alloc_size, s_save_alloc_size, m0
++ s_mov_b32 exec_lo, 0xFFFFFFFF
++ s_mov_b32 exec_hi, 0x00000000
++
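++	// This point is only reached on the wave64 path; shared VGPRs hold 32
++	// lanes each, so they are stored one register at a time with only
++	// exec_lo enabled (32 lanes * 4 bytes = 128-byte stride).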
++L_SAVE_SHARED_VGPR_WAVE64_LOOP:
++ v_movrels_b32 v0, v0 //v0 = v[0+m0]
++ buffer_store_dword v0, v0, s_save_buf_rsrc0, s_save_mem_offset scope:SCOPE_SYS
++ s_add_u32 m0, m0, 1 //next vgpr index
++ s_add_u32 s_save_mem_offset, s_save_mem_offset, 128
++ s_cmp_lt_u32 m0, s_save_alloc_size //scc = (m0 < s_save_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_SAVE_SHARED_VGPR_WAVE64_LOOP //SHARED_VGPR save is complete?
++
++L_SAVE_VGPR_END:
++ s_branch L_END_PGM
++
++L_RESTORE:
++	/* Setup Resource Constants */
++ s_mov_b32 s_restore_buf_rsrc0, s_restore_spi_init_lo //base_addr_lo
++ s_and_b32 s_restore_buf_rsrc1, s_restore_spi_init_hi, 0x0000FFFF //base_addr_hi
++ s_or_b32 s_restore_buf_rsrc1, s_restore_buf_rsrc1, S_RESTORE_BUF_RSRC_WORD1_STRIDE
++ s_mov_b32 s_restore_buf_rsrc2, 0 //NUM_RECORDS initial value = 0 (in bytes)
++ s_mov_b32 s_restore_buf_rsrc3, S_RESTORE_BUF_RSRC_WORD3_MISC
++
++ // Save s_restore_spi_init_hi for later use.
++ s_mov_b32 s_restore_spi_init_hi_save, s_restore_spi_init_hi
++
++ //determine it is wave32 or wave64
++ get_wave_size2(s_restore_size)
++
++ s_and_b32 s_restore_tmp, s_restore_spi_init_hi, S_RESTORE_SPI_INIT_FIRST_WAVE_MASK
++ s_cbranch_scc0 L_RESTORE_VGPR
++
++ /* restore LDS */
++L_RESTORE_LDS:
++ s_mov_b32 exec_lo, 0xFFFFFFFF //need every thread from now on
++ s_lshr_b32 m0, s_restore_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_ENABLE_RESTORE_LDS_EXEC_HI
++ s_mov_b32 exec_hi, 0x00000000
++ s_branch L_RESTORE_LDS_NORMAL
++L_ENABLE_RESTORE_LDS_EXEC_HI:
++ s_mov_b32 exec_hi, 0xFFFFFFFF
++L_RESTORE_LDS_NORMAL:
++ s_getreg_b32 s_restore_alloc_size, hwreg(HW_REG_WAVE_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_LDS_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_LDS_SIZE_SIZE)
++ s_and_b32 s_restore_alloc_size, s_restore_alloc_size, 0xFFFFFFFF //lds_size is zero?
++ s_cbranch_scc0 L_RESTORE_VGPR //no lds used? jump to L_RESTORE_VGPR
++ s_lshl_b32 s_restore_alloc_size, s_restore_alloc_size, SQ_WAVE_LDS_ALLOC_GRANULARITY
++ s_mov_b32 s_restore_buf_rsrc2, s_restore_alloc_size //NUM_RECORDS in bytes
++
++ // LDS at offset: size(VGPR)+size(SVGPR)+SIZE(SGPR)+SIZE(HWREG)
++ //
++ get_vgpr_size_bytes(s_restore_mem_offset, s_restore_size)
++ get_svgpr_size_bytes(s_restore_tmp)
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, s_restore_tmp
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, get_sgpr_size_bytes()
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, get_hwreg_size_bytes()
++
++ s_mov_b32 s_restore_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ s_lshr_b32 m0, s_restore_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_mov_b32 m0, 0x0
++ s_cbranch_scc1 L_RESTORE_LDS_LOOP_W64
++
++L_RESTORE_LDS_LOOP_W32:
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset
++ s_wait_idle
++ ds_store_addtid_b32 v0
++	s_add_u32	m0, m0, 128						// 128 bytes
++	s_add_u32	s_restore_mem_offset, s_restore_mem_offset, 128		//mem offset increased by 128 bytes
++ s_cmp_lt_u32 m0, s_restore_alloc_size //scc=(m0 < s_restore_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_RESTORE_LDS_LOOP_W32 //LDS restore is complete?
++ s_branch L_RESTORE_VGPR
++
++L_RESTORE_LDS_LOOP_W64:
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset
++ s_wait_idle
++ ds_store_addtid_b32 v0
++	s_add_u32	m0, m0, 256						// 256 bytes
++	s_add_u32	s_restore_mem_offset, s_restore_mem_offset, 256		//mem offset increased by 256 bytes
++ s_cmp_lt_u32 m0, s_restore_alloc_size //scc=(m0 < s_restore_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_RESTORE_LDS_LOOP_W64 //LDS restore is complete?
++
++ /* restore VGPRs */
++L_RESTORE_VGPR:
++ // VGPR SR memory offset : 0
++ s_mov_b32 s_restore_mem_offset, 0x0
++ s_mov_b32 exec_lo, 0xFFFFFFFF //need every thread from now on
++ s_lshr_b32 m0, s_restore_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_ENABLE_RESTORE_VGPR_EXEC_HI
++ s_mov_b32 exec_hi, 0x00000000
++ s_branch L_RESTORE_VGPR_NORMAL
++L_ENABLE_RESTORE_VGPR_EXEC_HI:
++ s_mov_b32 exec_hi, 0xFFFFFFFF
++L_RESTORE_VGPR_NORMAL:
++ s_getreg_b32 s_restore_alloc_size, hwreg(HW_REG_WAVE_GPR_ALLOC,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SIZE)
++ s_add_u32 s_restore_alloc_size, s_restore_alloc_size, 1
++ s_lshl_b32 s_restore_alloc_size, s_restore_alloc_size, 2 //Number of VGPRs = (vgpr_size + 1) * 4 (non-zero value)
++ //determine it is wave32 or wave64
++ s_lshr_b32 m0, s_restore_size, S_WAVE_SIZE
++ s_and_b32 m0, m0, 1
++ s_cmp_eq_u32 m0, 1
++ s_cbranch_scc1 L_RESTORE_VGPR_WAVE64
++
++ s_mov_b32 s_restore_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // VGPR load using dw burst
++	s_mov_b32	s_restore_mem_offset_save, s_restore_mem_offset		// restore starts with v4, v0 will be the last
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 128*4
++ s_mov_b32 m0, 4 //VGPR initial index value = 4
++ s_cmp_lt_u32 m0, s_restore_alloc_size
++ s_cbranch_scc0 L_RESTORE_SGPR
++
++L_RESTORE_VGPR_WAVE32_LOOP:
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS
++ buffer_load_dword v1, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS offset:128
++ buffer_load_dword v2, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS offset:128*2
++ buffer_load_dword v3, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS offset:128*3
++ s_wait_idle
++ v_movreld_b32 v0, v0 //v[0+m0] = v0
++ v_movreld_b32 v1, v1
++ v_movreld_b32 v2, v2
++ v_movreld_b32 v3, v3
++ s_add_u32 m0, m0, 4 //next vgpr index
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 128*4 //every buffer_load_dword does 128 bytes
++ s_cmp_lt_u32 m0, s_restore_alloc_size //scc = (m0 < s_restore_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_RESTORE_VGPR_WAVE32_LOOP //VGPR restore (except v0) is complete?
++
++ /* VGPR restore on v0 */
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS
++ buffer_load_dword v1, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS offset:128
++ buffer_load_dword v2, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS offset:128*2
++ buffer_load_dword v3, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS offset:128*3
++ s_wait_idle
++
++ s_branch L_RESTORE_SGPR
++
++L_RESTORE_VGPR_WAVE64:
++ s_mov_b32 s_restore_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // VGPR load using dw burst
++	s_mov_b32	s_restore_mem_offset_save, s_restore_mem_offset		// restore starts with v4, v0 will be the last
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 256*4
++ s_mov_b32 m0, 4 //VGPR initial index value = 4
++ s_cmp_lt_u32 m0, s_restore_alloc_size
++ s_cbranch_scc0 L_RESTORE_SHARED_VGPR
++
++L_RESTORE_VGPR_WAVE64_LOOP:
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS
++ buffer_load_dword v1, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS offset:256
++ buffer_load_dword v2, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS offset:256*2
++ buffer_load_dword v3, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS offset:256*3
++ s_wait_idle
++ v_movreld_b32 v0, v0 //v[0+m0] = v0
++ v_movreld_b32 v1, v1
++ v_movreld_b32 v2, v2
++ v_movreld_b32 v3, v3
++ s_add_u32 m0, m0, 4 //next vgpr index
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 256*4 //every buffer_load_dword does 256 bytes
++ s_cmp_lt_u32 m0, s_restore_alloc_size //scc = (m0 < s_restore_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_RESTORE_VGPR_WAVE64_LOOP //VGPR restore (except v0) is complete?
++
++L_RESTORE_SHARED_VGPR:
++ s_getreg_b32 s_restore_alloc_size, hwreg(HW_REG_WAVE_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SIZE) //shared_vgpr_size
++ s_and_b32 s_restore_alloc_size, s_restore_alloc_size, 0xFFFFFFFF //shared_vgpr_size is zero?
++ s_cbranch_scc0 L_RESTORE_V0 //no shared_vgpr used?
++ s_lshl_b32 s_restore_alloc_size, s_restore_alloc_size, 3 //Number of SHARED_VGPRs = shared_vgpr_size * 8 (non-zero value)
++	//m0 now holds the normal vgpr count; add the shared_vgpr count to it to get the total count.
++	//shared_vgpr restore will start from the index in m0
++ s_add_u32 s_restore_alloc_size, s_restore_alloc_size, m0
++ s_mov_b32 exec_lo, 0xFFFFFFFF
++ s_mov_b32 exec_hi, 0x00000000
++L_RESTORE_SHARED_VGPR_WAVE64_LOOP:
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset scope:SCOPE_SYS
++ s_wait_idle
++ v_movreld_b32 v0, v0 //v[0+m0] = v0
++ s_add_u32 m0, m0, 1 //next vgpr index
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 128
++ s_cmp_lt_u32 m0, s_restore_alloc_size //scc = (m0 < s_restore_alloc_size) ? 1 : 0
++ s_cbranch_scc1 L_RESTORE_SHARED_VGPR_WAVE64_LOOP //VGPR restore (except v0) is complete?
++
++	s_mov_b32	exec_hi, 0xFFFFFFFF					//restore exec_hi before restoring v0
++
++ /* VGPR restore on v0 */
++L_RESTORE_V0:
++ buffer_load_dword v0, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS
++ buffer_load_dword v1, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS offset:256
++ buffer_load_dword v2, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS offset:256*2
++ buffer_load_dword v3, v0, s_restore_buf_rsrc0, s_restore_mem_offset_save scope:SCOPE_SYS offset:256*3
++ s_wait_idle
++
++ /* restore SGPRs */
++	//will be 4+8+16*6
++ // SGPR SR memory offset : size(VGPR)+size(SVGPR)
++L_RESTORE_SGPR:
++ get_vgpr_size_bytes(s_restore_mem_offset, s_restore_size)
++ get_svgpr_size_bytes(s_restore_tmp)
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, s_restore_tmp
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, get_sgpr_size_bytes()
++	s_sub_u32	s_restore_mem_offset, s_restore_mem_offset, 20*4	//s108~s127 are not saved
++
++ s_mov_b32 s_restore_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ s_mov_b32 m0, s_sgpr_save_num
++
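++	// m0 counts down from s_sgpr_save_num (108): restore 4, then 8, then
++	// six batches of 16, mirroring the 96+12 SGPRs written on the save path.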
++ read_4sgpr_from_mem(s0, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++
++ s_sub_u32 m0, m0, 4 // Restore from S[0] to S[104]
++ s_nop 0 // hazard SALU M0=> S_MOVREL
++
++ s_movreld_b64 s0, s0 //s[0+m0] = s0
++ s_movreld_b64 s2, s2
++
++ read_8sgpr_from_mem(s0, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++
++ s_sub_u32 m0, m0, 8 // Restore from S[0] to S[96]
++ s_nop 0 // hazard SALU M0=> S_MOVREL
++
++ s_movreld_b64 s0, s0 //s[0+m0] = s0
++ s_movreld_b64 s2, s2
++ s_movreld_b64 s4, s4
++ s_movreld_b64 s6, s6
++
++ L_RESTORE_SGPR_LOOP:
++ read_16sgpr_from_mem(s0, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++
++ s_sub_u32 m0, m0, 16 // Restore from S[n] to S[0]
++ s_nop 0 // hazard SALU M0=> S_MOVREL
++
++ s_movreld_b64 s0, s0 //s[0+m0] = s0
++ s_movreld_b64 s2, s2
++ s_movreld_b64 s4, s4
++ s_movreld_b64 s6, s6
++ s_movreld_b64 s8, s8
++ s_movreld_b64 s10, s10
++ s_movreld_b64 s12, s12
++ s_movreld_b64 s14, s14
++
++	s_cmp_eq_u32	m0, 0				//scc = (m0 == 0) ? 1 : 0
++ s_cbranch_scc0 L_RESTORE_SGPR_LOOP
++
++ // s_barrier with STATE_PRIV.TRAP_AFTER_INST=1, STATUS.PRIV=1 incorrectly asserts debug exception.
++ // Clear DEBUG_EN before and restore MODE after the barrier.
++ s_setreg_imm32_b32 hwreg(HW_REG_WAVE_MODE), 0
++
++ /* restore HW registers */
++L_RESTORE_HWREG:
++ // HWREG SR memory offset : size(VGPR)+size(SVGPR)+size(SGPR)
++ get_vgpr_size_bytes(s_restore_mem_offset, s_restore_size)
++ get_svgpr_size_bytes(s_restore_tmp)
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, s_restore_tmp
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, get_sgpr_size_bytes()
++
++ s_mov_b32 s_restore_buf_rsrc2, 0x1000000 //NUM_RECORDS in bytes
++
++ // Restore s_restore_spi_init_hi before the saved value gets clobbered.
++ s_mov_b32 s_restore_spi_init_hi, s_restore_spi_init_hi_save
++
++ read_hwreg_from_mem(s_restore_m0, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_pc_lo, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_pc_hi, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_exec_lo, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_exec_hi, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_state_priv, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_excp_flag_priv, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_xnack_mask, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_mode, s_restore_buf_rsrc0, s_restore_mem_offset)
++ read_hwreg_from_mem(s_restore_flat_scratch, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++
++ s_setreg_b32 hwreg(HW_REG_WAVE_SCRATCH_BASE_LO), s_restore_flat_scratch
++
++ read_hwreg_from_mem(s_restore_flat_scratch, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++
++ s_setreg_b32 hwreg(HW_REG_WAVE_SCRATCH_BASE_HI), s_restore_flat_scratch
++
++ read_hwreg_from_mem(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++ s_setreg_b32 hwreg(HW_REG_WAVE_EXCP_FLAG_USER), s_restore_tmp
++
++ read_hwreg_from_mem(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++ s_setreg_b32 hwreg(HW_REG_WAVE_TRAP_CTRL), s_restore_tmp
++
++ // Only the first wave needs to restore the workgroup barrier.
++ s_and_b32 s_restore_tmp, s_restore_spi_init_hi, S_RESTORE_SPI_INIT_FIRST_WAVE_MASK
++ s_cbranch_scc0 L_SKIP_BARRIER_RESTORE
++
++ // Skip over WAVE_STATUS, since there is no state to restore from it
++ s_add_u32 s_restore_mem_offset, s_restore_mem_offset, 4
++
++ read_hwreg_from_mem(s_restore_tmp, s_restore_buf_rsrc0, s_restore_mem_offset)
++ s_wait_idle
++
++ s_bitcmp1_b32 s_restore_tmp, BARRIER_STATE_VALID_OFFSET
++ s_cbranch_scc0 L_SKIP_BARRIER_RESTORE
++
++ // extract the saved signal count from s_restore_tmp
++ s_lshr_b32 s_restore_tmp, s_restore_tmp, BARRIER_STATE_SIGNAL_OFFSET
++
++ // We need to call s_barrier_signal repeatedly to restore the signal
++ // count of the work group barrier. The member count is already
++ // initialized with the number of waves in the work group.
++L_BARRIER_RESTORE_LOOP:
++ s_and_b32 s_restore_tmp, s_restore_tmp, s_restore_tmp
++ s_cbranch_scc0 L_SKIP_BARRIER_RESTORE
++ s_barrier_signal -1
++ s_add_i32 s_restore_tmp, s_restore_tmp, -1
++ s_branch L_BARRIER_RESTORE_LOOP
++
++L_SKIP_BARRIER_RESTORE:
++
++ s_mov_b32 m0, s_restore_m0
++ s_mov_b32 exec_lo, s_restore_exec_lo
++ s_mov_b32 exec_hi, s_restore_exec_hi
++
++ // EXCP_FLAG_PRIV.SAVE_CONTEXT and HOST_TRAP may have changed.
++ // Only restore the other fields to avoid clobbering them.
++ s_setreg_b32 hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV, 0, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_1_SIZE), s_restore_excp_flag_priv
++ s_lshr_b32 s_restore_excp_flag_priv, s_restore_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_2_SHIFT
++ s_setreg_b32 hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_2_SHIFT, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_2_SIZE), s_restore_excp_flag_priv
++ s_lshr_b32 s_restore_excp_flag_priv, s_restore_excp_flag_priv, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_3_SHIFT - SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_2_SHIFT
++ s_setreg_b32 hwreg(HW_REG_WAVE_EXCP_FLAG_PRIV, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_3_SHIFT, SQ_WAVE_EXCP_FLAG_PRIV_RESTORE_PART_3_SIZE), s_restore_excp_flag_priv
++
++ s_setreg_b32 hwreg(HW_REG_WAVE_MODE), s_restore_mode
++
++ // Restore trap temporaries 4-11, 13 initialized by SPI debug dispatch logic
++ // ttmp SR memory offset : size(VGPR)+size(SVGPR)+size(SGPR)+0x40
++ get_vgpr_size_bytes(s_restore_ttmps_lo, s_restore_size)
++ get_svgpr_size_bytes(s_restore_ttmps_hi)
++ s_add_u32 s_restore_ttmps_lo, s_restore_ttmps_lo, s_restore_ttmps_hi
++ s_add_u32 s_restore_ttmps_lo, s_restore_ttmps_lo, get_sgpr_size_bytes()
++ s_add_u32 s_restore_ttmps_lo, s_restore_ttmps_lo, s_restore_buf_rsrc0
++ s_addc_u32 s_restore_ttmps_hi, s_restore_buf_rsrc1, 0x0
++ s_and_b32 s_restore_ttmps_hi, s_restore_ttmps_hi, 0xFFFF
++ s_load_dwordx4 [ttmp4, ttmp5, ttmp6, ttmp7], [s_restore_ttmps_lo, s_restore_ttmps_hi], 0x50 scope:SCOPE_SYS
++ s_load_dwordx4 [ttmp8, ttmp9, ttmp10, ttmp11], [s_restore_ttmps_lo, s_restore_ttmps_hi], 0x60 scope:SCOPE_SYS
++ s_load_dword ttmp13, [s_restore_ttmps_lo, s_restore_ttmps_hi], 0x74 scope:SCOPE_SYS
++ s_wait_idle
++
++ s_and_b32 s_restore_pc_hi, s_restore_pc_hi, 0x0000ffff //pc[47:32] //Do it here in order not to affect STATUS
++ s_and_b64 exec, exec, exec // Restore STATUS.EXECZ, not writable by s_setreg_b32
++ s_and_b64 vcc, vcc, vcc // Restore STATUS.VCCZ, not writable by s_setreg_b32
++
++ s_setreg_b32 hwreg(HW_REG_WAVE_STATE_PRIV), s_restore_state_priv // SCC is included, which is changed by previous salu
++
++ // Make barrier and LDS state visible to all waves in the group.
++ // STATE_PRIV.BARRIER_COMPLETE may change after this point.
++ s_barrier_signal -2
++ s_barrier_wait -2
++
++ s_rfe_b64 s_restore_pc_lo //Return to the main shader program and resume execution
++
++L_END_PGM:
++ // Make sure that no wave of the workgroup can exit the trap handler
++ // before the workgroup barrier state is saved.
++ s_barrier_signal -2
++ s_barrier_wait -2
++ s_endpgm_saved
++end
++
++function write_hwreg_to_v2(s)
++ // Copy into VGPR for later TCP store.
++ v_writelane_b32 v2, s, m0
++ s_add_u32 m0, m0, 0x1
++end
++
++
++function write_16sgpr_to_v2(s)
++ // Copy into VGPR for later TCP store.
++ for var sgpr_idx = 0; sgpr_idx < 16; sgpr_idx ++
++ v_writelane_b32 v2, s[sgpr_idx], ttmp13
++ s_add_u32 ttmp13, ttmp13, 0x1
++ end
++end
++
++function write_12sgpr_to_v2(s)
++ // Copy into VGPR for later TCP store.
++ for var sgpr_idx = 0; sgpr_idx < 12; sgpr_idx ++
++ v_writelane_b32 v2, s[sgpr_idx], ttmp13
++ s_add_u32 ttmp13, ttmp13, 0x1
++ end
++end
++
++function read_hwreg_from_mem(s, s_rsrc, s_mem_offset)
++ s_buffer_load_dword s, s_rsrc, s_mem_offset scope:SCOPE_SYS
++ s_add_u32 s_mem_offset, s_mem_offset, 4
++end
++
++function read_16sgpr_from_mem(s, s_rsrc, s_mem_offset)
++ s_sub_u32 s_mem_offset, s_mem_offset, 4*16
++ s_buffer_load_dwordx16 s, s_rsrc, s_mem_offset scope:SCOPE_SYS
++end
++
++function read_8sgpr_from_mem(s, s_rsrc, s_mem_offset)
++ s_sub_u32 s_mem_offset, s_mem_offset, 4*8
++ s_buffer_load_dwordx8 s, s_rsrc, s_mem_offset scope:SCOPE_SYS
++end
++
++function read_4sgpr_from_mem(s, s_rsrc, s_mem_offset)
++ s_sub_u32 s_mem_offset, s_mem_offset, 4*4
++ s_buffer_load_dwordx4 s, s_rsrc, s_mem_offset scope:SCOPE_SYS
++end
++
++function get_vgpr_size_bytes(s_vgpr_size_byte, s_size)
++ s_getreg_b32 s_vgpr_size_byte, hwreg(HW_REG_WAVE_GPR_ALLOC,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SHIFT,SQ_WAVE_GPR_ALLOC_VGPR_SIZE_SIZE)
++ s_add_u32 s_vgpr_size_byte, s_vgpr_size_byte, 1
++ s_bitcmp1_b32 s_size, S_WAVE_SIZE
++ s_cbranch_scc1 L_ENABLE_SHIFT_W64
++ s_lshl_b32 s_vgpr_size_byte, s_vgpr_size_byte, (2+7) //Number of VGPRs = (vgpr_size + 1) * 4 * 32 * 4 (non-zero value)
++ s_branch L_SHIFT_DONE
++L_ENABLE_SHIFT_W64:
++ s_lshl_b32 s_vgpr_size_byte, s_vgpr_size_byte, (2+8) //Number of VGPRs = (vgpr_size + 1) * 4 * 64 * 4 (non-zero value)
++L_SHIFT_DONE:
++end
++
++function get_svgpr_size_bytes(s_svgpr_size_byte)
++ s_getreg_b32 s_svgpr_size_byte, hwreg(HW_REG_WAVE_LDS_ALLOC,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SHIFT,SQ_WAVE_LDS_ALLOC_VGPR_SHARED_SIZE_SIZE)
++ s_lshl_b32 s_svgpr_size_byte, s_svgpr_size_byte, (3+7)
++end
++
++function get_sgpr_size_bytes
++ return 512
++end
++
++function get_hwreg_size_bytes
++ return 128
++end
++
++function get_wave_size2(s_reg)
++ s_getreg_b32 s_reg, hwreg(HW_REG_WAVE_STATUS,SQ_WAVE_STATUS_WAVE64_SHIFT,SQ_WAVE_STATUS_WAVE64_SIZE)
++ s_lshl_b32 s_reg, s_reg, S_WAVE_SIZE
++end
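
For reference, the save/restore offsets used throughout the handler above all derive from get_vgpr_size_bytes(): the hardware reports the VGPR allocation as a minus-one-encoded count of 4-register blocks, which the shifts (2+7) and (2+8) scale by 4 bytes per lane and 32 or 64 lanes per wave. A minimal standalone C sketch of the same arithmetic (vgpr_save_bytes() is an illustrative name, not part of the patch):

    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors get_vgpr_size_bytes(): (field + 1) blocks of 4 VGPRs,
     * 4 bytes per lane, 32 or 64 lanes per wave. */
    static uint32_t vgpr_save_bytes(uint32_t vgpr_size_field, int wave64)
    {
        uint32_t blocks = vgpr_size_field + 1;    /* field is minus-one encoded */

        /* blocks * 4 regs * (32 or 64) lanes * 4 bytes == blocks << (2+7 or 2+8) */
        return blocks << (wave64 ? (2 + 8) : (2 + 7));
    }

    int main(void)
    {
        printf("%u\n", vgpr_save_bytes(7, 1));    /* 8 blocks, wave64: 8192 bytes */
        return 0;
    }
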
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile b/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
+index ab1132bc896a32..d9955c5d2e5ed5 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/Makefile
+@@ -174,7 +174,7 @@ AMD_DISPLAY_FILES += $(AMD_DAL_CLK_MGR_DCN32)
+ ###############################################################################
+ # DCN35
+ ###############################################################################
+-CLK_MGR_DCN35 = dcn35_smu.o dcn35_clk_mgr.o
++CLK_MGR_DCN35 = dcn35_smu.o dcn351_clk_mgr.o dcn35_clk_mgr.o
+
+ AMD_DAL_CLK_MGR_DCN35 = $(addprefix $(AMDDALPATH)/dc/clk_mgr/dcn35/,$(CLK_MGR_DCN35))
+
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+index 0e243f4344d050..4c3e58c730b11c 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/clk_mgr.c
+@@ -355,8 +355,11 @@ struct clk_mgr *dc_clk_mgr_create(struct dc_context *ctx, struct pp_smu_funcs *p
+ BREAK_TO_DEBUGGER();
+ return NULL;
+ }
++ if (ctx->dce_version == DCN_VERSION_3_51)
++ dcn351_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
++ else
++ dcn35_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
+
+- dcn35_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
+ return &clk_mgr->base.base;
+ }
+ break;
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+index e93df3d6222e68..bc123f1884da32 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn31/dcn31_clk_mgr.c
+@@ -50,12 +50,13 @@
+ #include "link.h"
+
+ #include "logger_types.h"
++
++
++#include "yellow_carp_offset.h"
+ #undef DC_LOGGER
+ #define DC_LOGGER \
+ clk_mgr->base.base.ctx->logger
+
+-#include "yellow_carp_offset.h"
+-
+ #define regCLK1_CLK_PLL_REQ 0x0237
+ #define regCLK1_CLK_PLL_REQ_BASE_IDX 0
+
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
+index 29eff386505ab5..91d872d6d392b1 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn314/dcn314_clk_mgr.c
+@@ -53,9 +53,6 @@
+
+
+ #include "logger_types.h"
+-#undef DC_LOGGER
+-#define DC_LOGGER \
+- clk_mgr->base.base.ctx->logger
+
+
+ #define MAX_INSTANCE 7
+@@ -77,6 +74,9 @@ static const struct IP_BASE CLK_BASE = { { { { 0x00016C00, 0x02401800, 0, 0, 0,
+ { { 0x0001B200, 0x0242DC00, 0, 0, 0, 0, 0, 0 } },
+ { { 0x0001B400, 0x0242E000, 0, 0, 0, 0, 0, 0 } } } };
+
++#undef DC_LOGGER
++#define DC_LOGGER \
++ clk_mgr->base.base.ctx->logger
+ #define regCLK1_CLK_PLL_REQ 0x0237
+ #define regCLK1_CLK_PLL_REQ_BASE_IDX 0
+
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn351_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn351_clk_mgr.c
+new file mode 100644
+index 00000000000000..6a6ae618650b6d
+--- /dev/null
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn351_clk_mgr.c
+@@ -0,0 +1,140 @@
++/*
++ * Copyright 2024 Advanced Micro Devices, Inc.
++ *
++ * Permission is hereby granted, free of charge, to any person obtaining a
++ * copy of this software and associated documentation files (the "Software"),
++ * to deal in the Software without restriction, including without limitation
++ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
++ * and/or sell copies of the Software, and to permit persons to whom the
++ * Software is furnished to do so, subject to the following conditions:
++ *
++ * The above copyright notice and this permission notice shall be included in
++ * all copies or substantial portions of the Software.
++ *
++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
++ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
++ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
++ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
++ * OTHER DEALINGS IN THE SOFTWARE.
++ *
++ * Authors: AMD
++ *
++ */
++
++#include "core_types.h"
++#include "dcn35_clk_mgr.h"
++
++#define DCN_BASE__INST0_SEG1 0x000000C0
++#define mmCLK1_CLK_PLL_REQ 0x16E37
++
++#define mmCLK1_CLK0_DFS_CNTL 0x16E69
++#define mmCLK1_CLK1_DFS_CNTL 0x16E6C
++#define mmCLK1_CLK2_DFS_CNTL 0x16E6F
++#define mmCLK1_CLK3_DFS_CNTL 0x16E72
++#define mmCLK1_CLK4_DFS_CNTL 0x16E75
++#define mmCLK1_CLK5_DFS_CNTL 0x16E78
++
++#define mmCLK1_CLK0_CURRENT_CNT 0x16EFC
++#define mmCLK1_CLK1_CURRENT_CNT 0x16EFD
++#define mmCLK1_CLK2_CURRENT_CNT 0x16EFE
++#define mmCLK1_CLK3_CURRENT_CNT 0x16EFF
++#define mmCLK1_CLK4_CURRENT_CNT 0x16F00
++#define mmCLK1_CLK5_CURRENT_CNT 0x16F01
++
++#define mmCLK1_CLK0_BYPASS_CNTL 0x16E8A
++#define mmCLK1_CLK1_BYPASS_CNTL 0x16E93
++#define mmCLK1_CLK2_BYPASS_CNTL 0x16E9C
++#define mmCLK1_CLK3_BYPASS_CNTL 0x16EA5
++#define mmCLK1_CLK4_BYPASS_CNTL 0x16EAE
++#define mmCLK1_CLK5_BYPASS_CNTL 0x16EB7
++
++#define mmCLK1_CLK0_DS_CNTL 0x16E83
++#define mmCLK1_CLK1_DS_CNTL 0x16E8C
++#define mmCLK1_CLK2_DS_CNTL 0x16E95
++#define mmCLK1_CLK3_DS_CNTL 0x16E9E
++#define mmCLK1_CLK4_DS_CNTL 0x16EA7
++#define mmCLK1_CLK5_DS_CNTL 0x16EB0
++
++#define mmCLK1_CLK0_ALLOW_DS 0x16E84
++#define mmCLK1_CLK1_ALLOW_DS 0x16E8D
++#define mmCLK1_CLK2_ALLOW_DS 0x16E96
++#define mmCLK1_CLK3_ALLOW_DS 0x16E9F
++#define mmCLK1_CLK4_ALLOW_DS 0x16EA8
++#define mmCLK1_CLK5_ALLOW_DS 0x16EB1
++
++#define mmCLK5_spll_field_8 0x1B04B
++#define mmDENTIST_DISPCLK_CNTL 0x0124
++#define regDENTIST_DISPCLK_CNTL 0x0064
++#define regDENTIST_DISPCLK_CNTL_BASE_IDX 1
++
++#define CLK1_CLK_PLL_REQ__FbMult_int__SHIFT 0x0
++#define CLK1_CLK_PLL_REQ__PllSpineDiv__SHIFT 0xc
++#define CLK1_CLK_PLL_REQ__FbMult_frac__SHIFT 0x10
++#define CLK1_CLK_PLL_REQ__FbMult_int_MASK 0x000001FFL
++#define CLK1_CLK_PLL_REQ__PllSpineDiv_MASK 0x0000F000L
++#define CLK1_CLK_PLL_REQ__FbMult_frac_MASK 0xFFFF0000L
++
++#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL_MASK 0x00000007L
++
++// DENTIST_DISPCLK_CNTL
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_WDIVIDER__SHIFT 0x0
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_RDIVIDER__SHIFT 0x8
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_CHG_DONE__SHIFT 0x13
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_CHG_DONE__SHIFT 0x14
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_WDIVIDER__SHIFT 0x18
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_WDIVIDER_MASK 0x0000007FL
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_RDIVIDER_MASK 0x00007F00L
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_CHG_DONE_MASK 0x00080000L
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_CHG_DONE_MASK 0x00100000L
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_WDIVIDER_MASK 0x7F000000L
++
++#define CLK5_spll_field_8__spll_ssc_en_MASK 0x00002000L
++
++#define REG(reg) \
++ (clk_mgr->regs->reg)
++
++#define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
++
++#define BASE(seg) BASE_INNER(seg)
++
++#define SR(reg_name)\
++ .reg_name = BASE(reg ## reg_name ## _BASE_IDX) + \
++ reg ## reg_name
++
++#define CLK_SR_DCN35(reg_name)\
++ .reg_name = mm ## reg_name
++
++static const struct clk_mgr_registers clk_mgr_regs_dcn351 = {
++ CLK_REG_LIST_DCN35()
++};
++
++static const struct clk_mgr_shift clk_mgr_shift_dcn351 = {
++ CLK_COMMON_MASK_SH_LIST_DCN32(__SHIFT)
++};
++
++static const struct clk_mgr_mask clk_mgr_mask_dcn351 = {
++ CLK_COMMON_MASK_SH_LIST_DCN32(_MASK)
++};
++
++#define TO_CLK_MGR_DCN35(clk_mgr)\
++ container_of(clk_mgr, struct clk_mgr_dcn35, base)
++
++
++void dcn351_clk_mgr_construct(
++ struct dc_context *ctx,
++ struct clk_mgr_dcn35 *clk_mgr,
++ struct pp_smu_funcs *pp_smu,
++ struct dccg *dccg)
++{
++	/* register offsets changed for DCN3.51 */
++ clk_mgr->base.regs = &clk_mgr_regs_dcn351;
++ clk_mgr->base.clk_mgr_shift = &clk_mgr_shift_dcn351;
++ clk_mgr->base.clk_mgr_mask = &clk_mgr_mask_dcn351;
++
++ dcn35_clk_mgr_construct(ctx, clk_mgr, pp_smu, dccg);
++
++}
++
++
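
The TO_CLK_MGR_DCN35() helper above is the standard container_of() idiom: given a pointer to the embedded base member, it recovers the enclosing DCN35 object. A self-contained sketch of the mechanism (the struct layout and field names below are simplified stand-ins, not the driver's real definitions):

    #include <stddef.h>
    #include <stdio.h>

    #define container_of(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    struct clk_mgr_internal { int dummy; };
    struct clk_mgr_dcn35_like {
        struct clk_mgr_internal base;    /* embedded base, as in the driver */
        int smu_ver;
    };

    int main(void)
    {
        struct clk_mgr_dcn35_like mgr = { .smu_ver = 42 };
        struct clk_mgr_internal *base = &mgr.base;

        /* Recover the containing object from a pointer to its member. */
        struct clk_mgr_dcn35_like *back =
            container_of(base, struct clk_mgr_dcn35_like, base);
        printf("%d\n", back->smu_ver);    /* prints 42 */
        return 0;
    }
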
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+index 3bd0d46c170109..7d0d8852ce8d27 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.c
+@@ -36,15 +36,11 @@
+ #include "dcn20/dcn20_clk_mgr.h"
+
+
+-
+-
+ #include "reg_helper.h"
+ #include "core_types.h"
+ #include "dcn35_smu.h"
+ #include "dm_helpers.h"
+
+-/* TODO: remove this include once we ported over remaining clk mgr functions*/
+-#include "dcn30/dcn30_clk_mgr.h"
+ #include "dcn31/dcn31_clk_mgr.h"
+
+ #include "dc_dmub_srv.h"
+@@ -55,34 +51,102 @@
+ #define DC_LOGGER \
+ clk_mgr->base.base.ctx->logger
+
+-#define regCLK1_CLK_PLL_REQ 0x0237
+-#define regCLK1_CLK_PLL_REQ_BASE_IDX 0
++#define DCN_BASE__INST0_SEG1 0x000000C0
++#define mmCLK1_CLK_PLL_REQ 0x16E37
++
++#define mmCLK1_CLK0_DFS_CNTL 0x16E69
++#define mmCLK1_CLK1_DFS_CNTL 0x16E6C
++#define mmCLK1_CLK2_DFS_CNTL 0x16E6F
++#define mmCLK1_CLK3_DFS_CNTL 0x16E72
++#define mmCLK1_CLK4_DFS_CNTL 0x16E75
++#define mmCLK1_CLK5_DFS_CNTL 0x16E78
++
++#define mmCLK1_CLK0_CURRENT_CNT 0x16EFB
++#define mmCLK1_CLK1_CURRENT_CNT 0x16EFC
++#define mmCLK1_CLK2_CURRENT_CNT 0x16EFD
++#define mmCLK1_CLK3_CURRENT_CNT 0x16EFE
++#define mmCLK1_CLK4_CURRENT_CNT 0x16EFF
++#define mmCLK1_CLK5_CURRENT_CNT 0x16F00
++
++#define mmCLK1_CLK0_BYPASS_CNTL 0x16E8A
++#define mmCLK1_CLK1_BYPASS_CNTL 0x16E93
++#define mmCLK1_CLK2_BYPASS_CNTL 0x16E9C
++#define mmCLK1_CLK3_BYPASS_CNTL 0x16EA5
++#define mmCLK1_CLK4_BYPASS_CNTL 0x16EAE
++#define mmCLK1_CLK5_BYPASS_CNTL 0x16EB7
++
++#define mmCLK1_CLK0_DS_CNTL 0x16E83
++#define mmCLK1_CLK1_DS_CNTL 0x16E8C
++#define mmCLK1_CLK2_DS_CNTL 0x16E95
++#define mmCLK1_CLK3_DS_CNTL 0x16E9E
++#define mmCLK1_CLK4_DS_CNTL 0x16EA7
++#define mmCLK1_CLK5_DS_CNTL 0x16EB0
++
++#define mmCLK1_CLK0_ALLOW_DS 0x16E84
++#define mmCLK1_CLK1_ALLOW_DS 0x16E8D
++#define mmCLK1_CLK2_ALLOW_DS 0x16E96
++#define mmCLK1_CLK3_ALLOW_DS 0x16E9F
++#define mmCLK1_CLK4_ALLOW_DS 0x16EA8
++#define mmCLK1_CLK5_ALLOW_DS 0x16EB1
++
++#define mmCLK5_spll_field_8 0x1B24B
++#define mmDENTIST_DISPCLK_CNTL 0x0124
++#define regDENTIST_DISPCLK_CNTL 0x0064
++#define regDENTIST_DISPCLK_CNTL_BASE_IDX 1
++
++#define CLK1_CLK_PLL_REQ__FbMult_int__SHIFT 0x0
++#define CLK1_CLK_PLL_REQ__PllSpineDiv__SHIFT 0xc
++#define CLK1_CLK_PLL_REQ__FbMult_frac__SHIFT 0x10
++#define CLK1_CLK_PLL_REQ__FbMult_int_MASK 0x000001FFL
++#define CLK1_CLK_PLL_REQ__PllSpineDiv_MASK 0x0000F000L
++#define CLK1_CLK_PLL_REQ__FbMult_frac_MASK 0xFFFF0000L
++
++#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL_MASK 0x00000007L
++#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_DIV_MASK 0x000F0000L
++// DENTIST_DISPCLK_CNTL
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_WDIVIDER__SHIFT 0x0
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_RDIVIDER__SHIFT 0x8
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_CHG_DONE__SHIFT 0x13
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_CHG_DONE__SHIFT 0x14
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_WDIVIDER__SHIFT 0x18
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_WDIVIDER_MASK 0x0000007FL
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_RDIVIDER_MASK 0x00007F00L
++#define DENTIST_DISPCLK_CNTL__DENTIST_DISPCLK_CHG_DONE_MASK 0x00080000L
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_CHG_DONE_MASK 0x00100000L
++#define DENTIST_DISPCLK_CNTL__DENTIST_DPPCLK_WDIVIDER_MASK 0x7F000000L
++
++#define CLK5_spll_field_8__spll_ssc_en_MASK 0x00002000L
+
+-#define CLK1_CLK_PLL_REQ__FbMult_int__SHIFT 0x0
+-#define CLK1_CLK_PLL_REQ__PllSpineDiv__SHIFT 0xc
+-#define CLK1_CLK_PLL_REQ__FbMult_frac__SHIFT 0x10
+-#define CLK1_CLK_PLL_REQ__FbMult_int_MASK 0x000001FFL
+-#define CLK1_CLK_PLL_REQ__PllSpineDiv_MASK 0x0000F000L
+-#define CLK1_CLK_PLL_REQ__FbMult_frac_MASK 0xFFFF0000L
++#define SMU_VER_THRESHOLD 0x5D4A00 //93.74.0
++#undef FN
++#define FN(reg_name, field_name) \
++ clk_mgr->clk_mgr_shift->field_name, clk_mgr->clk_mgr_mask->field_name
+
+-#define regCLK1_CLK2_BYPASS_CNTL 0x029c
+-#define regCLK1_CLK2_BYPASS_CNTL_BASE_IDX 0
++#define REG(reg) \
++ (clk_mgr->regs->reg)
+
+-#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL__SHIFT 0x0
+-#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_DIV__SHIFT 0x10
+-#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL_MASK 0x00000007L
+-#define CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_DIV_MASK 0x000F0000L
++#define BASE_INNER(seg) DCN_BASE__INST0_SEG ## seg
+
+-#define regCLK5_0_CLK5_spll_field_8 0x464b
+-#define regCLK5_0_CLK5_spll_field_8_BASE_IDX 0
++#define BASE(seg) BASE_INNER(seg)
+
+-#define CLK5_0_CLK5_spll_field_8__spll_ssc_en__SHIFT 0xd
+-#define CLK5_0_CLK5_spll_field_8__spll_ssc_en_MASK 0x00002000L
++#define SR(reg_name)\
++ .reg_name = BASE(reg ## reg_name ## _BASE_IDX) + \
++ reg ## reg_name
+
+-#define SMU_VER_THRESHOLD 0x5D4A00 //93.74.0
++#define CLK_SR_DCN35(reg_name)\
++ .reg_name = mm ## reg_name
+
+-#define REG(reg_name) \
+- (ctx->clk_reg_offsets[reg ## reg_name ## _BASE_IDX] + reg ## reg_name)
++static const struct clk_mgr_registers clk_mgr_regs_dcn35 = {
++ CLK_REG_LIST_DCN35()
++};
++
++static const struct clk_mgr_shift clk_mgr_shift_dcn35 = {
++ CLK_COMMON_MASK_SH_LIST_DCN32(__SHIFT)
++};
++
++static const struct clk_mgr_mask clk_mgr_mask_dcn35 = {
++ CLK_COMMON_MASK_SH_LIST_DCN32(_MASK)
++};
+
+ #define TO_CLK_MGR_DCN35(clk_mgr)\
+ container_of(clk_mgr, struct clk_mgr_dcn35, base)
+@@ -443,7 +507,6 @@ static int get_vco_frequency_from_reg(struct clk_mgr_internal *clk_mgr)
+ struct fixed31_32 pll_req;
+ unsigned int fbmult_frac_val = 0;
+ unsigned int fbmult_int_val = 0;
+- struct dc_context *ctx = clk_mgr->base.ctx;
+
+ /*
+	 * Register value of fbmult is in 8.16 format, we are converting to 31.32
+@@ -503,12 +566,12 @@ static void dcn35_dump_clk_registers(struct clk_state_registers_and_bypass *regs
+ static bool dcn35_is_spll_ssc_enabled(struct clk_mgr *clk_mgr_base)
+ {
+ struct clk_mgr_internal *clk_mgr = TO_CLK_MGR_INTERNAL(clk_mgr_base);
+- struct dc_context *ctx = clk_mgr->base.ctx;
++
+ uint32_t ssc_enable;
+
+- REG_GET(CLK5_0_CLK5_spll_field_8, spll_ssc_en, &ssc_enable);
++ ssc_enable = REG_READ(CLK5_spll_field_8) & CLK5_spll_field_8__spll_ssc_en_MASK;
+
+- return ssc_enable == 1;
++ return ssc_enable != 0;
+ }
+
+ static void init_clk_states(struct clk_mgr *clk_mgr)
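
Note why the comparison changes from "== 1" to "!= 0": REG_GET() shifted the field down to its least significant bit, while the new REG_READ()-plus-mask form leaves the bit in place (0x2000 when set). A minimal sketch of the new test; SPLL_SSC_EN_MASK here stands in for CLK5_spll_field_8__spll_ssc_en_MASK:

    #include <stdbool.h>
    #include <stdint.h>

    #define SPLL_SSC_EN_MASK 0x00002000u    /* bit 13 of CLK5_spll_field_8 */

    /* The masked value is 0x2000 when the bit is set, never 1, hence "!= 0". */
    static bool spll_ssc_enabled(uint32_t reg_val)
    {
        return (reg_val & SPLL_SSC_EN_MASK) != 0;
    }
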
+@@ -633,10 +696,10 @@ static struct dcn35_ss_info_table ss_info_table = {
+
+ static void dcn35_read_ss_info_from_lut(struct clk_mgr_internal *clk_mgr)
+ {
+- struct dc_context *ctx = clk_mgr->base.ctx;
+- uint32_t clock_source;
++ uint32_t clock_source = 0;
++
++ clock_source = REG_READ(CLK1_CLK2_BYPASS_CNTL) & CLK1_CLK2_BYPASS_CNTL__CLK2_BYPASS_SEL_MASK;
+
+- REG_GET(CLK1_CLK2_BYPASS_CNTL, CLK2_BYPASS_SEL, &clock_source);
+ // If it's DFS mode, clock_source is 0.
+ if (dcn35_is_spll_ssc_enabled(&clk_mgr->base) && (clock_source < ARRAY_SIZE(ss_info_table.ss_percentage))) {
+ clk_mgr->dprefclk_ss_percentage = ss_info_table.ss_percentage[clock_source];
+@@ -1106,6 +1169,12 @@ void dcn35_clk_mgr_construct(
+ clk_mgr->base.dprefclk_ss_divider = 1000;
+ clk_mgr->base.ss_on_dprefclk = false;
+ clk_mgr->base.dfs_ref_freq_khz = 48000;
++ if (ctx->dce_version == DCN_VERSION_3_5) {
++ clk_mgr->base.regs = &clk_mgr_regs_dcn35;
++ clk_mgr->base.clk_mgr_shift = &clk_mgr_shift_dcn35;
++ clk_mgr->base.clk_mgr_mask = &clk_mgr_mask_dcn35;
++ }
++
+
+ clk_mgr->smu_wm_set.wm_set = (struct dcn35_watermarks *)dm_helpers_allocate_gpu_mem(
+ clk_mgr->base.base.ctx,
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.h b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.h
+index 1203dc605b12c4..a12a9bf90806ed 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.h
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn35/dcn35_clk_mgr.h
+@@ -60,4 +60,8 @@ void dcn35_clk_mgr_construct(struct dc_context *ctx,
+
+ void dcn35_clk_mgr_destroy(struct clk_mgr_internal *clk_mgr_int);
+
++void dcn351_clk_mgr_construct(struct dc_context *ctx,
++ struct clk_mgr_dcn35 *clk_mgr,
++ struct pp_smu_funcs *pp_smu,
++ struct dccg *dccg);
+ #endif //__DCN35_CLK_MGR_H__
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
+index c2dd061892f4d9..7a1ca1e98059b0 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/clk_mgr_internal.h
+@@ -166,6 +166,41 @@ enum dentist_divider_range {
+ CLK_SR_DCN32(CLK1_CLK4_CURRENT_CNT), \
+ CLK_SR_DCN32(CLK4_CLK0_CURRENT_CNT)
+
++#define CLK_REG_LIST_DCN35() \
++ CLK_SR_DCN35(CLK1_CLK_PLL_REQ), \
++ CLK_SR_DCN35(CLK1_CLK0_DFS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK1_DFS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK2_DFS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK3_DFS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK4_DFS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK5_DFS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK0_CURRENT_CNT), \
++ CLK_SR_DCN35(CLK1_CLK1_CURRENT_CNT), \
++ CLK_SR_DCN35(CLK1_CLK2_CURRENT_CNT), \
++ CLK_SR_DCN35(CLK1_CLK3_CURRENT_CNT), \
++ CLK_SR_DCN35(CLK1_CLK4_CURRENT_CNT), \
++ CLK_SR_DCN35(CLK1_CLK5_CURRENT_CNT), \
++ CLK_SR_DCN35(CLK1_CLK0_BYPASS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK1_BYPASS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK2_BYPASS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK3_BYPASS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK4_BYPASS_CNTL),\
++ CLK_SR_DCN35(CLK1_CLK5_BYPASS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK0_DS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK1_DS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK2_DS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK3_DS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK4_DS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK5_DS_CNTL), \
++ CLK_SR_DCN35(CLK1_CLK0_ALLOW_DS), \
++ CLK_SR_DCN35(CLK1_CLK1_ALLOW_DS), \
++ CLK_SR_DCN35(CLK1_CLK2_ALLOW_DS), \
++ CLK_SR_DCN35(CLK1_CLK3_ALLOW_DS), \
++ CLK_SR_DCN35(CLK1_CLK4_ALLOW_DS), \
++ CLK_SR_DCN35(CLK1_CLK5_ALLOW_DS), \
++ CLK_SR_DCN35(CLK5_spll_field_8), \
++ SR(DENTIST_DISPCLK_CNTL), \
++
+ #define CLK_COMMON_MASK_SH_LIST_DCN32(mask_sh) \
+ CLK_COMMON_MASK_SH_LIST_DCN20_BASE(mask_sh),\
+ CLK_SF(CLK1_CLK_PLL_REQ, FbMult_int, mask_sh),\
+@@ -236,6 +271,7 @@ struct clk_mgr_registers {
+ uint32_t CLK1_CLK2_DFS_CNTL;
+ uint32_t CLK1_CLK3_DFS_CNTL;
+ uint32_t CLK1_CLK4_DFS_CNTL;
++ uint32_t CLK1_CLK5_DFS_CNTL;
+ uint32_t CLK2_CLK2_DFS_CNTL;
+
+ uint32_t CLK1_CLK0_CURRENT_CNT;
+@@ -243,11 +279,34 @@ struct clk_mgr_registers {
+ uint32_t CLK1_CLK2_CURRENT_CNT;
+ uint32_t CLK1_CLK3_CURRENT_CNT;
+ uint32_t CLK1_CLK4_CURRENT_CNT;
++ uint32_t CLK1_CLK5_CURRENT_CNT;
+
+ uint32_t CLK0_CLK0_DFS_CNTL;
+ uint32_t CLK0_CLK1_DFS_CNTL;
+ uint32_t CLK0_CLK3_DFS_CNTL;
+ uint32_t CLK0_CLK4_DFS_CNTL;
++ uint32_t CLK1_CLK0_BYPASS_CNTL;
++ uint32_t CLK1_CLK1_BYPASS_CNTL;
++ uint32_t CLK1_CLK2_BYPASS_CNTL;
++ uint32_t CLK1_CLK3_BYPASS_CNTL;
++ uint32_t CLK1_CLK4_BYPASS_CNTL;
++ uint32_t CLK1_CLK5_BYPASS_CNTL;
++
++ uint32_t CLK1_CLK0_DS_CNTL;
++ uint32_t CLK1_CLK1_DS_CNTL;
++ uint32_t CLK1_CLK2_DS_CNTL;
++ uint32_t CLK1_CLK3_DS_CNTL;
++ uint32_t CLK1_CLK4_DS_CNTL;
++ uint32_t CLK1_CLK5_DS_CNTL;
++
++ uint32_t CLK1_CLK0_ALLOW_DS;
++ uint32_t CLK1_CLK1_ALLOW_DS;
++ uint32_t CLK1_CLK2_ALLOW_DS;
++ uint32_t CLK1_CLK3_ALLOW_DS;
++ uint32_t CLK1_CLK4_ALLOW_DS;
++ uint32_t CLK1_CLK5_ALLOW_DS;
++ uint32_t CLK5_spll_field_8;
++
+ };
+
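
For clarity, CLK_SR_DCN35() token-pastes the mm prefix onto the register name, so CLK_REG_LIST_DCN35() expands to designated initializers over the struct above. Hand-expanded for the first two entries (an illustrative fragment, not code from the patch):

    /* CLK_SR_DCN35(CLK1_CLK_PLL_REQ) becomes: .CLK1_CLK_PLL_REQ = mmCLK1_CLK_PLL_REQ */
    static const struct clk_mgr_registers example_regs = {
        .CLK1_CLK_PLL_REQ   = 0x16E37,    /* mmCLK1_CLK_PLL_REQ */
        .CLK1_CLK0_DFS_CNTL = 0x16E69,    /* mmCLK1_CLK0_DFS_CNTL */
        /* ... one entry per CLK_SR_DCN35()/SR() line in the list macro */
    };
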
+ struct clk_mgr_shift {
+diff --git a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+index d78c8ec4de79e7..885e749cdc6e96 100644
+--- a/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
++++ b/drivers/gpu/drm/amd/display/dc/link/protocols/link_dp_capability.c
+@@ -51,9 +51,10 @@
+ #include "dc_dmub_srv.h"
+ #include "gpio_service_interface.h"
+
++#define DC_TRACE_LEVEL_MESSAGE(...) /* do nothing */
++
+ #define DC_LOGGER \
+ link->ctx->logger
+-#define DC_TRACE_LEVEL_MESSAGE(...) /* do nothing */
+
+ #ifndef MAX
+ #define MAX(X, Y) ((X) > (Y) ? (X) : (Y))
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index b1c294236cc878..2f1d9ce87ceb01 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -3363,7 +3363,7 @@ static void intel_enable_ddi_hdmi(struct intel_atomic_state *state,
+ intel_de_rmw(dev_priv, XELPDP_PORT_BUF_CTL1(dev_priv, port),
+ XELPDP_PORT_WIDTH_MASK | XELPDP_PORT_REVERSAL, port_buf);
+
+- buf_ctl |= DDI_PORT_WIDTH(lane_count);
++ buf_ctl |= DDI_PORT_WIDTH(crtc_state->lane_count);
+
+ if (DISPLAY_VER(dev_priv) >= 20)
+ buf_ctl |= XE2LPD_DDI_BUF_D2D_LINK_ENABLE;
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index b4ef4d59da1ace..2c6d0da8a16f8c 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -6369,12 +6369,30 @@ static int intel_async_flip_check_hw(struct intel_atomic_state *state, struct in
+ static int intel_joiner_add_affected_crtcs(struct intel_atomic_state *state)
+ {
+ struct drm_i915_private *i915 = to_i915(state->base.dev);
++ const struct intel_plane_state *plane_state;
+ struct intel_crtc_state *crtc_state;
++ struct intel_plane *plane;
+ struct intel_crtc *crtc;
+ u8 affected_pipes = 0;
+ u8 modeset_pipes = 0;
+ int i;
+
++ /*
++ * Any plane which is in use by the joiner needs its crtc.
++ * Pull those in first as this will not have happened yet
++ * if the plane remains disabled according to uapi.
++ */
++ for_each_new_intel_plane_in_state(state, plane, plane_state, i) {
++ crtc = to_intel_crtc(plane_state->hw.crtc);
++ if (!crtc)
++ continue;
++
++ crtc_state = intel_atomic_get_crtc_state(&state->base, crtc);
++ if (IS_ERR(crtc_state))
++ return PTR_ERR(crtc_state);
++ }
++
++ /* Now pull in all joined crtcs */
+ for_each_new_intel_crtc_in_state(state, crtc, crtc_state, i) {
+ affected_pipes |= crtc_state->joiner_pipes;
+ if (intel_crtc_needs_modeset(crtc_state))
+diff --git a/drivers/gpu/drm/i915/display/intel_dp_link_training.c b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+index 40bedc31d6bf2f..5d8f93d4cdc6a6 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp_link_training.c
++++ b/drivers/gpu/drm/i915/display/intel_dp_link_training.c
+@@ -1561,7 +1561,7 @@ intel_dp_128b132b_link_train(struct intel_dp *intel_dp,
+
+ if (wait_for(intel_dp_128b132b_intra_hop(intel_dp, crtc_state) == 0, 500)) {
+ lt_err(intel_dp, DP_PHY_DPRX, "128b/132b intra-hop not clear\n");
+- return false;
++ goto out;
+ }
+
+ if (intel_dp_128b132b_lane_eq(intel_dp, crtc_state) &&
+@@ -1573,6 +1573,19 @@ intel_dp_128b132b_link_train(struct intel_dp *intel_dp,
+ passed ? "passed" : "failed",
+ crtc_state->port_clock, crtc_state->lane_count);
+
++out:
++ /*
++ * Ensure that the training pattern does get set to TPS2 even in case
++ * of a failure, as is the case at the end of a passing link training
++ * and what is expected by the transcoder. Leaving TPS1 set (and
++ * disabling the link train mode in DP_TP_CTL later from TPS1 directly)
++ * would result in a stuck transcoder HW state and flip-done timeouts
++ * later in the modeset sequence.
++ */
++ if (!passed)
++ intel_dp_program_link_training_pattern(intel_dp, crtc_state,
++ DP_PHY_DPRX, DP_TRAINING_PATTERN_2);
++
+ return passed;
+ }
+
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+index b0e94c95940f67..8aaadbb702df6d 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_guc_submission.c
+@@ -3425,10 +3425,10 @@ static inline int guc_lrc_desc_unpin(struct intel_context *ce)
+ */
+ ret = deregister_context(ce, ce->guc_id.id);
+ if (ret) {
+- spin_lock(&ce->guc_state.lock);
++ spin_lock_irqsave(&ce->guc_state.lock, flags);
+ set_context_registered(ce);
+ clr_context_destroyed(ce);
+- spin_unlock(&ce->guc_state.lock);
++ spin_unlock_irqrestore(&ce->guc_state.lock, flags);
+ /*
+ * As gt-pm is awake at function entry, intel_wakeref_put_async merely decrements
+ * the wakeref immediately but per function spec usage call this after unlock.
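
The fix above switches to the irqsave variants, the usual remedy when a spinlock can also be taken from interrupt context, which appears to be the case for ce->guc_state.lock given the rest of this file. The canonical pairing, as a kernel-style sketch with a stand-in lock:

    static DEFINE_SPINLOCK(example_lock);    /* stand-in for ce->guc_state.lock */

    static void touch_state(void)
    {
        unsigned long flags;

        spin_lock_irqsave(&example_lock, flags);    /* also disables local IRQs */
        /* ... critical section safe against IRQ-context lockers ... */
        spin_unlock_irqrestore(&example_lock, flags);    /* restores IRQ state */
    }
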
+diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
+index 41f4350a7c6c58..b7f521a9b337d3 100644
+--- a/drivers/gpu/drm/i915/i915_reg.h
++++ b/drivers/gpu/drm/i915/i915_reg.h
+@@ -3869,7 +3869,7 @@ enum skl_power_gate {
+ #define DDI_BUF_IS_IDLE (1 << 7)
+ #define DDI_BUF_CTL_TC_PHY_OWNERSHIP REG_BIT(6)
+ #define DDI_A_4_LANES (1 << 4)
+-#define DDI_PORT_WIDTH(width) (((width) - 1) << 1)
++#define DDI_PORT_WIDTH(width) (((width) == 3 ? 4 : ((width) - 1)) << 1)
+ #define DDI_PORT_WIDTH_MASK (7 << 1)
+ #define DDI_PORT_WIDTH_SHIFT 1
+ #define DDI_INIT_DISPLAY_DETECTED (1 << 0)
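
The reworked DDI_PORT_WIDTH() above keeps the minus-one encoding for 1, 2 and 4 lanes but maps 3 lanes to field value 4 instead of 2. The resulting encodings, checked standalone (the macro is copied from the patched header):

    #include <stdio.h>

    #define DDI_PORT_WIDTH(width) (((width) == 3 ? 4 : ((width) - 1)) << 1)

    int main(void)
    {
        /* lane count -> bits [3:1]: 1 -> 0, 2 -> 1, 3 -> 4, 4 -> 3 */
        for (int lanes = 1; lanes <= 4; lanes++)
            printf("%d lanes -> 0x%x\n", lanes, DDI_PORT_WIDTH(lanes));
        return 0;
    }
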
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+index 421afacb724803..36cc9dbc00b5c1 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_0_sm8150.h
+@@ -297,7 +297,7 @@ static const struct dpu_wb_cfg sm8150_wb[] = {
+ {
+ .name = "wb_2", .id = WB_2,
+ .base = 0x65000, .len = 0x2c8,
+- .features = WB_SDM845_MASK,
++ .features = WB_SM8250_MASK,
+ .format_list = wb2_formats_rgb,
+ .num_formats = ARRAY_SIZE(wb2_formats_rgb),
+ .clk_ctrl = DPU_CLK_CTRL_WB2,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+index 641023b102bf59..e8eacdb47967a2 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_1_sc8180x.h
+@@ -304,7 +304,7 @@ static const struct dpu_wb_cfg sc8180x_wb[] = {
+ {
+ .name = "wb_2", .id = WB_2,
+ .base = 0x65000, .len = 0x2c8,
+- .features = WB_SDM845_MASK,
++ .features = WB_SM8250_MASK,
+ .format_list = wb2_formats_rgb,
+ .num_formats = ARRAY_SIZE(wb2_formats_rgb),
+ .clk_ctrl = DPU_CLK_CTRL_WB2,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_4_sm6125.h b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_4_sm6125.h
+index d039b96beb97cf..76f60a2df7a890 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_4_sm6125.h
++++ b/drivers/gpu/drm/msm/disp/dpu1/catalog/dpu_5_4_sm6125.h
+@@ -144,7 +144,7 @@ static const struct dpu_wb_cfg sm6125_wb[] = {
+ {
+ .name = "wb_2", .id = WB_2,
+ .base = 0x65000, .len = 0x2c8,
+- .features = WB_SDM845_MASK,
++ .features = WB_SM8250_MASK,
+ .format_list = wb2_formats_rgb,
+ .num_formats = ARRAY_SIZE(wb2_formats_rgb),
+ .clk_ctrl = DPU_CLK_CTRL_WB2,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index bd3698bf0cf740..2cf8150adf81ff 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -2125,6 +2125,9 @@ void dpu_encoder_helper_phys_cleanup(struct dpu_encoder_phys *phys_enc)
+ }
+ }
+
++ if (phys_enc->hw_pp && phys_enc->hw_pp->ops.setup_dither)
++ phys_enc->hw_pp->ops.setup_dither(phys_enc->hw_pp, NULL);
++
+ /* reset the merge 3D HW block */
+ if (phys_enc->hw_pp && phys_enc->hw_pp->merge_3d) {
+ phys_enc->hw_pp->merge_3d->ops.setup_3d_mode(phys_enc->hw_pp->merge_3d,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
+index 5e9aad1b2aa283..d1e0fb2139765c 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_dsc.c
+@@ -52,6 +52,7 @@ static void dpu_hw_dsc_config(struct dpu_hw_dsc *hw_dsc,
+ u32 slice_last_group_size;
+ u32 det_thresh_flatness;
+ bool is_cmd_mode = !(mode & DSC_MODE_VIDEO);
++ bool input_10_bits = dsc->bits_per_component == 10;
+
+ DPU_REG_WRITE(c, DSC_COMMON_MODE, mode);
+
+@@ -68,7 +69,7 @@ static void dpu_hw_dsc_config(struct dpu_hw_dsc *hw_dsc,
+ data |= (dsc->line_buf_depth << 3);
+ data |= (dsc->simple_422 << 2);
+ data |= (dsc->convert_rgb << 1);
+- data |= dsc->bits_per_component;
++ data |= input_10_bits;
+
+ DPU_REG_WRITE(c, DSC_ENC, data);
+
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
+index 0f40eea7f5e247..2040bee8d512f6 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_hw_top.c
+@@ -272,7 +272,7 @@ static void _setup_mdp_ops(struct dpu_hw_mdp_ops *ops,
+
+ if (cap & BIT(DPU_MDP_VSYNC_SEL))
+ ops->setup_vsync_source = dpu_hw_setup_vsync_sel;
+- else
++ else if (!(cap & BIT(DPU_MDP_PERIPH_0_REMOVED)))
+ ops->setup_vsync_source = dpu_hw_setup_wd_timer;
+
+ ops->get_safe_status = dpu_hw_get_safe_status;
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+index 031446c87daec0..798168180c1ab6 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+@@ -83,6 +83,9 @@ struct dsi_pll_7nm {
+ /* protects REG_DSI_7nm_PHY_CMN_CLK_CFG0 register */
+ spinlock_t postdiv_lock;
+
++ /* protects REG_DSI_7nm_PHY_CMN_CLK_CFG1 register */
++ spinlock_t pclk_mux_lock;
++
+ struct pll_7nm_cached_state cached_state;
+
+ struct dsi_pll_7nm *slave;
+@@ -372,22 +375,41 @@ static void dsi_pll_enable_pll_bias(struct dsi_pll_7nm *pll)
+ ndelay(250);
+ }
+
+-static void dsi_pll_disable_global_clk(struct dsi_pll_7nm *pll)
++static void dsi_pll_cmn_clk_cfg0_write(struct dsi_pll_7nm *pll, u32 val)
+ {
++ unsigned long flags;
++
++ spin_lock_irqsave(&pll->postdiv_lock, flags);
++ writel(val, pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG0);
++ spin_unlock_irqrestore(&pll->postdiv_lock, flags);
++}
++
++static void dsi_pll_cmn_clk_cfg1_update(struct dsi_pll_7nm *pll, u32 mask,
++ u32 val)
++{
++ unsigned long flags;
+ u32 data;
+
++ spin_lock_irqsave(&pll->pclk_mux_lock, flags);
+ data = readl(pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
+- writel(data & ~BIT(5), pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
++ data &= ~mask;
++ data |= val & mask;
++
++ writel(data, pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
++ spin_unlock_irqrestore(&pll->pclk_mux_lock, flags);
++}
++
++static void dsi_pll_disable_global_clk(struct dsi_pll_7nm *pll)
++{
++ dsi_pll_cmn_clk_cfg1_update(pll, DSI_7nm_PHY_CMN_CLK_CFG1_CLK_EN, 0);
+ }
+
+ static void dsi_pll_enable_global_clk(struct dsi_pll_7nm *pll)
+ {
+- u32 data;
++ u32 cfg_1 = DSI_7nm_PHY_CMN_CLK_CFG1_CLK_EN | DSI_7nm_PHY_CMN_CLK_CFG1_CLK_EN_SEL;
+
+ writel(0x04, pll->phy->base + REG_DSI_7nm_PHY_CMN_CTRL_3);
+-
+- data = readl(pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
+- writel(data | BIT(5) | BIT(4), pll->phy->base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
++ dsi_pll_cmn_clk_cfg1_update(pll, cfg_1, cfg_1);
+ }
+
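
All CLK_CFG1 updates now funnel through one helper serialized by pclk_mux_lock, with callers passing a (mask, value) pair. Its read-modify-write core, modeled as a standalone function (rmw() is an illustrative name):

    #include <stdint.h>

    /* Model of dsi_pll_cmn_clk_cfg1_update(): clear the masked field, then
     * install only the in-mask bits of the new value. */
    static uint32_t rmw(uint32_t reg, uint32_t mask, uint32_t val)
    {
        reg &= ~mask;
        reg |= val & mask;
        return reg;
    }

    /* e.g. rmw(reg, CLK_EN, 0) clears CLK_EN; rmw(reg, cfg_1, cfg_1) sets both bits. */
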
+ static void dsi_pll_phy_dig_reset(struct dsi_pll_7nm *pll)
+@@ -565,7 +587,6 @@ static int dsi_7nm_pll_restore_state(struct msm_dsi_phy *phy)
+ {
+ struct dsi_pll_7nm *pll_7nm = to_pll_7nm(phy->vco_hw);
+ struct pll_7nm_cached_state *cached = &pll_7nm->cached_state;
+- void __iomem *phy_base = pll_7nm->phy->base;
+ u32 val;
+ int ret;
+
+@@ -574,13 +595,10 @@ static int dsi_7nm_pll_restore_state(struct msm_dsi_phy *phy)
+ val |= cached->pll_out_div;
+ writel(val, pll_7nm->phy->pll_base + REG_DSI_7nm_PHY_PLL_PLL_OUTDIV_RATE);
+
+- writel(cached->bit_clk_div | (cached->pix_clk_div << 4),
+- phy_base + REG_DSI_7nm_PHY_CMN_CLK_CFG0);
+-
+- val = readl(phy_base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
+- val &= ~0x3;
+- val |= cached->pll_mux;
+- writel(val, phy_base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
++ dsi_pll_cmn_clk_cfg0_write(pll_7nm,
++ DSI_7nm_PHY_CMN_CLK_CFG0_DIV_CTRL_3_0(cached->bit_clk_div) |
++ DSI_7nm_PHY_CMN_CLK_CFG0_DIV_CTRL_7_4(cached->pix_clk_div));
++ dsi_pll_cmn_clk_cfg1_update(pll_7nm, 0x3, cached->pll_mux);
+
+ ret = dsi_pll_7nm_vco_set_rate(phy->vco_hw,
+ pll_7nm->vco_current_rate,
+@@ -599,7 +617,6 @@ static int dsi_7nm_pll_restore_state(struct msm_dsi_phy *phy)
+ static int dsi_7nm_set_usecase(struct msm_dsi_phy *phy)
+ {
+ struct dsi_pll_7nm *pll_7nm = to_pll_7nm(phy->vco_hw);
+- void __iomem *base = phy->base;
+ u32 data = 0x0; /* internal PLL */
+
+ DBG("DSI PLL%d", pll_7nm->phy->id);
+@@ -618,7 +635,8 @@ static int dsi_7nm_set_usecase(struct msm_dsi_phy *phy)
+ }
+
+ /* set PLL src */
+- writel(data << 2, base + REG_DSI_7nm_PHY_CMN_CLK_CFG1);
++ dsi_pll_cmn_clk_cfg1_update(pll_7nm, DSI_7nm_PHY_CMN_CLK_CFG1_BITCLK_SEL__MASK,
++ DSI_7nm_PHY_CMN_CLK_CFG1_BITCLK_SEL(data));
+
+ return 0;
+ }
+@@ -733,7 +751,7 @@ static int pll_7nm_register(struct dsi_pll_7nm *pll_7nm, struct clk_hw **provide
+ pll_by_2_bit,
+ }), 2, 0, pll_7nm->phy->base +
+ REG_DSI_7nm_PHY_CMN_CLK_CFG1,
+- 0, 1, 0, NULL);
++ 0, 1, 0, &pll_7nm->pclk_mux_lock);
+ if (IS_ERR(hw)) {
+ ret = PTR_ERR(hw);
+ goto fail;
+@@ -778,6 +796,7 @@ static int dsi_pll_7nm_init(struct msm_dsi_phy *phy)
+ pll_7nm_list[phy->id] = pll_7nm;
+
+ spin_lock_init(&pll_7nm->postdiv_lock);
++ spin_lock_init(&pll_7nm->pclk_mux_lock);
+
+ pll_7nm->phy = phy;
+
+diff --git a/drivers/gpu/drm/msm/msm_drv.h b/drivers/gpu/drm/msm/msm_drv.h
+index 2e28a13446366c..9526b22038ab82 100644
+--- a/drivers/gpu/drm/msm/msm_drv.h
++++ b/drivers/gpu/drm/msm/msm_drv.h
+@@ -543,15 +543,12 @@ static inline int align_pitch(int width, int bpp)
+ static inline unsigned long timeout_to_jiffies(const ktime_t *timeout)
+ {
+ ktime_t now = ktime_get();
+- s64 remaining_jiffies;
+
+- if (ktime_compare(*timeout, now) < 0) {
+- remaining_jiffies = 0;
+- } else {
+- ktime_t rem = ktime_sub(*timeout, now);
+- remaining_jiffies = ktime_divns(rem, NSEC_PER_SEC / HZ);
+- }
++ if (ktime_compare(*timeout, now) <= 0)
++ return 0;
+
++ ktime_t rem = ktime_sub(*timeout, now);
++ s64 remaining_jiffies = ktime_divns(rem, NSEC_PER_SEC / HZ);
+ return clamp(remaining_jiffies, 1LL, (s64)INT_MAX);
+ }
+
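
The rewrite also tightens the boundary from < to <=, so a deadline equal to "now" returns 0 immediately, while any strictly future deadline yields at least one jiffy thanks to the clamp(). A usage sketch (variable names are illustrative; the ktime helpers are the standard kernel ones):

    ktime_t now = ktime_get();
    ktime_t soon = ktime_add_ns(now, NSEC_PER_SEC / HZ / 2);    /* half a tick away */

    unsigned long a = timeout_to_jiffies(&now);     /* 0: deadline not in the future */
    unsigned long b = timeout_to_jiffies(&soon);    /* 1: clamped to a minimum of one jiffy */
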
+diff --git a/drivers/gpu/drm/msm/registers/display/dsi_phy_7nm.xml b/drivers/gpu/drm/msm/registers/display/dsi_phy_7nm.xml
+index d54b72f924493b..35f7f40e405b7d 100644
+--- a/drivers/gpu/drm/msm/registers/display/dsi_phy_7nm.xml
++++ b/drivers/gpu/drm/msm/registers/display/dsi_phy_7nm.xml
+@@ -9,8 +9,15 @@ xsi:schemaLocation="https://gitlab.freedesktop.org/freedreno/ rules-fd.xsd">
+ <reg32 offset="0x00004" name="REVISION_ID1"/>
+ <reg32 offset="0x00008" name="REVISION_ID2"/>
+ <reg32 offset="0x0000c" name="REVISION_ID3"/>
+- <reg32 offset="0x00010" name="CLK_CFG0"/>
+- <reg32 offset="0x00014" name="CLK_CFG1"/>
++ <reg32 offset="0x00010" name="CLK_CFG0">
++ <bitfield name="DIV_CTRL_3_0" low="0" high="3" type="uint"/>
++ <bitfield name="DIV_CTRL_7_4" low="4" high="7" type="uint"/>
++ </reg32>
++ <reg32 offset="0x00014" name="CLK_CFG1">
++ <bitfield name="CLK_EN" pos="5" type="boolean"/>
++ <bitfield name="CLK_EN_SEL" pos="4" type="boolean"/>
++ <bitfield name="BITCLK_SEL" low="2" high="3" type="uint"/>
++ </reg32>
+ <reg32 offset="0x00018" name="GLBL_CTRL"/>
+ <reg32 offset="0x0001c" name="RBUF_CTRL"/>
+ <reg32 offset="0x00020" name="VREG_CTRL_0"/>
+diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
+index b4da82ddbb6b2f..8ea98f06d39afc 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
+@@ -590,6 +590,7 @@ static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm,
+ unsigned long timeout =
+ jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
+ struct mm_struct *mm = svmm->notifier.mm;
++ struct folio *folio;
+ struct page *page;
+ unsigned long start = args->p.addr;
+ unsigned long notifier_seq;
+@@ -616,12 +617,16 @@ static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm,
+ ret = -EINVAL;
+ goto out;
+ }
++ folio = page_folio(page);
+
+ mutex_lock(&svmm->mutex);
+		if (!mmu_interval_read_retry(&notifier->notifier,
+ notifier_seq))
+ break;
+ mutex_unlock(&svmm->mutex);
++
++ folio_unlock(folio);
++ folio_put(folio);
+ }
+
+ /* Map the page on the GPU. */
+@@ -637,8 +642,8 @@ static int nouveau_atomic_range_fault(struct nouveau_svmm *svmm,
+ ret = nvif_object_ioctl(&svmm->vmm->vmm.object, args, size, NULL);
+ mutex_unlock(&svmm->mutex);
+
+- unlock_page(page);
+- put_page(page);
++ folio_unlock(folio);
++ folio_put(folio);
+
+ out:
+	mmu_interval_notifier_remove(&notifier->notifier);
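
Besides the page-to-folio conversion, the hunk above fixes a leak: the retry branch of the loop previously went back around without dropping the lock and reference taken by the lookup. In folio terms, every exit from the locked region now pairs up, as in this sketch:

    struct folio *folio = page_folio(page);    /* folio backing the looked-up page */

    /* ... work on the page while the folio is locked ... */

    folio_unlock(folio);    /* matches the lock taken at lookup time */
    folio_put(folio);       /* drops the lookup reference */
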
+diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+index a6f410ba60bc94..d393bc540f8628 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
++++ b/drivers/gpu/drm/nouveau/nvkm/subdev/pmu/gp10b.c
+@@ -75,7 +75,7 @@ gp10b_pmu_acr = {
+ .bootstrap_multiple_falcons = gp10b_pmu_acr_bootstrap_multiple_falcons,
+ };
+
+-#if IS_ENABLED(CONFIG_ARCH_TEGRA_210_SOC)
++#if IS_ENABLED(CONFIG_ARCH_TEGRA_186_SOC)
+ MODULE_FIRMWARE("nvidia/gp10b/pmu/desc.bin");
+ MODULE_FIRMWARE("nvidia/gp10b/pmu/image.bin");
+ MODULE_FIRMWARE("nvidia/gp10b/pmu/sig.bin");
+diff --git a/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c b/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
+index 45d09e6fa667fd..7d68a8acfe2ea4 100644
+--- a/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
++++ b/drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
+@@ -109,13 +109,13 @@ static int jadard_prepare(struct drm_panel *panel)
+ if (jadard->desc->lp11_to_reset_delay_ms)
+ msleep(jadard->desc->lp11_to_reset_delay_ms);
+
+- gpiod_set_value(jadard->reset, 1);
++ gpiod_set_value(jadard->reset, 0);
+ msleep(5);
+
+- gpiod_set_value(jadard->reset, 0);
++ gpiod_set_value(jadard->reset, 1);
+ msleep(10);
+
+- gpiod_set_value(jadard->reset, 1);
++ gpiod_set_value(jadard->reset, 0);
+ msleep(130);
+
+ ret = jadard->desc->init(jadard);
+@@ -1130,7 +1130,7 @@ static int jadard_dsi_probe(struct mipi_dsi_device *dsi)
+ dsi->format = desc->format;
+ dsi->lanes = desc->lanes;
+
+- jadard->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
++ jadard->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
+ if (IS_ERR(jadard->reset)) {
+ DRM_DEV_ERROR(&dsi->dev, "failed to get our reset GPIO\n");
+ return PTR_ERR(jadard->reset);
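
The polarity flip above follows from gpiod semantics: gpiod_set_value() takes logical levels, and the physical inversion comes from the reset-gpios flags in the device tree, so "1" means "assert reset". Requesting the line with GPIOD_OUT_HIGH therefore starts the panel held in reset. The corrected sequence, annotated (a sketch of the patched jadard_prepare() flow):

    reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);    /* asserted at probe */
    ...
    gpiod_set_value(reset, 0);    /* release reset */
    msleep(5);
    gpiod_set_value(reset, 1);    /* assert: pulse the panel's reset pin */
    msleep(10);
    gpiod_set_value(reset, 0);    /* release; panel starts its init sequence */
    msleep(130);
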
+diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
+index 6fc00d63b2857f..e6744422dee492 100644
+--- a/drivers/gpu/drm/xe/xe_oa.c
++++ b/drivers/gpu/drm/xe/xe_oa.c
+@@ -36,11 +36,17 @@
+ #include "xe_pm.h"
+ #include "xe_sched_job.h"
+ #include "xe_sriov.h"
++#include "xe_sync.h"
+
+ #define DEFAULT_POLL_FREQUENCY_HZ 200
+ #define DEFAULT_POLL_PERIOD_NS (NSEC_PER_SEC / DEFAULT_POLL_FREQUENCY_HZ)
+ #define XE_OA_UNIT_INVALID U32_MAX
+
++enum xe_oa_submit_deps {
++ XE_OA_SUBMIT_NO_DEPS,
++ XE_OA_SUBMIT_ADD_DEPS,
++};
++
+ struct xe_oa_reg {
+ struct xe_reg addr;
+ u32 value;
+@@ -63,13 +69,8 @@ struct xe_oa_config {
+ struct rcu_head rcu;
+ };
+
+-struct flex {
+- struct xe_reg reg;
+- u32 offset;
+- u32 value;
+-};
+-
+ struct xe_oa_open_param {
++ struct xe_file *xef;
+ u32 oa_unit_id;
+ bool sample;
+ u32 metric_set;
+@@ -81,6 +82,9 @@ struct xe_oa_open_param {
+ struct xe_exec_queue *exec_q;
+ struct xe_hw_engine *hwe;
+ bool no_preempt;
++ struct drm_xe_sync __user *syncs_user;
++ int num_syncs;
++ struct xe_sync_entry *syncs;
+ };
+
+ struct xe_oa_config_bo {
+@@ -567,32 +571,60 @@ static __poll_t xe_oa_poll(struct file *file, poll_table *wait)
+ return ret;
+ }
+
+-static int xe_oa_submit_bb(struct xe_oa_stream *stream, struct xe_bb *bb)
++static void xe_oa_lock_vma(struct xe_exec_queue *q)
++{
++ if (q->vm) {
++ down_read(&q->vm->lock);
++ xe_vm_lock(q->vm, false);
++ }
++}
++
++static void xe_oa_unlock_vma(struct xe_exec_queue *q)
+ {
++ if (q->vm) {
++ xe_vm_unlock(q->vm);
++ up_read(&q->vm->lock);
++ }
++}
++
++static struct dma_fence *xe_oa_submit_bb(struct xe_oa_stream *stream, enum xe_oa_submit_deps deps,
++ struct xe_bb *bb)
++{
++ struct xe_exec_queue *q = stream->exec_q ?: stream->k_exec_q;
+ struct xe_sched_job *job;
+ struct dma_fence *fence;
+- long timeout;
+ int err = 0;
+
+- /* Kernel configuration is issued on stream->k_exec_q, not stream->exec_q */
+- job = xe_bb_create_job(stream->k_exec_q, bb);
++ xe_oa_lock_vma(q);
++
++ job = xe_bb_create_job(q, bb);
+ if (IS_ERR(job)) {
+ err = PTR_ERR(job);
+ goto exit;
+ }
++ job->ggtt = true;
++
++ if (deps == XE_OA_SUBMIT_ADD_DEPS) {
++ for (int i = 0; i < stream->num_syncs && !err; i++)
++ err = xe_sync_entry_add_deps(&stream->syncs[i], job);
++ if (err) {
++ drm_dbg(&stream->oa->xe->drm, "xe_sync_entry_add_deps err %d\n", err);
++ goto err_put_job;
++ }
++ }
+
+ xe_sched_job_arm(job);
+ fence = dma_fence_get(&job->drm.s_fence->finished);
+ xe_sched_job_push(job);
+
+- timeout = dma_fence_wait_timeout(fence, false, HZ);
+- dma_fence_put(fence);
+- if (timeout < 0)
+- err = timeout;
+- else if (!timeout)
+- err = -ETIME;
++ xe_oa_unlock_vma(q);
++
++ return fence;
++err_put_job:
++ xe_sched_job_put(job);
+ exit:
+- return err;
++ xe_oa_unlock_vma(q);
++ return ERR_PTR(err);
+ }
+
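
With this refactor xe_oa_submit_bb() no longer waits internally; it returns the job's finished-fence and each caller picks its own wait policy. The calling convention, as used by xe_oa_load_with_lri() below (sketch):

    struct dma_fence *fence;

    fence = xe_oa_submit_bb(stream, XE_OA_SUBMIT_NO_DEPS, bb);
    if (IS_ERR(fence))
        return PTR_ERR(fence);

    xe_bb_free(bb, fence);    /* defer bb teardown until the fence signals */
    dma_fence_put(fence);     /* drop the reference taken at submission */
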
+ static void write_cs_mi_lri(struct xe_bb *bb, const struct xe_oa_reg *reg_data, u32 n_regs)
+@@ -639,54 +671,30 @@ static void xe_oa_free_configs(struct xe_oa_stream *stream)
+ free_oa_config_bo(oa_bo);
+ }
+
+-static void xe_oa_store_flex(struct xe_oa_stream *stream, struct xe_lrc *lrc,
+- struct xe_bb *bb, const struct flex *flex, u32 count)
+-{
+- u32 offset = xe_bo_ggtt_addr(lrc->bo);
+-
+- do {
+- bb->cs[bb->len++] = MI_STORE_DATA_IMM | MI_SDI_GGTT | MI_SDI_NUM_DW(1);
+- bb->cs[bb->len++] = offset + flex->offset * sizeof(u32);
+- bb->cs[bb->len++] = 0;
+- bb->cs[bb->len++] = flex->value;
+-
+- } while (flex++, --count);
+-}
+-
+-static int xe_oa_modify_ctx_image(struct xe_oa_stream *stream, struct xe_lrc *lrc,
+- const struct flex *flex, u32 count)
++static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *reg_lri, u32 count)
+ {
++ struct dma_fence *fence;
+ struct xe_bb *bb;
+ int err;
+
+- bb = xe_bb_new(stream->gt, 4 * count, false);
++ bb = xe_bb_new(stream->gt, 2 * count + 1, false);
+ if (IS_ERR(bb)) {
+ err = PTR_ERR(bb);
+ goto exit;
+ }
+
+- xe_oa_store_flex(stream, lrc, bb, flex, count);
+-
+- err = xe_oa_submit_bb(stream, bb);
+- xe_bb_free(bb, NULL);
+-exit:
+- return err;
+-}
+-
+-static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *reg_lri)
+-{
+- struct xe_bb *bb;
+- int err;
++ write_cs_mi_lri(bb, reg_lri, count);
+
+- bb = xe_bb_new(stream->gt, 3, false);
+- if (IS_ERR(bb)) {
+- err = PTR_ERR(bb);
+- goto exit;
++ fence = xe_oa_submit_bb(stream, XE_OA_SUBMIT_NO_DEPS, bb);
++ if (IS_ERR(fence)) {
++ err = PTR_ERR(fence);
++ goto free_bb;
+ }
++ xe_bb_free(bb, fence);
++ dma_fence_put(fence);
+
+- write_cs_mi_lri(bb, reg_lri, 1);
+-
+- err = xe_oa_submit_bb(stream, bb);
++ return 0;
++free_bb:
+ xe_bb_free(bb, NULL);
+ exit:
+ return err;
+@@ -695,70 +703,54 @@ static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *re
+ static int xe_oa_configure_oar_context(struct xe_oa_stream *stream, bool enable)
+ {
+ const struct xe_oa_format *format = stream->oa_buffer.format;
+- struct xe_lrc *lrc = stream->exec_q->lrc[0];
+- u32 regs_offset = xe_lrc_regs_offset(lrc) / sizeof(u32);
+ u32 oacontrol = __format_to_oactrl(format, OAR_OACONTROL_COUNTER_SEL_MASK) |
+ (enable ? OAR_OACONTROL_COUNTER_ENABLE : 0);
+
+- struct flex regs_context[] = {
++ struct xe_oa_reg reg_lri[] = {
+ {
+ OACTXCONTROL(stream->hwe->mmio_base),
+- stream->oa->ctx_oactxctrl_offset[stream->hwe->class] + 1,
+ enable ? OA_COUNTER_RESUME : 0,
+ },
++ {
++ OAR_OACONTROL,
++ oacontrol,
++ },
+ {
+ RING_CONTEXT_CONTROL(stream->hwe->mmio_base),
+- regs_offset + CTX_CONTEXT_CONTROL,
+- _MASKED_BIT_ENABLE(CTX_CTRL_OAC_CONTEXT_ENABLE),
++ _MASKED_FIELD(CTX_CTRL_OAC_CONTEXT_ENABLE,
++ enable ? CTX_CTRL_OAC_CONTEXT_ENABLE : 0)
+ },
+ };
+- struct xe_oa_reg reg_lri = { OAR_OACONTROL, oacontrol };
+- int err;
+
+- /* Modify stream hwe context image with regs_context */
+- err = xe_oa_modify_ctx_image(stream, stream->exec_q->lrc[0],
+- regs_context, ARRAY_SIZE(regs_context));
+- if (err)
+- return err;
+-
+- /* Apply reg_lri using LRI */
+-	return xe_oa_load_with_lri(stream, &reg_lri);
++ return xe_oa_load_with_lri(stream, reg_lri, ARRAY_SIZE(reg_lri));
+ }
+
+ static int xe_oa_configure_oac_context(struct xe_oa_stream *stream, bool enable)
+ {
+ const struct xe_oa_format *format = stream->oa_buffer.format;
+- struct xe_lrc *lrc = stream->exec_q->lrc[0];
+- u32 regs_offset = xe_lrc_regs_offset(lrc) / sizeof(u32);
+ u32 oacontrol = __format_to_oactrl(format, OAR_OACONTROL_COUNTER_SEL_MASK) |
+ (enable ? OAR_OACONTROL_COUNTER_ENABLE : 0);
+- struct flex regs_context[] = {
++ struct xe_oa_reg reg_lri[] = {
+ {
+ OACTXCONTROL(stream->hwe->mmio_base),
+- stream->oa->ctx_oactxctrl_offset[stream->hwe->class] + 1,
+ enable ? OA_COUNTER_RESUME : 0,
+ },
++ {
++ OAC_OACONTROL,
++ oacontrol
++ },
+ {
+ RING_CONTEXT_CONTROL(stream->hwe->mmio_base),
+- regs_offset + CTX_CONTEXT_CONTROL,
+- _MASKED_BIT_ENABLE(CTX_CTRL_OAC_CONTEXT_ENABLE) |
++ _MASKED_FIELD(CTX_CTRL_OAC_CONTEXT_ENABLE,
++ enable ? CTX_CTRL_OAC_CONTEXT_ENABLE : 0) |
+ _MASKED_FIELD(CTX_CTRL_RUN_ALONE, enable ? CTX_CTRL_RUN_ALONE : 0),
+ },
+ };
+- struct xe_oa_reg reg_lri = { OAC_OACONTROL, oacontrol };
+- int err;
+
+ /* Set ccs select to enable programming of OAC_OACONTROL */
+ xe_mmio_write32(stream->gt, __oa_regs(stream)->oa_ctrl, __oa_ccs_select(stream));
+
+- /* Modify stream hwe context image with regs_context */
+- err = xe_oa_modify_ctx_image(stream, stream->exec_q->lrc[0],
+- regs_context, ARRAY_SIZE(regs_context));
+- if (err)
+- return err;
+-
+- /* Apply reg_lri using LRI */
+-	return xe_oa_load_with_lri(stream, &reg_lri);
++ return xe_oa_load_with_lri(stream, reg_lri, ARRAY_SIZE(reg_lri));
+ }
+
+ static int xe_oa_configure_oa_context(struct xe_oa_stream *stream, bool enable)
+@@ -914,15 +906,32 @@ static int xe_oa_emit_oa_config(struct xe_oa_stream *stream, struct xe_oa_config
+ {
+ #define NOA_PROGRAM_ADDITIONAL_DELAY_US 500
+ struct xe_oa_config_bo *oa_bo;
+- int err, us = NOA_PROGRAM_ADDITIONAL_DELAY_US;
++ int err = 0, us = NOA_PROGRAM_ADDITIONAL_DELAY_US;
++ struct dma_fence *fence;
++ long timeout;
+
++ /* Emit OA configuration batch */
+ oa_bo = xe_oa_alloc_config_buffer(stream, config);
+ if (IS_ERR(oa_bo)) {
+ err = PTR_ERR(oa_bo);
+ goto exit;
+ }
+
+- err = xe_oa_submit_bb(stream, oa_bo->bb);
++ fence = xe_oa_submit_bb(stream, XE_OA_SUBMIT_ADD_DEPS, oa_bo->bb);
++ if (IS_ERR(fence)) {
++ err = PTR_ERR(fence);
++ goto exit;
++ }
++
++ /* Wait till all previous batches have executed */
++ timeout = dma_fence_wait_timeout(fence, false, 5 * HZ);
++ dma_fence_put(fence);
++ if (timeout < 0)
++ err = timeout;
++ else if (!timeout)
++ err = -ETIME;
++ if (err)
++ drm_dbg(&stream->oa->xe->drm, "dma_fence_wait_timeout err %d\n", err);
+
+ /* Additional empirical delay needed for NOA programming after registers are written */
+ usleep_range(us, 2 * us);
+@@ -1362,6 +1371,9 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
+ stream->period_exponent = param->period_exponent;
+ stream->no_preempt = param->no_preempt;
+
++ stream->num_syncs = param->num_syncs;
++ stream->syncs = param->syncs;
++
+ /*
+ * For Xe2+, when overrun mode is enabled, there are no partial reports at the end
+ * of buffer, making the OA buffer effectively a non-power-of-2 size circular
+@@ -1712,6 +1724,20 @@ static int xe_oa_set_no_preempt(struct xe_oa *oa, u64 value,
+ return 0;
+ }
+
++static int xe_oa_set_prop_num_syncs(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->num_syncs = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_syncs_user(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->syncs_user = u64_to_user_ptr(value);
++ return 0;
++}
++
+ typedef int (*xe_oa_set_property_fn)(struct xe_oa *oa, u64 value,
+ struct xe_oa_open_param *param);
+ static const xe_oa_set_property_fn xe_oa_set_property_funcs[] = {
+@@ -1724,6 +1750,8 @@ static const xe_oa_set_property_fn xe_oa_set_property_funcs[] = {
+ [DRM_XE_OA_PROPERTY_EXEC_QUEUE_ID] = xe_oa_set_prop_exec_queue_id,
+ [DRM_XE_OA_PROPERTY_OA_ENGINE_INSTANCE] = xe_oa_set_prop_engine_instance,
+ [DRM_XE_OA_PROPERTY_NO_PREEMPT] = xe_oa_set_no_preempt,
++ [DRM_XE_OA_PROPERTY_NUM_SYNCS] = xe_oa_set_prop_num_syncs,
++ [DRM_XE_OA_PROPERTY_SYNCS] = xe_oa_set_prop_syncs_user,
+ };
+
+ static int xe_oa_user_ext_set_property(struct xe_oa *oa, u64 extension,
+@@ -1783,6 +1811,49 @@ static int xe_oa_user_extensions(struct xe_oa *oa, u64 extension, int ext_number
+ return 0;
+ }
+
++static int xe_oa_parse_syncs(struct xe_oa *oa, struct xe_oa_open_param *param)
++{
++ int ret, num_syncs, num_ufence = 0;
++
++ if (param->num_syncs && !param->syncs_user) {
++ drm_dbg(&oa->xe->drm, "num_syncs specified without sync array\n");
++ ret = -EINVAL;
++ goto exit;
++ }
++
++ if (param->num_syncs) {
++ param->syncs = kcalloc(param->num_syncs, sizeof(*param->syncs), GFP_KERNEL);
++ if (!param->syncs) {
++ ret = -ENOMEM;
++ goto exit;
++ }
++ }
++
++ for (num_syncs = 0; num_syncs < param->num_syncs; num_syncs++) {
++ ret = xe_sync_entry_parse(oa->xe, param->xef, &param->syncs[num_syncs],
++ &param->syncs_user[num_syncs], 0);
++ if (ret)
++ goto err_syncs;
++
++ if (xe_sync_is_ufence(&param->syncs[num_syncs]))
++ num_ufence++;
++ }
++
++ if (XE_IOCTL_DBG(oa->xe, num_ufence > 1)) {
++ ret = -EINVAL;
++ goto err_syncs;
++ }
++
++ return 0;
++
++err_syncs:
++ while (num_syncs--)
++ xe_sync_entry_cleanup(&param->syncs[num_syncs]);
++ kfree(param->syncs);
++exit:
++ return ret;
++}
++
+ /**
+ * xe_oa_stream_open_ioctl - Opens an OA stream
+ * @dev: @drm_device
+@@ -1808,6 +1879,7 @@ int xe_oa_stream_open_ioctl(struct drm_device *dev, u64 data, struct drm_file *f
+ return -ENODEV;
+ }
+
++ param.xef = xef;
+ ret = xe_oa_user_extensions(oa, data, 0, &param);
+ if (ret)
+ return ret;
+@@ -1817,8 +1889,8 @@ int xe_oa_stream_open_ioctl(struct drm_device *dev, u64 data, struct drm_file *f
+ if (XE_IOCTL_DBG(oa->xe, !param.exec_q))
+ return -ENOENT;
+
+- if (param.exec_q->width > 1)
+- drm_dbg(&oa->xe->drm, "exec_q->width > 1, programming only exec_q->lrc[0]\n");
++ if (XE_IOCTL_DBG(oa->xe, param.exec_q->width > 1))
++ return -EOPNOTSUPP;
+ }
+
+ /*
+@@ -1876,11 +1948,24 @@ int xe_oa_stream_open_ioctl(struct drm_device *dev, u64 data, struct drm_file *f
+ drm_dbg(&oa->xe->drm, "Using periodic sampling freq %lld Hz\n", oa_freq_hz);
+ }
+
++ ret = xe_oa_parse_syncs(oa, ¶m);
++ if (ret)
++ goto err_exec_q;
++
+ mutex_lock(&param.hwe->gt->oa.gt_lock);
+ ret = xe_oa_stream_open_ioctl_locked(oa, &param);
+ mutex_unlock(&param.hwe->gt->oa.gt_lock);
++ if (ret < 0)
++ goto err_sync_cleanup;
++
++ return ret;
++
++err_sync_cleanup:
++ while (param.num_syncs--)
++ xe_sync_entry_cleanup(&param.syncs[param.num_syncs]);
++ kfree(param.syncs);
+ err_exec_q:
+- if (ret < 0 && param.exec_q)
++ if (param.exec_q)
+ xe_exec_queue_put(param.exec_q);
+ return ret;
+ }
+diff --git a/drivers/gpu/drm/xe/xe_oa_types.h b/drivers/gpu/drm/xe/xe_oa_types.h
+index 8862eca73fbe32..99f4b2d4bdcf6a 100644
+--- a/drivers/gpu/drm/xe/xe_oa_types.h
++++ b/drivers/gpu/drm/xe/xe_oa_types.h
+@@ -238,5 +238,11 @@ struct xe_oa_stream {
+
+ /** @no_preempt: Whether preemption and timeslicing is disabled for stream exec_q */
+ u32 no_preempt;
++
++ /** @num_syncs: size of @syncs array */
++ u32 num_syncs;
++
++ /** @syncs: syncs to wait on and to signal */
++ struct xe_sync_entry *syncs;
+ };
+ #endif
+diff --git a/drivers/gpu/drm/xe/xe_query.c b/drivers/gpu/drm/xe/xe_query.c
+index 1c96375bd7df75..6fec5d1a1eb44b 100644
+--- a/drivers/gpu/drm/xe/xe_query.c
++++ b/drivers/gpu/drm/xe/xe_query.c
+@@ -679,7 +679,7 @@ static int query_oa_units(struct xe_device *xe,
+ du->oa_unit_id = u->oa_unit_id;
+ du->oa_unit_type = u->type;
+ du->oa_timestamp_freq = xe_oa_timestamp_frequency(gt);
+- du->capabilities = DRM_XE_OA_CAPS_BASE;
++ du->capabilities = DRM_XE_OA_CAPS_BASE | DRM_XE_OA_CAPS_SYNCS;
+
+ j = 0;
+ for_each_hw_engine(hwe, gt, hwe_id) {
+diff --git a/drivers/gpu/drm/xe/xe_ring_ops.c b/drivers/gpu/drm/xe/xe_ring_ops.c
+index 0be4f489d3e126..9f327f27c0726e 100644
+--- a/drivers/gpu/drm/xe/xe_ring_ops.c
++++ b/drivers/gpu/drm/xe/xe_ring_ops.c
+@@ -221,7 +221,10 @@ static int emit_pipe_imm_ggtt(u32 addr, u32 value, bool stall_only, u32 *dw,
+
+ static u32 get_ppgtt_flag(struct xe_sched_job *job)
+ {
+- return job->q->vm ? BIT(8) : 0;
++ if (job->q->vm && !job->ggtt)
++ return BIT(8);
++
++ return 0;
+ }
+
+ static int emit_copy_timestamp(struct xe_lrc *lrc, u32 *dw, int i)
+diff --git a/drivers/gpu/drm/xe/xe_sched_job_types.h b/drivers/gpu/drm/xe/xe_sched_job_types.h
+index 0d3f76fb05cea2..c207361bf43e1c 100644
+--- a/drivers/gpu/drm/xe/xe_sched_job_types.h
++++ b/drivers/gpu/drm/xe/xe_sched_job_types.h
+@@ -57,6 +57,8 @@ struct xe_sched_job {
+ u32 migrate_flush_flags;
+ /** @ring_ops_flush_tlb: The ring ops need to flush TLB before payload. */
+ bool ring_ops_flush_tlb;
++ /** @ggtt: mapped in ggtt. */
++ bool ggtt;
+ /** @ptrs: per instance pointers. */
+ struct xe_job_ptrs ptrs[];
+ };
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index 380aa1614442f4..3d1459b551bb2e 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -667,23 +667,50 @@ static void synaptics_pt_stop(struct serio *serio)
+ serio_continue_rx(parent->ps2dev.serio);
+ }
+
++static int synaptics_pt_open(struct serio *serio)
++{
++ struct psmouse *parent = psmouse_from_serio(serio->parent);
++ struct synaptics_data *priv = parent->private;
++
++ guard(serio_pause_rx)(parent->ps2dev.serio);
++ priv->pt_port_open = true;
++
++ return 0;
++}
++
++static void synaptics_pt_close(struct serio *serio)
++{
++ struct psmouse *parent = psmouse_from_serio(serio->parent);
++ struct synaptics_data *priv = parent->private;
++
++ guard(serio_pause_rx)(parent->ps2dev.serio);
++ priv->pt_port_open = false;
++}
++
+ static int synaptics_is_pt_packet(u8 *buf)
+ {
+ return (buf[0] & 0xFC) == 0x84 && (buf[3] & 0xCC) == 0xC4;
+ }
+
+-static void synaptics_pass_pt_packet(struct serio *ptport, u8 *packet)
++static void synaptics_pass_pt_packet(struct synaptics_data *priv, u8 *packet)
+ {
+- struct psmouse *child = psmouse_from_serio(ptport);
++ struct serio *ptport;
+
+- if (child && child->state == PSMOUSE_ACTIVATED) {
+- serio_interrupt(ptport, packet[1], 0);
+- serio_interrupt(ptport, packet[4], 0);
+- serio_interrupt(ptport, packet[5], 0);
+- if (child->pktsize == 4)
+- serio_interrupt(ptport, packet[2], 0);
+- } else {
+- serio_interrupt(ptport, packet[1], 0);
++ ptport = priv->pt_port;
++ if (!ptport)
++ return;
++
++ serio_interrupt(ptport, packet[1], 0);
++
++ if (priv->pt_port_open) {
++ struct psmouse *child = psmouse_from_serio(ptport);
++
++ if (child->state == PSMOUSE_ACTIVATED) {
++ serio_interrupt(ptport, packet[4], 0);
++ serio_interrupt(ptport, packet[5], 0);
++ if (child->pktsize == 4)
++ serio_interrupt(ptport, packet[2], 0);
++ }
+ }
+ }
+
+@@ -722,6 +749,8 @@ static void synaptics_pt_create(struct psmouse *psmouse)
+ serio->write = synaptics_pt_write;
+ serio->start = synaptics_pt_start;
+ serio->stop = synaptics_pt_stop;
++ serio->open = synaptics_pt_open;
++ serio->close = synaptics_pt_close;
+ serio->parent = psmouse->ps2dev.serio;
+
+ psmouse->pt_activate = synaptics_pt_activate;
+@@ -1218,11 +1247,10 @@ static psmouse_ret_t synaptics_process_byte(struct psmouse *psmouse)
+
+ if (SYN_CAP_PASS_THROUGH(priv->info.capabilities) &&
+ synaptics_is_pt_packet(psmouse->packet)) {
+- if (priv->pt_port)
+- synaptics_pass_pt_packet(priv->pt_port,
+- psmouse->packet);
+- } else
++ synaptics_pass_pt_packet(priv, psmouse->packet);
++ } else {
+ synaptics_process_packet(psmouse);
++ }
+
+ return PSMOUSE_FULL_PACKET;
+ }
+diff --git a/drivers/input/mouse/synaptics.h b/drivers/input/mouse/synaptics.h
+index 08533d1b1b16fc..4b34f13b9f7616 100644
+--- a/drivers/input/mouse/synaptics.h
++++ b/drivers/input/mouse/synaptics.h
+@@ -188,6 +188,7 @@ struct synaptics_data {
+ bool disable_gesture; /* disable gestures */
+
+ struct serio *pt_port; /* Pass-through serio port */
++ bool pt_port_open;
+
+ /*
+ * Last received Advanced Gesture Mode (AGM) packet. An AGM packet
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 8fdee511bc0f2c..cf469a67249723 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -44,6 +44,7 @@ static u8 dist_prio_nmi __ro_after_init = GICV3_PRIO_NMI;
+ #define FLAGS_WORKAROUND_GICR_WAKER_MSM8996 (1ULL << 0)
+ #define FLAGS_WORKAROUND_CAVIUM_ERRATUM_38539 (1ULL << 1)
+ #define FLAGS_WORKAROUND_ASR_ERRATUM_8601001 (1ULL << 2)
++#define FLAGS_WORKAROUND_INSECURE (1ULL << 3)
+
+ #define GIC_IRQ_TYPE_PARTITION (GIC_IRQ_TYPE_LPI + 1)
+
+@@ -83,6 +84,8 @@ static DEFINE_STATIC_KEY_TRUE(supports_deactivate_key);
+ #define GIC_LINE_NR min(GICD_TYPER_SPIS(gic_data.rdists.gicd_typer), 1020U)
+ #define GIC_ESPI_NR GICD_TYPER_ESPIS(gic_data.rdists.gicd_typer)
+
++static bool nmi_support_forbidden;
++
+ /*
+ * There are 16 SGIs, though we only actually use 8 in Linux. The other 8 SGIs
+ * are potentially stolen by the secure side. Some code, especially code dealing
+@@ -163,21 +166,27 @@ static void __init gic_prio_init(void)
+ {
+ bool ds;
+
+- ds = gic_dist_security_disabled();
+- if (!ds) {
+- u32 val;
+-
+- val = readl_relaxed(gic_data.dist_base + GICD_CTLR);
+- val |= GICD_CTLR_DS;
+- writel_relaxed(val, gic_data.dist_base + GICD_CTLR);
++ cpus_have_group0 = gic_has_group0();
+
+- ds = gic_dist_security_disabled();
+- if (ds)
+- pr_warn("Broken GIC integration, security disabled");
++ ds = gic_dist_security_disabled();
++ if ((gic_data.flags & FLAGS_WORKAROUND_INSECURE) && !ds) {
++ if (cpus_have_group0) {
++ u32 val;
++
++ val = readl_relaxed(gic_data.dist_base + GICD_CTLR);
++ val |= GICD_CTLR_DS;
++ writel_relaxed(val, gic_data.dist_base + GICD_CTLR);
++
++ ds = gic_dist_security_disabled();
++ if (ds)
++ pr_warn("Broken GIC integration, security disabled\n");
++ } else {
++ pr_warn("Broken GIC integration, pNMI forbidden\n");
++ nmi_support_forbidden = true;
++ }
+ }
+
+ cpus_have_security_disabled = ds;
+- cpus_have_group0 = gic_has_group0();
+
+ /*
+ * How priority values are used by the GIC depends on two things:
+@@ -209,7 +218,7 @@ static void __init gic_prio_init(void)
+ * be in the non-secure range, we program the non-secure values into
+ * the distributor to match the PMR values we want.
+ */
+- if (cpus_have_group0 & !cpus_have_security_disabled) {
++ if (cpus_have_group0 && !cpus_have_security_disabled) {
+ dist_prio_irq = __gicv3_prio_to_ns(dist_prio_irq);
+ dist_prio_nmi = __gicv3_prio_to_ns(dist_prio_nmi);
+ }
+@@ -1922,6 +1931,18 @@ static bool gic_enable_quirk_arm64_2941627(void *data)
+ return true;
+ }
+
++static bool gic_enable_quirk_rk3399(void *data)
++{
++ struct gic_chip_data *d = data;
++
++ if (of_machine_is_compatible("rockchip,rk3399")) {
++ d->flags |= FLAGS_WORKAROUND_INSECURE;
++ return true;
++ }
++
++ return false;
++}
++
+ static bool rd_set_non_coherent(void *data)
+ {
+ struct gic_chip_data *d = data;
+@@ -1996,6 +2017,12 @@ static const struct gic_quirk gic_quirks[] = {
+ .property = "dma-noncoherent",
+ .init = rd_set_non_coherent,
+ },
++ {
++ .desc = "GICv3: Insecure RK3399 integration",
++ .iidr = 0x0000043b,
++ .mask = 0xff000fff,
++ .init = gic_enable_quirk_rk3399,
++ },
+ {
+ }
+ };
+@@ -2004,7 +2031,7 @@ static void gic_enable_nmi_support(void)
+ {
+ int i;
+
+- if (!gic_prio_masking_enabled())
++ if (!gic_prio_masking_enabled() || nmi_support_forbidden)
+ return;
+
+ rdist_nmi_refs = kcalloc(gic_data.ppi_nr + SGI_NR,
+diff --git a/drivers/irqchip/irq-jcore-aic.c b/drivers/irqchip/irq-jcore-aic.c
+index b9dcc8e78c7501..1f613eb7b7f034 100644
+--- a/drivers/irqchip/irq-jcore-aic.c
++++ b/drivers/irqchip/irq-jcore-aic.c
+@@ -38,7 +38,7 @@ static struct irq_chip jcore_aic;
+ static void handle_jcore_irq(struct irq_desc *desc)
+ {
+ if (irqd_is_per_cpu(irq_desc_get_irq_data(desc)))
+- handle_percpu_irq(desc);
++ handle_percpu_devid_irq(desc);
+ else
+ handle_simple_irq(desc);
+ }
+diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
+index 32d58752477847..31bea72bcb01ad 100644
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -385,10 +385,8 @@ static int raid0_set_limits(struct mddev *mddev)
+ lim.io_min = mddev->chunk_sectors << 9;
+ lim.io_opt = lim.io_min * mddev->raid_disks;
+ err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
+- if (err) {
+- queue_limits_cancel_update(mddev->gendisk->queue);
++ if (err)
+ return err;
+- }
+ return queue_limits_set(mddev->gendisk->queue, &lim);
+ }
+
+diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
+index d83fe3b3abc009..8a994a1975ca7b 100644
+--- a/drivers/md/raid1.c
++++ b/drivers/md/raid1.c
+@@ -3171,10 +3171,8 @@ static int raid1_set_limits(struct mddev *mddev)
+ md_init_stacking_limits(&lim);
+ lim.max_write_zeroes_sectors = 0;
+ err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
+- if (err) {
+- queue_limits_cancel_update(mddev->gendisk->queue);
++ if (err)
+ return err;
+- }
+ return queue_limits_set(mddev->gendisk->queue, &lim);
+ }
+
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index daf42acc4fb6f3..a214fed4f16226 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -3963,10 +3963,8 @@ static int raid10_set_queue_limits(struct mddev *mddev)
+ lim.io_min = mddev->chunk_sectors << 9;
+ lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
+ err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
+- if (err) {
+- queue_limits_cancel_update(mddev->gendisk->queue);
++ if (err)
+ return err;
+- }
+ return queue_limits_set(mddev->gendisk->queue, &lim);
+ }
+
+diff --git a/drivers/mtd/nand/raw/cadence-nand-controller.c b/drivers/mtd/nand/raw/cadence-nand-controller.c
+index 3bc89b3569632d..fca54e21a164f3 100644
+--- a/drivers/mtd/nand/raw/cadence-nand-controller.c
++++ b/drivers/mtd/nand/raw/cadence-nand-controller.c
+@@ -471,6 +471,8 @@ struct cdns_nand_ctrl {
+ struct {
+ void __iomem *virt;
+ dma_addr_t dma;
++ dma_addr_t iova_dma;
++ u32 size;
+ } io;
+
+ int irq;
+@@ -1835,11 +1837,11 @@ static int cadence_nand_slave_dma_transfer(struct cdns_nand_ctrl *cdns_ctrl,
+ }
+
+ if (dir == DMA_FROM_DEVICE) {
+- src_dma = cdns_ctrl->io.dma;
++ src_dma = cdns_ctrl->io.iova_dma;
+ dst_dma = buf_dma;
+ } else {
+ src_dma = buf_dma;
+- dst_dma = cdns_ctrl->io.dma;
++ dst_dma = cdns_ctrl->io.iova_dma;
+ }
+
+ tx = dmaengine_prep_dma_memcpy(cdns_ctrl->dmac, dst_dma, src_dma, len,
+@@ -1861,12 +1863,12 @@ static int cadence_nand_slave_dma_transfer(struct cdns_nand_ctrl *cdns_ctrl,
+ dma_async_issue_pending(cdns_ctrl->dmac);
+ wait_for_completion(&finished);
+
+- dma_unmap_single(cdns_ctrl->dev, buf_dma, len, dir);
++ dma_unmap_single(dma_dev->dev, buf_dma, len, dir);
+
+ return 0;
+
+ err_unmap:
+- dma_unmap_single(cdns_ctrl->dev, buf_dma, len, dir);
++ dma_unmap_single(dma_dev->dev, buf_dma, len, dir);
+
+ err:
+ dev_dbg(cdns_ctrl->dev, "Fall back to CPU I/O\n");
+@@ -2869,6 +2871,7 @@ cadence_nand_irq_cleanup(int irqnum, struct cdns_nand_ctrl *cdns_ctrl)
+ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
+ {
+ dma_cap_mask_t mask;
++ struct dma_device *dma_dev = cdns_ctrl->dmac->device;
+ int ret;
+
+ cdns_ctrl->cdma_desc = dma_alloc_coherent(cdns_ctrl->dev,
+@@ -2904,15 +2907,24 @@ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
+ dma_cap_set(DMA_MEMCPY, mask);
+
+ if (cdns_ctrl->caps1->has_dma) {
+- cdns_ctrl->dmac = dma_request_channel(mask, NULL, NULL);
+- if (!cdns_ctrl->dmac) {
+- dev_err(cdns_ctrl->dev,
+- "Unable to get a DMA channel\n");
+- ret = -EBUSY;
++ cdns_ctrl->dmac = dma_request_chan_by_mask(&mask);
++ if (IS_ERR(cdns_ctrl->dmac)) {
++ ret = dev_err_probe(cdns_ctrl->dev, PTR_ERR(cdns_ctrl->dmac),
++ "%d: Failed to get a DMA channel\n", ret);
+ goto disable_irq;
+ }
+ }
+
++ cdns_ctrl->io.iova_dma = dma_map_resource(dma_dev->dev, cdns_ctrl->io.dma,
++ cdns_ctrl->io.size,
++ DMA_BIDIRECTIONAL, 0);
++
++ ret = dma_mapping_error(dma_dev->dev, cdns_ctrl->io.iova_dma);
++ if (ret) {
++ dev_err(cdns_ctrl->dev, "Failed to map I/O resource to DMA\n");
++ goto dma_release_chnl;
++ }
++
+ nand_controller_init(&cdns_ctrl->controller);
+ INIT_LIST_HEAD(&cdns_ctrl->chips);
+
+@@ -2923,18 +2935,22 @@ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
+ if (ret) {
+ dev_err(cdns_ctrl->dev, "Failed to register MTD: %d\n",
+ ret);
+- goto dma_release_chnl;
++ goto unmap_dma_resource;
+ }
+
+ kfree(cdns_ctrl->buf);
+ cdns_ctrl->buf = kzalloc(cdns_ctrl->buf_size, GFP_KERNEL);
+ if (!cdns_ctrl->buf) {
+ ret = -ENOMEM;
+- goto dma_release_chnl;
++ goto unmap_dma_resource;
+ }
+
+ return 0;
+
++unmap_dma_resource:
++ dma_unmap_resource(dma_dev->dev, cdns_ctrl->io.iova_dma,
++ cdns_ctrl->io.size, DMA_BIDIRECTIONAL, 0);
++
+ dma_release_chnl:
+ if (cdns_ctrl->dmac)
+ dma_release_channel(cdns_ctrl->dmac);
+@@ -2956,6 +2972,8 @@ static int cadence_nand_init(struct cdns_nand_ctrl *cdns_ctrl)
+ static void cadence_nand_remove(struct cdns_nand_ctrl *cdns_ctrl)
+ {
+ cadence_nand_chips_cleanup(cdns_ctrl);
++ dma_unmap_resource(cdns_ctrl->dmac->device->dev, cdns_ctrl->io.iova_dma,
++ cdns_ctrl->io.size, DMA_BIDIRECTIONAL, 0);
+ cadence_nand_irq_cleanup(cdns_ctrl->irq, cdns_ctrl);
+ kfree(cdns_ctrl->buf);
+ dma_free_coherent(cdns_ctrl->dev, sizeof(struct cadence_nand_cdma_desc),
+@@ -3020,7 +3038,9 @@ static int cadence_nand_dt_probe(struct platform_device *ofdev)
+ cdns_ctrl->io.virt = devm_platform_get_and_ioremap_resource(ofdev, 1, &res);
+ if (IS_ERR(cdns_ctrl->io.virt))
+ return PTR_ERR(cdns_ctrl->io.virt);
++
+ cdns_ctrl->io.dma = res->start;
++ cdns_ctrl->io.size = resource_size(res);
+
+ dt->clk = devm_clk_get(cdns_ctrl->dev, "nf_clk");
+ if (IS_ERR(dt->clk))
+diff --git a/drivers/mtd/spi-nor/sst.c b/drivers/mtd/spi-nor/sst.c
+index b5ad7118c49a2b..175211fe6a5ed2 100644
+--- a/drivers/mtd/spi-nor/sst.c
++++ b/drivers/mtd/spi-nor/sst.c
+@@ -174,7 +174,7 @@ static int sst_nor_write_data(struct spi_nor *nor, loff_t to, size_t len,
+ int ret;
+
+ nor->program_opcode = op;
+- ret = spi_nor_write_data(nor, to, 1, buf);
++ ret = spi_nor_write_data(nor, to, len, buf);
+ if (ret < 0)
+ return ret;
+ WARN(ret != len, "While writing %zu byte written %i bytes\n", len, ret);
+diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
+index 95471cfcff420a..8ddd366d9fde54 100644
+--- a/drivers/net/ethernet/google/gve/gve.h
++++ b/drivers/net/ethernet/google/gve/gve.h
+@@ -1110,6 +1110,16 @@ static inline u32 gve_xdp_tx_start_queue_id(struct gve_priv *priv)
+ return gve_xdp_tx_queue_id(priv, 0);
+ }
+
++static inline bool gve_supports_xdp_xmit(struct gve_priv *priv)
++{
++ switch (priv->queue_format) {
++ case GVE_GQI_QPL_FORMAT:
++ return true;
++ default:
++ return false;
++ }
++}
++
+ /* gqi napi handler defined in gve_main.c */
+ int gve_napi_poll(struct napi_struct *napi, int budget);
+
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index f985a3cf2b11fa..862c4575701fec 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -1895,6 +1895,8 @@ static void gve_turndown(struct gve_priv *priv)
+ /* Stop tx queues */
+ netif_tx_disable(priv->dev);
+
++ xdp_features_clear_redirect_target(priv->dev);
++
+ gve_clear_napi_enabled(priv);
+ gve_clear_report_stats(priv);
+
+@@ -1955,6 +1957,9 @@ static void gve_turnup(struct gve_priv *priv)
+ napi_schedule(&block->napi);
+ }
+
++ if (priv->num_xdp_queues && gve_supports_xdp_xmit(priv))
++ xdp_features_set_redirect_target(priv->dev, false);
++
+ gve_set_napi_enabled(priv);
+ }
+
+@@ -2229,7 +2234,6 @@ static void gve_set_netdev_xdp_features(struct gve_priv *priv)
+ if (priv->queue_format == GVE_GQI_QPL_FORMAT) {
+ xdp_features = NETDEV_XDP_ACT_BASIC;
+ xdp_features |= NETDEV_XDP_ACT_REDIRECT;
+- xdp_features |= NETDEV_XDP_ACT_NDO_XMIT;
+ xdp_features |= NETDEV_XDP_ACT_XSK_ZEROCOPY;
+ } else {
+ xdp_features = 0;
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 97425c06e1ed7f..61db00b2b33e43 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -2310,7 +2310,7 @@ static void ibmvnic_tx_scrq_clean_buffer(struct ibmvnic_adapter *adapter,
+ tx_buff = &tx_pool->tx_buff[index];
+ adapter->netdev->stats.tx_packets--;
+ adapter->netdev->stats.tx_bytes -= tx_buff->skb->len;
+- adapter->tx_stats_buffers[queue_num].packets--;
++ adapter->tx_stats_buffers[queue_num].batched_packets--;
+ adapter->tx_stats_buffers[queue_num].bytes -=
+ tx_buff->skb->len;
+ dev_kfree_skb_any(tx_buff->skb);
+@@ -2402,11 +2402,13 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ unsigned int tx_map_failed = 0;
+ union sub_crq indir_arr[16];
+ unsigned int tx_dropped = 0;
+- unsigned int tx_packets = 0;
++ unsigned int tx_dpackets = 0;
++ unsigned int tx_bpackets = 0;
+ unsigned int tx_bytes = 0;
+ dma_addr_t data_dma_addr;
+ struct netdev_queue *txq;
+ unsigned long lpar_rc;
++ unsigned int skblen;
+ union sub_crq tx_crq;
+ unsigned int offset;
+ bool use_scrq_send_direct = false;
+@@ -2521,6 +2523,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ tx_buff->skb = skb;
+ tx_buff->index = bufidx;
+ tx_buff->pool_index = queue_num;
++ skblen = skb->len;
+
+ memset(&tx_crq, 0, sizeof(tx_crq));
+ tx_crq.v1.first = IBMVNIC_CRQ_CMD;
+@@ -2575,6 +2578,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ if (lpar_rc != H_SUCCESS)
+ goto tx_err;
+
++ tx_dpackets++;
+ goto early_exit;
+ }
+
+@@ -2603,6 +2607,8 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ goto tx_err;
+ }
+
++ tx_bpackets++;
++
+ early_exit:
+ if (atomic_add_return(num_entries, &tx_scrq->used)
+ >= adapter->req_tx_entries_per_subcrq) {
+@@ -2610,8 +2616,7 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ netif_stop_subqueue(netdev, queue_num);
+ }
+
+- tx_packets++;
+- tx_bytes += skb->len;
++ tx_bytes += skblen;
+ txq_trans_cond_update(txq);
+ ret = NETDEV_TX_OK;
+ goto out;
+@@ -2640,10 +2645,11 @@ static netdev_tx_t ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
+ rcu_read_unlock();
+ netdev->stats.tx_dropped += tx_dropped;
+ netdev->stats.tx_bytes += tx_bytes;
+- netdev->stats.tx_packets += tx_packets;
++ netdev->stats.tx_packets += tx_bpackets + tx_dpackets;
+ adapter->tx_send_failed += tx_send_failed;
+ adapter->tx_map_failed += tx_map_failed;
+- adapter->tx_stats_buffers[queue_num].packets += tx_packets;
++ adapter->tx_stats_buffers[queue_num].batched_packets += tx_bpackets;
++ adapter->tx_stats_buffers[queue_num].direct_packets += tx_dpackets;
+ adapter->tx_stats_buffers[queue_num].bytes += tx_bytes;
+ adapter->tx_stats_buffers[queue_num].dropped_packets += tx_dropped;
+
+@@ -3808,7 +3814,10 @@ static void ibmvnic_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+ memcpy(data, ibmvnic_stats[i].name, ETH_GSTRING_LEN);
+
+ for (i = 0; i < adapter->req_tx_queues; i++) {
+- snprintf(data, ETH_GSTRING_LEN, "tx%d_packets", i);
++ snprintf(data, ETH_GSTRING_LEN, "tx%d_batched_packets", i);
++ data += ETH_GSTRING_LEN;
++
++ snprintf(data, ETH_GSTRING_LEN, "tx%d_direct_packets", i);
+ data += ETH_GSTRING_LEN;
+
+ snprintf(data, ETH_GSTRING_LEN, "tx%d_bytes", i);
+@@ -3873,7 +3882,9 @@ static void ibmvnic_get_ethtool_stats(struct net_device *dev,
+ (adapter, ibmvnic_stats[i].offset));
+
+ for (j = 0; j < adapter->req_tx_queues; j++) {
+- data[i] = adapter->tx_stats_buffers[j].packets;
++ data[i] = adapter->tx_stats_buffers[j].batched_packets;
++ i++;
++ data[i] = adapter->tx_stats_buffers[j].direct_packets;
+ i++;
+ data[i] = adapter->tx_stats_buffers[j].bytes;
+ i++;
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
+index 94ac36b1408be9..a189038d88df03 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.h
++++ b/drivers/net/ethernet/ibm/ibmvnic.h
+@@ -213,7 +213,8 @@ struct ibmvnic_statistics {
+
+ #define NUM_TX_STATS 3
+ struct ibmvnic_tx_queue_stats {
+- u64 packets;
++ u64 batched_packets;
++ u64 direct_packets;
+ u64 bytes;
+ u64 dropped_packets;
+ };
+diff --git a/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c b/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
+index 2ec62c8d86e1c1..59486fe2ad18c2 100644
+--- a/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
++++ b/drivers/net/ethernet/netronome/nfp/bpf/cmsg.c
+@@ -20,6 +20,8 @@ nfp_bpf_cmsg_alloc(struct nfp_app_bpf *bpf, unsigned int size)
+ struct sk_buff *skb;
+
+ skb = nfp_app_ctrl_msg_alloc(bpf->app, size, GFP_KERNEL);
++ if (!skb)
++ return NULL;
+ skb_put(skb, size);
+
+ return skb;
+diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+index de10a2d08c428e..fe3438abcd253d 100644
+--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
++++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+@@ -2888,6 +2888,7 @@ static int axienet_probe(struct platform_device *pdev)
+
+ lp->phylink_config.dev = &ndev->dev;
+ lp->phylink_config.type = PHYLINK_NETDEV;
++ lp->phylink_config.mac_managed_pm = true;
+ lp->phylink_config.mac_capabilities = MAC_SYM_PAUSE | MAC_ASYM_PAUSE |
+ MAC_10FD | MAC_100FD | MAC_1000FD;
+
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index ba15a0a4ce629e..963fb9261f017c 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -1902,21 +1902,9 @@ static void geneve_destroy_tunnels(struct net *net, struct list_head *head)
+ {
+ struct geneve_net *gn = net_generic(net, geneve_net_id);
+ struct geneve_dev *geneve, *next;
+- struct net_device *dev, *aux;
+
+- /* gather any geneve devices that were moved into this ns */
+- for_each_netdev_safe(net, dev, aux)
+- if (dev->rtnl_link_ops == &geneve_link_ops)
+- unregister_netdevice_queue(dev, head);
+-
+- /* now gather any other geneve devices that were created in this ns */
+- list_for_each_entry_safe(geneve, next, &gn->geneve_list, next) {
+- /* If geneve->dev is in the same netns, it was already added
+- * to the list by the previous loop.
+- */
+- if (!net_eq(dev_net(geneve->dev), net))
+- unregister_netdevice_queue(geneve->dev, head);
+- }
++ list_for_each_entry_safe(geneve, next, &gn->geneve_list, next)
++ geneve_dellink(geneve->dev, head);
+ }
+
+ static void __net_exit geneve_exit_batch_rtnl(struct list_head *net_list,
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 47406ce9901612..33b78b4007fe7a 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -2487,11 +2487,6 @@ static void __net_exit gtp_net_exit_batch_rtnl(struct list_head *net_list,
+ list_for_each_entry(net, net_list, exit_list) {
+ struct gtp_net *gn = net_generic(net, gtp_net_id);
+ struct gtp_dev *gtp, *gtp_next;
+- struct net_device *dev;
+-
+- for_each_netdev(net, dev)
+- if (dev->rtnl_link_ops == &gtp_link_ops)
+- gtp_dellink(dev, dev_to_kill);
+
+ list_for_each_entry_safe(gtp, gtp_next, &gn->gtp_dev_list, list)
+ gtp_dellink(gtp->dev, dev_to_kill);
+diff --git a/drivers/net/pse-pd/pd692x0.c b/drivers/net/pse-pd/pd692x0.c
+index 0af7db80b2f883..7cfc36cadb5761 100644
+--- a/drivers/net/pse-pd/pd692x0.c
++++ b/drivers/net/pse-pd/pd692x0.c
+@@ -999,13 +999,12 @@ static int pd692x0_pi_get_voltage(struct pse_controller_dev *pcdev, int id)
+ return (buf.sub[0] << 8 | buf.sub[1]) * 100000;
+ }
+
+-static int pd692x0_pi_get_current_limit(struct pse_controller_dev *pcdev,
+- int id)
++static int pd692x0_pi_get_pw_limit(struct pse_controller_dev *pcdev,
++ int id)
+ {
+ struct pd692x0_priv *priv = to_pd692x0_priv(pcdev);
+ struct pd692x0_msg msg, buf = {0};
+- int mW, uV, uA, ret;
+- s64 tmp_64;
++ int ret;
+
+ msg = pd692x0_msg_template_list[PD692X0_MSG_GET_PORT_PARAM];
+ msg.sub[2] = id;
+@@ -1013,48 +1012,24 @@ static int pd692x0_pi_get_current_limit(struct pse_controller_dev *pcdev,
+ if (ret < 0)
+ return ret;
+
+- ret = pd692x0_pi_get_pw_from_table(buf.data[2], buf.data[3]);
+- if (ret < 0)
+- return ret;
+- mW = ret;
+-
+- ret = pd692x0_pi_get_voltage(pcdev, id);
+- if (ret < 0)
+- return ret;
+- uV = ret;
+-
+- tmp_64 = mW;
+- tmp_64 *= 1000000000ull;
+- /* uA = mW * 1000000000 / uV */
+- uA = DIV_ROUND_CLOSEST_ULL(tmp_64, uV);
+- return uA;
++ return pd692x0_pi_get_pw_from_table(buf.data[0], buf.data[1]);
+ }
+
+-static int pd692x0_pi_set_current_limit(struct pse_controller_dev *pcdev,
+- int id, int max_uA)
++static int pd692x0_pi_set_pw_limit(struct pse_controller_dev *pcdev,
++ int id, int max_mW)
+ {
+ struct pd692x0_priv *priv = to_pd692x0_priv(pcdev);
+ struct device *dev = &priv->client->dev;
+ struct pd692x0_msg msg, buf = {0};
+- int uV, ret, mW;
+- s64 tmp_64;
++ int ret;
+
+ ret = pd692x0_fw_unavailable(priv);
+ if (ret)
+ return ret;
+
+- ret = pd692x0_pi_get_voltage(pcdev, id);
+- if (ret < 0)
+- return ret;
+- uV = ret;
+-
+ msg = pd692x0_msg_template_list[PD692X0_MSG_SET_PORT_PARAM];
+ msg.sub[2] = id;
+- tmp_64 = uV;
+- tmp_64 *= max_uA;
+- /* mW = uV * uA / 1000000000 */
+- mW = DIV_ROUND_CLOSEST_ULL(tmp_64, 1000000000);
+- ret = pd692x0_pi_set_pw_from_table(dev, &msg, mW);
++ ret = pd692x0_pi_set_pw_from_table(dev, &msg, max_mW);
+ if (ret)
+ return ret;
+
+@@ -1068,8 +1043,8 @@ static const struct pse_controller_ops pd692x0_ops = {
+ .pi_disable = pd692x0_pi_disable,
+ .pi_is_enabled = pd692x0_pi_is_enabled,
+ .pi_get_voltage = pd692x0_pi_get_voltage,
+- .pi_get_current_limit = pd692x0_pi_get_current_limit,
+- .pi_set_current_limit = pd692x0_pi_set_current_limit,
++ .pi_get_pw_limit = pd692x0_pi_get_pw_limit,
++ .pi_set_pw_limit = pd692x0_pi_set_pw_limit,
+ };
+
+ #define PD692X0_FW_LINE_MAX_SZ 0xff
+diff --git a/drivers/net/pse-pd/pse_core.c b/drivers/net/pse-pd/pse_core.c
+index 2906ce173f66cd..bb509d973e914e 100644
+--- a/drivers/net/pse-pd/pse_core.c
++++ b/drivers/net/pse-pd/pse_core.c
+@@ -291,32 +291,24 @@ static int pse_pi_get_voltage(struct regulator_dev *rdev)
+ return ret;
+ }
+
+-static int _pse_ethtool_get_status(struct pse_controller_dev *pcdev,
+- int id,
+- struct netlink_ext_ack *extack,
+- struct pse_control_status *status);
+-
+ static int pse_pi_get_current_limit(struct regulator_dev *rdev)
+ {
+ struct pse_controller_dev *pcdev = rdev_get_drvdata(rdev);
+ const struct pse_controller_ops *ops;
+- struct netlink_ext_ack extack = {};
+- struct pse_control_status st = {};
+- int id, uV, ret;
++ int id, uV, mW, ret;
+ s64 tmp_64;
+
+ ops = pcdev->ops;
+ id = rdev_get_id(rdev);
++ if (!ops->pi_get_pw_limit || !ops->pi_get_voltage)
++ return -EOPNOTSUPP;
++
+ mutex_lock(&pcdev->lock);
+- if (ops->pi_get_current_limit) {
+- ret = ops->pi_get_current_limit(pcdev, id);
++ ret = ops->pi_get_pw_limit(pcdev, id);
++ if (ret < 0)
+ goto out;
+- }
++ mW = ret;
+
+- /* If pi_get_current_limit() callback not populated get voltage
+- * from pi_get_voltage() and power limit from ethtool_get_status()
+- * to calculate current limit.
+- */
+ ret = _pse_pi_get_voltage(rdev);
+ if (!ret) {
+ dev_err(pcdev->dev, "Voltage null\n");
+@@ -327,16 +319,7 @@ static int pse_pi_get_current_limit(struct regulator_dev *rdev)
+ goto out;
+ uV = ret;
+
+- ret = _pse_ethtool_get_status(pcdev, id, &extack, &st);
+- if (ret)
+- goto out;
+-
+- if (!st.c33_avail_pw_limit) {
+- ret = -ENODATA;
+- goto out;
+- }
+-
+- tmp_64 = st.c33_avail_pw_limit;
++ tmp_64 = mW;
+ tmp_64 *= 1000000000ull;
+ /* uA = mW * 1000000000 / uV */
+ ret = DIV_ROUND_CLOSEST_ULL(tmp_64, uV);
+@@ -351,15 +334,33 @@ static int pse_pi_set_current_limit(struct regulator_dev *rdev, int min_uA,
+ {
+ struct pse_controller_dev *pcdev = rdev_get_drvdata(rdev);
+ const struct pse_controller_ops *ops;
+- int id, ret;
++ int id, mW, ret;
++ s64 tmp_64;
+
+ ops = pcdev->ops;
+- if (!ops->pi_set_current_limit)
++ if (!ops->pi_set_pw_limit || !ops->pi_get_voltage)
+ return -EOPNOTSUPP;
+
++ if (max_uA > MAX_PI_CURRENT)
++ return -ERANGE;
++
+ id = rdev_get_id(rdev);
+ mutex_lock(&pcdev->lock);
+- ret = ops->pi_set_current_limit(pcdev, id, max_uA);
++ ret = _pse_pi_get_voltage(rdev);
++ if (!ret) {
++ dev_err(pcdev->dev, "Voltage null\n");
++ ret = -ERANGE;
++ goto out;
++ }
++ if (ret < 0)
++ goto out;
++
++ tmp_64 = ret;
++ tmp_64 *= max_uA;
++ /* mW = uA * uV / 1000000000 */
++ mW = DIV_ROUND_CLOSEST_ULL(tmp_64, 1000000000);
++ ret = ops->pi_set_pw_limit(pcdev, id, mW);
++out:
+ mutex_unlock(&pcdev->lock);
+
+ return ret;
+@@ -403,11 +404,9 @@ devm_pse_pi_regulator_register(struct pse_controller_dev *pcdev,
+
+ rinit_data->constraints.valid_ops_mask = REGULATOR_CHANGE_STATUS;
+
+- if (pcdev->ops->pi_set_current_limit) {
++ if (pcdev->ops->pi_set_pw_limit)
+ rinit_data->constraints.valid_ops_mask |=
+ REGULATOR_CHANGE_CURRENT;
+- rinit_data->constraints.max_uA = MAX_PI_CURRENT;
+- }
+
+ rinit_data->supply_regulator = "vpwr";
+
+@@ -736,23 +735,6 @@ struct pse_control *of_pse_control_get(struct device_node *node)
+ }
+ EXPORT_SYMBOL_GPL(of_pse_control_get);
+
+-static int _pse_ethtool_get_status(struct pse_controller_dev *pcdev,
+- int id,
+- struct netlink_ext_ack *extack,
+- struct pse_control_status *status)
+-{
+- const struct pse_controller_ops *ops;
+-
+- ops = pcdev->ops;
+- if (!ops->ethtool_get_status) {
+- NL_SET_ERR_MSG(extack,
+- "PSE driver does not support status report");
+- return -EOPNOTSUPP;
+- }
+-
+- return ops->ethtool_get_status(pcdev, id, extack, status);
+-}
+-
+ /**
+ * pse_ethtool_get_status - get status of PSE control
+ * @psec: PSE control pointer
+@@ -765,11 +747,21 @@ int pse_ethtool_get_status(struct pse_control *psec,
+ struct netlink_ext_ack *extack,
+ struct pse_control_status *status)
+ {
++ const struct pse_controller_ops *ops;
++ struct pse_controller_dev *pcdev;
+ int err;
+
+- mutex_lock(&psec->pcdev->lock);
+- err = _pse_ethtool_get_status(psec->pcdev, psec->id, extack, status);
+- mutex_unlock(&psec->pcdev->lock);
++ pcdev = psec->pcdev;
++ ops = pcdev->ops;
++ if (!ops->ethtool_get_status) {
++ NL_SET_ERR_MSG(extack,
++ "PSE driver does not support status report");
++ return -EOPNOTSUPP;
++ }
++
++ mutex_lock(&pcdev->lock);
++ err = ops->ethtool_get_status(pcdev, psec->id, extack, status);
++ mutex_unlock(&pcdev->lock);
+
+ return err;
+ }
+diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
+index a96976b22fa796..61af1583356c27 100644
+--- a/drivers/nvme/host/ioctl.c
++++ b/drivers/nvme/host/ioctl.c
+@@ -276,8 +276,7 @@ static bool nvme_validate_passthru_nsid(struct nvme_ctrl *ctrl,
+ {
+ if (ns && nsid != ns->head->ns_id) {
+ dev_err(ctrl->device,
+- "%s: nsid (%u) in cmd does not match nsid (%u)"
+- "of namespace\n",
++ "%s: nsid (%u) in cmd does not match nsid (%u) of namespace\n",
+ current->comm, nsid, ns->head->ns_id);
+ return false;
+ }
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 8305d3c1280748..840ae475074d09 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -1449,11 +1449,14 @@ static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue)
+ msg.msg_control = cbuf;
+ msg.msg_controllen = sizeof(cbuf);
+ }
++ msg.msg_flags = MSG_WAITALL;
+ ret = kernel_recvmsg(queue->sock, &msg, &iov, 1,
+ iov.iov_len, msg.msg_flags);
+- if (ret < 0) {
++ if (ret < sizeof(*icresp)) {
+ pr_warn("queue %d: failed to receive icresp, error %d\n",
+ nvme_tcp_queue_id(queue), ret);
++ if (ret >= 0)
++ ret = -ECONNRESET;
+ goto free_icresp;
+ }
+ ret = -ENOTCONN;
+@@ -1565,7 +1568,7 @@ static bool nvme_tcp_poll_queue(struct nvme_tcp_queue *queue)
+ ctrl->io_queues[HCTX_TYPE_POLL];
+ }
+
+-/**
++/*
+ * Track the number of queues assigned to each cpu using a global per-cpu
+ * counter and select the least used cpu from the mq_map. Our goal is to spread
+ * different controllers I/O threads across different cpu cores.
+diff --git a/drivers/pci/devres.c b/drivers/pci/devres.c
+index b133967faef840..643f85849ef64b 100644
+--- a/drivers/pci/devres.c
++++ b/drivers/pci/devres.c
+@@ -411,46 +411,20 @@ static inline bool mask_contains_bar(int mask, int bar)
+ return mask & BIT(bar);
+ }
+
+-/*
+- * This is a copy of pci_intx() used to bypass the problem of recursive
+- * function calls due to the hybrid nature of pci_intx().
+- */
+-static void __pcim_intx(struct pci_dev *pdev, int enable)
+-{
+- u16 pci_command, new;
+-
+- pci_read_config_word(pdev, PCI_COMMAND, &pci_command);
+-
+- if (enable)
+- new = pci_command & ~PCI_COMMAND_INTX_DISABLE;
+- else
+- new = pci_command | PCI_COMMAND_INTX_DISABLE;
+-
+- if (new != pci_command)
+- pci_write_config_word(pdev, PCI_COMMAND, new);
+-}
+-
+ static void pcim_intx_restore(struct device *dev, void *data)
+ {
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct pcim_intx_devres *res = data;
+
+- __pcim_intx(pdev, res->orig_intx);
++ pci_intx(pdev, res->orig_intx);
+ }
+
+-static struct pcim_intx_devres *get_or_create_intx_devres(struct device *dev)
++static void save_orig_intx(struct pci_dev *pdev, struct pcim_intx_devres *res)
+ {
+- struct pcim_intx_devres *res;
+-
+- res = devres_find(dev, pcim_intx_restore, NULL, NULL);
+- if (res)
+- return res;
++ u16 pci_command;
+
+- res = devres_alloc(pcim_intx_restore, sizeof(*res), GFP_KERNEL);
+- if (res)
+- devres_add(dev, res);
+-
+- return res;
++ pci_read_config_word(pdev, PCI_COMMAND, &pci_command);
++ res->orig_intx = !(pci_command & PCI_COMMAND_INTX_DISABLE);
+ }
+
+ /**
+@@ -466,16 +440,28 @@ static struct pcim_intx_devres *get_or_create_intx_devres(struct device *dev)
+ int pcim_intx(struct pci_dev *pdev, int enable)
+ {
+ struct pcim_intx_devres *res;
++ struct device *dev = &pdev->dev;
+
+- res = get_or_create_intx_devres(&pdev->dev);
+- if (!res)
+- return -ENOMEM;
++ /*
++ * pcim_intx() must only restore the INTx value that existed before the
++ * driver was loaded, i.e., before it called pcim_intx() for the
++ * first time.
++ */
++ res = devres_find(dev, pcim_intx_restore, NULL, NULL);
++ if (!res) {
++ res = devres_alloc(pcim_intx_restore, sizeof(*res), GFP_KERNEL);
++ if (!res)
++ return -ENOMEM;
++
++ save_orig_intx(pdev, res);
++ devres_add(dev, res);
++ }
+
+- res->orig_intx = !enable;
+- __pcim_intx(pdev, enable);
++ pci_intx(pdev, enable);
+
+ return 0;
+ }
++EXPORT_SYMBOL_GPL(pcim_intx);
+
+ static void pcim_disable_device(void *pdev_raw)
+ {
+@@ -939,7 +925,7 @@ static void pcim_release_all_regions(struct pci_dev *pdev)
+ * desired, release individual regions with pcim_release_region() or all of
+ * them at once with pcim_release_all_regions().
+ */
+-static int pcim_request_all_regions(struct pci_dev *pdev, const char *name)
++int pcim_request_all_regions(struct pci_dev *pdev, const char *name)
+ {
+ int ret;
+ int bar;
+@@ -957,6 +943,7 @@ static int pcim_request_all_regions(struct pci_dev *pdev, const char *name)
+
+ return ret;
+ }
++EXPORT_SYMBOL(pcim_request_all_regions);
+
+ /**
+ * pcim_iomap_regions_request_all - Request all BARs and iomap specified ones
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index dd3c6dcb47ae4a..1aa5d6f98ebda2 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4486,11 +4486,6 @@ void pci_disable_parity(struct pci_dev *dev)
+ * @enable: boolean: whether to enable or disable PCI INTx
+ *
+ * Enables/disables PCI INTx for device @pdev
+- *
+- * NOTE:
+- * This is a "hybrid" function: It's normally unmanaged, but becomes managed
+- * when pcim_enable_device() has been called in advance. This hybrid feature is
+- * DEPRECATED! If you want managed cleanup, use pcim_intx() instead.
+ */
+ void pci_intx(struct pci_dev *pdev, int enable)
+ {
+@@ -4503,15 +4498,10 @@ void pci_intx(struct pci_dev *pdev, int enable)
+ else
+ new = pci_command | PCI_COMMAND_INTX_DISABLE;
+
+- if (new != pci_command) {
+- /* Preserve the "hybrid" behavior for backwards compatibility */
+- if (pci_is_managed(pdev)) {
+- WARN_ON_ONCE(pcim_intx(pdev, enable) != 0);
+- return;
+- }
++ if (new == pci_command)
++ return;
+
+- pci_write_config_word(pdev, PCI_COMMAND, new);
+- }
++ pci_write_config_word(pdev, PCI_COMMAND, new);
+ }
+ EXPORT_SYMBOL_GPL(pci_intx);
+
+diff --git a/drivers/platform/cznic/Kconfig b/drivers/platform/cznic/Kconfig
+index 49c383eb678541..13e37b49d9d01e 100644
+--- a/drivers/platform/cznic/Kconfig
++++ b/drivers/platform/cznic/Kconfig
+@@ -6,6 +6,7 @@
+
+ menuconfig CZNIC_PLATFORMS
+ bool "Platform support for CZ.NIC's Turris hardware"
++ depends on ARCH_MVEBU || COMPILE_TEST
+ help
+ Say Y here to be able to choose driver support for CZ.NIC's Turris
+ devices. This option alone does not add any kernel code.
+diff --git a/drivers/power/supply/axp20x_battery.c b/drivers/power/supply/axp20x_battery.c
+index f71cc90fea1273..57eba1ddb17ba5 100644
+--- a/drivers/power/supply/axp20x_battery.c
++++ b/drivers/power/supply/axp20x_battery.c
+@@ -466,10 +466,9 @@ static int axp717_battery_get_prop(struct power_supply *psy,
+
+ /*
+ * If a fault is detected it must also be cleared; if the
+- * condition persists it should reappear (This is an
+- * assumption, it's actually not documented). A restart was
+- * not sufficient to clear the bit in testing despite the
+- * register listed as POR.
++ * condition persists it should reappear. A restart was not
++ * sufficient to clear the bit in testing despite the register
++ * listed as POR.
+ */
+ case POWER_SUPPLY_PROP_HEALTH:
+ ret = regmap_read(axp20x_batt->regmap, AXP717_PMU_FAULT,
+@@ -480,26 +479,26 @@ static int axp717_battery_get_prop(struct power_supply *psy,
+ switch (reg & AXP717_BATT_PMU_FAULT_MASK) {
+ case AXP717_BATT_UVLO_2_5V:
+ val->intval = POWER_SUPPLY_HEALTH_DEAD;
+- regmap_update_bits(axp20x_batt->regmap,
+- AXP717_PMU_FAULT,
+- AXP717_BATT_UVLO_2_5V,
+- AXP717_BATT_UVLO_2_5V);
++ regmap_write_bits(axp20x_batt->regmap,
++ AXP717_PMU_FAULT,
++ AXP717_BATT_UVLO_2_5V,
++ AXP717_BATT_UVLO_2_5V);
+ return 0;
+
+ case AXP717_BATT_OVER_TEMP:
+ val->intval = POWER_SUPPLY_HEALTH_HOT;
+- regmap_update_bits(axp20x_batt->regmap,
+- AXP717_PMU_FAULT,
+- AXP717_BATT_OVER_TEMP,
+- AXP717_BATT_OVER_TEMP);
++ regmap_write_bits(axp20x_batt->regmap,
++ AXP717_PMU_FAULT,
++ AXP717_BATT_OVER_TEMP,
++ AXP717_BATT_OVER_TEMP);
+ return 0;
+
+ case AXP717_BATT_UNDER_TEMP:
+ val->intval = POWER_SUPPLY_HEALTH_COLD;
+- regmap_update_bits(axp20x_batt->regmap,
+- AXP717_PMU_FAULT,
+- AXP717_BATT_UNDER_TEMP,
+- AXP717_BATT_UNDER_TEMP);
++ regmap_write_bits(axp20x_batt->regmap,
++ AXP717_PMU_FAULT,
++ AXP717_BATT_UNDER_TEMP,
++ AXP717_BATT_UNDER_TEMP);
+ return 0;
+
+ default:
+diff --git a/drivers/power/supply/da9150-fg.c b/drivers/power/supply/da9150-fg.c
+index 652c1f213af1c2..4f28ef1bba1a3c 100644
+--- a/drivers/power/supply/da9150-fg.c
++++ b/drivers/power/supply/da9150-fg.c
+@@ -247,9 +247,9 @@ static int da9150_fg_current_avg(struct da9150_fg *fg,
+ DA9150_QIF_SD_GAIN_SIZE);
+ da9150_fg_read_sync_end(fg);
+
+- div = (u64) (sd_gain * shunt_val * 65536ULL);
++ div = 65536ULL * sd_gain * shunt_val;
+ do_div(div, 1000000);
+- res = (u64) (iavg * 1000000ULL);
++ res = 1000000ULL * iavg;
+ do_div(res, div);
+
+ val->intval = (int) res;
+diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c
+index e36e3ea165d3b2..2f34761e64135c 100644
+--- a/drivers/s390/net/ism_drv.c
++++ b/drivers/s390/net/ism_drv.c
+@@ -588,6 +588,15 @@ static int ism_dev_init(struct ism_dev *ism)
+ return ret;
+ }
+
++static void ism_dev_release(struct device *dev)
++{
++ struct ism_dev *ism;
++
++ ism = container_of(dev, struct ism_dev, dev);
++
++ kfree(ism);
++}
++
+ static int ism_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ struct ism_dev *ism;
+@@ -601,6 +610,7 @@ static int ism_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ dev_set_drvdata(&pdev->dev, ism);
+ ism->pdev = pdev;
+ ism->dev.parent = &pdev->dev;
++ ism->dev.release = ism_dev_release;
+ device_initialize(&ism->dev);
+ dev_set_name(&ism->dev, dev_name(&pdev->dev));
+ ret = device_add(&ism->dev);
+@@ -637,7 +647,7 @@ static int ism_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ device_del(&ism->dev);
+ err_dev:
+ dev_set_drvdata(&pdev->dev, NULL);
+- kfree(ism);
++ put_device(&ism->dev);
+
+ return ret;
+ }
+@@ -682,7 +692,7 @@ static void ism_remove(struct pci_dev *pdev)
+ pci_disable_device(pdev);
+ device_del(&ism->dev);
+ dev_set_drvdata(&pdev->dev, NULL);
+- kfree(ism);
++ put_device(&ism->dev);
+ }
+
+ static struct pci_driver ism_driver = {
+diff --git a/drivers/soc/loongson/loongson2_guts.c b/drivers/soc/loongson/loongson2_guts.c
+index ef352a0f502208..1fcf7ca8083e10 100644
+--- a/drivers/soc/loongson/loongson2_guts.c
++++ b/drivers/soc/loongson/loongson2_guts.c
+@@ -114,8 +114,11 @@ static int loongson2_guts_probe(struct platform_device *pdev)
+ if (of_property_read_string(root, "model", &machine))
+ of_property_read_string_index(root, "compatible", 0, &machine);
+ of_node_put(root);
+- if (machine)
++ if (machine) {
+ soc_dev_attr.machine = devm_kstrdup(dev, machine, GFP_KERNEL);
++ if (!soc_dev_attr.machine)
++ return -ENOMEM;
++ }
+
+ svr = loongson2_guts_get_svr();
+ soc_die = loongson2_soc_die_match(svr, loongson2_soc_die);
+diff --git a/drivers/tee/optee/supp.c b/drivers/tee/optee/supp.c
+index 322a543b8c278a..d0f397c9024201 100644
+--- a/drivers/tee/optee/supp.c
++++ b/drivers/tee/optee/supp.c
+@@ -80,7 +80,6 @@ u32 optee_supp_thrd_req(struct tee_context *ctx, u32 func, size_t num_params,
+ struct optee *optee = tee_get_drvdata(ctx->teedev);
+ struct optee_supp *supp = &optee->supp;
+ struct optee_supp_req *req;
+- bool interruptable;
+ u32 ret;
+
+ /*
+@@ -111,36 +110,18 @@ u32 optee_supp_thrd_req(struct tee_context *ctx, u32 func, size_t num_params,
+ /*
+ * Wait for supplicant to process and return result, once we've
+ * returned from wait_for_completion(&req->c) successfully we have
+- * exclusive access again.
++ * exclusive access again. Allow the wait to be killable such that
++ * the wait doesn't turn into an indefinite state if the supplicant
++ * gets hung for some reason.
+ */
+- while (wait_for_completion_interruptible(&req->c)) {
++ if (wait_for_completion_killable(&req->c)) {
+ mutex_lock(&supp->mutex);
+- interruptable = !supp->ctx;
+- if (interruptable) {
+- /*
+- * There's no supplicant available and since the
+- * supp->mutex currently is held none can
+- * become available until the mutex released
+- * again.
+- *
+- * Interrupting an RPC to supplicant is only
+- * allowed as a way of slightly improving the user
+- * experience in case the supplicant hasn't been
+- * started yet. During normal operation the supplicant
+- * will serve all requests in a timely manner and
+- * interrupting then wouldn't make sense.
+- */
+- if (req->in_queue) {
+- list_del(&req->link);
+- req->in_queue = false;
+- }
++ if (req->in_queue) {
++ list_del(&req->link);
++ req->in_queue = false;
+ }
+ mutex_unlock(&supp->mutex);
+-
+- if (interruptable) {
+- req->ret = TEEC_ERROR_COMMUNICATION;
+- break;
+- }
++ req->ret = TEEC_ERROR_COMMUNICATION;
+ }
+
+ ret = req->ret;
+diff --git a/drivers/usb/gadget/function/f_midi.c b/drivers/usb/gadget/function/f_midi.c
+index 4153643c67dcec..1f18f15dba2778 100644
+--- a/drivers/usb/gadget/function/f_midi.c
++++ b/drivers/usb/gadget/function/f_midi.c
+@@ -283,7 +283,7 @@ f_midi_complete(struct usb_ep *ep, struct usb_request *req)
+ /* Our transmit completed. See if there's more to go.
+ * f_midi_transmit eats req, don't queue it again. */
+ req->length = 0;
+- f_midi_transmit(midi);
++ queue_work(system_highpri_wq, &midi->work);
+ return;
+ }
+ break;
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index 90aef2627ca27b..40332ab62f1018 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -545,8 +545,7 @@ static noinline int add_ra_bio_pages(struct inode *inode,
+ * subpage::readers and to unlock the page.
+ */
+ if (fs_info->sectorsize < PAGE_SIZE)
+- btrfs_subpage_start_reader(fs_info, folio, cur,
+- add_size);
++ btrfs_folio_set_lock(fs_info, folio, cur, add_size);
+ folio_put(folio);
+ cur += add_size;
+ }
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index fe08c983d5bb4b..660a5b9c08e9e4 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -190,7 +190,7 @@ static void process_one_folio(struct btrfs_fs_info *fs_info,
+ btrfs_folio_clamp_clear_writeback(fs_info, folio, start, len);
+
+ if (folio != locked_folio && (page_ops & PAGE_UNLOCK))
+- btrfs_folio_end_writer_lock(fs_info, folio, start, len);
++ btrfs_folio_end_lock(fs_info, folio, start, len);
+ }
+
+ static void __process_folios_contig(struct address_space *mapping,
+@@ -276,7 +276,7 @@ static noinline int lock_delalloc_folios(struct inode *inode,
+ range_start = max_t(u64, folio_pos(folio), start);
+ range_len = min_t(u64, folio_pos(folio) + folio_size(folio),
+ end + 1) - range_start;
+- btrfs_folio_set_writer_lock(fs_info, folio, range_start, range_len);
++ btrfs_folio_set_lock(fs_info, folio, range_start, range_len);
+
+ processed_end = range_start + range_len - 1;
+ }
+@@ -438,7 +438,7 @@ static void end_folio_read(struct folio *folio, bool uptodate, u64 start, u32 le
+ if (!btrfs_is_subpage(fs_info, folio->mapping))
+ folio_unlock(folio);
+ else
+- btrfs_subpage_end_reader(fs_info, folio, start, len);
++ btrfs_folio_end_lock(fs_info, folio, start, len);
+ }
+
+ /*
+@@ -495,7 +495,7 @@ static void begin_folio_read(struct btrfs_fs_info *fs_info, struct folio *folio)
+ return;
+
+ ASSERT(folio_test_private(folio));
+- btrfs_subpage_start_reader(fs_info, folio, folio_pos(folio), PAGE_SIZE);
++ btrfs_folio_set_lock(fs_info, folio, folio_pos(folio), PAGE_SIZE);
+ }
+
+ /*
+@@ -1105,15 +1105,59 @@ int btrfs_read_folio(struct file *file, struct folio *folio)
+ return ret;
+ }
+
++static void set_delalloc_bitmap(struct folio *folio, unsigned long *delalloc_bitmap,
++ u64 start, u32 len)
++{
++ struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
++ const u64 folio_start = folio_pos(folio);
++ unsigned int start_bit;
++ unsigned int nbits;
++
++ ASSERT(start >= folio_start && start + len <= folio_start + PAGE_SIZE);
++ start_bit = (start - folio_start) >> fs_info->sectorsize_bits;
++ nbits = len >> fs_info->sectorsize_bits;
++ ASSERT(bitmap_test_range_all_zero(delalloc_bitmap, start_bit, nbits));
++ bitmap_set(delalloc_bitmap, start_bit, nbits);
++}
++
++static bool find_next_delalloc_bitmap(struct folio *folio,
++ unsigned long *delalloc_bitmap, u64 start,
++ u64 *found_start, u32 *found_len)
++{
++ struct btrfs_fs_info *fs_info = folio_to_fs_info(folio);
++ const u64 folio_start = folio_pos(folio);
++ const unsigned int bitmap_size = fs_info->sectors_per_page;
++ unsigned int start_bit;
++ unsigned int first_zero;
++ unsigned int first_set;
++
++ ASSERT(start >= folio_start && start < folio_start + PAGE_SIZE);
++
++ start_bit = (start - folio_start) >> fs_info->sectorsize_bits;
++ first_set = find_next_bit(delalloc_bitmap, bitmap_size, start_bit);
++ if (first_set >= bitmap_size)
++ return false;
++
++ *found_start = folio_start + (first_set << fs_info->sectorsize_bits);
++ first_zero = find_next_zero_bit(delalloc_bitmap, bitmap_size, first_set);
++ *found_len = (first_zero - first_set) << fs_info->sectorsize_bits;
++ return true;
++}
++
+ /*
+- * helper for extent_writepage(), doing all of the delayed allocation setup.
++ * Do all of the delayed allocation setup.
++ *
++ * Return >0 if all the dirty blocks are submitted async (compression) or inlined.
++ * The @folio should no longer be touched (treat it as already unlocked).
+ *
+- * This returns 1 if btrfs_run_delalloc_range function did all the work required
+- * to write the page (copy into inline extent). In this case the IO has
+- * been started and the page is already unlocked.
++ * Return 0 if there is still dirty block that needs to be submitted through
++ * extent_writepage_io().
++ * bio_ctrl->submit_bitmap will indicate which blocks of the folio should be
++ * submitted, and @folio is still kept locked.
+ *
+- * This returns 0 if all went well (page still locked)
+- * This returns < 0 if there were errors (page still locked)
++ * Return <0 if there is any error hit.
++ * Any allocated ordered extent range covering this folio will be marked
++ * finished (IOERR), and @folio is still kept locked.
+ */
+ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+ struct folio *folio,
+@@ -1124,16 +1168,28 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+ const bool is_subpage = btrfs_is_subpage(fs_info, folio->mapping);
+ const u64 page_start = folio_pos(folio);
+ const u64 page_end = page_start + folio_size(folio) - 1;
++ unsigned long delalloc_bitmap = 0;
+ /*
+ * Save the last found delalloc end. As the delalloc end can go beyond
+ * page boundary, thus we cannot rely on subpage bitmap to locate the
+ * last delalloc end.
+ */
+ u64 last_delalloc_end = 0;
++ /*
++ * The range end (exclusive) of the last successfully finished delalloc
++ * range.
++ * Any range covered by ordered extent must either be manually marked
++ * finished (error handling), or has IO submitted (and finish the
++ * ordered extent normally).
++ *
++ * This records the end of ordered extent cleanup if we hit an error.
++ */
++ u64 last_finished_delalloc_end = page_start;
+ u64 delalloc_start = page_start;
+ u64 delalloc_end = page_end;
+ u64 delalloc_to_write = 0;
+ int ret = 0;
++ int bit;
+
+ /* Save the dirty bitmap as our submission bitmap will be a subset of it. */
+ if (btrfs_is_subpage(fs_info, inode->vfs_inode.i_mapping)) {
+@@ -1143,6 +1199,12 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+ bio_ctrl->submit_bitmap = 1;
+ }
+
++ for_each_set_bit(bit, &bio_ctrl->submit_bitmap, fs_info->sectors_per_page) {
++ u64 start = page_start + (bit << fs_info->sectorsize_bits);
++
++ btrfs_folio_set_lock(fs_info, folio, start, fs_info->sectorsize);
++ }
++
+ /* Lock all (subpage) delalloc ranges inside the folio first. */
+ while (delalloc_start < page_end) {
+ delalloc_end = page_end;
+@@ -1151,9 +1213,8 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+ delalloc_start = delalloc_end + 1;
+ continue;
+ }
+- btrfs_folio_set_writer_lock(fs_info, folio, delalloc_start,
+- min(delalloc_end, page_end) + 1 -
+- delalloc_start);
++ set_delalloc_bitmap(folio, &delalloc_bitmap, delalloc_start,
++ min(delalloc_end, page_end) + 1 - delalloc_start);
+ last_delalloc_end = delalloc_end;
+ delalloc_start = delalloc_end + 1;
+ }
+@@ -1178,7 +1239,7 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+ found_len = last_delalloc_end + 1 - found_start;
+ found = true;
+ } else {
+- found = btrfs_subpage_find_writer_locked(fs_info, folio,
++ found = find_next_delalloc_bitmap(folio, &delalloc_bitmap,
+ delalloc_start, &found_start, &found_len);
+ }
+ if (!found)
+@@ -1192,11 +1253,19 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+ found_len = last_delalloc_end + 1 - found_start;
+
+ if (ret >= 0) {
++ /*
++ * Some delalloc range may be created by previous folios.
++ * Thus we still need to clean up this range during error
++ * handling.
++ */
++ last_finished_delalloc_end = found_start;
+ /* No errors hit so far, run the current delalloc range. */
+ ret = btrfs_run_delalloc_range(inode, folio,
+ found_start,
+ found_start + found_len - 1,
+ wbc);
++ if (ret >= 0)
++ last_finished_delalloc_end = found_start + found_len;
+ } else {
+ /*
+ * We've hit an error during previous delalloc range,
+@@ -1231,8 +1300,22 @@ static noinline_for_stack int writepage_delalloc(struct btrfs_inode *inode,
+
+ delalloc_start = found_start + found_len;
+ }
+- if (ret < 0)
++ /*
++ * It's possible we had some ordered extents created before we hit
++ * an error, cleanup non-async successfully created delalloc ranges.
++ */
++ if (unlikely(ret < 0)) {
++ unsigned int bitmap_size = min(
++ (last_finished_delalloc_end - page_start) >>
++ fs_info->sectorsize_bits,
++ fs_info->sectors_per_page);
++
++ for_each_set_bit(bit, &bio_ctrl->submit_bitmap, bitmap_size)
++ btrfs_mark_ordered_io_finished(inode, folio,
++ page_start + (bit << fs_info->sectorsize_bits),
++ fs_info->sectorsize, false);
+ return ret;
++ }
+ out:
+ if (last_delalloc_end)
+ delalloc_end = last_delalloc_end;
+@@ -1348,6 +1431,7 @@ static noinline_for_stack int extent_writepage_io(struct btrfs_inode *inode,
+ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ unsigned long range_bitmap = 0;
+ bool submitted_io = false;
++ bool error = false;
+ const u64 folio_start = folio_pos(folio);
+ u64 cur;
+ int bit;
+@@ -1390,13 +1474,26 @@ static noinline_for_stack int extent_writepage_io(struct btrfs_inode *inode,
+ break;
+ }
+ ret = submit_one_sector(inode, folio, cur, bio_ctrl, i_size);
+- if (ret < 0)
+- goto out;
++ if (unlikely(ret < 0)) {
++ /*
++ * bio_ctrl may contain a bio crossing several folios.
++ * Submit it immediately so that the bio has a chance
++ * to finish normally, rather than being marked as an error.
++ */
++ submit_one_bio(bio_ctrl);
++ /*
++ * We failed to grab the extent map, which should be very rare.
++ * Since no bio was submitted to finish the ordered
++ * extent, we have to manually finish this sector.
++ */
++ btrfs_mark_ordered_io_finished(inode, folio, cur,
++ fs_info->sectorsize, false);
++ error = true;
++ continue;
++ }
+ submitted_io = true;
+ }
+
+- btrfs_folio_assert_not_dirty(fs_info, folio, start, len);
+-out:
+ /*
+ * If we didn't submit any sector (>= i_size), the folio dirty flag gets
+ * cleared but PAGECACHE_TAG_DIRTY is not cleared (only cleared
+@@ -1404,8 +1501,11 @@ static noinline_for_stack int extent_writepage_io(struct btrfs_inode *inode,
+ *
+ * Here we set writeback and clear for the range. If the full folio
+ * is no longer dirty then we clear the PAGECACHE_TAG_DIRTY tag.
++ *
++ * If we hit any error, the corresponding sector will still be dirty,
++ * so there is no need to clear PAGECACHE_TAG_DIRTY.
+ */
+- if (!submitted_io) {
++ if (!submitted_io && !error) {
+ btrfs_folio_set_writeback(fs_info, folio, start, len);
+ btrfs_folio_clear_writeback(fs_info, folio, start, len);
+ }
+@@ -1423,15 +1523,14 @@ static noinline_for_stack int extent_writepage_io(struct btrfs_inode *inode,
+ */
+ static int extent_writepage(struct folio *folio, struct btrfs_bio_ctrl *bio_ctrl)
+ {
+- struct inode *inode = folio->mapping->host;
+- struct btrfs_fs_info *fs_info = inode_to_fs_info(inode);
+- const u64 page_start = folio_pos(folio);
++ struct btrfs_inode *inode = BTRFS_I(folio->mapping->host);
++ struct btrfs_fs_info *fs_info = inode->root->fs_info;
+ int ret;
+ size_t pg_offset;
+- loff_t i_size = i_size_read(inode);
++ loff_t i_size = i_size_read(&inode->vfs_inode);
+ unsigned long end_index = i_size >> PAGE_SHIFT;
+
+- trace_extent_writepage(folio, inode, bio_ctrl->wbc);
++ trace_extent_writepage(folio, &inode->vfs_inode, bio_ctrl->wbc);
+
+ WARN_ON(!folio_test_locked(folio));
+
+@@ -1455,13 +1554,13 @@ static int extent_writepage(struct folio *folio, struct btrfs_bio_ctrl *bio_ctrl
+ if (ret < 0)
+ goto done;
+
+- ret = writepage_delalloc(BTRFS_I(inode), folio, bio_ctrl);
++ ret = writepage_delalloc(inode, folio, bio_ctrl);
+ if (ret == 1)
+ return 0;
+ if (ret)
+ goto done;
+
+- ret = extent_writepage_io(BTRFS_I(inode), folio, folio_pos(folio),
++ ret = extent_writepage_io(inode, folio, folio_pos(folio),
+ PAGE_SIZE, bio_ctrl, i_size);
+ if (ret == 1)
+ return 0;
+@@ -1469,17 +1568,13 @@ static int extent_writepage(struct folio *folio, struct btrfs_bio_ctrl *bio_ctrl
+ bio_ctrl->wbc->nr_to_write--;
+
+ done:
+- if (ret) {
+- btrfs_mark_ordered_io_finished(BTRFS_I(inode), folio,
+- page_start, PAGE_SIZE, !ret);
++ if (ret < 0)
+ mapping_set_error(folio->mapping, ret);
+- }
+-
+ /*
+ * Only unlock ranges that are submitted, as there can be some
+ * async-submitted ranges inside the folio.
+ */
+- btrfs_folio_end_writer_lock_bitmap(fs_info, folio, bio_ctrl->submit_bitmap);
++ btrfs_folio_end_lock_bitmap(fs_info, folio, bio_ctrl->submit_bitmap);
+ ASSERT(ret <= 0);
+ return ret;
+ }
+@@ -2231,12 +2326,9 @@ void extent_write_locked_range(struct inode *inode, const struct folio *locked_f
+ if (ret == 1)
+ goto next_page;
+
+- if (ret) {
+- btrfs_mark_ordered_io_finished(BTRFS_I(inode), folio,
+- cur, cur_len, !ret);
++ if (ret)
+ mapping_set_error(mapping, ret);
+- }
+- btrfs_folio_end_writer_lock(fs_info, folio, cur, cur_len);
++ btrfs_folio_end_lock(fs_info, folio, cur, cur_len);
+ if (ret < 0)
+ found_error = true;
+ next_page:
+@@ -2463,12 +2555,6 @@ static bool folio_range_has_eb(struct btrfs_fs_info *fs_info, struct folio *foli
+ subpage = folio_get_private(folio);
+ if (atomic_read(&subpage->eb_refs))
+ return true;
+- /*
+- * Even there is no eb refs here, we may still have
+- * end_folio_read() call relying on page::private.
+- */
+- if (atomic_read(&subpage->readers))
+- return true;
+ }
+ return false;
+ }
+diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c
+index f7e7d864f41440..5b842276573e82 100644
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -2419,8 +2419,7 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct folio *locked_fol
+
+ out:
+ if (ret < 0)
+- btrfs_cleanup_ordered_extents(inode, locked_folio, start,
+- end - start + 1);
++ btrfs_cleanup_ordered_extents(inode, NULL, start, end - start + 1);
+ return ret;
+ }
+
+diff --git a/fs/btrfs/subpage.c b/fs/btrfs/subpage.c
+index ec7328a6bfd755..88a01d51ab11f1 100644
+--- a/fs/btrfs/subpage.c
++++ b/fs/btrfs/subpage.c
+@@ -140,12 +140,10 @@ struct btrfs_subpage *btrfs_alloc_subpage(const struct btrfs_fs_info *fs_info,
+ return ERR_PTR(-ENOMEM);
+
+ spin_lock_init(&ret->lock);
+- if (type == BTRFS_SUBPAGE_METADATA) {
++ if (type == BTRFS_SUBPAGE_METADATA)
+ atomic_set(&ret->eb_refs, 0);
+- } else {
+- atomic_set(&ret->readers, 0);
+- atomic_set(&ret->writers, 0);
+- }
++ else
++ atomic_set(&ret->nr_locked, 0);
+ return ret;
+ }
+
+@@ -221,62 +219,6 @@ static void btrfs_subpage_assert(const struct btrfs_fs_info *fs_info,
+ __start_bit; \
+ })
+
+-void btrfs_subpage_start_reader(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
+-{
+- struct btrfs_subpage *subpage = folio_get_private(folio);
+- const int start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len);
+- const int nbits = len >> fs_info->sectorsize_bits;
+- unsigned long flags;
+-
+-
+- btrfs_subpage_assert(fs_info, folio, start, len);
+-
+- spin_lock_irqsave(&subpage->lock, flags);
+- /*
+- * Even though it's just for reading the page, no one should have
+- * locked the subpage range.
+- */
+- ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits));
+- bitmap_set(subpage->bitmaps, start_bit, nbits);
+- atomic_add(nbits, &subpage->readers);
+- spin_unlock_irqrestore(&subpage->lock, flags);
+-}
+-
+-void btrfs_subpage_end_reader(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
+-{
+- struct btrfs_subpage *subpage = folio_get_private(folio);
+- const int start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len);
+- const int nbits = len >> fs_info->sectorsize_bits;
+- unsigned long flags;
+- bool is_data;
+- bool last;
+-
+- btrfs_subpage_assert(fs_info, folio, start, len);
+- is_data = is_data_inode(BTRFS_I(folio->mapping->host));
+-
+- spin_lock_irqsave(&subpage->lock, flags);
+-
+- /* The range should have already been locked. */
+- ASSERT(bitmap_test_range_all_set(subpage->bitmaps, start_bit, nbits));
+- ASSERT(atomic_read(&subpage->readers) >= nbits);
+-
+- bitmap_clear(subpage->bitmaps, start_bit, nbits);
+- last = atomic_sub_and_test(nbits, &subpage->readers);
+-
+- /*
+- * For data we need to unlock the page if the last read has finished.
+- *
+- * And please don't replace @last with atomic_sub_and_test() call
+- * inside if () condition.
+- * As we want the atomic_sub_and_test() to be always executed.
+- */
+- if (is_data && last)
+- folio_unlock(folio);
+- spin_unlock_irqrestore(&subpage->lock, flags);
+-}
+-
+ static void btrfs_subpage_clamp_range(struct folio *folio, u64 *start, u32 *len)
+ {
+ u64 orig_start = *start;
+@@ -295,28 +237,8 @@ static void btrfs_subpage_clamp_range(struct folio *folio, u64 *start, u32 *len)
+ orig_start + orig_len) - *start;
+ }
+
+-static void btrfs_subpage_start_writer(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
+-{
+- struct btrfs_subpage *subpage = folio_get_private(folio);
+- const int start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len);
+- const int nbits = (len >> fs_info->sectorsize_bits);
+- unsigned long flags;
+- int ret;
+-
+- btrfs_subpage_assert(fs_info, folio, start, len);
+-
+- spin_lock_irqsave(&subpage->lock, flags);
+- ASSERT(atomic_read(&subpage->readers) == 0);
+- ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits));
+- bitmap_set(subpage->bitmaps, start_bit, nbits);
+- ret = atomic_add_return(nbits, &subpage->writers);
+- ASSERT(ret == nbits);
+- spin_unlock_irqrestore(&subpage->lock, flags);
+-}
+-
+-static bool btrfs_subpage_end_and_test_writer(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
++static bool btrfs_subpage_end_and_test_lock(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, u64 start, u32 len)
+ {
+ struct btrfs_subpage *subpage = folio_get_private(folio);
+ const int start_bit = subpage_calc_start_bit(fs_info, folio, locked, start, len);
+@@ -334,9 +256,9 @@ static bool btrfs_subpage_end_and_test_writer(const struct btrfs_fs_info *fs_inf
+ * extent_clear_unlock_delalloc() for compression path.
+ *
+ * This @locked_page is locked by plain lock_page(), thus its
+- * subpage::writers is 0. Handle them in a special way.
++ * subpage::locked is 0. Handle it in a special way.
+ */
+- if (atomic_read(&subpage->writers) == 0) {
++ if (atomic_read(&subpage->nr_locked) == 0) {
+ spin_unlock_irqrestore(&subpage->lock, flags);
+ return true;
+ }
+@@ -345,39 +267,12 @@ static bool btrfs_subpage_end_and_test_writer(const struct btrfs_fs_info *fs_inf
+ clear_bit(bit, subpage->bitmaps);
+ cleared++;
+ }
+- ASSERT(atomic_read(&subpage->writers) >= cleared);
+- last = atomic_sub_and_test(cleared, &subpage->writers);
++ ASSERT(atomic_read(&subpage->nr_locked) >= cleared);
++ last = atomic_sub_and_test(cleared, &subpage->nr_locked);
+ spin_unlock_irqrestore(&subpage->lock, flags);
+ return last;
+ }
+
+-/*
+- * Lock a folio for delalloc page writeback.
+- *
+- * Return -EAGAIN if the page is not properly initialized.
+- * Return 0 with the page locked, and writer counter updated.
+- *
+- * Even with 0 returned, the page still need extra check to make sure
+- * it's really the correct page, as the caller is using
+- * filemap_get_folios_contig(), which can race with page invalidating.
+- */
+-int btrfs_folio_start_writer_lock(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
+-{
+- if (unlikely(!fs_info) || !btrfs_is_subpage(fs_info, folio->mapping)) {
+- folio_lock(folio);
+- return 0;
+- }
+- folio_lock(folio);
+- if (!folio_test_private(folio) || !folio_get_private(folio)) {
+- folio_unlock(folio);
+- return -EAGAIN;
+- }
+- btrfs_subpage_clamp_range(folio, &start, &len);
+- btrfs_subpage_start_writer(fs_info, folio, start, len);
+- return 0;
+-}
+-
+ /*
+ * Handle different locked folios:
+ *
+@@ -394,8 +289,8 @@ int btrfs_folio_start_writer_lock(const struct btrfs_fs_info *fs_info,
+ * bitmap, reduce the writer lock number, and unlock the page if that's
+ * the last locked range.
+ */
+-void btrfs_folio_end_writer_lock(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
++void btrfs_folio_end_lock(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, u64 start, u32 len)
+ {
+ struct btrfs_subpage *subpage = folio_get_private(folio);
+
+@@ -408,24 +303,24 @@ void btrfs_folio_end_writer_lock(const struct btrfs_fs_info *fs_info,
+
+ /*
+ * For the subpage case, there are two types of locked folios: with or
+- * without writers number.
++ * without a lock count.
+ *
+- * Since we own the page lock, no one else could touch subpage::writers
++ * Since we own the page lock, no one else could touch subpage::locked
+ * and we are safe to do several atomic operations without spinlock.
+ */
+- if (atomic_read(&subpage->writers) == 0) {
+- /* No writers, locked by plain lock_page(). */
++ if (atomic_read(&subpage->nr_locked) == 0) {
++ /* No subpage lock, locked by plain lock_page(). */
+ folio_unlock(folio);
+ return;
+ }
+
+ btrfs_subpage_clamp_range(folio, &start, &len);
+- if (btrfs_subpage_end_and_test_writer(fs_info, folio, start, len))
++ if (btrfs_subpage_end_and_test_lock(fs_info, folio, start, len))
+ folio_unlock(folio);
+ }
+
+-void btrfs_folio_end_writer_lock_bitmap(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, unsigned long bitmap)
++void btrfs_folio_end_lock_bitmap(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, unsigned long bitmap)
+ {
+ struct btrfs_subpage *subpage = folio_get_private(folio);
+ const int start_bit = fs_info->sectors_per_page * btrfs_bitmap_nr_locked;
+@@ -439,8 +334,8 @@ void btrfs_folio_end_writer_lock_bitmap(const struct btrfs_fs_info *fs_info,
+ return;
+ }
+
+- if (atomic_read(&subpage->writers) == 0) {
+- /* No writers, locked by plain lock_page(). */
++ if (atomic_read(&subpage->nr_locked) == 0) {
++ /* No subpage lock, locked by plain lock_page(). */
+ folio_unlock(folio);
+ return;
+ }
+@@ -450,8 +345,8 @@ void btrfs_folio_end_writer_lock_bitmap(const struct btrfs_fs_info *fs_info,
+ if (test_and_clear_bit(bit + start_bit, subpage->bitmaps))
+ cleared++;
+ }
+- ASSERT(atomic_read(&subpage->writers) >= cleared);
+- last = atomic_sub_and_test(cleared, &subpage->writers);
++ ASSERT(atomic_read(&subpage->nr_locked) >= cleared);
++ last = atomic_sub_and_test(cleared, &subpage->nr_locked);
+ spin_unlock_irqrestore(&subpage->lock, flags);
+ if (last)
+ folio_unlock(folio);
+@@ -776,8 +671,8 @@ void btrfs_folio_assert_not_dirty(const struct btrfs_fs_info *fs_info,
+ * This populates the involved subpage ranges so that subpage helpers can
+ * properly unlock them.
+ */
+-void btrfs_folio_set_writer_lock(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len)
++void btrfs_folio_set_lock(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, u64 start, u32 len)
+ {
+ struct btrfs_subpage *subpage;
+ unsigned long flags;
+@@ -796,58 +691,11 @@ void btrfs_folio_set_writer_lock(const struct btrfs_fs_info *fs_info,
+ /* Target range should not yet be locked. */
+ ASSERT(bitmap_test_range_all_zero(subpage->bitmaps, start_bit, nbits));
+ bitmap_set(subpage->bitmaps, start_bit, nbits);
+- ret = atomic_add_return(nbits, &subpage->writers);
++ ret = atomic_add_return(nbits, &subpage->nr_locked);
+ ASSERT(ret <= fs_info->sectors_per_page);
+ spin_unlock_irqrestore(&subpage->lock, flags);
+ }
+
+-/*
+- * Find any subpage writer locked range inside @folio, starting at file offset
+- * @search_start. The caller should ensure the folio is locked.
+- *
+- * Return true and update @found_start_ret and @found_len_ret to the first
+- * writer locked range.
+- * Return false if there is no writer locked range.
+- */
+-bool btrfs_subpage_find_writer_locked(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 search_start,
+- u64 *found_start_ret, u32 *found_len_ret)
+-{
+- struct btrfs_subpage *subpage = folio_get_private(folio);
+- const u32 sectors_per_page = fs_info->sectors_per_page;
+- const unsigned int len = PAGE_SIZE - offset_in_page(search_start);
+- const unsigned int start_bit = subpage_calc_start_bit(fs_info, folio,
+- locked, search_start, len);
+- const unsigned int locked_bitmap_start = sectors_per_page * btrfs_bitmap_nr_locked;
+- const unsigned int locked_bitmap_end = locked_bitmap_start + sectors_per_page;
+- unsigned long flags;
+- int first_zero;
+- int first_set;
+- bool found = false;
+-
+- ASSERT(folio_test_locked(folio));
+- spin_lock_irqsave(&subpage->lock, flags);
+- first_set = find_next_bit(subpage->bitmaps, locked_bitmap_end, start_bit);
+- if (first_set >= locked_bitmap_end)
+- goto out;
+-
+- found = true;
+-
+- *found_start_ret = folio_pos(folio) +
+- ((first_set - locked_bitmap_start) << fs_info->sectorsize_bits);
+- /*
+- * Since @first_set is ensured to be smaller than locked_bitmap_end
+- * here, @found_start_ret should be inside the folio.
+- */
+- ASSERT(*found_start_ret < folio_pos(folio) + PAGE_SIZE);
+-
+- first_zero = find_next_zero_bit(subpage->bitmaps, locked_bitmap_end, first_set);
+- *found_len_ret = (first_zero - first_set) << fs_info->sectorsize_bits;
+-out:
+- spin_unlock_irqrestore(&subpage->lock, flags);
+- return found;
+-}
+-
+ #define GET_SUBPAGE_BITMAP(subpage, fs_info, name, dst) \
+ { \
+ const int sectors_per_page = fs_info->sectors_per_page; \
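
The subpage rework above collapses the separate readers/writers counters into a single nr_locked count driven by the per-sector "locked" bitmap: locking a range sets bits and bumps the count, and unlocking clears bits and releases the folio once the count hits zero. A simplified, self-contained sketch of the pattern (hypothetical struct; no range clamping or irq-save locking):

	struct sector_lock {
		spinlock_t lock;
		atomic_t nr_locked;	/* how many sectors are locked */
		unsigned long bitmap;	/* one bit per sector in the page */
	};

	static void sector_lock_range(struct sector_lock *sl, int first, int nbits)
	{
		spin_lock(&sl->lock);
		bitmap_set(&sl->bitmap, first, nbits);
		atomic_add(nbits, &sl->nr_locked);
		spin_unlock(&sl->lock);
	}

	/* Returns true when the last locked sector was released. */
	static bool sector_unlock_range(struct sector_lock *sl, int first, int nbits)
	{
		bool last;

		spin_lock(&sl->lock);
		bitmap_clear(&sl->bitmap, first, nbits);
		last = atomic_sub_and_test(nbits, &sl->nr_locked);
		spin_unlock(&sl->lock);
		return last;	/* caller unlocks the folio when true */
	}
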
+diff --git a/fs/btrfs/subpage.h b/fs/btrfs/subpage.h
+index cdb554e0d215e2..44fff1f4eac482 100644
+--- a/fs/btrfs/subpage.h
++++ b/fs/btrfs/subpage.h
+@@ -45,14 +45,6 @@ enum {
+ struct btrfs_subpage {
+ /* Common members for both data and metadata pages */
+ spinlock_t lock;
+- /*
+- * Both data and metadata needs to track how many readers are for the
+- * page.
+- * Data relies on @readers to unlock the page when last reader finished.
+- * While metadata doesn't need page unlock, it needs to prevent
+- * page::private get cleared before the last end_page_read().
+- */
+- atomic_t readers;
+ union {
+ /*
+ * Structures only used by metadata
+@@ -62,8 +54,12 @@ struct btrfs_subpage {
+ */
+ atomic_t eb_refs;
+
+- /* Structures only used by data */
+- atomic_t writers;
++ /*
++ * Structures only used by data.
++ *
++ * How many sectors inside the page are locked.
++ */
++ atomic_t nr_locked;
+ };
+ unsigned long bitmaps[];
+ };
+@@ -95,23 +91,12 @@ void btrfs_free_subpage(struct btrfs_subpage *subpage);
+ void btrfs_folio_inc_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio);
+ void btrfs_folio_dec_eb_refs(const struct btrfs_fs_info *fs_info, struct folio *folio);
+
+-void btrfs_subpage_start_reader(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len);
+-void btrfs_subpage_end_reader(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len);
+-
+-int btrfs_folio_start_writer_lock(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len);
+-void btrfs_folio_end_writer_lock(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len);
+-void btrfs_folio_set_writer_lock(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 start, u32 len);
+-void btrfs_folio_end_writer_lock_bitmap(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, unsigned long bitmap);
+-bool btrfs_subpage_find_writer_locked(const struct btrfs_fs_info *fs_info,
+- struct folio *folio, u64 search_start,
+- u64 *found_start_ret, u32 *found_len_ret);
+-
++void btrfs_folio_end_lock(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, u64 start, u32 len);
++void btrfs_folio_set_lock(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, u64 start, u32 len);
++void btrfs_folio_end_lock_bitmap(const struct btrfs_fs_info *fs_info,
++ struct folio *folio, unsigned long bitmap);
+ /*
+ * Template for subpage related operations.
+ *
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index fafc07e38663ca..e11e67af760f44 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1381,7 +1381,7 @@ int cifs_get_inode_info(struct inode **inode,
+ struct cifs_fattr fattr = {};
+ int rc;
+
+- if (is_inode_cache_good(*inode)) {
++ if (!data && is_inode_cache_good(*inode)) {
+ cifs_dbg(FYI, "No need to revalidate cached inode sizes\n");
+ return 0;
+ }
+@@ -1480,7 +1480,7 @@ int smb311_posix_get_inode_info(struct inode **inode,
+ struct cifs_fattr fattr = {};
+ int rc;
+
+- if (is_inode_cache_good(*inode)) {
++ if (!data && is_inode_cache_good(*inode)) {
+ cifs_dbg(FYI, "No need to revalidate cached inode sizes\n");
+ return 0;
+ }
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 44952727fef9ef..e8da63d29a28f1 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -4991,6 +4991,10 @@ receive_encrypted_standard(struct TCP_Server_Info *server,
+ next_buffer = (char *)cifs_buf_get();
+ else
+ next_buffer = (char *)cifs_small_buf_get();
++ if (!next_buffer) {
++ cifs_server_dbg(VFS, "No memory for (large) SMB response\n");
++ return -1;
++ }
+ memcpy(next_buffer, buf + next_cmd, pdu_length - next_cmd);
+ }
+
+diff --git a/fs/xfs/scrub/common.h b/fs/xfs/scrub/common.h
+index 47148cc4a833e5..eb00d48590f200 100644
+--- a/fs/xfs/scrub/common.h
++++ b/fs/xfs/scrub/common.h
+@@ -179,7 +179,6 @@ static inline bool xchk_skip_xref(struct xfs_scrub_metadata *sm)
+ bool xchk_dir_looks_zapped(struct xfs_inode *dp);
+ bool xchk_pptr_looks_zapped(struct xfs_inode *ip);
+
+-#ifdef CONFIG_XFS_ONLINE_REPAIR
+ /* Decide if a repair is required. */
+ static inline bool xchk_needs_repair(const struct xfs_scrub_metadata *sm)
+ {
+@@ -199,10 +198,6 @@ static inline bool xchk_could_repair(const struct xfs_scrub *sc)
+ return (sc->sm->sm_flags & XFS_SCRUB_IFLAG_REPAIR) &&
+ !(sc->flags & XREP_ALREADY_FIXED);
+ }
+-#else
+-# define xchk_needs_repair(sc) (false)
+-# define xchk_could_repair(sc) (false)
+-#endif /* CONFIG_XFS_ONLINE_REPAIR */
+
+ int xchk_metadata_inode_forks(struct xfs_scrub *sc);
+
+diff --git a/fs/xfs/scrub/repair.h b/fs/xfs/scrub/repair.h
+index 0e0dc2bf985c21..96180176c582f3 100644
+--- a/fs/xfs/scrub/repair.h
++++ b/fs/xfs/scrub/repair.h
+@@ -163,7 +163,16 @@ bool xrep_buf_verify_struct(struct xfs_buf *bp, const struct xfs_buf_ops *ops);
+ #else
+
+ #define xrep_ino_dqattach(sc) (0)
+-#define xrep_will_attempt(sc) (false)
++
++/*
++ * When online repair is not built into the kernel, we still want to attempt
++ * the repair so that the stub xrep_attempt below will return EOPNOTSUPP.
++ */
++static inline bool xrep_will_attempt(const struct xfs_scrub *sc)
++{
++ return (sc->sm->sm_flags & XFS_SCRUB_IFLAG_FORCE_REBUILD) ||
++ xchk_needs_repair(sc->sm);
++}
+
+ static inline int
+ xrep_attempt(
+diff --git a/fs/xfs/scrub/scrub.c b/fs/xfs/scrub/scrub.c
+index 4cbcf7a86dbec5..5c266d2842dbe9 100644
+--- a/fs/xfs/scrub/scrub.c
++++ b/fs/xfs/scrub/scrub.c
+@@ -149,6 +149,18 @@ xchk_probe(
+ if (xchk_should_terminate(sc, &error))
+ return error;
+
++ /*
++ * If the caller is probing to see if repair works but repair isn't
++ * built into the kernel, return EOPNOTSUPP because that's the signal
++ * that userspace expects. If online repair is built in, set the
++ * CORRUPT flag (without any of the usual tracing/logging) to force us
++ * into xrep_probe.
++ */
++ if (xchk_could_repair(sc)) {
++ if (!IS_ENABLED(CONFIG_XFS_ONLINE_REPAIR))
++ return -EOPNOTSUPP;
++ sc->sm->sm_flags |= XFS_SCRUB_OFLAG_CORRUPT;
++ }
+ return 0;
+ }
+
+diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
+index 4f17b786828af7..35b886385f3298 100644
+--- a/include/linux/netdevice.h
++++ b/include/linux/netdevice.h
+@@ -3064,6 +3064,8 @@ static inline struct net_device *first_net_device_rcu(struct net *net)
+ }
+
+ int netdev_boot_setup_check(struct net_device *dev);
++struct net_device *dev_getbyhwaddr(struct net *net, unsigned short type,
++ const char *hwaddr);
+ struct net_device *dev_getbyhwaddr_rcu(struct net *net, unsigned short type,
+ const char *hwaddr);
+ struct net_device *dev_getfirstbyhwtype(struct net *net, unsigned short type);
+diff --git a/include/linux/pci.h b/include/linux/pci.h
+index 4e77c4230c0a19..74114acbb07fbb 100644
+--- a/include/linux/pci.h
++++ b/include/linux/pci.h
+@@ -2293,6 +2293,8 @@ static inline void pci_fixup_device(enum pci_fixup_pass pass,
+ struct pci_dev *dev) { }
+ #endif
+
++int pcim_intx(struct pci_dev *pdev, int enabled);
++int pcim_request_all_regions(struct pci_dev *pdev, const char *name);
+ void __iomem *pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen);
+ void __iomem *pcim_iomap_region(struct pci_dev *pdev, int bar,
+ const char *name);
+diff --git a/include/linux/pse-pd/pse.h b/include/linux/pse-pd/pse.h
+index 591a53e082e650..df1592022d938e 100644
+--- a/include/linux/pse-pd/pse.h
++++ b/include/linux/pse-pd/pse.h
+@@ -75,12 +75,8 @@ struct pse_control_status {
+ * @pi_disable: Configure the PSE PI as disabled.
+ * @pi_get_voltage: Return voltage similarly to get_voltage regulator
+ * callback.
+- * @pi_get_current_limit: Get the configured current limit similarly to
+- * get_current_limit regulator callback.
+- * @pi_set_current_limit: Configure the current limit similarly to
+- * set_current_limit regulator callback.
+- * Should not return an error in case of MAX_PI_CURRENT
+- * current value set.
++ * @pi_get_pw_limit: Get the configured power limit of the PSE PI.
++ * @pi_set_pw_limit: Configure the power limit of the PSE PI.
+ */
+ struct pse_controller_ops {
+ int (*ethtool_get_status)(struct pse_controller_dev *pcdev,
+@@ -91,10 +87,10 @@ struct pse_controller_ops {
+ int (*pi_enable)(struct pse_controller_dev *pcdev, int id);
+ int (*pi_disable)(struct pse_controller_dev *pcdev, int id);
+ int (*pi_get_voltage)(struct pse_controller_dev *pcdev, int id);
+- int (*pi_get_current_limit)(struct pse_controller_dev *pcdev,
+- int id);
+- int (*pi_set_current_limit)(struct pse_controller_dev *pcdev,
+- int id, int max_uA);
++ int (*pi_get_pw_limit)(struct pse_controller_dev *pcdev,
++ int id);
++ int (*pi_set_pw_limit)(struct pse_controller_dev *pcdev,
++ int id, int max_mW);
+ };
+
+ struct module;
+diff --git a/include/linux/serio.h b/include/linux/serio.h
+index bf2191f2535093..69a47674af653c 100644
+--- a/include/linux/serio.h
++++ b/include/linux/serio.h
+@@ -6,6 +6,7 @@
+ #define _SERIO_H
+
+
++#include <linux/cleanup.h>
+ #include <linux/types.h>
+ #include <linux/interrupt.h>
+ #include <linux/list.h>
+@@ -161,4 +162,6 @@ static inline void serio_continue_rx(struct serio *serio)
+ spin_unlock_irq(&serio->lock);
+ }
+
++DEFINE_GUARD(serio_pause_rx, struct serio *, serio_pause_rx(_T), serio_continue_rx(_T))
++
+ #endif
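
DEFINE_GUARD() comes from <linux/cleanup.h>; it wraps the pause/continue pair so a caller can pause RX for the remainder of a scope without manually pairing the calls. A sketch of how a driver might use the new guard (the function itself is hypothetical):

	static void example_send_command(struct serio *serio, unsigned char cmd)
	{
		/* RX paused here; serio_continue_rx() runs on every return path. */
		guard(serio_pause_rx)(serio);

		if (!serio->drv)
			return;
		serio_write(serio, cmd);
	}
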
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 2cbe0c22a32f3c..0b9095a281b898 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -91,6 +91,8 @@ struct sk_psock {
+ struct sk_psock_progs progs;
+ #if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
+ struct strparser strp;
++ u32 copied_seq;
++ u32 ingress_bytes;
+ #endif
+ struct sk_buff_head ingress_skb;
+ struct list_head ingress_msg;
+diff --git a/include/net/gro.h b/include/net/gro.h
+index b9b58c1f8d190b..7b548f91754bf3 100644
+--- a/include/net/gro.h
++++ b/include/net/gro.h
+@@ -11,6 +11,9 @@
+ #include <net/udp.h>
+ #include <net/hotdata.h>
+
++/* This should be increased if a protocol with a bigger head is added. */
++#define GRO_MAX_HEAD (MAX_HEADER + 128)
++
+ struct napi_gro_cb {
+ union {
+ struct {
+diff --git a/include/net/strparser.h b/include/net/strparser.h
+index 41e2ce9e9e10ff..0a83010b3a64a9 100644
+--- a/include/net/strparser.h
++++ b/include/net/strparser.h
+@@ -43,6 +43,8 @@ struct strparser;
+ struct strp_callbacks {
+ int (*parse_msg)(struct strparser *strp, struct sk_buff *skb);
+ void (*rcv_msg)(struct strparser *strp, struct sk_buff *skb);
++ int (*read_sock)(struct strparser *strp, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor);
+ int (*read_sock_done)(struct strparser *strp, int err);
+ void (*abort_parser)(struct strparser *strp, int err);
+ void (*lock)(struct strparser *strp);
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index d1948d357dade0..3255a199ef60d5 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -41,6 +41,7 @@
+ #include <net/inet_ecn.h>
+ #include <net/dst.h>
+ #include <net/mptcp.h>
++#include <net/xfrm.h>
+
+ #include <linux/seq_file.h>
+ #include <linux/memcontrol.h>
+@@ -683,6 +684,19 @@ void tcp_fin(struct sock *sk);
+ void tcp_check_space(struct sock *sk);
+ void tcp_sack_compress_send_ack(struct sock *sk);
+
++static inline void tcp_cleanup_skb(struct sk_buff *skb)
++{
++ skb_dst_drop(skb);
++ secpath_reset(skb);
++}
++
++static inline void tcp_add_receive_queue(struct sock *sk, struct sk_buff *skb)
++{
++ DEBUG_NET_WARN_ON_ONCE(skb_dst(skb));
++ DEBUG_NET_WARN_ON_ONCE(secpath_exists(skb));
++ __skb_queue_tail(&sk->sk_receive_queue, skb);
++}
++
+ /* tcp_timer.c */
+ void tcp_init_xmit_timers(struct sock *);
+ static inline void tcp_clear_xmit_timers(struct sock *sk)
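
tcp_cleanup_skb() and tcp_add_receive_queue() are meant to be used as a pair: per-packet dst and xfrm state is dropped before the skb can sit on sk_receive_queue, and the queueing helper warns if any such state remains. A hypothetical receive-path excerpt:

	static void example_queue_in_order(struct sock *sk, struct sk_buff *skb)
	{
		tcp_cleanup_skb(skb);		/* drop dst and secpath references */
		tcp_add_receive_queue(sk, skb);	/* warns if any state remains */
		sk->sk_data_ready(sk);
	}
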
+@@ -729,6 +743,9 @@ void tcp_get_info(struct sock *, struct tcp_info *);
+ /* Read 'sendfile()'-style from a TCP socket */
+ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
+ sk_read_actor_t recv_actor);
++int tcp_read_sock_noack(struct sock *sk, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor, bool noack,
++ u32 *copied_seq);
+ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor);
+ struct sk_buff *tcp_recv_skb(struct sock *sk, u32 seq, u32 *off);
+ void tcp_read_done(struct sock *sk, size_t len);
+@@ -2595,6 +2612,11 @@ struct sk_psock;
+ #ifdef CONFIG_BPF_SYSCALL
+ int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore);
+ void tcp_bpf_clone(const struct sock *sk, struct sock *newsk);
++#ifdef CONFIG_BPF_STREAM_PARSER
++struct strparser;
++int tcp_bpf_strp_read_sock(struct strparser *strp, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor);
++#endif /* CONFIG_BPF_STREAM_PARSER */
+ #endif /* CONFIG_BPF_SYSCALL */
+
+ #ifdef CONFIG_INET
+diff --git a/include/uapi/drm/xe_drm.h b/include/uapi/drm/xe_drm.h
+index c4182e95a61955..4a8a4a63e99ca8 100644
+--- a/include/uapi/drm/xe_drm.h
++++ b/include/uapi/drm/xe_drm.h
+@@ -1485,6 +1485,7 @@ struct drm_xe_oa_unit {
+ /** @capabilities: OA capabilities bit-mask */
+ __u64 capabilities;
+ #define DRM_XE_OA_CAPS_BASE (1 << 0)
++#define DRM_XE_OA_CAPS_SYNCS (1 << 1)
+
+ /** @oa_timestamp_freq: OA timestamp freq */
+ __u64 oa_timestamp_freq;
+@@ -1634,6 +1635,22 @@ enum drm_xe_oa_property_id {
+ * to be disabled for the stream exec queue.
+ */
+ DRM_XE_OA_PROPERTY_NO_PREEMPT,
++
++ /**
++ * @DRM_XE_OA_PROPERTY_NUM_SYNCS: Number of syncs in the sync array
++ * specified in @DRM_XE_OA_PROPERTY_SYNCS
++ */
++ DRM_XE_OA_PROPERTY_NUM_SYNCS,
++
++ /**
++ * @DRM_XE_OA_PROPERTY_SYNCS: Pointer to struct @drm_xe_sync array
++ * with array size specified via @DRM_XE_OA_PROPERTY_NUM_SYNCS. OA
++ * configuration will wait till input fences signal. Output fences
++ * will signal after the new OA configuration takes effect. For
++ * @DRM_XE_SYNC_TYPE_USER_FENCE, @addr is a user pointer, similar
++ * to the VM bind case.
++ */
++ DRM_XE_OA_PROPERTY_SYNCS,
+ };
+
+ /**
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index 21f1bcba2f52b5..cf28d29fffbf0e 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -2053,6 +2053,8 @@ static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req,
+ req->opcode = 0;
+ return io_init_fail_req(req, -EINVAL);
+ }
++ opcode = array_index_nospec(opcode, IORING_OP_LAST);
++
+ def = &io_issue_defs[opcode];
+ if (unlikely(sqe_flags & ~SQE_COMMON_FLAGS)) {
+ /* enforce forwards compatibility on users */
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 39ad25d16ed404..6abc495602a4e9 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -862,7 +862,15 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
+ if (unlikely(ret))
+ return ret;
+
+- ret = io_iter_do_read(rw, &io->iter);
++ if (unlikely(req->opcode == IORING_OP_READ_MULTISHOT)) {
++ void *cb_copy = rw->kiocb.ki_complete;
++
++ rw->kiocb.ki_complete = NULL;
++ ret = io_iter_do_read(rw, &io->iter);
++ rw->kiocb.ki_complete = cb_copy;
++ } else {
++ ret = io_iter_do_read(rw, &io->iter);
++ }
+
+ /*
+ * Some file systems like to return -EOPNOTSUPP for an IOCB_NOWAIT
+@@ -887,7 +895,8 @@ static int __io_read(struct io_kiocb *req, unsigned int issue_flags)
+ } else if (ret == -EIOCBQUEUED) {
+ return IOU_ISSUE_SKIP_COMPLETE;
+ } else if (ret == req->cqe.res || ret <= 0 || !force_nonblock ||
+- (req->flags & REQ_F_NOWAIT) || !need_complete_io(req)) {
++ (req->flags & REQ_F_NOWAIT) || !need_complete_io(req) ||
++ (issue_flags & IO_URING_F_MULTISHOT)) {
+ /* read all, failed, already did sync or don't want to retry */
+ goto done;
+ }
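
The multishot branch above works by hiding the kiocb's completion callback for one call: with ki_complete NULL, lower layers treat the kiocb as synchronous, and the saved pointer is restored for later retries. A generic sketch of the save/clear/restore pattern (the helper name is hypothetical):

	static ssize_t read_sync_once(struct kiocb *kiocb, struct iov_iter *iter,
				      ssize_t (*do_read)(struct kiocb *, struct iov_iter *))
	{
		void (*saved)(struct kiocb *, long) = kiocb->ki_complete;
		ssize_t ret;

		kiocb->ki_complete = NULL;	/* look synchronous for this call */
		ret = do_read(kiocb, iter);
		kiocb->ki_complete = saved;	/* restore for later retries */
		return ret;
	}
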
+diff --git a/kernel/acct.c b/kernel/acct.c
+index 179848ad33e978..d9d55fa4d01a71 100644
+--- a/kernel/acct.c
++++ b/kernel/acct.c
+@@ -103,48 +103,50 @@ struct bsd_acct_struct {
+ atomic_long_t count;
+ struct rcu_head rcu;
+ struct mutex lock;
+- int active;
++ bool active;
++ bool check_space;
+ unsigned long needcheck;
+ struct file *file;
+ struct pid_namespace *ns;
+ struct work_struct work;
+ struct completion done;
++ acct_t ac;
+ };
+
+-static void do_acct_process(struct bsd_acct_struct *acct);
++static void fill_ac(struct bsd_acct_struct *acct);
++static void acct_write_process(struct bsd_acct_struct *acct);
+
+ /*
+ * Check the amount of free space and suspend/resume accordingly.
+ */
+-static int check_free_space(struct bsd_acct_struct *acct)
++static bool check_free_space(struct bsd_acct_struct *acct)
+ {
+ struct kstatfs sbuf;
+
+- if (time_is_after_jiffies(acct->needcheck))
+- goto out;
++ if (!acct->check_space)
++ return acct->active;
+
+ /* May block */
+ if (vfs_statfs(&acct->file->f_path, &sbuf))
+- goto out;
++ return acct->active;
+
+ if (acct->active) {
+ u64 suspend = sbuf.f_blocks * SUSPEND;
+ do_div(suspend, 100);
+ if (sbuf.f_bavail <= suspend) {
+- acct->active = 0;
++ acct->active = false;
+ pr_info("Process accounting paused\n");
+ }
+ } else {
+ u64 resume = sbuf.f_blocks * RESUME;
+ do_div(resume, 100);
+ if (sbuf.f_bavail >= resume) {
+- acct->active = 1;
++ acct->active = true;
+ pr_info("Process accounting resumed\n");
+ }
+ }
+
+ acct->needcheck = jiffies + ACCT_TIMEOUT*HZ;
+-out:
+ return acct->active;
+ }
+
+@@ -189,7 +191,11 @@ static void acct_pin_kill(struct fs_pin *pin)
+ {
+ struct bsd_acct_struct *acct = to_acct(pin);
+ mutex_lock(&acct->lock);
+- do_acct_process(acct);
++ /*
++ * Fill the accounting struct with the exiting task's info
++ * before punting to the workqueue.
++ */
++ fill_ac(acct);
+ schedule_work(&acct->work);
+ wait_for_completion(&acct->done);
+ cmpxchg(&acct->ns->bacct, pin, NULL);
+@@ -202,6 +208,9 @@ static void close_work(struct work_struct *work)
+ {
+ struct bsd_acct_struct *acct = container_of(work, struct bsd_acct_struct, work);
+ struct file *file = acct->file;
++
++ /* We were fired by acct_pin_kill() which holds acct->lock. */
++ acct_write_process(acct);
+ if (file->f_op->flush)
+ file->f_op->flush(file, NULL);
+ __fput_sync(file);
+@@ -234,6 +243,20 @@ static int acct_on(struct filename *pathname)
+ return -EACCES;
+ }
+
++ /* Exclude kernel internal filesystems. */
++ if (file_inode(file)->i_sb->s_flags & (SB_NOUSER | SB_KERNMOUNT)) {
++ kfree(acct);
++ filp_close(file, NULL);
++ return -EINVAL;
++ }
++
++ /* Exclude procfs and sysfs. */
++ if (file_inode(file)->i_sb->s_iflags & SB_I_USERNS_VISIBLE) {
++ kfree(acct);
++ filp_close(file, NULL);
++ return -EINVAL;
++ }
++
+ if (!(file->f_mode & FMODE_CAN_WRITE)) {
+ kfree(acct);
+ filp_close(file, NULL);
+@@ -430,13 +453,27 @@ static u32 encode_float(u64 value)
+ * do_exit() or when switching to a different output file.
+ */
+
+-static void fill_ac(acct_t *ac)
++static void fill_ac(struct bsd_acct_struct *acct)
+ {
+ struct pacct_struct *pacct = ¤t->signal->pacct;
++ struct file *file = acct->file;
++ acct_t *ac = &acct->ac;
+ u64 elapsed, run_time;
+ time64_t btime;
+ struct tty_struct *tty;
+
++ lockdep_assert_held(&acct->lock);
++
++ if (time_is_after_jiffies(acct->needcheck)) {
++ acct->check_space = false;
++
++ /* Don't fill in @ac if nothing will be written. */
++ if (!acct->active)
++ return;
++ } else {
++ acct->check_space = true;
++ }
++
+ /*
+ * Fill the accounting struct with the needed info as recorded
+ * by the different kernel functions.
+@@ -484,64 +521,61 @@ static void fill_ac(acct_t *ac)
+ ac->ac_majflt = encode_comp_t(pacct->ac_majflt);
+ ac->ac_exitcode = pacct->ac_exitcode;
+ spin_unlock_irq(¤t->sighand->siglock);
+-}
+-/*
+- * do_acct_process does all actual work. Caller holds the reference to file.
+- */
+-static void do_acct_process(struct bsd_acct_struct *acct)
+-{
+- acct_t ac;
+- unsigned long flim;
+- const struct cred *orig_cred;
+- struct file *file = acct->file;
+
+- /*
+- * Accounting records are not subject to resource limits.
+- */
+- flim = rlimit(RLIMIT_FSIZE);
+- current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY;
+- /* Perform file operations on behalf of whoever enabled accounting */
+- orig_cred = override_creds(file->f_cred);
+-
+- /*
+- * First check to see if there is enough free_space to continue
+- * the process accounting system.
+- */
+- if (!check_free_space(acct))
+- goto out;
+-
+- fill_ac(&ac);
+ /* we really need to bite the bullet and change layout */
+- ac.ac_uid = from_kuid_munged(file->f_cred->user_ns, orig_cred->uid);
+- ac.ac_gid = from_kgid_munged(file->f_cred->user_ns, orig_cred->gid);
++ ac->ac_uid = from_kuid_munged(file->f_cred->user_ns, current_uid());
++ ac->ac_gid = from_kgid_munged(file->f_cred->user_ns, current_gid());
+ #if ACCT_VERSION == 1 || ACCT_VERSION == 2
+ /* backward-compatible 16 bit fields */
+- ac.ac_uid16 = ac.ac_uid;
+- ac.ac_gid16 = ac.ac_gid;
++ ac->ac_uid16 = ac->ac_uid;
++ ac->ac_gid16 = ac->ac_gid;
+ #elif ACCT_VERSION == 3
+ {
+ struct pid_namespace *ns = acct->ns;
+
+- ac.ac_pid = task_tgid_nr_ns(current, ns);
++ ac->ac_pid = task_tgid_nr_ns(current, ns);
+ rcu_read_lock();
+- ac.ac_ppid = task_tgid_nr_ns(rcu_dereference(current->real_parent),
+- ns);
++ ac->ac_ppid = task_tgid_nr_ns(rcu_dereference(current->real_parent), ns);
+ rcu_read_unlock();
+ }
+ #endif
++}
++
++static void acct_write_process(struct bsd_acct_struct *acct)
++{
++ struct file *file = acct->file;
++ const struct cred *cred;
++ acct_t *ac = &acct->ac;
++
++ /* Perform file operations on behalf of whoever enabled accounting */
++ cred = override_creds(file->f_cred);
++
+ /*
+- * Get freeze protection. If the fs is frozen, just skip the write
+- * as we could deadlock the system otherwise.
++ * First check to see if there is enough free_space to continue
++ * the process accounting system. Then get freeze protection. If
++ * the fs is frozen, just skip the write as we could deadlock
++ * the system otherwise.
+ */
+- if (file_start_write_trylock(file)) {
++ if (check_free_space(acct) && file_start_write_trylock(file)) {
+ /* it's been opened O_APPEND, so position is irrelevant */
+ loff_t pos = 0;
+- __kernel_write(file, &ac, sizeof(acct_t), &pos);
++ __kernel_write(file, ac, sizeof(acct_t), &pos);
+ file_end_write(file);
+ }
+-out:
++
++ revert_creds(cred);
++}
++
++static void do_acct_process(struct bsd_acct_struct *acct)
++{
++ unsigned long flim;
++
++ /* Accounting records are not subject to resource limits. */
++ flim = rlimit(RLIMIT_FSIZE);
++ current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY;
++ fill_ac(acct);
++ acct_write_process(acct);
+ current->signal->rlim[RLIMIT_FSIZE].rlim_cur = flim;
+- revert_creds(orig_cred);
+ }
+
+ /**
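
check_free_space() implements hysteresis rather than a single threshold: accounting pauses once free space falls to the SUSPEND percentage and only resumes after climbing back to the higher RESUME percentage, so it cannot flap around one value. A compact sketch of the decision; the percentages here are illustrative (the kernel's are defined near the top of kernel/acct.c):

	#define SUSPEND_PCT	2	/* pause when <= 2% of blocks are free */
	#define RESUME_PCT	4	/* resume only once >= 4% are free */

	static bool accounting_active(bool active, u64 f_blocks, u64 f_bavail)
	{
		if (active && f_bavail <= f_blocks * SUSPEND_PCT / 100)
			return false;		/* paused */
		if (!active && f_bavail >= f_blocks * RESUME_PCT / 100)
			return true;		/* resumed */
		return active;			/* no change inside the band */
	}
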
+diff --git a/kernel/bpf/arena.c b/kernel/bpf/arena.c
+index 93e48c7cad4eff..8c775a1401d3ed 100644
+--- a/kernel/bpf/arena.c
++++ b/kernel/bpf/arena.c
+@@ -37,7 +37,7 @@
+ */
+
+ /* number of bytes addressable by LDX/STX insn with 16-bit 'off' field */
+-#define GUARD_SZ (1ull << sizeof_field(struct bpf_insn, off) * 8)
++#define GUARD_SZ round_up(1ull << sizeof_field(struct bpf_insn, off) * 8, PAGE_SIZE << 1)
+ #define KERN_VM_SZ (SZ_4G + GUARD_SZ)
+
+ struct bpf_arena {
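
The GUARD_SZ change is easiest to see with numbers: the insn off field is 16 bits, so 1 << 16 is 64 KiB, which on a 64 KiB-page system is only a single page and not a multiple of (PAGE_SIZE << 1). A small userspace illustration of the rounding (the page size is chosen to show the problem case):

	#include <stdio.h>

	#define ROUND_UP(x, y)	((((x) + (y) - 1) / (y)) * (y))

	int main(void)
	{
		unsigned long long off_span = 1ULL << 16;	/* 16-bit 'off' field */
		unsigned long long page_size = 64 * 1024;	/* e.g. 64 KiB pages */

		/* old guard: 65536; new guard: 131072, a multiple of two pages */
		printf("guard = %llu\n", ROUND_UP(off_span, 2 * page_size));
		return 0;
	}
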
+diff --git a/kernel/bpf/bpf_cgrp_storage.c b/kernel/bpf/bpf_cgrp_storage.c
+index 28efd0a3f2200c..6547fb7ac0dcb2 100644
+--- a/kernel/bpf/bpf_cgrp_storage.c
++++ b/kernel/bpf/bpf_cgrp_storage.c
+@@ -154,7 +154,7 @@ static struct bpf_map *cgroup_storage_map_alloc(union bpf_attr *attr)
+
+ static void cgroup_storage_map_free(struct bpf_map *map)
+ {
+- bpf_local_storage_map_free(map, &cgroup_cache, NULL);
++ bpf_local_storage_map_free(map, &cgroup_cache, &bpf_cgrp_storage_busy);
+ }
+
+ /* *gfp_flags* is a hidden argument provided by the verifier */
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index a44f4be592be79..2c54c148a94f30 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -6483,6 +6483,8 @@ static const struct bpf_raw_tp_null_args raw_tp_null_args[] = {
+ /* rxrpc */
+ { "rxrpc_recvdata", 0x1 },
+ { "rxrpc_resend", 0x10 },
++ /* skb */
++ { "kfree_skb", 0x1000 },
+ /* sunrpc */
+ { "xs_stream_read_data", 0x1 },
+ /* ... from xprt_cong_event event class */
+diff --git a/kernel/bpf/ringbuf.c b/kernel/bpf/ringbuf.c
+index e1cfe890e0be64..1499d8caa9a351 100644
+--- a/kernel/bpf/ringbuf.c
++++ b/kernel/bpf/ringbuf.c
+@@ -268,8 +268,6 @@ static int ringbuf_map_mmap_kern(struct bpf_map *map, struct vm_area_struct *vma
+ /* allow writable mapping for the consumer_pos only */
+ if (vma->vm_pgoff != 0 || vma->vm_end - vma->vm_start != PAGE_SIZE)
+ return -EPERM;
+- } else {
+- vm_flags_clear(vma, VM_MAYWRITE);
+ }
+ /* remap_vmalloc_range() checks size and offset constraints */
+ return remap_vmalloc_range(vma, rb_map->rb,
+@@ -289,8 +287,6 @@ static int ringbuf_map_mmap_user(struct bpf_map *map, struct vm_area_struct *vma
+ * position, and the ring buffer data itself.
+ */
+ return -EPERM;
+- } else {
+- vm_flags_clear(vma, VM_MAYWRITE);
+ }
+ /* remap_vmalloc_range() checks size and offset constraints */
+ return remap_vmalloc_range(vma, rb_map->rb, vma->vm_pgoff + RINGBUF_PGOFF);
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 368ae8d231d417..696e5a2cbea2e8 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -936,7 +936,7 @@ static const struct vm_operations_struct bpf_map_default_vmops = {
+ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
+ {
+ struct bpf_map *map = filp->private_data;
+- int err;
++ int err = 0;
+
+ if (!map->ops->map_mmap || !IS_ERR_OR_NULL(map->record))
+ return -ENOTSUPP;
+@@ -960,24 +960,33 @@ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
+ err = -EACCES;
+ goto out;
+ }
++ bpf_map_write_active_inc(map);
+ }
++out:
++ mutex_unlock(&map->freeze_mutex);
++ if (err)
++ return err;
+
+ /* set default open/close callbacks */
+ vma->vm_ops = &bpf_map_default_vmops;
+ vma->vm_private_data = map;
+ vm_flags_clear(vma, VM_MAYEXEC);
++ /* If mapping is read-only, then disallow potentially re-mapping with
++ * PROT_WRITE by dropping VM_MAYWRITE flag. This VM_MAYWRITE clearing
++ * means that as far as BPF map's memory-mapped VMAs are concerned,
++ * VM_WRITE and VM_MAYWRITE are equivalent: if one of them is set,
++ * both should be set, so we can forget about VM_MAYWRITE and always
++ * check just VM_WRITE.
++ */
+ if (!(vma->vm_flags & VM_WRITE))
+- /* disallow re-mapping with PROT_WRITE */
+ vm_flags_clear(vma, VM_MAYWRITE);
+
+ err = map->ops->map_mmap(map, vma);
+- if (err)
+- goto out;
++ if (err) {
++ if (vma->vm_flags & VM_WRITE)
++ bpf_map_write_active_dec(map);
++ }
+
+- if (vma->vm_flags & VM_MAYWRITE)
+- bpf_map_write_active_inc(map);
+-out:
+- mutex_unlock(&map->freeze_mutex);
+ return err;
+ }
+
+@@ -1863,8 +1872,6 @@ int generic_map_update_batch(struct bpf_map *map, struct file *map_file,
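
After this rework the write-active counter is taken while freeze_mutex is held and keyed off VM_WRITE alone; clearing VM_MAYWRITE on read-only mappings guarantees mprotect() can never upgrade them to writable later, so VM_WRITE stays authoritative for the VMA's lifetime. A one-line sketch of the resulting invariant (the helper is hypothetical):

	/* With VM_MAYWRITE cleared for read-only maps, mprotect(PROT_WRITE)
	 * fails, so checking VM_WRITE alone is sufficient from here on. */
	static bool map_vma_is_writable(const struct vm_area_struct *vma)
	{
		return vma->vm_flags & VM_WRITE;
	}
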
+ return err;
+ }
+
+-#define MAP_LOOKUP_RETRIES 3
+-
+ int generic_map_lookup_batch(struct bpf_map *map,
+ const union bpf_attr *attr,
+ union bpf_attr __user *uattr)
+@@ -1874,8 +1881,8 @@ int generic_map_lookup_batch(struct bpf_map *map,
+ void __user *values = u64_to_user_ptr(attr->batch.values);
+ void __user *keys = u64_to_user_ptr(attr->batch.keys);
+ void *buf, *buf_prevkey, *prev_key, *key, *value;
+- int err, retry = MAP_LOOKUP_RETRIES;
+ u32 value_size, cp, max_count;
++ int err;
+
+ if (attr->batch.elem_flags & ~BPF_F_LOCK)
+ return -EINVAL;
+@@ -1921,14 +1928,8 @@ int generic_map_lookup_batch(struct bpf_map *map,
+ err = bpf_map_copy_value(map, key, value,
+ attr->batch.elem_flags);
+
+- if (err == -ENOENT) {
+- if (retry) {
+- retry--;
+- continue;
+- }
+- err = -EINTR;
+- break;
+- }
++ if (err == -ENOENT)
++ goto next_key;
+
+ if (err)
+ goto free_buf;
+@@ -1943,12 +1944,12 @@ int generic_map_lookup_batch(struct bpf_map *map,
+ goto free_buf;
+ }
+
++ cp++;
++next_key:
+ if (!prev_key)
+ prev_key = buf_prevkey;
+
+ swap(prev_key, key);
+- retry = MAP_LOOKUP_RETRIES;
+- cp++;
+ cond_resched();
+ }
+
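
The batch-lookup rework stops retrying a key whose element vanished between get-next-key and lookup; previously three failed retries surfaced as -EINTR under heavy update load. A compressed sketch of the new loop shape (all helpers are hypothetical stand-ins):

	static int batch_lookup_sketch(struct bpf_map *map, void *key, void *prev_key,
				       void *value, u32 max_count, u32 *copied)
	{
		u32 cp = 0;
		int err = 0;

		while (cp < max_count) {
			err = map_get_next_key(map, prev_key, key);
			if (err)
				break;			/* -ENOENT here: iteration done */
			err = map_copy_value(map, key, value);
			if (err && err != -ENOENT)
				return err;		/* real failure */
			if (!err) {
				emit_pair(key, value);	/* copy out to userspace */
				cp++;
			}
			swap(prev_key, key);		/* advance even on -ENOENT */
			cond_resched();
		}
		*copied = cp;
		return err == -ENOENT ? 0 : err;
	}
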
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 689f7e8f69f54d..aa57ae3eb1ff5e 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -2300,12 +2300,35 @@ static void move_remote_task_to_local_dsq(struct task_struct *p, u64 enq_flags,
+ *
+ * - The BPF scheduler is bypassed while the rq is offline and we can always say
+ * no to the BPF scheduler initiated migrations while offline.
++ *
++ * The caller must ensure that @p and @rq are on different CPUs.
+ */
+ static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq,
+ bool trigger_error)
+ {
+ int cpu = cpu_of(rq);
+
++ SCHED_WARN_ON(task_cpu(p) == cpu);
++
++ /*
++ * If @p has migration disabled, @p->cpus_ptr is updated to contain only
++ * the pinned CPU in migrate_disable_switch() while @p is being switched
++ * updated and thus another CPU may see @p on a DSQ in between, leading to
++ * updated and thus another CPU may see @p on a DSQ inbetween leading to
++ * @p passing the below task_allowed_on_cpu() check while migration is
++ * disabled.
++ *
++ * Test the migration disabled state first as the race window is narrow
++ * and the BPF scheduler failing to check migration disabled state can
++ * easily be masked if task_allowed_on_cpu() is done first.
++ */
++ if (unlikely(is_migration_disabled(p))) {
++ if (trigger_error)
++ scx_ops_error("SCX_DSQ_LOCAL[_ON] cannot move migration disabled %s[%d] from CPU %d to %d",
++ p->comm, p->pid, task_cpu(p), cpu);
++ return false;
++ }
++
+ /*
+ * We don't require the BPF scheduler to avoid dispatching to offline
+ * CPUs mostly for convenience but also because CPUs can go offline
+@@ -2314,14 +2337,11 @@ static bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *rq,
+ */
+ if (!task_allowed_on_cpu(p, cpu)) {
+ if (trigger_error)
+- scx_ops_error("SCX_DSQ_LOCAL[_ON] verdict target cpu %d not allowed for %s[%d]",
+- cpu_of(rq), p->comm, p->pid);
++ scx_ops_error("SCX_DSQ_LOCAL[_ON] target CPU %d not allowed for %s[%d]",
++ cpu, p->comm, p->pid);
+ return false;
+ }
+
+- if (unlikely(is_migration_disabled(p)))
+- return false;
+-
+ if (!scx_rq_online(rq))
+ return false;
+
+@@ -2397,6 +2417,74 @@ static inline bool task_can_run_on_remote_rq(struct task_struct *p, struct rq *r
+ static inline bool consume_remote_task(struct rq *this_rq, struct task_struct *p, struct scx_dispatch_q *dsq, struct rq *task_rq) { return false; }
+ #endif /* CONFIG_SMP */
+
++/**
++ * move_task_between_dsqs() - Move a task from one DSQ to another
++ * @p: target task
++ * @enq_flags: %SCX_ENQ_*
++ * @src_dsq: DSQ @p is currently on, must not be a local DSQ
++ * @dst_dsq: DSQ @p is being moved to, can be any DSQ
++ *
++ * Must be called with @p's task_rq and @src_dsq locked. If @dst_dsq is a local
++ * DSQ and @p is on a different CPU, @p will be migrated and thus its task_rq
++ * will change. As @p's task_rq is locked, this function doesn't need to use the
++ * holding_cpu mechanism.
++ *
++ * On return, @src_dsq is unlocked and only @p's new task_rq, which is the
++ * return value, is locked.
++ */
++static struct rq *move_task_between_dsqs(struct task_struct *p, u64 enq_flags,
++ struct scx_dispatch_q *src_dsq,
++ struct scx_dispatch_q *dst_dsq)
++{
++ struct rq *src_rq = task_rq(p), *dst_rq;
++
++ BUG_ON(src_dsq->id == SCX_DSQ_LOCAL);
++ lockdep_assert_held(&src_dsq->lock);
++ lockdep_assert_rq_held(src_rq);
++
++ if (dst_dsq->id == SCX_DSQ_LOCAL) {
++ dst_rq = container_of(dst_dsq, struct rq, scx.local_dsq);
++ if (src_rq != dst_rq &&
++ unlikely(!task_can_run_on_remote_rq(p, dst_rq, true))) {
++ dst_dsq = find_global_dsq(p);
++ dst_rq = src_rq;
++ }
++ } else {
++ /* no need to migrate if destination is a non-local DSQ */
++ dst_rq = src_rq;
++ }
++
++ /*
++ * Move @p into $dst_dsq. If $dst_dsq is the local DSQ of a different
++ * CPU, @p will be migrated.
++ */
++ if (dst_dsq->id == SCX_DSQ_LOCAL) {
++ /* @p is going from a non-local DSQ to a local DSQ */
++ if (src_rq == dst_rq) {
++ task_unlink_from_dsq(p, src_dsq);
++ move_local_task_to_local_dsq(p, enq_flags,
++ src_dsq, dst_rq);
++ raw_spin_unlock(&src_dsq->lock);
++ } else {
++ raw_spin_unlock(&src_dsq->lock);
++ move_remote_task_to_local_dsq(p, enq_flags,
++ src_rq, dst_rq);
++ }
++ } else {
++ /*
++ * @p is going from a non-local DSQ to a non-local DSQ. As
++ * $src_dsq is already locked, do an abbreviated dequeue.
++ */
++ task_unlink_from_dsq(p, src_dsq);
++ p->scx.dsq = NULL;
++ raw_spin_unlock(&src_dsq->lock);
++
++ dispatch_enqueue(dst_dsq, p, enq_flags);
++ }
++
++ return dst_rq;
++}
++
+ static bool consume_dispatch_q(struct rq *rq, struct scx_dispatch_q *dsq)
+ {
+ struct task_struct *p;
+@@ -2474,7 +2562,8 @@ static void dispatch_to_local_dsq(struct rq *rq, struct scx_dispatch_q *dst_dsq,
+ }
+
+ #ifdef CONFIG_SMP
+- if (unlikely(!task_can_run_on_remote_rq(p, dst_rq, true))) {
++ if (src_rq != dst_rq &&
++ unlikely(!task_can_run_on_remote_rq(p, dst_rq, true))) {
+ dispatch_enqueue(find_global_dsq(p), p,
+ enq_flags | SCX_ENQ_CLEAR_OPSS);
+ return;
+@@ -6134,7 +6223,7 @@ static bool scx_dispatch_from_dsq(struct bpf_iter_scx_dsq_kern *kit,
+ u64 enq_flags)
+ {
+ struct scx_dispatch_q *src_dsq = kit->dsq, *dst_dsq;
+- struct rq *this_rq, *src_rq, *dst_rq, *locked_rq;
++ struct rq *this_rq, *src_rq, *locked_rq;
+ bool dispatched = false;
+ bool in_balance;
+ unsigned long flags;
+@@ -6180,51 +6269,18 @@ static bool scx_dispatch_from_dsq(struct bpf_iter_scx_dsq_kern *kit,
+ /* @p is still on $src_dsq and stable, determine the destination */
+ dst_dsq = find_dsq_for_dispatch(this_rq, dsq_id, p);
+
+- if (dst_dsq->id == SCX_DSQ_LOCAL) {
+- dst_rq = container_of(dst_dsq, struct rq, scx.local_dsq);
+- if (!task_can_run_on_remote_rq(p, dst_rq, true)) {
+- dst_dsq = find_global_dsq(p);
+- dst_rq = src_rq;
+- }
+- } else {
+- /* no need to migrate if destination is a non-local DSQ */
+- dst_rq = src_rq;
+- }
+-
+ /*
+- * Move @p into $dst_dsq. If $dst_dsq is the local DSQ of a different
+- * CPU, @p will be migrated.
++ * Apply vtime and slice updates before moving so that the new time is
++ * visible before inserting into $dst_dsq. @p is still on $src_dsq but
++ * this is safe as we're locking it.
+ */
+- if (dst_dsq->id == SCX_DSQ_LOCAL) {
+- /* @p is going from a non-local DSQ to a local DSQ */
+- if (src_rq == dst_rq) {
+- task_unlink_from_dsq(p, src_dsq);
+- move_local_task_to_local_dsq(p, enq_flags,
+- src_dsq, dst_rq);
+- raw_spin_unlock(&src_dsq->lock);
+- } else {
+- raw_spin_unlock(&src_dsq->lock);
+- move_remote_task_to_local_dsq(p, enq_flags,
+- src_rq, dst_rq);
+- locked_rq = dst_rq;
+- }
+- } else {
+- /*
+- * @p is going from a non-local DSQ to a non-local DSQ. As
+- * $src_dsq is already locked, do an abbreviated dequeue.
+- */
+- task_unlink_from_dsq(p, src_dsq);
+- p->scx.dsq = NULL;
+- raw_spin_unlock(&src_dsq->lock);
+-
+- if (kit->cursor.flags & __SCX_DSQ_ITER_HAS_VTIME)
+- p->scx.dsq_vtime = kit->vtime;
+- dispatch_enqueue(dst_dsq, p, enq_flags);
+- }
+-
++ if (kit->cursor.flags & __SCX_DSQ_ITER_HAS_VTIME)
++ p->scx.dsq_vtime = kit->vtime;
+ if (kit->cursor.flags & __SCX_DSQ_ITER_HAS_SLICE)
+ p->scx.slice = kit->slice;
+
++ /* execute move */
++ locked_rq = move_task_between_dsqs(p, enq_flags, src_dsq, dst_dsq);
+ dispatched = true;
+ out:
+ if (in_balance) {
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index cd9dbfb3038330..71cc1bbfe9aa3e 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -3219,15 +3219,22 @@ static struct ftrace_hash *copy_hash(struct ftrace_hash *src)
+ * The filter_hash updates uses just the append_hash() function
+ * and the notrace_hash does not.
+ */
+-static int append_hash(struct ftrace_hash **hash, struct ftrace_hash *new_hash)
++static int append_hash(struct ftrace_hash **hash, struct ftrace_hash *new_hash,
++ int size_bits)
+ {
+ struct ftrace_func_entry *entry;
+ int size;
+ int i;
+
+- /* An empty hash does everything */
+- if (ftrace_hash_empty(*hash))
+- return 0;
++ if (*hash) {
++ /* An empty hash does everything */
++ if (ftrace_hash_empty(*hash))
++ return 0;
++ } else {
++ *hash = alloc_ftrace_hash(size_bits);
++ if (!*hash)
++ return -ENOMEM;
++ }
+
+ /* If new_hash has everything make hash have everything */
+ if (ftrace_hash_empty(new_hash)) {
+@@ -3291,16 +3298,18 @@ static int intersect_hash(struct ftrace_hash **hash, struct ftrace_hash *new_has
+ /* Return a new hash that has a union of all @ops->filter_hash entries */
+ static struct ftrace_hash *append_hashes(struct ftrace_ops *ops)
+ {
+- struct ftrace_hash *new_hash;
++ struct ftrace_hash *new_hash = NULL;
+ struct ftrace_ops *subops;
++ int size_bits;
+ int ret;
+
+- new_hash = alloc_ftrace_hash(ops->func_hash->filter_hash->size_bits);
+- if (!new_hash)
+- return NULL;
++ if (ops->func_hash->filter_hash)
++ size_bits = ops->func_hash->filter_hash->size_bits;
++ else
++ size_bits = FTRACE_HASH_DEFAULT_BITS;
+
+ list_for_each_entry(subops, &ops->subop_list, list) {
+- ret = append_hash(&new_hash, subops->func_hash->filter_hash);
++ ret = append_hash(&new_hash, subops->func_hash->filter_hash, size_bits);
+ if (ret < 0) {
+ free_ftrace_hash(new_hash);
+ return NULL;
+@@ -3309,7 +3318,8 @@ static struct ftrace_hash *append_hashes(struct ftrace_ops *ops)
+ if (ftrace_hash_empty(new_hash))
+ break;
+ }
+- return new_hash;
++ /* Can't return NULL as that means this failed */
++ return new_hash ? : EMPTY_HASH;
+ }
+
+ /* Make @ops trace evenything except what all its subops do not trace */
+@@ -3504,7 +3514,8 @@ int ftrace_startup_subops(struct ftrace_ops *ops, struct ftrace_ops *subops, int
+ filter_hash = alloc_and_copy_ftrace_hash(size_bits, ops->func_hash->filter_hash);
+ if (!filter_hash)
+ return -ENOMEM;
+- ret = append_hash(&filter_hash, subops->func_hash->filter_hash);
++ ret = append_hash(&filter_hash, subops->func_hash->filter_hash,
++ size_bits);
+ if (ret < 0) {
+ free_ftrace_hash(filter_hash);
+ return ret;
+@@ -5747,6 +5758,9 @@ __ftrace_match_addr(struct ftrace_hash *hash, unsigned long ip, int remove)
+ return -ENOENT;
+ free_hash_entry(hash, entry);
+ return 0;
++ } else if (__ftrace_lookup_ip(hash, ip) != NULL) {
++ /* Already exists */
++ return 0;
+ }
+
+ entry = add_hash_entry(hash, ip);
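
append_hash() now accepts a NULL *hash and allocates on first use, and append_hashes() returns EMPTY_HASH instead of NULL when no subops contributed entries, since a NULL return now unambiguously means failure (the "new_hash ? : EMPTY_HASH" form is the GNU a ?: b extension, equivalent to a ? a : b without re-evaluating a). A sketch of the lazy-allocation contract (simplified; error mapping is illustrative):

	static int append_entry_lazy(struct ftrace_hash **hash, unsigned long ip,
				     int size_bits)
	{
		if (!*hash) {
			*hash = alloc_ftrace_hash(size_bits);
			if (!*hash)
				return -ENOMEM;	/* distinct from EMPTY_HASH */
		}
		return add_hash_entry(*hash, ip) ? 0 : -ENOMEM;
	}
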
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index bfc4ac265c2c33..ffe1422ab03f88 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -26,6 +26,7 @@
+ #include <linux/hardirq.h>
+ #include <linux/linkage.h>
+ #include <linux/uaccess.h>
++#include <linux/cleanup.h>
+ #include <linux/vmalloc.h>
+ #include <linux/ftrace.h>
+ #include <linux/module.h>
+@@ -535,19 +536,16 @@ LIST_HEAD(ftrace_trace_arrays);
+ int trace_array_get(struct trace_array *this_tr)
+ {
+ struct trace_array *tr;
+- int ret = -ENODEV;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+ list_for_each_entry(tr, &ftrace_trace_arrays, list) {
+ if (tr == this_tr) {
+ tr->ref++;
+- ret = 0;
+- break;
++ return 0;
+ }
+ }
+- mutex_unlock(&trace_types_lock);
+
+- return ret;
++ return -ENODEV;
+ }
+
+ static void __trace_array_put(struct trace_array *this_tr)
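
guard(mutex)(&lock) from <linux/cleanup.h> releases the mutex when the variable it creates goes out of scope, which is what lets the rewritten trace_array_get() return directly from inside the list walk. The same shape in isolation (struct and names are hypothetical):

	struct thing { struct list_head node; };
	static DEFINE_MUTEX(thing_lock);

	static int thing_find(struct list_head *head, struct thing *want)
	{
		struct thing *t;

		guard(mutex)(&thing_lock);	/* dropped on every return below */
		list_for_each_entry(t, head, node)
			if (t == want)
				return 0;
		return -ENODEV;
	}
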
+@@ -1456,22 +1454,20 @@ EXPORT_SYMBOL_GPL(tracing_snapshot_alloc);
+ int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data,
+ cond_update_fn_t update)
+ {
+- struct cond_snapshot *cond_snapshot;
+- int ret = 0;
++ struct cond_snapshot *cond_snapshot __free(kfree) =
++ kzalloc(sizeof(*cond_snapshot), GFP_KERNEL);
++ int ret;
+
+- cond_snapshot = kzalloc(sizeof(*cond_snapshot), GFP_KERNEL);
+ if (!cond_snapshot)
+ return -ENOMEM;
+
+ cond_snapshot->cond_data = cond_data;
+ cond_snapshot->update = update;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+
+- if (tr->current_trace->use_max_tr) {
+- ret = -EBUSY;
+- goto fail_unlock;
+- }
++ if (tr->current_trace->use_max_tr)
++ return -EBUSY;
+
+ /*
+ * The cond_snapshot can only change to NULL without the
+@@ -1481,29 +1477,20 @@ int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data,
+ * do safely with only holding the trace_types_lock and not
+ * having to take the max_lock.
+ */
+- if (tr->cond_snapshot) {
+- ret = -EBUSY;
+- goto fail_unlock;
+- }
++ if (tr->cond_snapshot)
++ return -EBUSY;
+
+ ret = tracing_arm_snapshot_locked(tr);
+ if (ret)
+- goto fail_unlock;
++ return ret;
+
+ local_irq_disable();
+ arch_spin_lock(&tr->max_lock);
+- tr->cond_snapshot = cond_snapshot;
++ tr->cond_snapshot = no_free_ptr(cond_snapshot);
+ arch_spin_unlock(&tr->max_lock);
+ local_irq_enable();
+
+- mutex_unlock(&trace_types_lock);
+-
+- return ret;
+-
+- fail_unlock:
+- mutex_unlock(&trace_types_lock);
+- kfree(cond_snapshot);
+- return ret;
++ return 0;
+ }
+ EXPORT_SYMBOL_GPL(tracing_snapshot_cond_enable);
+
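
The __free(kfree) annotation arms an automatic kfree() for every exit path, and no_free_ptr() disarms it exactly once on the success path, which is why the rewritten function above no longer needs a fail_unlock label. The pattern in isolation (the wrapper function is hypothetical):

	static struct cond_snapshot *cond_snapshot_alloc(void *data,
							 cond_update_fn_t update)
	{
		struct cond_snapshot *cs __free(kfree) =
			kzalloc(sizeof(*cs), GFP_KERNEL);

		if (!cs)
			return NULL;		/* nothing to free */
		cs->cond_data = data;
		cs->update = update;
		return no_free_ptr(cs);		/* transfer ownership to the caller */
	}
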
+@@ -2216,10 +2203,10 @@ static __init int init_trace_selftests(void)
+
+ selftests_can_run = true;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+
+ if (list_empty(&postponed_selftests))
+- goto out;
++ return 0;
+
+ pr_info("Running postponed tracer tests:\n");
+
+@@ -2248,9 +2235,6 @@ static __init int init_trace_selftests(void)
+ }
+ tracing_selftest_running = false;
+
+- out:
+- mutex_unlock(&trace_types_lock);
+-
+ return 0;
+ }
+ core_initcall(init_trace_selftests);
+@@ -2818,7 +2802,7 @@ int tracepoint_printk_sysctl(const struct ctl_table *table, int write,
+ int save_tracepoint_printk;
+ int ret;
+
+- mutex_lock(&tracepoint_printk_mutex);
++ guard(mutex)(&tracepoint_printk_mutex);
+ save_tracepoint_printk = tracepoint_printk;
+
+ ret = proc_dointvec(table, write, buffer, lenp, ppos);
+@@ -2831,16 +2815,13 @@ int tracepoint_printk_sysctl(const struct ctl_table *table, int write,
+ tracepoint_printk = 0;
+
+ if (save_tracepoint_printk == tracepoint_printk)
+- goto out;
++ return ret;
+
+ if (tracepoint_printk)
+ static_key_enable(&tracepoint_printk_key.key);
+ else
+ static_key_disable(&tracepoint_printk_key.key);
+
+- out:
+- mutex_unlock(&tracepoint_printk_mutex);
+-
+ return ret;
+ }
+
+@@ -5150,7 +5131,8 @@ static int tracing_trace_options_show(struct seq_file *m, void *v)
+ u32 tracer_flags;
+ int i;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
++
+ tracer_flags = tr->current_trace->flags->val;
+ trace_opts = tr->current_trace->flags->opts;
+
+@@ -5167,7 +5149,6 @@ static int tracing_trace_options_show(struct seq_file *m, void *v)
+ else
+ seq_printf(m, "no%s\n", trace_opts[i].name);
+ }
+- mutex_unlock(&trace_types_lock);
+
+ return 0;
+ }
+@@ -5832,7 +5813,7 @@ trace_insert_eval_map_file(struct module *mod, struct trace_eval_map **start,
+ return;
+ }
+
+- mutex_lock(&trace_eval_mutex);
++ guard(mutex)(&trace_eval_mutex);
+
+ if (!trace_eval_maps)
+ trace_eval_maps = map_array;
+@@ -5856,8 +5837,6 @@ trace_insert_eval_map_file(struct module *mod, struct trace_eval_map **start,
+ map_array++;
+ }
+ memset(map_array, 0, sizeof(*map_array));
+-
+- mutex_unlock(&trace_eval_mutex);
+ }
+
+ static void trace_create_eval_file(struct dentry *d_tracer)
+@@ -6019,26 +5998,15 @@ static int __tracing_resize_ring_buffer(struct trace_array *tr,
+ ssize_t tracing_resize_ring_buffer(struct trace_array *tr,
+ unsigned long size, int cpu_id)
+ {
+- int ret;
+-
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+
+ if (cpu_id != RING_BUFFER_ALL_CPUS) {
+ /* make sure, this cpu is enabled in the mask */
+- if (!cpumask_test_cpu(cpu_id, tracing_buffer_mask)) {
+- ret = -EINVAL;
+- goto out;
+- }
++ if (!cpumask_test_cpu(cpu_id, tracing_buffer_mask))
++ return -EINVAL;
+ }
+
+- ret = __tracing_resize_ring_buffer(tr, size, cpu_id);
+- if (ret < 0)
+- ret = -ENOMEM;
+-
+-out:
+- mutex_unlock(&trace_types_lock);
+-
+- return ret;
++ return __tracing_resize_ring_buffer(tr, size, cpu_id);
+ }
+
+ static void update_last_data(struct trace_array *tr)
+@@ -6129,9 +6097,9 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ #ifdef CONFIG_TRACER_MAX_TRACE
+ bool had_max_tr;
+ #endif
+- int ret = 0;
++ int ret;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+
+ update_last_data(tr);
+
+@@ -6139,7 +6107,7 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ ret = __tracing_resize_ring_buffer(tr, trace_buf_size,
+ RING_BUFFER_ALL_CPUS);
+ if (ret < 0)
+- goto out;
++ return ret;
+ ret = 0;
+ }
+
+@@ -6147,43 +6115,37 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ if (strcmp(t->name, buf) == 0)
+ break;
+ }
+- if (!t) {
+- ret = -EINVAL;
+- goto out;
+- }
++ if (!t)
++ return -EINVAL;
++
+ if (t == tr->current_trace)
+- goto out;
++ return 0;
+
+ #ifdef CONFIG_TRACER_SNAPSHOT
+ if (t->use_max_tr) {
+ local_irq_disable();
+ arch_spin_lock(&tr->max_lock);
+- if (tr->cond_snapshot)
+- ret = -EBUSY;
++ ret = tr->cond_snapshot ? -EBUSY : 0;
+ arch_spin_unlock(&tr->max_lock);
+ local_irq_enable();
+ if (ret)
+- goto out;
++ return ret;
+ }
+ #endif
+ /* Some tracers won't work on kernel command line */
+ if (system_state < SYSTEM_RUNNING && t->noboot) {
+ pr_warn("Tracer '%s' is not allowed on command line, ignored\n",
+ t->name);
+- goto out;
++ return 0;
+ }
+
+ /* Some tracers are only allowed for the top level buffer */
+- if (!trace_ok_for_array(t, tr)) {
+- ret = -EINVAL;
+- goto out;
+- }
++ if (!trace_ok_for_array(t, tr))
++ return -EINVAL;
+
+ /* If trace pipe files are being read, we can't change the tracer */
+- if (tr->trace_ref) {
+- ret = -EBUSY;
+- goto out;
+- }
++ if (tr->trace_ref)
++ return -EBUSY;
+
+ trace_branch_disable();
+
+@@ -6214,7 +6176,7 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ if (!had_max_tr && t->use_max_tr) {
+ ret = tracing_arm_snapshot_locked(tr);
+ if (ret)
+- goto out;
++ return ret;
+ }
+ #else
+ tr->current_trace = &nop_trace;
+@@ -6227,17 +6189,15 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
+ if (t->use_max_tr)
+ tracing_disarm_snapshot(tr);
+ #endif
+- goto out;
++ return ret;
+ }
+ }
+
+ tr->current_trace = t;
+ tr->current_trace->enabled++;
+ trace_branch_enable(tr);
+- out:
+- mutex_unlock(&trace_types_lock);
+
+- return ret;
++ return 0;
+ }
+
+ static ssize_t
+@@ -6315,22 +6275,18 @@ tracing_thresh_write(struct file *filp, const char __user *ubuf,
+ struct trace_array *tr = filp->private_data;
+ int ret;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+ ret = tracing_nsecs_write(&tracing_thresh, ubuf, cnt, ppos);
+ if (ret < 0)
+- goto out;
++ return ret;
+
+ if (tr->current_trace->update_thresh) {
+ ret = tr->current_trace->update_thresh(tr);
+ if (ret < 0)
+- goto out;
++ return ret;
+ }
+
+- ret = cnt;
+-out:
+- mutex_unlock(&trace_types_lock);
+-
+- return ret;
++ return cnt;
+ }
+
+ #ifdef CONFIG_TRACER_MAX_TRACE
+@@ -6549,31 +6505,29 @@ tracing_read_pipe(struct file *filp, char __user *ubuf,
+ * This is just a matter of traces coherency, the ring buffer itself
+ * is protected.
+ */
+- mutex_lock(&iter->mutex);
++ guard(mutex)(&iter->mutex);
+
+ /* return any leftover data */
+ sret = trace_seq_to_user(&iter->seq, ubuf, cnt);
+ if (sret != -EBUSY)
+- goto out;
++ return sret;
+
+ trace_seq_init(&iter->seq);
+
+ if (iter->trace->read) {
+ sret = iter->trace->read(iter, filp, ubuf, cnt, ppos);
+ if (sret)
+- goto out;
++ return sret;
+ }
+
+ waitagain:
+ sret = tracing_wait_pipe(filp);
+ if (sret <= 0)
+- goto out;
++ return sret;
+
+ /* stop when tracing is finished */
+- if (trace_empty(iter)) {
+- sret = 0;
+- goto out;
+- }
++ if (trace_empty(iter))
++ return 0;
+
+ if (cnt >= TRACE_SEQ_BUFFER_SIZE)
+ cnt = TRACE_SEQ_BUFFER_SIZE - 1;
+@@ -6637,9 +6591,6 @@ tracing_read_pipe(struct file *filp, char __user *ubuf,
+ if (sret == -EBUSY)
+ goto waitagain;
+
+-out:
+- mutex_unlock(&iter->mutex);
+-
+ return sret;
+ }
+
+@@ -7231,25 +7182,19 @@ u64 tracing_event_time_stamp(struct trace_buffer *buffer, struct ring_buffer_eve
+ */
+ int tracing_set_filter_buffering(struct trace_array *tr, bool set)
+ {
+- int ret = 0;
+-
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+
+ if (set && tr->no_filter_buffering_ref++)
+- goto out;
++ return 0;
+
+ if (!set) {
+- if (WARN_ON_ONCE(!tr->no_filter_buffering_ref)) {
+- ret = -EINVAL;
+- goto out;
+- }
++ if (WARN_ON_ONCE(!tr->no_filter_buffering_ref))
++ return -EINVAL;
+
+ --tr->no_filter_buffering_ref;
+ }
+- out:
+- mutex_unlock(&trace_types_lock);
+
+- return ret;
++ return 0;
+ }
+
+ struct ftrace_buffer_info {
+@@ -7325,12 +7270,10 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ if (ret)
+ return ret;
+
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&trace_types_lock);
+
+- if (tr->current_trace->use_max_tr) {
+- ret = -EBUSY;
+- goto out;
+- }
++ if (tr->current_trace->use_max_tr)
++ return -EBUSY;
+
+ local_irq_disable();
+ arch_spin_lock(&tr->max_lock);
+@@ -7339,24 +7282,20 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ arch_spin_unlock(&tr->max_lock);
+ local_irq_enable();
+ if (ret)
+- goto out;
++ return ret;
+
+ switch (val) {
+ case 0:
+- if (iter->cpu_file != RING_BUFFER_ALL_CPUS) {
+- ret = -EINVAL;
+- break;
+- }
++ if (iter->cpu_file != RING_BUFFER_ALL_CPUS)
++ return -EINVAL;
+ if (tr->allocated_snapshot)
+ free_snapshot(tr);
+ break;
+ case 1:
+ /* Only allow per-cpu swap if the ring buffer supports it */
+ #ifndef CONFIG_RING_BUFFER_ALLOW_SWAP
+- if (iter->cpu_file != RING_BUFFER_ALL_CPUS) {
+- ret = -EINVAL;
+- break;
+- }
++ if (iter->cpu_file != RING_BUFFER_ALL_CPUS)
++ return -EINVAL;
+ #endif
+ if (tr->allocated_snapshot)
+ ret = resize_buffer_duplicate_size(&tr->max_buffer,
+@@ -7364,7 +7303,7 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+
+ ret = tracing_arm_snapshot_locked(tr);
+ if (ret)
+- break;
++ return ret;
+
+ /* Now, we're going to swap */
+ if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
+@@ -7391,8 +7330,7 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
+ *ppos += cnt;
+ ret = cnt;
+ }
+-out:
+- mutex_unlock(&trace_types_lock);
++
+ return ret;
+ }
+
+@@ -7778,12 +7716,11 @@ void tracing_log_err(struct trace_array *tr,
+
+ len += sizeof(CMD_PREFIX) + 2 * sizeof("\n") + strlen(cmd) + 1;
+
+- mutex_lock(&tracing_err_log_lock);
++ guard(mutex)(&tracing_err_log_lock);
++
+ err = get_tracing_log_err(tr, len);
+- if (PTR_ERR(err) == -ENOMEM) {
+- mutex_unlock(&tracing_err_log_lock);
++ if (PTR_ERR(err) == -ENOMEM)
+ return;
+- }
+
+ snprintf(err->loc, TRACING_LOG_LOC_MAX, "%s: error: ", loc);
+ snprintf(err->cmd, len, "\n" CMD_PREFIX "%s\n", cmd);
+@@ -7794,7 +7731,6 @@ void tracing_log_err(struct trace_array *tr,
+ err->info.ts = local_clock();
+
+ list_add_tail(&err->list, &tr->err_log);
+- mutex_unlock(&tracing_err_log_lock);
+ }
+
+ static void clear_tracing_err_log(struct trace_array *tr)
+@@ -9535,20 +9471,17 @@ static int instance_mkdir(const char *name)
+ struct trace_array *tr;
+ int ret;
+
+- mutex_lock(&event_mutex);
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&event_mutex);
++ guard(mutex)(&trace_types_lock);
+
+ ret = -EEXIST;
+ if (trace_array_find(name))
+- goto out_unlock;
++ return -EEXIST;
+
+ tr = trace_array_create(name);
+
+ ret = PTR_ERR_OR_ZERO(tr);
+
+-out_unlock:
+- mutex_unlock(&trace_types_lock);
+- mutex_unlock(&event_mutex);
+ return ret;
+ }
+
+@@ -9598,24 +9531,23 @@ struct trace_array *trace_array_get_by_name(const char *name, const char *system
+ {
+ struct trace_array *tr;
+
+- mutex_lock(&event_mutex);
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&event_mutex);
++ guard(mutex)(&trace_types_lock);
+
+ list_for_each_entry(tr, &ftrace_trace_arrays, list) {
+- if (tr->name && strcmp(tr->name, name) == 0)
+- goto out_unlock;
++ if (tr->name && strcmp(tr->name, name) == 0) {
++ tr->ref++;
++ return tr;
++ }
+ }
+
+ tr = trace_array_create_systems(name, systems, 0, 0);
+
+ if (IS_ERR(tr))
+ tr = NULL;
+-out_unlock:
+- if (tr)
++ else
+ tr->ref++;
+
+- mutex_unlock(&trace_types_lock);
+- mutex_unlock(&event_mutex);
+ return tr;
+ }
+ EXPORT_SYMBOL_GPL(trace_array_get_by_name);
+@@ -9666,48 +9598,36 @@ static int __remove_instance(struct trace_array *tr)
+ int trace_array_destroy(struct trace_array *this_tr)
+ {
+ struct trace_array *tr;
+- int ret;
+
+ if (!this_tr)
+ return -EINVAL;
+
+- mutex_lock(&event_mutex);
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&event_mutex);
++ guard(mutex)(&trace_types_lock);
+
+- ret = -ENODEV;
+
+ /* Making sure trace array exists before destroying it. */
+ list_for_each_entry(tr, &ftrace_trace_arrays, list) {
+- if (tr == this_tr) {
+- ret = __remove_instance(tr);
+- break;
+- }
++ if (tr == this_tr)
++ return __remove_instance(tr);
+ }
+
+- mutex_unlock(&trace_types_lock);
+- mutex_unlock(&event_mutex);
+-
+- return ret;
++ return -ENODEV;
+ }
+ EXPORT_SYMBOL_GPL(trace_array_destroy);
+
+ static int instance_rmdir(const char *name)
+ {
+ struct trace_array *tr;
+- int ret;
+
+- mutex_lock(&event_mutex);
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&event_mutex);
++ guard(mutex)(&trace_types_lock);
+
+- ret = -ENODEV;
+ tr = trace_array_find(name);
+- if (tr)
+- ret = __remove_instance(tr);
+-
+- mutex_unlock(&trace_types_lock);
+- mutex_unlock(&event_mutex);
++ if (!tr)
++ return -ENODEV;
+
+- return ret;
++ return __remove_instance(tr);
+ }
+
+ static __init void create_trace_instances(struct dentry *d_tracer)
+@@ -9720,19 +9640,16 @@ static __init void create_trace_instances(struct dentry *d_tracer)
+ if (MEM_FAIL(!trace_instance_dir, "Failed to create instances directory\n"))
+ return;
+
+- mutex_lock(&event_mutex);
+- mutex_lock(&trace_types_lock);
++ guard(mutex)(&event_mutex);
++ guard(mutex)(&trace_types_lock);
+
+ list_for_each_entry(tr, &ftrace_trace_arrays, list) {
+ if (!tr->name)
+ continue;
+ if (MEM_FAIL(trace_array_create_dir(tr) < 0,
+ "Failed to create instance directory\n"))
+- break;
++ return;
+ }
+-
+- mutex_unlock(&trace_types_lock);
+- mutex_unlock(&event_mutex);
+ }
+
+ static void
+@@ -9946,7 +9863,7 @@ static void trace_module_remove_evals(struct module *mod)
+ if (!mod->num_trace_evals)
+ return;
+
+- mutex_lock(&trace_eval_mutex);
++ guard(mutex)(&trace_eval_mutex);
+
+ map = trace_eval_maps;
+
+@@ -9958,12 +9875,10 @@ static void trace_module_remove_evals(struct module *mod)
+ map = map->tail.next;
+ }
+ if (!map)
+- goto out;
++ return;
+
+ *last = trace_eval_jmp_to_tail(map)->tail.next;
+ kfree(map);
+- out:
+- mutex_unlock(&trace_eval_mutex);
+ }
+ #else
+ static inline void trace_module_remove_evals(struct module *mod) { }
+diff --git a/kernel/trace/trace_functions.c b/kernel/trace/trace_functions.c
+index 3b0cea37e0297b..fbbc3c719d2f68 100644
+--- a/kernel/trace/trace_functions.c
++++ b/kernel/trace/trace_functions.c
+@@ -193,7 +193,7 @@ function_trace_call(unsigned long ip, unsigned long parent_ip,
+ if (bit < 0)
+ return;
+
+- trace_ctx = tracing_gen_ctx();
++ trace_ctx = tracing_gen_ctx_dec();
+
+ cpu = smp_processor_id();
+ data = per_cpu_ptr(tr->array_buffer.data, cpu);
+@@ -298,7 +298,6 @@ function_no_repeats_trace_call(unsigned long ip, unsigned long parent_ip,
+ struct trace_array *tr = op->private;
+ struct trace_array_cpu *data;
+ unsigned int trace_ctx;
+- unsigned long flags;
+ int bit;
+ int cpu;
+
+@@ -325,8 +324,7 @@ function_no_repeats_trace_call(unsigned long ip, unsigned long parent_ip,
+ if (is_repeat_check(tr, last_info, ip, parent_ip))
+ goto out;
+
+- local_save_flags(flags);
+- trace_ctx = tracing_gen_ctx_flags(flags);
++ trace_ctx = tracing_gen_ctx_dec();
+ process_repeats(tr, ip, parent_ip, last_info, trace_ctx);
+
+ trace_function(tr, ip, parent_ip, trace_ctx);
+diff --git a/lib/iov_iter.c b/lib/iov_iter.c
+index 908e75a28d90bd..bdb37d572e97ca 100644
+--- a/lib/iov_iter.c
++++ b/lib/iov_iter.c
+@@ -1428,6 +1428,8 @@ static ssize_t __import_iovec_ubuf(int type, const struct iovec __user *uvec,
+ struct iovec *iov = *iovp;
+ ssize_t ret;
+
++ *iovp = NULL;
++
+ if (compat)
+ ret = copy_compat_iovec_from_user(iov, uvec, 1);
+ else
+@@ -1438,7 +1440,6 @@ static ssize_t __import_iovec_ubuf(int type, const struct iovec __user *uvec,
+ ret = import_ubuf(type, iov->iov_base, iov->iov_len, i);
+ if (unlikely(ret))
+ return ret;
+- *iovp = NULL;
+ return i->count;
+ }
+
+diff --git a/mm/madvise.c b/mm/madvise.c
+index ff139e57cca292..c211e8fa4e49bb 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -920,7 +920,16 @@ static long madvise_dontneed_free(struct vm_area_struct *vma,
+ */
+ end = vma->vm_end;
+ }
+- VM_WARN_ON(start >= end);
++ /*
++ * If the memory region between start and end was
++ * originally backed by 4kB pages and then remapped to
++ * be backed by hugepages while mmap_lock was dropped,
++ * the adjustment for hugetlb vma above may have rounded
++ * end down to the start address.
++ */
++ if (start == end)
++ return 0;
++ VM_WARN_ON(start > end);
+ }
+
+ if (behavior == MADV_DONTNEED || behavior == MADV_DONTNEED_LOCKED)
+diff --git a/mm/migrate_device.c b/mm/migrate_device.c
+index 9cf26592ac934d..5bd888223cc8b8 100644
+--- a/mm/migrate_device.c
++++ b/mm/migrate_device.c
+@@ -840,20 +840,15 @@ void migrate_device_finalize(unsigned long *src_pfns,
+ dst = src;
+ }
+
++ if (!folio_is_zone_device(dst))
++ folio_add_lru(dst);
+ remove_migration_ptes(src, dst, 0);
+ folio_unlock(src);
+-
+- if (folio_is_zone_device(src))
+- folio_put(src);
+- else
+- folio_putback_lru(src);
++ folio_put(src);
+
+ if (dst != src) {
+ folio_unlock(dst);
+- if (folio_is_zone_device(dst))
+- folio_put(dst);
+- else
+- folio_putback_lru(dst);
++ folio_put(dst);
+ }
+ }
+ }
+diff --git a/net/bpf/test_run.c b/net/bpf/test_run.c
+index 501ec4249fedc3..8612023bec60dc 100644
+--- a/net/bpf/test_run.c
++++ b/net/bpf/test_run.c
+@@ -660,12 +660,9 @@ static void *bpf_test_init(const union bpf_attr *kattr, u32 user_size,
+ void __user *data_in = u64_to_user_ptr(kattr->test.data_in);
+ void *data;
+
+- if (size < ETH_HLEN || size > PAGE_SIZE - headroom - tailroom)
++ if (user_size < ETH_HLEN || user_size > PAGE_SIZE - headroom - tailroom)
+ return ERR_PTR(-EINVAL);
+
+- if (user_size > size)
+- return ERR_PTR(-EMSGSIZE);
+-
+ size = SKB_DATA_ALIGN(size);
+ data = kzalloc(size + headroom + tailroom, GFP_USER);
+ if (!data)
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 2e0fe38d0e877d..c761f862bc5a2d 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -1012,6 +1012,12 @@ int netdev_get_name(struct net *net, char *name, int ifindex)
+ return ret;
+ }
+
++static bool dev_addr_cmp(struct net_device *dev, unsigned short type,
++ const char *ha)
++{
++ return dev->type == type && !memcmp(dev->dev_addr, ha, dev->addr_len);
++}
++
+ /**
+ * dev_getbyhwaddr_rcu - find a device by its hardware address
+ * @net: the applicable net namespace
+@@ -1020,7 +1026,7 @@ int netdev_get_name(struct net *net, char *name, int ifindex)
+ *
+ * Search for an interface by MAC address. Returns a pointer to the
+ * device, or NULL if it is not found.
+- * The caller must hold RCU or RTNL.
++ * The caller must hold RCU.
+ * The returned device has not had its ref count increased
+ * and the caller must therefore be careful about locking
+ *
+@@ -1032,14 +1038,39 @@ struct net_device *dev_getbyhwaddr_rcu(struct net *net, unsigned short type,
+ struct net_device *dev;
+
+ for_each_netdev_rcu(net, dev)
+- if (dev->type == type &&
+- !memcmp(dev->dev_addr, ha, dev->addr_len))
++ if (dev_addr_cmp(dev, type, ha))
+ return dev;
+
+ return NULL;
+ }
+ EXPORT_SYMBOL(dev_getbyhwaddr_rcu);
+
++/**
++ * dev_getbyhwaddr() - find a device by its hardware address
++ * @net: the applicable net namespace
++ * @type: media type of device
++ * @ha: hardware address
++ *
++ * Similar to dev_getbyhwaddr_rcu(), but the owner needs to hold
++ * rtnl_lock.
++ *
++ * Context: rtnl_lock() must be held.
++ * Return: pointer to the net_device, or NULL if not found
++ */
++struct net_device *dev_getbyhwaddr(struct net *net, unsigned short type,
++ const char *ha)
++{
++ struct net_device *dev;
++
++ ASSERT_RTNL();
++ for_each_netdev(net, dev)
++ if (dev_addr_cmp(dev, type, ha))
++ return dev;
++
++ return NULL;
++}
++EXPORT_SYMBOL(dev_getbyhwaddr);
++
+ struct net_device *dev_getfirstbyhwtype(struct net *net, unsigned short type)
+ {
+ struct net_device *dev, *ret = NULL;
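
Like the RCU variant, the new dev_getbyhwaddr() takes no reference on the device it returns, so the pointer is only valid while the RTNL lock is held. A hedged usage sketch (the surrounding function is hypothetical):

#include <linux/netdevice.h>
#include <linux/rtnetlink.h>
#include <linux/if_arp.h>

static void report_by_mac(struct net *net, const char *mac)
{
        struct net_device *dev;

        rtnl_lock();
        dev = dev_getbyhwaddr(net, ARPHRD_ETHER, mac);
        if (dev)
                netdev_info(dev, "matched by hardware address\n");
        /* no dev_put(): no reference was taken, so the device must
         * not be touched after rtnl_unlock() */
        rtnl_unlock();
}
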
+diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
+index 6efd4cccc9ddd2..212f0a048cab68 100644
+--- a/net/core/drop_monitor.c
++++ b/net/core/drop_monitor.c
+@@ -1734,30 +1734,30 @@ static int __init init_net_drop_monitor(void)
+ return -ENOSPC;
+ }
+
+- rc = genl_register_family(&net_drop_monitor_family);
+- if (rc) {
+- pr_err("Could not create drop monitor netlink family\n");
+- return rc;
++ for_each_possible_cpu(cpu) {
++ net_dm_cpu_data_init(cpu);
++ net_dm_hw_cpu_data_init(cpu);
+ }
+- WARN_ON(net_drop_monitor_family.mcgrp_offset != NET_DM_GRP_ALERT);
+
+ rc = register_netdevice_notifier(&dropmon_net_notifier);
+ if (rc < 0) {
+ pr_crit("Failed to register netdevice notifier\n");
++ return rc;
++ }
++
++ rc = genl_register_family(&net_drop_monitor_family);
++ if (rc) {
++ pr_err("Could not create drop monitor netlink family\n");
+ goto out_unreg;
+ }
++ WARN_ON(net_drop_monitor_family.mcgrp_offset != NET_DM_GRP_ALERT);
+
+ rc = 0;
+
+- for_each_possible_cpu(cpu) {
+- net_dm_cpu_data_init(cpu);
+- net_dm_hw_cpu_data_init(cpu);
+- }
+-
+ goto out;
+
+ out_unreg:
+- genl_unregister_family(&net_drop_monitor_family);
++ WARN_ON(unregister_netdevice_notifier(&dropmon_net_notifier));
+ out:
+ return rc;
+ }
+@@ -1766,19 +1766,18 @@ static void exit_net_drop_monitor(void)
+ {
+ int cpu;
+
+- BUG_ON(unregister_netdevice_notifier(&dropmon_net_notifier));
+-
+ /*
+ * Because of the module_get/put we do in the trace state change path
+ * we are guaranteed not to have any current users when we get here
+ */
++ BUG_ON(genl_unregister_family(&net_drop_monitor_family));
++
++ BUG_ON(unregister_netdevice_notifier(&dropmon_net_notifier));
+
+ for_each_possible_cpu(cpu) {
+ net_dm_hw_cpu_data_fini(cpu);
+ net_dm_cpu_data_fini(cpu);
+ }
+-
+- BUG_ON(genl_unregister_family(&net_drop_monitor_family));
+ }
+
+ module_init(init_net_drop_monitor);
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index 5db41bf2ed93e0..9cd8de6bebb543 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -853,23 +853,30 @@ __skb_flow_dissect_ports(const struct sk_buff *skb,
+ void *target_container, const void *data,
+ int nhoff, u8 ip_proto, int hlen)
+ {
+- enum flow_dissector_key_id dissector_ports = FLOW_DISSECTOR_KEY_MAX;
+- struct flow_dissector_key_ports *key_ports;
++ struct flow_dissector_key_ports_range *key_ports_range = NULL;
++ struct flow_dissector_key_ports *key_ports = NULL;
++ __be32 ports;
+
+ if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_PORTS))
+- dissector_ports = FLOW_DISSECTOR_KEY_PORTS;
+- else if (dissector_uses_key(flow_dissector,
+- FLOW_DISSECTOR_KEY_PORTS_RANGE))
+- dissector_ports = FLOW_DISSECTOR_KEY_PORTS_RANGE;
++ key_ports = skb_flow_dissector_target(flow_dissector,
++ FLOW_DISSECTOR_KEY_PORTS,
++ target_container);
+
+- if (dissector_ports == FLOW_DISSECTOR_KEY_MAX)
++ if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_PORTS_RANGE))
++ key_ports_range = skb_flow_dissector_target(flow_dissector,
++ FLOW_DISSECTOR_KEY_PORTS_RANGE,
++ target_container);
++
++ if (!key_ports && !key_ports_range)
+ return;
+
+- key_ports = skb_flow_dissector_target(flow_dissector,
+- dissector_ports,
+- target_container);
+- key_ports->ports = __skb_flow_get_ports(skb, nhoff, ip_proto,
+- data, hlen);
++ ports = __skb_flow_get_ports(skb, nhoff, ip_proto, data, hlen);
++
++ if (key_ports)
++ key_ports->ports = ports;
++
++ if (key_ports_range)
++ key_ports_range->tp.ports = ports;
+ }
+
+ static void
+@@ -924,6 +931,7 @@ static void __skb_flow_bpf_to_target(const struct bpf_flow_keys *flow_keys,
+ struct flow_dissector *flow_dissector,
+ void *target_container)
+ {
++ struct flow_dissector_key_ports_range *key_ports_range = NULL;
+ struct flow_dissector_key_ports *key_ports = NULL;
+ struct flow_dissector_key_control *key_control;
+ struct flow_dissector_key_basic *key_basic;
+@@ -968,20 +976,21 @@ static void __skb_flow_bpf_to_target(const struct bpf_flow_keys *flow_keys,
+ key_control->addr_type = FLOW_DISSECTOR_KEY_IPV6_ADDRS;
+ }
+
+- if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_PORTS))
++ if (dissector_uses_key(flow_dissector, FLOW_DISSECTOR_KEY_PORTS)) {
+ key_ports = skb_flow_dissector_target(flow_dissector,
+ FLOW_DISSECTOR_KEY_PORTS,
+ target_container);
+- else if (dissector_uses_key(flow_dissector,
+- FLOW_DISSECTOR_KEY_PORTS_RANGE))
+- key_ports = skb_flow_dissector_target(flow_dissector,
+- FLOW_DISSECTOR_KEY_PORTS_RANGE,
+- target_container);
+-
+- if (key_ports) {
+ key_ports->src = flow_keys->sport;
+ key_ports->dst = flow_keys->dport;
+ }
++ if (dissector_uses_key(flow_dissector,
++ FLOW_DISSECTOR_KEY_PORTS_RANGE)) {
++ key_ports_range = skb_flow_dissector_target(flow_dissector,
++ FLOW_DISSECTOR_KEY_PORTS_RANGE,
++ target_container);
++ key_ports_range->tp.src = flow_keys->sport;
++ key_ports_range->tp.dst = flow_keys->dport;
++ }
+
+ if (dissector_uses_key(flow_dissector,
+ FLOW_DISSECTOR_KEY_FLOW_LABEL)) {
+diff --git a/net/core/gro.c b/net/core/gro.c
+index d1f44084e978fb..78b320b6317445 100644
+--- a/net/core/gro.c
++++ b/net/core/gro.c
+@@ -7,9 +7,6 @@
+
+ #define MAX_GRO_SKBS 8
+
+-/* This should be increased if a protocol with a bigger head is added. */
+-#define GRO_MAX_HEAD (MAX_HEADER + 128)
+-
+ static DEFINE_SPINLOCK(offload_lock);
+
+ /**
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 74149dc4ee318d..61a950f13a91c7 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -69,6 +69,7 @@
+ #include <net/dst.h>
+ #include <net/sock.h>
+ #include <net/checksum.h>
++#include <net/gro.h>
+ #include <net/gso.h>
+ #include <net/hotdata.h>
+ #include <net/ip6_checksum.h>
+@@ -95,7 +96,9 @@
+ static struct kmem_cache *skbuff_ext_cache __ro_after_init;
+ #endif
+
+-#define SKB_SMALL_HEAD_SIZE SKB_HEAD_ALIGN(MAX_TCP_HEADER)
++#define GRO_MAX_HEAD_PAD (GRO_MAX_HEAD + NET_SKB_PAD + NET_IP_ALIGN)
++#define SKB_SMALL_HEAD_SIZE SKB_HEAD_ALIGN(max(MAX_TCP_HEADER, \
++ GRO_MAX_HEAD_PAD))
+
+ /* We want SKB_SMALL_HEAD_CACHE_SIZE to not be a power of two.
+ * This should ensure that SKB_SMALL_HEAD_HEADROOM is a unique
+@@ -736,7 +739,7 @@ struct sk_buff *__netdev_alloc_skb(struct net_device *dev, unsigned int len,
+ /* If requested length is either too small or too big,
+ * we use kmalloc() for skb->head allocation.
+ */
+- if (len <= SKB_WITH_OVERHEAD(1024) ||
++ if (len <= SKB_WITH_OVERHEAD(SKB_SMALL_HEAD_CACHE_SIZE) ||
+ len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
+ (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
+ skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX, NUMA_NO_NODE);
+@@ -816,7 +819,8 @@ struct sk_buff *napi_alloc_skb(struct napi_struct *napi, unsigned int len)
+ * When the small frag allocator is available, prefer it over kmalloc
+ * for small fragments
+ */
+- if ((!NAPI_HAS_SMALL_PAGE_FRAG && len <= SKB_WITH_OVERHEAD(1024)) ||
++ if ((!NAPI_HAS_SMALL_PAGE_FRAG &&
++ len <= SKB_WITH_OVERHEAD(SKB_SMALL_HEAD_CACHE_SIZE)) ||
+ len > SKB_WITH_OVERHEAD(PAGE_SIZE) ||
+ (gfp_mask & (__GFP_DIRECT_RECLAIM | GFP_DMA))) {
+ skb = __alloc_skb(len, gfp_mask, SKB_ALLOC_RX | SKB_ALLOC_NAPI,
+diff --git a/net/core/skmsg.c b/net/core/skmsg.c
+index 8ad7e6755fd642..f76cbf49c68c8d 100644
+--- a/net/core/skmsg.c
++++ b/net/core/skmsg.c
+@@ -548,6 +548,9 @@ static int sk_psock_skb_ingress_enqueue(struct sk_buff *skb,
+ return num_sge;
+ }
+
++#if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
++ psock->ingress_bytes += len;
++#endif
+ copied = len;
+ msg->sg.start = 0;
+ msg->sg.size = copied;
+@@ -1143,6 +1146,10 @@ int sk_psock_init_strp(struct sock *sk, struct sk_psock *psock)
+ if (!ret)
+ sk_psock_set_state(psock, SK_PSOCK_RX_STRP_ENABLED);
+
++ if (sk_is_tcp(sk)) {
++ psock->strp.cb.read_sock = tcp_bpf_strp_read_sock;
++ psock->copied_seq = tcp_sk(sk)->copied_seq;
++ }
+ return ret;
+ }
+
+diff --git a/net/core/sock_map.c b/net/core/sock_map.c
+index f1b9b3958792cd..82a14f131d00c6 100644
+--- a/net/core/sock_map.c
++++ b/net/core/sock_map.c
+@@ -303,7 +303,10 @@ static int sock_map_link(struct bpf_map *map, struct sock *sk)
+
+ write_lock_bh(&sk->sk_callback_lock);
+ if (stream_parser && stream_verdict && !psock->saved_data_ready) {
+- ret = sk_psock_init_strp(sk, psock);
++ if (sk_is_tcp(sk))
++ ret = sk_psock_init_strp(sk, psock);
++ else
++ ret = -EOPNOTSUPP;
+ if (ret) {
+ write_unlock_bh(&sk->sk_callback_lock);
+ sk_psock_put(sk, psock);
+@@ -541,6 +544,9 @@ static bool sock_map_sk_state_allowed(const struct sock *sk)
+ return (1 << sk->sk_state) & (TCPF_ESTABLISHED | TCPF_LISTEN);
+ if (sk_is_stream_unix(sk))
+ return (1 << sk->sk_state) & TCPF_ESTABLISHED;
++ if (sk_is_vsock(sk) &&
++ (sk->sk_type == SOCK_STREAM || sk->sk_type == SOCK_SEQPACKET))
++ return (1 << sk->sk_state) & TCPF_ESTABLISHED;
+ return true;
+ }
+
+diff --git a/net/ipv4/arp.c b/net/ipv4/arp.c
+index 59ffaa89d7b05f..8fb48f42581ce1 100644
+--- a/net/ipv4/arp.c
++++ b/net/ipv4/arp.c
+@@ -1077,7 +1077,7 @@ static int arp_req_set_public(struct net *net, struct arpreq *r,
+ __be32 mask = ((struct sockaddr_in *)&r->arp_netmask)->sin_addr.s_addr;
+
+ if (!dev && (r->arp_flags & ATF_COM)) {
+- dev = dev_getbyhwaddr_rcu(net, r->arp_ha.sa_family,
++ dev = dev_getbyhwaddr(net, r->arp_ha.sa_family,
+ r->arp_ha.sa_data);
+ if (!dev)
+ return -ENODEV;
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 4f77bd862e957f..68cb6a966b18b8 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -1564,12 +1564,13 @@ EXPORT_SYMBOL(tcp_recv_skb);
+ * or for 'peeking' the socket using this routine
+ * (although both would be easy to implement).
+ */
+-int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
+- sk_read_actor_t recv_actor)
++static int __tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor, bool noack,
++ u32 *copied_seq)
+ {
+ struct sk_buff *skb;
+ struct tcp_sock *tp = tcp_sk(sk);
+- u32 seq = tp->copied_seq;
++ u32 seq = *copied_seq;
+ u32 offset;
+ int copied = 0;
+
+@@ -1623,9 +1624,12 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
+ tcp_eat_recv_skb(sk, skb);
+ if (!desc->count)
+ break;
+- WRITE_ONCE(tp->copied_seq, seq);
++ WRITE_ONCE(*copied_seq, seq);
+ }
+- WRITE_ONCE(tp->copied_seq, seq);
++ WRITE_ONCE(*copied_seq, seq);
++
++ if (noack)
++ goto out;
+
+ tcp_rcv_space_adjust(sk);
+
+@@ -1634,10 +1638,25 @@ int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
+ tcp_recv_skb(sk, seq, &offset);
+ tcp_cleanup_rbuf(sk, copied);
+ }
++out:
+ return copied;
+ }
++
++int tcp_read_sock(struct sock *sk, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor)
++{
++ return __tcp_read_sock(sk, desc, recv_actor, false,
++ &tcp_sk(sk)->copied_seq);
++}
+ EXPORT_SYMBOL(tcp_read_sock);
+
++int tcp_read_sock_noack(struct sock *sk, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor, bool noack,
++ u32 *copied_seq)
++{
++ return __tcp_read_sock(sk, desc, recv_actor, noack, copied_seq);
++}
++
+ int tcp_read_skb(struct sock *sk, skb_read_actor_t recv_actor)
+ {
+ struct sk_buff *skb;
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 392678ae80f4ed..22e8a2af5dd8b0 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -646,6 +646,42 @@ static int tcp_bpf_assert_proto_ops(struct proto *ops)
+ ops->sendmsg == tcp_sendmsg ? 0 : -ENOTSUPP;
+ }
+
++#if IS_ENABLED(CONFIG_BPF_STREAM_PARSER)
++int tcp_bpf_strp_read_sock(struct strparser *strp, read_descriptor_t *desc,
++ sk_read_actor_t recv_actor)
++{
++ struct sock *sk = strp->sk;
++ struct sk_psock *psock;
++ struct tcp_sock *tp;
++ int copied = 0;
++
++ tp = tcp_sk(sk);
++ rcu_read_lock();
++ psock = sk_psock(sk);
++ if (WARN_ON_ONCE(!psock)) {
++ desc->error = -EINVAL;
++ goto out;
++ }
++
++ psock->ingress_bytes = 0;
++ copied = tcp_read_sock_noack(sk, desc, recv_actor, true,
++ &psock->copied_seq);
++ if (copied < 0)
++ goto out;
++ /* recv_actor may redirect skb to another socket (SK_REDIRECT) or
++ * just put skb into ingress queue of current socket (SK_PASS).
++ * For SK_REDIRECT, we need to ack the frame immediately but for
++ * SK_PASS, we want to delay the ack until tcp_bpf_recvmsg_parser().
++ */
++ tp->copied_seq = psock->copied_seq - psock->ingress_bytes;
++ tcp_rcv_space_adjust(sk);
++ __tcp_cleanup_rbuf(sk, copied - psock->ingress_bytes);
++out:
++ rcu_read_unlock();
++ return copied;
++}
++#endif /* CONFIG_BPF_STREAM_PARSER */
++
+ int tcp_bpf_update_proto(struct sock *sk, struct sk_psock *psock, bool restore)
+ {
+ int family = sk->sk_family == AF_INET6 ? TCP_BPF_IPV6 : TCP_BPF_IPV4;
+diff --git a/net/ipv4/tcp_fastopen.c b/net/ipv4/tcp_fastopen.c
+index 0f523cbfe329ef..32b28fc21b63c0 100644
+--- a/net/ipv4/tcp_fastopen.c
++++ b/net/ipv4/tcp_fastopen.c
+@@ -178,7 +178,7 @@ void tcp_fastopen_add_skb(struct sock *sk, struct sk_buff *skb)
+ if (!skb)
+ return;
+
+- skb_dst_drop(skb);
++ tcp_cleanup_skb(skb);
+ /* segs_in has been initialized to 1 in tcp_create_openreq_child().
+ * Hence, reset segs_in to 0 before calling tcp_segs_in()
+ * to avoid double counting. Also, tcp_segs_in() expects
+@@ -195,7 +195,7 @@ void tcp_fastopen_add_skb(struct sock *sk, struct sk_buff *skb)
+ TCP_SKB_CB(skb)->tcp_flags &= ~TCPHDR_SYN;
+
+ tp->rcv_nxt = TCP_SKB_CB(skb)->end_seq;
+- __skb_queue_tail(&sk->sk_receive_queue, skb);
++ tcp_add_receive_queue(sk, skb);
+ tp->syn_data_acked = 1;
+
+ /* u64_stats_update_begin(&tp->syncp) not needed here,
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 2d43b29da15e20..d93a5a89c5692d 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -243,9 +243,15 @@ static void tcp_measure_rcv_mss(struct sock *sk, const struct sk_buff *skb)
+ do_div(val, skb->truesize);
+ tcp_sk(sk)->scaling_ratio = val ? val : 1;
+
+- if (old_ratio != tcp_sk(sk)->scaling_ratio)
+- WRITE_ONCE(tcp_sk(sk)->window_clamp,
+- tcp_win_from_space(sk, sk->sk_rcvbuf));
++ if (old_ratio != tcp_sk(sk)->scaling_ratio) {
++ struct tcp_sock *tp = tcp_sk(sk);
++
++ val = tcp_win_from_space(sk, sk->sk_rcvbuf);
++ tcp_set_window_clamp(sk, val);
++
++ if (tp->window_clamp < tp->rcvq_space.space)
++ tp->rcvq_space.space = tp->window_clamp;
++ }
+ }
+ icsk->icsk_ack.rcv_mss = min_t(unsigned int, len,
+ tcp_sk(sk)->advmss);
+@@ -4964,7 +4970,7 @@ static void tcp_ofo_queue(struct sock *sk)
+ tcp_rcv_nxt_update(tp, TCP_SKB_CB(skb)->end_seq);
+ fin = TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN;
+ if (!eaten)
+- __skb_queue_tail(&sk->sk_receive_queue, skb);
++ tcp_add_receive_queue(sk, skb);
+ else
+ kfree_skb_partial(skb, fragstolen);
+
+@@ -5156,7 +5162,7 @@ static int __must_check tcp_queue_rcv(struct sock *sk, struct sk_buff *skb,
+ skb, fragstolen)) ? 1 : 0;
+ tcp_rcv_nxt_update(tcp_sk(sk), TCP_SKB_CB(skb)->end_seq);
+ if (!eaten) {
+- __skb_queue_tail(&sk->sk_receive_queue, skb);
++ tcp_add_receive_queue(sk, skb);
+ skb_set_owner_r(skb, sk);
+ }
+ return eaten;
+@@ -5239,7 +5245,7 @@ static void tcp_data_queue(struct sock *sk, struct sk_buff *skb)
+ __kfree_skb(skb);
+ return;
+ }
+- skb_dst_drop(skb);
++ tcp_cleanup_skb(skb);
+ __skb_pull(skb, tcp_hdr(skb)->doff * 4);
+
+ reason = SKB_DROP_REASON_NOT_SPECIFIED;
+@@ -6208,7 +6214,7 @@ void tcp_rcv_established(struct sock *sk, struct sk_buff *skb)
+ NET_INC_STATS(sock_net(sk), LINUX_MIB_TCPHPHITS);
+
+ /* Bulk data transfer: receiver */
+- skb_dst_drop(skb);
++ tcp_cleanup_skb(skb);
+ __skb_pull(skb, tcp_header_len);
+ eaten = tcp_queue_rcv(sk, skb, &fragstolen);
+
+diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
+index bcc2f1e090c7db..824048679e1b8f 100644
+--- a/net/ipv4/tcp_ipv4.c
++++ b/net/ipv4/tcp_ipv4.c
+@@ -2025,7 +2025,7 @@ bool tcp_add_backlog(struct sock *sk, struct sk_buff *skb,
+ */
+ skb_condense(skb);
+
+- skb_dst_drop(skb);
++ tcp_cleanup_skb(skb);
+
+ if (unlikely(tcp_checksum_complete(skb))) {
+ bh_unlock_sock(sk);
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index dfa3067084948f..998ea3b5badfce 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -97,7 +97,7 @@ tcf_exts_miss_cookie_base_alloc(struct tcf_exts *exts, struct tcf_proto *tp,
+
+ err = xa_alloc_cyclic(&tcf_exts_miss_cookies_xa, &n->miss_cookie_base,
+ n, xa_limit_32b, &next, GFP_KERNEL);
+- if (err)
++ if (err < 0)
+ goto err_xa_alloc;
+
+ exts->miss_cookie_node = n;
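
The "if (err)" to "if (err < 0)" change matters because xa_alloc_cyclic() returns 1, not 0, when its cyclic counter wraps around: a wrap is still a successful allocation, and treating it as failure sent valid allocations down the error path. A sketch of the intended check (names are illustrative):

#include <linux/xarray.h>

static u32 next_id;

/* 0 = allocated, 1 = allocated but the counter wrapped, <0 = real error */
static int alloc_id(struct xarray *xa, void *entry, u32 *id)
{
        int err = xa_alloc_cyclic(xa, id, entry, xa_limit_32b,
                                  &next_id, GFP_KERNEL);

        if (err < 0)                    /* only negative values are failures */
                return err;
        return 0;
}
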
+diff --git a/net/strparser/strparser.c b/net/strparser/strparser.c
+index 8299ceb3e3739d..95696f42647ec1 100644
+--- a/net/strparser/strparser.c
++++ b/net/strparser/strparser.c
+@@ -347,7 +347,10 @@ static int strp_read_sock(struct strparser *strp)
+ struct socket *sock = strp->sk->sk_socket;
+ read_descriptor_t desc;
+
+- if (unlikely(!sock || !sock->ops || !sock->ops->read_sock))
++ if (unlikely(!sock || !sock->ops))
++ return -EBUSY;
++
++ if (unlikely(!strp->cb.read_sock && !sock->ops->read_sock))
+ return -EBUSY;
+
+ desc.arg.data = strp;
+@@ -355,7 +358,10 @@ static int strp_read_sock(struct strparser *strp)
+ desc.count = 1; /* give more than one skb per call */
+
+ /* sk should be locked here, so okay to do read_sock */
+- sock->ops->read_sock(strp->sk, &desc, strp_recv);
++ if (strp->cb.read_sock)
++ strp->cb.read_sock(strp, &desc, strp_recv);
++ else
++ sock->ops->read_sock(strp->sk, &desc, strp_recv);
+
+ desc.error = strp->cb.read_sock_done(strp, desc.error);
+
+@@ -468,6 +474,7 @@ int strp_init(struct strparser *strp, struct sock *sk,
+ strp->cb.unlock = cb->unlock ? : strp_sock_unlock;
+ strp->cb.rcv_msg = cb->rcv_msg;
+ strp->cb.parse_msg = cb->parse_msg;
++ strp->cb.read_sock = cb->read_sock;
+ strp->cb.read_sock_done = cb->read_sock_done ? : default_read_sock_done;
+ strp->cb.abort_parser = cb->abort_parser ? : strp_abort_strp;
+
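
The new optional cb.read_sock hook lets a strparser owner substitute its own socket reader for sock->ops->read_sock(); the TCP sockmap code above uses it to read without advancing the ack state. A hedged sketch of wiring up such a reader (the callback names and the sequence counter are hypothetical):

#include <net/strparser.h>
#include <net/tcp.h>

static u32 my_copied_seq;

static void my_rcv_msg(struct strparser *strp, struct sk_buff *skb)
{
        kfree_skb(skb);                 /* stub: consume the parsed message */
}

static int my_read_sock(struct strparser *strp, read_descriptor_t *desc,
                        sk_read_actor_t recv_actor)
{
        /* pull data without acking; the owner acks once it is consumed */
        return tcp_read_sock_noack(strp->sk, desc, recv_actor, true,
                                   &my_copied_seq);
}

static const struct strp_callbacks my_cb = {
        .rcv_msg   = my_rcv_msg,        /* required, as before */
        .read_sock = my_read_sock,      /* new optional hook from this patch */
};
/* ... strp_init(&my_strp, sk, &my_cb); */
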
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index 37299a7ca1876e..eb6ea26b390ee8 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1189,6 +1189,9 @@ static int vsock_read_skb(struct sock *sk, skb_read_actor_t read_actor)
+ {
+ struct vsock_sock *vsk = vsock_sk(sk);
+
++ if (WARN_ON_ONCE(!vsk->transport))
++ return -ENODEV;
++
+ return vsk->transport->read_skb(vsk, read_actor);
+ }
+
+diff --git a/net/vmw_vsock/virtio_transport.c b/net/vmw_vsock/virtio_transport.c
+index b58c3818f284f1..f0e48e6911fc46 100644
+--- a/net/vmw_vsock/virtio_transport.c
++++ b/net/vmw_vsock/virtio_transport.c
+@@ -670,6 +670,13 @@ static int virtio_vsock_vqs_init(struct virtio_vsock *vsock)
+ };
+ int ret;
+
++ mutex_lock(&vsock->rx_lock);
++ vsock->rx_buf_nr = 0;
++ vsock->rx_buf_max_nr = 0;
++ mutex_unlock(&vsock->rx_lock);
++
++ atomic_set(&vsock->queued_replies, 0);
++
+ ret = virtio_find_vqs(vdev, VSOCK_VQ_MAX, vsock->vqs, vqs_info, NULL);
+ if (ret < 0)
+ return ret;
+@@ -779,9 +786,6 @@ static int virtio_vsock_probe(struct virtio_device *vdev)
+
+ vsock->vdev = vdev;
+
+- vsock->rx_buf_nr = 0;
+- vsock->rx_buf_max_nr = 0;
+- atomic_set(&vsock->queued_replies, 0);
+
+ mutex_init(&vsock->tx_lock);
+ mutex_init(&vsock->rx_lock);
+diff --git a/net/vmw_vsock/vsock_bpf.c b/net/vmw_vsock/vsock_bpf.c
+index f201d9eca1df2f..07b96d56f3a577 100644
+--- a/net/vmw_vsock/vsock_bpf.c
++++ b/net/vmw_vsock/vsock_bpf.c
+@@ -87,7 +87,7 @@ static int vsock_bpf_recvmsg(struct sock *sk, struct msghdr *msg,
+ lock_sock(sk);
+ vsk = vsock_sk(sk);
+
+- if (!vsk->transport) {
++ if (WARN_ON_ONCE(!vsk->transport)) {
+ copied = -ENODEV;
+ goto out;
+ }
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 77b6ac9b5c11bc..9955c4d54e42a7 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -678,12 +678,18 @@ static int snd_seq_deliver_single_event(struct snd_seq_client *client,
+ dest_port->time_real);
+
+ #if IS_ENABLED(CONFIG_SND_SEQ_UMP)
+- if (!(dest->filter & SNDRV_SEQ_FILTER_NO_CONVERT)) {
+- if (snd_seq_ev_is_ump(event)) {
++ if (snd_seq_ev_is_ump(event)) {
++ if (!(dest->filter & SNDRV_SEQ_FILTER_NO_CONVERT)) {
+ result = snd_seq_deliver_from_ump(client, dest, dest_port,
+ event, atomic, hop);
+ goto __skip;
+- } else if (snd_seq_client_is_ump(dest)) {
++ } else if (dest->type == USER_CLIENT &&
++ !snd_seq_client_is_ump(dest)) {
++ result = 0; // drop the event
++ goto __skip;
++ }
++ } else if (snd_seq_client_is_ump(dest)) {
++ if (!(dest->filter & SNDRV_SEQ_FILTER_NO_CONVERT)) {
+ result = snd_seq_deliver_to_ump(client, dest, dest_port,
+ event, atomic, hop);
+ goto __skip;
+diff --git a/sound/pci/hda/hda_codec.c b/sound/pci/hda/hda_codec.c
+index 14763c0f31ad9f..46a2204049993d 100644
+--- a/sound/pci/hda/hda_codec.c
++++ b/sound/pci/hda/hda_codec.c
+@@ -2470,7 +2470,9 @@ int snd_hda_create_dig_out_ctls(struct hda_codec *codec,
+ break;
+ id = kctl->id;
+ id.index = spdif_index;
+- snd_ctl_rename_id(codec->card, &kctl->id, &id);
++ err = snd_ctl_rename_id(codec->card, &kctl->id, &id);
++ if (err < 0)
++ return err;
+ }
+ bus->primary_dig_out_type = HDA_PCM_TYPE_HDMI;
+ }
+diff --git a/sound/pci/hda/patch_conexant.c b/sound/pci/hda/patch_conexant.c
+index 538c37a78a56f7..84ab357b840d67 100644
+--- a/sound/pci/hda/patch_conexant.c
++++ b/sound/pci/hda/patch_conexant.c
+@@ -1080,6 +1080,7 @@ static const struct hda_quirk cxt5066_fixups[] = {
+ SND_PCI_QUIRK(0x103c, 0x814f, "HP ZBook 15u G3", CXT_FIXUP_MUTE_LED_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x8174, "HP Spectre x360", CXT_FIXUP_HP_SPECTRE),
+ SND_PCI_QUIRK(0x103c, 0x822e, "HP ProBook 440 G4", CXT_FIXUP_MUTE_LED_GPIO),
++ SND_PCI_QUIRK(0x103c, 0x8231, "HP ProBook 450 G4", CXT_FIXUP_MUTE_LED_GPIO),
+ SND_PCI_QUIRK(0x103c, 0x828c, "HP EliteBook 840 G4", CXT_FIXUP_HP_DOCK),
+ SND_PCI_QUIRK(0x103c, 0x8299, "HP 800 G3 SFF", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x103c, 0x829a, "HP 800 G3 DM", CXT_FIXUP_HP_MIC_NO_PRESENCE),
+diff --git a/sound/pci/hda/patch_cs8409-tables.c b/sound/pci/hda/patch_cs8409-tables.c
+index 759f48038273df..621f947e38174d 100644
+--- a/sound/pci/hda/patch_cs8409-tables.c
++++ b/sound/pci/hda/patch_cs8409-tables.c
+@@ -121,7 +121,7 @@ static const struct cs8409_i2c_param cs42l42_init_reg_seq[] = {
+ { CS42L42_MIXER_CHA_VOL, 0x3F },
+ { CS42L42_MIXER_CHB_VOL, 0x3F },
+ { CS42L42_MIXER_ADC_VOL, 0x3f },
+- { CS42L42_HP_CTL, 0x03 },
++ { CS42L42_HP_CTL, 0x0D },
+ { CS42L42_MIC_DET_CTL1, 0xB6 },
+ { CS42L42_TIPSENSE_CTL, 0xC2 },
+ { CS42L42_HS_CLAMP_DISABLE, 0x01 },
+@@ -315,7 +315,7 @@ static const struct cs8409_i2c_param dolphin_c0_init_reg_seq[] = {
+ { CS42L42_ASP_TX_SZ_EN, 0x01 },
+ { CS42L42_PWR_CTL1, 0x0A },
+ { CS42L42_PWR_CTL2, 0x84 },
+- { CS42L42_HP_CTL, 0x03 },
++ { CS42L42_HP_CTL, 0x0D },
+ { CS42L42_MIXER_CHA_VOL, 0x3F },
+ { CS42L42_MIXER_CHB_VOL, 0x3F },
+ { CS42L42_MIXER_ADC_VOL, 0x3f },
+@@ -371,7 +371,7 @@ static const struct cs8409_i2c_param dolphin_c1_init_reg_seq[] = {
+ { CS42L42_ASP_TX_SZ_EN, 0x00 },
+ { CS42L42_PWR_CTL1, 0x0E },
+ { CS42L42_PWR_CTL2, 0x84 },
+- { CS42L42_HP_CTL, 0x01 },
++ { CS42L42_HP_CTL, 0x0D },
+ { CS42L42_MIXER_CHA_VOL, 0x3F },
+ { CS42L42_MIXER_CHB_VOL, 0x3F },
+ { CS42L42_MIXER_ADC_VOL, 0x3f },
+diff --git a/sound/pci/hda/patch_cs8409.c b/sound/pci/hda/patch_cs8409.c
+index 614327218634c0..b760332a4e3577 100644
+--- a/sound/pci/hda/patch_cs8409.c
++++ b/sound/pci/hda/patch_cs8409.c
+@@ -876,7 +876,7 @@ static void cs42l42_resume(struct sub_codec *cs42l42)
+ { CS42L42_DET_INT_STATUS2, 0x00 },
+ { CS42L42_TSRS_PLUG_STATUS, 0x00 },
+ };
+- int fsv_old, fsv_new;
++ unsigned int fsv;
+
+ /* Bring CS42L42 out of Reset */
+ spec->gpio_data = snd_hda_codec_read(codec, CS8409_PIN_AFG, 0, AC_VERB_GET_GPIO_DATA, 0);
+@@ -893,13 +893,15 @@ static void cs42l42_resume(struct sub_codec *cs42l42)
+ /* Clear interrupts, by reading interrupt status registers */
+ cs8409_i2c_bulk_read(cs42l42, irq_regs, ARRAY_SIZE(irq_regs));
+
+- fsv_old = cs8409_i2c_read(cs42l42, CS42L42_HP_CTL);
+- if (cs42l42->full_scale_vol == CS42L42_FULL_SCALE_VOL_0DB)
+- fsv_new = fsv_old & ~CS42L42_FULL_SCALE_VOL_MASK;
+- else
+- fsv_new = fsv_old & CS42L42_FULL_SCALE_VOL_MASK;
+- if (fsv_new != fsv_old)
+- cs8409_i2c_write(cs42l42, CS42L42_HP_CTL, fsv_new);
++ fsv = cs8409_i2c_read(cs42l42, CS42L42_HP_CTL);
++ if (cs42l42->full_scale_vol) {
++ // Set the full scale volume bit
++ fsv |= CS42L42_FULL_SCALE_VOL_MASK;
++ cs8409_i2c_write(cs42l42, CS42L42_HP_CTL, fsv);
++ }
++ // Unmute analog channels A and B
++ fsv = (fsv & ~CS42L42_ANA_MUTE_AB);
++ cs8409_i2c_write(cs42l42, CS42L42_HP_CTL, fsv);
+
+ /* we have to explicitly allow unsol event handling even during the
+ * resume phase so that the jack event is processed properly
+@@ -920,7 +922,7 @@ static void cs42l42_suspend(struct sub_codec *cs42l42)
+ { CS42L42_MIXER_CHA_VOL, 0x3F },
+ { CS42L42_MIXER_ADC_VOL, 0x3F },
+ { CS42L42_MIXER_CHB_VOL, 0x3F },
+- { CS42L42_HP_CTL, 0x0F },
++ { CS42L42_HP_CTL, 0x0D },
+ { CS42L42_ASP_RX_DAI0_EN, 0x00 },
+ { CS42L42_ASP_CLK_CFG, 0x00 },
+ { CS42L42_PWR_CTL1, 0xFE },
+diff --git a/sound/pci/hda/patch_cs8409.h b/sound/pci/hda/patch_cs8409.h
+index 5e48115caf096b..14645d25e70fd2 100644
+--- a/sound/pci/hda/patch_cs8409.h
++++ b/sound/pci/hda/patch_cs8409.h
+@@ -230,9 +230,10 @@ enum cs8409_coefficient_index_registers {
+ #define CS42L42_PDN_TIMEOUT_US (250000)
+ #define CS42L42_PDN_SLEEP_US (2000)
+ #define CS42L42_INIT_TIMEOUT_MS (45)
++#define CS42L42_ANA_MUTE_AB (0x0C)
+ #define CS42L42_FULL_SCALE_VOL_MASK (2)
+-#define CS42L42_FULL_SCALE_VOL_0DB (1)
+-#define CS42L42_FULL_SCALE_VOL_MINUS6DB (0)
++#define CS42L42_FULL_SCALE_VOL_0DB (0)
++#define CS42L42_FULL_SCALE_VOL_MINUS6DB (1)
+
+ /* Dell BULLSEYE / WARLOCK / CYBORG Specific Definitions */
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f3f849b96402d1..9bf99fe6cd34dd 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3790,6 +3790,7 @@ static void alc225_init(struct hda_codec *codec)
+ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
+
+ msleep(75);
++ alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
+ alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* High power */
+ }
+ }
+diff --git a/sound/soc/fsl/fsl_micfil.c b/sound/soc/fsl/fsl_micfil.c
+index 67c2d4cb0dea21..7cfe77b57b3c25 100644
+--- a/sound/soc/fsl/fsl_micfil.c
++++ b/sound/soc/fsl/fsl_micfil.c
+@@ -156,6 +156,8 @@ static int micfil_set_quality(struct fsl_micfil *micfil)
+ case QUALITY_VLOW2:
+ qsel = MICFIL_QSEL_VLOW2_QUALITY;
+ break;
++ default:
++ return -EINVAL;
+ }
+
+ return regmap_update_bits(micfil->regmap, REG_MICFIL_CTRL2,
+diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
+index 8e7b75cf64db42..ff3671226306bd 100644
+--- a/sound/soc/fsl/imx-audmix.c
++++ b/sound/soc/fsl/imx-audmix.c
+@@ -23,7 +23,6 @@ struct imx_audmix {
+ struct snd_soc_card card;
+ struct platform_device *audmix_pdev;
+ struct platform_device *out_pdev;
+- struct clk *cpu_mclk;
+ int num_dai;
+ struct snd_soc_dai_link *dai;
+ int num_dai_conf;
+@@ -32,34 +31,11 @@ struct imx_audmix {
+ struct snd_soc_dapm_route *dapm_routes;
+ };
+
+-static const u32 imx_audmix_rates[] = {
+- 8000, 12000, 16000, 24000, 32000, 48000, 64000, 96000,
+-};
+-
+-static const struct snd_pcm_hw_constraint_list imx_audmix_rate_constraints = {
+- .count = ARRAY_SIZE(imx_audmix_rates),
+- .list = imx_audmix_rates,
+-};
+-
+ static int imx_audmix_fe_startup(struct snd_pcm_substream *substream)
+ {
+- struct snd_soc_pcm_runtime *rtd = snd_soc_substream_to_rtd(substream);
+- struct imx_audmix *priv = snd_soc_card_get_drvdata(rtd->card);
+ struct snd_pcm_runtime *runtime = substream->runtime;
+- struct device *dev = rtd->card->dev;
+- unsigned long clk_rate = clk_get_rate(priv->cpu_mclk);
+ int ret;
+
+- if (clk_rate % 24576000 == 0) {
+- ret = snd_pcm_hw_constraint_list(runtime, 0,
+- SNDRV_PCM_HW_PARAM_RATE,
+- &imx_audmix_rate_constraints);
+- if (ret < 0)
+- return ret;
+- } else {
+- dev_warn(dev, "mclk may be not supported %lu\n", clk_rate);
+- }
+-
+ ret = snd_pcm_hw_constraint_minmax(runtime, SNDRV_PCM_HW_PARAM_CHANNELS,
+ 1, 8);
+ if (ret < 0)
+@@ -325,13 +301,6 @@ static int imx_audmix_probe(struct platform_device *pdev)
+ }
+ put_device(&cpu_pdev->dev);
+
+- priv->cpu_mclk = devm_clk_get(&cpu_pdev->dev, "mclk1");
+- if (IS_ERR(priv->cpu_mclk)) {
+- ret = PTR_ERR(priv->cpu_mclk);
+- dev_err(&cpu_pdev->dev, "failed to get DAI mclk1: %d\n", ret);
+- return ret;
+- }
+-
+ priv->audmix_pdev = audmix_pdev;
+ priv->out_pdev = cpu_pdev;
+
+diff --git a/sound/soc/rockchip/rockchip_i2s_tdm.c b/sound/soc/rockchip/rockchip_i2s_tdm.c
+index acd75e48851fcf..7feefeb6b876dc 100644
+--- a/sound/soc/rockchip/rockchip_i2s_tdm.c
++++ b/sound/soc/rockchip/rockchip_i2s_tdm.c
+@@ -451,11 +451,11 @@ static int rockchip_i2s_tdm_set_fmt(struct snd_soc_dai *cpu_dai,
+ break;
+ case SND_SOC_DAIFMT_DSP_A:
+ val = I2S_TXCR_TFS_TDM_PCM;
+- tdm_val = TDM_SHIFT_CTRL(0);
++ tdm_val = TDM_SHIFT_CTRL(2);
+ break;
+ case SND_SOC_DAIFMT_DSP_B:
+ val = I2S_TXCR_TFS_TDM_PCM;
+- tdm_val = TDM_SHIFT_CTRL(2);
++ tdm_val = TDM_SHIFT_CTRL(4);
+ break;
+ default:
+ ret = -EINVAL;
+diff --git a/sound/soc/sh/rz-ssi.c b/sound/soc/sh/rz-ssi.c
+index 32db2cead8a4ec..4f483bfa584f5b 100644
+--- a/sound/soc/sh/rz-ssi.c
++++ b/sound/soc/sh/rz-ssi.c
+@@ -416,8 +416,12 @@ static int rz_ssi_stop(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
+ rz_ssi_reg_mask_setl(ssi, SSICR, SSICR_TEN | SSICR_REN, 0);
+
+ /* Cancel all remaining DMA transactions */
+- if (rz_ssi_is_dma_enabled(ssi))
+- dmaengine_terminate_async(strm->dma_ch);
++ if (rz_ssi_is_dma_enabled(ssi)) {
++ if (ssi->playback.dma_ch)
++ dmaengine_terminate_async(ssi->playback.dma_ch);
++ if (ssi->capture.dma_ch)
++ dmaengine_terminate_async(ssi->capture.dma_ch);
++ }
+
+ rz_ssi_set_idle(ssi);
+
+@@ -524,6 +528,8 @@ static int rz_ssi_pio_send(struct rz_ssi_priv *ssi, struct rz_ssi_stream *strm)
+ sample_space = strm->fifo_sample_size;
+ ssifsr = rz_ssi_reg_readl(ssi, SSIFSR);
+ sample_space -= (ssifsr >> SSIFSR_TDC_SHIFT) & SSIFSR_TDC_MASK;
++ if (sample_space < 0)
++ return -EINVAL;
+
+ /* Only add full frames at a time */
+ while (frames_left && (sample_space >= runtime->channels)) {
+diff --git a/sound/soc/sof/ipc4-topology.c b/sound/soc/sof/ipc4-topology.c
+index 240fee2166d125..f82db7f2a6b7e7 100644
+--- a/sound/soc/sof/ipc4-topology.c
++++ b/sound/soc/sof/ipc4-topology.c
+@@ -671,10 +671,16 @@ static int sof_ipc4_widget_setup_comp_dai(struct snd_sof_widget *swidget)
+ }
+
+ list_for_each_entry(w, &sdev->widget_list, list) {
+- if (w->widget->sname &&
++ struct snd_sof_dai *alh_dai;
++
++ if (!WIDGET_IS_DAI(w->id) || !w->widget->sname ||
+ strcmp(w->widget->sname, swidget->widget->sname))
+ continue;
+
++ alh_dai = w->private;
++ if (alh_dai->type != SOF_DAI_INTEL_ALH)
++ continue;
++
+ blob->alh_cfg.device_count++;
+ }
+
+@@ -1973,11 +1979,13 @@ sof_ipc4_prepare_copier_module(struct snd_sof_widget *swidget,
+ list_for_each_entry(w, &sdev->widget_list, list) {
+ u32 node_type;
+
+- if (w->widget->sname &&
++ if (!WIDGET_IS_DAI(w->id) || !w->widget->sname ||
+ strcmp(w->widget->sname, swidget->widget->sname))
+ continue;
+
+ dai = w->private;
++ if (dai->type != SOF_DAI_INTEL_ALH)
++ continue;
+ alh_copier = (struct sof_ipc4_copier *)dai->private;
+ alh_data = &alh_copier->data;
+ node_type = SOF_IPC4_GET_NODE_TYPE(alh_data->gtw_cfg.node_id);
+diff --git a/sound/soc/sof/pcm.c b/sound/soc/sof/pcm.c
+index 35a7462d8b6938..c5c6353f18ceef 100644
+--- a/sound/soc/sof/pcm.c
++++ b/sound/soc/sof/pcm.c
+@@ -511,6 +511,8 @@ static int sof_pcm_close(struct snd_soc_component *component,
+ */
+ }
+
++ spcm->stream[substream->stream].substream = NULL;
++
+ return 0;
+ }
+
+diff --git a/sound/soc/sof/stream-ipc.c b/sound/soc/sof/stream-ipc.c
+index 794c7bbccbaf92..8262443ac89ad1 100644
+--- a/sound/soc/sof/stream-ipc.c
++++ b/sound/soc/sof/stream-ipc.c
+@@ -43,7 +43,7 @@ int sof_ipc_msg_data(struct snd_sof_dev *sdev,
+ return -ESTRPIPE;
+
+ posn_offset = stream->posn_offset;
+- } else {
++ } else if (sps->cstream) {
+
+ struct sof_compr_stream *sstream = sps->cstream->runtime->private_data;
+
+@@ -51,6 +51,10 @@ int sof_ipc_msg_data(struct snd_sof_dev *sdev,
+ return -ESTRPIPE;
+
+ posn_offset = sstream->posn_offset;
++
++ } else {
++ dev_err(sdev->dev, "%s: No stream opened\n", __func__);
++ return -EINVAL;
+ }
+
+ snd_sof_dsp_mailbox_read(sdev, posn_offset, p, sz);
+diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod-events.h b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod-events.h
+index 6c3b4d4f173ac6..aeef86b3da747a 100644
+--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod-events.h
++++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod-events.h
+@@ -40,6 +40,14 @@ DECLARE_TRACE(bpf_testmod_test_nullable_bare,
+ TP_ARGS(ctx__nullable)
+ );
+
++struct sk_buff;
++
++DECLARE_TRACE(bpf_testmod_test_raw_tp_null,
++ TP_PROTO(struct sk_buff *skb),
++ TP_ARGS(skb)
++);
++
++
+ #undef BPF_TESTMOD_DECLARE_TRACE
+ #ifdef DECLARE_TRACE_WRITABLE
+ #define BPF_TESTMOD_DECLARE_TRACE(call, proto, args, size) \
+diff --git a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+index 8835761d9a126a..4e6a9e9c036873 100644
+--- a/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
++++ b/tools/testing/selftests/bpf/bpf_testmod/bpf_testmod.c
+@@ -380,6 +380,8 @@ bpf_testmod_test_read(struct file *file, struct kobject *kobj,
+
+ (void)bpf_testmod_test_arg_ptr_to_struct(&struct_arg1_2);
+
++ (void)trace_bpf_testmod_test_raw_tp_null(NULL);
++
+ struct_arg3 = kmalloc((sizeof(struct bpf_testmod_struct_arg_3) +
+ sizeof(int)), GFP_KERNEL);
+ if (struct_arg3 != NULL) {
+diff --git a/tools/testing/selftests/bpf/prog_tests/raw_tp_null.c b/tools/testing/selftests/bpf/prog_tests/raw_tp_null.c
+new file mode 100644
+index 00000000000000..6fa19449297e9b
+--- /dev/null
++++ b/tools/testing/selftests/bpf/prog_tests/raw_tp_null.c
+@@ -0,0 +1,25 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
++
++#include <test_progs.h>
++#include "raw_tp_null.skel.h"
++
++void test_raw_tp_null(void)
++{
++ struct raw_tp_null *skel;
++
++ skel = raw_tp_null__open_and_load();
++ if (!ASSERT_OK_PTR(skel, "raw_tp_null__open_and_load"))
++ return;
++
++ skel->bss->tid = sys_gettid();
++
++ if (!ASSERT_OK(raw_tp_null__attach(skel), "raw_tp_null__attach"))
++ goto end;
++
++ ASSERT_OK(trigger_module_test_read(2), "trigger testmod read");
++ ASSERT_EQ(skel->bss->i, 3, "invocations");
++
++end:
++ raw_tp_null__destroy(skel);
++}
+diff --git a/tools/testing/selftests/bpf/progs/raw_tp_null.c b/tools/testing/selftests/bpf/progs/raw_tp_null.c
+new file mode 100644
+index 00000000000000..457f34c151e32f
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/raw_tp_null.c
+@@ -0,0 +1,32 @@
++// SPDX-License-Identifier: GPL-2.0
++/* Copyright (c) 2024 Meta Platforms, Inc. and affiliates. */
++
++#include <vmlinux.h>
++#include <bpf/bpf_tracing.h>
++
++char _license[] SEC("license") = "GPL";
++
++int tid;
++int i;
++
++SEC("tp_btf/bpf_testmod_test_raw_tp_null")
++int BPF_PROG(test_raw_tp_null, struct sk_buff *skb)
++{
++ struct task_struct *task = bpf_get_current_task_btf();
++
++ if (task->pid != tid)
++ return 0;
++
++ i = i + skb->mark + 1;
++ /* The compiler may move the NULL check before this deref, which causes
++ * the load to fail as deref of scalar. Prevent that by using a barrier.
++ */
++ barrier();
++ /* If dead code elimination kicks in, the increment below will
++ * be removed. For raw_tp programs, we mark input arguments as
++ * PTR_MAYBE_NULL, so branch prediction should never kick in.
++ */
++ if (!skb)
++ i += 2;
++ return 0;
++}
+diff --git a/tools/testing/selftests/mm/Makefile b/tools/testing/selftests/mm/Makefile
+index 02e1204971b0a8..c0138cb19705bc 100644
+--- a/tools/testing/selftests/mm/Makefile
++++ b/tools/testing/selftests/mm/Makefile
+@@ -33,9 +33,16 @@ endif
+ # LDLIBS.
+ MAKEFLAGS += --no-builtin-rules
+
+-CFLAGS = -Wall -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES) $(TOOLS_INCLUDES)
++CFLAGS = -Wall -O2 -I $(top_srcdir) $(EXTRA_CFLAGS) $(KHDR_INCLUDES) $(TOOLS_INCLUDES)
+ LDLIBS = -lrt -lpthread -lm
+
++# Some distributions (such as Ubuntu) configure GCC so that _FORTIFY_SOURCE is
++# automatically enabled at -O1 or above. This triggers various unused-result
++# warnings where functions such as read() or write() are called and their
++# return value is not checked. Disable _FORTIFY_SOURCE to silence those
++# warnings.
++CFLAGS += -U_FORTIFY_SOURCE
++
+ TEST_GEN_FILES = cow
+ TEST_GEN_FILES += compaction_test
+ TEST_GEN_FILES += gup_longterm
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-03-07 18:22 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-03-07 18:22 UTC (permalink / raw
To: gentoo-commits
commit: d5a9b4d7acad17d938141574f2e0bd1e9d28bbf8
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Mar 7 18:22:16 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Mar 7 18:22:16 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d5a9b4d7
Linux patch 6.12.18
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1017_linux-6.12.18.patch | 8470 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 8474 insertions(+)
diff --git a/0000_README b/0000_README
index 8efc8938..85e743e9 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch: 1016_linux-6.12.17.patch
From: https://www.kernel.org
Desc: Linux 6.12.17
+Patch: 1017_linux-6.12.18.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.18
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1017_linux-6.12.18.patch b/1017_linux-6.12.18.patch
new file mode 100644
index 00000000..75258348
--- /dev/null
+++ b/1017_linux-6.12.18.patch
@@ -0,0 +1,8470 @@
+diff --git a/Makefile b/Makefile
+index e8b8c5b3840505..17dfe0a8ca8fa9 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index c315bc1a4e9adf..1bf70fa1045dcd 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -1243,7 +1243,7 @@ int kvm_arm_pvtime_has_attr(struct kvm_vcpu *vcpu,
+ extern unsigned int __ro_after_init kvm_arm_vmid_bits;
+ int __init kvm_arm_vmid_alloc_init(void);
+ void __init kvm_arm_vmid_alloc_free(void);
+-bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid);
++void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid);
+ void kvm_arm_vmid_clear_active(void);
+
+ static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 117702f033218d..3cf65daa75a51f 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -580,6 +580,16 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ mmu = vcpu->arch.hw_mmu;
+ last_ran = this_cpu_ptr(mmu->last_vcpu_ran);
+
++ /*
++ * Ensure a VMID is allocated for the MMU before programming VTTBR_EL2,
++ * which happens eagerly in VHE.
++ *
++ * Also, the VMID allocator only preserves VMIDs that are active at the
++ * time of rollover, so KVM might need to grab a new VMID for the MMU if
++ * this is called from kvm_sched_in().
++ */
++ kvm_arm_vmid_update(&mmu->vmid);
++
+ /*
+ * We guarantee that both TLBs and I-cache are private to each
+ * vcpu. If detecting that a vcpu from the same VM has
+@@ -1155,18 +1165,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ */
+ preempt_disable();
+
+- /*
+- * The VMID allocator only tracks active VMIDs per
+- * physical CPU, and therefore the VMID allocated may not be
+- * preserved on VMID roll-over if the task was preempted,
+- * making a thread's VMID inactive. So we need to call
+- * kvm_arm_vmid_update() in non-premptible context.
+- */
+- if (kvm_arm_vmid_update(&vcpu->arch.hw_mmu->vmid) &&
+- has_vhe())
+- __load_stage2(vcpu->arch.hw_mmu,
+- vcpu->arch.hw_mmu->arch);
+-
+ kvm_pmu_flush_hwstate(vcpu);
+
+ local_irq_disable();
+diff --git a/arch/arm64/kvm/vmid.c b/arch/arm64/kvm/vmid.c
+index 806223b7022afd..7fe8ba1a2851c5 100644
+--- a/arch/arm64/kvm/vmid.c
++++ b/arch/arm64/kvm/vmid.c
+@@ -135,11 +135,10 @@ void kvm_arm_vmid_clear_active(void)
+ atomic64_set(this_cpu_ptr(&active_vmids), VMID_ACTIVE_INVALID);
+ }
+
+-bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
++void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
+ {
+ unsigned long flags;
+ u64 vmid, old_active_vmid;
+- bool updated = false;
+
+ vmid = atomic64_read(&kvm_vmid->id);
+
+@@ -157,21 +156,17 @@ bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
+ if (old_active_vmid != 0 && vmid_gen_match(vmid) &&
+ 0 != atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_vmids),
+ old_active_vmid, vmid))
+- return false;
++ return;
+
+ raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
+
+ /* Check that our VMID belongs to the current generation. */
+ vmid = atomic64_read(&kvm_vmid->id);
+- if (!vmid_gen_match(vmid)) {
++ if (!vmid_gen_match(vmid))
+ vmid = new_vmid(kvm_vmid);
+- updated = true;
+- }
+
+ atomic64_set(this_cpu_ptr(&active_vmids), vmid);
+ raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
+-
+- return updated;
+ }
+
+ /*
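The rework above keeps the lockless fast path: reclaim the per-CPU active VMID with a relaxed cmpxchg, and only take cpu_vmid_lock when that fails. A rough userspace analogue of the shape, using C11 atomics (the names and the validity test are illustrative, not KVM's):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define ID_ACTIVE_INVALID 0          /* slot holds no reusable id */

    _Atomic uint64_t active_id;          /* stand-in for this CPU's active_vmids */

    bool id_gen_matches(uint64_t id)     /* stand-in for vmid_gen_match() */
    {
        return id != 0;
    }

    /* Fast path: if the slot held a reusable id and ours is still from
     * the current generation, swap it back in with a relaxed cmpxchg.
     * Returns true when the caller must fall back to the locked path.
     */
    bool need_slow_path(uint64_t id)
    {
        uint64_t old = atomic_load_explicit(&active_id, memory_order_relaxed);

        if (old != ID_ACTIVE_INVALID && id_gen_matches(id) &&
            atomic_compare_exchange_strong_explicit(&active_id, &old, id,
                                                    memory_order_relaxed,
                                                    memory_order_relaxed))
            return false;                /* reclaimed without the lock */

        return true;                     /* take the lock, maybe allocate anew */
    }

Moving the call into vcpu_load lets the void return type work: the slow path simply installs the VMID before VTTBR_EL2 is programmed, so no caller needs to reload stage-2 afterwards.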
+diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
+index ea71ef2e343c2c..93ba66de160ce4 100644
+--- a/arch/arm64/mm/init.c
++++ b/arch/arm64/mm/init.c
+@@ -278,12 +278,7 @@ void __init arm64_memblock_init(void)
+
+ if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
+ extern u16 memstart_offset_seed;
+-
+- /*
+- * Use the sanitised version of id_aa64mmfr0_el1 so that linear
+- * map randomization can be enabled by shrinking the IPA space.
+- */
+- u64 mmfr0 = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1);
++ u64 mmfr0 = read_cpuid(ID_AA64MMFR0_EL1);
+ int parange = cpuid_feature_extract_unsigned_field(
+ mmfr0, ID_AA64MMFR0_EL1_PARANGE_SHIFT);
+ s64 range = linear_region_size -
+diff --git a/arch/riscv/include/asm/futex.h b/arch/riscv/include/asm/futex.h
+index fc8130f995c1ee..6907c456ac8c05 100644
+--- a/arch/riscv/include/asm/futex.h
++++ b/arch/riscv/include/asm/futex.h
+@@ -93,7 +93,7 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
+ _ASM_EXTABLE_UACCESS_ERR(1b, 3b, %[r]) \
+ _ASM_EXTABLE_UACCESS_ERR(2b, 3b, %[r]) \
+ : [r] "+r" (ret), [v] "=&r" (val), [u] "+m" (*uaddr), [t] "=&r" (tmp)
+- : [ov] "Jr" (oldval), [nv] "Jr" (newval)
++ : [ov] "Jr" ((long)(int)oldval), [nv] "Jr" (newval)
+ : "memory");
+ __disable_user_access();
+
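The futex change is a sign-extension fix: on RV64, lr.w sign-extends the loaded 32-bit word into a 64-bit register, so the comparison operand must be widened the same way. A small standalone demonstration of what the (long)(int) cast changes:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t oldval = 0x80000000u;        /* bit 31 set */

        long zext = (long)oldval;             /* 0x0000000080000000 */
        long sext = (long)(int)oldval;        /* 0xffffffff80000000, what lr.w yields */

        /* Only the sign-extended form compares equal to the register
         * value produced by lr.w, so the inline asm must be fed sext.
         */
        printf("zext=%lx sext=%lx\n", zext, sext);
        return 0;
    }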
+diff --git a/arch/riscv/kernel/cacheinfo.c b/arch/riscv/kernel/cacheinfo.c
+index 2d40736fc37cec..26b085dbdd073f 100644
+--- a/arch/riscv/kernel/cacheinfo.c
++++ b/arch/riscv/kernel/cacheinfo.c
+@@ -108,11 +108,11 @@ int populate_cache_leaves(unsigned int cpu)
+ if (!np)
+ return -ENOENT;
+
+- if (of_property_read_bool(np, "cache-size"))
++ if (of_property_present(np, "cache-size"))
+ ci_leaf_init(this_leaf++, CACHE_TYPE_UNIFIED, level);
+- if (of_property_read_bool(np, "i-cache-size"))
++ if (of_property_present(np, "i-cache-size"))
+ ci_leaf_init(this_leaf++, CACHE_TYPE_INST, level);
+- if (of_property_read_bool(np, "d-cache-size"))
++ if (of_property_present(np, "d-cache-size"))
+ ci_leaf_init(this_leaf++, CACHE_TYPE_DATA, level);
+
+ prev = np;
+@@ -125,11 +125,11 @@ int populate_cache_leaves(unsigned int cpu)
+ break;
+ if (level <= levels)
+ break;
+- if (of_property_read_bool(np, "cache-size"))
++ if (of_property_present(np, "cache-size"))
+ ci_leaf_init(this_leaf++, CACHE_TYPE_UNIFIED, level);
+- if (of_property_read_bool(np, "i-cache-size"))
++ if (of_property_present(np, "i-cache-size"))
+ ci_leaf_init(this_leaf++, CACHE_TYPE_INST, level);
+- if (of_property_read_bool(np, "d-cache-size"))
++ if (of_property_present(np, "d-cache-size"))
+ ci_leaf_init(this_leaf++, CACHE_TYPE_DATA, level);
+ levels = level;
+ }
+diff --git a/arch/riscv/kernel/cpufeature.c b/arch/riscv/kernel/cpufeature.c
+index 3a8eeaa9310c32..308430af3e8f83 100644
+--- a/arch/riscv/kernel/cpufeature.c
++++ b/arch/riscv/kernel/cpufeature.c
+@@ -454,7 +454,7 @@ static void __init riscv_resolve_isa(unsigned long *source_isa,
+ if (bit < RISCV_ISA_EXT_BASE)
+ *this_hwcap |= isa2hwcap[bit];
+ }
+- } while (loop && memcmp(prev_resolved_isa, resolved_isa, sizeof(prev_resolved_isa)));
++ } while (loop && !bitmap_equal(prev_resolved_isa, resolved_isa, RISCV_ISA_EXT_MAX));
+ }
+
+ static void __init match_isa_ext(const char *name, const char *name_end, unsigned long *bitmap)
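The memcmp-to-bitmap_equal change matters because memcmp() over two bitmaps also compares the unused bits past RISCV_ISA_EXT_MAX, which may hold junk; bitmap_equal() masks the tail of the last word. A hand-rolled sketch of the masked comparison (not the kernel helper itself):

    #include <stdbool.h>
    #include <string.h>

    #define BITS_PER_LONG (8 * sizeof(unsigned long))

    /* Compare only the first nbits of two bitmaps, masking the unused
     * tail bits of the last word; this is the property memcmp() lacks.
     */
    bool bitmap_equal_sketch(const unsigned long *a, const unsigned long *b,
                             unsigned int nbits)
    {
        unsigned int full = nbits / BITS_PER_LONG;
        unsigned int rem = nbits % BITS_PER_LONG;

        if (memcmp(a, b, full * sizeof(unsigned long)))
            return false;

        if (rem) {
            unsigned long mask = (1UL << rem) - 1;

            if ((a[full] ^ b[full]) & mask)
                return false;
        }
        return true;
    }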
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index 2b3c152d3c91f5..7934613a98c883 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -288,8 +288,8 @@ void __init setup_arch(char **cmdline_p)
+
+ riscv_init_cbo_blocksizes();
+ riscv_fill_hwcap();
+- init_rt_signal_env();
+ apply_boot_alternatives();
++ init_rt_signal_env();
+
+ if (IS_ENABLED(CONFIG_RISCV_ISA_ZICBOM) &&
+ riscv_isa_extension_available(NULL, ZICBOM))
+diff --git a/arch/riscv/kernel/signal.c b/arch/riscv/kernel/signal.c
+index dcd28241945613..c3c517b9eee554 100644
+--- a/arch/riscv/kernel/signal.c
++++ b/arch/riscv/kernel/signal.c
+@@ -215,12 +215,6 @@ static size_t get_rt_frame_size(bool cal_all)
+ if (cal_all || riscv_v_vstate_query(task_pt_regs(current)))
+ total_context_size += riscv_v_sc_size;
+ }
+- /*
+- * Preserved a __riscv_ctx_hdr for END signal context header if an
+- * extension uses __riscv_extra_ext_header
+- */
+- if (total_context_size)
+- total_context_size += sizeof(struct __riscv_ctx_hdr);
+
+ frame_size += total_context_size;
+
+diff --git a/arch/riscv/kvm/vcpu_sbi_hsm.c b/arch/riscv/kvm/vcpu_sbi_hsm.c
+index dce667f4b6ab08..3070bb31745de7 100644
+--- a/arch/riscv/kvm/vcpu_sbi_hsm.c
++++ b/arch/riscv/kvm/vcpu_sbi_hsm.c
+@@ -9,6 +9,7 @@
+ #include <linux/errno.h>
+ #include <linux/err.h>
+ #include <linux/kvm_host.h>
++#include <linux/wordpart.h>
+ #include <asm/sbi.h>
+ #include <asm/kvm_vcpu_sbi.h>
+
+@@ -79,12 +80,12 @@ static int kvm_sbi_hsm_vcpu_get_status(struct kvm_vcpu *vcpu)
+ target_vcpu = kvm_get_vcpu_by_id(vcpu->kvm, target_vcpuid);
+ if (!target_vcpu)
+ return SBI_ERR_INVALID_PARAM;
+- if (!kvm_riscv_vcpu_stopped(target_vcpu))
+- return SBI_HSM_STATE_STARTED;
+- else if (vcpu->stat.generic.blocking)
++ if (kvm_riscv_vcpu_stopped(target_vcpu))
++ return SBI_HSM_STATE_STOPPED;
++ else if (target_vcpu->stat.generic.blocking)
+ return SBI_HSM_STATE_SUSPENDED;
+ else
+- return SBI_HSM_STATE_STOPPED;
++ return SBI_HSM_STATE_STARTED;
+ }
+
+ static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+@@ -109,7 +110,7 @@ static int kvm_sbi_ext_hsm_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ }
+ return 0;
+ case SBI_EXT_HSM_HART_SUSPEND:
+- switch (cp->a0) {
++ switch (lower_32_bits(cp->a0)) {
+ case SBI_HSM_SUSPEND_RET_DEFAULT:
+ kvm_riscv_vcpu_wfi(vcpu);
+ break;
+diff --git a/arch/riscv/kvm/vcpu_sbi_replace.c b/arch/riscv/kvm/vcpu_sbi_replace.c
+index 9c2ab3dfa93aa5..5fbf3f94f1e855 100644
+--- a/arch/riscv/kvm/vcpu_sbi_replace.c
++++ b/arch/riscv/kvm/vcpu_sbi_replace.c
+@@ -21,7 +21,7 @@ static int kvm_sbi_ext_time_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ u64 next_cycle;
+
+ if (cp->a6 != SBI_EXT_TIME_SET_TIMER) {
+- retdata->err_val = SBI_ERR_INVALID_PARAM;
++ retdata->err_val = SBI_ERR_NOT_SUPPORTED;
+ return 0;
+ }
+
+@@ -51,9 +51,10 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ struct kvm_cpu_context *cp = &vcpu->arch.guest_context;
+ unsigned long hmask = cp->a0;
+ unsigned long hbase = cp->a1;
++ unsigned long hart_bit = 0, sentmask = 0;
+
+ if (cp->a6 != SBI_EXT_IPI_SEND_IPI) {
+- retdata->err_val = SBI_ERR_INVALID_PARAM;
++ retdata->err_val = SBI_ERR_NOT_SUPPORTED;
+ return 0;
+ }
+
+@@ -62,15 +63,23 @@ static int kvm_sbi_ext_ipi_handler(struct kvm_vcpu *vcpu, struct kvm_run *run,
+ if (hbase != -1UL) {
+ if (tmp->vcpu_id < hbase)
+ continue;
+- if (!(hmask & (1UL << (tmp->vcpu_id - hbase))))
++ hart_bit = tmp->vcpu_id - hbase;
++ if (hart_bit >= __riscv_xlen)
++ goto done;
++ if (!(hmask & (1UL << hart_bit)))
+ continue;
+ }
+ ret = kvm_riscv_vcpu_set_interrupt(tmp, IRQ_VS_SOFT);
+ if (ret < 0)
+ break;
++ sentmask |= 1UL << hart_bit;
+ kvm_riscv_vcpu_pmu_incr_fw(tmp, SBI_PMU_FW_IPI_RCVD);
+ }
+
++done:
++ if (hbase != -1UL && (hmask ^ sentmask))
++ retdata->err_val = SBI_ERR_INVALID_PARAM;
++
+ return ret;
+ }
+
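The sentmask accounting added above records every hart that actually received the IPI, then XORs it against the request so that any bit naming a nonexistent hart yields SBI_ERR_INVALID_PARAM instead of being silently dropped. A simplified sketch of that bookkeeping (an illustrative signature, not KVM's):

    #define XLEN 64

    /* Deliver an IPI to each hart set in hmask (relative to hbase) and
     * return -1 when any requested bit named a nonexistent hart.
     */
    int send_ipi_sketch(unsigned long hmask, unsigned long hbase,
                        unsigned long nr_harts,
                        void (*send)(unsigned long hart))
    {
        unsigned long sentmask = 0, bit;

        for (bit = 0; bit < XLEN; bit++) {
            if (!(hmask & (1UL << bit)))
                continue;
            if (hbase + bit >= nr_harts)
                break;                  /* no higher bit can be valid */
            send(hbase + bit);
            sentmask |= 1UL << bit;
        }

        /* Bits requested but never sent name harts that do not exist. */
        return (hmask ^ sentmask) ? -1 : 0;
    }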
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 1b0c2397d65753..6f8e9af827e0c9 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -1333,6 +1333,7 @@ config X86_REBOOTFIXUPS
+ config MICROCODE
+ def_bool y
+ depends on CPU_SUP_AMD || CPU_SUP_INTEL
++ select CRYPTO_LIB_SHA256 if CPU_SUP_AMD
+
+ config MICROCODE_INITRD32
+ def_bool y
+diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
+index 65ab6460aed4d7..0d33c85da45355 100644
+--- a/arch/x86/events/core.c
++++ b/arch/x86/events/core.c
+@@ -628,7 +628,7 @@ int x86_pmu_hw_config(struct perf_event *event)
+ if (event->attr.type == event->pmu->type)
+ event->hw.config |= x86_pmu_get_event_config(event);
+
+- if (event->attr.sample_period && x86_pmu.limit_period) {
++ if (!event->attr.freq && x86_pmu.limit_period) {
+ s64 left = event->attr.sample_period;
+ x86_pmu.limit_period(event, &left);
+ if (left > event->attr.sample_period)
+diff --git a/arch/x86/kernel/cpu/cyrix.c b/arch/x86/kernel/cpu/cyrix.c
+index 9651275aecd1bb..dfec2c61e3547d 100644
+--- a/arch/x86/kernel/cpu/cyrix.c
++++ b/arch/x86/kernel/cpu/cyrix.c
+@@ -153,8 +153,8 @@ static void geode_configure(void)
+ u8 ccr3;
+ local_irq_save(flags);
+
+- /* Suspend on halt power saving and enable #SUSP pin */
+- setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x88);
++ /* Suspend on halt power saving */
++ setCx86(CX86_CCR2, getCx86(CX86_CCR2) | 0x08);
+
+ ccr3 = getCx86(CX86_CCR3);
+ setCx86(CX86_CCR3, (ccr3 & 0x0f) | 0x10); /* enable MAPEN */
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index fb5d0c67fbab17..f5365b32582a5c 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -23,14 +23,18 @@
+
+ #include <linux/earlycpio.h>
+ #include <linux/firmware.h>
++#include <linux/bsearch.h>
+ #include <linux/uaccess.h>
+ #include <linux/vmalloc.h>
+ #include <linux/initrd.h>
+ #include <linux/kernel.h>
+ #include <linux/pci.h>
+
++#include <crypto/sha2.h>
++
+ #include <asm/microcode.h>
+ #include <asm/processor.h>
++#include <asm/cmdline.h>
+ #include <asm/setup.h>
+ #include <asm/cpu.h>
+ #include <asm/msr.h>
+@@ -145,6 +149,107 @@ ucode_path[] __maybe_unused = "kernel/x86/microcode/AuthenticAMD.bin";
+ */
+ static u32 bsp_cpuid_1_eax __ro_after_init;
+
++static bool sha_check = true;
++
++struct patch_digest {
++ u32 patch_id;
++ u8 sha256[SHA256_DIGEST_SIZE];
++};
++
++#include "amd_shas.c"
++
++static int cmp_id(const void *key, const void *elem)
++{
++ struct patch_digest *pd = (struct patch_digest *)elem;
++ u32 patch_id = *(u32 *)key;
++
++ if (patch_id == pd->patch_id)
++ return 0;
++ else if (patch_id < pd->patch_id)
++ return -1;
++ else
++ return 1;
++}
++
++static bool need_sha_check(u32 cur_rev)
++{
++ switch (cur_rev >> 8) {
++ case 0x80012: return cur_rev <= 0x800126f; break;
++ case 0x83010: return cur_rev <= 0x830107c; break;
++ case 0x86001: return cur_rev <= 0x860010e; break;
++ case 0x86081: return cur_rev <= 0x8608108; break;
++ case 0x87010: return cur_rev <= 0x8701034; break;
++ case 0x8a000: return cur_rev <= 0x8a0000a; break;
++ case 0xa0011: return cur_rev <= 0xa0011da; break;
++ case 0xa0012: return cur_rev <= 0xa001243; break;
++ case 0xa1011: return cur_rev <= 0xa101153; break;
++ case 0xa1012: return cur_rev <= 0xa10124e; break;
++ case 0xa1081: return cur_rev <= 0xa108109; break;
++ case 0xa2010: return cur_rev <= 0xa20102f; break;
++ case 0xa2012: return cur_rev <= 0xa201212; break;
++ case 0xa6012: return cur_rev <= 0xa60120a; break;
++ case 0xa7041: return cur_rev <= 0xa704109; break;
++ case 0xa7052: return cur_rev <= 0xa705208; break;
++ case 0xa7080: return cur_rev <= 0xa708009; break;
++ case 0xa70c0: return cur_rev <= 0xa70C009; break;
++ case 0xaa002: return cur_rev <= 0xaa00218; break;
++ default: break;
++ }
++
++ pr_info("You should not be seeing this. Please send the following couple of lines to x86-<at>-kernel.org\n");
++ pr_info("CPUID(1).EAX: 0x%x, current revision: 0x%x\n", bsp_cpuid_1_eax, cur_rev);
++ return true;
++}
++
++static bool verify_sha256_digest(u32 patch_id, u32 cur_rev, const u8 *data, unsigned int len)
++{
++ struct patch_digest *pd = NULL;
++ u8 digest[SHA256_DIGEST_SIZE];
++ struct sha256_state s;
++ int i;
++
++ if (x86_family(bsp_cpuid_1_eax) < 0x17 ||
++ x86_family(bsp_cpuid_1_eax) > 0x19)
++ return true;
++
++ if (!need_sha_check(cur_rev))
++ return true;
++
++ if (!sha_check)
++ return true;
++
++ pd = bsearch(&patch_id, phashes, ARRAY_SIZE(phashes), sizeof(struct patch_digest), cmp_id);
++ if (!pd) {
++ pr_err("No sha256 digest for patch ID: 0x%x found\n", patch_id);
++ return false;
++ }
++
++ sha256_init(&s);
++ sha256_update(&s, data, len);
++ sha256_final(&s, digest);
++
++ if (memcmp(digest, pd->sha256, sizeof(digest))) {
++ pr_err("Patch 0x%x SHA256 digest mismatch!\n", patch_id);
++
++ for (i = 0; i < SHA256_DIGEST_SIZE; i++)
++ pr_cont("0x%x ", digest[i]);
++ pr_info("\n");
++
++ return false;
++ }
++
++ return true;
++}
++
++static u32 get_patch_level(void)
++{
++ u32 rev, dummy __always_unused;
++
++ native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
++
++ return rev;
++}
++
+ static union cpuid_1_eax ucode_rev_to_cpuid(unsigned int val)
+ {
+ union zen_patch_rev p;
+@@ -246,8 +351,7 @@ static bool verify_equivalence_table(const u8 *buf, size_t buf_size)
+ * On success, @sh_psize returns the patch size according to the section header,
+ * to the caller.
+ */
+-static bool
+-__verify_patch_section(const u8 *buf, size_t buf_size, u32 *sh_psize)
++static bool __verify_patch_section(const u8 *buf, size_t buf_size, u32 *sh_psize)
+ {
+ u32 p_type, p_size;
+ const u32 *hdr;
+@@ -484,10 +588,13 @@ static void scan_containers(u8 *ucode, size_t size, struct cont_desc *desc)
+ }
+ }
+
+-static int __apply_microcode_amd(struct microcode_amd *mc, unsigned int psize)
++static bool __apply_microcode_amd(struct microcode_amd *mc, u32 *cur_rev,
++ unsigned int psize)
+ {
+ unsigned long p_addr = (unsigned long)&mc->hdr.data_code;
+- u32 rev, dummy;
++
++ if (!verify_sha256_digest(mc->hdr.patch_id, *cur_rev, (const u8 *)p_addr, psize))
++ return false;
+
+ native_wrmsrl(MSR_AMD64_PATCH_LOADER, p_addr);
+
+@@ -505,47 +612,13 @@ static int __apply_microcode_amd(struct microcode_amd *mc, unsigned int psize)
+ }
+
+ /* verify patch application was successful */
+- native_rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+-
+- if (rev != mc->hdr.patch_id)
+- return -1;
++ *cur_rev = get_patch_level();
++ if (*cur_rev != mc->hdr.patch_id)
++ return false;
+
+- return 0;
++ return true;
+ }
+
+-/*
+- * Early load occurs before we can vmalloc(). So we look for the microcode
+- * patch container file in initrd, traverse equivalent cpu table, look for a
+- * matching microcode patch, and update, all in initrd memory in place.
+- * When vmalloc() is available for use later -- on 64-bit during first AP load,
+- * and on 32-bit during save_microcode_in_initrd_amd() -- we can call
+- * load_microcode_amd() to save equivalent cpu table and microcode patches in
+- * kernel heap memory.
+- *
+- * Returns true if container found (sets @desc), false otherwise.
+- */
+-static bool early_apply_microcode(u32 old_rev, void *ucode, size_t size)
+-{
+- struct cont_desc desc = { 0 };
+- struct microcode_amd *mc;
+- bool ret = false;
+-
+- scan_containers(ucode, size, &desc);
+-
+- mc = desc.mc;
+- if (!mc)
+- return ret;
+-
+- /*
+- * Allow application of the same revision to pick up SMT-specific
+- * changes even if the revision of the other SMT thread is already
+- * up-to-date.
+- */
+- if (old_rev > mc->hdr.patch_id)
+- return ret;
+-
+- return !__apply_microcode_amd(mc, desc.psize);
+-}
+
+ static bool get_builtin_microcode(struct cpio_data *cp)
+ {
+@@ -569,64 +642,74 @@ static bool get_builtin_microcode(struct cpio_data *cp)
+ return false;
+ }
+
+-static void __init find_blobs_in_containers(struct cpio_data *ret)
++static bool __init find_blobs_in_containers(struct cpio_data *ret)
+ {
+ struct cpio_data cp;
++ bool found;
+
+ if (!get_builtin_microcode(&cp))
+ cp = find_microcode_in_initrd(ucode_path);
+
+- *ret = cp;
++ found = cp.data && cp.size;
++ if (found)
++ *ret = cp;
++
++ return found;
+ }
+
++/*
++ * Early load occurs before we can vmalloc(). So we look for the microcode
++ * patch container file in initrd, traverse equivalent cpu table, look for a
++ * matching microcode patch, and update, all in initrd memory in place.
++ * When vmalloc() is available for use later -- on 64-bit during first AP load,
++ * and on 32-bit during save_microcode_in_initrd() -- we can call
++ * load_microcode_amd() to save equivalent cpu table and microcode patches in
++ * kernel heap memory.
++ */
+ void __init load_ucode_amd_bsp(struct early_load_data *ed, unsigned int cpuid_1_eax)
+ {
++ struct cont_desc desc = { };
++ struct microcode_amd *mc;
+ struct cpio_data cp = { };
+- u32 dummy;
++ char buf[4];
++ u32 rev;
++
++ if (cmdline_find_option(boot_command_line, "microcode.amd_sha_check", buf, 4)) {
++ if (!strncmp(buf, "off", 3)) {
++ sha_check = false;
++ pr_warn_once("It is a very very bad idea to disable the blobs SHA check!\n");
++ add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_STILL_OK);
++ }
++ }
+
+ bsp_cpuid_1_eax = cpuid_1_eax;
+
+- native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->old_rev, dummy);
++ rev = get_patch_level();
++ ed->old_rev = rev;
+
+ /* Needed in load_microcode_amd() */
+ ucode_cpu_info[0].cpu_sig.sig = cpuid_1_eax;
+
+- find_blobs_in_containers(&cp);
+- if (!(cp.data && cp.size))
++ if (!find_blobs_in_containers(&cp))
+ return;
+
+- if (early_apply_microcode(ed->old_rev, cp.data, cp.size))
+- native_rdmsr(MSR_AMD64_PATCH_LEVEL, ed->new_rev, dummy);
+-}
+-
+-static enum ucode_state _load_microcode_amd(u8 family, const u8 *data, size_t size);
+-
+-static int __init save_microcode_in_initrd(void)
+-{
+- unsigned int cpuid_1_eax = native_cpuid_eax(1);
+- struct cpuinfo_x86 *c = &boot_cpu_data;
+- struct cont_desc desc = { 0 };
+- enum ucode_state ret;
+- struct cpio_data cp;
+-
+- if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10)
+- return 0;
+-
+- find_blobs_in_containers(&cp);
+- if (!(cp.data && cp.size))
+- return -EINVAL;
+-
+ scan_containers(cp.data, cp.size, &desc);
+- if (!desc.mc)
+- return -EINVAL;
+
+- ret = _load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size);
+- if (ret > UCODE_UPDATED)
+- return -EINVAL;
++ mc = desc.mc;
++ if (!mc)
++ return;
+
+- return 0;
++ /*
++ * Allow application of the same revision to pick up SMT-specific
++ * changes even if the revision of the other SMT thread is already
++ * up-to-date.
++ */
++ if (ed->old_rev > mc->hdr.patch_id)
++ return;
++
++ if (__apply_microcode_amd(mc, &rev, desc.psize))
++ ed->new_rev = rev;
+ }
+-early_initcall(save_microcode_in_initrd);
+
+ static inline bool patch_cpus_equivalent(struct ucode_patch *p,
+ struct ucode_patch *n,
+@@ -727,14 +810,9 @@ static void free_cache(void)
+ static struct ucode_patch *find_patch(unsigned int cpu)
+ {
+ struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+- u32 rev, dummy __always_unused;
+ u16 equiv_id = 0;
+
+- /* fetch rev if not populated yet: */
+- if (!uci->cpu_sig.rev) {
+- rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+- uci->cpu_sig.rev = rev;
+- }
++ uci->cpu_sig.rev = get_patch_level();
+
+ if (x86_family(bsp_cpuid_1_eax) < 0x17) {
+ equiv_id = find_equiv_id(&equiv_table, uci->cpu_sig.sig);
+@@ -757,22 +835,20 @@ void reload_ucode_amd(unsigned int cpu)
+
+ mc = p->data;
+
+- rdmsr(MSR_AMD64_PATCH_LEVEL, rev, dummy);
+-
++ rev = get_patch_level();
+ if (rev < mc->hdr.patch_id) {
+- if (!__apply_microcode_amd(mc, p->size))
+- pr_info_once("reload revision: 0x%08x\n", mc->hdr.patch_id);
++ if (__apply_microcode_amd(mc, &rev, p->size))
++ pr_info_once("reload revision: 0x%08x\n", rev);
+ }
+ }
+
+ static int collect_cpu_info_amd(int cpu, struct cpu_signature *csig)
+ {
+- struct cpuinfo_x86 *c = &cpu_data(cpu);
+ struct ucode_cpu_info *uci = ucode_cpu_info + cpu;
+ struct ucode_patch *p;
+
+ csig->sig = cpuid_eax(0x00000001);
+- csig->rev = c->microcode;
++ csig->rev = get_patch_level();
+
+ /*
+ * a patch could have been loaded early, set uci->mc so that
+@@ -813,7 +889,7 @@ static enum ucode_state apply_microcode_amd(int cpu)
+ goto out;
+ }
+
+- if (__apply_microcode_amd(mc_amd, p->size)) {
++ if (!__apply_microcode_amd(mc_amd, &rev, p->size)) {
+ pr_err("CPU%d: update failed for patch_level=0x%08x\n",
+ cpu, mc_amd->hdr.patch_id);
+ return UCODE_ERROR;
+@@ -935,8 +1011,7 @@ static int verify_and_add_patch(u8 family, u8 *fw, unsigned int leftover,
+ }
+
+ /* Scan the blob in @data and add microcode patches to the cache. */
+-static enum ucode_state __load_microcode_amd(u8 family, const u8 *data,
+- size_t size)
++static enum ucode_state __load_microcode_amd(u8 family, const u8 *data, size_t size)
+ {
+ u8 *fw = (u8 *)data;
+ size_t offset;
+@@ -1011,6 +1086,32 @@ static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t siz
+ return ret;
+ }
+
++static int __init save_microcode_in_initrd(void)
++{
++ unsigned int cpuid_1_eax = native_cpuid_eax(1);
++ struct cpuinfo_x86 *c = &boot_cpu_data;
++ struct cont_desc desc = { 0 };
++ enum ucode_state ret;
++ struct cpio_data cp;
++
++ if (dis_ucode_ldr || c->x86_vendor != X86_VENDOR_AMD || c->x86 < 0x10)
++ return 0;
++
++ if (!find_blobs_in_containers(&cp))
++ return -EINVAL;
++
++ scan_containers(cp.data, cp.size, &desc);
++ if (!desc.mc)
++ return -EINVAL;
++
++ ret = _load_microcode_amd(x86_family(cpuid_1_eax), desc.data, desc.size);
++ if (ret > UCODE_UPDATED)
++ return -EINVAL;
++
++ return 0;
++}
++early_initcall(save_microcode_in_initrd);
++
+ /*
+ * AMD microcode firmware naming convention, up to family 15h they are in
+ * the legacy file:
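The verification added in this file is the plain compute-and-compare pattern: hash the patch body, memcmp against a pinned digest, reject on mismatch. A minimal userspace analogue, assuming OpenSSL's libcrypto (the kernel patch uses its internal sha256_init/update/final API instead):

    #include <openssl/sha.h>        /* build with -lcrypto */
    #include <stdbool.h>
    #include <string.h>

    /* Hash the blob and compare against the pinned digest; any mismatch
     * rejects the blob, mirroring verify_sha256_digest() above.
     */
    bool blob_digest_ok(const unsigned char *data, size_t len,
                        const unsigned char expected[SHA256_DIGEST_LENGTH])
    {
        unsigned char digest[SHA256_DIGEST_LENGTH];

        SHA256(data, len, digest);
        return memcmp(digest, expected, sizeof(digest)) == 0;
    }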
+diff --git a/arch/x86/kernel/cpu/microcode/amd_shas.c b/arch/x86/kernel/cpu/microcode/amd_shas.c
+new file mode 100644
+index 00000000000000..2a1655b1fdd883
+--- /dev/null
++++ b/arch/x86/kernel/cpu/microcode/amd_shas.c
+@@ -0,0 +1,444 @@
++/* Keep 'em sorted. */
++static const struct patch_digest phashes[] = {
++ { 0x8001227, {
++ 0x99,0xc0,0x9b,0x2b,0xcc,0x9f,0x52,0x1b,
++ 0x1a,0x5f,0x1d,0x83,0xa1,0x6c,0xc4,0x46,
++ 0xe2,0x6c,0xda,0x73,0xfb,0x2d,0x23,0xa8,
++ 0x77,0xdc,0x15,0x31,0x33,0x4a,0x46,0x18,
++ }
++ },
++ { 0x8001250, {
++ 0xc0,0x0b,0x6b,0x19,0xfd,0x5c,0x39,0x60,
++ 0xd5,0xc3,0x57,0x46,0x54,0xe4,0xd1,0xaa,
++ 0xa8,0xf7,0x1f,0xa8,0x6a,0x60,0x3e,0xe3,
++ 0x27,0x39,0x8e,0x53,0x30,0xf8,0x49,0x19,
++ }
++ },
++ { 0x800126e, {
++ 0xf3,0x8b,0x2b,0xb6,0x34,0xe3,0xc8,0x2c,
++ 0xef,0xec,0x63,0x6d,0xc8,0x76,0x77,0xb3,
++ 0x25,0x5a,0xb7,0x52,0x8c,0x83,0x26,0xe6,
++ 0x4c,0xbe,0xbf,0xe9,0x7d,0x22,0x6a,0x43,
++ }
++ },
++ { 0x800126f, {
++ 0x2b,0x5a,0xf2,0x9c,0xdd,0xd2,0x7f,0xec,
++ 0xec,0x96,0x09,0x57,0xb0,0x96,0x29,0x8b,
++ 0x2e,0x26,0x91,0xf0,0x49,0x33,0x42,0x18,
++ 0xdd,0x4b,0x65,0x5a,0xd4,0x15,0x3d,0x33,
++ }
++ },
++ { 0x800820d, {
++ 0x68,0x98,0x83,0xcd,0x22,0x0d,0xdd,0x59,
++ 0x73,0x2c,0x5b,0x37,0x1f,0x84,0x0e,0x67,
++ 0x96,0x43,0x83,0x0c,0x46,0x44,0xab,0x7c,
++ 0x7b,0x65,0x9e,0x57,0xb5,0x90,0x4b,0x0e,
++ }
++ },
++ { 0x8301025, {
++ 0xe4,0x7d,0xdb,0x1e,0x14,0xb4,0x5e,0x36,
++ 0x8f,0x3e,0x48,0x88,0x3c,0x6d,0x76,0xa1,
++ 0x59,0xc6,0xc0,0x72,0x42,0xdf,0x6c,0x30,
++ 0x6f,0x0b,0x28,0x16,0x61,0xfc,0x79,0x77,
++ }
++ },
++ { 0x8301055, {
++ 0x81,0x7b,0x99,0x1b,0xae,0x2d,0x4f,0x9a,
++ 0xef,0x13,0xce,0xb5,0x10,0xaf,0x6a,0xea,
++ 0xe5,0xb0,0x64,0x98,0x10,0x68,0x34,0x3b,
++ 0x9d,0x7a,0xd6,0x22,0x77,0x5f,0xb3,0x5b,
++ }
++ },
++ { 0x8301072, {
++ 0xcf,0x76,0xa7,0x1a,0x49,0xdf,0x2a,0x5e,
++ 0x9e,0x40,0x70,0xe5,0xdd,0x8a,0xa8,0x28,
++ 0x20,0xdc,0x91,0xd8,0x2c,0xa6,0xa0,0xb1,
++ 0x2d,0x22,0x26,0x94,0x4b,0x40,0x85,0x30,
++ }
++ },
++ { 0x830107a, {
++ 0x2a,0x65,0x8c,0x1a,0x5e,0x07,0x21,0x72,
++ 0xdf,0x90,0xa6,0x51,0x37,0xd3,0x4b,0x34,
++ 0xc4,0xda,0x03,0xe1,0x8a,0x6c,0xfb,0x20,
++ 0x04,0xb2,0x81,0x05,0xd4,0x87,0xf4,0x0a,
++ }
++ },
++ { 0x830107b, {
++ 0xb3,0x43,0x13,0x63,0x56,0xc1,0x39,0xad,
++ 0x10,0xa6,0x2b,0xcc,0x02,0xe6,0x76,0x2a,
++ 0x1e,0x39,0x58,0x3e,0x23,0x6e,0xa4,0x04,
++ 0x95,0xea,0xf9,0x6d,0xc2,0x8a,0x13,0x19,
++ }
++ },
++ { 0x830107c, {
++ 0x21,0x64,0xde,0xfb,0x9f,0x68,0x96,0x47,
++ 0x70,0x5c,0xe2,0x8f,0x18,0x52,0x6a,0xac,
++ 0xa4,0xd2,0x2e,0xe0,0xde,0x68,0x66,0xc3,
++ 0xeb,0x1e,0xd3,0x3f,0xbc,0x51,0x1d,0x38,
++ }
++ },
++ { 0x860010d, {
++ 0x86,0xb6,0x15,0x83,0xbc,0x3b,0x9c,0xe0,
++ 0xb3,0xef,0x1d,0x99,0x84,0x35,0x15,0xf7,
++ 0x7c,0x2a,0xc6,0x42,0xdb,0x73,0x07,0x5c,
++ 0x7d,0xc3,0x02,0xb5,0x43,0x06,0x5e,0xf8,
++ }
++ },
++ { 0x8608108, {
++ 0x14,0xfe,0x57,0x86,0x49,0xc8,0x68,0xe2,
++ 0x11,0xa3,0xcb,0x6e,0xff,0x6e,0xd5,0x38,
++ 0xfe,0x89,0x1a,0xe0,0x67,0xbf,0xc4,0xcc,
++ 0x1b,0x9f,0x84,0x77,0x2b,0x9f,0xaa,0xbd,
++ }
++ },
++ { 0x8701034, {
++ 0xc3,0x14,0x09,0xa8,0x9c,0x3f,0x8d,0x83,
++ 0x9b,0x4c,0xa5,0xb7,0x64,0x8b,0x91,0x5d,
++ 0x85,0x6a,0x39,0x26,0x1e,0x14,0x41,0xa8,
++ 0x75,0xea,0xa6,0xf9,0xc9,0xd1,0xea,0x2b,
++ }
++ },
++ { 0x8a00008, {
++ 0xd7,0x2a,0x93,0xdc,0x05,0x2f,0xa5,0x6e,
++ 0x0c,0x61,0x2c,0x07,0x9f,0x38,0xe9,0x8e,
++ 0xef,0x7d,0x2a,0x05,0x4d,0x56,0xaf,0x72,
++ 0xe7,0x56,0x47,0x6e,0x60,0x27,0xd5,0x8c,
++ }
++ },
++ { 0x8a0000a, {
++ 0x73,0x31,0x26,0x22,0xd4,0xf9,0xee,0x3c,
++ 0x07,0x06,0xe7,0xb9,0xad,0xd8,0x72,0x44,
++ 0x33,0x31,0xaa,0x7d,0xc3,0x67,0x0e,0xdb,
++ 0x47,0xb5,0xaa,0xbc,0xf5,0xbb,0xd9,0x20,
++ }
++ },
++ { 0xa00104c, {
++ 0x3c,0x8a,0xfe,0x04,0x62,0xd8,0x6d,0xbe,
++ 0xa7,0x14,0x28,0x64,0x75,0xc0,0xa3,0x76,
++ 0xb7,0x92,0x0b,0x97,0x0a,0x8e,0x9c,0x5b,
++ 0x1b,0xc8,0x9d,0x3a,0x1e,0x81,0x3d,0x3b,
++ }
++ },
++ { 0xa00104e, {
++ 0xc4,0x35,0x82,0x67,0xd2,0x86,0xe5,0xb2,
++ 0xfd,0x69,0x12,0x38,0xc8,0x77,0xba,0xe0,
++ 0x70,0xf9,0x77,0x89,0x10,0xa6,0x74,0x4e,
++ 0x56,0x58,0x13,0xf5,0x84,0x70,0x28,0x0b,
++ }
++ },
++ { 0xa001053, {
++ 0x92,0x0e,0xf4,0x69,0x10,0x3b,0xf9,0x9d,
++ 0x31,0x1b,0xa6,0x99,0x08,0x7d,0xd7,0x25,
++ 0x7e,0x1e,0x89,0xba,0x35,0x8d,0xac,0xcb,
++ 0x3a,0xb4,0xdf,0x58,0x12,0xcf,0xc0,0xc3,
++ }
++ },
++ { 0xa001058, {
++ 0x33,0x7d,0xa9,0xb5,0x4e,0x62,0x13,0x36,
++ 0xef,0x66,0xc9,0xbd,0x0a,0xa6,0x3b,0x19,
++ 0xcb,0xf5,0xc2,0xc3,0x55,0x47,0x20,0xec,
++ 0x1f,0x7b,0xa1,0x44,0x0e,0x8e,0xa4,0xb2,
++ }
++ },
++ { 0xa001075, {
++ 0x39,0x02,0x82,0xd0,0x7c,0x26,0x43,0xe9,
++ 0x26,0xa3,0xd9,0x96,0xf7,0x30,0x13,0x0a,
++ 0x8a,0x0e,0xac,0xe7,0x1d,0xdc,0xe2,0x0f,
++ 0xcb,0x9e,0x8d,0xbc,0xd2,0xa2,0x44,0xe0,
++ }
++ },
++ { 0xa001078, {
++ 0x2d,0x67,0xc7,0x35,0xca,0xef,0x2f,0x25,
++ 0x4c,0x45,0x93,0x3f,0x36,0x01,0x8c,0xce,
++ 0xa8,0x5b,0x07,0xd3,0xc1,0x35,0x3c,0x04,
++ 0x20,0xa2,0xfc,0xdc,0xe6,0xce,0x26,0x3e,
++ }
++ },
++ { 0xa001079, {
++ 0x43,0xe2,0x05,0x9c,0xfd,0xb7,0x5b,0xeb,
++ 0x5b,0xe9,0xeb,0x3b,0x96,0xf4,0xe4,0x93,
++ 0x73,0x45,0x3e,0xac,0x8d,0x3b,0xe4,0xdb,
++ 0x10,0x31,0xc1,0xe4,0xa2,0xd0,0x5a,0x8a,
++ }
++ },
++ { 0xa00107a, {
++ 0x5f,0x92,0xca,0xff,0xc3,0x59,0x22,0x5f,
++ 0x02,0xa0,0x91,0x3b,0x4a,0x45,0x10,0xfd,
++ 0x19,0xe1,0x8a,0x6d,0x9a,0x92,0xc1,0x3f,
++ 0x75,0x78,0xac,0x78,0x03,0x1d,0xdb,0x18,
++ }
++ },
++ { 0xa001143, {
++ 0x56,0xca,0xf7,0x43,0x8a,0x4c,0x46,0x80,
++ 0xec,0xde,0xe5,0x9c,0x50,0x84,0x9a,0x42,
++ 0x27,0xe5,0x51,0x84,0x8f,0x19,0xc0,0x8d,
++ 0x0c,0x25,0xb4,0xb0,0x8f,0x10,0xf3,0xf8,
++ }
++ },
++ { 0xa001144, {
++ 0x42,0xd5,0x9b,0xa7,0xd6,0x15,0x29,0x41,
++ 0x61,0xc4,0x72,0x3f,0xf3,0x06,0x78,0x4b,
++ 0x65,0xf3,0x0e,0xfa,0x9c,0x87,0xde,0x25,
++ 0xbd,0xb3,0x9a,0xf4,0x75,0x13,0x53,0xdc,
++ }
++ },
++ { 0xa00115d, {
++ 0xd4,0xc4,0x49,0x36,0x89,0x0b,0x47,0xdd,
++ 0xfb,0x2f,0x88,0x3b,0x5f,0xf2,0x8e,0x75,
++ 0xc6,0x6c,0x37,0x5a,0x90,0x25,0x94,0x3e,
++ 0x36,0x9c,0xae,0x02,0x38,0x6c,0xf5,0x05,
++ }
++ },
++ { 0xa001173, {
++ 0x28,0xbb,0x9b,0xd1,0xa0,0xa0,0x7e,0x3a,
++ 0x59,0x20,0xc0,0xa9,0xb2,0x5c,0xc3,0x35,
++ 0x53,0x89,0xe1,0x4c,0x93,0x2f,0x1d,0xc3,
++ 0xe5,0xf7,0xf3,0xc8,0x9b,0x61,0xaa,0x9e,
++ }
++ },
++ { 0xa0011a8, {
++ 0x97,0xc6,0x16,0x65,0x99,0xa4,0x85,0x3b,
++ 0xf6,0xce,0xaa,0x49,0x4a,0x3a,0xc5,0xb6,
++ 0x78,0x25,0xbc,0x53,0xaf,0x5d,0xcf,0xf4,
++ 0x23,0x12,0xbb,0xb1,0xbc,0x8a,0x02,0x2e,
++ }
++ },
++ { 0xa0011ce, {
++ 0xcf,0x1c,0x90,0xa3,0x85,0x0a,0xbf,0x71,
++ 0x94,0x0e,0x80,0x86,0x85,0x4f,0xd7,0x86,
++ 0xae,0x38,0x23,0x28,0x2b,0x35,0x9b,0x4e,
++ 0xfe,0xb8,0xcd,0x3d,0x3d,0x39,0xc9,0x6a,
++ }
++ },
++ { 0xa0011d1, {
++ 0xdf,0x0e,0xca,0xde,0xf6,0xce,0x5c,0x1e,
++ 0x4c,0xec,0xd7,0x71,0x83,0xcc,0xa8,0x09,
++ 0xc7,0xc5,0xfe,0xb2,0xf7,0x05,0xd2,0xc5,
++ 0x12,0xdd,0xe4,0xf3,0x92,0x1c,0x3d,0xb8,
++ }
++ },
++ { 0xa0011d3, {
++ 0x91,0xe6,0x10,0xd7,0x57,0xb0,0x95,0x0b,
++ 0x9a,0x24,0xee,0xf7,0xcf,0x56,0xc1,0xa6,
++ 0x4a,0x52,0x7d,0x5f,0x9f,0xdf,0xf6,0x00,
++ 0x65,0xf7,0xea,0xe8,0x2a,0x88,0xe2,0x26,
++ }
++ },
++ { 0xa0011d5, {
++ 0xed,0x69,0x89,0xf4,0xeb,0x64,0xc2,0x13,
++ 0xe0,0x51,0x1f,0x03,0x26,0x52,0x7d,0xb7,
++ 0x93,0x5d,0x65,0xca,0xb8,0x12,0x1d,0x62,
++ 0x0d,0x5b,0x65,0x34,0x69,0xb2,0x62,0x21,
++ }
++ },
++ { 0xa001223, {
++ 0xfb,0x32,0x5f,0xc6,0x83,0x4f,0x8c,0xb8,
++ 0xa4,0x05,0xf9,0x71,0x53,0x01,0x16,0xc4,
++ 0x83,0x75,0x94,0xdd,0xeb,0x7e,0xb7,0x15,
++ 0x8e,0x3b,0x50,0x29,0x8a,0x9c,0xcc,0x45,
++ }
++ },
++ { 0xa001224, {
++ 0x0e,0x0c,0xdf,0xb4,0x89,0xee,0x35,0x25,
++ 0xdd,0x9e,0xdb,0xc0,0x69,0x83,0x0a,0xad,
++ 0x26,0xa9,0xaa,0x9d,0xfc,0x3c,0xea,0xf9,
++ 0x6c,0xdc,0xd5,0x6d,0x8b,0x6e,0x85,0x4a,
++ }
++ },
++ { 0xa001227, {
++ 0xab,0xc6,0x00,0x69,0x4b,0x50,0x87,0xad,
++ 0x5f,0x0e,0x8b,0xea,0x57,0x38,0xce,0x1d,
++ 0x0f,0x75,0x26,0x02,0xf6,0xd6,0x96,0xe9,
++ 0x87,0xb9,0xd6,0x20,0x27,0x7c,0xd2,0xe0,
++ }
++ },
++ { 0xa001229, {
++ 0x7f,0x49,0x49,0x48,0x46,0xa5,0x50,0xa6,
++ 0x28,0x89,0x98,0xe2,0x9e,0xb4,0x7f,0x75,
++ 0x33,0xa7,0x04,0x02,0xe4,0x82,0xbf,0xb4,
++ 0xa5,0x3a,0xba,0x24,0x8d,0x31,0x10,0x1d,
++ }
++ },
++ { 0xa00122e, {
++ 0x56,0x94,0xa9,0x5d,0x06,0x68,0xfe,0xaf,
++ 0xdf,0x7a,0xff,0x2d,0xdf,0x74,0x0f,0x15,
++ 0x66,0xfb,0x00,0xb5,0x51,0x97,0x9b,0xfa,
++ 0xcb,0x79,0x85,0x46,0x25,0xb4,0xd2,0x10,
++ }
++ },
++ { 0xa001231, {
++ 0x0b,0x46,0xa5,0xfc,0x18,0x15,0xa0,0x9e,
++ 0xa6,0xdc,0xb7,0xff,0x17,0xf7,0x30,0x64,
++ 0xd4,0xda,0x9e,0x1b,0xc3,0xfc,0x02,0x3b,
++ 0xe2,0xc6,0x0e,0x41,0x54,0xb5,0x18,0xdd,
++ }
++ },
++ { 0xa001234, {
++ 0x88,0x8d,0xed,0xab,0xb5,0xbd,0x4e,0xf7,
++ 0x7f,0xd4,0x0e,0x95,0x34,0x91,0xff,0xcc,
++ 0xfb,0x2a,0xcd,0xf7,0xd5,0xdb,0x4c,0x9b,
++ 0xd6,0x2e,0x73,0x50,0x8f,0x83,0x79,0x1a,
++ }
++ },
++ { 0xa001236, {
++ 0x3d,0x30,0x00,0xb9,0x71,0xba,0x87,0x78,
++ 0xa8,0x43,0x55,0xc4,0x26,0x59,0xcf,0x9d,
++ 0x93,0xce,0x64,0x0e,0x8b,0x72,0x11,0x8b,
++ 0xa3,0x8f,0x51,0xe9,0xca,0x98,0xaa,0x25,
++ }
++ },
++ { 0xa001238, {
++ 0x72,0xf7,0x4b,0x0c,0x7d,0x58,0x65,0xcc,
++ 0x00,0xcc,0x57,0x16,0x68,0x16,0xf8,0x2a,
++ 0x1b,0xb3,0x8b,0xe1,0xb6,0x83,0x8c,0x7e,
++ 0xc0,0xcd,0x33,0xf2,0x8d,0xf9,0xef,0x59,
++ }
++ },
++ { 0xa00820c, {
++ 0xa8,0x0c,0x81,0xc0,0xa6,0x00,0xe7,0xf3,
++ 0x5f,0x65,0xd3,0xb9,0x6f,0xea,0x93,0x63,
++ 0xf1,0x8c,0x88,0x45,0xd7,0x82,0x80,0xd1,
++ 0xe1,0x3b,0x8d,0xb2,0xf8,0x22,0x03,0xe2,
++ }
++ },
++ { 0xa10113e, {
++ 0x05,0x3c,0x66,0xd7,0xa9,0x5a,0x33,0x10,
++ 0x1b,0xf8,0x9c,0x8f,0xed,0xfc,0xa7,0xa0,
++ 0x15,0xe3,0x3f,0x4b,0x1d,0x0d,0x0a,0xd5,
++ 0xfa,0x90,0xc4,0xed,0x9d,0x90,0xaf,0x53,
++ }
++ },
++ { 0xa101144, {
++ 0xb3,0x0b,0x26,0x9a,0xf8,0x7c,0x02,0x26,
++ 0x35,0x84,0x53,0xa4,0xd3,0x2c,0x7c,0x09,
++ 0x68,0x7b,0x96,0xb6,0x93,0xef,0xde,0xbc,
++ 0xfd,0x4b,0x15,0xd2,0x81,0xd3,0x51,0x47,
++ }
++ },
++ { 0xa101148, {
++ 0x20,0xd5,0x6f,0x40,0x4a,0xf6,0x48,0x90,
++ 0xc2,0x93,0x9a,0xc2,0xfd,0xac,0xef,0x4f,
++ 0xfa,0xc0,0x3d,0x92,0x3c,0x6d,0x01,0x08,
++ 0xf1,0x5e,0xb0,0xde,0xb4,0x98,0xae,0xc4,
++ }
++ },
++ { 0xa10123e, {
++ 0x03,0xb9,0x2c,0x76,0x48,0x93,0xc9,0x18,
++ 0xfb,0x56,0xfd,0xf7,0xe2,0x1d,0xca,0x4d,
++ 0x1d,0x13,0x53,0x63,0xfe,0x42,0x6f,0xfc,
++ 0x19,0x0f,0xf1,0xfc,0xa7,0xdd,0x89,0x1b,
++ }
++ },
++ { 0xa101244, {
++ 0x71,0x56,0xb5,0x9f,0x21,0xbf,0xb3,0x3c,
++ 0x8c,0xd7,0x36,0xd0,0x34,0x52,0x1b,0xb1,
++ 0x46,0x2f,0x04,0xf0,0x37,0xd8,0x1e,0x72,
++ 0x24,0xa2,0x80,0x84,0x83,0x65,0x84,0xc0,
++ }
++ },
++ { 0xa101248, {
++ 0xed,0x3b,0x95,0xa6,0x68,0xa7,0x77,0x3e,
++ 0xfc,0x17,0x26,0xe2,0x7b,0xd5,0x56,0x22,
++ 0x2c,0x1d,0xef,0xeb,0x56,0xdd,0xba,0x6e,
++ 0x1b,0x7d,0x64,0x9d,0x4b,0x53,0x13,0x75,
++ }
++ },
++ { 0xa108108, {
++ 0xed,0xc2,0xec,0xa1,0x15,0xc6,0x65,0xe9,
++ 0xd0,0xef,0x39,0xaa,0x7f,0x55,0x06,0xc6,
++ 0xf5,0xd4,0x3f,0x7b,0x14,0xd5,0x60,0x2c,
++ 0x28,0x1e,0x9c,0x59,0x69,0x99,0x4d,0x16,
++ }
++ },
++ { 0xa20102d, {
++ 0xf9,0x6e,0xf2,0x32,0xd3,0x0f,0x5f,0x11,
++ 0x59,0xa1,0xfe,0xcc,0xcd,0x9b,0x42,0x89,
++ 0x8b,0x89,0x2f,0xb5,0xbb,0x82,0xef,0x23,
++ 0x8c,0xe9,0x19,0x3e,0xcc,0x3f,0x7b,0xb4,
++ }
++ },
++ { 0xa201210, {
++ 0xe8,0x6d,0x51,0x6a,0x8e,0x72,0xf3,0xfe,
++ 0x6e,0x16,0xbc,0x62,0x59,0x40,0x17,0xe9,
++ 0x6d,0x3d,0x0e,0x6b,0xa7,0xac,0xe3,0x68,
++ 0xf7,0x55,0xf0,0x13,0xbb,0x22,0xf6,0x41,
++ }
++ },
++ { 0xa404107, {
++ 0xbb,0x04,0x4e,0x47,0xdd,0x5e,0x26,0x45,
++ 0x1a,0xc9,0x56,0x24,0xa4,0x4c,0x82,0xb0,
++ 0x8b,0x0d,0x9f,0xf9,0x3a,0xdf,0xc6,0x81,
++ 0x13,0xbc,0xc5,0x25,0xe4,0xc5,0xc3,0x99,
++ }
++ },
++ { 0xa500011, {
++ 0x23,0x3d,0x70,0x7d,0x03,0xc3,0xc4,0xf4,
++ 0x2b,0x82,0xc6,0x05,0xda,0x80,0x0a,0xf1,
++ 0xd7,0x5b,0x65,0x3a,0x7d,0xab,0xdf,0xa2,
++ 0x11,0x5e,0x96,0x7e,0x71,0xe9,0xfc,0x74,
++ }
++ },
++ { 0xa601209, {
++ 0x66,0x48,0xd4,0x09,0x05,0xcb,0x29,0x32,
++ 0x66,0xb7,0x9a,0x76,0xcd,0x11,0xf3,0x30,
++ 0x15,0x86,0xcc,0x5d,0x97,0x0f,0xc0,0x46,
++ 0xe8,0x73,0xe2,0xd6,0xdb,0xd2,0x77,0x1d,
++ }
++ },
++ { 0xa704107, {
++ 0xf3,0xc6,0x58,0x26,0xee,0xac,0x3f,0xd6,
++ 0xce,0xa1,0x72,0x47,0x3b,0xba,0x2b,0x93,
++ 0x2a,0xad,0x8e,0x6b,0xea,0x9b,0xb7,0xc2,
++ 0x64,0x39,0x71,0x8c,0xce,0xe7,0x41,0x39,
++ }
++ },
++ { 0xa705206, {
++ 0x8d,0xc0,0x76,0xbd,0x58,0x9f,0x8f,0xa4,
++ 0x12,0x9d,0x21,0xfb,0x48,0x21,0xbc,0xe7,
++ 0x67,0x6f,0x04,0x18,0xae,0x20,0x87,0x4b,
++ 0x03,0x35,0xe9,0xbe,0xfb,0x06,0xdf,0xfc,
++ }
++ },
++ { 0xa708007, {
++ 0x6b,0x76,0xcc,0x78,0xc5,0x8a,0xa3,0xe3,
++ 0x32,0x2d,0x79,0xe4,0xc3,0x80,0xdb,0xb2,
++ 0x07,0xaa,0x3a,0xe0,0x57,0x13,0x72,0x80,
++ 0xdf,0x92,0x73,0x84,0x87,0x3c,0x73,0x93,
++ }
++ },
++ { 0xa70c005, {
++ 0x88,0x5d,0xfb,0x79,0x64,0xd8,0x46,0x3b,
++ 0x4a,0x83,0x8e,0x77,0x7e,0xcf,0xb3,0x0f,
++ 0x1f,0x1f,0xf1,0x97,0xeb,0xfe,0x56,0x55,
++ 0xee,0x49,0xac,0xe1,0x8b,0x13,0xc5,0x13,
++ }
++ },
++ { 0xaa00116, {
++ 0xe8,0x4c,0x2c,0x88,0xa1,0xac,0x24,0x63,
++ 0x65,0xe5,0xaa,0x2d,0x16,0xa9,0xc3,0xf5,
++ 0xfe,0x1d,0x5e,0x65,0xc7,0xaa,0x92,0x4d,
++ 0x91,0xee,0x76,0xbb,0x4c,0x66,0x78,0xc9,
++ }
++ },
++ { 0xaa00212, {
++ 0xbd,0x57,0x5d,0x0a,0x0a,0x30,0xc1,0x75,
++ 0x95,0x58,0x5e,0x93,0x02,0x28,0x43,0x71,
++ 0xed,0x42,0x29,0xc8,0xec,0x34,0x2b,0xb2,
++ 0x1a,0x65,0x4b,0xfe,0x07,0x0f,0x34,0xa1,
++ }
++ },
++ { 0xaa00213, {
++ 0xed,0x58,0xb7,0x76,0x81,0x7f,0xd9,0x3a,
++ 0x1a,0xff,0x8b,0x34,0xb8,0x4a,0x99,0x0f,
++ 0x28,0x49,0x6c,0x56,0x2b,0xdc,0xb7,0xed,
++ 0x96,0xd5,0x9d,0xc1,0x7a,0xd4,0x51,0x9b,
++ }
++ },
++ { 0xaa00215, {
++ 0x55,0xd3,0x28,0xcb,0x87,0xa9,0x32,0xe9,
++ 0x4e,0x85,0x4b,0x7c,0x6b,0xd5,0x7c,0xd4,
++ 0x1b,0x51,0x71,0x3a,0x0e,0x0b,0xdc,0x9b,
++ 0x68,0x2f,0x46,0xee,0xfe,0xc6,0x6d,0xef,
++ }
++ },
++};
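The "Keep 'em sorted" comment is load-bearing: the loader binary-searches this table by patch_id, which only works on an ascending array. A self-contained sketch of the same lookup with the C library's bsearch():

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    struct digest_entry {
        uint32_t patch_id;
        /* digest bytes omitted for brevity */
    };

    /* Must stay sorted ascending by patch_id, or bsearch() misses. */
    static const struct digest_entry table[] = {
        { 0x8001227 }, { 0x8001250 }, { 0x800126e }, { 0x800126f },
    };

    static int cmp_id(const void *key, const void *elem)
    {
        uint32_t k = *(const uint32_t *)key;
        uint32_t e = ((const struct digest_entry *)elem)->patch_id;

        return (k > e) - (k < e);
    }

    int main(void)
    {
        uint32_t want = 0x800126e;
        const struct digest_entry *hit =
            bsearch(&want, table, sizeof(table) / sizeof(table[0]),
                    sizeof(table[0]), cmp_id);

        printf("%s 0x%x\n", hit ? "found" : "missing", want);
        return 0;
    }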
+diff --git a/arch/x86/kernel/cpu/microcode/internal.h b/arch/x86/kernel/cpu/microcode/internal.h
+index 21776c529fa97a..5df621752fefac 100644
+--- a/arch/x86/kernel/cpu/microcode/internal.h
++++ b/arch/x86/kernel/cpu/microcode/internal.h
+@@ -100,14 +100,12 @@ extern bool force_minrev;
+ #ifdef CONFIG_CPU_SUP_AMD
+ void load_ucode_amd_bsp(struct early_load_data *ed, unsigned int family);
+ void load_ucode_amd_ap(unsigned int family);
+-int save_microcode_in_initrd_amd(unsigned int family);
+ void reload_ucode_amd(unsigned int cpu);
+ struct microcode_ops *init_amd_microcode(void);
+ void exit_amd_microcode(void);
+ #else /* CONFIG_CPU_SUP_AMD */
+ static inline void load_ucode_amd_bsp(struct early_load_data *ed, unsigned int family) { }
+ static inline void load_ucode_amd_ap(unsigned int family) { }
+-static inline int save_microcode_in_initrd_amd(unsigned int family) { return -EINVAL; }
+ static inline void reload_ucode_amd(unsigned int cpu) { }
+ static inline struct microcode_ops *init_amd_microcode(void) { return NULL; }
+ static inline void exit_amd_microcode(void) { }
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 767bcbce74facb..c11db5be253248 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -427,13 +427,14 @@ static bool disk_insert_zone_wplug(struct gendisk *disk,
+ }
+ }
+ hlist_add_head_rcu(&zwplug->node, &disk->zone_wplugs_hash[idx]);
++ atomic_inc(&disk->nr_zone_wplugs);
+ spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
+
+ return true;
+ }
+
+-static struct blk_zone_wplug *disk_get_zone_wplug(struct gendisk *disk,
+- sector_t sector)
++static struct blk_zone_wplug *disk_get_hashed_zone_wplug(struct gendisk *disk,
++ sector_t sector)
+ {
+ unsigned int zno = disk_zone_no(disk, sector);
+ unsigned int idx = hash_32(zno, disk->zone_wplugs_hash_bits);
+@@ -454,6 +455,15 @@ static struct blk_zone_wplug *disk_get_zone_wplug(struct gendisk *disk,
+ return NULL;
+ }
+
++static inline struct blk_zone_wplug *disk_get_zone_wplug(struct gendisk *disk,
++ sector_t sector)
++{
++ if (!atomic_read(&disk->nr_zone_wplugs))
++ return NULL;
++
++ return disk_get_hashed_zone_wplug(disk, sector);
++}
++
+ static void disk_free_zone_wplug_rcu(struct rcu_head *rcu_head)
+ {
+ struct blk_zone_wplug *zwplug =
+@@ -518,6 +528,7 @@ static void disk_remove_zone_wplug(struct gendisk *disk,
+ zwplug->flags |= BLK_ZONE_WPLUG_UNHASHED;
+ spin_lock_irqsave(&disk->zone_wplugs_lock, flags);
+ hlist_del_init_rcu(&zwplug->node);
++ atomic_dec(&disk->nr_zone_wplugs);
+ spin_unlock_irqrestore(&disk->zone_wplugs_lock, flags);
+ disk_put_zone_wplug(zwplug);
+ }
+@@ -607,6 +618,11 @@ static void disk_zone_wplug_abort(struct blk_zone_wplug *zwplug)
+ {
+ struct bio *bio;
+
++ if (bio_list_empty(&zwplug->bio_list))
++ return;
++
++ pr_warn_ratelimited("%s: zone %u: Aborting plugged BIOs\n",
++ zwplug->disk->disk_name, zwplug->zone_no);
+ while ((bio = bio_list_pop(&zwplug->bio_list)))
+ blk_zone_wplug_bio_io_error(zwplug, bio);
+ }
+@@ -1055,6 +1071,47 @@ static bool blk_zone_wplug_handle_write(struct bio *bio, unsigned int nr_segs)
+ return true;
+ }
+
++static void blk_zone_wplug_handle_native_zone_append(struct bio *bio)
++{
++ struct gendisk *disk = bio->bi_bdev->bd_disk;
++ struct blk_zone_wplug *zwplug;
++ unsigned long flags;
++
++ /*
++ * We have native support for zone append operations, so we are not
++ * going to handle @bio through plugging. However, we may already have a
++ * zone write plug for the target zone if that zone was previously
++ * partially written using regular writes. In such case, we risk leaving
++ * the plug in the disk hash table if the zone is fully written using
++ * zone append operations. Avoid this by removing the zone write plug.
++ */
++ zwplug = disk_get_zone_wplug(disk, bio->bi_iter.bi_sector);
++ if (likely(!zwplug))
++ return;
++
++ spin_lock_irqsave(&zwplug->lock, flags);
++
++ /*
++ * We are about to remove the zone write plug. But if the user
++ * (mistakenly) has issued regular writes together with native zone
++ * append, we must abort the writes as otherwise the plugged BIOs would
++ * not be executed by the plug BIO work as disk_get_zone_wplug() will
++ * return NULL after the plug is removed. Aborting the plugged write
++ * BIOs is consistent with the fact that these writes will most likely
++ * fail anyway as there are no ordering guarantees between zone append
++ * operations and regular write operations.
++ */
++ if (!bio_list_empty(&zwplug->bio_list)) {
++ pr_warn_ratelimited("%s: zone %u: Invalid mix of zone append and regular writes\n",
++ disk->disk_name, zwplug->zone_no);
++ disk_zone_wplug_abort(zwplug);
++ }
++ disk_remove_zone_wplug(disk, zwplug);
++ spin_unlock_irqrestore(&zwplug->lock, flags);
++
++ disk_put_zone_wplug(zwplug);
++}
++
+ /**
+ * blk_zone_plug_bio - Handle a zone write BIO with zone write plugging
+ * @bio: The BIO being submitted
+@@ -1111,8 +1168,10 @@ bool blk_zone_plug_bio(struct bio *bio, unsigned int nr_segs)
+ */
+ switch (bio_op(bio)) {
+ case REQ_OP_ZONE_APPEND:
+- if (!bdev_emulates_zone_append(bdev))
++ if (!bdev_emulates_zone_append(bdev)) {
++ blk_zone_wplug_handle_native_zone_append(bio);
+ return false;
++ }
+ fallthrough;
+ case REQ_OP_WRITE:
+ case REQ_OP_WRITE_ZEROES:
+@@ -1299,6 +1358,7 @@ static int disk_alloc_zone_resources(struct gendisk *disk,
+ {
+ unsigned int i;
+
++ atomic_set(&disk->nr_zone_wplugs, 0);
+ disk->zone_wplugs_hash_bits =
+ min(ilog2(pool_size) + 1, BLK_ZONE_WPLUG_MAX_HASH_BITS);
+
+@@ -1353,6 +1413,7 @@ static void disk_destroy_zone_wplugs_hash_table(struct gendisk *disk)
+ }
+ }
+
++ WARN_ON_ONCE(atomic_read(&disk->nr_zone_wplugs));
+ kfree(disk->zone_wplugs_hash);
+ disk->zone_wplugs_hash = NULL;
+ disk->zone_wplugs_hash_bits = 0;
+@@ -1570,11 +1631,12 @@ static int blk_revalidate_seq_zone(struct blk_zone *zone, unsigned int idx,
+ }
+
+ /*
+- * We need to track the write pointer of all zones that are not
+- * empty nor full. So make sure we have a zone write plug for
+- * such zone if the device has a zone write plug hash table.
++ * If the device needs zone append emulation, we need to track the
++ * write pointer of all zones that are not empty nor full. So make sure
++ * we have a zone write plug for such zone if the device has a zone
++ * write plug hash table.
+ */
+- if (!disk->zone_wplugs_hash)
++ if (!queue_emulates_zone_append(disk->queue) || !disk->zone_wplugs_hash)
+ return 0;
+
+ disk_zone_wplug_sync_wp_offset(disk, zone);
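The new nr_zone_wplugs counter is a cheap emptiness guard: a single atomic read proves the hash table has no plugs, so the common case skips the RCU bucket walk entirely. A condensed sketch of the shape (illustrative types, not the block layer's):

    #include <stdatomic.h>
    #include <stddef.h>

    #define TABLE_SIZE 16

    struct entry {
        unsigned long key;
        int in_use;
    };

    struct table {
        _Atomic int nr_entries;     /* insert increments, remove decrements */
        struct entry slots[TABLE_SIZE];
    };

    /* Fast path: one relaxed load proves the table empty and skips the
     * walk, just as the block layer now skips the RCU bucket lookup.
     */
    struct entry *lookup(struct table *t, unsigned long key)
    {
        size_t i;

        if (!atomic_load_explicit(&t->nr_entries, memory_order_relaxed))
            return NULL;

        for (i = 0; i < TABLE_SIZE; i++)
            if (t->slots[i].in_use && t->slots[i].key == key)
                return &t->slots[i];
        return NULL;
    }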
+diff --git a/drivers/firmware/cirrus/cs_dsp.c b/drivers/firmware/cirrus/cs_dsp.c
+index 419220fa42fd7e..bd1ea99c3b4751 100644
+--- a/drivers/firmware/cirrus/cs_dsp.c
++++ b/drivers/firmware/cirrus/cs_dsp.c
+@@ -1609,8 +1609,8 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
+ goto out_fw;
+ }
+
+- ret = regmap_raw_write_async(regmap, reg, buf->buf,
+- le32_to_cpu(region->len));
++ ret = regmap_raw_write(regmap, reg, buf->buf,
++ le32_to_cpu(region->len));
+ if (ret != 0) {
+ cs_dsp_err(dsp,
+ "%s.%d: Failed to write %d bytes at %d in %s: %d\n",
+@@ -1625,12 +1625,6 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
+ regions++;
+ }
+
+- ret = regmap_async_complete(regmap);
+- if (ret != 0) {
+- cs_dsp_err(dsp, "Failed to complete async write: %d\n", ret);
+- goto out_fw;
+- }
+-
+ if (pos > firmware->size)
+ cs_dsp_warn(dsp, "%s.%d: %zu bytes at end of file\n",
+ file, regions, pos - firmware->size);
+@@ -1638,7 +1632,6 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
+ cs_dsp_debugfs_save_wmfwname(dsp, file);
+
+ out_fw:
+- regmap_async_complete(regmap);
+ cs_dsp_buf_free(&buf_list);
+
+ if (ret == -EOVERFLOW)
+@@ -2326,8 +2319,8 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
+ cs_dsp_dbg(dsp, "%s.%d: Writing %d bytes at %x\n",
+ file, blocks, le32_to_cpu(blk->len),
+ reg);
+- ret = regmap_raw_write_async(regmap, reg, buf->buf,
+- le32_to_cpu(blk->len));
++ ret = regmap_raw_write(regmap, reg, buf->buf,
++ le32_to_cpu(blk->len));
+ if (ret != 0) {
+ cs_dsp_err(dsp,
+ "%s.%d: Failed to write to %x in %s: %d\n",
+@@ -2339,10 +2332,6 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
+ blocks++;
+ }
+
+- ret = regmap_async_complete(regmap);
+- if (ret != 0)
+- cs_dsp_err(dsp, "Failed to complete async write: %d\n", ret);
+-
+ if (pos > firmware->size)
+ cs_dsp_warn(dsp, "%s.%d: %zu bytes at end of file\n",
+ file, blocks, pos - firmware->size);
+@@ -2350,7 +2339,6 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
+ cs_dsp_debugfs_save_binname(dsp, file);
+
+ out_fw:
+- regmap_async_complete(regmap);
+ cs_dsp_buf_free(&buf_list);
+
+ if (ret == -EOVERFLOW)
+@@ -2561,8 +2549,8 @@ static int cs_dsp_adsp2_enable_core(struct cs_dsp *dsp)
+ {
+ int ret;
+
+- ret = regmap_update_bits_async(dsp->regmap, dsp->base + ADSP2_CONTROL,
+- ADSP2_SYS_ENA, ADSP2_SYS_ENA);
++ ret = regmap_update_bits(dsp->regmap, dsp->base + ADSP2_CONTROL,
++ ADSP2_SYS_ENA, ADSP2_SYS_ENA);
+ if (ret != 0)
+ return ret;
+
+diff --git a/drivers/firmware/efi/mokvar-table.c b/drivers/firmware/efi/mokvar-table.c
+index 5ed0602c2f75f0..4eb0dff4dfaf8b 100644
+--- a/drivers/firmware/efi/mokvar-table.c
++++ b/drivers/firmware/efi/mokvar-table.c
+@@ -103,9 +103,7 @@ void __init efi_mokvar_table_init(void)
+ void *va = NULL;
+ unsigned long cur_offset = 0;
+ unsigned long offset_limit;
+- unsigned long map_size = 0;
+ unsigned long map_size_needed = 0;
+- unsigned long size;
+ struct efi_mokvar_table_entry *mokvar_entry;
+ int err;
+
+@@ -134,48 +132,34 @@ void __init efi_mokvar_table_init(void)
+ */
+ err = -EINVAL;
+ while (cur_offset + sizeof(*mokvar_entry) <= offset_limit) {
+- mokvar_entry = va + cur_offset;
+- map_size_needed = cur_offset + sizeof(*mokvar_entry);
+- if (map_size_needed > map_size) {
+- if (va)
+- early_memunmap(va, map_size);
+- /*
+- * Map a little more than the fixed size entry
+- * header, anticipating some data. It's safe to
+- * do so as long as we stay within current memory
+- * descriptor.
+- */
+- map_size = min(map_size_needed + 2*EFI_PAGE_SIZE,
+- offset_limit);
+- va = early_memremap(efi.mokvar_table, map_size);
+- if (!va) {
+- pr_err("Failed to map EFI MOKvar config table pa=0x%lx, size=%lu.\n",
+- efi.mokvar_table, map_size);
+- return;
+- }
+- mokvar_entry = va + cur_offset;
++ if (va)
++ early_memunmap(va, sizeof(*mokvar_entry));
++ va = early_memremap(efi.mokvar_table + cur_offset, sizeof(*mokvar_entry));
++ if (!va) {
++ pr_err("Failed to map EFI MOKvar config table pa=0x%lx, size=%zu.\n",
++ efi.mokvar_table + cur_offset, sizeof(*mokvar_entry));
++ return;
+ }
++ mokvar_entry = va;
+
+ /* Check for last sentinel entry */
+ if (mokvar_entry->name[0] == '\0') {
+ if (mokvar_entry->data_size != 0)
+ break;
+ err = 0;
++ map_size_needed = cur_offset + sizeof(*mokvar_entry);
+ break;
+ }
+
+- /* Sanity check that the name is null terminated */
+- size = strnlen(mokvar_entry->name,
+- sizeof(mokvar_entry->name));
+- if (size >= sizeof(mokvar_entry->name))
+- break;
++ /* Enforce that the name is NUL terminated */
++ mokvar_entry->name[sizeof(mokvar_entry->name) - 1] = '\0';
+
+ /* Advance to the next entry */
+- cur_offset = map_size_needed + mokvar_entry->data_size;
++ cur_offset += sizeof(*mokvar_entry) + mokvar_entry->data_size;
+ }
+
+ if (va)
+- early_memunmap(va, map_size);
++ early_memunmap(va, sizeof(*mokvar_entry));
+ if (err) {
+ pr_err("EFI MOKvar config table is not valid\n");
+ return;
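The rewritten MOKvar loop maps one fixed-size entry header at a time and advances by header plus payload, which bounds every mapping and drops the error-prone grow-the-window arithmetic. A plain-memory sketch of the same walk, over a hypothetical record layout:

    #include <stddef.h>
    #include <stdint.h>

    struct record {
        char name[16];
        uint32_t data_size;
        uint8_t data[];             /* data_size payload bytes follow */
    };

    /* Walk records until the empty-name sentinel; returns bytes consumed,
     * or 0 when the buffer ends mid-record (a truncated table).
     */
    size_t walk_records(const uint8_t *buf, size_t len)
    {
        size_t off = 0;

        while (off + sizeof(struct record) <= len) {
            const struct record *r = (const void *)(buf + off);

            if (r->name[0] == '\0')         /* sentinel terminates the table */
                return off + sizeof(*r);

            /* advance past the fixed header plus its payload */
            off += sizeof(*r) + r->data_size;
        }
        return 0;                           /* ran off the end: invalid */
    }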
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 45e28726e148e9..96845541b2d255 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -1542,6 +1542,13 @@ int amdgpu_device_resize_fb_bar(struct amdgpu_device *adev)
+ if (amdgpu_sriov_vf(adev))
+ return 0;
+
++ /* resizing on Dell G5 SE platforms causes problems with runtime pm */
++ if ((amdgpu_runtime_pm != 0) &&
++ adev->pdev->vendor == PCI_VENDOR_ID_ATI &&
++ adev->pdev->device == 0x731f &&
++ adev->pdev->subsystem_vendor == PCI_VENDOR_ID_DELL)
++ return 0;
++
+ /* PCI_EXT_CAP_ID_VNDR extended capability is located at 0x100 */
+ if (!pci_find_ext_capability(adev->pdev, PCI_EXT_CAP_ID_VNDR))
+ DRM_WARN("System can't access extended configuration space, please check!!\n");
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+index 425073d994912f..1c8ac4cf08c5ac 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+@@ -2280,7 +2280,7 @@ int amdgpu_ttm_clear_buffer(struct amdgpu_bo *bo,
+ struct amdgpu_ring *ring = adev->mman.buffer_funcs_ring;
+ struct amdgpu_res_cursor cursor;
+ u64 addr;
+- int r;
++ int r = 0;
+
+ if (!adev->mman.buffer_funcs_enabled)
+ return -EINVAL;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
+index 2eff37aaf8273b..1695dd78ede8e6 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v10.c
+@@ -107,6 +107,8 @@ static void init_mqd(struct mqd_manager *mm, void **mqd,
+ m->cp_hqd_persistent_state = CP_HQD_PERSISTENT_STATE__PRELOAD_REQ_MASK |
+ 0x53 << CP_HQD_PERSISTENT_STATE__PRELOAD_SIZE__SHIFT;
+
++ m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
+ m->cp_mqd_control = 1 << CP_MQD_CONTROL__PRIV_STATE__SHIFT;
+
+ m->cp_mqd_base_addr_lo = lower_32_bits(addr);
+@@ -167,10 +169,10 @@ static void update_mqd(struct mqd_manager *mm, void *mqd,
+
+ m = get_mqd(mqd);
+
+- m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control &= ~CP_HQD_PQ_CONTROL__QUEUE_SIZE_MASK;
+ m->cp_hqd_pq_control |=
+ ffs(q->queue_size / sizeof(unsigned int)) - 1 - 1;
+- m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
++
+ pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control);
+
+ m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
+index 68dbc0399c87aa..3c0ae28c5923b5 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v11.c
+@@ -154,6 +154,8 @@ static void init_mqd(struct mqd_manager *mm, void **mqd,
+ m->cp_hqd_persistent_state = CP_HQD_PERSISTENT_STATE__PRELOAD_REQ_MASK |
+ 0x55 << CP_HQD_PERSISTENT_STATE__PRELOAD_SIZE__SHIFT;
+
++ m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
+ m->cp_mqd_control = 1 << CP_MQD_CONTROL__PRIV_STATE__SHIFT;
+
+ m->cp_mqd_base_addr_lo = lower_32_bits(addr);
+@@ -221,10 +223,9 @@ static void update_mqd(struct mqd_manager *mm, void *mqd,
+
+ m = get_mqd(mqd);
+
+- m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control &= ~CP_HQD_PQ_CONTROL__QUEUE_SIZE_MASK;
+ m->cp_hqd_pq_control |=
+ ffs(q->queue_size / sizeof(unsigned int)) - 1 - 1;
+- m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
+ pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control);
+
+ m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v12.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v12.c
+index 2b72d5b4949b6c..565858b9044d46 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v12.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v12.c
+@@ -121,6 +121,8 @@ static void init_mqd(struct mqd_manager *mm, void **mqd,
+ m->cp_hqd_persistent_state = CP_HQD_PERSISTENT_STATE__PRELOAD_REQ_MASK |
+ 0x55 << CP_HQD_PERSISTENT_STATE__PRELOAD_SIZE__SHIFT;
+
++ m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
+ m->cp_mqd_control = 1 << CP_MQD_CONTROL__PRIV_STATE__SHIFT;
+
+ m->cp_mqd_base_addr_lo = lower_32_bits(addr);
+@@ -184,10 +186,9 @@ static void update_mqd(struct mqd_manager *mm, void *mqd,
+
+ m = get_mqd(mqd);
+
+- m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control &= ~CP_HQD_PQ_CONTROL__QUEUE_SIZE_MASK;
+ m->cp_hqd_pq_control |=
+ ffs(q->queue_size / sizeof(unsigned int)) - 1 - 1;
+- m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
+ pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control);
+
+ m->cp_hqd_pq_base_lo = lower_32_bits((uint64_t)q->queue_address >> 8);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+index 84e8ea3a8a0c94..217af36dc0976f 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager_v9.c
+@@ -182,6 +182,9 @@ static void init_mqd(struct mqd_manager *mm, void **mqd,
+ m->cp_hqd_persistent_state = CP_HQD_PERSISTENT_STATE__PRELOAD_REQ_MASK |
+ 0x53 << CP_HQD_PERSISTENT_STATE__PRELOAD_SIZE__SHIFT;
+
++ m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control |= CP_HQD_PQ_CONTROL__UNORD_DISPATCH_MASK;
++
+ m->cp_mqd_control = 1 << CP_MQD_CONTROL__PRIV_STATE__SHIFT;
+
+ m->cp_mqd_base_addr_lo = lower_32_bits(addr);
+@@ -244,7 +247,7 @@ static void update_mqd(struct mqd_manager *mm, void *mqd,
+
+ m = get_mqd(mqd);
+
+- m->cp_hqd_pq_control = 5 << CP_HQD_PQ_CONTROL__RPTR_BLOCK_SIZE__SHIFT;
++ m->cp_hqd_pq_control &= ~CP_HQD_PQ_CONTROL__QUEUE_SIZE_MASK;
+ m->cp_hqd_pq_control |= order_base_2(q->queue_size / 4) - 1;
+ pr_debug("cp_hqd_pq_control 0x%x\n", m->cp_hqd_pq_control);
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 85e58e0f6059a6..5df26f8937cc81 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1593,75 +1593,130 @@ static bool dm_should_disable_stutter(struct pci_dev *pdev)
+ return false;
+ }
+
+-static const struct dmi_system_id hpd_disconnect_quirk_table[] = {
++struct amdgpu_dm_quirks {
++ bool aux_hpd_discon;
++ bool support_edp0_on_dp1;
++};
++
++static struct amdgpu_dm_quirks quirk_entries = {
++ .aux_hpd_discon = false,
++ .support_edp0_on_dp1 = false
++};
++
++static int edp0_on_dp1_callback(const struct dmi_system_id *id)
++{
++ quirk_entries.support_edp0_on_dp1 = true;
++ return 0;
++}
++
++static int aux_hpd_discon_callback(const struct dmi_system_id *id)
++{
++ quirk_entries.aux_hpd_discon = true;
++ return 0;
++}
++
++static const struct dmi_system_id dmi_quirk_table[] = {
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3660"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3260"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Precision 3460"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Tower Plus 7010"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Tower 7010"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex SFF Plus 7010"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex SFF 7010"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Micro Plus 7010"),
+ },
+ },
+ {
++ .callback = aux_hpd_discon_callback,
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "OptiPlex Micro 7010"),
+ },
+ },
++ {
++ .callback = edp0_on_dp1_callback,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP Elite mt645 G8 Mobile Thin Client"),
++ },
++ },
++ {
++ .callback = edp0_on_dp1_callback,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 665 16 inch G11 Notebook PC"),
++ },
++ },
+ {}
+ /* TODO: refactor this from a fixed table to a dynamic option */
+ };
+
+-static void retrieve_dmi_info(struct amdgpu_display_manager *dm)
++static void retrieve_dmi_info(struct amdgpu_display_manager *dm, struct dc_init_data *init_data)
+ {
+- const struct dmi_system_id *dmi_id;
++ int dmi_id;
++ struct drm_device *dev = dm->ddev;
+
+ dm->aux_hpd_discon_quirk = false;
++ init_data->flags.support_edp0_on_dp1 = false;
++
++ dmi_id = dmi_check_system(dmi_quirk_table);
+
+- dmi_id = dmi_first_match(hpd_disconnect_quirk_table);
+- if (dmi_id) {
++ if (!dmi_id)
++ return;
++
++ if (quirk_entries.aux_hpd_discon) {
+ dm->aux_hpd_discon_quirk = true;
+- DRM_INFO("aux_hpd_discon_quirk attached\n");
++ drm_info(dev, "aux_hpd_discon_quirk attached\n");
++ }
++ if (quirk_entries.support_edp0_on_dp1) {
++ init_data->flags.support_edp0_on_dp1 = true;
++ drm_info(dev, "aux_hpd_discon_quirk attached\n");
+ }
+ }
+
+@@ -1969,7 +2024,7 @@ static int amdgpu_dm_init(struct amdgpu_device *adev)
+ if (amdgpu_ip_version(adev, DCE_HWIP, 0) >= IP_VERSION(3, 0, 0))
+ init_data.num_virtual_links = 1;
+
+- retrieve_dmi_info(&adev->dm);
++ retrieve_dmi_info(&adev->dm, &init_data);
+
+ if (adev->dm.bb_from_dmub)
+ init_data.bb_from_dmub = adev->dm.bb_from_dmub;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+index 3390f0d8420a05..c4a7fd453e5fc0 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+@@ -894,6 +894,7 @@ void amdgpu_dm_hpd_init(struct amdgpu_device *adev)
+ struct drm_device *dev = adev_to_drm(adev);
+ struct drm_connector *connector;
+ struct drm_connector_list_iter iter;
++ int i;
+
+ drm_connector_list_iter_begin(dev, &iter);
+ drm_for_each_connector_iter(connector, &iter) {
+@@ -920,6 +921,12 @@ void amdgpu_dm_hpd_init(struct amdgpu_device *adev)
+ }
+ }
+ drm_connector_list_iter_end(&iter);
++
++ /* Update reference counts for HPDs */
++ for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) {
++ if (amdgpu_irq_get(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1))
++ drm_err(dev, "DM_IRQ: Failed get HPD for source=%d)!\n", i);
++ }
+ }
+
+ /**
+@@ -935,6 +942,7 @@ void amdgpu_dm_hpd_fini(struct amdgpu_device *adev)
+ struct drm_device *dev = adev_to_drm(adev);
+ struct drm_connector *connector;
+ struct drm_connector_list_iter iter;
++ int i;
+
+ drm_connector_list_iter_begin(dev, &iter);
+ drm_for_each_connector_iter(connector, &iter) {
+@@ -960,4 +968,10 @@ void amdgpu_dm_hpd_fini(struct amdgpu_device *adev)
+ }
+ }
+ drm_connector_list_iter_end(&iter);
++
++ /* Update reference counts for HPDs */
++ for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) {
++ if (amdgpu_irq_put(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1))
++ drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n", i);
++ }
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+index 45858bf1523d8f..e140b7a04d7246 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_psr.c
+@@ -54,7 +54,8 @@ static bool link_supports_psrsu(struct dc_link *link)
+ if (amdgpu_dc_debug_mask & DC_DISABLE_PSR_SU)
+ return false;
+
+- return dc_dmub_check_min_version(dc->ctx->dmub_srv->dmub);
++ /* Temporarily disable PSR-SU to avoid glitches */
++ return false;
+ }
+
+ /*
+diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
+index e8b6989a40f35a..6b34a33d788f29 100644
+--- a/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
++++ b/drivers/gpu/drm/amd/pm/legacy-dpm/kv_dpm.c
+@@ -3043,6 +3043,7 @@ static int kv_dpm_hw_init(void *handle)
+ if (!amdgpu_dpm)
+ return 0;
+
++ mutex_lock(&adev->pm.mutex);
+ kv_dpm_setup_asic(adev);
+ ret = kv_dpm_enable(adev);
+ if (ret)
+@@ -3050,6 +3051,8 @@ static int kv_dpm_hw_init(void *handle)
+ else
+ adev->pm.dpm_enabled = true;
+ amdgpu_legacy_dpm_compute_clocks(adev);
++ mutex_unlock(&adev->pm.mutex);
++
+ return ret;
+ }
+
+@@ -3067,32 +3070,42 @@ static int kv_dpm_suspend(void *handle)
+ {
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
++ cancel_work_sync(&adev->pm.dpm.thermal.work);
++
+ if (adev->pm.dpm_enabled) {
++ mutex_lock(&adev->pm.mutex);
++ adev->pm.dpm_enabled = false;
+ /* disable dpm */
+ kv_dpm_disable(adev);
+ /* reset the power state */
+ adev->pm.dpm.current_ps = adev->pm.dpm.requested_ps = adev->pm.dpm.boot_ps;
++ mutex_unlock(&adev->pm.mutex);
+ }
+ return 0;
+ }
+
+ static int kv_dpm_resume(void *handle)
+ {
+- int ret;
++ int ret = 0;
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+- if (adev->pm.dpm_enabled) {
++ if (!amdgpu_dpm)
++ return 0;
++
++ if (!adev->pm.dpm_enabled) {
++ mutex_lock(&adev->pm.mutex);
+ /* asic init will reset to the boot state */
+ kv_dpm_setup_asic(adev);
+ ret = kv_dpm_enable(adev);
+- if (ret)
++ if (ret) {
+ adev->pm.dpm_enabled = false;
+- else
++ } else {
+ adev->pm.dpm_enabled = true;
+- if (adev->pm.dpm_enabled)
+ amdgpu_legacy_dpm_compute_clocks(adev);
++ }
++ mutex_unlock(&adev->pm.mutex);
+ }
+- return 0;
++ return ret;
+ }
+
+ static bool kv_dpm_is_idle(void *handle)
+diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
+index e861355ebd75b9..c7518b13e78795 100644
+--- a/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
++++ b/drivers/gpu/drm/amd/pm/legacy-dpm/legacy_dpm.c
+@@ -1009,9 +1009,12 @@ void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
+ enum amd_pm_state_type dpm_state = POWER_STATE_TYPE_INTERNAL_THERMAL;
+ int temp, size = sizeof(temp);
+
+- if (!adev->pm.dpm_enabled)
+- return;
++ mutex_lock(&adev->pm.mutex);
+
++ if (!adev->pm.dpm_enabled) {
++ mutex_unlock(&adev->pm.mutex);
++ return;
++ }
+ if (!pp_funcs->read_sensor(adev->powerplay.pp_handle,
+ AMDGPU_PP_SENSOR_GPU_TEMP,
+ (void *)&temp,
+@@ -1033,4 +1036,5 @@ void amdgpu_dpm_thermal_work_handler(struct work_struct *work)
+ adev->pm.dpm.state = dpm_state;
+
+ amdgpu_legacy_dpm_compute_clocks(adev->powerplay.pp_handle);
++ mutex_unlock(&adev->pm.mutex);
+ }
+diff --git a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+index a1baa13ab2c263..a5ad1b60597e61 100644
+--- a/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
++++ b/drivers/gpu/drm/amd/pm/legacy-dpm/si_dpm.c
+@@ -7783,6 +7783,7 @@ static int si_dpm_hw_init(void *handle)
+ if (!amdgpu_dpm)
+ return 0;
+
++ mutex_lock(&adev->pm.mutex);
+ si_dpm_setup_asic(adev);
+ ret = si_dpm_enable(adev);
+ if (ret)
+@@ -7790,6 +7791,7 @@ static int si_dpm_hw_init(void *handle)
+ else
+ adev->pm.dpm_enabled = true;
+ amdgpu_legacy_dpm_compute_clocks(adev);
++ mutex_unlock(&adev->pm.mutex);
+ return ret;
+ }
+
+@@ -7807,32 +7809,44 @@ static int si_dpm_suspend(void *handle)
+ {
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
++ cancel_work_sync(&adev->pm.dpm.thermal.work);
++
+ if (adev->pm.dpm_enabled) {
++ mutex_lock(&adev->pm.mutex);
++ adev->pm.dpm_enabled = false;
+ /* disable dpm */
+ si_dpm_disable(adev);
+ /* reset the power state */
+ adev->pm.dpm.current_ps = adev->pm.dpm.requested_ps = adev->pm.dpm.boot_ps;
++ mutex_unlock(&adev->pm.mutex);
+ }
++
+ return 0;
+ }
+
+ static int si_dpm_resume(void *handle)
+ {
+- int ret;
++ int ret = 0;
+ struct amdgpu_device *adev = (struct amdgpu_device *)handle;
+
+- if (adev->pm.dpm_enabled) {
++ if (!amdgpu_dpm)
++ return 0;
++
++ if (!adev->pm.dpm_enabled) {
+ /* asic init will reset to the boot state */
++ mutex_lock(&adev->pm.mutex);
+ si_dpm_setup_asic(adev);
+ ret = si_dpm_enable(adev);
+- if (ret)
++ if (ret) {
+ adev->pm.dpm_enabled = false;
+- else
++ } else {
+ adev->pm.dpm_enabled = true;
+- if (adev->pm.dpm_enabled)
+ amdgpu_legacy_dpm_compute_clocks(adev);
++ }
++ mutex_unlock(&adev->pm.mutex);
+ }
+- return 0;
++
++ return ret;
+ }
+
+ static bool si_dpm_is_idle(void *handle)
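
The kv_dpm, legacy_dpm and si_dpm hunks above all implement one locking scheme: the thermal work handler now takes adev->pm.mutex and re-checks dpm_enabled under it, while suspend first cancels the work *outside* the mutex (cancelling under it could deadlock, since the work takes the same mutex) and only then clears the flag and tears dpm down. A hedged sketch of that ordering, with illustrative names in place of the amdgpu ones:

	#include <linux/mutex.h>
	#include <linux/workqueue.h>

	struct dev_pm {
		struct mutex lock;
		struct work_struct thermal_work;
		bool enabled;
	};

	static void thermal_work_fn(struct work_struct *work)
	{
		struct dev_pm *pm = container_of(work, struct dev_pm,
						 thermal_work);

		mutex_lock(&pm->lock);
		if (!pm->enabled) {     /* raced with suspend: bail out */
			mutex_unlock(&pm->lock);
			return;
		}
		/* ... read sensors, recompute clocks ... */
		mutex_unlock(&pm->lock);
	}

	static void dev_suspend(struct dev_pm *pm)
	{
		/*
		 * Outside the lock: the work takes the same mutex, so
		 * cancel_work_sync() under it could deadlock.
		 */
		cancel_work_sync(&pm->thermal_work);

		mutex_lock(&pm->lock);
		pm->enabled = false;
		/* ... disable dpm, reset power state ... */
		mutex_unlock(&pm->lock);
	}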
+diff --git a/drivers/gpu/drm/xe/regs/xe_engine_regs.h b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+index 7c78496e6213cc..192e571348f6b3 100644
+--- a/drivers/gpu/drm/xe/regs/xe_engine_regs.h
++++ b/drivers/gpu/drm/xe/regs/xe_engine_regs.h
+@@ -53,7 +53,6 @@
+
+ #define RING_CTL(base) XE_REG((base) + 0x3c)
+ #define RING_CTL_SIZE(size) ((size) - PAGE_SIZE) /* in bytes -> pages */
+-#define RING_CTL_SIZE(size) ((size) - PAGE_SIZE) /* in bytes -> pages */
+
+ #define RING_START_UDW(base) XE_REG((base) + 0x48)
+
+diff --git a/drivers/gpu/drm/xe/xe_oa.c b/drivers/gpu/drm/xe/xe_oa.c
+index e6744422dee492..448766033690c7 100644
+--- a/drivers/gpu/drm/xe/xe_oa.c
++++ b/drivers/gpu/drm/xe/xe_oa.c
+@@ -47,6 +47,11 @@ enum xe_oa_submit_deps {
+ XE_OA_SUBMIT_ADD_DEPS,
+ };
+
++enum xe_oa_user_extn_from {
++ XE_OA_USER_EXTN_FROM_OPEN,
++ XE_OA_USER_EXTN_FROM_CONFIG,
++};
++
+ struct xe_oa_reg {
+ struct xe_reg addr;
+ u32 value;
+@@ -94,6 +99,17 @@ struct xe_oa_config_bo {
+ struct xe_bb *bb;
+ };
+
++struct xe_oa_fence {
++ /* @base: dma fence base */
++ struct dma_fence base;
++ /* @lock: lock for the fence */
++ spinlock_t lock;
++ /* @work: work to signal @base */
++ struct delayed_work work;
++ /* @cb: callback to schedule @work */
++ struct dma_fence_cb cb;
++};
++
+ #define DRM_FMT(x) DRM_XE_OA_FMT_TYPE_##x
+
+ static const struct xe_oa_format oa_formats[] = {
+@@ -166,10 +182,10 @@ static struct xe_oa_config *xe_oa_get_oa_config(struct xe_oa *oa, int metrics_se
+ return oa_config;
+ }
+
+-static void free_oa_config_bo(struct xe_oa_config_bo *oa_bo)
++static void free_oa_config_bo(struct xe_oa_config_bo *oa_bo, struct dma_fence *last_fence)
+ {
+ xe_oa_config_put(oa_bo->oa_config);
+- xe_bb_free(oa_bo->bb, NULL);
++ xe_bb_free(oa_bo->bb, last_fence);
+ kfree(oa_bo);
+ }
+
+@@ -668,7 +684,8 @@ static void xe_oa_free_configs(struct xe_oa_stream *stream)
+
+ xe_oa_config_put(stream->oa_config);
+ llist_for_each_entry_safe(oa_bo, tmp, stream->oa_config_bos.first, node)
+- free_oa_config_bo(oa_bo);
++ free_oa_config_bo(oa_bo, stream->last_fence);
++ dma_fence_put(stream->last_fence);
+ }
+
+ static int xe_oa_load_with_lri(struct xe_oa_stream *stream, struct xe_oa_reg *reg_lri, u32 count)
+@@ -832,6 +849,7 @@ static void xe_oa_stream_destroy(struct xe_oa_stream *stream)
+ xe_gt_WARN_ON(gt, xe_guc_pc_unset_gucrc_mode(&gt->uc.guc.pc));
+
+ xe_oa_free_configs(stream);
++ xe_file_put(stream->xef);
+ }
+
+ static int xe_oa_alloc_oa_buffer(struct xe_oa_stream *stream)
+@@ -902,40 +920,113 @@ xe_oa_alloc_config_buffer(struct xe_oa_stream *stream, struct xe_oa_config *oa_c
+ return oa_bo;
+ }
+
++static void xe_oa_update_last_fence(struct xe_oa_stream *stream, struct dma_fence *fence)
++{
++ dma_fence_put(stream->last_fence);
++ stream->last_fence = dma_fence_get(fence);
++}
++
++static void xe_oa_fence_work_fn(struct work_struct *w)
++{
++ struct xe_oa_fence *ofence = container_of(w, typeof(*ofence), work.work);
++
++ /* Signal fence to indicate new OA configuration is active */
++ dma_fence_signal(&ofence->base);
++ dma_fence_put(&ofence->base);
++}
++
++static void xe_oa_config_cb(struct dma_fence *fence, struct dma_fence_cb *cb)
++{
++ /* Additional empirical delay needed for NOA programming after registers are written */
++#define NOA_PROGRAM_ADDITIONAL_DELAY_US 500
++
++ struct xe_oa_fence *ofence = container_of(cb, typeof(*ofence), cb);
++
++ INIT_DELAYED_WORK(&ofence->work, xe_oa_fence_work_fn);
++ queue_delayed_work(system_unbound_wq, &ofence->work,
++ usecs_to_jiffies(NOA_PROGRAM_ADDITIONAL_DELAY_US));
++ dma_fence_put(fence);
++}
++
++static const char *xe_oa_get_driver_name(struct dma_fence *fence)
++{
++ return "xe_oa";
++}
++
++static const char *xe_oa_get_timeline_name(struct dma_fence *fence)
++{
++ return "unbound";
++}
++
++static const struct dma_fence_ops xe_oa_fence_ops = {
++ .get_driver_name = xe_oa_get_driver_name,
++ .get_timeline_name = xe_oa_get_timeline_name,
++};
++
+ static int xe_oa_emit_oa_config(struct xe_oa_stream *stream, struct xe_oa_config *config)
+ {
+ #define NOA_PROGRAM_ADDITIONAL_DELAY_US 500
+ struct xe_oa_config_bo *oa_bo;
+- int err = 0, us = NOA_PROGRAM_ADDITIONAL_DELAY_US;
++ struct xe_oa_fence *ofence;
++ int i, err, num_signal = 0;
+ struct dma_fence *fence;
+- long timeout;
+
+- /* Emit OA configuration batch */
++ ofence = kzalloc(sizeof(*ofence), GFP_KERNEL);
++ if (!ofence) {
++ err = -ENOMEM;
++ goto exit;
++ }
++
+ oa_bo = xe_oa_alloc_config_buffer(stream, config);
+ if (IS_ERR(oa_bo)) {
+ err = PTR_ERR(oa_bo);
+ goto exit;
+ }
+
++ /* Emit OA configuration batch */
+ fence = xe_oa_submit_bb(stream, XE_OA_SUBMIT_ADD_DEPS, oa_bo->bb);
+ if (IS_ERR(fence)) {
+ err = PTR_ERR(fence);
+ goto exit;
+ }
+
+- /* Wait till all previous batches have executed */
+- timeout = dma_fence_wait_timeout(fence, false, 5 * HZ);
+- dma_fence_put(fence);
+- if (timeout < 0)
+- err = timeout;
+- else if (!timeout)
+- err = -ETIME;
+- if (err)
+- drm_dbg(&stream->oa->xe->drm, "dma_fence_wait_timeout err %d\n", err);
++ /* Point of no return: initialize and set fence to signal */
++ spin_lock_init(&ofence->lock);
++ dma_fence_init(&ofence->base, &xe_oa_fence_ops, &ofence->lock, 0, 0);
+
+- /* Additional empirical delay needed for NOA programming after registers are written */
+- usleep_range(us, 2 * us);
++ for (i = 0; i < stream->num_syncs; i++) {
++ if (stream->syncs[i].flags & DRM_XE_SYNC_FLAG_SIGNAL)
++ num_signal++;
++ xe_sync_entry_signal(&stream->syncs[i], &ofence->base);
++ }
++
++ /* Additional dma_fence_get in case we dma_fence_wait */
++ if (!num_signal)
++ dma_fence_get(&ofence->base);
++
++ /* Update last fence too before adding callback */
++ xe_oa_update_last_fence(stream, fence);
++
++ /* Add job fence callback to schedule work to signal ofence->base */
++ err = dma_fence_add_callback(fence, &ofence->cb, xe_oa_config_cb);
++ xe_gt_assert(stream->gt, !err || err == -ENOENT);
++ if (err == -ENOENT)
++ xe_oa_config_cb(fence, &ofence->cb);
++
++ /* If nothing needs to be signaled we wait synchronously */
++ if (!num_signal) {
++ dma_fence_wait(&ofence->base, false);
++ dma_fence_put(&ofence->base);
++ }
++
++ /* Done with syncs */
++ for (i = 0; i < stream->num_syncs; i++)
++ xe_sync_entry_cleanup(&stream->syncs[i]);
++ kfree(stream->syncs);
++
++ return 0;
+ exit:
++ kfree(ofence);
+ return err;
+ }
+
+@@ -1006,6 +1097,262 @@ static int xe_oa_enable_metric_set(struct xe_oa_stream *stream)
+ return xe_oa_emit_oa_config(stream, stream->oa_config);
+ }
+
++static int decode_oa_format(struct xe_oa *oa, u64 fmt, enum xe_oa_format_name *name)
++{
++ u32 counter_size = FIELD_GET(DRM_XE_OA_FORMAT_MASK_COUNTER_SIZE, fmt);
++ u32 counter_sel = FIELD_GET(DRM_XE_OA_FORMAT_MASK_COUNTER_SEL, fmt);
++ u32 bc_report = FIELD_GET(DRM_XE_OA_FORMAT_MASK_BC_REPORT, fmt);
++ u32 type = FIELD_GET(DRM_XE_OA_FORMAT_MASK_FMT_TYPE, fmt);
++ int idx;
++
++ for_each_set_bit(idx, oa->format_mask, __XE_OA_FORMAT_MAX) {
++ const struct xe_oa_format *f = &oa->oa_formats[idx];
++
++ if (counter_size == f->counter_size && bc_report == f->bc_report &&
++ type == f->type && counter_sel == f->counter_select) {
++ *name = idx;
++ return 0;
++ }
++ }
++
++ return -EINVAL;
++}
++
++static int xe_oa_set_prop_oa_unit_id(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ if (value >= oa->oa_unit_ids) {
++ drm_dbg(&oa->xe->drm, "OA unit ID out of range %lld\n", value);
++ return -EINVAL;
++ }
++ param->oa_unit_id = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_sample_oa(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->sample = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_metric_set(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->metric_set = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_oa_format(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ int ret = decode_oa_format(oa, value, &param->oa_format);
++
++ if (ret) {
++ drm_dbg(&oa->xe->drm, "Unsupported OA report format %#llx\n", value);
++ return ret;
++ }
++ return 0;
++}
++
++static int xe_oa_set_prop_oa_exponent(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++#define OA_EXPONENT_MAX 31
++
++ if (value > OA_EXPONENT_MAX) {
++ drm_dbg(&oa->xe->drm, "OA timer exponent too high (> %u)\n", OA_EXPONENT_MAX);
++ return -EINVAL;
++ }
++ param->period_exponent = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_disabled(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->disabled = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_exec_queue_id(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->exec_queue_id = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_engine_instance(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->engine_instance = value;
++ return 0;
++}
++
++static int xe_oa_set_no_preempt(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->no_preempt = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_num_syncs(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->num_syncs = value;
++ return 0;
++}
++
++static int xe_oa_set_prop_syncs_user(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ param->syncs_user = u64_to_user_ptr(value);
++ return 0;
++}
++
++static int xe_oa_set_prop_ret_inval(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param)
++{
++ return -EINVAL;
++}
++
++typedef int (*xe_oa_set_property_fn)(struct xe_oa *oa, u64 value,
++ struct xe_oa_open_param *param);
++static const xe_oa_set_property_fn xe_oa_set_property_funcs_open[] = {
++ [DRM_XE_OA_PROPERTY_OA_UNIT_ID] = xe_oa_set_prop_oa_unit_id,
++ [DRM_XE_OA_PROPERTY_SAMPLE_OA] = xe_oa_set_prop_sample_oa,
++ [DRM_XE_OA_PROPERTY_OA_METRIC_SET] = xe_oa_set_prop_metric_set,
++ [DRM_XE_OA_PROPERTY_OA_FORMAT] = xe_oa_set_prop_oa_format,
++ [DRM_XE_OA_PROPERTY_OA_PERIOD_EXPONENT] = xe_oa_set_prop_oa_exponent,
++ [DRM_XE_OA_PROPERTY_OA_DISABLED] = xe_oa_set_prop_disabled,
++ [DRM_XE_OA_PROPERTY_EXEC_QUEUE_ID] = xe_oa_set_prop_exec_queue_id,
++ [DRM_XE_OA_PROPERTY_OA_ENGINE_INSTANCE] = xe_oa_set_prop_engine_instance,
++ [DRM_XE_OA_PROPERTY_NO_PREEMPT] = xe_oa_set_no_preempt,
++ [DRM_XE_OA_PROPERTY_NUM_SYNCS] = xe_oa_set_prop_num_syncs,
++ [DRM_XE_OA_PROPERTY_SYNCS] = xe_oa_set_prop_syncs_user,
++};
++
++static const xe_oa_set_property_fn xe_oa_set_property_funcs_config[] = {
++ [DRM_XE_OA_PROPERTY_OA_UNIT_ID] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_SAMPLE_OA] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_OA_METRIC_SET] = xe_oa_set_prop_metric_set,
++ [DRM_XE_OA_PROPERTY_OA_FORMAT] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_OA_PERIOD_EXPONENT] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_OA_DISABLED] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_EXEC_QUEUE_ID] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_OA_ENGINE_INSTANCE] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_NO_PREEMPT] = xe_oa_set_prop_ret_inval,
++ [DRM_XE_OA_PROPERTY_NUM_SYNCS] = xe_oa_set_prop_num_syncs,
++ [DRM_XE_OA_PROPERTY_SYNCS] = xe_oa_set_prop_syncs_user,
++};
++
++static int xe_oa_user_ext_set_property(struct xe_oa *oa, enum xe_oa_user_extn_from from,
++ u64 extension, struct xe_oa_open_param *param)
++{
++ u64 __user *address = u64_to_user_ptr(extension);
++ struct drm_xe_ext_set_property ext;
++ int err;
++ u32 idx;
++
++ err = __copy_from_user(&ext, address, sizeof(ext));
++ if (XE_IOCTL_DBG(oa->xe, err))
++ return -EFAULT;
++
++ BUILD_BUG_ON(ARRAY_SIZE(xe_oa_set_property_funcs_open) !=
++ ARRAY_SIZE(xe_oa_set_property_funcs_config));
++
++ if (XE_IOCTL_DBG(oa->xe, ext.property >= ARRAY_SIZE(xe_oa_set_property_funcs_open)) ||
++ XE_IOCTL_DBG(oa->xe, ext.pad))
++ return -EINVAL;
++
++ idx = array_index_nospec(ext.property, ARRAY_SIZE(xe_oa_set_property_funcs_open));
++
++ if (from == XE_OA_USER_EXTN_FROM_CONFIG)
++ return xe_oa_set_property_funcs_config[idx](oa, ext.value, param);
++ else
++ return xe_oa_set_property_funcs_open[idx](oa, ext.value, param);
++}
++
++typedef int (*xe_oa_user_extension_fn)(struct xe_oa *oa, enum xe_oa_user_extn_from from,
++ u64 extension, struct xe_oa_open_param *param);
++static const xe_oa_user_extension_fn xe_oa_user_extension_funcs[] = {
++ [DRM_XE_OA_EXTENSION_SET_PROPERTY] = xe_oa_user_ext_set_property,
++};
++
++#define MAX_USER_EXTENSIONS 16
++static int xe_oa_user_extensions(struct xe_oa *oa, enum xe_oa_user_extn_from from, u64 extension,
++ int ext_number, struct xe_oa_open_param *param)
++{
++ u64 __user *address = u64_to_user_ptr(extension);
++ struct drm_xe_user_extension ext;
++ int err;
++ u32 idx;
++
++ if (XE_IOCTL_DBG(oa->xe, ext_number >= MAX_USER_EXTENSIONS))
++ return -E2BIG;
++
++ err = __copy_from_user(&ext, address, sizeof(ext));
++ if (XE_IOCTL_DBG(oa->xe, err))
++ return -EFAULT;
++
++ if (XE_IOCTL_DBG(oa->xe, ext.pad) ||
++ XE_IOCTL_DBG(oa->xe, ext.name >= ARRAY_SIZE(xe_oa_user_extension_funcs)))
++ return -EINVAL;
++
++ idx = array_index_nospec(ext.name, ARRAY_SIZE(xe_oa_user_extension_funcs));
++ err = xe_oa_user_extension_funcs[idx](oa, from, extension, param);
++ if (XE_IOCTL_DBG(oa->xe, err))
++ return err;
++
++ if (ext.next_extension)
++ return xe_oa_user_extensions(oa, from, ext.next_extension, ++ext_number, param);
++
++ return 0;
++}
++
++static int xe_oa_parse_syncs(struct xe_oa *oa, struct xe_oa_open_param *param)
++{
++ int ret, num_syncs, num_ufence = 0;
++
++ if (param->num_syncs && !param->syncs_user) {
++ drm_dbg(&oa->xe->drm, "num_syncs specified without sync array\n");
++ ret = -EINVAL;
++ goto exit;
++ }
++
++ if (param->num_syncs) {
++ param->syncs = kcalloc(param->num_syncs, sizeof(*param->syncs), GFP_KERNEL);
++ if (!param->syncs) {
++ ret = -ENOMEM;
++ goto exit;
++ }
++ }
++
++ for (num_syncs = 0; num_syncs < param->num_syncs; num_syncs++) {
++ ret = xe_sync_entry_parse(oa->xe, param->xef, &param->syncs[num_syncs],
++ &param->syncs_user[num_syncs], 0);
++ if (ret)
++ goto err_syncs;
++
++ if (xe_sync_is_ufence(&param->syncs[num_syncs]))
++ num_ufence++;
++ }
++
++ if (XE_IOCTL_DBG(oa->xe, num_ufence > 1)) {
++ ret = -EINVAL;
++ goto err_syncs;
++ }
++
++ return 0;
++
++err_syncs:
++ while (num_syncs--)
++ xe_sync_entry_cleanup(&param->syncs[num_syncs]);
++ kfree(param->syncs);
++exit:
++ return ret;
++}
++
+ static void xe_oa_stream_enable(struct xe_oa_stream *stream)
+ {
+ stream->pollin = false;
+@@ -1099,36 +1446,38 @@ static int xe_oa_disable_locked(struct xe_oa_stream *stream)
+
+ static long xe_oa_config_locked(struct xe_oa_stream *stream, u64 arg)
+ {
+- struct drm_xe_ext_set_property ext;
++ struct xe_oa_open_param param = {};
+ long ret = stream->oa_config->id;
+ struct xe_oa_config *config;
+ int err;
+
+- err = __copy_from_user(&ext, u64_to_user_ptr(arg), sizeof(ext));
+- if (XE_IOCTL_DBG(stream->oa->xe, err))
+- return -EFAULT;
+-
+- if (XE_IOCTL_DBG(stream->oa->xe, ext.pad) ||
+- XE_IOCTL_DBG(stream->oa->xe, ext.base.name != DRM_XE_OA_EXTENSION_SET_PROPERTY) ||
+- XE_IOCTL_DBG(stream->oa->xe, ext.base.next_extension) ||
+- XE_IOCTL_DBG(stream->oa->xe, ext.property != DRM_XE_OA_PROPERTY_OA_METRIC_SET))
+- return -EINVAL;
++ err = xe_oa_user_extensions(stream->oa, XE_OA_USER_EXTN_FROM_CONFIG, arg, 0, &param);
++ if (err)
++ return err;
+
+- config = xe_oa_get_oa_config(stream->oa, ext.value);
++ config = xe_oa_get_oa_config(stream->oa, param.metric_set);
+ if (!config)
+ return -ENODEV;
+
+- if (config != stream->oa_config) {
+- err = xe_oa_emit_oa_config(stream, config);
+- if (!err)
+- config = xchg(&stream->oa_config, config);
+- else
+- ret = err;
++ param.xef = stream->xef;
++ err = xe_oa_parse_syncs(stream->oa, &param);
++ if (err)
++ goto err_config_put;
++
++ stream->num_syncs = param.num_syncs;
++ stream->syncs = param.syncs;
++
++ err = xe_oa_emit_oa_config(stream, config);
++ if (!err) {
++ config = xchg(&stream->oa_config, config);
++ drm_dbg(&stream->oa->xe->drm, "changed to oa config uuid=%s\n",
++ stream->oa_config->uuid);
+ }
+
++err_config_put:
+ xe_oa_config_put(config);
+
+- return ret;
++ return err ?: ret;
+ }
+
+ static long xe_oa_status_locked(struct xe_oa_stream *stream, unsigned long arg)
+@@ -1367,10 +1716,11 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
+ stream->oa_buffer.format = &stream->oa->oa_formats[param->oa_format];
+
+ stream->sample = param->sample;
+- stream->periodic = param->period_exponent > 0;
++ stream->periodic = param->period_exponent >= 0;
+ stream->period_exponent = param->period_exponent;
+ stream->no_preempt = param->no_preempt;
+
++ stream->xef = xe_file_get(param->xef);
+ stream->num_syncs = param->num_syncs;
+ stream->syncs = param->syncs;
+
+@@ -1470,6 +1820,7 @@ static int xe_oa_stream_init(struct xe_oa_stream *stream,
+ err_free_configs:
+ xe_oa_free_configs(stream);
+ exit:
++ xe_file_put(stream->xef);
+ return ret;
+ }
+
+@@ -1579,27 +1930,6 @@ static bool engine_supports_oa_format(const struct xe_hw_engine *hwe, int type)
+ }
+ }
+
+-static int decode_oa_format(struct xe_oa *oa, u64 fmt, enum xe_oa_format_name *name)
+-{
+- u32 counter_size = FIELD_GET(DRM_XE_OA_FORMAT_MASK_COUNTER_SIZE, fmt);
+- u32 counter_sel = FIELD_GET(DRM_XE_OA_FORMAT_MASK_COUNTER_SEL, fmt);
+- u32 bc_report = FIELD_GET(DRM_XE_OA_FORMAT_MASK_BC_REPORT, fmt);
+- u32 type = FIELD_GET(DRM_XE_OA_FORMAT_MASK_FMT_TYPE, fmt);
+- int idx;
+-
+- for_each_set_bit(idx, oa->format_mask, __XE_OA_FORMAT_MAX) {
+- const struct xe_oa_format *f = &oa->oa_formats[idx];
+-
+- if (counter_size == f->counter_size && bc_report == f->bc_report &&
+- type == f->type && counter_sel == f->counter_select) {
+- *name = idx;
+- return 0;
+- }
+- }
+-
+- return -EINVAL;
+-}
+-
+ /**
+ * xe_oa_unit_id - Return OA unit ID for a hardware engine
+ * @hwe: @xe_hw_engine
+@@ -1646,214 +1976,6 @@ static int xe_oa_assign_hwe(struct xe_oa *oa, struct xe_oa_open_param *param)
+ return ret;
+ }
+
+-static int xe_oa_set_prop_oa_unit_id(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- if (value >= oa->oa_unit_ids) {
+- drm_dbg(&oa->xe->drm, "OA unit ID out of range %lld\n", value);
+- return -EINVAL;
+- }
+- param->oa_unit_id = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_sample_oa(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->sample = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_metric_set(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->metric_set = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_oa_format(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- int ret = decode_oa_format(oa, value, &param->oa_format);
+-
+- if (ret) {
+- drm_dbg(&oa->xe->drm, "Unsupported OA report format %#llx\n", value);
+- return ret;
+- }
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_oa_exponent(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+-#define OA_EXPONENT_MAX 31
+-
+- if (value > OA_EXPONENT_MAX) {
+- drm_dbg(&oa->xe->drm, "OA timer exponent too high (> %u)\n", OA_EXPONENT_MAX);
+- return -EINVAL;
+- }
+- param->period_exponent = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_disabled(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->disabled = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_exec_queue_id(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->exec_queue_id = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_engine_instance(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->engine_instance = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_no_preempt(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->no_preempt = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_num_syncs(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->num_syncs = value;
+- return 0;
+-}
+-
+-static int xe_oa_set_prop_syncs_user(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param)
+-{
+- param->syncs_user = u64_to_user_ptr(value);
+- return 0;
+-}
+-
+-typedef int (*xe_oa_set_property_fn)(struct xe_oa *oa, u64 value,
+- struct xe_oa_open_param *param);
+-static const xe_oa_set_property_fn xe_oa_set_property_funcs[] = {
+- [DRM_XE_OA_PROPERTY_OA_UNIT_ID] = xe_oa_set_prop_oa_unit_id,
+- [DRM_XE_OA_PROPERTY_SAMPLE_OA] = xe_oa_set_prop_sample_oa,
+- [DRM_XE_OA_PROPERTY_OA_METRIC_SET] = xe_oa_set_prop_metric_set,
+- [DRM_XE_OA_PROPERTY_OA_FORMAT] = xe_oa_set_prop_oa_format,
+- [DRM_XE_OA_PROPERTY_OA_PERIOD_EXPONENT] = xe_oa_set_prop_oa_exponent,
+- [DRM_XE_OA_PROPERTY_OA_DISABLED] = xe_oa_set_prop_disabled,
+- [DRM_XE_OA_PROPERTY_EXEC_QUEUE_ID] = xe_oa_set_prop_exec_queue_id,
+- [DRM_XE_OA_PROPERTY_OA_ENGINE_INSTANCE] = xe_oa_set_prop_engine_instance,
+- [DRM_XE_OA_PROPERTY_NO_PREEMPT] = xe_oa_set_no_preempt,
+- [DRM_XE_OA_PROPERTY_NUM_SYNCS] = xe_oa_set_prop_num_syncs,
+- [DRM_XE_OA_PROPERTY_SYNCS] = xe_oa_set_prop_syncs_user,
+-};
+-
+-static int xe_oa_user_ext_set_property(struct xe_oa *oa, u64 extension,
+- struct xe_oa_open_param *param)
+-{
+- u64 __user *address = u64_to_user_ptr(extension);
+- struct drm_xe_ext_set_property ext;
+- int err;
+- u32 idx;
+-
+- err = __copy_from_user(&ext, address, sizeof(ext));
+- if (XE_IOCTL_DBG(oa->xe, err))
+- return -EFAULT;
+-
+- if (XE_IOCTL_DBG(oa->xe, ext.property >= ARRAY_SIZE(xe_oa_set_property_funcs)) ||
+- XE_IOCTL_DBG(oa->xe, ext.pad))
+- return -EINVAL;
+-
+- idx = array_index_nospec(ext.property, ARRAY_SIZE(xe_oa_set_property_funcs));
+- return xe_oa_set_property_funcs[idx](oa, ext.value, param);
+-}
+-
+-typedef int (*xe_oa_user_extension_fn)(struct xe_oa *oa, u64 extension,
+- struct xe_oa_open_param *param);
+-static const xe_oa_user_extension_fn xe_oa_user_extension_funcs[] = {
+- [DRM_XE_OA_EXTENSION_SET_PROPERTY] = xe_oa_user_ext_set_property,
+-};
+-
+-#define MAX_USER_EXTENSIONS 16
+-static int xe_oa_user_extensions(struct xe_oa *oa, u64 extension, int ext_number,
+- struct xe_oa_open_param *param)
+-{
+- u64 __user *address = u64_to_user_ptr(extension);
+- struct drm_xe_user_extension ext;
+- int err;
+- u32 idx;
+-
+- if (XE_IOCTL_DBG(oa->xe, ext_number >= MAX_USER_EXTENSIONS))
+- return -E2BIG;
+-
+- err = __copy_from_user(&ext, address, sizeof(ext));
+- if (XE_IOCTL_DBG(oa->xe, err))
+- return -EFAULT;
+-
+- if (XE_IOCTL_DBG(oa->xe, ext.pad) ||
+- XE_IOCTL_DBG(oa->xe, ext.name >= ARRAY_SIZE(xe_oa_user_extension_funcs)))
+- return -EINVAL;
+-
+- idx = array_index_nospec(ext.name, ARRAY_SIZE(xe_oa_user_extension_funcs));
+- err = xe_oa_user_extension_funcs[idx](oa, extension, param);
+- if (XE_IOCTL_DBG(oa->xe, err))
+- return err;
+-
+- if (ext.next_extension)
+- return xe_oa_user_extensions(oa, ext.next_extension, ++ext_number, param);
+-
+- return 0;
+-}
+-
+-static int xe_oa_parse_syncs(struct xe_oa *oa, struct xe_oa_open_param *param)
+-{
+- int ret, num_syncs, num_ufence = 0;
+-
+- if (param->num_syncs && !param->syncs_user) {
+- drm_dbg(&oa->xe->drm, "num_syncs specified without sync array\n");
+- ret = -EINVAL;
+- goto exit;
+- }
+-
+- if (param->num_syncs) {
+- param->syncs = kcalloc(param->num_syncs, sizeof(*param->syncs), GFP_KERNEL);
+- if (!param->syncs) {
+- ret = -ENOMEM;
+- goto exit;
+- }
+- }
+-
+- for (num_syncs = 0; num_syncs < param->num_syncs; num_syncs++) {
+- ret = xe_sync_entry_parse(oa->xe, param->xef, &param->syncs[num_syncs],
+- &param->syncs_user[num_syncs], 0);
+- if (ret)
+- goto err_syncs;
+-
+- if (xe_sync_is_ufence(&param->syncs[num_syncs]))
+- num_ufence++;
+- }
+-
+- if (XE_IOCTL_DBG(oa->xe, num_ufence > 1)) {
+- ret = -EINVAL;
+- goto err_syncs;
+- }
+-
+- return 0;
+-
+-err_syncs:
+- while (num_syncs--)
+- xe_sync_entry_cleanup(&param->syncs[num_syncs]);
+- kfree(param->syncs);
+-exit:
+- return ret;
+-}
+-
+ /**
+ * xe_oa_stream_open_ioctl - Opens an OA stream
+ * @dev: @drm_device
+@@ -1880,7 +2002,8 @@ int xe_oa_stream_open_ioctl(struct drm_device *dev, u64 data, struct drm_file *f
+ }
+
+ param.xef = xef;
+- ret = xe_oa_user_extensions(oa, data, 0, &param);
++ param.period_exponent = -1;
++ ret = xe_oa_user_extensions(oa, XE_OA_USER_EXTN_FROM_OPEN, data, 0, &param);
+ if (ret)
+ return ret;
+
+@@ -1934,7 +2057,7 @@ int xe_oa_stream_open_ioctl(struct drm_device *dev, u64 data, struct drm_file *f
+ goto err_exec_q;
+ }
+
+- if (param.period_exponent > 0) {
++ if (param.period_exponent >= 0) {
+ u64 oa_period, oa_freq_hz;
+
+ /* Requesting samples from OAG buffer is a privileged operation */
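
Rather than blocking in dma_fence_wait_timeout() for up to five seconds, the reworked xe_oa_emit_oa_config() above signals completion through its own fence: a callback on the job fence schedules a delayed work (to cover the empirical NOA settle time), and the work then signals the driver fence. The one subtlety is that dma_fence_add_callback() returns -ENOENT when the job fence has already signalled, in which case the callback must be run by hand or the follow-up work never fires. A hedged sketch of that idiom, assuming kernel context, with illustrative names:

	#include <linux/dma-fence.h>
	#include <linux/jiffies.h>
	#include <linux/workqueue.h>

	struct cfg_fence {
		struct dma_fence base;    /* what waiters see; assumed to be
					   * dma_fence_init()'ed with @lock
					   * and minimal ops, as in the patch */
		spinlock_t lock;
		struct delayed_work work; /* signals @base after the delay */
		struct dma_fence_cb cb;   /* hooked onto the job fence */
	};

	static void cfg_work_fn(struct work_struct *w)
	{
		struct cfg_fence *cf = container_of(w, struct cfg_fence,
						    work.work);

		dma_fence_signal(&cf->base);
		dma_fence_put(&cf->base);
	}

	static void cfg_cb(struct dma_fence *job, struct dma_fence_cb *cb)
	{
		struct cfg_fence *cf = container_of(cb, struct cfg_fence, cb);

		INIT_DELAYED_WORK(&cf->work, cfg_work_fn);
		queue_delayed_work(system_unbound_wq, &cf->work,
				   usecs_to_jiffies(500));
		dma_fence_put(job);
	}

	static int arm_cfg_fence(struct cfg_fence *cf, struct dma_fence *job)
	{
		int err = dma_fence_add_callback(job, &cf->cb, cfg_cb);

		if (err == -ENOENT) {	/* job already done: run it now */
			cfg_cb(job, &cf->cb);
			return 0;
		}
		return err;
	}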
+diff --git a/drivers/gpu/drm/xe/xe_oa_types.h b/drivers/gpu/drm/xe/xe_oa_types.h
+index 99f4b2d4bdcf6a..fea9d981e414fa 100644
+--- a/drivers/gpu/drm/xe/xe_oa_types.h
++++ b/drivers/gpu/drm/xe/xe_oa_types.h
+@@ -239,6 +239,12 @@ struct xe_oa_stream {
+ /** @no_preempt: Whether preemption and timeslicing is disabled for stream exec_q */
+ u32 no_preempt;
+
++ /** @xef: xe_file with which the stream was opened */
++ struct xe_file *xef;
++
++ /** @last_fence: fence to use in stream destroy when needed */
++ struct dma_fence *last_fence;
++
+ /** @num_syncs: size of @syncs array */
+ u32 num_syncs;
+
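
The per-entry-point dispatch tables introduced in the xe_oa.c changes above (xe_oa_set_property_funcs_open vs xe_oa_set_property_funcs_config) follow a common ioctl-parsing idiom: bounds-check the user-supplied index, clamp it with array_index_nospec() so speculation cannot index past the table, then dispatch through the table that matches the caller's context. A reduced sketch, assuming kernel context, with made-up property handlers:

	#include <linux/errno.h>
	#include <linux/kernel.h>
	#include <linux/nospec.h>

	struct parse_ctx { u64 value; };

	typedef int (*prop_fn)(struct parse_ctx *ctx, u64 value);

	static int set_a(struct parse_ctx *ctx, u64 v)
	{
		ctx->value = v;
		return 0;
	}

	static int reject(struct parse_ctx *ctx, u64 v)
	{
		return -EINVAL;
	}

	/* one table per entry point: "open" accepts the property,
	 * "config" rejects it */
	static const prop_fn funcs_open[]   = { set_a };
	static const prop_fn funcs_config[] = { reject };

	static int dispatch(bool from_config, unsigned int prop, u64 value,
			    struct parse_ctx *ctx)
	{
		unsigned int idx;

		if (prop >= ARRAY_SIZE(funcs_open))
			return -EINVAL;

		/* clamp under speculation before the table read */
		idx = array_index_nospec(prop, ARRAY_SIZE(funcs_open));
		return from_config ? funcs_config[idx](ctx, value)
				   : funcs_open[idx](ctx, value);
	}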
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index c99380271de62f..5693b337f5dffe 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -667,20 +667,33 @@ int xe_vm_userptr_pin(struct xe_vm *vm)
+
+ /* Collect invalidated userptrs */
+ spin_lock(&vm->userptr.invalidated_lock);
++ xe_assert(vm->xe, list_empty(&vm->userptr.repin_list));
+ list_for_each_entry_safe(uvma, next, &vm->userptr.invalidated,
+ userptr.invalidate_link) {
+ list_del_init(&uvma->userptr.invalidate_link);
+- list_move_tail(&uvma->userptr.repin_link,
+- &vm->userptr.repin_list);
++ list_add_tail(&uvma->userptr.repin_link,
++ &vm->userptr.repin_list);
+ }
+ spin_unlock(&vm->userptr.invalidated_lock);
+
+- /* Pin and move to temporary list */
++ /* Pin and move to bind list */
+ list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
+ userptr.repin_link) {
+ err = xe_vma_userptr_pin_pages(uvma);
+ if (err == -EFAULT) {
+ list_del_init(&uvma->userptr.repin_link);
++ /*
++ * We might have already done the pin once, but then
++ * had to retry before the re-bind happened, due to
++ * some other condition in the caller, but in the
++ * meantime the userptr got dinged by the notifier such
++ * that we need to revalidate here, but this time we hit
++ * the EFAULT. In such a case make sure we remove
++ * ourselves from the rebind list to avoid going down in
++ * flames.
++ */
++ if (!list_empty(&uvma->vma.combined_links.rebind))
++ list_del_init(&uvma->vma.combined_links.rebind);
+
+ /* Wait for pending binds */
+ xe_vm_lock(vm, false);
+@@ -691,10 +704,10 @@ int xe_vm_userptr_pin(struct xe_vm *vm)
+ err = xe_vm_invalidate_vma(&uvma->vma);
+ xe_vm_unlock(vm);
+ if (err)
+- return err;
++ break;
+ } else {
+- if (err < 0)
+- return err;
++ if (err)
++ break;
+
+ list_del_init(&uvma->userptr.repin_link);
+ list_move_tail(&uvma->vma.combined_links.rebind,
+@@ -702,7 +715,19 @@ int xe_vm_userptr_pin(struct xe_vm *vm)
+ }
+ }
+
+- return 0;
++ if (err) {
++ down_write(&vm->userptr.notifier_lock);
++ spin_lock(&vm->userptr.invalidated_lock);
++ list_for_each_entry_safe(uvma, next, &vm->userptr.repin_list,
++ userptr.repin_link) {
++ list_del_init(&uvma->userptr.repin_link);
++ list_move_tail(&uvma->userptr.invalidate_link,
++ &vm->userptr.invalidated);
++ }
++ spin_unlock(&vm->userptr.invalidated_lock);
++ up_write(&vm->userptr.notifier_lock);
++ }
++ return err;
+ }
+
+ /**
+@@ -1066,6 +1091,7 @@ static void xe_vma_destroy(struct xe_vma *vma, struct dma_fence *fence)
+ xe_assert(vm->xe, vma->gpuva.flags & XE_VMA_DESTROYED);
+
+ spin_lock(&vm->userptr.invalidated_lock);
++ xe_assert(vm->xe, list_empty(&to_userptr_vma(vma)->userptr.repin_link));
+ list_del(&to_userptr_vma(vma)->userptr.invalidate_link);
+ spin_unlock(&vm->userptr.invalidated_lock);
+ } else if (!xe_vma_is_null(vma)) {
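
On failure, the reworked xe_vm_userptr_pin() above no longer returns with userptrs stranded on the transient repin list: everything still queued there is moved back to the invalidated list under the notifier and spin locks, so the next pin attempt rediscovers it. A sketch of that unwind shape, with simplified stand-in types for the xe structures:

	#include <linux/list.h>
	#include <linux/rwsem.h>
	#include <linux/spinlock.h>

	struct uvma_like {
		struct list_head repin_link;      /* transient: mid-repin */
		struct list_head invalidate_link; /* parked on ->invalidated */
	};

	struct vm_like {
		struct rw_semaphore notifier_lock;
		spinlock_t invalidated_lock;
		struct list_head repin_list;
		struct list_head invalidated;
	};

	/* On error, hand everything still mid-repin back to the
	 * invalidated list so the next pin pass picks it up again. */
	static void requeue_on_error(struct vm_like *vm)
	{
		struct uvma_like *uvma, *next;

		down_write(&vm->notifier_lock);
		spin_lock(&vm->invalidated_lock);
		list_for_each_entry_safe(uvma, next, &vm->repin_list,
					 repin_link) {
			list_del_init(&uvma->repin_link);
			list_move_tail(&uvma->invalidate_link,
				       &vm->invalidated);
		}
		spin_unlock(&vm->invalidated_lock);
		up_write(&vm->notifier_lock);
	}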
+diff --git a/drivers/i2c/busses/i2c-ls2x.c b/drivers/i2c/busses/i2c-ls2x.c
+index 8821cac3897b69..b475dd27b7af94 100644
+--- a/drivers/i2c/busses/i2c-ls2x.c
++++ b/drivers/i2c/busses/i2c-ls2x.c
+@@ -10,6 +10,7 @@
+ * Rewritten for mainline by Binbin Zhou <zhoubinbin@loongson.cn>
+ */
+
++#include <linux/bitfield.h>
+ #include <linux/bits.h>
+ #include <linux/completion.h>
+ #include <linux/device.h>
+@@ -26,7 +27,8 @@
+ #include <linux/units.h>
+
+ /* I2C Registers */
+-#define I2C_LS2X_PRER 0x0 /* Freq Division Register(16 bits) */
++#define I2C_LS2X_PRER_LO 0x0 /* Freq Division Low Byte Register */
++#define I2C_LS2X_PRER_HI 0x1 /* Freq Division High Byte Register */
+ #define I2C_LS2X_CTR 0x2 /* Control Register */
+ #define I2C_LS2X_TXR 0x3 /* Transport Data Register */
+ #define I2C_LS2X_RXR 0x3 /* Receive Data Register */
+@@ -93,6 +95,7 @@ static irqreturn_t ls2x_i2c_isr(int this_irq, void *dev_id)
+ */
+ static void ls2x_i2c_adjust_bus_speed(struct ls2x_i2c_priv *priv)
+ {
++ u16 val;
+ struct i2c_timings *t = &priv->i2c_t;
+ struct device *dev = priv->adapter.dev.parent;
+ u32 acpi_speed = i2c_acpi_find_bus_speed(dev);
+@@ -104,9 +107,14 @@ static void ls2x_i2c_adjust_bus_speed(struct ls2x_i2c_priv *priv)
+ else
+ t->bus_freq_hz = LS2X_I2C_FREQ_STD;
+
+- /* Calculate and set i2c frequency. */
+- writew(LS2X_I2C_PCLK_FREQ / (5 * t->bus_freq_hz) - 1,
+- priv->base + I2C_LS2X_PRER);
++ /*
++ * According to the chip manual, we can only access the registers as bytes,
++ * otherwise the high bits will be truncated.
++ * So set the I2C frequency with a sequential writeb() instead of writew().
++ */
++ val = LS2X_I2C_PCLK_FREQ / (5 * t->bus_freq_hz) - 1;
++ writeb(FIELD_GET(GENMASK(7, 0), val), priv->base + I2C_LS2X_PRER_LO);
++ writeb(FIELD_GET(GENMASK(15, 8), val), priv->base + I2C_LS2X_PRER_HI);
+ }
+
+ static void ls2x_i2c_init(struct ls2x_i2c_priv *priv)
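
Since the patched ls2x code above can only touch the prescaler one byte at a time, the 16-bit divisor is split with FIELD_GET(GENMASK(7, 0), val) / FIELD_GET(GENMASK(15, 8), val) and written low byte first, then high byte. The same split in a standalone, runnable form; the register offsets are the ones from the patch, while the clock numbers are invented for the example:

	#include <stdint.h>
	#include <stdio.h>

	#define PRER_LO 0x0
	#define PRER_HI 0x1

	static uint8_t regs[8];          /* stand-in for the MMIO window */

	static void writeb_reg(uint8_t v, unsigned int off)
	{
		regs[off] = v;
	}

	int main(void)
	{
		uint32_t pclk = 50000000, bus_hz = 400000; /* made-up rates */
		uint16_t val = pclk / (5 * bus_hz) - 1;

		writeb_reg(val & 0xff, PRER_LO);         /* GENMASK(7, 0)  */
		writeb_reg((val >> 8) & 0xff, PRER_HI);  /* GENMASK(15, 8) */

		printf("divisor %#06x -> lo %#04x hi %#04x\n",
		       val, regs[PRER_LO], regs[PRER_HI]);
		return 0;
	}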
+diff --git a/drivers/i2c/busses/i2c-npcm7xx.c b/drivers/i2c/busses/i2c-npcm7xx.c
+index bbcb4d6668ce63..a693ebb64edf41 100644
+--- a/drivers/i2c/busses/i2c-npcm7xx.c
++++ b/drivers/i2c/busses/i2c-npcm7xx.c
+@@ -2319,6 +2319,13 @@ static int npcm_i2c_probe_bus(struct platform_device *pdev)
+ if (irq < 0)
+ return irq;
+
++ /*
++ * Disable the interrupt to avoid the interrupt handler being triggered
++ * incorrectly by the asynchronous interrupt status since the machine
++ * might do a warm reset during the last smbus/i2c transfer session.
++ */
++ npcm_i2c_int_enable(bus, false);
++
+ ret = devm_request_irq(bus->dev, irq, npcm_i2c_bus_irq, 0,
+ dev_name(bus->dev), bus);
+ if (ret)
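
The npcm7xx fix above is an ordering rule worth naming: quiesce the device's interrupt sources *before* request_irq(), because a warm reset can leave stale status bits latched and the handler would fire on them the moment the line goes live. A hedged sketch of that probe ordering; kernel context is assumed and the bus type and helpers are illustrative:

	#include <linux/device.h>
	#include <linux/interrupt.h>

	struct my_bus {
		struct device *dev;
		void __iomem *base;
	};

	static void my_bus_int_enable(struct my_bus *bus, bool enable)
	{
		/* stub: mask/unmask the controller's interrupt sources */
	}

	static irqreturn_t my_bus_irq_handler(int irq, void *dev_id)
	{
		return IRQ_HANDLED;
	}

	static int sample_probe(struct my_bus *bus, int irq)
	{
		my_bus_int_enable(bus, false);	/* quiesce stale status first */

		return devm_request_irq(bus->dev, irq, my_bus_irq_handler, 0,
					dev_name(bus->dev), bus);
	}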
+diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
+index 67aebfe0fed665..524ed143f875d3 100644
+--- a/drivers/idle/intel_idle.c
++++ b/drivers/idle/intel_idle.c
+@@ -55,6 +55,7 @@
+ #include <asm/intel-family.h>
+ #include <asm/mwait.h>
+ #include <asm/spec-ctrl.h>
++#include <asm/tsc.h>
+ #include <asm/fpu/api.h>
+
+ #define INTEL_IDLE_VERSION "0.5.1"
+@@ -1749,6 +1750,9 @@ static void __init intel_idle_init_cstates_acpi(struct cpuidle_driver *drv)
+ if (intel_idle_state_needs_timer_stop(state))
+ state->flags |= CPUIDLE_FLAG_TIMER_STOP;
+
++ if (cx->type > ACPI_STATE_C1 && !boot_cpu_has(X86_FEATURE_NONSTOP_TSC))
++ mark_tsc_unstable("TSC halts in idle");
++
+ state->enter = intel_idle;
+ state->enter_s2idle = intel_idle_s2idle;
+ }
+diff --git a/drivers/infiniband/hw/bnxt_re/bnxt_re.h b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+index e94518b12f86ee..a316afc0139c86 100644
+--- a/drivers/infiniband/hw/bnxt_re/bnxt_re.h
++++ b/drivers/infiniband/hw/bnxt_re/bnxt_re.h
+@@ -154,6 +154,14 @@ struct bnxt_re_pacing {
+
+ #define BNXT_RE_GRC_FIFO_REG_BASE 0x2000
+
++#define BNXT_RE_MIN_MSIX 2
++#define BNXT_RE_MAX_MSIX BNXT_MAX_ROCE_MSIX
++struct bnxt_re_nq_record {
++ struct bnxt_msix_entry msix_entries[BNXT_RE_MAX_MSIX];
++ struct bnxt_qplib_nq nq[BNXT_RE_MAX_MSIX];
++ int num_msix;
++};
++
+ #define MAX_CQ_HASH_BITS (16)
+ #define MAX_SRQ_HASH_BITS (16)
+ struct bnxt_re_dev {
+@@ -174,24 +182,20 @@ struct bnxt_re_dev {
+ unsigned int version, major, minor;
+ struct bnxt_qplib_chip_ctx *chip_ctx;
+ struct bnxt_en_dev *en_dev;
+- int num_msix;
+
+ int id;
+
+ struct delayed_work worker;
+ u8 cur_prio_map;
+
+- /* FP Notification Queue (CQ & SRQ) */
+- struct tasklet_struct nq_task;
+-
+ /* RCFW Channel */
+ struct bnxt_qplib_rcfw rcfw;
+
+- /* NQ */
+- struct bnxt_qplib_nq nq[BNXT_MAX_ROCE_MSIX];
++ /* NQ record */
++ struct bnxt_re_nq_record *nqr;
+
+ /* Device Resources */
+- struct bnxt_qplib_dev_attr dev_attr;
++ struct bnxt_qplib_dev_attr *dev_attr;
+ struct bnxt_qplib_ctx qplib_ctx;
+ struct bnxt_qplib_res qplib_res;
+ struct bnxt_qplib_dpi dpi_privileged;
+diff --git a/drivers/infiniband/hw/bnxt_re/hw_counters.c b/drivers/infiniband/hw/bnxt_re/hw_counters.c
+index 1e63f809174837..f51adb0a97e667 100644
+--- a/drivers/infiniband/hw/bnxt_re/hw_counters.c
++++ b/drivers/infiniband/hw/bnxt_re/hw_counters.c
+@@ -357,8 +357,8 @@ int bnxt_re_ib_get_hw_stats(struct ib_device *ibdev,
+ goto done;
+ }
+ bnxt_re_copy_err_stats(rdev, stats, err_s);
+- if (_is_ext_stats_supported(rdev->dev_attr.dev_cap_flags) &&
+- !rdev->is_virtfn) {
++ if (bnxt_ext_stats_supported(rdev->chip_ctx, rdev->dev_attr->dev_cap_flags,
++ rdev->is_virtfn)) {
+ rc = bnxt_re_get_ext_stat(rdev, stats);
+ if (rc) {
+ clear_bit(BNXT_RE_FLAG_ISSUE_ROCE_STATS,
+diff --git a/drivers/infiniband/hw/bnxt_re/ib_verbs.c b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+index a7067c3c067972..0b21d8b5d96296 100644
+--- a/drivers/infiniband/hw/bnxt_re/ib_verbs.c
++++ b/drivers/infiniband/hw/bnxt_re/ib_verbs.c
+@@ -118,7 +118,7 @@ static enum ib_access_flags __to_ib_access_flags(int qflags)
+ static void bnxt_re_check_and_set_relaxed_ordering(struct bnxt_re_dev *rdev,
+ struct bnxt_qplib_mrw *qplib_mr)
+ {
+- if (_is_relaxed_ordering_supported(rdev->dev_attr.dev_cap_flags2) &&
++ if (_is_relaxed_ordering_supported(rdev->dev_attr->dev_cap_flags2) &&
+ pcie_relaxed_ordering_enabled(rdev->en_dev->pdev))
+ qplib_mr->flags |= CMDQ_REGISTER_MR_FLAGS_ENABLE_RO;
+ }
+@@ -143,7 +143,7 @@ int bnxt_re_query_device(struct ib_device *ibdev,
+ struct ib_udata *udata)
+ {
+ struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+- struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
++ struct bnxt_qplib_dev_attr *dev_attr = rdev->dev_attr;
+
+ memset(ib_attr, 0, sizeof(*ib_attr));
+ memcpy(&ib_attr->fw_ver, dev_attr->fw_ver,
+@@ -216,7 +216,7 @@ int bnxt_re_query_port(struct ib_device *ibdev, u32 port_num,
+ struct ib_port_attr *port_attr)
+ {
+ struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+- struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
++ struct bnxt_qplib_dev_attr *dev_attr = rdev->dev_attr;
+ int rc;
+
+ memset(port_attr, 0, sizeof(*port_attr));
+@@ -274,8 +274,8 @@ void bnxt_re_query_fw_str(struct ib_device *ibdev, char *str)
+ struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+
+ snprintf(str, IB_FW_VERSION_NAME_MAX, "%d.%d.%d.%d",
+- rdev->dev_attr.fw_ver[0], rdev->dev_attr.fw_ver[1],
+- rdev->dev_attr.fw_ver[2], rdev->dev_attr.fw_ver[3]);
++ rdev->dev_attr->fw_ver[0], rdev->dev_attr->fw_ver[1],
++ rdev->dev_attr->fw_ver[2], rdev->dev_attr->fw_ver[3]);
+ }
+
+ int bnxt_re_query_pkey(struct ib_device *ibdev, u32 port_num,
+@@ -526,7 +526,7 @@ static int bnxt_re_create_fence_mr(struct bnxt_re_pd *pd)
+ mr->qplib_mr.pd = &pd->qplib_pd;
+ mr->qplib_mr.type = CMDQ_ALLOCATE_MRW_MRW_FLAGS_PMR;
+ mr->qplib_mr.access_flags = __from_ib_access_flags(mr_access_flags);
+- if (!_is_alloc_mr_unified(rdev->dev_attr.dev_cap_flags)) {
++ if (!_is_alloc_mr_unified(rdev->dev_attr->dev_cap_flags)) {
+ rc = bnxt_qplib_alloc_mrw(&rdev->qplib_res, &mr->qplib_mr);
+ if (rc) {
+ ibdev_err(&rdev->ibdev, "Failed to alloc fence-HW-MR\n");
+@@ -1001,7 +1001,7 @@ static int bnxt_re_setup_swqe_size(struct bnxt_re_qp *qp,
+ rdev = qp->rdev;
+ qplqp = &qp->qplib_qp;
+ sq = &qplqp->sq;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+
+ align = sizeof(struct sq_send_hdr);
+ ilsize = ALIGN(init_attr->cap.max_inline_data, align);
+@@ -1221,7 +1221,7 @@ static int bnxt_re_init_rq_attr(struct bnxt_re_qp *qp,
+ rdev = qp->rdev;
+ qplqp = &qp->qplib_qp;
+ rq = &qplqp->rq;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+
+ if (init_attr->srq) {
+ struct bnxt_re_srq *srq;
+@@ -1258,7 +1258,7 @@ static void bnxt_re_adjust_gsi_rq_attr(struct bnxt_re_qp *qp)
+
+ rdev = qp->rdev;
+ qplqp = &qp->qplib_qp;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+
+ if (!bnxt_qplib_is_chip_gen_p5_p7(rdev->chip_ctx)) {
+ qplqp->rq.max_sge = dev_attr->max_qp_sges;
+@@ -1284,7 +1284,7 @@ static int bnxt_re_init_sq_attr(struct bnxt_re_qp *qp,
+ rdev = qp->rdev;
+ qplqp = &qp->qplib_qp;
+ sq = &qplqp->sq;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+
+ sq->max_sge = init_attr->cap.max_send_sge;
+ entries = init_attr->cap.max_send_wr;
+@@ -1337,7 +1337,7 @@ static void bnxt_re_adjust_gsi_sq_attr(struct bnxt_re_qp *qp,
+
+ rdev = qp->rdev;
+ qplqp = &qp->qplib_qp;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+
+ if (!bnxt_qplib_is_chip_gen_p5_p7(rdev->chip_ctx)) {
+ entries = bnxt_re_init_depth(init_attr->cap.max_send_wr + 1, uctx);
+@@ -1386,7 +1386,7 @@ static int bnxt_re_init_qp_attr(struct bnxt_re_qp *qp, struct bnxt_re_pd *pd,
+
+ rdev = qp->rdev;
+ qplqp = &qp->qplib_qp;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+
+ /* Setup misc params */
+ ether_addr_copy(qplqp->smac, rdev->netdev->dev_addr);
+@@ -1556,7 +1556,7 @@ int bnxt_re_create_qp(struct ib_qp *ib_qp, struct ib_qp_init_attr *qp_init_attr,
+ ib_pd = ib_qp->pd;
+ pd = container_of(ib_pd, struct bnxt_re_pd, ib_pd);
+ rdev = pd->rdev;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+ qp = container_of(ib_qp, struct bnxt_re_qp, ib_qp);
+
+ uctx = rdma_udata_to_drv_context(udata, struct bnxt_re_ucontext, ib_uctx);
+@@ -1783,7 +1783,7 @@ int bnxt_re_create_srq(struct ib_srq *ib_srq,
+ ib_pd = ib_srq->pd;
+ pd = container_of(ib_pd, struct bnxt_re_pd, ib_pd);
+ rdev = pd->rdev;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+ srq = container_of(ib_srq, struct bnxt_re_srq, ib_srq);
+
+ if (srq_init_attr->attr.max_wr >= dev_attr->max_srq_wqes) {
+@@ -1814,8 +1814,10 @@ int bnxt_re_create_srq(struct ib_srq *ib_srq,
+ srq->qplib_srq.wqe_size = bnxt_re_get_rwqe_size(dev_attr->max_srq_sges);
+ srq->qplib_srq.threshold = srq_init_attr->attr.srq_limit;
+ srq->srq_limit = srq_init_attr->attr.srq_limit;
+- srq->qplib_srq.eventq_hw_ring_id = rdev->nq[0].ring_id;
+- nq = &rdev->nq[0];
++ srq->qplib_srq.eventq_hw_ring_id = rdev->nqr->nq[0].ring_id;
++ srq->qplib_srq.sg_info.pgsize = PAGE_SIZE;
++ srq->qplib_srq.sg_info.pgshft = PAGE_SHIFT;
++ nq = &rdev->nqr->nq[0];
+
+ if (udata) {
+ rc = bnxt_re_init_user_srq(rdev, pd, srq, udata);
+@@ -1987,7 +1989,7 @@ int bnxt_re_modify_qp(struct ib_qp *ib_qp, struct ib_qp_attr *qp_attr,
+ {
+ struct bnxt_re_qp *qp = container_of(ib_qp, struct bnxt_re_qp, ib_qp);
+ struct bnxt_re_dev *rdev = qp->rdev;
+- struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
++ struct bnxt_qplib_dev_attr *dev_attr = rdev->dev_attr;
+ enum ib_qp_state curr_qp_state, new_qp_state;
+ int rc, entries;
+ unsigned int flags;
+@@ -3011,7 +3013,7 @@ int bnxt_re_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ struct ib_udata *udata = &attrs->driver_udata;
+ struct bnxt_re_ucontext *uctx =
+ rdma_udata_to_drv_context(udata, struct bnxt_re_ucontext, ib_uctx);
+- struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
++ struct bnxt_qplib_dev_attr *dev_attr = rdev->dev_attr;
+ struct bnxt_qplib_chip_ctx *cctx;
+ struct bnxt_qplib_nq *nq = NULL;
+ unsigned int nq_alloc_cnt;
+@@ -3070,7 +3072,7 @@ int bnxt_re_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ * used for getting the NQ index.
+ */
+ nq_alloc_cnt = atomic_inc_return(&rdev->nq_alloc_cnt);
+- nq = &rdev->nq[nq_alloc_cnt % (rdev->num_msix - 1)];
++ nq = &rdev->nqr->nq[nq_alloc_cnt % (rdev->nqr->num_msix - 1)];
+ cq->qplib_cq.max_wqe = entries;
+ cq->qplib_cq.cnq_hw_ring_id = nq->ring_id;
+ cq->qplib_cq.nq = nq;
+@@ -3154,7 +3156,7 @@ int bnxt_re_resize_cq(struct ib_cq *ibcq, int cqe, struct ib_udata *udata)
+
+ cq = container_of(ibcq, struct bnxt_re_cq, ib_cq);
+ rdev = cq->rdev;
+- dev_attr = &rdev->dev_attr;
++ dev_attr = rdev->dev_attr;
+ if (!ibcq->uobject) {
+ ibdev_err(&rdev->ibdev, "Kernel CQ Resize not supported");
+ return -EOPNOTSUPP;
+@@ -4127,7 +4129,7 @@ static struct ib_mr *__bnxt_re_user_reg_mr(struct ib_pd *ib_pd, u64 length, u64
+ mr->qplib_mr.access_flags = __from_ib_access_flags(mr_access_flags);
+ mr->qplib_mr.type = CMDQ_ALLOCATE_MRW_MRW_FLAGS_MR;
+
+- if (!_is_alloc_mr_unified(rdev->dev_attr.dev_cap_flags)) {
++ if (!_is_alloc_mr_unified(rdev->dev_attr->dev_cap_flags)) {
+ rc = bnxt_qplib_alloc_mrw(&rdev->qplib_res, &mr->qplib_mr);
+ if (rc) {
+ ibdev_err(&rdev->ibdev, "Failed to allocate MR rc = %d", rc);
+@@ -4219,7 +4221,7 @@ int bnxt_re_alloc_ucontext(struct ib_ucontext *ctx, struct ib_udata *udata)
+ struct bnxt_re_ucontext *uctx =
+ container_of(ctx, struct bnxt_re_ucontext, ib_uctx);
+ struct bnxt_re_dev *rdev = to_bnxt_re_dev(ibdev, ibdev);
+- struct bnxt_qplib_dev_attr *dev_attr = &rdev->dev_attr;
++ struct bnxt_qplib_dev_attr *dev_attr = rdev->dev_attr;
+ struct bnxt_re_user_mmap_entry *entry;
+ struct bnxt_re_uctx_resp resp = {};
+ struct bnxt_re_uctx_req ureq = {};
+diff --git a/drivers/infiniband/hw/bnxt_re/main.c b/drivers/infiniband/hw/bnxt_re/main.c
+index 8abd1b723f8ff5..9bd837a5b8a1ad 100644
+--- a/drivers/infiniband/hw/bnxt_re/main.c
++++ b/drivers/infiniband/hw/bnxt_re/main.c
+@@ -152,6 +152,10 @@ static void bnxt_re_destroy_chip_ctx(struct bnxt_re_dev *rdev)
+
+ if (!rdev->chip_ctx)
+ return;
++
++ kfree(rdev->dev_attr);
++ rdev->dev_attr = NULL;
++
+ chip_ctx = rdev->chip_ctx;
+ rdev->chip_ctx = NULL;
+ rdev->rcfw.res = NULL;
+@@ -165,7 +169,7 @@ static int bnxt_re_setup_chip_ctx(struct bnxt_re_dev *rdev)
+ {
+ struct bnxt_qplib_chip_ctx *chip_ctx;
+ struct bnxt_en_dev *en_dev;
+- int rc;
++ int rc = -ENOMEM;
+
+ en_dev = rdev->en_dev;
+
+@@ -181,23 +185,30 @@ static int bnxt_re_setup_chip_ctx(struct bnxt_re_dev *rdev)
+
+ rdev->qplib_res.cctx = rdev->chip_ctx;
+ rdev->rcfw.res = &rdev->qplib_res;
+- rdev->qplib_res.dattr = &rdev->dev_attr;
++ rdev->dev_attr = kzalloc(sizeof(*rdev->dev_attr), GFP_KERNEL);
++ if (!rdev->dev_attr)
++ goto free_chip_ctx;
++ rdev->qplib_res.dattr = rdev->dev_attr;
+ rdev->qplib_res.is_vf = BNXT_EN_VF(en_dev);
+
+ bnxt_re_set_drv_mode(rdev);
+
+ bnxt_re_set_db_offset(rdev);
+ rc = bnxt_qplib_map_db_bar(&rdev->qplib_res);
+- if (rc) {
+- kfree(rdev->chip_ctx);
+- rdev->chip_ctx = NULL;
+- return rc;
+- }
++ if (rc)
++ goto free_dev_attr;
+
+ if (bnxt_qplib_determine_atomics(en_dev->pdev))
+ ibdev_info(&rdev->ibdev,
+ "platform doesn't support global atomics.");
+ return 0;
++free_dev_attr:
++ kfree(rdev->dev_attr);
++ rdev->dev_attr = NULL;
++free_chip_ctx:
++ kfree(rdev->chip_ctx);
++ rdev->chip_ctx = NULL;
++ return rc;
+ }
+
+ /* SR-IOV helper functions */
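
The setup-path hunks above convert bnxt_re_setup_chip_ctx() to the standard goto-unwind shape: rc is primed to -ENOMEM, each allocation that fails jumps to a label that frees exactly what was allocated before it, and the freed pointers are NULLed so later teardown stays idempotent. A standalone miniature of the same shape, using stand-in types and userspace allocators:

	#include <stdio.h>
	#include <stdlib.h>

	struct chip_ctx { int id; };
	struct dev_attr { int caps; };

	static int setup(struct chip_ctx **ctx, struct dev_attr **attr)
	{
		int rc = -1;			/* stands in for -ENOMEM */

		*ctx = calloc(1, sizeof(**ctx));
		if (!*ctx)
			return rc;

		*attr = calloc(1, sizeof(**attr));
		if (!*attr)
			goto free_ctx;

		if (0 /* imagine a later step, e.g. BAR mapping, failing */)
			goto free_attr;

		return 0;

	free_attr:
		free(*attr);
		*attr = NULL;
	free_ctx:
		free(*ctx);
		*ctx = NULL;
		return rc;
	}

	int main(void)
	{
		struct chip_ctx *ctx = NULL;
		struct dev_attr *attr = NULL;

		if (setup(&ctx, &attr))
			return 1;
		printf("setup ok\n");
		free(attr);
		free(ctx);
		return 0;
	}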
+@@ -219,7 +230,7 @@ static void bnxt_re_limit_pf_res(struct bnxt_re_dev *rdev)
+ struct bnxt_qplib_ctx *ctx;
+ int i;
+
+- attr = &rdev->dev_attr;
++ attr = rdev->dev_attr;
+ ctx = &rdev->qplib_ctx;
+
+ ctx->qpc_count = min_t(u32, BNXT_RE_MAX_QPC_COUNT,
+@@ -233,7 +244,7 @@ static void bnxt_re_limit_pf_res(struct bnxt_re_dev *rdev)
+ if (!bnxt_qplib_is_chip_gen_p5_p7(rdev->chip_ctx))
+ for (i = 0; i < MAX_TQM_ALLOC_REQ; i++)
+ rdev->qplib_ctx.tqm_ctx.qcount[i] =
+- rdev->dev_attr.tqm_alloc_reqs[i];
++ rdev->dev_attr->tqm_alloc_reqs[i];
+ }
+
+ static void bnxt_re_limit_vf_res(struct bnxt_qplib_ctx *qplib_ctx, u32 num_vf)
+@@ -314,10 +325,12 @@ static void bnxt_re_stop_irq(void *handle)
+ int indx;
+
+ rdev = en_info->rdev;
++ if (!rdev)
++ return;
+ rcfw = &rdev->rcfw;
+
+- for (indx = BNXT_RE_NQ_IDX; indx < rdev->num_msix; indx++) {
+- nq = &rdev->nq[indx - 1];
++ for (indx = BNXT_RE_NQ_IDX; indx < rdev->nqr->num_msix; indx++) {
++ nq = &rdev->nqr->nq[indx - 1];
+ bnxt_qplib_nq_stop_irq(nq, false);
+ }
+
+@@ -334,7 +347,9 @@ static void bnxt_re_start_irq(void *handle, struct bnxt_msix_entry *ent)
+ int indx, rc;
+
+ rdev = en_info->rdev;
+- msix_ent = rdev->en_dev->msix_entries;
++ if (!rdev)
++ return;
++ msix_ent = rdev->nqr->msix_entries;
+ rcfw = &rdev->rcfw;
+ if (!ent) {
+ /* Not setting the f/w timeout bit in rcfw.
+@@ -349,8 +364,8 @@ static void bnxt_re_start_irq(void *handle, struct bnxt_msix_entry *ent)
+ /* Vectors may change after restart, so update with new vectors
+ * in device structure.
+ */
+- for (indx = 0; indx < rdev->num_msix; indx++)
+- rdev->en_dev->msix_entries[indx].vector = ent[indx].vector;
++ for (indx = 0; indx < rdev->nqr->num_msix; indx++)
++ rdev->nqr->msix_entries[indx].vector = ent[indx].vector;
+
+ rc = bnxt_qplib_rcfw_start_irq(rcfw, msix_ent[BNXT_RE_AEQ_IDX].vector,
+ false);
+@@ -358,8 +373,8 @@ static void bnxt_re_start_irq(void *handle, struct bnxt_msix_entry *ent)
+ ibdev_warn(&rdev->ibdev, "Failed to reinit CREQ\n");
+ return;
+ }
+- for (indx = BNXT_RE_NQ_IDX ; indx < rdev->num_msix; indx++) {
+- nq = &rdev->nq[indx - 1];
++ for (indx = BNXT_RE_NQ_IDX ; indx < rdev->nqr->num_msix; indx++) {
++ nq = &rdev->nqr->nq[indx - 1];
+ rc = bnxt_qplib_nq_start_irq(nq, indx - 1,
+ msix_ent[indx].vector, false);
+ if (rc) {
+@@ -943,7 +958,7 @@ static int bnxt_re_register_ib(struct bnxt_re_dev *rdev)
+
+ addrconf_addr_eui48((u8 *)&ibdev->node_guid, rdev->netdev->dev_addr);
+
+- ibdev->num_comp_vectors = rdev->num_msix - 1;
++ ibdev->num_comp_vectors = rdev->nqr->num_msix - 1;
+ ibdev->dev.parent = &rdev->en_dev->pdev->dev;
+ ibdev->local_dma_lkey = BNXT_QPLIB_RSVD_LKEY;
+
+@@ -1276,8 +1291,8 @@ static void bnxt_re_cleanup_res(struct bnxt_re_dev *rdev)
+ {
+ int i;
+
+- for (i = 1; i < rdev->num_msix; i++)
+- bnxt_qplib_disable_nq(&rdev->nq[i - 1]);
++ for (i = 1; i < rdev->nqr->num_msix; i++)
++ bnxt_qplib_disable_nq(&rdev->nqr->nq[i - 1]);
+
+ if (rdev->qplib_res.rcfw)
+ bnxt_qplib_cleanup_res(&rdev->qplib_res);
+@@ -1291,10 +1306,10 @@ static int bnxt_re_init_res(struct bnxt_re_dev *rdev)
+
+ bnxt_qplib_init_res(&rdev->qplib_res);
+
+- for (i = 1; i < rdev->num_msix ; i++) {
+- db_offt = rdev->en_dev->msix_entries[i].db_offset;
+- rc = bnxt_qplib_enable_nq(rdev->en_dev->pdev, &rdev->nq[i - 1],
+- i - 1, rdev->en_dev->msix_entries[i].vector,
++ for (i = 1; i < rdev->nqr->num_msix ; i++) {
++ db_offt = rdev->nqr->msix_entries[i].db_offset;
++ rc = bnxt_qplib_enable_nq(rdev->en_dev->pdev, &rdev->nqr->nq[i - 1],
++ i - 1, rdev->nqr->msix_entries[i].vector,
+ db_offt, &bnxt_re_cqn_handler,
+ &bnxt_re_srqn_handler);
+ if (rc) {
+@@ -1307,20 +1322,22 @@ static int bnxt_re_init_res(struct bnxt_re_dev *rdev)
+ return 0;
+ fail:
+ for (i = num_vec_enabled; i >= 0; i--)
+- bnxt_qplib_disable_nq(&rdev->nq[i]);
++ bnxt_qplib_disable_nq(&rdev->nqr->nq[i]);
+ return rc;
+ }
+
+ static void bnxt_re_free_nq_res(struct bnxt_re_dev *rdev)
+ {
++ struct bnxt_qplib_nq *nq;
+ u8 type;
+ int i;
+
+- for (i = 0; i < rdev->num_msix - 1; i++) {
++ for (i = 0; i < rdev->nqr->num_msix - 1; i++) {
+ type = bnxt_qplib_get_ring_type(rdev->chip_ctx);
+- bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id, type);
+- bnxt_qplib_free_nq(&rdev->nq[i]);
+- rdev->nq[i].res = NULL;
++ nq = &rdev->nqr->nq[i];
++ bnxt_re_net_ring_free(rdev, nq->ring_id, type);
++ bnxt_qplib_free_nq(nq);
++ nq->res = NULL;
+ }
+ }
+
+@@ -1347,12 +1364,11 @@ static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
+
+ /* Configure and allocate resources for qplib */
+ rdev->qplib_res.rcfw = &rdev->rcfw;
+- rc = bnxt_qplib_get_dev_attr(&rdev->rcfw, &rdev->dev_attr);
++ rc = bnxt_qplib_get_dev_attr(&rdev->rcfw);
+ if (rc)
+ goto fail;
+
+- rc = bnxt_qplib_alloc_res(&rdev->qplib_res, rdev->en_dev->pdev,
+- rdev->netdev, &rdev->dev_attr);
++ rc = bnxt_qplib_alloc_res(&rdev->qplib_res, rdev->netdev);
+ if (rc)
+ goto fail;
+
+@@ -1362,12 +1378,12 @@ static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
+ if (rc)
+ goto dealloc_res;
+
+- for (i = 0; i < rdev->num_msix - 1; i++) {
++ for (i = 0; i < rdev->nqr->num_msix - 1; i++) {
+ struct bnxt_qplib_nq *nq;
+
+- nq = &rdev->nq[i];
++ nq = &rdev->nqr->nq[i];
+ nq->hwq.max_elements = BNXT_QPLIB_NQE_MAX_CNT;
+- rc = bnxt_qplib_alloc_nq(&rdev->qplib_res, &rdev->nq[i]);
++ rc = bnxt_qplib_alloc_nq(&rdev->qplib_res, nq);
+ if (rc) {
+ ibdev_err(&rdev->ibdev, "Alloc Failed NQ%d rc:%#x",
+ i, rc);
+@@ -1375,17 +1391,17 @@ static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
+ }
+ type = bnxt_qplib_get_ring_type(rdev->chip_ctx);
+ rattr.dma_arr = nq->hwq.pbl[PBL_LVL_0].pg_map_arr;
+- rattr.pages = nq->hwq.pbl[rdev->nq[i].hwq.level].pg_count;
++ rattr.pages = nq->hwq.pbl[rdev->nqr->nq[i].hwq.level].pg_count;
+ rattr.type = type;
+ rattr.mode = RING_ALLOC_REQ_INT_MODE_MSIX;
+ rattr.depth = BNXT_QPLIB_NQE_MAX_CNT - 1;
+- rattr.lrid = rdev->en_dev->msix_entries[i + 1].ring_idx;
++ rattr.lrid = rdev->nqr->msix_entries[i + 1].ring_idx;
+ rc = bnxt_re_net_ring_alloc(rdev, &rattr, &nq->ring_id);
+ if (rc) {
+ ibdev_err(&rdev->ibdev,
+ "Failed to allocate NQ fw id with rc = 0x%x",
+ rc);
+- bnxt_qplib_free_nq(&rdev->nq[i]);
++ bnxt_qplib_free_nq(nq);
+ goto free_nq;
+ }
+ num_vec_created++;
+@@ -1394,8 +1410,8 @@ static int bnxt_re_alloc_res(struct bnxt_re_dev *rdev)
+ free_nq:
+ for (i = num_vec_created - 1; i >= 0; i--) {
+ type = bnxt_qplib_get_ring_type(rdev->chip_ctx);
+- bnxt_re_net_ring_free(rdev, rdev->nq[i].ring_id, type);
+- bnxt_qplib_free_nq(&rdev->nq[i]);
++ bnxt_re_net_ring_free(rdev, rdev->nqr->nq[i].ring_id, type);
++ bnxt_qplib_free_nq(&rdev->nqr->nq[i]);
+ }
+ bnxt_qplib_dealloc_dpi(&rdev->qplib_res,
+ &rdev->dpi_privileged);
+@@ -1584,6 +1600,21 @@ static int bnxt_re_ib_init(struct bnxt_re_dev *rdev)
+ return rc;
+ }
+
++static int bnxt_re_alloc_nqr_mem(struct bnxt_re_dev *rdev)
++{
++ rdev->nqr = kzalloc(sizeof(*rdev->nqr), GFP_KERNEL);
++ if (!rdev->nqr)
++ return -ENOMEM;
++
++ return 0;
++}
++
++static void bnxt_re_free_nqr_mem(struct bnxt_re_dev *rdev)
++{
++ kfree(rdev->nqr);
++ rdev->nqr = NULL;
++}
++
+ static void bnxt_re_dev_uninit(struct bnxt_re_dev *rdev, u8 op_type)
+ {
+ u8 type;
+@@ -1611,11 +1642,12 @@ static void bnxt_re_dev_uninit(struct bnxt_re_dev *rdev, u8 op_type)
+ bnxt_qplib_free_rcfw_channel(&rdev->rcfw);
+ }
+
+- rdev->num_msix = 0;
++ rdev->nqr->num_msix = 0;
+
+ if (rdev->pacing.dbr_pacing)
+ bnxt_re_deinitialize_dbr_pacing(rdev);
+
++ bnxt_re_free_nqr_mem(rdev);
+ bnxt_re_destroy_chip_ctx(rdev);
+ if (op_type == BNXT_RE_COMPLETE_REMOVE) {
+ if (test_and_clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags))
+@@ -1653,6 +1685,17 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 op_type)
+ }
+ set_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags);
+
++ if (rdev->en_dev->ulp_tbl->msix_requested < BNXT_RE_MIN_MSIX) {
++ ibdev_err(&rdev->ibdev,
++ "RoCE requires minimum 2 MSI-X vectors, but only %d reserved\n",
++ rdev->en_dev->ulp_tbl->msix_requested);
++ bnxt_unregister_dev(rdev->en_dev);
++ clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags);
++ return -EINVAL;
++ }
++ ibdev_dbg(&rdev->ibdev, "Got %d MSI-X vectors\n",
++ rdev->en_dev->ulp_tbl->msix_requested);
++
+ rc = bnxt_re_setup_chip_ctx(rdev);
+ if (rc) {
+ bnxt_unregister_dev(rdev->en_dev);
+@@ -1661,19 +1704,20 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 op_type)
+ return -EINVAL;
+ }
+
++ rc = bnxt_re_alloc_nqr_mem(rdev);
++ if (rc) {
++ bnxt_re_destroy_chip_ctx(rdev);
++ bnxt_unregister_dev(rdev->en_dev);
++ clear_bit(BNXT_RE_FLAG_NETDEV_REGISTERED, &rdev->flags);
++ return rc;
++ }
++ rdev->nqr->num_msix = rdev->en_dev->ulp_tbl->msix_requested;
++ memcpy(rdev->nqr->msix_entries, rdev->en_dev->msix_entries,
++ sizeof(struct bnxt_msix_entry) * rdev->nqr->num_msix);
++
+ /* Check whether VF or PF */
+ bnxt_re_get_sriov_func_type(rdev);
+
+- if (!rdev->en_dev->ulp_tbl->msix_requested) {
+- ibdev_err(&rdev->ibdev,
+- "Failed to get MSI-X vectors: %#x\n", rc);
+- rc = -EINVAL;
+- goto fail;
+- }
+- ibdev_dbg(&rdev->ibdev, "Got %d MSI-X vectors\n",
+- rdev->en_dev->ulp_tbl->msix_requested);
+- rdev->num_msix = rdev->en_dev->ulp_tbl->msix_requested;
+-
+ bnxt_re_query_hwrm_intf_version(rdev);
+
+ /* Establish RCFW Communication Channel to initialize the context
+@@ -1695,14 +1739,14 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 op_type)
+ rattr.type = type;
+ rattr.mode = RING_ALLOC_REQ_INT_MODE_MSIX;
+ rattr.depth = BNXT_QPLIB_CREQE_MAX_CNT - 1;
+- rattr.lrid = rdev->en_dev->msix_entries[BNXT_RE_AEQ_IDX].ring_idx;
++ rattr.lrid = rdev->nqr->msix_entries[BNXT_RE_AEQ_IDX].ring_idx;
+ rc = bnxt_re_net_ring_alloc(rdev, &rattr, &creq->ring_id);
+ if (rc) {
+ ibdev_err(&rdev->ibdev, "Failed to allocate CREQ: %#x\n", rc);
+ goto free_rcfw;
+ }
+- db_offt = rdev->en_dev->msix_entries[BNXT_RE_AEQ_IDX].db_offset;
+- vid = rdev->en_dev->msix_entries[BNXT_RE_AEQ_IDX].vector;
++ db_offt = rdev->nqr->msix_entries[BNXT_RE_AEQ_IDX].db_offset;
++ vid = rdev->nqr->msix_entries[BNXT_RE_AEQ_IDX].vector;
+ rc = bnxt_qplib_enable_rcfw_channel(&rdev->rcfw,
+ vid, db_offt,
+ &bnxt_re_aeq_handler);
+@@ -1722,7 +1766,7 @@ static int bnxt_re_dev_init(struct bnxt_re_dev *rdev, u8 op_type)
+ rdev->pacing.dbr_pacing = false;
+ }
+ }
+- rc = bnxt_qplib_get_dev_attr(&rdev->rcfw, &rdev->dev_attr);
++ rc = bnxt_qplib_get_dev_attr(&rdev->rcfw);
+ if (rc)
+ goto disable_rcfw;
+
+@@ -2047,6 +2091,7 @@ static int bnxt_re_suspend(struct auxiliary_device *adev, pm_message_t state)
+ ibdev_info(&rdev->ibdev, "%s: L2 driver notified to stop en_state 0x%lx",
+ __func__, en_dev->en_state);
+ bnxt_re_remove_device(rdev, BNXT_RE_PRE_RECOVERY_REMOVE, adev);
++ bnxt_re_update_en_info_rdev(NULL, en_info, adev);
+ mutex_unlock(&bnxt_re_mutex);
+
+ return 0;
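/*
 * Shape of the bnxt_re rework above, modelled in user space: the
 * NQ/MSI-X bookkeeping moves into a separately allocated container,
 * the vector table is snapshotted once at init time, and the aux
 * stop/start callbacks bail out when the device is already torn down
 * (rdev == NULL during recovery). All names below are illustrative,
 * not driver API.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct msix_entry { int vector; };

struct nq_records {
    int num_msix;
    struct msix_entry msix_entries[8];
};

struct dev {
    struct nq_records *nqr;
};

static int dev_init(struct dev *d, const struct msix_entry *src, int n)
{
    d->nqr = calloc(1, sizeof(*d->nqr));
    if (!d->nqr)
        return -1;
    d->nqr->num_msix = n;
    memcpy(d->nqr->msix_entries, src, sizeof(*src) * n);  /* snapshot */
    return 0;
}

static void stop_irq(struct dev *d)
{
    if (!d || !d->nqr)  /* recovery can race with teardown */
        return;
    printf("stopping %d vectors\n", d->nqr->num_msix);
}

int main(void)
{
    struct msix_entry ent[2] = { { 32 }, { 33 } };
    struct dev d = { 0 };

    if (dev_init(&d, ent, 2))
        return 1;
    stop_irq(&d);
    free(d.nqr);
    return 0;
}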
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.c b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+index 96ceec1e8199a6..02922a0987ad7a 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.c
+@@ -876,14 +876,13 @@ void bnxt_qplib_free_res(struct bnxt_qplib_res *res)
+ bnxt_qplib_free_dpi_tbl(res, &res->dpi_tbl);
+ }
+
+-int bnxt_qplib_alloc_res(struct bnxt_qplib_res *res, struct pci_dev *pdev,
+- struct net_device *netdev,
+- struct bnxt_qplib_dev_attr *dev_attr)
++int bnxt_qplib_alloc_res(struct bnxt_qplib_res *res, struct net_device *netdev)
+ {
++ struct bnxt_qplib_dev_attr *dev_attr;
+ int rc;
+
+- res->pdev = pdev;
+ res->netdev = netdev;
++ dev_attr = res->dattr;
+
+ rc = bnxt_qplib_alloc_sgid_tbl(res, &res->sgid_tbl, dev_attr->max_sgid);
+ if (rc)
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_res.h b/drivers/infiniband/hw/bnxt_re/qplib_res.h
+index c2f710364e0ffe..b40cff8252bc4d 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_res.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_res.h
+@@ -421,9 +421,7 @@ int bnxt_qplib_dealloc_dpi(struct bnxt_qplib_res *res,
+ void bnxt_qplib_cleanup_res(struct bnxt_qplib_res *res);
+ int bnxt_qplib_init_res(struct bnxt_qplib_res *res);
+ void bnxt_qplib_free_res(struct bnxt_qplib_res *res);
+-int bnxt_qplib_alloc_res(struct bnxt_qplib_res *res, struct pci_dev *pdev,
+- struct net_device *netdev,
+- struct bnxt_qplib_dev_attr *dev_attr);
++int bnxt_qplib_alloc_res(struct bnxt_qplib_res *res, struct net_device *netdev);
+ void bnxt_qplib_free_ctx(struct bnxt_qplib_res *res,
+ struct bnxt_qplib_ctx *ctx);
+ int bnxt_qplib_alloc_ctx(struct bnxt_qplib_res *res,
+@@ -546,6 +544,14 @@ static inline bool _is_ext_stats_supported(u16 dev_cap_flags)
+ CREQ_QUERY_FUNC_RESP_SB_EXT_STATS;
+ }
+
++static inline int bnxt_ext_stats_supported(struct bnxt_qplib_chip_ctx *ctx,
++ u16 flags, bool virtfn)
++{
++ /* ext stats supported if the cap flag is set AND the function is a PF OR a Thor2 VF */
++ return (_is_ext_stats_supported(flags) &&
++ ((virtfn && bnxt_qplib_is_chip_gen_p7(ctx)) || (!virtfn)));
++}
++
+ static inline bool _is_hw_retx_supported(u16 dev_cap_flags)
+ {
+ return dev_cap_flags &
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.c b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+index 3cca7b1395f6a7..807439b1acb51f 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.c
+@@ -88,9 +88,9 @@ static void bnxt_qplib_query_version(struct bnxt_qplib_rcfw *rcfw,
+ fw_ver[3] = resp.fw_rsvd;
+ }
+
+-int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+- struct bnxt_qplib_dev_attr *attr)
++int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw)
+ {
++ struct bnxt_qplib_dev_attr *attr = rcfw->res->dattr;
+ struct creq_query_func_resp resp = {};
+ struct bnxt_qplib_cmdqmsg msg = {};
+ struct creq_query_func_resp_sb *sb;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_sp.h b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+index ecf3f45fea74fe..de959b3c28e01f 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_sp.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_sp.h
+@@ -325,8 +325,7 @@ int bnxt_qplib_add_sgid(struct bnxt_qplib_sgid_tbl *sgid_tbl,
+ int bnxt_qplib_update_sgid(struct bnxt_qplib_sgid_tbl *sgid_tbl,
+ struct bnxt_qplib_gid *gid, u16 gid_idx,
+ const u8 *smac);
+-int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw,
+- struct bnxt_qplib_dev_attr *attr);
++int bnxt_qplib_get_dev_attr(struct bnxt_qplib_rcfw *rcfw);
+ int bnxt_qplib_set_func_resources(struct bnxt_qplib_res *res,
+ struct bnxt_qplib_rcfw *rcfw,
+ struct bnxt_qplib_ctx *ctx);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+index 0144e7210d05a1..f5c3e560df58d7 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.c
+@@ -1286,10 +1286,8 @@ static u32 hns_roce_cmdq_tx_timeout(u16 opcode, u32 tx_timeout)
+ return tx_timeout;
+ }
+
+-static void hns_roce_wait_csq_done(struct hns_roce_dev *hr_dev, u16 opcode)
++static void hns_roce_wait_csq_done(struct hns_roce_dev *hr_dev, u32 tx_timeout)
+ {
+- struct hns_roce_v2_priv *priv = hr_dev->priv;
+- u32 tx_timeout = hns_roce_cmdq_tx_timeout(opcode, priv->cmq.tx_timeout);
+ u32 timeout = 0;
+
+ do {
+@@ -1299,8 +1297,9 @@ static void hns_roce_wait_csq_done(struct hns_roce_dev *hr_dev, u16 opcode)
+ } while (++timeout < tx_timeout);
+ }
+
+-static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+- struct hns_roce_cmq_desc *desc, int num)
++static int __hns_roce_cmq_send_one(struct hns_roce_dev *hr_dev,
++ struct hns_roce_cmq_desc *desc,
++ int num, u32 tx_timeout)
+ {
+ struct hns_roce_v2_priv *priv = hr_dev->priv;
+ struct hns_roce_v2_cmq_ring *csq = &priv->cmq.csq;
+@@ -1309,8 +1308,6 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ int ret;
+ int i;
+
+- spin_lock_bh(&csq->lock);
+-
+ tail = csq->head;
+
+ for (i = 0; i < num; i++) {
+@@ -1324,22 +1321,17 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+
+ atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_CMDS_CNT]);
+
+- hns_roce_wait_csq_done(hr_dev, le16_to_cpu(desc->opcode));
++ hns_roce_wait_csq_done(hr_dev, tx_timeout);
+ if (hns_roce_cmq_csq_done(hr_dev)) {
+ ret = 0;
+ for (i = 0; i < num; i++) {
+ /* check the result of hardware write back */
+- desc[i] = csq->desc[tail++];
++ desc_ret = le16_to_cpu(csq->desc[tail++].retval);
+ if (tail == csq->desc_num)
+ tail = 0;
+-
+- desc_ret = le16_to_cpu(desc[i].retval);
+ if (likely(desc_ret == CMD_EXEC_SUCCESS))
+ continue;
+
+- dev_err_ratelimited(hr_dev->dev,
+- "Cmdq IO error, opcode = 0x%x, return = 0x%x.\n",
+- desc->opcode, desc_ret);
+ ret = hns_roce_cmd_err_convert_errno(desc_ret);
+ }
+ } else {
+@@ -1354,14 +1346,54 @@ static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ ret = -EAGAIN;
+ }
+
+- spin_unlock_bh(&csq->lock);
+-
+ if (ret)
+ atomic64_inc(&hr_dev->dfx_cnt[HNS_ROCE_DFX_CMDS_ERR_CNT]);
+
+ return ret;
+ }
+
++static int __hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
++ struct hns_roce_cmq_desc *desc, int num)
++{
++ struct hns_roce_v2_priv *priv = hr_dev->priv;
++ struct hns_roce_v2_cmq_ring *csq = &priv->cmq.csq;
++ u16 opcode = le16_to_cpu(desc->opcode);
++ u32 tx_timeout = hns_roce_cmdq_tx_timeout(opcode, priv->cmq.tx_timeout);
++ u8 try_cnt = HNS_ROCE_OPC_POST_MB_TRY_CNT;
++ u32 rsv_tail;
++ int ret;
++ int i;
++
++ while (try_cnt) {
++ try_cnt--;
++
++ spin_lock_bh(&csq->lock);
++ rsv_tail = csq->head;
++ ret = __hns_roce_cmq_send_one(hr_dev, desc, num, tx_timeout);
++ if (opcode == HNS_ROCE_OPC_POST_MB && ret == -ETIME &&
++ try_cnt) {
++ spin_unlock_bh(&csq->lock);
++ mdelay(HNS_ROCE_OPC_POST_MB_RETRY_GAP_MSEC);
++ continue;
++ }
++
++ for (i = 0; i < num; i++) {
++ desc[i] = csq->desc[rsv_tail++];
++ if (rsv_tail == csq->desc_num)
++ rsv_tail = 0;
++ }
++ spin_unlock_bh(&csq->lock);
++ break;
++ }
++
++ if (ret)
++ dev_err_ratelimited(hr_dev->dev,
++ "Cmdq IO error, opcode = 0x%x, return = %d.\n",
++ opcode, ret);
++
++ return ret;
++}
++
+ static int hns_roce_cmq_send(struct hns_roce_dev *hr_dev,
+ struct hns_roce_cmq_desc *desc, int num)
+ {
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+index cbdbc9edbce6ec..91a5665465ffba 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
++++ b/drivers/infiniband/hw/hns/hns_roce_hw_v2.h
+@@ -230,6 +230,8 @@ enum hns_roce_opcode_type {
+ };
+
+ #define HNS_ROCE_OPC_POST_MB_TIMEOUT 35000
++#define HNS_ROCE_OPC_POST_MB_TRY_CNT 8
++#define HNS_ROCE_OPC_POST_MB_RETRY_GAP_MSEC 5
+ struct hns_roce_cmdq_tx_timeout_map {
+ u16 opcode;
+ u32 tx_timeout;
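/*
 * User-space model of the bounded mailbox retry added in the hns_roce
 * hunks above: up to HNS_ROCE_OPC_POST_MB_TRY_CNT (8) attempts spaced
 * HNS_ROCE_OPC_POST_MB_RETRY_GAP_MSEC (5 ms) apart, retried only for
 * the mailbox opcode on -ETIME, and never sleeping while the queue
 * lock is held. fake_send() is a stand-in, not driver API.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define TRY_CNT      8
#define RETRY_GAP_MS 5

static pthread_mutex_t csq_lock = PTHREAD_MUTEX_INITIALIZER;

static int fake_send(int attempt)
{
    return attempt < 2 ? -ETIME : 0;  /* two timeouts, then success */
}

static int send_with_retry(void)
{
    int attempt, ret = -ETIME;

    for (attempt = 0; attempt < TRY_CNT; attempt++) {
        pthread_mutex_lock(&csq_lock);
        ret = fake_send(attempt);
        pthread_mutex_unlock(&csq_lock);
        if (ret != -ETIME)  /* only timeouts are retried */
            break;
        if (attempt < TRY_CNT - 1)  /* no gap after the final try */
            usleep(RETRY_GAP_MS * 1000);  /* sleep with the lock dropped */
    }
    return ret;
}

int main(void)
{
    printf("send_with_retry() = %d\n", send_with_retry());
    return 0;
}
/* build: cc -pthread demo.c */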
+diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
+index 67c2d43135a8af..457cea6d990958 100644
+--- a/drivers/infiniband/hw/mana/main.c
++++ b/drivers/infiniband/hw/mana/main.c
+@@ -174,7 +174,7 @@ static int mana_gd_allocate_doorbell_page(struct gdma_context *gc,
+
+ req.resource_type = GDMA_RESOURCE_DOORBELL_PAGE;
+ req.num_resources = 1;
+- req.alignment = 1;
++ req.alignment = PAGE_SIZE / MANA_PAGE_SIZE;
+
+ /* Have GDMA start searching from 0 */
+ req.allocated_resources = 0;
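/*
 * The mana hunk above asks the GDMA for a doorbell page aligned to the
 * kernel page size, expressed in units of MANA's fixed 4 KiB pages. A
 * quick check of the arithmetic (MANA_PAGE_SIZE assumed to be 4096, as
 * in the upstream headers):
 */
#include <stdio.h>

#define MANA_PAGE_SIZE 4096u

int main(void)
{
    unsigned int page_sizes[] = { 4096, 16384, 65536 };  /* x86-64, arm64 16K/64K */

    for (unsigned int i = 0; i < 3; i++)
        printf("PAGE_SIZE=%6u -> alignment = %u MANA pages\n",
               page_sizes[i], page_sizes[i] / MANA_PAGE_SIZE);
    return 0;  /* 1, 4 and 16 respectively; the old code always passed 1 */
}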
+diff --git a/drivers/infiniband/hw/mlx5/ah.c b/drivers/infiniband/hw/mlx5/ah.c
+index 505bc47fd575d5..99036afb3aef0b 100644
+--- a/drivers/infiniband/hw/mlx5/ah.c
++++ b/drivers/infiniband/hw/mlx5/ah.c
+@@ -67,7 +67,8 @@ static void create_ib_ah(struct mlx5_ib_dev *dev, struct mlx5_ib_ah *ah,
+ ah->av.tclass = grh->traffic_class;
+ }
+
+- ah->av.stat_rate_sl = (rdma_ah_get_static_rate(ah_attr) << 4);
++ ah->av.stat_rate_sl =
++ (mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah_attr)) << 4);
+
+ if (ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE) {
+ if (init_attr->xmit_slave)
+diff --git a/drivers/infiniband/hw/mlx5/counters.c b/drivers/infiniband/hw/mlx5/counters.c
+index 4f6c1968a2ee3c..81cfa74147a183 100644
+--- a/drivers/infiniband/hw/mlx5/counters.c
++++ b/drivers/infiniband/hw/mlx5/counters.c
+@@ -546,6 +546,7 @@ static int mlx5_ib_counter_bind_qp(struct rdma_counter *counter,
+ struct ib_qp *qp)
+ {
+ struct mlx5_ib_dev *dev = to_mdev(qp->device);
++ bool new = false;
+ int err;
+
+ if (!counter->id) {
+@@ -560,6 +561,7 @@ static int mlx5_ib_counter_bind_qp(struct rdma_counter *counter,
+ return err;
+ counter->id =
+ MLX5_GET(alloc_q_counter_out, out, counter_set_id);
++ new = true;
+ }
+
+ err = mlx5_ib_qp_set_counter(qp, counter);
+@@ -569,8 +571,10 @@ static int mlx5_ib_counter_bind_qp(struct rdma_counter *counter,
+ return 0;
+
+ fail_set_counter:
+- mlx5_ib_counter_dealloc(counter);
+- counter->id = 0;
++ if (new) {
++ mlx5_ib_counter_dealloc(counter);
++ counter->id = 0;
++ }
+
+ return err;
+ }
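/*
 * Shape of the fix in the counters.c hunk above: the error path may
 * only free the counter if this call allocated it; a pre-existing
 * counter bound by an earlier caller must survive a failed bind. The
 * alloc/dealloc/set helpers below are stubs for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>

struct counter { unsigned int id; };

static int alloc_counter(struct counter *c)    { c->id = 42; return 0; }
static void dealloc_counter(struct counter *c) { c->id = 0; }
static int set_counter(void)                   { return -1; }  /* force failure */

static int bind_counter(struct counter *c)
{
    bool new = false;
    int err;

    if (!c->id) {
        err = alloc_counter(c);
        if (err)
            return err;
        new = true;
    }

    err = set_counter();
    if (err && new)  /* roll back only our own allocation */
        dealloc_counter(c);
    return err;
}

int main(void)
{
    struct counter c = { .id = 7 };  /* pre-existing counter */

    bind_counter(&c);
    printf("id after failed bind: %u (still 7)\n", c.id);
    return 0;
}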
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index bb02b6adbf2c21..753faa9ad06a88 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -1550,7 +1550,7 @@ static void mlx5_ib_dmabuf_invalidate_cb(struct dma_buf_attachment *attach)
+
+ dma_resv_assert_held(umem_dmabuf->attach->dmabuf->resv);
+
+- if (!umem_dmabuf->sgt)
++ if (!umem_dmabuf->sgt || !mr)
+ return;
+
+ mlx5r_umr_update_mr_pas(mr, MLX5_IB_UPD_XLT_ZAP);
+@@ -1935,7 +1935,8 @@ mlx5_alloc_priv_descs(struct ib_device *device,
+ static void
+ mlx5_free_priv_descs(struct mlx5_ib_mr *mr)
+ {
+- if (!mr->umem && !mr->data_direct && mr->descs) {
++ if (!mr->umem && !mr->data_direct &&
++ mr->ibmr.type != IB_MR_TYPE_DM && mr->descs) {
+ struct ib_device *device = mr->ibmr.device;
+ int size = mr->max_descs * mr->desc_size;
+ struct mlx5_ib_dev *dev = to_mdev(device);
+@@ -2022,11 +2023,16 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ struct mlx5_ib_dev *dev = to_mdev(mr->ibmr.device);
+ struct mlx5_cache_ent *ent = mr->mmkey.cache_ent;
+ bool is_odp = is_odp_mr(mr);
++ bool is_odp_dma_buf = is_dmabuf_mr(mr) &&
++ !to_ib_umem_dmabuf(mr->umem)->pinned;
+ int ret = 0;
+
+ if (is_odp)
+ mutex_lock(&to_ib_umem_odp(mr->umem)->umem_mutex);
+
++ if (is_odp_dma_buf)
++ dma_resv_lock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv, NULL);
++
+ if (mr->mmkey.cacheable && !mlx5r_umr_revoke_mr(mr) && !cache_ent_find_and_store(dev, mr)) {
+ ent = mr->mmkey.cache_ent;
+ /* upon storing to a clean temp entry - schedule its cleanup */
+@@ -2054,6 +2060,12 @@ static int mlx5_revoke_mr(struct mlx5_ib_mr *mr)
+ mutex_unlock(&to_ib_umem_odp(mr->umem)->umem_mutex);
+ }
+
++ if (is_odp_dma_buf) {
++ if (!ret)
++ to_ib_umem_dmabuf(mr->umem)->private = NULL;
++ dma_resv_unlock(to_ib_umem_dmabuf(mr->umem)->attach->dmabuf->resv);
++ }
++
+ return ret;
+ }
+
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index 1d3bf56157702d..b4e2a6f9cb9c3d 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -242,6 +242,7 @@ static void destroy_unused_implicit_child_mr(struct mlx5_ib_mr *mr)
+ if (__xa_cmpxchg(&imr->implicit_children, idx, mr, NULL, GFP_KERNEL) !=
+ mr) {
+ xa_unlock(&imr->implicit_children);
++ mlx5r_deref_odp_mkey(&imr->mmkey);
+ return;
+ }
+
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 10ce3b44f645f4..ded139b4e87aa4 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -3420,11 +3420,11 @@ static int ib_to_mlx5_rate_map(u8 rate)
+ return 0;
+ }
+
+-static int ib_rate_to_mlx5(struct mlx5_ib_dev *dev, u8 rate)
++int mlx5r_ib_rate(struct mlx5_ib_dev *dev, u8 rate)
+ {
+ u32 stat_rate_support;
+
+- if (rate == IB_RATE_PORT_CURRENT)
++ if (rate == IB_RATE_PORT_CURRENT || rate == IB_RATE_800_GBPS)
+ return 0;
+
+ if (rate < IB_RATE_2_5_GBPS || rate > IB_RATE_800_GBPS)
+@@ -3569,7 +3569,7 @@ static int mlx5_set_path(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
+ sizeof(grh->dgid.raw));
+ }
+
+- err = ib_rate_to_mlx5(dev, rdma_ah_get_static_rate(ah));
++ err = mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah));
+ if (err < 0)
+ return err;
+ MLX5_SET(ads, path, stat_rate, err);
+@@ -4547,6 +4547,8 @@ static int mlx5_ib_modify_dct(struct ib_qp *ibqp, struct ib_qp_attr *attr,
+
+ set_id = mlx5_ib_get_counters_id(dev, attr->port_num - 1);
+ MLX5_SET(dctc, dctc, counter_set_id, set_id);
++
++ qp->port = attr->port_num;
+ } else if (cur_state == IB_QPS_INIT && new_state == IB_QPS_RTR) {
+ struct mlx5_ib_modify_qp_resp resp = {};
+ u32 out[MLX5_ST_SZ_DW(create_dct_out)] = {};
+@@ -5033,7 +5035,7 @@ static int mlx5_ib_dct_query_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *mqp,
+ }
+
+ if (qp_attr_mask & IB_QP_PORT)
+- qp_attr->port_num = MLX5_GET(dctc, dctc, port);
++ qp_attr->port_num = mqp->port;
+ if (qp_attr_mask & IB_QP_MIN_RNR_TIMER)
+ qp_attr->min_rnr_timer = MLX5_GET(dctc, dctc, min_rnr_nak);
+ if (qp_attr_mask & IB_QP_AV) {
+diff --git a/drivers/infiniband/hw/mlx5/qp.h b/drivers/infiniband/hw/mlx5/qp.h
+index b6ee7c3ee1ca1b..2530e7730635f3 100644
+--- a/drivers/infiniband/hw/mlx5/qp.h
++++ b/drivers/infiniband/hw/mlx5/qp.h
+@@ -56,4 +56,5 @@ int mlx5_core_xrcd_dealloc(struct mlx5_ib_dev *dev, u32 xrcdn);
+ int mlx5_ib_qp_set_counter(struct ib_qp *qp, struct rdma_counter *counter);
+ int mlx5_ib_qp_event_init(void);
+ void mlx5_ib_qp_event_cleanup(void);
++int mlx5r_ib_rate(struct mlx5_ib_dev *dev, u8 rate);
+ #endif /* _MLX5_IB_QP_H */
+diff --git a/drivers/infiniband/hw/mlx5/umr.c b/drivers/infiniband/hw/mlx5/umr.c
+index 887fd6fa3ba930..793f3c5c4d0126 100644
+--- a/drivers/infiniband/hw/mlx5/umr.c
++++ b/drivers/infiniband/hw/mlx5/umr.c
+@@ -231,30 +231,6 @@ void mlx5r_umr_cleanup(struct mlx5_ib_dev *dev)
+ ib_dealloc_pd(dev->umrc.pd);
+ }
+
+-static int mlx5r_umr_recover(struct mlx5_ib_dev *dev)
+-{
+- struct umr_common *umrc = &dev->umrc;
+- struct ib_qp_attr attr;
+- int err;
+-
+- attr.qp_state = IB_QPS_RESET;
+- err = ib_modify_qp(umrc->qp, &attr, IB_QP_STATE);
+- if (err) {
+- mlx5_ib_dbg(dev, "Couldn't modify UMR QP\n");
+- goto err;
+- }
+-
+- err = mlx5r_umr_qp_rst2rts(dev, umrc->qp);
+- if (err)
+- goto err;
+-
+- umrc->state = MLX5_UMR_STATE_ACTIVE;
+- return 0;
+-
+-err:
+- umrc->state = MLX5_UMR_STATE_ERR;
+- return err;
+-}
+
+ static int mlx5r_umr_post_send(struct ib_qp *ibqp, u32 mkey, struct ib_cqe *cqe,
+ struct mlx5r_umr_wqe *wqe, bool with_data)
+@@ -302,6 +278,61 @@ static int mlx5r_umr_post_send(struct ib_qp *ibqp, u32 mkey, struct ib_cqe *cqe,
+ return err;
+ }
+
++static int mlx5r_umr_recover(struct mlx5_ib_dev *dev, u32 mkey,
++ struct mlx5r_umr_context *umr_context,
++ struct mlx5r_umr_wqe *wqe, bool with_data)
++{
++ struct umr_common *umrc = &dev->umrc;
++ struct ib_qp_attr attr;
++ int err;
++
++ mutex_lock(&umrc->lock);
++ /* Prevent any further WRs from being sent now */
++ if (umrc->state != MLX5_UMR_STATE_RECOVER) {
++ mlx5_ib_warn(dev, "UMR recovery encountered an unexpected state=%d\n",
++ umrc->state);
++ umrc->state = MLX5_UMR_STATE_RECOVER;
++ }
++ mutex_unlock(&umrc->lock);
++
++ /* Send a final/barrier WR (the failed one) and wait for its completion.
++ * This will ensure that all the previous WRs got a completion before
++ * we set the QP state to RESET.
++ */
++ err = mlx5r_umr_post_send(umrc->qp, mkey, &umr_context->cqe, wqe,
++ with_data);
++ if (err) {
++ mlx5_ib_warn(dev, "UMR recovery post send failed, err %d\n", err);
++ goto err;
++ }
++
++ /* Since the QP is in an error state, it will only receive
++ * IB_WC_WR_FLUSH_ERR. However, as it serves only as a barrier,
++ * we don't care about its status.
++ */
++ wait_for_completion(&umr_context->done);
++
++ attr.qp_state = IB_QPS_RESET;
++ err = ib_modify_qp(umrc->qp, &attr, IB_QP_STATE);
++ if (err) {
++ mlx5_ib_warn(dev, "Couldn't modify UMR QP to RESET, err=%d\n", err);
++ goto err;
++ }
++
++ err = mlx5r_umr_qp_rst2rts(dev, umrc->qp);
++ if (err) {
++ mlx5_ib_warn(dev, "Couldn't modify UMR QP to RTS, err=%d\n", err);
++ goto err;
++ }
++
++ umrc->state = MLX5_UMR_STATE_ACTIVE;
++ return 0;
++
++err:
++ umrc->state = MLX5_UMR_STATE_ERR;
++ return err;
++}
++
+ static void mlx5r_umr_done(struct ib_cq *cq, struct ib_wc *wc)
+ {
+ struct mlx5_ib_umr_context *context =
+@@ -366,9 +397,7 @@ static int mlx5r_umr_post_send_wait(struct mlx5_ib_dev *dev, u32 mkey,
+ mlx5_ib_warn(dev,
+ "reg umr failed (%u). Trying to recover and resubmit the flushed WQEs, mkey = %u\n",
+ umr_context.status, mkey);
+- mutex_lock(&umrc->lock);
+- err = mlx5r_umr_recover(dev);
+- mutex_unlock(&umrc->lock);
++ err = mlx5r_umr_recover(dev, mkey, &umr_context, wqe, with_data);
+ if (err)
+ mlx5_ib_warn(dev, "couldn't recover UMR, err %d\n",
+ err);
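/*
 * Control flow of the reworked UMR recovery above, modelled with
 * stubs: the failed WR is reposted as a barrier, its (flush)
 * completion is awaited so nothing older is still in flight, and only
 * then is the QP cycled through RESET back to RTS. post_wr(),
 * wait_done(), qp_reset() and qp_rst2rts() are placeholders, not
 * mlx5 API.
 */
#include <stdio.h>

static int post_wr(void)    { puts("post barrier WR"); return 0; }
static void wait_done(void) { puts("flush completion seen"); }
static int qp_reset(void)   { puts("QP -> RESET"); return 0; }
static int qp_rst2rts(void) { puts("QP -> RTS"); return 0; }

static int recover(void)
{
    int err;

    err = post_wr();  /* barrier: all older WRs complete first */
    if (err)
        return err;
    wait_done();      /* status ignored: the QP is in error */

    err = qp_reset();
    if (err)
        return err;
    return qp_rst2rts();  /* queue is usable again */
}

int main(void) { return recover(); }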
+diff --git a/drivers/iommu/intel/dmar.c b/drivers/iommu/intel/dmar.c
+index eaf862e8dea1a9..7f553f7aa3cb3b 100644
+--- a/drivers/iommu/intel/dmar.c
++++ b/drivers/iommu/intel/dmar.c
+@@ -2056,6 +2056,7 @@ int enable_drhd_fault_handling(unsigned int cpu)
+ /*
+ * Enable fault control interrupt.
+ */
++ guard(rwsem_read)(&dmar_global_lock);
+ for_each_iommu(iommu, drhd) {
+ u32 fault_status;
+ int ret;
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index cc23cfcdeb2d59..9c46a4cd384842 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -3307,7 +3307,14 @@ int __init intel_iommu_init(void)
+ iommu_device_sysfs_add(&iommu->iommu, NULL,
+ intel_iommu_groups,
+ "%s", iommu->name);
++ /*
++ * The iommu device probe is protected by the iommu_probe_device_lock.
++ * Release the dmar_global_lock before entering the device probe path
++ * to avoid unnecessary lock order splat.
++ */
++ up_read(&dmar_global_lock);
+ iommu_device_register(&iommu->iommu, &intel_iommu_ops, NULL);
++ down_read(&dmar_global_lock);
+
+ iommu_pmu_register(iommu);
+ }
+@@ -4547,9 +4554,6 @@ static int context_setup_pass_through_cb(struct pci_dev *pdev, u16 alias, void *
+ {
+ struct device *dev = data;
+
+- if (dev != &pdev->dev)
+- return 0;
+-
+ return context_setup_pass_through(dev, PCI_BUS_NUM(alias), alias & 0xff);
+ }
+
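/*
 * The iommu.c hunk above avoids a lock-order report by dropping a
 * read-held rwsem around a call that takes an unrelated lock on the
 * device-probe path. A user-space rendition of the same shape (names
 * invented):
 */
#include <pthread.h>
#include <stdio.h>

static pthread_rwlock_t global_lock = PTHREAD_RWLOCK_INITIALIZER;
static pthread_mutex_t probe_lock = PTHREAD_MUTEX_INITIALIZER;

static void register_device(void)
{
    pthread_mutex_lock(&probe_lock);  /* probe path takes its own lock */
    puts("device registered");
    pthread_mutex_unlock(&probe_lock);
}

int main(void)
{
    pthread_rwlock_rdlock(&global_lock);
    /* ... work that genuinely needs global_lock ... */

    pthread_rwlock_unlock(&global_lock);  /* drop before crossing locks */
    register_device();
    pthread_rwlock_rdlock(&global_lock);  /* reacquire and continue */

    /* ... more work under global_lock ... */
    pthread_rwlock_unlock(&global_lock);
    return 0;
}
/* build: cc -pthread demo.c */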
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index ee9f7cecd78e0e..555dc06b942287 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -3790,10 +3790,6 @@ static void dm_integrity_status(struct dm_target *ti, status_type_t type,
+ break;
+
+ case STATUSTYPE_TABLE: {
+- __u64 watermark_percentage = (__u64)(ic->journal_entries - ic->free_sectors_threshold) * 100;
+-
+- watermark_percentage += ic->journal_entries / 2;
+- do_div(watermark_percentage, ic->journal_entries);
+ arg_count = 3;
+ arg_count += !!ic->meta_dev;
+ arg_count += ic->sectors_per_block != 1;
+@@ -3826,6 +3822,10 @@ static void dm_integrity_status(struct dm_target *ti, status_type_t type,
+ DMEMIT(" interleave_sectors:%u", 1U << ic->sb->log2_interleave_sectors);
+ DMEMIT(" buffer_sectors:%u", 1U << ic->log2_buffer_sectors);
+ if (ic->mode == 'J') {
++ __u64 watermark_percentage = (__u64)(ic->journal_entries - ic->free_sectors_threshold) * 100;
++
++ watermark_percentage += ic->journal_entries / 2;
++ do_div(watermark_percentage, ic->journal_entries);
+ DMEMIT(" journal_watermark:%u", (unsigned int)watermark_percentage);
+ DMEMIT(" commit_time:%u", ic->autocommit_msec);
+ }
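/*
 * The dm-integrity hunk only computes the journal watermark when the
 * target is in journal ('J') mode, since that is the only mode that
 * prints it. The computation itself is ordinary integer arithmetic:
 * adding entries/2 before dividing rounds to nearest. Sample values:
 */
#include <stdio.h>

static unsigned int watermark_pct(unsigned long long entries,
                                  unsigned long long free_threshold)
{
    unsigned long long pct = (entries - free_threshold) * 100;

    pct += entries / 2;  /* round to nearest, as the do_div() path does */
    return (unsigned int)(pct / entries);
}

int main(void)
{
    printf("%u%%\n", watermark_pct(1000, 500));  /* 50% */
    printf("%u%%\n", watermark_pct(3, 1));       /* 67%, rounded up */
    return 0;
}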
+diff --git a/drivers/md/dm-vdo/dedupe.c b/drivers/md/dm-vdo/dedupe.c
+index 80628ae93fbacc..5a74b3a85ec435 100644
+--- a/drivers/md/dm-vdo/dedupe.c
++++ b/drivers/md/dm-vdo/dedupe.c
+@@ -2178,6 +2178,7 @@ static int initialize_index(struct vdo *vdo, struct hash_zones *zones)
+
+ vdo_set_dedupe_index_timeout_interval(vdo_dedupe_index_timeout_interval);
+ vdo_set_dedupe_index_min_timer_interval(vdo_dedupe_index_min_timer_interval);
++ spin_lock_init(&zones->lock);
+
+ /*
+ * Since we will save up the timeouts that would have been reported but were ratelimited,
+diff --git a/drivers/net/dsa/realtek/Kconfig b/drivers/net/dsa/realtek/Kconfig
+index 6989972eebc306..10687722d14c08 100644
+--- a/drivers/net/dsa/realtek/Kconfig
++++ b/drivers/net/dsa/realtek/Kconfig
+@@ -43,4 +43,10 @@ config NET_DSA_REALTEK_RTL8366RB
+ help
+ Select to enable support for Realtek RTL8366RB.
+
++config NET_DSA_REALTEK_RTL8366RB_LEDS
++ bool "Support RTL8366RB LED control"
++ depends on (LEDS_CLASS=y || LEDS_CLASS=NET_DSA_REALTEK_RTL8366RB)
++ depends on NET_DSA_REALTEK_RTL8366RB
++ default NET_DSA_REALTEK_RTL8366RB
++
+ endif
+diff --git a/drivers/net/dsa/realtek/Makefile b/drivers/net/dsa/realtek/Makefile
+index 35491dc20d6d6e..17367bcba496c1 100644
+--- a/drivers/net/dsa/realtek/Makefile
++++ b/drivers/net/dsa/realtek/Makefile
+@@ -12,4 +12,7 @@ endif
+
+ obj-$(CONFIG_NET_DSA_REALTEK_RTL8366RB) += rtl8366.o
+ rtl8366-objs := rtl8366-core.o rtl8366rb.o
++ifdef CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS
++rtl8366-objs += rtl8366rb-leds.o
++endif
+ obj-$(CONFIG_NET_DSA_REALTEK_RTL8365MB) += rtl8365mb.o
+diff --git a/drivers/net/dsa/realtek/rtl8366rb-leds.c b/drivers/net/dsa/realtek/rtl8366rb-leds.c
+new file mode 100644
+index 00000000000000..99c890681ae607
+--- /dev/null
++++ b/drivers/net/dsa/realtek/rtl8366rb-leds.c
+@@ -0,0 +1,177 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/bitops.h>
++#include <linux/regmap.h>
++#include <net/dsa.h>
++#include "rtl83xx.h"
++#include "rtl8366rb.h"
++
++static inline u32 rtl8366rb_led_group_port_mask(u8 led_group, u8 port)
++{
++ switch (led_group) {
++ case 0:
++ return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
++ case 1:
++ return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
++ case 2:
++ return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
++ case 3:
++ return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
++ default:
++ return 0;
++ }
++}
++
++static int rb8366rb_get_port_led(struct rtl8366rb_led *led)
++{
++ struct realtek_priv *priv = led->priv;
++ u8 led_group = led->led_group;
++ u8 port_num = led->port_num;
++ int ret;
++ u32 val;
++
++ ret = regmap_read(priv->map, RTL8366RB_LED_X_X_CTRL_REG(led_group),
++ &val);
++ if (ret) {
++ dev_err(priv->dev, "error reading LED on port %d group %d\n",
++ led_group, port_num);
++ return ret;
++ }
++
++ return !!(val & rtl8366rb_led_group_port_mask(led_group, port_num));
++}
++
++static int rb8366rb_set_port_led(struct rtl8366rb_led *led, bool enable)
++{
++ struct realtek_priv *priv = led->priv;
++ u8 led_group = led->led_group;
++ u8 port_num = led->port_num;
++ int ret;
++
++ ret = regmap_update_bits(priv->map,
++ RTL8366RB_LED_X_X_CTRL_REG(led_group),
++ rtl8366rb_led_group_port_mask(led_group,
++ port_num),
++ enable ? 0xffff : 0);
++ if (ret) {
++ dev_err(priv->dev, "error updating LED on port %d group %d\n",
++ led_group, port_num);
++ return ret;
++ }
++
++ /* Change the LED group to manually controlled LEDs if required */
++ ret = rb8366rb_set_ledgroup_mode(priv, led_group,
++ RTL8366RB_LEDGROUP_FORCE);
++
++ if (ret) {
++ dev_err(priv->dev, "error updating LED GROUP group %d\n",
++ led_group);
++ return ret;
++ }
++
++ return 0;
++}
++
++static int
++rtl8366rb_cled_brightness_set_blocking(struct led_classdev *ldev,
++ enum led_brightness brightness)
++{
++ struct rtl8366rb_led *led = container_of(ldev, struct rtl8366rb_led,
++ cdev);
++
++ return rb8366rb_set_port_led(led, brightness == LED_ON);
++}
++
++static int rtl8366rb_setup_led(struct realtek_priv *priv, struct dsa_port *dp,
++ struct fwnode_handle *led_fwnode)
++{
++ struct rtl8366rb *rb = priv->chip_data;
++ struct led_init_data init_data = { };
++ enum led_default_state state;
++ struct rtl8366rb_led *led;
++ u32 led_group;
++ int ret;
++
++ ret = fwnode_property_read_u32(led_fwnode, "reg", &led_group);
++ if (ret)
++ return ret;
++
++ if (led_group >= RTL8366RB_NUM_LEDGROUPS) {
++ dev_warn(priv->dev, "Invalid LED reg %d defined for port %d",
++ led_group, dp->index);
++ return -EINVAL;
++ }
++
++ led = &rb->leds[dp->index][led_group];
++ led->port_num = dp->index;
++ led->led_group = led_group;
++ led->priv = priv;
++
++ state = led_init_default_state_get(led_fwnode);
++ switch (state) {
++ case LEDS_DEFSTATE_ON:
++ led->cdev.brightness = 1;
++ rb8366rb_set_port_led(led, 1);
++ break;
++ case LEDS_DEFSTATE_KEEP:
++ led->cdev.brightness =
++ rb8366rb_get_port_led(led);
++ break;
++ case LEDS_DEFSTATE_OFF:
++ default:
++ led->cdev.brightness = 0;
++ rb8366rb_set_port_led(led, 0);
++ }
++
++ led->cdev.max_brightness = 1;
++ led->cdev.brightness_set_blocking =
++ rtl8366rb_cled_brightness_set_blocking;
++ init_data.fwnode = led_fwnode;
++ init_data.devname_mandatory = true;
++
++ init_data.devicename = kasprintf(GFP_KERNEL, "Realtek-%d:0%d:%d",
++ dp->ds->index, dp->index, led_group);
++ if (!init_data.devicename)
++ return -ENOMEM;
++
++ ret = devm_led_classdev_register_ext(priv->dev, &led->cdev, &init_data);
++ if (ret) {
++ dev_warn(priv->dev, "Failed to init LED %d for port %d",
++ led_group, dp->index);
++ return ret;
++ }
++
++ return 0;
++}
++
++int rtl8366rb_setup_leds(struct realtek_priv *priv)
++{
++ struct dsa_switch *ds = &priv->ds;
++ struct device_node *leds_np;
++ struct dsa_port *dp;
++ int ret = 0;
++
++ dsa_switch_for_each_port(dp, ds) {
++ if (!dp->dn)
++ continue;
++
++ leds_np = of_get_child_by_name(dp->dn, "leds");
++ if (!leds_np) {
++ dev_dbg(priv->dev, "No leds defined for port %d",
++ dp->index);
++ continue;
++ }
++
++ for_each_child_of_node_scoped(leds_np, led_np) {
++ ret = rtl8366rb_setup_led(priv, dp,
++ of_fwnode_handle(led_np));
++ if (ret)
++ break;
++ }
++
++ of_node_put(leds_np);
++ if (ret)
++ return ret;
++ }
++ return 0;
++}
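/*
 * Register layout behind the LED helpers above, as documented by the
 * RTL8366RB_LED_* macros: two 16-bit registers each hold two LED
 * groups, six port bits per group (even group in bits 5:0, odd group
 * in bits 11:6). A sketch of where a given (group, port) bit lands,
 * assuming that documented layout (the driver funnels this through
 * FIELD_PREP instead):
 */
#include <stdio.h>

#define LED_0_1_CTRL_REG 0x0432
#define LED_2_3_CTRL_REG 0x0433

static void locate(unsigned int group, unsigned int port)
{
    unsigned int reg = group <= 1 ? LED_0_1_CTRL_REG : LED_2_3_CTRL_REG;
    unsigned int shift = (group & 1) ? 6 : 0;  /* odd groups use bits 11:6 */

    printf("group %u port %u -> reg 0x%04x mask 0x%04x\n",
           group, port, reg, 1u << (port + shift));
}

int main(void)
{
    locate(0, 3);  /* 0x0432, 0x0008 */
    locate(1, 3);  /* 0x0432, 0x0200 */
    locate(3, 5);  /* 0x0433, 0x0800 */
    return 0;
}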
+diff --git a/drivers/net/dsa/realtek/rtl8366rb.c b/drivers/net/dsa/realtek/rtl8366rb.c
+index c7a8cd06058781..ae3d49fc22b809 100644
+--- a/drivers/net/dsa/realtek/rtl8366rb.c
++++ b/drivers/net/dsa/realtek/rtl8366rb.c
+@@ -26,11 +26,7 @@
+ #include "realtek-smi.h"
+ #include "realtek-mdio.h"
+ #include "rtl83xx.h"
+-
+-#define RTL8366RB_PORT_NUM_CPU 5
+-#define RTL8366RB_NUM_PORTS 6
+-#define RTL8366RB_PHY_NO_MAX 4
+-#define RTL8366RB_PHY_ADDR_MAX 31
++#include "rtl8366rb.h"
+
+ /* Switch Global Configuration register */
+ #define RTL8366RB_SGCR 0x0000
+@@ -175,39 +171,6 @@
+ */
+ #define RTL8366RB_VLAN_INGRESS_CTRL2_REG 0x037f
+
+-/* LED control registers */
+-/* The LED blink rate is global; it is used by all triggers in all groups. */
+-#define RTL8366RB_LED_BLINKRATE_REG 0x0430
+-#define RTL8366RB_LED_BLINKRATE_MASK 0x0007
+-#define RTL8366RB_LED_BLINKRATE_28MS 0x0000
+-#define RTL8366RB_LED_BLINKRATE_56MS 0x0001
+-#define RTL8366RB_LED_BLINKRATE_84MS 0x0002
+-#define RTL8366RB_LED_BLINKRATE_111MS 0x0003
+-#define RTL8366RB_LED_BLINKRATE_222MS 0x0004
+-#define RTL8366RB_LED_BLINKRATE_446MS 0x0005
+-
+-/* LED trigger event for each group */
+-#define RTL8366RB_LED_CTRL_REG 0x0431
+-#define RTL8366RB_LED_CTRL_OFFSET(led_group) \
+- (4 * (led_group))
+-#define RTL8366RB_LED_CTRL_MASK(led_group) \
+- (0xf << RTL8366RB_LED_CTRL_OFFSET(led_group))
+-
+-/* The RTL8366RB_LED_X_X registers are used to manually set the LED state only
+- * when the corresponding LED group in RTL8366RB_LED_CTRL_REG is
+- * RTL8366RB_LEDGROUP_FORCE. Otherwise, it is ignored.
+- */
+-#define RTL8366RB_LED_0_1_CTRL_REG 0x0432
+-#define RTL8366RB_LED_2_3_CTRL_REG 0x0433
+-#define RTL8366RB_LED_X_X_CTRL_REG(led_group) \
+- ((led_group) <= 1 ? \
+- RTL8366RB_LED_0_1_CTRL_REG : \
+- RTL8366RB_LED_2_3_CTRL_REG)
+-#define RTL8366RB_LED_0_X_CTRL_MASK GENMASK(5, 0)
+-#define RTL8366RB_LED_X_1_CTRL_MASK GENMASK(11, 6)
+-#define RTL8366RB_LED_2_X_CTRL_MASK GENMASK(5, 0)
+-#define RTL8366RB_LED_X_3_CTRL_MASK GENMASK(11, 6)
+-
+ #define RTL8366RB_MIB_COUNT 33
+ #define RTL8366RB_GLOBAL_MIB_COUNT 1
+ #define RTL8366RB_MIB_COUNTER_PORT_OFFSET 0x0050
+@@ -243,7 +206,6 @@
+ #define RTL8366RB_PORT_STATUS_AN_MASK 0x0080
+
+ #define RTL8366RB_NUM_VLANS 16
+-#define RTL8366RB_NUM_LEDGROUPS 4
+ #define RTL8366RB_NUM_VIDS 4096
+ #define RTL8366RB_PRIORITYMAX 7
+ #define RTL8366RB_NUM_FIDS 8
+@@ -350,46 +312,6 @@
+ #define RTL8366RB_GREEN_FEATURE_TX BIT(0)
+ #define RTL8366RB_GREEN_FEATURE_RX BIT(2)
+
+-enum rtl8366_ledgroup_mode {
+- RTL8366RB_LEDGROUP_OFF = 0x0,
+- RTL8366RB_LEDGROUP_DUP_COL = 0x1,
+- RTL8366RB_LEDGROUP_LINK_ACT = 0x2,
+- RTL8366RB_LEDGROUP_SPD1000 = 0x3,
+- RTL8366RB_LEDGROUP_SPD100 = 0x4,
+- RTL8366RB_LEDGROUP_SPD10 = 0x5,
+- RTL8366RB_LEDGROUP_SPD1000_ACT = 0x6,
+- RTL8366RB_LEDGROUP_SPD100_ACT = 0x7,
+- RTL8366RB_LEDGROUP_SPD10_ACT = 0x8,
+- RTL8366RB_LEDGROUP_SPD100_10_ACT = 0x9,
+- RTL8366RB_LEDGROUP_FIBER = 0xa,
+- RTL8366RB_LEDGROUP_AN_FAULT = 0xb,
+- RTL8366RB_LEDGROUP_LINK_RX = 0xc,
+- RTL8366RB_LEDGROUP_LINK_TX = 0xd,
+- RTL8366RB_LEDGROUP_MASTER = 0xe,
+- RTL8366RB_LEDGROUP_FORCE = 0xf,
+-
+- __RTL8366RB_LEDGROUP_MODE_MAX
+-};
+-
+-struct rtl8366rb_led {
+- u8 port_num;
+- u8 led_group;
+- struct realtek_priv *priv;
+- struct led_classdev cdev;
+-};
+-
+-/**
+- * struct rtl8366rb - RTL8366RB-specific data
+- * @max_mtu: per-port max MTU setting
+- * @pvid_enabled: if PVID is set for respective port
+- * @leds: per-port and per-ledgroup led info
+- */
+-struct rtl8366rb {
+- unsigned int max_mtu[RTL8366RB_NUM_PORTS];
+- bool pvid_enabled[RTL8366RB_NUM_PORTS];
+- struct rtl8366rb_led leds[RTL8366RB_NUM_PORTS][RTL8366RB_NUM_LEDGROUPS];
+-};
+-
+ static struct rtl8366_mib_counter rtl8366rb_mib_counters[] = {
+ { 0, 0, 4, "IfInOctets" },
+ { 0, 4, 4, "EtherStatsOctets" },
+@@ -830,9 +752,10 @@ static int rtl8366rb_jam_table(const struct rtl8366rb_jam_tbl_entry *jam_table,
+ return 0;
+ }
+
+-static int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv,
+- u8 led_group,
+- enum rtl8366_ledgroup_mode mode)
++/* This code is used also with LEDs disabled */
++int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv,
++ u8 led_group,
++ enum rtl8366_ledgroup_mode mode)
+ {
+ int ret;
+ u32 val;
+@@ -849,144 +772,7 @@ static int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv,
+ return 0;
+ }
+
+-static inline u32 rtl8366rb_led_group_port_mask(u8 led_group, u8 port)
+-{
+- switch (led_group) {
+- case 0:
+- return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
+- case 1:
+- return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
+- case 2:
+- return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
+- case 3:
+- return FIELD_PREP(RTL8366RB_LED_0_X_CTRL_MASK, BIT(port));
+- default:
+- return 0;
+- }
+-}
+-
+-static int rb8366rb_get_port_led(struct rtl8366rb_led *led)
+-{
+- struct realtek_priv *priv = led->priv;
+- u8 led_group = led->led_group;
+- u8 port_num = led->port_num;
+- int ret;
+- u32 val;
+-
+- ret = regmap_read(priv->map, RTL8366RB_LED_X_X_CTRL_REG(led_group),
+- &val);
+- if (ret) {
+- dev_err(priv->dev, "error reading LED on port %d group %d\n",
+- led_group, port_num);
+- return ret;
+- }
+-
+- return !!(val & rtl8366rb_led_group_port_mask(led_group, port_num));
+-}
+-
+-static int rb8366rb_set_port_led(struct rtl8366rb_led *led, bool enable)
+-{
+- struct realtek_priv *priv = led->priv;
+- u8 led_group = led->led_group;
+- u8 port_num = led->port_num;
+- int ret;
+-
+- ret = regmap_update_bits(priv->map,
+- RTL8366RB_LED_X_X_CTRL_REG(led_group),
+- rtl8366rb_led_group_port_mask(led_group,
+- port_num),
+- enable ? 0xffff : 0);
+- if (ret) {
+- dev_err(priv->dev, "error updating LED on port %d group %d\n",
+- led_group, port_num);
+- return ret;
+- }
+-
+- /* Change the LED group to manual controlled LEDs if required */
+- ret = rb8366rb_set_ledgroup_mode(priv, led_group,
+- RTL8366RB_LEDGROUP_FORCE);
+-
+- if (ret) {
+- dev_err(priv->dev, "error updating LED GROUP group %d\n",
+- led_group);
+- return ret;
+- }
+-
+- return 0;
+-}
+-
+-static int
+-rtl8366rb_cled_brightness_set_blocking(struct led_classdev *ldev,
+- enum led_brightness brightness)
+-{
+- struct rtl8366rb_led *led = container_of(ldev, struct rtl8366rb_led,
+- cdev);
+-
+- return rb8366rb_set_port_led(led, brightness == LED_ON);
+-}
+-
+-static int rtl8366rb_setup_led(struct realtek_priv *priv, struct dsa_port *dp,
+- struct fwnode_handle *led_fwnode)
+-{
+- struct rtl8366rb *rb = priv->chip_data;
+- struct led_init_data init_data = { };
+- enum led_default_state state;
+- struct rtl8366rb_led *led;
+- u32 led_group;
+- int ret;
+-
+- ret = fwnode_property_read_u32(led_fwnode, "reg", &led_group);
+- if (ret)
+- return ret;
+-
+- if (led_group >= RTL8366RB_NUM_LEDGROUPS) {
+- dev_warn(priv->dev, "Invalid LED reg %d defined for port %d",
+- led_group, dp->index);
+- return -EINVAL;
+- }
+-
+- led = &rb->leds[dp->index][led_group];
+- led->port_num = dp->index;
+- led->led_group = led_group;
+- led->priv = priv;
+-
+- state = led_init_default_state_get(led_fwnode);
+- switch (state) {
+- case LEDS_DEFSTATE_ON:
+- led->cdev.brightness = 1;
+- rb8366rb_set_port_led(led, 1);
+- break;
+- case LEDS_DEFSTATE_KEEP:
+- led->cdev.brightness =
+- rb8366rb_get_port_led(led);
+- break;
+- case LEDS_DEFSTATE_OFF:
+- default:
+- led->cdev.brightness = 0;
+- rb8366rb_set_port_led(led, 0);
+- }
+-
+- led->cdev.max_brightness = 1;
+- led->cdev.brightness_set_blocking =
+- rtl8366rb_cled_brightness_set_blocking;
+- init_data.fwnode = led_fwnode;
+- init_data.devname_mandatory = true;
+-
+- init_data.devicename = kasprintf(GFP_KERNEL, "Realtek-%d:0%d:%d",
+- dp->ds->index, dp->index, led_group);
+- if (!init_data.devicename)
+- return -ENOMEM;
+-
+- ret = devm_led_classdev_register_ext(priv->dev, &led->cdev, &init_data);
+- if (ret) {
+- dev_warn(priv->dev, "Failed to init LED %d for port %d",
+- led_group, dp->index);
+- return ret;
+- }
+-
+- return 0;
+-}
+-
++/* This code is used also with LEDs disabled */
+ static int rtl8366rb_setup_all_leds_off(struct realtek_priv *priv)
+ {
+ int ret = 0;
+@@ -1007,38 +793,6 @@ static int rtl8366rb_setup_all_leds_off(struct realtek_priv *priv)
+ return ret;
+ }
+
+-static int rtl8366rb_setup_leds(struct realtek_priv *priv)
+-{
+- struct dsa_switch *ds = &priv->ds;
+- struct device_node *leds_np;
+- struct dsa_port *dp;
+- int ret = 0;
+-
+- dsa_switch_for_each_port(dp, ds) {
+- if (!dp->dn)
+- continue;
+-
+- leds_np = of_get_child_by_name(dp->dn, "leds");
+- if (!leds_np) {
+- dev_dbg(priv->dev, "No leds defined for port %d",
+- dp->index);
+- continue;
+- }
+-
+- for_each_child_of_node_scoped(leds_np, led_np) {
+- ret = rtl8366rb_setup_led(priv, dp,
+- of_fwnode_handle(led_np));
+- if (ret)
+- break;
+- }
+-
+- of_node_put(leds_np);
+- if (ret)
+- return ret;
+- }
+- return 0;
+-}
+-
+ static int rtl8366rb_setup(struct dsa_switch *ds)
+ {
+ struct realtek_priv *priv = ds->priv;
+diff --git a/drivers/net/dsa/realtek/rtl8366rb.h b/drivers/net/dsa/realtek/rtl8366rb.h
+new file mode 100644
+index 00000000000000..685ff3275faa17
+--- /dev/null
++++ b/drivers/net/dsa/realtek/rtl8366rb.h
+@@ -0,0 +1,107 @@
++/* SPDX-License-Identifier: GPL-2.0+ */
++
++#ifndef _RTL8366RB_H
++#define _RTL8366RB_H
++
++#include "realtek.h"
++
++#define RTL8366RB_PORT_NUM_CPU 5
++#define RTL8366RB_NUM_PORTS 6
++#define RTL8366RB_PHY_NO_MAX 4
++#define RTL8366RB_NUM_LEDGROUPS 4
++#define RTL8366RB_PHY_ADDR_MAX 31
++
++/* LED control registers */
++/* The LED blink rate is global; it is used by all triggers in all groups. */
++#define RTL8366RB_LED_BLINKRATE_REG 0x0430
++#define RTL8366RB_LED_BLINKRATE_MASK 0x0007
++#define RTL8366RB_LED_BLINKRATE_28MS 0x0000
++#define RTL8366RB_LED_BLINKRATE_56MS 0x0001
++#define RTL8366RB_LED_BLINKRATE_84MS 0x0002
++#define RTL8366RB_LED_BLINKRATE_111MS 0x0003
++#define RTL8366RB_LED_BLINKRATE_222MS 0x0004
++#define RTL8366RB_LED_BLINKRATE_446MS 0x0005
++
++/* LED trigger event for each group */
++#define RTL8366RB_LED_CTRL_REG 0x0431
++#define RTL8366RB_LED_CTRL_OFFSET(led_group) \
++ (4 * (led_group))
++#define RTL8366RB_LED_CTRL_MASK(led_group) \
++ (0xf << RTL8366RB_LED_CTRL_OFFSET(led_group))
++
++/* The RTL8366RB_LED_X_X registers are used to manually set the LED state only
++ * when the corresponding LED group in RTL8366RB_LED_CTRL_REG is
++ * RTL8366RB_LEDGROUP_FORCE. Otherwise, it is ignored.
++ */
++#define RTL8366RB_LED_0_1_CTRL_REG 0x0432
++#define RTL8366RB_LED_2_3_CTRL_REG 0x0433
++#define RTL8366RB_LED_X_X_CTRL_REG(led_group) \
++ ((led_group) <= 1 ? \
++ RTL8366RB_LED_0_1_CTRL_REG : \
++ RTL8366RB_LED_2_3_CTRL_REG)
++#define RTL8366RB_LED_0_X_CTRL_MASK GENMASK(5, 0)
++#define RTL8366RB_LED_X_1_CTRL_MASK GENMASK(11, 6)
++#define RTL8366RB_LED_2_X_CTRL_MASK GENMASK(5, 0)
++#define RTL8366RB_LED_X_3_CTRL_MASK GENMASK(11, 6)
++
++enum rtl8366_ledgroup_mode {
++ RTL8366RB_LEDGROUP_OFF = 0x0,
++ RTL8366RB_LEDGROUP_DUP_COL = 0x1,
++ RTL8366RB_LEDGROUP_LINK_ACT = 0x2,
++ RTL8366RB_LEDGROUP_SPD1000 = 0x3,
++ RTL8366RB_LEDGROUP_SPD100 = 0x4,
++ RTL8366RB_LEDGROUP_SPD10 = 0x5,
++ RTL8366RB_LEDGROUP_SPD1000_ACT = 0x6,
++ RTL8366RB_LEDGROUP_SPD100_ACT = 0x7,
++ RTL8366RB_LEDGROUP_SPD10_ACT = 0x8,
++ RTL8366RB_LEDGROUP_SPD100_10_ACT = 0x9,
++ RTL8366RB_LEDGROUP_FIBER = 0xa,
++ RTL8366RB_LEDGROUP_AN_FAULT = 0xb,
++ RTL8366RB_LEDGROUP_LINK_RX = 0xc,
++ RTL8366RB_LEDGROUP_LINK_TX = 0xd,
++ RTL8366RB_LEDGROUP_MASTER = 0xe,
++ RTL8366RB_LEDGROUP_FORCE = 0xf,
++
++ __RTL8366RB_LEDGROUP_MODE_MAX
++};
++
++#if IS_ENABLED(CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS)
++
++struct rtl8366rb_led {
++ u8 port_num;
++ u8 led_group;
++ struct realtek_priv *priv;
++ struct led_classdev cdev;
++};
++
++int rtl8366rb_setup_leds(struct realtek_priv *priv);
++
++#else
++
++static inline int rtl8366rb_setup_leds(struct realtek_priv *priv)
++{
++ return 0;
++}
++
++#endif /* IS_ENABLED(CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS) */
++
++/**
++ * struct rtl8366rb - RTL8366RB-specific data
++ * @max_mtu: per-port max MTU setting
++ * @pvid_enabled: if PVID is set for respective port
++ * @leds: per-port and per-ledgroup led info
++ */
++struct rtl8366rb {
++ unsigned int max_mtu[RTL8366RB_NUM_PORTS];
++ bool pvid_enabled[RTL8366RB_NUM_PORTS];
++#if IS_ENABLED(CONFIG_NET_DSA_REALTEK_RTL8366RB_LEDS)
++ struct rtl8366rb_led leds[RTL8366RB_NUM_PORTS][RTL8366RB_NUM_LEDGROUPS];
++#endif
++};
++
++/* This code is used also with LEDs disabled */
++int rb8366rb_set_ledgroup_mode(struct realtek_priv *priv,
++ u8 led_group,
++ enum rtl8366_ledgroup_mode mode);
++
++#endif /* _RTL8366RB_H */
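/*
 * The Kconfig/Makefile/header trio above is the usual pattern for an
 * optional kernel feature: the object is only compiled when the
 * option is set, and the header provides a static inline no-op so
 * callers never need an #ifdef. A condensed, self-contained version
 * (FEATURE_LEDS stands in for the real Kconfig symbol):
 */
#include <stdio.h>

#ifdef FEATURE_LEDS
static int setup_leds(void) { puts("LEDs initialised"); return 0; }
#else
static inline int setup_leds(void) { return 0; }  /* compiled-out stub */
#endif

int main(void)
{
    return setup_leds();  /* call site is identical either way */
}
/* build: cc demo.c               -> stub
 *        cc -DFEATURE_LEDS demo.c -> real implementation */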
+diff --git a/drivers/net/ethernet/cadence/macb.h b/drivers/net/ethernet/cadence/macb.h
+index 5740c98d8c9f03..2847278d9cd48e 100644
+--- a/drivers/net/ethernet/cadence/macb.h
++++ b/drivers/net/ethernet/cadence/macb.h
+@@ -1279,6 +1279,8 @@ struct macb {
+ struct clk *rx_clk;
+ struct clk *tsu_clk;
+ struct net_device *dev;
++ /* Protects hw_stats and ethtool_stats */
++ spinlock_t stats_lock;
+ union {
+ struct macb_stats macb;
+ struct gem_stats gem;
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 56901280ba0472..60847cdb516eef 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -1992,10 +1992,12 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
+
+ if (status & MACB_BIT(ISR_ROVR)) {
+ /* We missed at least one packet */
++ spin_lock(&bp->stats_lock);
+ if (macb_is_gem(bp))
+ bp->hw_stats.gem.rx_overruns++;
+ else
+ bp->hw_stats.macb.rx_overruns++;
++ spin_unlock(&bp->stats_lock);
+
+ if (bp->caps & MACB_CAPS_ISR_CLEAR_ON_WRITE)
+ queue_writel(queue, ISR, MACB_BIT(ISR_ROVR));
+@@ -3116,6 +3118,7 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
+ if (!netif_running(bp->dev))
+ return nstat;
+
++ spin_lock_irq(&bp->stats_lock);
+ gem_update_stats(bp);
+
+ nstat->rx_errors = (hwstat->rx_frame_check_sequence_errors +
+@@ -3145,6 +3148,7 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
+ nstat->tx_aborted_errors = hwstat->tx_excessive_collisions;
+ nstat->tx_carrier_errors = hwstat->tx_carrier_sense_errors;
+ nstat->tx_fifo_errors = hwstat->tx_underrun;
++ spin_unlock_irq(&bp->stats_lock);
+
+ return nstat;
+ }
+@@ -3152,12 +3156,13 @@ static struct net_device_stats *gem_get_stats(struct macb *bp)
+ static void gem_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats *stats, u64 *data)
+ {
+- struct macb *bp;
++ struct macb *bp = netdev_priv(dev);
+
+- bp = netdev_priv(dev);
++ spin_lock_irq(&bp->stats_lock);
+ gem_update_stats(bp);
+ memcpy(data, &bp->ethtool_stats, sizeof(u64)
+ * (GEM_STATS_LEN + QUEUE_STATS_LEN * MACB_MAX_QUEUES));
++ spin_unlock_irq(&bp->stats_lock);
+ }
+
+ static int gem_get_sset_count(struct net_device *dev, int sset)
+@@ -3207,6 +3212,7 @@ static struct net_device_stats *macb_get_stats(struct net_device *dev)
+ return gem_get_stats(bp);
+
+ /* read stats from hardware */
++ spin_lock_irq(&bp->stats_lock);
+ macb_update_stats(bp);
+
+ /* Convert HW stats into netdevice stats */
+@@ -3240,6 +3246,7 @@ static struct net_device_stats *macb_get_stats(struct net_device *dev)
+ nstat->tx_carrier_errors = hwstat->tx_carrier_errors;
+ nstat->tx_fifo_errors = hwstat->tx_underruns;
+ /* Don't know about heartbeat or window errors... */
++ spin_unlock_irq(&bp->stats_lock);
+
+ return nstat;
+ }
+@@ -5110,6 +5117,7 @@ static int macb_probe(struct platform_device *pdev)
+ }
+ }
+ spin_lock_init(&bp->lock);
++ spin_lock_init(&bp->stats_lock);
+
+ /* setup capabilities */
+ macb_configure_caps(bp, macb_config);
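/*
 * Locking rule introduced by the macb hunks above, condensed: the
 * interrupt handler uses plain spin_lock() (local interrupts are
 * already off there), while process-context readers use
 * spin_lock_irq() so the IRQ updater cannot run concurrently on the
 * same CPU. The spin_lock*() functions below are empty stubs so the
 * sketch compiles and runs in user space.
 */
#include <stdio.h>

static void spin_lock(void)       { }
static void spin_unlock(void)     { }
static void spin_lock_irq(void)   { }  /* also disables local IRQs */
static void spin_unlock_irq(void) { }

static unsigned long rx_overruns;

static void irq_handler(void)  /* hard-IRQ context */
{
    spin_lock();
    rx_overruns++;  /* writer */
    spin_unlock();
}

static unsigned long get_stats(void)  /* process context */
{
    unsigned long val;

    spin_lock_irq();  /* keep irq_handler() out */
    val = rx_overruns;  /* consistent snapshot */
    spin_unlock_irq();
    return val;
}

int main(void)
{
    irq_handler();
    printf("rx_overruns = %lu\n", get_stats());
    return 0;
}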
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
+index 16a7908c79f703..f662a5d54986cf 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc.c
+@@ -145,6 +145,24 @@ static int enetc_ptp_parse(struct sk_buff *skb, u8 *udp,
+ return 0;
+ }
+
++/**
++ * enetc_unwind_tx_frame() - Unwind the DMA mappings of a multi-buffer Tx frame
++ * @tx_ring: Pointer to the Tx ring on which the buffer descriptors are located
++ * @count: Number of Tx buffer descriptors which need to be unmapped
++ * @i: Index of the last successfully mapped Tx buffer descriptor
++ */
++static void enetc_unwind_tx_frame(struct enetc_bdr *tx_ring, int count, int i)
++{
++ while (count--) {
++ struct enetc_tx_swbd *tx_swbd = &tx_ring->tx_swbd[i];
++
++ enetc_free_tx_frame(tx_ring, tx_swbd);
++ if (i == 0)
++ i = tx_ring->bd_count;
++ i--;
++ }
++}
++
+ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
+ {
+ bool do_vlan, do_onestep_tstamp = false, do_twostep_tstamp = false;
+@@ -235,9 +253,11 @@ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
+ }
+
+ if (do_onestep_tstamp) {
+- u32 lo, hi, val;
+- u64 sec, nsec;
++ __be32 new_sec_l, new_nsec;
++ u32 lo, hi, nsec, val;
++ __be16 new_sec_h;
+ u8 *data;
++ u64 sec;
+
+ lo = enetc_rd_hot(hw, ENETC_SICTR0);
+ hi = enetc_rd_hot(hw, ENETC_SICTR1);
+@@ -251,13 +271,38 @@ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
+ /* Update originTimestamp field of Sync packet
+ * - 48 bits seconds field
+ * - 32 bits nanseconds field
+ * - 32 bits nanoseconds field
++ *
++ * In addition, the UDP checksum needs to be updated
++ * by software after updating the originTimestamp field,
++ * otherwise the hardware will calculate the wrong
++ * checksum when updating the correction field and
++ * write it into the packet.
+ data = skb_mac_header(skb);
+- *(__be16 *)(data + offset2) =
+- htons((sec >> 32) & 0xffff);
+- *(__be32 *)(data + offset2 + 2) =
+- htonl(sec & 0xffffffff);
+- *(__be32 *)(data + offset2 + 6) = htonl(nsec);
++ new_sec_h = htons((sec >> 32) & 0xffff);
++ new_sec_l = htonl(sec & 0xffffffff);
++ new_nsec = htonl(nsec);
++ if (udp) {
++ struct udphdr *uh = udp_hdr(skb);
++ __be32 old_sec_l, old_nsec;
++ __be16 old_sec_h;
++
++ old_sec_h = *(__be16 *)(data + offset2);
++ inet_proto_csum_replace2(&uh->check, skb, old_sec_h,
++ new_sec_h, false);
++
++ old_sec_l = *(__be32 *)(data + offset2 + 2);
++ inet_proto_csum_replace4(&uh->check, skb, old_sec_l,
++ new_sec_l, false);
++
++ old_nsec = *(__be32 *)(data + offset2 + 6);
++ inet_proto_csum_replace4(&uh->check, skb, old_nsec,
++ new_nsec, false);
++ }
++
++ *(__be16 *)(data + offset2) = new_sec_h;
++ *(__be32 *)(data + offset2 + 2) = new_sec_l;
++ *(__be32 *)(data + offset2 + 6) = new_nsec;
+
+ /* Configure single-step register */
+ val = ENETC_PM0_SINGLE_STEP_EN;
+@@ -328,25 +373,20 @@ static int enetc_map_tx_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb)
+ dma_err:
+ dev_err(tx_ring->dev, "DMA map error");
+
+- do {
+- tx_swbd = &tx_ring->tx_swbd[i];
+- enetc_free_tx_frame(tx_ring, tx_swbd);
+- if (i == 0)
+- i = tx_ring->bd_count;
+- i--;
+- } while (count--);
++ enetc_unwind_tx_frame(tx_ring, count, i);
+
+ return 0;
+ }
+
+-static void enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb,
+- struct enetc_tx_swbd *tx_swbd,
+- union enetc_tx_bd *txbd, int *i, int hdr_len,
+- int data_len)
++static int enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb,
++ struct enetc_tx_swbd *tx_swbd,
++ union enetc_tx_bd *txbd, int *i, int hdr_len,
++ int data_len)
+ {
+ union enetc_tx_bd txbd_tmp;
+ u8 flags = 0, e_flags = 0;
+ dma_addr_t addr;
++ int count = 1;
+
+ enetc_clear_tx_bd(&txbd_tmp);
+ addr = tx_ring->tso_headers_dma + *i * TSO_HEADER_SIZE;
+@@ -389,7 +429,10 @@ static void enetc_map_tx_tso_hdr(struct enetc_bdr *tx_ring, struct sk_buff *skb,
+ /* Write the BD */
+ txbd_tmp.ext.e_flags = e_flags;
+ *txbd = txbd_tmp;
++ count++;
+ }
++
++ return count;
+ }
+
+ static int enetc_map_tx_tso_data(struct enetc_bdr *tx_ring, struct sk_buff *skb,
+@@ -521,9 +564,9 @@ static int enetc_map_tx_tso_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb
+
+ /* compute the csum over the L4 header */
+ csum = enetc_tso_hdr_csum(&tso, skb, hdr, hdr_len, &pos);
+- enetc_map_tx_tso_hdr(tx_ring, skb, tx_swbd, txbd, &i, hdr_len, data_len);
++ count += enetc_map_tx_tso_hdr(tx_ring, skb, tx_swbd, txbd,
++ &i, hdr_len, data_len);
+ bd_data_num = 0;
+- count++;
+
+ while (data_len > 0) {
+ int size;
+@@ -547,8 +590,13 @@ static int enetc_map_tx_tso_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb
+ err = enetc_map_tx_tso_data(tx_ring, skb, tx_swbd, txbd,
+ tso.data, size,
+ size == data_len);
+- if (err)
++ if (err) {
++ if (i == 0)
++ i = tx_ring->bd_count;
++ i--;
++
+ goto err_map_data;
++ }
+
+ data_len -= size;
+ count++;
+@@ -577,13 +625,7 @@ static int enetc_map_tx_tso_buffs(struct enetc_bdr *tx_ring, struct sk_buff *skb
+ dev_err(tx_ring->dev, "DMA map error");
+
+ err_chained_bd:
+- do {
+- tx_swbd = &tx_ring->tx_swbd[i];
+- enetc_free_tx_frame(tx_ring, tx_swbd);
+- if (i == 0)
+- i = tx_ring->bd_count;
+- i--;
+- } while (count--);
++ enetc_unwind_tx_frame(tx_ring, count, i);
+
+ return 0;
+ }
+@@ -1623,7 +1665,7 @@ static int enetc_clean_rx_ring_xdp(struct enetc_bdr *rx_ring,
+ enetc_xdp_drop(rx_ring, orig_i, i);
+ tx_ring->stats.xdp_tx_drops++;
+ } else {
+- tx_ring->stats.xdp_tx += xdp_tx_bd_cnt;
++ tx_ring->stats.xdp_tx++;
+ rx_ring->xdp.xdp_tx_in_flight += xdp_tx_bd_cnt;
+ xdp_tx_frm_cnt++;
+ /* The XDP_TX enqueue was successful, so we
+@@ -2929,6 +2971,9 @@ static int enetc_hwtstamp_set(struct net_device *ndev, struct ifreq *ifr)
+ new_offloads |= ENETC_F_TX_TSTAMP;
+ break;
+ case HWTSTAMP_TX_ONESTEP_SYNC:
++ if (!enetc_si_is_pf(priv->si))
++ return -EOPNOTSUPP;
++
+ new_offloads &= ~ENETC_F_TX_TSTAMP_MASK;
+ new_offloads |= ENETC_F_TX_ONESTEP_SYNC_TSTAMP;
+ break;
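/*
 * The enetc hunk repairs the UDP checksum in software after rewriting
 * the originTimestamp words, using the kernel's incremental helpers
 * (inet_proto_csum_replace2/4). The arithmetic those helpers perform
 * is the RFC 1624 update HC' = ~(~HC + ~m + m'); a user-space version
 * for one 32-bit field replacement:
 */
#include <stdint.h>
#include <stdio.h>

static uint16_t csum_fold(uint64_t sum)
{
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)sum;
}

/* Update checksum "check" after "old" was replaced by "new" in the packet. */
static uint16_t csum_replace4(uint16_t check, uint32_t old, uint32_t new)
{
    uint64_t sum = (uint16_t)~check;        /* ~HC */

    sum += (uint16_t)~(old >> 16);          /* ~m, high half */
    sum += (uint16_t)~(old & 0xffff);       /* ~m, low half */
    sum += new >> 16;                       /* m', high half */
    sum += new & 0xffff;                    /* m', low half */
    return (uint16_t)~csum_fold(sum);
}

int main(void)
{
    uint16_t check = 0x1c46;  /* arbitrary starting checksum */

    check = csum_replace4(check, 0x11223344, 0xdeadbeef);
    printf("updated checksum: 0x%04x\n", check);
    return 0;
}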
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+index 2563eb8ac7b63a..6a24324703bf49 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_ethtool.c
+@@ -843,6 +843,7 @@ static int enetc_set_coalesce(struct net_device *ndev,
+ static int enetc_get_ts_info(struct net_device *ndev,
+ struct kernel_ethtool_ts_info *info)
+ {
++ struct enetc_ndev_priv *priv = netdev_priv(ndev);
+ int *phc_idx;
+
+ phc_idx = symbol_get(enetc_phc_index);
+@@ -863,8 +864,10 @@ static int enetc_get_ts_info(struct net_device *ndev,
+ SOF_TIMESTAMPING_TX_SOFTWARE;
+
+ info->tx_types = (1 << HWTSTAMP_TX_OFF) |
+- (1 << HWTSTAMP_TX_ON) |
+- (1 << HWTSTAMP_TX_ONESTEP_SYNC);
++ (1 << HWTSTAMP_TX_ON);
++
++ if (enetc_si_is_pf(priv->si))
++ info->tx_types |= (1 << HWTSTAMP_TX_ONESTEP_SYNC);
+
+ info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
+ (1 << HWTSTAMP_FILTER_ALL);
+diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
+index 558cda577191d6..2960709f6b62ca 100644
+--- a/drivers/net/ethernet/intel/ice/ice.h
++++ b/drivers/net/ethernet/intel/ice/ice.h
+@@ -207,6 +207,7 @@ enum ice_feature {
+ ICE_F_GNSS,
+ ICE_F_ROCE_LAG,
+ ICE_F_SRIOV_LAG,
++ ICE_F_MBX_LIMIT,
+ ICE_F_MAX
+ };
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c
+index fb527434b58b15..d649c197cf673f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_eswitch.c
++++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c
+@@ -38,8 +38,7 @@ static int ice_eswitch_setup_env(struct ice_pf *pf)
+ if (ice_vsi_add_vlan_zero(uplink_vsi))
+ goto err_vlan_zero;
+
+- if (ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, true,
+- ICE_FLTR_RX))
++ if (ice_set_dflt_vsi(uplink_vsi))
+ goto err_def_rx;
+
+ if (ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, true,
+diff --git a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
+index 91cbae1eec89a0..8d31bfe28cc884 100644
+--- a/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
++++ b/drivers/net/ethernet/intel/ice/ice_hw_autogen.h
+@@ -539,5 +539,8 @@
+ #define E830_PRTMAC_CL01_QNT_THR_CL0_M GENMASK(15, 0)
+ #define VFINT_DYN_CTLN(_i) (0x00003800 + ((_i) * 4))
+ #define VFINT_DYN_CTLN_CLEARPBA_M BIT(1)
++#define E830_MBX_PF_IN_FLIGHT_VF_MSGS_THRESH 0x00234000
++#define E830_MBX_VF_DEC_TRIG(_VF) (0x00233800 + (_VF) * 4)
++#define E830_MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT(_VF) (0x00233000 + (_VF) * 4)
+
+ #endif /* _ICE_HW_AUTOGEN_H_ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index 06e712cdc3d9ed..d4e74f96a8ad5d 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -3880,6 +3880,9 @@ void ice_init_feature_support(struct ice_pf *pf)
+ default:
+ break;
+ }
++
++ if (pf->hw.mac_type == ICE_MAC_E830)
++ ice_set_feature_support(pf, ICE_F_MBX_LIMIT);
+ }
+
+ /**
+diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
+index 45eefe22fb5b73..ca707dfcb286ef 100644
+--- a/drivers/net/ethernet/intel/ice/ice_main.c
++++ b/drivers/net/ethernet/intel/ice/ice_main.c
+@@ -1546,12 +1546,20 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
+ ice_vf_lan_overflow_event(pf, &event);
+ break;
+ case ice_mbx_opc_send_msg_to_pf:
+- data.num_msg_proc = i;
+- data.num_pending_arq = pending;
+- data.max_num_msgs_mbx = hw->mailboxq.num_rq_entries;
+- data.async_watermark_val = ICE_MBX_OVERFLOW_WATERMARK;
++ if (ice_is_feature_supported(pf, ICE_F_MBX_LIMIT)) {
++ ice_vc_process_vf_msg(pf, &event, NULL);
++ ice_mbx_vf_dec_trig_e830(hw, &event);
++ } else {
++ u16 val = hw->mailboxq.num_rq_entries;
++
++ data.max_num_msgs_mbx = val;
++ val = ICE_MBX_OVERFLOW_WATERMARK;
++ data.async_watermark_val = val;
++ data.num_msg_proc = i;
++ data.num_pending_arq = pending;
+
+- ice_vc_process_vf_msg(pf, &event, &data);
++ ice_vc_process_vf_msg(pf, &event, &data);
++ }
+ break;
+ case ice_aqc_opc_fw_logs_event:
+ ice_get_fwlog_data(pf, &event);
+@@ -4082,7 +4090,11 @@ static int ice_init_pf(struct ice_pf *pf)
+
+ mutex_init(&pf->vfs.table_lock);
+ hash_init(pf->vfs.table);
+- ice_mbx_init_snapshot(&pf->hw);
++ if (ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
++ wr32(&pf->hw, E830_MBX_PF_IN_FLIGHT_VF_MSGS_THRESH,
++ ICE_MBX_OVERFLOW_WATERMARK);
++ else
++ ice_mbx_init_snapshot(&pf->hw);
+
+ xa_init(&pf->dyn_ports);
+ xa_init(&pf->sf_nums);
+diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
+index 91cb393f616f2b..8aabf7749aa5e0 100644
+--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
++++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
+@@ -36,6 +36,7 @@ static void ice_free_vf_entries(struct ice_pf *pf)
+
+ hash_for_each_safe(vfs->table, bkt, tmp, vf, entry) {
+ hash_del_rcu(&vf->entry);
++ ice_deinitialize_vf_entry(vf);
+ ice_put_vf(vf);
+ }
+ }
+@@ -193,9 +194,6 @@ void ice_free_vfs(struct ice_pf *pf)
+ wr32(hw, GLGEN_VFLRSTAT(reg_idx), BIT(bit_idx));
+ }
+
+- /* clear malicious info since the VF is getting released */
+- list_del(&vf->mbx_info.list_entry);
+-
+ mutex_unlock(&vf->cfg_lock);
+ }
+
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib.c b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+index 8c434689e3f78e..815ad0bfe8326b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib.c
+@@ -716,6 +716,23 @@ ice_vf_clear_vsi_promisc(struct ice_vf *vf, struct ice_vsi *vsi, u8 promisc_m)
+ return 0;
+ }
+
++/**
++ * ice_reset_vf_mbx_cnt - reset VF mailbox message count
++ * @vf: pointer to the VF structure
++ *
++ * This function clears the VF mailbox message count, and should be called on
++ * VF reset.
++ */
++static void ice_reset_vf_mbx_cnt(struct ice_vf *vf)
++{
++ struct ice_pf *pf = vf->pf;
++
++ if (ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
++ ice_mbx_vf_clear_cnt_e830(&pf->hw, vf->vf_id);
++ else
++ ice_mbx_clear_malvf(&vf->mbx_info);
++}
++
+ /**
+ * ice_reset_all_vfs - reset all allocated VFs in one go
+ * @pf: pointer to the PF structure
+@@ -742,7 +759,7 @@ void ice_reset_all_vfs(struct ice_pf *pf)
+
+ /* clear all malicious info if the VFs are getting reset */
+ ice_for_each_vf(pf, bkt, vf)
+- ice_mbx_clear_malvf(&vf->mbx_info);
++ ice_reset_vf_mbx_cnt(vf);
+
+ /* If VFs have been disabled, there is no need to reset */
+ if (test_and_set_bit(ICE_VF_DIS, pf->state)) {
+@@ -958,7 +975,7 @@ int ice_reset_vf(struct ice_vf *vf, u32 flags)
+ ice_eswitch_update_repr(&vf->repr_id, vsi);
+
+ /* if the VF has been reset allow it to come up again */
+- ice_mbx_clear_malvf(&vf->mbx_info);
++ ice_reset_vf_mbx_cnt(vf);
+
+ out_unlock:
+ if (lag && lag->bonded && lag->primary &&
+@@ -1011,11 +1028,22 @@ void ice_initialize_vf_entry(struct ice_vf *vf)
+ ice_vf_fdir_init(vf);
+
+ /* Initialize mailbox info for this VF */
+- ice_mbx_init_vf_info(&pf->hw, &vf->mbx_info);
++ if (ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
++ ice_mbx_vf_clear_cnt_e830(&pf->hw, vf->vf_id);
++ else
++ ice_mbx_init_vf_info(&pf->hw, &vf->mbx_info);
+
+ mutex_init(&vf->cfg_lock);
+ }
+
++void ice_deinitialize_vf_entry(struct ice_vf *vf)
++{
++ struct ice_pf *pf = vf->pf;
++
++ if (!ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
++ list_del(&vf->mbx_info.list_entry);
++}
++
+ /**
+ * ice_dis_vf_qs - Disable the VF queues
+ * @vf: pointer to the VF structure
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
+index 0c7e77c0a09fa6..5392b040498621 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
++++ b/drivers/net/ethernet/intel/ice/ice_vf_lib_private.h
+@@ -24,6 +24,7 @@
+ #endif
+
+ void ice_initialize_vf_entry(struct ice_vf *vf);
++void ice_deinitialize_vf_entry(struct ice_vf *vf);
+ void ice_dis_vf_qs(struct ice_vf *vf);
+ int ice_check_vf_init(struct ice_vf *vf);
+ enum virtchnl_status_code ice_err_to_virt_err(int err);
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_mbx.c b/drivers/net/ethernet/intel/ice/ice_vf_mbx.c
+index 40cb4ba0789ced..75c8113e58ee92 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_mbx.c
++++ b/drivers/net/ethernet/intel/ice/ice_vf_mbx.c
+@@ -210,6 +210,38 @@ ice_mbx_detect_malvf(struct ice_hw *hw, struct ice_mbx_vf_info *vf_info,
+ return 0;
+ }
+
++/**
++ * ice_mbx_vf_dec_trig_e830 - Decrements the VF mailbox queue counter
++ * @hw: pointer to the HW struct
++ * @event: pointer to the control queue receive event
++ *
++ * This function triggers the hardware to decrement the counter
++ * MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT when the driver replenishes
++ * the buffers at the PF mailbox queue.
++ */
++void ice_mbx_vf_dec_trig_e830(const struct ice_hw *hw,
++ const struct ice_rq_event_info *event)
++{
++ u16 vfid = le16_to_cpu(event->desc.retval);
++
++ wr32(hw, E830_MBX_VF_DEC_TRIG(vfid), 1);
++}
++
++/**
++ * ice_mbx_vf_clear_cnt_e830 - Clear the VF mailbox queue count
++ * @hw: pointer to the HW struct
++ * @vf_id: VF ID in the PF space
++ *
++ * This function clears the counter MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT, and should
++ * be called when a VF is created and on VF reset.
++ */
++void ice_mbx_vf_clear_cnt_e830(const struct ice_hw *hw, u16 vf_id)
++{
++ u32 reg = rd32(hw, E830_MBX_VF_IN_FLIGHT_MSGS_AT_PF_CNT(vf_id));
++
++ wr32(hw, E830_MBX_VF_DEC_TRIG(vf_id), reg);
++}
++
+ /**
+ * ice_mbx_vf_state_handler - Handle states of the overflow algorithm
+ * @hw: pointer to the HW struct
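Editorial note: taken together, the two E830 helpers above drive one hardware counter. The decrement trigger knocks the in-flight count down by one as buffers are replenished, and the clear variant reads the current count and writes it back to the same trigger, zeroing it in one shot. Throughout this patch the driver selects between this hardware accounting and the older software snapshot with the same feature test, e.g. (condensed from hunks in this patch, not new API):

	if (ice_is_feature_supported(pf, ICE_F_MBX_LIMIT))
		ice_mbx_vf_clear_cnt_e830(&pf->hw, vf->vf_id);	/* HW counter */
	else
		ice_mbx_init_vf_info(&pf->hw, &vf->mbx_info);	/* SW snapshot */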
+diff --git a/drivers/net/ethernet/intel/ice/ice_vf_mbx.h b/drivers/net/ethernet/intel/ice/ice_vf_mbx.h
+index 44bc030d17e07a..684de89e5c5ed7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_vf_mbx.h
++++ b/drivers/net/ethernet/intel/ice/ice_vf_mbx.h
+@@ -19,6 +19,9 @@ ice_aq_send_msg_to_vf(struct ice_hw *hw, u16 vfid, u32 v_opcode, u32 v_retval,
+ u8 *msg, u16 msglen, struct ice_sq_cd *cd);
+
+ u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed);
++void ice_mbx_vf_dec_trig_e830(const struct ice_hw *hw,
++ const struct ice_rq_event_info *event);
++void ice_mbx_vf_clear_cnt_e830(const struct ice_hw *hw, u16 vf_id);
+ int
+ ice_mbx_vf_state_handler(struct ice_hw *hw, struct ice_mbx_data *mbx_data,
+ struct ice_mbx_vf_info *vf_info, bool *report_malvf);
+@@ -47,5 +50,11 @@ static inline void ice_mbx_init_snapshot(struct ice_hw *hw)
+ {
+ }
+
++static inline void
++ice_mbx_vf_dec_trig_e830(const struct ice_hw *hw,
++ const struct ice_rq_event_info *event)
++{
++}
++
+ #endif /* CONFIG_PCI_IOV */
+ #endif /* _ICE_VF_MBX_H_ */
+diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl.c b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+index b6ec01f6fa73e0..c8c1d48ff793d7 100644
+--- a/drivers/net/ethernet/intel/ice/ice_virtchnl.c
++++ b/drivers/net/ethernet/intel/ice/ice_virtchnl.c
+@@ -4008,8 +4008,10 @@ ice_is_malicious_vf(struct ice_vf *vf, struct ice_mbx_data *mbxdata)
+ * @event: pointer to the AQ event
+ * @mbxdata: information used to detect VF attempting mailbox overflow
+ *
+- * called from the common asq/arq handler to
+- * process request from VF
++ * Called from the common asq/arq handler to process a request from a VF. When
++ * this flow is used for devices with hardware VF-to-PF message queue overflow
++ * support (ICE_F_MBX_LIMIT), mbxdata is set to NULL and the
++ * ice_is_malicious_vf() check is skipped.
+ */
+ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event,
+ struct ice_mbx_data *mbxdata)
+@@ -4035,7 +4037,7 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event,
+ mutex_lock(&vf->cfg_lock);
+
+ /* Check if the VF is trying to overflow the mailbox */
+- if (ice_is_malicious_vf(vf, mbxdata))
++ if (mbxdata && ice_is_malicious_vf(vf, mbxdata))
+ goto finish;
+
+ /* Check if VF is disabled. */
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.c b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+index 1e0d1f9b07fbcf..afc902ae4763e0 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.c
+@@ -3013,7 +3013,6 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+ skb_shinfo(skb)->gso_size = rsc_seg_len;
+
+ skb_reset_network_header(skb);
+- len = skb->len - skb_transport_offset(skb);
+
+ if (ipv4) {
+ struct iphdr *ipv4h = ip_hdr(skb);
+@@ -3022,6 +3021,7 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+
+ /* Reset and set transport header offset in skb */
+ skb_set_transport_header(skb, sizeof(struct iphdr));
++ len = skb->len - skb_transport_offset(skb);
+
+ /* Compute the TCP pseudo header checksum */
+ tcp_hdr(skb)->check =
+@@ -3031,6 +3031,7 @@ static int idpf_rx_rsc(struct idpf_rx_queue *rxq, struct sk_buff *skb,
+
+ skb_shinfo(skb)->gso_type = SKB_GSO_TCPV6;
+ skb_set_transport_header(skb, sizeof(struct ipv6hdr));
++ len = skb->len - skb_transport_offset(skb);
+ tcp_hdr(skb)->check =
+ ~tcp_v6_check(len, &ipv6h->saddr, &ipv6h->daddr, 0);
+ }
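Editorial note: the fix above is purely one of ordering. skb_transport_offset() is only meaningful once skb_set_transport_header() has run, so the length fed to the pseudo-header checksum must be computed after the header is set, separately in each address-family branch. In shorthand:

	/* before (broken): offset read while the transport header is unset */
	len = skb->len - skb_transport_offset(skb);
	skb_set_transport_header(skb, sizeof(struct iphdr));

	/* after (fixed): set the header first, then derive the length */
	skb_set_transport_header(skb, sizeof(struct iphdr));
	len = skb->len - skb_transport_offset(skb);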
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+index 1641791a2d5b4e..8ed83fb9886243 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+@@ -324,7 +324,7 @@ static const struct mvpp2_cls_flow cls_flows[MVPP2_N_PRS_FLOWS] = {
+ MVPP2_PRS_RI_VLAN_MASK),
+ /* Non IP flow, with vlan tag */
+ MVPP2_DEF_FLOW(MVPP22_FLOW_ETHERNET, MVPP2_FL_NON_IP_TAG,
+- MVPP22_CLS_HEK_OPT_VLAN,
++ MVPP22_CLS_HEK_TAGGED,
+ 0, 0),
+ };
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+index 7db9cab9bedf69..d9362eabc6a1ca 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+@@ -572,7 +572,7 @@ irq_pool_alloc(struct mlx5_core_dev *dev, int start, int size, char *name,
+ pool->min_threshold = min_threshold * MLX5_EQ_REFS_PER_IRQ;
+ pool->max_threshold = max_threshold * MLX5_EQ_REFS_PER_IRQ;
+ mlx5_core_dbg(dev, "pool->name = %s, pool->size = %d, pool->start = %d",
+- name, size, start);
++ name ? name : "mlx5_pcif_pool", size, start);
+ return pool;
+ }
+
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+index bfe6e2d631bdf5..f5acfb7d4ff655 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+@@ -516,6 +516,19 @@ static int loongson_dwmac_acpi_config(struct pci_dev *pdev,
+ return 0;
+ }
+
++/* Loongson's DWMAC device may take nearly two seconds to complete DMA reset */
++static int loongson_dwmac_fix_reset(void *priv, void __iomem *ioaddr)
++{
++ u32 value = readl(ioaddr + DMA_BUS_MODE);
++
++ value |= DMA_BUS_MODE_SFT_RESET;
++ writel(value, ioaddr + DMA_BUS_MODE);
++
++ return readl_poll_timeout(ioaddr + DMA_BUS_MODE, value,
++ !(value & DMA_BUS_MODE_SFT_RESET),
++ 10000, 2000000);
++}
++
+ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ {
+ struct plat_stmmacenet_data *plat;
+@@ -566,6 +579,7 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
+
+ plat->bsp_priv = ld;
+ plat->setup = loongson_dwmac_setup;
++ plat->fix_soc_reset = loongson_dwmac_fix_reset;
+ ld->dev = &pdev->dev;
+ ld->loongson_id = readl(res.addr + GMAC_VERSION) & 0xff;
+
+diff --git a/drivers/net/ethernet/ti/Kconfig b/drivers/net/ethernet/ti/Kconfig
+index 0d5a862cd78a6c..3a13d60a947a81 100644
+--- a/drivers/net/ethernet/ti/Kconfig
++++ b/drivers/net/ethernet/ti/Kconfig
+@@ -99,6 +99,7 @@ config TI_K3_AM65_CPSW_NUSS
+ select NET_DEVLINK
+ select TI_DAVINCI_MDIO
+ select PHYLINK
++ select PAGE_POOL
+ select TI_K3_CPPI_DESC_POOL
+ imply PHY_TI_GMII_SEL
+ depends on TI_K3_AM65_CPTS || !TI_K3_AM65_CPTS
+diff --git a/drivers/net/ethernet/ti/icssg/icss_iep.c b/drivers/net/ethernet/ti/icssg/icss_iep.c
+index 768578c0d9587d..d59c1744840af2 100644
+--- a/drivers/net/ethernet/ti/icssg/icss_iep.c
++++ b/drivers/net/ethernet/ti/icssg/icss_iep.c
+@@ -474,26 +474,7 @@ static int icss_iep_perout_enable_hw(struct icss_iep *iep,
+ static int icss_iep_perout_enable(struct icss_iep *iep,
+ struct ptp_perout_request *req, int on)
+ {
+- int ret = 0;
+-
+- mutex_lock(&iep->ptp_clk_mutex);
+-
+- if (iep->pps_enabled) {
+- ret = -EBUSY;
+- goto exit;
+- }
+-
+- if (iep->perout_enabled == !!on)
+- goto exit;
+-
+- ret = icss_iep_perout_enable_hw(iep, req, on);
+- if (!ret)
+- iep->perout_enabled = !!on;
+-
+-exit:
+- mutex_unlock(&iep->ptp_clk_mutex);
+-
+- return ret;
++ return -EOPNOTSUPP;
+ }
+
+ static void icss_iep_cap_cmp_work(struct work_struct *work)
+diff --git a/drivers/net/ipvlan/ipvlan_core.c b/drivers/net/ipvlan/ipvlan_core.c
+index b1afcb8740de12..ca62188a317ad4 100644
+--- a/drivers/net/ipvlan/ipvlan_core.c
++++ b/drivers/net/ipvlan/ipvlan_core.c
+@@ -3,6 +3,7 @@
+ */
+
+ #include <net/inet_dscp.h>
++#include <net/ip.h>
+
+ #include "ipvlan.h"
+
+@@ -415,20 +416,25 @@ struct ipvl_addr *ipvlan_addr_lookup(struct ipvl_port *port, void *lyr3h,
+
+ static noinline_for_stack int ipvlan_process_v4_outbound(struct sk_buff *skb)
+ {
+- const struct iphdr *ip4h = ip_hdr(skb);
+ struct net_device *dev = skb->dev;
+ struct net *net = dev_net(dev);
+- struct rtable *rt;
+ int err, ret = NET_XMIT_DROP;
++ const struct iphdr *ip4h;
++ struct rtable *rt;
+ struct flowi4 fl4 = {
+ .flowi4_oif = dev->ifindex,
+- .flowi4_tos = ip4h->tos & INET_DSCP_MASK,
+ .flowi4_flags = FLOWI_FLAG_ANYSRC,
+ .flowi4_mark = skb->mark,
+- .daddr = ip4h->daddr,
+- .saddr = ip4h->saddr,
+ };
+
++ if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
++ goto err;
++
++ ip4h = ip_hdr(skb);
++ fl4.daddr = ip4h->daddr;
++ fl4.saddr = ip4h->saddr;
++ fl4.flowi4_tos = inet_dscp_to_dsfield(ip4h_dscp(ip4h));
++
+ rt = ip_route_output_flow(net, &fl4, NULL);
+ if (IS_ERR(rt))
+ goto err;
+@@ -487,6 +493,12 @@ static int ipvlan_process_v6_outbound(struct sk_buff *skb)
+ struct net_device *dev = skb->dev;
+ int err, ret = NET_XMIT_DROP;
+
++ if (!pskb_network_may_pull(skb, sizeof(struct ipv6hdr))) {
++ DEV_STATS_INC(dev, tx_errors);
++ kfree_skb(skb);
++ return ret;
++ }
++
+ err = ipvlan_route_v6_outbound(dev, skb);
+ if (unlikely(err)) {
+ DEV_STATS_INC(dev, tx_errors);
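Editorial note: both ipvlan output paths now follow the same defensive pattern: verify that the full L3 header is present in the skb's linear area before dereferencing it, since ip_hdr() or ipv6_hdr() on a short or non-linear skb could otherwise read past valid data. The v4 shape, as used above:

	if (!pskb_network_may_pull(skb, sizeof(struct iphdr)))
		goto err;		/* drop: header not safely readable */

	ip4h = ip_hdr(skb);		/* safe only after the pull check */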
+diff --git a/drivers/net/loopback.c b/drivers/net/loopback.c
+index 1993b90b1a5f90..491e56b3263fd5 100644
+--- a/drivers/net/loopback.c
++++ b/drivers/net/loopback.c
+@@ -244,8 +244,22 @@ static netdev_tx_t blackhole_netdev_xmit(struct sk_buff *skb,
+ return NETDEV_TX_OK;
+ }
+
++static int blackhole_neigh_output(struct neighbour *n, struct sk_buff *skb)
++{
++ kfree_skb(skb);
++ return 0;
++}
++
++static int blackhole_neigh_construct(struct net_device *dev,
++ struct neighbour *n)
++{
++ n->output = blackhole_neigh_output;
++ return 0;
++}
++
+ static const struct net_device_ops blackhole_netdev_ops = {
+ .ndo_start_xmit = blackhole_netdev_xmit,
++ .ndo_neigh_construct = blackhole_neigh_construct,
+ };
+
+ /* This is a dst-dummy device used specifically for invalidated
+diff --git a/drivers/net/phy/qcom/qca807x.c b/drivers/net/phy/qcom/qca807x.c
+index bd8a51ec0ecd6a..ec336c3e338d6c 100644
+--- a/drivers/net/phy/qcom/qca807x.c
++++ b/drivers/net/phy/qcom/qca807x.c
+@@ -774,7 +774,7 @@ static int qca807x_config_init(struct phy_device *phydev)
+ control_dac &= ~QCA807X_CONTROL_DAC_MASK;
+ if (!priv->dac_full_amplitude)
+ control_dac |= QCA807X_CONTROL_DAC_DSP_AMPLITUDE;
+- if (!priv->dac_full_amplitude)
++ if (!priv->dac_full_bias_current)
+ control_dac |= QCA807X_CONTROL_DAC_DSP_BIAS_CURRENT;
+ if (!priv->dac_disable_bias_current_tweak)
+ control_dac |= QCA807X_CONTROL_DAC_BIAS_CURRENT_TWEAK;
+diff --git a/drivers/net/usb/gl620a.c b/drivers/net/usb/gl620a.c
+index 46af78caf457a6..0bfa37c1405918 100644
+--- a/drivers/net/usb/gl620a.c
++++ b/drivers/net/usb/gl620a.c
+@@ -179,9 +179,7 @@ static int genelink_bind(struct usbnet *dev, struct usb_interface *intf)
+ {
+ dev->hard_mtu = GL_RCV_BUF_SIZE;
+ dev->net->hard_header_len += 4;
+- dev->in = usb_rcvbulkpipe(dev->udev, dev->driver_info->in);
+- dev->out = usb_sndbulkpipe(dev->udev, dev->driver_info->out);
+- return 0;
++ return usbnet_get_endpoints(dev, intf);
+ }
+
+ static const struct driver_info genelink_info = {
+diff --git a/drivers/phy/rockchip/Kconfig b/drivers/phy/rockchip/Kconfig
+index 2f7a05f21dc595..dcb8e1628632e6 100644
+--- a/drivers/phy/rockchip/Kconfig
++++ b/drivers/phy/rockchip/Kconfig
+@@ -125,6 +125,7 @@ config PHY_ROCKCHIP_USBDP
+ depends on ARCH_ROCKCHIP && OF
+ depends on TYPEC
+ select GENERIC_PHY
++ select USB_COMMON
+ help
+ Enable this to support the Rockchip USB3.0/DP combo PHY with
+ Samsung IP block. This is required for USB3 support on RK3588.
+diff --git a/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c b/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
+index 2eb3329ca23f67..1ef6d9630f7e09 100644
+--- a/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
++++ b/drivers/phy/rockchip/phy-rockchip-naneng-combphy.c
+@@ -309,7 +309,10 @@ static int rockchip_combphy_parse_dt(struct device *dev, struct rockchip_combphy
+
+ priv->ext_refclk = device_property_present(dev, "rockchip,ext-refclk");
+
+- priv->phy_rst = devm_reset_control_get(dev, "phy");
++ priv->phy_rst = devm_reset_control_get_exclusive(dev, "phy");
++ /* fallback to old behaviour */
++ if (PTR_ERR(priv->phy_rst) == -ENOENT)
++ priv->phy_rst = devm_reset_control_array_get_exclusive(dev);
+ if (IS_ERR(priv->phy_rst))
+ return dev_err_probe(dev, PTR_ERR(priv->phy_rst), "failed to get phy reset\n");
+
+diff --git a/drivers/phy/samsung/phy-exynos5-usbdrd.c b/drivers/phy/samsung/phy-exynos5-usbdrd.c
+index c421b495eb0fe4..46b8f6987c62c3 100644
+--- a/drivers/phy/samsung/phy-exynos5-usbdrd.c
++++ b/drivers/phy/samsung/phy-exynos5-usbdrd.c
+@@ -488,9 +488,9 @@ exynos5_usbdrd_pipe3_set_refclk(struct phy_usb_instance *inst)
+ reg |= PHYCLKRST_REFCLKSEL_EXT_REFCLK;
+
+ /* FSEL settings corresponding to reference clock */
+- reg &= ~PHYCLKRST_FSEL_PIPE_MASK |
+- PHYCLKRST_MPLL_MULTIPLIER_MASK |
+- PHYCLKRST_SSC_REFCLKSEL_MASK;
++ reg &= ~(PHYCLKRST_FSEL_PIPE_MASK |
++ PHYCLKRST_MPLL_MULTIPLIER_MASK |
++ PHYCLKRST_SSC_REFCLKSEL_MASK);
+ switch (phy_drd->extrefclk) {
+ case EXYNOS5_FSEL_50MHZ:
+ reg |= (PHYCLKRST_MPLL_MULTIPLIER_50M_REF |
+@@ -532,9 +532,9 @@ exynos5_usbdrd_utmi_set_refclk(struct phy_usb_instance *inst)
+ reg &= ~PHYCLKRST_REFCLKSEL_MASK;
+ reg |= PHYCLKRST_REFCLKSEL_EXT_REFCLK;
+
+- reg &= ~PHYCLKRST_FSEL_UTMI_MASK |
+- PHYCLKRST_MPLL_MULTIPLIER_MASK |
+- PHYCLKRST_SSC_REFCLKSEL_MASK;
++ reg &= ~(PHYCLKRST_FSEL_UTMI_MASK |
++ PHYCLKRST_MPLL_MULTIPLIER_MASK |
++ PHYCLKRST_SSC_REFCLKSEL_MASK);
+ reg |= PHYCLKRST_FSEL(phy_drd->extrefclk);
+
+ return reg;
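Editorial note: the root cause in both hunks above is C operator precedence. ~ binds tighter than |, so reg &= ~A | B | C clears only A's bits and then ORs B's and C's bits back into the mask, leaving those fields uncleared. A minimal illustration (u32 values, A = 0x0f, B = 0xf0):

	reg &= ~A | B;		/* mask = 0xfffffff0 | 0xf0 = 0xfffffff0: B survives */
	reg &= ~(A | B);	/* mask = 0xffffff00: both fields cleared */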
+@@ -1296,14 +1296,17 @@ static int exynos5_usbdrd_gs101_phy_exit(struct phy *phy)
+ struct exynos5_usbdrd_phy *phy_drd = to_usbdrd_phy(inst);
+ int ret;
+
++ if (inst->phy_cfg->id == EXYNOS5_DRDPHY_UTMI) {
++ ret = exynos850_usbdrd_phy_exit(phy);
++ if (ret)
++ return ret;
++ }
++
++ exynos5_usbdrd_phy_isol(inst, true);
++
+ if (inst->phy_cfg->id != EXYNOS5_DRDPHY_UTMI)
+ return 0;
+
+- ret = exynos850_usbdrd_phy_exit(phy);
+- if (ret)
+- return ret;
+-
+- exynos5_usbdrd_phy_isol(inst, true);
+ return regulator_bulk_disable(phy_drd->drv_data->n_regulators,
+ phy_drd->regulators);
+ }
+diff --git a/drivers/phy/tegra/xusb-tegra186.c b/drivers/phy/tegra/xusb-tegra186.c
+index 0f60d5d1c1678d..fae6242aa730e0 100644
+--- a/drivers/phy/tegra/xusb-tegra186.c
++++ b/drivers/phy/tegra/xusb-tegra186.c
+@@ -928,6 +928,7 @@ static int tegra186_utmi_phy_init(struct phy *phy)
+ unsigned int index = lane->index;
+ struct device *dev = padctl->dev;
+ int err;
++ u32 reg;
+
+ port = tegra_xusb_find_usb2_port(padctl, index);
+ if (!port) {
+@@ -935,6 +936,16 @@ static int tegra186_utmi_phy_init(struct phy *phy)
+ return -ENODEV;
+ }
+
++ if (port->mode == USB_DR_MODE_OTG ||
++ port->mode == USB_DR_MODE_PERIPHERAL) {
++ /* reset VBUS&ID OVERRIDE */
++ reg = padctl_readl(padctl, USB2_VBUS_ID);
++ reg &= ~VBUS_OVERRIDE;
++ reg &= ~ID_OVERRIDE(~0);
++ reg |= ID_OVERRIDE_FLOATING;
++ padctl_writel(padctl, reg, USB2_VBUS_ID);
++ }
++
+ if (port->supply && port->mode == USB_DR_MODE_HOST) {
+ err = regulator_enable(port->supply);
+ if (err) {
+diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
+index c9dde1ac9523e8..3023b07dc483b5 100644
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -1653,13 +1653,6 @@ static blk_status_t scsi_prepare_cmd(struct request *req)
+ if (in_flight)
+ __set_bit(SCMD_STATE_INFLIGHT, &cmd->state);
+
+- /*
+- * Only clear the driver-private command data if the LLD does not supply
+- * a function to initialize that data.
+- */
+- if (!shost->hostt->init_cmd_priv)
+- memset(cmd + 1, 0, shost->hostt->cmd_size);
+-
+ cmd->prot_op = SCSI_PROT_NORMAL;
+ if (blk_rq_bytes(req))
+ cmd->sc_data_direction = rq_dma_dir(req);
+@@ -1826,6 +1819,13 @@ static blk_status_t scsi_queue_rq(struct blk_mq_hw_ctx *hctx,
+ if (!scsi_host_queue_ready(q, shost, sdev, cmd))
+ goto out_dec_target_busy;
+
++ /*
++ * Only clear the driver-private command data if the LLD does not supply
++ * a function to initialize that data.
++ */
++ if (shost->hostt->cmd_size && !shost->hostt->init_cmd_priv)
++ memset(cmd + 1, 0, shost->hostt->cmd_size);
++
+ if (!(req->rq_flags & RQF_DONTPREP)) {
+ ret = scsi_prepare_cmd(req);
+ if (ret != BLK_STS_OK)
+diff --git a/drivers/thermal/gov_bang_bang.c b/drivers/thermal/gov_bang_bang.c
+index 863e7a4272e66f..b887e48e8c7e67 100644
+--- a/drivers/thermal/gov_bang_bang.c
++++ b/drivers/thermal/gov_bang_bang.c
+@@ -67,6 +67,7 @@ static void bang_bang_control(struct thermal_zone_device *tz,
+ const struct thermal_trip *trip,
+ bool crossed_up)
+ {
++ const struct thermal_trip_desc *td = trip_to_trip_desc(trip);
+ struct thermal_instance *instance;
+
+ lockdep_assert_held(&tz->lock);
+@@ -75,10 +76,8 @@ static void bang_bang_control(struct thermal_zone_device *tz,
+ thermal_zone_trip_id(tz, trip), trip->temperature,
+ tz->temperature, trip->hysteresis);
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+- if (instance->trip == trip)
+- bang_bang_set_instance_target(instance, crossed_up);
+- }
++ list_for_each_entry(instance, &td->thermal_instances, trip_node)
++ bang_bang_set_instance_target(instance, crossed_up);
+ }
+
+ static void bang_bang_manage(struct thermal_zone_device *tz)
+@@ -104,8 +103,8 @@ static void bang_bang_manage(struct thermal_zone_device *tz)
+ * to the thermal zone temperature and the trip point threshold.
+ */
+ turn_on = tz->temperature >= td->threshold;
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+- if (!instance->initialized && instance->trip == trip)
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
++ if (!instance->initialized)
+ bang_bang_set_instance_target(instance, turn_on);
+ }
+ }
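Editorial note: this run of governor hunks all applies one refactor. Thermal instances move from the zone-wide tz->thermal_instances list onto a per-trip list in struct thermal_trip_desc, so each governor walks only the instances bound to the trip at hand instead of filtering the whole zone. Using the bang-bang hunk above as the pattern:

	/* before: zone-wide list, filtered per trip */
	list_for_each_entry(instance, &tz->thermal_instances, tz_node)
		if (instance->trip == trip)
			bang_bang_set_instance_target(instance, crossed_up);

	/* after: per-trip list, no filter needed */
	list_for_each_entry(instance, &td->thermal_instances, trip_node)
		bang_bang_set_instance_target(instance, crossed_up);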
+diff --git a/drivers/thermal/gov_fair_share.c b/drivers/thermal/gov_fair_share.c
+index ce0ea571ed67ab..d37d57d48c389a 100644
+--- a/drivers/thermal/gov_fair_share.c
++++ b/drivers/thermal/gov_fair_share.c
+@@ -44,7 +44,7 @@ static int get_trip_level(struct thermal_zone_device *tz)
+ /**
+ * fair_share_throttle - throttles devices associated with the given zone
+ * @tz: thermal_zone_device
+- * @trip: trip point
++ * @td: trip point descriptor
+ * @trip_level: number of trips crossed by the zone temperature
+ *
+ * Throttling Logic: This uses three parameters to calculate the new
+@@ -61,29 +61,23 @@ static int get_trip_level(struct thermal_zone_device *tz)
+ * new_state of cooling device = P3 * P2 * P1
+ */
+ static void fair_share_throttle(struct thermal_zone_device *tz,
+- const struct thermal_trip *trip,
++ const struct thermal_trip_desc *td,
+ int trip_level)
+ {
+ struct thermal_instance *instance;
+ int total_weight = 0;
+ int nr_instances = 0;
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+- if (instance->trip != trip)
+- continue;
+-
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
+ total_weight += instance->weight;
+ nr_instances++;
+ }
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
+ struct thermal_cooling_device *cdev = instance->cdev;
+ u64 dividend;
+ u32 divisor;
+
+- if (instance->trip != trip)
+- continue;
+-
+ dividend = trip_level;
+ dividend *= cdev->max_state;
+ divisor = tz->num_trips;
+@@ -116,7 +110,7 @@ static void fair_share_manage(struct thermal_zone_device *tz)
+ trip->type == THERMAL_TRIP_HOT)
+ continue;
+
+- fair_share_throttle(tz, trip, trip_level);
++ fair_share_throttle(tz, td, trip_level);
+ }
+ }
+
+diff --git a/drivers/thermal/gov_power_allocator.c b/drivers/thermal/gov_power_allocator.c
+index 1b2345a697c5a0..90b4bfd9237bce 100644
+--- a/drivers/thermal/gov_power_allocator.c
++++ b/drivers/thermal/gov_power_allocator.c
+@@ -97,11 +97,9 @@ struct power_allocator_params {
+ struct power_actor *power;
+ };
+
+-static bool power_actor_is_valid(struct power_allocator_params *params,
+- struct thermal_instance *instance)
++static bool power_actor_is_valid(struct thermal_instance *instance)
+ {
+- return (instance->trip == params->trip_max &&
+- cdev_is_power_actor(instance->cdev));
++ return cdev_is_power_actor(instance->cdev);
+ }
+
+ /**
+@@ -118,13 +116,14 @@ static bool power_actor_is_valid(struct power_allocator_params *params,
+ static u32 estimate_sustainable_power(struct thermal_zone_device *tz)
+ {
+ struct power_allocator_params *params = tz->governor_data;
++ const struct thermal_trip_desc *td = trip_to_trip_desc(params->trip_max);
+ struct thermal_cooling_device *cdev;
+ struct thermal_instance *instance;
+ u32 sustainable_power = 0;
+ u32 min_power;
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+- if (!power_actor_is_valid(params, instance))
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
++ if (!power_actor_is_valid(instance))
+ continue;
+
+ cdev = instance->cdev;
+@@ -364,7 +363,7 @@ static void divvy_up_power(struct power_actor *power, int num_actors,
+
+ for (i = 0; i < num_actors; i++) {
+ struct power_actor *pa = &power[i];
+- u64 req_range = (u64)pa->req_power * power_range;
++ u64 req_range = (u64)pa->weighted_req_power * power_range;
+
+ pa->granted_power = DIV_ROUND_CLOSEST_ULL(req_range,
+ total_req_power);
+@@ -400,6 +399,7 @@ static void divvy_up_power(struct power_actor *power, int num_actors,
+ static void allocate_power(struct thermal_zone_device *tz, int control_temp)
+ {
+ struct power_allocator_params *params = tz->governor_data;
++ const struct thermal_trip_desc *td = trip_to_trip_desc(params->trip_max);
+ unsigned int num_actors = params->num_actors;
+ struct power_actor *power = params->power;
+ struct thermal_cooling_device *cdev;
+@@ -417,10 +417,10 @@ static void allocate_power(struct thermal_zone_device *tz, int control_temp)
+ /* Clean all buffers for new power estimations */
+ memset(power, 0, params->buffer_size);
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
+ struct power_actor *pa = &power[i];
+
+- if (!power_actor_is_valid(params, instance))
++ if (!power_actor_is_valid(instance))
+ continue;
+
+ cdev = instance->cdev;
+@@ -454,10 +454,10 @@ static void allocate_power(struct thermal_zone_device *tz, int control_temp)
+ power_range);
+
+ i = 0;
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
+ struct power_actor *pa = &power[i];
+
+- if (!power_actor_is_valid(params, instance))
++ if (!power_actor_is_valid(instance))
+ continue;
+
+ power_actor_set_power(instance->cdev, instance,
+@@ -538,12 +538,13 @@ static void reset_pid_controller(struct power_allocator_params *params)
+ static void allow_maximum_power(struct thermal_zone_device *tz)
+ {
+ struct power_allocator_params *params = tz->governor_data;
++ const struct thermal_trip_desc *td = trip_to_trip_desc(params->trip_max);
+ struct thermal_cooling_device *cdev;
+ struct thermal_instance *instance;
+ u32 req_power;
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+- if (!power_actor_is_valid(params, instance))
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
++ if (!power_actor_is_valid(instance))
+ continue;
+
+ cdev = instance->cdev;
+@@ -581,13 +582,16 @@ static void allow_maximum_power(struct thermal_zone_device *tz)
+ static int check_power_actors(struct thermal_zone_device *tz,
+ struct power_allocator_params *params)
+ {
++ const struct thermal_trip_desc *td;
+ struct thermal_instance *instance;
+ int ret = 0;
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
+- if (instance->trip != params->trip_max)
+- continue;
++ if (!params->trip_max)
++ return 0;
++
++ td = trip_to_trip_desc(params->trip_max);
+
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
+ if (!cdev_is_power_actor(instance->cdev)) {
+ dev_warn(&tz->device, "power_allocator: %s is not a power actor\n",
+ instance->cdev->type);
+@@ -631,30 +635,43 @@ static int allocate_actors_buffer(struct power_allocator_params *params,
+ return ret;
+ }
+
++static void power_allocator_update_weight(struct power_allocator_params *params)
++{
++ const struct thermal_trip_desc *td;
++ struct thermal_instance *instance;
++
++ if (!params->trip_max)
++ return;
++
++ td = trip_to_trip_desc(params->trip_max);
++
++ params->total_weight = 0;
++ list_for_each_entry(instance, &td->thermal_instances, trip_node)
++ if (power_actor_is_valid(instance))
++ params->total_weight += instance->weight;
++}
++
+ static void power_allocator_update_tz(struct thermal_zone_device *tz,
+ enum thermal_notify_event reason)
+ {
+ struct power_allocator_params *params = tz->governor_data;
++ const struct thermal_trip_desc *td = trip_to_trip_desc(params->trip_max);
+ struct thermal_instance *instance;
+ int num_actors = 0;
+
+ switch (reason) {
+ case THERMAL_TZ_BIND_CDEV:
+ case THERMAL_TZ_UNBIND_CDEV:
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node)
+- if (power_actor_is_valid(params, instance))
++ list_for_each_entry(instance, &td->thermal_instances, trip_node)
++ if (power_actor_is_valid(instance))
+ num_actors++;
+
+- if (num_actors == params->num_actors)
+- return;
++ if (num_actors != params->num_actors)
++ allocate_actors_buffer(params, num_actors);
+
+- allocate_actors_buffer(params, num_actors);
+- break;
++ fallthrough;
+ case THERMAL_INSTANCE_WEIGHT_CHANGED:
+- params->total_weight = 0;
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node)
+- if (power_actor_is_valid(params, instance))
+- params->total_weight += instance->weight;
++ power_allocator_update_weight(params);
+ break;
+ default:
+ break;
+@@ -720,6 +737,8 @@ static int power_allocator_bind(struct thermal_zone_device *tz)
+
+ tz->governor_data = params;
+
++ power_allocator_update_weight(params);
++
+ return 0;
+
+ free_params:
+diff --git a/drivers/thermal/gov_step_wise.c b/drivers/thermal/gov_step_wise.c
+index fd5527188cf91a..ea4bf88d37f337 100644
+--- a/drivers/thermal/gov_step_wise.c
++++ b/drivers/thermal/gov_step_wise.c
+@@ -66,9 +66,10 @@ static unsigned long get_target_state(struct thermal_instance *instance,
+ }
+
+ static void thermal_zone_trip_update(struct thermal_zone_device *tz,
+- const struct thermal_trip *trip,
++ const struct thermal_trip_desc *td,
+ int trip_threshold)
+ {
++ const struct thermal_trip *trip = &td->trip;
+ enum thermal_trend trend = get_tz_trend(tz, trip);
+ int trip_id = thermal_zone_trip_id(tz, trip);
+ struct thermal_instance *instance;
+@@ -82,12 +83,9 @@ static void thermal_zone_trip_update(struct thermal_zone_device *tz,
+ dev_dbg(&tz->device, "Trip%d[type=%d,temp=%d]:trend=%d,throttle=%d\n",
+ trip_id, trip->type, trip_threshold, trend, throttle);
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node) {
++ list_for_each_entry(instance, &td->thermal_instances, trip_node) {
+ int old_target;
+
+- if (instance->trip != trip)
+- continue;
+-
+ old_target = instance->target;
+ instance->target = get_target_state(instance, trend, throttle);
+
+@@ -127,11 +125,13 @@ static void step_wise_manage(struct thermal_zone_device *tz)
+ trip->type == THERMAL_TRIP_HOT)
+ continue;
+
+- thermal_zone_trip_update(tz, trip, td->threshold);
++ thermal_zone_trip_update(tz, td, td->threshold);
+ }
+
+- list_for_each_entry(instance, &tz->thermal_instances, tz_node)
+- thermal_cdev_update(instance->cdev);
++ for_each_trip_desc(tz, td) {
++ list_for_each_entry(instance, &td->thermal_instances, trip_node)
++ thermal_cdev_update(instance->cdev);
++ }
+ }
+
+ static struct thermal_governor thermal_gov_step_wise = {
+diff --git a/drivers/thermal/thermal_core.c b/drivers/thermal/thermal_core.c
+index 1d2f2b307bac50..c2fa236e10cda7 100644
+--- a/drivers/thermal/thermal_core.c
++++ b/drivers/thermal/thermal_core.c
+@@ -490,7 +490,7 @@ static void thermal_zone_device_check(struct work_struct *work)
+
+ static void thermal_zone_device_init(struct thermal_zone_device *tz)
+ {
+- struct thermal_instance *pos;
++ struct thermal_trip_desc *td;
+
+ INIT_DELAYED_WORK(&tz->poll_queue, thermal_zone_device_check);
+
+@@ -498,8 +498,12 @@ static void thermal_zone_device_init(struct thermal_zone_device *tz)
+ tz->passive = 0;
+ tz->prev_low_trip = -INT_MAX;
+ tz->prev_high_trip = INT_MAX;
+- list_for_each_entry(pos, &tz->thermal_instances, tz_node)
+- pos->initialized = false;
++ for_each_trip_desc(tz, td) {
++ struct thermal_instance *instance;
++
++ list_for_each_entry(instance, &td->thermal_instances, trip_node)
++ instance->initialized = false;
++ }
+ }
+
+ static void thermal_governor_trip_crossed(struct thermal_governor *governor,
+@@ -764,12 +768,12 @@ struct thermal_zone_device *thermal_zone_get_by_id(int id)
+ * Return: 0 on success, the proper error value otherwise.
+ */
+ static int thermal_bind_cdev_to_trip(struct thermal_zone_device *tz,
+- const struct thermal_trip *trip,
++ struct thermal_trip *trip,
+ struct thermal_cooling_device *cdev,
+ struct cooling_spec *cool_spec)
+ {
+- struct thermal_instance *dev;
+- struct thermal_instance *pos;
++ struct thermal_trip_desc *td = trip_to_trip_desc(trip);
++ struct thermal_instance *dev, *instance;
+ bool upper_no_limit;
+ int result;
+
+@@ -832,13 +836,13 @@ static int thermal_bind_cdev_to_trip(struct thermal_zone_device *tz,
+ goto remove_trip_file;
+
+ mutex_lock(&cdev->lock);
+- list_for_each_entry(pos, &tz->thermal_instances, tz_node)
+- if (pos->trip == trip && pos->cdev == cdev) {
++ list_for_each_entry(instance, &td->thermal_instances, trip_node)
++ if (instance->cdev == cdev) {
+ result = -EEXIST;
+ break;
+ }
+ if (!result) {
+- list_add_tail(&dev->tz_node, &tz->thermal_instances);
++ list_add_tail(&dev->trip_node, &td->thermal_instances);
+ list_add_tail(&dev->cdev_node, &cdev->thermal_instances);
+ atomic_set(&tz->need_update, 1);
+
+@@ -872,15 +876,16 @@ static int thermal_bind_cdev_to_trip(struct thermal_zone_device *tz,
+ * This function is usually called in the thermal zone device .unbind callback.
+ */
+ static void thermal_unbind_cdev_from_trip(struct thermal_zone_device *tz,
+- const struct thermal_trip *trip,
++ struct thermal_trip *trip,
+ struct thermal_cooling_device *cdev)
+ {
++ struct thermal_trip_desc *td = trip_to_trip_desc(trip);
+ struct thermal_instance *pos, *next;
+
+ mutex_lock(&cdev->lock);
+- list_for_each_entry_safe(pos, next, &tz->thermal_instances, tz_node) {
+- if (pos->trip == trip && pos->cdev == cdev) {
+- list_del(&pos->tz_node);
++ list_for_each_entry_safe(pos, next, &td->thermal_instances, trip_node) {
++ if (pos->cdev == cdev) {
++ list_del(&pos->trip_node);
+ list_del(&pos->cdev_node);
+
+ thermal_governor_update_tz(tz, THERMAL_TZ_UNBIND_CDEV);
+@@ -1435,7 +1440,6 @@ thermal_zone_device_register_with_trips(const char *type,
+ }
+ }
+
+- INIT_LIST_HEAD(&tz->thermal_instances);
+ INIT_LIST_HEAD(&tz->node);
+ ida_init(&tz->ida);
+ mutex_init(&tz->lock);
+@@ -1459,6 +1463,7 @@ thermal_zone_device_register_with_trips(const char *type,
+ tz->num_trips = num_trips;
+ for_each_trip_desc(tz, td) {
+ td->trip = *trip++;
++ INIT_LIST_HEAD(&td->thermal_instances);
+ /*
+ * Mark all thresholds as invalid to start with even though
+ * this only matters for the trips that start as invalid and
+diff --git a/drivers/thermal/thermal_core.h b/drivers/thermal/thermal_core.h
+index 421522a2bb9d4c..163871699a602c 100644
+--- a/drivers/thermal/thermal_core.h
++++ b/drivers/thermal/thermal_core.h
+@@ -30,6 +30,7 @@ struct thermal_trip_desc {
+ struct thermal_trip trip;
+ struct thermal_trip_attrs trip_attrs;
+ struct list_head notify_list_node;
++ struct list_head thermal_instances;
+ int notify_temp;
+ int threshold;
+ };
+@@ -99,7 +100,6 @@ struct thermal_governor {
+ * @tzp: thermal zone parameters
+ * @governor: pointer to the governor for this thermal zone
+ * @governor_data: private pointer for governor data
+- * @thermal_instances: list of &struct thermal_instance of this thermal zone
+ * @ida: &struct ida to generate unique id for this zone's cooling
+ * devices
+ * @lock: lock to protect thermal_instances list
+@@ -133,7 +133,6 @@ struct thermal_zone_device {
+ struct thermal_zone_params *tzp;
+ struct thermal_governor *governor;
+ void *governor_data;
+- struct list_head thermal_instances;
+ struct ida ida;
+ struct mutex lock;
+ struct list_head node;
+@@ -230,7 +229,7 @@ struct thermal_instance {
+ struct device_attribute attr;
+ char weight_attr_name[THERMAL_NAME_LENGTH];
+ struct device_attribute weight_attr;
+- struct list_head tz_node; /* node in tz->thermal_instances */
++ struct list_head trip_node; /* node in trip->thermal_instances */
+ struct list_head cdev_node; /* node in cdev->thermal_instances */
+ unsigned int weight; /* The weight of the cooling device */
+ bool upper_no_limit;
+diff --git a/drivers/thermal/thermal_helpers.c b/drivers/thermal/thermal_helpers.c
+index dc374a7a1a659f..403d62d3ce77ee 100644
+--- a/drivers/thermal/thermal_helpers.c
++++ b/drivers/thermal/thermal_helpers.c
+@@ -43,10 +43,11 @@ static bool thermal_instance_present(struct thermal_zone_device *tz,
+ struct thermal_cooling_device *cdev,
+ const struct thermal_trip *trip)
+ {
++ const struct thermal_trip_desc *td = trip_to_trip_desc(trip);
+ struct thermal_instance *ti;
+
+- list_for_each_entry(ti, &tz->thermal_instances, tz_node) {
+- if (ti->trip == trip && ti->cdev == cdev)
++ list_for_each_entry(ti, &td->thermal_instances, trip_node) {
++ if (ti->cdev == cdev)
+ return true;
+ }
+
+diff --git a/drivers/thermal/thermal_of.c b/drivers/thermal/thermal_of.c
+index 5d3d8ce672cd51..e0aa9d9d5604b7 100644
+--- a/drivers/thermal/thermal_of.c
++++ b/drivers/thermal/thermal_of.c
+@@ -293,12 +293,40 @@ static bool thermal_of_get_cooling_spec(struct device_node *map_np, int index,
+ return true;
+ }
+
++static bool thermal_of_cm_lookup(struct device_node *cm_np,
++ const struct thermal_trip *trip,
++ struct thermal_cooling_device *cdev,
++ struct cooling_spec *c)
++{
++ for_each_child_of_node_scoped(cm_np, child) {
++ struct device_node *tr_np;
++ int count, i;
++
++ tr_np = of_parse_phandle(child, "trip", 0);
++ if (tr_np != trip->priv)
++ continue;
++
++ /* The trip has been found, look up the cdev. */
++ count = of_count_phandle_with_args(child, "cooling-device",
++ "#cooling-cells");
++ if (count <= 0)
++ pr_err("Add a cooling_device property with at least one device\n");
++
++ for (i = 0; i < count; i++) {
++ if (thermal_of_get_cooling_spec(child, i, cdev, c))
++ return true;
++ }
++ }
++
++ return false;
++}
++
+ static bool thermal_of_should_bind(struct thermal_zone_device *tz,
+ const struct thermal_trip *trip,
+ struct thermal_cooling_device *cdev,
+ struct cooling_spec *c)
+ {
+- struct device_node *tz_np, *cm_np, *child;
++ struct device_node *tz_np, *cm_np;
+ bool result = false;
+
+ tz_np = thermal_of_zone_get_by_name(tz);
+@@ -312,28 +340,7 @@ static bool thermal_of_should_bind(struct thermal_zone_device *tz,
+ goto out;
+
+ /* Look up the trip and the cdev in the cooling maps. */
+- for_each_child_of_node(cm_np, child) {
+- struct device_node *tr_np;
+- int count, i;
+-
+- tr_np = of_parse_phandle(child, "trip", 0);
+- if (tr_np != trip->priv)
+- continue;
+-
+- /* The trip has been found, look up the cdev. */
+- count = of_count_phandle_with_args(child, "cooling-device", "#cooling-cells");
+- if (count <= 0)
+- pr_err("Add a cooling_device property with at least one device\n");
+-
+- for (i = 0; i < count; i++) {
+- result = thermal_of_get_cooling_spec(child, i, cdev, c);
+- if (result)
+- break;
+- }
+-
+- of_node_put(child);
+- break;
+- }
++ result = thermal_of_cm_lookup(cm_np, trip, cdev, c);
+
+ of_node_put(cm_np);
+ out:
+diff --git a/drivers/ufs/core/ufs_bsg.c b/drivers/ufs/core/ufs_bsg.c
+index 8d4ad0a3f2cf02..252186124669a8 100644
+--- a/drivers/ufs/core/ufs_bsg.c
++++ b/drivers/ufs/core/ufs_bsg.c
+@@ -194,10 +194,12 @@ static int ufs_bsg_request(struct bsg_job *job)
+ ufshcd_rpm_put_sync(hba);
+ kfree(buff);
+ bsg_reply->result = ret;
+- job->reply_len = !rpmb ? sizeof(struct ufs_bsg_reply) : sizeof(struct ufs_rpmb_reply);
+ /* complete the job here only if no error */
+- if (ret == 0)
++ if (ret == 0) {
++ job->reply_len = rpmb ? sizeof(struct ufs_rpmb_reply) :
++ sizeof(struct ufs_bsg_reply);
+ bsg_job_done(job, ret, bsg_reply->reply_payload_rcv_len);
++ }
+
+ return ret;
+ }
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index 67410c4cebee6d..a3e95ef5eda82e 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -266,7 +266,7 @@ static bool ufshcd_has_pending_tasks(struct ufs_hba *hba)
+
+ static bool ufshcd_is_ufs_dev_busy(struct ufs_hba *hba)
+ {
+- return hba->outstanding_reqs || ufshcd_has_pending_tasks(hba);
++ return scsi_host_busy(hba->host) || ufshcd_has_pending_tasks(hba);
+ }
+
+ static const struct ufs_dev_quirk ufs_fixups[] = {
+@@ -639,8 +639,8 @@ static void ufshcd_print_host_state(struct ufs_hba *hba)
+ const struct scsi_device *sdev_ufs = hba->ufs_device_wlun;
+
+ dev_err(hba->dev, "UFS Host state=%d\n", hba->ufshcd_state);
+- dev_err(hba->dev, "outstanding reqs=0x%lx tasks=0x%lx\n",
+- hba->outstanding_reqs, hba->outstanding_tasks);
++ dev_err(hba->dev, "%d outstanding reqs, tasks=0x%lx\n",
++ scsi_host_busy(hba->host), hba->outstanding_tasks);
+ dev_err(hba->dev, "saved_err=0x%x, saved_uic_err=0x%x\n",
+ hba->saved_err, hba->saved_uic_err);
+ dev_err(hba->dev, "Device power mode=%d, UIC link state=%d\n",
+@@ -8975,7 +8975,7 @@ static enum scsi_timeout_action ufshcd_eh_timed_out(struct scsi_cmnd *scmd)
+ dev_info(hba->dev, "%s() finished; outstanding_tasks = %#lx.\n",
+ __func__, hba->outstanding_tasks);
+
+- return hba->outstanding_reqs ? SCSI_EH_RESET_TIMER : SCSI_EH_DONE;
++ return scsi_host_busy(hba->host) ? SCSI_EH_RESET_TIMER : SCSI_EH_DONE;
+ }
+
+ static const struct attribute_group *ufshcd_driver_groups[] = {
+@@ -10457,6 +10457,21 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ */
+ spin_lock_init(&hba->clk_gating.lock);
+
++ /*
++ * Set the default power management level for runtime and system PM.
++ * Host controller drivers can override them in their
++ * 'ufs_hba_variant_ops::init' callback.
++ *
++ * Default power saving mode is to keep UFS link in Hibern8 state
++ * and UFS device in sleep state.
++ */
++ hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
++ UFS_SLEEP_PWR_MODE,
++ UIC_LINK_HIBERN8_STATE);
++ hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
++ UFS_SLEEP_PWR_MODE,
++ UIC_LINK_HIBERN8_STATE);
++
+ err = ufshcd_hba_init(hba);
+ if (err)
+ goto out_error;
+@@ -10606,21 +10621,6 @@ int ufshcd_init(struct ufs_hba *hba, void __iomem *mmio_base, unsigned int irq)
+ goto free_tmf_queue;
+ }
+
+- /*
+- * Set the default power management level for runtime and system PM if
+- * not set by the host controller drivers.
+- * Default power saving mode is to keep UFS link in Hibern8 state
+- * and UFS device in sleep state.
+- */
+- if (!hba->rpm_lvl)
+- hba->rpm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
+- UFS_SLEEP_PWR_MODE,
+- UIC_LINK_HIBERN8_STATE);
+- if (!hba->spm_lvl)
+- hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
+- UFS_SLEEP_PWR_MODE,
+- UIC_LINK_HIBERN8_STATE);
+-
+ INIT_DELAYED_WORK(&hba->rpm_dev_flush_recheck_work, ufshcd_rpm_dev_flush_recheck_work);
+ INIT_DELAYED_WORK(&hba->ufs_rtc_update_work, ufshcd_rtc_work);
+
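Editorial note: because the rpm_lvl/spm_lvl assignment now runs before ufshcd_hba_init(), the levels become plain defaults that a host controller driver can overwrite from its 'ufs_hba_variant_ops::init' callback, which ufshcd_hba_init() invokes. A hypothetical variant init overriding the system-suspend level (driver name and chosen level are illustrative only):

	static int example_hba_init(struct ufs_hba *hba)
	{
		/* hypothetical: keep the link active across system suspend */
		hba->spm_lvl = ufs_get_desired_pm_lvl_for_dev_link_state(
					UFS_ACTIVE_PWR_MODE, UIC_LINK_ACTIVE_STATE);
		return 0;
	}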
+diff --git a/fs/afs/server.c b/fs/afs/server.c
+index 038f9d0ae3af8e..4504e16b458cc1 100644
+--- a/fs/afs/server.c
++++ b/fs/afs/server.c
+@@ -163,6 +163,8 @@ static struct afs_server *afs_install_server(struct afs_cell *cell,
+ rb_insert_color(&server->uuid_rb, &net->fs_servers);
+ hlist_add_head_rcu(&server->proc_link, &net->fs_proc);
+
++ afs_get_cell(cell, afs_cell_trace_get_server);
++
+ added_dup:
+ write_seqlock(&net->fs_addr_lock);
+ estate = rcu_dereference_protected(server->endpoint_state,
+@@ -442,6 +444,7 @@ static void afs_server_rcu(struct rcu_head *rcu)
+ atomic_read(&server->active), afs_server_trace_free);
+ afs_put_endpoint_state(rcu_access_pointer(server->endpoint_state),
+ afs_estate_trace_put_server);
++ afs_put_cell(server->cell, afs_cell_trace_put_server);
+ kfree(server);
+ }
+
+diff --git a/fs/afs/server_list.c b/fs/afs/server_list.c
+index 7e7e567a7f8a20..d20cd902ef949a 100644
+--- a/fs/afs/server_list.c
++++ b/fs/afs/server_list.c
+@@ -97,8 +97,8 @@ struct afs_server_list *afs_alloc_server_list(struct afs_volume *volume,
+ break;
+ if (j < slist->nr_servers) {
+ if (slist->servers[j].server == server) {
+- afs_put_server(volume->cell->net, server,
+- afs_server_trace_put_slist_isort);
++ afs_unuse_server(volume->cell->net, server,
++ afs_server_trace_put_slist_isort);
+ continue;
+ }
+
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 035ba52742a504..4db912f5623055 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -780,6 +780,43 @@ int nfs4_inode_return_delegation(struct inode *inode)
+ return 0;
+ }
+
++/**
++ * nfs4_inode_set_return_delegation_on_close - asynchronously return a delegation
++ * @inode: inode to process
++ *
++ * This routine is called to request that the delegation be returned as soon
++ * as the file is closed. If the file is already closed, the delegation is
++ * immediately returned.
++ */
++void nfs4_inode_set_return_delegation_on_close(struct inode *inode)
++{
++ struct nfs_delegation *delegation;
++ struct nfs_delegation *ret = NULL;
++
++ if (!inode)
++ return;
++ rcu_read_lock();
++ delegation = nfs4_get_valid_delegation(inode);
++ if (!delegation)
++ goto out;
++ spin_lock(&delegation->lock);
++ if (!delegation->inode)
++ goto out_unlock;
++ if (list_empty(&NFS_I(inode)->open_files) &&
++ !test_and_set_bit(NFS_DELEGATION_RETURNING, &delegation->flags)) {
++ /* Refcount matched in nfs_end_delegation_return() */
++ ret = nfs_get_delegation(delegation);
++ } else
++ set_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags);
++out_unlock:
++ spin_unlock(&delegation->lock);
++ if (ret)
++ nfs_clear_verifier_delegated(inode);
++out:
++ rcu_read_unlock();
++ nfs_end_delegation_return(inode, ret, 0);
++}
++
+ /**
+ * nfs4_inode_return_delegation_on_close - asynchronously return a delegation
+ * @inode: inode to process
+diff --git a/fs/nfs/delegation.h b/fs/nfs/delegation.h
+index 71524d34ed207c..8ff5ab9c5c2565 100644
+--- a/fs/nfs/delegation.h
++++ b/fs/nfs/delegation.h
+@@ -49,6 +49,7 @@ void nfs_inode_reclaim_delegation(struct inode *inode, const struct cred *cred,
+ unsigned long pagemod_limit, u32 deleg_type);
+ int nfs4_inode_return_delegation(struct inode *inode);
+ void nfs4_inode_return_delegation_on_close(struct inode *inode);
++void nfs4_inode_set_return_delegation_on_close(struct inode *inode);
+ int nfs_async_inode_return_delegation(struct inode *inode, const nfs4_stateid *stateid);
+ void nfs_inode_evict_delegation(struct inode *inode);
+
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index 90079ca134dd3c..c1f1b826888c98 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -56,6 +56,7 @@
+ #include <linux/uaccess.h>
+ #include <linux/atomic.h>
+
++#include "delegation.h"
+ #include "internal.h"
+ #include "iostat.h"
+ #include "pnfs.h"
+@@ -130,6 +131,20 @@ static void nfs_direct_truncate_request(struct nfs_direct_req *dreq,
+ dreq->count = req_start;
+ }
+
++static void nfs_direct_file_adjust_size_locked(struct inode *inode,
++ loff_t offset, size_t count)
++{
++ loff_t newsize = offset + (loff_t)count;
++ loff_t oldsize = i_size_read(inode);
++
++ if (newsize > oldsize) {
++ i_size_write(inode, newsize);
++ NFS_I(inode)->cache_validity &= ~NFS_INO_INVALID_SIZE;
++ trace_nfs_size_grow(inode, newsize);
++ nfs_inc_stats(inode, NFSIOS_EXTENDWRITE);
++ }
++}
++
+ /**
+ * nfs_swap_rw - NFS address space operation for swap I/O
+ * @iocb: target I/O control block
+@@ -272,6 +287,8 @@ static void nfs_direct_read_completion(struct nfs_pgio_header *hdr)
+ nfs_direct_count_bytes(dreq, hdr);
+ spin_unlock(&dreq->lock);
+
++ nfs_update_delegated_atime(dreq->inode);
++
+ while (!list_empty(&hdr->pages)) {
+ struct nfs_page *req = nfs_list_entry(hdr->pages.next);
+ struct page *page = req->wb_page;
+@@ -732,6 +749,7 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ struct nfs_direct_req *dreq = hdr->dreq;
+ struct nfs_commit_info cinfo;
+ struct nfs_page *req = nfs_list_entry(hdr->pages.next);
++ struct inode *inode = dreq->inode;
+ int flags = NFS_ODIRECT_DONE;
+
+ trace_nfs_direct_write_completion(dreq);
+@@ -753,6 +771,11 @@ static void nfs_direct_write_completion(struct nfs_pgio_header *hdr)
+ }
+ spin_unlock(&dreq->lock);
+
++ spin_lock(&inode->i_lock);
++ nfs_direct_file_adjust_size_locked(inode, dreq->io_start, dreq->count);
++ nfs_update_delegated_mtime_locked(dreq->inode);
++ spin_unlock(&inode->i_lock);
++
+ while (!list_empty(&hdr->pages)) {
+
+ req = nfs_list_entry(hdr->pages.next);
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 405f17e6e0b45b..e7bc99c69743cf 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -3898,8 +3898,11 @@ nfs4_atomic_open(struct inode *dir, struct nfs_open_context *ctx,
+
+ static void nfs4_close_context(struct nfs_open_context *ctx, int is_sync)
+ {
++ struct dentry *dentry = ctx->dentry;
+ if (ctx->state == NULL)
+ return;
++ if (dentry->d_flags & DCACHE_NFSFS_RENAMED)
++ nfs4_inode_set_return_delegation_on_close(d_inode(dentry));
+ if (is_sync)
+ nfs4_close_sync(ctx->state, _nfs4_ctx_to_openmode(ctx));
+ else
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index b2c78621da44a4..4388004a319d0c 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -619,7 +619,6 @@ static int ovl_link_up(struct ovl_copy_up_ctx *c)
+ err = PTR_ERR(upper);
+ if (!IS_ERR(upper)) {
+ err = ovl_do_link(ofs, ovl_dentry_upper(c->dentry), udir, upper);
+- dput(upper);
+
+ if (!err) {
+ /* Restore timestamps on parent (best effort) */
+@@ -627,6 +626,7 @@ static int ovl_link_up(struct ovl_copy_up_ctx *c)
+ ovl_dentry_set_upper_alias(c->dentry);
+ ovl_dentry_update_reval(c->dentry, upper);
+ }
++ dput(upper);
+ }
+ inode_unlock(udir);
+ if (err)
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index fa284b64b2de20..23b358a1271cd9 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -450,7 +450,7 @@
+ . = ALIGN((align)); \
+ .rodata : AT(ADDR(.rodata) - LOAD_OFFSET) { \
+ __start_rodata = .; \
+- *(.rodata) *(.rodata.*) \
++ *(.rodata) *(.rodata.*) *(.data.rel.ro*) \
+ SCHED_DATA \
+ RO_AFTER_INIT_DATA /* Read only after init */ \
+ . = ALIGN(8); \
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index b7f327ce797e5b..8f37c5dd52b215 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -196,10 +196,11 @@ struct gendisk {
+ unsigned int zone_capacity;
+ unsigned int last_zone_capacity;
+ unsigned long __rcu *conv_zones_bitmap;
+- unsigned int zone_wplugs_hash_bits;
+- spinlock_t zone_wplugs_lock;
++ unsigned int zone_wplugs_hash_bits;
++ atomic_t nr_zone_wplugs;
++ spinlock_t zone_wplugs_lock;
+ struct mempool_s *zone_wplugs_pool;
+- struct hlist_head *zone_wplugs_hash;
++ struct hlist_head *zone_wplugs_hash;
+ struct workqueue_struct *zone_wplugs_wq;
+ #endif /* CONFIG_BLK_DEV_ZONED */
+
+diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
+index cd6f9aae311fca..070b3b680209cd 100644
+--- a/include/linux/compiler-gcc.h
++++ b/include/linux/compiler-gcc.h
+@@ -52,18 +52,6 @@
+ */
+ #define barrier_before_unreachable() asm volatile("")
+
+-/*
+- * Mark a position in code as unreachable. This can be used to
+- * suppress control flow warnings after asm blocks that transfer
+- * control elsewhere.
+- */
+-#define unreachable() \
+- do { \
+- annotate_unreachable(); \
+- barrier_before_unreachable(); \
+- __builtin_unreachable(); \
+- } while (0)
+-
+ #if defined(CONFIG_ARCH_USE_BUILTIN_BSWAP)
+ #define __HAVE_BUILTIN_BSWAP32__
+ #define __HAVE_BUILTIN_BSWAP64__
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index 2d962dade9faee..b15911e201bf95 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -109,44 +109,21 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val,
+
+ /* Unreachable code */
+ #ifdef CONFIG_OBJTOOL
+-/*
+- * These macros help objtool understand GCC code flow for unreachable code.
+- * The __COUNTER__ based labels are a hack to make each instance of the macros
+- * unique, to convince GCC not to merge duplicate inline asm statements.
+- */
+-#define __stringify_label(n) #n
+-
+-#define __annotate_reachable(c) ({ \
+- asm volatile(__stringify_label(c) ":\n\t" \
+- ".pushsection .discard.reachable\n\t" \
+- ".long " __stringify_label(c) "b - .\n\t" \
+- ".popsection\n\t"); \
+-})
+-#define annotate_reachable() __annotate_reachable(__COUNTER__)
+-
+-#define __annotate_unreachable(c) ({ \
+- asm volatile(__stringify_label(c) ":\n\t" \
+- ".pushsection .discard.unreachable\n\t" \
+- ".long " __stringify_label(c) "b - .\n\t" \
+- ".popsection\n\t" : : "i" (c)); \
+-})
+-#define annotate_unreachable() __annotate_unreachable(__COUNTER__)
+-
+ /* Annotate a C jump table to allow objtool to follow the code flow */
+-#define __annotate_jump_table __section(".rodata..c_jump_table,\"a\",@progbits #")
+-
++#define __annotate_jump_table __section(".data.rel.ro.c_jump_table")
+ #else /* !CONFIG_OBJTOOL */
+-#define annotate_reachable()
+-#define annotate_unreachable()
+ #define __annotate_jump_table
+ #endif /* CONFIG_OBJTOOL */
+
+-#ifndef unreachable
+-# define unreachable() do { \
+- annotate_unreachable(); \
++/*
++ * Mark a position in code as unreachable. This can be used to
++ * suppress control flow warnings after asm blocks that transfer
++ * control elsewhere.
++ */
++#define unreachable() do { \
++ barrier_before_unreachable(); \
+ __builtin_unreachable(); \
+ } while (0)
+-#endif
+
+ /*
+ * KENTRY - kernel entry point
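+
+With the objtool annotations gone, unreachable() boils down to the compiler
+barrier plus __builtin_unreachable() shown above, and the matching
+.discard.unreachable handling is dropped from objtool further down. A typical
+use is after an asm statement the compiler cannot know is fatal; this is an
+x86-flavoured, illustrative sketch, not code from the patch:
+
+    static void __noreturn halt_example(void)
+    {
+            asm volatile("ud2");    /* traps; never returns */
+            /* Tell the compiler control cannot get here, so it neither
+             * warns about falling off a noreturn function nor emits
+             * code for the impossible path. */
+            unreachable();
+    }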
+diff --git a/include/linux/rcuref.h b/include/linux/rcuref.h
+index 2c8bfd0f1b6b3a..6322d8c1c6b429 100644
+--- a/include/linux/rcuref.h
++++ b/include/linux/rcuref.h
+@@ -71,27 +71,30 @@ static inline __must_check bool rcuref_get(rcuref_t *ref)
+ return rcuref_get_slowpath(ref);
+ }
+
+-extern __must_check bool rcuref_put_slowpath(rcuref_t *ref);
++extern __must_check bool rcuref_put_slowpath(rcuref_t *ref, unsigned int cnt);
+
+ /*
+ * Internal helper. Do not invoke directly.
+ */
+ static __always_inline __must_check bool __rcuref_put(rcuref_t *ref)
+ {
++ int cnt;
++
+ RCU_LOCKDEP_WARN(!rcu_read_lock_held() && preemptible(),
+ "suspicious rcuref_put_rcusafe() usage");
+ /*
+ * Unconditionally decrease the reference count. The saturation and
+ * dead zones provide enough tolerance for this.
+ */
+- if (likely(!atomic_add_negative_release(-1, &ref->refcnt)))
++ cnt = atomic_sub_return_release(1, &ref->refcnt);
++ if (likely(cnt >= 0))
+ return false;
+
+ /*
+ * Handle the last reference drop and cases inside the saturation
+ * and dead zones.
+ */
+- return rcuref_put_slowpath(ref);
++ return rcuref_put_slowpath(ref, cnt);
+ }
+
+ /**
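+
+The reason cnt is threaded through to the slowpath: the old slowpath re-read
+the counter with atomic_read(), so a concurrent get()/put() pair could make it
+judge a stale value rather than the result of this CPU's own decrement. The
+public API is unchanged; a minimal usage sketch, with an illustrative
+containing object:
+
+    struct obj {
+            rcuref_t ref;
+            struct rcu_head rcu;
+    };
+
+    static void obj_put(struct obj *o)
+    {
+            /* rcuref_put() returns true only for the final reference
+             * drop; the saturation/dead zones absorb racy over-puts. */
+            if (rcuref_put(&o->ref))
+                    kfree_rcu(o, rcu);
+    }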
+diff --git a/include/linux/socket.h b/include/linux/socket.h
+index d18cc47e89bd01..c3322eb3d6865d 100644
+--- a/include/linux/socket.h
++++ b/include/linux/socket.h
+@@ -392,6 +392,8 @@ struct ucred {
+
+ extern int move_addr_to_kernel(void __user *uaddr, int ulen, struct sockaddr_storage *kaddr);
+ extern int put_cmsg(struct msghdr*, int level, int type, int len, void *data);
++extern int put_cmsg_notrunc(struct msghdr *msg, int level, int type, int len,
++ void *data);
+
+ struct timespec64;
+ struct __kernel_timespec;
+diff --git a/include/linux/sunrpc/sched.h b/include/linux/sunrpc/sched.h
+index fec1e8a1570c36..eac57914dcf320 100644
+--- a/include/linux/sunrpc/sched.h
++++ b/include/linux/sunrpc/sched.h
+@@ -158,7 +158,6 @@ enum {
+ RPC_TASK_NEED_XMIT,
+ RPC_TASK_NEED_RECV,
+ RPC_TASK_MSG_PIN_WAIT,
+- RPC_TASK_SIGNALLED,
+ };
+
+ #define rpc_test_and_set_running(t) \
+@@ -171,7 +170,7 @@ enum {
+
+ #define RPC_IS_ACTIVATED(t) test_bit(RPC_TASK_ACTIVE, &(t)->tk_runstate)
+
+-#define RPC_SIGNALLED(t) test_bit(RPC_TASK_SIGNALLED, &(t)->tk_runstate)
++#define RPC_SIGNALLED(t) (READ_ONCE(task->tk_rpc_status) == -ERESTARTSYS)
+
+ /*
+ * Task priorities.
+diff --git a/include/net/ip.h b/include/net/ip.h
+index fe4f8543811433..bd201278c55a58 100644
+--- a/include/net/ip.h
++++ b/include/net/ip.h
+@@ -424,6 +424,11 @@ int ip_decrease_ttl(struct iphdr *iph)
+ return --iph->ttl;
+ }
+
++static inline dscp_t ip4h_dscp(const struct iphdr *ip4h)
++{
++ return inet_dsfield_to_dscp(ip4h->tos);
++}
++
+ static inline int ip_mtu_locked(const struct dst_entry *dst)
+ {
+ const struct rtable *rt = dst_rtable(dst);
+diff --git a/include/net/route.h b/include/net/route.h
+index da34b6fa9862dc..8a11d19f897bb2 100644
+--- a/include/net/route.h
++++ b/include/net/route.h
+@@ -208,12 +208,13 @@ int ip_route_use_hint(struct sk_buff *skb, __be32 dst, __be32 src,
+ const struct sk_buff *hint);
+
+ static inline int ip_route_input(struct sk_buff *skb, __be32 dst, __be32 src,
+- u8 tos, struct net_device *devin)
++ dscp_t dscp, struct net_device *devin)
+ {
+ int err;
+
+ rcu_read_lock();
+- err = ip_route_input_noref(skb, dst, src, tos, devin);
++ err = ip_route_input_noref(skb, dst, src, inet_dscp_to_dsfield(dscp),
++ devin);
+ if (!err) {
+ skb_dst_force(skb);
+ if (!skb_dst(skb))
+diff --git a/include/sound/cs35l56.h b/include/sound/cs35l56.h
+index 3dc7a1551ac350..5d653a3491d073 100644
+--- a/include/sound/cs35l56.h
++++ b/include/sound/cs35l56.h
+@@ -12,6 +12,7 @@
+ #include <linux/firmware/cirrus/cs_dsp.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/regmap.h>
++#include <linux/spi/spi.h>
+ #include <sound/cs-amp-lib.h>
+
+ #define CS35L56_DEVID 0x0000000
+@@ -61,6 +62,7 @@
+ #define CS35L56_IRQ1_MASK_8 0x000E0AC
+ #define CS35L56_IRQ1_MASK_18 0x000E0D4
+ #define CS35L56_IRQ1_MASK_20 0x000E0DC
++#define CS35L56_DSP_MBOX_1_RAW 0x0011000
+ #define CS35L56_DSP_VIRTUAL1_MBOX_1 0x0011020
+ #define CS35L56_DSP_VIRTUAL1_MBOX_2 0x0011024
+ #define CS35L56_DSP_VIRTUAL1_MBOX_3 0x0011028
+@@ -224,6 +226,7 @@
+ #define CS35L56_HALO_STATE_SHUTDOWN 1
+ #define CS35L56_HALO_STATE_BOOT_DONE 2
+
++#define CS35L56_MBOX_CMD_PING 0x0A000000
+ #define CS35L56_MBOX_CMD_AUDIO_PLAY 0x0B000001
+ #define CS35L56_MBOX_CMD_AUDIO_PAUSE 0x0B000002
+ #define CS35L56_MBOX_CMD_AUDIO_REINIT 0x0B000003
+@@ -254,6 +257,16 @@
+ #define CS35L56_NUM_BULK_SUPPLIES 3
+ #define CS35L56_NUM_DSP_REGIONS 5
+
++/* Additional margin for SYSTEM_RESET to control port ready on SPI */
++#define CS35L56_SPI_RESET_TO_PORT_READY_US (CS35L56_CONTROL_PORT_READY_US + 2500)
++
++struct cs35l56_spi_payload {
++ __be32 addr;
++ __be16 pad;
++ __be32 value;
++} __packed;
++static_assert(sizeof(struct cs35l56_spi_payload) == 10);
++
+ struct cs35l56_base {
+ struct device *dev;
+ struct regmap *regmap;
+@@ -269,6 +282,7 @@ struct cs35l56_base {
+ s8 cal_index;
+ struct cirrus_amp_cal_data cal_data;
+ struct gpio_desc *reset_gpio;
++ struct cs35l56_spi_payload *spi_payload_buf;
+ };
+
+ static inline bool cs35l56_is_otp_register(unsigned int reg)
+@@ -276,6 +290,23 @@ static inline bool cs35l56_is_otp_register(unsigned int reg)
+ return (reg >> 16) == 3;
+ }
+
++static inline int cs35l56_init_config_for_spi(struct cs35l56_base *cs35l56,
++ struct spi_device *spi)
++{
++ cs35l56->spi_payload_buf = devm_kzalloc(&spi->dev,
++ sizeof(*cs35l56->spi_payload_buf),
++ GFP_KERNEL | GFP_DMA);
++ if (!cs35l56->spi_payload_buf)
++ return -ENOMEM;
++
++ return 0;
++}
++
++static inline bool cs35l56_is_spi(struct cs35l56_base *cs35l56)
++{
++ return IS_ENABLED(CONFIG_SPI_MASTER) && !!cs35l56->spi_payload_buf;
++}
++
+ extern const struct regmap_config cs35l56_regmap_i2c;
+ extern const struct regmap_config cs35l56_regmap_spi;
+ extern const struct regmap_config cs35l56_regmap_sdw;
+diff --git a/include/trace/events/afs.h b/include/trace/events/afs.h
+index 9a75590227f262..3dddfc6abf0ee3 100644
+--- a/include/trace/events/afs.h
++++ b/include/trace/events/afs.h
+@@ -173,6 +173,7 @@ enum yfs_cm_operation {
+ EM(afs_cell_trace_get_queue_dns, "GET q-dns ") \
+ EM(afs_cell_trace_get_queue_manage, "GET q-mng ") \
+ EM(afs_cell_trace_get_queue_new, "GET q-new ") \
++ EM(afs_cell_trace_get_server, "GET server") \
+ EM(afs_cell_trace_get_vol, "GET vol ") \
+ EM(afs_cell_trace_insert, "INSERT ") \
+ EM(afs_cell_trace_manage, "MANAGE ") \
+@@ -180,6 +181,7 @@ enum yfs_cm_operation {
+ EM(afs_cell_trace_put_destroy, "PUT destry") \
+ EM(afs_cell_trace_put_queue_work, "PUT q-work") \
+ EM(afs_cell_trace_put_queue_fail, "PUT q-fail") \
++ EM(afs_cell_trace_put_server, "PUT server") \
+ EM(afs_cell_trace_put_vol, "PUT vol ") \
+ EM(afs_cell_trace_see_source, "SEE source") \
+ EM(afs_cell_trace_see_ws, "SEE ws ") \
+diff --git a/include/trace/events/sunrpc.h b/include/trace/events/sunrpc.h
+index 5e849521668954..5fe852bd31abc9 100644
+--- a/include/trace/events/sunrpc.h
++++ b/include/trace/events/sunrpc.h
+@@ -360,8 +360,7 @@ TRACE_EVENT(rpc_request,
+ { (1UL << RPC_TASK_ACTIVE), "ACTIVE" }, \
+ { (1UL << RPC_TASK_NEED_XMIT), "NEED_XMIT" }, \
+ { (1UL << RPC_TASK_NEED_RECV), "NEED_RECV" }, \
+- { (1UL << RPC_TASK_MSG_PIN_WAIT), "MSG_PIN_WAIT" }, \
+- { (1UL << RPC_TASK_SIGNALLED), "SIGNALLED" })
++ { (1UL << RPC_TASK_MSG_PIN_WAIT), "MSG_PIN_WAIT" })
+
+ DECLARE_EVENT_CLASS(rpc_task_running,
+
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 3974c417fe2644..f32311f6411338 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -334,7 +334,9 @@ static int io_sendmsg_copy_hdr(struct io_kiocb *req,
+ if (unlikely(ret))
+ return ret;
+
+- return __get_compat_msghdr(&iomsg->msg, &cmsg, NULL);
++ ret = __get_compat_msghdr(&iomsg->msg, &cmsg, NULL);
++ sr->msg_control = iomsg->msg.msg_control_user;
++ return ret;
+ }
+ #endif
+
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 501d8c2fedff40..a0e1d2124727e1 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -4957,7 +4957,7 @@ static struct perf_event_pmu_context *
+ find_get_pmu_context(struct pmu *pmu, struct perf_event_context *ctx,
+ struct perf_event *event)
+ {
+- struct perf_event_pmu_context *new = NULL, *epc;
++ struct perf_event_pmu_context *new = NULL, *pos = NULL, *epc;
+ void *task_ctx_data = NULL;
+
+ if (!ctx->task) {
+@@ -5014,12 +5014,19 @@ find_get_pmu_context(struct pmu *pmu, struct perf_event_context *ctx,
+ atomic_inc(&epc->refcount);
+ goto found_epc;
+ }
++ /* Make sure the pmu_ctx_list is sorted by PMU type: */
++ if (!pos && epc->pmu->type > pmu->type)
++ pos = epc;
+ }
+
+ epc = new;
+ new = NULL;
+
+- list_add(&epc->pmu_ctx_entry, &ctx->pmu_ctx_list);
++ if (!pos)
++ list_add_tail(&epc->pmu_ctx_entry, &ctx->pmu_ctx_list);
++ else
++ list_add(&epc->pmu_ctx_entry, pos->pmu_ctx_entry.prev);
++
+ epc->ctx = ctx;
+
+ found_epc:
+@@ -5969,14 +5976,15 @@ static int _perf_event_period(struct perf_event *event, u64 value)
+ if (!value)
+ return -EINVAL;
+
+- if (event->attr.freq && value > sysctl_perf_event_sample_rate)
+- return -EINVAL;
+-
+- if (perf_event_check_period(event, value))
+- return -EINVAL;
+-
+- if (!event->attr.freq && (value & (1ULL << 63)))
+- return -EINVAL;
++ if (event->attr.freq) {
++ if (value > sysctl_perf_event_sample_rate)
++ return -EINVAL;
++ } else {
++ if (perf_event_check_period(event, value))
++ return -EINVAL;
++ if (value & (1ULL << 63))
++ return -EINVAL;
++ }
+
+ event_function_call(event, __perf_event_period, &value);
+
+@@ -8233,7 +8241,8 @@ void perf_event_exec(void)
+
+ perf_event_enable_on_exec(ctx);
+ perf_event_remove_on_exec(ctx);
+- perf_iterate_ctx(ctx, perf_event_addr_filters_exec, NULL, true);
++ scoped_guard(rcu)
++ perf_iterate_ctx(ctx, perf_event_addr_filters_exec, NULL, true);
+
+ perf_unpin_context(ctx);
+ put_ctx(ctx);
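+
+The pos bookkeeping in the first hunk is a plain sorted insert on a
+struct list_head: remember the first element whose key exceeds the new one,
+then link in front of it, or append if nothing is larger, keeping
+ctx->pmu_ctx_list ordered by pmu->type. The same pattern in isolation, with
+illustrative types and names:
+
+    struct item {
+            int key;
+            struct list_head entry;
+    };
+
+    static void insert_sorted(struct list_head *head, struct item *new)
+    {
+            struct item *it, *pos = NULL;
+
+            list_for_each_entry(it, head, entry) {
+                    if (it->key > new->key) {
+                            pos = it;       /* first larger element */
+                            break;
+                    }
+            }
+
+            if (!pos)
+                    list_add_tail(&new->entry, head);       /* largest: append */
+            else
+                    list_add(&new->entry, pos->entry.prev); /* link before pos */
+    }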
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index 4b52cb2ae6d620..a0e0676f5d8bbe 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -489,6 +489,11 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
+ if (ret <= 0)
+ goto put_old;
+
++ if (is_zero_page(old_page)) {
++ ret = -EINVAL;
++ goto put_old;
++ }
++
+ if (WARN(!is_register && PageCompound(old_page),
+ "uprobe unregister should never work on compound page\n")) {
+ ret = -EINVAL;
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index c72356836eb628..9803f10a082a7b 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -7229,7 +7229,7 @@ void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
+ #if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
+ int __sched __cond_resched(void)
+ {
+- if (should_resched(0)) {
++ if (should_resched(0) && !irqs_disabled()) {
+ preempt_schedule_common();
+ return 1;
+ }
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index aa57ae3eb1ff5e..325fd5b9d47152 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -3047,7 +3047,6 @@ static struct task_struct *pick_task_scx(struct rq *rq)
+ {
+ struct task_struct *prev = rq->curr;
+ struct task_struct *p;
+- bool prev_on_scx = prev->sched_class == &ext_sched_class;
+ bool keep_prev = rq->scx.flags & SCX_RQ_BAL_KEEP;
+ bool kick_idle = false;
+
+@@ -3067,14 +3066,18 @@ static struct task_struct *pick_task_scx(struct rq *rq)
+ * if pick_task_scx() is called without preceding balance_scx().
+ */
+ if (unlikely(rq->scx.flags & SCX_RQ_BAL_PENDING)) {
+- if (prev_on_scx) {
++ if (prev->scx.flags & SCX_TASK_QUEUED) {
+ keep_prev = true;
+ } else {
+ keep_prev = false;
+ kick_idle = true;
+ }
+- } else if (unlikely(keep_prev && !prev_on_scx)) {
+- /* only allowed during transitions */
++ } else if (unlikely(keep_prev &&
++ prev->sched_class != &ext_sched_class)) {
++ /*
++ * Can happen while enabling as SCX_RQ_BAL_PENDING assertion is
++ * conditional on scx_enabled() and may have been skipped.
++ */
+ WARN_ON_ONCE(scx_ops_enable_state() == SCX_OPS_ENABLED);
+ keep_prev = false;
+ }
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 71cc1bbfe9aa3e..dbd375f28ee098 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -541,6 +541,7 @@ static int function_stat_show(struct seq_file *m, void *v)
+ static struct trace_seq s;
+ unsigned long long avg;
+ unsigned long long stddev;
++ unsigned long long stddev_denom;
+ #endif
+ mutex_lock(&ftrace_profile_lock);
+
+@@ -562,23 +563,19 @@ static int function_stat_show(struct seq_file *m, void *v)
+ #ifdef CONFIG_FUNCTION_GRAPH_TRACER
+ seq_puts(m, " ");
+
+- /* Sample standard deviation (s^2) */
+- if (rec->counter <= 1)
+- stddev = 0;
+- else {
+- /*
+- * Apply Welford's method:
+- * s^2 = 1 / (n * (n-1)) * (n * \Sum (x_i)^2 - (\Sum x_i)^2)
+- */
++ /*
++ * Variance formula:
++ * s^2 = 1 / (n * (n-1)) * (n * \Sum (x_i)^2 - (\Sum x_i)^2)
++ * Maybe Welford's method is better here?
++ * Divide only by 1000 for ns^2 -> us^2 conversion.
++ * trace_print_graph_duration will divide by 1000 again.
++ */
++ stddev = 0;
++ stddev_denom = rec->counter * (rec->counter - 1) * 1000;
++ if (stddev_denom) {
+ stddev = rec->counter * rec->time_squared -
+ rec->time * rec->time;
+-
+- /*
+- * Divide only 1000 for ns^2 -> us^2 conversion.
+- * trace_print_graph_duration will divide 1000 again.
+- */
+- stddev = div64_ul(stddev,
+- rec->counter * (rec->counter - 1) * 1000);
++ stddev = div64_ul(stddev, stddev_denom);
+ }
+
+ trace_seq_init(&s);
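+
+The rework folds the old rec->counter <= 1 special case into a single test:
+the denominator n * (n - 1) is zero exactly when n is 0 or 1 (for n == 0 the
+multiplication by zero makes the unsigned wraparound of n - 1 harmless).
+Standalone, and ignoring the extra x1000 unit scaling, the computation is:
+
+    /* s^2 = (n * sum(x_i^2) - (sum(x_i))^2) / (n * (n - 1)) */
+    static unsigned long long sample_variance(unsigned long long n,
+                                              unsigned long long sum,
+                                              unsigned long long sum_sq)
+    {
+            unsigned long long denom = n * (n - 1);
+
+            if (!denom)     /* n <= 1: variance is undefined, report 0 */
+                    return 0;
+
+            return div64_ul(n * sum_sq - sum * sum, denom);
+    }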
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 5f9119eb7c67f6..31f5ad322fab0a 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -6652,27 +6652,27 @@ static int event_hist_trigger_parse(struct event_command *cmd_ops,
+ if (existing_hist_update_only(glob, trigger_data, file))
+ goto out_free;
+
+- ret = event_trigger_register(cmd_ops, file, glob, trigger_data);
+- if (ret < 0)
+- goto out_free;
++ if (!get_named_trigger_data(trigger_data)) {
+
+- if (get_named_trigger_data(trigger_data))
+- goto enable;
++ ret = create_actions(hist_data);
++ if (ret)
++ goto out_free;
+
+- ret = create_actions(hist_data);
+- if (ret)
+- goto out_unreg;
++ if (has_hist_vars(hist_data) || hist_data->n_var_refs) {
++ ret = save_hist_vars(hist_data);
++ if (ret)
++ goto out_free;
++ }
+
+- if (has_hist_vars(hist_data) || hist_data->n_var_refs) {
+- ret = save_hist_vars(hist_data);
++ ret = tracing_map_init(hist_data->map);
+ if (ret)
+- goto out_unreg;
++ goto out_free;
+ }
+
+- ret = tracing_map_init(hist_data->map);
+- if (ret)
+- goto out_unreg;
+-enable:
++ ret = event_trigger_register(cmd_ops, file, glob, trigger_data);
++ if (ret < 0)
++ goto out_free;
++
+ ret = hist_trigger_enable(trigger_data, file);
+ if (ret)
+ goto out_unreg;
+diff --git a/lib/rcuref.c b/lib/rcuref.c
+index 97f300eca927ce..5bd726b71e3936 100644
+--- a/lib/rcuref.c
++++ b/lib/rcuref.c
+@@ -220,6 +220,7 @@ EXPORT_SYMBOL_GPL(rcuref_get_slowpath);
+ /**
+ * rcuref_put_slowpath - Slowpath of __rcuref_put()
+ * @ref: Pointer to the reference count
++ * @cnt: The resulting value of the fastpath decrement
+ *
+ * Invoked when the reference count is outside of the valid zone.
+ *
+@@ -233,10 +234,8 @@ EXPORT_SYMBOL_GPL(rcuref_get_slowpath);
+ * with a concurrent get()/put() pair. Caller is not allowed to
+ * deconstruct the protected object.
+ */
+-bool rcuref_put_slowpath(rcuref_t *ref)
++bool rcuref_put_slowpath(rcuref_t *ref, unsigned int cnt)
+ {
+- unsigned int cnt = atomic_read(&ref->refcnt);
+-
+ /* Did this drop the last reference? */
+ if (likely(cnt == RCUREF_NOREF)) {
+ /*
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 27b4c4a2ba1fdd..728a5ce9b50587 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -636,7 +636,8 @@ void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
+ test_bit(FLAG_HOLD_HCI_CONN, &chan->flags))
+ hci_conn_hold(conn->hcon);
+
+- list_add(&chan->list, &conn->chan_l);
++ /* Append to the list since the order matters for ECRED */
++ list_add_tail(&chan->list, &conn->chan_l);
+ }
+
+ void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
+@@ -3776,7 +3777,11 @@ static void l2cap_ecred_rsp_defer(struct l2cap_chan *chan, void *data)
+ struct l2cap_ecred_conn_rsp *rsp_flex =
+ container_of(&rsp->pdu.rsp, struct l2cap_ecred_conn_rsp, hdr);
+
+- if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags))
++ /* Check if channel for outgoing connection or if it wasn't deferred
++ * since in those cases it must be skipped.
++ */
++ if (test_bit(FLAG_ECRED_CONN_REQ_SENT, &chan->flags) ||
++ !test_and_clear_bit(FLAG_DEFER_SETUP, &chan->flags))
+ return;
+
+ /* Reset ident so only one response is sent */
+diff --git a/net/bridge/br_netfilter_hooks.c b/net/bridge/br_netfilter_hooks.c
+index 1d458e9da660c9..17a5f5923d615d 100644
+--- a/net/bridge/br_netfilter_hooks.c
++++ b/net/bridge/br_netfilter_hooks.c
+@@ -370,9 +370,9 @@ br_nf_ipv4_daddr_was_changed(const struct sk_buff *skb,
+ */
+ static int br_nf_pre_routing_finish(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+- struct net_device *dev = skb->dev, *br_indev;
+- struct iphdr *iph = ip_hdr(skb);
+ struct nf_bridge_info *nf_bridge = nf_bridge_info_get(skb);
++ struct net_device *dev = skb->dev, *br_indev;
++ const struct iphdr *iph = ip_hdr(skb);
+ struct rtable *rt;
+ int err;
+
+@@ -390,7 +390,9 @@ static int br_nf_pre_routing_finish(struct net *net, struct sock *sk, struct sk_
+ }
+ nf_bridge->in_prerouting = 0;
+ if (br_nf_ipv4_daddr_was_changed(skb, nf_bridge)) {
+- if ((err = ip_route_input(skb, iph->daddr, iph->saddr, iph->tos, dev))) {
++ err = ip_route_input(skb, iph->daddr, iph->saddr,
++ ip4h_dscp(iph), dev);
++ if (err) {
+ struct in_device *in_dev = __in_dev_get_rcu(dev);
+
+ /* If err equals -EHOSTUNREACH the error is due to a
+diff --git a/net/core/gro.c b/net/core/gro.c
+index 78b320b6317445..0ad549b07e0399 100644
+--- a/net/core/gro.c
++++ b/net/core/gro.c
+@@ -653,6 +653,7 @@ static void napi_reuse_skb(struct napi_struct *napi, struct sk_buff *skb)
+ skb->pkt_type = PACKET_HOST;
+
+ skb->encapsulation = 0;
++ skb->ip_summed = CHECKSUM_NONE;
+ skb_shinfo(skb)->gso_type = 0;
+ skb_shinfo(skb)->gso_size = 0;
+ if (unlikely(skb->slow_gro)) {
+diff --git a/net/core/scm.c b/net/core/scm.c
+index 4f6a14babe5ae3..733c0cbd393d24 100644
+--- a/net/core/scm.c
++++ b/net/core/scm.c
+@@ -282,6 +282,16 @@ int put_cmsg(struct msghdr * msg, int level, int type, int len, void *data)
+ }
+ EXPORT_SYMBOL(put_cmsg);
+
++int put_cmsg_notrunc(struct msghdr *msg, int level, int type, int len,
++ void *data)
++{
++ /* Don't produce truncated CMSGs */
++ if (!msg->msg_control || msg->msg_controllen < CMSG_LEN(len))
++ return -ETOOSMALL;
++
++ return put_cmsg(msg, level, type, len, data);
++}
++
+ void put_cmsg_scm_timestamping64(struct msghdr *msg, struct scm_timestamping_internal *tss_internal)
+ {
+ struct scm_timestamping64 tss;
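+
+The new helper is consumed by the devmem TCP hunks below. Seen from userspace,
+a control buffer too small for the cmsg now surfaces as an error instead of a
+truncated payload with MSG_CTRUNC set. A hedged sketch of a correctly sized
+receiver, assuming struct dmabuf_cmsg from the uapi headers; fd and iovec
+setup are omitted:
+
+    char ctrl[CMSG_SPACE(sizeof(struct dmabuf_cmsg))];
+    struct msghdr msg = {
+            .msg_control = ctrl,
+            .msg_controllen = sizeof(ctrl),
+    };
+
+    /* Previously an undersized ctrl[] yielded a partial cmsg plus
+     * MSG_CTRUNC; with put_cmsg_notrunc() the kernel path fails with
+     * ETOOSMALL instead of delivering truncated metadata. */
+    ssize_t n = recvmsg(fd, &msg, 0);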
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index 61a950f13a91c7..f220306731dac8 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -6127,11 +6127,11 @@ void skb_scrub_packet(struct sk_buff *skb, bool xnet)
+ skb->offload_fwd_mark = 0;
+ skb->offload_l3_fwd_mark = 0;
+ #endif
++ ipvs_reset(skb);
+
+ if (!xnet)
+ return;
+
+- ipvs_reset(skb);
+ skb->mark = 0;
+ skb_clear_tstamp(skb);
+ }
+diff --git a/net/core/sysctl_net_core.c b/net/core/sysctl_net_core.c
+index 5dd54a81339806..47e2743ffe2289 100644
+--- a/net/core/sysctl_net_core.c
++++ b/net/core/sysctl_net_core.c
+@@ -34,6 +34,7 @@ static int min_sndbuf = SOCK_MIN_SNDBUF;
+ static int min_rcvbuf = SOCK_MIN_RCVBUF;
+ static int max_skb_frags = MAX_SKB_FRAGS;
+ static int min_mem_pcpu_rsv = SK_MEMORY_PCPU_RESERVE;
++static int netdev_budget_usecs_min = 2 * USEC_PER_SEC / HZ;
+
+ static int net_msg_warn; /* Unused, but still a sysctl */
+
+@@ -580,7 +581,7 @@ static struct ctl_table net_core_table[] = {
+ .maxlen = sizeof(unsigned int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+- .extra1 = SYSCTL_ZERO,
++ .extra1 = &netdev_budget_usecs_min,
+ },
+ {
+ .procname = "fb_tunnels_only_for_init_net",
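+
+The new floor is two scheduler ticks expressed in microseconds: with HZ=1000,
+2 * 1000000 / 1000 = 2000 usec; with HZ=250 it is 8000 usec. Previously the
+SYSCTL_ZERO bound allowed net.core.netdev_budget_usecs to be set to 0,
+presumably starving NAPI softirq processing of any time budget; the
+tick-derived minimum keeps the setting above what the accounting can resolve.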
+diff --git a/net/ipv4/icmp.c b/net/ipv4/icmp.c
+index f45bc187a92a7e..b8111ec651b545 100644
+--- a/net/ipv4/icmp.c
++++ b/net/ipv4/icmp.c
+@@ -477,13 +477,11 @@ static struct net_device *icmp_get_route_lookup_dev(struct sk_buff *skb)
+ return route_lookup_dev;
+ }
+
+-static struct rtable *icmp_route_lookup(struct net *net,
+- struct flowi4 *fl4,
++static struct rtable *icmp_route_lookup(struct net *net, struct flowi4 *fl4,
+ struct sk_buff *skb_in,
+- const struct iphdr *iph,
+- __be32 saddr, u8 tos, u32 mark,
+- int type, int code,
+- struct icmp_bxm *param)
++ const struct iphdr *iph, __be32 saddr,
++ dscp_t dscp, u32 mark, int type,
++ int code, struct icmp_bxm *param)
+ {
+ struct net_device *route_lookup_dev;
+ struct dst_entry *dst, *dst2;
+@@ -497,7 +495,7 @@ static struct rtable *icmp_route_lookup(struct net *net,
+ fl4->saddr = saddr;
+ fl4->flowi4_mark = mark;
+ fl4->flowi4_uid = sock_net_uid(net, NULL);
+- fl4->flowi4_tos = tos & INET_DSCP_MASK;
++ fl4->flowi4_tos = inet_dscp_to_dsfield(dscp);
+ fl4->flowi4_proto = IPPROTO_ICMP;
+ fl4->fl4_icmp_type = type;
+ fl4->fl4_icmp_code = code;
+@@ -549,7 +547,7 @@ static struct rtable *icmp_route_lookup(struct net *net,
+ orefdst = skb_in->_skb_refdst; /* save old refdst */
+ skb_dst_set(skb_in, NULL);
+ err = ip_route_input(skb_in, fl4_dec.daddr, fl4_dec.saddr,
+- tos, rt2->dst.dev);
++ dscp, rt2->dst.dev);
+
+ dst_release(&rt2->dst);
+ rt2 = skb_rtable(skb_in);
+@@ -745,8 +743,9 @@ void __icmp_send(struct sk_buff *skb_in, int type, int code, __be32 info,
+ ipc.opt = &icmp_param.replyopts.opt;
+ ipc.sockc.mark = mark;
+
+- rt = icmp_route_lookup(net, &fl4, skb_in, iph, saddr, tos, mark,
+- type, code, &icmp_param);
++ rt = icmp_route_lookup(net, &fl4, skb_in, iph, saddr,
++ inet_dsfield_to_dscp(tos), mark, type, code,
++ &icmp_param);
+ if (IS_ERR(rt))
+ goto out_unlock;
+
+diff --git a/net/ipv4/ip_options.c b/net/ipv4/ip_options.c
+index 68aedb8877b9f4..81e86e5defee6b 100644
+--- a/net/ipv4/ip_options.c
++++ b/net/ipv4/ip_options.c
+@@ -617,7 +617,8 @@ int ip_options_rcv_srr(struct sk_buff *skb, struct net_device *dev)
+
+ orefdst = skb->_skb_refdst;
+ skb_dst_set(skb, NULL);
+- err = ip_route_input(skb, nexthop, iph->saddr, iph->tos, dev);
++ err = ip_route_input(skb, nexthop, iph->saddr, ip4h_dscp(iph),
++ dev);
+ rt2 = skb_rtable(skb);
+ if (err || (rt2->rt_type != RTN_UNICAST && rt2->rt_type != RTN_LOCAL)) {
+ skb_dst_drop(skb);
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index 68cb6a966b18b8..b731a4a8f2b0d5 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -2456,14 +2456,12 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
+ */
+ memset(&dmabuf_cmsg, 0, sizeof(dmabuf_cmsg));
+ dmabuf_cmsg.frag_size = copy;
+- err = put_cmsg(msg, SOL_SOCKET, SO_DEVMEM_LINEAR,
+- sizeof(dmabuf_cmsg), &dmabuf_cmsg);
+- if (err || msg->msg_flags & MSG_CTRUNC) {
+- msg->msg_flags &= ~MSG_CTRUNC;
+- if (!err)
+- err = -ETOOSMALL;
++ err = put_cmsg_notrunc(msg, SOL_SOCKET,
++ SO_DEVMEM_LINEAR,
++ sizeof(dmabuf_cmsg),
++ &dmabuf_cmsg);
++ if (err)
+ goto out;
+- }
+
+ sent += copy;
+
+@@ -2517,16 +2515,12 @@ static int tcp_recvmsg_dmabuf(struct sock *sk, const struct sk_buff *skb,
+ offset += copy;
+ remaining_len -= copy;
+
+- err = put_cmsg(msg, SOL_SOCKET,
+- SO_DEVMEM_DMABUF,
+- sizeof(dmabuf_cmsg),
+- &dmabuf_cmsg);
+- if (err || msg->msg_flags & MSG_CTRUNC) {
+- msg->msg_flags &= ~MSG_CTRUNC;
+- if (!err)
+- err = -ETOOSMALL;
++ err = put_cmsg_notrunc(msg, SOL_SOCKET,
++ SO_DEVMEM_DMABUF,
++ sizeof(dmabuf_cmsg),
++ &dmabuf_cmsg);
++ if (err)
+ goto out;
+- }
+
+ atomic_long_inc(&niov->pp_ref_count);
+ tcp_xa_pool.netmems[tcp_xa_pool.idx++] = skb_frag_netmem(frag);
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index bb1fe1ba867ac3..f3e4fc9572196c 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -806,12 +806,6 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+
+ /* In sequence, PAWS is OK. */
+
+- /* TODO: We probably should defer ts_recent change once
+- * we take ownership of @req.
+- */
+- if (tmp_opt.saw_tstamp && !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt))
+- WRITE_ONCE(req->ts_recent, tmp_opt.rcv_tsval);
+-
+ if (TCP_SKB_CB(skb)->seq == tcp_rsk(req)->rcv_isn) {
+ /* Truncate SYN, it is out of window starting
+ at tcp_rsk(req)->rcv_isn + 1. */
+@@ -860,6 +854,10 @@ struct sock *tcp_check_req(struct sock *sk, struct sk_buff *skb,
+ if (!child)
+ goto listen_overflow;
+
++ if (own_req && tmp_opt.saw_tstamp &&
++ !after(TCP_SKB_CB(skb)->seq, tcp_rsk(req)->rcv_nxt))
++ tcp_sk(child)->rx_opt.ts_recent = tmp_opt.rcv_tsval;
++
+ if (own_req && rsk_drop_req(req)) {
+ reqsk_queue_removed(&inet_csk(req->rsk_listener)->icsk_accept_queue, req);
+ inet_csk_reqsk_queue_drop_and_put(req->rsk_listener, req);
+diff --git a/net/ipv6/ip6_tunnel.c b/net/ipv6/ip6_tunnel.c
+index b60e13c42bcacd..48fd53b9897265 100644
+--- a/net/ipv6/ip6_tunnel.c
++++ b/net/ipv6/ip6_tunnel.c
+@@ -630,8 +630,8 @@ ip4ip6_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ }
+ skb_dst_set(skb2, &rt->dst);
+ } else {
+- if (ip_route_input(skb2, eiph->daddr, eiph->saddr, eiph->tos,
+- skb2->dev) ||
++ if (ip_route_input(skb2, eiph->daddr, eiph->saddr,
++ ip4h_dscp(eiph), skb2->dev) ||
+ skb_dst(skb2)->dev->type != ARPHRD_TUNNEL6)
+ goto out;
+ }
+diff --git a/net/ipv6/rpl_iptunnel.c b/net/ipv6/rpl_iptunnel.c
+index 0ac4283acdf20c..7c05ac846646f3 100644
+--- a/net/ipv6/rpl_iptunnel.c
++++ b/net/ipv6/rpl_iptunnel.c
+@@ -262,10 +262,18 @@ static int rpl_input(struct sk_buff *skb)
+ {
+ struct dst_entry *orig_dst = skb_dst(skb);
+ struct dst_entry *dst = NULL;
++ struct lwtunnel_state *lwtst;
+ struct rpl_lwt *rlwt;
+ int err;
+
+- rlwt = rpl_lwt_lwtunnel(orig_dst->lwtstate);
++ /* We cannot dereference "orig_dst" once ip6_route_input() or
++ * skb_dst_drop() is called. However, in order to detect a dst loop, we
++ * need the address of its lwtstate. So, save the address of lwtstate
++ * now and use it later as a comparison.
++ */
++ lwtst = orig_dst->lwtstate;
++
++ rlwt = rpl_lwt_lwtunnel(lwtst);
+
+ local_bh_disable();
+ dst = dst_cache_get(&rlwt->cache);
+@@ -280,7 +288,9 @@ static int rpl_input(struct sk_buff *skb)
+ if (!dst) {
+ ip6_route_input(skb);
+ dst = skb_dst(skb);
+- if (!dst->error) {
++
++ /* cache only if we don't create a dst reference loop */
++ if (!dst->error && lwtst != dst->lwtstate) {
+ local_bh_disable();
+ dst_cache_set_ip6(&rlwt->cache, dst,
+ &ipv6_hdr(skb)->saddr);
+diff --git a/net/ipv6/seg6_iptunnel.c b/net/ipv6/seg6_iptunnel.c
+index 33833b2064c072..51583461ae29ba 100644
+--- a/net/ipv6/seg6_iptunnel.c
++++ b/net/ipv6/seg6_iptunnel.c
+@@ -472,10 +472,18 @@ static int seg6_input_core(struct net *net, struct sock *sk,
+ {
+ struct dst_entry *orig_dst = skb_dst(skb);
+ struct dst_entry *dst = NULL;
++ struct lwtunnel_state *lwtst;
+ struct seg6_lwt *slwt;
+ int err;
+
+- slwt = seg6_lwt_lwtunnel(orig_dst->lwtstate);
++ /* We cannot dereference "orig_dst" once ip6_route_input() or
++ * skb_dst_drop() is called. However, in order to detect a dst loop, we
++ * need the address of its lwtstate. So, save the address of lwtstate
++ * now and use it later as a comparison.
++ */
++ lwtst = orig_dst->lwtstate;
++
++ slwt = seg6_lwt_lwtunnel(lwtst);
+
+ local_bh_disable();
+ dst = dst_cache_get(&slwt->cache);
+@@ -490,7 +498,9 @@ static int seg6_input_core(struct net *net, struct sock *sk,
+ if (!dst) {
+ ip6_route_input(skb);
+ dst = skb_dst(skb);
+- if (!dst->error) {
++
++ /* cache only if we don't create a dst reference loop */
++ if (!dst->error && lwtst != dst->lwtstate) {
+ local_bh_disable();
+ dst_cache_set_ip6(&slwt->cache, dst,
+ &ipv6_hdr(skb)->saddr);
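+
+This hunk and the RPL one above share a recipe: capture the lwtstate pointer
+of the dst the packet arrived on before it can be dropped, then refuse to
+cache a newly resolved dst that still carries that lwtstate, since caching it
+would make the tunnel feed itself. Stripped to a skeleton, with error handling
+and locking elided:
+
+    struct lwtunnel_state *lwtst = skb_dst(skb)->lwtstate;
+
+    ip6_route_input(skb);           /* may install a new dst on the skb */
+    dst = skb_dst(skb);
+
+    /* A dst that re-enters this tunnel would be a reference loop */
+    if (!dst->error && dst->lwtstate != lwtst)
+            dst_cache_set_ip6(&slwt->cache, dst, &ipv6_hdr(skb)->saddr);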
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index 8c4f934d198cc6..b4ba2d9f041765 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -1513,11 +1513,6 @@ static int mptcp_nl_remove_subflow_and_signal_addr(struct net *net,
+ if (mptcp_pm_is_userspace(msk))
+ goto next;
+
+- if (list_empty(&msk->conn_list)) {
+- mptcp_pm_remove_anno_addr(msk, addr, false);
+- goto next;
+- }
+-
+ lock_sock(sk);
+ remove_subflow = lookup_subflow_by_saddr(&msk->conn_list, addr);
+ mptcp_pm_remove_anno_addr(msk, addr, remove_subflow &&
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 860903e0642255..b56bbee7312c48 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -1140,7 +1140,6 @@ static enum mapping_status get_mapping_status(struct sock *ssk,
+ if (data_len == 0) {
+ pr_debug("infinite mapping received\n");
+ MPTCP_INC_STATS(sock_net(ssk), MPTCP_MIB_INFINITEMAPRX);
+- subflow->map_data_len = 0;
+ return MAPPING_INVALID;
+ }
+
+@@ -1284,18 +1283,6 @@ static void subflow_sched_work_if_closed(struct mptcp_sock *msk, struct sock *ss
+ mptcp_schedule_work(sk);
+ }
+
+-static bool subflow_can_fallback(struct mptcp_subflow_context *subflow)
+-{
+- struct mptcp_sock *msk = mptcp_sk(subflow->conn);
+-
+- if (subflow->mp_join)
+- return false;
+- else if (READ_ONCE(msk->csum_enabled))
+- return !subflow->valid_csum_seen;
+- else
+- return READ_ONCE(msk->allow_infinite_fallback);
+-}
+-
+ static void mptcp_subflow_fail(struct mptcp_sock *msk, struct sock *ssk)
+ {
+ struct mptcp_subflow_context *subflow = mptcp_subflow_ctx(ssk);
+@@ -1391,7 +1378,7 @@ static bool subflow_check_data_avail(struct sock *ssk)
+ return true;
+ }
+
+- if (!subflow_can_fallback(subflow) && subflow->map_data_len) {
++ if (!READ_ONCE(msk->allow_infinite_fallback)) {
+ /* fatal protocol error, close the socket.
+ * subflow_error_report() will introduce the appropriate barriers
+ */
+diff --git a/net/rxrpc/rxperf.c b/net/rxrpc/rxperf.c
+index 085e7892d31040..b1536da2246b82 100644
+--- a/net/rxrpc/rxperf.c
++++ b/net/rxrpc/rxperf.c
+@@ -478,6 +478,18 @@ static int rxperf_deliver_request(struct rxperf_call *call)
+ call->unmarshal++;
+ fallthrough;
+ case 2:
++ ret = rxperf_extract_data(call, true);
++ if (ret < 0)
++ return ret;
++
++ /* Deal with the terminal magic cookie. */
++ call->iov_len = 4;
++ call->kvec[0].iov_len = call->iov_len;
++ call->kvec[0].iov_base = call->tmp;
++ iov_iter_kvec(&call->iter, READ, call->kvec, 1, call->iov_len);
++ call->unmarshal++;
++ fallthrough;
++ case 3:
+ ret = rxperf_extract_data(call, false);
+ if (ret < 0)
+ return ret;
+diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
+index 059f6ef1ad1898..7fcb0574fc79e7 100644
+--- a/net/sunrpc/cache.c
++++ b/net/sunrpc/cache.c
+@@ -1669,12 +1669,14 @@ static void remove_cache_proc_entries(struct cache_detail *cd)
+ }
+ }
+
+-#ifdef CONFIG_PROC_FS
+ static int create_cache_proc_entries(struct cache_detail *cd, struct net *net)
+ {
+ struct proc_dir_entry *p;
+ struct sunrpc_net *sn;
+
++ if (!IS_ENABLED(CONFIG_PROC_FS))
++ return 0;
++
+ sn = net_generic(net, sunrpc_net_id);
+ cd->procfs = proc_mkdir(cd->name, sn->proc_net_rpc);
+ if (cd->procfs == NULL)
+@@ -1702,12 +1704,6 @@ static int create_cache_proc_entries(struct cache_detail *cd, struct net *net)
+ remove_cache_proc_entries(cd);
+ return -ENOMEM;
+ }
+-#else /* CONFIG_PROC_FS */
+-static int create_cache_proc_entries(struct cache_detail *cd, struct net *net)
+-{
+- return 0;
+-}
+-#endif
+
+ void __init cache_initialize(void)
+ {
+diff --git a/net/sunrpc/sched.c b/net/sunrpc/sched.c
+index cef623ea150609..9b45fbdc90cabe 100644
+--- a/net/sunrpc/sched.c
++++ b/net/sunrpc/sched.c
+@@ -864,8 +864,6 @@ void rpc_signal_task(struct rpc_task *task)
+ if (!rpc_task_set_rpc_status(task, -ERESTARTSYS))
+ return;
+ trace_rpc_task_signalled(task, task->tk_action);
+- set_bit(RPC_TASK_SIGNALLED, &task->tk_runstate);
+- smp_mb__after_atomic();
+ queue = READ_ONCE(task->tk_waitqueue);
+ if (queue)
+ rpc_wake_up_queued_task(queue, task);
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index b69e6290acfabe..171ad4e2523f13 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -2580,7 +2580,15 @@ static void xs_tls_handshake_done(void *data, int status, key_serial_t peerid)
+ struct sock_xprt *lower_transport =
+ container_of(lower_xprt, struct sock_xprt, xprt);
+
+- lower_transport->xprt_err = status ? -EACCES : 0;
++ switch (status) {
++ case 0:
++ case -EACCES:
++ case -ETIMEDOUT:
++ lower_transport->xprt_err = status;
++ break;
++ default:
++ lower_transport->xprt_err = -EACCES;
++ }
+ complete(&lower_transport->handshake_done);
+ xprt_put(lower_xprt);
+ }
+diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
+index 3c323ca213d42c..abfdb4905ca2ac 100644
+--- a/security/integrity/ima/ima.h
++++ b/security/integrity/ima/ima.h
+@@ -149,6 +149,9 @@ struct ima_kexec_hdr {
+ #define IMA_CHECK_BLACKLIST 0x40000000
+ #define IMA_VERITY_REQUIRED 0x80000000
+
++/* Exclude non-action flags which are not rule-specific. */
++#define IMA_NONACTION_RULE_FLAGS (IMA_NONACTION_FLAGS & ~IMA_NEW_FILE)
++
+ #define IMA_DO_MASK (IMA_MEASURE | IMA_APPRAISE | IMA_AUDIT | \
+ IMA_HASH | IMA_APPRAISE_SUBMASK)
+ #define IMA_DONE_MASK (IMA_MEASURED | IMA_APPRAISED | IMA_AUDITED | \
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index 06132cf47016da..4b213de8dcb40c 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -269,10 +269,13 @@ static int process_measurement(struct file *file, const struct cred *cred,
+ mutex_lock(&iint->mutex);
+
+ if (test_and_clear_bit(IMA_CHANGE_ATTR, &iint->atomic_flags))
+- /* reset appraisal flags if ima_inode_post_setattr was called */
++ /*
++ * Reset appraisal flags (action and non-action rule-specific)
++ * if ima_inode_post_setattr was called.
++ */
+ iint->flags &= ~(IMA_APPRAISE | IMA_APPRAISED |
+ IMA_APPRAISE_SUBMASK | IMA_APPRAISED_SUBMASK |
+- IMA_NONACTION_FLAGS);
++ IMA_NONACTION_RULE_FLAGS);
+
+ /*
+ * Re-evaluate the file if either the xattr has changed or the
+ * Re-evaluate the file if either the xattr has changed or the
+diff --git a/security/landlock/net.c b/security/landlock/net.c
+index d5dcc4407a197b..104b6c01fe503b 100644
+--- a/security/landlock/net.c
++++ b/security/landlock/net.c
+@@ -63,8 +63,7 @@ static int current_check_access_socket(struct socket *const sock,
+ if (WARN_ON_ONCE(dom->num_layers < 1))
+ return -EACCES;
+
+- /* Checks if it's a (potential) TCP socket. */
+- if (sock->type != SOCK_STREAM)
++ if (!sk_is_tcp(sock->sk))
+ return 0;
+
+ /* Checks for minimal header length to safely read sa_family. */
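+
+sk_is_tcp() keys on the socket's protocol rather than only its type, so
+stream sockets that are not plain TCP fall outside the TCP network rules; the
+selftest additions below exercise exactly these cases. Illustrative userspace
+calls under a Landlock TCP sandbox:
+
+    /* Subject to LANDLOCK_ACCESS_NET_BIND_TCP / _CONNECT_TCP checks: */
+    socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
+    socket(AF_INET, SOCK_STREAM, IPPROTO_IP);    /* 0 selects TCP here */
+
+    /* Not checked after this fix (SOCK_STREAM, but not plain TCP): */
+    socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
+    socket(AF_INET, SOCK_STREAM, IPPROTO_SCTP);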
+diff --git a/sound/pci/hda/cs35l56_hda_spi.c b/sound/pci/hda/cs35l56_hda_spi.c
+index 7f02155fe61e3c..7c94110b6272a6 100644
+--- a/sound/pci/hda/cs35l56_hda_spi.c
++++ b/sound/pci/hda/cs35l56_hda_spi.c
+@@ -22,6 +22,9 @@ static int cs35l56_hda_spi_probe(struct spi_device *spi)
+ return -ENOMEM;
+
+ cs35l56->base.dev = &spi->dev;
++ ret = cs35l56_init_config_for_spi(&cs35l56->base, spi);
++ if (ret)
++ return ret;
+
+ #ifdef CS35L56_WAKE_HOLD_TIME_US
+ cs35l56->base.can_hibernate = true;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 9bf99fe6cd34dd..4a3b4c6d4114b9 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10564,6 +10564,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x13b0, "ASUS Z550SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1427, "Asus Zenbook UX31E", ALC269VB_FIXUP_ASUS_ZENBOOK),
+ SND_PCI_QUIRK(0x1043, 0x1433, "ASUS GX650PY/PZ/PV/PU/PYV/PZV/PIV/PVV", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC),
++ SND_PCI_QUIRK(0x1043, 0x1460, "Asus VivoBook 15", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1463, "Asus GA402X/GA402N", ALC285_FIXUP_ASUS_I2C_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1473, "ASUS GU604VI/VC/VE/VG/VJ/VQ/VU/VV/VY/VZ", ALC285_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1483, "ASUS GU603VQ/VU/VV/VJ/VI", ALC285_FIXUP_ASUS_HEADSET_MIC),
+@@ -10597,7 +10598,6 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x19ce, "ASUS B9450FA", ALC294_FIXUP_ASUS_HPE),
+ SND_PCI_QUIRK(0x1043, 0x19e1, "ASUS UX581LV", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+- SND_PCI_QUIRK(0x1043, 0x1a30, "ASUS X705UD", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1a63, "ASUS UX3405MA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1a83, "ASUS UM5302LA", ALC294_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x1a8f, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2),
+diff --git a/sound/soc/codecs/cs35l56-shared.c b/sound/soc/codecs/cs35l56-shared.c
+index e45e9ae01bc668..195841a567c3d4 100644
+--- a/sound/soc/codecs/cs35l56-shared.c
++++ b/sound/soc/codecs/cs35l56-shared.c
+@@ -10,6 +10,7 @@
+ #include <linux/gpio/consumer.h>
+ #include <linux/regmap.h>
+ #include <linux/regulator/consumer.h>
++#include <linux/spi/spi.h>
+ #include <linux/types.h>
+ #include <sound/cs-amp-lib.h>
+
+@@ -303,6 +304,79 @@ void cs35l56_wait_min_reset_pulse(void)
+ }
+ EXPORT_SYMBOL_NS_GPL(cs35l56_wait_min_reset_pulse, SND_SOC_CS35L56_SHARED);
+
++static const struct {
++ u32 addr;
++ u32 value;
++} cs35l56_spi_system_reset_stages[] = {
++ { .addr = CS35L56_DSP_VIRTUAL1_MBOX_1, .value = CS35L56_MBOX_CMD_SYSTEM_RESET },
++ /* The next write is necessary to delimit the soft reset */
++ { .addr = CS35L56_DSP_MBOX_1_RAW, .value = CS35L56_MBOX_CMD_PING },
++};
++
++static void cs35l56_spi_issue_bus_locked_reset(struct cs35l56_base *cs35l56_base,
++ struct spi_device *spi)
++{
++ struct cs35l56_spi_payload *buf = cs35l56_base->spi_payload_buf;
++ struct spi_transfer t = {
++ .tx_buf = buf,
++ .len = sizeof(*buf),
++ };
++ struct spi_message m;
++ int i, ret;
++
++ for (i = 0; i < ARRAY_SIZE(cs35l56_spi_system_reset_stages); i++) {
++ buf->addr = cpu_to_be32(cs35l56_spi_system_reset_stages[i].addr);
++ buf->value = cpu_to_be32(cs35l56_spi_system_reset_stages[i].value);
++ spi_message_init_with_transfers(&m, &t, 1);
++ ret = spi_sync_locked(spi, &m);
++ if (ret)
++ dev_warn(cs35l56_base->dev, "spi_sync failed: %d\n", ret);
++
++ usleep_range(CS35L56_SPI_RESET_TO_PORT_READY_US,
++ 2 * CS35L56_SPI_RESET_TO_PORT_READY_US);
++ }
++}
++
++static void cs35l56_spi_system_reset(struct cs35l56_base *cs35l56_base)
++{
++ struct spi_device *spi = to_spi_device(cs35l56_base->dev);
++ unsigned int val;
++ int read_ret, ret;
++
++ /*
++ * There must not be any other SPI bus activity while the amp is
++ * soft-resetting.
++ */
++ ret = spi_bus_lock(spi->controller);
++ if (ret) {
++ dev_warn(cs35l56_base->dev, "spi_bus_lock failed: %d\n", ret);
++ return;
++ }
++
++ cs35l56_spi_issue_bus_locked_reset(cs35l56_base, spi);
++ spi_bus_unlock(spi->controller);
++
++ /*
++ * Check firmware boot by testing for a response in MBOX_2.
++ * HALO_STATE cannot be trusted yet because the reset sequence
++ * can leave it with stale state. But MBOX is reset.
++ * The regmap must remain in cache-only until the chip has
++ * booted, so use a bypassed read.
++ */
++ ret = read_poll_timeout(regmap_read_bypassed, read_ret,
++ (val > 0) && (val < 0xffffffff),
++ CS35L56_HALO_STATE_POLL_US,
++ CS35L56_HALO_STATE_TIMEOUT_US,
++ false,
++ cs35l56_base->regmap,
++ CS35L56_DSP_VIRTUAL1_MBOX_2,
++ &val);
++ if (ret) {
++ dev_err(cs35l56_base->dev, "SPI reboot timed out(%d): MBOX2=%#x\n",
++ read_ret, val);
++ }
++}
++
+ static const struct reg_sequence cs35l56_system_reset_seq[] = {
+ REG_SEQ0(CS35L56_DSP1_HALO_STATE, 0),
+ REG_SEQ0(CS35L56_DSP_VIRTUAL1_MBOX_1, CS35L56_MBOX_CMD_SYSTEM_RESET),
+@@ -315,6 +389,12 @@ void cs35l56_system_reset(struct cs35l56_base *cs35l56_base, bool is_soundwire)
+ * accesses other than the controlled system reset sequence below.
+ */
+ regcache_cache_only(cs35l56_base->regmap, true);
++
++ if (cs35l56_is_spi(cs35l56_base)) {
++ cs35l56_spi_system_reset(cs35l56_base);
++ return;
++ }
++
+ regmap_multi_reg_write_bypassed(cs35l56_base->regmap,
+ cs35l56_system_reset_seq,
+ ARRAY_SIZE(cs35l56_system_reset_seq));
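+
+The reboot check above is built on read_poll_timeout(), which keeps
+re-invoking an accessor until a condition over its outputs holds or the
+timeout expires; the bypassed read is needed only because the regmap is still
+cache-only at that point. A minimal sketch against an ordinary regmap read,
+with made-up register and timing values and an assumed struct regmap *map:
+
+    unsigned int val;
+    int read_ret, ret;
+
+    /* Poll register 0x100 every 1 ms, for at most 100 ms. read_ret
+     * receives each regmap_read() return value, val the register. */
+    ret = read_poll_timeout(regmap_read, read_ret,
+                            !read_ret && (val & BIT(0)),
+                            1000, 100000, false,
+                            map, 0x100, &val);
+    /* ret is 0 on success, -ETIMEDOUT if the condition never held */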
+diff --git a/sound/soc/codecs/cs35l56-spi.c b/sound/soc/codecs/cs35l56-spi.c
+index b07b798b0b45d6..568f554a8638bf 100644
+--- a/sound/soc/codecs/cs35l56-spi.c
++++ b/sound/soc/codecs/cs35l56-spi.c
+@@ -33,6 +33,9 @@ static int cs35l56_spi_probe(struct spi_device *spi)
+
+ cs35l56->base.dev = &spi->dev;
+ cs35l56->base.can_hibernate = true;
++ ret = cs35l56_init_config_for_spi(&cs35l56->base, spi);
++ if (ret)
++ return ret;
+
+ ret = cs35l56_common_probe(cs35l56);
+ if (ret != 0)
+diff --git a/sound/soc/codecs/es8328.c b/sound/soc/codecs/es8328.c
+index f3c97da798dc8e..76159c45e6b52e 100644
+--- a/sound/soc/codecs/es8328.c
++++ b/sound/soc/codecs/es8328.c
+@@ -233,7 +233,6 @@ static const struct snd_kcontrol_new es8328_right_line_controls =
+
+ /* Left Mixer */
+ static const struct snd_kcontrol_new es8328_left_mixer_controls[] = {
+- SOC_DAPM_SINGLE("Playback Switch", ES8328_DACCONTROL17, 7, 1, 0),
+ SOC_DAPM_SINGLE("Left Bypass Switch", ES8328_DACCONTROL17, 6, 1, 0),
+ SOC_DAPM_SINGLE("Right Playback Switch", ES8328_DACCONTROL18, 7, 1, 0),
+ SOC_DAPM_SINGLE("Right Bypass Switch", ES8328_DACCONTROL18, 6, 1, 0),
+@@ -243,7 +242,6 @@ static const struct snd_kcontrol_new es8328_left_mixer_controls[] = {
+ static const struct snd_kcontrol_new es8328_right_mixer_controls[] = {
+ SOC_DAPM_SINGLE("Left Playback Switch", ES8328_DACCONTROL19, 7, 1, 0),
+ SOC_DAPM_SINGLE("Left Bypass Switch", ES8328_DACCONTROL19, 6, 1, 0),
+- SOC_DAPM_SINGLE("Playback Switch", ES8328_DACCONTROL20, 7, 1, 0),
+ SOC_DAPM_SINGLE("Right Bypass Switch", ES8328_DACCONTROL20, 6, 1, 0),
+ };
+
+@@ -336,10 +334,10 @@ static const struct snd_soc_dapm_widget es8328_dapm_widgets[] = {
+ SND_SOC_DAPM_DAC("Left DAC", "Left Playback", ES8328_DACPOWER,
+ ES8328_DACPOWER_LDAC_OFF, 1),
+
+- SND_SOC_DAPM_MIXER("Left Mixer", SND_SOC_NOPM, 0, 0,
++ SND_SOC_DAPM_MIXER("Left Mixer", ES8328_DACCONTROL17, 7, 0,
+ &es8328_left_mixer_controls[0],
+ ARRAY_SIZE(es8328_left_mixer_controls)),
+- SND_SOC_DAPM_MIXER("Right Mixer", SND_SOC_NOPM, 0, 0,
++ SND_SOC_DAPM_MIXER("Right Mixer", ES8328_DACCONTROL20, 7, 0,
+ &es8328_right_mixer_controls[0],
+ ARRAY_SIZE(es8328_right_mixer_controls)),
+
+@@ -418,19 +416,14 @@ static const struct snd_soc_dapm_route es8328_dapm_routes[] = {
+ { "Right Line Mux", "PGA", "Right PGA Mux" },
+ { "Right Line Mux", "Differential", "Differential Mux" },
+
+- { "Left Out 1", NULL, "Left DAC" },
+- { "Right Out 1", NULL, "Right DAC" },
+- { "Left Out 2", NULL, "Left DAC" },
+- { "Right Out 2", NULL, "Right DAC" },
+-
+- { "Left Mixer", "Playback Switch", "Left DAC" },
++ { "Left Mixer", NULL, "Left DAC" },
+ { "Left Mixer", "Left Bypass Switch", "Left Line Mux" },
+ { "Left Mixer", "Right Playback Switch", "Right DAC" },
+ { "Left Mixer", "Right Bypass Switch", "Right Line Mux" },
+
+ { "Right Mixer", "Left Playback Switch", "Left DAC" },
+ { "Right Mixer", "Left Bypass Switch", "Left Line Mux" },
+- { "Right Mixer", "Playback Switch", "Right DAC" },
++ { "Right Mixer", NULL, "Right DAC" },
+ { "Right Mixer", "Right Bypass Switch", "Right Line Mux" },
+
+ { "DAC DIG", NULL, "DAC STM" },
+diff --git a/sound/soc/fsl/fsl_sai.c b/sound/soc/fsl/fsl_sai.c
+index 634168d2bb6e54..c5efbceb06d1fc 100644
+--- a/sound/soc/fsl/fsl_sai.c
++++ b/sound/soc/fsl/fsl_sai.c
+@@ -994,10 +994,10 @@ static struct snd_soc_dai_driver fsl_sai_dai_template[] = {
+ {
+ .name = "sai-tx",
+ .playback = {
+- .stream_name = "CPU-Playback",
++ .stream_name = "SAI-Playback",
+ .channels_min = 1,
+ .channels_max = 32,
+- .rate_min = 8000,
++ .rate_min = 8000,
+ .rate_max = 2822400,
+ .rates = SNDRV_PCM_RATE_KNOT,
+ .formats = FSL_SAI_FORMATS,
+@@ -1007,7 +1007,7 @@ static struct snd_soc_dai_driver fsl_sai_dai_template[] = {
+ {
+ .name = "sai-rx",
+ .capture = {
+- .stream_name = "CPU-Capture",
++ .stream_name = "SAI-Capture",
+ .channels_min = 1,
+ .channels_max = 32,
+ .rate_min = 8000,
+diff --git a/sound/soc/fsl/imx-audmix.c b/sound/soc/fsl/imx-audmix.c
+index ff3671226306bd..ca33ecad075218 100644
+--- a/sound/soc/fsl/imx-audmix.c
++++ b/sound/soc/fsl/imx-audmix.c
+@@ -119,8 +119,8 @@ static const struct snd_soc_ops imx_audmix_be_ops = {
+ static const char *name[][3] = {
+ {"HiFi-AUDMIX-FE-0", "HiFi-AUDMIX-FE-1", "HiFi-AUDMIX-FE-2"},
+ {"sai-tx", "sai-tx", "sai-rx"},
+- {"AUDMIX-Playback-0", "AUDMIX-Playback-1", "CPU-Capture"},
+- {"CPU-Playback", "CPU-Playback", "AUDMIX-Capture-0"},
++ {"AUDMIX-Playback-0", "AUDMIX-Playback-1", "SAI-Capture"},
++ {"SAI-Playback", "SAI-Playback", "AUDMIX-Capture-0"},
+ };
+
+ static int imx_audmix_probe(struct platform_device *pdev)
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 737dd00e97b142..779d97d31f170e 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -1145,7 +1145,7 @@ static int snd_usbmidi_output_close(struct snd_rawmidi_substream *substream)
+ {
+ struct usbmidi_out_port *port = substream->runtime->private_data;
+
+- cancel_work_sync(&port->ep->work);
++ flush_work(&port->ep->work);
+ return substream_open(substream, 0, 0);
+ }
+
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index a97efb7b131ea2..09210fb4ac60c1 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1868,6 +1868,7 @@ void snd_usb_set_format_quirk(struct snd_usb_substream *subs,
+ case USB_ID(0x534d, 0x2109): /* MacroSilicon MS2109 */
+ subs->stream_offset_adj = 2;
+ break;
++ case USB_ID(0x2b73, 0x000a): /* Pioneer DJM-900NXS2 */
+ case USB_ID(0x2b73, 0x0013): /* Pioneer DJM-450 */
+ pioneer_djm_set_format_quirk(subs, 0x0082);
+ break;
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 8e02db7e83323b..1691aa6e6ce32d 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -639,47 +639,8 @@ static int add_dead_ends(struct objtool_file *file)
+ uint64_t offset;
+
+ /*
+- * Check for manually annotated dead ends.
+- */
+- rsec = find_section_by_name(file->elf, ".rela.discard.unreachable");
+- if (!rsec)
+- goto reachable;
+-
+- for_each_reloc(rsec, reloc) {
+- if (reloc->sym->type == STT_SECTION) {
+- offset = reloc_addend(reloc);
+- } else if (reloc->sym->local_label) {
+- offset = reloc->sym->offset;
+- } else {
+- WARN("unexpected relocation symbol type in %s", rsec->name);
+- return -1;
+- }
+-
+- insn = find_insn(file, reloc->sym->sec, offset);
+- if (insn)
+- insn = prev_insn_same_sec(file, insn);
+- else if (offset == reloc->sym->sec->sh.sh_size) {
+- insn = find_last_insn(file, reloc->sym->sec);
+- if (!insn) {
+- WARN("can't find unreachable insn at %s+0x%" PRIx64,
+- reloc->sym->sec->name, offset);
+- return -1;
+- }
+- } else {
+- WARN("can't find unreachable insn at %s+0x%" PRIx64,
+- reloc->sym->sec->name, offset);
+- return -1;
+- }
+-
+- insn->dead_end = true;
+- }
+-
+-reachable:
+- /*
+- * These manually annotated reachable checks are needed for GCC 4.4,
+- * where the Linux unreachable() macro isn't supported. In that case
+- * GCC doesn't know the "ud2" is fatal, so it generates code as if it's
+- * not a dead end.
++ * UD2 defaults to being a dead-end, allow them to be annotated for
++ * non-fatal, eg WARN.
+ */
+ rsec = find_section_by_name(file->elf, ".rela.discard.reachable");
+ if (!rsec)
+@@ -2628,13 +2589,14 @@ static void mark_rodata(struct objtool_file *file)
+ *
+ * - .rodata: can contain GCC switch tables
+ * - .rodata.<func>: same, if -fdata-sections is being used
+- * - .rodata..c_jump_table: contains C annotated jump tables
++ * - .data.rel.ro.c_jump_table: contains C annotated jump tables
+ *
+ * .rodata.str1.* sections are ignored; they don't contain jump tables.
+ */
+ for_each_sec(file, sec) {
+- if (!strncmp(sec->name, ".rodata", 7) &&
+- !strstr(sec->name, ".str1.")) {
++ if ((!strncmp(sec->name, ".rodata", 7) &&
++ !strstr(sec->name, ".str1.")) ||
++ !strncmp(sec->name, ".data.rel.ro", 12)) {
+ sec->rodata = true;
+ found = true;
+ }
+diff --git a/tools/objtool/include/objtool/special.h b/tools/objtool/include/objtool/special.h
+index 86d4af9c5aa9dc..89ee12b1a13849 100644
+--- a/tools/objtool/include/objtool/special.h
++++ b/tools/objtool/include/objtool/special.h
+@@ -10,7 +10,7 @@
+ #include <objtool/check.h>
+ #include <objtool/elf.h>
+
+-#define C_JUMP_TABLE_SECTION ".rodata..c_jump_table"
++#define C_JUMP_TABLE_SECTION ".data.rel.ro.c_jump_table"
+
+ struct special_alt {
+ struct list_head list;
+diff --git a/tools/testing/selftests/drivers/net/queues.py b/tools/testing/selftests/drivers/net/queues.py
+index 30f29096e27c22..4868b514ae78d8 100755
+--- a/tools/testing/selftests/drivers/net/queues.py
++++ b/tools/testing/selftests/drivers/net/queues.py
+@@ -40,10 +40,9 @@ def addremove_queues(cfg, nl) -> None:
+
+ netnl = EthtoolFamily()
+ channels = netnl.channels_get({'header': {'dev-index': cfg.ifindex}})
+- if channels['combined-count'] == 0:
+- rx_type = 'rx'
+- else:
+- rx_type = 'combined'
++ rx_type = 'rx'
++ if channels.get('combined-count', 0) > 0:
++ rx_type = 'combined'
+
+ expected = curr_queues - 1
+ cmd(f"ethtool -L {cfg.dev['ifname']} {rx_type} {expected}", timeout=10)
+diff --git a/tools/testing/selftests/landlock/common.h b/tools/testing/selftests/landlock/common.h
+index 61056fa074bb2f..40a2def50b837e 100644
+--- a/tools/testing/selftests/landlock/common.h
++++ b/tools/testing/selftests/landlock/common.h
+@@ -234,6 +234,7 @@ enforce_ruleset(struct __test_metadata *const _metadata, const int ruleset_fd)
+ struct protocol_variant {
+ int domain;
+ int type;
++ int protocol;
+ };
+
+ struct service_fixture {
+diff --git a/tools/testing/selftests/landlock/config b/tools/testing/selftests/landlock/config
+index 29af19c4e9f981..a8982da4acbdc3 100644
+--- a/tools/testing/selftests/landlock/config
++++ b/tools/testing/selftests/landlock/config
+@@ -3,6 +3,8 @@ CONFIG_CGROUP_SCHED=y
+ CONFIG_INET=y
+ CONFIG_IPV6=y
+ CONFIG_KEYS=y
++CONFIG_MPTCP=y
++CONFIG_MPTCP_IPV6=y
+ CONFIG_NET=y
+ CONFIG_NET_NS=y
+ CONFIG_OVERLAY_FS=y
+diff --git a/tools/testing/selftests/landlock/net_test.c b/tools/testing/selftests/landlock/net_test.c
+index 4e0aeb53b225a5..376079d70d3fc0 100644
+--- a/tools/testing/selftests/landlock/net_test.c
++++ b/tools/testing/selftests/landlock/net_test.c
+@@ -85,18 +85,18 @@ static void setup_loopback(struct __test_metadata *const _metadata)
+ clear_ambient_cap(_metadata, CAP_NET_ADMIN);
+ }
+
++static bool prot_is_tcp(const struct protocol_variant *const prot)
++{
++ return (prot->domain == AF_INET || prot->domain == AF_INET6) &&
++ prot->type == SOCK_STREAM &&
++ (prot->protocol == IPPROTO_TCP || prot->protocol == IPPROTO_IP);
++}
++
+ static bool is_restricted(const struct protocol_variant *const prot,
+ const enum sandbox_type sandbox)
+ {
+- switch (prot->domain) {
+- case AF_INET:
+- case AF_INET6:
+- switch (prot->type) {
+- case SOCK_STREAM:
+- return sandbox == TCP_SANDBOX;
+- }
+- break;
+- }
++ if (sandbox == TCP_SANDBOX)
++ return prot_is_tcp(prot);
+ return false;
+ }
+
+@@ -105,7 +105,7 @@ static int socket_variant(const struct service_fixture *const srv)
+ int ret;
+
+ ret = socket(srv->protocol.domain, srv->protocol.type | SOCK_CLOEXEC,
+- 0);
++ srv->protocol.protocol);
+ if (ret < 0)
+ return -errno;
+ return ret;
+@@ -290,22 +290,59 @@ FIXTURE_TEARDOWN(protocol)
+ }
+
+ /* clang-format off */
+-FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_tcp) {
++FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_tcp1) {
+ /* clang-format on */
+ .sandbox = NO_SANDBOX,
+ .prot = {
+ .domain = AF_INET,
+ .type = SOCK_STREAM,
++ /* IPPROTO_IP == 0 */
++ .protocol = IPPROTO_IP,
+ },
+ };
+
+ /* clang-format off */
+-FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_tcp) {
++FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_tcp2) {
++ /* clang-format on */
++ .sandbox = NO_SANDBOX,
++ .prot = {
++ .domain = AF_INET,
++ .type = SOCK_STREAM,
++ .protocol = IPPROTO_TCP,
++ },
++};
++
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_tcp1) {
+ /* clang-format on */
+ .sandbox = NO_SANDBOX,
+ .prot = {
+ .domain = AF_INET6,
+ .type = SOCK_STREAM,
++ /* IPPROTO_IP == 0 */
++ .protocol = IPPROTO_IP,
++ },
++};
++
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_tcp2) {
++ /* clang-format on */
++ .sandbox = NO_SANDBOX,
++ .prot = {
++ .domain = AF_INET6,
++ .type = SOCK_STREAM,
++ .protocol = IPPROTO_TCP,
++ },
++};
++
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv4_mptcp) {
++ /* clang-format on */
++ .sandbox = NO_SANDBOX,
++ .prot = {
++ .domain = AF_INET,
++ .type = SOCK_STREAM,
++ .protocol = IPPROTO_MPTCP,
+ },
+ };
+
+@@ -329,6 +366,17 @@ FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_udp) {
+ },
+ };
+
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_ipv6_mptcp) {
++ /* clang-format on */
++ .sandbox = NO_SANDBOX,
++ .prot = {
++ .domain = AF_INET6,
++ .type = SOCK_STREAM,
++ .protocol = IPPROTO_MPTCP,
++ },
++};
++
+ /* clang-format off */
+ FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_unix_stream) {
+ /* clang-format on */
+@@ -350,22 +398,48 @@ FIXTURE_VARIANT_ADD(protocol, no_sandbox_with_unix_datagram) {
+ };
+
+ /* clang-format off */
+-FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_tcp) {
++FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_tcp1) {
++ /* clang-format on */
++ .sandbox = TCP_SANDBOX,
++ .prot = {
++ .domain = AF_INET,
++ .type = SOCK_STREAM,
++ /* IPPROTO_IP == 0 */
++ .protocol = IPPROTO_IP,
++ },
++};
++
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_tcp2) {
+ /* clang-format on */
+ .sandbox = TCP_SANDBOX,
+ .prot = {
+ .domain = AF_INET,
+ .type = SOCK_STREAM,
++ .protocol = IPPROTO_TCP,
++ },
++};
++
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_tcp1) {
++ /* clang-format on */
++ .sandbox = TCP_SANDBOX,
++ .prot = {
++ .domain = AF_INET6,
++ .type = SOCK_STREAM,
++ /* IPPROTO_IP == 0 */
++ .protocol = IPPROTO_IP,
+ },
+ };
+
+ /* clang-format off */
+-FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_tcp) {
++FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_tcp2) {
+ /* clang-format on */
+ .sandbox = TCP_SANDBOX,
+ .prot = {
+ .domain = AF_INET6,
+ .type = SOCK_STREAM,
++ .protocol = IPPROTO_TCP,
+ },
+ };
+
+@@ -389,6 +463,17 @@ FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_udp) {
+ },
+ };
+
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv4_mptcp) {
++ /* clang-format on */
++ .sandbox = TCP_SANDBOX,
++ .prot = {
++ .domain = AF_INET,
++ .type = SOCK_STREAM,
++ .protocol = IPPROTO_MPTCP,
++ },
++};
++
+ /* clang-format off */
+ FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_unix_stream) {
+ /* clang-format on */
+@@ -399,6 +484,17 @@ FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_unix_stream) {
+ },
+ };
+
++/* clang-format off */
++FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_ipv6_mptcp) {
++ /* clang-format on */
++ .sandbox = TCP_SANDBOX,
++ .prot = {
++ .domain = AF_INET6,
++ .type = SOCK_STREAM,
++ .protocol = IPPROTO_MPTCP,
++ },
++};
++
+ /* clang-format off */
+ FIXTURE_VARIANT_ADD(protocol, tcp_sandbox_with_unix_datagram) {
+ /* clang-format on */
+diff --git a/tools/testing/selftests/rseq/rseq-riscv-bits.h b/tools/testing/selftests/rseq/rseq-riscv-bits.h
+index de31a0143139b7..f02f411d550d18 100644
+--- a/tools/testing/selftests/rseq/rseq-riscv-bits.h
++++ b/tools/testing/selftests/rseq/rseq-riscv-bits.h
+@@ -243,7 +243,7 @@ int RSEQ_TEMPLATE_IDENTIFIER(rseq_offset_deref_addv)(intptr_t *ptr, off_t off, i
+ #ifdef RSEQ_COMPARE_TWICE
+ RSEQ_ASM_CMP_CPU_ID(cpu_id, current_cpu_id, "%l[error1]")
+ #endif
+- RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, 3)
++ RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, inc, 3)
+ RSEQ_INJECT_ASM(4)
+ RSEQ_ASM_DEFINE_ABORT(4, abort)
+ : /* gcc asm goto does not allow outputs */
+@@ -251,8 +251,8 @@ int RSEQ_TEMPLATE_IDENTIFIER(rseq_offset_deref_addv)(intptr_t *ptr, off_t off, i
+ [current_cpu_id] "m" (rseq_get_abi()->RSEQ_TEMPLATE_CPU_ID_FIELD),
+ [rseq_cs] "m" (rseq_get_abi()->rseq_cs.arch.ptr),
+ [ptr] "r" (ptr),
+- [off] "er" (off),
+- [inc] "er" (inc)
++ [off] "r" (off),
++ [inc] "r" (inc)
+ RSEQ_INJECT_INPUT
+ : "memory", RSEQ_ASM_TMP_REG_1
+ RSEQ_INJECT_CLOBBER
+diff --git a/tools/testing/selftests/rseq/rseq-riscv.h b/tools/testing/selftests/rseq/rseq-riscv.h
+index 37e598d0a365e2..67d544aaa9a3b0 100644
+--- a/tools/testing/selftests/rseq/rseq-riscv.h
++++ b/tools/testing/selftests/rseq/rseq-riscv.h
+@@ -158,7 +158,7 @@ do { \
+ "bnez " RSEQ_ASM_TMP_REG_1 ", 222b\n" \
+ "333:\n"
+
+-#define RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, post_commit_label) \
++#define RSEQ_ASM_OP_R_DEREF_ADDV(ptr, off, inc, post_commit_label) \
+ "mv " RSEQ_ASM_TMP_REG_1 ", %[" __rseq_str(ptr) "]\n" \
+ RSEQ_ASM_OP_R_ADD(off) \
+ REG_L RSEQ_ASM_TMP_REG_1 ", 0(" RSEQ_ASM_TMP_REG_1 ")\n" \
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-03-13 12:54 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-03-13 12:54 UTC (permalink / raw
To: gentoo-commits
commit: eaf25fbd983469e888a0f8a6a6c0e1f7cf5f60c2
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 13 12:54:46 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Mar 13 12:54:46 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=eaf25fbd
Linux patch 6.12.19
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1018_linux-6.12.19.patch | 18520 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 18524 insertions(+)
diff --git a/0000_README b/0000_README
index 85e743e9..a2f75d4a 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch: 1017_linux-6.12.18.patch
From: https://www.kernel.org
Desc: Linux 6.12.18
+Patch: 1018_linux-6.12.19.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.19
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1018_linux-6.12.19.patch b/1018_linux-6.12.19.patch
new file mode 100644
index 00000000..8440f974
--- /dev/null
+++ b/1018_linux-6.12.19.patch
@@ -0,0 +1,18520 @@
+diff --git a/.clippy.toml b/.clippy.toml
+new file mode 100644
+index 00000000000000..e4c4eef10b28c1
+--- /dev/null
++++ b/.clippy.toml
+@@ -0,0 +1,9 @@
++# SPDX-License-Identifier: GPL-2.0
++
++check-private-items = true
++
++disallowed-macros = [
++ # The `clippy::dbg_macro` lint only works with `std::dbg!`, thus we simulate
++ # it here, see: https://github.com/rust-lang/rust-clippy/issues/11303.
++ { path = "kernel::dbg", reason = "the `dbg!` macro is intended as a debugging tool" },
++]
+diff --git a/.gitignore b/.gitignore
+index 56972adb5031af..a61e4778d011cf 100644
+--- a/.gitignore
++++ b/.gitignore
+@@ -103,6 +103,7 @@ modules.order
+ # We don't want to ignore the following even if they are dot-files
+ #
+ !.clang-format
++!.clippy.toml
+ !.cocciconfig
+ !.editorconfig
+ !.get_maintainer.ignore
+diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
+index f8bc1630eba056..fa21cdd610b21a 100644
+--- a/Documentation/admin-guide/sysctl/kernel.rst
++++ b/Documentation/admin-guide/sysctl/kernel.rst
+@@ -212,6 +212,17 @@ pid>/``).
+ This value defaults to 0.
+
+
++core_sort_vma
++=============
++
++The default coredump writes VMAs in address order. By setting
++``core_sort_vma`` to 1, VMAs will be written from smallest size
++to largest size. This is known to break at least elfutils, but
++can be handy when dealing with very large (and truncated)
++coredumps where the more useful debugging details are included
++in the smaller VMAs.
++
++
+ core_uses_pid
+ =============
+
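
As a hedged illustration of the core_sort_vma knob documented above (this
helper is hypothetical and not part of the patch), the sysctl can be enabled
from userspace through procfs before collecting a large coredump:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Path taken from the documentation hunk above. */
	int fd = open("/proc/sys/kernel/core_sort_vma", O_WRONLY);

	if (fd < 0) {
		perror("open");	/* kernel without the knob, or no privilege */
		return 1;
	}
	if (write(fd, "1", 1) != 1)
		perror("write");
	close(fd);
	return 0;
}
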
+diff --git a/Documentation/rust/coding-guidelines.rst b/Documentation/rust/coding-guidelines.rst
+index 329b070a1d4736..a2e326b42410f8 100644
+--- a/Documentation/rust/coding-guidelines.rst
++++ b/Documentation/rust/coding-guidelines.rst
+@@ -227,3 +227,149 @@ The equivalent in Rust may look like (ignoring documentation):
+ That is, the equivalent of ``GPIO_LINE_DIRECTION_IN`` would be referred to as
+ ``gpio::LineDirection::In``. In particular, it should not be named
+ ``gpio::gpio_line_direction::GPIO_LINE_DIRECTION_IN``.
++
++
++Lints
++-----
++
++In Rust, it is possible to ``allow`` particular warnings (diagnostics, lints)
++locally, making the compiler ignore instances of a given warning within a given
++function, module, block, etc.
++
++It is similar to ``#pragma GCC diagnostic push`` + ``ignored`` + ``pop`` in C
++[#]_:
++
++.. code-block:: c
++
++ #pragma GCC diagnostic push
++ #pragma GCC diagnostic ignored "-Wunused-function"
++ static void f(void) {}
++ #pragma GCC diagnostic pop
++
++.. [#] In this particular case, the kernel's ``__{always,maybe}_unused``
++ attributes (C23's ``[[maybe_unused]]``) may be used; however, the example
++ is meant to reflect the equivalent lint in Rust discussed afterwards.
++
++But way less verbose:
++
++.. code-block:: rust
++
++ #[allow(dead_code)]
++ fn f() {}
++
++By that virtue, it makes it possible to comfortably enable more diagnostics by
++default (i.e. outside ``W=`` levels). In particular, those that may have some
++false positives but that are otherwise quite useful to keep enabled to catch
++potential mistakes.
++
++On top of that, Rust provides the ``expect`` attribute which takes this further.
++It makes the compiler warn if the warning was not produced. For instance, the
++following will ensure that, when ``f()`` is called somewhere, we will have to
++remove the attribute:
++
++.. code-block:: rust
++
++ #[expect(dead_code)]
++ fn f() {}
++
++If we do not, we get a warning from the compiler::
++
++ warning: this lint expectation is unfulfilled
++ --> x.rs:3:10
++ |
++ 3 | #[expect(dead_code)]
++ | ^^^^^^^^^
++ |
++ = note: `#[warn(unfulfilled_lint_expectations)]` on by default
++
++This means that ``expect``\ s do not get forgotten when they are not needed, which
++may happen in several situations, e.g.:
++
++- Temporary attributes added while developing.
++
++- Improvements in lints in the compiler, Clippy or custom tools which may
++ remove a false positive.
++
++- When the lint is not needed anymore because it was expected that it would be
++ removed at some point, such as the ``dead_code`` example above.
++
++It also increases the visibility of the remaining ``allow``\ s and reduces the
++chance of misapplying one.
++
++Thus prefer ``expect`` over ``allow`` unless:
++
++- Conditional compilation triggers the warning in some cases but not others.
++
++ If there are only a few cases where the warning triggers (or does not
++ trigger) compared to the total number of cases, then one may consider using
++ a conditional ``expect`` (i.e. ``cfg_attr(..., expect(...))``). Otherwise,
++ it is likely simpler to just use ``allow``.
++
++- Inside macros, when the different invocations may create expanded code that
++ triggers the warning in some cases but not in others.
++
++- When code may trigger a warning for some architectures but not others, such
++ as an ``as`` cast to a C FFI type.
++
++As a more developed example, consider for instance this program:
++
++.. code-block:: rust
++
++ fn g() {}
++
++ fn main() {
++ #[cfg(CONFIG_X)]
++ g();
++ }
++
++Here, function ``g()`` is dead code if ``CONFIG_X`` is not set. Can we use
++``expect`` here?
++
++.. code-block:: rust
++
++ #[expect(dead_code)]
++ fn g() {}
++
++ fn main() {
++ #[cfg(CONFIG_X)]
++ g();
++ }
++
++This would emit a lint if ``CONFIG_X`` is set, since it is not dead code in that
++configuration. Therefore, in cases like this, we cannot use ``expect`` as-is.
++
++A simple possibility is using ``allow``:
++
++.. code-block:: rust
++
++ #[allow(dead_code)]
++ fn g() {}
++
++ fn main() {
++ #[cfg(CONFIG_X)]
++ g();
++ }
++
++An alternative would be using a conditional ``expect``:
++
++.. code-block:: rust
++
++ #[cfg_attr(not(CONFIG_X), expect(dead_code))]
++ fn g() {}
++
++ fn main() {
++ #[cfg(CONFIG_X)]
++ g();
++ }
++
++This would ensure that, if someone introduces another call to ``g()`` somewhere
++(e.g. unconditionally), then it would be spotted that it is not dead code
++anymore. However, the ``cfg_attr`` is more complex than a simple ``allow``.
++
++Therefore, it is likely that it is not worth using conditional ``expect``\ s when
++more than one or two configurations are involved or when the lint may be
++triggered due to non-local changes (such as ``dead_code``).
++
++For more information about diagnostics in Rust, please see:
++
++ https://doc.rust-lang.org/stable/reference/attributes/diagnostics.html
+diff --git a/MAINTAINERS b/MAINTAINERS
+index 6bb4ec0c162a53..de04c7ba8571bd 100644
+--- a/MAINTAINERS
++++ b/MAINTAINERS
+@@ -20175,6 +20175,7 @@ B: https://github.com/Rust-for-Linux/linux/issues
+ C: zulip://rust-for-linux.zulipchat.com
+ P: https://rust-for-linux.com/contributing
+ T: git https://github.com/Rust-for-Linux/linux.git rust-next
++F: .clippy.toml
+ F: Documentation/rust/
+ F: rust/
+ F: samples/rust/
+@@ -20182,6 +20183,13 @@ F: scripts/*rust*
+ F: tools/testing/selftests/rust/
+ K: \b(?i:rust)\b
+
++RUST [ALLOC]
++M: Danilo Krummrich <dakr@kernel.org>
++L: rust-for-linux@vger.kernel.org
++S: Maintained
++F: rust/kernel/alloc.rs
++F: rust/kernel/alloc/
++
+ RXRPC SOCKETS (AF_RXRPC)
+ M: David Howells <dhowells@redhat.com>
+ M: Marc Dionne <marc.dionne@auristor.com>
+diff --git a/Makefile b/Makefile
+index 17dfe0a8ca8fa9..343c9f25433c7c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 18
++SUBLEVEL = 19
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -446,19 +446,23 @@ KBUILD_USERLDFLAGS := $(USERLDFLAGS)
+ export rust_common_flags := --edition=2021 \
+ -Zbinary_dep_depinfo=y \
+ -Astable_features \
+- -Dunsafe_op_in_unsafe_fn \
+ -Dnon_ascii_idents \
++ -Dunsafe_op_in_unsafe_fn \
++ -Wmissing_docs \
+ -Wrust_2018_idioms \
+ -Wunreachable_pub \
+- -Wmissing_docs \
+- -Wrustdoc::missing_crate_level_docs \
+ -Wclippy::all \
++ -Wclippy::ignored_unit_patterns \
+ -Wclippy::mut_mut \
+ -Wclippy::needless_bitwise_bool \
+ -Wclippy::needless_continue \
+ -Aclippy::needless_lifetimes \
+ -Wclippy::no_mangle_with_rust_abi \
+- -Wclippy::dbg_macro
++ -Wclippy::undocumented_unsafe_blocks \
++ -Wclippy::unnecessary_safety_comment \
++ -Wclippy::unnecessary_safety_doc \
++ -Wrustdoc::missing_crate_level_docs \
++ -Wrustdoc::unescaped_backticks
+
+ KBUILD_HOSTCFLAGS := $(KBUILD_USERHOSTCFLAGS) $(HOST_LFS_CFLAGS) \
+ $(HOSTCFLAGS) -I $(srctree)/scripts/include
+@@ -583,6 +587,9 @@ endif
+ # Allows the usage of unstable features in stable compilers.
+ export RUSTC_BOOTSTRAP := 1
+
++# Allows finding `.clippy.toml` in out-of-srctree builds.
++export CLIPPY_CONF_DIR := $(srctree)
++
+ export ARCH SRCARCH CONFIG_SHELL BASH HOSTCC KBUILD_HOSTCFLAGS CROSS_COMPILE LD CC HOSTPKG_CONFIG
+ export RUSTC RUSTDOC RUSTFMT RUSTC_OR_CLIPPY_QUIET RUSTC_OR_CLIPPY BINDGEN
+ export HOSTRUSTC KBUILD_HOSTRUSTFLAGS
+@@ -1060,6 +1067,11 @@ endif
+ KBUILD_USERCFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
+ KBUILD_USERLDFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
+
++# userspace programs are linked via the compiler, use the correct linker
++ifeq ($(CONFIG_CC_IS_CLANG)$(CONFIG_LD_IS_LLD),yy)
++KBUILD_USERLDFLAGS += --ld-path=$(LD)
++endif
++
+ # make the checker run with the right architecture
+ CHECKFLAGS += --arch=$(ARCH)
+
+diff --git a/arch/arm64/include/asm/hugetlb.h b/arch/arm64/include/asm/hugetlb.h
+index 293f880865e8d0..f0304273eb3519 100644
+--- a/arch/arm64/include/asm/hugetlb.h
++++ b/arch/arm64/include/asm/hugetlb.h
+@@ -34,8 +34,8 @@ extern int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep,
+ pte_t pte, int dirty);
+ #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
+-extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+- unsigned long addr, pte_t *ptep);
++extern pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep, unsigned long sz);
+ #define __HAVE_ARCH_HUGE_PTEP_SET_WRPROTECT
+ extern void huge_ptep_set_wrprotect(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep);
+diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
+index 0a6956bbfb3269..fe167ce297a161 100644
+--- a/arch/arm64/mm/hugetlbpage.c
++++ b/arch/arm64/mm/hugetlbpage.c
+@@ -100,20 +100,11 @@ static int find_num_contig(struct mm_struct *mm, unsigned long addr,
+
+ static inline int num_contig_ptes(unsigned long size, size_t *pgsize)
+ {
+- int contig_ptes = 0;
++ int contig_ptes = 1;
+
+ *pgsize = size;
+
+ switch (size) {
+-#ifndef __PAGETABLE_PMD_FOLDED
+- case PUD_SIZE:
+- if (pud_sect_supported())
+- contig_ptes = 1;
+- break;
+-#endif
+- case PMD_SIZE:
+- contig_ptes = 1;
+- break;
+ case CONT_PMD_SIZE:
+ *pgsize = PMD_SIZE;
+ contig_ptes = CONT_PMDS;
+@@ -122,6 +113,8 @@ static inline int num_contig_ptes(unsigned long size, size_t *pgsize)
+ *pgsize = PAGE_SIZE;
+ contig_ptes = CONT_PTES;
+ break;
++ default:
++ WARN_ON(!__hugetlb_valid_size(size));
+ }
+
+ return contig_ptes;
+@@ -163,24 +156,23 @@ static pte_t get_clear_contig(struct mm_struct *mm,
+ unsigned long pgsize,
+ unsigned long ncontig)
+ {
+- pte_t orig_pte = __ptep_get(ptep);
+- unsigned long i;
+-
+- for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
+- pte_t pte = __ptep_get_and_clear(mm, addr, ptep);
+-
+- /*
+- * If HW_AFDBM is enabled, then the HW could turn on
+- * the dirty or accessed bit for any page in the set,
+- * so check them all.
+- */
+- if (pte_dirty(pte))
+- orig_pte = pte_mkdirty(orig_pte);
+-
+- if (pte_young(pte))
+- orig_pte = pte_mkyoung(orig_pte);
++ pte_t pte, tmp_pte;
++ bool present;
++
++ pte = __ptep_get_and_clear(mm, addr, ptep);
++ present = pte_present(pte);
++ while (--ncontig) {
++ ptep++;
++ addr += pgsize;
++ tmp_pte = __ptep_get_and_clear(mm, addr, ptep);
++ if (present) {
++ if (pte_dirty(tmp_pte))
++ pte = pte_mkdirty(pte);
++ if (pte_young(tmp_pte))
++ pte = pte_mkyoung(pte);
++ }
+ }
+- return orig_pte;
++ return pte;
+ }
+
+ static pte_t get_clear_contig_flush(struct mm_struct *mm,
+@@ -385,18 +377,13 @@ void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
+ __pte_clear(mm, addr, ptep);
+ }
+
+-pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+- unsigned long addr, pte_t *ptep)
++pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep, unsigned long sz)
+ {
+ int ncontig;
+ size_t pgsize;
+- pte_t orig_pte = __ptep_get(ptep);
+-
+- if (!pte_cont(orig_pte))
+- return __ptep_get_and_clear(mm, addr, ptep);
+-
+- ncontig = find_num_contig(mm, addr, ptep, &pgsize);
+
++ ncontig = num_contig_ptes(sz, &pgsize);
+ return get_clear_contig(mm, addr, ptep, pgsize, ncontig);
+ }
+
+@@ -538,6 +525,8 @@ bool __init arch_hugetlb_valid_size(unsigned long size)
+
+ pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep)
+ {
++ unsigned long psize = huge_page_size(hstate_vma(vma));
++
+ if (alternative_has_cap_unlikely(ARM64_WORKAROUND_2645198)) {
+ /*
+ * Break-before-make (BBM) is required for all user space mappings
+@@ -547,7 +536,7 @@ pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma, unsigned long addr
+ if (pte_user_exec(__ptep_get(ptep)))
+ return huge_ptep_clear_flush(vma, addr, ptep);
+ }
+- return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
++ return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, psize);
+ }
+
+ void huge_ptep_modify_prot_commit(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep,
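
A minimal kernel-context sketch (not from the patch; it relies only on the
<linux/hugetlb.h> helpers the hunks above already use) of the new calling
convention, where the huge page size is derived from the VMA's hstate instead
of re-reading the PTE:

#include <linux/hugetlb.h>

/* Hypothetical caller: pass sz down, as the converted call sites above
 * do, so the implementation never has to inspect a possibly
 * already-cleared PTE to work out the contiguous-entry count. */
static pte_t clear_huge_pte(struct vm_area_struct *vma, unsigned long addr,
			    pte_t *ptep)
{
	unsigned long sz = huge_page_size(hstate_vma(vma));

	return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz);
}
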
+diff --git a/arch/loongarch/include/asm/bug.h b/arch/loongarch/include/asm/bug.h
+index 08388876ade4ce..561ac1bf79e26c 100644
+--- a/arch/loongarch/include/asm/bug.h
++++ b/arch/loongarch/include/asm/bug.h
+@@ -4,6 +4,7 @@
+
+ #include <asm/break.h>
+ #include <linux/stringify.h>
++#include <linux/objtool.h>
+
+ #ifndef CONFIG_DEBUG_BUGVERBOSE
+ #define _BUGVERBOSE_LOCATION(file, line)
+@@ -33,25 +34,25 @@
+
+ #define ASM_BUG_FLAGS(flags) \
+ __BUG_ENTRY(flags) \
+- break BRK_BUG
++ break BRK_BUG;
+
+ #define ASM_BUG() ASM_BUG_FLAGS(0)
+
+-#define __BUG_FLAGS(flags) \
+- asm_inline volatile (__stringify(ASM_BUG_FLAGS(flags)));
++#define __BUG_FLAGS(flags, extra) \
++ asm_inline volatile (__stringify(ASM_BUG_FLAGS(flags)) \
++ extra);
+
+ #define __WARN_FLAGS(flags) \
+ do { \
+ instrumentation_begin(); \
+- __BUG_FLAGS(BUGFLAG_WARNING|(flags)); \
+- annotate_reachable(); \
++ __BUG_FLAGS(BUGFLAG_WARNING|(flags), ASM_REACHABLE); \
+ instrumentation_end(); \
+ } while (0)
+
+ #define BUG() \
+ do { \
+ instrumentation_begin(); \
+- __BUG_FLAGS(0); \
++ __BUG_FLAGS(0, ""); \
+ unreachable(); \
+ } while (0)
+
+diff --git a/arch/loongarch/include/asm/hugetlb.h b/arch/loongarch/include/asm/hugetlb.h
+index 376c0708e2979b..6302e60fbaee1a 100644
+--- a/arch/loongarch/include/asm/hugetlb.h
++++ b/arch/loongarch/include/asm/hugetlb.h
+@@ -41,7 +41,8 @@ static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
+
+ #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
+ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+- unsigned long addr, pte_t *ptep)
++ unsigned long addr, pte_t *ptep,
++ unsigned long sz)
+ {
+ pte_t clear;
+ pte_t pte = ptep_get(ptep);
+@@ -56,8 +57,9 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep)
+ {
+ pte_t pte;
++ unsigned long sz = huge_page_size(hstate_vma(vma));
+
+- pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
++ pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz);
+ flush_tlb_page(vma, addr);
+ return pte;
+ }
+diff --git a/arch/loongarch/kernel/machine_kexec.c b/arch/loongarch/kernel/machine_kexec.c
+index 8ae641dc53bb77..f9381800e291cc 100644
+--- a/arch/loongarch/kernel/machine_kexec.c
++++ b/arch/loongarch/kernel/machine_kexec.c
+@@ -126,14 +126,14 @@ void kexec_reboot(void)
+ /* All secondary cpus go to kexec_smp_wait */
+ if (smp_processor_id() > 0) {
+ relocated_kexec_smp_wait(NULL);
+- unreachable();
++ BUG();
+ }
+ #endif
+
+ do_kexec = (void *)reboot_code_buffer;
+ do_kexec(efi_boot, cmdline_ptr, systable_ptr, start_addr, first_ind_entry);
+
+- unreachable();
++ BUG();
+ }
+
+
+diff --git a/arch/loongarch/kernel/setup.c b/arch/loongarch/kernel/setup.c
+index 56934fe58170e0..1fa6a604734ef2 100644
+--- a/arch/loongarch/kernel/setup.c
++++ b/arch/loongarch/kernel/setup.c
+@@ -387,6 +387,9 @@ static void __init check_kernel_sections_mem(void)
+ */
+ static void __init arch_mem_init(char **cmdline_p)
+ {
++ /* Recalculate max_low_pfn for "mem=xxx" */
++ max_pfn = max_low_pfn = PHYS_PFN(memblock_end_of_DRAM());
++
+ if (usermem)
+ pr_info("User-defined physical RAM map overwrite\n");
+
+diff --git a/arch/loongarch/kernel/smp.c b/arch/loongarch/kernel/smp.c
+index 5d59e9ce2772d8..d96065dbe779be 100644
+--- a/arch/loongarch/kernel/smp.c
++++ b/arch/loongarch/kernel/smp.c
+@@ -19,6 +19,7 @@
+ #include <linux/smp.h>
+ #include <linux/threads.h>
+ #include <linux/export.h>
++#include <linux/suspend.h>
+ #include <linux/syscore_ops.h>
+ #include <linux/time.h>
+ #include <linux/tracepoint.h>
+@@ -423,7 +424,7 @@ void loongson_cpu_die(unsigned int cpu)
+ mb();
+ }
+
+-void __noreturn arch_cpu_idle_dead(void)
++static void __noreturn idle_play_dead(void)
+ {
+ register uint64_t addr;
+ register void (*init_fn)(void);
+@@ -447,6 +448,50 @@ void __noreturn arch_cpu_idle_dead(void)
+ BUG();
+ }
+
++#ifdef CONFIG_HIBERNATION
++static void __noreturn poll_play_dead(void)
++{
++ register uint64_t addr;
++ register void (*init_fn)(void);
++
++ idle_task_exit();
++ __this_cpu_write(cpu_state, CPU_DEAD);
++
++ __smp_mb();
++ do {
++ __asm__ __volatile__("nop\n\t");
++ addr = iocsr_read64(LOONGARCH_IOCSR_MBUF0);
++ } while (addr == 0);
++
++ init_fn = (void *)TO_CACHE(addr);
++ iocsr_write32(0xffffffff, LOONGARCH_IOCSR_IPI_CLEAR);
++
++ init_fn();
++ BUG();
++}
++#endif
++
++static void (*play_dead)(void) = idle_play_dead;
++
++void __noreturn arch_cpu_idle_dead(void)
++{
++ play_dead();
++ BUG(); /* play_dead() doesn't return */
++}
++
++#ifdef CONFIG_HIBERNATION
++int hibernate_resume_nonboot_cpu_disable(void)
++{
++ int ret;
++
++ play_dead = poll_play_dead;
++ ret = suspend_disable_secondary_cpus();
++ play_dead = idle_play_dead;
++
++ return ret;
++}
++#endif
++
+ #endif
+
+ /*
+diff --git a/arch/loongarch/kvm/exit.c b/arch/loongarch/kvm/exit.c
+index 90894f70ff4a50..add52e927f1530 100644
+--- a/arch/loongarch/kvm/exit.c
++++ b/arch/loongarch/kvm/exit.c
+@@ -624,6 +624,12 @@ static int kvm_handle_rdwr_fault(struct kvm_vcpu *vcpu, bool write)
+ struct kvm_run *run = vcpu->run;
+ unsigned long badv = vcpu->arch.badv;
+
++ /* Inject ADE exception if exceed max GPA size */
++ if (unlikely(badv >= vcpu->kvm->arch.gpa_size)) {
++ kvm_queue_exception(vcpu, EXCCODE_ADE, EXSUBCODE_ADEM);
++ return RESUME_GUEST;
++ }
++
+ ret = kvm_handle_mm_fault(vcpu, badv, write);
+ if (ret) {
+ /* Treat as MMIO */
+diff --git a/arch/loongarch/kvm/main.c b/arch/loongarch/kvm/main.c
+index 7e8f5d6829ef0c..34fad2c29ee695 100644
+--- a/arch/loongarch/kvm/main.c
++++ b/arch/loongarch/kvm/main.c
+@@ -297,6 +297,13 @@ int kvm_arch_enable_virtualization_cpu(void)
+ kvm_debug("GCFG:%lx GSTAT:%lx GINTC:%lx GTLBC:%lx",
+ read_csr_gcfg(), read_csr_gstat(), read_csr_gintc(), read_csr_gtlbc());
+
++ /*
++ * HW Guest CSR registers are lost after CPU suspend and resume.
++ * Clear last_vcpu so that Guest CSR registers are forced to reload
++ * from vCPU SW state.
++ */
++ this_cpu_ptr(vmcs)->last_vcpu = NULL;
++
+ return 0;
+ }
+
+diff --git a/arch/loongarch/kvm/vcpu.c b/arch/loongarch/kvm/vcpu.c
+index 9d53eca66fcc70..e7a084de64f7bf 100644
+--- a/arch/loongarch/kvm/vcpu.c
++++ b/arch/loongarch/kvm/vcpu.c
+@@ -311,7 +311,7 @@ static int kvm_handle_exit(struct kvm_run *run, struct kvm_vcpu *vcpu)
+ {
+ int ret = RESUME_GUEST;
+ unsigned long estat = vcpu->arch.host_estat;
+- u32 intr = estat & 0x1fff; /* Ignore NMI */
++ u32 intr = estat & CSR_ESTAT_IS;
+ u32 ecode = (estat & CSR_ESTAT_EXC) >> CSR_ESTAT_EXC_SHIFT;
+
+ vcpu->mode = OUTSIDE_GUEST_MODE;
+diff --git a/arch/loongarch/kvm/vm.c b/arch/loongarch/kvm/vm.c
+index 4ba734aaef87a7..fe9e973912d440 100644
+--- a/arch/loongarch/kvm/vm.c
++++ b/arch/loongarch/kvm/vm.c
+@@ -46,7 +46,11 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
+ if (kvm_pvtime_supported())
+ kvm->arch.pv_features |= BIT(KVM_FEATURE_STEAL_TIME);
+
+- kvm->arch.gpa_size = BIT(cpu_vabits - 1);
++ /*
++ * cpu_vabits covers the user address space only (half of the total).
++ * The GPA size of a VM is the same as the size of the user address space.
++ */
++ kvm->arch.gpa_size = BIT(cpu_vabits);
+ kvm->arch.root_level = CONFIG_PGTABLE_LEVELS - 1;
+ kvm->arch.invalid_ptes[0] = 0;
+ kvm->arch.invalid_ptes[1] = (unsigned long)invalid_pte_table;
+diff --git a/arch/mips/include/asm/hugetlb.h b/arch/mips/include/asm/hugetlb.h
+index fd69c88085542e..00ee3c0366305c 100644
+--- a/arch/mips/include/asm/hugetlb.h
++++ b/arch/mips/include/asm/hugetlb.h
+@@ -32,7 +32,8 @@ static inline int prepare_hugepage_range(struct file *file,
+
+ #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
+ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+- unsigned long addr, pte_t *ptep)
++ unsigned long addr, pte_t *ptep,
++ unsigned long sz)
+ {
+ pte_t clear;
+ pte_t pte = *ptep;
+@@ -47,13 +48,14 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep)
+ {
+ pte_t pte;
++ unsigned long sz = huge_page_size(hstate_vma(vma));
+
+ /*
+ * clear the huge pte entry firstly, so that the other smp threads will
+ * not get old pte entry after finishing flush_tlb_page and before
+ * setting new huge pte entry
+ */
+- pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
++ pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz);
+ flush_tlb_page(vma, addr);
+ return pte;
+ }
+diff --git a/arch/parisc/include/asm/hugetlb.h b/arch/parisc/include/asm/hugetlb.h
+index 72daacc472a0a3..f7a91411dcc955 100644
+--- a/arch/parisc/include/asm/hugetlb.h
++++ b/arch/parisc/include/asm/hugetlb.h
+@@ -10,7 +10,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+
+ #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
+ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+- pte_t *ptep);
++ pte_t *ptep, unsigned long sz);
+
+ /*
+ * If the arch doesn't supply something else, assume that hugepage
+diff --git a/arch/parisc/mm/hugetlbpage.c b/arch/parisc/mm/hugetlbpage.c
+index aa664f7ddb6398..cec2b9a581dd3e 100644
+--- a/arch/parisc/mm/hugetlbpage.c
++++ b/arch/parisc/mm/hugetlbpage.c
+@@ -147,7 +147,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+
+
+ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+- pte_t *ptep)
++ pte_t *ptep, unsigned long sz)
+ {
+ pte_t entry;
+
+diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
+index dad2e7980f245b..86326587e58de8 100644
+--- a/arch/powerpc/include/asm/hugetlb.h
++++ b/arch/powerpc/include/asm/hugetlb.h
+@@ -45,7 +45,8 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+
+ #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
+ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+- unsigned long addr, pte_t *ptep)
++ unsigned long addr, pte_t *ptep,
++ unsigned long sz)
+ {
+ return __pte(pte_update(mm, addr, ptep, ~0UL, 0, 1));
+ }
+@@ -55,8 +56,9 @@ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep)
+ {
+ pte_t pte;
++ unsigned long sz = huge_page_size(hstate_vma(vma));
+
+- pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
++ pte = huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, sz);
+ flush_hugetlb_page(vma, addr);
+ return pte;
+ }
+diff --git a/arch/powerpc/kvm/e500_mmu_host.c b/arch/powerpc/kvm/e500_mmu_host.c
+index 6824e8139801c2..3708fa48bee95d 100644
+--- a/arch/powerpc/kvm/e500_mmu_host.c
++++ b/arch/powerpc/kvm/e500_mmu_host.c
+@@ -242,7 +242,7 @@ static inline int tlbe_is_writable(struct kvm_book3e_206_tlb_entry *tlbe)
+ return tlbe->mas7_3 & (MAS3_SW|MAS3_UW);
+ }
+
+-static inline bool kvmppc_e500_ref_setup(struct tlbe_ref *ref,
++static inline void kvmppc_e500_ref_setup(struct tlbe_ref *ref,
+ struct kvm_book3e_206_tlb_entry *gtlbe,
+ kvm_pfn_t pfn, unsigned int wimg)
+ {
+@@ -252,7 +252,11 @@ static inline bool kvmppc_e500_ref_setup(struct tlbe_ref *ref,
+ /* Use guest supplied MAS2_G and MAS2_E */
+ ref->flags |= (gtlbe->mas2 & MAS2_ATTRIB_MASK) | wimg;
+
+- return tlbe_is_writable(gtlbe);
++ /* Mark the page accessed */
++ kvm_set_pfn_accessed(pfn);
++
++ if (tlbe_is_writable(gtlbe))
++ kvm_set_pfn_dirty(pfn);
+ }
+
+ static inline void kvmppc_e500_ref_release(struct tlbe_ref *ref)
+@@ -322,7 +326,6 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ {
+ struct kvm_memory_slot *slot;
+ unsigned long pfn = 0; /* silence GCC warning */
+- struct page *page = NULL;
+ unsigned long hva;
+ int pfnmap = 0;
+ int tsize = BOOK3E_PAGESZ_4K;
+@@ -334,7 +337,6 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ unsigned int wimg = 0;
+ pgd_t *pgdir;
+ unsigned long flags;
+- bool writable = false;
+
+ /* used to check for invalidations in progress */
+ mmu_seq = kvm->mmu_invalidate_seq;
+@@ -444,7 +446,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+
+ if (likely(!pfnmap)) {
+ tsize_pages = 1UL << (tsize + 10 - PAGE_SHIFT);
+- pfn = __kvm_faultin_pfn(slot, gfn, FOLL_WRITE, NULL, &page);
++ pfn = gfn_to_pfn_memslot(slot, gfn);
+ if (is_error_noslot_pfn(pfn)) {
+ if (printk_ratelimit())
+ pr_err("%s: real page not found for gfn %lx\n",
+@@ -489,7 +491,7 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ }
+ local_irq_restore(flags);
+
+- writable = kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
++ kvmppc_e500_ref_setup(ref, gtlbe, pfn, wimg);
+ kvmppc_e500_setup_stlbe(&vcpu_e500->vcpu, gtlbe, tsize,
+ ref, gvaddr, stlbe);
+
+@@ -497,8 +499,11 @@ static inline int kvmppc_e500_shadow_map(struct kvmppc_vcpu_e500 *vcpu_e500,
+ kvmppc_mmu_flush_icache(pfn);
+
+ out:
+- kvm_release_faultin_page(kvm, page, !!ret, writable);
+ spin_unlock(&kvm->mmu_lock);
++
++ /* Drop refcount on page, so that mmu notifiers can clear it */
++ kvm_release_pfn_clean(pfn);
++
+ return ret;
+ }
+
+diff --git a/arch/riscv/include/asm/hugetlb.h b/arch/riscv/include/asm/hugetlb.h
+index faf3624d80577c..4461264977684b 100644
+--- a/arch/riscv/include/asm/hugetlb.h
++++ b/arch/riscv/include/asm/hugetlb.h
+@@ -28,7 +28,8 @@ void set_huge_pte_at(struct mm_struct *mm,
+
+ #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
+ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+- unsigned long addr, pte_t *ptep);
++ unsigned long addr, pte_t *ptep,
++ unsigned long sz);
+
+ #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
+ pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
+index 42314f0939220a..b4a78a4b35cff5 100644
+--- a/arch/riscv/mm/hugetlbpage.c
++++ b/arch/riscv/mm/hugetlbpage.c
+@@ -293,7 +293,7 @@ int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+
+ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ unsigned long addr,
+- pte_t *ptep)
++ pte_t *ptep, unsigned long sz)
+ {
+ pte_t orig_pte = ptep_get(ptep);
+ int pte_num;
+diff --git a/arch/s390/include/asm/hugetlb.h b/arch/s390/include/asm/hugetlb.h
+index cf1b5d6fb1a629..4731a51241ba86 100644
+--- a/arch/s390/include/asm/hugetlb.h
++++ b/arch/s390/include/asm/hugetlb.h
+@@ -20,8 +20,15 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+ void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pte);
+ pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep);
+-pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+- unsigned long addr, pte_t *ptep);
++pte_t __huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
++ pte_t *ptep);
++
++static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
++ unsigned long addr, pte_t *ptep,
++ unsigned long sz)
++{
++ return __huge_ptep_get_and_clear(mm, addr, ptep);
++}
+
+ /*
+ * If the arch doesn't supply something else, assume that hugepage
+@@ -57,7 +64,7 @@ static inline void huge_pte_clear(struct mm_struct *mm, unsigned long addr,
+ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+ unsigned long address, pte_t *ptep)
+ {
+- return huge_ptep_get_and_clear(vma->vm_mm, address, ptep);
++ return __huge_ptep_get_and_clear(vma->vm_mm, address, ptep);
+ }
+
+ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+@@ -66,7 +73,7 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+ {
+ int changed = !pte_same(huge_ptep_get(vma->vm_mm, addr, ptep), pte);
+ if (changed) {
+- huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
++ __huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
+ __set_huge_pte_at(vma->vm_mm, addr, ptep, pte);
+ }
+ return changed;
+@@ -75,7 +82,7 @@ static inline int huge_ptep_set_access_flags(struct vm_area_struct *vma,
+ static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
+ {
+- pte_t pte = huge_ptep_get_and_clear(mm, addr, ptep);
++ pte_t pte = __huge_ptep_get_and_clear(mm, addr, ptep);
+ __set_huge_pte_at(mm, addr, ptep, pte_wrprotect(pte));
+ }
+
+diff --git a/arch/s390/kernel/traps.c b/arch/s390/kernel/traps.c
+index 160b2acba8db2b..908bae84984351 100644
+--- a/arch/s390/kernel/traps.c
++++ b/arch/s390/kernel/traps.c
+@@ -284,10 +284,10 @@ static void __init test_monitor_call(void)
+ return;
+ asm volatile(
+ " mc 0,0\n"
+- "0: xgr %0,%0\n"
++ "0: lhi %[val],0\n"
+ "1:\n"
+- EX_TABLE(0b,1b)
+- : "+d" (val));
++ EX_TABLE(0b, 1b)
++ : [val] "+d" (val));
+ if (!val)
+ panic("Monitor call doesn't work!\n");
+ }
+diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
+index ded0eff58a192a..9c1ba8c0cac61a 100644
+--- a/arch/s390/mm/hugetlbpage.c
++++ b/arch/s390/mm/hugetlbpage.c
+@@ -174,8 +174,8 @@ pte_t huge_ptep_get(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
+ return __rste_to_pte(pte_val(*ptep));
+ }
+
+-pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+- unsigned long addr, pte_t *ptep)
++pte_t __huge_ptep_get_and_clear(struct mm_struct *mm,
++ unsigned long addr, pte_t *ptep)
+ {
+ pte_t pte = huge_ptep_get(mm, addr, ptep);
+ pmd_t *pmdp = (pmd_t *) ptep;
+diff --git a/arch/sparc/include/asm/hugetlb.h b/arch/sparc/include/asm/hugetlb.h
+index c714ca6a05aa04..e7a9cdd498dca6 100644
+--- a/arch/sparc/include/asm/hugetlb.h
++++ b/arch/sparc/include/asm/hugetlb.h
+@@ -20,7 +20,7 @@ void __set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+
+ #define __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
+ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+- pte_t *ptep);
++ pte_t *ptep, unsigned long sz);
+
+ #define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
+ static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
+diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
+index cc91ca7a1e182c..c276d70a747995 100644
+--- a/arch/sparc/mm/hugetlbpage.c
++++ b/arch/sparc/mm/hugetlbpage.c
+@@ -368,7 +368,7 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+ }
+
+ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
+- pte_t *ptep)
++ pte_t *ptep, unsigned long sz)
+ {
+ unsigned int i, nptes, orig_shift, shift;
+ unsigned long size;
+diff --git a/arch/x86/boot/compressed/pgtable_64.c b/arch/x86/boot/compressed/pgtable_64.c
+index c882e1f67af01c..d8c5de40669d36 100644
+--- a/arch/x86/boot/compressed/pgtable_64.c
++++ b/arch/x86/boot/compressed/pgtable_64.c
+@@ -1,6 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "misc.h"
+ #include <asm/bootparam.h>
++#include <asm/bootparam_utils.h>
+ #include <asm/e820/types.h>
+ #include <asm/processor.h>
+ #include "pgtable.h"
+@@ -107,6 +108,7 @@ asmlinkage void configure_5level_paging(struct boot_params *bp, void *pgtable)
+ bool l5_required = false;
+
+ /* Initialize boot_params. Required for cmdline_find_option_bool(). */
++ sanitize_boot_params(bp);
+ boot_params_ptr = bp;
+
+ /*
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 8499b9cb9c8263..e4dd840e0becd4 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -761,6 +761,7 @@ struct kvm_vcpu_arch {
+ u32 pkru;
+ u32 hflags;
+ u64 efer;
++ u64 host_debugctl;
+ u64 apic_base;
+ struct kvm_lapic *apic; /* kernel irqchip context */
+ bool load_eoi_exitmap_pending;
+diff --git a/arch/x86/kernel/amd_nb.c b/arch/x86/kernel/amd_nb.c
+index 37b8244899d895..04712fd0c96497 100644
+--- a/arch/x86/kernel/amd_nb.c
++++ b/arch/x86/kernel/amd_nb.c
+@@ -405,7 +405,6 @@ bool __init early_is_amd_nb(u32 device)
+
+ struct resource *amd_get_mmconfig_range(struct resource *res)
+ {
+- u32 address;
+ u64 base, msr;
+ unsigned int segn_busn_bits;
+
+@@ -413,13 +412,11 @@ struct resource *amd_get_mmconfig_range(struct resource *res)
+ boot_cpu_data.x86_vendor != X86_VENDOR_HYGON)
+ return NULL;
+
+- /* assume all cpus from fam10h have mmconfig */
+- if (boot_cpu_data.x86 < 0x10)
++ /* Assume CPUs from Fam10h have mmconfig, although not all VMs do */
++ if (boot_cpu_data.x86 < 0x10 ||
++ rdmsrl_safe(MSR_FAM10H_MMIO_CONF_BASE, &msr))
+ return NULL;
+
+- address = MSR_FAM10H_MMIO_CONF_BASE;
+- rdmsrl(address, msr);
+-
+ /* mmconfig is not enabled */
+ if (!(msr & FAM10H_MMIO_CONF_ENABLE))
+ return NULL;
+diff --git a/arch/x86/kernel/cpu/cacheinfo.c b/arch/x86/kernel/cpu/cacheinfo.c
+index e6fa03ed9172c0..a6c6bccfa8b8d3 100644
+--- a/arch/x86/kernel/cpu/cacheinfo.c
++++ b/arch/x86/kernel/cpu/cacheinfo.c
+@@ -808,7 +808,7 @@ void init_intel_cacheinfo(struct cpuinfo_x86 *c)
+ cpuid(2, &regs[0], &regs[1], &regs[2], &regs[3]);
+
+ /* If bit 31 is set, this is an unknown format */
+- for (j = 0 ; j < 3 ; j++)
++ for (j = 0 ; j < 4 ; j++)
+ if (regs[j] & (1 << 31))
+ regs[j] = 0;
+
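
For reference, a standalone userspace sketch (x86-only, not from the patch)
of the CPUID leaf 0x2 convention behind the loop-bound fix above: descriptor
bytes arrive in all four output registers, and bit 31 flags a register's
contents as reserved, so the validity check has to walk j < 4, not j < 3:

#include <stdint.h>
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	uint32_t regs[4];
	int j;

	__cpuid(2, regs[0], regs[1], regs[2], regs[3]);

	/* Bit 31 set means the register holds no valid descriptors. */
	for (j = 0; j < 4; j++)
		if (regs[j] & (1u << 31))
			regs[j] = 0;

	for (j = 0; j < 4; j++)
		printf("reg[%d] = %08x\n", j, regs[j]);
	return 0;
}
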
+diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
+index 4b5f3d0521517a..b93d88ec141759 100644
+--- a/arch/x86/kernel/cpu/intel.c
++++ b/arch/x86/kernel/cpu/intel.c
+@@ -672,26 +672,37 @@ static unsigned int intel_size_cache(struct cpuinfo_x86 *c, unsigned int size)
+ }
+ #endif
+
+-#define TLB_INST_4K 0x01
+-#define TLB_INST_4M 0x02
+-#define TLB_INST_2M_4M 0x03
++#define TLB_INST_4K 0x01
++#define TLB_INST_4M 0x02
++#define TLB_INST_2M_4M 0x03
+
+-#define TLB_INST_ALL 0x05
+-#define TLB_INST_1G 0x06
++#define TLB_INST_ALL 0x05
++#define TLB_INST_1G 0x06
+
+-#define TLB_DATA_4K 0x11
+-#define TLB_DATA_4M 0x12
+-#define TLB_DATA_2M_4M 0x13
+-#define TLB_DATA_4K_4M 0x14
++#define TLB_DATA_4K 0x11
++#define TLB_DATA_4M 0x12
++#define TLB_DATA_2M_4M 0x13
++#define TLB_DATA_4K_4M 0x14
+
+-#define TLB_DATA_1G 0x16
++#define TLB_DATA_1G 0x16
++#define TLB_DATA_1G_2M_4M 0x17
+
+-#define TLB_DATA0_4K 0x21
+-#define TLB_DATA0_4M 0x22
+-#define TLB_DATA0_2M_4M 0x23
++#define TLB_DATA0_4K 0x21
++#define TLB_DATA0_4M 0x22
++#define TLB_DATA0_2M_4M 0x23
+
+-#define STLB_4K 0x41
+-#define STLB_4K_2M 0x42
++#define STLB_4K 0x41
++#define STLB_4K_2M 0x42
++
++/*
++ * All of leaf 0x2's one-byte TLB descriptors imply the same number of
++ * entries for their respective TLB types. The 0x63 descriptor is an
++ * exception: it implies 4 dTLB entries for 1GB pages and 32 dTLB
++ * entries for 2MB or 4MB pages. Encode descriptor 0x63's dTLB entry
++ * count for 2MB/4MB pages here, as its count for dTLB 1GB pages is
++ * already in the intel_tlb_table[] mapping.
++ */
++#define TLB_0x63_2M_4M_ENTRIES 32
+
+ static const struct _tlb_table intel_tlb_table[] = {
+ { 0x01, TLB_INST_4K, 32, " TLB_INST 4 KByte pages, 4-way set associative" },
+@@ -713,7 +724,8 @@ static const struct _tlb_table intel_tlb_table[] = {
+ { 0x5c, TLB_DATA_4K_4M, 128, " TLB_DATA 4 KByte and 4 MByte pages" },
+ { 0x5d, TLB_DATA_4K_4M, 256, " TLB_DATA 4 KByte and 4 MByte pages" },
+ { 0x61, TLB_INST_4K, 48, " TLB_INST 4 KByte pages, full associative" },
+- { 0x63, TLB_DATA_1G, 4, " TLB_DATA 1 GByte pages, 4-way set associative" },
++ { 0x63, TLB_DATA_1G_2M_4M, 4, " TLB_DATA 1 GByte pages, 4-way set associative"
++ " (plus 32 entries TLB_DATA 2 MByte or 4 MByte pages, not encoded here)" },
+ { 0x6b, TLB_DATA_4K, 256, " TLB_DATA 4 KByte pages, 8-way associative" },
+ { 0x6c, TLB_DATA_2M_4M, 128, " TLB_DATA 2 MByte or 4 MByte pages, 8-way associative" },
+ { 0x6d, TLB_DATA_1G, 16, " TLB_DATA 1 GByte pages, fully associative" },
+@@ -813,6 +825,12 @@ static void intel_tlb_lookup(const unsigned char desc)
+ if (tlb_lld_4m[ENTRIES] < intel_tlb_table[k].entries)
+ tlb_lld_4m[ENTRIES] = intel_tlb_table[k].entries;
+ break;
++ case TLB_DATA_1G_2M_4M:
++ if (tlb_lld_2m[ENTRIES] < TLB_0x63_2M_4M_ENTRIES)
++ tlb_lld_2m[ENTRIES] = TLB_0x63_2M_4M_ENTRIES;
++ if (tlb_lld_4m[ENTRIES] < TLB_0x63_2M_4M_ENTRIES)
++ tlb_lld_4m[ENTRIES] = TLB_0x63_2M_4M_ENTRIES;
++ fallthrough;
+ case TLB_DATA_1G:
+ if (tlb_lld_1g[ENTRIES] < intel_tlb_table[k].entries)
+ tlb_lld_1g[ENTRIES] = intel_tlb_table[k].entries;
+@@ -836,7 +854,7 @@ static void intel_detect_tlb(struct cpuinfo_x86 *c)
+ cpuid(2, &regs[0], &regs[1], &regs[2], &regs[3]);
+
+ /* If bit 31 is set, this is an unknown format */
+- for (j = 0 ; j < 3 ; j++)
++ for (j = 0 ; j < 4 ; j++)
+ if (regs[j] & (1 << 31))
+ regs[j] = 0;
+
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index f5365b32582a5c..def6a2854a4b7c 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -175,23 +175,29 @@ static bool need_sha_check(u32 cur_rev)
+ {
+ switch (cur_rev >> 8) {
+ case 0x80012: return cur_rev <= 0x800126f; break;
++ case 0x80082: return cur_rev <= 0x800820f; break;
+ case 0x83010: return cur_rev <= 0x830107c; break;
+ case 0x86001: return cur_rev <= 0x860010e; break;
+ case 0x86081: return cur_rev <= 0x8608108; break;
+ case 0x87010: return cur_rev <= 0x8701034; break;
+ case 0x8a000: return cur_rev <= 0x8a0000a; break;
++ case 0xa0010: return cur_rev <= 0xa00107a; break;
+ case 0xa0011: return cur_rev <= 0xa0011da; break;
+ case 0xa0012: return cur_rev <= 0xa001243; break;
++ case 0xa0082: return cur_rev <= 0xa00820e; break;
+ case 0xa1011: return cur_rev <= 0xa101153; break;
+ case 0xa1012: return cur_rev <= 0xa10124e; break;
+ case 0xa1081: return cur_rev <= 0xa108109; break;
+ case 0xa2010: return cur_rev <= 0xa20102f; break;
+ case 0xa2012: return cur_rev <= 0xa201212; break;
++ case 0xa4041: return cur_rev <= 0xa404109; break;
++ case 0xa5000: return cur_rev <= 0xa500013; break;
+ case 0xa6012: return cur_rev <= 0xa60120a; break;
+ case 0xa7041: return cur_rev <= 0xa704109; break;
+ case 0xa7052: return cur_rev <= 0xa705208; break;
+ case 0xa7080: return cur_rev <= 0xa708009; break;
+ case 0xa70c0: return cur_rev <= 0xa70C009; break;
++ case 0xaa001: return cur_rev <= 0xaa00116; break;
+ case 0xaa002: return cur_rev <= 0xaa00218; break;
+ default: break;
+ }
+diff --git a/arch/x86/kernel/cpu/sgx/ioctl.c b/arch/x86/kernel/cpu/sgx/ioctl.c
+index b65ab214bdf57d..776a20172867ea 100644
+--- a/arch/x86/kernel/cpu/sgx/ioctl.c
++++ b/arch/x86/kernel/cpu/sgx/ioctl.c
+@@ -64,6 +64,13 @@ static int sgx_encl_create(struct sgx_encl *encl, struct sgx_secs *secs)
+ struct file *backing;
+ long ret;
+
++ /*
++ * ECREATE would detect this too, but checking here also ensures
++ * that the 'encl_size' calculations below can never overflow.
++ */
++ if (!is_power_of_2(secs->size))
++ return -EINVAL;
++
+ va_page = sgx_encl_grow(encl, true);
+ if (IS_ERR(va_page))
+ return PTR_ERR(va_page);
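
A small standalone sketch (not from the patch) of the guard being added: the
kernel's is_power_of_2() reduces to the classic bit trick below, and
rejecting non-power-of-two SECS sizes up front keeps the later encl_size
arithmetic from overflowing:

#include <stdbool.h>
#include <stdio.h>

static bool is_power_of_2(unsigned long n)
{
	/* Exactly one bit set: n & (n - 1) clears the lowest set bit. */
	return n != 0 && (n & (n - 1)) == 0;
}

int main(void)
{
	unsigned long sizes[] = { 0x1000, 0x3000, 1UL << 20, (1UL << 20) + 1 };

	for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
		printf("0x%lx -> %s\n", sizes[i],
		       is_power_of_2(sizes[i]) ? "accepted" : "-EINVAL");
	return 0;
}
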
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 83bfecd1a6e40c..9157b4485dedce 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -1387,7 +1387,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+
+ entry->ecx = entry->edx = 0;
+ if (!enable_pmu || !kvm_cpu_cap_has(X86_FEATURE_PERFMON_V2)) {
+- entry->eax = entry->ebx;
++ entry->eax = entry->ebx = 0;
+ break;
+ }
+
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index e9af87b1281407..3ec56bf76ef164 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -4579,6 +4579,8 @@ void sev_es_vcpu_reset(struct vcpu_svm *svm)
+
+ void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_area *hostsa)
+ {
++ struct kvm *kvm = svm->vcpu.kvm;
++
+ /*
+ * All host state for SEV-ES guests is categorized into three swap types
+ * based on how it is handled by hardware during a world switch:
+@@ -4602,10 +4604,15 @@ void sev_es_prepare_switch_to_guest(struct vcpu_svm *svm, struct sev_es_save_are
+
+ /*
+ * If DebugSwap is enabled, debug registers are loaded but NOT saved by
+- * the CPU (Type-B). If DebugSwap is disabled/unsupported, the CPU both
+- * saves and loads debug registers (Type-A).
++ * the CPU (Type-B). If DebugSwap is disabled/unsupported, the CPU does
++ * not save or load debug registers. Sadly, on CPUs without
++ * ALLOWED_SEV_FEATURES, KVM can't prevent SNP guests from enabling
++ * DebugSwap on secondary vCPUs without KVM's knowledge via "AP Create".
++ * Save all registers if DebugSwap is supported to prevent host state
++ * from being clobbered by a misbehaving guest.
+ */
+- if (sev_vcpu_has_debug_swap(svm)) {
++ if (sev_vcpu_has_debug_swap(svm) ||
++ (sev_snp_guest(kvm) && cpu_feature_enabled(X86_FEATURE_DEBUG_SWAP))) {
+ hostsa->dr0 = native_get_debugreg(0);
+ hostsa->dr1 = native_get_debugreg(1);
+ hostsa->dr2 = native_get_debugreg(2);
+diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
+index a7cb7c82b38e39..e39ab7c0be4e9c 100644
+--- a/arch/x86/kvm/svm/svm.c
++++ b/arch/x86/kvm/svm/svm.c
+@@ -3167,6 +3167,27 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
+ kvm_pr_unimpl_wrmsr(vcpu, ecx, data);
+ break;
+ }
++
++ /*
++ * AMD changed the architectural behavior of bits 5:2. On CPUs
++ * without BusLockTrap, bits 5:2 control "external pins", but
++ * on CPUs that support BusLockDetect, bit 2 enables BusLockTrap
++ * and bits 5:3 are reserved-to-zero. Sadly, old KVM allowed
++ * the guest to set bits 5:2 despite not actually virtualizing
++ * Performance-Monitoring/Breakpoint external pins. Drop bits
++ * 5:2 for backwards compatibility.
++ */
++ data &= ~GENMASK(5, 2);
++
++ /*
++ * Suppress BTF as KVM doesn't virtualize BTF, but there's no
++ * way to communicate lack of support to the guest.
++ */
++ if (data & DEBUGCTLMSR_BTF) {
++ kvm_pr_unimpl_wrmsr(vcpu, MSR_IA32_DEBUGCTLMSR, data);
++ data &= ~DEBUGCTLMSR_BTF;
++ }
++
+ if (data & DEBUGCTL_RESERVED_BITS)
+ return 1;
+
+@@ -4176,6 +4197,18 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
+
+ guest_state_enter_irqoff();
+
++ /*
++ * Set RFLAGS.IF prior to VMRUN, as the host's RFLAGS.IF at the time of
++ * VMRUN controls whether or not physical IRQs are masked (KVM always
++ * runs with V_INTR_MASKING_MASK). Toggle RFLAGS.IF here to avoid the
++ * temptation to do STI+VMRUN+CLI, as AMD CPUs bleed the STI shadow
++ * into guest state if delivery of an event during VMRUN triggers a
++ * #VMEXIT, and the guest_state transitions already tell lockdep that
++ * IRQs are being enabled/disabled. Note! GIF=0 for the entirety of
++ * this path, so IRQs aren't actually unmasked while running host code.
++ */
++ raw_local_irq_enable();
++
+ amd_clear_divider();
+
+ if (sev_es_guest(vcpu->kvm))
+@@ -4184,6 +4217,8 @@ static noinstr void svm_vcpu_enter_exit(struct kvm_vcpu *vcpu, bool spec_ctrl_in
+ else
+ __svm_vcpu_run(svm, spec_ctrl_intercepted);
+
++ raw_local_irq_disable();
++
+ guest_state_exit_irqoff();
+ }
+
+@@ -4240,6 +4275,16 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
+ clgi();
+ kvm_load_guest_xsave_state(vcpu);
+
++ /*
++ * Hardware only context switches DEBUGCTL if LBR virtualization is
++ * enabled. Manually load DEBUGCTL if necessary (and restore it after
++ * VM-Exit), as running with the host's DEBUGCTL can negatively affect
++ * guest state and can even be fatal, e.g. due to Bus Lock Detect.
++ */
++ if (!(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) &&
++ vcpu->arch.host_debugctl != svm->vmcb->save.dbgctl)
++ update_debugctlmsr(svm->vmcb->save.dbgctl);
++
+ kvm_wait_lapic_expire(vcpu);
+
+ /*
+@@ -4267,6 +4312,10 @@ static __no_kcsan fastpath_t svm_vcpu_run(struct kvm_vcpu *vcpu,
+ if (unlikely(svm->vmcb->control.exit_code == SVM_EXIT_NMI))
+ kvm_before_interrupt(vcpu, KVM_HANDLING_NMI);
+
++ if (!(svm->vmcb->control.virt_ext & LBR_CTL_ENABLE_MASK) &&
++ vcpu->arch.host_debugctl != svm->vmcb->save.dbgctl)
++ update_debugctlmsr(vcpu->arch.host_debugctl);
++
+ kvm_load_host_xsave_state(vcpu);
+ stgi();
+
+diff --git a/arch/x86/kvm/svm/svm.h b/arch/x86/kvm/svm/svm.h
+index 43fa6a16eb1917..d114efac7af78d 100644
+--- a/arch/x86/kvm/svm/svm.h
++++ b/arch/x86/kvm/svm/svm.h
+@@ -591,7 +591,7 @@ static inline bool is_vnmi_enabled(struct vcpu_svm *svm)
+ /* svm.c */
+ #define MSR_INVALID 0xffffffffU
+
+-#define DEBUGCTL_RESERVED_BITS (~(0x3fULL))
++#define DEBUGCTL_RESERVED_BITS (~DEBUGCTLMSR_LBR)
+
+ extern bool dump_invalid_vmcb;
+
+diff --git a/arch/x86/kvm/svm/vmenter.S b/arch/x86/kvm/svm/vmenter.S
+index 2ed80aea3bb130..0c61153b275f64 100644
+--- a/arch/x86/kvm/svm/vmenter.S
++++ b/arch/x86/kvm/svm/vmenter.S
+@@ -170,12 +170,8 @@ SYM_FUNC_START(__svm_vcpu_run)
+ mov VCPU_RDI(%_ASM_DI), %_ASM_DI
+
+ /* Enter guest mode */
+- sti
+-
+ 3: vmrun %_ASM_AX
+ 4:
+- cli
+-
+ /* Pop @svm to RAX while it's the only available register. */
+ pop %_ASM_AX
+
+@@ -340,12 +336,8 @@ SYM_FUNC_START(__svm_sev_es_vcpu_run)
+ mov KVM_VMCB_pa(%rax), %rax
+
+ /* Enter guest mode */
+- sti
+-
+ 1: vmrun %rax
+-
+-2: cli
+-
++2:
+ /* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
+ FILL_RETURN_BUFFER %rax, RSB_CLEAR_LOOPS, X86_FEATURE_RSB_VMEXIT
+
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 1af30e3472cdd9..a3d45b01dbadf3 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1515,16 +1515,12 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
+ */
+ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ {
+- struct vcpu_vmx *vmx = to_vmx(vcpu);
+-
+ if (vcpu->scheduled_out && !kvm_pause_in_guest(vcpu->kvm))
+ shrink_ple_window(vcpu);
+
+ vmx_vcpu_load_vmcs(vcpu, cpu, NULL);
+
+ vmx_vcpu_pi_load(vcpu, cpu);
+-
+- vmx->host_debugctlmsr = get_debugctlmsr();
+ }
+
+ void vmx_vcpu_put(struct kvm_vcpu *vcpu)
+@@ -7454,8 +7450,8 @@ fastpath_t vmx_vcpu_run(struct kvm_vcpu *vcpu, bool force_immediate_exit)
+ }
+
+ /* MSR_IA32_DEBUGCTLMSR is zeroed on vmexit. Restore it if needed */
+- if (vmx->host_debugctlmsr)
+- update_debugctlmsr(vmx->host_debugctlmsr);
++ if (vcpu->arch.host_debugctl)
++ update_debugctlmsr(vcpu->arch.host_debugctl);
+
+ #ifndef CONFIG_X86_64
+ /*
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index 41bf59bbc6426c..cf57fbf12104f5 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -339,8 +339,6 @@ struct vcpu_vmx {
+ /* apic deadline value in host tsc */
+ u64 hv_deadline_tsc;
+
+- unsigned long host_debugctlmsr;
+-
+ /*
+ * Only bits masked by msr_ia32_feature_control_valid_bits can be set in
+ * msr_ia32_feature_control. FEAT_CTL_LOCKED is always included
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index b67a2f46e40b05..8794c0a8a2e447 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -10964,6 +10964,8 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ set_debugreg(0, 7);
+ }
+
++ vcpu->arch.host_debugctl = get_debugctlmsr();
++
+ guest_timing_enter_irqoff();
+
+ for (;;) {
+diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
+index eb503f53c3195c..101725c149c429 100644
+--- a/arch/x86/mm/init.c
++++ b/arch/x86/mm/init.c
+@@ -263,28 +263,33 @@ static void __init probe_page_size_mask(void)
+ }
+
+ /*
+- * INVLPG may not properly flush Global entries
+- * on these CPUs when PCIDs are enabled.
++ * INVLPG may not properly flush Global entries on
++ * these CPUs. New microcode fixes the issue.
+ */
+ static const struct x86_cpu_id invlpg_miss_ids[] = {
+- X86_MATCH_VFM(INTEL_ALDERLAKE, 0),
+- X86_MATCH_VFM(INTEL_ALDERLAKE_L, 0),
+- X86_MATCH_VFM(INTEL_ATOM_GRACEMONT, 0),
+- X86_MATCH_VFM(INTEL_RAPTORLAKE, 0),
+- X86_MATCH_VFM(INTEL_RAPTORLAKE_P, 0),
+- X86_MATCH_VFM(INTEL_RAPTORLAKE_S, 0),
++ X86_MATCH_VFM(INTEL_ALDERLAKE, 0x2e),
++ X86_MATCH_VFM(INTEL_ALDERLAKE_L, 0x42c),
++ X86_MATCH_VFM(INTEL_ATOM_GRACEMONT, 0x11),
++ X86_MATCH_VFM(INTEL_RAPTORLAKE, 0x118),
++ X86_MATCH_VFM(INTEL_RAPTORLAKE_P, 0x4117),
++ X86_MATCH_VFM(INTEL_RAPTORLAKE_S, 0x2e),
+ {}
+ };
+
+ static void setup_pcid(void)
+ {
++ const struct x86_cpu_id *invlpg_miss_match;
++
+ if (!IS_ENABLED(CONFIG_X86_64))
+ return;
+
+ if (!boot_cpu_has(X86_FEATURE_PCID))
+ return;
+
+- if (x86_match_cpu(invlpg_miss_ids)) {
++ invlpg_miss_match = x86_match_cpu(invlpg_miss_ids);
++
++ if (invlpg_miss_match &&
++ boot_cpu_data.microcode < invlpg_miss_match->driver_data) {
+ pr_info("Incomplete global flushes, disabling PCID");
+ setup_clear_cpu_cap(X86_FEATURE_PCID);
+ return;
+diff --git a/block/partitions/efi.c b/block/partitions/efi.c
+index 5e9be13a56a82a..7acba66eed481c 100644
+--- a/block/partitions/efi.c
++++ b/block/partitions/efi.c
+@@ -682,7 +682,7 @@ static void utf16_le_to_7bit(const __le16 *in, unsigned int size, u8 *out)
+ out[size] = 0;
+
+ while (i < size) {
+- u8 c = le16_to_cpu(in[i]) & 0xff;
++ u8 c = le16_to_cpu(in[i]) & 0x7f;
+
+ if (c && !isprint(c))
+ c = '!';
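
To see why the mask shrinks from 0xff to 0x7f, here is a standalone sketch
(not from the patch; the Latin-1 printability model below is an assumption
about the kernel's ctype tables): with 0xff, a code unit such as U+00E9
survives as the non-ASCII byte 0xe9 in a buffer documented as 7-bit, whereas
0x7f keeps every output byte within ASCII range:

#include <stdint.h>
#include <stdio.h>

/* Model of the kernel's isprint(), which treats Latin-1 0xa0-0xff as
 * printable, unlike userspace ctype in the C locale. */
static int latin1_isprint(uint8_t c)
{
	return (c >= 0x20 && c <= 0x7e) || c >= 0xa0;
}

static uint8_t to_7bit(uint16_t in, uint8_t mask)
{
	uint8_t c = in & mask;

	if (c && !latin1_isprint(c))
		c = '!';
	return c;
}

int main(void)
{
	uint16_t sample = 0x00e9;	/* U+00E9, 'e' with acute accent */

	printf("mask 0xff -> 0x%02x (high bit set, not 7-bit)\n",
	       to_7bit(sample, 0xff));
	printf("mask 0x7f -> 0x%02x\n", to_7bit(sample, 0x7f));
	return 0;
}
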
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index d922cefc1e6625..ec0ef6a0de9427 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -2079,6 +2079,7 @@ static bool __fw_devlink_relax_cycles(struct fwnode_handle *con_handle,
+ out:
+ sup_handle->flags &= ~FWNODE_FLAG_VISITED;
+ put_device(sup_dev);
++ put_device(con_dev);
+ put_device(par_dev);
+ return ret;
+ }
+diff --git a/drivers/block/rnull.rs b/drivers/block/rnull.rs
+index b0227cf9ddd387..5de7223beb4d5b 100644
+--- a/drivers/block/rnull.rs
++++ b/drivers/block/rnull.rs
+@@ -32,7 +32,7 @@
+ }
+
+ struct NullBlkModule {
+- _disk: Pin<Box<Mutex<GenDisk<NullBlkDevice>>>>,
++ _disk: Pin<KBox<Mutex<GenDisk<NullBlkDevice>>>>,
+ }
+
+ impl kernel::Module for NullBlkModule {
+@@ -47,7 +47,7 @@ fn init(_module: &'static ThisModule) -> Result<Self> {
+ .rotational(false)
+ .build(format_args!("rnullb{}", 0), tagset)?;
+
+- let disk = Box::pin_init(new_mutex!(disk, "nullb:disk"), flags::GFP_KERNEL)?;
++ let disk = KBox::pin_init(new_mutex!(disk, "nullb:disk"), flags::GFP_KERNEL)?;
+
+ Ok(Self { _disk: disk })
+ }
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 458ac54e7b201e..c7d728d686e5a5 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -2665,9 +2665,12 @@ static int ublk_ctrl_set_params(struct ublk_device *ub,
+ if (ph.len > sizeof(struct ublk_params))
+ ph.len = sizeof(struct ublk_params);
+
+- /* parameters can only be changed when device isn't live */
+ mutex_lock(&ub->mutex);
+- if (ub->dev_info.state == UBLK_S_DEV_LIVE) {
++ if (test_bit(UB_STATE_USED, &ub->state)) {
++ /*
++ * Parameters can only be changed when the device hasn't
++ * been started yet
++ */
+ ret = -EACCES;
+ } else if (copy_from_user(&ub->params, argp, ph.len)) {
+ ret = -EFAULT;
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 6bc6dd417adf64..3a0b9dc98707f5 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -3644,6 +3644,7 @@ static ssize_t force_poll_sync_write(struct file *file,
+ }
+
+ static const struct file_operations force_poll_sync_fops = {
++ .owner = THIS_MODULE,
+ .open = simple_open,
+ .read = force_poll_sync_read,
+ .write = force_poll_sync_write,
+diff --git a/drivers/bus/mhi/host/pci_generic.c b/drivers/bus/mhi/host/pci_generic.c
+index 9938bb034c1cbc..acfd673834ed73 100644
+--- a/drivers/bus/mhi/host/pci_generic.c
++++ b/drivers/bus/mhi/host/pci_generic.c
+@@ -1040,8 +1040,9 @@ static void mhi_pci_recovery_work(struct work_struct *work)
+ err_unprepare:
+ mhi_unprepare_after_power_down(mhi_cntrl);
+ err_try_reset:
+- if (pci_reset_function(pdev))
+- dev_err(&pdev->dev, "Recovery failed\n");
++ err = pci_try_reset_function(pdev);
++ if (err)
++ dev_err(&pdev->dev, "Recovery failed: %d\n", err);
+ }
+
+ static void health_check(struct timer_list *t)
+diff --git a/drivers/cdx/cdx.c b/drivers/cdx/cdx.c
+index 07371cb653d356..4af1901c9d524a 100644
+--- a/drivers/cdx/cdx.c
++++ b/drivers/cdx/cdx.c
+@@ -470,8 +470,12 @@ static ssize_t driver_override_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+ {
+ struct cdx_device *cdx_dev = to_cdx_device(dev);
++ ssize_t len;
+
+- return sysfs_emit(buf, "%s\n", cdx_dev->driver_override);
++ device_lock(dev);
++ len = sysfs_emit(buf, "%s\n", cdx_dev->driver_override);
++ device_unlock(dev);
++ return len;
+ }
+ static DEVICE_ATTR_RW(driver_override);
+
+diff --git a/drivers/char/misc.c b/drivers/char/misc.c
+index 2cf595d2e10b85..f7dd455dd0dd3c 100644
+--- a/drivers/char/misc.c
++++ b/drivers/char/misc.c
+@@ -264,8 +264,8 @@ int misc_register(struct miscdevice *misc)
+ device_create_with_groups(&misc_class, misc->parent, dev,
+ misc, misc->groups, "%s", misc->name);
+ if (IS_ERR(misc->this_device)) {
++ misc_minor_free(misc->minor);
+ if (is_dynamic) {
+- misc_minor_free(misc->minor);
+ misc->minor = MISC_DYNAMIC_MINOR;
+ }
+ err = PTR_ERR(misc->this_device);
+diff --git a/drivers/gpio/gpio-aggregator.c b/drivers/gpio/gpio-aggregator.c
+index 38e0fff9afe722..cc6ee4334602aa 100644
+--- a/drivers/gpio/gpio-aggregator.c
++++ b/drivers/gpio/gpio-aggregator.c
+@@ -121,10 +121,15 @@ static ssize_t new_device_store(struct device_driver *driver, const char *buf,
+ struct platform_device *pdev;
+ int res, id;
+
++ if (!try_module_get(THIS_MODULE))
++ return -ENOENT;
++
+ /* kernfs guarantees string termination, so count + 1 is safe */
+ aggr = kzalloc(sizeof(*aggr) + count + 1, GFP_KERNEL);
+- if (!aggr)
+- return -ENOMEM;
++ if (!aggr) {
++ res = -ENOMEM;
++ goto put_module;
++ }
+
+ memcpy(aggr->args, buf, count + 1);
+
+@@ -163,6 +168,7 @@ static ssize_t new_device_store(struct device_driver *driver, const char *buf,
+ }
+
+ aggr->pdev = pdev;
++ module_put(THIS_MODULE);
+ return count;
+
+ remove_table:
+@@ -177,6 +183,8 @@ static ssize_t new_device_store(struct device_driver *driver, const char *buf,
+ kfree(aggr->lookups);
+ free_ga:
+ kfree(aggr);
++put_module:
++ module_put(THIS_MODULE);
+ return res;
+ }
+
+@@ -205,13 +213,19 @@ static ssize_t delete_device_store(struct device_driver *driver,
+ if (error)
+ return error;
+
++ if (!try_module_get(THIS_MODULE))
++ return -ENOENT;
++
+ mutex_lock(&gpio_aggregator_lock);
+ aggr = idr_remove(&gpio_aggregator_idr, id);
+ mutex_unlock(&gpio_aggregator_lock);
+- if (!aggr)
++ if (!aggr) {
++ module_put(THIS_MODULE);
+ return -ENOENT;
++ }
+
+ gpio_aggregator_free(aggr);
++ module_put(THIS_MODULE);
+ return count;
+ }
+ static DRIVER_ATTR_WO(delete_device);
+diff --git a/drivers/gpio/gpio-rcar.c b/drivers/gpio/gpio-rcar.c
+index 6159fda38d5da1..6641ed5cd8e1c6 100644
+--- a/drivers/gpio/gpio-rcar.c
++++ b/drivers/gpio/gpio-rcar.c
+@@ -40,7 +40,7 @@ struct gpio_rcar_info {
+
+ struct gpio_rcar_priv {
+ void __iomem *base;
+- spinlock_t lock;
++ raw_spinlock_t lock;
+ struct device *dev;
+ struct gpio_chip gpio_chip;
+ unsigned int irq_parent;
+@@ -123,7 +123,7 @@ static void gpio_rcar_config_interrupt_input_mode(struct gpio_rcar_priv *p,
+ * "Setting Level-Sensitive Interrupt Input Mode"
+ */
+
+- spin_lock_irqsave(&p->lock, flags);
++ raw_spin_lock_irqsave(&p->lock, flags);
+
+ /* Configure positive or negative logic in POSNEG */
+ gpio_rcar_modify_bit(p, POSNEG, hwirq, !active_high_rising_edge);
+@@ -142,7 +142,7 @@ static void gpio_rcar_config_interrupt_input_mode(struct gpio_rcar_priv *p,
+ if (!level_trigger)
+ gpio_rcar_write(p, INTCLR, BIT(hwirq));
+
+- spin_unlock_irqrestore(&p->lock, flags);
++ raw_spin_unlock_irqrestore(&p->lock, flags);
+ }
+
+ static int gpio_rcar_irq_set_type(struct irq_data *d, unsigned int type)
+@@ -246,7 +246,7 @@ static void gpio_rcar_config_general_input_output_mode(struct gpio_chip *chip,
+ * "Setting General Input Mode"
+ */
+
+- spin_lock_irqsave(&p->lock, flags);
++ raw_spin_lock_irqsave(&p->lock, flags);
+
+ /* Configure positive logic in POSNEG */
+ gpio_rcar_modify_bit(p, POSNEG, gpio, false);
+@@ -261,7 +261,7 @@ static void gpio_rcar_config_general_input_output_mode(struct gpio_chip *chip,
+ if (p->info.has_outdtsel && output)
+ gpio_rcar_modify_bit(p, OUTDTSEL, gpio, false);
+
+- spin_unlock_irqrestore(&p->lock, flags);
++ raw_spin_unlock_irqrestore(&p->lock, flags);
+ }
+
+ static int gpio_rcar_request(struct gpio_chip *chip, unsigned offset)
+@@ -347,7 +347,7 @@ static int gpio_rcar_get_multiple(struct gpio_chip *chip, unsigned long *mask,
+ return 0;
+ }
+
+- spin_lock_irqsave(&p->lock, flags);
++ raw_spin_lock_irqsave(&p->lock, flags);
+ outputs = gpio_rcar_read(p, INOUTSEL);
+ m = outputs & bankmask;
+ if (m)
+@@ -356,7 +356,7 @@ static int gpio_rcar_get_multiple(struct gpio_chip *chip, unsigned long *mask,
+ m = ~outputs & bankmask;
+ if (m)
+ val |= gpio_rcar_read(p, INDT) & m;
+- spin_unlock_irqrestore(&p->lock, flags);
++ raw_spin_unlock_irqrestore(&p->lock, flags);
+
+ bits[0] = val;
+ return 0;
+@@ -367,9 +367,9 @@ static void gpio_rcar_set(struct gpio_chip *chip, unsigned offset, int value)
+ struct gpio_rcar_priv *p = gpiochip_get_data(chip);
+ unsigned long flags;
+
+- spin_lock_irqsave(&p->lock, flags);
++ raw_spin_lock_irqsave(&p->lock, flags);
+ gpio_rcar_modify_bit(p, OUTDT, offset, value);
+- spin_unlock_irqrestore(&p->lock, flags);
++ raw_spin_unlock_irqrestore(&p->lock, flags);
+ }
+
+ static void gpio_rcar_set_multiple(struct gpio_chip *chip, unsigned long *mask,
+@@ -386,12 +386,12 @@ static void gpio_rcar_set_multiple(struct gpio_chip *chip, unsigned long *mask,
+ if (!bankmask)
+ return;
+
+- spin_lock_irqsave(&p->lock, flags);
++ raw_spin_lock_irqsave(&p->lock, flags);
+ val = gpio_rcar_read(p, OUTDT);
+ val &= ~bankmask;
+ val |= (bankmask & bits[0]);
+ gpio_rcar_write(p, OUTDT, val);
+- spin_unlock_irqrestore(&p->lock, flags);
++ raw_spin_unlock_irqrestore(&p->lock, flags);
+ }
+
+ static int gpio_rcar_direction_output(struct gpio_chip *chip, unsigned offset,
+@@ -468,7 +468,12 @@ static int gpio_rcar_parse_dt(struct gpio_rcar_priv *p, unsigned int *npins)
+ p->info = *info;
+
+ ret = of_parse_phandle_with_fixed_args(np, "gpio-ranges", 3, 0, &args);
+- *npins = ret == 0 ? args.args[2] : RCAR_MAX_GPIO_PER_BANK;
++ if (ret) {
++ *npins = RCAR_MAX_GPIO_PER_BANK;
++ } else {
++ *npins = args.args[2];
++ of_node_put(args.np);
++ }
+
+ if (*npins == 0 || *npins > RCAR_MAX_GPIO_PER_BANK) {
+ dev_warn(p->dev, "Invalid number of gpio lines %u, using %u\n",
+@@ -505,7 +510,7 @@ static int gpio_rcar_probe(struct platform_device *pdev)
+ return -ENOMEM;
+
+ p->dev = dev;
+- spin_lock_init(&p->lock);
++ raw_spin_lock_init(&p->lock);
+
+ /* Get device configuration from DT node */
+ ret = gpio_rcar_parse_dt(p, &npins);
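
Two independent fixes land in the gpio-rcar hunk. The spinlock_t to raw_spinlock_t conversion matters on PREEMPT_RT, where a plain spinlock becomes a sleeping lock and must not be taken from the irq_chip callbacks this driver uses it in; and gpio_rcar_parse_dt() now drops the node reference returned by of_parse_phandle_with_fixed_args(). A minimal sketch of the raw-lock idiom with illustrative names:

    #include <linux/spinlock.h>

    struct my_priv {
            raw_spinlock_t lock;    /* usable in hard-irq context on RT */
    };

    static void my_irq_set_type(struct my_priv *p)
    {
            unsigned long flags;

            raw_spin_lock_irqsave(&p->lock, flags);
            /* read-modify-write of the trigger-mode registers */
            raw_spin_unlock_irqrestore(&p->lock, flags);
    }
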
+diff --git a/drivers/gpio/gpio-vf610.c b/drivers/gpio/gpio-vf610.c
+index 27eff741fe9a2a..c36a9dbccd4dd5 100644
+--- a/drivers/gpio/gpio-vf610.c
++++ b/drivers/gpio/gpio-vf610.c
+@@ -15,10 +15,9 @@
+ #include <linux/io.h>
+ #include <linux/ioport.h>
+ #include <linux/irq.h>
+-#include <linux/platform_device.h>
+-#include <linux/of.h>
+-#include <linux/of_irq.h>
+ #include <linux/pinctrl/consumer.h>
++#include <linux/platform_device.h>
++#include <linux/property.h>
+
+ #define VF610_GPIO_PER_PORT 32
+
+@@ -37,6 +36,7 @@ struct vf610_gpio_port {
+ struct clk *clk_port;
+ struct clk *clk_gpio;
+ int irq;
++ spinlock_t lock; /* protect gpio direction registers */
+ };
+
+ #define GPIO_PDOR 0x00
+@@ -125,6 +125,7 @@ static int vf610_gpio_direction_input(struct gpio_chip *chip, unsigned int gpio)
+ u32 val;
+
+ if (port->sdata->have_paddr) {
++ guard(spinlock_irqsave)(&port->lock);
+ val = vf610_gpio_readl(port->gpio_base + GPIO_PDDR);
+ val &= ~mask;
+ vf610_gpio_writel(val, port->gpio_base + GPIO_PDDR);
+@@ -143,6 +144,7 @@ static int vf610_gpio_direction_output(struct gpio_chip *chip, unsigned int gpio
+ vf610_gpio_set(chip, gpio, value);
+
+ if (port->sdata->have_paddr) {
++ guard(spinlock_irqsave)(&port->lock);
+ val = vf610_gpio_readl(port->gpio_base + GPIO_PDDR);
+ val |= mask;
+ vf610_gpio_writel(val, port->gpio_base + GPIO_PDDR);
+@@ -297,7 +299,8 @@ static int vf610_gpio_probe(struct platform_device *pdev)
+ if (!port)
+ return -ENOMEM;
+
+- port->sdata = of_device_get_match_data(dev);
++ port->sdata = device_get_match_data(dev);
++ spin_lock_init(&port->lock);
+
+ dual_base = port->sdata->have_dual_base;
+
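
The vf610 hunk protects the PDDR direction-register read-modify-write with a new lock, switches to the firmware-agnostic device_get_match_data(), and uses the scope-based guard() from <linux/cleanup.h>, which drops the lock automatically when the scope ends. A sketch of the idiom; the struct and register offset are hypothetical:

    #include <linux/cleanup.h>
    #include <linux/io.h>
    #include <linux/spinlock.h>

    #define MY_PDDR 0x14                    /* hypothetical offset */

    struct my_port {
            void __iomem *base;
            spinlock_t lock;
    };

    static void my_set_output(struct my_port *port, u32 mask)
    {
            guard(spinlock_irqsave)(&port->lock);

            u32 val = readl(port->base + MY_PDDR);
            writel(val | mask, port->base + MY_PDDR);
    }                                       /* guard releases the lock */
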
+diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig
+index 7408ea8caacc3c..ae53f26da945f8 100644
+--- a/drivers/gpu/drm/Kconfig
++++ b/drivers/gpu/drm/Kconfig
+@@ -211,6 +211,18 @@ config DRM_DEBUG_MODESET_LOCK
+
+ If in doubt, say "N".
+
++config DRM_CLIENT_SELECTION
++ bool
++ depends on DRM
++ select DRM_CLIENT_SETUP if DRM_FBDEV_EMULATION
++ help
++ Drivers that support in-kernel DRM clients have to select this
++ option.
++
++config DRM_CLIENT_SETUP
++ bool
++ depends on DRM_CLIENT_SELECTION
++
+ config DRM_FBDEV_EMULATION
+ bool "Enable legacy fbdev support for your modesetting driver"
+ depends on DRM
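
The two new hidden symbols split the roles: drivers select DRM_CLIENT_SELECTION, and that in turn selects DRM_CLIENT_SETUP only when fbdev emulation is configured, so the client-setup code is not built when nothing uses it. A hypothetical driver entry consuming them:

    config DRM_MYDRIVER
            tristate "My DRM driver"
            depends on DRM
            select DRM_CLIENT_SELECTION
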
+diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
+index 84746054c721a3..1ec44529447a76 100644
+--- a/drivers/gpu/drm/Makefile
++++ b/drivers/gpu/drm/Makefile
+@@ -144,8 +144,12 @@ drm_kms_helper-y := \
+ drm_rect.o \
+ drm_self_refresh_helper.o \
+ drm_simple_kms_helper.o
++drm_kms_helper-$(CONFIG_DRM_CLIENT_SETUP) += \
++ drm_client_setup.o
+ drm_kms_helper-$(CONFIG_DRM_PANEL_BRIDGE) += bridge/panel.o
+-drm_kms_helper-$(CONFIG_DRM_FBDEV_EMULATION) += drm_fb_helper.o
++drm_kms_helper-$(CONFIG_DRM_FBDEV_EMULATION) += \
++ drm_fbdev_client.o \
++ drm_fb_helper.o
+ obj-$(CONFIG_DRM_KMS_HELPER) += drm_kms_helper.o
+
+ #
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_queue.c b/drivers/gpu/drm/amd/amdkfd/kfd_queue.c
+index ad29634f8b44ca..80c85b6cc478a9 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_queue.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_queue.c
+@@ -266,8 +266,8 @@ int kfd_queue_acquire_buffers(struct kfd_process_device *pdd, struct queue_prope
+ /* EOP buffer is not required for all ASICs */
+ if (properties->eop_ring_buffer_address) {
+ if (properties->eop_ring_buffer_size != topo_dev->node_props.eop_buffer_size) {
+- pr_debug("queue eop bo size 0x%lx not equal to node eop buf size 0x%x\n",
+- properties->eop_buf_bo->tbo.base.size,
++ pr_debug("queue eop bo size 0x%x not equal to node eop buf size 0x%x\n",
++ properties->eop_ring_buffer_size,
+ topo_dev->node_props.eop_buffer_size);
+ err = -EINVAL;
+ goto out_err_unreserve;
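
The pr_debug fix resolves a mismatch in the failure message: the comparison is against the u32 eop_ring_buffer_size, but the old message printed the BO's size_t tbo size with 0x%lx, so the logged value was not the one being checked. The specifier pairing, condensed with illustrative variables:

    u32 ring_size = 4096;           /* u32    -> 0x%x  */
    size_t bo_size = 8192;          /* size_t -> 0x%zx */

    pr_debug("ring 0x%x, bo 0x%zx\n", ring_size, bo_size);
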
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index d915020a429582..f0eda0ba015600 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -1455,7 +1455,8 @@ bool resource_build_scaling_params(struct pipe_ctx *pipe_ctx)
+ DC_LOGGER_INIT(pipe_ctx->stream->ctx->logger);
+
+ /* Invalid input */
+- if (!plane_state->dst_rect.width ||
++ if (!plane_state ||
++ !plane_state->dst_rect.width ||
+ !plane_state->dst_rect.height ||
+ !plane_state->src_rect.width ||
+ !plane_state->src_rect.height) {
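
The added !plane_state test leans on short-circuit evaluation: it has to be the first term of the || chain so the dst_rect/src_rect dereferences after it are never evaluated when the pointer is NULL. The general shape:

    /* pointer tested before any ptr->field term */
    if (!ptr || !ptr->width || !ptr->height)
            return false;
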
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
+index 452589adaf0468..e5f619c979d80e 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0.c
+@@ -1883,16 +1883,6 @@ static int smu_v14_0_allow_ih_interrupt(struct smu_context *smu)
+ NULL);
+ }
+
+-static int smu_v14_0_process_pending_interrupt(struct smu_context *smu)
+-{
+- int ret = 0;
+-
+- if (smu_cmn_feature_is_enabled(smu, SMU_FEATURE_ACDC_BIT))
+- ret = smu_v14_0_allow_ih_interrupt(smu);
+-
+- return ret;
+-}
+-
+ int smu_v14_0_enable_thermal_alert(struct smu_context *smu)
+ {
+ int ret = 0;
+@@ -1904,7 +1894,7 @@ int smu_v14_0_enable_thermal_alert(struct smu_context *smu)
+ if (ret)
+ return ret;
+
+- return smu_v14_0_process_pending_interrupt(smu);
++ return smu_v14_0_allow_ih_interrupt(smu);
+ }
+
+ int smu_v14_0_disable_thermal_alert(struct smu_context *smu)
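
The deleted smu_v14_0_process_pending_interrupt() forwarded to smu_v14_0_allow_ih_interrupt() only when the ACDC feature bit was enabled; removing the wrapper makes the allow message unconditional whenever thermal alerts are enabled. The tail of the enable path now reduces to:

    int smu_v14_0_enable_thermal_alert(struct smu_context *smu)
    {
            /* interrupt-source enablement elided; see the hunk above */
            return smu_v14_0_allow_ih_interrupt(smu);
    }
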
+diff --git a/drivers/gpu/drm/drm_client_setup.c b/drivers/gpu/drm/drm_client_setup.c
+new file mode 100644
+index 00000000000000..5969c4ffe31ba4
+--- /dev/null
++++ b/drivers/gpu/drm/drm_client_setup.c
+@@ -0,0 +1,66 @@
++// SPDX-License-Identifier: MIT
++
++#include <drm/drm_client_setup.h>
++#include <drm/drm_device.h>
++#include <drm/drm_fbdev_client.h>
++#include <drm/drm_fourcc.h>
++#include <drm/drm_print.h>
++
++/**
++ * drm_client_setup() - Setup in-kernel DRM clients
++ * @dev: DRM device
++ * @format: Preferred pixel format for the device. Use NULL, unless
++ * there is clearly a driver-preferred format.
++ *
++ * This function sets up the in-kernel DRM clients. Restore, hotplug
++ * events and teardown are all taken care of.
++ *
++ * Drivers should call drm_client_setup() after registering the new
++ * DRM device with drm_dev_register(). This function is safe to call
++ * even when there are no connectors present. Setup will be retried
++ * on the next hotplug event.
++ *
++ * The clients are destroyed by drm_dev_unregister().
++ */
++void drm_client_setup(struct drm_device *dev, const struct drm_format_info *format)
++{
++ int ret;
++
++ ret = drm_fbdev_client_setup(dev, format);
++ if (ret)
++ drm_warn(dev, "Failed to set up DRM client; error %d\n", ret);
++}
++EXPORT_SYMBOL(drm_client_setup);
++
++/**
++ * drm_client_setup_with_fourcc() - Setup in-kernel DRM clients for color mode
++ * @dev: DRM device
++ * @fourcc: Preferred pixel format as 4CC code for the device
++ *
++ * This function sets up the in-kernel DRM clients. It is equivalent
++ * to drm_client_setup(), but expects a 4CC code as second argument.
++ */
++void drm_client_setup_with_fourcc(struct drm_device *dev, u32 fourcc)
++{
++ drm_client_setup(dev, drm_format_info(fourcc));
++}
++EXPORT_SYMBOL(drm_client_setup_with_fourcc);
++
++/**
++ * drm_client_setup_with_color_mode() - Setup in-kernel DRM clients for color mode
++ * @dev: DRM device
++ * @color_mode: Preferred color mode for the device
++ *
++ * This function sets up the in-kernel DRM clients. It is equivalent
++ * to drm_client_setup(), but expects a color mode as second argument.
++ *
++ * Do not use this function in new drivers. Prefer drm_client_setup() with a
++ * format of NULL.
++ */
++void drm_client_setup_with_color_mode(struct drm_device *dev, unsigned int color_mode)
++{
++ u32 fourcc = drm_driver_color_mode_format(dev, color_mode);
++
++ drm_client_setup_with_fourcc(dev, fourcc);
++}
++EXPORT_SYMBOL(drm_client_setup_with_color_mode);
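
Per its kerneldoc, drm_client_setup() is the one call a driver makes, after drm_dev_register(), with teardown handled by drm_dev_unregister(). A hypothetical probe tail under those rules:

    #include <drm/drm_client_setup.h>
    #include <drm/drm_drv.h>

    static int my_probe_tail(struct drm_device *drm)
    {
            int ret = drm_dev_register(drm, 0);

            if (ret)
                    return ret;

            drm_client_setup(drm, NULL);    /* NULL: no preferred format */
            return 0;
    }
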
+diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c
+index eaac2e5726e750..b15ddbd65e7b5d 100644
+--- a/drivers/gpu/drm/drm_fb_helper.c
++++ b/drivers/gpu/drm/drm_fb_helper.c
+@@ -492,8 +492,8 @@ EXPORT_SYMBOL(drm_fb_helper_init);
+ * @fb_helper: driver-allocated fbdev helper
+ *
+ * A helper to alloc fb_info and the member cmap. Called by the driver
+- * within the fb_probe fb_helper callback function. Drivers do not
+- * need to release the allocated fb_info structure themselves, this is
++ * within the struct &drm_driver.fbdev_probe callback function. Drivers do
++ * not need to release the allocated fb_info structure themselves, this is
+ * automatically done when calling drm_fb_helper_fini().
+ *
+ * RETURNS:
+@@ -1443,67 +1443,27 @@ int drm_fb_helper_pan_display(struct fb_var_screeninfo *var,
+ EXPORT_SYMBOL(drm_fb_helper_pan_display);
+
+ static uint32_t drm_fb_helper_find_format(struct drm_fb_helper *fb_helper, const uint32_t *formats,
+- size_t format_count, uint32_t bpp, uint32_t depth)
++ size_t format_count, unsigned int color_mode)
+ {
+ struct drm_device *dev = fb_helper->dev;
+ uint32_t format;
+ size_t i;
+
+- /*
+- * Do not consider YUV or other complicated formats
+- * for framebuffers. This means only legacy formats
+- * are supported (fmt->depth is a legacy field), but
+- * the framebuffer emulation can only deal with such
+- * formats, specifically RGB/BGA formats.
+- */
+- format = drm_mode_legacy_fb_format(bpp, depth);
+- if (!format)
+- goto err;
++ format = drm_driver_color_mode_format(dev, color_mode);
++ if (!format) {
++ drm_info(dev, "unsupported color mode of %d\n", color_mode);
++ return DRM_FORMAT_INVALID;
++ }
+
+ for (i = 0; i < format_count; ++i) {
+ if (formats[i] == format)
+ return format;
+ }
+-
+-err:
+- /* We found nothing. */
+- drm_warn(dev, "bpp/depth value of %u/%u not supported\n", bpp, depth);
++ drm_warn(dev, "format %p4cc not supported\n", &format);
+
+ return DRM_FORMAT_INVALID;
+ }
+
+-static uint32_t drm_fb_helper_find_color_mode_format(struct drm_fb_helper *fb_helper,
+- const uint32_t *formats, size_t format_count,
+- unsigned int color_mode)
+-{
+- struct drm_device *dev = fb_helper->dev;
+- uint32_t bpp, depth;
+-
+- switch (color_mode) {
+- case 1:
+- case 2:
+- case 4:
+- case 8:
+- case 16:
+- case 24:
+- bpp = depth = color_mode;
+- break;
+- case 15:
+- bpp = 16;
+- depth = 15;
+- break;
+- case 32:
+- bpp = 32;
+- depth = 24;
+- break;
+- default:
+- drm_info(dev, "unsupported color mode of %d\n", color_mode);
+- return DRM_FORMAT_INVALID;
+- }
+-
+- return drm_fb_helper_find_format(fb_helper, formats, format_count, bpp, depth);
+-}
+-
+ static int __drm_fb_helper_find_sizes(struct drm_fb_helper *fb_helper,
+ struct drm_fb_helper_surface_size *sizes)
+ {
+@@ -1533,10 +1493,10 @@ static int __drm_fb_helper_find_sizes(struct drm_fb_helper *fb_helper,
+ if (!cmdline_mode->bpp_specified)
+ continue;
+
+- surface_format = drm_fb_helper_find_color_mode_format(fb_helper,
+- plane->format_types,
+- plane->format_count,
+- cmdline_mode->bpp);
++ surface_format = drm_fb_helper_find_format(fb_helper,
++ plane->format_types,
++ plane->format_count,
++ cmdline_mode->bpp);
+ if (surface_format != DRM_FORMAT_INVALID)
+ break; /* found supported format */
+ }
+@@ -1546,10 +1506,10 @@ static int __drm_fb_helper_find_sizes(struct drm_fb_helper *fb_helper,
+ break; /* found supported format */
+
+ /* try preferred color mode */
+- surface_format = drm_fb_helper_find_color_mode_format(fb_helper,
+- plane->format_types,
+- plane->format_count,
+- fb_helper->preferred_bpp);
++ surface_format = drm_fb_helper_find_format(fb_helper,
++ plane->format_types,
++ plane->format_count,
++ fb_helper->preferred_bpp);
+ if (surface_format != DRM_FORMAT_INVALID)
+ break; /* found supported format */
+ }
+@@ -1650,7 +1610,7 @@ static int drm_fb_helper_find_sizes(struct drm_fb_helper *fb_helper,
+
+ /*
+ * Allocates the backing storage and sets up the fbdev info structure through
+- * the ->fb_probe callback.
++ * the ->fbdev_probe callback.
+ */
+ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper)
+ {
+@@ -1668,7 +1628,10 @@ static int drm_fb_helper_single_fb_probe(struct drm_fb_helper *fb_helper)
+ }
+
+ /* push down into drivers */
+- ret = (*fb_helper->funcs->fb_probe)(fb_helper, &sizes);
++ if (dev->driver->fbdev_probe)
++ ret = dev->driver->fbdev_probe(fb_helper, &sizes);
++ else if (fb_helper->funcs)
++ ret = fb_helper->funcs->fb_probe(fb_helper, &sizes);
+ if (ret < 0)
+ return ret;
+
+@@ -1740,7 +1703,7 @@ static void drm_fb_helper_fill_var(struct fb_info *info,
+ * instance and the drm framebuffer allocated in &drm_fb_helper.fb.
+ *
+ * Drivers should call this (or their equivalent setup code) from their
+- * &drm_fb_helper_funcs.fb_probe callback after having allocated the fbdev
++ * &drm_driver.fbdev_probe callback after having allocated the fbdev
+ * backing storage framebuffer.
+ */
+ void drm_fb_helper_fill_info(struct fb_info *info,
+@@ -1896,7 +1859,7 @@ __drm_fb_helper_initial_config_and_unlock(struct drm_fb_helper *fb_helper)
+ * Note that this also registers the fbdev and so allows userspace to call into
+ * the driver through the fbdev interfaces.
+ *
+- * This function will call down into the &drm_fb_helper_funcs.fb_probe callback
++ * This function will call down into the &drm_driver.fbdev_probe callback
+ * to let the driver allocate and initialize the fbdev info structure and the
+ * drm framebuffer used to back the fbdev. drm_fb_helper_fill_info() is provided
+ * as a helper to setup simple default values for the fbdev info structure.
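
Net effect of the drm_fb_helper changes: the open-coded bpp/depth switch moves into drm_driver_color_mode_format() (added in the drm_fourcc.c hunk below), and probing now prefers a per-driver hook over the legacy per-helper fb_probe. On the driver side the migration presumably looks like this, assuming the series adds the hook to struct drm_driver as the call site implies:

    static const struct drm_driver my_driver = {
            .fbdev_probe = my_fbdev_probe,  /* hypothetical callback */
            /* ... remaining drm_driver members ... */
    };
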
+diff --git a/drivers/gpu/drm/drm_fbdev_client.c b/drivers/gpu/drm/drm_fbdev_client.c
+new file mode 100644
+index 00000000000000..a09382afe2fb6f
+--- /dev/null
++++ b/drivers/gpu/drm/drm_fbdev_client.c
+@@ -0,0 +1,141 @@
++// SPDX-License-Identifier: MIT
++
++#include <drm/drm_client.h>
++#include <drm/drm_crtc_helper.h>
++#include <drm/drm_drv.h>
++#include <drm/drm_fbdev_client.h>
++#include <drm/drm_fb_helper.h>
++#include <drm/drm_fourcc.h>
++#include <drm/drm_print.h>
++
++/*
++ * struct drm_client_funcs
++ */
++
++static void drm_fbdev_client_unregister(struct drm_client_dev *client)
++{
++ struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client);
++
++ if (fb_helper->info) {
++ drm_fb_helper_unregister_info(fb_helper);
++ } else {
++ drm_client_release(&fb_helper->client);
++ drm_fb_helper_unprepare(fb_helper);
++ kfree(fb_helper);
++ }
++}
++
++static int drm_fbdev_client_restore(struct drm_client_dev *client)
++{
++ drm_fb_helper_lastclose(client->dev);
++
++ return 0;
++}
++
++static int drm_fbdev_client_hotplug(struct drm_client_dev *client)
++{
++ struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client);
++ struct drm_device *dev = client->dev;
++ int ret;
++
++ if (dev->fb_helper)
++ return drm_fb_helper_hotplug_event(dev->fb_helper);
++
++ ret = drm_fb_helper_init(dev, fb_helper);
++ if (ret)
++ goto err_drm_err;
++
++ if (!drm_drv_uses_atomic_modeset(dev))
++ drm_helper_disable_unused_functions(dev);
++
++ ret = drm_fb_helper_initial_config(fb_helper);
++ if (ret)
++ goto err_drm_fb_helper_fini;
++
++ return 0;
++
++err_drm_fb_helper_fini:
++ drm_fb_helper_fini(fb_helper);
++err_drm_err:
++ drm_err(dev, "fbdev: Failed to setup emulation (ret=%d)\n", ret);
++ return ret;
++}
++
++static const struct drm_client_funcs drm_fbdev_client_funcs = {
++ .owner = THIS_MODULE,
++ .unregister = drm_fbdev_client_unregister,
++ .restore = drm_fbdev_client_restore,
++ .hotplug = drm_fbdev_client_hotplug,
++};
++
++/**
++ * drm_fbdev_client_setup() - Setup fbdev emulation
++ * @dev: DRM device
++ * @format: Preferred color format for the device. DRM_FORMAT_XRGB8888
++ * is used if this is zero.
++ *
++ * This function sets up fbdev emulation. Restore, hotplug events and
++ * teardown are all taken care of. Drivers that do suspend/resume need
++ * to call drm_fb_helper_set_suspend_unlocked() themselves. Simple
++ * drivers might use drm_mode_config_helper_suspend().
++ *
++ * This function is safe to call even when there are no connectors present.
++ * Setup will be retried on the next hotplug event.
++ *
++ * The fbdev client is destroyed by drm_dev_unregister().
++ *
++ * Returns:
++ * 0 on success, or a negative errno code otherwise.
++ */
++int drm_fbdev_client_setup(struct drm_device *dev, const struct drm_format_info *format)
++{
++ struct drm_fb_helper *fb_helper;
++ unsigned int color_mode;
++ int ret;
++
++ /* TODO: Use format info throughout DRM */
++ if (format) {
++ unsigned int bpp = drm_format_info_bpp(format, 0);
++
++ switch (bpp) {
++ case 16:
++ color_mode = format->depth; // could also be 15
++ break;
++ default:
++ color_mode = bpp;
++ }
++ } else {
++ switch (dev->mode_config.preferred_depth) {
++ case 0:
++ case 24:
++ color_mode = 32;
++ break;
++ default:
++ color_mode = dev->mode_config.preferred_depth;
++ }
++ }
++
++ drm_WARN(dev, !dev->registered, "Device has not been registered.\n");
++ drm_WARN(dev, dev->fb_helper, "fb_helper is already set!\n");
++
++ fb_helper = kzalloc(sizeof(*fb_helper), GFP_KERNEL);
++ if (!fb_helper)
++ return -ENOMEM;
++ drm_fb_helper_prepare(dev, fb_helper, color_mode, NULL);
++
++ ret = drm_client_init(dev, &fb_helper->client, "fbdev", &drm_fbdev_client_funcs);
++ if (ret) {
++ drm_err(dev, "Failed to register client: %d\n", ret);
++ goto err_drm_client_init;
++ }
++
++ drm_client_register(&fb_helper->client);
++
++ return 0;
++
++err_drm_client_init:
++ drm_fb_helper_unprepare(fb_helper);
++ kfree(fb_helper);
++ return ret;
++}
++EXPORT_SYMBOL(drm_fbdev_client_setup);
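
The color-mode derivation at the top of drm_fbdev_client_setup() folds a format back into the single legacy value: 16-bpp formats report their depth (15 or 16), other formats report their bpp, and a NULL format falls back to the device's preferred depth with 24 promoted to 32. Worked examples under those rules:

    #include <drm/drm_fourcc.h>

    const struct drm_format_info *info;

    info = drm_format_info(DRM_FORMAT_RGB565);
    /* drm_format_info_bpp(info, 0) == 16, info->depth == 16 -> mode 16 */

    info = drm_format_info(DRM_FORMAT_XRGB1555);
    /* bpp == 16, depth == 15 -> mode 15 */

    info = drm_format_info(DRM_FORMAT_XRGB8888);
    /* bpp == 32 -> mode 32 */
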
+diff --git a/drivers/gpu/drm/drm_fbdev_ttm.c b/drivers/gpu/drm/drm_fbdev_ttm.c
+index 119ffb28aaf952..d799cbe944cd34 100644
+--- a/drivers/gpu/drm/drm_fbdev_ttm.c
++++ b/drivers/gpu/drm/drm_fbdev_ttm.c
+@@ -71,71 +71,7 @@ static const struct fb_ops drm_fbdev_ttm_fb_ops = {
+ static int drm_fbdev_ttm_helper_fb_probe(struct drm_fb_helper *fb_helper,
+ struct drm_fb_helper_surface_size *sizes)
+ {
+- struct drm_client_dev *client = &fb_helper->client;
+- struct drm_device *dev = fb_helper->dev;
+- struct drm_client_buffer *buffer;
+- struct fb_info *info;
+- size_t screen_size;
+- void *screen_buffer;
+- u32 format;
+- int ret;
+-
+- drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
+- sizes->surface_width, sizes->surface_height,
+- sizes->surface_bpp);
+-
+- format = drm_driver_legacy_fb_format(dev, sizes->surface_bpp,
+- sizes->surface_depth);
+- buffer = drm_client_framebuffer_create(client, sizes->surface_width,
+- sizes->surface_height, format);
+- if (IS_ERR(buffer))
+- return PTR_ERR(buffer);
+-
+- fb_helper->buffer = buffer;
+- fb_helper->fb = buffer->fb;
+-
+- screen_size = buffer->gem->size;
+- screen_buffer = vzalloc(screen_size);
+- if (!screen_buffer) {
+- ret = -ENOMEM;
+- goto err_drm_client_framebuffer_delete;
+- }
+-
+- info = drm_fb_helper_alloc_info(fb_helper);
+- if (IS_ERR(info)) {
+- ret = PTR_ERR(info);
+- goto err_vfree;
+- }
+-
+- drm_fb_helper_fill_info(info, fb_helper, sizes);
+-
+- info->fbops = &drm_fbdev_ttm_fb_ops;
+-
+- /* screen */
+- info->flags |= FBINFO_VIRTFB | FBINFO_READS_FAST;
+- info->screen_buffer = screen_buffer;
+- info->fix.smem_len = screen_size;
+-
+- /* deferred I/O */
+- fb_helper->fbdefio.delay = HZ / 20;
+- fb_helper->fbdefio.deferred_io = drm_fb_helper_deferred_io;
+-
+- info->fbdefio = &fb_helper->fbdefio;
+- ret = fb_deferred_io_init(info);
+- if (ret)
+- goto err_drm_fb_helper_release_info;
+-
+- return 0;
+-
+-err_drm_fb_helper_release_info:
+- drm_fb_helper_release_info(fb_helper);
+-err_vfree:
+- vfree(screen_buffer);
+-err_drm_client_framebuffer_delete:
+- fb_helper->fb = NULL;
+- fb_helper->buffer = NULL;
+- drm_client_framebuffer_delete(buffer);
+- return ret;
++ return drm_fbdev_ttm_driver_fbdev_probe(fb_helper, sizes);
+ }
+
+ static void drm_fbdev_ttm_damage_blit_real(struct drm_fb_helper *fb_helper,
+@@ -240,6 +176,82 @@ static const struct drm_fb_helper_funcs drm_fbdev_ttm_helper_funcs = {
+ .fb_dirty = drm_fbdev_ttm_helper_fb_dirty,
+ };
+
++/*
++ * struct drm_driver
++ */
++
++int drm_fbdev_ttm_driver_fbdev_probe(struct drm_fb_helper *fb_helper,
++ struct drm_fb_helper_surface_size *sizes)
++{
++ struct drm_client_dev *client = &fb_helper->client;
++ struct drm_device *dev = fb_helper->dev;
++ struct drm_client_buffer *buffer;
++ struct fb_info *info;
++ size_t screen_size;
++ void *screen_buffer;
++ u32 format;
++ int ret;
++
++ drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n",
++ sizes->surface_width, sizes->surface_height,
++ sizes->surface_bpp);
++
++ format = drm_driver_legacy_fb_format(dev, sizes->surface_bpp,
++ sizes->surface_depth);
++ buffer = drm_client_framebuffer_create(client, sizes->surface_width,
++ sizes->surface_height, format);
++ if (IS_ERR(buffer))
++ return PTR_ERR(buffer);
++
++ fb_helper->funcs = &drm_fbdev_ttm_helper_funcs;
++ fb_helper->buffer = buffer;
++ fb_helper->fb = buffer->fb;
++
++ screen_size = buffer->gem->size;
++ screen_buffer = vzalloc(screen_size);
++ if (!screen_buffer) {
++ ret = -ENOMEM;
++ goto err_drm_client_framebuffer_delete;
++ }
++
++ info = drm_fb_helper_alloc_info(fb_helper);
++ if (IS_ERR(info)) {
++ ret = PTR_ERR(info);
++ goto err_vfree;
++ }
++
++ drm_fb_helper_fill_info(info, fb_helper, sizes);
++
++ info->fbops = &drm_fbdev_ttm_fb_ops;
++
++ /* screen */
++ info->flags |= FBINFO_VIRTFB | FBINFO_READS_FAST;
++ info->screen_buffer = screen_buffer;
++ info->fix.smem_len = screen_size;
++
++ /* deferred I/O */
++ fb_helper->fbdefio.delay = HZ / 20;
++ fb_helper->fbdefio.deferred_io = drm_fb_helper_deferred_io;
++
++ info->fbdefio = &fb_helper->fbdefio;
++ ret = fb_deferred_io_init(info);
++ if (ret)
++ goto err_drm_fb_helper_release_info;
++
++ return 0;
++
++err_drm_fb_helper_release_info:
++ drm_fb_helper_release_info(fb_helper);
++err_vfree:
++ vfree(screen_buffer);
++err_drm_client_framebuffer_delete:
++ fb_helper->fb = NULL;
++ fb_helper->buffer = NULL;
++ drm_client_framebuffer_delete(buffer);
++ return ret;
++}
++EXPORT_SYMBOL(drm_fbdev_ttm_driver_fbdev_probe);
++
+ static void drm_fbdev_ttm_client_unregister(struct drm_client_dev *client)
+ {
+ struct drm_fb_helper *fb_helper = drm_fb_helper_from_client(client);
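
The fb_probe body moves verbatim into the exported drm_fbdev_ttm_driver_fbdev_probe(), leaving the old helper callback as a thin wrapper; TTM-based drivers can then plug the export straight into the new hook, again assuming the series adds it to struct drm_driver:

    static const struct drm_driver my_ttm_driver = {
            .fbdev_probe = drm_fbdev_ttm_driver_fbdev_probe,
            /* ... */
    };
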
+diff --git a/drivers/gpu/drm/drm_fourcc.c b/drivers/gpu/drm/drm_fourcc.c
+index 193cf8ed791283..3a94ca211f9ce9 100644
+--- a/drivers/gpu/drm/drm_fourcc.c
++++ b/drivers/gpu/drm/drm_fourcc.c
+@@ -36,7 +36,6 @@
+ * @depth: bit depth per pixel
+ *
+ * Computes a drm fourcc pixel format code for the given @bpp/@depth values.
+- * Useful in fbdev emulation code, since that deals in those values.
+ */
+ uint32_t drm_mode_legacy_fb_format(uint32_t bpp, uint32_t depth)
+ {
+@@ -140,6 +139,35 @@ uint32_t drm_driver_legacy_fb_format(struct drm_device *dev,
+ }
+ EXPORT_SYMBOL(drm_driver_legacy_fb_format);
+
++/**
++ * drm_driver_color_mode_format - Compute DRM 4CC code from color mode
++ * @dev: DRM device
++ * @color_mode: command-line color mode
++ *
++ * Computes a DRM 4CC pixel format code for the given color mode using
++ * drm_driver_color_mode(). The color mode is in the format used on the
++ * kernel command line. It specifies the number of bits per pixel
++ * and color depth in a single value.
++ *
++ * Useful in fbdev emulation code, since that deals in those values. The
++ * helper does not consider YUV or other complicated formats. This means
++ * only legacy formats are supported (fmt->depth is a legacy field), but
++ * the framebuffer emulation can only deal with such formats, specifically
++ * RGB/BGR formats.
++ */
++uint32_t drm_driver_color_mode_format(struct drm_device *dev, unsigned int color_mode)
++{
++ switch (color_mode) {
++ case 15:
++ return drm_driver_legacy_fb_format(dev, 16, 15);
++ case 32:
++ return drm_driver_legacy_fb_format(dev, 32, 24);
++ default:
++ return drm_driver_legacy_fb_format(dev, color_mode, color_mode);
++ }
++}
++EXPORT_SYMBOL(drm_driver_color_mode_format);
++
+ /*
+ * Internal function to query information for a given format. See
+ * drm_format_info() for the public API.
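
The new helper keeps only two special cases from the old switch: mode 15 maps to the 16-bpp/depth-15 pair and mode 32 to 32-bpp/depth-24; every other mode passes bpp == depth through. Usage, with the format class each call typically resolves to noted alongside:

    u32 f15 = drm_driver_color_mode_format(dev, 15); /* XRGB1555-class */
    u32 f16 = drm_driver_color_mode_format(dev, 16); /* RGB565-class   */
    u32 f32 = drm_driver_color_mode_format(dev, 32); /* XRGB8888-class */
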
+diff --git a/drivers/gpu/drm/drm_panic_qr.rs b/drivers/gpu/drm/drm_panic_qr.rs
+index 447740d79d3d2e..bcf248f69252c2 100644
+--- a/drivers/gpu/drm/drm_panic_qr.rs
++++ b/drivers/gpu/drm/drm_panic_qr.rs
+@@ -209,12 +209,9 @@
+ impl Version {
+ /// Returns the smallest QR version that can hold these segments.
+ fn from_segments(segments: &[&Segment<'_>]) -> Option<Version> {
+- for v in (1..=40).map(|k| Version(k)) {
+- if v.max_data() * 8 >= segments.iter().map(|s| s.total_size_bits(v)).sum() {
+- return Some(v);
+- }
+- }
+- None
++ (1..=40)
++ .map(Version)
++ .find(|&v| v.max_data() * 8 >= segments.iter().map(|s| s.total_size_bits(v)).sum())
+ }
+
+ fn width(&self) -> u8 {
+@@ -242,7 +239,7 @@ fn g1_blk_size(&self) -> usize {
+ }
+
+ fn alignment_pattern(&self) -> &'static [u8] {
+- &ALIGNMENT_PATTERNS[self.0 - 1]
++ ALIGNMENT_PATTERNS[self.0 - 1]
+ }
+
+ fn poly(&self) -> &'static [u8] {
+@@ -479,7 +476,7 @@ struct EncodedMsg<'a> {
+ /// Data to be put in the QR code, with correct segment encoding, padding, and
+ /// Error Code Correction.
+ impl EncodedMsg<'_> {
+- fn new<'a, 'b>(segments: &[&Segment<'b>], data: &'a mut [u8]) -> Option<EncodedMsg<'a>> {
++ fn new<'a>(segments: &[&Segment<'_>], data: &'a mut [u8]) -> Option<EncodedMsg<'a>> {
+ let version = Version::from_segments(segments)?;
+ let ec_size = version.ec_size();
+ let g1_blocks = version.g1_blocks();
+@@ -492,7 +489,7 @@ fn new<'a, 'b>(segments: &[&Segment<'b>], data: &'a mut [u8]) -> Option<EncodedM
+ data.fill(0);
+
+ let mut em = EncodedMsg {
+- data: data,
++ data,
+ ec_size,
+ g1_blocks,
+ g2_blocks,
+@@ -722,7 +719,10 @@ fn draw_finders(&mut self) {
+
+ fn is_finder(&self, x: u8, y: u8) -> bool {
+ let end = self.width - 8;
+- (x < 8 && y < 8) || (x < 8 && y >= end) || (x >= end && y < 8)
++ #[expect(clippy::nonminimal_bool)]
++ {
++ (x < 8 && y < 8) || (x < 8 && y >= end) || (x >= end && y < 8)
++ }
+ }
+
+ // Alignment pattern: 5x5 squares in a grid.
+@@ -931,7 +931,7 @@ fn draw_all(&mut self, data: impl Iterator<Item = u8>) {
+ /// They must remain valid for the duration of the function call.
+ #[no_mangle]
+ pub unsafe extern "C" fn drm_panic_qr_generate(
+- url: *const i8,
++ url: *const kernel::ffi::c_char,
+ data: *mut u8,
+ data_len: usize,
+ data_size: usize,
+@@ -978,10 +978,11 @@ fn draw_all(&mut self, data: impl Iterator<Item = u8>) {
+ /// * `url_len`: Length of the URL.
+ ///
+ /// * If `url_len` > 0, remove the 2 segments header/length and also count the
+-/// conversion to numeric segments.
++/// conversion to numeric segments.
+ /// * If `url_len` = 0, only removes 3 bytes for 1 binary segment.
+ #[no_mangle]
+ pub extern "C" fn drm_panic_qr_max_data_size(version: u8, url_len: usize) -> usize {
++ #[expect(clippy::manual_range_contains)]
+ if version < 1 || version > 40 {
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/i915/display/i9xx_plane.c b/drivers/gpu/drm/i915/display/i9xx_plane.c
+index 9447f7229b6084..17a1e3801a85c0 100644
+--- a/drivers/gpu/drm/i915/display/i9xx_plane.c
++++ b/drivers/gpu/drm/i915/display/i9xx_plane.c
+@@ -416,7 +416,8 @@ static int i9xx_plane_min_cdclk(const struct intel_crtc_state *crtc_state,
+ return DIV_ROUND_UP(pixel_rate * num, den);
+ }
+
+-static void i9xx_plane_update_noarm(struct intel_plane *plane,
++static void i9xx_plane_update_noarm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+@@ -444,7 +445,8 @@ static void i9xx_plane_update_noarm(struct intel_plane *plane,
+ }
+ }
+
+-static void i9xx_plane_update_arm(struct intel_plane *plane,
++static void i9xx_plane_update_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+@@ -507,7 +509,8 @@ static void i9xx_plane_update_arm(struct intel_plane *plane,
+ intel_plane_ggtt_offset(plane_state) + dspaddr_offset);
+ }
+
+-static void i830_plane_update_arm(struct intel_plane *plane,
++static void i830_plane_update_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+@@ -517,11 +520,12 @@ static void i830_plane_update_arm(struct intel_plane *plane,
+ * Additional breakage on i830 causes register reads to return
+ * the last latched value instead of the last written value [ALM026].
+ */
+- i9xx_plane_update_noarm(plane, crtc_state, plane_state);
+- i9xx_plane_update_arm(plane, crtc_state, plane_state);
++ i9xx_plane_update_noarm(dsb, plane, crtc_state, plane_state);
++ i9xx_plane_update_arm(dsb, plane, crtc_state, plane_state);
+ }
+
+-static void i9xx_plane_disable_arm(struct intel_plane *plane,
++static void i9xx_plane_disable_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state)
+ {
+ struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+@@ -549,7 +553,8 @@ static void i9xx_plane_disable_arm(struct intel_plane *plane,
+ }
+
+ static void
+-g4x_primary_async_flip(struct intel_plane *plane,
++g4x_primary_async_flip(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state,
+ bool async_flip)
+@@ -569,7 +574,8 @@ g4x_primary_async_flip(struct intel_plane *plane,
+ }
+
+ static void
+-vlv_primary_async_flip(struct intel_plane *plane,
++vlv_primary_async_flip(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state,
+ bool async_flip)
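
Every i9xx plane hook gains a leading struct intel_dsb * parameter. The pattern threads a display state buffer through the plane-update paths so writes can later be emitted as a DSB batch rather than immediate MMIO; a NULL dsb presumably keeps the direct path. The signature change, condensed:

    static void my_plane_update_arm(struct intel_dsb *dsb,
                                    struct intel_plane *plane,
                                    const struct intel_crtc_state *crtc_state,
                                    const struct intel_plane_state *plane_state);
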
+diff --git a/drivers/gpu/drm/i915/display/icl_dsi.c b/drivers/gpu/drm/i915/display/icl_dsi.c
+index 293efc1f841dff..4e95b8eda23f74 100644
+--- a/drivers/gpu/drm/i915/display/icl_dsi.c
++++ b/drivers/gpu/drm/i915/display/icl_dsi.c
+@@ -50,38 +50,38 @@
+ #include "skl_scaler.h"
+ #include "skl_universal_plane.h"
+
+-static int header_credits_available(struct drm_i915_private *dev_priv,
++static int header_credits_available(struct intel_display *display,
+ enum transcoder dsi_trans)
+ {
+- return (intel_de_read(dev_priv, DSI_CMD_TXCTL(dsi_trans)) & FREE_HEADER_CREDIT_MASK)
++ return (intel_de_read(display, DSI_CMD_TXCTL(dsi_trans)) & FREE_HEADER_CREDIT_MASK)
+ >> FREE_HEADER_CREDIT_SHIFT;
+ }
+
+-static int payload_credits_available(struct drm_i915_private *dev_priv,
++static int payload_credits_available(struct intel_display *display,
+ enum transcoder dsi_trans)
+ {
+- return (intel_de_read(dev_priv, DSI_CMD_TXCTL(dsi_trans)) & FREE_PLOAD_CREDIT_MASK)
++ return (intel_de_read(display, DSI_CMD_TXCTL(dsi_trans)) & FREE_PLOAD_CREDIT_MASK)
+ >> FREE_PLOAD_CREDIT_SHIFT;
+ }
+
+-static bool wait_for_header_credits(struct drm_i915_private *dev_priv,
++static bool wait_for_header_credits(struct intel_display *display,
+ enum transcoder dsi_trans, int hdr_credit)
+ {
+- if (wait_for_us(header_credits_available(dev_priv, dsi_trans) >=
++ if (wait_for_us(header_credits_available(display, dsi_trans) >=
+ hdr_credit, 100)) {
+- drm_err(&dev_priv->drm, "DSI header credits not released\n");
++ drm_err(display->drm, "DSI header credits not released\n");
+ return false;
+ }
+
+ return true;
+ }
+
+-static bool wait_for_payload_credits(struct drm_i915_private *dev_priv,
++static bool wait_for_payload_credits(struct intel_display *display,
+ enum transcoder dsi_trans, int payld_credit)
+ {
+- if (wait_for_us(payload_credits_available(dev_priv, dsi_trans) >=
++ if (wait_for_us(payload_credits_available(display, dsi_trans) >=
+ payld_credit, 100)) {
+- drm_err(&dev_priv->drm, "DSI payload credits not released\n");
++ drm_err(display->drm, "DSI payload credits not released\n");
+ return false;
+ }
+
+@@ -98,7 +98,7 @@ static enum transcoder dsi_port_to_transcoder(enum port port)
+
+ static void wait_for_cmds_dispatched_to_panel(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ struct mipi_dsi_device *dsi;
+ enum port port;
+@@ -108,8 +108,8 @@ static void wait_for_cmds_dispatched_to_panel(struct intel_encoder *encoder)
+ /* wait for header/payload credits to be released */
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- wait_for_header_credits(dev_priv, dsi_trans, MAX_HEADER_CREDIT);
+- wait_for_payload_credits(dev_priv, dsi_trans, MAX_PLOAD_CREDIT);
++ wait_for_header_credits(display, dsi_trans, MAX_HEADER_CREDIT);
++ wait_for_payload_credits(display, dsi_trans, MAX_PLOAD_CREDIT);
+ }
+
+ /* send nop DCS command */
+@@ -119,22 +119,22 @@ static void wait_for_cmds_dispatched_to_panel(struct intel_encoder *encoder)
+ dsi->channel = 0;
+ ret = mipi_dsi_dcs_nop(dsi);
+ if (ret < 0)
+- drm_err(&dev_priv->drm,
++ drm_err(display->drm,
+ "error sending DCS NOP command\n");
+ }
+
+ /* wait for header credits to be released */
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- wait_for_header_credits(dev_priv, dsi_trans, MAX_HEADER_CREDIT);
++ wait_for_header_credits(display, dsi_trans, MAX_HEADER_CREDIT);
+ }
+
+ /* wait for LP TX in progress bit to be cleared */
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- if (wait_for_us(!(intel_de_read(dev_priv, DSI_LP_MSG(dsi_trans)) &
++ if (wait_for_us(!(intel_de_read(display, DSI_LP_MSG(dsi_trans)) &
+ LPTX_IN_PROGRESS), 20))
+- drm_err(&dev_priv->drm, "LPTX bit not cleared\n");
++ drm_err(display->drm, "LPTX bit not cleared\n");
+ }
+ }
+
+@@ -142,7 +142,7 @@ static int dsi_send_pkt_payld(struct intel_dsi_host *host,
+ const struct mipi_dsi_packet *packet)
+ {
+ struct intel_dsi *intel_dsi = host->intel_dsi;
+- struct drm_i915_private *i915 = to_i915(intel_dsi->base.base.dev);
++ struct intel_display *display = to_intel_display(&intel_dsi->base);
+ enum transcoder dsi_trans = dsi_port_to_transcoder(host->port);
+ const u8 *data = packet->payload;
+ u32 len = packet->payload_length;
+@@ -150,20 +150,20 @@ static int dsi_send_pkt_payld(struct intel_dsi_host *host,
+
+ /* payload queue can accept *256 bytes*, check limit */
+ if (len > MAX_PLOAD_CREDIT * 4) {
+- drm_err(&i915->drm, "payload size exceeds max queue limit\n");
++ drm_err(display->drm, "payload size exceeds max queue limit\n");
+ return -EINVAL;
+ }
+
+ for (i = 0; i < len; i += 4) {
+ u32 tmp = 0;
+
+- if (!wait_for_payload_credits(i915, dsi_trans, 1))
++ if (!wait_for_payload_credits(display, dsi_trans, 1))
+ return -EBUSY;
+
+ for (j = 0; j < min_t(u32, len - i, 4); j++)
+ tmp |= *data++ << 8 * j;
+
+- intel_de_write(i915, DSI_CMD_TXPYLD(dsi_trans), tmp);
++ intel_de_write(display, DSI_CMD_TXPYLD(dsi_trans), tmp);
+ }
+
+ return 0;
+@@ -174,14 +174,14 @@ static int dsi_send_pkt_hdr(struct intel_dsi_host *host,
+ bool enable_lpdt)
+ {
+ struct intel_dsi *intel_dsi = host->intel_dsi;
+- struct drm_i915_private *dev_priv = to_i915(intel_dsi->base.base.dev);
++ struct intel_display *display = to_intel_display(&intel_dsi->base);
+ enum transcoder dsi_trans = dsi_port_to_transcoder(host->port);
+ u32 tmp;
+
+- if (!wait_for_header_credits(dev_priv, dsi_trans, 1))
++ if (!wait_for_header_credits(display, dsi_trans, 1))
+ return -EBUSY;
+
+- tmp = intel_de_read(dev_priv, DSI_CMD_TXHDR(dsi_trans));
++ tmp = intel_de_read(display, DSI_CMD_TXHDR(dsi_trans));
+
+ if (packet->payload)
+ tmp |= PAYLOAD_PRESENT;
+@@ -200,15 +200,14 @@ static int dsi_send_pkt_hdr(struct intel_dsi_host *host,
+ tmp |= ((packet->header[0] & DT_MASK) << DT_SHIFT);
+ tmp |= (packet->header[1] << PARAM_WC_LOWER_SHIFT);
+ tmp |= (packet->header[2] << PARAM_WC_UPPER_SHIFT);
+- intel_de_write(dev_priv, DSI_CMD_TXHDR(dsi_trans), tmp);
++ intel_de_write(display, DSI_CMD_TXHDR(dsi_trans), tmp);
+
+ return 0;
+ }
+
+ void icl_dsi_frame_update(struct intel_crtc_state *crtc_state)
+ {
+- struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+- struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
++ struct intel_display *display = to_intel_display(crtc_state);
+ u32 mode_flags;
+ enum port port;
+
+@@ -226,12 +225,13 @@ void icl_dsi_frame_update(struct intel_crtc_state *crtc_state)
+ else
+ return;
+
+- intel_de_rmw(dev_priv, DSI_CMD_FRMCTL(port), 0, DSI_FRAME_UPDATE_REQUEST);
++ intel_de_rmw(display, DSI_CMD_FRMCTL(port), 0,
++ DSI_FRAME_UPDATE_REQUEST);
+ }
+
+ static void dsi_program_swing_and_deemphasis(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum phy phy;
+ u32 tmp, mask, val;
+@@ -245,31 +245,31 @@ static void dsi_program_swing_and_deemphasis(struct intel_encoder *encoder)
+ mask = SCALING_MODE_SEL_MASK | RTERM_SELECT_MASK;
+ val = SCALING_MODE_SEL(0x2) | TAP2_DISABLE | TAP3_DISABLE |
+ RTERM_SELECT(0x6);
+- tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW5_LN(0, phy));
++ tmp = intel_de_read(display, ICL_PORT_TX_DW5_LN(0, phy));
+ tmp &= ~mask;
+ tmp |= val;
+- intel_de_write(dev_priv, ICL_PORT_TX_DW5_GRP(phy), tmp);
+- intel_de_rmw(dev_priv, ICL_PORT_TX_DW5_AUX(phy), mask, val);
++ intel_de_write(display, ICL_PORT_TX_DW5_GRP(phy), tmp);
++ intel_de_rmw(display, ICL_PORT_TX_DW5_AUX(phy), mask, val);
+
+ mask = SWING_SEL_LOWER_MASK | SWING_SEL_UPPER_MASK |
+ RCOMP_SCALAR_MASK;
+ val = SWING_SEL_UPPER(0x2) | SWING_SEL_LOWER(0x2) |
+ RCOMP_SCALAR(0x98);
+- tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW2_LN(0, phy));
++ tmp = intel_de_read(display, ICL_PORT_TX_DW2_LN(0, phy));
+ tmp &= ~mask;
+ tmp |= val;
+- intel_de_write(dev_priv, ICL_PORT_TX_DW2_GRP(phy), tmp);
+- intel_de_rmw(dev_priv, ICL_PORT_TX_DW2_AUX(phy), mask, val);
++ intel_de_write(display, ICL_PORT_TX_DW2_GRP(phy), tmp);
++ intel_de_rmw(display, ICL_PORT_TX_DW2_AUX(phy), mask, val);
+
+ mask = POST_CURSOR_1_MASK | POST_CURSOR_2_MASK |
+ CURSOR_COEFF_MASK;
+ val = POST_CURSOR_1(0x0) | POST_CURSOR_2(0x0) |
+ CURSOR_COEFF(0x3f);
+- intel_de_rmw(dev_priv, ICL_PORT_TX_DW4_AUX(phy), mask, val);
++ intel_de_rmw(display, ICL_PORT_TX_DW4_AUX(phy), mask, val);
+
+ /* Bspec: must not use GRP register for write */
+ for (lane = 0; lane <= 3; lane++)
+- intel_de_rmw(dev_priv, ICL_PORT_TX_DW4_LN(lane, phy),
++ intel_de_rmw(display, ICL_PORT_TX_DW4_LN(lane, phy),
+ mask, val);
+ }
+ }
+@@ -277,13 +277,13 @@ static void dsi_program_swing_and_deemphasis(struct intel_encoder *encoder)
+ static void configure_dual_link_mode(struct intel_encoder *encoder,
+ const struct intel_crtc_state *pipe_config)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ i915_reg_t dss_ctl1_reg, dss_ctl2_reg;
+ u32 dss_ctl1;
+
+ /* FIXME: Move all DSS handling to intel_vdsc.c */
+- if (DISPLAY_VER(dev_priv) >= 12) {
++ if (DISPLAY_VER(display) >= 12) {
+ struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
+
+ dss_ctl1_reg = ICL_PIPE_DSS_CTL1(crtc->pipe);
+@@ -293,7 +293,7 @@ static void configure_dual_link_mode(struct intel_encoder *encoder,
+ dss_ctl2_reg = DSS_CTL2;
+ }
+
+- dss_ctl1 = intel_de_read(dev_priv, dss_ctl1_reg);
++ dss_ctl1 = intel_de_read(display, dss_ctl1_reg);
+ dss_ctl1 |= SPLITTER_ENABLE;
+ dss_ctl1 &= ~OVERLAP_PIXELS_MASK;
+ dss_ctl1 |= OVERLAP_PIXELS(intel_dsi->pixel_overlap);
+@@ -308,19 +308,19 @@ static void configure_dual_link_mode(struct intel_encoder *encoder,
+ dl_buffer_depth = hactive / 2 + intel_dsi->pixel_overlap;
+
+ if (dl_buffer_depth > MAX_DL_BUFFER_TARGET_DEPTH)
+- drm_err(&dev_priv->drm,
++ drm_err(display->drm,
+ "DL buffer depth exceed max value\n");
+
+ dss_ctl1 &= ~LEFT_DL_BUF_TARGET_DEPTH_MASK;
+ dss_ctl1 |= LEFT_DL_BUF_TARGET_DEPTH(dl_buffer_depth);
+- intel_de_rmw(dev_priv, dss_ctl2_reg, RIGHT_DL_BUF_TARGET_DEPTH_MASK,
++ intel_de_rmw(display, dss_ctl2_reg, RIGHT_DL_BUF_TARGET_DEPTH_MASK,
+ RIGHT_DL_BUF_TARGET_DEPTH(dl_buffer_depth));
+ } else {
+ /* Interleave */
+ dss_ctl1 |= DUAL_LINK_MODE_INTERLEAVE;
+ }
+
+- intel_de_write(dev_priv, dss_ctl1_reg, dss_ctl1);
++ intel_de_write(display, dss_ctl1_reg, dss_ctl1);
+ }
+
+ /* aka DSI 8X clock */
+@@ -341,6 +341,7 @@ static int afe_clk(struct intel_encoder *encoder,
+ static void gen11_dsi_program_esc_clk_div(struct intel_encoder *encoder,
+ const struct intel_crtc_state *crtc_state)
+ {
++ struct intel_display *display = to_intel_display(encoder);
+ struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum port port;
+@@ -360,33 +361,34 @@ static void gen11_dsi_program_esc_clk_div(struct intel_encoder *encoder,
+ }
+
+ for_each_dsi_port(port, intel_dsi->ports) {
+- intel_de_write(dev_priv, ICL_DSI_ESC_CLK_DIV(port),
++ intel_de_write(display, ICL_DSI_ESC_CLK_DIV(port),
+ esc_clk_div_m & ICL_ESC_CLK_DIV_MASK);
+- intel_de_posting_read(dev_priv, ICL_DSI_ESC_CLK_DIV(port));
++ intel_de_posting_read(display, ICL_DSI_ESC_CLK_DIV(port));
+ }
+
+ for_each_dsi_port(port, intel_dsi->ports) {
+- intel_de_write(dev_priv, ICL_DPHY_ESC_CLK_DIV(port),
++ intel_de_write(display, ICL_DPHY_ESC_CLK_DIV(port),
+ esc_clk_div_m & ICL_ESC_CLK_DIV_MASK);
+- intel_de_posting_read(dev_priv, ICL_DPHY_ESC_CLK_DIV(port));
++ intel_de_posting_read(display, ICL_DPHY_ESC_CLK_DIV(port));
+ }
+
+ if (IS_ALDERLAKE_S(dev_priv) || IS_ALDERLAKE_P(dev_priv)) {
+ for_each_dsi_port(port, intel_dsi->ports) {
+- intel_de_write(dev_priv, ADL_MIPIO_DW(port, 8),
++ intel_de_write(display, ADL_MIPIO_DW(port, 8),
+ esc_clk_div_m_phy & TX_ESC_CLK_DIV_PHY);
+- intel_de_posting_read(dev_priv, ADL_MIPIO_DW(port, 8));
++ intel_de_posting_read(display, ADL_MIPIO_DW(port, 8));
+ }
+ }
+ }
+
+-static void get_dsi_io_power_domains(struct drm_i915_private *dev_priv,
+- struct intel_dsi *intel_dsi)
++static void get_dsi_io_power_domains(struct intel_dsi *intel_dsi)
+ {
++ struct intel_display *display = to_intel_display(&intel_dsi->base);
++ struct drm_i915_private *dev_priv = to_i915(display->drm);
+ enum port port;
+
+ for_each_dsi_port(port, intel_dsi->ports) {
+- drm_WARN_ON(&dev_priv->drm, intel_dsi->io_wakeref[port]);
++ drm_WARN_ON(display->drm, intel_dsi->io_wakeref[port]);
+ intel_dsi->io_wakeref[port] =
+ intel_display_power_get(dev_priv,
+ port == PORT_A ?
+@@ -397,15 +399,15 @@ static void get_dsi_io_power_domains(struct drm_i915_private *dev_priv,
+
+ static void gen11_dsi_enable_io_power(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum port port;
+
+ for_each_dsi_port(port, intel_dsi->ports)
+- intel_de_rmw(dev_priv, ICL_DSI_IO_MODECTL(port),
++ intel_de_rmw(display, ICL_DSI_IO_MODECTL(port),
+ 0, COMBO_PHY_MODE_DSI);
+
+- get_dsi_io_power_domains(dev_priv, intel_dsi);
++ get_dsi_io_power_domains(intel_dsi);
+ }
+
+ static void gen11_dsi_power_up_lanes(struct intel_encoder *encoder)
+@@ -421,6 +423,7 @@ static void gen11_dsi_power_up_lanes(struct intel_encoder *encoder)
+
+ static void gen11_dsi_config_phy_lanes_sequence(struct intel_encoder *encoder)
+ {
++ struct intel_display *display = to_intel_display(encoder);
+ struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum phy phy;
+@@ -429,32 +432,33 @@ static void gen11_dsi_config_phy_lanes_sequence(struct intel_encoder *encoder)
+
+ /* Step 4b(i) set loadgen select for transmit and aux lanes */
+ for_each_dsi_phy(phy, intel_dsi->phys) {
+- intel_de_rmw(dev_priv, ICL_PORT_TX_DW4_AUX(phy), LOADGEN_SELECT, 0);
++ intel_de_rmw(display, ICL_PORT_TX_DW4_AUX(phy),
++ LOADGEN_SELECT, 0);
+ for (lane = 0; lane <= 3; lane++)
+- intel_de_rmw(dev_priv, ICL_PORT_TX_DW4_LN(lane, phy),
++ intel_de_rmw(display, ICL_PORT_TX_DW4_LN(lane, phy),
+ LOADGEN_SELECT, lane != 2 ? LOADGEN_SELECT : 0);
+ }
+
+ /* Step 4b(ii) set latency optimization for transmit and aux lanes */
+ for_each_dsi_phy(phy, intel_dsi->phys) {
+- intel_de_rmw(dev_priv, ICL_PORT_TX_DW2_AUX(phy),
++ intel_de_rmw(display, ICL_PORT_TX_DW2_AUX(phy),
+ FRC_LATENCY_OPTIM_MASK, FRC_LATENCY_OPTIM_VAL(0x5));
+- tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW2_LN(0, phy));
++ tmp = intel_de_read(display, ICL_PORT_TX_DW2_LN(0, phy));
+ tmp &= ~FRC_LATENCY_OPTIM_MASK;
+ tmp |= FRC_LATENCY_OPTIM_VAL(0x5);
+- intel_de_write(dev_priv, ICL_PORT_TX_DW2_GRP(phy), tmp);
++ intel_de_write(display, ICL_PORT_TX_DW2_GRP(phy), tmp);
+
+ /* For EHL, TGL, set latency optimization for PCS_DW1 lanes */
+ if (IS_JASPERLAKE(dev_priv) || IS_ELKHARTLAKE(dev_priv) ||
+- (DISPLAY_VER(dev_priv) >= 12)) {
+- intel_de_rmw(dev_priv, ICL_PORT_PCS_DW1_AUX(phy),
++ (DISPLAY_VER(display) >= 12)) {
++ intel_de_rmw(display, ICL_PORT_PCS_DW1_AUX(phy),
+ LATENCY_OPTIM_MASK, LATENCY_OPTIM_VAL(0));
+
+- tmp = intel_de_read(dev_priv,
++ tmp = intel_de_read(display,
+ ICL_PORT_PCS_DW1_LN(0, phy));
+ tmp &= ~LATENCY_OPTIM_MASK;
+ tmp |= LATENCY_OPTIM_VAL(0x1);
+- intel_de_write(dev_priv, ICL_PORT_PCS_DW1_GRP(phy),
++ intel_de_write(display, ICL_PORT_PCS_DW1_GRP(phy),
+ tmp);
+ }
+ }
+@@ -463,17 +467,17 @@ static void gen11_dsi_config_phy_lanes_sequence(struct intel_encoder *encoder)
+
+ static void gen11_dsi_voltage_swing_program_seq(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ u32 tmp;
+ enum phy phy;
+
+ /* clear common keeper enable bit */
+ for_each_dsi_phy(phy, intel_dsi->phys) {
+- tmp = intel_de_read(dev_priv, ICL_PORT_PCS_DW1_LN(0, phy));
++ tmp = intel_de_read(display, ICL_PORT_PCS_DW1_LN(0, phy));
+ tmp &= ~COMMON_KEEPER_EN;
+- intel_de_write(dev_priv, ICL_PORT_PCS_DW1_GRP(phy), tmp);
+- intel_de_rmw(dev_priv, ICL_PORT_PCS_DW1_AUX(phy), COMMON_KEEPER_EN, 0);
++ intel_de_write(display, ICL_PORT_PCS_DW1_GRP(phy), tmp);
++ intel_de_rmw(display, ICL_PORT_PCS_DW1_AUX(phy), COMMON_KEEPER_EN, 0);
+ }
+
+ /*
+@@ -482,14 +486,15 @@ static void gen11_dsi_voltage_swing_program_seq(struct intel_encoder *encoder)
+ * as part of lane phy sequence configuration
+ */
+ for_each_dsi_phy(phy, intel_dsi->phys)
+- intel_de_rmw(dev_priv, ICL_PORT_CL_DW5(phy), 0, SUS_CLOCK_CONFIG);
++ intel_de_rmw(display, ICL_PORT_CL_DW5(phy), 0,
++ SUS_CLOCK_CONFIG);
+
+ /* Clear training enable to change swing values */
+ for_each_dsi_phy(phy, intel_dsi->phys) {
+- tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW5_LN(0, phy));
++ tmp = intel_de_read(display, ICL_PORT_TX_DW5_LN(0, phy));
+ tmp &= ~TX_TRAINING_EN;
+- intel_de_write(dev_priv, ICL_PORT_TX_DW5_GRP(phy), tmp);
+- intel_de_rmw(dev_priv, ICL_PORT_TX_DW5_AUX(phy), TX_TRAINING_EN, 0);
++ intel_de_write(display, ICL_PORT_TX_DW5_GRP(phy), tmp);
++ intel_de_rmw(display, ICL_PORT_TX_DW5_AUX(phy), TX_TRAINING_EN, 0);
+ }
+
+ /* Program swing and de-emphasis */
+@@ -497,26 +502,26 @@ static void gen11_dsi_voltage_swing_program_seq(struct intel_encoder *encoder)
+
+ /* Set training enable to trigger update */
+ for_each_dsi_phy(phy, intel_dsi->phys) {
+- tmp = intel_de_read(dev_priv, ICL_PORT_TX_DW5_LN(0, phy));
++ tmp = intel_de_read(display, ICL_PORT_TX_DW5_LN(0, phy));
+ tmp |= TX_TRAINING_EN;
+- intel_de_write(dev_priv, ICL_PORT_TX_DW5_GRP(phy), tmp);
+- intel_de_rmw(dev_priv, ICL_PORT_TX_DW5_AUX(phy), 0, TX_TRAINING_EN);
++ intel_de_write(display, ICL_PORT_TX_DW5_GRP(phy), tmp);
++ intel_de_rmw(display, ICL_PORT_TX_DW5_AUX(phy), 0, TX_TRAINING_EN);
+ }
+ }
+
+ static void gen11_dsi_enable_ddi_buffer(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum port port;
+
+ for_each_dsi_port(port, intel_dsi->ports) {
+- intel_de_rmw(dev_priv, DDI_BUF_CTL(port), 0, DDI_BUF_CTL_ENABLE);
++ intel_de_rmw(display, DDI_BUF_CTL(port), 0, DDI_BUF_CTL_ENABLE);
+
+- if (wait_for_us(!(intel_de_read(dev_priv, DDI_BUF_CTL(port)) &
++ if (wait_for_us(!(intel_de_read(display, DDI_BUF_CTL(port)) &
+ DDI_BUF_IS_IDLE),
+ 500))
+- drm_err(&dev_priv->drm, "DDI port:%c buffer idle\n",
++ drm_err(display->drm, "DDI port:%c buffer idle\n",
+ port_name(port));
+ }
+ }
+@@ -525,6 +530,7 @@ static void
+ gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder,
+ const struct intel_crtc_state *crtc_state)
+ {
++ struct intel_display *display = to_intel_display(encoder);
+ struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum port port;
+@@ -532,12 +538,12 @@ gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder,
+
+ /* Program DPHY clock lanes timings */
+ for_each_dsi_port(port, intel_dsi->ports)
+- intel_de_write(dev_priv, DPHY_CLK_TIMING_PARAM(port),
++ intel_de_write(display, DPHY_CLK_TIMING_PARAM(port),
+ intel_dsi->dphy_reg);
+
+ /* Program DPHY data lanes timings */
+ for_each_dsi_port(port, intel_dsi->ports)
+- intel_de_write(dev_priv, DPHY_DATA_TIMING_PARAM(port),
++ intel_de_write(display, DPHY_DATA_TIMING_PARAM(port),
+ intel_dsi->dphy_data_lane_reg);
+
+ /*
+@@ -546,10 +552,10 @@ gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder,
+ * a value '0' inside TA_PARAM_REGISTERS otherwise
+ * leave all fields at HW default values.
+ */
+- if (DISPLAY_VER(dev_priv) == 11) {
++ if (DISPLAY_VER(display) == 11) {
+ if (afe_clk(encoder, crtc_state) <= 800000) {
+ for_each_dsi_port(port, intel_dsi->ports)
+- intel_de_rmw(dev_priv, DPHY_TA_TIMING_PARAM(port),
++ intel_de_rmw(display, DPHY_TA_TIMING_PARAM(port),
+ TA_SURE_MASK,
+ TA_SURE_OVERRIDE | TA_SURE(0));
+ }
+@@ -557,7 +563,7 @@ gen11_dsi_setup_dphy_timings(struct intel_encoder *encoder,
+
+ if (IS_JASPERLAKE(dev_priv) || IS_ELKHARTLAKE(dev_priv)) {
+ for_each_dsi_phy(phy, intel_dsi->phys)
+- intel_de_rmw(dev_priv, ICL_DPHY_CHKN(phy),
++ intel_de_rmw(display, ICL_DPHY_CHKN(phy),
+ 0, ICL_DPHY_CHKN_AFE_OVER_PPI_STRAP);
+ }
+ }
+@@ -566,30 +572,30 @@ static void
+ gen11_dsi_setup_timings(struct intel_encoder *encoder,
+ const struct intel_crtc_state *crtc_state)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum port port;
+
+ /* Program T-INIT master registers */
+ for_each_dsi_port(port, intel_dsi->ports)
+- intel_de_rmw(dev_priv, ICL_DSI_T_INIT_MASTER(port),
++ intel_de_rmw(display, ICL_DSI_T_INIT_MASTER(port),
+ DSI_T_INIT_MASTER_MASK, intel_dsi->init_count);
+
+ /* shadow register inside display core */
+ for_each_dsi_port(port, intel_dsi->ports)
+- intel_de_write(dev_priv, DSI_CLK_TIMING_PARAM(port),
++ intel_de_write(display, DSI_CLK_TIMING_PARAM(port),
+ intel_dsi->dphy_reg);
+
+ /* shadow register inside display core */
+ for_each_dsi_port(port, intel_dsi->ports)
+- intel_de_write(dev_priv, DSI_DATA_TIMING_PARAM(port),
++ intel_de_write(display, DSI_DATA_TIMING_PARAM(port),
+ intel_dsi->dphy_data_lane_reg);
+
+ /* shadow register inside display core */
+- if (DISPLAY_VER(dev_priv) == 11) {
++ if (DISPLAY_VER(display) == 11) {
+ if (afe_clk(encoder, crtc_state) <= 800000) {
+ for_each_dsi_port(port, intel_dsi->ports) {
+- intel_de_rmw(dev_priv, DSI_TA_TIMING_PARAM(port),
++ intel_de_rmw(display, DSI_TA_TIMING_PARAM(port),
+ TA_SURE_MASK,
+ TA_SURE_OVERRIDE | TA_SURE(0));
+ }
+@@ -599,45 +605,45 @@ gen11_dsi_setup_timings(struct intel_encoder *encoder,
+
+ static void gen11_dsi_gate_clocks(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ u32 tmp;
+ enum phy phy;
+
+- mutex_lock(&dev_priv->display.dpll.lock);
+- tmp = intel_de_read(dev_priv, ICL_DPCLKA_CFGCR0);
++ mutex_lock(&display->dpll.lock);
++ tmp = intel_de_read(display, ICL_DPCLKA_CFGCR0);
+ for_each_dsi_phy(phy, intel_dsi->phys)
+ tmp |= ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy);
+
+- intel_de_write(dev_priv, ICL_DPCLKA_CFGCR0, tmp);
+- mutex_unlock(&dev_priv->display.dpll.lock);
++ intel_de_write(display, ICL_DPCLKA_CFGCR0, tmp);
++ mutex_unlock(&display->dpll.lock);
+ }
+
+ static void gen11_dsi_ungate_clocks(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ u32 tmp;
+ enum phy phy;
+
+- mutex_lock(&dev_priv->display.dpll.lock);
+- tmp = intel_de_read(dev_priv, ICL_DPCLKA_CFGCR0);
++ mutex_lock(&display->dpll.lock);
++ tmp = intel_de_read(display, ICL_DPCLKA_CFGCR0);
+ for_each_dsi_phy(phy, intel_dsi->phys)
+ tmp &= ~ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy);
+
+- intel_de_write(dev_priv, ICL_DPCLKA_CFGCR0, tmp);
+- mutex_unlock(&dev_priv->display.dpll.lock);
++ intel_de_write(display, ICL_DPCLKA_CFGCR0, tmp);
++ mutex_unlock(&display->dpll.lock);
+ }
+
+ static bool gen11_dsi_is_clock_enabled(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ bool clock_enabled = false;
+ enum phy phy;
+ u32 tmp;
+
+- tmp = intel_de_read(dev_priv, ICL_DPCLKA_CFGCR0);
++ tmp = intel_de_read(display, ICL_DPCLKA_CFGCR0);
+
+ for_each_dsi_phy(phy, intel_dsi->phys) {
+ if (!(tmp & ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy)))
+@@ -650,36 +656,36 @@ static bool gen11_dsi_is_clock_enabled(struct intel_encoder *encoder)
+ static void gen11_dsi_map_pll(struct intel_encoder *encoder,
+ const struct intel_crtc_state *crtc_state)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ struct intel_shared_dpll *pll = crtc_state->shared_dpll;
+ enum phy phy;
+ u32 val;
+
+- mutex_lock(&dev_priv->display.dpll.lock);
++ mutex_lock(&display->dpll.lock);
+
+- val = intel_de_read(dev_priv, ICL_DPCLKA_CFGCR0);
++ val = intel_de_read(display, ICL_DPCLKA_CFGCR0);
+ for_each_dsi_phy(phy, intel_dsi->phys) {
+ val &= ~ICL_DPCLKA_CFGCR0_DDI_CLK_SEL_MASK(phy);
+ val |= ICL_DPCLKA_CFGCR0_DDI_CLK_SEL(pll->info->id, phy);
+ }
+- intel_de_write(dev_priv, ICL_DPCLKA_CFGCR0, val);
++ intel_de_write(display, ICL_DPCLKA_CFGCR0, val);
+
+ for_each_dsi_phy(phy, intel_dsi->phys) {
+ val &= ~ICL_DPCLKA_CFGCR0_DDI_CLK_OFF(phy);
+ }
+- intel_de_write(dev_priv, ICL_DPCLKA_CFGCR0, val);
++ intel_de_write(display, ICL_DPCLKA_CFGCR0, val);
+
+- intel_de_posting_read(dev_priv, ICL_DPCLKA_CFGCR0);
++ intel_de_posting_read(display, ICL_DPCLKA_CFGCR0);
+
+- mutex_unlock(&dev_priv->display.dpll.lock);
++ mutex_unlock(&display->dpll.lock);
+ }
+
+ static void
+ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
+ const struct intel_crtc_state *pipe_config)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ struct intel_crtc *crtc = to_intel_crtc(pipe_config->uapi.crtc);
+ enum pipe pipe = crtc->pipe;
+@@ -689,7 +695,7 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
+
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- tmp = intel_de_read(dev_priv, DSI_TRANS_FUNC_CONF(dsi_trans));
++ tmp = intel_de_read(display, DSI_TRANS_FUNC_CONF(dsi_trans));
+
+ if (intel_dsi->eotp_pkt)
+ tmp &= ~EOTP_DISABLED;
+@@ -745,7 +751,7 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
+ }
+ }
+
+- if (DISPLAY_VER(dev_priv) >= 12) {
++ if (DISPLAY_VER(display) >= 12) {
+ if (is_vid_mode(intel_dsi))
+ tmp |= BLANKING_PACKET_ENABLE;
+ }
+@@ -778,15 +784,15 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
+ tmp |= TE_SOURCE_GPIO;
+ }
+
+- intel_de_write(dev_priv, DSI_TRANS_FUNC_CONF(dsi_trans), tmp);
++ intel_de_write(display, DSI_TRANS_FUNC_CONF(dsi_trans), tmp);
+ }
+
+ /* enable port sync mode if dual link */
+ if (intel_dsi->dual_link) {
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- intel_de_rmw(dev_priv,
+- TRANS_DDI_FUNC_CTL2(dev_priv, dsi_trans),
++ intel_de_rmw(display,
++ TRANS_DDI_FUNC_CTL2(display, dsi_trans),
+ 0, PORT_SYNC_MODE_ENABLE);
+ }
+
+@@ -798,10 +804,10 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
+ dsi_trans = dsi_port_to_transcoder(port);
+
+ /* select data lane width */
+- tmp = intel_de_read(dev_priv,
+- TRANS_DDI_FUNC_CTL(dev_priv, dsi_trans));
+- tmp &= ~DDI_PORT_WIDTH_MASK;
+- tmp |= DDI_PORT_WIDTH(intel_dsi->lane_count);
++ tmp = intel_de_read(display,
++ TRANS_DDI_FUNC_CTL(display, dsi_trans));
++ tmp &= ~TRANS_DDI_PORT_WIDTH_MASK;
++ tmp |= TRANS_DDI_PORT_WIDTH(intel_dsi->lane_count);
+
+ /* select input pipe */
+ tmp &= ~TRANS_DDI_EDP_INPUT_MASK;
+@@ -825,16 +831,16 @@ gen11_dsi_configure_transcoder(struct intel_encoder *encoder,
+
+ /* enable DDI buffer */
+ tmp |= TRANS_DDI_FUNC_ENABLE;
+- intel_de_write(dev_priv,
+- TRANS_DDI_FUNC_CTL(dev_priv, dsi_trans), tmp);
++ intel_de_write(display,
++ TRANS_DDI_FUNC_CTL(display, dsi_trans), tmp);
+ }
+
+ /* wait for link ready */
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- if (wait_for_us((intel_de_read(dev_priv, DSI_TRANS_FUNC_CONF(dsi_trans)) &
++ if (wait_for_us((intel_de_read(display, DSI_TRANS_FUNC_CONF(dsi_trans)) &
+ LINK_READY), 2500))
+- drm_err(&dev_priv->drm, "DSI link not ready\n");
++ drm_err(display->drm, "DSI link not ready\n");
+ }
+ }
+
+@@ -842,7 +848,7 @@ static void
+ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
+ const struct intel_crtc_state *crtc_state)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ const struct drm_display_mode *adjusted_mode =
+ &crtc_state->hw.adjusted_mode;
+@@ -909,17 +915,17 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
+
+ /* minimum hactive as per bspec: 256 pixels */
+ if (adjusted_mode->crtc_hdisplay < 256)
+- drm_err(&dev_priv->drm, "hactive is less then 256 pixels\n");
++ drm_err(display->drm, "hactive is less than 256 pixels\n");
+
+ /* if RGB666 format, then hactive must be multiple of 4 pixels */
+ if (intel_dsi->pixel_format == MIPI_DSI_FMT_RGB666 && hactive % 4 != 0)
+- drm_err(&dev_priv->drm,
++ drm_err(display->drm,
+ "hactive pixels are not multiple of 4\n");
+
+ /* program TRANS_HTOTAL register */
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- intel_de_write(dev_priv, TRANS_HTOTAL(dev_priv, dsi_trans),
++ intel_de_write(display, TRANS_HTOTAL(display, dsi_trans),
+ HACTIVE(hactive - 1) | HTOTAL(htotal - 1));
+ }
+
+@@ -928,12 +934,12 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
+ if (intel_dsi->video_mode == NON_BURST_SYNC_PULSE) {
+ /* BSPEC: hsync size should be atleast 16 pixels */
+ if (hsync_size < 16)
+- drm_err(&dev_priv->drm,
++ drm_err(display->drm,
+ "hsync size < 16 pixels\n");
+ }
+
+ if (hback_porch < 16)
+- drm_err(&dev_priv->drm, "hback porch < 16 pixels\n");
++ drm_err(display->drm, "hback porch < 16 pixels\n");
+
+ if (intel_dsi->dual_link) {
+ hsync_start /= 2;
+@@ -942,8 +948,8 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
+
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- intel_de_write(dev_priv,
+- TRANS_HSYNC(dev_priv, dsi_trans),
++ intel_de_write(display,
++ TRANS_HSYNC(display, dsi_trans),
+ HSYNC_START(hsync_start - 1) | HSYNC_END(hsync_end - 1));
+ }
+ }
+@@ -957,22 +963,22 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
+ * struct drm_display_mode.
+ * For interlace mode: program required pixel minus 2
+ */
+- intel_de_write(dev_priv, TRANS_VTOTAL(dev_priv, dsi_trans),
++ intel_de_write(display, TRANS_VTOTAL(display, dsi_trans),
+ VACTIVE(vactive - 1) | VTOTAL(vtotal - 1));
+ }
+
+ if (vsync_end < vsync_start || vsync_end > vtotal)
+- drm_err(&dev_priv->drm, "Invalid vsync_end value\n");
++ drm_err(display->drm, "Invalid vsync_end value\n");
+
+ if (vsync_start < vactive)
+- drm_err(&dev_priv->drm, "vsync_start less than vactive\n");
++ drm_err(display->drm, "vsync_start less than vactive\n");
+
+ /* program TRANS_VSYNC register for video mode only */
+ if (is_vid_mode(intel_dsi)) {
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- intel_de_write(dev_priv,
+- TRANS_VSYNC(dev_priv, dsi_trans),
++ intel_de_write(display,
++ TRANS_VSYNC(display, dsi_trans),
+ VSYNC_START(vsync_start - 1) | VSYNC_END(vsync_end - 1));
+ }
+ }
+@@ -986,8 +992,8 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
+ if (is_vid_mode(intel_dsi)) {
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- intel_de_write(dev_priv,
+- TRANS_VSYNCSHIFT(dev_priv, dsi_trans),
++ intel_de_write(display,
++ TRANS_VSYNCSHIFT(display, dsi_trans),
+ vsync_shift);
+ }
+ }
+@@ -998,11 +1004,11 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
+ * FIXME get rid of these local hacks and do it right,
+ * this will not handle eg. delayed vblank correctly.
+ */
+- if (DISPLAY_VER(dev_priv) >= 12) {
++ if (DISPLAY_VER(display) >= 12) {
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- intel_de_write(dev_priv,
+- TRANS_VBLANK(dev_priv, dsi_trans),
++ intel_de_write(display,
++ TRANS_VBLANK(display, dsi_trans),
+ VBLANK_START(vactive - 1) | VBLANK_END(vtotal - 1));
+ }
+ }
+@@ -1010,20 +1016,20 @@ gen11_dsi_set_transcoder_timings(struct intel_encoder *encoder,
+
+ static void gen11_dsi_enable_transcoder(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum port port;
+ enum transcoder dsi_trans;
+
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- intel_de_rmw(dev_priv, TRANSCONF(dev_priv, dsi_trans), 0,
++ intel_de_rmw(display, TRANSCONF(display, dsi_trans), 0,
+ TRANSCONF_ENABLE);
+
+ /* wait for transcoder to be enabled */
+- if (intel_de_wait_for_set(dev_priv, TRANSCONF(dev_priv, dsi_trans),
++ if (intel_de_wait_for_set(display, TRANSCONF(display, dsi_trans),
+ TRANSCONF_STATE_ENABLE, 10))
+- drm_err(&dev_priv->drm,
++ drm_err(display->drm,
+ "DSI transcoder not enabled\n");
+ }
+ }
+@@ -1031,7 +1037,7 @@ static void gen11_dsi_enable_transcoder(struct intel_encoder *encoder)
+ static void gen11_dsi_setup_timeouts(struct intel_encoder *encoder,
+ const struct intel_crtc_state *crtc_state)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum port port;
+ enum transcoder dsi_trans;
+@@ -1055,21 +1061,21 @@ static void gen11_dsi_setup_timeouts(struct intel_encoder *encoder,
+ dsi_trans = dsi_port_to_transcoder(port);
+
+ /* program hst_tx_timeout */
+- intel_de_rmw(dev_priv, DSI_HSTX_TO(dsi_trans),
++ intel_de_rmw(display, DSI_HSTX_TO(dsi_trans),
+ HSTX_TIMEOUT_VALUE_MASK,
+ HSTX_TIMEOUT_VALUE(hs_tx_timeout));
+
+ /* FIXME: DSI_CALIB_TO */
+
+ /* program lp_rx_host timeout */
+- intel_de_rmw(dev_priv, DSI_LPRX_HOST_TO(dsi_trans),
++ intel_de_rmw(display, DSI_LPRX_HOST_TO(dsi_trans),
+ LPRX_TIMEOUT_VALUE_MASK,
+ LPRX_TIMEOUT_VALUE(lp_rx_timeout));
+
+ /* FIXME: DSI_PWAIT_TO */
+
+ /* program turn around timeout */
+- intel_de_rmw(dev_priv, DSI_TA_TO(dsi_trans),
++ intel_de_rmw(display, DSI_TA_TO(dsi_trans),
+ TA_TIMEOUT_VALUE_MASK,
+ TA_TIMEOUT_VALUE(ta_timeout));
+ }
+@@ -1078,7 +1084,7 @@ static void gen11_dsi_setup_timeouts(struct intel_encoder *encoder,
+ static void gen11_dsi_config_util_pin(struct intel_encoder *encoder,
+ bool enable)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ u32 tmp;
+
+@@ -1090,7 +1096,7 @@ static void gen11_dsi_config_util_pin(struct intel_encoder *encoder,
+ if (is_vid_mode(intel_dsi) || (intel_dsi->ports & BIT(PORT_B)))
+ return;
+
+- tmp = intel_de_read(dev_priv, UTIL_PIN_CTL);
++ tmp = intel_de_read(display, UTIL_PIN_CTL);
+
+ if (enable) {
+ tmp |= UTIL_PIN_DIRECTION_INPUT;
+@@ -1098,7 +1104,7 @@ static void gen11_dsi_config_util_pin(struct intel_encoder *encoder,
+ } else {
+ tmp &= ~UTIL_PIN_ENABLE;
+ }
+- intel_de_write(dev_priv, UTIL_PIN_CTL, tmp);
++ intel_de_write(display, UTIL_PIN_CTL, tmp);
+ }
+
+ static void
+@@ -1136,7 +1142,7 @@ gen11_dsi_enable_port_and_phy(struct intel_encoder *encoder,
+
+ static void gen11_dsi_powerup_panel(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ struct mipi_dsi_device *dsi;
+ enum port port;
+@@ -1152,14 +1158,14 @@ static void gen11_dsi_powerup_panel(struct intel_encoder *encoder)
+ * FIXME: This uses the number of DW's currently in the payload
+ * receive queue. This is probably not what we want here.
+ */
+- tmp = intel_de_read(dev_priv, DSI_CMD_RXCTL(dsi_trans));
++ tmp = intel_de_read(display, DSI_CMD_RXCTL(dsi_trans));
+ tmp &= NUMBER_RX_PLOAD_DW_MASK;
+ /* multiply "Number Rx Payload DW" by 4 to get max value */
+ tmp = tmp * 4;
+ dsi = intel_dsi->dsi_hosts[port]->device;
+ ret = mipi_dsi_set_maximum_return_packet_size(dsi, tmp);
+ if (ret < 0)
+- drm_err(&dev_priv->drm,
++ drm_err(display->drm,
+ "error setting max return pkt size%d\n", tmp);
+ }
+
+@@ -1219,10 +1225,10 @@ static void gen11_dsi_pre_enable(struct intel_atomic_state *state,
+ static void icl_apply_kvmr_pipe_a_wa(struct intel_encoder *encoder,
+ enum pipe pipe, bool enable)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+
+- if (DISPLAY_VER(dev_priv) == 11 && pipe == PIPE_B)
+- intel_de_rmw(dev_priv, CHICKEN_PAR1_1,
++ if (DISPLAY_VER(display) == 11 && pipe == PIPE_B)
++ intel_de_rmw(display, CHICKEN_PAR1_1,
+ IGNORE_KVMR_PIPE_A,
+ enable ? IGNORE_KVMR_PIPE_A : 0);
+ }
+@@ -1235,13 +1241,13 @@ static void icl_apply_kvmr_pipe_a_wa(struct intel_encoder *encoder,
+ */
+ static void adlp_set_lp_hs_wakeup_gb(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *i915 = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum port port;
+
+- if (DISPLAY_VER(i915) == 13) {
++ if (DISPLAY_VER(display) == 13) {
+ for_each_dsi_port(port, intel_dsi->ports)
+- intel_de_rmw(i915, TGL_DSI_CHKN_REG(port),
++ intel_de_rmw(display, TGL_DSI_CHKN_REG(port),
+ TGL_DSI_CHKN_LSHS_GB_MASK,
+ TGL_DSI_CHKN_LSHS_GB(4));
+ }
+@@ -1275,7 +1281,7 @@ static void gen11_dsi_enable(struct intel_atomic_state *state,
+
+ static void gen11_dsi_disable_transcoder(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum port port;
+ enum transcoder dsi_trans;
+@@ -1284,13 +1290,13 @@ static void gen11_dsi_disable_transcoder(struct intel_encoder *encoder)
+ dsi_trans = dsi_port_to_transcoder(port);
+
+ /* disable transcoder */
+- intel_de_rmw(dev_priv, TRANSCONF(dev_priv, dsi_trans),
++ intel_de_rmw(display, TRANSCONF(display, dsi_trans),
+ TRANSCONF_ENABLE, 0);
+
+ /* wait for transcoder to be disabled */
+- if (intel_de_wait_for_clear(dev_priv, TRANSCONF(dev_priv, dsi_trans),
++ if (intel_de_wait_for_clear(display, TRANSCONF(display, dsi_trans),
+ TRANSCONF_STATE_ENABLE, 50))
+- drm_err(&dev_priv->drm,
++ drm_err(display->drm,
+ "DSI trancoder not disabled\n");
+ }
+ }
+@@ -1307,7 +1313,7 @@ static void gen11_dsi_powerdown_panel(struct intel_encoder *encoder)
+
+ static void gen11_dsi_deconfigure_trancoder(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum port port;
+ enum transcoder dsi_trans;
+@@ -1316,29 +1322,29 @@ static void gen11_dsi_deconfigure_trancoder(struct intel_encoder *encoder)
+ /* disable periodic update mode */
+ if (is_cmd_mode(intel_dsi)) {
+ for_each_dsi_port(port, intel_dsi->ports)
+- intel_de_rmw(dev_priv, DSI_CMD_FRMCTL(port),
++ intel_de_rmw(display, DSI_CMD_FRMCTL(port),
+ DSI_PERIODIC_FRAME_UPDATE_ENABLE, 0);
+ }
+
+ /* put dsi link in ULPS */
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- tmp = intel_de_read(dev_priv, DSI_LP_MSG(dsi_trans));
++ tmp = intel_de_read(display, DSI_LP_MSG(dsi_trans));
+ tmp |= LINK_ENTER_ULPS;
+ tmp &= ~LINK_ULPS_TYPE_LP11;
+- intel_de_write(dev_priv, DSI_LP_MSG(dsi_trans), tmp);
++ intel_de_write(display, DSI_LP_MSG(dsi_trans), tmp);
+
+- if (wait_for_us((intel_de_read(dev_priv, DSI_LP_MSG(dsi_trans)) &
++ if (wait_for_us((intel_de_read(display, DSI_LP_MSG(dsi_trans)) &
+ LINK_IN_ULPS),
+ 10))
+- drm_err(&dev_priv->drm, "DSI link not in ULPS\n");
++ drm_err(display->drm, "DSI link not in ULPS\n");
+ }
+
+ /* disable ddi function */
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- intel_de_rmw(dev_priv,
+- TRANS_DDI_FUNC_CTL(dev_priv, dsi_trans),
++ intel_de_rmw(display,
++ TRANS_DDI_FUNC_CTL(display, dsi_trans),
+ TRANS_DDI_FUNC_ENABLE, 0);
+ }
+
+@@ -1346,8 +1352,8 @@ static void gen11_dsi_deconfigure_trancoder(struct intel_encoder *encoder)
+ if (intel_dsi->dual_link) {
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- intel_de_rmw(dev_priv,
+- TRANS_DDI_FUNC_CTL2(dev_priv, dsi_trans),
++ intel_de_rmw(display,
++ TRANS_DDI_FUNC_CTL2(display, dsi_trans),
+ PORT_SYNC_MODE_ENABLE, 0);
+ }
+ }
+@@ -1355,18 +1361,18 @@ static void gen11_dsi_deconfigure_trancoder(struct intel_encoder *encoder)
+
+ static void gen11_dsi_disable_port(struct intel_encoder *encoder)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum port port;
+
+ gen11_dsi_ungate_clocks(encoder);
+ for_each_dsi_port(port, intel_dsi->ports) {
+- intel_de_rmw(dev_priv, DDI_BUF_CTL(port), DDI_BUF_CTL_ENABLE, 0);
++ intel_de_rmw(display, DDI_BUF_CTL(port), DDI_BUF_CTL_ENABLE, 0);
+
+- if (wait_for_us((intel_de_read(dev_priv, DDI_BUF_CTL(port)) &
++ if (wait_for_us((intel_de_read(display, DDI_BUF_CTL(port)) &
+ DDI_BUF_IS_IDLE),
+ 8))
+- drm_err(&dev_priv->drm,
++ drm_err(display->drm,
+ "DDI port:%c buffer not idle\n",
+ port_name(port));
+ }
+@@ -1375,6 +1381,7 @@ static void gen11_dsi_disable_port(struct intel_encoder *encoder)
+
+ static void gen11_dsi_disable_io_power(struct intel_encoder *encoder)
+ {
++ struct intel_display *display = to_intel_display(encoder);
+ struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum port port;
+@@ -1392,7 +1399,7 @@ static void gen11_dsi_disable_io_power(struct intel_encoder *encoder)
+
+ /* set mode to DDI */
+ for_each_dsi_port(port, intel_dsi->ports)
+- intel_de_rmw(dev_priv, ICL_DSI_IO_MODECTL(port),
++ intel_de_rmw(display, ICL_DSI_IO_MODECTL(port),
+ COMBO_PHY_MODE_DSI, 0);
+ }
+
+@@ -1504,8 +1511,7 @@ static void gen11_dsi_get_timings(struct intel_encoder *encoder,
+
+ static bool gen11_dsi_is_periodic_cmd_mode(struct intel_dsi *intel_dsi)
+ {
+- struct drm_device *dev = intel_dsi->base.base.dev;
+- struct drm_i915_private *dev_priv = to_i915(dev);
++ struct intel_display *display = to_intel_display(&intel_dsi->base);
+ enum transcoder dsi_trans;
+ u32 val;
+
+@@ -1514,7 +1520,7 @@ static bool gen11_dsi_is_periodic_cmd_mode(struct intel_dsi *intel_dsi)
+ else
+ dsi_trans = TRANSCODER_DSI_0;
+
+- val = intel_de_read(dev_priv, DSI_TRANS_FUNC_CONF(dsi_trans));
++ val = intel_de_read(display, DSI_TRANS_FUNC_CONF(dsi_trans));
+ return (val & DSI_PERIODIC_FRAME_UPDATE_ENABLE);
+ }
+
+@@ -1557,7 +1563,7 @@ static void gen11_dsi_get_config(struct intel_encoder *encoder,
+ static void gen11_dsi_sync_state(struct intel_encoder *encoder,
+ const struct intel_crtc_state *crtc_state)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_crtc *intel_crtc;
+ enum pipe pipe;
+
+@@ -1568,9 +1574,9 @@ static void gen11_dsi_sync_state(struct intel_encoder *encoder,
+ pipe = intel_crtc->pipe;
+
+ /* wa verify 1409054076:icl,jsl,ehl */
+- if (DISPLAY_VER(dev_priv) == 11 && pipe == PIPE_B &&
+- !(intel_de_read(dev_priv, CHICKEN_PAR1_1) & IGNORE_KVMR_PIPE_A))
+- drm_dbg_kms(&dev_priv->drm,
++ if (DISPLAY_VER(display) == 11 && pipe == PIPE_B &&
++ !(intel_de_read(display, CHICKEN_PAR1_1) & IGNORE_KVMR_PIPE_A))
++ drm_dbg_kms(display->drm,
+ "[ENCODER:%d:%s] BIOS left IGNORE_KVMR_PIPE_A cleared with pipe B enabled\n",
+ encoder->base.base.id,
+ encoder->base.name);
+@@ -1579,9 +1585,9 @@ static void gen11_dsi_sync_state(struct intel_encoder *encoder,
+ static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder,
+ struct intel_crtc_state *crtc_state)
+ {
+- struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct drm_dsc_config *vdsc_cfg = &crtc_state->dsc.config;
+- int dsc_max_bpc = DISPLAY_VER(dev_priv) >= 12 ? 12 : 10;
++ int dsc_max_bpc = DISPLAY_VER(display) >= 12 ? 12 : 10;
+ bool use_dsc;
+ int ret;
+
+@@ -1606,12 +1612,12 @@ static int gen11_dsi_dsc_compute_config(struct intel_encoder *encoder,
+ return ret;
+
+ /* DSI specific sanity checks on the common code */
+- drm_WARN_ON(&dev_priv->drm, vdsc_cfg->vbr_enable);
+- drm_WARN_ON(&dev_priv->drm, vdsc_cfg->simple_422);
+- drm_WARN_ON(&dev_priv->drm,
++ drm_WARN_ON(display->drm, vdsc_cfg->vbr_enable);
++ drm_WARN_ON(display->drm, vdsc_cfg->simple_422);
++ drm_WARN_ON(display->drm,
+ vdsc_cfg->pic_width % vdsc_cfg->slice_width);
+- drm_WARN_ON(&dev_priv->drm, vdsc_cfg->slice_height < 8);
+- drm_WARN_ON(&dev_priv->drm,
++ drm_WARN_ON(display->drm, vdsc_cfg->slice_height < 8);
++ drm_WARN_ON(display->drm,
+ vdsc_cfg->pic_height % vdsc_cfg->slice_height);
+
+ ret = drm_dsc_compute_rc_parameters(vdsc_cfg);
+@@ -1627,7 +1633,7 @@ static int gen11_dsi_compute_config(struct intel_encoder *encoder,
+ struct intel_crtc_state *pipe_config,
+ struct drm_connector_state *conn_state)
+ {
+- struct drm_i915_private *i915 = to_i915(encoder->base.dev);
++ struct intel_display *display = to_intel_display(encoder);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ struct intel_connector *intel_connector = intel_dsi->attached_connector;
+ struct drm_display_mode *adjusted_mode =
+@@ -1661,7 +1667,7 @@ static int gen11_dsi_compute_config(struct intel_encoder *encoder,
+ pipe_config->clock_set = true;
+
+ if (gen11_dsi_dsc_compute_config(encoder, pipe_config))
+- drm_dbg_kms(&i915->drm, "Attempting to use DSC failed\n");
++ drm_dbg_kms(display->drm, "Attempting to use DSC failed\n");
+
+ pipe_config->port_clock = afe_clk(encoder, pipe_config) / 5;
+
+@@ -1679,15 +1685,13 @@ static int gen11_dsi_compute_config(struct intel_encoder *encoder,
+ static void gen11_dsi_get_power_domains(struct intel_encoder *encoder,
+ struct intel_crtc_state *crtc_state)
+ {
+- struct drm_i915_private *i915 = to_i915(encoder->base.dev);
+-
+- get_dsi_io_power_domains(i915,
+- enc_to_intel_dsi(encoder));
++ get_dsi_io_power_domains(enc_to_intel_dsi(encoder));
+ }
+
+ static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder,
+ enum pipe *pipe)
+ {
++ struct intel_display *display = to_intel_display(encoder);
+ struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+ struct intel_dsi *intel_dsi = enc_to_intel_dsi(encoder);
+ enum transcoder dsi_trans;
+@@ -1703,8 +1707,8 @@ static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder,
+
+ for_each_dsi_port(port, intel_dsi->ports) {
+ dsi_trans = dsi_port_to_transcoder(port);
+- tmp = intel_de_read(dev_priv,
+- TRANS_DDI_FUNC_CTL(dev_priv, dsi_trans));
++ tmp = intel_de_read(display,
++ TRANS_DDI_FUNC_CTL(display, dsi_trans));
+ switch (tmp & TRANS_DDI_EDP_INPUT_MASK) {
+ case TRANS_DDI_EDP_INPUT_A_ON:
+ *pipe = PIPE_A;
+@@ -1719,11 +1723,11 @@ static bool gen11_dsi_get_hw_state(struct intel_encoder *encoder,
+ *pipe = PIPE_D;
+ break;
+ default:
+- drm_err(&dev_priv->drm, "Invalid PIPE input\n");
++ drm_err(display->drm, "Invalid PIPE input\n");
+ goto out;
+ }
+
+- tmp = intel_de_read(dev_priv, TRANSCONF(dev_priv, dsi_trans));
++ tmp = intel_de_read(display, TRANSCONF(display, dsi_trans));
+ ret = tmp & TRANSCONF_ENABLE;
+ }
+ out:
+@@ -1833,8 +1837,7 @@ static const struct mipi_dsi_host_ops gen11_dsi_host_ops = {
+
+ static void icl_dphy_param_init(struct intel_dsi *intel_dsi)
+ {
+- struct drm_device *dev = intel_dsi->base.base.dev;
+- struct drm_i915_private *dev_priv = to_i915(dev);
++ struct intel_display *display = to_intel_display(&intel_dsi->base);
+ struct intel_connector *connector = intel_dsi->attached_connector;
+ struct mipi_config *mipi_config = connector->panel.vbt.dsi.config;
+ u32 tlpx_ns;
+@@ -1858,7 +1861,7 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi)
+ */
+ prepare_cnt = DIV_ROUND_UP(ths_prepare_ns * 4, tlpx_ns);
+ if (prepare_cnt > ICL_PREPARE_CNT_MAX) {
+- drm_dbg_kms(&dev_priv->drm, "prepare_cnt out of range (%d)\n",
++ drm_dbg_kms(display->drm, "prepare_cnt out of range (%d)\n",
+ prepare_cnt);
+ prepare_cnt = ICL_PREPARE_CNT_MAX;
+ }
+@@ -1867,7 +1870,7 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi)
+ clk_zero_cnt = DIV_ROUND_UP(mipi_config->tclk_prepare_clkzero -
+ ths_prepare_ns, tlpx_ns);
+ if (clk_zero_cnt > ICL_CLK_ZERO_CNT_MAX) {
+- drm_dbg_kms(&dev_priv->drm,
++ drm_dbg_kms(display->drm,
+ "clk_zero_cnt out of range (%d)\n", clk_zero_cnt);
+ clk_zero_cnt = ICL_CLK_ZERO_CNT_MAX;
+ }
+@@ -1875,7 +1878,7 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi)
+ /* trail cnt in escape clocks*/
+ trail_cnt = DIV_ROUND_UP(tclk_trail_ns, tlpx_ns);
+ if (trail_cnt > ICL_TRAIL_CNT_MAX) {
+- drm_dbg_kms(&dev_priv->drm, "trail_cnt out of range (%d)\n",
++ drm_dbg_kms(display->drm, "trail_cnt out of range (%d)\n",
+ trail_cnt);
+ trail_cnt = ICL_TRAIL_CNT_MAX;
+ }
+@@ -1883,7 +1886,7 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi)
+ /* tclk pre count in escape clocks */
+ tclk_pre_cnt = DIV_ROUND_UP(mipi_config->tclk_pre, tlpx_ns);
+ if (tclk_pre_cnt > ICL_TCLK_PRE_CNT_MAX) {
+- drm_dbg_kms(&dev_priv->drm,
++ drm_dbg_kms(display->drm,
+ "tclk_pre_cnt out of range (%d)\n", tclk_pre_cnt);
+ tclk_pre_cnt = ICL_TCLK_PRE_CNT_MAX;
+ }
+@@ -1892,7 +1895,7 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi)
+ hs_zero_cnt = DIV_ROUND_UP(mipi_config->ths_prepare_hszero -
+ ths_prepare_ns, tlpx_ns);
+ if (hs_zero_cnt > ICL_HS_ZERO_CNT_MAX) {
+- drm_dbg_kms(&dev_priv->drm, "hs_zero_cnt out of range (%d)\n",
++ drm_dbg_kms(display->drm, "hs_zero_cnt out of range (%d)\n",
+ hs_zero_cnt);
+ hs_zero_cnt = ICL_HS_ZERO_CNT_MAX;
+ }
+@@ -1900,7 +1903,7 @@ static void icl_dphy_param_init(struct intel_dsi *intel_dsi)
+ /* hs exit zero cnt in escape clocks */
+ exit_zero_cnt = DIV_ROUND_UP(mipi_config->ths_exit, tlpx_ns);
+ if (exit_zero_cnt > ICL_EXIT_ZERO_CNT_MAX) {
+- drm_dbg_kms(&dev_priv->drm,
++ drm_dbg_kms(display->drm,
+ "exit_zero_cnt out of range (%d)\n",
+ exit_zero_cnt);
+ exit_zero_cnt = ICL_EXIT_ZERO_CNT_MAX;
+@@ -1942,10 +1945,9 @@ static void icl_dsi_add_properties(struct intel_connector *connector)
+ fixed_mode->vdisplay);
+ }
+
+-void icl_dsi_init(struct drm_i915_private *dev_priv,
++void icl_dsi_init(struct intel_display *display,
+ const struct intel_bios_encoder_data *devdata)
+ {
+- struct intel_display *display = &dev_priv->display;
+ struct intel_dsi *intel_dsi;
+ struct intel_encoder *encoder;
+ struct intel_connector *intel_connector;
+@@ -1973,7 +1975,8 @@ void icl_dsi_init(struct drm_i915_private *dev_priv,
+ encoder->devdata = devdata;
+
+ /* register DSI encoder with DRM subsystem */
+- drm_encoder_init(&dev_priv->drm, &encoder->base, &gen11_dsi_encoder_funcs,
++ drm_encoder_init(display->drm, &encoder->base,
++ &gen11_dsi_encoder_funcs,
+ DRM_MODE_ENCODER_DSI, "DSI %c", port_name(port));
+
+ encoder->pre_pll_enable = gen11_dsi_pre_pll_enable;
+@@ -1998,7 +2001,8 @@ void icl_dsi_init(struct drm_i915_private *dev_priv,
+ encoder->shutdown = intel_dsi_shutdown;
+
+ /* register DSI connector with DRM subsystem */
+- drm_connector_init(&dev_priv->drm, connector, &gen11_dsi_connector_funcs,
++ drm_connector_init(display->drm, connector,
++ &gen11_dsi_connector_funcs,
+ DRM_MODE_CONNECTOR_DSI);
+ drm_connector_helper_add(connector, &gen11_dsi_connector_helper_funcs);
+ connector->display_info.subpixel_order = SubPixelHorizontalRGB;
+@@ -2011,12 +2015,12 @@ void icl_dsi_init(struct drm_i915_private *dev_priv,
+
+ intel_bios_init_panel_late(display, &intel_connector->panel, encoder->devdata, NULL);
+
+- mutex_lock(&dev_priv->drm.mode_config.mutex);
++ mutex_lock(&display->drm->mode_config.mutex);
+ intel_panel_add_vbt_lfp_fixed_mode(intel_connector);
+- mutex_unlock(&dev_priv->drm.mode_config.mutex);
++ mutex_unlock(&display->drm->mode_config.mutex);
+
+ if (!intel_panel_preferred_fixed_mode(intel_connector)) {
+- drm_err(&dev_priv->drm, "DSI fixed mode info missing\n");
++ drm_err(display->drm, "DSI fixed mode info missing\n");
+ goto err;
+ }
+
+@@ -2029,10 +2033,10 @@ void icl_dsi_init(struct drm_i915_private *dev_priv,
+ else
+ intel_dsi->ports = BIT(port);
+
+- if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.bl_ports & ~intel_dsi->ports))
++ if (drm_WARN_ON(display->drm, intel_connector->panel.vbt.dsi.bl_ports & ~intel_dsi->ports))
+ intel_connector->panel.vbt.dsi.bl_ports &= intel_dsi->ports;
+
+- if (drm_WARN_ON(&dev_priv->drm, intel_connector->panel.vbt.dsi.cabc_ports & ~intel_dsi->ports))
++ if (drm_WARN_ON(display->drm, intel_connector->panel.vbt.dsi.cabc_ports & ~intel_dsi->ports))
+ intel_connector->panel.vbt.dsi.cabc_ports &= intel_dsi->ports;
+
+ for_each_dsi_port(port, intel_dsi->ports) {
+@@ -2046,7 +2050,7 @@ void icl_dsi_init(struct drm_i915_private *dev_priv,
+ }
+
+ if (!intel_dsi_vbt_init(intel_dsi, MIPI_DSI_GENERIC_PANEL_ID)) {
+- drm_dbg_kms(&dev_priv->drm, "no device found\n");
++ drm_dbg_kms(display->drm, "no device found\n");
+ goto err;
+ }
+
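
A note on the pattern in the icl_dsi.c hunks above: nearly every change is the same mechanical conversion. Functions that used to derive a struct drm_i915_private with to_i915(encoder->base.dev) now derive a struct intel_display with to_intel_display(encoder), and that pointer is what gets handed to the intel_de_*() register helpers and to the drm_err()/drm_dbg_kms() logging calls (as display->drm instead of &dev_priv->drm). A minimal standalone sketch of the idiom, using stub types rather than the real i915 ones:

#include <stdio.h>

/* stub stand-ins; not the real i915 definitions */
struct display { int ver; };
struct encoder { struct display *display; };

/* hypothetical analogue of to_intel_display(encoder) */
static struct display *to_display(struct encoder *enc)
{
    return enc->display;
}

static void setup_timings(struct encoder *enc)
{
    /* was: struct drm_i915_private *dev_priv = to_i915(...); */
    struct display *display = to_display(enc);

    /* display-core code now keys off the display struct alone */
    printf("DISPLAY_VER = %d\n", display->ver);
}

int main(void)
{
    struct display d = { .ver = 11 };
    struct encoder e = { .display = &d };

    setup_timings(&e);
    return 0;
}
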
+diff --git a/drivers/gpu/drm/i915/display/icl_dsi.h b/drivers/gpu/drm/i915/display/icl_dsi.h
+index 43fa7d72eeb180..099fc50e35b415 100644
+--- a/drivers/gpu/drm/i915/display/icl_dsi.h
++++ b/drivers/gpu/drm/i915/display/icl_dsi.h
+@@ -6,11 +6,11 @@
+ #ifndef __ICL_DSI_H__
+ #define __ICL_DSI_H__
+
+-struct drm_i915_private;
+ struct intel_bios_encoder_data;
+ struct intel_crtc_state;
++struct intel_display;
+
+-void icl_dsi_init(struct drm_i915_private *dev_priv,
++void icl_dsi_init(struct intel_display *display,
+ const struct intel_bios_encoder_data *devdata);
+ void icl_dsi_frame_update(struct intel_crtc_state *crtc_state);
+
+diff --git a/drivers/gpu/drm/i915/display/intel_atomic_plane.c b/drivers/gpu/drm/i915/display/intel_atomic_plane.c
+index e979786aa5cf3d..5c2a7987cccb44 100644
+--- a/drivers/gpu/drm/i915/display/intel_atomic_plane.c
++++ b/drivers/gpu/drm/i915/display/intel_atomic_plane.c
+@@ -790,7 +790,8 @@ skl_next_plane_to_commit(struct intel_atomic_state *state,
+ return NULL;
+ }
+
+-void intel_plane_update_noarm(struct intel_plane *plane,
++void intel_plane_update_noarm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+@@ -799,10 +800,11 @@ void intel_plane_update_noarm(struct intel_plane *plane,
+ trace_intel_plane_update_noarm(plane, crtc);
+
+ if (plane->update_noarm)
+- plane->update_noarm(plane, crtc_state, plane_state);
++ plane->update_noarm(dsb, plane, crtc_state, plane_state);
+ }
+
+-void intel_plane_async_flip(struct intel_plane *plane,
++void intel_plane_async_flip(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state,
+ bool async_flip)
+@@ -810,34 +812,37 @@ void intel_plane_async_flip(struct intel_plane *plane,
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+
+ trace_intel_plane_async_flip(plane, crtc, async_flip);
+- plane->async_flip(plane, crtc_state, plane_state, async_flip);
++ plane->async_flip(dsb, plane, crtc_state, plane_state, async_flip);
+ }
+
+-void intel_plane_update_arm(struct intel_plane *plane,
++void intel_plane_update_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+
+ if (crtc_state->do_async_flip && plane->async_flip) {
+- intel_plane_async_flip(plane, crtc_state, plane_state, true);
++ intel_plane_async_flip(dsb, plane, crtc_state, plane_state, true);
+ return;
+ }
+
+ trace_intel_plane_update_arm(plane, crtc);
+- plane->update_arm(plane, crtc_state, plane_state);
++ plane->update_arm(dsb, plane, crtc_state, plane_state);
+ }
+
+-void intel_plane_disable_arm(struct intel_plane *plane,
++void intel_plane_disable_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state)
+ {
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+
+ trace_intel_plane_disable_arm(plane, crtc);
+- plane->disable_arm(plane, crtc_state);
++ plane->disable_arm(dsb, plane, crtc_state);
+ }
+
+-void intel_crtc_planes_update_noarm(struct intel_atomic_state *state,
++void intel_crtc_planes_update_noarm(struct intel_dsb *dsb,
++ struct intel_atomic_state *state,
+ struct intel_crtc *crtc)
+ {
+ struct intel_crtc_state *new_crtc_state =
+@@ -862,11 +867,13 @@ void intel_crtc_planes_update_noarm(struct intel_atomic_state *state,
+ /* TODO: for mailbox updates this should be skipped */
+ if (new_plane_state->uapi.visible ||
+ new_plane_state->planar_slave)
+- intel_plane_update_noarm(plane, new_crtc_state, new_plane_state);
++ intel_plane_update_noarm(dsb, plane,
++ new_crtc_state, new_plane_state);
+ }
+ }
+
+-static void skl_crtc_planes_update_arm(struct intel_atomic_state *state,
++static void skl_crtc_planes_update_arm(struct intel_dsb *dsb,
++ struct intel_atomic_state *state,
+ struct intel_crtc *crtc)
+ {
+ struct intel_crtc_state *old_crtc_state =
+@@ -893,13 +900,14 @@ static void skl_crtc_planes_update_arm(struct intel_atomic_state *state,
+ */
+ if (new_plane_state->uapi.visible ||
+ new_plane_state->planar_slave)
+- intel_plane_update_arm(plane, new_crtc_state, new_plane_state);
++ intel_plane_update_arm(dsb, plane, new_crtc_state, new_plane_state);
+ else
+- intel_plane_disable_arm(plane, new_crtc_state);
++ intel_plane_disable_arm(dsb, plane, new_crtc_state);
+ }
+ }
+
+-static void i9xx_crtc_planes_update_arm(struct intel_atomic_state *state,
++static void i9xx_crtc_planes_update_arm(struct intel_dsb *dsb,
++ struct intel_atomic_state *state,
+ struct intel_crtc *crtc)
+ {
+ struct intel_crtc_state *new_crtc_state =
+@@ -919,21 +927,22 @@ static void i9xx_crtc_planes_update_arm(struct intel_atomic_state *state,
+ * would have to be called here as well.
+ */
+ if (new_plane_state->uapi.visible)
+- intel_plane_update_arm(plane, new_crtc_state, new_plane_state);
++ intel_plane_update_arm(dsb, plane, new_crtc_state, new_plane_state);
+ else
+- intel_plane_disable_arm(plane, new_crtc_state);
++ intel_plane_disable_arm(dsb, plane, new_crtc_state);
+ }
+ }
+
+-void intel_crtc_planes_update_arm(struct intel_atomic_state *state,
++void intel_crtc_planes_update_arm(struct intel_dsb *dsb,
++ struct intel_atomic_state *state,
+ struct intel_crtc *crtc)
+ {
+ struct drm_i915_private *i915 = to_i915(state->base.dev);
+
+ if (DISPLAY_VER(i915) >= 9)
+- skl_crtc_planes_update_arm(state, crtc);
++ skl_crtc_planes_update_arm(dsb, state, crtc);
+ else
+- i9xx_crtc_planes_update_arm(state, crtc);
++ i9xx_crtc_planes_update_arm(dsb, state, crtc);
+ }
+
+ int intel_atomic_plane_check_clipping(struct intel_plane_state *plane_state,
+diff --git a/drivers/gpu/drm/i915/display/intel_atomic_plane.h b/drivers/gpu/drm/i915/display/intel_atomic_plane.h
+index 6c4fe359646504..0f982f452ff391 100644
+--- a/drivers/gpu/drm/i915/display/intel_atomic_plane.h
++++ b/drivers/gpu/drm/i915/display/intel_atomic_plane.h
+@@ -14,6 +14,7 @@ struct drm_rect;
+ struct intel_atomic_state;
+ struct intel_crtc;
+ struct intel_crtc_state;
++struct intel_dsb;
+ struct intel_plane;
+ struct intel_plane_state;
+ enum plane_id;
+@@ -32,26 +33,32 @@ void intel_plane_copy_uapi_to_hw_state(struct intel_plane_state *plane_state,
+ struct intel_crtc *crtc);
+ void intel_plane_copy_hw_state(struct intel_plane_state *plane_state,
+ const struct intel_plane_state *from_plane_state);
+-void intel_plane_async_flip(struct intel_plane *plane,
++void intel_plane_async_flip(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state,
+ bool async_flip);
+-void intel_plane_update_noarm(struct intel_plane *plane,
++void intel_plane_update_noarm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state);
+-void intel_plane_update_arm(struct intel_plane *plane,
++void intel_plane_update_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state);
+-void intel_plane_disable_arm(struct intel_plane *plane,
++void intel_plane_disable_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state);
+ struct intel_plane *intel_plane_alloc(void);
+ void intel_plane_free(struct intel_plane *plane);
+ struct drm_plane_state *intel_plane_duplicate_state(struct drm_plane *plane);
+ void intel_plane_destroy_state(struct drm_plane *plane,
+ struct drm_plane_state *state);
+-void intel_crtc_planes_update_noarm(struct intel_atomic_state *state,
++void intel_crtc_planes_update_noarm(struct intel_dsb *dsb,
++ struct intel_atomic_state *state,
+ struct intel_crtc *crtc);
+-void intel_crtc_planes_update_arm(struct intel_atomic_state *state,
++void intel_crtc_planes_update_arm(struct intel_dsb *dsb,
++ struct intel_atomic_state *state,
+ struct intel_crtc *crtc);
+ int intel_plane_atomic_check_with_state(const struct intel_crtc_state *old_crtc_state,
+ struct intel_crtc_state *crtc_state,
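
The signature churn in intel_atomic_plane.[ch] above follows one rule: every plane hook (async_flip, update_noarm, update_arm, disable_arm) gains a struct intel_dsb * as its new first parameter, and every caller visible in these hunks passes NULL, so all register writes still go out as immediate MMIO. A non-NULL dsb is what will later let the same hooks queue their writes into a Display State Buffer instead. A self-contained sketch of threading an optional context through a vtable, with stub types in place of the i915 ones:

#include <stdio.h>

struct dsb;    /* opaque, as in the real header */

struct plane_funcs {
    /* dsb is the new leading parameter; NULL means "write immediately" */
    void (*update_arm)(struct dsb *dsb, int plane_id);
    void (*disable_arm)(struct dsb *dsb, int plane_id);
};

static void update_arm(struct dsb *dsb, int plane_id)
{
    printf("plane %d: update via %s\n", plane_id, dsb ? "DSB" : "MMIO");
}

static void disable_arm(struct dsb *dsb, int plane_id)
{
    printf("plane %d: disable via %s\n", plane_id, dsb ? "DSB" : "MMIO");
}

int main(void)
{
    const struct plane_funcs funcs = { update_arm, disable_arm };

    funcs.update_arm(NULL, 0);    /* what the callers in this series do */
    funcs.disable_arm(NULL, 0);
    return 0;
}
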
+diff --git a/drivers/gpu/drm/i915/display/intel_color.c b/drivers/gpu/drm/i915/display/intel_color.c
+index ec55cb651d4498..1fbe3cd452c13a 100644
+--- a/drivers/gpu/drm/i915/display/intel_color.c
++++ b/drivers/gpu/drm/i915/display/intel_color.c
+@@ -1912,6 +1912,23 @@ void intel_color_post_update(const struct intel_crtc_state *crtc_state)
+ i915->display.funcs.color->color_post_update(crtc_state);
+ }
+
++void intel_color_modeset(const struct intel_crtc_state *crtc_state)
++{
++ struct intel_display *display = to_intel_display(crtc_state);
++
++ intel_color_load_luts(crtc_state);
++ intel_color_commit_noarm(crtc_state);
++ intel_color_commit_arm(crtc_state);
++
++ if (DISPLAY_VER(display) < 9) {
++ struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
++ struct intel_plane *plane = to_intel_plane(crtc->base.primary);
++
++ /* update DSPCNTR to configure gamma/csc for pipe bottom color */
++ plane->disable_arm(NULL, plane, crtc_state);
++ }
++}
++
+ void intel_color_prepare_commit(struct intel_atomic_state *state,
+ struct intel_crtc *crtc)
+ {
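
The intel_color_modeset() helper added above is a pure factoring: intel_display.c open-codes the same load-luts / commit-noarm / commit-arm sequence in four crtc-enable paths (ilk, hsw, vlv, i9xx, all converted later in this patch), followed on pre-SKL hardware (DISPLAY_VER < 9) by a primary-plane disable_arm() so that DSPCNTR picks up the gamma/CSC configuration for the pipe bottom color. Folding that into one helper is also what lets intel_disable_primary_plane() be deleted from intel_display.c below. A toy sketch of the same shape, with hypothetical stub steps:

#include <stdio.h>

/* hypothetical stubs mirroring the factored-out steps */
static void load_luts(void)       { puts("load LUTs"); }
static void commit_noarm(void)    { puts("commit noarm"); }
static void commit_arm(void)      { puts("commit arm"); }
static void disable_primary(void) { puts("refresh DSPCNTR bottom color"); }

/* one helper replaces four open-coded call sites */
static void color_modeset(int display_ver)
{
    load_luts();
    commit_noarm();
    commit_arm();
    if (display_ver < 9)    /* pre-SKL quirk */
        disable_primary();
}

int main(void)
{
    color_modeset(8);    /* e.g. a Broadwell-era pipe */
    return 0;
}
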
+diff --git a/drivers/gpu/drm/i915/display/intel_color.h b/drivers/gpu/drm/i915/display/intel_color.h
+index 79f230a1709ad1..ab3aaec06a2ac8 100644
+--- a/drivers/gpu/drm/i915/display/intel_color.h
++++ b/drivers/gpu/drm/i915/display/intel_color.h
+@@ -28,6 +28,7 @@ void intel_color_commit_noarm(const struct intel_crtc_state *crtc_state);
+ void intel_color_commit_arm(const struct intel_crtc_state *crtc_state);
+ void intel_color_post_update(const struct intel_crtc_state *crtc_state);
+ void intel_color_load_luts(const struct intel_crtc_state *crtc_state);
++void intel_color_modeset(const struct intel_crtc_state *crtc_state);
+ void intel_color_get_config(struct intel_crtc_state *crtc_state);
+ bool intel_color_lut_equal(const struct intel_crtc_state *crtc_state,
+ const struct drm_property_blob *blob1,
+diff --git a/drivers/gpu/drm/i915/display/intel_cursor.c b/drivers/gpu/drm/i915/display/intel_cursor.c
+index 9ad53e1cbbd063..aeadb834d33286 100644
+--- a/drivers/gpu/drm/i915/display/intel_cursor.c
++++ b/drivers/gpu/drm/i915/display/intel_cursor.c
+@@ -275,7 +275,8 @@ static int i845_check_cursor(struct intel_crtc_state *crtc_state,
+ }
+
+ /* TODO: split into noarm+arm pair */
+-static void i845_cursor_update_arm(struct intel_plane *plane,
++static void i845_cursor_update_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+@@ -315,10 +316,11 @@ static void i845_cursor_update_arm(struct intel_plane *plane,
+ }
+ }
+
+-static void i845_cursor_disable_arm(struct intel_plane *plane,
++static void i845_cursor_disable_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state)
+ {
+- i845_cursor_update_arm(plane, crtc_state, NULL);
++ i845_cursor_update_arm(dsb, plane, crtc_state, NULL);
+ }
+
+ static bool i845_cursor_get_hw_state(struct intel_plane *plane,
+@@ -527,22 +529,25 @@ static int i9xx_check_cursor(struct intel_crtc_state *crtc_state,
+ return 0;
+ }
+
+-static void i9xx_cursor_disable_sel_fetch_arm(struct intel_plane *plane,
++static void i9xx_cursor_disable_sel_fetch_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state)
+ {
+- struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ enum pipe pipe = plane->pipe;
+
+ if (!crtc_state->enable_psr2_sel_fetch)
+ return;
+
+- intel_de_write_fw(dev_priv, SEL_FETCH_CUR_CTL(pipe), 0);
++ intel_de_write_dsb(display, dsb, SEL_FETCH_CUR_CTL(pipe), 0);
+ }
+
+-static void wa_16021440873(struct intel_plane *plane,
++static void wa_16021440873(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+ u32 ctl = plane_state->ctl;
+ int et_y_position = drm_rect_height(&crtc_state->pipe_src) + 1;
+@@ -551,16 +556,18 @@ static void wa_16021440873(struct intel_plane *plane,
+ ctl &= ~MCURSOR_MODE_MASK;
+ ctl |= MCURSOR_MODE_64_2B;
+
+- intel_de_write_fw(dev_priv, SEL_FETCH_CUR_CTL(pipe), ctl);
++ intel_de_write_dsb(display, dsb, SEL_FETCH_CUR_CTL(pipe), ctl);
+
+- intel_de_write(dev_priv, CURPOS_ERLY_TPT(dev_priv, pipe),
+- CURSOR_POS_Y(et_y_position));
++ intel_de_write_dsb(display, dsb, CURPOS_ERLY_TPT(dev_priv, pipe),
++ CURSOR_POS_Y(et_y_position));
+ }
+
+-static void i9xx_cursor_update_sel_fetch_arm(struct intel_plane *plane,
++static void i9xx_cursor_update_sel_fetch_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+ enum pipe pipe = plane->pipe;
+
+@@ -571,19 +578,17 @@ static void i9xx_cursor_update_sel_fetch_arm(struct intel_plane *plane,
+ if (crtc_state->enable_psr2_su_region_et) {
+ u32 val = intel_cursor_position(crtc_state, plane_state,
+ true);
+- intel_de_write_fw(dev_priv,
+- CURPOS_ERLY_TPT(dev_priv, pipe),
+- val);
++
++ intel_de_write_dsb(display, dsb, CURPOS_ERLY_TPT(dev_priv, pipe), val);
+ }
+
+- intel_de_write_fw(dev_priv, SEL_FETCH_CUR_CTL(pipe),
+- plane_state->ctl);
++ intel_de_write_dsb(display, dsb, SEL_FETCH_CUR_CTL(pipe), plane_state->ctl);
+ } else {
+ /* Wa_16021440873 */
+ if (crtc_state->enable_psr2_su_region_et)
+- wa_16021440873(plane, crtc_state, plane_state);
++ wa_16021440873(dsb, plane, crtc_state, plane_state);
+ else
+- i9xx_cursor_disable_sel_fetch_arm(plane, crtc_state);
++ i9xx_cursor_disable_sel_fetch_arm(dsb, plane, crtc_state);
+ }
+ }
+
+@@ -610,9 +615,11 @@ static u32 skl_cursor_wm_reg_val(const struct skl_wm_level *level)
+ return val;
+ }
+
+-static void skl_write_cursor_wm(struct intel_plane *plane,
++static void skl_write_cursor_wm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state)
+ {
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ struct drm_i915_private *i915 = to_i915(plane->base.dev);
+ enum plane_id plane_id = plane->id;
+ enum pipe pipe = plane->pipe;
+@@ -622,30 +629,32 @@ static void skl_write_cursor_wm(struct intel_plane *plane,
+ int level;
+
+ for (level = 0; level < i915->display.wm.num_levels; level++)
+- intel_de_write_fw(i915, CUR_WM(pipe, level),
+- skl_cursor_wm_reg_val(skl_plane_wm_level(pipe_wm, plane_id, level)));
++ intel_de_write_dsb(display, dsb, CUR_WM(pipe, level),
++ skl_cursor_wm_reg_val(skl_plane_wm_level(pipe_wm, plane_id, level)));
+
+- intel_de_write_fw(i915, CUR_WM_TRANS(pipe),
+- skl_cursor_wm_reg_val(skl_plane_trans_wm(pipe_wm, plane_id)));
++ intel_de_write_dsb(display, dsb, CUR_WM_TRANS(pipe),
++ skl_cursor_wm_reg_val(skl_plane_trans_wm(pipe_wm, plane_id)));
+
+ if (HAS_HW_SAGV_WM(i915)) {
+ const struct skl_plane_wm *wm = &pipe_wm->planes[plane_id];
+
+- intel_de_write_fw(i915, CUR_WM_SAGV(pipe),
+- skl_cursor_wm_reg_val(&wm->sagv.wm0));
+- intel_de_write_fw(i915, CUR_WM_SAGV_TRANS(pipe),
+- skl_cursor_wm_reg_val(&wm->sagv.trans_wm));
++ intel_de_write_dsb(display, dsb, CUR_WM_SAGV(pipe),
++ skl_cursor_wm_reg_val(&wm->sagv.wm0));
++ intel_de_write_dsb(display, dsb, CUR_WM_SAGV_TRANS(pipe),
++ skl_cursor_wm_reg_val(&wm->sagv.trans_wm));
+ }
+
+- intel_de_write_fw(i915, CUR_BUF_CFG(pipe),
+- skl_cursor_ddb_reg_val(ddb));
++ intel_de_write_dsb(display, dsb, CUR_BUF_CFG(pipe),
++ skl_cursor_ddb_reg_val(ddb));
+ }
+
+ /* TODO: split into noarm+arm pair */
+-static void i9xx_cursor_update_arm(struct intel_plane *plane,
++static void i9xx_cursor_update_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+ enum pipe pipe = plane->pipe;
+ u32 cntl = 0, base = 0, pos = 0, fbc_ctl = 0;
+@@ -685,38 +694,36 @@ static void i9xx_cursor_update_arm(struct intel_plane *plane,
+ */
+
+ if (DISPLAY_VER(dev_priv) >= 9)
+- skl_write_cursor_wm(plane, crtc_state);
++ skl_write_cursor_wm(dsb, plane, crtc_state);
+
+ if (plane_state)
+- i9xx_cursor_update_sel_fetch_arm(plane, crtc_state,
+- plane_state);
++ i9xx_cursor_update_sel_fetch_arm(dsb, plane, crtc_state, plane_state);
+ else
+- i9xx_cursor_disable_sel_fetch_arm(plane, crtc_state);
++ i9xx_cursor_disable_sel_fetch_arm(dsb, plane, crtc_state);
+
+ if (plane->cursor.base != base ||
+ plane->cursor.size != fbc_ctl ||
+ plane->cursor.cntl != cntl) {
+ if (HAS_CUR_FBC(dev_priv))
+- intel_de_write_fw(dev_priv,
+- CUR_FBC_CTL(dev_priv, pipe),
+- fbc_ctl);
+- intel_de_write_fw(dev_priv, CURCNTR(dev_priv, pipe), cntl);
+- intel_de_write_fw(dev_priv, CURPOS(dev_priv, pipe), pos);
+- intel_de_write_fw(dev_priv, CURBASE(dev_priv, pipe), base);
++ intel_de_write_dsb(display, dsb, CUR_FBC_CTL(dev_priv, pipe), fbc_ctl);
++ intel_de_write_dsb(display, dsb, CURCNTR(dev_priv, pipe), cntl);
++ intel_de_write_dsb(display, dsb, CURPOS(dev_priv, pipe), pos);
++ intel_de_write_dsb(display, dsb, CURBASE(dev_priv, pipe), base);
+
+ plane->cursor.base = base;
+ plane->cursor.size = fbc_ctl;
+ plane->cursor.cntl = cntl;
+ } else {
+- intel_de_write_fw(dev_priv, CURPOS(dev_priv, pipe), pos);
+- intel_de_write_fw(dev_priv, CURBASE(dev_priv, pipe), base);
++ intel_de_write_dsb(display, dsb, CURPOS(dev_priv, pipe), pos);
++ intel_de_write_dsb(display, dsb, CURBASE(dev_priv, pipe), base);
+ }
+ }
+
+-static void i9xx_cursor_disable_arm(struct intel_plane *plane,
++static void i9xx_cursor_disable_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state)
+ {
+- i9xx_cursor_update_arm(plane, crtc_state, NULL);
++ i9xx_cursor_update_arm(dsb, plane, crtc_state, NULL);
+ }
+
+ static bool i9xx_cursor_get_hw_state(struct intel_plane *plane,
+@@ -905,10 +912,10 @@ intel_legacy_cursor_update(struct drm_plane *_plane,
+ }
+
+ if (new_plane_state->uapi.visible) {
+- intel_plane_update_noarm(plane, crtc_state, new_plane_state);
+- intel_plane_update_arm(plane, crtc_state, new_plane_state);
++ intel_plane_update_noarm(NULL, plane, crtc_state, new_plane_state);
++ intel_plane_update_arm(NULL, plane, crtc_state, new_plane_state);
+ } else {
+- intel_plane_disable_arm(plane, crtc_state);
++ intel_plane_disable_arm(NULL, plane, crtc_state);
+ }
+
+ local_irq_enable();
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index 2f1d9ce87ceb01..34dee523f0b612 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -4888,7 +4888,7 @@ void intel_ddi_init(struct intel_display *display,
+ if (!assert_has_icl_dsi(dev_priv))
+ return;
+
+- icl_dsi_init(dev_priv, devdata);
++ icl_dsi_init(display, devdata);
+ return;
+ }
+
+diff --git a/drivers/gpu/drm/i915/display/intel_de.h b/drivers/gpu/drm/i915/display/intel_de.h
+index e881bfeafb47d7..e017cd4a81685a 100644
+--- a/drivers/gpu/drm/i915/display/intel_de.h
++++ b/drivers/gpu/drm/i915/display/intel_de.h
+@@ -8,6 +8,7 @@
+
+ #include "i915_drv.h"
+ #include "i915_trace.h"
++#include "intel_dsb.h"
+ #include "intel_uncore.h"
+
+ static inline struct intel_uncore *__to_uncore(struct intel_display *display)
+@@ -233,4 +234,14 @@ __intel_de_write_notrace(struct intel_display *display, i915_reg_t reg,
+ }
+ #define intel_de_write_notrace(p,...) __intel_de_write_notrace(__to_intel_display(p), __VA_ARGS__)
+
++static __always_inline void
++intel_de_write_dsb(struct intel_display *display, struct intel_dsb *dsb,
++ i915_reg_t reg, u32 val)
++{
++ if (dsb)
++ intel_dsb_reg_write(dsb, reg, val);
++ else
++ intel_de_write_fw(display, reg, val);
++}
++
+ #endif /* __INTEL_DE_H__ */
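
The intel_de_write_dsb() helper added at the bottom of this hunk is central to the refactor: give it a struct intel_dsb and the write is queued with intel_dsb_reg_write() for the display engine to execute later; give it NULL and it degrades to an immediate intel_de_write_fw() MMIO write. The intel_cursor.c hunks earlier in this patch and bdw_set_pipe_misc() below are converted to it while their callers still pass NULL, so behavior is unchanged for now. A self-contained sketch of the dispatch idiom, with stub functions in place of the real ones:

#include <stdio.h>

struct dsb { int unused; };    /* stand-in for struct intel_dsb */

static void dsb_reg_write(struct dsb *dsb, unsigned int reg, unsigned int val)
{
    (void)dsb;    /* the real code appends to the DSB command buffer */
    printf("queued into DSB: reg=%#x val=%#x\n", reg, val);
}

static void mmio_write_fw(unsigned int reg, unsigned int val)
{
    printf("immediate MMIO: reg=%#x val=%#x\n", reg, val);
}

/* same shape as intel_de_write_dsb(): NULL selects the direct path */
static void write_dsb(struct dsb *dsb, unsigned int reg, unsigned int val)
{
    if (dsb)
        dsb_reg_write(dsb, reg, val);
    else
        mmio_write_fw(reg, val);
}

int main(void)
{
    struct dsb dsb = { 0 };

    write_dsb(&dsb, 0x70080, 0x1);    /* register offset is made up */
    write_dsb(NULL, 0x70080, 0x1);
    return 0;
}
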
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index 2c6d0da8a16f8c..3039ee03e1c7a8 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -135,7 +135,8 @@
+ static void intel_set_transcoder_timings(const struct intel_crtc_state *crtc_state);
+ static void intel_set_pipe_src_size(const struct intel_crtc_state *crtc_state);
+ static void hsw_set_transconf(const struct intel_crtc_state *crtc_state);
+-static void bdw_set_pipe_misc(const struct intel_crtc_state *crtc_state);
++static void bdw_set_pipe_misc(struct intel_dsb *dsb,
++ const struct intel_crtc_state *crtc_state);
+
+ /* returns HPLL frequency in kHz */
+ int vlv_get_hpll_vco(struct drm_i915_private *dev_priv)
+@@ -715,7 +716,7 @@ void intel_plane_disable_noatomic(struct intel_crtc *crtc,
+ if (DISPLAY_VER(dev_priv) == 2 && !crtc_state->active_planes)
+ intel_set_cpu_fifo_underrun_reporting(dev_priv, crtc->pipe, false);
+
+- intel_plane_disable_arm(plane, crtc_state);
++ intel_plane_disable_arm(NULL, plane, crtc_state);
+ intel_crtc_wait_for_next_vblank(crtc);
+ }
+
+@@ -1172,8 +1173,8 @@ static void intel_crtc_async_flip_disable_wa(struct intel_atomic_state *state,
+ * Apart from the async flip bit we want to
+ * preserve the old state for the plane.
+ */
+- intel_plane_async_flip(plane, old_crtc_state,
+- old_plane_state, false);
++ intel_plane_async_flip(NULL, plane,
++ old_crtc_state, old_plane_state, false);
+ need_vbl_wait = true;
+ }
+ }
+@@ -1315,7 +1316,7 @@ static void intel_crtc_disable_planes(struct intel_atomic_state *state,
+ !(update_mask & BIT(plane->id)))
+ continue;
+
+- intel_plane_disable_arm(plane, new_crtc_state);
++ intel_plane_disable_arm(NULL, plane, new_crtc_state);
+
+ if (old_plane_state->uapi.visible)
+ fb_bits |= plane->frontbuffer_bit;
+@@ -1502,14 +1503,6 @@ static void intel_encoders_update_pipe(struct intel_atomic_state *state,
+ }
+ }
+
+-static void intel_disable_primary_plane(const struct intel_crtc_state *crtc_state)
+-{
+- struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+- struct intel_plane *plane = to_intel_plane(crtc->base.primary);
+-
+- plane->disable_arm(plane, crtc_state);
+-}
+-
+ static void ilk_configure_cpu_transcoder(const struct intel_crtc_state *crtc_state)
+ {
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
+@@ -1575,11 +1568,7 @@ static void ilk_crtc_enable(struct intel_atomic_state *state,
+ * On ILK+ LUT must be loaded before the pipe is running but with
+ * clocks enabled
+ */
+- intel_color_load_luts(new_crtc_state);
+- intel_color_commit_noarm(new_crtc_state);
+- intel_color_commit_arm(new_crtc_state);
+- /* update DSPCNTR to configure gamma for pipe bottom color */
+- intel_disable_primary_plane(new_crtc_state);
++ intel_color_modeset(new_crtc_state);
+
+ intel_initial_watermarks(state, crtc);
+ intel_enable_transcoder(new_crtc_state);
+@@ -1716,7 +1705,7 @@ static void hsw_crtc_enable(struct intel_atomic_state *state,
+ intel_set_pipe_src_size(pipe_crtc_state);
+
+ if (DISPLAY_VER(dev_priv) >= 9 || IS_BROADWELL(dev_priv))
+- bdw_set_pipe_misc(pipe_crtc_state);
++ bdw_set_pipe_misc(NULL, pipe_crtc_state);
+ }
+
+ if (!transcoder_is_dsi(cpu_transcoder))
+@@ -1741,12 +1730,7 @@ static void hsw_crtc_enable(struct intel_atomic_state *state,
+ * On ILK+ LUT must be loaded before the pipe is running but with
+ * clocks enabled
+ */
+- intel_color_load_luts(pipe_crtc_state);
+- intel_color_commit_noarm(pipe_crtc_state);
+- intel_color_commit_arm(pipe_crtc_state);
+- /* update DSPCNTR to configure gamma/csc for pipe bottom color */
+- if (DISPLAY_VER(dev_priv) < 9)
+- intel_disable_primary_plane(pipe_crtc_state);
++ intel_color_modeset(pipe_crtc_state);
+
+ hsw_set_linetime_wm(pipe_crtc_state);
+
+@@ -2147,11 +2131,7 @@ static void valleyview_crtc_enable(struct intel_atomic_state *state,
+
+ i9xx_pfit_enable(new_crtc_state);
+
+- intel_color_load_luts(new_crtc_state);
+- intel_color_commit_noarm(new_crtc_state);
+- intel_color_commit_arm(new_crtc_state);
+- /* update DSPCNTR to configure gamma for pipe bottom color */
+- intel_disable_primary_plane(new_crtc_state);
++ intel_color_modeset(new_crtc_state);
+
+ intel_initial_watermarks(state, crtc);
+ intel_enable_transcoder(new_crtc_state);
+@@ -2187,11 +2167,7 @@ static void i9xx_crtc_enable(struct intel_atomic_state *state,
+
+ i9xx_pfit_enable(new_crtc_state);
+
+- intel_color_load_luts(new_crtc_state);
+- intel_color_commit_noarm(new_crtc_state);
+- intel_color_commit_arm(new_crtc_state);
+- /* update DSPCNTR to configure gamma for pipe bottom color */
+- intel_disable_primary_plane(new_crtc_state);
++ intel_color_modeset(new_crtc_state);
+
+ if (!intel_initial_watermarks(state, crtc))
+ intel_update_watermarks(dev_priv);
+@@ -3246,9 +3222,11 @@ static void hsw_set_transconf(const struct intel_crtc_state *crtc_state)
+ intel_de_posting_read(dev_priv, TRANSCONF(dev_priv, cpu_transcoder));
+ }
+
+-static void bdw_set_pipe_misc(const struct intel_crtc_state *crtc_state)
++static void bdw_set_pipe_misc(struct intel_dsb *dsb,
++ const struct intel_crtc_state *crtc_state)
+ {
+ struct intel_crtc *crtc = to_intel_crtc(crtc_state->uapi.crtc);
++ struct intel_display *display = to_intel_display(crtc->base.dev);
+ struct drm_i915_private *dev_priv = to_i915(crtc->base.dev);
+ u32 val = 0;
+
+@@ -3293,7 +3271,7 @@ static void bdw_set_pipe_misc(const struct intel_crtc_state *crtc_state)
+ if (IS_BROADWELL(dev_priv))
+ val |= PIPE_MISC_PSR_MASK_SPRITE_ENABLE;
+
+- intel_de_write(dev_priv, PIPE_MISC(crtc->pipe), val);
++ intel_de_write_dsb(display, dsb, PIPE_MISC(crtc->pipe), val);
+ }
+
+ int bdw_get_pipe_misc_bpp(struct intel_crtc *crtc)
+@@ -6846,7 +6824,7 @@ static void commit_pipe_pre_planes(struct intel_atomic_state *state,
+ intel_color_commit_arm(new_crtc_state);
+
+ if (DISPLAY_VER(dev_priv) >= 9 || IS_BROADWELL(dev_priv))
+- bdw_set_pipe_misc(new_crtc_state);
++ bdw_set_pipe_misc(NULL, new_crtc_state);
+
+ if (intel_crtc_needs_fastset(new_crtc_state))
+ intel_pipe_fastset(old_crtc_state, new_crtc_state);
+@@ -6946,7 +6924,7 @@ static void intel_pre_update_crtc(struct intel_atomic_state *state,
+ intel_crtc_needs_color_update(new_crtc_state))
+ intel_color_commit_noarm(new_crtc_state);
+
+- intel_crtc_planes_update_noarm(state, crtc);
++ intel_crtc_planes_update_noarm(NULL, state, crtc);
+ }
+
+ static void intel_update_crtc(struct intel_atomic_state *state,
+@@ -6962,7 +6940,7 @@ static void intel_update_crtc(struct intel_atomic_state *state,
+
+ commit_pipe_pre_planes(state, crtc);
+
+- intel_crtc_planes_update_arm(state, crtc);
++ intel_crtc_planes_update_arm(NULL, state, crtc);
+
+ commit_pipe_post_planes(state, crtc);
+
+diff --git a/drivers/gpu/drm/i915/display/intel_display_types.h b/drivers/gpu/drm/i915/display/intel_display_types.h
+index f29e5dc3db910c..3e24d2e90d3cfb 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_types.h
++++ b/drivers/gpu/drm/i915/display/intel_display_types.h
+@@ -1036,6 +1036,10 @@ struct intel_csc_matrix {
+ u16 postoff[3];
+ };
+
++void intel_io_mmio_fw_write(void *ctx, i915_reg_t reg, u32 val);
++
++typedef void (*intel_io_reg_write)(void *ctx, i915_reg_t reg, u32 val);
++
+ struct intel_crtc_state {
+ /*
+ * uapi (drm) state. This is the software state shown to userspace.
+@@ -1578,22 +1582,26 @@ struct intel_plane {
+ u32 pixel_format, u64 modifier,
+ unsigned int rotation);
+ /* Write all non-self arming plane registers */
+- void (*update_noarm)(struct intel_plane *plane,
++ void (*update_noarm)(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state);
+ /* Write all self-arming plane registers */
+- void (*update_arm)(struct intel_plane *plane,
++ void (*update_arm)(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state);
+ /* Disable the plane, must arm */
+- void (*disable_arm)(struct intel_plane *plane,
++ void (*disable_arm)(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state);
+ bool (*get_hw_state)(struct intel_plane *plane, enum pipe *pipe);
+ int (*check_plane)(struct intel_crtc_state *crtc_state,
+ struct intel_plane_state *plane_state);
+ int (*min_cdclk)(const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state);
+- void (*async_flip)(struct intel_plane *plane,
++ void (*async_flip)(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state,
+ bool async_flip);
+diff --git a/drivers/gpu/drm/i915/display/intel_sprite.c b/drivers/gpu/drm/i915/display/intel_sprite.c
+index e657b09ede999b..e6fadcef58e06c 100644
+--- a/drivers/gpu/drm/i915/display/intel_sprite.c
++++ b/drivers/gpu/drm/i915/display/intel_sprite.c
+@@ -378,7 +378,8 @@ static void vlv_sprite_update_gamma(const struct intel_plane_state *plane_state)
+ }
+
+ static void
+-vlv_sprite_update_noarm(struct intel_plane *plane,
++vlv_sprite_update_noarm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+@@ -399,7 +400,8 @@ vlv_sprite_update_noarm(struct intel_plane *plane,
+ }
+
+ static void
+-vlv_sprite_update_arm(struct intel_plane *plane,
++vlv_sprite_update_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+@@ -449,7 +451,8 @@ vlv_sprite_update_arm(struct intel_plane *plane,
+ }
+
+ static void
+-vlv_sprite_disable_arm(struct intel_plane *plane,
++vlv_sprite_disable_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state)
+ {
+ struct intel_display *display = to_intel_display(plane->base.dev);
+@@ -795,7 +798,8 @@ static void ivb_sprite_update_gamma(const struct intel_plane_state *plane_state)
+ }
+
+ static void
+-ivb_sprite_update_noarm(struct intel_plane *plane,
++ivb_sprite_update_noarm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+@@ -826,7 +830,8 @@ ivb_sprite_update_noarm(struct intel_plane *plane,
+ }
+
+ static void
+-ivb_sprite_update_arm(struct intel_plane *plane,
++ivb_sprite_update_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+@@ -874,7 +879,8 @@ ivb_sprite_update_arm(struct intel_plane *plane,
+ }
+
+ static void
+-ivb_sprite_disable_arm(struct intel_plane *plane,
++ivb_sprite_disable_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state)
+ {
+ struct intel_display *display = to_intel_display(plane->base.dev);
+@@ -1133,7 +1139,8 @@ static void ilk_sprite_update_gamma(const struct intel_plane_state *plane_state)
+ }
+
+ static void
+-g4x_sprite_update_noarm(struct intel_plane *plane,
++g4x_sprite_update_noarm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+@@ -1162,7 +1169,8 @@ g4x_sprite_update_noarm(struct intel_plane *plane,
+ }
+
+ static void
+-g4x_sprite_update_arm(struct intel_plane *plane,
++g4x_sprite_update_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+@@ -1206,7 +1214,8 @@ g4x_sprite_update_arm(struct intel_plane *plane,
+ }
+
+ static void
+-g4x_sprite_disable_arm(struct intel_plane *plane,
++g4x_sprite_disable_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state)
+ {
+ struct intel_display *display = to_intel_display(plane->base.dev);
+diff --git a/drivers/gpu/drm/i915/display/skl_universal_plane.c b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+index 62a5287ea1d9c4..7f77a76309bd55 100644
+--- a/drivers/gpu/drm/i915/display/skl_universal_plane.c
++++ b/drivers/gpu/drm/i915/display/skl_universal_plane.c
+@@ -589,11 +589,11 @@ static u32 skl_plane_min_alignment(struct intel_plane *plane,
+ * in full-range YCbCr.
+ */
+ static void
+-icl_program_input_csc(struct intel_plane *plane,
+- const struct intel_crtc_state *crtc_state,
++icl_program_input_csc(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_plane_state *plane_state)
+ {
+- struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ enum pipe pipe = plane->pipe;
+ enum plane_id plane_id = plane->id;
+
+@@ -637,31 +637,31 @@ icl_program_input_csc(struct intel_plane *plane,
+ };
+ const u16 *csc = input_csc_matrix[plane_state->hw.color_encoding];
+
+- intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 0),
+- ROFF(csc[0]) | GOFF(csc[1]));
+- intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 1),
+- BOFF(csc[2]));
+- intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 2),
+- ROFF(csc[3]) | GOFF(csc[4]));
+- intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 3),
+- BOFF(csc[5]));
+- intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 4),
+- ROFF(csc[6]) | GOFF(csc[7]));
+- intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 5),
+- BOFF(csc[8]));
+-
+- intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 0),
+- PREOFF_YUV_TO_RGB_HI);
+- intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
+- PREOFF_YUV_TO_RGB_ME);
+- intel_de_write_fw(dev_priv, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 2),
+- PREOFF_YUV_TO_RGB_LO);
+- intel_de_write_fw(dev_priv,
+- PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 0), 0x0);
+- intel_de_write_fw(dev_priv,
+- PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 1), 0x0);
+- intel_de_write_fw(dev_priv,
+- PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 2), 0x0);
++ intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 0),
++ ROFF(csc[0]) | GOFF(csc[1]));
++ intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 1),
++ BOFF(csc[2]));
++ intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 2),
++ ROFF(csc[3]) | GOFF(csc[4]));
++ intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 3),
++ BOFF(csc[5]));
++ intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 4),
++ ROFF(csc[6]) | GOFF(csc[7]));
++ intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_COEFF(pipe, plane_id, 5),
++ BOFF(csc[8]));
++
++ intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 0),
++ PREOFF_YUV_TO_RGB_HI);
++ intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 1),
++ PREOFF_YUV_TO_RGB_ME);
++ intel_de_write_dsb(display, dsb, PLANE_INPUT_CSC_PREOFF(pipe, plane_id, 2),
++ PREOFF_YUV_TO_RGB_LO);
++ intel_de_write_dsb(display, dsb,
++ PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 0), 0x0);
++ intel_de_write_dsb(display, dsb,
++ PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 1), 0x0);
++ intel_de_write_dsb(display, dsb,
++ PLANE_INPUT_CSC_POSTOFF(pipe, plane_id, 2), 0x0);
+ }
+
+ static unsigned int skl_plane_stride_mult(const struct drm_framebuffer *fb,
+@@ -715,9 +715,11 @@ static u32 skl_plane_wm_reg_val(const struct skl_wm_level *level)
+ return val;
+ }
+
+-static void skl_write_plane_wm(struct intel_plane *plane,
++static void skl_write_plane_wm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state)
+ {
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ struct drm_i915_private *i915 = to_i915(plane->base.dev);
+ enum plane_id plane_id = plane->id;
+ enum pipe pipe = plane->pipe;
+@@ -729,71 +731,75 @@ static void skl_write_plane_wm(struct intel_plane *plane,
+ int level;
+
+ for (level = 0; level < i915->display.wm.num_levels; level++)
+- intel_de_write_fw(i915, PLANE_WM(pipe, plane_id, level),
+- skl_plane_wm_reg_val(skl_plane_wm_level(pipe_wm, plane_id, level)));
++ intel_de_write_dsb(display, dsb, PLANE_WM(pipe, plane_id, level),
++ skl_plane_wm_reg_val(skl_plane_wm_level(pipe_wm, plane_id, level)));
+
+- intel_de_write_fw(i915, PLANE_WM_TRANS(pipe, plane_id),
+- skl_plane_wm_reg_val(skl_plane_trans_wm(pipe_wm, plane_id)));
++ intel_de_write_dsb(display, dsb, PLANE_WM_TRANS(pipe, plane_id),
++ skl_plane_wm_reg_val(skl_plane_trans_wm(pipe_wm, plane_id)));
+
+ if (HAS_HW_SAGV_WM(i915)) {
+ const struct skl_plane_wm *wm = &pipe_wm->planes[plane_id];
+
+- intel_de_write_fw(i915, PLANE_WM_SAGV(pipe, plane_id),
+- skl_plane_wm_reg_val(&wm->sagv.wm0));
+- intel_de_write_fw(i915, PLANE_WM_SAGV_TRANS(pipe, plane_id),
+- skl_plane_wm_reg_val(&wm->sagv.trans_wm));
++ intel_de_write_dsb(display, dsb, PLANE_WM_SAGV(pipe, plane_id),
++ skl_plane_wm_reg_val(&wm->sagv.wm0));
++ intel_de_write_dsb(display, dsb, PLANE_WM_SAGV_TRANS(pipe, plane_id),
++ skl_plane_wm_reg_val(&wm->sagv.trans_wm));
+ }
+
+- intel_de_write_fw(i915, PLANE_BUF_CFG(pipe, plane_id),
+- skl_plane_ddb_reg_val(ddb));
++ intel_de_write_dsb(display, dsb, PLANE_BUF_CFG(pipe, plane_id),
++ skl_plane_ddb_reg_val(ddb));
+
+ if (DISPLAY_VER(i915) < 11)
+- intel_de_write_fw(i915, PLANE_NV12_BUF_CFG(pipe, plane_id),
+- skl_plane_ddb_reg_val(ddb_y));
++ intel_de_write_dsb(display, dsb, PLANE_NV12_BUF_CFG(pipe, plane_id),
++ skl_plane_ddb_reg_val(ddb_y));
+ }
+
+ static void
+-skl_plane_disable_arm(struct intel_plane *plane,
++skl_plane_disable_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state)
+ {
+- struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ enum plane_id plane_id = plane->id;
+ enum pipe pipe = plane->pipe;
+
+- skl_write_plane_wm(plane, crtc_state);
++ skl_write_plane_wm(dsb, plane, crtc_state);
+
+- intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), 0);
+- intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id), 0);
++ intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id), 0);
+ }
+
+-static void icl_plane_disable_sel_fetch_arm(struct intel_plane *plane,
++static void icl_plane_disable_sel_fetch_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state)
+ {
+- struct drm_i915_private *i915 = to_i915(plane->base.dev);
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ enum pipe pipe = plane->pipe;
+
+ if (!crtc_state->enable_psr2_sel_fetch)
+ return;
+
+- intel_de_write_fw(i915, SEL_FETCH_PLANE_CTL(pipe, plane->id), 0);
++ intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_CTL(pipe, plane->id), 0);
+ }
+
+ static void
+-icl_plane_disable_arm(struct intel_plane *plane,
++icl_plane_disable_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state)
+ {
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+ enum plane_id plane_id = plane->id;
+ enum pipe pipe = plane->pipe;
+
+ if (icl_is_hdr_plane(dev_priv, plane_id))
+- intel_de_write_fw(dev_priv, PLANE_CUS_CTL(pipe, plane_id), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CUS_CTL(pipe, plane_id), 0);
+
+- skl_write_plane_wm(plane, crtc_state);
++ skl_write_plane_wm(dsb, plane, crtc_state);
+
+- icl_plane_disable_sel_fetch_arm(plane, crtc_state);
+- intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), 0);
+- intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id), 0);
++ icl_plane_disable_sel_fetch_arm(dsb, plane, crtc_state);
++ intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id), 0);
++ intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id), 0);
+ }
+
+ static bool
+@@ -1230,28 +1236,30 @@ static u32 skl_plane_keymsk(const struct intel_plane_state *plane_state)
+ return keymsk;
+ }
+
+-static void icl_plane_csc_load_black(struct intel_plane *plane)
++static void icl_plane_csc_load_black(struct intel_dsb *dsb,
++ struct intel_plane *plane,
++ const struct intel_crtc_state *crtc_state)
+ {
+- struct drm_i915_private *i915 = to_i915(plane->base.dev);
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ enum plane_id plane_id = plane->id;
+ enum pipe pipe = plane->pipe;
+
+- intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 0), 0);
+- intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 1), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 0), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 1), 0);
+
+- intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 2), 0);
+- intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 3), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 2), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 3), 0);
+
+- intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 4), 0);
+- intel_de_write_fw(i915, PLANE_CSC_COEFF(pipe, plane_id, 5), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 4), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CSC_COEFF(pipe, plane_id, 5), 0);
+
+- intel_de_write_fw(i915, PLANE_CSC_PREOFF(pipe, plane_id, 0), 0);
+- intel_de_write_fw(i915, PLANE_CSC_PREOFF(pipe, plane_id, 1), 0);
+- intel_de_write_fw(i915, PLANE_CSC_PREOFF(pipe, plane_id, 2), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CSC_PREOFF(pipe, plane_id, 0), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CSC_PREOFF(pipe, plane_id, 1), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CSC_PREOFF(pipe, plane_id, 2), 0);
+
+- intel_de_write_fw(i915, PLANE_CSC_POSTOFF(pipe, plane_id, 0), 0);
+- intel_de_write_fw(i915, PLANE_CSC_POSTOFF(pipe, plane_id, 1), 0);
+- intel_de_write_fw(i915, PLANE_CSC_POSTOFF(pipe, plane_id, 2), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CSC_POSTOFF(pipe, plane_id, 0), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CSC_POSTOFF(pipe, plane_id, 1), 0);
++ intel_de_write_dsb(display, dsb, PLANE_CSC_POSTOFF(pipe, plane_id, 2), 0);
+ }
+
+ static int icl_plane_color_plane(const struct intel_plane_state *plane_state)
+@@ -1264,11 +1272,12 @@ static int icl_plane_color_plane(const struct intel_plane_state *plane_state)
+ }
+
+ static void
+-skl_plane_update_noarm(struct intel_plane *plane,
++skl_plane_update_noarm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+- struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ enum plane_id plane_id = plane->id;
+ enum pipe pipe = plane->pipe;
+ u32 stride = skl_plane_stride(plane_state, 0);
+@@ -1283,21 +1292,23 @@ skl_plane_update_noarm(struct intel_plane *plane,
+ crtc_y = 0;
+ }
+
+- intel_de_write_fw(dev_priv, PLANE_STRIDE(pipe, plane_id),
+- PLANE_STRIDE_(stride));
+- intel_de_write_fw(dev_priv, PLANE_POS(pipe, plane_id),
+- PLANE_POS_Y(crtc_y) | PLANE_POS_X(crtc_x));
+- intel_de_write_fw(dev_priv, PLANE_SIZE(pipe, plane_id),
+- PLANE_HEIGHT(src_h - 1) | PLANE_WIDTH(src_w - 1));
++ intel_de_write_dsb(display, dsb, PLANE_STRIDE(pipe, plane_id),
++ PLANE_STRIDE_(stride));
++ intel_de_write_dsb(display, dsb, PLANE_POS(pipe, plane_id),
++ PLANE_POS_Y(crtc_y) | PLANE_POS_X(crtc_x));
++ intel_de_write_dsb(display, dsb, PLANE_SIZE(pipe, plane_id),
++ PLANE_HEIGHT(src_h - 1) | PLANE_WIDTH(src_w - 1));
+
+- skl_write_plane_wm(plane, crtc_state);
++ skl_write_plane_wm(dsb, plane, crtc_state);
+ }
+
+ static void
+-skl_plane_update_arm(struct intel_plane *plane,
++skl_plane_update_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+ enum plane_id plane_id = plane->id;
+ enum pipe pipe = plane->pipe;
+@@ -1317,22 +1328,26 @@ skl_plane_update_arm(struct intel_plane *plane,
+ plane_color_ctl = plane_state->color_ctl |
+ glk_plane_color_ctl_crtc(crtc_state);
+
+- intel_de_write_fw(dev_priv, PLANE_KEYVAL(pipe, plane_id), skl_plane_keyval(plane_state));
+- intel_de_write_fw(dev_priv, PLANE_KEYMSK(pipe, plane_id), skl_plane_keymsk(plane_state));
+- intel_de_write_fw(dev_priv, PLANE_KEYMAX(pipe, plane_id), skl_plane_keymax(plane_state));
++ intel_de_write_dsb(display, dsb, PLANE_KEYVAL(pipe, plane_id),
++ skl_plane_keyval(plane_state));
++ intel_de_write_dsb(display, dsb, PLANE_KEYMSK(pipe, plane_id),
++ skl_plane_keymsk(plane_state));
++ intel_de_write_dsb(display, dsb, PLANE_KEYMAX(pipe, plane_id),
++ skl_plane_keymax(plane_state));
+
+- intel_de_write_fw(dev_priv, PLANE_OFFSET(pipe, plane_id),
+- PLANE_OFFSET_Y(y) | PLANE_OFFSET_X(x));
++ intel_de_write_dsb(display, dsb, PLANE_OFFSET(pipe, plane_id),
++ PLANE_OFFSET_Y(y) | PLANE_OFFSET_X(x));
+
+- intel_de_write_fw(dev_priv, PLANE_AUX_DIST(pipe, plane_id),
+- skl_plane_aux_dist(plane_state, 0));
++ intel_de_write_dsb(display, dsb, PLANE_AUX_DIST(pipe, plane_id),
++ skl_plane_aux_dist(plane_state, 0));
+
+- intel_de_write_fw(dev_priv, PLANE_AUX_OFFSET(pipe, plane_id),
+- PLANE_OFFSET_Y(plane_state->view.color_plane[1].y) |
+- PLANE_OFFSET_X(plane_state->view.color_plane[1].x));
++ intel_de_write_dsb(display, dsb, PLANE_AUX_OFFSET(pipe, plane_id),
++ PLANE_OFFSET_Y(plane_state->view.color_plane[1].y) |
++ PLANE_OFFSET_X(plane_state->view.color_plane[1].x));
+
+ if (DISPLAY_VER(dev_priv) >= 10)
+- intel_de_write_fw(dev_priv, PLANE_COLOR_CTL(pipe, plane_id), plane_color_ctl);
++ intel_de_write_dsb(display, dsb, PLANE_COLOR_CTL(pipe, plane_id),
++ plane_color_ctl);
+
+ /*
+ * Enable the scaler before the plane so that we don't
+@@ -1349,17 +1364,19 @@ skl_plane_update_arm(struct intel_plane *plane,
+ * disabled. Try to make the plane enable atomic by writing
+ * the control register just before the surface register.
+ */
+- intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), plane_ctl);
+- intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id),
+- skl_plane_surf(plane_state, 0));
++ intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id),
++ plane_ctl);
++ intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id),
++ skl_plane_surf(plane_state, 0));
+ }
+
+-static void icl_plane_update_sel_fetch_noarm(struct intel_plane *plane,
++static void icl_plane_update_sel_fetch_noarm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state,
+ int color_plane)
+ {
+- struct drm_i915_private *i915 = to_i915(plane->base.dev);
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ enum pipe pipe = plane->pipe;
+ const struct drm_rect *clip;
+ u32 val;
+@@ -1376,7 +1393,7 @@ static void icl_plane_update_sel_fetch_noarm(struct intel_plane *plane,
+ y = (clip->y1 + plane_state->uapi.dst.y1);
+ val = y << 16;
+ val |= plane_state->uapi.dst.x1;
+- intel_de_write_fw(i915, SEL_FETCH_PLANE_POS(pipe, plane->id), val);
++ intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_POS(pipe, plane->id), val);
+
+ x = plane_state->view.color_plane[color_plane].x;
+
+@@ -1391,20 +1408,21 @@ static void icl_plane_update_sel_fetch_noarm(struct intel_plane *plane,
+
+ val = y << 16 | x;
+
+- intel_de_write_fw(i915, SEL_FETCH_PLANE_OFFSET(pipe, plane->id),
+- val);
++ intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_OFFSET(pipe, plane->id), val);
+
+ /* Sizes are 0 based */
+ val = (drm_rect_height(clip) - 1) << 16;
+ val |= (drm_rect_width(&plane_state->uapi.src) >> 16) - 1;
+- intel_de_write_fw(i915, SEL_FETCH_PLANE_SIZE(pipe, plane->id), val);
++ intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_SIZE(pipe, plane->id), val);
+ }
+
+ static void
+-icl_plane_update_noarm(struct intel_plane *plane,
++icl_plane_update_noarm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
+ enum plane_id plane_id = plane->id;
+ enum pipe pipe = plane->pipe;
+@@ -1428,76 +1446,82 @@ icl_plane_update_noarm(struct intel_plane *plane,
+ crtc_y = 0;
+ }
+
+- intel_de_write_fw(dev_priv, PLANE_STRIDE(pipe, plane_id),
+- PLANE_STRIDE_(stride));
+- intel_de_write_fw(dev_priv, PLANE_POS(pipe, plane_id),
+- PLANE_POS_Y(crtc_y) | PLANE_POS_X(crtc_x));
+- intel_de_write_fw(dev_priv, PLANE_SIZE(pipe, plane_id),
+- PLANE_HEIGHT(src_h - 1) | PLANE_WIDTH(src_w - 1));
++ intel_de_write_dsb(display, dsb, PLANE_STRIDE(pipe, plane_id),
++ PLANE_STRIDE_(stride));
++ intel_de_write_dsb(display, dsb, PLANE_POS(pipe, plane_id),
++ PLANE_POS_Y(crtc_y) | PLANE_POS_X(crtc_x));
++ intel_de_write_dsb(display, dsb, PLANE_SIZE(pipe, plane_id),
++ PLANE_HEIGHT(src_h - 1) | PLANE_WIDTH(src_w - 1));
+
+- intel_de_write_fw(dev_priv, PLANE_KEYVAL(pipe, plane_id), skl_plane_keyval(plane_state));
+- intel_de_write_fw(dev_priv, PLANE_KEYMSK(pipe, plane_id), skl_plane_keymsk(plane_state));
+- intel_de_write_fw(dev_priv, PLANE_KEYMAX(pipe, plane_id), skl_plane_keymax(plane_state));
++ intel_de_write_dsb(display, dsb, PLANE_KEYVAL(pipe, plane_id),
++ skl_plane_keyval(plane_state));
++ intel_de_write_dsb(display, dsb, PLANE_KEYMSK(pipe, plane_id),
++ skl_plane_keymsk(plane_state));
++ intel_de_write_dsb(display, dsb, PLANE_KEYMAX(pipe, plane_id),
++ skl_plane_keymax(plane_state));
+
+- intel_de_write_fw(dev_priv, PLANE_OFFSET(pipe, plane_id),
+- PLANE_OFFSET_Y(y) | PLANE_OFFSET_X(x));
++ intel_de_write_dsb(display, dsb, PLANE_OFFSET(pipe, plane_id),
++ PLANE_OFFSET_Y(y) | PLANE_OFFSET_X(x));
+
+ if (intel_fb_is_rc_ccs_cc_modifier(fb->modifier)) {
+- intel_de_write_fw(dev_priv, PLANE_CC_VAL(pipe, plane_id, 0),
+- lower_32_bits(plane_state->ccval));
+- intel_de_write_fw(dev_priv, PLANE_CC_VAL(pipe, plane_id, 1),
+- upper_32_bits(plane_state->ccval));
++ intel_de_write_dsb(display, dsb, PLANE_CC_VAL(pipe, plane_id, 0),
++ lower_32_bits(plane_state->ccval));
++ intel_de_write_dsb(display, dsb, PLANE_CC_VAL(pipe, plane_id, 1),
++ upper_32_bits(plane_state->ccval));
+ }
+
+ /* FLAT CCS doesn't need to program AUX_DIST */
+ if (!HAS_FLAT_CCS(dev_priv) && DISPLAY_VER(dev_priv) < 20)
+- intel_de_write_fw(dev_priv, PLANE_AUX_DIST(pipe, plane_id),
+- skl_plane_aux_dist(plane_state, color_plane));
++ intel_de_write_dsb(display, dsb, PLANE_AUX_DIST(pipe, plane_id),
++ skl_plane_aux_dist(plane_state, color_plane));
+
+ if (icl_is_hdr_plane(dev_priv, plane_id))
+- intel_de_write_fw(dev_priv, PLANE_CUS_CTL(pipe, plane_id),
+- plane_state->cus_ctl);
++ intel_de_write_dsb(display, dsb, PLANE_CUS_CTL(pipe, plane_id),
++ plane_state->cus_ctl);
+
+- intel_de_write_fw(dev_priv, PLANE_COLOR_CTL(pipe, plane_id), plane_color_ctl);
++ intel_de_write_dsb(display, dsb, PLANE_COLOR_CTL(pipe, plane_id),
++ plane_color_ctl);
+
+ if (fb->format->is_yuv && icl_is_hdr_plane(dev_priv, plane_id))
+- icl_program_input_csc(plane, crtc_state, plane_state);
++ icl_program_input_csc(dsb, plane, plane_state);
+
+- skl_write_plane_wm(plane, crtc_state);
++ skl_write_plane_wm(dsb, plane, crtc_state);
+
+ /*
+ * FIXME: pxp session invalidation can hit at any time, even at commit time
+ * or after the commit; display content will then be garbage.
+ */
+ if (plane_state->force_black)
+- icl_plane_csc_load_black(plane);
++ icl_plane_csc_load_black(dsb, plane, crtc_state);
+
+- icl_plane_update_sel_fetch_noarm(plane, crtc_state, plane_state, color_plane);
++ icl_plane_update_sel_fetch_noarm(dsb, plane, crtc_state, plane_state, color_plane);
+ }
+
+-static void icl_plane_update_sel_fetch_arm(struct intel_plane *plane,
++static void icl_plane_update_sel_fetch_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+- struct drm_i915_private *i915 = to_i915(plane->base.dev);
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ enum pipe pipe = plane->pipe;
+
+ if (!crtc_state->enable_psr2_sel_fetch)
+ return;
+
+ if (drm_rect_height(&plane_state->psr2_sel_fetch_area) > 0)
+- intel_de_write_fw(i915, SEL_FETCH_PLANE_CTL(pipe, plane->id),
+- SEL_FETCH_PLANE_CTL_ENABLE);
++ intel_de_write_dsb(display, dsb, SEL_FETCH_PLANE_CTL(pipe, plane->id),
++ SEL_FETCH_PLANE_CTL_ENABLE);
+ else
+- icl_plane_disable_sel_fetch_arm(plane, crtc_state);
++ icl_plane_disable_sel_fetch_arm(dsb, plane, crtc_state);
+ }
+
+ static void
+-icl_plane_update_arm(struct intel_plane *plane,
++icl_plane_update_arm(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state)
+ {
+- struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ enum plane_id plane_id = plane->id;
+ enum pipe pipe = plane->pipe;
+ int color_plane = icl_plane_color_plane(plane_state);
+@@ -1516,25 +1540,27 @@ icl_plane_update_arm(struct intel_plane *plane,
+ if (plane_state->scaler_id >= 0)
+ skl_program_plane_scaler(plane, crtc_state, plane_state);
+
+- icl_plane_update_sel_fetch_arm(plane, crtc_state, plane_state);
++ icl_plane_update_sel_fetch_arm(dsb, plane, crtc_state, plane_state);
+
+ /*
+ * The control register self-arms if the plane was previously
+ * disabled. Try to make the plane enable atomic by writing
+ * the control register just before the surface register.
+ */
+- intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), plane_ctl);
+- intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id),
+- skl_plane_surf(plane_state, color_plane));
++ intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id),
++ plane_ctl);
++ intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id),
++ skl_plane_surf(plane_state, color_plane));
+ }
+
+ static void
+-skl_plane_async_flip(struct intel_plane *plane,
++skl_plane_async_flip(struct intel_dsb *dsb,
++ struct intel_plane *plane,
+ const struct intel_crtc_state *crtc_state,
+ const struct intel_plane_state *plane_state,
+ bool async_flip)
+ {
+- struct drm_i915_private *dev_priv = to_i915(plane->base.dev);
++ struct intel_display *display = to_intel_display(plane->base.dev);
+ enum plane_id plane_id = plane->id;
+ enum pipe pipe = plane->pipe;
+ u32 plane_ctl = plane_state->ctl;
+@@ -1544,9 +1570,10 @@ skl_plane_async_flip(struct intel_plane *plane,
+ if (async_flip)
+ plane_ctl |= PLANE_CTL_ASYNC_FLIP;
+
+- intel_de_write_fw(dev_priv, PLANE_CTL(pipe, plane_id), plane_ctl);
+- intel_de_write_fw(dev_priv, PLANE_SURF(pipe, plane_id),
+- skl_plane_surf(plane_state, 0));
++ intel_de_write_dsb(display, dsb, PLANE_CTL(pipe, plane_id),
++ plane_ctl);
++ intel_de_write_dsb(display, dsb, PLANE_SURF(pipe, plane_id),
++ skl_plane_surf(plane_state, 0));
+ }
+
+ static bool intel_format_is_p01x(u32 format)
+diff --git a/drivers/gpu/drm/imagination/pvr_fw_meta.c b/drivers/gpu/drm/imagination/pvr_fw_meta.c
+index c39beb70c3173e..6d13864851fc2e 100644
+--- a/drivers/gpu/drm/imagination/pvr_fw_meta.c
++++ b/drivers/gpu/drm/imagination/pvr_fw_meta.c
+@@ -527,8 +527,10 @@ pvr_meta_vm_map(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj)
+ static void
+ pvr_meta_vm_unmap(struct pvr_device *pvr_dev, struct pvr_fw_object *fw_obj)
+ {
+- pvr_vm_unmap(pvr_dev->kernel_vm_ctx, fw_obj->fw_mm_node.start,
+- fw_obj->fw_mm_node.size);
++ struct pvr_gem_object *pvr_obj = fw_obj->gem;
++
++ pvr_vm_unmap_obj(pvr_dev->kernel_vm_ctx, pvr_obj,
++ fw_obj->fw_mm_node.start, fw_obj->fw_mm_node.size);
+ }
+
+ static bool
+diff --git a/drivers/gpu/drm/imagination/pvr_fw_trace.c b/drivers/gpu/drm/imagination/pvr_fw_trace.c
+index 73707daa4e52d1..5dbb636d7d4ffe 100644
+--- a/drivers/gpu/drm/imagination/pvr_fw_trace.c
++++ b/drivers/gpu/drm/imagination/pvr_fw_trace.c
+@@ -333,8 +333,8 @@ static int fw_trace_seq_show(struct seq_file *s, void *v)
+ if (sf_id == ROGUE_FW_SF_LAST)
+ return -EINVAL;
+
+- timestamp = read_fw_trace(trace_seq_data, 1) |
+- ((u64)read_fw_trace(trace_seq_data, 2) << 32);
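++	/* Offset 1 of the trace entry holds the high 32 bits, offset 2 the low. */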
++ timestamp = ((u64)read_fw_trace(trace_seq_data, 1) << 32) |
++ read_fw_trace(trace_seq_data, 2);
+ timestamp = (timestamp & ~ROGUE_FWT_TIMESTAMP_TIME_CLRMSK) >>
+ ROGUE_FWT_TIMESTAMP_TIME_SHIFT;
+
+diff --git a/drivers/gpu/drm/imagination/pvr_queue.c b/drivers/gpu/drm/imagination/pvr_queue.c
+index 20cb4601208214..87780cc7c0c322 100644
+--- a/drivers/gpu/drm/imagination/pvr_queue.c
++++ b/drivers/gpu/drm/imagination/pvr_queue.c
+@@ -109,12 +109,20 @@ pvr_queue_fence_get_driver_name(struct dma_fence *f)
+ return PVR_DRIVER_NAME;
+ }
+
++static void pvr_queue_fence_release_work(struct work_struct *w)
++{
++ struct pvr_queue_fence *fence = container_of(w, struct pvr_queue_fence, release_work);
++
++ pvr_context_put(fence->queue->ctx);
++ dma_fence_free(&fence->base);
++}
++
+ static void pvr_queue_fence_release(struct dma_fence *f)
+ {
+ struct pvr_queue_fence *fence = container_of(f, struct pvr_queue_fence, base);
++ struct pvr_device *pvr_dev = fence->queue->ctx->pvr_dev;
+
+- pvr_context_put(fence->queue->ctx);
+- dma_fence_free(f);
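++	/* Do the final release (context put and fence free) from a workqueue. */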
++ queue_work(pvr_dev->sched_wq, &fence->release_work);
+ }
+
+ static const char *
+@@ -268,6 +276,7 @@ pvr_queue_fence_init(struct dma_fence *f,
+
+ pvr_context_get(queue->ctx);
+ fence->queue = queue;
++ INIT_WORK(&fence->release_work, pvr_queue_fence_release_work);
+ dma_fence_init(&fence->base, fence_ops,
+ &fence_ctx->lock, fence_ctx->id,
+ atomic_inc_return(&fence_ctx->seqno));
+@@ -304,8 +313,9 @@ pvr_queue_cccb_fence_init(struct dma_fence *fence, struct pvr_queue *queue)
+ static void
+ pvr_queue_job_fence_init(struct dma_fence *fence, struct pvr_queue *queue)
+ {
+- pvr_queue_fence_init(fence, queue, &pvr_queue_job_fence_ops,
+- &queue->job_fence_ctx);
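++	/* fence->ops is set by dma_fence_init(); skip fences already initialised. */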
++ if (!fence->ops)
++ pvr_queue_fence_init(fence, queue, &pvr_queue_job_fence_ops,
++ &queue->job_fence_ctx);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/imagination/pvr_queue.h b/drivers/gpu/drm/imagination/pvr_queue.h
+index e06ced69302fca..93fe9ac9f58ccc 100644
+--- a/drivers/gpu/drm/imagination/pvr_queue.h
++++ b/drivers/gpu/drm/imagination/pvr_queue.h
+@@ -5,6 +5,7 @@
+ #define PVR_QUEUE_H
+
+ #include <drm/gpu_scheduler.h>
++#include <linux/workqueue.h>
+
+ #include "pvr_cccb.h"
+ #include "pvr_device.h"
+@@ -63,6 +64,9 @@ struct pvr_queue_fence {
+
+ /** @queue: Queue that created this fence. */
+ struct pvr_queue *queue;
++
++ /** @release_work: Fence release work structure. */
++ struct work_struct release_work;
+ };
+
+ /**
+diff --git a/drivers/gpu/drm/imagination/pvr_vm.c b/drivers/gpu/drm/imagination/pvr_vm.c
+index 363f885a709826..2896fa7501b1cc 100644
+--- a/drivers/gpu/drm/imagination/pvr_vm.c
++++ b/drivers/gpu/drm/imagination/pvr_vm.c
+@@ -293,8 +293,9 @@ pvr_vm_bind_op_map_init(struct pvr_vm_bind_op *bind_op,
+
+ static int
+ pvr_vm_bind_op_unmap_init(struct pvr_vm_bind_op *bind_op,
+- struct pvr_vm_context *vm_ctx, u64 device_addr,
+- u64 size)
++ struct pvr_vm_context *vm_ctx,
++ struct pvr_gem_object *pvr_obj,
++ u64 device_addr, u64 size)
+ {
+ int err;
+
+@@ -318,6 +319,7 @@ pvr_vm_bind_op_unmap_init(struct pvr_vm_bind_op *bind_op,
+ goto err_bind_op_fini;
+ }
+
++ bind_op->pvr_obj = pvr_obj;
+ bind_op->vm_ctx = vm_ctx;
+ bind_op->device_addr = device_addr;
+ bind_op->size = size;
+@@ -597,20 +599,6 @@ pvr_vm_create_context(struct pvr_device *pvr_dev, bool is_userspace_context)
+ return ERR_PTR(err);
+ }
+
+-/**
+- * pvr_vm_unmap_all() - Unmap all mappings associated with a VM context.
+- * @vm_ctx: Target VM context.
+- *
+- * This function ensures that no mappings are left dangling by unmapping them
+- * all in order of ascending device-virtual address.
+- */
+-void
+-pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx)
+-{
+- WARN_ON(pvr_vm_unmap(vm_ctx, vm_ctx->gpuvm_mgr.mm_start,
+- vm_ctx->gpuvm_mgr.mm_range));
+-}
+-
+ /**
+ * pvr_vm_context_release() - Teardown a VM context.
+ * @ref_count: Pointer to reference counter of the VM context.
+@@ -703,11 +691,7 @@ pvr_vm_lock_extra(struct drm_gpuvm_exec *vm_exec)
+ struct pvr_vm_bind_op *bind_op = vm_exec->extra.priv;
+ struct pvr_gem_object *pvr_obj = bind_op->pvr_obj;
+
+- /* Unmap operations don't have an object to lock. */
+- if (!pvr_obj)
+- return 0;
+-
+- /* Acquire lock on the GEM being mapped. */
++ /* Acquire lock on the GEM object being mapped/unmapped. */
+ return drm_exec_lock_obj(&vm_exec->exec, gem_from_pvr_gem(pvr_obj));
+ }
+
+@@ -772,8 +756,10 @@ pvr_vm_map(struct pvr_vm_context *vm_ctx, struct pvr_gem_object *pvr_obj,
+ }
+
+ /**
+- * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.
++ * pvr_vm_unmap_obj_locked() - Unmap an already mapped section of device-virtual
++ * memory.
+ * @vm_ctx: Target VM context.
++ * @pvr_obj: Target PowerVR memory object.
+ * @device_addr: Virtual device address at the start of the target mapping.
+ * @size: Size of the target mapping.
+ *
+@@ -784,9 +770,13 @@ pvr_vm_map(struct pvr_vm_context *vm_ctx, struct pvr_gem_object *pvr_obj,
+ * * Any error encountered while performing internal operations required to
+ * destroy the mapping (returned from pvr_vm_gpuva_unmap or
+ * pvr_vm_gpuva_remap).
++ *
++ * The vm_ctx->lock must be held when calling this function.
+ */
+-int
+-pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
++static int
++pvr_vm_unmap_obj_locked(struct pvr_vm_context *vm_ctx,
++ struct pvr_gem_object *pvr_obj,
++ u64 device_addr, u64 size)
+ {
+ struct pvr_vm_bind_op bind_op = {0};
+ struct drm_gpuvm_exec vm_exec = {
+@@ -799,11 +789,13 @@ pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
+ },
+ };
+
+- int err = pvr_vm_bind_op_unmap_init(&bind_op, vm_ctx, device_addr,
+- size);
++ int err = pvr_vm_bind_op_unmap_init(&bind_op, vm_ctx, pvr_obj,
++ device_addr, size);
+ if (err)
+ return err;
+
++ pvr_gem_object_get(pvr_obj);
++
+ err = drm_gpuvm_exec_lock(&vm_exec);
+ if (err)
+ goto err_cleanup;
+@@ -818,6 +810,96 @@ pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
+ return err;
+ }
+
++/**
++ * pvr_vm_unmap_obj() - Unmap an already mapped section of device-virtual
++ * memory.
++ * @vm_ctx: Target VM context.
++ * @pvr_obj: Target PowerVR memory object.
++ * @device_addr: Virtual device address at the start of the target mapping.
++ * @size: Size of the target mapping.
++ *
++ * Return:
++ * * 0 on success,
++ * * Any error encountered by pvr_vm_unmap_obj_locked.
++ */
++int
++pvr_vm_unmap_obj(struct pvr_vm_context *vm_ctx, struct pvr_gem_object *pvr_obj,
++ u64 device_addr, u64 size)
++{
++ int err;
++
++ mutex_lock(&vm_ctx->lock);
++ err = pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj, device_addr, size);
++ mutex_unlock(&vm_ctx->lock);
++
++ return err;
++}
++
++/**
++ * pvr_vm_unmap() - Unmap an already mapped section of device-virtual memory.
++ * @vm_ctx: Target VM context.
++ * @device_addr: Virtual device address at the start of the target mapping.
++ * @size: Size of the target mapping.
++ *
++ * Return:
++ * * 0 on success,
++ * * Any error encountered by drm_gpuva_find,
++ * * Any error encountered by pvr_vm_unmap_obj_locked.
++ */
++int
++pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size)
++{
++ struct pvr_gem_object *pvr_obj;
++ struct drm_gpuva *va;
++ int err;
++
++ mutex_lock(&vm_ctx->lock);
++
++ va = drm_gpuva_find(&vm_ctx->gpuvm_mgr, device_addr, size);
++ if (va) {
++ pvr_obj = gem_to_pvr_gem(va->gem.obj);
++ err = pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj,
++ va->va.addr, va->va.range);
++ } else {
++ err = -ENOENT;
++ }
++
++ mutex_unlock(&vm_ctx->lock);
++
++ return err;
++}
++
++/**
++ * pvr_vm_unmap_all() - Unmap all mappings associated with a VM context.
++ * @vm_ctx: Target VM context.
++ *
++ * This function ensures that no mappings are left dangling by unmapping them
++ * all in order of ascending device-virtual address.
++ */
++void
++pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx)
++{
++ mutex_lock(&vm_ctx->lock);
++
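++	/* Unmap one mapping at a time, lowest device-virtual address first. */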
++ for (;;) {
++ struct pvr_gem_object *pvr_obj;
++ struct drm_gpuva *va;
++
++ va = drm_gpuva_find_first(&vm_ctx->gpuvm_mgr,
++ vm_ctx->gpuvm_mgr.mm_start,
++ vm_ctx->gpuvm_mgr.mm_range);
++ if (!va)
++ break;
++
++ pvr_obj = gem_to_pvr_gem(va->gem.obj);
++
++ WARN_ON(pvr_vm_unmap_obj_locked(vm_ctx, pvr_obj,
++ va->va.addr, va->va.range));
++ }
++
++ mutex_unlock(&vm_ctx->lock);
++}
++
+ /* Static data areas are determined by firmware. */
+ static const struct drm_pvr_static_data_area static_data_areas[] = {
+ {
+diff --git a/drivers/gpu/drm/imagination/pvr_vm.h b/drivers/gpu/drm/imagination/pvr_vm.h
+index 79406243617c1f..b0528dffa7f1ba 100644
+--- a/drivers/gpu/drm/imagination/pvr_vm.h
++++ b/drivers/gpu/drm/imagination/pvr_vm.h
+@@ -38,6 +38,9 @@ struct pvr_vm_context *pvr_vm_create_context(struct pvr_device *pvr_dev,
+ int pvr_vm_map(struct pvr_vm_context *vm_ctx,
+ struct pvr_gem_object *pvr_obj, u64 pvr_obj_offset,
+ u64 device_addr, u64 size);
++int pvr_vm_unmap_obj(struct pvr_vm_context *vm_ctx,
++ struct pvr_gem_object *pvr_obj,
++ u64 device_addr, u64 size);
+ int pvr_vm_unmap(struct pvr_vm_context *vm_ctx, u64 device_addr, u64 size);
+ void pvr_vm_unmap_all(struct pvr_vm_context *vm_ctx);
+
+diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig
+index ceef470c9fbfcf..1050a4617fc15c 100644
+--- a/drivers/gpu/drm/nouveau/Kconfig
++++ b/drivers/gpu/drm/nouveau/Kconfig
+@@ -4,6 +4,8 @@ config DRM_NOUVEAU
+ depends on DRM && PCI && MMU
+ select IOMMU_API
+ select FW_LOADER
++ select FW_CACHE if PM_SLEEP
++ select DRM_CLIENT_SELECTION
+ select DRM_DISPLAY_DP_HELPER
+ select DRM_DISPLAY_HDMI_HELPER
+ select DRM_DISPLAY_HELPER
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index 34985771b2a285..6e5adab034713f 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -31,6 +31,7 @@
+ #include <linux/dynamic_debug.h>
+
+ #include <drm/drm_aperture.h>
++#include <drm/drm_client_setup.h>
+ #include <drm/drm_drv.h>
+ #include <drm/drm_fbdev_ttm.h>
+ #include <drm/drm_gem_ttm_helper.h>
+@@ -836,6 +837,7 @@ static int nouveau_drm_probe(struct pci_dev *pdev,
+ {
+ struct nvkm_device *device;
+ struct nouveau_drm *drm;
++ const struct drm_format_info *format;
+ int ret;
+
+ if (vga_switcheroo_client_probe_defer(pdev))
+@@ -873,9 +875,11 @@ static int nouveau_drm_probe(struct pci_dev *pdev,
+ goto fail_pci;
+
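++	/* Boards with at most 32 MiB of VRAM get an 8bpp (C8) fbdev format. */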
+ if (drm->client.device.info.ram_size <= 32 * 1024 * 1024)
+- drm_fbdev_ttm_setup(drm->dev, 8);
++ format = drm_format_info(DRM_FORMAT_C8);
+ else
+- drm_fbdev_ttm_setup(drm->dev, 32);
++ format = NULL;
++
++ drm_client_setup(drm->dev, format);
+
+ quirk_broken_nv_runpm(pdev);
+ return 0;
+@@ -1318,6 +1322,8 @@ driver_stub = {
+ .dumb_create = nouveau_display_dumb_create,
+ .dumb_map_offset = drm_gem_ttm_dumb_map_offset,
+
++ DRM_FBDEV_TTM_DRIVER_OPS,
++
+ .name = DRIVER_NAME,
+ .desc = DRIVER_DESC,
+ #ifdef GIT_REVISION
+diff --git a/drivers/gpu/drm/radeon/r300.c b/drivers/gpu/drm/radeon/r300.c
+index 05c13102a8cb8f..d22889fbfa9c83 100644
+--- a/drivers/gpu/drm/radeon/r300.c
++++ b/drivers/gpu/drm/radeon/r300.c
+@@ -359,7 +359,8 @@ int r300_mc_wait_for_idle(struct radeon_device *rdev)
+ return -1;
+ }
+
+-static void r300_gpu_init(struct radeon_device *rdev)
++/* rs400_gpu_init also calls this! */
++void r300_gpu_init(struct radeon_device *rdev)
+ {
+ uint32_t gb_tile_config, tmp;
+
+diff --git a/drivers/gpu/drm/radeon/radeon_asic.h b/drivers/gpu/drm/radeon/radeon_asic.h
+index 1e00f6b99f94b6..8f5e07834fcc60 100644
+--- a/drivers/gpu/drm/radeon/radeon_asic.h
++++ b/drivers/gpu/drm/radeon/radeon_asic.h
+@@ -165,6 +165,7 @@ void r200_set_safe_registers(struct radeon_device *rdev);
+ */
+ extern int r300_init(struct radeon_device *rdev);
+ extern void r300_fini(struct radeon_device *rdev);
++extern void r300_gpu_init(struct radeon_device *rdev);
+ extern int r300_suspend(struct radeon_device *rdev);
+ extern int r300_resume(struct radeon_device *rdev);
+ extern int r300_asic_reset(struct radeon_device *rdev, bool hard);
+diff --git a/drivers/gpu/drm/radeon/rs400.c b/drivers/gpu/drm/radeon/rs400.c
+index d6c18fd740ec6a..13cd0a688a65cb 100644
+--- a/drivers/gpu/drm/radeon/rs400.c
++++ b/drivers/gpu/drm/radeon/rs400.c
+@@ -256,8 +256,22 @@ int rs400_mc_wait_for_idle(struct radeon_device *rdev)
+
+ static void rs400_gpu_init(struct radeon_device *rdev)
+ {
+- /* FIXME: is this correct ? */
+- r420_pipes_init(rdev);
++	/* Earlier code called r420_pipes_init and then
++	 * rs400_mc_wait_for_idle(rdev). The problem is that,
++	 * at least on a Mobility Radeon Xpress 200M RC410 card
++	 * that takes this code path, the r420 pipe
++	 * initialization method ends up with num_gb_pipes == 3
++	 * while the card seems to have only one pipe.
++	 *
++	 * The problems showed up as HyperZ glitches, see:
++	 * https://bugs.freedesktop.org/show_bug.cgi?id=110897
++	 *
++	 * Delegating initialization to the r300 code seems to
++	 * work and results in proper pipe numbers. The rs400
++	 * cards are said to be not r400, but r300 kind of cards.
++	 */
++ r300_gpu_init(rdev);
++
+ if (rs400_mc_wait_for_idle(rdev)) {
+ pr_warn("rs400: Failed to wait MC idle while programming pipes. Bad things might happen. %08x\n",
+ RREG32(RADEON_MC_STATUS));
+diff --git a/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h b/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h
+index c75302ca3427ce..f56e77e7f6d022 100644
+--- a/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h
++++ b/drivers/gpu/drm/scheduler/gpu_scheduler_trace.h
+@@ -21,7 +21,7 @@
+ *
+ */
+
+-#if !defined(_GPU_SCHED_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
++#if !defined(_GPU_SCHED_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
+ #define _GPU_SCHED_TRACE_H_
+
+ #include <linux/stringify.h>
+@@ -106,7 +106,7 @@ TRACE_EVENT(drm_sched_job_wait_dep,
+ __entry->seqno)
+ );
+
+-#endif
++#endif /* _GPU_SCHED_TRACE_H_ */
+
+ /* This part must be outside protection */
+ #undef TRACE_INCLUDE_PATH
+diff --git a/drivers/gpu/drm/xe/display/xe_plane_initial.c b/drivers/gpu/drm/xe/display/xe_plane_initial.c
+index a50ab9eae40ae4..f99d38cc5d8e91 100644
+--- a/drivers/gpu/drm/xe/display/xe_plane_initial.c
++++ b/drivers/gpu/drm/xe/display/xe_plane_initial.c
+@@ -194,8 +194,6 @@ intel_find_initial_plane_obj(struct intel_crtc *crtc,
+ to_intel_plane(crtc->base.primary);
+ struct intel_plane_state *plane_state =
+ to_intel_plane_state(plane->base.state);
+- struct intel_crtc_state *crtc_state =
+- to_intel_crtc_state(crtc->base.state);
+ struct drm_framebuffer *fb;
+ struct i915_vma *vma;
+
+@@ -241,14 +239,6 @@ intel_find_initial_plane_obj(struct intel_crtc *crtc,
+ atomic_or(plane->frontbuffer_bit, &to_intel_frontbuffer(fb)->bits);
+
+ plane_config->vma = vma;
+-
+- /*
+- * Flip to the newly created mapping ASAP, so we can re-use the
+- * first part of GGTT for WOPCM, prevent flickering, and prevent
+- * the lookup of sysmem scratch pages.
+- */
+- plane->check_plane(crtc_state, plane_state);
+- plane->async_flip(plane, crtc_state, plane_state, true);
+ return;
+
+ nofb:
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index b940688c361356..98fe8573e054e9 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -379,9 +379,7 @@ int xe_gt_init_early(struct xe_gt *gt)
+ if (err)
+ return err;
+
+- xe_wa_process_gt(gt);
+ xe_wa_process_oob(gt);
+- xe_tuning_process_gt(gt);
+
+ xe_force_wake_init_gt(gt, gt_to_fw(gt));
+	spin_lock_init(&gt->global_invl_lock);
+@@ -469,6 +467,8 @@ static int all_fw_domain_init(struct xe_gt *gt)
+ goto err_hw_fence_irq;
+
+ xe_gt_mcr_set_implicit_defaults(gt);
++ xe_wa_process_gt(gt);
++ xe_tuning_process_gt(gt);
+	xe_reg_sr_apply_mmio(&gt->reg_sr, gt);
+
+ err = xe_gt_clock_init(gt);
+diff --git a/drivers/gpu/drm/xe/xe_hmm.c b/drivers/gpu/drm/xe/xe_hmm.c
+index 2c32dc46f7d482..d7a9408b3a97c8 100644
+--- a/drivers/gpu/drm/xe/xe_hmm.c
++++ b/drivers/gpu/drm/xe/xe_hmm.c
+@@ -19,11 +19,10 @@ static u64 xe_npages_in_range(unsigned long start, unsigned long end)
+ return (end - start) >> PAGE_SHIFT;
+ }
+
+-/*
++/**
+ * xe_mark_range_accessed() - mark a range as accessed, so the core
+ * mm has this information for memory eviction or write back to
+ * hard disk
+- *
+ * @range: the range to mark
+ * @write: if write to this range, we mark pages in this range
+ * as dirty
+@@ -43,15 +42,51 @@ static void xe_mark_range_accessed(struct hmm_range *range, bool write)
+ }
+ }
+
+-/*
++static int xe_alloc_sg(struct xe_device *xe, struct sg_table *st,
++ struct hmm_range *range, struct rw_semaphore *notifier_sem)
++{
++ unsigned long i, npages, hmm_pfn;
++ unsigned long num_chunks = 0;
++ int ret;
++
++	/* The HMM docs say this is needed. */
++ ret = down_read_interruptible(notifier_sem);
++ if (ret)
++ return ret;
++
++ if (mmu_interval_read_retry(range->notifier, range->notifier_seq)) {
++ up_read(notifier_sem);
++ return -EAGAIN;
++ }
++
++ npages = xe_npages_in_range(range->start, range->end);
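++	/* Walk the pfns in map-order-sized chunks to count the sg entries needed. */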
++ for (i = 0; i < npages;) {
++ unsigned long len;
++
++ hmm_pfn = range->hmm_pfns[i];
++ xe_assert(xe, hmm_pfn & HMM_PFN_VALID);
++
++ len = 1UL << hmm_pfn_to_map_order(hmm_pfn);
++
++ /* If order > 0 the page may extend beyond range->start */
++ len -= (hmm_pfn & ~HMM_PFN_FLAGS) & (len - 1);
++ i += len;
++ num_chunks++;
++ }
++ up_read(notifier_sem);
++
++ return sg_alloc_table(st, num_chunks, GFP_KERNEL);
++}
++
++/**
+ * xe_build_sg() - build a scatter gather table for all the physical pages/pfn
+ * in a hmm_range. dma-map pages if necessary. dma-address is saved in sg table
+ * and will be used to program GPU page table later.
+- *
+ * @xe: the xe device that will access the dma-address in sg table
+ * @range: the hmm range that we build the sg table from. range->hmm_pfns[]
+ * has the pfn numbers of pages that back up this hmm address range.
+ * @st: pointer to the sg table.
++ * @notifier_sem: The xe notifier lock.
+ * @write: whether we write to this range. This decides dma map direction
+ * for system pages. If write, we map it bi-directional; otherwise
+ * DMA_TO_DEVICE
+@@ -78,43 +113,84 @@ static void xe_mark_range_accessed(struct hmm_range *range, bool write)
+ * Returns 0 if successful; -ENOMEM if fails to allocate memory
+ */
+ static int xe_build_sg(struct xe_device *xe, struct hmm_range *range,
+- struct sg_table *st, bool write)
++ struct sg_table *st,
++ struct rw_semaphore *notifier_sem,
++ bool write)
+ {
++ unsigned long npages = xe_npages_in_range(range->start, range->end);
+ struct device *dev = xe->drm.dev;
+- struct page **pages;
+- u64 i, npages;
+- int ret;
++ struct scatterlist *sgl;
++ struct page *page;
++ unsigned long i, j;
+
+- npages = xe_npages_in_range(range->start, range->end);
+- pages = kvmalloc_array(npages, sizeof(*pages), GFP_KERNEL);
+- if (!pages)
+- return -ENOMEM;
++ lockdep_assert_held(notifier_sem);
+
+- for (i = 0; i < npages; i++) {
+- pages[i] = hmm_pfn_to_page(range->hmm_pfns[i]);
+- xe_assert(xe, !is_device_private_page(pages[i]));
++ i = 0;
++ for_each_sg(st->sgl, sgl, st->nents, j) {
++ unsigned long hmm_pfn, size;
++
++ hmm_pfn = range->hmm_pfns[i];
++ page = hmm_pfn_to_page(hmm_pfn);
++ xe_assert(xe, !is_device_private_page(page));
++
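++		/* A high-order pfn may begin before this page; trim the leading part. */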
++ size = 1UL << hmm_pfn_to_map_order(hmm_pfn);
++ size -= page_to_pfn(page) & (size - 1);
++ i += size;
++
++ if (unlikely(j == st->nents - 1)) {
++ if (i > npages)
++ size -= (i - npages);
++ sg_mark_end(sgl);
++ }
++ sg_set_page(sgl, page, size << PAGE_SHIFT, 0);
+ }
++ xe_assert(xe, i == npages);
+
+- ret = sg_alloc_table_from_pages_segment(st, pages, npages, 0, npages << PAGE_SHIFT,
+- xe_sg_segment_size(dev), GFP_KERNEL);
+- if (ret)
+- goto free_pages;
++ return dma_map_sgtable(dev, st, write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE,
++ DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING);
++}
++
++static void xe_hmm_userptr_set_mapped(struct xe_userptr_vma *uvma)
++{
++ struct xe_userptr *userptr = &uvma->userptr;
++ struct xe_vm *vm = xe_vma_vm(&uvma->vma);
++
++ lockdep_assert_held_write(&vm->lock);
++ lockdep_assert_held(&vm->userptr.notifier_lock);
++
++ mutex_lock(&userptr->unmap_mutex);
++ xe_assert(vm->xe, !userptr->mapped);
++ userptr->mapped = true;
++ mutex_unlock(&userptr->unmap_mutex);
++}
++
++void xe_hmm_userptr_unmap(struct xe_userptr_vma *uvma)
++{
++ struct xe_userptr *userptr = &uvma->userptr;
++ struct xe_vma *vma = &uvma->vma;
++ bool write = !xe_vma_read_only(vma);
++ struct xe_vm *vm = xe_vma_vm(vma);
++ struct xe_device *xe = vm->xe;
+
+- ret = dma_map_sgtable(dev, st, write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE,
+- DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING);
+- if (ret) {
+- sg_free_table(st);
+- st = NULL;
++ if (!lockdep_is_held_type(&vm->userptr.notifier_lock, 0) &&
++ !lockdep_is_held_type(&vm->lock, 0) &&
++ !(vma->gpuva.flags & XE_VMA_DESTROYED)) {
++ /* Don't unmap in exec critical section. */
++ xe_vm_assert_held(vm);
++ /* Don't unmap while mapping the sg. */
++ lockdep_assert_held(&vm->lock);
+ }
+
+-free_pages:
+- kvfree(pages);
+- return ret;
++ mutex_lock(&userptr->unmap_mutex);
++ if (userptr->sg && userptr->mapped)
++ dma_unmap_sgtable(xe->drm.dev, userptr->sg,
++ write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE, 0);
++ userptr->mapped = false;
++ mutex_unlock(&userptr->unmap_mutex);
+ }
+
+-/*
++/**
+ * xe_hmm_userptr_free_sg() - Free the scatter gather table of userptr
+- *
+ * @uvma: the userptr vma which hold the scatter gather table
+ *
+ * With function xe_userptr_populate_range, we allocate storage of
+@@ -124,16 +200,9 @@ static int xe_build_sg(struct xe_device *xe, struct hmm_range *range,
+ void xe_hmm_userptr_free_sg(struct xe_userptr_vma *uvma)
+ {
+ struct xe_userptr *userptr = &uvma->userptr;
+- struct xe_vma *vma = &uvma->vma;
+- bool write = !xe_vma_read_only(vma);
+- struct xe_vm *vm = xe_vma_vm(vma);
+- struct xe_device *xe = vm->xe;
+- struct device *dev = xe->drm.dev;
+-
+- xe_assert(xe, userptr->sg);
+- dma_unmap_sgtable(dev, userptr->sg,
+- write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE, 0);
+
++ xe_assert(xe_vma_vm(&uvma->vma)->xe, userptr->sg);
++ xe_hmm_userptr_unmap(uvma);
+ sg_free_table(userptr->sg);
+ userptr->sg = NULL;
+ }
+@@ -166,13 +235,20 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
+ {
+ unsigned long timeout =
+ jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
+- unsigned long *pfns, flags = HMM_PFN_REQ_FAULT;
++ unsigned long *pfns;
+ struct xe_userptr *userptr;
+ struct xe_vma *vma = &uvma->vma;
+ u64 userptr_start = xe_vma_userptr(vma);
+ u64 userptr_end = userptr_start + xe_vma_size(vma);
+ struct xe_vm *vm = xe_vma_vm(vma);
+- struct hmm_range hmm_range;
++ struct hmm_range hmm_range = {
++ .pfn_flags_mask = 0, /* ignore pfns */
++ .default_flags = HMM_PFN_REQ_FAULT,
++ .start = userptr_start,
++ .end = userptr_end,
++ .notifier = &uvma->userptr.notifier,
++ .dev_private_owner = vm->xe,
++ };
+ bool write = !xe_vma_read_only(vma);
+ unsigned long notifier_seq;
+ u64 npages;
+@@ -199,19 +275,14 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
+ return -ENOMEM;
+
+ if (write)
+- flags |= HMM_PFN_REQ_WRITE;
++ hmm_range.default_flags |= HMM_PFN_REQ_WRITE;
+
+ if (!mmget_not_zero(userptr->notifier.mm)) {
+ ret = -EFAULT;
+ goto free_pfns;
+ }
+
+- hmm_range.default_flags = flags;
+ hmm_range.hmm_pfns = pfns;
+- hmm_range.notifier = &userptr->notifier;
+- hmm_range.start = userptr_start;
+- hmm_range.end = userptr_end;
+- hmm_range.dev_private_owner = vm->xe;
+
+ while (true) {
+ hmm_range.notifier_seq = mmu_interval_read_begin(&userptr->notifier);
+@@ -238,16 +309,37 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
+ if (ret)
+ goto free_pfns;
+
+- ret = xe_build_sg(vm->xe, &hmm_range, &userptr->sgt, write);
++ ret = xe_alloc_sg(vm->xe, &userptr->sgt, &hmm_range, &vm->userptr.notifier_lock);
+ if (ret)
+ goto free_pfns;
+
++ ret = down_read_interruptible(&vm->userptr.notifier_lock);
++ if (ret)
++ goto free_st;
++
++ if (mmu_interval_read_retry(hmm_range.notifier, hmm_range.notifier_seq)) {
++ ret = -EAGAIN;
++ goto out_unlock;
++ }
++
++ ret = xe_build_sg(vm->xe, &hmm_range, &userptr->sgt,
++ &vm->userptr.notifier_lock, write);
++ if (ret)
++ goto out_unlock;
++
+ xe_mark_range_accessed(&hmm_range, write);
+ userptr->sg = &userptr->sgt;
++ xe_hmm_userptr_set_mapped(uvma);
+ userptr->notifier_seq = hmm_range.notifier_seq;
++ up_read(&vm->userptr.notifier_lock);
++ kvfree(pfns);
++ return 0;
+
++out_unlock:
++ up_read(&vm->userptr.notifier_lock);
++free_st:
++ sg_free_table(&userptr->sgt);
+ free_pfns:
+ kvfree(pfns);
+ return ret;
+ }
+-
+diff --git a/drivers/gpu/drm/xe/xe_hmm.h b/drivers/gpu/drm/xe/xe_hmm.h
+index 909dc2bdcd97ee..0ea98d8e7bbc76 100644
+--- a/drivers/gpu/drm/xe/xe_hmm.h
++++ b/drivers/gpu/drm/xe/xe_hmm.h
+@@ -3,9 +3,16 @@
+ * Copyright © 2024 Intel Corporation
+ */
+
++#ifndef _XE_HMM_H_
++#define _XE_HMM_H_
++
+ #include <linux/types.h>
+
+ struct xe_userptr_vma;
+
+ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma, bool is_mm_mmap_locked);
++
+ void xe_hmm_userptr_free_sg(struct xe_userptr_vma *uvma);
++
++void xe_hmm_userptr_unmap(struct xe_userptr_vma *uvma);
++#endif
+diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
+index 797576690356f2..230cf47fb9c5ee 100644
+--- a/drivers/gpu/drm/xe/xe_pt.c
++++ b/drivers/gpu/drm/xe/xe_pt.c
+@@ -28,6 +28,8 @@ struct xe_pt_dir {
+ struct xe_pt pt;
+ /** @children: Array of page-table child nodes */
+ struct xe_ptw *children[XE_PDES];
++ /** @staging: Array of page-table staging nodes */
++ struct xe_ptw *staging[XE_PDES];
+ };
+
+ #if IS_ENABLED(CONFIG_DRM_XE_DEBUG_VM)
+@@ -48,9 +50,10 @@ static struct xe_pt_dir *as_xe_pt_dir(struct xe_pt *pt)
+ return container_of(pt, struct xe_pt_dir, pt);
+ }
+
+-static struct xe_pt *xe_pt_entry(struct xe_pt_dir *pt_dir, unsigned int index)
++static struct xe_pt *
++xe_pt_entry_staging(struct xe_pt_dir *pt_dir, unsigned int index)
+ {
+- return container_of(pt_dir->children[index], struct xe_pt, base);
++ return container_of(pt_dir->staging[index], struct xe_pt, base);
+ }
+
+ static u64 __xe_pt_empty_pte(struct xe_tile *tile, struct xe_vm *vm,
+@@ -125,6 +128,7 @@ struct xe_pt *xe_pt_create(struct xe_vm *vm, struct xe_tile *tile,
+ }
+ pt->bo = bo;
+ pt->base.children = level ? as_xe_pt_dir(pt)->children : NULL;
++ pt->base.staging = level ? as_xe_pt_dir(pt)->staging : NULL;
+
+ if (vm->xef)
+ xe_drm_client_add_bo(vm->xef->client, pt->bo);
+@@ -205,8 +209,8 @@ void xe_pt_destroy(struct xe_pt *pt, u32 flags, struct llist_head *deferred)
+ struct xe_pt_dir *pt_dir = as_xe_pt_dir(pt);
+
+ for (i = 0; i < XE_PDES; i++) {
+- if (xe_pt_entry(pt_dir, i))
+- xe_pt_destroy(xe_pt_entry(pt_dir, i), flags,
++ if (xe_pt_entry_staging(pt_dir, i))
++ xe_pt_destroy(xe_pt_entry_staging(pt_dir, i), flags,
+ deferred);
+ }
+ }
+@@ -375,8 +379,10 @@ xe_pt_insert_entry(struct xe_pt_stage_bind_walk *xe_walk, struct xe_pt *parent,
+ /* Continue building a non-connected subtree. */
+ struct iosys_map *map = &parent->bo->vmap;
+
+- if (unlikely(xe_child))
++ if (unlikely(xe_child)) {
+ parent->base.children[offset] = &xe_child->base;
++ parent->base.staging[offset] = &xe_child->base;
++ }
+
+ xe_pt_write(xe_walk->vm->xe, map, offset, pte);
+ parent->num_live++;
+@@ -613,6 +619,7 @@ xe_pt_stage_bind(struct xe_tile *tile, struct xe_vma *vma,
+ .ops = &xe_pt_stage_bind_ops,
+ .shifts = xe_normal_pt_shifts,
+ .max_level = XE_PT_HIGHEST_LEVEL,
++ .staging = true,
+ },
+ .vm = xe_vma_vm(vma),
+ .tile = tile,
+@@ -872,7 +879,7 @@ static void xe_pt_cancel_bind(struct xe_vma *vma,
+ }
+ }
+
+-static void xe_pt_commit_locks_assert(struct xe_vma *vma)
++static void xe_pt_commit_prepare_locks_assert(struct xe_vma *vma)
+ {
+ struct xe_vm *vm = xe_vma_vm(vma);
+
+@@ -884,6 +891,16 @@ static void xe_pt_commit_locks_assert(struct xe_vma *vma)
+ xe_vm_assert_held(vm);
+ }
+
++static void xe_pt_commit_locks_assert(struct xe_vma *vma)
++{
++ struct xe_vm *vm = xe_vma_vm(vma);
++
++ xe_pt_commit_prepare_locks_assert(vma);
++
++ if (xe_vma_is_userptr(vma))
++ lockdep_assert_held_read(&vm->userptr.notifier_lock);
++}
++
+ static void xe_pt_commit(struct xe_vma *vma,
+ struct xe_vm_pgtable_update *entries,
+ u32 num_entries, struct llist_head *deferred)
+@@ -894,13 +911,17 @@ static void xe_pt_commit(struct xe_vma *vma,
+
+ for (i = 0; i < num_entries; i++) {
+ struct xe_pt *pt = entries[i].pt;
++ struct xe_pt_dir *pt_dir;
+
+ if (!pt->level)
+ continue;
+
++ pt_dir = as_xe_pt_dir(pt);
+ for (j = 0; j < entries[i].qwords; j++) {
+ struct xe_pt *oldpte = entries[i].pt_entries[j].pt;
++ int j_ = j + entries[i].ofs;
+
++ pt_dir->children[j_] = pt_dir->staging[j_];
+ xe_pt_destroy(oldpte, xe_vma_vm(vma)->flags, deferred);
+ }
+ }
+@@ -912,7 +933,7 @@ static void xe_pt_abort_bind(struct xe_vma *vma,
+ {
+ int i, j;
+
+- xe_pt_commit_locks_assert(vma);
++ xe_pt_commit_prepare_locks_assert(vma);
+
+ for (i = num_entries - 1; i >= 0; --i) {
+ struct xe_pt *pt = entries[i].pt;
+@@ -927,10 +948,10 @@ static void xe_pt_abort_bind(struct xe_vma *vma,
+ pt_dir = as_xe_pt_dir(pt);
+ for (j = 0; j < entries[i].qwords; j++) {
+ u32 j_ = j + entries[i].ofs;
+- struct xe_pt *newpte = xe_pt_entry(pt_dir, j_);
++ struct xe_pt *newpte = xe_pt_entry_staging(pt_dir, j_);
+ struct xe_pt *oldpte = entries[i].pt_entries[j].pt;
+
+- pt_dir->children[j_] = oldpte ? &oldpte->base : 0;
++ pt_dir->staging[j_] = oldpte ? &oldpte->base : 0;
+ xe_pt_destroy(newpte, xe_vma_vm(vma)->flags, NULL);
+ }
+ }
+@@ -942,7 +963,7 @@ static void xe_pt_commit_prepare_bind(struct xe_vma *vma,
+ {
+ u32 i, j;
+
+- xe_pt_commit_locks_assert(vma);
++ xe_pt_commit_prepare_locks_assert(vma);
+
+ for (i = 0; i < num_entries; i++) {
+ struct xe_pt *pt = entries[i].pt;
+@@ -960,10 +981,10 @@ static void xe_pt_commit_prepare_bind(struct xe_vma *vma,
+ struct xe_pt *newpte = entries[i].pt_entries[j].pt;
+ struct xe_pt *oldpte = NULL;
+
+- if (xe_pt_entry(pt_dir, j_))
+- oldpte = xe_pt_entry(pt_dir, j_);
++ if (xe_pt_entry_staging(pt_dir, j_))
++ oldpte = xe_pt_entry_staging(pt_dir, j_);
+
+- pt_dir->children[j_] = &newpte->base;
++ pt_dir->staging[j_] = &newpte->base;
+ entries[i].pt_entries[j].pt = oldpte;
+ }
+ }
+@@ -1212,42 +1233,22 @@ static int vma_check_userptr(struct xe_vm *vm, struct xe_vma *vma,
+ return 0;
+
+ uvma = to_userptr_vma(vma);
+- notifier_seq = uvma->userptr.notifier_seq;
++ if (xe_pt_userptr_inject_eagain(uvma))
++ xe_vma_userptr_force_invalidate(uvma);
+
+- if (uvma->userptr.initial_bind && !xe_vm_in_fault_mode(vm))
+- return 0;
++ notifier_seq = uvma->userptr.notifier_seq;
+
+ if (!mmu_interval_read_retry(&uvma->userptr.notifier,
+- notifier_seq) &&
+- !xe_pt_userptr_inject_eagain(uvma))
++ notifier_seq))
+ return 0;
+
+- if (xe_vm_in_fault_mode(vm)) {
++ if (xe_vm_in_fault_mode(vm))
+ return -EAGAIN;
+- } else {
+- spin_lock(&vm->userptr.invalidated_lock);
+- list_move_tail(&uvma->userptr.invalidate_link,
+- &vm->userptr.invalidated);
+- spin_unlock(&vm->userptr.invalidated_lock);
+-
+- if (xe_vm_in_preempt_fence_mode(vm)) {
+- struct dma_resv_iter cursor;
+- struct dma_fence *fence;
+- long err;
+-
+- dma_resv_iter_begin(&cursor, xe_vm_resv(vm),
+- DMA_RESV_USAGE_BOOKKEEP);
+- dma_resv_for_each_fence_unlocked(&cursor, fence)
+- dma_fence_enable_sw_signaling(fence);
+- dma_resv_iter_end(&cursor);
+-
+- err = dma_resv_wait_timeout(xe_vm_resv(vm),
+- DMA_RESV_USAGE_BOOKKEEP,
+- false, MAX_SCHEDULE_TIMEOUT);
+- XE_WARN_ON(err <= 0);
+- }
+- }
+
++ /*
++ * Just continue the operation since exec or rebind worker
++ * will take care of rebinding.
++ */
+ return 0;
+ }
+
+@@ -1513,6 +1514,7 @@ static unsigned int xe_pt_stage_unbind(struct xe_tile *tile, struct xe_vma *vma,
+ .ops = &xe_pt_stage_unbind_ops,
+ .shifts = xe_normal_pt_shifts,
+ .max_level = XE_PT_HIGHEST_LEVEL,
++ .staging = true,
+ },
+ .tile = tile,
+ .modified_start = xe_vma_start(vma),
+@@ -1554,7 +1556,7 @@ static void xe_pt_abort_unbind(struct xe_vma *vma,
+ {
+ int i, j;
+
+- xe_pt_commit_locks_assert(vma);
++ xe_pt_commit_prepare_locks_assert(vma);
+
+ for (i = num_entries - 1; i >= 0; --i) {
+ struct xe_vm_pgtable_update *entry = &entries[i];
+@@ -1567,7 +1569,7 @@ static void xe_pt_abort_unbind(struct xe_vma *vma,
+ continue;
+
+ for (j = entry->ofs; j < entry->ofs + entry->qwords; j++)
+- pt_dir->children[j] =
++ pt_dir->staging[j] =
+ entries[i].pt_entries[j - entry->ofs].pt ?
+ &entries[i].pt_entries[j - entry->ofs].pt->base : NULL;
+ }
+@@ -1580,7 +1582,7 @@ xe_pt_commit_prepare_unbind(struct xe_vma *vma,
+ {
+ int i, j;
+
+- xe_pt_commit_locks_assert(vma);
++ xe_pt_commit_prepare_locks_assert(vma);
+
+ for (i = 0; i < num_entries; ++i) {
+ struct xe_vm_pgtable_update *entry = &entries[i];
+@@ -1594,8 +1596,8 @@ xe_pt_commit_prepare_unbind(struct xe_vma *vma,
+ pt_dir = as_xe_pt_dir(pt);
+ for (j = entry->ofs; j < entry->ofs + entry->qwords; j++) {
+ entry->pt_entries[j - entry->ofs].pt =
+- xe_pt_entry(pt_dir, j);
+- pt_dir->children[j] = NULL;
++ xe_pt_entry_staging(pt_dir, j);
++ pt_dir->staging[j] = NULL;
+ }
+ }
+ }
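A note on the shape of the xe_pt changes above: every page-table directory now carries two parallel arrays. The staging array is what the bind/unbind prepare and abort paths mutate, while the children array is what walkers observe; xe_pt_commit() publishes staging into children only under the userptr notifier lock, which is why xe_pt_commit_locks_assert() gained a weaker prepare-time variant. The two-array commit scheme in miniature, with toy types that are not the driver's:

    #include <string.h>
    #include <stdio.h>

    #define NPDE 8

    struct toy_dir {
        void *children[NPDE];   /* published view, what walkers observe */
        void *staging[NPDE];    /* scratch view, built and aborted freely */
    };

    static void stage_entry(struct toy_dir *d, int i, void *child)
    {
        d->staging[i] = child;  /* prepare: touch staging only */
    }

    static void abort_bind(struct toy_dir *d)
    {
        /* abort: throw the staged state away, published view untouched */
        memcpy(d->staging, d->children, sizeof(d->staging));
    }

    static void commit(struct toy_dir *d)
    {
        /* publish; in the driver this runs with the notifier lock held */
        memcpy(d->children, d->staging, sizeof(d->children));
    }

    int main(void)
    {
        struct toy_dir d = {0};
        int child = 42;

        stage_entry(&d, 3, &child);
        commit(&d);
        printf("%p\n", d.children[3]);
        (void)abort_bind;
        return 0;
    }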
+diff --git a/drivers/gpu/drm/xe/xe_pt_walk.c b/drivers/gpu/drm/xe/xe_pt_walk.c
+index b8b3d2aea4923d..be602a763ff32b 100644
+--- a/drivers/gpu/drm/xe/xe_pt_walk.c
++++ b/drivers/gpu/drm/xe/xe_pt_walk.c
+@@ -74,7 +74,8 @@ int xe_pt_walk_range(struct xe_ptw *parent, unsigned int level,
+ u64 addr, u64 end, struct xe_pt_walk *walk)
+ {
+ pgoff_t offset = xe_pt_offset(addr, level, walk);
+- struct xe_ptw **entries = parent->children ? parent->children : NULL;
++ struct xe_ptw **entries = walk->staging ? (parent->staging ?: NULL) :
++ (parent->children ?: NULL);
+ const struct xe_pt_walk_ops *ops = walk->ops;
+ enum page_walk_action action;
+ struct xe_ptw *child;
+diff --git a/drivers/gpu/drm/xe/xe_pt_walk.h b/drivers/gpu/drm/xe/xe_pt_walk.h
+index 5ecc4d2f0f6536..5c02c244f7de35 100644
+--- a/drivers/gpu/drm/xe/xe_pt_walk.h
++++ b/drivers/gpu/drm/xe/xe_pt_walk.h
+@@ -11,12 +11,14 @@
+ /**
+ * struct xe_ptw - base class for driver pagetable subclassing.
+ * @children: Pointer to an array of children if any.
++ * @staging: Pointer to an array of staging if any.
+ *
+ * Drivers could subclass this, and if it's a page-directory, typically
+ * embed an array of xe_ptw pointers.
+ */
+ struct xe_ptw {
+ struct xe_ptw **children;
++ struct xe_ptw **staging;
+ };
+
+ /**
+@@ -41,6 +43,8 @@ struct xe_pt_walk {
+ * as shared pagetables.
+ */
+ bool shared_pt_mode;
++ /** @staging: Walk staging PT structure */
++ bool staging;
+ };
+
+ /**
+diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
+index 5693b337f5dffe..872de052d670f5 100644
+--- a/drivers/gpu/drm/xe/xe_vm.c
++++ b/drivers/gpu/drm/xe/xe_vm.c
+@@ -580,51 +580,26 @@ static void preempt_rebind_work_func(struct work_struct *w)
+ trace_xe_vm_rebind_worker_exit(vm);
+ }
+
+-static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
+- const struct mmu_notifier_range *range,
+- unsigned long cur_seq)
++static void __vma_userptr_invalidate(struct xe_vm *vm, struct xe_userptr_vma *uvma)
+ {
+- struct xe_userptr *userptr = container_of(mni, typeof(*userptr), notifier);
+- struct xe_userptr_vma *uvma = container_of(userptr, typeof(*uvma), userptr);
++ struct xe_userptr *userptr = &uvma->userptr;
+ struct xe_vma *vma = &uvma->vma;
+- struct xe_vm *vm = xe_vma_vm(vma);
+ struct dma_resv_iter cursor;
+ struct dma_fence *fence;
+ long err;
+
+- xe_assert(vm->xe, xe_vma_is_userptr(vma));
+- trace_xe_vma_userptr_invalidate(vma);
+-
+- if (!mmu_notifier_range_blockable(range))
+- return false;
+-
+- vm_dbg(&xe_vma_vm(vma)->xe->drm,
+- "NOTIFIER: addr=0x%016llx, range=0x%016llx",
+- xe_vma_start(vma), xe_vma_size(vma));
+-
+- down_write(&vm->userptr.notifier_lock);
+- mmu_interval_set_seq(mni, cur_seq);
+-
+- /* No need to stop gpu access if the userptr is not yet bound. */
+- if (!userptr->initial_bind) {
+- up_write(&vm->userptr.notifier_lock);
+- return true;
+- }
+-
+ /*
+ * Tell exec and rebind worker they need to repin and rebind this
+ * userptr.
+ */
+ if (!xe_vm_in_fault_mode(vm) &&
+- !(vma->gpuva.flags & XE_VMA_DESTROYED) && vma->tile_present) {
++ !(vma->gpuva.flags & XE_VMA_DESTROYED)) {
+ spin_lock(&vm->userptr.invalidated_lock);
+ list_move_tail(&userptr->invalidate_link,
+ &vm->userptr.invalidated);
+ spin_unlock(&vm->userptr.invalidated_lock);
+ }
+
+- up_write(&vm->userptr.notifier_lock);
+-
+ /*
+ * Preempt fences turn into schedule disables, pipeline these.
+ * Note that even in fault mode, we need to wait for binds and
+@@ -642,11 +617,37 @@ static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
+ false, MAX_SCHEDULE_TIMEOUT);
+ XE_WARN_ON(err <= 0);
+
+- if (xe_vm_in_fault_mode(vm)) {
++ if (xe_vm_in_fault_mode(vm) && userptr->initial_bind) {
+ err = xe_vm_invalidate_vma(vma);
+ XE_WARN_ON(err);
+ }
+
++ xe_hmm_userptr_unmap(uvma);
++}
++
++static bool vma_userptr_invalidate(struct mmu_interval_notifier *mni,
++ const struct mmu_notifier_range *range,
++ unsigned long cur_seq)
++{
++ struct xe_userptr_vma *uvma = container_of(mni, typeof(*uvma), userptr.notifier);
++ struct xe_vma *vma = &uvma->vma;
++ struct xe_vm *vm = xe_vma_vm(vma);
++
++ xe_assert(vm->xe, xe_vma_is_userptr(vma));
++ trace_xe_vma_userptr_invalidate(vma);
++
++ if (!mmu_notifier_range_blockable(range))
++ return false;
++
++ vm_dbg(&xe_vma_vm(vma)->xe->drm,
++ "NOTIFIER: addr=0x%016llx, range=0x%016llx",
++ xe_vma_start(vma), xe_vma_size(vma));
++
++ down_write(&vm->userptr.notifier_lock);
++ mmu_interval_set_seq(mni, cur_seq);
++
++ __vma_userptr_invalidate(vm, uvma);
++ up_write(&vm->userptr.notifier_lock);
+ trace_xe_vma_userptr_invalidate_complete(vma);
+
+ return true;
+@@ -656,6 +657,34 @@ static const struct mmu_interval_notifier_ops vma_userptr_notifier_ops = {
+ .invalidate = vma_userptr_invalidate,
+ };
+
++#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
++/**
++ * xe_vma_userptr_force_invalidate() - force invalidate a userptr
++ * @uvma: The userptr vma to invalidate
++ *
++ * Perform a forced userptr invalidation for testing purposes.
++ */
++void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
++{
++ struct xe_vm *vm = xe_vma_vm(&uvma->vma);
++
++ /* Protect against concurrent userptr pinning */
++ lockdep_assert_held(&vm->lock);
++ /* Protect against concurrent notifiers */
++ lockdep_assert_held(&vm->userptr.notifier_lock);
++ /*
++ * Protect against concurrent instances of this function and
++ * the critical exec sections
++ */
++ xe_vm_assert_held(vm);
++
++ if (!mmu_interval_read_retry(&uvma->userptr.notifier,
++ uvma->userptr.notifier_seq))
++ uvma->userptr.notifier_seq -= 2;
++ __vma_userptr_invalidate(vm, uvma);
++}
++#endif
++
+ int xe_vm_userptr_pin(struct xe_vm *vm)
+ {
+ struct xe_userptr_vma *uvma, *next;
+@@ -1012,6 +1041,7 @@ static struct xe_vma *xe_vma_create(struct xe_vm *vm,
+ INIT_LIST_HEAD(&userptr->invalidate_link);
+ INIT_LIST_HEAD(&userptr->repin_link);
+ vma->gpuva.gem.offset = bo_offset_or_userptr;
++ mutex_init(&userptr->unmap_mutex);
+
+ err = mmu_interval_notifier_insert(&userptr->notifier,
+ current->mm,
+@@ -1053,6 +1083,7 @@ static void xe_vma_destroy_late(struct xe_vma *vma)
+ * them anymore
+ */
+ mmu_interval_notifier_remove(&userptr->notifier);
++ mutex_destroy(&userptr->unmap_mutex);
+ xe_vm_put(vm);
+ } else if (xe_vma_is_null(vma)) {
+ xe_vm_put(vm);
+@@ -2284,8 +2315,17 @@ static int vm_bind_ioctl_ops_parse(struct xe_vm *vm, struct drm_gpuva_ops *ops,
+ break;
+ }
+ case DRM_GPUVA_OP_UNMAP:
++ xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
++ break;
+ case DRM_GPUVA_OP_PREFETCH:
+- /* FIXME: Need to skip some prefetch ops */
++ vma = gpuva_to_vma(op->base.prefetch.va);
++
++ if (xe_vma_is_userptr(vma)) {
++ err = xe_vma_userptr_pin_pages(to_userptr_vma(vma));
++ if (err)
++ return err;
++ }
++
+ xe_vma_ops_incr_pt_update_ops(vops, op->tile_mask);
+ break;
+ default:
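Two refactoring idioms are at work in the xe_vm.c hunks above: the notifier callback is split into a lock-taking wrapper and a double-underscore body so the CONFIG_DRM_XE_USERPTR_INVAL_INJECT test hook can run the body under locks its context already holds, and the hook forces a retry by backdating the cached sequence number by 2, which keeps the value's parity while guaranteeing that the next mmu_interval_read_retry() reports a mismatch. The wrapper convention in sketch form, with illustrative names:

    #include <pthread.h>

    static pthread_rwlock_t notifier_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Kernel convention: __name means "caller already holds the lock". */
    static void __invalidate(void)
    {
        /* ... shared teardown: move to invalidated list, wait on fences ... */
    }

    /* Normal notifier path: takes the lock around the shared body. */
    static void invalidate(void)
    {
        pthread_rwlock_wrlock(&notifier_lock);
        __invalidate();
        pthread_rwlock_unlock(&notifier_lock);
    }

    /* Test-injection path: the lock is already held by the caller. */
    static void force_invalidate_locked(void)
    {
        __invalidate();
    }

    int main(void)
    {
        invalidate();
        pthread_rwlock_wrlock(&notifier_lock);
        force_invalidate_locked();
        pthread_rwlock_unlock(&notifier_lock);
        return 0;
    }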
+diff --git a/drivers/gpu/drm/xe/xe_vm.h b/drivers/gpu/drm/xe/xe_vm.h
+index c864dba35e1d5c..d2406532fcc500 100644
+--- a/drivers/gpu/drm/xe/xe_vm.h
++++ b/drivers/gpu/drm/xe/xe_vm.h
+@@ -275,9 +275,17 @@ static inline void vm_dbg(const struct drm_device *dev,
+ const char *format, ...)
+ { /* noop */ }
+ #endif
+-#endif
+
+ struct xe_vm_snapshot *xe_vm_snapshot_capture(struct xe_vm *vm);
+ void xe_vm_snapshot_capture_delayed(struct xe_vm_snapshot *snap);
+ void xe_vm_snapshot_print(struct xe_vm_snapshot *snap, struct drm_printer *p);
+ void xe_vm_snapshot_free(struct xe_vm_snapshot *snap);
++
++#if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
++void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma);
++#else
++static inline void xe_vma_userptr_force_invalidate(struct xe_userptr_vma *uvma)
++{
++}
++#endif
++#endif
+diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
+index 7f9a303e51d896..a4b4091cfd0dab 100644
+--- a/drivers/gpu/drm/xe/xe_vm_types.h
++++ b/drivers/gpu/drm/xe/xe_vm_types.h
+@@ -59,12 +59,16 @@ struct xe_userptr {
+ struct sg_table *sg;
+ /** @notifier_seq: notifier sequence number */
+ unsigned long notifier_seq;
++ /** @unmap_mutex: Mutex protecting dma-unmapping */
++ struct mutex unmap_mutex;
+ /**
+ * @initial_bind: user pointer has been bound at least once.
+ * write: vm->userptr.notifier_lock in read mode and vm->resv held.
+ * read: vm->userptr.notifier_lock in write mode or vm->resv held.
+ */
+ bool initial_bind;
++ /** @mapped: Whether the @sgt sg-table is dma-mapped. Protected by @unmap_mutex. */
++ bool mapped;
+ #if IS_ENABLED(CONFIG_DRM_XE_USERPTR_INVAL_INJECT)
+ u32 divisor;
+ #endif
+@@ -227,8 +231,8 @@ struct xe_vm {
+ * up for revalidation. Protected from access with the
+ * @invalidated_lock. Removing items from the list
+ * additionally requires @lock in write mode, and adding
+- * items to the list requires the @userptr.notifer_lock in
+- * write mode.
++ * items to the list requires either the @userptr.notifer_lock in
++ * write mode, OR @lock in write mode.
+ */
+ struct list_head invalidated;
+ } userptr;
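The new unmap_mutex/mapped pair backs the xe_hmm_userptr_unmap() helper introduced earlier in this patch: dma-unmapping may now be requested both from the invalidation notifier and from normal teardown, and a flag guarded by its own mutex makes the operation idempotent. A minimal sketch, assuming pthreads in place of the kernel mutex:

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t unmap_mutex = PTHREAD_MUTEX_INITIALIZER;
    static bool mapped;

    /* Safe to call from either path, any number of times. */
    static void userptr_unmap(void)
    {
        pthread_mutex_lock(&unmap_mutex);
        if (mapped) {
            /* ... dma_unmap_sgtable()-style work, done exactly once ... */
            mapped = false;
        }
        pthread_mutex_unlock(&unmap_mutex);
    }

    int main(void)
    {
        mapped = true;
        userptr_unmap();
        userptr_unmap();   /* second call is a harmless no-op */
        return 0;
    }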
+diff --git a/drivers/hid/hid-appleir.c b/drivers/hid/hid-appleir.c
+index 8deded1857254a..c45e5aa569d25f 100644
+--- a/drivers/hid/hid-appleir.c
++++ b/drivers/hid/hid-appleir.c
+@@ -188,7 +188,7 @@ static int appleir_raw_event(struct hid_device *hid, struct hid_report *report,
+ static const u8 flatbattery[] = { 0x25, 0x87, 0xe0 };
+ unsigned long flags;
+
+- if (len != 5)
++ if (len != 5 || !(hid->claimed & HID_CLAIMED_INPUT))
+ goto out;
+
+ if (!memcmp(data, keydown, sizeof(keydown))) {
+diff --git a/drivers/hid/hid-google-hammer.c b/drivers/hid/hid-google-hammer.c
+index 22683ec819aaca..646ba5b92e0b2a 100644
+--- a/drivers/hid/hid-google-hammer.c
++++ b/drivers/hid/hid-google-hammer.c
+@@ -268,11 +268,13 @@ static void cbas_ec_remove(struct platform_device *pdev)
+ mutex_unlock(&cbas_ec_reglock);
+ }
+
++#ifdef CONFIG_ACPI
+ static const struct acpi_device_id cbas_ec_acpi_ids[] = {
+ { "GOOG000B", 0 },
+ { }
+ };
+ MODULE_DEVICE_TABLE(acpi, cbas_ec_acpi_ids);
++#endif
+
+ #ifdef CONFIG_OF
+ static const struct of_device_id cbas_ec_of_match[] = {
+diff --git a/drivers/hid/hid-steam.c b/drivers/hid/hid-steam.c
+index 7b359668987854..19b7bb0c3d7f99 100644
+--- a/drivers/hid/hid-steam.c
++++ b/drivers/hid/hid-steam.c
+@@ -1327,11 +1327,11 @@ static void steam_remove(struct hid_device *hdev)
+ return;
+ }
+
++ hid_destroy_device(steam->client_hdev);
+ cancel_delayed_work_sync(&steam->mode_switch);
+ cancel_work_sync(&steam->work_connect);
+ cancel_work_sync(&steam->rumble_work);
+ cancel_work_sync(&steam->unregister_work);
+- hid_destroy_device(steam->client_hdev);
+ steam->client_hdev = NULL;
+ steam->client_opened = 0;
+ if (steam->quirks & STEAM_QUIRK_WIRELESS) {
+diff --git a/drivers/hid/intel-ish-hid/ishtp-hid-client.c b/drivers/hid/intel-ish-hid/ishtp-hid-client.c
+index fbd4f8ea1951b8..af6a5afc1a93e9 100644
+--- a/drivers/hid/intel-ish-hid/ishtp-hid-client.c
++++ b/drivers/hid/intel-ish-hid/ishtp-hid-client.c
+@@ -833,9 +833,9 @@ static void hid_ishtp_cl_remove(struct ishtp_cl_device *cl_device)
+ hid_ishtp_cl);
+
+ dev_dbg(ishtp_device(cl_device), "%s\n", __func__);
+- hid_ishtp_cl_deinit(hid_ishtp_cl);
+ ishtp_put_device(cl_device);
+ ishtp_hid_remove(client_data);
++ hid_ishtp_cl_deinit(hid_ishtp_cl);
+
+ hid_ishtp_cl = NULL;
+
+diff --git a/drivers/hid/intel-ish-hid/ishtp-hid.c b/drivers/hid/intel-ish-hid/ishtp-hid.c
+index 00c6f0ebf35633..be2c62fc8251d7 100644
+--- a/drivers/hid/intel-ish-hid/ishtp-hid.c
++++ b/drivers/hid/intel-ish-hid/ishtp-hid.c
+@@ -261,12 +261,14 @@ int ishtp_hid_probe(unsigned int cur_hid_dev,
+ */
+ void ishtp_hid_remove(struct ishtp_cl_data *client_data)
+ {
++ void *data;
+ int i;
+
+ for (i = 0; i < client_data->num_hid_devices; ++i) {
+ if (client_data->hid_sensor_hubs[i]) {
+- kfree(client_data->hid_sensor_hubs[i]->driver_data);
++ data = client_data->hid_sensor_hubs[i]->driver_data;
+ hid_destroy_device(client_data->hid_sensor_hubs[i]);
++ kfree(data);
+ client_data->hid_sensor_hubs[i] = NULL;
+ }
+ }
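The ishtp-hid hunk above is a teardown-ordering fix: hid_destroy_device() can still reach driver_data while tearing the device down, so freeing that memory first was a use-after-free; the pointer is now saved and freed only afterwards. The same pattern reduced to a runnable sketch with stand-in names:

    #include <stdlib.h>
    #include <string.h>

    struct toy_hid { char *driver_data; };

    /* Teardown may still dereference driver_data before the device dies. */
    static void destroy_device(struct toy_hid *h)
    {
        if (h->driver_data)
            h->driver_data[0] = '\0';   /* last touch during teardown */
        free(h);
    }

    static void remove_one(struct toy_hid *h)
    {
        void *data = h->driver_data;    /* save the pointer first */
        destroy_device(h);              /* freeing data before this is a UAF */
        free(data);                     /* now safe */
    }

    int main(void)
    {
        struct toy_hid *h = calloc(1, sizeof(*h));
        h->driver_data = strdup("payload");
        remove_one(h);
        return 0;
    }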
+diff --git a/drivers/hwmon/ad7314.c b/drivers/hwmon/ad7314.c
+index 7802bbf5f9587f..59424103f6348a 100644
+--- a/drivers/hwmon/ad7314.c
++++ b/drivers/hwmon/ad7314.c
+@@ -22,11 +22,13 @@
+ */
+ #define AD7314_TEMP_MASK 0x7FE0
+ #define AD7314_TEMP_SHIFT 5
++#define AD7314_LEADING_ZEROS_MASK BIT(15)
+
+ /*
+ * ADT7301 and ADT7302 temperature masks
+ */
+ #define ADT7301_TEMP_MASK 0x3FFF
++#define ADT7301_LEADING_ZEROS_MASK (BIT(15) | BIT(14))
+
+ enum ad7314_variant {
+ adt7301,
+@@ -65,12 +67,20 @@ static ssize_t ad7314_temperature_show(struct device *dev,
+ return ret;
+ switch (spi_get_device_id(chip->spi_dev)->driver_data) {
+ case ad7314:
++ if (ret & AD7314_LEADING_ZEROS_MASK) {
++ /* Invalid read-out, leading zero part is missing */
++ return -EIO;
++ }
+ data = (ret & AD7314_TEMP_MASK) >> AD7314_TEMP_SHIFT;
+ data = sign_extend32(data, 9);
+
+ return sprintf(buf, "%d\n", 250 * data);
+ case adt7301:
+ case adt7302:
++ if (ret & ADT7301_LEADING_ZEROS_MASK) {
++ /* Invalid read-out, leading zero part is missing */
++ return -EIO;
++ }
+ /*
+ * Documented as a 13 bit twos complement register
+ * with a sign bit - which is a 14 bit 2's complement
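For context on the ad7314 checks being added: the bits above each temperature field always read back as zero on these parts, so a frame with any of them set indicates a corrupt SPI transfer and is now rejected with -EIO before sign extension. The kernel's sign_extend32() used just below behaves like this minimal version (illustrative; like the kernel, it assumes arithmetic right shift of signed values):

    #include <stdint.h>
    #include <stdio.h>

    /* Sign-extend from bit `index`, the 0-based MSB of the field. */
    static int32_t sign_extend32(uint32_t value, int index)
    {
        uint8_t shift = 31 - index;
        return (int32_t)(value << shift) >> shift;
    }

    int main(void)
    {
        /* 10-bit two's-complement field: 0x3FF == -1 */
        printf("%d\n", sign_extend32(0x3FF, 9));   /* prints -1 */
        return 0;
    }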
+diff --git a/drivers/hwmon/ntc_thermistor.c b/drivers/hwmon/ntc_thermistor.c
+index b5352900463fb9..0d29c8f97ba7c2 100644
+--- a/drivers/hwmon/ntc_thermistor.c
++++ b/drivers/hwmon/ntc_thermistor.c
+@@ -181,40 +181,40 @@ static const struct ntc_compensation ncpXXwf104[] = {
+ };
+
+ static const struct ntc_compensation ncpXXxh103[] = {
+- { .temp_c = -40, .ohm = 247565 },
+- { .temp_c = -35, .ohm = 181742 },
+- { .temp_c = -30, .ohm = 135128 },
+- { .temp_c = -25, .ohm = 101678 },
+- { .temp_c = -20, .ohm = 77373 },
+- { .temp_c = -15, .ohm = 59504 },
+- { .temp_c = -10, .ohm = 46222 },
+- { .temp_c = -5, .ohm = 36244 },
+- { .temp_c = 0, .ohm = 28674 },
+- { .temp_c = 5, .ohm = 22878 },
+- { .temp_c = 10, .ohm = 18399 },
+- { .temp_c = 15, .ohm = 14910 },
+- { .temp_c = 20, .ohm = 12169 },
++ { .temp_c = -40, .ohm = 195652 },
++ { .temp_c = -35, .ohm = 148171 },
++ { .temp_c = -30, .ohm = 113347 },
++ { .temp_c = -25, .ohm = 87559 },
++ { .temp_c = -20, .ohm = 68237 },
++ { .temp_c = -15, .ohm = 53650 },
++ { .temp_c = -10, .ohm = 42506 },
++ { .temp_c = -5, .ohm = 33892 },
++ { .temp_c = 0, .ohm = 27219 },
++ { .temp_c = 5, .ohm = 22021 },
++ { .temp_c = 10, .ohm = 17926 },
++ { .temp_c = 15, .ohm = 14674 },
++ { .temp_c = 20, .ohm = 12081 },
+ { .temp_c = 25, .ohm = 10000 },
+- { .temp_c = 30, .ohm = 8271 },
+- { .temp_c = 35, .ohm = 6883 },
+- { .temp_c = 40, .ohm = 5762 },
+- { .temp_c = 45, .ohm = 4851 },
+- { .temp_c = 50, .ohm = 4105 },
+- { .temp_c = 55, .ohm = 3492 },
+- { .temp_c = 60, .ohm = 2985 },
+- { .temp_c = 65, .ohm = 2563 },
+- { .temp_c = 70, .ohm = 2211 },
+- { .temp_c = 75, .ohm = 1915 },
+- { .temp_c = 80, .ohm = 1666 },
+- { .temp_c = 85, .ohm = 1454 },
+- { .temp_c = 90, .ohm = 1275 },
+- { .temp_c = 95, .ohm = 1121 },
+- { .temp_c = 100, .ohm = 990 },
+- { .temp_c = 105, .ohm = 876 },
+- { .temp_c = 110, .ohm = 779 },
+- { .temp_c = 115, .ohm = 694 },
+- { .temp_c = 120, .ohm = 620 },
+- { .temp_c = 125, .ohm = 556 },
++ { .temp_c = 30, .ohm = 8315 },
++ { .temp_c = 35, .ohm = 6948 },
++ { .temp_c = 40, .ohm = 5834 },
++ { .temp_c = 45, .ohm = 4917 },
++ { .temp_c = 50, .ohm = 4161 },
++ { .temp_c = 55, .ohm = 3535 },
++ { .temp_c = 60, .ohm = 3014 },
++ { .temp_c = 65, .ohm = 2586 },
++ { .temp_c = 70, .ohm = 2228 },
++ { .temp_c = 75, .ohm = 1925 },
++ { .temp_c = 80, .ohm = 1669 },
++ { .temp_c = 85, .ohm = 1452 },
++ { .temp_c = 90, .ohm = 1268 },
++ { .temp_c = 95, .ohm = 1110 },
++ { .temp_c = 100, .ohm = 974 },
++ { .temp_c = 105, .ohm = 858 },
++ { .temp_c = 110, .ohm = 758 },
++ { .temp_c = 115, .ohm = 672 },
++ { .temp_c = 120, .ohm = 596 },
++ { .temp_c = 125, .ohm = 531 },
+ };
+
+ /*
+diff --git a/drivers/hwmon/peci/dimmtemp.c b/drivers/hwmon/peci/dimmtemp.c
+index 4a72e9712408e2..b7b09780c7b0a6 100644
+--- a/drivers/hwmon/peci/dimmtemp.c
++++ b/drivers/hwmon/peci/dimmtemp.c
+@@ -127,8 +127,6 @@ static int update_thresholds(struct peci_dimmtemp *priv, int dimm_no)
+ return 0;
+
+ ret = priv->gen_info->read_thresholds(priv, dimm_order, chan_rank, &data);
+- if (ret == -ENODATA) /* Use default or previous value */
+- return 0;
+ if (ret)
+ return ret;
+
+@@ -509,11 +507,11 @@ read_thresholds_icx(struct peci_dimmtemp *priv, int dimm_order, int chan_rank, u
+
+ ret = peci_ep_pci_local_read(priv->peci_dev, 0, 13, 0, 2, 0xd4, &reg_val);
+ if (ret || !(reg_val & BIT(31)))
+- return -ENODATA; /* Use default or previous value */
++ return -ENODATA;
+
+ ret = peci_ep_pci_local_read(priv->peci_dev, 0, 13, 0, 2, 0xd0, &reg_val);
+ if (ret)
+- return -ENODATA; /* Use default or previous value */
++ return -ENODATA;
+
+ /*
+ * Device 26, Offset 224e0: IMC 0 channel 0 -> rank 0
+@@ -546,11 +544,11 @@ read_thresholds_spr(struct peci_dimmtemp *priv, int dimm_order, int chan_rank, u
+
+ ret = peci_ep_pci_local_read(priv->peci_dev, 0, 30, 0, 2, 0xd4, &reg_val);
+ if (ret || !(reg_val & BIT(31)))
+- return -ENODATA; /* Use default or previous value */
++ return -ENODATA;
+
+ ret = peci_ep_pci_local_read(priv->peci_dev, 0, 30, 0, 2, 0xd0, &reg_val);
+ if (ret)
+- return -ENODATA; /* Use default or previous value */
++ return -ENODATA;
+
+ /*
+ * Device 26, Offset 219a8: IMC 0 channel 0 -> rank 0
+diff --git a/drivers/hwmon/pmbus/pmbus.c b/drivers/hwmon/pmbus/pmbus.c
+index ec40c5c599543a..59424dc518c8f9 100644
+--- a/drivers/hwmon/pmbus/pmbus.c
++++ b/drivers/hwmon/pmbus/pmbus.c
+@@ -103,6 +103,8 @@ static int pmbus_identify(struct i2c_client *client,
+ if (pmbus_check_byte_register(client, 0, PMBUS_PAGE)) {
+ int page;
+
++ info->pages = PMBUS_PAGES;
++
+ for (page = 1; page < PMBUS_PAGES; page++) {
+ if (pmbus_set_page(client, page, 0xff) < 0)
+ break;
+diff --git a/drivers/hwmon/xgene-hwmon.c b/drivers/hwmon/xgene-hwmon.c
+index 5e0759a70f6d51..92d82faf237fcf 100644
+--- a/drivers/hwmon/xgene-hwmon.c
++++ b/drivers/hwmon/xgene-hwmon.c
+@@ -706,7 +706,7 @@ static int xgene_hwmon_probe(struct platform_device *pdev)
+ goto out;
+ }
+
+- if (!ctx->pcc_comm_addr) {
++ if (IS_ERR_OR_NULL(ctx->pcc_comm_addr)) {
+ dev_err(&pdev->dev,
+ "Failed to ioremap PCC comm region\n");
+ rc = -ENOMEM;
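The xgene-hwmon fix works because the PCC mapping helper can fail in two ways on different firmware paths, yielding either NULL or an ERR_PTR()-encoded errno; testing only for NULL let error pointers through to be dereferenced. The kernel's IS_ERR_OR_NULL() logic, restated as a small standalone program:

    #include <stdio.h>

    #define MAX_ERRNO 4095
    #define IS_ERR_VALUE(x) ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

    /* Minimal re-statement of the kernel's IS_ERR_OR_NULL() idea:
     * error pointers live in the top MAX_ERRNO bytes of the address space. */
    static int is_err_or_null(const void *p)
    {
        return !p || IS_ERR_VALUE(p);
    }

    int main(void)
    {
        printf("%d %d %d\n",
               is_err_or_null(NULL),
               is_err_or_null((void *)-12L),   /* like ERR_PTR(-ENOMEM) */
               is_err_or_null("ok"));          /* 1 1 0 */
        return 0;
    }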
+diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
+index 0d7b9839e5b663..6bb6af0f96fa5c 100644
+--- a/drivers/hwtracing/intel_th/pci.c
++++ b/drivers/hwtracing/intel_th/pci.c
+@@ -329,6 +329,21 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xa824),
+ .driver_data = (kernel_ulong_t)&intel_th_2x,
+ },
++ {
++ /* Arrow Lake */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x7724),
++ .driver_data = (kernel_ulong_t)&intel_th_2x,
++ },
++ {
++ /* Panther Lake-H */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xe324),
++ .driver_data = (kernel_ulong_t)&intel_th_2x,
++ },
++ {
++ /* Panther Lake-P/U */
++ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0xe424),
++ .driver_data = (kernel_ulong_t)&intel_th_2x,
++ },
+ {
+ /* Alder Lake CPU */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x466f),
+diff --git a/drivers/iio/adc/ad7192.c b/drivers/iio/adc/ad7192.c
+index 955e9eff0099e5..6fe32f866765bf 100644
+--- a/drivers/iio/adc/ad7192.c
++++ b/drivers/iio/adc/ad7192.c
+@@ -1082,7 +1082,7 @@ static int ad7192_update_scan_mode(struct iio_dev *indio_dev, const unsigned lon
+
+ conf &= ~AD7192_CONF_CHAN_MASK;
+ for_each_set_bit(i, scan_mask, 8)
+- conf |= FIELD_PREP(AD7192_CONF_CHAN_MASK, i);
++ conf |= FIELD_PREP(AD7192_CONF_CHAN_MASK, BIT(i));
+
+ ret = ad_sd_write_reg(&st->sd, AD7192_REG_CONF, 3, conf);
+ if (ret < 0)
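The ad7192 one-liner deserves a gloss: the CONF channel field is a bitmask with one enable bit per channel, so each bit set in scan_mask must contribute BIT(i); FIELD_PREP(mask, i) instead encoded the channel number into the field, and OR-ing several numbers together corrupts the selection. The difference, demonstrated with FIELD_PREP reduced to its essence (the mask placement is illustrative, not the real register layout):

    #include <stdio.h>

    #define BIT(n)           (1u << (n))
    #define CHAN_MASK        (0xFFu << 8)
    #define FIELD_PREP(m, v) (((v) << __builtin_ctz(m)) & (m))

    int main(void)
    {
        unsigned conf_bad = 0, conf_good = 0;
        unsigned scan_mask = BIT(1) | BIT(3);   /* channels 1 and 3 */

        for (int i = 0; i < 8; i++) {
            if (!(scan_mask & BIT(i)))
                continue;
            conf_bad  |= FIELD_PREP(CHAN_MASK, i);      /* 0x300: looks like "3", channel 1 lost */
            conf_good |= FIELD_PREP(CHAN_MASK, BIT(i)); /* 0xa00: enable bits 1 and 3 */
        }
        printf("bad=%#x good=%#x\n", conf_bad, conf_good);
        return 0;
    }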
+diff --git a/drivers/iio/adc/at91-sama5d2_adc.c b/drivers/iio/adc/at91-sama5d2_adc.c
+index d7fd21e7c6e2a6..3618e769b10654 100644
+--- a/drivers/iio/adc/at91-sama5d2_adc.c
++++ b/drivers/iio/adc/at91-sama5d2_adc.c
+@@ -329,7 +329,7 @@ static const struct at91_adc_reg_layout sama7g5_layout = {
+ #define AT91_HWFIFO_MAX_SIZE_STR "128"
+ #define AT91_HWFIFO_MAX_SIZE 128
+
+-#define AT91_SAMA5D2_CHAN_SINGLE(index, num, addr) \
++#define AT91_SAMA_CHAN_SINGLE(index, num, addr, rbits) \
+ { \
+ .type = IIO_VOLTAGE, \
+ .channel = num, \
+@@ -337,7 +337,7 @@ static const struct at91_adc_reg_layout sama7g5_layout = {
+ .scan_index = index, \
+ .scan_type = { \
+ .sign = 'u', \
+- .realbits = 14, \
++ .realbits = rbits, \
+ .storagebits = 16, \
+ }, \
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
+@@ -350,7 +350,13 @@ static const struct at91_adc_reg_layout sama7g5_layout = {
+ .indexed = 1, \
+ }
+
+-#define AT91_SAMA5D2_CHAN_DIFF(index, num, num2, addr) \
++#define AT91_SAMA5D2_CHAN_SINGLE(index, num, addr) \
++ AT91_SAMA_CHAN_SINGLE(index, num, addr, 14)
++
++#define AT91_SAMA7G5_CHAN_SINGLE(index, num, addr) \
++ AT91_SAMA_CHAN_SINGLE(index, num, addr, 16)
++
++#define AT91_SAMA_CHAN_DIFF(index, num, num2, addr, rbits) \
+ { \
+ .type = IIO_VOLTAGE, \
+ .differential = 1, \
+@@ -360,7 +366,7 @@ static const struct at91_adc_reg_layout sama7g5_layout = {
+ .scan_index = index, \
+ .scan_type = { \
+ .sign = 's', \
+- .realbits = 14, \
++ .realbits = rbits, \
+ .storagebits = 16, \
+ }, \
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
+@@ -373,6 +379,12 @@ static const struct at91_adc_reg_layout sama7g5_layout = {
+ .indexed = 1, \
+ }
+
++#define AT91_SAMA5D2_CHAN_DIFF(index, num, num2, addr) \
++ AT91_SAMA_CHAN_DIFF(index, num, num2, addr, 14)
++
++#define AT91_SAMA7G5_CHAN_DIFF(index, num, num2, addr) \
++ AT91_SAMA_CHAN_DIFF(index, num, num2, addr, 16)
++
+ #define AT91_SAMA5D2_CHAN_TOUCH(num, name, mod) \
+ { \
+ .type = IIO_POSITIONRELATIVE, \
+@@ -666,30 +678,30 @@ static const struct iio_chan_spec at91_sama5d2_adc_channels[] = {
+ };
+
+ static const struct iio_chan_spec at91_sama7g5_adc_channels[] = {
+- AT91_SAMA5D2_CHAN_SINGLE(0, 0, 0x60),
+- AT91_SAMA5D2_CHAN_SINGLE(1, 1, 0x64),
+- AT91_SAMA5D2_CHAN_SINGLE(2, 2, 0x68),
+- AT91_SAMA5D2_CHAN_SINGLE(3, 3, 0x6c),
+- AT91_SAMA5D2_CHAN_SINGLE(4, 4, 0x70),
+- AT91_SAMA5D2_CHAN_SINGLE(5, 5, 0x74),
+- AT91_SAMA5D2_CHAN_SINGLE(6, 6, 0x78),
+- AT91_SAMA5D2_CHAN_SINGLE(7, 7, 0x7c),
+- AT91_SAMA5D2_CHAN_SINGLE(8, 8, 0x80),
+- AT91_SAMA5D2_CHAN_SINGLE(9, 9, 0x84),
+- AT91_SAMA5D2_CHAN_SINGLE(10, 10, 0x88),
+- AT91_SAMA5D2_CHAN_SINGLE(11, 11, 0x8c),
+- AT91_SAMA5D2_CHAN_SINGLE(12, 12, 0x90),
+- AT91_SAMA5D2_CHAN_SINGLE(13, 13, 0x94),
+- AT91_SAMA5D2_CHAN_SINGLE(14, 14, 0x98),
+- AT91_SAMA5D2_CHAN_SINGLE(15, 15, 0x9c),
+- AT91_SAMA5D2_CHAN_DIFF(16, 0, 1, 0x60),
+- AT91_SAMA5D2_CHAN_DIFF(17, 2, 3, 0x68),
+- AT91_SAMA5D2_CHAN_DIFF(18, 4, 5, 0x70),
+- AT91_SAMA5D2_CHAN_DIFF(19, 6, 7, 0x78),
+- AT91_SAMA5D2_CHAN_DIFF(20, 8, 9, 0x80),
+- AT91_SAMA5D2_CHAN_DIFF(21, 10, 11, 0x88),
+- AT91_SAMA5D2_CHAN_DIFF(22, 12, 13, 0x90),
+- AT91_SAMA5D2_CHAN_DIFF(23, 14, 15, 0x98),
++ AT91_SAMA7G5_CHAN_SINGLE(0, 0, 0x60),
++ AT91_SAMA7G5_CHAN_SINGLE(1, 1, 0x64),
++ AT91_SAMA7G5_CHAN_SINGLE(2, 2, 0x68),
++ AT91_SAMA7G5_CHAN_SINGLE(3, 3, 0x6c),
++ AT91_SAMA7G5_CHAN_SINGLE(4, 4, 0x70),
++ AT91_SAMA7G5_CHAN_SINGLE(5, 5, 0x74),
++ AT91_SAMA7G5_CHAN_SINGLE(6, 6, 0x78),
++ AT91_SAMA7G5_CHAN_SINGLE(7, 7, 0x7c),
++ AT91_SAMA7G5_CHAN_SINGLE(8, 8, 0x80),
++ AT91_SAMA7G5_CHAN_SINGLE(9, 9, 0x84),
++ AT91_SAMA7G5_CHAN_SINGLE(10, 10, 0x88),
++ AT91_SAMA7G5_CHAN_SINGLE(11, 11, 0x8c),
++ AT91_SAMA7G5_CHAN_SINGLE(12, 12, 0x90),
++ AT91_SAMA7G5_CHAN_SINGLE(13, 13, 0x94),
++ AT91_SAMA7G5_CHAN_SINGLE(14, 14, 0x98),
++ AT91_SAMA7G5_CHAN_SINGLE(15, 15, 0x9c),
++ AT91_SAMA7G5_CHAN_DIFF(16, 0, 1, 0x60),
++ AT91_SAMA7G5_CHAN_DIFF(17, 2, 3, 0x68),
++ AT91_SAMA7G5_CHAN_DIFF(18, 4, 5, 0x70),
++ AT91_SAMA7G5_CHAN_DIFF(19, 6, 7, 0x78),
++ AT91_SAMA7G5_CHAN_DIFF(20, 8, 9, 0x80),
++ AT91_SAMA7G5_CHAN_DIFF(21, 10, 11, 0x88),
++ AT91_SAMA7G5_CHAN_DIFF(22, 12, 13, 0x90),
++ AT91_SAMA7G5_CHAN_DIFF(23, 14, 15, 0x98),
+ IIO_CHAN_SOFT_TIMESTAMP(24),
+ AT91_SAMA5D2_CHAN_TEMP(AT91_SAMA7G5_ADC_TEMP_CHANNEL, "temp", 0xdc),
+ };
+diff --git a/drivers/iio/dac/ad3552r.c b/drivers/iio/dac/ad3552r.c
+index 7d61b2fe662436..390d3fab21478f 100644
+--- a/drivers/iio/dac/ad3552r.c
++++ b/drivers/iio/dac/ad3552r.c
+@@ -714,6 +714,12 @@ static int ad3552r_reset(struct ad3552r_desc *dac)
+ return ret;
+ }
+
++ /* Clear reset error flag, see ad3552r manual, rev B table 38. */
++ ret = ad3552r_write_reg(dac, AD3552R_REG_ADDR_ERR_STATUS,
++ AD3552R_MASK_RESET_STATUS);
++ if (ret)
++ return ret;
++
+ return ad3552r_update_reg_field(dac,
+ addr_mask_map[AD3552R_ADDR_ASCENSION][0],
+ addr_mask_map[AD3552R_ADDR_ASCENSION][1],
+diff --git a/drivers/iio/filter/admv8818.c b/drivers/iio/filter/admv8818.c
+index 848baa6e3bbf5d..d85b7d3de86604 100644
+--- a/drivers/iio/filter/admv8818.c
++++ b/drivers/iio/filter/admv8818.c
+@@ -574,21 +574,15 @@ static int admv8818_init(struct admv8818_state *st)
+ struct spi_device *spi = st->spi;
+ unsigned int chip_id;
+
+- ret = regmap_update_bits(st->regmap, ADMV8818_REG_SPI_CONFIG_A,
+- ADMV8818_SOFTRESET_N_MSK |
+- ADMV8818_SOFTRESET_MSK,
+- FIELD_PREP(ADMV8818_SOFTRESET_N_MSK, 1) |
+- FIELD_PREP(ADMV8818_SOFTRESET_MSK, 1));
++ ret = regmap_write(st->regmap, ADMV8818_REG_SPI_CONFIG_A,
++ ADMV8818_SOFTRESET_N_MSK | ADMV8818_SOFTRESET_MSK);
+ if (ret) {
+ dev_err(&spi->dev, "ADMV8818 Soft Reset failed.\n");
+ return ret;
+ }
+
+- ret = regmap_update_bits(st->regmap, ADMV8818_REG_SPI_CONFIG_A,
+- ADMV8818_SDOACTIVE_N_MSK |
+- ADMV8818_SDOACTIVE_MSK,
+- FIELD_PREP(ADMV8818_SDOACTIVE_N_MSK, 1) |
+- FIELD_PREP(ADMV8818_SDOACTIVE_MSK, 1));
++ ret = regmap_write(st->regmap, ADMV8818_REG_SPI_CONFIG_A,
++ ADMV8818_SDOACTIVE_N_MSK | ADMV8818_SDOACTIVE_MSK);
+ if (ret) {
+ dev_err(&spi->dev, "ADMV8818 SDO Enable failed.\n");
+ return ret;
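Note why regmap_update_bits() became regmap_write() in the admv8818 hunks: update_bits is a read-modify-write, and reading SPI_CONFIG_A while the part is being soft-reset, or before SDO is active, is exactly what cannot be trusted, so the full register value is now written outright. The distinction in a toy register model (hypothetical helpers, not the regmap API):

    #include <stdint.h>
    #include <stdio.h>

    static uint8_t regs[256];

    static void reg_write(uint8_t addr, uint8_t val)
    {
        regs[addr] = val;               /* no readback involved */
    }

    /* Read-modify-write: only safe when the readback is trustworthy. */
    static void reg_update_bits(uint8_t addr, uint8_t mask, uint8_t val)
    {
        reg_write(addr, (uint8_t)((regs[addr] & ~mask) | (val & mask)));
    }

    int main(void)
    {
        reg_write(0x00, 0x81);              /* whole-register write */
        reg_update_bits(0x00, 0x18, 0x18);  /* RMW on a readable register */
        printf("%#x\n", regs[0x00]);        /* 0x99 */
        return 0;
    }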
+diff --git a/drivers/iio/light/apds9306.c b/drivers/iio/light/apds9306.c
+index 079e02be100521..7f9d6cac8adb72 100644
+--- a/drivers/iio/light/apds9306.c
++++ b/drivers/iio/light/apds9306.c
+@@ -108,11 +108,11 @@ static const struct part_id_gts_multiplier apds9306_gts_mul[] = {
+ {
+ .part_id = 0xB1,
+ .max_scale_int = 16,
+- .max_scale_nano = 3264320,
++ .max_scale_nano = 326432000,
+ }, {
+ .part_id = 0xB3,
+ .max_scale_int = 14,
+- .max_scale_nano = 9712000,
++ .max_scale_nano = 97120000,
+ },
+ };
+
+diff --git a/drivers/misc/cardreader/rtsx_usb.c b/drivers/misc/cardreader/rtsx_usb.c
+index 285a748748d701..f150d8769f1986 100644
+--- a/drivers/misc/cardreader/rtsx_usb.c
++++ b/drivers/misc/cardreader/rtsx_usb.c
+@@ -286,7 +286,6 @@ static int rtsx_usb_get_status_with_bulk(struct rtsx_ucr *ucr, u16 *status)
+ int rtsx_usb_get_card_status(struct rtsx_ucr *ucr, u16 *status)
+ {
+ int ret;
+- u8 interrupt_val = 0;
+ u16 *buf;
+
+ if (!status)
+@@ -309,20 +308,6 @@ int rtsx_usb_get_card_status(struct rtsx_ucr *ucr, u16 *status)
+ ret = rtsx_usb_get_status_with_bulk(ucr, status);
+ }
+
+- rtsx_usb_read_register(ucr, CARD_INT_PEND, &interrupt_val);
+- /* Cross check presence with interrupts */
+- if (*status & XD_CD)
+- if (!(interrupt_val & XD_INT))
+- *status &= ~XD_CD;
+-
+- if (*status & SD_CD)
+- if (!(interrupt_val & SD_INT))
+- *status &= ~SD_CD;
+-
+- if (*status & MS_CD)
+- if (!(interrupt_val & MS_INT))
+- *status &= ~MS_CD;
+-
+ /* usb_control_msg may return positive when success */
+ if (ret < 0)
+ return ret;
+diff --git a/drivers/misc/eeprom/digsy_mtc_eeprom.c b/drivers/misc/eeprom/digsy_mtc_eeprom.c
+index 88888485e6f8eb..ee58f7ce5bfa98 100644
+--- a/drivers/misc/eeprom/digsy_mtc_eeprom.c
++++ b/drivers/misc/eeprom/digsy_mtc_eeprom.c
+@@ -50,7 +50,7 @@ static struct platform_device digsy_mtc_eeprom = {
+ };
+
+ static struct gpiod_lookup_table eeprom_spi_gpiod_table = {
+- .dev_id = "spi_gpio",
++ .dev_id = "spi_gpio.1",
+ .table = {
+ GPIO_LOOKUP("gpio@b00", GPIO_EEPROM_CLK,
+ "sck", GPIO_ACTIVE_HIGH),
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index c3a6657dcd4a29..a5f88ec97df753 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -117,6 +117,8 @@
+
+ #define MEI_DEV_ID_LNL_M 0xA870 /* Lunar Lake Point M */
+
++#define MEI_DEV_ID_PTL_P 0xE470 /* Panther Lake P */
++
+ /*
+ * MEI HW Section
+ */
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 6589635f8ba32b..d6ff9d82ae94b3 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -124,6 +124,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+
+ {MEI_PCI_DEVICE(MEI_DEV_ID_LNL_M, MEI_ME_PCH15_CFG)},
+
++ {MEI_PCI_DEVICE(MEI_DEV_ID_PTL_P, MEI_ME_PCH15_CFG)},
++
+ /* required last entry */
+ {0, }
+ };
+diff --git a/drivers/misc/mei/vsc-tp.c b/drivers/misc/mei/vsc-tp.c
+index 1618cca9a7317f..ef0a9f423c8f8d 100644
+--- a/drivers/misc/mei/vsc-tp.c
++++ b/drivers/misc/mei/vsc-tp.c
+@@ -504,7 +504,7 @@ static int vsc_tp_probe(struct spi_device *spi)
+ if (ret)
+ return ret;
+
+- tp->wakeuphost = devm_gpiod_get(dev, "wakeuphost", GPIOD_IN);
++ tp->wakeuphost = devm_gpiod_get(dev, "wakeuphostint", GPIOD_IN);
+ if (IS_ERR(tp->wakeuphost))
+ return PTR_ERR(tp->wakeuphost);
+
+diff --git a/drivers/net/caif/caif_virtio.c b/drivers/net/caif/caif_virtio.c
+index 7fea00c7ca8a6a..c60386bf2d1a4a 100644
+--- a/drivers/net/caif/caif_virtio.c
++++ b/drivers/net/caif/caif_virtio.c
+@@ -745,7 +745,7 @@ static int cfv_probe(struct virtio_device *vdev)
+
+ if (cfv->vr_rx)
+ vdev->vringh_config->del_vrhs(cfv->vdev);
+- if (cfv->vdev)
++ if (cfv->vq_tx)
+ vdev->config->del_vqs(cfv->vdev);
+ free_netdev(netdev);
+ return err;
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index d84ee1b419a614..abc979fbb45d18 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -2590,7 +2590,8 @@ mt7531_setup_common(struct dsa_switch *ds)
+ if (ret < 0)
+ return ret;
+
+- return 0;
++ /* Setup VLAN ID 0 for VLAN-unaware bridges */
++ return mt7530_setup_vlan0(priv);
+ }
+
+ static int
+@@ -2686,11 +2687,6 @@ mt7531_setup(struct dsa_switch *ds)
+ if (ret)
+ return ret;
+
+- /* Setup VLAN ID 0 for VLAN-unaware bridges */
+- ret = mt7530_setup_vlan0(priv);
+- if (ret)
+- return ret;
+-
+ ds->assisted_learning_on_cpu_port = true;
+ ds->mtu_enforcement_ingress = true;
+
+diff --git a/drivers/net/ethernet/emulex/benet/be.h b/drivers/net/ethernet/emulex/benet/be.h
+index e48b861e4ce15d..270ff9aab3352b 100644
+--- a/drivers/net/ethernet/emulex/benet/be.h
++++ b/drivers/net/ethernet/emulex/benet/be.h
+@@ -562,7 +562,7 @@ struct be_adapter {
+ struct be_dma_mem mbox_mem_alloced;
+
+ struct be_mcc_obj mcc_obj;
+- struct mutex mcc_lock; /* For serializing mcc cmds to BE card */
++ spinlock_t mcc_lock; /* For serializing mcc cmds to BE card */
+ spinlock_t mcc_cq_lock;
+
+ u16 cfg_num_rx_irqs; /* configured via set-channels */
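The long run of benet hunks that follows converts mcc_lock from a mutex into a spinlock taken with the _bh variants, making the MCC command path callable from contexts that must not sleep; that is also why the completion poll switches from usleep_range(500, 1000) to udelay(100), with the loop bound raised from 12000 to 120000 so the budget stays at 120000 * 100 us = 12 s. The locking discipline in sketch form, with a pthread spinlock standing in for spin_lock_bh():

    #include <pthread.h>

    static pthread_spinlock_t mcc_lock;

    /* Under a spinlock nothing may sleep: only short busy-wait delays
     * (udelay-style) between lock and unlock, never usleep or mutexes. */
    static void issue_mcc_cmd(void)
    {
        pthread_spin_lock(&mcc_lock);
        /* ... post the work request, atomic bookkeeping only ... */
        pthread_spin_unlock(&mcc_lock);
    }

    int main(void)
    {
        pthread_spin_init(&mcc_lock, PTHREAD_PROCESS_PRIVATE);
        issue_mcc_cmd();
        pthread_spin_destroy(&mcc_lock);
        return 0;
    }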
+diff --git a/drivers/net/ethernet/emulex/benet/be_cmds.c b/drivers/net/ethernet/emulex/benet/be_cmds.c
+index 61adcebeef0107..51b8377edd1d04 100644
+--- a/drivers/net/ethernet/emulex/benet/be_cmds.c
++++ b/drivers/net/ethernet/emulex/benet/be_cmds.c
+@@ -575,7 +575,7 @@ int be_process_mcc(struct be_adapter *adapter)
+ /* Wait till no more pending mcc requests are present */
+ static int be_mcc_wait_compl(struct be_adapter *adapter)
+ {
+-#define mcc_timeout 12000 /* 12s timeout */
++#define mcc_timeout 120000 /* 12s timeout */
+ int i, status = 0;
+ struct be_mcc_obj *mcc_obj = &adapter->mcc_obj;
+
+@@ -589,7 +589,7 @@ static int be_mcc_wait_compl(struct be_adapter *adapter)
+
+ if (atomic_read(&mcc_obj->q.used) == 0)
+ break;
+- usleep_range(500, 1000);
++ udelay(100);
+ }
+ if (i == mcc_timeout) {
+ dev_err(&adapter->pdev->dev, "FW not responding\n");
+@@ -866,7 +866,7 @@ static bool use_mcc(struct be_adapter *adapter)
+ static int be_cmd_lock(struct be_adapter *adapter)
+ {
+ if (use_mcc(adapter)) {
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+ return 0;
+ } else {
+ return mutex_lock_interruptible(&adapter->mbox_lock);
+@@ -877,7 +877,7 @@ static int be_cmd_lock(struct be_adapter *adapter)
+ static void be_cmd_unlock(struct be_adapter *adapter)
+ {
+ if (use_mcc(adapter))
+- return mutex_unlock(&adapter->mcc_lock);
++ return spin_unlock_bh(&adapter->mcc_lock);
+ else
+ return mutex_unlock(&adapter->mbox_lock);
+ }
+@@ -1047,7 +1047,7 @@ int be_cmd_mac_addr_query(struct be_adapter *adapter, u8 *mac_addr,
+ struct be_cmd_req_mac_query *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -1076,7 +1076,7 @@ int be_cmd_mac_addr_query(struct be_adapter *adapter, u8 *mac_addr,
+ }
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -1088,7 +1088,7 @@ int be_cmd_pmac_add(struct be_adapter *adapter, const u8 *mac_addr,
+ struct be_cmd_req_pmac_add *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -1113,7 +1113,7 @@ int be_cmd_pmac_add(struct be_adapter *adapter, const u8 *mac_addr,
+ }
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+
+ if (base_status(status) == MCC_STATUS_UNAUTHORIZED_REQUEST)
+ status = -EPERM;
+@@ -1131,7 +1131,7 @@ int be_cmd_pmac_del(struct be_adapter *adapter, u32 if_id, int pmac_id, u32 dom)
+ if (pmac_id == -1)
+ return 0;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -1151,7 +1151,7 @@ int be_cmd_pmac_del(struct be_adapter *adapter, u32 if_id, int pmac_id, u32 dom)
+ status = be_mcc_notify_wait(adapter);
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -1414,7 +1414,7 @@ int be_cmd_rxq_create(struct be_adapter *adapter,
+ struct be_dma_mem *q_mem = &rxq->dma_mem;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -1444,7 +1444,7 @@ int be_cmd_rxq_create(struct be_adapter *adapter,
+ }
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -1508,7 +1508,7 @@ int be_cmd_rxq_destroy(struct be_adapter *adapter, struct be_queue_info *q)
+ struct be_cmd_req_q_destroy *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -1525,7 +1525,7 @@ int be_cmd_rxq_destroy(struct be_adapter *adapter, struct be_queue_info *q)
+ q->created = false;
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -1593,7 +1593,7 @@ int be_cmd_get_stats(struct be_adapter *adapter, struct be_dma_mem *nonemb_cmd)
+ struct be_cmd_req_hdr *hdr;
+ int status = 0;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -1621,7 +1621,7 @@ int be_cmd_get_stats(struct be_adapter *adapter, struct be_dma_mem *nonemb_cmd)
+ adapter->stats_cmd_sent = true;
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -1637,7 +1637,7 @@ int lancer_cmd_get_pport_stats(struct be_adapter *adapter,
+ CMD_SUBSYSTEM_ETH))
+ return -EPERM;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -1660,7 +1660,7 @@ int lancer_cmd_get_pport_stats(struct be_adapter *adapter,
+ adapter->stats_cmd_sent = true;
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -1697,7 +1697,7 @@ int be_cmd_link_status_query(struct be_adapter *adapter, u16 *link_speed,
+ struct be_cmd_req_link_status *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ if (link_status)
+ *link_status = LINK_DOWN;
+@@ -1736,7 +1736,7 @@ int be_cmd_link_status_query(struct be_adapter *adapter, u16 *link_speed,
+ }
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -1747,7 +1747,7 @@ int be_cmd_get_die_temperature(struct be_adapter *adapter)
+ struct be_cmd_req_get_cntl_addnl_attribs *req;
+ int status = 0;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -1762,7 +1762,7 @@ int be_cmd_get_die_temperature(struct be_adapter *adapter)
+
+ status = be_mcc_notify(adapter);
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -1811,7 +1811,7 @@ int be_cmd_get_fat_dump(struct be_adapter *adapter, u32 buf_len, void *buf)
+ if (!get_fat_cmd.va)
+ return -ENOMEM;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ while (total_size) {
+ buf_size = min(total_size, (u32)60 * 1024);
+@@ -1849,9 +1849,9 @@ int be_cmd_get_fat_dump(struct be_adapter *adapter, u32 buf_len, void *buf)
+ log_offset += buf_size;
+ }
+ err:
++ spin_unlock_bh(&adapter->mcc_lock);
+ dma_free_coherent(&adapter->pdev->dev, get_fat_cmd.size,
+ get_fat_cmd.va, get_fat_cmd.dma);
+- mutex_unlock(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -1862,7 +1862,7 @@ int be_cmd_get_fw_ver(struct be_adapter *adapter)
+ struct be_cmd_req_get_fw_version *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -1885,7 +1885,7 @@ int be_cmd_get_fw_ver(struct be_adapter *adapter)
+ sizeof(adapter->fw_on_flash));
+ }
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -1899,7 +1899,7 @@ static int __be_cmd_modify_eqd(struct be_adapter *adapter,
+ struct be_cmd_req_modify_eq_delay *req;
+ int status = 0, i;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -1922,7 +1922,7 @@ static int __be_cmd_modify_eqd(struct be_adapter *adapter,
+
+ status = be_mcc_notify(adapter);
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -1949,7 +1949,7 @@ int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
+ struct be_cmd_req_vlan_config *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -1971,7 +1971,7 @@ int be_cmd_vlan_config(struct be_adapter *adapter, u32 if_id, u16 *vtag_array,
+
+ status = be_mcc_notify_wait(adapter);
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -1982,7 +1982,7 @@ static int __be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 value)
+ struct be_cmd_req_rx_filter *req = mem->va;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -2015,7 +2015,7 @@ static int __be_cmd_rx_filter(struct be_adapter *adapter, u32 flags, u32 value)
+
+ status = be_mcc_notify_wait(adapter);
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -2046,7 +2046,7 @@ int be_cmd_set_flow_control(struct be_adapter *adapter, u32 tx_fc, u32 rx_fc)
+ CMD_SUBSYSTEM_COMMON))
+ return -EPERM;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -2066,7 +2066,7 @@ int be_cmd_set_flow_control(struct be_adapter *adapter, u32 tx_fc, u32 rx_fc)
+ status = be_mcc_notify_wait(adapter);
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+
+ if (base_status(status) == MCC_STATUS_FEATURE_NOT_SUPPORTED)
+ return -EOPNOTSUPP;
+@@ -2085,7 +2085,7 @@ int be_cmd_get_flow_control(struct be_adapter *adapter, u32 *tx_fc, u32 *rx_fc)
+ CMD_SUBSYSTEM_COMMON))
+ return -EPERM;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -2108,7 +2108,7 @@ int be_cmd_get_flow_control(struct be_adapter *adapter, u32 *tx_fc, u32 *rx_fc)
+ }
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -2189,7 +2189,7 @@ int be_cmd_rss_config(struct be_adapter *adapter, u8 *rsstable,
+ if (!(be_if_cap_flags(adapter) & BE_IF_FLAGS_RSS))
+ return 0;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -2214,7 +2214,7 @@ int be_cmd_rss_config(struct be_adapter *adapter, u8 *rsstable,
+
+ status = be_mcc_notify_wait(adapter);
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -2226,7 +2226,7 @@ int be_cmd_set_beacon_state(struct be_adapter *adapter, u8 port_num,
+ struct be_cmd_req_enable_disable_beacon *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -2247,7 +2247,7 @@ int be_cmd_set_beacon_state(struct be_adapter *adapter, u8 port_num,
+ status = be_mcc_notify_wait(adapter);
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -2258,7 +2258,7 @@ int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num, u32 *state)
+ struct be_cmd_req_get_beacon_state *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -2282,7 +2282,7 @@ int be_cmd_get_beacon_state(struct be_adapter *adapter, u8 port_num, u32 *state)
+ }
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -2306,7 +2306,7 @@ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
+ return -ENOMEM;
+ }
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -2328,7 +2328,7 @@ int be_cmd_read_port_transceiver_data(struct be_adapter *adapter,
+ memcpy(data, resp->page_data + off, len);
+ }
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ return status;
+ }
+@@ -2345,7 +2345,7 @@ static int lancer_cmd_write_object(struct be_adapter *adapter,
+ void *ctxt = NULL;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+ adapter->flash_status = 0;
+
+ wrb = wrb_from_mccq(adapter);
+@@ -2387,7 +2387,7 @@ static int lancer_cmd_write_object(struct be_adapter *adapter,
+ if (status)
+ goto err_unlock;
+
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+
+ if (!wait_for_completion_timeout(&adapter->et_cmd_compl,
+ msecs_to_jiffies(60000)))
+@@ -2406,7 +2406,7 @@ static int lancer_cmd_write_object(struct be_adapter *adapter,
+ return status;
+
+ err_unlock:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -2460,7 +2460,7 @@ static int lancer_cmd_delete_object(struct be_adapter *adapter,
+ struct be_mcc_wrb *wrb;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -2478,7 +2478,7 @@ static int lancer_cmd_delete_object(struct be_adapter *adapter,
+
+ status = be_mcc_notify_wait(adapter);
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -2491,7 +2491,7 @@ int lancer_cmd_read_object(struct be_adapter *adapter, struct be_dma_mem *cmd,
+ struct lancer_cmd_resp_read_object *resp;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -2525,7 +2525,7 @@ int lancer_cmd_read_object(struct be_adapter *adapter, struct be_dma_mem *cmd,
+ }
+
+ err_unlock:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -2537,7 +2537,7 @@ static int be_cmd_write_flashrom(struct be_adapter *adapter,
+ struct be_cmd_write_flashrom *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+ adapter->flash_status = 0;
+
+ wrb = wrb_from_mccq(adapter);
+@@ -2562,7 +2562,7 @@ static int be_cmd_write_flashrom(struct be_adapter *adapter,
+ if (status)
+ goto err_unlock;
+
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+
+ if (!wait_for_completion_timeout(&adapter->et_cmd_compl,
+ msecs_to_jiffies(40000)))
+@@ -2573,7 +2573,7 @@ static int be_cmd_write_flashrom(struct be_adapter *adapter,
+ return status;
+
+ err_unlock:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -2584,7 +2584,7 @@ static int be_cmd_get_flash_crc(struct be_adapter *adapter, u8 *flashed_crc,
+ struct be_mcc_wrb *wrb;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -2611,7 +2611,7 @@ static int be_cmd_get_flash_crc(struct be_adapter *adapter, u8 *flashed_crc,
+ memcpy(flashed_crc, req->crc, 4);
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -3217,7 +3217,7 @@ int be_cmd_enable_magic_wol(struct be_adapter *adapter, u8 *mac,
+ struct be_cmd_req_acpi_wol_magic_config *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -3234,7 +3234,7 @@ int be_cmd_enable_magic_wol(struct be_adapter *adapter, u8 *mac,
+ status = be_mcc_notify_wait(adapter);
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -3249,7 +3249,7 @@ int be_cmd_set_loopback(struct be_adapter *adapter, u8 port_num,
+ CMD_SUBSYSTEM_LOWLEVEL))
+ return -EPERM;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -3272,7 +3272,7 @@ int be_cmd_set_loopback(struct be_adapter *adapter, u8 port_num,
+ if (status)
+ goto err_unlock;
+
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+
+ if (!wait_for_completion_timeout(&adapter->et_cmd_compl,
+ msecs_to_jiffies(SET_LB_MODE_TIMEOUT)))
+@@ -3281,7 +3281,7 @@ int be_cmd_set_loopback(struct be_adapter *adapter, u8 port_num,
+ return status;
+
+ err_unlock:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -3298,7 +3298,7 @@ int be_cmd_loopback_test(struct be_adapter *adapter, u32 port_num,
+ CMD_SUBSYSTEM_LOWLEVEL))
+ return -EPERM;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -3324,7 +3324,7 @@ int be_cmd_loopback_test(struct be_adapter *adapter, u32 port_num,
+ if (status)
+ goto err;
+
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+
+ wait_for_completion(&adapter->et_cmd_compl);
+ resp = embedded_payload(wrb);
+@@ -3332,7 +3332,7 @@ int be_cmd_loopback_test(struct be_adapter *adapter, u32 port_num,
+
+ return status;
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -3348,7 +3348,7 @@ int be_cmd_ddr_dma_test(struct be_adapter *adapter, u64 pattern,
+ CMD_SUBSYSTEM_LOWLEVEL))
+ return -EPERM;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -3382,7 +3382,7 @@ int be_cmd_ddr_dma_test(struct be_adapter *adapter, u64 pattern,
+ }
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -3393,7 +3393,7 @@ int be_cmd_get_seeprom_data(struct be_adapter *adapter,
+ struct be_cmd_req_seeprom_read *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -3409,7 +3409,7 @@ int be_cmd_get_seeprom_data(struct be_adapter *adapter,
+ status = be_mcc_notify_wait(adapter);
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -3424,7 +3424,7 @@ int be_cmd_get_phy_info(struct be_adapter *adapter)
+ CMD_SUBSYSTEM_COMMON))
+ return -EPERM;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -3469,7 +3469,7 @@ int be_cmd_get_phy_info(struct be_adapter *adapter)
+ }
+ dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -3479,7 +3479,7 @@ static int be_cmd_set_qos(struct be_adapter *adapter, u32 bps, u32 domain)
+ struct be_cmd_req_set_qos *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -3499,7 +3499,7 @@ static int be_cmd_set_qos(struct be_adapter *adapter, u32 bps, u32 domain)
+ status = be_mcc_notify_wait(adapter);
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -3611,7 +3611,7 @@ int be_cmd_get_fn_privileges(struct be_adapter *adapter, u32 *privilege,
+ struct be_cmd_req_get_fn_privileges *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -3643,7 +3643,7 @@ int be_cmd_get_fn_privileges(struct be_adapter *adapter, u32 *privilege,
+ }
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -3655,7 +3655,7 @@ int be_cmd_set_fn_privileges(struct be_adapter *adapter, u32 privileges,
+ struct be_cmd_req_set_fn_privileges *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -3675,7 +3675,7 @@ int be_cmd_set_fn_privileges(struct be_adapter *adapter, u32 privileges,
+
+ status = be_mcc_notify_wait(adapter);
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -3707,7 +3707,7 @@ int be_cmd_get_mac_from_list(struct be_adapter *adapter, u8 *mac,
+ return -ENOMEM;
+ }
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -3771,7 +3771,7 @@ int be_cmd_get_mac_from_list(struct be_adapter *adapter, u8 *mac,
+ }
+
+ out:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ dma_free_coherent(&adapter->pdev->dev, get_mac_list_cmd.size,
+ get_mac_list_cmd.va, get_mac_list_cmd.dma);
+ return status;
+@@ -3831,7 +3831,7 @@ int be_cmd_set_mac_list(struct be_adapter *adapter, u8 *mac_array,
+ if (!cmd.va)
+ return -ENOMEM;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -3853,7 +3853,7 @@ int be_cmd_set_mac_list(struct be_adapter *adapter, u8 *mac_array,
+
+ err:
+ dma_free_coherent(&adapter->pdev->dev, cmd.size, cmd.va, cmd.dma);
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -3889,7 +3889,7 @@ int be_cmd_set_hsw_config(struct be_adapter *adapter, u16 pvid,
+ CMD_SUBSYSTEM_COMMON))
+ return -EPERM;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -3930,7 +3930,7 @@ int be_cmd_set_hsw_config(struct be_adapter *adapter, u16 pvid,
+ status = be_mcc_notify_wait(adapter);
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -3944,7 +3944,7 @@ int be_cmd_get_hsw_config(struct be_adapter *adapter, u16 *pvid,
+ int status;
+ u16 vid;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -3991,7 +3991,7 @@ int be_cmd_get_hsw_config(struct be_adapter *adapter, u16 *pvid,
+ }
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -4190,7 +4190,7 @@ int be_cmd_set_ext_fat_capabilites(struct be_adapter *adapter,
+ struct be_cmd_req_set_ext_fat_caps *req;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -4206,7 +4206,7 @@ int be_cmd_set_ext_fat_capabilites(struct be_adapter *adapter,
+
+ status = be_mcc_notify_wait(adapter);
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -4684,7 +4684,7 @@ int be_cmd_manage_iface(struct be_adapter *adapter, u32 iface, u8 op)
+ if (iface == 0xFFFFFFFF)
+ return -1;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -4701,7 +4701,7 @@ int be_cmd_manage_iface(struct be_adapter *adapter, u32 iface, u8 op)
+
+ status = be_mcc_notify_wait(adapter);
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -4735,7 +4735,7 @@ int be_cmd_get_if_id(struct be_adapter *adapter, struct be_vf_cfg *vf_cfg,
+ struct be_cmd_resp_get_iface_list *resp;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -4756,7 +4756,7 @@ int be_cmd_get_if_id(struct be_adapter *adapter, struct be_vf_cfg *vf_cfg,
+ }
+
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -4850,7 +4850,7 @@ int be_cmd_enable_vf(struct be_adapter *adapter, u8 domain)
+ if (BEx_chip(adapter))
+ return 0;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -4868,7 +4868,7 @@ int be_cmd_enable_vf(struct be_adapter *adapter, u8 domain)
+ req->enable = 1;
+ status = be_mcc_notify_wait(adapter);
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -4941,7 +4941,7 @@ __be_cmd_set_logical_link_config(struct be_adapter *adapter,
+ u32 link_config = 0;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -4969,7 +4969,7 @@ __be_cmd_set_logical_link_config(struct be_adapter *adapter,
+
+ status = be_mcc_notify_wait(adapter);
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -5000,8 +5000,7 @@ int be_cmd_set_features(struct be_adapter *adapter)
+ struct be_mcc_wrb *wrb;
+ int status;
+
+- if (mutex_lock_interruptible(&adapter->mcc_lock))
+- return -1;
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -5039,7 +5038,7 @@ int be_cmd_set_features(struct be_adapter *adapter)
+ dev_info(&adapter->pdev->dev,
+ "Adapter does not support HW error recovery\n");
+
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+
+@@ -5053,7 +5052,7 @@ int be_roce_mcc_cmd(void *netdev_handle, void *wrb_payload,
+ struct be_cmd_resp_hdr *resp;
+ int status;
+
+- mutex_lock(&adapter->mcc_lock);
++ spin_lock_bh(&adapter->mcc_lock);
+
+ wrb = wrb_from_mccq(adapter);
+ if (!wrb) {
+@@ -5076,7 +5075,7 @@ int be_roce_mcc_cmd(void *netdev_handle, void *wrb_payload,
+ memcpy(wrb_payload, resp, sizeof(*resp) + resp->response_length);
+ be_dws_le_to_cpu(wrb_payload, sizeof(*resp) + resp->response_length);
+ err:
+- mutex_unlock(&adapter->mcc_lock);
++ spin_unlock_bh(&adapter->mcc_lock);
+ return status;
+ }
+ EXPORT_SYMBOL(be_roce_mcc_cmd);
+diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
+index 875fe379eea213..3d2e2159211917 100644
+--- a/drivers/net/ethernet/emulex/benet/be_main.c
++++ b/drivers/net/ethernet/emulex/benet/be_main.c
+@@ -5667,8 +5667,8 @@ static int be_drv_init(struct be_adapter *adapter)
+ }
+
+ mutex_init(&adapter->mbox_lock);
+- mutex_init(&adapter->mcc_lock);
+ mutex_init(&adapter->rx_filter_lock);
++ spin_lock_init(&adapter->mcc_lock);
+ spin_lock_init(&adapter->mcc_cq_lock);
+ init_completion(&adapter->et_cmd_compl);
+
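+
Every hunk in this be2net conversion follows one pattern: the MCC command path can be reached from bottom-half (softirq) context, where a sleeping mutex is not allowed, so mcc_lock becomes a spinlock taken with the _bh variants. A minimal sketch of the conversion, using a hypothetical foo driver rather than the be2net code itself:

	#include <linux/spinlock.h>

	struct foo_adapter {
		spinlock_t cmd_lock;		/* was: struct mutex cmd_lock */
	};

	static void foo_init_locks(struct foo_adapter *ad)
	{
		spin_lock_init(&ad->cmd_lock);	/* was: mutex_init(&ad->cmd_lock) */
	}

	static int foo_issue_cmd(struct foo_adapter *ad)
	{
		int status = 0;

		spin_lock_bh(&ad->cmd_lock);	/* spins and masks local BHs */
		/* build and post the command; nothing here may sleep */
		spin_unlock_bh(&ad->cmd_lock);
		return status;
	}

The trade-off is that the critical section may no longer sleep, which is why the conversion has to touch every lock and unlock site in one sweep.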
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+index bab16c2191b2f0..181af419b878d5 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c
+@@ -483,7 +483,7 @@ int hclge_ptp_init(struct hclge_dev *hdev)
+
+ ret = hclge_ptp_get_cycle(hdev);
+ if (ret)
+- return ret;
++ goto out;
+ }
+
+ ret = hclge_ptp_int_en(hdev, true);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+index f5acfb7d4ff655..ab7c2750c10425 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-loongson.c
+@@ -11,6 +11,8 @@
+ #include "dwmac_dma.h"
+ #include "dwmac1000.h"
+
++#define DRIVER_NAME "dwmac-loongson-pci"
++
+ /* Normal Loongson Tx Summary */
+ #define DMA_INTR_ENA_NIE_TX_LOONGSON 0x00040000
+ /* Normal Loongson Rx Summary */
+@@ -568,7 +570,7 @@ static int loongson_dwmac_probe(struct pci_dev *pdev, const struct pci_device_id
+ for (i = 0; i < PCI_STD_NUM_BARS; i++) {
+ if (pci_resource_len(pdev, i) == 0)
+ continue;
+- ret = pcim_iomap_regions(pdev, BIT(0), pci_name(pdev));
++ ret = pcim_iomap_regions(pdev, BIT(0), DRIVER_NAME);
+ if (ret)
+ goto err_disable_device;
+ break;
+@@ -687,7 +689,7 @@ static const struct pci_device_id loongson_dwmac_id_table[] = {
+ MODULE_DEVICE_TABLE(pci, loongson_dwmac_id_table);
+
+ static struct pci_driver loongson_dwmac_driver = {
+- .name = "dwmac-loongson-pci",
++ .name = DRIVER_NAME,
+ .id_table = loongson_dwmac_id_table,
+ .probe = loongson_dwmac_probe,
+ .remove = loongson_dwmac_remove,
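+
The new macro exists so the BAR registration and the pci_driver name can never drift apart again; the region used to be registered under pci_name(pdev), the device's bus address, instead of the driver name. The single-source-of-truth pattern in isolation (hypothetical foo driver):

	#define FOO_DRIVER_NAME "foo-pci"	/* one definition, used everywhere */

	static int foo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
	{
		/* assumes pcim_enable_device() succeeded earlier; the mapped
		 * BAR then shows up under the driver name in /proc/iomem
		 */
		return pcim_iomap_regions(pdev, BIT(0), FOO_DRIVER_NAME);
	}

	static struct pci_driver foo_driver = {
		.name	= FOO_DRIVER_NAME,	/* cannot disagree with the iomap name */
		.probe	= foo_probe,
	};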
+diff --git a/drivers/net/ipa/data/ipa_data-v4.7.c b/drivers/net/ipa/data/ipa_data-v4.7.c
+index c8c23d9be961b1..41f212209993f1 100644
+--- a/drivers/net/ipa/data/ipa_data-v4.7.c
++++ b/drivers/net/ipa/data/ipa_data-v4.7.c
+@@ -28,20 +28,18 @@ enum ipa_resource_type {
+ enum ipa_rsrc_group_id {
+ /* Source resource group identifiers */
+ IPA_RSRC_GROUP_SRC_UL_DL = 0,
+- IPA_RSRC_GROUP_SRC_UC_RX_Q,
+ IPA_RSRC_GROUP_SRC_COUNT, /* Last in set; not a source group */
+
+ /* Destination resource group identifiers */
+- IPA_RSRC_GROUP_DST_UL_DL_DPL = 0,
+- IPA_RSRC_GROUP_DST_UNUSED_1,
++ IPA_RSRC_GROUP_DST_UL_DL = 0,
+ IPA_RSRC_GROUP_DST_COUNT, /* Last; not a destination group */
+ };
+
+ /* QSB configuration data for an SoC having IPA v4.7 */
+ static const struct ipa_qsb_data ipa_qsb_data[] = {
+ [IPA_QSB_MASTER_DDR] = {
+- .max_writes = 8,
+- .max_reads = 0, /* no limit (hardware max) */
++ .max_writes = 12,
++ .max_reads = 13,
+ .max_reads_beats = 120,
+ },
+ };
+@@ -81,7 +79,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
+ },
+ .endpoint = {
+ .config = {
+- .resource_group = IPA_RSRC_GROUP_DST_UL_DL_DPL,
++ .resource_group = IPA_RSRC_GROUP_DST_UL_DL,
+ .aggregation = true,
+ .status_enable = true,
+ .rx = {
+@@ -106,6 +104,7 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
+ .filter_support = true,
+ .config = {
+ .resource_group = IPA_RSRC_GROUP_SRC_UL_DL,
++ .checksum = true,
+ .qmap = true,
+ .status_enable = true,
+ .tx = {
+@@ -128,7 +127,8 @@ static const struct ipa_gsi_endpoint_data ipa_gsi_endpoint_data[] = {
+ },
+ .endpoint = {
+ .config = {
+- .resource_group = IPA_RSRC_GROUP_DST_UL_DL_DPL,
++ .resource_group = IPA_RSRC_GROUP_DST_UL_DL,
++ .checksum = true,
+ .qmap = true,
+ .aggregation = true,
+ .rx = {
+@@ -197,12 +197,12 @@ static const struct ipa_resource ipa_resource_src[] = {
+ /* Destination resource configuration data for an SoC having IPA v4.7 */
+ static const struct ipa_resource ipa_resource_dst[] = {
+ [IPA_RESOURCE_TYPE_DST_DATA_SECTORS] = {
+- .limits[IPA_RSRC_GROUP_DST_UL_DL_DPL] = {
++ .limits[IPA_RSRC_GROUP_DST_UL_DL] = {
+ .min = 7, .max = 7,
+ },
+ },
+ [IPA_RESOURCE_TYPE_DST_DPS_DMARS] = {
+- .limits[IPA_RSRC_GROUP_DST_UL_DL_DPL] = {
++ .limits[IPA_RSRC_GROUP_DST_UL_DL] = {
+ .min = 2, .max = 2,
+ },
+ },
+diff --git a/drivers/net/mctp/mctp-i3c.c b/drivers/net/mctp/mctp-i3c.c
+index ee9d562f0817cf..a2b15cddf46e6b 100644
+--- a/drivers/net/mctp/mctp-i3c.c
++++ b/drivers/net/mctp/mctp-i3c.c
+@@ -507,6 +507,9 @@ static int mctp_i3c_header_create(struct sk_buff *skb, struct net_device *dev,
+ {
+ struct mctp_i3c_internal_hdr *ihdr;
+
++ if (!daddr || !saddr)
++ return -EINVAL;
++
+ skb_push(skb, sizeof(struct mctp_i3c_internal_hdr));
+ skb_reset_mac_header(skb);
+ ihdr = (void *)skb_mac_header(skb);
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 4f3e742907cb62..c9cfdc33fc5f17 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -615,6 +615,49 @@ int phy_ethtool_get_stats(struct phy_device *phydev,
+ }
+ EXPORT_SYMBOL(phy_ethtool_get_stats);
+
++/**
++ * __phy_ethtool_get_phy_stats - Retrieve standardized PHY statistics
++ * @phydev: Pointer to the PHY device
++ * @phy_stats: Pointer to ethtool_eth_phy_stats structure
++ * @phydev_stats: Pointer to ethtool_phy_stats structure
++ *
++ * Fetches PHY statistics using a kernel-defined interface for consistent
++ * diagnostics. Unlike phy_ethtool_get_stats(), which allows custom stats,
++ * this function enforces a standardized format for better interoperability.
++ */
++void __phy_ethtool_get_phy_stats(struct phy_device *phydev,
++ struct ethtool_eth_phy_stats *phy_stats,
++ struct ethtool_phy_stats *phydev_stats)
++{
++ if (!phydev->drv || !phydev->drv->get_phy_stats)
++ return;
++
++ mutex_lock(&phydev->lock);
++ phydev->drv->get_phy_stats(phydev, phy_stats, phydev_stats);
++ mutex_unlock(&phydev->lock);
++}
++
++/**
++ * __phy_ethtool_get_link_ext_stats - Retrieve extended link statistics for a PHY
++ * @phydev: Pointer to the PHY device
++ * @link_stats: Pointer to the structure to store extended link statistics
++ *
++ * Populates the ethtool_link_ext_stats structure with link down event counts
++ * and additional driver-specific link statistics, if available.
++ */
++void __phy_ethtool_get_link_ext_stats(struct phy_device *phydev,
++ struct ethtool_link_ext_stats *link_stats)
++{
++ link_stats->link_down_events = READ_ONCE(phydev->link_down_events);
++
++ if (!phydev->drv || !phydev->drv->get_link_stats)
++ return;
++
++ mutex_lock(&phydev->lock);
++ phydev->drv->get_link_stats(phydev, link_stats);
++ mutex_unlock(&phydev->lock);
++}
++
+ /**
+ * phy_ethtool_get_plca_cfg - Get PLCA RS configuration
+ * @phydev: the phy_device struct
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 499797646580e3..119dfa2d6643a9 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -3776,6 +3776,8 @@ static const struct ethtool_phy_ops phy_ethtool_phy_ops = {
+ static const struct phylib_stubs __phylib_stubs = {
+ .hwtstamp_get = __phy_hwtstamp_get,
+ .hwtstamp_set = __phy_hwtstamp_set,
++ .get_phy_stats = __phy_ethtool_get_phy_stats,
++ .get_link_ext_stats = __phy_ethtool_get_link_ext_stats,
+ };
+
+ static void phylib_register_stubs(void)
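+
Both helpers above share one shape: the hook is optional, and when present it runs under phydev->lock so the snapshot cannot race with PHY state changes. Condensed into a sketch with a hypothetical get_foo_stats driver hook:

	void __phy_ethtool_get_foo_stats(struct phy_device *phydev,
					 struct foo_stats *stats)
	{
		if (!phydev->drv || !phydev->drv->get_foo_stats)
			return;				/* hook is optional */

		mutex_lock(&phydev->lock);		/* serialize with PHY state */
		phydev->drv->get_foo_stats(phydev, stats);
		mutex_unlock(&phydev->lock);
	}

Registering the real helpers in the phylib stubs table, as the phy_device.c hunk does, is what lets ethtool code outside phylib reach them without a hard module dependency.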
+diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
+index 4583e15ad03a0b..1420c4efa48e68 100644
+--- a/drivers/net/ppp/ppp_generic.c
++++ b/drivers/net/ppp/ppp_generic.c
+@@ -72,6 +72,17 @@
+ #define PPP_PROTO_LEN 2
+ #define PPP_LCP_HDRLEN 4
+
++/* The filter instructions generated by libpcap are constructed
++ * assuming a four-byte PPP header on each packet, where the last
++ * 2 bytes are the protocol field defined in the RFC and the first
++ * byte of the first 2 bytes indicates the direction.
++ * The second byte is currently unused, but we still need to initialize
++ * it to prevent crafted BPF programs from reading it, which would
++ * amount to reading uninitialized data.
++ */
++#define PPP_FILTER_OUTBOUND_TAG 0x0100
++#define PPP_FILTER_INBOUND_TAG 0x0000
++
+ /*
+ * An instance of /dev/ppp can be associated with either a ppp
+ * interface unit or a ppp channel. In both cases, file->private_data
+@@ -1762,10 +1773,10 @@ ppp_send_frame(struct ppp *ppp, struct sk_buff *skb)
+
+ if (proto < 0x8000) {
+ #ifdef CONFIG_PPP_FILTER
+- /* check if we should pass this packet */
+- /* the filter instructions are constructed assuming
+- a four-byte PPP header on each packet */
+- *(u8 *)skb_push(skb, 2) = 1;
++ /* check if the packet passes the pass and active filters.
++ * See comment for PPP_FILTER_OUTBOUND_TAG above.
++ */
++ *(__be16 *)skb_push(skb, 2) = htons(PPP_FILTER_OUTBOUND_TAG);
+ if (ppp->pass_filter &&
+ bpf_prog_run(ppp->pass_filter, skb) == 0) {
+ if (ppp->debug & 1)
+@@ -2482,14 +2493,13 @@ ppp_receive_nonmp_frame(struct ppp *ppp, struct sk_buff *skb)
+ /* network protocol frame - give it to the kernel */
+
+ #ifdef CONFIG_PPP_FILTER
+- /* check if the packet passes the pass and active filters */
+- /* the filter instructions are constructed assuming
+- a four-byte PPP header on each packet */
+ if (ppp->pass_filter || ppp->active_filter) {
+ if (skb_unclone(skb, GFP_ATOMIC))
+ goto err;
+-
+- *(u8 *)skb_push(skb, 2) = 0;
++ /* Check if the packet passes the pass and active filters.
++ * See comment for PPP_FILTER_INBOUND_TAG above.
++ */
++ *(__be16 *)skb_push(skb, 2) = htons(PPP_FILTER_INBOUND_TAG);
+ if (ppp->pass_filter &&
+ bpf_prog_run(ppp->pass_filter, skb) == 0) {
+ if (ppp->debug & 1)
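+
Both call sites now share the convention spelled out in the new comment: push a two-byte direction word so the frame matches the four-byte PPP header layout libpcap assumed when compiling the filter, and make both bytes defined. As a stand-alone sketch (a hypothetical helper, not part of the patch):

	static bool foo_ppp_filter_match(struct bpf_prog *filter,
					 struct sk_buff *skb, bool outbound)
	{
		u16 tag = outbound ? PPP_FILTER_OUTBOUND_TAG
				   : PPP_FILTER_INBOUND_TAG;
		bool pass;

		/* fake the first two bytes of the four-byte PPP header */
		*(__be16 *)skb_push(skb, 2) = htons(tag);
		pass = bpf_prog_run(filter, skb) != 0;
		skb_pull(skb, 2);		/* restore the real frame */
		return pass;
	}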
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+index c620911a11933a..754e01688900d3 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+@@ -1197,7 +1197,7 @@ static int iwl_parse_tlv_firmware(struct iwl_drv *drv,
+
+ if (tlv_len != sizeof(*fseq_ver))
+ goto invalid_tlv_len;
+- IWL_INFO(drv, "TLV_FW_FSEQ_VERSION: %s\n",
++ IWL_INFO(drv, "TLV_FW_FSEQ_VERSION: %.32s\n",
+ fseq_ver->version);
+ }
+ break;
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+index 91ca830a7b6035..f4276fdee6beae 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c
+@@ -1518,6 +1518,13 @@ static ssize_t iwl_dbgfs_fw_dbg_clear_write(struct iwl_mvm *mvm,
+ if (mvm->trans->trans_cfg->device_family < IWL_DEVICE_FAMILY_9000)
+ return -EOPNOTSUPP;
+
++ /*
++ * If the firmware is not running, silently succeed since there is
++ * no data to clear.
++ */
++ if (!iwl_mvm_firmware_running(mvm))
++ return count;
++
+ mutex_lock(&mvm->mutex);
+ iwl_fw_dbg_clear_monitor_buf(&mvm->fwrt);
+ mutex_unlock(&mvm->mutex);
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+index 72fa7ac86516cd..17b8ccc275693b 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/time-event.c
+@@ -1030,6 +1030,8 @@ void iwl_mvm_rx_session_protect_notif(struct iwl_mvm *mvm,
+ /* End TE, notify mac80211 */
+ mvmvif->time_event_data.id = SESSION_PROTECT_CONF_MAX_ID;
+ mvmvif->time_event_data.link_id = -1;
++ /* set the bit so the ROC cleanup will actually clean up */
++ set_bit(IWL_MVM_STATUS_ROC_P2P_RUNNING, &mvm->status);
+ iwl_mvm_roc_finished(mvm);
+ ieee80211_remain_on_channel_expired(mvm->hw);
+ } else if (le32_to_cpu(notif->start)) {
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+index 27a7e0b5b3d51e..ebe9b25cc53a99 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/internal.h
+@@ -1,6 +1,6 @@
+ /* SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause */
+ /*
+- * Copyright (C) 2003-2015, 2018-2024 Intel Corporation
++ * Copyright (C) 2003-2015, 2018-2025 Intel Corporation
+ * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
+ * Copyright (C) 2016-2017 Intel Deutschland GmbH
+ */
+@@ -643,7 +643,8 @@ dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, unsigned int offset,
+ unsigned int len);
+ struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb,
+ struct iwl_cmd_meta *cmd_meta,
+- u8 **hdr, unsigned int hdr_room);
++ u8 **hdr, unsigned int hdr_room,
++ unsigned int offset);
+
+ void iwl_pcie_free_tso_pages(struct iwl_trans *trans, struct sk_buff *skb,
+ struct iwl_cmd_meta *cmd_meta);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+index b1846abb99b78f..477a05cd1288b0 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+ * Copyright (C) 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2020, 2023-2024 Intel Corporation
++ * Copyright (C) 2018-2020, 2023-2025 Intel Corporation
+ */
+ #include <net/tso.h>
+ #include <linux/tcp.h>
+@@ -188,7 +188,8 @@ static int iwl_txq_gen2_build_amsdu(struct iwl_trans *trans,
+ (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr));
+
+ /* Our device supports 9 segments at most, it will fit in 1 page */
+- sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room);
++ sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room,
++ snap_ip_tcp_hdrlen + hdr_len);
+ if (!sgt)
+ return -ENOMEM;
+
+@@ -347,6 +348,7 @@ iwl_tfh_tfd *iwl_txq_gen2_build_tx_amsdu(struct iwl_trans *trans,
+ return tfd;
+
+ out_err:
++ iwl_pcie_free_tso_pages(trans, skb, out_meta);
+ iwl_txq_gen2_tfd_unmap(trans, out_meta, tfd);
+ return NULL;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+index 9fe050f0ddc160..9fcdd06e126ae1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2003-2014, 2018-2021, 2023-2024 Intel Corporation
++ * Copyright (C) 2003-2014, 2018-2021, 2023-2025 Intel Corporation
+ * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
+ * Copyright (C) 2016-2017 Intel Deutschland GmbH
+ */
+@@ -1853,6 +1853,7 @@ dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, unsigned int offset,
+ * @cmd_meta: command meta to store the scatter list information for unmapping
+ * @hdr: output argument for TSO headers
+ * @hdr_room: requested length for TSO headers
++ * @offset: offset into the data from which mapping should start
+ *
+ * Allocate space for a scatter gather list and TSO headers and map the SKB
+ * using the scatter gather list. The SKB is unmapped again when the page is
+@@ -1862,9 +1863,12 @@ dma_addr_t iwl_pcie_get_sgt_tb_phys(struct sg_table *sgt, unsigned int offset,
+ */
+ struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb,
+ struct iwl_cmd_meta *cmd_meta,
+- u8 **hdr, unsigned int hdr_room)
++ u8 **hdr, unsigned int hdr_room,
++ unsigned int offset)
+ {
+ struct sg_table *sgt;
++ unsigned int n_segments = skb_shinfo(skb)->nr_frags + 1;
++ int orig_nents;
+
+ if (WARN_ON_ONCE(skb_has_frag_list(skb)))
+ return NULL;
+@@ -1872,8 +1876,7 @@ struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb,
+ *hdr = iwl_pcie_get_page_hdr(trans,
+ hdr_room + __alignof__(struct sg_table) +
+ sizeof(struct sg_table) +
+- (skb_shinfo(skb)->nr_frags + 1) *
+- sizeof(struct scatterlist),
++ n_segments * sizeof(struct scatterlist),
+ skb);
+ if (!*hdr)
+ return NULL;
+@@ -1881,14 +1884,15 @@ struct sg_table *iwl_pcie_prep_tso(struct iwl_trans *trans, struct sk_buff *skb,
+ sgt = (void *)PTR_ALIGN(*hdr + hdr_room, __alignof__(struct sg_table));
+ sgt->sgl = (void *)(sgt + 1);
+
+- sg_init_table(sgt->sgl, skb_shinfo(skb)->nr_frags + 1);
++ sg_init_table(sgt->sgl, n_segments);
+
+ /* Only map the data, not the header (it is copied to the TSO page) */
+- sgt->orig_nents = skb_to_sgvec(skb, sgt->sgl, skb_headlen(skb),
+- skb->data_len);
+- if (WARN_ON_ONCE(sgt->orig_nents <= 0))
++ orig_nents = skb_to_sgvec(skb, sgt->sgl, offset, skb->len - offset);
++ if (WARN_ON_ONCE(orig_nents <= 0))
+ return NULL;
+
++ sgt->orig_nents = orig_nents;
++
+ /* And map the entire SKB */
+ if (dma_map_sgtable(trans->dev, sgt, DMA_TO_DEVICE, 0) < 0)
+ return NULL;
+@@ -1937,7 +1941,8 @@ static int iwl_fill_data_tbs_amsdu(struct iwl_trans *trans, struct sk_buff *skb,
+ (3 + snap_ip_tcp_hdrlen + sizeof(struct ethhdr)) + iv_len;
+
+ /* Our device supports 9 segments at most, it will fit in 1 page */
+- sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room);
++ sgt = iwl_pcie_prep_tso(trans, skb, out_meta, &start_hdr, hdr_room,
++ snap_ip_tcp_hdrlen + hdr_len + iv_len);
+ if (!sgt)
+ return -ENOMEM;
+
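+
The new offset argument exists because the headers are copied into a driver-owned TSO page while only the remainder of the SKB is DMA-mapped; mapping from skb_headlen() assumed the copied headers were exactly the linear area, which need not hold. Illustrative only, with hypothetical names:

	/* Split an SKB for TSO: the first hdr_len bytes go out by copy,
	 * the rest by DMA through a scatter list.
	 */
	static int foo_map_tso(struct sk_buff *skb, struct scatterlist *sgl,
			       u8 *hdr_page, unsigned int hdr_len)
	{
		int nents;

		skb_copy_bits(skb, 0, hdr_page, hdr_len);
		sg_init_table(sgl, skb_shinfo(skb)->nr_frags + 1);
		nents = skb_to_sgvec(skb, sgl, hdr_len, skb->len - hdr_len);
		return nents > 0 ? nents : -EIO;
	}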
+diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
+index 61af1583356c27..e4daac9c244015 100644
+--- a/drivers/nvme/host/ioctl.c
++++ b/drivers/nvme/host/ioctl.c
+@@ -120,19 +120,31 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
+ struct nvme_ns *ns = q->queuedata;
+ struct block_device *bdev = ns ? ns->disk->part0 : NULL;
+ bool supports_metadata = bdev && blk_get_integrity(bdev->bd_disk);
++ struct nvme_ctrl *ctrl = nvme_req(req)->ctrl;
+ bool has_metadata = meta_buffer && meta_len;
+ struct bio *bio = NULL;
+ int ret;
+
+- if (has_metadata && !supports_metadata)
+- return -EINVAL;
++ if (!nvme_ctrl_sgl_supported(ctrl))
++ dev_warn_once(ctrl->device, "using unchecked data buffer\n");
++ if (has_metadata) {
++ if (!supports_metadata) {
++ ret = -EINVAL;
++ goto out;
++ }
++ if (!nvme_ctrl_meta_sgl_supported(ctrl))
++ dev_warn_once(ctrl->device,
++ "using unchecked metadata buffer\n");
++ }
+
+ if (ioucmd && (ioucmd->flags & IORING_URING_CMD_FIXED)) {
+ struct iov_iter iter;
+
+ /* fixedbufs is only for non-vectored io */
+- if (WARN_ON_ONCE(flags & NVME_IOCTL_VEC))
+- return -EINVAL;
++ if (WARN_ON_ONCE(flags & NVME_IOCTL_VEC)) {
++ ret = -EINVAL;
++ goto out;
++ }
+ ret = io_uring_cmd_import_fixed(ubuffer, bufflen,
+ rq_data_dir(req), &iter, ioucmd);
+ if (ret < 0)
+diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
+index 61bba5513de05a..dcdce7d12e441a 100644
+--- a/drivers/nvme/host/nvme.h
++++ b/drivers/nvme/host/nvme.h
+@@ -1130,6 +1130,13 @@ static inline bool nvme_ctrl_sgl_supported(struct nvme_ctrl *ctrl)
+ return ctrl->sgls & ((1 << 0) | (1 << 1));
+ }
+
++static inline bool nvme_ctrl_meta_sgl_supported(struct nvme_ctrl *ctrl)
++{
++ if (ctrl->ops->flags & NVME_F_FABRICS)
++ return true;
++ return ctrl->sgls & NVME_CTRL_SGLS_MSDS;
++}
++
+ #ifdef CONFIG_NVME_HOST_AUTH
+ int __init nvme_init_auth(void);
+ void __exit nvme_exit_auth(void);
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index cc74682dc0d4e9..e1329d4974fd6f 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -43,6 +43,7 @@
+ */
+ #define NVME_MAX_KB_SZ 8192
+ #define NVME_MAX_SEGS 128
++#define NVME_MAX_META_SEGS 15
+ #define NVME_MAX_NR_ALLOCATIONS 5
+
+ static int use_threaded_interrupts;
+@@ -143,6 +144,7 @@ struct nvme_dev {
+ bool hmb;
+
+ mempool_t *iod_mempool;
++ mempool_t *iod_meta_mempool;
+
+ /* shadow doorbell buffer support: */
+ __le32 *dbbuf_dbs;
+@@ -238,6 +240,8 @@ struct nvme_iod {
+ dma_addr_t first_dma;
+ dma_addr_t meta_dma;
+ struct sg_table sgt;
++ struct sg_table meta_sgt;
++ union nvme_descriptor meta_list;
+ union nvme_descriptor list[NVME_MAX_NR_ALLOCATIONS];
+ };
+
+@@ -505,6 +509,15 @@ static void nvme_commit_rqs(struct blk_mq_hw_ctx *hctx)
+ spin_unlock(&nvmeq->sq_lock);
+ }
+
++static inline bool nvme_pci_metadata_use_sgls(struct nvme_dev *dev,
++ struct request *req)
++{
++ if (!nvme_ctrl_meta_sgl_supported(&dev->ctrl))
++ return false;
++ return req->nr_integrity_segments > 1 ||
++ nvme_req(req)->flags & NVME_REQ_USERCMD;
++}
++
+ static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req,
+ int nseg)
+ {
+@@ -517,8 +530,10 @@ static inline bool nvme_pci_use_sgls(struct nvme_dev *dev, struct request *req,
+ return false;
+ if (!nvmeq->qid)
+ return false;
++ if (nvme_pci_metadata_use_sgls(dev, req))
++ return true;
+ if (!sgl_threshold || avg_seg_size < sgl_threshold)
+- return false;
++ return nvme_req(req)->flags & NVME_REQ_USERCMD;
+ return true;
+ }
+
+@@ -779,7 +794,8 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
+ struct bio_vec bv = req_bvec(req);
+
+ if (!is_pci_p2pdma_page(bv.bv_page)) {
+- if ((bv.bv_offset & (NVME_CTRL_PAGE_SIZE - 1)) +
++ if (!nvme_pci_metadata_use_sgls(dev, req) &&
++ (bv.bv_offset & (NVME_CTRL_PAGE_SIZE - 1)) +
+ bv.bv_len <= NVME_CTRL_PAGE_SIZE * 2)
+ return nvme_setup_prp_simple(dev, req,
+ &cmnd->rw, &bv);
+@@ -823,11 +839,69 @@ static blk_status_t nvme_map_data(struct nvme_dev *dev, struct request *req,
+ return ret;
+ }
+
+-static blk_status_t nvme_map_metadata(struct nvme_dev *dev, struct request *req,
+- struct nvme_command *cmnd)
++static blk_status_t nvme_pci_setup_meta_sgls(struct nvme_dev *dev,
++ struct request *req)
++{
++ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
++ struct nvme_rw_command *cmnd = &iod->cmd.rw;
++ struct nvme_sgl_desc *sg_list;
++ struct scatterlist *sgl, *sg;
++ unsigned int entries;
++ dma_addr_t sgl_dma;
++ int rc, i;
++
++ iod->meta_sgt.sgl = mempool_alloc(dev->iod_meta_mempool, GFP_ATOMIC);
++ if (!iod->meta_sgt.sgl)
++ return BLK_STS_RESOURCE;
++
++ sg_init_table(iod->meta_sgt.sgl, req->nr_integrity_segments);
++ iod->meta_sgt.orig_nents = blk_rq_map_integrity_sg(req,
++ iod->meta_sgt.sgl);
++ if (!iod->meta_sgt.orig_nents)
++ goto out_free_sg;
++
++ rc = dma_map_sgtable(dev->dev, &iod->meta_sgt, rq_dma_dir(req),
++ DMA_ATTR_NO_WARN);
++ if (rc)
++ goto out_free_sg;
++
++ sg_list = dma_pool_alloc(dev->prp_small_pool, GFP_ATOMIC, &sgl_dma);
++ if (!sg_list)
++ goto out_unmap_sg;
++
++ entries = iod->meta_sgt.nents;
++ iod->meta_list.sg_list = sg_list;
++ iod->meta_dma = sgl_dma;
++
++ cmnd->flags = NVME_CMD_SGL_METASEG;
++ cmnd->metadata = cpu_to_le64(sgl_dma);
++
++ sgl = iod->meta_sgt.sgl;
++ if (entries == 1) {
++ nvme_pci_sgl_set_data(sg_list, sgl);
++ return BLK_STS_OK;
++ }
++
++ sgl_dma += sizeof(*sg_list);
++ nvme_pci_sgl_set_seg(sg_list, sgl_dma, entries);
++ for_each_sg(sgl, sg, entries, i)
++ nvme_pci_sgl_set_data(&sg_list[i + 1], sg);
++
++ return BLK_STS_OK;
++
++out_unmap_sg:
++ dma_unmap_sgtable(dev->dev, &iod->meta_sgt, rq_dma_dir(req), 0);
++out_free_sg:
++ mempool_free(iod->meta_sgt.sgl, dev->iod_meta_mempool);
++ return BLK_STS_RESOURCE;
++}
++
++static blk_status_t nvme_pci_setup_meta_mptr(struct nvme_dev *dev,
++ struct request *req)
+ {
+ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+ struct bio_vec bv = rq_integrity_vec(req);
++ struct nvme_command *cmnd = &iod->cmd;
+
+ iod->meta_dma = dma_map_bvec(dev->dev, &bv, rq_dma_dir(req), 0);
+ if (dma_mapping_error(dev->dev, iod->meta_dma))
+@@ -836,6 +910,13 @@ static blk_status_t nvme_map_metadata(struct nvme_dev *dev, struct request *req,
+ return BLK_STS_OK;
+ }
+
++static blk_status_t nvme_map_metadata(struct nvme_dev *dev, struct request *req)
++{
++ if (nvme_pci_metadata_use_sgls(dev, req))
++ return nvme_pci_setup_meta_sgls(dev, req);
++ return nvme_pci_setup_meta_mptr(dev, req);
++}
++
+ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req)
+ {
+ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+@@ -844,6 +925,7 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req)
+ iod->aborted = false;
+ iod->nr_allocations = -1;
+ iod->sgt.nents = 0;
++ iod->meta_sgt.nents = 0;
+
+ ret = nvme_setup_cmd(req->q->queuedata, req);
+ if (ret)
+@@ -856,7 +938,7 @@ static blk_status_t nvme_prep_rq(struct nvme_dev *dev, struct request *req)
+ }
+
+ if (blk_integrity_rq(req)) {
+- ret = nvme_map_metadata(dev, req, &iod->cmd);
++ ret = nvme_map_metadata(dev, req);
+ if (ret)
+ goto out_unmap_data;
+ }
+@@ -955,17 +1037,31 @@ static void nvme_queue_rqs(struct request **rqlist)
+ *rqlist = requeue_list;
+ }
+
++static __always_inline void nvme_unmap_metadata(struct nvme_dev *dev,
++ struct request *req)
++{
++ struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
++
++ if (!iod->meta_sgt.nents) {
++ dma_unmap_page(dev->dev, iod->meta_dma,
++ rq_integrity_vec(req).bv_len,
++ rq_dma_dir(req));
++ return;
++ }
++
++ dma_pool_free(dev->prp_small_pool, iod->meta_list.sg_list,
++ iod->meta_dma);
++ dma_unmap_sgtable(dev->dev, &iod->meta_sgt, rq_dma_dir(req), 0);
++ mempool_free(iod->meta_sgt.sgl, dev->iod_meta_mempool);
++}
++
+ static __always_inline void nvme_pci_unmap_rq(struct request *req)
+ {
+ struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+ struct nvme_dev *dev = nvmeq->dev;
+
+- if (blk_integrity_rq(req)) {
+- struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
+-
+- dma_unmap_page(dev->dev, iod->meta_dma,
+- rq_integrity_vec(req).bv_len, rq_dma_dir(req));
+- }
++ if (blk_integrity_rq(req))
++ nvme_unmap_metadata(dev, req);
+
+ if (blk_rq_nr_phys_segments(req))
+ nvme_unmap_data(dev, req);
+@@ -2719,6 +2815,7 @@ static void nvme_release_prp_pools(struct nvme_dev *dev)
+
+ static int nvme_pci_alloc_iod_mempool(struct nvme_dev *dev)
+ {
++ size_t meta_size = sizeof(struct scatterlist) * (NVME_MAX_META_SEGS + 1);
+ size_t alloc_size = sizeof(struct scatterlist) * NVME_MAX_SEGS;
+
+ dev->iod_mempool = mempool_create_node(1,
+@@ -2727,7 +2824,18 @@ static int nvme_pci_alloc_iod_mempool(struct nvme_dev *dev)
+ dev_to_node(dev->dev));
+ if (!dev->iod_mempool)
+ return -ENOMEM;
++
++ dev->iod_meta_mempool = mempool_create_node(1,
++ mempool_kmalloc, mempool_kfree,
++ (void *)meta_size, GFP_KERNEL,
++ dev_to_node(dev->dev));
++ if (!dev->iod_meta_mempool)
++ goto free;
++
+ return 0;
++free:
++ mempool_destroy(dev->iod_mempool);
++ return -ENOMEM;
+ }
+
+ static void nvme_free_tagset(struct nvme_dev *dev)
+@@ -2792,6 +2900,11 @@ static void nvme_reset_work(struct work_struct *work)
+ if (result)
+ goto out;
+
++ if (nvme_ctrl_meta_sgl_supported(&dev->ctrl))
++ dev->ctrl.max_integrity_segments = NVME_MAX_META_SEGS;
++ else
++ dev->ctrl.max_integrity_segments = 1;
++
+ nvme_dbbuf_dma_alloc(dev);
+
+ result = nvme_setup_host_mem(dev);
+@@ -3061,11 +3174,6 @@ static struct nvme_dev *nvme_pci_alloc_dev(struct pci_dev *pdev,
+ dev->ctrl.max_hw_sectors = min_t(u32,
+ NVME_MAX_KB_SZ << 1, dma_opt_mapping_size(&pdev->dev) >> 9);
+ dev->ctrl.max_segments = NVME_MAX_SEGS;
+-
+- /*
+- * There is no support for SGLs for metadata (yet), so we are limited to
+- * a single integrity segment for the separate metadata pointer.
+- */
+ dev->ctrl.max_integrity_segments = 1;
+ return dev;
+
+@@ -3128,6 +3236,11 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ if (result)
+ goto out_disable;
+
++ if (nvme_ctrl_meta_sgl_supported(&dev->ctrl))
++ dev->ctrl.max_integrity_segments = NVME_MAX_META_SEGS;
++ else
++ dev->ctrl.max_integrity_segments = 1;
++
+ nvme_dbbuf_dma_alloc(dev);
+
+ result = nvme_setup_host_mem(dev);
+@@ -3170,6 +3283,7 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ nvme_free_queues(dev, 0);
+ out_release_iod_mempool:
+ mempool_destroy(dev->iod_mempool);
++ mempool_destroy(dev->iod_meta_mempool);
+ out_release_prp_pools:
+ nvme_release_prp_pools(dev);
+ out_dev_unmap:
+@@ -3235,6 +3349,7 @@ static void nvme_remove(struct pci_dev *pdev)
+ nvme_dbbuf_dma_free(dev);
+ nvme_free_queues(dev, 0);
+ mempool_destroy(dev->iod_mempool);
++ mempool_destroy(dev->iod_meta_mempool);
+ nvme_release_prp_pools(dev);
+ nvme_dev_unmap(dev);
+ nvme_uninit_ctrl(&dev->ctrl);
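+
For reference, the layout nvme_pci_setup_meta_sgls() builds inside a single small-pool allocation depends on the segment count; only the multi-segment case needs a leading segment descriptor. A sketch of the two shapes (comment only, matching the code above):

	/*
	 * entries == 1:
	 *	cmnd->metadata --> [ data desc ]   describes the one segment
	 *
	 * entries > 1:
	 *	cmnd->metadata --> [ seg desc ][ data 0 ][ data 1 ] ...
	 *	                     |
	 *	                     +-- covers 'entries' data descriptors
	 *	                         starting at sgl_dma + sizeof(desc),
	 *	                         i.e. immediately after itself in
	 *	                         the same pool allocation
	 */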
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 840ae475074d09..eeb05b7bc0fd01 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -217,6 +217,19 @@ static inline int nvme_tcp_queue_id(struct nvme_tcp_queue *queue)
+ return queue - queue->ctrl->queues;
+ }
+
++static inline bool nvme_tcp_recv_pdu_supported(enum nvme_tcp_pdu_type type)
++{
++ switch (type) {
++ case nvme_tcp_c2h_term:
++ case nvme_tcp_c2h_data:
++ case nvme_tcp_r2t:
++ case nvme_tcp_rsp:
++ return true;
++ default:
++ return false;
++ }
++}
++
+ /*
+ * Check if the queue is TLS encrypted
+ */
+@@ -763,6 +776,40 @@ static int nvme_tcp_handle_r2t(struct nvme_tcp_queue *queue,
+ return 0;
+ }
+
++static void nvme_tcp_handle_c2h_term(struct nvme_tcp_queue *queue,
++ struct nvme_tcp_term_pdu *pdu)
++{
++ u16 fes;
++ const char *msg;
++ u32 plen = le32_to_cpu(pdu->hdr.plen);
++
++ static const char * const msg_table[] = {
++ [NVME_TCP_FES_INVALID_PDU_HDR] = "Invalid PDU Header Field",
++ [NVME_TCP_FES_PDU_SEQ_ERR] = "PDU Sequence Error",
++ [NVME_TCP_FES_HDR_DIGEST_ERR] = "Header Digest Error",
++ [NVME_TCP_FES_DATA_OUT_OF_RANGE] = "Data Transfer Out Of Range",
++ [NVME_TCP_FES_DATA_LIMIT_EXCEEDED] = "Data Transfer Limit Exceeded",
++ [NVME_TCP_FES_UNSUPPORTED_PARAM] = "Unsupported Parameter",
++ };
++
++ if (plen < NVME_TCP_MIN_C2HTERM_PLEN ||
++ plen > NVME_TCP_MAX_C2HTERM_PLEN) {
++ dev_err(queue->ctrl->ctrl.device,
++ "Received a malformed C2HTermReq PDU (plen = %u)\n",
++ plen);
++ return;
++ }
++
++ fes = le16_to_cpu(pdu->fes);
++ if (fes && fes < ARRAY_SIZE(msg_table))
++ msg = msg_table[fes];
++ else
++ msg = "Unknown";
++
++ dev_err(queue->ctrl->ctrl.device,
++ "Received C2HTermReq (FES = %s)\n", msg);
++}
++
+ static int nvme_tcp_recv_pdu(struct nvme_tcp_queue *queue, struct sk_buff *skb,
+ unsigned int *offset, size_t *len)
+ {
+@@ -784,6 +831,25 @@ static int nvme_tcp_recv_pdu(struct nvme_tcp_queue *queue, struct sk_buff *skb,
+ return 0;
+
+ hdr = queue->pdu;
++ if (unlikely(hdr->hlen != sizeof(struct nvme_tcp_rsp_pdu))) {
++ if (!nvme_tcp_recv_pdu_supported(hdr->type))
++ goto unsupported_pdu;
++
++ dev_err(queue->ctrl->ctrl.device,
++ "pdu type %d has unexpected header length (%d)\n",
++ hdr->type, hdr->hlen);
++ return -EPROTO;
++ }
++
++ if (unlikely(hdr->type == nvme_tcp_c2h_term)) {
++ /*
++ * C2HTermReq never includes Header or Data digests.
++ * Skip the checks.
++ */
++ nvme_tcp_handle_c2h_term(queue, (void *)queue->pdu);
++ return -EINVAL;
++ }
++
+ if (queue->hdr_digest) {
+ ret = nvme_tcp_verify_hdgst(queue, queue->pdu, hdr->hlen);
+ if (unlikely(ret))
+@@ -807,10 +873,13 @@ static int nvme_tcp_recv_pdu(struct nvme_tcp_queue *queue, struct sk_buff *skb,
+ nvme_tcp_init_recv_ctx(queue);
+ return nvme_tcp_handle_r2t(queue, (void *)queue->pdu);
+ default:
+- dev_err(queue->ctrl->ctrl.device,
+- "unsupported pdu type (%d)\n", hdr->type);
+- return -EINVAL;
++ goto unsupported_pdu;
+ }
++
++unsupported_pdu:
++ dev_err(queue->ctrl->ctrl.device,
++ "unsupported pdu type (%d)\n", hdr->type);
++ return -EINVAL;
+ }
+
+ static inline void nvme_tcp_end_request(struct request *rq, u16 status)
+@@ -1452,11 +1521,11 @@ static int nvme_tcp_init_connection(struct nvme_tcp_queue *queue)
+ msg.msg_flags = MSG_WAITALL;
+ ret = kernel_recvmsg(queue->sock, &msg, &iov, 1,
+ iov.iov_len, msg.msg_flags);
+- if (ret < sizeof(*icresp)) {
++ if (ret >= 0 && ret < sizeof(*icresp))
++ ret = -ECONNRESET;
++ if (ret < 0) {
+ pr_warn("queue %d: failed to receive icresp, error %d\n",
+ nvme_tcp_queue_id(queue), ret);
+- if (ret >= 0)
+- ret = -ECONNRESET;
+ goto free_icresp;
+ }
+ ret = -ENOTCONN;
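+
The icresp change is a small error-path refactor: a short read is normalized to -ECONNRESET before the check, so the warning always logs a real errno instead of a positive byte count and both failure modes share one branch. The pattern in isolation (hypothetical names):

	ret = kernel_recvmsg(sock, &msg, &iov, 1, iov.iov_len, MSG_WAITALL);
	if (ret >= 0 && ret < expected)
		ret = -ECONNRESET;	/* short read: peer gave up mid-handshake */
	if (ret < 0) {
		pr_warn("handshake receive failed, error %d\n", ret);
		goto fail;
	}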
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 7c51c2a8c109a9..4f9cac8a5abe07 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -571,10 +571,16 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
+ struct nvmet_tcp_cmd *cmd =
+ container_of(req, struct nvmet_tcp_cmd, req);
+ struct nvmet_tcp_queue *queue = cmd->queue;
++ enum nvmet_tcp_recv_state queue_state;
++ struct nvmet_tcp_cmd *queue_cmd;
+ struct nvme_sgl_desc *sgl;
+ u32 len;
+
+- if (unlikely(cmd == queue->cmd)) {
++ /* Pairs with store_release in nvmet_prepare_receive_pdu() */
++ queue_state = smp_load_acquire(&queue->rcv_state);
++ queue_cmd = READ_ONCE(queue->cmd);
++
++ if (unlikely(cmd == queue_cmd)) {
+ sgl = &cmd->req.cmd->common.dptr.sgl;
+ len = le32_to_cpu(sgl->length);
+
+@@ -583,7 +589,7 @@ static void nvmet_tcp_queue_response(struct nvmet_req *req)
+ * Avoid using helpers, this might happen before
+ * nvmet_req_init is completed.
+ */
+- if (queue->rcv_state == NVMET_TCP_RECV_PDU &&
++ if (queue_state == NVMET_TCP_RECV_PDU &&
+ len && len <= cmd->req.port->inline_data_size &&
+ nvme_is_write(cmd->req.cmd))
+ return;
+@@ -847,8 +853,9 @@ static void nvmet_prepare_receive_pdu(struct nvmet_tcp_queue *queue)
+ {
+ queue->offset = 0;
+ queue->left = sizeof(struct nvme_tcp_hdr);
+- queue->cmd = NULL;
+- queue->rcv_state = NVMET_TCP_RECV_PDU;
++ WRITE_ONCE(queue->cmd, NULL);
++ /* Ensure rcv_state is visible only after queue->cmd is set */
++ smp_store_release(&queue->rcv_state, NVMET_TCP_RECV_PDU);
+ }
+
+ static void nvmet_tcp_free_crypto(struct nvmet_tcp_queue *queue)
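+
The store_release/load_acquire pair is the core of this fix: the response path must never observe the new rcv_state together with a stale queue->cmd. Reduced to its essentials (hypothetical queue q):

	/* writer side: publish cmd strictly before the state change */
	WRITE_ONCE(q->cmd, NULL);
	smp_store_release(&q->rcv_state, NVMET_TCP_RECV_PDU);

	/* reader side: the acquire orders the subsequent cmd load */
	state = smp_load_acquire(&q->rcv_state);
	cmd = READ_ONCE(q->cmd);

If the reader sees the released state value, the acquire/release pairing guarantees it also sees the cmd store that preceded it.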
+diff --git a/drivers/of/of_reserved_mem.c b/drivers/of/of_reserved_mem.c
+index e45d6d3a8dc678..45445a1600a968 100644
+--- a/drivers/of/of_reserved_mem.c
++++ b/drivers/of/of_reserved_mem.c
+@@ -360,12 +360,12 @@ static int __init __reserved_mem_alloc_size(unsigned long node, const char *unam
+
+ prop = of_get_flat_dt_prop(node, "alignment", &len);
+ if (prop) {
+- if (len != dt_root_size_cells * sizeof(__be32)) {
++ if (len != dt_root_addr_cells * sizeof(__be32)) {
+ pr_err("invalid alignment property in '%s' node.\n",
+ uname);
+ return -EINVAL;
+ }
+- align = dt_mem_next_cell(dt_root_size_cells, &prop);
++ align = dt_mem_next_cell(dt_root_addr_cells, &prop);
+ }
+
+ nomap = of_get_flat_dt_prop(node, "no-map", NULL) != NULL;
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 2cfb2ac3f465aa..84dcd7da7319e3 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -9958,6 +9958,7 @@ static const struct tpacpi_quirk battery_quirk_table[] __initconst = {
+ * Individual addressing is broken on models that expose the
+ * primary battery as BAT1.
+ */
++ TPACPI_Q_LNV('G', '8', true), /* ThinkPad X131e */
+ TPACPI_Q_LNV('8', 'F', true), /* Thinkpad X120e */
+ TPACPI_Q_LNV('J', '7', true), /* B5400 */
+ TPACPI_Q_LNV('J', 'I', true), /* Thinkpad 11e */
+diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
+index 27afbb9d544b7c..cbf531d0ba6885 100644
+--- a/drivers/rapidio/devices/rio_mport_cdev.c
++++ b/drivers/rapidio/devices/rio_mport_cdev.c
+@@ -1742,7 +1742,8 @@ static int rio_mport_add_riodev(struct mport_cdev_priv *priv,
+ err = rio_add_net(net);
+ if (err) {
+ rmcd_debug(RDEV, "failed to register net, err=%d", err);
+- kfree(net);
++ put_device(&net->dev);
++ mport->net = NULL;
+ goto cleanup;
+ }
+ }
+diff --git a/drivers/rapidio/rio-scan.c b/drivers/rapidio/rio-scan.c
+index fdcf742b2adbcb..c12941f71e2cba 100644
+--- a/drivers/rapidio/rio-scan.c
++++ b/drivers/rapidio/rio-scan.c
+@@ -871,7 +871,10 @@ static struct rio_net *rio_scan_alloc_net(struct rio_mport *mport,
+ dev_set_name(&net->dev, "rnet_%d", net->id);
+ net->dev.parent = &mport->dev;
+ net->dev.release = rio_scan_release_dev;
+- rio_add_net(net);
++ if (rio_add_net(net)) {
++ put_device(&net->dev);
++ net = NULL;
++ }
+ }
+
+ return net;
+diff --git a/drivers/slimbus/messaging.c b/drivers/slimbus/messaging.c
+index 242570a5e5654b..455c1fd1490fd3 100644
+--- a/drivers/slimbus/messaging.c
++++ b/drivers/slimbus/messaging.c
+@@ -148,8 +148,9 @@ int slim_do_transfer(struct slim_controller *ctrl, struct slim_msg_txn *txn)
+ }
+
+ ret = ctrl->xfer_msg(ctrl, txn);
+-
+- if (!ret && need_tid && !txn->msg->comp) {
++ if (ret == -ETIMEDOUT) {
++ slim_free_txn_tid(ctrl, txn);
++ } else if (!ret && need_tid && !txn->msg->comp) {
+ unsigned long ms = txn->rl + HZ;
+
+ time_left = wait_for_completion_timeout(txn->comp,
+diff --git a/drivers/usb/atm/cxacru.c b/drivers/usb/atm/cxacru.c
+index 0dd85d2635b998..47d06af33747d0 100644
+--- a/drivers/usb/atm/cxacru.c
++++ b/drivers/usb/atm/cxacru.c
+@@ -1131,7 +1131,10 @@ static int cxacru_bind(struct usbatm_data *usbatm_instance,
+ struct cxacru_data *instance;
+ struct usb_device *usb_dev = interface_to_usbdev(intf);
+ struct usb_host_endpoint *cmd_ep = usb_dev->ep_in[CXACRU_EP_CMD];
+- struct usb_endpoint_descriptor *in, *out;
++ static const u8 ep_addrs[] = {
++ CXACRU_EP_CMD + USB_DIR_IN,
++ CXACRU_EP_CMD + USB_DIR_OUT,
++ 0};
+ int ret;
+
+ /* instance init */
+@@ -1179,13 +1182,11 @@ static int cxacru_bind(struct usbatm_data *usbatm_instance,
+ }
+
+ if (usb_endpoint_xfer_int(&cmd_ep->desc))
+- ret = usb_find_common_endpoints(intf->cur_altsetting,
+- NULL, NULL, &in, &out);
++ ret = usb_check_int_endpoints(intf, ep_addrs);
+ else
+- ret = usb_find_common_endpoints(intf->cur_altsetting,
+- &in, &out, NULL, NULL);
++ ret = usb_check_bulk_endpoints(intf, ep_addrs);
+
+- if (ret) {
++ if (!ret) {
+ usb_err(usbatm_instance, "cxacru_bind: interface has incorrect endpoints\n");
+ ret = -ENODEV;
+ goto fail;
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 906daf423cb02b..145787c424e0c8 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -6065,6 +6065,36 @@ void usb_hub_cleanup(void)
+ usb_deregister(&hub_driver);
+ } /* usb_hub_cleanup() */
+
++/**
++ * hub_hc_release_resources - clear resources used by host controller
++ * @udev: pointer to device being released
++ *
++ * Context: task context, might sleep
++ *
++ * This function releases the host controller resources in the correct order
++ * before any operation is performed on a resuming USB device. The host
++ * controller resources allocated for devices in the tree should be released
++ * starting from the last USB device in the tree and working toward the root
++ * hub. It is used only when resuming a device that requires reinitialization,
++ * that is, when udev->reset_resume is set.
++ *
++ * This call is synchronous, and may not be used in an interrupt context.
++ */
++static void hub_hc_release_resources(struct usb_device *udev)
++{
++ struct usb_hub *hub = usb_hub_to_struct_hub(udev);
++ struct usb_hcd *hcd = bus_to_hcd(udev->bus);
++ int i;
++
++	/* Release resources for all children before this device */
++ for (i = 0; i < udev->maxchild; i++)
++ if (hub->ports[i]->child)
++ hub_hc_release_resources(hub->ports[i]->child);
++
++ if (hcd->driver->reset_device)
++ hcd->driver->reset_device(hcd, udev);
++}
++
+ /**
+ * usb_reset_and_verify_device - perform a USB port reset to reinitialize a device
+ * @udev: device to reset (not in SUSPENDED or NOTATTACHED state)
+@@ -6129,6 +6159,9 @@ static int usb_reset_and_verify_device(struct usb_device *udev)
+ bos = udev->bos;
+ udev->bos = NULL;
+
++ if (udev->reset_resume)
++ hub_hc_release_resources(udev);
++
+ mutex_lock(hcd->address0_mutex);
+
+ for (i = 0; i < PORT_INIT_TRIES; ++i) {
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index 027479179f09e9..6926bd639ec6ff 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -341,6 +341,10 @@ static const struct usb_device_id usb_quirk_list[] = {
+ { USB_DEVICE(0x0638, 0x0a13), .driver_info =
+ USB_QUIRK_STRING_FETCH_255 },
+
++ /* Prolific Single-LUN Mass Storage Card Reader */
++ { USB_DEVICE(0x067b, 0x2731), .driver_info = USB_QUIRK_DELAY_INIT |
++ USB_QUIRK_NO_LPM },
++
+ /* Saitek Cyborg Gold Joystick */
+ { USB_DEVICE(0x06a3, 0x0006), .driver_info =
+ USB_QUIRK_CONFIG_INTF_STRINGS },
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 244e3e04e1ad74..7820d6815bedd5 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -131,11 +131,24 @@ void dwc3_enable_susphy(struct dwc3 *dwc, bool enable)
+ }
+ }
+
+-void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode)
++void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode, bool ignore_susphy)
+ {
++ unsigned int hw_mode;
+ u32 reg;
+
+ reg = dwc3_readl(dwc->regs, DWC3_GCTL);
++
++ /*
++ * For DRD controllers, GUSB3PIPECTL.SUSPENDENABLE and
++ * GUSB2PHYCFG.SUSPHY should be cleared during mode switching,
++ * and they can be set after core initialization.
++ */
++ hw_mode = DWC3_GHWPARAMS0_MODE(dwc->hwparams.hwparams0);
++ if (hw_mode == DWC3_GHWPARAMS0_MODE_DRD && !ignore_susphy) {
++ if (DWC3_GCTL_PRTCAP(reg) != mode)
++ dwc3_enable_susphy(dwc, false);
++ }
++
+ reg &= ~(DWC3_GCTL_PRTCAPDIR(DWC3_GCTL_PRTCAP_OTG));
+ reg |= DWC3_GCTL_PRTCAPDIR(mode);
+ dwc3_writel(dwc->regs, DWC3_GCTL, reg);
+@@ -216,7 +229,7 @@ static void __dwc3_set_mode(struct work_struct *work)
+
+ spin_lock_irqsave(&dwc->lock, flags);
+
+- dwc3_set_prtcap(dwc, desired_dr_role);
++ dwc3_set_prtcap(dwc, desired_dr_role, false);
+
+ spin_unlock_irqrestore(&dwc->lock, flags);
+
+@@ -658,16 +671,7 @@ static int dwc3_ss_phy_setup(struct dwc3 *dwc, int index)
+ */
+ reg &= ~DWC3_GUSB3PIPECTL_UX_EXIT_PX;
+
+- /*
+- * Above DWC_usb3.0 1.94a, it is recommended to set
+- * DWC3_GUSB3PIPECTL_SUSPHY to '0' during coreConsultant configuration.
+- * So default value will be '0' when the core is reset. Application
+- * needs to set it to '1' after the core initialization is completed.
+- *
+- * Similarly for DRD controllers, GUSB3PIPECTL.SUSPENDENABLE must be
+- * cleared after power-on reset, and it can be set after core
+- * initialization.
+- */
++ /* Ensure the GUSB3PIPECTL.SUSPENDENABLE is cleared prior to phy init. */
+ reg &= ~DWC3_GUSB3PIPECTL_SUSPHY;
+
+ if (dwc->u2ss_inp3_quirk)
+@@ -747,15 +751,7 @@ static int dwc3_hs_phy_setup(struct dwc3 *dwc, int index)
+ break;
+ }
+
+- /*
+- * Above DWC_usb3.0 1.94a, it is recommended to set
+- * DWC3_GUSB2PHYCFG_SUSPHY to '0' during coreConsultant configuration.
+- * So default value will be '0' when the core is reset. Application
+- * needs to set it to '1' after the core initialization is completed.
+- *
+- * Similarly for DRD controllers, GUSB2PHYCFG.SUSPHY must be cleared
+- * after power-on reset, and it can be set after core initialization.
+- */
++ /* Ensure the GUSB2PHYCFG.SUSPHY is cleared prior to phy init. */
+ reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
+
+ if (dwc->dis_enblslpm_quirk)
+@@ -830,6 +826,25 @@ static int dwc3_phy_init(struct dwc3 *dwc)
+ goto err_exit_usb3_phy;
+ }
+
++ /*
++ * Above DWC_usb3.0 1.94a, it is recommended to set
++ * DWC3_GUSB3PIPECTL_SUSPHY and DWC3_GUSB2PHYCFG_SUSPHY to '0' during
++ * coreConsultant configuration. So default value will be '0' when the
++ * core is reset. Application needs to set it to '1' after the core
++ * initialization is completed.
++ *
++	 * Certain PHYs must be in the P0 power state during initialization.
++	 * Make sure GUSB3PIPECTL.SUSPENDENABLE and GUSB2PHYCFG.SUSPHY are clear
++	 * prior to phy init so the phy remains in the P0 state.
++ *
++ * After phy initialization, some phy operations can only be executed
++ * while in lower P states. Ensure GUSB3PIPECTL.SUSPENDENABLE and
++ * GUSB2PHYCFG.SUSPHY are set soon after initialization to avoid
++ * blocking phy ops.
++ */
++ if (!DWC3_VER_IS_WITHIN(DWC3, ANY, 194A))
++ dwc3_enable_susphy(dwc, true);
++
+ return 0;
+
+ err_exit_usb3_phy:
+@@ -1564,7 +1579,7 @@ static int dwc3_core_init_mode(struct dwc3 *dwc)
+
+ switch (dwc->dr_mode) {
+ case USB_DR_MODE_PERIPHERAL:
+- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE);
++ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE, false);
+
+ if (dwc->usb2_phy)
+ otg_set_vbus(dwc->usb2_phy->otg, false);
+@@ -1576,7 +1591,7 @@ static int dwc3_core_init_mode(struct dwc3 *dwc)
+ return dev_err_probe(dev, ret, "failed to initialize gadget\n");
+ break;
+ case USB_DR_MODE_HOST:
+- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST);
++ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST, false);
+
+ if (dwc->usb2_phy)
+ otg_set_vbus(dwc->usb2_phy->otg, true);
+@@ -1621,7 +1636,7 @@ static void dwc3_core_exit_mode(struct dwc3 *dwc)
+ }
+
+ /* de-assert DRVVBUS for HOST and OTG mode */
+- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE);
++ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE, true);
+ }
+
+ static void dwc3_get_software_properties(struct dwc3 *dwc)
+@@ -1811,8 +1826,6 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+ dwc->tx_thr_num_pkt_prd = tx_thr_num_pkt_prd;
+ dwc->tx_max_burst_prd = tx_max_burst_prd;
+
+- dwc->imod_interval = 0;
+-
+ dwc->tx_fifo_resize_max_num = tx_fifo_resize_max_num;
+ }
+
+@@ -1830,21 +1843,19 @@ static void dwc3_check_params(struct dwc3 *dwc)
+ unsigned int hwparam_gen =
+ DWC3_GHWPARAMS3_SSPHY_IFC(dwc->hwparams.hwparams3);
+
+- /* Check for proper value of imod_interval */
+- if (dwc->imod_interval && !dwc3_has_imod(dwc)) {
+- dev_warn(dwc->dev, "Interrupt moderation not supported\n");
+- dwc->imod_interval = 0;
+- }
+-
+ /*
++	 * Enable IMOD for all controllers that support it.
++ *
++ * Particularly, DWC_usb3 v3.00a must enable this feature for
++ * the following reason:
++ *
+ * Workaround for STAR 9000961433 which affects only version
+ * 3.00a of the DWC_usb3 core. This prevents the controller
+ * interrupt from being masked while handling events. IMOD
+ * allows us to work around this issue. Enable it for the
+ * affected version.
+ */
+- if (!dwc->imod_interval &&
+- DWC3_VER_IS(DWC3, 300A))
++	if (dwc3_has_imod(dwc))
+ dwc->imod_interval = 1;
+
+ /* Check the maximum_speed parameter */
+@@ -2433,7 +2444,7 @@ static int dwc3_resume_common(struct dwc3 *dwc, pm_message_t msg)
+ if (ret)
+ return ret;
+
+- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE);
++ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_DEVICE, true);
+ dwc3_gadget_resume(dwc);
+ break;
+ case DWC3_GCTL_PRTCAP_HOST:
+@@ -2441,7 +2452,7 @@ static int dwc3_resume_common(struct dwc3 *dwc, pm_message_t msg)
+ ret = dwc3_core_init_for_resume(dwc);
+ if (ret)
+ return ret;
+- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST);
++ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_HOST, true);
+ break;
+ }
+ /* Restore GUSB2PHYCFG bits that were modified in suspend */
+@@ -2470,7 +2481,7 @@ static int dwc3_resume_common(struct dwc3 *dwc, pm_message_t msg)
+ if (ret)
+ return ret;
+
+- dwc3_set_prtcap(dwc, dwc->current_dr_role);
++ dwc3_set_prtcap(dwc, dwc->current_dr_role, true);
+
+ dwc3_otg_init(dwc);
+ if (dwc->current_otg_role == DWC3_OTG_ROLE_HOST) {
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 0e91a227507fff..f288d88cd10519 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -1562,7 +1562,7 @@ struct dwc3_gadget_ep_cmd_params {
+ #define DWC3_HAS_OTG BIT(3)
+
+ /* prototypes */
+-void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode);
++void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode, bool ignore_susphy);
+ void dwc3_set_mode(struct dwc3 *dwc, u32 mode);
+ u32 dwc3_core_fifo_space(struct dwc3_ep *dep, u8 type);
+
+diff --git a/drivers/usb/dwc3/drd.c b/drivers/usb/dwc3/drd.c
+index d76ae676783cf3..7977860932b142 100644
+--- a/drivers/usb/dwc3/drd.c
++++ b/drivers/usb/dwc3/drd.c
+@@ -173,7 +173,7 @@ void dwc3_otg_init(struct dwc3 *dwc)
+ * block "Initialize GCTL for OTG operation".
+ */
+ /* GCTL.PrtCapDir=2'b11 */
+- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG);
++ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG, true);
+ /* GUSB2PHYCFG0.SusPHY=0 */
+ reg = dwc3_readl(dwc->regs, DWC3_GUSB2PHYCFG(0));
+ reg &= ~DWC3_GUSB2PHYCFG_SUSPHY;
+@@ -556,7 +556,7 @@ int dwc3_drd_init(struct dwc3 *dwc)
+
+ dwc3_drd_update(dwc);
+ } else {
+- dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG);
++ dwc3_set_prtcap(dwc, DWC3_GCTL_PRTCAP_OTG, true);
+
+ /* use OTG block to get ID event */
+ irq = dwc3_otg_get_irq(dwc);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 8c80bb4a467bff..309a871453bfad 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -4507,14 +4507,18 @@ static irqreturn_t dwc3_process_event_buf(struct dwc3_event_buffer *evt)
+ dwc3_writel(dwc->regs, DWC3_GEVNTSIZ(0),
+ DWC3_GEVNTSIZ_SIZE(evt->length));
+
++ evt->flags &= ~DWC3_EVENT_PENDING;
++ /*
++	 * Add an explicit write memory barrier so that the clearing of
++	 * DWC3_EVENT_PENDING above is observed by dwc3_check_event_buf().
++ */
++ wmb();
++
+ if (dwc->imod_interval) {
+ dwc3_writel(dwc->regs, DWC3_GEVNTCOUNT(0), DWC3_GEVNTCOUNT_EHB);
+ dwc3_writel(dwc->regs, DWC3_DEV_IMOD(0), dwc->imod_interval);
+ }
+
+- /* Keep the clearing of DWC3_EVENT_PENDING at the end */
+- evt->flags &= ~DWC3_EVENT_PENDING;
+-
+ return ret;
+ }
+
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index cec86c0c6369ca..8402a86176f48c 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -1050,10 +1050,11 @@ static int set_config(struct usb_composite_dev *cdev,
+ else
+ usb_gadget_set_remote_wakeup(gadget, 0);
+ done:
+- if (power <= USB_SELF_POWER_VBUS_MAX_DRAW)
+- usb_gadget_set_selfpowered(gadget);
+- else
++ if (power > USB_SELF_POWER_VBUS_MAX_DRAW ||
++ (c && !(c->bmAttributes & USB_CONFIG_ATT_SELFPOWER)))
+ usb_gadget_clear_selfpowered(gadget);
++ else
++ usb_gadget_set_selfpowered(gadget);
+
+ usb_gadget_vbus_draw(gadget, power);
+ if (result >= 0 && cdev->delayed_status)
+@@ -2615,7 +2616,10 @@ void composite_suspend(struct usb_gadget *gadget)
+
+ cdev->suspended = 1;
+
+- usb_gadget_set_selfpowered(gadget);
++ if (cdev->config &&
++ cdev->config->bmAttributes & USB_CONFIG_ATT_SELFPOWER)
++ usb_gadget_set_selfpowered(gadget);
++
+ usb_gadget_vbus_draw(gadget, 2);
+ }
+
+@@ -2649,8 +2653,11 @@ void composite_resume(struct usb_gadget *gadget)
+ else
+ maxpower = min(maxpower, 900U);
+
+- if (maxpower > USB_SELF_POWER_VBUS_MAX_DRAW)
++ if (maxpower > USB_SELF_POWER_VBUS_MAX_DRAW ||
++ !(cdev->config->bmAttributes & USB_CONFIG_ATT_SELFPOWER))
+ usb_gadget_clear_selfpowered(gadget);
++ else
++ usb_gadget_set_selfpowered(gadget);
+
+ usb_gadget_vbus_draw(gadget, maxpower);
+ } else {
+diff --git a/drivers/usb/gadget/function/u_ether.c b/drivers/usb/gadget/function/u_ether.c
+index 09e2838917e294..f58590bf5e02f5 100644
+--- a/drivers/usb/gadget/function/u_ether.c
++++ b/drivers/usb/gadget/function/u_ether.c
+@@ -1052,8 +1052,8 @@ void gether_suspend(struct gether *link)
+ * There is a transfer in progress. So we trigger a remote
+ * wakeup to inform the host.
+ */
+- ether_wakeup_host(dev->port_usb);
+- return;
++ if (!ether_wakeup_host(dev->port_usb))
++ return;
+ }
+ spin_lock_irqsave(&dev->lock, flags);
+ link->is_suspend = true;
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index 8d774f19271e6a..2fe3a92978fa29 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -12,6 +12,7 @@
+ #include <linux/slab.h>
+ #include <linux/unaligned.h>
+ #include <linux/bitfield.h>
++#include <linux/pci.h>
+
+ #include "xhci.h"
+ #include "xhci-trace.h"
+@@ -770,9 +771,16 @@ static int xhci_exit_test_mode(struct xhci_hcd *xhci)
+ enum usb_link_tunnel_mode xhci_port_is_tunneled(struct xhci_hcd *xhci,
+ struct xhci_port *port)
+ {
++ struct usb_hcd *hcd;
+ void __iomem *base;
+ u32 offset;
+
++ /* Don't try and probe this capability for non-Intel hosts */
++ hcd = xhci_to_hcd(xhci);
++ if (!dev_is_pci(hcd->self.controller) ||
++ to_pci_dev(hcd->self.controller)->vendor != PCI_VENDOR_ID_INTEL)
++ return USB_LINK_UNKNOWN;
++
+ base = &xhci->cap_regs->hc_capbase;
+ offset = xhci_find_next_ext_cap(base, 0, XHCI_EXT_CAPS_INTEL_SPR_SHADOW);
+
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index d2900197a49e74..32c8693b438b07 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -2443,7 +2443,8 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+ * and our use of dma addresses in the trb_address_map radix tree needs
+ * TRB_SEGMENT_SIZE alignment, so we pick the greater alignment need.
+ */
+- if (xhci->quirks & XHCI_ZHAOXIN_TRB_FETCH)
++ if (xhci->quirks & XHCI_TRB_OVERFETCH)
++ /* Buggy HC prefetches beyond segment bounds - allocate dummy space at the end */
+ xhci->segment_pool = dma_pool_create("xHCI ring segments", dev,
+ TRB_SEGMENT_SIZE * 2, TRB_SEGMENT_SIZE * 2, xhci->page_size * 2);
+ else
+diff --git a/drivers/usb/host/xhci-pci.c b/drivers/usb/host/xhci-pci.c
+index deb3c98c9beaf6..1b033c8ce188ef 100644
+--- a/drivers/usb/host/xhci-pci.c
++++ b/drivers/usb/host/xhci-pci.c
+@@ -28,8 +28,8 @@
+ #define SPARSE_CNTL_ENABLE 0xC12C
+
+ /* Device for a quirk */
+-#define PCI_VENDOR_ID_FRESCO_LOGIC 0x1b73
+-#define PCI_DEVICE_ID_FRESCO_LOGIC_PDK 0x1000
++#define PCI_VENDOR_ID_FRESCO_LOGIC 0x1b73
++#define PCI_DEVICE_ID_FRESCO_LOGIC_PDK 0x1000
+ #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1009 0x1009
+ #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1100 0x1100
+ #define PCI_DEVICE_ID_FRESCO_LOGIC_FL1400 0x1400
+@@ -38,8 +38,10 @@
+ #define PCI_DEVICE_ID_EJ168 0x7023
+ #define PCI_DEVICE_ID_EJ188 0x7052
+
+-#define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI 0x8c31
+-#define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31
++#define PCI_DEVICE_ID_VIA_VL805 0x3483
++
++#define PCI_DEVICE_ID_INTEL_LYNXPOINT_XHCI 0x8c31
++#define PCI_DEVICE_ID_INTEL_LYNXPOINT_LP_XHCI 0x9c31
+ #define PCI_DEVICE_ID_INTEL_WILDCATPOINT_LP_XHCI 0x9cb1
+ #define PCI_DEVICE_ID_INTEL_CHERRYVIEW_XHCI 0x22b5
+ #define PCI_DEVICE_ID_INTEL_SUNRISEPOINT_H_XHCI 0xa12f
+@@ -424,8 +426,10 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+ pdev->device == 0x3432)
+ xhci->quirks |= XHCI_BROKEN_STREAMS;
+
+- if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == 0x3483)
++ if (pdev->vendor == PCI_VENDOR_ID_VIA && pdev->device == PCI_DEVICE_ID_VIA_VL805) {
+ xhci->quirks |= XHCI_LPM_SUPPORT;
++ xhci->quirks |= XHCI_TRB_OVERFETCH;
++ }
+
+ if (pdev->vendor == PCI_VENDOR_ID_ASMEDIA &&
+ pdev->device == PCI_DEVICE_ID_ASMEDIA_1042_XHCI) {
+@@ -473,11 +477,11 @@ static void xhci_pci_quirks(struct device *dev, struct xhci_hcd *xhci)
+
+ if (pdev->device == 0x9202) {
+ xhci->quirks |= XHCI_RESET_ON_RESUME;
+- xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH;
++ xhci->quirks |= XHCI_TRB_OVERFETCH;
+ }
+
+ if (pdev->device == 0x9203)
+- xhci->quirks |= XHCI_ZHAOXIN_TRB_FETCH;
++ xhci->quirks |= XHCI_TRB_OVERFETCH;
+ }
+
+ if (pdev->vendor == PCI_DEVICE_ID_CADENCE &&
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 673179047eb82e..439767d242fa9c 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1621,7 +1621,7 @@ struct xhci_hcd {
+ #define XHCI_EP_CTX_BROKEN_DCS BIT_ULL(42)
+ #define XHCI_SUSPEND_RESUME_CLKS BIT_ULL(43)
+ #define XHCI_RESET_TO_DEFAULT BIT_ULL(44)
+-#define XHCI_ZHAOXIN_TRB_FETCH BIT_ULL(45)
++#define XHCI_TRB_OVERFETCH BIT_ULL(45)
+ #define XHCI_ZHAOXIN_HOST BIT_ULL(46)
+ #define XHCI_WRITE_64_HI_LO BIT_ULL(47)
+ #define XHCI_CDNS_SCTX_QUIRK BIT_ULL(48)
+diff --git a/drivers/usb/renesas_usbhs/common.c b/drivers/usb/renesas_usbhs/common.c
+index edc43f169d493c..7324de52d9505e 100644
+--- a/drivers/usb/renesas_usbhs/common.c
++++ b/drivers/usb/renesas_usbhs/common.c
+@@ -312,8 +312,10 @@ static int usbhsc_clk_get(struct device *dev, struct usbhs_priv *priv)
+ priv->clks[1] = of_clk_get(dev_of_node(dev), 1);
+ if (PTR_ERR(priv->clks[1]) == -ENOENT)
+ priv->clks[1] = NULL;
+- else if (IS_ERR(priv->clks[1]))
++ else if (IS_ERR(priv->clks[1])) {
++ clk_put(priv->clks[0]);
+ return PTR_ERR(priv->clks[1]);
++ }
+
+ return 0;
+ }
+@@ -779,6 +781,8 @@ static void usbhs_remove(struct platform_device *pdev)
+
+ dev_dbg(&pdev->dev, "usb remove\n");
+
++ flush_delayed_work(&priv->notify_hotplug_work);
++
+ /* power off */
+ if (!usbhs_get_dparam(priv, runtime_pwctrl))
+ usbhsc_power_ctrl(priv, 0);
+diff --git a/drivers/usb/renesas_usbhs/mod_gadget.c b/drivers/usb/renesas_usbhs/mod_gadget.c
+index 105132ae87acbc..e8e5723f541226 100644
+--- a/drivers/usb/renesas_usbhs/mod_gadget.c
++++ b/drivers/usb/renesas_usbhs/mod_gadget.c
+@@ -1094,7 +1094,7 @@ int usbhs_mod_gadget_probe(struct usbhs_priv *priv)
+ goto usbhs_mod_gadget_probe_err_gpriv;
+ }
+
+- gpriv->transceiver = usb_get_phy(USB_PHY_TYPE_UNDEFINED);
++ gpriv->transceiver = devm_usb_get_phy(dev, USB_PHY_TYPE_UNDEFINED);
+ dev_info(dev, "%stransceiver found\n",
+ !IS_ERR(gpriv->transceiver) ? "" : "no ");
+
+diff --git a/drivers/usb/typec/tcpm/tcpci_rt1711h.c b/drivers/usb/typec/tcpm/tcpci_rt1711h.c
+index 64f6dd0dc66096..88c50b984e8a3f 100644
+--- a/drivers/usb/typec/tcpm/tcpci_rt1711h.c
++++ b/drivers/usb/typec/tcpm/tcpci_rt1711h.c
+@@ -334,6 +334,11 @@ static int rt1711h_probe(struct i2c_client *client)
+ {
+ int ret;
+ struct rt1711h_chip *chip;
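++ /* Alert sources the IRQ handler services; all others remain masked */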
++ const u16 alert_mask = TCPC_ALERT_TX_SUCCESS | TCPC_ALERT_TX_DISCARDED |
++ TCPC_ALERT_TX_FAILED | TCPC_ALERT_RX_HARD_RST |
++ TCPC_ALERT_RX_STATUS | TCPC_ALERT_POWER_STATUS |
++ TCPC_ALERT_CC_STATUS | TCPC_ALERT_RX_BUF_OVF |
++ TCPC_ALERT_FAULT;
+
+ chip = devm_kzalloc(&client->dev, sizeof(*chip), GFP_KERNEL);
+ if (!chip)
+@@ -382,6 +387,12 @@ static int rt1711h_probe(struct i2c_client *client)
+ dev_name(chip->dev), chip);
+ if (ret < 0)
+ return ret;
++
++ /* Enable alert interrupts */
++ ret = rt1711h_write16(chip, TCPC_ALERT_MASK, alert_mask);
++ if (ret < 0)
++ return ret;
++
+ enable_irq_wake(client->irq);
+
+ return 0;
+diff --git a/drivers/usb/typec/ucsi/ucsi.c b/drivers/usb/typec/ucsi/ucsi.c
+index 7a3f0f5af38fdb..3f2bc13efa4865 100644
+--- a/drivers/usb/typec/ucsi/ucsi.c
++++ b/drivers/usb/typec/ucsi/ucsi.c
+@@ -25,7 +25,7 @@
+ * difficult to estimate the time it takes for the system to process the command
+ * before it is actually passed to the PPM.
+ */
+-#define UCSI_TIMEOUT_MS 5000
++#define UCSI_TIMEOUT_MS 10000
+
+ /*
+ * UCSI_SWAP_TIMEOUT_MS - Timeout for role swap requests
+@@ -1330,7 +1330,7 @@ static int ucsi_reset_ppm(struct ucsi *ucsi)
+
+ mutex_lock(&ucsi->ppm_lock);
+
+- ret = ucsi->ops->read_cci(ucsi, &cci);
++ ret = ucsi->ops->poll_cci(ucsi, &cci);
+ if (ret < 0)
+ goto out;
+
+@@ -1348,7 +1348,7 @@ static int ucsi_reset_ppm(struct ucsi *ucsi)
+
+ tmo = jiffies + msecs_to_jiffies(UCSI_TIMEOUT_MS);
+ do {
+- ret = ucsi->ops->read_cci(ucsi, &cci);
++ ret = ucsi->ops->poll_cci(ucsi, &cci);
+ if (ret < 0)
+ goto out;
+ if (cci & UCSI_CCI_COMMAND_COMPLETE)
+@@ -1377,7 +1377,7 @@ static int ucsi_reset_ppm(struct ucsi *ucsi)
+ /* Give the PPM time to process a reset before reading CCI */
+ msleep(20);
+
+- ret = ucsi->ops->read_cci(ucsi, &cci);
++ ret = ucsi->ops->poll_cci(ucsi, &cci);
+ if (ret)
+ goto out;
+
+@@ -1809,11 +1809,11 @@ static int ucsi_init(struct ucsi *ucsi)
+
+ err_unregister:
+ for (con = connector; con->port; con++) {
++ if (con->wq)
++ destroy_workqueue(con->wq);
+ ucsi_unregister_partner(con);
+ ucsi_unregister_altmodes(con, UCSI_RECIPIENT_CON);
+ ucsi_unregister_port_psy(con);
+- if (con->wq)
+- destroy_workqueue(con->wq);
+
+ usb_power_delivery_unregister_capabilities(con->port_sink_caps);
+ con->port_sink_caps = NULL;
+@@ -1913,8 +1913,8 @@ struct ucsi *ucsi_create(struct device *dev, const struct ucsi_operations *ops)
+ struct ucsi *ucsi;
+
+ if (!ops ||
+- !ops->read_version || !ops->read_cci || !ops->read_message_in ||
+- !ops->sync_control || !ops->async_control)
++ !ops->read_version || !ops->read_cci || !ops->poll_cci ||
++ !ops->read_message_in || !ops->sync_control || !ops->async_control)
+ return ERR_PTR(-EINVAL);
+
+ ucsi = kzalloc(sizeof(*ucsi), GFP_KERNEL);
+@@ -1997,10 +1997,6 @@ void ucsi_unregister(struct ucsi *ucsi)
+
+ for (i = 0; i < ucsi->cap.num_connectors; i++) {
+ cancel_work_sync(&ucsi->connector[i].work);
+- ucsi_unregister_partner(&ucsi->connector[i]);
+- ucsi_unregister_altmodes(&ucsi->connector[i],
+- UCSI_RECIPIENT_CON);
+- ucsi_unregister_port_psy(&ucsi->connector[i]);
+
+ if (ucsi->connector[i].wq) {
+ struct ucsi_work *uwork;
+@@ -2016,6 +2012,11 @@ void ucsi_unregister(struct ucsi *ucsi)
+ destroy_workqueue(ucsi->connector[i].wq);
+ }
+
++ ucsi_unregister_partner(&ucsi->connector[i]);
++ ucsi_unregister_altmodes(&ucsi->connector[i],
++ UCSI_RECIPIENT_CON);
++ ucsi_unregister_port_psy(&ucsi->connector[i]);
++
+ usb_power_delivery_unregister_capabilities(ucsi->connector[i].port_sink_caps);
+ ucsi->connector[i].port_sink_caps = NULL;
+ usb_power_delivery_unregister_capabilities(ucsi->connector[i].port_source_caps);
+diff --git a/drivers/usb/typec/ucsi/ucsi.h b/drivers/usb/typec/ucsi/ucsi.h
+index 1cf5aad4c23a9e..a333006d3496a1 100644
+--- a/drivers/usb/typec/ucsi/ucsi.h
++++ b/drivers/usb/typec/ucsi/ucsi.h
+@@ -60,6 +60,7 @@ struct dentry;
+ * struct ucsi_operations - UCSI I/O operations
+ * @read_version: Read implemented UCSI version
+ * @read_cci: Read CCI register
++ * @poll_cci: Read CCI register while polling with notifications disabled
+ * @read_message_in: Read message data from UCSI
+ * @sync_control: Blocking control operation
+ * @async_control: Non-blocking control operation
+@@ -74,6 +75,7 @@ struct dentry;
+ struct ucsi_operations {
+ int (*read_version)(struct ucsi *ucsi, u16 *version);
+ int (*read_cci)(struct ucsi *ucsi, u32 *cci);
++ int (*poll_cci)(struct ucsi *ucsi, u32 *cci);
+ int (*read_message_in)(struct ucsi *ucsi, void *val, size_t val_len);
+ int (*sync_control)(struct ucsi *ucsi, u64 command);
+ int (*async_control)(struct ucsi *ucsi, u64 command);
+diff --git a/drivers/usb/typec/ucsi/ucsi_acpi.c b/drivers/usb/typec/ucsi/ucsi_acpi.c
+index accf15ff1306a2..8de2961718cdd2 100644
+--- a/drivers/usb/typec/ucsi/ucsi_acpi.c
++++ b/drivers/usb/typec/ucsi/ucsi_acpi.c
+@@ -59,19 +59,24 @@ static int ucsi_acpi_read_version(struct ucsi *ucsi, u16 *version)
+ static int ucsi_acpi_read_cci(struct ucsi *ucsi, u32 *cci)
+ {
+ struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
+- int ret;
+-
+- if (UCSI_COMMAND(ua->cmd) == UCSI_PPM_RESET) {
+- ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
+- if (ret)
+- return ret;
+- }
+
+ memcpy(cci, ua->base + UCSI_CCI, sizeof(*cci));
+
+ return 0;
+ }
+
++static int ucsi_acpi_poll_cci(struct ucsi *ucsi, u32 *cci)
++{
++ struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
++ int ret;
++
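++ /* Ask platform firmware to refresh the mailbox before reading CCI */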
++ ret = ucsi_acpi_dsm(ua, UCSI_DSM_FUNC_READ);
++ if (ret)
++ return ret;
++
++ return ucsi_acpi_read_cci(ucsi, cci);
++}
++
+ static int ucsi_acpi_read_message_in(struct ucsi *ucsi, void *val, size_t val_len)
+ {
+ struct ucsi_acpi *ua = ucsi_get_drvdata(ucsi);
+@@ -94,6 +99,7 @@ static int ucsi_acpi_async_control(struct ucsi *ucsi, u64 command)
+ static const struct ucsi_operations ucsi_acpi_ops = {
+ .read_version = ucsi_acpi_read_version,
+ .read_cci = ucsi_acpi_read_cci,
++ .poll_cci = ucsi_acpi_poll_cci,
+ .read_message_in = ucsi_acpi_read_message_in,
+ .sync_control = ucsi_sync_control_common,
+ .async_control = ucsi_acpi_async_control
+@@ -145,6 +151,7 @@ static int ucsi_gram_sync_control(struct ucsi *ucsi, u64 command)
+ static const struct ucsi_operations ucsi_gram_ops = {
+ .read_version = ucsi_acpi_read_version,
+ .read_cci = ucsi_acpi_read_cci,
++ .poll_cci = ucsi_acpi_poll_cci,
+ .read_message_in = ucsi_gram_read_message_in,
+ .sync_control = ucsi_gram_sync_control,
+ .async_control = ucsi_acpi_async_control
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index 740171f24ef9fa..4b1668733a4bec 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -664,6 +664,7 @@ static int ucsi_ccg_sync_control(struct ucsi *ucsi, u64 command)
+ static const struct ucsi_operations ucsi_ccg_ops = {
+ .read_version = ucsi_ccg_read_version,
+ .read_cci = ucsi_ccg_read_cci,
++ .poll_cci = ucsi_ccg_read_cci,
+ .read_message_in = ucsi_ccg_read_message_in,
+ .sync_control = ucsi_ccg_sync_control,
+ .async_control = ucsi_ccg_async_control,
+diff --git a/drivers/usb/typec/ucsi/ucsi_glink.c b/drivers/usb/typec/ucsi/ucsi_glink.c
+index 9b6cb76e632807..75c0e54c37fa0a 100644
+--- a/drivers/usb/typec/ucsi/ucsi_glink.c
++++ b/drivers/usb/typec/ucsi/ucsi_glink.c
+@@ -201,6 +201,7 @@ static void pmic_glink_ucsi_connector_status(struct ucsi_connector *con)
+ static const struct ucsi_operations pmic_glink_ucsi_ops = {
+ .read_version = pmic_glink_ucsi_read_version,
+ .read_cci = pmic_glink_ucsi_read_cci,
++ .poll_cci = pmic_glink_ucsi_read_cci,
+ .read_message_in = pmic_glink_ucsi_read_message_in,
+ .sync_control = ucsi_sync_control_common,
+ .async_control = pmic_glink_ucsi_async_control,
+diff --git a/drivers/usb/typec/ucsi/ucsi_stm32g0.c b/drivers/usb/typec/ucsi/ucsi_stm32g0.c
+index 6923fad31d7951..57ef7d83a41211 100644
+--- a/drivers/usb/typec/ucsi/ucsi_stm32g0.c
++++ b/drivers/usb/typec/ucsi/ucsi_stm32g0.c
+@@ -424,6 +424,7 @@ static irqreturn_t ucsi_stm32g0_irq_handler(int irq, void *data)
+ static const struct ucsi_operations ucsi_stm32g0_ops = {
+ .read_version = ucsi_stm32g0_read_version,
+ .read_cci = ucsi_stm32g0_read_cci,
++ .poll_cci = ucsi_stm32g0_read_cci,
+ .read_message_in = ucsi_stm32g0_read_message_in,
+ .sync_control = ucsi_sync_control_common,
+ .async_control = ucsi_stm32g0_async_control,
+diff --git a/drivers/usb/typec/ucsi/ucsi_yoga_c630.c b/drivers/usb/typec/ucsi/ucsi_yoga_c630.c
+index f3a5e24ea84d51..40e5da4fd2a454 100644
+--- a/drivers/usb/typec/ucsi/ucsi_yoga_c630.c
++++ b/drivers/usb/typec/ucsi/ucsi_yoga_c630.c
+@@ -74,6 +74,7 @@ static int yoga_c630_ucsi_async_control(struct ucsi *ucsi, u64 command)
+ const struct ucsi_operations yoga_c630_ucsi_ops = {
+ .read_version = yoga_c630_ucsi_read_version,
+ .read_cci = yoga_c630_ucsi_read_cci,
++ .poll_cci = yoga_c630_ucsi_read_cci,
+ .read_message_in = yoga_c630_ucsi_read_message_in,
+ .sync_control = ucsi_sync_control_common,
+ .async_control = yoga_c630_ucsi_async_control,
+diff --git a/drivers/virt/acrn/hsm.c b/drivers/virt/acrn/hsm.c
+index c24036c4e51ecd..e4e196abdaac94 100644
+--- a/drivers/virt/acrn/hsm.c
++++ b/drivers/virt/acrn/hsm.c
+@@ -49,7 +49,7 @@ static int pmcmd_ioctl(u64 cmd, void __user *uptr)
+ switch (cmd & PMCMD_TYPE_MASK) {
+ case ACRN_PMCMD_GET_PX_CNT:
+ case ACRN_PMCMD_GET_CX_CNT:
+- pm_info = kmalloc(sizeof(u64), GFP_KERNEL);
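++ /* Zero-fill so unwritten bytes cannot leak kernel memory to userspace */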
++ pm_info = kzalloc(sizeof(u64), GFP_KERNEL);
+ if (!pm_info)
+ return -ENOMEM;
+
+@@ -64,7 +64,7 @@ static int pmcmd_ioctl(u64 cmd, void __user *uptr)
+ kfree(pm_info);
+ break;
+ case ACRN_PMCMD_GET_PX_DATA:
+- px_data = kmalloc(sizeof(*px_data), GFP_KERNEL);
++ px_data = kzalloc(sizeof(*px_data), GFP_KERNEL);
+ if (!px_data)
+ return -ENOMEM;
+
+@@ -79,7 +79,7 @@ static int pmcmd_ioctl(u64 cmd, void __user *uptr)
+ kfree(px_data);
+ break;
+ case ACRN_PMCMD_GET_CX_DATA:
+- cx_data = kmalloc(sizeof(*cx_data), GFP_KERNEL);
++ cx_data = kzalloc(sizeof(*cx_data), GFP_KERNEL);
+ if (!cx_data)
+ return -ENOMEM;
+
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index 848cb2c3d9ddeb..78c4a3765002eb 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -1200,7 +1200,7 @@ ssize_t btrfs_buffered_write(struct kiocb *iocb, struct iov_iter *i)
+ ssize_t ret;
+ bool only_release_metadata = false;
+ bool force_page_uptodate = false;
+- loff_t old_isize = i_size_read(inode);
++ loff_t old_isize;
+ unsigned int ilock_flags = 0;
+ const bool nowait = (iocb->ki_flags & IOCB_NOWAIT);
+ unsigned int bdp_flags = (nowait ? BDP_ASYNC : 0);
+@@ -1212,6 +1212,13 @@ ssize_t btrfs_buffered_write(struct kiocb *iocb, struct iov_iter *i)
+ if (ret < 0)
+ return ret;
+
++ /*
++ * We can only trust the isize with inode lock held, or it can race with
++ * other buffered writes and cause incorrect call of
++ * pagecache_isize_extended() to overwrite existing data.
++ */
++ old_isize = i_size_read(inode);
++
+ ret = generic_write_checks(iocb, i);
+ if (ret <= 0)
+ goto out;
+diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
+index 395b8b880ce786..587ac07cd19410 100644
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -7094,6 +7094,7 @@ static int read_one_chunk(struct btrfs_key *key, struct extent_buffer *leaf,
+ btrfs_err(fs_info,
+ "failed to add chunk map, start=%llu len=%llu: %d",
+ map->start, map->chunk_len, ret);
++ btrfs_free_chunk_map(map);
+ }
+
+ return ret;
+diff --git a/fs/coredump.c b/fs/coredump.c
+index 45737b43dda5c8..2b8c36c9660c5c 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -63,6 +63,7 @@ static void free_vma_snapshot(struct coredump_params *cprm);
+
+ static int core_uses_pid;
+ static unsigned int core_pipe_limit;
++static unsigned int core_sort_vma;
+ static char core_pattern[CORENAME_MAX_SIZE] = "core";
+ static int core_name_size = CORENAME_MAX_SIZE;
+ unsigned int core_file_note_size_limit = CORE_FILE_NOTE_SIZE_DEFAULT;
+@@ -1025,6 +1026,15 @@ static struct ctl_table coredump_sysctls[] = {
+ .extra1 = (unsigned int *)&core_file_note_size_min,
+ .extra2 = (unsigned int *)&core_file_note_size_max,
+ },
++ {
++ .procname = "core_sort_vma",
++ .data = &core_sort_vma,
++ .maxlen = sizeof(int),
++ .mode = 0644,
++ .proc_handler = proc_douintvec_minmax,
++ .extra1 = SYSCTL_ZERO,
++ .extra2 = SYSCTL_ONE,
++ },
+ };
+
+ static int __init init_fs_coredump_sysctls(void)
+@@ -1255,8 +1265,9 @@ static bool dump_vma_snapshot(struct coredump_params *cprm)
+ cprm->vma_data_size += m->dump_size;
+ }
+
+- sort(cprm->vma_meta, cprm->vma_count, sizeof(*cprm->vma_meta),
+- cmp_vma_size, NULL);
++ if (core_sort_vma)
++ sort(cprm->vma_meta, cprm->vma_count, sizeof(*cprm->vma_meta),
++ cmp_vma_size, NULL);
+
+ return true;
+ }
+diff --git a/fs/exfat/balloc.c b/fs/exfat/balloc.c
+index ce9be95c9172f6..9ff825f1502d5e 100644
+--- a/fs/exfat/balloc.c
++++ b/fs/exfat/balloc.c
+@@ -141,7 +141,7 @@ int exfat_set_bitmap(struct inode *inode, unsigned int clu, bool sync)
+ return 0;
+ }
+
+-void exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync)
++int exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync)
+ {
+ int i, b;
+ unsigned int ent_idx;
+@@ -150,13 +150,17 @@ void exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync)
+ struct exfat_mount_options *opts = &sbi->options;
+
+ if (!is_valid_cluster(sbi, clu))
+- return;
++ return -EIO;
+
+ ent_idx = CLUSTER_TO_BITMAP_ENT(clu);
+ i = BITMAP_OFFSET_SECTOR_INDEX(sb, ent_idx);
+ b = BITMAP_OFFSET_BIT_IN_SECTOR(sb, ent_idx);
+
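++ /* A bit that is already clear indicates bitmap corruption */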
++ if (!test_bit_le(b, sbi->vol_amap[i]->b_data))
++ return -EIO;
++
+ clear_bit_le(b, sbi->vol_amap[i]->b_data);
++
+ exfat_update_bh(sbi->vol_amap[i], sync);
+
+ if (opts->discard) {
+@@ -171,6 +175,8 @@ void exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync)
+ opts->discard = 0;
+ }
+ }
++
++ return 0;
+ }
+
+ /*
+diff --git a/fs/exfat/exfat_fs.h b/fs/exfat/exfat_fs.h
+index 3cdc1de362a941..d2ba8e2d0c398a 100644
+--- a/fs/exfat/exfat_fs.h
++++ b/fs/exfat/exfat_fs.h
+@@ -452,7 +452,7 @@ int exfat_count_num_clusters(struct super_block *sb,
+ int exfat_load_bitmap(struct super_block *sb);
+ void exfat_free_bitmap(struct exfat_sb_info *sbi);
+ int exfat_set_bitmap(struct inode *inode, unsigned int clu, bool sync);
+-void exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync);
++int exfat_clear_bitmap(struct inode *inode, unsigned int clu, bool sync);
+ unsigned int exfat_find_free_bitmap(struct super_block *sb, unsigned int clu);
+ int exfat_count_used_clusters(struct super_block *sb, unsigned int *ret_count);
+ int exfat_trim_fs(struct inode *inode, struct fstrim_range *range);
+diff --git a/fs/exfat/fatent.c b/fs/exfat/fatent.c
+index 9e5492ac409b07..6f3651c6ca91ef 100644
+--- a/fs/exfat/fatent.c
++++ b/fs/exfat/fatent.c
+@@ -175,6 +175,7 @@ static int __exfat_free_cluster(struct inode *inode, struct exfat_chain *p_chain
+ BITMAP_OFFSET_SECTOR_INDEX(sb, CLUSTER_TO_BITMAP_ENT(clu));
+
+ if (p_chain->flags == ALLOC_NO_FAT_CHAIN) {
++ int err;
+ unsigned int last_cluster = p_chain->dir + p_chain->size - 1;
+ do {
+ bool sync = false;
+@@ -189,7 +190,9 @@ static int __exfat_free_cluster(struct inode *inode, struct exfat_chain *p_chain
+ cur_cmap_i = next_cmap_i;
+ }
+
+- exfat_clear_bitmap(inode, clu, (sync && IS_DIRSYNC(inode)));
++ err = exfat_clear_bitmap(inode, clu, (sync && IS_DIRSYNC(inode)));
++ if (err)
++ break;
+ clu++;
+ num_clusters++;
+ } while (num_clusters < p_chain->size);
+@@ -210,12 +213,13 @@ static int __exfat_free_cluster(struct inode *inode, struct exfat_chain *p_chain
+ cur_cmap_i = next_cmap_i;
+ }
+
+- exfat_clear_bitmap(inode, clu, (sync && IS_DIRSYNC(inode)));
++ if (exfat_clear_bitmap(inode, clu, (sync && IS_DIRSYNC(inode))))
++ break;
+ clu = n_clu;
+ num_clusters++;
+
+ if (err)
+- goto dec_used_clus;
++ break;
+
+ if (num_clusters >= sbi->num_clusters - EXFAT_FIRST_CLUSTER) {
+ /*
+@@ -229,7 +233,6 @@ static int __exfat_free_cluster(struct inode *inode, struct exfat_chain *p_chain
+ } while (clu != EXFAT_EOF_CLUSTER);
+ }
+
+-dec_used_clus:
+ sbi->used_clusters -= num_clusters;
+ return 0;
+ }
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index 05b51e7217838f..807349d8ea0501 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -587,7 +587,7 @@ static ssize_t exfat_file_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ valid_size = ei->valid_size;
+
+ ret = generic_write_checks(iocb, iter);
+- if (ret < 0)
++ if (ret <= 0)
+ goto unlock;
+
+ if (iocb->ki_flags & IOCB_DIRECT) {
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index 337197ece59955..e47a5ddfc79b3d 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -237,7 +237,7 @@ static int exfat_search_empty_slot(struct super_block *sb,
+ dentry = 0;
+ }
+
+- while (dentry + num_entries < total_entries &&
++ while (dentry + num_entries <= total_entries &&
+ clu.dir != EXFAT_EOF_CLUSTER) {
+ i = dentry & (dentries_per_clu - 1);
+
+diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
+index a44132c986538b..3b9461f5e712e5 100644
+--- a/fs/netfs/read_collect.c
++++ b/fs/netfs/read_collect.c
+@@ -258,17 +258,18 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was
+ */
+ if (!subreq->consumed &&
+ !prev_donated &&
+- !list_is_first(&subreq->rreq_link, &rreq->subrequests) &&
+- subreq->start == prev->start + prev->len) {
++ !list_is_first(&subreq->rreq_link, &rreq->subrequests)) {
+ prev = list_prev_entry(subreq, rreq_link);
+- WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len);
+- subreq->start += subreq->len;
+- subreq->len = 0;
+- subreq->transferred = 0;
+- trace_netfs_donate(rreq, subreq, prev, subreq->len,
+- netfs_trace_donate_to_prev);
+- trace_netfs_sreq(subreq, netfs_sreq_trace_donate_to_prev);
+- goto remove_subreq_locked;
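++ /* prev is valid here; only donate down if the ranges are contiguous */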
++ if (subreq->start == prev->start + prev->len) {
++ WRITE_ONCE(prev->next_donated, prev->next_donated + subreq->len);
++ subreq->start += subreq->len;
++ subreq->len = 0;
++ subreq->transferred = 0;
++ trace_netfs_donate(rreq, subreq, prev, subreq->len,
++ netfs_trace_donate_to_prev);
++ trace_netfs_sreq(subreq, netfs_sreq_trace_donate_to_prev);
++ goto remove_subreq_locked;
++ }
+ }
+
+ /* If we can't donate down the chain, donate up the chain instead. */
+diff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c
+index 54d5004fec1826..e72f5e67483422 100644
+--- a/fs/netfs/read_pgpriv2.c
++++ b/fs/netfs/read_pgpriv2.c
+@@ -181,16 +181,17 @@ void netfs_pgpriv2_write_to_the_cache(struct netfs_io_request *rreq)
+ break;
+
+ folioq_unmark3(folioq, slot);
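++ /* Advance past queue segments that have no marked folios left */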
+- if (!folioq->marks3) {
++ while (!folioq->marks3) {
+ folioq = folioq->next;
+ if (!folioq)
+- break;
++ goto end_of_queue;
+ }
+
+ slot = __ffs(folioq->marks3);
+ folio = folioq_folio(folioq, slot);
+ }
+
++end_of_queue:
+ netfs_issue_write(wreq, &wreq->io_streams[1]);
+ smp_wmb(); /* Write lists before ALL_QUEUED. */
+ set_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);
+diff --git a/fs/nfs/file.c b/fs/nfs/file.c
+index 6800ee92d742a8..153d25d4b810c5 100644
+--- a/fs/nfs/file.c
++++ b/fs/nfs/file.c
+@@ -29,6 +29,7 @@
+ #include <linux/pagemap.h>
+ #include <linux/gfp.h>
+ #include <linux/swap.h>
++#include <linux/compaction.h>
+
+ #include <linux/uaccess.h>
+ #include <linux/filelock.h>
+@@ -451,7 +452,7 @@ static bool nfs_release_folio(struct folio *folio, gfp_t gfp)
+ /* If the private flag is set, then the folio is not freeable */
+ if (folio_test_private(folio)) {
+ if ((current_gfp_context(gfp) & GFP_KERNEL) != GFP_KERNEL ||
+- current_is_kswapd())
++ current_is_kswapd() || current_is_kcompactd())
+ return false;
+ if (nfs_wb_folio(folio->mapping->host, folio) < 0)
+ return false;
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index 05274121e46f04..b630beb757a44a 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -209,10 +209,8 @@ struct cifs_cred {
+
+ struct cifs_open_info_data {
+ bool adjust_tz;
+- union {
+- bool reparse_point;
+- bool symlink;
+- };
++ bool reparse_point;
++ bool contains_posix_file_info;
+ struct {
+ /* ioctl response buffer */
+ struct {
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index e11e67af760f44..a3f0835e12be31 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -968,7 +968,7 @@ cifs_get_file_info(struct file *filp)
+ /* TODO: add support to query reparse tag */
+ data.adjust_tz = false;
+ if (data.symlink_target) {
+- data.symlink = true;
++ data.reparse_point = true;
+ data.reparse.tag = IO_REPARSE_TAG_SYMLINK;
+ }
+ path = build_path_from_dentry(dentry, page);
+diff --git a/fs/smb/client/reparse.h b/fs/smb/client/reparse.h
+index ff05b0e75c9284..f080f92cb1e741 100644
+--- a/fs/smb/client/reparse.h
++++ b/fs/smb/client/reparse.h
+@@ -97,14 +97,30 @@ static inline bool reparse_inode_match(struct inode *inode,
+
+ static inline bool cifs_open_data_reparse(struct cifs_open_info_data *data)
+ {
+- struct smb2_file_all_info *fi = &data->fi;
+- u32 attrs = le32_to_cpu(fi->Attributes);
++ u32 attrs;
+ bool ret;
+
+- ret = data->reparse_point || (attrs & ATTR_REPARSE);
+- if (ret)
+- attrs |= ATTR_REPARSE;
+- fi->Attributes = cpu_to_le32(attrs);
++ if (data->contains_posix_file_info) {
++ struct smb311_posix_qinfo *fi = &data->posix_fi;
++
++ attrs = le32_to_cpu(fi->DosAttributes);
++ if (data->reparse_point) {
++ attrs |= ATTR_REPARSE;
++ fi->DosAttributes = cpu_to_le32(attrs);
++ }
++
++ } else {
++ struct smb2_file_all_info *fi = &data->fi;
++
++ attrs = le32_to_cpu(fi->Attributes);
++ if (data->reparse_point) {
++ attrs |= ATTR_REPARSE;
++ fi->Attributes = cpu_to_le32(attrs);
++ }
++ }
++
++ ret = attrs & ATTR_REPARSE;
++
+ return ret;
+ }
+
+diff --git a/fs/smb/client/smb1ops.c b/fs/smb/client/smb1ops.c
+index c70f4961c4eb78..bd791aa54681f6 100644
+--- a/fs/smb/client/smb1ops.c
++++ b/fs/smb/client/smb1ops.c
+@@ -551,7 +551,7 @@ static int cifs_query_path_info(const unsigned int xid,
+ int rc;
+ FILE_ALL_INFO fi = {};
+
+- data->symlink = false;
++ data->reparse_point = false;
+ data->adjust_tz = false;
+
+ /* could do find first instead but this returns more info */
+@@ -592,7 +592,7 @@ static int cifs_query_path_info(const unsigned int xid,
+ /* Need to check if this is a symbolic link or not */
+ tmprc = CIFS_open(xid, &oparms, &oplock, NULL);
+ if (tmprc == -EOPNOTSUPP)
+- data->symlink = true;
++ data->reparse_point = true;
+ else if (tmprc == 0)
+ CIFSSMBClose(xid, tcon, fid.netfid);
+ }
+diff --git a/fs/smb/client/smb2inode.c b/fs/smb/client/smb2inode.c
+index 7dfd3eb3847b33..6048b3fed3e787 100644
+--- a/fs/smb/client/smb2inode.c
++++ b/fs/smb/client/smb2inode.c
+@@ -648,6 +648,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ switch (cmds[i]) {
+ case SMB2_OP_QUERY_INFO:
+ idata = in_iov[i].iov_base;
++ idata->contains_posix_file_info = false;
+ if (rc == 0 && cfile && cfile->symlink_target) {
+ idata->symlink_target = kstrdup(cfile->symlink_target, GFP_KERNEL);
+ if (!idata->symlink_target)
+@@ -671,6 +672,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ break;
+ case SMB2_OP_POSIX_QUERY_INFO:
+ idata = in_iov[i].iov_base;
++ idata->contains_posix_file_info = true;
+ if (rc == 0 && cfile && cfile->symlink_target) {
+ idata->symlink_target = kstrdup(cfile->symlink_target, GFP_KERNEL);
+ if (!idata->symlink_target)
+@@ -768,6 +770,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ idata = in_iov[i].iov_base;
+ idata->reparse.io.iov = *iov;
+ idata->reparse.io.buftype = resp_buftype[i + 1];
++ idata->contains_posix_file_info = false; /* BB VERIFY */
+ rbuf = reparse_buf_ptr(iov);
+ if (IS_ERR(rbuf)) {
+ rc = PTR_ERR(rbuf);
+@@ -789,6 +792,7 @@ static int smb2_compound_op(const unsigned int xid, struct cifs_tcon *tcon,
+ case SMB2_OP_QUERY_WSL_EA:
+ if (!rc) {
+ idata = in_iov[i].iov_base;
++ idata->contains_posix_file_info = false;
+ qi_rsp = rsp_iov[i + 1].iov_base;
+ data[0] = (u8 *)qi_rsp + le16_to_cpu(qi_rsp->OutputBufferOffset);
+ size[0] = le32_to_cpu(qi_rsp->OutputBufferLength);
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index e8da63d29a28f1..516be8c0b2a9b4 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -1001,6 +1001,7 @@ static int smb2_query_file_info(const unsigned int xid, struct cifs_tcon *tcon,
+ if (!data->symlink_target)
+ return -ENOMEM;
+ }
++ data->contains_posix_file_info = false;
+ return SMB2_query_info(xid, tcon, fid->persistent_fid, fid->volatile_fid, &data->fi);
+ }
+
+@@ -5177,7 +5178,7 @@ int __cifs_sfu_make_node(unsigned int xid, struct inode *inode,
+ FILE_CREATE, CREATE_NOT_DIR |
+ CREATE_OPTION_SPECIAL, ACL_NO_MODE);
+ oparms.fid = &fid;
+-
++ idata.contains_posix_file_info = false;
+ rc = server->ops->open(xid, &oparms, &oplock, &idata);
+ if (rc)
+ goto out;
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index c763a2f7df6640..8464261d763876 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -7441,17 +7441,17 @@ int smb2_lock(struct ksmbd_work *work)
+ }
+
+ no_check_cl:
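++ /* Detach the lock entry up front so the zero_len path consumes it too */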
++ flock = smb_lock->fl;
++ list_del(&smb_lock->llist);
++
+ if (smb_lock->zero_len) {
+ err = 0;
+ goto skip;
+ }
+-
+- flock = smb_lock->fl;
+- list_del(&smb_lock->llist);
+ retry:
+ rc = vfs_lock_file(filp, smb_lock->cmd, flock, NULL);
+ skip:
+- if (flags & SMB2_LOCKFLAG_UNLOCK) {
++ if (smb_lock->flags & SMB2_LOCKFLAG_UNLOCK) {
+ if (!rc) {
+ ksmbd_debug(SMB, "File unlocked\n");
+ } else if (rc == -ENOENT) {
+diff --git a/fs/smb/server/smbacl.c b/fs/smb/server/smbacl.c
+index 1c9775f1efa56d..da8ed72f335d99 100644
+--- a/fs/smb/server/smbacl.c
++++ b/fs/smb/server/smbacl.c
+@@ -807,6 +807,13 @@ static int parse_sid(struct smb_sid *psid, char *end_of_acl)
+ return -EINVAL;
+ }
+
++ if (!psid->num_subauth)
++ return 0;
++
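++ /* 8 bytes covers the fixed SID header preceding the sub-authorities */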
++ if (psid->num_subauth > SID_MAX_SUB_AUTHORITIES ||
++ end_of_acl < (char *)psid + 8 + sizeof(__le32) * psid->num_subauth)
++ return -EINVAL;
++
+ return 0;
+ }
+
+@@ -848,6 +855,9 @@ int parse_sec_desc(struct mnt_idmap *idmap, struct smb_ntsd *pntsd,
+ pntsd->type = cpu_to_le16(DACL_PRESENT);
+
+ if (pntsd->osidoffset) {
++ if (le32_to_cpu(pntsd->osidoffset) < sizeof(struct smb_ntsd))
++ return -EINVAL;
++
+ rc = parse_sid(owner_sid_ptr, end_of_acl);
+ if (rc) {
+ pr_err("%s: Error %d parsing Owner SID\n", __func__, rc);
+@@ -863,6 +873,9 @@ int parse_sec_desc(struct mnt_idmap *idmap, struct smb_ntsd *pntsd,
+ }
+
+ if (pntsd->gsidoffset) {
++ if (le32_to_cpu(pntsd->gsidoffset) < sizeof(struct smb_ntsd))
++ return -EINVAL;
++
+ rc = parse_sid(group_sid_ptr, end_of_acl);
+ if (rc) {
+ pr_err("%s: Error %d mapping Owner SID to gid\n",
+@@ -884,6 +897,9 @@ int parse_sec_desc(struct mnt_idmap *idmap, struct smb_ntsd *pntsd,
+ pntsd->type |= cpu_to_le16(DACL_PROTECTED);
+
+ if (dacloffset) {
++ if (dacloffset < sizeof(struct smb_ntsd))
++ return -EINVAL;
++
+ parse_dacl(idmap, dacl_ptr, end_of_acl,
+ owner_sid_ptr, group_sid_ptr, fattr);
+ }
+diff --git a/fs/smb/server/transport_ipc.c b/fs/smb/server/transport_ipc.c
+index 69bac122adbe06..87af57cf35a157 100644
+--- a/fs/smb/server/transport_ipc.c
++++ b/fs/smb/server/transport_ipc.c
+@@ -281,6 +281,7 @@ static int handle_response(int type, void *payload, size_t sz)
+ if (entry->type + 1 != type) {
+ pr_err("Waiting for IPC type %d, got %d. Ignore.\n",
+ entry->type + 1, type);
++ continue;
+ }
+
+ entry->response = kvzalloc(sz, GFP_KERNEL);
+diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
+index 594d5905f61512..215bf9f317cbfd 100644
+--- a/include/asm-generic/hugetlb.h
++++ b/include/asm-generic/hugetlb.h
+@@ -84,7 +84,7 @@ static inline void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+
+ #ifndef __HAVE_ARCH_HUGE_PTEP_GET_AND_CLEAR
+ static inline pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+- unsigned long addr, pte_t *ptep)
++ unsigned long addr, pte_t *ptep, unsigned long sz)
+ {
+ return ptep_get_and_clear(mm, addr, ptep);
+ }
+diff --git a/include/drm/drm_client_setup.h b/include/drm/drm_client_setup.h
+new file mode 100644
+index 00000000000000..46aab3fb46be54
+--- /dev/null
++++ b/include/drm/drm_client_setup.h
+@@ -0,0 +1,26 @@
++/* SPDX-License-Identifier: MIT */
++
++#ifndef DRM_CLIENT_SETUP_H
++#define DRM_CLIENT_SETUP_H
++
++#include <linux/types.h>
++
++struct drm_device;
++struct drm_format_info;
++
++#if defined(CONFIG_DRM_CLIENT_SETUP)
++void drm_client_setup(struct drm_device *dev, const struct drm_format_info *format);
++void drm_client_setup_with_fourcc(struct drm_device *dev, u32 fourcc);
++void drm_client_setup_with_color_mode(struct drm_device *dev, unsigned int color_mode);
++#else
++static inline void drm_client_setup(struct drm_device *dev,
++ const struct drm_format_info *format)
++{ }
++static inline void drm_client_setup_with_fourcc(struct drm_device *dev, u32 fourcc)
++{ }
++static inline void drm_client_setup_with_color_mode(struct drm_device *dev,
++ unsigned int color_mode)
++{ }
++#endif
++
++#endif
+diff --git a/include/drm/drm_drv.h b/include/drm/drm_drv.h
+index 02ea4e3248fdf9..36a606af4ba1d8 100644
+--- a/include/drm/drm_drv.h
++++ b/include/drm/drm_drv.h
+@@ -34,6 +34,8 @@
+
+ #include <drm/drm_device.h>
+
++struct drm_fb_helper;
++struct drm_fb_helper_surface_size;
+ struct drm_file;
+ struct drm_gem_object;
+ struct drm_master;
+@@ -366,6 +368,22 @@ struct drm_driver {
+ struct drm_device *dev, uint32_t handle,
+ uint64_t *offset);
+
++ /**
++ * @fbdev_probe:
++ *
++ * Allocates and initializes the fb_info structure for fbdev emulation.
++ * It also needs to allocate the DRM framebuffer used to back the
++ * fbdev.
++ *
++ * This callback is mandatory for fbdev support.
++ *
++ * Returns:
++ *
++ * 0 on success or a negative error code otherwise.
++ */
++ int (*fbdev_probe)(struct drm_fb_helper *fbdev_helper,
++ struct drm_fb_helper_surface_size *sizes);
++
+ /**
+ * @show_fdinfo:
+ *
+diff --git a/include/drm/drm_fbdev_client.h b/include/drm/drm_fbdev_client.h
+new file mode 100644
+index 00000000000000..e11a5614f127c9
+--- /dev/null
++++ b/include/drm/drm_fbdev_client.h
+@@ -0,0 +1,19 @@
++/* SPDX-License-Identifier: MIT */
++
++#ifndef DRM_FBDEV_CLIENT_H
++#define DRM_FBDEV_CLIENT_H
++
++struct drm_device;
++struct drm_format_info;
++
++#ifdef CONFIG_DRM_FBDEV_EMULATION
++int drm_fbdev_client_setup(struct drm_device *dev, const struct drm_format_info *format);
++#else
++static inline int drm_fbdev_client_setup(struct drm_device *dev,
++ const struct drm_format_info *format)
++{
++ return 0;
++}
++#endif
++
++#endif
+diff --git a/include/drm/drm_fbdev_ttm.h b/include/drm/drm_fbdev_ttm.h
+index 9e6c3bdf35376a..243685d02eb13a 100644
+--- a/include/drm/drm_fbdev_ttm.h
++++ b/include/drm/drm_fbdev_ttm.h
+@@ -3,11 +3,24 @@
+ #ifndef DRM_FBDEV_TTM_H
+ #define DRM_FBDEV_TTM_H
+
++#include <linux/stddef.h>
++
+ struct drm_device;
++struct drm_fb_helper;
++struct drm_fb_helper_surface_size;
+
+ #ifdef CONFIG_DRM_FBDEV_EMULATION
++int drm_fbdev_ttm_driver_fbdev_probe(struct drm_fb_helper *fb_helper,
++ struct drm_fb_helper_surface_size *sizes);
++
++#define DRM_FBDEV_TTM_DRIVER_OPS \
++ .fbdev_probe = drm_fbdev_ttm_driver_fbdev_probe
++
+ void drm_fbdev_ttm_setup(struct drm_device *dev, unsigned int preferred_bpp);
+ #else
++#define DRM_FBDEV_TTM_DRIVER_OPS \
++ .fbdev_probe = NULL
++
+ static inline void drm_fbdev_ttm_setup(struct drm_device *dev, unsigned int preferred_bpp)
+ { }
+ #endif
+diff --git a/include/drm/drm_fourcc.h b/include/drm/drm_fourcc.h
+index ccf91daa430702..c3f4405d66629e 100644
+--- a/include/drm/drm_fourcc.h
++++ b/include/drm/drm_fourcc.h
+@@ -313,6 +313,7 @@ drm_get_format_info(struct drm_device *dev,
+ uint32_t drm_mode_legacy_fb_format(uint32_t bpp, uint32_t depth);
+ uint32_t drm_driver_legacy_fb_format(struct drm_device *dev,
+ uint32_t bpp, uint32_t depth);
++uint32_t drm_driver_color_mode_format(struct drm_device *dev, unsigned int color_mode);
+ unsigned int drm_format_info_block_width(const struct drm_format_info *info,
+ int plane);
+ unsigned int drm_format_info_block_height(const struct drm_format_info *info,
+diff --git a/include/linux/compaction.h b/include/linux/compaction.h
+index e9477649604964..7bf0c521db6340 100644
+--- a/include/linux/compaction.h
++++ b/include/linux/compaction.h
+@@ -80,6 +80,11 @@ static inline unsigned long compact_gap(unsigned int order)
+ return 2UL << order;
+ }
+
++static inline int current_is_kcompactd(void)
++{
++ return current->flags & PF_KCOMPACTD;
++}
++
+ #ifdef CONFIG_COMPACTION
+
+ extern unsigned int extfrag_for_order(struct zone *zone, unsigned int order);
+diff --git a/include/linux/ethtool.h b/include/linux/ethtool.h
+index b8b935b526033f..b0ed740ca749bb 100644
+--- a/include/linux/ethtool.h
++++ b/include/linux/ethtool.h
+@@ -412,6 +412,29 @@ struct ethtool_eth_phy_stats {
+ );
+ };
+
++/**
++ * struct ethtool_phy_stats - PHY-level statistics counters
++ * @rx_packets: Total successfully received frames
++ * @rx_bytes: Total successfully received bytes
++ * @rx_errors: Total received frames with errors (e.g., CRC errors)
++ * @tx_packets: Total successfully transmitted frames
++ * @tx_bytes: Total successfully transmitted bytes
++ * @tx_errors: Total transmitted frames with errors
++ *
++ * This structure provides a standardized interface for reporting
++ * PHY-level statistics counters. It is designed to expose statistics
++ * commonly provided by PHYs but not explicitly defined in the IEEE
++ * 802.3 standard.
++ */
++struct ethtool_phy_stats {
++ u64 rx_packets;
++ u64 rx_bytes;
++ u64 rx_errors;
++ u64 tx_packets;
++ u64 tx_bytes;
++ u64 tx_errors;
++};
++
+ /* Basic IEEE 802.3 MAC Ctrl statistics (30.3.3.*), not otherwise exposed
+ * via a more targeted API.
+ */
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index e4697539b665a2..25a7b13574c28b 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -1009,7 +1009,9 @@ static inline void hugetlb_count_sub(long l, struct mm_struct *mm)
+ static inline pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *ptep)
+ {
+- return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep);
++ unsigned long psize = huge_page_size(hstate_vma(vma));
++
++ return huge_ptep_get_and_clear(vma->vm_mm, addr, ptep, psize);
+ }
+ #endif
+
+diff --git a/include/linux/nvme-tcp.h b/include/linux/nvme-tcp.h
+index e07e8978d691b7..e435250fcb4d05 100644
+--- a/include/linux/nvme-tcp.h
++++ b/include/linux/nvme-tcp.h
+@@ -13,6 +13,8 @@
+ #define NVME_TCP_ADMIN_CCSZ SZ_8K
+ #define NVME_TCP_DIGEST_LENGTH 4
+ #define NVME_TCP_MIN_MAXH2CDATA 4096
++#define NVME_TCP_MIN_C2HTERM_PLEN 24
++#define NVME_TCP_MAX_C2HTERM_PLEN 152
+
+ enum nvme_tcp_pfv {
+ NVME_TCP_PFV_1_0 = 0x0,
+diff --git a/include/linux/nvme.h b/include/linux/nvme.h
+index b58d9405d65e01..1c101f6fad2f31 100644
+--- a/include/linux/nvme.h
++++ b/include/linux/nvme.h
+@@ -388,6 +388,7 @@ enum {
+ NVME_CTRL_CTRATT_PREDICTABLE_LAT = 1 << 5,
+ NVME_CTRL_CTRATT_NAMESPACE_GRANULARITY = 1 << 7,
+ NVME_CTRL_CTRATT_UUID_LIST = 1 << 9,
++ NVME_CTRL_SGLS_MSDS = 1 << 19,
+ };
+
+ struct nvme_lbaf {
+diff --git a/include/linux/phy.h b/include/linux/phy.h
+index a98bc91a0cde9c..945264f457d8aa 100644
+--- a/include/linux/phy.h
++++ b/include/linux/phy.h
+@@ -1090,6 +1090,35 @@ struct phy_driver {
+ int (*cable_test_get_status)(struct phy_device *dev, bool *finished);
+
+ /* Get statistics from the PHY using ethtool */
++ /**
++ * @get_phy_stats: Retrieve PHY statistics.
++ * @dev: The PHY device for which the statistics are retrieved.
++ * @eth_stats: structure where Ethernet PHY stats will be stored.
++ * @stats: structure where additional PHY-specific stats will be stored.
++ *
++ * Retrieves the supported PHY statistics and populates the provided
++ * structures. The input structures are pre-initialized with
++ * `ETHTOOL_STAT_NOT_SET`, and the driver must only modify members
++ * corresponding to supported statistics. Unmodified members will remain
++ * set to `ETHTOOL_STAT_NOT_SET` and will not be returned to userspace.
++ */
++ void (*get_phy_stats)(struct phy_device *dev,
++ struct ethtool_eth_phy_stats *eth_stats,
++ struct ethtool_phy_stats *stats);
++
++ /**
++ * @get_link_stats: Retrieve link statistics.
++ * @dev: The PHY device for which the statistics are retrieved.
++ * @link_stats: structure where link-specific stats will be stored.
++ *
++ * Retrieves link-related statistics for the given PHY device. The input
++ * structure is pre-initialized with `ETHTOOL_STAT_NOT_SET`, and the
++ * driver must only modify members corresponding to supported
++ * statistics. Unmodified members will remain set to
++ * `ETHTOOL_STAT_NOT_SET` and will not be returned to userspace.
++ */
++ void (*get_link_stats)(struct phy_device *dev,
++ struct ethtool_link_ext_stats *link_stats);
+ /** @get_sset_count: Number of statistic counters */
+ int (*get_sset_count)(struct phy_device *dev);
+ /** @get_strings: Names of the statistic counters */
+@@ -2055,6 +2084,13 @@ int phy_ethtool_get_strings(struct phy_device *phydev, u8 *data);
+ int phy_ethtool_get_sset_count(struct phy_device *phydev);
+ int phy_ethtool_get_stats(struct phy_device *phydev,
+ struct ethtool_stats *stats, u64 *data);
++
++void __phy_ethtool_get_phy_stats(struct phy_device *phydev,
++ struct ethtool_eth_phy_stats *phy_stats,
++ struct ethtool_phy_stats *phydev_stats);
++void __phy_ethtool_get_link_ext_stats(struct phy_device *phydev,
++ struct ethtool_link_ext_stats *link_stats);
++
+ int phy_ethtool_get_plca_cfg(struct phy_device *phydev,
+ struct phy_plca_cfg *plca_cfg);
+ int phy_ethtool_set_plca_cfg(struct phy_device *phydev,
+diff --git a/include/linux/phylib_stubs.h b/include/linux/phylib_stubs.h
+index 1279f48c8a7077..9d2d6090c86d12 100644
+--- a/include/linux/phylib_stubs.h
++++ b/include/linux/phylib_stubs.h
+@@ -5,6 +5,9 @@
+
+ #include <linux/rtnetlink.h>
+
++struct ethtool_eth_phy_stats;
++struct ethtool_link_ext_stats;
++struct ethtool_phy_stats;
+ struct kernel_hwtstamp_config;
+ struct netlink_ext_ack;
+ struct phy_device;
+@@ -19,6 +22,11 @@ struct phylib_stubs {
+ int (*hwtstamp_set)(struct phy_device *phydev,
+ struct kernel_hwtstamp_config *config,
+ struct netlink_ext_ack *extack);
++ void (*get_phy_stats)(struct phy_device *phydev,
++ struct ethtool_eth_phy_stats *phy_stats,
++ struct ethtool_phy_stats *phydev_stats);
++ void (*get_link_ext_stats)(struct phy_device *phydev,
++ struct ethtool_link_ext_stats *link_stats);
+ };
+
+ static inline int phy_hwtstamp_get(struct phy_device *phydev,
+@@ -50,6 +58,29 @@ static inline int phy_hwtstamp_set(struct phy_device *phydev,
+ return phylib_stubs->hwtstamp_set(phydev, config, extack);
+ }
+
++static inline void phy_ethtool_get_phy_stats(struct phy_device *phydev,
++ struct ethtool_eth_phy_stats *phy_stats,
++ struct ethtool_phy_stats *phydev_stats)
++{
++ ASSERT_RTNL();
++
++ if (!phylib_stubs)
++ return;
++
++ phylib_stubs->get_phy_stats(phydev, phy_stats, phydev_stats);
++}
++
++static inline void phy_ethtool_get_link_ext_stats(struct phy_device *phydev,
++ struct ethtool_link_ext_stats *link_stats)
++{
++ ASSERT_RTNL();
++
++ if (!phylib_stubs)
++ return;
++
++ phylib_stubs->get_link_ext_stats(phydev, link_stats);
++}
++
+ #else
+
+ static inline int phy_hwtstamp_get(struct phy_device *phydev,
+@@ -65,4 +96,15 @@ static inline int phy_hwtstamp_set(struct phy_device *phydev,
+ return -EOPNOTSUPP;
+ }
+
++static inline void phy_ethtool_get_phy_stats(struct phy_device *phydev,
++ struct ethtool_eth_phy_stats *phy_stats,
++ struct ethtool_phy_stats *phydev_stats)
++{
++}
++
++static inline void phy_ethtool_get_link_ext_stats(struct phy_device *phydev,
++ struct ethtool_link_ext_stats *link_stats)
++{
++}
++
+ #endif
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 8982820dae2131..0d1d70aded38f6 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -1682,7 +1682,7 @@ extern struct pid *cad_pid;
+ #define PF_USED_MATH 0x00002000 /* If unset the fpu must be initialized before use */
+ #define PF_USER_WORKER 0x00004000 /* Kernel thread cloned from userspace thread */
+ #define PF_NOFREEZE 0x00008000 /* This thread should not be frozen */
+-#define PF__HOLE__00010000 0x00010000
++#define PF_KCOMPACTD 0x00010000 /* I am kcompactd */
+ #define PF_KSWAPD 0x00020000 /* I am kswapd */
+ #define PF_MEMALLOC_NOFS 0x00040000 /* All allocations inherit GFP_NOFS. See memalloc_nfs_save() */
+ #define PF_MEMALLOC_NOIO 0x00080000 /* All allocations inherit GFP_NOIO. See memalloc_noio_save() */
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index a0e1d2124727e1..5fff74c736063c 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -11846,6 +11846,8 @@ void perf_pmu_unregister(struct pmu *pmu)
+ {
+ mutex_lock(&pmus_lock);
+ list_del_rcu(&pmu->entry);
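++ /* Remove the idr entry while pmus_lock is held to serialize lookups */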
++ idr_remove(&pmu_idr, pmu->type);
++ mutex_unlock(&pmus_lock);
+
+ /*
+ * We dereference the pmu list under both SRCU and regular RCU, so
+@@ -11855,7 +11857,6 @@ void perf_pmu_unregister(struct pmu *pmu)
+ synchronize_rcu();
+
+ free_percpu(pmu->pmu_disable_count);
+- idr_remove(&pmu_idr, pmu->type);
+ if (pmu_bus_running && pmu->dev && pmu->dev != PMU_NULL_DEV) {
+ if (pmu->nr_addr_filters)
+ device_remove_file(pmu->dev, &dev_attr_nr_addr_filters);
+@@ -11863,7 +11864,6 @@ void perf_pmu_unregister(struct pmu *pmu)
+ put_device(pmu->dev);
+ }
+ free_pmu_context(pmu);
+- mutex_unlock(&pmus_lock);
+ }
+ EXPORT_SYMBOL_GPL(perf_pmu_unregister);
+
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index a0e0676f5d8bbe..4fdc08ca0f3cbd 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -1775,6 +1775,7 @@ void uprobe_free_utask(struct task_struct *t)
+ if (!utask)
+ return;
+
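++ /* Detach the utask first so concurrent observers see a consistent state */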
++ t->utask = NULL;
+ if (utask->active_uprobe)
+ put_uprobe(utask->active_uprobe);
+
+@@ -1784,7 +1785,6 @@ void uprobe_free_utask(struct task_struct *t)
+
+ xol_free_insn_slot(t);
+ kfree(utask);
+- t->utask = NULL;
+ }
+
+ /*
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index ddc096d6b0c203..58ba14ed8fbcb9 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -4155,15 +4155,17 @@ static inline bool child_cfs_rq_on_list(struct cfs_rq *cfs_rq)
+ {
+ struct cfs_rq *prev_cfs_rq;
+ struct list_head *prev;
++ struct rq *rq = rq_of(cfs_rq);
+
+ if (cfs_rq->on_list) {
+ prev = cfs_rq->leaf_cfs_rq_list.prev;
+ } else {
+- struct rq *rq = rq_of(cfs_rq);
+-
+ prev = rq->tmp_alone_branch;
+ }
+
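++ /* prev may be the list head itself, which is not embedded in a cfs_rq */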
++ if (prev == &rq->leaf_cfs_rq_list)
++ return false;
++
+ prev_cfs_rq = container_of(prev, struct cfs_rq, leaf_cfs_rq_list);
+
+ return (prev_cfs_rq->tg->parent == cfs_rq->tg);
+diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c
+index c62d1629cffecd..99048c33038223 100644
+--- a/kernel/trace/trace_fprobe.c
++++ b/kernel/trace/trace_fprobe.c
+@@ -1018,6 +1018,19 @@ static int parse_symbol_and_return(int argc, const char *argv[],
+ if (*is_return)
+ return 0;
+
++ if (is_tracepoint) {
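++ /* Tracepoint names are limited to alphanumerics and '_' */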
++ tmp = *symbol;
++ while (*tmp && (isalnum(*tmp) || *tmp == '_'))
++ tmp++;
++ if (*tmp) {
++ /* find a wrong character. */
++ trace_probe_log_err(tmp - *symbol, BAD_TP_NAME);
++ kfree(*symbol);
++ *symbol = NULL;
++ return -EINVAL;
++ }
++ }
++
+ /* If there is $retval, this should be a return fprobe. */
+ for (i = 2; i < argc; i++) {
+ tmp = strstr(argv[i], "$retval");
+@@ -1025,6 +1038,8 @@ static int parse_symbol_and_return(int argc, const char *argv[],
+ if (is_tracepoint) {
+ trace_probe_log_set_index(i);
+ trace_probe_log_err(tmp - argv[i], RETVAL_ON_PROBE);
++ kfree(*symbol);
++ *symbol = NULL;
+ return -EINVAL;
+ }
+ *is_return = true;
+diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h
+index 5803e6a4157055..8a6797c2278d90 100644
+--- a/kernel/trace/trace_probe.h
++++ b/kernel/trace/trace_probe.h
+@@ -36,7 +36,6 @@
+ #define MAX_BTF_ARGS_LEN 128
+ #define MAX_DENTRY_ARGS_LEN 256
+ #define MAX_STRING_SIZE PATH_MAX
+-#define MAX_ARG_BUF_LEN (MAX_TRACE_ARGS * MAX_ARG_NAME_LEN)
+
+ /* Reserved field names */
+ #define FIELD_STRING_IP "__probe_ip"
+@@ -481,6 +480,7 @@ extern int traceprobe_define_arg_fields(struct trace_event_call *event_call,
+ C(NON_UNIQ_SYMBOL, "The symbol is not unique"), \
+ C(BAD_RETPROBE, "Retprobe address must be a function entry"), \
+ C(NO_TRACEPOINT, "Tracepoint is not found"), \
++ C(BAD_TP_NAME, "Invalid character in tracepoint name"), \
+ C(BAD_ADDR_SUFFIX, "Invalid probed address suffix"), \
+ C(NO_GROUP_NAME, "Group name is not specified"), \
+ C(GROUP_TOO_LONG, "Group name is too long"), \
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 384e4672998e55..77dbb9022b47f0 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -3164,6 +3164,7 @@ static int kcompactd(void *p)
+ if (!cpumask_empty(cpumask))
+ set_cpus_allowed_ptr(tsk, cpumask);
+
++ current->flags |= PF_KCOMPACTD;
+ set_freezable();
+
+ pgdat->kcompactd_max_order = 0;
+@@ -3220,6 +3221,8 @@ static int kcompactd(void *p)
+ pgdat->proactive_compact_trigger = false;
+ }
+
++ current->flags &= ~PF_KCOMPACTD;
++
+ return 0;
+ }
+
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index bdee6d3ab0e7e3..1e9aa6de4e21ea 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -5395,7 +5395,7 @@ static void move_huge_pte(struct vm_area_struct *vma, unsigned long old_addr,
+ if (src_ptl != dst_ptl)
+ spin_lock_nested(src_ptl, SINGLE_DEPTH_NESTING);
+
+- pte = huge_ptep_get_and_clear(mm, old_addr, src_pte);
++ pte = huge_ptep_get_and_clear(mm, old_addr, src_pte, sz);
+
+ if (need_clear_uffd_wp && pte_marker_uffd_wp(pte))
+ huge_pte_clear(mm, new_addr, dst_pte, sz);
+@@ -5570,7 +5570,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
+ set_vma_resv_flags(vma, HPAGE_RESV_UNMAPPED);
+ }
+
+- pte = huge_ptep_get_and_clear(mm, address, ptep);
++ pte = huge_ptep_get_and_clear(mm, address, ptep, sz);
+ tlb_remove_huge_tlb_entry(h, tlb, ptep, address);
+ if (huge_pte_dirty(pte))
+ set_page_dirty(page);
+diff --git a/mm/internal.h b/mm/internal.h
+index 9bb098e78f1556..398633d6b6c9f0 100644
+--- a/mm/internal.h
++++ b/mm/internal.h
+@@ -1101,7 +1101,7 @@ static inline int find_next_best_node(int node, nodemask_t *used_node_mask)
+ * mm/memory-failure.c
+ */
+ #ifdef CONFIG_MEMORY_FAILURE
+-void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu);
++int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill);
+ void shake_folio(struct folio *folio);
+ extern int hwpoison_filter(struct page *p);
+
+@@ -1123,8 +1123,9 @@ void add_to_kill_ksm(struct task_struct *tsk, struct page *p,
+ unsigned long page_mapped_in_vma(struct page *page, struct vm_area_struct *vma);
+
+ #else
+-static inline void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
++static inline int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill)
+ {
++ return -EBUSY;
+ }
+ #endif
+
+diff --git a/mm/kasan/kasan_test_rust.rs b/mm/kasan/kasan_test_rust.rs
+index caa7175964ef64..5b34edf30e7244 100644
+--- a/mm/kasan/kasan_test_rust.rs
++++ b/mm/kasan/kasan_test_rust.rs
+@@ -11,11 +11,12 @@
+ /// drop the vector, and touch it.
+ #[no_mangle]
+ pub extern "C" fn kasan_test_rust_uaf() -> u8 {
+- let mut v: Vec<u8> = Vec::new();
++ let mut v: KVec<u8> = KVec::new();
+ for _ in 0..4096 {
+ v.push(0x42, GFP_KERNEL).unwrap();
+ }
+ let ptr: *mut u8 = addr_of_mut!(v[2048]);
+ drop(v);
++ // SAFETY: Incorrect, on purpose.
+ unsafe { *ptr }
+ }
+diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
+index 3ea50f09311fd7..3df45c25c1f62f 100644
+--- a/mm/kmsan/hooks.c
++++ b/mm/kmsan/hooks.c
+@@ -357,6 +357,7 @@ void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
+ size -= to_go;
+ }
+ }
++EXPORT_SYMBOL_GPL(kmsan_handle_dma);
+
+ void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
+ enum dma_data_direction dir)
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index 96ce31e5a203be..fa25a022e64d71 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -1554,11 +1554,35 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
+ return ret;
+ }
+
+-void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
++int unmap_poisoned_folio(struct folio *folio, unsigned long pfn, bool must_kill)
+ {
+- if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
+- struct address_space *mapping;
++ enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON;
++ struct address_space *mapping;
++
++ if (folio_test_swapcache(folio)) {
++ pr_err("%#lx: keeping poisoned page in swap cache\n", pfn);
++ ttu &= ~TTU_HWPOISON;
++ }
+
++ /*
++ * Propagate the dirty bit from PTEs to struct page first, because we
++ * need this to decide if we should kill or just drop the page.
++ * XXX: the dirty test could be racy: set_page_dirty() may not always
++ * be called inside page lock (it's recommended but not enforced).
++ */
++ mapping = folio_mapping(folio);
++ if (!must_kill && !folio_test_dirty(folio) && mapping &&
++ mapping_can_writeback(mapping)) {
++ if (folio_mkclean(folio)) {
++ folio_set_dirty(folio);
++ } else {
++ ttu &= ~TTU_HWPOISON;
++ pr_info("%#lx: corrupted page was clean: dropped without side effects\n",
++ pfn);
++ }
++ }
++
++ if (folio_test_hugetlb(folio) && !folio_test_anon(folio)) {
+ /*
+ * For hugetlb folios in shared mappings, try_to_unmap
+ * could potentially call huge_pmd_unshare. Because of
+@@ -1570,7 +1594,7 @@ void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
+ if (!mapping) {
+ pr_info("%#lx: could not lock mapping for mapped hugetlb folio\n",
+ folio_pfn(folio));
+- return;
++ return -EBUSY;
+ }
+
+ try_to_unmap(folio, ttu|TTU_RMAP_LOCKED);
+@@ -1578,6 +1602,8 @@ void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
+ } else {
+ try_to_unmap(folio, ttu);
+ }
++
++ return folio_mapped(folio) ? -EBUSY : 0;
+ }
+
+ /*
+@@ -1587,8 +1613,6 @@ void unmap_poisoned_folio(struct folio *folio, enum ttu_flags ttu)
+ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
+ unsigned long pfn, int flags)
+ {
+- enum ttu_flags ttu = TTU_IGNORE_MLOCK | TTU_SYNC | TTU_HWPOISON;
+- struct address_space *mapping;
+ LIST_HEAD(tokill);
+ bool unmap_success;
+ int forcekill;
+@@ -1611,29 +1635,6 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
+ if (!folio_mapped(folio))
+ return true;
+
+- if (folio_test_swapcache(folio)) {
+- pr_err("%#lx: keeping poisoned page in swap cache\n", pfn);
+- ttu &= ~TTU_HWPOISON;
+- }
+-
+- /*
+- * Propagate the dirty bit from PTEs to struct page first, because we
+- * need this to decide if we should kill or just drop the page.
+- * XXX: the dirty test could be racy: set_page_dirty() may not always
+- * be called inside page lock (it's recommended but not enforced).
+- */
+- mapping = folio_mapping(folio);
+- if (!(flags & MF_MUST_KILL) && !folio_test_dirty(folio) && mapping &&
+- mapping_can_writeback(mapping)) {
+- if (folio_mkclean(folio)) {
+- folio_set_dirty(folio);
+- } else {
+- ttu &= ~TTU_HWPOISON;
+- pr_info("%#lx: corrupted page was clean: dropped without side effects\n",
+- pfn);
+- }
+- }
+-
+ /*
+ * First collect all the processes that have the page
+ * mapped in dirty form. This has to be done before try_to_unmap,
+@@ -1641,9 +1642,7 @@ static bool hwpoison_user_mappings(struct folio *folio, struct page *p,
+ */
+ collect_procs(folio, p, &tokill, flags & MF_ACTION_REQUIRED);
+
+- unmap_poisoned_folio(folio, ttu);
+-
+- unmap_success = !folio_mapped(folio);
++ unmap_success = !unmap_poisoned_folio(folio, pfn, flags & MF_MUST_KILL);
+ if (!unmap_success)
+ pr_err("%#lx: failed to unmap page (folio mapcount=%d)\n",
+ pfn, folio_mapcount(folio));
+diff --git a/mm/memory.c b/mm/memory.c
+index d322ddfe679167..525f96ad65b8d7 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -2957,8 +2957,10 @@ static int __apply_to_page_range(struct mm_struct *mm, unsigned long addr,
+ next = pgd_addr_end(addr, end);
+ if (pgd_none(*pgd) && !create)
+ continue;
+- if (WARN_ON_ONCE(pgd_leaf(*pgd)))
+- return -EINVAL;
++ if (WARN_ON_ONCE(pgd_leaf(*pgd))) {
++ err = -EINVAL;
++ break;
++ }
+ if (!pgd_none(*pgd) && WARN_ON_ONCE(pgd_bad(*pgd))) {
+ if (!create)
+ continue;
+@@ -5077,7 +5079,11 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
+ bool is_cow = (vmf->flags & FAULT_FLAG_WRITE) &&
+ !(vma->vm_flags & VM_SHARED);
+ int type, nr_pages;
+- unsigned long addr = vmf->address;
++ unsigned long addr;
++ bool needs_fallback = false;
++
++fallback:
++ addr = vmf->address;
+
+ /* Did we COW the page? */
+ if (is_cow)
+@@ -5116,7 +5122,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
+ * approach also applies to non-anonymous-shmem faults to avoid
+ * inflating the RSS of the process.
+ */
+- if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma))) {
++ if (!vma_is_anon_shmem(vma) || unlikely(userfaultfd_armed(vma)) ||
++ unlikely(needs_fallback)) {
+ nr_pages = 1;
+ } else if (nr_pages > 1) {
+ pgoff_t idx = folio_page_idx(folio, page);
+@@ -5152,9 +5159,9 @@ vm_fault_t finish_fault(struct vm_fault *vmf)
+ ret = VM_FAULT_NOPAGE;
+ goto unlock;
+ } else if (nr_pages > 1 && !pte_range_none(vmf->pte, nr_pages)) {
+- update_mmu_tlb_range(vma, addr, vmf->pte, nr_pages);
+- ret = VM_FAULT_NOPAGE;
+- goto unlock;
++ needs_fallback = true;
++ pte_unmap_unlock(vmf->pte, vmf->ptl);
++ goto fallback;
+ }
+
+ folio_ref_add(folio, nr_pages - 1);
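
The finish_fault() hunks replace an early VM_FAULT_NOPAGE bail-out with a retry: when the batched PTE range turns out to be non-empty under the lock, the fault path drops the page-table lock, sets needs_fallback and jumps back to map a single page instead. This is the general optimistic-batch-then-fallback shape; a self-contained user-space C sketch of the same control flow (all names hypothetical):

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdio.h>

        /* Stub standing in for "install nr_pages entries if the range is free". */
        static bool insert_batch(size_t n)
        {
                return n == 1;  /* pretend only the single-entry case succeeds */
        }

        static bool insert_with_fallback(size_t n)
        {
                bool needs_fallback = false;

        fallback:
                if (needs_fallback)
                        n = 1;                  /* shrink to the minimal unit */
                if (!insert_batch(n)) {
                        if (n == 1)
                                return false;   /* even the single item failed */
                        needs_fallback = true;  /* lost the race; retry smaller */
                        goto fallback;
                }
                return true;
        }

        int main(void)
        {
                printf("%d\n", insert_with_fallback(8));        /* prints 1 */
                return 0;
        }
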
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 621ae1015106c5..619445096ef4a6 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1795,26 +1795,24 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
+ if (folio_test_large(folio))
+ pfn = folio_pfn(folio) + folio_nr_pages(folio) - 1;
+
+- /*
+- * HWPoison pages have elevated reference counts so the migration would
+- * fail on them. It also doesn't make any sense to migrate them in the
+- * first place. Still try to unmap such a page in case it is still mapped
+- * (keep the unmap as the catch all safety net).
+- */
++ if (!folio_try_get(folio))
++ continue;
++
++ if (unlikely(page_folio(page) != folio))
++ goto put_folio;
++
+ if (folio_test_hwpoison(folio) ||
+ (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {
+ if (WARN_ON(folio_test_lru(folio)))
+ folio_isolate_lru(folio);
+- if (folio_mapped(folio))
+- unmap_poisoned_folio(folio, TTU_IGNORE_MLOCK);
+- continue;
+- }
+-
+- if (!folio_try_get(folio))
+- continue;
++ if (folio_mapped(folio)) {
++ folio_lock(folio);
++ unmap_poisoned_folio(folio, pfn, false);
++ folio_unlock(folio);
++ }
+
+- if (unlikely(page_folio(page) != folio))
+ goto put_folio;
++ }
+
+ if (!isolate_folio_to_list(folio, &source)) {
+ if (__ratelimit(&migrate_rs)) {
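
In the memory_hotplug.c hunk, folio_try_get() now runs before the hwpoison handling, and the folio's identity is re-checked (page_folio(page) != folio) before it is touched, so even poisoned folios are only unmapped while a reference is held. The underlying idiom is the speculative get-then-revalidate reference pattern; a generic C11 sketch of it (an illustrative stand-in, not the kernel's implementation):

        #include <stdatomic.h>
        #include <stdbool.h>

        struct obj {
                atomic_int refcount;    /* 0 means the object is being freed */
        };

        /* Mirrors folio_try_get(): take a reference only if one is already
         * held elsewhere; never resurrect an object whose count hit zero.
         * Callers must still re-check identity afterwards, as the hunk does
         * with page_folio(page) != folio. */
        static bool try_get(struct obj *o)
        {
                int c = atomic_load(&o->refcount);

                while (c != 0)
                        if (atomic_compare_exchange_weak(&o->refcount, &c, c + 1))
                                return true;
                return false;
        }

        int main(void)
        {
                struct obj o = { .refcount = 1 };
                return try_get(&o) ? 0 : 1;
        }
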
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index de65e8b4f75f21..e0a77fe1b6300d 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -4243,6 +4243,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
+ restart:
+ compaction_retries = 0;
+ no_progress_loops = 0;
++ compact_result = COMPACT_SKIPPED;
+ compact_priority = DEF_COMPACT_PRIORITY;
+ cpuset_mems_cookie = read_mems_allowed_begin();
+ zonelist_iter_cookie = zonelist_iter_begin();
+@@ -5991,11 +5992,10 @@ static void setup_per_zone_lowmem_reserve(void)
+
+ for (j = i + 1; j < MAX_NR_ZONES; j++) {
+ struct zone *upper_zone = &pgdat->node_zones[j];
+- bool empty = !zone_managed_pages(upper_zone);
+
+ managed_pages += zone_managed_pages(upper_zone);
+
+- if (clear || empty)
++ if (clear)
+ zone->lowmem_reserve[j] = 0;
+ else
+ zone->lowmem_reserve[j] = managed_pages / ratio;
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index ce13c40626472a..66011831d7983d 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -1215,6 +1215,7 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
+ */
+ if (!src_folio) {
+ struct folio *folio;
++ bool locked;
+
+ /*
+ * Pin the page while holding the lock to be sure the
+@@ -1234,12 +1235,26 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
+ goto out;
+ }
+
++ locked = folio_trylock(folio);
++ /*
++ * We avoid waiting for folio lock with a raised
++ * refcount for large folios because extra refcounts
++ * will result in split_folio() failing later and
++ * retrying. If multiple tasks are trying to move a
++ * large folio we can end up livelocking.
++ */
++ if (!locked && folio_test_large(folio)) {
++ spin_unlock(src_ptl);
++ err = -EAGAIN;
++ goto out;
++ }
++
+ folio_get(folio);
+ src_folio = folio;
+ src_folio_pte = orig_src_pte;
+ spin_unlock(src_ptl);
+
+- if (!folio_trylock(src_folio)) {
++ if (!locked) {
+ pte_unmap(&orig_src_pte);
+ pte_unmap(&orig_dst_pte);
+ src_pte = dst_pte = NULL;
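
The userfaultfd.c hunk swaps a blocking folio lock for folio_trylock() plus -EAGAIN when the folio is large: waiting while holding an extra reference makes split_folio() fail and retry, and several movers doing this can livelock, as the comment in the hunk explains. A user-space analogy of the same trylock-and-back-off discipline, using POSIX threads (illustrative only):

        #include <errno.h>
        #include <pthread.h>

        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        /* Never block on the lock while holding other resources; on
         * contention, drop everything and let the caller retry from
         * scratch, as move_pages_pte() does for large folios. */
        static int do_move_step(void)
        {
                if (pthread_mutex_trylock(&lock) != 0)
                        return -EAGAIN; /* contended: back off and retry */

                /* ... move the item while the lock is held ... */

                pthread_mutex_unlock(&lock);
                return 0;
        }

        int main(void)
        {
                while (do_move_step() == -EAGAIN)
                        ;       /* real code would release resources first */
                return 0;
        }
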
+diff --git a/mm/vma.c b/mm/vma.c
+index 7621384d64cf5f..c9ddc06b672a52 100644
+--- a/mm/vma.c
++++ b/mm/vma.c
+@@ -1417,24 +1417,28 @@ int do_vmi_munmap(struct vma_iterator *vmi, struct mm_struct *mm,
+ static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg)
+ {
+ struct vm_area_struct *vma = vmg->vma;
++ unsigned long start = vmg->start;
++ unsigned long end = vmg->end;
+ struct vm_area_struct *merged;
+
+ /* First, try to merge. */
+ merged = vma_merge_existing_range(vmg);
+ if (merged)
+ return merged;
++ if (vmg_nomem(vmg))
++ return ERR_PTR(-ENOMEM);
+
+ /* Split any preceding portion of the VMA. */
+- if (vma->vm_start < vmg->start) {
+- int err = split_vma(vmg->vmi, vma, vmg->start, 1);
++ if (vma->vm_start < start) {
++ int err = split_vma(vmg->vmi, vma, start, 1);
+
+ if (err)
+ return ERR_PTR(err);
+ }
+
+ /* Split any trailing portion of the VMA. */
+- if (vma->vm_end > vmg->end) {
+- int err = split_vma(vmg->vmi, vma, vmg->end, 0);
++ if (vma->vm_end > end) {
++ int err = split_vma(vmg->vmi, vma, end, 0);
+
+ if (err)
+ return ERR_PTR(err);
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 3f9255dfacb0c1..fd70a7cd1c8fa8 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -586,13 +586,13 @@ static int vmap_small_pages_range_noflush(unsigned long addr, unsigned long end,
+ mask |= PGTBL_PGD_MODIFIED;
+ err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr, &mask);
+ if (err)
+- return err;
++ break;
+ } while (pgd++, addr = next, addr != end);
+
+ if (mask & ARCH_PAGE_TABLE_SYNC_MASK)
+ arch_sync_kernel_mappings(start, end);
+
+- return 0;
++ return err;
+ }
+
+ /*
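
This vmalloc.c hunk and the __apply_to_page_range() change earlier in the patch make the same fix: a mid-loop failure now breaks out instead of returning, so arch_sync_kernel_mappings() still runs for whatever portion of the range was already modified. The general shape, as a runnable C sketch (process() and the messages are hypothetical stand-ins):

        #include <stdio.h>

        static int process(int i)
        {
                return i == 3 ? -22 : 0;        /* simulate a failure at i == 3 */
        }

        static int run(int n)
        {
                int err = 0;
                int i;

                for (i = 0; i < n; i++) {
                        err = process(i);
                        if (err)
                                break;  /* not "return err": the sync below
                                         * must still cover partial progress */
                }

                /* Like arch_sync_kernel_mappings(): runs on every exit path. */
                printf("syncing %s\n", err ? "partial range" : "full range");
                return err;
        }

        int main(void)
        {
                return run(5) ? 1 : 0;
        }
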
+diff --git a/net/8021q/vlan.c b/net/8021q/vlan.c
+index e45187b8822069..41be38264493df 100644
+--- a/net/8021q/vlan.c
++++ b/net/8021q/vlan.c
+@@ -131,7 +131,8 @@ int vlan_check_real_dev(struct net_device *real_dev,
+ {
+ const char *name = real_dev->name;
+
+- if (real_dev->features & NETIF_F_VLAN_CHALLENGED) {
++ if (real_dev->features & NETIF_F_VLAN_CHALLENGED ||
++ real_dev->type != ARPHRD_ETHER) {
+ pr_info("VLANs not supported on %s\n", name);
+ NL_SET_ERR_MSG_MOD(extack, "VLANs not supported on device");
+ return -EOPNOTSUPP;
+diff --git a/net/bluetooth/mgmt.c b/net/bluetooth/mgmt.c
+index 90c21b3edcd80e..c019f69c593955 100644
+--- a/net/bluetooth/mgmt.c
++++ b/net/bluetooth/mgmt.c
+@@ -9731,6 +9731,9 @@ void mgmt_device_connected(struct hci_dev *hdev, struct hci_conn *conn,
+ sizeof(*ev) + (name ? eir_precalc_len(name_len) : 0) +
+ eir_precalc_len(sizeof(conn->dev_class)));
+
++ if (!skb)
++ return;
++
+ ev = skb_put(skb, sizeof(*ev));
+ bacpy(&ev->addr.bdaddr, &conn->dst);
+ ev->addr.type = link_to_bdaddr(conn->type, conn->dst_type);
+@@ -10484,6 +10487,8 @@ void mgmt_remote_name(struct hci_dev *hdev, bdaddr_t *bdaddr, u8 link_type,
+
+ skb = mgmt_alloc_skb(hdev, MGMT_EV_DEVICE_FOUND,
+ sizeof(*ev) + (name ? eir_precalc_len(name_len) : 0));
++ if (!skb)
++ return;
+
+ ev = skb_put(skb, sizeof(*ev));
+ bacpy(&ev->addr.bdaddr, bdaddr);
+diff --git a/net/ethtool/cabletest.c b/net/ethtool/cabletest.c
+index f22051f33868ac..84096f6b0236e8 100644
+--- a/net/ethtool/cabletest.c
++++ b/net/ethtool/cabletest.c
+@@ -72,8 +72,8 @@ int ethnl_act_cable_test(struct sk_buff *skb, struct genl_info *info)
+ dev = req_info.dev;
+
+ rtnl_lock();
+- phydev = ethnl_req_get_phydev(&req_info,
+- tb[ETHTOOL_A_CABLE_TEST_HEADER],
++ phydev = ethnl_req_get_phydev(&req_info, tb,
++ ETHTOOL_A_CABLE_TEST_HEADER,
+ info->extack);
+ if (IS_ERR_OR_NULL(phydev)) {
+ ret = -EOPNOTSUPP;
+@@ -339,8 +339,8 @@ int ethnl_act_cable_test_tdr(struct sk_buff *skb, struct genl_info *info)
+ goto out_dev_put;
+
+ rtnl_lock();
+- phydev = ethnl_req_get_phydev(&req_info,
+- tb[ETHTOOL_A_CABLE_TEST_TDR_HEADER],
++ phydev = ethnl_req_get_phydev(&req_info, tb,
++ ETHTOOL_A_CABLE_TEST_TDR_HEADER,
+ info->extack);
+ if (IS_ERR_OR_NULL(phydev)) {
+ ret = -EOPNOTSUPP;
+diff --git a/net/ethtool/linkstate.c b/net/ethtool/linkstate.c
+index 34d76e87847d08..05a5f72c99fab1 100644
+--- a/net/ethtool/linkstate.c
++++ b/net/ethtool/linkstate.c
+@@ -3,6 +3,7 @@
+ #include "netlink.h"
+ #include "common.h"
+ #include <linux/phy.h>
++#include <linux/phylib_stubs.h>
+
+ struct linkstate_req_info {
+ struct ethnl_req_info base;
+@@ -26,9 +27,8 @@ const struct nla_policy ethnl_linkstate_get_policy[] = {
+ NLA_POLICY_NESTED(ethnl_header_policy_stats),
+ };
+
+-static int linkstate_get_sqi(struct net_device *dev)
++static int linkstate_get_sqi(struct phy_device *phydev)
+ {
+- struct phy_device *phydev = dev->phydev;
+ int ret;
+
+ if (!phydev)
+@@ -46,9 +46,8 @@ static int linkstate_get_sqi(struct net_device *dev)
+ return ret;
+ }
+
+-static int linkstate_get_sqi_max(struct net_device *dev)
++static int linkstate_get_sqi_max(struct phy_device *phydev)
+ {
+- struct phy_device *phydev = dev->phydev;
+ int ret;
+
+ if (!phydev)
+@@ -100,19 +99,28 @@ static int linkstate_prepare_data(const struct ethnl_req_info *req_base,
+ {
+ struct linkstate_reply_data *data = LINKSTATE_REPDATA(reply_base);
+ struct net_device *dev = reply_base->dev;
++ struct nlattr **tb = info->attrs;
++ struct phy_device *phydev;
+ int ret;
+
++ phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_LINKSTATE_HEADER,
++ info->extack);
++ if (IS_ERR(phydev)) {
++ ret = PTR_ERR(phydev);
++ goto out;
++ }
++
+ ret = ethnl_ops_begin(dev);
+ if (ret < 0)
+ return ret;
+ data->link = __ethtool_get_link(dev);
+
+- ret = linkstate_get_sqi(dev);
++ ret = linkstate_get_sqi(phydev);
+ if (linkstate_sqi_critical_error(ret))
+ goto out;
+ data->sqi = ret;
+
+- ret = linkstate_get_sqi_max(dev);
++ ret = linkstate_get_sqi_max(phydev);
+ if (linkstate_sqi_critical_error(ret))
+ goto out;
+ data->sqi_max = ret;
+@@ -127,9 +135,9 @@ static int linkstate_prepare_data(const struct ethnl_req_info *req_base,
+ sizeof(data->link_stats) / 8);
+
+ if (req_base->flags & ETHTOOL_FLAG_STATS) {
+- if (dev->phydev)
+- data->link_stats.link_down_events =
+- READ_ONCE(dev->phydev->link_down_events);
++ if (phydev)
++ phy_ethtool_get_link_ext_stats(phydev,
++ &data->link_stats);
+
+ if (dev->ethtool_ops->get_link_ext_stats)
+ dev->ethtool_ops->get_link_ext_stats(dev,
+diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c
+index 4d18dc29b30438..e233dfc8ca4bec 100644
+--- a/net/ethtool/netlink.c
++++ b/net/ethtool/netlink.c
+@@ -210,7 +210,7 @@ int ethnl_parse_header_dev_get(struct ethnl_req_info *req_info,
+ }
+
+ struct phy_device *ethnl_req_get_phydev(const struct ethnl_req_info *req_info,
+- const struct nlattr *header,
++ struct nlattr **tb, unsigned int header,
+ struct netlink_ext_ack *extack)
+ {
+ struct phy_device *phydev;
+@@ -224,8 +224,8 @@ struct phy_device *ethnl_req_get_phydev(const struct ethnl_req_info *req_info,
+ return req_info->dev->phydev;
+
+ phydev = phy_link_topo_get_phy(req_info->dev, req_info->phy_index);
+- if (!phydev) {
+- NL_SET_ERR_MSG_ATTR(extack, header,
++ if (!phydev && tb) {
++ NL_SET_ERR_MSG_ATTR(extack, tb[header],
+ "no phy matching phyindex");
+ return ERR_PTR(-ENODEV);
+ }
+diff --git a/net/ethtool/netlink.h b/net/ethtool/netlink.h
+index 203b08eb6c6f60..5e176938d6d228 100644
+--- a/net/ethtool/netlink.h
++++ b/net/ethtool/netlink.h
+@@ -275,7 +275,8 @@ static inline void ethnl_parse_header_dev_put(struct ethnl_req_info *req_info)
+ * ethnl_req_get_phydev() - Gets the phy_device targeted by this request,
+ * if any. Must be called under rtnl_lock().
+ * @req_info: The ethnl request to get the phy from.
+- * @header: The netlink header, used for error reporting.
++ * @tb: The netlink attributes array, for error reporting.
++ * @header: The netlink header index, used for error reporting.
+ * @extack: The netlink extended ACK, for error reporting.
+ *
+ * The caller must hold RTNL, until it's done interacting with the returned
+@@ -289,7 +290,7 @@ static inline void ethnl_parse_header_dev_put(struct ethnl_req_info *req_info)
+ * is returned.
+ */
+ struct phy_device *ethnl_req_get_phydev(const struct ethnl_req_info *req_info,
+- const struct nlattr *header,
++ struct nlattr **tb, unsigned int header,
+ struct netlink_ext_ack *extack);
+
+ /**
+diff --git a/net/ethtool/phy.c b/net/ethtool/phy.c
+index ed8f690f6bac81..e067cc234419dc 100644
+--- a/net/ethtool/phy.c
++++ b/net/ethtool/phy.c
+@@ -125,7 +125,7 @@ static int ethnl_phy_parse_request(struct ethnl_req_info *req_base,
+ struct phy_req_info *req_info = PHY_REQINFO(req_base);
+ struct phy_device *phydev;
+
+- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PHY_HEADER],
++ phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PHY_HEADER,
+ extack);
+ if (!phydev)
+ return 0;
+diff --git a/net/ethtool/plca.c b/net/ethtool/plca.c
+index d95d92f173a6d2..e1f7820a6158f4 100644
+--- a/net/ethtool/plca.c
++++ b/net/ethtool/plca.c
+@@ -62,7 +62,7 @@ static int plca_get_cfg_prepare_data(const struct ethnl_req_info *req_base,
+ struct phy_device *phydev;
+ int ret;
+
+- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PLCA_HEADER],
++ phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PLCA_HEADER,
+ info->extack);
+ // check that the PHY device is available and connected
+ if (IS_ERR_OR_NULL(phydev)) {
+@@ -152,7 +152,7 @@ ethnl_set_plca(struct ethnl_req_info *req_info, struct genl_info *info)
+ bool mod = false;
+ int ret;
+
+- phydev = ethnl_req_get_phydev(req_info, tb[ETHTOOL_A_PLCA_HEADER],
++ phydev = ethnl_req_get_phydev(req_info, tb, ETHTOOL_A_PLCA_HEADER,
+ info->extack);
+ // check that the PHY device is available and connected
+ if (IS_ERR_OR_NULL(phydev))
+@@ -211,7 +211,7 @@ static int plca_get_status_prepare_data(const struct ethnl_req_info *req_base,
+ struct phy_device *phydev;
+ int ret;
+
+- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PLCA_HEADER],
++ phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PLCA_HEADER,
+ info->extack);
+ // check that the PHY device is available and connected
+ if (IS_ERR_OR_NULL(phydev)) {
+diff --git a/net/ethtool/pse-pd.c b/net/ethtool/pse-pd.c
+index a0705edca22a1a..71843de832cca7 100644
+--- a/net/ethtool/pse-pd.c
++++ b/net/ethtool/pse-pd.c
+@@ -64,7 +64,7 @@ static int pse_prepare_data(const struct ethnl_req_info *req_base,
+ if (ret < 0)
+ return ret;
+
+- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_PSE_HEADER],
++ phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_PSE_HEADER,
+ info->extack);
+ if (IS_ERR(phydev))
+ return -ENODEV;
+@@ -261,7 +261,7 @@ ethnl_set_pse(struct ethnl_req_info *req_info, struct genl_info *info)
+ struct phy_device *phydev;
+ int ret;
+
+- phydev = ethnl_req_get_phydev(req_info, tb[ETHTOOL_A_PSE_HEADER],
++ phydev = ethnl_req_get_phydev(req_info, tb, ETHTOOL_A_PSE_HEADER,
+ info->extack);
+ ret = ethnl_set_pse_validate(phydev, info);
+ if (ret)
+diff --git a/net/ethtool/stats.c b/net/ethtool/stats.c
+index 912f0c4fff2fb9..273ae4ff343fe8 100644
+--- a/net/ethtool/stats.c
++++ b/net/ethtool/stats.c
+@@ -1,5 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+
++#include <linux/phy.h>
++#include <linux/phylib_stubs.h>
++
+ #include "netlink.h"
+ #include "common.h"
+ #include "bitset.h"
+@@ -20,6 +23,7 @@ struct stats_reply_data {
+ struct ethtool_eth_mac_stats mac_stats;
+ struct ethtool_eth_ctrl_stats ctrl_stats;
+ struct ethtool_rmon_stats rmon_stats;
++ struct ethtool_phy_stats phydev_stats;
+ );
+ const struct ethtool_rmon_hist_range *rmon_ranges;
+ };
+@@ -120,8 +124,15 @@ static int stats_prepare_data(const struct ethnl_req_info *req_base,
+ struct stats_reply_data *data = STATS_REPDATA(reply_base);
+ enum ethtool_mac_stats_src src = req_info->src;
+ struct net_device *dev = reply_base->dev;
++ struct nlattr **tb = info->attrs;
++ struct phy_device *phydev;
+ int ret;
+
++ phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_STATS_HEADER,
++ info->extack);
++ if (IS_ERR(phydev))
++ return PTR_ERR(phydev);
++
+ ret = ethnl_ops_begin(dev);
+ if (ret < 0)
+ return ret;
+@@ -145,6 +156,13 @@ static int stats_prepare_data(const struct ethnl_req_info *req_base,
+ data->ctrl_stats.src = src;
+ data->rmon_stats.src = src;
+
++ if (test_bit(ETHTOOL_STATS_ETH_PHY, req_info->stat_mask) &&
++ src == ETHTOOL_MAC_STATS_SRC_AGGREGATE) {
++ if (phydev)
++ phy_ethtool_get_phy_stats(phydev, &data->phy_stats,
++ &data->phydev_stats);
++ }
++
+ if (test_bit(ETHTOOL_STATS_ETH_PHY, req_info->stat_mask) &&
+ dev->ethtool_ops->get_eth_phy_stats)
+ dev->ethtool_ops->get_eth_phy_stats(dev, &data->phy_stats);
+diff --git a/net/ethtool/strset.c b/net/ethtool/strset.c
+index b3382b3cf325c5..b9400d18f01d58 100644
+--- a/net/ethtool/strset.c
++++ b/net/ethtool/strset.c
+@@ -299,7 +299,7 @@ static int strset_prepare_data(const struct ethnl_req_info *req_base,
+ return 0;
+ }
+
+- phydev = ethnl_req_get_phydev(req_base, tb[ETHTOOL_A_HEADER_FLAGS],
++ phydev = ethnl_req_get_phydev(req_base, tb, ETHTOOL_A_HEADER_FLAGS,
+ info->extack);
+
+ /* phydev can be NULL, check for errors only */
+diff --git a/net/ipv4/tcp_offload.c b/net/ipv4/tcp_offload.c
+index 2308665b51c538..2dfac79dc78b8b 100644
+--- a/net/ipv4/tcp_offload.c
++++ b/net/ipv4/tcp_offload.c
+@@ -13,12 +13,15 @@
+ #include <net/tcp.h>
+ #include <net/protocol.h>
+
+-static void tcp_gso_tstamp(struct sk_buff *skb, unsigned int ts_seq,
++static void tcp_gso_tstamp(struct sk_buff *skb, struct sk_buff *gso_skb,
+ unsigned int seq, unsigned int mss)
+ {
++ u32 flags = skb_shinfo(gso_skb)->tx_flags & SKBTX_ANY_TSTAMP;
++ u32 ts_seq = skb_shinfo(gso_skb)->tskey;
++
+ while (skb) {
+ if (before(ts_seq, seq + mss)) {
+- skb_shinfo(skb)->tx_flags |= SKBTX_SW_TSTAMP;
++ skb_shinfo(skb)->tx_flags |= flags;
+ skb_shinfo(skb)->tskey = ts_seq;
+ return;
+ }
+@@ -193,8 +196,8 @@ struct sk_buff *tcp_gso_segment(struct sk_buff *skb,
+ th = tcp_hdr(skb);
+ seq = ntohl(th->seq);
+
+- if (unlikely(skb_shinfo(gso_skb)->tx_flags & SKBTX_SW_TSTAMP))
+- tcp_gso_tstamp(segs, skb_shinfo(gso_skb)->tskey, seq, mss);
++ if (unlikely(skb_shinfo(gso_skb)->tx_flags & SKBTX_ANY_TSTAMP))
++ tcp_gso_tstamp(segs, gso_skb, seq, mss);
+
+ newcheck = ~csum_fold(csum_add(csum_unfold(th->check), delta));
+
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index a5be6e4ed326fb..ecfca59f31f13e 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -321,13 +321,17 @@ struct sk_buff *__udp_gso_segment(struct sk_buff *gso_skb,
+
+ /* clear destructor to avoid skb_segment assigning it to tail */
+ copy_dtor = gso_skb->destructor == sock_wfree;
+- if (copy_dtor)
++ if (copy_dtor) {
+ gso_skb->destructor = NULL;
++ gso_skb->sk = NULL;
++ }
+
+ segs = skb_segment(gso_skb, features);
+ if (IS_ERR_OR_NULL(segs)) {
+- if (copy_dtor)
++ if (copy_dtor) {
+ gso_skb->destructor = sock_wfree;
++ gso_skb->sk = sk;
++ }
+ return segs;
+ }
+
+diff --git a/net/ipv6/ila/ila_lwt.c b/net/ipv6/ila/ila_lwt.c
+index ff7e734e335b06..7d574f5132e2fb 100644
+--- a/net/ipv6/ila/ila_lwt.c
++++ b/net/ipv6/ila/ila_lwt.c
+@@ -88,13 +88,15 @@ static int ila_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ goto drop;
+ }
+
+- if (ilwt->connected) {
++ /* cache only if we don't create a dst reference loop */
++ if (ilwt->connected && orig_dst->lwtstate != dst->lwtstate) {
+ local_bh_disable();
+ dst_cache_set_ip6(&ilwt->dst_cache, dst, &fl6.saddr);
+ local_bh_enable();
+ }
+ }
+
++ skb_dst_drop(skb);
+ skb_dst_set(skb, dst);
+ return dst_output(net, sk, skb);
+
+diff --git a/net/llc/llc_s_ac.c b/net/llc/llc_s_ac.c
+index 06fb8e6944b06a..7a0cae9a811148 100644
+--- a/net/llc/llc_s_ac.c
++++ b/net/llc/llc_s_ac.c
+@@ -24,7 +24,7 @@
+ #include <net/llc_s_ac.h>
+ #include <net/llc_s_ev.h>
+ #include <net/llc_sap.h>
+-
++#include <net/sock.h>
+
+ /**
+ * llc_sap_action_unitdata_ind - forward UI PDU to network layer
+@@ -40,6 +40,26 @@ int llc_sap_action_unitdata_ind(struct llc_sap *sap, struct sk_buff *skb)
+ return 0;
+ }
+
++static int llc_prepare_and_xmit(struct sk_buff *skb)
++{
++ struct llc_sap_state_ev *ev = llc_sap_ev(skb);
++ struct sk_buff *nskb;
++ int rc;
++
++ rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac);
++ if (rc)
++ return rc;
++
++ nskb = skb_clone(skb, GFP_ATOMIC);
++ if (!nskb)
++ return -ENOMEM;
++
++ if (skb->sk)
++ skb_set_owner_w(nskb, skb->sk);
++
++ return dev_queue_xmit(nskb);
++}
++
+ /**
+ * llc_sap_action_send_ui - sends UI PDU resp to UNITDATA REQ to MAC layer
+ * @sap: SAP
+@@ -52,17 +72,12 @@ int llc_sap_action_unitdata_ind(struct llc_sap *sap, struct sk_buff *skb)
+ int llc_sap_action_send_ui(struct llc_sap *sap, struct sk_buff *skb)
+ {
+ struct llc_sap_state_ev *ev = llc_sap_ev(skb);
+- int rc;
+
+ llc_pdu_header_init(skb, LLC_PDU_TYPE_U, ev->saddr.lsap,
+ ev->daddr.lsap, LLC_PDU_CMD);
+ llc_pdu_init_as_ui_cmd(skb);
+- rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac);
+- if (likely(!rc)) {
+- skb_get(skb);
+- rc = dev_queue_xmit(skb);
+- }
+- return rc;
++
++ return llc_prepare_and_xmit(skb);
+ }
+
+ /**
+@@ -77,17 +92,12 @@ int llc_sap_action_send_ui(struct llc_sap *sap, struct sk_buff *skb)
+ int llc_sap_action_send_xid_c(struct llc_sap *sap, struct sk_buff *skb)
+ {
+ struct llc_sap_state_ev *ev = llc_sap_ev(skb);
+- int rc;
+
+ llc_pdu_header_init(skb, LLC_PDU_TYPE_U_XID, ev->saddr.lsap,
+ ev->daddr.lsap, LLC_PDU_CMD);
+ llc_pdu_init_as_xid_cmd(skb, LLC_XID_NULL_CLASS_2, 0);
+- rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac);
+- if (likely(!rc)) {
+- skb_get(skb);
+- rc = dev_queue_xmit(skb);
+- }
+- return rc;
++
++ return llc_prepare_and_xmit(skb);
+ }
+
+ /**
+@@ -133,17 +143,12 @@ int llc_sap_action_send_xid_r(struct llc_sap *sap, struct sk_buff *skb)
+ int llc_sap_action_send_test_c(struct llc_sap *sap, struct sk_buff *skb)
+ {
+ struct llc_sap_state_ev *ev = llc_sap_ev(skb);
+- int rc;
+
+ llc_pdu_header_init(skb, LLC_PDU_TYPE_U, ev->saddr.lsap,
+ ev->daddr.lsap, LLC_PDU_CMD);
+ llc_pdu_init_as_test_cmd(skb);
+- rc = llc_mac_hdr_init(skb, ev->saddr.mac, ev->daddr.mac);
+- if (likely(!rc)) {
+- skb_get(skb);
+- rc = dev_queue_xmit(skb);
+- }
+- return rc;
++
++ return llc_prepare_and_xmit(skb);
+ }
+
+ int llc_sap_action_send_test_r(struct llc_sap *sap, struct sk_buff *skb)
+diff --git a/net/mac80211/ieee80211_i.h b/net/mac80211/ieee80211_i.h
+index 7a0242e937d364..bfe0514efca37f 100644
+--- a/net/mac80211/ieee80211_i.h
++++ b/net/mac80211/ieee80211_i.h
+@@ -1751,6 +1751,7 @@ struct ieee802_11_elems {
+ const struct ieee80211_eht_operation *eht_operation;
+ const struct ieee80211_multi_link_elem *ml_basic;
+ const struct ieee80211_multi_link_elem *ml_reconf;
++ const struct ieee80211_multi_link_elem *ml_epcs;
+ const struct ieee80211_bandwidth_indication *bandwidth_indication;
+ const struct ieee80211_ttlm_elem *ttlm[IEEE80211_TTLM_MAX_CNT];
+
+@@ -1781,6 +1782,7 @@ struct ieee802_11_elems {
+ /* multi-link element can be de-fragmented and thus u8 is not sufficient */
+ size_t ml_basic_len;
+ size_t ml_reconf_len;
++ size_t ml_epcs_len;
+
+ u8 ttlm_num;
+
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 111066928b963c..88751b0eb317a3 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -4733,6 +4733,7 @@ static bool ieee80211_assoc_config_link(struct ieee80211_link_data *link,
+ parse_params.start = bss_ies->data;
+ parse_params.len = bss_ies->len;
+ parse_params.bss = cbss;
++ parse_params.link_id = -1;
+ bss_elems = ieee802_11_parse_elems_full(&parse_params);
+ if (!bss_elems) {
+ ret = false;
+diff --git a/net/mac80211/parse.c b/net/mac80211/parse.c
+index 279c5143b3356d..6da39c864f45ba 100644
+--- a/net/mac80211/parse.c
++++ b/net/mac80211/parse.c
+@@ -44,6 +44,12 @@ struct ieee80211_elems_parse {
+ /* The reconfiguration Multi-Link element in the original elements */
+ const struct element *ml_reconf_elem;
+
++ /* The EPCS Multi-Link element in the original elements */
++ const struct element *ml_epcs_elem;
++
++ bool multi_link_inner;
++ bool skip_vendor;
++
+ /*
+ * scratch buffer that can be used for various element parsing related
+ * tasks, e.g., element de-fragmentation etc.
+@@ -149,16 +155,18 @@ ieee80211_parse_extension_element(u32 *crc,
+ switch (le16_get_bits(mle->control,
+ IEEE80211_ML_CONTROL_TYPE)) {
+ case IEEE80211_ML_CONTROL_TYPE_BASIC:
+- if (elems_parse->ml_basic_elem) {
++ if (elems_parse->multi_link_inner) {
+ elems->parse_error |=
+ IEEE80211_PARSE_ERR_DUP_NEST_ML_BASIC;
+ break;
+ }
+- elems_parse->ml_basic_elem = elem;
+ break;
+ case IEEE80211_ML_CONTROL_TYPE_RECONF:
+ elems_parse->ml_reconf_elem = elem;
+ break;
++ case IEEE80211_ML_CONTROL_TYPE_PRIO_ACCESS:
++ elems_parse->ml_epcs_elem = elem;
++ break;
+ default:
+ break;
+ }
+@@ -393,6 +401,9 @@ _ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params,
+ IEEE80211_PARSE_ERR_BAD_ELEM_SIZE;
+ break;
+ case WLAN_EID_VENDOR_SPECIFIC:
++ if (elems_parse->skip_vendor)
++ break;
++
+ if (elen >= 4 && pos[0] == 0x00 && pos[1] == 0x50 &&
+ pos[2] == 0xf2) {
+ /* Microsoft OUI (00:50:F2) */
+@@ -860,21 +871,36 @@ ieee80211_mle_get_sta_prof(struct ieee80211_elems_parse *elems_parse,
+ }
+ }
+
+-static void ieee80211_mle_parse_link(struct ieee80211_elems_parse *elems_parse,
+- struct ieee80211_elems_parse_params *params)
++static const struct element *
++ieee80211_prep_mle_link_parse(struct ieee80211_elems_parse *elems_parse,
++ struct ieee80211_elems_parse_params *params,
++ struct ieee80211_elems_parse_params *sub)
+ {
+ struct ieee802_11_elems *elems = &elems_parse->elems;
+ struct ieee80211_mle_per_sta_profile *prof;
+- struct ieee80211_elems_parse_params sub = {
+- .mode = params->mode,
+- .action = params->action,
+- .from_ap = params->from_ap,
+- .link_id = -1,
+- };
+- ssize_t ml_len = elems->ml_basic_len;
+- const struct element *non_inherit = NULL;
++ const struct element *tmp;
++ ssize_t ml_len;
+ const u8 *end;
+
++ if (params->mode < IEEE80211_CONN_MODE_EHT)
++ return NULL;
++
++ for_each_element_extid(tmp, WLAN_EID_EXT_EHT_MULTI_LINK,
++ elems->ie_start, elems->total_len) {
++ const struct ieee80211_multi_link_elem *mle =
++ (void *)tmp->data + 1;
++
++ if (!ieee80211_mle_size_ok(tmp->data + 1, tmp->datalen - 1))
++ continue;
++
++ if (le16_get_bits(mle->control, IEEE80211_ML_CONTROL_TYPE) !=
++ IEEE80211_ML_CONTROL_TYPE_BASIC)
++ continue;
++
++ elems_parse->ml_basic_elem = tmp;
++ break;
++ }
++
+ ml_len = cfg80211_defragment_element(elems_parse->ml_basic_elem,
+ elems->ie_start,
+ elems->total_len,
+@@ -885,26 +911,26 @@ static void ieee80211_mle_parse_link(struct ieee80211_elems_parse *elems_parse,
+ WLAN_EID_FRAGMENT);
+
+ if (ml_len < 0)
+- return;
++ return NULL;
+
+ elems->ml_basic = (const void *)elems_parse->scratch_pos;
+ elems->ml_basic_len = ml_len;
+ elems_parse->scratch_pos += ml_len;
+
+ if (params->link_id == -1)
+- return;
++ return NULL;
+
+ ieee80211_mle_get_sta_prof(elems_parse, params->link_id);
+ prof = elems->prof;
+
+ if (!prof)
+- return;
++ return NULL;
+
+ /* check if we have the 4 bytes for the fixed part in assoc response */
+ if (elems->sta_prof_len < sizeof(*prof) + prof->sta_info_len - 1 + 4) {
+ elems->prof = NULL;
+ elems->sta_prof_len = 0;
+- return;
++ return NULL;
+ }
+
+ /*
+@@ -913,13 +939,17 @@ static void ieee80211_mle_parse_link(struct ieee80211_elems_parse *elems_parse,
+ * the -1 is because the 'sta_info_len' is accounted to as part of the
+ * per-STA profile, but not part of the 'u8 variable[]' portion.
+ */
+- sub.start = prof->variable + prof->sta_info_len - 1 + 4;
++ sub->start = prof->variable + prof->sta_info_len - 1 + 4;
+ end = (const u8 *)prof + elems->sta_prof_len;
+- sub.len = end - sub.start;
++ sub->len = end - sub->start;
++
++ sub->mode = params->mode;
++ sub->action = params->action;
++ sub->from_ap = params->from_ap;
++ sub->link_id = -1;
+
+- non_inherit = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
+- sub.start, sub.len);
+- _ieee802_11_parse_elems_full(&sub, elems_parse, non_inherit);
++ return cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
++ sub->start, sub->len);
+ }
+
+ static void
+@@ -943,18 +973,43 @@ ieee80211_mle_defrag_reconf(struct ieee80211_elems_parse *elems_parse)
+ elems_parse->scratch_pos += ml_len;
+ }
+
++static void
++ieee80211_mle_defrag_epcs(struct ieee80211_elems_parse *elems_parse)
++{
++ struct ieee802_11_elems *elems = &elems_parse->elems;
++ ssize_t ml_len;
++
++ ml_len = cfg80211_defragment_element(elems_parse->ml_epcs_elem,
++ elems->ie_start,
++ elems->total_len,
++ elems_parse->scratch_pos,
++ elems_parse->scratch +
++ elems_parse->scratch_len -
++ elems_parse->scratch_pos,
++ WLAN_EID_FRAGMENT);
++ if (ml_len < 0)
++ return;
++ elems->ml_epcs = (void *)elems_parse->scratch_pos;
++ elems->ml_epcs_len = ml_len;
++ elems_parse->scratch_pos += ml_len;
++}
++
+ struct ieee802_11_elems *
+ ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params)
+ {
++ struct ieee80211_elems_parse_params sub = {};
+ struct ieee80211_elems_parse *elems_parse;
+- struct ieee802_11_elems *elems;
+ const struct element *non_inherit = NULL;
+- u8 *nontransmitted_profile;
+- int nontransmitted_profile_len = 0;
++ struct ieee802_11_elems *elems;
+ size_t scratch_len = 3 * params->len;
++ bool multi_link_inner = false;
+
+ BUILD_BUG_ON(offsetof(typeof(*elems_parse), elems) != 0);
+
++ /* cannot parse for both a specific link and non-transmitted BSS */
++ if (WARN_ON(params->link_id >= 0 && params->bss))
++ return NULL;
++
+ elems_parse = kzalloc(struct_size(elems_parse, scratch, scratch_len),
+ GFP_ATOMIC);
+ if (!elems_parse)
+@@ -971,36 +1026,55 @@ ieee802_11_parse_elems_full(struct ieee80211_elems_parse_params *params)
+ ieee80211_clear_tpe(&elems->tpe);
+ ieee80211_clear_tpe(&elems->csa_tpe);
+
+- nontransmitted_profile = elems_parse->scratch_pos;
+- nontransmitted_profile_len =
+- ieee802_11_find_bssid_profile(params->start, params->len,
+- elems, params->bss,
+- nontransmitted_profile);
+- elems_parse->scratch_pos += nontransmitted_profile_len;
+- non_inherit = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
+- nontransmitted_profile,
+- nontransmitted_profile_len);
++ /*
++ * If we're looking for a non-transmitted BSS then we cannot at
++ * the same time be looking for a second link as the two can only
++ * appear in the same frame carrying info for different BSSes.
++ *
++ * In any case, we only look for one at a time, as encoded by
++ * the WARN_ON above.
++ */
++ if (params->bss) {
++ int nontx_len =
++ ieee802_11_find_bssid_profile(params->start,
++ params->len,
++ elems, params->bss,
++ elems_parse->scratch_pos);
++ sub.start = elems_parse->scratch_pos;
++ sub.mode = params->mode;
++ sub.len = nontx_len;
++ sub.action = params->action;
++ sub.link_id = params->link_id;
++
++ /* consume the space used for non-transmitted profile */
++ elems_parse->scratch_pos += nontx_len;
++
++ non_inherit = cfg80211_find_ext_elem(WLAN_EID_EXT_NON_INHERITANCE,
++ sub.start, nontx_len);
++ } else {
++ /* must always parse to get elems_parse->ml_basic_elem */
++ non_inherit = ieee80211_prep_mle_link_parse(elems_parse, params,
++ &sub);
++ multi_link_inner = true;
++ }
+
++ elems_parse->skip_vendor =
++ cfg80211_find_elem(WLAN_EID_VENDOR_SPECIFIC,
++ sub.start, sub.len);
+ elems->crc = _ieee802_11_parse_elems_full(params, elems_parse,
+ non_inherit);
+
+- /* Override with nontransmitted profile, if found */
+- if (nontransmitted_profile_len) {
+- struct ieee80211_elems_parse_params sub = {
+- .mode = params->mode,
+- .start = nontransmitted_profile,
+- .len = nontransmitted_profile_len,
+- .action = params->action,
+- .link_id = params->link_id,
+- };
+-
++ /* Override with nontransmitted/per-STA profile if found */
++ if (sub.len) {
++ elems_parse->multi_link_inner = multi_link_inner;
++ elems_parse->skip_vendor = false;
+ _ieee802_11_parse_elems_full(&sub, elems_parse, NULL);
+ }
+
+- ieee80211_mle_parse_link(elems_parse, params);
+-
+ ieee80211_mle_defrag_reconf(elems_parse);
+
++ ieee80211_mle_defrag_epcs(elems_parse);
++
+ if (elems->tim && !elems->parse_error) {
+ const struct ieee80211_tim_ie *tim_ie = elems->tim;
+
+diff --git a/net/mptcp/pm_netlink.c b/net/mptcp/pm_netlink.c
+index b4ba2d9f041765..2a085ec5bfd097 100644
+--- a/net/mptcp/pm_netlink.c
++++ b/net/mptcp/pm_netlink.c
+@@ -968,7 +968,7 @@ static void __mptcp_pm_release_addr_entry(struct mptcp_pm_addr_entry *entry)
+
+ static int mptcp_pm_nl_append_new_local_addr(struct pm_nl_pernet *pernet,
+ struct mptcp_pm_addr_entry *entry,
+- bool needs_id)
++ bool needs_id, bool replace)
+ {
+ struct mptcp_pm_addr_entry *cur, *del_entry = NULL;
+ unsigned int addr_max;
+@@ -1008,6 +1008,17 @@ static int mptcp_pm_nl_append_new_local_addr(struct pm_nl_pernet *pernet,
+ if (entry->addr.id)
+ goto out;
+
++ /* allow callers that only need to look up the local
++ * addr's id to skip replacement. This allows them to
++ * avoid calling synchronize_rcu in the packet recv
++ * path.
++ */
++ if (!replace) {
++ kfree(entry);
++ ret = cur->addr.id;
++ goto out;
++ }
++
+ pernet->addrs--;
+ entry->addr.id = cur->addr.id;
+ list_del_rcu(&cur->list);
+@@ -1160,7 +1171,7 @@ int mptcp_pm_nl_get_local_id(struct mptcp_sock *msk, struct mptcp_addr_info *skc
+ entry->ifindex = 0;
+ entry->flags = MPTCP_PM_ADDR_FLAG_IMPLICIT;
+ entry->lsk = NULL;
+- ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, true);
++ ret = mptcp_pm_nl_append_new_local_addr(pernet, entry, true, false);
+ if (ret < 0)
+ kfree(entry);
+
+@@ -1432,7 +1443,8 @@ int mptcp_pm_nl_add_addr_doit(struct sk_buff *skb, struct genl_info *info)
+ }
+ }
+ ret = mptcp_pm_nl_append_new_local_addr(pernet, entry,
+- !mptcp_pm_has_addr_attr_id(attr, info));
++ !mptcp_pm_has_addr_attr_id(attr, info),
++ true);
+ if (ret < 0) {
+ GENL_SET_ERR_MSG_FMT(info, "too many addresses or duplicate one: %d", ret);
+ goto out_free;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index 1e78f575fb5630..ecfceddce00fcc 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -4218,6 +4218,11 @@ static int parse_monitor_flags(struct nlattr *nla, u32 *mntrflags)
+ if (flags[flag])
+ *mntrflags |= (1<<flag);
+
++ /* cooked monitor mode is incompatible with other modes */
++ if (*mntrflags & MONITOR_FLAG_COOK_FRAMES &&
++ *mntrflags != MONITOR_FLAG_COOK_FRAMES)
++ return -EOPNOTSUPP;
++
+ *mntrflags |= MONITOR_FLAG_CHANGED;
+
+ return 0;
+diff --git a/net/wireless/reg.c b/net/wireless/reg.c
+index 6489ba943a633d..2b626078739c52 100644
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -407,7 +407,8 @@ static bool is_an_alpha2(const char *alpha2)
+ {
+ if (!alpha2)
+ return false;
+- return isalpha(alpha2[0]) && isalpha(alpha2[1]);
++ return isascii(alpha2[0]) && isalpha(alpha2[0]) &&
++ isascii(alpha2[1]) && isalpha(alpha2[1]);
+ }
+
+ static bool alpha2_equal(const char *alpha2_x, const char *alpha2_y)
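
The reg.c fix guards isalpha() with isascii() because the kernel's ctype table also classifies Latin-1 letters above 0x7f as alphabetic, so a regulatory alpha2 code containing such bytes would previously pass validation. A small user-space demonstration of the hardened check (libc ctype stands in for the kernel's here; behaviour for bytes >= 0x80 is locale-dependent in userspace):

        #include <ctype.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* Same shape as the fixed is_an_alpha2(): accept ASCII letters only. */
        static bool is_an_alpha2(const char *alpha2)
        {
                if (!alpha2)
                        return false;
                return isascii((unsigned char)alpha2[0]) &&
                       isalpha((unsigned char)alpha2[0]) &&
                       isascii((unsigned char)alpha2[1]) &&
                       isalpha((unsigned char)alpha2[1]);
        }

        int main(void)
        {
                printf("US -> %d\n", is_an_alpha2("US"));               /* 1 */
                printf("\\xc9X -> %d\n", is_an_alpha2("\xc9X"));        /* 0 */
                return 0;
        }
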
+diff --git a/rust/Makefile b/rust/Makefile
+index 45779a064fa4f4..09521fc449dca2 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -3,7 +3,7 @@
+ # Where to place rustdoc generated documentation
+ rustdoc_output := $(objtree)/Documentation/output/rust/rustdoc
+
+-obj-$(CONFIG_RUST) += core.o compiler_builtins.o
++obj-$(CONFIG_RUST) += core.o compiler_builtins.o ffi.o
+ always-$(CONFIG_RUST) += exports_core_generated.h
+
+ # Missing prototypes are expected in the helpers since these are exported
+@@ -15,8 +15,8 @@ always-$(CONFIG_RUST) += libmacros.so
+ no-clean-files += libmacros.so
+
+ always-$(CONFIG_RUST) += bindings/bindings_generated.rs bindings/bindings_helpers_generated.rs
+-obj-$(CONFIG_RUST) += alloc.o bindings.o kernel.o
+-always-$(CONFIG_RUST) += exports_alloc_generated.h exports_helpers_generated.h \
++obj-$(CONFIG_RUST) += bindings.o kernel.o
++always-$(CONFIG_RUST) += exports_helpers_generated.h \
+ exports_bindings_generated.h exports_kernel_generated.h
+
+ always-$(CONFIG_RUST) += uapi/uapi_generated.rs
+@@ -53,15 +53,10 @@ endif
+ core-cfgs = \
+ --cfg no_fp_fmt_parse
+
+-alloc-cfgs = \
+- --cfg no_global_oom_handling \
+- --cfg no_rc \
+- --cfg no_sync
+-
+ quiet_cmd_rustdoc = RUSTDOC $(if $(rustdoc_host),H, ) $<
+ cmd_rustdoc = \
+ OBJTREE=$(abspath $(objtree)) \
+- $(RUSTDOC) $(if $(rustdoc_host),$(rust_common_flags),$(rust_flags)) \
++ $(RUSTDOC) $(filter-out $(skip_flags),$(if $(rustdoc_host),$(rust_common_flags),$(rust_flags))) \
+ $(rustc_target_flags) -L$(objtree)/$(obj) \
+ -Zunstable-options --generate-link-to-definition \
+ --output $(rustdoc_output) \
+@@ -81,7 +76,7 @@ quiet_cmd_rustdoc = RUSTDOC $(if $(rustdoc_host),H, ) $<
+ # command-like flags to solve the issue. Meanwhile, we use the non-custom case
+ # and then retouch the generated files.
+ rustdoc: rustdoc-core rustdoc-macros rustdoc-compiler_builtins \
+- rustdoc-alloc rustdoc-kernel
++ rustdoc-kernel
+ $(Q)cp $(srctree)/Documentation/images/logo.svg $(rustdoc_output)/static.files/
+ $(Q)cp $(srctree)/Documentation/images/COPYING-logo $(rustdoc_output)/static.files/
+ $(Q)find $(rustdoc_output) -name '*.html' -type f -print0 | xargs -0 sed -Ei \
+@@ -98,6 +93,9 @@ rustdoc-macros: private rustc_target_flags = --crate-type proc-macro \
+ rustdoc-macros: $(src)/macros/lib.rs FORCE
+ +$(call if_changed,rustdoc)
+
++# Starting with Rust 1.82.0, skipping `-Wrustdoc::unescaped_backticks` should
++# not be needed -- see https://github.com/rust-lang/rust/pull/128307.
++rustdoc-core: private skip_flags = -Wrustdoc::unescaped_backticks
+ rustdoc-core: private rustc_target_flags = $(core-cfgs)
+ rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs FORCE
+ +$(call if_changed,rustdoc)
+@@ -105,20 +103,14 @@ rustdoc-core: $(RUST_LIB_SRC)/core/src/lib.rs FORCE
+ rustdoc-compiler_builtins: $(src)/compiler_builtins.rs rustdoc-core FORCE
+ +$(call if_changed,rustdoc)
+
+-# We need to allow `rustdoc::broken_intra_doc_links` because some
+-# `no_global_oom_handling` functions refer to non-`no_global_oom_handling`
+-# functions. Ideally `rustdoc` would have a way to distinguish broken links
+-# due to things that are "configured out" vs. entirely non-existing ones.
+-rustdoc-alloc: private rustc_target_flags = $(alloc-cfgs) \
+- -Arustdoc::broken_intra_doc_links
+-rustdoc-alloc: $(RUST_LIB_SRC)/alloc/src/lib.rs rustdoc-core rustdoc-compiler_builtins FORCE
++rustdoc-ffi: $(src)/ffi.rs rustdoc-core FORCE
+ +$(call if_changed,rustdoc)
+
+-rustdoc-kernel: private rustc_target_flags = --extern alloc \
++rustdoc-kernel: private rustc_target_flags = --extern ffi \
+ --extern build_error --extern macros=$(objtree)/$(obj)/libmacros.so \
+ --extern bindings --extern uapi
+-rustdoc-kernel: $(src)/kernel/lib.rs rustdoc-core rustdoc-macros \
+- rustdoc-compiler_builtins rustdoc-alloc $(obj)/libmacros.so \
++rustdoc-kernel: $(src)/kernel/lib.rs rustdoc-core rustdoc-ffi rustdoc-macros \
++ rustdoc-compiler_builtins $(obj)/libmacros.so \
+ $(obj)/bindings.o FORCE
+ +$(call if_changed,rustdoc)
+
+@@ -135,15 +127,28 @@ quiet_cmd_rustc_test_library = RUSTC TL $<
+ rusttestlib-build_error: $(src)/build_error.rs FORCE
+ +$(call if_changed,rustc_test_library)
+
++rusttestlib-ffi: $(src)/ffi.rs FORCE
++ +$(call if_changed,rustc_test_library)
++
+ rusttestlib-macros: private rustc_target_flags = --extern proc_macro
+ rusttestlib-macros: private rustc_test_library_proc = yes
+ rusttestlib-macros: $(src)/macros/lib.rs FORCE
+ +$(call if_changed,rustc_test_library)
+
+-rusttestlib-bindings: $(src)/bindings/lib.rs FORCE
++rusttestlib-kernel: private rustc_target_flags = --extern ffi \
++ --extern build_error --extern macros \
++ --extern bindings --extern uapi
++rusttestlib-kernel: $(src)/kernel/lib.rs \
++ rusttestlib-bindings rusttestlib-uapi rusttestlib-build_error \
++ $(obj)/libmacros.so $(obj)/bindings.o FORCE
++ +$(call if_changed,rustc_test_library)
++
++rusttestlib-bindings: private rustc_target_flags = --extern ffi
++rusttestlib-bindings: $(src)/bindings/lib.rs rusttestlib-ffi FORCE
+ +$(call if_changed,rustc_test_library)
+
+-rusttestlib-uapi: $(src)/uapi/lib.rs FORCE
++rusttestlib-uapi: private rustc_target_flags = --extern ffi
++rusttestlib-uapi: $(src)/uapi/lib.rs rusttestlib-ffi FORCE
+ +$(call if_changed,rustc_test_library)
+
+ quiet_cmd_rustdoc_test = RUSTDOC T $<
+@@ -162,7 +167,7 @@ quiet_cmd_rustdoc_test_kernel = RUSTDOC TK $<
+ mkdir -p $(objtree)/$(obj)/test/doctests/kernel; \
+ OBJTREE=$(abspath $(objtree)) \
+ $(RUSTDOC) --test $(rust_flags) \
+- -L$(objtree)/$(obj) --extern alloc --extern kernel \
++ -L$(objtree)/$(obj) --extern ffi --extern kernel \
+ --extern build_error --extern macros \
+ --extern bindings --extern uapi \
+ --no-run --crate-name kernel -Zunstable-options \
+@@ -192,19 +197,20 @@ quiet_cmd_rustc_test = RUSTC T $<
+
+ rusttest: rusttest-macros rusttest-kernel
+
+-rusttest-macros: private rustc_target_flags = --extern proc_macro
++rusttest-macros: private rustc_target_flags = --extern proc_macro \
++ --extern macros --extern kernel
+ rusttest-macros: private rustdoc_test_target_flags = --crate-type proc-macro
+-rusttest-macros: $(src)/macros/lib.rs FORCE
++rusttest-macros: $(src)/macros/lib.rs \
++ rusttestlib-macros rusttestlib-kernel FORCE
+ +$(call if_changed,rustc_test)
+ +$(call if_changed,rustdoc_test)
+
+-rusttest-kernel: private rustc_target_flags = --extern alloc \
++rusttest-kernel: private rustc_target_flags = --extern ffi \
+ --extern build_error --extern macros --extern bindings --extern uapi
+-rusttest-kernel: $(src)/kernel/lib.rs \
++rusttest-kernel: $(src)/kernel/lib.rs rusttestlib-ffi rusttestlib-kernel \
+ rusttestlib-build_error rusttestlib-macros rusttestlib-bindings \
+ rusttestlib-uapi FORCE
+ +$(call if_changed,rustc_test)
+- +$(call if_changed,rustc_test_library)
+
+ ifdef CONFIG_CC_IS_CLANG
+ bindgen_c_flags = $(c_flags)
+@@ -266,7 +272,11 @@ else
+ bindgen_c_flags_lto = $(bindgen_c_flags)
+ endif
+
+-bindgen_c_flags_final = $(bindgen_c_flags_lto) -D__BINDGEN__
++# `-fno-builtin` is passed to avoid `bindgen` from using `clang` builtin
++# prototypes for functions like `memcpy` -- if this flag is not passed,
++# `bindgen`-generated prototypes use `c_ulong` or `c_uint` depending on
++# architecture instead of generating `usize`.
++bindgen_c_flags_final = $(bindgen_c_flags_lto) -fno-builtin -D__BINDGEN__
+
+ # Each `bindgen` release may upgrade the list of Rust target versions. By
+ # default, the highest stable release in their list is used. Thus we need to set
+@@ -284,7 +294,7 @@ bindgen_c_flags_final = $(bindgen_c_flags_lto) -D__BINDGEN__
+ quiet_cmd_bindgen = BINDGEN $@
+ cmd_bindgen = \
+ $(BINDGEN) $< $(bindgen_target_flags) --rust-target 1.68 \
+- --use-core --with-derive-default --ctypes-prefix core::ffi --no-layout-tests \
++ --use-core --with-derive-default --ctypes-prefix ffi --no-layout-tests \
+ --no-debug '.*' --enable-function-attribute-detection \
+ -o $@ -- $(bindgen_c_flags_final) -DMODULE \
+ $(bindgen_target_cflags) $(bindgen_target_extra)
+@@ -325,9 +335,6 @@ quiet_cmd_exports = EXPORTS $@
+ $(obj)/exports_core_generated.h: $(obj)/core.o FORCE
+ $(call if_changed,exports)
+
+-$(obj)/exports_alloc_generated.h: $(obj)/alloc.o FORCE
+- $(call if_changed,exports)
+-
+ # Even though Rust kernel modules should never use the bindings directly,
+ # symbols from the `bindings` crate and the C helpers need to be exported
+ # because Rust generics and inlined functions may not get their code generated
+@@ -374,7 +381,7 @@ quiet_cmd_rustc_library = $(if $(skip_clippy),RUSTC,$(RUSTC_OR_CLIPPY_QUIET)) L
+
+ rust-analyzer:
+ $(Q)$(srctree)/scripts/generate_rust_analyzer.py \
+- --cfgs='core=$(core-cfgs)' --cfgs='alloc=$(alloc-cfgs)' \
++ --cfgs='core=$(core-cfgs)' \
+ $(realpath $(srctree)) $(realpath $(objtree)) \
+ $(rustc_sysroot) $(RUST_LIB_SRC) $(KBUILD_EXTMOD) > \
+ $(if $(KBUILD_EXTMOD),$(extmod_prefix),$(objtree))/rust-project.json
+@@ -412,29 +419,28 @@ $(obj)/compiler_builtins.o: private rustc_objcopy = -w -W '__*'
+ $(obj)/compiler_builtins.o: $(src)/compiler_builtins.rs $(obj)/core.o FORCE
+ +$(call if_changed_rule,rustc_library)
+
+-$(obj)/alloc.o: private skip_clippy = 1
+-$(obj)/alloc.o: private skip_flags = -Wunreachable_pub
+-$(obj)/alloc.o: private rustc_target_flags = $(alloc-cfgs)
+-$(obj)/alloc.o: $(RUST_LIB_SRC)/alloc/src/lib.rs $(obj)/compiler_builtins.o FORCE
++$(obj)/build_error.o: $(src)/build_error.rs $(obj)/compiler_builtins.o FORCE
+ +$(call if_changed_rule,rustc_library)
+
+-$(obj)/build_error.o: $(src)/build_error.rs $(obj)/compiler_builtins.o FORCE
++$(obj)/ffi.o: $(src)/ffi.rs $(obj)/compiler_builtins.o FORCE
+ +$(call if_changed_rule,rustc_library)
+
++$(obj)/bindings.o: private rustc_target_flags = --extern ffi
+ $(obj)/bindings.o: $(src)/bindings/lib.rs \
+- $(obj)/compiler_builtins.o \
++ $(obj)/ffi.o \
+ $(obj)/bindings/bindings_generated.rs \
+ $(obj)/bindings/bindings_helpers_generated.rs FORCE
+ +$(call if_changed_rule,rustc_library)
+
++$(obj)/uapi.o: private rustc_target_flags = --extern ffi
+ $(obj)/uapi.o: $(src)/uapi/lib.rs \
+- $(obj)/compiler_builtins.o \
++ $(obj)/ffi.o \
+ $(obj)/uapi/uapi_generated.rs FORCE
+ +$(call if_changed_rule,rustc_library)
+
+-$(obj)/kernel.o: private rustc_target_flags = --extern alloc \
++$(obj)/kernel.o: private rustc_target_flags = --extern ffi \
+ --extern build_error --extern macros --extern bindings --extern uapi
+-$(obj)/kernel.o: $(src)/kernel/lib.rs $(obj)/alloc.o $(obj)/build_error.o \
++$(obj)/kernel.o: $(src)/kernel/lib.rs $(obj)/build_error.o \
+ $(obj)/libmacros.so $(obj)/bindings.o $(obj)/uapi.o FORCE
+ +$(call if_changed_rule,rustc_library)
+
+diff --git a/rust/bindgen_parameters b/rust/bindgen_parameters
+index b7c7483123b7ab..0f96af8b9a7fee 100644
+--- a/rust/bindgen_parameters
++++ b/rust/bindgen_parameters
+@@ -1,5 +1,10 @@
+ # SPDX-License-Identifier: GPL-2.0
+
++# We want to map these types to `isize`/`usize` manually, instead of
++# define them as `int`/`long` depending on platform bitwidth.
++--blocklist-type __kernel_s?size_t
++--blocklist-type __kernel_ptrdiff_t
++
+ --opaque-type xregs_state
+ --opaque-type desc_struct
+ --opaque-type arch_lbr_state
+diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
+index ae82e9c941afa1..a80783fcbe042a 100644
+--- a/rust/bindings/bindings_helper.h
++++ b/rust/bindings/bindings_helper.h
+@@ -31,4 +31,5 @@ const gfp_t RUST_CONST_HELPER_GFP_KERNEL_ACCOUNT = GFP_KERNEL_ACCOUNT;
+ const gfp_t RUST_CONST_HELPER_GFP_NOWAIT = GFP_NOWAIT;
+ const gfp_t RUST_CONST_HELPER___GFP_ZERO = __GFP_ZERO;
+ const gfp_t RUST_CONST_HELPER___GFP_HIGHMEM = ___GFP_HIGHMEM;
++const gfp_t RUST_CONST_HELPER___GFP_NOWARN = ___GFP_NOWARN;
+ const blk_features_t RUST_CONST_HELPER_BLK_FEAT_ROTATIONAL = BLK_FEAT_ROTATIONAL;
+diff --git a/rust/bindings/lib.rs b/rust/bindings/lib.rs
+index 93a1a3fc97bc9b..014af0d1fc70cb 100644
+--- a/rust/bindings/lib.rs
++++ b/rust/bindings/lib.rs
+@@ -25,7 +25,13 @@
+ )]
+
+ #[allow(dead_code)]
++#[allow(clippy::undocumented_unsafe_blocks)]
+ mod bindings_raw {
++ // Manual definition for blocklisted types.
++ type __kernel_size_t = usize;
++ type __kernel_ssize_t = isize;
++ type __kernel_ptrdiff_t = isize;
++
+ // Use glob import here to expose all helpers.
+ // Symbols defined within the module will take precedence to the glob import.
+ pub use super::bindings_helper::*;
+diff --git a/rust/exports.c b/rust/exports.c
+index e5695f3b45b7aa..82a037381798d7 100644
+--- a/rust/exports.c
++++ b/rust/exports.c
+@@ -16,7 +16,6 @@
+ #define EXPORT_SYMBOL_RUST_GPL(sym) extern int sym; EXPORT_SYMBOL_GPL(sym)
+
+ #include "exports_core_generated.h"
+-#include "exports_alloc_generated.h"
+ #include "exports_helpers_generated.h"
+ #include "exports_bindings_generated.h"
+ #include "exports_kernel_generated.h"
+diff --git a/rust/ffi.rs b/rust/ffi.rs
+new file mode 100644
+index 00000000000000..584f75b49862b3
+--- /dev/null
++++ b/rust/ffi.rs
+@@ -0,0 +1,48 @@
++// SPDX-License-Identifier: GPL-2.0
++
++//! Foreign function interface (FFI) types.
++//!
++//! This crate provides mapping from C primitive types to Rust ones.
++//!
++//! The Rust [`core`] crate provides [`core::ffi`], which maps integer types to the platform default
++//! C ABI. The kernel does not use [`core::ffi`], so it can customise mappings that deviate
++//! from the platform default.
++
++#![no_std]
++
++macro_rules! alias {
++ ($($name:ident = $ty:ty;)*) => {$(
++ #[allow(non_camel_case_types, missing_docs)]
++ pub type $name = $ty;
++
++ // Check size compatibility with `core`.
++ const _: () = assert!(
++ core::mem::size_of::<$name>() == core::mem::size_of::<core::ffi::$name>()
++ );
++ )*}
++}
++
++alias! {
++ // `core::ffi::c_char` is either `i8` or `u8` depending on architecture. In the kernel, we use
++ // `-funsigned-char` so it's always mapped to `u8`.
++ c_char = u8;
++
++ c_schar = i8;
++ c_uchar = u8;
++
++ c_short = i16;
++ c_ushort = u16;
++
++ c_int = i32;
++ c_uint = u32;
++
++ // In the kernel, `intptr_t` is defined to be `long` in all platforms, so we can map the type to
++ // `isize`.
++ c_long = isize;
++ c_ulong = usize;
++
++ c_longlong = i64;
++ c_ulonglong = u64;
++}
++
++pub use core::ffi::c_void;
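
ffi.rs pins c_char to u8 because the kernel builds with -funsigned-char, making plain char unsigned on every architecture, whereas core::ffi::c_char follows the platform ABI. The difference is easy to see in user-space C; on x86-64, where char defaults to signed, this prints -1 normally and 255 when compiled with -funsigned-char:

        #include <stdio.h>

        int main(void)
        {
                char c = '\xff';

                /* Value depends on the signedness of plain char:
                 * -1 where char is signed, 255 under -funsigned-char. */
                printf("%d\n", (int)c);
                return 0;
        }
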
+diff --git a/rust/helpers/helpers.c b/rust/helpers/helpers.c
+index 30f40149f3a969..20a0c69d5cc7b8 100644
+--- a/rust/helpers/helpers.c
++++ b/rust/helpers/helpers.c
+@@ -22,5 +22,6 @@
+ #include "spinlock.c"
+ #include "task.c"
+ #include "uaccess.c"
++#include "vmalloc.c"
+ #include "wait.c"
+ #include "workqueue.c"
+diff --git a/rust/helpers/slab.c b/rust/helpers/slab.c
+index f043e087f9d666..a842bfbddcba91 100644
+--- a/rust/helpers/slab.c
++++ b/rust/helpers/slab.c
+@@ -7,3 +7,9 @@ rust_helper_krealloc(const void *objp, size_t new_size, gfp_t flags)
+ {
+ return krealloc(objp, new_size, flags);
+ }
++
++void * __must_check __realloc_size(2)
++rust_helper_kvrealloc(const void *p, size_t size, gfp_t flags)
++{
++ return kvrealloc(p, size, flags);
++}
+diff --git a/rust/helpers/vmalloc.c b/rust/helpers/vmalloc.c
+new file mode 100644
+index 00000000000000..80d34501bbc010
+--- /dev/null
++++ b/rust/helpers/vmalloc.c
+@@ -0,0 +1,9 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/vmalloc.h>
++
++void * __must_check __realloc_size(2)
++rust_helper_vrealloc(const void *p, size_t size, gfp_t flags)
++{
++ return vrealloc(p, size, flags);
++}
+diff --git a/rust/kernel/alloc.rs b/rust/kernel/alloc.rs
+index 1966bd40701741..f2f7f3a53d298c 100644
+--- a/rust/kernel/alloc.rs
++++ b/rust/kernel/alloc.rs
+@@ -1,23 +1,41 @@
+ // SPDX-License-Identifier: GPL-2.0
+
+-//! Extensions to the [`alloc`] crate.
++//! Implementation of the kernel's memory allocation infrastructure.
+
+-#[cfg(not(test))]
+-#[cfg(not(testlib))]
+-mod allocator;
+-pub mod box_ext;
+-pub mod vec_ext;
++#[cfg(not(any(test, testlib)))]
++pub mod allocator;
++pub mod kbox;
++pub mod kvec;
++pub mod layout;
++
++#[cfg(any(test, testlib))]
++pub mod allocator_test;
++
++#[cfg(any(test, testlib))]
++pub use self::allocator_test as allocator;
++
++pub use self::kbox::Box;
++pub use self::kbox::KBox;
++pub use self::kbox::KVBox;
++pub use self::kbox::VBox;
++
++pub use self::kvec::IntoIter;
++pub use self::kvec::KVVec;
++pub use self::kvec::KVec;
++pub use self::kvec::VVec;
++pub use self::kvec::Vec;
+
+ /// Indicates an allocation error.
+ #[derive(Copy, Clone, PartialEq, Eq, Debug)]
+ pub struct AllocError;
++use core::{alloc::Layout, ptr::NonNull};
+
+ /// Flags to be used when allocating memory.
+ ///
+ /// They can be combined with the operators `|`, `&`, and `!`.
+ ///
+ /// Values can be used from the [`flags`] module.
+-#[derive(Clone, Copy)]
++#[derive(Clone, Copy, PartialEq)]
+ pub struct Flags(u32);
+
+ impl Flags {
+@@ -25,6 +43,11 @@ impl Flags {
+ pub(crate) fn as_raw(self) -> u32 {
+ self.0
+ }
++
++ /// Check whether `flags` is contained in `self`.
++ pub fn contains(self, flags: Flags) -> bool {
++ (self & flags) == flags
++ }
+ }
+
+ impl core::ops::BitOr for Flags {
+@@ -85,4 +108,117 @@ pub mod flags {
+ /// use any filesystem callback. It is very likely to fail to allocate memory, even for very
+ /// small allocations.
+ pub const GFP_NOWAIT: Flags = Flags(bindings::GFP_NOWAIT);
++
++ /// Suppresses allocation failure reports.
++ ///
++ /// This is normally or'd with other flags.
++ pub const __GFP_NOWARN: Flags = Flags(bindings::__GFP_NOWARN);
++}
++
++/// The kernel's [`Allocator`] trait.
++///
++/// An implementation of [`Allocator`] can allocate, re-allocate and free memory buffers described
++/// via [`Layout`].
++///
++/// [`Allocator`] is designed to be implemented as a ZST; [`Allocator`] functions do not operate on
++/// an object instance.
++///
++/// In order to be able to support `#[derive(SmartPointer)]` later on, we need to avoid a design
++/// that requires an `Allocator` to be instantiated, hence its functions must not contain any kind
++/// of `self` parameter.
++///
++/// # Safety
++///
++/// - A memory allocation returned from an allocator must remain valid until it is explicitly freed.
++///
++/// - Any pointer to a valid memory allocation must be valid to be passed to any other [`Allocator`]
++/// function of the same type.
++///
++/// - Implementers must ensure that all trait functions abide by the guarantees documented in the
++/// `# Guarantees` sections.
++pub unsafe trait Allocator {
++ /// Allocate memory based on `layout` and `flags`.
++ ///
++ /// On success, returns a buffer represented as `NonNull<[u8]>` that satisfies the layout
++ /// constraints (i.e. minimum size and alignment as specified by `layout`).
++ ///
++ /// This function is equivalent to `realloc` when called with `None`.
++ ///
++ /// # Guarantees
++ ///
++ /// When the return value is `Ok(ptr)`, then `ptr` is
++ /// - valid for reads and writes for `layout.size()` bytes, until it is passed to
++ /// [`Allocator::free`] or [`Allocator::realloc`],
++ /// - aligned to `layout.align()`,
++ ///
++ /// Additionally, `Flags` are honored as documented in
++ /// <https://docs.kernel.org/core-api/mm-api.html#mm-api-gfp-flags>.
++ fn alloc(layout: Layout, flags: Flags) -> Result<NonNull<[u8]>, AllocError> {
++ // SAFETY: Passing `None` to `realloc` is valid by its safety requirements and asks for a
++ // new memory allocation.
++ unsafe { Self::realloc(None, layout, Layout::new::<()>(), flags) }
++ }
++
++ /// Re-allocate an existing memory allocation to satisfy the requested `layout`.
++ ///
++ /// If the requested size is zero, `realloc` behaves equivalent to `free`.
++ ///
++ /// If the requested size is larger than the size of the existing allocation, a successful call
++ /// to `realloc` guarantees that the new or grown buffer has at least `Layout::size` bytes, but
++ /// may also be larger.
++ ///
++ /// If the requested size is smaller than the size of the existing allocation, `realloc` may or
++ /// may not shrink the buffer; this is implementation specific to the allocator.
++ ///
++ /// On allocation failure, the existing buffer, if any, remains valid.
++ ///
++ /// The buffer is represented as `NonNull<[u8]>`.
++ ///
++ /// # Safety
++ ///
++ /// - If `ptr == Some(p)`, then `p` must point to an existing and valid memory allocation
++ /// created by this [`Allocator`]; if `old_layout` is zero-sized `p` does not need to be a
++ /// pointer returned by this [`Allocator`].
++ /// - `ptr` is allowed to be `None`; in this case a new memory allocation is created and
++ /// `old_layout` is ignored.
++ /// - `old_layout` must match the `Layout` the allocation has been created with.
++ ///
++ /// # Guarantees
++ ///
++ /// This function has the same guarantees as [`Allocator::alloc`]. When `ptr == Some(p)`, then
++ /// it additionally guarantees that:
++ /// - the contents of the memory pointed to by `p` are preserved up to the lesser of the new
++ /// and old size, i.e. `ret_ptr[0..min(layout.size(), old_layout.size())] ==
++ /// p[0..min(layout.size(), old_layout.size())]`.
++ /// - when the return value is `Err(AllocError)`, then `ptr` is still valid.
++ unsafe fn realloc(
++ ptr: Option<NonNull<u8>>,
++ layout: Layout,
++ old_layout: Layout,
++ flags: Flags,
++ ) -> Result<NonNull<[u8]>, AllocError>;
++
++ /// Free an existing memory allocation.
++ ///
++ /// # Safety
++ ///
++ /// - `ptr` must point to an existing and valid memory allocation created by this
++ /// [`Allocator`]; if `layout` is zero-sized, `ptr` does not need to be a pointer returned
++ /// by this [`Allocator`].
++ /// - `layout` must match the `Layout` the allocation has been created with.
++ /// - The memory allocation at `ptr` must never again be read from or written to.
++ unsafe fn free(ptr: NonNull<u8>, layout: Layout) {
++ // SAFETY: The caller guarantees that `ptr` points at a valid allocation created by this
++ // allocator. We are passing a `Layout` with the smallest possible alignment, so it is
++ // smaller than or equal to the alignment previously used with this allocation.
++ let _ = unsafe { Self::realloc(Some(ptr), Layout::new::<()>(), layout, Flags(0)) };
++ }
++}
++
++/// Returns a properly aligned dangling pointer from the given `layout`.
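++///
++/// For example, for a `layout` with alignment 8 this yields the address value 8 as a dangling,
++/// but well aligned, pointer.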
++pub(crate) fn dangling_from_layout(layout: Layout) -> NonNull<u8> {
++ let ptr = layout.align() as *mut u8;
++
++ // SAFETY: `layout.align()` (and hence `ptr`) is guaranteed to be non-zero.
++ unsafe { NonNull::new_unchecked(ptr) }
+ }
+diff --git a/rust/kernel/alloc/allocator.rs b/rust/kernel/alloc/allocator.rs
+index e6ea601f38c6d9..439985e29fbc0e 100644
+--- a/rust/kernel/alloc/allocator.rs
++++ b/rust/kernel/alloc/allocator.rs
+@@ -1,74 +1,188 @@
+ // SPDX-License-Identifier: GPL-2.0
+
+ //! Allocator support.
++//!
++//! Documentation for the kernel's memory allocators can be found in the "Memory Allocation Guide"
++//! linked below. For instance, this includes the concept of "get free page" (GFP) flags and the
++//! typical application of the different kernel allocators.
++//!
++//! Reference: <https://docs.kernel.org/core-api/memory-allocation.html>
+
+-use super::{flags::*, Flags};
+-use core::alloc::{GlobalAlloc, Layout};
++use super::Flags;
++use core::alloc::Layout;
+ use core::ptr;
++use core::ptr::NonNull;
+
+-struct KernelAllocator;
++use crate::alloc::{AllocError, Allocator};
++use crate::bindings;
++use crate::pr_warn;
+
+-/// Calls `krealloc` with a proper size to alloc a new object aligned to `new_layout`'s alignment.
++/// The contiguous kernel allocator.
+ ///
+-/// # Safety
++/// `Kmalloc` is typically used for physically contiguous allocations up to page size, but also
++/// supports larger allocations up to `bindings::KMALLOC_MAX_SIZE`, which is hardware specific.
+ ///
+-/// - `ptr` can be either null or a pointer which has been allocated by this allocator.
+-/// - `new_layout` must have a non-zero size.
+-pub(crate) unsafe fn krealloc_aligned(ptr: *mut u8, new_layout: Layout, flags: Flags) -> *mut u8 {
++/// For more details see [self].
++pub struct Kmalloc;
++
++/// The virtually contiguous kernel allocator.
++///
++/// `Vmalloc` allocates pages from the page level allocator and maps them into the contiguous kernel
++/// virtual space. It is typically used for large allocations. The memory allocated with this
++/// allocator is not physically contiguous.
++///
++/// For more details see [self].
++pub struct Vmalloc;
++
++/// The kvmalloc kernel allocator.
++///
++/// `KVmalloc` attempts to allocate memory with `Kmalloc` first, but falls back to `Vmalloc` upon
++/// failure. This allocator is typically used when the size for the requested allocation is not
++/// known and may exceed the capabilities of `Kmalloc`.
++///
++/// For more details see [self].
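++///
++/// # Examples
++///
++/// A brief sketch of the fallback behavior; the requested size is chosen to exceed what
++/// `Kmalloc` can serve, so the allocation is satisfied by `Vmalloc`.
++///
++/// ```
++/// # use kernel::bindings;
++/// let size = bindings::KMALLOC_MAX_SIZE as usize + 1;
++/// let v = KVVec::<u8>::with_capacity(size, GFP_KERNEL)?;
++///
++/// assert!(v.capacity() >= size);
++/// # Ok::<(), Error>(())
++/// ```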
++pub struct KVmalloc;
++
++/// Returns a properly padded size to allocate a new object aligned to `new_layout`'s alignment.
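++///
++/// For example, a layout with size 9 and alignment 8 is padded to a size of 16, so that the
++/// returned size is always a multiple of the alignment.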
++fn aligned_size(new_layout: Layout) -> usize {
+ // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first.
+ let layout = new_layout.pad_to_align();
+
+ // Note that `layout.size()` (after padding) is guaranteed to be a multiple of `layout.align()`
+ // which together with the slab guarantees means the `krealloc` will return a properly aligned
+ // object (see comments in `kmalloc()` for more information).
+- let size = layout.size();
+-
+- // SAFETY:
+- // - `ptr` is either null or a pointer returned from a previous `k{re}alloc()` by the
+- // function safety requirement.
+- // - `size` is greater than 0 since it's from `layout.size()` (which cannot be zero according
+- // to the function safety requirement)
+- unsafe { bindings::krealloc(ptr as *const core::ffi::c_void, size, flags.0) as *mut u8 }
++ layout.size()
+ }
+
+-unsafe impl GlobalAlloc for KernelAllocator {
+- unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
+- // SAFETY: `ptr::null_mut()` is null and `layout` has a non-zero size by the function safety
+- // requirement.
+- unsafe { krealloc_aligned(ptr::null_mut(), layout, GFP_KERNEL) }
+- }
++/// # Invariants
++///
++/// One of the following: `krealloc`, `vrealloc`, `kvrealloc`.
++struct ReallocFunc(
++ unsafe extern "C" fn(*const crate::ffi::c_void, usize, u32) -> *mut crate::ffi::c_void,
++);
+
+- unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) {
+- unsafe {
+- bindings::kfree(ptr as *const core::ffi::c_void);
+- }
+- }
++impl ReallocFunc {
++ // INVARIANT: `krealloc` satisfies the type invariants.
++ const KREALLOC: Self = Self(bindings::krealloc);
+
+- unsafe fn realloc(&self, ptr: *mut u8, layout: Layout, new_size: usize) -> *mut u8 {
+- // SAFETY:
+- // - `new_size`, when rounded up to the nearest multiple of `layout.align()`, will not
+- // overflow `isize` by the function safety requirement.
+- // - `layout.align()` is a proper alignment (i.e. not zero and must be a power of two).
+- let layout = unsafe { Layout::from_size_align_unchecked(new_size, layout.align()) };
++ // INVARIANT: `vrealloc` satisfies the type invariants.
++ const VREALLOC: Self = Self(bindings::vrealloc);
++
++ // INVARIANT: `kvrealloc` satisfies the type invariants.
++ const KVREALLOC: Self = Self(bindings::kvrealloc);
++
++ /// # Safety
++ ///
++ /// This method has the same safety requirements as [`Allocator::realloc`].
++ ///
++ /// # Guarantees
++ ///
++ /// This method has the same guarantees as `Allocator::realloc`. Additionally
++ /// - it accepts any pointer to a valid memory allocation allocated by this function.
++ /// - memory allocated by this function remains valid until it is passed to this function.
++ unsafe fn call(
++ &self,
++ ptr: Option<NonNull<u8>>,
++ layout: Layout,
++ old_layout: Layout,
++ flags: Flags,
++ ) -> Result<NonNull<[u8]>, AllocError> {
++ let size = aligned_size(layout);
++ let ptr = match ptr {
++ Some(ptr) => {
++ if old_layout.size() == 0 {
++ ptr::null()
++ } else {
++ ptr.as_ptr()
++ }
++ }
++ None => ptr::null(),
++ };
+
+ // SAFETY:
+- // - `ptr` is either null or a pointer allocated by this allocator by the function safety
+- // requirement.
+- // - the size of `layout` is not zero because `new_size` is not zero by the function safety
+- // requirement.
+- unsafe { krealloc_aligned(ptr, layout, GFP_KERNEL) }
++ // - `self.0` is one of `krealloc`, `vrealloc`, `kvrealloc` and thus only requires that
++ // `ptr` is NULL or valid.
++ // - `ptr` is either NULL or valid by the safety requirements of this function.
++ //
++ // GUARANTEE:
++ // - `self.0` is one of `krealloc`, `vrealloc`, `kvrealloc`.
++ // - Those functions provide the guarantees of this function.
++ let raw_ptr = unsafe {
++ // If `size == 0` and `ptr != NULL` the memory behind the pointer is freed.
++ self.0(ptr.cast(), size, flags.0).cast()
++ };
++
++ let ptr = if size == 0 {
++ crate::alloc::dangling_from_layout(layout)
++ } else {
++ NonNull::new(raw_ptr).ok_or(AllocError)?
++ };
++
++ Ok(NonNull::slice_from_raw_parts(ptr, size))
++ }
++}
++
++// SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that
++// - memory remains valid until it is explicitly freed,
++// - passing a pointer to a valid memory allocation is OK,
++// - `realloc` satisfies the guarantees, since `ReallocFunc::call` has the same.
++unsafe impl Allocator for Kmalloc {
++ #[inline]
++ unsafe fn realloc(
++ ptr: Option<NonNull<u8>>,
++ layout: Layout,
++ old_layout: Layout,
++ flags: Flags,
++ ) -> Result<NonNull<[u8]>, AllocError> {
++ // SAFETY: `ReallocFunc::call` has the same safety requirements as `Allocator::realloc`.
++ unsafe { ReallocFunc::KREALLOC.call(ptr, layout, old_layout, flags) }
+ }
++}
++
++// SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that
++// - memory remains valid until it is explicitly freed,
++// - passing a pointer to a valid memory allocation is OK,
++// - `realloc` satisfies the guarantees, since `ReallocFunc::call` has the same.
++unsafe impl Allocator for Vmalloc {
++ #[inline]
++ unsafe fn realloc(
++ ptr: Option<NonNull<u8>>,
++ layout: Layout,
++ old_layout: Layout,
++ flags: Flags,
++ ) -> Result<NonNull<[u8]>, AllocError> {
++ // TODO: Support alignments larger than PAGE_SIZE.
++ if layout.align() > bindings::PAGE_SIZE {
++ pr_warn!("Vmalloc does not support alignments larger than PAGE_SIZE yet.\n");
++ return Err(AllocError);
++ }
+
+- unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut u8 {
+- // SAFETY: `ptr::null_mut()` is null and `layout` has a non-zero size by the function safety
+- // requirement.
+- unsafe { krealloc_aligned(ptr::null_mut(), layout, GFP_KERNEL | __GFP_ZERO) }
++ // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously
++ // allocated with this `Allocator`.
++ unsafe { ReallocFunc::VREALLOC.call(ptr, layout, old_layout, flags) }
+ }
+ }
+
+-#[global_allocator]
+-static ALLOCATOR: KernelAllocator = KernelAllocator;
++// SAFETY: `realloc` delegates to `ReallocFunc::call`, which guarantees that
++// - memory remains valid until it is explicitly freed,
++// - passing a pointer to a valid memory allocation is OK,
++// - `realloc` satisfies the guarantees, since `ReallocFunc::call` has the same.
++unsafe impl Allocator for KVmalloc {
++ #[inline]
++ unsafe fn realloc(
++ ptr: Option<NonNull<u8>>,
++ layout: Layout,
++ old_layout: Layout,
++ flags: Flags,
++ ) -> Result<NonNull<[u8]>, AllocError> {
++ // TODO: Support alignments larger than PAGE_SIZE.
++ if layout.align() > bindings::PAGE_SIZE {
++ pr_warn!("KVmalloc does not support alignments larger than PAGE_SIZE yet.\n");
++ return Err(AllocError);
++ }
+
+-// See <https://github.com/rust-lang/rust/pull/86844>.
+-#[no_mangle]
+-static __rust_no_alloc_shim_is_unstable: u8 = 0;
++ // SAFETY: If not `None`, `ptr` is guaranteed to point to valid memory, which was previously
++ // allocated with this `Allocator`.
++ unsafe { ReallocFunc::KVREALLOC.call(ptr, layout, old_layout, flags) }
++ }
++}
+diff --git a/rust/kernel/alloc/allocator_test.rs b/rust/kernel/alloc/allocator_test.rs
+new file mode 100644
+index 00000000000000..e3240d16040bd9
+--- /dev/null
++++ b/rust/kernel/alloc/allocator_test.rs
+@@ -0,0 +1,95 @@
++// SPDX-License-Identifier: GPL-2.0
++
++//! So far the kernel's `Box` and `Vec` types can't be used by userspace test cases, since all users
++//! of those types (e.g. `CString`) use kernel allocators for instantiation.
++//!
++//! In order to allow userspace test cases to make use of such types as well, implement the
++//! `Cmalloc` allocator within the allocator_test module and type alias all kernel allocators to
++//! `Cmalloc`. The `Cmalloc` allocator uses libc's `aligned_alloc()` and `free()` functions as its
++//! backend.
++
++#![allow(missing_docs)]
++
++use super::{flags::*, AllocError, Allocator, Flags};
++use core::alloc::Layout;
++use core::cmp;
++use core::ptr;
++use core::ptr::NonNull;
++
++/// The userspace allocator based on libc.
++pub struct Cmalloc;
++
++pub type Kmalloc = Cmalloc;
++pub type Vmalloc = Kmalloc;
++pub type KVmalloc = Kmalloc;
++
++extern "C" {
++ #[link_name = "aligned_alloc"]
++ fn libc_aligned_alloc(align: usize, size: usize) -> *mut crate::ffi::c_void;
++
++ #[link_name = "free"]
++ fn libc_free(ptr: *mut crate::ffi::c_void);
++}
++
++// SAFETY:
++// - memory remains valid until it is explicitly freed,
++// - passing a pointer to a valid memory allocation created by this `Allocator` is always OK,
++// - `realloc` provides the guarantees as provided in the `# Guarantees` section.
++unsafe impl Allocator for Cmalloc {
++ unsafe fn realloc(
++ ptr: Option<NonNull<u8>>,
++ layout: Layout,
++ old_layout: Layout,
++ flags: Flags,
++ ) -> Result<NonNull<[u8]>, AllocError> {
++ let src = match ptr {
++ Some(src) => {
++ if old_layout.size() == 0 {
++ ptr::null_mut()
++ } else {
++ src.as_ptr()
++ }
++ }
++ None => ptr::null_mut(),
++ };
++
++ if layout.size() == 0 {
++ // SAFETY: `src` is either NULL or was previously allocated with this `Allocator`
++ unsafe { libc_free(src.cast()) };
++
++ return Ok(NonNull::slice_from_raw_parts(
++ crate::alloc::dangling_from_layout(layout),
++ 0,
++ ));
++ }
++
++ // SAFETY: Returns either NULL or a pointer to a memory allocation that satisfies or
++ // exceeds the given size and alignment requirements.
++ let dst = unsafe { libc_aligned_alloc(layout.align(), layout.size()) } as *mut u8;
++ let dst = NonNull::new(dst).ok_or(AllocError)?;
++
++ if flags.contains(__GFP_ZERO) {
++ // SAFETY: The preceding calls to `libc_aligned_alloc` and `NonNull::new`
++ // guarantee that `dst` points to memory of at least `layout.size()` bytes.
++ unsafe { dst.as_ptr().write_bytes(0, layout.size()) };
++ }
++
++ if !src.is_null() {
++ // SAFETY:
++ // - `src` has previously been allocated with this `Allocator`; `dst` has just been
++ // newly allocated, hence the memory regions do not overlap.
++ // - both `src` and `dst` are properly aligned and valid for reads and writes.
++ unsafe {
++ ptr::copy_nonoverlapping(
++ src,
++ dst.as_ptr(),
++ cmp::min(layout.size(), old_layout.size()),
++ )
++ };
++ }
++
++ // SAFETY: `src` is either NULL or was previously allocated with this `Allocator`
++ unsafe { libc_free(src.cast()) };
++
++ Ok(NonNull::slice_from_raw_parts(dst, layout.size()))
++ }
++}
+diff --git a/rust/kernel/alloc/box_ext.rs b/rust/kernel/alloc/box_ext.rs
+deleted file mode 100644
+index 7009ad78d4e082..00000000000000
+--- a/rust/kernel/alloc/box_ext.rs
++++ /dev/null
+@@ -1,89 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-
+-//! Extensions to [`Box`] for fallible allocations.
+-
+-use super::{AllocError, Flags};
+-use alloc::boxed::Box;
+-use core::{mem::MaybeUninit, ptr, result::Result};
+-
+-/// Extensions to [`Box`].
+-pub trait BoxExt<T>: Sized {
+- /// Allocates a new box.
+- ///
+- /// The allocation may fail, in which case an error is returned.
+- fn new(x: T, flags: Flags) -> Result<Self, AllocError>;
+-
+- /// Allocates a new uninitialised box.
+- ///
+- /// The allocation may fail, in which case an error is returned.
+- fn new_uninit(flags: Flags) -> Result<Box<MaybeUninit<T>>, AllocError>;
+-
+- /// Drops the contents, but keeps the allocation.
+- ///
+- /// # Examples
+- ///
+- /// ```
+- /// use kernel::alloc::{flags, box_ext::BoxExt};
+- /// let value = Box::new([0; 32], flags::GFP_KERNEL)?;
+- /// assert_eq!(*value, [0; 32]);
+- /// let mut value = Box::drop_contents(value);
+- /// // Now we can re-use `value`:
+- /// value.write([1; 32]);
+- /// // SAFETY: We just wrote to it.
+- /// let value = unsafe { value.assume_init() };
+- /// assert_eq!(*value, [1; 32]);
+- /// # Ok::<(), Error>(())
+- /// ```
+- fn drop_contents(this: Self) -> Box<MaybeUninit<T>>;
+-}
+-
+-impl<T> BoxExt<T> for Box<T> {
+- fn new(x: T, flags: Flags) -> Result<Self, AllocError> {
+- let mut b = <Self as BoxExt<_>>::new_uninit(flags)?;
+- b.write(x);
+- // SAFETY: We just wrote to it.
+- Ok(unsafe { b.assume_init() })
+- }
+-
+- #[cfg(any(test, testlib))]
+- fn new_uninit(_flags: Flags) -> Result<Box<MaybeUninit<T>>, AllocError> {
+- Ok(Box::new_uninit())
+- }
+-
+- #[cfg(not(any(test, testlib)))]
+- fn new_uninit(flags: Flags) -> Result<Box<MaybeUninit<T>>, AllocError> {
+- let ptr = if core::mem::size_of::<MaybeUninit<T>>() == 0 {
+- core::ptr::NonNull::<_>::dangling().as_ptr()
+- } else {
+- let layout = core::alloc::Layout::new::<MaybeUninit<T>>();
+-
+- // SAFETY: Memory is being allocated (first arg is null). The only other source of
+- // safety issues is sleeping on atomic context, which is addressed by klint. Lastly,
+- // the type is not a SZT (checked above).
+- let ptr =
+- unsafe { super::allocator::krealloc_aligned(core::ptr::null_mut(), layout, flags) };
+- if ptr.is_null() {
+- return Err(AllocError);
+- }
+-
+- ptr.cast::<MaybeUninit<T>>()
+- };
+-
+- // SAFETY: For non-zero-sized types, we allocate above using the global allocator. For
+- // zero-sized types, we use `NonNull::dangling`.
+- Ok(unsafe { Box::from_raw(ptr) })
+- }
+-
+- fn drop_contents(this: Self) -> Box<MaybeUninit<T>> {
+- let ptr = Box::into_raw(this);
+- // SAFETY: `ptr` is valid, because it came from `Box::into_raw`.
+- unsafe { ptr::drop_in_place(ptr) };
+-
+- // CAST: `MaybeUninit<T>` is a transparent wrapper of `T`.
+- let ptr = ptr.cast::<MaybeUninit<T>>();
+-
+- // SAFETY: `ptr` is valid for writes, because it came from `Box::into_raw` and it is valid for
+- // reads, since the pointer came from `Box::into_raw` and the type is `MaybeUninit<T>`.
+- unsafe { Box::from_raw(ptr) }
+- }
+-}
+diff --git a/rust/kernel/alloc/kbox.rs b/rust/kernel/alloc/kbox.rs
+new file mode 100644
+index 00000000000000..9ce414361c2c6d
+--- /dev/null
++++ b/rust/kernel/alloc/kbox.rs
+@@ -0,0 +1,456 @@
++// SPDX-License-Identifier: GPL-2.0
++
++//! Implementation of [`Box`].
++
++#[allow(unused_imports)] // Used in doc comments.
++use super::allocator::{KVmalloc, Kmalloc, Vmalloc};
++use super::{AllocError, Allocator, Flags};
++use core::alloc::Layout;
++use core::fmt;
++use core::marker::PhantomData;
++use core::mem::ManuallyDrop;
++use core::mem::MaybeUninit;
++use core::ops::{Deref, DerefMut};
++use core::pin::Pin;
++use core::ptr::NonNull;
++use core::result::Result;
++
++use crate::init::{InPlaceInit, InPlaceWrite, Init, PinInit};
++use crate::types::ForeignOwnable;
++
++/// The kernel's [`Box`] type -- a heap allocation for a single value of type `T`.
++///
++/// This is the kernel's version of the Rust stdlib's `Box`. There are several differences,
++/// for example no `noalias` attribute is emitted and partially moving out of a `Box` is not
++/// supported. There are also several API differences, e.g. `Box` always requires an [`Allocator`]
++/// implementation to be passed as a generic parameter, allocating memory takes page [`Flags`],
++/// and all functions that may allocate memory are fallible.
++///
++/// `Box` works with any of the kernel's allocators, e.g. [`Kmalloc`], [`Vmalloc`] or [`KVmalloc`].
++/// There are aliases for `Box` with these allocators ([`KBox`], [`VBox`], [`KVBox`]).
++///
++/// When dropping a [`Box`], the value is also dropped and the heap memory is automatically freed.
++///
++/// # Examples
++///
++/// ```
++/// let b = KBox::<u64>::new(24_u64, GFP_KERNEL)?;
++///
++/// assert_eq!(*b, 24_u64);
++/// # Ok::<(), Error>(())
++/// ```
++///
++/// ```
++/// # use kernel::bindings;
++/// const SIZE: usize = bindings::KMALLOC_MAX_SIZE as usize + 1;
++/// struct Huge([u8; SIZE]);
++///
++/// assert!(KBox::<Huge>::new_uninit(GFP_KERNEL | __GFP_NOWARN).is_err());
++/// ```
++///
++/// ```
++/// # use kernel::bindings;
++/// const SIZE: usize = bindings::KMALLOC_MAX_SIZE as usize + 1;
++/// struct Huge([u8; SIZE]);
++///
++/// assert!(KVBox::<Huge>::new_uninit(GFP_KERNEL).is_ok());
++/// ```
++///
++/// # Invariants
++///
++/// `self.0` is always properly aligned and either points to memory allocated with `A` or, for
++/// zero-sized types, is a dangling, well aligned pointer.
++#[repr(transparent)]
++pub struct Box<T: ?Sized, A: Allocator>(NonNull<T>, PhantomData<A>);
++
++/// Type alias for [`Box`] with a [`Kmalloc`] allocator.
++///
++/// # Examples
++///
++/// ```
++/// let b = KBox::new(24_u64, GFP_KERNEL)?;
++///
++/// assert_eq!(*b, 24_u64);
++/// # Ok::<(), Error>(())
++/// ```
++pub type KBox<T> = Box<T, super::allocator::Kmalloc>;
++
++/// Type alias for [`Box`] with a [`Vmalloc`] allocator.
++///
++/// # Examples
++///
++/// ```
++/// let b = VBox::new(24_u64, GFP_KERNEL)?;
++///
++/// assert_eq!(*b, 24_u64);
++/// # Ok::<(), Error>(())
++/// ```
++pub type VBox<T> = Box<T, super::allocator::Vmalloc>;
++
++/// Type alias for [`Box`] with a [`KVmalloc`] allocator.
++///
++/// # Examples
++///
++/// ```
++/// let b = KVBox::new(24_u64, GFP_KERNEL)?;
++///
++/// assert_eq!(*b, 24_u64);
++/// # Ok::<(), Error>(())
++/// ```
++pub type KVBox<T> = Box<T, super::allocator::KVmalloc>;
++
++// SAFETY: `Box` is `Send` if `T` is `Send` because the `Box` owns a `T`.
++unsafe impl<T, A> Send for Box<T, A>
++where
++ T: Send + ?Sized,
++ A: Allocator,
++{
++}
++
++// SAFETY: `Box` is `Sync` if `T` is `Sync` because the `Box` owns a `T`.
++unsafe impl<T, A> Sync for Box<T, A>
++where
++ T: Sync + ?Sized,
++ A: Allocator,
++{
++}
++
++impl<T, A> Box<T, A>
++where
++ T: ?Sized,
++ A: Allocator,
++{
++ /// Creates a new `Box<T, A>` from a raw pointer.
++ ///
++ /// # Safety
++ ///
++ /// For non-ZSTs, `raw` must point at an allocation allocated with `A` that is sufficiently
++ /// aligned for and holds a valid `T`. The caller passes ownership of the allocation to the
++ /// `Box`.
++ ///
++ /// For ZSTs, `raw` must be a dangling, well aligned pointer.
++ #[inline]
++ pub const unsafe fn from_raw(raw: *mut T) -> Self {
++ // INVARIANT: Validity of `raw` is guaranteed by the safety preconditions of this function.
++ // SAFETY: By the safety preconditions of this function, `raw` is not a NULL pointer.
++ Self(unsafe { NonNull::new_unchecked(raw) }, PhantomData)
++ }
++
++ /// Consumes the `Box<T, A>` and returns a raw pointer.
++ ///
++ /// This will not run the destructor of `T` and for non-ZSTs the allocation will stay alive
++ /// indefinitely. Use [`Box::from_raw`] to recover the [`Box`], drop the value and free the
++ /// allocation, if any.
++ ///
++ /// # Examples
++ ///
++ /// ```
++ /// let x = KBox::new(24, GFP_KERNEL)?;
++ /// let ptr = KBox::into_raw(x);
++ /// // SAFETY: `ptr` comes from a previous call to `KBox::into_raw`.
++ /// let x = unsafe { KBox::from_raw(ptr) };
++ ///
++ /// assert_eq!(*x, 24);
++ /// # Ok::<(), Error>(())
++ /// ```
++ #[inline]
++ pub fn into_raw(b: Self) -> *mut T {
++ ManuallyDrop::new(b).0.as_ptr()
++ }
++
++ /// Consumes and leaks the `Box<T, A>` and returns a mutable reference.
++ ///
++ /// See [`Box::into_raw`] for more details.
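++ ///
++ /// # Examples
++ ///
++ /// A short sketch; note that the leaked allocation is, by design, never freed.
++ ///
++ /// ```
++ /// let b = KBox::new(24, GFP_KERNEL)?;
++ /// let r = KBox::leak(b);
++ ///
++ /// assert_eq!(*r, 24);
++ /// # Ok::<(), Error>(())
++ /// ```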
++ #[inline]
++ pub fn leak<'a>(b: Self) -> &'a mut T {
++ // SAFETY: `Box::into_raw` always returns a properly aligned and dereferenceable pointer
++ // which points to an initialized instance of `T`.
++ unsafe { &mut *Box::into_raw(b) }
++ }
++}
++
++impl<T, A> Box<MaybeUninit<T>, A>
++where
++ A: Allocator,
++{
++ /// Converts a `Box<MaybeUninit<T>, A>` to a `Box<T, A>`.
++ ///
++ /// It is undefined behavior to call this function while the value inside of `self` is not
++ /// yet fully initialized.
++ ///
++ /// # Safety
++ ///
++ /// Callers must ensure that the value inside of `b` is in an initialized state.
++ pub unsafe fn assume_init(self) -> Box<T, A> {
++ let raw = Self::into_raw(self);
++
++ // SAFETY: `raw` comes from a previous call to `Box::into_raw`. By the safety requirements
++ // of this function, the value inside the `Box` is in an initialized state. Hence, it is
++ // safe to reconstruct the `Box` as `Box<T, A>`.
++ unsafe { Box::from_raw(raw.cast()) }
++ }
++
++ /// Writes the value and converts to `Box<T, A>`.
++ pub fn write(mut self, value: T) -> Box<T, A> {
++ (*self).write(value);
++
++ // SAFETY: We've just initialized `self`'s value.
++ unsafe { self.assume_init() }
++ }
++}
++
++impl<T, A> Box<T, A>
++where
++ A: Allocator,
++{
++ /// Creates a new `Box<T, A>` and initializes its contents with `x`.
++ ///
++ /// New memory is allocated with `A`. The allocation may fail, in which case an error is
++ /// returned. For ZSTs no memory is allocated.
++ pub fn new(x: T, flags: Flags) -> Result<Self, AllocError> {
++ let b = Self::new_uninit(flags)?;
++ Ok(Box::write(b, x))
++ }
++
++ /// Creates a new `Box<T, A>` with uninitialized contents.
++ ///
++ /// New memory is allocated with `A`. The allocation may fail, in which case an error is
++ /// returned. For ZSTs no memory is allocated.
++ ///
++ /// # Examples
++ ///
++ /// ```
++ /// let b = KBox::<u64>::new_uninit(GFP_KERNEL)?;
++ /// let b = KBox::write(b, 24);
++ ///
++ /// assert_eq!(*b, 24_u64);
++ /// # Ok::<(), Error>(())
++ /// ```
++ pub fn new_uninit(flags: Flags) -> Result<Box<MaybeUninit<T>, A>, AllocError> {
++ let layout = Layout::new::<MaybeUninit<T>>();
++ let ptr = A::alloc(layout, flags)?;
++
++ // INVARIANT: `ptr` is either a dangling pointer or points to memory allocated with `A`,
++ // which is sufficient in size and alignment for storing a `T`.
++ Ok(Box(ptr.cast(), PhantomData))
++ }
++
++ /// Constructs a new `Pin<Box<T, A>>`. If `T` does not implement [`Unpin`], then `x` will be
++ /// pinned in memory and can't be moved.
++ #[inline]
++ pub fn pin(x: T, flags: Flags) -> Result<Pin<Box<T, A>>, AllocError>
++ where
++ A: 'static,
++ {
++ Ok(Self::new(x, flags)?.into())
++ }
++
++ /// Forgets the contents (does not run the destructor), but keeps the allocation.
++ fn forget_contents(this: Self) -> Box<MaybeUninit<T>, A> {
++ let ptr = Self::into_raw(this);
++
++ // SAFETY: `ptr` is valid, because it came from `Box::into_raw`.
++ unsafe { Box::from_raw(ptr.cast()) }
++ }
++
++ /// Drops the contents, but keeps the allocation.
++ ///
++ /// # Examples
++ ///
++ /// ```
++ /// let value = KBox::new([0; 32], GFP_KERNEL)?;
++ /// assert_eq!(*value, [0; 32]);
++ /// let value = KBox::drop_contents(value);
++ /// // Now we can re-use `value`:
++ /// let value = KBox::write(value, [1; 32]);
++ /// assert_eq!(*value, [1; 32]);
++ /// # Ok::<(), Error>(())
++ /// ```
++ pub fn drop_contents(this: Self) -> Box<MaybeUninit<T>, A> {
++ let ptr = this.0.as_ptr();
++
++ // SAFETY: `ptr` is valid, because it came from `this`. After this call we never access the
++ // value stored in `this` again.
++ unsafe { core::ptr::drop_in_place(ptr) };
++
++ Self::forget_contents(this)
++ }
++
++ /// Moves the `Box`'s value out of the `Box` and consumes the `Box`.
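++ ///
++ /// # Examples
++ ///
++ /// A short sketch of moving the value back out of the heap allocation:
++ ///
++ /// ```
++ /// let b = KBox::new(42, GFP_KERNEL)?;
++ ///
++ /// assert_eq!(KBox::into_inner(b), 42);
++ /// # Ok::<(), Error>(())
++ /// ```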
++ pub fn into_inner(b: Self) -> T {
++ // SAFETY: By the type invariant `&*b` is valid for `read`.
++ let value = unsafe { core::ptr::read(&*b) };
++ let _ = Self::forget_contents(b);
++ value
++ }
++}
++
++impl<T, A> From<Box<T, A>> for Pin<Box<T, A>>
++where
++ T: ?Sized,
++ A: Allocator,
++{
++ /// Converts a `Box<T, A>` into a `Pin<Box<T, A>>`. If `T` does not implement [`Unpin`], then
++ /// `*b` will be pinned in memory and can't be moved.
++ ///
++ /// This moves `b` into `Pin` without moving `*b` or allocating and copying any memory.
++ fn from(b: Box<T, A>) -> Self {
++ // SAFETY: The value wrapped inside a `Pin<Box<T, A>>` cannot be moved or replaced as long
++ // as `T` does not implement `Unpin`.
++ unsafe { Pin::new_unchecked(b) }
++ }
++}
++
++impl<T, A> InPlaceWrite<T> for Box<MaybeUninit<T>, A>
++where
++ A: Allocator + 'static,
++{
++ type Initialized = Box<T, A>;
++
++ fn write_init<E>(mut self, init: impl Init<T, E>) -> Result<Self::Initialized, E> {
++ let slot = self.as_mut_ptr();
++ // SAFETY: When init errors/panics, the slot will get deallocated but not dropped;
++ // the slot is valid.
++ unsafe { init.__init(slot)? };
++ // SAFETY: All fields have been initialized.
++ Ok(unsafe { Box::assume_init(self) })
++ }
++
++ fn write_pin_init<E>(mut self, init: impl PinInit<T, E>) -> Result<Pin<Self::Initialized>, E> {
++ let slot = self.as_mut_ptr();
++ // SAFETY: When init errors/panics, the slot will get deallocated but not dropped;
++ // the slot is valid and will not be moved, because we pin it later.
++ unsafe { init.__pinned_init(slot)? };
++ // SAFETY: All fields have been initialized.
++ Ok(unsafe { Box::assume_init(self) }.into())
++ }
++}
++
++impl<T, A> InPlaceInit<T> for Box<T, A>
++where
++ A: Allocator + 'static,
++{
++ type PinnedSelf = Pin<Self>;
++
++ #[inline]
++ fn try_pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Pin<Self>, E>
++ where
++ E: From<AllocError>,
++ {
++ Box::<_, A>::new_uninit(flags)?.write_pin_init(init)
++ }
++
++ #[inline]
++ fn try_init<E>(init: impl Init<T, E>, flags: Flags) -> Result<Self, E>
++ where
++ E: From<AllocError>,
++ {
++ Box::<_, A>::new_uninit(flags)?.write_init(init)
++ }
++}
++
++impl<T: 'static, A> ForeignOwnable for Box<T, A>
++where
++ A: Allocator,
++{
++ type Borrowed<'a> = &'a T;
++
++ fn into_foreign(self) -> *const crate::ffi::c_void {
++ Box::into_raw(self) as _
++ }
++
++ unsafe fn from_foreign(ptr: *const crate::ffi::c_void) -> Self {
++ // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous
++ // call to `Self::into_foreign`.
++ unsafe { Box::from_raw(ptr as _) }
++ }
++
++ unsafe fn borrow<'a>(ptr: *const crate::ffi::c_void) -> &'a T {
++ // SAFETY: The safety requirements of this method ensure that the object remains alive and
++ // immutable for the duration of 'a.
++ unsafe { &*ptr.cast() }
++ }
++}
++
++impl<T: 'static, A> ForeignOwnable for Pin<Box<T, A>>
++where
++ A: Allocator,
++{
++ type Borrowed<'a> = Pin<&'a T>;
++
++ fn into_foreign(self) -> *const crate::ffi::c_void {
++ // SAFETY: We are still treating the box as pinned.
++ Box::into_raw(unsafe { Pin::into_inner_unchecked(self) }) as _
++ }
++
++ unsafe fn from_foreign(ptr: *const crate::ffi::c_void) -> Self {
++ // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous
++ // call to `Self::into_foreign`.
++ unsafe { Pin::new_unchecked(Box::from_raw(ptr as _)) }
++ }
++
++ unsafe fn borrow<'a>(ptr: *const crate::ffi::c_void) -> Pin<&'a T> {
++ // SAFETY: The safety requirements for this function ensure that the object is still alive,
++ // so it is safe to dereference the raw pointer.
++ // The safety requirements of `from_foreign` also ensure that the object remains alive for
++ // the lifetime of the returned value.
++ let r = unsafe { &*ptr.cast() };
++
++ // SAFETY: This pointer originates from a `Pin<Box<T>>`.
++ unsafe { Pin::new_unchecked(r) }
++ }
++}
++
++impl<T, A> Deref for Box<T, A>
++where
++ T: ?Sized,
++ A: Allocator,
++{
++ type Target = T;
++
++ fn deref(&self) -> &T {
++ // SAFETY: `self.0` is always properly aligned, dereferenceable and points to an initialized
++ // instance of `T`.
++ unsafe { self.0.as_ref() }
++ }
++}
++
++impl<T, A> DerefMut for Box<T, A>
++where
++ T: ?Sized,
++ A: Allocator,
++{
++ fn deref_mut(&mut self) -> &mut T {
++ // SAFETY: `self.0` is always properly aligned, dereferenceable and points to an initialized
++ // instance of `T`.
++ unsafe { self.0.as_mut() }
++ }
++}
++
++impl<T, A> fmt::Debug for Box<T, A>
++where
++ T: ?Sized + fmt::Debug,
++ A: Allocator,
++{
++ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
++ fmt::Debug::fmt(&**self, f)
++ }
++}
++
++impl<T, A> Drop for Box<T, A>
++where
++ T: ?Sized,
++ A: Allocator,
++{
++ fn drop(&mut self) {
++ let layout = Layout::for_value::<T>(self);
++
++ // SAFETY: The pointer in `self.0` is guaranteed to be valid by the type invariant.
++ unsafe { core::ptr::drop_in_place::<T>(self.deref_mut()) };
++
++ // SAFETY:
++ // - `self.0` was previously allocated with `A`.
++ // - `layout` is equal to the `Layout` `self.0` was allocated with.
++ unsafe { A::free(self.0.cast(), layout) };
++ }
++}
+diff --git a/rust/kernel/alloc/kvec.rs b/rust/kernel/alloc/kvec.rs
+new file mode 100644
+index 00000000000000..ae9d072741cedb
+--- /dev/null
++++ b/rust/kernel/alloc/kvec.rs
+@@ -0,0 +1,913 @@
++// SPDX-License-Identifier: GPL-2.0
++
++//! Implementation of [`Vec`].
++
++use super::{
++ allocator::{KVmalloc, Kmalloc, Vmalloc},
++ layout::ArrayLayout,
++ AllocError, Allocator, Box, Flags,
++};
++use core::{
++ fmt,
++ marker::PhantomData,
++ mem::{ManuallyDrop, MaybeUninit},
++ ops::Deref,
++ ops::DerefMut,
++ ops::Index,
++ ops::IndexMut,
++ ptr,
++ ptr::NonNull,
++ slice,
++ slice::SliceIndex,
++};
++
++/// Create a [`KVec`] containing the arguments.
++///
++/// New memory is allocated with `GFP_KERNEL`.
++///
++/// # Examples
++///
++/// ```
++/// let mut v = kernel::kvec![];
++/// v.push(1, GFP_KERNEL)?;
++/// assert_eq!(v, [1]);
++///
++/// let mut v = kernel::kvec![1; 3]?;
++/// v.push(4, GFP_KERNEL)?;
++/// assert_eq!(v, [1, 1, 1, 4]);
++///
++/// let mut v = kernel::kvec![1, 2, 3]?;
++/// v.push(4, GFP_KERNEL)?;
++/// assert_eq!(v, [1, 2, 3, 4]);
++///
++/// # Ok::<(), Error>(())
++/// ```
++#[macro_export]
++macro_rules! kvec {
++ () => (
++ $crate::alloc::KVec::new()
++ );
++ ($elem:expr; $n:expr) => (
++ $crate::alloc::KVec::from_elem($elem, $n, GFP_KERNEL)
++ );
++ ($($x:expr),+ $(,)?) => (
++ match $crate::alloc::KBox::new_uninit(GFP_KERNEL) {
++ Ok(b) => Ok($crate::alloc::KVec::from($crate::alloc::KBox::write(b, [$($x),+]))),
++ Err(e) => Err(e),
++ }
++ );
++}
++
++/// The kernel's [`Vec`] type.
++///
++/// A contiguous growable array type with contents allocated with the kernel's allocators (e.g.
++/// [`Kmalloc`], [`Vmalloc`] or [`KVmalloc`]), written `Vec<T, A>`.
++///
++/// For non-zero-sized values, a [`Vec`] will use the given allocator `A` for its allocation. For
++/// the most common allocators the type aliases [`KVec`], [`VVec`] and [`KVVec`] exist.
++///
++/// For zero-sized types the [`Vec`]'s pointer must be `dangling_mut::<T>`; no memory is allocated.
++///
++/// Generally, [`Vec`] consists of a pointer that represents the vector's backing buffer, the
++/// capacity of the vector (the number of elements that currently fit into the vector), its length
++/// (the number of elements that are currently stored in the vector) and the `Allocator` type used
++/// to allocate (and free) the backing buffer.
++///
++/// A [`Vec`] can be deconstructed into and (re-)constructed from its previously named raw parts
++/// and manually modified.
++///
++/// [`Vec`]'s backing buffer gets, if required, automatically increased (re-allocated) when elements
++/// are added to the vector.
++///
++/// # Invariants
++///
++/// - `self.ptr` is always properly aligned and either points to memory allocated with `A` or, for
++/// zero-sized types, is a dangling, well aligned pointer.
++///
++/// - `self.len` always represents the exact number of elements stored in the vector.
++///
++/// - `self.layout` represents the absolute number of elements that can be stored within the vector
++/// without re-allocation. For ZSTs `self.layout`'s capacity is zero. However, it is legal for the
++/// backing buffer to be larger than `layout`.
++///
++/// - The `Allocator` type `A` of the vector is the exact same `Allocator` type the backing buffer
++/// was allocated with (and must be freed with).
++pub struct Vec<T, A: Allocator> {
++ ptr: NonNull<T>,
++ /// Represents the actual buffer size as `cap` times `size_of::<T>` bytes.
++ ///
++ /// Note: This isn't quite the same as `Self::capacity`, which in contrast returns the number of
++ /// elements we can still store without reallocating.
++ layout: ArrayLayout<T>,
++ len: usize,
++ _p: PhantomData<A>,
++}
++
++/// Type alias for [`Vec`] with a [`Kmalloc`] allocator.
++///
++/// # Examples
++///
++/// ```
++/// let mut v = KVec::new();
++/// v.push(1, GFP_KERNEL)?;
++/// assert_eq!(&v, &[1]);
++///
++/// # Ok::<(), Error>(())
++/// ```
++pub type KVec<T> = Vec<T, Kmalloc>;
++
++/// Type alias for [`Vec`] with a [`Vmalloc`] allocator.
++///
++/// # Examples
++///
++/// ```
++/// let mut v = VVec::new();
++/// v.push(1, GFP_KERNEL)?;
++/// assert_eq!(&v, &[1]);
++///
++/// # Ok::<(), Error>(())
++/// ```
++pub type VVec<T> = Vec<T, Vmalloc>;
++
++/// Type alias for [`Vec`] with a [`KVmalloc`] allocator.
++///
++/// # Examples
++///
++/// ```
++/// let mut v = KVVec::new();
++/// v.push(1, GFP_KERNEL)?;
++/// assert_eq!(&v, &[1]);
++///
++/// # Ok::<(), Error>(())
++/// ```
++pub type KVVec<T> = Vec<T, KVmalloc>;
++
++// SAFETY: `Vec` is `Send` if `T` is `Send` because `Vec` owns its elements.
++unsafe impl<T, A> Send for Vec<T, A>
++where
++ T: Send,
++ A: Allocator,
++{
++}
++
++// SAFETY: `Vec` is `Sync` if `T` is `Sync` because `Vec` owns its elements.
++unsafe impl<T, A> Sync for Vec<T, A>
++where
++ T: Sync,
++ A: Allocator,
++{
++}
++
++impl<T, A> Vec<T, A>
++where
++ A: Allocator,
++{
++ #[inline]
++ const fn is_zst() -> bool {
++ core::mem::size_of::<T>() == 0
++ }
++
++ /// Returns the number of elements that can be stored within the vector without allocating
++ /// additional memory.
++ pub fn capacity(&self) -> usize {
++ if const { Self::is_zst() } {
++ usize::MAX
++ } else {
++ self.layout.len()
++ }
++ }
++
++ /// Returns the number of elements stored within the vector.
++ #[inline]
++ pub fn len(&self) -> usize {
++ self.len
++ }
++
++ /// Forcefully sets `self.len` to `new_len`.
++ ///
++ /// # Safety
++ ///
++ /// - `new_len` must be less than or equal to [`Self::capacity`].
++ /// - If `new_len` is greater than `self.len`, all elements within the interval
++ /// [`self.len`, `new_len`) must be initialized.
++ #[inline]
++ pub unsafe fn set_len(&mut self, new_len: usize) {
++ debug_assert!(new_len <= self.capacity());
++ self.len = new_len;
++ }
++
++ /// Returns a slice of the entire vector.
++ #[inline]
++ pub fn as_slice(&self) -> &[T] {
++ self
++ }
++
++ /// Returns a mutable slice of the entire vector.
++ #[inline]
++ pub fn as_mut_slice(&mut self) -> &mut [T] {
++ self
++ }
++
++ /// Returns a mutable raw pointer to the vector's backing buffer, or, if `T` is a ZST, a
++ /// dangling raw pointer.
++ #[inline]
++ pub fn as_mut_ptr(&mut self) -> *mut T {
++ self.ptr.as_ptr()
++ }
++
++ /// Returns a raw pointer to the vector's backing buffer, or, if `T` is a ZST, a dangling raw
++ /// pointer.
++ #[inline]
++ pub fn as_ptr(&self) -> *const T {
++ self.ptr.as_ptr()
++ }
++
++ /// Returns `true` if the vector contains no elements, `false` otherwise.
++ ///
++ /// # Examples
++ ///
++ /// ```
++ /// let mut v = KVec::new();
++ /// assert!(v.is_empty());
++ ///
++ /// v.push(1, GFP_KERNEL);
++ /// assert!(!v.is_empty());
++ /// ```
++ #[inline]
++ pub fn is_empty(&self) -> bool {
++ self.len() == 0
++ }
++
++ /// Creates a new, empty `Vec<T, A>`.
++ ///
++ /// This method does not allocate by itself.
++ #[inline]
++ pub const fn new() -> Self {
++ // INVARIANT: Since this is a new, empty `Vec` with no backing memory yet,
++ // - `ptr` is a properly aligned dangling pointer for type `T`,
++ // - `layout` is an empty `ArrayLayout` (zero capacity)
++ // - `len` is zero, since no elements can be or have been stored,
++ // - `A` is always valid.
++ Self {
++ ptr: NonNull::dangling(),
++ layout: ArrayLayout::empty(),
++ len: 0,
++ _p: PhantomData::<A>,
++ }
++ }
++
++ /// Returns a slice of `MaybeUninit<T>` for the remaining spare capacity of the vector.
++ pub fn spare_capacity_mut(&mut self) -> &mut [MaybeUninit<T>] {
++ // SAFETY:
++ // - `self.len` is smaller than `self.capacity` and hence, the resulting pointer is
++ // guaranteed to be part of the same allocated object.
++ // - `self.len` can not overflow `isize`.
++ let ptr = unsafe { self.as_mut_ptr().add(self.len) } as *mut MaybeUninit<T>;
++
++ // SAFETY: The memory between `self.len` and `self.capacity` is guaranteed to be allocated
++ // and valid, but uninitialized.
++ unsafe { slice::from_raw_parts_mut(ptr, self.capacity() - self.len) }
++ }
++
++ /// Appends an element to the back of the [`Vec`] instance.
++ ///
++ /// # Examples
++ ///
++ /// ```
++ /// let mut v = KVec::new();
++ /// v.push(1, GFP_KERNEL)?;
++ /// assert_eq!(&v, &[1]);
++ ///
++ /// v.push(2, GFP_KERNEL)?;
++ /// assert_eq!(&v, &[1, 2]);
++ /// # Ok::<(), Error>(())
++ /// ```
++ pub fn push(&mut self, v: T, flags: Flags) -> Result<(), AllocError> {
++ self.reserve(1, flags)?;
++
++ // SAFETY:
++ // - `self.len` is smaller than `self.capacity` and hence, the resulting pointer is
++ // guaranteed to be part of the same allocated object.
++ // - `self.len` can not overflow `isize`.
++ let ptr = unsafe { self.as_mut_ptr().add(self.len) };
++
++ // SAFETY:
++ // - `ptr` is properly aligned and valid for writes.
++ unsafe { core::ptr::write(ptr, v) };
++
++ // SAFETY: We just initialised the first spare entry, so it is safe to increase the length
++ // by 1. We also know that the new length is <= capacity because of the previous call to
++ // `reserve` above.
++ unsafe { self.set_len(self.len() + 1) };
++ Ok(())
++ }
++
++ /// Creates a new [`Vec`] instance with at least the given capacity.
++ ///
++ /// # Examples
++ ///
++ /// ```
++ /// let v = KVec::<u32>::with_capacity(20, GFP_KERNEL)?;
++ ///
++ /// assert!(v.capacity() >= 20);
++ /// # Ok::<(), Error>(())
++ /// ```
++ pub fn with_capacity(capacity: usize, flags: Flags) -> Result<Self, AllocError> {
++ let mut v = Vec::new();
++
++ v.reserve(capacity, flags)?;
++
++ Ok(v)
++ }
++
++ /// Creates a `Vec<T, A>` from a pointer, a length and a capacity using the allocator `A`.
++ ///
++ /// # Examples
++ ///
++ /// ```
++ /// let mut v = kernel::kvec![1, 2, 3]?;
++ /// v.reserve(1, GFP_KERNEL)?;
++ ///
++ /// let (mut ptr, mut len, cap) = v.into_raw_parts();
++ ///
++ /// // SAFETY: We've just reserved memory for another element.
++ /// unsafe { ptr.add(len).write(4) };
++ /// len += 1;
++ ///
++ /// // SAFETY: We only wrote an additional element at the end of the `KVec`'s buffer and
++ /// // correspondingly increased the length of the `KVec` by one. Otherwise, we construct it
++ /// // from the exact same raw parts.
++ /// let v = unsafe { KVec::from_raw_parts(ptr, len, cap) };
++ ///
++ /// assert_eq!(v, [1, 2, 3, 4]);
++ ///
++ /// # Ok::<(), Error>(())
++ /// ```
++ ///
++ /// # Safety
++ ///
++ /// If `T` is a ZST:
++ ///
++ /// - `ptr` must be a dangling, well aligned pointer.
++ ///
++ /// Otherwise:
++ ///
++ /// - `ptr` must have been allocated with the allocator `A`.
++ /// - `ptr` must satisfy or exceed the alignment requirements of `T`.
++ /// - `ptr` must point to memory with a size of at least `size_of::<T>() * capacity` bytes.
++ /// - The allocated size in bytes must not be larger than `isize::MAX`.
++ /// - `length` must be less than or equal to `capacity`.
++ /// - The first `length` elements must be initialized values of type `T`.
++ ///
++ /// It is also valid to create an empty `Vec` passing a dangling pointer for `ptr` and zero for
++ /// `cap` and `len`.
++ pub unsafe fn from_raw_parts(ptr: *mut T, length: usize, capacity: usize) -> Self {
++ let layout = if Self::is_zst() {
++ ArrayLayout::empty()
++ } else {
++ // SAFETY: By the safety requirements of this function, `capacity * size_of::<T>()` is
++ // smaller than `isize::MAX`.
++ unsafe { ArrayLayout::new_unchecked(capacity) }
++ };
++
++ // INVARIANT: For ZSTs, we store an empty `ArrayLayout`, all other type invariants are
++ // covered by the safety requirements of this function.
++ Self {
++ // SAFETY: By the safety requirements, `ptr` is either dangling or pointing to a valid
++ // memory allocation, allocated with `A`.
++ ptr: unsafe { NonNull::new_unchecked(ptr) },
++ layout,
++ len: length,
++ _p: PhantomData::<A>,
++ }
++ }
++
++ /// Consumes the `Vec<T, A>` and returns its raw components `pointer`, `length` and `capacity`.
++ ///
++ /// This will not run the destructor of the contained elements and for non-ZSTs the allocation
++ /// will stay alive indefinitely. Use [`Vec::from_raw_parts`] to recover the [`Vec`], drop the
++ /// elements and free the allocation, if any.
++ pub fn into_raw_parts(self) -> (*mut T, usize, usize) {
++ let mut me = ManuallyDrop::new(self);
++ let len = me.len();
++ let capacity = me.capacity();
++ let ptr = me.as_mut_ptr();
++ (ptr, len, capacity)
++ }
++
++ /// Ensures that the capacity exceeds the length by at least `additional` elements.
++ ///
++ /// # Examples
++ ///
++ /// ```
++ /// let mut v = KVec::new();
++ /// v.push(1, GFP_KERNEL)?;
++ ///
++ /// v.reserve(10, GFP_KERNEL)?;
++ /// let cap = v.capacity();
++ /// assert!(cap >= 10);
++ ///
++ /// v.reserve(10, GFP_KERNEL)?;
++ /// let new_cap = v.capacity();
++ /// assert_eq!(new_cap, cap);
++ ///
++ /// # Ok::<(), Error>(())
++ /// ```
++ pub fn reserve(&mut self, additional: usize, flags: Flags) -> Result<(), AllocError> {
++ let len = self.len();
++ let cap = self.capacity();
++
++ if cap - len >= additional {
++ return Ok(());
++ }
++
++ if Self::is_zst() {
++ // The capacity is already `usize::MAX` for ZSTs; we can't go higher.
++ return Err(AllocError);
++ }
++
++ // We know that `cap <= isize::MAX` because of the type invariants of `Self`. So the
++ // multiplication by two won't overflow.
++ let new_cap = core::cmp::max(cap * 2, len.checked_add(additional).ok_or(AllocError)?);
++ let layout = ArrayLayout::new(new_cap).map_err(|_| AllocError)?;
++
++ // SAFETY:
++ // - `ptr` is valid because it's either `None` or comes from a previous call to
++ // `A::realloc`.
++ // - `self.layout` matches the `ArrayLayout` of the preceding allocation.
++ let ptr = unsafe {
++ A::realloc(
++ Some(self.ptr.cast()),
++ layout.into(),
++ self.layout.into(),
++ flags,
++ )?
++ };
++
++ // INVARIANT:
++ // - `layout` is some `ArrayLayout::<T>`,
++ // - `ptr` has been created by `A::realloc` from `layout`.
++ self.ptr = ptr.cast();
++ self.layout = layout;
++
++ Ok(())
++ }
++}
++
++impl<T: Clone, A: Allocator> Vec<T, A> {
++ /// Extend the vector by `n` clones of `value`.
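++ ///
++ /// # Examples
++ ///
++ /// A brief sketch extending a vector by three clones of `10`:
++ ///
++ /// ```
++ /// let mut v = kernel::kvec![1]?;
++ /// v.extend_with(3, 10, GFP_KERNEL)?;
++ ///
++ /// assert_eq!(v, [1, 10, 10, 10]);
++ /// # Ok::<(), Error>(())
++ /// ```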
++ pub fn extend_with(&mut self, n: usize, value: T, flags: Flags) -> Result<(), AllocError> {
++ if n == 0 {
++ return Ok(());
++ }
++
++ self.reserve(n, flags)?;
++
++ let spare = self.spare_capacity_mut();
++
++ for item in spare.iter_mut().take(n - 1) {
++ item.write(value.clone());
++ }
++
++ // We can write the last element directly without cloning needlessly.
++ spare[n - 1].write(value);
++
++ // SAFETY:
++ // - `self.len() + n <= self.capacity()` due to the call to `reserve` above,
++ // - the loop and the line above initialized the next `n` elements.
++ unsafe { self.set_len(self.len() + n) };
++
++ Ok(())
++ }
++
++ /// Pushes clones of the elements of slice into the [`Vec`] instance.
++ ///
++ /// # Examples
++ ///
++ /// ```
++ /// let mut v = KVec::new();
++ /// v.push(1, GFP_KERNEL)?;
++ ///
++ /// v.extend_from_slice(&[20, 30, 40], GFP_KERNEL)?;
++ /// assert_eq!(&v, &[1, 20, 30, 40]);
++ ///
++ /// v.extend_from_slice(&[50, 60], GFP_KERNEL)?;
++ /// assert_eq!(&v, &[1, 20, 30, 40, 50, 60]);
++ /// # Ok::<(), Error>(())
++ /// ```
++ pub fn extend_from_slice(&mut self, other: &[T], flags: Flags) -> Result<(), AllocError> {
++ self.reserve(other.len(), flags)?;
++ for (slot, item) in core::iter::zip(self.spare_capacity_mut(), other) {
++ slot.write(item.clone());
++ }
++
++ // SAFETY:
++ // - `other.len()` spare entries have just been initialized, so it is safe to increase
++ // the length by the same number.
++ // - `self.len() + other.len() <= self.capacity()` is guaranteed by the preceding `reserve`
++ // call.
++ unsafe { self.set_len(self.len() + other.len()) };
++ Ok(())
++ }
++
++ /// Create a new `Vec<T, A>` and extend it by `n` clones of `value`.
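++ ///
++ /// # Examples
++ ///
++ /// A brief sketch creating a zero-filled vector:
++ ///
++ /// ```
++ /// let v = KVec::from_elem(0u8, 4, GFP_KERNEL)?;
++ ///
++ /// assert_eq!(v, [0, 0, 0, 0]);
++ /// # Ok::<(), Error>(())
++ /// ```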
++ pub fn from_elem(value: T, n: usize, flags: Flags) -> Result<Self, AllocError> {
++ let mut v = Self::with_capacity(n, flags)?;
++
++ v.extend_with(n, value, flags)?;
++
++ Ok(v)
++ }
++}
++
++impl<T, A> Drop for Vec<T, A>
++where
++ A: Allocator,
++{
++ fn drop(&mut self) {
++ // SAFETY: `self.as_mut_ptr` is guaranteed to be valid by the type invariant.
++ unsafe {
++ ptr::drop_in_place(core::ptr::slice_from_raw_parts_mut(
++ self.as_mut_ptr(),
++ self.len,
++ ))
++ };
++
++ // SAFETY:
++ // - `self.ptr` was previously allocated with `A`.
++ // - `self.layout` matches the `ArrayLayout` of the preceding allocation.
++ unsafe { A::free(self.ptr.cast(), self.layout.into()) };
++ }
++}
++
++impl<T, A, const N: usize> From<Box<[T; N], A>> for Vec<T, A>
++where
++ A: Allocator,
++{
++ fn from(b: Box<[T; N], A>) -> Vec<T, A> {
++ let len = b.len();
++ let ptr = Box::into_raw(b);
++
++ // SAFETY:
++ // - `b` has been allocated with `A`,
++ // - `ptr` fulfills the alignment requirements for `T`,
++ // - `ptr` points to memory with at least a size of `size_of::<T>() * len`,
++ // - all elements within `b` are initialized values of `T`,
++ // - `len` does not exceed `isize::MAX`.
++ unsafe { Vec::from_raw_parts(ptr as _, len, len) }
++ }
++}
++
++impl<T> Default for KVec<T> {
++ #[inline]
++ fn default() -> Self {
++ Self::new()
++ }
++}
++
++impl<T: fmt::Debug, A: Allocator> fmt::Debug for Vec<T, A> {
++ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
++ fmt::Debug::fmt(&**self, f)
++ }
++}
++
++impl<T, A> Deref for Vec<T, A>
++where
++ A: Allocator,
++{
++ type Target = [T];
++
++ #[inline]
++ fn deref(&self) -> &[T] {
++ // SAFETY: The memory behind `self.as_ptr()` is guaranteed to contain `self.len`
++ // initialized elements of type `T`.
++ unsafe { slice::from_raw_parts(self.as_ptr(), self.len) }
++ }
++}
++
++impl<T, A> DerefMut for Vec<T, A>
++where
++ A: Allocator,
++{
++ #[inline]
++ fn deref_mut(&mut self) -> &mut [T] {
++ // SAFETY: The memory behind `self.as_ptr()` is guaranteed to contain `self.len`
++ // initialized elements of type `T`.
++ unsafe { slice::from_raw_parts_mut(self.as_mut_ptr(), self.len) }
++ }
++}
++
++impl<T: Eq, A> Eq for Vec<T, A> where A: Allocator {}
++
++impl<T, I: SliceIndex<[T]>, A> Index<I> for Vec<T, A>
++where
++ A: Allocator,
++{
++ type Output = I::Output;
++
++ #[inline]
++ fn index(&self, index: I) -> &Self::Output {
++ Index::index(&**self, index)
++ }
++}
++
++impl<T, I: SliceIndex<[T]>, A> IndexMut<I> for Vec<T, A>
++where
++ A: Allocator,
++{
++ #[inline]
++ fn index_mut(&mut self, index: I) -> &mut Self::Output {
++ IndexMut::index_mut(&mut **self, index)
++ }
++}
++
++macro_rules! impl_slice_eq {
++ ($([$($vars:tt)*] $lhs:ty, $rhs:ty,)*) => {
++ $(
++ impl<T, U, $($vars)*> PartialEq<$rhs> for $lhs
++ where
++ T: PartialEq<U>,
++ {
++ #[inline]
++ fn eq(&self, other: &$rhs) -> bool { self[..] == other[..] }
++ }
++ )*
++ }
++}
++
++impl_slice_eq! {
++ [A1: Allocator, A2: Allocator] Vec<T, A1>, Vec<U, A2>,
++ [A: Allocator] Vec<T, A>, &[U],
++ [A: Allocator] Vec<T, A>, &mut [U],
++ [A: Allocator] &[T], Vec<U, A>,
++ [A: Allocator] &mut [T], Vec<U, A>,
++ [A: Allocator] Vec<T, A>, [U],
++ [A: Allocator] [T], Vec<U, A>,
++ [A: Allocator, const N: usize] Vec<T, A>, [U; N],
++ [A: Allocator, const N: usize] Vec<T, A>, &[U; N],
++}
++
++impl<'a, T, A> IntoIterator for &'a Vec<T, A>
++where
++ A: Allocator,
++{
++ type Item = &'a T;
++ type IntoIter = slice::Iter<'a, T>;
++
++ fn into_iter(self) -> Self::IntoIter {
++ self.iter()
++ }
++}
++
++impl<'a, T, A> IntoIterator for &'a mut Vec<T, A>
++where
++ A: Allocator,
++{
++ type Item = &'a mut T;
++ type IntoIter = slice::IterMut<'a, T>;
++
++ fn into_iter(self) -> Self::IntoIter {
++ self.iter_mut()
++ }
++}
++
++/// An [`Iterator`] implementation for [`Vec`] that moves elements out of a vector.
++///
++/// This structure is created by the [`Vec::into_iter`] method on [`Vec`] (provided by the
++/// [`IntoIterator`] trait).
++///
++/// # Examples
++///
++/// ```
++/// let v = kernel::kvec![0, 1, 2]?;
++/// let iter = v.into_iter();
++///
++/// # Ok::<(), Error>(())
++/// ```
++pub struct IntoIter<T, A: Allocator> {
++ ptr: *mut T,
++ buf: NonNull<T>,
++ len: usize,
++ layout: ArrayLayout<T>,
++ _p: PhantomData<A>,
++}
++
++impl<T, A> IntoIter<T, A>
++where
++ A: Allocator,
++{
++ fn into_raw_parts(self) -> (*mut T, NonNull<T>, usize, usize) {
++ let me = ManuallyDrop::new(self);
++ let ptr = me.ptr;
++ let buf = me.buf;
++ let len = me.len;
++ let cap = me.layout.len();
++ (ptr, buf, len, cap)
++ }
++
++ /// Same as `Iterator::collect` but specialized for `Vec`'s `IntoIter`.
++ ///
++ /// # Examples
++ ///
++ /// ```
++ /// let v = kernel::kvec![1, 2, 3]?;
++ /// let mut it = v.into_iter();
++ ///
++ /// assert_eq!(it.next(), Some(1));
++ ///
++ /// let v = it.collect(GFP_KERNEL);
++ /// assert_eq!(v, [2, 3]);
++ ///
++ /// # Ok::<(), Error>(())
++ /// ```
++ ///
++ /// # Implementation details
++ ///
++ /// Currently, we can't implement `FromIterator`. There are a couple of issues with this trait
++ /// in the kernel, namely:
++ ///
++ /// - Rust's specialization feature is unstable. This prevents us from optimizing for the
++ /// special case where `I::IntoIter` equals `Vec`'s `IntoIter` type.
++ /// - We can't use `I::IntoIter`'s type ID to work around this either, since `FromIterator`
++ /// doesn't require this type to be `'static`.
++ /// - `FromIterator::from_iter` returns `Self` instead of `Result<Self, AllocError>`, hence
++ /// we can't properly handle allocation failures.
++ /// - Neither `Iterator::collect` nor `FromIterator::from_iter` can handle additional allocation
++ /// flags.
++ ///
++ /// Instead, provide `IntoIter::collect`, such that we can at least convert an `IntoIter` into a
++ /// `Vec` again.
++ ///
++ /// Note that `IntoIter::collect` doesn't require `Flags`, since it re-uses the existing backing
++ /// buffer. However, this backing buffer may be shrunk to the actual count of elements.
++ pub fn collect(self, flags: Flags) -> Vec<T, A> {
++ let old_layout = self.layout;
++ let (mut ptr, buf, len, mut cap) = self.into_raw_parts();
++ let has_advanced = ptr != buf.as_ptr();
++
++ if has_advanced {
++ // Copy the remaining elements to the beginning of the buffer.
++ //
++ // SAFETY:
++ // - `ptr` is valid for reads of `len * size_of::<T>()` bytes,
++ // - `buf.as_ptr()` is valid for writes of `len * size_of::<T>()` bytes,
++ // - `ptr` and `buf.as_ptr()` are not subject to aliasing restrictions relative to
++ // each other,
++ // - both `ptr` and `buf.as_ptr()` are properly aligned.
++ unsafe { ptr::copy(ptr, buf.as_ptr(), len) };
++ ptr = buf.as_ptr();
++
++ // SAFETY: `len` is guaranteed to be smaller than `self.layout.len()`.
++ let layout = unsafe { ArrayLayout::<T>::new_unchecked(len) };
++
++ // SAFETY: `buf` points to the start of the backing buffer and `len` is guaranteed to be
++ // smaller than `cap`. Depending on the allocator, this operation may shrink the buffer
++ // or leave it as it is.
++ ptr = match unsafe {
++ A::realloc(Some(buf.cast()), layout.into(), old_layout.into(), flags)
++ } {
++ // If we fail to shrink, which likely can't even happen, continue with the existing
++ // buffer.
++ Err(_) => ptr,
++ Ok(ptr) => {
++ cap = len;
++ ptr.as_ptr().cast()
++ }
++ };
++ }
++
++ // SAFETY: If the iterator has been advanced, the advanced elements have been copied to
++ // the beginning of the buffer and `len` has been adjusted accordingly.
++ //
++ // - `ptr` is guaranteed to point to the start of the backing buffer.
++ // - `cap` is either the original capacity or, after shrinking the buffer, equal to `len`.
++ // - `alloc` is guaranteed to be unchanged since `into_iter` has been called on the original
++ // `Vec`.
++ unsafe { Vec::from_raw_parts(ptr, len, cap) }
++ }
++}
++
++impl<T, A> Iterator for IntoIter<T, A>
++where
++ A: Allocator,
++{
++ type Item = T;
++
++ /// # Examples
++ ///
++ /// ```
++ /// let v = kernel::kvec![1, 2, 3]?;
++ /// let mut it = v.into_iter();
++ ///
++ /// assert_eq!(it.next(), Some(1));
++ /// assert_eq!(it.next(), Some(2));
++ /// assert_eq!(it.next(), Some(3));
++ /// assert_eq!(it.next(), None);
++ ///
++ /// # Ok::<(), Error>(())
++ /// ```
++ fn next(&mut self) -> Option<T> {
++ if self.len == 0 {
++ return None;
++ }
++
++ let current = self.ptr;
++
++ // SAFETY: We can't overflow; decreasing `self.len` by one every time we advance `self.ptr`
++ // by one guarantees that.
++ unsafe { self.ptr = self.ptr.add(1) };
++
++ self.len -= 1;
++
++ // SAFETY: `current` is guaranteed to point at a valid element within the buffer.
++ Some(unsafe { current.read() })
++ }
++
++ /// # Examples
++ ///
++ /// ```
++ /// let v: KVec<u32> = kernel::kvec![1, 2, 3]?;
++ /// let mut iter = v.into_iter();
++ /// let size = iter.size_hint().0;
++ ///
++ /// iter.next();
++ /// assert_eq!(iter.size_hint().0, size - 1);
++ ///
++ /// iter.next();
++ /// assert_eq!(iter.size_hint().0, size - 2);
++ ///
++ /// iter.next();
++ /// assert_eq!(iter.size_hint().0, size - 3);
++ ///
++ /// # Ok::<(), Error>(())
++ /// ```
++ fn size_hint(&self) -> (usize, Option<usize>) {
++ (self.len, Some(self.len))
++ }
++}
++
++impl<T, A> Drop for IntoIter<T, A>
++where
++ A: Allocator,
++{
++ fn drop(&mut self) {
++ // SAFETY: `self.ptr` is guaranteed to be valid by the type invariant.
++ unsafe { ptr::drop_in_place(ptr::slice_from_raw_parts_mut(self.ptr, self.len)) };
++
++ // SAFETY:
++ // - `self.buf` was previously allocated with `A`.
++ // - `self.layout` matches the `ArrayLayout` of the preceding allocation.
++ unsafe { A::free(self.buf.cast(), self.layout.into()) };
++ }
++}
++
++impl<T, A> IntoIterator for Vec<T, A>
++where
++ A: Allocator,
++{
++ type Item = T;
++ type IntoIter = IntoIter<T, A>;
++
++ /// Consumes the `Vec<T, A>` and creates an `Iterator`, which moves each value out of the
++ /// vector (from start to end).
++ ///
++ /// # Examples
++ ///
++ /// ```
++ /// let v = kernel::kvec![1, 2]?;
++ /// let mut v_iter = v.into_iter();
++ ///
++ /// let first_element: Option<u32> = v_iter.next();
++ ///
++ /// assert_eq!(first_element, Some(1));
++ /// assert_eq!(v_iter.next(), Some(2));
++ /// assert_eq!(v_iter.next(), None);
++ ///
++ /// # Ok::<(), Error>(())
++ /// ```
++ ///
++ /// ```
++ /// let v = kernel::kvec![];
++ /// let mut v_iter = v.into_iter();
++ ///
++ /// let first_element: Option<u32> = v_iter.next();
++ ///
++ /// assert_eq!(first_element, None);
++ ///
++ /// # Ok::<(), Error>(())
++ /// ```
++ #[inline]
++ fn into_iter(self) -> Self::IntoIter {
++ let buf = self.ptr;
++ let layout = self.layout;
++ let (ptr, len, _) = self.into_raw_parts();
++
++ IntoIter {
++ ptr,
++ buf,
++ len,
++ layout,
++ _p: PhantomData::<A>,
++ }
++ }
++}
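
The `collect()` implementation this hunk completes copies any not-yet-consumed
elements to the front of the buffer and then attempts an in-place shrink via
`A::realloc`; on failure it simply keeps the original buffer. A minimal
doc-test-style sketch of the observable behavior (assuming the `collect(flags)`
method defined on `IntoIter` earlier in this file):

    let v = kernel::kvec![1, 2, 3]?;
    let mut it = v.into_iter();
    assert_eq!(it.next(), Some(1));

    // Collecting the remaining elements reuses (and may shrink) the
    // original allocation; no new buffer is required.
    let v: KVec<i32> = it.collect(GFP_KERNEL);
    assert_eq!(v, [2, 3]);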
+diff --git a/rust/kernel/alloc/layout.rs b/rust/kernel/alloc/layout.rs
+new file mode 100644
+index 00000000000000..4b3cd7fdc816c1
+--- /dev/null
++++ b/rust/kernel/alloc/layout.rs
+@@ -0,0 +1,91 @@
++// SPDX-License-Identifier: GPL-2.0
++
++//! Memory layout.
++//!
++//! Custom layout types extending or improving [`Layout`].
++
++use core::{alloc::Layout, marker::PhantomData};
++
++/// Error when constructing an [`ArrayLayout`].
++pub struct LayoutError;
++
++/// A layout for an array `[T; n]`.
++///
++/// # Invariants
++///
++/// - `len * size_of::<T>() <= isize::MAX`.
++pub struct ArrayLayout<T> {
++ len: usize,
++ _phantom: PhantomData<fn() -> T>,
++}
++
++impl<T> Clone for ArrayLayout<T> {
++ fn clone(&self) -> Self {
++ *self
++ }
++}
++impl<T> Copy for ArrayLayout<T> {}
++
++const ISIZE_MAX: usize = isize::MAX as usize;
++
++impl<T> ArrayLayout<T> {
++ /// Creates a new layout for `[T; 0]`.
++ pub const fn empty() -> Self {
++ // INVARIANT: `0 * size_of::<T>() <= isize::MAX`.
++ Self {
++ len: 0,
++ _phantom: PhantomData,
++ }
++ }
++
++ /// Creates a new layout for `[T; len]`.
++ ///
++ /// # Errors
++ ///
++ /// When `len * size_of::<T>()` overflows or when `len * size_of::<T>() > isize::MAX`.
++ pub const fn new(len: usize) -> Result<Self, LayoutError> {
++ match len.checked_mul(core::mem::size_of::<T>()) {
++ Some(size) if size <= ISIZE_MAX => {
++ // INVARIANT: We checked above that `len * size_of::<T>() <= isize::MAX`.
++ Ok(Self {
++ len,
++ _phantom: PhantomData,
++ })
++ }
++ _ => Err(LayoutError),
++ }
++ }
++
++ /// Creates a new layout for `[T; len]`.
++ ///
++ /// # Safety
++ ///
++ /// `len` must be a value for which `len * size_of::<T>() <= isize::MAX` holds.
++ pub unsafe fn new_unchecked(len: usize) -> Self {
++ // INVARIANT: By the safety requirements of this function
++ // `len * size_of::<T>() <= isize::MAX`.
++ Self {
++ len,
++ _phantom: PhantomData,
++ }
++ }
++
++ /// Returns the number of array elements represented by this layout.
++ pub const fn len(&self) -> usize {
++ self.len
++ }
++
++ /// Returns `true` when no array elements are represented by this layout.
++ pub const fn is_empty(&self) -> bool {
++ self.len == 0
++ }
++}
++
++impl<T> From<ArrayLayout<T>> for Layout {
++ fn from(value: ArrayLayout<T>) -> Self {
++ let res = Layout::array::<T>(value.len);
++ // SAFETY: By the type invariant of `ArrayLayout` we have
++ // `len * size_of::<T>() <= isize::MAX` and thus the result must be `Ok`.
++ unsafe { res.unwrap_unchecked() }
++ }
++}
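
Because `ArrayLayout::new()` (and the safety contract of `new_unchecked()`)
establish `len * size_of::<T>() <= isize::MAX` up front, the `From` impl above
can call `unwrap_unchecked()` on `Layout::array` without a runtime check. A
small sketch of the intended use, relying only on items defined in this file:

    if let Ok(layout) = ArrayLayout::<u64>::new(16) {
        assert_eq!(layout.len(), 16);
        assert!(!layout.is_empty());

        // The conversion is infallible thanks to the invariant
        // established by `new()`.
        let raw: core::alloc::Layout = layout.into();
        assert_eq!(raw.size(), 16 * core::mem::size_of::<u64>());
    }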
+diff --git a/rust/kernel/alloc/vec_ext.rs b/rust/kernel/alloc/vec_ext.rs
+deleted file mode 100644
+index 1297a4be32e8c4..00000000000000
+--- a/rust/kernel/alloc/vec_ext.rs
++++ /dev/null
+@@ -1,185 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-
+-//! Extensions to [`Vec`] for fallible allocations.
+-
+-use super::{AllocError, Flags};
+-use alloc::vec::Vec;
+-
+-/// Extensions to [`Vec`].
+-pub trait VecExt<T>: Sized {
+- /// Creates a new [`Vec`] instance with at least the given capacity.
+- ///
+- /// # Examples
+- ///
+- /// ```
+- /// let v = Vec::<u32>::with_capacity(20, GFP_KERNEL)?;
+- ///
+- /// assert!(v.capacity() >= 20);
+- /// # Ok::<(), Error>(())
+- /// ```
+- fn with_capacity(capacity: usize, flags: Flags) -> Result<Self, AllocError>;
+-
+- /// Appends an element to the back of the [`Vec`] instance.
+- ///
+- /// # Examples
+- ///
+- /// ```
+- /// let mut v = Vec::new();
+- /// v.push(1, GFP_KERNEL)?;
+- /// assert_eq!(&v, &[1]);
+- ///
+- /// v.push(2, GFP_KERNEL)?;
+- /// assert_eq!(&v, &[1, 2]);
+- /// # Ok::<(), Error>(())
+- /// ```
+- fn push(&mut self, v: T, flags: Flags) -> Result<(), AllocError>;
+-
+- /// Pushes clones of the elements of slice into the [`Vec`] instance.
+- ///
+- /// # Examples
+- ///
+- /// ```
+- /// let mut v = Vec::new();
+- /// v.push(1, GFP_KERNEL)?;
+- ///
+- /// v.extend_from_slice(&[20, 30, 40], GFP_KERNEL)?;
+- /// assert_eq!(&v, &[1, 20, 30, 40]);
+- ///
+- /// v.extend_from_slice(&[50, 60], GFP_KERNEL)?;
+- /// assert_eq!(&v, &[1, 20, 30, 40, 50, 60]);
+- /// # Ok::<(), Error>(())
+- /// ```
+- fn extend_from_slice(&mut self, other: &[T], flags: Flags) -> Result<(), AllocError>
+- where
+- T: Clone;
+-
+- /// Ensures that the capacity exceeds the length by at least `additional` elements.
+- ///
+- /// # Examples
+- ///
+- /// ```
+- /// let mut v = Vec::new();
+- /// v.push(1, GFP_KERNEL)?;
+- ///
+- /// v.reserve(10, GFP_KERNEL)?;
+- /// let cap = v.capacity();
+- /// assert!(cap >= 10);
+- ///
+- /// v.reserve(10, GFP_KERNEL)?;
+- /// let new_cap = v.capacity();
+- /// assert_eq!(new_cap, cap);
+- ///
+- /// # Ok::<(), Error>(())
+- /// ```
+- fn reserve(&mut self, additional: usize, flags: Flags) -> Result<(), AllocError>;
+-}
+-
+-impl<T> VecExt<T> for Vec<T> {
+- fn with_capacity(capacity: usize, flags: Flags) -> Result<Self, AllocError> {
+- let mut v = Vec::new();
+- <Self as VecExt<_>>::reserve(&mut v, capacity, flags)?;
+- Ok(v)
+- }
+-
+- fn push(&mut self, v: T, flags: Flags) -> Result<(), AllocError> {
+- <Self as VecExt<_>>::reserve(self, 1, flags)?;
+- let s = self.spare_capacity_mut();
+- s[0].write(v);
+-
+- // SAFETY: We just initialised the first spare entry, so it is safe to increase the length
+- // by 1. We also know that the new length is <= capacity because of the previous call to
+- // `reserve` above.
+- unsafe { self.set_len(self.len() + 1) };
+- Ok(())
+- }
+-
+- fn extend_from_slice(&mut self, other: &[T], flags: Flags) -> Result<(), AllocError>
+- where
+- T: Clone,
+- {
+- <Self as VecExt<_>>::reserve(self, other.len(), flags)?;
+- for (slot, item) in core::iter::zip(self.spare_capacity_mut(), other) {
+- slot.write(item.clone());
+- }
+-
+- // SAFETY: We just initialised the `other.len()` spare entries, so it is safe to increase
+- // the length by the same amount. We also know that the new length is <= capacity because
+- // of the previous call to `reserve` above.
+- unsafe { self.set_len(self.len() + other.len()) };
+- Ok(())
+- }
+-
+- #[cfg(any(test, testlib))]
+- fn reserve(&mut self, additional: usize, _flags: Flags) -> Result<(), AllocError> {
+- Vec::reserve(self, additional);
+- Ok(())
+- }
+-
+- #[cfg(not(any(test, testlib)))]
+- fn reserve(&mut self, additional: usize, flags: Flags) -> Result<(), AllocError> {
+- let len = self.len();
+- let cap = self.capacity();
+-
+- if cap - len >= additional {
+- return Ok(());
+- }
+-
+- if core::mem::size_of::<T>() == 0 {
+- // The capacity is already `usize::MAX` for SZTs, we can't go higher.
+- return Err(AllocError);
+- }
+-
+- // We know cap is <= `isize::MAX` because `Layout::array` fails if the resulting byte size
+- // is greater than `isize::MAX`. So the multiplication by two won't overflow.
+- let new_cap = core::cmp::max(cap * 2, len.checked_add(additional).ok_or(AllocError)?);
+- let layout = core::alloc::Layout::array::<T>(new_cap).map_err(|_| AllocError)?;
+-
+- let (old_ptr, len, cap) = destructure(self);
+-
+- // We need to make sure that `ptr` is either NULL or comes from a previous call to
+- // `krealloc_aligned`. A `Vec<T>`'s `ptr` value is not guaranteed to be NULL and might be
+- // dangling after being created with `Vec::new`. Instead, we can rely on `Vec<T>`'s capacity
+- // to be zero if no memory has been allocated yet.
+- let ptr = if cap == 0 {
+- core::ptr::null_mut()
+- } else {
+- old_ptr
+- };
+-
+- // SAFETY: `ptr` is valid because it's either NULL or comes from a previous call to
+- // `krealloc_aligned`. We also verified that the type is not a ZST.
+- let new_ptr = unsafe { super::allocator::krealloc_aligned(ptr.cast(), layout, flags) };
+- if new_ptr.is_null() {
+- // SAFETY: We are just rebuilding the existing `Vec` with no changes.
+- unsafe { rebuild(self, old_ptr, len, cap) };
+- Err(AllocError)
+- } else {
+- // SAFETY: `ptr` has been reallocated with the layout for `new_cap` elements. New cap
+- // is greater than `cap`, so it continues to be >= `len`.
+- unsafe { rebuild(self, new_ptr.cast::<T>(), len, new_cap) };
+- Ok(())
+- }
+- }
+-}
+-
+-#[cfg(not(any(test, testlib)))]
+-fn destructure<T>(v: &mut Vec<T>) -> (*mut T, usize, usize) {
+- let mut tmp = Vec::new();
+- core::mem::swap(&mut tmp, v);
+- let mut tmp = core::mem::ManuallyDrop::new(tmp);
+- let len = tmp.len();
+- let cap = tmp.capacity();
+- (tmp.as_mut_ptr(), len, cap)
+-}
+-
+-/// Rebuilds a `Vec` from a pointer, length, and capacity.
+-///
+-/// # Safety
+-///
+-/// The same as [`Vec::from_raw_parts`].
+-#[cfg(not(any(test, testlib)))]
+-unsafe fn rebuild<T>(v: &mut Vec<T>, ptr: *mut T, len: usize, cap: usize) {
+- // SAFETY: The safety requirements from this function satisfy those of `from_raw_parts`.
+- let mut tmp = unsafe { Vec::from_raw_parts(ptr, len, cap) };
+- core::mem::swap(&mut tmp, v);
+-}
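
The deleted `VecExt` extension trait is superseded by `KVec` from earlier in
this patch, which provides the same fallible, `Flags`-taking operations as
inherent methods. A rough equivalence sketch (assuming the `KVec` API
introduced above):

    // Previously `alloc::vec::Vec` + `VecExt`; now `KVec` directly:
    let mut v = KVec::new();
    v.push(1, GFP_KERNEL)?;
    v.extend_from_slice(&[20, 30, 40], GFP_KERNEL)?;
    v.reserve(10, GFP_KERNEL)?;
    assert_eq!(&v, &[1, 20, 30, 40]);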
+diff --git a/rust/kernel/block/mq/gen_disk.rs b/rust/kernel/block/mq/gen_disk.rs
+index 708125dce96a93..c6df153ebb8860 100644
+--- a/rust/kernel/block/mq/gen_disk.rs
++++ b/rust/kernel/block/mq/gen_disk.rs
+@@ -174,9 +174,9 @@ pub fn build<T: Operations>(
+ ///
+ /// # Invariants
+ ///
+-/// - `gendisk` must always point to an initialized and valid `struct gendisk`.
+-/// - `gendisk` was added to the VFS through a call to
+-/// `bindings::device_add_disk`.
++/// - `gendisk` must always point to an initialized and valid `struct gendisk`.
++/// - `gendisk` was added to the VFS through a call to
++/// `bindings::device_add_disk`.
+ pub struct GenDisk<T: Operations> {
+ _tagset: Arc<TagSet<T>>,
+ gendisk: *mut bindings::gendisk,
+diff --git a/rust/kernel/block/mq/operations.rs b/rust/kernel/block/mq/operations.rs
+index 9ba7fdfeb4b22c..c8646d0d98669f 100644
+--- a/rust/kernel/block/mq/operations.rs
++++ b/rust/kernel/block/mq/operations.rs
+@@ -131,7 +131,7 @@ impl<T: Operations> OperationsVTable<T> {
+ unsafe extern "C" fn poll_callback(
+ _hctx: *mut bindings::blk_mq_hw_ctx,
+ _iob: *mut bindings::io_comp_batch,
+- ) -> core::ffi::c_int {
++ ) -> crate::ffi::c_int {
+ T::poll().into()
+ }
+
+@@ -145,9 +145,9 @@ impl<T: Operations> OperationsVTable<T> {
+ /// for the same context.
+ unsafe extern "C" fn init_hctx_callback(
+ _hctx: *mut bindings::blk_mq_hw_ctx,
+- _tagset_data: *mut core::ffi::c_void,
+- _hctx_idx: core::ffi::c_uint,
+- ) -> core::ffi::c_int {
++ _tagset_data: *mut crate::ffi::c_void,
++ _hctx_idx: crate::ffi::c_uint,
++ ) -> crate::ffi::c_int {
+ from_result(|| Ok(0))
+ }
+
+@@ -159,7 +159,7 @@ impl<T: Operations> OperationsVTable<T> {
+ /// This function may only be called by blk-mq C infrastructure.
+ unsafe extern "C" fn exit_hctx_callback(
+ _hctx: *mut bindings::blk_mq_hw_ctx,
+- _hctx_idx: core::ffi::c_uint,
++ _hctx_idx: crate::ffi::c_uint,
+ ) {
+ }
+
+@@ -176,9 +176,9 @@ impl<T: Operations> OperationsVTable<T> {
+ unsafe extern "C" fn init_request_callback(
+ _set: *mut bindings::blk_mq_tag_set,
+ rq: *mut bindings::request,
+- _hctx_idx: core::ffi::c_uint,
+- _numa_node: core::ffi::c_uint,
+- ) -> core::ffi::c_int {
++ _hctx_idx: crate::ffi::c_uint,
++ _numa_node: crate::ffi::c_uint,
++ ) -> crate::ffi::c_int {
+ from_result(|| {
+ // SAFETY: By the safety requirements of this function, `rq` points
+ // to a valid allocation.
+@@ -203,7 +203,7 @@ impl<T: Operations> OperationsVTable<T> {
+ unsafe extern "C" fn exit_request_callback(
+ _set: *mut bindings::blk_mq_tag_set,
+ rq: *mut bindings::request,
+- _hctx_idx: core::ffi::c_uint,
++ _hctx_idx: crate::ffi::c_uint,
+ ) {
+ // SAFETY: The tagset invariants guarantee that all requests are allocated with extra memory
+ // for the request data.
+diff --git a/rust/kernel/block/mq/raw_writer.rs b/rust/kernel/block/mq/raw_writer.rs
+index 9222465d670bfe..7e2159e4f6a6f7 100644
+--- a/rust/kernel/block/mq/raw_writer.rs
++++ b/rust/kernel/block/mq/raw_writer.rs
+@@ -25,7 +25,7 @@ fn new(buffer: &'a mut [u8]) -> Result<RawWriter<'a>> {
+ }
+
+ pub(crate) fn from_array<const N: usize>(
+- a: &'a mut [core::ffi::c_char; N],
++ a: &'a mut [crate::ffi::c_char; N],
+ ) -> Result<RawWriter<'a>> {
+ Self::new(
+ // SAFETY: the buffer of `a` is valid for read and write as `u8` for
+diff --git a/rust/kernel/block/mq/tag_set.rs b/rust/kernel/block/mq/tag_set.rs
+index f9a1ca655a35be..d7f175a05d992b 100644
+--- a/rust/kernel/block/mq/tag_set.rs
++++ b/rust/kernel/block/mq/tag_set.rs
+@@ -53,7 +53,7 @@ pub fn new(
+ queue_depth: num_tags,
+ cmd_size,
+ flags: bindings::BLK_MQ_F_SHOULD_MERGE,
+- driver_data: core::ptr::null_mut::<core::ffi::c_void>(),
++ driver_data: core::ptr::null_mut::<crate::ffi::c_void>(),
+ nr_maps: num_maps,
+ ..tag_set
+ }
+diff --git a/rust/kernel/error.rs b/rust/kernel/error.rs
+index 6f1587a2524e8b..5fece574ec023b 100644
+--- a/rust/kernel/error.rs
++++ b/rust/kernel/error.rs
+@@ -6,9 +6,10 @@
+
+ use crate::{alloc::AllocError, str::CStr};
+
+-use alloc::alloc::LayoutError;
++use core::alloc::LayoutError;
+
+ use core::fmt;
++use core::num::NonZeroI32;
+ use core::num::TryFromIntError;
+ use core::str::Utf8Error;
+
+@@ -20,7 +21,11 @@ macro_rules! declare_err {
+ $(
+ #[doc = $doc]
+ )*
+- pub const $err: super::Error = super::Error(-(crate::bindings::$err as i32));
++ pub const $err: super::Error =
++ match super::Error::try_from_errno(-(crate::bindings::$err as i32)) {
++ Some(err) => err,
++ None => panic!("Invalid errno in `declare_err!`"),
++ };
+ };
+ }
+
+@@ -88,14 +93,14 @@ macro_rules! declare_err {
+ ///
+ /// The value is a valid `errno` (i.e. `>= -MAX_ERRNO && < 0`).
+ #[derive(Clone, Copy, PartialEq, Eq)]
+-pub struct Error(core::ffi::c_int);
++pub struct Error(NonZeroI32);
+
+ impl Error {
+ /// Creates an [`Error`] from a kernel error code.
+ ///
+ /// It is a bug to pass an out-of-range `errno`. `EINVAL` would
+ /// be returned in such a case.
+- pub(crate) fn from_errno(errno: core::ffi::c_int) -> Error {
++ pub fn from_errno(errno: crate::ffi::c_int) -> Error {
+ if errno < -(bindings::MAX_ERRNO as i32) || errno >= 0 {
+ // TODO: Make it a `WARN_ONCE` once available.
+ crate::pr_warn!(
+@@ -107,7 +112,20 @@ pub(crate) fn from_errno(errno: core::ffi::c_int) -> Error {
+
+ // INVARIANT: The check above ensures the type invariant
+ // will hold.
+- Error(errno)
++ // SAFETY: `errno` is checked above to be in a valid range.
++ unsafe { Error::from_errno_unchecked(errno) }
++ }
++
++ /// Creates an [`Error`] from a kernel error code.
++ ///
++ /// Returns [`None`] if `errno` is out-of-range.
++ const fn try_from_errno(errno: crate::ffi::c_int) -> Option<Error> {
++ if errno < -(bindings::MAX_ERRNO as i32) || errno >= 0 {
++ return None;
++ }
++
++ // SAFETY: `errno` is checked above to be in a valid range.
++ Some(unsafe { Error::from_errno_unchecked(errno) })
+ }
+
+ /// Creates an [`Error`] from a kernel error code.
+@@ -115,38 +133,35 @@ pub(crate) fn from_errno(errno: core::ffi::c_int) -> Error {
+ /// # Safety
+ ///
+ /// `errno` must be within error code range (i.e. `>= -MAX_ERRNO && < 0`).
+- unsafe fn from_errno_unchecked(errno: core::ffi::c_int) -> Error {
++ const unsafe fn from_errno_unchecked(errno: crate::ffi::c_int) -> Error {
+ // INVARIANT: The contract ensures the type invariant
+ // will hold.
+- Error(errno)
++ // SAFETY: The caller guarantees `errno` is non-zero.
++ Error(unsafe { NonZeroI32::new_unchecked(errno) })
+ }
+
+ /// Returns the kernel error code.
+- pub fn to_errno(self) -> core::ffi::c_int {
+- self.0
++ pub fn to_errno(self) -> crate::ffi::c_int {
++ self.0.get()
+ }
+
+ #[cfg(CONFIG_BLOCK)]
+ pub(crate) fn to_blk_status(self) -> bindings::blk_status_t {
+ // SAFETY: `self.0` is a valid error due to its invariant.
+- unsafe { bindings::errno_to_blk_status(self.0) }
++ unsafe { bindings::errno_to_blk_status(self.0.get()) }
+ }
+
+ /// Returns the error encoded as a pointer.
+- #[allow(dead_code)]
+- pub(crate) fn to_ptr<T>(self) -> *mut T {
+- #[cfg_attr(target_pointer_width = "32", allow(clippy::useless_conversion))]
++ pub fn to_ptr<T>(self) -> *mut T {
+ // SAFETY: `self.0` is a valid error due to its invariant.
+- unsafe {
+- bindings::ERR_PTR(self.0.into()) as *mut _
+- }
++ unsafe { bindings::ERR_PTR(self.0.get() as _) as *mut _ }
+ }
+
+ /// Returns a string representing the error, if one exists.
+- #[cfg(not(testlib))]
++ #[cfg(not(any(test, testlib)))]
+ pub fn name(&self) -> Option<&'static CStr> {
+ // SAFETY: Just an FFI call, there are no extra safety requirements.
+- let ptr = unsafe { bindings::errname(-self.0) };
++ let ptr = unsafe { bindings::errname(-self.0.get()) };
+ if ptr.is_null() {
+ None
+ } else {
+@@ -160,7 +175,7 @@ pub fn name(&self) -> Option<&'static CStr> {
+ /// When `testlib` is configured, this always returns `None` to avoid the dependency on a
+ /// kernel function so that tests that use this (e.g., by calling [`Result::unwrap`]) can still
+ /// run in userspace.
+- #[cfg(testlib)]
++ #[cfg(any(test, testlib))]
+ pub fn name(&self) -> Option<&'static CStr> {
+ None
+ }
+@@ -171,9 +186,11 @@ fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
+ match self.name() {
+ // Print out number if no name can be found.
+ None => f.debug_tuple("Error").field(&-self.0).finish(),
+- // SAFETY: These strings are ASCII-only.
+ Some(name) => f
+- .debug_tuple(unsafe { core::str::from_utf8_unchecked(name) })
++ .debug_tuple(
++ // SAFETY: These strings are ASCII-only.
++ unsafe { core::str::from_utf8_unchecked(name) },
++ )
+ .finish(),
+ }
+ }
+@@ -239,7 +256,7 @@ fn from(e: core::convert::Infallible) -> Error {
+
+ /// Converts an integer as returned by a C kernel function to an error if it's negative, and
+ /// `Ok(())` otherwise.
+-pub fn to_result(err: core::ffi::c_int) -> Result {
++pub fn to_result(err: crate::ffi::c_int) -> Result {
+ if err < 0 {
+ Err(Error::from_errno(err))
+ } else {
+@@ -262,21 +279,21 @@ pub fn to_result(err: core::ffi::c_int) -> Result {
+ /// fn devm_platform_ioremap_resource(
+ /// pdev: &mut PlatformDevice,
+ /// index: u32,
+-/// ) -> Result<*mut core::ffi::c_void> {
++/// ) -> Result<*mut kernel::ffi::c_void> {
+ /// // SAFETY: `pdev` points to a valid platform device. There are no safety requirements
+ /// // on `index`.
+ /// from_err_ptr(unsafe { bindings::devm_platform_ioremap_resource(pdev.to_ptr(), index) })
+ /// }
+ /// ```
+-// TODO: Remove `dead_code` marker once an in-kernel client is available.
+-#[allow(dead_code)]
+-pub(crate) fn from_err_ptr<T>(ptr: *mut T) -> Result<*mut T> {
+- // CAST: Casting a pointer to `*const core::ffi::c_void` is always valid.
+- let const_ptr: *const core::ffi::c_void = ptr.cast();
++pub fn from_err_ptr<T>(ptr: *mut T) -> Result<*mut T> {
++ // CAST: Casting a pointer to `*const crate::ffi::c_void` is always valid.
++ let const_ptr: *const crate::ffi::c_void = ptr.cast();
+ // SAFETY: The FFI function does not deref the pointer.
+ if unsafe { bindings::IS_ERR(const_ptr) } {
+ // SAFETY: The FFI function does not deref the pointer.
+ let err = unsafe { bindings::PTR_ERR(const_ptr) };
++
++ #[allow(clippy::unnecessary_cast)]
+ // CAST: If `IS_ERR()` returns `true`,
+ // then `PTR_ERR()` is guaranteed to return a
+ // negative value greater-or-equal to `-bindings::MAX_ERRNO`,
+@@ -286,8 +303,7 @@ pub(crate) fn from_err_ptr<T>(ptr: *mut T) -> Result<*mut T> {
+ //
+ // SAFETY: `IS_ERR()` ensures `err` is a
+ // negative value greater-or-equal to `-bindings::MAX_ERRNO`.
+- #[allow(clippy::unnecessary_cast)]
+- return Err(unsafe { Error::from_errno_unchecked(err as core::ffi::c_int) });
++ return Err(unsafe { Error::from_errno_unchecked(err as crate::ffi::c_int) });
+ }
+ Ok(ptr)
+ }
+@@ -307,7 +323,7 @@ pub(crate) fn from_err_ptr<T>(ptr: *mut T) -> Result<*mut T> {
+ /// # use kernel::bindings;
+ /// unsafe extern "C" fn probe_callback(
+ /// pdev: *mut bindings::platform_device,
+-/// ) -> core::ffi::c_int {
++/// ) -> kernel::ffi::c_int {
+ /// from_result(|| {
+ /// let ptr = devm_alloc(pdev)?;
+ /// bindings::platform_set_drvdata(pdev, ptr);
+@@ -315,9 +331,7 @@ pub(crate) fn from_err_ptr<T>(ptr: *mut T) -> Result<*mut T> {
+ /// })
+ /// }
+ /// ```
+-// TODO: Remove `dead_code` marker once an in-kernel client is available.
+-#[allow(dead_code)]
+-pub(crate) fn from_result<T, F>(f: F) -> T
++pub fn from_result<T, F>(f: F) -> T
+ where
+ T: From<i16>,
+ F: FnOnce() -> Result<T>,
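
Wrapping the error code in `NonZeroI32` gives the compiler a niche:
`Option<Error>` and `Result<(), Error>` need no extra discriminant and stay
the size of the error code itself. A tiny illustration using only `core`
types:

    use core::num::NonZeroI32;

    // `NonZeroI32` reserves 0 as a niche, so the `None`/`Ok` case can
    // be encoded as 0 without widening the type.
    assert_eq!(
        core::mem::size_of::<Option<NonZeroI32>>(),
        core::mem::size_of::<i32>()
    );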
+diff --git a/rust/kernel/firmware.rs b/rust/kernel/firmware.rs
+index 13a374a5cdb743..c5162fdc95ff05 100644
+--- a/rust/kernel/firmware.rs
++++ b/rust/kernel/firmware.rs
+@@ -12,7 +12,7 @@
+ /// One of the following: `bindings::request_firmware`, `bindings::firmware_request_nowarn`,
+ /// `bindings::firmware_request_platform`, `bindings::request_firmware_direct`.
+ struct FwFunc(
+- unsafe extern "C" fn(*mut *const bindings::firmware, *const i8, *mut bindings::device) -> i32,
++ unsafe extern "C" fn(*mut *const bindings::firmware, *const u8, *mut bindings::device) -> i32,
+ );
+
+ impl FwFunc {
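
The `*const i8` to `*const u8` switch in `FwFunc` tracks the kernel's own
`ffi` crate, where `c_char` maps to `u8` on all architectures. A hedged sketch
of that assumption (the `ffi` crate itself is outside this hunk):

    // Assuming `kernel::ffi::c_char` is an alias for `u8`:
    let name = kernel::c_str!("firmware.bin");
    let ptr: *const kernel::ffi::c_char = name.as_char_ptr();
    let _: *const u8 = ptr; // the same type on every architecture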
+diff --git a/rust/kernel/init.rs b/rust/kernel/init.rs
+index 789f80f71ca7e1..c962029f96e1f1 100644
+--- a/rust/kernel/init.rs
++++ b/rust/kernel/init.rs
+@@ -13,7 +13,7 @@
+ //! To initialize a `struct` with an in-place constructor you will need two things:
+ //! - an in-place constructor,
+ //! - a memory location that can hold your `struct` (this can be the [stack], an [`Arc<T>`],
+-//! [`UniqueArc<T>`], [`Box<T>`] or any other smart pointer that implements [`InPlaceInit`]).
++//! [`UniqueArc<T>`], [`KBox<T>`] or any other smart pointer that implements [`InPlaceInit`]).
+ //!
+ //! To get an in-place constructor there are generally three options:
+ //! - directly creating an in-place constructor using the [`pin_init!`] macro,
+@@ -35,7 +35,7 @@
+ //! that you need to write `<-` instead of `:` for fields that you want to initialize in-place.
+ //!
+ //! ```rust
+-//! # #![allow(clippy::disallowed_names)]
++//! # #![expect(clippy::disallowed_names)]
+ //! use kernel::sync::{new_mutex, Mutex};
+ //! # use core::pin::Pin;
+ //! #[pin_data]
+@@ -55,7 +55,7 @@
+ //! (or just the stack) to actually initialize a `Foo`:
+ //!
+ //! ```rust
+-//! # #![allow(clippy::disallowed_names)]
++//! # #![expect(clippy::disallowed_names)]
+ //! # use kernel::sync::{new_mutex, Mutex};
+ //! # use core::pin::Pin;
+ //! # #[pin_data]
+@@ -68,7 +68,7 @@
+ //! # a <- new_mutex!(42, "Foo::a"),
+ //! # b: 24,
+ //! # });
+-//! let foo: Result<Pin<Box<Foo>>> = Box::pin_init(foo, GFP_KERNEL);
++//! let foo: Result<Pin<KBox<Foo>>> = KBox::pin_init(foo, GFP_KERNEL);
+ //! ```
+ //!
+ //! For more information see the [`pin_init!`] macro.
+@@ -87,20 +87,19 @@
+ //! To declare an init macro/function you just return an [`impl PinInit<T, E>`]:
+ //!
+ //! ```rust
+-//! # #![allow(clippy::disallowed_names)]
+ //! # use kernel::{sync::Mutex, new_mutex, init::PinInit, try_pin_init};
+ //! #[pin_data]
+ //! struct DriverData {
+ //! #[pin]
+ //! status: Mutex<i32>,
+-//! buffer: Box<[u8; 1_000_000]>,
++//! buffer: KBox<[u8; 1_000_000]>,
+ //! }
+ //!
+ //! impl DriverData {
+ //! fn new() -> impl PinInit<Self, Error> {
+ //! try_pin_init!(Self {
+ //! status <- new_mutex!(0, "DriverData::status"),
+-//! buffer: Box::init(kernel::init::zeroed(), GFP_KERNEL)?,
++//! buffer: KBox::init(kernel::init::zeroed(), GFP_KERNEL)?,
+ //! })
+ //! }
+ //! }
+@@ -121,11 +120,12 @@
+ //! `slot` gets called.
+ //!
+ //! ```rust
+-//! # #![allow(unreachable_pub, clippy::disallowed_names)]
++//! # #![expect(unreachable_pub, clippy::disallowed_names)]
+ //! use kernel::{init, types::Opaque};
+ //! use core::{ptr::addr_of_mut, marker::PhantomPinned, pin::Pin};
+ //! # mod bindings {
+-//! # #![allow(non_camel_case_types)]
++//! # #![expect(non_camel_case_types)]
++//! # #![expect(clippy::missing_safety_doc)]
+ //! # pub struct foo;
+ //! # pub unsafe fn init_foo(_ptr: *mut foo) {}
+ //! # pub unsafe fn destroy_foo(_ptr: *mut foo) {}
+@@ -133,7 +133,7 @@
+ //! # }
+ //! # // `Error::from_errno` is `pub(crate)` in the `kernel` crate, thus provide a workaround.
+ //! # trait FromErrno {
+-//! # fn from_errno(errno: core::ffi::c_int) -> Error {
++//! # fn from_errno(errno: kernel::ffi::c_int) -> Error {
+ //! # // Dummy error that can be constructed outside the `kernel` crate.
+ //! # Error::from(core::fmt::Error)
+ //! # }
+@@ -211,13 +211,12 @@
+ //! [`pin_init!`]: crate::pin_init!
+
+ use crate::{
+- alloc::{box_ext::BoxExt, AllocError, Flags},
++ alloc::{AllocError, Flags, KBox},
+ error::{self, Error},
+ sync::Arc,
+ sync::UniqueArc,
+ types::{Opaque, ScopeGuard},
+ };
+-use alloc::boxed::Box;
+ use core::{
+ cell::UnsafeCell,
+ convert::Infallible,
+@@ -238,7 +237,7 @@
+ /// # Examples
+ ///
+ /// ```rust
+-/// # #![allow(clippy::disallowed_names)]
++/// # #![expect(clippy::disallowed_names)]
+ /// # use kernel::{init, macros::pin_data, pin_init, stack_pin_init, init::*, sync::Mutex, new_mutex};
+ /// # use core::pin::Pin;
+ /// #[pin_data]
+@@ -290,7 +289,7 @@ macro_rules! stack_pin_init {
+ /// # Examples
+ ///
+ /// ```rust,ignore
+-/// # #![allow(clippy::disallowed_names)]
++/// # #![expect(clippy::disallowed_names)]
+ /// # use kernel::{init, pin_init, stack_try_pin_init, init::*, sync::Mutex, new_mutex};
+ /// # use macros::pin_data;
+ /// # use core::{alloc::AllocError, pin::Pin};
+@@ -298,7 +297,7 @@ macro_rules! stack_pin_init {
+ /// struct Foo {
+ /// #[pin]
+ /// a: Mutex<usize>,
+-/// b: Box<Bar>,
++/// b: KBox<Bar>,
+ /// }
+ ///
+ /// struct Bar {
+@@ -307,7 +306,7 @@ macro_rules! stack_pin_init {
+ ///
+ /// stack_try_pin_init!(let foo: Result<Pin<&mut Foo>, AllocError> = pin_init!(Foo {
+ /// a <- new_mutex!(42),
+-/// b: Box::new(Bar {
++/// b: KBox::new(Bar {
+ /// x: 64,
+ /// }, GFP_KERNEL)?,
+ /// }));
+@@ -316,7 +315,7 @@ macro_rules! stack_pin_init {
+ /// ```
+ ///
+ /// ```rust,ignore
+-/// # #![allow(clippy::disallowed_names)]
++/// # #![expect(clippy::disallowed_names)]
+ /// # use kernel::{init, pin_init, stack_try_pin_init, init::*, sync::Mutex, new_mutex};
+ /// # use macros::pin_data;
+ /// # use core::{alloc::AllocError, pin::Pin};
+@@ -324,7 +323,7 @@ macro_rules! stack_pin_init {
+ /// struct Foo {
+ /// #[pin]
+ /// a: Mutex<usize>,
+-/// b: Box<Bar>,
++/// b: KBox<Bar>,
+ /// }
+ ///
+ /// struct Bar {
+@@ -333,7 +332,7 @@ macro_rules! stack_pin_init {
+ ///
+ /// stack_try_pin_init!(let foo: Pin<&mut Foo> =? pin_init!(Foo {
+ /// a <- new_mutex!(42),
+-/// b: Box::new(Bar {
++/// b: KBox::new(Bar {
+ /// x: 64,
+ /// }, GFP_KERNEL)?,
+ /// }));
+@@ -368,7 +367,6 @@ macro_rules! stack_try_pin_init {
+ /// The syntax is almost identical to that of a normal `struct` initializer:
+ ///
+ /// ```rust
+-/// # #![allow(clippy::disallowed_names)]
+ /// # use kernel::{init, pin_init, macros::pin_data, init::*};
+ /// # use core::pin::Pin;
+ /// #[pin_data]
+@@ -392,7 +390,7 @@ macro_rules! stack_try_pin_init {
+ /// },
+ /// });
+ /// # initializer }
+-/// # Box::pin_init(demo(), GFP_KERNEL).unwrap();
++/// # KBox::pin_init(demo(), GFP_KERNEL).unwrap();
+ /// ```
+ ///
+ /// Arbitrary Rust expressions can be used to set the value of a variable.
+@@ -413,7 +411,6 @@ macro_rules! stack_try_pin_init {
+ /// To create an initializer function, simply declare it like this:
+ ///
+ /// ```rust
+-/// # #![allow(clippy::disallowed_names)]
+ /// # use kernel::{init, pin_init, init::*};
+ /// # use core::pin::Pin;
+ /// # #[pin_data]
+@@ -440,7 +437,7 @@ macro_rules! stack_try_pin_init {
+ /// Users of `Foo` can now create it like this:
+ ///
+ /// ```rust
+-/// # #![allow(clippy::disallowed_names)]
++/// # #![expect(clippy::disallowed_names)]
+ /// # use kernel::{init, pin_init, macros::pin_data, init::*};
+ /// # use core::pin::Pin;
+ /// # #[pin_data]
+@@ -462,13 +459,12 @@ macro_rules! stack_try_pin_init {
+ /// # })
+ /// # }
+ /// # }
+-/// let foo = Box::pin_init(Foo::new(), GFP_KERNEL);
++/// let foo = KBox::pin_init(Foo::new(), GFP_KERNEL);
+ /// ```
+ ///
+ /// They can also easily embed it into their own `struct`s:
+ ///
+ /// ```rust
+-/// # #![allow(clippy::disallowed_names)]
+ /// # use kernel::{init, pin_init, macros::pin_data, init::*};
+ /// # use core::pin::Pin;
+ /// # #[pin_data]
+@@ -541,6 +537,7 @@ macro_rules! stack_try_pin_init {
+ /// }
+ /// pin_init!(&this in Buf {
+ /// buf: [0; 64],
++/// // SAFETY: TODO.
+ /// ptr: unsafe { addr_of_mut!((*this.as_ptr()).buf).cast() },
+ /// pin: PhantomPinned,
+ /// });
+@@ -590,11 +587,10 @@ macro_rules! pin_init {
+ /// # Examples
+ ///
+ /// ```rust
+-/// # #![feature(new_uninit)]
+ /// use kernel::{init::{self, PinInit}, error::Error};
+ /// #[pin_data]
+ /// struct BigBuf {
+-/// big: Box<[u8; 1024 * 1024 * 1024]>,
++/// big: KBox<[u8; 1024 * 1024 * 1024]>,
+ /// small: [u8; 1024 * 1024],
+ /// ptr: *mut u8,
+ /// }
+@@ -602,7 +598,7 @@ macro_rules! pin_init {
+ /// impl BigBuf {
+ /// fn new() -> impl PinInit<Self, Error> {
+ /// try_pin_init!(Self {
+-/// big: Box::init(init::zeroed(), GFP_KERNEL)?,
++/// big: KBox::init(init::zeroed(), GFP_KERNEL)?,
+ /// small: [0; 1024 * 1024],
+ /// ptr: core::ptr::null_mut(),
+ /// }? Error)
+@@ -694,16 +690,16 @@ macro_rules! init {
+ /// # Examples
+ ///
+ /// ```rust
+-/// use kernel::{init::{PinInit, zeroed}, error::Error};
++/// use kernel::{alloc::KBox, init::{PinInit, zeroed}, error::Error};
+ /// struct BigBuf {
+-/// big: Box<[u8; 1024 * 1024 * 1024]>,
++/// big: KBox<[u8; 1024 * 1024 * 1024]>,
+ /// small: [u8; 1024 * 1024],
+ /// }
+ ///
+ /// impl BigBuf {
+ /// fn new() -> impl Init<Self, Error> {
+ /// try_init!(Self {
+-/// big: Box::init(zeroed(), GFP_KERNEL)?,
++/// big: KBox::init(zeroed(), GFP_KERNEL)?,
+ /// small: [0; 1024 * 1024],
+ /// }? Error)
+ /// }
+@@ -814,8 +810,8 @@ macro_rules! assert_pinned {
+ /// A pin-initializer for the type `T`.
+ ///
+ /// To use this initializer, you will need a suitable memory location that can hold a `T`. This can
+-/// be [`Box<T>`], [`Arc<T>`], [`UniqueArc<T>`] or even the stack (see [`stack_pin_init!`]). Use the
+-/// [`InPlaceInit::pin_init`] function of a smart pointer like [`Arc<T>`] on this.
++/// be [`KBox<T>`], [`Arc<T>`], [`UniqueArc<T>`] or even the stack (see [`stack_pin_init!`]). Use
++/// the [`InPlaceInit::pin_init`] function of a smart pointer like [`Arc<T>`] on this.
+ ///
+ /// Also see the [module description](self).
+ ///
+@@ -854,7 +850,7 @@ pub unsafe trait PinInit<T: ?Sized, E = Infallible>: Sized {
+ /// # Examples
+ ///
+ /// ```rust
+- /// # #![allow(clippy::disallowed_names)]
++ /// # #![expect(clippy::disallowed_names)]
+ /// use kernel::{types::Opaque, init::pin_init_from_closure};
+ /// #[repr(C)]
+ /// struct RawFoo([u8; 16]);
+@@ -875,6 +871,7 @@ pub unsafe trait PinInit<T: ?Sized, E = Infallible>: Sized {
+ /// }
+ ///
+ /// let foo = pin_init!(Foo {
++ /// // SAFETY: TODO.
+ /// raw <- unsafe {
+ /// Opaque::ffi_init(|s| {
+ /// init_foo(s);
+@@ -894,7 +891,7 @@ fn pin_chain<F>(self, f: F) -> ChainPinInit<Self, F, T, E>
+ }
+
+ /// An initializer returned by [`PinInit::pin_chain`].
+-pub struct ChainPinInit<I, F, T: ?Sized, E>(I, F, __internal::Invariant<(E, Box<T>)>);
++pub struct ChainPinInit<I, F, T: ?Sized, E>(I, F, __internal::Invariant<(E, KBox<T>)>);
+
+ // SAFETY: The `__pinned_init` function is implemented such that it
+ // - returns `Ok(())` on successful initialization,
+@@ -920,8 +917,8 @@ unsafe fn __pinned_init(self, slot: *mut T) -> Result<(), E> {
+ /// An initializer for `T`.
+ ///
+ /// To use this initializer, you will need a suitable memory location that can hold a `T`. This can
+-/// be [`Box<T>`], [`Arc<T>`], [`UniqueArc<T>`] or even the stack (see [`stack_pin_init!`]). Use the
+-/// [`InPlaceInit::init`] function of a smart pointer like [`Arc<T>`] on this. Because
++/// be [`KBox<T>`], [`Arc<T>`], [`UniqueArc<T>`] or even the stack (see [`stack_pin_init!`]). Use
++/// the [`InPlaceInit::init`] function of a smart pointer like [`Arc<T>`] on this. Because
+ /// [`PinInit<T, E>`] is a super trait, you can use every function that takes it as well.
+ ///
+ /// Also see the [module description](self).
+@@ -965,7 +962,7 @@ pub unsafe trait Init<T: ?Sized, E = Infallible>: PinInit<T, E> {
+ /// # Examples
+ ///
+ /// ```rust
+- /// # #![allow(clippy::disallowed_names)]
++ /// # #![expect(clippy::disallowed_names)]
+ /// use kernel::{types::Opaque, init::{self, init_from_closure}};
+ /// struct Foo {
+ /// buf: [u8; 1_000_000],
+@@ -993,7 +990,7 @@ fn chain<F>(self, f: F) -> ChainInit<Self, F, T, E>
+ }
+
+ /// An initializer returned by [`Init::chain`].
+-pub struct ChainInit<I, F, T: ?Sized, E>(I, F, __internal::Invariant<(E, Box<T>)>);
++pub struct ChainInit<I, F, T: ?Sized, E>(I, F, __internal::Invariant<(E, KBox<T>)>);
+
+ // SAFETY: The `__init` function is implemented such that it
+ // - returns `Ok(())` on successful initialization,
+@@ -1077,8 +1074,9 @@ pub fn uninit<T, E>() -> impl Init<MaybeUninit<T>, E> {
+ /// # Examples
+ ///
+ /// ```rust
+-/// use kernel::{error::Error, init::init_array_from_fn};
+-/// let array: Box<[usize; 1_000]> = Box::init::<Error>(init_array_from_fn(|i| i), GFP_KERNEL).unwrap();
++/// use kernel::{alloc::KBox, error::Error, init::init_array_from_fn};
++/// let array: KBox<[usize; 1_000]> =
++/// KBox::init::<Error>(init_array_from_fn(|i| i), GFP_KERNEL).unwrap();
+ /// assert_eq!(array.len(), 1_000);
+ /// ```
+ pub fn init_array_from_fn<I, const N: usize, T, E>(
+@@ -1162,6 +1160,7 @@ pub fn pin_init_array_from_fn<I, const N: usize, T, E>(
+ // SAFETY: Every type can be initialized by-value.
+ unsafe impl<T, E> Init<T, E> for T {
+ unsafe fn __init(self, slot: *mut T) -> Result<(), E> {
++ // SAFETY: TODO.
+ unsafe { slot.write(self) };
+ Ok(())
+ }
+@@ -1170,6 +1169,7 @@ unsafe fn __init(self, slot: *mut T) -> Result<(), E> {
+ // SAFETY: Every type can be initialized by-value. `__pinned_init` calls `__init`.
+ unsafe impl<T, E> PinInit<T, E> for T {
+ unsafe fn __pinned_init(self, slot: *mut T) -> Result<(), E> {
++ // SAFETY: TODO.
+ unsafe { self.__init(slot) }
+ }
+ }
+@@ -1243,26 +1243,6 @@ fn try_init<E>(init: impl Init<T, E>, flags: Flags) -> Result<Self, E>
+ }
+ }
+
+-impl<T> InPlaceInit<T> for Box<T> {
+- type PinnedSelf = Pin<Self>;
+-
+- #[inline]
+- fn try_pin_init<E>(init: impl PinInit<T, E>, flags: Flags) -> Result<Self::PinnedSelf, E>
+- where
+- E: From<AllocError>,
+- {
+- <Box<_> as BoxExt<_>>::new_uninit(flags)?.write_pin_init(init)
+- }
+-
+- #[inline]
+- fn try_init<E>(init: impl Init<T, E>, flags: Flags) -> Result<Self, E>
+- where
+- E: From<AllocError>,
+- {
+- <Box<_> as BoxExt<_>>::new_uninit(flags)?.write_init(init)
+- }
+-}
+-
+ impl<T> InPlaceInit<T> for UniqueArc<T> {
+ type PinnedSelf = Pin<Self>;
+
+@@ -1299,28 +1279,6 @@ pub trait InPlaceWrite<T> {
+ fn write_pin_init<E>(self, init: impl PinInit<T, E>) -> Result<Pin<Self::Initialized>, E>;
+ }
+
+-impl<T> InPlaceWrite<T> for Box<MaybeUninit<T>> {
+- type Initialized = Box<T>;
+-
+- fn write_init<E>(mut self, init: impl Init<T, E>) -> Result<Self::Initialized, E> {
+- let slot = self.as_mut_ptr();
+- // SAFETY: When init errors/panics, slot will get deallocated but not dropped,
+- // slot is valid.
+- unsafe { init.__init(slot)? };
+- // SAFETY: All fields have been initialized.
+- Ok(unsafe { self.assume_init() })
+- }
+-
+- fn write_pin_init<E>(mut self, init: impl PinInit<T, E>) -> Result<Pin<Self::Initialized>, E> {
+- let slot = self.as_mut_ptr();
+- // SAFETY: When init errors/panics, slot will get deallocated but not dropped,
+- // slot is valid and will not be moved, because we pin it later.
+- unsafe { init.__pinned_init(slot)? };
+- // SAFETY: All fields have been initialized.
+- Ok(unsafe { self.assume_init() }.into())
+- }
+-}
+-
+ impl<T> InPlaceWrite<T> for UniqueArc<MaybeUninit<T>> {
+ type Initialized = UniqueArc<T>;
+
+@@ -1411,6 +1369,7 @@ pub fn zeroed<T: Zeroable>() -> impl Init<T> {
+
+ macro_rules! impl_zeroable {
+ ($($({$($generics:tt)*})? $t:ty, )*) => {
++ // SAFETY: Safety comments written in the macro invocation.
+ $(unsafe impl$($($generics)*)? Zeroable for $t {})*
+ };
+ }
+@@ -1451,7 +1410,7 @@ macro_rules! impl_zeroable {
+ //
+ // In this case we are allowed to use `T: ?Sized`, since all zeros is the `None` variant.
+ {<T: ?Sized>} Option<NonNull<T>>,
+- {<T: ?Sized>} Option<Box<T>>,
++ {<T: ?Sized>} Option<KBox<T>>,
+
+ // SAFETY: `null` pointer is valid.
+ //
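
With the `Box`/`BoxExt` impls removed, `KBox` (together with `UniqueArc` and
`Arc`) is the boxed smart pointer that implements `InPlaceInit`, matching the
documentation updates above. A minimal sketch in the style of those examples,
assuming a `Foo` with a `new() -> impl PinInit<Self, Error>` constructor:

    let foo: Result<Pin<KBox<Foo>>> = KBox::pin_init(Foo::new(), GFP_KERNEL);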
+diff --git a/rust/kernel/init/__internal.rs b/rust/kernel/init/__internal.rs
+index 13cefd37512f8d..74329cc3262c05 100644
+--- a/rust/kernel/init/__internal.rs
++++ b/rust/kernel/init/__internal.rs
+@@ -15,9 +15,10 @@
+ /// [this table]: https://doc.rust-lang.org/nomicon/phantom-data.html#table-of-phantomdata-patterns
+ pub(super) type Invariant<T> = PhantomData<fn(*mut T) -> *mut T>;
+
+-/// This is the module-internal type implementing `PinInit` and `Init`. It is unsafe to create this
+-/// type, since the closure needs to fulfill the same safety requirement as the
+-/// `__pinned_init`/`__init` functions.
++/// Module-internal type implementing `PinInit` and `Init`.
++///
++/// It is unsafe to create this type, since the closure needs to fulfill the same safety
++/// requirement as the `__pinned_init`/`__init` functions.
+ pub(crate) struct InitClosure<F, T: ?Sized, E>(pub(crate) F, pub(crate) Invariant<(E, T)>);
+
+ // SAFETY: While constructing the `InitClosure`, the user promised that it upholds the
+@@ -53,6 +54,7 @@ unsafe fn __pinned_init(self, slot: *mut T) -> Result<(), E> {
+ pub unsafe trait HasPinData {
+ type PinData: PinData;
+
++ #[expect(clippy::missing_safety_doc)]
+ unsafe fn __pin_data() -> Self::PinData;
+ }
+
+@@ -82,6 +84,7 @@ fn make_closure<F, O, E>(self, f: F) -> F
+ pub unsafe trait HasInitData {
+ type InitData: InitData;
+
++ #[expect(clippy::missing_safety_doc)]
+ unsafe fn __init_data() -> Self::InitData;
+ }
+
+@@ -102,7 +105,7 @@ fn make_closure<F, O, E>(self, f: F) -> F
+ }
+ }
+
+-pub struct AllData<T: ?Sized>(PhantomData<fn(Box<T>) -> Box<T>>);
++pub struct AllData<T: ?Sized>(PhantomData<fn(KBox<T>) -> KBox<T>>);
+
+ impl<T: ?Sized> Clone for AllData<T> {
+ fn clone(&self) -> Self {
+@@ -112,10 +115,12 @@ fn clone(&self) -> Self {
+
+ impl<T: ?Sized> Copy for AllData<T> {}
+
++// SAFETY: TODO.
+ unsafe impl<T: ?Sized> InitData for AllData<T> {
+ type Datee = T;
+ }
+
++// SAFETY: TODO.
+ unsafe impl<T: ?Sized> HasInitData for T {
+ type InitData = AllData<T>;
+
+diff --git a/rust/kernel/init/macros.rs b/rust/kernel/init/macros.rs
+index 9a0c4650ef676d..1fd146a8324165 100644
+--- a/rust/kernel/init/macros.rs
++++ b/rust/kernel/init/macros.rs
+@@ -182,13 +182,13 @@
+ //! // Normally `Drop` bounds do not have the correct semantics, but for this purpose they do
+ //! // (normally people want to know if a type has any kind of drop glue at all, here we want
+ //! // to know if it has any kind of custom drop glue, which is exactly what this bound does).
+-//! #[allow(drop_bounds)]
++//! #[expect(drop_bounds)]
+ //! impl<T: ::core::ops::Drop> MustNotImplDrop for T {}
+ //! impl<T> MustNotImplDrop for Bar<T> {}
+ //! // Here comes a convenience check, if one implemented `PinnedDrop`, but forgot to add it to
+ //! // `#[pin_data]`, then this will error with the same mechanic as above, this is not needed
+ //! // for safety, but a good sanity check, since no normal code calls `PinnedDrop::drop`.
+-//! #[allow(non_camel_case_types)]
++//! #[expect(non_camel_case_types)]
+ //! trait UselessPinnedDropImpl_you_need_to_specify_PinnedDrop {}
+ //! impl<
+ //! T: ::kernel::init::PinnedDrop,
+@@ -513,6 +513,7 @@ fn drop($($sig:tt)*) {
+ }
+ ),
+ ) => {
++ // SAFETY: TODO.
+ unsafe $($impl_sig)* {
+ // Inherit all attributes and the type/ident tokens for the signature.
+ $(#[$($attr)*])*
+@@ -872,6 +873,7 @@ unsafe fn __pin_data() -> Self::PinData {
+ }
+ }
+
++ // SAFETY: TODO.
+ unsafe impl<$($impl_generics)*>
+ $crate::init::__internal::PinData for __ThePinData<$($ty_generics)*>
+ where $($whr)*
+@@ -923,14 +925,14 @@ impl<'__pin, $($impl_generics)*> ::core::marker::Unpin for $name<$($ty_generics)
+ // `Drop`. Additionally we will implement this trait for the struct leading to a conflict,
+ // if it also implements `Drop`
+ trait MustNotImplDrop {}
+- #[allow(drop_bounds)]
++ #[expect(drop_bounds)]
+ impl<T: ::core::ops::Drop> MustNotImplDrop for T {}
+ impl<$($impl_generics)*> MustNotImplDrop for $name<$($ty_generics)*>
+ where $($whr)* {}
+ // We also take care to prevent users from writing a useless `PinnedDrop` implementation.
+ // They might implement `PinnedDrop` correctly for the struct, but forget to give
+ // `PinnedDrop` as the parameter to `#[pin_data]`.
+- #[allow(non_camel_case_types)]
++ #[expect(non_camel_case_types)]
+ trait UselessPinnedDropImpl_you_need_to_specify_PinnedDrop {}
+ impl<T: $crate::init::PinnedDrop>
+ UselessPinnedDropImpl_you_need_to_specify_PinnedDrop for T {}
+@@ -987,6 +989,7 @@ fn drop(&mut self) {
+ //
+ // The functions are `unsafe` to prevent accidentally calling them.
+ #[allow(dead_code)]
++ #[expect(clippy::missing_safety_doc)]
+ impl<$($impl_generics)*> $pin_data<$($ty_generics)*>
+ where $($whr)*
+ {
+@@ -997,6 +1000,7 @@ impl<$($impl_generics)*> $pin_data<$($ty_generics)*>
+ slot: *mut $p_type,
+ init: impl $crate::init::PinInit<$p_type, E>,
+ ) -> ::core::result::Result<(), E> {
++ // SAFETY: TODO.
+ unsafe { $crate::init::PinInit::__pinned_init(init, slot) }
+ }
+ )*
+@@ -1007,6 +1011,7 @@ impl<$($impl_generics)*> $pin_data<$($ty_generics)*>
+ slot: *mut $type,
+ init: impl $crate::init::Init<$type, E>,
+ ) -> ::core::result::Result<(), E> {
++ // SAFETY: TODO.
+ unsafe { $crate::init::Init::__init(init, slot) }
+ }
+ )*
+@@ -1121,6 +1126,8 @@ macro_rules! __init_internal {
+ // no possibility of returning without `unsafe`.
+ struct __InitOk;
+ // Get the data about fields from the supplied type.
++ //
++ // SAFETY: TODO.
+ let data = unsafe {
+ use $crate::init::__internal::$has_data;
+ // Here we abuse `paste!` to retokenize `$t`. Declarative macros have some internal
+@@ -1176,6 +1183,7 @@ fn assert_zeroable<T: $crate::init::Zeroable>(_: *mut T) {}
+ let init = move |slot| -> ::core::result::Result<(), $err> {
+ init(slot).map(|__InitOk| ())
+ };
++ // SAFETY: TODO.
+ let init = unsafe { $crate::init::$construct_closure::<_, $err>(init) };
+ init
+ }};
+@@ -1324,6 +1332,8 @@ fn assert_zeroable<T: $crate::init::Zeroable>(_: *mut T) {}
+ // Endpoint, nothing more to munch, create the initializer.
+ // Since we are in the closure that is never called, this will never get executed.
+ // We abuse `slot` to get the correct type inference here:
++ //
++ // SAFETY: TODO.
+ unsafe {
+ // Here we abuse `paste!` to retokenize `$t`. Declarative macros have some internal
+ // information that is associated to already parsed fragments, so a path fragment
+diff --git a/rust/kernel/ioctl.rs b/rust/kernel/ioctl.rs
+index cfa7d080b53193..2fc7662339e54b 100644
+--- a/rust/kernel/ioctl.rs
++++ b/rust/kernel/ioctl.rs
+@@ -4,7 +4,7 @@
+ //!
+ //! C header: [`include/asm-generic/ioctl.h`](srctree/include/asm-generic/ioctl.h)
+
+-#![allow(non_snake_case)]
++#![expect(non_snake_case)]
+
+ use crate::build_assert;
+
+diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
+index e936254531fd0a..d764cb7ff5d785 100644
+--- a/rust/kernel/lib.rs
++++ b/rust/kernel/lib.rs
+@@ -15,7 +15,8 @@
+ #![feature(arbitrary_self_types)]
+ #![feature(coerce_unsized)]
+ #![feature(dispatch_from_dyn)]
+-#![feature(new_uninit)]
++#![feature(inline_const)]
++#![feature(lint_reasons)]
+ #![feature(unsize)]
+
+ // Ensure conditional compilation based on the kernel configuration works;
+@@ -26,6 +27,8 @@
+ // Allow proc-macros to refer to `::kernel` inside the `kernel` crate (this crate).
+ extern crate self as kernel;
+
++pub use ffi;
++
+ pub mod alloc;
+ #[cfg(CONFIG_BLOCK)]
+ pub mod block;
+diff --git a/rust/kernel/list.rs b/rust/kernel/list.rs
+index 5b4aec29eb6753..fb93330f4af48c 100644
+--- a/rust/kernel/list.rs
++++ b/rust/kernel/list.rs
+@@ -354,6 +354,7 @@ pub fn pop_front(&mut self) -> Option<ListArc<T, ID>> {
+ ///
+ /// `item` must not be in a different linked list (with the same id).
+ pub unsafe fn remove(&mut self, item: &T) -> Option<ListArc<T, ID>> {
++ // SAFETY: TODO.
+ let mut item = unsafe { ListLinks::fields(T::view_links(item)) };
+ // SAFETY: The user provided a reference, and reference are never dangling.
+ //
+diff --git a/rust/kernel/list/arc_field.rs b/rust/kernel/list/arc_field.rs
+index 2330f673427ab0..c4b9dd50398264 100644
+--- a/rust/kernel/list/arc_field.rs
++++ b/rust/kernel/list/arc_field.rs
+@@ -56,7 +56,7 @@ pub unsafe fn assert_ref(&self) -> &T {
+ ///
+ /// The caller must have mutable access to the `ListArc<ID>` containing the struct with this
+ /// field for the duration of the returned reference.
+- #[allow(clippy::mut_from_ref)]
++ #[expect(clippy::mut_from_ref)]
+ pub unsafe fn assert_mut(&self) -> &mut T {
+ // SAFETY: The caller has exclusive access to the `ListArc`, so they also have exclusive
+ // access to this field.
+diff --git a/rust/kernel/net/phy.rs b/rust/kernel/net/phy.rs
+index 910ce867480a8a..beb62ec712c37a 100644
+--- a/rust/kernel/net/phy.rs
++++ b/rust/kernel/net/phy.rs
+@@ -314,7 +314,7 @@ impl<T: Driver> Adapter<T> {
+ /// `phydev` must be passed by the corresponding callback in `phy_driver`.
+ unsafe extern "C" fn soft_reset_callback(
+ phydev: *mut bindings::phy_device,
+- ) -> core::ffi::c_int {
++ ) -> crate::ffi::c_int {
+ from_result(|| {
+ // SAFETY: This callback is called only in contexts
+ // where we hold `phy_device->lock`, so the accessors on
+@@ -328,7 +328,7 @@ impl<T: Driver> Adapter<T> {
+ /// # Safety
+ ///
+ /// `phydev` must be passed by the corresponding callback in `phy_driver`.
+- unsafe extern "C" fn probe_callback(phydev: *mut bindings::phy_device) -> core::ffi::c_int {
++ unsafe extern "C" fn probe_callback(phydev: *mut bindings::phy_device) -> crate::ffi::c_int {
+ from_result(|| {
+ // SAFETY: This callback is called only in contexts
+ // where we can exclusively access `phy_device` because
+@@ -345,7 +345,7 @@ impl<T: Driver> Adapter<T> {
+ /// `phydev` must be passed by the corresponding callback in `phy_driver`.
+ unsafe extern "C" fn get_features_callback(
+ phydev: *mut bindings::phy_device,
+- ) -> core::ffi::c_int {
++ ) -> crate::ffi::c_int {
+ from_result(|| {
+ // SAFETY: This callback is called only in contexts
+ // where we hold `phy_device->lock`, so the accessors on
+@@ -359,7 +359,7 @@ impl<T: Driver> Adapter<T> {
+ /// # Safety
+ ///
+ /// `phydev` must be passed by the corresponding callback in `phy_driver`.
+- unsafe extern "C" fn suspend_callback(phydev: *mut bindings::phy_device) -> core::ffi::c_int {
++ unsafe extern "C" fn suspend_callback(phydev: *mut bindings::phy_device) -> crate::ffi::c_int {
+ from_result(|| {
+ // SAFETY: The C core code ensures that the accessors on
+ // `Device` are okay to call even though `phy_device->lock`
+@@ -373,7 +373,7 @@ impl<T: Driver> Adapter<T> {
+ /// # Safety
+ ///
+ /// `phydev` must be passed by the corresponding callback in `phy_driver`.
+- unsafe extern "C" fn resume_callback(phydev: *mut bindings::phy_device) -> core::ffi::c_int {
++ unsafe extern "C" fn resume_callback(phydev: *mut bindings::phy_device) -> crate::ffi::c_int {
+ from_result(|| {
+ // SAFETY: The C core code ensures that the accessors on
+ // `Device` are okay to call even though `phy_device->lock`
+@@ -389,7 +389,7 @@ impl<T: Driver> Adapter<T> {
+ /// `phydev` must be passed by the corresponding callback in `phy_driver`.
+ unsafe extern "C" fn config_aneg_callback(
+ phydev: *mut bindings::phy_device,
+- ) -> core::ffi::c_int {
++ ) -> crate::ffi::c_int {
+ from_result(|| {
+ // SAFETY: This callback is called only in contexts
+ // where we hold `phy_device->lock`, so the accessors on
+@@ -405,7 +405,7 @@ impl<T: Driver> Adapter<T> {
+ /// `phydev` must be passed by the corresponding callback in `phy_driver`.
+ unsafe extern "C" fn read_status_callback(
+ phydev: *mut bindings::phy_device,
+- ) -> core::ffi::c_int {
++ ) -> crate::ffi::c_int {
+ from_result(|| {
+ // SAFETY: This callback is called only in contexts
+ // where we hold `phy_device->lock`, so the accessors on
+@@ -421,7 +421,7 @@ impl<T: Driver> Adapter<T> {
+ /// `phydev` must be passed by the corresponding callback in `phy_driver`.
+ unsafe extern "C" fn match_phy_device_callback(
+ phydev: *mut bindings::phy_device,
+- ) -> core::ffi::c_int {
++ ) -> crate::ffi::c_int {
+ // SAFETY: This callback is called only in contexts
+ // where we hold `phy_device->lock`, so the accessors on
+ // `Device` are okay to call.
+diff --git a/rust/kernel/prelude.rs b/rust/kernel/prelude.rs
+index 4571daec0961bb..8bdab9aa0d16bf 100644
+--- a/rust/kernel/prelude.rs
++++ b/rust/kernel/prelude.rs
+@@ -14,10 +14,7 @@
+ #[doc(no_inline)]
+ pub use core::pin::Pin;
+
+-pub use crate::alloc::{box_ext::BoxExt, flags::*, vec_ext::VecExt};
+-
+-#[doc(no_inline)]
+-pub use alloc::{boxed::Box, vec::Vec};
++pub use crate::alloc::{flags::*, Box, KBox, KVBox, KVVec, KVec, VBox, VVec, Vec};
+
+ #[doc(no_inline)]
+ pub use macros::{module, pin_data, pinned_drop, vtable, Zeroable};
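
After this change, `kernel::prelude` hands out the kernel's own
allocation-aware types rather than re-exporting `alloc::boxed::Box` and
`alloc::vec::Vec`. A rough usage sketch (assuming the re-exports shown above):

    use kernel::prelude::*;

    fn demo() -> Result {
        // `KBox` and `KVec` arrive via the prelude and take explicit
        // allocation flags.
        let b = KBox::new(42u32, GFP_KERNEL)?;
        let mut v = KVec::new();
        v.push(*b, GFP_KERNEL)?;
        Ok(())
    }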
+diff --git a/rust/kernel/print.rs b/rust/kernel/print.rs
+index 508b0221256c97..a28077a7cb3011 100644
+--- a/rust/kernel/print.rs
++++ b/rust/kernel/print.rs
+@@ -14,6 +14,7 @@
+ use crate::str::RawFormatter;
+
+ // Called from `vsprintf` with format specifier `%pA`.
++#[expect(clippy::missing_safety_doc)]
+ #[no_mangle]
+ unsafe extern "C" fn rust_fmt_argument(
+ buf: *mut c_char,
+@@ -23,6 +24,7 @@
+ use fmt::Write;
+ // SAFETY: The C contract guarantees that `buf` is valid if it's less than `end`.
+ let mut w = unsafe { RawFormatter::from_ptrs(buf.cast(), end.cast()) };
++ // SAFETY: TODO.
+ let _ = w.write_fmt(unsafe { *(ptr as *const fmt::Arguments<'_>) });
+ w.pos().cast()
+ }
+@@ -102,6 +104,7 @@ pub unsafe fn call_printk(
+ ) {
+ // `_printk` does not seem to fail in any path.
+ #[cfg(CONFIG_PRINTK)]
++ // SAFETY: TODO.
+ unsafe {
+ bindings::_printk(
+ format_string.as_ptr() as _,
+@@ -137,7 +140,7 @@ pub fn call_printk_cont(args: fmt::Arguments<'_>) {
+ #[doc(hidden)]
+ #[cfg(not(testlib))]
+ #[macro_export]
+-#[allow(clippy::crate_in_macro_def)]
++#[expect(clippy::crate_in_macro_def)]
+ macro_rules! print_macro (
+ // The non-continuation cases (most of them, e.g. `INFO`).
+ ($format_string:path, false, $($arg:tt)+) => (
+diff --git a/rust/kernel/rbtree.rs b/rust/kernel/rbtree.rs
+index 7543378d372927..571e27efe54489 100644
+--- a/rust/kernel/rbtree.rs
++++ b/rust/kernel/rbtree.rs
+@@ -7,7 +7,6 @@
+ //! Reference: <https://docs.kernel.org/core-api/rbtree.html>
+
+ use crate::{alloc::Flags, bindings, container_of, error::Result, prelude::*};
+-use alloc::boxed::Box;
+ use core::{
+ cmp::{Ord, Ordering},
+ marker::PhantomData,
+@@ -497,7 +496,7 @@ fn drop(&mut self) {
+ // but it is not observable. The loop invariant is still maintained.
+
+ // SAFETY: `this` is valid per the loop invariant.
+- unsafe { drop(Box::from_raw(this.cast_mut())) };
++ unsafe { drop(KBox::from_raw(this.cast_mut())) };
+ }
+ }
+ }
+@@ -764,7 +763,7 @@ pub fn remove_current(self) -> (Option<Self>, RBTreeNode<K, V>) {
+ // point to the links field of `Node<K, V>` objects.
+ let this = unsafe { container_of!(self.current.as_ptr(), Node<K, V>, links) }.cast_mut();
+ // SAFETY: `this` is valid by the type invariants as described above.
+- let node = unsafe { Box::from_raw(this) };
++ let node = unsafe { KBox::from_raw(this) };
+ let node = RBTreeNode { node };
+ // SAFETY: The reference to the tree used to create the cursor outlives the cursor, so
+ // the tree cannot change. By the tree invariant, all nodes are valid.
+@@ -809,7 +808,7 @@ fn remove_neighbor(&mut self, direction: Direction) -> Option<RBTreeNode<K, V>>
+ // point to the links field of `Node<K, V>` objects.
+ let this = unsafe { container_of!(neighbor, Node<K, V>, links) }.cast_mut();
+ // SAFETY: `this` is valid by the type invariants as described above.
+- let node = unsafe { Box::from_raw(this) };
++ let node = unsafe { KBox::from_raw(this) };
+ return Some(RBTreeNode { node });
+ }
+ None
+@@ -1038,7 +1037,7 @@ fn next(&mut self) -> Option<Self::Item> {
+ /// It contains the memory needed to hold a node that can be inserted into a red-black tree. One
+ /// can be obtained by directly allocating it ([`RBTreeNodeReservation::new`]).
+ pub struct RBTreeNodeReservation<K, V> {
+- node: Box<MaybeUninit<Node<K, V>>>,
++ node: KBox<MaybeUninit<Node<K, V>>>,
+ }
+
+ impl<K, V> RBTreeNodeReservation<K, V> {
+@@ -1046,7 +1045,7 @@ impl<K, V> RBTreeNodeReservation<K, V> {
+ /// call to [`RBTree::insert`].
+ pub fn new(flags: Flags) -> Result<RBTreeNodeReservation<K, V>> {
+ Ok(RBTreeNodeReservation {
+- node: <Box<_> as BoxExt<_>>::new_uninit(flags)?,
++ node: KBox::new_uninit(flags)?,
+ })
+ }
+ }
+@@ -1062,14 +1061,15 @@ impl<K, V> RBTreeNodeReservation<K, V> {
+ /// Initialises a node reservation.
+ ///
+ /// It then becomes an [`RBTreeNode`] that can be inserted into a tree.
+- pub fn into_node(mut self, key: K, value: V) -> RBTreeNode<K, V> {
+- self.node.write(Node {
+- key,
+- value,
+- links: bindings::rb_node::default(),
+- });
+- // SAFETY: We just wrote to it.
+- let node = unsafe { self.node.assume_init() };
++ pub fn into_node(self, key: K, value: V) -> RBTreeNode<K, V> {
++ let node = KBox::write(
++ self.node,
++ Node {
++ key,
++ value,
++ links: bindings::rb_node::default(),
++ },
++ );
+ RBTreeNode { node }
+ }
+ }
+@@ -1079,7 +1079,7 @@ pub fn into_node(mut self, key: K, value: V) -> RBTreeNode<K, V> {
+ /// The node is fully initialised (with key and value) and can be inserted into a tree without any
+ /// extra allocations or failure paths.
+ pub struct RBTreeNode<K, V> {
+- node: Box<Node<K, V>>,
++ node: KBox<Node<K, V>>,
+ }
+
+ impl<K, V> RBTreeNode<K, V> {
+@@ -1091,7 +1091,9 @@ pub fn new(key: K, value: V, flags: Flags) -> Result<RBTreeNode<K, V>> {
+
+ /// Get the key and value from inside the node.
+ pub fn to_key_value(self) -> (K, V) {
+- (self.node.key, self.node.value)
++ let node = KBox::into_inner(self.node);
++
++ (node.key, node.value)
+ }
+ }
+
+@@ -1113,7 +1115,7 @@ impl<K, V> RBTreeNode<K, V> {
+ /// may be freed (but only for the key/value; memory for the node itself is kept for reuse).
+ pub fn into_reservation(self) -> RBTreeNodeReservation<K, V> {
+ RBTreeNodeReservation {
+- node: Box::drop_contents(self.node),
++ node: KBox::drop_contents(self.node),
+ }
+ }
+ }
+@@ -1164,7 +1166,7 @@ impl<'a, K, V> RawVacantEntry<'a, K, V> {
+ /// The `node` must have a key such that inserting it here does not break the ordering of this
+ /// [`RBTree`].
+ fn insert(self, node: RBTreeNode<K, V>) -> &'a mut V {
+- let node = Box::into_raw(node.node);
++ let node = KBox::into_raw(node.node);
+
+ // SAFETY: `node` is valid at least until we call `Box::from_raw`, which only happens when
+ // the node is removed or replaced.
+@@ -1238,21 +1240,24 @@ pub fn remove_node(self) -> RBTreeNode<K, V> {
+ // SAFETY: The node was a node in the tree, but we removed it, so we can convert it
+ // back into a box.
+ node: unsafe {
+- Box::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut())
++ KBox::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut())
+ },
+ }
+ }
+
+ /// Takes the value of the entry out of the map, and returns it.
+ pub fn remove(self) -> V {
+- self.remove_node().node.value
++ let rb_node = self.remove_node();
++ let node = KBox::into_inner(rb_node.node);
++
++ node.value
+ }
+
+ /// Swap the current node for the provided node.
+ ///
+ /// The key of both nodes must be equal.
+ fn replace(self, node: RBTreeNode<K, V>) -> RBTreeNode<K, V> {
+- let node = Box::into_raw(node.node);
++ let node = KBox::into_raw(node.node);
+
+ // SAFETY: `node` is valid at least until we call `Box::from_raw`, which only happens when
+ // the node is removed or replaced.
+@@ -1268,7 +1273,7 @@ fn replace(self, node: RBTreeNode<K, V>) -> RBTreeNode<K, V> {
+ // - `self.node_ptr` produces a valid pointer to a node in the tree.
+ // - Now that we removed this entry from the tree, we can convert the node to a box.
+ let old_node =
+- unsafe { Box::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut()) };
++ unsafe { KBox::from_raw(container_of!(self.node_links, Node<K, V>, links).cast_mut()) };
+
+ RBTreeNode { node: old_node }
+ }
+diff --git a/rust/kernel/std_vendor.rs b/rust/kernel/std_vendor.rs
+index 67bf9d37ddb557..8b4872b48e9775 100644
+--- a/rust/kernel/std_vendor.rs
++++ b/rust/kernel/std_vendor.rs
+@@ -1,5 +1,7 @@
+ // SPDX-License-Identifier: Apache-2.0 OR MIT
+
++//! Rust standard library vendored code.
++//!
+ //! The contents of this file come from the Rust standard library, hosted in
+ //! the <https://github.com/rust-lang/rust> repository, licensed under
+ //! "Apache-2.0 OR MIT" and adapted for kernel use. For copyright details,
+@@ -14,7 +16,7 @@
+ ///
+ /// ```rust
+ /// let a = 2;
+-/// # #[allow(clippy::dbg_macro)]
++/// # #[expect(clippy::disallowed_macros)]
+ /// let b = dbg!(a * 2) + 1;
+ /// // ^-- prints: [src/main.rs:2] a * 2 = 4
+ /// assert_eq!(b, 5);
+@@ -52,7 +54,7 @@
+ /// With a method call:
+ ///
+ /// ```rust
+-/// # #[allow(clippy::dbg_macro)]
++/// # #[expect(clippy::disallowed_macros)]
+ /// fn foo(n: usize) {
+ /// if dbg!(n.checked_sub(4)).is_some() {
+ /// // ...
+@@ -71,7 +73,7 @@
+ /// Naive factorial implementation:
+ ///
+ /// ```rust
+-/// # #[allow(clippy::dbg_macro)]
++/// # #[expect(clippy::disallowed_macros)]
+ /// # {
+ /// fn factorial(n: u32) -> u32 {
+ /// if dbg!(n <= 1) {
+@@ -118,7 +120,7 @@
+ /// a tuple (and return it, too):
+ ///
+ /// ```
+-/// # #[allow(clippy::dbg_macro)]
++/// # #![expect(clippy::disallowed_macros)]
+ /// assert_eq!(dbg!(1usize, 2u32), (1, 2));
+ /// ```
+ ///
+@@ -127,7 +129,7 @@
+ /// invocations. You can use a 1-tuple directly if you need one:
+ ///
+ /// ```
+-/// # #[allow(clippy::dbg_macro)]
++/// # #[expect(clippy::disallowed_macros)]
+ /// # {
+ /// assert_eq!(1, dbg!(1u32,)); // trailing comma ignored
+ /// assert_eq!((1,), dbg!((1u32,))); // 1-tuple
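
The `std_vendor.rs` hunks swap `#[allow(...)]` for `#[expect(...)]` in the `dbg!` doctests. Unlike `allow`, an `expect` attribute (from the unstable `lint_reasons` feature, which the `scripts/Makefile.build` hunk near the end of this patch adds to `rust_allowed_features`) itself raises a warning if the named lint never fires, so stale suppressions cannot linger. A small illustration of the difference:

    // `expect` is a self-checking `allow`: if the lint it names does not
    // trigger inside the item, the compiler warns about the attribute.
    #[expect(unused_variables)]
    fn demo() {
        let x = 1; // lint fires here, so the expectation is fulfilled
    }
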
+diff --git a/rust/kernel/str.rs b/rust/kernel/str.rs
+index bb8d4f41475b59..d04c12a1426d1c 100644
+--- a/rust/kernel/str.rs
++++ b/rust/kernel/str.rs
+@@ -2,8 +2,7 @@
+
+ //! String representations.
+
+-use crate::alloc::{flags::*, vec_ext::VecExt, AllocError};
+-use alloc::vec::Vec;
++use crate::alloc::{flags::*, AllocError, KVec};
+ use core::fmt::{self, Write};
+ use core::ops::{self, Deref, DerefMut, Index};
+
+@@ -162,10 +161,10 @@ pub const fn len(&self) -> usize {
+ /// Returns the length of this string with `NUL`.
+ #[inline]
+ pub const fn len_with_nul(&self) -> usize {
+- // SAFETY: This is one of the invariant of `CStr`.
+- // We add a `unreachable_unchecked` here to hint the optimizer that
+- // the value returned from this function is non-zero.
+ if self.0.is_empty() {
++ // SAFETY: This is one of the invariants of `CStr`.
++ // We add an `unreachable_unchecked` here to hint to the optimizer that
++ // the value returned from this function is non-zero.
+ unsafe { core::hint::unreachable_unchecked() };
+ }
+ self.0.len()
+@@ -185,7 +184,7 @@ pub const fn is_empty(&self) -> bool {
+ /// last at least `'a`. When `CStr` is alive, the memory pointed by `ptr`
+ /// must not be mutated.
+ #[inline]
+- pub unsafe fn from_char_ptr<'a>(ptr: *const core::ffi::c_char) -> &'a Self {
++ pub unsafe fn from_char_ptr<'a>(ptr: *const crate::ffi::c_char) -> &'a Self {
+ // SAFETY: The safety precondition guarantees `ptr` is a valid pointer
+ // to a `NUL`-terminated C string.
+ let len = unsafe { bindings::strlen(ptr) } + 1;
+@@ -248,7 +247,7 @@ pub unsafe fn from_bytes_with_nul_unchecked_mut(bytes: &mut [u8]) -> &mut CStr {
+
+ /// Returns a C pointer to the string.
+ #[inline]
+- pub const fn as_char_ptr(&self) -> *const core::ffi::c_char {
++ pub const fn as_char_ptr(&self) -> *const crate::ffi::c_char {
+ self.0.as_ptr() as _
+ }
+
+@@ -301,6 +300,7 @@ pub fn to_str(&self) -> Result<&str, core::str::Utf8Error> {
+ /// ```
+ #[inline]
+ pub unsafe fn as_str_unchecked(&self) -> &str {
++ // SAFETY: TODO.
+ unsafe { core::str::from_utf8_unchecked(self.as_bytes()) }
+ }
+
+@@ -524,7 +524,28 @@ macro_rules! c_str {
+ #[cfg(test)]
+ mod tests {
+ use super::*;
+- use alloc::format;
++
++ struct String(CString);
++
++ impl String {
++ fn from_fmt(args: fmt::Arguments<'_>) -> Self {
++ String(CString::try_from_fmt(args).unwrap())
++ }
++ }
++
++ impl Deref for String {
++ type Target = str;
++
++ fn deref(&self) -> &str {
++ self.0.to_str().unwrap()
++ }
++ }
++
++ macro_rules! format {
++ ($($f:tt)*) => ({
++ &*String::from_fmt(kernel::fmt!($($f)*))
++ })
++ }
+
+ const ALL_ASCII_CHARS: &'static str =
+ "\\x01\\x02\\x03\\x04\\x05\\x06\\x07\\x08\\x09\\x0a\\x0b\\x0c\\x0d\\x0e\\x0f\
+@@ -790,7 +811,7 @@ fn write_str(&mut self, s: &str) -> fmt::Result {
+ /// assert_eq!(s.is_ok(), false);
+ /// ```
+ pub struct CString {
+- buf: Vec<u8>,
++ buf: KVec<u8>,
+ }
+
+ impl CString {
+@@ -803,7 +824,7 @@ pub fn try_from_fmt(args: fmt::Arguments<'_>) -> Result<Self, Error> {
+ let size = f.bytes_written();
+
+ // Allocate a vector with the required number of bytes, and write to it.
+- let mut buf = <Vec<_> as VecExt<_>>::with_capacity(size, GFP_KERNEL)?;
++ let mut buf = KVec::with_capacity(size, GFP_KERNEL)?;
+ // SAFETY: The buffer stored in `buf` is at least of size `size` and is valid for writes.
+ let mut f = unsafe { Formatter::from_buffer(buf.as_mut_ptr(), size) };
+ f.write_fmt(args)?;
+@@ -850,10 +871,9 @@ impl<'a> TryFrom<&'a CStr> for CString {
+ type Error = AllocError;
+
+ fn try_from(cstr: &'a CStr) -> Result<CString, AllocError> {
+- let mut buf = Vec::new();
++ let mut buf = KVec::new();
+
+- <Vec<_> as VecExt<_>>::extend_from_slice(&mut buf, cstr.as_bytes_with_nul(), GFP_KERNEL)
+- .map_err(|_| AllocError)?;
++ buf.extend_from_slice(cstr.as_bytes_with_nul(), GFP_KERNEL)?;
+
+ // INVARIANT: The `CStr` and `CString` types have the same invariants for
+ // the string data, and we copied it over without changes.
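
With `VecExt` gone, `CString` now stores a `KVec<u8>` and calls its inherent, flag-taking methods directly. Every allocating operation is fallible and carries GFP flags, as in this sketch (function name illustrative; assumes the kernel prelude):

    use kernel::prelude::*;

    fn copy_bytes(src: &[u8]) -> Result<KVec<u8>> {
        // Pre-size the buffer; allocation can fail, so both calls return
        // a Result instead of aborting on OOM.
        let mut buf = KVec::with_capacity(src.len(), GFP_KERNEL)?;
        buf.extend_from_slice(src, GFP_KERNEL)?;
        Ok(buf)
    }
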
+diff --git a/rust/kernel/sync/arc.rs b/rust/kernel/sync/arc.rs
+index 28743a7c74a847..fa4509406ee909 100644
+--- a/rust/kernel/sync/arc.rs
++++ b/rust/kernel/sync/arc.rs
+@@ -17,13 +17,12 @@
+ //! [`Arc`]: https://doc.rust-lang.org/std/sync/struct.Arc.html
+
+ use crate::{
+- alloc::{box_ext::BoxExt, AllocError, Flags},
++ alloc::{AllocError, Flags, KBox},
+ bindings,
+ init::{self, InPlaceInit, Init, PinInit},
+ try_init,
+ types::{ForeignOwnable, Opaque},
+ };
+-use alloc::boxed::Box;
+ use core::{
+ alloc::Layout,
+ fmt,
+@@ -201,11 +200,11 @@ pub fn new(contents: T, flags: Flags) -> Result<Self, AllocError> {
+ data: contents,
+ };
+
+- let inner = <Box<_> as BoxExt<_>>::new(value, flags)?;
++ let inner = KBox::new(value, flags)?;
+
+ // SAFETY: We just created `inner` with a reference count of 1, which is owned by the new
+ // `Arc` object.
+- Ok(unsafe { Self::from_inner(Box::leak(inner).into()) })
++ Ok(unsafe { Self::from_inner(KBox::leak(inner).into()) })
+ }
+ }
+
+@@ -333,12 +332,12 @@ pub fn into_unique_or_drop(self) -> Option<Pin<UniqueArc<T>>> {
+ impl<T: 'static> ForeignOwnable for Arc<T> {
+ type Borrowed<'a> = ArcBorrow<'a, T>;
+
+- fn into_foreign(self) -> *const core::ffi::c_void {
++ fn into_foreign(self) -> *const crate::ffi::c_void {
+ ManuallyDrop::new(self).ptr.as_ptr() as _
+ }
+
+- unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> ArcBorrow<'a, T> {
+- // SAFETY: By the safety requirement of this function, we know that `ptr` came from
++ unsafe fn borrow<'a>(ptr: *const crate::ffi::c_void) -> ArcBorrow<'a, T> {
++ // By the safety requirement of this function, we know that `ptr` came from
+ // a previous call to `Arc::into_foreign`.
+ let inner = NonNull::new(ptr as *mut ArcInner<T>).unwrap();
+
+@@ -347,7 +346,7 @@ unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> ArcBorrow<'a, T> {
+ unsafe { ArcBorrow::new(inner) }
+ }
+
+- unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self {
++ unsafe fn from_foreign(ptr: *const crate::ffi::c_void) -> Self {
+ // SAFETY: By the safety requirement of this function, we know that `ptr` came from
+ // a previous call to `Arc::into_foreign`, which guarantees that `ptr` is valid and
+ // holds a reference count increment that is transferrable to us.
+@@ -398,8 +397,8 @@ fn drop(&mut self) {
+ if is_zero {
+ // The count reached zero, we must free the memory.
+ //
+- // SAFETY: The pointer was initialised from the result of `Box::leak`.
+- unsafe { drop(Box::from_raw(self.ptr.as_ptr())) };
++ // SAFETY: The pointer was initialised from the result of `KBox::leak`.
++ unsafe { drop(KBox::from_raw(self.ptr.as_ptr())) };
+ }
+ }
+ }
+@@ -641,7 +640,7 @@ pub fn new(value: T, flags: Flags) -> Result<Self, AllocError> {
+ /// Tries to allocate a new [`UniqueArc`] instance whose contents are not initialised yet.
+ pub fn new_uninit(flags: Flags) -> Result<UniqueArc<MaybeUninit<T>>, AllocError> {
+ // INVARIANT: The refcount is initialised to a non-zero value.
+- let inner = Box::try_init::<AllocError>(
++ let inner = KBox::try_init::<AllocError>(
+ try_init!(ArcInner {
+ // SAFETY: There are no safety requirements for this FFI call.
+ refcount: Opaque::new(unsafe { bindings::REFCOUNT_INIT(1) }),
+@@ -651,8 +650,8 @@ pub fn new_uninit(flags: Flags) -> Result<UniqueArc<MaybeUninit<T>>, AllocError>
+ )?;
+ Ok(UniqueArc {
+ // INVARIANT: The newly-created object has a refcount of 1.
+- // SAFETY: The pointer from the `Box` is valid.
+- inner: unsafe { Arc::from_inner(Box::leak(inner).into()) },
++ // SAFETY: The pointer from the `KBox` is valid.
++ inner: unsafe { Arc::from_inner(KBox::leak(inner).into()) },
+ })
+ }
+ }
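
Besides the `KBox` switch, the `arc.rs` hunks respell `core::ffi::c_void` as `crate::ffi::c_void` in the `ForeignOwnable` impl, matching the kernel-private `ffi` module used throughout this patch. The trait's ownership round trip looks roughly like this (a sketch only; the function name is illustrative):

    use kernel::prelude::*;
    use kernel::sync::Arc;
    use kernel::types::ForeignOwnable;

    fn hand_to_c_and_back() -> Result {
        let data = Arc::new(10u32, GFP_KERNEL)?;
        // Hand ownership to the C side as an opaque pointer.
        let raw = data.into_foreign();
        // ... later, when C gives the pointer back:
        // SAFETY: `raw` came from `into_foreign` above and is reclaimed
        // exactly once.
        let data = unsafe { Arc::<u32>::from_foreign(raw) };
        drop(data);
        Ok(())
    }
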
+diff --git a/rust/kernel/sync/arc/std_vendor.rs b/rust/kernel/sync/arc/std_vendor.rs
+index a66a0c2831b3ed..11b3f4ecca5f79 100644
+--- a/rust/kernel/sync/arc/std_vendor.rs
++++ b/rust/kernel/sync/arc/std_vendor.rs
+@@ -1,5 +1,7 @@
+ // SPDX-License-Identifier: Apache-2.0 OR MIT
+
++//! Rust standard library vendored code.
++//!
+ //! The contents of this file come from the Rust standard library, hosted in
+ //! the <https://github.com/rust-lang/rust> repository, licensed under
+ //! "Apache-2.0 OR MIT" and adapted for kernel use. For copyright details,
+diff --git a/rust/kernel/sync/condvar.rs b/rust/kernel/sync/condvar.rs
+index 2b306afbe56d96..7df565038d7d0d 100644
+--- a/rust/kernel/sync/condvar.rs
++++ b/rust/kernel/sync/condvar.rs
+@@ -7,6 +7,7 @@
+
+ use super::{lock::Backend, lock::Guard, LockClassKey};
+ use crate::{
++ ffi::{c_int, c_long},
+ init::PinInit,
+ pin_init,
+ str::CStr,
+@@ -14,7 +15,6 @@
+ time::Jiffies,
+ types::Opaque,
+ };
+-use core::ffi::{c_int, c_long};
+ use core::marker::PhantomPinned;
+ use core::ptr;
+ use macros::pin_data;
+@@ -70,8 +70,8 @@ macro_rules! new_condvar {
+ /// }
+ ///
+ /// /// Allocates a new boxed `Example`.
+-/// fn new_example() -> Result<Pin<Box<Example>>> {
+-/// Box::pin_init(pin_init!(Example {
++/// fn new_example() -> Result<Pin<KBox<Example>>> {
++/// KBox::pin_init(pin_init!(Example {
+ /// value <- new_mutex!(0),
+ /// value_changed <- new_condvar!(),
+ /// }), GFP_KERNEL)
+@@ -93,7 +93,6 @@ pub struct CondVar {
+ }
+
+ // SAFETY: `CondVar` only uses a `struct wait_queue_head`, which is safe to use on any thread.
+-#[allow(clippy::non_send_fields_in_send_ty)]
+ unsafe impl Send for CondVar {}
+
+ // SAFETY: `CondVar` only uses a `struct wait_queue_head`, which is safe to use on multiple threads
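
The updated `CondVar` doctest allocates with `KBox::pin_init`, the pinned counterpart of `KBox::new`: the value is constructed in place and never moved afterwards, which `struct wait_queue_head` and the lock types require. A stripped-down version of the pattern (type and function names illustrative; assumes the kernel prelude plus the init and sync macros):

    use kernel::prelude::*;
    use kernel::sync::Mutex;
    use kernel::new_mutex;

    #[pin_data]
    struct Counter {
        #[pin]
        value: Mutex<u32>,
    }

    fn alloc_counter() -> Result<Pin<KBox<Counter>>> {
        // Construct the lock in its final location; it is pinned from birth.
        KBox::pin_init(pin_init!(Counter { value <- new_mutex!(0) }), GFP_KERNEL)
    }
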
+diff --git a/rust/kernel/sync/lock.rs b/rust/kernel/sync/lock.rs
+index f6c34ca4d819f8..528eb690723120 100644
+--- a/rust/kernel/sync/lock.rs
++++ b/rust/kernel/sync/lock.rs
+@@ -46,7 +46,7 @@ pub unsafe trait Backend {
+ /// remain valid for read indefinitely.
+ unsafe fn init(
+ ptr: *mut Self::State,
+- name: *const core::ffi::c_char,
++ name: *const crate::ffi::c_char,
+ key: *mut bindings::lock_class_key,
+ );
+
+@@ -150,9 +150,9 @@ pub(crate) fn do_unlocked<U>(&mut self, cb: impl FnOnce() -> U) -> U {
+ // SAFETY: The caller owns the lock, so it is safe to unlock it.
+ unsafe { B::unlock(self.lock.state.get(), &self.state) };
+
+- // SAFETY: The lock was just unlocked above and is being relocked now.
+- let _relock =
+- ScopeGuard::new(|| unsafe { B::relock(self.lock.state.get(), &mut self.state) });
++ let _relock = ScopeGuard::new(||
++ // SAFETY: The lock was just unlocked above and is being relocked now.
++ unsafe { B::relock(self.lock.state.get(), &mut self.state) });
+
+ cb()
+ }
+diff --git a/rust/kernel/sync/lock/mutex.rs b/rust/kernel/sync/lock/mutex.rs
+index 30632070ee6709..59a872cbcac64a 100644
+--- a/rust/kernel/sync/lock/mutex.rs
++++ b/rust/kernel/sync/lock/mutex.rs
+@@ -58,7 +58,7 @@ macro_rules! new_mutex {
+ /// }
+ ///
+ /// // Allocate a boxed `Example`.
+-/// let e = Box::pin_init(Example::new(), GFP_KERNEL)?;
++/// let e = KBox::pin_init(Example::new(), GFP_KERNEL)?;
+ /// assert_eq!(e.c, 10);
+ /// assert_eq!(e.d.lock().a, 20);
+ /// assert_eq!(e.d.lock().b, 30);
+@@ -96,7 +96,7 @@ unsafe impl super::Backend for MutexBackend {
+
+ unsafe fn init(
+ ptr: *mut Self::State,
+- name: *const core::ffi::c_char,
++ name: *const crate::ffi::c_char,
+ key: *mut bindings::lock_class_key,
+ ) {
+ // SAFETY: The safety requirements ensure that `ptr` is valid for writes, and `name` and
+diff --git a/rust/kernel/sync/lock/spinlock.rs b/rust/kernel/sync/lock/spinlock.rs
+index ea5c5bc1ce12ed..b77eed1789ad0f 100644
+--- a/rust/kernel/sync/lock/spinlock.rs
++++ b/rust/kernel/sync/lock/spinlock.rs
+@@ -56,7 +56,7 @@ macro_rules! new_spinlock {
+ /// }
+ ///
+ /// // Allocate a boxed `Example`.
+-/// let e = Box::pin_init(Example::new(), GFP_KERNEL)?;
++/// let e = KBox::pin_init(Example::new(), GFP_KERNEL)?;
+ /// assert_eq!(e.c, 10);
+ /// assert_eq!(e.d.lock().a, 20);
+ /// assert_eq!(e.d.lock().b, 30);
+@@ -95,7 +95,7 @@ unsafe impl super::Backend for SpinLockBackend {
+
+ unsafe fn init(
+ ptr: *mut Self::State,
+- name: *const core::ffi::c_char,
++ name: *const crate::ffi::c_char,
+ key: *mut bindings::lock_class_key,
+ ) {
+ // SAFETY: The safety requirements ensure that `ptr` is valid for writes, and `name` and
+diff --git a/rust/kernel/sync/locked_by.rs b/rust/kernel/sync/locked_by.rs
+index ce2ee8d8786587..a7b244675c2b96 100644
+--- a/rust/kernel/sync/locked_by.rs
++++ b/rust/kernel/sync/locked_by.rs
+@@ -43,7 +43,7 @@
+ /// struct InnerDirectory {
+ /// /// The sum of the bytes used by all files.
+ /// bytes_used: u64,
+-/// _files: Vec<File>,
++/// _files: KVec<File>,
+ /// }
+ ///
+ /// struct Directory {
+diff --git a/rust/kernel/task.rs b/rust/kernel/task.rs
+index 55dff7e088bf5f..5bce090a386977 100644
+--- a/rust/kernel/task.rs
++++ b/rust/kernel/task.rs
+@@ -4,13 +4,9 @@
+ //!
+ //! C header: [`include/linux/sched.h`](srctree/include/linux/sched.h).
+
++use crate::ffi::{c_int, c_long, c_uint};
+ use crate::types::Opaque;
+-use core::{
+- ffi::{c_int, c_long, c_uint},
+- marker::PhantomData,
+- ops::Deref,
+- ptr,
+-};
++use core::{marker::PhantomData, ops::Deref, ptr};
+
+ /// A sentinel value used for infinite timeouts.
+ pub const MAX_SCHEDULE_TIMEOUT: c_long = c_long::MAX;
+diff --git a/rust/kernel/time.rs b/rust/kernel/time.rs
+index e3bb5e89f88dac..379c0f5772e575 100644
+--- a/rust/kernel/time.rs
++++ b/rust/kernel/time.rs
+@@ -12,10 +12,10 @@
+ pub const NSEC_PER_MSEC: i64 = bindings::NSEC_PER_MSEC as i64;
+
+ /// The time unit of Linux kernel. One jiffy equals (1/HZ) second.
+-pub type Jiffies = core::ffi::c_ulong;
++pub type Jiffies = crate::ffi::c_ulong;
+
+ /// The millisecond time unit.
+-pub type Msecs = core::ffi::c_uint;
++pub type Msecs = crate::ffi::c_uint;
+
+ /// Converts milliseconds to jiffies.
+ #[inline]
+diff --git a/rust/kernel/types.rs b/rust/kernel/types.rs
+index 9e7ca066355cd5..7c8c531ef1909d 100644
+--- a/rust/kernel/types.rs
++++ b/rust/kernel/types.rs
+@@ -3,13 +3,11 @@
+ //! Kernel types.
+
+ use crate::init::{self, PinInit};
+-use alloc::boxed::Box;
+ use core::{
+ cell::UnsafeCell,
+ marker::{PhantomData, PhantomPinned},
+ mem::{ManuallyDrop, MaybeUninit},
+ ops::{Deref, DerefMut},
+- pin::Pin,
+ ptr::NonNull,
+ };
+
+@@ -31,7 +29,7 @@ pub trait ForeignOwnable: Sized {
+ /// For example, it might be invalid, dangling or pointing to uninitialized memory. Using it in
+ /// any way except for [`ForeignOwnable::from_foreign`], [`ForeignOwnable::borrow`],
+ /// [`ForeignOwnable::try_from_foreign`] can result in undefined behavior.
+- fn into_foreign(self) -> *const core::ffi::c_void;
++ fn into_foreign(self) -> *const crate::ffi::c_void;
+
+ /// Borrows a foreign-owned object.
+ ///
+@@ -39,7 +37,7 @@ pub trait ForeignOwnable: Sized {
+ ///
+ /// `ptr` must have been returned by a previous call to [`ForeignOwnable::into_foreign`] for
+ /// which a previous matching [`ForeignOwnable::from_foreign`] hasn't been called yet.
+- unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> Self::Borrowed<'a>;
++ unsafe fn borrow<'a>(ptr: *const crate::ffi::c_void) -> Self::Borrowed<'a>;
+
+ /// Converts a foreign-owned object back to a Rust-owned one.
+ ///
+@@ -49,7 +47,7 @@ pub trait ForeignOwnable: Sized {
+ /// which a previous matching [`ForeignOwnable::from_foreign`] hasn't been called yet.
+ /// Additionally, all instances (if any) of values returned by [`ForeignOwnable::borrow`] for
+ /// this object must have been dropped.
+- unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self;
++ unsafe fn from_foreign(ptr: *const crate::ffi::c_void) -> Self;
+
+ /// Tries to convert a foreign-owned object back to a Rust-owned one.
+ ///
+@@ -60,7 +58,7 @@ pub trait ForeignOwnable: Sized {
+ ///
+ /// `ptr` must either be null or satisfy the safety requirements for
+ /// [`ForeignOwnable::from_foreign`].
+- unsafe fn try_from_foreign(ptr: *const core::ffi::c_void) -> Option<Self> {
++ unsafe fn try_from_foreign(ptr: *const crate::ffi::c_void) -> Option<Self> {
+ if ptr.is_null() {
+ None
+ } else {
+@@ -71,64 +69,16 @@ unsafe fn try_from_foreign(ptr: *const core::ffi::c_void) -> Option<Self> {
+ }
+ }
+
+-impl<T: 'static> ForeignOwnable for Box<T> {
+- type Borrowed<'a> = &'a T;
+-
+- fn into_foreign(self) -> *const core::ffi::c_void {
+- Box::into_raw(self) as _
+- }
+-
+- unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> &'a T {
+- // SAFETY: The safety requirements for this function ensure that the object is still alive,
+- // so it is safe to dereference the raw pointer.
+- // The safety requirements of `from_foreign` also ensure that the object remains alive for
+- // the lifetime of the returned value.
+- unsafe { &*ptr.cast() }
+- }
+-
+- unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self {
+- // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous
+- // call to `Self::into_foreign`.
+- unsafe { Box::from_raw(ptr as _) }
+- }
+-}
+-
+-impl<T: 'static> ForeignOwnable for Pin<Box<T>> {
+- type Borrowed<'a> = Pin<&'a T>;
+-
+- fn into_foreign(self) -> *const core::ffi::c_void {
+- // SAFETY: We are still treating the box as pinned.
+- Box::into_raw(unsafe { Pin::into_inner_unchecked(self) }) as _
+- }
+-
+- unsafe fn borrow<'a>(ptr: *const core::ffi::c_void) -> Pin<&'a T> {
+- // SAFETY: The safety requirements for this function ensure that the object is still alive,
+- // so it is safe to dereference the raw pointer.
+- // The safety requirements of `from_foreign` also ensure that the object remains alive for
+- // the lifetime of the returned value.
+- let r = unsafe { &*ptr.cast() };
+-
+- // SAFETY: This pointer originates from a `Pin<Box<T>>`.
+- unsafe { Pin::new_unchecked(r) }
+- }
+-
+- unsafe fn from_foreign(ptr: *const core::ffi::c_void) -> Self {
+- // SAFETY: The safety requirements of this function ensure that `ptr` comes from a previous
+- // call to `Self::into_foreign`.
+- unsafe { Pin::new_unchecked(Box::from_raw(ptr as _)) }
+- }
+-}
+-
+ impl ForeignOwnable for () {
+ type Borrowed<'a> = ();
+
+- fn into_foreign(self) -> *const core::ffi::c_void {
++ fn into_foreign(self) -> *const crate::ffi::c_void {
+ core::ptr::NonNull::dangling().as_ptr()
+ }
+
+- unsafe fn borrow<'a>(_: *const core::ffi::c_void) -> Self::Borrowed<'a> {}
++ unsafe fn borrow<'a>(_: *const crate::ffi::c_void) -> Self::Borrowed<'a> {}
+
+- unsafe fn from_foreign(_: *const core::ffi::c_void) -> Self {}
++ unsafe fn from_foreign(_: *const crate::ffi::c_void) -> Self {}
+ }
+
+ /// Runs a cleanup function/closure when dropped.
+@@ -185,7 +135,7 @@ unsafe fn from_foreign(_: *const core::ffi::c_void) -> Self {}
+ /// # use kernel::types::ScopeGuard;
+ /// fn example3(arg: bool) -> Result {
+ /// let mut vec =
+-/// ScopeGuard::new_with_data(Vec::new(), |v| pr_info!("vec had {} elements\n", v.len()));
++/// ScopeGuard::new_with_data(KVec::new(), |v| pr_info!("vec had {} elements\n", v.len()));
+ ///
+ /// vec.push(10u8, GFP_KERNEL)?;
+ /// if arg {
+@@ -225,7 +175,7 @@ pub fn dismiss(mut self) -> T {
+ impl ScopeGuard<(), fn(())> {
+ /// Creates a new guarded object with the given cleanup function.
+ pub fn new(cleanup: impl FnOnce()) -> ScopeGuard<(), impl FnOnce(())> {
+- ScopeGuard::new_with_data((), move |_| cleanup())
++ ScopeGuard::new_with_data((), move |()| cleanup())
+ }
+ }
+
+@@ -410,6 +360,7 @@ pub unsafe fn from_raw(ptr: NonNull<T>) -> Self {
+ ///
+ /// struct Empty {}
+ ///
++ /// # // SAFETY: TODO.
+ /// unsafe impl AlwaysRefCounted for Empty {
+ /// fn inc_ref(&self) {}
+ /// unsafe fn dec_ref(_obj: NonNull<Self>) {}
+@@ -417,6 +368,7 @@ pub unsafe fn from_raw(ptr: NonNull<T>) -> Self {
+ ///
+ /// let mut data = Empty {};
+ /// let ptr = NonNull::<Empty>::new(&mut data as *mut _).unwrap();
++ /// # // SAFETY: TODO.
+ /// let data_ref: ARef<Empty> = unsafe { ARef::from_raw(ptr) };
+ /// let raw_ptr: NonNull<Empty> = ARef::into_raw(data_ref);
+ ///
+@@ -481,21 +433,23 @@ pub enum Either<L, R> {
+ /// All bit-patterns must be valid for this type. This type must not have interior mutability.
+ pub unsafe trait FromBytes {}
+
+-// SAFETY: All bit patterns are acceptable values of the types below.
+-unsafe impl FromBytes for u8 {}
+-unsafe impl FromBytes for u16 {}
+-unsafe impl FromBytes for u32 {}
+-unsafe impl FromBytes for u64 {}
+-unsafe impl FromBytes for usize {}
+-unsafe impl FromBytes for i8 {}
+-unsafe impl FromBytes for i16 {}
+-unsafe impl FromBytes for i32 {}
+-unsafe impl FromBytes for i64 {}
+-unsafe impl FromBytes for isize {}
+-// SAFETY: If all bit patterns are acceptable for individual values in an array, then all bit
+-// patterns are also acceptable for arrays of that type.
+-unsafe impl<T: FromBytes> FromBytes for [T] {}
+-unsafe impl<T: FromBytes, const N: usize> FromBytes for [T; N] {}
++macro_rules! impl_frombytes {
++ ($($({$($generics:tt)*})? $t:ty, )*) => {
++ // SAFETY: Safety comments written in the macro invocation.
++ $(unsafe impl$($($generics)*)? FromBytes for $t {})*
++ };
++}
++
++impl_frombytes! {
++ // SAFETY: All bit patterns are acceptable values of the types below.
++ u8, u16, u32, u64, usize,
++ i8, i16, i32, i64, isize,
++
++ // SAFETY: If all bit patterns are acceptable for individual values in an array, then all bit
++ // patterns are also acceptable for arrays of that type.
++ {<T: FromBytes>} [T],
++ {<T: FromBytes, const N: usize>} [T; N],
++}
+
+ /// Types that can be viewed as an immutable slice of initialized bytes.
+ ///
+@@ -514,21 +468,23 @@ unsafe impl<T: FromBytes> FromBytes for [T] {}
+ /// mutability.
+ pub unsafe trait AsBytes {}
+
+-// SAFETY: Instances of the following types have no uninitialized portions.
+-unsafe impl AsBytes for u8 {}
+-unsafe impl AsBytes for u16 {}
+-unsafe impl AsBytes for u32 {}
+-unsafe impl AsBytes for u64 {}
+-unsafe impl AsBytes for usize {}
+-unsafe impl AsBytes for i8 {}
+-unsafe impl AsBytes for i16 {}
+-unsafe impl AsBytes for i32 {}
+-unsafe impl AsBytes for i64 {}
+-unsafe impl AsBytes for isize {}
+-unsafe impl AsBytes for bool {}
+-unsafe impl AsBytes for char {}
+-unsafe impl AsBytes for str {}
+-// SAFETY: If individual values in an array have no uninitialized portions, then the array itself
+-// does not have any uninitialized portions either.
+-unsafe impl<T: AsBytes> AsBytes for [T] {}
+-unsafe impl<T: AsBytes, const N: usize> AsBytes for [T; N] {}
++macro_rules! impl_asbytes {
++ ($($({$($generics:tt)*})? $t:ty, )*) => {
++ // SAFETY: Safety comments written in the macro invocation.
++ $(unsafe impl$($($generics)*)? AsBytes for $t {})*
++ };
++}
++
++impl_asbytes! {
++ // SAFETY: Instances of the following types have no uninitialized portions.
++ u8, u16, u32, u64, usize,
++ i8, i16, i32, i64, isize,
++ bool,
++ char,
++ str,
++
++ // SAFETY: If individual values in an array have no uninitialized portions, then the array
++ // itself does not have any uninitialized portions either.
++ {<T: AsBytes>} [T],
++ {<T: AsBytes, const N: usize>} [T; N],
++}
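
The two new macros deduplicate the long runs of `unsafe impl` blocks. The interesting bit is the matcher `$($({$($generics:tt)*})? $t:ty, )*`: each entry is a type optionally preceded by a brace-wrapped generic parameter list, so plain types and generic arrays share one invocation. The same shape in miniature, with an illustrative safe `Marker` trait:

    trait Marker {}

    macro_rules! impl_marker {
        ($($({$($generics:tt)*})? $t:ty, )*) => {
            // One impl per entry; the optional `{...}` group supplies the
            // generic parameters, including const generics.
            $(impl$($($generics)*)? Marker for $t {})*
        };
    }

    impl_marker! {
        u8, u32, i64,
        {<T: Marker>} [T],
        {<T: Marker, const N: usize>} [T; N],
    }
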
+diff --git a/rust/kernel/uaccess.rs b/rust/kernel/uaccess.rs
+index e9347cff99ab20..5a3c2d4df65f86 100644
+--- a/rust/kernel/uaccess.rs
++++ b/rust/kernel/uaccess.rs
+@@ -8,11 +8,10 @@
+ alloc::Flags,
+ bindings,
+ error::Result,
++ ffi::c_void,
+ prelude::*,
+ types::{AsBytes, FromBytes},
+ };
+-use alloc::vec::Vec;
+-use core::ffi::{c_ulong, c_void};
+ use core::mem::{size_of, MaybeUninit};
+
+ /// The type used for userspace addresses.
+@@ -46,15 +45,14 @@
+ /// every byte in the region.
+ ///
+ /// ```no_run
+-/// use alloc::vec::Vec;
+-/// use core::ffi::c_void;
++/// use kernel::ffi::c_void;
+ /// use kernel::error::Result;
+ /// use kernel::uaccess::{UserPtr, UserSlice};
+ ///
+ /// fn bytes_add_one(uptr: UserPtr, len: usize) -> Result<()> {
+ /// let (read, mut write) = UserSlice::new(uptr, len).reader_writer();
+ ///
+-/// let mut buf = Vec::new();
++/// let mut buf = KVec::new();
+ /// read.read_all(&mut buf, GFP_KERNEL)?;
+ ///
+ /// for b in &mut buf {
+@@ -69,8 +67,7 @@
+ /// Example illustrating a TOCTOU (time-of-check to time-of-use) bug.
+ ///
+ /// ```no_run
+-/// use alloc::vec::Vec;
+-/// use core::ffi::c_void;
++/// use kernel::ffi::c_void;
+ /// use kernel::error::{code::EINVAL, Result};
+ /// use kernel::uaccess::{UserPtr, UserSlice};
+ ///
+@@ -78,21 +75,21 @@
+ /// fn is_valid(uptr: UserPtr, len: usize) -> Result<bool> {
+ /// let read = UserSlice::new(uptr, len).reader();
+ ///
+-/// let mut buf = Vec::new();
++/// let mut buf = KVec::new();
+ /// read.read_all(&mut buf, GFP_KERNEL)?;
+ ///
+ /// todo!()
+ /// }
+ ///
+ /// /// Returns the bytes behind this user pointer if they are valid.
+-/// fn get_bytes_if_valid(uptr: UserPtr, len: usize) -> Result<Vec<u8>> {
++/// fn get_bytes_if_valid(uptr: UserPtr, len: usize) -> Result<KVec<u8>> {
+ /// if !is_valid(uptr, len)? {
+ /// return Err(EINVAL);
+ /// }
+ ///
+ /// let read = UserSlice::new(uptr, len).reader();
+ ///
+-/// let mut buf = Vec::new();
++/// let mut buf = KVec::new();
+ /// read.read_all(&mut buf, GFP_KERNEL)?;
+ ///
+ /// // THIS IS A BUG! The bytes could have changed since we checked them.
+@@ -130,7 +127,7 @@ pub fn new(ptr: UserPtr, length: usize) -> Self {
+ /// Reads the entirety of the user slice, appending it to the end of the provided buffer.
+ ///
+ /// Fails with [`EFAULT`] if the read happens on a bad address.
+- pub fn read_all(self, buf: &mut Vec<u8>, flags: Flags) -> Result {
++ pub fn read_all(self, buf: &mut KVec<u8>, flags: Flags) -> Result {
+ self.reader().read_all(buf, flags)
+ }
+
+@@ -227,13 +224,9 @@ pub fn read_raw(&mut self, out: &mut [MaybeUninit<u8>]) -> Result {
+ if len > self.length {
+ return Err(EFAULT);
+ }
+- let Ok(len_ulong) = c_ulong::try_from(len) else {
+- return Err(EFAULT);
+- };
+- // SAFETY: `out_ptr` points into a mutable slice of length `len_ulong`, so we may write
++ // SAFETY: `out_ptr` points into a mutable slice of length `len`, so we may write
+ // that many bytes to it.
+- let res =
+- unsafe { bindings::copy_from_user(out_ptr, self.ptr as *const c_void, len_ulong) };
++ let res = unsafe { bindings::copy_from_user(out_ptr, self.ptr as *const c_void, len) };
+ if res != 0 {
+ return Err(EFAULT);
+ }
+@@ -262,9 +255,6 @@ pub fn read<T: FromBytes>(&mut self) -> Result<T> {
+ if len > self.length {
+ return Err(EFAULT);
+ }
+- let Ok(len_ulong) = c_ulong::try_from(len) else {
+- return Err(EFAULT);
+- };
+ let mut out: MaybeUninit<T> = MaybeUninit::uninit();
+ // SAFETY: The local variable `out` is valid for writing `size_of::<T>()` bytes.
+ //
+@@ -275,7 +265,7 @@ pub fn read<T: FromBytes>(&mut self) -> Result<T> {
+ bindings::_copy_from_user(
+ out.as_mut_ptr().cast::<c_void>(),
+ self.ptr as *const c_void,
+- len_ulong,
++ len,
+ )
+ };
+ if res != 0 {
+@@ -291,9 +281,9 @@ pub fn read<T: FromBytes>(&mut self) -> Result<T> {
+ /// Reads the entirety of the user slice, appending it to the end of the provided buffer.
+ ///
+ /// Fails with [`EFAULT`] if the read happens on a bad address.
+- pub fn read_all(mut self, buf: &mut Vec<u8>, flags: Flags) -> Result {
++ pub fn read_all(mut self, buf: &mut KVec<u8>, flags: Flags) -> Result {
+ let len = self.length;
+- VecExt::<u8>::reserve(buf, len, flags)?;
++ buf.reserve(len, flags)?;
+
+ // The call to `try_reserve` was successful, so the spare capacity is at least `len` bytes
+ // long.
+@@ -338,12 +328,9 @@ pub fn write_slice(&mut self, data: &[u8]) -> Result {
+ if len > self.length {
+ return Err(EFAULT);
+ }
+- let Ok(len_ulong) = c_ulong::try_from(len) else {
+- return Err(EFAULT);
+- };
+- // SAFETY: `data_ptr` points into an immutable slice of length `len_ulong`, so we may read
++ // SAFETY: `data_ptr` points into an immutable slice of length `len`, so we may read
+ // that many bytes from it.
+- let res = unsafe { bindings::copy_to_user(self.ptr as *mut c_void, data_ptr, len_ulong) };
++ let res = unsafe { bindings::copy_to_user(self.ptr as *mut c_void, data_ptr, len) };
+ if res != 0 {
+ return Err(EFAULT);
+ }
+@@ -362,9 +349,6 @@ pub fn write<T: AsBytes>(&mut self, value: &T) -> Result {
+ if len > self.length {
+ return Err(EFAULT);
+ }
+- let Ok(len_ulong) = c_ulong::try_from(len) else {
+- return Err(EFAULT);
+- };
+ // SAFETY: The reference points to a value of type `T`, so it is valid for reading
+ // `size_of::<T>()` bytes.
+ //
+@@ -375,7 +359,7 @@ pub fn write<T: AsBytes>(&mut self, value: &T) -> Result {
+ bindings::_copy_to_user(
+ self.ptr as *mut c_void,
+ (value as *const T).cast::<c_void>(),
+- len_ulong,
++ len,
+ )
+ };
+ if res != 0 {
+diff --git a/rust/kernel/workqueue.rs b/rust/kernel/workqueue.rs
+index 553a5cba2adcb5..4d1d2062f6eba5 100644
+--- a/rust/kernel/workqueue.rs
++++ b/rust/kernel/workqueue.rs
+@@ -216,7 +216,7 @@ pub fn try_spawn<T: 'static + Send + FnOnce()>(
+ func: Some(func),
+ });
+
+- self.enqueue(Box::pin_init(init, flags).map_err(|_| AllocError)?);
++ self.enqueue(KBox::pin_init(init, flags).map_err(|_| AllocError)?);
+ Ok(())
+ }
+ }
+@@ -239,9 +239,9 @@ fn project(self: Pin<&mut Self>) -> &mut Option<T> {
+ }
+
+ impl<T: FnOnce()> WorkItem for ClosureWork<T> {
+- type Pointer = Pin<Box<Self>>;
++ type Pointer = Pin<KBox<Self>>;
+
+- fn run(mut this: Pin<Box<Self>>) {
++ fn run(mut this: Pin<KBox<Self>>) {
+ if let Some(func) = this.as_mut().project().take() {
+ (func)()
+ }
+@@ -297,7 +297,7 @@ unsafe fn __enqueue<F>(self, queue_work_on: F) -> Self::EnqueueOutput
+
+ /// Defines the method that should be called directly when a work item is executed.
+ ///
+-/// This trait is implemented by `Pin<Box<T>>` and [`Arc<T>`], and is mainly intended to be
++/// This trait is implemented by `Pin<KBox<T>>` and [`Arc<T>`], and is mainly intended to be
+ /// implemented for smart pointer types. For your own structs, you would implement [`WorkItem`]
+ /// instead. The [`run`] method on this trait will usually just perform the appropriate
+ /// `container_of` translation and then call into the [`run`][WorkItem::run] method from the
+@@ -329,7 +329,7 @@ pub unsafe trait WorkItemPointer<const ID: u64>: RawWorkItem<ID> {
+ /// This trait is used when the `work_struct` field is defined using the [`Work`] helper.
+ pub trait WorkItem<const ID: u64 = 0> {
+ /// The pointer type that this struct is wrapped in. This will typically be `Arc<Self>` or
+- /// `Pin<Box<Self>>`.
++ /// `Pin<KBox<Self>>`.
+ type Pointer: WorkItemPointer<ID>;
+
+ /// The method that should be called when this work item is executed.
+@@ -366,7 +366,6 @@ unsafe impl<T: ?Sized, const ID: u64> Sync for Work<T, ID> {}
+ impl<T: ?Sized, const ID: u64> Work<T, ID> {
+ /// Creates a new instance of [`Work`].
+ #[inline]
+- #[allow(clippy::new_ret_no_self)]
+ pub fn new(name: &'static CStr, key: &'static LockClassKey) -> impl PinInit<Self>
+ where
+ T: WorkItem<ID>,
+@@ -520,13 +519,14 @@ unsafe fn raw_get_work(ptr: *mut Self) -> *mut $crate::workqueue::Work<$work_typ
+ impl{T} HasWork<Self> for ClosureWork<T> { self.work }
+ }
+
++// SAFETY: TODO.
+ unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Arc<T>
+ where
+ T: WorkItem<ID, Pointer = Self>,
+ T: HasWork<T, ID>,
+ {
+ unsafe extern "C" fn run(ptr: *mut bindings::work_struct) {
+- // SAFETY: The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`.
++ // The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`.
+ let ptr = ptr as *mut Work<T, ID>;
+ // SAFETY: This computes the pointer that `__enqueue` got from `Arc::into_raw`.
+ let ptr = unsafe { T::work_container_of(ptr) };
+@@ -537,6 +537,7 @@ unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Arc<T>
+ }
+ }
+
++// SAFETY: TODO.
+ unsafe impl<T, const ID: u64> RawWorkItem<ID> for Arc<T>
+ where
+ T: WorkItem<ID, Pointer = Self>,
+@@ -565,18 +566,19 @@ unsafe fn __enqueue<F>(self, queue_work_on: F) -> Self::EnqueueOutput
+ }
+ }
+
+-unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Pin<Box<T>>
++// SAFETY: TODO.
++unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Pin<KBox<T>>
+ where
+ T: WorkItem<ID, Pointer = Self>,
+ T: HasWork<T, ID>,
+ {
+ unsafe extern "C" fn run(ptr: *mut bindings::work_struct) {
+- // SAFETY: The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`.
++ // The `__enqueue` method always uses a `work_struct` stored in a `Work<T, ID>`.
+ let ptr = ptr as *mut Work<T, ID>;
+ // SAFETY: This computes the pointer that `__enqueue` got from `Arc::into_raw`.
+ let ptr = unsafe { T::work_container_of(ptr) };
+ // SAFETY: This pointer comes from `Arc::into_raw` and we've been given back ownership.
+- let boxed = unsafe { Box::from_raw(ptr) };
++ let boxed = unsafe { KBox::from_raw(ptr) };
+ // SAFETY: The box was already pinned when it was enqueued.
+ let pinned = unsafe { Pin::new_unchecked(boxed) };
+
+@@ -584,7 +586,8 @@ unsafe impl<T, const ID: u64> WorkItemPointer<ID> for Pin<Box<T>>
+ }
+ }
+
+-unsafe impl<T, const ID: u64> RawWorkItem<ID> for Pin<Box<T>>
++// SAFETY: TODO.
++unsafe impl<T, const ID: u64> RawWorkItem<ID> for Pin<KBox<T>>
+ where
+ T: WorkItem<ID, Pointer = Self>,
+ T: HasWork<T, ID>,
+@@ -598,9 +601,9 @@ unsafe fn __enqueue<F>(self, queue_work_on: F) -> Self::EnqueueOutput
+ // SAFETY: We're not going to move `self` or any of its fields, so its okay to temporarily
+ // remove the `Pin` wrapper.
+ let boxed = unsafe { Pin::into_inner_unchecked(self) };
+- let ptr = Box::into_raw(boxed);
++ let ptr = KBox::into_raw(boxed);
+
+- // SAFETY: Pointers into a `Box` point at a valid value.
++ // SAFETY: Pointers into a `KBox` point at a valid value.
+ let work_ptr = unsafe { T::raw_get_work(ptr) };
+ // SAFETY: `raw_get_work` returns a pointer to a valid value.
+ let work_ptr = unsafe { Work::raw_get(work_ptr) };
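
With the alias in place, heap-owned work items are spelled `Pin<KBox<T>>` end to end. For one-shot closures the queue does the allocation itself: `try_spawn` (shown at the top of this file's hunks) pin-initialises a `ClosureWork` with `KBox::pin_init` and enqueues it. Illustrative usage, assuming the kernel prelude:

    use kernel::alloc::AllocError;
    use kernel::prelude::*;
    use kernel::workqueue;

    fn defer_hello() -> Result<(), AllocError> {
        // Allocates the closure work item and queues it on the system
        // workqueue; it runs once on a worker thread.
        workqueue::system().try_spawn(GFP_KERNEL, || pr_info!("hello from a work item\n"))
    }
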
+diff --git a/rust/macros/lib.rs b/rust/macros/lib.rs
+index 90e2202ba4d5a0..b16402a16acd48 100644
+--- a/rust/macros/lib.rs
++++ b/rust/macros/lib.rs
+@@ -132,7 +132,7 @@ pub fn module(ts: TokenStream) -> TokenStream {
+ /// calls to this function at compile time:
+ ///
+ /// ```compile_fail
+-/// # use kernel::error::VTABLE_DEFAULT_ERROR;
++/// # // Intentionally missing `use`s to simplify `rusttest`.
+ /// kernel::build_error(VTABLE_DEFAULT_ERROR)
+ /// ```
+ ///
+@@ -242,8 +242,8 @@ pub fn concat_idents(ts: TokenStream) -> TokenStream {
+ /// #[pin_data]
+ /// struct DriverData {
+ /// #[pin]
+-/// queue: Mutex<Vec<Command>>,
+-/// buf: Box<[u8; 1024 * 1024]>,
++/// queue: Mutex<KVec<Command>>,
++/// buf: KBox<[u8; 1024 * 1024]>,
+ /// }
+ /// ```
+ ///
+@@ -251,8 +251,8 @@ pub fn concat_idents(ts: TokenStream) -> TokenStream {
+ /// #[pin_data(PinnedDrop)]
+ /// struct DriverData {
+ /// #[pin]
+-/// queue: Mutex<Vec<Command>>,
+-/// buf: Box<[u8; 1024 * 1024]>,
++/// queue: Mutex<KVec<Command>>,
++/// buf: KBox<[u8; 1024 * 1024]>,
+ /// raw_info: *mut Info,
+ /// }
+ ///
+@@ -281,8 +281,8 @@ pub fn pin_data(inner: TokenStream, item: TokenStream) -> TokenStream {
+ /// #[pin_data(PinnedDrop)]
+ /// struct DriverData {
+ /// #[pin]
+-/// queue: Mutex<Vec<Command>>,
+-/// buf: Box<[u8; 1024 * 1024]>,
++/// queue: Mutex<KVec<Command>>,
++/// buf: KBox<[u8; 1024 * 1024]>,
+ /// raw_info: *mut Info,
+ /// }
+ ///
+diff --git a/rust/macros/module.rs b/rust/macros/module.rs
+index aef3b132f32b33..e7a087b7e88494 100644
+--- a/rust/macros/module.rs
++++ b/rust/macros/module.rs
+@@ -253,7 +253,7 @@ mod __module_init {{
+ #[doc(hidden)]
+ #[no_mangle]
+ #[link_section = \".init.text\"]
+- pub unsafe extern \"C\" fn init_module() -> core::ffi::c_int {{
++ pub unsafe extern \"C\" fn init_module() -> kernel::ffi::c_int {{
+ // SAFETY: This function is inaccessible to the outside due to the double
+ // module wrapping it. It is called exactly once by the C side via its
+ // unique name.
+@@ -292,7 +292,7 @@ mod __module_init {{
+ #[doc(hidden)]
+ #[link_section = \"{initcall_section}\"]
+ #[used]
+- pub static __{name}_initcall: extern \"C\" fn() -> core::ffi::c_int = __{name}_init;
++ pub static __{name}_initcall: extern \"C\" fn() -> kernel::ffi::c_int = __{name}_init;
+
+ #[cfg(not(MODULE))]
+ #[cfg(CONFIG_HAVE_ARCH_PREL32_RELOCATIONS)]
+@@ -307,7 +307,7 @@ mod __module_init {{
+ #[cfg(not(MODULE))]
+ #[doc(hidden)]
+ #[no_mangle]
+- pub extern \"C\" fn __{name}_init() -> core::ffi::c_int {{
++ pub extern \"C\" fn __{name}_init() -> kernel::ffi::c_int {{
+ // SAFETY: This function is inaccessible to the outside due to the double
+ // module wrapping it. It is called exactly once by the C side via its
+ // placement above in the initcall section.
+@@ -330,7 +330,7 @@ mod __module_init {{
+ /// # Safety
+ ///
+ /// This function must only be called once.
+- unsafe fn __init() -> core::ffi::c_int {{
++ unsafe fn __init() -> kernel::ffi::c_int {{
+ match <{type_} as kernel::Module>::init(&super::super::THIS_MODULE) {{
+ Ok(m) => {{
+ // SAFETY: No data race, since `__MOD` can only be accessed by this
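
The generated module-init glue now spells its C types through `kernel::ffi` as well, so the whole tree agrees on one kernel-controlled mapping of C integer types rather than whatever `core::ffi` picks for the target. The shape of the emitted entry point, reduced to a sketch (the symbol name here is illustrative, not the generated one):

    use kernel::ffi::c_int;

    // The C side calls this by name; returning 0 reports success, just
    // like a C initcall.
    #[no_mangle]
    pub extern "C" fn my_module_init() -> c_int {
        0
    }
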
+diff --git a/rust/uapi/lib.rs b/rust/uapi/lib.rs
+index 80a00260e3e7a1..13495910271faf 100644
+--- a/rust/uapi/lib.rs
++++ b/rust/uapi/lib.rs
+@@ -14,6 +14,7 @@
+ #![cfg_attr(test, allow(unsafe_op_in_unsafe_fn))]
+ #![allow(
+ clippy::all,
++ clippy::undocumented_unsafe_blocks,
+ dead_code,
+ missing_docs,
+ non_camel_case_types,
+@@ -24,4 +25,9 @@
+ unsafe_op_in_unsafe_fn
+ )]
+
++// Manual definition of blocklisted types.
++type __kernel_size_t = usize;
++type __kernel_ssize_t = isize;
++type __kernel_ptrdiff_t = isize;
++
+ include!(concat!(env!("OBJTREE"), "/rust/uapi/uapi_generated.rs"));
+diff --git a/samples/rust/rust_minimal.rs b/samples/rust/rust_minimal.rs
+index 2a9eaab62d1ca7..4aaf117bf8e3c0 100644
+--- a/samples/rust/rust_minimal.rs
++++ b/samples/rust/rust_minimal.rs
+@@ -13,7 +13,7 @@
+ }
+
+ struct RustMinimal {
+- numbers: Vec<i32>,
++ numbers: KVec<i32>,
+ }
+
+ impl kernel::Module for RustMinimal {
+@@ -21,7 +21,7 @@ fn init(_module: &'static ThisModule) -> Result<Self> {
+ pr_info!("Rust minimal sample (init)\n");
+ pr_info!("Am I built-in? {}\n", !cfg!(MODULE));
+
+- let mut numbers = Vec::new();
++ let mut numbers = KVec::new();
+ numbers.push(72, GFP_KERNEL)?;
+ numbers.push(108, GFP_KERNEL)?;
+ numbers.push(200, GFP_KERNEL)?;
+diff --git a/samples/rust/rust_print.rs b/samples/rust/rust_print.rs
+index 6eabb0d79ea3a7..ba1606bdbd7543 100644
+--- a/samples/rust/rust_print.rs
++++ b/samples/rust/rust_print.rs
+@@ -15,6 +15,7 @@
+
+ struct RustPrint;
+
++#[expect(clippy::disallowed_macros)]
+ fn arc_print() -> Result {
+ use kernel::sync::*;
+
+diff --git a/scripts/Makefile.build b/scripts/Makefile.build
+index 880785b52c04ad..2bba59e790b8a4 100644
+--- a/scripts/Makefile.build
++++ b/scripts/Makefile.build
+@@ -248,7 +248,7 @@ $(obj)/%.lst: $(obj)/%.c FORCE
+ # Compile Rust sources (.rs)
+ # ---------------------------------------------------------------------------
+
+-rust_allowed_features := arbitrary_self_types,new_uninit
++rust_allowed_features := arbitrary_self_types,lint_reasons
+
+ # `--out-dir` is required to avoid temporaries being created by `rustc` in the
+ # current working directory, which may be not accessible in the out-of-tree
+@@ -258,7 +258,7 @@ rust_common_cmd = \
+ -Zallow-features=$(rust_allowed_features) \
+ -Zcrate-attr=no_std \
+ -Zcrate-attr='feature($(rust_allowed_features))' \
+- -Zunstable-options --extern force:alloc --extern kernel \
++ -Zunstable-options --extern kernel \
+ --crate-type rlib -L $(objtree)/rust/ \
+ --crate-name $(basename $(notdir $@)) \
+ --sysroot=/dev/null \
+diff --git a/scripts/generate_rust_analyzer.py b/scripts/generate_rust_analyzer.py
+index d2bc63cde8c6a3..09e1d166d8d236 100755
+--- a/scripts/generate_rust_analyzer.py
++++ b/scripts/generate_rust_analyzer.py
+@@ -64,13 +64,6 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+ [],
+ )
+
+- append_crate(
+- "alloc",
+- sysroot_src / "alloc" / "src" / "lib.rs",
+- ["core", "compiler_builtins"],
+- cfg=crates_cfgs.get("alloc", []),
+- )
+-
+ append_crate(
+ "macros",
+ srctree / "rust" / "macros" / "lib.rs",
+@@ -96,7 +89,7 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+ append_crate(
+ "kernel",
+ srctree / "rust" / "kernel" / "lib.rs",
+- ["core", "alloc", "macros", "build_error", "bindings"],
++ ["core", "macros", "build_error", "bindings"],
+ cfg=cfg,
+ )
+ crates[-1]["source"] = {
+@@ -133,7 +126,7 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+ append_crate(
+ name,
+ path,
+- ["core", "alloc", "kernel"],
++ ["core", "kernel"],
+ cfg=cfg,
+ )
+
+diff --git a/sound/core/seq/seq_clientmgr.c b/sound/core/seq/seq_clientmgr.c
+index 9955c4d54e42a7..b30faf731da720 100644
+--- a/sound/core/seq/seq_clientmgr.c
++++ b/sound/core/seq/seq_clientmgr.c
+@@ -106,7 +106,7 @@ static struct snd_seq_client *clientptr(int clientid)
+ return clienttab[clientid];
+ }
+
+-struct snd_seq_client *snd_seq_client_use_ptr(int clientid)
++static struct snd_seq_client *client_use_ptr(int clientid, bool load_module)
+ {
+ unsigned long flags;
+ struct snd_seq_client *client;
+@@ -126,7 +126,7 @@ struct snd_seq_client *snd_seq_client_use_ptr(int clientid)
+ }
+ spin_unlock_irqrestore(&clients_lock, flags);
+ #ifdef CONFIG_MODULES
+- if (!in_interrupt()) {
++ if (load_module) {
+ static DECLARE_BITMAP(client_requested, SNDRV_SEQ_GLOBAL_CLIENTS);
+ static DECLARE_BITMAP(card_requested, SNDRV_CARDS);
+
+@@ -168,6 +168,20 @@ struct snd_seq_client *snd_seq_client_use_ptr(int clientid)
+ return client;
+ }
+
++/* get snd_seq_client object for the given id quickly */
++struct snd_seq_client *snd_seq_client_use_ptr(int clientid)
++{
++ return client_use_ptr(clientid, false);
++}
++
++/* get snd_seq_client object for the given id;
++ * if not found, retry after loading the modules
++ */
++static struct snd_seq_client *client_load_and_use_ptr(int clientid)
++{
++ return client_use_ptr(clientid, IS_ENABLED(CONFIG_MODULES));
++}
++
+ /* Take refcount and perform ioctl_mutex lock on the given client;
+ * used only for OSS sequencer
+ * Unlock via snd_seq_client_ioctl_unlock() below
+@@ -176,7 +190,7 @@ bool snd_seq_client_ioctl_lock(int clientid)
+ {
+ struct snd_seq_client *client;
+
+- client = snd_seq_client_use_ptr(clientid);
++ client = client_load_and_use_ptr(clientid);
+ if (!client)
+ return false;
+ mutex_lock(&client->ioctl_mutex);
+@@ -1195,7 +1209,7 @@ static int snd_seq_ioctl_running_mode(struct snd_seq_client *client, void *arg)
+ int err = 0;
+
+ /* requested client number */
+- cptr = snd_seq_client_use_ptr(info->client);
++ cptr = client_load_and_use_ptr(info->client);
+ if (cptr == NULL)
+ return -ENOENT; /* don't change !!! */
+
+@@ -1257,7 +1271,7 @@ static int snd_seq_ioctl_get_client_info(struct snd_seq_client *client,
+ struct snd_seq_client *cptr;
+
+ /* requested client number */
+- cptr = snd_seq_client_use_ptr(client_info->client);
++ cptr = client_load_and_use_ptr(client_info->client);
+ if (cptr == NULL)
+ return -ENOENT; /* don't change !!! */
+
+@@ -1392,7 +1406,7 @@ static int snd_seq_ioctl_get_port_info(struct snd_seq_client *client, void *arg)
+ struct snd_seq_client *cptr;
+ struct snd_seq_client_port *port;
+
+- cptr = snd_seq_client_use_ptr(info->addr.client);
++ cptr = client_load_and_use_ptr(info->addr.client);
+ if (cptr == NULL)
+ return -ENXIO;
+
+@@ -1496,10 +1510,10 @@ static int snd_seq_ioctl_subscribe_port(struct snd_seq_client *client,
+ struct snd_seq_client *receiver = NULL, *sender = NULL;
+ struct snd_seq_client_port *sport = NULL, *dport = NULL;
+
+- receiver = snd_seq_client_use_ptr(subs->dest.client);
++ receiver = client_load_and_use_ptr(subs->dest.client);
+ if (!receiver)
+ goto __end;
+- sender = snd_seq_client_use_ptr(subs->sender.client);
++ sender = client_load_and_use_ptr(subs->sender.client);
+ if (!sender)
+ goto __end;
+ sport = snd_seq_port_use_ptr(sender, subs->sender.port);
+@@ -1864,7 +1878,7 @@ static int snd_seq_ioctl_get_client_pool(struct snd_seq_client *client,
+ struct snd_seq_client_pool *info = arg;
+ struct snd_seq_client *cptr;
+
+- cptr = snd_seq_client_use_ptr(info->client);
++ cptr = client_load_and_use_ptr(info->client);
+ if (cptr == NULL)
+ return -ENOENT;
+ memset(info, 0, sizeof(*info));
+@@ -1968,7 +1982,7 @@ static int snd_seq_ioctl_get_subscription(struct snd_seq_client *client,
+ struct snd_seq_client_port *sport = NULL;
+
+ result = -EINVAL;
+- sender = snd_seq_client_use_ptr(subs->sender.client);
++ sender = client_load_and_use_ptr(subs->sender.client);
+ if (!sender)
+ goto __end;
+ sport = snd_seq_port_use_ptr(sender, subs->sender.port);
+@@ -1999,7 +2013,7 @@ static int snd_seq_ioctl_query_subs(struct snd_seq_client *client, void *arg)
+ struct list_head *p;
+ int i;
+
+- cptr = snd_seq_client_use_ptr(subs->root.client);
++ cptr = client_load_and_use_ptr(subs->root.client);
+ if (!cptr)
+ goto __end;
+ port = snd_seq_port_use_ptr(cptr, subs->root.port);
+@@ -2066,7 +2080,7 @@ static int snd_seq_ioctl_query_next_client(struct snd_seq_client *client,
+ if (info->client < 0)
+ info->client = 0;
+ for (; info->client < SNDRV_SEQ_MAX_CLIENTS; info->client++) {
+- cptr = snd_seq_client_use_ptr(info->client);
++ cptr = client_load_and_use_ptr(info->client);
+ if (cptr)
+ break; /* found */
+ }
+@@ -2089,7 +2103,7 @@ static int snd_seq_ioctl_query_next_port(struct snd_seq_client *client,
+ struct snd_seq_client *cptr;
+ struct snd_seq_client_port *port = NULL;
+
+- cptr = snd_seq_client_use_ptr(info->addr.client);
++ cptr = client_load_and_use_ptr(info->addr.client);
+ if (cptr == NULL)
+ return -ENXIO;
+
+@@ -2186,7 +2200,7 @@ static int snd_seq_ioctl_client_ump_info(struct snd_seq_client *caller,
+ size = sizeof(struct snd_ump_endpoint_info);
+ else
+ size = sizeof(struct snd_ump_block_info);
+- cptr = snd_seq_client_use_ptr(client);
++ cptr = client_load_and_use_ptr(client);
+ if (!cptr)
+ return -ENOENT;
+
+@@ -2458,7 +2472,7 @@ int snd_seq_kernel_client_enqueue(int client, struct snd_seq_event *ev,
+ if (check_event_type_and_length(ev))
+ return -EINVAL;
+
+- cptr = snd_seq_client_use_ptr(client);
++ cptr = client_load_and_use_ptr(client);
+ if (cptr == NULL)
+ return -EINVAL;
+
+@@ -2690,7 +2704,7 @@ void snd_seq_info_clients_read(struct snd_info_entry *entry,
+
+ /* list the client table */
+ for (c = 0; c < SNDRV_SEQ_MAX_CLIENTS; c++) {
+- client = snd_seq_client_use_ptr(c);
++ client = client_load_and_use_ptr(c);
+ if (client == NULL)
+ continue;
+ if (client->type == NO_CLIENT) {
+diff --git a/sound/pci/hda/Kconfig b/sound/pci/hda/Kconfig
+index 68f1eee9e5c938..dbf933c18a8219 100644
+--- a/sound/pci/hda/Kconfig
++++ b/sound/pci/hda/Kconfig
+@@ -208,6 +208,7 @@ comment "Set to Y if you want auto-loading the side codec driver"
+
+ config SND_HDA_CODEC_REALTEK
+ tristate "Build Realtek HD-audio codec support"
++ depends on INPUT
+ select SND_HDA_GENERIC
+ select SND_HDA_GENERIC_LEDS
+ select SND_HDA_SCODEC_COMPONENT
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index b4540c5cd2a6f9..ea52bc7370a58d 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2242,6 +2242,8 @@ static const struct snd_pci_quirk power_save_denylist[] = {
+ SND_PCI_QUIRK(0x1631, 0xe017, "Packard Bell NEC IMEDIA 5204", 0),
+ /* KONTRON SinglePC may cause a stall at runtime resume */
+ SND_PCI_QUIRK(0x1734, 0x1232, "KONTRON SinglePC", 0),
++ /* Dell ALC3271 */
++ SND_PCI_QUIRK(0x1028, 0x0962, "Dell ALC3271", 0),
+ {}
+ };
+
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 4a3b4c6d4114b9..b559f0d4e34885 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -3845,6 +3845,79 @@ static void alc225_shutup(struct hda_codec *codec)
+ }
+ }
+
++static void alc222_init(struct hda_codec *codec)
++{
++ struct alc_spec *spec = codec->spec;
++ hda_nid_t hp_pin = alc_get_hp_pin(spec);
++ bool hp1_pin_sense, hp2_pin_sense;
++
++ if (!hp_pin)
++ return;
++
++ msleep(30);
++
++ hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin);
++ hp2_pin_sense = snd_hda_jack_detect(codec, 0x14);
++
++ if (hp1_pin_sense || hp2_pin_sense) {
++ msleep(2);
++
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x14, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, PIN_OUT);
++ msleep(75);
++
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x14, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_UNMUTE);
++
++ msleep(75);
++ }
++}
++
++static void alc222_shutup(struct hda_codec *codec)
++{
++ struct alc_spec *spec = codec->spec;
++ hda_nid_t hp_pin = alc_get_hp_pin(spec);
++ bool hp1_pin_sense, hp2_pin_sense;
++
++ if (!hp_pin)
++ hp_pin = 0x21;
++
++ hp1_pin_sense = snd_hda_jack_detect(codec, hp_pin);
++ hp2_pin_sense = snd_hda_jack_detect(codec, 0x14);
++
++ if (hp1_pin_sense || hp2_pin_sense) {
++ msleep(2);
++
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x14, 0,
++ AC_VERB_SET_AMP_GAIN_MUTE, AMP_OUT_MUTE);
++
++ msleep(75);
++
++ if (hp1_pin_sense)
++ snd_hda_codec_write(codec, hp_pin, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++ if (hp2_pin_sense)
++ snd_hda_codec_write(codec, 0x14, 0,
++ AC_VERB_SET_PIN_WIDGET_CONTROL, 0x0);
++
++ msleep(75);
++ }
++ alc_auto_setup_eapd(codec, false);
++ alc_shutup_pins(codec);
++}
++
+ static void alc_default_init(struct hda_codec *codec)
+ {
+ struct alc_spec *spec = codec->spec;
+@@ -4929,7 +5002,6 @@ static void alc298_fixup_samsung_amp_v2_4_amps(struct hda_codec *codec,
+ alc298_samsung_v2_init_amps(codec, 4);
+ }
+
+-#if IS_REACHABLE(CONFIG_INPUT)
+ static void gpio2_mic_hotkey_event(struct hda_codec *codec,
+ struct hda_jack_callback *event)
+ {
+@@ -5038,10 +5110,6 @@ static void alc233_fixup_lenovo_line2_mic_hotkey(struct hda_codec *codec,
+ spec->kb_dev = NULL;
+ }
+ }
+-#else /* INPUT */
+-#define alc280_fixup_hp_gpio2_mic_hotkey NULL
+-#define alc233_fixup_lenovo_line2_mic_hotkey NULL
+-#endif /* INPUT */
+
+ static void alc269_fixup_hp_line1_mic1_led(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+@@ -5055,6 +5123,16 @@ static void alc269_fixup_hp_line1_mic1_led(struct hda_codec *codec,
+ }
+ }
+
++static void alc233_fixup_lenovo_low_en_micmute_led(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ struct alc_spec *spec = codec->spec;
++
++ if (action == HDA_FIXUP_ACT_PRE_PROBE)
++ spec->micmute_led_polarity = 1;
++ alc233_fixup_lenovo_line2_mic_hotkey(codec, fix, action);
++}
++
+ static void alc_hp_mute_disable(struct hda_codec *codec, unsigned int delay)
+ {
+ if (delay <= 0)
+@@ -7588,6 +7666,7 @@ enum {
+ ALC275_FIXUP_DELL_XPS,
+ ALC293_FIXUP_LENOVO_SPK_NOISE,
+ ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY,
++ ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED,
+ ALC255_FIXUP_DELL_SPK_NOISE,
+ ALC225_FIXUP_DISABLE_MIC_VREF,
+ ALC225_FIXUP_DELL1_MIC_NO_PRESENCE,
+@@ -7657,7 +7736,6 @@ enum {
+ ALC285_FIXUP_THINKPAD_X1_GEN7,
+ ALC285_FIXUP_THINKPAD_HEADSET_JACK,
+ ALC294_FIXUP_ASUS_ALLY,
+- ALC294_FIXUP_ASUS_ALLY_X,
+ ALC294_FIXUP_ASUS_ALLY_PINS,
+ ALC294_FIXUP_ASUS_ALLY_VERBS,
+ ALC294_FIXUP_ASUS_ALLY_SPEAKER,
+@@ -8574,6 +8652,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc233_fixup_lenovo_line2_mic_hotkey,
+ },
++ [ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc233_fixup_lenovo_low_en_micmute_led,
++ },
+ [ALC233_FIXUP_INTEL_NUC8_DMIC] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc_fixup_inv_dmic,
+@@ -9096,12 +9178,6 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC294_FIXUP_ASUS_ALLY_PINS
+ },
+- [ALC294_FIXUP_ASUS_ALLY_X] = {
+- .type = HDA_FIXUP_FUNC,
+- .v.func = tas2781_fixup_i2c,
+- .chained = true,
+- .chain_id = ALC294_FIXUP_ASUS_ALLY_PINS
+- },
+ [ALC294_FIXUP_ASUS_ALLY_PINS] = {
+ .type = HDA_FIXUP_PINS,
+ .v.pins = (const struct hda_pintbl[]) {
+@@ -10586,7 +10662,6 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1740, "ASUS UX430UA", ALC295_FIXUP_ASUS_DACS),
+ SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
+ SND_PCI_QUIRK(0x1043, 0x17f3, "ROG Ally NR2301L/X", ALC294_FIXUP_ASUS_ALLY),
+- SND_PCI_QUIRK(0x1043, 0x1eb3, "ROG Ally X RC72LA", ALC294_FIXUP_ASUS_ALLY_X),
+ SND_PCI_QUIRK(0x1043, 0x1863, "ASUS UX6404VI/VV", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1881, "ASUS Zephyrus S/M", ALC294_FIXUP_ASUS_GX502_PINS),
+ SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+@@ -10852,6 +10927,9 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x17aa, 0x3178, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ SND_PCI_QUIRK(0x17aa, 0x31af, "ThinkCentre Station", ALC623_FIXUP_LENOVO_THINKSTATION_P340),
+ SND_PCI_QUIRK(0x17aa, 0x334b, "Lenovo ThinkCentre M70 Gen5", ALC283_FIXUP_HEADSET_MIC),
++ SND_PCI_QUIRK(0x17aa, 0x3384, "ThinkCentre M90a PRO", ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED),
++ SND_PCI_QUIRK(0x17aa, 0x3386, "ThinkCentre M90a Gen6", ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED),
++ SND_PCI_QUIRK(0x17aa, 0x3387, "ThinkCentre M70a Gen6", ALC233_FIXUP_LENOVO_L2MH_LOW_ENLED),
+ SND_PCI_QUIRK(0x17aa, 0x3801, "Lenovo Yoga9 14IAP7", ALC287_FIXUP_YOGA9_14IAP7_BASS_SPK_PIN),
+ HDA_CODEC_QUIRK(0x17aa, 0x3802, "DuetITL 2021", ALC287_FIXUP_YOGA7_14ITL_SPEAKERS),
+ SND_PCI_QUIRK(0x17aa, 0x3802, "Lenovo Yoga Pro 9 14IRP8", ALC287_FIXUP_TAS2781_I2C),
+@@ -11838,8 +11916,11 @@ static int patch_alc269(struct hda_codec *codec)
+ spec->codec_variant = ALC269_TYPE_ALC300;
+ spec->gen.mixer_nid = 0; /* no loopback on ALC300 */
+ break;
++ case 0x10ec0222:
+ case 0x10ec0623:
+ spec->codec_variant = ALC269_TYPE_ALC623;
++ spec->shutup = alc222_shutup;
++ spec->init_hook = alc222_init;
+ break;
+ case 0x10ec0700:
+ case 0x10ec0701:
+diff --git a/sound/usb/usx2y/usbusx2y.c b/sound/usb/usx2y/usbusx2y.c
+index 5f81c68fd42b68..5756ff3528a2d3 100644
+--- a/sound/usb/usx2y/usbusx2y.c
++++ b/sound/usb/usx2y/usbusx2y.c
+@@ -151,6 +151,12 @@ static int snd_usx2y_card_used[SNDRV_CARDS];
+ static void snd_usx2y_card_private_free(struct snd_card *card);
+ static void usx2y_unlinkseq(struct snd_usx2y_async_seq *s);
+
++#ifdef USX2Y_NRPACKS_VARIABLE
++int nrpacks = USX2Y_NRPACKS; /* number of packets per urb */
++module_param(nrpacks, int, 0444);
++MODULE_PARM_DESC(nrpacks, "Number of packets per URB.");
++#endif
++
+ /*
+ * pipe 4 is used for switching the lamps, setting samplerate, volumes ....
+ */
+@@ -432,6 +438,11 @@ static int snd_usx2y_probe(struct usb_interface *intf,
+ struct snd_card *card;
+ int err;
+
++#ifdef USX2Y_NRPACKS_VARIABLE
++ if (nrpacks < 0 || nrpacks > USX2Y_NRPACKS_MAX)
++ return -EINVAL;
++#endif
++
+ if (le16_to_cpu(device->descriptor.idVendor) != 0x1604 ||
+ (le16_to_cpu(device->descriptor.idProduct) != USB_ID_US122 &&
+ le16_to_cpu(device->descriptor.idProduct) != USB_ID_US224 &&
+diff --git a/sound/usb/usx2y/usbusx2y.h b/sound/usb/usx2y/usbusx2y.h
+index 391fd7b4ed5ef6..6a76d04bf1c7df 100644
+--- a/sound/usb/usx2y/usbusx2y.h
++++ b/sound/usb/usx2y/usbusx2y.h
+@@ -7,6 +7,32 @@
+
+ #define NRURBS 2
+
++/* Default value used for nr of packs per urb.
++ * 1 to 4 have been tested ok on uhci.
++ * To use 3 on ohci, you'd need a patch:
++ * look for "0000425-linux-2.6.9-rc4-mm1_ohci-hcd.patch.gz" on
++ * "https://bugtrack.alsa-project.org/alsa-bug/bug_view_page.php?bug_id=0000425"
++ *
++ * 1, 2 and 4 work out of the box on ohci, if I recall correctly.
++ * Bigger is safer operation, smaller gives lower latencies.
++ */
++#define USX2Y_NRPACKS 4
++
++#define USX2Y_NRPACKS_MAX 1024
++
++/* If your system works ok with this module's parameter
++ * nrpacks set to 1, you might as well comment
++ * this define out, and thereby produce smaller, faster code.
++ * You'd also set USX2Y_NRPACKS to 1 then.
++ */
++#define USX2Y_NRPACKS_VARIABLE 1
++
++#ifdef USX2Y_NRPACKS_VARIABLE
++extern int nrpacks;
++#define nr_of_packs() nrpacks
++#else
++#define nr_of_packs() USX2Y_NRPACKS
++#endif
+
+ #define URBS_ASYNC_SEQ 10
+ #define URB_DATA_LEN_ASYNC_SEQ 32
+diff --git a/sound/usb/usx2y/usbusx2yaudio.c b/sound/usb/usx2y/usbusx2yaudio.c
+index f540f46a0b143b..acca8bead82e5b 100644
+--- a/sound/usb/usx2y/usbusx2yaudio.c
++++ b/sound/usb/usx2y/usbusx2yaudio.c
+@@ -28,33 +28,6 @@
+ #include "usx2y.h"
+ #include "usbusx2y.h"
+
+-/* Default value used for nr of packs per urb.
+- * 1 to 4 have been tested ok on uhci.
+- * To use 3 on ohci, you'd need a patch:
+- * look for "0000425-linux-2.6.9-rc4-mm1_ohci-hcd.patch.gz" on
+- * "https://bugtrack.alsa-project.org/alsa-bug/bug_view_page.php?bug_id=0000425"
+- *
+- * 1, 2 and 4 work out of the box on ohci, if I recall correctly.
+- * Bigger is safer operation, smaller gives lower latencies.
+- */
+-#define USX2Y_NRPACKS 4
+-
+-/* If your system works ok with this module's parameter
+- * nrpacks set to 1, you might as well comment
+- * this define out, and thereby produce smaller, faster code.
+- * You'd also set USX2Y_NRPACKS to 1 then.
+- */
+-#define USX2Y_NRPACKS_VARIABLE 1
+-
+-#ifdef USX2Y_NRPACKS_VARIABLE
+-static int nrpacks = USX2Y_NRPACKS; /* number of packets per urb */
+-#define nr_of_packs() nrpacks
+-module_param(nrpacks, int, 0444);
+-MODULE_PARM_DESC(nrpacks, "Number of packets per URB.");
+-#else
+-#define nr_of_packs() USX2Y_NRPACKS
+-#endif
+-
+ static int usx2y_urb_capt_retire(struct snd_usx2y_substream *subs)
+ {
+ struct urb *urb = subs->completed_urb;
+diff --git a/tools/testing/selftests/bpf/benchs/bench_trigger.c b/tools/testing/selftests/bpf/benchs/bench_trigger.c
+index 2ed0ef6f21eeec..32e9f194d4497e 100644
+--- a/tools/testing/selftests/bpf/benchs/bench_trigger.c
++++ b/tools/testing/selftests/bpf/benchs/bench_trigger.c
+@@ -4,6 +4,7 @@
+ #include <argp.h>
+ #include <unistd.h>
+ #include <stdint.h>
++#include "bpf_util.h"
+ #include "bench.h"
+ #include "trigger_bench.skel.h"
+ #include "trace_helpers.h"
+@@ -72,7 +73,7 @@ static __always_inline void inc_counter(struct counter *counters)
+ unsigned slot;
+
+ if (unlikely(tid == 0))
+- tid = syscall(SYS_gettid);
++ tid = sys_gettid();
+
+ /* multiplicative hashing, it's fast */
+ slot = 2654435769U * tid;
+diff --git a/tools/testing/selftests/bpf/bpf_util.h b/tools/testing/selftests/bpf/bpf_util.h
+index 10587a29b9674f..feff92219e213f 100644
+--- a/tools/testing/selftests/bpf/bpf_util.h
++++ b/tools/testing/selftests/bpf/bpf_util.h
+@@ -6,6 +6,7 @@
+ #include <stdlib.h>
+ #include <string.h>
+ #include <errno.h>
++#include <syscall.h>
+ #include <bpf/libbpf.h> /* libbpf_num_possible_cpus */
+
+ static inline unsigned int bpf_num_possible_cpus(void)
+@@ -59,4 +60,12 @@ static inline void bpf_strlcpy(char *dst, const char *src, size_t sz)
+ (offsetof(TYPE, MEMBER) + sizeof_field(TYPE, MEMBER))
+ #endif
+
++/* Availability of gettid across glibc versions is hit-and-miss, therefore
++ * fallback to syscall in this macro and use it everywhere.
++ */
++#ifndef sys_gettid
++#define sys_gettid() syscall(SYS_gettid)
++#endif
++
++
+ #endif /* __BPF_UTIL__ */
+diff --git a/tools/testing/selftests/bpf/map_tests/task_storage_map.c b/tools/testing/selftests/bpf/map_tests/task_storage_map.c
+index 7d050364efca1a..62971dbf299615 100644
+--- a/tools/testing/selftests/bpf/map_tests/task_storage_map.c
++++ b/tools/testing/selftests/bpf/map_tests/task_storage_map.c
+@@ -12,6 +12,7 @@
+ #include <bpf/bpf.h>
+ #include <bpf/libbpf.h>
+
++#include "bpf_util.h"
+ #include "test_maps.h"
+ #include "task_local_storage_helpers.h"
+ #include "read_bpf_task_storage_busy.skel.h"
+@@ -115,7 +116,7 @@ void test_task_storage_map_stress_lookup(void)
+ CHECK(err, "attach", "error %d\n", err);
+
+ /* Trigger program */
+- syscall(SYS_gettid);
++ sys_gettid();
+ skel->bss->pid = 0;
+
+ CHECK(skel->bss->busy != 0, "bad bpf_task_storage_busy", "got %d\n", skel->bss->busy);
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
+index 070c52c312e5f6..6befa870434bcb 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_cookie.c
+@@ -690,7 +690,7 @@ void test_bpf_cookie(void)
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return;
+
+- skel->bss->my_tid = syscall(SYS_gettid);
++ skel->bss->my_tid = sys_gettid();
+
+ if (test__start_subtest("kprobe"))
+ kprobe_subtest(skel);
+diff --git a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+index 9006549a12945f..b8e1224cfd190d 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
++++ b/tools/testing/selftests/bpf/prog_tests/bpf_iter.c
+@@ -226,7 +226,7 @@ static void test_task_common_nocheck(struct bpf_iter_attach_opts *opts,
+ ASSERT_OK(pthread_create(&thread_id, NULL, &do_nothing_wait, NULL),
+ "pthread_create");
+
+- skel->bss->tid = syscall(SYS_gettid);
++ skel->bss->tid = sys_gettid();
+
+ do_dummy_read_opts(skel->progs.dump_task, opts);
+
+@@ -255,10 +255,10 @@ static void *run_test_task_tid(void *arg)
+ union bpf_iter_link_info linfo;
+ int num_unknown_tid, num_known_tid;
+
+- ASSERT_NEQ(getpid(), syscall(SYS_gettid), "check_new_thread_id");
++ ASSERT_NEQ(getpid(), sys_gettid(), "check_new_thread_id");
+
+ memset(&linfo, 0, sizeof(linfo));
+- linfo.task.tid = syscall(SYS_gettid);
++ linfo.task.tid = sys_gettid();
+ opts.link_info = &linfo;
+ opts.link_info_len = sizeof(linfo);
+ test_task_common(&opts, 0, 1);
+diff --git a/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c b/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
+index 747761572098cd..9015e2c2ab1201 100644
+--- a/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
++++ b/tools/testing/selftests/bpf/prog_tests/cgrp_local_storage.c
+@@ -63,14 +63,14 @@ static void test_tp_btf(int cgroup_fd)
+ if (!ASSERT_OK(err, "map_delete_elem"))
+ goto out;
+
+- skel->bss->target_pid = syscall(SYS_gettid);
++ skel->bss->target_pid = sys_gettid();
+
+ err = cgrp_ls_tp_btf__attach(skel);
+ if (!ASSERT_OK(err, "skel_attach"))
+ goto out;
+
+- syscall(SYS_gettid);
+- syscall(SYS_gettid);
++ sys_gettid();
++ sys_gettid();
+
+ skel->bss->target_pid = 0;
+
+@@ -154,7 +154,7 @@ static void test_recursion(int cgroup_fd)
+ goto out;
+
+ /* trigger sys_enter, make sure it does not cause deadlock */
+- syscall(SYS_gettid);
++ sys_gettid();
+
+ out:
+ cgrp_ls_recursion__destroy(skel);
+@@ -224,7 +224,7 @@ static void test_yes_rcu_lock(__u64 cgroup_id)
+ return;
+
+ CGROUP_MODE_SET(skel);
+- skel->bss->target_pid = syscall(SYS_gettid);
++ skel->bss->target_pid = sys_gettid();
+
+ bpf_program__set_autoload(skel->progs.yes_rcu_lock, true);
+ err = cgrp_ls_sleepable__load(skel);
+diff --git a/tools/testing/selftests/bpf/prog_tests/core_reloc.c b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+index 26019313e1fc20..1c682550e0e7ca 100644
+--- a/tools/testing/selftests/bpf/prog_tests/core_reloc.c
++++ b/tools/testing/selftests/bpf/prog_tests/core_reloc.c
+@@ -1010,7 +1010,7 @@ static void run_core_reloc_tests(bool use_btfgen)
+ struct data *data;
+ void *mmap_data = NULL;
+
+- my_pid_tgid = getpid() | ((uint64_t)syscall(SYS_gettid) << 32);
++ my_pid_tgid = getpid() | ((uint64_t)sys_gettid() << 32);
+
+ for (i = 0; i < ARRAY_SIZE(test_cases); i++) {
+ char btf_file[] = "/tmp/core_reloc.btf.XXXXXX";
+diff --git a/tools/testing/selftests/bpf/prog_tests/linked_funcs.c b/tools/testing/selftests/bpf/prog_tests/linked_funcs.c
+index cad6645469129d..fa639b021f7ef8 100644
+--- a/tools/testing/selftests/bpf/prog_tests/linked_funcs.c
++++ b/tools/testing/selftests/bpf/prog_tests/linked_funcs.c
+@@ -20,7 +20,7 @@ void test_linked_funcs(void)
+ bpf_program__set_autoload(skel->progs.handler1, true);
+ bpf_program__set_autoload(skel->progs.handler2, true);
+
+- skel->rodata->my_tid = syscall(SYS_gettid);
++ skel->rodata->my_tid = sys_gettid();
+ skel->bss->syscall_id = SYS_getpgid;
+
+ err = linked_funcs__load(skel);
+diff --git a/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c b/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
+index c29787e092d66a..761ce24bce38fd 100644
+--- a/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
++++ b/tools/testing/selftests/bpf/prog_tests/ns_current_pid_tgid.c
+@@ -23,7 +23,7 @@ static int get_pid_tgid(pid_t *pid, pid_t *tgid,
+ struct stat st;
+ int err;
+
+- *pid = syscall(SYS_gettid);
++ *pid = sys_gettid();
+ *tgid = getpid();
+
+ err = stat("/proc/self/ns/pid", &st);
+diff --git a/tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c b/tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c
+index a1f7e7378a64ce..ebe0c12b55363c 100644
+--- a/tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c
++++ b/tools/testing/selftests/bpf/prog_tests/rcu_read_lock.c
+@@ -21,7 +21,7 @@ static void test_success(void)
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return;
+
+- skel->bss->target_pid = syscall(SYS_gettid);
++ skel->bss->target_pid = sys_gettid();
+
+ bpf_program__set_autoload(skel->progs.get_cgroup_id, true);
+ bpf_program__set_autoload(skel->progs.task_succ, true);
+@@ -58,7 +58,7 @@ static void test_rcuptr_acquire(void)
+ if (!ASSERT_OK_PTR(skel, "skel_open"))
+ return;
+
+- skel->bss->target_pid = syscall(SYS_gettid);
++ skel->bss->target_pid = sys_gettid();
+
+ bpf_program__set_autoload(skel->progs.task_acquire, true);
+ err = rcu_read_lock__load(skel);
+diff --git a/tools/testing/selftests/bpf/prog_tests/task_local_storage.c b/tools/testing/selftests/bpf/prog_tests/task_local_storage.c
+index c33c05161a9ea4..0d42ce00166f07 100644
+--- a/tools/testing/selftests/bpf/prog_tests/task_local_storage.c
++++ b/tools/testing/selftests/bpf/prog_tests/task_local_storage.c
+@@ -23,14 +23,14 @@ static void test_sys_enter_exit(void)
+ if (!ASSERT_OK_PTR(skel, "skel_open_and_load"))
+ return;
+
+- skel->bss->target_pid = syscall(SYS_gettid);
++ skel->bss->target_pid = sys_gettid();
+
+ err = task_local_storage__attach(skel);
+ if (!ASSERT_OK(err, "skel_attach"))
+ goto out;
+
+- syscall(SYS_gettid);
+- syscall(SYS_gettid);
++ sys_gettid();
++ sys_gettid();
+
+ /* 3x syscalls: 1x attach and 2x gettid */
+ ASSERT_EQ(skel->bss->enter_cnt, 3, "enter_cnt");
+@@ -99,7 +99,7 @@ static void test_recursion(void)
+
+ /* trigger sys_enter, make sure it does not cause deadlock */
+ skel->bss->test_pid = getpid();
+- syscall(SYS_gettid);
++ sys_gettid();
+ skel->bss->test_pid = 0;
+ task_ls_recursion__detach(skel);
+
+diff --git a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
+index c1ac813ff9bae3..02a484b22aa69b 100644
+--- a/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
++++ b/tools/testing/selftests/bpf/prog_tests/uprobe_multi_test.c
+@@ -125,7 +125,7 @@ static void *child_thread(void *ctx)
+ struct child *child = ctx;
+ int c = 0, err;
+
+- child->tid = syscall(SYS_gettid);
++ child->tid = sys_gettid();
+
+ /* let parent know we are ready */
+ err = write(child->c2p[1], &c, 1);
+diff --git a/tools/testing/selftests/damon/damon_nr_regions.py b/tools/testing/selftests/damon/damon_nr_regions.py
+index 2e8a74aff54314..58f3291fed12a4 100755
+--- a/tools/testing/selftests/damon/damon_nr_regions.py
++++ b/tools/testing/selftests/damon/damon_nr_regions.py
+@@ -65,6 +65,7 @@ def test_nr_regions(real_nr_regions, min_nr_regions, max_nr_regions):
+
+ test_name = 'nr_regions test with %d/%d/%d real/min/max nr_regions' % (
+ real_nr_regions, min_nr_regions, max_nr_regions)
++ collected_nr_regions.sort()
+ if (collected_nr_regions[0] < min_nr_regions or
+ collected_nr_regions[-1] > max_nr_regions):
+ print('fail %s' % test_name)
+@@ -109,6 +110,7 @@ def main():
+ attrs = kdamonds.kdamonds[0].contexts[0].monitoring_attrs
+ attrs.min_nr_regions = 3
+ attrs.max_nr_regions = 7
++ attrs.update_us = 100000
+ err = kdamonds.kdamonds[0].commit()
+ if err is not None:
+ proc.terminate()
+diff --git a/tools/testing/selftests/damon/damos_quota.py b/tools/testing/selftests/damon/damos_quota.py
+index 7d4c6bb2e3cd27..57c4937aaed285 100755
+--- a/tools/testing/selftests/damon/damos_quota.py
++++ b/tools/testing/selftests/damon/damos_quota.py
+@@ -51,16 +51,19 @@ def main():
+ nr_quota_exceeds = scheme.stats.qt_exceeds
+
+ wss_collected.sort()
++ nr_expected_quota_exceeds = 0
+ for wss in wss_collected:
+ if wss > sz_quota:
+ print('quota is not kept: %s > %s' % (wss, sz_quota))
+ print('collected samples are as below')
+ print('\n'.join(['%d' % wss for wss in wss_collected]))
+ exit(1)
++ if wss == sz_quota:
++ nr_expected_quota_exceeds += 1
+
+- if nr_quota_exceeds < len(wss_collected):
+- print('quota is not always exceeded: %d > %d' %
+- (len(wss_collected), nr_quota_exceeds))
++ if nr_quota_exceeds < nr_expected_quota_exceeds:
++ print('quota is exceeded less than expected: %d < %d' %
++ (nr_quota_exceeds, nr_expected_quota_exceeds))
+ exit(1)
+
+ if __name__ == '__main__':
+diff --git a/tools/testing/selftests/damon/damos_quota_goal.py b/tools/testing/selftests/damon/damos_quota_goal.py
+index 18246f3b62f7ee..f76e0412b564cb 100755
+--- a/tools/testing/selftests/damon/damos_quota_goal.py
++++ b/tools/testing/selftests/damon/damos_quota_goal.py
+@@ -63,6 +63,9 @@ def main():
+ if last_effective_bytes != 0 else -1.0))
+
+ if last_effective_bytes == goal.effective_bytes:
++ # effective quota was already at the minimum and cannot be reduced further
++ if expect_increase is False and last_effective_bytes == 1:
++ continue
+ print('effective bytes not changed: %d' % goal.effective_bytes)
+ exit(1)
+
+diff --git a/tools/testing/selftests/mm/hugepage-mremap.c b/tools/testing/selftests/mm/hugepage-mremap.c
+index ada9156cc497b3..c463d1c09c9b4a 100644
+--- a/tools/testing/selftests/mm/hugepage-mremap.c
++++ b/tools/testing/selftests/mm/hugepage-mremap.c
+@@ -15,7 +15,7 @@
+ #define _GNU_SOURCE
+ #include <stdlib.h>
+ #include <stdio.h>
+-#include <asm-generic/unistd.h>
++#include <unistd.h>
+ #include <sys/mman.h>
+ #include <errno.h>
+ #include <fcntl.h> /* Definition of O_* constants */
+diff --git a/tools/testing/selftests/mm/ksm_functional_tests.c b/tools/testing/selftests/mm/ksm_functional_tests.c
+index 66b4e111b5a273..b61803e36d1cf5 100644
+--- a/tools/testing/selftests/mm/ksm_functional_tests.c
++++ b/tools/testing/selftests/mm/ksm_functional_tests.c
+@@ -11,7 +11,7 @@
+ #include <string.h>
+ #include <stdbool.h>
+ #include <stdint.h>
+-#include <asm-generic/unistd.h>
++#include <unistd.h>
+ #include <errno.h>
+ #include <fcntl.h>
+ #include <sys/mman.h>
+@@ -369,6 +369,7 @@ static void test_unmerge_discarded(void)
+ munmap(map, size);
+ }
+
++#ifdef __NR_userfaultfd
+ static void test_unmerge_uffd_wp(void)
+ {
+ struct uffdio_writeprotect uffd_writeprotect;
+@@ -429,6 +430,7 @@ static void test_unmerge_uffd_wp(void)
+ unmap:
+ munmap(map, size);
+ }
++#endif
+
+ /* Verify that KSM can be enabled / queried with prctl. */
+ static void test_prctl(void)
+@@ -684,7 +686,9 @@ int main(int argc, char **argv)
+ exit(test_child_ksm());
+ }
+
++#ifdef __NR_userfaultfd
+ tests++;
++#endif
+
+ ksft_print_header();
+ ksft_set_plan(tests);
+@@ -696,7 +700,9 @@ int main(int argc, char **argv)
+ test_unmerge();
+ test_unmerge_zero_pages();
+ test_unmerge_discarded();
++#ifdef __NR_userfaultfd
+ test_unmerge_uffd_wp();
++#endif
+
+ test_prot_none();
+
+diff --git a/tools/testing/selftests/mm/memfd_secret.c b/tools/testing/selftests/mm/memfd_secret.c
+index 74c911aa3aea9f..9a0597310a7651 100644
+--- a/tools/testing/selftests/mm/memfd_secret.c
++++ b/tools/testing/selftests/mm/memfd_secret.c
+@@ -17,7 +17,7 @@
+
+ #include <stdlib.h>
+ #include <string.h>
+-#include <asm-generic/unistd.h>
++#include <unistd.h>
+ #include <errno.h>
+ #include <stdio.h>
+ #include <fcntl.h>
+@@ -28,6 +28,8 @@
+ #define pass(fmt, ...) ksft_test_result_pass(fmt, ##__VA_ARGS__)
+ #define skip(fmt, ...) ksft_test_result_skip(fmt, ##__VA_ARGS__)
+
++#ifdef __NR_memfd_secret
++
+ #define PATTERN 0x55
+
+ static const int prot = PROT_READ | PROT_WRITE;
+@@ -332,3 +334,13 @@ int main(int argc, char *argv[])
+
+ ksft_finished();
+ }
++
++#else /* __NR_memfd_secret */
++
++int main(int argc, char *argv[])
++{
++ printf("skip: skipping memfd_secret test (missing __NR_memfd_secret)\n");
++ return KSFT_SKIP;
++}
++
++#endif /* __NR_memfd_secret */
+diff --git a/tools/testing/selftests/mm/mkdirty.c b/tools/testing/selftests/mm/mkdirty.c
+index 1db134063c38c0..b8a7efe9204ea1 100644
+--- a/tools/testing/selftests/mm/mkdirty.c
++++ b/tools/testing/selftests/mm/mkdirty.c
+@@ -9,7 +9,7 @@
+ */
+ #include <fcntl.h>
+ #include <signal.h>
+-#include <asm-generic/unistd.h>
++#include <unistd.h>
+ #include <string.h>
+ #include <errno.h>
+ #include <stdlib.h>
+@@ -265,6 +265,7 @@ static void test_pte_mapped_thp(void)
+ munmap(mmap_mem, mmap_size);
+ }
+
++#ifdef __NR_userfaultfd
+ static void test_uffdio_copy(void)
+ {
+ struct uffdio_register uffdio_register;
+@@ -321,6 +322,7 @@ static void test_uffdio_copy(void)
+ munmap(dst, pagesize);
+ free(src);
+ }
++#endif /* __NR_userfaultfd */
+
+ int main(void)
+ {
+@@ -333,7 +335,9 @@ int main(void)
+ thpsize / 1024);
+ tests += 3;
+ }
++#ifdef __NR_userfaultfd
+ tests += 1;
++#endif /* __NR_userfaultfd */
+
+ ksft_print_header();
+ ksft_set_plan(tests);
+@@ -363,7 +367,9 @@ int main(void)
+ if (thpsize)
+ test_pte_mapped_thp();
+ /* Placing a fresh page via userfaultfd may set the PTE dirty. */
++#ifdef __NR_userfaultfd
+ test_uffdio_copy();
++#endif /* __NR_userfaultfd */
+
+ err = ksft_get_fail_cnt();
+ if (err)
+diff --git a/tools/testing/selftests/mm/mlock2.h b/tools/testing/selftests/mm/mlock2.h
+index 1e5731bab499a3..4417eaa5cfb78b 100644
+--- a/tools/testing/selftests/mm/mlock2.h
++++ b/tools/testing/selftests/mm/mlock2.h
+@@ -3,7 +3,6 @@
+ #include <errno.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+-#include <asm-generic/unistd.h>
+
+ static int mlock2_(void *start, size_t len, int flags)
+ {
+diff --git a/tools/testing/selftests/mm/protection_keys.c b/tools/testing/selftests/mm/protection_keys.c
+index 4990f7ab4cb729..4fcecfb7b189bb 100644
+--- a/tools/testing/selftests/mm/protection_keys.c
++++ b/tools/testing/selftests/mm/protection_keys.c
+@@ -42,7 +42,7 @@
+ #include <sys/wait.h>
+ #include <sys/stat.h>
+ #include <fcntl.h>
+-#include <asm-generic/unistd.h>
++#include <unistd.h>
+ #include <sys/ptrace.h>
+ #include <setjmp.h>
+
+diff --git a/tools/testing/selftests/mm/uffd-common.c b/tools/testing/selftests/mm/uffd-common.c
+index 717539eddf9875..7ad6ba660c7d6f 100644
+--- a/tools/testing/selftests/mm/uffd-common.c
++++ b/tools/testing/selftests/mm/uffd-common.c
+@@ -673,7 +673,11 @@ int uffd_open_dev(unsigned int flags)
+
+ int uffd_open_sys(unsigned int flags)
+ {
++#ifdef __NR_userfaultfd
+ return syscall(__NR_userfaultfd, flags);
++#else
++ return -1;
++#endif
+ }
+
+ int uffd_open(unsigned int flags)
+diff --git a/tools/testing/selftests/mm/uffd-stress.c b/tools/testing/selftests/mm/uffd-stress.c
+index a4b83280998ab7..944d559ade21f2 100644
+--- a/tools/testing/selftests/mm/uffd-stress.c
++++ b/tools/testing/selftests/mm/uffd-stress.c
+@@ -33,10 +33,11 @@
+ * pthread_mutex_lock will also verify the atomicity of the memory
+ * transfer (UFFDIO_COPY).
+ */
+-#include <asm-generic/unistd.h>
++
+ #include "uffd-common.h"
+
+ uint64_t features;
++#ifdef __NR_userfaultfd
+
+ #define BOUNCE_RANDOM (1<<0)
+ #define BOUNCE_RACINGFAULTS (1<<1)
+@@ -471,3 +472,15 @@ int main(int argc, char **argv)
+ nr_pages, nr_pages_per_cpu);
+ return userfaultfd_stress();
+ }
++
++#else /* __NR_userfaultfd */
++
++#warning "missing __NR_userfaultfd definition"
++
++int main(void)
++{
++ printf("skip: Skipping userfaultfd test (missing __NR_userfaultfd)\n");
++ return KSFT_SKIP;
++}
++
++#endif /* __NR_userfaultfd */
+diff --git a/tools/testing/selftests/mm/uffd-unit-tests.c b/tools/testing/selftests/mm/uffd-unit-tests.c
+index a2e71b1636e7ca..3ddbb0a71b9c12 100644
+--- a/tools/testing/selftests/mm/uffd-unit-tests.c
++++ b/tools/testing/selftests/mm/uffd-unit-tests.c
+@@ -5,11 +5,12 @@
+ * Copyright (C) 2015-2023 Red Hat, Inc.
+ */
+
+-#include <asm-generic/unistd.h>
+ #include "uffd-common.h"
+
+ #include "../../../../mm/gup_test.h"
+
++#ifdef __NR_userfaultfd
++
+ /* The unit test doesn't need a large or random size, make it 32MB for now */
+ #define UFFD_TEST_MEM_SIZE (32UL << 20)
+
+@@ -1558,3 +1559,14 @@ int main(int argc, char *argv[])
+ return ksft_get_fail_cnt() ? KSFT_FAIL : KSFT_PASS;
+ }
+
++#else /* __NR_userfaultfd */
++
++#warning "missing __NR_userfaultfd definition"
++
++int main(void)
++{
++ printf("Skipping %s (missing __NR_userfaultfd)\n", __file__);
++ return KSFT_SKIP;
++}
++
++#endif /* __NR_userfaultfd */
+diff --git a/usr/include/Makefile b/usr/include/Makefile
+index 771e32872b2ab1..58173cfe5ff179 100644
+--- a/usr/include/Makefile
++++ b/usr/include/Makefile
+@@ -10,7 +10,7 @@ UAPI_CFLAGS := -std=c90 -Wall -Werror=implicit-function-declaration
+
+ # In theory, we do not care -m32 or -m64 for header compile tests.
+ # It is here just because CONFIG_CC_CAN_LINK is tested with -m32 or -m64.
+-UAPI_CFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CFLAGS))
++UAPI_CFLAGS += $(filter -m32 -m64 --target=%, $(KBUILD_CPPFLAGS) $(KBUILD_CFLAGS))
+
+ # USERCFLAGS might contain sysroot location for CC.
+ UAPI_CFLAGS += $(USERCFLAGS)
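The usx2y hunks above move the nrpacks parameter out of usbusx2yaudio.c into the core module file, export it through the header, and reject out-of-range values at probe time before any URB setup happens. The sketch below shows the same module-parameter pattern in isolation; it is a hypothetical demo module under assumed names, not the actual usx2y driver.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/printk.h>

#define DEMO_NRPACKS_DEFAULT	4
#define DEMO_NRPACKS_MAX	1024

static int nrpacks = DEMO_NRPACKS_DEFAULT;
module_param(nrpacks, int, 0444);	/* 0444: visible but read-only in sysfs */
MODULE_PARM_DESC(nrpacks, "Number of packets per URB.");

static int __init demo_init(void)
{
	/* Validate before doing any setup, mirroring the probe-time
	 * check added to snd_usx2y_probe() above. */
	if (nrpacks < 0 || nrpacks > DEMO_NRPACKS_MAX)
		return -EINVAL;

	pr_info("demo: nrpacks=%d\n", nrpacks);
	return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("module_param range-check sketch");

With this shape, loading the module with nrpacks=2048 fails cleanly with -EINVAL instead of letting an oversized packet count reach URB allocation.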
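The BPF selftest hunks above funnel every thread-id lookup through one sys_gettid() macro in bpf_util.h. The rationale is in the added comment: glibc only gained a gettid() wrapper in version 2.30, so calling the raw syscall is the portable choice. A minimal standalone sketch of the pattern (ordinary userspace C, nothing selftest-specific assumed):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Same fallback as bpf_util.h: always go through syscall(2) so the
 * code builds on libcs that lack a gettid() wrapper. */
#ifndef sys_gettid
#define sys_gettid() syscall(SYS_gettid)
#endif

int main(void)
{
	/* In a single-threaded process the tid equals the pid. */
	printf("pid=%d tid=%ld\n", getpid(), sys_gettid());
	return 0;
}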
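The mm selftest hunks switch from <asm-generic/unistd.h> to <unistd.h> and gate every userfaultfd or memfd_secret dependency behind #ifdef __NR_userfaultfd / #ifdef __NR_memfd_secret, so a build against libc headers that predate those syscall numbers compiles and reports a skip rather than failing. A minimal sketch of that guard follows; the only assumption beyond standard headers is the kselftest convention that exit code 4 means "skipped".

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#define KSFT_SKIP 4	/* kselftest "test skipped" exit code */

#ifdef __NR_userfaultfd
int main(void)
{
	/* The number being defined only means the headers know the
	 * syscall; the call can still fail at runtime (for example
	 * with EPERM when unprivileged userfaultfd is disabled). */
	long fd = syscall(__NR_userfaultfd, 0);

	printf("userfaultfd() returned %ld\n", fd);
	if (fd >= 0)
		close((int)fd);
	return 0;
}
#else
int main(void)
{
	printf("skip: missing __NR_userfaultfd definition\n");
	return KSFT_SKIP;
}
#endif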
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-03-20 22:39 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-03-20 22:39 UTC (permalink / raw
To: gentoo-commits
commit: 18ea66dfadb2f6fded8b475ebf3396a1e7cb622d
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Mar 20 22:39:25 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Mar 20 22:39:25 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=18ea66df
wifi: mt76: mt7921: fix kernel panic due to null pointer dereference
Bug: https://bugs.gentoo.org/950243
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 34 +++---------
2400_wifi-mt76-mt7921-null-ptr-deref-fix.patch | 74 ++++++++++++++++++++++++++
2 files changed, 81 insertions(+), 27 deletions(-)
diff --git a/0000_README b/0000_README
index a2f75d4a..c53357bf 100644
--- a/0000_README
+++ b/0000_README
@@ -95,30 +95,6 @@ Patch: 1012_linux-6.12.13.patch
From: https://www.kernel.org
Desc: Linux 6.12.13
-Patch: 1013_linux-6.12.14.patch
-From: https://www.kernel.org
-Desc: Linux 6.12.14
-
-Patch: 1014_linux-6.12.15.patch
-From: https://www.kernel.org
-Desc: Linux 6.12.15
-
-Patch: 1015_linux-6.12.16.patch
-From: https://www.kernel.org
-Desc: Linux 6.12.16
-
-Patch: 1016_linux-6.12.17.patch
-From: https://www.kernel.org
-Desc: Linux 6.12.17
-
-Patch: 1017_linux-6.12.18.patch
-From: https://www.kernel.org
-Desc: Linux 6.12.18
-
-Patch: 1018_linux-6.12.19.patch
-From: https://www.kernel.org
-Desc: Linux 6.12.19
-
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
@@ -139,6 +115,10 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
+Patch: 2400_wifi-mt76-mt7921-null-ptr-deref-fix.patch
+From: https://github.com/nbd168/wireless/commit/adc3fd2a2277b7cc0b61692463771bf9bd298036
+Desc: wifi: mt76: mt7921: fix kernel panic due to null pointer dereference
+
Patch: 2901_tools-lib-subcmd-compile-fix.patch
From: https://lore.kernel.org/all/20240731085217.94928-1-michael.weiss@aisec.fraunhofer.de/
Desc: tools lib subcmd: Fixed uninitialized use of variable in parse-options
@@ -151,9 +131,9 @@ Patch: 2920_sign-file-patch-for-libressl.patch
From: https://bugs.gentoo.org/717166
Desc: sign-file: full functionality with modern LibreSSL
-Patch: 2980_kbuild-gcc15-gnu23-to-gnu11-fix.patch
-From: https://github.com/hhoffstaette/kernel-patches/
-Desc: gcc 15 kbuild fixes
+Patch: 2980_GCC15-gnu23-to-gnu11-fix.patch
+From: https://lore.kernel.org/linux-kbuild/20241119044724.GA2246422@thelio-3990X/
+Desc: GCC 15 defaults to -std=gnu23. Hack in CSTD_FLAG to pass -std=gnu11 everywhere.
Patch: 2990_libbpf-v2-workaround-Wmaybe-uninitialized-false-pos.patch
From: https://lore.kernel.org/bpf/
diff --git a/2400_wifi-mt76-mt7921-null-ptr-deref-fix.patch b/2400_wifi-mt76-mt7921-null-ptr-deref-fix.patch
new file mode 100644
index 00000000..1cc1dbf3
--- /dev/null
+++ b/2400_wifi-mt76-mt7921-null-ptr-deref-fix.patch
@@ -0,0 +1,74 @@
+From adc3fd2a2277b7cc0b61692463771bf9bd298036 Mon Sep 17 00:00:00 2001
+From: Ming Yen Hsieh <mingyen.hsieh@mediatek.com>
+Date: Tue, 18 Feb 2025 11:33:42 +0800
+Subject: [PATCH] wifi: mt76: mt7921: fix kernel panic due to null pointer
+ dereference
+
+Address a kernel panic caused by a null pointer dereference in the
+`mt792x_rx_get_wcid` function. The issue arises because the `deflink` structure
+is not properly initialized with the `sta` context. This patch ensures that the
+`deflink` structure is correctly linked to the `sta` context, preventing the
+null pointer dereference.
+
+ BUG: kernel NULL pointer dereference, address: 0000000000000400
+ #PF: supervisor read access in kernel mode
+ #PF: error_code(0x0000) - not-present page
+ PGD 0 P4D 0
+ Oops: Oops: 0000 [#1] PREEMPT SMP NOPTI
+ CPU: 0 UID: 0 PID: 470 Comm: mt76-usb-rx phy Not tainted 6.12.13-gentoo-dist #1
+ Hardware name: /AMD HUDSON-M1, BIOS 4.6.4 11/15/2011
+ RIP: 0010:mt792x_rx_get_wcid+0x48/0x140 [mt792x_lib]
+ RSP: 0018:ffffa147c055fd98 EFLAGS: 00010202
+ RAX: 0000000000000000 RBX: ffff8e9ecb652000 RCX: 0000000000000000
+ RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff8e9ecb652000
+ RBP: 0000000000000685 R08: ffff8e9ec6570000 R09: 0000000000000000
+ R10: ffff8e9ecd2ca000 R11: ffff8e9f22a217c0 R12: 0000000038010119
+ R13: 0000000080843801 R14: ffff8e9ec6570000 R15: ffff8e9ecb652000
+ FS: 0000000000000000(0000) GS:ffff8e9f22a00000(0000) knlGS:0000000000000000
+ CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+ CR2: 0000000000000400 CR3: 000000000d2ea000 CR4: 00000000000006f0
+ Call Trace:
+ <TASK>
+ ? __die_body.cold+0x19/0x27
+ ? page_fault_oops+0x15a/0x2f0
+ ? search_module_extables+0x19/0x60
+ ? search_bpf_extables+0x5f/0x80
+ ? exc_page_fault+0x7e/0x180
+ ? asm_exc_page_fault+0x26/0x30
+ ? mt792x_rx_get_wcid+0x48/0x140 [mt792x_lib]
+ mt7921_queue_rx_skb+0x1c6/0xaa0 [mt7921_common]
+ mt76u_alloc_queues+0x784/0x810 [mt76_usb]
+ ? __pfx___mt76_worker_fn+0x10/0x10 [mt76]
+ __mt76_worker_fn+0x4f/0x80 [mt76]
+ kthread+0xd2/0x100
+ ? __pfx_kthread+0x10/0x10
+ ret_from_fork+0x34/0x50
+ ? __pfx_kthread+0x10/0x10
+ ret_from_fork_asm+0x1a/0x30
+ </TASK>
+ ---[ end trace 0000000000000000 ]---
+
+Reported-by: Nick Morrow <usbwifi2024@gmail.com>
+Closes: https://github.com/morrownr/USB-WiFi/issues/577
+Cc: stable@vger.kernel.org
+Fixes: 90c10286b176 ("wifi: mt76: mt7925: Update mt792x_rx_get_wcid for per-link STA")
+Signed-off-by: Ming Yen Hsieh <mingyen.hsieh@mediatek.com>
+Tested-by: Salah Coronya <salah.coronya@gmail.com>
+Link: https://patch.msgid.link/20250218033343.1999648-1-mingyen.hsieh@mediatek.com
+Signed-off-by: Felix Fietkau <nbd@nbd.name>
+---
+ drivers/net/wireless/mediatek/mt76/mt7921/main.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index 13e58c328aff..78b77a54d195 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -811,6 +811,7 @@ int mt7921_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ msta->deflink.wcid.phy_idx = mvif->bss_conf.mt76.band_idx;
+ msta->deflink.wcid.tx_info |= MT_WCID_TX_INFO_SET;
+ msta->deflink.last_txs = jiffies;
++ msta->deflink.sta = msta;
+
+ ret = mt76_connac_pm_wake(&dev->mphy, &dev->pm);
+ if (ret)
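The one-line fix above is a classic back-pointer omission: mt7921_mac_sta_add() fills in the deflink members but never points deflink.sta back at the containing station, so the rx path later dereferences a NULL sta. The toy below models the bug class only; the struct names are hypothetical stand-ins, not the real mt76 types.

#include <stdio.h>

struct sta;

struct link {
	struct sta *sta;	/* back-pointer; must be set when the
				 * station is added, or it stays NULL */
	unsigned long last_txs;
};

struct sta {
	struct link deflink;	/* per-link state embedded in the station */
};

static void sta_add(struct sta *msta)
{
	msta->deflink.last_txs = 0;
	msta->deflink.sta = msta;	/* the equivalent of the fix above */
}

int main(void)
{
	struct sta s = { 0 };

	sta_add(&s);
	printf("back-pointer set: %s\n", s.deflink.sta == &s ? "yes" : "no");
	return 0;
}

The invariant worth keeping in mind: any embedded per-link structure that code can reach independently of its container needs its back-pointer established at the same moment the container is initialized.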
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-03-23 11:31 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-03-23 11:31 UTC (permalink / raw
To: gentoo-commits
commit: 977bc8af5b6f751986ac594ef727aa6423f3dc58
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Mar 23 11:30:56 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Mar 23 11:30:56 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=977bc8af
Linux patch 6.12.20
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 28 +
1019_linux-6.12.20.patch | 10202 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 10230 insertions(+)
diff --git a/0000_README b/0000_README
index c53357bf..ecb0495f 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,34 @@ Patch: 1012_linux-6.12.13.patch
From: https://www.kernel.org
Desc: Linux 6.12.13
+Patch: 1013_linux-6.12.14.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.14
+
+Patch: 1014_linux-6.12.15.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.15
+
+Patch: 1015_linux-6.12.16.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.16
+
+Patch: 1016_linux-6.12.17.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.17
+
+Patch: 1017_linux-6.12.18.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.18
+
+Patch: 1018_linux-6.12.19.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.19
+
+Patch: 1019_linux-6.12.20.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.20
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1019_linux-6.12.20.patch b/1019_linux-6.12.20.patch
new file mode 100644
index 00000000..aad096ee
--- /dev/null
+++ b/1019_linux-6.12.20.patch
@@ -0,0 +1,10202 @@
+diff --git a/Documentation/rust/quick-start.rst b/Documentation/rust/quick-start.rst
+index 2d107982c87bbe..ded0d0836aee0d 100644
+--- a/Documentation/rust/quick-start.rst
++++ b/Documentation/rust/quick-start.rst
+@@ -128,7 +128,7 @@ Rust standard library source
+ ****************************
+
+ The Rust standard library source is required because the build system will
+-cross-compile ``core`` and ``alloc``.
++cross-compile ``core``.
+
+ If ``rustup`` is being used, run::
+
+diff --git a/Makefile b/Makefile
+index 343c9f25433c7c..ca000bd227be66 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 19
++SUBLEVEL = 20
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/alpha/include/asm/elf.h b/arch/alpha/include/asm/elf.h
+index 4d7c46f50382e3..50c82187e60ec9 100644
+--- a/arch/alpha/include/asm/elf.h
++++ b/arch/alpha/include/asm/elf.h
+@@ -74,7 +74,7 @@ typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
+ /*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+-#define elf_check_arch(x) ((x)->e_machine == EM_ALPHA)
++#define elf_check_arch(x) (((x)->e_machine == EM_ALPHA) && !((x)->e_flags & EF_ALPHA_32BIT))
+
+ /*
+ * These are used to set parameters in the core dumps.
+@@ -137,10 +137,6 @@ extern int dump_elf_task(elf_greg_t *dest, struct task_struct *task);
+ : amask (AMASK_CIX) ? "ev6" : "ev67"); \
+ })
+
+-#define SET_PERSONALITY(EX) \
+- set_personality(((EX).e_flags & EF_ALPHA_32BIT) \
+- ? PER_LINUX_32BIT : PER_LINUX)
+-
+ extern int alpha_l1i_cacheshape;
+ extern int alpha_l1d_cacheshape;
+ extern int alpha_l2_cacheshape;
+diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
+index 635f0a5f5bbdeb..02e8817a89212c 100644
+--- a/arch/alpha/include/asm/pgtable.h
++++ b/arch/alpha/include/asm/pgtable.h
+@@ -360,7 +360,7 @@ static inline pte_t pte_swp_clear_exclusive(pte_t pte)
+
+ extern void paging_init(void);
+
+-/* We have our own get_unmapped_area to cope with ADDR_LIMIT_32BIT. */
++/* We have our own get_unmapped_area */
+ #define HAVE_ARCH_UNMAPPED_AREA
+
+ #endif /* _ALPHA_PGTABLE_H */
+diff --git a/arch/alpha/include/asm/processor.h b/arch/alpha/include/asm/processor.h
+index 55bb1c09fd39d5..5dce5518a21119 100644
+--- a/arch/alpha/include/asm/processor.h
++++ b/arch/alpha/include/asm/processor.h
+@@ -8,23 +8,19 @@
+ #ifndef __ASM_ALPHA_PROCESSOR_H
+ #define __ASM_ALPHA_PROCESSOR_H
+
+-#include <linux/personality.h> /* for ADDR_LIMIT_32BIT */
+-
+ /*
+ * We have a 42-bit user address space: 4TB user VM...
+ */
+ #define TASK_SIZE (0x40000000000UL)
+
+-#define STACK_TOP \
+- (current->personality & ADDR_LIMIT_32BIT ? 0x80000000 : 0x00120000000UL)
++#define STACK_TOP (0x00120000000UL)
+
+ #define STACK_TOP_MAX 0x00120000000UL
+
+ /* This decides where the kernel will search for a free chunk of vm
+ * space during mmap's.
+ */
+-#define TASK_UNMAPPED_BASE \
+- ((current->personality & ADDR_LIMIT_32BIT) ? 0x40000000 : TASK_SIZE / 2)
++#define TASK_UNMAPPED_BASE (TASK_SIZE / 2)
+
+ /* This is dead. Everything has been moved to thread_info. */
+ struct thread_struct { };
+diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
+index c0424de9e7cda2..077a1407be6d73 100644
+--- a/arch/alpha/kernel/osf_sys.c
++++ b/arch/alpha/kernel/osf_sys.c
+@@ -1211,8 +1211,7 @@ SYSCALL_DEFINE1(old_adjtimex, struct timex32 __user *, txc_p)
+ return ret;
+ }
+
+-/* Get an address range which is currently unmapped. Similar to the
+- generic version except that we know how to honor ADDR_LIMIT_32BIT. */
++/* Get an address range which is currently unmapped. */
+
+ static unsigned long
+ arch_get_unmapped_area_1(unsigned long addr, unsigned long len,
+@@ -1231,13 +1230,7 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
+ unsigned long len, unsigned long pgoff,
+ unsigned long flags, vm_flags_t vm_flags)
+ {
+- unsigned long limit;
+-
+- /* "32 bit" actually means 31 bit, since pointers sign extend. */
+- if (current->personality & ADDR_LIMIT_32BIT)
+- limit = 0x80000000;
+- else
+- limit = TASK_SIZE;
++ unsigned long limit = TASK_SIZE;
+
+ if (len > limit)
+ return -ENOMEM;
+diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
+index 95fbc8c0560798..9edbd871c31bf0 100644
+--- a/arch/arm64/include/asm/tlbflush.h
++++ b/arch/arm64/include/asm/tlbflush.h
+@@ -396,33 +396,35 @@ static inline void arch_tlbbatch_flush(struct arch_tlbflush_unmap_batch *batch)
+ #define __flush_tlb_range_op(op, start, pages, stride, \
+ asid, tlb_level, tlbi_user, lpa2) \
+ do { \
++ typeof(start) __flush_start = start; \
++ typeof(pages) __flush_pages = pages; \
+ int num = 0; \
+ int scale = 3; \
+ int shift = lpa2 ? 16 : PAGE_SHIFT; \
+ unsigned long addr; \
+ \
+- while (pages > 0) { \
++ while (__flush_pages > 0) { \
+ if (!system_supports_tlb_range() || \
+- pages == 1 || \
+- (lpa2 && start != ALIGN(start, SZ_64K))) { \
+- addr = __TLBI_VADDR(start, asid); \
++ __flush_pages == 1 || \
++ (lpa2 && __flush_start != ALIGN(__flush_start, SZ_64K))) { \
++ addr = __TLBI_VADDR(__flush_start, asid); \
+ __tlbi_level(op, addr, tlb_level); \
+ if (tlbi_user) \
+ __tlbi_user_level(op, addr, tlb_level); \
+- start += stride; \
+- pages -= stride >> PAGE_SHIFT; \
++ __flush_start += stride; \
++ __flush_pages -= stride >> PAGE_SHIFT; \
+ continue; \
+ } \
+ \
+- num = __TLBI_RANGE_NUM(pages, scale); \
++ num = __TLBI_RANGE_NUM(__flush_pages, scale); \
+ if (num >= 0) { \
+- addr = __TLBI_VADDR_RANGE(start >> shift, asid, \
++ addr = __TLBI_VADDR_RANGE(__flush_start >> shift, asid, \
+ scale, num, tlb_level); \
+ __tlbi(r##op, addr); \
+ if (tlbi_user) \
+ __tlbi_user(r##op, addr); \
+- start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
+- pages -= __TLBI_RANGE_PAGES(num, scale); \
++ __flush_start += __TLBI_RANGE_PAGES(num, scale) << PAGE_SHIFT; \
++ __flush_pages -= __TLBI_RANGE_PAGES(num, scale);\
+ } \
+ scale--; \
+ } \
+diff --git a/arch/arm64/kernel/topology.c b/arch/arm64/kernel/topology.c
+index 1a2c72f3e7f80e..cb180684d10d5b 100644
+--- a/arch/arm64/kernel/topology.c
++++ b/arch/arm64/kernel/topology.c
+@@ -194,12 +194,19 @@ static void amu_fie_setup(const struct cpumask *cpus)
+ int cpu;
+
+ /* We are already set since the last insmod of cpufreq driver */
+- if (unlikely(cpumask_subset(cpus, amu_fie_cpus)))
++ if (cpumask_available(amu_fie_cpus) &&
++ unlikely(cpumask_subset(cpus, amu_fie_cpus)))
+ return;
+
+- for_each_cpu(cpu, cpus) {
++ for_each_cpu(cpu, cpus)
+ if (!freq_counters_valid(cpu))
+ return;
++
++ if (!cpumask_available(amu_fie_cpus) &&
++ !zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL)) {
++ WARN_ONCE(1, "Failed to allocate FIE cpumask for CPUs[%*pbl]\n",
++ cpumask_pr_args(cpus));
++ return;
+ }
+
+ cpumask_or(amu_fie_cpus, amu_fie_cpus, cpus);
+@@ -237,17 +244,8 @@ static struct notifier_block init_amu_fie_notifier = {
+
+ static int __init init_amu_fie(void)
+ {
+- int ret;
+-
+- if (!zalloc_cpumask_var(&amu_fie_cpus, GFP_KERNEL))
+- return -ENOMEM;
+-
+- ret = cpufreq_register_notifier(&init_amu_fie_notifier,
++ return cpufreq_register_notifier(&init_amu_fie_notifier,
+ CPUFREQ_POLICY_NOTIFIER);
+- if (ret)
+- free_cpumask_var(amu_fie_cpus);
+-
+- return ret;
+ }
+ core_initcall(init_amu_fie);
+
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index e55b02fbddc8f3..e59c628c93f20d 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -1176,8 +1176,11 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
+ struct vmem_altmap *altmap)
+ {
+ WARN_ON((start < VMEMMAP_START) || (end > VMEMMAP_END));
++ /* [start, end] should be within one section */
++ WARN_ON_ONCE(end - start > PAGES_PER_SECTION * sizeof(struct page));
+
+- if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES))
++ if (!IS_ENABLED(CONFIG_ARM64_4K_PAGES) ||
++ (end - start < PAGES_PER_SECTION * sizeof(struct page)))
+ return vmemmap_populate_basepages(start, end, node, altmap);
+ else
+ return vmemmap_populate_hugepages(start, end, node, altmap);
+diff --git a/arch/loongarch/kvm/switch.S b/arch/loongarch/kvm/switch.S
+index 0c292f81849277..1be185e9480723 100644
+--- a/arch/loongarch/kvm/switch.S
++++ b/arch/loongarch/kvm/switch.S
+@@ -85,7 +85,7 @@
+ * Guest CRMD comes from separate GCSR_CRMD register
+ */
+ ori t0, zero, CSR_PRMD_PIE
+- csrxchg t0, t0, LOONGARCH_CSR_PRMD
++ csrwr t0, LOONGARCH_CSR_PRMD
+
+ /* Set PVM bit to setup ertn to guest context */
+ ori t0, zero, CSR_GSTAT_PVM
+diff --git a/arch/loongarch/mm/pageattr.c b/arch/loongarch/mm/pageattr.c
+index ffd8d76021d470..aca4e86d2d888b 100644
+--- a/arch/loongarch/mm/pageattr.c
++++ b/arch/loongarch/mm/pageattr.c
+@@ -3,6 +3,7 @@
+ * Copyright (C) 2024 Loongson Technology Corporation Limited
+ */
+
++#include <linux/memblock.h>
+ #include <linux/pagewalk.h>
+ #include <linux/pgtable.h>
+ #include <asm/set_memory.h>
+@@ -167,7 +168,7 @@ bool kernel_page_present(struct page *page)
+ unsigned long addr = (unsigned long)page_address(page);
+
+ if (addr < vm_map_base)
+- return true;
++ return memblock_is_memory(__pa(addr));
+
+ pgd = pgd_offset_k(addr);
+ if (pgd_none(pgdp_get(pgd)))
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 9ec3170c18f925..3a68b3e0b7a358 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -3949,6 +3949,85 @@ static inline bool intel_pmu_has_cap(struct perf_event *event, int idx)
+ return test_bit(idx, (unsigned long *)&intel_cap->capabilities);
+ }
+
++static u64 intel_pmu_freq_start_period(struct perf_event *event)
++{
++ int type = event->attr.type;
++ u64 config, factor;
++ s64 start;
++
++ /*
++ * The 127 is the lowest possible recommended SAV (sample after value)
++ * for a 4000 freq (default freq), according to the event list JSON file.
++ * Also, assume the workload is idle 50% time.
++ */
++ factor = 64 * 4000;
++ if (type != PERF_TYPE_HARDWARE && type != PERF_TYPE_HW_CACHE)
++ goto end;
++
++ /*
++ * The estimation of the start period in the freq mode is
++ * based on the below assumption.
++ *
++ * For a cycles or an instructions event, 1GHZ of the
++ * underlying platform, 1 IPC. The workload is idle 50% time.
++ * The start period = 1,000,000,000 * 1 / freq / 2.
++ * = 500,000,000 / freq
++ *
++ * Usually, the branch-related events occur less than the
++ * instructions event. According to the Intel event list JSON
++ * file, the SAV (sample after value) of a branch-related event
++ * is usually 1/4 of an instruction event.
++ * The start period of branch-related events = 125,000,000 / freq.
++ *
++ * The cache-related events occurs even less. The SAV is usually
++ * 1/20 of an instruction event.
++ * The start period of cache-related events = 25,000,000 / freq.
++ */
++ config = event->attr.config & PERF_HW_EVENT_MASK;
++ if (type == PERF_TYPE_HARDWARE) {
++ switch (config) {
++ case PERF_COUNT_HW_CPU_CYCLES:
++ case PERF_COUNT_HW_INSTRUCTIONS:
++ case PERF_COUNT_HW_BUS_CYCLES:
++ case PERF_COUNT_HW_STALLED_CYCLES_FRONTEND:
++ case PERF_COUNT_HW_STALLED_CYCLES_BACKEND:
++ case PERF_COUNT_HW_REF_CPU_CYCLES:
++ factor = 500000000;
++ break;
++ case PERF_COUNT_HW_BRANCH_INSTRUCTIONS:
++ case PERF_COUNT_HW_BRANCH_MISSES:
++ factor = 125000000;
++ break;
++ case PERF_COUNT_HW_CACHE_REFERENCES:
++ case PERF_COUNT_HW_CACHE_MISSES:
++ factor = 25000000;
++ break;
++ default:
++ goto end;
++ }
++ }
++
++ if (type == PERF_TYPE_HW_CACHE)
++ factor = 25000000;
++end:
++ /*
++ * Usually, a prime or a number with less factors (close to prime)
++ * is chosen as an SAV, which makes it less likely that the sampling
++ * period synchronizes with some periodic event in the workload.
++ * Minus 1 to make it at least avoiding values near power of twos
++ * for the default freq.
++ */
++ start = DIV_ROUND_UP_ULL(factor, event->attr.sample_freq) - 1;
++
++ if (start > x86_pmu.max_period)
++ start = x86_pmu.max_period;
++
++ if (x86_pmu.limit_period)
++ x86_pmu.limit_period(event, &start);
++
++ return start;
++}
++
+ static int intel_pmu_hw_config(struct perf_event *event)
+ {
+ int ret = x86_pmu_hw_config(event);
+@@ -3960,6 +4039,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
+ if (ret)
+ return ret;
+
++ if (event->attr.freq && event->attr.sample_freq) {
++ event->hw.sample_period = intel_pmu_freq_start_period(event);
++ event->hw.last_period = event->hw.sample_period;
++ local64_set(&event->hw.period_left, event->hw.sample_period);
++ }
++
+ if (event->attr.precise_ip) {
+ if ((event->attr.config & INTEL_ARCH_EVENT_MASK) == INTEL_FIXED_VLBR_EVENT)
+ return -EINVAL;
+diff --git a/arch/x86/events/rapl.c b/arch/x86/events/rapl.c
+index a481a939862e54..fc06b216aacdb7 100644
+--- a/arch/x86/events/rapl.c
++++ b/arch/x86/events/rapl.c
+@@ -846,6 +846,7 @@ static const struct x86_cpu_id rapl_model_match[] __initconst = {
+ X86_MATCH_VFM(INTEL_METEORLAKE_L, &model_skl),
+ X86_MATCH_VFM(INTEL_ARROWLAKE_H, &model_skl),
+ X86_MATCH_VFM(INTEL_ARROWLAKE, &model_skl),
++ X86_MATCH_VFM(INTEL_ARROWLAKE_U, &model_skl),
+ X86_MATCH_VFM(INTEL_LUNARLAKE_M, &model_skl),
+ {},
+ };
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index def6a2854a4b7c..07fc145f353103 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -1075,7 +1075,7 @@ static enum ucode_state load_microcode_amd(u8 family, const u8 *data, size_t siz
+ if (ret != UCODE_OK)
+ return ret;
+
+- for_each_node(nid) {
++ for_each_node_with_cpus(nid) {
+ cpu = cpumask_first(cpumask_of_node(nid));
+ c = &cpu_data(cpu);
+
+diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c
+index 00189cdeb775f0..cb3f900c46fcc1 100644
+--- a/arch/x86/kernel/cpu/vmware.c
++++ b/arch/x86/kernel/cpu/vmware.c
+@@ -26,6 +26,7 @@
+ #include <linux/export.h>
+ #include <linux/clocksource.h>
+ #include <linux/cpu.h>
++#include <linux/efi.h>
+ #include <linux/reboot.h>
+ #include <linux/static_call.h>
+ #include <asm/div64.h>
+@@ -429,6 +430,9 @@ static void __init vmware_platform_setup(void)
+ pr_warn("Failed to get TSC freq from the hypervisor\n");
+ }
+
++ if (cc_platform_has(CC_ATTR_GUEST_SEV_SNP) && !efi_enabled(EFI_BOOT))
++ x86_init.mpparse.find_mptable = mpparse_find_mptable;
++
+ vmware_paravirt_ops_setup();
+
+ #ifdef CONFIG_X86_IO_APIC
+diff --git a/arch/x86/kernel/devicetree.c b/arch/x86/kernel/devicetree.c
+index 59d23cdf4ed0fa..dd8748c45529a8 100644
+--- a/arch/x86/kernel/devicetree.c
++++ b/arch/x86/kernel/devicetree.c
+@@ -2,6 +2,7 @@
+ /*
+ * Architecture specific OF callbacks.
+ */
++#include <linux/acpi.h>
+ #include <linux/export.h>
+ #include <linux/io.h>
+ #include <linux/interrupt.h>
+@@ -313,6 +314,6 @@ void __init x86_flattree_get_config(void)
+ if (initial_dtb)
+ early_memunmap(dt, map_len);
+ #endif
+- if (of_have_populated_dt())
++ if (acpi_disabled && of_have_populated_dt())
+ x86_init.mpparse.parse_smp_cfg = x86_dtb_parse_smp_config;
+ }
+diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
+index 385e3a5fc30458..feca4f20b06aaa 100644
+--- a/arch/x86/kernel/irq.c
++++ b/arch/x86/kernel/irq.c
+@@ -25,8 +25,10 @@
+ #include <asm/posted_intr.h>
+ #include <asm/irq_remapping.h>
+
++#if defined(CONFIG_X86_LOCAL_APIC) || defined(CONFIG_X86_THERMAL_VECTOR)
+ #define CREATE_TRACE_POINTS
+ #include <asm/trace/irq_vectors.h>
++#endif
+
+ DEFINE_PER_CPU_SHARED_ALIGNED(irq_cpustat_t, irq_stat);
+ EXPORT_PER_CPU_SYMBOL(irq_stat);
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 19c96278ba755d..9242c0649adf1b 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -7589,7 +7589,7 @@ static void kvm_mmu_start_lpage_recovery(struct once *once)
+ kvm_nx_huge_page_recovery_worker_kill,
+ kvm, "kvm-nx-lpage-recovery");
+
+- if (!nx_thread)
++ if (IS_ERR(nx_thread))
+ return;
+
+ vhost_task_start(nx_thread);
+diff --git a/block/bio.c b/block/bio.c
+index ac4d77c889322d..43d4ae26f47587 100644
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -77,7 +77,7 @@ struct bio_slab {
+ struct kmem_cache *slab;
+ unsigned int slab_ref;
+ unsigned int slab_size;
+- char name[8];
++ char name[12];
+ };
+ static DEFINE_MUTEX(bio_slab_lock);
+ static DEFINE_XARRAY(bio_slabs);
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index 90aaec923889cf..b4cd14e7fa76cc 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -563,6 +563,12 @@ static const struct dmi_system_id irq1_edge_low_force_override[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "RP-15"),
+ },
+ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Eluktronics Inc."),
++ DMI_MATCH(DMI_BOARD_NAME, "MECH-17"),
++ },
++ },
+ {
+ /* TongFang GM6XGxX/TUXEDO Stellaris 16 Gen5 AMD */
+ .matches = {
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index 2f0431e42c494d..c479348ce8ff69 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -1541,8 +1541,8 @@ static int null_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
+ cmd = blk_mq_rq_to_pdu(req);
+ cmd->error = null_process_cmd(cmd, req_op(req), blk_rq_pos(req),
+ blk_rq_sectors(req));
+- if (!blk_mq_add_to_batch(req, iob, (__force int) cmd->error,
+- blk_mq_end_request_batch))
++ if (!blk_mq_add_to_batch(req, iob, cmd->error != BLK_STS_OK,
++ blk_mq_end_request_batch))
+ blk_mq_end_request(req, cmd->error);
+ nr++;
+ }
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 0e50b65e1dbf5a..44a6937a4b65cc 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -1210,11 +1210,12 @@ static int virtblk_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
+
+ while ((vbr = virtqueue_get_buf(vq->vq, &len)) != NULL) {
+ struct request *req = blk_mq_rq_from_pdu(vbr);
++ u8 status = virtblk_vbr_status(vbr);
+
+ found++;
+ if (!blk_mq_complete_request_remote(req) &&
+- !blk_mq_add_to_batch(req, iob, virtblk_vbr_status(vbr),
+- virtblk_complete_batch))
++ !blk_mq_add_to_batch(req, iob, status != VIRTIO_BLK_S_OK,
++ virtblk_complete_batch))
+ virtblk_request_done(req);
+ }
+
+diff --git a/drivers/clk/samsung/clk-gs101.c b/drivers/clk/samsung/clk-gs101.c
+index 85098c61c15e6f..4d4363bc8b28db 100644
+--- a/drivers/clk/samsung/clk-gs101.c
++++ b/drivers/clk/samsung/clk-gs101.c
+@@ -382,17 +382,9 @@ static const unsigned long cmu_top_clk_regs[] __initconst = {
+ EARLY_WAKEUP_DPU_DEST,
+ EARLY_WAKEUP_CSIS_DEST,
+ EARLY_WAKEUP_SW_TRIG_APM,
+- EARLY_WAKEUP_SW_TRIG_APM_SET,
+- EARLY_WAKEUP_SW_TRIG_APM_CLEAR,
+ EARLY_WAKEUP_SW_TRIG_CLUSTER0,
+- EARLY_WAKEUP_SW_TRIG_CLUSTER0_SET,
+- EARLY_WAKEUP_SW_TRIG_CLUSTER0_CLEAR,
+ EARLY_WAKEUP_SW_TRIG_DPU,
+- EARLY_WAKEUP_SW_TRIG_DPU_SET,
+- EARLY_WAKEUP_SW_TRIG_DPU_CLEAR,
+ EARLY_WAKEUP_SW_TRIG_CSIS,
+- EARLY_WAKEUP_SW_TRIG_CSIS_SET,
+- EARLY_WAKEUP_SW_TRIG_CSIS_CLEAR,
+ CLK_CON_MUX_MUX_CLKCMU_BO_BUS,
+ CLK_CON_MUX_MUX_CLKCMU_BUS0_BUS,
+ CLK_CON_MUX_MUX_CLKCMU_BUS1_BUS,
+diff --git a/drivers/clk/samsung/clk-pll.c b/drivers/clk/samsung/clk-pll.c
+index cca3e630922c14..68a72f5fd9a5a6 100644
+--- a/drivers/clk/samsung/clk-pll.c
++++ b/drivers/clk/samsung/clk-pll.c
+@@ -206,6 +206,7 @@ static const struct clk_ops samsung_pll3000_clk_ops = {
+ */
+ /* Maximum lock time can be 270 * PDIV cycles */
+ #define PLL35XX_LOCK_FACTOR (270)
++#define PLL142XX_LOCK_FACTOR (150)
+
+ #define PLL35XX_MDIV_MASK (0x3FF)
+ #define PLL35XX_PDIV_MASK (0x3F)
+@@ -272,7 +273,11 @@ static int samsung_pll35xx_set_rate(struct clk_hw *hw, unsigned long drate,
+ }
+
+ /* Set PLL lock time. */
+- writel_relaxed(rate->pdiv * PLL35XX_LOCK_FACTOR,
++ if (pll->type == pll_142xx)
++ writel_relaxed(rate->pdiv * PLL142XX_LOCK_FACTOR,
++ pll->lock_reg);
++ else
++ writel_relaxed(rate->pdiv * PLL35XX_LOCK_FACTOR,
+ pll->lock_reg);
+
+ /* Change PLL PMS values */
+diff --git a/drivers/firmware/iscsi_ibft.c b/drivers/firmware/iscsi_ibft.c
+index 6e9788324fea55..371f24569b3b22 100644
+--- a/drivers/firmware/iscsi_ibft.c
++++ b/drivers/firmware/iscsi_ibft.c
+@@ -310,7 +310,10 @@ static ssize_t ibft_attr_show_nic(void *data, int type, char *buf)
+ str += sprintf_ipaddr(str, nic->ip_addr);
+ break;
+ case ISCSI_BOOT_ETH_SUBNET_MASK:
+- val = cpu_to_be32(~((1 << (32-nic->subnet_mask_prefix))-1));
++ if (nic->subnet_mask_prefix > 32)
++ val = cpu_to_be32(~0);
++ else
++ val = cpu_to_be32(~((1 << (32-nic->subnet_mask_prefix))-1));
+ str += sprintf(str, "%pI4", &val);
+ break;
+ case ISCSI_BOOT_ETH_PREFIX_LEN:
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
+index edcb5351f8cca7..9c6824e1c15660 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
+@@ -525,8 +525,9 @@ static void gmc_v12_0_get_vm_pte(struct amdgpu_device *adev,
+
+ bo_adev = amdgpu_ttm_adev(bo->tbo.bdev);
+ coherent = bo->flags & AMDGPU_GEM_CREATE_COHERENT;
+- is_system = (bo->tbo.resource->mem_type == TTM_PL_TT) ||
+- (bo->tbo.resource->mem_type == AMDGPU_PL_PREEMPT);
++ is_system = bo->tbo.resource &&
++ (bo->tbo.resource->mem_type == TTM_PL_TT ||
++ bo->tbo.resource->mem_type == AMDGPU_PL_PREEMPT);
+
+ if (bo && bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC)
+ *flags |= AMDGPU_PTE_DCC;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index 3cfb4a38d17c7f..dffe2a86f383ef 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -1199,11 +1199,13 @@ static int evict_process_queues_cpsch(struct device_queue_manager *dqm,
+ decrement_queue_count(dqm, qpd, q);
+
+ if (dqm->dev->kfd->shared_resources.enable_mes) {
+- retval = remove_queue_mes(dqm, q, qpd);
+- if (retval) {
++ int err;
++
++ err = remove_queue_mes(dqm, q, qpd);
++ if (err) {
+ dev_err(dev, "Failed to evict queue %d\n",
+ q->properties.queue_id);
+- goto out;
++ retval = err;
+ }
+ }
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 5df26f8937cc81..0688a428ee4f7d 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -243,6 +243,10 @@ static int amdgpu_dm_atomic_check(struct drm_device *dev,
+ static void handle_hpd_irq_helper(struct amdgpu_dm_connector *aconnector);
+ static void handle_hpd_rx_irq(void *param);
+
++static void amdgpu_dm_backlight_set_level(struct amdgpu_display_manager *dm,
++ int bl_idx,
++ u32 user_brightness);
++
+ static bool
+ is_timing_unchanged_for_freesync(struct drm_crtc_state *old_crtc_state,
+ struct drm_crtc_state *new_crtc_state);
+@@ -3295,6 +3299,12 @@ static int dm_resume(void *handle)
+
+ mutex_unlock(&dm->dc_lock);
+
++ /* set the backlight after a reset */
++ for (i = 0; i < dm->num_of_edps; i++) {
++ if (dm->backlight_dev[i])
++ amdgpu_dm_backlight_set_level(dm, i, dm->brightness[i]);
++ }
++
+ return 0;
+ }
+ /* Recreate dc_state - DC invalidates it when setting power state to S3. */
+@@ -4822,6 +4832,7 @@ amdgpu_dm_register_backlight_device(struct amdgpu_dm_connector *aconnector)
+ dm->backlight_dev[aconnector->bl_idx] =
+ backlight_device_register(bl_name, aconnector->base.kdev, dm,
+ &amdgpu_dm_backlight_ops, &props);
++ dm->brightness[aconnector->bl_idx] = props.brightness;
+
+ if (IS_ERR(dm->backlight_dev[aconnector->bl_idx])) {
+ DRM_ERROR("DM: Backlight registration failed!\n");
+@@ -4889,7 +4900,6 @@ static void setup_backlight_device(struct amdgpu_display_manager *dm,
+ aconnector->bl_idx = bl_idx;
+
+ amdgpu_dm_update_backlight_caps(dm, bl_idx);
+- dm->brightness[bl_idx] = AMDGPU_MAX_BL_LEVEL;
+ dm->backlight_link[bl_idx] = link;
+ dm->num_of_edps++;
+
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+index e339c7a8d541c9..c0dc2324404908 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+@@ -455,6 +455,7 @@ void hdcp_destroy(struct kobject *kobj, struct hdcp_workqueue *hdcp_work)
+ for (i = 0; i < hdcp_work->max_link; i++) {
+ cancel_delayed_work_sync(&hdcp_work[i].callback_dwork);
+ cancel_delayed_work_sync(&hdcp_work[i].watchdog_timer_dwork);
++ cancel_delayed_work_sync(&hdcp_work[i].property_validate_dwork);
+ }
+
+ sysfs_remove_bin_file(kobj, &hdcp_work[0].attr);
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+index c4a7fd453e5fc0..a215234151ac31 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_irq.c
+@@ -894,8 +894,16 @@ void amdgpu_dm_hpd_init(struct amdgpu_device *adev)
+ struct drm_device *dev = adev_to_drm(adev);
+ struct drm_connector *connector;
+ struct drm_connector_list_iter iter;
++ int irq_type;
+ int i;
+
++ /* First, clear all hpd and hpdrx interrupts */
++ for (i = DC_IRQ_SOURCE_HPD1; i <= DC_IRQ_SOURCE_HPD6RX; i++) {
++ if (!dc_interrupt_set(adev->dm.dc, i, false))
++ drm_err(dev, "Failed to clear hpd(rx) source=%d on init\n",
++ i);
++ }
++
+ drm_connector_list_iter_begin(dev, &iter);
+ drm_for_each_connector_iter(connector, &iter) {
+ struct amdgpu_dm_connector *amdgpu_dm_connector;
+@@ -908,10 +916,31 @@ void amdgpu_dm_hpd_init(struct amdgpu_device *adev)
+
+ dc_link = amdgpu_dm_connector->dc_link;
+
++ /*
++ * Get a base driver irq reference for hpd ints for the lifetime
++ * of dm. Note that only hpd interrupt types are registered with
++ * base driver; hpd_rx types aren't. IOW, amdgpu_irq_get/put on
++ * hpd_rx isn't available. DM currently controls hpd_rx
++ * explicitly with dc_interrupt_set()
++ */
+ if (dc_link->irq_source_hpd != DC_IRQ_SOURCE_INVALID) {
+- dc_interrupt_set(adev->dm.dc,
+- dc_link->irq_source_hpd,
+- true);
++ irq_type = dc_link->irq_source_hpd - DC_IRQ_SOURCE_HPD1;
++ /*
++ * TODO: There's a mismatch between mode_info.num_hpd
++ * and what bios reports as the # of connectors with hpd
++ * sources. Since the # of hpd source types registered
++ * with base driver == mode_info.num_hpd, we have to fall
++ * back to dc_interrupt_set for the remaining types.
++ */
++ if (irq_type < adev->mode_info.num_hpd) {
++ if (amdgpu_irq_get(adev, &adev->hpd_irq, irq_type))
++ drm_err(dev, "DM_IRQ: Failed get HPD for source=%d)!\n",
++ dc_link->irq_source_hpd);
++ } else {
++ dc_interrupt_set(adev->dm.dc,
++ dc_link->irq_source_hpd,
++ true);
++ }
+ }
+
+ if (dc_link->irq_source_hpd_rx != DC_IRQ_SOURCE_INVALID) {
+@@ -921,12 +950,6 @@ void amdgpu_dm_hpd_init(struct amdgpu_device *adev)
+ }
+ }
+ drm_connector_list_iter_end(&iter);
+-
+- /* Update reference counts for HPDs */
+- for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) {
+- if (amdgpu_irq_get(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1))
+- drm_err(dev, "DM_IRQ: Failed get HPD for source=%d)!\n", i);
+- }
+ }
+
+ /**
+@@ -942,7 +965,7 @@ void amdgpu_dm_hpd_fini(struct amdgpu_device *adev)
+ struct drm_device *dev = adev_to_drm(adev);
+ struct drm_connector *connector;
+ struct drm_connector_list_iter iter;
+- int i;
++ int irq_type;
+
+ drm_connector_list_iter_begin(dev, &iter);
+ drm_for_each_connector_iter(connector, &iter) {
+@@ -956,9 +979,18 @@ void amdgpu_dm_hpd_fini(struct amdgpu_device *adev)
+ dc_link = amdgpu_dm_connector->dc_link;
+
+ if (dc_link->irq_source_hpd != DC_IRQ_SOURCE_INVALID) {
+- dc_interrupt_set(adev->dm.dc,
+- dc_link->irq_source_hpd,
+- false);
++ irq_type = dc_link->irq_source_hpd - DC_IRQ_SOURCE_HPD1;
++
++ /* TODO: See same TODO in amdgpu_dm_hpd_init() */
++ if (irq_type < adev->mode_info.num_hpd) {
++ if (amdgpu_irq_put(adev, &adev->hpd_irq, irq_type))
++ drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n",
++ dc_link->irq_source_hpd);
++ } else {
++ dc_interrupt_set(adev->dm.dc,
++ dc_link->irq_source_hpd,
++ false);
++ }
+ }
+
+ if (dc_link->irq_source_hpd_rx != DC_IRQ_SOURCE_INVALID) {
+@@ -968,10 +1000,4 @@ void amdgpu_dm_hpd_fini(struct amdgpu_device *adev)
+ }
+ }
+ drm_connector_list_iter_end(&iter);
+-
+- /* Update reference counts for HPDs */
+- for (i = DC_IRQ_SOURCE_HPD1; i <= adev->mode_info.num_hpd; i++) {
+- if (amdgpu_irq_put(adev, &adev->hpd_irq, i - DC_IRQ_SOURCE_HPD1))
+- drm_err(dev, "DM_IRQ: Failed put HPD for source=%d!\n", i);
+- }
+ }
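Aside: the amdgpu_dm_irq.c hunks above split HPD enablement into two paths: sources registered with the base driver (irq_type < mode_info.num_hpd) take a refcounted amdgpu_irq_get()/amdgpu_irq_put(), while the remaining sources keep the direct dc_interrupt_set() toggle. A minimal userspace C sketch of that dispatch; NUM_HPD and both helpers are illustrative stand-ins, not the driver's API:

  #include <stdio.h>

  #define NUM_HPD 4    /* sources the irq core knows about (stand-in) */

  static void irq_get(int type)
  {
      printf("refcounted get, type %d\n", type);
  }

  static void interrupt_set(int source, int enable)
  {
      printf("direct set, source %d -> %d\n", source, enable);
  }

  static void hpd_enable(int irq_source, int first_source)
  {
      int irq_type = irq_source - first_source;

      if (irq_type < NUM_HPD)
          irq_get(irq_type);              /* tracked by the irq core */
      else
          interrupt_set(irq_source, 1);   /* never registered there */
  }

  int main(void)
  {
      hpd_enable(10, 8);   /* type 2: refcounted path */
      hpd_enable(14, 8);   /* type 6: direct path */
      return 0;
  }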
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+index 83c7c8853edeca..62e30942f735d4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_plane.c
+@@ -275,8 +275,11 @@ static int amdgpu_dm_plane_validate_dcc(struct amdgpu_device *adev,
+ if (!dcc->enable)
+ return 0;
+
+- if (format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN ||
+- !dc->cap_funcs.get_dcc_compression_cap)
++ if (adev->family < AMDGPU_FAMILY_GC_12_0_0 &&
++ format >= SURFACE_PIXEL_FORMAT_VIDEO_BEGIN)
++ return -EINVAL;
++
++ if (!dc->cap_funcs.get_dcc_compression_cap)
+ return -EINVAL;
+
+ input.format = format;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+index f0eda0ba015600..bfcbbea377298f 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_resource.c
+@@ -3388,10 +3388,13 @@ static int get_norm_pix_clk(const struct dc_crtc_timing *timing)
+ break;
+ case COLOR_DEPTH_121212:
+ normalized_pix_clk = (pix_clk * 36) / 24;
+- break;
++ break;
++ case COLOR_DEPTH_141414:
++ normalized_pix_clk = (pix_clk * 42) / 24;
++ break;
+ case COLOR_DEPTH_161616:
+ normalized_pix_clk = (pix_clk * 48) / 24;
+- break;
++ break;
+ default:
+ ASSERT(0);
+ break;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce60/dce60_timing_generator.c b/drivers/gpu/drm/amd/display/dc/dce60/dce60_timing_generator.c
+index e5fb0e8333e43f..e691a1cf33567d 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce60/dce60_timing_generator.c
++++ b/drivers/gpu/drm/amd/display/dc/dce60/dce60_timing_generator.c
+@@ -239,6 +239,7 @@ static const struct timing_generator_funcs dce60_tg_funcs = {
+ dce60_timing_generator_enable_advanced_request,
+ .configure_crc = dce60_configure_crc,
+ .get_crc = dce110_get_crc,
++ .is_two_pixels_per_container = dce110_is_two_pixels_per_container,
+ };
+
+ void dce60_timing_generator_construct(
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+index 8dee0d397e0322..55014c15211674 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_translation_helper.c
+@@ -994,7 +994,7 @@ bool dml21_map_dc_state_into_dml_display_cfg(const struct dc *in_dc, struct dc_s
+ if (disp_cfg_stream_location < 0)
+ disp_cfg_stream_location = dml_dispcfg->num_streams++;
+
+- ASSERT(disp_cfg_stream_location >= 0 && disp_cfg_stream_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__);
++ ASSERT(disp_cfg_stream_location >= 0 && disp_cfg_stream_location < __DML2_WRAPPER_MAX_STREAMS_PLANES__);
+ populate_dml21_timing_config_from_stream_state(&dml_dispcfg->stream_descriptors[disp_cfg_stream_location].timing, context->streams[stream_index], dml_ctx);
+ populate_dml21_output_config_from_stream_state(&dml_dispcfg->stream_descriptors[disp_cfg_stream_location].output, context->streams[stream_index], &context->res_ctx.pipe_ctx[stream_index]);
+ populate_dml21_stream_overrides_from_stream_state(&dml_dispcfg->stream_descriptors[disp_cfg_stream_location], context->streams[stream_index]);
+@@ -1018,7 +1018,7 @@ bool dml21_map_dc_state_into_dml_display_cfg(const struct dc *in_dc, struct dc_s
+ if (disp_cfg_plane_location < 0)
+ disp_cfg_plane_location = dml_dispcfg->num_planes++;
+
+- ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__);
++ ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location < __DML2_WRAPPER_MAX_STREAMS_PLANES__);
+
+ populate_dml21_surface_config_from_plane_state(in_dc, &dml_dispcfg->plane_descriptors[disp_cfg_plane_location].surface, context->stream_status[stream_index].plane_states[plane_index]);
+ populate_dml21_plane_config_from_plane_state(dml_ctx, &dml_dispcfg->plane_descriptors[disp_cfg_plane_location], context->stream_status[stream_index].plane_states[plane_index], context, stream_index);
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+index bde4250853b10c..81ba8809a3b4c5 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c
+@@ -746,7 +746,7 @@ static void populate_dml_output_cfg_from_stream_state(struct dml_output_cfg_st *
+ case SIGNAL_TYPE_DISPLAY_PORT_MST:
+ case SIGNAL_TYPE_DISPLAY_PORT:
+ out->OutputEncoder[location] = dml_dp;
+- if (dml2->v20.scratch.hpo_stream_to_link_encoder_mapping[location] != -1)
++ if (location < MAX_HPO_DP2_ENCODERS && dml2->v20.scratch.hpo_stream_to_link_encoder_mapping[location] != -1)
+ out->OutputEncoder[dml2->v20.scratch.hpo_stream_to_link_encoder_mapping[location]] = dml_dp2p0;
+ break;
+ case SIGNAL_TYPE_EDP:
+@@ -1303,7 +1303,7 @@ void map_dc_state_into_dml_display_cfg(struct dml2_context *dml2, struct dc_stat
+ if (disp_cfg_stream_location < 0)
+ disp_cfg_stream_location = dml_dispcfg->num_timings++;
+
+- ASSERT(disp_cfg_stream_location >= 0 && disp_cfg_stream_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__);
++ ASSERT(disp_cfg_stream_location >= 0 && disp_cfg_stream_location < __DML2_WRAPPER_MAX_STREAMS_PLANES__);
+
+ populate_dml_timing_cfg_from_stream_state(&dml_dispcfg->timing, disp_cfg_stream_location, context->streams[i]);
+ populate_dml_output_cfg_from_stream_state(&dml_dispcfg->output, disp_cfg_stream_location, context->streams[i], current_pipe_context, dml2);
+@@ -1343,7 +1343,7 @@ void map_dc_state_into_dml_display_cfg(struct dml2_context *dml2, struct dc_stat
+ if (disp_cfg_plane_location < 0)
+ disp_cfg_plane_location = dml_dispcfg->num_surfaces++;
+
+- ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location <= __DML2_WRAPPER_MAX_STREAMS_PLANES__);
++ ASSERT(disp_cfg_plane_location >= 0 && disp_cfg_plane_location < __DML2_WRAPPER_MAX_STREAMS_PLANES__);
+
+ populate_dml_surface_cfg_from_plane_state(dml2->v20.dml_core_ctx.project, &dml_dispcfg->surface, disp_cfg_plane_location, context->stream_status[i].plane_states[j]);
+ populate_dml_plane_cfg_from_plane_state(
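Aside: the dml21/dml2 hunks above tighten several ASSERT bounds from "<= __DML2_WRAPPER_MAX_STREAMS_PLANES__" to "<", since a valid index into an array of N descriptors is 0..N-1; the inclusive form lets an index one past the end slip through. A minimal sketch of the off-by-one (array name and size are illustrative):

  #include <assert.h>
  #include <stdio.h>

  #define MAX_SLOTS 8

  static int slots[MAX_SLOTS];

  static void set_slot(int idx, int val)
  {
      /* idx == MAX_SLOTS would pass a "<=" check yet write one past
       * the end of slots[]; "<" rejects it. */
      assert(idx >= 0 && idx < MAX_SLOTS);
      slots[idx] = val;
  }

  int main(void)
  {
      set_slot(MAX_SLOTS - 1, 42);   /* last valid element */
      printf("%d\n", slots[MAX_SLOTS - 1]);
      return 0;
  }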
+diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+index f0c6d50d8c3345..da6ff36623d30f 100644
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -4034,6 +4034,22 @@ static int drm_dp_mst_handle_down_rep(struct drm_dp_mst_topology_mgr *mgr)
+ return 0;
+ }
+
++static bool primary_mstb_probing_is_done(struct drm_dp_mst_topology_mgr *mgr)
++{
++ bool probing_done = false;
++
++ mutex_lock(&mgr->lock);
++
++ if (mgr->mst_primary && drm_dp_mst_topology_try_get_mstb(mgr->mst_primary)) {
++ probing_done = mgr->mst_primary->link_address_sent;
++ drm_dp_mst_topology_put_mstb(mgr->mst_primary);
++ }
++
++ mutex_unlock(&mgr->lock);
++
++ return probing_done;
++}
++
+ static inline bool
+ drm_dp_mst_process_up_req(struct drm_dp_mst_topology_mgr *mgr,
+ struct drm_dp_pending_up_req *up_req)
+@@ -4064,8 +4080,12 @@ drm_dp_mst_process_up_req(struct drm_dp_mst_topology_mgr *mgr,
+
+ /* TODO: Add missing handler for DP_RESOURCE_STATUS_NOTIFY events */
+ if (msg->req_type == DP_CONNECTION_STATUS_NOTIFY) {
+- dowork = drm_dp_mst_handle_conn_stat(mstb, &msg->u.conn_stat);
+- hotplug = true;
++ if (!primary_mstb_probing_is_done(mgr)) {
++ drm_dbg_kms(mgr->dev, "Got CSN before finish topology probing. Skip it.\n");
++ } else {
++ dowork = drm_dp_mst_handle_conn_stat(mstb, &msg->u.conn_stat);
++ hotplug = true;
++ }
+ }
+
+ drm_dp_mst_topology_put_mstb(mstb);
+@@ -4144,10 +4164,11 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ drm_dp_send_up_ack_reply(mgr, mst_primary, up_req->msg.req_type,
+ false);
+
++ drm_dp_mst_topology_put_mstb(mst_primary);
++
+ if (up_req->msg.req_type == DP_CONNECTION_STATUS_NOTIFY) {
+ const struct drm_dp_connection_status_notify *conn_stat =
+ &up_req->msg.u.conn_stat;
+- bool handle_csn;
+
+ drm_dbg_kms(mgr->dev, "Got CSN: pn: %d ldps:%d ddps: %d mcs: %d ip: %d pdt: %d\n",
+ conn_stat->port_number,
+@@ -4156,16 +4177,6 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ conn_stat->message_capability_status,
+ conn_stat->input_port,
+ conn_stat->peer_device_type);
+-
+- mutex_lock(&mgr->probe_lock);
+- handle_csn = mst_primary->link_address_sent;
+- mutex_unlock(&mgr->probe_lock);
+-
+- if (!handle_csn) {
+- drm_dbg_kms(mgr->dev, "Got CSN before finish topology probing. Skip it.");
+- kfree(up_req);
+- goto out_put_primary;
+- }
+ } else if (up_req->msg.req_type == DP_RESOURCE_STATUS_NOTIFY) {
+ const struct drm_dp_resource_status_notify *res_stat =
+ &up_req->msg.u.resource_stat;
+@@ -4180,9 +4191,6 @@ static int drm_dp_mst_handle_up_req(struct drm_dp_mst_topology_mgr *mgr)
+ list_add_tail(&up_req->next, &mgr->up_req_list);
+ mutex_unlock(&mgr->up_req_lock);
+ queue_work(system_long_wq, &mgr->up_req_work);
+-
+-out_put_primary:
+- drm_dp_mst_topology_put_mstb(mst_primary);
+ out_clear_reply:
+ memset(&mgr->up_req_recv, 0, sizeof(struct drm_dp_sideband_msg_rx));
+ return 0;
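Aside: primary_mstb_probing_is_done() above samples mst_primary->link_address_sent under mgr->lock while holding a temporary topology reference, so the branch device cannot be torn down between the validity check and the read. A single-threaded sketch of that try-get/read/put shape; the refcount and types are simplified stand-ins:

  #include <stdbool.h>
  #include <stdio.h>

  struct branch {
      int refs;                 /* simplified stand-in for a kref */
      bool link_address_sent;
  };

  static bool try_get(struct branch *b)
  {
      if (!b || b->refs == 0)   /* object already being destroyed */
          return false;
      b->refs++;
      return true;
  }

  static void put(struct branch *b)
  {
      b->refs--;
  }

  static bool probing_is_done(struct branch *primary)
  {
      bool done = false;

      /* the real helper brackets this in mutex_lock/unlock(&mgr->lock) */
      if (try_get(primary)) {
          done = primary->link_address_sent;
          put(primary);
      }
      return done;
  }

  int main(void)
  {
      struct branch b = { .refs = 1, .link_address_sent = true };

      printf("%d\n", probing_is_done(&b));    /* 1 */
      printf("%d\n", probing_is_done(NULL));  /* 0: no primary */
      return 0;
  }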
+diff --git a/drivers/gpu/drm/drm_atomic_uapi.c b/drivers/gpu/drm/drm_atomic_uapi.c
+index 370dc676e3aa54..fd36b8fd54e9e1 100644
+--- a/drivers/gpu/drm/drm_atomic_uapi.c
++++ b/drivers/gpu/drm/drm_atomic_uapi.c
+@@ -956,6 +956,10 @@ int drm_atomic_connector_commit_dpms(struct drm_atomic_state *state,
+
+ if (mode != DRM_MODE_DPMS_ON)
+ mode = DRM_MODE_DPMS_OFF;
++
++ if (connector->dpms == mode)
++ goto out;
++
+ connector->dpms = mode;
+
+ crtc = connector->state->crtc;
+diff --git a/drivers/gpu/drm/drm_connector.c b/drivers/gpu/drm/drm_connector.c
+index 0e6021235a9304..994afa5a0ffb52 100644
+--- a/drivers/gpu/drm/drm_connector.c
++++ b/drivers/gpu/drm/drm_connector.c
+@@ -1308,6 +1308,10 @@ EXPORT_SYMBOL(drm_hdmi_connector_get_output_format_name);
+ * callback. For atomic drivers the remapping to the "ACTIVE" property is
+ * implemented in the DRM core.
+ *
++ * On atomic drivers any DPMS setproperty ioctl where the value does not
++ * change is completely skipped, otherwise a full atomic commit will occur.
++ * On legacy drivers the exact behavior is driver specific.
++ *
+ * Note that this property cannot be set through the MODE_ATOMIC ioctl,
+ * userspace must use "ACTIVE" on the CRTC instead.
+ *
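Aside: per the drm_atomic_uapi.c hunk and the documentation added above, a DPMS setproperty whose value matches the connector's current state now returns early instead of building a full atomic commit. A minimal sketch of that idempotent-setter short-circuit; the state struct and expensive_commit() are stand-ins:

  #include <stdio.h>

  struct conn_state {
      int dpms;   /* last applied mode */
  };

  static void expensive_commit(int mode)
  {
      printf("full commit to mode %d\n", mode);
  }

  static void set_dpms(struct conn_state *s, int mode)
  {
      if (s->dpms == mode)   /* nothing changed: skip the heavy path */
          return;
      s->dpms = mode;
      expensive_commit(mode);
  }

  int main(void)
  {
      struct conn_state s = { .dpms = 0 };

      set_dpms(&s, 1);   /* commits */
      set_dpms(&s, 1);   /* silently skipped */
      return 0;
  }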
+diff --git a/drivers/gpu/drm/drm_panic_qr.rs b/drivers/gpu/drm/drm_panic_qr.rs
+index bcf248f69252c2..6903e2010cb98b 100644
+--- a/drivers/gpu/drm/drm_panic_qr.rs
++++ b/drivers/gpu/drm/drm_panic_qr.rs
+@@ -545,7 +545,7 @@ fn add_segments(&mut self, segments: &[&Segment<'_>]) {
+ }
+ self.push(&mut offset, (MODE_STOP, 4));
+
+- let pad_offset = (offset + 7) / 8;
++ let pad_offset = offset.div_ceil(8);
+ for i in pad_offset..self.version.max_data() {
+ self.data[i] = PADDING[(i & 1) ^ (pad_offset & 1)];
+ }
+@@ -659,7 +659,7 @@ struct QrImage<'a> {
+ impl QrImage<'_> {
+ fn new<'a, 'b>(em: &'b EncodedMsg<'b>, qrdata: &'a mut [u8]) -> QrImage<'a> {
+ let width = em.version.width();
+- let stride = (width + 7) / 8;
++ let stride = width.div_ceil(8);
+ let data = qrdata;
+
+ let mut qr_image = QrImage {
+@@ -911,16 +911,16 @@ fn draw_all(&mut self, data: impl Iterator<Item = u8>) {
+ ///
+ /// * `url`: The base URL of the QR code. It will be encoded as Binary segment.
+ /// * `data`: A pointer to the binary data, to be encoded. if URL is NULL, it
+-/// will be encoded as binary segment, otherwise it will be encoded
+-/// efficiently as a numeric segment, and appended to the URL.
++/// will be encoded as binary segment, otherwise it will be encoded
++/// efficiently as a numeric segment, and appended to the URL.
+ /// * `data_len`: Length of the data, that needs to be encoded, must be less
+-/// than data_size.
++/// than data_size.
+ /// * `data_size`: Size of data buffer, it should be at least 4071 bytes to hold
+-/// a V40 QR code. It will then be overwritten with the QR code image.
++/// a V40 QR code. It will then be overwritten with the QR code image.
+ /// * `tmp`: A temporary buffer that the QR code encoder will use, to write the
+-/// segments and ECC.
++/// segments and ECC.
+ /// * `tmp_size`: Size of the temporary buffer, it must be at least 3706 bytes
+-/// long for V40.
++/// long for V40.
+ ///
+ /// # Safety
+ ///
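Aside: the drm_panic_qr.rs hunks above swap the manual round-up (x + 7) / 8 for Rust's usize::div_ceil(8); both compute the ceiling of x/8 for the bit-to-byte stride. A C sketch of the identity (all sketches here use C for consistency); note the (n + d - 1) form can overflow when n is near the type's maximum, which the library method avoids:

  #include <assert.h>
  #include <stdio.h>

  static unsigned int div_ceil(unsigned int n, unsigned int d)
  {
      return (n + d - 1) / d;   /* round up; d must be non-zero */
  }

  int main(void)
  {
      unsigned int bits;

      for (bits = 0; bits <= 64; bits++)
          assert(div_ceil(bits, 8) == (bits + 7) / 8);

      printf("17 bits -> %u bytes\n", div_ceil(17, 8));   /* 3 */
      return 0;
  }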
+diff --git a/drivers/gpu/drm/gma500/mid_bios.c b/drivers/gpu/drm/gma500/mid_bios.c
+index 7e76790c6a81fa..cba97d7db131d8 100644
+--- a/drivers/gpu/drm/gma500/mid_bios.c
++++ b/drivers/gpu/drm/gma500/mid_bios.c
+@@ -279,6 +279,11 @@ static void mid_get_vbt_data(struct drm_psb_private *dev_priv)
+ 0, PCI_DEVFN(2, 0));
+ int ret = -1;
+
++ if (pci_gfx_root == NULL) {
++ WARN_ON(1);
++ return;
++ }
++
+ /* Get the address of the platform config vbt */
+ pci_read_config_dword(pci_gfx_root, 0xFC, &addr);
+ pci_dev_put(pci_gfx_root);
+diff --git a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
+index ff93e08d5036df..5f02a5a39ab4a2 100644
+--- a/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
++++ b/drivers/gpu/drm/hyperv/hyperv_drm_drv.c
+@@ -154,6 +154,7 @@ static int hyperv_vmbus_probe(struct hv_device *hdev,
+ return 0;
+
+ err_free_mmio:
++ iounmap(hv->vram);
+ vmbus_free_mmio(hv->mem->start, hv->fb_size);
+ err_vmbus_close:
+ vmbus_close(hdev->channel);
+@@ -172,6 +173,7 @@ static void hyperv_vmbus_remove(struct hv_device *hdev)
+ vmbus_close(hdev->channel);
+ hv_set_drvdata(hdev, NULL);
+
++ iounmap(hv->vram);
+ vmbus_free_mmio(hv->mem->start, hv->fb_size);
+ }
+
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index 3039ee03e1c7a8..d5eb8de645a9a3 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -7438,9 +7438,6 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
+ /* Now enable the clocks, plane, pipe, and connectors that we set up. */
+ dev_priv->display.funcs.display->commit_modeset_enables(state);
+
+- if (state->modeset)
+- intel_set_cdclk_post_plane_update(state);
+-
+ intel_wait_for_vblank_workers(state);
+
+ /* FIXME: We should call drm_atomic_helper_commit_hw_done() here
+@@ -7521,6 +7518,8 @@ static void intel_atomic_commit_tail(struct intel_atomic_state *state)
+ intel_verify_planes(state);
+
+ intel_sagv_post_plane_update(state);
++ if (state->modeset)
++ intel_set_cdclk_post_plane_update(state);
+ intel_pmdemand_post_plane_update(state);
+
+ drm_atomic_helper_commit_hw_done(&state->base);
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_mman.c b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+index 21274aa9bdddc1..c3dabb85796052 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_mman.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_mman.c
+@@ -164,6 +164,9 @@ static unsigned int tile_row_pages(const struct drm_i915_gem_object *obj)
+ * 4 - Support multiple fault handlers per object depending on object's
+ * backing storage (a.k.a. MMAP_OFFSET).
+ *
++ * 5 - Support multiple partial mmaps (mmap part of a BO + unmap an offset,
++ * multiple times with different sizes and offsets).
++ *
+ * Restrictions:
+ *
+ * * snoopable objects cannot be accessed via the GTT. It can cause machine
+@@ -191,7 +194,7 @@ static unsigned int tile_row_pages(const struct drm_i915_gem_object *obj)
+ */
+ int i915_gem_mmap_gtt_version(void)
+ {
+- return 4;
++ return 5;
+ }
+
+ static inline struct i915_gtt_view
+diff --git a/drivers/gpu/drm/nouveau/nouveau_connector.c b/drivers/gpu/drm/nouveau/nouveau_connector.c
+index b06aa473102b30..5ab4201c981e47 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_connector.c
++++ b/drivers/gpu/drm/nouveau/nouveau_connector.c
+@@ -776,7 +776,6 @@ nouveau_connector_force(struct drm_connector *connector)
+ if (!nv_encoder) {
+ NV_ERROR(drm, "can't find encoder to force %s on!\n",
+ connector->name);
+- connector->status = connector_status_disconnected;
+ return;
+ }
+
+diff --git a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
+index cbd9584af32995..383fbe128348ea 100644
+--- a/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
++++ b/drivers/gpu/drm/tests/drm_hdmi_state_helper_test.c
+@@ -258,15 +258,16 @@ static void drm_test_check_broadcast_rgb_crtc_mode_changed(struct kunit *test)
+ 8);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
++
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+@@ -321,15 +322,16 @@ static void drm_test_check_broadcast_rgb_crtc_mode_not_changed(struct kunit *tes
+ 8);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
++
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+@@ -384,18 +386,18 @@ static void drm_test_check_broadcast_rgb_auto_cea_mode(struct kunit *test)
+ 8);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ KUNIT_ASSERT_TRUE(test, conn->display_info.is_hdmi);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+ KUNIT_ASSERT_NE(test, drm_match_cea_mode(preferred), 1);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+@@ -450,7 +452,6 @@ static void drm_test_check_broadcast_rgb_auto_cea_mode_vic_1(struct kunit *test)
+ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+- drm = &priv->drm;
+ crtc = priv->crtc;
+ ret = light_up_connector(test, drm, crtc, conn, mode, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+@@ -496,18 +497,18 @@ static void drm_test_check_broadcast_rgb_full_cea_mode(struct kunit *test)
+ 8);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ KUNIT_ASSERT_TRUE(test, conn->display_info.is_hdmi);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+ KUNIT_ASSERT_NE(test, drm_match_cea_mode(preferred), 1);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+@@ -564,7 +565,6 @@ static void drm_test_check_broadcast_rgb_full_cea_mode_vic_1(struct kunit *test)
+ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+- drm = &priv->drm;
+ crtc = priv->crtc;
+ ret = light_up_connector(test, drm, crtc, conn, mode, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+@@ -612,18 +612,18 @@ static void drm_test_check_broadcast_rgb_limited_cea_mode(struct kunit *test)
+ 8);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ KUNIT_ASSERT_TRUE(test, conn->display_info.is_hdmi);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+ KUNIT_ASSERT_NE(test, drm_match_cea_mode(preferred), 1);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+@@ -680,7 +680,6 @@ static void drm_test_check_broadcast_rgb_limited_cea_mode_vic_1(struct kunit *te
+ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+- drm = &priv->drm;
+ crtc = priv->crtc;
+ ret = light_up_connector(test, drm, crtc, conn, mode, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+@@ -730,20 +729,20 @@ static void drm_test_check_output_bpc_crtc_mode_changed(struct kunit *test)
+ 10);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+ KUNIT_ASSERT_GT(test, ret, 0);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+@@ -804,20 +803,20 @@ static void drm_test_check_output_bpc_crtc_mode_not_changed(struct kunit *test)
+ 10);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz));
+ KUNIT_ASSERT_GT(test, ret, 0);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+@@ -875,6 +874,8 @@ static void drm_test_check_output_bpc_dvi(struct kunit *test)
+ 12);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ ret = set_connector_edid(test, conn,
+ test_edid_dvi_1080p,
+@@ -884,14 +885,12 @@ static void drm_test_check_output_bpc_dvi(struct kunit *test)
+ info = &conn->display_info;
+ KUNIT_ASSERT_FALSE(test, info->is_hdmi);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+@@ -922,21 +921,21 @@ static void drm_test_check_tmds_char_rate_rgb_8bpc(struct kunit *test)
+ 8);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_max_200mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_max_200mhz));
+ KUNIT_ASSERT_GT(test, ret, 0);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+ KUNIT_ASSERT_FALSE(test, preferred->flags & DRM_MODE_FLAG_DBLCLK);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+@@ -969,21 +968,21 @@ static void drm_test_check_tmds_char_rate_rgb_10bpc(struct kunit *test)
+ 10);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz));
+ KUNIT_ASSERT_GT(test, ret, 0);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+ KUNIT_ASSERT_FALSE(test, preferred->flags & DRM_MODE_FLAG_DBLCLK);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+@@ -1016,21 +1015,21 @@ static void drm_test_check_tmds_char_rate_rgb_12bpc(struct kunit *test)
+ 12);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz,
+ ARRAY_SIZE(test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz));
+ KUNIT_ASSERT_GT(test, ret, 0);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+ KUNIT_ASSERT_FALSE(test, preferred->flags & DRM_MODE_FLAG_DBLCLK);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+@@ -1067,15 +1066,16 @@ static void drm_test_check_hdmi_funcs_reject_rate(struct kunit *test)
+ 8);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
++
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_ASSERT_EQ(test, ret, 0);
+
+@@ -1123,6 +1123,8 @@ static void drm_test_check_max_tmds_rate_bpc_fallback(struct kunit *test)
+ 12);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+@@ -1133,9 +1135,6 @@ static void drm_test_check_max_tmds_rate_bpc_fallback(struct kunit *test)
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+ KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+ KUNIT_ASSERT_FALSE(test, preferred->flags & DRM_MODE_FLAG_DBLCLK);
+@@ -1146,8 +1145,9 @@ static void drm_test_check_max_tmds_rate_bpc_fallback(struct kunit *test)
+ rate = drm_hdmi_compute_mode_clock(preferred, 10, HDMI_COLORSPACE_RGB);
+ KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_EXPECT_EQ(test, ret, 0);
+
+@@ -1192,6 +1192,8 @@ static void drm_test_check_max_tmds_rate_format_fallback(struct kunit *test)
+ 12);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+@@ -1202,9 +1204,6 @@ static void drm_test_check_max_tmds_rate_format_fallback(struct kunit *test)
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+ KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+ KUNIT_ASSERT_FALSE(test, preferred->flags & DRM_MODE_FLAG_DBLCLK);
+@@ -1218,8 +1217,9 @@ static void drm_test_check_max_tmds_rate_format_fallback(struct kunit *test)
+ rate = drm_hdmi_compute_mode_clock(preferred, 12, HDMI_COLORSPACE_YUV422);
+ KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_EXPECT_EQ(test, ret, 0);
+
+@@ -1266,9 +1266,6 @@ static void drm_test_check_output_bpc_format_vic_1(struct kunit *test)
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+ KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ mode = drm_kunit_display_mode_from_cea_vic(test, drm, 1);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
+@@ -1282,7 +1279,9 @@ static void drm_test_check_output_bpc_format_vic_1(struct kunit *test)
+ rate = mode->clock * 1500;
+ KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000);
+
+- drm = &priv->drm;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ crtc = priv->crtc;
+ ret = light_up_connector(test, drm, crtc, conn, mode, ctx);
+ KUNIT_EXPECT_EQ(test, ret, 0);
+@@ -1316,6 +1315,8 @@ static void drm_test_check_output_bpc_format_driver_rgb_only(struct kunit *test)
+ 12);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_200mhz,
+@@ -1326,9 +1327,6 @@ static void drm_test_check_output_bpc_format_driver_rgb_only(struct kunit *test)
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+ KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+
+@@ -1347,8 +1345,9 @@ static void drm_test_check_output_bpc_format_driver_rgb_only(struct kunit *test)
+ rate = drm_hdmi_compute_mode_clock(preferred, 12, HDMI_COLORSPACE_YUV422);
+ KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_EXPECT_EQ(test, ret, 0);
+
+@@ -1383,6 +1382,8 @@ static void drm_test_check_output_bpc_format_display_rgb_only(struct kunit *test
+ 12);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_max_200mhz,
+@@ -1393,9 +1394,6 @@ static void drm_test_check_output_bpc_format_display_rgb_only(struct kunit *test
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+ KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+
+@@ -1414,8 +1412,9 @@ static void drm_test_check_output_bpc_format_display_rgb_only(struct kunit *test
+ rate = drm_hdmi_compute_mode_clock(preferred, 12, HDMI_COLORSPACE_YUV422);
+ KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_EXPECT_EQ(test, ret, 0);
+
+@@ -1449,6 +1448,8 @@ static void drm_test_check_output_bpc_format_driver_8bpc_only(struct kunit *test
+ 8);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_yuv_dc_max_340mhz,
+@@ -1459,9 +1460,6 @@ static void drm_test_check_output_bpc_format_driver_8bpc_only(struct kunit *test
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+ KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+
+@@ -1472,8 +1470,9 @@ static void drm_test_check_output_bpc_format_driver_8bpc_only(struct kunit *test
+ rate = drm_hdmi_compute_mode_clock(preferred, 12, HDMI_COLORSPACE_RGB);
+ KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_EXPECT_EQ(test, ret, 0);
+
+@@ -1509,6 +1508,8 @@ static void drm_test_check_output_bpc_format_display_8bpc_only(struct kunit *tes
+ 12);
+ KUNIT_ASSERT_NOT_NULL(test, priv);
+
++ drm = &priv->drm;
++ crtc = priv->crtc;
+ conn = &priv->connector;
+ ret = set_connector_edid(test, conn,
+ test_edid_hdmi_1080p_rgb_max_340mhz,
+@@ -1519,9 +1520,6 @@ static void drm_test_check_output_bpc_format_display_8bpc_only(struct kunit *tes
+ KUNIT_ASSERT_TRUE(test, info->is_hdmi);
+ KUNIT_ASSERT_GT(test, info->max_tmds_clock, 0);
+
+- ctx = drm_kunit_helper_acquire_ctx_alloc(test);
+- KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
+-
+ preferred = find_preferred_mode(conn);
+ KUNIT_ASSERT_NOT_NULL(test, preferred);
+
+@@ -1532,8 +1530,9 @@ static void drm_test_check_output_bpc_format_display_8bpc_only(struct kunit *tes
+ rate = drm_hdmi_compute_mode_clock(preferred, 12, HDMI_COLORSPACE_RGB);
+ KUNIT_ASSERT_LT(test, rate, info->max_tmds_clock * 1000);
+
+- drm = &priv->drm;
+- crtc = priv->crtc;
++ ctx = drm_kunit_helper_acquire_ctx_alloc(test);
++ KUNIT_ASSERT_NOT_ERR_OR_NULL(test, ctx);
++
+ ret = light_up_connector(test, drm, crtc, conn, preferred, ctx);
+ KUNIT_EXPECT_EQ(test, ret, 0);
+
+diff --git a/drivers/gpu/drm/vkms/vkms_composer.c b/drivers/gpu/drm/vkms/vkms_composer.c
+index e7441b227b3cea..3d6785d081f2cd 100644
+--- a/drivers/gpu/drm/vkms/vkms_composer.c
++++ b/drivers/gpu/drm/vkms/vkms_composer.c
+@@ -98,7 +98,7 @@ static u16 lerp_u16(u16 a, u16 b, s64 t)
+
+ s64 delta = drm_fixp_mul(b_fp - a_fp, t);
+
+- return drm_fixp2int(a_fp + delta);
++ return drm_fixp2int_round(a_fp + delta);
+ }
+
+ static s64 get_lut_index(const struct vkms_color_lut *lut, u16 channel_value)
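Aside: the vkms hunk above switches lerp_u16() from drm_fixp2int(), which truncates, to drm_fixp2int_round(), removing a systematic half-LSB-down bias when converting back to integer. A sketch using a 16.16 fixed-point layout (drm's actual format is 64-bit with 32 fractional bits):

  #include <stdint.h>
  #include <stdio.h>

  #define FP_SHIFT 16
  #define FP_ONE   (1 << FP_SHIFT)

  static int32_t fp2int_trunc(int32_t v)
  {
      return v >> FP_SHIFT;   /* always rounds down */
  }

  static int32_t fp2int_round(int32_t v)
  {
      return (v + FP_ONE / 2) >> FP_SHIFT;   /* round half up */
  }

  int main(void)
  {
      int32_t v = (200 << FP_SHIFT) + (FP_ONE * 3) / 4;   /* 200.75 */

      printf("trunc=%d round=%d\n", fp2int_trunc(v), fp2int_round(v));
      /* prints trunc=200 round=201 */
      return 0;
  }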
+diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
+index fed23304e4da58..20d05efdd406e6 100644
+--- a/drivers/gpu/drm/xe/xe_guc_submit.c
++++ b/drivers/gpu/drm/xe/xe_guc_submit.c
+@@ -1213,9 +1213,11 @@ static void __guc_exec_queue_fini_async(struct work_struct *w)
+ xe_pm_runtime_get(guc_to_xe(guc));
+ trace_xe_exec_queue_destroy(q);
+
++ release_guc_id(guc, q);
+ if (xe_exec_queue_is_lr(q))
+ cancel_work_sync(&ge->lr_tdr);
+- release_guc_id(guc, q);
++ /* Confirm no work left behind accessing device structures */
++ cancel_delayed_work_sync(&ge->sched.base.work_tdr);
+ xe_sched_entity_fini(&ge->entity);
+ xe_sched_fini(&ge->sched);
+
+diff --git a/drivers/gpu/drm/xe/xe_hmm.c b/drivers/gpu/drm/xe/xe_hmm.c
+index d7a9408b3a97c8..f6bc4f29d7538e 100644
+--- a/drivers/gpu/drm/xe/xe_hmm.c
++++ b/drivers/gpu/drm/xe/xe_hmm.c
+@@ -138,13 +138,17 @@ static int xe_build_sg(struct xe_device *xe, struct hmm_range *range,
+ i += size;
+
+ if (unlikely(j == st->nents - 1)) {
++ xe_assert(xe, i >= npages);
+ if (i > npages)
+ size -= (i - npages);
++
+ sg_mark_end(sgl);
++ } else {
++ xe_assert(xe, i < npages);
+ }
++
+ sg_set_page(sgl, page, size << PAGE_SHIFT, 0);
+ }
+- xe_assert(xe, i == npages);
+
+ return dma_map_sgtable(dev, st, write ? DMA_BIDIRECTIONAL : DMA_TO_DEVICE,
+ DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_NO_KERNEL_MAPPING);
+diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
+index 33eb039053e4f5..06f50aa313267a 100644
+--- a/drivers/gpu/drm/xe/xe_pm.c
++++ b/drivers/gpu/drm/xe/xe_pm.c
+@@ -264,6 +264,15 @@ int xe_pm_init_early(struct xe_device *xe)
+ return 0;
+ }
+
++static u32 vram_threshold_value(struct xe_device *xe)
++{
++ /* FIXME: D3Cold temporarily disabled by default on BMG */
++ if (xe->info.platform == XE_BATTLEMAGE)
++ return 0;
++
++ return DEFAULT_VRAM_THRESHOLD;
++}
++
+ /**
+ * xe_pm_init - Initialize Xe Power Management
+ * @xe: xe device instance
+@@ -274,6 +283,7 @@ int xe_pm_init_early(struct xe_device *xe)
+ */
+ int xe_pm_init(struct xe_device *xe)
+ {
++ u32 vram_threshold;
+ int err;
+
+ /* For now suspend/resume is only allowed with GuC */
+@@ -287,7 +297,8 @@ int xe_pm_init(struct xe_device *xe)
+ if (err)
+ return err;
+
+- err = xe_pm_set_vram_threshold(xe, DEFAULT_VRAM_THRESHOLD);
++ vram_threshold = vram_threshold_value(xe);
++ err = xe_pm_set_vram_threshold(xe, vram_threshold);
+ if (err)
+ return err;
+ }
+diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
+index f8a56d6312425a..4500d7653b05ee 100644
+--- a/drivers/hid/Kconfig
++++ b/drivers/hid/Kconfig
+@@ -1154,7 +1154,8 @@ config HID_TOPRE
+ tristate "Topre REALFORCE keyboards"
+ depends on HID
+ help
+- Say Y for N-key rollover support on Topre REALFORCE R2 108/87 key keyboards.
++ Say Y for N-key rollover support on Topre REALFORCE R2 108/87 key and
++ Topre REALFORCE R3S 87 key keyboards.
+
+ config HID_THINGM
+ tristate "ThingM blink(1) USB RGB LED"
+diff --git a/drivers/hid/hid-apple.c b/drivers/hid/hid-apple.c
+index 7e1ae2a2bcc247..d900dd05c335c3 100644
+--- a/drivers/hid/hid-apple.c
++++ b/drivers/hid/hid-apple.c
+@@ -378,6 +378,12 @@ static bool apple_is_non_apple_keyboard(struct hid_device *hdev)
+ return false;
+ }
+
++static bool apple_is_omoton_kb066(struct hid_device *hdev)
++{
++ return hdev->product == USB_DEVICE_ID_APPLE_ALU_WIRELESS_ANSI &&
++ strcmp(hdev->name, "Bluetooth Keyboard") == 0;
++}
++
+ static inline void apple_setup_key_translation(struct input_dev *input,
+ const struct apple_key_translation *table)
+ {
+@@ -474,6 +480,7 @@ static int hidinput_apple_event(struct hid_device *hid, struct input_dev *input,
+ hid->product == USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_2015)
+ table = magic_keyboard_2015_fn_keys;
+ else if (hid->product == USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021 ||
++ hid->product == USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2024 ||
+ hid->product == USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021 ||
+ hid->product == USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_2021)
+ table = apple2021_fn_keys;
+@@ -724,7 +731,7 @@ static int apple_input_configured(struct hid_device *hdev,
+ {
+ struct apple_sc *asc = hid_get_drvdata(hdev);
+
+- if ((asc->quirks & APPLE_HAS_FN) && !asc->fn_found) {
++ if (((asc->quirks & APPLE_HAS_FN) && !asc->fn_found) || apple_is_omoton_kb066(hdev)) {
+ hid_info(hdev, "Fn key not found (Apple Wireless Keyboard clone?), disabling Fn key handling\n");
+ asc->quirks &= ~APPLE_HAS_FN;
+ }
+@@ -1150,6 +1157,10 @@ static const struct hid_device_id apple_devices[] = {
+ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK | APPLE_RDESC_BATTERY },
+ { HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021),
+ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
++ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2024),
++ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK | APPLE_RDESC_BATTERY },
++ { HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2024),
++ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK },
+ { HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021),
+ .driver_data = APPLE_HAS_FN | APPLE_ISO_TILDE_QUIRK | APPLE_RDESC_BATTERY },
+ { HID_BLUETOOTH_DEVICE(BT_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021),
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index ceb3b1a72e235c..c6ae7c4268b84c 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -184,6 +184,7 @@
+ #define USB_DEVICE_ID_APPLE_IRCONTROL4 0x8242
+ #define USB_DEVICE_ID_APPLE_IRCONTROL5 0x8243
+ #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2021 0x029c
++#define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_2024 0x0320
+ #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_FINGERPRINT_2021 0x029a
+ #define USB_DEVICE_ID_APPLE_MAGIC_KEYBOARD_NUMPAD_2021 0x029f
+ #define USB_DEVICE_ID_APPLE_TOUCHBAR_BACKLIGHT 0x8102
+@@ -1089,6 +1090,7 @@
+ #define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3001 0x3001
+ #define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3003 0x3003
+ #define USB_DEVICE_ID_QUANTA_OPTICAL_TOUCH_3008 0x3008
++#define USB_DEVICE_ID_QUANTA_HP_5MP_CAMERA_5473 0x5473
+
+ #define I2C_VENDOR_ID_RAYDIUM 0x2386
+ #define I2C_PRODUCT_ID_RAYDIUM_4B33 0x4b33
+@@ -1295,6 +1297,7 @@
+ #define USB_VENDOR_ID_TOPRE 0x0853
+ #define USB_DEVICE_ID_TOPRE_REALFORCE_R2_108 0x0148
+ #define USB_DEVICE_ID_TOPRE_REALFORCE_R2_87 0x0146
++#define USB_DEVICE_ID_TOPRE_REALFORCE_R3S_87 0x0313
+
+ #define USB_VENDOR_ID_TOPSEED 0x0766
+ #define USB_DEVICE_ID_TOPSEED_CYBERLINK 0x0204
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index e0bbf0c6345d68..5d7a418ccdbecf 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -891,6 +891,7 @@ static const struct hid_device_id hid_ignore_list[] = {
+ { HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_DPAD) },
+ #endif
+ { HID_USB_DEVICE(USB_VENDOR_ID_YEALINK, USB_DEVICE_ID_YEALINK_P1K_P4K_B2K) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_QUANTA, USB_DEVICE_ID_QUANTA_HP_5MP_CAMERA_5473) },
+ { }
+ };
+
+diff --git a/drivers/hid/hid-steam.c b/drivers/hid/hid-steam.c
+index 19b7bb0c3d7f99..9de875f27c246a 100644
+--- a/drivers/hid/hid-steam.c
++++ b/drivers/hid/hid-steam.c
+@@ -1051,10 +1051,10 @@ static void steam_mode_switch_cb(struct work_struct *work)
+ struct steam_device, mode_switch);
+ unsigned long flags;
+ bool client_opened;
+- steam->gamepad_mode = !steam->gamepad_mode;
+ if (!lizard_mode)
+ return;
+
++ steam->gamepad_mode = !steam->gamepad_mode;
+ if (steam->gamepad_mode)
+ steam_set_lizard_mode(steam, false);
+ else {
+@@ -1623,7 +1623,7 @@ static void steam_do_deck_input_event(struct steam_device *steam,
+ schedule_delayed_work(&steam->mode_switch, 45 * HZ / 100);
+ }
+
+- if (!steam->gamepad_mode)
++ if (!steam->gamepad_mode && lizard_mode)
+ return;
+
+ lpad_touched = b10 & BIT(3);
+@@ -1693,7 +1693,7 @@ static void steam_do_deck_sensors_event(struct steam_device *steam,
+ */
+ steam->sensor_timestamp_us += 4000;
+
+- if (!steam->gamepad_mode)
++ if (!steam->gamepad_mode && lizard_mode)
+ return;
+
+ input_event(sensors, EV_MSC, MSC_TIMESTAMP, steam->sensor_timestamp_us);
+diff --git a/drivers/hid/hid-topre.c b/drivers/hid/hid-topre.c
+index 848361f6225df1..ccedf8721722ec 100644
+--- a/drivers/hid/hid-topre.c
++++ b/drivers/hid/hid-topre.c
+@@ -29,6 +29,11 @@ static const __u8 *topre_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ hid_info(hdev,
+ "fixing up Topre REALFORCE keyboard report descriptor\n");
+ rdesc[72] = 0x02;
++ } else if (*rsize >= 106 && rdesc[28] == 0x29 && rdesc[29] == 0xe7 &&
++ rdesc[30] == 0x81 && rdesc[31] == 0x00) {
++ hid_info(hdev,
++ "fixing up Topre REALFORCE keyboard report descriptor\n");
++ rdesc[31] = 0x02;
+ }
+ return rdesc;
+ }
+@@ -38,6 +43,8 @@ static const struct hid_device_id topre_id_table[] = {
+ USB_DEVICE_ID_TOPRE_REALFORCE_R2_108) },
+ { HID_USB_DEVICE(USB_VENDOR_ID_TOPRE,
+ USB_DEVICE_ID_TOPRE_REALFORCE_R2_87) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_TOPRE,
++ USB_DEVICE_ID_TOPRE_REALFORCE_R3S_87) },
+ { }
+ };
+ MODULE_DEVICE_TABLE(hid, topre_id_table);
+diff --git a/drivers/hid/intel-ish-hid/ipc/hw-ish.h b/drivers/hid/intel-ish-hid/ipc/hw-ish.h
+index cdd80c653918b2..07e90d51f073cc 100644
+--- a/drivers/hid/intel-ish-hid/ipc/hw-ish.h
++++ b/drivers/hid/intel-ish-hid/ipc/hw-ish.h
+@@ -36,6 +36,8 @@
+ #define PCI_DEVICE_ID_INTEL_ISH_ARL_H 0x7745
+ #define PCI_DEVICE_ID_INTEL_ISH_ARL_S 0x7F78
+ #define PCI_DEVICE_ID_INTEL_ISH_LNL_M 0xA845
++#define PCI_DEVICE_ID_INTEL_ISH_PTL_H 0xE345
++#define PCI_DEVICE_ID_INTEL_ISH_PTL_P 0xE445
+
+ #define REVISION_ID_CHT_A0 0x6
+ #define REVISION_ID_CHT_Ax_SI 0x0
+diff --git a/drivers/hid/intel-ish-hid/ipc/ipc.c b/drivers/hid/intel-ish-hid/ipc/ipc.c
+index 3cd53fc80634a6..4c861119e97aa0 100644
+--- a/drivers/hid/intel-ish-hid/ipc/ipc.c
++++ b/drivers/hid/intel-ish-hid/ipc/ipc.c
+@@ -517,6 +517,10 @@ static int ish_fw_reset_handler(struct ishtp_device *dev)
+ /* ISH FW is dead */
+ if (!ish_is_input_ready(dev))
+ return -EPIPE;
++
++ /* Send clock sync at once after reset */
++ ishtp_dev->prev_sync = 0;
++
+ /*
+ * Set HOST2ISH.ILUP. Apparently we need this BEFORE sending
+ * RESET_NOTIFY_ACK - FW will be checking for it
+@@ -577,15 +581,14 @@ static void fw_reset_work_fn(struct work_struct *work)
+ */
+ static void _ish_sync_fw_clock(struct ishtp_device *dev)
+ {
+- static unsigned long prev_sync;
+- uint64_t usec;
++ struct ipc_time_update_msg time = {};
+
+- if (prev_sync && time_before(jiffies, prev_sync + 20 * HZ))
++ if (dev->prev_sync && time_before(jiffies, dev->prev_sync + 20 * HZ))
+ return;
+
+- prev_sync = jiffies;
+- usec = ktime_to_us(ktime_get_boottime());
+- ipc_send_mng_msg(dev, MNG_SYNC_FW_CLOCK, &usec, sizeof(uint64_t));
++ dev->prev_sync = jiffies;
++	/* The fields of time will be updated while sending the message */
++ ipc_send_mng_msg(dev, MNG_SYNC_FW_CLOCK, &time, sizeof(time));
+ }
+
+ /**
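Aside: the ipc.c hunks above move the clock-sync timestamp from a function-static into struct ishtp_device, making it per device and lettable to zero in the reset handler (prev_sync = 0) to force an immediate resync. A userspace sketch of that rate-limit shape; jiffies_now and the names are stand-ins, and the kernel's time_before() additionally copes with jiffies wraparound, which the plain comparison below does not:

  #include <stdbool.h>
  #include <stdio.h>

  struct dev {
      unsigned long prev_sync;   /* 0 means "sync immediately" */
  };

  static unsigned long jiffies_now;   /* stand-in for the kernel's jiffies */

  static bool sync_clock(struct dev *d, unsigned long min_interval)
  {
      if (d->prev_sync && jiffies_now < d->prev_sync + min_interval)
          return false;          /* rate limited */
      d->prev_sync = jiffies_now;
      return true;               /* would send MNG_SYNC_FW_CLOCK here */
  }

  int main(void)
  {
      struct dev a = { 0 };

      jiffies_now = 100;
      printf("%d\n", sync_clock(&a, 20));   /* 1: first sync goes out */
      jiffies_now = 110;
      printf("%d\n", sync_clock(&a, 20));   /* 0: within the interval */
      a.prev_sync = 0;                      /* reset handler's forcing */
      printf("%d\n", sync_clock(&a, 20));   /* 1: resync after reset */
      return 0;
  }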
+diff --git a/drivers/hid/intel-ish-hid/ipc/pci-ish.c b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+index aae0d965b47b5e..1894743e880288 100644
+--- a/drivers/hid/intel-ish-hid/ipc/pci-ish.c
++++ b/drivers/hid/intel-ish-hid/ipc/pci-ish.c
+@@ -26,9 +26,11 @@
+ enum ishtp_driver_data_index {
+ ISHTP_DRIVER_DATA_NONE,
+ ISHTP_DRIVER_DATA_LNL_M,
++ ISHTP_DRIVER_DATA_PTL,
+ };
+
+ #define ISH_FW_GEN_LNL_M "lnlm"
++#define ISH_FW_GEN_PTL "ptl"
+
+ #define ISH_FIRMWARE_PATH(gen) "intel/ish/ish_" gen ".bin"
+ #define ISH_FIRMWARE_PATH_ALL "intel/ish/ish_*.bin"
+@@ -37,6 +39,9 @@ static struct ishtp_driver_data ishtp_driver_data[] = {
+ [ISHTP_DRIVER_DATA_LNL_M] = {
+ .fw_generation = ISH_FW_GEN_LNL_M,
+ },
++ [ISHTP_DRIVER_DATA_PTL] = {
++ .fw_generation = ISH_FW_GEN_PTL,
++ },
+ };
+
+ static const struct pci_device_id ish_pci_tbl[] = {
+@@ -63,6 +68,8 @@ static const struct pci_device_id ish_pci_tbl[] = {
+ {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_ARL_H)},
+ {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_ARL_S)},
+ {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_LNL_M), .driver_data = ISHTP_DRIVER_DATA_LNL_M},
++ {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_PTL_H), .driver_data = ISHTP_DRIVER_DATA_PTL},
++ {PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ISH_PTL_P), .driver_data = ISHTP_DRIVER_DATA_PTL},
+ {}
+ };
+ MODULE_DEVICE_TABLE(pci, ish_pci_tbl);
+diff --git a/drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h b/drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h
+index cdacce0a4c9d7d..b35afefd036d40 100644
+--- a/drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h
++++ b/drivers/hid/intel-ish-hid/ishtp/ishtp-dev.h
+@@ -242,6 +242,8 @@ struct ishtp_device {
+ unsigned int ipc_tx_cnt;
+ unsigned long long ipc_tx_bytes_cnt;
+
++ /* Time of the last clock sync */
++ unsigned long prev_sync;
+ const struct ishtp_hw_ops *ops;
+ size_t mtu;
+ uint32_t ishtp_msg_hdr;
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 9b15f7daf50597..2b6749c9712ef2 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -2262,12 +2262,25 @@ void vmbus_free_mmio(resource_size_t start, resource_size_t size)
+ struct resource *iter;
+
+ mutex_lock(&hyperv_mmio_lock);
++
++ /*
++ * If all bytes of the MMIO range to be released are within the
++ * special case fb_mmio shadow region, skip releasing the shadow
++ * region since no corresponding __request_region() was done
++ * in vmbus_allocate_mmio().
++ */
++ if (fb_mmio && start >= fb_mmio->start &&
++ (start + size - 1 <= fb_mmio->end))
++ goto skip_shadow_release;
++
+ for (iter = hyperv_mmio; iter; iter = iter->sibling) {
+ if ((iter->start >= start + size) || (iter->end <= start))
+ continue;
+
+ __release_region(iter, start, size);
+ }
++
++skip_shadow_release:
+ release_mem_region(start, size);
+ mutex_unlock(&hyperv_mmio_lock);
+
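Aside: the vmbus_free_mmio() hunk above skips __release_region() when the whole freed range sits inside the fb_mmio shadow resource, using a closed-interval containment test. A minimal sketch of that test; struct range is a stand-in for struct resource, and size is assumed non-zero as at the kernel call sites:

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  struct range {
      uint64_t start;
      uint64_t end;   /* inclusive, like struct resource */
  };

  static bool range_contains(const struct range *outer,
                             uint64_t start, uint64_t size)
  {
      /* assumes size > 0, so start + size - 1 cannot underflow */
      return start >= outer->start && start + size - 1 <= outer->end;
  }

  int main(void)
  {
      struct range fb = { .start = 0x1000, .end = 0x1fff };

      printf("%d\n", range_contains(&fb, 0x1800, 0x100));   /* 1 */
      printf("%d\n", range_contains(&fb, 0x1f00, 0x200));   /* 0: spills past end */
      return 0;
  }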
+diff --git a/drivers/i2c/busses/i2c-ali1535.c b/drivers/i2c/busses/i2c-ali1535.c
+index 544c94e86b8967..1eac3583804058 100644
+--- a/drivers/i2c/busses/i2c-ali1535.c
++++ b/drivers/i2c/busses/i2c-ali1535.c
+@@ -485,6 +485,8 @@ MODULE_DEVICE_TABLE(pci, ali1535_ids);
+
+ static int ali1535_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ {
++ int ret;
++
+ if (ali1535_setup(dev)) {
+ dev_warn(&dev->dev,
+ "ALI1535 not detected, module not inserted.\n");
+@@ -496,7 +498,15 @@ static int ali1535_probe(struct pci_dev *dev, const struct pci_device_id *id)
+
+ snprintf(ali1535_adapter.name, sizeof(ali1535_adapter.name),
+ "SMBus ALI1535 adapter at %04x", ali1535_offset);
+- return i2c_add_adapter(&ali1535_adapter);
++ ret = i2c_add_adapter(&ali1535_adapter);
++ if (ret)
++ goto release_region;
++
++ return 0;
++
++release_region:
++ release_region(ali1535_smba, ALI1535_SMB_IOSIZE);
++ return ret;
+ }
+
+ static void ali1535_remove(struct pci_dev *dev)
+diff --git a/drivers/i2c/busses/i2c-ali15x3.c b/drivers/i2c/busses/i2c-ali15x3.c
+index 4761c720810227..418d11266671e3 100644
+--- a/drivers/i2c/busses/i2c-ali15x3.c
++++ b/drivers/i2c/busses/i2c-ali15x3.c
+@@ -472,6 +472,8 @@ MODULE_DEVICE_TABLE (pci, ali15x3_ids);
+
+ static int ali15x3_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ {
++ int ret;
++
+ if (ali15x3_setup(dev)) {
+ dev_err(&dev->dev,
+ "ALI15X3 not detected, module not inserted.\n");
+@@ -483,7 +485,15 @@ static int ali15x3_probe(struct pci_dev *dev, const struct pci_device_id *id)
+
+ snprintf(ali15x3_adapter.name, sizeof(ali15x3_adapter.name),
+ "SMBus ALI15X3 adapter at %04x", ali15x3_smba);
+- return i2c_add_adapter(&ali15x3_adapter);
++ ret = i2c_add_adapter(&ali15x3_adapter);
++ if (ret)
++ goto release_region;
++
++ return 0;
++
++release_region:
++ release_region(ali15x3_smba, ALI15X3_SMB_IOSIZE);
++ return ret;
+ }
+
+ static void ali15x3_remove(struct pci_dev *dev)
+diff --git a/drivers/i2c/busses/i2c-sis630.c b/drivers/i2c/busses/i2c-sis630.c
+index 3505cf29cedda3..a19c3d251804d5 100644
+--- a/drivers/i2c/busses/i2c-sis630.c
++++ b/drivers/i2c/busses/i2c-sis630.c
+@@ -509,6 +509,8 @@ MODULE_DEVICE_TABLE(pci, sis630_ids);
+
+ static int sis630_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ {
++ int ret;
++
+ if (sis630_setup(dev)) {
+ dev_err(&dev->dev,
+ "SIS630 compatible bus not detected, "
+@@ -522,7 +524,15 @@ static int sis630_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ snprintf(sis630_adapter.name, sizeof(sis630_adapter.name),
+ "SMBus SIS630 adapter at %04x", smbus_base + SMB_STS);
+
+- return i2c_add_adapter(&sis630_adapter);
++ ret = i2c_add_adapter(&sis630_adapter);
++ if (ret)
++ goto release_region;
++
++ return 0;
++
++release_region:
++ release_region(smbus_base + SMB_STS, SIS630_SMB_IOREGION);
++ return ret;
+ }
+
+ static void sis630_remove(struct pci_dev *dev)
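Aside: the three i2c hunks above (ali1535, ali15x3, sis630) share one fix: when i2c_add_adapter() fails in probe, the I/O region claimed during setup must be released before returning, via a goto unwind. A compact sketch of the pattern; claim()/register_adapter()/release() stand in for request_region(), i2c_add_adapter() and release_region():

  #include <stdio.h>

  static int claim(void)
  {
      printf("claim I/O region\n");
      return 0;
  }

  static int register_adapter(void)
  {
      return -1;   /* simulate i2c_add_adapter() failing */
  }

  static void release(void)
  {
      printf("release I/O region\n");
  }

  static int probe(void)
  {
      int ret;

      ret = claim();
      if (ret)
          return ret;

      ret = register_adapter();
      if (ret)
          goto release_region;   /* undo claim() before returning */

      return 0;

  release_region:
      release();
      return ret;
  }

  int main(void)
  {
      printf("probe: %d\n", probe());
      return 0;
  }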
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 77fddab9d9502e..a6c7951011308c 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -140,6 +140,7 @@ static const struct xpad_device {
+ { 0x044f, 0x0f00, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+ { 0x044f, 0x0f03, "Thrustmaster Wheel", 0, XTYPE_XBOX },
+ { 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX },
++ { 0x044f, 0xd01e, "ThrustMaster, Inc. ESWAP X 2 ELDEN RING EDITION", 0, XTYPE_XBOXONE },
+ { 0x044f, 0x0f10, "Thrustmaster Modena GT Wheel", 0, XTYPE_XBOX },
+ { 0x044f, 0xb326, "Thrustmaster Gamepad GP XID", 0, XTYPE_XBOX360 },
+ { 0x045e, 0x0202, "Microsoft X-Box pad v1 (US)", 0, XTYPE_XBOX },
+@@ -177,6 +178,7 @@ static const struct xpad_device {
+ { 0x06a3, 0x0200, "Saitek Racing Wheel", 0, XTYPE_XBOX },
+ { 0x06a3, 0x0201, "Saitek Adrenalin", 0, XTYPE_XBOX },
+ { 0x06a3, 0xf51a, "Saitek P3600", 0, XTYPE_XBOX360 },
++ { 0x0738, 0x4503, "Mad Catz Racing Wheel", 0, XTYPE_XBOXONE },
+ { 0x0738, 0x4506, "Mad Catz 4506 Wireless Controller", 0, XTYPE_XBOX },
+ { 0x0738, 0x4516, "Mad Catz Control Pad", 0, XTYPE_XBOX },
+ { 0x0738, 0x4520, "Mad Catz Control Pad Pro", 0, XTYPE_XBOX },
+@@ -238,6 +240,7 @@ static const struct xpad_device {
+ { 0x0e6f, 0x0146, "Rock Candy Wired Controller for Xbox One", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x0147, "PDP Marvel Xbox One Controller", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x015c, "PDP Xbox One Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
+ { 0x0e6f, 0x015d, "PDP Mirror's Edge Official Wired Controller for Xbox One", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x0161, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x0162, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
+ { 0x0e6f, 0x0163, "PDP Xbox One Controller", 0, XTYPE_XBOXONE },
+@@ -276,12 +279,15 @@ static const struct xpad_device {
+ { 0x0f0d, 0x0078, "Hori Real Arcade Pro V Kai Xbox One", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
+ { 0x0f0d, 0x00c5, "Hori Fighting Commander ONE", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOXONE },
+ { 0x0f0d, 0x00dc, "HORIPAD FPS for Nintendo Switch", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
++ { 0x0f0d, 0x0151, "Hori Racing Wheel Overdrive for Xbox Series X", 0, XTYPE_XBOXONE },
++ { 0x0f0d, 0x0152, "Hori Racing Wheel Overdrive for Xbox Series X", 0, XTYPE_XBOXONE },
+ { 0x0f30, 0x010b, "Philips Recoil", 0, XTYPE_XBOX },
+ { 0x0f30, 0x0202, "Joytech Advanced Controller", 0, XTYPE_XBOX },
+ { 0x0f30, 0x8888, "BigBen XBMiniPad Controller", 0, XTYPE_XBOX },
+ { 0x102c, 0xff0c, "Joytech Wireless Advanced Controller", 0, XTYPE_XBOX },
+ { 0x1038, 0x1430, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 },
+ { 0x1038, 0x1431, "SteelSeries Stratus Duo", 0, XTYPE_XBOX360 },
++ { 0x10f5, 0x7005, "Turtle Beach Recon Controller", 0, XTYPE_XBOXONE },
+ { 0x11c9, 0x55f0, "Nacon GC-100XF", 0, XTYPE_XBOX360 },
+ { 0x11ff, 0x0511, "PXN V900", 0, XTYPE_XBOX360 },
+ { 0x1209, 0x2882, "Ardwiino Controller", 0, XTYPE_XBOX360 },
+@@ -306,7 +312,7 @@ static const struct xpad_device {
+ { 0x1689, 0xfe00, "Razer Sabertooth", 0, XTYPE_XBOX360 },
+ { 0x17ef, 0x6182, "Lenovo Legion Controller for Windows", 0, XTYPE_XBOX360 },
+ { 0x1949, 0x041a, "Amazon Game Controller", 0, XTYPE_XBOX360 },
+- { 0x1a86, 0xe310, "QH Electronics Controller", 0, XTYPE_XBOX360 },
++ { 0x1a86, 0xe310, "Legion Go S", 0, XTYPE_XBOX360 },
+ { 0x1bad, 0x0002, "Harmonix Rock Band Guitar", 0, XTYPE_XBOX360 },
+ { 0x1bad, 0x0003, "Harmonix Rock Band Drumkit", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
+ { 0x1bad, 0x0130, "Ion Drum Rocker", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
+@@ -343,6 +349,7 @@ static const struct xpad_device {
+ { 0x1bad, 0xfa01, "MadCatz GamePad", 0, XTYPE_XBOX360 },
+ { 0x1bad, 0xfd00, "Razer Onza TE", 0, XTYPE_XBOX360 },
+ { 0x1bad, 0xfd01, "Razer Onza", 0, XTYPE_XBOX360 },
++ { 0x1ee9, 0x1590, "ZOTAC Gaming Zone", 0, XTYPE_XBOX360 },
+ { 0x20d6, 0x2001, "BDA Xbox Series X Wired Controller", 0, XTYPE_XBOXONE },
+ { 0x20d6, 0x2009, "PowerA Enhanced Wired Controller for Xbox Series X|S", 0, XTYPE_XBOXONE },
+ { 0x20d6, 0x281f, "PowerA Wired Controller For Xbox 360", 0, XTYPE_XBOX360 },
+@@ -366,6 +373,7 @@ static const struct xpad_device {
+ { 0x24c6, 0x5510, "Hori Fighting Commander ONE (Xbox 360/PC Mode)", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ { 0x24c6, 0x551a, "PowerA FUSION Pro Controller", 0, XTYPE_XBOXONE },
+ { 0x24c6, 0x561a, "PowerA FUSION Controller", 0, XTYPE_XBOXONE },
++ { 0x24c6, 0x581a, "ThrustMaster XB1 Classic Controller", 0, XTYPE_XBOXONE },
+ { 0x24c6, 0x5b00, "ThrustMaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 },
+ { 0x24c6, 0x5b02, "Thrustmaster, Inc. GPX Controller", 0, XTYPE_XBOX360 },
+ { 0x24c6, 0x5b03, "Thrustmaster Ferrari 458 Racing Wheel", 0, XTYPE_XBOX360 },
+@@ -374,10 +382,15 @@ static const struct xpad_device {
+ { 0x2563, 0x058d, "OneXPlayer Gamepad", 0, XTYPE_XBOX360 },
+ { 0x294b, 0x3303, "Snakebyte GAMEPAD BASE X", 0, XTYPE_XBOXONE },
+ { 0x294b, 0x3404, "Snakebyte GAMEPAD RGB X", 0, XTYPE_XBOXONE },
++ { 0x2993, 0x2001, "TECNO Pocket Go", 0, XTYPE_XBOX360 },
+ { 0x2dc8, 0x2000, "8BitDo Pro 2 Wired Controller fox Xbox", 0, XTYPE_XBOXONE },
+ { 0x2dc8, 0x3106, "8BitDo Ultimate Wireless / Pro 2 Wired Controller", 0, XTYPE_XBOX360 },
++ { 0x2dc8, 0x3109, "8BitDo Ultimate Wireless Bluetooth", 0, XTYPE_XBOX360 },
+ { 0x2dc8, 0x310a, "8BitDo Ultimate 2C Wireless Controller", 0, XTYPE_XBOX360 },
++ { 0x2dc8, 0x6001, "8BitDo SN30 Pro", 0, XTYPE_XBOX360 },
+ { 0x2e24, 0x0652, "Hyperkin Duke X-Box One pad", 0, XTYPE_XBOXONE },
++ { 0x2e24, 0x1688, "Hyperkin X91 X-Box One pad", 0, XTYPE_XBOXONE },
++ { 0x2e95, 0x0504, "SCUF Gaming Controller", MAP_SELECT_BUTTON, XTYPE_XBOXONE },
+ { 0x31e3, 0x1100, "Wooting One", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1200, "Wooting Two", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1210, "Wooting Lekker", 0, XTYPE_XBOX360 },
+@@ -385,11 +398,16 @@ static const struct xpad_device {
+ { 0x31e3, 0x1230, "Wooting Two HE (ARM)", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1300, "Wooting 60HE (AVR)", 0, XTYPE_XBOX360 },
+ { 0x31e3, 0x1310, "Wooting 60HE (ARM)", 0, XTYPE_XBOX360 },
++ { 0x3285, 0x0603, "Nacon Pro Compact controller for Xbox", 0, XTYPE_XBOXONE },
+ { 0x3285, 0x0607, "Nacon GC-100", 0, XTYPE_XBOX360 },
++ { 0x3285, 0x0614, "Nacon Pro Compact", 0, XTYPE_XBOXONE },
+ { 0x3285, 0x0646, "Nacon Pro Compact", 0, XTYPE_XBOXONE },
++ { 0x3285, 0x0662, "Nacon Revolution5 Pro", 0, XTYPE_XBOX360 },
+ { 0x3285, 0x0663, "Nacon Evol-X", 0, XTYPE_XBOXONE },
+ { 0x3537, 0x1004, "GameSir T4 Kaleid", 0, XTYPE_XBOX360 },
++ { 0x3537, 0x1010, "GameSir G7 SE", 0, XTYPE_XBOXONE },
+ { 0x3767, 0x0101, "Fanatec Speedster 3 Forceshock Wheel", 0, XTYPE_XBOX },
++ { 0x413d, 0x2104, "Black Shark Green Ghost Gamepad", 0, XTYPE_XBOX360 },
+ { 0xffff, 0xffff, "Chinese-made Xbox Controller", 0, XTYPE_XBOX },
+ { 0x0000, 0x0000, "Generic X-Box pad", 0, XTYPE_UNKNOWN }
+ };
+@@ -488,6 +506,7 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x03f0), /* HP HyperX Xbox 360 controllers */
+ XPAD_XBOXONE_VENDOR(0x03f0), /* HP HyperX Xbox One controllers */
+ XPAD_XBOX360_VENDOR(0x044f), /* Thrustmaster Xbox 360 controllers */
++ XPAD_XBOXONE_VENDOR(0x044f), /* Thrustmaster Xbox One controllers */
+ XPAD_XBOX360_VENDOR(0x045e), /* Microsoft Xbox 360 controllers */
+ XPAD_XBOXONE_VENDOR(0x045e), /* Microsoft Xbox One controllers */
+ XPAD_XBOX360_VENDOR(0x046d), /* Logitech Xbox 360-style controllers */
+@@ -519,8 +538,9 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOX360_VENDOR(0x1689), /* Razer Onza */
+ XPAD_XBOX360_VENDOR(0x17ef), /* Lenovo */
+ XPAD_XBOX360_VENDOR(0x1949), /* Amazon controllers */
+- XPAD_XBOX360_VENDOR(0x1a86), /* QH Electronics */
++ XPAD_XBOX360_VENDOR(0x1a86), /* Nanjing Qinheng Microelectronics (WCH) */
+ XPAD_XBOX360_VENDOR(0x1bad), /* Harmonix Rock Band guitar and drums */
++ XPAD_XBOX360_VENDOR(0x1ee9), /* ZOTAC Technology Limited */
+ XPAD_XBOX360_VENDOR(0x20d6), /* PowerA controllers */
+ XPAD_XBOXONE_VENDOR(0x20d6), /* PowerA controllers */
+ XPAD_XBOX360_VENDOR(0x2345), /* Machenike Controllers */
+@@ -528,17 +548,20 @@ static const struct usb_device_id xpad_table[] = {
+ XPAD_XBOXONE_VENDOR(0x24c6), /* PowerA controllers */
+ XPAD_XBOX360_VENDOR(0x2563), /* OneXPlayer Gamepad */
+ XPAD_XBOX360_VENDOR(0x260d), /* Dareu H101 */
+- XPAD_XBOXONE_VENDOR(0x294b), /* Snakebyte */
++ XPAD_XBOXONE_VENDOR(0x294b), /* Snakebyte */
++ XPAD_XBOX360_VENDOR(0x2993), /* TECNO Mobile */
+ XPAD_XBOX360_VENDOR(0x2c22), /* Qanba Controllers */
+- XPAD_XBOX360_VENDOR(0x2dc8), /* 8BitDo Pro 2 Wired Controller */
+- XPAD_XBOXONE_VENDOR(0x2dc8), /* 8BitDo Pro 2 Wired Controller for Xbox */
+- XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Duke Xbox One pad */
+- XPAD_XBOX360_VENDOR(0x2f24), /* GameSir controllers */
++ XPAD_XBOX360_VENDOR(0x2dc8), /* 8BitDo Controllers */
++ XPAD_XBOXONE_VENDOR(0x2dc8), /* 8BitDo Controllers */
++ XPAD_XBOXONE_VENDOR(0x2e24), /* Hyperkin Controllers */
++ XPAD_XBOX360_VENDOR(0x2f24), /* GameSir Controllers */
++ XPAD_XBOXONE_VENDOR(0x2e95), /* SCUF Gaming Controller */
+ XPAD_XBOX360_VENDOR(0x31e3), /* Wooting Keyboards */
+ XPAD_XBOX360_VENDOR(0x3285), /* Nacon GC-100 */
+ XPAD_XBOXONE_VENDOR(0x3285), /* Nacon Evol-X */
+ XPAD_XBOX360_VENDOR(0x3537), /* GameSir Controllers */
+ XPAD_XBOXONE_VENDOR(0x3537), /* GameSir Controllers */
++ XPAD_XBOX360_VENDOR(0x413d), /* Black Shark Green Ghost Controller */
+ { }
+ };
+
+@@ -691,7 +714,9 @@ static const struct xboxone_init_packet xboxone_init_packets[] = {
+ XBOXONE_INIT_PKT(0x045e, 0x0b00, xboxone_s_init),
+ XBOXONE_INIT_PKT(0x045e, 0x0b00, extra_input_packet_init),
+ XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_led_on),
++ XBOXONE_INIT_PKT(0x20d6, 0xa01a, xboxone_pdp_led_on),
+ XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_auth),
++ XBOXONE_INIT_PKT(0x20d6, 0xa01a, xboxone_pdp_auth),
+ XBOXONE_INIT_PKT(0x24c6, 0x541a, xboxone_rumblebegin_init),
+ XBOXONE_INIT_PKT(0x24c6, 0x542a, xboxone_rumblebegin_init),
+ XBOXONE_INIT_PKT(0x24c6, 0x543a, xboxone_rumblebegin_init),
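For reference, each xpad_device entry carries five fields — vendor ID, product ID, name, mapping flags, and xtype — so the flags column must be spelled out (0 when unused) or the xtype value silently lands in the wrong slot. A simplified, self-contained mock-up of the table layout (the struct, constants, and entry below are illustrative, not the driver's actual definitions):

    #include <stdint.h>
    #include <stdio.h>

    enum { XTYPE_XBOX, XTYPE_XBOX360, XTYPE_XBOXONE };

    /* Simplified mirror of the five-field table layout. */
    struct pad_entry {
        uint16_t vendor;
        uint16_t product;
        const char *name;
        uint8_t mapping;   /* remap flags, or 0 when none apply */
        uint8_t xtype;     /* controller generation */
    };

    static const struct pad_entry pads[] = {
        { 0x1234, 0x5678, "Hypothetical Pad", 0, XTYPE_XBOXONE },
    };

    int main(void)
    {
        printf("%s: xtype=%d\n", pads[0].name, pads[0].xtype);
        return 0;
    }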
+diff --git a/drivers/input/misc/iqs7222.c b/drivers/input/misc/iqs7222.c
+index be80a31de9f8f4..01c4009fd53e7b 100644
+--- a/drivers/input/misc/iqs7222.c
++++ b/drivers/input/misc/iqs7222.c
+@@ -100,11 +100,11 @@ enum iqs7222_reg_key_id {
+
+ enum iqs7222_reg_grp_id {
+ IQS7222_REG_GRP_STAT,
+- IQS7222_REG_GRP_FILT,
+ IQS7222_REG_GRP_CYCLE,
+ IQS7222_REG_GRP_GLBL,
+ IQS7222_REG_GRP_BTN,
+ IQS7222_REG_GRP_CHAN,
++ IQS7222_REG_GRP_FILT,
+ IQS7222_REG_GRP_SLDR,
+ IQS7222_REG_GRP_TPAD,
+ IQS7222_REG_GRP_GPIO,
+@@ -286,6 +286,7 @@ static const struct iqs7222_event_desc iqs7222_tp_events[] = {
+
+ struct iqs7222_reg_grp_desc {
+ u16 base;
++ u16 val_len;
+ int num_row;
+ int num_col;
+ };
+@@ -342,6 +343,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ },
+ [IQS7222_REG_GRP_FILT] = {
+ .base = 0xAC00,
++ .val_len = 3,
+ .num_row = 1,
+ .num_col = 2,
+ },
+@@ -400,6 +402,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ },
+ [IQS7222_REG_GRP_FILT] = {
+ .base = 0xAC00,
++ .val_len = 3,
+ .num_row = 1,
+ .num_col = 2,
+ },
+@@ -454,6 +457,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ },
+ [IQS7222_REG_GRP_FILT] = {
+ .base = 0xC400,
++ .val_len = 3,
+ .num_row = 1,
+ .num_col = 2,
+ },
+@@ -496,6 +500,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ },
+ [IQS7222_REG_GRP_FILT] = {
+ .base = 0xC400,
++ .val_len = 3,
+ .num_row = 1,
+ .num_col = 2,
+ },
+@@ -543,6 +548,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ },
+ [IQS7222_REG_GRP_FILT] = {
+ .base = 0xAA00,
++ .val_len = 3,
+ .num_row = 1,
+ .num_col = 2,
+ },
+@@ -600,6 +606,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ },
+ [IQS7222_REG_GRP_FILT] = {
+ .base = 0xAA00,
++ .val_len = 3,
+ .num_row = 1,
+ .num_col = 2,
+ },
+@@ -656,6 +663,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ },
+ [IQS7222_REG_GRP_FILT] = {
+ .base = 0xAE00,
++ .val_len = 3,
+ .num_row = 1,
+ .num_col = 2,
+ },
+@@ -712,6 +720,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ },
+ [IQS7222_REG_GRP_FILT] = {
+ .base = 0xAE00,
++ .val_len = 3,
+ .num_row = 1,
+ .num_col = 2,
+ },
+@@ -768,6 +777,7 @@ static const struct iqs7222_dev_desc iqs7222_devs[] = {
+ },
+ [IQS7222_REG_GRP_FILT] = {
+ .base = 0xAE00,
++ .val_len = 3,
+ .num_row = 1,
+ .num_col = 2,
+ },
+@@ -1604,7 +1614,7 @@ static int iqs7222_force_comms(struct iqs7222_private *iqs7222)
+ }
+
+ static int iqs7222_read_burst(struct iqs7222_private *iqs7222,
+- u16 reg, void *val, u16 num_val)
++ u16 reg, void *val, u16 val_len)
+ {
+ u8 reg_buf[sizeof(__be16)];
+ int ret, i;
+@@ -1619,7 +1629,7 @@ static int iqs7222_read_burst(struct iqs7222_private *iqs7222,
+ {
+ .addr = client->addr,
+ .flags = I2C_M_RD,
+- .len = num_val * sizeof(__le16),
++ .len = val_len,
+ .buf = (u8 *)val,
+ },
+ };
+@@ -1675,7 +1685,7 @@ static int iqs7222_read_word(struct iqs7222_private *iqs7222, u16 reg, u16 *val)
+ __le16 val_buf;
+ int error;
+
+- error = iqs7222_read_burst(iqs7222, reg, &val_buf, 1);
++ error = iqs7222_read_burst(iqs7222, reg, &val_buf, sizeof(val_buf));
+ if (error)
+ return error;
+
+@@ -1685,10 +1695,9 @@ static int iqs7222_read_word(struct iqs7222_private *iqs7222, u16 reg, u16 *val)
+ }
+
+ static int iqs7222_write_burst(struct iqs7222_private *iqs7222,
+- u16 reg, const void *val, u16 num_val)
++ u16 reg, const void *val, u16 val_len)
+ {
+ int reg_len = reg > U8_MAX ? sizeof(reg) : sizeof(u8);
+- int val_len = num_val * sizeof(__le16);
+ int msg_len = reg_len + val_len;
+ int ret, i;
+ struct i2c_client *client = iqs7222->client;
+@@ -1747,7 +1756,7 @@ static int iqs7222_write_word(struct iqs7222_private *iqs7222, u16 reg, u16 val)
+ {
+ __le16 val_buf = cpu_to_le16(val);
+
+- return iqs7222_write_burst(iqs7222, reg, &val_buf, 1);
++ return iqs7222_write_burst(iqs7222, reg, &val_buf, sizeof(val_buf));
+ }
+
+ static int iqs7222_ati_trigger(struct iqs7222_private *iqs7222)
+@@ -1831,30 +1840,14 @@ static int iqs7222_dev_init(struct iqs7222_private *iqs7222, int dir)
+
+ /*
+ * Acknowledge reset before writing any registers in case the device
+- * suffers a spurious reset during initialization. Because this step
+- * may change the reserved fields of the second filter beta register,
+- * its cache must be updated.
+- *
+- * Writing the second filter beta register, in turn, may clobber the
+- * system status register. As such, the filter beta register pair is
+- * written first to protect against this hazard.
++ * suffers a spurious reset during initialization.
+ */
+ if (dir == WRITE) {
+- u16 reg = dev_desc->reg_grps[IQS7222_REG_GRP_FILT].base + 1;
+- u16 filt_setup;
+-
+ error = iqs7222_write_word(iqs7222, IQS7222_SYS_SETUP,
+ iqs7222->sys_setup[0] |
+ IQS7222_SYS_SETUP_ACK_RESET);
+ if (error)
+ return error;
+-
+- error = iqs7222_read_word(iqs7222, reg, &filt_setup);
+- if (error)
+- return error;
+-
+- iqs7222->filt_setup[1] &= GENMASK(7, 0);
+- iqs7222->filt_setup[1] |= (filt_setup & ~GENMASK(7, 0));
+ }
+
+ /*
+@@ -1883,6 +1876,7 @@ static int iqs7222_dev_init(struct iqs7222_private *iqs7222, int dir)
+ int num_col = dev_desc->reg_grps[i].num_col;
+ u16 reg = dev_desc->reg_grps[i].base;
+ __le16 *val_buf;
++ u16 val_len = dev_desc->reg_grps[i].val_len ? : num_col * sizeof(*val_buf);
+ u16 *val;
+
+ if (!num_col)
+@@ -1900,7 +1894,7 @@ static int iqs7222_dev_init(struct iqs7222_private *iqs7222, int dir)
+ switch (dir) {
+ case READ:
+ error = iqs7222_read_burst(iqs7222, reg,
+- val_buf, num_col);
++ val_buf, val_len);
+ for (k = 0; k < num_col; k++)
+ val[k] = le16_to_cpu(val_buf[k]);
+ break;
+@@ -1909,7 +1903,7 @@ static int iqs7222_dev_init(struct iqs7222_private *iqs7222, int dir)
+ for (k = 0; k < num_col; k++)
+ val_buf[k] = cpu_to_le16(val[k]);
+ error = iqs7222_write_burst(iqs7222, reg,
+- val_buf, num_col);
++ val_buf, val_len);
+ break;
+
+ default:
+@@ -1962,7 +1956,7 @@ static int iqs7222_dev_info(struct iqs7222_private *iqs7222)
+ int error, i;
+
+ error = iqs7222_read_burst(iqs7222, IQS7222_PROD_NUM, dev_id,
+- ARRAY_SIZE(dev_id));
++ sizeof(dev_id));
+ if (error)
+ return error;
+
+@@ -2917,7 +2911,7 @@ static int iqs7222_report(struct iqs7222_private *iqs7222)
+ __le16 status[IQS7222_MAX_COLS_STAT];
+
+ error = iqs7222_read_burst(iqs7222, IQS7222_SYS_STATUS, status,
+- num_stat);
++ num_stat * sizeof(*status));
+ if (error)
+ return error;
+
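The iqs7222 hunks change the burst helpers to take a length in bytes rather than a count of 16-bit words, which is what lets register groups declare an explicit byte length (e.g. the 3-byte FILT groups above) instead of always transferring num_col whole words. A self-contained sketch of the new calling convention (read_burst below is a stub, not the driver's I2C transfer):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Stub: pretend to read 'val_len' BYTES starting at a register. */
    static int read_burst(uint16_t reg, void *val, uint16_t val_len)
    {
        memset(val, 0xAB, val_len);
        printf("reg 0x%04x: %u bytes\n", reg, (unsigned)val_len);
        return 0;
    }

    int main(void)
    {
        uint16_t word;
        uint16_t words[4];

        /* Callers now pass sizeof(...) instead of an element count,
         * so odd-sized groups work without over-reading. */
        read_burst(0xAC00, &word, sizeof(word));    /* 2 bytes */
        read_burst(0xAC00, words, sizeof(words));   /* 8 bytes */
        return 0;
    }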
+diff --git a/drivers/input/serio/i8042-acpipnpio.h b/drivers/input/serio/i8042-acpipnpio.h
+index 34d1f07ea4c304..8813db7eec3978 100644
+--- a/drivers/input/serio/i8042-acpipnpio.h
++++ b/drivers/input/serio/i8042-acpipnpio.h
+@@ -1080,16 +1080,14 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"),
+ DMI_MATCH(DMI_BOARD_NAME, "AURA1501"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "TUXEDO"),
+ DMI_MATCH(DMI_BOARD_NAME, "EDUBOOK1502"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ /* Mivvy M310 */
+@@ -1159,9 +1157,7 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ },
+ /*
+ * A lot of modern Clevo barebones have touchpad and/or keyboard issues
+- * after suspend fixable with nomux + reset + noloop + nopnp. Luckily,
+- * none of them have an external PS/2 port so this can safely be set for
+- * all of them.
++ * after suspend fixable with the forcenorestore quirk.
+ * Clevo barebones come with board_vendor and/or system_vendor set to
+ * either the very generic string "Notebook" and/or a different value
+ * for each individual reseller. The only somewhat universal way to
+@@ -1171,29 +1167,25 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "LAPQC71A"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "LAPQC71B"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "N140CU"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "N141CU"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ .matches = {
+@@ -1205,29 +1197,19 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "NH5xAx"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+- /*
+- * Setting SERIO_QUIRK_NOMUX or SERIO_QUIRK_RESET_ALWAYS makes
+- * the keyboard very laggy for ~5 seconds after boot and
+- * sometimes also after resume.
+- * However both are required for the keyboard to not fail
+- * completely sometimes after boot or resume.
+- */
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "NHxxRZQ"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "NL5xRU"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ /*
+ * At least one modern Clevo barebone has the touchpad connected both
+@@ -1243,17 +1225,15 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "NS50MU"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX |
+- SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP |
+- SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_NOAUX |
++ SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "NS50_70MU"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOAUX | SERIO_QUIRK_NOMUX |
+- SERIO_QUIRK_RESET_ALWAYS | SERIO_QUIRK_NOLOOP |
+- SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_NOAUX |
++ SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ .matches = {
+@@ -1265,8 +1245,13 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "NJ50_70CU"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "P640RE"),
++ },
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ /*
+@@ -1277,16 +1262,14 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ .matches = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "P65xH"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ /* Clevo P650RS, 650RP6, Sager NP8152-S, and others */
+ .matches = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "P65xRP"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ /*
+@@ -1297,8 +1280,7 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ .matches = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "P65_P67H"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ /*
+@@ -1309,8 +1291,7 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ .matches = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "P65_67RP"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ /*
+@@ -1321,8 +1302,7 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ .matches = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "P65_67RS"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ /*
+@@ -1333,22 +1313,43 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ .matches = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "P67xRP"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "PB50_70DFx,DDx"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "PB51RF"),
++ },
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "PB71RD"),
++ },
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "PC70DR"),
++ },
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "PCX0DX"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
++ },
++ {
++ .matches = {
++ DMI_MATCH(DMI_BOARD_NAME, "PCX0DX_GN20"),
++ },
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ /* See comment on TUXEDO InfinityBook S17 Gen6 / Clevo NS70MU above */
+ {
+@@ -1361,15 +1362,13 @@ static const struct dmi_system_id i8042_dmi_quirk_table[] __initconst = {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "X170SM"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ .matches = {
+ DMI_MATCH(DMI_BOARD_NAME, "X170KM-G"),
+ },
+- .driver_data = (void *)(SERIO_QUIRK_NOMUX | SERIO_QUIRK_RESET_ALWAYS |
+- SERIO_QUIRK_NOLOOP | SERIO_QUIRK_NOPNP)
++ .driver_data = (void *)(SERIO_QUIRK_FORCENORESTORE)
+ },
+ {
+ /*
+diff --git a/drivers/input/touchscreen/ads7846.c b/drivers/input/touchscreen/ads7846.c
+index 607f18af70104d..212dafa0bba2d1 100644
+--- a/drivers/input/touchscreen/ads7846.c
++++ b/drivers/input/touchscreen/ads7846.c
+@@ -1011,7 +1011,7 @@ static int ads7846_setup_pendown(struct spi_device *spi,
+ if (pdata->get_pendown_state) {
+ ts->get_pendown_state = pdata->get_pendown_state;
+ } else {
+- ts->gpio_pendown = gpiod_get(&spi->dev, "pendown", GPIOD_IN);
++ ts->gpio_pendown = devm_gpiod_get(&spi->dev, "pendown", GPIOD_IN);
+ if (IS_ERR(ts->gpio_pendown)) {
+ dev_err(&spi->dev, "failed to request pendown GPIO\n");
+ return PTR_ERR(ts->gpio_pendown);
+diff --git a/drivers/input/touchscreen/goodix_berlin_core.c b/drivers/input/touchscreen/goodix_berlin_core.c
+index 3fc03cf0ca23fd..141c64675997db 100644
+--- a/drivers/input/touchscreen/goodix_berlin_core.c
++++ b/drivers/input/touchscreen/goodix_berlin_core.c
+@@ -165,7 +165,7 @@ struct goodix_berlin_core {
+ struct device *dev;
+ struct regmap *regmap;
+ struct regulator *avdd;
+- struct regulator *iovdd;
++ struct regulator *vddio;
+ struct gpio_desc *reset_gpio;
+ struct touchscreen_properties props;
+ struct goodix_berlin_fw_version fw_version;
+@@ -248,19 +248,19 @@ static int goodix_berlin_power_on(struct goodix_berlin_core *cd)
+ {
+ int error;
+
+- error = regulator_enable(cd->iovdd);
++ error = regulator_enable(cd->vddio);
+ if (error) {
+- dev_err(cd->dev, "Failed to enable iovdd: %d\n", error);
++ dev_err(cd->dev, "Failed to enable vddio: %d\n", error);
+ return error;
+ }
+
+- /* Vendor waits 3ms for IOVDD to settle */
++ /* Vendor waits 3ms for VDDIO to settle */
+ usleep_range(3000, 3100);
+
+ error = regulator_enable(cd->avdd);
+ if (error) {
+ dev_err(cd->dev, "Failed to enable avdd: %d\n", error);
+- goto err_iovdd_disable;
++ goto err_vddio_disable;
+ }
+
+ /* Vendor waits 15ms for IOVDD to settle */
+@@ -283,8 +283,8 @@ static int goodix_berlin_power_on(struct goodix_berlin_core *cd)
+ err_dev_reset:
+ gpiod_set_value_cansleep(cd->reset_gpio, 1);
+ regulator_disable(cd->avdd);
+-err_iovdd_disable:
+- regulator_disable(cd->iovdd);
++err_vddio_disable:
++ regulator_disable(cd->vddio);
+ return error;
+ }
+
+@@ -292,7 +292,7 @@ static void goodix_berlin_power_off(struct goodix_berlin_core *cd)
+ {
+ gpiod_set_value_cansleep(cd->reset_gpio, 1);
+ regulator_disable(cd->avdd);
+- regulator_disable(cd->iovdd);
++ regulator_disable(cd->vddio);
+ }
+
+ static int goodix_berlin_read_version(struct goodix_berlin_core *cd)
+@@ -744,10 +744,10 @@ int goodix_berlin_probe(struct device *dev, int irq, const struct input_id *id,
+ return dev_err_probe(dev, PTR_ERR(cd->avdd),
+ "Failed to request avdd regulator\n");
+
+- cd->iovdd = devm_regulator_get(dev, "iovdd");
+- if (IS_ERR(cd->iovdd))
+- return dev_err_probe(dev, PTR_ERR(cd->iovdd),
+- "Failed to request iovdd regulator\n");
++ cd->vddio = devm_regulator_get(dev, "vddio");
++ if (IS_ERR(cd->vddio))
++ return dev_err_probe(dev, PTR_ERR(cd->vddio),
++ "Failed to request vddio regulator\n");
+
+ error = goodix_berlin_power_on(cd);
+ if (error) {
+diff --git a/drivers/md/dm-flakey.c b/drivers/md/dm-flakey.c
+index 731467d4ed101c..b690905ab89ffb 100644
+--- a/drivers/md/dm-flakey.c
++++ b/drivers/md/dm-flakey.c
+@@ -426,7 +426,7 @@ static struct bio *clone_bio(struct dm_target *ti, struct flakey_c *fc, struct b
+ if (!clone)
+ return NULL;
+
+- bio_init(clone, fc->dev->bdev, bio->bi_inline_vecs, nr_iovecs, bio->bi_opf);
++ bio_init(clone, fc->dev->bdev, clone->bi_inline_vecs, nr_iovecs, bio->bi_opf);
+
+ clone->bi_iter.bi_sector = flakey_map_sector(ti, bio->bi_iter.bi_sector);
+ clone->bi_private = bio;
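The dm-flakey one-liner is subtle: bio_init() must be handed the inline vector table belonging to the bio being initialized, and the old code pointed it at the source bio's vectors instead. A kernel-style sketch of the corrected relationship (init_clone is a hypothetical helper, assuming the clone was allocated with nr_iovecs inline vecs):

    #include <linux/bio.h>

    /* Sketch: the clone must own the vector storage it is built on. */
    static void init_clone(struct bio *clone, struct bio *src,
                           struct block_device *bdev,
                           unsigned short nr_iovecs)
    {
        /* correct: clone->bi_inline_vecs, the clone's own table */
        bio_init(clone, bdev, clone->bi_inline_vecs, nr_iovecs,
                 src->bi_opf);
    }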
+diff --git a/drivers/net/bonding/bond_options.c b/drivers/net/bonding/bond_options.c
+index 327b6ecdc77e00..d1b095af253bdc 100644
+--- a/drivers/net/bonding/bond_options.c
++++ b/drivers/net/bonding/bond_options.c
+@@ -1242,10 +1242,28 @@ static bool slave_can_set_ns_maddr(const struct bonding *bond, struct slave *sla
+ slave->dev->flags & IFF_MULTICAST;
+ }
+
++/**
++ * slave_set_ns_maddrs - add/del all NS mac addresses for slave
++ * @bond: bond device
++ * @slave: slave device
++ * @add: add or remove all the NS mac addresses
++ *
++ * This function tries to add or delete all the NS mac addresses on the slave
++ *
++ * Note, the IPv6 NS target address is the unicast address in Neighbor
++ * Solicitation (NS) message. The dest address of NS message should be
++ * solicited-node multicast address of the target. The dest mac of NS message
++ * is converted from the solicited-node multicast address.
++ *
++ * This function is called when
++ * * arp_validate changes
++ * * enslaving, releasing new slaves
++ */
+ static void slave_set_ns_maddrs(struct bonding *bond, struct slave *slave, bool add)
+ {
+ struct in6_addr *targets = bond->params.ns_targets;
+ char slot_maddr[MAX_ADDR_LEN];
++ struct in6_addr mcaddr;
+ int i;
+
+ if (!slave_can_set_ns_maddr(bond, slave))
+@@ -1255,7 +1273,8 @@ static void slave_set_ns_maddrs(struct bonding *bond, struct slave *slave, bool
+ if (ipv6_addr_any(&targets[i]))
+ break;
+
+- if (!ndisc_mc_map(&targets[i], slot_maddr, slave->dev, 0)) {
++ addrconf_addr_solict_mult(&targets[i], &mcaddr);
++ if (!ndisc_mc_map(&mcaddr, slot_maddr, slave->dev, 0)) {
+ if (add)
+ dev_mc_add(slave->dev, slot_maddr);
+ else
+@@ -1278,23 +1297,43 @@ void bond_slave_ns_maddrs_del(struct bonding *bond, struct slave *slave)
+ slave_set_ns_maddrs(bond, slave, false);
+ }
+
++/**
++ * slave_set_ns_maddr - set new NS mac address for slave
++ * @bond: bond device
++ * @slave: slave device
++ * @target: the new IPv6 target
++ * @slot: the old IPv6 target in the slot
++ *
++ * This function tries to replace the old mac address to new one on the slave.
++ *
++ * Note, the target/slot IPv6 address is the unicast address in Neighbor
++ * Solicitation (NS) message. The dest address of NS message should be
++ * solicited-node multicast address of the target. The dest mac of NS message
++ * is converted from the solicited-node multicast address.
++ *
++ * This function is called when
++ * * An IPv6 NS target is added or removed.
++ */
+ static void slave_set_ns_maddr(struct bonding *bond, struct slave *slave,
+ struct in6_addr *target, struct in6_addr *slot)
+ {
+- char target_maddr[MAX_ADDR_LEN], slot_maddr[MAX_ADDR_LEN];
++ char mac_addr[MAX_ADDR_LEN];
++ struct in6_addr mcast_addr;
+
+ if (!bond->params.arp_validate || !slave_can_set_ns_maddr(bond, slave))
+ return;
+
+- /* remove the previous maddr from slave */
++ /* remove the previous mac addr from slave */
++ addrconf_addr_solict_mult(slot, &mcast_addr);
+ if (!ipv6_addr_any(slot) &&
+- !ndisc_mc_map(slot, slot_maddr, slave->dev, 0))
+- dev_mc_del(slave->dev, slot_maddr);
++ !ndisc_mc_map(&mcast_addr, mac_addr, slave->dev, 0))
++ dev_mc_del(slave->dev, mac_addr);
+
+- /* add new maddr on slave if target is set */
++ /* add new mac addr on slave if target is set */
++ addrconf_addr_solict_mult(target, &mcast_addr);
+ if (!ipv6_addr_any(target) &&
+- !ndisc_mc_map(target, target_maddr, slave->dev, 0))
+- dev_mc_add(slave->dev, target_maddr);
++ !ndisc_mc_map(&mcast_addr, mac_addr, slave->dev, 0))
++ dev_mc_add(slave->dev, mac_addr);
+ }
+
+ static void _bond_options_ns_ip6_target_set(struct bonding *bond, int slot,
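The bonding fix converts each NS target to its solicited-node multicast group before mapping it to a link-layer address, since Neighbor Solicitations go to ff02::1:ffXX:XXXX (the low 24 bits of the target) and the slave must listen on the derived 33:33:... MAC. A self-contained userspace illustration of both derivations (plain C, not the kernel's addrconf/ndisc helpers):

    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>

    int main(void)
    {
        struct in6_addr target, mcast;
        unsigned char mac[6];
        char buf[INET6_ADDRSTRLEN];

        inet_pton(AF_INET6, "2001:db8::1234:5678", &target);

        /* Solicited-node group: ff02::1:ff + low 24 bits of target. */
        memset(&mcast, 0, sizeof(mcast));
        mcast.s6_addr[0]  = 0xff;
        mcast.s6_addr[1]  = 0x02;
        mcast.s6_addr[11] = 0x01;
        mcast.s6_addr[12] = 0xff;
        memcpy(&mcast.s6_addr[13], &target.s6_addr[13], 3);

        /* IPv6 multicast MAC: 33:33 + low 32 bits of the group. */
        mac[0] = 0x33;
        mac[1] = 0x33;
        memcpy(&mac[2], &mcast.s6_addr[12], 4);

        inet_ntop(AF_INET6, &mcast, buf, sizeof(buf));
        printf("group %s -> %02x:%02x:%02x:%02x:%02x:%02x\n",
               buf, mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
        return 0;
    }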
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 284270a4ade1c1..5aeecfab96306c 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -2261,13 +2261,11 @@ mv88e6xxx_port_vlan_prepare(struct dsa_switch *ds, int port,
+ return err;
+ }
+
+-static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port,
+- const unsigned char *addr, u16 vid,
+- u8 state)
++static int mv88e6xxx_port_db_get(struct mv88e6xxx_chip *chip,
++ const unsigned char *addr, u16 vid,
++ u16 *fid, struct mv88e6xxx_atu_entry *entry)
+ {
+- struct mv88e6xxx_atu_entry entry;
+ struct mv88e6xxx_vtu_entry vlan;
+- u16 fid;
+ int err;
+
+ /* Ports have two private address databases: one for when the port is
+@@ -2278,7 +2276,7 @@ static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port,
+ * VLAN ID into the port's database used for VLAN-unaware bridging.
+ */
+ if (vid == 0) {
+- fid = MV88E6XXX_FID_BRIDGED;
++ *fid = MV88E6XXX_FID_BRIDGED;
+ } else {
+ err = mv88e6xxx_vtu_get(chip, vid, &vlan);
+ if (err)
+@@ -2288,14 +2286,39 @@ static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port,
+ if (!vlan.valid)
+ return -EOPNOTSUPP;
+
+- fid = vlan.fid;
++ *fid = vlan.fid;
+ }
+
+- entry.state = 0;
+- ether_addr_copy(entry.mac, addr);
+- eth_addr_dec(entry.mac);
++ entry->state = 0;
++ ether_addr_copy(entry->mac, addr);
++ eth_addr_dec(entry->mac);
++
++ return mv88e6xxx_g1_atu_getnext(chip, *fid, entry);
++}
++
++static bool mv88e6xxx_port_db_find(struct mv88e6xxx_chip *chip,
++ const unsigned char *addr, u16 vid)
++{
++ struct mv88e6xxx_atu_entry entry;
++ u16 fid;
++ int err;
+
+- err = mv88e6xxx_g1_atu_getnext(chip, fid, &entry);
++ err = mv88e6xxx_port_db_get(chip, addr, vid, &fid, &entry);
++ if (err)
++ return false;
++
++ return entry.state && ether_addr_equal(entry.mac, addr);
++}
++
++static int mv88e6xxx_port_db_load_purge(struct mv88e6xxx_chip *chip, int port,
++ const unsigned char *addr, u16 vid,
++ u8 state)
++{
++ struct mv88e6xxx_atu_entry entry;
++ u16 fid;
++ int err;
++
++ err = mv88e6xxx_port_db_get(chip, addr, vid, &fid, &entry);
+ if (err)
+ return err;
+
+@@ -2893,6 +2916,13 @@ static int mv88e6xxx_port_fdb_add(struct dsa_switch *ds, int port,
+ mv88e6xxx_reg_lock(chip);
+ err = mv88e6xxx_port_db_load_purge(chip, port, addr, vid,
+ MV88E6XXX_G1_ATU_DATA_STATE_UC_STATIC);
++ if (err)
++ goto out;
++
++ if (!mv88e6xxx_port_db_find(chip, addr, vid))
++ err = -ENOSPC;
++
++out:
+ mv88e6xxx_reg_unlock(chip);
+
+ return err;
+@@ -6593,6 +6623,13 @@ static int mv88e6xxx_port_mdb_add(struct dsa_switch *ds, int port,
+ mv88e6xxx_reg_lock(chip);
+ err = mv88e6xxx_port_db_load_purge(chip, port, mdb->addr, mdb->vid,
+ MV88E6XXX_G1_ATU_DATA_STATE_MC_STATIC);
++ if (err)
++ goto out;
++
++ if (!mv88e6xxx_port_db_find(chip, mdb->addr, mdb->vid))
++ err = -ENOSPC;
++
++out:
+ mv88e6xxx_reg_unlock(chip);
+
+ return err;
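The mv88e6xxx hunks add a read-back step because the switch drops new ATU entries silently once the table is full; looking the address up after the load turns that silence into -ENOSPC. A toy write-then-verify model (a fixed-size array stands in for the hardware ATU):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    #define SLOTS 2

    static unsigned char table[SLOTS][6];
    static int used;

    /* Stand-in for the hardware load op: a full table fails SILENTLY. */
    static void hw_load(const unsigned char *mac)
    {
        if (used < SLOTS)
            memcpy(table[used++], mac, 6);
    }

    static int hw_find(const unsigned char *mac)
    {
        for (int i = 0; i < used; i++)
            if (!memcmp(table[i], mac, 6))
                return 1;
        return 0;
    }

    static int fdb_add(const unsigned char *mac)
    {
        hw_load(mac);
        return hw_find(mac) ? 0 : -ENOSPC; /* verify it actually stuck */
    }

    int main(void)
    {
        unsigned char a[6] = {1}, b[6] = {2}, c[6] = {3};

        printf("%d %d %d\n", fdb_add(a), fdb_add(b), fdb_add(c));
        return 0; /* prints: 0 0 -28 */
    }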
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 603e9c968c44bd..e7580df13229a6 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -864,6 +864,11 @@ static void bnxt_tx_int(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
+ bnapi->events &= ~BNXT_TX_CMP_EVENT;
+ }
+
++static bool bnxt_separate_head_pool(void)
++{
++ return PAGE_SIZE > BNXT_RX_PAGE_SIZE;
++}
++
+ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
+ struct bnxt_rx_ring_info *rxr,
+ unsigned int *offset,
+@@ -886,27 +891,19 @@ static struct page *__bnxt_alloc_rx_page(struct bnxt *bp, dma_addr_t *mapping,
+ }
+
+ static inline u8 *__bnxt_alloc_rx_frag(struct bnxt *bp, dma_addr_t *mapping,
++ struct bnxt_rx_ring_info *rxr,
+ gfp_t gfp)
+ {
+- u8 *data;
+- struct pci_dev *pdev = bp->pdev;
++ unsigned int offset;
++ struct page *page;
+
+- if (gfp == GFP_ATOMIC)
+- data = napi_alloc_frag(bp->rx_buf_size);
+- else
+- data = netdev_alloc_frag(bp->rx_buf_size);
+- if (!data)
++ page = page_pool_alloc_frag(rxr->head_pool, &offset,
++ bp->rx_buf_size, gfp);
++ if (!page)
+ return NULL;
+
+- *mapping = dma_map_single_attrs(&pdev->dev, data + bp->rx_dma_offset,
+- bp->rx_buf_use_size, bp->rx_dir,
+- DMA_ATTR_WEAK_ORDERING);
+-
+- if (dma_mapping_error(&pdev->dev, *mapping)) {
+- skb_free_frag(data);
+- data = NULL;
+- }
+- return data;
++ *mapping = page_pool_get_dma_addr(page) + bp->rx_dma_offset + offset;
++ return page_address(page) + offset;
+ }
+
+ int bnxt_alloc_rx_data(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
+@@ -928,7 +925,7 @@ int bnxt_alloc_rx_data(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
+ rx_buf->data = page;
+ rx_buf->data_ptr = page_address(page) + offset + bp->rx_offset;
+ } else {
+- u8 *data = __bnxt_alloc_rx_frag(bp, &mapping, gfp);
++ u8 *data = __bnxt_alloc_rx_frag(bp, &mapping, rxr, gfp);
+
+ if (!data)
+ return -ENOMEM;
+@@ -1179,13 +1176,14 @@ static struct sk_buff *bnxt_rx_skb(struct bnxt *bp,
+ }
+
+ skb = napi_build_skb(data, bp->rx_buf_size);
+- dma_unmap_single_attrs(&bp->pdev->dev, dma_addr, bp->rx_buf_use_size,
+- bp->rx_dir, DMA_ATTR_WEAK_ORDERING);
++ dma_sync_single_for_cpu(&bp->pdev->dev, dma_addr, bp->rx_buf_use_size,
++ bp->rx_dir);
+ if (!skb) {
+- skb_free_frag(data);
++ page_pool_free_va(rxr->head_pool, data, true);
+ return NULL;
+ }
+
++ skb_mark_for_recycle(skb);
+ skb_reserve(skb, bp->rx_offset);
+ skb_put(skb, offset_and_len & 0xffff);
+ return skb;
+@@ -1840,7 +1838,8 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
+ u8 *new_data;
+ dma_addr_t new_mapping;
+
+- new_data = __bnxt_alloc_rx_frag(bp, &new_mapping, GFP_ATOMIC);
++ new_data = __bnxt_alloc_rx_frag(bp, &new_mapping, rxr,
++ GFP_ATOMIC);
+ if (!new_data) {
+ bnxt_abort_tpa(cpr, idx, agg_bufs);
+ cpr->sw_stats->rx.rx_oom_discards += 1;
+@@ -1852,16 +1851,16 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
+ tpa_info->mapping = new_mapping;
+
+ skb = napi_build_skb(data, bp->rx_buf_size);
+- dma_unmap_single_attrs(&bp->pdev->dev, mapping,
+- bp->rx_buf_use_size, bp->rx_dir,
+- DMA_ATTR_WEAK_ORDERING);
++ dma_sync_single_for_cpu(&bp->pdev->dev, mapping,
++ bp->rx_buf_use_size, bp->rx_dir);
+
+ if (!skb) {
+- skb_free_frag(data);
++ page_pool_free_va(rxr->head_pool, data, true);
+ bnxt_abort_tpa(cpr, idx, agg_bufs);
+ cpr->sw_stats->rx.rx_oom_discards += 1;
+ return NULL;
+ }
++ skb_mark_for_recycle(skb);
+ skb_reserve(skb, bp->rx_offset);
+ skb_put(skb, len);
+ }
+@@ -2025,6 +2024,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ struct rx_cmp_ext *rxcmp1;
+ u32 tmp_raw_cons = *raw_cons;
+ u16 cons, prod, cp_cons = RING_CMP(tmp_raw_cons);
++ struct skb_shared_info *sinfo;
+ struct bnxt_sw_rx_bd *rx_buf;
+ unsigned int len;
+ u8 *data_ptr, agg_bufs, cmp_type;
+@@ -2151,6 +2151,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ false);
+ if (!frag_len)
+ goto oom_next_rx;
++
+ }
+ xdp_active = true;
+ }
+@@ -2160,6 +2161,12 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ rc = 1;
+ goto next_rx;
+ }
++ if (xdp_buff_has_frags(&xdp)) {
++ sinfo = xdp_get_shared_info_from_buff(&xdp);
++ agg_bufs = sinfo->nr_frags;
++ } else {
++ agg_bufs = 0;
++ }
+ }
+
+ if (len <= bp->rx_copy_thresh) {
+@@ -2197,7 +2204,8 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_cp_ring_info *cpr,
+ if (!skb)
+ goto oom_next_rx;
+ } else {
+- skb = bnxt_xdp_build_skb(bp, skb, agg_bufs, rxr->page_pool, &xdp, rxcmp1);
++ skb = bnxt_xdp_build_skb(bp, skb, agg_bufs,
++ rxr->page_pool, &xdp);
+ if (!skb) {
+ /* we should be able to free the old skb here */
+ bnxt_xdp_buff_frags_free(rxr, &xdp);
+@@ -3316,28 +3324,22 @@ static void bnxt_free_tx_skbs(struct bnxt *bp)
+
+ static void bnxt_free_one_rx_ring(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
+ {
+- struct pci_dev *pdev = bp->pdev;
+ int i, max_idx;
+
+ max_idx = bp->rx_nr_pages * RX_DESC_CNT;
+
+ for (i = 0; i < max_idx; i++) {
+ struct bnxt_sw_rx_bd *rx_buf = &rxr->rx_buf_ring[i];
+- dma_addr_t mapping = rx_buf->mapping;
+ void *data = rx_buf->data;
+
+ if (!data)
+ continue;
+
+ rx_buf->data = NULL;
+- if (BNXT_RX_PAGE_MODE(bp)) {
++ if (BNXT_RX_PAGE_MODE(bp))
+ page_pool_recycle_direct(rxr->page_pool, data);
+- } else {
+- dma_unmap_single_attrs(&pdev->dev, mapping,
+- bp->rx_buf_use_size, bp->rx_dir,
+- DMA_ATTR_WEAK_ORDERING);
+- skb_free_frag(data);
+- }
++ else
++ page_pool_free_va(rxr->head_pool, data, true);
+ }
+ }
+
+@@ -3361,16 +3363,11 @@ static void bnxt_free_one_rx_agg_ring(struct bnxt *bp, struct bnxt_rx_ring_info
+ }
+ }
+
+-static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
++static void bnxt_free_one_tpa_info_data(struct bnxt *bp,
++ struct bnxt_rx_ring_info *rxr)
+ {
+- struct bnxt_rx_ring_info *rxr = &bp->rx_ring[ring_nr];
+- struct pci_dev *pdev = bp->pdev;
+- struct bnxt_tpa_idx_map *map;
+ int i;
+
+- if (!rxr->rx_tpa)
+- goto skip_rx_tpa_free;
+-
+ for (i = 0; i < bp->max_tpa; i++) {
+ struct bnxt_tpa_info *tpa_info = &rxr->rx_tpa[i];
+ u8 *data = tpa_info->data;
+@@ -3378,14 +3375,20 @@ static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp, int ring_nr)
+ if (!data)
+ continue;
+
+- dma_unmap_single_attrs(&pdev->dev, tpa_info->mapping,
+- bp->rx_buf_use_size, bp->rx_dir,
+- DMA_ATTR_WEAK_ORDERING);
+-
+ tpa_info->data = NULL;
+-
+- skb_free_frag(data);
++ page_pool_free_va(rxr->head_pool, data, false);
+ }
++}
++
++static void bnxt_free_one_rx_ring_skbs(struct bnxt *bp,
++ struct bnxt_rx_ring_info *rxr)
++{
++ struct bnxt_tpa_idx_map *map;
++
++ if (!rxr->rx_tpa)
++ goto skip_rx_tpa_free;
++
++ bnxt_free_one_tpa_info_data(bp, rxr);
+
+ skip_rx_tpa_free:
+ if (!rxr->rx_buf_ring)
+@@ -3413,7 +3416,7 @@ static void bnxt_free_rx_skbs(struct bnxt *bp)
+ return;
+
+ for (i = 0; i < bp->rx_nr_rings; i++)
+- bnxt_free_one_rx_ring_skbs(bp, i);
++ bnxt_free_one_rx_ring_skbs(bp, &bp->rx_ring[i]);
+ }
+
+ static void bnxt_free_skbs(struct bnxt *bp)
+@@ -3525,29 +3528,64 @@ static int bnxt_alloc_ring(struct bnxt *bp, struct bnxt_ring_mem_info *rmem)
+ return 0;
+ }
+
++static void bnxt_free_one_tpa_info(struct bnxt *bp,
++ struct bnxt_rx_ring_info *rxr)
++{
++ int i;
++
++ kfree(rxr->rx_tpa_idx_map);
++ rxr->rx_tpa_idx_map = NULL;
++ if (rxr->rx_tpa) {
++ for (i = 0; i < bp->max_tpa; i++) {
++ kfree(rxr->rx_tpa[i].agg_arr);
++ rxr->rx_tpa[i].agg_arr = NULL;
++ }
++ }
++ kfree(rxr->rx_tpa);
++ rxr->rx_tpa = NULL;
++}
++
+ static void bnxt_free_tpa_info(struct bnxt *bp)
+ {
+- int i, j;
++ int i;
+
+ for (i = 0; i < bp->rx_nr_rings; i++) {
+ struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
+
+- kfree(rxr->rx_tpa_idx_map);
+- rxr->rx_tpa_idx_map = NULL;
+- if (rxr->rx_tpa) {
+- for (j = 0; j < bp->max_tpa; j++) {
+- kfree(rxr->rx_tpa[j].agg_arr);
+- rxr->rx_tpa[j].agg_arr = NULL;
+- }
+- }
+- kfree(rxr->rx_tpa);
+- rxr->rx_tpa = NULL;
++ bnxt_free_one_tpa_info(bp, rxr);
+ }
+ }
+
++static int bnxt_alloc_one_tpa_info(struct bnxt *bp,
++ struct bnxt_rx_ring_info *rxr)
++{
++ struct rx_agg_cmp *agg;
++ int i;
++
++ rxr->rx_tpa = kcalloc(bp->max_tpa, sizeof(struct bnxt_tpa_info),
++ GFP_KERNEL);
++ if (!rxr->rx_tpa)
++ return -ENOMEM;
++
++ if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS))
++ return 0;
++ for (i = 0; i < bp->max_tpa; i++) {
++ agg = kcalloc(MAX_SKB_FRAGS, sizeof(*agg), GFP_KERNEL);
++ if (!agg)
++ return -ENOMEM;
++ rxr->rx_tpa[i].agg_arr = agg;
++ }
++ rxr->rx_tpa_idx_map = kzalloc(sizeof(*rxr->rx_tpa_idx_map),
++ GFP_KERNEL);
++ if (!rxr->rx_tpa_idx_map)
++ return -ENOMEM;
++
++ return 0;
++}
++
+ static int bnxt_alloc_tpa_info(struct bnxt *bp)
+ {
+- int i, j;
++ int i, rc;
+
+ bp->max_tpa = MAX_TPA;
+ if (bp->flags & BNXT_FLAG_CHIP_P5_PLUS) {
+@@ -3558,25 +3596,10 @@ static int bnxt_alloc_tpa_info(struct bnxt *bp)
+
+ for (i = 0; i < bp->rx_nr_rings; i++) {
+ struct bnxt_rx_ring_info *rxr = &bp->rx_ring[i];
+- struct rx_agg_cmp *agg;
+-
+- rxr->rx_tpa = kcalloc(bp->max_tpa, sizeof(struct bnxt_tpa_info),
+- GFP_KERNEL);
+- if (!rxr->rx_tpa)
+- return -ENOMEM;
+
+- if (!(bp->flags & BNXT_FLAG_CHIP_P5_PLUS))
+- continue;
+- for (j = 0; j < bp->max_tpa; j++) {
+- agg = kcalloc(MAX_SKB_FRAGS, sizeof(*agg), GFP_KERNEL);
+- if (!agg)
+- return -ENOMEM;
+- rxr->rx_tpa[j].agg_arr = agg;
+- }
+- rxr->rx_tpa_idx_map = kzalloc(sizeof(*rxr->rx_tpa_idx_map),
+- GFP_KERNEL);
+- if (!rxr->rx_tpa_idx_map)
+- return -ENOMEM;
++ rc = bnxt_alloc_one_tpa_info(bp, rxr);
++ if (rc)
++ return rc;
+ }
+ return 0;
+ }
+@@ -3600,7 +3623,9 @@ static void bnxt_free_rx_rings(struct bnxt *bp)
+ xdp_rxq_info_unreg(&rxr->xdp_rxq);
+
+ page_pool_destroy(rxr->page_pool);
+- rxr->page_pool = NULL;
++ if (bnxt_separate_head_pool())
++ page_pool_destroy(rxr->head_pool);
++ rxr->page_pool = rxr->head_pool = NULL;
+
+ kfree(rxr->rx_agg_bmap);
+ rxr->rx_agg_bmap = NULL;
+@@ -3618,6 +3643,7 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
+ int numa_node)
+ {
+ struct page_pool_params pp = { 0 };
++ struct page_pool *pool;
+
+ pp.pool_size = bp->rx_agg_ring_size;
+ if (BNXT_RX_PAGE_MODE(bp))
+@@ -3630,14 +3656,25 @@ static int bnxt_alloc_rx_page_pool(struct bnxt *bp,
+ pp.max_len = PAGE_SIZE;
+ pp.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV;
+
+- rxr->page_pool = page_pool_create(&pp);
+- if (IS_ERR(rxr->page_pool)) {
+- int err = PTR_ERR(rxr->page_pool);
++ pool = page_pool_create(&pp);
++ if (IS_ERR(pool))
++ return PTR_ERR(pool);
++ rxr->page_pool = pool;
+
+- rxr->page_pool = NULL;
+- return err;
++ if (bnxt_separate_head_pool()) {
++ pp.pool_size = max(bp->rx_ring_size, 1024);
++ pool = page_pool_create(&pp);
++ if (IS_ERR(pool))
++ goto err_destroy_pp;
+ }
++ rxr->head_pool = pool;
++
+ return 0;
++
++err_destroy_pp:
++ page_pool_destroy(rxr->page_pool);
++ rxr->page_pool = NULL;
++ return PTR_ERR(pool);
+ }
+
+ static int bnxt_alloc_rx_rings(struct bnxt *bp)
+@@ -4171,10 +4208,31 @@ static void bnxt_alloc_one_rx_ring_page(struct bnxt *bp,
+ rxr->rx_agg_prod = prod;
+ }
+
++static int bnxt_alloc_one_tpa_info_data(struct bnxt *bp,
++ struct bnxt_rx_ring_info *rxr)
++{
++ dma_addr_t mapping;
++ u8 *data;
++ int i;
++
++ for (i = 0; i < bp->max_tpa; i++) {
++ data = __bnxt_alloc_rx_frag(bp, &mapping, rxr,
++ GFP_KERNEL);
++ if (!data)
++ return -ENOMEM;
++
++ rxr->rx_tpa[i].data = data;
++ rxr->rx_tpa[i].data_ptr = data + bp->rx_offset;
++ rxr->rx_tpa[i].mapping = mapping;
++ }
++
++ return 0;
++}
++
+ static int bnxt_alloc_one_rx_ring(struct bnxt *bp, int ring_nr)
+ {
+ struct bnxt_rx_ring_info *rxr = &bp->rx_ring[ring_nr];
+- int i;
++ int rc;
+
+ bnxt_alloc_one_rx_ring_skb(bp, rxr, ring_nr);
+
+@@ -4184,18 +4242,9 @@ static int bnxt_alloc_one_rx_ring(struct bnxt *bp, int ring_nr)
+ bnxt_alloc_one_rx_ring_page(bp, rxr, ring_nr);
+
+ if (rxr->rx_tpa) {
+- dma_addr_t mapping;
+- u8 *data;
+-
+- for (i = 0; i < bp->max_tpa; i++) {
+- data = __bnxt_alloc_rx_frag(bp, &mapping, GFP_KERNEL);
+- if (!data)
+- return -ENOMEM;
+-
+- rxr->rx_tpa[i].data = data;
+- rxr->rx_tpa[i].data_ptr = data + bp->rx_offset;
+- rxr->rx_tpa[i].mapping = mapping;
+- }
++ rc = bnxt_alloc_one_tpa_info_data(bp, rxr);
++ if (rc)
++ return rc;
+ }
+ return 0;
+ }
+@@ -13452,7 +13501,7 @@ static void bnxt_rx_ring_reset(struct bnxt *bp)
+ bnxt_reset_task(bp, true);
+ break;
+ }
+- bnxt_free_one_rx_ring_skbs(bp, i);
++ bnxt_free_one_rx_ring_skbs(bp, rxr);
+ rxr->rx_prod = 0;
+ rxr->rx_agg_prod = 0;
+ rxr->rx_sw_agg_prod = 0;
+@@ -15023,6 +15072,9 @@ static void bnxt_get_queue_stats_rx(struct net_device *dev, int i,
+ struct bnxt_cp_ring_info *cpr;
+ u64 *sw;
+
++ if (!bp->bnapi)
++ return;
++
+ cpr = &bp->bnapi[i]->cp_ring;
+ sw = cpr->stats.sw_stats;
+
+@@ -15046,6 +15098,9 @@ static void bnxt_get_queue_stats_tx(struct net_device *dev, int i,
+ struct bnxt_napi *bnapi;
+ u64 *sw;
+
++ if (!bp->tx_ring)
++ return;
++
+ bnapi = bp->tx_ring[bp->tx_ring_map[i]].bnapi;
+ sw = bnapi->cp_ring.stats.sw_stats;
+
+@@ -15100,6 +15155,9 @@ static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
+ struct bnxt_ring_struct *ring;
+ int rc;
+
++ if (!bp->rx_ring)
++ return -ENETDOWN;
++
+ rxr = &bp->rx_ring[idx];
+ clone = qmem;
+ memcpy(clone, rxr, sizeof(*rxr));
+@@ -15141,15 +15199,25 @@ static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
+ goto err_free_rx_agg_ring;
+ }
+
++ if (bp->flags & BNXT_FLAG_TPA) {
++ rc = bnxt_alloc_one_tpa_info(bp, clone);
++ if (rc)
++ goto err_free_tpa_info;
++ }
++
+ bnxt_init_one_rx_ring_rxbd(bp, clone);
+ bnxt_init_one_rx_agg_ring_rxbd(bp, clone);
+
+ bnxt_alloc_one_rx_ring_skb(bp, clone, idx);
+ if (bp->flags & BNXT_FLAG_AGG_RINGS)
+ bnxt_alloc_one_rx_ring_page(bp, clone, idx);
++ if (bp->flags & BNXT_FLAG_TPA)
++ bnxt_alloc_one_tpa_info_data(bp, clone);
+
+ return 0;
+
++err_free_tpa_info:
++ bnxt_free_one_tpa_info(bp, clone);
+ err_free_rx_agg_ring:
+ bnxt_free_ring(bp, &clone->rx_agg_ring_struct.ring_mem);
+ err_free_rx_ring:
+@@ -15157,9 +15225,11 @@ static int bnxt_queue_mem_alloc(struct net_device *dev, void *qmem, int idx)
+ err_rxq_info_unreg:
+ xdp_rxq_info_unreg(&clone->xdp_rxq);
+ err_page_pool_destroy:
+- clone->page_pool->p.napi = NULL;
+ page_pool_destroy(clone->page_pool);
++ if (bnxt_separate_head_pool())
++ page_pool_destroy(clone->head_pool);
+ clone->page_pool = NULL;
++ clone->head_pool = NULL;
+ return rc;
+ }
+
+@@ -15169,13 +15239,16 @@ static void bnxt_queue_mem_free(struct net_device *dev, void *qmem)
+ struct bnxt *bp = netdev_priv(dev);
+ struct bnxt_ring_struct *ring;
+
+- bnxt_free_one_rx_ring(bp, rxr);
+- bnxt_free_one_rx_agg_ring(bp, rxr);
++ bnxt_free_one_rx_ring_skbs(bp, rxr);
++ bnxt_free_one_tpa_info(bp, rxr);
+
+ xdp_rxq_info_unreg(&rxr->xdp_rxq);
+
+ page_pool_destroy(rxr->page_pool);
++ if (bnxt_separate_head_pool())
++ page_pool_destroy(rxr->head_pool);
+ rxr->page_pool = NULL;
++ rxr->head_pool = NULL;
+
+ ring = &rxr->rx_ring_struct;
+ bnxt_free_ring(bp, &ring->ring_mem);
+@@ -15257,7 +15330,10 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
+ rxr->rx_agg_prod = clone->rx_agg_prod;
+ rxr->rx_sw_agg_prod = clone->rx_sw_agg_prod;
+ rxr->rx_next_cons = clone->rx_next_cons;
++ rxr->rx_tpa = clone->rx_tpa;
++ rxr->rx_tpa_idx_map = clone->rx_tpa_idx_map;
+ rxr->page_pool = clone->page_pool;
++ rxr->head_pool = clone->head_pool;
+ rxr->xdp_rxq = clone->xdp_rxq;
+
+ bnxt_copy_rx_ring(bp, rxr, clone);
+@@ -15276,7 +15352,7 @@ static int bnxt_queue_start(struct net_device *dev, void *qmem, int idx)
+ cpr = &rxr->bnapi->cp_ring;
+ cpr->sw_stats->rx.rx_resets++;
+
+- for (i = 0; i <= BNXT_VNIC_NTUPLE; i++) {
++ for (i = 0; i < bp->nr_vnics; i++) {
+ vnic = &bp->vnic_info[i];
+
+ rc = bnxt_hwrm_vnic_set_rss_p5(bp, vnic, true);
+@@ -15304,7 +15380,7 @@ static int bnxt_queue_stop(struct net_device *dev, void *qmem, int idx)
+ struct bnxt_vnic_info *vnic;
+ int i;
+
+- for (i = 0; i <= BNXT_VNIC_NTUPLE; i++) {
++ for (i = 0; i < bp->nr_vnics; i++) {
+ vnic = &bp->vnic_info[i];
+ vnic->mru = 0;
+ bnxt_hwrm_vnic_update(bp, vnic,
+@@ -15318,6 +15394,8 @@ static int bnxt_queue_stop(struct net_device *dev, void *qmem, int idx)
+ bnxt_hwrm_rx_agg_ring_free(bp, rxr, false);
+ rxr->rx_next_cons = 0;
+ page_pool_disable_direct_recycling(rxr->page_pool);
++ if (bnxt_separate_head_pool())
++ page_pool_disable_direct_recycling(rxr->head_pool);
+
+ memcpy(qmem, rxr, sizeof(*rxr));
+ bnxt_init_rx_ring_struct(bp, qmem);
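The bnxt conversion moves head buffers from napi_alloc_frag() plus manual DMA mapping onto page-pool fragments, with an optional dedicated head pool when PAGE_SIZE exceeds BNXT_RX_PAGE_SIZE. A kernel-style sketch of the resulting allocation path (alloc_head_frag is a hypothetical condensation of the hunks above; pool setup and teardown omitted):

    #include <linux/mm.h>
    #include <net/page_pool/helpers.h>

    /* Sketch: carve 'size' bytes out of a page-pool page and derive
     * both its CPU address and its premapped DMA address. */
    static u8 *alloc_head_frag(struct page_pool *head_pool,
                               unsigned int size, unsigned int dma_offset,
                               dma_addr_t *mapping)
    {
        unsigned int offset;
        struct page *page;

        page = page_pool_alloc_frag(head_pool, &offset, size, GFP_ATOMIC);
        if (!page)
            return NULL;

        /* The pool mapped the page once; fragments just add offsets. */
        *mapping = page_pool_get_dma_addr(page) + dma_offset + offset;
        return page_address(page) + offset;
    }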
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index bee645f58d0bde..1758edcd1db42a 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1108,6 +1108,7 @@ struct bnxt_rx_ring_info {
+ struct bnxt_ring_struct rx_agg_ring_struct;
+ struct xdp_rxq_info xdp_rxq;
+ struct page_pool *page_pool;
++ struct page_pool *head_pool;
+ };
+
+ struct bnxt_rx_sw_stats {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+index dc51dce209d5f0..8726657f5cb9e0 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c
+@@ -456,23 +456,16 @@ int bnxt_xdp(struct net_device *dev, struct netdev_bpf *xdp)
+
+ struct sk_buff *
+ bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb, u8 num_frags,
+- struct page_pool *pool, struct xdp_buff *xdp,
+- struct rx_cmp_ext *rxcmp1)
++ struct page_pool *pool, struct xdp_buff *xdp)
+ {
+ struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+
+ if (!skb)
+ return NULL;
+- skb_checksum_none_assert(skb);
+- if (RX_CMP_L4_CS_OK(rxcmp1)) {
+- if (bp->dev->features & NETIF_F_RXCSUM) {
+- skb->ip_summed = CHECKSUM_UNNECESSARY;
+- skb->csum_level = RX_CMP_ENCAP(rxcmp1);
+- }
+- }
++
+ xdp_update_skb_shared_info(skb, num_frags,
+ sinfo->xdp_frags_size,
+- BNXT_RX_PAGE_SIZE * sinfo->nr_frags,
++ BNXT_RX_PAGE_SIZE * num_frags,
+ xdp_buff_is_frag_pfmemalloc(xdp));
+ return skb;
+ }
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
+index 0122782400b8a2..220285e190fcd1 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h
+@@ -33,6 +33,5 @@ void bnxt_xdp_buff_frags_free(struct bnxt_rx_ring_info *rxr,
+ struct xdp_buff *xdp);
+ struct sk_buff *bnxt_xdp_build_skb(struct bnxt *bp, struct sk_buff *skb,
+ u8 num_frags, struct page_pool *pool,
+- struct xdp_buff *xdp,
+- struct rx_cmp_ext *rxcmp1);
++ struct xdp_buff *xdp);
+ #endif
+diff --git a/drivers/net/ethernet/intel/ice/ice_arfs.c b/drivers/net/ethernet/intel/ice/ice_arfs.c
+index 7cee365cc7d167..405ddd17de1bff 100644
+--- a/drivers/net/ethernet/intel/ice/ice_arfs.c
++++ b/drivers/net/ethernet/intel/ice/ice_arfs.c
+@@ -511,7 +511,7 @@ void ice_init_arfs(struct ice_vsi *vsi)
+ struct hlist_head *arfs_fltr_list;
+ unsigned int i;
+
+- if (!vsi || vsi->type != ICE_VSI_PF)
++ if (!vsi || vsi->type != ICE_VSI_PF || ice_is_arfs_active(vsi))
+ return;
+
+ arfs_fltr_list = kcalloc(ICE_MAX_ARFS_LIST, sizeof(*arfs_fltr_list),
+diff --git a/drivers/net/ethernet/intel/ice/ice_eswitch.c b/drivers/net/ethernet/intel/ice/ice_eswitch.c
+index d649c197cf673f..ed21d7f55ac11b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_eswitch.c
++++ b/drivers/net/ethernet/intel/ice/ice_eswitch.c
+@@ -49,9 +49,6 @@ static int ice_eswitch_setup_env(struct ice_pf *pf)
+ if (vlan_ops->dis_rx_filtering(uplink_vsi))
+ goto err_vlan_filtering;
+
+- if (ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_set_allow_override))
+- goto err_override_uplink;
+-
+ if (ice_vsi_update_local_lb(uplink_vsi, true))
+ goto err_override_local_lb;
+
+@@ -63,8 +60,6 @@ static int ice_eswitch_setup_env(struct ice_pf *pf)
+ err_up:
+ ice_vsi_update_local_lb(uplink_vsi, false);
+ err_override_local_lb:
+- ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_clear_allow_override);
+-err_override_uplink:
+ vlan_ops->ena_rx_filtering(uplink_vsi);
+ err_vlan_filtering:
+ ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, false,
+@@ -275,7 +270,6 @@ static void ice_eswitch_release_env(struct ice_pf *pf)
+ vlan_ops = ice_get_compat_vsi_vlan_ops(uplink_vsi);
+
+ ice_vsi_update_local_lb(uplink_vsi, false);
+- ice_vsi_update_security(uplink_vsi, ice_vsi_ctx_clear_allow_override);
+ vlan_ops->ena_rx_filtering(uplink_vsi);
+ ice_cfg_dflt_vsi(uplink_vsi->port_info, uplink_vsi->idx, false,
+ ICE_FLTR_TX);
+diff --git a/drivers/net/ethernet/intel/ice/ice_lag.c b/drivers/net/ethernet/intel/ice/ice_lag.c
+index 1ccb572ce285df..22371011c24928 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lag.c
++++ b/drivers/net/ethernet/intel/ice/ice_lag.c
+@@ -1000,6 +1000,28 @@ static void ice_lag_link(struct ice_lag *lag)
+ netdev_info(lag->netdev, "Shared SR-IOV resources in bond are active\n");
+ }
+
++/**
++ * ice_lag_config_eswitch - configure eswitch to work with LAG
++ * @lag: lag info struct
++ * @netdev: active network interface device struct
++ *
++ * Updates all port representors in eswitch to use @netdev for Tx.
++ *
++ * Configures the netdev to keep dst metadata (also used in representor Tx).
++ * This is required for an uplink without switchdev mode configured.
++ */
++static void ice_lag_config_eswitch(struct ice_lag *lag,
++ struct net_device *netdev)
++{
++ struct ice_repr *repr;
++ unsigned long id;
++
++ xa_for_each(&lag->pf->eswitch.reprs, id, repr)
++ repr->dst->u.port_info.lower_dev = netdev;
++
++ netif_keep_dst(netdev);
++}
++
+ /**
+ * ice_lag_unlink - handle unlink event
+ * @lag: LAG info struct
+@@ -1021,6 +1043,9 @@ static void ice_lag_unlink(struct ice_lag *lag)
+ ice_lag_move_vf_nodes(lag, act_port, pri_port);
+ lag->primary = false;
+ lag->active_port = ICE_LAG_INVALID_PORT;
++
++ /* Config primary's eswitch back to normal operation. */
++ ice_lag_config_eswitch(lag, lag->netdev);
+ } else {
+ struct ice_lag *primary_lag;
+
+@@ -1419,6 +1444,7 @@ static void ice_lag_monitor_active(struct ice_lag *lag, void *ptr)
+ ice_lag_move_vf_nodes(lag, prim_port,
+ event_port);
+ lag->active_port = event_port;
++ ice_lag_config_eswitch(lag, event_netdev);
+ return;
+ }
+
+@@ -1428,6 +1454,7 @@ static void ice_lag_monitor_active(struct ice_lag *lag, void *ptr)
+ /* new active port */
+ ice_lag_move_vf_nodes(lag, lag->active_port, event_port);
+ lag->active_port = event_port;
++ ice_lag_config_eswitch(lag, event_netdev);
+ } else {
+ /* port not set as currently active (e.g. new active port
+ * has already claimed the nodes and filters
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
+index d4e74f96a8ad5d..121a5ad5c8e10b 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.c
++++ b/drivers/net/ethernet/intel/ice/ice_lib.c
+@@ -3928,24 +3928,6 @@ void ice_vsi_ctx_clear_antispoof(struct ice_vsi_ctx *ctx)
+ ICE_AQ_VSI_SEC_TX_PRUNE_ENA_S);
+ }
+
+-/**
+- * ice_vsi_ctx_set_allow_override - allow destination override on VSI
+- * @ctx: pointer to VSI ctx structure
+- */
+-void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx)
+-{
+- ctx->info.sec_flags |= ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD;
+-}
+-
+-/**
+- * ice_vsi_ctx_clear_allow_override - turn off destination override on VSI
+- * @ctx: pointer to VSI ctx structure
+- */
+-void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx)
+-{
+- ctx->info.sec_flags &= ~ICE_AQ_VSI_SEC_FLAG_ALLOW_DEST_OVRD;
+-}
+-
+ /**
+ * ice_vsi_update_local_lb - update sw block in VSI with local loopback bit
+ * @vsi: pointer to VSI structure
+diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
+index 1a6cfc8693ce47..2b27998fd1be36 100644
+--- a/drivers/net/ethernet/intel/ice/ice_lib.h
++++ b/drivers/net/ethernet/intel/ice/ice_lib.h
+@@ -106,10 +106,6 @@ ice_vsi_update_security(struct ice_vsi *vsi, void (*fill)(struct ice_vsi_ctx *))
+ void ice_vsi_ctx_set_antispoof(struct ice_vsi_ctx *ctx);
+
+ void ice_vsi_ctx_clear_antispoof(struct ice_vsi_ctx *ctx);
+-
+-void ice_vsi_ctx_set_allow_override(struct ice_vsi_ctx *ctx);
+-
+-void ice_vsi_ctx_clear_allow_override(struct ice_vsi_ctx *ctx);
+ int ice_vsi_update_local_lb(struct ice_vsi *vsi, bool set);
+ int ice_vsi_add_vlan_zero(struct ice_vsi *vsi);
+ int ice_vsi_del_vlan_zero(struct ice_vsi *vsi);
+diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
+index f12fb3a2b6ad94..f522dd42093a9f 100644
+--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
++++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
+@@ -2424,7 +2424,9 @@ ice_xmit_frame_ring(struct sk_buff *skb, struct ice_tx_ring *tx_ring)
+ ICE_TXD_CTX_QW1_CMD_S);
+
+ ice_tstamp(tx_ring, skb, first, &offload);
+- if (ice_is_switchdev_running(vsi->back) && vsi->type != ICE_VSI_SF)
++ if ((ice_is_switchdev_running(vsi->back) ||
++ ice_lag_is_switchdev_running(vsi->back)) &&
++ vsi->type != ICE_VSI_SF)
+ ice_eswitch_set_target_vsi(skb, &offload);
+
+ if (offload.cd_qw1 & ICE_TX_DESC_DTYPE_CTX) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index 98d4306929f3ed..a2cf3e79693dd8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -46,6 +46,9 @@ mlx5_devlink_info_get(struct devlink *devlink, struct devlink_info_req *req,
+ u32 running_fw, stored_fw;
+ int err;
+
++ if (!mlx5_core_is_pf(dev))
++ return 0;
++
+ err = devlink_info_version_fixed_put(req, "fw.psid", dev->board_id);
+ if (err)
+ return err;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
+index 5d128c5b4529af..0f5d7ea8956f72 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/rep/bridge.c
+@@ -48,15 +48,10 @@ mlx5_esw_bridge_lag_rep_get(struct net_device *dev, struct mlx5_eswitch *esw)
+ struct list_head *iter;
+
+ netdev_for_each_lower_dev(dev, lower, iter) {
+- struct mlx5_core_dev *mdev;
+- struct mlx5e_priv *priv;
+-
+ if (!mlx5e_eswitch_rep(lower))
+ continue;
+
+- priv = netdev_priv(lower);
+- mdev = priv->mdev;
+- if (mlx5_lag_is_shared_fdb(mdev) && mlx5_esw_bridge_dev_same_esw(lower, esw))
++ if (mlx5_esw_bridge_dev_same_esw(lower, esw))
+ return lower;
+ }
+
+@@ -125,7 +120,7 @@ static bool mlx5_esw_bridge_is_local(struct net_device *dev, struct net_device *
+ priv = netdev_priv(rep);
+ mdev = priv->mdev;
+ if (netif_is_lag_master(dev))
+- return mlx5_lag_is_shared_fdb(mdev) && mlx5_lag_is_master(mdev);
++ return mlx5_lag_is_master(mdev);
+ return true;
+ }
+
+@@ -455,6 +450,9 @@ static int mlx5_esw_bridge_switchdev_event(struct notifier_block *nb,
+ if (!rep)
+ return NOTIFY_DONE;
+
++ if (netif_is_lag_master(dev) && !mlx5_lag_is_shared_fdb(esw->dev))
++ return NOTIFY_DONE;
++
+ switch (event) {
+ case SWITCHDEV_FDB_ADD_TO_BRIDGE:
+ fdb_info = container_of(info,
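The bridge rework drops the per-lower-device shared-FDB test and instead rejects LAG events once, at the top of the switchdev notifier, when the eswitch is not in shared-FDB mode. Returning NOTIFY_DONE from a notifier means "nothing for me here" and lets the chain continue. A rough sketch of that early-filter shape, with the event unwrapping simplified away (a real switchdev callback receives a struct switchdev_notifier_info and derives the netdev from it):

#include <linux/notifier.h>
#include <linux/netdevice.h>

static int my_switchdev_event(struct notifier_block *nb,
			      unsigned long event, void *ptr)
{
	struct net_device *dev = ptr;	/* simplified for the sketch */

	/* Filter out events this handler cannot act on before doing
	 * any work; the chain then offers them to other subscribers.
	 */
	if (netif_is_lag_master(dev))
		return NOTIFY_DONE;

	/* ... handle the event ... */
	return NOTIFY_OK;
}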
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 62b8a7c1c6b54a..1c087fa1ca269b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -5099,11 +5099,9 @@ static int mlx5e_bridge_getlink(struct sk_buff *skb, u32 pid, u32 seq,
+ struct mlx5e_priv *priv = netdev_priv(dev);
+ struct mlx5_core_dev *mdev = priv->mdev;
+ u8 mode, setting;
+- int err;
+
+- err = mlx5_eswitch_get_vepa(mdev->priv.eswitch, &setting);
+- if (err)
+- return err;
++ if (mlx5_eswitch_get_vepa(mdev->priv.eswitch, &setting))
++ return -EOPNOTSUPP;
+ mode = setting ? BRIDGE_MODE_VEPA : BRIDGE_MODE_VEB;
+ return ndo_dflt_bridge_getlink(skb, pid, seq, dev,
+ mode,
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+index 68cb86b37e561f..4241cf07a0306b 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+@@ -887,8 +887,8 @@ static void comp_irq_release_sf(struct mlx5_core_dev *dev, u16 vecidx)
+
+ static int comp_irq_request_sf(struct mlx5_core_dev *dev, u16 vecidx)
+ {
++ struct mlx5_irq_pool *pool = mlx5_irq_table_get_comp_irq_pool(dev);
+ struct mlx5_eq_table *table = dev->priv.eq_table;
+- struct mlx5_irq_pool *pool = mlx5_irq_pool_get(dev);
+ struct irq_affinity_desc af_desc = {};
+ struct mlx5_irq *irq;
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c b/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
+index 1477db7f5307e0..2691d88cdee1f7 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/irq_affinity.c
+@@ -175,7 +175,7 @@ mlx5_irq_affinity_request(struct mlx5_core_dev *dev, struct mlx5_irq_pool *pool,
+
+ void mlx5_irq_affinity_irq_release(struct mlx5_core_dev *dev, struct mlx5_irq *irq)
+ {
+- struct mlx5_irq_pool *pool = mlx5_irq_pool_get(dev);
++ struct mlx5_irq_pool *pool = mlx5_irq_get_pool(irq);
+ int cpu;
+
+ cpu = cpumask_first(mlx5_irq_get_affinity_mask(irq));
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+index 7f68468c2e7598..4b3da7ebd6310e 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.c
+@@ -859,7 +859,7 @@ void mlx5_disable_lag(struct mlx5_lag *ldev)
+ mlx5_eswitch_reload_ib_reps(ldev->pf[i].dev->priv.eswitch);
+ }
+
+-static bool mlx5_shared_fdb_supported(struct mlx5_lag *ldev)
++bool mlx5_lag_shared_fdb_supported(struct mlx5_lag *ldev)
+ {
+ struct mlx5_core_dev *dev;
+ int i;
+@@ -937,7 +937,7 @@ static void mlx5_do_bond(struct mlx5_lag *ldev)
+ }
+
+ if (do_bond && !__mlx5_lag_is_active(ldev)) {
+- bool shared_fdb = mlx5_shared_fdb_supported(ldev);
++ bool shared_fdb = mlx5_lag_shared_fdb_supported(ldev);
+
+ roce_lag = mlx5_lag_is_roce_lag(ldev);
+
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
+index 50fcb1eee57483..48a5f3e7b91a85 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/lag.h
+@@ -92,6 +92,7 @@ mlx5_lag_is_ready(struct mlx5_lag *ldev)
+ return test_bit(MLX5_LAG_FLAG_NDEVS_READY, &ldev->state_flags);
+ }
+
++bool mlx5_lag_shared_fdb_supported(struct mlx5_lag *ldev);
+ bool mlx5_lag_check_prereq(struct mlx5_lag *ldev);
+ void mlx5_modify_lag(struct mlx5_lag *ldev,
+ struct lag_tracker *tracker);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c
+index 571ea26edd0cab..2381a0eec19006 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lag/mpesw.c
+@@ -81,7 +81,8 @@ static int enable_mpesw(struct mlx5_lag *ldev)
+ if (mlx5_eswitch_mode(dev0) != MLX5_ESWITCH_OFFLOADS ||
+ !MLX5_CAP_PORT_SELECTION(dev0, port_select_flow_table) ||
+ !MLX5_CAP_GEN(dev0, create_lag_when_not_master_up) ||
+- !mlx5_lag_check_prereq(ldev))
++ !mlx5_lag_check_prereq(ldev) ||
++ !mlx5_lag_shared_fdb_supported(ldev))
+ return -EOPNOTSUPP;
+
+ err = mlx5_mpesw_metadata_set(ldev);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+index a80ecb672f33dd..711d14dea2485f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/lib/fs_chains.c
+@@ -196,6 +196,11 @@ mlx5_chains_create_table(struct mlx5_fs_chains *chains,
+ ns = mlx5_get_flow_namespace(chains->dev, chains->ns);
+ }
+
++ if (!ns) {
++ mlx5_core_warn(chains->dev, "Failed to get flow namespace\n");
++ return ERR_PTR(-EOPNOTSUPP);
++ }
++
+ ft_attr.autogroup.num_reserved_entries = 2;
+ ft_attr.autogroup.max_num_groups = chains->group_num;
+ ft = mlx5_create_auto_grouped_flow_table(ns, &ft_attr);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_irq.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_irq.h
+index 0881e961d8b177..586688da9940ee 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_irq.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_irq.h
+@@ -10,12 +10,15 @@
+
+ struct mlx5_irq;
+ struct cpu_rmap;
++struct mlx5_irq_pool;
+
+ int mlx5_irq_table_init(struct mlx5_core_dev *dev);
+ void mlx5_irq_table_cleanup(struct mlx5_core_dev *dev);
+ int mlx5_irq_table_create(struct mlx5_core_dev *dev);
+ void mlx5_irq_table_destroy(struct mlx5_core_dev *dev);
+ void mlx5_irq_table_free_irqs(struct mlx5_core_dev *dev);
++struct mlx5_irq_pool *
++mlx5_irq_table_get_comp_irq_pool(struct mlx5_core_dev *dev);
+ int mlx5_irq_table_get_num_comp(struct mlx5_irq_table *table);
+ int mlx5_irq_table_get_sfs_vec(struct mlx5_irq_table *table);
+ struct mlx5_irq_table *mlx5_irq_table_get(struct mlx5_core_dev *dev);
+@@ -38,7 +41,6 @@ struct cpumask *mlx5_irq_get_affinity_mask(struct mlx5_irq *irq);
+ int mlx5_irq_get_index(struct mlx5_irq *irq);
+ int mlx5_irq_get_irq(const struct mlx5_irq *irq);
+
+-struct mlx5_irq_pool;
+ #ifdef CONFIG_MLX5_SF
+ struct mlx5_irq *mlx5_irq_affinity_irq_request_auto(struct mlx5_core_dev *dev,
+ struct cpumask *used_cpus, u16 vecidx);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+index d9362eabc6a1ca..2c5f850c31f683 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.c
+@@ -378,6 +378,11 @@ int mlx5_irq_get_index(struct mlx5_irq *irq)
+ return irq->map.index;
+ }
+
++struct mlx5_irq_pool *mlx5_irq_get_pool(struct mlx5_irq *irq)
++{
++ return irq->pool;
++}
++
+ /* irq_pool API */
+
+ /* requesting an irq from a given pool according to given index */
+@@ -405,18 +410,20 @@ static struct mlx5_irq_pool *sf_ctrl_irq_pool_get(struct mlx5_irq_table *irq_tab
+ return irq_table->sf_ctrl_pool;
+ }
+
+-static struct mlx5_irq_pool *sf_irq_pool_get(struct mlx5_irq_table *irq_table)
++static struct mlx5_irq_pool *
++sf_comp_irq_pool_get(struct mlx5_irq_table *irq_table)
+ {
+ return irq_table->sf_comp_pool;
+ }
+
+-struct mlx5_irq_pool *mlx5_irq_pool_get(struct mlx5_core_dev *dev)
++struct mlx5_irq_pool *
++mlx5_irq_table_get_comp_irq_pool(struct mlx5_core_dev *dev)
+ {
+ struct mlx5_irq_table *irq_table = mlx5_irq_table_get(dev);
+ struct mlx5_irq_pool *pool = NULL;
+
+ if (mlx5_core_is_sf(dev))
+- pool = sf_irq_pool_get(irq_table);
++ pool = sf_comp_irq_pool_get(irq_table);
+
+ /* In some configs, there won't be a pool of SFs IRQs. Hence, returning
+ * the PF IRQs pool in case the SF pool doesn't exist.
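Previously the release path re-derived the pool from the device via mlx5_irq_pool_get(dev), which may disagree with the pool the IRQ was actually taken from (note the PF-pool fallback above). Recording the owning pool in the IRQ at allocation time and exposing it through an accessor makes allocation and release agree by construction. A minimal sketch of the backpointer pattern, names invented:

struct my_pool;

struct my_irq {
	struct my_pool *pool;	/* owning pool, set once at alloc */
	int index;
};

/* Release code asks the object itself where it came from instead of
 * re-deriving the pool from device-wide state.
 */
static struct my_pool *my_irq_get_pool(struct my_irq *irq)
{
	return irq->pool;
}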
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.h b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.h
+index c4d377f8df3089..cc064425fe1608 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/pci_irq.h
+@@ -28,7 +28,6 @@ struct mlx5_irq_pool {
+ struct mlx5_core_dev *dev;
+ };
+
+-struct mlx5_irq_pool *mlx5_irq_pool_get(struct mlx5_core_dev *dev);
+ static inline bool mlx5_irq_pool_is_sf_pool(struct mlx5_irq_pool *pool)
+ {
+ return !strncmp("mlx5_sf", pool->name, strlen("mlx5_sf"));
+@@ -40,5 +39,6 @@ struct mlx5_irq *mlx5_irq_alloc(struct mlx5_irq_pool *pool, int i,
+ int mlx5_irq_get_locked(struct mlx5_irq *irq);
+ int mlx5_irq_read_locked(struct mlx5_irq *irq);
+ int mlx5_irq_put(struct mlx5_irq *irq);
++struct mlx5_irq_pool *mlx5_irq_get_pool(struct mlx5_irq *irq);
+
+ #endif /* __PCI_IRQ_H__ */
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_bwc.h b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_bwc.h
+index 4fe8c32d8fbe86..681fb73f00bbf3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_bwc.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/hws/mlx5hws_bwc.h
+@@ -16,8 +16,8 @@ struct mlx5hws_bwc_matcher {
+ struct mlx5hws_matcher *matcher;
+ struct mlx5hws_match_template *mt;
+ struct mlx5hws_action_template *at[MLX5HWS_BWC_MATCHER_ATTACH_AT_NUM];
++ u32 priority;
+ u8 num_of_at;
+- u16 priority;
+ u8 size_log;
+ u32 num_of_rules; /* atomically accessed */
+ struct list_head *rules;
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
+index f9dd50152b1e3e..28d24d59efb84f 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_sriov_common.c
+@@ -454,8 +454,10 @@ static int qlcnic_sriov_set_guest_vlan_mode(struct qlcnic_adapter *adapter,
+
+ num_vlans = sriov->num_allowed_vlans;
+ sriov->allowed_vlans = kcalloc(num_vlans, sizeof(u16), GFP_KERNEL);
+- if (!sriov->allowed_vlans)
++ if (!sriov->allowed_vlans) {
++ qlcnic_sriov_free_vlans(adapter);
+ return -ENOMEM;
++ }
+
+ vlans = (u16 *)&cmd->rsp.arg[3];
+ for (i = 0; i < num_vlans; i++)
+@@ -2167,8 +2169,10 @@ int qlcnic_sriov_alloc_vlans(struct qlcnic_adapter *adapter)
+ vf = &sriov->vf_info[i];
+ vf->sriov_vlans = kcalloc(sriov->num_allowed_vlans,
+ sizeof(*vf->sriov_vlans), GFP_KERNEL);
+- if (!vf->sriov_vlans)
++ if (!vf->sriov_vlans) {
++ qlcnic_sriov_free_vlans(adapter);
+ return -ENOMEM;
++ }
+ }
+
+ return 0;
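Both hunks turn a bare -ENOMEM return into "unwind, then fail": qlcnic_sriov_free_vlans() releases whatever earlier iterations set up, so the caller never inherits a half-populated VLAN table. A self-contained sketch of the rollback idiom, with made-up types:

#include <linux/slab.h>

struct my_vf {
	u16 *vlans;
};

/* On the first allocation failure, free what the earlier iterations
 * allocated before returning -ENOMEM, so the caller never sees a
 * partially initialized array.
 */
static int my_alloc_vlans(struct my_vf *vfs, int nvfs, int nvlans)
{
	int i;

	for (i = 0; i < nvfs; i++) {
		vfs[i].vlans = kcalloc(nvlans, sizeof(u16), GFP_KERNEL);
		if (!vfs[i].vlans)
			goto undo;
	}
	return 0;

undo:
	while (--i >= 0) {
		kfree(vfs[i].vlans);
		vfs[i].vlans = NULL;
	}
	return -ENOMEM;
}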
+diff --git a/drivers/net/ethernet/realtek/rtase/rtase_main.c b/drivers/net/ethernet/realtek/rtase/rtase_main.c
+index 14ffd45e9a25a7..86dd034fdddc52 100644
+--- a/drivers/net/ethernet/realtek/rtase/rtase_main.c
++++ b/drivers/net/ethernet/realtek/rtase/rtase_main.c
+@@ -1501,7 +1501,10 @@ static void rtase_wait_for_quiescence(const struct net_device *dev)
+ static void rtase_sw_reset(struct net_device *dev)
+ {
+ struct rtase_private *tp = netdev_priv(dev);
++ struct rtase_ring *ring, *tmp;
++ struct rtase_int_vector *ivec;
+ int ret;
++ u32 i;
+
+ netif_stop_queue(dev);
+ netif_carrier_off(dev);
+@@ -1512,6 +1515,13 @@ static void rtase_sw_reset(struct net_device *dev)
+ rtase_tx_clear(tp);
+ rtase_rx_clear(tp);
+
++ for (i = 0; i < tp->int_nums; i++) {
++ ivec = &tp->int_vector[i];
++ list_for_each_entry_safe(ring, tmp, &ivec->ring_list,
++ ring_entry)
++ list_del(&ring->ring_entry);
++ }
++
+ ret = rtase_init_ring(dev);
+ if (ret) {
+ netdev_err(dev, "unable to init ring\n");
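The reset path clears the tx/rx descriptors but previously left stale entries on each vector's ring_list; rtase_init_ring() then added the rings again, corrupting the list. Unlinking while walking requires list_for_each_entry_safe(), which caches the next node in a lookahead cursor so list_del() on the current node is safe. A minimal sketch:

#include <linux/list.h>

struct my_ring {
	struct list_head entry;
};

/* Deleting while iterating needs the _safe variant: @tmp holds the
 * next node before the current one is unlinked.
 */
static void drain_ring_list(struct list_head *head)
{
	struct my_ring *ring, *tmp;

	list_for_each_entry_safe(ring, tmp, head, entry)
		list_del(&ring->entry);
}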
+diff --git a/drivers/net/mctp/mctp-i2c.c b/drivers/net/mctp/mctp-i2c.c
+index e70fb66879941f..6622de48fc9e76 100644
+--- a/drivers/net/mctp/mctp-i2c.c
++++ b/drivers/net/mctp/mctp-i2c.c
+@@ -584,6 +584,7 @@ static int mctp_i2c_header_create(struct sk_buff *skb, struct net_device *dev,
+ struct mctp_i2c_hdr *hdr;
+ struct mctp_hdr *mhdr;
+ u8 lldst, llsrc;
++ int rc;
+
+ if (len > MCTP_I2C_MAXMTU)
+ return -EMSGSIZE;
+@@ -594,6 +595,10 @@ static int mctp_i2c_header_create(struct sk_buff *skb, struct net_device *dev,
+ lldst = *((u8 *)daddr);
+ llsrc = *((u8 *)saddr);
+
++ rc = skb_cow_head(skb, sizeof(struct mctp_i2c_hdr));
++ if (rc)
++ return rc;
++
+ skb_push(skb, sizeof(struct mctp_i2c_hdr));
+ skb_reset_mac_header(skb);
+ hdr = (void *)skb_mac_header(skb);
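skb_push() assumes the headroom it moves into both exists and is exclusively owned; on a cloned skb, or one with too little headroom, that assumption corrupts memory. skb_cow_head(skb, n) guarantees n bytes of writable headroom, reallocating the header if necessary, and the fix inserts it before the hardware header is built (the i3c driver below gets the same treatment). A minimal sketch of the pattern, with an invented header layout:

#include <linux/skbuff.h>

struct my_hdr {
	u8 dest;
	u8 source;
};

/* Ensure sizeof(hdr) bytes of writable headroom exist, then push and
 * fill the header. Without skb_cow_head() a cloned or headroom-less
 * skb would be corrupted here.
 */
static int my_header_create(struct sk_buff *skb, u8 dst, u8 src)
{
	struct my_hdr *hdr;
	int rc;

	rc = skb_cow_head(skb, sizeof(*hdr));
	if (rc)
		return rc;

	hdr = skb_push(skb, sizeof(*hdr));
	hdr->dest = dst;
	hdr->source = src;
	return 0;
}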
+diff --git a/drivers/net/mctp/mctp-i3c.c b/drivers/net/mctp/mctp-i3c.c
+index a2b15cddf46e6b..47513ebbc68079 100644
+--- a/drivers/net/mctp/mctp-i3c.c
++++ b/drivers/net/mctp/mctp-i3c.c
+@@ -506,10 +506,15 @@ static int mctp_i3c_header_create(struct sk_buff *skb, struct net_device *dev,
+ const void *saddr, unsigned int len)
+ {
+ struct mctp_i3c_internal_hdr *ihdr;
++ int rc;
+
+ if (!daddr || !saddr)
+ return -EINVAL;
+
++ rc = skb_cow_head(skb, sizeof(struct mctp_i3c_internal_hdr));
++ if (rc)
++ return rc;
++
+ skb_push(skb, sizeof(struct mctp_i3c_internal_hdr));
+ skb_reset_mac_header(skb);
+ ihdr = (void *)skb_mac_header(skb);
+diff --git a/drivers/net/phy/nxp-c45-tja11xx.c b/drivers/net/phy/nxp-c45-tja11xx.c
+index ae43103c76cbd8..9788b820c6be72 100644
+--- a/drivers/net/phy/nxp-c45-tja11xx.c
++++ b/drivers/net/phy/nxp-c45-tja11xx.c
+@@ -21,6 +21,11 @@
+ #define PHY_ID_TJA_1103 0x001BB010
+ #define PHY_ID_TJA_1120 0x001BB031
+
++#define VEND1_DEVICE_ID3 0x0004
++#define TJA1120_DEV_ID3_SILICON_VERSION GENMASK(15, 12)
++#define TJA1120_DEV_ID3_SAMPLE_TYPE GENMASK(11, 8)
++#define DEVICE_ID3_SAMPLE_TYPE_R 0x9
++
+ #define VEND1_DEVICE_CONTROL 0x0040
+ #define DEVICE_CONTROL_RESET BIT(15)
+ #define DEVICE_CONTROL_CONFIG_GLOBAL_EN BIT(14)
+@@ -108,6 +113,9 @@
+ #define MII_BASIC_CONFIG_RMII 0x5
+ #define MII_BASIC_CONFIG_MII 0x4
+
++#define VEND1_SGMII_BASIC_CONTROL 0xB000
++#define SGMII_LPM BIT(11)
++
+ #define VEND1_SYMBOL_ERROR_CNT_XTD 0x8351
+ #define EXTENDED_CNT_EN BIT(15)
+ #define VEND1_MONITOR_STATUS 0xAC80
+@@ -1583,6 +1591,63 @@ static int nxp_c45_set_phy_mode(struct phy_device *phydev)
+ return 0;
+ }
+
++/* Errata: ES_TJA1120 and ES_TJA1121 Rev. 1.0 — 28 November 2024 Section 3.1 & 3.2 */
++static void nxp_c45_tja1120_errata(struct phy_device *phydev)
++{
++ bool macsec_ability, sgmii_ability;
++ int silicon_version, sample_type;
++ int phy_abilities;
++ int ret = 0;
++
++ ret = phy_read_mmd(phydev, MDIO_MMD_VEND1, VEND1_DEVICE_ID3);
++ if (ret < 0)
++ return;
++
++ sample_type = FIELD_GET(TJA1120_DEV_ID3_SAMPLE_TYPE, ret);
++ if (sample_type != DEVICE_ID3_SAMPLE_TYPE_R)
++ return;
++
++ silicon_version = FIELD_GET(TJA1120_DEV_ID3_SILICON_VERSION, ret);
++
++ phy_abilities = phy_read_mmd(phydev, MDIO_MMD_VEND1,
++ VEND1_PORT_ABILITIES);
++ macsec_ability = !!(phy_abilities & MACSEC_ABILITY);
++ sgmii_ability = !!(phy_abilities & SGMII_ABILITY);
++ if ((!macsec_ability && silicon_version == 2) ||
++ (macsec_ability && silicon_version == 1)) {
++ /* TJA1120/TJA1121 PHY configuration errata workaround.
++ * Apply PHY writes sequence before link up.
++ */
++ if (!macsec_ability) {
++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F8, 0x4b95);
++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F9, 0xf3cd);
++ } else {
++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F8, 0x89c7);
++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F9, 0x0893);
++ }
++
++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x0476, 0x58a0);
++
++ phy_write_mmd(phydev, MDIO_MMD_PMAPMD, 0x8921, 0xa3a);
++ phy_write_mmd(phydev, MDIO_MMD_PMAPMD, 0x89F1, 0x16c1);
++
++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F8, 0x0);
++ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F9, 0x0);
++
++ if (sgmii_ability) {
++ /* TJA1120B/TJA1121B SGMII PCS restart errata workaround.
++ * Put SGMII PCS into power down mode and back up.
++ */
++ phy_set_bits_mmd(phydev, MDIO_MMD_VEND1,
++ VEND1_SGMII_BASIC_CONTROL,
++ SGMII_LPM);
++ phy_clear_bits_mmd(phydev, MDIO_MMD_VEND1,
++ VEND1_SGMII_BASIC_CONTROL,
++ SGMII_LPM);
++ }
++ }
++}
++
+ static int nxp_c45_config_init(struct phy_device *phydev)
+ {
+ int ret;
+@@ -1599,6 +1664,9 @@ static int nxp_c45_config_init(struct phy_device *phydev)
+ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F8, 1);
+ phy_write_mmd(phydev, MDIO_MMD_VEND1, 0x01F9, 2);
+
++ if (phy_id_compare(phydev->phy_id, PHY_ID_TJA_1120, GENMASK(31, 4)))
++ nxp_c45_tja1120_errata(phydev);
++
+ phy_set_bits_mmd(phydev, MDIO_MMD_VEND1, VEND1_PHY_CONFIG,
+ PHY_CONFIG_AUTO);
+
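The workaround is gated on silicon identity: one read of VEND1_DEVICE_ID3 is decoded into a sample type (bits 11:8) and a silicon version (bits 15:12) via FIELD_GET(), which shifts and masks according to a GENMASK() constant. For example, a hypothetical register value of 0x2900 decodes to sample type 0x9, matching DEVICE_ID3_SAMPLE_TYPE_R, and silicon version 2. A compact demonstration:

#include <linux/bitfield.h>
#include <linux/bits.h>
#include <linux/printk.h>
#include <linux/types.h>

#define DEV_ID3_SILICON_VERSION	GENMASK(15, 12)
#define DEV_ID3_SAMPLE_TYPE	GENMASK(11, 8)

/* FIELD_GET() masks and shifts in one step.
 * For reg = 0x2900: silicon version = 2, sample type = 0x9.
 */
static void decode_id3(u16 reg)
{
	unsigned int version = FIELD_GET(DEV_ID3_SILICON_VERSION, reg);
	unsigned int sample = FIELD_GET(DEV_ID3_SAMPLE_TYPE, reg);

	pr_info("silicon %u, sample 0x%x\n", version, sample);
}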
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index f30b0fc8eca97d..2b9a684cf61d57 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0 OR BSD-3-Clause
+ /*
+- * Copyright (C) 2012-2014, 2018-2024 Intel Corporation
++ * Copyright (C) 2012-2014, 2018-2025 Intel Corporation
+ * Copyright (C) 2013-2015 Intel Mobile Communications GmbH
+ * Copyright (C) 2016-2017 Intel Deutschland GmbH
+ */
+@@ -422,6 +422,8 @@ static int iwl_mvm_load_ucode_wait_alive(struct iwl_mvm *mvm,
+ /* if reached this point, Alive notification was received */
+ iwl_mei_alive_notif(true);
+
++ iwl_trans_fw_alive(mvm->trans, alive_data.scd_base_addr);
++
+ ret = iwl_pnvm_load(mvm->trans, &mvm->notif_wait,
+ &mvm->fw->ucode_capa);
+ if (ret) {
+@@ -430,8 +432,6 @@ static int iwl_mvm_load_ucode_wait_alive(struct iwl_mvm *mvm,
+ return ret;
+ }
+
+- iwl_trans_fw_alive(mvm->trans, alive_data.scd_base_addr);
+-
+ /*
+ * Note: all the queues are enabled as part of the interface
+ * initialization, but in firmware restart scenarios they
+diff --git a/drivers/net/wwan/mhi_wwan_mbim.c b/drivers/net/wwan/mhi_wwan_mbim.c
+index d5a9360323d29d..8755c5e6a65b30 100644
+--- a/drivers/net/wwan/mhi_wwan_mbim.c
++++ b/drivers/net/wwan/mhi_wwan_mbim.c
+@@ -220,7 +220,7 @@ static int mbim_rx_verify_nth16(struct mhi_mbim_context *mbim, struct sk_buff *s
+ if (mbim->rx_seq + 1 != le16_to_cpu(nth16->wSequence) &&
+ (mbim->rx_seq || le16_to_cpu(nth16->wSequence)) &&
+ !(mbim->rx_seq == 0xffff && !le16_to_cpu(nth16->wSequence))) {
+- net_err_ratelimited("sequence number glitch prev=%d curr=%d\n",
++ net_dbg_ratelimited("sequence number glitch prev=%d curr=%d\n",
+ mbim->rx_seq, le16_to_cpu(nth16->wSequence));
+ }
+ mbim->rx_seq = le16_to_cpu(nth16->wSequence);
+diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c
+index b1387dc459a323..e79a0adf13950b 100644
+--- a/drivers/nvme/host/apple.c
++++ b/drivers/nvme/host/apple.c
+@@ -599,7 +599,8 @@ static inline void apple_nvme_handle_cqe(struct apple_nvme_queue *q,
+ }
+
+ if (!nvme_try_complete_req(req, cqe->status, cqe->result) &&
+- !blk_mq_add_to_batch(req, iob, nvme_req(req)->status,
++ !blk_mq_add_to_batch(req, iob,
++ nvme_req(req)->status != NVME_SC_SUCCESS,
+ apple_nvme_complete_batch))
+ apple_nvme_complete_rq(req);
+ }
+@@ -1518,6 +1519,7 @@ static struct apple_nvme *apple_nvme_alloc(struct platform_device *pdev)
+
+ return anv;
+ put_dev:
++ apple_nvme_detach_genpd(anv);
+ put_device(anv->dev);
+ return ERR_PTR(ret);
+ }
+@@ -1551,6 +1553,7 @@ static int apple_nvme_probe(struct platform_device *pdev)
+ nvme_uninit_ctrl(&anv->ctrl);
+ out_put_ctrl:
+ nvme_put_ctrl(&anv->ctrl);
++ apple_nvme_detach_genpd(anv);
+ return ret;
+ }
+
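In both NVMe completion paths (this Apple hunk and the pci.c one further down), the third argument of blk_mq_add_to_batch() is an error indication, so comparing against NVME_SC_SUCCESS (which is 0) passes an explicit "this command failed" flag instead of the raw status word. The shape of the logic, reduced to hypothetical stand-in helpers (the in-tree functions take more context than shown):

#include <linux/types.h>

/* Stand-ins for the real blk-mq/nvme helpers, simplified. */
static bool try_complete(void *req) { return false; }
static bool add_to_batch(void *req, void *iob, bool failed) { return !failed; }
static void complete_one(void *req) { }

/* A request joins the completion batch only if it succeeded
 * (NVME_SC_SUCCESS == 0); failures take the individual path, where
 * error handling and logging happen.
 */
static void handle_cqe(void *req, void *iob, u16 nvme_status)
{
	bool failed = nvme_status != 0;

	if (!try_complete(req) && !add_to_batch(req, iob, failed))
		complete_one(req);
}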
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 8da50df56b0795..9bdf6fc53697c0 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -429,6 +429,12 @@ static inline void nvme_end_req_zoned(struct request *req)
+
+ static inline void __nvme_end_req(struct request *req)
+ {
++ if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET))) {
++ if (blk_rq_is_passthrough(req))
++ nvme_log_err_passthru(req);
++ else
++ nvme_log_error(req);
++ }
+ nvme_end_req_zoned(req);
+ nvme_trace_bio_complete(req);
+ if (req->cmd_flags & REQ_NVME_MPATH)
+@@ -439,12 +445,6 @@ void nvme_end_req(struct request *req)
+ {
+ blk_status_t status = nvme_error_status(nvme_req(req)->status);
+
+- if (unlikely(nvme_req(req)->status && !(req->rq_flags & RQF_QUIET))) {
+- if (blk_rq_is_passthrough(req))
+- nvme_log_err_passthru(req);
+- else
+- nvme_log_error(req);
+- }
+ __nvme_end_req(req);
+ blk_mq_end_request(req, status);
+ }
+@@ -562,8 +562,6 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
+ switch (new_state) {
+ case NVME_CTRL_LIVE:
+ switch (old_state) {
+- case NVME_CTRL_NEW:
+- case NVME_CTRL_RESETTING:
+ case NVME_CTRL_CONNECTING:
+ changed = true;
+ fallthrough;
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 682234da2fabe0..7c13a400071e65 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -786,49 +786,8 @@ nvme_fc_ctrl_connectivity_loss(struct nvme_fc_ctrl *ctrl)
+ "NVME-FC{%d}: controller connectivity lost. Awaiting "
+ "Reconnect", ctrl->cnum);
+
+- switch (nvme_ctrl_state(&ctrl->ctrl)) {
+- case NVME_CTRL_NEW:
+- case NVME_CTRL_LIVE:
+- /*
+- * Schedule a controller reset. The reset will terminate the
+- * association and schedule the reconnect timer. Reconnects
+- * will be attempted until either the ctlr_loss_tmo
+- * (max_retries * connect_delay) expires or the remoteport's
+- * dev_loss_tmo expires.
+- */
+- if (nvme_reset_ctrl(&ctrl->ctrl)) {
+- dev_warn(ctrl->ctrl.device,
+- "NVME-FC{%d}: Couldn't schedule reset.\n",
+- ctrl->cnum);
+- nvme_delete_ctrl(&ctrl->ctrl);
+- }
+- break;
+-
+- case NVME_CTRL_CONNECTING:
+- /*
+- * The association has already been terminated and the
+- * controller is attempting reconnects. No need to do anything
+-	 * further. Reconnects will be attempted until either the
+- * ctlr_loss_tmo (max_retries * connect_delay) expires or the
+- * remoteport's dev_loss_tmo expires.
+- */
+- break;
+-
+- case NVME_CTRL_RESETTING:
+- /*
+- * Controller is already in the process of terminating the
+- * association. No need to do anything further. The reconnect
+- * step will kick in naturally after the association is
+- * terminated.
+- */
+- break;
+-
+- case NVME_CTRL_DELETING:
+- case NVME_CTRL_DELETING_NOIO:
+- default:
+- /* no action to take - let it delete */
+- break;
+- }
++ set_bit(ASSOC_FAILED, &ctrl->flags);
++ nvme_reset_ctrl(&ctrl->ctrl);
+ }
+
+ /**
+@@ -2546,7 +2505,6 @@ nvme_fc_error_recovery(struct nvme_fc_ctrl *ctrl, char *errmsg)
+ */
+ if (state == NVME_CTRL_CONNECTING) {
+ __nvme_fc_abort_outstanding_ios(ctrl, true);
+- set_bit(ASSOC_FAILED, &ctrl->flags);
+ dev_warn(ctrl->ctrl.device,
+ "NVME-FC{%d}: transport error during (re)connect\n",
+ ctrl->cnum);
+@@ -3065,7 +3023,6 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
+ struct nvmefc_ls_rcv_op *disls = NULL;
+ unsigned long flags;
+ int ret;
+- bool changed;
+
+ ++ctrl->ctrl.nr_reconnects;
+
+@@ -3176,12 +3133,13 @@ nvme_fc_create_association(struct nvme_fc_ctrl *ctrl)
+ if (ret)
+ goto out_term_aen_ops;
+
+- changed = nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE);
++ if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_LIVE)) {
++ ret = -EIO;
++ goto out_term_aen_ops;
++ }
+
+ ctrl->ctrl.nr_reconnects = 0;
+-
+- if (changed)
+- nvme_start_ctrl(&ctrl->ctrl);
++ nvme_start_ctrl(&ctrl->ctrl);
+
+ return 0; /* Success */
+
+@@ -3582,8 +3540,7 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
+ list_add_tail(&ctrl->ctrl_list, &rport->ctrl_list);
+ spin_unlock_irqrestore(&rport->lock, flags);
+
+- if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_RESETTING) ||
+- !nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
++ if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
+ dev_err(ctrl->ctrl.device,
+ "NVME-FC{%d}: failed to init ctrl state\n", ctrl->cnum);
+ goto fail_ctrl;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index e1329d4974fd6f..1d3205f08af847 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1131,8 +1131,9 @@ static inline void nvme_handle_cqe(struct nvme_queue *nvmeq,
+
+ trace_nvme_sq(req, cqe->sq_head, nvmeq->sq_tail);
+ if (!nvme_try_complete_req(req, cqe->status, cqe->result) &&
+- !blk_mq_add_to_batch(req, iob, nvme_req(req)->status,
+- nvme_pci_complete_batch))
++ !blk_mq_add_to_batch(req, iob,
++ nvme_req(req)->status != NVME_SC_SUCCESS,
++ nvme_pci_complete_batch))
+ nvme_pci_complete_rq(req);
+ }
+
+@@ -3669,6 +3670,8 @@ static const struct pci_device_id nvme_id_table[] = {
+ .driver_data = NVME_QUIRK_BOGUS_NID, },
+ { PCI_DEVICE(0x1cc1, 0x5350), /* ADATA XPG GAMMIX S50 */
+ .driver_data = NVME_QUIRK_BOGUS_NID, },
++ { PCI_DEVICE(0x1dbe, 0x5216), /* Acer/INNOGRIT FA100/5216 NVMe SSD */
++ .driver_data = NVME_QUIRK_BOGUS_NID, },
+ { PCI_DEVICE(0x1dbe, 0x5236), /* ADATA XPG GAMMIX S70 */
+ .driver_data = NVME_QUIRK_BOGUS_NID, },
+ { PCI_DEVICE(0x1e49, 0x0021), /* ZHITAI TiPro5000 NVMe SSD */
+diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
+index 1afd93026f9bf0..2a4536ef618487 100644
+--- a/drivers/nvme/target/rdma.c
++++ b/drivers/nvme/target/rdma.c
+@@ -996,6 +996,27 @@ static void nvmet_rdma_handle_command(struct nvmet_rdma_queue *queue,
+ nvmet_req_complete(&cmd->req, status);
+ }
+
++static bool nvmet_rdma_recv_not_live(struct nvmet_rdma_queue *queue,
++ struct nvmet_rdma_rsp *rsp)
++{
++ unsigned long flags;
++ bool ret = true;
++
++ spin_lock_irqsave(&queue->state_lock, flags);
++ /*
++	 * Re-check that the queue state is not live, to avoid racing
++	 * with the RDMA_CM_EVENT_ESTABLISHED handler.
++ */
++ if (queue->state == NVMET_RDMA_Q_LIVE)
++ ret = false;
++ else if (queue->state == NVMET_RDMA_Q_CONNECTING)
++ list_add_tail(&rsp->wait_list, &queue->rsp_wait_list);
++ else
++ nvmet_rdma_put_rsp(rsp);
++ spin_unlock_irqrestore(&queue->state_lock, flags);
++ return ret;
++}
++
+ static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ {
+ struct nvmet_rdma_cmd *cmd =
+@@ -1038,17 +1059,9 @@ static void nvmet_rdma_recv_done(struct ib_cq *cq, struct ib_wc *wc)
+ rsp->n_rdma = 0;
+ rsp->invalidate_rkey = 0;
+
+- if (unlikely(queue->state != NVMET_RDMA_Q_LIVE)) {
+- unsigned long flags;
+-
+- spin_lock_irqsave(&queue->state_lock, flags);
+- if (queue->state == NVMET_RDMA_Q_CONNECTING)
+- list_add_tail(&rsp->wait_list, &queue->rsp_wait_list);
+- else
+- nvmet_rdma_put_rsp(rsp);
+- spin_unlock_irqrestore(&queue->state_lock, flags);
++ if (unlikely(queue->state != NVMET_RDMA_Q_LIVE) &&
++ nvmet_rdma_recv_not_live(queue, rsp))
+ return;
+- }
+
+ nvmet_rdma_handle_command(queue, rsp);
+ }
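The old code tested queue->state locklessly and only then took state_lock, so a queue flipped to LIVE in that window by the RDMA_CM_EVENT_ESTABLISHED handler could have its command parked on the wait list indefinitely. The helper re-reads the state under the lock and tells the caller to proceed normally when the race was lost. A distilled sketch of the recheck-under-lock idiom, types invented:

#include <linux/spinlock.h>
#include <linux/types.h>

enum my_q_state { Q_CONNECTING, Q_LIVE, Q_DEAD };

struct my_queue {
	spinlock_t lock;
	enum my_q_state state;
};

/* The fast path reads @state locklessly; the slow path re-reads it
 * under the lock because it may have become LIVE in between. Returns
 * true if the caller should stop (work was queued or dropped).
 */
static bool my_defer_if_not_live(struct my_queue *q)
{
	unsigned long flags;
	bool defer = true;

	spin_lock_irqsave(&q->lock, flags);
	if (q->state == Q_LIVE)
		defer = false;	/* raced with going live: proceed */
	else if (q->state == Q_CONNECTING)
		; /* queue the work for later (omitted) */
	else
		; /* tear-down: drop the work (omitted) */
	spin_unlock_irqrestore(&q->lock, flags);

	return defer;
}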
+diff --git a/drivers/phy/ti/phy-gmii-sel.c b/drivers/phy/ti/phy-gmii-sel.c
+index 103b266fec7717..2c2256fe5a3b6f 100644
+--- a/drivers/phy/ti/phy-gmii-sel.c
++++ b/drivers/phy/ti/phy-gmii-sel.c
+@@ -423,6 +423,12 @@ static int phy_gmii_sel_init_ports(struct phy_gmii_sel_priv *priv)
+ return 0;
+ }
+
++static const struct regmap_config phy_gmii_sel_regmap_cfg = {
++ .reg_bits = 32,
++ .val_bits = 32,
++ .reg_stride = 4,
++};
++
+ static int phy_gmii_sel_probe(struct platform_device *pdev)
+ {
+ struct device *dev = &pdev->dev;
+@@ -467,7 +473,14 @@ static int phy_gmii_sel_probe(struct platform_device *pdev)
+
+ priv->regmap = syscon_node_to_regmap(node->parent);
+ if (IS_ERR(priv->regmap)) {
+- priv->regmap = device_node_to_regmap(node);
++ void __iomem *base;
++
++ base = devm_platform_ioremap_resource(pdev, 0);
++ if (IS_ERR(base))
++ return dev_err_probe(dev, PTR_ERR(base),
++ "failed to get base memory resource\n");
++
++ priv->regmap = regmap_init_mmio(dev, base, &phy_gmii_sel_regmap_cfg);
+ if (IS_ERR(priv->regmap))
+ return dev_err_probe(dev, PTR_ERR(priv->regmap),
+ "Failed to get syscon\n");
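When the parent node is not a syscon provider, the probe now maps the device's own register window and builds an MMIO regmap from an explicit regmap_config describing 32-bit registers packed every 4 bytes. A minimal sketch of that fallback (devm_regmap_init_mmio() would additionally tie the regmap's lifetime to the device, which new code often prefers):

#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>

static const struct regmap_config my_regmap_cfg = {
	.reg_bits = 32,		/* register addresses are 32 bit */
	.val_bits = 32,		/* register values are 32 bit   */
	.reg_stride = 4,	/* registers sit 4 bytes apart   */
};

/* Map the device's own MMIO window and wrap it in a regmap when no
 * syscon parent provides one.
 */
static struct regmap *my_get_regmap(struct platform_device *pdev)
{
	void __iomem *base;

	base = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(base))
		return ERR_CAST(base);

	return regmap_init_mmio(&pdev->dev, base, &my_regmap_cfg);
}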
+diff --git a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
+index 73dbf29c002f39..cf6efa9c0364a1 100644
+--- a/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
++++ b/drivers/pinctrl/bcm/pinctrl-bcm281xx.c
+@@ -974,7 +974,7 @@ static const struct regmap_config bcm281xx_pinctrl_regmap_config = {
+ .reg_bits = 32,
+ .reg_stride = 4,
+ .val_bits = 32,
+- .max_register = BCM281XX_PIN_VC_CAM3_SDA,
++ .max_register = BCM281XX_PIN_VC_CAM3_SDA * 4,
+ };
+
+ static int bcm281xx_pinctrl_get_groups_count(struct pinctrl_dev *pctldev)
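A note on the arithmetic behind this one-line fix: .max_register is the highest valid register offset in bytes, not the highest register index. With .val_bits = 32 and .reg_stride = 4, pin index N lives at byte offset N * 4, so if the last pin enum value were, say, 150 (a hypothetical figure; the real constant lives in the pinctrl header), the correct bound is 150 * 4 = 600. Using the bare enum value made the regmap reject accesses to roughly the upper three quarters of the pin registers.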
+diff --git a/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c b/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c
+index 471f644c5eef2c..d09a5e9b2eca53 100644
+--- a/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c
++++ b/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c
+@@ -2374,6 +2374,9 @@ static int npcm8xx_gpio_fw(struct npcm8xx_pinctrl *pctrl)
+ pctrl->gpio_bank[id].gc.parent = dev;
+ pctrl->gpio_bank[id].gc.fwnode = child;
+ pctrl->gpio_bank[id].gc.label = devm_kasprintf(dev, GFP_KERNEL, "%pfw", child);
++ if (pctrl->gpio_bank[id].gc.label == NULL)
++ return -ENOMEM;
++
+ pctrl->gpio_bank[id].gc.dbg_show = npcmgpio_dbg_show;
+ pctrl->gpio_bank[id].direction_input = pctrl->gpio_bank[id].gc.direction_input;
+ pctrl->gpio_bank[id].gc.direction_input = npcmgpio_direction_input;
+diff --git a/drivers/platform/x86/intel/int3472/discrete.c b/drivers/platform/x86/intel/int3472/discrete.c
+index 15678508ee5019..9e69ac9cfb92ce 100644
+--- a/drivers/platform/x86/intel/int3472/discrete.c
++++ b/drivers/platform/x86/intel/int3472/discrete.c
+@@ -2,6 +2,7 @@
+ /* Author: Dan Scally <djrscally@gmail.com> */
+
+ #include <linux/acpi.h>
++#include <linux/array_size.h>
+ #include <linux/bitfield.h>
+ #include <linux/device.h>
+ #include <linux/gpio/consumer.h>
+@@ -55,7 +56,7 @@ static void skl_int3472_log_sensor_module_name(struct int3472_discrete_device *i
+
+ static int skl_int3472_fill_gpiod_lookup(struct gpiod_lookup *table_entry,
+ struct acpi_resource_gpio *agpio,
+- const char *func, u32 polarity)
++ const char *func, unsigned long gpio_flags)
+ {
+ char *path = agpio->resource_source.string_ptr;
+ struct acpi_device *adev;
+@@ -70,14 +71,14 @@ static int skl_int3472_fill_gpiod_lookup(struct gpiod_lookup *table_entry,
+ if (!adev)
+ return -ENODEV;
+
+- *table_entry = GPIO_LOOKUP(acpi_dev_name(adev), agpio->pin_table[0], func, polarity);
++ *table_entry = GPIO_LOOKUP(acpi_dev_name(adev), agpio->pin_table[0], func, gpio_flags);
+
+ return 0;
+ }
+
+ static int skl_int3472_map_gpio_to_sensor(struct int3472_discrete_device *int3472,
+ struct acpi_resource_gpio *agpio,
+- const char *func, u32 polarity)
++ const char *func, unsigned long gpio_flags)
+ {
+ int ret;
+
+@@ -87,7 +88,7 @@ static int skl_int3472_map_gpio_to_sensor(struct int3472_discrete_device *int347
+ }
+
+ ret = skl_int3472_fill_gpiod_lookup(&int3472->gpios.table[int3472->n_sensor_gpios],
+- agpio, func, polarity);
++ agpio, func, gpio_flags);
+ if (ret)
+ return ret;
+
+@@ -100,7 +101,7 @@ static int skl_int3472_map_gpio_to_sensor(struct int3472_discrete_device *int347
+ static struct gpio_desc *
+ skl_int3472_gpiod_get_from_temp_lookup(struct int3472_discrete_device *int3472,
+ struct acpi_resource_gpio *agpio,
+- const char *func, u32 polarity)
++ const char *func, unsigned long gpio_flags)
+ {
+ struct gpio_desc *desc;
+ int ret;
+@@ -111,7 +112,7 @@ skl_int3472_gpiod_get_from_temp_lookup(struct int3472_discrete_device *int3472,
+ return ERR_PTR(-ENOMEM);
+
+ lookup->dev_id = dev_name(int3472->dev);
+- ret = skl_int3472_fill_gpiod_lookup(&lookup->table[0], agpio, func, polarity);
++ ret = skl_int3472_fill_gpiod_lookup(&lookup->table[0], agpio, func, gpio_flags);
+ if (ret)
+ return ERR_PTR(ret);
+
+@@ -122,32 +123,76 @@ skl_int3472_gpiod_get_from_temp_lookup(struct int3472_discrete_device *int3472,
+ return desc;
+ }
+
+-static void int3472_get_func_and_polarity(u8 type, const char **func, u32 *polarity)
++/**
++ * struct int3472_gpio_map - Map GPIOs to whatever is expected by the
++ * sensor driver (as in DT bindings)
++ * @hid: The ACPI HID of the device without the instance number e.g. INT347E
++ * @type_from: The GPIO type from ACPI ?SDT
++ * @type_to: The assigned GPIO type, typically same as @type_from
++ * @func: The function, e.g. "enable"
++ * @polarity_low: GPIO_ACTIVE_LOW true if the @polarity_low is true,
++ * GPIO_ACTIVE_HIGH otherwise
++ */
++struct int3472_gpio_map {
++ const char *hid;
++ u8 type_from;
++ u8 type_to;
++ bool polarity_low;
++ const char *func;
++};
++
++static const struct int3472_gpio_map int3472_gpio_map[] = {
++ { "INT347E", INT3472_GPIO_TYPE_RESET, INT3472_GPIO_TYPE_RESET, false, "enable" },
++};
++
++static void int3472_get_func_and_polarity(struct acpi_device *adev, u8 *type,
++ const char **func, unsigned long *gpio_flags)
+ {
+- switch (type) {
++ unsigned int i;
++
++ for (i = 0; i < ARRAY_SIZE(int3472_gpio_map); i++) {
++ /*
++ * Map the firmware-provided GPIO to whatever a driver expects
++ * (as in DT bindings). First check if the type matches with the
++ * GPIO map, then further check that the device _HID matches.
++ */
++ if (*type != int3472_gpio_map[i].type_from)
++ continue;
++
++ if (!acpi_dev_hid_uid_match(adev, int3472_gpio_map[i].hid, NULL))
++ continue;
++
++ *type = int3472_gpio_map[i].type_to;
++ *gpio_flags = int3472_gpio_map[i].polarity_low ?
++ GPIO_ACTIVE_LOW : GPIO_ACTIVE_HIGH;
++ *func = int3472_gpio_map[i].func;
++ return;
++ }
++
++ switch (*type) {
+ case INT3472_GPIO_TYPE_RESET:
+ *func = "reset";
+- *polarity = GPIO_ACTIVE_LOW;
++ *gpio_flags = GPIO_ACTIVE_LOW;
+ break;
+ case INT3472_GPIO_TYPE_POWERDOWN:
+ *func = "powerdown";
+- *polarity = GPIO_ACTIVE_LOW;
++ *gpio_flags = GPIO_ACTIVE_LOW;
+ break;
+ case INT3472_GPIO_TYPE_CLK_ENABLE:
+ *func = "clk-enable";
+- *polarity = GPIO_ACTIVE_HIGH;
++ *gpio_flags = GPIO_ACTIVE_HIGH;
+ break;
+ case INT3472_GPIO_TYPE_PRIVACY_LED:
+ *func = "privacy-led";
+- *polarity = GPIO_ACTIVE_HIGH;
++ *gpio_flags = GPIO_ACTIVE_HIGH;
+ break;
+ case INT3472_GPIO_TYPE_POWER_ENABLE:
+ *func = "power-enable";
+- *polarity = GPIO_ACTIVE_HIGH;
++ *gpio_flags = GPIO_ACTIVE_HIGH;
+ break;
+ default:
+ *func = "unknown";
+- *polarity = GPIO_ACTIVE_HIGH;
++ *gpio_flags = GPIO_ACTIVE_HIGH;
+ break;
+ }
+ }
+@@ -194,7 +239,7 @@ static int skl_int3472_handle_gpio_resources(struct acpi_resource *ares,
+ struct gpio_desc *gpio;
+ const char *err_msg;
+ const char *func;
+- u32 polarity;
++ unsigned long gpio_flags;
+ int ret;
+
+ if (!acpi_gpio_get_io_resource(ares, &agpio))
+@@ -217,7 +262,7 @@ static int skl_int3472_handle_gpio_resources(struct acpi_resource *ares,
+
+ type = FIELD_GET(INT3472_GPIO_DSM_TYPE, obj->integer.value);
+
+- int3472_get_func_and_polarity(type, &func, &polarity);
++ int3472_get_func_and_polarity(int3472->sensor, &type, &func, &gpio_flags);
+
+ pin = FIELD_GET(INT3472_GPIO_DSM_PIN, obj->integer.value);
+ if (pin != agpio->pin_table[0])
+@@ -227,16 +272,16 @@ static int skl_int3472_handle_gpio_resources(struct acpi_resource *ares,
+
+ active_value = FIELD_GET(INT3472_GPIO_DSM_SENSOR_ON_VAL, obj->integer.value);
+ if (!active_value)
+- polarity ^= GPIO_ACTIVE_LOW;
++ gpio_flags ^= GPIO_ACTIVE_LOW;
+
+ dev_dbg(int3472->dev, "%s %s pin %d active-%s\n", func,
+ agpio->resource_source.string_ptr, agpio->pin_table[0],
+- str_high_low(polarity == GPIO_ACTIVE_HIGH));
++ str_high_low(gpio_flags == GPIO_ACTIVE_HIGH));
+
+ switch (type) {
+ case INT3472_GPIO_TYPE_RESET:
+ case INT3472_GPIO_TYPE_POWERDOWN:
+- ret = skl_int3472_map_gpio_to_sensor(int3472, agpio, func, polarity);
++ ret = skl_int3472_map_gpio_to_sensor(int3472, agpio, func, gpio_flags);
+ if (ret)
+ err_msg = "Failed to map GPIO pin to sensor\n";
+
+@@ -244,7 +289,7 @@ static int skl_int3472_handle_gpio_resources(struct acpi_resource *ares,
+ case INT3472_GPIO_TYPE_CLK_ENABLE:
+ case INT3472_GPIO_TYPE_PRIVACY_LED:
+ case INT3472_GPIO_TYPE_POWER_ENABLE:
+- gpio = skl_int3472_gpiod_get_from_temp_lookup(int3472, agpio, func, polarity);
++ gpio = skl_int3472_gpiod_get_from_temp_lookup(int3472, agpio, func, gpio_flags);
+ if (IS_ERR(gpio)) {
+ ret = PTR_ERR(gpio);
+ err_msg = "Failed to get GPIO\n";
+diff --git a/drivers/platform/x86/intel/pmc/core.c b/drivers/platform/x86/intel/pmc/core.c
+index 4e9c8c96c8ccee..257c03c59fd958 100644
+--- a/drivers/platform/x86/intel/pmc/core.c
++++ b/drivers/platform/x86/intel/pmc/core.c
+@@ -625,8 +625,8 @@ static u32 convert_ltr_scale(u32 val)
+ static int pmc_core_ltr_show(struct seq_file *s, void *unused)
+ {
+ struct pmc_dev *pmcdev = s->private;
+- u64 decoded_snoop_ltr, decoded_non_snoop_ltr;
+- u32 ltr_raw_data, scale, val;
++ u64 decoded_snoop_ltr, decoded_non_snoop_ltr, val;
++ u32 ltr_raw_data, scale;
+ u16 snoop_ltr, nonsnoop_ltr;
+ unsigned int i, index, ltr_index = 0;
+
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index 84dcd7da7319e3..a3c73abb00f21e 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -7883,6 +7883,7 @@ static struct ibm_struct volume_driver_data = {
+
+ #define FAN_NS_CTRL_STATUS BIT(2) /* Bit which determines control is enabled or not */
+ #define FAN_NS_CTRL BIT(4) /* Bit which determines control is by host or EC */
++#define FAN_CLOCK_TPM (22500*60) /* Ticks per minute for a 22.5 kHz clock */
+
+ enum { /* Fan control constants */
+ fan_status_offset = 0x2f, /* EC register 0x2f */
+@@ -7938,6 +7939,7 @@ static int fan_watchdog_maxinterval;
+
+ static bool fan_with_ns_addr;
+ static bool ecfw_with_fan_dec_rpm;
++static bool fan_speed_in_tpr;
+
+ static struct mutex fan_mutex;
+
+@@ -8140,8 +8142,11 @@ static int fan_get_speed(unsigned int *speed)
+ !acpi_ec_read(fan_rpm_offset + 1, &hi)))
+ return -EIO;
+
+- if (likely(speed))
++ if (likely(speed)) {
+ *speed = (hi << 8) | lo;
++ if (fan_speed_in_tpr && *speed != 0)
++ *speed = FAN_CLOCK_TPM / *speed;
++ }
+ break;
+ case TPACPI_FAN_RD_TPEC_NS:
+ if (!acpi_ec_read(fan_rpm_status_ns, &lo))
+@@ -8174,8 +8179,11 @@ static int fan2_get_speed(unsigned int *speed)
+ if (rc)
+ return -EIO;
+
+- if (likely(speed))
++ if (likely(speed)) {
+ *speed = (hi << 8) | lo;
++ if (fan_speed_in_tpr && *speed != 0)
++ *speed = FAN_CLOCK_TPM / *speed;
++ }
+ break;
+
+ case TPACPI_FAN_RD_TPEC_NS:
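For the TPR quirk the arithmetic is: the EC reports the number of ticks of a 22.5 kHz clock per fan revolution, so RPM = FAN_CLOCK_TPM / ticks, with FAN_CLOCK_TPM = 22500 * 60 = 1,350,000 ticks per minute. A reading of 450 ticks per revolution, for example, converts to 1,350,000 / 450 = 3,000 RPM, and the *speed != 0 guard avoids dividing by zero while the fan is stopped.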
+@@ -8786,6 +8794,7 @@ static const struct attribute_group fan_driver_attr_group = {
+ #define TPACPI_FAN_NOFAN 0x0008 /* no fan available */
+ #define TPACPI_FAN_NS 0x0010 /* For EC with non-Standard register addresses */
+ #define TPACPI_FAN_DECRPM 0x0020 /* For ECFW's with RPM in register as decimal */
++#define TPACPI_FAN_TPR 0x0040 /* Fan speed is in Ticks Per Revolution */
+
+ static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
+ TPACPI_QEC_IBM('1', 'Y', TPACPI_FAN_Q1),
+@@ -8815,6 +8824,7 @@ static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
+ TPACPI_Q_LNV3('R', '0', 'V', TPACPI_FAN_NS), /* 11e Gen5 KL-Y */
+ TPACPI_Q_LNV3('N', '1', 'O', TPACPI_FAN_NOFAN), /* X1 Tablet (2nd gen) */
+ TPACPI_Q_LNV3('R', '0', 'Q', TPACPI_FAN_DECRPM),/* L480 */
++ TPACPI_Q_LNV('8', 'F', TPACPI_FAN_TPR), /* ThinkPad x120e */
+ };
+
+ static int __init fan_init(struct ibm_init_struct *iibm)
+@@ -8885,6 +8895,8 @@ static int __init fan_init(struct ibm_init_struct *iibm)
+
+ if (quirks & TPACPI_FAN_Q1)
+ fan_quirk1_setup();
++ if (quirks & TPACPI_FAN_TPR)
++ fan_speed_in_tpr = true;
+ /* Try and probe the 2nd fan */
+ tp_features.second_fan = 1; /* needed for get_speed to work */
+ res = fan2_get_speed(&speed);
+@@ -10318,6 +10330,10 @@ static struct ibm_struct proxsensor_driver_data = {
+ #define DYTC_MODE_PSC_BALANCE 5 /* Default mode aka balanced */
+ #define DYTC_MODE_PSC_PERFORM 7 /* High power mode aka performance */
+
++#define DYTC_MODE_PSCV9_LOWPOWER 1 /* Low power mode */
++#define DYTC_MODE_PSCV9_BALANCE 3 /* Default mode aka balanced */
++#define DYTC_MODE_PSCV9_PERFORM 4 /* High power mode aka performance */
++
+ #define DYTC_ERR_MASK 0xF /* Bits 0-3 in cmd result are the error result */
+ #define DYTC_ERR_SUCCESS 1 /* CMD completed successful */
+
+@@ -10338,6 +10354,10 @@ static int dytc_capabilities;
+ static bool dytc_mmc_get_available;
+ static int profile_force;
+
++static int platform_psc_profile_lowpower = DYTC_MODE_PSC_LOWPOWER;
++static int platform_psc_profile_balanced = DYTC_MODE_PSC_BALANCE;
++static int platform_psc_profile_performance = DYTC_MODE_PSC_PERFORM;
++
+ static int convert_dytc_to_profile(int funcmode, int dytcmode,
+ enum platform_profile_option *profile)
+ {
+@@ -10359,19 +10379,15 @@ static int convert_dytc_to_profile(int funcmode, int dytcmode,
+ }
+ return 0;
+ case DYTC_FUNCTION_PSC:
+- switch (dytcmode) {
+- case DYTC_MODE_PSC_LOWPOWER:
++ if (dytcmode == platform_psc_profile_lowpower)
+ *profile = PLATFORM_PROFILE_LOW_POWER;
+- break;
+- case DYTC_MODE_PSC_BALANCE:
++ else if (dytcmode == platform_psc_profile_balanced)
+ *profile = PLATFORM_PROFILE_BALANCED;
+- break;
+- case DYTC_MODE_PSC_PERFORM:
++ else if (dytcmode == platform_psc_profile_performance)
+ *profile = PLATFORM_PROFILE_PERFORMANCE;
+- break;
+- default: /* Unknown mode */
++ else
+ return -EINVAL;
+- }
++
+ return 0;
+ case DYTC_FUNCTION_AMT:
+ /* For now return balanced. It's the closest we have to 'auto' */
+@@ -10392,19 +10408,19 @@ static int convert_profile_to_dytc(enum platform_profile_option profile, int *pe
+ if (dytc_capabilities & BIT(DYTC_FC_MMC))
+ *perfmode = DYTC_MODE_MMC_LOWPOWER;
+ else if (dytc_capabilities & BIT(DYTC_FC_PSC))
+- *perfmode = DYTC_MODE_PSC_LOWPOWER;
++ *perfmode = platform_psc_profile_lowpower;
+ break;
+ case PLATFORM_PROFILE_BALANCED:
+ if (dytc_capabilities & BIT(DYTC_FC_MMC))
+ *perfmode = DYTC_MODE_MMC_BALANCE;
+ else if (dytc_capabilities & BIT(DYTC_FC_PSC))
+- *perfmode = DYTC_MODE_PSC_BALANCE;
++ *perfmode = platform_psc_profile_balanced;
+ break;
+ case PLATFORM_PROFILE_PERFORMANCE:
+ if (dytc_capabilities & BIT(DYTC_FC_MMC))
+ *perfmode = DYTC_MODE_MMC_PERFORM;
+ else if (dytc_capabilities & BIT(DYTC_FC_PSC))
+- *perfmode = DYTC_MODE_PSC_PERFORM;
++ *perfmode = platform_psc_profile_performance;
+ break;
+ default: /* Unknown profile */
+ return -EOPNOTSUPP;
+@@ -10593,6 +10609,7 @@ static int tpacpi_dytc_profile_init(struct ibm_init_struct *iibm)
+ if (output & BIT(DYTC_QUERY_ENABLE_BIT))
+ dytc_version = (output >> DYTC_QUERY_REV_BIT) & 0xF;
+
++ dbg_printk(TPACPI_DBG_INIT, "DYTC version %d\n", dytc_version);
+ /* Check DYTC is enabled and supports mode setting */
+ if (dytc_version < 5)
+ return -ENODEV;
+@@ -10631,6 +10648,11 @@ static int tpacpi_dytc_profile_init(struct ibm_init_struct *iibm)
+ }
+ } else if (dytc_capabilities & BIT(DYTC_FC_PSC)) { /* PSC MODE */
+ pr_debug("PSC is supported\n");
++ if (dytc_version >= 9) { /* update profiles for DYTC 9 and up */
++ platform_psc_profile_lowpower = DYTC_MODE_PSCV9_LOWPOWER;
++ platform_psc_profile_balanced = DYTC_MODE_PSCV9_BALANCE;
++ platform_psc_profile_performance = DYTC_MODE_PSCV9_PERFORM;
++ }
+ } else {
+ dbg_printk(TPACPI_DBG_INIT, "No DYTC support available\n");
+ return -ENODEV;
+diff --git a/drivers/powercap/powercap_sys.c b/drivers/powercap/powercap_sys.c
+index 52c32dcbf7d846..4112a009733826 100644
+--- a/drivers/powercap/powercap_sys.c
++++ b/drivers/powercap/powercap_sys.c
+@@ -627,8 +627,7 @@ struct powercap_control_type *powercap_register_control_type(
+ dev_set_name(&control_type->dev, "%s", name);
+ result = device_register(&control_type->dev);
+ if (result) {
+- if (control_type->allocated)
+- kfree(control_type);
++ put_device(&control_type->dev);
+ return ERR_PTR(result);
+ }
+ idr_init(&control_type->idr);
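Once device_register() has been called, the embedded kobject holds a reference and the device name may already be allocated, so the documented error path is put_device(), which drops the reference and lets the release() callback free everything exactly once; a direct kfree() bypasses the refcount and risks a leak or double free. A minimal sketch:

#include <linux/device.h>

/* After device_register() fails, the embedded kobject still holds a
 * reference, so put_device() (not kfree()) must run so that the
 * release() callback frees the object exactly once.
 */
static int my_register(struct device *dev)
{
	int ret;

	ret = device_register(dev);
	if (ret) {
		put_device(dev);	/* drops the ref; release() frees */
		return ret;
	}
	return 0;
}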
+diff --git a/drivers/s390/cio/chp.c b/drivers/s390/cio/chp.c
+index a07bbecba61cd4..0c5bda060249e1 100644
+--- a/drivers/s390/cio/chp.c
++++ b/drivers/s390/cio/chp.c
+@@ -682,7 +682,8 @@ static int info_update(void)
+ if (time_after(jiffies, chp_info_expires)) {
+ /* Data is too old, update. */
+ rc = sclp_chp_read_info(&chp_info);
+- chp_info_expires = jiffies + CHP_INFO_UPDATE_INTERVAL ;
++ if (!rc)
++ chp_info_expires = jiffies + CHP_INFO_UPDATE_INTERVAL;
+ }
+ mutex_unlock(&info_lock);
+
+diff --git a/drivers/scsi/qla1280.c b/drivers/scsi/qla1280.c
+index 8958547ac111ac..fed07b1460702a 100644
+--- a/drivers/scsi/qla1280.c
++++ b/drivers/scsi/qla1280.c
+@@ -2867,7 +2867,7 @@ qla1280_64bit_start_scsi(struct scsi_qla_host *ha, struct srb * sp)
+ dprintk(3, "S/G Segment phys_addr=%x %x, len=0x%x\n",
+ cpu_to_le32(upper_32_bits(dma_handle)),
+ cpu_to_le32(lower_32_bits(dma_handle)),
+- cpu_to_le32(sg_dma_len(sg_next(s))));
++ cpu_to_le32(sg_dma_len(s)));
+ remseg--;
+ }
+ dprintk(5, "qla1280_64bit_start_scsi: Scatter/gather "
+diff --git a/drivers/scsi/scsi_scan.c b/drivers/scsi/scsi_scan.c
+index 042329b74c6e68..fe08af4dcb67cf 100644
+--- a/drivers/scsi/scsi_scan.c
++++ b/drivers/scsi/scsi_scan.c
+@@ -245,7 +245,7 @@ static int scsi_realloc_sdev_budget_map(struct scsi_device *sdev,
+ }
+ ret = sbitmap_init_node(&sdev->budget_map,
+ scsi_device_max_queue_depth(sdev),
+- new_shift, GFP_KERNEL,
++ new_shift, GFP_NOIO,
+ sdev->request_queue->node, false, true);
+ if (!ret)
+ sbitmap_resize(&sdev->budget_map, depth);
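The budget-map reallocation can run while the device is servicing I/O, and a GFP_KERNEL allocation may enter reclaim and issue new I/O, potentially against that same device, deadlocking. GFP_NOIO still allows direct reclaim but forbids the allocator from starting I/O to make progress. A sketch of the rule of thumb:

#include <linux/slab.h>

/* In any path reachable while servicing I/O, allocate with GFP_NOIO so
 * memory reclaim cannot issue fresh I/O against the device the caller
 * is working on. (Scoped memalloc_noio_save()/restore() is an
 * alternative when the constraint spans many allocations.)
 */
static void *alloc_in_io_path(size_t len)
{
	return kmalloc(len, GFP_NOIO);
}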
+diff --git a/drivers/spi/spi-microchip-core.c b/drivers/spi/spi-microchip-core.c
+index 7c1a9a9853733e..92b63a7f20415c 100644
+--- a/drivers/spi/spi-microchip-core.c
++++ b/drivers/spi/spi-microchip-core.c
+@@ -70,8 +70,7 @@
+ #define INT_RX_CHANNEL_OVERFLOW BIT(2)
+ #define INT_TX_CHANNEL_UNDERRUN BIT(3)
+
+-#define INT_ENABLE_MASK (CONTROL_RX_DATA_INT | CONTROL_TX_DATA_INT | \
+- CONTROL_RX_OVER_INT | CONTROL_TX_UNDER_INT)
++#define INT_ENABLE_MASK (CONTROL_RX_OVER_INT | CONTROL_TX_UNDER_INT)
+
+ #define REG_CONTROL (0x00)
+ #define REG_FRAME_SIZE (0x04)
+@@ -133,10 +132,15 @@ static inline void mchp_corespi_disable(struct mchp_corespi *spi)
+ mchp_corespi_write(spi, REG_CONTROL, control);
+ }
+
+-static inline void mchp_corespi_read_fifo(struct mchp_corespi *spi)
++static inline void mchp_corespi_read_fifo(struct mchp_corespi *spi, int fifo_max)
+ {
+- while (spi->rx_len >= spi->n_bytes && !(mchp_corespi_read(spi, REG_STATUS) & STATUS_RXFIFO_EMPTY)) {
+- u32 data = mchp_corespi_read(spi, REG_RX_DATA);
++ for (int i = 0; i < fifo_max; i++) {
++ u32 data;
++
++ while (mchp_corespi_read(spi, REG_STATUS) & STATUS_RXFIFO_EMPTY)
++ ;
++
++ data = mchp_corespi_read(spi, REG_RX_DATA);
+
+ spi->rx_len -= spi->n_bytes;
+
+@@ -211,11 +215,10 @@ static inline void mchp_corespi_set_xfer_size(struct mchp_corespi *spi, int len)
+ mchp_corespi_write(spi, REG_FRAMESUP, len);
+ }
+
+-static inline void mchp_corespi_write_fifo(struct mchp_corespi *spi)
++static inline void mchp_corespi_write_fifo(struct mchp_corespi *spi, int fifo_max)
+ {
+- int fifo_max, i = 0;
++ int i = 0;
+
+- fifo_max = DIV_ROUND_UP(min(spi->tx_len, FIFO_DEPTH), spi->n_bytes);
+ mchp_corespi_set_xfer_size(spi, fifo_max);
+
+ while ((i < fifo_max) && !(mchp_corespi_read(spi, REG_STATUS) & STATUS_TXFIFO_FULL)) {
+@@ -413,19 +416,6 @@ static irqreturn_t mchp_corespi_interrupt(int irq, void *dev_id)
+ if (intfield == 0)
+ return IRQ_NONE;
+
+- if (intfield & INT_TXDONE)
+- mchp_corespi_write(spi, REG_INT_CLEAR, INT_TXDONE);
+-
+- if (intfield & INT_RXRDY) {
+- mchp_corespi_write(spi, REG_INT_CLEAR, INT_RXRDY);
+-
+- if (spi->rx_len)
+- mchp_corespi_read_fifo(spi);
+- }
+-
+- if (!spi->rx_len && !spi->tx_len)
+- finalise = true;
+-
+ if (intfield & INT_RX_CHANNEL_OVERFLOW) {
+ mchp_corespi_write(spi, REG_INT_CLEAR, INT_RX_CHANNEL_OVERFLOW);
+ finalise = true;
+@@ -512,9 +502,14 @@ static int mchp_corespi_transfer_one(struct spi_controller *host,
+
+ mchp_corespi_write(spi, REG_SLAVE_SELECT, spi->pending_slave_select);
+
+- while (spi->tx_len)
+- mchp_corespi_write_fifo(spi);
++ while (spi->tx_len) {
++ int fifo_max = DIV_ROUND_UP(min(spi->tx_len, FIFO_DEPTH), spi->n_bytes);
++
++ mchp_corespi_write_fifo(spi, fifo_max);
++ mchp_corespi_read_fifo(spi, fifo_max);
++ }
+
++ spi_finalize_current_transfer(host);
+ return 1;
+ }
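transfer_one is restructured from "queue all TX, let interrupts drain RX" into a synchronous lock-step pump: size one chunk to the FIFO depth, write that many frames, then read exactly the same count back before queueing more, so the RX FIFO can never overflow. With the data interrupts now masked (INT_ENABLE_MASK keeps only the error conditions), the function must also finalize the transfer itself. A sketch of the pump under invented helper names:

#include <linux/math.h>
#include <linux/minmax.h>

#define FIFO_DEPTH	32	/* assumed FIFO size in bytes */

struct my_spi {
	int tx_len;	/* bytes still to send */
	int n_bytes;	/* bytes per SPI frame */
};

/* Stand-ins: the real accessors also poll TXFIFO_FULL/RXFIFO_EMPTY. */
static void my_write_fifo(struct my_spi *spi, int frames)
{
	spi->tx_len -= frames * spi->n_bytes;
}

static void my_read_fifo(struct my_spi *spi, int frames)
{
}

/* Write one FIFO-sized chunk, then drain the same number of frames,
 * so TX and RX stay in lock step and the RX FIFO cannot overflow.
 */
static void pump_transfer(struct my_spi *spi)
{
	while (spi->tx_len > 0) {
		int chunk = DIV_ROUND_UP(min(spi->tx_len, FIFO_DEPTH),
					 spi->n_bytes);

		my_write_fifo(spi, chunk);
		my_read_fifo(spi, chunk);
	}
}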
+
+diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
+index 280071be30b157..6b7ab1814c12df 100644
+--- a/drivers/thermal/cpufreq_cooling.c
++++ b/drivers/thermal/cpufreq_cooling.c
+@@ -57,8 +57,6 @@ struct time_in_idle {
+ * @max_level: maximum cooling level. One less than total number of valid
+ * cpufreq frequencies.
+ * @em: Reference on the Energy Model of the device
+- * @cdev: thermal_cooling_device pointer to keep track of the
+- * registered cooling device.
+ * @policy: cpufreq policy.
+ * @cooling_ops: cpufreq callbacks to thermal cooling device ops
+ * @idle_time: idle time stats
+diff --git a/drivers/ufs/core/ufshcd.c b/drivers/ufs/core/ufshcd.c
+index a3e95ef5eda82e..89fc0b5662919b 100644
+--- a/drivers/ufs/core/ufshcd.c
++++ b/drivers/ufs/core/ufshcd.c
+@@ -3138,8 +3138,13 @@ ufshcd_dev_cmd_completion(struct ufs_hba *hba, struct ufshcd_lrb *lrbp)
+ case UPIU_TRANSACTION_QUERY_RSP: {
+ u8 response = lrbp->ucd_rsp_ptr->header.response;
+
+- if (response == 0)
++ if (response == 0) {
+ err = ufshcd_copy_query_response(hba, lrbp);
++ } else {
++ err = -EINVAL;
++ dev_err(hba->dev, "%s: unexpected response in Query RSP: %x\n",
++ __func__, response);
++ }
+ break;
+ }
+ case UPIU_TRANSACTION_REJECT_UPIU:
+diff --git a/drivers/usb/phy/phy-generic.c b/drivers/usb/phy/phy-generic.c
+index e7d50e0a161238..aadf98f65c6084 100644
+--- a/drivers/usb/phy/phy-generic.c
++++ b/drivers/usb/phy/phy-generic.c
+@@ -212,7 +212,7 @@ int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_generic *nop)
+ if (of_property_read_u32(node, "clock-frequency", &clk_rate))
+ clk_rate = 0;
+
+- needs_clk = of_property_read_bool(node, "clocks");
++ needs_clk = of_property_present(node, "clocks");
+ }
+ nop->gpiod_reset = devm_gpiod_get_optional(dev, "reset",
+ GPIOD_ASIS);
+diff --git a/drivers/usb/serial/ftdi_sio.c b/drivers/usb/serial/ftdi_sio.c
+index c6f17d732b9581..236205ce350030 100644
+--- a/drivers/usb/serial/ftdi_sio.c
++++ b/drivers/usb/serial/ftdi_sio.c
+@@ -1079,6 +1079,20 @@ static const struct usb_device_id id_table_combined[] = {
+ .driver_info = (kernel_ulong_t)&ftdi_jtag_quirk },
+ /* GMC devices */
+ { USB_DEVICE(GMC_VID, GMC_Z216C_PID) },
++ /* Altera USB Blaster 3 */
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6022_PID, 1) },
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6025_PID, 2) },
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6026_PID, 2) },
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6026_PID, 3) },
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_6029_PID, 2) },
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602A_PID, 2) },
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602A_PID, 3) },
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602C_PID, 1) },
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602D_PID, 1) },
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602D_PID, 2) },
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 1) },
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 2) },
++ { USB_DEVICE_INTERFACE_NUMBER(ALTERA_VID, ALTERA_UB3_602E_PID, 3) },
+ { } /* Terminating entry */
+ };
+
+diff --git a/drivers/usb/serial/ftdi_sio_ids.h b/drivers/usb/serial/ftdi_sio_ids.h
+index 5ee60ba2a73cdb..52be47d684ea66 100644
+--- a/drivers/usb/serial/ftdi_sio_ids.h
++++ b/drivers/usb/serial/ftdi_sio_ids.h
+@@ -1612,3 +1612,16 @@
+ */
+ #define GMC_VID 0x1cd7
+ #define GMC_Z216C_PID 0x0217 /* GMC Z216C Adapter IR-USB */
++
++/*
++ * Altera USB Blaster 3 (http://www.altera.com).
++ */
++#define ALTERA_VID 0x09fb
++#define ALTERA_UB3_6022_PID 0x6022
++#define ALTERA_UB3_6025_PID 0x6025
++#define ALTERA_UB3_6026_PID 0x6026
++#define ALTERA_UB3_6029_PID 0x6029
++#define ALTERA_UB3_602A_PID 0x602a
++#define ALTERA_UB3_602C_PID 0x602c
++#define ALTERA_UB3_602D_PID 0x602d
++#define ALTERA_UB3_602E_PID 0x602e
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 58bd54e8c483a2..5cd26dac2069fa 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1368,13 +1368,13 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = NCTRL(0) | RSVD(1) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1075, 0xff), /* Telit FN990A (PCIe) */
+ .driver_info = RSVD(0) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990 (rmnet) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1080, 0xff), /* Telit FE990A (rmnet) */
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(2) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990 (MBIM) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1081, 0xff), /* Telit FE990A (MBIM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff), /* Telit FE990 (RNDIS) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1082, 0xff), /* Telit FE990A (RNDIS) */
+ .driver_info = NCTRL(2) | RSVD(3) },
+- { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990 (ECM) */
++ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1083, 0xff), /* Telit FE990A (ECM) */
+ .driver_info = NCTRL(0) | RSVD(1) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10a0, 0xff), /* Telit FN20C04 (rmnet) */
+ .driver_info = RSVD(0) | NCTRL(3) },
+@@ -1388,28 +1388,44 @@ static const struct usb_device_id option_ids[] = {
+ .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10aa, 0xff), /* Telit FN920C04 (MBIM) */
+ .driver_info = NCTRL(3) | RSVD(4) | RSVD(5) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x30), /* Telit FE990B (rmnet) */
++ .driver_info = NCTRL(5) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b0, 0xff, 0xff, 0x60) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x30), /* Telit FE990B (MBIM) */
++ .driver_info = NCTRL(6) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b1, 0xff, 0xff, 0x60) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x30), /* Telit FE990B (RNDIS) */
++ .driver_info = NCTRL(6) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b2, 0xff, 0xff, 0x60) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x30), /* Telit FE990B (ECM) */
++ .driver_info = NCTRL(6) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10b3, 0xff, 0xff, 0x60) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c0, 0xff), /* Telit FE910C04 (rmnet) */
+ .driver_info = RSVD(0) | NCTRL(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c4, 0xff), /* Telit FE910C04 (rmnet) */
+ .driver_info = RSVD(0) | NCTRL(3) },
+ { USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x10c8, 0xff), /* Telit FE910C04 (rmnet) */
+ .driver_info = RSVD(0) | NCTRL(2) | RSVD(3) | RSVD(4) },
+- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x60) }, /* Telit FN990B (rmnet) */
+- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x40) },
+- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d0, 0x30),
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x30), /* Telit FN990B (rmnet) */
+ .driver_info = NCTRL(5) },
+- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x60) }, /* Telit FN990B (MBIM) */
+- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x40) },
+- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d1, 0x30),
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d0, 0xff, 0xff, 0x60) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x30), /* Telit FN990B (MBIM) */
+ .driver_info = NCTRL(6) },
+- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x60) }, /* Telit FN990B (RNDIS) */
+- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x40) },
+- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d2, 0x30),
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d1, 0xff, 0xff, 0x60) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x30), /* Telit FN990B (RNDIS) */
+ .driver_info = NCTRL(6) },
+- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x60) }, /* Telit FN990B (ECM) */
+- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x40) },
+- { USB_DEVICE_INTERFACE_PROTOCOL(TELIT_VENDOR_ID, 0x10d3, 0x30),
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d2, 0xff, 0xff, 0x60) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x30), /* Telit FN990B (ECM) */
+ .driver_info = NCTRL(6) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x40) },
++ { USB_DEVICE_AND_INTERFACE_INFO(TELIT_VENDOR_ID, 0x10d3, 0xff, 0xff, 0x60) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910),
+ .driver_info = NCTRL(0) | RSVD(1) | RSVD(3) },
+ { USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_ME910_DUAL_MODEM),
+diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
+index 9ac25d08f473e8..63612faeab7271 100644
+--- a/drivers/vhost/vhost.c
++++ b/drivers/vhost/vhost.c
+@@ -666,7 +666,7 @@ static struct vhost_worker *vhost_worker_create(struct vhost_dev *dev)
+
+ vtsk = vhost_task_create(vhost_run_work_list, vhost_worker_killed,
+ worker, name);
+- if (!vtsk)
++ if (IS_ERR(vtsk))
+ goto free_worker;
+
+ mutex_init(&worker->mutex);
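The check above flips from !vtsk to IS_ERR(vtsk) because vhost_task_create() now returns an ERR_PTR-encoded errno instead of NULL (see the kernel/vhost_task.c hunk later in this patch). A minimal standalone sketch of that convention, with the <linux/err.h> helpers re-implemented here purely for illustration:

#include <stdio.h>

#define MAX_ERRNO	4095

/* Error codes live at the very top of the pointer range, so one return
 * value can carry either a valid pointer or a negative errno. */
static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

int main(void)
{
	void *vtsk = ERR_PTR(-12 /* ENOMEM */);

	/* Mirrors the fixed check in vhost_worker_create(). */
	if (IS_ERR(vtsk))
		printf("create failed: %ld\n", PTR_ERR(vtsk));
	return 0;
}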
+diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c
+index 7fdb5edd7e2e8d..75338ffc703fb5 100644
+--- a/drivers/video/fbdev/hyperv_fb.c
++++ b/drivers/video/fbdev/hyperv_fb.c
+@@ -282,6 +282,8 @@ static uint screen_depth;
+ static uint screen_fb_size;
+ static uint dio_fb_size; /* FB size for deferred IO */
+
++static void hvfb_putmem(struct fb_info *info);
++
+ /* Send message to Hyper-V host */
+ static inline int synthvid_send(struct hv_device *hdev,
+ struct synthvid_msg *msg)
+@@ -862,6 +864,17 @@ static void hvfb_ops_damage_area(struct fb_info *info, u32 x, u32 y, u32 width,
+ hvfb_ondemand_refresh_throttle(par, x, y, width, height);
+ }
+
++/*
++ * fb_ops.fb_destroy is called by the last put_fb_info() call at the end
++ * of unregister_framebuffer() or fb_release(). Do any framebuffer-related
++ * cleanup here.
++ */
++static void hvfb_destroy(struct fb_info *info)
++{
++ hvfb_putmem(info);
++ framebuffer_release(info);
++}
++
+ /*
+ * TODO: GEN1 codepaths allocate from system or DMA-able memory. Fix the
+ * driver to use the _SYSMEM_ or _DMAMEM_ helpers in these cases.
+@@ -877,6 +890,7 @@ static const struct fb_ops hvfb_ops = {
+ .fb_set_par = hvfb_set_par,
+ .fb_setcolreg = hvfb_setcolreg,
+ .fb_blank = hvfb_blank,
++ .fb_destroy = hvfb_destroy,
+ };
+
+ /* Get options from kernel paramenter "video=" */
+@@ -952,7 +966,7 @@ static phys_addr_t hvfb_get_phymem(struct hv_device *hdev,
+ }
+
+ /* Release contiguous physical memory */
+-static void hvfb_release_phymem(struct hv_device *hdev,
++static void hvfb_release_phymem(struct device *device,
+ phys_addr_t paddr, unsigned int size)
+ {
+ unsigned int order = get_order(size);
+@@ -960,7 +974,7 @@ static void hvfb_release_phymem(struct hv_device *hdev,
+ if (order <= MAX_PAGE_ORDER)
+ __free_pages(pfn_to_page(paddr >> PAGE_SHIFT), order);
+ else
+- dma_free_coherent(&hdev->device,
++ dma_free_coherent(device,
+ round_up(size, PAGE_SIZE),
+ phys_to_virt(paddr),
+ paddr);
+@@ -989,6 +1003,7 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info)
+
+ base = pci_resource_start(pdev, 0);
+ size = pci_resource_len(pdev, 0);
++ aperture_remove_conflicting_devices(base, size, KBUILD_MODNAME);
+
+ /*
+ * For Gen 1 VM, we can directly use the contiguous memory
+@@ -1010,11 +1025,21 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info)
+ goto getmem_done;
+ }
+ pr_info("Unable to allocate enough contiguous physical memory on Gen 1 VM. Using MMIO instead.\n");
++ } else {
++ aperture_remove_all_conflicting_devices(KBUILD_MODNAME);
+ }
+
+ /*
+- * Cannot use the contiguous physical memory.
+- * Allocate mmio space for framebuffer.
++ * Cannot use contiguous physical memory, so allocate MMIO space for
++ * the framebuffer. At this point in the function, conflicting devices
++ * that might have claimed the framebuffer MMIO space based on
++ * screen_info.lfb_base must have already been removed so that
++ * vmbus_allocate_mmio() does not allocate different MMIO space. If the
++ * kdump image were to be loaded using kexec_file_load(), the
++ * framebuffer location in the kdump image would be set from
++ * screen_info.lfb_base at the time that kdump is enabled. If the
++ * framebuffer has moved elsewhere, this could be the wrong location,
++ * causing kdump to hang when efifb (for example) loads.
+ */
+ dio_fb_size =
+ screen_width * screen_height * screen_depth / 8;
+@@ -1051,11 +1076,6 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info)
+ info->screen_size = dio_fb_size;
+
+ getmem_done:
+- if (base && size)
+- aperture_remove_conflicting_devices(base, size, KBUILD_MODNAME);
+- else
+- aperture_remove_all_conflicting_devices(KBUILD_MODNAME);
+-
+ if (!gen2vm)
+ pci_dev_put(pdev);
+
+@@ -1074,16 +1094,16 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info)
+ }
+
+ /* Release the framebuffer */
+-static void hvfb_putmem(struct hv_device *hdev, struct fb_info *info)
++static void hvfb_putmem(struct fb_info *info)
+ {
+ struct hvfb_par *par = info->par;
+
+ if (par->need_docopy) {
+ vfree(par->dio_vp);
+- iounmap(info->screen_base);
++ iounmap(par->mmio_vp);
+ vmbus_free_mmio(par->mem->start, screen_fb_size);
+ } else {
+- hvfb_release_phymem(hdev, info->fix.smem_start,
++ hvfb_release_phymem(info->device, info->fix.smem_start,
+ screen_fb_size);
+ }
+
+@@ -1172,7 +1192,7 @@ static int hvfb_probe(struct hv_device *hdev,
+ if (ret)
+ goto error;
+
+- ret = register_framebuffer(info);
++ ret = devm_register_framebuffer(&hdev->device, info);
+ if (ret) {
+ pr_err("Unable to register framebuffer\n");
+ goto error;
+@@ -1197,7 +1217,7 @@ static int hvfb_probe(struct hv_device *hdev,
+
+ error:
+ fb_deferred_io_cleanup(info);
+- hvfb_putmem(hdev, info);
++ hvfb_putmem(info);
+ error2:
+ vmbus_close(hdev->channel);
+ error1:
+@@ -1220,14 +1240,10 @@ static void hvfb_remove(struct hv_device *hdev)
+
+ fb_deferred_io_cleanup(info);
+
+- unregister_framebuffer(info);
+ cancel_delayed_work_sync(&par->dwork);
+
+ vmbus_close(hdev->channel);
+ hv_set_drvdata(hdev, NULL);
+-
+- hvfb_putmem(hdev, info);
+- framebuffer_release(info);
+ }
+
+ static int hvfb_suspend(struct hv_device *hdev)
+diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
+index 26c62e0d34e98b..1f65795cf5d7a2 100644
+--- a/drivers/xen/swiotlb-xen.c
++++ b/drivers/xen/swiotlb-xen.c
+@@ -113,7 +113,7 @@ static struct io_tlb_pool *xen_swiotlb_find_pool(struct device *dev,
+ }
+
+ #ifdef CONFIG_X86
+-int xen_swiotlb_fixup(void *buf, unsigned long nslabs)
++int __init xen_swiotlb_fixup(void *buf, unsigned long nslabs)
+ {
+ int rc;
+ unsigned int order = get_order(IO_TLB_SEGSIZE << IO_TLB_SHIFT);
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index 660a5b9c08e9e4..6551fb003eed25 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -526,8 +526,6 @@ static void end_bbio_data_read(struct btrfs_bio *bbio)
+ u64 end;
+ u32 len;
+
+- /* For now only order 0 folios are supported for data. */
+- ASSERT(folio_order(folio) == 0);
+ btrfs_debug(fs_info,
+ "%s: bi_sector=%llu, err=%d, mirror=%u",
+ __func__, bio->bi_iter.bi_sector, bio->bi_status,
+@@ -555,7 +553,6 @@ static void end_bbio_data_read(struct btrfs_bio *bbio)
+
+ if (likely(uptodate)) {
+ loff_t i_size = i_size_read(inode);
+- pgoff_t end_index = i_size >> folio_shift(folio);
+
+ /*
+ * Zero out the remaining part if this range straddles
+@@ -564,9 +561,11 @@ static void end_bbio_data_read(struct btrfs_bio *bbio)
+ * Here we should only zero the range inside the folio,
+ * not touch anything else.
+ *
+- * NOTE: i_size is exclusive while end is inclusive.
++ * NOTE: i_size is exclusive while end is inclusive and
++ * folio_contains() takes PAGE_SIZE units.
+ */
+- if (folio_index(folio) == end_index && i_size <= end) {
++ if (folio_contains(folio, i_size >> PAGE_SHIFT) &&
++ i_size <= end) {
+ u32 zero_start = max(offset_in_folio(folio, i_size),
+ offset_in_folio(folio, start));
+ u32 zero_len = offset_in_folio(folio, end) + 1 -
+@@ -960,7 +959,7 @@ static int btrfs_do_readpage(struct folio *folio, struct extent_map **em_cached,
+ return ret;
+ }
+
+- if (folio->index == last_byte >> folio_shift(folio)) {
++ if (folio_contains(folio, last_byte >> PAGE_SHIFT)) {
+ size_t zero_offset = offset_in_folio(folio, last_byte);
+
+ if (zero_offset) {
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index fa9025c05d4e29..e9f58cdeeb5f3c 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1899,11 +1899,7 @@ int btrfs_qgroup_cleanup_dropped_subvolume(struct btrfs_fs_info *fs_info, u64 su
+ * Commit current transaction to make sure all the rfer/excl numbers
+ * get updated.
+ */
+- trans = btrfs_start_transaction(fs_info->quota_root, 0);
+- if (IS_ERR(trans))
+- return PTR_ERR(trans);
+-
+- ret = btrfs_commit_transaction(trans);
++ ret = btrfs_commit_current_transaction(fs_info->quota_root);
+ if (ret < 0)
+ return ret;
+
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index 2e62e62c07f836..bd6e675023c622 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -1632,7 +1632,7 @@ static const char *fuse_get_link(struct dentry *dentry, struct inode *inode,
+ goto out_err;
+
+ if (fc->cache_symlinks)
+- return page_get_link(dentry, inode, callback);
++ return page_get_link_raw(dentry, inode, callback);
+
+ err = -ECHILD;
+ if (!dentry)
+diff --git a/fs/namei.c b/fs/namei.c
+index 4a4a22a08ac20d..6795600c5738a5 100644
+--- a/fs/namei.c
++++ b/fs/namei.c
+@@ -5300,10 +5300,9 @@ const char *vfs_get_link(struct dentry *dentry, struct delayed_call *done)
+ EXPORT_SYMBOL(vfs_get_link);
+
+ /* get the link contents into pagecache */
+-const char *page_get_link(struct dentry *dentry, struct inode *inode,
+- struct delayed_call *callback)
++static char *__page_get_link(struct dentry *dentry, struct inode *inode,
++ struct delayed_call *callback)
+ {
+- char *kaddr;
+ struct page *page;
+ struct address_space *mapping = inode->i_mapping;
+
+@@ -5322,8 +5321,23 @@ const char *page_get_link(struct dentry *dentry, struct inode *inode,
+ }
+ set_delayed_call(callback, page_put_link, page);
+ BUG_ON(mapping_gfp_mask(mapping) & __GFP_HIGHMEM);
+- kaddr = page_address(page);
+- nd_terminate_link(kaddr, inode->i_size, PAGE_SIZE - 1);
++ return page_address(page);
++}
++
++const char *page_get_link_raw(struct dentry *dentry, struct inode *inode,
++ struct delayed_call *callback)
++{
++ return __page_get_link(dentry, inode, callback);
++}
++EXPORT_SYMBOL_GPL(page_get_link_raw);
++
++const char *page_get_link(struct dentry *dentry, struct inode *inode,
++ struct delayed_call *callback)
++{
++ char *kaddr = __page_get_link(dentry, inode, callback);
++
++ if (!IS_ERR(kaddr))
++ nd_terminate_link(kaddr, inode->i_size, PAGE_SIZE - 1);
+ return kaddr;
+ }
+
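The refactor above is a common shape: one static helper does the shared work, and two thin exported wrappers differ only in post-processing. nd_terminate_link() writes a NUL into the page for page_get_link(), while the new page_get_link_raw() hands the bytes back untouched, which is what the fuse hunk earlier in this patch switches to. A reduced standalone sketch of the same structure:

#include <stdio.h>

/* Shared buffer stands in for the pagecache page. */
static char page[16] = "linkXXXXXXXX";

static char *__get_link(void)
{
	return page;		/* common fetch path */
}

/* Raw variant: caller sees the bytes exactly as stored. */
static char *get_link_raw(void)
{
	return __get_link();
}

/* Terminated variant: post-processes, like nd_terminate_link(). */
static char *get_link(void)
{
	char *p = __get_link();

	p[4] = '\0';
	return p;
}

int main(void)
{
	printf("raw: %.12s\n", get_link_raw());
	printf("terminated: %s\n", get_link());
	return 0;
}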
+diff --git a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c
+index 3b9461f5e712e5..eae415efae24a0 100644
+--- a/fs/netfs/read_collect.c
++++ b/fs/netfs/read_collect.c
+@@ -284,7 +284,7 @@ static bool netfs_consume_read_data(struct netfs_io_subrequest *subreq, bool was
+ netfs_trace_donate_to_deferred_next);
+ } else {
+ next = list_next_entry(subreq, rreq_link);
+- WRITE_ONCE(next->prev_donated, excess);
++ WRITE_ONCE(next->prev_donated, next->prev_donated + excess);
+ trace_netfs_donate(rreq, subreq, next, excess,
+ netfs_trace_donate_to_next);
+ }
+diff --git a/fs/smb/client/asn1.c b/fs/smb/client/asn1.c
+index b5724ef9f182f4..214a44509e7b99 100644
+--- a/fs/smb/client/asn1.c
++++ b/fs/smb/client/asn1.c
+@@ -52,6 +52,8 @@ int cifs_neg_token_init_mech_type(void *context, size_t hdrlen,
+ server->sec_kerberos = true;
+ else if (oid == OID_ntlmssp)
+ server->sec_ntlmssp = true;
++ else if (oid == OID_IAKerb)
++ server->sec_iakerb = true;
+ else {
+ char buf[50];
+
+diff --git a/fs/smb/client/cifs_spnego.c b/fs/smb/client/cifs_spnego.c
+index af7849e5974ff3..2ad067886ec3fa 100644
+--- a/fs/smb/client/cifs_spnego.c
++++ b/fs/smb/client/cifs_spnego.c
+@@ -130,11 +130,13 @@ cifs_get_spnego_key(struct cifs_ses *sesInfo,
+
+ dp = description + strlen(description);
+
+- /* for now, only sec=krb5 and sec=mskrb5 are valid */
++ /* for now, only sec=krb5 and sec=mskrb5 and iakerb are valid */
+ if (server->sec_kerberos)
+ sprintf(dp, ";sec=krb5");
+ else if (server->sec_mskerberos)
+ sprintf(dp, ";sec=mskrb5");
++ else if (server->sec_iakerb)
++ sprintf(dp, ";sec=iakerb");
+ else {
+ cifs_dbg(VFS, "unknown or missing server auth type, use krb5\n");
+ sprintf(dp, ";sec=krb5");
+diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h
+index b630beb757a44a..a8484af7a2fbc4 100644
+--- a/fs/smb/client/cifsglob.h
++++ b/fs/smb/client/cifsglob.h
+@@ -151,6 +151,7 @@ enum securityEnum {
+ NTLMv2, /* Legacy NTLM auth with NTLMv2 hash */
+ RawNTLMSSP, /* NTLMSSP without SPNEGO, NTLMv2 hash */
+ Kerberos, /* Kerberos via SPNEGO */
++ IAKerb, /* Kerberos proxy */
+ };
+
+ enum cifs_reparse_type {
+@@ -743,6 +744,7 @@ struct TCP_Server_Info {
+ bool sec_kerberosu2u; /* supports U2U Kerberos */
+ bool sec_kerberos; /* supports plain Kerberos */
+ bool sec_mskerberos; /* supports legacy MS Kerberos */
++ bool sec_iakerb; /* supports pass-through auth for Kerberos (krb5 proxy) */
+ bool large_buf; /* is current buffer large? */
+ /* use SMBD connection instead of socket */
+ bool rdma;
+@@ -2115,6 +2117,8 @@ static inline char *get_security_type_str(enum securityEnum sectype)
+ return "Kerberos";
+ case NTLMv2:
+ return "NTLMv2";
++ case IAKerb:
++ return "IAKerb";
+ default:
+ return "Unknown";
+ }
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index fb51cdf5520617..d327f31b317db9 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -1873,9 +1873,8 @@ static int match_session(struct cifs_ses *ses,
+ struct smb3_fs_context *ctx,
+ bool match_super)
+ {
+- if (ctx->sectype != Unspecified &&
+- ctx->sectype != ses->sectype)
+- return 0;
++ struct TCP_Server_Info *server = ses->server;
++ enum securityEnum ctx_sec, ses_sec;
+
+ if (!match_super && ctx->dfs_root_ses != ses->dfs_root_ses)
+ return 0;
+@@ -1887,11 +1886,20 @@ static int match_session(struct cifs_ses *ses,
+ if (ses->chan_max < ctx->max_channels)
+ return 0;
+
+- switch (ses->sectype) {
++ ctx_sec = server->ops->select_sectype(server, ctx->sectype);
++ ses_sec = server->ops->select_sectype(server, ses->sectype);
++
++ if (ctx_sec != ses_sec)
++ return 0;
++
++ switch (ctx_sec) {
++ case IAKerb:
+ case Kerberos:
+ if (!uid_eq(ctx->cred_uid, ses->cred_uid))
+ return 0;
+ break;
++ case NTLMv2:
++ case RawNTLMSSP:
+ default:
+ /* NULL username means anonymous session */
+ if (ses->user_name == NULL) {
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index 48606e2ddffdcd..f8bc1da3003781 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -164,6 +164,7 @@ const struct fs_parameter_spec smb3_fs_parameters[] = {
+ fsparam_string("username", Opt_user),
+ fsparam_string("pass", Opt_pass),
+ fsparam_string("password", Opt_pass),
++ fsparam_string("pass2", Opt_pass2),
+ fsparam_string("password2", Opt_pass2),
+ fsparam_string("ip", Opt_ip),
+ fsparam_string("addr", Opt_ip),
+@@ -1041,6 +1042,9 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ } else if (!strcmp("user", param->key) || !strcmp("username", param->key)) {
+ skip_parsing = true;
+ opt = Opt_user;
++ } else if (!strcmp("pass2", param->key) || !strcmp("password2", param->key)) {
++ skip_parsing = true;
++ opt = Opt_pass2;
+ }
+ }
+
+@@ -1250,21 +1254,21 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ }
+ break;
+ case Opt_acregmax:
+- ctx->acregmax = HZ * result.uint_32;
+- if (ctx->acregmax > CIFS_MAX_ACTIMEO) {
++ if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) {
+ cifs_errorf(fc, "acregmax too large\n");
+ goto cifs_parse_mount_err;
+ }
++ ctx->acregmax = HZ * result.uint_32;
+ break;
+ case Opt_acdirmax:
+- ctx->acdirmax = HZ * result.uint_32;
+- if (ctx->acdirmax > CIFS_MAX_ACTIMEO) {
++ if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) {
+ cifs_errorf(fc, "acdirmax too large\n");
+ goto cifs_parse_mount_err;
+ }
++ ctx->acdirmax = HZ * result.uint_32;
+ break;
+ case Opt_actimeo:
+- if (HZ * result.uint_32 > CIFS_MAX_ACTIMEO) {
++ if (result.uint_32 > CIFS_MAX_ACTIMEO / HZ) {
+ cifs_errorf(fc, "timeout too large\n");
+ goto cifs_parse_mount_err;
+ }
+@@ -1276,11 +1280,11 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ ctx->acdirmax = ctx->acregmax = HZ * result.uint_32;
+ break;
+ case Opt_closetimeo:
+- ctx->closetimeo = HZ * result.uint_32;
+- if (ctx->closetimeo > SMB3_MAX_DCLOSETIMEO) {
++ if (result.uint_32 > SMB3_MAX_DCLOSETIMEO / HZ) {
+ cifs_errorf(fc, "closetimeo too large\n");
+ goto cifs_parse_mount_err;
+ }
++ ctx->closetimeo = HZ * result.uint_32;
+ break;
+ case Opt_echo_interval:
+ ctx->echo_interval = result.uint_32;
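The three hunks above all apply the same fix: validate result.uint_32 against the limit divided by HZ before multiplying, because HZ * result.uint_32 is 32-bit arithmetic and a large mount option can wrap around and slip under a check performed only after the multiplication. A standalone demonstration of the wraparound (the cap here is illustrative, not the real CIFS_MAX_ACTIMEO):

#include <stdio.h>

#define HZ		1000U
#define MAX_ACTIMEO	(1U << 30)	/* illustrative cap only */

int main(void)
{
	unsigned int user_val = 0xFFFFFFFFu / HZ + 1;	/* large mount option */
	unsigned int product = HZ * user_val;		/* wraps silently */

	/* Old order: multiply first, then compare; the wrap defeats the cap. */
	if (product <= MAX_ACTIMEO)
		printf("post-multiply check accepted %u (wrapped to %u)\n",
		       user_val, product);

	/* Fixed order: compare against the cap divided by HZ, then multiply. */
	if (user_val > MAX_ACTIMEO / HZ)
		printf("pre-multiply check rejected %u\n", user_val);
	return 0;
}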
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index a3f0835e12be31..97151715d1a413 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1193,6 +1193,19 @@ static int reparse_info_to_fattr(struct cifs_open_info_data *data,
+ rc = server->ops->parse_reparse_point(cifs_sb,
+ full_path,
+ iov, data);
++ /*
++	 * If the reparse point was not handled but it is a
++	 * name surrogate which points to a directory, then treat
++	 * it as a new mount point. A name surrogate reparse point
++	 * represents another named entity in the system.
++ */
++ if (rc == -EOPNOTSUPP &&
++ IS_REPARSE_TAG_NAME_SURROGATE(data->reparse.tag) &&
++ (le32_to_cpu(data->fi.Attributes) & ATTR_DIRECTORY)) {
++ rc = 0;
++ cifs_create_junction_fattr(fattr, sb);
++ goto out;
++ }
+ }
+ break;
+ }
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index e56a8df23fec9a..bb246ef0458fb5 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -651,13 +651,17 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
+ case IO_REPARSE_TAG_LX_FIFO:
+ case IO_REPARSE_TAG_LX_CHR:
+ case IO_REPARSE_TAG_LX_BLK:
+- break;
++ if (le16_to_cpu(buf->ReparseDataLength) != 0) {
++ cifs_dbg(VFS, "srv returned malformed buffer for reparse point: 0x%08x\n",
++ le32_to_cpu(buf->ReparseTag));
++ return -EIO;
++ }
++ return 0;
+ default:
+ cifs_tcon_dbg(VFS | ONCE, "unhandled reparse tag: 0x%08x\n",
+ le32_to_cpu(buf->ReparseTag));
+- break;
++ return -EOPNOTSUPP;
+ }
+- return 0;
+ }
+
+ int smb2_parse_reparse_point(struct cifs_sb_info *cifs_sb,
+diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c
+index c88e9657f47a8d..95e14977baeab0 100644
+--- a/fs/smb/client/sess.c
++++ b/fs/smb/client/sess.c
+@@ -1263,12 +1263,13 @@ cifs_select_sectype(struct TCP_Server_Info *server, enum securityEnum requested)
+ switch (requested) {
+ case Kerberos:
+ case RawNTLMSSP:
++ case IAKerb:
+ return requested;
+ case Unspecified:
+ if (server->sec_ntlmssp &&
+ (global_secflags & CIFSSEC_MAY_NTLMSSP))
+ return RawNTLMSSP;
+- if ((server->sec_kerberos || server->sec_mskerberos) &&
++ if ((server->sec_kerberos || server->sec_mskerberos || server->sec_iakerb) &&
+ (global_secflags & CIFSSEC_MAY_KRB5))
+ return Kerberos;
+ fallthrough;
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 2e3f78fe9210ff..75b13175a2e781 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -1435,7 +1435,7 @@ smb2_select_sectype(struct TCP_Server_Info *server, enum securityEnum requested)
+ if (server->sec_ntlmssp &&
+ (global_secflags & CIFSSEC_MAY_NTLMSSP))
+ return RawNTLMSSP;
+- if ((server->sec_kerberos || server->sec_mskerberos) &&
++ if ((server->sec_kerberos || server->sec_mskerberos || server->sec_iakerb) &&
+ (global_secflags & CIFSSEC_MAY_KRB5))
+ return Kerberos;
+ fallthrough;
+@@ -2175,7 +2175,7 @@ SMB2_tcon(const unsigned int xid, struct cifs_ses *ses, const char *tree,
+
+ tcon_error_exit:
+ if (rsp && rsp->hdr.Status == STATUS_BAD_NETWORK_NAME)
+- cifs_tcon_dbg(VFS, "BAD_NETWORK_NAME: %s\n", tree);
++ cifs_dbg(VFS | ONCE, "BAD_NETWORK_NAME: %s\n", tree);
+ goto tcon_exit;
+ }
+
+diff --git a/fs/smb/common/smbfsctl.h b/fs/smb/common/smbfsctl.h
+index 4b379e84c46b94..3253a18ecb5cbc 100644
+--- a/fs/smb/common/smbfsctl.h
++++ b/fs/smb/common/smbfsctl.h
+@@ -159,6 +159,9 @@
+ #define IO_REPARSE_TAG_LX_CHR 0x80000025
+ #define IO_REPARSE_TAG_LX_BLK 0x80000026
+
++/* If Name Surrogate Bit is set, the file or directory represents another named entity in the system. */
++#define IS_REPARSE_TAG_NAME_SURROGATE(tag) (!!((tag) & 0x20000000))
++
+ /* fsctl flags */
+ /* If Flags is set to this value, the request is an FSCTL not ioctl request */
+ #define SMB2_0_IOCTL_IS_FSCTL 0x00000001
+diff --git a/fs/smb/server/connection.c b/fs/smb/server/connection.c
+index bf45822db5d589..ab11246ccd8a09 100644
+--- a/fs/smb/server/connection.c
++++ b/fs/smb/server/connection.c
+@@ -432,6 +432,26 @@ void ksmbd_conn_init_server_callbacks(struct ksmbd_conn_ops *ops)
+ default_conn_ops.terminate_fn = ops->terminate_fn;
+ }
+
++void ksmbd_conn_r_count_inc(struct ksmbd_conn *conn)
++{
++ atomic_inc(&conn->r_count);
++}
++
++void ksmbd_conn_r_count_dec(struct ksmbd_conn *conn)
++{
++ /*
++	 * Check the waitqueue to drop pending requests on
++	 * disconnection. waitqueue_active() is safe because the
++	 * condition is updated with atomic operations.
++ */
++ atomic_inc(&conn->refcnt);
++ if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q))
++ wake_up(&conn->r_count_q);
++
++ if (atomic_dec_and_test(&conn->refcnt))
++ kfree(conn);
++}
++
+ int ksmbd_conn_transport_init(void)
+ {
+ int ret;
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index b379ae4fdcdffa..91c2318639e766 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -168,6 +168,8 @@ int ksmbd_conn_transport_init(void);
+ void ksmbd_conn_transport_destroy(void);
+ void ksmbd_conn_lock(struct ksmbd_conn *conn);
+ void ksmbd_conn_unlock(struct ksmbd_conn *conn);
++void ksmbd_conn_r_count_inc(struct ksmbd_conn *conn);
++void ksmbd_conn_r_count_dec(struct ksmbd_conn *conn);
+
+ /*
+ * WARNING
+diff --git a/fs/smb/server/ksmbd_work.c b/fs/smb/server/ksmbd_work.c
+index d7c676c151e209..544d8ccd29b0a0 100644
+--- a/fs/smb/server/ksmbd_work.c
++++ b/fs/smb/server/ksmbd_work.c
+@@ -26,7 +26,6 @@ struct ksmbd_work *ksmbd_alloc_work_struct(void)
+ INIT_LIST_HEAD(&work->request_entry);
+ INIT_LIST_HEAD(&work->async_request_entry);
+ INIT_LIST_HEAD(&work->fp_entry);
+- INIT_LIST_HEAD(&work->interim_entry);
+ INIT_LIST_HEAD(&work->aux_read_list);
+ work->iov_alloc_cnt = 4;
+ work->iov = kcalloc(work->iov_alloc_cnt, sizeof(struct kvec),
+@@ -56,8 +55,6 @@ void ksmbd_free_work_struct(struct ksmbd_work *work)
+ kfree(work->tr_buf);
+ kvfree(work->request_buf);
+ kfree(work->iov);
+- if (!list_empty(&work->interim_entry))
+- list_del(&work->interim_entry);
+
+ if (work->async_id)
+ ksmbd_release_id(&work->conn->async_ida, work->async_id);
+diff --git a/fs/smb/server/ksmbd_work.h b/fs/smb/server/ksmbd_work.h
+index 8ca2c813246e61..d36393ff8310cd 100644
+--- a/fs/smb/server/ksmbd_work.h
++++ b/fs/smb/server/ksmbd_work.h
+@@ -89,7 +89,6 @@ struct ksmbd_work {
+ /* List head at conn->async_requests */
+ struct list_head async_request_entry;
+ struct list_head fp_entry;
+- struct list_head interim_entry;
+ };
+
+ /**
+diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c
+index 4142c7ad5fa910..592fe665973a87 100644
+--- a/fs/smb/server/oplock.c
++++ b/fs/smb/server/oplock.c
+@@ -46,7 +46,6 @@ static struct oplock_info *alloc_opinfo(struct ksmbd_work *work,
+ opinfo->fid = id;
+ opinfo->Tid = Tid;
+ INIT_LIST_HEAD(&opinfo->op_entry);
+- INIT_LIST_HEAD(&opinfo->interim_list);
+ init_waitqueue_head(&opinfo->oplock_q);
+ init_waitqueue_head(&opinfo->oplock_brk);
+ atomic_set(&opinfo->refcount, 1);
+@@ -635,6 +634,7 @@ static void __smb2_oplock_break_noti(struct work_struct *wk)
+ {
+ struct smb2_oplock_break *rsp = NULL;
+ struct ksmbd_work *work = container_of(wk, struct ksmbd_work, work);
++ struct ksmbd_conn *conn = work->conn;
+ struct oplock_break_info *br_info = work->request_buf;
+ struct smb2_hdr *rsp_hdr;
+ struct ksmbd_file *fp;
+@@ -690,6 +690,7 @@ static void __smb2_oplock_break_noti(struct work_struct *wk)
+
+ out:
+ ksmbd_free_work_struct(work);
++ ksmbd_conn_r_count_dec(conn);
+ }
+
+ /**
+@@ -724,6 +725,7 @@ static int smb2_oplock_break_noti(struct oplock_info *opinfo)
+ work->sess = opinfo->sess;
+
+ if (opinfo->op_state == OPLOCK_ACK_WAIT) {
++ ksmbd_conn_r_count_inc(conn);
+ INIT_WORK(&work->work, __smb2_oplock_break_noti);
+ ksmbd_queue_work(work);
+
+@@ -745,6 +747,7 @@ static void __smb2_lease_break_noti(struct work_struct *wk)
+ {
+ struct smb2_lease_break *rsp = NULL;
+ struct ksmbd_work *work = container_of(wk, struct ksmbd_work, work);
++ struct ksmbd_conn *conn = work->conn;
+ struct lease_break_info *br_info = work->request_buf;
+ struct smb2_hdr *rsp_hdr;
+
+@@ -791,6 +794,7 @@ static void __smb2_lease_break_noti(struct work_struct *wk)
+
+ out:
+ ksmbd_free_work_struct(work);
++ ksmbd_conn_r_count_dec(conn);
+ }
+
+ /**
+@@ -803,7 +807,6 @@ static void __smb2_lease_break_noti(struct work_struct *wk)
+ static int smb2_lease_break_noti(struct oplock_info *opinfo)
+ {
+ struct ksmbd_conn *conn = opinfo->conn;
+- struct list_head *tmp, *t;
+ struct ksmbd_work *work;
+ struct lease_break_info *br_info;
+ struct lease *lease = opinfo->o_lease;
+@@ -831,16 +834,7 @@ static int smb2_lease_break_noti(struct oplock_info *opinfo)
+ work->sess = opinfo->sess;
+
+ if (opinfo->op_state == OPLOCK_ACK_WAIT) {
+- list_for_each_safe(tmp, t, &opinfo->interim_list) {
+- struct ksmbd_work *in_work;
+-
+- in_work = list_entry(tmp, struct ksmbd_work,
+- interim_entry);
+- setup_async_work(in_work, NULL, NULL);
+- smb2_send_interim_resp(in_work, STATUS_PENDING);
+- list_del_init(&in_work->interim_entry);
+- release_async_work(in_work);
+- }
++ ksmbd_conn_r_count_inc(conn);
+ INIT_WORK(&work->work, __smb2_lease_break_noti);
+ ksmbd_queue_work(work);
+ wait_for_break_ack(opinfo);
+@@ -871,7 +865,8 @@ static void wait_lease_breaking(struct oplock_info *opinfo)
+ }
+ }
+
+-static int oplock_break(struct oplock_info *brk_opinfo, int req_op_level)
++static int oplock_break(struct oplock_info *brk_opinfo, int req_op_level,
++ struct ksmbd_work *in_work)
+ {
+ int err = 0;
+
+@@ -914,9 +909,15 @@ static int oplock_break(struct oplock_info *brk_opinfo, int req_op_level)
+ }
+
+ if (lease->state & (SMB2_LEASE_WRITE_CACHING_LE |
+- SMB2_LEASE_HANDLE_CACHING_LE))
++ SMB2_LEASE_HANDLE_CACHING_LE)) {
++ if (in_work) {
++ setup_async_work(in_work, NULL, NULL);
++ smb2_send_interim_resp(in_work, STATUS_PENDING);
++ release_async_work(in_work);
++ }
++
+ brk_opinfo->op_state = OPLOCK_ACK_WAIT;
+- else
++ } else
+ atomic_dec(&brk_opinfo->breaking_cnt);
+ } else {
+ err = oplock_break_pending(brk_opinfo, req_op_level);
+@@ -1116,7 +1117,7 @@ void smb_send_parent_lease_break_noti(struct ksmbd_file *fp,
+ if (ksmbd_conn_releasing(opinfo->conn))
+ continue;
+
+- oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE);
++ oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL);
+ opinfo_put(opinfo);
+ }
+ }
+@@ -1152,7 +1153,7 @@ void smb_lazy_parent_lease_break_close(struct ksmbd_file *fp)
+
+ if (ksmbd_conn_releasing(opinfo->conn))
+ continue;
+- oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE);
++ oplock_break(opinfo, SMB2_OPLOCK_LEVEL_NONE, NULL);
+ opinfo_put(opinfo);
+ }
+ }
+@@ -1252,8 +1253,7 @@ int smb_grant_oplock(struct ksmbd_work *work, int req_op_level, u64 pid,
+ goto op_break_not_needed;
+ }
+
+- list_add(&work->interim_entry, &prev_opinfo->interim_list);
+- err = oplock_break(prev_opinfo, SMB2_OPLOCK_LEVEL_II);
++ err = oplock_break(prev_opinfo, SMB2_OPLOCK_LEVEL_II, work);
+ opinfo_put(prev_opinfo);
+ if (err == -ENOENT)
+ goto set_lev;
+@@ -1322,8 +1322,7 @@ static void smb_break_all_write_oplock(struct ksmbd_work *work,
+ }
+
+ brk_opinfo->open_trunc = is_trunc;
+- list_add(&work->interim_entry, &brk_opinfo->interim_list);
+- oplock_break(brk_opinfo, SMB2_OPLOCK_LEVEL_II);
++ oplock_break(brk_opinfo, SMB2_OPLOCK_LEVEL_II, work);
+ opinfo_put(brk_opinfo);
+ }
+
+@@ -1386,7 +1385,7 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ SMB2_LEASE_KEY_SIZE))
+ goto next;
+ brk_op->open_trunc = is_trunc;
+- oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE);
++ oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE, NULL);
+ next:
+ opinfo_put(brk_op);
+ rcu_read_lock();
+diff --git a/fs/smb/server/oplock.h b/fs/smb/server/oplock.h
+index 72bc88a63a4082..3f64f07872638e 100644
+--- a/fs/smb/server/oplock.h
++++ b/fs/smb/server/oplock.h
+@@ -67,7 +67,6 @@ struct oplock_info {
+ bool is_lease;
+ bool open_trunc; /* truncate on open */
+ struct lease *o_lease;
+- struct list_head interim_list;
+ struct list_head op_entry;
+ struct list_head lease_entry;
+ wait_queue_head_t oplock_q; /* Other server threads */
+diff --git a/fs/smb/server/server.c b/fs/smb/server/server.c
+index d146b0e7c3a9dd..d523b860236ab3 100644
+--- a/fs/smb/server/server.c
++++ b/fs/smb/server/server.c
+@@ -270,17 +270,7 @@ static void handle_ksmbd_work(struct work_struct *wk)
+
+ ksmbd_conn_try_dequeue_request(work);
+ ksmbd_free_work_struct(work);
+- /*
+- * Checking waitqueue to dropping pending requests on
+- * disconnection. waitqueue_active is safe because it
+- * uses atomic operation for condition.
+- */
+- atomic_inc(&conn->refcnt);
+- if (!atomic_dec_return(&conn->r_count) && waitqueue_active(&conn->r_count_q))
+- wake_up(&conn->r_count_q);
+-
+- if (atomic_dec_and_test(&conn->refcnt))
+- kfree(conn);
++ ksmbd_conn_r_count_dec(conn);
+ }
+
+ /**
+@@ -310,7 +300,7 @@ static int queue_ksmbd_work(struct ksmbd_conn *conn)
+ conn->request_buf = NULL;
+
+ ksmbd_conn_enqueue_request(work);
+- atomic_inc(&conn->r_count);
++ ksmbd_conn_r_count_inc(conn);
+ /* update activity on connection */
+ conn->last_active = jiffies;
+ INIT_WORK(&work->work, handle_ksmbd_work);
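The helpers factored out above pair every queued work item with a reference on the connection: ksmbd_conn_r_count_inc() before scheduling, ksmbd_conn_r_count_dec() when the work finishes, so the oplock and lease break notifications queued by the earlier oplock.c hunks can no longer outlive the connection. A simplified standalone analogue of that pairing (the real code additionally juggles conn->refcnt and a waitqueue):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct conn {
	atomic_int r_count;
};

/* Taken before a work item is queued, like ksmbd_conn_r_count_inc(). */
static void r_count_inc(struct conn *c)
{
	atomic_fetch_add(&c->r_count, 1);
}

/* Dropped when the work item finishes, like ksmbd_conn_r_count_dec(). */
static void r_count_dec(struct conn *c)
{
	if (atomic_fetch_sub(&c->r_count, 1) == 1)
		printf("last in-flight request done; connection may be freed\n");
}

int main(void)
{
	struct conn *c = calloc(1, sizeof(*c));

	r_count_inc(c);		/* queue_ksmbd_work() / *_break_noti() */
	r_count_dec(c);		/* handle_ksmbd_work() / __smb2_*_noti() */
	free(c);
	return 0;
}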
+diff --git a/fs/vboxsf/super.c b/fs/vboxsf/super.c
+index e95b8a48d8a02d..1d94bb7841081d 100644
+--- a/fs/vboxsf/super.c
++++ b/fs/vboxsf/super.c
+@@ -21,7 +21,8 @@
+
+ #define VBOXSF_SUPER_MAGIC 0x786f4256 /* 'VBox' little endian */
+
+-static const unsigned char VBSF_MOUNT_SIGNATURE[4] = "\000\377\376\375";
++static const unsigned char VBSF_MOUNT_SIGNATURE[4] = { '\000', '\377', '\376',
++ '\375' };
+
+ static int follow_symlinks;
+ module_param(follow_symlinks, int, 0444);
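The initializer change above looks cosmetic but is not: a string literal carries an implicit trailing NUL, so "\000\377\376\375" is five bytes, and using it to initialize a four-byte array silently drops the terminator. That is legal C, but it is presumably what newer compilers' unterminated-string-initializer diagnostics (e.g. GCC's -Wunterminated-string-initialization) flag; the explicit byte list states that this is a raw signature, not a string. A compilable sketch of both forms:

/* Both arrays hold the same four bytes; only the first form relies on
 * the "drop the NUL" special case for string-literal initializers. */
static const unsigned char sig_literal[4] = "\000\377\376\375";
static const unsigned char sig_bytes[4] = { '\000', '\377', '\376', '\375' };

int main(void)
{
	return sig_literal[3] == sig_bytes[3] ? 0 : 1;	/* exits 0 */
}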
+diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
+index a53cbe25691043..7b5e5388c3801a 100644
+--- a/include/linux/blk-mq.h
++++ b/include/linux/blk-mq.h
+@@ -871,12 +871,20 @@ static inline bool blk_mq_is_reserved_rq(struct request *rq)
+ return rq->rq_flags & RQF_RESV;
+ }
+
+-/*
++/**
++ * blk_mq_add_to_batch() - add a request to the completion batch
++ * @req: The request to add to batch
++ * @req: The request to add to the batch
++ * @iob: The batch to add the request to
++ * @is_error: Specify true if the request failed with an error
++ * @complete: The completion handler for the request
+ * Batched completions only work when there is no I/O error and no special
+ * ->end_io handler.
++ *
++ * Return: true when the request was added to the batch, otherwise false
+ */
+ static inline bool blk_mq_add_to_batch(struct request *req,
+- struct io_comp_batch *iob, int ioerror,
++ struct io_comp_batch *iob, bool is_error,
+ void (*complete)(struct io_comp_batch *))
+ {
+ /*
+@@ -884,7 +892,7 @@ static inline bool blk_mq_add_to_batch(struct request *req,
+ * 1) No batch container
+ * 2) Has scheduler data attached
+ * 3) Not a passthrough request and end_io set
+- * 4) Not a passthrough request and an ioerror
++ * 4) Not a passthrough request and failed with an error
+ */
+ if (!iob)
+ return false;
+@@ -893,7 +901,7 @@ static inline bool blk_mq_add_to_batch(struct request *req,
+ if (!blk_rq_is_passthrough(req)) {
+ if (req->end_io)
+ return false;
+- if (ioerror < 0)
++ if (is_error)
+ return false;
+ }
+
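Beyond the kernel-doc, the functional change above is the switch from int ioerror to bool is_error: the old "ioerror < 0" test classified positive, device-specific status codes as success, so a caller passing such a status could batch-complete a failed request. The bool forces each caller to decide explicitly what counts as failure. A standalone illustration of the difference:

#include <stdbool.h>
#include <stdio.h>

/* Old rule: only strictly negative values counted as failure. */
static bool old_may_batch(int ioerror)   { return !(ioerror < 0); }
/* New rule: the caller states failure explicitly. */
static bool new_may_batch(bool is_error) { return !is_error; }

int main(void)
{
	unsigned short hw_status = 0x370;	/* positive device status code */

	printf("old rule batches the failed request: %s\n",
	       old_may_batch(hw_status) ? "yes (bug)" : "no");
	printf("new rule batches the failed request: %s\n",
	       new_may_batch(hw_status != 0) ? "yes" : "no");
	return 0;
}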
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index fc3de42d9d764f..b98f128c9afa78 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -3320,6 +3320,8 @@ extern const struct file_operations generic_ro_fops;
+
+ extern int readlink_copy(char __user *, int, const char *);
+ extern int page_readlink(struct dentry *, char __user *, int);
++extern const char *page_get_link_raw(struct dentry *, struct inode *,
++ struct delayed_call *);
+ extern const char *page_get_link(struct dentry *, struct inode *,
+ struct delayed_call *);
+ extern void page_put_link(void *);
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 25a7b13574c28b..12f7a7b9c06e9b 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -687,6 +687,7 @@ struct huge_bootmem_page {
+ };
+
+ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list);
++void wait_for_freed_hugetlb_folios(void);
+ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+ unsigned long addr, int avoid_reserve);
+ struct folio *alloc_hugetlb_folio_nodemask(struct hstate *h, int preferred_nid,
+@@ -1057,6 +1058,10 @@ static inline int isolate_or_dissolve_huge_page(struct page *page,
+ return -ENOMEM;
+ }
+
++static inline void wait_for_freed_hugetlb_folios(void)
++{
++}
++
+ static inline struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+ unsigned long addr,
+ int avoid_reserve)
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 22f6b018cff8de..c9dc15355f1bac 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -3133,6 +3133,7 @@
+ #define PCI_DEVICE_ID_INTEL_HDA_LNL_P 0xa828
+ #define PCI_DEVICE_ID_INTEL_S21152BB 0xb152
+ #define PCI_DEVICE_ID_INTEL_HDA_BMG 0xe2f7
++#define PCI_DEVICE_ID_INTEL_HDA_PTL_H 0xe328
+ #define PCI_DEVICE_ID_INTEL_HDA_PTL 0xe428
+ #define PCI_DEVICE_ID_INTEL_HDA_CML_R 0xf0c8
+ #define PCI_DEVICE_ID_INTEL_HDA_RKL_S 0xf1c8
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index ba7b52584770d7..c95f7e6ba25514 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -804,6 +804,7 @@ struct hci_conn_params {
+ extern struct list_head hci_dev_list;
+ extern struct list_head hci_cb_list;
+ extern rwlock_t hci_dev_list_lock;
++extern struct mutex hci_cb_list_lock;
+
+ #define hci_dev_set_flag(hdev, nr) set_bit((nr), (hdev)->dev_flags)
+ #define hci_dev_clear_flag(hdev, nr) clear_bit((nr), (hdev)->dev_flags)
+@@ -2006,47 +2007,24 @@ struct hci_cb {
+
+ char *name;
+
+- bool (*match) (struct hci_conn *conn);
+ void (*connect_cfm) (struct hci_conn *conn, __u8 status);
+ void (*disconn_cfm) (struct hci_conn *conn, __u8 status);
+ void (*security_cfm) (struct hci_conn *conn, __u8 status,
+- __u8 encrypt);
++ __u8 encrypt);
+ void (*key_change_cfm) (struct hci_conn *conn, __u8 status);
+ void (*role_switch_cfm) (struct hci_conn *conn, __u8 status, __u8 role);
+ };
+
+-static inline void hci_cb_lookup(struct hci_conn *conn, struct list_head *list)
+-{
+- struct hci_cb *cb, *cpy;
+-
+- rcu_read_lock();
+- list_for_each_entry_rcu(cb, &hci_cb_list, list) {
+- if (cb->match && cb->match(conn)) {
+- cpy = kmalloc(sizeof(*cpy), GFP_ATOMIC);
+- if (!cpy)
+- break;
+-
+- *cpy = *cb;
+- INIT_LIST_HEAD(&cpy->list);
+- list_add_rcu(&cpy->list, list);
+- }
+- }
+- rcu_read_unlock();
+-}
+-
+ static inline void hci_connect_cfm(struct hci_conn *conn, __u8 status)
+ {
+- struct list_head list;
+- struct hci_cb *cb, *tmp;
+-
+- INIT_LIST_HEAD(&list);
+- hci_cb_lookup(conn, &list);
++ struct hci_cb *cb;
+
+- list_for_each_entry_safe(cb, tmp, &list, list) {
++ mutex_lock(&hci_cb_list_lock);
++ list_for_each_entry(cb, &hci_cb_list, list) {
+ if (cb->connect_cfm)
+ cb->connect_cfm(conn, status);
+- kfree(cb);
+ }
++ mutex_unlock(&hci_cb_list_lock);
+
+ if (conn->connect_cfm_cb)
+ conn->connect_cfm_cb(conn, status);
+@@ -2054,43 +2032,22 @@ static inline void hci_connect_cfm(struct hci_conn *conn, __u8 status)
+
+ static inline void hci_disconn_cfm(struct hci_conn *conn, __u8 reason)
+ {
+- struct list_head list;
+- struct hci_cb *cb, *tmp;
+-
+- INIT_LIST_HEAD(&list);
+- hci_cb_lookup(conn, &list);
++ struct hci_cb *cb;
+
+- list_for_each_entry_safe(cb, tmp, &list, list) {
++ mutex_lock(&hci_cb_list_lock);
++ list_for_each_entry(cb, &hci_cb_list, list) {
+ if (cb->disconn_cfm)
+ cb->disconn_cfm(conn, reason);
+- kfree(cb);
+ }
++ mutex_unlock(&hci_cb_list_lock);
+
+ if (conn->disconn_cfm_cb)
+ conn->disconn_cfm_cb(conn, reason);
+ }
+
+-static inline void hci_security_cfm(struct hci_conn *conn, __u8 status,
+- __u8 encrypt)
+-{
+- struct list_head list;
+- struct hci_cb *cb, *tmp;
+-
+- INIT_LIST_HEAD(&list);
+- hci_cb_lookup(conn, &list);
+-
+- list_for_each_entry_safe(cb, tmp, &list, list) {
+- if (cb->security_cfm)
+- cb->security_cfm(conn, status, encrypt);
+- kfree(cb);
+- }
+-
+- if (conn->security_cfm_cb)
+- conn->security_cfm_cb(conn, status);
+-}
+-
+ static inline void hci_auth_cfm(struct hci_conn *conn, __u8 status)
+ {
++ struct hci_cb *cb;
+ __u8 encrypt;
+
+ if (test_bit(HCI_CONN_ENCRYPT_PEND, &conn->flags))
+@@ -2098,11 +2055,20 @@ static inline void hci_auth_cfm(struct hci_conn *conn, __u8 status)
+
+ encrypt = test_bit(HCI_CONN_ENCRYPT, &conn->flags) ? 0x01 : 0x00;
+
+- hci_security_cfm(conn, status, encrypt);
++ mutex_lock(&hci_cb_list_lock);
++ list_for_each_entry(cb, &hci_cb_list, list) {
++ if (cb->security_cfm)
++ cb->security_cfm(conn, status, encrypt);
++ }
++ mutex_unlock(&hci_cb_list_lock);
++
++ if (conn->security_cfm_cb)
++ conn->security_cfm_cb(conn, status);
+ }
+
+ static inline void hci_encrypt_cfm(struct hci_conn *conn, __u8 status)
+ {
++ struct hci_cb *cb;
+ __u8 encrypt;
+
+ if (conn->state == BT_CONFIG) {
+@@ -2129,38 +2095,40 @@ static inline void hci_encrypt_cfm(struct hci_conn *conn, __u8 status)
+ conn->sec_level = conn->pending_sec_level;
+ }
+
+- hci_security_cfm(conn, status, encrypt);
++ mutex_lock(&hci_cb_list_lock);
++ list_for_each_entry(cb, &hci_cb_list, list) {
++ if (cb->security_cfm)
++ cb->security_cfm(conn, status, encrypt);
++ }
++ mutex_unlock(&hci_cb_list_lock);
++
++ if (conn->security_cfm_cb)
++ conn->security_cfm_cb(conn, status);
+ }
+
+ static inline void hci_key_change_cfm(struct hci_conn *conn, __u8 status)
+ {
+- struct list_head list;
+- struct hci_cb *cb, *tmp;
+-
+- INIT_LIST_HEAD(&list);
+- hci_cb_lookup(conn, &list);
++ struct hci_cb *cb;
+
+- list_for_each_entry_safe(cb, tmp, &list, list) {
++ mutex_lock(&hci_cb_list_lock);
++ list_for_each_entry(cb, &hci_cb_list, list) {
+ if (cb->key_change_cfm)
+ cb->key_change_cfm(conn, status);
+- kfree(cb);
+ }
++ mutex_unlock(&hci_cb_list_lock);
+ }
+
+ static inline void hci_role_switch_cfm(struct hci_conn *conn, __u8 status,
+ __u8 role)
+ {
+- struct list_head list;
+- struct hci_cb *cb, *tmp;
+-
+- INIT_LIST_HEAD(&list);
+- hci_cb_lookup(conn, &list);
++ struct hci_cb *cb;
+
+- list_for_each_entry_safe(cb, tmp, &list, list) {
++ mutex_lock(&hci_cb_list_lock);
++ list_for_each_entry(cb, &hci_cb_list, list) {
+ if (cb->role_switch_cfm)
+ cb->role_switch_cfm(conn, status, role);
+- kfree(cb);
+ }
++ mutex_unlock(&hci_cb_list_lock);
+ }
+
+ static inline bool hci_bdaddr_is_rpa(bdaddr_t *bdaddr, u8 addr_type)
+diff --git a/include/net/bluetooth/l2cap.h b/include/net/bluetooth/l2cap.h
+index d9c767cf773de9..9189354c568f44 100644
+--- a/include/net/bluetooth/l2cap.h
++++ b/include/net/bluetooth/l2cap.h
+@@ -668,7 +668,7 @@ struct l2cap_conn {
+ struct l2cap_chan *smp;
+
+ struct list_head chan_l;
+- struct mutex chan_lock;
++ struct mutex lock;
+ struct kref ref;
+ struct list_head users;
+ };
+@@ -970,6 +970,7 @@ void l2cap_chan_del(struct l2cap_chan *chan, int err);
+ void l2cap_send_conn_req(struct l2cap_chan *chan);
+
+ struct l2cap_conn *l2cap_conn_get(struct l2cap_conn *conn);
++struct l2cap_conn *l2cap_conn_hold_unless_zero(struct l2cap_conn *conn);
+ void l2cap_conn_put(struct l2cap_conn *conn);
+
+ int l2cap_register_user(struct l2cap_conn *conn, struct l2cap_user *user);
+diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h
+index 788513cc384b7f..757abcb54d117d 100644
+--- a/include/net/netfilter/nf_tables.h
++++ b/include/net/netfilter/nf_tables.h
+@@ -1889,7 +1889,7 @@ void nft_chain_filter_fini(void);
+ void __init nft_chain_route_init(void);
+ void nft_chain_route_fini(void);
+
+-void nf_tables_trans_destroy_flush_work(void);
++void nf_tables_trans_destroy_flush_work(struct net *net);
+
+ int nf_msecs_to_jiffies64(const struct nlattr *nla, u64 *result);
+ __be64 nf_jiffies64_to_msecs(u64 input);
+@@ -1903,6 +1903,7 @@ static inline int nft_request_module(struct net *net, const char *fmt, ...) { re
+ struct nftables_pernet {
+ struct list_head tables;
+ struct list_head commit_list;
++ struct list_head destroy_list;
+ struct list_head commit_set_list;
+ struct list_head binding_list;
+ struct list_head module_list;
+@@ -1913,6 +1914,7 @@ struct nftables_pernet {
+ unsigned int base_seq;
+ unsigned int gc_seq;
+ u8 validate_state;
++ struct work_struct destroy_work;
+ };
+
+ extern unsigned int nf_tables_net_id;
+diff --git a/include/sound/soc.h b/include/sound/soc.h
+index e6e359c1a2ac4d..db3b464a91c7b7 100644
+--- a/include/sound/soc.h
++++ b/include/sound/soc.h
+@@ -1251,7 +1251,10 @@ void snd_soc_close_delayed_work(struct snd_soc_pcm_runtime *rtd);
+
+ /* mixer control */
+ struct soc_mixer_control {
+- int min, max, platform_max;
++ /* Minimum and maximum specified as written to the hardware */
++ int min, max;
++ /* Limited maximum value specified as presented through the control */
++ int platform_max;
+ int reg, rreg;
+ unsigned int shift, rshift;
+ unsigned int sign_bit;
+diff --git a/init/Kconfig b/init/Kconfig
+index 7256fa127530ff..293c565c62168e 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -1958,7 +1958,7 @@ config RUST
+ depends on !MODVERSIONS
+ depends on !GCC_PLUGIN_RANDSTRUCT
+ depends on !RANDSTRUCT
+- depends on !DEBUG_INFO_BTF || PAHOLE_HAS_LANG_EXCLUDE
++ depends on !DEBUG_INFO_BTF || (PAHOLE_HAS_LANG_EXCLUDE && !LTO)
+ depends on !CFI_CLANG || HAVE_CFI_ICALL_NORMALIZE_INTEGERS_RUSTC
+ select CFI_ICALL_NORMALIZE_INTEGERS if CFI_CLANG
+ depends on !CALL_PADDING || RUSTC_VERSION >= 108100
+diff --git a/io_uring/futex.c b/io_uring/futex.c
+index 914848f46beb21..01f044f89f8fa9 100644
+--- a/io_uring/futex.c
++++ b/io_uring/futex.c
+@@ -349,7 +349,7 @@ int io_futex_wait(struct io_kiocb *req, unsigned int issue_flags)
+ hlist_add_head(&req->hash_node, &ctx->futex_list);
+ io_ring_submit_unlock(ctx, issue_flags);
+
+- futex_queue(&ifd->q, hb);
++ futex_queue(&ifd->q, hb, NULL);
+ return IOU_ISSUE_SKIP_COMPLETE;
+ }
+
+diff --git a/io_uring/io-wq.c b/io_uring/io-wq.c
+index a38f36b6806041..a2d577b099308e 100644
+--- a/io_uring/io-wq.c
++++ b/io_uring/io-wq.c
+@@ -64,7 +64,7 @@ struct io_worker {
+
+ union {
+ struct rcu_head rcu;
+- struct work_struct work;
++ struct delayed_work work;
+ };
+ };
+
+@@ -770,6 +770,18 @@ static inline bool io_should_retry_thread(struct io_worker *worker, long err)
+ }
+ }
+
++static void queue_create_worker_retry(struct io_worker *worker)
++{
++ /*
++ * We only bother retrying because there's a chance that the
++ * failure to create a worker is due to some temporary condition
++ * in the forking task (e.g. outstanding signal); give the task
++ * some time to clear that condition.
++ */
++ schedule_delayed_work(&worker->work,
++ msecs_to_jiffies(worker->init_retries * 5));
++}
++
+ static void create_worker_cont(struct callback_head *cb)
+ {
+ struct io_worker *worker;
+@@ -809,12 +821,13 @@ static void create_worker_cont(struct callback_head *cb)
+
+ /* re-create attempts grab a new worker ref, drop the existing one */
+ io_worker_release(worker);
+- schedule_work(&worker->work);
++ queue_create_worker_retry(worker);
+ }
+
+ static void io_workqueue_create(struct work_struct *work)
+ {
+- struct io_worker *worker = container_of(work, struct io_worker, work);
++ struct io_worker *worker = container_of(work, struct io_worker,
++ work.work);
+ struct io_wq_acct *acct = io_wq_get_acct(worker);
+
+ if (!io_queue_worker_create(worker, acct, create_worker_cont))
+@@ -855,8 +868,8 @@ static bool create_io_worker(struct io_wq *wq, int index)
+ kfree(worker);
+ goto fail;
+ } else {
+- INIT_WORK(&worker->work, io_workqueue_create);
+- schedule_work(&worker->work);
++ INIT_DELAYED_WORK(&worker->work, io_workqueue_create);
++ queue_create_worker_retry(worker);
+ }
+
+ return true;
+diff --git a/kernel/futex/core.c b/kernel/futex/core.c
+index 136768ae26375f..010607a9919498 100644
+--- a/kernel/futex/core.c
++++ b/kernel/futex/core.c
+@@ -554,7 +554,8 @@ void futex_q_unlock(struct futex_hash_bucket *hb)
+ futex_hb_waiters_dec(hb);
+ }
+
+-void __futex_queue(struct futex_q *q, struct futex_hash_bucket *hb)
++void __futex_queue(struct futex_q *q, struct futex_hash_bucket *hb,
++ struct task_struct *task)
+ {
+ int prio;
+
+@@ -570,7 +571,7 @@ void __futex_queue(struct futex_q *q, struct futex_hash_bucket *hb)
+
+ plist_node_init(&q->list, prio);
+ plist_add(&q->list, &hb->chain);
+- q->task = current;
++ q->task = task;
+ }
+
+ /**
+diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
+index 8b195d06f4e8ed..12e47386232ed6 100644
+--- a/kernel/futex/futex.h
++++ b/kernel/futex/futex.h
+@@ -230,13 +230,15 @@ extern int futex_get_value_locked(u32 *dest, u32 __user *from);
+ extern struct futex_q *futex_top_waiter(struct futex_hash_bucket *hb, union futex_key *key);
+
+ extern void __futex_unqueue(struct futex_q *q);
+-extern void __futex_queue(struct futex_q *q, struct futex_hash_bucket *hb);
++extern void __futex_queue(struct futex_q *q, struct futex_hash_bucket *hb,
++ struct task_struct *task);
+ extern int futex_unqueue(struct futex_q *q);
+
+ /**
+ * futex_queue() - Enqueue the futex_q on the futex_hash_bucket
+ * @q: The futex_q to enqueue
+ * @hb: The destination hash bucket
++ * @task: Task queueing this futex
+ *
+ * The hb->lock must be held by the caller, and is released here. A call to
+ * futex_queue() is typically paired with exactly one call to futex_unqueue(). The
+@@ -244,11 +246,14 @@ extern int futex_unqueue(struct futex_q *q);
+ * or nothing if the unqueue is done as part of the wake process and the unqueue
+ * state is implicit in the state of woken task (see futex_wait_requeue_pi() for
+ * an example).
++ *
++ * Note that @task may be NULL, for async usage of futexes.
+ */
+-static inline void futex_queue(struct futex_q *q, struct futex_hash_bucket *hb)
++static inline void futex_queue(struct futex_q *q, struct futex_hash_bucket *hb,
++ struct task_struct *task)
+ __releases(&hb->lock)
+ {
+- __futex_queue(q, hb);
++ __futex_queue(q, hb, task);
+ spin_unlock(&hb->lock);
+ }
+
+diff --git a/kernel/futex/pi.c b/kernel/futex/pi.c
+index 5722467f273794..8ec12f1aff83be 100644
+--- a/kernel/futex/pi.c
++++ b/kernel/futex/pi.c
+@@ -981,7 +981,7 @@ int futex_lock_pi(u32 __user *uaddr, unsigned int flags, ktime_t *time, int tryl
+ /*
+ * Only actually queue now that the atomic ops are done:
+ */
+- __futex_queue(&q, hb);
++ __futex_queue(&q, hb, current);
+
+ if (trylock) {
+ ret = rt_mutex_futex_trylock(&q.pi_state->pi_mutex);
+diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
+index 3a10375d952186..a9056acb75eef9 100644
+--- a/kernel/futex/waitwake.c
++++ b/kernel/futex/waitwake.c
+@@ -350,7 +350,7 @@ void futex_wait_queue(struct futex_hash_bucket *hb, struct futex_q *q,
+ * access to the hash list and forcing another memory barrier.
+ */
+ set_current_state(TASK_INTERRUPTIBLE|TASK_FREEZABLE);
+- futex_queue(q, hb);
++ futex_queue(q, hb, current);
+
+ /* Arm the timer */
+ if (timeout)
+@@ -461,7 +461,7 @@ int futex_wait_multiple_setup(struct futex_vector *vs, int count, int *woken)
+ * next futex. Queue each futex at this moment so hb can
+ * be unlocked.
+ */
+- futex_queue(q, hb);
++ futex_queue(q, hb, current);
+ continue;
+ }
+
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index 3e486ccaa4ca34..8e52c1dd06284c 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -3191,6 +3191,8 @@ void call_rcu(struct rcu_head *head, rcu_callback_t func)
+ }
+ EXPORT_SYMBOL_GPL(call_rcu);
+
++static struct workqueue_struct *rcu_reclaim_wq;
++
+ /* Maximum number of jiffies to wait before draining a batch. */
+ #define KFREE_DRAIN_JIFFIES (5 * HZ)
+ #define KFREE_N_BATCHES 2
+@@ -3519,10 +3521,10 @@ __schedule_delayed_monitor_work(struct kfree_rcu_cpu *krcp)
+ if (delayed_work_pending(&krcp->monitor_work)) {
+ delay_left = krcp->monitor_work.timer.expires - jiffies;
+ if (delay < delay_left)
+- mod_delayed_work(system_unbound_wq, &krcp->monitor_work, delay);
++ mod_delayed_work(rcu_reclaim_wq, &krcp->monitor_work, delay);
+ return;
+ }
+- queue_delayed_work(system_unbound_wq, &krcp->monitor_work, delay);
++ queue_delayed_work(rcu_reclaim_wq, &krcp->monitor_work, delay);
+ }
+
+ static void
+@@ -3620,7 +3622,7 @@ kvfree_rcu_queue_batch(struct kfree_rcu_cpu *krcp)
+ // "free channels", the batch can handle. Break
+ // the loop since it is done with this CPU thus
+ // queuing an RCU work is _always_ success here.
+- queued = queue_rcu_work(system_unbound_wq, &krwp->rcu_work);
++ queued = queue_rcu_work(rcu_reclaim_wq, &krwp->rcu_work);
+ WARN_ON_ONCE(!queued);
+ break;
+ }
+@@ -3708,7 +3710,7 @@ run_page_cache_worker(struct kfree_rcu_cpu *krcp)
+ if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
+ !atomic_xchg(&krcp->work_in_progress, 1)) {
+ if (atomic_read(&krcp->backoff_page_cache_fill)) {
+- queue_delayed_work(system_unbound_wq,
++ queue_delayed_work(rcu_reclaim_wq,
+ &krcp->page_cache_work,
+ msecs_to_jiffies(rcu_delay_page_cache_fill_msec));
+ } else {
+@@ -5662,6 +5664,10 @@ static void __init kfree_rcu_batch_init(void)
+ int i, j;
+ struct shrinker *kfree_rcu_shrinker;
+
++ rcu_reclaim_wq = alloc_workqueue("kvfree_rcu_reclaim",
++ WQ_UNBOUND | WQ_MEM_RECLAIM, 0);
++ WARN_ON(!rcu_reclaim_wq);
++
+ /* Clamp it to [0:100] seconds interval. */
+ if (rcu_delay_page_cache_fill_msec < 0 ||
+ rcu_delay_page_cache_fill_msec > 100 * MSEC_PER_SEC) {
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 9803f10a082a7b..1f817d0c5d2d0e 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1058,9 +1058,10 @@ void wake_up_q(struct wake_q_head *head)
+ struct task_struct *task;
+
+ task = container_of(node, struct task_struct, wake_q);
+- /* Task can safely be re-inserted now: */
+ node = node->next;
+- task->wake_q.next = NULL;
++ /* pairs with cmpxchg_relaxed() in __wake_q_add() */
++ WRITE_ONCE(task->wake_q.next, NULL);
++ /* Task can safely be re-inserted now. */
+
+ /*
+ * wake_up_process() executes a full barrier, which pairs with
+diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
+index 82b165bf48c423..1e3bc0774efd51 100644
+--- a/kernel/sched/debug.c
++++ b/kernel/sched/debug.c
+@@ -1264,6 +1264,8 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
+ if (task_has_dl_policy(p)) {
+ P(dl.runtime);
+ P(dl.deadline);
++ } else if (fair_policy(p->policy)) {
++ P(se.slice);
+ }
+ #ifdef CONFIG_SCHED_CLASS_EXT
+ __PS("ext.enabled", task_on_scx(p));
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index 325fd5b9d47152..e5cab54dfdd142 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -6052,6 +6052,9 @@ __bpf_kfunc_start_defs();
+ __bpf_kfunc s32 scx_bpf_select_cpu_dfl(struct task_struct *p, s32 prev_cpu,
+ u64 wake_flags, bool *is_idle)
+ {
++ if (!ops_cpu_valid(prev_cpu, NULL))
++ goto prev_cpu;
++
+ if (!static_branch_likely(&scx_builtin_idle_enabled)) {
+ scx_ops_error("built-in idle tracking is disabled");
+ goto prev_cpu;
+diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
+index d116c28564f26c..db9c06bb23116a 100644
+--- a/kernel/time/hrtimer.c
++++ b/kernel/time/hrtimer.c
+@@ -156,11 +156,6 @@ static struct hrtimer_cpu_base migration_cpu_base = {
+
+ #define migration_base migration_cpu_base.clock_base[0]
+
+-static inline bool is_migration_base(struct hrtimer_clock_base *base)
+-{
+- return base == &migration_base;
+-}
+-
+ /*
+ * We are using hashed locking: holding per_cpu(hrtimer_bases)[n].lock
+ * means that all timers which are tied to this base via timer->base are
+@@ -312,11 +307,6 @@ switch_hrtimer_base(struct hrtimer *timer, struct hrtimer_clock_base *base,
+
+ #else /* CONFIG_SMP */
+
+-static inline bool is_migration_base(struct hrtimer_clock_base *base)
+-{
+- return false;
+-}
+-
+ static inline struct hrtimer_clock_base *
+ lock_hrtimer_base(const struct hrtimer *timer, unsigned long *flags)
+ __acquires(&timer->base->cpu_base->lock)
+@@ -1441,6 +1431,18 @@ static void hrtimer_sync_wait_running(struct hrtimer_cpu_base *cpu_base,
+ }
+ }
+
++#ifdef CONFIG_SMP
++static __always_inline bool is_migration_base(struct hrtimer_clock_base *base)
++{
++ return base == &migration_base;
++}
++#else
++static __always_inline bool is_migration_base(struct hrtimer_clock_base *base)
++{
++ return false;
++}
++#endif
++
+ /*
+ * This function is called on PREEMPT_RT kernels when the fast path
+ * deletion of a timer failed because the timer callback function was
+diff --git a/kernel/vhost_task.c b/kernel/vhost_task.c
+index 8800f5acc00717..2ef2e1b8009165 100644
+--- a/kernel/vhost_task.c
++++ b/kernel/vhost_task.c
+@@ -133,7 +133,7 @@ struct vhost_task *vhost_task_create(bool (*fn)(void *),
+
+ vtsk = kzalloc(sizeof(*vtsk), GFP_KERNEL);
+ if (!vtsk)
+- return NULL;
++ return ERR_PTR(-ENOMEM);
+ init_completion(&vtsk->exited);
+ mutex_init(&vtsk->exit_mutex);
+ vtsk->data = arg;
+@@ -145,7 +145,7 @@ struct vhost_task *vhost_task_create(bool (*fn)(void *),
+ tsk = copy_process(NULL, 0, NUMA_NO_NODE, &args);
+ if (IS_ERR(tsk)) {
+ kfree(vtsk);
+- return NULL;
++ return ERR_PTR(PTR_ERR(tsk));
+ }
+
+ vtsk->task = tsk;
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index 1e9aa6de4e21ea..e28e820fdb7756 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -2955,6 +2955,14 @@ int isolate_or_dissolve_huge_page(struct page *page, struct list_head *list)
+ return ret;
+ }
+
++void wait_for_freed_hugetlb_folios(void)
++{
++ if (llist_empty(&hpage_freelist))
++ return;
++
++ flush_work(&free_hpage_work);
++}
++
+ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
+ unsigned long addr, int avoid_reserve)
+ {
+diff --git a/mm/page_isolation.c b/mm/page_isolation.c
+index 7e04047977cfea..6989c5ffd47417 100644
+--- a/mm/page_isolation.c
++++ b/mm/page_isolation.c
+@@ -611,6 +611,16 @@ int test_pages_isolated(unsigned long start_pfn, unsigned long end_pfn,
+ struct zone *zone;
+ int ret;
+
++ /*
++	 * Due to the deferred freeing of hugetlb folios, hugepage folios may
++	 * not be released to the buddy system immediately. This can cause
++	 * PageBuddy() to fail in __test_page_isolated_in_pageblock(). To
++	 * ensure that hugetlb folios are properly released back to the buddy
++	 * system, invoke wait_for_freed_hugetlb_folios() to wait for the
++	 * release to complete.
++ */
++ wait_for_freed_hugetlb_folios();
++
+ /*
+ * Note: pageblock_nr_pages != MAX_PAGE_ORDER. Then, chunks of free
+ * pages are not aligned to pageblock_nr_pages.
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 66011831d7983d..080a00d916f6b6 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -18,6 +18,7 @@
+ #include <asm/tlbflush.h>
+ #include <asm/tlb.h>
+ #include "internal.h"
++#include "swap.h"
+
+ static __always_inline
+ bool validate_dst_vma(struct vm_area_struct *dst_vma, unsigned long dst_end)
+@@ -1067,15 +1068,13 @@ static int move_present_pte(struct mm_struct *mm,
+ return err;
+ }
+
+-static int move_swap_pte(struct mm_struct *mm,
++static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
+ unsigned long dst_addr, unsigned long src_addr,
+ pte_t *dst_pte, pte_t *src_pte,
+ pte_t orig_dst_pte, pte_t orig_src_pte,
+- spinlock_t *dst_ptl, spinlock_t *src_ptl)
++ spinlock_t *dst_ptl, spinlock_t *src_ptl,
++ struct folio *src_folio)
+ {
+- if (!pte_swp_exclusive(orig_src_pte))
+- return -EBUSY;
+-
+ double_pt_lock(dst_ptl, src_ptl);
+
+ if (!pte_same(ptep_get(src_pte), orig_src_pte) ||
+@@ -1084,6 +1083,16 @@ static int move_swap_pte(struct mm_struct *mm,
+ return -EAGAIN;
+ }
+
++ /*
++ * If src_folio resides in the swapcache, its index and mapping must be
++ * updated to match the dst_vma, since a swap-in may occur and hit the
++ * swapcache after the PTE is moved.
++ */
++ if (src_folio) {
++ folio_move_anon_rmap(src_folio, dst_vma);
++ src_folio->index = linear_page_index(dst_vma, dst_addr);
++ }
++
+ orig_src_pte = ptep_get_and_clear(mm, src_addr, src_pte);
+ set_pte_at(mm, dst_addr, dst_pte, orig_src_pte);
+ double_pt_unlock(dst_ptl, src_ptl);
+@@ -1130,6 +1139,7 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
+ __u64 mode)
+ {
+ swp_entry_t entry;
++ struct swap_info_struct *si = NULL;
+ pte_t orig_src_pte, orig_dst_pte;
+ pte_t src_folio_pte;
+ spinlock_t *src_ptl, *dst_ptl;
+@@ -1255,8 +1265,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
+ spin_unlock(src_ptl);
+
+ if (!locked) {
+- pte_unmap(&orig_src_pte);
+- pte_unmap(&orig_dst_pte);
++ pte_unmap(src_pte);
++ pte_unmap(dst_pte);
+ src_pte = dst_pte = NULL;
+ /* now we can block and wait */
+ folio_lock(src_folio);
+@@ -1272,8 +1282,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
+ /* at this point we have src_folio locked */
+ if (folio_test_large(src_folio)) {
+ /* split_folio() can block */
+- pte_unmap(&orig_src_pte);
+- pte_unmap(&orig_dst_pte);
++ pte_unmap(src_pte);
++ pte_unmap(dst_pte);
+ src_pte = dst_pte = NULL;
+ err = split_folio(src_folio);
+ if (err)
+@@ -1298,8 +1308,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
+ goto out;
+ }
+ if (!anon_vma_trylock_write(src_anon_vma)) {
+- pte_unmap(&orig_src_pte);
+- pte_unmap(&orig_dst_pte);
++ pte_unmap(src_pte);
++ pte_unmap(dst_pte);
+ src_pte = dst_pte = NULL;
+ /* now we can block and wait */
+ anon_vma_lock_write(src_anon_vma);
+@@ -1312,11 +1322,13 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
+ orig_dst_pte, orig_src_pte,
+ dst_ptl, src_ptl, src_folio);
+ } else {
++ struct folio *folio = NULL;
++
+ entry = pte_to_swp_entry(orig_src_pte);
+ if (non_swap_entry(entry)) {
+ if (is_migration_entry(entry)) {
+- pte_unmap(&orig_src_pte);
+- pte_unmap(&orig_dst_pte);
++ pte_unmap(src_pte);
++ pte_unmap(dst_pte);
+ src_pte = dst_pte = NULL;
+ migration_entry_wait(mm, src_pmd, src_addr);
+ err = -EAGAIN;
+@@ -1325,10 +1337,53 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
+ goto out;
+ }
+
+- err = move_swap_pte(mm, dst_addr, src_addr,
+- dst_pte, src_pte,
+- orig_dst_pte, orig_src_pte,
+- dst_ptl, src_ptl);
++ if (!pte_swp_exclusive(orig_src_pte)) {
++ err = -EBUSY;
++ goto out;
++ }
++
++ si = get_swap_device(entry);
++ if (unlikely(!si)) {
++ err = -EAGAIN;
++ goto out;
++ }
++ /*
++ * Check whether the folio is present in the swapcache. If so, the folio's
++ * index and mapping must be updated even when the PTE is a swap
++ * entry. The anon_vma lock is not taken during this process since
++ * the folio has already been unmapped, and the swap entry is
++ * exclusive, preventing rmap walks.
++ *
++ * For large folios, return -EBUSY immediately, as split_folio()
++ * also returns -EBUSY when attempting to split unmapped large
++ * folios in the swapcache. This issue needs to be resolved
++ * separately to allow proper handling.
++ */
++ if (!src_folio)
++ folio = filemap_get_folio(swap_address_space(entry),
++ swap_cache_index(entry));
++ if (!IS_ERR_OR_NULL(folio)) {
++ if (folio_test_large(folio)) {
++ err = -EBUSY;
++ folio_put(folio);
++ goto out;
++ }
++ src_folio = folio;
++ src_folio_pte = orig_src_pte;
++ if (!folio_trylock(src_folio)) {
++ pte_unmap(src_pte);
++ pte_unmap(dst_pte);
++ src_pte = dst_pte = NULL;
++ put_swap_device(si);
++ si = NULL;
++ /* now we can block and wait */
++ folio_lock(src_folio);
++ goto retry;
++ }
++ }
++ err = move_swap_pte(mm, dst_vma, dst_addr, src_addr, dst_pte, src_pte,
++ orig_dst_pte, orig_src_pte,
++ dst_ptl, src_ptl, src_folio);
+ }
+
+ out:
+@@ -1345,6 +1400,8 @@ static int move_pages_pte(struct mm_struct *mm, pmd_t *dst_pmd, pmd_t *src_pmd,
+ if (src_pte)
+ pte_unmap(src_pte);
+ mmu_notifier_invalidate_range_end(&range);
++ if (si)
++ put_swap_device(si);
+
+ return err;
+ }
+diff --git a/net/bluetooth/hci_core.c b/net/bluetooth/hci_core.c
+index b5553c08e73162..72439764186ed2 100644
+--- a/net/bluetooth/hci_core.c
++++ b/net/bluetooth/hci_core.c
+@@ -57,6 +57,7 @@ DEFINE_RWLOCK(hci_dev_list_lock);
+
+ /* HCI callback list */
+ LIST_HEAD(hci_cb_list);
++DEFINE_MUTEX(hci_cb_list_lock);
+
+ /* HCI ID Numbering */
+ static DEFINE_IDA(hci_index_ida);
+@@ -2992,7 +2993,9 @@ int hci_register_cb(struct hci_cb *cb)
+ {
+ BT_DBG("%p name %s", cb, cb->name);
+
+- list_add_tail_rcu(&cb->list, &hci_cb_list);
++ mutex_lock(&hci_cb_list_lock);
++ list_add_tail(&cb->list, &hci_cb_list);
++ mutex_unlock(&hci_cb_list_lock);
+
+ return 0;
+ }
+@@ -3002,8 +3005,9 @@ int hci_unregister_cb(struct hci_cb *cb)
+ {
+ BT_DBG("%p name %s", cb, cb->name);
+
+- list_del_rcu(&cb->list);
+- synchronize_rcu();
++ mutex_lock(&hci_cb_list_lock);
++ list_del(&cb->list);
++ mutex_unlock(&hci_cb_list_lock);
+
+ return 0;
+ }
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index 388d46c6a043d4..d64117be62cc44 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -3393,23 +3393,30 @@ static void hci_disconn_complete_evt(struct hci_dev *hdev, void *data,
+ hci_update_scan(hdev);
+ }
+
+- params = hci_conn_params_lookup(hdev, &conn->dst, conn->dst_type);
+- if (params) {
+- switch (params->auto_connect) {
+- case HCI_AUTO_CONN_LINK_LOSS:
+- if (ev->reason != HCI_ERROR_CONNECTION_TIMEOUT)
++ /* Re-enable passive scanning if disconnected device is marked
++ * as auto-connectable.
++ */
++ if (conn->type == LE_LINK) {
++ params = hci_conn_params_lookup(hdev, &conn->dst,
++ conn->dst_type);
++ if (params) {
++ switch (params->auto_connect) {
++ case HCI_AUTO_CONN_LINK_LOSS:
++ if (ev->reason != HCI_ERROR_CONNECTION_TIMEOUT)
++ break;
++ fallthrough;
++
++ case HCI_AUTO_CONN_DIRECT:
++ case HCI_AUTO_CONN_ALWAYS:
++ hci_pend_le_list_del_init(params);
++ hci_pend_le_list_add(params,
++ &hdev->pend_le_conns);
++ hci_update_passive_scan(hdev);
+ break;
+- fallthrough;
+
+- case HCI_AUTO_CONN_DIRECT:
+- case HCI_AUTO_CONN_ALWAYS:
+- hci_pend_le_list_del_init(params);
+- hci_pend_le_list_add(params, &hdev->pend_le_conns);
+- hci_update_passive_scan(hdev);
+- break;
+-
+- default:
+- break;
++ default:
++ break;
++ }
+ }
+ }
+
+diff --git a/net/bluetooth/iso.c b/net/bluetooth/iso.c
+index bda2f2da7d7311..644b606743e212 100644
+--- a/net/bluetooth/iso.c
++++ b/net/bluetooth/iso.c
+@@ -2137,11 +2137,6 @@ int iso_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+ return HCI_LM_ACCEPT;
+ }
+
+-static bool iso_match(struct hci_conn *hcon)
+-{
+- return hcon->type == ISO_LINK || hcon->type == LE_LINK;
+-}
+-
+ static void iso_connect_cfm(struct hci_conn *hcon, __u8 status)
+ {
+ if (hcon->type != ISO_LINK) {
+@@ -2323,7 +2318,6 @@ void iso_recv(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
+
+ static struct hci_cb iso_cb = {
+ .name = "ISO",
+- .match = iso_match,
+ .connect_cfm = iso_connect_cfm,
+ .disconn_cfm = iso_disconn_cfm,
+ };
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 728a5ce9b50587..c27ea70f71e1e1 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -119,7 +119,6 @@ static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn,
+ {
+ struct l2cap_chan *c;
+
+- mutex_lock(&conn->chan_lock);
+ c = __l2cap_get_chan_by_scid(conn, cid);
+ if (c) {
+ /* Only lock if chan reference is not 0 */
+@@ -127,7 +126,6 @@ static struct l2cap_chan *l2cap_get_chan_by_scid(struct l2cap_conn *conn,
+ if (c)
+ l2cap_chan_lock(c);
+ }
+- mutex_unlock(&conn->chan_lock);
+
+ return c;
+ }
+@@ -140,7 +138,6 @@ static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn,
+ {
+ struct l2cap_chan *c;
+
+- mutex_lock(&conn->chan_lock);
+ c = __l2cap_get_chan_by_dcid(conn, cid);
+ if (c) {
+ /* Only lock if chan reference is not 0 */
+@@ -148,7 +145,6 @@ static struct l2cap_chan *l2cap_get_chan_by_dcid(struct l2cap_conn *conn,
+ if (c)
+ l2cap_chan_lock(c);
+ }
+- mutex_unlock(&conn->chan_lock);
+
+ return c;
+ }
+@@ -418,7 +414,7 @@ static void l2cap_chan_timeout(struct work_struct *work)
+ if (!conn)
+ return;
+
+- mutex_lock(&conn->chan_lock);
++ mutex_lock(&conn->lock);
+ /* __set_chan_timer() calls l2cap_chan_hold(chan) while scheduling
+ * this work. No need to call l2cap_chan_hold(chan) here again.
+ */
+@@ -439,7 +435,7 @@ static void l2cap_chan_timeout(struct work_struct *work)
+ l2cap_chan_unlock(chan);
+ l2cap_chan_put(chan);
+
+- mutex_unlock(&conn->chan_lock);
++ mutex_unlock(&conn->lock);
+ }
+
+ struct l2cap_chan *l2cap_chan_create(void)
+@@ -642,9 +638,9 @@ void __l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
+
+ void l2cap_chan_add(struct l2cap_conn *conn, struct l2cap_chan *chan)
+ {
+- mutex_lock(&conn->chan_lock);
++ mutex_lock(&conn->lock);
+ __l2cap_chan_add(conn, chan);
+- mutex_unlock(&conn->chan_lock);
++ mutex_unlock(&conn->lock);
+ }
+
+ void l2cap_chan_del(struct l2cap_chan *chan, int err)
+@@ -732,9 +728,9 @@ void l2cap_chan_list(struct l2cap_conn *conn, l2cap_chan_func_t func,
+ if (!conn)
+ return;
+
+- mutex_lock(&conn->chan_lock);
++ mutex_lock(&conn->lock);
+ __l2cap_chan_list(conn, func, data);
+- mutex_unlock(&conn->chan_lock);
++ mutex_unlock(&conn->lock);
+ }
+
+ EXPORT_SYMBOL_GPL(l2cap_chan_list);
+@@ -746,7 +742,7 @@ static void l2cap_conn_update_id_addr(struct work_struct *work)
+ struct hci_conn *hcon = conn->hcon;
+ struct l2cap_chan *chan;
+
+- mutex_lock(&conn->chan_lock);
++ mutex_lock(&conn->lock);
+
+ list_for_each_entry(chan, &conn->chan_l, list) {
+ l2cap_chan_lock(chan);
+@@ -755,7 +751,7 @@ static void l2cap_conn_update_id_addr(struct work_struct *work)
+ l2cap_chan_unlock(chan);
+ }
+
+- mutex_unlock(&conn->chan_lock);
++ mutex_unlock(&conn->lock);
+ }
+
+ static void l2cap_chan_le_connect_reject(struct l2cap_chan *chan)
+@@ -949,6 +945,16 @@ static u8 l2cap_get_ident(struct l2cap_conn *conn)
+ return id;
+ }
+
++static void l2cap_send_acl(struct l2cap_conn *conn, struct sk_buff *skb,
++ u8 flags)
++{
++ /* Check if the hcon still valid before attempting to send */
++ if (hci_conn_valid(conn->hcon->hdev, conn->hcon))
++ hci_send_acl(conn->hchan, skb, flags);
++ else
++ kfree_skb(skb);
++}
++
+ static void l2cap_send_cmd(struct l2cap_conn *conn, u8 ident, u8 code, u16 len,
+ void *data)
+ {
+@@ -971,7 +977,7 @@ static void l2cap_send_cmd(struct l2cap_conn *conn, u8 ident, u8 code, u16 len,
+ bt_cb(skb)->force_active = BT_POWER_FORCE_ACTIVE_ON;
+ skb->priority = HCI_PRIO_MAX;
+
+- hci_send_acl(conn->hchan, skb, flags);
++ l2cap_send_acl(conn, skb, flags);
+ }
+
+ static void l2cap_do_send(struct l2cap_chan *chan, struct sk_buff *skb)
+@@ -1498,8 +1504,6 @@ static void l2cap_conn_start(struct l2cap_conn *conn)
+
+ BT_DBG("conn %p", conn);
+
+- mutex_lock(&conn->chan_lock);
+-
+ list_for_each_entry_safe(chan, tmp, &conn->chan_l, list) {
+ l2cap_chan_lock(chan);
+
+@@ -1568,8 +1572,6 @@ static void l2cap_conn_start(struct l2cap_conn *conn)
+
+ l2cap_chan_unlock(chan);
+ }
+-
+- mutex_unlock(&conn->chan_lock);
+ }
+
+ static void l2cap_le_conn_ready(struct l2cap_conn *conn)
+@@ -1615,7 +1617,7 @@ static void l2cap_conn_ready(struct l2cap_conn *conn)
+ if (hcon->type == ACL_LINK)
+ l2cap_request_info(conn);
+
+- mutex_lock(&conn->chan_lock);
++ mutex_lock(&conn->lock);
+
+ list_for_each_entry(chan, &conn->chan_l, list) {
+
+@@ -1633,7 +1635,7 @@ static void l2cap_conn_ready(struct l2cap_conn *conn)
+ l2cap_chan_unlock(chan);
+ }
+
+- mutex_unlock(&conn->chan_lock);
++ mutex_unlock(&conn->lock);
+
+ if (hcon->type == LE_LINK)
+ l2cap_le_conn_ready(conn);
+@@ -1648,14 +1650,10 @@ static void l2cap_conn_unreliable(struct l2cap_conn *conn, int err)
+
+ BT_DBG("conn %p", conn);
+
+- mutex_lock(&conn->chan_lock);
+-
+ list_for_each_entry(chan, &conn->chan_l, list) {
+ if (test_bit(FLAG_FORCE_RELIABLE, &chan->flags))
+ l2cap_chan_set_err(chan, err);
+ }
+-
+- mutex_unlock(&conn->chan_lock);
+ }
+
+ static void l2cap_info_timeout(struct work_struct *work)
+@@ -1666,7 +1664,9 @@ static void l2cap_info_timeout(struct work_struct *work)
+ conn->info_state |= L2CAP_INFO_FEAT_MASK_REQ_DONE;
+ conn->info_ident = 0;
+
++ mutex_lock(&conn->lock);
+ l2cap_conn_start(conn);
++ mutex_unlock(&conn->lock);
+ }
+
+ /*
+@@ -1758,6 +1758,8 @@ static void l2cap_conn_del(struct hci_conn *hcon, int err)
+
+ BT_DBG("hcon %p conn %p, err %d", hcon, conn, err);
+
++ mutex_lock(&conn->lock);
++
+ kfree_skb(conn->rx_skb);
+
+ skb_queue_purge(&conn->pending_rx);
+@@ -1776,8 +1778,6 @@ static void l2cap_conn_del(struct hci_conn *hcon, int err)
+ /* Force the connection to be immediately dropped */
+ hcon->disc_timeout = 0;
+
+- mutex_lock(&conn->chan_lock);
+-
+ /* Kill channels */
+ list_for_each_entry_safe(chan, l, &conn->chan_l, list) {
+ l2cap_chan_hold(chan);
+@@ -1791,15 +1791,14 @@ static void l2cap_conn_del(struct hci_conn *hcon, int err)
+ l2cap_chan_put(chan);
+ }
+
+- mutex_unlock(&conn->chan_lock);
+-
+- hci_chan_del(conn->hchan);
+-
+ if (conn->info_state & L2CAP_INFO_FEAT_MASK_REQ_SENT)
+ cancel_delayed_work_sync(&conn->info_timer);
+
+- hcon->l2cap_data = NULL;
++ hci_chan_del(conn->hchan);
+ conn->hchan = NULL;
++
++ hcon->l2cap_data = NULL;
++ mutex_unlock(&conn->lock);
+ l2cap_conn_put(conn);
+ }
+
+@@ -2917,8 +2916,6 @@ static void l2cap_raw_recv(struct l2cap_conn *conn, struct sk_buff *skb)
+
+ BT_DBG("conn %p", conn);
+
+- mutex_lock(&conn->chan_lock);
+-
+ list_for_each_entry(chan, &conn->chan_l, list) {
+ if (chan->chan_type != L2CAP_CHAN_RAW)
+ continue;
+@@ -2933,8 +2930,6 @@ static void l2cap_raw_recv(struct l2cap_conn *conn, struct sk_buff *skb)
+ if (chan->ops->recv(chan, nskb))
+ kfree_skb(nskb);
+ }
+-
+- mutex_unlock(&conn->chan_lock);
+ }
+
+ /* ---- L2CAP signalling commands ---- */
+@@ -3957,7 +3952,6 @@ static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd,
+ goto response;
+ }
+
+- mutex_lock(&conn->chan_lock);
+ l2cap_chan_lock(pchan);
+
+ /* Check if the ACL is secure enough (if not SDP) */
+@@ -4064,7 +4058,6 @@ static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd,
+ }
+
+ l2cap_chan_unlock(pchan);
+- mutex_unlock(&conn->chan_lock);
+ l2cap_chan_put(pchan);
+ }
+
+@@ -4103,27 +4096,19 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn,
+ BT_DBG("dcid 0x%4.4x scid 0x%4.4x result 0x%2.2x status 0x%2.2x",
+ dcid, scid, result, status);
+
+- mutex_lock(&conn->chan_lock);
+-
+ if (scid) {
+ chan = __l2cap_get_chan_by_scid(conn, scid);
+- if (!chan) {
+- err = -EBADSLT;
+- goto unlock;
+- }
++ if (!chan)
++ return -EBADSLT;
+ } else {
+ chan = __l2cap_get_chan_by_ident(conn, cmd->ident);
+- if (!chan) {
+- err = -EBADSLT;
+- goto unlock;
+- }
++ if (!chan)
++ return -EBADSLT;
+ }
+
+ chan = l2cap_chan_hold_unless_zero(chan);
+- if (!chan) {
+- err = -EBADSLT;
+- goto unlock;
+- }
++ if (!chan)
++ return -EBADSLT;
+
+ err = 0;
+
+@@ -4161,9 +4146,6 @@ static int l2cap_connect_create_rsp(struct l2cap_conn *conn,
+ l2cap_chan_unlock(chan);
+ l2cap_chan_put(chan);
+
+-unlock:
+- mutex_unlock(&conn->chan_lock);
+-
+ return err;
+ }
+
+@@ -4451,11 +4433,7 @@ static inline int l2cap_disconnect_req(struct l2cap_conn *conn,
+
+ chan->ops->set_shutdown(chan);
+
+- l2cap_chan_unlock(chan);
+- mutex_lock(&conn->chan_lock);
+- l2cap_chan_lock(chan);
+ l2cap_chan_del(chan, ECONNRESET);
+- mutex_unlock(&conn->chan_lock);
+
+ chan->ops->close(chan);
+
+@@ -4492,11 +4470,7 @@ static inline int l2cap_disconnect_rsp(struct l2cap_conn *conn,
+ return 0;
+ }
+
+- l2cap_chan_unlock(chan);
+- mutex_lock(&conn->chan_lock);
+- l2cap_chan_lock(chan);
+ l2cap_chan_del(chan, 0);
+- mutex_unlock(&conn->chan_lock);
+
+ chan->ops->close(chan);
+
+@@ -4694,13 +4668,9 @@ static int l2cap_le_connect_rsp(struct l2cap_conn *conn,
+ BT_DBG("dcid 0x%4.4x mtu %u mps %u credits %u result 0x%2.2x",
+ dcid, mtu, mps, credits, result);
+
+- mutex_lock(&conn->chan_lock);
+-
+ chan = __l2cap_get_chan_by_ident(conn, cmd->ident);
+- if (!chan) {
+- err = -EBADSLT;
+- goto unlock;
+- }
++ if (!chan)
++ return -EBADSLT;
+
+ err = 0;
+
+@@ -4748,9 +4718,6 @@ static int l2cap_le_connect_rsp(struct l2cap_conn *conn,
+
+ l2cap_chan_unlock(chan);
+
+-unlock:
+- mutex_unlock(&conn->chan_lock);
+-
+ return err;
+ }
+
+@@ -4862,7 +4829,6 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn,
+ goto response;
+ }
+
+- mutex_lock(&conn->chan_lock);
+ l2cap_chan_lock(pchan);
+
+ if (!smp_sufficient_security(conn->hcon, pchan->sec_level,
+@@ -4928,7 +4894,6 @@ static int l2cap_le_connect_req(struct l2cap_conn *conn,
+
+ response_unlock:
+ l2cap_chan_unlock(pchan);
+- mutex_unlock(&conn->chan_lock);
+ l2cap_chan_put(pchan);
+
+ if (result == L2CAP_CR_PEND)
+@@ -5062,7 +5027,6 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
+ goto response;
+ }
+
+- mutex_lock(&conn->chan_lock);
+ l2cap_chan_lock(pchan);
+
+ if (!smp_sufficient_security(conn->hcon, pchan->sec_level,
+@@ -5137,7 +5101,6 @@ static inline int l2cap_ecred_conn_req(struct l2cap_conn *conn,
+
+ unlock:
+ l2cap_chan_unlock(pchan);
+- mutex_unlock(&conn->chan_lock);
+ l2cap_chan_put(pchan);
+
+ response:
+@@ -5174,8 +5137,6 @@ static inline int l2cap_ecred_conn_rsp(struct l2cap_conn *conn,
+ BT_DBG("mtu %u mps %u credits %u result 0x%4.4x", mtu, mps, credits,
+ result);
+
+- mutex_lock(&conn->chan_lock);
+-
+ cmd_len -= sizeof(*rsp);
+
+ list_for_each_entry_safe(chan, tmp, &conn->chan_l, list) {
+@@ -5261,8 +5222,6 @@ static inline int l2cap_ecred_conn_rsp(struct l2cap_conn *conn,
+ l2cap_chan_unlock(chan);
+ }
+
+- mutex_unlock(&conn->chan_lock);
+-
+ return err;
+ }
+
+@@ -5375,8 +5334,6 @@ static inline int l2cap_le_command_rej(struct l2cap_conn *conn,
+ if (cmd_len < sizeof(*rej))
+ return -EPROTO;
+
+- mutex_lock(&conn->chan_lock);
+-
+ chan = __l2cap_get_chan_by_ident(conn, cmd->ident);
+ if (!chan)
+ goto done;
+@@ -5391,7 +5348,6 @@ static inline int l2cap_le_command_rej(struct l2cap_conn *conn,
+ l2cap_chan_put(chan);
+
+ done:
+- mutex_unlock(&conn->chan_lock);
+ return 0;
+ }
+
+@@ -6846,8 +6802,12 @@ static void process_pending_rx(struct work_struct *work)
+
+ BT_DBG("");
+
++ mutex_lock(&conn->lock);
++
+ while ((skb = skb_dequeue(&conn->pending_rx)))
+ l2cap_recv_frame(conn, skb);
++
++ mutex_unlock(&conn->lock);
+ }
+
+ static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon)
+@@ -6886,7 +6846,7 @@ static struct l2cap_conn *l2cap_conn_add(struct hci_conn *hcon)
+ conn->local_fixed_chan |= L2CAP_FC_SMP_BREDR;
+
+ mutex_init(&conn->ident_lock);
+- mutex_init(&conn->chan_lock);
++ mutex_init(&conn->lock);
+
+ INIT_LIST_HEAD(&conn->chan_l);
+ INIT_LIST_HEAD(&conn->users);
+@@ -7077,7 +7037,7 @@ int l2cap_chan_connect(struct l2cap_chan *chan, __le16 psm, u16 cid,
+ }
+ }
+
+- mutex_lock(&conn->chan_lock);
++ mutex_lock(&conn->lock);
+ l2cap_chan_lock(chan);
+
+ if (cid && __l2cap_get_chan_by_dcid(conn, cid)) {
+@@ -7118,7 +7078,7 @@ int l2cap_chan_connect(struct l2cap_chan *chan, __le16 psm, u16 cid,
+
+ chan_unlock:
+ l2cap_chan_unlock(chan);
+- mutex_unlock(&conn->chan_lock);
++ mutex_unlock(&conn->lock);
+ done:
+ hci_dev_unlock(hdev);
+ hci_dev_put(hdev);
+@@ -7222,11 +7182,6 @@ static struct l2cap_chan *l2cap_global_fixed_chan(struct l2cap_chan *c,
+ return NULL;
+ }
+
+-static bool l2cap_match(struct hci_conn *hcon)
+-{
+- return hcon->type == ACL_LINK || hcon->type == LE_LINK;
+-}
+-
+ static void l2cap_connect_cfm(struct hci_conn *hcon, u8 status)
+ {
+ struct hci_dev *hdev = hcon->hdev;
+@@ -7234,6 +7189,9 @@ static void l2cap_connect_cfm(struct hci_conn *hcon, u8 status)
+ struct l2cap_chan *pchan;
+ u8 dst_type;
+
++ if (hcon->type != ACL_LINK && hcon->type != LE_LINK)
++ return;
++
+ BT_DBG("hcon %p bdaddr %pMR status %d", hcon, &hcon->dst, status);
+
+ if (status) {
+@@ -7298,6 +7256,9 @@ int l2cap_disconn_ind(struct hci_conn *hcon)
+
+ static void l2cap_disconn_cfm(struct hci_conn *hcon, u8 reason)
+ {
++ if (hcon->type != ACL_LINK && hcon->type != LE_LINK)
++ return;
++
+ BT_DBG("hcon %p reason %d", hcon, reason);
+
+ l2cap_conn_del(hcon, bt_to_errno(reason));
+@@ -7330,7 +7291,7 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt)
+
+ BT_DBG("conn %p status 0x%2.2x encrypt %u", conn, status, encrypt);
+
+- mutex_lock(&conn->chan_lock);
++ mutex_lock(&conn->lock);
+
+ list_for_each_entry(chan, &conn->chan_l, list) {
+ l2cap_chan_lock(chan);
+@@ -7404,7 +7365,7 @@ static void l2cap_security_cfm(struct hci_conn *hcon, u8 status, u8 encrypt)
+ l2cap_chan_unlock(chan);
+ }
+
+- mutex_unlock(&conn->chan_lock);
++ mutex_unlock(&conn->lock);
+ }
+
+ /* Append fragment into frame respecting the maximum len of rx_skb */
+@@ -7471,19 +7432,45 @@ static void l2cap_recv_reset(struct l2cap_conn *conn)
+ conn->rx_len = 0;
+ }
+
++struct l2cap_conn *l2cap_conn_hold_unless_zero(struct l2cap_conn *c)
++{
++ if (!c)
++ return NULL;
++
++ BT_DBG("conn %p orig refcnt %u", c, kref_read(&c->ref));
++
++ if (!kref_get_unless_zero(&c->ref))
++ return NULL;
++
++ return c;
++}
++
+ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
+ {
+- struct l2cap_conn *conn = hcon->l2cap_data;
++ struct l2cap_conn *conn;
+ int len;
+
++ /* Lock hdev to access l2cap_data to avoid race with l2cap_conn_del */
++ hci_dev_lock(hcon->hdev);
++
++ conn = hcon->l2cap_data;
++
+ if (!conn)
+ conn = l2cap_conn_add(hcon);
+
+- if (!conn)
+- goto drop;
++ conn = l2cap_conn_hold_unless_zero(conn);
++
++ hci_dev_unlock(hcon->hdev);
++
++ if (!conn) {
++ kfree_skb(skb);
++ return;
++ }
+
+ BT_DBG("conn %p len %u flags 0x%x", conn, skb->len, flags);
+
++ mutex_lock(&conn->lock);
++
+ switch (flags) {
+ case ACL_START:
+ case ACL_START_NO_FLUSH:
+@@ -7508,7 +7495,7 @@ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
+ if (len == skb->len) {
+ /* Complete frame received */
+ l2cap_recv_frame(conn, skb);
+- return;
++ goto unlock;
+ }
+
+ BT_DBG("Start: total len %d, frag len %u", len, skb->len);
+@@ -7572,11 +7559,13 @@ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
+
+ drop:
+ kfree_skb(skb);
++unlock:
++ mutex_unlock(&conn->lock);
++ l2cap_conn_put(conn);
+ }
+
+ static struct hci_cb l2cap_cb = {
+ .name = "L2CAP",
+- .match = l2cap_match,
+ .connect_cfm = l2cap_connect_cfm,
+ .disconn_cfm = l2cap_disconn_cfm,
+ .security_cfm = l2cap_security_cfm,
+diff --git a/net/bluetooth/l2cap_sock.c b/net/bluetooth/l2cap_sock.c
+index 46ea0bee2259f8..acd11b268b98ad 100644
+--- a/net/bluetooth/l2cap_sock.c
++++ b/net/bluetooth/l2cap_sock.c
+@@ -1326,9 +1326,10 @@ static int l2cap_sock_shutdown(struct socket *sock, int how)
+ /* prevent sk structure from being freed whilst unlocked */
+ sock_hold(sk);
+
+- chan = l2cap_pi(sk)->chan;
+ /* prevent chan structure from being freed whilst unlocked */
+- l2cap_chan_hold(chan);
++ chan = l2cap_chan_hold_unless_zero(l2cap_pi(sk)->chan);
++ if (!chan)
++ goto shutdown_already;
+
+ BT_DBG("chan %p state %s", chan, state_to_string(chan->state));
+
+@@ -1358,22 +1359,20 @@ static int l2cap_sock_shutdown(struct socket *sock, int how)
+ release_sock(sk);
+
+ l2cap_chan_lock(chan);
+- conn = chan->conn;
+- if (conn)
+- /* prevent conn structure from being freed */
+- l2cap_conn_get(conn);
++ /* prevent conn structure from being freed */
++ conn = l2cap_conn_hold_unless_zero(chan->conn);
+ l2cap_chan_unlock(chan);
+
+ if (conn)
+ /* mutex lock must be taken before l2cap_chan_lock() */
+- mutex_lock(&conn->chan_lock);
++ mutex_lock(&conn->lock);
+
+ l2cap_chan_lock(chan);
+ l2cap_chan_close(chan, 0);
+ l2cap_chan_unlock(chan);
+
+ if (conn) {
+- mutex_unlock(&conn->chan_lock);
++ mutex_unlock(&conn->lock);
+ l2cap_conn_put(conn);
+ }
+
+diff --git a/net/bluetooth/rfcomm/core.c b/net/bluetooth/rfcomm/core.c
+index 4c56ca5a216c6f..ad5177e3a69b77 100644
+--- a/net/bluetooth/rfcomm/core.c
++++ b/net/bluetooth/rfcomm/core.c
+@@ -2134,11 +2134,6 @@ static int rfcomm_run(void *unused)
+ return 0;
+ }
+
+-static bool rfcomm_match(struct hci_conn *hcon)
+-{
+- return hcon->type == ACL_LINK;
+-}
+-
+ static void rfcomm_security_cfm(struct hci_conn *conn, u8 status, u8 encrypt)
+ {
+ struct rfcomm_session *s;
+@@ -2185,7 +2180,6 @@ static void rfcomm_security_cfm(struct hci_conn *conn, u8 status, u8 encrypt)
+
+ static struct hci_cb rfcomm_cb = {
+ .name = "RFCOMM",
+- .match = rfcomm_match,
+ .security_cfm = rfcomm_security_cfm
+ };
+
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 071c404c790af9..b872a2ca3ff38b 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -1355,13 +1355,11 @@ int sco_connect_ind(struct hci_dev *hdev, bdaddr_t *bdaddr, __u8 *flags)
+ return lm;
+ }
+
+-static bool sco_match(struct hci_conn *hcon)
+-{
+- return hcon->type == SCO_LINK || hcon->type == ESCO_LINK;
+-}
+-
+ static void sco_connect_cfm(struct hci_conn *hcon, __u8 status)
+ {
++ if (hcon->type != SCO_LINK && hcon->type != ESCO_LINK)
++ return;
++
+ BT_DBG("hcon %p bdaddr %pMR status %u", hcon, &hcon->dst, status);
+
+ if (!status) {
+@@ -1376,6 +1374,9 @@ static void sco_connect_cfm(struct hci_conn *hcon, __u8 status)
+
+ static void sco_disconn_cfm(struct hci_conn *hcon, __u8 reason)
+ {
++ if (hcon->type != SCO_LINK && hcon->type != ESCO_LINK)
++ return;
++
+ BT_DBG("hcon %p reason %d", hcon, reason);
+
+ sco_conn_del(hcon, bt_to_errno(reason));
+@@ -1401,7 +1402,6 @@ void sco_recv_scodata(struct hci_conn *hcon, struct sk_buff *skb)
+
+ static struct hci_cb sco_cb = {
+ .name = "SCO",
+- .match = sco_match,
+ .connect_cfm = sco_connect_cfm,
+ .disconn_cfm = sco_disconn_cfm,
+ };
+diff --git a/net/core/dev.c b/net/core/dev.c
+index c761f862bc5a2d..7b7b36c43c82cc 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -3723,6 +3723,9 @@ static struct sk_buff *validate_xmit_skb(struct sk_buff *skb, struct net_device
+ {
+ netdev_features_t features;
+
++ if (!skb_frags_readable(skb))
++ goto out_kfree_skb;
++
+ features = netif_skb_features(skb);
+ skb = validate_xmit_vlan(skb, features);
+ if (unlikely(!skb))
+@@ -4608,7 +4611,7 @@ static inline void ____napi_schedule(struct softnet_data *sd,
+ * we have to raise NET_RX_SOFTIRQ.
+ */
+ if (!sd->in_net_rx_action)
+- __raise_softirq_irqoff(NET_RX_SOFTIRQ);
++ raise_softirq_irqoff(NET_RX_SOFTIRQ);
+ }
+
+ #ifdef CONFIG_RPS
+diff --git a/net/core/netpoll.c b/net/core/netpoll.c
+index 45fb60bc480395..e95c2933756df9 100644
+--- a/net/core/netpoll.c
++++ b/net/core/netpoll.c
+@@ -319,6 +319,7 @@ static int netpoll_owner_active(struct net_device *dev)
+ static netdev_tx_t __netpoll_send_skb(struct netpoll *np, struct sk_buff *skb)
+ {
+ netdev_tx_t status = NETDEV_TX_BUSY;
++ netdev_tx_t ret = NET_XMIT_DROP;
+ struct net_device *dev;
+ unsigned long tries;
+ /* It is up to the caller to keep npinfo alive. */
+@@ -327,11 +328,12 @@ static netdev_tx_t __netpoll_send_skb(struct netpoll *np, struct sk_buff *skb)
+ lockdep_assert_irqs_disabled();
+
+ dev = np->dev;
++ rcu_read_lock();
+ npinfo = rcu_dereference_bh(dev->npinfo);
+
+ if (!npinfo || !netif_running(dev) || !netif_device_present(dev)) {
+ dev_kfree_skb_irq(skb);
+- return NET_XMIT_DROP;
++ goto out;
+ }
+
+ /* don't get messages out of order, and no recursion */
+@@ -370,7 +372,10 @@ static netdev_tx_t __netpoll_send_skb(struct netpoll *np, struct sk_buff *skb)
+ skb_queue_tail(&npinfo->txq, skb);
+ schedule_delayed_work(&npinfo->tx_work,0);
+ }
+- return NETDEV_TX_OK;
++ ret = NETDEV_TX_OK;
++out:
++ rcu_read_unlock();
++ return ret;
+ }
+
+ netdev_tx_t netpoll_send_skb(struct netpoll *np, struct sk_buff *skb)
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index f7c17388ff6aaf..26cdb665747573 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3237,16 +3237,13 @@ static void add_v4_addrs(struct inet6_dev *idev)
+ struct in6_addr addr;
+ struct net_device *dev;
+ struct net *net = dev_net(idev->dev);
+- int scope, plen, offset = 0;
++ int scope, plen;
+ u32 pflags = 0;
+
+ ASSERT_RTNL();
+
+ memset(&addr, 0, sizeof(struct in6_addr));
+- /* in case of IP6GRE the dev_addr is an IPv6 and therefore we use only the last 4 bytes */
+- if (idev->dev->addr_len == sizeof(struct in6_addr))
+- offset = sizeof(struct in6_addr) - 4;
+- memcpy(&addr.s6_addr32[3], idev->dev->dev_addr + offset, 4);
++ memcpy(&addr.s6_addr32[3], idev->dev->dev_addr, 4);
+
+ if (!(idev->dev->flags & IFF_POINTOPOINT) && idev->dev->type == ARPHRD_SIT) {
+ scope = IPV6_ADDR_COMPATv4;
+@@ -3557,7 +3554,13 @@ static void addrconf_gre_config(struct net_device *dev)
+ return;
+ }
+
+- if (dev->type == ARPHRD_ETHER) {
++ /* Generate the IPv6 link-local address using addrconf_addr_gen(),
++ * unless we have an IPv4 GRE device that is not bound to an IP
++ * address and is in EUI64 mode (as __ipv6_isatap_ifid() would fail in this
++ * case). Such devices fall back to add_v4_addrs() instead.
++ */
++ if (!(dev->type == ARPHRD_IPGRE && *(__be32 *)dev->dev_addr == 0 &&
++ idev->cnf.addr_gen_mode == IN6_ADDR_GEN_MODE_EUI64)) {
+ addrconf_addr_gen(idev, true);
+ return;
+ }
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 38c30e4ddda98c..2b6e8e7307ee5e 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -6,7 +6,7 @@
+ * Copyright 2007 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+ * Copyright (C) 2015-2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2024 Intel Corporation
++ * Copyright (C) 2018-2025 Intel Corporation
+ *
+ * utilities for mac80211
+ */
+@@ -2184,8 +2184,10 @@ int ieee80211_reconfig(struct ieee80211_local *local)
+ ieee80211_reconfig_roc(local);
+
+ /* Requeue all works */
+- list_for_each_entry(sdata, &local->interfaces, list)
+- wiphy_work_queue(local->hw.wiphy, &sdata->work);
++ list_for_each_entry(sdata, &local->interfaces, list) {
++ if (ieee80211_sdata_running(sdata))
++ wiphy_work_queue(local->hw.wiphy, &sdata->work);
++ }
+ }
+
+ ieee80211_wake_queues_by_reason(hw, IEEE80211_MAX_QUEUE_MAP,
+diff --git a/net/mctp/route.c b/net/mctp/route.c
+index 3f2bd65ff5e3c9..4c460160914f01 100644
+--- a/net/mctp/route.c
++++ b/net/mctp/route.c
+@@ -332,8 +332,14 @@ static int mctp_frag_queue(struct mctp_sk_key *key, struct sk_buff *skb)
+ & MCTP_HDR_SEQ_MASK;
+
+ if (!key->reasm_head) {
+- key->reasm_head = skb;
+- key->reasm_tailp = &(skb_shinfo(skb)->frag_list);
++ /* Since we're manipulating the shared frag_list, ensure it isn't
++ * shared with any other SKBs.
++ */
++ key->reasm_head = skb_unshare(skb, GFP_ATOMIC);
++ if (!key->reasm_head)
++ return -ENOMEM;
++
++ key->reasm_tailp = &(skb_shinfo(key->reasm_head)->frag_list);
+ key->last_seq = this_seq;
+ return 0;
+ }
+diff --git a/net/mctp/test/route-test.c b/net/mctp/test/route-test.c
+index 17165b86ce22d4..06c1897b685a8b 100644
+--- a/net/mctp/test/route-test.c
++++ b/net/mctp/test/route-test.c
+@@ -921,6 +921,114 @@ static void mctp_test_route_input_sk_fail_frag(struct kunit *test)
+ __mctp_route_test_fini(test, dev, rt, sock);
+ }
+
++/* Input route to socket, using a fragmented message created from clones.
++ */
++static void mctp_test_route_input_cloned_frag(struct kunit *test)
++{
++ /* 5 packet fragments, forming 2 complete messages */
++ const struct mctp_hdr hdrs[5] = {
++ RX_FRAG(FL_S, 0),
++ RX_FRAG(0, 1),
++ RX_FRAG(FL_E, 2),
++ RX_FRAG(FL_S, 0),
++ RX_FRAG(FL_E, 1),
++ };
++ struct mctp_test_route *rt;
++ struct mctp_test_dev *dev;
++ struct sk_buff *skb[5];
++ struct sk_buff *rx_skb;
++ struct socket *sock;
++ size_t data_len;
++ u8 compare[100];
++ u8 flat[100];
++ size_t total;
++ void *p;
++ int rc;
++
++ /* Arbitrary length */
++ data_len = 3;
++ total = data_len + sizeof(struct mctp_hdr);
++
++ __mctp_route_test_init(test, &dev, &rt, &sock, MCTP_NET_ANY);
++
++ /* Create a single skb initially with concatenated packets */
++ skb[0] = mctp_test_create_skb(&hdrs[0], 5 * total);
++ mctp_test_skb_set_dev(skb[0], dev);
++ memset(skb[0]->data, 0 * 0x11, skb[0]->len);
++ memcpy(skb[0]->data, &hdrs[0], sizeof(struct mctp_hdr));
++
++ /* Extract and populate packets */
++ for (int i = 1; i < 5; i++) {
++ skb[i] = skb_clone(skb[i - 1], GFP_ATOMIC);
++ KUNIT_ASSERT_TRUE(test, skb[i]);
++ p = skb_pull(skb[i], total);
++ KUNIT_ASSERT_TRUE(test, p);
++ skb_reset_network_header(skb[i]);
++ memcpy(skb[i]->data, &hdrs[i], sizeof(struct mctp_hdr));
++ memset(&skb[i]->data[sizeof(struct mctp_hdr)], i * 0x11, data_len);
++ }
++ for (int i = 0; i < 5; i++)
++ skb_trim(skb[i], total);
++
++ /* SOM packets have a type byte to match the socket */
++ skb[0]->data[4] = 0;
++ skb[3]->data[4] = 0;
++
++ skb_dump("pkt1 ", skb[0], false);
++ skb_dump("pkt2 ", skb[1], false);
++ skb_dump("pkt3 ", skb[2], false);
++ skb_dump("pkt4 ", skb[3], false);
++ skb_dump("pkt5 ", skb[4], false);
++
++ for (int i = 0; i < 5; i++) {
++ KUNIT_EXPECT_EQ(test, refcount_read(&skb[i]->users), 1);
++ /* Take a reference so we can check refcounts at the end */
++ skb_get(skb[i]);
++ }
++
++ /* Feed the fragments into MCTP core */
++ for (int i = 0; i < 5; i++) {
++ rc = mctp_route_input(&rt->rt, skb[i]);
++ KUNIT_EXPECT_EQ(test, rc, 0);
++ }
++
++ /* Receive first reassembled message */
++ rx_skb = skb_recv_datagram(sock->sk, MSG_DONTWAIT, &rc);
++ KUNIT_EXPECT_EQ(test, rc, 0);
++ KUNIT_EXPECT_EQ(test, rx_skb->len, 3 * data_len);
++ rc = skb_copy_bits(rx_skb, 0, flat, rx_skb->len);
++ for (int i = 0; i < rx_skb->len; i++)
++ compare[i] = (i / data_len) * 0x11;
++ /* Set type byte */
++ compare[0] = 0;
++
++ KUNIT_EXPECT_MEMEQ(test, flat, compare, rx_skb->len);
++ KUNIT_EXPECT_EQ(test, refcount_read(&rx_skb->users), 1);
++ kfree_skb(rx_skb);
++
++ /* Receive second reassembled message */
++ rx_skb = skb_recv_datagram(sock->sk, MSG_DONTWAIT, &rc);
++ KUNIT_EXPECT_EQ(test, rc, 0);
++ KUNIT_EXPECT_EQ(test, rx_skb->len, 2 * data_len);
++ rc = skb_copy_bits(rx_skb, 0, flat, rx_skb->len);
++ for (int i = 0; i < rx_skb->len; i++)
++ compare[i] = (i / data_len + 3) * 0x11;
++ /* Set type byte */
++ compare[0] = 0;
++
++ KUNIT_EXPECT_MEMEQ(test, flat, compare, rx_skb->len);
++ KUNIT_EXPECT_EQ(test, refcount_read(&rx_skb->users), 1);
++ kfree_skb(rx_skb);
++
++ /* Check input skb refcounts */
++ for (int i = 0; i < 5; i++) {
++ KUNIT_EXPECT_EQ(test, refcount_read(&skb[i]->users), 1);
++ kfree_skb(skb[i]);
++ }
++
++ __mctp_route_test_fini(test, dev, rt, sock);
++}
++
+ #if IS_ENABLED(CONFIG_MCTP_FLOWS)
+
+ static void mctp_test_flow_init(struct kunit *test,
+@@ -1144,6 +1252,7 @@ static struct kunit_case mctp_test_cases[] = {
+ KUNIT_CASE(mctp_test_packet_flow),
+ KUNIT_CASE(mctp_test_fragment_flow),
+ KUNIT_CASE(mctp_test_route_output_key_create),
++ KUNIT_CASE(mctp_test_route_input_cloned_frag),
+ {}
+ };
+
+diff --git a/net/mptcp/protocol.h b/net/mptcp/protocol.h
+index b70a303e082878..7e2f70f22b05b6 100644
+--- a/net/mptcp/protocol.h
++++ b/net/mptcp/protocol.h
+@@ -1194,6 +1194,8 @@ static inline void __mptcp_do_fallback(struct mptcp_sock *msk)
+ pr_debug("TCP fallback already done (msk=%p)\n", msk);
+ return;
+ }
++ if (WARN_ON_ONCE(!READ_ONCE(msk->allow_infinite_fallback)))
++ return;
+ set_bit(MPTCP_FALLBACK_DONE, &msk->flags);
+ }
+
+diff --git a/net/netfilter/ipvs/ip_vs_ctl.c b/net/netfilter/ipvs/ip_vs_ctl.c
+index dc6ddc4abbe213..3224f6e17e7361 100644
+--- a/net/netfilter/ipvs/ip_vs_ctl.c
++++ b/net/netfilter/ipvs/ip_vs_ctl.c
+@@ -3091,12 +3091,12 @@ do_ip_vs_get_ctl(struct sock *sk, int cmd, void __user *user, int *len)
+ case IP_VS_SO_GET_SERVICES:
+ {
+ struct ip_vs_get_services *get;
+- int size;
++ size_t size;
+
+ get = (struct ip_vs_get_services *)arg;
+ size = struct_size(get, entrytable, get->num_services);
+ if (*len != size) {
+- pr_err("length: %u != %u\n", *len, size);
++ pr_err("length: %u != %zu\n", *len, size);
+ ret = -EINVAL;
+ goto out;
+ }
+@@ -3132,12 +3132,12 @@ do_ip_vs_get_ctl(struct sock *sk, int cmd, void __user *user, int *len)
+ case IP_VS_SO_GET_DESTS:
+ {
+ struct ip_vs_get_dests *get;
+- int size;
++ size_t size;
+
+ get = (struct ip_vs_get_dests *)arg;
+ size = struct_size(get, entrytable, get->num_dests);
+ if (*len != size) {
+- pr_err("length: %u != %u\n", *len, size);
++ pr_err("length: %u != %zu\n", *len, size);
+ ret = -EINVAL;
+ goto out;
+ }
+diff --git a/net/netfilter/nf_conncount.c b/net/netfilter/nf_conncount.c
+index 4890af4dc263fd..913ede2f57f9a9 100644
+--- a/net/netfilter/nf_conncount.c
++++ b/net/netfilter/nf_conncount.c
+@@ -132,7 +132,7 @@ static int __nf_conncount_add(struct net *net,
+ struct nf_conn *found_ct;
+ unsigned int collect = 0;
+
+- if (time_is_after_eq_jiffies((unsigned long)list->last_gc))
++ if ((u32)jiffies == list->last_gc)
+ goto add_new_node;
+
+ /* check the saved connections */
+@@ -234,7 +234,7 @@ bool nf_conncount_gc_list(struct net *net,
+ bool ret = false;
+
+ /* don't bother if we just did GC */
+- if (time_is_after_eq_jiffies((unsigned long)READ_ONCE(list->last_gc)))
++ if ((u32)jiffies == READ_ONCE(list->last_gc))
+ return false;
+
+ /* don't bother if other cpu is already doing GC */
+@@ -377,6 +377,8 @@ insert_tree(struct net *net,
+
+ conn->tuple = *tuple;
+ conn->zone = *zone;
++ conn->cpu = raw_smp_processor_id();
++ conn->jiffies32 = (u32)jiffies;
+ memcpy(rbconn->key, key, sizeof(u32) * data->keylen);
+
+ nf_conncount_list_init(&rbconn->list);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 939510247ef5a6..eb3a6f96b094db 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -31,7 +31,6 @@ unsigned int nf_tables_net_id __read_mostly;
+ static LIST_HEAD(nf_tables_expressions);
+ static LIST_HEAD(nf_tables_objects);
+ static LIST_HEAD(nf_tables_flowtables);
+-static LIST_HEAD(nf_tables_destroy_list);
+ static LIST_HEAD(nf_tables_gc_list);
+ static DEFINE_SPINLOCK(nf_tables_destroy_list_lock);
+ static DEFINE_SPINLOCK(nf_tables_gc_list_lock);
+@@ -122,7 +121,6 @@ static void nft_validate_state_update(struct nft_table *table, u8 new_validate_s
+ table->validate_state = new_validate_state;
+ }
+ static void nf_tables_trans_destroy_work(struct work_struct *w);
+-static DECLARE_WORK(trans_destroy_work, nf_tables_trans_destroy_work);
+
+ static void nft_trans_gc_work(struct work_struct *work);
+ static DECLARE_WORK(trans_gc_work, nft_trans_gc_work);
+@@ -9748,11 +9746,12 @@ static void nft_commit_release(struct nft_trans *trans)
+
+ static void nf_tables_trans_destroy_work(struct work_struct *w)
+ {
++ struct nftables_pernet *nft_net = container_of(w, struct nftables_pernet, destroy_work);
+ struct nft_trans *trans, *next;
+ LIST_HEAD(head);
+
+ spin_lock(&nf_tables_destroy_list_lock);
+- list_splice_init(&nf_tables_destroy_list, &head);
++ list_splice_init(&nft_net->destroy_list, &head);
+ spin_unlock(&nf_tables_destroy_list_lock);
+
+ if (list_empty(&head))
+@@ -9766,9 +9765,11 @@ static void nf_tables_trans_destroy_work(struct work_struct *w)
+ }
+ }
+
+-void nf_tables_trans_destroy_flush_work(void)
++void nf_tables_trans_destroy_flush_work(struct net *net)
+ {
+- flush_work(&trans_destroy_work);
++ struct nftables_pernet *nft_net = nft_pernet(net);
++
++ flush_work(&nft_net->destroy_work);
+ }
+ EXPORT_SYMBOL_GPL(nf_tables_trans_destroy_flush_work);
+
+@@ -10226,11 +10227,11 @@ static void nf_tables_commit_release(struct net *net)
+
+ trans->put_net = true;
+ spin_lock(&nf_tables_destroy_list_lock);
+- list_splice_tail_init(&nft_net->commit_list, &nf_tables_destroy_list);
++ list_splice_tail_init(&nft_net->commit_list, &nft_net->destroy_list);
+ spin_unlock(&nf_tables_destroy_list_lock);
+
+ nf_tables_module_autoload_cleanup(net);
+- schedule_work(&trans_destroy_work);
++ schedule_work(&nft_net->destroy_work);
+
+ mutex_unlock(&nft_net->commit_mutex);
+ }
+@@ -11653,7 +11654,7 @@ static int nft_rcv_nl_event(struct notifier_block *this, unsigned long event,
+
+ gc_seq = nft_gc_seq_begin(nft_net);
+
+- nf_tables_trans_destroy_flush_work();
++ nf_tables_trans_destroy_flush_work(net);
+ again:
+ list_for_each_entry(table, &nft_net->tables, list) {
+ if (nft_table_has_owner(table) &&
+@@ -11695,6 +11696,7 @@ static int __net_init nf_tables_init_net(struct net *net)
+
+ INIT_LIST_HEAD(&nft_net->tables);
+ INIT_LIST_HEAD(&nft_net->commit_list);
++ INIT_LIST_HEAD(&nft_net->destroy_list);
+ INIT_LIST_HEAD(&nft_net->commit_set_list);
+ INIT_LIST_HEAD(&nft_net->binding_list);
+ INIT_LIST_HEAD(&nft_net->module_list);
+@@ -11703,6 +11705,7 @@ static int __net_init nf_tables_init_net(struct net *net)
+ nft_net->base_seq = 1;
+ nft_net->gc_seq = 0;
+ nft_net->validate_state = NFT_VALIDATE_SKIP;
++ INIT_WORK(&nft_net->destroy_work, nf_tables_trans_destroy_work);
+
+ return 0;
+ }
+@@ -11731,14 +11734,17 @@ static void __net_exit nf_tables_exit_net(struct net *net)
+ if (!list_empty(&nft_net->module_list))
+ nf_tables_module_autoload_cleanup(net);
+
++ cancel_work_sync(&nft_net->destroy_work);
+ __nft_release_tables(net);
+
+ nft_gc_seq_end(nft_net, gc_seq);
+
+ mutex_unlock(&nft_net->commit_mutex);
++
+ WARN_ON_ONCE(!list_empty(&nft_net->tables));
+ WARN_ON_ONCE(!list_empty(&nft_net->module_list));
+ WARN_ON_ONCE(!list_empty(&nft_net->notify_list));
++ WARN_ON_ONCE(!list_empty(&nft_net->destroy_list));
+ }
+
+ static void nf_tables_exit_batch(struct list_head *net_exit_list)
+@@ -11829,10 +11835,8 @@ static void __exit nf_tables_module_exit(void)
+ unregister_netdevice_notifier(&nf_tables_flowtable_notifier);
+ nft_chain_filter_fini();
+ nft_chain_route_fini();
+- nf_tables_trans_destroy_flush_work();
+ unregister_pernet_subsys(&nf_tables_net_ops);
+ cancel_work_sync(&trans_gc_work);
+- cancel_work_sync(&trans_destroy_work);
+ rcu_barrier();
+ rhltable_destroy(&nft_objname_ht);
+ nf_tables_core_module_exit();
+diff --git a/net/netfilter/nft_compat.c b/net/netfilter/nft_compat.c
+index 7ca4f0d21fe2a2..72711d62fddfa4 100644
+--- a/net/netfilter/nft_compat.c
++++ b/net/netfilter/nft_compat.c
+@@ -228,7 +228,7 @@ static int nft_parse_compat(const struct nlattr *attr, u16 *proto, bool *inv)
+ return 0;
+ }
+
+-static void nft_compat_wait_for_destructors(void)
++static void nft_compat_wait_for_destructors(struct net *net)
+ {
+ /* xtables matches or targets can have side effects, e.g.
+ * creation/destruction of /proc files.
+@@ -236,7 +236,7 @@ static void nft_compat_wait_for_destructors(void)
+ * work queue. If we have pending invocations we thus
+ * need to wait for those to finish.
+ */
+- nf_tables_trans_destroy_flush_work();
++ nf_tables_trans_destroy_flush_work(net);
+ }
+
+ static int
+@@ -262,7 +262,7 @@ nft_target_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+
+ nft_target_set_tgchk_param(&par, ctx, target, info, &e, proto, inv);
+
+- nft_compat_wait_for_destructors();
++ nft_compat_wait_for_destructors(ctx->net);
+
+ ret = xt_check_target(&par, size, proto, inv);
+ if (ret < 0) {
+@@ -515,7 +515,7 @@ __nft_match_init(const struct nft_ctx *ctx, const struct nft_expr *expr,
+
+ nft_match_set_mtchk_param(&par, ctx, match, info, &e, proto, inv);
+
+- nft_compat_wait_for_destructors();
++ nft_compat_wait_for_destructors(ctx->net);
+
+ return xt_check_match(&par, size, proto, inv);
+ }
+diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c
+index 67a41cd2baaff0..a1b373b99f7b84 100644
+--- a/net/netfilter/nft_ct.c
++++ b/net/netfilter/nft_ct.c
+@@ -230,6 +230,7 @@ static void nft_ct_set_zone_eval(const struct nft_expr *expr,
+ enum ip_conntrack_info ctinfo;
+ u16 value = nft_reg_load16(&regs->data[priv->sreg]);
+ struct nf_conn *ct;
++ int oldcnt;
+
+ ct = nf_ct_get(skb, &ctinfo);
+ if (ct) /* already tracked */
+@@ -250,10 +251,11 @@ static void nft_ct_set_zone_eval(const struct nft_expr *expr,
+
+ ct = this_cpu_read(nft_ct_pcpu_template);
+
+- if (likely(refcount_read(&ct->ct_general.use) == 1)) {
+- refcount_inc(&ct->ct_general.use);
++ __refcount_inc(&ct->ct_general.use, &oldcnt);
++ if (likely(oldcnt == 1)) {
+ nf_ct_zone_add(ct, &zone);
+ } else {
++ refcount_dec(&ct->ct_general.use);
+ /* previous skb got queued to userspace, allocate temporary
+ * one until percpu template can be reused.
+ */
+diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c
+index b8d03364566c1f..c74012c9912554 100644
+--- a/net/netfilter/nft_exthdr.c
++++ b/net/netfilter/nft_exthdr.c
+@@ -85,7 +85,6 @@ static int ipv4_find_option(struct net *net, struct sk_buff *skb,
+ unsigned char optbuf[sizeof(struct ip_options) + 40];
+ struct ip_options *opt = (struct ip_options *)optbuf;
+ struct iphdr *iph, _iph;
+- unsigned int start;
+ bool found = false;
+ __be32 info;
+ int optlen;
+@@ -93,7 +92,6 @@ static int ipv4_find_option(struct net *net, struct sk_buff *skb,
+ iph = skb_header_pointer(skb, 0, sizeof(_iph), &_iph);
+ if (!iph)
+ return -EBADMSG;
+- start = sizeof(struct iphdr);
+
+ optlen = iph->ihl * 4 - (int)sizeof(struct iphdr);
+ if (optlen <= 0)
+@@ -103,7 +101,7 @@ static int ipv4_find_option(struct net *net, struct sk_buff *skb,
+ /* Copy the options since __ip_options_compile() modifies
+ * the options.
+ */
+- if (skb_copy_bits(skb, start, opt->__data, optlen))
++ if (skb_copy_bits(skb, sizeof(struct iphdr), opt->__data, optlen))
+ return -EBADMSG;
+ opt->optlen = optlen;
+
+@@ -118,18 +116,18 @@ static int ipv4_find_option(struct net *net, struct sk_buff *skb,
+ found = target == IPOPT_SSRR ? opt->is_strictroute :
+ !opt->is_strictroute;
+ if (found)
+- *offset = opt->srr + start;
++ *offset = opt->srr;
+ break;
+ case IPOPT_RR:
+ if (!opt->rr)
+ break;
+- *offset = opt->rr + start;
++ *offset = opt->rr;
+ found = true;
+ break;
+ case IPOPT_RA:
+ if (!opt->router_alert)
+ break;
+- *offset = opt->router_alert + start;
++ *offset = opt->router_alert;
+ found = true;
+ break;
+ default:
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index 3bb4810234aac2..e573e92213029c 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -1368,8 +1368,11 @@ bool ovs_ct_verify(struct net *net, enum ovs_key_attr attr)
+ attr == OVS_KEY_ATTR_CT_MARK)
+ return true;
+ if (IS_ENABLED(CONFIG_NF_CONNTRACK_LABELS) &&
+- attr == OVS_KEY_ATTR_CT_LABELS)
+- return true;
++ attr == OVS_KEY_ATTR_CT_LABELS) {
++ struct ovs_net *ovs_net = net_generic(net, ovs_net_id);
++
++ return ovs_net->xt_label;
++ }
+
+ return false;
+ }
+@@ -1378,7 +1381,6 @@ int ovs_ct_copy_action(struct net *net, const struct nlattr *attr,
+ const struct sw_flow_key *key,
+ struct sw_flow_actions **sfa, bool log)
+ {
+- unsigned int n_bits = sizeof(struct ovs_key_ct_labels) * BITS_PER_BYTE;
+ struct ovs_conntrack_info ct_info;
+ const char *helper = NULL;
+ u16 family;
+@@ -1407,12 +1409,6 @@ int ovs_ct_copy_action(struct net *net, const struct nlattr *attr,
+ return -ENOMEM;
+ }
+
+- if (nf_connlabels_get(net, n_bits - 1)) {
+- nf_ct_tmpl_free(ct_info.ct);
+- OVS_NLERR(log, "Failed to set connlabel length");
+- return -EOPNOTSUPP;
+- }
+-
+ if (ct_info.timeout[0]) {
+ if (nf_ct_set_timeout(net, ct_info.ct, family, key->ip.proto,
+ ct_info.timeout))
+@@ -1581,7 +1577,6 @@ static void __ovs_ct_free_action(struct ovs_conntrack_info *ct_info)
+ if (ct_info->ct) {
+ if (ct_info->timeout[0])
+ nf_ct_destroy_timeout(ct_info->ct);
+- nf_connlabels_put(nf_ct_net(ct_info->ct));
+ nf_ct_tmpl_free(ct_info->ct);
+ }
+ }
+@@ -2006,9 +2001,17 @@ struct genl_family dp_ct_limit_genl_family __ro_after_init = {
+
+ int ovs_ct_init(struct net *net)
+ {
+-#if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT)
++ unsigned int n_bits = sizeof(struct ovs_key_ct_labels) * BITS_PER_BYTE;
+ struct ovs_net *ovs_net = net_generic(net, ovs_net_id);
+
++ if (nf_connlabels_get(net, n_bits - 1)) {
++ ovs_net->xt_label = false;
++ OVS_NLERR(true, "Failed to set connlabel length");
++ } else {
++ ovs_net->xt_label = true;
++ }
++
++#if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT)
+ return ovs_ct_limit_init(net, ovs_net);
+ #else
+ return 0;
+@@ -2017,9 +2020,12 @@ int ovs_ct_init(struct net *net)
+
+ void ovs_ct_exit(struct net *net)
+ {
+-#if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT)
+ struct ovs_net *ovs_net = net_generic(net, ovs_net_id);
+
++#if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT)
+ ovs_ct_limit_exit(net, ovs_net);
+ #endif
++
++ if (ovs_net->xt_label)
++ nf_connlabels_put(net);
+ }
+diff --git a/net/openvswitch/datapath.h b/net/openvswitch/datapath.h
+index 365b9bb7f546e8..9ca6231ea64703 100644
+--- a/net/openvswitch/datapath.h
++++ b/net/openvswitch/datapath.h
+@@ -160,6 +160,9 @@ struct ovs_net {
+ #if IS_ENABLED(CONFIG_NETFILTER_CONNCOUNT)
+ struct ovs_ct_limit_info *ct_limit_info;
+ #endif
++
++ /* Module reference for configuring conntrack. */
++ bool xt_label;
+ };
+
+ /**
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 729ef582a3a8b8..0df89240b73361 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2317,14 +2317,10 @@ int ovs_nla_put_mask(const struct sw_flow *flow, struct sk_buff *skb)
+ OVS_FLOW_ATTR_MASK, true, skb);
+ }
+
+-#define MAX_ACTIONS_BUFSIZE (32 * 1024)
+-
+ static struct sw_flow_actions *nla_alloc_flow_actions(int size)
+ {
+ struct sw_flow_actions *sfa;
+
+- WARN_ON_ONCE(size > MAX_ACTIONS_BUFSIZE);
+-
+ sfa = kmalloc(kmalloc_size_roundup(sizeof(*sfa) + size), GFP_KERNEL);
+ if (!sfa)
+ return ERR_PTR(-ENOMEM);
+@@ -2480,15 +2476,6 @@ static struct nlattr *reserve_sfa_size(struct sw_flow_actions **sfa,
+
+ new_acts_size = max(next_offset + req_size, ksize(*sfa) * 2);
+
+- if (new_acts_size > MAX_ACTIONS_BUFSIZE) {
+- if ((next_offset + req_size) > MAX_ACTIONS_BUFSIZE) {
+- OVS_NLERR(log, "Flow action size exceeds max %u",
+- MAX_ACTIONS_BUFSIZE);
+- return ERR_PTR(-EMSGSIZE);
+- }
+- new_acts_size = MAX_ACTIONS_BUFSIZE;
+- }
+-
+ acts = nla_alloc_flow_actions(new_acts_size);
+ if (IS_ERR(acts))
+ return ERR_CAST(acts);
+@@ -3545,7 +3532,7 @@ int ovs_nla_copy_actions(struct net *net, const struct nlattr *attr,
+ int err;
+ u32 mpls_label_count = 0;
+
+- *sfa = nla_alloc_flow_actions(min(nla_len(attr), MAX_ACTIONS_BUFSIZE));
++ *sfa = nla_alloc_flow_actions(nla_len(attr));
+ if (IS_ERR(*sfa))
+ return PTR_ERR(*sfa);
+
+diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c
+index d26ac6bd9b1080..518f52f65a49d7 100644
+--- a/net/sched/sch_api.c
++++ b/net/sched/sch_api.c
+@@ -2254,6 +2254,12 @@ static int tc_ctl_tclass(struct sk_buff *skb, struct nlmsghdr *n,
+ return -EOPNOTSUPP;
+ }
+
++ /* Prevent creation of traffic classes with classid TC_H_ROOT */
++ if (clid == TC_H_ROOT) {
++ NL_SET_ERR_MSG(extack, "Cannot create traffic class with classid TC_H_ROOT");
++ return -EINVAL;
++ }
++
+ new_cl = cl;
+ err = -EOPNOTSUPP;
+ if (cops->change)
+diff --git a/net/sched/sch_gred.c b/net/sched/sch_gred.c
+index 79ba9dc702541e..43b0343a7cd0ca 100644
+--- a/net/sched/sch_gred.c
++++ b/net/sched/sch_gred.c
+@@ -913,7 +913,8 @@ static void gred_destroy(struct Qdisc *sch)
+ for (i = 0; i < table->DPs; i++)
+ gred_destroy_vq(table->tab[i]);
+
+- gred_offload(sch, TC_GRED_DESTROY);
++ if (table->opt)
++ gred_offload(sch, TC_GRED_DESTROY);
+ kfree(table->opt);
+ }
+
+diff --git a/net/sctp/stream.c b/net/sctp/stream.c
+index c241cc552e8d58..bfcff6d6a43866 100644
+--- a/net/sctp/stream.c
++++ b/net/sctp/stream.c
+@@ -735,7 +735,7 @@ struct sctp_chunk *sctp_process_strreset_tsnreq(
+ * value SHOULD be the smallest TSN not acknowledged by the
+ * receiver of the request plus 2^31.
+ */
+- init_tsn = sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map) + (1 << 31);
++ init_tsn = sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map) + (1U << 31);
+ sctp_tsnmap_init(&asoc->peer.tsn_map, SCTP_TSN_MAP_INITIAL,
+ init_tsn, GFP_ATOMIC);
+
+diff --git a/net/switchdev/switchdev.c b/net/switchdev/switchdev.c
+index 6488ead9e46459..4d5fbacef496fd 100644
+--- a/net/switchdev/switchdev.c
++++ b/net/switchdev/switchdev.c
+@@ -472,7 +472,7 @@ bool switchdev_port_obj_act_is_deferred(struct net_device *dev,
+ EXPORT_SYMBOL_GPL(switchdev_port_obj_act_is_deferred);
+
+ static ATOMIC_NOTIFIER_HEAD(switchdev_notif_chain);
+-static BLOCKING_NOTIFIER_HEAD(switchdev_blocking_notif_chain);
++static RAW_NOTIFIER_HEAD(switchdev_blocking_notif_chain);
+
+ /**
+ * register_switchdev_notifier - Register notifier
+@@ -518,17 +518,27 @@ EXPORT_SYMBOL_GPL(call_switchdev_notifiers);
+
+ int register_switchdev_blocking_notifier(struct notifier_block *nb)
+ {
+- struct blocking_notifier_head *chain = &switchdev_blocking_notif_chain;
++ struct raw_notifier_head *chain = &switchdev_blocking_notif_chain;
++ int err;
++
++ rtnl_lock();
++ err = raw_notifier_chain_register(chain, nb);
++ rtnl_unlock();
+
+- return blocking_notifier_chain_register(chain, nb);
++ return err;
+ }
+ EXPORT_SYMBOL_GPL(register_switchdev_blocking_notifier);
+
+ int unregister_switchdev_blocking_notifier(struct notifier_block *nb)
+ {
+- struct blocking_notifier_head *chain = &switchdev_blocking_notif_chain;
++ struct raw_notifier_head *chain = &switchdev_blocking_notif_chain;
++ int err;
+
+- return blocking_notifier_chain_unregister(chain, nb);
++ rtnl_lock();
++ err = raw_notifier_chain_unregister(chain, nb);
++ rtnl_unlock();
++
++ return err;
+ }
+ EXPORT_SYMBOL_GPL(unregister_switchdev_blocking_notifier);
+
+@@ -536,10 +546,11 @@ int call_switchdev_blocking_notifiers(unsigned long val, struct net_device *dev,
+ struct switchdev_notifier_info *info,
+ struct netlink_ext_ack *extack)
+ {
++ ASSERT_RTNL();
+ info->dev = dev;
+ info->extack = extack;
+- return blocking_notifier_call_chain(&switchdev_blocking_notif_chain,
+- val, info);
++ return raw_notifier_call_chain(&switchdev_blocking_notif_chain,
++ val, info);
+ }
+ EXPORT_SYMBOL_GPL(call_switchdev_blocking_notifiers);
+
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index 7d313fb66d76ba..1ce8fff2a28a4e 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -1198,6 +1198,13 @@ void cfg80211_dev_free(struct cfg80211_registered_device *rdev)
+ {
+ struct cfg80211_internal_bss *scan, *tmp;
+ struct cfg80211_beacon_registration *reg, *treg;
++ unsigned long flags;
++
++ spin_lock_irqsave(&rdev->wiphy_work_lock, flags);
++ WARN_ON(!list_empty(&rdev->wiphy_work_list));
++ spin_unlock_irqrestore(&rdev->wiphy_work_lock, flags);
++ cancel_work_sync(&rdev->wiphy_work);
++
+ rfkill_destroy(rdev->wiphy.rfkill);
+ list_for_each_entry_safe(reg, treg, &rdev->beacon_registrations, list) {
+ list_del(&reg->list);
+diff --git a/rust/kernel/alloc/allocator_test.rs b/rust/kernel/alloc/allocator_test.rs
+index e3240d16040bd9..c37d4c0c64e9f9 100644
+--- a/rust/kernel/alloc/allocator_test.rs
++++ b/rust/kernel/alloc/allocator_test.rs
+@@ -62,6 +62,24 @@ unsafe fn realloc(
+ ));
+ }
+
++ // ISO C (ISO/IEC 9899:2011) defines `aligned_alloc`:
++ //
++ // > The value of alignment shall be a valid alignment supported by the implementation
++ // [...].
++ //
++ // As an example of the "supported by the implementation" requirement, POSIX.1-2001 (IEEE
++ // 1003.1-2001) defines `posix_memalign`:
++ //
++ // > The value of alignment shall be a power of two multiple of sizeof (void *).
++ //
++ // and POSIX-based implementations of `aligned_alloc` inherit this requirement. At the time
++ // of writing, this is known to be the case on macOS (but not in glibc).
++ //
++ // Satisfy the stricter requirement to avoid spurious test failures on some platforms.
++ let min_align = core::mem::size_of::<*const crate::ffi::c_void>();
++ let layout = layout.align_to(min_align).map_err(|_| AllocError)?;
++ let layout = layout.pad_to_align();
++
+ // SAFETY: Returns either NULL or a pointer to a memory allocation that satisfies or
+ // exceeds the given size and alignment requirements.
+ let dst = unsafe { libc_aligned_alloc(layout.align(), layout.size()) } as *mut u8;
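The comment block added above explains why the test allocator rounds its alignment up: POSIX-derived aligned_alloc() implementations require a power-of-two multiple of sizeof(void *). A hedged C analogue of the same adjustment (helper name is made up; assumes align is already a power of two):

    #include <stdlib.h>

    /* Round align up to sizeof(void *) and pad size to a multiple of
     * align, mirroring layout.align_to()/pad_to_align() above, so strict
     * implementations (e.g. macOS) accept the request. */
    static void *compat_aligned_alloc(size_t align, size_t size)
    {
            size_t min_align = sizeof(void *);

            if (align < min_align)
                    align = min_align;
            size = (size + align - 1) & ~(align - 1);
            return aligned_alloc(align, size);
    }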
+diff --git a/rust/kernel/error.rs b/rust/kernel/error.rs
+index 5fece574ec023b..4911b294bfe662 100644
+--- a/rust/kernel/error.rs
++++ b/rust/kernel/error.rs
+@@ -104,7 +104,7 @@ pub fn from_errno(errno: crate::ffi::c_int) -> Error {
+ if errno < -(bindings::MAX_ERRNO as i32) || errno >= 0 {
+ // TODO: Make it a `WARN_ONCE` once available.
+ crate::pr_warn!(
+- "attempted to create `Error` with out of range `errno`: {}",
++ "attempted to create `Error` with out of range `errno`: {}\n",
+ errno
+ );
+ return code::EINVAL;
+diff --git a/rust/kernel/init.rs b/rust/kernel/init.rs
+index c962029f96e1f1..90bfb5cb26cd7a 100644
+--- a/rust/kernel/init.rs
++++ b/rust/kernel/init.rs
+@@ -259,7 +259,7 @@
+ /// },
+ /// }));
+ /// let foo: Pin<&mut Foo> = foo;
+-/// pr_info!("a: {}", &*foo.a.lock());
++/// pr_info!("a: {}\n", &*foo.a.lock());
+ /// ```
+ ///
+ /// # Syntax
+@@ -311,7 +311,7 @@ macro_rules! stack_pin_init {
+ /// }, GFP_KERNEL)?,
+ /// }));
+ /// let foo = foo.unwrap();
+-/// pr_info!("a: {}", &*foo.a.lock());
++/// pr_info!("a: {}\n", &*foo.a.lock());
+ /// ```
+ ///
+ /// ```rust,ignore
+@@ -336,7 +336,7 @@ macro_rules! stack_pin_init {
+ /// x: 64,
+ /// }, GFP_KERNEL)?,
+ /// }));
+-/// pr_info!("a: {}", &*foo.a.lock());
++/// pr_info!("a: {}\n", &*foo.a.lock());
+ /// # Ok::<_, AllocError>(())
+ /// ```
+ ///
+@@ -866,7 +866,7 @@ pub unsafe trait PinInit<T: ?Sized, E = Infallible>: Sized {
+ ///
+ /// impl Foo {
+ /// fn setup(self: Pin<&mut Self>) {
+- /// pr_info!("Setting up foo");
++ /// pr_info!("Setting up foo\n");
+ /// }
+ /// }
+ ///
+@@ -970,7 +970,7 @@ pub unsafe trait Init<T: ?Sized, E = Infallible>: PinInit<T, E> {
+ ///
+ /// impl Foo {
+ /// fn setup(&mut self) {
+- /// pr_info!("Setting up foo");
++ /// pr_info!("Setting up foo\n");
+ /// }
+ /// }
+ ///
+@@ -1318,7 +1318,7 @@ fn write_pin_init<E>(mut self, init: impl PinInit<T, E>) -> Result<Pin<Self::Ini
+ /// #[pinned_drop]
+ /// impl PinnedDrop for Foo {
+ /// fn drop(self: Pin<&mut Self>) {
+-/// pr_info!("Foo is being dropped!");
++/// pr_info!("Foo is being dropped!\n");
+ /// }
+ /// }
+ /// ```
+@@ -1400,17 +1400,14 @@ macro_rules! impl_zeroable {
+ // SAFETY: `T: Zeroable` and `UnsafeCell` is `repr(transparent)`.
+ {<T: ?Sized + Zeroable>} UnsafeCell<T>,
+
+- // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee).
++ // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee:
++ // https://doc.rust-lang.org/stable/std/option/index.html#representation).
+ Option<NonZeroU8>, Option<NonZeroU16>, Option<NonZeroU32>, Option<NonZeroU64>,
+ Option<NonZeroU128>, Option<NonZeroUsize>,
+ Option<NonZeroI8>, Option<NonZeroI16>, Option<NonZeroI32>, Option<NonZeroI64>,
+ Option<NonZeroI128>, Option<NonZeroIsize>,
+-
+- // SAFETY: All zeros is equivalent to `None` (option layout optimization guarantee).
+- //
+- // In this case we are allowed to use `T: ?Sized`, since all zeros is the `None` variant.
+- {<T: ?Sized>} Option<NonNull<T>>,
+- {<T: ?Sized>} Option<KBox<T>>,
++ {<T>} Option<NonNull<T>>,
++ {<T>} Option<KBox<T>>,
+
+ // SAFETY: `null` pointer is valid.
+ //
+diff --git a/rust/kernel/init/macros.rs b/rust/kernel/init/macros.rs
+index 1fd146a8324165..b7213962a6a5ac 100644
+--- a/rust/kernel/init/macros.rs
++++ b/rust/kernel/init/macros.rs
+@@ -45,7 +45,7 @@
+ //! #[pinned_drop]
+ //! impl PinnedDrop for Foo {
+ //! fn drop(self: Pin<&mut Self>) {
+-//! pr_info!("{self:p} is getting dropped.");
++//! pr_info!("{self:p} is getting dropped.\n");
+ //! }
+ //! }
+ //!
+@@ -412,7 +412,7 @@
+ //! #[pinned_drop]
+ //! impl PinnedDrop for Foo {
+ //! fn drop(self: Pin<&mut Self>) {
+-//! pr_info!("{self:p} is getting dropped.");
++//! pr_info!("{self:p} is getting dropped.\n");
+ //! }
+ //! }
+ //! ```
+@@ -423,7 +423,7 @@
+ //! // `unsafe`, full path and the token parameter are added, everything else stays the same.
+ //! unsafe impl ::kernel::init::PinnedDrop for Foo {
+ //! fn drop(self: Pin<&mut Self>, _: ::kernel::init::__internal::OnlyCallFromDrop) {
+-//! pr_info!("{self:p} is getting dropped.");
++//! pr_info!("{self:p} is getting dropped.\n");
+ //! }
+ //! }
+ //! ```
+diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
+index d764cb7ff5d785..904d241604db91 100644
+--- a/rust/kernel/lib.rs
++++ b/rust/kernel/lib.rs
+@@ -6,7 +6,7 @@
+ //! usage by Rust code in the kernel and is shared by all of them.
+ //!
+ //! In other words, all the rest of the Rust code in the kernel (e.g. kernel
+-//! modules written in Rust) depends on [`core`], [`alloc`] and this crate.
++//! modules written in Rust) depends on [`core`] and this crate.
+ //!
+ //! If you need a kernel C API that is not ported or wrapped yet here, then
+ //! do so first instead of bypassing this crate.
+diff --git a/rust/kernel/sync.rs b/rust/kernel/sync.rs
+index 0ab20975a3b5db..697649ddef72e4 100644
+--- a/rust/kernel/sync.rs
++++ b/rust/kernel/sync.rs
+@@ -27,28 +27,20 @@
+ unsafe impl Sync for LockClassKey {}
+
+ impl LockClassKey {
+- /// Creates a new lock class key.
+- pub const fn new() -> Self {
+- Self(Opaque::uninit())
+- }
+-
+ pub(crate) fn as_ptr(&self) -> *mut bindings::lock_class_key {
+ self.0.get()
+ }
+ }
+
+-impl Default for LockClassKey {
+- fn default() -> Self {
+- Self::new()
+- }
+-}
+-
+ /// Defines a new static lock class and returns a pointer to it.
+ #[doc(hidden)]
+ #[macro_export]
+ macro_rules! static_lock_class {
+ () => {{
+- static CLASS: $crate::sync::LockClassKey = $crate::sync::LockClassKey::new();
++ static CLASS: $crate::sync::LockClassKey =
++ // SAFETY: lockdep expects uninitialized memory when it's handed a statically allocated
++ // lock_class_key
++ unsafe { ::core::mem::MaybeUninit::uninit().assume_init() };
+ &CLASS
+ }};
+ }
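The change above drops LockClassKey::new() and leaves static keys as uninitialized memory, which matches what lockdep expects on the C side: a statically allocated struct lock_class_key is identified and registered by address on first use, not by its contents. A brief C counterpart of the same convention:

    #include <linux/lockdep.h>
    #include <linux/spinlock.h>

    /* Statically allocated, deliberately uninitialized: lockdep keys are
     * identified by address, not contents. */
    static struct lock_class_key example_key;
    static DEFINE_SPINLOCK(example_lock);

    static void example_assign_class(void)
    {
            lockdep_set_class(&example_lock, &example_key);
    }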
+diff --git a/scripts/generate_rust_analyzer.py b/scripts/generate_rust_analyzer.py
+index 09e1d166d8d236..d1f5adbf33f91c 100755
+--- a/scripts/generate_rust_analyzer.py
++++ b/scripts/generate_rust_analyzer.py
+@@ -49,14 +49,26 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+ }
+ })
+
+- # First, the ones in `rust/` since they are a bit special.
+- append_crate(
+- "core",
+- sysroot_src / "core" / "src" / "lib.rs",
+- [],
+- cfg=crates_cfgs.get("core", []),
+- is_workspace_member=False,
+- )
++ def append_sysroot_crate(
++ display_name,
++ deps,
++ cfg=[],
++ ):
++ append_crate(
++ display_name,
++ sysroot_src / display_name / "src" / "lib.rs",
++ deps,
++ cfg,
++ is_workspace_member=False,
++ )
++
++ # NB: sysroot crates reexport items from one another so setting up our transitive dependencies
++ # here is important for ensuring that rust-analyzer can resolve symbols. The sources of truth
++ # for this dependency graph are `(sysroot_src / crate / "Cargo.toml" for crate in crates)`.
++ append_sysroot_crate("core", [], cfg=crates_cfgs.get("core", []))
++ append_sysroot_crate("alloc", ["core"])
++ append_sysroot_crate("std", ["alloc", "core"])
++ append_sysroot_crate("proc_macro", ["core", "std"])
+
+ append_crate(
+ "compiler_builtins",
+@@ -67,7 +79,7 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+ append_crate(
+ "macros",
+ srctree / "rust" / "macros" / "lib.rs",
+- [],
++ ["std", "proc_macro"],
+ is_proc_macro=True,
+ )
+ crates[-1]["proc_macro_dylib_path"] = f"{objtree}/rust/libmacros.so"
+@@ -78,27 +90,28 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+ ["core", "compiler_builtins"],
+ )
+
+- append_crate(
+- "bindings",
+- srctree / "rust"/ "bindings" / "lib.rs",
+- ["core"],
+- cfg=cfg,
+- )
+- crates[-1]["env"]["OBJTREE"] = str(objtree.resolve(True))
+-
+- append_crate(
+- "kernel",
+- srctree / "rust" / "kernel" / "lib.rs",
+- ["core", "macros", "build_error", "bindings"],
+- cfg=cfg,
+- )
+- crates[-1]["source"] = {
+- "include_dirs": [
+- str(srctree / "rust" / "kernel"),
+- str(objtree / "rust")
+- ],
+- "exclude_dirs": [],
+- }
++ def append_crate_with_generated(
++ display_name,
++ deps,
++ ):
++ append_crate(
++ display_name,
++ srctree / "rust"/ display_name / "lib.rs",
++ deps,
++ cfg=cfg,
++ )
++ crates[-1]["env"]["OBJTREE"] = str(objtree.resolve(True))
++ crates[-1]["source"] = {
++ "include_dirs": [
++ str(srctree / "rust" / display_name),
++ str(objtree / "rust")
++ ],
++ "exclude_dirs": [],
++ }
++
++ append_crate_with_generated("bindings", ["core"])
++ append_crate_with_generated("uapi", ["core"])
++ append_crate_with_generated("kernel", ["core", "macros", "build_error", "bindings", "uapi"])
+
+ def is_root_crate(build_file, target):
+ try:
+diff --git a/scripts/rustdoc_test_gen.rs b/scripts/rustdoc_test_gen.rs
+index 5ebd42ae4a3fd1..76aaa8329413d8 100644
+--- a/scripts/rustdoc_test_gen.rs
++++ b/scripts/rustdoc_test_gen.rs
+@@ -15,8 +15,8 @@
+ //! - Test code should be able to define functions and call them, without having to carry
+ //! the context.
+ //!
+-//! - Later on, we may want to be able to test non-kernel code (e.g. `core`, `alloc` or
+-//! third-party crates) which likely use the standard library `assert*!` macros.
++//! - Later on, we may want to be able to test non-kernel code (e.g. `core` or third-party
++//! crates) which likely use the standard library `assert*!` macros.
+ //!
+ //! For this reason, instead of the passed context, `kunit_get_current_test()` is used instead
+ //! (i.e. `current->kunit_test`).
+diff --git a/sound/hda/intel-dsp-config.c b/sound/hda/intel-dsp-config.c
+index 9f849e05ce79f8..34825b2f3b1083 100644
+--- a/sound/hda/intel-dsp-config.c
++++ b/sound/hda/intel-dsp-config.c
+@@ -539,6 +539,11 @@ static const struct config_entry config_table[] = {
+ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
+ .device = PCI_DEVICE_ID_INTEL_HDA_PTL,
+ },
++ {
++ .flags = FLAG_SOF | FLAG_SOF_ONLY_IF_DMIC_OR_SOUNDWIRE,
++ .device = PCI_DEVICE_ID_INTEL_HDA_PTL_H,
++ },
++
+ #endif
+
+ };
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index ea52bc7370a58d..cb9925948175f9 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2508,6 +2508,8 @@ static const struct pci_device_id azx_ids[] = {
+ { PCI_DEVICE_DATA(INTEL, HDA_ARL, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_SKYLAKE) },
+ /* Panther Lake */
+ { PCI_DEVICE_DATA(INTEL, HDA_PTL, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_LNL) },
++ /* Panther Lake-H */
++ { PCI_DEVICE_DATA(INTEL, HDA_PTL_H, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_LNL) },
+ /* Apollolake (Broxton-P) */
+ { PCI_DEVICE_DATA(INTEL, HDA_APL, AZX_DRIVER_SKL | AZX_DCAPS_INTEL_BROXTON) },
+ /* Gemini-Lake */
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index b559f0d4e34885..3949e2614a6638 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -11064,6 +11064,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1d72, 0x1945, "Redmi G", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1d72, 0x1947, "RedmiBook Air", ALC255_FIXUP_XIAOMI_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1f66, 0x0105, "Ayaneo Portable Game Player", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x2014, 0x800a, "Positivo ARN50", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x2782, 0x0214, "VAIO VJFE-CL", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ SND_PCI_QUIRK(0x2782, 0x0228, "Infinix ZERO BOOK 13", ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13),
+ SND_PCI_QUIRK(0x2782, 0x0232, "CHUWI CoreBook XPro", ALC269VB_FIXUP_CHUWI_COREBOOK_XPRO),
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index b16587d8f97a89..a7637056972aab 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -248,6 +248,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "21M5"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "21M6"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+diff --git a/sound/soc/codecs/arizona.c b/sound/soc/codecs/arizona.c
+index 402b9a2ff02406..68cdb1027d0c05 100644
+--- a/sound/soc/codecs/arizona.c
++++ b/sound/soc/codecs/arizona.c
+@@ -967,7 +967,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w,
+ case ARIZONA_OUT3L_ENA_SHIFT:
+ case ARIZONA_OUT3R_ENA_SHIFT:
+ priv->out_up_pending++;
+- priv->out_up_delay += 17;
++ priv->out_up_delay += 17000;
+ break;
+ case ARIZONA_OUT4L_ENA_SHIFT:
+ case ARIZONA_OUT4R_ENA_SHIFT:
+@@ -977,7 +977,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w,
+ case WM8997:
+ break;
+ default:
+- priv->out_up_delay += 10;
++ priv->out_up_delay += 10000;
+ break;
+ }
+ break;
+@@ -999,7 +999,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w,
+ if (!priv->out_up_pending && priv->out_up_delay) {
+ dev_dbg(component->dev, "Power up delay: %d\n",
+ priv->out_up_delay);
+- msleep(priv->out_up_delay);
++ fsleep(priv->out_up_delay);
+ priv->out_up_delay = 0;
+ }
+ break;
+@@ -1017,7 +1017,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w,
+ case ARIZONA_OUT3L_ENA_SHIFT:
+ case ARIZONA_OUT3R_ENA_SHIFT:
+ priv->out_down_pending++;
+- priv->out_down_delay++;
++ priv->out_down_delay += 1000;
+ break;
+ case ARIZONA_OUT4L_ENA_SHIFT:
+ case ARIZONA_OUT4R_ENA_SHIFT:
+@@ -1028,10 +1028,10 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w,
+ break;
+ case WM8998:
+ case WM1814:
+- priv->out_down_delay += 5;
++ priv->out_down_delay += 5000;
+ break;
+ default:
+- priv->out_down_delay++;
++ priv->out_down_delay += 1000;
+ break;
+ }
+ break;
+@@ -1053,7 +1053,7 @@ int arizona_out_ev(struct snd_soc_dapm_widget *w,
+ if (!priv->out_down_pending && priv->out_down_delay) {
+ dev_dbg(component->dev, "Power down delay: %d\n",
+ priv->out_down_delay);
+- msleep(priv->out_down_delay);
++ fsleep(priv->out_down_delay);
+ priv->out_down_delay = 0;
+ }
+ break;
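The arizona hunk above multiplies the accumulated delays by 1000 because the sleep call changes from msleep() (milliseconds) to fsleep() (microseconds); fsleep() then picks udelay(), usleep_range() or msleep() based on the duration. A one-line illustration, with an invented wrapper name:

    #include <linux/delay.h>

    static void example_ramp_wait(unsigned int delay_us)
    {
            if (delay_us)
                    fsleep(delay_us);   /* 17000 us here = the old msleep(17) */
    }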
+diff --git a/sound/soc/codecs/cs42l43.c b/sound/soc/codecs/cs42l43.c
+index 8ec4083cd3b807..2c43e4a6751b10 100644
+--- a/sound/soc/codecs/cs42l43.c
++++ b/sound/soc/codecs/cs42l43.c
+@@ -1146,7 +1146,7 @@ static const struct snd_kcontrol_new cs42l43_controls[] = {
+
+ SOC_DOUBLE_R_SX_TLV("ADC Volume", CS42L43_ADC_B_CTRL1, CS42L43_ADC_B_CTRL2,
+ CS42L43_ADC_PGA_GAIN_SHIFT,
+- 0xF, 5, cs42l43_adc_tlv),
++ 0xF, 4, cs42l43_adc_tlv),
+
+ SOC_DOUBLE("PDM1 Invert Switch", CS42L43_DMIC_PDM_CTRL,
+ CS42L43_PDM1L_INV_SHIFT, CS42L43_PDM1R_INV_SHIFT, 1, 0),
+diff --git a/sound/soc/codecs/madera.c b/sound/soc/codecs/madera.c
+index b24d6472ad5fc9..fbfd7fb7f1685c 100644
+--- a/sound/soc/codecs/madera.c
++++ b/sound/soc/codecs/madera.c
+@@ -2322,10 +2322,10 @@ int madera_out_ev(struct snd_soc_dapm_widget *w,
+ case CS42L92:
+ case CS47L92:
+ case CS47L93:
+- out_up_delay = 6;
++ out_up_delay = 6000;
+ break;
+ default:
+- out_up_delay = 17;
++ out_up_delay = 17000;
+ break;
+ }
+
+@@ -2356,7 +2356,7 @@ int madera_out_ev(struct snd_soc_dapm_widget *w,
+ case MADERA_OUT3R_ENA_SHIFT:
+ priv->out_up_pending--;
+ if (!priv->out_up_pending) {
+- msleep(priv->out_up_delay);
++ fsleep(priv->out_up_delay);
+ priv->out_up_delay = 0;
+ }
+ break;
+@@ -2375,7 +2375,7 @@ int madera_out_ev(struct snd_soc_dapm_widget *w,
+ case MADERA_OUT3L_ENA_SHIFT:
+ case MADERA_OUT3R_ENA_SHIFT:
+ priv->out_down_pending++;
+- priv->out_down_delay++;
++ priv->out_down_delay += 1000;
+ break;
+ default:
+ break;
+@@ -2392,7 +2392,7 @@ int madera_out_ev(struct snd_soc_dapm_widget *w,
+ case MADERA_OUT3R_ENA_SHIFT:
+ priv->out_down_pending--;
+ if (!priv->out_down_pending) {
+- msleep(priv->out_down_delay);
++ fsleep(priv->out_down_delay);
+ priv->out_down_delay = 0;
+ }
+ break;
+diff --git a/sound/soc/codecs/rt722-sdca-sdw.c b/sound/soc/codecs/rt722-sdca-sdw.c
+index d5c985ff5ac553..5449d6b5cf3d11 100644
+--- a/sound/soc/codecs/rt722-sdca-sdw.c
++++ b/sound/soc/codecs/rt722-sdca-sdw.c
+@@ -86,6 +86,10 @@ static bool rt722_sdca_mbq_readable_register(struct device *dev, unsigned int re
+ case 0x6100067:
+ case 0x6100070 ... 0x610007c:
+ case 0x6100080:
++ case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_FU15, RT722_SDCA_CTL_FU_CH_GAIN,
++ CH_01) ...
++ SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_FU15, RT722_SDCA_CTL_FU_CH_GAIN,
++ CH_04):
+ case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_USER_FU1E, RT722_SDCA_CTL_FU_VOLUME,
+ CH_01):
+ case SDW_SDCA_CTL(FUNC_NUM_MIC_ARRAY, RT722_SDCA_ENT_USER_FU1E, RT722_SDCA_CTL_FU_VOLUME,
+diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c
+index d482cd194c08c5..58315eab492a16 100644
+--- a/sound/soc/codecs/tas2764.c
++++ b/sound/soc/codecs/tas2764.c
+@@ -365,7 +365,7 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ {
+ struct snd_soc_component *component = dai->component;
+ struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component);
+- u8 tdm_rx_start_slot = 0, asi_cfg_0 = 0, asi_cfg_1 = 0;
++ u8 tdm_rx_start_slot = 0, asi_cfg_0 = 0, asi_cfg_1 = 0, asi_cfg_4 = 0;
+ int ret;
+
+ switch (fmt & SND_SOC_DAIFMT_INV_MASK) {
+@@ -374,12 +374,14 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ fallthrough;
+ case SND_SOC_DAIFMT_NB_NF:
+ asi_cfg_1 = TAS2764_TDM_CFG1_RX_RISING;
++ asi_cfg_4 = TAS2764_TDM_CFG4_TX_FALLING;
+ break;
+ case SND_SOC_DAIFMT_IB_IF:
+ asi_cfg_0 ^= TAS2764_TDM_CFG0_FRAME_START;
+ fallthrough;
+ case SND_SOC_DAIFMT_IB_NF:
+ asi_cfg_1 = TAS2764_TDM_CFG1_RX_FALLING;
++ asi_cfg_4 = TAS2764_TDM_CFG4_TX_RISING;
+ break;
+ }
+
+@@ -389,6 +391,12 @@ static int tas2764_set_fmt(struct snd_soc_dai *dai, unsigned int fmt)
+ if (ret < 0)
+ return ret;
+
++ ret = snd_soc_component_update_bits(component, TAS2764_TDM_CFG4,
++ TAS2764_TDM_CFG4_TX_MASK,
++ asi_cfg_4);
++ if (ret < 0)
++ return ret;
++
+ switch (fmt & SND_SOC_DAIFMT_FORMAT_MASK) {
+ case SND_SOC_DAIFMT_I2S:
+ asi_cfg_0 ^= TAS2764_TDM_CFG0_FRAME_START;
+diff --git a/sound/soc/codecs/tas2764.h b/sound/soc/codecs/tas2764.h
+index 168af772a898ff..9490f2686e3891 100644
+--- a/sound/soc/codecs/tas2764.h
++++ b/sound/soc/codecs/tas2764.h
+@@ -25,7 +25,7 @@
+
+ /* Power Control */
+ #define TAS2764_PWR_CTRL TAS2764_REG(0X0, 0x02)
+-#define TAS2764_PWR_CTRL_MASK GENMASK(1, 0)
++#define TAS2764_PWR_CTRL_MASK GENMASK(2, 0)
+ #define TAS2764_PWR_CTRL_ACTIVE 0x0
+ #define TAS2764_PWR_CTRL_MUTE BIT(0)
+ #define TAS2764_PWR_CTRL_SHUTDOWN BIT(1)
+@@ -79,6 +79,12 @@
+ #define TAS2764_TDM_CFG3_RXS_SHIFT 0x4
+ #define TAS2764_TDM_CFG3_MASK GENMASK(3, 0)
+
++/* TDM Configuration Reg4 */
++#define TAS2764_TDM_CFG4 TAS2764_REG(0X0, 0x0d)
++#define TAS2764_TDM_CFG4_TX_MASK BIT(0)
++#define TAS2764_TDM_CFG4_TX_RISING 0x0
++#define TAS2764_TDM_CFG4_TX_FALLING BIT(0)
++
+ /* TDM Configuration Reg5 */
+ #define TAS2764_TDM_CFG5 TAS2764_REG(0X0, 0x0e)
+ #define TAS2764_TDM_CFG5_VSNS_MASK BIT(6)
+diff --git a/sound/soc/codecs/tas2770.c b/sound/soc/codecs/tas2770.c
+index 9f93b230652a5d..863c3f672ba98d 100644
+--- a/sound/soc/codecs/tas2770.c
++++ b/sound/soc/codecs/tas2770.c
+@@ -506,7 +506,7 @@ static int tas2770_codec_probe(struct snd_soc_component *component)
+ }
+
+ static DECLARE_TLV_DB_SCALE(tas2770_digital_tlv, 1100, 50, 0);
+-static DECLARE_TLV_DB_SCALE(tas2770_playback_volume, -12750, 50, 0);
++static DECLARE_TLV_DB_SCALE(tas2770_playback_volume, -10050, 50, 0);
+
+ static const struct snd_kcontrol_new tas2770_snd_controls[] = {
+ SOC_SINGLE_TLV("Speaker Playback Volume", TAS2770_PLAY_CFG_REG2,
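DECLARE_TLV_DB_SCALE(name, min, step, mute) takes min and step in 0.01 dB units, so the hunk above corrects the advertised floor from -127.5 dB (-12750) to -100.5 dB (-10050) while keeping 0.5 dB (50) steps. For instance:

    #include <sound/tlv.h>

    /* -10050 => -100.5 dB minimum, 50 => 0.5 dB per step, no mute value */
    static DECLARE_TLV_DB_SCALE(example_playback_tlv, -10050, 50, 0);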
+diff --git a/sound/soc/codecs/wm0010.c b/sound/soc/codecs/wm0010.c
+index edd2cb185c42cf..9e67fbfc2ccaf8 100644
+--- a/sound/soc/codecs/wm0010.c
++++ b/sound/soc/codecs/wm0010.c
+@@ -920,7 +920,7 @@ static int wm0010_spi_probe(struct spi_device *spi)
+ if (ret) {
+ dev_err(wm0010->dev, "Failed to set IRQ %d as wake source: %d\n",
+ irq, ret);
+- return ret;
++ goto free_irq;
+ }
+
+ if (spi->max_speed_hz)
+@@ -932,9 +932,18 @@ static int wm0010_spi_probe(struct spi_device *spi)
+ &soc_component_dev_wm0010, wm0010_dai,
+ ARRAY_SIZE(wm0010_dai));
+ if (ret < 0)
+- return ret;
++ goto disable_irq_wake;
+
+ return 0;
++
++disable_irq_wake:
++ irq_set_irq_wake(wm0010->irq, 0);
++
++free_irq:
++ if (wm0010->irq)
++ free_irq(wm0010->irq, wm0010);
++
++ return ret;
+ }
+
+ static void wm0010_spi_remove(struct spi_device *spi)
+diff --git a/sound/soc/codecs/wm5110.c b/sound/soc/codecs/wm5110.c
+index 502196253d42a9..64eee0d2347da1 100644
+--- a/sound/soc/codecs/wm5110.c
++++ b/sound/soc/codecs/wm5110.c
+@@ -302,7 +302,7 @@ static int wm5110_hp_pre_enable(struct snd_soc_dapm_widget *w)
+ } else {
+ wseq = wm5110_no_dre_left_enable;
+ nregs = ARRAY_SIZE(wm5110_no_dre_left_enable);
+- priv->out_up_delay += 10;
++ priv->out_up_delay += 10000;
+ }
+ break;
+ case ARIZONA_OUT1R_ENA_SHIFT:
+@@ -312,7 +312,7 @@ static int wm5110_hp_pre_enable(struct snd_soc_dapm_widget *w)
+ } else {
+ wseq = wm5110_no_dre_right_enable;
+ nregs = ARRAY_SIZE(wm5110_no_dre_right_enable);
+- priv->out_up_delay += 10;
++ priv->out_up_delay += 10000;
+ }
+ break;
+ default:
+@@ -338,7 +338,7 @@ static int wm5110_hp_pre_disable(struct snd_soc_dapm_widget *w)
+ snd_soc_component_update_bits(component,
+ ARIZONA_SPARE_TRIGGERS,
+ ARIZONA_WS_TRG1, 0);
+- priv->out_down_delay += 27;
++ priv->out_down_delay += 27000;
+ }
+ break;
+ case ARIZONA_OUT1R_ENA_SHIFT:
+@@ -350,7 +350,7 @@ static int wm5110_hp_pre_disable(struct snd_soc_dapm_widget *w)
+ snd_soc_component_update_bits(component,
+ ARIZONA_SPARE_TRIGGERS,
+ ARIZONA_WS_TRG2, 0);
+- priv->out_down_delay += 27;
++ priv->out_down_delay += 27000;
+ }
+ break;
+ default:
+diff --git a/sound/soc/generic/simple-card-utils.c b/sound/soc/generic/simple-card-utils.c
+index fedae7f6f70cc5..975ffd2cad292c 100644
+--- a/sound/soc/generic/simple-card-utils.c
++++ b/sound/soc/generic/simple-card-utils.c
+@@ -1097,6 +1097,7 @@ int graph_util_parse_dai(struct device *dev, struct device_node *ep,
+ args.np = ep;
+ dai = snd_soc_get_dai_via_args(&args);
+ if (dai) {
++ dlc->of_node = node;
+ dlc->dai_name = snd_soc_dai_name_get(dai);
+ dlc->dai_args = snd_soc_copy_dai_args(dev, &args);
+ if (!dlc->dai_args)
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 84fc35d88b9267..380fc3be8c932e 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -13,6 +13,7 @@
+ #include <linux/soundwire/sdw.h>
+ #include <linux/soundwire/sdw_type.h>
+ #include <linux/soundwire/sdw_intel.h>
++#include <sound/core.h>
+ #include <sound/soc-acpi.h>
+ #include "sof_sdw_common.h"
+ #include "../../codecs/rt711.h"
+@@ -685,6 +686,23 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+ {}
+ };
+
++static const struct snd_pci_quirk sof_sdw_ssid_quirk_table[] = {
++ SND_PCI_QUIRK(0x1043, 0x1e13, "ASUS Zenbook S14", SOC_SDW_CODEC_MIC),
++ {}
++};
++
++static void sof_sdw_check_ssid_quirk(const struct snd_soc_acpi_mach *mach)
++{
++ const struct snd_pci_quirk *quirk_entry;
++
++ quirk_entry = snd_pci_quirk_lookup_id(mach->mach_params.subsystem_vendor,
++ mach->mach_params.subsystem_device,
++ sof_sdw_ssid_quirk_table);
++
++ if (quirk_entry)
++ sof_sdw_quirk = quirk_entry->value;
++}
++
+ static struct snd_soc_dai_link_component platform_component[] = {
+ {
+ /* name might be overridden during probe */
+@@ -853,7 +871,7 @@ static int create_sdw_dailinks(struct snd_soc_card *card,
+
+ /* generate DAI links by each sdw link */
+ while (sof_dais->initialised) {
+- int current_be_id;
++ int current_be_id = 0;
+
+ ret = create_sdw_dailink(card, sof_dais, dai_links,
+ &current_be_id, codec_conf);
+@@ -1212,6 +1230,13 @@ static int mc_probe(struct platform_device *pdev)
+
+ snd_soc_card_set_drvdata(card, ctx);
+
++ if (mach->mach_params.subsystem_id_set) {
++ snd_soc_card_set_pci_ssid(card,
++ mach->mach_params.subsystem_vendor,
++ mach->mach_params.subsystem_device);
++ sof_sdw_check_ssid_quirk(mach);
++ }
++
+ dmi_check_system(sof_sdw_quirk_table);
+
+ if (quirk_override != -1) {
+@@ -1227,12 +1252,6 @@ static int mc_probe(struct platform_device *pdev)
+ for (i = 0; i < ctx->codec_info_list_count; i++)
+ codec_info_list[i].amp_num = 0;
+
+- if (mach->mach_params.subsystem_id_set) {
+- snd_soc_card_set_pci_ssid(card,
+- mach->mach_params.subsystem_vendor,
+- mach->mach_params.subsystem_device);
+- }
+-
+ ret = sof_card_dai_links_create(card);
+ if (ret < 0)
+ return ret;
+diff --git a/sound/soc/intel/common/soc-acpi-intel-mtl-match.c b/sound/soc/intel/common/soc-acpi-intel-mtl-match.c
+index fd02c864e25ef9..a3f79176563719 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-mtl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-mtl-match.c
+@@ -297,7 +297,7 @@ static const struct snd_soc_acpi_adr_device rt1316_3_single_adr[] = {
+
+ static const struct snd_soc_acpi_adr_device rt1318_1_single_adr[] = {
+ {
+- .adr = 0x000130025D131801,
++ .adr = 0x000130025D131801ull,
+ .num_endpoints = 1,
+ .endpoints = &single_endpoint,
+ .name_prefix = "rt1318-1"
+diff --git a/sound/soc/sh/rcar/core.c b/sound/soc/sh/rcar/core.c
+index eca5ce096e5457..e3ef9104b411c1 100644
+--- a/sound/soc/sh/rcar/core.c
++++ b/sound/soc/sh/rcar/core.c
+@@ -1758,20 +1758,6 @@ int rsnd_kctrl_accept_anytime(struct rsnd_dai_stream *io)
+ return 1;
+ }
+
+-int rsnd_kctrl_accept_runtime(struct rsnd_dai_stream *io)
+-{
+- struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io);
+- struct rsnd_priv *priv = rsnd_io_to_priv(io);
+- struct device *dev = rsnd_priv_to_dev(priv);
+-
+- if (!runtime) {
+- dev_warn(dev, "Can't update kctrl when idle\n");
+- return 0;
+- }
+-
+- return 1;
+-}
+-
+ struct rsnd_kctrl_cfg *rsnd_kctrl_init_m(struct rsnd_kctrl_cfg_m *cfg)
+ {
+ cfg->cfg.val = cfg->val;
+diff --git a/sound/soc/sh/rcar/rsnd.h b/sound/soc/sh/rcar/rsnd.h
+index 3c164d8e3b16bf..3f1100b98cdd33 100644
+--- a/sound/soc/sh/rcar/rsnd.h
++++ b/sound/soc/sh/rcar/rsnd.h
+@@ -742,7 +742,6 @@ struct rsnd_kctrl_cfg_s {
+ #define rsnd_kctrl_vals(x) ((x).val) /* = (x).cfg.val[0] */
+
+ int rsnd_kctrl_accept_anytime(struct rsnd_dai_stream *io);
+-int rsnd_kctrl_accept_runtime(struct rsnd_dai_stream *io);
+ struct rsnd_kctrl_cfg *rsnd_kctrl_init_m(struct rsnd_kctrl_cfg_m *cfg);
+ struct rsnd_kctrl_cfg *rsnd_kctrl_init_s(struct rsnd_kctrl_cfg_s *cfg);
+ int rsnd_kctrl_new(struct rsnd_mod *mod,
+diff --git a/sound/soc/sh/rcar/src.c b/sound/soc/sh/rcar/src.c
+index e7f86db0d94c3c..7d73b183bda685 100644
+--- a/sound/soc/sh/rcar/src.c
++++ b/sound/soc/sh/rcar/src.c
+@@ -35,6 +35,7 @@ struct rsnd_src {
+ struct rsnd_mod *dma;
+ struct rsnd_kctrl_cfg_s sen; /* sync convert enable */
+ struct rsnd_kctrl_cfg_s sync; /* sync convert */
++ u32 current_sync_rate;
+ int irq;
+ };
+
+@@ -100,7 +101,7 @@ static u32 rsnd_src_convert_rate(struct rsnd_dai_stream *io,
+ if (!rsnd_src_sync_is_enabled(mod))
+ return rsnd_io_converted_rate(io);
+
+- convert_rate = src->sync.val;
++ convert_rate = src->current_sync_rate;
+
+ if (!convert_rate)
+ convert_rate = rsnd_io_converted_rate(io);
+@@ -201,13 +202,73 @@ static const u32 chan222222[] = {
+ static void rsnd_src_set_convert_rate(struct rsnd_dai_stream *io,
+ struct rsnd_mod *mod)
+ {
++ struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io);
+ struct rsnd_priv *priv = rsnd_mod_to_priv(mod);
+- struct device *dev = rsnd_priv_to_dev(priv);
++ struct rsnd_src *src = rsnd_mod_to_src(mod);
++ u32 fin, fout, new_rate;
++ int inc, cnt, rate;
++ u64 base, val;
++
++ if (!runtime)
++ return;
++
++ if (!rsnd_src_sync_is_enabled(mod))
++ return;
++
++ fin = rsnd_src_get_in_rate(priv, io);
++ fout = rsnd_src_get_out_rate(priv, io);
++
++ new_rate = src->sync.val;
++
++ if (!new_rate)
++ new_rate = fout;
++
++ /* Do nothing if no diff */
++ if (new_rate == src->current_sync_rate)
++ return;
++
++ /*
++ * SRCm_IFSVR::INTIFS can change within 1%
++ * see
++ * SRCm_IFSVR::INTIFS Note
++ */
++ inc = fout / 100;
++ cnt = abs(new_rate - fout) / inc;
++ if (fout > new_rate)
++ inc *= -1;
++
++ /*
++ * Once the SRC is running, only SRC_IFSVR can be updated in
++ * Synchronous Mode
++ */
++ base = (u64)0x0400000 * fin;
++ rate = fout;
++ for (int i = 0; i < cnt; i++) {
++ val = base;
++ rate += inc;
++ do_div(val, rate);
++
++ rsnd_mod_write(mod, SRC_IFSVR, val);
++ }
++ val = base;
++ do_div(val, new_rate);
++
++ rsnd_mod_write(mod, SRC_IFSVR, val);
++
++ /* update current_sync_rate */
++ src->current_sync_rate = new_rate;
++}
++
++static void rsnd_src_init_convert_rate(struct rsnd_dai_stream *io,
++ struct rsnd_mod *mod)
++{
+ struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io);
++ struct rsnd_priv *priv = rsnd_mod_to_priv(mod);
++ struct device *dev = rsnd_priv_to_dev(priv);
+ int is_play = rsnd_io_is_play(io);
+ int use_src = 0;
+ u32 fin, fout;
+- u32 ifscr, fsrate, adinr;
++ u32 ifscr, adinr;
+ u32 cr, route;
+ u32 i_busif, o_busif, tmp;
+ const u32 *bsdsr_table;
+@@ -245,26 +306,15 @@ static void rsnd_src_set_convert_rate(struct rsnd_dai_stream *io,
+ adinr = rsnd_get_adinr_bit(mod, io) | chan;
+
+ /*
+- * SRC_IFSCR / SRC_IFSVR
+- */
+- ifscr = 0;
+- fsrate = 0;
+- if (use_src) {
+- u64 n;
+-
+- ifscr = 1;
+- n = (u64)0x0400000 * fin;
+- do_div(n, fout);
+- fsrate = n;
+- }
+-
+- /*
++ * SRC_IFSCR
+ * SRC_SRCCR / SRC_ROUTE_MODE0
+ */
++ ifscr = 0;
+ cr = 0x00011110;
+ route = 0x0;
+ if (use_src) {
+ route = 0x1;
++ ifscr = 0x1;
+
+ if (rsnd_src_sync_is_enabled(mod)) {
+ cr |= 0x1;
+@@ -335,7 +385,6 @@ static void rsnd_src_set_convert_rate(struct rsnd_dai_stream *io,
+ rsnd_mod_write(mod, SRC_SRCIR, 1); /* initialize */
+ rsnd_mod_write(mod, SRC_ADINR, adinr);
+ rsnd_mod_write(mod, SRC_IFSCR, ifscr);
+- rsnd_mod_write(mod, SRC_IFSVR, fsrate);
+ rsnd_mod_write(mod, SRC_SRCCR, cr);
+ rsnd_mod_write(mod, SRC_BSDSR, bsdsr_table[idx]);
+ rsnd_mod_write(mod, SRC_BSISR, bsisr_table[idx]);
+@@ -348,6 +397,9 @@ static void rsnd_src_set_convert_rate(struct rsnd_dai_stream *io,
+
+ rsnd_adg_set_src_timesel_gen2(mod, io, fin, fout);
+
++ /* update SRC_IFSVR */
++ rsnd_src_set_convert_rate(io, mod);
++
+ return;
+
+ convert_rate_err:
+@@ -467,7 +519,8 @@ static int rsnd_src_init(struct rsnd_mod *mod,
+ int ret;
+
+ /* reset sync convert_rate */
+- src->sync.val = 0;
++ src->sync.val =
++ src->current_sync_rate = 0;
+
+ ret = rsnd_mod_power_on(mod);
+ if (ret < 0)
+@@ -475,7 +528,7 @@ static int rsnd_src_init(struct rsnd_mod *mod,
+
+ rsnd_src_activation(mod);
+
+- rsnd_src_set_convert_rate(io, mod);
++ rsnd_src_init_convert_rate(io, mod);
+
+ rsnd_src_status_clear(mod);
+
+@@ -493,7 +546,8 @@ static int rsnd_src_quit(struct rsnd_mod *mod,
+ rsnd_mod_power_off(mod);
+
+ /* reset sync convert_rate */
+- src->sync.val = 0;
++ src->sync.val =
++ src->current_sync_rate = 0;
+
+ return 0;
+ }
+@@ -531,6 +585,22 @@ static irqreturn_t rsnd_src_interrupt(int irq, void *data)
+ return IRQ_HANDLED;
+ }
+
++static int rsnd_src_kctrl_accept_runtime(struct rsnd_dai_stream *io)
++{
++ struct snd_pcm_runtime *runtime = rsnd_io_to_runtime(io);
++
++ if (!runtime) {
++ struct rsnd_priv *priv = rsnd_io_to_priv(io);
++ struct device *dev = rsnd_priv_to_dev(priv);
++
++ dev_warn(dev, "\"SRC Out Rate\" can use during running\n");
++
++ return 0;
++ }
++
++ return 1;
++}
++
+ static int rsnd_src_probe_(struct rsnd_mod *mod,
+ struct rsnd_dai_stream *io,
+ struct rsnd_priv *priv)
+@@ -585,7 +655,7 @@ static int rsnd_src_pcm_new(struct rsnd_mod *mod,
+ "SRC Out Rate Switch" :
+ "SRC In Rate Switch",
+ rsnd_kctrl_accept_anytime,
+- rsnd_src_set_convert_rate,
++ rsnd_src_init_convert_rate,
+ &src->sen, 1);
+ if (ret < 0)
+ return ret;
+@@ -594,7 +664,7 @@ static int rsnd_src_pcm_new(struct rsnd_mod *mod,
+ rsnd_io_is_play(io) ?
+ "SRC Out Rate" :
+ "SRC In Rate",
+- rsnd_kctrl_accept_runtime,
++ rsnd_src_kctrl_accept_runtime,
+ rsnd_src_set_convert_rate,
+ &src->sync, 192000);
+
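The new rsnd_src_set_convert_rate() above walks SRC_IFSVR toward the requested rate in roughly 1% increments, since the hardware note cited in its comment only allows small changes while the SRC is running. A standalone sketch of the same arithmetic (assumes fout >= 100 so the step is nonzero, as the kernel code implicitly does):

    #include <stdint.h>
    #include <stdlib.h>

    static void step_ifsvr(uint32_t fin, uint32_t fout, uint32_t new_rate,
                           void (*write_ifsvr)(uint32_t))
    {
            uint64_t base = (uint64_t)0x0400000 * fin;
            int inc = fout / 100;                     /* ~1% of fout */
            int cnt = abs((int)new_rate - (int)fout) / inc;
            int rate = fout;

            if (fout > new_rate)
                    inc = -inc;
            for (int i = 0; i < cnt; i++) {
                    rate += inc;
                    write_ifsvr((uint32_t)(base / rate));
            }
            write_ifsvr((uint32_t)(base / new_rate)); /* land exactly */
    }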
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index b3d4e8ae07eff8..0c6424a1fcac04 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -336,7 +336,8 @@ static int rsnd_ssi_master_clk_start(struct rsnd_mod *mod,
+ return 0;
+
+ rate_err:
+- dev_err(dev, "unsupported clock rate\n");
++ dev_err(dev, "unsupported clock rate (%d)\n", rate);
++
+ return ret;
+ }
+
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 19928f098d8dcb..b0e4e4168f38d5 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -337,7 +337,7 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+ if (ucontrol->value.integer.value[0] < 0)
+ return -EINVAL;
+ val = ucontrol->value.integer.value[0];
+- if (mc->platform_max && ((int)val + min) > mc->platform_max)
++ if (mc->platform_max && val > mc->platform_max)
+ return -EINVAL;
+ if (val > max - min)
+ return -EINVAL;
+@@ -350,7 +350,7 @@ int snd_soc_put_volsw(struct snd_kcontrol *kcontrol,
+ if (ucontrol->value.integer.value[1] < 0)
+ return -EINVAL;
+ val2 = ucontrol->value.integer.value[1];
+- if (mc->platform_max && ((int)val2 + min) > mc->platform_max)
++ if (mc->platform_max && val2 > mc->platform_max)
+ return -EINVAL;
+ if (val2 > max - min)
+ return -EINVAL;
+@@ -503,17 +503,16 @@ int snd_soc_info_volsw_range(struct snd_kcontrol *kcontrol,
+ {
+ struct soc_mixer_control *mc =
+ (struct soc_mixer_control *)kcontrol->private_value;
+- int platform_max;
+- int min = mc->min;
++ int max;
+
+- if (!mc->platform_max)
+- mc->platform_max = mc->max;
+- platform_max = mc->platform_max;
++ max = mc->max - mc->min;
++ if (mc->platform_max && mc->platform_max < max)
++ max = mc->platform_max;
+
+ uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
+ uinfo->count = snd_soc_volsw_is_stereo(mc) ? 2 : 1;
+ uinfo->value.integer.min = 0;
+- uinfo->value.integer.max = platform_max - min;
++ uinfo->value.integer.max = max;
+
+ return 0;
+ }
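After the soc-ops hunk above, platform_max is compared against the raw control value (which is already offset by min), and the info callback reports 0..max with max clamped by platform_max. A compact sketch of the resulting range computation (free-standing function name invented for illustration):

    static int example_volsw_max(int mc_min, int mc_max, int platform_max)
    {
            int max = mc_max - mc_min;

            if (platform_max && platform_max < max)
                    max = platform_max;
            return max;     /* reported as uinfo->value.integer.max */
    }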
+diff --git a/sound/soc/sof/amd/acp-ipc.c b/sound/soc/sof/amd/acp-ipc.c
+index b44b1b1adb6ed9..cf3994a705f946 100644
+--- a/sound/soc/sof/amd/acp-ipc.c
++++ b/sound/soc/sof/amd/acp-ipc.c
+@@ -167,6 +167,7 @@ irqreturn_t acp_sof_ipc_irq_thread(int irq, void *context)
+
+ if (sdev->first_boot && sdev->fw_state != SOF_FW_BOOT_COMPLETE) {
+ acp_mailbox_read(sdev, sdev->dsp_box.offset, &status, sizeof(status));
++
+ if ((status & SOF_IPC_PANIC_MAGIC_MASK) == SOF_IPC_PANIC_MAGIC) {
+ snd_sof_dsp_panic(sdev, sdev->dsp_box.offset + sizeof(status),
+ true);
+@@ -188,13 +189,21 @@ irqreturn_t acp_sof_ipc_irq_thread(int irq, void *context)
+
+ dsp_ack = snd_sof_dsp_read(sdev, ACP_DSP_BAR, ACP_SCRATCH_REG_0 + dsp_ack_write);
+ if (dsp_ack) {
+- spin_lock_irq(&sdev->ipc_lock);
+- /* handle immediate reply from DSP core */
+- acp_dsp_ipc_get_reply(sdev);
+- snd_sof_ipc_reply(sdev, 0);
+- /* set the done bit */
+- acp_dsp_ipc_dsp_done(sdev);
+- spin_unlock_irq(&sdev->ipc_lock);
++ if (likely(sdev->fw_state == SOF_FW_BOOT_COMPLETE)) {
++ spin_lock_irq(&sdev->ipc_lock);
++
++ /* handle immediate reply from DSP core */
++ acp_dsp_ipc_get_reply(sdev);
++ snd_sof_ipc_reply(sdev, 0);
++ /* set the done bit */
++ acp_dsp_ipc_dsp_done(sdev);
++
++ spin_unlock_irq(&sdev->ipc_lock);
++ } else {
++ dev_dbg_ratelimited(sdev->dev, "IPC reply before FW_BOOT_COMPLETE: %#x\n",
++ dsp_ack);
++ }
++
+ ipc_irq = true;
+ }
+
+diff --git a/sound/soc/sof/amd/acp.c b/sound/soc/sof/amd/acp.c
+index 95d4762c9d9390..35eb23d2a056d8 100644
+--- a/sound/soc/sof/amd/acp.c
++++ b/sound/soc/sof/amd/acp.c
+@@ -27,6 +27,7 @@ MODULE_PARM_DESC(enable_fw_debug, "Enable Firmware debug");
+ static struct acp_quirk_entry quirk_valve_galileo = {
+ .signed_fw_image = true,
+ .skip_iram_dram_size_mod = true,
++ .post_fw_run_delay = true,
+ };
+
+ const struct dmi_system_id acp_sof_quirk_table[] = {
+diff --git a/sound/soc/sof/amd/acp.h b/sound/soc/sof/amd/acp.h
+index 800594440f7391..2a19d82d620022 100644
+--- a/sound/soc/sof/amd/acp.h
++++ b/sound/soc/sof/amd/acp.h
+@@ -220,6 +220,7 @@ struct sof_amd_acp_desc {
+ struct acp_quirk_entry {
+ bool signed_fw_image;
+ bool skip_iram_dram_size_mod;
++ bool post_fw_run_delay;
+ };
+
+ /* Common device data struct for ACP devices */
+diff --git a/sound/soc/sof/amd/vangogh.c b/sound/soc/sof/amd/vangogh.c
+index 61372958c09dc8..436f58be3a9f94 100644
+--- a/sound/soc/sof/amd/vangogh.c
++++ b/sound/soc/sof/amd/vangogh.c
+@@ -11,6 +11,7 @@
+ * Hardware interface for Audio DSP on Vangogh platform
+ */
+
++#include <linux/delay.h>
+ #include <linux/platform_device.h>
+ #include <linux/module.h>
+
+@@ -136,6 +137,20 @@ static struct snd_soc_dai_driver vangogh_sof_dai[] = {
+ },
+ };
+
++static int sof_vangogh_post_fw_run_delay(struct snd_sof_dev *sdev)
++{
++ /*
++ * Resuming from suspend may, in some cases, cause the DSP firmware
++ * to enter an unrecoverable faulty state. Briefly delaying any host
++ * to DSP transmission right after firmware boot completion seems
++ * to resolve the issue.
++ */
++ if (!sdev->first_boot)
++ usleep_range(100, 150);
++
++ return 0;
++}
++
+ /* Vangogh ops */
+ struct snd_sof_dsp_ops sof_vangogh_ops;
+ EXPORT_SYMBOL_NS(sof_vangogh_ops, SND_SOC_SOF_AMD_COMMON);
+@@ -157,6 +172,9 @@ int sof_vangogh_ops_init(struct snd_sof_dev *sdev)
+
+ if (quirks->signed_fw_image)
+ sof_vangogh_ops.load_firmware = acp_sof_load_signed_firmware;
++
++ if (quirks->post_fw_run_delay)
++ sof_vangogh_ops.post_fw_run = sof_vangogh_post_fw_run_delay;
+ }
+
+ return 0;
+diff --git a/sound/soc/sof/intel/hda-codec.c b/sound/soc/sof/intel/hda-codec.c
+index dc46888faa0dc9..c0c58b42971556 100644
+--- a/sound/soc/sof/intel/hda-codec.c
++++ b/sound/soc/sof/intel/hda-codec.c
+@@ -454,6 +454,7 @@ int hda_codec_i915_exit(struct snd_sof_dev *sdev)
+ }
+ EXPORT_SYMBOL_NS_GPL(hda_codec_i915_exit, SND_SOC_SOF_HDA_AUDIO_CODEC_I915);
+
++MODULE_SOFTDEP("pre: snd-hda-codec-hdmi");
+ #endif
+
+ MODULE_LICENSE("Dual BSD/GPL");
+diff --git a/sound/soc/sof/intel/hda.c b/sound/soc/sof/intel/hda.c
+index f10ed4d1025016..c924a998d6f90d 100644
+--- a/sound/soc/sof/intel/hda.c
++++ b/sound/soc/sof/intel/hda.c
+@@ -1305,22 +1305,8 @@ struct snd_soc_acpi_mach *hda_machine_select(struct snd_sof_dev *sdev)
+ /* report to machine driver if any DMICs are found */
+ mach->mach_params.dmic_num = check_dmic_num(sdev);
+
+- if (sdw_mach_found) {
+- /*
+- * DMICs use up to 4 pins and are typically pin-muxed with SoundWire
+- * link 2 and 3, or link 1 and 2, thus we only try to enable dmics
+- * if all conditions are true:
+- * a) 2 or fewer links are used by SoundWire
+- * b) the NHLT table reports the presence of microphones
+- */
+- if (hweight_long(mach->link_mask) <= 2)
+- dmic_fixup = true;
+- else
+- mach->mach_params.dmic_num = 0;
+- } else {
+- if (mach->tplg_quirk_mask & SND_SOC_ACPI_TPLG_INTEL_DMIC_NUMBER)
+- dmic_fixup = true;
+- }
++ if (sdw_mach_found || mach->tplg_quirk_mask & SND_SOC_ACPI_TPLG_INTEL_DMIC_NUMBER)
++ dmic_fixup = true;
+
+ if (tplg_fixup &&
+ dmic_fixup &&
+diff --git a/sound/soc/sof/intel/pci-ptl.c b/sound/soc/sof/intel/pci-ptl.c
+index 69195b5e7b1a92..f54d098d616f67 100644
+--- a/sound/soc/sof/intel/pci-ptl.c
++++ b/sound/soc/sof/intel/pci-ptl.c
+@@ -50,6 +50,7 @@ static const struct sof_dev_desc ptl_desc = {
+ /* PCI IDs */
+ static const struct pci_device_id sof_pci_ids[] = {
+ { PCI_DEVICE_DATA(INTEL, HDA_PTL, &ptl_desc) }, /* PTL */
++ { PCI_DEVICE_DATA(INTEL, HDA_PTL_H, &ptl_desc) }, /* PTL-H */
+ { 0, }
+ };
+ MODULE_DEVICE_TABLE(pci, sof_pci_ids);
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 1691aa6e6ce32d..3c3e5760e81b83 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -2061,6 +2061,14 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
+ reloc_addend(reloc) == pfunc->offset)
+ break;
+
++ /*
++ * Clang sometimes leaves dangling unused jump table entries
++ * which point to the end of the function. Ignore them.
++ */
++ if (reloc->sym->sec == pfunc->sec &&
++ reloc_addend(reloc) == pfunc->offset + pfunc->len)
++ goto next;
++
+ dest_insn = find_insn(file, reloc->sym->sec, reloc_addend(reloc));
+ if (!dest_insn)
+ break;
+@@ -2078,6 +2086,7 @@ static int add_jump_table(struct objtool_file *file, struct instruction *insn,
+ alt->insn = dest_insn;
+ alt->next = insn->alts;
+ insn->alts = alt;
++next:
+ prev_offset = reloc_offset(reloc);
+ }
+
+diff --git a/tools/sched_ext/include/scx/common.bpf.h b/tools/sched_ext/include/scx/common.bpf.h
+index f7206374a73dd8..75d6d61279d664 100644
+--- a/tools/sched_ext/include/scx/common.bpf.h
++++ b/tools/sched_ext/include/scx/common.bpf.h
+@@ -333,6 +333,17 @@ static __always_inline const struct cpumask *cast_mask(struct bpf_cpumask *mask)
+ return (const struct cpumask *)mask;
+ }
+
++/*
++ * Return true if task @p cannot migrate to a different CPU, false
++ * otherwise.
++ */
++static inline bool is_migration_disabled(const struct task_struct *p)
++{
++ if (bpf_core_field_exists(p->migration_disabled))
++ return p->migration_disabled;
++ return false;
++}
++
+ /* rcu */
+ void bpf_rcu_read_lock(void) __ksym;
+ void bpf_rcu_read_unlock(void) __ksym;
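The helper added above guards the field access with bpf_core_field_exists() so the BPF program still loads on kernels whose task_struct lacks migration_disabled; the CO-RE relocation resolves the check at load time, and the dsp_local_on selftest hunk at the end of this patch shows the call site. The general pattern, with an illustrative field chosen only as an example:

    /* Probe the field at load time, fall back when it is absent. */
    static inline int task_prio_or_default(const struct task_struct *p)
    {
            if (bpf_core_field_exists(p->prio))
                    return p->prio;
            return 0;
    }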
+diff --git a/tools/sound/dapm-graph b/tools/sound/dapm-graph
+index f14bdfedee8f11..b6196ee5065a4e 100755
+--- a/tools/sound/dapm-graph
++++ b/tools/sound/dapm-graph
+@@ -10,7 +10,7 @@ set -eu
+
+ STYLE_COMPONENT_ON="color=dodgerblue;style=bold"
+ STYLE_COMPONENT_OFF="color=gray40;style=filled;fillcolor=gray90"
+-STYLE_NODE_ON="shape=box,style=bold,color=green4"
++STYLE_NODE_ON="shape=box,style=bold,color=green4,fillcolor=white"
+ STYLE_NODE_OFF="shape=box,style=filled,color=gray30,fillcolor=gray95"
+
+ # Print usage and exit
+diff --git a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+index 82bfb266741cfa..fb08c565d6aada 100644
+--- a/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
++++ b/tools/testing/selftests/bpf/prog_tests/sockmap_basic.c
+@@ -492,8 +492,8 @@ static void test_sockmap_skb_verdict_shutdown(void)
+ if (!ASSERT_EQ(err, 1, "epoll_wait(fd)"))
+ goto out_close;
+
+- n = recv(c1, &b, 1, SOCK_NONBLOCK);
+- ASSERT_EQ(n, 0, "recv_timeout(fin)");
++ n = recv(c1, &b, 1, MSG_DONTWAIT);
++ ASSERT_EQ(n, 0, "recv(fin)");
+ out_close:
+ close(c1);
+ close(p1);
+@@ -546,7 +546,7 @@ static void test_sockmap_skb_verdict_fionread(bool pass_prog)
+ ASSERT_EQ(avail, expected, "ioctl(FIONREAD)");
+ /* On DROP test there will be no data to read */
+ if (pass_prog) {
+- recvd = recv_timeout(c1, &buf, sizeof(buf), SOCK_NONBLOCK, IO_TIMEOUT_SEC);
++ recvd = recv_timeout(c1, &buf, sizeof(buf), MSG_DONTWAIT, IO_TIMEOUT_SEC);
+ ASSERT_EQ(recvd, sizeof(buf), "recv_timeout(c0)");
+ }
+
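The sockmap fix above swaps SOCK_NONBLOCK for MSG_DONTWAIT: the former is a socket-type flag accepted by socket() and accept4(), while recv() takes per-call MSG_* flags, so passing SOCK_NONBLOCK there does not actually request a non-blocking read. A short illustration of where each flag belongs:

    #include <sys/socket.h>

    void example(int lfd, int fd, char *buf, size_t len)
    {
            /* socket-type flag: makes the accepted socket non-blocking */
            int c = accept4(lfd, NULL, NULL, SOCK_NONBLOCK);

            /* per-call flag: this single recv() does not block */
            (void)recv(fd, buf, len, MSG_DONTWAIT);
            (void)c;
    }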
+diff --git a/tools/testing/selftests/cgroup/test_cpuset_v1_hp.sh b/tools/testing/selftests/cgroup/test_cpuset_v1_hp.sh
+index 3f45512fb512eb..7406c24be1ac99 100755
+--- a/tools/testing/selftests/cgroup/test_cpuset_v1_hp.sh
++++ b/tools/testing/selftests/cgroup/test_cpuset_v1_hp.sh
+@@ -1,4 +1,4 @@
+-#!/bin/sh
++#!/bin/bash
+ # SPDX-License-Identifier: GPL-2.0
+ #
+ # Test the special cpuset v1 hotplug case where a cpuset become empty of
+diff --git a/tools/testing/selftests/drivers/net/bonding/bond_options.sh b/tools/testing/selftests/drivers/net/bonding/bond_options.sh
+index edc56e2cc60690..7bc148889ca729 100755
+--- a/tools/testing/selftests/drivers/net/bonding/bond_options.sh
++++ b/tools/testing/selftests/drivers/net/bonding/bond_options.sh
+@@ -11,8 +11,8 @@ ALL_TESTS="
+
+ lib_dir=$(dirname "$0")
+ source ${lib_dir}/bond_topo_3d1c.sh
+-c_maddr="33:33:00:00:00:10"
+-g_maddr="33:33:00:00:02:54"
++c_maddr="33:33:ff:00:00:10"
++g_maddr="33:33:ff:00:02:54"
+
+ skip_prio()
+ {
+diff --git a/tools/testing/selftests/filesystems/statmount/statmount_test.c b/tools/testing/selftests/filesystems/statmount/statmount_test.c
+index c773334bbcc95b..550e5d762c23f4 100644
+--- a/tools/testing/selftests/filesystems/statmount/statmount_test.c
++++ b/tools/testing/selftests/filesystems/statmount/statmount_test.c
+@@ -383,6 +383,10 @@ static void test_statmount_mnt_point(void)
+ return;
+ }
+
++ if (!(sm->mask & STATMOUNT_MNT_POINT)) {
++ ksft_test_result_fail("missing STATMOUNT_MNT_POINT in mask\n");
++ return;
++ }
+ if (strcmp(sm->str + sm->mnt_point, "/") != 0) {
+ ksft_test_result_fail("unexpected mount point: '%s' != '/'\n",
+ sm->str + sm->mnt_point);
+@@ -408,6 +412,10 @@ static void test_statmount_mnt_root(void)
+ strerror(errno));
+ return;
+ }
++ if (!(sm->mask & STATMOUNT_MNT_ROOT)) {
++ ksft_test_result_fail("missing STATMOUNT_MNT_ROOT in mask\n");
++ return;
++ }
+ mnt_root = sm->str + sm->mnt_root;
+ last_root = strrchr(mnt_root, '/');
+ if (last_root)
+@@ -437,6 +445,10 @@ static void test_statmount_fs_type(void)
+ strerror(errno));
+ return;
+ }
++ if (!(sm->mask & STATMOUNT_FS_TYPE)) {
++ ksft_test_result_fail("missing STATMOUNT_FS_TYPE in mask\n");
++ return;
++ }
+ fs_type = sm->str + sm->fs_type;
+ for (s = known_fs; s != NULL; s++) {
+ if (strcmp(fs_type, *s) == 0)
+@@ -464,6 +476,11 @@ static void test_statmount_mnt_opts(void)
+ return;
+ }
+
++ if (!(sm->mask & STATMOUNT_MNT_BASIC)) {
++ ksft_test_result_fail("missing STATMOUNT_MNT_BASIC in mask\n");
++ return;
++ }
++
+ while (getline(&line, &len, f_mountinfo) != -1) {
+ int i;
+ char *p, *p2;
+@@ -514,7 +531,10 @@ static void test_statmount_mnt_opts(void)
+ if (p2)
+ *p2 = '\0';
+
+- statmount_opts = sm->str + sm->mnt_opts;
++ if (sm->mask & STATMOUNT_MNT_OPTS)
++ statmount_opts = sm->str + sm->mnt_opts;
++ else
++ statmount_opts = "";
+ if (strcmp(statmount_opts, p) != 0)
+ ksft_test_result_fail(
+ "unexpected mount options: '%s' != '%s'\n",
+diff --git a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
+index c9a2da0575a0fa..6dcf7e6104afb0 100644
+--- a/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
++++ b/tools/testing/selftests/sched_ext/dsp_local_on.bpf.c
+@@ -43,7 +43,7 @@ void BPF_STRUCT_OPS(dsp_local_on_dispatch, s32 cpu, struct task_struct *prev)
+ if (!p)
+ return;
+
+- if (p->nr_cpus_allowed == nr_cpus)
++ if (p->nr_cpus_allowed == nr_cpus && !is_migration_disabled(p))
+ target = bpf_get_prandom_u32() % nr_cpus;
+ else
+ target = scx_bpf_task_cpu(p);
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-03-29 10:47 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-03-29 10:47 UTC (permalink / raw
To: gentoo-commits
commit: 1a34b289a2641ba3e4ee1a58335f9185b4ef765a
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 29 10:46:47 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Mar 29 10:46:47 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1a34b289
Linux patch 6.12.21
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1020_linux-6.12.21.patch | 4384 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 4388 insertions(+)
diff --git a/0000_README b/0000_README
index ecb0495f..accea09e 100644
--- a/0000_README
+++ b/0000_README
@@ -123,6 +123,10 @@ Patch: 1019_linux-6.12.20.patch
From: https://www.kernel.org
Desc: Linux 6.12.20
+Patch: 1020_linux-6.12.21.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.21
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1020_linux-6.12.21.patch b/1020_linux-6.12.21.patch
new file mode 100644
index 00000000..5eae937b
--- /dev/null
+++ b/1020_linux-6.12.21.patch
@@ -0,0 +1,4384 @@
+diff --git a/Documentation/devicetree/bindings/net/can/renesas,rcar-canfd.yaml b/Documentation/devicetree/bindings/net/can/renesas,rcar-canfd.yaml
+index 7c5ac5d2e880bb..f6884f6e59e743 100644
+--- a/Documentation/devicetree/bindings/net/can/renesas,rcar-canfd.yaml
++++ b/Documentation/devicetree/bindings/net/can/renesas,rcar-canfd.yaml
+@@ -170,7 +170,7 @@ allOf:
+ const: renesas,r8a779h0-canfd
+ then:
+ patternProperties:
+- "^channel[5-7]$": false
++ "^channel[4-7]$": false
+ else:
+ if:
+ not:
+diff --git a/Makefile b/Makefile
+index ca000bd227be66..a646151342b832 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 20
++SUBLEVEL = 21
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/boot/dts/broadcom/bcm2711-rpi.dtsi b/arch/arm/boot/dts/broadcom/bcm2711-rpi.dtsi
+index 6bf4241fe3b737..c78ed064d1667d 100644
+--- a/arch/arm/boot/dts/broadcom/bcm2711-rpi.dtsi
++++ b/arch/arm/boot/dts/broadcom/bcm2711-rpi.dtsi
+@@ -1,7 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include "bcm2835-rpi.dtsi"
+
+-#include <dt-bindings/power/raspberrypi-power.h>
+ #include <dt-bindings/reset/raspberrypi,firmware-reset.h>
+
+ / {
+@@ -101,7 +100,3 @@ &v3d {
+ &vchiq {
+ interrupts = <GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>;
+ };
+-
+-&xhci {
+- power-domains = <&power RPI_POWER_DOMAIN_USB>;
+-};
+diff --git a/arch/arm/boot/dts/broadcom/bcm2711.dtsi b/arch/arm/boot/dts/broadcom/bcm2711.dtsi
+index e4e42af21ef3a4..c06d9f5e53c804 100644
+--- a/arch/arm/boot/dts/broadcom/bcm2711.dtsi
++++ b/arch/arm/boot/dts/broadcom/bcm2711.dtsi
+@@ -134,7 +134,7 @@ uart2: serial@7e201400 {
+ clocks = <&clocks BCM2835_CLOCK_UART>,
+ <&clocks BCM2835_CLOCK_VPU>;
+ clock-names = "uartclk", "apb_pclk";
+- arm,primecell-periphid = <0x00241011>;
++ arm,primecell-periphid = <0x00341011>;
+ status = "disabled";
+ };
+
+@@ -145,7 +145,7 @@ uart3: serial@7e201600 {
+ clocks = <&clocks BCM2835_CLOCK_UART>,
+ <&clocks BCM2835_CLOCK_VPU>;
+ clock-names = "uartclk", "apb_pclk";
+- arm,primecell-periphid = <0x00241011>;
++ arm,primecell-periphid = <0x00341011>;
+ status = "disabled";
+ };
+
+@@ -156,7 +156,7 @@ uart4: serial@7e201800 {
+ clocks = <&clocks BCM2835_CLOCK_UART>,
+ <&clocks BCM2835_CLOCK_VPU>;
+ clock-names = "uartclk", "apb_pclk";
+- arm,primecell-periphid = <0x00241011>;
++ arm,primecell-periphid = <0x00341011>;
+ status = "disabled";
+ };
+
+@@ -167,7 +167,7 @@ uart5: serial@7e201a00 {
+ clocks = <&clocks BCM2835_CLOCK_UART>,
+ <&clocks BCM2835_CLOCK_VPU>;
+ clock-names = "uartclk", "apb_pclk";
+- arm,primecell-periphid = <0x00241011>;
++ arm,primecell-periphid = <0x00341011>;
+ status = "disabled";
+ };
+
+@@ -451,8 +451,6 @@ IRQ_TYPE_LEVEL_LOW)>,
+ IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(4) |
+ IRQ_TYPE_LEVEL_LOW)>;
+- /* This only applies to the ARMv7 stub */
+- arm,cpu-registers-not-fw-configured;
+ };
+
+ cpus: cpus {
+@@ -610,6 +608,7 @@ xhci: usb@7e9c0000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ interrupts = <GIC_SPI 176 IRQ_TYPE_LEVEL_HIGH>;
++ power-domains = <&pm BCM2835_POWER_DOMAIN_USB>;
+ /* DWC2 and this IP block share the same USB PHY,
+ * enabling both at the same time results in lockups.
+ * So keep this node disabled and let the bootloader
+@@ -1177,6 +1176,7 @@ &txp {
+ };
+
+ &uart0 {
++ arm,primecell-periphid = <0x00341011>;
+ interrupts = <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+diff --git a/arch/arm/boot/dts/broadcom/bcm4709-asus-rt-ac3200.dts b/arch/arm/boot/dts/broadcom/bcm4709-asus-rt-ac3200.dts
+index 53cb0c58f6d057..3da2daee0c849d 100644
+--- a/arch/arm/boot/dts/broadcom/bcm4709-asus-rt-ac3200.dts
++++ b/arch/arm/boot/dts/broadcom/bcm4709-asus-rt-ac3200.dts
+@@ -124,19 +124,19 @@ port@0 {
+ };
+
+ port@1 {
+- label = "lan1";
++ label = "lan4";
+ };
+
+ port@2 {
+- label = "lan2";
++ label = "lan3";
+ };
+
+ port@3 {
+- label = "lan3";
++ label = "lan2";
+ };
+
+ port@4 {
+- label = "lan4";
++ label = "lan1";
+ };
+ };
+ };
+diff --git a/arch/arm/boot/dts/broadcom/bcm47094-asus-rt-ac5300.dts b/arch/arm/boot/dts/broadcom/bcm47094-asus-rt-ac5300.dts
+index 6c666dc7ad23ef..01ec8c03686a66 100644
+--- a/arch/arm/boot/dts/broadcom/bcm47094-asus-rt-ac5300.dts
++++ b/arch/arm/boot/dts/broadcom/bcm47094-asus-rt-ac5300.dts
+@@ -126,11 +126,11 @@ &srab {
+
+ ports {
+ port@0 {
+- label = "lan4";
++ label = "wan";
+ };
+
+ port@1 {
+- label = "lan3";
++ label = "lan1";
+ };
+
+ port@2 {
+@@ -138,11 +138,11 @@ port@2 {
+ };
+
+ port@3 {
+- label = "lan1";
++ label = "lan3";
+ };
+
+ port@4 {
+- label = "wan";
++ label = "lan4";
+ };
+ };
+ };
+diff --git a/arch/arm/boot/dts/nxp/imx/imx6qdl-apalis.dtsi b/arch/arm/boot/dts/nxp/imx/imx6qdl-apalis.dtsi
+index edf55760a5c1a2..1a63a6add43988 100644
+--- a/arch/arm/boot/dts/nxp/imx/imx6qdl-apalis.dtsi
++++ b/arch/arm/boot/dts/nxp/imx/imx6qdl-apalis.dtsi
+@@ -108,6 +108,11 @@ lvds_panel_in: endpoint {
+ };
+ };
+
++ poweroff {
++ compatible = "regulator-poweroff";
++ cpu-supply = <&vgen2_reg>;
++ };
++
+ reg_module_3v3: regulator-module-3v3 {
+ compatible = "regulator-fixed";
+ regulator-always-on;
+@@ -236,10 +241,6 @@ &can2 {
+ status = "disabled";
+ };
+
+-&clks {
+- fsl,pmic-stby-poweroff;
+-};
+-
+ /* Apalis SPI1 */
+ &ecspi1 {
+ cs-gpios = <&gpio5 25 GPIO_ACTIVE_LOW>;
+@@ -527,7 +528,6 @@ &i2c2 {
+
+ pmic: pmic@8 {
+ compatible = "fsl,pfuze100";
+- fsl,pmic-stby-poweroff;
+ reg = <0x08>;
+
+ regulators {
+diff --git a/arch/arm/mach-davinci/Kconfig b/arch/arm/mach-davinci/Kconfig
+index 2a8a9fe46586d2..3fa15f3422409a 100644
+--- a/arch/arm/mach-davinci/Kconfig
++++ b/arch/arm/mach-davinci/Kconfig
+@@ -27,6 +27,7 @@ config ARCH_DAVINCI_DA830
+
+ config ARCH_DAVINCI_DA850
+ bool "DA850/OMAP-L138/AM18x based system"
++ select ARCH_DAVINCI_DA8XX
+ select DAVINCI_CP_INTC
+
+ config ARCH_DAVINCI_DA8XX
+diff --git a/arch/arm/mach-omap1/Kconfig b/arch/arm/mach-omap1/Kconfig
+index a643b71e30a355..08ec6bd84ada56 100644
+--- a/arch/arm/mach-omap1/Kconfig
++++ b/arch/arm/mach-omap1/Kconfig
+@@ -8,6 +8,7 @@ menuconfig ARCH_OMAP1
+ select ARCH_OMAP
+ select CLKSRC_MMIO
+ select FORCE_PCI if PCCARD
++ select GENERIC_IRQ_CHIP
+ select GPIOLIB
+ help
+ Support for older TI OMAP1 (omap7xx, omap15xx or omap16xx)
+diff --git a/arch/arm/mach-shmobile/headsmp.S b/arch/arm/mach-shmobile/headsmp.S
+index a956b489b6ea12..2bc7e73a8582d2 100644
+--- a/arch/arm/mach-shmobile/headsmp.S
++++ b/arch/arm/mach-shmobile/headsmp.S
+@@ -136,6 +136,7 @@ ENDPROC(shmobile_smp_sleep)
+ .long shmobile_smp_arg - 1b
+
+ .bss
++ .align 2
+ .globl shmobile_smp_mpidr
+ shmobile_smp_mpidr:
+ .space NR_CPUS * 4
+diff --git a/arch/arm64/boot/dts/broadcom/bcm2712.dtsi b/arch/arm64/boot/dts/broadcom/bcm2712.dtsi
+index 26a29e5e5078d5..447bfa060918ca 100644
+--- a/arch/arm64/boot/dts/broadcom/bcm2712.dtsi
++++ b/arch/arm64/boot/dts/broadcom/bcm2712.dtsi
+@@ -232,7 +232,7 @@ uart10: serial@7d001000 {
+ interrupts = <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk_uart>, <&clk_vpu>;
+ clock-names = "uartclk", "apb_pclk";
+- arm,primecell-periphid = <0x00241011>;
++ arm,primecell-periphid = <0x00341011>;
+ status = "disabled";
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mm-verdin-dahlia.dtsi b/arch/arm64/boot/dts/freescale/imx8mm-verdin-dahlia.dtsi
+index ce20de25980545..3d0b1496813104 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mm-verdin-dahlia.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mm-verdin-dahlia.dtsi
+@@ -16,10 +16,10 @@ sound_card: sound-card {
+ "Headphone Jack", "HPOUTR",
+ "IN2L", "Line In Jack",
+ "IN2R", "Line In Jack",
+- "Headphone Jack", "MICBIAS",
+- "IN1L", "Headphone Jack";
++ "Microphone Jack", "MICBIAS",
++ "IN1L", "Microphone Jack";
+ simple-audio-card,widgets =
+- "Microphone", "Headphone Jack",
++ "Microphone", "Microphone Jack",
+ "Headphone", "Headphone Jack",
+ "Line", "Line In Jack";
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql.dtsi
+index 336785a9fba896..3ddc5aaa7c5f0c 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-tqma8mpql.dtsi
+@@ -1,7 +1,8 @@
+ // SPDX-License-Identifier: GPL-2.0-or-later OR MIT
+ /*
+- * Copyright 2021-2022 TQ-Systems GmbH
+- * Author: Alexander Stein <alexander.stein@tq-group.com>
++ * Copyright 2021-2025 TQ-Systems GmbH <linux@ew.tq-group.com>,
++ * D-82229 Seefeld, Germany.
++ * Author: Alexander Stein
+ */
+
+ #include "imx8mp.dtsi"
+@@ -23,15 +24,6 @@ reg_vcc3v3: regulator-vcc3v3 {
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+-
+- /* e-MMC IO, needed for HS modes */
+- reg_vcc1v8: regulator-vcc1v8 {
+- compatible = "regulator-fixed";
+- regulator-name = "VCC1V8";
+- regulator-min-microvolt = <1800000>;
+- regulator-max-microvolt = <1800000>;
+- regulator-always-on;
+- };
+ };
+
+ &A53_0 {
+@@ -197,7 +189,7 @@ &usdhc3 {
+ no-sd;
+ no-sdio;
+ vmmc-supply = <&reg_vcc3v3>;
+- vqmmc-supply = <&reg_vcc1v8>;
++ vqmmc-supply = <&buck5_reg>;
+ status = "okay";
+ };
+
+diff --git a/arch/arm64/boot/dts/freescale/imx8mp-verdin-dahlia.dtsi b/arch/arm64/boot/dts/freescale/imx8mp-verdin-dahlia.dtsi
+index da8902c5f7e5b2..1493319aa748d0 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mp-verdin-dahlia.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mp-verdin-dahlia.dtsi
+@@ -28,10 +28,10 @@ sound {
+ "Headphone Jack", "HPOUTR",
+ "IN2L", "Line In Jack",
+ "IN2R", "Line In Jack",
+- "Headphone Jack", "MICBIAS",
+- "IN1L", "Headphone Jack";
++ "Microphone Jack", "MICBIAS",
++ "IN1L", "Microphone Jack";
+ simple-audio-card,widgets =
+- "Microphone", "Headphone Jack",
++ "Microphone", "Microphone Jack",
+ "Headphone", "Headphone Jack",
+ "Line", "Line In Jack";
+
+diff --git a/arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts b/arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts
+index 0905668cbe1f4e..3d5e81a0afdc57 100644
+--- a/arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts
++++ b/arch/arm64/boot/dts/rockchip/px30-ringneck-haikou.dts
+@@ -194,6 +194,13 @@ sd_card_led_pin: sd-card-led-pin {
+ <3 RK_PB3 RK_FUNC_GPIO &pcfg_pull_none>;
+ };
+ };
++
++ uart {
++ uart5_rts_pin: uart5-rts-pin {
++ rockchip,pins =
++ <0 RK_PB5 RK_FUNC_GPIO &pcfg_pull_none>;
++ };
++ };
+ };
+
+ &pwm0 {
+@@ -222,10 +229,15 @@ &u2phy_otg {
+ };
+
+ &uart0 {
++ pinctrl-names = "default";
++ pinctrl-0 = <&uart0_xfer>;
+ status = "okay";
+ };
+
+ &uart5 {
++ /* Add pinmux for rts-gpios (uart5_rts_pin) */
++ pinctrl-names = "default";
++ pinctrl-0 = <&uart5_xfer &uart5_rts_pin>;
+ rts-gpios = <&gpio0 RK_PB5 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts b/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts
+index fe5b526100107a..6a6b36c36ce215 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3399-nanopi-r4s.dts
+@@ -117,7 +117,7 @@ &u2phy0_host {
+ };
+
+ &u2phy1_host {
+- status = "disabled";
++ phy-supply = <&vdd_5v>;
+ };
+
+ &uart0 {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts b/arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts
+index 9a2f59a351dee5..48ccdd6b471182 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3566-lubancat-1.dts
+@@ -512,7 +512,6 @@ &sdhci {
+
+ &sdmmc0 {
+ max-frequency = <150000000>;
+- supports-sd;
+ bus-width = <4>;
+ cap-mmc-highspeed;
+ cap-sd-highspeed;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-jaguar.dts b/arch/arm64/boot/dts/rockchip/rk3588-jaguar.dts
+index 31d2f8994f8513..e61c5731fb99f0 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-jaguar.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3588-jaguar.dts
+@@ -455,7 +455,6 @@ &sdhci {
+ non-removable;
+ pinctrl-names = "default";
+ pinctrl-0 = <&emmc_bus8 &emmc_cmd &emmc_clk &emmc_data_strobe>;
+- supports-cqe;
+ vmmc-supply = <&vcc_3v3_s3>;
+ vqmmc-supply = <&vcc_1v8_s3>;
+ status = "okay";
+diff --git a/arch/arm64/boot/dts/rockchip/rk3588-tiger.dtsi b/arch/arm64/boot/dts/rockchip/rk3588-tiger.dtsi
+index 615094bb8ba380..a82fe75bda55c8 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3588-tiger.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3588-tiger.dtsi
+@@ -367,7 +367,6 @@ &sdhci {
+ non-removable;
+ pinctrl-names = "default";
+ pinctrl-0 = <&emmc_bus8 &emmc_cmd &emmc_clk &emmc_data_strobe>;
+- supports-cqe;
+ vmmc-supply = <&vcc_3v3_s3>;
+ vqmmc-supply = <&vcc_1v8_s3>;
+ status = "okay";
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 1bf70fa1045dcd..122a1e12582c05 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -602,23 +602,13 @@ struct kvm_host_data {
+ struct kvm_cpu_context host_ctxt;
+
+ /*
+- * All pointers in this union are hyp VA.
++ * Hyp VA.
+ * sve_state is only used in pKVM and if system_supports_sve().
+ */
+- union {
+- struct user_fpsimd_state *fpsimd_state;
+- struct cpu_sve_state *sve_state;
+- };
+-
+- union {
+- /* HYP VA pointer to the host storage for FPMR */
+- u64 *fpmr_ptr;
+- /*
+- * Used by pKVM only, as it needs to provide storage
+- * for the host
+- */
+- u64 fpmr;
+- };
++ struct cpu_sve_state *sve_state;
++
++ /* Used by pKVM only. */
++ u64 fpmr;
+
+ /* Ownership of the FP regs */
+ enum {
+@@ -697,7 +687,6 @@ struct kvm_vcpu_arch {
+ u64 hcr_el2;
+ u64 hcrx_el2;
+ u64 mdcr_el2;
+- u64 cptr_el2;
+
+ /* Exception Information */
+ struct kvm_vcpu_fault_info fault;
+@@ -902,10 +891,6 @@ struct kvm_vcpu_arch {
+ /* Save TRBE context if active */
+ #define DEBUG_STATE_SAVE_TRBE __vcpu_single_flag(iflags, BIT(6))
+
+-/* SVE enabled for host EL0 */
+-#define HOST_SVE_ENABLED __vcpu_single_flag(sflags, BIT(0))
+-/* SME enabled for EL0 */
+-#define HOST_SME_ENABLED __vcpu_single_flag(sflags, BIT(1))
+ /* Physical CPU not in supported_cpus */
+ #define ON_UNSUPPORTED_CPU __vcpu_single_flag(sflags, BIT(2))
+ /* WFIT instruction trapped */
+diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
+index 6d21971ae5594f..f38d22dac140f1 100644
+--- a/arch/arm64/kernel/fpsimd.c
++++ b/arch/arm64/kernel/fpsimd.c
+@@ -1694,31 +1694,6 @@ void fpsimd_signal_preserve_current_state(void)
+ sve_to_fpsimd(current);
+ }
+
+-/*
+- * Called by KVM when entering the guest.
+- */
+-void fpsimd_kvm_prepare(void)
+-{
+- if (!system_supports_sve())
+- return;
+-
+- /*
+- * KVM does not save host SVE state since we can only enter
+- * the guest from a syscall so the ABI means that only the
+- * non-saved SVE state needs to be saved. If we have left
+- * SVE enabled for performance reasons then update the task
+- * state to be FPSIMD only.
+- */
+- get_cpu_fpsimd_context();
+-
+- if (test_and_clear_thread_flag(TIF_SVE)) {
+- sve_to_fpsimd(current);
+- current->thread.fp_type = FP_STATE_FPSIMD;
+- }
+-
+- put_cpu_fpsimd_context();
+-}
+-
+ /*
+ * Associate current's FPSIMD context with this cpu
+ * The caller must have ownership of the cpu FPSIMD context before calling
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 3cf65daa75a51f..634d3f62481827 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -1577,7 +1577,6 @@ static int kvm_arch_vcpu_ioctl_vcpu_init(struct kvm_vcpu *vcpu,
+ }
+
+ vcpu_reset_hcr(vcpu);
+- vcpu->arch.cptr_el2 = kvm_get_reset_cptr_el2(vcpu);
+
+ /*
+ * Handle the "start in power-off" case.
+@@ -2477,14 +2476,6 @@ static void finalize_init_hyp_mode(void)
+ per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->sve_state =
+ kern_hyp_va(sve_state);
+ }
+- } else {
+- for_each_possible_cpu(cpu) {
+- struct user_fpsimd_state *fpsimd_state;
+-
+- fpsimd_state = &per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->host_ctxt.fp_regs;
+- per_cpu_ptr_nvhe_sym(kvm_host_data, cpu)->fpsimd_state =
+- kern_hyp_va(fpsimd_state);
+- }
+ }
+ }
+
+diff --git a/arch/arm64/kvm/fpsimd.c b/arch/arm64/kvm/fpsimd.c
+index ea5484ce1f3ba3..3cbb999419af7b 100644
+--- a/arch/arm64/kvm/fpsimd.c
++++ b/arch/arm64/kvm/fpsimd.c
+@@ -54,43 +54,16 @@ void kvm_arch_vcpu_load_fp(struct kvm_vcpu *vcpu)
+ if (!system_supports_fpsimd())
+ return;
+
+- fpsimd_kvm_prepare();
+-
+ /*
+- * We will check TIF_FOREIGN_FPSTATE just before entering the
+- * guest in kvm_arch_vcpu_ctxflush_fp() and override this to
+- * FP_STATE_FREE if the flag set.
++ * Ensure that any host FPSIMD/SVE/SME state is saved and unbound such
++ * that the host kernel is responsible for restoring this state upon
++ * return to userspace, and the hyp code doesn't need to save anything.
++ *
++ * When the host may use SME, fpsimd_save_and_flush_cpu_state() ensures
++ * that PSTATE.{SM,ZA} == {0,0}.
+ */
+- *host_data_ptr(fp_owner) = FP_STATE_HOST_OWNED;
+-	*host_data_ptr(fpsimd_state) = kern_hyp_va(&current->thread.uw.fpsimd_state);
+-	*host_data_ptr(fpmr_ptr) = kern_hyp_va(&current->thread.uw.fpmr);
+-
+- vcpu_clear_flag(vcpu, HOST_SVE_ENABLED);
+- if (read_sysreg(cpacr_el1) & CPACR_EL1_ZEN_EL0EN)
+- vcpu_set_flag(vcpu, HOST_SVE_ENABLED);
+-
+- if (system_supports_sme()) {
+- vcpu_clear_flag(vcpu, HOST_SME_ENABLED);
+- if (read_sysreg(cpacr_el1) & CPACR_EL1_SMEN_EL0EN)
+- vcpu_set_flag(vcpu, HOST_SME_ENABLED);
+-
+- /*
+- * If PSTATE.SM is enabled then save any pending FP
+- * state and disable PSTATE.SM. If we leave PSTATE.SM
+- * enabled and the guest does not enable SME via
+- * CPACR_EL1.SMEN then operations that should be valid
+- * may generate SME traps from EL1 to EL1 which we
+- * can't intercept and which would confuse the guest.
+- *
+- * Do the same for PSTATE.ZA in the case where there
+- * is state in the registers which has not already
+- * been saved, this is very unlikely to happen.
+- */
+- if (read_sysreg_s(SYS_SVCR) & (SVCR_SM_MASK | SVCR_ZA_MASK)) {
+- *host_data_ptr(fp_owner) = FP_STATE_FREE;
+- fpsimd_save_and_flush_cpu_state();
+- }
+- }
++ fpsimd_save_and_flush_cpu_state();
++ *host_data_ptr(fp_owner) = FP_STATE_FREE;
+
+ /*
+ * If normal guests gain SME support, maintain this behavior for pKVM
+@@ -162,52 +135,7 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
+
+ local_irq_save(flags);
+
+- /*
+- * If we have VHE then the Hyp code will reset CPACR_EL1 to
+- * the default value and we need to reenable SME.
+- */
+- if (has_vhe() && system_supports_sme()) {
+- /* Also restore EL0 state seen on entry */
+- if (vcpu_get_flag(vcpu, HOST_SME_ENABLED))
+- sysreg_clear_set(CPACR_EL1, 0, CPACR_ELx_SMEN);
+- else
+- sysreg_clear_set(CPACR_EL1,
+- CPACR_EL1_SMEN_EL0EN,
+- CPACR_EL1_SMEN_EL1EN);
+- isb();
+- }
+-
+ if (guest_owns_fp_regs()) {
+- if (vcpu_has_sve(vcpu)) {
+- u64 zcr = read_sysreg_el1(SYS_ZCR);
+-
+- /*
+- * If the vCPU is in the hyp context then ZCR_EL1 is
+- * loaded with its vEL2 counterpart.
+- */
+- __vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)) = zcr;
+-
+- /*
+- * Restore the VL that was saved when bound to the CPU,
+- * which is the maximum VL for the guest. Because the
+- * layout of the data when saving the sve state depends
+- * on the VL, we need to use a consistent (i.e., the
+- * maximum) VL.
+- * Note that this means that at guest exit ZCR_EL1 is
+- * not necessarily the same as on guest entry.
+- *
+- * ZCR_EL2 holds the guest hypervisor's VL when running
+- * a nested guest, which could be smaller than the
+- * max for the vCPU. Similar to above, we first need to
+- * switch to a VL consistent with the layout of the
+- * vCPU's SVE state. KVM support for NV implies VHE, so
+- * using the ZCR_EL1 alias is safe.
+- */
+- if (!has_vhe() || (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu)))
+- sve_cond_update_zcr_vq(vcpu_sve_max_vq(vcpu) - 1,
+- SYS_ZCR_EL1);
+- }
+-
+ /*
+ * Flush (save and invalidate) the fpsimd/sve state so that if
+ * the host tries to use fpsimd/sve, it's not using stale data
+@@ -219,18 +147,6 @@ void kvm_arch_vcpu_put_fp(struct kvm_vcpu *vcpu)
+ * when needed.
+ */
+ fpsimd_save_and_flush_cpu_state();
+- } else if (has_vhe() && system_supports_sve()) {
+- /*
+- * The FPSIMD/SVE state in the CPU has not been touched, and we
+- * have SVE (and VHE): CPACR_EL1 (alias CPTR_EL2) has been
+- * reset by kvm_reset_cptr_el2() in the Hyp code, disabling SVE
+- * for EL0. To avoid spurious traps, restore the trap state
+- * seen by kvm_arch_vcpu_load_fp():
+- */
+- if (vcpu_get_flag(vcpu, HOST_SVE_ENABLED))
+- sysreg_clear_set(CPACR_EL1, 0, CPACR_EL1_ZEN_EL0EN);
+- else
+- sysreg_clear_set(CPACR_EL1, CPACR_EL1_ZEN_EL0EN, 0);
+ }
+
+ local_irq_restore(flags);
+diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
+index 4433a234aa9ba2..9f4e8d68ab505c 100644
+--- a/arch/arm64/kvm/hyp/entry.S
++++ b/arch/arm64/kvm/hyp/entry.S
+@@ -44,6 +44,11 @@ alternative_if ARM64_HAS_RAS_EXTN
+ alternative_else_nop_endif
+ mrs x1, isr_el1
+ cbz x1, 1f
++
++ // Ensure that __guest_enter() always provides a context
++ // synchronization event so that callers don't need ISBs for anything
++	// that would usually be synchronized by the ERET.
++ isb
+ mov x0, #ARM_EXCEPTION_IRQ
+ ret
+
+diff --git a/arch/arm64/kvm/hyp/include/hyp/switch.h b/arch/arm64/kvm/hyp/include/hyp/switch.h
+index 5310fe1da6165b..cc9cb63959463a 100644
+--- a/arch/arm64/kvm/hyp/include/hyp/switch.h
++++ b/arch/arm64/kvm/hyp/include/hyp/switch.h
+@@ -295,7 +295,7 @@ static inline bool __populate_fault_info(struct kvm_vcpu *vcpu)
+ return __get_fault_info(vcpu->arch.fault.esr_el2, &vcpu->arch.fault);
+ }
+
+-static bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code)
++static inline bool kvm_hyp_handle_mops(struct kvm_vcpu *vcpu, u64 *exit_code)
+ {
+ *vcpu_pc(vcpu) = read_sysreg_el2(SYS_ELR);
+ arm64_mops_reset_regs(vcpu_gp_regs(vcpu), vcpu->arch.fault.esr_el2);
+@@ -344,7 +344,87 @@ static inline void __hyp_sve_save_host(void)
+ true);
+ }
+
+-static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu);
++static inline void fpsimd_lazy_switch_to_guest(struct kvm_vcpu *vcpu)
++{
++ u64 zcr_el1, zcr_el2;
++
++ if (!guest_owns_fp_regs())
++ return;
++
++ if (vcpu_has_sve(vcpu)) {
++ /* A guest hypervisor may restrict the effective max VL. */
++ if (vcpu_has_nv(vcpu) && !is_hyp_ctxt(vcpu))
++ zcr_el2 = __vcpu_sys_reg(vcpu, ZCR_EL2);
++ else
++ zcr_el2 = vcpu_sve_max_vq(vcpu) - 1;
++
++ write_sysreg_el2(zcr_el2, SYS_ZCR);
++
++ zcr_el1 = __vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu));
++ write_sysreg_el1(zcr_el1, SYS_ZCR);
++ }
++}
++
++static inline void fpsimd_lazy_switch_to_host(struct kvm_vcpu *vcpu)
++{
++ u64 zcr_el1, zcr_el2;
++
++ if (!guest_owns_fp_regs())
++ return;
++
++ /*
++ * When the guest owns the FP regs, we know that guest+hyp traps for
++ * any FPSIMD/SVE/SME features exposed to the guest have been disabled
++ * by either fpsimd_lazy_switch_to_guest() or kvm_hyp_handle_fpsimd()
++	 * prior to __guest_enter(). As __guest_enter() guarantees a context
++ * synchronization event, we don't need an ISB here to avoid taking
++ * traps for anything that was exposed to the guest.
++ */
++ if (vcpu_has_sve(vcpu)) {
++ zcr_el1 = read_sysreg_el1(SYS_ZCR);
++ __vcpu_sys_reg(vcpu, vcpu_sve_zcr_elx(vcpu)) = zcr_el1;
++
++ /*
++ * The guest's state is always saved using the guest's max VL.
++ * Ensure that the host has the guest's max VL active such that
++ * the host can save the guest's state lazily, but don't
++ * artificially restrict the host to the guest's max VL.
++ */
++ if (has_vhe()) {
++ zcr_el2 = vcpu_sve_max_vq(vcpu) - 1;
++ write_sysreg_el2(zcr_el2, SYS_ZCR);
++ } else {
++ zcr_el2 = sve_vq_from_vl(kvm_host_sve_max_vl) - 1;
++ write_sysreg_el2(zcr_el2, SYS_ZCR);
++
++ zcr_el1 = vcpu_sve_max_vq(vcpu) - 1;
++ write_sysreg_el1(zcr_el1, SYS_ZCR);
++ }
++ }
++}
++
++static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
++{
++ /*
++ * Non-protected kvm relies on the host restoring its sve state.
++	 * Protected kvm restores the host's sve state so as not to reveal that
++	 * fpsimd was used by a guest, nor to leak upper sve bits.
++ */
++ if (system_supports_sve()) {
++ __hyp_sve_save_host();
++
++ /* Re-enable SVE traps if not supported for the guest vcpu. */
++ if (!vcpu_has_sve(vcpu))
++ cpacr_clear_set(CPACR_ELx_ZEN, 0);
++
++ } else {
++ __fpsimd_save_state(host_data_ptr(host_ctxt.fp_regs));
++ }
++
++ if (kvm_has_fpmr(kern_hyp_va(vcpu->kvm)))
++ *host_data_ptr(fpmr) = read_sysreg_s(SYS_FPMR);
++}
++
+
+ /*
+ * We trap the first access to the FP/SIMD to save the host context and
+@@ -352,7 +432,7 @@ static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu);
+ * If FP/SIMD is not implemented, handle the trap and inject an undefined
+ * instruction exception to the guest. Similarly for trapped SVE accesses.
+ */
+-static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
++static inline bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
+ {
+ bool sve_guest;
+ u8 esr_ec;
+@@ -394,7 +474,7 @@ static bool kvm_hyp_handle_fpsimd(struct kvm_vcpu *vcpu, u64 *exit_code)
+ isb();
+
+ /* Write out the host state if it's in the registers */
+- if (host_owns_fp_regs())
++ if (is_protected_kvm_enabled() && host_owns_fp_regs())
+ kvm_hyp_save_fpsimd_host(vcpu);
+
+ /* Restore the guest state */
+@@ -543,7 +623,7 @@ static bool handle_ampere1_tcr(struct kvm_vcpu *vcpu)
+ return true;
+ }
+
+-static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
++static inline bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
+ {
+ if (cpus_have_final_cap(ARM64_WORKAROUND_CAVIUM_TX2_219_TVM) &&
+ handle_tx2_tvm(vcpu))
+@@ -563,7 +643,7 @@ static bool kvm_hyp_handle_sysreg(struct kvm_vcpu *vcpu, u64 *exit_code)
+ return false;
+ }
+
+-static bool kvm_hyp_handle_cp15_32(struct kvm_vcpu *vcpu, u64 *exit_code)
++static inline bool kvm_hyp_handle_cp15_32(struct kvm_vcpu *vcpu, u64 *exit_code)
+ {
+ if (static_branch_unlikely(&vgic_v3_cpuif_trap) &&
+ __vgic_v3_perform_cpuif_access(vcpu) == 1)
+@@ -572,19 +652,18 @@ static bool kvm_hyp_handle_cp15_32(struct kvm_vcpu *vcpu, u64 *exit_code)
+ return false;
+ }
+
+-static bool kvm_hyp_handle_memory_fault(struct kvm_vcpu *vcpu, u64 *exit_code)
++static inline bool kvm_hyp_handle_memory_fault(struct kvm_vcpu *vcpu,
++ u64 *exit_code)
+ {
+ if (!__populate_fault_info(vcpu))
+ return true;
+
+ return false;
+ }
+-static bool kvm_hyp_handle_iabt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
+- __alias(kvm_hyp_handle_memory_fault);
+-static bool kvm_hyp_handle_watchpt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
+- __alias(kvm_hyp_handle_memory_fault);
++#define kvm_hyp_handle_iabt_low kvm_hyp_handle_memory_fault
++#define kvm_hyp_handle_watchpt_low kvm_hyp_handle_memory_fault
+
+-static bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
++static inline bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
+ {
+ if (kvm_hyp_handle_memory_fault(vcpu, exit_code))
+ return true;
+@@ -614,23 +693,16 @@ static bool kvm_hyp_handle_dabt_low(struct kvm_vcpu *vcpu, u64 *exit_code)
+
+ typedef bool (*exit_handler_fn)(struct kvm_vcpu *, u64 *);
+
+-static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu);
+-
+-static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code);
+-
+ /*
+ * Allow the hypervisor to handle the exit with an exit handler if it has one.
+ *
+ * Returns true if the hypervisor handled the exit, and control should go back
+ * to the guest, or false if it hasn't.
+ */
+-static inline bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
++static inline bool kvm_hyp_handle_exit(struct kvm_vcpu *vcpu, u64 *exit_code,
++ const exit_handler_fn *handlers)
+ {
+- const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu);
+- exit_handler_fn fn;
+-
+- fn = handlers[kvm_vcpu_trap_get_class(vcpu)];
+-
++ exit_handler_fn fn = handlers[kvm_vcpu_trap_get_class(vcpu)];
+ if (fn)
+ return fn(vcpu, exit_code);
+
+@@ -660,20 +732,9 @@ static inline void synchronize_vcpu_pstate(struct kvm_vcpu *vcpu, u64 *exit_code
+ * the guest, false when we should restore the host state and return to the
+ * main run loop.
+ */
+-static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
++static inline bool __fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code,
++ const exit_handler_fn *handlers)
+ {
+- /*
+- * Save PSTATE early so that we can evaluate the vcpu mode
+- * early on.
+- */
+- synchronize_vcpu_pstate(vcpu, exit_code);
+-
+- /*
+- * Check whether we want to repaint the state one way or
+- * another.
+- */
+- early_exit_filter(vcpu, exit_code);
+-
+ if (ARM_EXCEPTION_CODE(*exit_code) != ARM_EXCEPTION_IRQ)
+ vcpu->arch.fault.esr_el2 = read_sysreg_el2(SYS_ESR);
+
+@@ -703,7 +764,7 @@ static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+ goto exit;
+
+ /* Check if there's an exit handler and allow it to handle the exit. */
+- if (kvm_hyp_handle_exit(vcpu, exit_code))
++ if (kvm_hyp_handle_exit(vcpu, exit_code, handlers))
+ goto guest;
+ exit:
+ /* Return to the host kernel and handle the exit */
+diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+index fefc89209f9e41..75f7e386de75bc 100644
+--- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
++++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
+@@ -5,6 +5,7 @@
+ */
+
+ #include <hyp/adjust_pc.h>
++#include <hyp/switch.h>
+
+ #include <asm/pgtable-types.h>
+ #include <asm/kvm_asm.h>
+@@ -83,7 +84,7 @@ static void fpsimd_sve_sync(struct kvm_vcpu *vcpu)
+ if (system_supports_sve())
+ __hyp_sve_restore_host();
+ else
+- __fpsimd_restore_state(*host_data_ptr(fpsimd_state));
++ __fpsimd_restore_state(host_data_ptr(host_ctxt.fp_regs));
+
+ if (has_fpmr)
+ write_sysreg_s(*host_data_ptr(fpmr), SYS_FPMR);
+@@ -177,7 +178,9 @@ static void handle___kvm_vcpu_run(struct kvm_cpu_context *host_ctxt)
+ pkvm_put_hyp_vcpu(hyp_vcpu);
+ } else {
+ /* The host is fully trusted, run its vCPU directly. */
++ fpsimd_lazy_switch_to_guest(host_vcpu);
+ ret = __kvm_vcpu_run(host_vcpu);
++ fpsimd_lazy_switch_to_host(host_vcpu);
+ }
+
+ out:
+@@ -486,12 +489,6 @@ void handle_trap(struct kvm_cpu_context *host_ctxt)
+ case ESR_ELx_EC_SMC64:
+ handle_host_smc(host_ctxt);
+ break;
+- case ESR_ELx_EC_SVE:
+- cpacr_clear_set(0, CPACR_ELx_ZEN);
+- isb();
+- sve_cond_update_zcr_vq(sve_vq_from_vl(kvm_host_sve_max_vl) - 1,
+- SYS_ZCR_EL2);
+- break;
+ case ESR_ELx_EC_IABT_LOW:
+ case ESR_ELx_EC_DABT_LOW:
+ handle_host_mem_abort(host_ctxt);
+diff --git a/arch/arm64/kvm/hyp/nvhe/pkvm.c b/arch/arm64/kvm/hyp/nvhe/pkvm.c
+index 077d4098548d2c..7c464340bcd078 100644
+--- a/arch/arm64/kvm/hyp/nvhe/pkvm.c
++++ b/arch/arm64/kvm/hyp/nvhe/pkvm.c
+@@ -28,8 +28,6 @@ static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu)
+ const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64PFR0_EL1);
+ u64 hcr_set = HCR_RW;
+ u64 hcr_clear = 0;
+- u64 cptr_set = 0;
+- u64 cptr_clear = 0;
+
+ /* Protected KVM does not support AArch32 guests. */
+ BUILD_BUG_ON(FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_EL0),
+@@ -59,21 +57,10 @@ static void pvm_init_traps_aa64pfr0(struct kvm_vcpu *vcpu)
+ /* Trap AMU */
+ if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_AMU), feature_ids)) {
+ hcr_clear |= HCR_AMVOFFEN;
+- cptr_set |= CPTR_EL2_TAM;
+- }
+-
+- /* Trap SVE */
+- if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64PFR0_EL1_SVE), feature_ids)) {
+- if (has_hvhe())
+- cptr_clear |= CPACR_ELx_ZEN;
+- else
+- cptr_set |= CPTR_EL2_TZ;
+ }
+
+ vcpu->arch.hcr_el2 |= hcr_set;
+ vcpu->arch.hcr_el2 &= ~hcr_clear;
+- vcpu->arch.cptr_el2 |= cptr_set;
+- vcpu->arch.cptr_el2 &= ~cptr_clear;
+ }
+
+ /*
+@@ -103,7 +90,6 @@ static void pvm_init_traps_aa64dfr0(struct kvm_vcpu *vcpu)
+ const u64 feature_ids = pvm_read_id_reg(vcpu, SYS_ID_AA64DFR0_EL1);
+ u64 mdcr_set = 0;
+ u64 mdcr_clear = 0;
+- u64 cptr_set = 0;
+
+ /* Trap/constrain PMU */
+ if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_PMUVer), feature_ids)) {
+@@ -130,21 +116,12 @@ static void pvm_init_traps_aa64dfr0(struct kvm_vcpu *vcpu)
+ if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceFilt), feature_ids))
+ mdcr_set |= MDCR_EL2_TTRF;
+
+- /* Trap Trace */
+- if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_TraceVer), feature_ids)) {
+- if (has_hvhe())
+- cptr_set |= CPACR_EL1_TTA;
+- else
+- cptr_set |= CPTR_EL2_TTA;
+- }
+-
+ /* Trap External Trace */
+ if (!FIELD_GET(ARM64_FEATURE_MASK(ID_AA64DFR0_EL1_ExtTrcBuff), feature_ids))
+ mdcr_clear |= MDCR_EL2_E2TB_MASK << MDCR_EL2_E2TB_SHIFT;
+
+ vcpu->arch.mdcr_el2 |= mdcr_set;
+ vcpu->arch.mdcr_el2 &= ~mdcr_clear;
+- vcpu->arch.cptr_el2 |= cptr_set;
+ }
+
+ /*
+@@ -195,10 +172,6 @@ static void pvm_init_trap_regs(struct kvm_vcpu *vcpu)
+ /* Clear res0 and set res1 bits to trap potential new features. */
+ vcpu->arch.hcr_el2 &= ~(HCR_RES0);
+ vcpu->arch.mdcr_el2 &= ~(MDCR_EL2_RES0);
+- if (!has_hvhe()) {
+- vcpu->arch.cptr_el2 |= CPTR_NVHE_EL2_RES1;
+- vcpu->arch.cptr_el2 &= ~(CPTR_NVHE_EL2_RES0);
+- }
+ }
+
+ /*
+@@ -579,8 +552,6 @@ int __pkvm_init_vcpu(pkvm_handle_t handle, struct kvm_vcpu *host_vcpu,
+ return ret;
+ }
+
+- hyp_vcpu->vcpu.arch.cptr_el2 = kvm_get_reset_cptr_el2(&hyp_vcpu->vcpu);
+-
+ return 0;
+ }
+
+diff --git a/arch/arm64/kvm/hyp/nvhe/switch.c b/arch/arm64/kvm/hyp/nvhe/switch.c
+index cc69106734ca73..a1245fa8383195 100644
+--- a/arch/arm64/kvm/hyp/nvhe/switch.c
++++ b/arch/arm64/kvm/hyp/nvhe/switch.c
+@@ -36,33 +36,71 @@ DEFINE_PER_CPU(unsigned long, kvm_hyp_vector);
+
+ extern void kvm_nvhe_prepare_backtrace(unsigned long fp, unsigned long pc);
+
+-static void __activate_traps(struct kvm_vcpu *vcpu)
++static void __activate_cptr_traps(struct kvm_vcpu *vcpu)
+ {
+- u64 val;
++ u64 val = CPTR_EL2_TAM; /* Same bit irrespective of E2H */
+
+- ___activate_traps(vcpu, vcpu->arch.hcr_el2);
+- __activate_traps_common(vcpu);
++ if (!guest_owns_fp_regs())
++ __activate_traps_fpsimd32(vcpu);
+
+- val = vcpu->arch.cptr_el2;
+- val |= CPTR_EL2_TAM; /* Same bit irrespective of E2H */
+- val |= has_hvhe() ? CPACR_EL1_TTA : CPTR_EL2_TTA;
+- if (cpus_have_final_cap(ARM64_SME)) {
+- if (has_hvhe())
+- val &= ~CPACR_ELx_SMEN;
+- else
+- val |= CPTR_EL2_TSM;
++ if (has_hvhe()) {
++ val |= CPACR_ELx_TTA;
++
++ if (guest_owns_fp_regs()) {
++ val |= CPACR_ELx_FPEN;
++ if (vcpu_has_sve(vcpu))
++ val |= CPACR_ELx_ZEN;
++ }
++
++ write_sysreg(val, cpacr_el1);
++ } else {
++ val |= CPTR_EL2_TTA | CPTR_NVHE_EL2_RES1;
++
++ /*
++ * Always trap SME since it's not supported in KVM.
++ * TSM is RES1 if SME isn't implemented.
++ */
++ val |= CPTR_EL2_TSM;
++
++ if (!vcpu_has_sve(vcpu) || !guest_owns_fp_regs())
++ val |= CPTR_EL2_TZ;
++
++ if (!guest_owns_fp_regs())
++ val |= CPTR_EL2_TFP;
++
++ write_sysreg(val, cptr_el2);
+ }
++}
+
+- if (!guest_owns_fp_regs()) {
+- if (has_hvhe())
+- val &= ~(CPACR_ELx_FPEN | CPACR_ELx_ZEN);
+- else
+- val |= CPTR_EL2_TFP | CPTR_EL2_TZ;
++static void __deactivate_cptr_traps(struct kvm_vcpu *vcpu)
++{
++ if (has_hvhe()) {
++ u64 val = CPACR_ELx_FPEN;
+
+- __activate_traps_fpsimd32(vcpu);
++ if (cpus_have_final_cap(ARM64_SVE))
++ val |= CPACR_ELx_ZEN;
++ if (cpus_have_final_cap(ARM64_SME))
++ val |= CPACR_ELx_SMEN;
++
++ write_sysreg(val, cpacr_el1);
++ } else {
++ u64 val = CPTR_NVHE_EL2_RES1;
++
++ if (!cpus_have_final_cap(ARM64_SVE))
++ val |= CPTR_EL2_TZ;
++ if (!cpus_have_final_cap(ARM64_SME))
++ val |= CPTR_EL2_TSM;
++
++ write_sysreg(val, cptr_el2);
+ }
++}
++
++static void __activate_traps(struct kvm_vcpu *vcpu)
++{
++ ___activate_traps(vcpu, vcpu->arch.hcr_el2);
++ __activate_traps_common(vcpu);
++ __activate_cptr_traps(vcpu);
+
+- kvm_write_cptr_el2(val);
+ write_sysreg(__this_cpu_read(kvm_hyp_vector), vbar_el2);
+
+ if (cpus_have_final_cap(ARM64_WORKAROUND_SPECULATIVE_AT)) {
+@@ -107,7 +145,7 @@ static void __deactivate_traps(struct kvm_vcpu *vcpu)
+
+ write_sysreg(this_cpu_ptr(&kvm_init_params)->hcr_el2, hcr_el2);
+
+- kvm_reset_cptr_el2(vcpu);
++ __deactivate_cptr_traps(vcpu);
+ write_sysreg(__kvm_hyp_host_vector, vbar_el2);
+ }
+
+@@ -180,34 +218,6 @@ static bool kvm_handle_pvm_sys64(struct kvm_vcpu *vcpu, u64 *exit_code)
+ kvm_handle_pvm_sysreg(vcpu, exit_code));
+ }
+
+-static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
+-{
+- /*
+- * Non-protected kvm relies on the host restoring its sve state.
+- * Protected kvm restores the host's sve state as not to reveal that
+- * fpsimd was used by a guest nor leak upper sve bits.
+- */
+- if (unlikely(is_protected_kvm_enabled() && system_supports_sve())) {
+- __hyp_sve_save_host();
+-
+- /* Re-enable SVE traps if not supported for the guest vcpu. */
+- if (!vcpu_has_sve(vcpu))
+- cpacr_clear_set(CPACR_ELx_ZEN, 0);
+-
+- } else {
+- __fpsimd_save_state(*host_data_ptr(fpsimd_state));
+- }
+-
+- if (kvm_has_fpmr(kern_hyp_va(vcpu->kvm))) {
+- u64 val = read_sysreg_s(SYS_FPMR);
+-
+- if (unlikely(is_protected_kvm_enabled()))
+- *host_data_ptr(fpmr) = val;
+- else
+- **host_data_ptr(fpmr_ptr) = val;
+- }
+-}
+-
+ static const exit_handler_fn hyp_exit_handlers[] = {
+ [0 ... ESR_ELx_EC_MAX] = NULL,
+ [ESR_ELx_EC_CP15_32] = kvm_hyp_handle_cp15_32,
+@@ -239,19 +249,21 @@ static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)
+ return hyp_exit_handlers;
+ }
+
+-/*
+- * Some guests (e.g., protected VMs) are not be allowed to run in AArch32.
+- * The ARMv8 architecture does not give the hypervisor a mechanism to prevent a
+- * guest from dropping to AArch32 EL0 if implemented by the CPU. If the
+- * hypervisor spots a guest in such a state ensure it is handled, and don't
+- * trust the host to spot or fix it. The check below is based on the one in
+- * kvm_arch_vcpu_ioctl_run().
+- *
+- * Returns false if the guest ran in AArch32 when it shouldn't have, and
+- * thus should exit to the host, or true if a the guest run loop can continue.
+- */
+-static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
++static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+ {
++ const exit_handler_fn *handlers = kvm_get_exit_handler_array(vcpu);
++
++ synchronize_vcpu_pstate(vcpu, exit_code);
++
++ /*
++	 * Some guests (e.g., protected VMs) are not allowed to run in
++ * AArch32. The ARMv8 architecture does not give the hypervisor a
++ * mechanism to prevent a guest from dropping to AArch32 EL0 if
++ * implemented by the CPU. If the hypervisor spots a guest in such a
++ * state ensure it is handled, and don't trust the host to spot or fix
++ * it. The check below is based on the one in
++ * kvm_arch_vcpu_ioctl_run().
++ */
+ if (unlikely(vcpu_is_protected(vcpu) && vcpu_mode_is_32bit(vcpu))) {
+ /*
+ * As we have caught the guest red-handed, decide that it isn't
+@@ -264,6 +276,8 @@ static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
+ *exit_code &= BIT(ARM_EXIT_WITH_SERROR_BIT);
+ *exit_code |= ARM_EXCEPTION_IL;
+ }
++
++ return __fixup_guest_exit(vcpu, exit_code, handlers);
+ }
+
+ /* Switch to the guest for legacy non-VHE systems */
+diff --git a/arch/arm64/kvm/hyp/vhe/switch.c b/arch/arm64/kvm/hyp/vhe/switch.c
+index 80581b1c399595..496abfd3646b98 100644
+--- a/arch/arm64/kvm/hyp/vhe/switch.c
++++ b/arch/arm64/kvm/hyp/vhe/switch.c
+@@ -309,14 +309,6 @@ static bool kvm_hyp_handle_eret(struct kvm_vcpu *vcpu, u64 *exit_code)
+ return true;
+ }
+
+-static void kvm_hyp_save_fpsimd_host(struct kvm_vcpu *vcpu)
+-{
+- __fpsimd_save_state(*host_data_ptr(fpsimd_state));
+-
+- if (kvm_has_fpmr(vcpu->kvm))
+- **host_data_ptr(fpmr_ptr) = read_sysreg_s(SYS_FPMR);
+-}
+-
+ static bool kvm_hyp_handle_tlbi_el2(struct kvm_vcpu *vcpu, u64 *exit_code)
+ {
+ int ret = -EINVAL;
+@@ -431,13 +423,10 @@ static const exit_handler_fn hyp_exit_handlers[] = {
+ [ESR_ELx_EC_MOPS] = kvm_hyp_handle_mops,
+ };
+
+-static const exit_handler_fn *kvm_get_exit_handler_array(struct kvm_vcpu *vcpu)
++static inline bool fixup_guest_exit(struct kvm_vcpu *vcpu, u64 *exit_code)
+ {
+- return hyp_exit_handlers;
+-}
++ synchronize_vcpu_pstate(vcpu, exit_code);
+
+-static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
+-{
+ /*
+ * If we were in HYP context on entry, adjust the PSTATE view
+ * so that the usual helpers work correctly.
+@@ -457,6 +446,8 @@ static void early_exit_filter(struct kvm_vcpu *vcpu, u64 *exit_code)
+ *vcpu_cpsr(vcpu) &= ~(PSR_MODE_MASK | PSR_MODE32_BIT);
+ *vcpu_cpsr(vcpu) |= mode;
+ }
++
++ return __fixup_guest_exit(vcpu, exit_code, hyp_exit_handlers);
+ }
+
+ /* Switch to the guest for VHE systems running in EL2 */
+@@ -471,6 +462,8 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
+
+ sysreg_save_host_state_vhe(host_ctxt);
+
++ fpsimd_lazy_switch_to_guest(vcpu);
++
+ /*
+ * Note that ARM erratum 1165522 requires us to configure both stage 1
+ * and stage 2 translation for the guest context before we clear
+@@ -495,6 +488,8 @@ static int __kvm_vcpu_run_vhe(struct kvm_vcpu *vcpu)
+
+ __deactivate_traps(vcpu);
+
++ fpsimd_lazy_switch_to_host(vcpu);
++
+ sysreg_restore_host_state_vhe(host_ctxt);
+
+ if (guest_owns_fp_regs())
+diff --git a/arch/riscv/boot/dts/starfive/jh7110-pinfunc.h b/arch/riscv/boot/dts/starfive/jh7110-pinfunc.h
+index 256de17f526113..ae49c908e7fb3f 100644
+--- a/arch/riscv/boot/dts/starfive/jh7110-pinfunc.h
++++ b/arch/riscv/boot/dts/starfive/jh7110-pinfunc.h
+@@ -89,7 +89,7 @@
+ #define GPOUT_SYS_SDIO1_DATA1 59
+ #define GPOUT_SYS_SDIO1_DATA2 60
+ #define GPOUT_SYS_SDIO1_DATA3 61
+-#define GPOUT_SYS_SDIO1_DATA4 63
++#define GPOUT_SYS_SDIO1_DATA4 62
+ #define GPOUT_SYS_SDIO1_DATA5 63
+ #define GPOUT_SYS_SDIO1_DATA6 64
+ #define GPOUT_SYS_SDIO1_DATA7 65
+diff --git a/drivers/accel/qaic/qaic_data.c b/drivers/accel/qaic/qaic_data.c
+index c20eb63750f517..43aba57b48f05f 100644
+--- a/drivers/accel/qaic/qaic_data.c
++++ b/drivers/accel/qaic/qaic_data.c
+@@ -172,9 +172,10 @@ static void free_slice(struct kref *kref)
+ static int clone_range_of_sgt_for_slice(struct qaic_device *qdev, struct sg_table **sgt_out,
+ struct sg_table *sgt_in, u64 size, u64 offset)
+ {
+- int total_len, len, nents, offf = 0, offl = 0;
+ struct scatterlist *sg, *sgn, *sgf, *sgl;
++ unsigned int len, nents, offf, offl;
+ struct sg_table *sgt;
++ size_t total_len;
+ int ret, j;
+
+ /* find out number of relevant nents needed for this mem */
+@@ -182,6 +183,8 @@ static int clone_range_of_sgt_for_slice(struct qaic_device *qdev, struct sg_tabl
+ sgf = NULL;
+ sgl = NULL;
+ nents = 0;
++ offf = 0;
++ offl = 0;
+
+ size = size ? size : PAGE_SIZE;
+ for_each_sgtable_dma_sg(sgt_in, sg, j) {
+@@ -554,6 +557,7 @@ static bool invalid_sem(struct qaic_sem *sem)
+ static int qaic_validate_req(struct qaic_device *qdev, struct qaic_attach_slice_entry *slice_ent,
+ u32 count, u64 total_size)
+ {
++ u64 total;
+ int i;
+
+ for (i = 0; i < count; i++) {
+@@ -563,7 +567,8 @@ static int qaic_validate_req(struct qaic_device *qdev, struct qaic_attach_slice_
+ invalid_sem(&slice_ent[i].sem2) || invalid_sem(&slice_ent[i].sem3))
+ return -EINVAL;
+
+- if (slice_ent[i].offset + slice_ent[i].size > total_size)
++ if (check_add_overflow(slice_ent[i].offset, slice_ent[i].size, &total) ||
++ total > total_size)
+ return -EINVAL;
+ }
+
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index c085dd81ebe7f6..d956735e2a7645 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -2845,6 +2845,10 @@ int ata_dev_configure(struct ata_device *dev)
+ (id[ATA_ID_SATA_CAPABILITY] & 0xe) == 0x2)
+ dev->quirks |= ATA_QUIRK_NOLPM;
+
++ if (dev->quirks & ATA_QUIRK_NO_LPM_ON_ATI &&
++ ata_dev_check_adapter(dev, PCI_VENDOR_ID_ATI))
++ dev->quirks |= ATA_QUIRK_NOLPM;
++
+ if (ap->flags & ATA_FLAG_NO_LPM)
+ dev->quirks |= ATA_QUIRK_NOLPM;
+
+@@ -3897,6 +3901,7 @@ static const char * const ata_quirk_names[] = {
+ [__ATA_QUIRK_MAX_SEC_1024] = "maxsec1024",
+ [__ATA_QUIRK_MAX_TRIM_128M] = "maxtrim128m",
+ [__ATA_QUIRK_NO_NCQ_ON_ATI] = "noncqonati",
++ [__ATA_QUIRK_NO_LPM_ON_ATI] = "nolpmonati",
+ [__ATA_QUIRK_NO_ID_DEV_LOG] = "noiddevlog",
+ [__ATA_QUIRK_NO_LOG_DIR] = "nologdir",
+ [__ATA_QUIRK_NO_FUA] = "nofua",
+@@ -4142,13 +4147,16 @@ static const struct ata_dev_quirks_entry __ata_dev_quirks[] = {
+ ATA_QUIRK_ZERO_AFTER_TRIM },
+ { "Samsung SSD 860*", NULL, ATA_QUIRK_NO_NCQ_TRIM |
+ ATA_QUIRK_ZERO_AFTER_TRIM |
+- ATA_QUIRK_NO_NCQ_ON_ATI },
++ ATA_QUIRK_NO_NCQ_ON_ATI |
++ ATA_QUIRK_NO_LPM_ON_ATI },
+ { "Samsung SSD 870*", NULL, ATA_QUIRK_NO_NCQ_TRIM |
+ ATA_QUIRK_ZERO_AFTER_TRIM |
+- ATA_QUIRK_NO_NCQ_ON_ATI },
++ ATA_QUIRK_NO_NCQ_ON_ATI |
++ ATA_QUIRK_NO_LPM_ON_ATI },
+ { "SAMSUNG*MZ7LH*", NULL, ATA_QUIRK_NO_NCQ_TRIM |
+ ATA_QUIRK_ZERO_AFTER_TRIM |
+- ATA_QUIRK_NO_NCQ_ON_ATI, },
++ ATA_QUIRK_NO_NCQ_ON_ATI |
++ ATA_QUIRK_NO_LPM_ON_ATI },
+ { "FCCT*M500*", NULL, ATA_QUIRK_NO_NCQ_TRIM |
+ ATA_QUIRK_ZERO_AFTER_TRIM },
+
+diff --git a/drivers/dpll/dpll_core.c b/drivers/dpll/dpll_core.c
+index 32019dc33cca7e..1877201d1aa9fe 100644
+--- a/drivers/dpll/dpll_core.c
++++ b/drivers/dpll/dpll_core.c
+@@ -505,7 +505,7 @@ dpll_pin_alloc(u64 clock_id, u32 pin_idx, struct module *module,
+ xa_init_flags(&pin->parent_refs, XA_FLAGS_ALLOC);
+ ret = xa_alloc_cyclic(&dpll_pin_xa, &pin->id, pin, xa_limit_32b,
+ &dpll_pin_xa_id, GFP_KERNEL);
+- if (ret)
++ if (ret < 0)
+ goto err_xa_alloc;
+ return pin;
+ err_xa_alloc:
+diff --git a/drivers/firmware/efi/libstub/randomalloc.c b/drivers/firmware/efi/libstub/randomalloc.c
+index 8ad3efb9b1ff16..593e98e3b993ea 100644
+--- a/drivers/firmware/efi/libstub/randomalloc.c
++++ b/drivers/firmware/efi/libstub/randomalloc.c
+@@ -75,6 +75,10 @@ efi_status_t efi_random_alloc(unsigned long size,
+ if (align < EFI_ALLOC_ALIGN)
+ align = EFI_ALLOC_ALIGN;
+
++ /* Avoid address 0x0, as it can be mistaken for NULL */
++ if (alloc_min == 0)
++ alloc_min = align;
++
+ size = round_up(size, EFI_ALLOC_ALIGN);
+
+ /* count the suitable slots in each memory map entry */
+diff --git a/drivers/firmware/imx/imx-scu.c b/drivers/firmware/imx/imx-scu.c
+index 1dd4362ef9a3fc..8c28e25ddc8a65 100644
+--- a/drivers/firmware/imx/imx-scu.c
++++ b/drivers/firmware/imx/imx-scu.c
+@@ -280,6 +280,7 @@ static int imx_scu_probe(struct platform_device *pdev)
+ return ret;
+
+ sc_ipc->fast_ipc = of_device_is_compatible(args.np, "fsl,imx8-mu-scu");
++ of_node_put(args.np);
+
+ num_channel = sc_ipc->fast_ipc ? 2 : SCU_MU_CHAN_NUM;
+ for (i = 0; i < num_channel; i++) {
+diff --git a/drivers/firmware/qcom/qcom_qseecom_uefisecapp.c b/drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
+index 447246bd04be3f..98a463e9774bf0 100644
+--- a/drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
++++ b/drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
+@@ -814,15 +814,6 @@ static int qcom_uefisecapp_probe(struct auxiliary_device *aux_dev,
+
+ qcuefi->client = container_of(aux_dev, struct qseecom_client, aux_dev);
+
+- auxiliary_set_drvdata(aux_dev, qcuefi);
+- status = qcuefi_set_reference(qcuefi);
+- if (status)
+- return status;
+-
+- status = efivars_register(&qcuefi->efivars, &qcom_efivar_ops);
+- if (status)
+- qcuefi_set_reference(NULL);
+-
+ memset(&pool_config, 0, sizeof(pool_config));
+ pool_config.initial_size = SZ_4K;
+ pool_config.policy = QCOM_TZMEM_POLICY_MULTIPLIER;
+@@ -833,6 +824,15 @@ static int qcom_uefisecapp_probe(struct auxiliary_device *aux_dev,
+ if (IS_ERR(qcuefi->mempool))
+ return PTR_ERR(qcuefi->mempool);
+
++ auxiliary_set_drvdata(aux_dev, qcuefi);
++ status = qcuefi_set_reference(qcuefi);
++ if (status)
++ return status;
++
++ status = efivars_register(&qcuefi->efivars, &qcom_efivar_ops);
++ if (status)
++ qcuefi_set_reference(NULL);
++
+ return status;
+ }
+
+diff --git a/drivers/firmware/qcom/qcom_scm.c b/drivers/firmware/qcom/qcom_scm.c
+index 2e093c39b610ae..23aefbf6fca588 100644
+--- a/drivers/firmware/qcom/qcom_scm.c
++++ b/drivers/firmware/qcom/qcom_scm.c
+@@ -2054,8 +2054,8 @@ static int qcom_scm_probe(struct platform_device *pdev)
+
+ __scm->mempool = devm_qcom_tzmem_pool_new(__scm->dev, &pool_config);
+ if (IS_ERR(__scm->mempool)) {
+- dev_err_probe(__scm->dev, PTR_ERR(__scm->mempool),
+- "Failed to create the SCM memory pool\n");
++ ret = dev_err_probe(__scm->dev, PTR_ERR(__scm->mempool),
++ "Failed to create the SCM memory pool\n");
+ goto err;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index ca130880edfd42..d3798a333d1f88 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -2395,7 +2395,7 @@ static int gfx_v12_0_cp_gfx_load_me_microcode_rs64(struct amdgpu_device *adev)
+ (void **)&adev->gfx.me.me_fw_data_ptr);
+ if (r) {
+ dev_err(adev->dev, "(%d) failed to create me data bo\n", r);
+- gfx_v12_0_pfp_fini(adev);
++ gfx_v12_0_me_fini(adev);
+ return r;
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
+index 9c6824e1c15660..60acf676000b34 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gmc_v12_0.c
+@@ -498,9 +498,6 @@ static void gmc_v12_0_get_vm_pte(struct amdgpu_device *adev,
+ uint64_t *flags)
+ {
+ struct amdgpu_bo *bo = mapping->bo_va->base.bo;
+- struct amdgpu_device *bo_adev;
+- bool coherent, is_system;
+-
+
+ *flags &= ~AMDGPU_PTE_EXECUTABLE;
+ *flags |= mapping->flags & AMDGPU_PTE_EXECUTABLE;
+@@ -516,26 +513,11 @@ static void gmc_v12_0_get_vm_pte(struct amdgpu_device *adev,
+ *flags &= ~AMDGPU_PTE_VALID;
+ }
+
+- if (!bo)
+- return;
+-
+- if (bo->flags & (AMDGPU_GEM_CREATE_COHERENT |
+- AMDGPU_GEM_CREATE_UNCACHED))
+- *flags = AMDGPU_PTE_MTYPE_GFX12(*flags, MTYPE_UC);
+-
+- bo_adev = amdgpu_ttm_adev(bo->tbo.bdev);
+- coherent = bo->flags & AMDGPU_GEM_CREATE_COHERENT;
+- is_system = bo->tbo.resource &&
+- (bo->tbo.resource->mem_type == TTM_PL_TT ||
+- bo->tbo.resource->mem_type == AMDGPU_PL_PREEMPT);
+-
+ if (bo && bo->flags & AMDGPU_GEM_CREATE_GFX12_DCC)
+ *flags |= AMDGPU_PTE_DCC;
+
+- /* WA for HW bug */
+- if (is_system || ((bo_adev != adev) && coherent))
+- *flags = AMDGPU_PTE_MTYPE_GFX12(*flags, MTYPE_NC);
+-
++ if (bo && bo->flags & AMDGPU_GEM_CREATE_UNCACHED)
++ *flags = AMDGPU_PTE_MTYPE_GFX12(*flags, MTYPE_UC);
+ }
+
+ static unsigned gmc_v12_0_get_vbios_fb_size(struct amdgpu_device *adev)
+diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
+index 73065a85e0d264..4f94a119d62754 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nv.c
++++ b/drivers/gpu/drm/amd/amdgpu/nv.c
+@@ -78,12 +78,12 @@ static const struct amdgpu_video_codecs nv_video_codecs_encode = {
+
+ /* Navi1x */
+ static const struct amdgpu_video_codec_info nv_video_codecs_decode_array[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 1920, 1088, 3)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 1920, 1088, 5)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 1920, 1088, 4)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 8192, 8192, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
+ };
+
+@@ -104,10 +104,10 @@ static const struct amdgpu_video_codecs sc_video_codecs_encode = {
+ };
+
+ static const struct amdgpu_video_codec_info sc_video_codecs_decode_array_vcn0[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 1920, 1088, 3)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 1920, 1088, 5)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 1920, 1088, 4)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 16384, 16384, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
+@@ -115,10 +115,10 @@ static const struct amdgpu_video_codec_info sc_video_codecs_decode_array_vcn0[]
+ };
+
+ static const struct amdgpu_video_codec_info sc_video_codecs_decode_array_vcn1[] = {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 1920, 1088, 3)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 1920, 1088, 5)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 1920, 1088, 4)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 16384, 16384, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index 307185c0e1b8f2..4cbe0da100d8f3 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -103,12 +103,11 @@ static const struct amdgpu_video_codecs vega_video_codecs_encode =
+ /* Vega */
+ static const struct amdgpu_video_codec_info vega_video_codecs_decode_array[] =
+ {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 1920, 1088, 3)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 1920, 1088, 5)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 1920, 1088, 4)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 4096, 186)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
+ };
+
+ static const struct amdgpu_video_codecs vega_video_codecs_decode =
+@@ -120,12 +119,12 @@ static const struct amdgpu_video_codecs vega_video_codecs_decode =
+ /* Raven */
+ static const struct amdgpu_video_codec_info rv_video_codecs_decode_array[] =
+ {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 1920, 1088, 3)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 1920, 1088, 5)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 1920, 1088, 4)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 4096, 4096, 186)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 4096, 4096, 0)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 8192, 8192, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 4096, 4096, 0)},
+ };
+
+@@ -138,10 +137,10 @@ static const struct amdgpu_video_codecs rv_video_codecs_decode =
+ /* Renoir, Arcturus */
+ static const struct amdgpu_video_codec_info rn_video_codecs_decode_array[] =
+ {
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 4096, 4096, 3)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 4096, 4096, 5)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2, 1920, 1088, 3)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4, 1920, 1088, 5)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4_AVC, 4096, 4096, 52)},
+- {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 4096, 4096, 4)},
++ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1, 1920, 1088, 4)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_HEVC, 8192, 4352, 186)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG, 16384, 16384, 0)},
+ {codec_info_build(AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VP9, 8192, 4352, 0)},
+diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
+index 792b2eb6bbacea..48ab93e715c823 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vi.c
++++ b/drivers/gpu/drm/amd/amdgpu/vi.c
+@@ -167,16 +167,16 @@ static const struct amdgpu_video_codec_info tonga_video_codecs_decode_array[] =
+ {
+ {
+ .codec_type = AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2,
+- .max_width = 4096,
+- .max_height = 4096,
+- .max_pixels_per_frame = 4096 * 4096,
++ .max_width = 1920,
++ .max_height = 1088,
++ .max_pixels_per_frame = 1920 * 1088,
+ .max_level = 3,
+ },
+ {
+ .codec_type = AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4,
+- .max_width = 4096,
+- .max_height = 4096,
+- .max_pixels_per_frame = 4096 * 4096,
++ .max_width = 1920,
++ .max_height = 1088,
++ .max_pixels_per_frame = 1920 * 1088,
+ .max_level = 5,
+ },
+ {
+@@ -188,9 +188,9 @@ static const struct amdgpu_video_codec_info tonga_video_codecs_decode_array[] =
+ },
+ {
+ .codec_type = AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1,
+- .max_width = 4096,
+- .max_height = 4096,
+- .max_pixels_per_frame = 4096 * 4096,
++ .max_width = 1920,
++ .max_height = 1088,
++ .max_pixels_per_frame = 1920 * 1088,
+ .max_level = 4,
+ },
+ };
+@@ -206,16 +206,16 @@ static const struct amdgpu_video_codec_info cz_video_codecs_decode_array[] =
+ {
+ {
+ .codec_type = AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG2,
+- .max_width = 4096,
+- .max_height = 4096,
+- .max_pixels_per_frame = 4096 * 4096,
++ .max_width = 1920,
++ .max_height = 1088,
++ .max_pixels_per_frame = 1920 * 1088,
+ .max_level = 3,
+ },
+ {
+ .codec_type = AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_MPEG4,
+- .max_width = 4096,
+- .max_height = 4096,
+- .max_pixels_per_frame = 4096 * 4096,
++ .max_width = 1920,
++ .max_height = 1088,
++ .max_pixels_per_frame = 1920 * 1088,
+ .max_level = 5,
+ },
+ {
+@@ -227,9 +227,9 @@ static const struct amdgpu_video_codec_info cz_video_codecs_decode_array[] =
+ },
+ {
+ .codec_type = AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_VC1,
+- .max_width = 4096,
+- .max_height = 4096,
+- .max_pixels_per_frame = 4096 * 4096,
++ .max_width = 1920,
++ .max_height = 1088,
++ .max_pixels_per_frame = 1920 * 1088,
+ .max_level = 4,
+ },
+ {
+@@ -239,13 +239,6 @@ static const struct amdgpu_video_codec_info cz_video_codecs_decode_array[] =
+ .max_pixels_per_frame = 4096 * 4096,
+ .max_level = 186,
+ },
+- {
+- .codec_type = AMDGPU_INFO_VIDEO_CAPS_CODEC_IDX_JPEG,
+- .max_width = 4096,
+- .max_height = 4096,
+- .max_pixels_per_frame = 4096 * 4096,
+- .max_level = 0,
+- },
+ };
+
+ static const struct amdgpu_video_codecs cz_video_codecs_decode =
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_queue.c b/drivers/gpu/drm/amd/amdkfd/kfd_queue.c
+index 80c85b6cc478a9..29d7cb4cfe69ae 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_queue.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_queue.c
+@@ -233,6 +233,7 @@ void kfd_queue_buffer_put(struct amdgpu_bo **bo)
+ int kfd_queue_acquire_buffers(struct kfd_process_device *pdd, struct queue_properties *properties)
+ {
+ struct kfd_topology_device *topo_dev;
++ u64 expected_queue_size;
+ struct amdgpu_vm *vm;
+ u32 total_cwsr_size;
+ int err;
+@@ -241,6 +242,15 @@ int kfd_queue_acquire_buffers(struct kfd_process_device *pdd, struct queue_prope
+ if (!topo_dev)
+ return -EINVAL;
+
++ /* AQL queues on GFX7 and GFX8 appear twice their actual size */
++ if (properties->type == KFD_QUEUE_TYPE_COMPUTE &&
++ properties->format == KFD_QUEUE_FORMAT_AQL &&
++ topo_dev->node_props.gfx_target_version >= 70000 &&
++ topo_dev->node_props.gfx_target_version < 90000)
++ expected_queue_size = properties->queue_size / 2;
++ else
++ expected_queue_size = properties->queue_size;
++
+ vm = drm_priv_to_vm(pdd->drm_priv);
+ err = amdgpu_bo_reserve(vm->root.bo, false);
+ if (err)
+@@ -255,7 +265,7 @@ int kfd_queue_acquire_buffers(struct kfd_process_device *pdd, struct queue_prope
+ goto out_err_unreserve;
+
+ err = kfd_queue_buffer_get(vm, (void *)properties->queue_address,
+- &properties->ring_bo, properties->queue_size);
++ &properties->ring_bo, expected_queue_size);
+ if (err)
+ goto out_err_unreserve;
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index 1893c27746a523..8c61dee5ca0db1 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -1276,13 +1276,7 @@ svm_range_get_pte_flags(struct kfd_node *node,
+ break;
+ case IP_VERSION(12, 0, 0):
+ case IP_VERSION(12, 0, 1):
+- if (domain == SVM_RANGE_VRAM_DOMAIN) {
+- if (bo_node != node)
+- mapping_flags |= AMDGPU_VM_MTYPE_NC;
+- } else {
+- mapping_flags |= coherent ?
+- AMDGPU_VM_MTYPE_UC : AMDGPU_VM_MTYPE_NC;
+- }
++ mapping_flags |= AMDGPU_VM_MTYPE_NC;
+ break;
+ default:
+ mapping_flags |= coherent ?
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 0688a428ee4f7d..d9a3917d207e93 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1720,7 +1720,7 @@ static void retrieve_dmi_info(struct amdgpu_display_manager *dm, struct dc_init_
+ }
+ if (quirk_entries.support_edp0_on_dp1) {
+ init_data->flags.support_edp0_on_dp1 = true;
+- drm_info(dev, "aux_hpd_discon_quirk attached\n");
++ drm_info(dev, "support_edp0_on_dp1 attached\n");
+ }
+ }
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+index bf636b28e3e16e..6e2fce329d7382 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+@@ -69,5 +69,16 @@ bool should_use_dmub_lock(struct dc_link *link)
+ if (link->replay_settings.replay_feature_enabled)
+ return true;
+
++ /* only use HW lock for PSR1 on single eDP */
++ if (link->psr_settings.psr_version == DC_PSR_VERSION_1) {
++ struct dc_link *edp_links[MAX_NUM_EDP];
++ int edp_num;
++
++ dc_get_edp_links(link->dc, edp_links, &edp_num);
++
++ if (edp_num == 1)
++ return true;
++ }
++
+ return false;
+ }
+diff --git a/drivers/gpu/drm/amd/pm/amdgpu_pm.c b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+index 0fa6fbee197899..bfdfba676025e7 100644
+--- a/drivers/gpu/drm/amd/pm/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/pm/amdgpu_pm.c
+@@ -2493,6 +2493,8 @@ static int default_attr_update(struct amdgpu_device *adev, struct amdgpu_device_
+ case IP_VERSION(11, 0, 1):
+ case IP_VERSION(11, 0, 2):
+ case IP_VERSION(11, 0, 3):
++ case IP_VERSION(12, 0, 0):
++ case IP_VERSION(12, 0, 1):
+ *states = ATTR_STATE_SUPPORTED;
+ break;
+ default:
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+index 9ec53431f2c32d..e98a6a2f3e6acc 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu14/smu_v14_0_2_ppt.c
+@@ -1206,16 +1206,9 @@ static int smu_v14_0_2_print_clk_levels(struct smu_context *smu,
+ PP_OD_FEATURE_GFXCLK_BIT))
+ break;
+
+- PPTable_t *pptable = smu->smu_table.driver_pptable;
+- const OverDriveLimits_t * const overdrive_upperlimits =
+- &pptable->SkuTable.OverDriveLimitsBasicMax;
+- const OverDriveLimits_t * const overdrive_lowerlimits =
+- &pptable->SkuTable.OverDriveLimitsBasicMin;
+-
+ size += sysfs_emit_at(buf, size, "OD_SCLK_OFFSET:\n");
+- size += sysfs_emit_at(buf, size, "0: %dMhz\n1: %uMhz\n",
+- overdrive_lowerlimits->GfxclkFoffset,
+- overdrive_upperlimits->GfxclkFoffset);
++ size += sysfs_emit_at(buf, size, "%dMhz\n",
++ od_table->OverDriveTable.GfxclkFoffset);
+ break;
+
+ case SMU_OD_MCLK:
+@@ -1349,13 +1342,9 @@ static int smu_v14_0_2_print_clk_levels(struct smu_context *smu,
+ size += sysfs_emit_at(buf, size, "%s:\n", "OD_RANGE");
+
+ if (smu_v14_0_2_is_od_feature_supported(smu, PP_OD_FEATURE_GFXCLK_BIT)) {
+- smu_v14_0_2_get_od_setting_limits(smu,
+- PP_OD_FEATURE_GFXCLK_FMIN,
+- &min_value,
+- NULL);
+ smu_v14_0_2_get_od_setting_limits(smu,
+ PP_OD_FEATURE_GFXCLK_FMAX,
+- NULL,
++ &min_value,
+ &max_value);
+ size += sysfs_emit_at(buf, size, "SCLK_OFFSET: %7dMhz %10uMhz\n",
+ min_value, max_value);
+@@ -1639,6 +1628,39 @@ static void smu_v14_0_2_get_unique_id(struct smu_context *smu)
+ adev->unique_id = ((uint64_t)upper32 << 32) | lower32;
+ }
+
++static int smu_v14_0_2_get_fan_speed_pwm(struct smu_context *smu,
++ uint32_t *speed)
++{
++ int ret;
++
++ if (!speed)
++ return -EINVAL;
++
++ ret = smu_v14_0_2_get_smu_metrics_data(smu,
++ METRICS_CURR_FANPWM,
++ speed);
++ if (ret) {
++ dev_err(smu->adev->dev, "Failed to get fan speed(PWM)!");
++ return ret;
++ }
++
++	/* Convert the PMFW output, which is in percent, to a pwm(255)-based value */
++ *speed = min(*speed * 255 / 100, (uint32_t)255);
++
++ return 0;
++}
++
++static int smu_v14_0_2_get_fan_speed_rpm(struct smu_context *smu,
++ uint32_t *speed)
++{
++ if (!speed)
++ return -EINVAL;
++
++ return smu_v14_0_2_get_smu_metrics_data(smu,
++ METRICS_CURR_FANSPEED,
++ speed);
++}
++
+ static int smu_v14_0_2_get_power_limit(struct smu_context *smu,
+ uint32_t *current_power_limit,
+ uint32_t *default_power_limit,
+@@ -2429,36 +2451,24 @@ static int smu_v14_0_2_od_edit_dpm_table(struct smu_context *smu,
+ return -ENOTSUPP;
+ }
+
+- for (i = 0; i < size; i += 2) {
+- if (i + 2 > size) {
+- dev_info(adev->dev, "invalid number of input parameters %d\n", size);
+- return -EINVAL;
+- }
+-
+- switch (input[i]) {
+- case 1:
+- smu_v14_0_2_get_od_setting_limits(smu,
+- PP_OD_FEATURE_GFXCLK_FMAX,
+- &minimum,
+- &maximum);
+- if (input[i + 1] < minimum ||
+- input[i + 1] > maximum) {
+- dev_info(adev->dev, "GfxclkFmax (%ld) must be within [%u, %u]!\n",
+- input[i + 1], minimum, maximum);
+- return -EINVAL;
+- }
+-
+- od_table->OverDriveTable.GfxclkFoffset = input[i + 1];
+- od_table->OverDriveTable.FeatureCtrlMask |= 1U << PP_OD_FEATURE_GFXCLK_BIT;
+- break;
++ if (size != 1) {
++ dev_info(adev->dev, "invalid number of input parameters %d\n", size);
++ return -EINVAL;
++ }
+
+- default:
+- dev_info(adev->dev, "Invalid SCLK_VDDC_TABLE index: %ld\n", input[i]);
+- dev_info(adev->dev, "Supported indices: [0:min,1:max]\n");
+- return -EINVAL;
+- }
++ smu_v14_0_2_get_od_setting_limits(smu,
++ PP_OD_FEATURE_GFXCLK_FMAX,
++ &minimum,
++ &maximum);
++ if (input[0] < minimum ||
++ input[0] > maximum) {
++ dev_info(adev->dev, "GfxclkFoffset must be within [%d, %u]!\n",
++ minimum, maximum);
++ return -EINVAL;
+ }
+
++ od_table->OverDriveTable.GfxclkFoffset = input[0];
++ od_table->OverDriveTable.FeatureCtrlMask |= 1U << PP_OD_FEATURE_GFXCLK_BIT;
+ break;
+
+ case PP_OD_EDIT_MCLK_VDDC_TABLE:
+@@ -2817,6 +2827,8 @@ static const struct pptable_funcs smu_v14_0_2_ppt_funcs = {
+ .set_performance_level = smu_v14_0_set_performance_level,
+ .gfx_off_control = smu_v14_0_gfx_off_control,
+ .get_unique_id = smu_v14_0_2_get_unique_id,
++ .get_fan_speed_pwm = smu_v14_0_2_get_fan_speed_pwm,
++ .get_fan_speed_rpm = smu_v14_0_2_get_fan_speed_rpm,
+ .get_power_limit = smu_v14_0_2_get_power_limit,
+ .set_power_limit = smu_v14_0_2_set_power_limit,
+ .get_power_profile_mode = smu_v14_0_2_get_power_profile_mode,
+diff --git a/drivers/gpu/drm/radeon/radeon_vce.c b/drivers/gpu/drm/radeon/radeon_vce.c
+index d1871af967d4af..2355a78e1b69d6 100644
+--- a/drivers/gpu/drm/radeon/radeon_vce.c
++++ b/drivers/gpu/drm/radeon/radeon_vce.c
+@@ -557,7 +557,7 @@ int radeon_vce_cs_parse(struct radeon_cs_parser *p)
+ {
+ int session_idx = -1;
+ bool destroyed = false, created = false, allocated = false;
+- uint32_t tmp, handle = 0;
++ uint32_t tmp = 0, handle = 0;
+ uint32_t *size = &tmp;
+ int i, r = 0;
+
+diff --git a/drivers/gpu/drm/scheduler/sched_entity.c b/drivers/gpu/drm/scheduler/sched_entity.c
+index a75eede8bf8dab..002057be0d84a2 100644
+--- a/drivers/gpu/drm/scheduler/sched_entity.c
++++ b/drivers/gpu/drm/scheduler/sched_entity.c
+@@ -259,9 +259,16 @@ static void drm_sched_entity_kill(struct drm_sched_entity *entity)
+ struct drm_sched_fence *s_fence = job->s_fence;
+
+ dma_fence_get(&s_fence->finished);
+- if (!prev || dma_fence_add_callback(prev, &job->finish_cb,
+- drm_sched_entity_kill_jobs_cb))
++ if (!prev ||
++ dma_fence_add_callback(prev, &job->finish_cb,
++ drm_sched_entity_kill_jobs_cb)) {
++ /*
++ * Adding callback above failed.
++ * dma_fence_put() checks for NULL.
++ */
++ dma_fence_put(prev);
+ drm_sched_entity_kill_jobs_cb(NULL, &job->finish_cb);
++ }
+
+ prev = &s_fence->finished;
+ }
+diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
+index 4f935f1d50a943..3066cfdb054cc0 100644
+--- a/drivers/gpu/drm/v3d/v3d_sched.c
++++ b/drivers/gpu/drm/v3d/v3d_sched.c
+@@ -319,11 +319,15 @@ v3d_tfu_job_run(struct drm_sched_job *sched_job)
+ struct drm_device *dev = &v3d->drm;
+ struct dma_fence *fence;
+
++ if (unlikely(job->base.base.s_fence->finished.error))
++ return NULL;
++
++ v3d->tfu_job = job;
++
+ fence = v3d_fence_create(v3d, V3D_TFU);
+ if (IS_ERR(fence))
+ return NULL;
+
+- v3d->tfu_job = job;
+ if (job->base.irq_fence)
+ dma_fence_put(job->base.irq_fence);
+ job->base.irq_fence = dma_fence_get(fence);
+@@ -361,6 +365,9 @@ v3d_csd_job_run(struct drm_sched_job *sched_job)
+ struct dma_fence *fence;
+ int i, csd_cfg0_reg;
+
++ if (unlikely(job->base.base.s_fence->finished.error))
++ return NULL;
++
+ v3d->csd_job = job;
+
+ v3d_invalidate_caches(v3d);
+diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h
+index 6e4be52306dfc9..d22269a230aa19 100644
+--- a/drivers/gpu/drm/xe/xe_bo.h
++++ b/drivers/gpu/drm/xe/xe_bo.h
+@@ -314,7 +314,6 @@ static inline unsigned int xe_sg_segment_size(struct device *dev)
+
+ #define i915_gem_object_flush_if_display(obj) ((void)(obj))
+
+-#if IS_ENABLED(CONFIG_DRM_XE_KUNIT_TEST)
+ /**
+ * xe_bo_is_mem_type - Whether the bo currently resides in the given
+ * TTM memory type
+@@ -329,4 +328,3 @@ static inline bool xe_bo_is_mem_type(struct xe_bo *bo, u32 mem_type)
+ return bo->ttm.resource->mem_type == mem_type;
+ }
+ #endif
+-#endif
+diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c
+index 68f309f5e98153..f3bf7d3157b479 100644
+--- a/drivers/gpu/drm/xe/xe_dma_buf.c
++++ b/drivers/gpu/drm/xe/xe_dma_buf.c
+@@ -58,7 +58,7 @@ static int xe_dma_buf_pin(struct dma_buf_attachment *attach)
+ * 1) Avoid pinning in a placement not accessible to some importers.
+ * 2) Pinning in VRAM requires PIN accounting which is a to-do.
+ */
+- if (xe_bo_is_pinned(bo) && bo->ttm.resource->placement != XE_PL_TT) {
++ if (xe_bo_is_pinned(bo) && !xe_bo_is_mem_type(bo, XE_PL_TT)) {
+ drm_dbg(&xe->drm, "Can't migrate pinned bo for dma-buf pin.\n");
+ return -EINVAL;
+ }
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index 710674ef40a973..3f23a7d91519fa 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -367,6 +367,10 @@ static bool host1x_wants_iommu(struct host1x *host1x)
+ return true;
+ }
+
++/*
++ * Returns ERR_PTR on failure, NULL if the translation is IDENTITY, otherwise a
++ * valid paging domain.
++ */
+ static struct iommu_domain *host1x_iommu_attach(struct host1x *host)
+ {
+ struct iommu_domain *domain = iommu_get_domain_for_dev(host->dev);
+@@ -391,6 +395,8 @@ static struct iommu_domain *host1x_iommu_attach(struct host1x *host)
+ * Similarly, if host1x is already attached to an IOMMU (via the DMA
+ * API), don't try to attach again.
+ */
++ if (domain && domain->type == IOMMU_DOMAIN_IDENTITY)
++ domain = NULL;
+ if (!host1x_wants_iommu(host) || domain)
+ return domain;
+
+diff --git a/drivers/i2c/busses/i2c-omap.c b/drivers/i2c/busses/i2c-omap.c
+index 1d9ad25c89ae55..8c9cf08ad45e22 100644
+--- a/drivers/i2c/busses/i2c-omap.c
++++ b/drivers/i2c/busses/i2c-omap.c
+@@ -1048,23 +1048,6 @@ static int omap_i2c_transmit_data(struct omap_i2c_dev *omap, u8 num_bytes,
+ return 0;
+ }
+
+-static irqreturn_t
+-omap_i2c_isr(int irq, void *dev_id)
+-{
+- struct omap_i2c_dev *omap = dev_id;
+- irqreturn_t ret = IRQ_HANDLED;
+- u16 mask;
+- u16 stat;
+-
+- stat = omap_i2c_read_reg(omap, OMAP_I2C_STAT_REG);
+- mask = omap_i2c_read_reg(omap, OMAP_I2C_IE_REG) & ~OMAP_I2C_STAT_NACK;
+-
+- if (stat & mask)
+- ret = IRQ_WAKE_THREAD;
+-
+- return ret;
+-}
+-
+ static int omap_i2c_xfer_data(struct omap_i2c_dev *omap)
+ {
+ u16 bits;
+@@ -1095,8 +1078,13 @@ static int omap_i2c_xfer_data(struct omap_i2c_dev *omap)
+ }
+
+ if (stat & OMAP_I2C_STAT_NACK) {
+- err |= OMAP_I2C_STAT_NACK;
++ omap->cmd_err |= OMAP_I2C_STAT_NACK;
+ omap_i2c_ack_stat(omap, OMAP_I2C_STAT_NACK);
++
++ if (!(stat & ~OMAP_I2C_STAT_NACK)) {
++ err = -EAGAIN;
++ break;
++ }
+ }
+
+ if (stat & OMAP_I2C_STAT_AL) {
+@@ -1472,7 +1460,7 @@ omap_i2c_probe(struct platform_device *pdev)
+ IRQF_NO_SUSPEND, pdev->name, omap);
+ else
+ r = devm_request_threaded_irq(&pdev->dev, omap->irq,
+- omap_i2c_isr, omap_i2c_isr_thread,
++ NULL, omap_i2c_isr_thread,
+ IRQF_NO_SUSPEND | IRQF_ONESHOT,
+ pdev->name, omap);
+
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_fp.c b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+index 613b5fc70e13ea..7436ce55157972 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_fp.c
++++ b/drivers/infiniband/hw/bnxt_re/qplib_fp.c
+@@ -1216,8 +1216,6 @@ static void __modify_flags_from_init_state(struct bnxt_qplib_qp *qp)
+ qp->path_mtu =
+ CMDQ_MODIFY_QP_PATH_MTU_MTU_2048;
+ }
+- qp->modify_flags &=
+- ~CMDQ_MODIFY_QP_MODIFY_MASK_VLAN_ID;
+ /* Bono FW require the max_dest_rd_atomic to be >= 1 */
+ if (qp->max_dest_rd_atomic < 1)
+ qp->max_dest_rd_atomic = 1;
+diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+index 07779aeb75759d..a4deb45ec849fa 100644
+--- a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
++++ b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h
+@@ -283,9 +283,10 @@ int bnxt_qplib_deinit_rcfw(struct bnxt_qplib_rcfw *rcfw);
+ int bnxt_qplib_init_rcfw(struct bnxt_qplib_rcfw *rcfw,
+ struct bnxt_qplib_ctx *ctx, int is_virtfn);
+ void bnxt_qplib_mark_qp_error(void *qp_handle);
++
+ static inline u32 map_qp_id_to_tbl_indx(u32 qid, struct bnxt_qplib_rcfw *rcfw)
+ {
+ /* Last index of the qp_tbl is for QP1 ie. qp_tbl_size - 1*/
+- return (qid == 1) ? rcfw->qp_tbl_size - 1 : qid % rcfw->qp_tbl_size - 2;
++ return (qid == 1) ? rcfw->qp_tbl_size - 1 : (qid % (rcfw->qp_tbl_size - 2));
+ }
+ #endif /* __BNXT_QPLIB_RCFW_H__ */
+diff --git a/drivers/infiniband/hw/hns/hns_roce_alloc.c b/drivers/infiniband/hw/hns/hns_roce_alloc.c
+index 950c133d4220e7..6ee911f6885b54 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_alloc.c
++++ b/drivers/infiniband/hw/hns/hns_roce_alloc.c
+@@ -175,8 +175,10 @@ void hns_roce_cleanup_bitmap(struct hns_roce_dev *hr_dev)
+ if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_XRC)
+ ida_destroy(&hr_dev->xrcd_ida.ida);
+
+- if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ)
++ if (hr_dev->caps.flags & HNS_ROCE_CAP_FLAG_SRQ) {
+ ida_destroy(&hr_dev->srq_table.srq_ida.ida);
++ xa_destroy(&hr_dev->srq_table.xa);
++ }
+ hns_roce_cleanup_qp_table(hr_dev);
+ hns_roce_cleanup_cq_table(hr_dev);
+ ida_destroy(&hr_dev->mr_table.mtpt_ida.ida);
+diff --git a/drivers/infiniband/hw/hns/hns_roce_cq.c b/drivers/infiniband/hw/hns/hns_roce_cq.c
+index 4106423a1b399d..3a5c93c9fb3e66 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_cq.c
++++ b/drivers/infiniband/hw/hns/hns_roce_cq.c
+@@ -537,5 +537,6 @@ void hns_roce_cleanup_cq_table(struct hns_roce_dev *hr_dev)
+
+ for (i = 0; i < HNS_ROCE_CQ_BANK_NUM; i++)
+ ida_destroy(&hr_dev->cq_table.bank[i].ida);
++ xa_destroy(&hr_dev->cq_table.array);
+ mutex_destroy(&hr_dev->cq_table.bank_mutex);
+ }
+diff --git a/drivers/infiniband/hw/hns/hns_roce_hem.c b/drivers/infiniband/hw/hns/hns_roce_hem.c
+index 605562122ecce2..ca0798224e565c 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_hem.c
++++ b/drivers/infiniband/hw/hns/hns_roce_hem.c
+@@ -1361,6 +1361,11 @@ static int hem_list_alloc_root_bt(struct hns_roce_dev *hr_dev,
+ return ret;
+ }
+
++/* This is the number of bottom-level bt pages of a 100G MR on a 4K OS,
++ * assuming the bt page size is not expanded by cal_best_bt_pg_sz()
++ */
++#define RESCHED_LOOP_CNT_THRESHOLD_ON_4K 12800
++
+ /* construct the base address table and link them by address hop config */
+ int hns_roce_hem_list_request(struct hns_roce_dev *hr_dev,
+ struct hns_roce_hem_list *hem_list,
+@@ -1369,6 +1374,7 @@ int hns_roce_hem_list_request(struct hns_roce_dev *hr_dev,
+ {
+ const struct hns_roce_buf_region *r;
+ int ofs, end;
++ int loop;
+ int unit;
+ int ret;
+ int i;
+@@ -1386,7 +1392,10 @@ int hns_roce_hem_list_request(struct hns_roce_dev *hr_dev,
+ continue;
+
+ end = r->offset + r->count;
+- for (ofs = r->offset; ofs < end; ofs += unit) {
++ for (ofs = r->offset, loop = 1; ofs < end; ofs += unit, loop++) {
++ if (!(loop % RESCHED_LOOP_CNT_THRESHOLD_ON_4K))
++ cond_resched();
++
+ ret = hem_list_alloc_mid_bt(hr_dev, r, unit, ofs,
+ hem_list->mid_bt[i],
+ &hem_list->btm_bt);
+@@ -1443,9 +1452,14 @@ void *hns_roce_hem_list_find_mtt(struct hns_roce_dev *hr_dev,
+ struct list_head *head = &hem_list->btm_bt;
+ struct hns_roce_hem_item *hem, *temp_hem;
+ void *cpu_base = NULL;
++ int loop = 1;
+ int nr = 0;
+
+ list_for_each_entry_safe(hem, temp_hem, head, sibling) {
++ if (!(loop % RESCHED_LOOP_CNT_THRESHOLD_ON_4K))
++ cond_resched();
++ loop++;
++
+ if (hem_list_page_is_in_range(hem, offset)) {
+ nr = offset - hem->start;
+ cpu_base = hem->addr + nr * BA_BYTE_LEN;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index ae24c81c9812d9..cf89a8db4f64cd 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -183,7 +183,7 @@ static int hns_roce_query_device(struct ib_device *ib_dev,
+ IB_DEVICE_RC_RNR_NAK_GEN;
+ props->max_send_sge = hr_dev->caps.max_sq_sg;
+ props->max_recv_sge = hr_dev->caps.max_rq_sg;
+- props->max_sge_rd = 1;
++ props->max_sge_rd = hr_dev->caps.max_sq_sg;
+ props->max_cq = hr_dev->caps.num_cqs;
+ props->max_cqe = hr_dev->caps.max_cqes;
+ props->max_mr = hr_dev->caps.num_mtpts;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_qp.c b/drivers/infiniband/hw/hns/hns_roce_qp.c
+index 9e2e76c5940636..8901c142c1b652 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_qp.c
++++ b/drivers/infiniband/hw/hns/hns_roce_qp.c
+@@ -868,12 +868,14 @@ static int alloc_user_qp_db(struct hns_roce_dev *hr_dev,
+ struct hns_roce_ib_create_qp *ucmd,
+ struct hns_roce_ib_create_qp_resp *resp)
+ {
++ bool has_sdb = user_qp_has_sdb(hr_dev, init_attr, udata, resp, ucmd);
+ struct hns_roce_ucontext *uctx = rdma_udata_to_drv_context(udata,
+ struct hns_roce_ucontext, ibucontext);
++ bool has_rdb = user_qp_has_rdb(hr_dev, init_attr, udata, resp);
+ struct ib_device *ibdev = &hr_dev->ib_dev;
+ int ret;
+
+- if (user_qp_has_sdb(hr_dev, init_attr, udata, resp, ucmd)) {
++ if (has_sdb) {
+ ret = hns_roce_db_map_user(uctx, ucmd->sdb_addr, &hr_qp->sdb);
+ if (ret) {
+ ibdev_err(ibdev,
+@@ -884,7 +886,7 @@ static int alloc_user_qp_db(struct hns_roce_dev *hr_dev,
+ hr_qp->en_flags |= HNS_ROCE_QP_CAP_SQ_RECORD_DB;
+ }
+
+- if (user_qp_has_rdb(hr_dev, init_attr, udata, resp)) {
++ if (has_rdb) {
+ ret = hns_roce_db_map_user(uctx, ucmd->db_addr, &hr_qp->rdb);
+ if (ret) {
+ ibdev_err(ibdev,
+@@ -898,7 +900,7 @@ static int alloc_user_qp_db(struct hns_roce_dev *hr_dev,
+ return 0;
+
+ err_sdb:
+- if (hr_qp->en_flags & HNS_ROCE_QP_CAP_SQ_RECORD_DB)
++ if (has_sdb)
+ hns_roce_db_unmap_user(uctx, &hr_qp->sdb);
+ err_out:
+ return ret;
+@@ -1119,24 +1121,23 @@ static int set_qp_param(struct hns_roce_dev *hr_dev, struct hns_roce_qp *hr_qp,
+ ibucontext);
+ hr_qp->config = uctx->config;
+ ret = set_user_sq_size(hr_dev, &init_attr->cap, hr_qp, ucmd);
+- if (ret)
++ if (ret) {
+ ibdev_err(ibdev,
+ "failed to set user SQ size, ret = %d.\n",
+ ret);
++ return ret;
++ }
+
+ ret = set_congest_param(hr_dev, hr_qp, ucmd);
+- if (ret)
+- return ret;
+ } else {
+ if (hr_dev->pci_dev->revision >= PCI_REVISION_ID_HIP09)
+ hr_qp->config = HNS_ROCE_EXSGE_FLAGS;
++ default_congest_type(hr_dev, hr_qp);
+ ret = set_kernel_sq_size(hr_dev, &init_attr->cap, hr_qp);
+ if (ret)
+ ibdev_err(ibdev,
+ "failed to set kernel SQ size, ret = %d.\n",
+ ret);
+-
+- default_congest_type(hr_dev, hr_qp);
+ }
+
+ return ret;
+@@ -1219,7 +1220,7 @@ static int hns_roce_create_qp_common(struct hns_roce_dev *hr_dev,
+ min(udata->outlen, sizeof(resp)));
+ if (ret) {
+ ibdev_err(ibdev, "copy qp resp failed!\n");
+- goto err_store;
++ goto err_flow_ctrl;
+ }
+ }
+
+@@ -1602,6 +1603,7 @@ void hns_roce_cleanup_qp_table(struct hns_roce_dev *hr_dev)
+ for (i = 0; i < HNS_ROCE_QP_BANK_NUM; i++)
+ ida_destroy(&hr_dev->qp_table.bank[i].ida);
+ xa_destroy(&hr_dev->qp_table.dip_xa);
++ xa_destroy(&hr_dev->qp_table_xa);
+ mutex_destroy(&hr_dev->qp_table.bank_mutex);
+ mutex_destroy(&hr_dev->qp_table.scc_mutex);
+ }
+diff --git a/drivers/infiniband/hw/mlx5/ah.c b/drivers/infiniband/hw/mlx5/ah.c
+index 99036afb3aef0b..531a57f9ee7e8b 100644
+--- a/drivers/infiniband/hw/mlx5/ah.c
++++ b/drivers/infiniband/hw/mlx5/ah.c
+@@ -50,11 +50,12 @@ static __be16 mlx5_ah_get_udp_sport(const struct mlx5_ib_dev *dev,
+ return sport;
+ }
+
+-static void create_ib_ah(struct mlx5_ib_dev *dev, struct mlx5_ib_ah *ah,
++static int create_ib_ah(struct mlx5_ib_dev *dev, struct mlx5_ib_ah *ah,
+ struct rdma_ah_init_attr *init_attr)
+ {
+ struct rdma_ah_attr *ah_attr = init_attr->ah_attr;
+ enum ib_gid_type gid_type;
++ int rate_val;
+
+ if (rdma_ah_get_ah_flags(ah_attr) & IB_AH_GRH) {
+ const struct ib_global_route *grh = rdma_ah_read_grh(ah_attr);
+@@ -67,8 +68,10 @@ static void create_ib_ah(struct mlx5_ib_dev *dev, struct mlx5_ib_ah *ah,
+ ah->av.tclass = grh->traffic_class;
+ }
+
+- ah->av.stat_rate_sl =
+- (mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah_attr)) << 4);
++ rate_val = mlx5r_ib_rate(dev, rdma_ah_get_static_rate(ah_attr));
++ if (rate_val < 0)
++ return rate_val;
++ ah->av.stat_rate_sl = rate_val << 4;
+
+ if (ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE) {
+ if (init_attr->xmit_slave)
+@@ -89,6 +92,8 @@ static void create_ib_ah(struct mlx5_ib_dev *dev, struct mlx5_ib_ah *ah,
+ ah->av.fl_mlid = rdma_ah_get_path_bits(ah_attr) & 0x7f;
+ ah->av.stat_rate_sl |= (rdma_ah_get_sl(ah_attr) & 0xf);
+ }
++
++ return 0;
+ }
+
+ int mlx5_ib_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
+@@ -121,8 +126,7 @@ int mlx5_ib_create_ah(struct ib_ah *ibah, struct rdma_ah_init_attr *init_attr,
+ return err;
+ }
+
+- create_ib_ah(dev, ah, init_attr);
+- return 0;
++ return create_ib_ah(dev, ah, init_attr);
+ }
+
+ int mlx5_ib_query_ah(struct ib_ah *ibah, struct rdma_ah_attr *ah_attr)
+diff --git a/drivers/infiniband/sw/rxe/rxe.c b/drivers/infiniband/sw/rxe/rxe.c
+index 1ba4a0c8726aed..e27478fe9456c9 100644
+--- a/drivers/infiniband/sw/rxe/rxe.c
++++ b/drivers/infiniband/sw/rxe/rxe.c
+@@ -38,10 +38,8 @@ void rxe_dealloc(struct ib_device *ib_dev)
+ }
+
+ /* initialize rxe device parameters */
+-static void rxe_init_device_param(struct rxe_dev *rxe)
++static void rxe_init_device_param(struct rxe_dev *rxe, struct net_device *ndev)
+ {
+- struct net_device *ndev;
+-
+ rxe->max_inline_data = RXE_MAX_INLINE_DATA;
+
+ rxe->attr.vendor_id = RXE_VENDOR_ID;
+@@ -74,15 +72,9 @@ static void rxe_init_device_param(struct rxe_dev *rxe)
+ rxe->attr.max_pkeys = RXE_MAX_PKEYS;
+ rxe->attr.local_ca_ack_delay = RXE_LOCAL_CA_ACK_DELAY;
+
+- ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
+- if (!ndev)
+- return;
+-
+ addrconf_addr_eui48((unsigned char *)&rxe->attr.sys_image_guid,
+ ndev->dev_addr);
+
+- dev_put(ndev);
+-
+ rxe->max_ucontext = RXE_MAX_UCONTEXT;
+ }
+
+@@ -115,18 +107,13 @@ static void rxe_init_port_param(struct rxe_port *port)
+ /* initialize port state, note IB convention that HCA ports are always
+ * numbered from 1
+ */
+-static void rxe_init_ports(struct rxe_dev *rxe)
++static void rxe_init_ports(struct rxe_dev *rxe, struct net_device *ndev)
+ {
+ struct rxe_port *port = &rxe->port;
+- struct net_device *ndev;
+
+ rxe_init_port_param(port);
+- ndev = rxe_ib_device_get_netdev(&rxe->ib_dev);
+- if (!ndev)
+- return;
+ addrconf_addr_eui48((unsigned char *)&port->port_guid,
+ ndev->dev_addr);
+- dev_put(ndev);
+ spin_lock_init(&port->port_lock);
+ }
+
+@@ -144,12 +131,12 @@ static void rxe_init_pools(struct rxe_dev *rxe)
+ }
+
+ /* initialize rxe device state */
+-static void rxe_init(struct rxe_dev *rxe)
++static void rxe_init(struct rxe_dev *rxe, struct net_device *ndev)
+ {
+ /* init default device parameters */
+- rxe_init_device_param(rxe);
++ rxe_init_device_param(rxe, ndev);
+
+- rxe_init_ports(rxe);
++ rxe_init_ports(rxe, ndev);
+ rxe_init_pools(rxe);
+
+ /* init pending mmap list */
+@@ -184,7 +171,7 @@ void rxe_set_mtu(struct rxe_dev *rxe, unsigned int ndev_mtu)
+ int rxe_add(struct rxe_dev *rxe, unsigned int mtu, const char *ibdev_name,
+ struct net_device *ndev)
+ {
+- rxe_init(rxe);
++ rxe_init(rxe, ndev);
+ rxe_set_mtu(rxe, mtu);
+
+ return rxe_register_device(rxe, ibdev_name, ndev);
+diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
+index cdbd2edf4b2e7c..8c0853dca8b923 100644
+--- a/drivers/mmc/host/atmel-mci.c
++++ b/drivers/mmc/host/atmel-mci.c
+@@ -2499,8 +2499,10 @@ static int atmci_probe(struct platform_device *pdev)
+ /* Get MCI capabilities and set operations according to it */
+ atmci_get_cap(host);
+ ret = atmci_configure_dma(host);
+- if (ret == -EPROBE_DEFER)
++ if (ret == -EPROBE_DEFER) {
++ clk_disable_unprepare(host->mck);
+ goto err_dma_probe_defer;
++ }
+ if (ret == 0) {
+ host->prepare_data = &atmci_prepare_data_dma;
+ host->submit_data = &atmci_submit_data_dma;
+diff --git a/drivers/mmc/host/sdhci-brcmstb.c b/drivers/mmc/host/sdhci-brcmstb.c
+index 031a4b514d16bd..9d9ddc2f6f70c1 100644
+--- a/drivers/mmc/host/sdhci-brcmstb.c
++++ b/drivers/mmc/host/sdhci-brcmstb.c
+@@ -503,8 +503,15 @@ static int sdhci_brcmstb_suspend(struct device *dev)
+ struct sdhci_host *host = dev_get_drvdata(dev);
+ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ struct sdhci_brcmstb_priv *priv = sdhci_pltfm_priv(pltfm_host);
++ int ret;
+
+ clk_disable_unprepare(priv->base_clk);
++ if (host->mmc->caps2 & MMC_CAP2_CQE) {
++ ret = cqhci_suspend(host->mmc);
++ if (ret)
++ return ret;
++ }
++
+ return sdhci_pltfm_suspend(dev);
+ }
+
+@@ -529,6 +536,9 @@ static int sdhci_brcmstb_resume(struct device *dev)
+ ret = clk_set_rate(priv->base_clk, priv->base_freq_hz);
+ }
+
++ if (host->mmc->caps2 & MMC_CAP2_CQE)
++ ret = cqhci_resume(host->mmc);
++
+ return ret;
+ }
+ #endif
+diff --git a/drivers/net/can/flexcan/flexcan-core.c b/drivers/net/can/flexcan/flexcan-core.c
+index ac1a860986df69..b080740bcb104f 100644
+--- a/drivers/net/can/flexcan/flexcan-core.c
++++ b/drivers/net/can/flexcan/flexcan-core.c
+@@ -2260,14 +2260,19 @@ static int __maybe_unused flexcan_suspend(struct device *device)
+
+ flexcan_chip_interrupts_disable(dev);
+
++ err = flexcan_transceiver_disable(priv);
++ if (err)
++ return err;
++
+ err = pinctrl_pm_select_sleep_state(device);
+ if (err)
+ return err;
+ }
+ netif_stop_queue(dev);
+ netif_device_detach(dev);
++
++ priv->can.state = CAN_STATE_SLEEPING;
+ }
+- priv->can.state = CAN_STATE_SLEEPING;
+
+ return 0;
+ }
+@@ -2278,7 +2283,6 @@ static int __maybe_unused flexcan_resume(struct device *device)
+ struct flexcan_priv *priv = netdev_priv(dev);
+ int err;
+
+- priv->can.state = CAN_STATE_ERROR_ACTIVE;
+ if (netif_running(dev)) {
+ netif_device_attach(dev);
+ netif_start_queue(dev);
+@@ -2292,12 +2296,20 @@ static int __maybe_unused flexcan_resume(struct device *device)
+ if (err)
+ return err;
+
+- err = flexcan_chip_start(dev);
++ err = flexcan_transceiver_enable(priv);
+ if (err)
+ return err;
+
++ err = flexcan_chip_start(dev);
++ if (err) {
++ flexcan_transceiver_disable(priv);
++ return err;
++ }
++
+ flexcan_chip_interrupts_enable(dev);
+ }
++
++ priv->can.state = CAN_STATE_ERROR_ACTIVE;
+ }
+
+ return 0;
+diff --git a/drivers/net/can/rcar/rcar_canfd.c b/drivers/net/can/rcar/rcar_canfd.c
+index df1a5d0b37b226..aa3df0d05b853b 100644
+--- a/drivers/net/can/rcar/rcar_canfd.c
++++ b/drivers/net/can/rcar/rcar_canfd.c
+@@ -787,22 +787,14 @@ static void rcar_canfd_configure_controller(struct rcar_canfd_global *gpriv)
+ }
+
+ static void rcar_canfd_configure_afl_rules(struct rcar_canfd_global *gpriv,
+- u32 ch)
++ u32 ch, u32 rule_entry)
+ {
+- u32 cfg;
+- int offset, start, page, num_rules = RCANFD_CHANNEL_NUMRULES;
++ int offset, page, num_rules = RCANFD_CHANNEL_NUMRULES;
++ u32 rule_entry_index = rule_entry % 16;
+ u32 ridx = ch + RCANFD_RFFIFO_IDX;
+
+- if (ch == 0) {
+- start = 0; /* Channel 0 always starts from 0th rule */
+- } else {
+- /* Get number of Channel 0 rules and adjust */
+- cfg = rcar_canfd_read(gpriv->base, RCANFD_GAFLCFG(ch));
+- start = RCANFD_GAFLCFG_GETRNC(gpriv, 0, cfg);
+- }
+-
+ /* Enable write access to entry */
+- page = RCANFD_GAFL_PAGENUM(start);
++ page = RCANFD_GAFL_PAGENUM(rule_entry);
+ rcar_canfd_set_bit(gpriv->base, RCANFD_GAFLECTR,
+ (RCANFD_GAFLECTR_AFLPN(gpriv, page) |
+ RCANFD_GAFLECTR_AFLDAE));
+@@ -818,13 +810,13 @@ static void rcar_canfd_configure_afl_rules(struct rcar_canfd_global *gpriv,
+ offset = RCANFD_C_GAFL_OFFSET;
+
+ /* Accept all IDs */
+- rcar_canfd_write(gpriv->base, RCANFD_GAFLID(offset, start), 0);
++ rcar_canfd_write(gpriv->base, RCANFD_GAFLID(offset, rule_entry_index), 0);
+ /* IDE or RTR is not considered for matching */
+- rcar_canfd_write(gpriv->base, RCANFD_GAFLM(offset, start), 0);
++ rcar_canfd_write(gpriv->base, RCANFD_GAFLM(offset, rule_entry_index), 0);
+ /* Any data length accepted */
+- rcar_canfd_write(gpriv->base, RCANFD_GAFLP0(offset, start), 0);
++ rcar_canfd_write(gpriv->base, RCANFD_GAFLP0(offset, rule_entry_index), 0);
+ /* Place the msg in corresponding Rx FIFO entry */
+- rcar_canfd_set_bit(gpriv->base, RCANFD_GAFLP1(offset, start),
++ rcar_canfd_set_bit(gpriv->base, RCANFD_GAFLP1(offset, rule_entry_index),
+ RCANFD_GAFLP1_GAFLFDP(ridx));
+
+ /* Disable write access to page */
+@@ -1851,6 +1843,7 @@ static int rcar_canfd_probe(struct platform_device *pdev)
+ unsigned long channels_mask = 0;
+ int err, ch_irq, g_irq;
+ int g_err_irq, g_recc_irq;
++ u32 rule_entry = 0;
+ bool fdmode = true; /* CAN FD only mode - default */
+ char name[9] = "channelX";
+ int i;
+@@ -2023,7 +2016,8 @@ static int rcar_canfd_probe(struct platform_device *pdev)
+ rcar_canfd_configure_tx(gpriv, ch);
+
+ /* Configure receive rules */
+- rcar_canfd_configure_afl_rules(gpriv, ch);
++ rcar_canfd_configure_afl_rules(gpriv, ch, rule_entry);
++ rule_entry += RCANFD_CHANNEL_NUMRULES;
+ }
+
+ /* Configure common interrupts */
+diff --git a/drivers/net/can/usb/ucan.c b/drivers/net/can/usb/ucan.c
+index 39a63b7313a46d..07406daf7c88ed 100644
+--- a/drivers/net/can/usb/ucan.c
++++ b/drivers/net/can/usb/ucan.c
+@@ -186,7 +186,7 @@ union ucan_ctl_payload {
+ */
+ struct ucan_ctl_cmd_get_protocol_version cmd_get_protocol_version;
+
+- u8 raw[128];
++ u8 fw_str[128];
+ } __packed;
+
+ enum {
+@@ -424,18 +424,20 @@ static int ucan_ctrl_command_out(struct ucan_priv *up,
+ UCAN_USB_CTL_PIPE_TIMEOUT);
+ }
+
+-static int ucan_device_request_in(struct ucan_priv *up,
+- u8 cmd, u16 subcmd, u16 datalen)
++static void ucan_get_fw_str(struct ucan_priv *up, char *fw_str, size_t size)
+ {
+- return usb_control_msg(up->udev,
+- usb_rcvctrlpipe(up->udev, 0),
+- cmd,
+- USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+- subcmd,
+- 0,
+- up->ctl_msg_buffer,
+- datalen,
+- UCAN_USB_CTL_PIPE_TIMEOUT);
++ int ret;
++
++ ret = usb_control_msg(up->udev, usb_rcvctrlpipe(up->udev, 0),
++ UCAN_DEVICE_GET_FW_STRING,
++ USB_DIR_IN | USB_TYPE_VENDOR |
++ USB_RECIP_DEVICE,
++ 0, 0, fw_str, size - 1,
++ UCAN_USB_CTL_PIPE_TIMEOUT);
++ if (ret > 0)
++ fw_str[ret] = '\0';
++ else
++ strscpy(fw_str, "unknown", size);
+ }
+
+ /* Parse the device information structure reported by the device and
+@@ -1314,7 +1316,6 @@ static int ucan_probe(struct usb_interface *intf,
+ u8 in_ep_addr;
+ u8 out_ep_addr;
+ union ucan_ctl_payload *ctl_msg_buffer;
+- char firmware_str[sizeof(union ucan_ctl_payload) + 1];
+
+ udev = interface_to_usbdev(intf);
+
+@@ -1527,17 +1528,6 @@ static int ucan_probe(struct usb_interface *intf,
+ */
+ ucan_parse_device_info(up, &ctl_msg_buffer->cmd_get_device_info);
+
+- /* just print some device information - if available */
+- ret = ucan_device_request_in(up, UCAN_DEVICE_GET_FW_STRING, 0,
+- sizeof(union ucan_ctl_payload));
+- if (ret > 0) {
+- /* copy string while ensuring zero termination */
+- strscpy(firmware_str, up->ctl_msg_buffer->raw,
+- sizeof(union ucan_ctl_payload) + 1);
+- } else {
+- strcpy(firmware_str, "unknown");
+- }
+-
+ /* device is compatible, reset it */
+ ret = ucan_ctrl_command_out(up, UCAN_COMMAND_RESET, 0, 0);
+ if (ret < 0)
+@@ -1555,7 +1545,10 @@ static int ucan_probe(struct usb_interface *intf,
+
+ /* initialisation complete, log device info */
+ netdev_info(up->netdev, "registered device\n");
+- netdev_info(up->netdev, "firmware string: %s\n", firmware_str);
++ ucan_get_fw_str(up, up->ctl_msg_buffer->fw_str,
++ sizeof(up->ctl_msg_buffer->fw_str));
++ netdev_info(up->netdev, "firmware string: %s\n",
++ up->ctl_msg_buffer->fw_str);
+
+ /* success */
+ return 0;
+diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+index 0c2ba2fa88c466..36802e0a8b570f 100644
+--- a/drivers/net/ethernet/microsoft/mana/gdma_main.c
++++ b/drivers/net/ethernet/microsoft/mana/gdma_main.c
+@@ -131,9 +131,10 @@ static int mana_gd_detect_devices(struct pci_dev *pdev)
+ struct gdma_list_devices_resp resp = {};
+ struct gdma_general_req req = {};
+ struct gdma_dev_id dev;
+- u32 i, max_num_devs;
++ int found_dev = 0;
+ u16 dev_type;
+ int err;
++ u32 i;
+
+ mana_gd_init_req_hdr(&req.hdr, GDMA_LIST_DEVICES, sizeof(req),
+ sizeof(resp));
+@@ -145,12 +146,17 @@ static int mana_gd_detect_devices(struct pci_dev *pdev)
+ return err ? err : -EPROTO;
+ }
+
+- max_num_devs = min_t(u32, MAX_NUM_GDMA_DEVICES, resp.num_of_devs);
+-
+- for (i = 0; i < max_num_devs; i++) {
++ for (i = 0; i < GDMA_DEV_LIST_SIZE &&
++ found_dev < resp.num_of_devs; i++) {
+ dev = resp.devs[i];
+ dev_type = dev.type;
+
++ /* Skip empty devices */
++ if (dev.as_uint32 == 0)
++ continue;
++
++ found_dev++;
++
+ /* HWC is already detected in mana_hwc_create_channel(). */
+ if (dev_type == GDMA_DEVICE_HWC)
+ continue;
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 3e090f87f97ebd..308a2b72a65de3 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2225,14 +2225,18 @@ static void am65_cpsw_nuss_remove_tx_chns(struct am65_cpsw_common *common)
+ static int am65_cpsw_nuss_ndev_add_tx_napi(struct am65_cpsw_common *common)
+ {
+ struct device *dev = common->dev;
++ struct am65_cpsw_tx_chn *tx_chn;
+ int i, ret = 0;
+
+ for (i = 0; i < common->tx_ch_num; i++) {
+- struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
++ tx_chn = &common->tx_chns[i];
+
+ hrtimer_init(&tx_chn->tx_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_PINNED);
+ tx_chn->tx_hrtimer.function = &am65_cpsw_nuss_tx_timer_callback;
+
++ netif_napi_add_tx(common->dma_ndev, &tx_chn->napi_tx,
++ am65_cpsw_nuss_tx_poll);
++
+ ret = devm_request_irq(dev, tx_chn->irq,
+ am65_cpsw_nuss_tx_irq,
+ IRQF_TRIGGER_HIGH,
+@@ -2242,19 +2246,16 @@ static int am65_cpsw_nuss_ndev_add_tx_napi(struct am65_cpsw_common *common)
+ tx_chn->id, tx_chn->irq, ret);
+ goto err;
+ }
+-
+- netif_napi_add_tx(common->dma_ndev, &tx_chn->napi_tx,
+- am65_cpsw_nuss_tx_poll);
+ }
+
+ return 0;
+
+ err:
+- for (--i ; i >= 0 ; i--) {
+- struct am65_cpsw_tx_chn *tx_chn = &common->tx_chns[i];
+-
+- netif_napi_del(&tx_chn->napi_tx);
++ netif_napi_del(&tx_chn->napi_tx);
++ for (--i; i >= 0; i--) {
++ tx_chn = &common->tx_chns[i];
+ devm_free_irq(dev, tx_chn->irq, tx_chn);
++ netif_napi_del(&tx_chn->napi_tx);
+ }
+
+ return ret;
+@@ -2488,6 +2489,9 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
+ HRTIMER_MODE_REL_PINNED);
+ flow->rx_hrtimer.function = &am65_cpsw_nuss_rx_timer_callback;
+
++ netif_napi_add(common->dma_ndev, &flow->napi_rx,
++ am65_cpsw_nuss_rx_poll);
++
+ ret = devm_request_irq(dev, flow->irq,
+ am65_cpsw_nuss_rx_irq,
+ IRQF_TRIGGER_HIGH,
+@@ -2496,11 +2500,8 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
+ dev_err(dev, "failure requesting rx %d irq %u, %d\n",
+ i, flow->irq, ret);
+ flow->irq = -EINVAL;
+- goto err_flow;
++ goto err_request_irq;
+ }
+-
+- netif_napi_add(common->dma_ndev, &flow->napi_rx,
+- am65_cpsw_nuss_rx_poll);
+ }
+
+ /* setup classifier to route priorities to flows */
+@@ -2508,11 +2509,14 @@ static int am65_cpsw_nuss_init_rx_chns(struct am65_cpsw_common *common)
+
+ return 0;
+
++err_request_irq:
++ netif_napi_del(&flow->napi_rx);
++
+ err_flow:
+- for (--i; i >= 0 ; i--) {
++ for (--i; i >= 0; i--) {
+ flow = &rx_chn->flows[i];
+- netif_napi_del(&flow->napi_rx);
+ devm_free_irq(dev, flow->irq, flow);
++ netif_napi_del(&flow->napi_rx);
+ }
+
+ err:
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.c b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+index cb11635a8d1209..6f0700d156e710 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.c
+@@ -1555,6 +1555,7 @@ static int prueth_probe(struct platform_device *pdev)
+ }
+
+ spin_lock_init(&prueth->vtbl_lock);
++ spin_lock_init(&prueth->stats_lock);
+ /* setup netdev interfaces */
+ if (eth0_node) {
+ ret = prueth_netdev_init(prueth, eth0_node);
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_prueth.h b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
+index 5473315ea20406..e456a11c5d4e38 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_prueth.h
++++ b/drivers/net/ethernet/ti/icssg/icssg_prueth.h
+@@ -297,6 +297,8 @@ struct prueth {
+ int default_vlan;
+ /** @vtbl_lock: Lock for vtbl in shared memory */
+ spinlock_t vtbl_lock;
++ /** @stats_lock: Lock for reading icssg stats */
++ spinlock_t stats_lock;
+ };
+
+ struct emac_tx_ts_response {
+diff --git a/drivers/net/ethernet/ti/icssg/icssg_stats.c b/drivers/net/ethernet/ti/icssg/icssg_stats.c
+index 8800bd3a8d074c..6f0edae38ea242 100644
+--- a/drivers/net/ethernet/ti/icssg/icssg_stats.c
++++ b/drivers/net/ethernet/ti/icssg/icssg_stats.c
+@@ -26,6 +26,8 @@ void emac_update_hardware_stats(struct prueth_emac *emac)
+ u32 val, reg;
+ int i;
+
++ spin_lock(&prueth->stats_lock);
++
+ for (i = 0; i < ARRAY_SIZE(icssg_all_miig_stats); i++) {
+ regmap_read(prueth->miig_rt,
+ base + icssg_all_miig_stats[i].offset,
+@@ -51,6 +53,8 @@ void emac_update_hardware_stats(struct prueth_emac *emac)
+ emac->pa_stats[i] += val;
+ }
+ }
++
++ spin_unlock(&prueth->stats_lock);
+ }
+
+ void icssg_stats_work_handler(struct work_struct *work)
+diff --git a/drivers/net/phy/phy_link_topology.c b/drivers/net/phy/phy_link_topology.c
+index 4a5d73002a1a85..0e9e987f37dd84 100644
+--- a/drivers/net/phy/phy_link_topology.c
++++ b/drivers/net/phy/phy_link_topology.c
+@@ -73,7 +73,7 @@ int phy_link_topo_add_phy(struct net_device *dev,
+ xa_limit_32b, &topo->next_phy_index,
+ GFP_KERNEL);
+
+- if (ret)
++ if (ret < 0)
+ goto err;
+
+ return 0;
+diff --git a/drivers/pmdomain/amlogic/meson-secure-pwrc.c b/drivers/pmdomain/amlogic/meson-secure-pwrc.c
+index 42ce41a2fe3a0c..ff76ea36835e53 100644
+--- a/drivers/pmdomain/amlogic/meson-secure-pwrc.c
++++ b/drivers/pmdomain/amlogic/meson-secure-pwrc.c
+@@ -221,7 +221,7 @@ static const struct meson_secure_pwrc_domain_desc t7_pwrc_domains[] = {
+ SEC_PD(T7_VI_CLK2, 0),
+ /* ETH is for ethernet online wakeup, and should be always on */
+ SEC_PD(T7_ETH, GENPD_FLAG_ALWAYS_ON),
+- SEC_PD(T7_ISP, 0),
++ TOP_PD(T7_ISP, 0, PWRC_T7_MIPI_ISP_ID),
+ SEC_PD(T7_MIPI_ISP, 0),
+ TOP_PD(T7_GDC, 0, PWRC_T7_NIC3_ID),
+ TOP_PD(T7_DEWARP, 0, PWRC_T7_NIC3_ID),
+diff --git a/drivers/regulator/core.c b/drivers/regulator/core.c
+index 4bb2652740d001..1f4698d724bb78 100644
+--- a/drivers/regulator/core.c
++++ b/drivers/regulator/core.c
+@@ -2024,6 +2024,10 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+
+ if (have_full_constraints()) {
+ r = dummy_regulator_rdev;
++ if (!r) {
++ ret = -EPROBE_DEFER;
++ goto out;
++ }
+ get_device(&r->dev);
+ } else {
+ dev_err(dev, "Failed to resolve %s-supply for %s\n",
+@@ -2041,6 +2045,10 @@ static int regulator_resolve_supply(struct regulator_dev *rdev)
+ goto out;
+ }
+ r = dummy_regulator_rdev;
++ if (!r) {
++ ret = -EPROBE_DEFER;
++ goto out;
++ }
+ get_device(&r->dev);
+ }
+
+@@ -2166,8 +2174,10 @@ struct regulator *_regulator_get_common(struct regulator_dev *rdev, struct devic
+ * enabled, even if it isn't hooked up, and just
+ * provide a dummy.
+ */
+- dev_warn(dev, "supply %s not found, using dummy regulator\n", id);
+ rdev = dummy_regulator_rdev;
++ if (!rdev)
++ return ERR_PTR(-EPROBE_DEFER);
++ dev_warn(dev, "supply %s not found, using dummy regulator\n", id);
+ get_device(&rdev->dev);
+ break;
+
+diff --git a/drivers/regulator/dummy.c b/drivers/regulator/dummy.c
+index 5b9b9e4e762d52..9f59889129abec 100644
+--- a/drivers/regulator/dummy.c
++++ b/drivers/regulator/dummy.c
+@@ -60,7 +60,7 @@ static struct platform_driver dummy_regulator_driver = {
+ .probe = dummy_regulator_probe,
+ .driver = {
+ .name = "reg-dummy",
+- .probe_type = PROBE_PREFER_ASYNCHRONOUS,
++ .probe_type = PROBE_FORCE_SYNCHRONOUS,
+ },
+ };
+
+diff --git a/drivers/soc/imx/soc-imx8m.c b/drivers/soc/imx/soc-imx8m.c
+index 5ea8887828c064..3ed8161d7d28ba 100644
+--- a/drivers/soc/imx/soc-imx8m.c
++++ b/drivers/soc/imx/soc-imx8m.c
+@@ -30,11 +30,9 @@
+
+ struct imx8_soc_data {
+ char *name;
+- int (*soc_revision)(u32 *socrev);
++ int (*soc_revision)(u32 *socrev, u64 *socuid);
+ };
+
+-static u64 soc_uid;
+-
+ #ifdef CONFIG_HAVE_ARM_SMCCC
+ static u32 imx8mq_soc_revision_from_atf(void)
+ {
+@@ -51,24 +49,22 @@ static u32 imx8mq_soc_revision_from_atf(void)
+ static inline u32 imx8mq_soc_revision_from_atf(void) { return 0; };
+ #endif
+
+-static int imx8mq_soc_revision(u32 *socrev)
++static int imx8mq_soc_revision(u32 *socrev, u64 *socuid)
+ {
+- struct device_node *np;
++ struct device_node *np __free(device_node) =
++ of_find_compatible_node(NULL, NULL, "fsl,imx8mq-ocotp");
+ void __iomem *ocotp_base;
+ u32 magic;
+ u32 rev;
+ struct clk *clk;
+ int ret;
+
+- np = of_find_compatible_node(NULL, NULL, "fsl,imx8mq-ocotp");
+ if (!np)
+ return -EINVAL;
+
+ ocotp_base = of_iomap(np, 0);
+- if (!ocotp_base) {
+- ret = -EINVAL;
+- goto err_iomap;
+- }
++ if (!ocotp_base)
++ return -EINVAL;
+
+ clk = of_clk_get_by_name(np, NULL);
+ if (IS_ERR(clk)) {
+@@ -89,44 +85,39 @@ static int imx8mq_soc_revision(u32 *socrev)
+ rev = REV_B1;
+ }
+
+- soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH);
+- soc_uid <<= 32;
+- soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW);
++ *socuid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH);
++ *socuid <<= 32;
++ *socuid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW);
+
+ *socrev = rev;
+
+ clk_disable_unprepare(clk);
+ clk_put(clk);
+ iounmap(ocotp_base);
+- of_node_put(np);
+
+ return 0;
+
+ err_clk:
+ iounmap(ocotp_base);
+-err_iomap:
+- of_node_put(np);
+ return ret;
+ }
+
+-static int imx8mm_soc_uid(void)
++static int imx8mm_soc_uid(u64 *socuid)
+ {
++ struct device_node *np __free(device_node) =
++ of_find_compatible_node(NULL, NULL, "fsl,imx8mm-ocotp");
+ void __iomem *ocotp_base;
+- struct device_node *np;
+ struct clk *clk;
+ int ret = 0;
+ u32 offset = of_machine_is_compatible("fsl,imx8mp") ?
+ IMX8MP_OCOTP_UID_OFFSET : 0;
+
+- np = of_find_compatible_node(NULL, NULL, "fsl,imx8mm-ocotp");
+ if (!np)
+ return -EINVAL;
+
+ ocotp_base = of_iomap(np, 0);
+- if (!ocotp_base) {
+- ret = -EINVAL;
+- goto err_iomap;
+- }
++ if (!ocotp_base)
++ return -EINVAL;
+
+ clk = of_clk_get_by_name(np, NULL);
+ if (IS_ERR(clk)) {
+@@ -136,47 +127,36 @@ static int imx8mm_soc_uid(void)
+
+ clk_prepare_enable(clk);
+
+- soc_uid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH + offset);
+- soc_uid <<= 32;
+- soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW + offset);
++ *socuid = readl_relaxed(ocotp_base + OCOTP_UID_HIGH + offset);
++ *socuid <<= 32;
++ *socuid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW + offset);
+
+ clk_disable_unprepare(clk);
+ clk_put(clk);
+
+ err_clk:
+ iounmap(ocotp_base);
+-err_iomap:
+- of_node_put(np);
+-
+ return ret;
+ }
+
+-static int imx8mm_soc_revision(u32 *socrev)
++static int imx8mm_soc_revision(u32 *socrev, u64 *socuid)
+ {
+- struct device_node *np;
++ struct device_node *np __free(device_node) =
++ of_find_compatible_node(NULL, NULL, "fsl,imx8mm-anatop");
+ void __iomem *anatop_base;
+- int ret;
+
+- np = of_find_compatible_node(NULL, NULL, "fsl,imx8mm-anatop");
+ if (!np)
+ return -EINVAL;
+
+ anatop_base = of_iomap(np, 0);
+- if (!anatop_base) {
+- ret = -EINVAL;
+- goto err_iomap;
+- }
++ if (!anatop_base)
++ return -EINVAL;
+
+ *socrev = readl_relaxed(anatop_base + ANADIG_DIGPROG_IMX8MM);
+
+ iounmap(anatop_base);
+- of_node_put(np);
+
+- return imx8mm_soc_uid();
+-
+-err_iomap:
+- of_node_put(np);
+- return ret;
++ return imx8mm_soc_uid(socuid);
+ }
+
+ static const struct imx8_soc_data imx8mq_soc_data = {
+@@ -207,21 +187,34 @@ static __maybe_unused const struct of_device_id imx8_soc_match[] = {
+ { }
+ };
+
+-#define imx8_revision(soc_rev) \
+- soc_rev ? \
+- kasprintf(GFP_KERNEL, "%d.%d", (soc_rev >> 4) & 0xf, soc_rev & 0xf) : \
++#define imx8_revision(dev, soc_rev) \
++ (soc_rev) ? \
++ devm_kasprintf((dev), GFP_KERNEL, "%d.%d", ((soc_rev) >> 4) & 0xf, (soc_rev) & 0xf) : \
+ "unknown"
+
++static void imx8m_unregister_soc(void *data)
++{
++ soc_device_unregister(data);
++}
++
++static void imx8m_unregister_cpufreq(void *data)
++{
++ platform_device_unregister(data);
++}
++
+ static int imx8m_soc_probe(struct platform_device *pdev)
+ {
+ struct soc_device_attribute *soc_dev_attr;
+- struct soc_device *soc_dev;
++ struct platform_device *cpufreq_dev;
++ const struct imx8_soc_data *data;
++ struct device *dev = &pdev->dev;
+ const struct of_device_id *id;
++ struct soc_device *soc_dev;
+ u32 soc_rev = 0;
+- const struct imx8_soc_data *data;
++ u64 soc_uid = 0;
+ int ret;
+
+- soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
++ soc_dev_attr = devm_kzalloc(dev, sizeof(*soc_dev_attr), GFP_KERNEL);
+ if (!soc_dev_attr)
+ return -ENOMEM;
+
+@@ -229,58 +222,52 @@ static int imx8m_soc_probe(struct platform_device *pdev)
+
+ ret = of_property_read_string(of_root, "model", &soc_dev_attr->machine);
+ if (ret)
+- goto free_soc;
++ return ret;
+
+ id = of_match_node(imx8_soc_match, of_root);
+- if (!id) {
+- ret = -ENODEV;
+- goto free_soc;
+- }
++ if (!id)
++ return -ENODEV;
+
+ data = id->data;
+ if (data) {
+ soc_dev_attr->soc_id = data->name;
+ if (data->soc_revision) {
+- ret = data->soc_revision(&soc_rev);
++ ret = data->soc_revision(&soc_rev, &soc_uid);
+ if (ret)
+- goto free_soc;
++ return ret;
+ }
+ }
+
+- soc_dev_attr->revision = imx8_revision(soc_rev);
+- if (!soc_dev_attr->revision) {
+- ret = -ENOMEM;
+- goto free_soc;
+- }
++ soc_dev_attr->revision = imx8_revision(dev, soc_rev);
++ if (!soc_dev_attr->revision)
++ return -ENOMEM;
+
+- soc_dev_attr->serial_number = kasprintf(GFP_KERNEL, "%016llX", soc_uid);
+- if (!soc_dev_attr->serial_number) {
+- ret = -ENOMEM;
+- goto free_rev;
+- }
++ soc_dev_attr->serial_number = devm_kasprintf(dev, GFP_KERNEL, "%016llX", soc_uid);
++ if (!soc_dev_attr->serial_number)
++ return -ENOMEM;
+
+ soc_dev = soc_device_register(soc_dev_attr);
+- if (IS_ERR(soc_dev)) {
+- ret = PTR_ERR(soc_dev);
+- goto free_serial_number;
+- }
++ if (IS_ERR(soc_dev))
++ return PTR_ERR(soc_dev);
++
++ ret = devm_add_action(dev, imx8m_unregister_soc, soc_dev);
++ if (ret)
++ return ret;
+
+ pr_info("SoC: %s revision %s\n", soc_dev_attr->soc_id,
+ soc_dev_attr->revision);
+
+- if (IS_ENABLED(CONFIG_ARM_IMX_CPUFREQ_DT))
+- platform_device_register_simple("imx-cpufreq-dt", -1, NULL, 0);
++ if (IS_ENABLED(CONFIG_ARM_IMX_CPUFREQ_DT)) {
++ cpufreq_dev = platform_device_register_simple("imx-cpufreq-dt", -1, NULL, 0);
++ if (IS_ERR(cpufreq_dev))
++ return dev_err_probe(dev, PTR_ERR(cpufreq_dev),
++ "Failed to register imx-cpufreq-dev device\n");
++ ret = devm_add_action(dev, imx8m_unregister_cpufreq, cpufreq_dev);
++ if (ret)
++ return ret;
++ }
+
+ return 0;
+-
+-free_serial_number:
+- kfree(soc_dev_attr->serial_number);
+-free_rev:
+- if (strcmp(soc_dev_attr->revision, "unknown"))
+- kfree(soc_dev_attr->revision);
+-free_soc:
+- kfree(soc_dev_attr);
+- return ret;
+ }
+
+ static struct platform_driver imx8m_soc_driver = {
+diff --git a/drivers/soc/qcom/pdr_interface.c b/drivers/soc/qcom/pdr_interface.c
+index 328b6153b2be6c..71be378d2e43a5 100644
+--- a/drivers/soc/qcom/pdr_interface.c
++++ b/drivers/soc/qcom/pdr_interface.c
+@@ -75,7 +75,6 @@ static int pdr_locator_new_server(struct qmi_handle *qmi,
+ {
+ struct pdr_handle *pdr = container_of(qmi, struct pdr_handle,
+ locator_hdl);
+- struct pdr_service *pds;
+
+ mutex_lock(&pdr->lock);
+ /* Create a local client port for QMI communication */
+@@ -87,12 +86,7 @@ static int pdr_locator_new_server(struct qmi_handle *qmi,
+ mutex_unlock(&pdr->lock);
+
+ /* Service pending lookup requests */
+- mutex_lock(&pdr->list_lock);
+- list_for_each_entry(pds, &pdr->lookups, node) {
+- if (pds->need_locator_lookup)
+- schedule_work(&pdr->locator_work);
+- }
+- mutex_unlock(&pdr->list_lock);
++ schedule_work(&pdr->locator_work);
+
+ return 0;
+ }
+diff --git a/fs/libfs.c b/fs/libfs.c
+index b0f262223b5351..3cb49463a84969 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -492,7 +492,7 @@ offset_dir_lookup(struct dentry *parent, loff_t offset)
+ found = find_positive_dentry(parent, NULL, false);
+ else {
+ rcu_read_lock();
+- child = mas_find(&mas, DIR_OFFSET_MAX);
++ child = mas_find_rev(&mas, DIR_OFFSET_MIN);
+ found = find_positive_dentry(parent, child, false);
+ rcu_read_unlock();
+ }
+diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c
+index 82290c92ba7a29..412d4da7422701 100644
+--- a/fs/netfs/write_collect.c
++++ b/fs/netfs/write_collect.c
+@@ -576,7 +576,8 @@ void netfs_write_collection_worker(struct work_struct *work)
+ trace_netfs_rreq(wreq, netfs_rreq_trace_write_done);
+
+ if (wreq->io_streams[1].active &&
+- wreq->io_streams[1].failed) {
++ wreq->io_streams[1].failed &&
++ ictx->ops->invalidate_cache) {
+ /* Cache write failure doesn't prevent writeback completion
+ * unless we're in disconnected mode.
+ */
+diff --git a/fs/proc/generic.c b/fs/proc/generic.c
+index dbe82cf23ee49c..3431b083f7d05c 100644
+--- a/fs/proc/generic.c
++++ b/fs/proc/generic.c
+@@ -557,10 +557,16 @@ struct proc_dir_entry *proc_create_reg(const char *name, umode_t mode,
+ return p;
+ }
+
+-static inline void pde_set_flags(struct proc_dir_entry *pde)
++static void pde_set_flags(struct proc_dir_entry *pde)
+ {
+ if (pde->proc_ops->proc_flags & PROC_ENTRY_PERMANENT)
+ pde->flags |= PROC_ENTRY_PERMANENT;
++ if (pde->proc_ops->proc_read_iter)
++ pde->flags |= PROC_ENTRY_proc_read_iter;
++#ifdef CONFIG_COMPAT
++ if (pde->proc_ops->proc_compat_ioctl)
++ pde->flags |= PROC_ENTRY_proc_compat_ioctl;
++#endif
+ }
+
+ struct proc_dir_entry *proc_create_data(const char *name, umode_t mode,
+@@ -624,6 +630,7 @@ struct proc_dir_entry *proc_create_seq_private(const char *name, umode_t mode,
+ p->proc_ops = &proc_seq_ops;
+ p->seq_ops = ops;
+ p->state_size = state_size;
++ pde_set_flags(p);
+ return proc_register(parent, p);
+ }
+ EXPORT_SYMBOL(proc_create_seq_private);
+@@ -654,6 +661,7 @@ struct proc_dir_entry *proc_create_single_data(const char *name, umode_t mode,
+ return NULL;
+ p->proc_ops = &proc_single_ops;
+ p->single_show = show;
++ pde_set_flags(p);
+ return proc_register(parent, p);
+ }
+ EXPORT_SYMBOL(proc_create_single_data);
+diff --git a/fs/proc/inode.c b/fs/proc/inode.c
+index 626ad7bd94f244..a3eb3b740f7664 100644
+--- a/fs/proc/inode.c
++++ b/fs/proc/inode.c
+@@ -656,13 +656,13 @@ struct inode *proc_get_inode(struct super_block *sb, struct proc_dir_entry *de)
+
+ if (S_ISREG(inode->i_mode)) {
+ inode->i_op = de->proc_iops;
+- if (de->proc_ops->proc_read_iter)
++ if (pde_has_proc_read_iter(de))
+ inode->i_fop = &proc_iter_file_ops;
+ else
+ inode->i_fop = &proc_reg_file_ops;
+ #ifdef CONFIG_COMPAT
+- if (de->proc_ops->proc_compat_ioctl) {
+- if (de->proc_ops->proc_read_iter)
++ if (pde_has_proc_compat_ioctl(de)) {
++ if (pde_has_proc_read_iter(de))
+ inode->i_fop = &proc_iter_file_ops_compat;
+ else
+ inode->i_fop = &proc_reg_file_ops_compat;
+diff --git a/fs/proc/internal.h b/fs/proc/internal.h
+index 87e4d628202520..4e0c5b57ffdbb8 100644
+--- a/fs/proc/internal.h
++++ b/fs/proc/internal.h
+@@ -85,6 +85,20 @@ static inline void pde_make_permanent(struct proc_dir_entry *pde)
+ pde->flags |= PROC_ENTRY_PERMANENT;
+ }
+
++static inline bool pde_has_proc_read_iter(const struct proc_dir_entry *pde)
++{
++ return pde->flags & PROC_ENTRY_proc_read_iter;
++}
++
++static inline bool pde_has_proc_compat_ioctl(const struct proc_dir_entry *pde)
++{
++#ifdef CONFIG_COMPAT
++ return pde->flags & PROC_ENTRY_proc_compat_ioctl;
++#else
++ return false;
++#endif
++}
++
+ extern struct kmem_cache *proc_dir_entry_cache;
+ void pde_free(struct proc_dir_entry *pde);
+
+diff --git a/fs/smb/server/smbacl.c b/fs/smb/server/smbacl.c
+index da8ed72f335d99..109036e2227ca1 100644
+--- a/fs/smb/server/smbacl.c
++++ b/fs/smb/server/smbacl.c
+@@ -398,7 +398,9 @@ static void parse_dacl(struct mnt_idmap *idmap,
+ if (num_aces <= 0)
+ return;
+
+- if (num_aces > ULONG_MAX / sizeof(struct smb_ace *))
++ if (num_aces > (le16_to_cpu(pdacl->size) - sizeof(struct smb_acl)) /
++ (offsetof(struct smb_ace, sid) +
++ offsetof(struct smb_sid, sub_auth) + sizeof(__le16)))
+ return;
+
+ ret = init_acl_state(&acl_state, num_aces);
+@@ -432,6 +434,7 @@ static void parse_dacl(struct mnt_idmap *idmap,
+ offsetof(struct smb_sid, sub_auth);
+
+ if (end_of_acl - acl_base < acl_size ||
++ ppace[i]->sid.num_subauth == 0 ||
+ ppace[i]->sid.num_subauth > SID_MAX_SUB_AUTHORITIES ||
+ (end_of_acl - acl_base <
+ acl_size + sizeof(__le32) * ppace[i]->sid.num_subauth) ||
+diff --git a/include/linux/key.h b/include/linux/key.h
+index 074dca3222b967..ba05de8579ecc5 100644
+--- a/include/linux/key.h
++++ b/include/linux/key.h
+@@ -236,6 +236,7 @@ struct key {
+ #define KEY_FLAG_ROOT_CAN_INVAL 7 /* set if key can be invalidated by root without permission */
+ #define KEY_FLAG_KEEP 8 /* set if key should not be removed */
+ #define KEY_FLAG_UID_KEYRING 9 /* set if key is a user or user session keyring */
++#define KEY_FLAG_FINAL_PUT 10 /* set if final put has happened on key */
+
+ /* the key type and key description string
+ * - the desc is used to match a key against search criteria
+diff --git a/include/linux/libata.h b/include/linux/libata.h
+index 9b4a6ff03235bc..79974a99265fc2 100644
+--- a/include/linux/libata.h
++++ b/include/linux/libata.h
+@@ -88,6 +88,7 @@ enum ata_quirks {
+ __ATA_QUIRK_MAX_SEC_1024, /* Limit max sects to 1024 */
+ __ATA_QUIRK_MAX_TRIM_128M, /* Limit max trim size to 128M */
+ __ATA_QUIRK_NO_NCQ_ON_ATI, /* Disable NCQ on ATI chipset */
++ __ATA_QUIRK_NO_LPM_ON_ATI, /* Disable LPM on ATI chipset */
+ __ATA_QUIRK_NO_ID_DEV_LOG, /* Identify device log missing */
+ __ATA_QUIRK_NO_LOG_DIR, /* Do not read log directory */
+ __ATA_QUIRK_NO_FUA, /* Do not use FUA */
+@@ -434,6 +435,7 @@ enum {
+ ATA_QUIRK_MAX_SEC_1024 = (1U << __ATA_QUIRK_MAX_SEC_1024),
+ ATA_QUIRK_MAX_TRIM_128M = (1U << __ATA_QUIRK_MAX_TRIM_128M),
+ ATA_QUIRK_NO_NCQ_ON_ATI = (1U << __ATA_QUIRK_NO_NCQ_ON_ATI),
++ ATA_QUIRK_NO_LPM_ON_ATI = (1U << __ATA_QUIRK_NO_LPM_ON_ATI),
+ ATA_QUIRK_NO_ID_DEV_LOG = (1U << __ATA_QUIRK_NO_ID_DEV_LOG),
+ ATA_QUIRK_NO_LOG_DIR = (1U << __ATA_QUIRK_NO_LOG_DIR),
+ ATA_QUIRK_NO_FUA = (1U << __ATA_QUIRK_NO_FUA),
+diff --git a/include/linux/proc_fs.h b/include/linux/proc_fs.h
+index 0b2a8985444097..ea62201c74c402 100644
+--- a/include/linux/proc_fs.h
++++ b/include/linux/proc_fs.h
+@@ -20,10 +20,13 @@ enum {
+ * If in doubt, ignore this flag.
+ */
+ #ifdef MODULE
+- PROC_ENTRY_PERMANENT = 0U,
++ PROC_ENTRY_PERMANENT = 0U,
+ #else
+- PROC_ENTRY_PERMANENT = 1U << 0,
++ PROC_ENTRY_PERMANENT = 1U << 0,
+ #endif
++
++ PROC_ENTRY_proc_read_iter = 1U << 1,
++ PROC_ENTRY_proc_compat_ioctl = 1U << 2,
+ };
+
+ struct proc_ops {
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index 5bb4eaa52e14cf..dd10e02bfc746e 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -683,7 +683,7 @@ enum {
+ #define HCI_ERROR_REMOTE_POWER_OFF 0x15
+ #define HCI_ERROR_LOCAL_HOST_TERM 0x16
+ #define HCI_ERROR_PAIRING_NOT_ALLOWED 0x18
+-#define HCI_ERROR_UNSUPPORTED_REMOTE_FEATURE 0x1e
++#define HCI_ERROR_UNSUPPORTED_REMOTE_FEATURE 0x1a
+ #define HCI_ERROR_INVALID_LL_PARAMS 0x1e
+ #define HCI_ERROR_UNSPECIFIED 0x1f
+ #define HCI_ERROR_ADVERTISING_TIMEOUT 0x3c
+diff --git a/include/net/mana/gdma.h b/include/net/mana/gdma.h
+index de47fa533b1504..6a0e83ac0fdb41 100644
+--- a/include/net/mana/gdma.h
++++ b/include/net/mana/gdma.h
+@@ -406,8 +406,6 @@ struct gdma_context {
+ struct gdma_dev mana_ib;
+ };
+
+-#define MAX_NUM_GDMA_DEVICES 4
+-
+ static inline bool mana_gd_is_mana(struct gdma_dev *gd)
+ {
+ return gd->dev_id.type == GDMA_DEVICE_MANA;
+@@ -554,11 +552,15 @@ enum {
+ #define GDMA_DRV_CAP_FLAG_1_HWC_TIMEOUT_RECONFIG BIT(3)
+ #define GDMA_DRV_CAP_FLAG_1_VARIABLE_INDIRECTION_TABLE_SUPPORT BIT(5)
+
++/* Driver can handle holes (zeros) in the device list */
++#define GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP BIT(11)
++
+ #define GDMA_DRV_CAP_FLAGS1 \
+ (GDMA_DRV_CAP_FLAG_1_EQ_SHARING_MULTI_VPORT | \
+ GDMA_DRV_CAP_FLAG_1_NAPI_WKDONE_FIX | \
+ GDMA_DRV_CAP_FLAG_1_HWC_TIMEOUT_RECONFIG | \
+- GDMA_DRV_CAP_FLAG_1_VARIABLE_INDIRECTION_TABLE_SUPPORT)
++ GDMA_DRV_CAP_FLAG_1_VARIABLE_INDIRECTION_TABLE_SUPPORT | \
++ GDMA_DRV_CAP_FLAG_1_DEV_LIST_HOLES_SUP)
+
+ #define GDMA_DRV_CAP_FLAGS2 0
+
+@@ -619,11 +621,12 @@ struct gdma_query_max_resources_resp {
+ }; /* HW DATA */
+
+ /* GDMA_LIST_DEVICES */
++#define GDMA_DEV_LIST_SIZE 64
+ struct gdma_list_devices_resp {
+ struct gdma_resp_hdr hdr;
+ u32 num_of_devs;
+ u32 reserved;
+- struct gdma_dev_id devs[64];
++ struct gdma_dev_id devs[GDMA_DEV_LIST_SIZE];
+ }; /* HW DATA */
+
+ /* GDMA_REGISTER_DEVICE */
+diff --git a/io_uring/net.c b/io_uring/net.c
+index f32311f6411338..7ea99e082e97e7 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -152,7 +152,7 @@ static void io_netmsg_recycle(struct io_kiocb *req, unsigned int issue_flags)
+ if (iov)
+ kasan_mempool_poison_object(iov);
+ req->async_data = NULL;
+- req->flags &= ~REQ_F_ASYNC_DATA;
++ req->flags &= ~(REQ_F_ASYNC_DATA|REQ_F_NEED_CLEANUP);
+ }
+ }
+
+@@ -447,7 +447,6 @@ int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ static void io_req_msg_cleanup(struct io_kiocb *req,
+ unsigned int issue_flags)
+ {
+- req->flags &= ~REQ_F_NEED_CLEANUP;
+ io_netmsg_recycle(req, issue_flags);
+ }
+
+@@ -1428,6 +1427,7 @@ int io_send_zc(struct io_kiocb *req, unsigned int issue_flags)
+ */
+ if (!(issue_flags & IO_URING_F_UNLOCKED)) {
+ io_notif_flush(zc->notif);
++ zc->notif = NULL;
+ io_req_msg_cleanup(req, 0);
+ }
+ io_req_set_res(req, ret, IORING_CQE_F_MORE);
+@@ -1488,6 +1488,7 @@ int io_sendmsg_zc(struct io_kiocb *req, unsigned int issue_flags)
+ */
+ if (!(issue_flags & IO_URING_F_UNLOCKED)) {
+ io_notif_flush(sr->notif);
++ sr->notif = NULL;
+ io_req_msg_cleanup(req, 0);
+ }
+ io_req_set_res(req, ret, IORING_CQE_F_MORE);
+diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
+index 5b4e6d3bf7bcca..b8fe0b3d0ffb69 100644
+--- a/kernel/dma/direct.c
++++ b/kernel/dma/direct.c
+@@ -584,6 +584,22 @@ int dma_direct_supported(struct device *dev, u64 mask)
+ return mask >= phys_to_dma_unencrypted(dev, min_mask);
+ }
+
++static const struct bus_dma_region *dma_find_range(struct device *dev,
++ unsigned long start_pfn)
++{
++ const struct bus_dma_region *m;
++
++ for (m = dev->dma_range_map; PFN_DOWN(m->size); m++) {
++ unsigned long cpu_start_pfn = PFN_DOWN(m->cpu_start);
++
++ if (start_pfn >= cpu_start_pfn &&
++ start_pfn - cpu_start_pfn < PFN_DOWN(m->size))
++ return m;
++ }
++
++ return NULL;
++}
++
+ /*
+ * To check whether all ram resource ranges are covered by dma range map
+ * Returns 0 when further check is needed
+@@ -593,20 +609,12 @@ static int check_ram_in_range_map(unsigned long start_pfn,
+ unsigned long nr_pages, void *data)
+ {
+ unsigned long end_pfn = start_pfn + nr_pages;
+- const struct bus_dma_region *bdr = NULL;
+- const struct bus_dma_region *m;
+ struct device *dev = data;
+
+ while (start_pfn < end_pfn) {
+- for (m = dev->dma_range_map; PFN_DOWN(m->size); m++) {
+- unsigned long cpu_start_pfn = PFN_DOWN(m->cpu_start);
++ const struct bus_dma_region *bdr;
+
+- if (start_pfn >= cpu_start_pfn &&
+- start_pfn - cpu_start_pfn < PFN_DOWN(m->size)) {
+- bdr = m;
+- break;
+- }
+- }
++ bdr = dma_find_range(dev, start_pfn);
+ if (!bdr)
+ return 1;
+
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 1f817d0c5d2d0e..e9bb1b4c58421f 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -8919,7 +8919,7 @@ void sched_release_group(struct task_group *tg)
+ spin_unlock_irqrestore(&task_group_lock, flags);
+ }
+
+-static struct task_group *sched_get_task_group(struct task_struct *tsk)
++static void sched_change_group(struct task_struct *tsk)
+ {
+ struct task_group *tg;
+
+@@ -8931,13 +8931,7 @@ static struct task_group *sched_get_task_group(struct task_struct *tsk)
+ tg = container_of(task_css_check(tsk, cpu_cgrp_id, true),
+ struct task_group, css);
+ tg = autogroup_task_group(tsk, tg);
+-
+- return tg;
+-}
+-
+-static void sched_change_group(struct task_struct *tsk, struct task_group *group)
+-{
+- tsk->sched_task_group = group;
++ tsk->sched_task_group = tg;
+
+ #ifdef CONFIG_FAIR_GROUP_SCHED
+ if (tsk->sched_class->task_change_group)
+@@ -8958,20 +8952,11 @@ void sched_move_task(struct task_struct *tsk, bool for_autogroup)
+ {
+ int queued, running, queue_flags =
+ DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
+- struct task_group *group;
+ struct rq *rq;
+
+ CLASS(task_rq_lock, rq_guard)(tsk);
+ rq = rq_guard.rq;
+
+- /*
+- * Esp. with SCHED_AUTOGROUP enabled it is possible to get superfluous
+- * group changes.
+- */
+- group = sched_get_task_group(tsk);
+- if (group == tsk->sched_task_group)
+- return;
+-
+ update_rq_clock(rq);
+
+ running = task_current(rq, tsk);
+@@ -8982,7 +8967,7 @@ void sched_move_task(struct task_struct *tsk, bool for_autogroup)
+ if (running)
+ put_prev_task(rq, tsk);
+
+- sched_change_group(tsk, group);
++ sched_change_group(tsk);
+ if (!for_autogroup)
+ scx_cgroup_move_task(tsk);
+
+diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c
+index 99048c33038223..4acdab16579390 100644
+--- a/kernel/trace/trace_fprobe.c
++++ b/kernel/trace/trace_fprobe.c
+@@ -889,13 +889,8 @@ static void __find_tracepoint_module_cb(struct tracepoint *tp, struct module *mo
+
+ if (!data->tpoint && !strcmp(data->tp_name, tp->name)) {
+ data->tpoint = tp;
+- if (!data->mod) {
++ if (!data->mod)
+ data->mod = mod;
+- if (!try_module_get(data->mod)) {
+- data->tpoint = NULL;
+- data->mod = NULL;
+- }
+- }
+ }
+ }
+
+@@ -907,13 +902,7 @@ static void __find_tracepoint_cb(struct tracepoint *tp, void *priv)
+ data->tpoint = tp;
+ }
+
+-/*
+- * Find a tracepoint from kernel and module. If the tracepoint is in a module,
+- * this increments the module refcount to prevent unloading until the
+- * trace_fprobe is registered to the list. After registering the trace_fprobe
+- * on the trace_fprobe list, the module refcount is decremented because
+- * tracepoint_probe_module_cb will handle it.
+- */
++/* Find a tracepoint from the kernel and modules. */
+ static struct tracepoint *find_tracepoint(const char *tp_name,
+ struct module **tp_mod)
+ {
+@@ -942,6 +931,7 @@ static void reenable_trace_fprobe(struct trace_fprobe *tf)
+ }
+ }
+
++/* Find a tracepoint from the specified module. */
+ static struct tracepoint *find_tracepoint_in_module(struct module *mod,
+ const char *tp_name)
+ {
+@@ -977,10 +967,13 @@ static int __tracepoint_probe_module_cb(struct notifier_block *self,
+ reenable_trace_fprobe(tf);
+ }
+ } else if (val == MODULE_STATE_GOING && tp_mod->mod == tf->mod) {
+- tracepoint_probe_unregister(tf->tpoint,
++ unregister_fprobe(&tf->fp);
++ if (trace_fprobe_is_tracepoint(tf)) {
++ tracepoint_probe_unregister(tf->tpoint,
+ tf->tpoint->probestub, NULL);
+- tf->tpoint = NULL;
+- tf->mod = NULL;
++ tf->tpoint = TRACEPOINT_STUB;
++ tf->mod = NULL;
++ }
+ }
+ }
+ mutex_unlock(&event_mutex);
+@@ -1174,6 +1167,11 @@ static int __trace_fprobe_create(int argc, const char *argv[])
+ if (is_tracepoint) {
+ ctx.flags |= TPARG_FL_TPOINT;
+ tpoint = find_tracepoint(symbol, &tp_mod);
++ /* lock the module until this tprobe is registered. */
++ if (tp_mod && !try_module_get(tp_mod)) {
++ tpoint = NULL;
++ tp_mod = NULL;
++ }
+ if (tpoint) {
+ ctx.funcname = kallsyms_lookup(
+ (unsigned long)tpoint->probestub,
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 05adf0392625da..3c37ad6c598bbb 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -1966,8 +1966,19 @@ struct folio *__filemap_get_folio(struct address_space *mapping, pgoff_t index,
+
+ if (err == -EEXIST)
+ goto repeat;
+- if (err)
++ if (err) {
++ /*
++ * When NOWAIT I/O fails to allocate folios this could
++ * be due to a nonblocking memory allocation and not
++ * because the system actually is out of memory.
++ * Return -EAGAIN so that the caller retries in a
++ * blocking fashion instead of propagating -ENOMEM
++ * to the application.
++ */
++ if ((fgp_flags & FGP_NOWAIT) && err == -ENOMEM)
++ err = -EAGAIN;
+ return ERR_PTR(err);
++ }
+ /*
+ * filemap_add_folio locks the page, and for mmap
+ * we expect an unlocked page.
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index f127b61f04a825..40ac11e294231e 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -3224,7 +3224,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
+ folio_account_cleaned(tail,
+ inode_to_wb(folio->mapping->host));
+ __filemap_remove_folio(tail, NULL);
+- folio_put(tail);
++ folio_put_refs(tail, folio_nr_pages(tail));
+ } else if (!PageAnon(page)) {
+ __xa_store(&folio->mapping->i_pages, head[i].index,
+ head + i, 0);
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index ae1d184d035a4d..2d1e402f06f22a 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -1882,9 +1882,18 @@ void drain_all_stock(struct mem_cgroup *root_memcg)
+ static int memcg_hotplug_cpu_dead(unsigned int cpu)
+ {
+ struct memcg_stock_pcp *stock;
++ struct obj_cgroup *old;
++ unsigned long flags;
+
+ stock = &per_cpu(memcg_stock, cpu);
++
++ /* drain_obj_stock requires stock_lock */
++ local_lock_irqsave(&memcg_stock.stock_lock, flags);
++ old = drain_obj_stock(stock);
++ local_unlock_irqrestore(&memcg_stock.stock_lock, flags);
++
+ drain_stock(stock);
++ obj_cgroup_put(old);
+
+ return 0;
+ }
+diff --git a/mm/migrate.c b/mm/migrate.c
+index dfa24e41e8f956..25e7438af968a4 100644
+--- a/mm/migrate.c
++++ b/mm/migrate.c
+@@ -526,15 +526,13 @@ static int __folio_migrate_mapping(struct address_space *mapping,
+ if (folio_test_anon(folio) && folio_test_large(folio))
+ mod_mthp_stat(folio_order(folio), MTHP_STAT_NR_ANON, 1);
+ folio_ref_add(newfolio, nr); /* add cache reference */
+- if (folio_test_swapbacked(folio)) {
++ if (folio_test_swapbacked(folio))
+ __folio_set_swapbacked(newfolio);
+- if (folio_test_swapcache(folio)) {
+- folio_set_swapcache(newfolio);
+- newfolio->private = folio_get_private(folio);
+- }
++ if (folio_test_swapcache(folio)) {
++ folio_set_swapcache(newfolio);
++ newfolio->private = folio_get_private(folio);
+ entries = nr;
+ } else {
+- VM_BUG_ON_FOLIO(folio_test_swapcache(folio), folio);
+ entries = 1;
+ }
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index e0a77fe1b6300d..fd4e0e1cd65e43 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -7094,7 +7094,7 @@ static inline bool has_unaccepted_memory(void)
+
+ static bool cond_accept_memory(struct zone *zone, unsigned int order)
+ {
+- long to_accept;
++ long to_accept, wmark;
+ bool ret = false;
+
+ if (!has_unaccepted_memory())
+@@ -7103,8 +7103,18 @@ static bool cond_accept_memory(struct zone *zone, unsigned int order)
+ if (list_empty(&zone->unaccepted_pages))
+ return false;
+
++ wmark = promo_wmark_pages(zone);
++
++ /*
++ * Watermarks have not been initialized yet.
++ *
++ * Accept one MAX_ORDER page to ensure progress.
++ */
++ if (!wmark)
++ return try_to_accept_memory_one(zone);
++
+ /* How much to accept to get to promo watermark? */
+- to_accept = promo_wmark_pages(zone) -
++ to_accept = wmark -
+ (zone_page_state(zone, NR_FREE_PAGES) -
+ __zone_watermark_unusable_free(zone, order, 0) -
+ zone_page_state(zone, NR_UNACCEPTED));
+diff --git a/net/atm/lec.c b/net/atm/lec.c
+index ffef658862db15..a948dd47c3f347 100644
+--- a/net/atm/lec.c
++++ b/net/atm/lec.c
+@@ -181,6 +181,7 @@ static void
+ lec_send(struct atm_vcc *vcc, struct sk_buff *skb)
+ {
+ struct net_device *dev = skb->dev;
++ unsigned int len = skb->len;
+
+ ATM_SKB(skb)->vcc = vcc;
+ atm_account_tx(vcc, skb);
+@@ -191,7 +192,7 @@ lec_send(struct atm_vcc *vcc, struct sk_buff *skb)
+ }
+
+ dev->stats.tx_packets++;
+- dev->stats.tx_bytes += skb->len;
++ dev->stats.tx_bytes += len;
+ }
+
+ static void lec_tx_timeout(struct net_device *dev, unsigned int txqueue)
+diff --git a/net/batman-adv/bat_iv_ogm.c b/net/batman-adv/bat_iv_ogm.c
+index 74b49c35ddc14d..209180b4c26817 100644
+--- a/net/batman-adv/bat_iv_ogm.c
++++ b/net/batman-adv/bat_iv_ogm.c
+@@ -324,8 +324,7 @@ batadv_iv_ogm_aggr_packet(int buff_pos, int packet_len,
+ /* check if there is enough space for the optional TVLV */
+ next_buff_pos += ntohs(ogm_packet->tvlv_len);
+
+- return (next_buff_pos <= packet_len) &&
+- (next_buff_pos <= BATADV_MAX_AGGREGATION_BYTES);
++ return next_buff_pos <= packet_len;
+ }
+
+ /* send a batman ogm to a given interface */
+diff --git a/net/batman-adv/bat_v_ogm.c b/net/batman-adv/bat_v_ogm.c
+index e503ee0d896bd5..8f89ffe6020ced 100644
+--- a/net/batman-adv/bat_v_ogm.c
++++ b/net/batman-adv/bat_v_ogm.c
+@@ -839,8 +839,7 @@ batadv_v_ogm_aggr_packet(int buff_pos, int packet_len,
+ /* check if there is enough space for the optional TVLV */
+ next_buff_pos += ntohs(ogm2_packet->tvlv_len);
+
+- return (next_buff_pos <= packet_len) &&
+- (next_buff_pos <= BATADV_MAX_AGGREGATION_BYTES);
++ return next_buff_pos <= packet_len;
+ }
+
+ /**
+diff --git a/net/bluetooth/6lowpan.c b/net/bluetooth/6lowpan.c
+index 50cfec8ccac4f7..3c29778171c581 100644
+--- a/net/bluetooth/6lowpan.c
++++ b/net/bluetooth/6lowpan.c
+@@ -825,11 +825,16 @@ static struct sk_buff *chan_alloc_skb_cb(struct l2cap_chan *chan,
+ unsigned long hdr_len,
+ unsigned long len, int nb)
+ {
++ struct sk_buff *skb;
++
+ /* Note that we must allocate using GFP_ATOMIC here as
+ * this function is called originally from netdev hard xmit
+ * function in atomic context.
+ */
+- return bt_skb_alloc(hdr_len + len, GFP_ATOMIC);
++ skb = bt_skb_alloc(hdr_len + len, GFP_ATOMIC);
++ if (!skb)
++ return ERR_PTR(-ENOMEM);
++ return skb;
+ }
+
+ static void chan_suspend_cb(struct l2cap_chan *chan)
+diff --git a/net/core/lwtunnel.c b/net/core/lwtunnel.c
+index 711cd3b4347a79..4417a18b3e951a 100644
+--- a/net/core/lwtunnel.c
++++ b/net/core/lwtunnel.c
+@@ -23,6 +23,8 @@
+ #include <net/ip6_fib.h>
+ #include <net/rtnh.h>
+
++#include "dev.h"
++
+ DEFINE_STATIC_KEY_FALSE(nf_hooks_lwtunnel_enabled);
+ EXPORT_SYMBOL_GPL(nf_hooks_lwtunnel_enabled);
+
+@@ -325,13 +327,23 @@ EXPORT_SYMBOL_GPL(lwtunnel_cmp_encap);
+
+ int lwtunnel_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+- struct dst_entry *dst = skb_dst(skb);
+ const struct lwtunnel_encap_ops *ops;
+ struct lwtunnel_state *lwtstate;
+- int ret = -EINVAL;
++ struct dst_entry *dst;
++ int ret;
++
++ if (dev_xmit_recursion()) {
++ net_crit_ratelimited("%s(): recursion limit reached on datapath\n",
++ __func__);
++ ret = -ENETDOWN;
++ goto drop;
++ }
+
+- if (!dst)
++ dst = skb_dst(skb);
++ if (!dst) {
++ ret = -EINVAL;
+ goto drop;
++ }
+ lwtstate = dst->lwtstate;
+
+ if (lwtstate->type == LWTUNNEL_ENCAP_NONE ||
+@@ -341,8 +353,11 @@ int lwtunnel_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ ret = -EOPNOTSUPP;
+ rcu_read_lock();
+ ops = rcu_dereference(lwtun_encaps[lwtstate->type]);
+- if (likely(ops && ops->output))
++ if (likely(ops && ops->output)) {
++ dev_xmit_recursion_inc();
+ ret = ops->output(net, sk, skb);
++ dev_xmit_recursion_dec();
++ }
+ rcu_read_unlock();
+
+ if (ret == -EOPNOTSUPP)
+@@ -359,13 +374,23 @@ EXPORT_SYMBOL_GPL(lwtunnel_output);
+
+ int lwtunnel_xmit(struct sk_buff *skb)
+ {
+- struct dst_entry *dst = skb_dst(skb);
+ const struct lwtunnel_encap_ops *ops;
+ struct lwtunnel_state *lwtstate;
+- int ret = -EINVAL;
++ struct dst_entry *dst;
++ int ret;
++
++ if (dev_xmit_recursion()) {
++ net_crit_ratelimited("%s(): recursion limit reached on datapath\n",
++ __func__);
++ ret = -ENETDOWN;
++ goto drop;
++ }
+
+- if (!dst)
++ dst = skb_dst(skb);
++ if (!dst) {
++ ret = -EINVAL;
+ goto drop;
++ }
+
+ lwtstate = dst->lwtstate;
+
+@@ -376,8 +401,11 @@ int lwtunnel_xmit(struct sk_buff *skb)
+ ret = -EOPNOTSUPP;
+ rcu_read_lock();
+ ops = rcu_dereference(lwtun_encaps[lwtstate->type]);
+- if (likely(ops && ops->xmit))
++ if (likely(ops && ops->xmit)) {
++ dev_xmit_recursion_inc();
+ ret = ops->xmit(skb);
++ dev_xmit_recursion_dec();
++ }
+ rcu_read_unlock();
+
+ if (ret == -EOPNOTSUPP)
+@@ -394,13 +422,23 @@ EXPORT_SYMBOL_GPL(lwtunnel_xmit);
+
+ int lwtunnel_input(struct sk_buff *skb)
+ {
+- struct dst_entry *dst = skb_dst(skb);
+ const struct lwtunnel_encap_ops *ops;
+ struct lwtunnel_state *lwtstate;
+- int ret = -EINVAL;
++ struct dst_entry *dst;
++ int ret;
+
+- if (!dst)
++ if (dev_xmit_recursion()) {
++ net_crit_ratelimited("%s(): recursion limit reached on datapath\n",
++ __func__);
++ ret = -ENETDOWN;
+ goto drop;
++ }
++
++ dst = skb_dst(skb);
++ if (!dst) {
++ ret = -EINVAL;
++ goto drop;
++ }
+ lwtstate = dst->lwtstate;
+
+ if (lwtstate->type == LWTUNNEL_ENCAP_NONE ||
+@@ -410,8 +448,11 @@ int lwtunnel_input(struct sk_buff *skb)
+ ret = -EOPNOTSUPP;
+ rcu_read_lock();
+ ops = rcu_dereference(lwtun_encaps[lwtstate->type]);
+- if (likely(ops && ops->input))
++ if (likely(ops && ops->input)) {
++ dev_xmit_recursion_inc();
+ ret = ops->input(skb);
++ dev_xmit_recursion_dec();
++ }
+ rcu_read_unlock();
+
+ if (ret == -EOPNOTSUPP)
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index c7f7ea61b524a2..8082cc6be4fc1b 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -2301,6 +2301,7 @@ static const struct nla_policy nl_neightbl_policy[NDTA_MAX+1] = {
+ static const struct nla_policy nl_ntbl_parm_policy[NDTPA_MAX+1] = {
+ [NDTPA_IFINDEX] = { .type = NLA_U32 },
+ [NDTPA_QUEUE_LEN] = { .type = NLA_U32 },
++ [NDTPA_QUEUE_LENBYTES] = { .type = NLA_U32 },
+ [NDTPA_PROXY_QLEN] = { .type = NLA_U32 },
+ [NDTPA_APP_PROBES] = { .type = NLA_U32 },
+ [NDTPA_UCAST_PROBES] = { .type = NLA_U32 },
+diff --git a/net/devlink/core.c b/net/devlink/core.c
+index f49cd83f1955f5..7203c39532fcc3 100644
+--- a/net/devlink/core.c
++++ b/net/devlink/core.c
+@@ -117,7 +117,7 @@ static struct devlink_rel *devlink_rel_alloc(void)
+
+ err = xa_alloc_cyclic(&devlink_rels, &rel->index, rel,
+ xa_limit_32b, &next, GFP_KERNEL);
+- if (err) {
++ if (err < 0) {
+ kfree(rel);
+ return ERR_PTR(err);
+ }
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 26cdb665747573..f7c17388ff6aaf 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3237,13 +3237,16 @@ static void add_v4_addrs(struct inet6_dev *idev)
+ struct in6_addr addr;
+ struct net_device *dev;
+ struct net *net = dev_net(idev->dev);
+- int scope, plen;
++ int scope, plen, offset = 0;
+ u32 pflags = 0;
+
+ ASSERT_RTNL();
+
+ memset(&addr, 0, sizeof(struct in6_addr));
+- memcpy(&addr.s6_addr32[3], idev->dev->dev_addr, 4);
++ /* in case of IP6GRE the dev_addr is an IPv6 address, so we use only the last 4 bytes */
++ if (idev->dev->addr_len == sizeof(struct in6_addr))
++ offset = sizeof(struct in6_addr) - 4;
++ memcpy(&addr.s6_addr32[3], idev->dev->dev_addr + offset, 4);
+
+ if (!(idev->dev->flags & IFF_POINTOPOINT) && idev->dev->type == ARPHRD_SIT) {
+ scope = IPV6_ADDR_COMPATv4;
+@@ -3554,13 +3557,7 @@ static void addrconf_gre_config(struct net_device *dev)
+ return;
+ }
+
+- /* Generate the IPv6 link-local address using addrconf_addr_gen(),
+- * unless we have an IPv4 GRE device not bound to an IP address and
+- * which is in EUI64 mode (as __ipv6_isatap_ifid() would fail in this
+- * case). Such devices fall back to add_v4_addrs() instead.
+- */
+- if (!(dev->type == ARPHRD_IPGRE && *(__be32 *)dev->dev_addr == 0 &&
+- idev->cnf.addr_gen_mode == IN6_ADDR_GEN_MODE_EUI64)) {
++ if (dev->type == ARPHRD_ETHER) {
+ addrconf_addr_gen(idev, true);
+ return;
+ }
+diff --git a/net/ipv6/ioam6_iptunnel.c b/net/ipv6/ioam6_iptunnel.c
+index 4215cebe7d85a9..647dd8417c6cf9 100644
+--- a/net/ipv6/ioam6_iptunnel.c
++++ b/net/ipv6/ioam6_iptunnel.c
+@@ -339,7 +339,6 @@ static int ioam6_do_encap(struct net *net, struct sk_buff *skb,
+ static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+ struct dst_entry *dst = skb_dst(skb), *cache_dst = NULL;
+- struct in6_addr orig_daddr;
+ struct ioam6_lwt *ilwt;
+ int err = -EINVAL;
+ u32 pkt_cnt;
+@@ -354,8 +353,6 @@ static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ if (pkt_cnt % ilwt->freq.n >= ilwt->freq.k)
+ goto out;
+
+- orig_daddr = ipv6_hdr(skb)->daddr;
+-
+ local_bh_disable();
+ cache_dst = dst_cache_get(&ilwt->cache);
+ local_bh_enable();
+@@ -424,7 +421,10 @@ static int ioam6_output(struct net *net, struct sock *sk, struct sk_buff *skb)
+ goto drop;
+ }
+
+- if (!ipv6_addr_equal(&orig_daddr, &ipv6_hdr(skb)->daddr)) {
++ /* avoid lwtunnel_output() reentry loop when destination is the same
++ * after transformation (e.g., with the inline mode)
++ */
++ if (dst->lwtstate != cache_dst->lwtstate) {
+ skb_dst_drop(skb);
+ skb_dst_set(skb, cache_dst);
+ return dst_output(net, sk, skb);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 2736dea77575b5..b393c37d24245c 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -3644,7 +3644,8 @@ int fib6_nh_init(struct net *net, struct fib6_nh *fib6_nh,
+ in6_dev_put(idev);
+
+ if (err) {
+- lwtstate_put(fib6_nh->fib_nh_lws);
++ fib_nh_common_release(&fib6_nh->nh_common);
++ fib6_nh->nh_common.nhc_pcpu_rth_output = NULL;
+ fib6_nh->fib_nh_lws = NULL;
+ netdev_put(dev, dev_tracker);
+ }
+@@ -3802,10 +3803,12 @@ static struct fib6_info *ip6_route_info_create(struct fib6_config *cfg,
+ if (nh) {
+ if (rt->fib6_src.plen) {
+ NL_SET_ERR_MSG(extack, "Nexthops can not be used with source routing");
++ err = -EINVAL;
+ goto out_free;
+ }
+ if (!nexthop_get(nh)) {
+ NL_SET_ERR_MSG(extack, "Nexthop has been deleted");
++ err = -ENOENT;
+ goto out_free;
+ }
+ rt->nh = nh;
+diff --git a/net/ipv6/tcpv6_offload.c b/net/ipv6/tcpv6_offload.c
+index a45bf17cb2a172..ae2da28f9dfb1c 100644
+--- a/net/ipv6/tcpv6_offload.c
++++ b/net/ipv6/tcpv6_offload.c
+@@ -94,14 +94,23 @@ INDIRECT_CALLABLE_SCOPE int tcp6_gro_complete(struct sk_buff *skb, int thoff)
+ }
+
+ static void __tcpv6_gso_segment_csum(struct sk_buff *seg,
++ struct in6_addr *oldip,
++ const struct in6_addr *newip,
+ __be16 *oldport, __be16 newport)
+ {
+- struct tcphdr *th;
++ struct tcphdr *th = tcp_hdr(seg);
++
++ if (!ipv6_addr_equal(oldip, newip)) {
++ inet_proto_csum_replace16(&th->check, seg,
++ oldip->s6_addr32,
++ newip->s6_addr32,
++ true);
++ *oldip = *newip;
++ }
+
+ if (*oldport == newport)
+ return;
+
+- th = tcp_hdr(seg);
+ inet_proto_csum_replace2(&th->check, seg, *oldport, newport, false);
+ *oldport = newport;
+ }
+@@ -129,10 +138,10 @@ static struct sk_buff *__tcpv6_gso_segment_list_csum(struct sk_buff *segs)
+ th2 = tcp_hdr(seg);
+ iph2 = ipv6_hdr(seg);
+
+- iph2->saddr = iph->saddr;
+- iph2->daddr = iph->daddr;
+- __tcpv6_gso_segment_csum(seg, &th2->source, th->source);
+- __tcpv6_gso_segment_csum(seg, &th2->dest, th->dest);
++ __tcpv6_gso_segment_csum(seg, &iph2->saddr, &iph->saddr,
++ &th2->source, th->source);
++ __tcpv6_gso_segment_csum(seg, &iph2->daddr, &iph->daddr,
++ &th2->dest, th->dest);
+ }
+
+ return segs;
+diff --git a/net/mptcp/options.c b/net/mptcp/options.c
+index fd2de185bc939f..23949ae2a3a8db 100644
+--- a/net/mptcp/options.c
++++ b/net/mptcp/options.c
+@@ -651,6 +651,7 @@ static bool mptcp_established_options_add_addr(struct sock *sk, struct sk_buff *
+ struct mptcp_sock *msk = mptcp_sk(subflow->conn);
+ bool drop_other_suboptions = false;
+ unsigned int opt_size = *size;
++ struct mptcp_addr_info addr;
+ bool echo;
+ int len;
+
+@@ -659,7 +660,7 @@ static bool mptcp_established_options_add_addr(struct sock *sk, struct sk_buff *
+ */
+ if (!mptcp_pm_should_add_signal(msk) ||
+ (opts->suboptions & (OPTION_MPTCP_MPJ_ACK | OPTION_MPTCP_MPC_ACK)) ||
+- !mptcp_pm_add_addr_signal(msk, skb, opt_size, remaining, &opts->addr,
++ !mptcp_pm_add_addr_signal(msk, skb, opt_size, remaining, &addr,
+ &echo, &drop_other_suboptions))
+ return false;
+
+@@ -672,7 +673,7 @@ static bool mptcp_established_options_add_addr(struct sock *sk, struct sk_buff *
+ else if (opts->suboptions & OPTION_MPTCP_DSS)
+ return false;
+
+- len = mptcp_add_addr_len(opts->addr.family, echo, !!opts->addr.port);
++ len = mptcp_add_addr_len(addr.family, echo, !!addr.port);
+ if (remaining < len)
+ return false;
+
+@@ -689,6 +690,7 @@ static bool mptcp_established_options_add_addr(struct sock *sk, struct sk_buff *
+ opts->ahmac = 0;
+ *size -= opt_size;
+ }
++ opts->addr = addr;
+ opts->suboptions |= OPTION_MPTCP_ADD_ADDR;
+ if (!echo) {
+ MPTCP_INC_STATS(sock_net(sk), MPTCP_MIB_ADDADDRTX);
+diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
+index 0662d34b09ee78..87e865b9b83af9 100644
+--- a/net/xdp/xsk_buff_pool.c
++++ b/net/xdp/xsk_buff_pool.c
+@@ -106,7 +106,7 @@ struct xsk_buff_pool *xp_create_and_assign_umem(struct xdp_sock *xs,
+ if (pool->unaligned)
+ pool->free_heads[i] = xskb;
+ else
+- xp_init_xskb_addr(xskb, pool, i * pool->chunk_size);
++ xp_init_xskb_addr(xskb, pool, (u64)i * pool->chunk_size);
+ }
+
+ return pool;
+diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
+index e5722c95b8bb38..a30538a980cc7f 100644
+--- a/net/xfrm/xfrm_output.c
++++ b/net/xfrm/xfrm_output.c
+@@ -610,6 +610,40 @@ int xfrm_output_resume(struct sock *sk, struct sk_buff *skb, int err)
+ }
+ EXPORT_SYMBOL_GPL(xfrm_output_resume);
+
++static int xfrm_dev_direct_output(struct sock *sk, struct xfrm_state *x,
++ struct sk_buff *skb)
++{
++ struct dst_entry *dst = skb_dst(skb);
++ struct net *net = xs_net(x);
++ int err;
++
++ dst = skb_dst_pop(skb);
++ if (!dst) {
++ XFRM_INC_STATS(net, LINUX_MIB_XFRMOUTERROR);
++ kfree_skb(skb);
++ return -EHOSTUNREACH;
++ }
++ skb_dst_set(skb, dst);
++ nf_reset_ct(skb);
++
++ err = skb_dst(skb)->ops->local_out(net, sk, skb);
++ if (unlikely(err != 1)) {
++ kfree_skb(skb);
++ return err;
++ }
++
++ /* In transport mode, the network destination is
++ * directly reachable, while in tunnel mode the
++ * inner packet's network may not be. With packet
++ * offload, the HW is responsible for hard header
++ * packet mangling, so xmit the skb directly to
++ * the netdevice.
++ */
++ skb->dev = x->xso.dev;
++ __skb_push(skb, skb->dev->hard_header_len);
++ return dev_queue_xmit(skb);
++}
++
+ static int xfrm_output2(struct net *net, struct sock *sk, struct sk_buff *skb)
+ {
+ return xfrm_output_resume(sk, skb, 1);
+@@ -729,6 +763,13 @@ int xfrm_output(struct sock *sk, struct sk_buff *skb)
+ return -EHOSTUNREACH;
+ }
+
++ /* Exclusive direct xmit for tunnel mode, as
++ * some filtering or matching rules may apply
++ * in transport mode.
++ */
++ if (x->props.mode == XFRM_MODE_TUNNEL)
++ return xfrm_dev_direct_output(sk, x, skb);
++
+ return xfrm_output_resume(sk, skb, 0);
+ }
+
+@@ -752,7 +793,7 @@ int xfrm_output(struct sock *sk, struct sk_buff *skb)
+ skb->encapsulation = 1;
+
+ if (skb_is_gso(skb)) {
+- if (skb->inner_protocol)
++ if (skb->inner_protocol && x->props.mode == XFRM_MODE_TUNNEL)
+ return xfrm_output_gso(net, sk, skb);
+
+ skb_shinfo(skb)->gso_type |= SKB_GSO_ESP;
+diff --git a/security/keys/gc.c b/security/keys/gc.c
+index 7d687b0962b146..f27223ea4578f1 100644
+--- a/security/keys/gc.c
++++ b/security/keys/gc.c
+@@ -218,8 +218,10 @@ static void key_garbage_collector(struct work_struct *work)
+ key = rb_entry(cursor, struct key, serial_node);
+ cursor = rb_next(cursor);
+
+- if (refcount_read(&key->usage) == 0)
++ if (test_bit(KEY_FLAG_FINAL_PUT, &key->flags)) {
++ smp_mb(); /* Clobber key->user after FINAL_PUT seen. */
+ goto found_unreferenced_key;
++ }
+
+ if (unlikely(gc_state & KEY_GC_REAPING_DEAD_1)) {
+ if (key->type == key_gc_dead_keytype) {
+diff --git a/security/keys/key.c b/security/keys/key.c
+index 3d7d185019d30a..7198cd2ac3a3a5 100644
+--- a/security/keys/key.c
++++ b/security/keys/key.c
+@@ -658,6 +658,8 @@ void key_put(struct key *key)
+ key->user->qnbytes -= key->quotalen;
+ spin_unlock_irqrestore(&key->user->lock, flags);
+ }
++ smp_mb(); /* key->user before FINAL_PUT set. */
++ set_bit(KEY_FLAG_FINAL_PUT, &key->flags);
+ schedule_work(&key_gc_work);
+ }
+ }
+diff --git a/tools/lib/subcmd/parse-options.c b/tools/lib/subcmd/parse-options.c
+index eb896d30545b63..555d617c1f502a 100644
+--- a/tools/lib/subcmd/parse-options.c
++++ b/tools/lib/subcmd/parse-options.c
+@@ -807,7 +807,7 @@ static int option__cmp(const void *va, const void *vb)
+ static struct option *options__order(const struct option *opts)
+ {
+ int nr_opts = 0, nr_group = 0, nr_parent = 0, len;
+- const struct option *o, *p = opts;
++ const struct option *o = NULL, *p = opts;
+ struct option *opt, *ordered = NULL, *group;
+
+ /* flatten the options that have parents */
+diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
+index c5797ad1d37b68..d86ca1554d6d0a 100755
+--- a/tools/testing/selftests/mm/run_vmtests.sh
++++ b/tools/testing/selftests/mm/run_vmtests.sh
+@@ -300,7 +300,9 @@ uffd_stress_bin=./uffd-stress
+ CATEGORY="userfaultfd" run_test ${uffd_stress_bin} anon 20 16
+ # Hugetlb tests require source and destination huge pages. Pass in half
+ # the size of the free pages we have, which is used for *each*.
+-half_ufd_size_MB=$((freepgs / 2))
++# uffd-stress expects a region expressed in MiB, so we adjust
++# half_ufd_size_MB accordingly.
++half_ufd_size_MB=$(((freepgs * hpgsize_KB) / 1024 / 2))
+ CATEGORY="userfaultfd" run_test ${uffd_stress_bin} hugetlb "$half_ufd_size_MB" 32
+ CATEGORY="userfaultfd" run_test ${uffd_stress_bin} hugetlb-private "$half_ufd_size_MB" 32
+ CATEGORY="userfaultfd" run_test ${uffd_stress_bin} shmem 20 16
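To make the unit fix above concrete: uffd-stress takes its hugetlb region size in MiB, while freepgs counts free huge pages. Assuming, purely for illustration, 2 MiB huge pages (hpgsize_KB = 2048) and freepgs = 512, the corrected expression gives (512 * 2048) / 1024 / 2 = 512 MiB for each of the source and destination regions, whereas the old freepgs / 2 = 256 silently passed a page count where MiB was expected.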
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-03-29 10:59 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-03-29 10:59 UTC (permalink / raw
To: gentoo-commits
commit: 05f176184ad749c5e3d3208b8f6e1165855dd482
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat Mar 29 10:59:27 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat Mar 29 10:59:27 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=05f17618
Remove redundant patch
Removed:
2901_tools-lib-subcmd-compile-fix.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ---
2901_tools-lib-subcmd-compile-fix.patch | 54 ---------------------------------
2 files changed, 58 deletions(-)
diff --git a/0000_README b/0000_README
index accea09e..6196ef75 100644
--- a/0000_README
+++ b/0000_README
@@ -151,10 +151,6 @@ Patch: 2400_wifi-mt76-mt7921-null-ptr-deref-fix.patch
From: https://github.com/nbd168/wireless/commit/adc3fd2a2277b7cc0b61692463771bf9bd298036
Desc: wifi: mt76: mt7921: fix kernel panic due to null pointer dereference
-Patch: 2901_tools-lib-subcmd-compile-fix.patch
-From: https://lore.kernel.org/all/20240731085217.94928-1-michael.weiss@aisec.fraunhofer.de/
-Desc: tools lib subcmd: Fixed uninitialized use of variable in parse-options
-
Patch: 2910_bfp-mark-get-entry-ip-as--maybe-unused.patch
From: https://www.spinics.net/lists/stable/msg604665.html
Desc: bpf: mark get_entry_ip as __maybe_unused
diff --git a/2901_tools-lib-subcmd-compile-fix.patch b/2901_tools-lib-subcmd-compile-fix.patch
deleted file mode 100644
index bb1f7ffd..00000000
--- a/2901_tools-lib-subcmd-compile-fix.patch
+++ /dev/null
@@ -1,54 +0,0 @@
-From git@z Thu Jan 1 00:00:00 1970
-Subject: [PATCH] tools lib subcmd: Fixed uninitialized use of variable in
- parse-options
-From: Michael Weiß <michael.weiss@aisec.fraunhofer.de>
-Date: Wed, 31 Jul 2024 10:52:17 +0200
-Message-Id: <20240731085217.94928-1-michael.weiss@aisec.fraunhofer.de>
-MIME-Version: 1.0
-Content-Type: text/plain; charset="utf-8"
-Content-Transfer-Encoding: 8bit
-
-Since commit ea558c86248b ("tools lib subcmd: Show parent options in
-help"), our debug images fail to build.
-
-For our Yocto-based GyroidOS, we build debug images with debugging enabled
-for all binaries including the kernel. Yocto passes the corresponding gcc
-option "-Og" also to the kernel HOSTCFLAGS. This results in the following
-build error:
-
- parse-options.c: In function ‘options__order’:
- parse-options.c:834:9: error: ‘o’ may be used uninitialized [-Werror=maybe-uninitialized]
- 834 | memcpy(&ordered[nr_opts], o, sizeof(*o));
- | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- parse-options.c:812:30: note: ‘o’ was declared here
- 812 | const struct option *o, *p = opts;
- | ^
- ..
-
-Fix it by initializing 'o' instead of 'p' in the above failing line 812.
-'p' is initialized afterwards in the following for-loop anyway.
-I think that was the intention of the commit ea558c86248b ("tools lib
-subcmd: Show parent options in help") in the first place.
-
-Fixes: ea558c86248b ("tools lib subcmd: Show parent options in help")
-Signed-off-by: Michael Weiß <michael.weiss@aisec.fraunhofer.de>
----
- tools/lib/subcmd/parse-options.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/tools/lib/subcmd/parse-options.c b/tools/lib/subcmd/parse-options.c
-index 4b60ec03b0bb..2a3b51a690c7 100644
---- a/tools/lib/subcmd/parse-options.c
-+++ b/tools/lib/subcmd/parse-options.c
-@@ -809,7 +809,7 @@ static int option__cmp(const void *va, const void *vb)
- static struct option *options__order(const struct option *opts)
- {
- int nr_opts = 0, nr_group = 0, nr_parent = 0, len;
-- const struct option *o, *p = opts;
-+ const struct option *o = opts, *p;
- struct option *opt, *ordered = NULL, *group;
-
- /* flatten the options that have parents */
---
-2.39.2
-
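Both fixes address the same gcc -Og -Wmaybe-uninitialized pattern: a pointer declared without an initializer and assigned only inside loops that the compiler cannot prove to execute. The tools/lib/subcmd/parse-options.c hunk in the stable patch above initializes 'o' to NULL, while the dropped Gentoo patch had initialized it to opts; either gives 'o' a defined value on every path. A minimal sketch of the pattern, hypothetical code rather than the actual parse-options.c:

  #include <stddef.h>
  #include <string.h>

  struct option {
          const struct option *parent;
          int id;
  };

  /* Sketch: flatten options that have parents, as options__order() does. */
  static void flatten(struct option *dst, const struct option *opts, int nr)
  {
          const struct option *o = NULL;  /* the fix: initialize before the loops */
          int i;

          for (i = 0; i < nr; i++)
                  for (o = &opts[i]; o->parent; o = o->parent)
                          ;       /* walk each option up to its root */

          /* At -Og, gcc cannot prove the loops assigned 'o' (nr may be 0),
           * so copying through an uninitialized 'o' here would warn. */
          if (o)
                  memcpy(dst, o, sizeof(*o));
  }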
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-04-07 10:30 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-04-07 10:30 UTC (permalink / raw
To: gentoo-commits
commit: 557a0c081003662fedb1e21ff88bad89f2a69a13
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Apr 7 10:30:01 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Apr 7 10:30:01 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=557a0c08
Linux patch 6.12.22
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1021_linux-6.12.22.patch | 916 +++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 920 insertions(+)
diff --git a/0000_README b/0000_README
index 6196ef75..696eb7c9 100644
--- a/0000_README
+++ b/0000_README
@@ -127,6 +127,10 @@ Patch: 1020_linux-6.12.21.patch
From: https://www.kernel.org
Desc: Linux 6.12.21
+Patch: 1021_linux-6.12.22.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.22
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1021_linux-6.12.22.patch b/1021_linux-6.12.22.patch
new file mode 100644
index 00000000..dd1e3934
--- /dev/null
+++ b/1021_linux-6.12.22.patch
@@ -0,0 +1,916 @@
+diff --git a/Makefile b/Makefile
+index a646151342b832..f380005d1600ad 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 21
++SUBLEVEL = 22
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/drivers/counter/microchip-tcb-capture.c b/drivers/counter/microchip-tcb-capture.c
+index b3e615cbd2caa6..461f57f66631c3 100644
+--- a/drivers/counter/microchip-tcb-capture.c
++++ b/drivers/counter/microchip-tcb-capture.c
+@@ -368,6 +368,25 @@ static int mchp_tc_probe(struct platform_device *pdev)
+ channel);
+ }
+
++ /* Disable Quadrature Decoder and position measure */
++ ret = regmap_update_bits(regmap, ATMEL_TC_BMR, ATMEL_TC_QDEN | ATMEL_TC_POSEN, 0);
++ if (ret)
++ return ret;
++
++ /* Setup the period capture mode */
++ ret = regmap_update_bits(regmap, ATMEL_TC_REG(priv->channel[0], CMR),
++ ATMEL_TC_WAVE | ATMEL_TC_ABETRG | ATMEL_TC_CMR_MASK |
++ ATMEL_TC_TCCLKS,
++ ATMEL_TC_CMR_MASK);
++ if (ret)
++ return ret;
++
++ /* Enable clock and trigger counter */
++ ret = regmap_write(regmap, ATMEL_TC_REG(priv->channel[0], CCR),
++ ATMEL_TC_CLKEN | ATMEL_TC_SWTRG);
++ if (ret)
++ return ret;
++
+ priv->tc_cfg = tcb_config;
+ priv->regmap = regmap;
+ counter->name = dev_name(&pdev->dev);
+diff --git a/drivers/counter/stm32-lptimer-cnt.c b/drivers/counter/stm32-lptimer-cnt.c
+index 8439755559b219..537fe9b669f352 100644
+--- a/drivers/counter/stm32-lptimer-cnt.c
++++ b/drivers/counter/stm32-lptimer-cnt.c
+@@ -58,37 +58,43 @@ static int stm32_lptim_set_enable_state(struct stm32_lptim_cnt *priv,
+ return 0;
+ }
+
++ ret = clk_enable(priv->clk);
++ if (ret)
++ goto disable_cnt;
++
+ /* LP timer must be enabled before writing CMP & ARR */
+ ret = regmap_write(priv->regmap, STM32_LPTIM_ARR, priv->ceiling);
+ if (ret)
+- return ret;
++ goto disable_clk;
+
+ ret = regmap_write(priv->regmap, STM32_LPTIM_CMP, 0);
+ if (ret)
+- return ret;
++ goto disable_clk;
+
+ /* ensure CMP & ARR registers are properly written */
+ ret = regmap_read_poll_timeout(priv->regmap, STM32_LPTIM_ISR, val,
+ (val & STM32_LPTIM_CMPOK_ARROK) == STM32_LPTIM_CMPOK_ARROK,
+ 100, 1000);
+ if (ret)
+- return ret;
++ goto disable_clk;
+
+ ret = regmap_write(priv->regmap, STM32_LPTIM_ICR,
+ STM32_LPTIM_CMPOKCF_ARROKCF);
+ if (ret)
+- return ret;
++ goto disable_clk;
+
+- ret = clk_enable(priv->clk);
+- if (ret) {
+- regmap_write(priv->regmap, STM32_LPTIM_CR, 0);
+- return ret;
+- }
+ priv->enabled = true;
+
+ /* Start LP timer in continuous mode */
+ return regmap_update_bits(priv->regmap, STM32_LPTIM_CR,
+ STM32_LPTIM_CNTSTRT, STM32_LPTIM_CNTSTRT);
++
++disable_clk:
++ clk_disable(priv->clk);
++disable_cnt:
++ regmap_write(priv->regmap, STM32_LPTIM_CR, 0);
++
++ return ret;
+ }
+
+ static int stm32_lptim_setup(struct stm32_lptim_cnt *priv, int enable)
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index d9a3917d207e93..c4c6538eabae6d 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3231,8 +3231,7 @@ static int dm_resume(void *handle)
+ struct dm_atomic_state *dm_state = to_dm_atomic_state(dm->atomic_obj.state);
+ enum dc_connection_type new_connection_type = dc_connection_none;
+ struct dc_state *dc_state;
+- int i, r, j, ret;
+- bool need_hotplug = false;
++ int i, r, j;
+ struct dc_commit_streams_params commit_params = {};
+
+ if (dm->dc->caps.ips_support) {
+@@ -3427,23 +3426,16 @@ static int dm_resume(void *handle)
+ aconnector->mst_root)
+ continue;
+
+- ret = drm_dp_mst_topology_mgr_resume(&aconnector->mst_mgr, true);
+-
+- if (ret < 0) {
+- dm_helpers_dp_mst_stop_top_mgr(aconnector->dc_link->ctx,
+- aconnector->dc_link);
+- need_hotplug = true;
+- }
++ drm_dp_mst_topology_queue_probe(&aconnector->mst_mgr);
+ }
+ drm_connector_list_iter_end(&iter);
+
+- if (need_hotplug)
+- drm_kms_helper_hotplug_event(ddev);
+-
+ amdgpu_dm_irq_resume_late(adev);
+
+ amdgpu_dm_smu_write_watermarks_table(adev);
+
++ drm_kms_helper_hotplug_event(ddev);
++
+ return 0;
+ }
+
+diff --git a/drivers/hid/hid-plantronics.c b/drivers/hid/hid-plantronics.c
+index 25cfd964dc25d9..acb9eb18f7ccfe 100644
+--- a/drivers/hid/hid-plantronics.c
++++ b/drivers/hid/hid-plantronics.c
+@@ -6,9 +6,6 @@
+ * Copyright (c) 2015-2018 Terry Junge <terry.junge@plantronics.com>
+ */
+
+-/*
+- */
+-
+ #include "hid-ids.h"
+
+ #include <linux/hid.h>
+@@ -23,30 +20,28 @@
+
+ #define PLT_VOL_UP 0x00b1
+ #define PLT_VOL_DOWN 0x00b2
++#define PLT_MIC_MUTE 0x00b5
+
+ #define PLT1_VOL_UP (PLT_HID_1_0_PAGE | PLT_VOL_UP)
+ #define PLT1_VOL_DOWN (PLT_HID_1_0_PAGE | PLT_VOL_DOWN)
++#define PLT1_MIC_MUTE (PLT_HID_1_0_PAGE | PLT_MIC_MUTE)
+ #define PLT2_VOL_UP (PLT_HID_2_0_PAGE | PLT_VOL_UP)
+ #define PLT2_VOL_DOWN (PLT_HID_2_0_PAGE | PLT_VOL_DOWN)
++#define PLT2_MIC_MUTE (PLT_HID_2_0_PAGE | PLT_MIC_MUTE)
++#define HID_TELEPHONY_MUTE (HID_UP_TELEPHONY | 0x2f)
++#define HID_CONSUMER_MUTE (HID_UP_CONSUMER | 0xe2)
+
+ #define PLT_DA60 0xda60
+ #define PLT_BT300_MIN 0x0413
+ #define PLT_BT300_MAX 0x0418
+
+-
+-#define PLT_ALLOW_CONSUMER (field->application == HID_CP_CONSUMERCONTROL && \
+- (usage->hid & HID_USAGE_PAGE) == HID_UP_CONSUMER)
+-
+-#define PLT_QUIRK_DOUBLE_VOLUME_KEYS BIT(0)
+-#define PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS BIT(1)
+-
+ #define PLT_DOUBLE_KEY_TIMEOUT 5 /* ms */
+-#define PLT_FOLLOWED_OPPOSITE_KEY_TIMEOUT 220 /* ms */
+
+ struct plt_drv_data {
+ unsigned long device_type;
+- unsigned long last_volume_key_ts;
+- u32 quirks;
++ unsigned long last_key_ts;
++ unsigned long double_key_to;
++ __u16 last_key;
+ };
+
+ static int plantronics_input_mapping(struct hid_device *hdev,
+@@ -58,34 +53,43 @@ static int plantronics_input_mapping(struct hid_device *hdev,
+ unsigned short mapped_key;
+ struct plt_drv_data *drv_data = hid_get_drvdata(hdev);
+ unsigned long plt_type = drv_data->device_type;
++ int allow_mute = usage->hid == HID_TELEPHONY_MUTE;
++ int allow_consumer = field->application == HID_CP_CONSUMERCONTROL &&
++ (usage->hid & HID_USAGE_PAGE) == HID_UP_CONSUMER &&
++ usage->hid != HID_CONSUMER_MUTE;
+
+ /* special case for PTT products */
+ if (field->application == HID_GD_JOYSTICK)
+ goto defaulted;
+
+- /* handle volume up/down mapping */
+ /* non-standard types or multi-HID interfaces - plt_type is PID */
+ if (!(plt_type & HID_USAGE_PAGE)) {
+ switch (plt_type) {
+ case PLT_DA60:
+- if (PLT_ALLOW_CONSUMER)
++ if (allow_consumer)
+ goto defaulted;
+- goto ignored;
++ if (usage->hid == HID_CONSUMER_MUTE) {
++ mapped_key = KEY_MICMUTE;
++ goto mapped;
++ }
++ break;
+ default:
+- if (PLT_ALLOW_CONSUMER)
++ if (allow_consumer || allow_mute)
+ goto defaulted;
+ }
++ goto ignored;
+ }
+- /* handle standard types - plt_type is 0xffa0uuuu or 0xffa2uuuu */
+- /* 'basic telephony compliant' - allow default consumer page map */
+- else if ((plt_type & HID_USAGE) >= PLT_BASIC_TELEPHONY &&
+- (plt_type & HID_USAGE) != PLT_BASIC_EXCEPTION) {
+- if (PLT_ALLOW_CONSUMER)
+- goto defaulted;
+- }
+- /* not 'basic telephony' - apply legacy mapping */
+- /* only map if the field is in the device's primary vendor page */
+- else if (!((field->application ^ plt_type) & HID_USAGE_PAGE)) {
++
++ /* handle standard consumer control mapping */
++ /* and standard telephony mic mute mapping */
++ if (allow_consumer || allow_mute)
++ goto defaulted;
++
++ /* handle vendor unique types - plt_type is 0xffa0uuuu or 0xffa2uuuu */
++ /* if not 'basic telephony compliant' - map vendor unique controls */
++ if (!((plt_type & HID_USAGE) >= PLT_BASIC_TELEPHONY &&
++ (plt_type & HID_USAGE) != PLT_BASIC_EXCEPTION) &&
++ !((field->application ^ plt_type) & HID_USAGE_PAGE))
+ switch (usage->hid) {
+ case PLT1_VOL_UP:
+ case PLT2_VOL_UP:
+@@ -95,8 +99,11 @@ static int plantronics_input_mapping(struct hid_device *hdev,
+ case PLT2_VOL_DOWN:
+ mapped_key = KEY_VOLUMEDOWN;
+ goto mapped;
++ case PLT1_MIC_MUTE:
++ case PLT2_MIC_MUTE:
++ mapped_key = KEY_MICMUTE;
++ goto mapped;
+ }
+- }
+
+ /*
+ * Future mapping of call control or other usages,
+@@ -105,6 +112,8 @@ static int plantronics_input_mapping(struct hid_device *hdev,
+ */
+
+ ignored:
++ hid_dbg(hdev, "usage: %08x (appl: %08x) - ignored\n",
++ usage->hid, field->application);
+ return -1;
+
+ defaulted:
+@@ -123,38 +132,26 @@ static int plantronics_event(struct hid_device *hdev, struct hid_field *field,
+ struct hid_usage *usage, __s32 value)
+ {
+ struct plt_drv_data *drv_data = hid_get_drvdata(hdev);
++ unsigned long prev_tsto, cur_ts;
++ __u16 prev_key, cur_key;
+
+- if (drv_data->quirks & PLT_QUIRK_DOUBLE_VOLUME_KEYS) {
+- unsigned long prev_ts, cur_ts;
++ /* Usages are filtered in plantronics_usages. */
+
+- /* Usages are filtered in plantronics_usages. */
++ /* HZ too low for ms resolution - double key detection disabled */
++ /* or it is a key release - handle key presses only. */
++ if (!drv_data->double_key_to || !value)
++ return 0;
+
+- if (!value) /* Handle key presses only. */
+- return 0;
++ prev_tsto = drv_data->last_key_ts + drv_data->double_key_to;
++ cur_ts = drv_data->last_key_ts = jiffies;
++ prev_key = drv_data->last_key;
++ cur_key = drv_data->last_key = usage->code;
+
+- prev_ts = drv_data->last_volume_key_ts;
+- cur_ts = jiffies;
+- if (jiffies_to_msecs(cur_ts - prev_ts) <= PLT_DOUBLE_KEY_TIMEOUT)
+- return 1; /* Ignore the repeated key. */
+-
+- drv_data->last_volume_key_ts = cur_ts;
++ /* If the same key occurs in <= double_key_to -- ignore it */
++ if (prev_key == cur_key && time_before_eq(cur_ts, prev_tsto)) {
++ hid_dbg(hdev, "double key %d ignored\n", cur_key);
++ return 1; /* Ignore the repeated key. */
+ }
+- if (drv_data->quirks & PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS) {
+- unsigned long prev_ts, cur_ts;
+-
+- /* Usages are filtered in plantronics_usages. */
+-
+- if (!value) /* Handle key presses only. */
+- return 0;
+-
+- prev_ts = drv_data->last_volume_key_ts;
+- cur_ts = jiffies;
+- if (jiffies_to_msecs(cur_ts - prev_ts) <= PLT_FOLLOWED_OPPOSITE_KEY_TIMEOUT)
+- return 1; /* Ignore the followed opposite volume key. */
+-
+- drv_data->last_volume_key_ts = cur_ts;
+- }
+-
+ return 0;
+ }
+
+@@ -196,12 +193,16 @@ static int plantronics_probe(struct hid_device *hdev,
+ ret = hid_parse(hdev);
+ if (ret) {
+ hid_err(hdev, "parse failed\n");
+- goto err;
++ return ret;
+ }
+
+ drv_data->device_type = plantronics_device_type(hdev);
+- drv_data->quirks = id->driver_data;
+- drv_data->last_volume_key_ts = jiffies - msecs_to_jiffies(PLT_DOUBLE_KEY_TIMEOUT);
++ drv_data->double_key_to = msecs_to_jiffies(PLT_DOUBLE_KEY_TIMEOUT);
++ drv_data->last_key_ts = jiffies - drv_data->double_key_to;
++
++ /* if HZ does not allow ms resolution - disable double key detection */
++ if (drv_data->double_key_to < PLT_DOUBLE_KEY_TIMEOUT)
++ drv_data->double_key_to = 0;
+
+ hid_set_drvdata(hdev, drv_data);
+
+@@ -210,29 +211,10 @@ static int plantronics_probe(struct hid_device *hdev,
+ if (ret)
+ hid_err(hdev, "hw start failed\n");
+
+-err:
+ return ret;
+ }
+
+ static const struct hid_device_id plantronics_devices[] = {
+- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3210_SERIES),
+- .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3220_SERIES),
+- .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3215_SERIES),
+- .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3225_SERIES),
+- .driver_data = PLT_QUIRK_DOUBLE_VOLUME_KEYS },
+- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+- USB_DEVICE_ID_PLANTRONICS_BLACKWIRE_3325_SERIES),
+- .driver_data = PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS },
+- { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS,
+- USB_DEVICE_ID_PLANTRONICS_ENCOREPRO_500_SERIES),
+- .driver_data = PLT_QUIRK_FOLLOWED_OPPOSITE_VOLUME_KEYS },
+ { HID_USB_DEVICE(USB_VENDOR_ID_PLANTRONICS, HID_ANY_ID) },
+ { }
+ };
+@@ -241,6 +223,14 @@ MODULE_DEVICE_TABLE(hid, plantronics_devices);
+ static const struct hid_usage_id plantronics_usages[] = {
+ { HID_CP_VOLUMEUP, EV_KEY, HID_ANY_ID },
+ { HID_CP_VOLUMEDOWN, EV_KEY, HID_ANY_ID },
++ { HID_TELEPHONY_MUTE, EV_KEY, HID_ANY_ID },
++ { HID_CONSUMER_MUTE, EV_KEY, HID_ANY_ID },
++ { PLT2_VOL_UP, EV_KEY, HID_ANY_ID },
++ { PLT2_VOL_DOWN, EV_KEY, HID_ANY_ID },
++ { PLT2_MIC_MUTE, EV_KEY, HID_ANY_ID },
++ { PLT1_VOL_UP, EV_KEY, HID_ANY_ID },
++ { PLT1_VOL_DOWN, EV_KEY, HID_ANY_ID },
++ { PLT1_MIC_MUTE, EV_KEY, HID_ANY_ID },
+ { HID_TERMINATOR, HID_TERMINATOR, HID_TERMINATOR }
+ };
+
+diff --git a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c
+index ffdd8de9ec5d79..d99f8922d4ad04 100644
+--- a/drivers/memstick/host/rtsx_usb_ms.c
++++ b/drivers/memstick/host/rtsx_usb_ms.c
+@@ -813,6 +813,7 @@ static void rtsx_usb_ms_drv_remove(struct platform_device *pdev)
+
+ host->eject = true;
+ cancel_work_sync(&host->handle_req);
++ cancel_delayed_work_sync(&host->poll_card);
+
+ mutex_lock(&host->host_mutex);
+ if (host->req) {
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 9fe7f704a2f7b8..944a33361dae59 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1365,9 +1365,11 @@ static const struct usb_device_id products[] = {
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a0, 0)}, /* Telit FN920C04 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a4, 0)}, /* Telit FN920C04 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10a9, 0)}, /* Telit FN920C04 */
++ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10b0, 0)}, /* Telit FE990B */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c0, 0)}, /* Telit FE910C04 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c4, 0)}, /* Telit FE910C04 */
+ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10c8, 0)}, /* Telit FE910C04 */
++ {QMI_QUIRK_SET_DTR(0x1bc7, 0x10d0, 0)}, /* Telit FN990B */
+ {QMI_FIXED_INTF(0x1bc7, 0x1100, 3)}, /* Telit ME910 */
+ {QMI_FIXED_INTF(0x1bc7, 0x1101, 3)}, /* Telit ME910 dual modem */
+ {QMI_FIXED_INTF(0x1bc7, 0x1200, 5)}, /* Telit LE920 */
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index 44179f4e807fc3..aeab2308b15008 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -178,6 +178,17 @@ int usbnet_get_ethernet_addr(struct usbnet *dev, int iMACAddress)
+ }
+ EXPORT_SYMBOL_GPL(usbnet_get_ethernet_addr);
+
++static bool usbnet_needs_usb_name_format(struct usbnet *dev, struct net_device *net)
++{
++ /* Point to point devices which don't have a real MAC address
++ * (or report a fake local one) have historically used the usb%d
++ * naming. Preserve this..
++ */
++ return (dev->driver_info->flags & FLAG_POINTTOPOINT) != 0 &&
++ (is_zero_ether_addr(net->dev_addr) ||
++ is_local_ether_addr(net->dev_addr));
++}
++
+ static void intr_complete (struct urb *urb)
+ {
+ struct usbnet *dev = urb->context;
+@@ -1762,13 +1773,11 @@ usbnet_probe (struct usb_interface *udev, const struct usb_device_id *prod)
+ if (status < 0)
+ goto out1;
+
+- // heuristic: "usb%d" for links we know are two-host,
+- // else "eth%d" when there's reasonable doubt. userspace
+- // can rename the link if it knows better.
++ /* heuristic: rename to "eth%d" if we are not sure this link
++ * is two-host (these links keep "usb%d")
++ */
+ if ((dev->driver_info->flags & FLAG_ETHER) != 0 &&
+- ((dev->driver_info->flags & FLAG_POINTTOPOINT) == 0 ||
+- /* somebody touched it*/
+- !is_zero_ether_addr(net->dev_addr)))
++ !usbnet_needs_usb_name_format(dev, net))
+ strscpy(net->name, "eth%d", sizeof(net->name));
+ /* WLAN devices should always be named "wlan%d" */
+ if ((dev->driver_info->flags & FLAG_WLAN) != 0)
+diff --git a/drivers/tty/serial/8250/8250_dma.c b/drivers/tty/serial/8250/8250_dma.c
+index f245a84f4a508d..bdd26c9f34bdf2 100644
+--- a/drivers/tty/serial/8250/8250_dma.c
++++ b/drivers/tty/serial/8250/8250_dma.c
+@@ -162,7 +162,7 @@ void serial8250_tx_dma_flush(struct uart_8250_port *p)
+ */
+ dma->tx_size = 0;
+
+- dmaengine_terminate_async(dma->rxchan);
++ dmaengine_terminate_async(dma->txchan);
+ }
+
+ int serial8250_rx_dma(struct uart_8250_port *p)
+diff --git a/drivers/tty/serial/8250/8250_pci.c b/drivers/tty/serial/8250/8250_pci.c
+index de6d90bf0d70a2..b3c19ba777c68d 100644
+--- a/drivers/tty/serial/8250/8250_pci.c
++++ b/drivers/tty/serial/8250/8250_pci.c
+@@ -2687,6 +2687,22 @@ static struct pci_serial_quirk pci_serial_quirks[] = {
+ .init = pci_oxsemi_tornado_init,
+ .setup = pci_oxsemi_tornado_setup,
+ },
++ {
++ .vendor = PCI_VENDOR_ID_INTASHIELD,
++ .device = 0x4026,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .init = pci_oxsemi_tornado_init,
++ .setup = pci_oxsemi_tornado_setup,
++ },
++ {
++ .vendor = PCI_VENDOR_ID_INTASHIELD,
++ .device = 0x4021,
++ .subvendor = PCI_ANY_ID,
++ .subdevice = PCI_ANY_ID,
++ .init = pci_oxsemi_tornado_init,
++ .setup = pci_oxsemi_tornado_setup,
++ },
+ {
+ .vendor = PCI_VENDOR_ID_INTEL,
+ .device = 0x8811,
+@@ -5213,6 +5229,14 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ pbn_b2_2_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x0BA2,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b2_2_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x0BA3,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b2_2_115200 },
+ /*
+ * Brainboxes UC-235/246
+ */
+@@ -5333,6 +5357,14 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ pbn_b2_4_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x0C42,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b2_4_115200 },
++ { PCI_VENDOR_ID_INTASHIELD, 0x0C43,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_b2_4_115200 },
+ /*
+ * Brainboxes UC-420
+ */
+@@ -5559,6 +5591,20 @@ static const struct pci_device_id serial_pci_tbl[] = {
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ pbn_oxsemi_1_15625000 },
++ /*
++ * Brainboxes XC-235
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x4026,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_1_15625000 },
++ /*
++ * Brainboxes XC-475
++ */
++ { PCI_VENDOR_ID_INTASHIELD, 0x4021,
++ PCI_ANY_ID, PCI_ANY_ID,
++ 0, 0,
++ pbn_oxsemi_1_15625000 },
+
+ /*
+ * Perle PCI-RAS cards
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 77efa7ee6eda29..9f9fc733eb2c1f 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -1483,6 +1483,19 @@ static int lpuart32_config_rs485(struct uart_port *port, struct ktermios *termio
+
+ unsigned long modem = lpuart32_read(&sport->port, UARTMODIR)
+ & ~(UARTMODIR_TXRTSPOL | UARTMODIR_TXRTSE);
++ u32 ctrl;
++
++ /* TXRTSE and TXRTSPOL can only be changed when the transmitter is disabled. */
++ ctrl = lpuart32_read(&sport->port, UARTCTRL);
++ if (ctrl & UARTCTRL_TE) {
++ /* wait for the transmit engine to complete */
++ lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
++ lpuart32_write(&sport->port, ctrl & ~UARTCTRL_TE, UARTCTRL);
++
++ while (lpuart32_read(&sport->port, UARTCTRL) & UARTCTRL_TE)
++ cpu_relax();
++ }
++
+ lpuart32_write(&sport->port, modem, UARTMODIR);
+
+ if (rs485->flags & SER_RS485_ENABLED) {
+@@ -1502,6 +1515,10 @@ static int lpuart32_config_rs485(struct uart_port *port, struct ktermios *termio
+ }
+
+ lpuart32_write(&sport->port, modem, UARTMODIR);
++
++ if (ctrl & UARTCTRL_TE)
++ lpuart32_write(&sport->port, ctrl, UARTCTRL);
++
+ return 0;
+ }
+
+diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c
+index f5199fdecff278..9b9981352b1e1a 100644
+--- a/drivers/tty/serial/stm32-usart.c
++++ b/drivers/tty/serial/stm32-usart.c
+@@ -965,10 +965,8 @@ static void stm32_usart_start_tx(struct uart_port *port)
+ {
+ struct tty_port *tport = &port->state->port;
+
+- if (kfifo_is_empty(&tport->xmit_fifo) && !port->x_char) {
+- stm32_usart_rs485_rts_disable(port);
++ if (kfifo_is_empty(&tport->xmit_fifo) && !port->x_char)
+ return;
+- }
+
+ stm32_usart_rs485_rts_enable(port);
+
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 4384b86ea7b66c..2fad9563dca40b 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -2866,6 +2866,10 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ if (!ep_seg) {
+
+ if (ep->skip && usb_endpoint_xfer_isoc(&td->urb->ep->desc)) {
++ /* this event is unlikely to match any TD, don't skip them all */
++ if (trb_comp_code == COMP_STOPPED_LENGTH_INVALID)
++ return 0;
++
+ skip_isoc_td(xhci, td, ep, status);
+ if (!list_empty(&ep_ring->td_list))
+ continue;
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 439767d242fa9c..71588e4db0e34b 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1748,11 +1748,20 @@ static inline void xhci_write_64(struct xhci_hcd *xhci,
+ }
+
+
+-/* Link TRB chain should always be set on 0.95 hosts, and AMD 0.96 ISOC rings */
++/*
++ * Reportedly, some chapters of v0.95 spec said that Link TRB always has its chain bit set.
++ * Other chapters and later specs say that it should only be set if the link is inside a TD
++ * which continues from the end of one segment to the next segment.
++ *
++ * Some 0.95 hardware was found to misbehave if any link TRB doesn't have the chain bit set.
++ *
++ * 0.96 hardware from AMD and NEC was found to ignore unchained isochronous link TRBs when
++ * "resynchronizing the pipe" after a Missed Service Error.
++ */
+ static inline bool xhci_link_chain_quirk(struct xhci_hcd *xhci, enum xhci_ring_type type)
+ {
+ return (xhci->quirks & XHCI_LINK_TRB_QUIRK) ||
+- (type == TYPE_ISOC && (xhci->quirks & XHCI_AMD_0x96_HOST));
++ (type == TYPE_ISOC && (xhci->quirks & (XHCI_AMD_0x96_HOST | XHCI_NEC_HOST)));
+ }
+
+ /* xHCI debugging */
+diff --git a/fs/bcachefs/fs-ioctl.c b/fs/bcachefs/fs-ioctl.c
+index 405cf08bda3473..e599d5ac6e4d2a 100644
+--- a/fs/bcachefs/fs-ioctl.c
++++ b/fs/bcachefs/fs-ioctl.c
+@@ -520,10 +520,12 @@ static long bch2_ioctl_subvolume_destroy(struct bch_fs *c, struct file *filp,
+ ret = -ENOENT;
+ goto err;
+ }
+- ret = __bch2_unlink(dir, victim, true);
++
++ ret = inode_permission(file_mnt_idmap(filp), d_inode(victim), MAY_WRITE) ?:
++ __bch2_unlink(dir, victim, true);
+ if (!ret) {
+ fsnotify_rmdir(dir, victim);
+- d_delete(victim);
++ d_invalidate(victim);
+ }
+ err:
+ inode_unlock(dir);
+diff --git a/fs/nfsd/nfs4recover.c b/fs/nfsd/nfs4recover.c
+index 4a765555bf8459..1c8fcb04b3cdeb 100644
+--- a/fs/nfsd/nfs4recover.c
++++ b/fs/nfsd/nfs4recover.c
+@@ -2052,7 +2052,6 @@ static inline int check_for_legacy_methods(int status, struct net *net)
+ path_put(&path);
+ if (status)
+ return -ENOTDIR;
+- status = nn->client_tracking_ops->init(net);
+ }
+ return status;
+ }
+diff --git a/net/atm/mpc.c b/net/atm/mpc.c
+index 324e3ab96bb393..12da0269275c54 100644
+--- a/net/atm/mpc.c
++++ b/net/atm/mpc.c
+@@ -1314,6 +1314,8 @@ static void MPOA_cache_impos_rcvd(struct k_message *msg,
+ holding_time = msg->content.eg_info.holding_time;
+ dprintk("(%s) entry = %p, holding_time = %u\n",
+ mpc->dev->name, entry, holding_time);
++ if (entry == NULL && !holding_time)
++ return;
+ if (entry == NULL && holding_time) {
+ entry = mpc->eg_ops->add_entry(msg, mpc);
+ mpc->eg_ops->put(entry);
+diff --git a/net/ipv6/netfilter/nf_socket_ipv6.c b/net/ipv6/netfilter/nf_socket_ipv6.c
+index a7690ec6232596..9ea5ef56cb2704 100644
+--- a/net/ipv6/netfilter/nf_socket_ipv6.c
++++ b/net/ipv6/netfilter/nf_socket_ipv6.c
+@@ -103,6 +103,10 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb,
+ struct sk_buff *data_skb = NULL;
+ int doff = 0;
+ int thoff = 0, tproto;
++#if IS_ENABLED(CONFIG_NF_CONNTRACK)
++ enum ip_conntrack_info ctinfo;
++ struct nf_conn const *ct;
++#endif
+
+ tproto = ipv6_find_hdr(skb, &thoff, -1, NULL, NULL);
+ if (tproto < 0) {
+@@ -136,6 +140,25 @@ struct sock *nf_sk_lookup_slow_v6(struct net *net, const struct sk_buff *skb,
+ return NULL;
+ }
+
++#if IS_ENABLED(CONFIG_NF_CONNTRACK)
++ /* Do the lookup with the original socket address in
++ * case this is a reply packet of an established
++ * SNAT-ted connection.
++ */
++ ct = nf_ct_get(skb, &ctinfo);
++ if (ct &&
++ ((tproto != IPPROTO_ICMPV6 &&
++ ctinfo == IP_CT_ESTABLISHED_REPLY) ||
++ (tproto == IPPROTO_ICMPV6 &&
++ ctinfo == IP_CT_RELATED_REPLY)) &&
++ (ct->status & IPS_SRC_NAT_DONE)) {
++ daddr = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u3.in6;
++ dport = (tproto == IPPROTO_TCP) ?
++ ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u.tcp.port :
++ ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u.udp.port;
++ }
++#endif
++
+ return nf_socket_get_sock_v6(net, data_skb, doff, tproto, saddr, daddr,
+ sport, dport, indev);
+ }
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 3949e2614a6638..8c7da13a804c04 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -10441,6 +10441,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8811, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+ SND_PCI_QUIRK(0x103c, 0x8812, "HP Spectre x360 15-eb1xxx", ALC285_FIXUP_HP_SPECTRE_X360_EB1),
+ SND_PCI_QUIRK(0x103c, 0x881d, "HP 250 G8 Notebook PC", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
++ SND_PCI_QUIRK(0x103c, 0x881e, "HP Laptop 15s-du3xxx", ALC236_FIXUP_HP_MUTE_LED_COEFBIT2),
+ SND_PCI_QUIRK(0x103c, 0x8846, "HP EliteBook 850 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x8847, "HP EliteBook x360 830 G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x103c, 0x884b, "HP EliteBook 840 Aero G8 Notebook PC", ALC285_FIXUP_HP_GPIO_LED),
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index a95ebcf4e46e76..1e7192cb4693c0 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -4156,6 +4156,52 @@ static void snd_dragonfly_quirk_db_scale(struct usb_mixer_interface *mixer,
+ }
+ }
+
++/*
++ * Some Plantronics headsets have control names that don't meet ALSA naming
++ * standards. This function fixes nonstandard source names. By the time
++ * this function is called the control name should look like one of these:
++ * "source names Playback Volume"
++ * "source names Playback Switch"
++ * "source names Capture Volume"
++ * "source names Capture Switch"
++ * If any of the trigger words are found in the name then the name will
++ * be changed to:
++ * "Headset Playback Volume"
++ * "Headset Playback Switch"
++ * "Headset Capture Volume"
++ * "Headset Capture Switch"
++ * depending on the current suffix.
++ */
++static void snd_fix_plt_name(struct snd_usb_audio *chip,
++ struct snd_ctl_elem_id *id)
++{
++ /* no variant of "Sidetone" should be added to this list */
++ static const char * const trigger[] = {
++ "Earphone", "Microphone", "Receive", "Transmit"
++ };
++ static const char * const suffix[] = {
++ " Playback Volume", " Playback Switch",
++ " Capture Volume", " Capture Switch"
++ };
++ int i;
++
++ for (i = 0; i < ARRAY_SIZE(trigger); i++)
++ if (strstr(id->name, trigger[i]))
++ goto triggered;
++ usb_audio_dbg(chip, "no change in %s\n", id->name);
++ return;
++
++triggered:
++ for (i = 0; i < ARRAY_SIZE(suffix); i++)
++ if (strstr(id->name, suffix[i])) {
++ usb_audio_dbg(chip, "fixing kctl name %s\n", id->name);
++ snprintf(id->name, sizeof(id->name), "Headset%s",
++ suffix[i]);
++ return;
++ }
++ usb_audio_dbg(chip, "something wrong in kctl name %s\n", id->name);
++}
++
+ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+ struct usb_mixer_elem_info *cval, int unitid,
+ struct snd_kcontrol *kctl)
+@@ -4173,5 +4219,10 @@ void snd_usb_mixer_fu_apply_quirk(struct usb_mixer_interface *mixer,
+ cval->min_mute = 1;
+ break;
+ }
++
++ /* ALSA-ify some Plantronics headset control names */
++ if (USB_ID_VENDOR(mixer->chip->usb_id) == 0x047f &&
++ (cval->control == UAC_FU_MUTE || cval->control == UAC_FU_VOLUME))
++ snd_fix_plt_name(mixer->chip, &kctl->id);
+ }
+
+diff --git a/tools/perf/Documentation/intel-hybrid.txt b/tools/perf/Documentation/intel-hybrid.txt
+index e7a776ad25d719..0379903673a4ac 100644
+--- a/tools/perf/Documentation/intel-hybrid.txt
++++ b/tools/perf/Documentation/intel-hybrid.txt
+@@ -8,15 +8,15 @@ Part of events are available on core cpu, part of events are available
+ on atom cpu and even part of events are available on both.
+
+ Kernel exports two new cpu pmus via sysfs:
+-/sys/devices/cpu_core
+-/sys/devices/cpu_atom
++/sys/bus/event_source/devices/cpu_core
++/sys/bus/event_source/devices/cpu_atom
+
+ The 'cpus' files are created under the directories. For example,
+
+-cat /sys/devices/cpu_core/cpus
++cat /sys/bus/event_source/devices/cpu_core/cpus
+ 0-15
+
+-cat /sys/devices/cpu_atom/cpus
++cat /sys/bus/event_source/devices/cpu_atom/cpus
+ 16-23
+
+ It indicates cpu0-cpu15 are core cpus and cpu16-cpu23 are atom cpus.
+@@ -60,8 +60,8 @@ can't carry pmu information. So now this type is extended to be PMU aware
+ type. The PMU type ID is stored at attr.config[63:32].
+
+ PMU type ID is retrieved from sysfs.
+-/sys/devices/cpu_atom/type
+-/sys/devices/cpu_core/type
++/sys/bus/event_source/devices/cpu_atom/type
++/sys/bus/event_source/devices/cpu_core/type
+
+ The new attr.config layout for PERF_TYPE_HARDWARE:
+
+diff --git a/tools/perf/Documentation/perf-list.txt b/tools/perf/Documentation/perf-list.txt
+index dea005410ec02f..ee5d333e2ca964 100644
+--- a/tools/perf/Documentation/perf-list.txt
++++ b/tools/perf/Documentation/perf-list.txt
+@@ -188,7 +188,7 @@ in the CPU vendor specific documentation.
+
+ The available PMUs and their raw parameters can be listed with
+
+- ls /sys/devices/*/format
++ ls /sys/bus/event_source/devices/*/format
+
+ For example the raw event "LSD.UOPS" core pmu event above could
+ be specified as
+diff --git a/tools/perf/arch/x86/util/iostat.c b/tools/perf/arch/x86/util/iostat.c
+index df7b5dfcc26a51..7ea882ef293a18 100644
+--- a/tools/perf/arch/x86/util/iostat.c
++++ b/tools/perf/arch/x86/util/iostat.c
+@@ -32,7 +32,7 @@
+ #define MAX_PATH 1024
+ #endif
+
+-#define UNCORE_IIO_PMU_PATH "devices/uncore_iio_%d"
++#define UNCORE_IIO_PMU_PATH "bus/event_source/devices/uncore_iio_%d"
+ #define SYSFS_UNCORE_PMU_PATH "%s/"UNCORE_IIO_PMU_PATH
+ #define PLATFORM_MAPPING_PATH UNCORE_IIO_PMU_PATH"/die%d"
+
+diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
+index 4933efdfee76fb..628c61397d2d38 100644
+--- a/tools/perf/builtin-stat.c
++++ b/tools/perf/builtin-stat.c
+@@ -96,7 +96,7 @@
+ #include <internal/threadmap.h>
+
+ #define DEFAULT_SEPARATOR " "
+-#define FREEZE_ON_SMI_PATH "devices/cpu/freeze_on_smi"
++#define FREEZE_ON_SMI_PATH "bus/event_source/devices/cpu/freeze_on_smi"
+
+ static void print_counters(struct timespec *ts, int argc, const char **argv);
+
+diff --git a/tools/perf/util/mem-events.c b/tools/perf/util/mem-events.c
+index bf5090f5220bbd..9c4adfb45f62ba 100644
+--- a/tools/perf/util/mem-events.c
++++ b/tools/perf/util/mem-events.c
+@@ -189,7 +189,7 @@ static bool perf_pmu__mem_events_supported(const char *mnt, struct perf_pmu *pmu
+ if (!e->event_name)
+ return true;
+
+- scnprintf(path, PATH_MAX, "%s/devices/%s/events/%s", mnt, pmu->name, e->event_name);
++ scnprintf(path, PATH_MAX, "%s/bus/event_source/devices/%s/events/%s", mnt, pmu->name, e->event_name);
+
+ return !stat(path, &st);
+ }
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index 61bdda01a05aca..ed893c3c6ad938 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -33,12 +33,12 @@
+ #define UNIT_MAX_LEN 31 /* max length for event unit name */
+
+ enum event_source {
+- /* An event loaded from /sys/devices/<pmu>/events. */
++ /* An event loaded from /sys/bus/event_source/devices/<pmu>/events. */
+ EVENT_SRC_SYSFS,
+ /* An event loaded from a CPUID matched json file. */
+ EVENT_SRC_CPU_JSON,
+ /*
+- * An event loaded from a /sys/devices/<pmu>/identifier matched json
++ * An event loaded from a /sys/bus/event_source/devices/<pmu>/identifier matched json
+ * file.
+ */
+ EVENT_SRC_SYS_JSON,
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-04-10 13:29 Mike Pagano
From: Mike Pagano @ 2025-04-10 13:29 UTC
To: gentoo-commits
commit: ccc6a3ab6573c6ffd2c26c881967e7a52dfd73c0
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Apr 10 13:29:16 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Apr 10 13:29:16 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ccc6a3ab
Linux patch 6.12.23
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1022_linux-6.12.23.patch | 15453 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 15457 insertions(+)
diff --git a/0000_README b/0000_README
index 696eb7c9..26583822 100644
--- a/0000_README
+++ b/0000_README
@@ -131,6 +131,10 @@ Patch: 1021_linux-6.12.22.patch
From: https://www.kernel.org
Desc: Linux 6.12.22
+Patch: 1022_linux-6.12.23.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.23
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1022_linux-6.12.23.patch b/1022_linux-6.12.23.patch
new file mode 100644
index 00000000..932e4ca5
--- /dev/null
+++ b/1022_linux-6.12.23.patch
@@ -0,0 +1,15453 @@
+diff --git a/Documentation/devicetree/bindings/vendor-prefixes.yaml b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+index fbfce9b4ae6b8e..71a1a399e1e1fe 100644
+--- a/Documentation/devicetree/bindings/vendor-prefixes.yaml
++++ b/Documentation/devicetree/bindings/vendor-prefixes.yaml
+@@ -581,6 +581,8 @@ patternProperties:
+ description: GlobalTop Technology, Inc.
+ "^gmt,.*":
+ description: Global Mixed-mode Technology, Inc.
++ "^gocontroll,.*":
++ description: GOcontroll Modular Embedded Electronics B.V.
+ "^goldelico,.*":
+ description: Golden Delicious Computers GmbH & Co. KG
+ "^goodix,.*":
+diff --git a/Makefile b/Makefile
+index f380005d1600ad..6a2a60eb67a3e7 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 22
++SUBLEVEL = 23
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 202397be76d803..d0040fb67c36f3 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -118,7 +118,7 @@ config ARM
+ select HAVE_KERNEL_XZ
+ select HAVE_KPROBES if !XIP_KERNEL && !CPU_ENDIAN_BE32 && !CPU_V7M
+ select HAVE_KRETPROBES if HAVE_KPROBES
+- select HAVE_LD_DEAD_CODE_DATA_ELIMINATION if (LD_VERSION >= 23600 || LD_IS_LLD)
++ select HAVE_LD_DEAD_CODE_DATA_ELIMINATION if (LD_VERSION >= 23600 || LD_CAN_USE_KEEP_IN_OVERLAY)
+ select HAVE_MOD_ARCH_SPECIFIC
+ select HAVE_NMI
+ select HAVE_OPTPROBES if !THUMB2_KERNEL
+diff --git a/arch/arm/include/asm/vmlinux.lds.h b/arch/arm/include/asm/vmlinux.lds.h
+index d60f6e83a9f700..14811b4f48ec8a 100644
+--- a/arch/arm/include/asm/vmlinux.lds.h
++++ b/arch/arm/include/asm/vmlinux.lds.h
+@@ -34,6 +34,12 @@
+ #define NOCROSSREFS
+ #endif
+
++#ifdef CONFIG_LD_CAN_USE_KEEP_IN_OVERLAY
++#define OVERLAY_KEEP(x) KEEP(x)
++#else
++#define OVERLAY_KEEP(x) x
++#endif
++
+ /* Set start/end symbol names to the LMA for the section */
+ #define ARM_LMA(sym, section) \
+ sym##_start = LOADADDR(section); \
+@@ -125,13 +131,13 @@
+ __vectors_lma = .; \
+ OVERLAY 0xffff0000 : NOCROSSREFS AT(__vectors_lma) { \
+ .vectors { \
+- *(.vectors) \
++ OVERLAY_KEEP(*(.vectors)) \
+ } \
+ .vectors.bhb.loop8 { \
+- *(.vectors.bhb.loop8) \
++ OVERLAY_KEEP(*(.vectors.bhb.loop8)) \
+ } \
+ .vectors.bhb.bpiall { \
+- *(.vectors.bhb.bpiall) \
++ OVERLAY_KEEP(*(.vectors.bhb.bpiall)) \
+ } \
+ } \
+ ARM_LMA(__vectors, .vectors); \
+diff --git a/arch/arm64/kernel/compat_alignment.c b/arch/arm64/kernel/compat_alignment.c
+index deff21bfa6800c..b68e1d328d4cb9 100644
+--- a/arch/arm64/kernel/compat_alignment.c
++++ b/arch/arm64/kernel/compat_alignment.c
+@@ -368,6 +368,8 @@ int do_compat_alignment_fixup(unsigned long addr, struct pt_regs *regs)
+ return 1;
+ }
+
++ if (!handler)
++ return 1;
+ type = handler(addr, instr, regs);
+
+ if (type == TYPE_ERROR || type == TYPE_FAULT)
+diff --git a/arch/loongarch/Kconfig b/arch/loongarch/Kconfig
+index d9fce0fd475a04..fe9f895138dba5 100644
+--- a/arch/loongarch/Kconfig
++++ b/arch/loongarch/Kconfig
+@@ -375,8 +375,8 @@ config CMDLINE_BOOTLOADER
+ config CMDLINE_EXTEND
+ bool "Use built-in to extend bootloader kernel arguments"
+ help
+- The command-line arguments provided during boot will be
+- appended to the built-in command line. This is useful in
++ The built-in command line will be appended to the command-
++ line arguments provided during boot. This is useful in
+ cases where the provided arguments are insufficient and
+ you don't want to or cannot modify them.
+
+diff --git a/arch/loongarch/include/asm/cache.h b/arch/loongarch/include/asm/cache.h
+index 1b6d0961719989..aa622c75441442 100644
+--- a/arch/loongarch/include/asm/cache.h
++++ b/arch/loongarch/include/asm/cache.h
+@@ -8,6 +8,8 @@
+ #define L1_CACHE_SHIFT CONFIG_L1_CACHE_SHIFT
+ #define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
+
++#define ARCH_DMA_MINALIGN (16)
++
+ #define __read_mostly __section(".data..read_mostly")
+
+ #endif /* _ASM_CACHE_H */
+diff --git a/arch/loongarch/include/asm/irq.h b/arch/loongarch/include/asm/irq.h
+index 9c2ca785faa9bd..b2915fd5386209 100644
+--- a/arch/loongarch/include/asm/irq.h
++++ b/arch/loongarch/include/asm/irq.h
+@@ -53,7 +53,7 @@ void spurious_interrupt(void);
+ #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
+ void arch_trigger_cpumask_backtrace(const struct cpumask *mask, int exclude_cpu);
+
+-#define MAX_IO_PICS 2
++#define MAX_IO_PICS 8
+ #define NR_IRQS (64 + NR_VECTORS * (NR_CPUS + MAX_IO_PICS))
+
+ struct acpi_vector_group {
+diff --git a/arch/loongarch/include/asm/stacktrace.h b/arch/loongarch/include/asm/stacktrace.h
+index f23adb15f418fb..fc8b64773794a9 100644
+--- a/arch/loongarch/include/asm/stacktrace.h
++++ b/arch/loongarch/include/asm/stacktrace.h
+@@ -8,6 +8,7 @@
+ #include <asm/asm.h>
+ #include <asm/ptrace.h>
+ #include <asm/loongarch.h>
++#include <asm/unwind_hints.h>
+ #include <linux/stringify.h>
+
+ enum stack_type {
+@@ -43,6 +44,7 @@ int get_stack_info(unsigned long stack, struct task_struct *task, struct stack_i
+ static __always_inline void prepare_frametrace(struct pt_regs *regs)
+ {
+ __asm__ __volatile__(
++ UNWIND_HINT_SAVE
+ /* Save $ra */
+ STORE_ONE_REG(1)
+ /* Use $ra to save PC */
+@@ -80,6 +82,7 @@ static __always_inline void prepare_frametrace(struct pt_regs *regs)
+ STORE_ONE_REG(29)
+ STORE_ONE_REG(30)
+ STORE_ONE_REG(31)
++ UNWIND_HINT_RESTORE
+ : "=m" (regs->csr_era)
+ : "r" (regs->regs)
+ : "memory");
+diff --git a/arch/loongarch/include/asm/unwind_hints.h b/arch/loongarch/include/asm/unwind_hints.h
+index a01086ad9ddea4..2c68bc72736c95 100644
+--- a/arch/loongarch/include/asm/unwind_hints.h
++++ b/arch/loongarch/include/asm/unwind_hints.h
+@@ -23,6 +23,14 @@
+ UNWIND_HINT sp_reg=ORC_REG_SP type=UNWIND_HINT_TYPE_CALL
+ .endm
+
+-#endif /* __ASSEMBLY__ */
++#else /* !__ASSEMBLY__ */
++
++#define UNWIND_HINT_SAVE \
++ UNWIND_HINT(UNWIND_HINT_TYPE_SAVE, 0, 0, 0)
++
++#define UNWIND_HINT_RESTORE \
++ UNWIND_HINT(UNWIND_HINT_TYPE_RESTORE, 0, 0, 0)
++
++#endif /* !__ASSEMBLY__ */
+
+ #endif /* _ASM_LOONGARCH_UNWIND_HINTS_H */
+diff --git a/arch/loongarch/kernel/env.c b/arch/loongarch/kernel/env.c
+index 2f1f5b08638f81..27144de5c5fe4f 100644
+--- a/arch/loongarch/kernel/env.c
++++ b/arch/loongarch/kernel/env.c
+@@ -68,6 +68,8 @@ static int __init fdt_cpu_clk_init(void)
+ return -ENODEV;
+
+ clk = of_clk_get(np, 0);
++ of_node_put(np);
++
+ if (IS_ERR(clk))
+ return -ENODEV;
+
+diff --git a/arch/loongarch/kernel/kgdb.c b/arch/loongarch/kernel/kgdb.c
+index 445c452d72a79c..7be5b4c0c90020 100644
+--- a/arch/loongarch/kernel/kgdb.c
++++ b/arch/loongarch/kernel/kgdb.c
+@@ -8,6 +8,7 @@
+ #include <linux/hw_breakpoint.h>
+ #include <linux/kdebug.h>
+ #include <linux/kgdb.h>
++#include <linux/objtool.h>
+ #include <linux/processor.h>
+ #include <linux/ptrace.h>
+ #include <linux/sched.h>
+@@ -224,13 +225,13 @@ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long pc)
+ regs->csr_era = pc;
+ }
+
+-void arch_kgdb_breakpoint(void)
++noinline void arch_kgdb_breakpoint(void)
+ {
+ __asm__ __volatile__ ( \
+ ".globl kgdb_breakinst\n\t" \
+- "nop\n" \
+ "kgdb_breakinst:\tbreak 2\n\t"); /* BRK_KDB = 2 */
+ }
++STACK_FRAME_NON_STANDARD(arch_kgdb_breakpoint);
+
+ /*
+ * Calls linux_debug_hook before the kernel dies. If KGDB is enabled,
+diff --git a/arch/loongarch/net/bpf_jit.c b/arch/loongarch/net/bpf_jit.c
+index ea357a3edc0943..fa1500d4aa3e3a 100644
+--- a/arch/loongarch/net/bpf_jit.c
++++ b/arch/loongarch/net/bpf_jit.c
+@@ -142,6 +142,8 @@ static void build_prologue(struct jit_ctx *ctx)
+ */
+ if (seen_tail_call(ctx) && seen_call(ctx))
+ move_reg(ctx, TCC_SAVED, REG_TCC);
++ else
++ emit_insn(ctx, nop);
+
+ ctx->stack_size = stack_adjust;
+ }
+@@ -905,7 +907,10 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
+
+ move_addr(ctx, t1, func_addr);
+ emit_insn(ctx, jirl, LOONGARCH_GPR_RA, t1, 0);
+- move_reg(ctx, regmap[BPF_REG_0], LOONGARCH_GPR_A0);
++
++ if (insn->src_reg != BPF_PSEUDO_CALL)
++ move_reg(ctx, regmap[BPF_REG_0], LOONGARCH_GPR_A0);
++
+ break;
+
+ /* tail call */
+@@ -930,7 +935,10 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx, bool ext
+ {
+ const u64 imm64 = (u64)(insn + 1)->imm << 32 | (u32)insn->imm;
+
+- move_imm(ctx, dst, imm64, is32);
++ if (bpf_pseudo_func(insn))
++ move_addr(ctx, dst, imm64);
++ else
++ move_imm(ctx, dst, imm64, is32);
+ return 1;
+ }
+
+diff --git a/arch/loongarch/net/bpf_jit.h b/arch/loongarch/net/bpf_jit.h
+index 68586338ecf859..f9c569f5394914 100644
+--- a/arch/loongarch/net/bpf_jit.h
++++ b/arch/loongarch/net/bpf_jit.h
+@@ -27,6 +27,11 @@ struct jit_data {
+ struct jit_ctx ctx;
+ };
+
++static inline void emit_nop(union loongarch_instruction *insn)
++{
++ insn->word = INSN_NOP;
++}
++
+ #define emit_insn(ctx, func, ...) \
+ do { \
+ if (ctx->image != NULL) { \
+diff --git a/arch/powerpc/configs/mpc885_ads_defconfig b/arch/powerpc/configs/mpc885_ads_defconfig
+index 77306be62e9ee8..129355f87f80fc 100644
+--- a/arch/powerpc/configs/mpc885_ads_defconfig
++++ b/arch/powerpc/configs/mpc885_ads_defconfig
+@@ -78,4 +78,4 @@ CONFIG_DEBUG_VM_PGTABLE=y
+ CONFIG_DETECT_HUNG_TASK=y
+ CONFIG_BDI_SWITCH=y
+ CONFIG_PPC_EARLY_DEBUG=y
+-CONFIG_GENERIC_PTDUMP=y
++CONFIG_PTDUMP_DEBUGFS=y
+diff --git a/arch/powerpc/crypto/Makefile b/arch/powerpc/crypto/Makefile
+index 59808592f0a1b5..1e52b02d8943b3 100644
+--- a/arch/powerpc/crypto/Makefile
++++ b/arch/powerpc/crypto/Makefile
+@@ -56,3 +56,4 @@ $(obj)/aesp8-ppc.S $(obj)/ghashp8-ppc.S: $(obj)/%.S: $(src)/%.pl FORCE
+ OBJECT_FILES_NON_STANDARD_aesp10-ppc.o := y
+ OBJECT_FILES_NON_STANDARD_ghashp10-ppc.o := y
+ OBJECT_FILES_NON_STANDARD_aesp8-ppc.o := y
++OBJECT_FILES_NON_STANDARD_ghashp8-ppc.o := y
+diff --git a/arch/powerpc/kexec/relocate_32.S b/arch/powerpc/kexec/relocate_32.S
+index 104c9911f40611..dd86e338307d3f 100644
+--- a/arch/powerpc/kexec/relocate_32.S
++++ b/arch/powerpc/kexec/relocate_32.S
+@@ -348,16 +348,13 @@ write_utlb:
+ rlwinm r10, r24, 0, 22, 27
+
+ cmpwi r10, PPC47x_TLB0_4K
+- bne 0f
+ li r10, 0x1000 /* r10 = 4k */
+- ANNOTATE_INTRA_FUNCTION_CALL
+- bl 1f
++ beq 0f
+
+-0:
+ /* Defaults to 256M */
+ lis r10, 0x1000
+
+- bcl 20,31,$+4
++0: bcl 20,31,$+4
+ 1: mflr r4
+ addi r4, r4, (2f-1b) /* virtual address of 2f */
+
+diff --git a/arch/powerpc/platforms/cell/spufs/gang.c b/arch/powerpc/platforms/cell/spufs/gang.c
+index 827d338deaf4c6..2c2999de6bfa25 100644
+--- a/arch/powerpc/platforms/cell/spufs/gang.c
++++ b/arch/powerpc/platforms/cell/spufs/gang.c
+@@ -25,6 +25,7 @@ struct spu_gang *alloc_spu_gang(void)
+ mutex_init(&gang->aff_mutex);
+ INIT_LIST_HEAD(&gang->list);
+ INIT_LIST_HEAD(&gang->aff_list_head);
++ gang->alive = 1;
+
+ out:
+ return gang;
+diff --git a/arch/powerpc/platforms/cell/spufs/inode.c b/arch/powerpc/platforms/cell/spufs/inode.c
+index 70236d1df3d3e0..9f9e4b87162782 100644
+--- a/arch/powerpc/platforms/cell/spufs/inode.c
++++ b/arch/powerpc/platforms/cell/spufs/inode.c
+@@ -192,13 +192,32 @@ static int spufs_fill_dir(struct dentry *dir,
+ return -ENOMEM;
+ ret = spufs_new_file(dir->d_sb, dentry, files->ops,
+ files->mode & mode, files->size, ctx);
+- if (ret)
++ if (ret) {
++ dput(dentry);
+ return ret;
++ }
+ files++;
+ }
+ return 0;
+ }
+
++static void unuse_gang(struct dentry *dir)
++{
++ struct inode *inode = dir->d_inode;
++ struct spu_gang *gang = SPUFS_I(inode)->i_gang;
++
++ if (gang) {
++ bool dead;
++
++ inode_lock(inode); // exclusion with spufs_create_context()
++ dead = !--gang->alive;
++ inode_unlock(inode);
++
++ if (dead)
++ simple_recursive_removal(dir, NULL);
++ }
++}
++
+ static int spufs_dir_close(struct inode *inode, struct file *file)
+ {
+ struct inode *parent;
+@@ -213,6 +232,7 @@ static int spufs_dir_close(struct inode *inode, struct file *file)
+ inode_unlock(parent);
+ WARN_ON(ret);
+
++ unuse_gang(dir->d_parent);
+ return dcache_dir_close(inode, file);
+ }
+
+@@ -405,7 +425,7 @@ spufs_create_context(struct inode *inode, struct dentry *dentry,
+ {
+ int ret;
+ int affinity;
+- struct spu_gang *gang;
++ struct spu_gang *gang = SPUFS_I(inode)->i_gang;
+ struct spu_context *neighbor;
+ struct path path = {.mnt = mnt, .dentry = dentry};
+
+@@ -420,11 +440,15 @@ spufs_create_context(struct inode *inode, struct dentry *dentry,
+ if ((flags & SPU_CREATE_ISOLATE) && !isolated_loader)
+ return -ENODEV;
+
+- gang = NULL;
++ if (gang) {
++ if (!gang->alive)
++ return -ENOENT;
++ gang->alive++;
++ }
++
+ neighbor = NULL;
+ affinity = flags & (SPU_CREATE_AFFINITY_MEM | SPU_CREATE_AFFINITY_SPU);
+ if (affinity) {
+- gang = SPUFS_I(inode)->i_gang;
+ if (!gang)
+ return -EINVAL;
+ mutex_lock(&gang->aff_mutex);
+@@ -436,8 +460,11 @@ spufs_create_context(struct inode *inode, struct dentry *dentry,
+ }
+
+ ret = spufs_mkdir(inode, dentry, flags, mode & 0777);
+- if (ret)
++ if (ret) {
++ if (neighbor)
++ put_spu_context(neighbor);
+ goto out_aff_unlock;
++ }
+
+ if (affinity) {
+ spufs_set_affinity(flags, SPUFS_I(d_inode(dentry))->i_ctx,
+@@ -453,6 +480,8 @@ spufs_create_context(struct inode *inode, struct dentry *dentry,
+ out_aff_unlock:
+ if (affinity)
+ mutex_unlock(&gang->aff_mutex);
++ if (ret && gang)
++ gang->alive--; // can't reach 0
+ return ret;
+ }
+
+@@ -482,6 +511,7 @@ spufs_mkgang(struct inode *dir, struct dentry *dentry, umode_t mode)
+ inode->i_fop = &simple_dir_operations;
+
+ d_instantiate(dentry, inode);
++ dget(dentry);
+ inc_nlink(dir);
+ inc_nlink(d_inode(dentry));
+ return ret;
+@@ -492,6 +522,21 @@ spufs_mkgang(struct inode *dir, struct dentry *dentry, umode_t mode)
+ return ret;
+ }
+
++static int spufs_gang_close(struct inode *inode, struct file *file)
++{
++ unuse_gang(file->f_path.dentry);
++ return dcache_dir_close(inode, file);
++}
++
++static const struct file_operations spufs_gang_fops = {
++ .open = dcache_dir_open,
++ .release = spufs_gang_close,
++ .llseek = dcache_dir_lseek,
++ .read = generic_read_dir,
++ .iterate_shared = dcache_readdir,
++ .fsync = noop_fsync,
++};
++
+ static int spufs_gang_open(const struct path *path)
+ {
+ int ret;
+@@ -511,7 +556,7 @@ static int spufs_gang_open(const struct path *path)
+ return PTR_ERR(filp);
+ }
+
+- filp->f_op = &simple_dir_operations;
++ filp->f_op = &spufs_gang_fops;
+ fd_install(ret, filp);
+ return ret;
+ }
+@@ -526,10 +571,8 @@ static int spufs_create_gang(struct inode *inode,
+ ret = spufs_mkgang(inode, dentry, mode & 0777);
+ if (!ret) {
+ ret = spufs_gang_open(&path);
+- if (ret < 0) {
+- int err = simple_rmdir(inode, dentry);
+- WARN_ON(err);
+- }
++ if (ret < 0)
++ unuse_gang(dentry);
+ }
+ return ret;
+ }
+diff --git a/arch/powerpc/platforms/cell/spufs/spufs.h b/arch/powerpc/platforms/cell/spufs/spufs.h
+index 84958487f696a4..d33787c57c39a2 100644
+--- a/arch/powerpc/platforms/cell/spufs/spufs.h
++++ b/arch/powerpc/platforms/cell/spufs/spufs.h
+@@ -151,6 +151,8 @@ struct spu_gang {
+ int aff_flags;
+ struct spu *aff_ref_spu;
+ atomic_t aff_sched_count;
++
++ int alive;
+ };
+
+ /* Flag bits for spu_gang aff_flags */
+diff --git a/arch/riscv/errata/Makefile b/arch/riscv/errata/Makefile
+index f0da9d7b39c374..bc6c77ba837d2d 100644
+--- a/arch/riscv/errata/Makefile
++++ b/arch/riscv/errata/Makefile
+@@ -1,5 +1,9 @@
+ ifdef CONFIG_RELOCATABLE
+-KBUILD_CFLAGS += -fno-pie
++# We can't use PIC/PIE when handling early-boot errata parsing, as the kernel
++# doesn't have a GOT setup at that point. So instead just use medany: it's
++# usually position-independent, so it should be good enough for the errata
++# handling.
++KBUILD_CFLAGS += -fno-pie -mcmodel=medany
+ endif
+
+ ifdef CONFIG_RISCV_ALTERNATIVE_EARLY
+diff --git a/arch/riscv/include/asm/ftrace.h b/arch/riscv/include/asm/ftrace.h
+index 2cddd79ff21b1e..f253c8dae878ef 100644
+--- a/arch/riscv/include/asm/ftrace.h
++++ b/arch/riscv/include/asm/ftrace.h
+@@ -92,7 +92,7 @@ struct dyn_arch_ftrace {
+ #define make_call_t0(caller, callee, call) \
+ do { \
+ unsigned int offset = \
+- (unsigned long) callee - (unsigned long) caller; \
++ (unsigned long) (callee) - (unsigned long) (caller); \
+ call[0] = to_auipc_t0(offset); \
+ call[1] = to_jalr_t0(offset); \
+ } while (0)
+@@ -108,7 +108,7 @@ do { \
+ #define make_call_ra(caller, callee, call) \
+ do { \
+ unsigned int offset = \
+- (unsigned long) callee - (unsigned long) caller; \
++ (unsigned long) (callee) - (unsigned long) (caller); \
+ call[0] = to_auipc_ra(offset); \
+ call[1] = to_jalr_ra(offset); \
+ } while (0)
+diff --git a/arch/riscv/kernel/elf_kexec.c b/arch/riscv/kernel/elf_kexec.c
+index 3c37661801f95d..e783a72d051f43 100644
+--- a/arch/riscv/kernel/elf_kexec.c
++++ b/arch/riscv/kernel/elf_kexec.c
+@@ -468,6 +468,9 @@ int arch_kexec_apply_relocations_add(struct purgatory_info *pi,
+ case R_RISCV_ALIGN:
+ case R_RISCV_RELAX:
+ break;
++ case R_RISCV_64:
++ *(u64 *)loc = val;
++ break;
+ default:
+ pr_err("Unknown rela relocation: %d\n", r_type);
+ return -ENOEXEC;
+diff --git a/arch/riscv/kvm/vcpu_pmu.c b/arch/riscv/kvm/vcpu_pmu.c
+index 2707a51b082ca7..78ac3216a54ddb 100644
+--- a/arch/riscv/kvm/vcpu_pmu.c
++++ b/arch/riscv/kvm/vcpu_pmu.c
+@@ -666,6 +666,7 @@ int kvm_riscv_vcpu_pmu_ctr_cfg_match(struct kvm_vcpu *vcpu, unsigned long ctr_ba
+ .type = etype,
+ .size = sizeof(struct perf_event_attr),
+ .pinned = true,
++ .disabled = true,
+ /*
+ * It should never reach here if the platform doesn't support the sscofpmf
+ * extension as mode filtering won't work without it.
+diff --git a/arch/riscv/mm/hugetlbpage.c b/arch/riscv/mm/hugetlbpage.c
+index b4a78a4b35cff5..375dd96bb4a0d2 100644
+--- a/arch/riscv/mm/hugetlbpage.c
++++ b/arch/riscv/mm/hugetlbpage.c
+@@ -148,22 +148,25 @@ unsigned long hugetlb_mask_last_page(struct hstate *h)
+ static pte_t get_clear_contig(struct mm_struct *mm,
+ unsigned long addr,
+ pte_t *ptep,
+- unsigned long pte_num)
++ unsigned long ncontig)
+ {
+- pte_t orig_pte = ptep_get(ptep);
+- unsigned long i;
+-
+- for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++) {
+- pte_t pte = ptep_get_and_clear(mm, addr, ptep);
+-
+- if (pte_dirty(pte))
+- orig_pte = pte_mkdirty(orig_pte);
+-
+- if (pte_young(pte))
+- orig_pte = pte_mkyoung(orig_pte);
++ pte_t pte, tmp_pte;
++ bool present;
++
++ pte = ptep_get_and_clear(mm, addr, ptep);
++ present = pte_present(pte);
++ while (--ncontig) {
++ ptep++;
++ addr += PAGE_SIZE;
++ tmp_pte = ptep_get_and_clear(mm, addr, ptep);
++ if (present) {
++ if (pte_dirty(tmp_pte))
++ pte = pte_mkdirty(pte);
++ if (pte_young(tmp_pte))
++ pte = pte_mkyoung(pte);
++ }
+ }
+-
+- return orig_pte;
++ return pte;
+ }
+
+ static pte_t get_clear_contig_flush(struct mm_struct *mm,
+@@ -212,6 +215,26 @@ static void clear_flush(struct mm_struct *mm,
+ flush_tlb_range(&vma, saddr, addr);
+ }
+
++static int num_contig_ptes_from_size(unsigned long sz, size_t *pgsize)
++{
++ unsigned long hugepage_shift;
++
++ if (sz >= PGDIR_SIZE)
++ hugepage_shift = PGDIR_SHIFT;
++ else if (sz >= P4D_SIZE)
++ hugepage_shift = P4D_SHIFT;
++ else if (sz >= PUD_SIZE)
++ hugepage_shift = PUD_SHIFT;
++ else if (sz >= PMD_SIZE)
++ hugepage_shift = PMD_SHIFT;
++ else
++ hugepage_shift = PAGE_SHIFT;
++
++ *pgsize = 1 << hugepage_shift;
++
++ return sz >> hugepage_shift;
++}
++
+ /*
+ * When dealing with NAPOT mappings, the privileged specification indicates that
+ * "if an update needs to be made, the OS generally should first mark all of the
+@@ -226,22 +249,10 @@ void set_huge_pte_at(struct mm_struct *mm,
+ pte_t pte,
+ unsigned long sz)
+ {
+- unsigned long hugepage_shift, pgsize;
++ size_t pgsize;
+ int i, pte_num;
+
+- if (sz >= PGDIR_SIZE)
+- hugepage_shift = PGDIR_SHIFT;
+- else if (sz >= P4D_SIZE)
+- hugepage_shift = P4D_SHIFT;
+- else if (sz >= PUD_SIZE)
+- hugepage_shift = PUD_SHIFT;
+- else if (sz >= PMD_SIZE)
+- hugepage_shift = PMD_SHIFT;
+- else
+- hugepage_shift = PAGE_SHIFT;
+-
+- pte_num = sz >> hugepage_shift;
+- pgsize = 1 << hugepage_shift;
++ pte_num = num_contig_ptes_from_size(sz, &pgsize);
+
+ if (!pte_present(pte)) {
+ for (i = 0; i < pte_num; i++, ptep++, addr += pgsize)
+@@ -295,13 +306,14 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm,
+ unsigned long addr,
+ pte_t *ptep, unsigned long sz)
+ {
++ size_t pgsize;
+ pte_t orig_pte = ptep_get(ptep);
+ int pte_num;
+
+ if (!pte_napot(orig_pte))
+ return ptep_get_and_clear(mm, addr, ptep);
+
+- pte_num = napot_pte_num(napot_cont_order(orig_pte));
++ pte_num = num_contig_ptes_from_size(sz, &pgsize);
+
+ return get_clear_contig(mm, addr, ptep, pte_num);
+ }
+@@ -351,6 +363,7 @@ void huge_pte_clear(struct mm_struct *mm,
+ pte_t *ptep,
+ unsigned long sz)
+ {
++ size_t pgsize;
+ pte_t pte = ptep_get(ptep);
+ int i, pte_num;
+
+@@ -359,8 +372,9 @@ void huge_pte_clear(struct mm_struct *mm,
+ return;
+ }
+
+- pte_num = napot_pte_num(napot_cont_order(pte));
+- for (i = 0; i < pte_num; i++, addr += PAGE_SIZE, ptep++)
++ pte_num = num_contig_ptes_from_size(sz, &pgsize);
++
++ for (i = 0; i < pte_num; i++, addr += pgsize, ptep++)
+ pte_clear(mm, addr, ptep);
+ }
+
+diff --git a/arch/riscv/purgatory/entry.S b/arch/riscv/purgatory/entry.S
+index 0e6ca6d5ae4b41..c5db2f072c341a 100644
+--- a/arch/riscv/purgatory/entry.S
++++ b/arch/riscv/purgatory/entry.S
+@@ -12,6 +12,7 @@
+
+ .text
+
++.align 2
+ SYM_CODE_START(purgatory_start)
+
+ lla sp, .Lstack
+diff --git a/arch/s390/include/asm/io.h b/arch/s390/include/asm/io.h
+index fc9933a743d692..251e0372ccbd0a 100644
+--- a/arch/s390/include/asm/io.h
++++ b/arch/s390/include/asm/io.h
+@@ -34,8 +34,6 @@ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr);
+
+ #define ioremap_wc(addr, size) \
+ ioremap_prot((addr), (size), pgprot_val(pgprot_writecombine(PAGE_KERNEL)))
+-#define ioremap_wt(addr, size) \
+- ioremap_prot((addr), (size), pgprot_val(pgprot_writethrough(PAGE_KERNEL)))
+
+ static inline void __iomem *ioport_map(unsigned long port, unsigned int nr)
+ {
+diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
+index 0ffbaf7419558b..5ee73f245a0c0b 100644
+--- a/arch/s390/include/asm/pgtable.h
++++ b/arch/s390/include/asm/pgtable.h
+@@ -1365,9 +1365,6 @@ void gmap_pmdp_idte_global(struct mm_struct *mm, unsigned long vmaddr);
+ #define pgprot_writecombine pgprot_writecombine
+ pgprot_t pgprot_writecombine(pgprot_t prot);
+
+-#define pgprot_writethrough pgprot_writethrough
+-pgprot_t pgprot_writethrough(pgprot_t prot);
+-
+ #define PFN_PTE_SHIFT PAGE_SHIFT
+
+ /*
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 594da4cba707a6..a7de838f803189 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -501,7 +501,7 @@ SYM_CODE_START(mcck_int_handler)
+ clgrjl %r9,%r14, 4f
+ larl %r14,.Lsie_leave
+ clgrjhe %r9,%r14, 4f
+- lg %r10,__LC_PCPU
++ lg %r10,__LC_PCPU(%r13)
+ oi __PCPU_FLAGS+7(%r10), _CIF_MCCK_GUEST
+ 4: BPENTER __SF_SIE_FLAGS(%r15),_TIF_ISOLATE_BP_GUEST
+ SIEEXIT __SF_SIE_CONTROL(%r15),%r13
+diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
+index 2c944bafb0309c..b03c665d72426a 100644
+--- a/arch/s390/mm/pgtable.c
++++ b/arch/s390/mm/pgtable.c
+@@ -34,16 +34,6 @@ pgprot_t pgprot_writecombine(pgprot_t prot)
+ }
+ EXPORT_SYMBOL_GPL(pgprot_writecombine);
+
+-pgprot_t pgprot_writethrough(pgprot_t prot)
+-{
+- /*
+- * mio_wb_bit_mask may be set on a different CPU, but it is only set
+- * once at init and only read afterwards.
+- */
+- return __pgprot(pgprot_val(prot) & ~mio_wb_bit_mask);
+-}
+-EXPORT_SYMBOL_GPL(pgprot_writethrough);
+-
+ static inline void ptep_ipte_local(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, int nodat)
+ {
+diff --git a/arch/um/include/shared/os.h b/arch/um/include/shared/os.h
+index 9a039d6f1f7483..77a8593f219a13 100644
+--- a/arch/um/include/shared/os.h
++++ b/arch/um/include/shared/os.h
+@@ -218,7 +218,6 @@ extern int os_protect_memory(void *addr, unsigned long len,
+ extern int os_unmap_memory(void *addr, int len);
+ extern int os_drop_memory(void *addr, int length);
+ extern int can_drop_memory(void);
+-extern int os_mincore(void *addr, unsigned long len);
+
+ /* execvp.c */
+ extern int execvp_noalloc(char *buf, const char *file, char *const argv[]);
+diff --git a/arch/um/kernel/Makefile b/arch/um/kernel/Makefile
+index f8567b933ffaa9..4df1cd0d20179e 100644
+--- a/arch/um/kernel/Makefile
++++ b/arch/um/kernel/Makefile
+@@ -17,7 +17,7 @@ extra-y := vmlinux.lds
+ obj-y = config.o exec.o exitcode.o irq.o ksyms.o mem.o \
+ physmem.o process.o ptrace.o reboot.o sigio.o \
+ signal.o sysrq.o time.o tlb.o trap.o \
+- um_arch.o umid.o maccess.o kmsg_dump.o capflags.o skas/
++ um_arch.o umid.o kmsg_dump.o capflags.o skas/
+ obj-y += load_file.o
+
+ obj-$(CONFIG_BLK_DEV_INITRD) += initrd.o
+diff --git a/arch/um/kernel/maccess.c b/arch/um/kernel/maccess.c
+deleted file mode 100644
+index 8ccd56813f684f..00000000000000
+--- a/arch/um/kernel/maccess.c
++++ /dev/null
+@@ -1,19 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0-only
+-/*
+- * Copyright (C) 2013 Richard Weinberger <richrd@nod.at>
+- */
+-
+-#include <linux/uaccess.h>
+-#include <linux/kernel.h>
+-#include <os.h>
+-
+-bool copy_from_kernel_nofault_allowed(const void *src, size_t size)
+-{
+- void *psrc = (void *)rounddown((unsigned long)src, PAGE_SIZE);
+-
+- if ((unsigned long)src < PAGE_SIZE || size <= 0)
+- return false;
+- if (os_mincore(psrc, size + src - psrc) <= 0)
+- return false;
+- return true;
+-}
+diff --git a/arch/um/os-Linux/process.c b/arch/um/os-Linux/process.c
+index e52dd37ddadccc..2686120ab2325a 100644
+--- a/arch/um/os-Linux/process.c
++++ b/arch/um/os-Linux/process.c
+@@ -223,57 +223,6 @@ int __init can_drop_memory(void)
+ return ok;
+ }
+
+-static int os_page_mincore(void *addr)
+-{
+- char vec[2];
+- int ret;
+-
+- ret = mincore(addr, UM_KERN_PAGE_SIZE, vec);
+- if (ret < 0) {
+- if (errno == ENOMEM || errno == EINVAL)
+- return 0;
+- else
+- return -errno;
+- }
+-
+- return vec[0] & 1;
+-}
+-
+-int os_mincore(void *addr, unsigned long len)
+-{
+- char *vec;
+- int ret, i;
+-
+- if (len <= UM_KERN_PAGE_SIZE)
+- return os_page_mincore(addr);
+-
+- vec = calloc(1, (len + UM_KERN_PAGE_SIZE - 1) / UM_KERN_PAGE_SIZE);
+- if (!vec)
+- return -ENOMEM;
+-
+- ret = mincore(addr, UM_KERN_PAGE_SIZE, vec);
+- if (ret < 0) {
+- if (errno == ENOMEM || errno == EINVAL)
+- ret = 0;
+- else
+- ret = -errno;
+-
+- goto out;
+- }
+-
+- for (i = 0; i < ((len + UM_KERN_PAGE_SIZE - 1) / UM_KERN_PAGE_SIZE); i++) {
+- if (!(vec[i] & 1)) {
+- ret = 0;
+- goto out;
+- }
+- }
+-
+- ret = 1;
+-out:
+- free(vec);
+- return ret;
+-}
+-
+ void init_new_thread_signals(void)
+ {
+ set_handler(SIGSEGV);
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index 6f8e9af827e0c9..db38d2b9b78868 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -226,7 +226,7 @@ config X86
+ select HAVE_SAMPLE_FTRACE_DIRECT_MULTI if X86_64
+ select HAVE_EBPF_JIT
+ select HAVE_EFFICIENT_UNALIGNED_ACCESS
+- select HAVE_EISA
++ select HAVE_EISA if X86_32
+ select HAVE_EXIT_THREAD
+ select HAVE_GUP_FAST
+ select HAVE_FENTRY if X86_64 || DYNAMIC_FTRACE
+@@ -894,6 +894,7 @@ config INTEL_TDX_GUEST
+ depends on X86_64 && CPU_SUP_INTEL
+ depends on X86_X2APIC
+ depends on EFI_STUB
++ depends on PARAVIRT
+ select ARCH_HAS_CC_PLATFORM
+ select X86_MEM_ENCRYPT
+ select X86_MCE
+diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
+index 2a7279d80460a8..42e6a40876ea4c 100644
+--- a/arch/x86/Kconfig.cpu
++++ b/arch/x86/Kconfig.cpu
+@@ -368,7 +368,7 @@ config X86_HAVE_PAE
+
+ config X86_CMPXCHG64
+ def_bool y
+- depends on X86_HAVE_PAE || M586TSC || M586MMX || MK6 || MK7
++ depends on X86_HAVE_PAE || M586TSC || M586MMX || MK6 || MK7 || MGEODEGX1 || MGEODE_LX
+
+ # this should be set for all -march=.. options where the compiler
+ # generates cmov.
+diff --git a/arch/x86/Makefile.um b/arch/x86/Makefile.um
+index a46b1397ad01c2..c86cbd9cbba38f 100644
+--- a/arch/x86/Makefile.um
++++ b/arch/x86/Makefile.um
+@@ -7,12 +7,13 @@ core-y += arch/x86/crypto/
+ # GCC versions < 11. See:
+ # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99652
+ #
+-ifeq ($(CONFIG_CC_IS_CLANG),y)
+-KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx
+-KBUILD_RUSTFLAGS += --target=$(objtree)/scripts/target.json
++ifeq ($(call gcc-min-version, 110000)$(CONFIG_CC_IS_CLANG),y)
++KBUILD_CFLAGS += -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx
+ KBUILD_RUSTFLAGS += -Ctarget-feature=-sse,-sse2,-sse3,-ssse3,-sse4.1,-sse4.2,-avx,-avx2
+ endif
+
++KBUILD_RUSTFLAGS += --target=$(objtree)/scripts/target.json
++
+ ifeq ($(CONFIG_X86_32),y)
+ START := 0x8048000
+
+diff --git a/arch/x86/coco/tdx/tdx.c b/arch/x86/coco/tdx/tdx.c
+index 2f85ed005c42f1..b8aeb3ac7d28b7 100644
+--- a/arch/x86/coco/tdx/tdx.c
++++ b/arch/x86/coco/tdx/tdx.c
+@@ -14,6 +14,7 @@
+ #include <asm/ia32.h>
+ #include <asm/insn.h>
+ #include <asm/insn-eval.h>
++#include <asm/paravirt_types.h>
+ #include <asm/pgtable.h>
+ #include <asm/set_memory.h>
+ #include <asm/traps.h>
+@@ -359,7 +360,7 @@ static int handle_halt(struct ve_info *ve)
+ return ve_instr_len(ve);
+ }
+
+-void __cpuidle tdx_safe_halt(void)
++void __cpuidle tdx_halt(void)
+ {
+ const bool irq_disabled = false;
+
+@@ -370,6 +371,16 @@ void __cpuidle tdx_safe_halt(void)
+ WARN_ONCE(1, "HLT instruction emulation failed\n");
+ }
+
++static void __cpuidle tdx_safe_halt(void)
++{
++ tdx_halt();
++ /*
++ * "__cpuidle" section doesn't support instrumentation, so stick
++ * with raw_* variant that avoids tracing hooks.
++ */
++ raw_local_irq_enable();
++}
++
+ static int read_msr(struct pt_regs *regs, struct ve_info *ve)
+ {
+ struct tdx_module_args args = {
+@@ -1056,6 +1067,19 @@ void __init tdx_early_init(void)
+ x86_platform.guest.enc_kexec_begin = tdx_kexec_begin;
+ x86_platform.guest.enc_kexec_finish = tdx_kexec_finish;
+
++ /*
++ * Avoid "sti;hlt" execution in TDX guests as HLT induces a #VE that
++ * will enable interrupts before HLT TDCALL invocation if executed
++ * in STI-shadow, possibly resulting in missed wakeup events.
++ *
++ * Modify all possible HLT execution paths to use TDX specific routines
++ * that directly execute TDCALL and toggle the interrupt state as
++ * needed after TDCALL completion. This also reduces HLT related #VEs
++ * in addition to having a reliable halt logic execution.
++ */
++ pv_ops.irq.safe_halt = tdx_safe_halt;
++ pv_ops.irq.halt = tdx_halt;
++
+ /*
+ * TDX intercepts the RDMSR to read the X2APIC ID in the parallel
+ * bringup low level code. That raises #VE which cannot be handled
+diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
+index ea81770629eea6..626a81c6015bda 100644
+--- a/arch/x86/entry/calling.h
++++ b/arch/x86/entry/calling.h
+@@ -70,6 +70,8 @@ For 32-bit we have the following conventions - kernel is built with
+ pushq %rsi /* pt_regs->si */
+ movq 8(%rsp), %rsi /* temporarily store the return address in %rsi */
+ movq %rdi, 8(%rsp) /* pt_regs->di (overwriting original return address) */
++ /* We just clobbered the return address - use the IRET frame for unwinding: */
++ UNWIND_HINT_IRET_REGS offset=3*8
+ .else
+ pushq %rdi /* pt_regs->di */
+ pushq %rsi /* pt_regs->si */
+diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
+index 94941c5a10ac10..51efd2da4d7fdd 100644
+--- a/arch/x86/entry/common.c
++++ b/arch/x86/entry/common.c
+@@ -142,7 +142,7 @@ static __always_inline int syscall_32_enter(struct pt_regs *regs)
+ #ifdef CONFIG_IA32_EMULATION
+ bool __ia32_enabled __ro_after_init = !IS_ENABLED(CONFIG_IA32_EMULATION_DEFAULT_DISABLED);
+
+-static int ia32_emulation_override_cmdline(char *arg)
++static int __init ia32_emulation_override_cmdline(char *arg)
+ {
+ return kstrtobool(arg, &__ia32_enabled);
+ }
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index 3a68b3e0b7a358..f86e47afd56099 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -2779,28 +2779,33 @@ static u64 icl_update_topdown_event(struct perf_event *event)
+
+ DEFINE_STATIC_CALL(intel_pmu_update_topdown_event, x86_perf_event_update);
+
+-static void intel_pmu_read_topdown_event(struct perf_event *event)
++static void intel_pmu_read_event(struct perf_event *event)
+ {
+- struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++ if (event->hw.flags & (PERF_X86_EVENT_AUTO_RELOAD | PERF_X86_EVENT_TOPDOWN)) {
++ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
++ bool pmu_enabled = cpuc->enabled;
+
+- /* Only need to call update_topdown_event() once for group read. */
+- if ((cpuc->txn_flags & PERF_PMU_TXN_READ) &&
+- !is_slots_event(event))
+- return;
++ /* Only need to call update_topdown_event() once for group read. */
++ if (is_metric_event(event) && (cpuc->txn_flags & PERF_PMU_TXN_READ))
++ return;
+
+- perf_pmu_disable(event->pmu);
+- static_call(intel_pmu_update_topdown_event)(event);
+- perf_pmu_enable(event->pmu);
+-}
++ cpuc->enabled = 0;
++ if (pmu_enabled)
++ intel_pmu_disable_all();
+
+-static void intel_pmu_read_event(struct perf_event *event)
+-{
+- if (event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD)
+- intel_pmu_auto_reload_read(event);
+- else if (is_topdown_count(event))
+- intel_pmu_read_topdown_event(event);
+- else
+- x86_perf_event_update(event);
++ if (is_topdown_event(event))
++ static_call(intel_pmu_update_topdown_event)(event);
++ else
++ intel_pmu_drain_pebs_buffer();
++
++ cpuc->enabled = pmu_enabled;
++ if (pmu_enabled)
++ intel_pmu_enable_all(0);
++
++ return;
++ }
++
++ x86_perf_event_update(event);
+ }
+
+ static void intel_pmu_enable_fixed(struct perf_event *event)
+@@ -3067,7 +3072,7 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
+
+ handled++;
+ x86_pmu_handle_guest_pebs(regs, &data);
+- x86_pmu.drain_pebs(regs, &data);
++ static_call(x86_pmu_drain_pebs)(regs, &data);
+ status &= intel_ctrl | GLOBAL_STATUS_TRACE_TOPAPMI;
+
+ /*
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index c07ca43e67e7f1..1617aa3efd68b1 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -932,11 +932,11 @@ int intel_pmu_drain_bts_buffer(void)
+ return 1;
+ }
+
+-static inline void intel_pmu_drain_pebs_buffer(void)
++void intel_pmu_drain_pebs_buffer(void)
+ {
+ struct perf_sample_data data;
+
+- x86_pmu.drain_pebs(NULL, &data);
++ static_call(x86_pmu_drain_pebs)(NULL, &data);
+ }
+
+ /*
+@@ -2079,15 +2079,6 @@ get_next_pebs_record_by_bit(void *base, void *top, int bit)
+ return NULL;
+ }
+
+-void intel_pmu_auto_reload_read(struct perf_event *event)
+-{
+- WARN_ON(!(event->hw.flags & PERF_X86_EVENT_AUTO_RELOAD));
+-
+- perf_pmu_disable(event->pmu);
+- intel_pmu_drain_pebs_buffer();
+- perf_pmu_enable(event->pmu);
+-}
+-
+ /*
+ * Special variant of intel_pmu_save_and_restart() for auto-reload.
+ */
+diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
+index ac1182141bf67f..8c616656391ec4 100644
+--- a/arch/x86/events/perf_event.h
++++ b/arch/x86/events/perf_event.h
+@@ -1092,6 +1092,7 @@ extern struct x86_pmu x86_pmu __read_mostly;
+
+ DECLARE_STATIC_CALL(x86_pmu_set_period, *x86_pmu.set_period);
+ DECLARE_STATIC_CALL(x86_pmu_update, *x86_pmu.update);
++DECLARE_STATIC_CALL(x86_pmu_drain_pebs, *x86_pmu.drain_pebs);
+
+ static __always_inline struct x86_perf_task_context_opt *task_context_opt(void *ctx)
+ {
+@@ -1626,7 +1627,7 @@ void intel_pmu_pebs_disable_all(void);
+
+ void intel_pmu_pebs_sched_task(struct perf_event_pmu_context *pmu_ctx, bool sched_in);
+
+-void intel_pmu_auto_reload_read(struct perf_event *event);
++void intel_pmu_drain_pebs_buffer(void);
+
+ void intel_pmu_store_pebs_lbrs(struct lbr_entry *lbr);
+
+diff --git a/arch/x86/hyperv/hv_vtl.c b/arch/x86/hyperv/hv_vtl.c
+index 04775346369c59..d04ccd4b3b4af0 100644
+--- a/arch/x86/hyperv/hv_vtl.c
++++ b/arch/x86/hyperv/hv_vtl.c
+@@ -30,6 +30,7 @@ void __init hv_vtl_init_platform(void)
+ x86_platform.realmode_init = x86_init_noop;
+ x86_init.irqs.pre_vector_init = x86_init_noop;
+ x86_init.timers.timer_init = x86_init_noop;
++ x86_init.resources.probe_roms = x86_init_noop;
+
+ /* Avoid searching for BIOS MP tables */
+ x86_init.mpparse.find_mptable = x86_init_noop;
+diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
+index 60fc3ed728304c..4065f5ef3ae08e 100644
+--- a/arch/x86/hyperv/ivm.c
++++ b/arch/x86/hyperv/ivm.c
+@@ -339,7 +339,7 @@ int hv_snp_boot_ap(u32 cpu, unsigned long start_ip)
+ vmsa->sev_features = sev_status >> 2;
+
+ ret = snp_set_vmsa(vmsa, true);
+- if (!ret) {
++ if (ret) {
+ pr_err("RMPADJUST(%llx) failed: %llx\n", (u64)vmsa, ret);
+ free_page((u64)vmsa);
+ return ret;
+@@ -465,7 +465,6 @@ static int hv_mark_gpa_visibility(u16 count, const u64 pfn[],
+ enum hv_mem_host_visibility visibility)
+ {
+ struct hv_gpa_range_for_visibility *input;
+- u16 pages_processed;
+ u64 hv_status;
+ unsigned long flags;
+
+@@ -494,7 +493,7 @@ static int hv_mark_gpa_visibility(u16 count, const u64 pfn[],
+ memcpy((void *)input->gpa_page_list, pfn, count * sizeof(*pfn));
+ hv_status = hv_do_rep_hypercall(
+ HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY, count,
+- 0, input, &pages_processed);
++ 0, input, NULL);
+ local_irq_restore(flags);
+
+ if (hv_result_success(hv_status))
+diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h
+index eba178996d8459..b5b63329406137 100644
+--- a/arch/x86/include/asm/tdx.h
++++ b/arch/x86/include/asm/tdx.h
+@@ -58,7 +58,7 @@ void tdx_get_ve_info(struct ve_info *ve);
+
+ bool tdx_handle_virt_exception(struct pt_regs *regs, struct ve_info *ve);
+
+-void tdx_safe_halt(void);
++void tdx_halt(void);
+
+ bool tdx_early_handle_ve(struct pt_regs *regs);
+
+@@ -69,7 +69,7 @@ u64 tdx_hcall_get_quote(u8 *buf, size_t size);
+ #else
+
+ static inline void tdx_early_init(void) { };
+-static inline void tdx_safe_halt(void) { };
++static inline void tdx_halt(void) { };
+
+ static inline bool tdx_early_handle_ve(struct pt_regs *regs) { return false; }
+
+diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
+index 02fc2aa06e9e0e..3da64513974853 100644
+--- a/arch/x86/include/asm/tlbflush.h
++++ b/arch/x86/include/asm/tlbflush.h
+@@ -242,7 +242,7 @@ void flush_tlb_multi(const struct cpumask *cpumask,
+ flush_tlb_mm_range((vma)->vm_mm, start, end, \
+ ((vma)->vm_flags & VM_HUGETLB) \
+ ? huge_page_shift(hstate_vma(vma)) \
+- : PAGE_SHIFT, false)
++ : PAGE_SHIFT, true)
+
+ extern void flush_tlb_all(void);
+ extern void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
+diff --git a/arch/x86/kernel/cpu/mce/severity.c b/arch/x86/kernel/cpu/mce/severity.c
+index dac4d64dfb2a8e..2235a74774360d 100644
+--- a/arch/x86/kernel/cpu/mce/severity.c
++++ b/arch/x86/kernel/cpu/mce/severity.c
+@@ -300,13 +300,12 @@ static noinstr int error_context(struct mce *m, struct pt_regs *regs)
+ copy_user = is_copy_from_user(regs);
+ instrumentation_end();
+
+- switch (fixup_type) {
+- case EX_TYPE_UACCESS:
+- if (!copy_user)
+- return IN_KERNEL;
+- m->kflags |= MCE_IN_KERNEL_COPYIN;
+- fallthrough;
++ if (copy_user) {
++ m->kflags |= MCE_IN_KERNEL_COPYIN | MCE_IN_KERNEL_RECOV;
++ return IN_KERNEL_RECOV;
++ }
+
++ switch (fixup_type) {
+ case EX_TYPE_FAULT_MCE_SAFE:
+ case EX_TYPE_DEFAULT_MCE_SAFE:
+ m->kflags |= MCE_IN_KERNEL_RECOV;
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 07fc145f353103..5cd735728fa028 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -600,7 +600,7 @@ static bool __apply_microcode_amd(struct microcode_amd *mc, u32 *cur_rev,
+ unsigned long p_addr = (unsigned long)&mc->hdr.data_code;
+
+ if (!verify_sha256_digest(mc->hdr.patch_id, *cur_rev, (const u8 *)p_addr, psize))
+- return -1;
++ return false;
+
+ native_wrmsrl(MSR_AMD64_PATCH_LOADER, p_addr);
+
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index d7163b764c6268..2d48db66fca857 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -148,7 +148,8 @@ static int closid_alloc(void)
+
+ lockdep_assert_held(&rdtgroup_mutex);
+
+- if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID)) {
++ if (IS_ENABLED(CONFIG_RESCTRL_RMID_DEPENDS_ON_CLOSID) &&
++ is_llc_occupancy_enabled()) {
+ cleanest_closid = resctrl_find_cleanest_closid();
+ if (cleanest_closid < 0)
+ return cleanest_closid;
+diff --git a/arch/x86/kernel/cpu/sgx/driver.c b/arch/x86/kernel/cpu/sgx/driver.c
+index 22b65a5f5ec6c4..7f8d1e11dbee24 100644
+--- a/arch/x86/kernel/cpu/sgx/driver.c
++++ b/arch/x86/kernel/cpu/sgx/driver.c
+@@ -150,13 +150,15 @@ int __init sgx_drv_init(void)
+ u64 xfrm_mask;
+ int ret;
+
+- if (!cpu_feature_enabled(X86_FEATURE_SGX_LC))
++ if (!cpu_feature_enabled(X86_FEATURE_SGX_LC)) {
++ pr_info("SGX disabled: SGX launch control CPU feature is not available, /dev/sgx_enclave disabled.\n");
+ return -ENODEV;
++ }
+
+ cpuid_count(SGX_CPUID, 0, &eax, &ebx, &ecx, &edx);
+
+ if (!(eax & 1)) {
+- pr_err("SGX disabled: SGX1 instruction support not available.\n");
++ pr_info("SGX disabled: SGX1 instruction support not available, /dev/sgx_enclave disabled.\n");
+ return -ENODEV;
+ }
+
+@@ -173,8 +175,10 @@ int __init sgx_drv_init(void)
+ }
+
+ ret = misc_register(&sgx_dev_enclave);
+- if (ret)
++ if (ret) {
++ pr_info("SGX disabled: Unable to register the /dev/sgx_enclave driver (%d).\n", ret);
+ return ret;
++ }
+
+ return 0;
+ }
+diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
+index a7d562697e50e4..b2b118a8c09be9 100644
+--- a/arch/x86/kernel/dumpstack.c
++++ b/arch/x86/kernel/dumpstack.c
+@@ -195,6 +195,7 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
+ printk("%sCall Trace:\n", log_lvl);
+
+ unwind_start(&state, task, regs, stack);
++ stack = stack ?: get_stack_pointer(task, regs);
+ regs = unwind_get_entry_regs(&state, &partial);
+
+ /*
+@@ -213,9 +214,7 @@ static void show_trace_log_lvl(struct task_struct *task, struct pt_regs *regs,
+ * - hardirq stack
+ * - entry stack
+ */
+- for (stack = stack ?: get_stack_pointer(task, regs);
+- stack;
+- stack = stack_info.next_sp) {
++ for (; stack; stack = stack_info.next_sp) {
+ const char *stack_name;
+
+ stack = PTR_ALIGN(stack, sizeof(long));
+diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
+index 1209c7aebb211f..dcac3c058fb761 100644
+--- a/arch/x86/kernel/fpu/core.c
++++ b/arch/x86/kernel/fpu/core.c
+@@ -220,7 +220,7 @@ bool fpu_alloc_guest_fpstate(struct fpu_guest *gfpu)
+ struct fpstate *fpstate;
+ unsigned int size;
+
+- size = fpu_user_cfg.default_size + ALIGN(offsetof(struct fpstate, regs), 64);
++ size = fpu_kernel_cfg.default_size + ALIGN(offsetof(struct fpstate, regs), 64);
+ fpstate = vzalloc(size);
+ if (!fpstate)
+ return false;
+@@ -232,8 +232,8 @@ bool fpu_alloc_guest_fpstate(struct fpu_guest *gfpu)
+ fpstate->is_guest = true;
+
+ gfpu->fpstate = fpstate;
+- gfpu->xfeatures = fpu_user_cfg.default_features;
+- gfpu->perm = fpu_user_cfg.default_features;
++ gfpu->xfeatures = fpu_kernel_cfg.default_features;
++ gfpu->perm = fpu_kernel_cfg.default_features;
+
+ /*
+ * KVM sets the FP+SSE bits in the XSAVE header when copying FPU state
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 15507e739c255b..c7ce3655b70780 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -92,7 +92,12 @@ EXPORT_PER_CPU_SYMBOL_GPL(__tss_limit_invalid);
+ */
+ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+ {
+- memcpy(dst, src, arch_task_struct_size);
++ /* init_task is not dynamically sized (incomplete FPU state) */
++ if (unlikely(src == &init_task))
++ memcpy_and_pad(dst, arch_task_struct_size, src, sizeof(init_task), 0);
++ else
++ memcpy(dst, src, arch_task_struct_size);
++
+ #ifdef CONFIG_VM86
+ dst->thread.vm86 = NULL;
+ #endif
+@@ -933,7 +938,7 @@ void __init select_idle_routine(void)
+ static_call_update(x86_idle, mwait_idle);
+ } else if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) {
+ pr_info("using TDX aware idle routine\n");
+- static_call_update(x86_idle, tdx_safe_halt);
++ static_call_update(x86_idle, tdx_halt);
+ } else {
+ static_call_update(x86_idle, default_idle);
+ }
+diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
+index 2dbadf347b5f4f..5e3e036e6e537f 100644
+--- a/arch/x86/kernel/traps.c
++++ b/arch/x86/kernel/traps.c
+@@ -379,6 +379,21 @@ __visible void __noreturn handle_stack_overflow(struct pt_regs *regs,
+ }
+ #endif
+
++/*
++ * Prevent the compiler and/or objtool from marking the !CONFIG_X86_ESPFIX64
++ * version of exc_double_fault() as noreturn. Otherwise the noreturn mismatch
++ * between configs triggers objtool warnings.
++ *
++ * This is a temporary hack until we have compiler or plugin support for
++ * annotating noreturns.
++ */
++#ifdef CONFIG_X86_ESPFIX64
++#define always_true() true
++#else
++bool always_true(void);
++bool __weak always_true(void) { return true; }
++#endif
++
+ /*
+ * Runs on an IST stack for x86_64 and on a special task stack for x86_32.
+ *
+@@ -514,7 +529,8 @@ DEFINE_IDTENTRY_DF(exc_double_fault)
+
+ pr_emerg("PANIC: double fault, error_code: 0x%lx\n", error_code);
+ die("double fault", regs, error_code);
+- panic("Machine halted.");
++ if (always_true())
++ panic("Machine halted.");
+ instrumentation_end();
+ }
+
+diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
+index dfe6847fd99e5e..310d8cdf7ca3a9 100644
+--- a/arch/x86/kernel/tsc.c
++++ b/arch/x86/kernel/tsc.c
+@@ -956,7 +956,7 @@ static unsigned long long cyc2ns_suspend;
+
+ void tsc_save_sched_clock_state(void)
+ {
+- if (!sched_clock_stable())
++ if (!static_branch_likely(&__use_tsc) && !sched_clock_stable())
+ return;
+
+ cyc2ns_suspend = sched_clock();
+@@ -976,7 +976,7 @@ void tsc_restore_sched_clock_state(void)
+ unsigned long flags;
+ int cpu;
+
+- if (!sched_clock_stable())
++ if (!static_branch_likely(&__use_tsc) && !sched_clock_stable())
+ return;
+
+ local_irq_save(flags);
+diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
+index 5a952c5ea66bc6..9194695662b26f 100644
+--- a/arch/x86/kernel/uprobes.c
++++ b/arch/x86/kernel/uprobes.c
+@@ -357,19 +357,23 @@ void *arch_uprobe_trampoline(unsigned long *psize)
+ return &insn;
+ }
+
+-static unsigned long trampoline_check_ip(void)
++static unsigned long trampoline_check_ip(unsigned long tramp)
+ {
+- unsigned long tramp = uprobe_get_trampoline_vaddr();
+-
+ return tramp + (uretprobe_syscall_check - uretprobe_trampoline_entry);
+ }
+
+ SYSCALL_DEFINE0(uretprobe)
+ {
+ struct pt_regs *regs = task_pt_regs(current);
+- unsigned long err, ip, sp, r11_cx_ax[3];
++ unsigned long err, ip, sp, r11_cx_ax[3], tramp;
++
++ /* If there's no trampoline, we are called from wrong place. */
++ tramp = uprobe_get_trampoline_vaddr();
++ if (unlikely(tramp == UPROBE_NO_TRAMPOLINE_VADDR))
++ goto sigill;
+
+- if (regs->ip != trampoline_check_ip())
++ /* Make sure the ip matches the only allowed sys_uretprobe caller. */
++ if (unlikely(regs->ip != trampoline_check_ip(tramp)))
+ goto sigill;
+
+ err = copy_from_user(r11_cx_ax, (void __user *)regs->sp, sizeof(r11_cx_ax));
+diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
+index 3ec56bf76ef164..6154cb450b448b 100644
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -3957,16 +3957,12 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm)
+
+ /*
+ * The target vCPU is valid, so the vCPU will be kicked unless the
+- * request is for CREATE_ON_INIT. For any errors at this stage, the
+- * kick will place the vCPU in an non-runnable state.
++ * request is for CREATE_ON_INIT.
+ */
+ kick = true;
+
+ mutex_lock(&target_svm->sev_es.snp_vmsa_mutex);
+
+- target_svm->sev_es.snp_vmsa_gpa = INVALID_PAGE;
+- target_svm->sev_es.snp_ap_waiting_for_reset = true;
+-
+ /* Interrupt injection mode shouldn't change for AP creation */
+ if (request < SVM_VMGEXIT_AP_DESTROY) {
+ u64 sev_features;
+@@ -4012,20 +4008,23 @@ static int sev_snp_ap_creation(struct vcpu_svm *svm)
+ target_svm->sev_es.snp_vmsa_gpa = svm->vmcb->control.exit_info_2;
+ break;
+ case SVM_VMGEXIT_AP_DESTROY:
++ target_svm->sev_es.snp_vmsa_gpa = INVALID_PAGE;
+ break;
+ default:
+ vcpu_unimpl(vcpu, "vmgexit: invalid AP creation request [%#x] from guest\n",
+ request);
+ ret = -EINVAL;
+- break;
++ goto out;
+ }
+
+-out:
++ target_svm->sev_es.snp_ap_waiting_for_reset = true;
++
+ if (kick) {
+ kvm_make_request(KVM_REQ_UPDATE_PROTECTED_GUEST_STATE, target_vcpu);
+ kvm_vcpu_kick(target_vcpu);
+ }
+
++out:
+ mutex_unlock(&target_svm->sev_es.snp_vmsa_mutex);
+
+ return ret;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 8794c0a8a2e447..45337a3fc03cd7 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4590,6 +4590,11 @@ static bool kvm_is_vm_type_supported(unsigned long type)
+ return type < 32 && (kvm_caps.supported_vm_types & BIT(type));
+ }
+
++static inline u32 kvm_sync_valid_fields(struct kvm *kvm)
++{
++ return kvm && kvm->arch.has_protected_state ? 0 : KVM_SYNC_X86_VALID_FIELDS;
++}
++
+ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ {
+ int r = 0;
+@@ -4698,7 +4703,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+ break;
+ #endif
+ case KVM_CAP_SYNC_REGS:
+- r = KVM_SYNC_X86_VALID_FIELDS;
++ r = kvm_sync_valid_fields(kvm);
+ break;
+ case KVM_CAP_ADJUST_CLOCK:
+ r = KVM_CLOCK_VALID_FLAGS;
+@@ -11470,6 +11475,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ {
+ struct kvm_queued_exception *ex = &vcpu->arch.exception;
+ struct kvm_run *kvm_run = vcpu->run;
++ u32 sync_valid_fields;
+ int r;
+
+ r = kvm_mmu_post_init_vm(vcpu->kvm);
+@@ -11515,8 +11521,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+ goto out;
+ }
+
+- if ((kvm_run->kvm_valid_regs & ~KVM_SYNC_X86_VALID_FIELDS) ||
+- (kvm_run->kvm_dirty_regs & ~KVM_SYNC_X86_VALID_FIELDS)) {
++ sync_valid_fields = kvm_sync_valid_fields(vcpu->kvm);
++ if ((kvm_run->kvm_valid_regs & ~sync_valid_fields) ||
++ (kvm_run->kvm_dirty_regs & ~sync_valid_fields)) {
+ r = -EINVAL;
+ goto out;
+ }
+@@ -11574,7 +11581,7 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
+
+ out:
+ kvm_put_guest_fpu(vcpu);
+- if (kvm_run->kvm_valid_regs)
++ if (kvm_run->kvm_valid_regs && likely(!vcpu->arch.guest_state_protected))
+ store_regs(vcpu);
+ post_kvm_run_save(vcpu);
+ kvm_vcpu_srcu_read_unlock(vcpu);
+diff --git a/arch/x86/lib/copy_user_64.S b/arch/x86/lib/copy_user_64.S
+index fc9fb5d0617443..b8f74d80f35c61 100644
+--- a/arch/x86/lib/copy_user_64.S
++++ b/arch/x86/lib/copy_user_64.S
+@@ -74,6 +74,24 @@ SYM_FUNC_START(rep_movs_alternative)
+ _ASM_EXTABLE_UA( 0b, 1b)
+
+ .Llarge_movsq:
++ /* Do the first possibly unaligned word */
++0: movq (%rsi),%rax
++1: movq %rax,(%rdi)
++
++ _ASM_EXTABLE_UA( 0b, .Lcopy_user_tail)
++ _ASM_EXTABLE_UA( 1b, .Lcopy_user_tail)
++
++ /* What would be the offset to the aligned destination? */
++ leaq 8(%rdi),%rax
++ andq $-8,%rax
++ subq %rdi,%rax
++
++ /* .. and update pointers and count to match */
++ addq %rax,%rdi
++ addq %rax,%rsi
++ subq %rax,%rcx
++
++ /* make %rcx contain the number of words, %rax the remainder */
+ movq %rcx,%rax
+ shrq $3,%rcx
+ andl $7,%eax
+diff --git a/arch/x86/mm/mem_encrypt_identity.c b/arch/x86/mm/mem_encrypt_identity.c
+index ac33b2263a434d..b922b9fea6b648 100644
+--- a/arch/x86/mm/mem_encrypt_identity.c
++++ b/arch/x86/mm/mem_encrypt_identity.c
+@@ -562,7 +562,7 @@ void __head sme_enable(struct boot_params *bp)
+ }
+
+ RIP_REL_REF(sme_me_mask) = me_mask;
+- physical_mask &= ~me_mask;
+- cc_vendor = CC_VENDOR_AMD;
++ RIP_REL_REF(physical_mask) &= ~me_mask;
++ RIP_REL_REF(cc_vendor) = CC_VENDOR_AMD;
+ cc_set_mask(me_mask);
+ }
+diff --git a/arch/x86/mm/pat/cpa-test.c b/arch/x86/mm/pat/cpa-test.c
+index 3d2f7f0a6ed142..ad3c1feec990db 100644
+--- a/arch/x86/mm/pat/cpa-test.c
++++ b/arch/x86/mm/pat/cpa-test.c
+@@ -183,7 +183,7 @@ static int pageattr_test(void)
+ break;
+
+ case 1:
+- err = change_page_attr_set(addrs, len[1], PAGE_CPA_TEST, 1);
++ err = change_page_attr_set(addrs, len[i], PAGE_CPA_TEST, 1);
+ break;
+
+ case 2:
+diff --git a/arch/x86/mm/pat/memtype.c b/arch/x86/mm/pat/memtype.c
+index feb8cc6a12bf23..d721cc19addbd6 100644
+--- a/arch/x86/mm/pat/memtype.c
++++ b/arch/x86/mm/pat/memtype.c
+@@ -984,29 +984,42 @@ static int get_pat_info(struct vm_area_struct *vma, resource_size_t *paddr,
+ return -EINVAL;
+ }
+
+-/*
+- * track_pfn_copy is called when vma that is covering the pfnmap gets
+- * copied through copy_page_range().
+- *
+- * If the vma has a linear pfn mapping for the entire range, we get the prot
+- * from pte and reserve the entire vma range with single reserve_pfn_range call.
+- */
+-int track_pfn_copy(struct vm_area_struct *vma)
++int track_pfn_copy(struct vm_area_struct *dst_vma,
++ struct vm_area_struct *src_vma, unsigned long *pfn)
+ {
++ const unsigned long vma_size = src_vma->vm_end - src_vma->vm_start;
+ resource_size_t paddr;
+- unsigned long vma_size = vma->vm_end - vma->vm_start;
+ pgprot_t pgprot;
++ int rc;
+
+- if (vma->vm_flags & VM_PAT) {
+- if (get_pat_info(vma, &paddr, &pgprot))
+- return -EINVAL;
+- /* reserve the whole chunk covered by vma. */
+- return reserve_pfn_range(paddr, vma_size, &pgprot, 1);
+- }
++ if (!(src_vma->vm_flags & VM_PAT))
++ return 0;
++
++ /*
++ * Duplicate the PAT information for the dst VMA based on the src
++ * VMA.
++ */
++ if (get_pat_info(src_vma, &paddr, &pgprot))
++ return -EINVAL;
++ rc = reserve_pfn_range(paddr, vma_size, &pgprot, 1);
++ if (rc)
++ return rc;
+
++ /* Reservation for the destination VMA succeeded. */
++ vm_flags_set(dst_vma, VM_PAT);
++ *pfn = PHYS_PFN(paddr);
+ return 0;
+ }
+
++void untrack_pfn_copy(struct vm_area_struct *dst_vma, unsigned long pfn)
++{
++ untrack_pfn(dst_vma, pfn, dst_vma->vm_end - dst_vma->vm_start, true);
++ /*
++ * The reservation was freed; any copied page tables will get cleaned
++ * up later, but without getting PAT involved again.
++ */
++}
++
+ /*
+ * prot is passed in as a parameter for the new mapping. If the vma has
+ * a linear pfn mapping for the entire range, or no vma is provided,
+@@ -1095,15 +1108,6 @@ void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
+ }
+ }
+
+-/*
+- * untrack_pfn_clear is called if the following situation fits:
+- *
+- * 1) while mremapping a pfnmap for a new region, with the old vma after
+- * its pfnmap page table has been removed. The new vma has a new pfnmap
+- * to the same pfn & cache type with VM_PAT set.
+- * 2) while duplicating vm area, the new vma fails to copy the pgtable from
+- * old vma.
+- */
+ void untrack_pfn_clear(struct vm_area_struct *vma)
+ {
+ vm_flags_clear(vma, VM_PAT);
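
track_pfn_copy() now takes both VMAs, sets VM_PAT on the destination only after the reservation succeeds, and hands back the PFN the caller must pass to untrack_pfn_copy() if copying the page tables fails afterwards. A stubbed-down model of that calling convention (everything below is a simplified stand-in, not the kernel implementation):

    #include <stdio.h>

    struct vm_area_struct { unsigned long vm_start, vm_end, vm_flags; };
    #define VM_PAT 0x1UL

    static int track_pfn_copy(struct vm_area_struct *dst,
                              struct vm_area_struct *src, unsigned long *pfn)
    {
        if (!(src->vm_flags & VM_PAT))
            return 0;                  /* nothing to duplicate */
        dst->vm_flags |= VM_PAT;       /* reservation succeeded */
        *pfn = 0x1234;                 /* placeholder PFN */
        return 0;
    }

    static void untrack_pfn_copy(struct vm_area_struct *dst, unsigned long pfn)
    {
        dst->vm_flags &= ~VM_PAT;      /* drop the dst reservation */
        (void)pfn;
    }

    int main(void)
    {
        struct vm_area_struct src = { 0, 4096, VM_PAT }, dst = { 0, 4096, 0 };
        unsigned long pfn = 0;

        if (track_pfn_copy(&dst, &src, &pfn))
            return 1;
        /* ...copy page tables; on failure, unwind: */
        untrack_pfn_copy(&dst, pfn);
        printf("dst VM_PAT=%lu after unwind\n", dst.vm_flags & VM_PAT);
        return 0;
    }
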
+diff --git a/crypto/api.c b/crypto/api.c
+index bfd177a4313a01..c2c4eb14ef955f 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -36,7 +36,8 @@ EXPORT_SYMBOL_GPL(crypto_chain);
+ DEFINE_STATIC_KEY_FALSE(__crypto_boot_test_finished);
+ #endif
+
+-static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg);
++static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg,
++ u32 type, u32 mask);
+ static struct crypto_alg *crypto_alg_lookup(const char *name, u32 type,
+ u32 mask);
+
+@@ -145,7 +146,7 @@ static struct crypto_alg *crypto_larval_add(const char *name, u32 type,
+ if (alg != &larval->alg) {
+ kfree(larval);
+ if (crypto_is_larval(alg))
+- alg = crypto_larval_wait(alg);
++ alg = crypto_larval_wait(alg, type, mask);
+ }
+
+ return alg;
+@@ -197,7 +198,8 @@ static void crypto_start_test(struct crypto_larval *larval)
+ crypto_schedule_test(larval);
+ }
+
+-static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg)
++static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg,
++ u32 type, u32 mask)
+ {
+ struct crypto_larval *larval;
+ long time_left;
+@@ -219,12 +221,7 @@ static struct crypto_alg *crypto_larval_wait(struct crypto_alg *alg)
+ crypto_larval_kill(larval);
+ alg = ERR_PTR(-ETIMEDOUT);
+ } else if (!alg) {
+- u32 type;
+- u32 mask;
+-
+ alg = &larval->alg;
+- type = alg->cra_flags & ~(CRYPTO_ALG_LARVAL | CRYPTO_ALG_DEAD);
+- mask = larval->mask;
+ alg = crypto_alg_lookup(alg->cra_name, type, mask) ?:
+ ERR_PTR(-EAGAIN);
+ } else if (IS_ERR(alg))
+@@ -304,7 +301,7 @@ static struct crypto_alg *crypto_larval_lookup(const char *name, u32 type,
+ }
+
+ if (!IS_ERR_OR_NULL(alg) && crypto_is_larval(alg))
+- alg = crypto_larval_wait(alg);
++ alg = crypto_larval_wait(alg, type, mask);
+ else if (alg)
+ ;
+ else if (!(mask & CRYPTO_ALG_TESTED))
+@@ -352,7 +349,7 @@ struct crypto_alg *crypto_alg_mod_lookup(const char *name, u32 type, u32 mask)
+ ok = crypto_probing_notify(CRYPTO_MSG_ALG_REQUEST, larval);
+
+ if (ok == NOTIFY_STOP)
+- alg = crypto_larval_wait(larval);
++ alg = crypto_larval_wait(larval, type, mask);
+ else {
+ crypto_mod_put(larval);
+ alg = ERR_PTR(-ENOENT);
+diff --git a/crypto/bpf_crypto_skcipher.c b/crypto/bpf_crypto_skcipher.c
+index b5e657415770a3..a88798d3e8c872 100644
+--- a/crypto/bpf_crypto_skcipher.c
++++ b/crypto/bpf_crypto_skcipher.c
+@@ -80,3 +80,4 @@ static void __exit bpf_crypto_skcipher_exit(void)
+ module_init(bpf_crypto_skcipher_init);
+ module_exit(bpf_crypto_skcipher_exit);
+ MODULE_LICENSE("GPL");
++MODULE_DESCRIPTION("Symmetric key cipher support for BPF");
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index a5d47819b3a4e2..ae035b93da0878 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -485,7 +485,7 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ cmd_mask = nd_desc->cmd_mask;
+ if (cmd == ND_CMD_CALL && call_pkg->nd_family) {
+ family = call_pkg->nd_family;
+- if (family > NVDIMM_BUS_FAMILY_MAX ||
++ if (call_pkg->nd_family > NVDIMM_BUS_FAMILY_MAX ||
+ !test_bit(family, &nd_desc->bus_family_mask))
+ return -EINVAL;
+ family = array_index_nospec(family,
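
The hunk above makes the bounds check read the unsanitized call_pkg->nd_family directly, so that 'family' only ever carries the array_index_nospec()-clamped value afterwards. The clamp itself is a branch-free mask; a userspace approximation of the idea, much simplified from the kernel's version:

    #include <stddef.h>
    #include <stdio.h>

    /* Clamp an index to [0, size) without a branch, so a mispredicted
     * bounds check cannot speculatively index out of range. */
    static size_t index_nospec(size_t index, size_t size)
    {
        /* All-ones mask when index < size, zero otherwise. */
        size_t mask = (size_t)0 - (index < size);
        return index & mask;
    }

    int main(void)
    {
        const char *families[4] = { "intel", "hpe1", "hpe2", "msft" };
        size_t idx = 7;                 /* out of range */

        idx = index_nospec(idx, 4);     /* clamped to 0 */
        printf("%s\n", families[idx]);
        return 0;
    }
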
+diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
+index 831fa4a1215985..0888e4d618d53a 100644
+--- a/drivers/acpi/processor_idle.c
++++ b/drivers/acpi/processor_idle.c
+@@ -268,6 +268,10 @@ static int acpi_processor_get_power_info_fadt(struct acpi_processor *pr)
+ ACPI_CX_DESC_LEN, "ACPI P_LVL3 IOPORT 0x%x",
+ pr->power.states[ACPI_STATE_C3].address);
+
++ if (!pr->power.states[ACPI_STATE_C2].address &&
++ !pr->power.states[ACPI_STATE_C3].address)
++ return -ENODEV;
++
+ return 0;
+ }
+
+diff --git a/drivers/acpi/resource.c b/drivers/acpi/resource.c
+index b4cd14e7fa76cc..14c7bac4100b46 100644
+--- a/drivers/acpi/resource.c
++++ b/drivers/acpi/resource.c
+@@ -440,6 +440,13 @@ static const struct dmi_system_id irq1_level_low_skip_override[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "S5602ZA"),
+ },
+ },
++ {
++ /* Asus Vivobook X1404VAP */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++ DMI_MATCH(DMI_BOARD_NAME, "X1404VAP"),
++ },
++ },
+ {
+ /* Asus Vivobook X1504VAP */
+ .matches = {
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index 068c1612660bc0..4ee30c2897a2b9 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -374,7 +374,8 @@ static const struct dmi_system_id acpi_quirk_skip_dmi_ids[] = {
+ DMI_MATCH(DMI_PRODUCT_VERSION, "Blade3-10A-001"),
+ },
+ .driver_data = (void *)(ACPI_QUIRK_SKIP_I2C_CLIENTS |
+- ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY),
++ ACPI_QUIRK_SKIP_ACPI_AC_AND_BATTERY |
++ ACPI_QUIRK_SKIP_GPIO_EVENT_HANDLERS),
+ },
+ {
+ /* Medion Lifetab S10346 */
+diff --git a/drivers/auxdisplay/Kconfig b/drivers/auxdisplay/Kconfig
+index 21545ffba0658f..2a9bb31633a71e 100644
+--- a/drivers/auxdisplay/Kconfig
++++ b/drivers/auxdisplay/Kconfig
+@@ -503,6 +503,7 @@ config HT16K33
+ config MAX6959
+ tristate "Maxim MAX6958/6959 7-segment LED controller"
+ depends on I2C
++ select BITREVERSE
+ select REGMAP_I2C
+ select LINEDISP
+ help
+diff --git a/drivers/auxdisplay/panel.c b/drivers/auxdisplay/panel.c
+index a731f28455b45f..6dc8798d01f98c 100644
+--- a/drivers/auxdisplay/panel.c
++++ b/drivers/auxdisplay/panel.c
+@@ -1664,7 +1664,7 @@ static void panel_attach(struct parport *port)
+ if (lcd.enabled)
+ charlcd_unregister(lcd.charlcd);
+ err_unreg_device:
+- kfree(lcd.charlcd);
++ charlcd_free(lcd.charlcd);
+ lcd.charlcd = NULL;
+ parport_unregister_device(pprt);
+ pprt = NULL;
+@@ -1692,7 +1692,7 @@ static void panel_detach(struct parport *port)
+ charlcd_unregister(lcd.charlcd);
+ lcd.initialized = false;
+ kfree(lcd.charlcd->drvdata);
+- kfree(lcd.charlcd);
++ charlcd_free(lcd.charlcd);
+ lcd.charlcd = NULL;
+ }
+
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 4a67e83300e164..1abe61f11525d9 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -913,6 +913,9 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
+ if (dev->power.syscore)
+ goto Complete;
+
++ if (!dev->power.is_suspended)
++ goto Complete;
++
+ if (dev->power.direct_complete) {
+ /* Match the pm_runtime_disable() in __device_suspend(). */
+ pm_runtime_enable(dev);
+@@ -931,9 +934,6 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
+ */
+ dev->power.is_prepared = false;
+
+- if (!dev->power.is_suspended)
+- goto Unlock;
+-
+ if (dev->pm_domain) {
+ info = "power domain ";
+ callback = pm_op(&dev->pm_domain->ops, state);
+@@ -973,7 +973,6 @@ static void device_resume(struct device *dev, pm_message_t state, bool async)
+ error = dpm_run_callback(callback, dev, state, info);
+ dev->power.is_suspended = false;
+
+- Unlock:
+ device_unlock(dev);
+ dpm_watchdog_clear(&wd);
+
+@@ -1254,14 +1253,13 @@ static int device_suspend_noirq(struct device *dev, pm_message_t state, bool asy
+ dev->power.is_noirq_suspended = true;
+
+ /*
+- * Skipping the resume of devices that were in use right before the
+- * system suspend (as indicated by their PM-runtime usage counters)
+- * would be suboptimal. Also resume them if doing that is not allowed
+- * to be skipped.
++ * Devices must be resumed unless they are explicitly allowed to be left
++ * in suspend, but even in that case skipping the resume of devices that
++ * were in use right before the system suspend (as indicated by their
++ * runtime PM usage counters and child counters) would be suboptimal.
+ */
+- if (atomic_read(&dev->power.usage_count) > 1 ||
+- !(dev_pm_test_driver_flags(dev, DPM_FLAG_MAY_SKIP_RESUME) &&
+- dev->power.may_skip_resume))
++ if (!(dev_pm_test_driver_flags(dev, DPM_FLAG_MAY_SKIP_RESUME) &&
++ dev->power.may_skip_resume) || !pm_runtime_need_not_resume(dev))
+ dev->power.must_resume = true;
+
+ if (dev->power.must_resume)
+@@ -1628,6 +1626,7 @@ static int device_suspend(struct device *dev, pm_message_t state, bool async)
+ pm_runtime_disable(dev);
+ if (pm_runtime_status_suspended(dev)) {
+ pm_dev_dbg(dev, state, "direct-complete ");
++ dev->power.is_suspended = true;
+ goto Complete;
+ }
+
+diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
+index 2ee45841486bc7..04113adb092b52 100644
+--- a/drivers/base/power/runtime.c
++++ b/drivers/base/power/runtime.c
+@@ -1874,7 +1874,7 @@ void pm_runtime_drop_link(struct device_link *link)
+ pm_request_idle(link->supplier);
+ }
+
+-static bool pm_runtime_need_not_resume(struct device *dev)
++bool pm_runtime_need_not_resume(struct device *dev)
+ {
+ return atomic_read(&dev->power.usage_count) <= 1 &&
+ (atomic_read(&dev->power.child_count) == 0 ||
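
With pm_runtime_need_not_resume() now exported from runtime.c, device_suspend_noirq() can fold the usage-count test into one predicate: a device is left suspended only when the driver opted in with DPM_FLAG_MAY_SKIP_RESUME, the subsystem agreed via may_skip_resume, and runtime PM confirms nothing needs it. The decision reduces to the boolean below (a sketch, not the kernel code):

    #include <stdbool.h>
    #include <stdio.h>

    static bool must_resume(bool may_skip_flag, bool may_skip_resume,
                            bool need_not_resume)
    {
        return !(may_skip_flag && may_skip_resume) || !need_not_resume;
    }

    int main(void)
    {
        printf("opted in, idle:   must_resume=%d\n", must_resume(true, true, true));
        printf("opted in, in use: must_resume=%d\n", must_resume(true, true, false));
        printf("no opt-in:        must_resume=%d\n", must_resume(false, true, true));
        return 0;
    }
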
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index c7d728d686e5a5..79b7bd8bfd4584 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -1416,17 +1416,27 @@ static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
+ }
+ }
+
++/* Must be called when queue is frozen */
++static bool ublk_mark_queue_canceling(struct ublk_queue *ubq)
++{
++ bool canceled;
++
++ spin_lock(&ubq->cancel_lock);
++ canceled = ubq->canceling;
++ if (!canceled)
++ ubq->canceling = true;
++ spin_unlock(&ubq->cancel_lock);
++
++ return canceled;
++}
++
+ static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq)
+ {
++ bool was_canceled = ubq->canceling;
+ struct gendisk *disk;
+
+- spin_lock(&ubq->cancel_lock);
+- if (ubq->canceling) {
+- spin_unlock(&ubq->cancel_lock);
++ if (was_canceled)
+ return false;
+- }
+- ubq->canceling = true;
+- spin_unlock(&ubq->cancel_lock);
+
+ spin_lock(&ub->lock);
+ disk = ub->ub_disk;
+@@ -1438,14 +1448,23 @@ static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq)
+ if (!disk)
+ return false;
+
+- /* Now we are serialized with ublk_queue_rq() */
++ /*
++ * Now we are serialized with ublk_queue_rq()
++ *
++ * Make sure that ubq->canceling is set while the queue is frozen,
++ * because ublk_queue_rq() has to rely on this flag to avoid
++ * touching a completed uring_cmd.
++ */
+ blk_mq_quiesce_queue(disk->queue);
+- /* abort queue is for making forward progress */
+- ublk_abort_queue(ub, ubq);
++ was_canceled = ublk_mark_queue_canceling(ubq);
++ if (!was_canceled) {
++ /* abort queue is for making forward progress */
++ ublk_abort_queue(ub, ubq);
++ }
+ blk_mq_unquiesce_queue(disk->queue);
+ put_device(disk_to_dev(disk));
+
+- return true;
++ return !was_canceled;
+ }
+
+ static void ublk_cancel_cmd(struct ublk_queue *ubq, struct ublk_io *io,
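
The split above lets ublk_abort_requests() set ubq->canceling only while the queue is quiesced, with ublk_mark_queue_canceling() reporting whether another path got there first. The underlying test-and-set-under-lock shape, modelled here with a pthread mutex standing in for the cancel_lock spinlock:

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t cancel_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool canceling;

    /* Return the previous value so the caller knows whether it won the race. */
    static bool mark_canceling(void)
    {
        bool was;

        pthread_mutex_lock(&cancel_lock);
        was = canceling;
        if (!was)
            canceling = true;
        pthread_mutex_unlock(&cancel_lock);
        return was;
    }

    int main(void)
    {
        printf("first caller saw canceling=%d\n", mark_canceling());
        printf("second caller saw canceling=%d\n", mark_canceling());
        return 0;
    }

Build with -lpthread; the second caller observes the flag the first one set and backs off.
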
+diff --git a/drivers/clk/imx/clk-imx8mp-audiomix.c b/drivers/clk/imx/clk-imx8mp-audiomix.c
+index c409fc7e061869..775f62dddb11d8 100644
+--- a/drivers/clk/imx/clk-imx8mp-audiomix.c
++++ b/drivers/clk/imx/clk-imx8mp-audiomix.c
+@@ -180,14 +180,14 @@ static struct clk_imx8mp_audiomix_sel sels[] = {
+ CLK_GATE("asrc", ASRC_IPG),
+ CLK_GATE("pdm", PDM_IPG),
+ CLK_GATE("earc", EARC_IPG),
+- CLK_GATE("ocrama", OCRAMA_IPG),
++ CLK_GATE_PARENT("ocrama", OCRAMA_IPG, "axi"),
+ CLK_GATE("aud2htx", AUD2HTX_IPG),
+ CLK_GATE_PARENT("earc_phy", EARC_PHY, "sai_pll_out_div2"),
+ CLK_GATE("sdma2", SDMA2_ROOT),
+ CLK_GATE("sdma3", SDMA3_ROOT),
+ CLK_GATE("spba2", SPBA2_ROOT),
+- CLK_GATE("dsp", DSP_ROOT),
+- CLK_GATE("dspdbg", DSPDBG_ROOT),
++ CLK_GATE_PARENT("dsp", DSP_ROOT, "axi"),
++ CLK_GATE_PARENT("dspdbg", DSPDBG_ROOT, "axi"),
+ CLK_GATE("edma", EDMA_ROOT),
+ CLK_GATE_PARENT("audpll", AUDPLL_ROOT, "osc_24m"),
+ CLK_GATE("mu2", MU2_ROOT),
+diff --git a/drivers/clk/meson/g12a.c b/drivers/clk/meson/g12a.c
+index 02dda57105b10e..4f92b83965d5a9 100644
+--- a/drivers/clk/meson/g12a.c
++++ b/drivers/clk/meson/g12a.c
+@@ -1139,8 +1139,18 @@ static struct clk_regmap g12a_cpu_clk_div16_en = {
+ .hw.init = &(struct clk_init_data) {
+ .name = "cpu_clk_div16_en",
+ .ops = &clk_regmap_gate_ro_ops,
+- .parent_hws = (const struct clk_hw *[]) {
+- &g12a_cpu_clk.hw
++ .parent_data = &(const struct clk_parent_data) {
++ /*
++ * Note:
++ * G12A and G12B have different cpu clocks (with
++ * different struct clk_hw). We fall back to the global
++ * naming string mechanism so this clock picks
++ * up the appropriate one. The same goes for the other
++ * clocks using the cpu cluster A clock output and
++ * present on both G12 variants.
++ */
++ .name = "cpu_clk",
++ .index = -1,
+ },
+ .num_parents = 1,
+ /*
+@@ -1205,7 +1215,10 @@ static struct clk_regmap g12a_cpu_clk_apb_div = {
+ .hw.init = &(struct clk_init_data){
+ .name = "cpu_clk_apb_div",
+ .ops = &clk_regmap_divider_ro_ops,
+- .parent_hws = (const struct clk_hw *[]) { &g12a_cpu_clk.hw },
++ .parent_data = &(const struct clk_parent_data) {
++ .name = "cpu_clk",
++ .index = -1,
++ },
+ .num_parents = 1,
+ },
+ };
+@@ -1239,7 +1252,10 @@ static struct clk_regmap g12a_cpu_clk_atb_div = {
+ .hw.init = &(struct clk_init_data){
+ .name = "cpu_clk_atb_div",
+ .ops = &clk_regmap_divider_ro_ops,
+- .parent_hws = (const struct clk_hw *[]) { &g12a_cpu_clk.hw },
++ .parent_data = &(const struct clk_parent_data) {
++ .name = "cpu_clk",
++ .index = -1,
++ },
+ .num_parents = 1,
+ },
+ };
+@@ -1273,7 +1289,10 @@ static struct clk_regmap g12a_cpu_clk_axi_div = {
+ .hw.init = &(struct clk_init_data){
+ .name = "cpu_clk_axi_div",
+ .ops = &clk_regmap_divider_ro_ops,
+- .parent_hws = (const struct clk_hw *[]) { &g12a_cpu_clk.hw },
++ .parent_data = &(const struct clk_parent_data) {
++ .name = "cpu_clk",
++ .index = -1,
++ },
+ .num_parents = 1,
+ },
+ };
+@@ -1308,13 +1327,6 @@ static struct clk_regmap g12a_cpu_clk_trace_div = {
+ .name = "cpu_clk_trace_div",
+ .ops = &clk_regmap_divider_ro_ops,
+ .parent_data = &(const struct clk_parent_data) {
+- /*
+- * Note:
+- * G12A and G12B have different cpu_clks (with
+- * different struct clk_hw). We fallback to the global
+- * naming string mechanism so cpu_clk_trace_div picks
+- * up the appropriate one.
+- */
+ .name = "cpu_clk",
+ .index = -1,
+ },
+@@ -4317,7 +4329,7 @@ static MESON_GATE(g12a_spicc_1, HHI_GCLK_MPEG0, 14);
+ static MESON_GATE(g12a_hiu_reg, HHI_GCLK_MPEG0, 19);
+ static MESON_GATE(g12a_mipi_dsi_phy, HHI_GCLK_MPEG0, 20);
+ static MESON_GATE(g12a_assist_misc, HHI_GCLK_MPEG0, 23);
+-static MESON_GATE(g12a_emmc_a, HHI_GCLK_MPEG0, 4);
++static MESON_GATE(g12a_emmc_a, HHI_GCLK_MPEG0, 24);
+ static MESON_GATE(g12a_emmc_b, HHI_GCLK_MPEG0, 25);
+ static MESON_GATE(g12a_emmc_c, HHI_GCLK_MPEG0, 26);
+ static MESON_GATE(g12a_audio_codec, HHI_GCLK_MPEG0, 28);
+diff --git a/drivers/clk/meson/gxbb.c b/drivers/clk/meson/gxbb.c
+index f071faad1ebb70..d9529de200ae44 100644
+--- a/drivers/clk/meson/gxbb.c
++++ b/drivers/clk/meson/gxbb.c
+@@ -1272,14 +1272,13 @@ static struct clk_regmap gxbb_cts_i958 = {
+ },
+ };
+
++/*
++ * This table skips a clock named 'cts_slow_oscin' in the documentation.
++ * That clock does not exist yet in this controller or the AO one.
++ */
++static u32 gxbb_32k_clk_parents_val_table[] = { 0, 2, 3 };
+ static const struct clk_parent_data gxbb_32k_clk_parent_data[] = {
+ { .fw_name = "xtal", },
+- /*
+- * FIXME: This clock is provided by the ao clock controller but the
+- * clock is not yet part of the binding of this controller, so string
+- * name must be use to set this parent.
+- */
+- { .name = "cts_slow_oscin", .index = -1 },
+ { .hw = &gxbb_fclk_div3.hw },
+ { .hw = &gxbb_fclk_div5.hw },
+ };
+@@ -1289,6 +1288,7 @@ static struct clk_regmap gxbb_32k_clk_sel = {
+ .offset = HHI_32K_CLK_CNTL,
+ .mask = 0x3,
+ .shift = 16,
++ .table = gxbb_32k_clk_parents_val_table,
+ },
+ .hw.init = &(struct clk_init_data){
+ .name = "32k_clk_sel",
+@@ -1312,7 +1312,7 @@ static struct clk_regmap gxbb_32k_clk_div = {
+ &gxbb_32k_clk_sel.hw
+ },
+ .num_parents = 1,
+- .flags = CLK_SET_RATE_PARENT | CLK_DIVIDER_ROUND_CLOSEST,
++ .flags = CLK_SET_RATE_PARENT,
+ },
+ };
+
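
Dropping 'cts_slow_oscin' from the parent list leaves a hole in the register encoding, which the new gxbb_32k_clk_parents_val_table papers over by mapping each remaining parent index to its hardware mux value. The lookup amounts to this (illustrative sketch):

    #include <stdio.h>

    /* Parent index -> HHI_32K_CLK_CNTL field value; value 1 (the skipped
     * 'cts_slow_oscin') is never selected. */
    static const unsigned int table[] = { 0, 2, 3 };  /* xtal, fdiv3, fdiv5 */

    int main(void)
    {
        for (unsigned int parent = 0; parent < 3; parent++)
            printf("parent index %u -> mux field value %u\n",
                   parent, table[parent]);
        return 0;
    }
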
+diff --git a/drivers/clk/qcom/gcc-msm8953.c b/drivers/clk/qcom/gcc-msm8953.c
+index 855a61966f3ef5..8f29ecc74c50bf 100644
+--- a/drivers/clk/qcom/gcc-msm8953.c
++++ b/drivers/clk/qcom/gcc-msm8953.c
+@@ -3770,7 +3770,7 @@ static struct clk_branch gcc_venus0_axi_clk = {
+
+ static struct clk_branch gcc_venus0_core0_vcodec0_clk = {
+ .halt_reg = 0x4c02c,
+- .halt_check = BRANCH_HALT,
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x4c02c,
+ .enable_mask = BIT(0),
+diff --git a/drivers/clk/qcom/gcc-sm8650.c b/drivers/clk/qcom/gcc-sm8650.c
+index 9dd5c48f33bed5..fa1672c4e7d814 100644
+--- a/drivers/clk/qcom/gcc-sm8650.c
++++ b/drivers/clk/qcom/gcc-sm8650.c
+@@ -3497,7 +3497,7 @@ static struct gdsc usb30_prim_gdsc = {
+ .pd = {
+ .name = "usb30_prim_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+@@ -3506,7 +3506,7 @@ static struct gdsc usb3_phy_gdsc = {
+ .pd = {
+ .name = "usb3_phy_gdsc",
+ },
+- .pwrsts = PWRSTS_OFF_ON,
++ .pwrsts = PWRSTS_RET_ON,
+ .flags = POLL_CFG_GDSCR | RETAIN_FF_ENABLE,
+ };
+
+diff --git a/drivers/clk/qcom/gcc-x1e80100.c b/drivers/clk/qcom/gcc-x1e80100.c
+index 7288af845434d8..009f39139b6440 100644
+--- a/drivers/clk/qcom/gcc-x1e80100.c
++++ b/drivers/clk/qcom/gcc-x1e80100.c
+@@ -2564,19 +2564,6 @@ static struct clk_branch gcc_disp_hf_axi_clk = {
+ },
+ };
+
+-static struct clk_branch gcc_disp_xo_clk = {
+- .halt_reg = 0x27018,
+- .halt_check = BRANCH_HALT,
+- .clkr = {
+- .enable_reg = 0x27018,
+- .enable_mask = BIT(0),
+- .hw.init = &(const struct clk_init_data) {
+- .name = "gcc_disp_xo_clk",
+- .ops = &clk_branch2_ops,
+- },
+- },
+-};
+-
+ static struct clk_branch gcc_gp1_clk = {
+ .halt_reg = 0x64000,
+ .halt_check = BRANCH_HALT,
+@@ -2631,21 +2618,6 @@ static struct clk_branch gcc_gp3_clk = {
+ },
+ };
+
+-static struct clk_branch gcc_gpu_cfg_ahb_clk = {
+- .halt_reg = 0x71004,
+- .halt_check = BRANCH_HALT_VOTED,
+- .hwcg_reg = 0x71004,
+- .hwcg_bit = 1,
+- .clkr = {
+- .enable_reg = 0x71004,
+- .enable_mask = BIT(0),
+- .hw.init = &(const struct clk_init_data) {
+- .name = "gcc_gpu_cfg_ahb_clk",
+- .ops = &clk_branch2_ops,
+- },
+- },
+-};
+-
+ static struct clk_branch gcc_gpu_gpll0_cph_clk_src = {
+ .halt_check = BRANCH_HALT_DELAY,
+ .clkr = {
+@@ -6268,7 +6240,6 @@ static struct clk_regmap *gcc_x1e80100_clocks[] = {
+ [GCC_CNOC_PCIE_TUNNEL_CLK] = &gcc_cnoc_pcie_tunnel_clk.clkr,
+ [GCC_DDRSS_GPU_AXI_CLK] = &gcc_ddrss_gpu_axi_clk.clkr,
+ [GCC_DISP_HF_AXI_CLK] = &gcc_disp_hf_axi_clk.clkr,
+- [GCC_DISP_XO_CLK] = &gcc_disp_xo_clk.clkr,
+ [GCC_GP1_CLK] = &gcc_gp1_clk.clkr,
+ [GCC_GP1_CLK_SRC] = &gcc_gp1_clk_src.clkr,
+ [GCC_GP2_CLK] = &gcc_gp2_clk.clkr,
+@@ -6281,7 +6252,6 @@ static struct clk_regmap *gcc_x1e80100_clocks[] = {
+ [GCC_GPLL7] = &gcc_gpll7.clkr,
+ [GCC_GPLL8] = &gcc_gpll8.clkr,
+ [GCC_GPLL9] = &gcc_gpll9.clkr,
+- [GCC_GPU_CFG_AHB_CLK] = &gcc_gpu_cfg_ahb_clk.clkr,
+ [GCC_GPU_GPLL0_CPH_CLK_SRC] = &gcc_gpu_gpll0_cph_clk_src.clkr,
+ [GCC_GPU_GPLL0_DIV_CPH_CLK_SRC] = &gcc_gpu_gpll0_div_cph_clk_src.clkr,
+ [GCC_GPU_MEMNOC_GFX_CLK] = &gcc_gpu_memnoc_gfx_clk.clkr,
+diff --git a/drivers/clk/qcom/mmcc-sdm660.c b/drivers/clk/qcom/mmcc-sdm660.c
+index 98ba5b4518fb3b..b9f02d91004e8b 100644
+--- a/drivers/clk/qcom/mmcc-sdm660.c
++++ b/drivers/clk/qcom/mmcc-sdm660.c
+@@ -2544,7 +2544,7 @@ static struct clk_branch video_core_clk = {
+
+ static struct clk_branch video_subcore0_clk = {
+ .halt_reg = 0x1048,
+- .halt_check = BRANCH_HALT,
++ .halt_check = BRANCH_HALT_SKIP,
+ .clkr = {
+ .enable_reg = 0x1048,
+ .enable_mask = BIT(0),
+diff --git a/drivers/clk/renesas/r9a08g045-cpg.c b/drivers/clk/renesas/r9a08g045-cpg.c
+index 1ce40fb51f13bd..a1f961d5b85691 100644
+--- a/drivers/clk/renesas/r9a08g045-cpg.c
++++ b/drivers/clk/renesas/r9a08g045-cpg.c
+@@ -50,7 +50,7 @@
+ #define G3S_SEL_SDHI2 SEL_PLL_PACK(G3S_CPG_SDHI_DSEL, 8, 2)
+
+ /* PLL 1/4/6 configuration registers macro. */
+-#define G3S_PLL146_CONF(clk1, clk2) ((clk1) << 22 | (clk2) << 12)
++#define G3S_PLL146_CONF(clk1, clk2, setting) ((clk1) << 22 | (clk2) << 12 | (setting))
+
+ #define DEF_G3S_MUX(_name, _id, _conf, _parent_names, _mux_flags, _clk_flags) \
+ DEF_TYPE(_name, _id, CLK_TYPE_MUX, .conf = (_conf), \
+@@ -133,7 +133,8 @@ static const struct cpg_core_clk r9a08g045_core_clks[] __initconst = {
+
+ /* Internal Core Clocks */
+ DEF_FIXED(".osc_div1000", CLK_OSC_DIV1000, CLK_EXTAL, 1, 1000),
+- DEF_G3S_PLL(".pll1", CLK_PLL1, CLK_EXTAL, G3S_PLL146_CONF(0x4, 0x8)),
++ DEF_G3S_PLL(".pll1", CLK_PLL1, CLK_EXTAL, G3S_PLL146_CONF(0x4, 0x8, 0x100),
++ 1100000000UL),
+ DEF_FIXED(".pll2", CLK_PLL2, CLK_EXTAL, 200, 3),
+ DEF_FIXED(".pll3", CLK_PLL3, CLK_EXTAL, 200, 3),
+ DEF_FIXED(".pll4", CLK_PLL4, CLK_EXTAL, 100, 3),
+diff --git a/drivers/clk/renesas/rzg2l-cpg.c b/drivers/clk/renesas/rzg2l-cpg.c
+index b43b763dfe186a..229f4540b219e3 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.c
++++ b/drivers/clk/renesas/rzg2l-cpg.c
+@@ -51,6 +51,7 @@
+ #define RZG3S_DIV_M GENMASK(25, 22)
+ #define RZG3S_DIV_NI GENMASK(21, 13)
+ #define RZG3S_DIV_NF GENMASK(12, 1)
++#define RZG3S_SEL_PLL BIT(0)
+
+ #define CLK_ON_R(reg) (reg)
+ #define CLK_MON_R(reg) (0x180 + (reg))
+@@ -60,6 +61,7 @@
+ #define GET_REG_OFFSET(val) ((val >> 20) & 0xfff)
+ #define GET_REG_SAMPLL_CLK1(val) ((val >> 22) & 0xfff)
+ #define GET_REG_SAMPLL_CLK2(val) ((val >> 12) & 0xfff)
++#define GET_REG_SAMPLL_SETTING(val) ((val) & 0xfff)
+
+ #define CPG_WEN_BIT BIT(16)
+
+@@ -943,6 +945,7 @@ rzg2l_cpg_sipll5_register(const struct cpg_core_clk *core,
+
+ struct pll_clk {
+ struct clk_hw hw;
++ unsigned long default_rate;
+ unsigned int conf;
+ unsigned int type;
+ void __iomem *base;
+@@ -980,12 +983,19 @@ static unsigned long rzg3s_cpg_pll_clk_recalc_rate(struct clk_hw *hw,
+ {
+ struct pll_clk *pll_clk = to_pll(hw);
+ struct rzg2l_cpg_priv *priv = pll_clk->priv;
+- u32 nir, nfr, mr, pr, val;
++ u32 nir, nfr, mr, pr, val, setting;
+ u64 rate;
+
+ if (pll_clk->type != CLK_TYPE_G3S_PLL)
+ return parent_rate;
+
++ setting = GET_REG_SAMPLL_SETTING(pll_clk->conf);
++ if (setting) {
++ val = readl(priv->base + setting);
++ if (val & RZG3S_SEL_PLL)
++ return pll_clk->default_rate;
++ }
++
+ val = readl(priv->base + GET_REG_SAMPLL_CLK1(pll_clk->conf));
+
+ pr = 1 << FIELD_GET(RZG3S_DIV_P, val);
+@@ -1038,6 +1048,7 @@ rzg2l_cpg_pll_clk_register(const struct cpg_core_clk *core,
+ pll_clk->base = priv->base;
+ pll_clk->priv = priv;
+ pll_clk->type = core->type;
++ pll_clk->default_rate = core->default_rate;
+
+ ret = devm_clk_hw_register(dev, &pll_clk->hw);
+ if (ret)
+diff --git a/drivers/clk/renesas/rzg2l-cpg.h b/drivers/clk/renesas/rzg2l-cpg.h
+index ecfe7e7ea8a177..019efe00ffd9f2 100644
+--- a/drivers/clk/renesas/rzg2l-cpg.h
++++ b/drivers/clk/renesas/rzg2l-cpg.h
+@@ -102,7 +102,10 @@ struct cpg_core_clk {
+ const struct clk_div_table *dtable;
+ const u32 *mtable;
+ const unsigned long invalid_rate;
+- const unsigned long max_rate;
++ union {
++ const unsigned long max_rate;
++ const unsigned long default_rate;
++ };
+ const char * const *parent_names;
+ notifier_fn_t notifier;
+ u32 flag;
+@@ -144,8 +147,9 @@ enum clk_types {
+ DEF_TYPE(_name, _id, _type, .parent = _parent)
+ #define DEF_SAMPLL(_name, _id, _parent, _conf) \
+ DEF_TYPE(_name, _id, CLK_TYPE_SAM_PLL, .parent = _parent, .conf = _conf)
+-#define DEF_G3S_PLL(_name, _id, _parent, _conf) \
+- DEF_TYPE(_name, _id, CLK_TYPE_G3S_PLL, .parent = _parent, .conf = _conf)
++#define DEF_G3S_PLL(_name, _id, _parent, _conf, _default_rate) \
++ DEF_TYPE(_name, _id, CLK_TYPE_G3S_PLL, .parent = _parent, .conf = _conf, \
++ .default_rate = _default_rate)
+ #define DEF_INPUT(_name, _id) \
+ DEF_TYPE(_name, _id, CLK_TYPE_IN)
+ #define DEF_FIXED(_name, _id, _parent, _mult, _div) \
+diff --git a/drivers/clk/rockchip/clk-rk3328.c b/drivers/clk/rockchip/clk-rk3328.c
+index 3bb87b27b662da..cf60fcf2fa5cde 100644
+--- a/drivers/clk/rockchip/clk-rk3328.c
++++ b/drivers/clk/rockchip/clk-rk3328.c
+@@ -201,7 +201,7 @@ PNAME(mux_aclk_peri_pre_p) = { "cpll_peri",
+ "gpll_peri",
+ "hdmiphy_peri" };
+ PNAME(mux_ref_usb3otg_src_p) = { "xin24m",
+- "clk_usb3otg_ref" };
++ "clk_ref_usb3otg_src" };
+ PNAME(mux_xin24m_32k_p) = { "xin24m",
+ "clk_rtc32k" };
+ PNAME(mux_mac2io_src_p) = { "clk_mac2io_src",
+diff --git a/drivers/clk/samsung/clk.c b/drivers/clk/samsung/clk.c
+index afa5760ed3a11b..e6533513f8ac29 100644
+--- a/drivers/clk/samsung/clk.c
++++ b/drivers/clk/samsung/clk.c
+@@ -74,12 +74,12 @@ struct samsung_clk_provider * __init samsung_clk_init(struct device *dev,
+ if (!ctx)
+ panic("could not allocate clock provider context.\n");
+
++ ctx->clk_data.num = nr_clks;
+ for (i = 0; i < nr_clks; ++i)
+ ctx->clk_data.hws[i] = ERR_PTR(-ENOENT);
+
+ ctx->dev = dev;
+ ctx->reg_base = base;
+- ctx->clk_data.num = nr_clks;
+ spin_lock_init(&ctx->lock);
+
+ return ctx;
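
Setting clk_data.num before the ERR_PTR fill loop matters because hws[] is a flexible array whose bound is, in recent kernels, tied to num via __counted_by(), so storing entries while num is still zero trips the bounds instrumentation. A minimal model of the safe ordering:

    #include <stdio.h>
    #include <stdlib.h>

    struct clk_hw;

    struct clk_data {
        unsigned int num;
        struct clk_hw *hws[] /* __counted_by(num) in the kernel */;
    };

    int main(void)
    {
        unsigned int nr = 8, i;
        struct clk_data *d = calloc(1, sizeof(*d) + nr * sizeof(d->hws[0]));

        if (!d)
            return 1;
        d->num = nr;                 /* set the bound first... */
        for (i = 0; i < nr; i++)
            d->hws[i] = (void *)-2;  /* ...then fill, as ERR_PTR(-ENOENT) */
        printf("filled %u slots\n", d->num);
        free(d);
        return 0;
    }
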
+diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
+index 5f7e13e60c8023..e67b2326671c9c 100644
+--- a/drivers/cpufreq/Kconfig.arm
++++ b/drivers/cpufreq/Kconfig.arm
+@@ -245,7 +245,7 @@ config ARM_TEGRA186_CPUFREQ
+
+ config ARM_TEGRA194_CPUFREQ
+ tristate "Tegra194 CPUFreq support"
+- depends on ARCH_TEGRA_194_SOC || (64BIT && COMPILE_TEST)
++ depends on ARCH_TEGRA_194_SOC || ARCH_TEGRA_234_SOC || (64BIT && COMPILE_TEST)
+ depends on TEGRA_BPMP
+ default y
+ help
+diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
+index af44ee6a64304f..1a7fcaf39cc9b5 100644
+--- a/drivers/cpufreq/cpufreq_governor.c
++++ b/drivers/cpufreq/cpufreq_governor.c
+@@ -145,7 +145,23 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
+ time_elapsed = update_time - j_cdbs->prev_update_time;
+ j_cdbs->prev_update_time = update_time;
+
+- idle_time = cur_idle_time - j_cdbs->prev_cpu_idle;
++ /*
++ * cur_idle_time could be smaller than j_cdbs->prev_cpu_idle if
++ * it's obtained from get_cpu_idle_time_jiffy() when NOHZ is
++ * off, where idle_time is calculated as the difference between
++ * the time elapsed in jiffies and the "busy time" obtained from CPU
++ * statistics. If a CPU is 100% busy, the time elapsed and busy
++ * time should grow by the same amount in two consecutive
++ * samples, but in practice there could be a tiny difference,
++ * making the accumulated idle time decrease sometimes. Hence,
++ * in this case, idle_time should be regarded as 0 in order to
++ * keep the subsequent load computation correct.
++ */
++ if (cur_idle_time > j_cdbs->prev_cpu_idle)
++ idle_time = cur_idle_time - j_cdbs->prev_cpu_idle;
++ else
++ idle_time = 0;
++
+ j_cdbs->prev_cpu_idle = cur_idle_time;
+
+ if (ignore_nice) {
+@@ -162,7 +178,7 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
+ * calls, so the previous load value can be used then.
+ */
+ load = j_cdbs->prev_load;
+- } else if (unlikely((int)idle_time > 2 * sampling_rate &&
++ } else if (unlikely(idle_time > 2 * sampling_rate &&
+ j_cdbs->prev_load)) {
+ /*
+ * If the CPU had gone completely idle and a task has
+@@ -189,30 +205,15 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
+ load = j_cdbs->prev_load;
+ j_cdbs->prev_load = 0;
+ } else {
+- if (time_elapsed >= idle_time) {
++ if (time_elapsed > idle_time)
+ load = 100 * (time_elapsed - idle_time) / time_elapsed;
+- } else {
+- /*
+- * That can happen if idle_time is returned by
+- * get_cpu_idle_time_jiffy(). In that case
+- * idle_time is roughly equal to the difference
+- * between time_elapsed and "busy time" obtained
+- * from CPU statistics. Then, the "busy time"
+- * can end up being greater than time_elapsed
+- * (for example, if jiffies_64 and the CPU
+- * statistics are updated by different CPUs),
+- * so idle_time may in fact be negative. That
+- * means, though, that the CPU was busy all
+- * the time (on the rough average) during the
+- * last sampling interval and 100 can be
+- * returned as the load.
+- */
+- load = (int)idle_time < 0 ? 100 : 0;
+- }
++ else
++ load = 0;
++
+ j_cdbs->prev_load = load;
+ }
+
+- if (unlikely((int)idle_time > 2 * sampling_rate)) {
++ if (unlikely(idle_time > 2 * sampling_rate)) {
+ unsigned int periods = idle_time / sampling_rate;
+
+ if (periods < idle_periods)
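
With idle_time kept unsigned and clamped at zero up front, the load computation no longer needs the signed-cast special cases the hunk removes. The resulting formula, lifted into a standalone sketch:

    #include <stdio.h>

    static unsigned int load(unsigned int time_elapsed, unsigned int cur_idle,
                             unsigned int prev_idle)
    {
        unsigned int idle_time = cur_idle > prev_idle ? cur_idle - prev_idle : 0;

        /* 100% busy: jiffy-based busy time may overshoot, idle_time is 0. */
        if (time_elapsed > idle_time)
            return 100 * (time_elapsed - idle_time) / time_elapsed;
        return 0;
    }

    int main(void)
    {
        printf("half idle: %u%%\n", load(100, 150, 100));
        printf("fully busy (stats overshoot): %u%%\n", load(100, 90, 100));
        return 0;
    }

A sample where the jiffy-based busy time overshoots (cur_idle < prev_idle) now cleanly reports 100% load instead of relying on a negative value cast to int.
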
+diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
+index 8d73e6e8be2a58..f2d913a91be9e0 100644
+--- a/drivers/cpufreq/scpi-cpufreq.c
++++ b/drivers/cpufreq/scpi-cpufreq.c
+@@ -39,8 +39,9 @@ static unsigned int scpi_cpufreq_get_rate(unsigned int cpu)
+ static int
+ scpi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
+ {
+- u64 rate = policy->freq_table[index].frequency * 1000;
++ unsigned long freq_khz = policy->freq_table[index].frequency;
+ struct scpi_data *priv = policy->driver_data;
++ unsigned long rate = freq_khz * 1000;
+ int ret;
+
+ ret = clk_set_rate(priv->clk, rate);
+@@ -48,7 +49,7 @@ scpi_cpufreq_set_target(struct cpufreq_policy *policy, unsigned int index)
+ if (ret)
+ return ret;
+
+- if (clk_get_rate(priv->clk) != rate)
++ if (clk_get_rate(priv->clk) / 1000 != freq_khz)
+ return -EIO;
+
+ return 0;
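
Comparing in kHz makes the readback check tolerant of clock drivers that grant a rate a few hundred Hz off the requested one, which the old exact-Hz comparison treated as failure. A quick illustration with made-up numbers:

    #include <stdio.h>

    int main(void)
    {
        unsigned long freq_khz = 1896000;                /* from the freq table */
        unsigned long granted  = freq_khz * 1000 + 500;  /* driver rounded up */

        printf("exact Hz check: %s\n",
               granted != freq_khz * 1000 ? "-EIO" : "ok");
        printf("kHz check:      %s\n",
               granted / 1000 != freq_khz ? "-EIO" : "ok");
        return 0;
    }
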
+diff --git a/drivers/crypto/hisilicon/sec2/sec.h b/drivers/crypto/hisilicon/sec2/sec.h
+index 30c2b1a64695c0..2fc04e210bc4f6 100644
+--- a/drivers/crypto/hisilicon/sec2/sec.h
++++ b/drivers/crypto/hisilicon/sec2/sec.h
+@@ -37,7 +37,6 @@ struct sec_aead_req {
+ u8 *a_ivin;
+ dma_addr_t a_ivin_dma;
+ struct aead_request *aead_req;
+- bool fallback;
+ };
+
+ /* SEC request of Crypto */
+diff --git a/drivers/crypto/hisilicon/sec2/sec_crypto.c b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+index a9b1b9b0b03bf7..8605cb3cae92cd 100644
+--- a/drivers/crypto/hisilicon/sec2/sec_crypto.c
++++ b/drivers/crypto/hisilicon/sec2/sec_crypto.c
+@@ -57,7 +57,6 @@
+ #define SEC_TYPE_MASK 0x0F
+ #define SEC_DONE_MASK 0x0001
+ #define SEC_ICV_MASK 0x000E
+-#define SEC_SQE_LEN_RATE_MASK 0x3
+
+ #define SEC_TOTAL_IV_SZ(depth) (SEC_IV_SIZE * (depth))
+ #define SEC_SGL_SGE_NR 128
+@@ -80,16 +79,16 @@
+ #define SEC_TOTAL_PBUF_SZ(depth) (PAGE_SIZE * SEC_PBUF_PAGE_NUM(depth) + \
+ SEC_PBUF_LEFT_SZ(depth))
+
+-#define SEC_SQE_LEN_RATE 4
+ #define SEC_SQE_CFLAG 2
+ #define SEC_SQE_AEAD_FLAG 3
+ #define SEC_SQE_DONE 0x1
+ #define SEC_ICV_ERR 0x2
+-#define MIN_MAC_LEN 4
+ #define MAC_LEN_MASK 0x1U
+ #define MAX_INPUT_DATA_LEN 0xFFFE00
+ #define BITS_MASK 0xFF
++#define WORD_MASK 0x3
+ #define BYTE_BITS 0x8
++#define BYTES_TO_WORDS(bcount) ((bcount) >> 2)
+ #define SEC_XTS_NAME_SZ 0x3
+ #define IV_CM_CAL_NUM 2
+ #define IV_CL_MASK 0x7
+@@ -691,14 +690,10 @@ static int sec_skcipher_fbtfm_init(struct crypto_skcipher *tfm)
+
+ c_ctx->fallback = false;
+
+- /* Currently, only XTS mode need fallback tfm when using 192bit key */
+- if (likely(strncmp(alg, "xts", SEC_XTS_NAME_SZ)))
+- return 0;
+-
+ c_ctx->fbtfm = crypto_alloc_sync_skcipher(alg, 0,
+ CRYPTO_ALG_NEED_FALLBACK);
+ if (IS_ERR(c_ctx->fbtfm)) {
+- pr_err("failed to alloc xts mode fallback tfm!\n");
++ pr_err("failed to alloc fallback tfm for %s!\n", alg);
+ return PTR_ERR(c_ctx->fbtfm);
+ }
+
+@@ -858,7 +853,7 @@ static int sec_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
+ }
+
+ memcpy(c_ctx->c_key, key, keylen);
+- if (c_ctx->fallback && c_ctx->fbtfm) {
++ if (c_ctx->fbtfm) {
+ ret = crypto_sync_skcipher_setkey(c_ctx->fbtfm, key, keylen);
+ if (ret) {
+ dev_err(dev, "failed to set fallback skcipher key!\n");
+@@ -1090,11 +1085,6 @@ static int sec_aead_auth_set_key(struct sec_auth_ctx *ctx,
+ struct crypto_shash *hash_tfm = ctx->hash_tfm;
+ int blocksize, digestsize, ret;
+
+- if (!keys->authkeylen) {
+- pr_err("hisi_sec2: aead auth key error!\n");
+- return -EINVAL;
+- }
+-
+ blocksize = crypto_shash_blocksize(hash_tfm);
+ digestsize = crypto_shash_digestsize(hash_tfm);
+ if (keys->authkeylen > blocksize) {
+@@ -1106,7 +1096,8 @@ static int sec_aead_auth_set_key(struct sec_auth_ctx *ctx,
+ }
+ ctx->a_key_len = digestsize;
+ } else {
+- memcpy(ctx->a_key, keys->authkey, keys->authkeylen);
++ if (keys->authkeylen)
++ memcpy(ctx->a_key, keys->authkey, keys->authkeylen);
+ ctx->a_key_len = keys->authkeylen;
+ }
+
+@@ -1160,8 +1151,10 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ }
+
+ ret = crypto_authenc_extractkeys(&keys, key, keylen);
+- if (ret)
++ if (ret) {
++ dev_err(dev, "sec extract aead keys err!\n");
+ goto bad_key;
++ }
+
+ ret = sec_aead_aes_set_key(c_ctx, &keys);
+ if (ret) {
+@@ -1175,12 +1168,6 @@ static int sec_aead_setkey(struct crypto_aead *tfm, const u8 *key,
+ goto bad_key;
+ }
+
+- if (ctx->a_ctx.a_key_len & SEC_SQE_LEN_RATE_MASK) {
+- ret = -EINVAL;
+- dev_err(dev, "AUTH key length error!\n");
+- goto bad_key;
+- }
+-
+ ret = sec_aead_fallback_setkey(a_ctx, tfm, key, keylen);
+ if (ret) {
+ dev_err(dev, "set sec fallback key err!\n");
+@@ -1583,11 +1570,10 @@ static void sec_auth_bd_fill_ex(struct sec_auth_ctx *ctx, int dir,
+
+ sec_sqe->type2.a_key_addr = cpu_to_le64(ctx->a_key_dma);
+
+- sec_sqe->type2.mac_key_alg = cpu_to_le32(authsize / SEC_SQE_LEN_RATE);
++ sec_sqe->type2.mac_key_alg = cpu_to_le32(BYTES_TO_WORDS(authsize));
+
+ sec_sqe->type2.mac_key_alg |=
+- cpu_to_le32((u32)((ctx->a_key_len) /
+- SEC_SQE_LEN_RATE) << SEC_AKEY_OFFSET);
++ cpu_to_le32((u32)BYTES_TO_WORDS(ctx->a_key_len) << SEC_AKEY_OFFSET);
+
+ sec_sqe->type2.mac_key_alg |=
+ cpu_to_le32((u32)(ctx->a_alg) << SEC_AEAD_ALG_OFFSET);
+@@ -1639,12 +1625,10 @@ static void sec_auth_bd_fill_ex_v3(struct sec_auth_ctx *ctx, int dir,
+ sqe3->a_key_addr = cpu_to_le64(ctx->a_key_dma);
+
+ sqe3->auth_mac_key |=
+- cpu_to_le32((u32)(authsize /
+- SEC_SQE_LEN_RATE) << SEC_MAC_OFFSET_V3);
++ cpu_to_le32(BYTES_TO_WORDS(authsize) << SEC_MAC_OFFSET_V3);
+
+ sqe3->auth_mac_key |=
+- cpu_to_le32((u32)(ctx->a_key_len /
+- SEC_SQE_LEN_RATE) << SEC_AKEY_OFFSET_V3);
++ cpu_to_le32((u32)BYTES_TO_WORDS(ctx->a_key_len) << SEC_AKEY_OFFSET_V3);
+
+ sqe3->auth_mac_key |=
+ cpu_to_le32((u32)(ctx->a_alg) << SEC_AUTH_ALG_OFFSET_V3);
+@@ -2003,8 +1987,7 @@ static int sec_aead_sha512_ctx_init(struct crypto_aead *tfm)
+ return sec_aead_ctx_init(tfm, "sha512");
+ }
+
+-static int sec_skcipher_cryptlen_check(struct sec_ctx *ctx,
+- struct sec_req *sreq)
++static int sec_skcipher_cryptlen_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ {
+ u32 cryptlen = sreq->c_req.sk_req->cryptlen;
+ struct device *dev = ctx->dev;
+@@ -2026,10 +2009,6 @@ static int sec_skcipher_cryptlen_check(struct sec_ctx *ctx,
+ }
+ break;
+ case SEC_CMODE_CTR:
+- if (unlikely(ctx->sec->qm.ver < QM_HW_V3)) {
+- dev_err(dev, "skcipher HW version error!\n");
+- ret = -EINVAL;
+- }
+ break;
+ default:
+ ret = -EINVAL;
+@@ -2038,17 +2017,21 @@ static int sec_skcipher_cryptlen_check(struct sec_ctx *ctx,
+ return ret;
+ }
+
+-static int sec_skcipher_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
++static int sec_skcipher_param_check(struct sec_ctx *ctx,
++ struct sec_req *sreq, bool *need_fallback)
+ {
+ struct skcipher_request *sk_req = sreq->c_req.sk_req;
+ struct device *dev = ctx->dev;
+ u8 c_alg = ctx->c_ctx.c_alg;
+
+- if (unlikely(!sk_req->src || !sk_req->dst ||
+- sk_req->cryptlen > MAX_INPUT_DATA_LEN)) {
++ if (unlikely(!sk_req->src || !sk_req->dst)) {
+ dev_err(dev, "skcipher input param error!\n");
+ return -EINVAL;
+ }
++
++ if (sk_req->cryptlen > MAX_INPUT_DATA_LEN)
++ *need_fallback = true;
++
+ sreq->c_req.c_len = sk_req->cryptlen;
+
+ if (ctx->pbuf_supported && sk_req->cryptlen <= SEC_PBUF_SZ)
+@@ -2106,6 +2089,7 @@ static int sec_skcipher_crypto(struct skcipher_request *sk_req, bool encrypt)
+ struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(sk_req);
+ struct sec_req *req = skcipher_request_ctx(sk_req);
+ struct sec_ctx *ctx = crypto_skcipher_ctx(tfm);
++ bool need_fallback = false;
+ int ret;
+
+ if (!sk_req->cryptlen) {
+@@ -2119,11 +2103,11 @@ static int sec_skcipher_crypto(struct skcipher_request *sk_req, bool encrypt)
+ req->c_req.encrypt = encrypt;
+ req->ctx = ctx;
+
+- ret = sec_skcipher_param_check(ctx, req);
++ ret = sec_skcipher_param_check(ctx, req, &need_fallback);
+ if (unlikely(ret))
+ return -EINVAL;
+
+- if (unlikely(ctx->c_ctx.fallback))
++ if (unlikely(ctx->c_ctx.fallback || need_fallback))
+ return sec_skcipher_soft_crypto(ctx, sk_req, encrypt);
+
+ return ctx->req_op->process(ctx, req);
+@@ -2231,52 +2215,35 @@ static int sec_aead_spec_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+ size_t sz = crypto_aead_authsize(tfm);
+ u8 c_mode = ctx->c_ctx.c_mode;
+- struct device *dev = ctx->dev;
+ int ret;
+
+- /* Hardware does not handle cases where authsize is less than 4 bytes */
+- if (unlikely(sz < MIN_MAC_LEN)) {
+- sreq->aead_req.fallback = true;
++ if (unlikely(ctx->sec->qm.ver == QM_HW_V2 && !sreq->c_req.c_len))
+ return -EINVAL;
+- }
+
+ if (unlikely(req->cryptlen + req->assoclen > MAX_INPUT_DATA_LEN ||
+- req->assoclen > SEC_MAX_AAD_LEN)) {
+- dev_err(dev, "aead input spec error!\n");
++ req->assoclen > SEC_MAX_AAD_LEN))
+ return -EINVAL;
+- }
+
+ if (c_mode == SEC_CMODE_CCM) {
+- if (unlikely(req->assoclen > SEC_MAX_CCM_AAD_LEN)) {
+- dev_err_ratelimited(dev, "CCM input aad parameter is too long!\n");
++ if (unlikely(req->assoclen > SEC_MAX_CCM_AAD_LEN))
+ return -EINVAL;
+- }
+- ret = aead_iv_demension_check(req);
+- if (ret) {
+- dev_err(dev, "aead input iv param error!\n");
+- return ret;
+- }
+- }
+
+- if (sreq->c_req.encrypt)
+- sreq->c_req.c_len = req->cryptlen;
+- else
+- sreq->c_req.c_len = req->cryptlen - sz;
+- if (c_mode == SEC_CMODE_CBC) {
+- if (unlikely(sreq->c_req.c_len & (AES_BLOCK_SIZE - 1))) {
+- dev_err(dev, "aead crypto length error!\n");
++ ret = aead_iv_demension_check(req);
++ if (unlikely(ret))
++ return -EINVAL;
++ } else if (c_mode == SEC_CMODE_CBC) {
++ if (unlikely(sz & WORD_MASK))
++ return -EINVAL;
++ if (unlikely(ctx->a_ctx.a_key_len & WORD_MASK))
+ return -EINVAL;
+- }
+ }
+
+ return 0;
+ }
+
+-static int sec_aead_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
++static int sec_aead_param_check(struct sec_ctx *ctx, struct sec_req *sreq, bool *need_fallback)
+ {
+ struct aead_request *req = sreq->aead_req.aead_req;
+- struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+- size_t authsize = crypto_aead_authsize(tfm);
+ struct device *dev = ctx->dev;
+ u8 c_alg = ctx->c_ctx.c_alg;
+
+@@ -2285,12 +2252,10 @@ static int sec_aead_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ return -EINVAL;
+ }
+
+- if (ctx->sec->qm.ver == QM_HW_V2) {
+- if (unlikely(!req->cryptlen || (!sreq->c_req.encrypt &&
+- req->cryptlen <= authsize))) {
+- sreq->aead_req.fallback = true;
+- return -EINVAL;
+- }
++ if (unlikely(ctx->c_ctx.c_mode == SEC_CMODE_CBC &&
++ sreq->c_req.c_len & (AES_BLOCK_SIZE - 1))) {
++ dev_err(dev, "aead cbc mode input data length error!\n");
++ return -EINVAL;
+ }
+
+ /* Support AES or SM4 */
+@@ -2299,8 +2264,10 @@ static int sec_aead_param_check(struct sec_ctx *ctx, struct sec_req *sreq)
+ return -EINVAL;
+ }
+
+- if (unlikely(sec_aead_spec_check(ctx, sreq)))
++ if (unlikely(sec_aead_spec_check(ctx, sreq))) {
++ *need_fallback = true;
+ return -EINVAL;
++ }
+
+ if (ctx->pbuf_supported && (req->cryptlen + req->assoclen) <=
+ SEC_PBUF_SZ)
+@@ -2344,17 +2311,19 @@ static int sec_aead_crypto(struct aead_request *a_req, bool encrypt)
+ struct crypto_aead *tfm = crypto_aead_reqtfm(a_req);
+ struct sec_req *req = aead_request_ctx(a_req);
+ struct sec_ctx *ctx = crypto_aead_ctx(tfm);
++ size_t sz = crypto_aead_authsize(tfm);
++ bool need_fallback = false;
+ int ret;
+
+ req->flag = a_req->base.flags;
+ req->aead_req.aead_req = a_req;
+ req->c_req.encrypt = encrypt;
+ req->ctx = ctx;
+- req->aead_req.fallback = false;
++ req->c_req.c_len = a_req->cryptlen - (req->c_req.encrypt ? 0 : sz);
+
+- ret = sec_aead_param_check(ctx, req);
++ ret = sec_aead_param_check(ctx, req, &need_fallback);
+ if (unlikely(ret)) {
+- if (req->aead_req.fallback)
++ if (need_fallback)
+ return sec_aead_soft_crypto(ctx, a_req, encrypt);
+ return -EINVAL;
+ }
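
The driver drops SEC_SQE_LEN_RATE in favour of BYTES_TO_WORDS() and WORD_MASK: MAC and key lengths are programmed into the SQE in 32-bit words, and CBC-mode sizes that are not word multiples are now routed to the software fallback instead of being rejected with an error log. A small sketch of both helpers:

    #include <stdio.h>

    #define WORD_MASK 0x3
    #define BYTES_TO_WORDS(bcount) ((bcount) >> 2)

    int main(void)
    {
        unsigned int authsize[] = { 8, 13, 16 };

        for (int i = 0; i < 3; i++)
            printf("authsize %2u bytes -> %u words, word-aligned: %s\n",
                   authsize[i], BYTES_TO_WORDS(authsize[i]),
                   (authsize[i] & WORD_MASK) ? "no (CBC falls back)" : "yes");
        return 0;
    }
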
+diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+index d2f07e34f3142d..e1f60f0f507c96 100644
+--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c
++++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c
+@@ -1527,7 +1527,7 @@ static int iaa_comp_acompress(struct acomp_req *req)
+ iaa_wq = idxd_wq_get_private(wq);
+
+ if (!req->dst) {
+- gfp_t flags = req->flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
++ gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? GFP_KERNEL : GFP_ATOMIC;
+
+ /* incompressible data will always be < 2 * slen */
+ req->dlen = 2 * req->slen;
+@@ -1609,7 +1609,7 @@ static int iaa_comp_acompress(struct acomp_req *req)
+
+ static int iaa_comp_adecompress_alloc_dest(struct acomp_req *req)
+ {
+- gfp_t flags = req->flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
++ gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ?
+ GFP_KERNEL : GFP_ATOMIC;
+ struct crypto_tfm *tfm = req->base.tfm;
+ dma_addr_t src_addr, dst_addr;
+diff --git a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+index 9faef33e54bd32..a17adc4beda2e3 100644
+--- a/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
++++ b/drivers/crypto/intel/qat/qat_420xx/adf_420xx_hw_data.c
+@@ -420,6 +420,7 @@ static void adf_gen4_set_err_mask(struct adf_dev_err_mask *dev_err_mask)
+ dev_err_mask->parerr_cpr_xlt_mask = ADF_420XX_PARITYERRORMASK_CPR_XLT_MASK;
+ dev_err_mask->parerr_dcpr_ucs_mask = ADF_420XX_PARITYERRORMASK_DCPR_UCS_MASK;
+ dev_err_mask->parerr_pke_mask = ADF_420XX_PARITYERRORMASK_PKE_MASK;
++ dev_err_mask->parerr_wat_wcp_mask = ADF_420XX_PARITYERRORMASK_WAT_WCP_MASK;
+ dev_err_mask->ssmfeatren_mask = ADF_420XX_SSMFEATREN_MASK;
+ }
+
+diff --git a/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c b/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c
+index 2dd3772bf58a6c..0f7f00a19e7dc6 100644
+--- a/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c
++++ b/drivers/crypto/intel/qat/qat_common/adf_gen4_ras.c
+@@ -695,7 +695,7 @@ static bool adf_handle_slice_hang_error(struct adf_accel_dev *accel_dev,
+ if (err_mask->parerr_wat_wcp_mask)
+ adf_poll_slicehang_csr(accel_dev, csr,
+ ADF_GEN4_SLICEHANGSTATUS_WAT_WCP,
+- "ath_cph");
++ "wat_wcp");
+
+ return false;
+ }
+@@ -1043,63 +1043,16 @@ static bool adf_handle_ssmcpppar_err(struct adf_accel_dev *accel_dev,
+ return reset_required;
+ }
+
+-static bool adf_handle_rf_parr_err(struct adf_accel_dev *accel_dev,
++static void adf_handle_rf_parr_err(struct adf_accel_dev *accel_dev,
+ void __iomem *csr, u32 iastatssm)
+ {
+- struct adf_dev_err_mask *err_mask = GET_ERR_MASK(accel_dev);
+- u32 reg;
+-
+ if (!(iastatssm & ADF_GEN4_IAINTSTATSSM_SSMSOFTERRORPARITY_BIT))
+- return false;
+-
+- reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_SRC);
+- reg &= ADF_GEN4_SSMSOFTERRORPARITY_SRC_BIT;
+- if (reg) {
+- ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+- ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_SRC, reg);
+- }
+-
+- reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_ATH_CPH);
+- reg &= err_mask->parerr_ath_cph_mask;
+- if (reg) {
+- ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+- ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_ATH_CPH, reg);
+- }
+-
+- reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_CPR_XLT);
+- reg &= err_mask->parerr_cpr_xlt_mask;
+- if (reg) {
+- ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+- ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_CPR_XLT, reg);
+- }
+-
+- reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_DCPR_UCS);
+- reg &= err_mask->parerr_dcpr_ucs_mask;
+- if (reg) {
+- ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+- ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_DCPR_UCS, reg);
+- }
+-
+- reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_PKE);
+- reg &= err_mask->parerr_pke_mask;
+- if (reg) {
+- ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+- ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_PKE, reg);
+- }
+-
+- if (err_mask->parerr_wat_wcp_mask) {
+- reg = ADF_CSR_RD(csr, ADF_GEN4_SSMSOFTERRORPARITY_WAT_WCP);
+- reg &= err_mask->parerr_wat_wcp_mask;
+- if (reg) {
+- ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+- ADF_CSR_WR(csr, ADF_GEN4_SSMSOFTERRORPARITY_WAT_WCP,
+- reg);
+- }
+- }
++ return;
+
++ ADF_RAS_ERR_CTR_INC(accel_dev->ras_errors, ADF_RAS_UNCORR);
+ dev_err(&GET_DEV(accel_dev), "Slice ssm soft parity error reported");
+
+- return false;
++ return;
+ }
+
+ static bool adf_handle_ser_err_ssmsh(struct adf_accel_dev *accel_dev,
+@@ -1171,8 +1124,8 @@ static bool adf_handle_iaintstatssm(struct adf_accel_dev *accel_dev,
+ reset_required |= adf_handle_slice_hang_error(accel_dev, csr, iastatssm);
+ reset_required |= adf_handle_spppar_err(accel_dev, csr, iastatssm);
+ reset_required |= adf_handle_ssmcpppar_err(accel_dev, csr, iastatssm);
+- reset_required |= adf_handle_rf_parr_err(accel_dev, csr, iastatssm);
+ reset_required |= adf_handle_ser_err_ssmsh(accel_dev, csr, iastatssm);
++ adf_handle_rf_parr_err(accel_dev, csr, iastatssm);
+
+ ADF_CSR_WR(csr, ADF_GEN4_IAINTSTATSSM, iastatssm);
+
+diff --git a/drivers/crypto/nx/nx-common-pseries.c b/drivers/crypto/nx/nx-common-pseries.c
+index 35f2d0d8507ed7..7e98f174f69b99 100644
+--- a/drivers/crypto/nx/nx-common-pseries.c
++++ b/drivers/crypto/nx/nx-common-pseries.c
+@@ -1144,6 +1144,7 @@ static void __init nxcop_get_capabilities(void)
+ {
+ struct hv_vas_all_caps *hv_caps;
+ struct hv_nx_cop_caps *hv_nxc;
++ u64 feat;
+ int rc;
+
+ hv_caps = kmalloc(sizeof(*hv_caps), GFP_KERNEL);
+@@ -1154,27 +1155,26 @@ static void __init nxcop_get_capabilities(void)
+ */
+ rc = h_query_vas_capabilities(H_QUERY_NX_CAPABILITIES, 0,
+ (u64)virt_to_phys(hv_caps));
++ if (!rc)
++ feat = be64_to_cpu(hv_caps->feat_type);
++ kfree(hv_caps);
+ if (rc)
+- goto out;
++ return;
++ if (!(feat & VAS_NX_GZIP_FEAT_BIT))
++ return;
+
+- caps_feat = be64_to_cpu(hv_caps->feat_type);
+ /*
+ * NX-GZIP feature available
+ */
+- if (caps_feat & VAS_NX_GZIP_FEAT_BIT) {
+- hv_nxc = kmalloc(sizeof(*hv_nxc), GFP_KERNEL);
+- if (!hv_nxc)
+- goto out;
+- /*
+- * Get capabilities for NX-GZIP feature
+- */
+- rc = h_query_vas_capabilities(H_QUERY_NX_CAPABILITIES,
+- VAS_NX_GZIP_FEAT,
+- (u64)virt_to_phys(hv_nxc));
+- } else {
+- pr_err("NX-GZIP feature is not available\n");
+- rc = -EINVAL;
+- }
++ hv_nxc = kmalloc(sizeof(*hv_nxc), GFP_KERNEL);
++ if (!hv_nxc)
++ return;
++ /*
++ * Get capabilities for NX-GZIP feature
++ */
++ rc = h_query_vas_capabilities(H_QUERY_NX_CAPABILITIES,
++ VAS_NX_GZIP_FEAT,
++ (u64)virt_to_phys(hv_nxc));
+
+ if (!rc) {
+ nx_cop_caps.descriptor = be64_to_cpu(hv_nxc->descriptor);
+@@ -1184,13 +1184,10 @@ static void __init nxcop_get_capabilities(void)
+ be64_to_cpu(hv_nxc->min_compress_len);
+ nx_cop_caps.min_decompress_len =
+ be64_to_cpu(hv_nxc->min_decompress_len);
+- } else {
+- caps_feat = 0;
++ caps_feat = feat;
+ }
+
+ kfree(hv_nxc);
+-out:
+- kfree(hv_caps);
+ }
+
+ static const struct vio_device_id nx842_vio_driver_ids[] = {
+diff --git a/drivers/crypto/tegra/tegra-se-aes.c b/drivers/crypto/tegra/tegra-se-aes.c
+index 3106fd1e84b91e..0ed0515e1ed54c 100644
+--- a/drivers/crypto/tegra/tegra-se-aes.c
++++ b/drivers/crypto/tegra/tegra-se-aes.c
+@@ -282,7 +282,7 @@ static int tegra_aes_do_one_req(struct crypto_engine *engine, void *areq)
+
+ /* Prepare the command and submit for execution */
+ cmdlen = tegra_aes_prep_cmd(ctx, rctx);
+- ret = tegra_se_host1x_submit(se, cmdlen);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+
+ /* Copy the result */
+ tegra_aes_update_iv(req, ctx);
+@@ -443,6 +443,9 @@ static int tegra_aes_crypt(struct skcipher_request *req, bool encrypt)
+ if (!req->cryptlen)
+ return 0;
+
++ if (ctx->alg == SE_ALG_ECB)
++ req->iv = NULL;
++
+ rctx->encrypt = encrypt;
+ rctx->config = tegra234_aes_cfg(ctx->alg, encrypt);
+ rctx->crypto_config = tegra234_aes_crypto_cfg(ctx->alg, encrypt);
+@@ -719,7 +722,7 @@ static int tegra_gcm_do_gmac(struct tegra_aead_ctx *ctx, struct tegra_aead_reqct
+
+ cmdlen = tegra_gmac_prep_cmd(ctx, rctx);
+
+- return tegra_se_host1x_submit(se, cmdlen);
++ return tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+ }
+
+ static int tegra_gcm_do_crypt(struct tegra_aead_ctx *ctx, struct tegra_aead_reqctx *rctx)
+@@ -736,7 +739,7 @@ static int tegra_gcm_do_crypt(struct tegra_aead_ctx *ctx, struct tegra_aead_reqc
+
+ /* Prepare command and submit */
+ cmdlen = tegra_gcm_crypt_prep_cmd(ctx, rctx);
+- ret = tegra_se_host1x_submit(se, cmdlen);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+ if (ret)
+ return ret;
+
+@@ -759,7 +762,7 @@ static int tegra_gcm_do_final(struct tegra_aead_ctx *ctx, struct tegra_aead_reqc
+
+ /* Prepare command and submit */
+ cmdlen = tegra_gcm_prep_final_cmd(se, cpuvaddr, rctx);
+- ret = tegra_se_host1x_submit(se, cmdlen);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+ if (ret)
+ return ret;
+
+@@ -891,7 +894,7 @@ static int tegra_ccm_do_cbcmac(struct tegra_aead_ctx *ctx, struct tegra_aead_req
+ /* Prepare command and submit */
+ cmdlen = tegra_cbcmac_prep_cmd(ctx, rctx);
+
+- return tegra_se_host1x_submit(se, cmdlen);
++ return tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+ }
+
+ static int tegra_ccm_set_msg_len(u8 *block, unsigned int msglen, int csize)
+@@ -1098,7 +1101,7 @@ static int tegra_ccm_do_ctr(struct tegra_aead_ctx *ctx, struct tegra_aead_reqctx
+
+ /* Prepare command and submit */
+ cmdlen = tegra_ctr_prep_cmd(ctx, rctx);
+- ret = tegra_se_host1x_submit(se, cmdlen);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+ if (ret)
+ return ret;
+
+@@ -1513,23 +1516,16 @@ static int tegra_cmac_do_update(struct ahash_request *req)
+ rctx->residue.size = nresidue;
+
+ /*
+- * If this is not the first 'update' call, paste the previous copied
++ * If this is not the first task, paste the previous copied
+ * intermediate results to the registers so that they get picked up.
+- * This is to support the import/export functionality.
+ */
+ if (!(rctx->task & SHA_FIRST))
+ tegra_cmac_paste_result(ctx->se, rctx);
+
+ cmdlen = tegra_cmac_prep_cmd(ctx, rctx);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+
+- ret = tegra_se_host1x_submit(se, cmdlen);
+- /*
+- * If this is not the final update, copy the intermediate results
+- * from the registers so that it can be used in the next 'update'
+- * call. This is to support the import/export functionality.
+- */
+- if (!(rctx->task & SHA_FINAL))
+- tegra_cmac_copy_result(ctx->se, rctx);
++ tegra_cmac_copy_result(ctx->se, rctx);
+
+ return ret;
+ }
+@@ -1553,9 +1549,16 @@ static int tegra_cmac_do_final(struct ahash_request *req)
+ rctx->total_len += rctx->residue.size;
+ rctx->config = tegra234_aes_cfg(SE_ALG_CMAC, 0);
+
++ /*
++ * If this is not the first task, paste the previous copied
++ * intermediate results to the registers so that they get picked up.
++ */
++ if (!(rctx->task & SHA_FIRST))
++ tegra_cmac_paste_result(ctx->se, rctx);
++
+ /* Prepare command and submit */
+ cmdlen = tegra_cmac_prep_cmd(ctx, rctx);
+- ret = tegra_se_host1x_submit(se, cmdlen);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, cmdlen);
+ if (ret)
+ goto out;
+
+@@ -1581,18 +1584,24 @@ static int tegra_cmac_do_one_req(struct crypto_engine *engine, void *areq)
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+ struct tegra_cmac_ctx *ctx = crypto_ahash_ctx(tfm);
+ struct tegra_se *se = ctx->se;
+- int ret;
++ int ret = 0;
+
+ if (rctx->task & SHA_UPDATE) {
+ ret = tegra_cmac_do_update(req);
++ if (ret)
++ goto out;
++
+ rctx->task &= ~SHA_UPDATE;
+ }
+
+ if (rctx->task & SHA_FINAL) {
+ ret = tegra_cmac_do_final(req);
++ if (ret)
++ goto out;
++
+ rctx->task &= ~SHA_FINAL;
+ }
+-
++out:
+ crypto_finalize_hash_request(se->engine, req, ret);
+
+ return 0;
+diff --git a/drivers/crypto/tegra/tegra-se-hash.c b/drivers/crypto/tegra/tegra-se-hash.c
+index 0b5cdd5676b17e..726e30c0e63ebb 100644
+--- a/drivers/crypto/tegra/tegra-se-hash.c
++++ b/drivers/crypto/tegra/tegra-se-hash.c
+@@ -300,8 +300,9 @@ static int tegra_sha_do_update(struct ahash_request *req)
+ {
+ struct tegra_sha_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req));
+ struct tegra_sha_reqctx *rctx = ahash_request_ctx(req);
++ struct tegra_se *se = ctx->se;
+ unsigned int nblks, nresidue, size, ret;
+- u32 *cpuvaddr = ctx->se->cmdbuf->addr;
++ u32 *cpuvaddr = se->cmdbuf->addr;
+
+ nresidue = (req->nbytes + rctx->residue.size) % rctx->blk_size;
+ nblks = (req->nbytes + rctx->residue.size) / rctx->blk_size;
+@@ -353,11 +354,11 @@ static int tegra_sha_do_update(struct ahash_request *req)
+ * This is to support the import/export functionality.
+ */
+ if (!(rctx->task & SHA_FIRST))
+- tegra_sha_paste_hash_result(ctx->se, rctx);
++ tegra_sha_paste_hash_result(se, rctx);
+
+- size = tegra_sha_prep_cmd(ctx->se, cpuvaddr, rctx);
++ size = tegra_sha_prep_cmd(se, cpuvaddr, rctx);
+
+- ret = tegra_se_host1x_submit(ctx->se, size);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, size);
+
+ /*
+ * If this is not the final update, copy the intermediate results
+@@ -365,7 +366,7 @@ static int tegra_sha_do_update(struct ahash_request *req)
+ * call. This is to support the import/export functionality.
+ */
+ if (!(rctx->task & SHA_FINAL))
+- tegra_sha_copy_hash_result(ctx->se, rctx);
++ tegra_sha_copy_hash_result(se, rctx);
+
+ return ret;
+ }
+@@ -388,7 +389,7 @@ static int tegra_sha_do_final(struct ahash_request *req)
+
+ size = tegra_sha_prep_cmd(se, cpuvaddr, rctx);
+
+- ret = tegra_se_host1x_submit(se, size);
++ ret = tegra_se_host1x_submit(se, se->cmdbuf, size);
+ if (ret)
+ goto out;
+
+@@ -416,14 +417,21 @@ static int tegra_sha_do_one_req(struct crypto_engine *engine, void *areq)
+
+ if (rctx->task & SHA_UPDATE) {
+ ret = tegra_sha_do_update(req);
++ if (ret)
++ goto out;
++
+ rctx->task &= ~SHA_UPDATE;
+ }
+
+ if (rctx->task & SHA_FINAL) {
+ ret = tegra_sha_do_final(req);
++ if (ret)
++ goto out;
++
+ rctx->task &= ~SHA_FINAL;
+ }
+
++out:
+ crypto_finalize_hash_request(se->engine, req, ret);
+
+ return 0;
+@@ -559,13 +567,18 @@ static int tegra_hmac_setkey(struct crypto_ahash *tfm, const u8 *key,
+ unsigned int keylen)
+ {
+ struct tegra_sha_ctx *ctx = crypto_ahash_ctx(tfm);
++ int ret;
+
+ if (aes_check_keylen(keylen))
+ return tegra_hmac_fallback_setkey(ctx, key, keylen);
+
++ ret = tegra_key_submit(ctx->se, key, keylen, ctx->alg, &ctx->key_id);
++ if (ret)
++ return tegra_hmac_fallback_setkey(ctx, key, keylen);
++
+ ctx->fallback = false;
+
+- return tegra_key_submit(ctx->se, key, keylen, ctx->alg, &ctx->key_id);
++ return 0;
+ }
+
+ static int tegra_sha_update(struct ahash_request *req)
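
The tegra_hmac_setkey() hunk above switches to a try-hardware-then-fall-back
ordering: the software fallback flag is cleared only after the keyslot load
has actually succeeded. A self-contained sketch of that shape, with
hypothetical helper names standing in for the driver calls:

#include <stdbool.h>
#include <stdio.h>

struct ctx { bool fallback; };

static bool keylen_unsupported(unsigned int keylen)
{
	return keylen != 16 && keylen != 24 && keylen != 32;
}

static int fallback_setkey(struct ctx *ctx, unsigned int keylen)
{
	ctx->fallback = true;
	printf("using software fallback for keylen %u\n", keylen);
	return 0;
}

static int submit_hw_key(unsigned int keylen)
{
	return keylen == 32 ? -16 /* pretend -EBUSY: no free keyslot */ : 0;
}

static int setkey(struct ctx *ctx, unsigned int keylen)
{
	int ret;

	if (keylen_unsupported(keylen))
		return fallback_setkey(ctx, keylen);

	ret = submit_hw_key(keylen);
	if (ret)	/* keyslots exhausted: degrade instead of failing */
		return fallback_setkey(ctx, keylen);

	ctx->fallback = false;	/* clear only after the slot is loaded */
	return 0;
}

int main(void)
{
	struct ctx c = { .fallback = true };
	setkey(&c, 16);
	setkey(&c, 32);
	return 0;
}
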
+diff --git a/drivers/crypto/tegra/tegra-se-key.c b/drivers/crypto/tegra/tegra-se-key.c
+index ac14678dbd30d5..276b261fb6df1f 100644
+--- a/drivers/crypto/tegra/tegra-se-key.c
++++ b/drivers/crypto/tegra/tegra-se-key.c
+@@ -115,11 +115,17 @@ static int tegra_key_insert(struct tegra_se *se, const u8 *key,
+ u32 keylen, u16 slot, u32 alg)
+ {
+ const u32 *keyval = (u32 *)key;
+- u32 *addr = se->cmdbuf->addr, size;
++ u32 *addr = se->keybuf->addr, size;
++ int ret;
++
++ mutex_lock(&kslt_lock);
+
+ size = tegra_key_prep_ins_cmd(se, addr, keyval, keylen, slot, alg);
++ ret = tegra_se_host1x_submit(se, se->keybuf, size);
+
+- return tegra_se_host1x_submit(se, size);
++ mutex_unlock(&kslt_lock);
++
++ return ret;
+ }
+
+ void tegra_key_invalidate(struct tegra_se *se, u32 keyid, u32 alg)
+diff --git a/drivers/crypto/tegra/tegra-se-main.c b/drivers/crypto/tegra/tegra-se-main.c
+index f94c0331b148cc..55690b044e4174 100644
+--- a/drivers/crypto/tegra/tegra-se-main.c
++++ b/drivers/crypto/tegra/tegra-se-main.c
+@@ -141,7 +141,7 @@ static struct tegra_se_cmdbuf *tegra_se_host1x_bo_alloc(struct tegra_se *se, ssi
+ return cmdbuf;
+ }
+
+-int tegra_se_host1x_submit(struct tegra_se *se, u32 size)
++int tegra_se_host1x_submit(struct tegra_se *se, struct tegra_se_cmdbuf *cmdbuf, u32 size)
+ {
+ struct host1x_job *job;
+ int ret;
+@@ -160,9 +160,9 @@ int tegra_se_host1x_submit(struct tegra_se *se, u32 size)
+ job->engine_fallback_streamid = se->stream_id;
+ job->engine_streamid_offset = SE_STREAM_ID;
+
+- se->cmdbuf->words = size;
++ cmdbuf->words = size;
+
+- host1x_job_add_gather(job, &se->cmdbuf->bo, size, 0);
++ host1x_job_add_gather(job, &cmdbuf->bo, size, 0);
+
+ ret = host1x_job_pin(job, se->dev);
+ if (ret) {
+@@ -220,14 +220,22 @@ static int tegra_se_client_init(struct host1x_client *client)
+ goto syncpt_put;
+ }
+
++ se->keybuf = tegra_se_host1x_bo_alloc(se, SZ_4K);
++ if (!se->keybuf) {
++ ret = -ENOMEM;
++ goto cmdbuf_put;
++ }
++
+ ret = se->hw->init_alg(se);
+ if (ret) {
+ dev_err(se->dev, "failed to register algorithms\n");
+- goto cmdbuf_put;
++ goto keybuf_put;
+ }
+
+ return 0;
+
++keybuf_put:
++ tegra_se_cmdbuf_put(&se->keybuf->bo);
+ cmdbuf_put:
+ tegra_se_cmdbuf_put(&se->cmdbuf->bo);
+ syncpt_put:
+diff --git a/drivers/crypto/tegra/tegra-se.h b/drivers/crypto/tegra/tegra-se.h
+index b9dd7ceb8783c9..b54aefe717a174 100644
+--- a/drivers/crypto/tegra/tegra-se.h
++++ b/drivers/crypto/tegra/tegra-se.h
+@@ -420,6 +420,7 @@ struct tegra_se {
+ struct host1x_client client;
+ struct host1x_channel *channel;
+ struct tegra_se_cmdbuf *cmdbuf;
++ struct tegra_se_cmdbuf *keybuf;
+ struct crypto_engine *engine;
+ struct host1x_syncpt *syncpt;
+ struct device *dev;
+@@ -502,7 +503,7 @@ void tegra_deinit_hash(struct tegra_se *se);
+ int tegra_key_submit(struct tegra_se *se, const u8 *key,
+ u32 keylen, u32 alg, u32 *keyid);
+ void tegra_key_invalidate(struct tegra_se *se, u32 keyid, u32 alg);
+-int tegra_se_host1x_submit(struct tegra_se *se, u32 size);
++int tegra_se_host1x_submit(struct tegra_se *se, struct tegra_se_cmdbuf *cmdbuf, u32 size);
+
+ /* HOST1x OPCODES */
+ static inline u32 host1x_opcode_setpayload(unsigned int payload)
+diff --git a/drivers/dma/fsl-edma-main.c b/drivers/dma/fsl-edma-main.c
+index 70cb7fda757a94..27645606f900b8 100644
+--- a/drivers/dma/fsl-edma-main.c
++++ b/drivers/dma/fsl-edma-main.c
+@@ -303,6 +303,7 @@ fsl_edma2_irq_init(struct platform_device *pdev,
+
+ /* The last IRQ is for eDMA err */
+ if (i == count - 1) {
++ fsl_edma->errirq = irq;
+ ret = devm_request_irq(&pdev->dev, irq,
+ fsl_edma_err_handler,
+ 0, "eDMA2-ERR", fsl_edma);
+@@ -322,10 +323,13 @@ static void fsl_edma_irq_exit(
+ struct platform_device *pdev, struct fsl_edma_engine *fsl_edma)
+ {
+ if (fsl_edma->txirq == fsl_edma->errirq) {
+- devm_free_irq(&pdev->dev, fsl_edma->txirq, fsl_edma);
++ if (fsl_edma->txirq >= 0)
++ devm_free_irq(&pdev->dev, fsl_edma->txirq, fsl_edma);
+ } else {
+- devm_free_irq(&pdev->dev, fsl_edma->txirq, fsl_edma);
+- devm_free_irq(&pdev->dev, fsl_edma->errirq, fsl_edma);
++ if (fsl_edma->txirq >= 0)
++ devm_free_irq(&pdev->dev, fsl_edma->txirq, fsl_edma);
++ if (fsl_edma->errirq >= 0)
++ devm_free_irq(&pdev->dev, fsl_edma->errirq, fsl_edma);
+ }
+ }
+
+@@ -513,6 +517,8 @@ static int fsl_edma_probe(struct platform_device *pdev)
+ if (!fsl_edma)
+ return -ENOMEM;
+
++ fsl_edma->errirq = -EINVAL;
++ fsl_edma->txirq = -EINVAL;
+ fsl_edma->drvdata = drvdata;
+ fsl_edma->n_chans = chans;
+ mutex_init(&fsl_edma->fsl_edma_mutex);
+@@ -699,9 +705,9 @@ static void fsl_edma_remove(struct platform_device *pdev)
+ struct fsl_edma_engine *fsl_edma = platform_get_drvdata(pdev);
+
+ fsl_edma_irq_exit(pdev, fsl_edma);
+- fsl_edma_cleanup_vchan(&fsl_edma->dma_dev);
+ of_dma_controller_free(np);
+ dma_async_device_unregister(&fsl_edma->dma_dev);
++ fsl_edma_cleanup_vchan(&fsl_edma->dma_dev);
+ fsl_disable_clocks(fsl_edma, fsl_edma->drvdata->dmamuxs);
+ }
+
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index 51556c72a96746..fbdf005bed3a49 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -751,6 +751,8 @@ static int i10nm_get_ddr_munits(void)
+ continue;
+ } else {
+ d->imc[lmc].mdev = mdev;
++ if (res_cfg->type == SPR)
++ skx_set_mc_mapping(d, i, lmc);
+ lmc++;
+ }
+ }
+diff --git a/drivers/edac/ie31200_edac.c b/drivers/edac/ie31200_edac.c
+index 9ef13570f2e540..56be8ef40f376b 100644
+--- a/drivers/edac/ie31200_edac.c
++++ b/drivers/edac/ie31200_edac.c
+@@ -91,8 +91,6 @@
+ (((did) & PCI_DEVICE_ID_INTEL_IE31200_HB_CFL_MASK) == \
+ PCI_DEVICE_ID_INTEL_IE31200_HB_CFL_MASK))
+
+-#define IE31200_DIMMS 4
+-#define IE31200_RANKS 8
+ #define IE31200_RANKS_PER_CHANNEL 4
+ #define IE31200_DIMMS_PER_CHANNEL 2
+ #define IE31200_CHANNELS 2
+@@ -164,6 +162,7 @@
+ #define IE31200_MAD_DIMM_0_OFFSET 0x5004
+ #define IE31200_MAD_DIMM_0_OFFSET_SKL 0x500C
+ #define IE31200_MAD_DIMM_SIZE GENMASK_ULL(7, 0)
++#define IE31200_MAD_DIMM_SIZE_SKL GENMASK_ULL(5, 0)
+ #define IE31200_MAD_DIMM_A_RANK BIT(17)
+ #define IE31200_MAD_DIMM_A_RANK_SHIFT 17
+ #define IE31200_MAD_DIMM_A_RANK_SKL BIT(10)
+@@ -377,7 +376,7 @@ static void __iomem *ie31200_map_mchbar(struct pci_dev *pdev)
+ static void __skl_populate_dimm_info(struct dimm_data *dd, u32 addr_decode,
+ int chan)
+ {
+- dd->size = (addr_decode >> (chan << 4)) & IE31200_MAD_DIMM_SIZE;
++ dd->size = (addr_decode >> (chan << 4)) & IE31200_MAD_DIMM_SIZE_SKL;
+ dd->dual_rank = (addr_decode & (IE31200_MAD_DIMM_A_RANK_SKL << (chan << 4))) ? 1 : 0;
+ dd->x16_width = ((addr_decode & (IE31200_MAD_DIMM_A_WIDTH_SKL << (chan << 4))) >>
+ (IE31200_MAD_DIMM_A_WIDTH_SKL_SHIFT + (chan << 4)));
+@@ -426,7 +425,7 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx)
+
+ nr_channels = how_many_channels(pdev);
+ layers[0].type = EDAC_MC_LAYER_CHIP_SELECT;
+- layers[0].size = IE31200_DIMMS;
++ layers[0].size = IE31200_RANKS_PER_CHANNEL;
+ layers[0].is_virt_csrow = true;
+ layers[1].type = EDAC_MC_LAYER_CHANNEL;
+ layers[1].size = nr_channels;
+@@ -618,7 +617,7 @@ static int __init ie31200_init(void)
+
+ pci_rc = pci_register_driver(&ie31200_driver);
+ if (pci_rc < 0)
+- goto fail0;
++ return pci_rc;
+
+ if (!mci_pdev) {
+ ie31200_registered = 0;
+@@ -629,11 +628,13 @@ static int __init ie31200_init(void)
+ if (mci_pdev)
+ break;
+ }
++
+ if (!mci_pdev) {
+ edac_dbg(0, "ie31200 pci_get_device fail\n");
+ pci_rc = -ENODEV;
+- goto fail1;
++ goto fail0;
+ }
++
+ pci_rc = ie31200_init_one(mci_pdev, &ie31200_pci_tbl[i]);
+ if (pci_rc < 0) {
+ edac_dbg(0, "ie31200 init fail\n");
+@@ -641,12 +642,12 @@ static int __init ie31200_init(void)
+ goto fail1;
+ }
+ }
+- return 0;
+
++ return 0;
+ fail1:
+- pci_unregister_driver(&ie31200_driver);
+-fail0:
+ pci_dev_put(mci_pdev);
++fail0:
++ pci_unregister_driver(&ie31200_driver);
+
+ return pci_rc;
+ }
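
The ie31200_init() hunk above reorders the failure labels so cleanup mirrors
acquisition: the device reference taken last is dropped first, and the driver
registered first is unregistered last. The generic shape, as a small runnable
sketch with placeholder resources:

#include <stdio.h>

static int acquire_a(void) { puts("A acquired"); return 0; }
static int acquire_b(void) { puts("B acquired"); return 0; }
static int acquire_c(void) { puts("C failed");   return -1; }
static void release_a(void) { puts("A released"); }
static void release_b(void) { puts("B released"); }

static int init(void)
{
	int ret;

	ret = acquire_a();
	if (ret)
		return ret;

	ret = acquire_b();
	if (ret)
		goto put_a;

	ret = acquire_c();	/* fails in this demo */
	if (ret)
		goto put_b;

	return 0;

	/* labels run bottom-up, mirroring the acquisition order top-down */
put_b:
	release_b();
put_a:
	release_a();
	return ret;
}

int main(void)
{
	return init() ? 1 : 0;
}
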
+diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
+index 6cf17af7d9112b..85ec3196664d30 100644
+--- a/drivers/edac/skx_common.c
++++ b/drivers/edac/skx_common.c
+@@ -120,6 +120,35 @@ void skx_adxl_put(void)
+ }
+ EXPORT_SYMBOL_GPL(skx_adxl_put);
+
++static void skx_init_mc_mapping(struct skx_dev *d)
++{
++ /*
++ * By default, the BIOS presents all memory controllers within each
++ * socket to the EDAC driver. The physical indices are the same as
++ * the logical indices of the memory controllers enumerated by the
++ * EDAC driver.
++ */
++ for (int i = 0; i < NUM_IMC; i++)
++ d->mc_mapping[i] = i;
++}
++
++void skx_set_mc_mapping(struct skx_dev *d, u8 pmc, u8 lmc)
++{
++ edac_dbg(0, "Set the mapping of mc phy idx to logical idx: %02d -> %02d\n",
++ pmc, lmc);
++
++ d->mc_mapping[pmc] = lmc;
++}
++EXPORT_SYMBOL_GPL(skx_set_mc_mapping);
++
++static u8 skx_get_mc_mapping(struct skx_dev *d, u8 pmc)
++{
++ edac_dbg(0, "Get the mapping of mc phy idx to logical idx: %02d -> %02d\n",
++ pmc, d->mc_mapping[pmc]);
++
++ return d->mc_mapping[pmc];
++}
++
+ static bool skx_adxl_decode(struct decoded_addr *res, enum error_source err_src)
+ {
+ struct skx_dev *d;
+@@ -187,6 +216,8 @@ static bool skx_adxl_decode(struct decoded_addr *res, enum error_source err_src)
+ return false;
+ }
+
++ res->imc = skx_get_mc_mapping(d, res->imc);
++
+ for (i = 0; i < adxl_component_count; i++) {
+ if (adxl_values[i] == ~0x0ull)
+ continue;
+@@ -307,6 +338,8 @@ int skx_get_all_bus_mappings(struct res_config *cfg, struct list_head **list)
+ d->bus[0], d->bus[1], d->bus[2], d->bus[3]);
+ list_add_tail(&d->list, &dev_edac_list);
+ prev = pdev;
++
++ skx_init_mc_mapping(d);
+ }
+
+ if (list)
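
The mc_mapping table introduced above defaults to an identity map and is only
overridden when the BIOS hides a controller; error decode then translates the
physical indices reported by ADXL into the logical ones EDAC registered. A
runnable toy version of the same lookup (NUM_IMC and the hidden-controller
scenario are illustrative):

#include <stdio.h>

#define NUM_IMC 4

static unsigned char mc_mapping[NUM_IMC];

static void init_mc_mapping(void)
{
	for (int i = 0; i < NUM_IMC; i++)
		mc_mapping[i] = i;	/* identity when BIOS hides nothing */
}

static void set_mc_mapping(unsigned char pmc, unsigned char lmc)
{
	mc_mapping[pmc] = lmc;
}

int main(void)
{
	init_mc_mapping();
	/* BIOS hid controller 1, so physical 2 is the second logical MC */
	set_mc_mapping(2, 1);
	printf("physical 2 -> logical %u\n", mc_mapping[2]);
	return 0;
}
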
+diff --git a/drivers/edac/skx_common.h b/drivers/edac/skx_common.h
+index 54bba8a62f727c..849198fd14da69 100644
+--- a/drivers/edac/skx_common.h
++++ b/drivers/edac/skx_common.h
+@@ -93,6 +93,16 @@ struct skx_dev {
+ struct pci_dev *uracu; /* for i10nm CPU */
+ struct pci_dev *pcu_cr3; /* for HBM memory detection */
+ u32 mcroute;
++ /*
++ * Some server BIOS may hide certain memory controllers, and the
++ * EDAC driver skips those hidden memory controllers. However, the
++ * ADXL still decodes memory error address using physical memory
++ * controller indices. The mapping table is used to convert the
++ * physical indices (reported by ADXL) to the logical indices
++	 * (used by the EDAC driver) of present memory controllers during the
++ * error handling process.
++ */
++ u8 mc_mapping[NUM_IMC];
+ struct skx_imc {
+ struct mem_ctl_info *mci;
+ struct pci_dev *mdev; /* for i10nm CPU */
+@@ -242,6 +252,7 @@ void skx_adxl_put(void);
+ void skx_set_decode(skx_decode_f decode, skx_show_retry_log_f show_retry_log);
+ void skx_set_mem_cfg(bool mem_cfg_2lm);
+ void skx_set_res_cfg(struct res_config *cfg);
++void skx_set_mc_mapping(struct skx_dev *d, u8 pmc, u8 lmc);
+
+ int skx_get_src_id(struct skx_dev *d, int off, u8 *id);
+ int skx_get_node_id(struct skx_dev *d, u8 *id);
+diff --git a/drivers/firmware/cirrus/cs_dsp.c b/drivers/firmware/cirrus/cs_dsp.c
+index bd1ea99c3b4751..ea452f19085427 100644
+--- a/drivers/firmware/cirrus/cs_dsp.c
++++ b/drivers/firmware/cirrus/cs_dsp.c
+@@ -1631,6 +1631,7 @@ static int cs_dsp_load(struct cs_dsp *dsp, const struct firmware *firmware,
+
+ cs_dsp_debugfs_save_wmfwname(dsp, file);
+
++ ret = 0;
+ out_fw:
+ cs_dsp_buf_free(&buf_list);
+
+@@ -2338,6 +2339,7 @@ static int cs_dsp_load_coeff(struct cs_dsp *dsp, const struct firmware *firmware
+
+ cs_dsp_debugfs_save_binname(dsp, file);
+
++ ret = 0;
+ out_fw:
+ cs_dsp_buf_free(&buf_list);
+
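
The "ret = 0;" lines added above address a classic shared-cleanup-label
pitfall: when success and failure converge on one label, the success path
must reset ret, or a leftover intermediate value (such as a byte count)
escapes as the return code. A runnable sketch:

#include <stdio.h>

static int parse(void)
{
	int ret;

	ret = 128;	/* e.g. bytes consumed by the last read */
	if (ret < 0)
		goto out;

	ret = 0;	/* success: don't return the leftover byte count */
out:
	/* shared cleanup runs here for both paths */
	return ret;
}

int main(void)
{
	printf("parse() = %d\n", parse());
	return 0;
}
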
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 32afcf9485245e..7978d5189c37d4 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -2633,7 +2633,6 @@ static int amdgpu_pmops_freeze(struct device *dev)
+
+ adev->in_s4 = true;
+ r = amdgpu_device_suspend(drm_dev, true);
+- adev->in_s4 = false;
+ if (r)
+ return r;
+
+@@ -2645,8 +2644,13 @@ static int amdgpu_pmops_freeze(struct device *dev)
+ static int amdgpu_pmops_thaw(struct device *dev)
+ {
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
++ struct amdgpu_device *adev = drm_to_adev(drm_dev);
++ int r;
+
+- return amdgpu_device_resume(drm_dev, true);
++ r = amdgpu_device_resume(drm_dev, true);
++ adev->in_s4 = false;
++
++ return r;
+ }
+
+ static int amdgpu_pmops_poweroff(struct device *dev)
+@@ -2659,6 +2663,9 @@ static int amdgpu_pmops_poweroff(struct device *dev)
+ static int amdgpu_pmops_restore(struct device *dev)
+ {
+ struct drm_device *drm_dev = dev_get_drvdata(dev);
++ struct amdgpu_device *adev = drm_to_adev(drm_dev);
++
++ adev->in_s4 = false;
+
+ return amdgpu_device_resume(drm_dev, true);
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c
+index 6162582d0aa272..d5125523bfa7be 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_umsch_mm.c
+@@ -584,7 +584,7 @@ int amdgpu_umsch_mm_init_microcode(struct amdgpu_umsch_mm *umsch)
+ fw_name = "amdgpu/umsch_mm_4_0_0.bin";
+ break;
+ default:
+- break;
++ return -EINVAL;
+ }
+
+ r = amdgpu_ucode_request(adev, &adev->umsch_mm.fw, "%s", fw_name);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+index d3e8be82a1727a..84cf5fd297b7f6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c
+@@ -1549,7 +1549,7 @@ static int gfx_v11_0_sw_init(void *handle)
+ adev->gfx.me.num_me = 1;
+ adev->gfx.me.num_pipe_per_me = 1;
+ adev->gfx.me.num_queue_per_pipe = 1;
+- adev->gfx.mec.num_mec = 2;
++ adev->gfx.mec.num_mec = 1;
+ adev->gfx.mec.num_pipe_per_mec = 4;
+ adev->gfx.mec.num_queue_per_pipe = 4;
+ break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+index d3798a333d1f88..b259e217930c75 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v12_0.c
+@@ -1332,7 +1332,7 @@ static int gfx_v12_0_sw_init(void *handle)
+ adev->gfx.me.num_me = 1;
+ adev->gfx.me.num_pipe_per_me = 1;
+ adev->gfx.me.num_queue_per_pipe = 1;
+- adev->gfx.mec.num_mec = 2;
++ adev->gfx.mec.num_mec = 1;
+ adev->gfx.mec.num_pipe_per_mec = 2;
+ adev->gfx.mec.num_queue_per_pipe = 4;
+ break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 05d1ae2ef84b4e..114653a0b57013 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1269,6 +1269,7 @@ static void gfx_v9_0_check_fw_write_wait(struct amdgpu_device *adev)
+ adev->gfx.mec_fw_write_wait = false;
+
+ if ((amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 4, 1)) &&
++ (amdgpu_ip_version(adev, GC_HWIP, 0) != IP_VERSION(9, 4, 2)) &&
+ ((adev->gfx.mec_fw_version < 0x000001a5) ||
+ (adev->gfx.mec_feature_version < 46) ||
+ (adev->gfx.pfp_fw_version < 0x000000b7) ||
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+index dffe2a86f383ef..951b87e7e3f68e 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device_queue_manager.c
+@@ -205,21 +205,6 @@ static int add_queue_mes(struct device_queue_manager *dqm, struct queue *q,
+ if (!down_read_trylock(&adev->reset_domain->sem))
+ return -EIO;
+
+- if (!pdd->proc_ctx_cpu_ptr) {
+- r = amdgpu_amdkfd_alloc_gtt_mem(adev,
+- AMDGPU_MES_PROC_CTX_SIZE,
+- &pdd->proc_ctx_bo,
+- &pdd->proc_ctx_gpu_addr,
+- &pdd->proc_ctx_cpu_ptr,
+- false);
+- if (r) {
+- dev_err(adev->dev,
+- "failed to allocate process context bo\n");
+- return r;
+- }
+- memset(pdd->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE);
+- }
+-
+ memset(&queue_input, 0x0, sizeof(struct mes_add_queue_input));
+ queue_input.process_id = qpd->pqm->process->pasid;
+ queue_input.page_table_base_addr = qpd->page_table_base;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index 42fd7669ac7d37..ac777244ee0a18 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -361,10 +361,26 @@ int pqm_create_queue(struct process_queue_manager *pqm,
+ if (retval != 0)
+ return retval;
+
++ /* Register process if this is the first queue */
+ if (list_empty(&pdd->qpd.queues_list) &&
+ list_empty(&pdd->qpd.priv_queue_list))
+ dev->dqm->ops.register_process(dev->dqm, &pdd->qpd);
+
++ /* Allocate proc_ctx_bo only if MES is enabled and this is the first queue */
++ if (!pdd->proc_ctx_cpu_ptr && dev->kfd->shared_resources.enable_mes) {
++ retval = amdgpu_amdkfd_alloc_gtt_mem(dev->adev,
++ AMDGPU_MES_PROC_CTX_SIZE,
++ &pdd->proc_ctx_bo,
++ &pdd->proc_ctx_gpu_addr,
++ &pdd->proc_ctx_cpu_ptr,
++ false);
++ if (retval) {
++ dev_err(dev->adev->dev, "failed to allocate process context bo\n");
++ return retval;
++ }
++ memset(pdd->proc_ctx_cpu_ptr, 0, AMDGPU_MES_PROC_CTX_SIZE);
++ }
++
+ pqn = kzalloc(sizeof(*pqn), GFP_KERNEL);
+ if (!pqn) {
+ retval = -ENOMEM;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index c4c6538eabae6d..260b6b8d29fd6c 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3306,6 +3306,11 @@ static int dm_resume(void *handle)
+
+ return 0;
+ }
++
++ /* leave display off for S4 sequence */
++ if (adev->in_s4)
++ return 0;
++
+ /* Recreate dc_state - DC invalidates it when setting power state to S3. */
+ dc_state_release(dm_state->context);
+ dm_state->context = dc_state_create(dm->dc, NULL);
+diff --git a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+index 6e2fce329d7382..d37ecfdde4f1bc 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/dce/dmub_hw_lock_mgr.c
+@@ -63,6 +63,10 @@ void dmub_hw_lock_mgr_inbox0_cmd(struct dc_dmub_srv *dmub_srv,
+
+ bool should_use_dmub_lock(struct dc_link *link)
+ {
++ /* ASIC doesn't support DMUB */
++ if (!link->ctx->dmub_srv)
++ return false;
++
+ if (link->psr_settings.psr_version == DC_PSR_VERSION_SU_1)
+ return true;
+
+diff --git a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+index 1c10ba4dcddea4..abe51cf3aab29e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
++++ b/drivers/gpu/drm/amd/display/dc/dml/dcn30/display_mode_vba_30.c
+@@ -281,10 +281,10 @@ static void CalculateDynamicMetadataParameters(
+ double DISPCLK,
+ double DCFClkDeepSleep,
+ double PixelClock,
+- long HTotal,
+- long VBlank,
+- long DynamicMetadataTransmittedBytes,
+- long DynamicMetadataLinesBeforeActiveRequired,
++ unsigned int HTotal,
++ unsigned int VBlank,
++ unsigned int DynamicMetadataTransmittedBytes,
++ int DynamicMetadataLinesBeforeActiveRequired,
+ int InterlaceEnable,
+ bool ProgressiveToInterlaceUnitInOPP,
+ double *Tsetup,
+@@ -3277,8 +3277,8 @@ static double CalculateWriteBackDelay(
+
+
+ static void CalculateDynamicMetadataParameters(int MaxInterDCNTileRepeaters, double DPPCLK, double DISPCLK,
+- double DCFClkDeepSleep, double PixelClock, long HTotal, long VBlank, long DynamicMetadataTransmittedBytes,
+- long DynamicMetadataLinesBeforeActiveRequired, int InterlaceEnable, bool ProgressiveToInterlaceUnitInOPP,
++ double DCFClkDeepSleep, double PixelClock, unsigned int HTotal, unsigned int VBlank, unsigned int DynamicMetadataTransmittedBytes,
++ int DynamicMetadataLinesBeforeActiveRequired, int InterlaceEnable, bool ProgressiveToInterlaceUnitInOPP,
+ double *Tsetup, double *Tdmbf, double *Tdmec, double *Tdmsks)
+ {
+ double TotalRepeaterDelayTime = 0;
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4.c
+index 0aa4e4d343b04e..2c1316d1b6eb85 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/src/dml2_core/dml2_core_dcn4.c
+@@ -139,9 +139,8 @@ bool core_dcn4_initialize(struct dml2_core_initialize_in_out *in_out)
+ core->clean_me_up.mode_lib.ip.subvp_fw_processing_delay_us = core_dcn4_ip_caps_base.subvp_pstate_allow_width_us;
+ core->clean_me_up.mode_lib.ip.subvp_swath_height_margin_lines = core_dcn4_ip_caps_base.subvp_swath_height_margin_lines;
+ } else {
+- memcpy(&core->clean_me_up.mode_lib.ip, &core_dcn4_ip_caps_base, sizeof(struct dml2_core_ip_params));
++ memcpy(&core->clean_me_up.mode_lib.ip, &core_dcn4_ip_caps_base, sizeof(struct dml2_core_ip_params));
+ patch_ip_params_with_ip_caps(&core->clean_me_up.mode_lib.ip, in_out->ip_caps);
+-
+ core->clean_me_up.mode_lib.ip.imall_supported = false;
+ }
+
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+index 0d71db7be325da..0ce1766c859f5c 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu_cmn.c
+@@ -459,8 +459,7 @@ int smu_cmn_send_smc_msg_with_param(struct smu_context *smu,
+ }
+ if (read_arg) {
+ smu_cmn_read_arg(smu, read_arg);
+- dev_dbg(adev->dev, "smu send message: %s(%d) param: 0x%08x, resp: 0x%08x,\
+- readval: 0x%08x\n",
++ dev_dbg(adev->dev, "smu send message: %s(%d) param: 0x%08x, resp: 0x%08x, readval: 0x%08x\n",
+ smu_get_message_name(smu, msg), index, param, reg, *read_arg);
+ } else {
+ dev_dbg(adev->dev, "smu send message: %s(%d) param: 0x%08x, resp: 0x%08x\n",
+diff --git a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
+index 41f72d458487fb..9ba2a667a1f3a1 100644
+--- a/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
++++ b/drivers/gpu/drm/bridge/cadence/cdns-mhdp8546-core.c
+@@ -2463,9 +2463,9 @@ static int cdns_mhdp_probe(struct platform_device *pdev)
+ if (!mhdp)
+ return -ENOMEM;
+
+- clk = devm_clk_get(dev, NULL);
++ clk = devm_clk_get_enabled(dev, NULL);
+ if (IS_ERR(clk)) {
+- dev_err(dev, "couldn't get clk: %ld\n", PTR_ERR(clk));
++ dev_err(dev, "couldn't get and enable clk: %ld\n", PTR_ERR(clk));
+ return PTR_ERR(clk);
+ }
+
+@@ -2504,14 +2504,12 @@ static int cdns_mhdp_probe(struct platform_device *pdev)
+
+ mhdp->info = of_device_get_match_data(dev);
+
+- clk_prepare_enable(clk);
+-
+ pm_runtime_enable(dev);
+ ret = pm_runtime_resume_and_get(dev);
+ if (ret < 0) {
+ dev_err(dev, "pm_runtime_resume_and_get failed\n");
+ pm_runtime_disable(dev);
+- goto clk_disable;
++ return ret;
+ }
+
+ if (mhdp->info && mhdp->info->ops && mhdp->info->ops->init) {
+@@ -2590,8 +2588,6 @@ static int cdns_mhdp_probe(struct platform_device *pdev)
+ runtime_put:
+ pm_runtime_put_sync(dev);
+ pm_runtime_disable(dev);
+-clk_disable:
+- clk_disable_unprepare(mhdp->clk);
+
+ return ret;
+ }
+@@ -2632,8 +2628,6 @@ static void cdns_mhdp_remove(struct platform_device *pdev)
+ cancel_work_sync(&mhdp->modeset_retry_work);
+ flush_work(&mhdp->hpd_work);
+ /* Ignoring mhdp->hdcp.check_work and mhdp->hdcp.prop_work here. */
+-
+- clk_disable_unprepare(mhdp->clk);
+ }
+
+ static const struct of_device_id mhdp_ids[] = {
+diff --git a/drivers/gpu/drm/bridge/ite-it6505.c b/drivers/gpu/drm/bridge/ite-it6505.c
+index faee8e2e82a053..967aa24b7c5377 100644
+--- a/drivers/gpu/drm/bridge/ite-it6505.c
++++ b/drivers/gpu/drm/bridge/ite-it6505.c
+@@ -2042,12 +2042,13 @@ static bool it6505_hdcp_part2_ksvlist_check(struct it6505 *it6505)
+ continue;
+ }
+
+- for (i = 0; i < 5; i++) {
++ for (i = 0; i < 5; i++)
+ if (bv[i][3] != av[i][0] || bv[i][2] != av[i][1] ||
+- av[i][1] != av[i][2] || bv[i][0] != av[i][3])
++ bv[i][1] != av[i][2] || bv[i][0] != av[i][3])
+ break;
+
+- DRM_DEV_DEBUG_DRIVER(dev, "V' all match!! %d, %d", retry, i);
++ if (i == 5) {
++ DRM_DEV_DEBUG_DRIVER(dev, "V' all match!! %d", retry);
+ return true;
+ }
+ }
+diff --git a/drivers/gpu/drm/bridge/ti-sn65dsi86.c b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+index 582cf4f73a74c7..95ce50ed53acf6 100644
+--- a/drivers/gpu/drm/bridge/ti-sn65dsi86.c
++++ b/drivers/gpu/drm/bridge/ti-sn65dsi86.c
+@@ -480,6 +480,7 @@ static int ti_sn65dsi86_add_aux_device(struct ti_sn65dsi86 *pdata,
+ const char *name)
+ {
+ struct device *dev = pdata->dev;
++ const struct i2c_client *client = to_i2c_client(dev);
+ struct auxiliary_device *aux;
+ int ret;
+
+@@ -488,6 +489,7 @@ static int ti_sn65dsi86_add_aux_device(struct ti_sn65dsi86 *pdata,
+ return -ENOMEM;
+
+ aux->name = name;
++ aux->id = (client->adapter->nr << 10) | client->addr;
+ aux->dev.parent = dev;
+ aux->dev.release = ti_sn65dsi86_aux_device_release;
+ device_set_of_node_from_dev(&aux->dev, dev);
+diff --git a/drivers/gpu/drm/display/drm_dp_mst_topology.c b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+index da6ff36623d30f..3e5f721d754005 100644
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -179,13 +179,13 @@ static int
+ drm_dp_mst_rad_to_str(const u8 rad[8], u8 lct, char *out, size_t len)
+ {
+ int i;
+- u8 unpacked_rad[16];
++ u8 unpacked_rad[16] = {};
+
+- for (i = 0; i < lct; i++) {
++ for (i = 1; i < lct; i++) {
+ if (i % 2)
+- unpacked_rad[i] = rad[i / 2] >> 4;
++ unpacked_rad[i] = rad[(i - 1) / 2] >> 4;
+ else
+- unpacked_rad[i] = rad[i / 2] & BIT_MASK(4);
++ unpacked_rad[i] = rad[(i - 1) / 2] & 0xF;
+ }
+
+ /* TODO: Eventually add something to printk so we can format the rad
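
Per the fixed indexing above, hop i's nibble (i starting at 1, since the
first link stores no RAD entry) lives at rad[(i - 1) / 2], high nibble first,
and the low-nibble mask must be 0xF; BIT_MASK(4) expands to the single bit
0x10, not a four-bit mask. A runnable sketch of the corrected unpacking with
a made-up RAD value:

#include <stdio.h>

int main(void)
{
	const unsigned char rad[8] = { 0x21, 0x43 };	/* hops 1..4 = 2,1,4,3 */
	unsigned char unpacked[16] = { 0 };
	int lct = 5;	/* link count: four hops below the root */

	for (int i = 1; i < lct; i++) {
		if (i % 2)
			unpacked[i] = rad[(i - 1) / 2] >> 4;
		else
			unpacked[i] = rad[(i - 1) / 2] & 0xF;
	}

	for (int i = 1; i < lct; i++)
		printf("hop %d: port %u\n", i, unpacked[i]);
	return 0;
}
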
+diff --git a/drivers/gpu/drm/mediatek/mtk_crtc.c b/drivers/gpu/drm/mediatek/mtk_crtc.c
+index 5674f5707cca83..8f6fba4217ece5 100644
+--- a/drivers/gpu/drm/mediatek/mtk_crtc.c
++++ b/drivers/gpu/drm/mediatek/mtk_crtc.c
+@@ -620,13 +620,16 @@ static void mtk_crtc_update_config(struct mtk_crtc *mtk_crtc, bool needs_vblank)
+
+ mbox_send_message(mtk_crtc->cmdq_client.chan, cmdq_handle);
+ mbox_client_txdone(mtk_crtc->cmdq_client.chan, 0);
++ goto update_config_out;
+ }
+-#else
++#endif
+ spin_lock_irqsave(&mtk_crtc->config_lock, flags);
+ mtk_crtc->config_updating = false;
+ spin_unlock_irqrestore(&mtk_crtc->config_lock, flags);
+-#endif
+
++#if IS_REACHABLE(CONFIG_MTK_CMDQ)
++update_config_out:
++#endif
+ mutex_unlock(&mtk_crtc->hw_lock);
+ }
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_dp.c b/drivers/gpu/drm/mediatek/mtk_dp.c
+index cad65ea851edc7..4979d49ae25a61 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dp.c
++++ b/drivers/gpu/drm/mediatek/mtk_dp.c
+@@ -1746,7 +1746,7 @@ static int mtk_dp_parse_capabilities(struct mtk_dp *mtk_dp)
+
+ ret = drm_dp_dpcd_readb(&mtk_dp->aux, DP_MSTM_CAP, &val);
+ if (ret < 1) {
+- drm_err(mtk_dp->drm_dev, "Read mstm cap failed\n");
++ dev_err(mtk_dp->dev, "Read mstm cap failed: %zd\n", ret);
+ return ret == 0 ? -EIO : ret;
+ }
+
+@@ -1756,7 +1756,7 @@ static int mtk_dp_parse_capabilities(struct mtk_dp *mtk_dp)
+ DP_DEVICE_SERVICE_IRQ_VECTOR_ESI0,
+ &val);
+ if (ret < 1) {
+- drm_err(mtk_dp->drm_dev, "Read irq vector failed\n");
++ dev_err(mtk_dp->dev, "Read irq vector failed: %zd\n", ret);
+ return ret == 0 ? -EIO : ret;
+ }
+
+@@ -2039,7 +2039,7 @@ static int mtk_dp_wait_hpd_asserted(struct drm_dp_aux *mtk_aux, unsigned long wa
+
+ ret = mtk_dp_parse_capabilities(mtk_dp);
+ if (ret) {
+- drm_err(mtk_dp->drm_dev, "Can't parse capabilities\n");
++ dev_err(mtk_dp->dev, "Can't parse capabilities: %d\n", ret);
+ return ret;
+ }
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_dsi.c b/drivers/gpu/drm/mediatek/mtk_dsi.c
+index b9b7fd08b7d7e9..88f3dfeb4731d3 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dsi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dsi.c
+@@ -1108,12 +1108,12 @@ static ssize_t mtk_dsi_host_transfer(struct mipi_dsi_host *host,
+ const struct mipi_dsi_msg *msg)
+ {
+ struct mtk_dsi *dsi = host_to_dsi(host);
+- u32 recv_cnt, i;
++ ssize_t recv_cnt;
+ u8 read_data[16];
+ void *src_addr;
+ u8 irq_flag = CMD_DONE_INT_FLAG;
+ u32 dsi_mode;
+- int ret;
++ int ret, i;
+
+ dsi_mode = readl(dsi->regs + DSI_MODE_CTRL);
+ if (dsi_mode & MODE) {
+@@ -1162,7 +1162,7 @@ static ssize_t mtk_dsi_host_transfer(struct mipi_dsi_host *host,
+ if (recv_cnt)
+ memcpy(msg->rx_buf, src_addr, recv_cnt);
+
+- DRM_INFO("dsi get %d byte data from the panel address(0x%x)\n",
++ DRM_INFO("dsi get %zd byte data from the panel address(0x%x)\n",
+ recv_cnt, *((u8 *)(msg->tx_buf)));
+
+ restore_dsi_mode:
+diff --git a/drivers/gpu/drm/mediatek/mtk_hdmi.c b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+index 7687f673964ec7..1aad8e6cf52e75 100644
+--- a/drivers/gpu/drm/mediatek/mtk_hdmi.c
++++ b/drivers/gpu/drm/mediatek/mtk_hdmi.c
+@@ -137,7 +137,7 @@ enum hdmi_aud_channel_swap_type {
+
+ struct hdmi_audio_param {
+ enum hdmi_audio_coding_type aud_codec;
+- enum hdmi_audio_sample_size aud_sampe_size;
++ enum hdmi_audio_sample_size aud_sample_size;
+ enum hdmi_aud_input_type aud_input_type;
+ enum hdmi_aud_i2s_fmt aud_i2s_fmt;
+ enum hdmi_aud_mclk aud_mclk;
+@@ -173,6 +173,7 @@ struct mtk_hdmi {
+ unsigned int sys_offset;
+ void __iomem *regs;
+ enum hdmi_colorspace csp;
++ struct platform_device *audio_pdev;
+ struct hdmi_audio_param aud_param;
+ bool audio_enable;
+ bool powered;
+@@ -1074,7 +1075,7 @@ static int mtk_hdmi_output_init(struct mtk_hdmi *hdmi)
+
+ hdmi->csp = HDMI_COLORSPACE_RGB;
+ aud_param->aud_codec = HDMI_AUDIO_CODING_TYPE_PCM;
+- aud_param->aud_sampe_size = HDMI_AUDIO_SAMPLE_SIZE_16;
++ aud_param->aud_sample_size = HDMI_AUDIO_SAMPLE_SIZE_16;
+ aud_param->aud_input_type = HDMI_AUD_INPUT_I2S;
+ aud_param->aud_i2s_fmt = HDMI_I2S_MODE_I2S_24BIT;
+ aud_param->aud_mclk = HDMI_AUD_MCLK_128FS;
+@@ -1572,14 +1573,14 @@ static int mtk_hdmi_audio_hw_params(struct device *dev, void *data,
+ switch (daifmt->fmt) {
+ case HDMI_I2S:
+ hdmi_params.aud_codec = HDMI_AUDIO_CODING_TYPE_PCM;
+- hdmi_params.aud_sampe_size = HDMI_AUDIO_SAMPLE_SIZE_16;
++ hdmi_params.aud_sample_size = HDMI_AUDIO_SAMPLE_SIZE_16;
+ hdmi_params.aud_input_type = HDMI_AUD_INPUT_I2S;
+ hdmi_params.aud_i2s_fmt = HDMI_I2S_MODE_I2S_24BIT;
+ hdmi_params.aud_mclk = HDMI_AUD_MCLK_128FS;
+ break;
+ case HDMI_SPDIF:
+ hdmi_params.aud_codec = HDMI_AUDIO_CODING_TYPE_PCM;
+- hdmi_params.aud_sampe_size = HDMI_AUDIO_SAMPLE_SIZE_16;
++ hdmi_params.aud_sample_size = HDMI_AUDIO_SAMPLE_SIZE_16;
+ hdmi_params.aud_input_type = HDMI_AUD_INPUT_SPDIF;
+ break;
+ default:
+@@ -1663,6 +1664,11 @@ static const struct hdmi_codec_ops mtk_hdmi_audio_codec_ops = {
+ .no_capture_mute = 1,
+ };
+
++static void mtk_hdmi_unregister_audio_driver(void *data)
++{
++ platform_device_unregister(data);
++}
++
+ static int mtk_hdmi_register_audio_driver(struct device *dev)
+ {
+ struct mtk_hdmi *hdmi = dev_get_drvdata(dev);
+@@ -1672,13 +1678,20 @@ static int mtk_hdmi_register_audio_driver(struct device *dev)
+ .i2s = 1,
+ .data = hdmi,
+ };
+- struct platform_device *pdev;
++ int ret;
+
+- pdev = platform_device_register_data(dev, HDMI_CODEC_DRV_NAME,
+- PLATFORM_DEVID_AUTO, &codec_data,
+- sizeof(codec_data));
+- if (IS_ERR(pdev))
+- return PTR_ERR(pdev);
++ hdmi->audio_pdev = platform_device_register_data(dev,
++ HDMI_CODEC_DRV_NAME,
++ PLATFORM_DEVID_AUTO,
++ &codec_data,
++ sizeof(codec_data));
++ if (IS_ERR(hdmi->audio_pdev))
++ return PTR_ERR(hdmi->audio_pdev);
++
++ ret = devm_add_action_or_reset(dev, mtk_hdmi_unregister_audio_driver,
++ hdmi->audio_pdev);
++ if (ret)
++ return ret;
+
+ DRM_INFO("%s driver bound to HDMI\n", HDMI_CODEC_DRV_NAME);
+ return 0;
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+index 0fcae53c0b140b..159665cb6b14f9 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu_state.c
+@@ -1507,6 +1507,8 @@ static void a6xx_get_indexed_registers(struct msm_gpu *gpu,
+
+ /* Restore the size in the hardware */
+ gpu_write(gpu, REG_A6XX_CP_MEM_POOL_SIZE, mempool_size);
++
++ a6xx_state->nr_indexed_regs = count;
+ }
+
+ static void a7xx_get_indexed_registers(struct msm_gpu *gpu,
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+index db6c57900781d9..ecd595215a6bea 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_crtc.c
+@@ -1191,10 +1191,6 @@ static int dpu_crtc_atomic_check(struct drm_crtc *crtc,
+
+ DRM_DEBUG_ATOMIC("%s: check\n", dpu_crtc->name);
+
+- /* force a full mode set if active state changed */
+- if (crtc_state->active_changed)
+- crtc_state->mode_changed = true;
+-
+ if (cstate->num_mixers) {
+ rc = _dpu_crtc_check_and_setup_lm_bounds(crtc, crtc_state);
+ if (rc)
+diff --git a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+index 2cf8150adf81ff..47b514c89ce667 100644
+--- a/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
++++ b/drivers/gpu/drm/msm/disp/dpu1/dpu_encoder.c
+@@ -718,12 +718,11 @@ static int dpu_encoder_virt_atomic_check(
+ crtc_state->mode_changed = true;
+ /*
+ * Release and Allocate resources on every modeset
+- * Dont allocate when active is false.
+ */
+ if (drm_atomic_crtc_needs_modeset(crtc_state)) {
+ dpu_rm_release(global_state, drm_enc);
+
+- if (!crtc_state->active_changed || crtc_state->enable)
++ if (crtc_state->enable)
+ ret = dpu_rm_reserve(&dpu_kms->rm, global_state,
+ drm_enc, crtc_state, topology);
+ if (!ret)
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index a98d24b7cb00b4..7459fb8c517746 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -846,7 +846,7 @@ static void dsi_ctrl_enable(struct msm_dsi_host *msm_host,
+ dsi_write(msm_host, REG_DSI_CPHY_MODE_CTRL, BIT(0));
+ }
+
+-static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mode, u32 hdisplay)
++static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mode)
+ {
+ struct drm_dsc_config *dsc = msm_host->dsc;
+ u32 reg, reg_ctrl, reg_ctrl2;
+@@ -858,7 +858,7 @@ static void dsi_update_dsc_timing(struct msm_dsi_host *msm_host, bool is_cmd_mod
+ /* first calculate dsc parameters and then program
+ * compress mode registers
+ */
+- slice_per_intf = msm_dsc_get_slices_per_intf(dsc, hdisplay);
++ slice_per_intf = dsc->slice_count;
+
+ total_bytes_per_intf = dsc->slice_chunk_size * slice_per_intf;
+ bytes_per_pkt = dsc->slice_chunk_size; /* * slice_per_pkt; */
+@@ -991,7 +991,7 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)
+
+ if (msm_host->mode_flags & MIPI_DSI_MODE_VIDEO) {
+ if (msm_host->dsc)
+- dsi_update_dsc_timing(msm_host, false, mode->hdisplay);
++ dsi_update_dsc_timing(msm_host, false);
+
+ dsi_write(msm_host, REG_DSI_ACTIVE_H,
+ DSI_ACTIVE_H_START(ha_start) |
+@@ -1012,7 +1012,7 @@ static void dsi_timing_setup(struct msm_dsi_host *msm_host, bool is_bonded_dsi)
+ DSI_ACTIVE_VSYNC_VPOS_END(vs_end));
+ } else { /* command mode */
+ if (msm_host->dsc)
+- dsi_update_dsc_timing(msm_host, true, mode->hdisplay);
++ dsi_update_dsc_timing(msm_host, true);
+
+ /* image data and 1 byte write_memory_start cmd */
+ if (!msm_host->dsc)
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_manager.c b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+index a210b7c9e5ca28..4fabb01345aa2a 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_manager.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_manager.c
+@@ -74,17 +74,35 @@ static int dsi_mgr_setup_components(int id)
+ int ret;
+
+ if (!IS_BONDED_DSI()) {
++ /*
++ * Set the usecase before calling msm_dsi_host_register(), which would
++ * already program the PLL source mux based on a default usecase.
++ */
++ msm_dsi_phy_set_usecase(msm_dsi->phy, MSM_DSI_PHY_STANDALONE);
++ msm_dsi_host_set_phy_mode(msm_dsi->host, msm_dsi->phy);
++
+ ret = msm_dsi_host_register(msm_dsi->host);
+ if (ret)
+ return ret;
+-
+- msm_dsi_phy_set_usecase(msm_dsi->phy, MSM_DSI_PHY_STANDALONE);
+- msm_dsi_host_set_phy_mode(msm_dsi->host, msm_dsi->phy);
+ } else if (other_dsi) {
+ struct msm_dsi *master_link_dsi = IS_MASTER_DSI_LINK(id) ?
+ msm_dsi : other_dsi;
+ struct msm_dsi *slave_link_dsi = IS_MASTER_DSI_LINK(id) ?
+ other_dsi : msm_dsi;
++
++ /*
++ * PLL0 is to drive both DSI link clocks in bonded DSI mode.
++ *
++ * Set the usecase before calling msm_dsi_host_register(), which would
++ * already program the PLL source mux based on a default usecase.
++ */
++ msm_dsi_phy_set_usecase(clk_master_dsi->phy,
++ MSM_DSI_PHY_MASTER);
++ msm_dsi_phy_set_usecase(clk_slave_dsi->phy,
++ MSM_DSI_PHY_SLAVE);
++ msm_dsi_host_set_phy_mode(msm_dsi->host, msm_dsi->phy);
++ msm_dsi_host_set_phy_mode(other_dsi->host, other_dsi->phy);
++
+ /* Register slave host first, so that slave DSI device
+ * has a chance to probe, and do not block the master
+ * DSI device's probe.
+@@ -98,14 +116,6 @@ static int dsi_mgr_setup_components(int id)
+ ret = msm_dsi_host_register(master_link_dsi->host);
+ if (ret)
+ return ret;
+-
+- /* PLL0 is to drive both 2 DSI link clocks in bonded DSI mode. */
+- msm_dsi_phy_set_usecase(clk_master_dsi->phy,
+- MSM_DSI_PHY_MASTER);
+- msm_dsi_phy_set_usecase(clk_slave_dsi->phy,
+- MSM_DSI_PHY_SLAVE);
+- msm_dsi_host_set_phy_mode(msm_dsi->host, msm_dsi->phy);
+- msm_dsi_host_set_phy_mode(other_dsi->host, other_dsi->phy);
+ }
+
+ return 0;
+diff --git a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+index 798168180c1ab6..a2c87c84aa05b8 100644
+--- a/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
++++ b/drivers/gpu/drm/msm/dsi/phy/dsi_phy_7nm.c
+@@ -305,7 +305,7 @@ static void dsi_pll_commit(struct dsi_pll_7nm *pll, struct dsi_pll_config *confi
+ writel(pll->phy->cphy_mode ? 0x00 : 0x10,
+ base + REG_DSI_7nm_PHY_PLL_CMODE_1);
+ writel(config->pll_clock_inverters,
+- base + REG_DSI_7nm_PHY_PLL_CLOCK_INVERTERS);
++ base + REG_DSI_7nm_PHY_PLL_CLOCK_INVERTERS_1);
+ }
+
+ static int dsi_pll_7nm_vco_set_rate(struct clk_hw *hw, unsigned long rate,
+diff --git a/drivers/gpu/drm/msm/msm_dsc_helper.h b/drivers/gpu/drm/msm/msm_dsc_helper.h
+index b9049fe1e27907..63f95523b2cbb4 100644
+--- a/drivers/gpu/drm/msm/msm_dsc_helper.h
++++ b/drivers/gpu/drm/msm/msm_dsc_helper.h
+@@ -12,17 +12,6 @@
+ #include <linux/math.h>
+ #include <drm/display/drm_dsc_helper.h>
+
+-/**
+- * msm_dsc_get_slices_per_intf() - calculate number of slices per interface
+- * @dsc: Pointer to drm dsc config struct
+- * @intf_width: interface width in pixels
+- * Returns: Integer representing the number of slices for the given interface
+- */
+-static inline u32 msm_dsc_get_slices_per_intf(const struct drm_dsc_config *dsc, u32 intf_width)
+-{
+- return DIV_ROUND_UP(intf_width, dsc->slice_width);
+-}
+-
+ /**
+ * msm_dsc_get_bytes_per_line() - calculate bytes per line
+ * @dsc: Pointer to drm dsc config struct
+diff --git a/drivers/gpu/drm/panel/panel-ilitek-ili9882t.c b/drivers/gpu/drm/panel/panel-ilitek-ili9882t.c
+index 266a087fe14c13..3c24a63b6be8c7 100644
+--- a/drivers/gpu/drm/panel/panel-ilitek-ili9882t.c
++++ b/drivers/gpu/drm/panel/panel-ilitek-ili9882t.c
+@@ -607,7 +607,7 @@ static int ili9882t_add(struct ili9882t *ili)
+
+ ili->enable_gpio = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW);
+ if (IS_ERR(ili->enable_gpio)) {
+- dev_err(dev, "cannot get reset-gpios %ld\n",
++ dev_err(dev, "cannot get enable-gpios %ld\n",
+ PTR_ERR(ili->enable_gpio));
+ return PTR_ERR(ili->enable_gpio);
+ }
+diff --git a/drivers/gpu/drm/panthor/panthor_fw.h b/drivers/gpu/drm/panthor/panthor_fw.h
+index 22448abde99232..6598d96c6d2aab 100644
+--- a/drivers/gpu/drm/panthor/panthor_fw.h
++++ b/drivers/gpu/drm/panthor/panthor_fw.h
+@@ -102,9 +102,9 @@ struct panthor_fw_cs_output_iface {
+ #define CS_STATUS_BLOCKED_REASON_SB_WAIT 1
+ #define CS_STATUS_BLOCKED_REASON_PROGRESS_WAIT 2
+ #define CS_STATUS_BLOCKED_REASON_SYNC_WAIT 3
+-#define CS_STATUS_BLOCKED_REASON_DEFERRED 5
+-#define CS_STATUS_BLOCKED_REASON_RES 6
+-#define CS_STATUS_BLOCKED_REASON_FLUSH 7
++#define CS_STATUS_BLOCKED_REASON_DEFERRED 4
++#define CS_STATUS_BLOCKED_REASON_RESOURCE 5
++#define CS_STATUS_BLOCKED_REASON_FLUSH 6
+ #define CS_STATUS_BLOCKED_REASON_MASK GENMASK(3, 0)
+ u32 status_blocked_reason;
+ u32 status_wait_sync_value_hi;
+diff --git a/drivers/gpu/drm/solomon/ssd130x-spi.c b/drivers/gpu/drm/solomon/ssd130x-spi.c
+index 84bfde31d1724a..fd1b858dcb788e 100644
+--- a/drivers/gpu/drm/solomon/ssd130x-spi.c
++++ b/drivers/gpu/drm/solomon/ssd130x-spi.c
+@@ -151,7 +151,6 @@ static const struct of_device_id ssd130x_of_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, ssd130x_of_match);
+
+-#if IS_MODULE(CONFIG_DRM_SSD130X_SPI)
+ /*
+ * The SPI core always reports a MODALIAS uevent of the form "spi:<dev>", even
+ * if the device was registered via OF. This means that the module will not be
+@@ -160,7 +159,7 @@ MODULE_DEVICE_TABLE(of, ssd130x_of_match);
+ * To workaround this issue, add a SPI device ID table. Even when this should
+ * not be needed for this driver to match the registered SPI devices.
+ */
+-static const struct spi_device_id ssd130x_spi_table[] = {
++static const struct spi_device_id ssd130x_spi_id[] = {
+ /* ssd130x family */
+ { "sh1106", SH1106_ID },
+ { "ssd1305", SSD1305_ID },
+@@ -175,14 +174,14 @@ static const struct spi_device_id ssd130x_spi_table[] = {
+ { "ssd1331", SSD1331_ID },
+ { /* sentinel */ }
+ };
+-MODULE_DEVICE_TABLE(spi, ssd130x_spi_table);
+-#endif
++MODULE_DEVICE_TABLE(spi, ssd130x_spi_id);
+
+ static struct spi_driver ssd130x_spi_driver = {
+ .driver = {
+ .name = DRIVER_NAME,
+ .of_match_table = ssd130x_of_match,
+ },
++ .id_table = ssd130x_spi_id,
+ .probe = ssd130x_spi_probe,
+ .remove = ssd130x_spi_remove,
+ .shutdown = ssd130x_spi_shutdown,
+diff --git a/drivers/gpu/drm/solomon/ssd130x.c b/drivers/gpu/drm/solomon/ssd130x.c
+index 6f51bcf774e27c..06f5057690bd87 100644
+--- a/drivers/gpu/drm/solomon/ssd130x.c
++++ b/drivers/gpu/drm/solomon/ssd130x.c
+@@ -880,7 +880,7 @@ static int ssd132x_update_rect(struct ssd130x_device *ssd130x,
+ u8 n1 = buf[i * width + j];
+ u8 n2 = buf[i * width + j + 1];
+
+- data_array[array_idx++] = (n2 << 4) | n1;
++ data_array[array_idx++] = (n2 & 0xf0) | (n1 >> 4);
+ }
+ }
+
+@@ -1037,7 +1037,7 @@ static int ssd132x_fb_blit_rect(struct drm_framebuffer *fb,
+ struct drm_format_conv_state *fmtcnv_state)
+ {
+ struct ssd130x_device *ssd130x = drm_to_ssd130x(fb->dev);
+- unsigned int dst_pitch = drm_rect_width(rect);
++ unsigned int dst_pitch;
+ struct iosys_map dst;
+ int ret = 0;
+
+@@ -1046,6 +1046,8 @@ static int ssd132x_fb_blit_rect(struct drm_framebuffer *fb,
+ rect->x2 = min_t(unsigned int, round_up(rect->x2, SSD132X_SEGMENT_WIDTH),
+ ssd130x->width);
+
++ dst_pitch = drm_rect_width(rect);
++
+ ret = drm_gem_fb_begin_cpu_access(fb, DMA_FROM_DEVICE);
+ if (ret)
+ return ret;
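
The ssd132x packing fix above reflects that the destination buffer holds two
4-bit pixels per byte: each 8-bit grayscale sample is reduced to its most
significant nibble, with the second pixel in the high nibble. The old
"(n2 << 4) | n1" kept the wrong nibbles. A runnable demo:

#include <stdio.h>

static unsigned char pack(unsigned char n1, unsigned char n2)
{
	return (n2 & 0xf0) | (n1 >> 4);
}

int main(void)
{
	/* 0xff (white) and 0x00 (black) -> 0x0f: pixel 1 full, pixel 2 off */
	printf("0x%02x\n", pack(0xff, 0x00));
	return 0;
}
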
+diff --git a/drivers/gpu/drm/vkms/vkms_drv.c b/drivers/gpu/drm/vkms/vkms_drv.c
+index 0c1a713b7b7b3b..be642ee739c4fb 100644
+--- a/drivers/gpu/drm/vkms/vkms_drv.c
++++ b/drivers/gpu/drm/vkms/vkms_drv.c
+@@ -245,17 +245,19 @@ static int __init vkms_init(void)
+ if (!config)
+ return -ENOMEM;
+
+- default_config = config;
+-
+ config->cursor = enable_cursor;
+ config->writeback = enable_writeback;
+ config->overlay = enable_overlay;
+
+ ret = vkms_create(config);
+- if (ret)
++ if (ret) {
+ kfree(config);
++ return ret;
++ }
+
+- return ret;
++ default_config = config;
++
++ return 0;
+ }
+
+ static void vkms_destroy(struct vkms_config *config)
+@@ -279,9 +281,10 @@ static void vkms_destroy(struct vkms_config *config)
+
+ static void __exit vkms_exit(void)
+ {
+- if (default_config->dev)
+- vkms_destroy(default_config);
++ if (!default_config)
++ return;
+
++ vkms_destroy(default_config);
+ kfree(default_config);
+ }
+
+diff --git a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
+index f5781939de9c35..a25b22238e3d2f 100644
+--- a/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
++++ b/drivers/gpu/drm/xlnx/zynqmp_dpsub.c
+@@ -231,6 +231,8 @@ static int zynqmp_dpsub_probe(struct platform_device *pdev)
+ if (ret)
+ return ret;
+
++ dma_set_max_seg_size(&pdev->dev, DMA_BIT_MASK(32));
++
+ /* Try the reserved memory. Proceed if there's none. */
+ of_reserved_mem_device_init(&pdev->dev);
+
+diff --git a/drivers/greybus/gb-beagleplay.c b/drivers/greybus/gb-beagleplay.c
+index 473ac3f2d38219..da31f1131afcab 100644
+--- a/drivers/greybus/gb-beagleplay.c
++++ b/drivers/greybus/gb-beagleplay.c
+@@ -912,7 +912,9 @@ static enum fw_upload_err cc1352_prepare(struct fw_upload *fw_upload,
+ cc1352_bootloader_reset(bg);
+ WRITE_ONCE(bg->flashing_mode, false);
+ msleep(200);
+- gb_greybus_init(bg);
++ if (gb_greybus_init(bg) < 0)
++ return dev_err_probe(&bg->sd->dev, FW_UPLOAD_ERR_RW_ERROR,
++ "Failed to initialize greybus");
+ gb_beagleplay_start_svc(bg);
+ return FW_UPLOAD_ERR_FW_INVALID;
+ }
+diff --git a/drivers/hid/Makefile b/drivers/hid/Makefile
+index 496dab54c73a82..f2900ee2ef8582 100644
+--- a/drivers/hid/Makefile
++++ b/drivers/hid/Makefile
+@@ -165,7 +165,6 @@ obj-$(CONFIG_USB_KBD) += usbhid/
+ obj-$(CONFIG_I2C_HID_CORE) += i2c-hid/
+
+ obj-$(CONFIG_INTEL_ISH_HID) += intel-ish-hid/
+-obj-$(INTEL_ISH_FIRMWARE_DOWNLOADER) += intel-ish-hid/
+
+ obj-$(CONFIG_AMD_SFH_HID) += amd-sfh-hid/
+
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 4e87380d3edd6b..bcca89ef73606d 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -284,7 +284,7 @@ static int i2c_hid_get_report(struct i2c_hid *ihid,
+ ihid->rawbuf, recv_len + sizeof(__le16));
+ if (error) {
+ dev_err(&ihid->client->dev,
+- "failed to set a report to device: %d\n", error);
++ "failed to get a report from device: %d\n", error);
+ return error;
+ }
+
+diff --git a/drivers/hwmon/nct6775-core.c b/drivers/hwmon/nct6775-core.c
+index fa3351351825b7..79bc67ffb9986f 100644
+--- a/drivers/hwmon/nct6775-core.c
++++ b/drivers/hwmon/nct6775-core.c
+@@ -273,8 +273,8 @@ static const s8 NCT6776_BEEP_BITS[NUM_BEEP_BITS] = {
+ static const u16 NCT6776_REG_TOLERANCE_H[] = {
+ 0x10c, 0x20c, 0x30c, 0x80c, 0x90c, 0xa0c, 0xb0c };
+
+-static const u8 NCT6776_REG_PWM_MODE[] = { 0x04, 0, 0, 0, 0, 0 };
+-static const u8 NCT6776_PWM_MODE_MASK[] = { 0x01, 0, 0, 0, 0, 0 };
++static const u8 NCT6776_REG_PWM_MODE[] = { 0x04, 0, 0, 0, 0, 0, 0 };
++static const u8 NCT6776_PWM_MODE_MASK[] = { 0x01, 0, 0, 0, 0, 0, 0 };
+
+ static const u16 NCT6776_REG_FAN_MIN[] = {
+ 0x63a, 0x63c, 0x63e, 0x640, 0x642, 0x64a, 0x64c };
+diff --git a/drivers/hwtracing/coresight/coresight-catu.c b/drivers/hwtracing/coresight/coresight-catu.c
+index bfea880d6dfbf1..d8ad64ea81f119 100644
+--- a/drivers/hwtracing/coresight/coresight-catu.c
++++ b/drivers/hwtracing/coresight/coresight-catu.c
+@@ -269,7 +269,7 @@ catu_init_sg_table(struct device *catu_dev, int node,
+ * Each table can address upto 1MB and we can have
+ * CATU_PAGES_PER_SYSPAGE tables in a system page.
+ */
+- nr_tpages = DIV_ROUND_UP(size, SZ_1M) / CATU_PAGES_PER_SYSPAGE;
++ nr_tpages = DIV_ROUND_UP(size, CATU_PAGES_PER_SYSPAGE * SZ_1M);
+ catu_table = tmc_alloc_sg_table(catu_dev, node, nr_tpages,
+ size >> PAGE_SHIFT, pages);
+ if (IS_ERR(catu_table))
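
The catu rounding fix above matters for small buffers: rounding up per 1MB
and then dividing truncates back down to zero, while rounding up against the
combined divisor guarantees at least one table page. A worked example (the
CATU_PAGES_PER_SYSPAGE value here is illustrative):

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))
#define SZ_1M (1024 * 1024)
#define CATU_PAGES_PER_SYSPAGE 4	/* example value, e.g. 16K pages */

int main(void)
{
	unsigned long size = SZ_1M;	/* one table's worth of buffer */

	/* old expression: truncates to 0 table pages */
	printf("old: %lu\n", DIV_ROUND_UP(size, SZ_1M) / CATU_PAGES_PER_SYSPAGE);
	/* fixed expression: rounds up to 1 */
	printf("new: %lu\n", DIV_ROUND_UP(size, CATU_PAGES_PER_SYSPAGE * SZ_1M));
	return 0;
}
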
+diff --git a/drivers/hwtracing/coresight/coresight-core.c b/drivers/hwtracing/coresight/coresight-core.c
+index ea38ecf26fcbfb..c42aa9fddab9b7 100644
+--- a/drivers/hwtracing/coresight/coresight-core.c
++++ b/drivers/hwtracing/coresight/coresight-core.c
+@@ -1017,18 +1017,20 @@ static void coresight_remove_conns(struct coresight_device *csdev)
+ }
+
+ /**
+- * coresight_timeout - loop until a bit has changed to a specific register
+- * state.
++ * coresight_timeout_action - loop until a bit has changed to a specific register
++ * state, with a callback after every trial.
+ * @csa: coresight device access for the device
+ * @offset: Offset of the register from the base of the device.
+ * @position: the position of the bit of interest.
+ * @value: the value the bit should have.
++ * @cb: Call back after each trial.
+ *
+ * Return: 0 as soon as the bit has taken the desired state or -EAGAIN if
+ * TIMEOUT_US has elapsed, which ever happens first.
+ */
+-int coresight_timeout(struct csdev_access *csa, u32 offset,
+- int position, int value)
++int coresight_timeout_action(struct csdev_access *csa, u32 offset,
++ int position, int value,
++ coresight_timeout_cb_t cb)
+ {
+ int i;
+ u32 val;
+@@ -1044,7 +1046,8 @@ int coresight_timeout(struct csdev_access *csa, u32 offset,
+ if (!(val & BIT(position)))
+ return 0;
+ }
+-
++ if (cb)
++ cb(csa, offset, position, value);
+ /*
+ * Delay is arbitrary - the specification doesn't say how long
+ * we are expected to wait. Extra check required to make sure
+@@ -1056,6 +1059,13 @@ int coresight_timeout(struct csdev_access *csa, u32 offset,
+
+ return -EAGAIN;
+ }
++EXPORT_SYMBOL_GPL(coresight_timeout_action);
++
++int coresight_timeout(struct csdev_access *csa, u32 offset,
++ int position, int value)
++{
++ return coresight_timeout_action(csa, offset, position, value, NULL);
++}
+ EXPORT_SYMBOL_GPL(coresight_timeout);
+
+ u32 coresight_relaxed_read32(struct coresight_device *csdev, u32 offset)
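
coresight_timeout_action() above generalizes the poll loop with an optional
per-trial callback, so callers that need a barrier between reads (the ETM4x
system-instruction case) can inject one without duplicating the loop. A
runnable user-space sketch of the same structure, with a printf standing in
for the barrier:

#include <stdio.h>

typedef void (*poll_cb_t)(void);

static unsigned int fake_reg;

static int poll_bit(int position, int value, poll_cb_t cb)
{
	for (int i = 0; i < 100; i++) {
		unsigned int val = fake_reg;

		if (!!(val & (1u << position)) == !!value)
			return 0;
		if (cb)
			cb();		/* per-trial hook, e.g. an isb() */
		if (i == 2)
			fake_reg |= 1u << position;	/* simulate hardware */
	}
	return -11;	/* pretend -EAGAIN */
}

static void barrier_cb(void)
{
	puts("barrier between polls");
}

int main(void)
{
	return poll_bit(3, 1, barrier_cb);
}
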
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+index 66d44a404ad0cd..be8b46f26ddc83 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+@@ -399,6 +399,29 @@ static void etm4_check_arch_features(struct etmv4_drvdata *drvdata,
+ }
+ #endif /* CONFIG_ETM4X_IMPDEF_FEATURE */
+
++static void etm4x_sys_ins_barrier(struct csdev_access *csa, u32 offset, int pos, int val)
++{
++ if (!csa->io_mem)
++ isb();
++}
++
++/*
++ * etm4x_wait_status: Poll for TRCSTATR.<pos> == <val>. While using system
++ * instruction to access the trace unit, each access must be separated by a
++ * synchronization barrier. See ARM IHI0064H.b section "4.3.7 Synchronization of
++ * register updates", for system instructions section, in "Notes":
++ *
++ * "In particular, whenever disabling or enabling the trace unit, a poll of
++ * TRCSTATR needs explicit synchronization between each read of TRCSTATR"
++ */
++static int etm4x_wait_status(struct csdev_access *csa, int pos, int val)
++{
++ if (!csa->io_mem)
++ return coresight_timeout_action(csa, TRCSTATR, pos, val,
++ etm4x_sys_ins_barrier);
++ return coresight_timeout(csa, TRCSTATR, pos, val);
++}
++
+ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
+ {
+ int i, rc;
+@@ -430,7 +453,7 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
+ isb();
+
+ /* wait for TRCSTATR.IDLE to go up */
+- if (coresight_timeout(csa, TRCSTATR, TRCSTATR_IDLE_BIT, 1))
++ if (etm4x_wait_status(csa, TRCSTATR_IDLE_BIT, 1))
+ dev_err(etm_dev,
+ "timeout while waiting for Idle Trace Status\n");
+ if (drvdata->nr_pe)
+@@ -523,7 +546,7 @@ static int etm4_enable_hw(struct etmv4_drvdata *drvdata)
+ isb();
+
+ /* wait for TRCSTATR.IDLE to go back down to '0' */
+- if (coresight_timeout(csa, TRCSTATR, TRCSTATR_IDLE_BIT, 0))
++ if (etm4x_wait_status(csa, TRCSTATR_IDLE_BIT, 0))
+ dev_err(etm_dev,
+ "timeout while waiting for Idle Trace Status\n");
+
+@@ -906,10 +929,25 @@ static void etm4_disable_hw(void *info)
+ tsb_csync();
+ etm4x_relaxed_write32(csa, control, TRCPRGCTLR);
+
++ /*
++ * As recommended by section 4.3.7 ("Synchronization when using system
++	 * instructions to program the trace unit") of ARM IHI 0064H.b, the
++ * self-hosted trace analyzer must perform a Context synchronization
++ * event between writing to the TRCPRGCTLR and reading the TRCSTATR.
++ */
++ if (!csa->io_mem)
++ isb();
++
+ /* wait for TRCSTATR.PMSTABLE to go to '1' */
+- if (coresight_timeout(csa, TRCSTATR, TRCSTATR_PMSTABLE_BIT, 1))
++ if (etm4x_wait_status(csa, TRCSTATR_PMSTABLE_BIT, 1))
+ dev_err(etm_dev,
+ "timeout while waiting for PM stable Trace Status\n");
++ /*
++ * As recommended by section 4.3.7 (Synchronization of register updates)
++ * of ARM IHI 0064H.b.
++ */
++ isb();
++
+ /* read the status of the single shot comparators */
+ for (i = 0; i < drvdata->nr_ss_cmp; i++) {
+ config->ss_status[i] =
+@@ -1711,7 +1749,7 @@ static int __etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ etm4_os_lock(drvdata);
+
+ /* wait for TRCSTATR.PMSTABLE to go up */
+- if (coresight_timeout(csa, TRCSTATR, TRCSTATR_PMSTABLE_BIT, 1)) {
++ if (etm4x_wait_status(csa, TRCSTATR_PMSTABLE_BIT, 1)) {
+ dev_err(etm_dev,
+ "timeout while waiting for PM Stable Status\n");
+ etm4_os_unlock(drvdata);
+@@ -1802,7 +1840,7 @@ static int __etm4_cpu_save(struct etmv4_drvdata *drvdata)
+ state->trcpdcr = etm4x_read32(csa, TRCPDCR);
+
+ /* wait for TRCSTATR.IDLE to go up */
+- if (coresight_timeout(csa, TRCSTATR, TRCSTATR_IDLE_BIT, 1)) {
++ if (etm4x_wait_status(csa, TRCSTATR_PMSTABLE_BIT, 1)) {
+ dev_err(etm_dev,
+ "timeout while waiting for Idle Trace Status\n");
+ etm4_os_unlock(drvdata);
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index 565af3759813bd..87f98fa8afd582 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -990,7 +990,7 @@ static int svc_i3c_update_ibirules(struct svc_i3c_master *master)
+
+ /* Create the IBIRULES register for both cases */
+ i3c_bus_for_each_i3cdev(&master->base.bus, dev) {
+- if (I3C_BCR_DEVICE_ROLE(dev->info.bcr) == I3C_BCR_I3C_MASTER)
++ if (!(dev->info.bcr & I3C_BCR_IBI_REQ_CAP))
+ continue;
+
+ if (dev->info.bcr & I3C_BCR_IBI_PAYLOAD) {
+diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c
+index 62e6369e22696c..de207526babee2 100644
+--- a/drivers/iio/accel/mma8452.c
++++ b/drivers/iio/accel/mma8452.c
+@@ -711,7 +711,7 @@ static int mma8452_write_raw(struct iio_dev *indio_dev,
+ int val, int val2, long mask)
+ {
+ struct mma8452_data *data = iio_priv(indio_dev);
+- int i, ret;
++ int i, j, ret;
+
+ ret = iio_device_claim_direct_mode(indio_dev);
+ if (ret)
+@@ -771,14 +771,18 @@ static int mma8452_write_raw(struct iio_dev *indio_dev,
+ break;
+
+ case IIO_CHAN_INFO_OVERSAMPLING_RATIO:
+- ret = mma8452_get_odr_index(data);
++ j = mma8452_get_odr_index(data);
+
+ for (i = 0; i < ARRAY_SIZE(mma8452_os_ratio); i++) {
+- if (mma8452_os_ratio[i][ret] == val) {
++ if (mma8452_os_ratio[i][j] == val) {
+ ret = mma8452_set_power_mode(data, i);
+ break;
+ }
+ }
++ if (i == ARRAY_SIZE(mma8452_os_ratio)) {
++ ret = -EINVAL;
++ break;
++ }
+ break;
+ default:
+ ret = -EINVAL;
+diff --git a/drivers/iio/accel/msa311.c b/drivers/iio/accel/msa311.c
+index 57025354c7cd58..f484be27058d98 100644
+--- a/drivers/iio/accel/msa311.c
++++ b/drivers/iio/accel/msa311.c
+@@ -593,23 +593,25 @@ static int msa311_read_raw_data(struct iio_dev *indio_dev,
+ __le16 axis;
+ int err;
+
+- err = pm_runtime_resume_and_get(dev);
++ err = iio_device_claim_direct_mode(indio_dev);
+ if (err)
+ return err;
+
+- err = iio_device_claim_direct_mode(indio_dev);
+- if (err)
++ err = pm_runtime_resume_and_get(dev);
++ if (err) {
++ iio_device_release_direct_mode(indio_dev);
+ return err;
++ }
+
+ mutex_lock(&msa311->lock);
+ err = msa311_get_axis(msa311, chan, &axis);
+ mutex_unlock(&msa311->lock);
+
+- iio_device_release_direct_mode(indio_dev);
+-
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_put_autosuspend(dev);
+
++ iio_device_release_direct_mode(indio_dev);
++
+ if (err) {
+ dev_err(dev, "can't get axis %s (%pe)\n",
+ chan->datasheet_name, ERR_PTR(err));
+@@ -755,10 +757,6 @@ static int msa311_write_samp_freq(struct iio_dev *indio_dev, int val, int val2)
+ unsigned int odr;
+ int err;
+
+- err = pm_runtime_resume_and_get(dev);
+- if (err)
+- return err;
+-
+ /*
+ * Sampling frequency changing is prohibited when buffer mode is
+ * enabled, because sometimes MSA311 chip returns outliers during
+@@ -768,6 +766,12 @@ static int msa311_write_samp_freq(struct iio_dev *indio_dev, int val, int val2)
+ if (err)
+ return err;
+
++ err = pm_runtime_resume_and_get(dev);
++ if (err) {
++ iio_device_release_direct_mode(indio_dev);
++ return err;
++ }
++
+ err = -EINVAL;
+ for (odr = 0; odr < ARRAY_SIZE(msa311_odr_table); odr++)
+ if (val == msa311_odr_table[odr].integral &&
+@@ -778,11 +782,11 @@ static int msa311_write_samp_freq(struct iio_dev *indio_dev, int val, int val2)
+ break;
+ }
+
+- iio_device_release_direct_mode(indio_dev);
+-
+ pm_runtime_mark_last_busy(dev);
+ pm_runtime_put_autosuspend(dev);
+
++ iio_device_release_direct_mode(indio_dev);
++
+ if (err)
+ dev_err(dev, "can't update frequency (%pe)\n", ERR_PTR(err));
+
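
The msa311 reordering enforces the usual acquire/release discipline: claim direct mode first, resume the device second, release in exactly the reverse order, and unwind the first resource when acquiring the second fails. A sketch with stub helpers standing in for the IIO and runtime-PM calls (all names hypothetical):

#include <stdio.h>

/* Hypothetical stand-ins for iio_device_claim_direct_mode() and
 * pm_runtime_resume_and_get(); they only report what happens. */
static int claim_direct(void)    { puts("claim direct");   return 0; }
static void release_direct(void) { puts("release direct"); }
static int resume_device(void)   { puts("resume device");  return 0; }
static void suspend_device(void) { puts("suspend device"); }

static int read_axis(void)
{
	int err;

	err = claim_direct();		/* acquired first ... */
	if (err)
		return err;

	err = resume_device();
	if (err) {
		release_direct();	/* unwind in reverse order */
		return err;
	}

	/* ... talk to the hardware ... */

	suspend_device();
	release_direct();		/* ... released last */
	return 0;
}

int main(void)
{
	return read_axis();
}
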
+diff --git a/drivers/iio/adc/ad4130.c b/drivers/iio/adc/ad4130.c
+index de32cc9d18c5ef..712f95f53c9ecd 100644
+--- a/drivers/iio/adc/ad4130.c
++++ b/drivers/iio/adc/ad4130.c
+@@ -223,6 +223,10 @@ enum ad4130_pin_function {
+ AD4130_PIN_FN_VBIAS = BIT(3),
+ };
+
++/*
++ * If you make adaptations in this struct, you most likely also have to
++ * adapt ad4130_setup_info_eq().
++ */
+ struct ad4130_setup_info {
+ unsigned int iout0_val;
+ unsigned int iout1_val;
+@@ -591,6 +595,40 @@ static irqreturn_t ad4130_irq_handler(int irq, void *private)
+ return IRQ_HANDLED;
+ }
+
++static bool ad4130_setup_info_eq(struct ad4130_setup_info *a,
++ struct ad4130_setup_info *b)
++{
++ /*
++ * This is just to make sure that the comparison is adapted after
++ * struct ad4130_setup_info was changed.
++ */
++ static_assert(sizeof(*a) ==
++ sizeof(struct {
++ unsigned int iout0_val;
++ unsigned int iout1_val;
++ unsigned int burnout;
++ unsigned int pga;
++ unsigned int fs;
++ u32 ref_sel;
++ enum ad4130_filter_mode filter_mode;
++ bool ref_bufp;
++ bool ref_bufm;
++ }));
++
++ if (a->iout0_val != b->iout0_val ||
++ a->iout1_val != b->iout1_val ||
++ a->burnout != b->burnout ||
++ a->pga != b->pga ||
++ a->fs != b->fs ||
++ a->ref_sel != b->ref_sel ||
++ a->filter_mode != b->filter_mode ||
++ a->ref_bufp != b->ref_bufp ||
++ a->ref_bufm != b->ref_bufm)
++ return false;
++
++ return true;
++}
++
+ static int ad4130_find_slot(struct ad4130_state *st,
+ struct ad4130_setup_info *target_setup_info,
+ unsigned int *slot, bool *overwrite)
+@@ -604,8 +642,7 @@ static int ad4130_find_slot(struct ad4130_state *st,
+ struct ad4130_slot_info *slot_info = &st->slots_info[i];
+
+ /* Immediately accept a matching setup info. */
+- if (!memcmp(target_setup_info, &slot_info->setup,
+- sizeof(*target_setup_info))) {
++ if (ad4130_setup_info_eq(target_setup_info, &slot_info->setup)) {
+ *slot = i;
+ return 0;
+ }
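
The ad4130_setup_info_eq() pattern above merits a standalone illustration: memcmp() over a struct also compares padding bytes, whose contents are indeterminate, so equality is checked field by field, and a static_assert pins the struct layout so the comparison cannot silently go stale when a field is added. A minimal, self-contained sketch of the same idiom (struct and field names invented):

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct setup_info {
	unsigned int gain;
	bool buffered;
};

static bool setup_info_eq(const struct setup_info *a,
			  const struct setup_info *b)
{
	/* Fails to compile once struct setup_info grows a new field. */
	static_assert(sizeof(struct setup_info) ==
		      sizeof(struct { unsigned int gain; bool buffered; }),
		      "adapt setup_info_eq() after changing struct setup_info");

	return a->gain == b->gain && a->buffered == b->buffered;
}

int main(void)
{
	struct setup_info x = { 4, true }, y = { 4, true };

	printf("%d\n", setup_info_eq(&x, &y));	/* prints 1 */
	return 0;
}

The same reasoning drives the ad7124 and ad7173 hunks below, which replace memcmp() over a struct_group() with explicit member comparisons.
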
+diff --git a/drivers/iio/adc/ad7124.c b/drivers/iio/adc/ad7124.c
+index 8d94bc2b1cac35..30a7392c4f8b95 100644
+--- a/drivers/iio/adc/ad7124.c
++++ b/drivers/iio/adc/ad7124.c
+@@ -147,7 +147,11 @@ struct ad7124_chip_info {
+ struct ad7124_channel_config {
+ bool live;
+ unsigned int cfg_slot;
+- /* Following fields are used to compare equality. */
++ /*
++ * Following fields are used to compare for equality. If you
++ * make adaptations in it, you most likely also have to adapt
++ * ad7124_find_similar_live_cfg(), too.
++ */
+ struct_group(config_props,
+ enum ad7124_ref_sel refsel;
+ bool bipolar;
+@@ -334,15 +338,38 @@ static struct ad7124_channel_config *ad7124_find_similar_live_cfg(struct ad7124_
+ struct ad7124_channel_config *cfg)
+ {
+ struct ad7124_channel_config *cfg_aux;
+- ptrdiff_t cmp_size;
+ int i;
+
+- cmp_size = sizeof_field(struct ad7124_channel_config, config_props);
++ /*
++ * This is just to make sure that the comparison is adapted after
++ * struct ad7124_channel_config was changed.
++ */
++ static_assert(sizeof_field(struct ad7124_channel_config, config_props) ==
++ sizeof(struct {
++ enum ad7124_ref_sel refsel;
++ bool bipolar;
++ bool buf_positive;
++ bool buf_negative;
++ unsigned int vref_mv;
++ unsigned int pga_bits;
++ unsigned int odr;
++ unsigned int odr_sel_bits;
++ unsigned int filter_type;
++ }));
++
+ for (i = 0; i < st->num_channels; i++) {
+ cfg_aux = &st->channels[i].cfg;
+
+ if (cfg_aux->live &&
+- !memcmp(&cfg->config_props, &cfg_aux->config_props, cmp_size))
++ cfg->refsel == cfg_aux->refsel &&
++ cfg->bipolar == cfg_aux->bipolar &&
++ cfg->buf_positive == cfg_aux->buf_positive &&
++ cfg->buf_negative == cfg_aux->buf_negative &&
++ cfg->vref_mv == cfg_aux->vref_mv &&
++ cfg->pga_bits == cfg_aux->pga_bits &&
++ cfg->odr == cfg_aux->odr &&
++ cfg->odr_sel_bits == cfg_aux->odr_sel_bits &&
++ cfg->filter_type == cfg_aux->filter_type)
+ return cfg_aux;
+ }
+
+diff --git a/drivers/iio/adc/ad7173.c b/drivers/iio/adc/ad7173.c
+index 5a65be00dd190f..2eebc6f761a632 100644
+--- a/drivers/iio/adc/ad7173.c
++++ b/drivers/iio/adc/ad7173.c
+@@ -181,7 +181,11 @@ struct ad7173_channel_config {
+ u8 cfg_slot;
+ bool live;
+
+- /* Following fields are used to compare equality. */
++ /*
++ * The following fields are used to compare for equality. If you
++ * make adaptations in them, you most likely also have to adapt
++ * ad7173_find_live_config().
++ */
+ struct_group(config_props,
+ bool bipolar;
+ bool input_buf;
+@@ -582,15 +586,28 @@ static struct ad7173_channel_config *
+ ad7173_find_live_config(struct ad7173_state *st, struct ad7173_channel_config *cfg)
+ {
+ struct ad7173_channel_config *cfg_aux;
+- ptrdiff_t cmp_size;
+ int i;
+
+- cmp_size = sizeof_field(struct ad7173_channel_config, config_props);
++ /*
++ * This is just to make sure that the comparison is adapted after
++ * struct ad7173_channel_config was changed.
++ */
++ static_assert(sizeof_field(struct ad7173_channel_config, config_props) ==
++ sizeof(struct {
++ bool bipolar;
++ bool input_buf;
++ u8 odr;
++ u8 ref_sel;
++ }));
++
+ for (i = 0; i < st->num_channels; i++) {
+ cfg_aux = &st->channels[i].cfg;
+
+ if (cfg_aux->live &&
+- !memcmp(&cfg->config_props, &cfg_aux->config_props, cmp_size))
++ cfg->bipolar == cfg_aux->bipolar &&
++ cfg->input_buf == cfg_aux->input_buf &&
++ cfg->odr == cfg_aux->odr &&
++ cfg->ref_sel == cfg_aux->ref_sel)
+ return cfg_aux;
+ }
+ return NULL;
+diff --git a/drivers/iio/adc/ad7768-1.c b/drivers/iio/adc/ad7768-1.c
+index 113703fb724544..6f8816483f1a02 100644
+--- a/drivers/iio/adc/ad7768-1.c
++++ b/drivers/iio/adc/ad7768-1.c
+@@ -574,6 +574,21 @@ static int ad7768_probe(struct spi_device *spi)
+ return -ENOMEM;
+
+ st = iio_priv(indio_dev);
++ /*
++ * The datasheet recommends keeping the SDI line high when data is not
++ * being clocked out of the controller and the SPI clock is free
++ * running, to prevent an accidental reset.
++ * Since many controllers do not yet support the SPI_MOSI_IDLE_HIGH
++ * flag, only request that the MOSI line idle high if the controller
++ * supports it.
++ */
++ if (spi->controller->mode_bits & SPI_MOSI_IDLE_HIGH) {
++ spi->mode |= SPI_MOSI_IDLE_HIGH;
++ ret = spi_setup(spi);
++ if (ret < 0)
++ return ret;
++ }
++
+ st->spi = spi;
+
+ st->vref = devm_regulator_get(&spi->dev, "vref");
+diff --git a/drivers/iio/industrialio-backend.c b/drivers/iio/industrialio-backend.c
+index fb34a8e4d04e74..42e0ee683ef6b2 100644
+--- a/drivers/iio/industrialio-backend.c
++++ b/drivers/iio/industrialio-backend.c
+@@ -155,10 +155,12 @@ static ssize_t iio_backend_debugfs_write_reg(struct file *file,
+ ssize_t rc;
+ int ret;
+
+- rc = simple_write_to_buffer(buf, sizeof(buf), ppos, userbuf, count);
++ rc = simple_write_to_buffer(buf, sizeof(buf) - 1, ppos, userbuf, count);
+ if (rc < 0)
+ return rc;
+
++ buf[count] = '\0';
++
+ ret = sscanf(buf, "%i %i", &back->cached_reg_addr, &val);
+
+ switch (ret) {
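
The backend debugfs fix is the classic reserve-a-byte-for-the-terminator rule: the copy helper writes raw user bytes, and sscanf() requires a NUL-terminated string, so at most sizeof(buf) - 1 bytes may be copied and the terminator written explicitly. A userspace sketch of the same handling (it assumes nothing about the kernel helpers):

#include <stdio.h>
#include <string.h>

int main(void)
{
	const char user_input[] = "0x10 42";	/* raw, untrusted bytes */
	char buf[16];
	size_t count = strlen(user_input);
	int addr = 0, val = 0;

	if (count > sizeof(buf) - 1)	/* leave room for '\0' */
		count = sizeof(buf) - 1;
	memcpy(buf, user_input, count);
	buf[count] = '\0';		/* sscanf() needs a C string */

	if (sscanf(buf, "%i %i", &addr, &val) == 2)
		printf("addr=%#x val=%d\n", addr, val);
	return 0;
}
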
+diff --git a/drivers/iio/light/veml6075.c b/drivers/iio/light/veml6075.c
+index 05d4c0e9015d6e..859891e8f11521 100644
+--- a/drivers/iio/light/veml6075.c
++++ b/drivers/iio/light/veml6075.c
+@@ -195,13 +195,17 @@ static int veml6075_read_uv_direct(struct veml6075_data *data, int chan,
+
+ static int veml6075_read_int_time_index(struct veml6075_data *data)
+ {
+- int ret, conf;
++ int ret, conf, int_index;
+
+ ret = regmap_read(data->regmap, VEML6075_CMD_CONF, &conf);
+ if (ret < 0)
+ return ret;
+
+- return FIELD_GET(VEML6075_CONF_IT, conf);
++ int_index = FIELD_GET(VEML6075_CONF_IT, conf);
++ if (int_index >= ARRAY_SIZE(veml6075_it_ms))
++ return -EINVAL;
++
++ return int_index;
+ }
+
+ static int veml6075_read_int_time_ms(struct veml6075_data *data, int *val)
+diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
+index e029401b56805f..46102f179955ba 100644
+--- a/drivers/infiniband/core/device.c
++++ b/drivers/infiniband/core/device.c
+@@ -544,6 +544,8 @@ static struct class ib_class = {
+ static void rdma_init_coredev(struct ib_core_device *coredev,
+ struct ib_device *dev, struct net *net)
+ {
++ bool is_full_dev = &dev->coredev == coredev;
++
+ /* This BUILD_BUG_ON is intended to catch layout change
+ * of union of ib_core_device and device.
+ * dev must be the first element as ib_core and providers
+@@ -555,6 +557,13 @@ static void rdma_init_coredev(struct ib_core_device *coredev,
+
+ coredev->dev.class = &ib_class;
+ coredev->dev.groups = dev->groups;
++
++ /*
++ * Don't expose hw counters outside of the init namespace.
++ */
++ if (!is_full_dev && dev->hw_stats_attr_index)
++ coredev->dev.groups[dev->hw_stats_attr_index] = NULL;
++
+ device_initialize(&coredev->dev);
+ coredev->owner = dev;
+ INIT_LIST_HEAD(&coredev->port_list);
+@@ -1357,9 +1366,11 @@ static void ib_device_notify_register(struct ib_device *device)
+ u32 port;
+ int ret;
+
++ down_read(&devices_rwsem);
++
+ ret = rdma_nl_notify_event(device, 0, RDMA_REGISTER_EVENT);
+ if (ret)
+- return;
++ goto out;
+
+ rdma_for_each_port(device, port) {
+ netdev = ib_device_get_netdev(device, port);
+@@ -1370,8 +1381,11 @@ static void ib_device_notify_register(struct ib_device *device)
+ RDMA_NETDEV_ATTACH_EVENT);
+ dev_put(netdev);
+ if (ret)
+- return;
++ goto out;
+ }
++
++out:
++ up_read(&devices_rwsem);
+ }
+
+ /**
+diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
+index 1fd54d5c4dd8b7..73f3a0b9a54b5f 100644
+--- a/drivers/infiniband/core/mad.c
++++ b/drivers/infiniband/core/mad.c
+@@ -2671,11 +2671,11 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
+ struct ib_mad_private *mad)
+ {
+ unsigned long flags;
+- int post, ret;
+ struct ib_mad_private *mad_priv;
+ struct ib_sge sg_list;
+ struct ib_recv_wr recv_wr;
+ struct ib_mad_queue *recv_queue = &qp_info->recv_queue;
++ int ret = 0;
+
+ /* Initialize common scatter list fields */
+ sg_list.lkey = qp_info->port_priv->pd->local_dma_lkey;
+@@ -2685,7 +2685,7 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
+ recv_wr.sg_list = &sg_list;
+ recv_wr.num_sge = 1;
+
+- do {
++ while (true) {
+ /* Allocate and map receive buffer */
+ if (mad) {
+ mad_priv = mad;
+@@ -2693,10 +2693,8 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
+ } else {
+ mad_priv = alloc_mad_private(port_mad_size(qp_info->port_priv),
+ GFP_ATOMIC);
+- if (!mad_priv) {
+- ret = -ENOMEM;
+- break;
+- }
++ if (!mad_priv)
++ return -ENOMEM;
+ }
+ sg_list.length = mad_priv_dma_size(mad_priv);
+ sg_list.addr = ib_dma_map_single(qp_info->port_priv->device,
+@@ -2705,37 +2703,41 @@ static int ib_mad_post_receive_mads(struct ib_mad_qp_info *qp_info,
+ DMA_FROM_DEVICE);
+ if (unlikely(ib_dma_mapping_error(qp_info->port_priv->device,
+ sg_list.addr))) {
+- kfree(mad_priv);
+ ret = -ENOMEM;
+- break;
++ goto free_mad_priv;
+ }
+ mad_priv->header.mapping = sg_list.addr;
+ mad_priv->header.mad_list.mad_queue = recv_queue;
+ mad_priv->header.mad_list.cqe.done = ib_mad_recv_done;
+ recv_wr.wr_cqe = &mad_priv->header.mad_list.cqe;
+-
+- /* Post receive WR */
+ spin_lock_irqsave(&recv_queue->lock, flags);
+- post = (++recv_queue->count < recv_queue->max_active);
+- list_add_tail(&mad_priv->header.mad_list.list, &recv_queue->list);
++ if (recv_queue->count >= recv_queue->max_active) {
++ /* Fully populated the receive queue */
++ spin_unlock_irqrestore(&recv_queue->lock, flags);
++ break;
++ }
++ recv_queue->count++;
++ list_add_tail(&mad_priv->header.mad_list.list,
++ &recv_queue->list);
+ spin_unlock_irqrestore(&recv_queue->lock, flags);
++
+ ret = ib_post_recv(qp_info->qp, &recv_wr, NULL);
+ if (ret) {
+ spin_lock_irqsave(&recv_queue->lock, flags);
+ list_del(&mad_priv->header.mad_list.list);
+ recv_queue->count--;
+ spin_unlock_irqrestore(&recv_queue->lock, flags);
+- ib_dma_unmap_single(qp_info->port_priv->device,
+- mad_priv->header.mapping,
+- mad_priv_dma_size(mad_priv),
+- DMA_FROM_DEVICE);
+- kfree(mad_priv);
+ dev_err(&qp_info->port_priv->device->dev,
+ "ib_post_recv failed: %d\n", ret);
+ break;
+ }
+- } while (post);
++ }
+
++ ib_dma_unmap_single(qp_info->port_priv->device,
++ mad_priv->header.mapping,
++ mad_priv_dma_size(mad_priv), DMA_FROM_DEVICE);
++free_mad_priv:
++ kfree(mad_priv);
+ return ret;
+ }
+
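
The receive-posting rework above converges every failure path that still owns a buffer onto one unmap-and-free tail, rather than duplicating the cleanup in each branch. A minimal sketch of that goto-unwind idiom, with malloc()/free() standing in for the DMA mapping and MAD buffer (error causes invented):

#include <errno.h>
#include <stdlib.h>
#include <string.h>

static int submit(const char *data, size_t len)
{
	char *buf;
	int ret = 0;

	buf = malloc(len ? len : 1);
	if (!buf)
		return -ENOMEM;		/* nothing to unwind yet */

	if (!data) {			/* stand-in for "mapping failed" */
		ret = -EINVAL;
		goto free_buf;
	}

	memcpy(buf, data, len);
	if (len == 0)			/* stand-in for "post failed" */
		ret = -EIO;

free_buf:
	free(buf);
	return ret;
}

int main(void)
{
	return submit("payload", 8) ? 1 : 0;
}
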
+diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
+index 9f97bef0214975..210092b9bf17d2 100644
+--- a/drivers/infiniband/core/sysfs.c
++++ b/drivers/infiniband/core/sysfs.c
+@@ -988,6 +988,7 @@ int ib_setup_device_attrs(struct ib_device *ibdev)
+ for (i = 0; i != ARRAY_SIZE(ibdev->groups); i++)
+ if (!ibdev->groups[i]) {
+ ibdev->groups[i] = &data->group;
++ ibdev->hw_stats_attr_index = i;
+ return 0;
+ }
+ WARN(true, "struct ib_device->groups is too small");
+diff --git a/drivers/infiniband/hw/erdma/erdma_cm.c b/drivers/infiniband/hw/erdma/erdma_cm.c
+index 771059a8eb7d7f..e349e8d2fb50a8 100644
+--- a/drivers/infiniband/hw/erdma/erdma_cm.c
++++ b/drivers/infiniband/hw/erdma/erdma_cm.c
+@@ -705,7 +705,6 @@ static void erdma_accept_newconn(struct erdma_cep *cep)
+ erdma_cancel_mpatimer(new_cep);
+
+ erdma_cep_put(new_cep);
+- new_cep->sock = NULL;
+ }
+
+ if (new_s) {
+diff --git a/drivers/infiniband/hw/mana/main.c b/drivers/infiniband/hw/mana/main.c
+index 457cea6d990958..f6bf289041bfe3 100644
+--- a/drivers/infiniband/hw/mana/main.c
++++ b/drivers/infiniband/hw/mana/main.c
+@@ -358,7 +358,7 @@ static int mana_ib_gd_create_dma_region(struct mana_ib_dev *dev, struct ib_umem
+ unsigned int tail = 0;
+ u64 *page_addr_list;
+ void *request_buf;
+- int err;
++ int err = 0;
+
+ gc = mdev_to_gc(dev);
+ hwc = gc->hwc.driver_data;
+diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
+index 4c54dc57806901..1aa5311b03e9f5 100644
+--- a/drivers/infiniband/hw/mlx5/cq.c
++++ b/drivers/infiniband/hw/mlx5/cq.c
+@@ -490,7 +490,7 @@ static int mlx5_poll_one(struct mlx5_ib_cq *cq,
+ }
+
+ qpn = ntohl(cqe64->sop_drop_qpn) & 0xffffff;
+- if (!*cur_qp || (qpn != (*cur_qp)->ibqp.qp_num)) {
++ if (!*cur_qp || (qpn != (*cur_qp)->trans_qp.base.mqp.qpn)) {
+ /* We do not have to take the QP table lock here,
+ * because CQs will be locked while QPs are removed
+ * from the table.
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 753faa9ad06a88..068eac3bdb50ba 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -56,7 +56,7 @@ static void
+ create_mkey_callback(int status, struct mlx5_async_work *context);
+ static struct mlx5_ib_mr *reg_create(struct ib_pd *pd, struct ib_umem *umem,
+ u64 iova, int access_flags,
+- unsigned int page_size, bool populate,
++ unsigned long page_size, bool populate,
+ int access_mode);
+ static int __mlx5_ib_dereg_mr(struct ib_mr *ibmr);
+
+@@ -919,6 +919,25 @@ mlx5r_cache_create_ent_locked(struct mlx5_ib_dev *dev,
+ return ERR_PTR(ret);
+ }
+
++static void mlx5r_destroy_cache_entries(struct mlx5_ib_dev *dev)
++{
++ struct rb_root *root = &dev->cache.rb_root;
++ struct mlx5_cache_ent *ent;
++ struct rb_node *node;
++
++ mutex_lock(&dev->cache.rb_lock);
++ node = rb_first(root);
++ while (node) {
++ ent = rb_entry(node, struct mlx5_cache_ent, node);
++ node = rb_next(node);
++ clean_keys(dev, ent);
++ rb_erase(&ent->node, root);
++ mlx5r_mkeys_uninit(ent);
++ kfree(ent);
++ }
++ mutex_unlock(&dev->cache.rb_lock);
++}
++
+ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev)
+ {
+ struct mlx5_mkey_cache *cache = &dev->cache;
+@@ -970,6 +989,8 @@ int mlx5_mkey_cache_init(struct mlx5_ib_dev *dev)
+ err:
+ mutex_unlock(&cache->rb_lock);
+ mlx5_mkey_cache_debugfs_cleanup(dev);
++ mlx5r_destroy_cache_entries(dev);
++ destroy_workqueue(cache->wq);
+ mlx5_ib_warn(dev, "failed to create mkey cache entry\n");
+ return ret;
+ }
+@@ -1003,17 +1024,7 @@ void mlx5_mkey_cache_cleanup(struct mlx5_ib_dev *dev)
+ mlx5_cmd_cleanup_async_ctx(&dev->async_ctx);
+
+ /* At this point all entries are disabled and have no concurrent work. */
+- mutex_lock(&dev->cache.rb_lock);
+- node = rb_first(root);
+- while (node) {
+- ent = rb_entry(node, struct mlx5_cache_ent, node);
+- node = rb_next(node);
+- clean_keys(dev, ent);
+- rb_erase(&ent->node, root);
+- mlx5r_mkeys_uninit(ent);
+- kfree(ent);
+- }
+- mutex_unlock(&dev->cache.rb_lock);
++ mlx5r_destroy_cache_entries(dev);
+
+ destroy_workqueue(dev->cache.wq);
+ del_timer_sync(&dev->delay_timer);
+@@ -1115,7 +1126,7 @@ static struct mlx5_ib_mr *alloc_cacheable_mr(struct ib_pd *pd,
+ struct mlx5r_cache_rb_key rb_key = {};
+ struct mlx5_cache_ent *ent;
+ struct mlx5_ib_mr *mr;
+- unsigned int page_size;
++ unsigned long page_size;
+
+ if (umem->is_dmabuf)
+ page_size = mlx5_umem_dmabuf_default_pgsz(umem, iova);
+@@ -1219,7 +1230,7 @@ reg_create_crossing_vhca_mr(struct ib_pd *pd, u64 iova, u64 length, int access_f
+ */
+ static struct mlx5_ib_mr *reg_create(struct ib_pd *pd, struct ib_umem *umem,
+ u64 iova, int access_flags,
+- unsigned int page_size, bool populate,
++ unsigned long page_size, bool populate,
+ int access_mode)
+ {
+ struct mlx5_ib_dev *dev = to_mdev(pd->device);
+@@ -1425,7 +1436,7 @@ static struct ib_mr *create_real_mr(struct ib_pd *pd, struct ib_umem *umem,
+ mr = alloc_cacheable_mr(pd, umem, iova, access_flags,
+ MLX5_MKC_ACCESS_MODE_MTT);
+ } else {
+- unsigned int page_size =
++ unsigned long page_size =
+ mlx5_umem_mkc_find_best_pgsz(dev, umem, iova);
+
+ mutex_lock(&dev->slow_path_mutex);
+diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c
+index b4e2a6f9cb9c3d..e158d5b1ab17b1 100644
+--- a/drivers/infiniband/hw/mlx5/odp.c
++++ b/drivers/infiniband/hw/mlx5/odp.c
+@@ -309,9 +309,6 @@ static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni,
+ blk_start_idx = idx;
+ in_block = 1;
+ }
+-
+- /* Count page invalidations */
+- invalidations += idx - blk_start_idx + 1;
+ } else {
+ u64 umr_offset = idx & umr_block_mask;
+
+@@ -321,14 +318,19 @@ static bool mlx5_ib_invalidate_range(struct mmu_interval_notifier *mni,
+ MLX5_IB_UPD_XLT_ZAP |
+ MLX5_IB_UPD_XLT_ATOMIC);
+ in_block = 0;
++ /* Count page invalidations */
++ invalidations += idx - blk_start_idx + 1;
+ }
+ }
+ }
+- if (in_block)
++ if (in_block) {
+ mlx5r_umr_update_xlt(mr, blk_start_idx,
+ idx - blk_start_idx + 1, 0,
+ MLX5_IB_UPD_XLT_ZAP |
+ MLX5_IB_UPD_XLT_ATOMIC);
++ /* Count page invalidations */
++ invalidations += idx - blk_start_idx + 1;
++ }
+
+ mlx5_update_odp_stats(mr, invalidations, invalidations);
+
+diff --git a/drivers/leds/led-core.c b/drivers/leds/led-core.c
+index 001c290bc07b7d..cda0995b167988 100644
+--- a/drivers/leds/led-core.c
++++ b/drivers/leds/led-core.c
+@@ -159,8 +159,19 @@ static void set_brightness_delayed(struct work_struct *ws)
+ * before this work item runs once. To make sure this works properly
+ * handle LED_SET_BRIGHTNESS_OFF first.
+ */
+- if (test_and_clear_bit(LED_SET_BRIGHTNESS_OFF, &led_cdev->work_flags))
++ if (test_and_clear_bit(LED_SET_BRIGHTNESS_OFF, &led_cdev->work_flags)) {
+ set_brightness_delayed_set_brightness(led_cdev, LED_OFF);
++ /*
++ * Consecutive led_set_brightness(LED_OFF) and
++ * led_set_brightness(LED_FULL) calls could have been handled out of
++ * order (LED_FULL first) if work_flags was modified between the
++ * LED_SET_BRIGHTNESS_OFF and LED_SET_BRIGHTNESS checks of this work
++ * item. To avoid ending up with the LED turned off, turn the LED on
++ * again.
++ */
++ if (led_cdev->delayed_set_value != LED_OFF)
++ set_bit(LED_SET_BRIGHTNESS, &led_cdev->work_flags);
++ }
+
+ if (test_and_clear_bit(LED_SET_BRIGHTNESS, &led_cdev->work_flags))
+ set_brightness_delayed_set_brightness(led_cdev, led_cdev->delayed_set_value);
+@@ -331,10 +342,13 @@ void led_set_brightness_nopm(struct led_classdev *led_cdev, unsigned int value)
+ * change is done immediately afterwards (before the work runs),
+ * it uses a separate work_flag.
+ */
+- if (value) {
+- led_cdev->delayed_set_value = value;
++ led_cdev->delayed_set_value = value;
++ /* Ensure delayed_set_value is seen before work_flags modification */
++ smp_mb__before_atomic();
++
++ if (value)
+ set_bit(LED_SET_BRIGHTNESS, &led_cdev->work_flags);
+- } else {
++ else {
+ clear_bit(LED_SET_BRIGHTNESS, &led_cdev->work_flags);
+ clear_bit(LED_SET_BLINK, &led_cdev->work_flags);
+ set_bit(LED_SET_BRIGHTNESS_OFF, &led_cdev->work_flags);
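
The led-core change pairs a payload write (delayed_set_value) with a flag update and orders them with smp_mb__before_atomic(). A sketch of the same publish pattern using C11 atomics in place of the kernel primitives (names invented): a consumer that observes the flag with acquire semantics is guaranteed to also observe the payload written before the release store.

#include <stdatomic.h>
#include <stdio.h>

static int delayed_value;			/* payload */
static atomic_bool brightness_pending;		/* flag */

static void producer(int value)
{
	delayed_value = value;			/* payload first */
	atomic_store_explicit(&brightness_pending, true,
			      memory_order_release);
}

static void consumer(void)
{
	if (atomic_exchange_explicit(&brightness_pending, false,
				     memory_order_acquire))
		printf("apply brightness %d\n", delayed_value);
}

int main(void)
{
	producer(255);
	consumer();	/* prints "apply brightness 255" */
	return 0;
}
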
+diff --git a/drivers/media/dvb-frontends/dib8000.c b/drivers/media/dvb-frontends/dib8000.c
+index 2f5165918163df..cfe59c3255f706 100644
+--- a/drivers/media/dvb-frontends/dib8000.c
++++ b/drivers/media/dvb-frontends/dib8000.c
+@@ -2701,8 +2701,11 @@ static void dib8000_set_dds(struct dib8000_state *state, s32 offset_khz)
+ u8 ratio;
+
+ if (state->revision == 0x8090) {
++ u32 internal = dib8000_read32(state, 23) / 1000;
++
+ ratio = 4;
+- unit_khz_dds_val = (1<<26) / (dib8000_read32(state, 23) / 1000);
++
++ unit_khz_dds_val = (1<<26) / (internal ?: 1);
+ if (offset_khz < 0)
+ dds = (1 << 26) - (abs_offset_khz * unit_khz_dds_val);
+ else
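
The dib8000 guard uses the GNU C conditional with an omitted middle operand, x ?: y, which evaluates to x when x is nonzero and to y otherwise; here it keeps a zero PLL readback out of the division. A sketch of the extension (GCC/Clang only, constants invented):

#include <stdio.h>

static unsigned int dds_per_khz(unsigned int internal_khz)
{
	/* A dead readback of 0 must never reach the division. */
	return (1u << 26) / (internal_khz ?: 1);
}

int main(void)
{
	printf("%u %u\n", dds_per_khz(60000), dds_per_khz(0));
	return 0;
}
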
+diff --git a/drivers/media/platform/allegro-dvt/allegro-core.c b/drivers/media/platform/allegro-dvt/allegro-core.c
+index 88c36eb6174ad6..9ca4e2f94647b0 100644
+--- a/drivers/media/platform/allegro-dvt/allegro-core.c
++++ b/drivers/media/platform/allegro-dvt/allegro-core.c
+@@ -3914,6 +3914,7 @@ static int allegro_probe(struct platform_device *pdev)
+ if (ret < 0) {
+ v4l2_err(&dev->v4l2_dev,
+ "failed to request firmware: %d\n", ret);
++ v4l2_device_unregister(&dev->v4l2_dev);
+ return ret;
+ }
+
+diff --git a/drivers/media/platform/ti/omap3isp/isp.c b/drivers/media/platform/ti/omap3isp/isp.c
+index 91101ba88ef01f..b2210841a320f4 100644
+--- a/drivers/media/platform/ti/omap3isp/isp.c
++++ b/drivers/media/platform/ti/omap3isp/isp.c
+@@ -1961,6 +1961,13 @@ static int isp_attach_iommu(struct isp_device *isp)
+ struct dma_iommu_mapping *mapping;
+ int ret;
+
++ /* We always want to replace any default mapping from the arch code */
++ mapping = to_dma_iommu_mapping(isp->dev);
++ if (mapping) {
++ arm_iommu_detach_device(isp->dev);
++ arm_iommu_release_mapping(mapping);
++ }
++
+ /*
+ * Create the ARM mapping, used by the ARM DMA mapping core to allocate
+ * VAs. This will allocate a corresponding IOMMU domain.
+diff --git a/drivers/media/platform/verisilicon/hantro_g2_hevc_dec.c b/drivers/media/platform/verisilicon/hantro_g2_hevc_dec.c
+index 85a44143b3786b..0e212198dd65b1 100644
+--- a/drivers/media/platform/verisilicon/hantro_g2_hevc_dec.c
++++ b/drivers/media/platform/verisilicon/hantro_g2_hevc_dec.c
+@@ -518,6 +518,7 @@ static void set_buffers(struct hantro_ctx *ctx)
+ hantro_reg_write(vpu, &g2_stream_len, src_len);
+ hantro_reg_write(vpu, &g2_strm_buffer_len, src_buf_len);
+ hantro_reg_write(vpu, &g2_strm_start_offset, 0);
++ hantro_reg_write(vpu, &g2_start_bit, 0);
+ hantro_reg_write(vpu, &g2_write_mvs_e, 1);
+
+ hantro_write_addr(vpu, G2_TILE_SIZES_ADDR, ctx->hevc_dec.tile_sizes.dma);
+diff --git a/drivers/media/rc/streamzap.c b/drivers/media/rc/streamzap.c
+index 9b209e687f256d..2ce62fe5d60f5a 100644
+--- a/drivers/media/rc/streamzap.c
++++ b/drivers/media/rc/streamzap.c
+@@ -385,8 +385,8 @@ static void streamzap_disconnect(struct usb_interface *interface)
+ if (!sz)
+ return;
+
+- rc_unregister_device(sz->rdev);
+ usb_kill_urb(sz->urb_in);
++ rc_unregister_device(sz->rdev);
+ usb_free_urb(sz->urb_in);
+ usb_free_coherent(usbdev, sz->buf_in_len, sz->buf_in, sz->dma_in);
+
+diff --git a/drivers/media/test-drivers/vimc/vimc-streamer.c b/drivers/media/test-drivers/vimc/vimc-streamer.c
+index 807551a5143b78..15d863f97cbf96 100644
+--- a/drivers/media/test-drivers/vimc/vimc-streamer.c
++++ b/drivers/media/test-drivers/vimc/vimc-streamer.c
+@@ -59,6 +59,12 @@ static void vimc_streamer_pipeline_terminate(struct vimc_stream *stream)
+ continue;
+
+ sd = media_entity_to_v4l2_subdev(ved->ent);
++ /*
++ * Do not call .s_stream() to stop an already
++ * stopped/unstarted subdev.
++ */
++ if (!v4l2_subdev_is_streaming(sd))
++ continue;
+ v4l2_subdev_call(sd, video, s_stream, 0);
+ }
+ }
+diff --git a/drivers/memory/omap-gpmc.c b/drivers/memory/omap-gpmc.c
+index c8a0d82f9c27df..719225c09a4d60 100644
+--- a/drivers/memory/omap-gpmc.c
++++ b/drivers/memory/omap-gpmc.c
+@@ -2245,26 +2245,6 @@ static int gpmc_probe_generic_child(struct platform_device *pdev,
+ goto err;
+ }
+
+- if (of_node_name_eq(child, "nand")) {
+- /* Warn about older DT blobs with no compatible property */
+- if (!of_property_read_bool(child, "compatible")) {
+- dev_warn(&pdev->dev,
+- "Incompatible NAND node: missing compatible");
+- ret = -EINVAL;
+- goto err;
+- }
+- }
+-
+- if (of_node_name_eq(child, "onenand")) {
+- /* Warn about older DT blobs with no compatible property */
+- if (!of_property_read_bool(child, "compatible")) {
+- dev_warn(&pdev->dev,
+- "Incompatible OneNAND node: missing compatible");
+- ret = -EINVAL;
+- goto err;
+- }
+- }
+-
+ if (of_match_node(omap_nand_ids, child)) {
+ /* NAND specific setup */
+ val = 8;
+diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
+index b3592982a83b55..5b6dc1cb9bfc36 100644
+--- a/drivers/mfd/sm501.c
++++ b/drivers/mfd/sm501.c
+@@ -920,7 +920,7 @@ static void sm501_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
+ {
+ struct sm501_gpio_chip *smchip = gpiochip_get_data(chip);
+ struct sm501_gpio *smgpio = smchip->ourgpio;
+- unsigned long bit = 1 << offset;
++ unsigned long bit = BIT(offset);
+ void __iomem *regs = smchip->regbase;
+ unsigned long save;
+ unsigned long val;
+@@ -946,7 +946,7 @@ static int sm501_gpio_input(struct gpio_chip *chip, unsigned offset)
+ struct sm501_gpio_chip *smchip = gpiochip_get_data(chip);
+ struct sm501_gpio *smgpio = smchip->ourgpio;
+ void __iomem *regs = smchip->regbase;
+- unsigned long bit = 1 << offset;
++ unsigned long bit = BIT(offset);
+ unsigned long save;
+ unsigned long ddr;
+
+@@ -971,7 +971,7 @@ static int sm501_gpio_output(struct gpio_chip *chip,
+ {
+ struct sm501_gpio_chip *smchip = gpiochip_get_data(chip);
+ struct sm501_gpio *smgpio = smchip->ourgpio;
+- unsigned long bit = 1 << offset;
++ unsigned long bit = BIT(offset);
+ void __iomem *regs = smchip->regbase;
+ unsigned long save;
+ unsigned long val;
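
The sm501 conversion from 1 << offset to BIT(offset) is more than style: the literal 1 is a signed int, so shifting it by 31 or more is undefined behavior, whereas BIT() shifts an unsigned long. A sketch with a local stand-in for the kernel macro:

#include <stdio.h>

#define BIT(nr) (1UL << (nr))

int main(void)
{
	/* 1 << 31 overflows signed int; 1UL << 31 is well defined. */
	unsigned long mask = BIT(31) | BIT(0);

	printf("%#lx\n", mask);	/* 0x80000001 */
	return 0;
}
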
+diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
+index 335350a4e99aba..ee0940d96febf6 100644
+--- a/drivers/mmc/host/omap.c
++++ b/drivers/mmc/host/omap.c
+@@ -1272,19 +1272,25 @@ static int mmc_omap_new_slot(struct mmc_omap_host *host, int id)
+ /* Check for some optional GPIO controls */
+ slot->vsd = devm_gpiod_get_index_optional(host->dev, "vsd",
+ id, GPIOD_OUT_LOW);
+- if (IS_ERR(slot->vsd))
+- return dev_err_probe(host->dev, PTR_ERR(slot->vsd),
++ if (IS_ERR(slot->vsd)) {
++ r = dev_err_probe(host->dev, PTR_ERR(slot->vsd),
+ "error looking up VSD GPIO\n");
++ goto err_free_host;
++ }
+ slot->vio = devm_gpiod_get_index_optional(host->dev, "vio",
+ id, GPIOD_OUT_LOW);
+- if (IS_ERR(slot->vio))
+- return dev_err_probe(host->dev, PTR_ERR(slot->vio),
++ if (IS_ERR(slot->vio)) {
++ r = dev_err_probe(host->dev, PTR_ERR(slot->vio),
+ "error looking up VIO GPIO\n");
++ goto err_free_host;
++ }
+ slot->cover = devm_gpiod_get_index_optional(host->dev, "cover",
+ id, GPIOD_IN);
+- if (IS_ERR(slot->cover))
+- return dev_err_probe(host->dev, PTR_ERR(slot->cover),
++ if (IS_ERR(slot->cover)) {
++ r = dev_err_probe(host->dev, PTR_ERR(slot->cover),
+ "error looking up cover switch GPIO\n");
++ goto err_free_host;
++ }
+
+ host->slots[id] = slot;
+
+@@ -1344,6 +1350,7 @@ static int mmc_omap_new_slot(struct mmc_omap_host *host, int id)
+ device_remove_file(&mmc->class_dev, &dev_attr_slot_name);
+ err_remove_host:
+ mmc_remove_host(mmc);
++err_free_host:
+ mmc_free_host(mmc);
+ return r;
+ }
+diff --git a/drivers/mmc/host/sdhci-omap.c b/drivers/mmc/host/sdhci-omap.c
+index 5841a9afeb9f50..ea4a801c9ace5c 100644
+--- a/drivers/mmc/host/sdhci-omap.c
++++ b/drivers/mmc/host/sdhci-omap.c
+@@ -1339,8 +1339,8 @@ static int sdhci_omap_probe(struct platform_device *pdev)
+ /* R1B responses is required to properly manage HW busy detection. */
+ mmc->caps |= MMC_CAP_NEED_RSP_BUSY;
+
+- /* Allow card power off and runtime PM for eMMC/SD card devices */
+- mmc->caps |= MMC_CAP_POWER_OFF_CARD | MMC_CAP_AGGRESSIVE_PM;
++ /* Enable SDIO card power off. */
++ mmc->caps |= MMC_CAP_POWER_OFF_CARD;
+
+ ret = sdhci_setup_host(host);
+ if (ret)
+diff --git a/drivers/mmc/host/sdhci-pxav3.c b/drivers/mmc/host/sdhci-pxav3.c
+index 3af43ac0582552..376fd927ae7386 100644
+--- a/drivers/mmc/host/sdhci-pxav3.c
++++ b/drivers/mmc/host/sdhci-pxav3.c
+@@ -399,6 +399,7 @@ static int sdhci_pxav3_probe(struct platform_device *pdev)
+ if (!IS_ERR(pxa->clk_core))
+ clk_prepare_enable(pxa->clk_core);
+
++ host->mmc->caps |= MMC_CAP_NEED_RSP_BUSY;
+ /* enable 1/8V DDR capable */
+ host->mmc->caps |= MMC_CAP_1_8V_DDR;
+
+diff --git a/drivers/net/arcnet/com20020-pci.c b/drivers/net/arcnet/com20020-pci.c
+index c5e571ec94c990..0472bcdff13072 100644
+--- a/drivers/net/arcnet/com20020-pci.c
++++ b/drivers/net/arcnet/com20020-pci.c
+@@ -251,18 +251,33 @@ static int com20020pci_probe(struct pci_dev *pdev,
+ card->tx_led.default_trigger = devm_kasprintf(&pdev->dev,
+ GFP_KERNEL, "arc%d-%d-tx",
+ dev->dev_id, i);
++ if (!card->tx_led.default_trigger) {
++ ret = -ENOMEM;
++ goto err_free_arcdev;
++ }
+ card->tx_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "pci:green:tx:%d-%d",
+ dev->dev_id, i);
+-
++ if (!card->tx_led.name) {
++ ret = -ENOMEM;
++ goto err_free_arcdev;
++ }
+ card->tx_led.dev = &dev->dev;
+ card->recon_led.brightness_set = led_recon_set;
+ card->recon_led.default_trigger = devm_kasprintf(&pdev->dev,
+ GFP_KERNEL, "arc%d-%d-recon",
+ dev->dev_id, i);
++ if (!card->recon_led.default_trigger) {
++ ret = -ENOMEM;
++ goto err_free_arcdev;
++ }
+ card->recon_led.name = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "pci:red:recon:%d-%d",
+ dev->dev_id, i);
++ if (!card->recon_led.name) {
++ ret = -ENOMEM;
++ goto err_free_arcdev;
++ }
+ card->recon_led.dev = &dev->dev;
+
+ ret = devm_led_classdev_register(&pdev->dev, &card->tx_led);
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 5aeecfab96306c..5935100e7d65f8 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -7301,13 +7301,13 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
+ err = mv88e6xxx_switch_reset(chip);
+ mv88e6xxx_reg_unlock(chip);
+ if (err)
+- goto out;
++ goto out_phy;
+
+ if (np) {
+ chip->irq = of_irq_get(np, 0);
+ if (chip->irq == -EPROBE_DEFER) {
+ err = chip->irq;
+- goto out;
++ goto out_phy;
+ }
+ }
+
+@@ -7326,7 +7326,7 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
+ mv88e6xxx_reg_unlock(chip);
+
+ if (err)
+- goto out;
++ goto out_phy;
+
+ if (chip->info->g2_irqs > 0) {
+ err = mv88e6xxx_g2_irq_setup(chip);
+@@ -7360,6 +7360,8 @@ static int mv88e6xxx_probe(struct mdio_device *mdiodev)
+ mv88e6xxx_g1_irq_free(chip);
+ else
+ mv88e6xxx_irq_poll_free(chip);
++out_phy:
++ mv88e6xxx_phy_destroy(chip);
+ out:
+ if (pdata)
+ dev_put(pdata->netdev);
+@@ -7382,7 +7384,6 @@ static void mv88e6xxx_remove(struct mdio_device *mdiodev)
+ mv88e6xxx_ptp_free(chip);
+ }
+
+- mv88e6xxx_phy_destroy(chip);
+ mv88e6xxx_unregister_switch(chip);
+
+ mv88e6xxx_g1_vtu_prob_irq_free(chip);
+@@ -7395,6 +7396,8 @@ static void mv88e6xxx_remove(struct mdio_device *mdiodev)
+ mv88e6xxx_g1_irq_free(chip);
+ else
+ mv88e6xxx_irq_poll_free(chip);
++
++ mv88e6xxx_phy_destroy(chip);
+ }
+
+ static void mv88e6xxx_shutdown(struct mdio_device *mdiodev)
+diff --git a/drivers/net/dsa/mv88e6xxx/phy.c b/drivers/net/dsa/mv88e6xxx/phy.c
+index 8bb88b3d900db3..ee9e5d7e527709 100644
+--- a/drivers/net/dsa/mv88e6xxx/phy.c
++++ b/drivers/net/dsa/mv88e6xxx/phy.c
+@@ -229,7 +229,10 @@ static void mv88e6xxx_phy_ppu_state_init(struct mv88e6xxx_chip *chip)
+
+ static void mv88e6xxx_phy_ppu_state_destroy(struct mv88e6xxx_chip *chip)
+ {
++ mutex_lock(&chip->ppu_mutex);
+ del_timer_sync(&chip->ppu_timer);
++ cancel_work_sync(&chip->ppu_work);
++ mutex_unlock(&chip->ppu_mutex);
+ }
+
+ int mv88e6185_phy_ppu_read(struct mv88e6xxx_chip *chip, struct mii_bus *bus,
+diff --git a/drivers/net/dsa/realtek/Kconfig b/drivers/net/dsa/realtek/Kconfig
+index 10687722d14c08..d6eb6713e5f6ba 100644
+--- a/drivers/net/dsa/realtek/Kconfig
++++ b/drivers/net/dsa/realtek/Kconfig
+@@ -44,7 +44,7 @@ config NET_DSA_REALTEK_RTL8366RB
+ Select to enable support for Realtek RTL8366RB.
+
+ config NET_DSA_REALTEK_RTL8366RB_LEDS
+- bool "Support RTL8366RB LED control"
++ bool
+ depends on (LEDS_CLASS=y || LEDS_CLASS=NET_DSA_REALTEK_RTL8366RB)
+ depends on NET_DSA_REALTEK_RTL8366RB
+ default NET_DSA_REALTEK_RTL8366RB
+diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
+index b619a3ec245b24..04192190bebabb 100644
+--- a/drivers/net/ethernet/ibm/ibmveth.c
++++ b/drivers/net/ethernet/ibm/ibmveth.c
+@@ -1802,18 +1802,22 @@ static ssize_t veth_pool_store(struct kobject *kobj, struct attribute *attr,
+ long value = simple_strtol(buf, NULL, 10);
+ long rc;
+
++ rtnl_lock();
++
+ if (attr == &veth_active_attr) {
+ if (value && !pool->active) {
+ if (netif_running(netdev)) {
+ if (ibmveth_alloc_buffer_pool(pool)) {
+ netdev_err(netdev,
+ "unable to alloc pool\n");
+- return -ENOMEM;
++ rc = -ENOMEM;
++ goto unlock_err;
+ }
+ pool->active = 1;
+ ibmveth_close(netdev);
+- if ((rc = ibmveth_open(netdev)))
+- return rc;
++ rc = ibmveth_open(netdev);
++ if (rc)
++ goto unlock_err;
+ } else {
+ pool->active = 1;
+ }
+@@ -1833,48 +1837,59 @@ static ssize_t veth_pool_store(struct kobject *kobj, struct attribute *attr,
+
+ if (i == IBMVETH_NUM_BUFF_POOLS) {
+ netdev_err(netdev, "no active pool >= MTU\n");
+- return -EPERM;
++ rc = -EPERM;
++ goto unlock_err;
+ }
+
+ if (netif_running(netdev)) {
+ ibmveth_close(netdev);
+ pool->active = 0;
+- if ((rc = ibmveth_open(netdev)))
+- return rc;
++ rc = ibmveth_open(netdev);
++ if (rc)
++ goto unlock_err;
+ }
+ pool->active = 0;
+ }
+ } else if (attr == &veth_num_attr) {
+ if (value <= 0 || value > IBMVETH_MAX_POOL_COUNT) {
+- return -EINVAL;
++ rc = -EINVAL;
++ goto unlock_err;
+ } else {
+ if (netif_running(netdev)) {
+ ibmveth_close(netdev);
+ pool->size = value;
+- if ((rc = ibmveth_open(netdev)))
+- return rc;
++ rc = ibmveth_open(netdev);
++ if (rc)
++ goto unlock_err;
+ } else {
+ pool->size = value;
+ }
+ }
+ } else if (attr == &veth_size_attr) {
+ if (value <= IBMVETH_BUFF_OH || value > IBMVETH_MAX_BUF_SIZE) {
+- return -EINVAL;
++ rc = -EINVAL;
++ goto unlock_err;
+ } else {
+ if (netif_running(netdev)) {
+ ibmveth_close(netdev);
+ pool->buff_size = value;
+- if ((rc = ibmveth_open(netdev)))
+- return rc;
++ rc = ibmveth_open(netdev);
++ if (rc)
++ goto unlock_err;
+ } else {
+ pool->buff_size = value;
+ }
+ }
+ }
++ rtnl_unlock();
+
+ /* kick the interrupt handler to allocate/deallocate pools */
+ ibmveth_interrupt(netdev->irq, netdev);
+ return count;
++
++unlock_err:
++ rtnl_unlock();
++ return rc;
+ }
+
+
+diff --git a/drivers/net/ethernet/intel/e1000e/defines.h b/drivers/net/ethernet/intel/e1000e/defines.h
+index 5e2cfa73f8891c..8294a7c4f122c3 100644
+--- a/drivers/net/ethernet/intel/e1000e/defines.h
++++ b/drivers/net/ethernet/intel/e1000e/defines.h
+@@ -803,4 +803,7 @@
+ /* SerDes Control */
+ #define E1000_GEN_POLL_TIMEOUT 640
+
++#define E1000_FEXTNVM12_PHYPD_CTRL_MASK 0x00C00000
++#define E1000_FEXTNVM12_PHYPD_CTRL_P1 0x00800000
++
+ #endif /* _E1000_DEFINES_H_ */
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+index 2f9655cf5dd9ee..364378133526a1 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
+@@ -285,6 +285,45 @@ static void e1000_toggle_lanphypc_pch_lpt(struct e1000_hw *hw)
+ }
+ }
+
++/**
++ * e1000_reconfigure_k1_exit_timeout - reconfigure K1 exit timeout to
++ * align to MTP and later platform requirements.
++ * @hw: pointer to the HW structure
++ *
++ * Context: PHY semaphore must be held by caller.
++ * Return: 0 on success, negative on failure
++ */
++static s32 e1000_reconfigure_k1_exit_timeout(struct e1000_hw *hw)
++{
++ u16 phy_timeout;
++ u32 fextnvm12;
++ s32 ret_val;
++
++ if (hw->mac.type < e1000_pch_mtp)
++ return 0;
++
++ /* Change Kumeran K1 power down state from P0s to P1 */
++ fextnvm12 = er32(FEXTNVM12);
++ fextnvm12 &= ~E1000_FEXTNVM12_PHYPD_CTRL_MASK;
++ fextnvm12 |= E1000_FEXTNVM12_PHYPD_CTRL_P1;
++ ew32(FEXTNVM12, fextnvm12);
++
++ /* Wait for the interface to settle */
++ usleep_range(1000, 1100);
++
++ /* Change K1 exit timeout */
++ ret_val = e1e_rphy_locked(hw, I217_PHY_TIMEOUTS_REG,
++ &phy_timeout);
++ if (ret_val)
++ return ret_val;
++
++ phy_timeout &= ~I217_PHY_TIMEOUTS_K1_EXIT_TO_MASK;
++ phy_timeout |= 0xF00;
++
++ return e1e_wphy_locked(hw, I217_PHY_TIMEOUTS_REG,
++ phy_timeout);
++}
++
+ /**
+ * e1000_init_phy_workarounds_pchlan - PHY initialization workarounds
+ * @hw: pointer to the HW structure
+@@ -327,15 +366,22 @@ static s32 e1000_init_phy_workarounds_pchlan(struct e1000_hw *hw)
+ * LANPHYPC Value bit to force the interconnect to PCIe mode.
+ */
+ switch (hw->mac.type) {
++ case e1000_pch_mtp:
++ case e1000_pch_lnp:
++ case e1000_pch_ptp:
++ case e1000_pch_nvp:
++ /* At this point the PHY might be inaccessible, so don't
++ * propagate the failure.
++ */
++ if (e1000_reconfigure_k1_exit_timeout(hw))
++ e_dbg("Failed to reconfigure K1 exit timeout\n");
++
++ fallthrough;
+ case e1000_pch_lpt:
+ case e1000_pch_spt:
+ case e1000_pch_cnp:
+ case e1000_pch_tgp:
+ case e1000_pch_adp:
+- case e1000_pch_mtp:
+- case e1000_pch_lnp:
+- case e1000_pch_ptp:
+- case e1000_pch_nvp:
+ if (e1000_phy_is_accessible_pchlan(hw))
+ break;
+
+@@ -419,8 +465,20 @@ static s32 e1000_init_phy_workarounds_pchlan(struct e1000_hw *hw)
+ * the PHY is in.
+ */
+ ret_val = hw->phy.ops.check_reset_block(hw);
+- if (ret_val)
++ if (ret_val) {
+ e_err("ME blocked access to PHY after reset\n");
++ goto out;
++ }
++
++ if (hw->mac.type >= e1000_pch_mtp) {
++ ret_val = hw->phy.ops.acquire(hw);
++ if (ret_val) {
++ e_err("Failed to reconfigure K1 exit timeout\n");
++ goto out;
++ }
++ ret_val = e1000_reconfigure_k1_exit_timeout(hw);
++ hw->phy.ops.release(hw);
++ }
+ }
+
+ out:
+@@ -4888,6 +4946,18 @@ static s32 e1000_init_hw_ich8lan(struct e1000_hw *hw)
+ u16 i;
+
+ e1000_initialize_hw_bits_ich8lan(hw);
++ if (hw->mac.type >= e1000_pch_mtp) {
++ ret_val = hw->phy.ops.acquire(hw);
++ if (ret_val)
++ return ret_val;
++
++ ret_val = e1000_reconfigure_k1_exit_timeout(hw);
++ hw->phy.ops.release(hw);
++ if (ret_val) {
++ e_dbg("Error failed to reconfigure K1 exit timeout\n");
++ return ret_val;
++ }
++ }
+
+ /* Initialize identification LED */
+ ret_val = mac->ops.id_led_init(hw);
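
The FEXTNVM12 and PHY-timeout updates above are plain read-modify-write operations on a register field: clear the field with its mask, then OR in the desired value, leaving neighboring bits untouched. A sketch reusing the patch's PHYPD_CTRL constants (helper name invented):

#include <stdio.h>

#define PHYPD_CTRL_MASK 0x00C00000u
#define PHYPD_CTRL_P1   0x00800000u

static unsigned int set_phypd_p1(unsigned int reg)
{
	reg &= ~PHYPD_CTRL_MASK;	/* clear the field */
	reg |= PHYPD_CTRL_P1;		/* select power-down state P1 */
	return reg;
}

int main(void)
{
	printf("%#010x\n", set_phypd_p1(0x00400001));	/* 0x00800001 */
	return 0;
}
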
+diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.h b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+index 2504b11c3169fa..5feb589a9b5ff2 100644
+--- a/drivers/net/ethernet/intel/e1000e/ich8lan.h
++++ b/drivers/net/ethernet/intel/e1000e/ich8lan.h
+@@ -219,6 +219,10 @@
+ #define I217_PLL_CLOCK_GATE_REG PHY_REG(772, 28)
+ #define I217_PLL_CLOCK_GATE_MASK 0x07FF
+
++/* PHY Timeouts */
++#define I217_PHY_TIMEOUTS_REG PHY_REG(770, 21)
++#define I217_PHY_TIMEOUTS_K1_EXIT_TO_MASK 0x0FC0
++
+ #define SW_FLAG_TIMEOUT 1000 /* SW Semaphore flag timeout in ms */
+
+ /* Inband Control */
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_main.c b/drivers/net/ethernet/intel/idpf/idpf_main.c
+index dfd56fc5ff6550..7557bb6694c090 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_main.c
++++ b/drivers/net/ethernet/intel/idpf/idpf_main.c
+@@ -87,7 +87,11 @@ static void idpf_remove(struct pci_dev *pdev)
+ */
+ static void idpf_shutdown(struct pci_dev *pdev)
+ {
+- idpf_remove(pdev);
++ struct idpf_adapter *adapter = pci_get_drvdata(pdev);
++
++ cancel_delayed_work_sync(&adapter->vc_event_task);
++ idpf_vc_core_deinit(adapter);
++ idpf_deinit_dflt_mbx(adapter);
+
+ if (system_state == SYSTEM_POWER_OFF)
+ pci_set_power_state(pdev, PCI_D3hot);
+diff --git a/drivers/net/ethernet/intel/idpf/idpf_txrx.h b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+index f0537826f8403f..9c1fe84108ed2e 100644
+--- a/drivers/net/ethernet/intel/idpf/idpf_txrx.h
++++ b/drivers/net/ethernet/intel/idpf/idpf_txrx.h
+@@ -438,7 +438,8 @@ struct idpf_q_vector {
+ __cacheline_group_end_aligned(cold);
+ };
+ libeth_cacheline_set_assert(struct idpf_q_vector, 112,
+- 424 + 2 * sizeof(struct dim),
++ 24 + sizeof(struct napi_struct) +
++ 2 * sizeof(struct dim),
+ 8 + sizeof(cpumask_var_t));
+
+ struct idpf_rx_queue_stats {
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+index 9e02e4367bec81..9bd3d76b5fe2ac 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2.h
+@@ -1108,6 +1108,9 @@ struct mvpp2 {
+
+ /* Spinlocks for CM3 shared memory configuration */
+ spinlock_t mss_spinlock;
++
++ /* Spinlock for shared PRS parser memory and shadow table */
++ spinlock_t prs_spinlock;
+ };
+
+ struct mvpp2_pcpu_stats {
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 3880dcc0418b2d..66b5a80c9c28aa 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -7640,8 +7640,9 @@ static int mvpp2_probe(struct platform_device *pdev)
+ if (mvpp2_read(priv, MVPP2_VER_ID_REG) == MVPP2_VER_PP23)
+ priv->hw_version = MVPP23;
+
+- /* Init mss lock */
++ /* Init locks for shared packet processor resources */
+ spin_lock_init(&priv->mss_spinlock);
++ spin_lock_init(&priv->prs_spinlock);
+
+ /* Initialize network controller */
+ err = mvpp2_init(pdev, priv);
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+index 9af22f497a40f5..93e978bdf303c4 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_prs.c
+@@ -23,6 +23,8 @@ static int mvpp2_prs_hw_write(struct mvpp2 *priv, struct mvpp2_prs_entry *pe)
+ {
+ int i;
+
++ lockdep_assert_held(&priv->prs_spinlock);
++
+ if (pe->index > MVPP2_PRS_TCAM_SRAM_SIZE - 1)
+ return -EINVAL;
+
+@@ -43,11 +45,13 @@ static int mvpp2_prs_hw_write(struct mvpp2 *priv, struct mvpp2_prs_entry *pe)
+ }
+
+ /* Initialize tcam entry from hw */
+-int mvpp2_prs_init_from_hw(struct mvpp2 *priv, struct mvpp2_prs_entry *pe,
+- int tid)
++static int __mvpp2_prs_init_from_hw(struct mvpp2 *priv,
++ struct mvpp2_prs_entry *pe, int tid)
+ {
+ int i;
+
++ lockdep_assert_held(&priv->prs_spinlock);
++
+ if (tid > MVPP2_PRS_TCAM_SRAM_SIZE - 1)
+ return -EINVAL;
+
+@@ -73,6 +77,18 @@ int mvpp2_prs_init_from_hw(struct mvpp2 *priv, struct mvpp2_prs_entry *pe,
+ return 0;
+ }
+
++int mvpp2_prs_init_from_hw(struct mvpp2 *priv, struct mvpp2_prs_entry *pe,
++ int tid)
++{
++ int err;
++
++ spin_lock_bh(&priv->prs_spinlock);
++ err = __mvpp2_prs_init_from_hw(priv, pe, tid);
++ spin_unlock_bh(&priv->prs_spinlock);
++
++ return err;
++}
++
+ /* Invalidate tcam hw entry */
+ static void mvpp2_prs_hw_inv(struct mvpp2 *priv, int index)
+ {
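
The locking convention these mvpp2 changes introduce is worth spelling out: double-underscore helpers assume prs_spinlock is already held and assert it with lockdep_assert_held(), while the public entry points take the lock around a single call to the helper, so internal callers that already hold the lock never deadlock. A userspace sketch of the same split using pthreads instead of the kernel's spinlock and lockdep API (names invented):

#include <assert.h>
#include <pthread.h>

static pthread_mutex_t prs_lock = PTHREAD_MUTEX_INITIALIZER;
static int shadow_table[8];

/* Caller must hold prs_lock. */
static void __prs_shadow_set(int tid, int val)
{
	/* Stand-in for lockdep_assert_held(): trylock must fail. */
	assert(pthread_mutex_trylock(&prs_lock) != 0);
	shadow_table[tid] = val;
}

void prs_shadow_set(int tid, int val)
{
	pthread_mutex_lock(&prs_lock);
	__prs_shadow_set(tid, val);
	pthread_mutex_unlock(&prs_lock);
}

int main(void)
{
	prs_shadow_set(3, 1);
	return shadow_table[3] == 1 ? 0 : 1;
}
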
+@@ -374,7 +390,7 @@ static int mvpp2_prs_flow_find(struct mvpp2 *priv, int flow)
+ priv->prs_shadow[tid].lu != MVPP2_PRS_LU_FLOWS)
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ bits = mvpp2_prs_sram_ai_get(&pe);
+
+ /* Sram store classification lookup ID in AI bits [5:0] */
+@@ -441,7 +457,7 @@ static void mvpp2_prs_mac_drop_all_set(struct mvpp2 *priv, int port, bool add)
+
+ if (priv->prs_shadow[MVPP2_PE_DROP_ALL].valid) {
+ /* Entry exist - update port only */
+- mvpp2_prs_init_from_hw(priv, &pe, MVPP2_PE_DROP_ALL);
++ __mvpp2_prs_init_from_hw(priv, &pe, MVPP2_PE_DROP_ALL);
+ } else {
+ /* Entry doesn't exist - create new */
+ memset(&pe, 0, sizeof(pe));
+@@ -469,14 +485,17 @@ static void mvpp2_prs_mac_drop_all_set(struct mvpp2 *priv, int port, bool add)
+ }
+
+ /* Set port to unicast or multicast promiscuous mode */
+-void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port,
+- enum mvpp2_prs_l2_cast l2_cast, bool add)
++static void __mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port,
++ enum mvpp2_prs_l2_cast l2_cast,
++ bool add)
+ {
+ struct mvpp2_prs_entry pe;
+ unsigned char cast_match;
+ unsigned int ri;
+ int tid;
+
++ lockdep_assert_held(&priv->prs_spinlock);
++
+ if (l2_cast == MVPP2_PRS_L2_UNI_CAST) {
+ cast_match = MVPP2_PRS_UCAST_VAL;
+ tid = MVPP2_PE_MAC_UC_PROMISCUOUS;
+@@ -489,7 +508,7 @@ void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port,
+
+ /* promiscuous mode - Accept unknown unicast or multicast packets */
+ if (priv->prs_shadow[tid].valid) {
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ } else {
+ memset(&pe, 0, sizeof(pe));
+ mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_MAC);
+@@ -522,6 +541,14 @@ void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port,
+ mvpp2_prs_hw_write(priv, &pe);
+ }
+
++void mvpp2_prs_mac_promisc_set(struct mvpp2 *priv, int port,
++ enum mvpp2_prs_l2_cast l2_cast, bool add)
++{
++ spin_lock_bh(&priv->prs_spinlock);
++ __mvpp2_prs_mac_promisc_set(priv, port, l2_cast, add);
++ spin_unlock_bh(&priv->prs_spinlock);
++}
++
+ /* Set entry for dsa packets */
+ static void mvpp2_prs_dsa_tag_set(struct mvpp2 *priv, int port, bool add,
+ bool tagged, bool extend)
+@@ -539,7 +566,7 @@ static void mvpp2_prs_dsa_tag_set(struct mvpp2 *priv, int port, bool add,
+
+ if (priv->prs_shadow[tid].valid) {
+ /* Entry exist - update port only */
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ } else {
+ /* Entry doesn't exist - create new */
+ memset(&pe, 0, sizeof(pe));
+@@ -610,7 +637,7 @@ static void mvpp2_prs_dsa_tag_ethertype_set(struct mvpp2 *priv, int port,
+
+ if (priv->prs_shadow[tid].valid) {
+ /* Entry exist - update port only */
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ } else {
+ /* Entry doesn't exist - create new */
+ memset(&pe, 0, sizeof(pe));
+@@ -673,7 +700,7 @@ static int mvpp2_prs_vlan_find(struct mvpp2 *priv, unsigned short tpid, int ai)
+ priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VLAN)
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ match = mvpp2_prs_tcam_data_cmp(&pe, 0, tpid);
+ if (!match)
+ continue;
+@@ -726,7 +753,7 @@ static int mvpp2_prs_vlan_add(struct mvpp2 *priv, unsigned short tpid, int ai,
+ priv->prs_shadow[tid_aux].lu != MVPP2_PRS_LU_VLAN)
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid_aux);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid_aux);
+ ri_bits = mvpp2_prs_sram_ri_get(&pe);
+ if ((ri_bits & MVPP2_PRS_RI_VLAN_MASK) ==
+ MVPP2_PRS_RI_VLAN_DOUBLE)
+@@ -760,7 +787,7 @@ static int mvpp2_prs_vlan_add(struct mvpp2 *priv, unsigned short tpid, int ai,
+
+ mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VLAN);
+ } else {
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ }
+ /* Update ports' mask */
+ mvpp2_prs_tcam_port_map_set(&pe, port_map);
+@@ -800,7 +827,7 @@ static int mvpp2_prs_double_vlan_find(struct mvpp2 *priv, unsigned short tpid1,
+ priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VLAN)
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+
+ match = mvpp2_prs_tcam_data_cmp(&pe, 0, tpid1) &&
+ mvpp2_prs_tcam_data_cmp(&pe, 4, tpid2);
+@@ -849,7 +876,7 @@ static int mvpp2_prs_double_vlan_add(struct mvpp2 *priv, unsigned short tpid1,
+ priv->prs_shadow[tid_aux].lu != MVPP2_PRS_LU_VLAN)
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid_aux);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid_aux);
+ ri_bits = mvpp2_prs_sram_ri_get(&pe);
+ ri_bits &= MVPP2_PRS_RI_VLAN_MASK;
+ if (ri_bits == MVPP2_PRS_RI_VLAN_SINGLE ||
+@@ -880,7 +907,7 @@ static int mvpp2_prs_double_vlan_add(struct mvpp2 *priv, unsigned short tpid1,
+
+ mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VLAN);
+ } else {
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ }
+
+ /* Update ports' mask */
+@@ -1213,8 +1240,8 @@ static void mvpp2_prs_mac_init(struct mvpp2 *priv)
+ /* Create dummy entries for drop all and promiscuous modes */
+ mvpp2_prs_drop_fc(priv);
+ mvpp2_prs_mac_drop_all_set(priv, 0, false);
+- mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_UNI_CAST, false);
+- mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_MULTI_CAST, false);
++ __mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_UNI_CAST, false);
++ __mvpp2_prs_mac_promisc_set(priv, 0, MVPP2_PRS_L2_MULTI_CAST, false);
+ }
+
+ /* Set default entries for various types of dsa packets */
+@@ -1533,12 +1560,6 @@ static int mvpp2_prs_vlan_init(struct platform_device *pdev, struct mvpp2 *priv)
+ struct mvpp2_prs_entry pe;
+ int err;
+
+- priv->prs_double_vlans = devm_kcalloc(&pdev->dev, sizeof(bool),
+- MVPP2_PRS_DBL_VLANS_MAX,
+- GFP_KERNEL);
+- if (!priv->prs_double_vlans)
+- return -ENOMEM;
+-
+ /* Double VLAN: 0x88A8, 0x8100 */
+ err = mvpp2_prs_double_vlan_add(priv, ETH_P_8021AD, ETH_P_8021Q,
+ MVPP2_PRS_PORT_MASK);
+@@ -1941,7 +1962,7 @@ static int mvpp2_prs_vid_range_find(struct mvpp2_port *port, u16 vid, u16 mask)
+ port->priv->prs_shadow[tid].lu != MVPP2_PRS_LU_VID)
+ continue;
+
+- mvpp2_prs_init_from_hw(port->priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(port->priv, &pe, tid);
+
+ mvpp2_prs_tcam_data_byte_get(&pe, 2, &byte[0], &enable[0]);
+ mvpp2_prs_tcam_data_byte_get(&pe, 3, &byte[1], &enable[1]);
+@@ -1970,6 +1991,8 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid)
+
+ memset(&pe, 0, sizeof(pe));
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ /* Scan TCAM and see if entry with this <vid,port> already exists */
+ tid = mvpp2_prs_vid_range_find(port, vid, mask);
+
+@@ -1988,8 +2011,10 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid)
+ MVPP2_PRS_VLAN_FILT_MAX_ENTRY);
+
+ /* There isn't room for a new VID filter */
+- if (tid < 0)
++ if (tid < 0) {
++ spin_unlock_bh(&priv->prs_spinlock);
+ return tid;
++ }
+
+ mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_VID);
+ pe.index = tid;
+@@ -1997,7 +2022,7 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid)
+ /* Mask all ports */
+ mvpp2_prs_tcam_port_map_set(&pe, 0);
+ } else {
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ }
+
+ /* Enable the current port */
+@@ -2019,6 +2044,7 @@ int mvpp2_prs_vid_entry_add(struct mvpp2_port *port, u16 vid)
+ mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VID);
+ mvpp2_prs_hw_write(priv, &pe);
+
++ spin_unlock_bh(&priv->prs_spinlock);
+ return 0;
+ }
+
+@@ -2028,15 +2054,16 @@ void mvpp2_prs_vid_entry_remove(struct mvpp2_port *port, u16 vid)
+ struct mvpp2 *priv = port->priv;
+ int tid;
+
+- /* Scan TCAM and see if entry with this <vid,port> already exist */
+- tid = mvpp2_prs_vid_range_find(port, vid, 0xfff);
++ spin_lock_bh(&priv->prs_spinlock);
+
+- /* No such entry */
+- if (tid < 0)
+- return;
++ /* Invalidate TCAM entry with this <vid,port>, if it exists */
++ tid = mvpp2_prs_vid_range_find(port, vid, 0xfff);
++ if (tid >= 0) {
++ mvpp2_prs_hw_inv(priv, tid);
++ priv->prs_shadow[tid].valid = false;
++ }
+
+- mvpp2_prs_hw_inv(priv, tid);
+- priv->prs_shadow[tid].valid = false;
++ spin_unlock_bh(&priv->prs_spinlock);
+ }
+
+ /* Remove all existing VID filters on this port */
+@@ -2045,6 +2072,8 @@ void mvpp2_prs_vid_remove_all(struct mvpp2_port *port)
+ struct mvpp2 *priv = port->priv;
+ int tid;
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ for (tid = MVPP2_PRS_VID_PORT_FIRST(port->id);
+ tid <= MVPP2_PRS_VID_PORT_LAST(port->id); tid++) {
+ if (priv->prs_shadow[tid].valid) {
+@@ -2052,6 +2081,8 @@ void mvpp2_prs_vid_remove_all(struct mvpp2_port *port)
+ priv->prs_shadow[tid].valid = false;
+ }
+ }
++
++ spin_unlock_bh(&priv->prs_spinlock);
+ }
+
+/* Remove VID filtering entry for this port */
+@@ -2060,10 +2091,14 @@ void mvpp2_prs_vid_disable_filtering(struct mvpp2_port *port)
+ unsigned int tid = MVPP2_PRS_VID_PORT_DFLT(port->id);
+ struct mvpp2 *priv = port->priv;
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ /* Invalidate the guard entry */
+ mvpp2_prs_hw_inv(priv, tid);
+
+ priv->prs_shadow[tid].valid = false;
++
++ spin_unlock_bh(&priv->prs_spinlock);
+ }
+
+ /* Add guard entry that drops packets when no VID is matched on this port */
+@@ -2079,6 +2114,8 @@ void mvpp2_prs_vid_enable_filtering(struct mvpp2_port *port)
+
+ memset(&pe, 0, sizeof(pe));
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ pe.index = tid;
+
+ reg_val = mvpp2_read(priv, MVPP2_MH_REG(port->id));
+@@ -2111,6 +2148,8 @@ void mvpp2_prs_vid_enable_filtering(struct mvpp2_port *port)
+ /* Update shadow table */
+ mvpp2_prs_shadow_set(priv, pe.index, MVPP2_PRS_LU_VID);
+ mvpp2_prs_hw_write(priv, &pe);
++
++ spin_unlock_bh(&priv->prs_spinlock);
+ }
+
+ /* Parser default initialization */
+@@ -2118,6 +2157,20 @@ int mvpp2_prs_default_init(struct platform_device *pdev, struct mvpp2 *priv)
+ {
+ int err, index, i;
+
++ priv->prs_shadow = devm_kcalloc(&pdev->dev, MVPP2_PRS_TCAM_SRAM_SIZE,
++ sizeof(*priv->prs_shadow),
++ GFP_KERNEL);
++ if (!priv->prs_shadow)
++ return -ENOMEM;
++
++ priv->prs_double_vlans = devm_kcalloc(&pdev->dev, sizeof(bool),
++ MVPP2_PRS_DBL_VLANS_MAX,
++ GFP_KERNEL);
++ if (!priv->prs_double_vlans)
++ return -ENOMEM;
++
++ spin_lock_bh(&priv->prs_spinlock);
++
+ /* Enable tcam table */
+ mvpp2_write(priv, MVPP2_PRS_TCAM_CTRL_REG, MVPP2_PRS_TCAM_EN_MASK);
+
+@@ -2136,12 +2189,6 @@ int mvpp2_prs_default_init(struct platform_device *pdev, struct mvpp2 *priv)
+ for (index = 0; index < MVPP2_PRS_TCAM_SRAM_SIZE; index++)
+ mvpp2_prs_hw_inv(priv, index);
+
+- priv->prs_shadow = devm_kcalloc(&pdev->dev, MVPP2_PRS_TCAM_SRAM_SIZE,
+- sizeof(*priv->prs_shadow),
+- GFP_KERNEL);
+- if (!priv->prs_shadow)
+- return -ENOMEM;
+-
+ /* Always start from lookup = 0 */
+ for (index = 0; index < MVPP2_MAX_PORTS; index++)
+ mvpp2_prs_hw_port_init(priv, index, MVPP2_PRS_LU_MH,
+@@ -2158,26 +2205,13 @@ int mvpp2_prs_default_init(struct platform_device *pdev, struct mvpp2 *priv)
+ mvpp2_prs_vid_init(priv);
+
+ err = mvpp2_prs_etype_init(priv);
+- if (err)
+- return err;
+-
+- err = mvpp2_prs_vlan_init(pdev, priv);
+- if (err)
+- return err;
+-
+- err = mvpp2_prs_pppoe_init(priv);
+- if (err)
+- return err;
+-
+- err = mvpp2_prs_ip6_init(priv);
+- if (err)
+- return err;
+-
+- err = mvpp2_prs_ip4_init(priv);
+- if (err)
+- return err;
++ err = err ? : mvpp2_prs_vlan_init(pdev, priv);
++ err = err ? : mvpp2_prs_pppoe_init(priv);
++ err = err ? : mvpp2_prs_ip6_init(priv);
++ err = err ? : mvpp2_prs_ip4_init(priv);
+
+- return 0;
++ spin_unlock_bh(&priv->prs_spinlock);
++ return err;
+ }
+
+ /* Compare MAC DA with tcam entry data */
+@@ -2217,7 +2251,7 @@ mvpp2_prs_mac_da_range_find(struct mvpp2 *priv, int pmap, const u8 *da,
+ (priv->prs_shadow[tid].udf != udf_type))
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ entry_pmap = mvpp2_prs_tcam_port_map_get(&pe);
+
+ if (mvpp2_prs_mac_range_equals(&pe, da, mask) &&
+@@ -2229,7 +2263,8 @@ mvpp2_prs_mac_da_range_find(struct mvpp2 *priv, int pmap, const u8 *da,
+ }
+
+ /* Update parser's mac da entry */
+-int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add)
++static int __mvpp2_prs_mac_da_accept(struct mvpp2_port *port,
++ const u8 *da, bool add)
+ {
+ unsigned char mask[ETH_ALEN] = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff };
+ struct mvpp2 *priv = port->priv;
+@@ -2261,7 +2296,7 @@ int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add)
+ /* Mask all ports */
+ mvpp2_prs_tcam_port_map_set(&pe, 0);
+ } else {
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+ }
+
+ mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_MAC);
+@@ -2317,6 +2352,17 @@ int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add)
+ return 0;
+ }
+
++int mvpp2_prs_mac_da_accept(struct mvpp2_port *port, const u8 *da, bool add)
++{
++ int err;
++
++ spin_lock_bh(&port->priv->prs_spinlock);
++ err = __mvpp2_prs_mac_da_accept(port, da, add);
++ spin_unlock_bh(&port->priv->prs_spinlock);
++
++ return err;
++}
++
+ int mvpp2_prs_update_mac_da(struct net_device *dev, const u8 *da)
+ {
+ struct mvpp2_port *port = netdev_priv(dev);
+@@ -2345,6 +2391,8 @@ void mvpp2_prs_mac_del_all(struct mvpp2_port *port)
+ unsigned long pmap;
+ int index, tid;
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ for (tid = MVPP2_PE_MAC_RANGE_START;
+ tid <= MVPP2_PE_MAC_RANGE_END; tid++) {
+ unsigned char da[ETH_ALEN], da_mask[ETH_ALEN];
+@@ -2354,7 +2402,7 @@ void mvpp2_prs_mac_del_all(struct mvpp2_port *port)
+ (priv->prs_shadow[tid].udf != MVPP2_PRS_UDF_MAC_DEF))
+ continue;
+
+- mvpp2_prs_init_from_hw(priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(priv, &pe, tid);
+
+ pmap = mvpp2_prs_tcam_port_map_get(&pe);
+
+@@ -2375,14 +2423,17 @@ void mvpp2_prs_mac_del_all(struct mvpp2_port *port)
+ continue;
+
+ /* Remove entry from TCAM */
+- mvpp2_prs_mac_da_accept(port, da, false);
++ __mvpp2_prs_mac_da_accept(port, da, false);
+ }
++
++ spin_unlock_bh(&priv->prs_spinlock);
+ }
+
+ int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type)
+ {
+ switch (type) {
+ case MVPP2_TAG_TYPE_EDSA:
++ spin_lock_bh(&priv->prs_spinlock);
+ /* Add port to EDSA entries */
+ mvpp2_prs_dsa_tag_set(priv, port, true,
+ MVPP2_PRS_TAGGED, MVPP2_PRS_EDSA);
+@@ -2393,9 +2444,11 @@ int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type)
+ MVPP2_PRS_TAGGED, MVPP2_PRS_DSA);
+ mvpp2_prs_dsa_tag_set(priv, port, false,
+ MVPP2_PRS_UNTAGGED, MVPP2_PRS_DSA);
++ spin_unlock_bh(&priv->prs_spinlock);
+ break;
+
+ case MVPP2_TAG_TYPE_DSA:
++ spin_lock_bh(&priv->prs_spinlock);
+ /* Add port to DSA entries */
+ mvpp2_prs_dsa_tag_set(priv, port, true,
+ MVPP2_PRS_TAGGED, MVPP2_PRS_DSA);
+@@ -2406,10 +2459,12 @@ int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type)
+ MVPP2_PRS_TAGGED, MVPP2_PRS_EDSA);
+ mvpp2_prs_dsa_tag_set(priv, port, false,
+ MVPP2_PRS_UNTAGGED, MVPP2_PRS_EDSA);
++ spin_unlock_bh(&priv->prs_spinlock);
+ break;
+
+ case MVPP2_TAG_TYPE_MH:
+ case MVPP2_TAG_TYPE_NONE:
++ spin_lock_bh(&priv->prs_spinlock);
+ /* Remove port form EDSA and DSA entries */
+ mvpp2_prs_dsa_tag_set(priv, port, false,
+ MVPP2_PRS_TAGGED, MVPP2_PRS_DSA);
+@@ -2419,6 +2474,7 @@ int mvpp2_prs_tag_mode_set(struct mvpp2 *priv, int port, int type)
+ MVPP2_PRS_TAGGED, MVPP2_PRS_EDSA);
+ mvpp2_prs_dsa_tag_set(priv, port, false,
+ MVPP2_PRS_UNTAGGED, MVPP2_PRS_EDSA);
++ spin_unlock_bh(&priv->prs_spinlock);
+ break;
+
+ default:
+@@ -2437,11 +2493,15 @@ int mvpp2_prs_add_flow(struct mvpp2 *priv, int flow, u32 ri, u32 ri_mask)
+
+ memset(&pe, 0, sizeof(pe));
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ tid = mvpp2_prs_tcam_first_free(priv,
+ MVPP2_PE_LAST_FREE_TID,
+ MVPP2_PE_FIRST_FREE_TID);
+- if (tid < 0)
++ if (tid < 0) {
++ spin_unlock_bh(&priv->prs_spinlock);
+ return tid;
++ }
+
+ pe.index = tid;
+
+@@ -2461,6 +2521,7 @@ int mvpp2_prs_add_flow(struct mvpp2 *priv, int flow, u32 ri, u32 ri_mask)
+ mvpp2_prs_tcam_port_map_set(&pe, MVPP2_PRS_PORT_MASK);
+ mvpp2_prs_hw_write(priv, &pe);
+
++ spin_unlock_bh(&priv->prs_spinlock);
+ return 0;
+ }
+
+@@ -2472,6 +2533,8 @@ int mvpp2_prs_def_flow(struct mvpp2_port *port)
+
+ memset(&pe, 0, sizeof(pe));
+
++ spin_lock_bh(&port->priv->prs_spinlock);
++
+ tid = mvpp2_prs_flow_find(port->priv, port->id);
+
+ /* Such entry not exist */
+@@ -2480,8 +2543,10 @@ int mvpp2_prs_def_flow(struct mvpp2_port *port)
+ tid = mvpp2_prs_tcam_first_free(port->priv,
+ MVPP2_PE_LAST_FREE_TID,
+ MVPP2_PE_FIRST_FREE_TID);
+- if (tid < 0)
++ if (tid < 0) {
++ spin_unlock_bh(&port->priv->prs_spinlock);
+ return tid;
++ }
+
+ pe.index = tid;
+
+@@ -2492,13 +2557,14 @@ int mvpp2_prs_def_flow(struct mvpp2_port *port)
+ /* Update shadow table */
+ mvpp2_prs_shadow_set(port->priv, pe.index, MVPP2_PRS_LU_FLOWS);
+ } else {
+- mvpp2_prs_init_from_hw(port->priv, &pe, tid);
++ __mvpp2_prs_init_from_hw(port->priv, &pe, tid);
+ }
+
+ mvpp2_prs_tcam_lu_set(&pe, MVPP2_PRS_LU_FLOWS);
+ mvpp2_prs_tcam_port_map_set(&pe, (1 << port->id));
+ mvpp2_prs_hw_write(port->priv, &pe);
+
++ spin_unlock_bh(&port->priv->prs_spinlock);
+ return 0;
+ }
+
+@@ -2509,11 +2575,14 @@ int mvpp2_prs_hits(struct mvpp2 *priv, int index)
+ if (index > MVPP2_PRS_TCAM_SRAM_SIZE)
+ return -EINVAL;
+
++ spin_lock_bh(&priv->prs_spinlock);
++
+ mvpp2_write(priv, MVPP2_PRS_TCAM_HIT_IDX_REG, index);
+
+ val = mvpp2_read(priv, MVPP2_PRS_TCAM_HIT_CNT_REG);
+
+ val &= MVPP2_PRS_TCAM_HIT_CNT_MASK;
+
++ spin_unlock_bh(&priv->prs_spinlock);
+ return val;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index cd0d7b7774f1af..6575c422635b76 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -2634,7 +2634,7 @@ static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
+ rvupf_write64(rvu, RVU_PF_VFPF_MBOX_INTX(1), intr);
+
+ rvu_queue_work(&rvu->afvf_wq_info, 64, vfs, intr);
+- vfs -= 64;
++ vfs = 64;
+ }
+
+ intr = rvupf_read64(rvu, RVU_PF_VFPF_MBOX_INTX(0));
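Editor's note, my reading of the one-line fix above: the second mailbox register covers VFs 64 and up, so after queueing work for those `vfs - 64` upper VFs the handler must still process exactly 64 VFs through the first register; the old `vfs -= 64` under-counted whenever the total was not 128. A worked sketch:

    #include <stdio.h>

    int main(void)
    {
        int vfs = 96; /* total VFs; VF64..VF95 sit behind the second register */

        if (vfs > 64) {
            printf("upper block handles %d VFs\n", vfs - 64); /* 32 */
            vfs = 64; /* first register always covers VF0..VF63 */
        }
        printf("lower block handles %d VFs\n", vfs); /* 64; old code printed 32 */
        return 0;
    }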
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
+index 7498ab429963d4..06f778baaeef2d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_devlink.c
+@@ -207,7 +207,7 @@ static void rvu_nix_unregister_interrupts(struct rvu *rvu)
+ rvu->irq_allocated[offs + NIX_AF_INT_VEC_RVU] = false;
+ }
+
+- for (i = NIX_AF_INT_VEC_AF_ERR; i < NIX_AF_INT_VEC_CNT; i++)
++ for (i = NIX_AF_INT_VEC_GEN; i < NIX_AF_INT_VEC_CNT; i++)
+ if (rvu->irq_allocated[offs + i]) {
+ free_irq(pci_irq_vector(rvu->pdev, offs + i), rvu_dl);
+ rvu->irq_allocated[offs + i] = false;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+index 64b62ed17b07a7..31eb99f09c63c1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/params.c
+@@ -423,7 +423,7 @@ u8 mlx5e_shampo_get_log_pkt_per_rsrv(struct mlx5_core_dev *mdev,
+ struct mlx5e_params *params)
+ {
+ u32 resrv_size = BIT(mlx5e_shampo_get_log_rsrv_size(mdev, params)) *
+- PAGE_SIZE;
++ MLX5E_SHAMPO_WQ_BASE_RESRV_SIZE;
+
+ return order_base_2(DIV_ROUND_UP(resrv_size, params->sw_mtu));
+ }
+@@ -827,7 +827,8 @@ static u32 mlx5e_shampo_get_log_cq_size(struct mlx5_core_dev *mdev,
+ struct mlx5e_params *params,
+ struct mlx5e_xsk_param *xsk)
+ {
+- int rsrv_size = BIT(mlx5e_shampo_get_log_rsrv_size(mdev, params)) * PAGE_SIZE;
++ int rsrv_size = BIT(mlx5e_shampo_get_log_rsrv_size(mdev, params)) *
++ MLX5E_SHAMPO_WQ_BASE_RESRV_SIZE;
+ u16 num_strides = BIT(mlx5e_mpwqe_get_log_num_strides(mdev, params, xsk));
+ int pkt_per_rsrv = BIT(mlx5e_shampo_get_log_pkt_per_rsrv(mdev, params));
+ u8 log_stride_sz = mlx5e_mpwqe_get_log_stride_size(mdev, params, xsk);
+@@ -1036,7 +1037,8 @@ u32 mlx5e_shampo_hd_per_wqe(struct mlx5_core_dev *mdev,
+ struct mlx5e_params *params,
+ struct mlx5e_rq_param *rq_param)
+ {
+- int resv_size = BIT(mlx5e_shampo_get_log_rsrv_size(mdev, params)) * PAGE_SIZE;
++ int resv_size = BIT(mlx5e_shampo_get_log_rsrv_size(mdev, params)) *
++ MLX5E_SHAMPO_WQ_BASE_RESRV_SIZE;
+ u16 num_strides = BIT(mlx5e_mpwqe_get_log_num_strides(mdev, params, NULL));
+ int pkt_per_resv = BIT(mlx5e_shampo_get_log_pkt_per_rsrv(mdev, params));
+ u8 log_stride_sz = mlx5e_mpwqe_get_log_stride_size(mdev, params, NULL);
+diff --git a/drivers/net/phy/broadcom.c b/drivers/net/phy/broadcom.c
+index ddded162c44c13..d2a9cf3fde5ace 100644
+--- a/drivers/net/phy/broadcom.c
++++ b/drivers/net/phy/broadcom.c
+@@ -859,7 +859,7 @@ static int brcm_fet_config_init(struct phy_device *phydev)
+ return reg;
+
+ /* Unmask events we are interested in and mask interrupts globally. */
+- if (phydev->phy_id == PHY_ID_BCM5221)
++ if (phydev->drv->phy_id == PHY_ID_BCM5221)
+ reg = MII_BRCM_FET_IR_ENABLE |
+ MII_BRCM_FET_IR_MASK;
+ else
+@@ -888,7 +888,7 @@ static int brcm_fet_config_init(struct phy_device *phydev)
+ return err;
+ }
+
+- if (phydev->phy_id != PHY_ID_BCM5221) {
++ if (phydev->drv->phy_id != PHY_ID_BCM5221) {
+ /* Set the LED mode */
+ reg = __phy_read(phydev, MII_BRCM_FET_SHDW_AUXMODE4);
+ if (reg < 0) {
+@@ -1009,7 +1009,7 @@ static int brcm_fet_suspend(struct phy_device *phydev)
+ return err;
+ }
+
+- if (phydev->phy_id == PHY_ID_BCM5221)
++ if (phydev->drv->phy_id == PHY_ID_BCM5221)
+ /* Force Low Power Mode with clock enabled */
+ reg = BCM5221_SHDW_AM4_EN_CLK_LPM | BCM5221_SHDW_AM4_FORCE_LPM;
+ else
+diff --git a/drivers/net/usb/rndis_host.c b/drivers/net/usb/rndis_host.c
+index 7b3739b29c8f72..bb0bf141587274 100644
+--- a/drivers/net/usb/rndis_host.c
++++ b/drivers/net/usb/rndis_host.c
+@@ -630,6 +630,16 @@ static const struct driver_info zte_rndis_info = {
+ .tx_fixup = rndis_tx_fixup,
+ };
+
++static const struct driver_info wwan_rndis_info = {
++ .description = "Mobile Broadband RNDIS device",
++ .flags = FLAG_WWAN | FLAG_POINTTOPOINT | FLAG_FRAMING_RN | FLAG_NO_SETINT,
++ .bind = rndis_bind,
++ .unbind = rndis_unbind,
++ .status = rndis_status,
++ .rx_fixup = rndis_rx_fixup,
++ .tx_fixup = rndis_tx_fixup,
++};
++
+ /*-------------------------------------------------------------------------*/
+
+ static const struct usb_device_id products [] = {
+@@ -666,9 +676,11 @@ static const struct usb_device_id products [] = {
+ USB_INTERFACE_INFO(USB_CLASS_WIRELESS_CONTROLLER, 1, 3),
+ .driver_info = (unsigned long) &rndis_info,
+ }, {
+- /* Novatel Verizon USB730L */
++ /* Mobile Broadband Modem, seen in Novatel Verizon USB730L and
++ * Telit FN990A (RNDIS)
++ */
+ USB_INTERFACE_INFO(USB_CLASS_MISC, 4, 1),
+- .driver_info = (unsigned long) &rndis_info,
++ .driver_info = (unsigned long)&wwan_rndis_info,
+ },
+ { }, // END
+ };
+diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
+index aeab2308b15008..724b93aa4f7eb3 100644
+--- a/drivers/net/usb/usbnet.c
++++ b/drivers/net/usb/usbnet.c
+@@ -530,7 +530,8 @@ static int rx_submit (struct usbnet *dev, struct urb *urb, gfp_t flags)
+ netif_device_present (dev->net) &&
+ test_bit(EVENT_DEV_OPEN, &dev->flags) &&
+ !test_bit (EVENT_RX_HALT, &dev->flags) &&
+- !test_bit (EVENT_DEV_ASLEEP, &dev->flags)) {
++ !test_bit (EVENT_DEV_ASLEEP, &dev->flags) &&
++ !usbnet_going_away(dev)) {
+ switch (retval = usb_submit_urb (urb, GFP_ATOMIC)) {
+ case -EPIPE:
+ usbnet_defer_kevent (dev, EVENT_RX_HALT);
+@@ -551,8 +552,7 @@ static int rx_submit (struct usbnet *dev, struct urb *urb, gfp_t flags)
+ tasklet_schedule (&dev->bh);
+ break;
+ case 0:
+- if (!usbnet_going_away(dev))
+- __usbnet_queue_skb(&dev->rxq, skb, rx_start);
++ __usbnet_queue_skb(&dev->rxq, skb, rx_start);
+ }
+ } else {
+ netif_dbg(dev, ifdown, dev->net, "rx: stopped\n");
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+index 8a1e3376424487..cfdd92564060af 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c
+@@ -1167,6 +1167,7 @@ static int brcmf_ops_sdio_suspend(struct device *dev)
+ struct brcmf_bus *bus_if;
+ struct brcmf_sdio_dev *sdiodev;
+ mmc_pm_flag_t sdio_flags;
++ bool cap_power_off;
+ int ret = 0;
+
+ func = container_of(dev, struct sdio_func, dev);
+@@ -1174,19 +1175,23 @@ static int brcmf_ops_sdio_suspend(struct device *dev)
+ if (func->num != 1)
+ return 0;
+
++ cap_power_off = !!(func->card->host->caps & MMC_CAP_POWER_OFF_CARD);
+
+ bus_if = dev_get_drvdata(dev);
+ sdiodev = bus_if->bus_priv.sdio;
+
+- if (sdiodev->wowl_enabled) {
++ if (sdiodev->wowl_enabled || !cap_power_off) {
+ brcmf_sdiod_freezer_on(sdiodev);
+ brcmf_sdio_wd_timer(sdiodev->bus, 0);
+
+ sdio_flags = MMC_PM_KEEP_POWER;
+- if (sdiodev->settings->bus.sdio.oob_irq_supported)
+- enable_irq_wake(sdiodev->settings->bus.sdio.oob_irq_nr);
+- else
+- sdio_flags |= MMC_PM_WAKE_SDIO_IRQ;
++
++ if (sdiodev->wowl_enabled) {
++ if (sdiodev->settings->bus.sdio.oob_irq_supported)
++ enable_irq_wake(sdiodev->settings->bus.sdio.oob_irq_nr);
++ else
++ sdio_flags |= MMC_PM_WAKE_SDIO_IRQ;
++ }
+
+ if (sdio_set_host_pm_flags(sdiodev->func1, sdio_flags))
+ brcmf_err("Failed to set pm_flags %x\n", sdio_flags);
+@@ -1208,18 +1213,19 @@ static int brcmf_ops_sdio_resume(struct device *dev)
+ struct brcmf_sdio_dev *sdiodev = bus_if->bus_priv.sdio;
+ struct sdio_func *func = container_of(dev, struct sdio_func, dev);
+ int ret = 0;
++ bool cap_power_off = !!(func->card->host->caps & MMC_CAP_POWER_OFF_CARD);
+
+ brcmf_dbg(SDIO, "Enter: F%d\n", func->num);
+ if (func->num != 2)
+ return 0;
+
+- if (!sdiodev->wowl_enabled) {
++ if (!sdiodev->wowl_enabled && cap_power_off) {
+ /* bus was powered off and device removed, probe again */
+ ret = brcmf_sdiod_probe(sdiodev);
+ if (ret)
+ brcmf_err("Failed to probe device on resume\n");
+ } else {
+- if (sdiodev->settings->bus.sdio.oob_irq_supported)
++ if (sdiodev->wowl_enabled && sdiodev->settings->bus.sdio.oob_irq_supported)
+ disable_irq_wake(sdiodev->settings->bus.sdio.oob_irq_nr);
+
+ brcmf_sdiod_freezer_off(sdiodev);
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+index fb2ea38e89acab..6594216f873c47 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/dbg.c
+@@ -558,41 +558,71 @@ static void iwl_dump_prph(struct iwl_fw_runtime *fwrt,
+ }
+
+ /*
+- * alloc_sgtable - allocates scallerlist table in the given size,
+- * fills it with pages and returns it
++ * alloc_sgtable - allocates (chained) scatterlist in the given size,
++ * fills it with pages and returns it
+ * @size: the size (in bytes) of the table
+-*/
+-static struct scatterlist *alloc_sgtable(int size)
++ */
++static struct scatterlist *alloc_sgtable(ssize_t size)
+ {
+- int alloc_size, nents, i;
+- struct page *new_page;
+- struct scatterlist *iter;
+- struct scatterlist *table;
++ struct scatterlist *result = NULL, *prev;
++ int nents, i, n_prev;
+
+ nents = DIV_ROUND_UP(size, PAGE_SIZE);
+- table = kcalloc(nents, sizeof(*table), GFP_KERNEL);
+- if (!table)
+- return NULL;
+- sg_init_table(table, nents);
+- iter = table;
+- for_each_sg(table, iter, sg_nents(table), i) {
+- new_page = alloc_page(GFP_KERNEL);
+- if (!new_page) {
+- /* release all previous allocated pages in the table */
+- iter = table;
+- for_each_sg(table, iter, sg_nents(table), i) {
+- new_page = sg_page(iter);
+- if (new_page)
+- __free_page(new_page);
+- }
+- kfree(table);
++
++#define N_ENTRIES_PER_PAGE (PAGE_SIZE / sizeof(*result))
++ /*
++ * We need an additional entry for table chaining,
++ * this ensures the loop can finish, i.e. we can
++ * fit at least two entries per page (obviously,
++ * many more really fit.)
++ */
++ BUILD_BUG_ON(N_ENTRIES_PER_PAGE < 2);
++
++ while (nents > 0) {
++ struct scatterlist *new, *iter;
++ int n_fill, n_alloc;
++
++ if (nents <= N_ENTRIES_PER_PAGE) {
++ /* last needed table */
++ n_fill = nents;
++ n_alloc = nents;
++ nents = 0;
++ } else {
++ /* fill a page with entries */
++ n_alloc = N_ENTRIES_PER_PAGE;
++ /* reserve one for chaining */
++ n_fill = n_alloc - 1;
++ nents -= n_fill;
++ }
++
++ new = kcalloc(n_alloc, sizeof(*new), GFP_KERNEL);
++ if (!new) {
++ if (result)
++ _devcd_free_sgtable(result);
+ return NULL;
+ }
+- alloc_size = min_t(int, size, PAGE_SIZE);
+- size -= PAGE_SIZE;
+- sg_set_page(iter, new_page, alloc_size, 0);
++ sg_init_table(new, n_alloc);
++
++ if (!result)
++ result = new;
++ else
++ sg_chain(prev, n_prev, new);
++ prev = new;
++ n_prev = n_alloc;
++
++ for_each_sg(new, iter, n_fill, i) {
++ struct page *new_page = alloc_page(GFP_KERNEL);
++
++ if (!new_page) {
++ _devcd_free_sgtable(result);
++ return NULL;
++ }
++
++ sg_set_page(iter, new_page, PAGE_SIZE, 0);
++ }
+ }
+- return table;
++
++ return result;
+ }
+
+ static void iwl_fw_get_prph_len(struct iwl_fw_runtime *fwrt,
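Editor's note: the rewritten alloc_sgtable() caps each scatterlist table at one page and, in every non-final table, reserves the last slot for sg_chain() to point at the next table. A userspace sketch of just that sizing loop, assuming 4 KiB pages and a made-up 40-byte entry size:

    #include <stdio.h>

    #define PAGE_SIZE 4096
    #define ENTRY_SIZE 40 /* stand-in for sizeof(struct scatterlist) */
    #define N_ENTRIES_PER_PAGE (PAGE_SIZE / ENTRY_SIZE)

    int main(void)
    {
        int nents = 250; /* entries still needed */

        while (nents > 0) {
            int n_fill, n_alloc;

            if (nents <= N_ENTRIES_PER_PAGE) {
                n_fill = n_alloc = nents; /* last table, no chain slot needed */
                nents = 0;
            } else {
                n_alloc = N_ENTRIES_PER_PAGE;
                n_fill = n_alloc - 1;     /* last slot chains to the next table */
                nents -= n_fill;
            }
            printf("table: alloc %d entries, fill %d\n", n_alloc, n_fill);
        }
        return 0;
    }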
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+index 65f8933c34b420..0b52d77f578375 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c
+@@ -995,7 +995,7 @@ iwl_mvm_decode_he_phy_ru_alloc(struct iwl_mvm_rx_phy_data *phy_data,
+ */
+ u8 ru = le32_get_bits(phy_data->d1, IWL_RX_PHY_DATA1_HE_RU_ALLOC_MASK);
+ u32 rate_n_flags = phy_data->rate_n_flags;
+- u32 he_type = rate_n_flags & RATE_MCS_HE_TYPE_MSK_V1;
++ u32 he_type = rate_n_flags & RATE_MCS_HE_TYPE_MSK;
+ u8 offs = 0;
+
+ rx_status->bw = RATE_INFO_BW_HE_RU;
+@@ -1050,13 +1050,13 @@ iwl_mvm_decode_he_phy_ru_alloc(struct iwl_mvm_rx_phy_data *phy_data,
+
+ if (he_mu)
+ he_mu->flags2 |=
+- le16_encode_bits(FIELD_GET(RATE_MCS_CHAN_WIDTH_MSK_V1,
++ le16_encode_bits(FIELD_GET(RATE_MCS_CHAN_WIDTH_MSK,
+ rate_n_flags),
+ IEEE80211_RADIOTAP_HE_MU_FLAGS2_BW_FROM_SIG_A_BW);
+- else if (he_type == RATE_MCS_HE_TYPE_TRIG_V1)
++ else if (he_type == RATE_MCS_HE_TYPE_TRIG)
+ he->data6 |=
+ cpu_to_le16(IEEE80211_RADIOTAP_HE_DATA6_TB_PPDU_BW_KNOWN) |
+- le16_encode_bits(FIELD_GET(RATE_MCS_CHAN_WIDTH_MSK_V1,
++ le16_encode_bits(FIELD_GET(RATE_MCS_CHAN_WIDTH_MSK,
+ rate_n_flags),
+ IEEE80211_RADIOTAP_HE_DATA6_TB_PPDU_BW);
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+index e2dfd3670c4c93..6a3629f71caaa7 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
+@@ -805,6 +805,7 @@ int mt7921_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
+ msta->deflink.wcid.phy_idx = mvif->bss_conf.mt76.band_idx;
+ msta->deflink.wcid.tx_info |= MT_WCID_TX_INFO_SET;
+ msta->deflink.last_txs = jiffies;
++ msta->deflink.sta = msta;
+
+ ret = mt76_connac_pm_wake(&dev->mphy, &dev->pm);
+ if (ret)
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index ce3d8197b026a6..c7eba60897d276 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -3117,7 +3117,6 @@ __mt7925_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
+
+ .idx = idx,
+ .env = env_cap,
+- .acpi_conf = mt792x_acpi_get_flags(&dev->phy),
+ };
+ int ret, valid_cnt = 0;
+ u8 i, *pos;
+diff --git a/drivers/ntb/hw/intel/ntb_hw_gen3.c b/drivers/ntb/hw/intel/ntb_hw_gen3.c
+index ffcfc3e02c3532..a5aa96a31f4a64 100644
+--- a/drivers/ntb/hw/intel/ntb_hw_gen3.c
++++ b/drivers/ntb/hw/intel/ntb_hw_gen3.c
+@@ -215,6 +215,9 @@ static int gen3_init_ntb(struct intel_ntb_dev *ndev)
+ }
+
+ ndev->db_valid_mask = BIT_ULL(ndev->db_count) - 1;
++ /* Make sure we are not using DB's used for link status */
++ if (ndev->hwerr_flags & NTB_HWERR_MSIX_VECTOR32_BAD)
++ ndev->db_valid_mask &= ~ndev->db_link_mask;
+
+ ndev->reg->db_iowrite(ndev->db_valid_mask,
+ ndev->self_mmio +
+diff --git a/drivers/ntb/hw/mscc/ntb_hw_switchtec.c b/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
+index ad1786be2554b3..f851397b65d6e5 100644
+--- a/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
++++ b/drivers/ntb/hw/mscc/ntb_hw_switchtec.c
+@@ -288,7 +288,7 @@ static int switchtec_ntb_mw_set_trans(struct ntb_dev *ntb, int pidx, int widx,
+ if (size != 0 && xlate_pos < 12)
+ return -EINVAL;
+
+- if (!IS_ALIGNED(addr, BIT_ULL(xlate_pos))) {
++ if (xlate_pos >= 0 && !IS_ALIGNED(addr, BIT_ULL(xlate_pos))) {
+ /*
+ * In certain circumstances we can get a buffer that is
+ * not aligned to its size. (Most of the time
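Editor's note: the earlier range check only rejects `xlate_pos < 12` when size is nonzero, so a negative xlate_pos can reach this point, and shifting by a negative count in BIT_ULL() is undefined behaviour in C; the added `xlate_pos >= 0` test short-circuits so the shift is never evaluated in that case. A minimal sketch of the guarded-shift idiom, with simplified versions of the kernel macros:

    #include <stdio.h>
    #include <stdint.h>

    #define BIT_ULL(n) (1ULL << (n))
    #define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)

    static int check(uint64_t addr, int xlate_pos)
    {
        /* Shifting by a negative count is UB, so test the sign first. */
        if (xlate_pos >= 0 && !IS_ALIGNED(addr, BIT_ULL(xlate_pos)))
            return -1;
        return 0;
    }

    int main(void)
    {
        printf("%d\n", check(0x1000, 12)); /* aligned: 0 */
        printf("%d\n", check(0x1001, 12)); /* misaligned: -1 */
        printf("%d\n", check(0x1001, -1)); /* negative pos: shift skipped, 0 */
        return 0;
    }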
+diff --git a/drivers/ntb/test/ntb_perf.c b/drivers/ntb/test/ntb_perf.c
+index 72bc1d017a46ee..dfd175f79e8f08 100644
+--- a/drivers/ntb/test/ntb_perf.c
++++ b/drivers/ntb/test/ntb_perf.c
+@@ -839,10 +839,8 @@ static int perf_copy_chunk(struct perf_thread *pthr,
+ dma_set_unmap(tx, unmap);
+
+ ret = dma_submit_error(dmaengine_submit(tx));
+- if (ret) {
+- dmaengine_unmap_put(unmap);
++ if (ret)
+ goto err_free_resource;
+- }
+
+ dmaengine_unmap_put(unmap);
+
+diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
+index e4daac9c244015..a1b3c538a4bd2e 100644
+--- a/drivers/nvme/host/ioctl.c
++++ b/drivers/nvme/host/ioctl.c
+@@ -141,7 +141,7 @@ static int nvme_map_user_request(struct request *req, u64 ubuffer,
+ struct iov_iter iter;
+
+ /* fixedbufs is only for non-vectored io */
+- if (WARN_ON_ONCE(flags & NVME_IOCTL_VEC)) {
++ if (flags & NVME_IOCTL_VEC) {
+ ret = -EINVAL;
+ goto out;
+ }
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index 1d3205f08af847..af45a1b865ee10 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -1413,9 +1413,20 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req)
+ struct nvme_dev *dev = nvmeq->dev;
+ struct request *abort_req;
+ struct nvme_command cmd = { };
++ struct pci_dev *pdev = to_pci_dev(dev->dev);
+ u32 csts = readl(dev->bar + NVME_REG_CSTS);
+ u8 opcode;
+
++ /*
++ * Shutdown the device immediately if we see it is disconnected. This
++ * unblocks PCIe error handling if the nvme driver is waiting in
++ * error_resume for a device that has been removed. We can't unbind the
++ * driver while the driver's error callback is waiting to complete, so
++ * we're relying on a timeout to break that deadlock if a removal
++ * occurs while reset work is running.
++ */
++ if (pci_dev_is_disconnected(pdev))
++ nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_DELETING);
+ if (nvme_state_terminal(&dev->ctrl))
+ goto disable;
+
+@@ -1423,7 +1434,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req)
+ * the recovery mechanism will surely fail.
+ */
+ mb();
+- if (pci_channel_offline(to_pci_dev(dev->dev)))
++ if (pci_channel_offline(pdev))
+ return BLK_EH_RESET_TIMER;
+
+ /*
+@@ -1984,6 +1995,18 @@ static void nvme_map_cmb(struct nvme_dev *dev)
+ if (offset > bar_size)
+ return;
+
++ /*
++ * Controllers may support a CMB size larger than their BAR, for
++ * example, due to being behind a bridge. Reduce the CMB to the
++ * reported size of the BAR
++ */
++ size = min(size, bar_size - offset);
++
++ if (!IS_ALIGNED(size, memremap_compat_align()) ||
++ !IS_ALIGNED(pci_resource_start(pdev, bar),
++ memremap_compat_align()))
++ return;
++
+ /*
+ * Tell the controller about the host side address mapping the CMB,
+ * and enable CMB decoding for the NVMe 1.4+ scheme:
+@@ -1994,17 +2017,10 @@ static void nvme_map_cmb(struct nvme_dev *dev)
+ dev->bar + NVME_REG_CMBMSC);
+ }
+
+- /*
+- * Controllers may support a CMB size larger than their BAR,
+- * for example, due to being behind a bridge. Reduce the CMB to
+- * the reported size of the BAR
+- */
+- if (size > bar_size - offset)
+- size = bar_size - offset;
+-
+ if (pci_p2pdma_add_resource(pdev, bar, size, offset)) {
+ dev_warn(dev->ctrl.device,
+ "failed to register the CMB\n");
++ hi_lo_writeq(0, dev->bar + NVME_REG_CMBMSC);
+ return;
+ }
+
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index eeb05b7bc0fd01..854aa6a070ca87 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -2736,6 +2736,7 @@ static int nvme_tcp_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
+ {
+ struct nvme_tcp_queue *queue = hctx->driver_data;
+ struct sock *sk = queue->sock->sk;
++ int ret;
+
+ if (!test_bit(NVME_TCP_Q_LIVE, &queue->flags))
+ return 0;
+@@ -2743,9 +2744,9 @@ static int nvme_tcp_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
+ set_bit(NVME_TCP_Q_POLLING, &queue->flags);
+ if (sk_can_busy_loop(sk) && skb_queue_empty_lockless(&sk->sk_receive_queue))
+ sk_busy_loop(sk, true);
+- nvme_tcp_try_recv(queue);
++ ret = nvme_tcp_try_recv(queue);
+ clear_bit(NVME_TCP_Q_POLLING, &queue->flags);
+- return queue->nr_cqe;
++ return ret < 0 ? ret : queue->nr_cqe;
+ }
+
+ static int nvme_tcp_get_address(struct nvme_ctrl *ctrl, char *buf, int size)
+diff --git a/drivers/nvme/target/debugfs.c b/drivers/nvme/target/debugfs.c
+index 220c7391fc19ad..c6571fbd35e30e 100644
+--- a/drivers/nvme/target/debugfs.c
++++ b/drivers/nvme/target/debugfs.c
+@@ -78,7 +78,7 @@ static int nvmet_ctrl_state_show(struct seq_file *m, void *p)
+ bool sep = false;
+ int i;
+
+- for (i = 0; i < 7; i++) {
++ for (i = 0; i < ARRAY_SIZE(csts_state_names); i++) {
+ int state = BIT(i);
+
+ if (!(ctrl->csts & state))
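Editor's note: replacing the hardcoded 7 with ARRAY_SIZE(csts_state_names) keeps the loop bound correct if the name table ever changes. In plain C the same idiom is a one-line macro (the kernel's version additionally refuses to compile when handed a pointer instead of an array); the table contents below are illustrative, not the actual csts_state_names:

    #include <stdio.h>
    #include <stddef.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    static const char *const state_names[] = {
        "ready", "cfs", "shst", "nssro", "pp",
    };

    int main(void)
    {
        for (size_t i = 0; i < ARRAY_SIZE(state_names); i++)
            printf("bit %zu: %s\n", i, state_names[i]);
        return 0;
    }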
+diff --git a/drivers/pci/controller/cadence/pcie-cadence-ep.c b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+index e0cc4560dfde7f..0bf4cde34f5171 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence-ep.c
++++ b/drivers/pci/controller/cadence/pcie-cadence-ep.c
+@@ -352,8 +352,7 @@ static void cdns_pcie_ep_assert_intx(struct cdns_pcie_ep *ep, u8 fn, u8 intx,
+ spin_unlock_irqrestore(&ep->lock, flags);
+
+ offset = CDNS_PCIE_NORMAL_MSG_ROUTING(MSG_ROUTING_LOCAL) |
+- CDNS_PCIE_NORMAL_MSG_CODE(msg_code) |
+- CDNS_PCIE_MSG_NO_DATA;
++ CDNS_PCIE_NORMAL_MSG_CODE(msg_code);
+ writel(0, ep->irq_cpu_addr + offset);
+ }
+
+diff --git a/drivers/pci/controller/cadence/pcie-cadence.h b/drivers/pci/controller/cadence/pcie-cadence.h
+index f5eeff834ec192..39ee9945c903ec 100644
+--- a/drivers/pci/controller/cadence/pcie-cadence.h
++++ b/drivers/pci/controller/cadence/pcie-cadence.h
+@@ -246,7 +246,7 @@ struct cdns_pcie_rp_ib_bar {
+ #define CDNS_PCIE_NORMAL_MSG_CODE_MASK GENMASK(15, 8)
+ #define CDNS_PCIE_NORMAL_MSG_CODE(code) \
+ (((code) << 8) & CDNS_PCIE_NORMAL_MSG_CODE_MASK)
+-#define CDNS_PCIE_MSG_NO_DATA BIT(16)
++#define CDNS_PCIE_MSG_DATA BIT(16)
+
+ struct cdns_pcie;
+
+diff --git a/drivers/pci/controller/dwc/pcie-designware-ep.c b/drivers/pci/controller/dwc/pcie-designware-ep.c
+index b58e89ea566b8d..dea19250598a66 100644
+--- a/drivers/pci/controller/dwc/pcie-designware-ep.c
++++ b/drivers/pci/controller/dwc/pcie-designware-ep.c
+@@ -755,6 +755,7 @@ int dw_pcie_ep_init_registers(struct dw_pcie_ep *ep)
+ if (ret)
+ return ret;
+
++ ret = -ENOMEM;
+ if (!ep->ib_window_map) {
+ ep->ib_window_map = devm_bitmap_zalloc(dev, pci->num_ib_windows,
+ GFP_KERNEL);
+diff --git a/drivers/pci/controller/dwc/pcie-histb.c b/drivers/pci/controller/dwc/pcie-histb.c
+index 7a11c618b9d9c4..5538e5bf99fb68 100644
+--- a/drivers/pci/controller/dwc/pcie-histb.c
++++ b/drivers/pci/controller/dwc/pcie-histb.c
+@@ -409,16 +409,21 @@ static int histb_pcie_probe(struct platform_device *pdev)
+ ret = histb_pcie_host_enable(pp);
+ if (ret) {
+ dev_err(dev, "failed to enable host\n");
+- return ret;
++ goto err_exit_phy;
+ }
+
+ ret = dw_pcie_host_init(pp);
+ if (ret) {
+ dev_err(dev, "failed to initialize host\n");
+- return ret;
++ goto err_exit_phy;
+ }
+
+ return 0;
++
++err_exit_phy:
++ phy_exit(hipcie->phy);
++
++ return ret;
+ }
+
+ static void histb_pcie_remove(struct platform_device *pdev)
+@@ -427,8 +432,7 @@ static void histb_pcie_remove(struct platform_device *pdev)
+
+ histb_pcie_host_disable(hipcie);
+
+- if (hipcie->phy)
+- phy_exit(hipcie->phy);
++ phy_exit(hipcie->phy);
+ }
+
+ static const struct of_device_id histb_pcie_of_match[] = {
+diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
+index 9321280f6edbab..582fa110708781 100644
+--- a/drivers/pci/controller/pcie-brcmstb.c
++++ b/drivers/pci/controller/pcie-brcmstb.c
+@@ -403,10 +403,10 @@ static int brcm_pcie_set_ssc(struct brcm_pcie *pcie)
+ static void brcm_pcie_set_gen(struct brcm_pcie *pcie, int gen)
+ {
+ u16 lnkctl2 = readw(pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCTL2);
+- u32 lnkcap = readl(pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCAP);
++ u32 lnkcap = readl(pcie->base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
+
+ lnkcap = (lnkcap & ~PCI_EXP_LNKCAP_SLS) | gen;
+- writel(lnkcap, pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCAP);
++ writel(lnkcap, pcie->base + PCIE_RC_CFG_PRIV1_LINK_CAPABILITY);
+
+ lnkctl2 = (lnkctl2 & ~0xf) | gen;
+ writew(lnkctl2, pcie->base + BRCM_PCIE_CAP_REGS + PCI_EXP_LNKCTL2);
+@@ -1276,6 +1276,10 @@ static int brcm_pcie_start_link(struct brcm_pcie *pcie)
+ bool ssc_good = false;
+ int ret, i;
+
++ /* Limit the generation if specified */
++ if (pcie->gen)
++ brcm_pcie_set_gen(pcie, pcie->gen);
++
+ /* Unassert the fundamental reset */
+ ret = pcie->perst_set(pcie, 0);
+ if (ret)
+@@ -1302,9 +1306,6 @@ static int brcm_pcie_start_link(struct brcm_pcie *pcie)
+
+ brcm_config_clkreq(pcie);
+
+- if (pcie->gen)
+- brcm_pcie_set_gen(pcie, pcie->gen);
+-
+ if (pcie->ssc) {
+ ret = brcm_pcie_set_ssc(pcie);
+ if (ret == 0)
+@@ -1367,7 +1368,8 @@ static int brcm_pcie_add_bus(struct pci_bus *bus)
+
+ ret = regulator_bulk_get(dev, sr->num_supplies, sr->supplies);
+ if (ret) {
+- dev_info(dev, "No regulators for downstream device\n");
++ dev_info(dev, "Did not get regulators, err=%d\n", ret);
++ pcie->sr = NULL;
+ goto no_regulators;
+ }
+
+@@ -1390,7 +1392,7 @@ static void brcm_pcie_remove_bus(struct pci_bus *bus)
+ struct subdev_regulators *sr = pcie->sr;
+ struct device *dev = &bus->dev;
+
+- if (!sr)
++ if (!sr || !bus->parent || !pci_is_root_bus(bus->parent))
+ return;
+
+ if (regulator_bulk_disable(sr->num_supplies, sr->supplies))
+diff --git a/drivers/pci/controller/pcie-xilinx-cpm.c b/drivers/pci/controller/pcie-xilinx-cpm.c
+index a0f5e1d67b04c6..1594d9e9e637af 100644
+--- a/drivers/pci/controller/pcie-xilinx-cpm.c
++++ b/drivers/pci/controller/pcie-xilinx-cpm.c
+@@ -570,15 +570,17 @@ static int xilinx_cpm_pcie_probe(struct platform_device *pdev)
+ return err;
+
+ bus = resource_list_first_type(&bridge->windows, IORESOURCE_BUS);
+- if (!bus)
+- return -ENODEV;
++ if (!bus) {
++ err = -ENODEV;
++ goto err_free_irq_domains;
++ }
+
+ port->variant = of_device_get_match_data(dev);
+
+ err = xilinx_cpm_pcie_parse_dt(port, bus->res);
+ if (err) {
+ dev_err(dev, "Parsing DT failed\n");
+- goto err_parse_dt;
++ goto err_free_irq_domains;
+ }
+
+ xilinx_cpm_pcie_init_port(port);
+@@ -602,7 +604,7 @@ static int xilinx_cpm_pcie_probe(struct platform_device *pdev)
+ xilinx_cpm_free_interrupts(port);
+ err_setup_irq:
+ pci_ecam_free(port->cfg);
+-err_parse_dt:
++err_free_irq_domains:
+ xilinx_cpm_free_irq_domains(port);
+ return err;
+ }
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 736ad8baa2a555..8f3e4c7de961f6 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -842,7 +842,9 @@ void pcie_enable_interrupt(struct controller *ctrl)
+ {
+ u16 mask;
+
+- mask = PCI_EXP_SLTCTL_HPIE | PCI_EXP_SLTCTL_DLLSCE;
++ mask = PCI_EXP_SLTCTL_DLLSCE;
++ if (!pciehp_poll_mode)
++ mask |= PCI_EXP_SLTCTL_HPIE;
+ pcie_write_cmd(ctrl, mask, mask);
+ }
+
+diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
+index 3e5a117f5b5d60..5af4a804a4f896 100644
+--- a/drivers/pci/pci-sysfs.c
++++ b/drivers/pci/pci-sysfs.c
+@@ -1444,7 +1444,7 @@ static ssize_t __resource_resize_store(struct device *dev, int n,
+ return -EINVAL;
+
+ device_lock(dev);
+- if (dev->driver) {
++ if (dev->driver || pci_num_vf(pdev)) {
+ ret = -EBUSY;
+ goto unlock;
+ }
+@@ -1466,7 +1466,7 @@ static ssize_t __resource_resize_store(struct device *dev, int n,
+
+ pci_remove_resource_files(pdev);
+
+- for (i = 0; i < PCI_STD_NUM_BARS; i++) {
++ for (i = 0; i < PCI_BRIDGE_RESOURCES; i++) {
+ if (pci_resource_len(pdev, i) &&
+ pci_resource_flags(pdev, i) == flags)
+ pci_release_resource(pdev, i);
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 1aa5d6f98ebda2..169aa8fd74a11f 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -955,8 +955,10 @@ struct pci_acs {
+ };
+
+ static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
+- const char *p, u16 mask, u16 flags)
++ const char *p, const u16 acs_mask, const u16 acs_flags)
+ {
++ u16 flags = acs_flags;
++ u16 mask = acs_mask;
+ char *delimit;
+ int ret = 0;
+
+@@ -964,7 +966,7 @@ static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
+ return;
+
+ while (*p) {
+- if (!mask) {
++ if (!acs_mask) {
+ /* Check for ACS flags */
+ delimit = strstr(p, "@");
+ if (delimit) {
+@@ -972,6 +974,8 @@ static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
+ u32 shift = 0;
+
+ end = delimit - p - 1;
++ mask = 0;
++ flags = 0;
+
+ while (end > -1) {
+ if (*(p + end) == '0') {
+@@ -1028,10 +1032,14 @@ static void __pci_config_acs(struct pci_dev *dev, struct pci_acs *caps,
+
+ pci_dbg(dev, "ACS mask = %#06x\n", mask);
+ pci_dbg(dev, "ACS flags = %#06x\n", flags);
++ pci_dbg(dev, "ACS control = %#06x\n", caps->ctrl);
++ pci_dbg(dev, "ACS fw_ctrl = %#06x\n", caps->fw_ctrl);
+
+- /* If mask is 0 then we copy the bit from the firmware setting. */
+- caps->ctrl = (caps->ctrl & ~mask) | (caps->fw_ctrl & mask);
+- caps->ctrl |= flags;
++ /*
++ * For mask bits that are 0, copy them from the firmware setting
++ * and apply flags for all the mask bits that are 1.
++ */
++ caps->ctrl = (caps->fw_ctrl & ~mask) | (flags & mask);
+
+ pci_info(dev, "Configured ACS to %#06x\n", caps->ctrl);
+ }
+@@ -5520,6 +5528,8 @@ static bool pci_bus_resettable(struct pci_bus *bus)
+ return false;
+
+ list_for_each_entry(dev, &bus->devices, bus_list) {
++ if (!pci_reset_supported(dev))
++ return false;
+ if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET ||
+ (dev->subordinate && !pci_bus_resettable(dev->subordinate)))
+ return false;
+@@ -5596,6 +5606,8 @@ static bool pci_slot_resettable(struct pci_slot *slot)
+ list_for_each_entry(dev, &slot->bus->devices, bus_list) {
+ if (!dev->slot || dev->slot != slot)
+ continue;
++ if (!pci_reset_supported(dev))
++ return false;
+ if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET ||
+ (dev->subordinate && !pci_bus_resettable(dev->subordinate)))
+ return false;
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index cee2365e54b8b2..62650a2f00ccc1 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -1242,16 +1242,16 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ parent_link = link->parent;
+
+ /*
+- * link->downstream is a pointer to the pci_dev of function 0. If
+- * we remove that function, the pci_dev is about to be deallocated,
+- * so we can't use link->downstream again. Free the link state to
+- * avoid this.
++ * Free the parent link state, no later than function 0 (i.e.
++ * link->downstream) being removed.
+ *
+- * If we're removing a non-0 function, it's possible we could
+- * retain the link state, but PCIe r6.0, sec 7.5.3.7, recommends
+- * programming the same ASPM Control value for all functions of
+- * multi-function devices, so disable ASPM for all of them.
++ * Do not free the link state any earlier. If function 0 is a
++ * switch upstream port, this link state is parent_link to all
++ * subordinate ones.
+ */
++ if (pdev != link->downstream)
++ goto out;
++
+ pcie_config_aspm_link(link, 0);
+ list_del(&link->sibling);
+ free_link_state(link);
+@@ -1262,6 +1262,7 @@ void pcie_aspm_exit_link_state(struct pci_dev *pdev)
+ pcie_config_aspm_path(parent_link);
+ }
+
++ out:
+ mutex_unlock(&aspm_lock);
+ up_read(&pci_bus_sem);
+ }
+diff --git a/drivers/pci/pcie/portdrv.c b/drivers/pci/pcie/portdrv.c
+index 6af5e042587285..604c055f607867 100644
+--- a/drivers/pci/pcie/portdrv.c
++++ b/drivers/pci/pcie/portdrv.c
+@@ -228,10 +228,12 @@ static int get_port_device_capability(struct pci_dev *dev)
+
+ /*
+ * Disable hot-plug interrupts in case they have been enabled
+- * by the BIOS and the hot-plug service driver is not loaded.
++ * by the BIOS and the hot-plug service driver won't be loaded
++ * to handle them.
+ */
+- pcie_capability_clear_word(dev, PCI_EXP_SLTCTL,
+- PCI_EXP_SLTCTL_CCIE | PCI_EXP_SLTCTL_HPIE);
++ if (!IS_ENABLED(CONFIG_HOTPLUG_PCI_PCIE))
++ pcie_capability_clear_word(dev, PCI_EXP_SLTCTL,
++ PCI_EXP_SLTCTL_CCIE | PCI_EXP_SLTCTL_HPIE);
+ }
+
+ #ifdef CONFIG_PCIEAER
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index ebb0c1d5cae255..0e757b23a09f0f 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -950,10 +950,9 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
+ /* Temporarily move resources off the list */
+ list_splice_init(&bridge->windows, &resources);
+ err = device_add(&bridge->dev);
+- if (err) {
+- put_device(&bridge->dev);
++ if (err)
+ goto free;
+- }
++
+ bus->bridge = get_device(&bridge->dev);
+ device_enable_async_suspend(bus->bridge);
+ pci_set_bus_of_node(bus);
+diff --git a/drivers/pci/setup-bus.c b/drivers/pci/setup-bus.c
+index 23082bc0ca37ae..f16c7ce3bf3fc8 100644
+--- a/drivers/pci/setup-bus.c
++++ b/drivers/pci/setup-bus.c
+@@ -1150,7 +1150,6 @@ static int pbus_size_mem(struct pci_bus *bus, unsigned long mask,
+ min_align = 1ULL << (max_order + __ffs(SZ_1M));
+ min_align = max(min_align, win_align);
+ size0 = calculate_memsize(size, min_size, 0, 0, resource_size(b_res), win_align);
+- add_align = win_align;
+ pci_info(bus->self, "bridge window %pR to %pR requires relaxed alignment rules\n",
+ b_res, &bus->busn_res);
+ }
+@@ -2105,8 +2104,7 @@ pci_root_bus_distribute_available_resources(struct pci_bus *bus,
+ * in case of root bus.
+ */
+ if (bridge && pci_bridge_resources_not_assigned(dev))
+- pci_bridge_distribute_available_resources(bridge,
+- add_list);
++ pci_bridge_distribute_available_resources(dev, add_list);
+ else
+ pci_root_bus_distribute_available_resources(b, add_list);
+ }
+diff --git a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+index 69c3ec0938f74f..be6f1ca9095aaa 100644
+--- a/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
++++ b/drivers/phy/rockchip/phy-rockchip-samsung-hdptx.c
+@@ -266,11 +266,22 @@ enum rk_hdptx_reset {
+ RST_MAX
+ };
+
++#define MAX_HDPTX_PHY_NUM 2
++
++struct rk_hdptx_phy_cfg {
++ unsigned int num_phys;
++ unsigned int phy_ids[MAX_HDPTX_PHY_NUM];
++};
++
+ struct rk_hdptx_phy {
+ struct device *dev;
+ struct regmap *regmap;
+ struct regmap *grf;
+
++ /* PHY const config */
++ const struct rk_hdptx_phy_cfg *cfgs;
++ int phy_id;
++
+ struct phy *phy;
+ struct phy_config *phy_cfg;
+ struct clk_bulk_data *clks;
+@@ -1019,15 +1030,14 @@ static int rk_hdptx_phy_clk_register(struct rk_hdptx_phy *hdptx)
+ struct device *dev = hdptx->dev;
+ const char *name, *pname;
+ struct clk *refclk;
+- int ret, id;
++ int ret;
+
+ refclk = devm_clk_get(dev, "ref");
+ if (IS_ERR(refclk))
+ return dev_err_probe(dev, PTR_ERR(refclk),
+ "Failed to get ref clock\n");
+
+- id = of_alias_get_id(dev->of_node, "hdptxphy");
+- name = id > 0 ? "clk_hdmiphy_pixel1" : "clk_hdmiphy_pixel0";
++ name = hdptx->phy_id > 0 ? "clk_hdmiphy_pixel1" : "clk_hdmiphy_pixel0";
+ pname = __clk_get_name(refclk);
+
+ hdptx->hw.init = CLK_HW_INIT(name, pname, &hdptx_phy_clk_ops,
+@@ -1070,8 +1080,9 @@ static int rk_hdptx_phy_probe(struct platform_device *pdev)
+ struct phy_provider *phy_provider;
+ struct device *dev = &pdev->dev;
+ struct rk_hdptx_phy *hdptx;
++ struct resource *res;
+ void __iomem *regs;
+- int ret;
++ int ret, id;
+
+ hdptx = devm_kzalloc(dev, sizeof(*hdptx), GFP_KERNEL);
+ if (!hdptx)
+@@ -1079,11 +1090,27 @@ static int rk_hdptx_phy_probe(struct platform_device *pdev)
+
+ hdptx->dev = dev;
+
+- regs = devm_platform_ioremap_resource(pdev, 0);
++ regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ if (IS_ERR(regs))
+ return dev_err_probe(dev, PTR_ERR(regs),
+ "Failed to ioremap resource\n");
+
++ hdptx->cfgs = device_get_match_data(dev);
++ if (!hdptx->cfgs)
++ return dev_err_probe(dev, -EINVAL, "missing match data\n");
++
++ /* find the phy-id from the io address */
++ hdptx->phy_id = -ENODEV;
++ for (id = 0; id < hdptx->cfgs->num_phys; id++) {
++ if (res->start == hdptx->cfgs->phy_ids[id]) {
++ hdptx->phy_id = id;
++ break;
++ }
++ }
++
++ if (hdptx->phy_id < 0)
++ return dev_err_probe(dev, -ENODEV, "no matching device found\n");
++
+ ret = devm_clk_bulk_get_all(dev, &hdptx->clks);
+ if (ret < 0)
+ return dev_err_probe(dev, ret, "Failed to get clocks\n");
+@@ -1147,8 +1174,19 @@ static const struct dev_pm_ops rk_hdptx_phy_pm_ops = {
+ rk_hdptx_phy_runtime_resume, NULL)
+ };
+
++static const struct rk_hdptx_phy_cfg rk3588_hdptx_phy_cfgs = {
++ .num_phys = 2,
++ .phy_ids = {
++ 0xfed60000,
++ 0xfed70000,
++ },
++};
++
+ static const struct of_device_id rk_hdptx_phy_of_match[] = {
+- { .compatible = "rockchip,rk3588-hdptx-phy", },
++ {
++ .compatible = "rockchip,rk3588-hdptx-phy",
++ .data = &rk3588_hdptx_phy_cfgs
++ },
+ {}
+ };
+ MODULE_DEVICE_TABLE(of, rk_hdptx_phy_of_match);
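Editor's note: the probe change above derives the PHY instance from its MMIO base instead of an OF alias, comparing res->start against the per-SoC table (0xfed60000 and 0xfed70000 on RK3588, per the cfg struct in this hunk). A userspace sketch of that lookup:

    #include <stdio.h>
    #include <stdint.h>

    static const uint32_t phy_bases[] = { 0xfed60000, 0xfed70000 };

    static int phy_id_from_base(uint32_t start)
    {
        for (unsigned int i = 0; i < sizeof(phy_bases) / sizeof(phy_bases[0]); i++)
            if (phy_bases[i] == start)
                return (int)i;
        return -1; /* the driver returns -ENODEV here */
    }

    int main(void)
    {
        printf("%d\n", phy_id_from_base(0xfed70000)); /* 1 */
        printf("%d\n", phy_id_from_base(0xfed80000)); /* -1, no match */
        return 0;
    }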
+diff --git a/drivers/pinctrl/intel/pinctrl-intel.c b/drivers/pinctrl/intel/pinctrl-intel.c
+index 928607a21d36db..f8abc69a39d162 100644
+--- a/drivers/pinctrl/intel/pinctrl-intel.c
++++ b/drivers/pinctrl/intel/pinctrl-intel.c
+@@ -1531,7 +1531,6 @@ static int intel_pinctrl_probe_pwm(struct intel_pinctrl *pctrl,
+ .clk_rate = 19200000,
+ .npwm = 1,
+ .base_unit_bits = 22,
+- .bypass = true,
+ };
+ struct pwm_chip *chip;
+
+diff --git a/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c b/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c
+index d09a5e9b2eca53..f6a1e684a3864e 100644
+--- a/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c
++++ b/drivers/pinctrl/nuvoton/pinctrl-npcm8xx.c
+@@ -1290,12 +1290,14 @@ static struct npcm8xx_func npcm8xx_funcs[] = {
+ };
+
+ #define NPCM8XX_PINCFG(a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q) \
+- [a] { .fn0 = fn_ ## b, .reg0 = NPCM8XX_GCR_ ## c, .bit0 = d, \
++ [a] = { \
++ .flag = q, \
++ .fn0 = fn_ ## b, .reg0 = NPCM8XX_GCR_ ## c, .bit0 = d, \
+ .fn1 = fn_ ## e, .reg1 = NPCM8XX_GCR_ ## f, .bit1 = g, \
+ .fn2 = fn_ ## h, .reg2 = NPCM8XX_GCR_ ## i, .bit2 = j, \
+ .fn3 = fn_ ## k, .reg3 = NPCM8XX_GCR_ ## l, .bit3 = m, \
+ .fn4 = fn_ ## n, .reg4 = NPCM8XX_GCR_ ## o, .bit4 = p, \
+- .flag = q }
++ }
+
+ /* Drive strength controlled by NPCM8XX_GP_N_ODSC */
+ #define DRIVE_STRENGTH_LO_SHIFT 8
+@@ -2361,8 +2363,8 @@ static int npcm8xx_gpio_fw(struct npcm8xx_pinctrl *pctrl)
+ return dev_err_probe(dev, ret, "gpio-ranges fail for GPIO bank %u\n", id);
+
+ ret = fwnode_irq_get(child, 0);
+- if (!ret)
+- return dev_err_probe(dev, ret, "No IRQ for GPIO bank %u\n", id);
++ if (ret < 0)
++ return dev_err_probe(dev, ret, "Failed to retrieve IRQ for bank %u\n", id);
+
+ pctrl->gpio_bank[id].irq = ret;
+ pctrl->gpio_bank[id].irq_chip = npcmgpio_irqchip;
+diff --git a/drivers/pinctrl/renesas/pinctrl-rza2.c b/drivers/pinctrl/renesas/pinctrl-rza2.c
+index af689d7c117f35..773eaf508565b0 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rza2.c
++++ b/drivers/pinctrl/renesas/pinctrl-rza2.c
+@@ -253,6 +253,8 @@ static int rza2_gpio_register(struct rza2_pinctrl_priv *priv)
+ return ret;
+ }
+
++ of_node_put(of_args.np);
++
+ if ((of_args.args[0] != 0) ||
+ (of_args.args[1] != 0) ||
+ (of_args.args[2] != priv->npins)) {
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzg2l.c b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+index 5081c7d8064fae..d90685cfe2e1a4 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzg2l.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzg2l.c
+@@ -2583,6 +2583,8 @@ static int rzg2l_gpio_register(struct rzg2l_pinctrl *pctrl)
+ if (ret)
+ return dev_err_probe(pctrl->dev, ret, "Unable to parse gpio-ranges\n");
+
++ of_node_put(of_args.np);
++
+ if (of_args.args[0] != 0 || of_args.args[1] != 0 ||
+ of_args.args[2] != pctrl->data->n_port_pins)
+ return dev_err_probe(pctrl->dev, -EINVAL,
+@@ -3180,6 +3182,7 @@ static struct platform_driver rzg2l_pinctrl_driver = {
+ .name = DRV_NAME,
+ .of_match_table = of_match_ptr(rzg2l_pinctrl_of_table),
+ .pm = pm_sleep_ptr(&rzg2l_pinctrl_pm_ops),
++ .suppress_bind_attrs = true,
+ },
+ .probe = rzg2l_pinctrl_probe,
+ };
+diff --git a/drivers/pinctrl/renesas/pinctrl-rzv2m.c b/drivers/pinctrl/renesas/pinctrl-rzv2m.c
+index 4062c56619f595..8c7169db4fcce6 100644
+--- a/drivers/pinctrl/renesas/pinctrl-rzv2m.c
++++ b/drivers/pinctrl/renesas/pinctrl-rzv2m.c
+@@ -940,6 +940,8 @@ static int rzv2m_gpio_register(struct rzv2m_pinctrl *pctrl)
+ return ret;
+ }
+
++ of_node_put(of_args.np);
++
+ if (of_args.args[0] != 0 || of_args.args[1] != 0 ||
+ of_args.args[2] != pctrl->data->n_port_pins) {
+ dev_err(pctrl->dev, "gpio-ranges does not match selected SOC\n");
+diff --git a/drivers/pinctrl/tegra/pinctrl-tegra.c b/drivers/pinctrl/tegra/pinctrl-tegra.c
+index c83e5a65e6801c..3b046450bd3ff8 100644
+--- a/drivers/pinctrl/tegra/pinctrl-tegra.c
++++ b/drivers/pinctrl/tegra/pinctrl-tegra.c
+@@ -270,6 +270,9 @@ static int tegra_pinctrl_set_mux(struct pinctrl_dev *pctldev,
+ val = pmx_readl(pmx, g->mux_bank, g->mux_reg);
+ val &= ~(0x3 << g->mux_bit);
+ val |= i << g->mux_bit;
++ /* Set the SFIO/GPIO selection to SFIO when under pinmux control */
++ if (pmx->soc->sfsel_in_mux)
++ val |= (1 << g->sfsel_bit);
+ pmx_writel(pmx, val, g->mux_bank, g->mux_reg);
+
+ return 0;
+diff --git a/drivers/platform/x86/amd/pmf/pmf.h b/drivers/platform/x86/amd/pmf/pmf.h
+index 8ce8816da9c168..43ba1b9aa1811a 100644
+--- a/drivers/platform/x86/amd/pmf/pmf.h
++++ b/drivers/platform/x86/amd/pmf/pmf.h
+@@ -105,9 +105,12 @@ struct cookie_header {
+ #define PMF_TA_IF_VERSION_MAJOR 1
+ #define TA_PMF_ACTION_MAX 32
+ #define TA_PMF_UNDO_MAX 8
+-#define TA_OUTPUT_RESERVED_MEM 906
++#define TA_OUTPUT_RESERVED_MEM 922
+ #define MAX_OPERATION_PARAMS 4
+
++#define TA_ERROR_CRYPTO_INVALID_PARAM 0x20002
++#define TA_ERROR_CRYPTO_BIN_TOO_LARGE 0x2000d
++
+ #define PMF_IF_V1 1
+ #define PMF_IF_V2 2
+
+diff --git a/drivers/platform/x86/amd/pmf/tee-if.c b/drivers/platform/x86/amd/pmf/tee-if.c
+index 19c27b6e46663c..09131507d7a925 100644
+--- a/drivers/platform/x86/amd/pmf/tee-if.c
++++ b/drivers/platform/x86/amd/pmf/tee-if.c
+@@ -27,8 +27,11 @@ module_param(pb_side_load, bool, 0444);
+ MODULE_PARM_DESC(pb_side_load, "Sideload policy binaries debug policy failures");
+ #endif
+
+-static const uuid_t amd_pmf_ta_uuid = UUID_INIT(0x6fd93b77, 0x3fb8, 0x524d,
+- 0xb1, 0x2d, 0xc5, 0x29, 0xb1, 0x3d, 0x85, 0x43);
++static const uuid_t amd_pmf_ta_uuid[] = { UUID_INIT(0xd9b39bf2, 0x66bd, 0x4154, 0xaf, 0xb8, 0x8a,
++ 0xcc, 0x2b, 0x2b, 0x60, 0xd6),
++ UUID_INIT(0x6fd93b77, 0x3fb8, 0x524d, 0xb1, 0x2d, 0xc5,
++ 0x29, 0xb1, 0x3d, 0x85, 0x43),
++ };
+
+ static const char *amd_pmf_uevent_as_str(unsigned int state)
+ {
+@@ -321,9 +324,9 @@ static int amd_pmf_start_policy_engine(struct amd_pmf_dev *dev)
+ */
+ schedule_delayed_work(&dev->pb_work, msecs_to_jiffies(pb_actions_ms * 3));
+ } else {
+- dev_err(dev->dev, "ta invoke cmd init failed err: %x\n", res);
++ dev_dbg(dev->dev, "ta invoke cmd init failed err: %x\n", res);
+ dev->smart_pc_enabled = false;
+- return -EIO;
++ return res;
+ }
+
+ return 0;
+@@ -390,12 +393,12 @@ static int amd_pmf_amdtee_ta_match(struct tee_ioctl_version_data *ver, const voi
+ return ver->impl_id == TEE_IMPL_ID_AMDTEE;
+ }
+
+-static int amd_pmf_ta_open_session(struct tee_context *ctx, u32 *id)
++static int amd_pmf_ta_open_session(struct tee_context *ctx, u32 *id, const uuid_t *uuid)
+ {
+ struct tee_ioctl_open_session_arg sess_arg = {};
+ int rc;
+
+- export_uuid(sess_arg.uuid, &amd_pmf_ta_uuid);
++ export_uuid(sess_arg.uuid, uuid);
+ sess_arg.clnt_login = TEE_IOCTL_LOGIN_PUBLIC;
+ sess_arg.num_params = 0;
+
+@@ -434,7 +437,7 @@ static int amd_pmf_register_input_device(struct amd_pmf_dev *dev)
+ return 0;
+ }
+
+-static int amd_pmf_tee_init(struct amd_pmf_dev *dev)
++static int amd_pmf_tee_init(struct amd_pmf_dev *dev, const uuid_t *uuid)
+ {
+ u32 size;
+ int ret;
+@@ -445,7 +448,7 @@ static int amd_pmf_tee_init(struct amd_pmf_dev *dev)
+ return PTR_ERR(dev->tee_ctx);
+ }
+
+- ret = amd_pmf_ta_open_session(dev->tee_ctx, &dev->session_id);
++ ret = amd_pmf_ta_open_session(dev->tee_ctx, &dev->session_id, uuid);
+ if (ret) {
+ dev_err(dev->dev, "Failed to open TA session (%d)\n", ret);
+ ret = -EINVAL;
+@@ -489,7 +492,8 @@ static void amd_pmf_tee_deinit(struct amd_pmf_dev *dev)
+
+ int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev)
+ {
+- int ret;
++ bool status;
++ int ret, i;
+
+ ret = apmf_check_smart_pc(dev);
+ if (ret) {
+@@ -502,26 +506,22 @@ int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev)
+ return -ENODEV;
+ }
+
+- ret = amd_pmf_tee_init(dev);
+- if (ret)
+- return ret;
+-
+ INIT_DELAYED_WORK(&dev->pb_work, amd_pmf_invoke_cmd);
+
+ ret = amd_pmf_set_dram_addr(dev, true);
+ if (ret)
+- goto error;
++ goto err_cancel_work;
+
+ dev->policy_base = devm_ioremap(dev->dev, dev->policy_addr, dev->policy_sz);
+ if (!dev->policy_base) {
+ ret = -ENOMEM;
+- goto error;
++ goto err_free_dram_buf;
+ }
+
+ dev->policy_buf = kzalloc(dev->policy_sz, GFP_KERNEL);
+ if (!dev->policy_buf) {
+ ret = -ENOMEM;
+- goto error;
++ goto err_free_dram_buf;
+ }
+
+ memcpy_fromio(dev->policy_buf, dev->policy_base, dev->policy_sz);
+@@ -531,24 +531,60 @@ int amd_pmf_init_smart_pc(struct amd_pmf_dev *dev)
+ dev->prev_data = kzalloc(sizeof(*dev->prev_data), GFP_KERNEL);
+ if (!dev->prev_data) {
+ ret = -ENOMEM;
+- goto error;
++ goto err_free_policy;
+ }
+
+- ret = amd_pmf_start_policy_engine(dev);
+- if (ret)
+- goto error;
++ for (i = 0; i < ARRAY_SIZE(amd_pmf_ta_uuid); i++) {
++ ret = amd_pmf_tee_init(dev, &amd_pmf_ta_uuid[i]);
++ if (ret)
++ goto err_free_prev_data;
++
++ ret = amd_pmf_start_policy_engine(dev);
++ switch (ret) {
++ case TA_PMF_TYPE_SUCCESS:
++ status = true;
++ break;
++ case TA_ERROR_CRYPTO_INVALID_PARAM:
++ case TA_ERROR_CRYPTO_BIN_TOO_LARGE:
++ amd_pmf_tee_deinit(dev);
++ status = false;
++ break;
++ default:
++ ret = -EINVAL;
++ amd_pmf_tee_deinit(dev);
++ goto err_free_prev_data;
++ }
++
++ if (status)
++ break;
++ }
++
++ if (!status && !pb_side_load) {
++ ret = -EINVAL;
++ goto err_free_prev_data;
++ }
+
+ if (pb_side_load)
+ amd_pmf_open_pb(dev, dev->dbgfs_dir);
+
+ ret = amd_pmf_register_input_device(dev);
+ if (ret)
+- goto error;
++ goto err_pmf_remove_pb;
+
+ return 0;
+
+-error:
+- amd_pmf_deinit_smart_pc(dev);
++err_pmf_remove_pb:
++ if (pb_side_load && dev->esbin)
++ amd_pmf_remove_pb(dev);
++ amd_pmf_tee_deinit(dev);
++err_free_prev_data:
++ kfree(dev->prev_data);
++err_free_policy:
++ kfree(dev->policy_buf);
++err_free_dram_buf:
++ kfree(dev->buf);
++err_cancel_work:
++ cancel_delayed_work_sync(&dev->pb_work);
+
+ return ret;
+ }
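Editor's note: the reworked init path walks amd_pmf_ta_uuid[] in order, tears the TEE session down after a recoverable TA error, and keeps the first UUID whose policy engine starts. A small sketch of that try-next-candidate shape, where try_candidate() stands in for the tee_init plus start_policy_engine pair:

    #include <stdio.h>
    #include <stddef.h>
    #include <stdbool.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    static int try_candidate(int id)
    {
        return id == 11 ? 0 : -1; /* pretend only the second candidate works */
    }

    int main(void)
    {
        static const int candidates[] = { 10, 11, 12 };
        bool started = false;

        for (size_t i = 0; i < ARRAY_SIZE(candidates); i++) {
            if (try_candidate(candidates[i]) == 0) {
                started = true;
                break;
            }
            /* recoverable failure: clean up, then try the next candidate */
        }

        printf(started ? "policy engine started\n" : "no candidate worked\n");
        return 0;
    }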
+diff --git a/drivers/platform/x86/dell/dell-uart-backlight.c b/drivers/platform/x86/dell/dell-uart-backlight.c
+index c45bc332af7a02..e4868584cde2db 100644
+--- a/drivers/platform/x86/dell/dell-uart-backlight.c
++++ b/drivers/platform/x86/dell/dell-uart-backlight.c
+@@ -325,7 +325,7 @@ static int dell_uart_bl_serdev_probe(struct serdev_device *serdev)
+ return PTR_ERR_OR_ZERO(dell_bl->bl);
+ }
+
+-struct serdev_device_driver dell_uart_bl_serdev_driver = {
++static struct serdev_device_driver dell_uart_bl_serdev_driver = {
+ .probe = dell_uart_bl_serdev_probe,
+ .driver = {
+ .name = KBUILD_MODNAME,
+diff --git a/drivers/platform/x86/dell/dell-wmi-ddv.c b/drivers/platform/x86/dell/dell-wmi-ddv.c
+index e75cd6e1efe6ac..ab5f7d3ab82497 100644
+--- a/drivers/platform/x86/dell/dell-wmi-ddv.c
++++ b/drivers/platform/x86/dell/dell-wmi-ddv.c
+@@ -665,8 +665,10 @@ static ssize_t temp_show(struct device *dev, struct device_attribute *attr, char
+ if (ret < 0)
+ return ret;
+
+- /* Use 2731 instead of 2731.5 to avoid unnecessary rounding */
+- return sysfs_emit(buf, "%d\n", value - 2731);
++ /* Use 2732 instead of 2731.5 to avoid unnecessary rounding and to emulate
++ * the behaviour of the OEM application which seems to round down the result.
++ */
++ return sysfs_emit(buf, "%d\n", value - 2732);
+ }
+
+ static ssize_t eppid_show(struct device *dev, struct device_attribute *attr, char *buf)
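Editor's note: the sensor value is in tenths of a kelvin, and 0 °C is 273.15 K, i.e. 2731.5 in those units; since that midpoint is not representable in integer math, subtracting 2732 biases the result downward, matching what the OEM tool reportedly shows. A worked example:

    #include <stdio.h>

    int main(void)
    {
        int raw = 2981; /* 298.1 K as reported by the sensor */

        /* 298.1 K - 273.15 K = 24.95 degC; subtracting 2732 yields 24.9 */
        printf("%d tenths of a degC\n", raw - 2732); /* prints 249 */
        return 0;
    }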
+diff --git a/drivers/platform/x86/intel/hid.c b/drivers/platform/x86/intel/hid.c
+index 445e7a59beb414..9a609358956f3a 100644
+--- a/drivers/platform/x86/intel/hid.c
++++ b/drivers/platform/x86/intel/hid.c
+@@ -132,6 +132,13 @@ static const struct dmi_system_id button_array_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 3"),
+ },
+ },
++ {
++ .ident = "Microsoft Surface Go 4",
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "Microsoft Corporation"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Surface Go 4"),
++ },
++ },
+ { }
+ };
+
+diff --git a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+index dbcd3087aaa4b0..31239a93dd71bd 100644
+--- a/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
++++ b/drivers/platform/x86/intel/speed_select_if/isst_if_common.c
+@@ -84,7 +84,7 @@ static DECLARE_HASHTABLE(isst_hash, 8);
+ static DEFINE_MUTEX(isst_hash_lock);
+
+ static int isst_store_new_cmd(int cmd, u32 cpu, int mbox_cmd_type, u32 param,
+- u32 data)
++ u64 data)
+ {
+ struct isst_cmd *sst_cmd;
+
+diff --git a/drivers/platform/x86/intel/vsec.c b/drivers/platform/x86/intel/vsec.c
+index 7b5cc9993974ef..55dd2286f3f354 100644
+--- a/drivers/platform/x86/intel/vsec.c
++++ b/drivers/platform/x86/intel/vsec.c
+@@ -410,6 +410,11 @@ static const struct intel_vsec_platform_info oobmsm_info = {
+ .caps = VSEC_CAP_TELEMETRY | VSEC_CAP_SDSI | VSEC_CAP_TPMI,
+ };
+
++/* DMR OOBMSM info */
++static const struct intel_vsec_platform_info dmr_oobmsm_info = {
++ .caps = VSEC_CAP_TELEMETRY | VSEC_CAP_TPMI,
++};
++
+ /* TGL info */
+ static const struct intel_vsec_platform_info tgl_info = {
+ .caps = VSEC_CAP_TELEMETRY,
+@@ -426,6 +431,7 @@ static const struct intel_vsec_platform_info lnl_info = {
+ #define PCI_DEVICE_ID_INTEL_VSEC_MTL_M 0x7d0d
+ #define PCI_DEVICE_ID_INTEL_VSEC_MTL_S 0xad0d
+ #define PCI_DEVICE_ID_INTEL_VSEC_OOBMSM 0x09a7
++#define PCI_DEVICE_ID_INTEL_VSEC_OOBMSM_DMR 0x09a1
+ #define PCI_DEVICE_ID_INTEL_VSEC_RPL 0xa77d
+ #define PCI_DEVICE_ID_INTEL_VSEC_TGL 0x9a0d
+ #define PCI_DEVICE_ID_INTEL_VSEC_LNL_M 0x647d
+@@ -435,6 +441,7 @@ static const struct pci_device_id intel_vsec_pci_ids[] = {
+ { PCI_DEVICE_DATA(INTEL, VSEC_MTL_M, &mtl_info) },
+ { PCI_DEVICE_DATA(INTEL, VSEC_MTL_S, &mtl_info) },
+ { PCI_DEVICE_DATA(INTEL, VSEC_OOBMSM, &oobmsm_info) },
++ { PCI_DEVICE_DATA(INTEL, VSEC_OOBMSM_DMR, &dmr_oobmsm_info) },
+ { PCI_DEVICE_DATA(INTEL, VSEC_RPL, &tgl_info) },
+ { PCI_DEVICE_DATA(INTEL, VSEC_TGL, &tgl_info) },
+ { PCI_DEVICE_DATA(INTEL, VSEC_LNL_M, &lnl_info) },
+diff --git a/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c b/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c
+index 32d9b6009c4229..21de7c3a1ee3db 100644
+--- a/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c
++++ b/drivers/platform/x86/lenovo-yoga-tab2-pro-1380-fastcharger.c
+@@ -219,7 +219,7 @@ static int yt2_1380_fc_serdev_probe(struct serdev_device *serdev)
+ return 0;
+ }
+
+-struct serdev_device_driver yt2_1380_fc_serdev_driver = {
++static struct serdev_device_driver yt2_1380_fc_serdev_driver = {
+ .probe = yt2_1380_fc_serdev_probe,
+ .driver = {
+ .name = KBUILD_MODNAME,
+diff --git a/drivers/platform/x86/thinkpad_acpi.c b/drivers/platform/x86/thinkpad_acpi.c
+index a3c73abb00f21e..dea40da867552f 100644
+--- a/drivers/platform/x86/thinkpad_acpi.c
++++ b/drivers/platform/x86/thinkpad_acpi.c
+@@ -8795,6 +8795,7 @@ static const struct attribute_group fan_driver_attr_group = {
+ #define TPACPI_FAN_NS 0x0010 /* For EC with non-Standard register addresses */
+ #define TPACPI_FAN_DECRPM 0x0020 /* For ECFW's with RPM in register as decimal */
+ #define TPACPI_FAN_TPR 0x0040 /* Fan speed is in Ticks Per Revolution */
++#define TPACPI_FAN_NOACPI 0x0080 /* Don't use ACPI methods even if detected */
+
+ static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
+ TPACPI_QEC_IBM('1', 'Y', TPACPI_FAN_Q1),
+@@ -8825,6 +8826,9 @@ static const struct tpacpi_quirk fan_quirk_table[] __initconst = {
+ TPACPI_Q_LNV3('N', '1', 'O', TPACPI_FAN_NOFAN), /* X1 Tablet (2nd gen) */
+ TPACPI_Q_LNV3('R', '0', 'Q', TPACPI_FAN_DECRPM),/* L480 */
+ TPACPI_Q_LNV('8', 'F', TPACPI_FAN_TPR), /* ThinkPad x120e */
++ TPACPI_Q_LNV3('R', '0', '0', TPACPI_FAN_NOACPI),/* E560 */
++ TPACPI_Q_LNV3('R', '1', '2', TPACPI_FAN_NOACPI),/* T495 */
++ TPACPI_Q_LNV3('R', '1', '3', TPACPI_FAN_NOACPI),/* T495s */
+ };
+
+ static int __init fan_init(struct ibm_init_struct *iibm)
+@@ -8876,6 +8880,13 @@ static int __init fan_init(struct ibm_init_struct *iibm)
+ tp_features.fan_ctrl_status_undef = 1;
+ }
+
++ if (quirks & TPACPI_FAN_NOACPI) {
++ /* E560, T495, T495s */
++ pr_info("Ignoring buggy ACPI fan access method\n");
++ fang_handle = NULL;
++ fanw_handle = NULL;
++ }
++
+ if (gfan_handle) {
+ /* 570, 600e/x, 770e, 770x */
+ fan_status_access_mode = TPACPI_FAN_RD_ACPI_GFAN;
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 51fb88aca0f9fd..1a20c775489c72 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -1913,7 +1913,6 @@ static void bq27xxx_battery_update_unlocked(struct bq27xxx_device_info *di)
+ cache.flags = -1; /* read error */
+ if (cache.flags >= 0) {
+ cache.capacity = bq27xxx_battery_read_soc(di);
+- di->cache.flags = cache.flags;
+
+ /*
+ * On gauges with signed current reporting the current must be
+diff --git a/drivers/power/supply/max77693_charger.c b/drivers/power/supply/max77693_charger.c
+index 4caac142c4285a..b32d881111850b 100644
+--- a/drivers/power/supply/max77693_charger.c
++++ b/drivers/power/supply/max77693_charger.c
+@@ -608,7 +608,7 @@ static int max77693_set_charge_input_threshold_volt(struct max77693_charger *chg
+ case 4700000:
+ case 4800000:
+ case 4900000:
+- data = (uvolt - 4700000) / 100000;
++ data = ((uvolt - 4700000) / 100000) + 1;
+ break;
+ default:
+ dev_err(chg->dev, "Wrong value for charge input voltage regulation threshold\n");
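Editor's note: judging by the fix, the 4.7-4.9 V thresholds occupy register codes 1-3 rather than 0-2 (code 0 presumably selects a different default level; that part is my assumption, not stated in the patch). The arithmetic difference:

    #include <stdio.h>

    int main(void)
    {
        unsigned int uvolt = 4800000;

        unsigned int old_code = (uvolt - 4700000) / 100000;     /* 1 */
        unsigned int new_code = (uvolt - 4700000) / 100000 + 1; /* 2 */

        printf("old %u, fixed %u\n", old_code, new_code);
        return 0;
    }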
+diff --git a/drivers/regulator/pca9450-regulator.c b/drivers/regulator/pca9450-regulator.c
+index 9714afe347dcc0..1ffa145319f239 100644
+--- a/drivers/regulator/pca9450-regulator.c
++++ b/drivers/regulator/pca9450-regulator.c
+@@ -454,7 +454,7 @@ static const struct pca9450_regulator_desc pca9450a_regulators[] = {
+ .n_linear_ranges = ARRAY_SIZE(pca9450_ldo5_volts),
+ .vsel_reg = PCA9450_REG_LDO5CTRL_H,
+ .vsel_mask = LDO5HOUT_MASK,
+- .enable_reg = PCA9450_REG_LDO5CTRL_H,
++ .enable_reg = PCA9450_REG_LDO5CTRL_L,
+ .enable_mask = LDO5H_EN_MASK,
+ .owner = THIS_MODULE,
+ },
+@@ -663,7 +663,7 @@ static const struct pca9450_regulator_desc pca9450bc_regulators[] = {
+ .n_linear_ranges = ARRAY_SIZE(pca9450_ldo5_volts),
+ .vsel_reg = PCA9450_REG_LDO5CTRL_H,
+ .vsel_mask = LDO5HOUT_MASK,
+- .enable_reg = PCA9450_REG_LDO5CTRL_H,
++ .enable_reg = PCA9450_REG_LDO5CTRL_L,
+ .enable_mask = LDO5H_EN_MASK,
+ .owner = THIS_MODULE,
+ },
+@@ -835,7 +835,7 @@ static const struct pca9450_regulator_desc pca9451a_regulators[] = {
+ .n_linear_ranges = ARRAY_SIZE(pca9450_ldo5_volts),
+ .vsel_reg = PCA9450_REG_LDO5CTRL_H,
+ .vsel_mask = LDO5HOUT_MASK,
+- .enable_reg = PCA9450_REG_LDO5CTRL_H,
++ .enable_reg = PCA9450_REG_LDO5CTRL_L,
+ .enable_mask = LDO5H_EN_MASK,
+ .owner = THIS_MODULE,
+ },
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index 32c3531b20c70a..e19081d530226b 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -1839,6 +1839,13 @@ static int q6v5_pds_attach(struct device *dev, struct device **devs,
+ while (pd_names[num_pds])
+ num_pds++;
+
++ /* Handle single power domain */
++ if (num_pds == 1 && dev->pm_domain) {
++ devs[0] = dev;
++ pm_runtime_enable(dev);
++ return 1;
++ }
++
+ for (i = 0; i < num_pds; i++) {
+ devs[i] = dev_pm_domain_attach_by_name(dev, pd_names[i]);
+ if (IS_ERR_OR_NULL(devs[i])) {
+@@ -1859,8 +1866,15 @@ static int q6v5_pds_attach(struct device *dev, struct device **devs,
+ static void q6v5_pds_detach(struct q6v5 *qproc, struct device **pds,
+ size_t pd_count)
+ {
++ struct device *dev = qproc->dev;
+ int i;
+
++ /* Handle single power domain */
++ if (pd_count == 1 && dev->pm_domain) {
++ pm_runtime_disable(dev);
++ return;
++ }
++
+ for (i = 0; i < pd_count; i++)
+ dev_pm_domain_detach(pds[i], false);
+ }
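
Both hunks above special-case a single power domain: the PM core attaches a lone domain directly to the device, so the driver just toggles runtime PM instead of attaching virtual devices by name, and detach mirrors that. A compact userspace sketch of the paired logic, with the pm_runtime_* calls stubbed:

    #include <stdbool.h>
    #include <stdio.h>

    /* stubs for the runtime-PM calls used by the real driver */
    static void pm_runtime_enable(const char *dev) { printf("enable %s\n", dev); }
    static void pm_runtime_disable(const char *dev) { printf("disable %s\n", dev); }

    static int pds_attach(const char *dev, const char **pd_names, bool has_pm_domain)
    {
        int num_pds = 0;

        while (pd_names[num_pds])
            num_pds++;

        /* single domain: the core attached it to the device itself */
        if (num_pds == 1 && has_pm_domain) {
            pm_runtime_enable(dev);
            return 1;
        }
        /* otherwise attach each named domain (omitted) */
        return num_pds;
    }

    static void pds_detach(const char *dev, int pd_count, bool has_pm_domain)
    {
        /* mirror of attach: undo runtime PM for the single-domain case */
        if (pd_count == 1 && has_pm_domain) {
            pm_runtime_disable(dev);
            return;
        }
        /* otherwise detach each virtual device (omitted) */
    }

    int main(void)
    {
        const char *names[] = { "cx", NULL };
        int n = pds_attach("q6v5", names, true);

        pds_detach("q6v5", n, true);
        return 0;
    }
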
+@@ -2469,13 +2483,13 @@ static const struct rproc_hexagon_res msm8974_mss = {
+ .supply = "pll",
+ .uA = 100000,
+ },
+- {}
+- },
+- .fallback_proxy_supply = (struct qcom_mss_reg_res[]) {
+ {
+ .supply = "mx",
+ .uV = 1050000,
+ },
++ {}
++ },
++ .fallback_proxy_supply = (struct qcom_mss_reg_res[]) {
+ {
+ .supply = "cx",
+ .uA = 100000,
+@@ -2501,7 +2515,6 @@ static const struct rproc_hexagon_res msm8974_mss = {
+ NULL
+ },
+ .proxy_pd_names = (char*[]){
+- "mx",
+ "cx",
+ NULL
+ },
+diff --git a/drivers/remoteproc/qcom_q6v5_pas.c b/drivers/remoteproc/qcom_q6v5_pas.c
+index 1a2d08ec9de9ef..ea4a91f37b506d 100644
+--- a/drivers/remoteproc/qcom_q6v5_pas.c
++++ b/drivers/remoteproc/qcom_q6v5_pas.c
+@@ -509,16 +509,16 @@ static int adsp_pds_attach(struct device *dev, struct device **devs,
+ if (!pd_names)
+ return 0;
+
++ while (pd_names[num_pds])
++ num_pds++;
++
+ /* Handle single power domain */
+- if (dev->pm_domain) {
++ if (num_pds == 1 && dev->pm_domain) {
+ devs[0] = dev;
+ pm_runtime_enable(dev);
+ return 1;
+ }
+
+- while (pd_names[num_pds])
+- num_pds++;
+-
+ for (i = 0; i < num_pds; i++) {
+ devs[i] = dev_pm_domain_attach_by_name(dev, pd_names[i]);
+ if (IS_ERR_OR_NULL(devs[i])) {
+@@ -543,7 +543,7 @@ static void adsp_pds_detach(struct qcom_adsp *adsp, struct device **pds,
+ int i;
+
+ /* Handle single power domain */
+- if (dev->pm_domain && pd_count) {
++ if (pd_count == 1 && dev->pm_domain) {
+ pm_runtime_disable(dev);
+ return;
+ }
+@@ -1356,6 +1356,7 @@ static const struct adsp_data sc7280_wpss_resource = {
+ .crash_reason_smem = 626,
+ .firmware_name = "wpss.mdt",
+ .pas_id = 6,
++ .minidump_id = 4,
+ .auto_boot = false,
+ .proxy_pd_names = (char*[]){
+ "cx",
+@@ -1418,7 +1419,7 @@ static const struct adsp_data sm8650_mpss_resource = {
+ };
+
+ static const struct of_device_id adsp_of_match[] = {
+- { .compatible = "qcom,msm8226-adsp-pil", .data = &adsp_resource_init},
++ { .compatible = "qcom,msm8226-adsp-pil", .data = &msm8996_adsp_resource},
+ { .compatible = "qcom,msm8953-adsp-pil", .data = &msm8996_adsp_resource},
+ { .compatible = "qcom,msm8974-adsp-pil", .data = &adsp_resource_init},
+ { .compatible = "qcom,msm8996-adsp-pil", .data = &msm8996_adsp_resource},
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index ef6febe3563307..d2308c2f97eb94 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -2025,6 +2025,7 @@ int rproc_shutdown(struct rproc *rproc)
+ kfree(rproc->cached_table);
+ rproc->cached_table = NULL;
+ rproc->table_ptr = NULL;
++ rproc->table_sz = 0;
+ out:
+ mutex_unlock(&rproc->lock);
+ return ret;
+diff --git a/drivers/soundwire/slave.c b/drivers/soundwire/slave.c
+index f1a4df6cfebd9c..2bcb733de4de49 100644
+--- a/drivers/soundwire/slave.c
++++ b/drivers/soundwire/slave.c
+@@ -12,6 +12,7 @@ static void sdw_slave_release(struct device *dev)
+ {
+ struct sdw_slave *slave = dev_to_sdw_dev(dev);
+
++ of_node_put(slave->dev.of_node);
+ mutex_destroy(&slave->sdw_dev_lock);
+ kfree(slave);
+ }
+diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c
+index e1b9b12357877f..5926e004d9a659 100644
+--- a/drivers/spi/spi-bcm2835.c
++++ b/drivers/spi/spi-bcm2835.c
+@@ -1162,7 +1162,8 @@ static void bcm2835_spi_cleanup(struct spi_device *spi)
+ sizeof(u32),
+ DMA_TO_DEVICE);
+
+- gpiod_put(bs->cs_gpio);
++ if (!IS_ERR(bs->cs_gpio))
++ gpiod_put(bs->cs_gpio);
+ spi_set_csgpiod(spi, 0, NULL);
+
+ kfree(target);
+@@ -1225,7 +1226,12 @@ static int bcm2835_spi_setup(struct spi_device *spi)
+ struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr);
+ struct bcm2835_spidev *target = spi_get_ctldata(spi);
+ struct gpiod_lookup_table *lookup __free(kfree) = NULL;
+- int ret;
++ const char *pinctrl_compats[] = {
++ "brcm,bcm2835-gpio",
++ "brcm,bcm2711-gpio",
++ "brcm,bcm7211-gpio",
++ };
++ int ret, i;
+ u32 cs;
+
+ if (!target) {
+@@ -1290,6 +1296,14 @@ static int bcm2835_spi_setup(struct spi_device *spi)
+ goto err_cleanup;
+ }
+
++ for (i = 0; i < ARRAY_SIZE(pinctrl_compats); i++) {
++ if (of_find_compatible_node(NULL, NULL, pinctrl_compats[i]))
++ break;
++ }
++
++ if (i == ARRAY_SIZE(pinctrl_compats))
++ return 0;
++
+ /*
+ * TODO: The code below is a slightly better alternative to the utter
+ * abuse of the GPIO API that I found here before. It creates a
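
The setup path now scans the device tree for one of the known Broadcom pinctrl nodes and returns early, skipping the GPIO lookup-table workaround, when none is present. A standalone sketch of that scan-and-bail pattern, with of_find_compatible_node() replaced by a lookup over a hypothetical node list:

    #include <stdio.h>
    #include <string.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    /* hypothetical stand-in for the flattened device tree */
    static const char *dt_nodes[] = { "brcm,bcm2711-gpio", "some,other-node" };

    static int node_present(const char *compat)
    {
        for (size_t i = 0; i < ARRAY_SIZE(dt_nodes); i++)
            if (!strcmp(dt_nodes[i], compat))
                return 1;
        return 0;
    }

    int main(void)
    {
        static const char *pinctrl_compats[] = {
            "brcm,bcm2835-gpio",
            "brcm,bcm2711-gpio",
            "brcm,bcm7211-gpio",
        };
        size_t i;

        for (i = 0; i < ARRAY_SIZE(pinctrl_compats); i++)
            if (node_present(pinctrl_compats[i]))
                break;

        if (i == ARRAY_SIZE(pinctrl_compats)) {
            printf("no known pinctrl node, skip CS GPIO lookup\n");
            return 0;
        }
        printf("matched %s, continue with CS GPIO setup\n", pinctrl_compats[i]);
        return 0;
    }
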
+diff --git a/drivers/spi/spi-cadence-xspi.c b/drivers/spi/spi-cadence-xspi.c
+index aed98ab1433467..6dcba0e0ddaa3e 100644
+--- a/drivers/spi/spi-cadence-xspi.c
++++ b/drivers/spi/spi-cadence-xspi.c
+@@ -432,7 +432,7 @@ static bool cdns_mrvl_xspi_setup_clock(struct cdns_xspi_dev *cdns_xspi,
+ u32 clk_reg;
+ bool update_clk = false;
+
+- while (i < ARRAY_SIZE(cdns_mrvl_xspi_clk_div_list)) {
++ while (i < (ARRAY_SIZE(cdns_mrvl_xspi_clk_div_list) - 1)) {
+ clk_val = MRVL_XSPI_CLOCK_DIVIDED(
+ cdns_mrvl_xspi_clk_div_list[i]);
+ if (clk_val <= requested_clk)
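
Stopping the divider search at ARRAY_SIZE - 1 keeps i a valid index when no entry satisfies the requested clock, so the loop falls back to the last (largest) divider instead of reading past the table. A runnable sketch with an illustrative divider list and reference clock:

    #include <stdio.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    /* illustrative divider list, largest divider (slowest clock) last */
    static const int div_list[] = { 2, 4, 8, 16, 32 };
    #define REF_CLK 800000000

    int main(void)
    {
        unsigned int requested = 10000000; /* 10 MHz */
        size_t i = 0;
        int clk_val;

        /* stop at the last entry so i stays a valid index */
        while (i < ARRAY_SIZE(div_list) - 1) {
            clk_val = REF_CLK / div_list[i];
            if (clk_val <= (int)requested)
                break;
            i++;
        }
        printf("divider %d -> %d Hz\n", div_list[i], REF_CLK / div_list[i]);
        return 0;
    }
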
+diff --git a/drivers/staging/rtl8723bs/Kconfig b/drivers/staging/rtl8723bs/Kconfig
+index 8d48c61961a6b7..353e6ee2c14508 100644
+--- a/drivers/staging/rtl8723bs/Kconfig
++++ b/drivers/staging/rtl8723bs/Kconfig
+@@ -4,6 +4,7 @@ config RTL8723BS
+ depends on WLAN && MMC && CFG80211
+ depends on m
+ select CRYPTO
++ select CRYPTO_LIB_AES
+ select CRYPTO_LIB_ARC4
+ help
+ This option enables support for RTL8723BS SDIO drivers, such as
+diff --git a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+index 5fab33adf58ed0..97787002080a18 100644
+--- a/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
++++ b/drivers/staging/vc04_services/interface/vchiq_arm/vchiq_arm.c
+@@ -1745,8 +1745,6 @@ static int vchiq_probe(struct platform_device *pdev)
+ if (ret)
+ goto failed_platform_init;
+
+- vchiq_debugfs_init(&mgmt->state);
+-
+ dev_dbg(&pdev->dev, "arm: platform initialised - version %d (min %d)\n",
+ VCHIQ_VERSION, VCHIQ_VERSION_MIN);
+
+@@ -1760,6 +1758,8 @@ static int vchiq_probe(struct platform_device *pdev)
+ goto error_exit;
+ }
+
++ vchiq_debugfs_init(&mgmt->state);
++
+ bcm2835_audio = vchiq_device_register(&pdev->dev, "bcm2835-audio");
+ bcm2835_camera = vchiq_device_register(&pdev->dev, "bcm2835-camera");
+
+@@ -1786,7 +1786,8 @@ static void vchiq_remove(struct platform_device *pdev)
+ kthread_stop(mgmt->state.slot_handler_thread);
+
+ arm_state = vchiq_platform_get_arm_state(&mgmt->state);
+- kthread_stop(arm_state->ka_thread);
++ if (!IS_ERR_OR_NULL(arm_state->ka_thread))
++ kthread_stop(arm_state->ka_thread);
+ }
+
+ static struct platform_driver vchiq_driver = {
+diff --git a/drivers/thermal/intel/int340x_thermal/int3402_thermal.c b/drivers/thermal/intel/int340x_thermal/int3402_thermal.c
+index ab8bfb5a3946bc..40ab6b2d4fb05f 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3402_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3402_thermal.c
+@@ -45,6 +45,9 @@ static int int3402_thermal_probe(struct platform_device *pdev)
+ struct int3402_thermal_data *d;
+ int ret;
+
++ if (!adev)
++ return -ENODEV;
++
+ if (!acpi_has_method(adev->handle, "_TMP"))
+ return -ENODEV;
+
+diff --git a/drivers/tty/n_tty.c b/drivers/tty/n_tty.c
+index 5e9ca4376d686e..94fa981081fdb5 100644
+--- a/drivers/tty/n_tty.c
++++ b/drivers/tty/n_tty.c
+@@ -486,7 +486,8 @@ static int do_output_char(u8 c, struct tty_struct *tty, int space)
+ static int process_output(u8 c, struct tty_struct *tty)
+ {
+ struct n_tty_data *ldata = tty->disc_data;
+- int space, retval;
++ unsigned int space;
++ int retval;
+
+ mutex_lock(&ldata->output_lock);
+
+@@ -522,16 +523,16 @@ static ssize_t process_output_block(struct tty_struct *tty,
+ const u8 *buf, unsigned int nr)
+ {
+ struct n_tty_data *ldata = tty->disc_data;
+- int space;
+- int i;
++ unsigned int space;
++ int i;
+ const u8 *cp;
+
+ mutex_lock(&ldata->output_lock);
+
+ space = tty_write_room(tty);
+- if (space <= 0) {
++ if (space == 0) {
+ mutex_unlock(&ldata->output_lock);
+- return space;
++ return 0;
+ }
+ if (nr > space)
+ nr = space;
+@@ -696,7 +697,7 @@ static int n_tty_process_echo_ops(struct tty_struct *tty, size_t *tail,
+ static size_t __process_echoes(struct tty_struct *tty)
+ {
+ struct n_tty_data *ldata = tty->disc_data;
+- int space, old_space;
++ unsigned int space, old_space;
+ size_t tail;
+ u8 c;
+
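
tty_write_room() returns unsigned int, so the n_tty paths above now keep the result unsigned and treat exactly zero as "no room" rather than testing <= 0 on a signed copy. A tiny userspace illustration of the sign hazard the patch removes (tty_write_room() stubbed):

    #include <stdio.h>

    /* stub: the real tty_write_room() returns unsigned int */
    static unsigned int tty_write_room(void)
    {
        return 0x80000000u; /* a huge (but valid) room value */
    }

    int main(void)
    {
        int wrong = tty_write_room();      /* wraps negative in practice */
        unsigned int right = tty_write_room();

        if (wrong <= 0)
            printf("signed copy: bogus 'no room' (%d)\n", wrong);
        if (right == 0)
            printf("unsigned: no room\n");
        else
            printf("unsigned: %u bytes of room\n", right);
        return 0;
    }
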
+diff --git a/drivers/tty/serial/fsl_lpuart.c b/drivers/tty/serial/fsl_lpuart.c
+index 9f9fc733eb2c1f..951c3cdac3b947 100644
+--- a/drivers/tty/serial/fsl_lpuart.c
++++ b/drivers/tty/serial/fsl_lpuart.c
+@@ -440,7 +440,7 @@ static unsigned int lpuart_get_baud_clk_rate(struct lpuart_port *sport)
+
+ static void lpuart_stop_tx(struct uart_port *port)
+ {
+- unsigned char temp;
++ u8 temp;
+
+ temp = readb(port->membase + UARTCR2);
+ temp &= ~(UARTCR2_TIE | UARTCR2_TCIE);
+@@ -449,7 +449,7 @@ static void lpuart_stop_tx(struct uart_port *port)
+
+ static void lpuart32_stop_tx(struct uart_port *port)
+ {
+- unsigned long temp;
++ u32 temp;
+
+ temp = lpuart32_read(port, UARTCTRL);
+ temp &= ~(UARTCTRL_TIE | UARTCTRL_TCIE);
+@@ -458,7 +458,7 @@ static void lpuart32_stop_tx(struct uart_port *port)
+
+ static void lpuart_stop_rx(struct uart_port *port)
+ {
+- unsigned char temp;
++ u8 temp;
+
+ temp = readb(port->membase + UARTCR2);
+ writeb(temp & ~UARTCR2_RE, port->membase + UARTCR2);
+@@ -466,7 +466,7 @@ static void lpuart_stop_rx(struct uart_port *port)
+
+ static void lpuart32_stop_rx(struct uart_port *port)
+ {
+- unsigned long temp;
++ u32 temp;
+
+ temp = lpuart32_read(port, UARTCTRL);
+ lpuart32_write(port, temp & ~UARTCTRL_RE, UARTCTRL);
+@@ -580,7 +580,7 @@ static int lpuart_dma_tx_request(struct uart_port *port)
+ ret = dmaengine_slave_config(sport->dma_tx_chan, &dma_tx_sconfig);
+
+ if (ret) {
+- dev_err(sport->port.dev,
++ dev_err(port->dev,
+ "DMA slave config failed, err = %d\n", ret);
+ return ret;
+ }
+@@ -610,13 +610,13 @@ static void lpuart_flush_buffer(struct uart_port *port)
+ }
+
+ if (lpuart_is_32(sport)) {
+- val = lpuart32_read(&sport->port, UARTFIFO);
++ val = lpuart32_read(port, UARTFIFO);
+ val |= UARTFIFO_TXFLUSH | UARTFIFO_RXFLUSH;
+- lpuart32_write(&sport->port, val, UARTFIFO);
++ lpuart32_write(port, val, UARTFIFO);
+ } else {
+- val = readb(sport->port.membase + UARTCFIFO);
++ val = readb(port->membase + UARTCFIFO);
+ val |= UARTCFIFO_TXFLUSH | UARTCFIFO_RXFLUSH;
+- writeb(val, sport->port.membase + UARTCFIFO);
++ writeb(val, port->membase + UARTCFIFO);
+ }
+ }
+
+@@ -638,38 +638,36 @@ static void lpuart32_wait_bit_set(struct uart_port *port, unsigned int offset,
+
+ static int lpuart_poll_init(struct uart_port *port)
+ {
+- struct lpuart_port *sport = container_of(port,
+- struct lpuart_port, port);
+ unsigned long flags;
+- unsigned char temp;
++ u8 temp;
+
+- sport->port.fifosize = 0;
++ port->fifosize = 0;
+
+- uart_port_lock_irqsave(&sport->port, &flags);
++ uart_port_lock_irqsave(port, &flags);
+ /* Disable Rx & Tx */
+- writeb(0, sport->port.membase + UARTCR2);
++ writeb(0, port->membase + UARTCR2);
+
+- temp = readb(sport->port.membase + UARTPFIFO);
++ temp = readb(port->membase + UARTPFIFO);
+ /* Enable Rx and Tx FIFO */
+ writeb(temp | UARTPFIFO_RXFE | UARTPFIFO_TXFE,
+- sport->port.membase + UARTPFIFO);
++ port->membase + UARTPFIFO);
+
+ /* flush Tx and Rx FIFO */
+ writeb(UARTCFIFO_TXFLUSH | UARTCFIFO_RXFLUSH,
+- sport->port.membase + UARTCFIFO);
++ port->membase + UARTCFIFO);
+
+ /* explicitly clear RDRF */
+- if (readb(sport->port.membase + UARTSR1) & UARTSR1_RDRF) {
+- readb(sport->port.membase + UARTDR);
+- writeb(UARTSFIFO_RXUF, sport->port.membase + UARTSFIFO);
++ if (readb(port->membase + UARTSR1) & UARTSR1_RDRF) {
++ readb(port->membase + UARTDR);
++ writeb(UARTSFIFO_RXUF, port->membase + UARTSFIFO);
+ }
+
+- writeb(0, sport->port.membase + UARTTWFIFO);
+- writeb(1, sport->port.membase + UARTRWFIFO);
++ writeb(0, port->membase + UARTTWFIFO);
++ writeb(1, port->membase + UARTRWFIFO);
+
+ /* Enable Rx and Tx */
+- writeb(UARTCR2_RE | UARTCR2_TE, sport->port.membase + UARTCR2);
+- uart_port_unlock_irqrestore(&sport->port, flags);
++ writeb(UARTCR2_RE | UARTCR2_TE, port->membase + UARTCR2);
++ uart_port_unlock_irqrestore(port, flags);
+
+ return 0;
+ }
+@@ -692,33 +690,32 @@ static int lpuart_poll_get_char(struct uart_port *port)
+ static int lpuart32_poll_init(struct uart_port *port)
+ {
+ unsigned long flags;
+- struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+ u32 temp;
+
+- sport->port.fifosize = 0;
++ port->fifosize = 0;
+
+- uart_port_lock_irqsave(&sport->port, &flags);
++ uart_port_lock_irqsave(port, &flags);
+
+ /* Disable Rx & Tx */
+- lpuart32_write(&sport->port, 0, UARTCTRL);
++ lpuart32_write(port, 0, UARTCTRL);
+
+- temp = lpuart32_read(&sport->port, UARTFIFO);
++ temp = lpuart32_read(port, UARTFIFO);
+
+ /* Enable Rx and Tx FIFO */
+- lpuart32_write(&sport->port, temp | UARTFIFO_RXFE | UARTFIFO_TXFE, UARTFIFO);
++ lpuart32_write(port, temp | UARTFIFO_RXFE | UARTFIFO_TXFE, UARTFIFO);
+
+ /* flush Tx and Rx FIFO */
+- lpuart32_write(&sport->port, UARTFIFO_TXFLUSH | UARTFIFO_RXFLUSH, UARTFIFO);
++ lpuart32_write(port, UARTFIFO_TXFLUSH | UARTFIFO_RXFLUSH, UARTFIFO);
+
+ /* explicitly clear RDRF */
+- if (lpuart32_read(&sport->port, UARTSTAT) & UARTSTAT_RDRF) {
+- lpuart32_read(&sport->port, UARTDATA);
+- lpuart32_write(&sport->port, UARTFIFO_RXUF, UARTFIFO);
++ if (lpuart32_read(port, UARTSTAT) & UARTSTAT_RDRF) {
++ lpuart32_read(port, UARTDATA);
++ lpuart32_write(port, UARTFIFO_RXUF, UARTFIFO);
+ }
+
+ /* Enable Rx and Tx */
+- lpuart32_write(&sport->port, UARTCTRL_RE | UARTCTRL_TE, UARTCTRL);
+- uart_port_unlock_irqrestore(&sport->port, flags);
++ lpuart32_write(port, UARTCTRL_RE | UARTCTRL_TE, UARTCTRL);
++ uart_port_unlock_irqrestore(port, flags);
+
+ return 0;
+ }
+@@ -751,7 +748,7 @@ static inline void lpuart_transmit_buffer(struct lpuart_port *sport)
+ static inline void lpuart32_transmit_buffer(struct lpuart_port *sport)
+ {
+ struct tty_port *tport = &sport->port.state->port;
+- unsigned long txcnt;
++ u32 txcnt;
+ unsigned char c;
+
+ if (sport->port.x_char) {
+@@ -788,7 +785,7 @@ static void lpuart_start_tx(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port,
+ struct lpuart_port, port);
+- unsigned char temp;
++ u8 temp;
+
+ temp = readb(port->membase + UARTCR2);
+ writeb(temp | UARTCR2_TIE, port->membase + UARTCR2);
+@@ -805,7 +802,7 @@ static void lpuart_start_tx(struct uart_port *port)
+ static void lpuart32_start_tx(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+- unsigned long temp;
++ u32 temp;
+
+ if (sport->lpuart_dma_tx_use) {
+ if (!lpuart_stopped_or_empty(port))
+@@ -838,8 +835,8 @@ static unsigned int lpuart_tx_empty(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port,
+ struct lpuart_port, port);
+- unsigned char sr1 = readb(port->membase + UARTSR1);
+- unsigned char sfifo = readb(port->membase + UARTSFIFO);
++ u8 sr1 = readb(port->membase + UARTSR1);
++ u8 sfifo = readb(port->membase + UARTSFIFO);
+
+ if (sport->dma_tx_in_progress)
+ return 0;
+@@ -854,9 +851,9 @@ static unsigned int lpuart32_tx_empty(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port,
+ struct lpuart_port, port);
+- unsigned long stat = lpuart32_read(port, UARTSTAT);
+- unsigned long sfifo = lpuart32_read(port, UARTFIFO);
+- unsigned long ctrl = lpuart32_read(port, UARTCTRL);
++ u32 stat = lpuart32_read(port, UARTSTAT);
++ u32 sfifo = lpuart32_read(port, UARTFIFO);
++ u32 ctrl = lpuart32_read(port, UARTCTRL);
+
+ if (sport->dma_tx_in_progress)
+ return 0;
+@@ -883,7 +880,7 @@ static void lpuart_rxint(struct lpuart_port *sport)
+ {
+ unsigned int flg, ignored = 0, overrun = 0;
+ struct tty_port *port = &sport->port.state->port;
+- unsigned char rx, sr;
++ u8 rx, sr;
+
+ uart_port_lock(&sport->port);
+
+@@ -960,7 +957,7 @@ static void lpuart32_rxint(struct lpuart_port *sport)
+ {
+ unsigned int flg, ignored = 0;
+ struct tty_port *port = &sport->port.state->port;
+- unsigned long rx, sr;
++ u32 rx, sr;
+ bool is_break;
+
+ uart_port_lock(&sport->port);
+@@ -1038,7 +1035,7 @@ static void lpuart32_rxint(struct lpuart_port *sport)
+ static irqreturn_t lpuart_int(int irq, void *dev_id)
+ {
+ struct lpuart_port *sport = dev_id;
+- unsigned char sts;
++ u8 sts;
+
+ sts = readb(sport->port.membase + UARTSR1);
+
+@@ -1112,7 +1109,7 @@ static void lpuart_copy_rx_to_tty(struct lpuart_port *sport)
+ int count, copied;
+
+ if (lpuart_is_32(sport)) {
+- unsigned long sr = lpuart32_read(&sport->port, UARTSTAT);
++ u32 sr = lpuart32_read(&sport->port, UARTSTAT);
+
+ if (sr & (UARTSTAT_PE | UARTSTAT_FE)) {
+ /* Clear the error flags */
+@@ -1124,10 +1121,10 @@ static void lpuart_copy_rx_to_tty(struct lpuart_port *sport)
+ sport->port.icount.frame++;
+ }
+ } else {
+- unsigned char sr = readb(sport->port.membase + UARTSR1);
++ u8 sr = readb(sport->port.membase + UARTSR1);
+
+ if (sr & (UARTSR1_PE | UARTSR1_FE)) {
+- unsigned char cr2;
++ u8 cr2;
+
+ /* Disable receiver during this operation... */
+ cr2 = readb(sport->port.membase + UARTCR2);
+@@ -1278,7 +1275,7 @@ static void lpuart32_dma_idleint(struct lpuart_port *sport)
+ static irqreturn_t lpuart32_int(int irq, void *dev_id)
+ {
+ struct lpuart_port *sport = dev_id;
+- unsigned long sts, rxcount;
++ u32 sts, rxcount;
+
+ sts = lpuart32_read(&sport->port, UARTSTAT);
+ rxcount = lpuart32_read(&sport->port, UARTWATER);
+@@ -1410,12 +1407,12 @@ static inline int lpuart_start_rx_dma(struct lpuart_port *sport)
+ dma_async_issue_pending(chan);
+
+ if (lpuart_is_32(sport)) {
+- unsigned long temp = lpuart32_read(&sport->port, UARTBAUD);
++ u32 temp = lpuart32_read(&sport->port, UARTBAUD);
+
+ lpuart32_write(&sport->port, temp | UARTBAUD_RDMAE, UARTBAUD);
+
+ if (sport->dma_idle_int) {
+- unsigned long ctrl = lpuart32_read(&sport->port, UARTCTRL);
++ u32 ctrl = lpuart32_read(&sport->port, UARTCTRL);
+
+ lpuart32_write(&sport->port, ctrl | UARTCTRL_ILIE, UARTCTRL);
+ }
+@@ -1448,12 +1445,9 @@ static void lpuart_dma_rx_free(struct uart_port *port)
+ static int lpuart_config_rs485(struct uart_port *port, struct ktermios *termios,
+ struct serial_rs485 *rs485)
+ {
+- struct lpuart_port *sport = container_of(port,
+- struct lpuart_port, port);
+-
+- u8 modem = readb(sport->port.membase + UARTMODEM) &
++ u8 modem = readb(port->membase + UARTMODEM) &
+ ~(UARTMODEM_TXRTSPOL | UARTMODEM_TXRTSE);
+- writeb(modem, sport->port.membase + UARTMODEM);
++ writeb(modem, port->membase + UARTMODEM);
+
+ if (rs485->flags & SER_RS485_ENABLED) {
+ /* Enable auto RS-485 RTS mode */
+@@ -1471,32 +1465,29 @@ static int lpuart_config_rs485(struct uart_port *port, struct ktermios *termios,
+ modem &= ~UARTMODEM_TXRTSPOL;
+ }
+
+- writeb(modem, sport->port.membase + UARTMODEM);
++ writeb(modem, port->membase + UARTMODEM);
+ return 0;
+ }
+
+ static int lpuart32_config_rs485(struct uart_port *port, struct ktermios *termios,
+ struct serial_rs485 *rs485)
+ {
+- struct lpuart_port *sport = container_of(port,
+- struct lpuart_port, port);
+-
+- unsigned long modem = lpuart32_read(&sport->port, UARTMODIR)
++ u32 modem = lpuart32_read(port, UARTMODIR)
+ & ~(UARTMODIR_TXRTSPOL | UARTMODIR_TXRTSE);
+ u32 ctrl;
+
+ /* TXRTSE and TXRTSPOL only can be changed when transmitter is disabled. */
+- ctrl = lpuart32_read(&sport->port, UARTCTRL);
++ ctrl = lpuart32_read(port, UARTCTRL);
+ if (ctrl & UARTCTRL_TE) {
+ /* wait for the transmit engine to complete */
+- lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
+- lpuart32_write(&sport->port, ctrl & ~UARTCTRL_TE, UARTCTRL);
++ lpuart32_wait_bit_set(port, UARTSTAT, UARTSTAT_TC);
++ lpuart32_write(port, ctrl & ~UARTCTRL_TE, UARTCTRL);
+
+- while (lpuart32_read(&sport->port, UARTCTRL) & UARTCTRL_TE)
++ while (lpuart32_read(port, UARTCTRL) & UARTCTRL_TE)
+ cpu_relax();
+ }
+
+- lpuart32_write(&sport->port, modem, UARTMODIR);
++ lpuart32_write(port, modem, UARTMODIR);
+
+ if (rs485->flags & SER_RS485_ENABLED) {
+ /* Enable auto RS-485 RTS mode */
+@@ -1514,10 +1505,10 @@ static int lpuart32_config_rs485(struct uart_port *port, struct ktermios *termio
+ modem &= ~UARTMODIR_TXRTSPOL;
+ }
+
+- lpuart32_write(&sport->port, modem, UARTMODIR);
++ lpuart32_write(port, modem, UARTMODIR);
+
+ if (ctrl & UARTCTRL_TE)
+- lpuart32_write(&sport->port, ctrl, UARTCTRL);
++ lpuart32_write(port, ctrl, UARTCTRL);
+
+ return 0;
+ }
+@@ -1576,7 +1567,7 @@ static void lpuart32_set_mctrl(struct uart_port *port, unsigned int mctrl)
+
+ static void lpuart_break_ctl(struct uart_port *port, int break_state)
+ {
+- unsigned char temp;
++ u8 temp;
+
+ temp = readb(port->membase + UARTCR2) & ~UARTCR2_SBK;
+
+@@ -1588,7 +1579,7 @@ static void lpuart_break_ctl(struct uart_port *port, int break_state)
+
+ static void lpuart32_break_ctl(struct uart_port *port, int break_state)
+ {
+- unsigned long temp;
++ u32 temp;
+
+ temp = lpuart32_read(port, UARTCTRL);
+
+@@ -1622,8 +1613,7 @@ static void lpuart32_break_ctl(struct uart_port *port, int break_state)
+
+ static void lpuart_setup_watermark(struct lpuart_port *sport)
+ {
+- unsigned char val, cr2;
+- unsigned char cr2_saved;
++ u8 val, cr2, cr2_saved;
+
+ cr2 = readb(sport->port.membase + UARTCR2);
+ cr2_saved = cr2;
+@@ -1656,7 +1646,7 @@ static void lpuart_setup_watermark(struct lpuart_port *sport)
+
+ static void lpuart_setup_watermark_enable(struct lpuart_port *sport)
+ {
+- unsigned char cr2;
++ u8 cr2;
+
+ lpuart_setup_watermark(sport);
+
+@@ -1667,8 +1657,7 @@ static void lpuart_setup_watermark_enable(struct lpuart_port *sport)
+
+ static void lpuart32_setup_watermark(struct lpuart_port *sport)
+ {
+- unsigned long val, ctrl;
+- unsigned long ctrl_saved;
++ u32 val, ctrl, ctrl_saved;
+
+ ctrl = lpuart32_read(&sport->port, UARTCTRL);
+ ctrl_saved = ctrl;
+@@ -1777,7 +1766,7 @@ static void lpuart_tx_dma_startup(struct lpuart_port *sport)
+ static void lpuart_rx_dma_startup(struct lpuart_port *sport)
+ {
+ int ret;
+- unsigned char cr3;
++ u8 cr3;
+
+ if (uart_console(&sport->port))
+ goto err;
+@@ -1827,14 +1816,14 @@ static void lpuart_hw_setup(struct lpuart_port *sport)
+ static int lpuart_startup(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+- unsigned char temp;
++ u8 temp;
+
+ /* determine FIFO size and enable FIFO mode */
+- temp = readb(sport->port.membase + UARTPFIFO);
++ temp = readb(port->membase + UARTPFIFO);
+
+ sport->txfifo_size = UARTFIFO_DEPTH((temp >> UARTPFIFO_TXSIZE_OFF) &
+ UARTPFIFO_FIFOSIZE_MASK);
+- sport->port.fifosize = sport->txfifo_size;
++ port->fifosize = sport->txfifo_size;
+
+ sport->rxfifo_size = UARTFIFO_DEPTH((temp >> UARTPFIFO_RXSIZE_OFF) &
+ UARTPFIFO_FIFOSIZE_MASK);
+@@ -1847,7 +1836,7 @@ static int lpuart_startup(struct uart_port *port)
+
+ static void lpuart32_hw_disable(struct lpuart_port *sport)
+ {
+- unsigned long temp;
++ u32 temp;
+
+ temp = lpuart32_read(&sport->port, UARTCTRL);
+ temp &= ~(UARTCTRL_RIE | UARTCTRL_ILIE | UARTCTRL_RE |
+@@ -1857,7 +1846,7 @@ static void lpuart32_hw_disable(struct lpuart_port *sport)
+
+ static void lpuart32_configure(struct lpuart_port *sport)
+ {
+- unsigned long temp;
++ u32 temp;
+
+ temp = lpuart32_read(&sport->port, UARTCTRL);
+ if (!sport->lpuart_dma_rx_use)
+@@ -1887,14 +1876,14 @@ static void lpuart32_hw_setup(struct lpuart_port *sport)
+ static int lpuart32_startup(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+- unsigned long temp;
++ u32 temp;
+
+ /* determine FIFO size */
+- temp = lpuart32_read(&sport->port, UARTFIFO);
++ temp = lpuart32_read(port, UARTFIFO);
+
+ sport->txfifo_size = UARTFIFO_DEPTH((temp >> UARTFIFO_TXSIZE_OFF) &
+ UARTFIFO_FIFOSIZE_MASK);
+- sport->port.fifosize = sport->txfifo_size;
++ port->fifosize = sport->txfifo_size;
+
+ sport->rxfifo_size = UARTFIFO_DEPTH((temp >> UARTFIFO_RXSIZE_OFF) &
+ UARTFIFO_FIFOSIZE_MASK);
+@@ -1907,7 +1896,7 @@ static int lpuart32_startup(struct uart_port *port)
+ if (is_layerscape_lpuart(sport)) {
+ sport->rxfifo_size = 16;
+ sport->txfifo_size = 16;
+- sport->port.fifosize = sport->txfifo_size;
++ port->fifosize = sport->txfifo_size;
+ }
+
+ lpuart_request_dma(sport);
+@@ -1941,7 +1930,7 @@ static void lpuart_dma_shutdown(struct lpuart_port *sport)
+ static void lpuart_shutdown(struct uart_port *port)
+ {
+ struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+- unsigned char temp;
++ u8 temp;
+ unsigned long flags;
+
+ uart_port_lock_irqsave(port, &flags);
+@@ -1961,14 +1950,14 @@ static void lpuart32_shutdown(struct uart_port *port)
+ {
+ struct lpuart_port *sport =
+ container_of(port, struct lpuart_port, port);
+- unsigned long temp;
++ u32 temp;
+ unsigned long flags;
+
+ uart_port_lock_irqsave(port, &flags);
+
+ /* clear status */
+- temp = lpuart32_read(&sport->port, UARTSTAT);
+- lpuart32_write(&sport->port, temp, UARTSTAT);
++ temp = lpuart32_read(port, UARTSTAT);
++ lpuart32_write(port, temp, UARTSTAT);
+
+ /* disable Rx/Tx DMA */
+ temp = lpuart32_read(port, UARTBAUD);
+@@ -1992,17 +1981,17 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios,
+ {
+ struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+ unsigned long flags;
+- unsigned char cr1, old_cr1, old_cr2, cr3, cr4, bdh, modem;
++ u8 cr1, old_cr1, old_cr2, cr3, cr4, bdh, modem;
+ unsigned int baud;
+ unsigned int old_csize = old ? old->c_cflag & CSIZE : CS8;
+ unsigned int sbr, brfa;
+
+- cr1 = old_cr1 = readb(sport->port.membase + UARTCR1);
+- old_cr2 = readb(sport->port.membase + UARTCR2);
+- cr3 = readb(sport->port.membase + UARTCR3);
+- cr4 = readb(sport->port.membase + UARTCR4);
+- bdh = readb(sport->port.membase + UARTBDH);
+- modem = readb(sport->port.membase + UARTMODEM);
++ cr1 = old_cr1 = readb(port->membase + UARTCR1);
++ old_cr2 = readb(port->membase + UARTCR2);
++ cr3 = readb(port->membase + UARTCR3);
++ cr4 = readb(port->membase + UARTCR4);
++ bdh = readb(port->membase + UARTBDH);
++ modem = readb(port->membase + UARTMODEM);
+ /*
+ * only support CS8 and CS7, and for CS7 must enable PE.
+ * supported mode:
+@@ -2034,7 +2023,7 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios,
+ * When auto RS-485 RTS mode is enabled,
+ * hardware flow control need to be disabled.
+ */
+- if (sport->port.rs485.flags & SER_RS485_ENABLED)
++ if (port->rs485.flags & SER_RS485_ENABLED)
+ termios->c_cflag &= ~CRTSCTS;
+
+ if (termios->c_cflag & CRTSCTS)
+@@ -2075,59 +2064,59 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios,
+ * Need to update the Ring buffer length according to the selected
+ * baud rate and restart Rx DMA path.
+ *
+- * Since timer function acqures sport->port.lock, need to stop before
++ * Since timer function acqures port->lock, need to stop before
+ * acquring same lock because otherwise del_timer_sync() can deadlock.
+ */
+ if (old && sport->lpuart_dma_rx_use)
+- lpuart_dma_rx_free(&sport->port);
++ lpuart_dma_rx_free(port);
+
+- uart_port_lock_irqsave(&sport->port, &flags);
++ uart_port_lock_irqsave(port, &flags);
+
+- sport->port.read_status_mask = 0;
++ port->read_status_mask = 0;
+ if (termios->c_iflag & INPCK)
+- sport->port.read_status_mask |= UARTSR1_FE | UARTSR1_PE;
++ port->read_status_mask |= UARTSR1_FE | UARTSR1_PE;
+ if (termios->c_iflag & (IGNBRK | BRKINT | PARMRK))
+- sport->port.read_status_mask |= UARTSR1_FE;
++ port->read_status_mask |= UARTSR1_FE;
+
+ /* characters to ignore */
+- sport->port.ignore_status_mask = 0;
++ port->ignore_status_mask = 0;
+ if (termios->c_iflag & IGNPAR)
+- sport->port.ignore_status_mask |= UARTSR1_PE;
++ port->ignore_status_mask |= UARTSR1_PE;
+ if (termios->c_iflag & IGNBRK) {
+- sport->port.ignore_status_mask |= UARTSR1_FE;
++ port->ignore_status_mask |= UARTSR1_FE;
+ /*
+ * if we're ignoring parity and break indicators,
+ * ignore overruns too (for real raw support).
+ */
+ if (termios->c_iflag & IGNPAR)
+- sport->port.ignore_status_mask |= UARTSR1_OR;
++ port->ignore_status_mask |= UARTSR1_OR;
+ }
+
+ /* update the per-port timeout */
+ uart_update_timeout(port, termios->c_cflag, baud);
+
+ /* wait transmit engin complete */
+- lpuart_wait_bit_set(&sport->port, UARTSR1, UARTSR1_TC);
++ lpuart_wait_bit_set(port, UARTSR1, UARTSR1_TC);
+
+ /* disable transmit and receive */
+ writeb(old_cr2 & ~(UARTCR2_TE | UARTCR2_RE),
+- sport->port.membase + UARTCR2);
++ port->membase + UARTCR2);
+
+- sbr = sport->port.uartclk / (16 * baud);
+- brfa = ((sport->port.uartclk - (16 * sbr * baud)) * 2) / baud;
++ sbr = port->uartclk / (16 * baud);
++ brfa = ((port->uartclk - (16 * sbr * baud)) * 2) / baud;
+ bdh &= ~UARTBDH_SBR_MASK;
+ bdh |= (sbr >> 8) & 0x1F;
+ cr4 &= ~UARTCR4_BRFA_MASK;
+ brfa &= UARTCR4_BRFA_MASK;
+- writeb(cr4 | brfa, sport->port.membase + UARTCR4);
+- writeb(bdh, sport->port.membase + UARTBDH);
+- writeb(sbr & 0xFF, sport->port.membase + UARTBDL);
+- writeb(cr3, sport->port.membase + UARTCR3);
+- writeb(cr1, sport->port.membase + UARTCR1);
+- writeb(modem, sport->port.membase + UARTMODEM);
++ writeb(cr4 | brfa, port->membase + UARTCR4);
++ writeb(bdh, port->membase + UARTBDH);
++ writeb(sbr & 0xFF, port->membase + UARTBDL);
++ writeb(cr3, port->membase + UARTCR3);
++ writeb(cr1, port->membase + UARTCR1);
++ writeb(modem, port->membase + UARTMODEM);
+
+ /* restore control register */
+- writeb(old_cr2, sport->port.membase + UARTCR2);
++ writeb(old_cr2, port->membase + UARTCR2);
+
+ if (old && sport->lpuart_dma_rx_use) {
+ if (!lpuart_start_rx_dma(sport))
+@@ -2136,7 +2125,7 @@ lpuart_set_termios(struct uart_port *port, struct ktermios *termios,
+ sport->lpuart_dma_rx_use = false;
+ }
+
+- uart_port_unlock_irqrestore(&sport->port, flags);
++ uart_port_unlock_irqrestore(port, flags);
+ }
+
+ static void __lpuart32_serial_setbrg(struct uart_port *port,
+@@ -2230,13 +2219,13 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ {
+ struct lpuart_port *sport = container_of(port, struct lpuart_port, port);
+ unsigned long flags;
+- unsigned long ctrl, old_ctrl, bd, modem;
++ u32 ctrl, old_ctrl, bd, modem;
+ unsigned int baud;
+ unsigned int old_csize = old ? old->c_cflag & CSIZE : CS8;
+
+- ctrl = old_ctrl = lpuart32_read(&sport->port, UARTCTRL);
+- bd = lpuart32_read(&sport->port, UARTBAUD);
+- modem = lpuart32_read(&sport->port, UARTMODIR);
++ ctrl = old_ctrl = lpuart32_read(port, UARTCTRL);
++ bd = lpuart32_read(port, UARTBAUD);
++ modem = lpuart32_read(port, UARTMODIR);
+ sport->is_cs7 = false;
+ /*
+ * only support CS8 and CS7, and for CS7 must enable PE.
+@@ -2269,7 +2258,7 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ * When auto RS-485 RTS mode is enabled,
+ * hardware flow control need to be disabled.
+ */
+- if (sport->port.rs485.flags & SER_RS485_ENABLED)
++ if (port->rs485.flags & SER_RS485_ENABLED)
+ termios->c_cflag &= ~CRTSCTS;
+
+ if (termios->c_cflag & CRTSCTS)
+@@ -2310,59 +2299,61 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ * Need to update the Ring buffer length according to the selected
+ * baud rate and restart Rx DMA path.
+ *
+- * Since timer function acqures sport->port.lock, need to stop before
++ * Since timer function acqures port->lock, need to stop before
+ * acquring same lock because otherwise del_timer_sync() can deadlock.
+ */
+ if (old && sport->lpuart_dma_rx_use)
+- lpuart_dma_rx_free(&sport->port);
++ lpuart_dma_rx_free(port);
+
+- uart_port_lock_irqsave(&sport->port, &flags);
++ uart_port_lock_irqsave(port, &flags);
+
+- sport->port.read_status_mask = 0;
++ port->read_status_mask = 0;
+ if (termios->c_iflag & INPCK)
+- sport->port.read_status_mask |= UARTSTAT_FE | UARTSTAT_PE;
++ port->read_status_mask |= UARTSTAT_FE | UARTSTAT_PE;
+ if (termios->c_iflag & (IGNBRK | BRKINT | PARMRK))
+- sport->port.read_status_mask |= UARTSTAT_FE;
++ port->read_status_mask |= UARTSTAT_FE;
+
+ /* characters to ignore */
+- sport->port.ignore_status_mask = 0;
++ port->ignore_status_mask = 0;
+ if (termios->c_iflag & IGNPAR)
+- sport->port.ignore_status_mask |= UARTSTAT_PE;
++ port->ignore_status_mask |= UARTSTAT_PE;
+ if (termios->c_iflag & IGNBRK) {
+- sport->port.ignore_status_mask |= UARTSTAT_FE;
++ port->ignore_status_mask |= UARTSTAT_FE;
+ /*
+ * if we're ignoring parity and break indicators,
+ * ignore overruns too (for real raw support).
+ */
+ if (termios->c_iflag & IGNPAR)
+- sport->port.ignore_status_mask |= UARTSTAT_OR;
++ port->ignore_status_mask |= UARTSTAT_OR;
+ }
+
+ /* update the per-port timeout */
+ uart_update_timeout(port, termios->c_cflag, baud);
+
++ /*
++ * disable CTS to ensure the transmit engine is not blocked by the flow
++ * control when there is dirty data in TX FIFO
++ */
++ lpuart32_write(port, modem & ~UARTMODIR_TXCTSE, UARTMODIR);
++
+ /*
+ * LPUART Transmission Complete Flag may never be set while queuing a break
+ * character, so skip waiting for transmission complete when UARTCTRL_SBK is
+ * asserted.
+ */
+- if (!(old_ctrl & UARTCTRL_SBK)) {
+- lpuart32_write(&sport->port, 0, UARTMODIR);
+- lpuart32_wait_bit_set(&sport->port, UARTSTAT, UARTSTAT_TC);
+- }
++ if (!(old_ctrl & UARTCTRL_SBK))
++ lpuart32_wait_bit_set(port, UARTSTAT, UARTSTAT_TC);
+
+ /* disable transmit and receive */
+- lpuart32_write(&sport->port, old_ctrl & ~(UARTCTRL_TE | UARTCTRL_RE),
++ lpuart32_write(port, old_ctrl & ~(UARTCTRL_TE | UARTCTRL_RE),
+ UARTCTRL);
+
+- lpuart32_write(&sport->port, bd, UARTBAUD);
++ lpuart32_write(port, bd, UARTBAUD);
+ lpuart32_serial_setbrg(sport, baud);
+- /* disable CTS before enabling UARTCTRL_TE to avoid pending idle preamble */
+- lpuart32_write(&sport->port, modem & ~UARTMODIR_TXCTSE, UARTMODIR);
+ /* restore control register */
+- lpuart32_write(&sport->port, ctrl, UARTCTRL);
++ lpuart32_write(port, ctrl, UARTCTRL);
+ /* re-enable the CTS if needed */
+- lpuart32_write(&sport->port, modem, UARTMODIR);
++ lpuart32_write(port, modem, UARTMODIR);
+
+ if ((ctrl & (UARTCTRL_PE | UARTCTRL_M)) == UARTCTRL_PE)
+ sport->is_cs7 = true;
+@@ -2374,7 +2365,7 @@ lpuart32_set_termios(struct uart_port *port, struct ktermios *termios,
+ sport->lpuart_dma_rx_use = false;
+ }
+
+- uart_port_unlock_irqrestore(&sport->port, flags);
++ uart_port_unlock_irqrestore(port, flags);
+ }
+
+ static const char *lpuart_type(struct uart_port *port)
+@@ -2487,7 +2478,7 @@ static void
+ lpuart_console_write(struct console *co, const char *s, unsigned int count)
+ {
+ struct lpuart_port *sport = lpuart_ports[co->index];
+- unsigned char old_cr2, cr2;
++ u8 old_cr2, cr2;
+ unsigned long flags;
+ int locked = 1;
+
+@@ -2517,7 +2508,7 @@ static void
+ lpuart32_console_write(struct console *co, const char *s, unsigned int count)
+ {
+ struct lpuart_port *sport = lpuart_ports[co->index];
+- unsigned long old_cr, cr;
++ u32 old_cr, cr;
+ unsigned long flags;
+ int locked = 1;
+
+@@ -2551,7 +2542,7 @@ static void __init
+ lpuart_console_get_options(struct lpuart_port *sport, int *baud,
+ int *parity, int *bits)
+ {
+- unsigned char cr, bdh, bdl, brfa;
++ u8 cr, bdh, bdl, brfa;
+ unsigned int sbr, uartclk, baud_raw;
+
+ cr = readb(sport->port.membase + UARTCR2);
+@@ -2600,7 +2591,7 @@ static void __init
+ lpuart32_console_get_options(struct lpuart_port *sport, int *baud,
+ int *parity, int *bits)
+ {
+- unsigned long cr, bd;
++ u32 cr, bd;
+ unsigned int sbr, uartclk, baud_raw;
+
+ cr = lpuart32_read(&sport->port, UARTCTRL);
+@@ -2806,13 +2797,13 @@ static int lpuart_global_reset(struct lpuart_port *sport)
+ {
+ struct uart_port *port = &sport->port;
+ void __iomem *global_addr;
+- unsigned long ctrl, bd;
++ u32 ctrl, bd;
+ unsigned int val = 0;
+ int ret;
+
+ ret = clk_prepare_enable(sport->ipg_clk);
+ if (ret) {
+- dev_err(sport->port.dev, "failed to enable uart ipg clk: %d\n", ret);
++ dev_err(port->dev, "failed to enable uart ipg clk: %d\n", ret);
+ return ret;
+ }
+
+@@ -2823,10 +2814,10 @@ static int lpuart_global_reset(struct lpuart_port *sport)
+ */
+ ctrl = lpuart32_read(port, UARTCTRL);
+ if (ctrl & UARTCTRL_TE) {
+- bd = lpuart32_read(&sport->port, UARTBAUD);
++ bd = lpuart32_read(port, UARTBAUD);
+ if (read_poll_timeout(lpuart32_tx_empty, val, val, 1, 100000, false,
+ port)) {
+- dev_warn(sport->port.dev,
++ dev_warn(port->dev,
+ "timeout waiting for transmit engine to complete\n");
+ clk_disable_unprepare(sport->ipg_clk);
+ return 0;
+@@ -3012,7 +3003,7 @@ static int lpuart_runtime_resume(struct device *dev)
+
+ static void serial_lpuart_enable_wakeup(struct lpuart_port *sport, bool on)
+ {
+- unsigned int val, baud;
++ u32 val, baud;
+
+ if (lpuart_is_32(sport)) {
+ val = lpuart32_read(&sport->port, UARTCTRL);
+@@ -3077,7 +3068,7 @@ static int lpuart_suspend_noirq(struct device *dev)
+ static int lpuart_resume_noirq(struct device *dev)
+ {
+ struct lpuart_port *sport = dev_get_drvdata(dev);
+- unsigned int val;
++ u32 val;
+
+ pinctrl_pm_select_default_state(dev);
+
+@@ -3097,7 +3088,8 @@ static int lpuart_resume_noirq(struct device *dev)
+ static int lpuart_suspend(struct device *dev)
+ {
+ struct lpuart_port *sport = dev_get_drvdata(dev);
+- unsigned long temp, flags;
++ u32 temp;
++ unsigned long flags;
+
+ uart_suspend_port(&lpuart_reg, &sport->port);
+
+@@ -3177,7 +3169,7 @@ static void lpuart_console_fixup(struct lpuart_port *sport)
+ * in VLLS mode, or restore console setting here.
+ */
+ if (is_imx7ulp_lpuart(sport) && lpuart_uport_is_active(sport) &&
+- console_suspend_enabled && uart_console(&sport->port)) {
++ console_suspend_enabled && uart_console(uport)) {
+
+ mutex_lock(&port->mutex);
+ memset(&termios, 0, sizeof(struct ktermios));
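
Most of the fsl_lpuart churn above is mechanical: use the uart_port already passed in instead of round-tripping through container_of() and sport->port, and size register temporaries as u8/u32 to match the register width. A condensed after-the-cleanup sketch with stubbed readb()/writeb() and illustrative register offsets:

    #include <stdint.h>
    #include <stdio.h>

    typedef uint8_t u8;

    static u8 regs[256];
    static u8 readb(unsigned int off) { return regs[off]; }
    static void writeb(u8 v, unsigned int off) { regs[off] = v; }

    #define UARTCR2      0x03
    #define UARTCR2_TIE  0x80
    #define UARTCR2_TCIE 0x40

    /* after the cleanup: operate on the port handed in, with a u8
     * temporary sized to the 8-bit register (no container_of detour) */
    static void lpuart_stop_tx(unsigned int membase)
    {
        u8 temp = readb(membase + UARTCR2);

        temp &= ~(UARTCR2_TIE | UARTCR2_TCIE);
        writeb(temp, membase + UARTCR2);
    }

    int main(void)
    {
        regs[UARTCR2] = 0xFF;
        lpuart_stop_tx(0);
        printf("UARTCR2 = 0x%02x\n", regs[UARTCR2]); /* 0x3f */
        return 0;
    }
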
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 32c8693b438b07..8c26275696df99 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -2397,10 +2397,10 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+ page_size = readl(&xhci->op_regs->page_size);
+ xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+ "Supported page size register = 0x%x", page_size);
+- i = ffs(page_size);
+- if (i < 16)
++ val = ffs(page_size) - 1;
++ if (val < 16)
+ xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+- "Supported page size of %iK", (1 << (i+12)) / 1024);
++ "Supported page size of %iK", (1 << (val + 12)) / 1024);
+ else
+ xhci_warn(xhci, "WARN: no supported page size\n");
+ /* Use 4K pages, since that's common and the minimum the HC supports */
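
ffs() returns a 1-based bit position, so the lowest set bit's index is ffs(x) - 1; the old code effectively doubled the reported page size. A runnable check of the corrected computation:

    #include <stdio.h>
    #include <strings.h>

    int main(void)
    {
        unsigned int page_size = 0x1;  /* xHC reports bit 0 => 4K pages */
        int val = ffs(page_size) - 1;  /* 1-based position -> 0-based index */

        if (val < 16)
            printf("Supported page size of %iK\n", (1 << (val + 12)) / 1024);
        else
            printf("WARN: no supported page size\n");
        return 0;
    }
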
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index 4b1668733a4bec..511dd1b224ae51 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -1433,11 +1433,10 @@ static int ucsi_ccg_probe(struct i2c_client *client)
+ uc->fw_build = CCG_FW_BUILD_NVIDIA_TEGRA;
+ else if (!strcmp(fw_name, "nvidia,gpu"))
+ uc->fw_build = CCG_FW_BUILD_NVIDIA;
++ if (!uc->fw_build)
++ dev_err(uc->dev, "failed to get FW build information\n");
+ }
+
+- if (!uc->fw_build)
+- dev_err(uc->dev, "failed to get FW build information\n");
+-
+ /* reset ccg device and initialize ucsi */
+ status = ucsi_ccg_init(uc);
+ if (status < 0) {
+diff --git a/drivers/vhost/scsi.c b/drivers/vhost/scsi.c
+index 718fa4e0b31ec2..7aeff435c1d873 100644
+--- a/drivers/vhost/scsi.c
++++ b/drivers/vhost/scsi.c
+@@ -1699,14 +1699,19 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
+ }
+ }
+
++ if (vs->vs_tpg) {
++ pr_err("vhost-scsi endpoint already set for %s.\n",
++ vs->vs_vhost_wwpn);
++ ret = -EEXIST;
++ goto out;
++ }
++
+ len = sizeof(vs_tpg[0]) * VHOST_SCSI_MAX_TARGET;
+ vs_tpg = kzalloc(len, GFP_KERNEL);
+ if (!vs_tpg) {
+ ret = -ENOMEM;
+ goto out;
+ }
+- if (vs->vs_tpg)
+- memcpy(vs_tpg, vs->vs_tpg, len);
+
+ mutex_lock(&vhost_scsi_mutex);
+ list_for_each_entry(tpg, &vhost_scsi_list, tv_tpg_list) {
+@@ -1722,12 +1727,6 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
+ tv_tport = tpg->tport;
+
+ if (!strcmp(tv_tport->tport_name, t->vhost_wwpn)) {
+- if (vs->vs_tpg && vs->vs_tpg[tpg->tport_tpgt]) {
+- mutex_unlock(&tpg->tv_tpg_mutex);
+- mutex_unlock(&vhost_scsi_mutex);
+- ret = -EEXIST;
+- goto undepend;
+- }
+ /*
+ * In order to ensure individual vhost-scsi configfs
+ * groups cannot be removed while in use by vhost ioctl,
+@@ -1774,15 +1773,15 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
+ }
+ ret = 0;
+ } else {
+- ret = -EEXIST;
++ ret = -ENODEV;
++ goto free_tpg;
+ }
+
+ /*
+- * Act as synchronize_rcu to make sure access to
+- * old vs->vs_tpg is finished.
++ * Act as synchronize_rcu to make sure requests after this point
++ * see a fully setup device.
+ */
+ vhost_scsi_flush(vs);
+- kfree(vs->vs_tpg);
+ vs->vs_tpg = vs_tpg;
+ goto out;
+
+@@ -1802,6 +1801,7 @@ vhost_scsi_set_endpoint(struct vhost_scsi *vs,
+ target_undepend_item(&tpg->se_tpg.tpg_group.cg_item);
+ }
+ }
++free_tpg:
+ kfree(vs_tpg);
+ out:
+ mutex_unlock(&vs->dev.mutex);
+@@ -1904,6 +1904,7 @@ vhost_scsi_clear_endpoint(struct vhost_scsi *vs,
+ vhost_scsi_flush(vs);
+ kfree(vs->vs_tpg);
+ vs->vs_tpg = NULL;
++ memset(vs->vs_vhost_wwpn, 0, sizeof(vs->vs_vhost_wwpn));
+ WARN_ON(vs->vs_events_nr);
+ mutex_unlock(&vs->dev.mutex);
+ return 0;
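
The endpoint ioctls are now strictly paired: a second set fails with -EEXIST while one is active, an unmatched WWPN yields -ENODEV, and clear wipes the stored WWPN. A heavily simplified userspace sketch of that set/clear state machine (vhost locking and target lookup omitted):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    static char endpoint_wwpn[64];
    static int endpoint_set;

    /* a second set while active now fails instead of merging state */
    static int set_endpoint(const char *wwpn)
    {
        if (endpoint_set) {
            fprintf(stderr, "endpoint already set for %s\n", endpoint_wwpn);
            return -EEXIST;
        }
        snprintf(endpoint_wwpn, sizeof(endpoint_wwpn), "%s", wwpn);
        endpoint_set = 1;
        return 0;
    }

    static int clear_endpoint(void)
    {
        endpoint_set = 0;
        memset(endpoint_wwpn, 0, sizeof(endpoint_wwpn));
        return 0;
    }

    int main(void)
    {
        printf("set #1: %d\n", set_endpoint("naa.500140506958d0ad"));
        printf("set #2: %d\n", set_endpoint("naa.500140506958d0ad"));
        clear_endpoint();
        printf("set #3: %d\n", set_endpoint("naa.500140506958d0ad"));
        return 0;
    }
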
+diff --git a/drivers/video/console/Kconfig b/drivers/video/console/Kconfig
+index bc31db6ef7d262..3e9f2bda67027e 100644
+--- a/drivers/video/console/Kconfig
++++ b/drivers/video/console/Kconfig
+@@ -24,7 +24,7 @@ config VGA_CONSOLE
+ Say Y.
+
+ config MDA_CONSOLE
+- depends on !M68K && !PARISC && ISA
++ depends on VGA_CONSOLE && ISA
+ tristate "MDA text console (dual-headed)"
+ help
+ Say Y here if you have an old MDA or monochrome Hercules graphics
+@@ -52,7 +52,7 @@ config DUMMY_CONSOLE
+
+ config DUMMY_CONSOLE_COLUMNS
+ int "Initial number of console screen columns"
+- depends on DUMMY_CONSOLE && !ARCH_FOOTBRIDGE
++ depends on DUMMY_CONSOLE && !(ARCH_FOOTBRIDGE && VGA_CONSOLE)
+ default 160 if PARISC
+ default 80
+ help
+@@ -62,7 +62,7 @@ config DUMMY_CONSOLE_COLUMNS
+
+ config DUMMY_CONSOLE_ROWS
+ int "Initial number of console screen rows"
+- depends on DUMMY_CONSOLE && !ARCH_FOOTBRIDGE
++ depends on DUMMY_CONSOLE && !(ARCH_FOOTBRIDGE && VGA_CONSOLE)
+ default 64 if PARISC
+ default 30 if ARM
+ default 25
+diff --git a/drivers/video/fbdev/au1100fb.c b/drivers/video/fbdev/au1100fb.c
+index 840f221607635b..6251a6b07b3a11 100644
+--- a/drivers/video/fbdev/au1100fb.c
++++ b/drivers/video/fbdev/au1100fb.c
+@@ -137,13 +137,15 @@ static int au1100fb_fb_blank(int blank_mode, struct fb_info *fbi)
+ */
+ int au1100fb_setmode(struct au1100fb_device *fbdev)
+ {
+- struct fb_info *info = &fbdev->info;
++ struct fb_info *info;
+ u32 words;
+ int index;
+
+ if (!fbdev)
+ return -EINVAL;
+
++ info = &fbdev->info;
++
+ /* Update var-dependent FB info */
+ if (panel_is_active(fbdev->panel) || panel_is_color(fbdev->panel)) {
+ if (info->var.bits_per_pixel <= 8) {
+diff --git a/drivers/video/fbdev/sm501fb.c b/drivers/video/fbdev/sm501fb.c
+index 86ecbb2d86db8d..2eb27ebf822e80 100644
+--- a/drivers/video/fbdev/sm501fb.c
++++ b/drivers/video/fbdev/sm501fb.c
+@@ -326,6 +326,13 @@ static int sm501fb_check_var(struct fb_var_screeninfo *var,
+ if (var->xres_virtual > 4096 || var->yres_virtual > 2048)
+ return -EINVAL;
+
++ /* geometry sanity checks */
++ if (var->xres + var->xoffset > var->xres_virtual)
++ return -EINVAL;
++
++ if (var->yres + var->yoffset > var->yres_virtual)
++ return -EINVAL;
++
+ /* can cope with 8,16 or 32bpp */
+
+ if (var->bits_per_pixel <= 8)
+diff --git a/drivers/w1/masters/w1-uart.c b/drivers/w1/masters/w1-uart.c
+index a31782e56ba75a..c87eea34780678 100644
+--- a/drivers/w1/masters/w1-uart.c
++++ b/drivers/w1/masters/w1-uart.c
+@@ -372,11 +372,11 @@ static int w1_uart_probe(struct serdev_device *serdev)
+ init_completion(&w1dev->rx_byte_received);
+ mutex_init(&w1dev->rx_mutex);
+
++ serdev_device_set_drvdata(serdev, w1dev);
++ serdev_device_set_client_ops(serdev, &w1_uart_serdev_ops);
+ ret = w1_uart_serdev_open(w1dev);
+ if (ret < 0)
+ return ret;
+- serdev_device_set_drvdata(serdev, w1dev);
+- serdev_device_set_client_ops(serdev, &w1_uart_serdev_ops);
+
+ return w1_add_master_device(&w1dev->bus);
+ }
+diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
+index 143ac03b7425c0..3397939fd2d5af 100644
+--- a/fs/9p/vfs_inode_dotl.c
++++ b/fs/9p/vfs_inode_dotl.c
+@@ -407,8 +407,8 @@ static int v9fs_vfs_mkdir_dotl(struct mnt_idmap *idmap,
+ err);
+ goto error;
+ }
+- v9fs_fid_add(dentry, &fid);
+ v9fs_set_create_acl(inode, fid, dacl, pacl);
++ v9fs_fid_add(dentry, &fid);
+ d_instantiate(dentry, inode);
+ err = 0;
+ inc_nlink(dir);
+diff --git a/fs/affs/file.c b/fs/affs/file.c
+index a5a861dd522301..7a71018e3f6758 100644
+--- a/fs/affs/file.c
++++ b/fs/affs/file.c
+@@ -596,7 +596,7 @@ affs_extent_file_ofs(struct inode *inode, u32 newsize)
+ BUG_ON(tmp > bsize);
+ AFFS_DATA_HEAD(bh)->ptype = cpu_to_be32(T_DATA);
+ AFFS_DATA_HEAD(bh)->key = cpu_to_be32(inode->i_ino);
+- AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx);
++ AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx + 1);
+ AFFS_DATA_HEAD(bh)->size = cpu_to_be32(tmp);
+ affs_fix_checksum(sb, bh);
+ bh->b_state &= ~(1UL << BH_New);
+@@ -724,7 +724,8 @@ static int affs_write_end_ofs(struct file *file, struct address_space *mapping,
+ tmp = min(bsize - boff, to - from);
+ BUG_ON(boff + tmp > bsize || tmp > bsize);
+ memcpy(AFFS_DATA(bh) + boff, data + from, tmp);
+- be32_add_cpu(&AFFS_DATA_HEAD(bh)->size, tmp);
++ AFFS_DATA_HEAD(bh)->size = cpu_to_be32(
++ max(boff + tmp, be32_to_cpu(AFFS_DATA_HEAD(bh)->size)));
+ affs_fix_checksum(sb, bh);
+ mark_buffer_dirty_inode(bh, inode);
+ written += tmp;
+@@ -746,7 +747,7 @@ static int affs_write_end_ofs(struct file *file, struct address_space *mapping,
+ if (buffer_new(bh)) {
+ AFFS_DATA_HEAD(bh)->ptype = cpu_to_be32(T_DATA);
+ AFFS_DATA_HEAD(bh)->key = cpu_to_be32(inode->i_ino);
+- AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx);
++ AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx + 1);
+ AFFS_DATA_HEAD(bh)->size = cpu_to_be32(bsize);
+ AFFS_DATA_HEAD(bh)->next = 0;
+ bh->b_state &= ~(1UL << BH_New);
+@@ -780,7 +781,7 @@ static int affs_write_end_ofs(struct file *file, struct address_space *mapping,
+ if (buffer_new(bh)) {
+ AFFS_DATA_HEAD(bh)->ptype = cpu_to_be32(T_DATA);
+ AFFS_DATA_HEAD(bh)->key = cpu_to_be32(inode->i_ino);
+- AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx);
++ AFFS_DATA_HEAD(bh)->sequence = cpu_to_be32(bidx + 1);
+ AFFS_DATA_HEAD(bh)->size = cpu_to_be32(tmp);
+ AFFS_DATA_HEAD(bh)->next = 0;
+ bh->b_state &= ~(1UL << BH_New);
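
All three hunks store bidx + 1, which reads as OFS data-block sequence numbers being 1-based on disk (an inference from the fix itself). A trivial sketch of the index-to-sequence mapping, assuming the standard 488-byte OFS data area per 512-byte block:

    #include <stdio.h>

    int main(void)
    {
        unsigned long long pos = 3000, bsize = 488; /* OFS: 512 - 24 header */
        unsigned int bidx = pos / bsize;            /* 0-based block index */

        printf("block index %u -> on-disk sequence %u\n", bidx, bidx + 1);
        return 0;
    }
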
+diff --git a/fs/exec.c b/fs/exec.c
+index 67513bd606c249..d6079437296383 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1246,13 +1246,12 @@ int begin_new_exec(struct linux_binprm * bprm)
+ */
+ bprm->point_of_no_return = true;
+
+- /*
+- * Make this the only thread in the thread group.
+- */
++ /* Make this the only thread in the thread group */
+ retval = de_thread(me);
+ if (retval)
+ goto out;
+-
++ /* see the comment in check_unsafe_exec() */
++ current->fs->in_exec = 0;
+ /*
+ * Cancel any io_uring activity across execve
+ */
+@@ -1514,6 +1513,8 @@ static void free_bprm(struct linux_binprm *bprm)
+ }
+ free_arg_pages(bprm);
+ if (bprm->cred) {
++ /* in case exec fails before de_thread() succeeds */
++ current->fs->in_exec = 0;
+ mutex_unlock(&current->signal->cred_guard_mutex);
+ abort_creds(bprm->cred);
+ }
+@@ -1620,6 +1621,10 @@ static void check_unsafe_exec(struct linux_binprm *bprm)
+ * suid exec because the differently privileged task
+ * will be able to manipulate the current directory, etc.
+ * It would be nice to force an unshare instead...
++ *
++ * Otherwise we set fs->in_exec = 1 to deny clone(CLONE_FS)
++ * from another sub-thread until de_thread() succeeds, this
++ * state is protected by cred_guard_mutex we hold.
+ */
+ n_fs = 1;
+ spin_lock(&p->fs->lock);
+@@ -1878,7 +1883,6 @@ static int bprm_execve(struct linux_binprm *bprm)
+
+ sched_mm_cid_after_execve(current);
+ /* execve succeeded */
+- current->fs->in_exec = 0;
+ current->in_execve = 0;
+ rseq_execve(current);
+ user_events_execve(current);
+@@ -1897,7 +1901,6 @@ static int bprm_execve(struct linux_binprm *bprm)
+ force_fatal_sig(SIGSEGV);
+
+ sched_mm_cid_after_execve(current);
+- current->fs->in_exec = 0;
+ current->in_execve = 0;
+
+ return retval;
+diff --git a/fs/exfat/fatent.c b/fs/exfat/fatent.c
+index 6f3651c6ca91ef..8df5ad6ebb10cb 100644
+--- a/fs/exfat/fatent.c
++++ b/fs/exfat/fatent.c
+@@ -265,7 +265,7 @@ int exfat_find_last_cluster(struct super_block *sb, struct exfat_chain *p_chain,
+ clu = next;
+ if (exfat_ent_get(sb, clu, &next))
+ return -EIO;
+- } while (next != EXFAT_EOF_CLUSTER);
++ } while (next != EXFAT_EOF_CLUSTER && count <= p_chain->size);
+
+ if (p_chain->size != count) {
+ exfat_fs_error(sb,
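
Bounding the walk by the chain size recorded in the directory entry means a corrupted FAT containing a cycle can no longer loop forever; the overshoot then trips the existing size check. A runnable sketch with a deliberately cyclic toy FAT:

    #include <stdio.h>

    #define EOF_CLUSTER 0xFFFFFFFFu

    /* toy FAT with a deliberate 2 <-> 3 cycle */
    static unsigned int fat[] = { 0, 0, 3, 2, EOF_CLUSTER };

    int main(void)
    {
        unsigned int clu, next = 2, count = 0;
        unsigned int expected_size = 3; /* clusters per the directory entry */

        do {
            count++;
            clu = next;
            next = fat[clu];
        } while (next != EOF_CLUSTER && count <= expected_size);

        if (count != expected_size)
            printf("corrupted chain: walked %u, expected %u\n",
                   count, expected_size);
        return 0;
    }
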
+diff --git a/fs/exfat/file.c b/fs/exfat/file.c
+index 807349d8ea0501..841a5b18e3dfdb 100644
+--- a/fs/exfat/file.c
++++ b/fs/exfat/file.c
+@@ -582,6 +582,9 @@ static ssize_t exfat_file_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ loff_t pos = iocb->ki_pos;
+ loff_t valid_size;
+
++ if (unlikely(exfat_forced_shutdown(inode->i_sb)))
++ return -EIO;
++
+ inode_lock(inode);
+
+ valid_size = ei->valid_size;
+@@ -635,6 +638,16 @@ static ssize_t exfat_file_write_iter(struct kiocb *iocb, struct iov_iter *iter)
+ return ret;
+ }
+
++static ssize_t exfat_file_read_iter(struct kiocb *iocb, struct iov_iter *iter)
++{
++ struct inode *inode = file_inode(iocb->ki_filp);
++
++ if (unlikely(exfat_forced_shutdown(inode->i_sb)))
++ return -EIO;
++
++ return generic_file_read_iter(iocb, iter);
++}
++
+ static vm_fault_t exfat_page_mkwrite(struct vm_fault *vmf)
+ {
+ int err;
+@@ -672,14 +685,26 @@ static const struct vm_operations_struct exfat_file_vm_ops = {
+
+ static int exfat_file_mmap(struct file *file, struct vm_area_struct *vma)
+ {
++ if (unlikely(exfat_forced_shutdown(file_inode(file)->i_sb)))
++ return -EIO;
++
+ file_accessed(file);
+ vma->vm_ops = &exfat_file_vm_ops;
+ return 0;
+ }
+
++static ssize_t exfat_splice_read(struct file *in, loff_t *ppos,
++ struct pipe_inode_info *pipe, size_t len, unsigned int flags)
++{
++ if (unlikely(exfat_forced_shutdown(file_inode(in)->i_sb)))
++ return -EIO;
++
++ return filemap_splice_read(in, ppos, pipe, len, flags);
++}
++
+ const struct file_operations exfat_file_operations = {
+ .llseek = generic_file_llseek,
+- .read_iter = generic_file_read_iter,
++ .read_iter = exfat_file_read_iter,
+ .write_iter = exfat_file_write_iter,
+ .unlocked_ioctl = exfat_ioctl,
+ #ifdef CONFIG_COMPAT
+@@ -687,7 +712,7 @@ const struct file_operations exfat_file_operations = {
+ #endif
+ .mmap = exfat_file_mmap,
+ .fsync = exfat_file_fsync,
+- .splice_read = filemap_splice_read,
++ .splice_read = exfat_splice_read,
+ .splice_write = iter_file_splice_write,
+ };
+
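
Read, mmap and splice now share the forced-shutdown guard that write already had, so every file I/O entry point fails fast with -EIO after shutdown. A minimal sketch of the guard-wrapper pattern:

    #include <errno.h>
    #include <stdio.h>

    static int forced_shutdown; /* stand-in for exfat_forced_shutdown(sb) */

    static long generic_read(char *buf, long len) { (void)buf; return len; }

    /* guard wrapper: check shutdown before doing any real work */
    static long exfat_read(char *buf, long len)
    {
        if (forced_shutdown)
            return -EIO;
        return generic_read(buf, len);
    }

    int main(void)
    {
        char buf[16];

        printf("read: %ld\n", exfat_read(buf, sizeof(buf)));
        forced_shutdown = 1;
        printf("read after shutdown: %ld\n", exfat_read(buf, sizeof(buf)));
        return 0;
    }
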
+diff --git a/fs/exfat/inode.c b/fs/exfat/inode.c
+index d724de8f57bf92..3801516ac50716 100644
+--- a/fs/exfat/inode.c
++++ b/fs/exfat/inode.c
+@@ -344,7 +344,8 @@ static int exfat_get_block(struct inode *inode, sector_t iblock,
+ * The block has been partially written,
+ * zero the unwritten part and map the block.
+ */
+- loff_t size, off, pos;
++ loff_t size, pos;
++ void *addr;
+
+ max_blocks = 1;
+
+@@ -355,17 +356,43 @@ static int exfat_get_block(struct inode *inode, sector_t iblock,
+ if (!bh_result->b_folio)
+ goto done;
+
++ /*
++ * No buffer_head is allocated.
++ * (1) bmap: It's enough to fill bh_result without I/O.
++ * (2) read: The unwritten part should be filled with 0
++ * If a folio does not have any buffers,
++ * let's returns -EAGAIN to fallback to
++ * per-bh IO like block_read_full_folio().
++ */
++ if (!folio_buffers(bh_result->b_folio)) {
++ err = -EAGAIN;
++ goto done;
++ }
++
+ pos = EXFAT_BLK_TO_B(iblock, sb);
+ size = ei->valid_size - pos;
+- off = pos & (PAGE_SIZE - 1);
++ addr = folio_address(bh_result->b_folio) +
++ offset_in_folio(bh_result->b_folio, pos);
++
++ /* Check if bh->b_data points to proper addr in folio */
++ if (bh_result->b_data != addr) {
++ exfat_fs_error_ratelimit(sb,
++ "b_data(%p) != folio_addr(%p)",
++ bh_result->b_data, addr);
++ err = -EINVAL;
++ goto done;
++ }
+
+- folio_set_bh(bh_result, bh_result->b_folio, off);
++ /* Read a block */
+ err = bh_read(bh_result, 0);
+ if (err < 0)
+- goto unlock_ret;
++ goto done;
++
++ /* Zero unwritten part of a block */
++ memset(bh_result->b_data + size, 0,
++ bh_result->b_size - size);
+
+- folio_zero_segment(bh_result->b_folio, off + size,
+- off + sb->s_blocksize);
++ err = 0;
+ } else {
+ /*
+ * The range has not been written, clear the mapped flag
+@@ -376,6 +403,8 @@ static int exfat_get_block(struct inode *inode, sector_t iblock,
+ }
+ done:
+ bh_result->b_size = EXFAT_BLK_TO_B(max_blocks, sb);
++ if (err < 0)
++ clear_buffer_mapped(bh_result);
+ unlock_ret:
+ mutex_unlock(&sbi->s_lock);
+ return err;
+diff --git a/fs/exfat/namei.c b/fs/exfat/namei.c
+index e47a5ddfc79b3d..7b3951951f8af1 100644
+--- a/fs/exfat/namei.c
++++ b/fs/exfat/namei.c
+@@ -639,6 +639,11 @@ static int exfat_find(struct inode *dir, struct qstr *qname,
+ info->valid_size = le64_to_cpu(ep2->dentry.stream.valid_size);
+ info->size = le64_to_cpu(ep2->dentry.stream.size);
+
++ if (unlikely(EXFAT_B_TO_CLU_ROUND_UP(info->size, sbi) > sbi->used_clusters)) {
++ exfat_fs_error(sb, "data size is invalid(%lld)", info->size);
++ return -EIO;
++ }
++
+ info->start_clu = le32_to_cpu(ep2->dentry.stream.start_clu);
+ if (!is_valid_cluster(sbi, info->start_clu) && info->size) {
+ exfat_warn(sb, "start_clu is invalid cluster(0x%x)",
+diff --git a/fs/ext4/dir.c b/fs/ext4/dir.c
+index ef6a3c8f3a9a06..b278b5703c1977 100644
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -104,6 +104,9 @@ int __ext4_check_dir_entry(const char *function, unsigned int line,
+ else if (unlikely(le32_to_cpu(de->inode) >
+ le32_to_cpu(EXT4_SB(dir->i_sb)->s_es->s_inodes_count)))
+ error_msg = "inode out of bounds";
++ else if (unlikely(next_offset == size && de->name_len == 1 &&
++ de->name[0] == '.'))
++ error_msg = "'.' directory cannot be the last in data block";
+ else
+ return 0;
+
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 940ac1a49b729e..d3795c6c0a9d8e 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -6781,22 +6781,29 @@ static int ext4_statfs_project(struct super_block *sb,
+ dquot->dq_dqb.dqb_bhardlimit);
+ limit >>= sb->s_blocksize_bits;
+
+- if (limit && buf->f_blocks > limit) {
++ if (limit) {
++ uint64_t remaining = 0;
++
+ curblock = (dquot->dq_dqb.dqb_curspace +
+ dquot->dq_dqb.dqb_rsvspace) >> sb->s_blocksize_bits;
+- buf->f_blocks = limit;
+- buf->f_bfree = buf->f_bavail =
+- (buf->f_blocks > curblock) ?
+- (buf->f_blocks - curblock) : 0;
++ if (limit > curblock)
++ remaining = limit - curblock;
++
++ buf->f_blocks = min(buf->f_blocks, limit);
++ buf->f_bfree = min(buf->f_bfree, remaining);
++ buf->f_bavail = min(buf->f_bavail, remaining);
+ }
+
+ limit = min_not_zero(dquot->dq_dqb.dqb_isoftlimit,
+ dquot->dq_dqb.dqb_ihardlimit);
+- if (limit && buf->f_files > limit) {
+- buf->f_files = limit;
+- buf->f_ffree =
+- (buf->f_files > dquot->dq_dqb.dqb_curinodes) ?
+- (buf->f_files - dquot->dq_dqb.dqb_curinodes) : 0;
++ if (limit) {
++ uint64_t remaining = 0;
++
++ if (limit > dquot->dq_dqb.dqb_curinodes)
++ remaining = limit - dquot->dq_dqb.dqb_curinodes;
++
++ buf->f_files = min(buf->f_files, limit);
++ buf->f_ffree = min(buf->f_ffree, remaining);
+ }
+
+ spin_unlock(&dquot->dq_dqb_lock);
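The rework computes the space still available under the quota limit first and then clamps each statfs field independently, so the clamp applies unconditionally instead of only when f_blocks exceeded the limit. A minimal standalone sketch of the same arithmetic (plain uint64_t stand-ins for the statfs and quota fields; the numbers are hypothetical):

#include <stdint.h>
#include <stdio.h>

static uint64_t min_u64(uint64_t a, uint64_t b)
{
	return a < b ? a : b;
}

int main(void)
{
	/* Filesystem-wide view vs. a project quota that is already exceeded. */
	uint64_t f_blocks = 1000, f_bfree = 900, f_bavail = 850;
	uint64_t limit = 500, curblock = 600;

	uint64_t remaining = 0;
	if (limit > curblock)
		remaining = limit - curblock;	/* 0 here: over quota */

	f_blocks = min_u64(f_blocks, limit);	/* report at most the limit */
	f_bfree  = min_u64(f_bfree, remaining);
	f_bavail = min_u64(f_bavail, remaining);

	printf("%llu %llu %llu\n",
	       (unsigned long long)f_blocks,
	       (unsigned long long)f_bfree,
	       (unsigned long long)f_bavail);	/* 500 0 0 */
	return 0;
}

Guarding the subtraction with an explicit 'limit > curblock' test keeps 'remaining' at zero for an over-quota project rather than producing an enormous wrapped value.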
+diff --git a/fs/fuse/dax.c b/fs/fuse/dax.c
+index 12ef91d170bb30..7faf1af59d5d84 100644
+--- a/fs/fuse/dax.c
++++ b/fs/fuse/dax.c
+@@ -681,7 +681,6 @@ static int __fuse_dax_break_layouts(struct inode *inode, bool *retry,
+ 0, 0, fuse_wait_dax_page(inode));
+ }
+
+-/* dmap_end == 0 leads to unmapping of whole file */
+ int fuse_dax_break_layouts(struct inode *inode, u64 dmap_start,
+ u64 dmap_end)
+ {
+diff --git a/fs/fuse/dir.c b/fs/fuse/dir.c
+index bd6e675023c622..a1e86ec07c38b5 100644
+--- a/fs/fuse/dir.c
++++ b/fs/fuse/dir.c
+@@ -1936,7 +1936,7 @@ int fuse_do_setattr(struct mnt_idmap *idmap, struct dentry *dentry,
+ if (FUSE_IS_DAX(inode) && is_truncate) {
+ filemap_invalidate_lock(mapping);
+ fault_blocked = true;
+- err = fuse_dax_break_layouts(inode, 0, 0);
++ err = fuse_dax_break_layouts(inode, 0, -1);
+ if (err) {
+ filemap_invalidate_unlock(mapping);
+ return err;
+diff --git a/fs/fuse/file.c b/fs/fuse/file.c
+index e20d91d0ae558c..f597f7e68e5014 100644
+--- a/fs/fuse/file.c
++++ b/fs/fuse/file.c
+@@ -253,7 +253,7 @@ static int fuse_open(struct inode *inode, struct file *file)
+
+ if (dax_truncate) {
+ filemap_invalidate_lock(inode->i_mapping);
+- err = fuse_dax_break_layouts(inode, 0, 0);
++ err = fuse_dax_break_layouts(inode, 0, -1);
+ if (err)
+ goto out_inode_unlock;
+ }
+@@ -3146,7 +3146,7 @@ static long fuse_file_fallocate(struct file *file, int mode, loff_t offset,
+ inode_lock(inode);
+ if (block_faults) {
+ filemap_invalidate_lock(inode->i_mapping);
+- err = fuse_dax_break_layouts(inode, 0, 0);
++ err = fuse_dax_break_layouts(inode, 0, -1);
+ if (err)
+ goto out;
+ }
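All three callers now pass -1 rather than 0 for dmap_end. Since the parameter is u64, -1 converts to U64_MAX, so "break layouts over the whole file" becomes an ordinary maximal range [0, U64_MAX] instead of a magic 0 that the callee had to special-case (the comment documenting that special case is dropped above). The conversion itself is well-defined:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t dmap_end = -1;	/* unsigned conversion: wraps to UINT64_MAX */

	printf("%" PRIu64 "\n", dmap_end);	/* 18446744073709551615 */
	return 0;
}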
+diff --git a/fs/hostfs/hostfs.h b/fs/hostfs/hostfs.h
+index 8b39c15c408ccd..15b2f094d36ef8 100644
+--- a/fs/hostfs/hostfs.h
++++ b/fs/hostfs/hostfs.h
+@@ -60,7 +60,7 @@ struct hostfs_stat {
+ unsigned int uid;
+ unsigned int gid;
+ unsigned long long size;
+- struct hostfs_timespec atime, mtime, ctime;
++ struct hostfs_timespec atime, mtime, ctime, btime;
+ unsigned int blksize;
+ unsigned long long blocks;
+ struct {
+diff --git a/fs/hostfs/hostfs_kern.c b/fs/hostfs/hostfs_kern.c
+index 94f3cc42c74035..a16a7df0766cd1 100644
+--- a/fs/hostfs/hostfs_kern.c
++++ b/fs/hostfs/hostfs_kern.c
+@@ -33,6 +33,7 @@ struct hostfs_inode_info {
+ struct inode vfs_inode;
+ struct mutex open_mutex;
+ dev_t dev;
++ struct hostfs_timespec btime;
+ };
+
+ static inline struct hostfs_inode_info *HOSTFS_I(struct inode *inode)
+@@ -550,6 +551,7 @@ static int hostfs_inode_set(struct inode *ino, void *data)
+ }
+
+ HOSTFS_I(ino)->dev = dev;
++ HOSTFS_I(ino)->btime = st->btime;
+ ino->i_ino = st->ino;
+ ino->i_mode = st->mode;
+ return hostfs_inode_update(ino, st);
+@@ -560,7 +562,10 @@ static int hostfs_inode_test(struct inode *inode, void *data)
+ const struct hostfs_stat *st = data;
+ dev_t dev = MKDEV(st->dev.maj, st->dev.min);
+
+- return inode->i_ino == st->ino && HOSTFS_I(inode)->dev == dev;
++ return inode->i_ino == st->ino && HOSTFS_I(inode)->dev == dev &&
++ (inode->i_mode & S_IFMT) == (st->mode & S_IFMT) &&
++ HOSTFS_I(inode)->btime.tv_sec == st->btime.tv_sec &&
++ HOSTFS_I(inode)->btime.tv_nsec == st->btime.tv_nsec;
+ }
+
+ static struct inode *hostfs_iget(struct super_block *sb, char *name)
+diff --git a/fs/hostfs/hostfs_user.c b/fs/hostfs/hostfs_user.c
+index 97e9c40a944883..3bcd9f35e70b22 100644
+--- a/fs/hostfs/hostfs_user.c
++++ b/fs/hostfs/hostfs_user.c
+@@ -18,39 +18,48 @@
+ #include "hostfs.h"
+ #include <utime.h>
+
+-static void stat64_to_hostfs(const struct stat64 *buf, struct hostfs_stat *p)
++static void statx_to_hostfs(const struct statx *buf, struct hostfs_stat *p)
+ {
+- p->ino = buf->st_ino;
+- p->mode = buf->st_mode;
+- p->nlink = buf->st_nlink;
+- p->uid = buf->st_uid;
+- p->gid = buf->st_gid;
+- p->size = buf->st_size;
+- p->atime.tv_sec = buf->st_atime;
+- p->atime.tv_nsec = 0;
+- p->ctime.tv_sec = buf->st_ctime;
+- p->ctime.tv_nsec = 0;
+- p->mtime.tv_sec = buf->st_mtime;
+- p->mtime.tv_nsec = 0;
+- p->blksize = buf->st_blksize;
+- p->blocks = buf->st_blocks;
+- p->rdev.maj = os_major(buf->st_rdev);
+- p->rdev.min = os_minor(buf->st_rdev);
+- p->dev.maj = os_major(buf->st_dev);
+- p->dev.min = os_minor(buf->st_dev);
++ p->ino = buf->stx_ino;
++ p->mode = buf->stx_mode;
++ p->nlink = buf->stx_nlink;
++ p->uid = buf->stx_uid;
++ p->gid = buf->stx_gid;
++ p->size = buf->stx_size;
++ p->atime.tv_sec = buf->stx_atime.tv_sec;
++ p->atime.tv_nsec = buf->stx_atime.tv_nsec;
++ p->ctime.tv_sec = buf->stx_ctime.tv_sec;
++ p->ctime.tv_nsec = buf->stx_ctime.tv_nsec;
++ p->mtime.tv_sec = buf->stx_mtime.tv_sec;
++ p->mtime.tv_nsec = buf->stx_mtime.tv_nsec;
++ if (buf->stx_mask & STATX_BTIME) {
++ p->btime.tv_sec = buf->stx_btime.tv_sec;
++ p->btime.tv_nsec = buf->stx_btime.tv_nsec;
++ } else {
++ memset(&p->btime, 0, sizeof(p->btime));
++ }
++ p->blksize = buf->stx_blksize;
++ p->blocks = buf->stx_blocks;
++ p->rdev.maj = buf->stx_rdev_major;
++ p->rdev.min = buf->stx_rdev_minor;
++ p->dev.maj = buf->stx_dev_major;
++ p->dev.min = buf->stx_dev_minor;
+ }
+
+ int stat_file(const char *path, struct hostfs_stat *p, int fd)
+ {
+- struct stat64 buf;
++ struct statx buf;
++ int flags = AT_SYMLINK_NOFOLLOW;
+
+ if (fd >= 0) {
+- if (fstat64(fd, &buf) < 0)
+- return -errno;
+- } else if (lstat64(path, &buf) < 0) {
+- return -errno;
++ flags |= AT_EMPTY_PATH;
++ path = "";
+ }
+- stat64_to_hostfs(&buf, p);
++
++ if ((statx(fd, path, flags, STATX_BASIC_STATS | STATX_BTIME, &buf)) < 0)
++ return -errno;
++
++ statx_to_hostfs(&buf, p);
+ return 0;
+ }
+
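With the statx() conversion, one call covers both the open-fd case and the by-path case, and the birth time arrives only when the host filesystem can supply it, hence the STATX_BTIME mask test above. A standalone sketch of the same calling convention (Linux with glibc 2.28 or newer; the path is just an example):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>

/* Same pattern as the patched stat_file(): a valid fd is queried via
 * AT_EMPTY_PATH with an empty path string; otherwise the path is
 * stat'ed without following symlinks. */
static int stat_one(int fd, const char *path, struct statx *stx)
{
	int flags = AT_SYMLINK_NOFOLLOW;

	if (fd >= 0) {
		flags |= AT_EMPTY_PATH;
		path = "";
	}
	return statx(fd >= 0 ? fd : AT_FDCWD, path, flags,
		     STATX_BASIC_STATS | STATX_BTIME, stx);
}

int main(void)
{
	struct statx stx;

	if (stat_one(-1, "/etc/hostname", &stx) == 0) {
		/* stx_mask reports which fields the kernel actually filled. */
		if (stx.stx_mask & STATX_BTIME)
			printf("btime: %lld\n", (long long)stx.stx_btime.tv_sec);
		else
			puts("btime not available");
	}
	return 0;
}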
+diff --git a/fs/isofs/dir.c b/fs/isofs/dir.c
+index eb2f8273e6f15e..09df40b612fbf2 100644
+--- a/fs/isofs/dir.c
++++ b/fs/isofs/dir.c
+@@ -147,7 +147,8 @@ static int do_isofs_readdir(struct inode *inode, struct file *file,
+ de = tmpde;
+ }
+ /* Basic sanity check, whether name doesn't exceed dir entry */
+- if (de_len < de->name_len[0] +
++ if (de_len < sizeof(struct iso_directory_record) ||
++ de_len < de->name_len[0] +
+ sizeof(struct iso_directory_record)) {
+ printk(KERN_NOTICE "iso9660: Corrupted directory entry"
+ " in block %lu of inode %lu\n", block,
+diff --git a/fs/jfs/jfs_dtree.c b/fs/jfs/jfs_dtree.c
+index 8f85177f284b5a..93db6eec446556 100644
+--- a/fs/jfs/jfs_dtree.c
++++ b/fs/jfs/jfs_dtree.c
+@@ -117,7 +117,8 @@ do { \
+ if (!(RC)) { \
+ if (((P)->header.nextindex > \
+ (((BN) == 0) ? DTROOTMAXSLOT : (P)->header.maxslot)) || \
+- ((BN) && ((P)->header.maxslot > DTPAGEMAXSLOT))) { \
++ ((BN) && (((P)->header.maxslot > DTPAGEMAXSLOT) || \
++ ((P)->header.stblindex >= DTPAGEMAXSLOT)))) { \
+ BT_PUTPAGE(MP); \
+ jfs_error((IP)->i_sb, \
+ "DT_GETPAGE: dtree page corrupt\n"); \
+diff --git a/fs/jfs/xattr.c b/fs/jfs/xattr.c
+index 24afbae87225a7..11d7f74d207be0 100644
+--- a/fs/jfs/xattr.c
++++ b/fs/jfs/xattr.c
+@@ -559,11 +559,16 @@ static int ea_get(struct inode *inode, struct ea_buffer *ea_buf, int min_size)
+
+ size_check:
+ if (EALIST_SIZE(ea_buf->xattr) != ea_size) {
+- int size = clamp_t(int, ea_size, 0, EALIST_SIZE(ea_buf->xattr));
+-
+- printk(KERN_ERR "ea_get: invalid extended attribute\n");
+- print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1,
+- ea_buf->xattr, size, 1);
++ if (unlikely(EALIST_SIZE(ea_buf->xattr) > INT_MAX)) {
++ printk(KERN_ERR "ea_get: extended attribute size too large: %u > INT_MAX\n",
++ EALIST_SIZE(ea_buf->xattr));
++ } else {
++ int size = clamp_t(int, ea_size, 0, EALIST_SIZE(ea_buf->xattr));
++
++ printk(KERN_ERR "ea_get: invalid extended attribute\n");
++ print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1,
++ ea_buf->xattr, size, 1);
++ }
+ ea_release(inode, ea_buf);
+ rc = -EIO;
+ goto clean_up;
+diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
+index b1a66a6e6bc2d6..917b7edc34ef57 100644
+--- a/fs/netfs/direct_read.c
++++ b/fs/netfs/direct_read.c
+@@ -108,9 +108,9 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)
+ * Perform a read to an application buffer, bypassing the pagecache and the
+ * local disk cache.
+ */
+-static int netfs_unbuffered_read(struct netfs_io_request *rreq, bool sync)
++static ssize_t netfs_unbuffered_read(struct netfs_io_request *rreq, bool sync)
+ {
+- int ret;
++ ssize_t ret;
+
+ _enter("R=%x %llx-%llx",
+ rreq->debug_id, rreq->start, rreq->start + rreq->len - 1);
+@@ -149,7 +149,7 @@ static int netfs_unbuffered_read(struct netfs_io_request *rreq, bool sync)
+ }
+
+ out:
+- _leave(" = %d", ret);
++ _leave(" = %zd", ret);
+ return ret;
+ }
+
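The int-to-ssize_t widening matters because the function's success value is a transferred byte count, and a single direct I/O can move more than INT_MAX bytes. A trivial LP64 illustration of the truncation the old prototype risked (int 32-bit, ssize_t 64-bit):

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
	ssize_t done = 3LL * 1024 * 1024 * 1024;	/* 3 GiB transferred */
	int truncated = (int)done;	/* implementation-defined narrowing */

	printf("ssize_t: %zd\n", done);		/* 3221225472 */
	printf("int:     %d\n", truncated);	/* typically -1073741824 */
	return 0;
}

The matching _leave() format change from %d to %zd keeps the trace output honest about the wider type.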
+diff --git a/fs/nfs/delegation.c b/fs/nfs/delegation.c
+index 4db912f5623055..325ba0663a6de2 100644
+--- a/fs/nfs/delegation.c
++++ b/fs/nfs/delegation.c
+@@ -79,6 +79,7 @@ static void nfs_mark_return_delegation(struct nfs_server *server,
+ struct nfs_delegation *delegation)
+ {
+ set_bit(NFS_DELEGATION_RETURN, &delegation->flags);
++ set_bit(NFS4SERV_DELEGRETURN, &server->delegation_flags);
+ set_bit(NFS4CLNT_DELEGRETURN, &server->nfs_client->cl_state);
+ }
+
+@@ -330,14 +331,16 @@ nfs_start_delegation_return(struct nfs_inode *nfsi)
+ }
+
+ static void nfs_abort_delegation_return(struct nfs_delegation *delegation,
+- struct nfs_client *clp, int err)
++ struct nfs_server *server, int err)
+ {
+-
+ spin_lock(&delegation->lock);
+ clear_bit(NFS_DELEGATION_RETURNING, &delegation->flags);
+ if (err == -EAGAIN) {
+ set_bit(NFS_DELEGATION_RETURN_DELAYED, &delegation->flags);
+- set_bit(NFS4CLNT_DELEGRETURN_DELAYED, &clp->cl_state);
++ set_bit(NFS4SERV_DELEGRETURN_DELAYED,
++ &server->delegation_flags);
++ set_bit(NFS4CLNT_DELEGRETURN_DELAYED,
++ &server->nfs_client->cl_state);
+ }
+ spin_unlock(&delegation->lock);
+ }
+@@ -547,7 +550,7 @@ int nfs_inode_set_delegation(struct inode *inode, const struct cred *cred,
+ */
+ static int nfs_end_delegation_return(struct inode *inode, struct nfs_delegation *delegation, int issync)
+ {
+- struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
++ struct nfs_server *server = NFS_SERVER(inode);
+ unsigned int mode = O_WRONLY | O_RDWR;
+ int err = 0;
+
+@@ -569,11 +572,11 @@ static int nfs_end_delegation_return(struct inode *inode, struct nfs_delegation
+ /*
+ * Guard against state recovery
+ */
+- err = nfs4_wait_clnt_recover(clp);
++ err = nfs4_wait_clnt_recover(server->nfs_client);
+ }
+
+ if (err) {
+- nfs_abort_delegation_return(delegation, clp, err);
++ nfs_abort_delegation_return(delegation, server, err);
+ goto out;
+ }
+
+@@ -590,17 +593,6 @@ static bool nfs_delegation_need_return(struct nfs_delegation *delegation)
+
+ if (test_and_clear_bit(NFS_DELEGATION_RETURN, &delegation->flags))
+ ret = true;
+- else if (test_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags)) {
+- struct inode *inode;
+-
+- spin_lock(&delegation->lock);
+- inode = delegation->inode;
+- if (inode && list_empty(&NFS_I(inode)->open_files))
+- ret = true;
+- spin_unlock(&delegation->lock);
+- }
+- if (ret)
+- clear_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags);
+ if (test_bit(NFS_DELEGATION_RETURNING, &delegation->flags) ||
+ test_bit(NFS_DELEGATION_RETURN_DELAYED, &delegation->flags) ||
+ test_bit(NFS_DELEGATION_REVOKED, &delegation->flags))
+@@ -619,6 +611,9 @@ static int nfs_server_return_marked_delegations(struct nfs_server *server,
+ struct nfs_delegation *place_holder_deleg = NULL;
+ int err = 0;
+
++ if (!test_and_clear_bit(NFS4SERV_DELEGRETURN,
++ &server->delegation_flags))
++ return 0;
+ restart:
+ /*
+ * To avoid quadratic looping we hold a reference
+@@ -670,6 +665,7 @@ static int nfs_server_return_marked_delegations(struct nfs_server *server,
+ cond_resched();
+ if (!err)
+ goto restart;
++ set_bit(NFS4SERV_DELEGRETURN, &server->delegation_flags);
+ set_bit(NFS4CLNT_DELEGRETURN, &server->nfs_client->cl_state);
+ goto out;
+ }
+@@ -684,6 +680,9 @@ static bool nfs_server_clear_delayed_delegations(struct nfs_server *server)
+ struct nfs_delegation *d;
+ bool ret = false;
+
++ if (!test_and_clear_bit(NFS4SERV_DELEGRETURN_DELAYED,
++ &server->delegation_flags))
++ goto out;
+ list_for_each_entry_rcu (d, &server->delegations, super_list) {
+ if (!test_bit(NFS_DELEGATION_RETURN_DELAYED, &d->flags))
+ continue;
+@@ -691,6 +690,7 @@ static bool nfs_server_clear_delayed_delegations(struct nfs_server *server)
+ clear_bit(NFS_DELEGATION_RETURN_DELAYED, &d->flags);
+ ret = true;
+ }
++out:
+ return ret;
+ }
+
+@@ -878,11 +878,25 @@ int nfs4_inode_make_writeable(struct inode *inode)
+ return nfs4_inode_return_delegation(inode);
+ }
+
+-static void nfs_mark_return_if_closed_delegation(struct nfs_server *server,
+- struct nfs_delegation *delegation)
++static void
++nfs_mark_return_if_closed_delegation(struct nfs_server *server,
++ struct nfs_delegation *delegation)
+ {
+- set_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags);
+- set_bit(NFS4CLNT_DELEGRETURN, &server->nfs_client->cl_state);
++ struct inode *inode;
++
++ if (test_bit(NFS_DELEGATION_RETURN, &delegation->flags) ||
++ test_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags))
++ return;
++ spin_lock(&delegation->lock);
++ inode = delegation->inode;
++ if (!inode)
++ goto out;
++ if (list_empty(&NFS_I(inode)->open_files))
++ nfs_mark_return_delegation(server, delegation);
++ else
++ set_bit(NFS_DELEGATION_RETURN_IF_CLOSED, &delegation->flags);
++out:
++ spin_unlock(&delegation->lock);
+ }
+
+ static bool nfs_server_mark_return_all_delegations(struct nfs_server *server)
+@@ -1276,6 +1290,7 @@ static void nfs_mark_test_expired_delegation(struct nfs_server *server,
+ return;
+ clear_bit(NFS_DELEGATION_NEED_RECLAIM, &delegation->flags);
+ set_bit(NFS_DELEGATION_TEST_EXPIRED, &delegation->flags);
++ set_bit(NFS4SERV_DELEGATION_EXPIRED, &server->delegation_flags);
+ set_bit(NFS4CLNT_DELEGATION_EXPIRED, &server->nfs_client->cl_state);
+ }
+
+@@ -1354,6 +1369,9 @@ static int nfs_server_reap_expired_delegations(struct nfs_server *server,
+ nfs4_stateid stateid;
+ unsigned long gen = ++server->delegation_gen;
+
++ if (!test_and_clear_bit(NFS4SERV_DELEGATION_EXPIRED,
++ &server->delegation_flags))
++ return 0;
+ restart:
+ rcu_read_lock();
+ list_for_each_entry_rcu(delegation, &server->delegations, super_list) {
+@@ -1383,6 +1401,9 @@ static int nfs_server_reap_expired_delegations(struct nfs_server *server,
+ goto restart;
+ }
+ nfs_inode_mark_test_expired_delegation(server,inode);
++ set_bit(NFS4SERV_DELEGATION_EXPIRED, &server->delegation_flags);
++ set_bit(NFS4CLNT_DELEGATION_EXPIRED,
++ &server->nfs_client->cl_state);
+ iput(inode);
+ return -EAGAIN;
+ }
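The new NFS4SERV_* bits act as per-server dirty flags: each scan routine now bails out immediately unless something on that particular server was marked since the last pass, instead of walking every server's delegation list on every state-manager wakeup. A generic sketch of the test-and-clear gate (C11 atomics standing in for the kernel's bitops; all names are illustrative):

#include <stdatomic.h>
#include <stdio.h>

#define DELEGRETURN_BIT (1u << 0)

static atomic_uint delegation_flags;

static void mark_return_needed(void)
{
	atomic_fetch_or(&delegation_flags, DELEGRETURN_BIT);
}

static int return_marked_delegations(void)
{
	/* Clear the bit and look at its previous value: scan only if
	 * someone set it since the last pass. */
	if (!(atomic_fetch_and(&delegation_flags, ~DELEGRETURN_BIT) &
	      DELEGRETURN_BIT))
		return 0;	/* nothing marked on this server: skip */

	puts("scanning delegation list");
	return 1;
}

int main(void)
{
	printf("%d\n", return_marked_delegations());	/* 0: skipped */
	mark_return_needed();
	printf("%d\n", return_marked_delegations());	/* 1: scanned */
	return 0;
}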
+diff --git a/fs/nfs/nfs4xdr.c b/fs/nfs/nfs4xdr.c
+index e8ac3f615f932e..71f45cc0ca74d1 100644
+--- a/fs/nfs/nfs4xdr.c
++++ b/fs/nfs/nfs4xdr.c
+@@ -82,9 +82,8 @@ static int decode_layoutget(struct xdr_stream *xdr, struct rpc_rqst *req,
+ * we currently use size 2 (u64) out of (NFS4_OPAQUE_LIMIT >> 2)
+ */
+ #define pagepad_maxsz (1)
+-#define open_owner_id_maxsz (1 + 2 + 1 + 1 + 2)
+-#define lock_owner_id_maxsz (1 + 1 + 4)
+-#define decode_lockowner_maxsz (1 + XDR_QUADLEN(IDMAP_NAMESZ))
++#define open_owner_id_maxsz (2 + 1 + 2 + 2)
++#define lock_owner_id_maxsz (2 + 1 + 2)
+ #define compound_encode_hdr_maxsz (3 + (NFS4_MAXTAGLEN >> 2))
+ #define compound_decode_hdr_maxsz (3 + (NFS4_MAXTAGLEN >> 2))
+ #define op_encode_hdr_maxsz (1)
+@@ -185,7 +184,7 @@ static int decode_layoutget(struct xdr_stream *xdr, struct rpc_rqst *req,
+ #define encode_claim_null_maxsz (1 + nfs4_name_maxsz)
+ #define encode_open_maxsz (op_encode_hdr_maxsz + \
+ 2 + encode_share_access_maxsz + 2 + \
+- open_owner_id_maxsz + \
++ 1 + open_owner_id_maxsz + \
+ encode_opentype_maxsz + \
+ encode_claim_null_maxsz)
+ #define decode_space_limit_maxsz (3)
+@@ -255,13 +254,14 @@ static int decode_layoutget(struct xdr_stream *xdr, struct rpc_rqst *req,
+ #define encode_link_maxsz (op_encode_hdr_maxsz + \
+ nfs4_name_maxsz)
+ #define decode_link_maxsz (op_decode_hdr_maxsz + decode_change_info_maxsz)
+-#define encode_lockowner_maxsz (7)
++#define encode_lockowner_maxsz (2 + 1 + lock_owner_id_maxsz)
++
+ #define encode_lock_maxsz (op_encode_hdr_maxsz + \
+ 7 + \
+ 1 + encode_stateid_maxsz + 1 + \
+ encode_lockowner_maxsz)
+ #define decode_lock_denied_maxsz \
+- (8 + decode_lockowner_maxsz)
++ (2 + 2 + 1 + 2 + 1 + lock_owner_id_maxsz)
+ #define decode_lock_maxsz (op_decode_hdr_maxsz + \
+ decode_lock_denied_maxsz)
+ #define encode_lockt_maxsz (op_encode_hdr_maxsz + 5 + \
+@@ -617,7 +617,7 @@ static int decode_layoutget(struct xdr_stream *xdr, struct rpc_rqst *req,
+ encode_lockowner_maxsz)
+ #define NFS4_dec_release_lockowner_sz \
+ (compound_decode_hdr_maxsz + \
+- decode_lockowner_maxsz)
++ decode_release_lockowner_maxsz)
+ #define NFS4_enc_access_sz (compound_encode_hdr_maxsz + \
+ encode_sequence_maxsz + \
+ encode_putfh_maxsz + \
+@@ -1412,7 +1412,7 @@ static inline void encode_openhdr(struct xdr_stream *xdr, const struct nfs_opena
+ __be32 *p;
+ /*
+ * opcode 4, seqid 4, share_access 4, share_deny 4, clientid 8, ownerlen 4,
+- * owner 4 = 32
++ * owner 28
+ */
+ encode_nfs4_seqid(xdr, arg->seqid);
+ encode_share_access(xdr, arg->share_access);
+@@ -5077,7 +5077,7 @@ static int decode_link(struct xdr_stream *xdr, struct nfs4_change_info *cinfo)
+ /*
+ * We create the owner, so we know a proper owner.id length is 4.
+ */
+-static int decode_lock_denied (struct xdr_stream *xdr, struct file_lock *fl)
++static int decode_lock_denied(struct xdr_stream *xdr, struct file_lock *fl)
+ {
+ uint64_t offset, length, clientid;
+ __be32 *p;
+diff --git a/fs/nfs/sysfs.c b/fs/nfs/sysfs.c
+index 7b59a40d40c061..784f7c1d003bfc 100644
+--- a/fs/nfs/sysfs.c
++++ b/fs/nfs/sysfs.c
+@@ -14,6 +14,7 @@
+ #include <linux/rcupdate.h>
+ #include <linux/lockd/lockd.h>
+
++#include "internal.h"
+ #include "nfs4_fs.h"
+ #include "netns.h"
+ #include "sysfs.h"
+@@ -228,6 +229,25 @@ static void shutdown_client(struct rpc_clnt *clnt)
+ rpc_cancel_tasks(clnt, -EIO, shutdown_match_client, NULL);
+ }
+
++/*
++ * Shut down the nfs_client only once all the superblocks
++ * have been shut down.
++ */
++static void shutdown_nfs_client(struct nfs_client *clp)
++{
++ struct nfs_server *server;
++ rcu_read_lock();
++ list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link) {
++ if (!(server->flags & NFS_MOUNT_SHUTDOWN)) {
++ rcu_read_unlock();
++ return;
++ }
++ }
++ rcu_read_unlock();
++ nfs_mark_client_ready(clp, -EIO);
++ shutdown_client(clp->cl_rpcclient);
++}
++
+ static ssize_t
+ shutdown_show(struct kobject *kobj, struct kobj_attribute *attr,
+ char *buf)
+@@ -259,7 +279,6 @@ shutdown_store(struct kobject *kobj, struct kobj_attribute *attr,
+
+ server->flags |= NFS_MOUNT_SHUTDOWN;
+ shutdown_client(server->client);
+- shutdown_client(server->nfs_client->cl_rpcclient);
+
+ if (!IS_ERR(server->client_acl))
+ shutdown_client(server->client_acl);
+@@ -267,6 +286,7 @@ shutdown_store(struct kobject *kobj, struct kobj_attribute *attr,
+ if (server->nlm_host)
+ shutdown_client(server->nlm_host->h_rpcclnt);
+ out:
++ shutdown_nfs_client(server->nfs_client);
+ return count;
+ }
+
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index 82ae2b85d393cb..8ff8db09a1e066 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -579,8 +579,10 @@ static struct nfs_page *nfs_lock_and_join_requests(struct folio *folio)
+
+ while (!nfs_lock_request(head)) {
+ ret = nfs_wait_on_request(head);
+- if (ret < 0)
++ if (ret < 0) {
++ nfs_release_request(head);
+ return ERR_PTR(ret);
++ }
+ }
+
+ /* Ensure that nobody removed the request before we locked it */
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 57f8818aa47c5f..5e81c819c3846a 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -1057,6 +1057,12 @@ static struct nfs4_ol_stateid * nfs4_alloc_open_stateid(struct nfs4_client *clp)
+ return openlockstateid(stid);
+ }
+
++/*
++ * As the sc_free callback of a delegation stateid, this may be called
++ * by nfs4_put_stid in nfsd_break_one_deleg.
++ * Since nfsd_break_one_deleg is called with the flc->flc_lock held,
++ * this function must never sleep.
++ */
+ static void nfs4_free_deleg(struct nfs4_stid *stid)
+ {
+ struct nfs4_delegation *dp = delegstateid(stid);
+@@ -5269,6 +5275,7 @@ static const struct nfsd4_callback_ops nfsd4_cb_recall_ops = {
+
+ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
+ {
++ bool queued;
+ /*
+ * We're assuming the state code never drops its reference
+ * without first removing the lease. Since we're in this lease
+@@ -5277,7 +5284,10 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
+ * we know it's safe to take a reference.
+ */
+ refcount_inc(&dp->dl_stid.sc_count);
+- WARN_ON_ONCE(!nfsd4_run_cb(&dp->dl_recall));
++ queued = nfsd4_run_cb(&dp->dl_recall);
++ WARN_ON_ONCE(!queued);
++ if (!queued)
++ nfs4_put_stid(&dp->dl_stid);
+ }
+
+ /* Called from break_lease() with flc_lock held. */
+@@ -6689,14 +6699,19 @@ deleg_reaper(struct nfsd_net *nn)
+ spin_lock(&nn->client_lock);
+ list_for_each_safe(pos, next, &nn->client_lru) {
+ clp = list_entry(pos, struct nfs4_client, cl_lru);
+- if (clp->cl_state != NFSD4_ACTIVE ||
+- list_empty(&clp->cl_delegations) ||
+- atomic_read(&clp->cl_delegs_in_recall) ||
+- test_bit(NFSD4_CLIENT_CB_RECALL_ANY, &clp->cl_flags) ||
+- (ktime_get_boottime_seconds() -
+- clp->cl_ra_time < 5)) {
++
++ if (clp->cl_state != NFSD4_ACTIVE)
++ continue;
++ if (list_empty(&clp->cl_delegations))
++ continue;
++ if (atomic_read(&clp->cl_delegs_in_recall))
++ continue;
++ if (test_bit(NFSD4_CLIENT_CB_RECALL_ANY, &clp->cl_flags))
++ continue;
++ if (ktime_get_boottime_seconds() - clp->cl_ra_time < 5)
++ continue;
++ if (clp->cl_cb_state != NFSD4_CB_UP)
+ continue;
+- }
+ list_add(&clp->cl_ra_cblist, &cblist);
+
+ /* release in nfsd4_cb_recall_any_release */
+@@ -6880,7 +6895,7 @@ nfsd4_lookup_stateid(struct nfsd4_compound_state *cstate,
+ */
+ statusmask |= SC_STATUS_REVOKED;
+
+- statusmask |= SC_STATUS_ADMIN_REVOKED;
++ statusmask |= SC_STATUS_ADMIN_REVOKED | SC_STATUS_FREEABLE;
+
+ if (ZERO_STATEID(stateid) || ONE_STATEID(stateid) ||
+ CLOSE_STATEID(stateid))
+@@ -7535,9 +7550,7 @@ nfsd4_delegreturn(struct svc_rqst *rqstp, struct nfsd4_compound_state *cstate,
+ if ((status = fh_verify(rqstp, &cstate->current_fh, S_IFREG, 0)))
+ return status;
+
+- status = nfsd4_lookup_stateid(cstate, stateid, SC_TYPE_DELEG,
+- SC_STATUS_REVOKED | SC_STATUS_FREEABLE,
+- &s, nn);
++ status = nfsd4_lookup_stateid(cstate, stateid, SC_TYPE_DELEG, SC_STATUS_REVOKED, &s, nn);
+ if (status)
+ goto out;
+ dp = delegstateid(s);
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index 3adbc05ebaac4c..e83629f396044b 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1959,6 +1959,7 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info)
+ struct svc_serv *serv;
+ LIST_HEAD(permsocks);
+ struct nfsd_net *nn;
++ bool delete = false;
+ int err, rem;
+
+ mutex_lock(&nfsd_mutex);
+@@ -2019,34 +2020,28 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info)
+ }
+ }
+
+- /* For now, no removing old sockets while server is running */
+- if (serv->sv_nrthreads && !list_empty(&permsocks)) {
++ /*
++ * If there are listener transports remaining on the permsocks list,
++ * it means we were asked to remove a listener.
++ */
++ if (!list_empty(&permsocks)) {
+ list_splice_init(&permsocks, &serv->sv_permsocks);
+- spin_unlock_bh(&serv->sv_lock);
+- err = -EBUSY;
+- goto out_unlock_mtx;
++ delete = true;
+ }
++ spin_unlock_bh(&serv->sv_lock);
+
+- /* Close the remaining sockets on the permsocks list */
+- while (!list_empty(&permsocks)) {
+- xprt = list_first_entry(&permsocks, struct svc_xprt, xpt_list);
+- list_move(&xprt->xpt_list, &serv->sv_permsocks);
+-
+- /*
+- * Newly-created sockets are born with the BUSY bit set. Clear
+- * it if there are no threads, since nothing can pick it up
+- * in that case.
+- */
+- if (!serv->sv_nrthreads)
+- clear_bit(XPT_BUSY, &xprt->xpt_flags);
+-
+- set_bit(XPT_CLOSE, &xprt->xpt_flags);
+- spin_unlock_bh(&serv->sv_lock);
+- svc_xprt_close(xprt);
+- spin_lock_bh(&serv->sv_lock);
++ /* Do not remove listeners while there are active threads. */
++ if (serv->sv_nrthreads) {
++ err = -EBUSY;
++ goto out_unlock_mtx;
+ }
+
+- spin_unlock_bh(&serv->sv_lock);
++ /*
++ * Since we can't delete an arbitrary llist entry, destroy the
++ * remaining listeners and recreate the list.
++ */
++ if (delete)
++ svc_xprt_destroy_all(serv, net);
+
+ /* walk list of addrs again, open any that still don't exist */
+ nlmsg_for_each_attr(attr, info->nlhdr, GENL_HDRLEN, rem) {
+@@ -2073,6 +2068,9 @@ int nfsd_nl_listener_set_doit(struct sk_buff *skb, struct genl_info *info)
+
+ xprt = svc_find_listener(serv, xcl_name, net, sa);
+ if (xprt) {
++ if (delete)
++ WARN_ONCE(1, "Transport type=%s already exists\n",
++ xcl_name);
+ svc_xprt_put(xprt);
+ continue;
+ }
+diff --git a/fs/nfsd/vfs.c b/fs/nfsd/vfs.c
+index d6d4f2a0e89826..ca29a5e1600fd9 100644
+--- a/fs/nfsd/vfs.c
++++ b/fs/nfsd/vfs.c
+@@ -1935,9 +1935,17 @@ nfsd_rename(struct svc_rqst *rqstp, struct svc_fh *ffhp, char *fname, int flen,
+ return err;
+ }
+
+-/*
+- * Unlink a file or directory
+- * N.B. After this call fhp needs an fh_put
++/**
++ * nfsd_unlink - remove a directory entry
++ * @rqstp: RPC transaction context
++ * @fhp: the file handle of the parent directory to be modified
++ * @type: enforced file type of the object to be removed
++ * @fname: the name of the directory entry to be removed
++ * @flen: length of @fname in octets
++ *
++ * After this call fhp needs an fh_put.
++ *
++ * Returns a generic NFS status code in network byte-order.
+ */
+ __be32
+ nfsd_unlink(struct svc_rqst *rqstp, struct svc_fh *fhp, int type,
+@@ -2011,15 +2019,17 @@ nfsd_unlink(struct svc_rqst *rqstp, struct svc_fh *fhp, int type,
+ fh_drop_write(fhp);
+ out_nfserr:
+ if (host_err == -EBUSY) {
+- /* name is mounted-on. There is no perfect
+- * error status.
++ /*
++ * See RFC 8881 Section 18.25.4 para 4: NFSv4 REMOVE
++ * wants a status unique to the object type.
+ */
+- err = nfserr_file_open;
+- } else {
+- err = nfserrno(host_err);
++ if (type != S_IFDIR)
++ err = nfserr_file_open;
++ else
++ err = nfserr_acces;
+ }
+ out:
+- return err;
++ return err != nfs_ok ? err : nfserrno(host_err);
+ out_unlock:
+ inode_unlock(dirp);
+ goto out_drop_write;
+diff --git a/fs/ntfs3/attrib.c b/fs/ntfs3/attrib.c
+index da1a9312e61a0e..dd459316529e8e 100644
+--- a/fs/ntfs3/attrib.c
++++ b/fs/ntfs3/attrib.c
+@@ -2663,8 +2663,9 @@ int attr_set_compress(struct ntfs_inode *ni, bool compr)
+ attr->nres.run_off = cpu_to_le16(run_off);
+ }
+
+- /* Update data attribute flags. */
++ /* Update attribute flags. */
+ if (compr) {
++ attr->flags &= ~ATTR_FLAG_SPARSED;
+ attr->flags |= ATTR_FLAG_COMPRESSED;
+ attr->nres.c_unit = NTFS_LZNT_CUNIT;
+ } else {
+diff --git a/fs/ntfs3/file.c b/fs/ntfs3/file.c
+index f704ceef953948..7976ac4611c8d0 100644
+--- a/fs/ntfs3/file.c
++++ b/fs/ntfs3/file.c
+@@ -101,8 +101,26 @@ int ntfs_fileattr_set(struct mnt_idmap *idmap, struct dentry *dentry,
+ /* Allowed to change compression for empty files and for directories only. */
+ if (!is_dedup(ni) && !is_encrypted(ni) &&
+ (S_ISREG(inode->i_mode) || S_ISDIR(inode->i_mode))) {
+- /* Change compress state. */
+- int err = ni_set_compress(inode, flags & FS_COMPR_FL);
++ int err = 0;
++ struct address_space *mapping = inode->i_mapping;
++
++ /* Write out all data and wait. */
++ filemap_invalidate_lock(mapping);
++ err = filemap_write_and_wait(mapping);
++
++ if (err >= 0) {
++ /* Change compress state. */
++ bool compr = flags & FS_COMPR_FL;
++ err = ni_set_compress(inode, compr);
++
++ /* For files change a_ops too. */
++ if (!err)
++ mapping->a_ops = compr ? &ntfs_aops_cmpr :
++ &ntfs_aops;
++ }
++
++ filemap_invalidate_unlock(mapping);
++
+ if (err)
+ return err;
+ }
+diff --git a/fs/ntfs3/frecord.c b/fs/ntfs3/frecord.c
+index 175662acd5eaf0..608634361a302f 100644
+--- a/fs/ntfs3/frecord.c
++++ b/fs/ntfs3/frecord.c
+@@ -3431,10 +3431,12 @@ int ni_set_compress(struct inode *inode, bool compr)
+ }
+
+ ni->std_fa = std->fa;
+- if (compr)
++ if (compr) {
++ std->fa &= ~FILE_ATTRIBUTE_SPARSE_FILE;
+ std->fa |= FILE_ATTRIBUTE_COMPRESSED;
+- else
++ } else {
+ std->fa &= ~FILE_ATTRIBUTE_COMPRESSED;
++ }
+
+ if (ni->std_fa != std->fa) {
+ ni->std_fa = std->fa;
+diff --git a/fs/ntfs3/index.c b/fs/ntfs3/index.c
+index 7eb9fae22f8da6..78d20e4baa2c9a 100644
+--- a/fs/ntfs3/index.c
++++ b/fs/ntfs3/index.c
+@@ -618,7 +618,7 @@ static bool index_hdr_check(const struct INDEX_HDR *hdr, u32 bytes)
+ u32 off = le32_to_cpu(hdr->de_off);
+
+ if (!IS_ALIGNED(off, 8) || tot > bytes || end > tot ||
+- off + sizeof(struct NTFS_DE) > end) {
++ size_add(off, sizeof(struct NTFS_DE)) > end) {
+ /* incorrect index buffer. */
+ return false;
+ }
+@@ -736,7 +736,7 @@ static struct NTFS_DE *hdr_find_e(const struct ntfs_index *indx,
+ if (end > total)
+ return NULL;
+
+- if (off + sizeof(struct NTFS_DE) > end)
++ if (size_add(off, sizeof(struct NTFS_DE)) > end)
+ return NULL;
+
+ e = Add2Ptr(hdr, off);
+diff --git a/fs/ntfs3/ntfs.h b/fs/ntfs3/ntfs.h
+index 241f2ffdd9201a..1ff13b6f961326 100644
+--- a/fs/ntfs3/ntfs.h
++++ b/fs/ntfs3/ntfs.h
+@@ -717,7 +717,7 @@ static inline struct NTFS_DE *hdr_first_de(const struct INDEX_HDR *hdr)
+ struct NTFS_DE *e;
+ u16 esize;
+
+- if (de_off >= used || de_off + sizeof(struct NTFS_DE) > used )
++ if (de_off >= used || size_add(de_off, sizeof(struct NTFS_DE)) > used)
+ return NULL;
+
+ e = Add2Ptr(hdr, de_off);
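size_add() (include/linux/overflow.h) saturates at SIZE_MAX instead of wrapping, so a hostile on-disk offset near the top of the type can no longer make the sum wrap back below the bound and slip past the check; the wrap is a practical concern mainly where the addition happens in a 32-bit type. A rough userspace equivalent of the saturating helper and the check it protects:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Saturating add, in the spirit of the kernel's size_add(). */
static size_t sat_add(size_t a, size_t b)
{
	size_t sum;

	if (__builtin_add_overflow(a, b, &sum))
		return SIZE_MAX;	/* any "> bound" test now fails safely */
	return sum;
}

int main(void)
{
	size_t used = 4096;		/* bytes valid in the index header */
	size_t de_off = SIZE_MAX - 4;	/* hostile on-disk offset */
	size_t entry = 16;		/* stand-in for sizeof(struct NTFS_DE) */

	/* Naive sum wraps to 11, so 11 <= 4096 reports a bogus "ok". */
	bool naive_ok = de_off + entry <= used;
	/* Saturated sum is SIZE_MAX, which is correctly rejected. */
	bool safe_ok = sat_add(de_off, entry) <= used;

	printf("naive=%d safe=%d\n", naive_ok, safe_ok);	/* naive=1 safe=0 */
	return 0;
}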
+diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c
+index ea9127ba320844..5d9388b44e5be7 100644
+--- a/fs/ocfs2/alloc.c
++++ b/fs/ocfs2/alloc.c
+@@ -1803,6 +1803,14 @@ static int __ocfs2_find_path(struct ocfs2_caching_info *ci,
+
+ el = root_el;
+ while (el->l_tree_depth) {
++ if (unlikely(le16_to_cpu(el->l_tree_depth) >= OCFS2_MAX_PATH_DEPTH)) {
++ ocfs2_error(ocfs2_metadata_cache_get_super(ci),
++ "Owner %llu has invalid tree depth %u in extent list\n",
++ (unsigned long long)ocfs2_metadata_cache_owner(ci),
++ le16_to_cpu(el->l_tree_depth));
++ ret = -EROFS;
++ goto out;
++ }
+ if (le16_to_cpu(el->l_next_free_rec) == 0) {
+ ocfs2_error(ocfs2_metadata_cache_get_super(ci),
+ "Owner %llu has empty extent list at depth %u\n",
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index b31283d81c52ea..a2541f5204af06 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -417,7 +417,7 @@ static const struct file_operations proc_pid_cmdline_ops = {
+ #ifdef CONFIG_KALLSYMS
+ /*
+ * Provides a wchan file via kallsyms in a proper one-value-per-file format.
+- * Returns the resolved symbol. If that fails, simply return the address.
++ * Returns the resolved symbol to user space.
+ */
+ static int proc_pid_wchan(struct seq_file *m, struct pid_namespace *ns,
+ struct pid *pid, struct task_struct *task)
+diff --git a/fs/smb/client/cifsacl.c b/fs/smb/client/cifsacl.c
+index ebe9a7d7c70e86..e36f0e2d7d21e2 100644
+--- a/fs/smb/client/cifsacl.c
++++ b/fs/smb/client/cifsacl.c
+@@ -763,7 +763,7 @@ static void parse_dacl(struct smb_acl *pdacl, char *end_of_acl,
+ struct cifs_fattr *fattr, bool mode_from_special_sid)
+ {
+ int i;
+- int num_aces = 0;
++ u16 num_aces = 0;
+ int acl_size;
+ char *acl_base;
+ struct smb_ace **ppace;
+@@ -778,14 +778,15 @@ static void parse_dacl(struct smb_acl *pdacl, char *end_of_acl,
+ }
+
+ /* validate that we do not go past end of acl */
+- if (end_of_acl < (char *)pdacl + le16_to_cpu(pdacl->size)) {
++ if (end_of_acl < (char *)pdacl + sizeof(struct smb_acl) ||
++ end_of_acl < (char *)pdacl + le16_to_cpu(pdacl->size)) {
+ cifs_dbg(VFS, "ACL too small to parse DACL\n");
+ return;
+ }
+
+ cifs_dbg(NOISY, "DACL revision %d size %d num aces %d\n",
+ le16_to_cpu(pdacl->revision), le16_to_cpu(pdacl->size),
+- le32_to_cpu(pdacl->num_aces));
++ le16_to_cpu(pdacl->num_aces));
+
+ /* reset rwx permissions for user/group/other.
+ Also, if num_aces is 0 i.e. DACL has no ACEs,
+@@ -795,12 +796,15 @@ static void parse_dacl(struct smb_acl *pdacl, char *end_of_acl,
+ acl_base = (char *)pdacl;
+ acl_size = sizeof(struct smb_acl);
+
+- num_aces = le32_to_cpu(pdacl->num_aces);
++ num_aces = le16_to_cpu(pdacl->num_aces);
+ if (num_aces > 0) {
+ umode_t denied_mode = 0;
+
+- if (num_aces > ULONG_MAX / sizeof(struct smb_ace *))
++ if (num_aces > (le16_to_cpu(pdacl->size) - sizeof(struct smb_acl)) /
++ (offsetof(struct smb_ace, sid) +
++ offsetof(struct smb_sid, sub_auth) + sizeof(__le16)))
+ return;
++
+ ppace = kmalloc_array(num_aces, sizeof(struct smb_ace *),
+ GFP_KERNEL);
+ if (!ppace)
+@@ -937,12 +941,12 @@ unsigned int setup_special_user_owner_ACE(struct smb_ace *pntace)
+ static void populate_new_aces(char *nacl_base,
+ struct smb_sid *pownersid,
+ struct smb_sid *pgrpsid,
+- __u64 *pnmode, u32 *pnum_aces, u16 *pnsize,
++ __u64 *pnmode, u16 *pnum_aces, u16 *pnsize,
+ bool modefromsid,
+ bool posix)
+ {
+ __u64 nmode;
+- u32 num_aces = 0;
++ u16 num_aces = 0;
+ u16 nsize = 0;
+ __u64 user_mode;
+ __u64 group_mode;
+@@ -1050,7 +1054,7 @@ static __u16 replace_sids_and_copy_aces(struct smb_acl *pdacl, struct smb_acl *p
+ u16 size = 0;
+ struct smb_ace *pntace = NULL;
+ char *acl_base = NULL;
+- u32 src_num_aces = 0;
++ u16 src_num_aces = 0;
+ u16 nsize = 0;
+ struct smb_ace *pnntace = NULL;
+ char *nacl_base = NULL;
+@@ -1058,7 +1062,7 @@ static __u16 replace_sids_and_copy_aces(struct smb_acl *pdacl, struct smb_acl *p
+
+ acl_base = (char *)pdacl;
+ size = sizeof(struct smb_acl);
+- src_num_aces = le32_to_cpu(pdacl->num_aces);
++ src_num_aces = le16_to_cpu(pdacl->num_aces);
+
+ nacl_base = (char *)pndacl;
+ nsize = sizeof(struct smb_acl);
+@@ -1090,11 +1094,11 @@ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl,
+ u16 size = 0;
+ struct smb_ace *pntace = NULL;
+ char *acl_base = NULL;
+- u32 src_num_aces = 0;
++ u16 src_num_aces = 0;
+ u16 nsize = 0;
+ struct smb_ace *pnntace = NULL;
+ char *nacl_base = NULL;
+- u32 num_aces = 0;
++ u16 num_aces = 0;
+ bool new_aces_set = false;
+
+ /* Assuming that pndacl and pnmode are never NULL */
+@@ -1112,7 +1116,7 @@ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl,
+
+ acl_base = (char *)pdacl;
+ size = sizeof(struct smb_acl);
+- src_num_aces = le32_to_cpu(pdacl->num_aces);
++ src_num_aces = le16_to_cpu(pdacl->num_aces);
+
+ /* Retain old ACEs which we can retain */
+ for (i = 0; i < src_num_aces; ++i) {
+@@ -1158,7 +1162,7 @@ static int set_chmod_dacl(struct smb_acl *pdacl, struct smb_acl *pndacl,
+ }
+
+ finalize_dacl:
+- pndacl->num_aces = cpu_to_le32(num_aces);
++ pndacl->num_aces = cpu_to_le16(num_aces);
+ pndacl->size = cpu_to_le16(nsize);
+
+ return 0;
+@@ -1293,7 +1297,7 @@ static int build_sec_desc(struct smb_ntsd *pntsd, struct smb_ntsd *pnntsd,
+ dacloffset ? dacl_ptr->revision : cpu_to_le16(ACL_REVISION);
+
+ ndacl_ptr->size = cpu_to_le16(0);
+- ndacl_ptr->num_aces = cpu_to_le32(0);
++ ndacl_ptr->num_aces = cpu_to_le16(0);
+
+ rc = set_chmod_dacl(dacl_ptr, ndacl_ptr, owner_sid_ptr, group_sid_ptr,
+ pnmode, mode_from_sid, posix);
+@@ -1651,7 +1655,7 @@ id_mode_to_cifs_acl(struct inode *inode, const char *path, __u64 *pnmode,
+ dacl_ptr = (struct smb_acl *)((char *)pntsd + dacloffset);
+ if (mode_from_sid)
+ nsecdesclen +=
+- le32_to_cpu(dacl_ptr->num_aces) * sizeof(struct smb_ace);
++ le16_to_cpu(dacl_ptr->num_aces) * sizeof(struct smb_ace);
+ else /* cifsacl */
+ nsecdesclen += le16_to_cpu(dacl_ptr->size);
+ }
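The widened validation above does two related things: num_aces now comes from a 16-bit field (the on-wire ACL header stores AceCount in 16 bits, which is what the __le32 to __le16-plus-reserved change in smbacl.h reflects), and the count is capped at the largest number of minimum-size ACEs that could physically fit in the DACL, so a forged header can neither oversize the kmalloc_array() nor make the parser walk past the buffer. A sketch of that cap (the 8- and 12-byte sizes are illustrative stand-ins, not the real SMB layout):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t hdr_size = 8;			/* ACL header stand-in */
	uint16_t min_ace = 12;			/* minimum ACE stand-in */
	uint16_t acl_size = 8 + 3 * 12;		/* room for exactly 3 ACEs */

	uint16_t claimed = 40000;		/* hostile num_aces */
	uint16_t max_aces = (acl_size - hdr_size) / min_ace;

	if (claimed > max_aces) {
		printf("reject: %u aces cannot fit in %u bytes (max %u)\n",
		       (unsigned)claimed, (unsigned)acl_size,
		       (unsigned)max_aces);
		return 1;
	}
	return 0;
}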
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index d327f31b317db9..8b8475b4e26277 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -316,6 +316,7 @@ cifs_abort_connection(struct TCP_Server_Info *server)
+ server->ssocket->flags);
+ sock_release(server->ssocket);
+ server->ssocket = NULL;
++ put_net(cifs_net_ns(server));
+ }
+ server->sequence_number = 0;
+ server->session_estab = false;
+@@ -3138,8 +3139,12 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ /*
+ * Grab netns reference for the socket.
+ *
+- * It'll be released here, on error, or in clean_demultiplex_info() upon server
+- * teardown.
++ * This reference will be released in several situations:
++ * - In the failure path before the cifsd thread is started.
++ * - In all places where server->ssocket is released, it is
++ * also set to NULL.
++ * - Ultimately in clean_demultiplex_info(), during the final
++ * teardown.
+ */
+ get_net(net);
+
+@@ -3155,10 +3160,8 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ }
+
+ rc = bind_socket(server);
+- if (rc < 0) {
+- put_net(cifs_net_ns(server));
++ if (rc < 0)
+ return rc;
+- }
+
+ /*
+ * Eventually check for other socket options to change from
+@@ -3204,9 +3207,6 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ if (sport == htons(RFC1001_PORT))
+ rc = ip_rfc1001_connect(server);
+
+- if (rc < 0)
+- put_net(cifs_net_ns(server));
+-
+ return rc;
+ }
+
+diff --git a/fs/smb/common/smbacl.h b/fs/smb/common/smbacl.h
+index 6a60698fc6f0f4..a624ec9e4a1443 100644
+--- a/fs/smb/common/smbacl.h
++++ b/fs/smb/common/smbacl.h
+@@ -107,7 +107,8 @@ struct smb_sid {
+ struct smb_acl {
+ __le16 revision; /* revision level */
+ __le16 size;
+- __le32 num_aces;
++ __le16 num_aces;
++ __le16 reserved;
+ } __attribute__((packed));
+
+ struct smb_ace {
+diff --git a/fs/smb/server/auth.c b/fs/smb/server/auth.c
+index 8892177e500f19..95449751368314 100644
+--- a/fs/smb/server/auth.c
++++ b/fs/smb/server/auth.c
+@@ -1016,9 +1016,9 @@ static int ksmbd_get_encryption_key(struct ksmbd_work *work, __u64 ses_id,
+
+ ses_enc_key = enc ? sess->smb3encryptionkey :
+ sess->smb3decryptionkey;
+- if (enc)
+- ksmbd_user_session_get(sess);
+ memcpy(key, ses_enc_key, SMB3_ENC_DEC_KEY_SIZE);
++ if (!enc)
++ ksmbd_user_session_put(sess);
+
+ return 0;
+ }
+@@ -1217,7 +1217,7 @@ int ksmbd_crypt_message(struct ksmbd_work *work, struct kvec *iov,
+ free_sg:
+ kfree(sg);
+ free_req:
+- kfree(req);
++ aead_request_free(req);
+ free_ctx:
+ ksmbd_release_crypto_ctx(ctx);
+ return rc;
+diff --git a/fs/smb/server/connection.h b/fs/smb/server/connection.h
+index 91c2318639e766..14620e147dda57 100644
+--- a/fs/smb/server/connection.h
++++ b/fs/smb/server/connection.h
+@@ -27,6 +27,7 @@ enum {
+ KSMBD_SESS_EXITING,
+ KSMBD_SESS_NEED_RECONNECT,
+ KSMBD_SESS_NEED_NEGOTIATE,
++ KSMBD_SESS_NEED_SETUP,
+ KSMBD_SESS_RELEASING
+ };
+
+@@ -187,6 +188,11 @@ static inline bool ksmbd_conn_need_negotiate(struct ksmbd_conn *conn)
+ return READ_ONCE(conn->status) == KSMBD_SESS_NEED_NEGOTIATE;
+ }
+
++static inline bool ksmbd_conn_need_setup(struct ksmbd_conn *conn)
++{
++ return READ_ONCE(conn->status) == KSMBD_SESS_NEED_SETUP;
++}
++
+ static inline bool ksmbd_conn_need_reconnect(struct ksmbd_conn *conn)
+ {
+ return READ_ONCE(conn->status) == KSMBD_SESS_NEED_RECONNECT;
+@@ -217,6 +223,11 @@ static inline void ksmbd_conn_set_need_negotiate(struct ksmbd_conn *conn)
+ WRITE_ONCE(conn->status, KSMBD_SESS_NEED_NEGOTIATE);
+ }
+
++static inline void ksmbd_conn_set_need_setup(struct ksmbd_conn *conn)
++{
++ WRITE_ONCE(conn->status, KSMBD_SESS_NEED_SETUP);
++}
++
+ static inline void ksmbd_conn_set_need_reconnect(struct ksmbd_conn *conn)
+ {
+ WRITE_ONCE(conn->status, KSMBD_SESS_NEED_RECONNECT);
+diff --git a/fs/smb/server/mgmt/user_session.c b/fs/smb/server/mgmt/user_session.c
+index d960ddcbba1657..f83daf72f877e2 100644
+--- a/fs/smb/server/mgmt/user_session.c
++++ b/fs/smb/server/mgmt/user_session.c
+@@ -181,7 +181,7 @@ static void ksmbd_expire_session(struct ksmbd_conn *conn)
+ down_write(&sessions_table_lock);
+ down_write(&conn->session_lock);
+ xa_for_each(&conn->sessions, id, sess) {
+- if (atomic_read(&sess->refcnt) == 0 &&
++ if (atomic_read(&sess->refcnt) <= 1 &&
+ (sess->state != SMB2_SESSION_VALID ||
+ time_after(jiffies,
+ sess->last_active + SMB2_SESSION_TIMEOUT))) {
+@@ -230,7 +230,11 @@ void ksmbd_sessions_deregister(struct ksmbd_conn *conn)
+ if (!ksmbd_chann_del(conn, sess) &&
+ xa_empty(&sess->ksmbd_chann_list)) {
+ hash_del(&sess->hlist);
+- ksmbd_session_destroy(sess);
++ down_write(&conn->session_lock);
++ xa_erase(&conn->sessions, sess->id);
++ up_write(&conn->session_lock);
++ if (atomic_dec_and_test(&sess->refcnt))
++ ksmbd_session_destroy(sess);
+ }
+ }
+ }
+@@ -249,13 +253,30 @@ void ksmbd_sessions_deregister(struct ksmbd_conn *conn)
+ if (xa_empty(&sess->ksmbd_chann_list)) {
+ xa_erase(&conn->sessions, sess->id);
+ hash_del(&sess->hlist);
+- ksmbd_session_destroy(sess);
++ if (atomic_dec_and_test(&sess->refcnt))
++ ksmbd_session_destroy(sess);
+ }
+ }
+ up_write(&conn->session_lock);
+ up_write(&sessions_table_lock);
+ }
+
++bool is_ksmbd_session_in_connection(struct ksmbd_conn *conn,
++ unsigned long long id)
++{
++ struct ksmbd_session *sess;
++
++ down_read(&conn->session_lock);
++ sess = xa_load(&conn->sessions, id);
++ if (sess) {
++ up_read(&conn->session_lock);
++ return true;
++ }
++ up_read(&conn->session_lock);
++
++ return false;
++}
++
+ struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn,
+ unsigned long long id)
+ {
+@@ -309,8 +330,8 @@ void ksmbd_user_session_put(struct ksmbd_session *sess)
+
+ if (atomic_read(&sess->refcnt) <= 0)
+ WARN_ON(1);
+- else
+- atomic_dec(&sess->refcnt);
++ else if (atomic_dec_and_test(&sess->refcnt))
++ ksmbd_session_destroy(sess);
+ }
+
+ struct preauth_session *ksmbd_preauth_session_alloc(struct ksmbd_conn *conn,
+@@ -353,13 +374,13 @@ void destroy_previous_session(struct ksmbd_conn *conn,
+ ksmbd_all_conn_set_status(id, KSMBD_SESS_NEED_RECONNECT);
+ err = ksmbd_conn_wait_idle_sess_id(conn, id);
+ if (err) {
+- ksmbd_all_conn_set_status(id, KSMBD_SESS_NEED_NEGOTIATE);
++ ksmbd_all_conn_set_status(id, KSMBD_SESS_NEED_SETUP);
+ goto out;
+ }
+
+ ksmbd_destroy_file_table(&prev_sess->file_table);
+ prev_sess->state = SMB2_SESSION_EXPIRED;
+- ksmbd_all_conn_set_status(id, KSMBD_SESS_NEED_NEGOTIATE);
++ ksmbd_all_conn_set_status(id, KSMBD_SESS_NEED_SETUP);
+ ksmbd_launch_ksmbd_durable_scavenger();
+ out:
+ up_write(&conn->session_lock);
+@@ -417,7 +438,7 @@ static struct ksmbd_session *__session_create(int protocol)
+ xa_init(&sess->rpc_handle_list);
+ sess->sequence_number = 1;
+ rwlock_init(&sess->tree_conns_lock);
+- atomic_set(&sess->refcnt, 1);
++ atomic_set(&sess->refcnt, 2);
+
+ ret = __init_smb2_session(sess);
+ if (ret)
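Raising the initial refcount from 1 to 2 gives the session table its own reference in addition to the creator's, and the matching put-side change (ksmbd_user_session_put() now destroys on the final decrement) means teardown runs exactly once, on whichever of "deregister from the table" and "last user put" happens second. A generic sketch of the pattern (C11 atomics; the names are illustrative, not the ksmbd API):

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct session {
	atomic_int refcnt;
};

static struct session *session_create(void)
{
	struct session *s = malloc(sizeof(*s));

	if (!s)
		return NULL;
	/* One reference for the lookup table, one for the caller. */
	atomic_init(&s->refcnt, 2);
	return s;
}

static void session_put(struct session *s)
{
	/* fetch_sub returns the old value: 1 means we dropped the last ref. */
	if (atomic_fetch_sub(&s->refcnt, 1) == 1) {
		puts("destroying session");
		free(s);
	}
}

int main(void)
{
	struct session *s = session_create();

	if (!s)
		return 1;
	session_put(s);	/* e.g. the request that created it finishes */
	session_put(s);	/* e.g. the table deregisters it: frees here */
	return 0;
}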
+diff --git a/fs/smb/server/mgmt/user_session.h b/fs/smb/server/mgmt/user_session.h
+index c1c4b20bd5c6cf..f21348381d5984 100644
+--- a/fs/smb/server/mgmt/user_session.h
++++ b/fs/smb/server/mgmt/user_session.h
+@@ -87,6 +87,8 @@ void ksmbd_session_destroy(struct ksmbd_session *sess);
+ struct ksmbd_session *ksmbd_session_lookup_slowpath(unsigned long long id);
+ struct ksmbd_session *ksmbd_session_lookup(struct ksmbd_conn *conn,
+ unsigned long long id);
++bool is_ksmbd_session_in_connection(struct ksmbd_conn *conn,
++ unsigned long long id);
+ int ksmbd_session_register(struct ksmbd_conn *conn,
+ struct ksmbd_session *sess);
+ void ksmbd_sessions_deregister(struct ksmbd_conn *conn);
+diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c
+index 592fe665973a87..deacf78b4400cc 100644
+--- a/fs/smb/server/oplock.c
++++ b/fs/smb/server/oplock.c
+@@ -724,8 +724,8 @@ static int smb2_oplock_break_noti(struct oplock_info *opinfo)
+ work->conn = conn;
+ work->sess = opinfo->sess;
+
++ ksmbd_conn_r_count_inc(conn);
+ if (opinfo->op_state == OPLOCK_ACK_WAIT) {
+- ksmbd_conn_r_count_inc(conn);
+ INIT_WORK(&work->work, __smb2_oplock_break_noti);
+ ksmbd_queue_work(work);
+
+@@ -833,8 +833,8 @@ static int smb2_lease_break_noti(struct oplock_info *opinfo)
+ work->conn = conn;
+ work->sess = opinfo->sess;
+
++ ksmbd_conn_r_count_inc(conn);
+ if (opinfo->op_state == OPLOCK_ACK_WAIT) {
+- ksmbd_conn_r_count_inc(conn);
+ INIT_WORK(&work->work, __smb2_lease_break_noti);
+ ksmbd_queue_work(work);
+ wait_for_break_ack(opinfo);
+@@ -1505,6 +1505,10 @@ struct lease_ctx_info *parse_lease_state(void *open_req)
+ if (sizeof(struct lease_context_v2) == le32_to_cpu(cc->DataLength)) {
+ struct create_lease_v2 *lc = (struct create_lease_v2 *)cc;
+
++ if (le16_to_cpu(cc->DataOffset) + le32_to_cpu(cc->DataLength) <
++ sizeof(struct create_lease_v2) - 4)
++ return NULL;
++
+ memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE);
+ lreq->req_state = lc->lcontext.LeaseState;
+ lreq->flags = lc->lcontext.LeaseFlags;
+@@ -1517,6 +1521,10 @@ struct lease_ctx_info *parse_lease_state(void *open_req)
+ } else {
+ struct create_lease *lc = (struct create_lease *)cc;
+
++ if (le16_to_cpu(cc->DataOffset) + le32_to_cpu(cc->DataLength) <
++ sizeof(struct create_lease))
++ return NULL;
++
+ memcpy(lreq->lease_key, lc->lcontext.LeaseKey, SMB2_LEASE_KEY_SIZE);
+ lreq->req_state = lc->lcontext.LeaseState;
+ lreq->flags = lc->lcontext.LeaseFlags;
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 8464261d763876..7fea86edc71763 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1247,7 +1247,7 @@ int smb2_handle_negotiate(struct ksmbd_work *work)
+ }
+
+ conn->srv_sec_mode = le16_to_cpu(rsp->SecurityMode);
+- ksmbd_conn_set_need_negotiate(conn);
++ ksmbd_conn_set_need_setup(conn);
+
+ err_out:
+ if (rc)
+@@ -1268,6 +1268,9 @@ static int alloc_preauth_hash(struct ksmbd_session *sess,
+ if (sess->Preauth_HashValue)
+ return 0;
+
++ if (!conn->preauth_info)
++ return -ENOMEM;
++
+ sess->Preauth_HashValue = kmemdup(conn->preauth_info->Preauth_HashValue,
+ PREAUTH_HASHVALUE_SIZE, GFP_KERNEL);
+ if (!sess->Preauth_HashValue)
+@@ -1671,6 +1674,11 @@ int smb2_sess_setup(struct ksmbd_work *work)
+
+ ksmbd_debug(SMB, "Received request for session setup\n");
+
++ if (!ksmbd_conn_need_setup(conn) && !ksmbd_conn_good(conn)) {
++ work->send_no_response = 1;
++ return rc;
++ }
++
+ WORK_BUFFERS(work, req, rsp);
+
+ rsp->StructureSize = cpu_to_le16(9);
+@@ -1704,44 +1712,38 @@ int smb2_sess_setup(struct ksmbd_work *work)
+
+ if (conn->dialect != sess->dialect) {
+ rc = -EINVAL;
+- ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (!(req->hdr.Flags & SMB2_FLAGS_SIGNED)) {
+ rc = -EINVAL;
+- ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (strncmp(conn->ClientGUID, sess->ClientGUID,
+ SMB2_CLIENT_GUID_SIZE)) {
+ rc = -ENOENT;
+- ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (sess->state == SMB2_SESSION_IN_PROGRESS) {
+ rc = -EACCES;
+- ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+
+ if (sess->state == SMB2_SESSION_EXPIRED) {
+ rc = -EFAULT;
+- ksmbd_user_session_put(sess);
+ goto out_err;
+ }
+- ksmbd_user_session_put(sess);
+
+ if (ksmbd_conn_need_reconnect(conn)) {
+ rc = -EFAULT;
++ ksmbd_user_session_put(sess);
+ sess = NULL;
+ goto out_err;
+ }
+
+- sess = ksmbd_session_lookup(conn, sess_id);
+- if (!sess) {
++ if (is_ksmbd_session_in_connection(conn, sess_id)) {
+ rc = -EACCES;
+ goto out_err;
+ }
+@@ -1907,10 +1909,12 @@ int smb2_sess_setup(struct ksmbd_work *work)
+
+ sess->last_active = jiffies;
+ sess->state = SMB2_SESSION_EXPIRED;
++ ksmbd_user_session_put(sess);
++ work->sess = NULL;
+ if (try_delay) {
+ ksmbd_conn_set_need_reconnect(conn);
+ ssleep(5);
+- ksmbd_conn_set_need_negotiate(conn);
++ ksmbd_conn_set_need_setup(conn);
+ }
+ }
+ smb2_set_err_rsp(work);
+@@ -2234,14 +2238,15 @@ int smb2_session_logoff(struct ksmbd_work *work)
+ return -ENOENT;
+ }
+
+- ksmbd_destroy_file_table(&sess->file_table);
+ down_write(&conn->session_lock);
+ sess->state = SMB2_SESSION_EXPIRED;
+ up_write(&conn->session_lock);
+
+- ksmbd_free_user(sess->user);
+- sess->user = NULL;
+- ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_NEGOTIATE);
++ if (sess->user) {
++ ksmbd_free_user(sess->user);
++ sess->user = NULL;
++ }
++ ksmbd_all_conn_set_status(sess_id, KSMBD_SESS_NEED_SETUP);
+
+ rsp->StructureSize = cpu_to_le16(4);
+ err = ksmbd_iov_pin_rsp(work, rsp, sizeof(struct smb2_logoff_rsp));
+@@ -2703,6 +2708,13 @@ static int parse_durable_handle_context(struct ksmbd_work *work,
+ goto out;
+ }
+
++ if (le16_to_cpu(context->DataOffset) +
++ le32_to_cpu(context->DataLength) <
++ sizeof(struct create_durable_reconn_v2_req)) {
++ err = -EINVAL;
++ goto out;
++ }
++
+ recon_v2 = (struct create_durable_reconn_v2_req *)context;
+ persistent_id = recon_v2->Fid.PersistentFileId;
+ dh_info->fp = ksmbd_lookup_durable_fd(persistent_id);
+@@ -2736,6 +2748,13 @@ static int parse_durable_handle_context(struct ksmbd_work *work,
+ goto out;
+ }
+
++ if (le16_to_cpu(context->DataOffset) +
++ le32_to_cpu(context->DataLength) <
++ sizeof(struct create_durable_reconn_req)) {
++ err = -EINVAL;
++ goto out;
++ }
++
+ recon = (struct create_durable_reconn_req *)context;
+ persistent_id = recon->Data.Fid.PersistentFileId;
+ dh_info->fp = ksmbd_lookup_durable_fd(persistent_id);
+@@ -2761,6 +2780,13 @@ static int parse_durable_handle_context(struct ksmbd_work *work,
+ goto out;
+ }
+
++ if (le16_to_cpu(context->DataOffset) +
++ le32_to_cpu(context->DataLength) <
++ sizeof(struct create_durable_req_v2)) {
++ err = -EINVAL;
++ goto out;
++ }
++
+ durable_v2_blob =
+ (struct create_durable_req_v2 *)context;
+ ksmbd_debug(SMB, "Request for durable v2 open\n");
+diff --git a/fs/smb/server/smbacl.c b/fs/smb/server/smbacl.c
+index 109036e2227ca1..376ae68144afa0 100644
+--- a/fs/smb/server/smbacl.c
++++ b/fs/smb/server/smbacl.c
+@@ -270,6 +270,11 @@ static int sid_to_id(struct mnt_idmap *idmap,
+ return -EIO;
+ }
+
++ if (psid->num_subauth == 0) {
++ pr_err("%s: zero subauthorities!\n", __func__);
++ return -EIO;
++ }
++
+ if (sidtype == SIDOWNER) {
+ kuid_t uid;
+ uid_t id;
+@@ -333,7 +338,7 @@ void posix_state_to_acl(struct posix_acl_state *state,
+ pace->e_perm = state->other.allow;
+ }
+
+-int init_acl_state(struct posix_acl_state *state, int cnt)
++int init_acl_state(struct posix_acl_state *state, u16 cnt)
+ {
+ int alloc;
+
+@@ -368,7 +373,7 @@ static void parse_dacl(struct mnt_idmap *idmap,
+ struct smb_fattr *fattr)
+ {
+ int i, ret;
+- int num_aces = 0;
++ u16 num_aces = 0;
+ unsigned int acl_size;
+ char *acl_base;
+ struct smb_ace **ppace;
+@@ -389,12 +394,12 @@ static void parse_dacl(struct mnt_idmap *idmap,
+
+ ksmbd_debug(SMB, "DACL revision %d size %d num aces %d\n",
+ le16_to_cpu(pdacl->revision), le16_to_cpu(pdacl->size),
+- le32_to_cpu(pdacl->num_aces));
++ le16_to_cpu(pdacl->num_aces));
+
+ acl_base = (char *)pdacl;
+ acl_size = sizeof(struct smb_acl);
+
+- num_aces = le32_to_cpu(pdacl->num_aces);
++ num_aces = le16_to_cpu(pdacl->num_aces);
+ if (num_aces <= 0)
+ return;
+
+@@ -583,7 +588,7 @@ static void parse_dacl(struct mnt_idmap *idmap,
+
+ static void set_posix_acl_entries_dacl(struct mnt_idmap *idmap,
+ struct smb_ace *pndace,
+- struct smb_fattr *fattr, u32 *num_aces,
++ struct smb_fattr *fattr, u16 *num_aces,
+ u16 *size, u32 nt_aces_num)
+ {
+ struct posix_acl_entry *pace;
+@@ -704,7 +709,7 @@ static void set_ntacl_dacl(struct mnt_idmap *idmap,
+ struct smb_fattr *fattr)
+ {
+ struct smb_ace *ntace, *pndace;
+- int nt_num_aces = le32_to_cpu(nt_dacl->num_aces), num_aces = 0;
++ u16 nt_num_aces = le16_to_cpu(nt_dacl->num_aces), num_aces = 0;
+ unsigned short size = 0;
+ int i;
+
+@@ -731,7 +736,7 @@ static void set_ntacl_dacl(struct mnt_idmap *idmap,
+
+ set_posix_acl_entries_dacl(idmap, pndace, fattr,
+ &num_aces, &size, nt_num_aces);
+- pndacl->num_aces = cpu_to_le32(num_aces);
++ pndacl->num_aces = cpu_to_le16(num_aces);
+ pndacl->size = cpu_to_le16(le16_to_cpu(pndacl->size) + size);
+ }
+
+@@ -739,7 +744,7 @@ static void set_mode_dacl(struct mnt_idmap *idmap,
+ struct smb_acl *pndacl, struct smb_fattr *fattr)
+ {
+ struct smb_ace *pace, *pndace;
+- u32 num_aces = 0;
++ u16 num_aces = 0;
+ u16 size = 0, ace_size = 0;
+ uid_t uid;
+ const struct smb_sid *sid;
+@@ -795,7 +800,7 @@ static void set_mode_dacl(struct mnt_idmap *idmap,
+ fattr->cf_mode, 0007);
+
+ out:
+- pndacl->num_aces = cpu_to_le32(num_aces);
++ pndacl->num_aces = cpu_to_le16(num_aces);
+ pndacl->size = cpu_to_le16(le16_to_cpu(pndacl->size) + size);
+ }
+
+@@ -1025,8 +1030,11 @@ int smb_inherit_dacl(struct ksmbd_conn *conn,
+ struct smb_sid owner_sid, group_sid;
+ struct dentry *parent = path->dentry->d_parent;
+ struct mnt_idmap *idmap = mnt_idmap(path->mnt);
+- int inherited_flags = 0, flags = 0, i, ace_cnt = 0, nt_size = 0, pdacl_size;
+- int rc = 0, num_aces, dacloffset, pntsd_type, pntsd_size, acl_len, aces_size;
++ int inherited_flags = 0, flags = 0, i, nt_size = 0, pdacl_size;
++ int rc = 0, pntsd_type, pntsd_size, acl_len, aces_size;
++ unsigned int dacloffset;
++ size_t dacl_struct_end;
++ u16 num_aces, ace_cnt = 0;
+ char *aces_base;
+ bool is_dir = S_ISDIR(d_inode(path->dentry)->i_mode);
+
+@@ -1034,15 +1042,18 @@ int smb_inherit_dacl(struct ksmbd_conn *conn,
+ parent, &parent_pntsd);
+ if (pntsd_size <= 0)
+ return -ENOENT;
++
+ dacloffset = le32_to_cpu(parent_pntsd->dacloffset);
+- if (!dacloffset || (dacloffset + sizeof(struct smb_acl) > pntsd_size)) {
++ if (!dacloffset ||
++ check_add_overflow(dacloffset, sizeof(struct smb_acl), &dacl_struct_end) ||
++ dacl_struct_end > (size_t)pntsd_size) {
+ rc = -EINVAL;
+ goto free_parent_pntsd;
+ }
+
+ parent_pdacl = (struct smb_acl *)((char *)parent_pntsd + dacloffset);
+ acl_len = pntsd_size - dacloffset;
+- num_aces = le32_to_cpu(parent_pdacl->num_aces);
++ num_aces = le16_to_cpu(parent_pdacl->num_aces);
+ pntsd_type = le16_to_cpu(parent_pntsd->type);
+ pdacl_size = le16_to_cpu(parent_pdacl->size);
+
+@@ -1201,7 +1212,7 @@ int smb_inherit_dacl(struct ksmbd_conn *conn,
+ pdacl = (struct smb_acl *)((char *)pntsd + le32_to_cpu(pntsd->dacloffset));
+ pdacl->revision = cpu_to_le16(2);
+ pdacl->size = cpu_to_le16(sizeof(struct smb_acl) + nt_size);
+- pdacl->num_aces = cpu_to_le32(ace_cnt);
++ pdacl->num_aces = cpu_to_le16(ace_cnt);
+ pace = (struct smb_ace *)((char *)pdacl + sizeof(struct smb_acl));
+ memcpy(pace, aces_base, nt_size);
+ pntsd_size += sizeof(struct smb_acl) + nt_size;
+@@ -1238,7 +1249,9 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, const struct path *path,
+ struct smb_ntsd *pntsd = NULL;
+ struct smb_acl *pdacl;
+ struct posix_acl *posix_acls;
+- int rc = 0, pntsd_size, acl_size, aces_size, pdacl_size, dacl_offset;
++ int rc = 0, pntsd_size, acl_size, aces_size, pdacl_size;
++ unsigned int dacl_offset;
++ size_t dacl_struct_end;
+ struct smb_sid sid;
+ int granted = le32_to_cpu(*pdaccess & ~FILE_MAXIMAL_ACCESS_LE);
+ struct smb_ace *ace;
+@@ -1257,7 +1270,8 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, const struct path *path,
+
+ dacl_offset = le32_to_cpu(pntsd->dacloffset);
+ if (!dacl_offset ||
+- (dacl_offset + sizeof(struct smb_acl) > pntsd_size))
++ check_add_overflow(dacl_offset, sizeof(struct smb_acl), &dacl_struct_end) ||
++ dacl_struct_end > (size_t)pntsd_size)
+ goto err_out;
+
+ pdacl = (struct smb_acl *)((char *)pntsd + le32_to_cpu(pntsd->dacloffset));
+@@ -1282,7 +1296,7 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, const struct path *path,
+
+ ace = (struct smb_ace *)((char *)pdacl + sizeof(struct smb_acl));
+ aces_size = acl_size - sizeof(struct smb_acl);
+- for (i = 0; i < le32_to_cpu(pdacl->num_aces); i++) {
++ for (i = 0; i < le16_to_cpu(pdacl->num_aces); i++) {
+ if (offsetof(struct smb_ace, access_req) > aces_size)
+ break;
+ ace_size = le16_to_cpu(ace->size);
+@@ -1303,7 +1317,7 @@ int smb_check_perm_dacl(struct ksmbd_conn *conn, const struct path *path,
+
+ ace = (struct smb_ace *)((char *)pdacl + sizeof(struct smb_acl));
+ aces_size = acl_size - sizeof(struct smb_acl);
+- for (i = 0; i < le32_to_cpu(pdacl->num_aces); i++) {
++ for (i = 0; i < le16_to_cpu(pdacl->num_aces); i++) {
+ if (offsetof(struct smb_ace, access_req) > aces_size)
+ break;
+ ace_size = le16_to_cpu(ace->size);
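check_add_overflow() is the reporting sibling of the saturating size_add() used elsewhere in this series: it performs the addition and returns whether it wrapped, which is exactly what the dacloffset validation above needs before comparing against the received descriptor size. Roughly, in userspace terms (using the GCC/Clang builtin the kernel macro is built on):

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

static bool dacl_header_fits(unsigned int dacloffset, size_t hdr_size,
			     size_t pntsd_size)
{
	size_t dacl_struct_end;

	/* Mirrors the patched check: reject a zero offset, a wrapping
	 * sum, or a header that would end past the descriptor. */
	if (dacloffset == 0)
		return false;
	if (__builtin_add_overflow(dacloffset, hdr_size, &dacl_struct_end))
		return false;
	return dacl_struct_end <= pntsd_size;
}

int main(void)
{
	printf("%d\n", dacl_header_fits(64, 8, 4096));		/* 1: fits */
	printf("%d\n", dacl_header_fits(4095, 8, 4096));	/* 0: past end */
	return 0;
}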
+diff --git a/fs/smb/server/smbacl.h b/fs/smb/server/smbacl.h
+index 24ce576fc2924b..355adaee39b871 100644
+--- a/fs/smb/server/smbacl.h
++++ b/fs/smb/server/smbacl.h
+@@ -86,7 +86,7 @@ int parse_sec_desc(struct mnt_idmap *idmap, struct smb_ntsd *pntsd,
+ int build_sec_desc(struct mnt_idmap *idmap, struct smb_ntsd *pntsd,
+ struct smb_ntsd *ppntsd, int ppntsd_size, int addition_info,
+ __u32 *secdesclen, struct smb_fattr *fattr);
+-int init_acl_state(struct posix_acl_state *state, int cnt);
++int init_acl_state(struct posix_acl_state *state, u16 cnt);
+ void free_acl_state(struct posix_acl_state *state);
+ void posix_state_to_acl(struct posix_acl_state *state,
+ struct posix_acl_entry *pace);
+diff --git a/include/drm/display/drm_dp_mst_helper.h b/include/drm/display/drm_dp_mst_helper.h
+index a80ba457a858f3..6398a6b50bd1b7 100644
+--- a/include/drm/display/drm_dp_mst_helper.h
++++ b/include/drm/display/drm_dp_mst_helper.h
+@@ -222,6 +222,13 @@ struct drm_dp_mst_branch {
+ */
+ struct list_head destroy_next;
+
++ /**
++ * @rad: Relative Address of the MST branch.
++ * For &drm_dp_mst_topology_mgr.mst_primary, its rad[] entries are all 0,
++ * unset and unused. For MST branches connected after mst_primary,
++ * in each element of rad[] the nibbles are ordered by the most
++ * significant 4 bits first and the least significant 4 bits second.
++ */
+ u8 rad[8];
+ u8 lct;
+ int num_ports;
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index a32eebcd23da47..38b2af336e4a01 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -324,6 +324,7 @@ struct cgroup_base_stat {
+ #ifdef CONFIG_SCHED_CORE
+ u64 forceidle_sum;
+ #endif
++ u64 ntime;
+ };
+
+ /*
+diff --git a/include/linux/context_tracking_irq.h b/include/linux/context_tracking_irq.h
+index c50b5670c4a52f..197916ee91a4bd 100644
+--- a/include/linux/context_tracking_irq.h
++++ b/include/linux/context_tracking_irq.h
+@@ -10,12 +10,12 @@ void ct_irq_exit_irqson(void);
+ void ct_nmi_enter(void);
+ void ct_nmi_exit(void);
+ #else
+-static inline void ct_irq_enter(void) { }
+-static inline void ct_irq_exit(void) { }
++static __always_inline void ct_irq_enter(void) { }
++static __always_inline void ct_irq_exit(void) { }
+ static inline void ct_irq_enter_irqson(void) { }
+ static inline void ct_irq_exit_irqson(void) { }
+-static inline void ct_nmi_enter(void) { }
+-static inline void ct_nmi_exit(void) { }
++static __always_inline void ct_nmi_enter(void) { }
++static __always_inline void ct_nmi_exit(void) { }
+ #endif
+
+ #endif
+diff --git a/include/linux/coresight.h b/include/linux/coresight.h
+index c1334259427850..f106b102511189 100644
+--- a/include/linux/coresight.h
++++ b/include/linux/coresight.h
+@@ -639,6 +639,10 @@ extern int coresight_enable_sysfs(struct coresight_device *csdev);
+ extern void coresight_disable_sysfs(struct coresight_device *csdev);
+ extern int coresight_timeout(struct csdev_access *csa, u32 offset,
+ int position, int value);
++typedef void (*coresight_timeout_cb_t) (struct csdev_access *, u32, int, int);
++extern int coresight_timeout_action(struct csdev_access *csa, u32 offset,
++ int position, int value,
++ coresight_timeout_cb_t cb);
+
+ extern int coresight_claim_device(struct coresight_device *csdev);
+ extern int coresight_claim_device_unlocked(struct coresight_device *csdev);
+diff --git a/include/linux/fwnode.h b/include/linux/fwnode.h
+index 0d79070c5a70f2..487d4bd9b0c999 100644
+--- a/include/linux/fwnode.h
++++ b/include/linux/fwnode.h
+@@ -91,7 +91,7 @@ struct fwnode_endpoint {
+ #define SWNODE_GRAPH_PORT_NAME_FMT "port@%u"
+ #define SWNODE_GRAPH_ENDPOINT_NAME_FMT "endpoint@%u"
+
+-#define NR_FWNODE_REFERENCE_ARGS 8
++#define NR_FWNODE_REFERENCE_ARGS 16
+
+ /**
+ * struct fwnode_reference_args - Fwnode reference with additional arguments
+diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
+index 457151f9f263d9..b378fbf885ce37 100644
+--- a/include/linux/interrupt.h
++++ b/include/linux/interrupt.h
+@@ -448,7 +448,7 @@ irq_calc_affinity_vectors(unsigned int minvec, unsigned int maxvec,
+ static inline void disable_irq_nosync_lockdep(unsigned int irq)
+ {
+ disable_irq_nosync(irq);
+-#ifdef CONFIG_LOCKDEP
++#if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PREEMPT_RT)
+ local_irq_disable();
+ #endif
+ }
+@@ -456,7 +456,7 @@ static inline void disable_irq_nosync_lockdep(unsigned int irq)
+ static inline void disable_irq_nosync_lockdep_irqsave(unsigned int irq, unsigned long *flags)
+ {
+ disable_irq_nosync(irq);
+-#ifdef CONFIG_LOCKDEP
++#if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PREEMPT_RT)
+ local_irq_save(*flags);
+ #endif
+ }
+@@ -471,7 +471,7 @@ static inline void disable_irq_lockdep(unsigned int irq)
+
+ static inline void enable_irq_lockdep(unsigned int irq)
+ {
+-#ifdef CONFIG_LOCKDEP
++#if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PREEMPT_RT)
+ local_irq_enable();
+ #endif
+ enable_irq(irq);
+@@ -479,7 +479,7 @@ static inline void enable_irq_lockdep(unsigned int irq)
+
+ static inline void enable_irq_lockdep_irqrestore(unsigned int irq, unsigned long *flags)
+ {
+-#ifdef CONFIG_LOCKDEP
++#if defined(CONFIG_LOCKDEP) && !defined(CONFIG_PREEMPT_RT)
+ local_irq_restore(*flags);
+ #endif
+ enable_irq(irq);
+diff --git a/include/linux/nfs_fs_sb.h b/include/linux/nfs_fs_sb.h
+index b804346a974195..81ab18658d72dc 100644
+--- a/include/linux/nfs_fs_sb.h
++++ b/include/linux/nfs_fs_sb.h
+@@ -251,6 +251,10 @@ struct nfs_server {
+ struct list_head ss_copies;
+ struct list_head ss_src_copies;
+
++ unsigned long delegation_flags;
++#define NFS4SERV_DELEGRETURN (1)
++#define NFS4SERV_DELEGATION_EXPIRED (2)
++#define NFS4SERV_DELEGRETURN_DELAYED (3)
+ unsigned long delegation_gen;
+ unsigned long mig_gen;
+ unsigned long mig_status;
+diff --git a/include/linux/nmi.h b/include/linux/nmi.h
+index a8dfb38c9bb6f1..e78fa535f61dd8 100644
+--- a/include/linux/nmi.h
++++ b/include/linux/nmi.h
+@@ -17,7 +17,6 @@
+ void lockup_detector_init(void);
+ void lockup_detector_retry_init(void);
+ void lockup_detector_soft_poweroff(void);
+-void lockup_detector_cleanup(void);
+
+ extern int watchdog_user_enabled;
+ extern int watchdog_thresh;
+@@ -37,7 +36,6 @@ extern int sysctl_hardlockup_all_cpu_backtrace;
+ static inline void lockup_detector_init(void) { }
+ static inline void lockup_detector_retry_init(void) { }
+ static inline void lockup_detector_soft_poweroff(void) { }
+-static inline void lockup_detector_cleanup(void) { }
+ #endif /* !CONFIG_LOCKUP_DETECTOR */
+
+ #ifdef CONFIG_SOFTLOCKUP_DETECTOR
+@@ -104,12 +102,10 @@ void watchdog_hardlockup_check(unsigned int cpu, struct pt_regs *regs);
+ #if defined(CONFIG_HARDLOCKUP_DETECTOR_PERF)
+ extern void hardlockup_detector_perf_stop(void);
+ extern void hardlockup_detector_perf_restart(void);
+-extern void hardlockup_detector_perf_cleanup(void);
+ extern void hardlockup_config_perf_event(const char *str);
+ #else
+ static inline void hardlockup_detector_perf_stop(void) { }
+ static inline void hardlockup_detector_perf_restart(void) { }
+-static inline void hardlockup_detector_perf_cleanup(void) { }
+ static inline void hardlockup_config_perf_event(const char *str) { }
+ #endif
+
+diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
+index e8b2ac6bd2ae3b..8df030ebd86286 100644
+--- a/include/linux/pgtable.h
++++ b/include/linux/pgtable.h
+@@ -1518,14 +1518,25 @@ static inline void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
+ }
+
+ /*
+- * track_pfn_copy is called when vma that is covering the pfnmap gets
+- * copied through copy_page_range().
++ * track_pfn_copy is called when a VM_PFNMAP VMA is about to get the page
++ * tables copied during copy_page_range(). On success, stores the pfn to be
++ * passed to untrack_pfn_copy().
+ */
+-static inline int track_pfn_copy(struct vm_area_struct *vma)
++static inline int track_pfn_copy(struct vm_area_struct *dst_vma,
++ struct vm_area_struct *src_vma, unsigned long *pfn)
+ {
+ return 0;
+ }
+
++/*
++ * untrack_pfn_copy is called when a VM_PFNMAP VMA failed to copy during
++ * copy_page_range(), but after track_pfn_copy() was already called.
++ */
++static inline void untrack_pfn_copy(struct vm_area_struct *dst_vma,
++ unsigned long pfn)
++{
++}
++
+ /*
+ * untrack_pfn is called while unmapping a pfnmap for a region.
+ * untrack can be called for a specific region indicated by pfn and size or
+@@ -1538,8 +1549,10 @@ static inline void untrack_pfn(struct vm_area_struct *vma,
+ }
+
+ /*
+- * untrack_pfn_clear is called while mremapping a pfnmap for a new region
+- * or fails to copy pgtable during duplicate vm area.
++ * untrack_pfn_clear is called in the following cases on a VM_PFNMAP VMA:
++ *
++ * 1) During mremap() on the src VMA after the page tables were moved.
++ * 2) During fork() on the dst VMA, immediately after duplicating the src VMA.
+ */
+ static inline void untrack_pfn_clear(struct vm_area_struct *vma)
+ {
+@@ -1550,7 +1563,10 @@ extern int track_pfn_remap(struct vm_area_struct *vma, pgprot_t *prot,
+ unsigned long size);
+ extern void track_pfn_insert(struct vm_area_struct *vma, pgprot_t *prot,
+ pfn_t pfn);
+-extern int track_pfn_copy(struct vm_area_struct *vma);
++extern int track_pfn_copy(struct vm_area_struct *dst_vma,
++ struct vm_area_struct *src_vma, unsigned long *pfn);
++extern void untrack_pfn_copy(struct vm_area_struct *dst_vma,
++ unsigned long pfn);
+ extern void untrack_pfn(struct vm_area_struct *vma, unsigned long pfn,
+ unsigned long size, bool mm_wr_locked);
+ extern void untrack_pfn_clear(struct vm_area_struct *vma);
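
The reworked contract pairs every successful track_pfn_copy() with either a fully copied range or an explicit untrack_pfn_copy() on failure; the real caller is in the mm/memory.c hunk later in this patch. A condensed sketch of the pairing, where do_copy_page_tables() is a stand-in for the actual page-table walk, not a real kernel function:

static int copy_range_sketch(struct vm_area_struct *dst_vma,
                             struct vm_area_struct *src_vma)
{
    unsigned long pfn;
    int ret = 0;

    if (unlikely(src_vma->vm_flags & VM_PFNMAP)) {
        ret = track_pfn_copy(dst_vma, src_vma, &pfn);
        if (ret)
            return ret;
    }

    ret = do_copy_page_tables(dst_vma, src_vma);    /* illustrative */

    /* On failure, undo the PAT tracking taken above. */
    if (ret && unlikely(src_vma->vm_flags & VM_PFNMAP))
        untrack_pfn_copy(dst_vma, pfn);

    return ret;
}
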
+diff --git a/include/linux/pm_runtime.h b/include/linux/pm_runtime.h
+index d39dc863f612fe..d0b29cd1fd204e 100644
+--- a/include/linux/pm_runtime.h
++++ b/include/linux/pm_runtime.h
+@@ -66,6 +66,7 @@ static inline bool queue_pm_work(struct work_struct *work)
+
+ extern int pm_generic_runtime_suspend(struct device *dev);
+ extern int pm_generic_runtime_resume(struct device *dev);
++extern bool pm_runtime_need_not_resume(struct device *dev);
+ extern int pm_runtime_force_suspend(struct device *dev);
+ extern int pm_runtime_force_resume(struct device *dev);
+
+@@ -241,6 +242,7 @@ static inline bool queue_pm_work(struct work_struct *work) { return false; }
+
+ static inline int pm_generic_runtime_suspend(struct device *dev) { return 0; }
+ static inline int pm_generic_runtime_resume(struct device *dev) { return 0; }
++static inline bool pm_runtime_need_not_resume(struct device *dev) { return true; }
+ static inline int pm_runtime_force_suspend(struct device *dev) { return 0; }
+ static inline int pm_runtime_force_resume(struct device *dev) { return 0; }
+
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index 48e5c03df1dd83..bd69ddc102fbc5 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -138,7 +138,7 @@ static inline void rcu_sysrq_end(void) { }
+ #if defined(CONFIG_NO_HZ_FULL) && (!defined(CONFIG_GENERIC_ENTRY) || !defined(CONFIG_KVM_XFER_TO_GUEST_WORK))
+ void rcu_irq_work_resched(void);
+ #else
+-static inline void rcu_irq_work_resched(void) { }
++static __always_inline void rcu_irq_work_resched(void) { }
+ #endif
+
+ #ifdef CONFIG_RCU_NOCB_CPU
+diff --git a/include/linux/sched/smt.h b/include/linux/sched/smt.h
+index fb1e295e7e63e2..166b19af956f8b 100644
+--- a/include/linux/sched/smt.h
++++ b/include/linux/sched/smt.h
+@@ -12,7 +12,7 @@ static __always_inline bool sched_smt_active(void)
+ return static_branch_likely(&sched_smt_present);
+ }
+ #else
+-static inline bool sched_smt_active(void) { return false; }
++static __always_inline bool sched_smt_active(void) { return false; }
+ #endif
+
+ void arch_smt_update(void);
+diff --git a/include/linux/thermal.h b/include/linux/thermal.h
+index 25ea8fe2313e6d..0da2c257e32cf9 100644
+--- a/include/linux/thermal.h
++++ b/include/linux/thermal.h
+@@ -83,8 +83,6 @@ struct thermal_trip {
+ #define THERMAL_TRIP_PRIV_TO_INT(_val_) (uintptr_t)(_val_)
+ #define THERMAL_INT_TO_TRIP_PRIV(_val_) (void *)(uintptr_t)(_val_)
+
+-struct thermal_zone_device;
+-
+ struct cooling_spec {
+ unsigned long upper; /* Highest cooling state */
+ unsigned long lower; /* Lowest cooling state */
+diff --git a/include/linux/trace_events.h b/include/linux/trace_events.h
+index 77769ff5054441..fcf5a64d5cfe2d 100644
+--- a/include/linux/trace_events.h
++++ b/include/linux/trace_events.h
+@@ -689,6 +689,20 @@ struct trace_event_file {
+ atomic_t tm_ref; /* trigger-mode reference counter */
+ };
+
++#ifdef CONFIG_HIST_TRIGGERS
++extern struct irq_work hist_poll_work;
++extern wait_queue_head_t hist_poll_wq;
++
++static inline void hist_poll_wakeup(void)
++{
++ if (wq_has_sleeper(&hist_poll_wq))
++ irq_work_queue(&hist_poll_work);
++}
++
++#define hist_poll_wait(file, wait) \
++ poll_wait(file, &hist_poll_wq, wait)
++#endif
++
+ #define __TRACE_EVENT_FLAGS(name, value) \
+ static int __init trace_init_flags_##name(void) \
+ { \
+diff --git a/include/linux/uprobes.h b/include/linux/uprobes.h
+index 2b294bf1881fef..d0cb0e02cd6ae0 100644
+--- a/include/linux/uprobes.h
++++ b/include/linux/uprobes.h
+@@ -28,6 +28,8 @@ struct page;
+
+ #define MAX_URETPROBE_DEPTH 64
+
++#define UPROBE_NO_TRAMPOLINE_VADDR (~0UL)
++
+ struct uprobe_consumer {
+ /*
+ * handler() can return UPROBE_HANDLER_REMOVE to signal the need to
+diff --git a/include/rdma/ib_verbs.h b/include/rdma/ib_verbs.h
+index 67551133b5228e..c2b5de75daf252 100644
+--- a/include/rdma/ib_verbs.h
++++ b/include/rdma/ib_verbs.h
+@@ -2737,6 +2737,7 @@ struct ib_device {
+ * It is a NULL terminated array.
+ */
+ const struct attribute_group *groups[4];
++ u8 hw_stats_attr_index;
+
+ u64 uverbs_cmd_mask;
+
+diff --git a/init/Kconfig b/init/Kconfig
+index 293c565c62168e..243d0087f94458 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -129,6 +129,11 @@ config CC_HAS_COUNTED_BY
+ # https://github.com/llvm/llvm-project/pull/112636
+ depends on !(CC_IS_CLANG && CLANG_VERSION < 190103)
+
++config LD_CAN_USE_KEEP_IN_OVERLAY
++ # ld.lld prior to 21.0.0 did not support KEEP within an overlay description
++ # https://github.com/llvm/llvm-project/pull/130661
++ def_bool LD_IS_BFD || LLD_VERSION >= 210000
++
+ config PAHOLE_VERSION
+ int
+ default $(shell,$(srctree)/scripts/pahole-version.sh $(PAHOLE))
+diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
+index 2b9c8c168a0ba3..a60a6a2ce0d7f4 100644
+--- a/kernel/bpf/core.c
++++ b/kernel/bpf/core.c
+@@ -2290,17 +2290,18 @@ void bpf_patch_call_args(struct bpf_insn *insn, u32 stack_depth)
+ insn->code = BPF_JMP | BPF_CALL_ARGS;
+ }
+ #endif
+-#else
++#endif
++
+ static unsigned int __bpf_prog_ret0_warn(const void *ctx,
+ const struct bpf_insn *insn)
+ {
+ /* If this handler ever gets executed, then BPF_JIT_ALWAYS_ON
+- * is not working properly, so warn about it!
++ * is not working properly, or the interpreter is being used when
++ * prog->jit_requested is not 0, so warn about it!
+ */
+ WARN_ON_ONCE(1);
+ return 0;
+ }
+-#endif
+
+ bool bpf_prog_map_compatible(struct bpf_map *map,
+ const struct bpf_prog *fp)
+@@ -2380,8 +2381,18 @@ static void bpf_prog_select_func(struct bpf_prog *fp)
+ {
+ #ifndef CONFIG_BPF_JIT_ALWAYS_ON
+ u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1);
++ u32 idx = (round_up(stack_depth, 32) / 32) - 1;
+
+- fp->bpf_func = interpreters[(round_up(stack_depth, 32) / 32) - 1];
++ /* may_goto may cause stack size > 512, leading to idx out-of-bounds.
++ * But for non-JITed programs, we don't need bpf_func, so no bounds
++ * check needed.
++ */
++ if (!fp->jit_requested &&
++ !WARN_ON_ONCE(idx >= ARRAY_SIZE(interpreters))) {
++ fp->bpf_func = interpreters[idx];
++ } else {
++ fp->bpf_func = __bpf_prog_ret0_warn;
++ }
+ #else
+ fp->bpf_func = __bpf_prog_ret0_warn;
+ #endif
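
The guard above matters because may_goto can push the effective stack depth past 512 bytes, while the interpreter table only has one entry per 32 bytes up to 512. A small user-space model of the bucket math, showing why an inflated depth must be rejected rather than dispatched:

#include <stdio.h>

#define ROUND_UP(x, a)  ((((x) + (a) - 1) / (a)) * (a))
#define NR_BUCKETS      16      /* 512 / 32, as in interpreters[] */

int main(void)
{
    unsigned int depths[] = { 1, 32, 33, 512, 544 };

    for (unsigned int i = 0; i < 5; i++) {
        unsigned int idx = ROUND_UP(depths[i], 32) / 32 - 1;

        printf("depth %3u -> idx %2u%s\n", depths[i], idx,
               idx >= NR_BUCKETS ? " (out of bounds)" : "");
    }
    return 0;   /* depth 544 yields idx 16, past the table end */
}
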
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index a0cab0d0252fab..9000806ee3bae8 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -21276,6 +21276,13 @@ static int do_misc_fixups(struct bpf_verifier_env *env)
+ if (subprogs[cur_subprog + 1].start == i + delta + 1) {
+ subprogs[cur_subprog].stack_depth += stack_depth_extra;
+ subprogs[cur_subprog].stack_extra = stack_depth_extra;
++
++ stack_depth = subprogs[cur_subprog].stack_depth;
++ if (stack_depth > MAX_BPF_STACK && !prog->jit_requested) {
++ verbose(env, "stack size %d(extra %d) is too large\n",
++ stack_depth, stack_depth_extra);
++ return -EINVAL;
++ }
+ cur_subprog++;
+ stack_depth = subprogs[cur_subprog].stack_depth;
+ stack_depth_extra = 0;
+diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
+index ce295b73c0a366..3e01781aeb7bd0 100644
+--- a/kernel/cgroup/rstat.c
++++ b/kernel/cgroup/rstat.c
+@@ -444,6 +444,7 @@ static void cgroup_base_stat_add(struct cgroup_base_stat *dst_bstat,
+ #ifdef CONFIG_SCHED_CORE
+ dst_bstat->forceidle_sum += src_bstat->forceidle_sum;
+ #endif
++ dst_bstat->ntime += src_bstat->ntime;
+ }
+
+ static void cgroup_base_stat_sub(struct cgroup_base_stat *dst_bstat,
+@@ -455,6 +456,7 @@ static void cgroup_base_stat_sub(struct cgroup_base_stat *dst_bstat,
+ #ifdef CONFIG_SCHED_CORE
+ dst_bstat->forceidle_sum -= src_bstat->forceidle_sum;
+ #endif
++ dst_bstat->ntime -= src_bstat->ntime;
+ }
+
+ static void cgroup_base_stat_flush(struct cgroup *cgrp, int cpu)
+@@ -534,8 +536,10 @@ void __cgroup_account_cputime_field(struct cgroup *cgrp,
+ rstatc = cgroup_base_stat_cputime_account_begin(cgrp, &flags);
+
+ switch (index) {
+- case CPUTIME_USER:
+ case CPUTIME_NICE:
++ rstatc->bstat.ntime += delta_exec;
++ fallthrough;
++ case CPUTIME_USER:
+ rstatc->bstat.cputime.utime += delta_exec;
+ break;
+ case CPUTIME_SYSTEM:
+@@ -590,6 +594,7 @@ static void root_cgroup_cputime(struct cgroup_base_stat *bstat)
+ #ifdef CONFIG_SCHED_CORE
+ bstat->forceidle_sum += cpustat[CPUTIME_FORCEIDLE];
+ #endif
++ bstat->ntime += cpustat[CPUTIME_NICE];
+ }
+ }
+
+@@ -607,32 +612,33 @@ static void cgroup_force_idle_show(struct seq_file *seq, struct cgroup_base_stat
+ void cgroup_base_stat_cputime_show(struct seq_file *seq)
+ {
+ struct cgroup *cgrp = seq_css(seq)->cgroup;
+- u64 usage, utime, stime;
++ struct cgroup_base_stat bstat;
+
+ if (cgroup_parent(cgrp)) {
+ cgroup_rstat_flush_hold(cgrp);
+- usage = cgrp->bstat.cputime.sum_exec_runtime;
++ bstat = cgrp->bstat;
+ cputime_adjust(&cgrp->bstat.cputime, &cgrp->prev_cputime,
+- &utime, &stime);
++ &bstat.cputime.utime, &bstat.cputime.stime);
+ cgroup_rstat_flush_release(cgrp);
+ } else {
+- /* cgrp->bstat of root is not actually used, reuse it */
+- root_cgroup_cputime(&cgrp->bstat);
+- usage = cgrp->bstat.cputime.sum_exec_runtime;
+- utime = cgrp->bstat.cputime.utime;
+- stime = cgrp->bstat.cputime.stime;
++ root_cgroup_cputime(&bstat);
+ }
+
+- do_div(usage, NSEC_PER_USEC);
+- do_div(utime, NSEC_PER_USEC);
+- do_div(stime, NSEC_PER_USEC);
++ do_div(bstat.cputime.sum_exec_runtime, NSEC_PER_USEC);
++ do_div(bstat.cputime.utime, NSEC_PER_USEC);
++ do_div(bstat.cputime.stime, NSEC_PER_USEC);
++ do_div(bstat.ntime, NSEC_PER_USEC);
+
+ seq_printf(seq, "usage_usec %llu\n"
+- "user_usec %llu\n"
+- "system_usec %llu\n",
+- usage, utime, stime);
+-
+- cgroup_force_idle_show(seq, &cgrp->bstat);
++ "user_usec %llu\n"
++ "system_usec %llu\n"
++ "nice_usec %llu\n",
++ bstat.cputime.sum_exec_runtime,
++ bstat.cputime.utime,
++ bstat.cputime.stime,
++ bstat.ntime);
++
++ cgroup_force_idle_show(seq, &bstat);
+ }
+
+ /* Add bpf kfuncs for cgroup_rstat_updated() and cgroup_rstat_flush() */
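
The fallthrough above is what makes nice time a subset of user time: a CPUTIME_NICE delta lands in both ntime and utime, and cpu.stat then gains a nice_usec line. A user-space model of the double accounting (field names are illustrative):

#include <stdint.h>
#include <stdio.h>

struct base_stat { uint64_t utime, ntime; };

static void account(struct base_stat *b, int is_nice, uint64_t delta)
{
    if (is_nice)
        b->ntime += delta;      /* nice time... */
    b->utime += delta;          /* ...is still user time */
}

int main(void)
{
    struct base_stat b = { 0, 0 };

    account(&b, 0, 100);        /* plain user time */
    account(&b, 1, 40);         /* user time at positive nice */
    printf("user_usec %llu\nnice_usec %llu\n",
           (unsigned long long)b.utime,
           (unsigned long long)b.ntime);    /* 140 and 40 */
    return 0;
}
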
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 9ee6c9145b1df9..cf02a629f99023 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -1452,11 +1452,6 @@ static int __ref _cpu_down(unsigned int cpu, int tasks_frozen,
+
+ out:
+ cpus_write_unlock();
+- /*
+- * Do post unplug cleanup. This is still protected against
+- * concurrent CPU hotplug via cpu_add_remove_lock.
+- */
+- lockup_detector_cleanup();
+ arch_smt_update();
+ return ret;
+ }
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 5fff74c736063c..b5ccf52bb71baa 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -2407,6 +2407,7 @@ ctx_time_update_event(struct perf_event_context *ctx, struct perf_event *event)
+ #define DETACH_GROUP 0x01UL
+ #define DETACH_CHILD 0x02UL
+ #define DETACH_DEAD 0x04UL
++#define DETACH_EXIT 0x08UL
+
+ /*
+ * Cross CPU call to remove a performance event
+@@ -2421,6 +2422,7 @@ __perf_remove_from_context(struct perf_event *event,
+ void *info)
+ {
+ struct perf_event_pmu_context *pmu_ctx = event->pmu_ctx;
++ enum perf_event_state state = PERF_EVENT_STATE_OFF;
+ unsigned long flags = (unsigned long)info;
+
+ ctx_time_update(cpuctx, ctx);
+@@ -2429,16 +2431,19 @@ __perf_remove_from_context(struct perf_event *event,
+ * Ensure event_sched_out() switches to OFF, at the very least
+ * this avoids raising perf_pending_task() at this time.
+ */
+- if (flags & DETACH_DEAD)
++ if (flags & DETACH_EXIT)
++ state = PERF_EVENT_STATE_EXIT;
++ if (flags & DETACH_DEAD) {
+ event->pending_disable = 1;
++ state = PERF_EVENT_STATE_DEAD;
++ }
+ event_sched_out(event, ctx);
++ perf_event_set_state(event, min(event->state, state));
+ if (flags & DETACH_GROUP)
+ perf_group_detach(event);
+ if (flags & DETACH_CHILD)
+ perf_child_detach(event);
+ list_del_event(event, ctx);
+- if (flags & DETACH_DEAD)
+- event->state = PERF_EVENT_STATE_DEAD;
+
+ if (!pmu_ctx->nr_events) {
+ pmu_ctx->rotate_necessary = 0;
+@@ -11737,6 +11742,21 @@ static int pmu_dev_alloc(struct pmu *pmu)
+ static struct lock_class_key cpuctx_mutex;
+ static struct lock_class_key cpuctx_lock;
+
++static bool idr_cmpxchg(struct idr *idr, unsigned long id, void *old, void *new)
++{
++ void *tmp, *val = idr_find(idr, id);
++
++ if (val != old)
++ return false;
++
++ tmp = idr_replace(idr, new, id);
++ if (IS_ERR(tmp))
++ return false;
++
++ WARN_ON_ONCE(tmp != val);
++ return true;
++}
++
+ int perf_pmu_register(struct pmu *pmu, const char *name, int type)
+ {
+ int cpu, ret, max = PERF_TYPE_MAX;
+@@ -11763,7 +11783,7 @@ int perf_pmu_register(struct pmu *pmu, const char *name, int type)
+ if (type >= 0)
+ max = type;
+
+- ret = idr_alloc(&pmu_idr, pmu, max, 0, GFP_KERNEL);
++ ret = idr_alloc(&pmu_idr, NULL, max, 0, GFP_KERNEL);
+ if (ret < 0)
+ goto free_pdc;
+
+@@ -11771,6 +11791,7 @@ int perf_pmu_register(struct pmu *pmu, const char *name, int type)
+
+ type = ret;
+ pmu->type = type;
++ atomic_set(&pmu->exclusive_cnt, 0);
+
+ if (pmu_bus_running && !pmu->dev) {
+ ret = pmu_dev_alloc(pmu);
+@@ -11819,14 +11840,22 @@ int perf_pmu_register(struct pmu *pmu, const char *name, int type)
+ if (!pmu->event_idx)
+ pmu->event_idx = perf_event_idx_default;
+
++ /*
++ * Now that the PMU is complete, make it visible to perf_try_init_event().
++ */
++ if (!idr_cmpxchg(&pmu_idr, pmu->type, NULL, pmu))
++ goto free_context;
+ list_add_rcu(&pmu->entry, &pmus);
+- atomic_set(&pmu->exclusive_cnt, 0);
++
+ ret = 0;
+ unlock:
+ mutex_unlock(&pmus_lock);
+
+ return ret;
+
++free_context:
++ free_percpu(pmu->cpu_pmu_context);
++
+ free_dev:
+ if (pmu->dev && pmu->dev != PMU_NULL_DEV) {
+ device_del(pmu->dev);
+@@ -13319,12 +13348,7 @@ perf_event_exit_event(struct perf_event *event, struct perf_event_context *ctx)
+ mutex_lock(&parent_event->child_mutex);
+ }
+
+- perf_remove_from_context(event, detach_flags);
+-
+- raw_spin_lock_irq(&ctx->lock);
+- if (event->state > PERF_EVENT_STATE_EXIT)
+- perf_event_set_state(event, PERF_EVENT_STATE_EXIT);
+- raw_spin_unlock_irq(&ctx->lock);
++ perf_remove_from_context(event, detach_flags | DETACH_EXIT);
+
+ /*
+ * Child events can be freed.
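
The perf_pmu_register() change closes a window in which a concurrent perf_try_init_event() could find a partially initialized pmu through the idr. The fix is a publish-last idiom: reserve the id with a NULL placeholder, finish initialization, then swap the real pointer in via the idr_cmpxchg() helper added above. A hedged sketch with invented names (my_idr, my_obj, publish_obj); the patch's own failure path differs in detail:

static int publish_obj(struct my_obj *obj)
{
    int id = idr_alloc(&my_idr, NULL, 0, 0, GFP_KERNEL);

    if (id < 0)
        return id;

    obj->id = id;
    /* ... complete every remaining field of obj ... */

    /* Reserve with NULL: lookups treat NULL as "absent", so nobody
     * can observe a half-built object. Publish last; this fails if
     * the slot no longer holds our NULL placeholder. */
    if (!idr_cmpxchg(&my_idr, id, NULL, obj)) {
        idr_remove(&my_idr, id);
        return -EBUSY;
    }
    return 0;
}
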
+diff --git a/kernel/events/ring_buffer.c b/kernel/events/ring_buffer.c
+index 4f46f688d0d490..bbfa22c0a1597a 100644
+--- a/kernel/events/ring_buffer.c
++++ b/kernel/events/ring_buffer.c
+@@ -19,7 +19,7 @@
+
+ static void perf_output_wakeup(struct perf_output_handle *handle)
+ {
+- atomic_set(&handle->rb->poll, EPOLLIN);
++ atomic_set(&handle->rb->poll, EPOLLIN | EPOLLRDNORM);
+
+ handle->event->pending_wakeup = 1;
+
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index 4fdc08ca0f3cbd..e60f5e71e35df7 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -167,6 +167,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
+ DEFINE_FOLIO_VMA_WALK(pvmw, old_folio, vma, addr, 0);
+ int err;
+ struct mmu_notifier_range range;
++ pte_t pte;
+
+ mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, addr,
+ addr + PAGE_SIZE);
+@@ -186,6 +187,16 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
+ if (!page_vma_mapped_walk(&pvmw))
+ goto unlock;
+ VM_BUG_ON_PAGE(addr != pvmw.address, old_page);
++ pte = ptep_get(pvmw.pte);
++
++ /*
++ * Handle PFN swap PTES, such as device-exclusive ones, that actually
++ * map pages: simply trigger GUP again to fix it up.
++ */
++ if (unlikely(!pte_present(pte))) {
++ page_vma_mapped_walk_done(&pvmw);
++ goto unlock;
++ }
+
+ if (new_page) {
+ folio_get(new_folio);
+@@ -200,7 +211,7 @@ static int __replace_page(struct vm_area_struct *vma, unsigned long addr,
+ inc_mm_counter(mm, MM_ANONPAGES);
+ }
+
+- flush_cache_page(vma, addr, pte_pfn(ptep_get(pvmw.pte)));
++ flush_cache_page(vma, addr, pte_pfn(pte));
+ ptep_clear_flush(vma, addr, pvmw.pte);
+ if (new_page)
+ set_pte_at(mm, addr, pvmw.pte,
+@@ -1887,8 +1898,8 @@ void uprobe_copy_process(struct task_struct *t, unsigned long flags)
+ */
+ unsigned long uprobe_get_trampoline_vaddr(void)
+ {
++ unsigned long trampoline_vaddr = UPROBE_NO_TRAMPOLINE_VADDR;
+ struct xol_area *area;
+- unsigned long trampoline_vaddr = -1;
+
+ /* Pairs with xol_add_vma() smp_store_release() */
+ area = READ_ONCE(current->mm->uprobes_state.xol_area); /* ^^^ */
+diff --git a/kernel/fork.c b/kernel/fork.c
+index e192bdbc9adebb..12decadff468f5 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -505,6 +505,10 @@ struct vm_area_struct *vm_area_dup(struct vm_area_struct *orig)
+ vma_numab_state_init(new);
+ dup_anon_vma_name(orig, new);
+
++ /* track_pfn_copy() will later take care of copying internal state. */
++ if (unlikely(new->vm_flags & VM_PFNMAP))
++ untrack_pfn_clear(new);
++
+ return new;
+ }
+
+diff --git a/kernel/kexec_elf.c b/kernel/kexec_elf.c
+index d3689632e8b90f..3a5c25b2adc94d 100644
+--- a/kernel/kexec_elf.c
++++ b/kernel/kexec_elf.c
+@@ -390,7 +390,7 @@ int kexec_elf_load(struct kimage *image, struct elfhdr *ehdr,
+ struct kexec_buf *kbuf,
+ unsigned long *lowest_load_addr)
+ {
+- unsigned long lowest_addr = UINT_MAX;
++ unsigned long lowest_addr = ULONG_MAX;
+ int ret;
+ size_t i;
+
+diff --git a/kernel/locking/semaphore.c b/kernel/locking/semaphore.c
+index 34bfae72f29526..de9117c0e671e9 100644
+--- a/kernel/locking/semaphore.c
++++ b/kernel/locking/semaphore.c
+@@ -29,6 +29,7 @@
+ #include <linux/export.h>
+ #include <linux/sched.h>
+ #include <linux/sched/debug.h>
++#include <linux/sched/wake_q.h>
+ #include <linux/semaphore.h>
+ #include <linux/spinlock.h>
+ #include <linux/ftrace.h>
+@@ -38,7 +39,7 @@ static noinline void __down(struct semaphore *sem);
+ static noinline int __down_interruptible(struct semaphore *sem);
+ static noinline int __down_killable(struct semaphore *sem);
+ static noinline int __down_timeout(struct semaphore *sem, long timeout);
+-static noinline void __up(struct semaphore *sem);
++static noinline void __up(struct semaphore *sem, struct wake_q_head *wake_q);
+
+ /**
+ * down - acquire the semaphore
+@@ -183,13 +184,16 @@ EXPORT_SYMBOL(down_timeout);
+ void __sched up(struct semaphore *sem)
+ {
+ unsigned long flags;
++ DEFINE_WAKE_Q(wake_q);
+
+ raw_spin_lock_irqsave(&sem->lock, flags);
+ if (likely(list_empty(&sem->wait_list)))
+ sem->count++;
+ else
+- __up(sem);
++ __up(sem, &wake_q);
+ raw_spin_unlock_irqrestore(&sem->lock, flags);
++ if (!wake_q_empty(&wake_q))
++ wake_up_q(&wake_q);
+ }
+ EXPORT_SYMBOL(up);
+
+@@ -269,11 +273,12 @@ static noinline int __sched __down_timeout(struct semaphore *sem, long timeout)
+ return __down_common(sem, TASK_UNINTERRUPTIBLE, timeout);
+ }
+
+-static noinline void __sched __up(struct semaphore *sem)
++static noinline void __sched __up(struct semaphore *sem,
++ struct wake_q_head *wake_q)
+ {
+ struct semaphore_waiter *waiter = list_first_entry(&sem->wait_list,
+ struct semaphore_waiter, list);
+ list_del(&waiter->list);
+ waiter->up = true;
+- wake_up_process(waiter->task);
++ wake_q_add(wake_q, waiter->task);
+ }
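
Deferring the wakeup through a wake queue means the wakeup no longer runs under sem->lock, so a task that wakes and runs immediately cannot contend on the raw spinlock we still hold. The general shape of the idiom, with illustrative lock and task names:

#include <linux/sched/wake_q.h>
#include <linux/spinlock.h>

static void signal_one(raw_spinlock_t *my_lock,
                       struct task_struct *waiter_task)
{
    DEFINE_WAKE_Q(wake_q);
    unsigned long flags;

    raw_spin_lock_irqsave(my_lock, flags);
    /* ... dequeue/mark the waiter while protected by the lock ... */
    wake_q_add(&wake_q, waiter_task);   /* only records the task */
    raw_spin_unlock_irqrestore(my_lock, flags);

    if (!wake_q_empty(&wake_q))
        wake_up_q(&wake_q);     /* real wakeup, no locks held */
}
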
+diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
+index a17c23b53049cc..5e7ae404c8d2a4 100644
+--- a/kernel/sched/deadline.c
++++ b/kernel/sched/deadline.c
+@@ -3179,7 +3179,7 @@ int sched_dl_global_validate(void)
+ * value smaller than the currently allocated bandwidth in
+ * any of the root_domains.
+ */
+- for_each_possible_cpu(cpu) {
++ for_each_online_cpu(cpu) {
+ rcu_read_lock_sched();
+
+ if (dl_bw_visited(cpu, gen))
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 58ba14ed8fbcb9..ceb023629d48dd 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -885,6 +885,26 @@ struct sched_entity *__pick_first_entity(struct cfs_rq *cfs_rq)
+ return __node_2_se(left);
+ }
+
++/*
++ * HACK, stash a copy of deadline at the point of pick in vlag,
++ * which isn't used until dequeue.
++ */
++static inline void set_protect_slice(struct sched_entity *se)
++{
++ se->vlag = se->deadline;
++}
++
++static inline bool protect_slice(struct sched_entity *se)
++{
++ return se->vlag == se->deadline;
++}
++
++static inline void cancel_protect_slice(struct sched_entity *se)
++{
++ if (protect_slice(se))
++ se->vlag = se->deadline + 1;
++}
++
+ /*
+ * Earliest Eligible Virtual Deadline First
+ *
+@@ -921,11 +941,7 @@ static struct sched_entity *pick_eevdf(struct cfs_rq *cfs_rq)
+ if (curr && (!curr->on_rq || !entity_eligible(cfs_rq, curr)))
+ curr = NULL;
+
+- /*
+- * Once selected, run a task until it either becomes non-eligible or
+- * until it gets a new slice. See the HACK in set_next_entity().
+- */
+- if (sched_feat(RUN_TO_PARITY) && curr && curr->vlag == curr->deadline)
++ if (sched_feat(RUN_TO_PARITY) && curr && protect_slice(curr))
+ return curr;
+
+ /* Pick the leftmost entity if it's eligible */
+@@ -5626,11 +5642,8 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
+ update_stats_wait_end_fair(cfs_rq, se);
+ __dequeue_entity(cfs_rq, se);
+ update_load_avg(cfs_rq, se, UPDATE_TG);
+- /*
+- * HACK, stash a copy of deadline at the point of pick in vlag,
+- * which isn't used until dequeue.
+- */
+- se->vlag = se->deadline;
++
++ set_protect_slice(se);
+ }
+
+ update_stats_curr_start(cfs_rq, se);
+@@ -7090,6 +7103,8 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+ update_cfs_group(se);
+
+ se->slice = slice;
++ if (se != cfs_rq->curr)
++ min_vruntime_cb_propagate(&se->run_node, NULL);
+ slice = cfs_rq_min_slice(cfs_rq);
+
+ cfs_rq->h_nr_running++;
+@@ -7219,6 +7234,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
+ update_cfs_group(se);
+
+ se->slice = slice;
++ if (se != cfs_rq->curr)
++ min_vruntime_cb_propagate(&se->run_node, NULL);
+ slice = cfs_rq_min_slice(cfs_rq);
+
+ cfs_rq->h_nr_running -= h_nr_running;
+@@ -8882,8 +8899,15 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
+ * Preempt an idle entity in favor of a non-idle entity (and don't preempt
+ * in the inverse case).
+ */
+- if (cse_is_idle && !pse_is_idle)
++ if (cse_is_idle && !pse_is_idle) {
++ /*
++ * When a non-idle entity preempts an idle entity,
++ * don't give the idle entity slice protection.
++ */
++ cancel_protect_slice(se);
+ goto preempt;
++ }
++
+ if (cse_is_idle != pse_is_idle)
+ return;
+
+@@ -8902,8 +8926,8 @@ static void check_preempt_wakeup_fair(struct rq *rq, struct task_struct *p, int
+ * Note that even if @p does not turn out to be the most eligible
+ * task at this moment, current's slice protection will be lost.
+ */
+- if (do_preempt_short(cfs_rq, pse, se) && se->vlag == se->deadline)
+- se->vlag = se->deadline + 1;
++ if (do_preempt_short(cfs_rq, pse, se))
++ cancel_protect_slice(se);
+
+ /*
+ * If @p has become the most eligible task, force preemption.
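
The set_protect_slice()/protect_slice()/cancel_protect_slice() helpers above give a name to the existing trick: while an entity is the running one, se->vlag is not otherwise used until dequeue, so "vlag == deadline" serves as a zero-storage "slice protected" flag. A user-space model of the encoding:

#include <stdbool.h>
#include <stdio.h>

/* Minimal model: stashing a copy of 'deadline' in the otherwise
 * unused 'vlag' field acts as a one-bit flag with no extra storage. */
struct entity { long long vlag, deadline; };

static void set_protect_slice(struct entity *se) { se->vlag = se->deadline; }
static bool protect_slice(struct entity *se) { return se->vlag == se->deadline; }
static void cancel_protect_slice(struct entity *se)
{
    if (protect_slice(se))
        se->vlag = se->deadline + 1;
}

int main(void)
{
    struct entity se = { .deadline = 100 };

    set_protect_slice(&se);
    printf("protected: %d\n", protect_slice(&se));  /* 1 */
    cancel_protect_slice(&se);
    printf("protected: %d\n", protect_slice(&se));  /* 0 */
    return 0;
}
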
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 449efaaa387a68..55f279ddfd63d5 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -833,7 +833,7 @@ static int bpf_send_signal_common(u32 sig, enum pid_type type)
+ if (unlikely(is_global_init(current)))
+ return -EPERM;
+
+- if (!preemptible()) {
++ if (preempt_count() != 0 || irqs_disabled()) {
+ /* Do an early check on signal validity. Otherwise,
+ * the error is lost in deferred irq_work.
+ */
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index ea8ad5480e286d..3e252ba16d5c6e 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -7442,9 +7442,9 @@ static __init int rb_write_something(struct rb_test_data *data, bool nested)
+ /* Ignore dropped events before test starts. */
+ if (started) {
+ if (nested)
+- data->bytes_dropped += len;
+- else
+ data->bytes_dropped_nested += len;
++ else
++ data->bytes_dropped += len;
+ }
+ return len;
+ }
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index ea9b44847ce6b7..29eba68e07859d 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -3111,6 +3111,20 @@ static bool event_in_systems(struct trace_event_call *call,
+ return !*p || isspace(*p) || *p == ',';
+ }
+
++#ifdef CONFIG_HIST_TRIGGERS
++/*
++ * Wake up waiters on the hist_poll_wq from irq_work because the hist trigger
++ * may happen in any context.
++ */
++static void hist_poll_event_irq_work(struct irq_work *work)
++{
++ wake_up_all(&hist_poll_wq);
++}
++
++DEFINE_IRQ_WORK(hist_poll_work, hist_poll_event_irq_work);
++DECLARE_WAIT_QUEUE_HEAD(hist_poll_wq);
++#endif
++
+ static struct trace_event_file *
+ trace_create_new_event(struct trace_event_call *call,
+ struct trace_array *tr)
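
The irq_work indirection above exists because, per the comment, a hist trigger may fire in any context, where calling wake_up_all() directly is unsafe; irq_work runs the wakeup in hard-irq context shortly afterwards. The minimal shape of the pattern, with invented names (my_wq, my_wake_work):

#include <linux/irq_work.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(my_wq);

static void my_wake_fn(struct irq_work *work)
{
    wake_up_all(&my_wq);    /* deferred to a context where this is safe */
}
static DEFINE_IRQ_WORK(my_wake_work, my_wake_fn);

/* Hot path, callable from any context: */
static void my_event_hit(void)
{
    if (wq_has_sleeper(&my_wq))     /* skip the irq_work when nobody waits */
        irq_work_queue(&my_wake_work);
}
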
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 31f5ad322fab0a..4ebafc655223a8 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -5314,6 +5314,8 @@ static void event_hist_trigger(struct event_trigger_data *data,
+
+ if (resolve_var_refs(hist_data, key, var_ref_vals, true))
+ hist_trigger_actions(hist_data, elt, buffer, rec, rbe, key, var_ref_vals);
++
++ hist_poll_wakeup();
+ }
+
+ static void hist_trigger_stacktrace_print(struct seq_file *m,
+@@ -5593,49 +5595,137 @@ static void hist_trigger_show(struct seq_file *m,
+ n_entries, (u64)atomic64_read(&hist_data->map->drops));
+ }
+
++struct hist_file_data {
++ struct file *file;
++ u64 last_read;
++ u64 last_act;
++};
++
++static u64 get_hist_hit_count(struct trace_event_file *event_file)
++{
++ struct hist_trigger_data *hist_data;
++ struct event_trigger_data *data;
++ u64 ret = 0;
++
++ list_for_each_entry(data, &event_file->triggers, list) {
++ if (data->cmd_ops->trigger_type == ETT_EVENT_HIST) {
++ hist_data = data->private_data;
++ ret += atomic64_read(&hist_data->map->hits);
++ }
++ }
++ return ret;
++}
++
+ static int hist_show(struct seq_file *m, void *v)
+ {
++ struct hist_file_data *hist_file = m->private;
+ struct event_trigger_data *data;
+ struct trace_event_file *event_file;
+- int n = 0, ret = 0;
++ int n = 0;
+
+- mutex_lock(&event_mutex);
++ guard(mutex)(&event_mutex);
+
+- event_file = event_file_file(m->private);
+- if (unlikely(!event_file)) {
+- ret = -ENODEV;
+- goto out_unlock;
+- }
++ event_file = event_file_file(hist_file->file);
++ if (unlikely(!event_file))
++ return -ENODEV;
+
+ list_for_each_entry(data, &event_file->triggers, list) {
+ if (data->cmd_ops->trigger_type == ETT_EVENT_HIST)
+ hist_trigger_show(m, data, n++);
+ }
++ hist_file->last_read = get_hist_hit_count(event_file);
++ /*
++ * Update last_act too so that poll()/POLLPRI can wait for the next
++ * event after any syscall on the hist file.
++ */
++ hist_file->last_act = hist_file->last_read;
+
+- out_unlock:
+- mutex_unlock(&event_mutex);
++ return 0;
++}
++
++static __poll_t event_hist_poll(struct file *file, struct poll_table_struct *wait)
++{
++ struct trace_event_file *event_file;
++ struct seq_file *m = file->private_data;
++ struct hist_file_data *hist_file = m->private;
++ __poll_t ret = 0;
++ u64 cnt;
++
++ guard(mutex)(&event_mutex);
++
++ event_file = event_file_data(file);
++ if (!event_file)
++ return EPOLLERR;
++
++ hist_poll_wait(file, wait);
++
++ cnt = get_hist_hit_count(event_file);
++ if (hist_file->last_read != cnt)
++ ret |= EPOLLIN | EPOLLRDNORM;
++ if (hist_file->last_act != cnt) {
++ hist_file->last_act = cnt;
++ ret |= EPOLLPRI;
++ }
+
+ return ret;
+ }
+
++static int event_hist_release(struct inode *inode, struct file *file)
++{
++ struct seq_file *m = file->private_data;
++ struct hist_file_data *hist_file = m->private;
++
++ kfree(hist_file);
++ return tracing_single_release_file_tr(inode, file);
++}
++
+ static int event_hist_open(struct inode *inode, struct file *file)
+ {
++ struct trace_event_file *event_file;
++ struct hist_file_data *hist_file;
+ int ret;
+
+ ret = tracing_open_file_tr(inode, file);
+ if (ret)
+ return ret;
+
++ guard(mutex)(&event_mutex);
++
++ event_file = event_file_data(file);
++ if (!event_file) {
++ ret = -ENODEV;
++ goto err;
++ }
++
++ hist_file = kzalloc(sizeof(*hist_file), GFP_KERNEL);
++ if (!hist_file) {
++ ret = -ENOMEM;
++ goto err;
++ }
++
++ hist_file->file = file;
++ hist_file->last_act = get_hist_hit_count(event_file);
++
+ /* Clear private_data to avoid warning in single_open() */
+ file->private_data = NULL;
+- return single_open(file, hist_show, file);
++ ret = single_open(file, hist_show, hist_file);
++ if (ret) {
++ kfree(hist_file);
++ goto err;
++ }
++
++ return 0;
++err:
++ tracing_release_file_tr(inode, file);
++ return ret;
+ }
+
+ const struct file_operations event_hist_fops = {
+ .open = event_hist_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+- .release = tracing_single_release_file_tr,
++ .release = event_hist_release,
++ .poll = event_hist_poll,
+ };
+
+ #ifdef CONFIG_HIST_TRIGGERS_DEBUG
+@@ -5876,25 +5966,19 @@ static int hist_debug_show(struct seq_file *m, void *v)
+ {
+ struct event_trigger_data *data;
+ struct trace_event_file *event_file;
+- int n = 0, ret = 0;
++ int n = 0;
+
+- mutex_lock(&event_mutex);
++ guard(mutex)(&event_mutex);
+
+ event_file = event_file_file(m->private);
+- if (unlikely(!event_file)) {
+- ret = -ENODEV;
+- goto out_unlock;
+- }
++ if (unlikely(!event_file))
++ return -ENODEV;
+
+ list_for_each_entry(data, &event_file->triggers, list) {
+ if (data->cmd_ops->trigger_type == ETT_EVENT_HIST)
+ hist_trigger_debug_show(m, data, n++);
+ }
+-
+- out_unlock:
+- mutex_unlock(&event_mutex);
+-
+- return ret;
++ return 0;
+ }
+
+ static int event_hist_debug_open(struct inode *inode, struct file *file)
+@@ -5907,7 +5991,10 @@ static int event_hist_debug_open(struct inode *inode, struct file *file)
+
+ /* Clear private_data to avoid warning in single_open() */
+ file->private_data = NULL;
+- return single_open(file, hist_debug_show, file);
++ ret = single_open(file, hist_debug_show, file);
++ if (ret)
++ tracing_release_file_tr(inode, file);
++ return ret;
+ }
+
+ const struct file_operations event_hist_debug_fops = {
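
With event_hist_poll() wired up, user space can block until a histogram accumulates new hits instead of re-reading the file in a loop: POLLIN reports hits not yet consumed by a read of the file, while POLLPRI fires for hits since the previous poll or read. A hedged user-space sketch; the event path is just an example and assumes tracefs is mounted at /sys/kernel/tracing with a hist trigger already set on the event:

#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    struct pollfd pfd = { .events = POLLPRI };

    pfd.fd = open("/sys/kernel/tracing/events/sched/sched_switch/hist",
                  O_RDONLY);
    if (pfd.fd < 0) {
        perror("open");
        return 1;
    }

    /* Blocks until the histogram records a hit after this open(). */
    if (poll(&pfd, 1, -1) > 0 && (pfd.revents & POLLPRI))
        printf("histogram updated\n");

    close(pfd.fd);
    return 0;
}
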
+diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
+index c82b401a294d96..24c9962c40db1a 100644
+--- a/kernel/trace/trace_events_synth.c
++++ b/kernel/trace/trace_events_synth.c
+@@ -312,7 +312,7 @@ static const char *synth_field_fmt(char *type)
+ else if (strcmp(type, "gfp_t") == 0)
+ fmt = "%x";
+ else if (synth_field_is_string(type))
+- fmt = "%.*s";
++ fmt = "%s";
+ else if (synth_field_is_stack(type))
+ fmt = "%s";
+
+@@ -859,6 +859,38 @@ static struct trace_event_fields synth_event_fields_array[] = {
+ {}
+ };
+
++static int synth_event_reg(struct trace_event_call *call,
++ enum trace_reg type, void *data)
++{
++ struct synth_event *event = container_of(call, struct synth_event, call);
++
++ switch (type) {
++#ifdef CONFIG_PERF_EVENTS
++ case TRACE_REG_PERF_REGISTER:
++#endif
++ case TRACE_REG_REGISTER:
++ if (!try_module_get(event->mod))
++ return -EBUSY;
++ break;
++ default:
++ break;
++ }
++
++ int ret = trace_event_reg(call, type, data);
++
++ switch (type) {
++#ifdef CONFIG_PERF_EVENTS
++ case TRACE_REG_PERF_UNREGISTER:
++#endif
++ case TRACE_REG_UNREGISTER:
++ module_put(event->mod);
++ break;
++ default:
++ break;
++ }
++ return ret;
++}
++
+ static int register_synth_event(struct synth_event *event)
+ {
+ struct trace_event_call *call = &event->call;
+@@ -888,7 +920,7 @@ static int register_synth_event(struct synth_event *event)
+ goto out;
+ }
+ call->flags = TRACE_EVENT_FL_TRACEPOINT;
+- call->class->reg = trace_event_reg;
++ call->class->reg = synth_event_reg;
+ call->class->probe = trace_event_raw_event_synth;
+ call->data = event;
+ call->tp = event->tp;
+diff --git a/kernel/trace/trace_functions_graph.c b/kernel/trace/trace_functions_graph.c
+index ebb61ddca749d8..655246a9bec348 100644
+--- a/kernel/trace/trace_functions_graph.c
++++ b/kernel/trace/trace_functions_graph.c
+@@ -1353,6 +1353,7 @@ void graph_trace_close(struct trace_iterator *iter)
+ if (data) {
+ free_percpu(data->cpu_data);
+ kfree(data);
++ iter->private = NULL;
+ }
+ }
+
+diff --git a/kernel/trace/trace_irqsoff.c b/kernel/trace/trace_irqsoff.c
+index fce064e205706f..10ea6d0a35e85d 100644
+--- a/kernel/trace/trace_irqsoff.c
++++ b/kernel/trace/trace_irqsoff.c
+@@ -233,8 +233,6 @@ static void irqsoff_trace_open(struct trace_iterator *iter)
+ {
+ if (is_graph(iter->tr))
+ graph_trace_open(iter);
+- else
+- iter->private = NULL;
+ }
+
+ static void irqsoff_trace_close(struct trace_iterator *iter)
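
graph_trace_close() now clears iter->private at the point of free (see the trace_functions_graph.c hunk above), which is why this tracer and the wakeup tracer below can drop their own "iter->private = NULL" lines. The underlying free-and-clear idiom in miniature (struct holder is illustrative):

#include <linux/slab.h>

struct holder { void *private; };

static void close_private(struct holder *h)
{
    kfree(h->private);      /* kfree(NULL) is a harmless no-op */
    h->private = NULL;      /* a second close cannot double-free */
}
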
+diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
+index 032fdeba37d350..a94790f5cda727 100644
+--- a/kernel/trace/trace_osnoise.c
++++ b/kernel/trace/trace_osnoise.c
+@@ -2038,7 +2038,6 @@ static int start_kthread(unsigned int cpu)
+
+ if (IS_ERR(kthread)) {
+ pr_err(BANNER "could not start sampling thread\n");
+- stop_per_cpu_kthreads();
+ return -ENOMEM;
+ }
+
+diff --git a/kernel/trace/trace_sched_wakeup.c b/kernel/trace/trace_sched_wakeup.c
+index ae2ace5e515af5..039382576bc165 100644
+--- a/kernel/trace/trace_sched_wakeup.c
++++ b/kernel/trace/trace_sched_wakeup.c
+@@ -170,8 +170,6 @@ static void wakeup_trace_open(struct trace_iterator *iter)
+ {
+ if (is_graph(iter->tr))
+ graph_trace_open(iter);
+- else
+- iter->private = NULL;
+ }
+
+ static void wakeup_trace_close(struct trace_iterator *iter)
+diff --git a/kernel/watch_queue.c b/kernel/watch_queue.c
+index d36242fd493644..e55f9810b91add 100644
+--- a/kernel/watch_queue.c
++++ b/kernel/watch_queue.c
+@@ -269,6 +269,15 @@ long watch_queue_set_size(struct pipe_inode_info *pipe, unsigned int nr_notes)
+ if (ret < 0)
+ goto error;
+
++ /*
++ * pipe_resize_ring() does not update nr_accounted for watch_queue
++ * pipes, because the above vastly overprovisions. Set nr_accounted
++ * and max_usage on this pipe to the number that was actually charged
++ * to the user above via account_pipe_buffers().
++ */
++ pipe->max_usage = nr_pages;
++ pipe->nr_accounted = nr_pages;
++
+ ret = -ENOMEM;
+ pages = kcalloc(nr_pages, sizeof(struct page *), GFP_KERNEL);
+ if (!pages)
+diff --git a/kernel/watchdog.c b/kernel/watchdog.c
+index 262691ba62b7ad..4dc72540c3b0fb 100644
+--- a/kernel/watchdog.c
++++ b/kernel/watchdog.c
+@@ -347,8 +347,6 @@ static int __init watchdog_thresh_setup(char *str)
+ }
+ __setup("watchdog_thresh=", watchdog_thresh_setup);
+
+-static void __lockup_detector_cleanup(void);
+-
+ #ifdef CONFIG_SOFTLOCKUP_DETECTOR_INTR_STORM
+ enum stats_per_group {
+ STATS_SYSTEM,
+@@ -878,11 +876,6 @@ static void __lockup_detector_reconfigure(void)
+
+ watchdog_hardlockup_start();
+ cpus_read_unlock();
+- /*
+- * Must be called outside the cpus locked section to prevent
+- * recursive locking in the perf code.
+- */
+- __lockup_detector_cleanup();
+ }
+
+ void lockup_detector_reconfigure(void)
+@@ -932,24 +925,6 @@ static inline void lockup_detector_setup(void)
+ }
+ #endif /* !CONFIG_SOFTLOCKUP_DETECTOR */
+
+-static void __lockup_detector_cleanup(void)
+-{
+- lockdep_assert_held(&watchdog_mutex);
+- hardlockup_detector_perf_cleanup();
+-}
+-
+-/**
+- * lockup_detector_cleanup - Cleanup after cpu hotplug or sysctl changes
+- *
+- * Caller must not hold the cpu hotplug rwsem.
+- */
+-void lockup_detector_cleanup(void)
+-{
+- mutex_lock(&watchdog_mutex);
+- __lockup_detector_cleanup();
+- mutex_unlock(&watchdog_mutex);
+-}
+-
+ /**
+ * lockup_detector_soft_poweroff - Interface to stop lockup detector(s)
+ *
+diff --git a/kernel/watchdog_perf.c b/kernel/watchdog_perf.c
+index 59c1d86a73a248..2fdb96eaf49336 100644
+--- a/kernel/watchdog_perf.c
++++ b/kernel/watchdog_perf.c
+@@ -21,8 +21,6 @@
+ #include <linux/perf_event.h>
+
+ static DEFINE_PER_CPU(struct perf_event *, watchdog_ev);
+-static DEFINE_PER_CPU(struct perf_event *, dead_event);
+-static struct cpumask dead_events_mask;
+
+ static atomic_t watchdog_cpus = ATOMIC_INIT(0);
+
+@@ -181,36 +179,12 @@ void watchdog_hardlockup_disable(unsigned int cpu)
+
+ if (event) {
+ perf_event_disable(event);
++ perf_event_release_kernel(event);
+ this_cpu_write(watchdog_ev, NULL);
+- this_cpu_write(dead_event, event);
+- cpumask_set_cpu(smp_processor_id(), &dead_events_mask);
+ atomic_dec(&watchdog_cpus);
+ }
+ }
+
+-/**
+- * hardlockup_detector_perf_cleanup - Cleanup disabled events and destroy them
+- *
+- * Called from lockup_detector_cleanup(). Serialized by the caller.
+- */
+-void hardlockup_detector_perf_cleanup(void)
+-{
+- int cpu;
+-
+- for_each_cpu(cpu, &dead_events_mask) {
+- struct perf_event *event = per_cpu(dead_event, cpu);
+-
+- /*
+- * Required because for_each_cpu() reports unconditionally
+- * CPU0 as set on UP kernels. Sigh.
+- */
+- if (event)
+- perf_event_release_kernel(event);
+- per_cpu(dead_event, cpu) = NULL;
+- }
+- cpumask_clear(&dead_events_mask);
+-}
+-
+ /**
+ * hardlockup_detector_perf_stop - Globally stop watchdog events
+ *
+diff --git a/lib/842/842_compress.c b/lib/842/842_compress.c
+index c02baa4168e168..055356508d97c5 100644
+--- a/lib/842/842_compress.c
++++ b/lib/842/842_compress.c
+@@ -532,6 +532,8 @@ int sw842_compress(const u8 *in, unsigned int ilen,
+ }
+ if (repeat_count) {
+ ret = add_repeat_template(p, repeat_count);
++ if (ret)
++ return ret;
+ repeat_count = 0;
+ if (next == last) /* reached max repeat bits */
+ goto repeat;
+diff --git a/lib/stackinit_kunit.c b/lib/stackinit_kunit.c
+index c40818ec9c1801..49d32e43d06ef8 100644
+--- a/lib/stackinit_kunit.c
++++ b/lib/stackinit_kunit.c
+@@ -146,6 +146,15 @@ static bool stackinit_range_contains(char *haystack_start, size_t haystack_size,
+ #define INIT_STRUCT_assigned_copy(var_type) \
+ ; var = *(arg)
+
++/*
++ * The "did we actually fill the stack?" check value needs
++ * to be neither 0 nor any of the "pattern" bytes. The
++ * pattern bytes are compiler, architecture, and type based,
++ * so we have to pick a value that never appears for those
++ * combinations. Use 0x99 which is not 0xFF, 0xFE, nor 0xAA.
++ */
++#define FILL_BYTE 0x99
++
+ /*
+ * @name: unique string name for the test
+ * @var_type: type to be tested for zeroing initialization
+@@ -168,12 +177,12 @@ static noinline void test_ ## name (struct kunit *test) \
+ ZERO_CLONE_ ## which(zero); \
+ /* Clear entire check buffer for 0xFF overlap test. */ \
+ memset(check_buf, 0x00, sizeof(check_buf)); \
+- /* Fill stack with 0xFF. */ \
++ /* Fill stack with FILL_BYTE. */ \
+ ignored = leaf_ ##name((unsigned long)&ignored, 1, \
+ FETCH_ARG_ ## which(zero)); \
+- /* Verify all bytes overwritten with 0xFF. */ \
++ /* Verify all bytes overwritten with FILL_BYTE. */ \
+ for (sum = 0, i = 0; i < target_size; i++) \
+- sum += (check_buf[i] != 0xFF); \
++ sum += (check_buf[i] != FILL_BYTE); \
+ /* Clear entire check buffer for later bit tests. */ \
+ memset(check_buf, 0x00, sizeof(check_buf)); \
+ /* Extract stack-defined variable contents. */ \
+@@ -184,7 +193,8 @@ static noinline void test_ ## name (struct kunit *test) \
+ * possible between the two leaf function calls. \
+ */ \
+ KUNIT_ASSERT_EQ_MSG(test, sum, 0, \
+- "leaf fill was not 0xFF!?\n"); \
++ "leaf fill was not 0x%02X!?\n", \
++ FILL_BYTE); \
+ \
+ /* Validate that compiler lined up fill and target. */ \
+ KUNIT_ASSERT_TRUE_MSG(test, \
+@@ -196,9 +206,9 @@ static noinline void test_ ## name (struct kunit *test) \
+ (int)((ssize_t)(uintptr_t)fill_start - \
+ (ssize_t)(uintptr_t)target_start)); \
+ \
+- /* Look for any bytes still 0xFF in check region. */ \
++ /* Validate check region has no FILL_BYTE bytes. */ \
+ for (sum = 0, i = 0; i < target_size; i++) \
+- sum += (check_buf[i] == 0xFF); \
++ sum += (check_buf[i] == FILL_BYTE); \
+ \
+ if (sum != 0 && xfail) \
+ kunit_skip(test, \
+@@ -233,12 +243,12 @@ static noinline int leaf_ ## name(unsigned long sp, bool fill, \
+ * stack frame of SOME kind... \
+ */ \
+ memset(buf, (char)(sp & 0xff), sizeof(buf)); \
+- /* Fill variable with 0xFF. */ \
++ /* Fill variable with FILL_BYTE. */ \
+ if (fill) { \
+ fill_start = &var; \
+ fill_size = sizeof(var); \
+ memset(fill_start, \
+- (char)((sp & 0xff) | forced_mask), \
++ FILL_BYTE & forced_mask, \
+ fill_size); \
+ } \
+ \
+@@ -380,7 +390,7 @@ static int noinline __leaf_switch_none(int path, bool fill)
+ fill_start = &var;
+ fill_size = sizeof(var);
+
+- memset(fill_start, forced_mask | 0x55, fill_size);
++ memset(fill_start, (forced_mask | 0x55) & FILL_BYTE, fill_size);
+ }
+ memcpy(check_buf, target_start, target_size);
+ break;
+@@ -391,7 +401,7 @@ static int noinline __leaf_switch_none(int path, bool fill)
+ fill_start = &var;
+ fill_size = sizeof(var);
+
+- memset(fill_start, forced_mask | 0xaa, fill_size);
++ memset(fill_start, (forced_mask | 0xaa) & FILL_BYTE, fill_size);
+ }
+ memcpy(check_buf, target_start, target_size);
+ break;
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index c5e2ec9303c5d3..a69e71a1ca55e6 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -2255,7 +2255,7 @@ int __init no_hash_pointers_enable(char *str)
+ early_param("no_hash_pointers", no_hash_pointers_enable);
+
+ /* Used for Rust formatting ('%pA'). */
+-char *rust_fmt_argument(char *buf, char *end, void *ptr);
++char *rust_fmt_argument(char *buf, char *end, const void *ptr);
+
+ /*
+ * Show a '%p' thing. A kernel extension is that the '%p' is followed
+diff --git a/mm/gup.c b/mm/gup.c
+index 44c536904a83bb..d27e7c9e2596ce 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -1283,6 +1283,9 @@ static int check_vma_flags(struct vm_area_struct *vma, unsigned long gup_flags)
+ if ((gup_flags & FOLL_LONGTERM) && vma_is_fsdax(vma))
+ return -EOPNOTSUPP;
+
++ if ((gup_flags & FOLL_SPLIT_PMD) && is_vm_hugetlb_page(vma))
++ return -EOPNOTSUPP;
++
+ if (vma_is_secretmem(vma))
+ return -EFAULT;
+
+diff --git a/mm/memory.c b/mm/memory.c
+index 525f96ad65b8d7..99dceaf6a10579 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -1356,12 +1356,12 @@ int
+ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ {
+ pgd_t *src_pgd, *dst_pgd;
+- unsigned long next;
+ unsigned long addr = src_vma->vm_start;
+ unsigned long end = src_vma->vm_end;
+ struct mm_struct *dst_mm = dst_vma->vm_mm;
+ struct mm_struct *src_mm = src_vma->vm_mm;
+ struct mmu_notifier_range range;
++ unsigned long next, pfn;
+ bool is_cow;
+ int ret;
+
+@@ -1372,11 +1372,7 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ return copy_hugetlb_page_range(dst_mm, src_mm, dst_vma, src_vma);
+
+ if (unlikely(src_vma->vm_flags & VM_PFNMAP)) {
+- /*
+- * We do not free on error cases below as remove_vma
+- * gets called on error from higher level routine
+- */
+- ret = track_pfn_copy(src_vma);
++ ret = track_pfn_copy(dst_vma, src_vma, &pfn);
+ if (ret)
+ return ret;
+ }
+@@ -1413,7 +1409,6 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ continue;
+ if (unlikely(copy_p4d_range(dst_vma, src_vma, dst_pgd, src_pgd,
+ addr, next))) {
+- untrack_pfn_clear(dst_vma);
+ ret = -ENOMEM;
+ break;
+ }
+@@ -1423,6 +1418,8 @@ copy_page_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma)
+ raw_write_seqcount_end(&src_mm->write_protect_seq);
+ mmu_notifier_invalidate_range_end(&range);
+ }
++ if (ret && unlikely(src_vma->vm_flags & VM_PFNMAP))
++ untrack_pfn_copy(dst_vma, pfn);
+ return ret;
+ }
+
+@@ -6718,10 +6715,8 @@ void __might_fault(const char *file, int line)
+ if (pagefault_disabled())
+ return;
+ __might_sleep(file, line);
+-#if defined(CONFIG_DEBUG_ATOMIC_SLEEP)
+ if (current->mm)
+ might_lock_read(¤t->mm->mmap_lock);
+-#endif
+ }
+ EXPORT_SYMBOL(__might_fault);
+ #endif
+diff --git a/mm/zswap.c b/mm/zswap.c
+index 7fefb2eb3fcd80..00d51d01375746 100644
+--- a/mm/zswap.c
++++ b/mm/zswap.c
+@@ -876,18 +876,32 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
+ {
+ struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
+ struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
++ struct acomp_req *req;
++ struct crypto_acomp *acomp;
++ u8 *buffer;
++
++ if (IS_ERR_OR_NULL(acomp_ctx))
++ return 0;
+
+ mutex_lock(&acomp_ctx->mutex);
+- if (!IS_ERR_OR_NULL(acomp_ctx)) {
+- if (!IS_ERR_OR_NULL(acomp_ctx->req))
+- acomp_request_free(acomp_ctx->req);
+- acomp_ctx->req = NULL;
+- if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
+- crypto_free_acomp(acomp_ctx->acomp);
+- kfree(acomp_ctx->buffer);
+- }
++ req = acomp_ctx->req;
++ acomp = acomp_ctx->acomp;
++ buffer = acomp_ctx->buffer;
++ acomp_ctx->req = NULL;
++ acomp_ctx->acomp = NULL;
++ acomp_ctx->buffer = NULL;
+ mutex_unlock(&acomp_ctx->mutex);
+
++ /*
++ * Do the actual freeing after releasing the mutex to avoid subtle
++ * locking dependencies causing deadlocks.
++ */
++ if (!IS_ERR_OR_NULL(req))
++ acomp_request_free(req);
++ if (!IS_ERR_OR_NULL(acomp))
++ crypto_free_acomp(acomp);
++ kfree(buffer);
++
+ return 0;
+ }
+
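
The zswap rework follows the rule its new comment states: detach the resources while holding the mutex, then free them after dropping it, so the free paths (which may themselves take locks or sleep) cannot create lock-ordering cycles. Reduced to its skeleton:

static void teardown_ctx(struct crypto_acomp_ctx *ctx)
{
    struct acomp_req *req;
    struct crypto_acomp *acomp;
    u8 *buffer;

    /* Detach everything while holding the mutex... */
    mutex_lock(&ctx->mutex);
    req = ctx->req;         ctx->req = NULL;
    acomp = ctx->acomp;     ctx->acomp = NULL;
    buffer = ctx->buffer;   ctx->buffer = NULL;
    mutex_unlock(&ctx->mutex);

    /* ...and free with no locks held. */
    if (!IS_ERR_OR_NULL(req))
        acomp_request_free(req);
    if (!IS_ERR_OR_NULL(acomp))
        crypto_free_acomp(acomp);
    kfree(buffer);
}
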
+diff --git a/net/can/af_can.c b/net/can/af_can.c
+index 01f3fbb3b67dc6..65230e81fa08c9 100644
+--- a/net/can/af_can.c
++++ b/net/can/af_can.c
+@@ -287,8 +287,8 @@ int can_send(struct sk_buff *skb, int loop)
+ netif_rx(newskb);
+
+ /* update statistics */
+- pkg_stats->tx_frames++;
+- pkg_stats->tx_frames_delta++;
++ atomic_long_inc(&pkg_stats->tx_frames);
++ atomic_long_inc(&pkg_stats->tx_frames_delta);
+
+ return 0;
+
+@@ -647,8 +647,8 @@ static void can_receive(struct sk_buff *skb, struct net_device *dev)
+ int matches;
+
+ /* update statistics */
+- pkg_stats->rx_frames++;
+- pkg_stats->rx_frames_delta++;
++ atomic_long_inc(&pkg_stats->rx_frames);
++ atomic_long_inc(&pkg_stats->rx_frames_delta);
+
+ /* create non-zero unique skb identifier together with *skb */
+ while (!(can_skb_prv(skb)->skbcnt))
+@@ -669,8 +669,8 @@ static void can_receive(struct sk_buff *skb, struct net_device *dev)
+ consume_skb(skb);
+
+ if (matches > 0) {
+- pkg_stats->matches++;
+- pkg_stats->matches_delta++;
++ atomic_long_inc(&pkg_stats->matches);
++ atomic_long_inc(&pkg_stats->matches_delta);
+ }
+ }
+
+diff --git a/net/can/af_can.h b/net/can/af_can.h
+index 7c2d9161e22457..22f3352c77fece 100644
+--- a/net/can/af_can.h
++++ b/net/can/af_can.h
+@@ -66,9 +66,9 @@ struct receiver {
+ struct can_pkg_stats {
+ unsigned long jiffies_init;
+
+- unsigned long rx_frames;
+- unsigned long tx_frames;
+- unsigned long matches;
++ atomic_long_t rx_frames;
++ atomic_long_t tx_frames;
++ atomic_long_t matches;
+
+ unsigned long total_rx_rate;
+ unsigned long total_tx_rate;
+@@ -82,9 +82,9 @@ struct can_pkg_stats {
+ unsigned long max_tx_rate;
+ unsigned long max_rx_match_ratio;
+
+- unsigned long rx_frames_delta;
+- unsigned long tx_frames_delta;
+- unsigned long matches_delta;
++ atomic_long_t rx_frames_delta;
++ atomic_long_t tx_frames_delta;
++ atomic_long_t matches_delta;
+ };
+
+ /* persistent statistics */
+diff --git a/net/can/proc.c b/net/can/proc.c
+index bbce97825f13fb..25fdf060e30d0d 100644
+--- a/net/can/proc.c
++++ b/net/can/proc.c
+@@ -118,6 +118,13 @@ void can_stat_update(struct timer_list *t)
+ struct can_pkg_stats *pkg_stats = net->can.pkg_stats;
+ unsigned long j = jiffies; /* snapshot */
+
++ long rx_frames = atomic_long_read(&pkg_stats->rx_frames);
++ long tx_frames = atomic_long_read(&pkg_stats->tx_frames);
++ long matches = atomic_long_read(&pkg_stats->matches);
++ long rx_frames_delta = atomic_long_read(&pkg_stats->rx_frames_delta);
++ long tx_frames_delta = atomic_long_read(&pkg_stats->tx_frames_delta);
++ long matches_delta = atomic_long_read(&pkg_stats->matches_delta);
++
+ /* restart counting in timer context on user request */
+ if (user_reset)
+ can_init_stats(net);
+@@ -127,35 +134,33 @@ void can_stat_update(struct timer_list *t)
+ can_init_stats(net);
+
+ /* prevent overflow in calc_rate() */
+- if (pkg_stats->rx_frames > (ULONG_MAX / HZ))
++ if (rx_frames > (LONG_MAX / HZ))
+ can_init_stats(net);
+
+ /* prevent overflow in calc_rate() */
+- if (pkg_stats->tx_frames > (ULONG_MAX / HZ))
++ if (tx_frames > (LONG_MAX / HZ))
+ can_init_stats(net);
+
+ /* matches overflow - very improbable */
+- if (pkg_stats->matches > (ULONG_MAX / 100))
++ if (matches > (LONG_MAX / 100))
+ can_init_stats(net);
+
+ /* calc total values */
+- if (pkg_stats->rx_frames)
+- pkg_stats->total_rx_match_ratio = (pkg_stats->matches * 100) /
+- pkg_stats->rx_frames;
++ if (rx_frames)
++ pkg_stats->total_rx_match_ratio = (matches * 100) / rx_frames;
+
+ pkg_stats->total_tx_rate = calc_rate(pkg_stats->jiffies_init, j,
+- pkg_stats->tx_frames);
++ tx_frames);
+ pkg_stats->total_rx_rate = calc_rate(pkg_stats->jiffies_init, j,
+- pkg_stats->rx_frames);
++ rx_frames);
+
+ /* calc current values */
+- if (pkg_stats->rx_frames_delta)
++ if (rx_frames_delta)
+ pkg_stats->current_rx_match_ratio =
+- (pkg_stats->matches_delta * 100) /
+- pkg_stats->rx_frames_delta;
++ (matches_delta * 100) / rx_frames_delta;
+
+- pkg_stats->current_tx_rate = calc_rate(0, HZ, pkg_stats->tx_frames_delta);
+- pkg_stats->current_rx_rate = calc_rate(0, HZ, pkg_stats->rx_frames_delta);
++ pkg_stats->current_tx_rate = calc_rate(0, HZ, tx_frames_delta);
++ pkg_stats->current_rx_rate = calc_rate(0, HZ, rx_frames_delta);
+
+ /* check / update maximum values */
+ if (pkg_stats->max_tx_rate < pkg_stats->current_tx_rate)
+@@ -168,9 +173,9 @@ void can_stat_update(struct timer_list *t)
+ pkg_stats->max_rx_match_ratio = pkg_stats->current_rx_match_ratio;
+
+ /* clear values for 'current rate' calculation */
+- pkg_stats->tx_frames_delta = 0;
+- pkg_stats->rx_frames_delta = 0;
+- pkg_stats->matches_delta = 0;
++ atomic_long_set(&pkg_stats->tx_frames_delta, 0);
++ atomic_long_set(&pkg_stats->rx_frames_delta, 0);
++ atomic_long_set(&pkg_stats->matches_delta, 0);
+
+ /* restart timer (one second) */
+ mod_timer(&net->can.stattimer, round_jiffies(jiffies + HZ));
+@@ -214,9 +219,12 @@ static int can_stats_proc_show(struct seq_file *m, void *v)
+ struct can_rcv_lists_stats *rcv_lists_stats = net->can.rcv_lists_stats;
+
+ seq_putc(m, '\n');
+- seq_printf(m, " %8ld transmitted frames (TXF)\n", pkg_stats->tx_frames);
+- seq_printf(m, " %8ld received frames (RXF)\n", pkg_stats->rx_frames);
+- seq_printf(m, " %8ld matched frames (RXMF)\n", pkg_stats->matches);
++ seq_printf(m, " %8ld transmitted frames (TXF)\n",
++ atomic_long_read(&pkg_stats->tx_frames));
++ seq_printf(m, " %8ld received frames (RXF)\n",
++ atomic_long_read(&pkg_stats->rx_frames));
++ seq_printf(m, " %8ld matched frames (RXMF)\n",
++ atomic_long_read(&pkg_stats->matches));
+
+ seq_putc(m, '\n');
+
+diff --git a/net/core/devmem.c b/net/core/devmem.c
+index 11b91c12ee1135..17f8a83a5ee74a 100644
+--- a/net/core/devmem.c
++++ b/net/core/devmem.c
+@@ -108,6 +108,7 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
+ struct netdev_rx_queue *rxq;
+ unsigned long xa_idx;
+ unsigned int rxq_idx;
++ int err;
+
+ if (binding->list.next)
+ list_del(&binding->list);
+@@ -119,7 +120,8 @@ void net_devmem_unbind_dmabuf(struct net_devmem_dmabuf_binding *binding)
+
+ rxq_idx = get_netdev_rx_queue_index(rxq);
+
+- WARN_ON(netdev_rx_queue_restart(binding->dev, rxq_idx));
++ err = netdev_rx_queue_restart(binding->dev, rxq_idx);
++ WARN_ON(err && err != -ENETDOWN);
+ }
+
+ xa_erase(&net_devmem_dmabuf_bindings, binding->id);
+diff --git a/net/core/dst.c b/net/core/dst.c
+index 9552a90d4772dc..6d76b799ce645d 100644
+--- a/net/core/dst.c
++++ b/net/core/dst.c
+@@ -165,6 +165,14 @@ static void dst_count_dec(struct dst_entry *dst)
+ void dst_release(struct dst_entry *dst)
+ {
+ if (dst && rcuref_put(&dst->__rcuref)) {
++#ifdef CONFIG_DST_CACHE
++ if (dst->flags & DST_METADATA) {
++ struct metadata_dst *md_dst = (struct metadata_dst *)dst;
++
++ if (md_dst->type == METADATA_IP_TUNNEL)
++ dst_cache_reset_now(&md_dst->u.tun_info.dst_cache);
++ }
++#endif
+ dst_count_dec(dst);
+ call_rcu_hurry(&dst->rcu_head, dst_destroy_rcu);
+ }
+diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c
+index 2ba5cd965d3fae..4d0ee1c9002aac 100644
+--- a/net/core/rtnetlink.c
++++ b/net/core/rtnetlink.c
+@@ -1005,6 +1005,9 @@ static inline int rtnl_vfinfo_size(const struct net_device *dev,
+ /* IFLA_VF_STATS_TX_DROPPED */
+ nla_total_size_64bit(sizeof(__u64)));
+ }
++ if (dev->netdev_ops->ndo_get_vf_guid)
++ size += num_vfs * 2 *
++ nla_total_size(sizeof(struct ifla_vf_guid));
+ return size;
+ } else
+ return 0;
+diff --git a/net/ipv4/ip_tunnel_core.c b/net/ipv4/ip_tunnel_core.c
+index a3676155be78b9..f65d2f7273813b 100644
+--- a/net/ipv4/ip_tunnel_core.c
++++ b/net/ipv4/ip_tunnel_core.c
+@@ -416,7 +416,7 @@ int skb_tunnel_check_pmtu(struct sk_buff *skb, struct dst_entry *encap_dst,
+
+ skb_dst_update_pmtu_no_confirm(skb, mtu);
+
+- if (!reply || skb->pkt_type == PACKET_HOST)
++ if (!reply)
+ return 0;
+
+ if (skb->protocol == htons(ETH_P_IP))
+@@ -451,7 +451,7 @@ static const struct nla_policy
+ geneve_opt_policy[LWTUNNEL_IP_OPT_GENEVE_MAX + 1] = {
+ [LWTUNNEL_IP_OPT_GENEVE_CLASS] = { .type = NLA_U16 },
+ [LWTUNNEL_IP_OPT_GENEVE_TYPE] = { .type = NLA_U8 },
+- [LWTUNNEL_IP_OPT_GENEVE_DATA] = { .type = NLA_BINARY, .len = 128 },
++ [LWTUNNEL_IP_OPT_GENEVE_DATA] = { .type = NLA_BINARY, .len = 127 },
+ };
+
+ static const struct nla_policy
+diff --git a/net/ipv4/udp.c b/net/ipv4/udp.c
+index 8da74dc63061c0..f4e24fc878fabe 100644
+--- a/net/ipv4/udp.c
++++ b/net/ipv4/udp.c
+@@ -1470,12 +1470,12 @@ static bool udp_skb_has_head_state(struct sk_buff *skb)
+ }
+
+ /* fully reclaim rmem/fwd memory allocated for skb */
+-static void udp_rmem_release(struct sock *sk, int size, int partial,
+- bool rx_queue_lock_held)
++static void udp_rmem_release(struct sock *sk, unsigned int size,
++ int partial, bool rx_queue_lock_held)
+ {
+ struct udp_sock *up = udp_sk(sk);
+ struct sk_buff_head *sk_queue;
+- int amt;
++ unsigned int amt;
+
+ if (likely(partial)) {
+ up->forward_deficit += size;
+@@ -1495,10 +1495,8 @@ static void udp_rmem_release(struct sock *sk, int size, int partial,
+ if (!rx_queue_lock_held)
+ spin_lock(&sk_queue->lock);
+
+-
+- sk_forward_alloc_add(sk, size);
+- amt = (sk->sk_forward_alloc - partial) & ~(PAGE_SIZE - 1);
+- sk_forward_alloc_add(sk, -amt);
++ amt = (size + sk->sk_forward_alloc - partial) & ~(PAGE_SIZE - 1);
++ sk_forward_alloc_add(sk, size - amt);
+
+ if (amt)
+ __sk_mem_reduce_allocated(sk, amt >> PAGE_SHIFT);
+@@ -1570,17 +1568,25 @@ static int udp_rmem_schedule(struct sock *sk, int size)
+ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
+ {
+ struct sk_buff_head *list = &sk->sk_receive_queue;
+- int rmem, err = -ENOMEM;
++ unsigned int rmem, rcvbuf;
+ spinlock_t *busy = NULL;
+- int size, rcvbuf;
++ int size, err = -ENOMEM;
+
+- /* Immediately drop when the receive queue is full.
+- * Always allow at least one packet.
+- */
+ rmem = atomic_read(&sk->sk_rmem_alloc);
+ rcvbuf = READ_ONCE(sk->sk_rcvbuf);
+- if (rmem > rcvbuf)
+- goto drop;
++ size = skb->truesize;
++
++ /* Immediately drop when the receive queue is full.
++ * Cast to unsigned int performs the boundary check for INT_MAX.
++ */
++ if (rmem + size > rcvbuf) {
++ if (rcvbuf > INT_MAX >> 1)
++ goto drop;
++
++ /* Always allow at least one packet for small buffer. */
++ if (rmem > rcvbuf)
++ goto drop;
++ }
+
+ /* Under mem pressure, it might be helpful to help udp_recvmsg()
+ * having linear skbs :
+@@ -1590,10 +1596,10 @@ int __udp_enqueue_schedule_skb(struct sock *sk, struct sk_buff *skb)
+ */
+ if (rmem > (rcvbuf >> 1)) {
+ skb_condense(skb);
+-
++ size = skb->truesize;
+ busy = busylock_acquire(sk);
+ }
+- size = skb->truesize;
++
+ udp_set_dev_scratch(skb);
+
+ atomic_add(size, &sk->sk_rmem_alloc);
+@@ -1680,7 +1686,7 @@ EXPORT_SYMBOL_GPL(skb_consume_udp);
+
+ static struct sk_buff *__first_packet_length(struct sock *sk,
+ struct sk_buff_head *rcvq,
+- int *total)
++ unsigned int *total)
+ {
+ struct sk_buff *skb;
+
+@@ -1713,8 +1719,8 @@ static int first_packet_length(struct sock *sk)
+ {
+ struct sk_buff_head *rcvq = &udp_sk(sk)->reader_queue;
+ struct sk_buff_head *sk_queue = &sk->sk_receive_queue;
++ unsigned int total = 0;
+ struct sk_buff *skb;
+- int total = 0;
+ int res;
+
+ spin_lock_bh(&rcvq->lock);
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index f7c17388ff6aaf..f5d49162f79834 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -5807,6 +5807,27 @@ static void snmp6_fill_stats(u64 *stats, struct inet6_dev *idev, int attrtype,
+ }
+ }
+
++static int inet6_fill_ifla6_stats_attrs(struct sk_buff *skb,
++ struct inet6_dev *idev)
++{
++ struct nlattr *nla;
++
++ nla = nla_reserve(skb, IFLA_INET6_STATS, IPSTATS_MIB_MAX * sizeof(u64));
++ if (!nla)
++ goto nla_put_failure;
++ snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_STATS, nla_len(nla));
++
++ nla = nla_reserve(skb, IFLA_INET6_ICMP6STATS, ICMP6_MIB_MAX * sizeof(u64));
++ if (!nla)
++ goto nla_put_failure;
++ snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_ICMP6STATS, nla_len(nla));
++
++ return 0;
++
++nla_put_failure:
++ return -EMSGSIZE;
++}
++
+ static int inet6_fill_ifla6_attrs(struct sk_buff *skb, struct inet6_dev *idev,
+ u32 ext_filter_mask)
+ {
+@@ -5829,18 +5850,10 @@ static int inet6_fill_ifla6_attrs(struct sk_buff *skb, struct inet6_dev *idev,
+
+ /* XXX - MC not implemented */
+
+- if (ext_filter_mask & RTEXT_FILTER_SKIP_STATS)
+- return 0;
+-
+- nla = nla_reserve(skb, IFLA_INET6_STATS, IPSTATS_MIB_MAX * sizeof(u64));
+- if (!nla)
+- goto nla_put_failure;
+- snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_STATS, nla_len(nla));
+-
+- nla = nla_reserve(skb, IFLA_INET6_ICMP6STATS, ICMP6_MIB_MAX * sizeof(u64));
+- if (!nla)
+- goto nla_put_failure;
+- snmp6_fill_stats(nla_data(nla), idev, IFLA_INET6_ICMP6STATS, nla_len(nla));
++ if (!(ext_filter_mask & RTEXT_FILTER_SKIP_STATS)) {
++ if (inet6_fill_ifla6_stats_attrs(skb, idev) < 0)
++ goto nla_put_failure;
++ }
+
+ nla = nla_reserve(skb, IFLA_INET6_TOKEN, sizeof(struct in6_addr));
+ if (!nla)
+diff --git a/net/ipv6/calipso.c b/net/ipv6/calipso.c
+index dbcea9fee6262d..62618a058b8fad 100644
+--- a/net/ipv6/calipso.c
++++ b/net/ipv6/calipso.c
+@@ -1072,8 +1072,13 @@ static int calipso_sock_getattr(struct sock *sk,
+ struct ipv6_opt_hdr *hop;
+ int opt_len, len, ret_val = -ENOMSG, offset;
+ unsigned char *opt;
+- struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk));
++ struct ipv6_pinfo *pinfo = inet6_sk(sk);
++ struct ipv6_txoptions *txopts;
++
++ if (!pinfo)
++ return -EAFNOSUPPORT;
+
++ txopts = txopt_get(pinfo);
+ if (!txopts || !txopts->hopopt)
+ goto done;
+
+@@ -1125,8 +1130,13 @@ static int calipso_sock_setattr(struct sock *sk,
+ {
+ int ret_val;
+ struct ipv6_opt_hdr *old, *new;
+- struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk));
++ struct ipv6_pinfo *pinfo = inet6_sk(sk);
++ struct ipv6_txoptions *txopts;
++
++ if (!pinfo)
++ return -EAFNOSUPPORT;
+
++ txopts = txopt_get(pinfo);
+ old = NULL;
+ if (txopts)
+ old = txopts->hopopt;
+@@ -1153,8 +1163,13 @@ static int calipso_sock_setattr(struct sock *sk,
+ static void calipso_sock_delattr(struct sock *sk)
+ {
+ struct ipv6_opt_hdr *new_hop;
+- struct ipv6_txoptions *txopts = txopt_get(inet6_sk(sk));
++ struct ipv6_pinfo *pinfo = inet6_sk(sk);
++ struct ipv6_txoptions *txopts;
++
++ if (!pinfo)
++ return;
+
++ txopts = txopt_get(pinfo);
+ if (!txopts || !txopts->hopopt)
+ goto done;
+
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index b393c37d24245c..987492dcb07ca8 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -412,12 +412,37 @@ static bool rt6_check_expired(const struct rt6_info *rt)
+ return false;
+ }
+
++static struct fib6_info *
++rt6_multipath_first_sibling_rcu(const struct fib6_info *rt)
++{
++ struct fib6_info *iter;
++ struct fib6_node *fn;
++
++ fn = rcu_dereference(rt->fib6_node);
++ if (!fn)
++ goto out;
++ iter = rcu_dereference(fn->leaf);
++ if (!iter)
++ goto out;
++
++ while (iter) {
++ if (iter->fib6_metric == rt->fib6_metric &&
++ rt6_qualify_for_ecmp(iter))
++ return iter;
++ iter = rcu_dereference(iter->fib6_next);
++ }
++
++out:
++ return NULL;
++}
++
+ void fib6_select_path(const struct net *net, struct fib6_result *res,
+ struct flowi6 *fl6, int oif, bool have_oif_match,
+ const struct sk_buff *skb, int strict)
+ {
+- struct fib6_info *match = res->f6i;
++ struct fib6_info *first, *match = res->f6i;
+ struct fib6_info *sibling;
++ int hash;
+
+ if (!match->nh && (!match->fib6_nsiblings || have_oif_match))
+ goto out;
+@@ -440,16 +465,25 @@ void fib6_select_path(const struct net *net, struct fib6_result *res,
+ return;
+ }
+
+- if (fl6->mp_hash <= atomic_read(&match->fib6_nh->fib_nh_upper_bound))
++ first = rt6_multipath_first_sibling_rcu(match);
++ if (!first)
+ goto out;
+
+- list_for_each_entry_rcu(sibling, &match->fib6_siblings,
++ hash = fl6->mp_hash;
++ if (hash <= atomic_read(&first->fib6_nh->fib_nh_upper_bound) &&
++ rt6_score_route(first->fib6_nh, first->fib6_flags, oif,
++ strict) >= 0) {
++ match = first;
++ goto out;
++ }
++
++ list_for_each_entry_rcu(sibling, &first->fib6_siblings,
+ fib6_siblings) {
+ const struct fib6_nh *nh = sibling->fib6_nh;
+ int nh_upper_bound;
+
+ nh_upper_bound = atomic_read(&nh->fib_nh_upper_bound);
+- if (fl6->mp_hash > nh_upper_bound)
++ if (hash > nh_upper_bound)
+ continue;
+ if (rt6_score_route(nh, sibling->fib6_flags, oif, strict) < 0)
+ break;
+diff --git a/net/mac80211/driver-ops.c b/net/mac80211/driver-ops.c
+index fe868b52162201..4243f8ee5ab6b6 100644
+--- a/net/mac80211/driver-ops.c
++++ b/net/mac80211/driver-ops.c
+@@ -115,8 +115,14 @@ void drv_remove_interface(struct ieee80211_local *local,
+
+ sdata->flags &= ~IEEE80211_SDATA_IN_DRIVER;
+
+- /* Remove driver debugfs entries */
+- ieee80211_debugfs_recreate_netdev(sdata, sdata->vif.valid_links);
++ /*
++ * Remove driver debugfs entries.
++ * The virtual monitor interface doesn't get a debugfs
++ * entry, so it's exempt here.
++ */
++ if (sdata != rcu_access_pointer(local->monitor_sdata))
++ ieee80211_debugfs_recreate_netdev(sdata,
++ sdata->vif.valid_links);
+
+ trace_drv_remove_interface(local, sdata);
+ local->ops->remove_interface(&local->hw, &sdata->vif);
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index af9055252e6dfa..8bbfa45e1796df 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -1205,16 +1205,17 @@ void ieee80211_del_virtual_monitor(struct ieee80211_local *local)
+ return;
+ }
+
+- RCU_INIT_POINTER(local->monitor_sdata, NULL);
+- mutex_unlock(&local->iflist_mtx);
+-
+- synchronize_net();
+-
++ clear_bit(SDATA_STATE_RUNNING, &sdata->state);
+ ieee80211_link_release_channel(&sdata->deflink);
+
+ if (ieee80211_hw_check(&local->hw, WANT_MONITOR_VIF))
+ drv_remove_interface(local, sdata);
+
++ RCU_INIT_POINTER(local->monitor_sdata, NULL);
++ mutex_unlock(&local->iflist_mtx);
++
++ synchronize_net();
++
+ kfree(sdata);
+ }
+
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 6f3a86040cfcd8..8e1fbdd3bff10b 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -6,7 +6,7 @@
+ * Copyright 2007-2010 Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2015 - 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2024 Intel Corporation
++ * Copyright (C) 2018-2025 Intel Corporation
+ */
+
+ #include <linux/jiffies.h>
+@@ -3323,8 +3323,8 @@ static void ieee80211_process_sa_query_req(struct ieee80211_sub_if_data *sdata,
+ return;
+ }
+
+- if (!ether_addr_equal(mgmt->sa, sdata->deflink.u.mgd.bssid) ||
+- !ether_addr_equal(mgmt->bssid, sdata->deflink.u.mgd.bssid)) {
++ if (!ether_addr_equal(mgmt->sa, sdata->vif.cfg.ap_addr) ||
++ !ether_addr_equal(mgmt->bssid, sdata->vif.cfg.ap_addr)) {
+ /* Not from the current AP or not associated yet. */
+ return;
+ }
+@@ -3340,9 +3340,9 @@ static void ieee80211_process_sa_query_req(struct ieee80211_sub_if_data *sdata,
+
+ skb_reserve(skb, local->hw.extra_tx_headroom);
+ resp = skb_put_zero(skb, 24);
+- memcpy(resp->da, mgmt->sa, ETH_ALEN);
++ memcpy(resp->da, sdata->vif.cfg.ap_addr, ETH_ALEN);
+ memcpy(resp->sa, sdata->vif.addr, ETH_ALEN);
+- memcpy(resp->bssid, sdata->deflink.u.mgd.bssid, ETH_ALEN);
++ memcpy(resp->bssid, sdata->vif.cfg.ap_addr, ETH_ALEN);
+ resp->frame_control = cpu_to_le16(IEEE80211_FTYPE_MGMT |
+ IEEE80211_STYPE_ACTION);
+ skb_put(skb, 1 + sizeof(resp->u.action.u.sa_query));
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index aa22f09e6d145f..49095f19a0f221 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -4,7 +4,7 @@
+ * Copyright 2006-2007 Jiri Benc <jbenc@suse.cz>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+ * Copyright (C) 2015 - 2017 Intel Deutschland GmbH
+- * Copyright (C) 2018-2023 Intel Corporation
++ * Copyright (C) 2018-2024 Intel Corporation
+ */
+
+ #include <linux/module.h>
+@@ -1317,9 +1317,13 @@ static int _sta_info_move_state(struct sta_info *sta,
+ sta->sta.addr, new_state);
+
+ /* notify the driver before the actual changes so it can
+- * fail the transition
++ * fail the transition if the state is increasing.
++ * The driver is required not to fail when the transition
++ * is decreasing the state, so first, do all the preparation
++ * work and only then, notify the driver.
+ */
+- if (test_sta_flag(sta, WLAN_STA_INSERTED)) {
++ if (new_state > sta->sta_state &&
++ test_sta_flag(sta, WLAN_STA_INSERTED)) {
+ int err = drv_sta_state(sta->local, sta->sdata, sta,
+ sta->sta_state, new_state);
+ if (err)
+@@ -1395,6 +1399,16 @@ static int _sta_info_move_state(struct sta_info *sta,
+ break;
+ }
+
++ if (new_state < sta->sta_state &&
++ test_sta_flag(sta, WLAN_STA_INSERTED)) {
++ int err = drv_sta_state(sta->local, sta->sdata, sta,
++ sta->sta_state, new_state);
++
++ WARN_ONCE(err,
++ "Driver is not allowed to fail if the sta_state is transitioning down the list: %d\n",
++ err);
++ }
++
+ sta->sta_state = new_state;
+
+ return 0;
+diff --git a/net/mac80211/util.c b/net/mac80211/util.c
+index 2b6e8e7307ee5e..a98ae563613c04 100644
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -685,7 +685,7 @@ void __ieee80211_flush_queues(struct ieee80211_local *local,
+ struct ieee80211_sub_if_data *sdata,
+ unsigned int queues, bool drop)
+ {
+- if (!local->ops->flush)
++ if (!local->ops->flush && !drop)
+ return;
+
+ /*
+@@ -712,7 +712,8 @@ void __ieee80211_flush_queues(struct ieee80211_local *local,
+ }
+ }
+
+- drv_flush(local, sdata, queues, drop);
++ if (local->ops->flush)
++ drv_flush(local, sdata, queues, drop);
+
+ ieee80211_wake_queues_by_reason(&local->hw, queues,
+ IEEE80211_QUEUE_STOP_REASON_FLUSH,
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index eb3a6f96b094db..bdee187bc5dd45 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -2732,11 +2732,11 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy,
+ err = nft_netdev_register_hooks(ctx->net, &hook.list);
+ if (err < 0)
+ goto err_hooks;
++
++ unregister = true;
+ }
+ }
+
+- unregister = true;
+-
+ if (nla[NFTA_CHAIN_COUNTERS]) {
+ if (!nft_is_base_chain(chain)) {
+ err = -EOPNOTSUPP;
+diff --git a/net/netfilter/nft_set_hash.c b/net/netfilter/nft_set_hash.c
+index b93f046ac7d1e1..4b3452dff2ec08 100644
+--- a/net/netfilter/nft_set_hash.c
++++ b/net/netfilter/nft_set_hash.c
+@@ -309,7 +309,8 @@ static bool nft_rhash_expr_needs_gc_run(const struct nft_set *set,
+
+ nft_setelem_expr_foreach(expr, elem_expr, size) {
+ if (expr->ops->gc &&
+- expr->ops->gc(read_pnet(&set->net), expr))
++ expr->ops->gc(read_pnet(&set->net), expr) &&
++ set->flags & NFT_SET_EVAL)
+ return true;
+ }
+
+diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c
+index 5c6ed68cc6e058..0d99786c322e88 100644
+--- a/net/netfilter/nft_tunnel.c
++++ b/net/netfilter/nft_tunnel.c
+@@ -335,13 +335,13 @@ static int nft_tunnel_obj_erspan_init(const struct nlattr *attr,
+ static const struct nla_policy nft_tunnel_opts_geneve_policy[NFTA_TUNNEL_KEY_GENEVE_MAX + 1] = {
+ [NFTA_TUNNEL_KEY_GENEVE_CLASS] = { .type = NLA_U16 },
+ [NFTA_TUNNEL_KEY_GENEVE_TYPE] = { .type = NLA_U8 },
+- [NFTA_TUNNEL_KEY_GENEVE_DATA] = { .type = NLA_BINARY, .len = 128 },
++ [NFTA_TUNNEL_KEY_GENEVE_DATA] = { .type = NLA_BINARY, .len = 127 },
+ };
+
+ static int nft_tunnel_obj_geneve_init(const struct nlattr *attr,
+ struct nft_tunnel_opts *opts)
+ {
+- struct geneve_opt *opt = (struct geneve_opt *)opts->u.data + opts->len;
++ struct geneve_opt *opt = (struct geneve_opt *)(opts->u.data + opts->len);
+ struct nlattr *tb[NFTA_TUNNEL_KEY_GENEVE_MAX + 1];
+ int err, data_len;
+
+@@ -628,7 +628,7 @@ static int nft_tunnel_opts_dump(struct sk_buff *skb,
+ if (!inner)
+ goto failure;
+ while (opts->len > offset) {
+- opt = (struct geneve_opt *)opts->u.data + offset;
++ opt = (struct geneve_opt *)(opts->u.data + offset);
+ if (nla_put_be16(skb, NFTA_TUNNEL_KEY_GENEVE_CLASS,
+ opt->opt_class) ||
+ nla_put_u8(skb, NFTA_TUNNEL_KEY_GENEVE_TYPE,
+diff --git a/net/openvswitch/actions.c b/net/openvswitch/actions.c
+index 704c858cf2093b..61fea7baae5d5c 100644
+--- a/net/openvswitch/actions.c
++++ b/net/openvswitch/actions.c
+@@ -947,12 +947,6 @@ static void do_output(struct datapath *dp, struct sk_buff *skb, int out_port,
+ pskb_trim(skb, ovs_mac_header_len(key));
+ }
+
+- /* Need to set the pkt_type to involve the routing layer. The
+- * packet movement through the OVS datapath doesn't generally
+- * use routing, but this is needed for tunnel cases.
+- */
+- skb->pkt_type = PACKET_OUTGOING;
+-
+ if (likely(!mru ||
+ (skb->len <= mru + vport->dev->hard_header_len))) {
+ ovs_vport_send(vport, skb, ovs_key_mac_proto(key));
+diff --git a/net/sched/act_tunnel_key.c b/net/sched/act_tunnel_key.c
+index af7c9984594880..e296714803dc02 100644
+--- a/net/sched/act_tunnel_key.c
++++ b/net/sched/act_tunnel_key.c
+@@ -68,7 +68,7 @@ geneve_opt_policy[TCA_TUNNEL_KEY_ENC_OPT_GENEVE_MAX + 1] = {
+ [TCA_TUNNEL_KEY_ENC_OPT_GENEVE_CLASS] = { .type = NLA_U16 },
+ [TCA_TUNNEL_KEY_ENC_OPT_GENEVE_TYPE] = { .type = NLA_U8 },
+ [TCA_TUNNEL_KEY_ENC_OPT_GENEVE_DATA] = { .type = NLA_BINARY,
+- .len = 128 },
++ .len = 127 },
+ };
+
+ static const struct nla_policy
+diff --git a/net/sched/cls_flower.c b/net/sched/cls_flower.c
+index 03505673d5234d..099ff6a3e1f516 100644
+--- a/net/sched/cls_flower.c
++++ b/net/sched/cls_flower.c
+@@ -766,7 +766,7 @@ geneve_opt_policy[TCA_FLOWER_KEY_ENC_OPT_GENEVE_MAX + 1] = {
+ [TCA_FLOWER_KEY_ENC_OPT_GENEVE_CLASS] = { .type = NLA_U16 },
+ [TCA_FLOWER_KEY_ENC_OPT_GENEVE_TYPE] = { .type = NLA_U8 },
+ [TCA_FLOWER_KEY_ENC_OPT_GENEVE_DATA] = { .type = NLA_BINARY,
+- .len = 128 },
++ .len = 127 },
+ };
+
+ static const struct nla_policy
+diff --git a/net/sched/sch_skbprio.c b/net/sched/sch_skbprio.c
+index 20ff7386b74bd8..f485f62ab721ab 100644
+--- a/net/sched/sch_skbprio.c
++++ b/net/sched/sch_skbprio.c
+@@ -123,8 +123,6 @@ static int skbprio_enqueue(struct sk_buff *skb, struct Qdisc *sch,
+ /* Check to update highest and lowest priorities. */
+ if (skb_queue_empty(lp_qdisc)) {
+ if (q->lowest_prio == q->highest_prio) {
+- /* The incoming packet is the only packet in queue. */
+- BUG_ON(sch->q.qlen != 1);
+ q->lowest_prio = prio;
+ q->highest_prio = prio;
+ } else {
+@@ -156,7 +154,6 @@ static struct sk_buff *skbprio_dequeue(struct Qdisc *sch)
+ /* Update highest priority field. */
+ if (skb_queue_empty(hpq)) {
+ if (q->lowest_prio == q->highest_prio) {
+- BUG_ON(sch->q.qlen);
+ q->highest_prio = 0;
+ q->lowest_prio = SKBPRIO_MAX_PRIORITY - 1;
+ } else {
+diff --git a/net/sctp/sysctl.c b/net/sctp/sysctl.c
+index 8e1e97be4df79f..ee3eac338a9dee 100644
+--- a/net/sctp/sysctl.c
++++ b/net/sctp/sysctl.c
+@@ -525,6 +525,8 @@ static int proc_sctp_do_auth(const struct ctl_table *ctl, int write,
+ return ret;
+ }
+
++static DEFINE_MUTEX(sctp_sysctl_mutex);
++
+ static int proc_sctp_do_udp_port(const struct ctl_table *ctl, int write,
+ void *buffer, size_t *lenp, loff_t *ppos)
+ {
+@@ -549,6 +551,7 @@ static int proc_sctp_do_udp_port(const struct ctl_table *ctl, int write,
+ if (new_value > max || new_value < min)
+ return -EINVAL;
+
++ mutex_lock(&sctp_sysctl_mutex);
+ net->sctp.udp_port = new_value;
+ sctp_udp_sock_stop(net);
+ if (new_value) {
+@@ -561,6 +564,7 @@ static int proc_sctp_do_udp_port(const struct ctl_table *ctl, int write,
+ lock_sock(sk);
+ sctp_sk(sk)->udp_port = htons(net->sctp.udp_port);
+ release_sock(sk);
++ mutex_unlock(&sctp_sysctl_mutex);
+ }
+
+ return ret;
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index eb6ea26b390ee8..d08f205b33dccf 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1551,7 +1551,11 @@ static int vsock_connect(struct socket *sock, struct sockaddr *addr,
+ timeout = vsk->connect_timeout;
+ prepare_to_wait(sk_sleep(sk), &wait, TASK_INTERRUPTIBLE);
+
+- while (sk->sk_state != TCP_ESTABLISHED && sk->sk_err == 0) {
++ /* If the socket is already closing or it is in an error state, there
++ * is no point in waiting.
++ */
++ while (sk->sk_state != TCP_ESTABLISHED &&
++ sk->sk_state != TCP_CLOSING && sk->sk_err == 0) {
+ if (flags & O_NONBLOCK) {
+ /* If we're not going to block, we schedule a timeout
+ * function to generate a timeout on the connection
+diff --git a/rust/Makefile b/rust/Makefile
+index 09521fc449dca2..1b00e16951eeb8 100644
+--- a/rust/Makefile
++++ b/rust/Makefile
+@@ -227,7 +227,8 @@ bindgen_skip_c_flags := -mno-fp-ret-in-387 -mpreferred-stack-boundary=% \
+ -mfunction-return=thunk-extern -mrecord-mcount -mabi=lp64 \
+ -mindirect-branch-cs-prefix -mstack-protector-guard% -mtraceback=no \
+ -mno-pointers-to-nested-functions -mno-string \
+- -mno-strict-align -mstrict-align \
++ -mno-strict-align -mstrict-align -mdirect-extern-access \
++ -mexplicit-relocs -mno-check-zero-division \
+ -fconserve-stack -falign-jumps=% -falign-loops=% \
+ -femit-struct-debug-baseonly -fno-ipa-cp-clone -fno-ipa-sra \
+ -fno-partial-inlining -fplugin-arg-arm_ssp_per_task_plugin-% \
+@@ -241,6 +242,7 @@ bindgen_skip_c_flags := -mno-fp-ret-in-387 -mpreferred-stack-boundary=% \
+ # Derived from `scripts/Makefile.clang`.
+ BINDGEN_TARGET_x86 := x86_64-linux-gnu
+ BINDGEN_TARGET_arm64 := aarch64-linux-gnu
++BINDGEN_TARGET_loongarch := loongarch64-linux-gnusf
+ BINDGEN_TARGET := $(BINDGEN_TARGET_$(SRCARCH))
+
+ # All warnings are inhibited since GCC builds are very experimental,
+diff --git a/rust/kernel/print.rs b/rust/kernel/print.rs
+index a28077a7cb3011..e52cd64333bccc 100644
+--- a/rust/kernel/print.rs
++++ b/rust/kernel/print.rs
+@@ -6,12 +6,11 @@
+ //!
+ //! Reference: <https://docs.kernel.org/core-api/printk-basics.html>
+
+-use core::{
++use crate::{
+ ffi::{c_char, c_void},
+- fmt,
++ str::RawFormatter,
+ };
+-
+-use crate::str::RawFormatter;
++use core::fmt;
+
+ // Called from `vsprintf` with format specifier `%pA`.
+ #[expect(clippy::missing_safety_doc)]
+diff --git a/scripts/package/debian/rules b/scripts/package/debian/rules
+index ca07243bd5cdf6..2b3f9a0bd6c40f 100755
+--- a/scripts/package/debian/rules
++++ b/scripts/package/debian/rules
+@@ -21,9 +21,11 @@ ifeq ($(origin KBUILD_VERBOSE),undefined)
+ endif
+ endif
+
+-revision = $(lastword $(subst -, ,$(shell dpkg-parsechangelog -S Version)))
++revision = $(shell dpkg-parsechangelog -S Version | sed -n 's/.*-//p')
+ CROSS_COMPILE ?= $(filter-out $(DEB_BUILD_GNU_TYPE)-, $(DEB_HOST_GNU_TYPE)-)
+-make-opts = ARCH=$(ARCH) KERNELRELEASE=$(KERNELRELEASE) KBUILD_BUILD_VERSION=$(revision) $(addprefix CROSS_COMPILE=,$(CROSS_COMPILE))
++make-opts = ARCH=$(ARCH) KERNELRELEASE=$(KERNELRELEASE) \
++ $(addprefix KBUILD_BUILD_VERSION=,$(revision)) \
++ $(addprefix CROSS_COMPILE=,$(CROSS_COMPILE))
+
+ binary-targets := $(addprefix binary-, image image-dbg headers libc-dev)
+
+diff --git a/scripts/selinux/install_policy.sh b/scripts/selinux/install_policy.sh
+index 24086793b0d8d4..db40237e60ce7e 100755
+--- a/scripts/selinux/install_policy.sh
++++ b/scripts/selinux/install_policy.sh
+@@ -6,27 +6,24 @@ if [ `id -u` -ne 0 ]; then
+ exit 1
+ fi
+
+-SF=`which setfiles`
+-if [ $? -eq 1 ]; then
++SF=`which setfiles` || {
+ echo "Could not find setfiles"
+ echo "Do you have policycoreutils installed?"
+ exit 1
+-fi
++}
+
+-CP=`which checkpolicy`
+-if [ $? -eq 1 ]; then
++CP=`which checkpolicy` || {
+ echo "Could not find checkpolicy"
+ echo "Do you have checkpolicy installed?"
+ exit 1
+-fi
++}
+ VERS=`$CP -V | awk '{print $1}'`
+
+-ENABLED=`which selinuxenabled`
+-if [ $? -eq 1 ]; then
++ENABLED=`which selinuxenabled` || {
+ echo "Could not find selinuxenabled"
+ echo "Do you have libselinux-utils installed?"
+ exit 1
+-fi
++}
+
+ if selinuxenabled; then
+ echo "SELinux is already enabled"
+diff --git a/security/smack/smack.h b/security/smack/smack.h
+index dbf8d7226eb56a..1c3656b5e3b91b 100644
+--- a/security/smack/smack.h
++++ b/security/smack/smack.h
+@@ -152,6 +152,7 @@ struct smk_net4addr {
+ struct smack_known *smk_label; /* label */
+ };
+
++#if IS_ENABLED(CONFIG_IPV6)
+ /*
+ * An entry in the table identifying IPv6 hosts.
+ */
+@@ -162,7 +163,9 @@ struct smk_net6addr {
+ int smk_masks; /* mask size */
+ struct smack_known *smk_label; /* label */
+ };
++#endif /* CONFIG_IPV6 */
+
++#ifdef SMACK_IPV6_PORT_LABELING
+ /*
+ * An entry in the table identifying ports.
+ */
+@@ -175,6 +178,7 @@ struct smk_port_label {
+ short smk_sock_type; /* Socket type */
+ short smk_can_reuse;
+ };
++#endif /* SMACK_IPV6_PORT_LABELING */
+
+ struct smack_known_list_elem {
+ struct list_head list;
+@@ -314,7 +318,9 @@ extern struct smack_known smack_known_web;
+ extern struct mutex smack_known_lock;
+ extern struct list_head smack_known_list;
+ extern struct list_head smk_net4addr_list;
++#if IS_ENABLED(CONFIG_IPV6)
+ extern struct list_head smk_net6addr_list;
++#endif /* CONFIG_IPV6 */
+
+ extern struct mutex smack_onlycap_lock;
+ extern struct list_head smack_onlycap_list;
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 370fd594da1252..9e13fd39206300 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -2498,6 +2498,7 @@ static struct smack_known *smack_ipv4host_label(struct sockaddr_in *sip)
+ return NULL;
+ }
+
++#if IS_ENABLED(CONFIG_IPV6)
+ /*
+ * smk_ipv6_localhost - Check for local ipv6 host address
+ * @sip: the address
+@@ -2565,6 +2566,7 @@ static struct smack_known *smack_ipv6host_label(struct sockaddr_in6 *sip)
+
+ return NULL;
+ }
++#endif /* CONFIG_IPV6 */
+
+ /**
+ * smack_netlbl_add - Set the secattr on a socket
+@@ -2669,6 +2671,7 @@ static int smk_ipv4_check(struct sock *sk, struct sockaddr_in *sap)
+ return rc;
+ }
+
++#if IS_ENABLED(CONFIG_IPV6)
+ /**
+ * smk_ipv6_check - check Smack access
+ * @subject: subject Smack label
+@@ -2701,6 +2704,7 @@ static int smk_ipv6_check(struct smack_known *subject,
+ rc = smk_bu_note("IPv6 check", subject, object, MAY_WRITE, rc);
+ return rc;
+ }
++#endif /* CONFIG_IPV6 */
+
+ #ifdef SMACK_IPV6_PORT_LABELING
+ /**
+@@ -3033,7 +3037,9 @@ static int smack_socket_connect(struct socket *sock, struct sockaddr *sap,
+ return 0;
+ if (addrlen < offsetofend(struct sockaddr, sa_family))
+ return 0;
+- if (IS_ENABLED(CONFIG_IPV6) && sap->sa_family == AF_INET6) {
++
++#if IS_ENABLED(CONFIG_IPV6)
++ if (sap->sa_family == AF_INET6) {
+ struct sockaddr_in6 *sip = (struct sockaddr_in6 *)sap;
+ struct smack_known *rsp = NULL;
+
+@@ -3053,6 +3059,8 @@ static int smack_socket_connect(struct socket *sock, struct sockaddr *sap,
+
+ return rc;
+ }
++#endif /* CONFIG_IPV6 */
++
+ if (sap->sa_family != AF_INET || addrlen < sizeof(struct sockaddr_in))
+ return 0;
+ rc = smk_ipv4_check(sock->sk, (struct sockaddr_in *)sap);
+@@ -4349,29 +4357,6 @@ static int smack_socket_getpeersec_dgram(struct socket *sock,
+ return 0;
+ }
+
+-/**
+- * smack_sock_graft - Initialize a newly created socket with an existing sock
+- * @sk: child sock
+- * @parent: parent socket
+- *
+- * Set the smk_{in,out} state of an existing sock based on the process that
+- * is creating the new socket.
+- */
+-static void smack_sock_graft(struct sock *sk, struct socket *parent)
+-{
+- struct socket_smack *ssp;
+- struct smack_known *skp = smk_of_current();
+-
+- if (sk == NULL ||
+- (sk->sk_family != PF_INET && sk->sk_family != PF_INET6))
+- return;
+-
+- ssp = smack_sock(sk);
+- ssp->smk_in = skp;
+- ssp->smk_out = skp;
+- /* cssp->smk_packet is already set in smack_inet_csk_clone() */
+-}
+-
+ /**
+ * smack_inet_conn_request - Smack access check on connect
+ * @sk: socket involved
+@@ -5160,7 +5145,6 @@ static struct security_hook_list smack_hooks[] __ro_after_init = {
+ LSM_HOOK_INIT(sk_free_security, smack_sk_free_security),
+ #endif
+ LSM_HOOK_INIT(sk_clone_security, smack_sk_clone_security),
+- LSM_HOOK_INIT(sock_graft, smack_sock_graft),
+ LSM_HOOK_INIT(inet_conn_request, smack_inet_conn_request),
+ LSM_HOOK_INIT(inet_csk_clone, smack_inet_csk_clone),
+
+diff --git a/sound/core/timer.c b/sound/core/timer.c
+index fbada79380f9ea..d774b9b71ce238 100644
+--- a/sound/core/timer.c
++++ b/sound/core/timer.c
+@@ -1515,91 +1515,97 @@ static void snd_timer_user_copy_id(struct snd_timer_id *id, struct snd_timer *ti
+ id->subdevice = timer->tmr_subdevice;
+ }
+
+-static int snd_timer_user_next_device(struct snd_timer_id __user *_tid)
++static void get_next_device(struct snd_timer_id *id)
+ {
+- struct snd_timer_id id;
+ struct snd_timer *timer;
+ struct list_head *p;
+
+- if (copy_from_user(&id, _tid, sizeof(id)))
+- return -EFAULT;
+-	guard(mutex)(&register_mutex);
+- if (id.dev_class < 0) { /* first item */
++ if (id->dev_class < 0) { /* first item */
+ if (list_empty(&snd_timer_list))
+- snd_timer_user_zero_id(&id);
++ snd_timer_user_zero_id(id);
+ else {
+ timer = list_entry(snd_timer_list.next,
+ struct snd_timer, device_list);
+- snd_timer_user_copy_id(&id, timer);
++ snd_timer_user_copy_id(id, timer);
+ }
+ } else {
+- switch (id.dev_class) {
++ switch (id->dev_class) {
+ case SNDRV_TIMER_CLASS_GLOBAL:
+- id.device = id.device < 0 ? 0 : id.device + 1;
++ id->device = id->device < 0 ? 0 : id->device + 1;
+ list_for_each(p, &snd_timer_list) {
+ timer = list_entry(p, struct snd_timer, device_list);
+ if (timer->tmr_class > SNDRV_TIMER_CLASS_GLOBAL) {
+- snd_timer_user_copy_id(&id, timer);
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+- if (timer->tmr_device >= id.device) {
+- snd_timer_user_copy_id(&id, timer);
++ if (timer->tmr_device >= id->device) {
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+ }
+ if (p == &snd_timer_list)
+- snd_timer_user_zero_id(&id);
++ snd_timer_user_zero_id(id);
+ break;
+ case SNDRV_TIMER_CLASS_CARD:
+ case SNDRV_TIMER_CLASS_PCM:
+- if (id.card < 0) {
+- id.card = 0;
++ if (id->card < 0) {
++ id->card = 0;
+ } else {
+- if (id.device < 0) {
+- id.device = 0;
++ if (id->device < 0) {
++ id->device = 0;
+ } else {
+- if (id.subdevice < 0)
+- id.subdevice = 0;
+- else if (id.subdevice < INT_MAX)
+- id.subdevice++;
++ if (id->subdevice < 0)
++ id->subdevice = 0;
++ else if (id->subdevice < INT_MAX)
++ id->subdevice++;
+ }
+ }
+ list_for_each(p, &snd_timer_list) {
+ timer = list_entry(p, struct snd_timer, device_list);
+- if (timer->tmr_class > id.dev_class) {
+- snd_timer_user_copy_id(&id, timer);
++ if (timer->tmr_class > id->dev_class) {
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+- if (timer->tmr_class < id.dev_class)
++ if (timer->tmr_class < id->dev_class)
+ continue;
+- if (timer->card->number > id.card) {
+- snd_timer_user_copy_id(&id, timer);
++ if (timer->card->number > id->card) {
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+- if (timer->card->number < id.card)
++ if (timer->card->number < id->card)
+ continue;
+- if (timer->tmr_device > id.device) {
+- snd_timer_user_copy_id(&id, timer);
++ if (timer->tmr_device > id->device) {
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+- if (timer->tmr_device < id.device)
++ if (timer->tmr_device < id->device)
+ continue;
+- if (timer->tmr_subdevice > id.subdevice) {
+- snd_timer_user_copy_id(&id, timer);
++ if (timer->tmr_subdevice > id->subdevice) {
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+- if (timer->tmr_subdevice < id.subdevice)
++ if (timer->tmr_subdevice < id->subdevice)
+ continue;
+- snd_timer_user_copy_id(&id, timer);
++ snd_timer_user_copy_id(id, timer);
+ break;
+ }
+ if (p == &snd_timer_list)
+- snd_timer_user_zero_id(&id);
++ snd_timer_user_zero_id(id);
+ break;
+ default:
+- snd_timer_user_zero_id(&id);
++ snd_timer_user_zero_id(id);
+ }
+ }
++}
++
++static int snd_timer_user_next_device(struct snd_timer_id __user *_tid)
++{
++ struct snd_timer_id id;
++
++ if (copy_from_user(&id, _tid, sizeof(id)))
++ return -EFAULT;
++	scoped_guard(mutex, &register_mutex)
++ get_next_device(&id);
+ if (copy_to_user(_tid, &id, sizeof(*_tid)))
+ return -EFAULT;
+ return 0;
+@@ -1620,23 +1626,24 @@ static int snd_timer_user_ginfo(struct file *file,
+ tid = ginfo->tid;
+ memset(ginfo, 0, sizeof(*ginfo));
+ ginfo->tid = tid;
+-	guard(mutex)(&register_mutex);
+- t = snd_timer_find(&tid);
+- if (!t)
+- return -ENODEV;
+- ginfo->card = t->card ? t->card->number : -1;
+- if (t->hw.flags & SNDRV_TIMER_HW_SLAVE)
+- ginfo->flags |= SNDRV_TIMER_FLG_SLAVE;
+- strscpy(ginfo->id, t->id, sizeof(ginfo->id));
+- strscpy(ginfo->name, t->name, sizeof(ginfo->name));
+- scoped_guard(spinlock_irq, &t->lock)
+- ginfo->resolution = snd_timer_hw_resolution(t);
+- if (t->hw.resolution_min > 0) {
+- ginfo->resolution_min = t->hw.resolution_min;
+- ginfo->resolution_max = t->hw.resolution_max;
+- }
+- list_for_each(p, &t->open_list_head) {
+- ginfo->clients++;
++	scoped_guard(mutex, &register_mutex) {
++ t = snd_timer_find(&tid);
++ if (!t)
++ return -ENODEV;
++ ginfo->card = t->card ? t->card->number : -1;
++ if (t->hw.flags & SNDRV_TIMER_HW_SLAVE)
++ ginfo->flags |= SNDRV_TIMER_FLG_SLAVE;
++ strscpy(ginfo->id, t->id, sizeof(ginfo->id));
++ strscpy(ginfo->name, t->name, sizeof(ginfo->name));
++ scoped_guard(spinlock_irq, &t->lock)
++ ginfo->resolution = snd_timer_hw_resolution(t);
++ if (t->hw.resolution_min > 0) {
++ ginfo->resolution_min = t->hw.resolution_min;
++ ginfo->resolution_max = t->hw.resolution_max;
++ }
++ list_for_each(p, &t->open_list_head) {
++ ginfo->clients++;
++ }
+ }
+ if (copy_to_user(_ginfo, ginfo, sizeof(*ginfo)))
+ return -EFAULT;
+@@ -1674,31 +1681,31 @@ static int snd_timer_user_gstatus(struct file *file,
+ struct snd_timer_gstatus gstatus;
+ struct snd_timer_id tid;
+ struct snd_timer *t;
+- int err = 0;
+
+ if (copy_from_user(&gstatus, _gstatus, sizeof(gstatus)))
+ return -EFAULT;
+ tid = gstatus.tid;
+ memset(&gstatus, 0, sizeof(gstatus));
+ gstatus.tid = tid;
+-	guard(mutex)(&register_mutex);
+- t = snd_timer_find(&tid);
+- if (t != NULL) {
+- guard(spinlock_irq)(&t->lock);
+- gstatus.resolution = snd_timer_hw_resolution(t);
+- if (t->hw.precise_resolution) {
+- t->hw.precise_resolution(t, &gstatus.resolution_num,
+- &gstatus.resolution_den);
++	scoped_guard(mutex, &register_mutex) {
++ t = snd_timer_find(&tid);
++ if (t != NULL) {
++ guard(spinlock_irq)(&t->lock);
++ gstatus.resolution = snd_timer_hw_resolution(t);
++ if (t->hw.precise_resolution) {
++ t->hw.precise_resolution(t, &gstatus.resolution_num,
++ &gstatus.resolution_den);
++ } else {
++ gstatus.resolution_num = gstatus.resolution;
++ gstatus.resolution_den = 1000000000uL;
++ }
+ } else {
+- gstatus.resolution_num = gstatus.resolution;
+- gstatus.resolution_den = 1000000000uL;
++ return -ENODEV;
+ }
+- } else {
+- err = -ENODEV;
+ }
+- if (err >= 0 && copy_to_user(_gstatus, &gstatus, sizeof(gstatus)))
+- err = -EFAULT;
+- return err;
++ if (copy_to_user(_gstatus, &gstatus, sizeof(gstatus)))
++ return -EFAULT;
++ return 0;
+ }
+
+ static int snd_timer_user_tselect(struct file *file,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 8c7da13a804c04..59e59fdc38f2c4 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -586,6 +586,9 @@ static void alc_shutup_pins(struct hda_codec *codec)
+ {
+ struct alc_spec *spec = codec->spec;
+
++ if (spec->no_shutup_pins)
++ return;
++
+ switch (codec->core.vendor_id) {
+ case 0x10ec0236:
+ case 0x10ec0256:
+@@ -601,8 +604,7 @@ static void alc_shutup_pins(struct hda_codec *codec)
+ alc_headset_mic_no_shutup(codec);
+ break;
+ default:
+- if (!spec->no_shutup_pins)
+- snd_hda_shutup_pins(codec);
++ snd_hda_shutup_pins(codec);
+ break;
+ }
+ }
+@@ -4792,6 +4794,21 @@ static void alc236_fixup_hp_coef_micmute_led(struct hda_codec *codec,
+ }
+ }
+
++static void alc295_fixup_hp_mute_led_coefbit11(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ struct alc_spec *spec = codec->spec;
++
++ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++ spec->mute_led_polarity = 0;
++ spec->mute_led_coef.idx = 0xb;
++ spec->mute_led_coef.mask = 3 << 3;
++ spec->mute_led_coef.on = 1 << 3;
++ spec->mute_led_coef.off = 1 << 4;
++ snd_hda_gen_add_mute_led_cdev(codec, coef_mute_led_set);
++ }
++}
++
+ static void alc285_fixup_hp_mute_led(struct hda_codec *codec,
+ const struct hda_fixup *fix, int action)
+ {
+@@ -7624,6 +7641,7 @@ enum {
+ ALC290_FIXUP_MONO_SPEAKERS_HSJACK,
+ ALC290_FIXUP_SUBWOOFER,
+ ALC290_FIXUP_SUBWOOFER_HSJACK,
++ ALC295_FIXUP_HP_MUTE_LED_COEFBIT11,
+ ALC269_FIXUP_THINKPAD_ACPI,
+ ALC269_FIXUP_DMIC_THINKPAD_ACPI,
+ ALC269VB_FIXUP_INFINIX_ZERO_BOOK_13,
+@@ -9359,6 +9377,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ .chained = true,
+ .chain_id = ALC283_FIXUP_INT_MIC,
+ },
++ [ALC295_FIXUP_HP_MUTE_LED_COEFBIT11] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc295_fixup_hp_mute_led_coefbit11,
++ },
+ [ALC298_FIXUP_SAMSUNG_AMP] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc298_fixup_samsung_amp,
+@@ -10394,6 +10416,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ SND_PCI_QUIRK(0x103c, 0x8519, "HP Spectre x360 15-df0xxx", ALC285_FIXUP_HP_SPECTRE_X360),
+ SND_PCI_QUIRK(0x103c, 0x8537, "HP ProBook 440 G6", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
++ SND_PCI_QUIRK(0x103c, 0x85c6, "HP Pavilion x360 Convertible 14-dy1xxx", ALC295_FIXUP_HP_MUTE_LED_COEFBIT11),
+ SND_PCI_QUIRK(0x103c, 0x85de, "HP Envy x360 13-ar0xxx", ALC285_FIXUP_HP_ENVY_X360),
+ SND_PCI_QUIRK(0x103c, 0x860f, "HP ZBook 15 G6", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+ SND_PCI_QUIRK(0x103c, 0x861f, "HP Elite Dragonfly G1", ALC285_FIXUP_HP_GPIO_AMP_INIT),
+@@ -10618,7 +10641,9 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8e1a, "HP ZBook Firefly 14 G12A", ALC285_FIXUP_HP_GPIO_LED),
+ SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
++ SND_PCI_QUIRK(0x1043, 0x1054, "ASUS G614FH/FM/FP", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++ SND_PCI_QUIRK(0x1043, 0x1074, "ASUS G614PH/PM/PP", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x10a1, "ASUS UX391UA", ALC294_FIXUP_ASUS_SPK),
+ SND_PCI_QUIRK(0x1043, 0x10a4, "ASUS TP3407SA", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x10c0, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+@@ -10626,15 +10651,18 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x10d3, "ASUS K6500ZC", ALC294_FIXUP_ASUS_SPK),
+ SND_PCI_QUIRK(0x1043, 0x1154, "ASUS TP3607SH", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x115d, "Asus 1015E", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
++ SND_PCI_QUIRK(0x1043, 0x1194, "ASUS UM3406KA", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x11c0, "ASUS X556UR", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1204, "ASUS Strix G615JHR_JMR_JPR", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x1214, "ASUS Strix G615LH_LM_LP", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x125e, "ASUS Q524UQK", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1271, "ASUS X430UN", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1290, "ASUS X441SA", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1043, 0x1294, "ASUS B3405CVA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x12a3, "Asus N7691ZM", ALC269_FIXUP_ASUS_N7601ZM),
+ SND_PCI_QUIRK(0x1043, 0x12af, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x12b4, "ASUS B3405CCA / P3405CCA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE),
+@@ -10648,6 +10676,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1493, "ASUS GV601VV/VU/VJ/VQ/VI", ALC285_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x14d3, "ASUS G614JY/JZ/JG", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x14e3, "ASUS G513PI/PU/PV", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x1043, 0x14f2, "ASUS VivoBook X515JA", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1503, "ASUS G733PY/PZ/PZV/PYV", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x1517, "Asus Zenbook UX31A", ALC269VB_FIXUP_ASUS_ZENBOOK_UX31A),
+ SND_PCI_QUIRK(0x1043, 0x1533, "ASUS GV302XA/XJ/XQ/XU/XV/XI", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10687,6 +10716,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1c43, "ASUS UX8406MA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1c62, "ASUS GU603", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x1c63, "ASUS GU605M", ALC285_FIXUP_ASUS_GU605_SPI_SPEAKER2_TO_DAC1),
++ SND_PCI_QUIRK(0x1043, 0x1c80, "ASUS VivoBook TP401", ALC256_FIXUP_ASUS_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x1c92, "ASUS ROG Strix G15", ALC285_FIXUP_ASUS_G533Z_PINS),
+ SND_PCI_QUIRK(0x1043, 0x1c9f, "ASUS G614JU/JV/JI", ALC285_FIXUP_ASUS_HEADSET_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1caf, "ASUS G634JY/JZ/JI/JG", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+@@ -10715,14 +10745,28 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1f12, "ASUS UM5302", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x1f1f, "ASUS H7604JI/JV/J3D", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1f62, "ASUS UX7602ZM", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x1f63, "ASUS P5405CSA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1f92, "ASUS ROG Flow X16", ALC289_FIXUP_ASUS_GA401),
++ SND_PCI_QUIRK(0x1043, 0x1fb3, "ASUS ROG Flow Z13 GZ302EA", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x1043, 0x3011, "ASUS B5605CVA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
++ SND_PCI_QUIRK(0x1043, 0x3061, "ASUS B3405CCA", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x3071, "ASUS B5405CCA", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x30c1, "ASUS B3605CCA / P3605CCA", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x30d1, "ASUS B5405CCA", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x30e1, "ASUS B5605CCA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x31d0, "ASUS Zen AIO 27 Z272SD_A272SD", ALC274_FIXUP_ASUS_ZEN_AIO_27),
++ SND_PCI_QUIRK(0x1043, 0x31e1, "ASUS B5605CCA", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x31f1, "ASUS B3605CCA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x3a20, "ASUS G614JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a30, "ASUS G814JVR/JIR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a50, "ASUS G834JYR/JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a60, "ASUS G634JYR/JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
++ SND_PCI_QUIRK(0x1043, 0x3d78, "ASUS GA603KH", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x1043, 0x3d88, "ASUS GA603KM", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x1043, 0x3e00, "ASUS G814FH/FM/FP", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x1043, 0x3e20, "ASUS G814PH/PM/PP", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x3e30, "ASUS TP3607SA", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x3ee0, "ASUS Strix G815_JHR_JMR_JPR", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x3ef0, "ASUS Strix G635LR_LW_LX", ALC287_FIXUP_TAS2781_I2C),
+@@ -10730,6 +10774,8 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x3f10, "ASUS Strix G835LR_LW_LX", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x3f20, "ASUS Strix G615LR_LW", ALC287_FIXUP_TAS2781_I2C),
+ SND_PCI_QUIRK(0x1043, 0x3f30, "ASUS Strix G815LR_LW", ALC287_FIXUP_TAS2781_I2C),
++ SND_PCI_QUIRK(0x1043, 0x3fd0, "ASUS B3605CVA", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x3ff0, "ASUS B5405CVA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x831a, "ASUS P901", ALC269_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x834a, "ASUS S101", ALC269_FIXUP_STEREO_DMIC),
+ SND_PCI_QUIRK(0x1043, 0x8398, "ASUS P1005", ALC269_FIXUP_STEREO_DMIC),
+diff --git a/sound/soc/amd/acp/acp-legacy-common.c b/sound/soc/amd/acp/acp-legacy-common.c
+index be01b178172e86..e4af2640feeb14 100644
+--- a/sound/soc/amd/acp/acp-legacy-common.c
++++ b/sound/soc/amd/acp/acp-legacy-common.c
+@@ -13,6 +13,7 @@
+ */
+
+ #include "amd.h"
++#include <linux/acpi.h>
+ #include <linux/pci.h>
+ #include <linux/export.h>
+
+@@ -445,7 +446,9 @@ void check_acp_config(struct pci_dev *pci, struct acp_chip_info *chip)
+ {
+ struct acpi_device *pdm_dev;
+ const union acpi_object *obj;
+- u32 pdm_addr;
++ acpi_handle handle;
++ acpi_integer dmic_status;
++ u32 pdm_addr, ret;
+
+ switch (chip->acp_rev) {
+ case ACP3X_DEV:
+@@ -477,6 +480,11 @@ void check_acp_config(struct pci_dev *pci, struct acp_chip_info *chip)
+ obj->integer.value == pdm_addr)
+ chip->is_pdm_dev = true;
+ }
++
++ handle = ACPI_HANDLE(&pci->dev);
++ ret = acpi_evaluate_integer(handle, "_WOV", NULL, &dmic_status);
++ if (!ACPI_FAILURE(ret))
++ chip->is_pdm_dev = dmic_status;
+ }
+ }
+ EXPORT_SYMBOL_NS_GPL(check_acp_config, SND_SOC_ACP_COMMON);
+diff --git a/sound/soc/codecs/cs35l41-spi.c b/sound/soc/codecs/cs35l41-spi.c
+index a6db44520c060b..f9b6bf7bea9c97 100644
+--- a/sound/soc/codecs/cs35l41-spi.c
++++ b/sound/soc/codecs/cs35l41-spi.c
+@@ -32,13 +32,16 @@ static int cs35l41_spi_probe(struct spi_device *spi)
+ const struct regmap_config *regmap_config = &cs35l41_regmap_spi;
+ struct cs35l41_hw_cfg *hw_cfg = dev_get_platdata(&spi->dev);
+ struct cs35l41_private *cs35l41;
++ int ret;
+
+ cs35l41 = devm_kzalloc(&spi->dev, sizeof(struct cs35l41_private), GFP_KERNEL);
+ if (!cs35l41)
+ return -ENOMEM;
+
+ spi->max_speed_hz = CS35L41_SPI_MAX_FREQ;
+- spi_setup(spi);
++ ret = spi_setup(spi);
++ if (ret < 0)
++ return ret;
+
+ spi_set_drvdata(spi, cs35l41);
+ cs35l41->regmap = devm_regmap_init_spi(spi, regmap_config);
+diff --git a/sound/soc/codecs/rt1320-sdw.c b/sound/soc/codecs/rt1320-sdw.c
+index f4e1ea29c26513..f2d194e76a947f 100644
+--- a/sound/soc/codecs/rt1320-sdw.c
++++ b/sound/soc/codecs/rt1320-sdw.c
+@@ -3705,6 +3705,9 @@ static int rt1320_read_prop(struct sdw_slave *slave)
+ /* set the timeout values */
+ prop->clk_stop_timeout = 64;
+
++ /* BIOS may set wake_capable. Make sure it is 0 as wake events are disabled. */
++ prop->wake_capable = 0;
++
+ return 0;
+ }
+
+diff --git a/sound/soc/codecs/rt5665.c b/sound/soc/codecs/rt5665.c
+index 47df14ba52784b..4f0236b34a2d9b 100644
+--- a/sound/soc/codecs/rt5665.c
++++ b/sound/soc/codecs/rt5665.c
+@@ -31,9 +31,7 @@
+ #include "rl6231.h"
+ #include "rt5665.h"
+
+-#define RT5665_NUM_SUPPLIES 3
+-
+-static const char *rt5665_supply_names[RT5665_NUM_SUPPLIES] = {
++static const char * const rt5665_supply_names[] = {
+ "AVDD",
+ "MICVDD",
+ "VBAT",
+@@ -46,7 +44,6 @@ struct rt5665_priv {
+ struct gpio_desc *gpiod_ldo1_en;
+ struct gpio_desc *gpiod_reset;
+ struct snd_soc_jack *hs_jack;
+- struct regulator_bulk_data supplies[RT5665_NUM_SUPPLIES];
+ struct delayed_work jack_detect_work;
+ struct delayed_work calibrate_work;
+ struct delayed_work jd_check_work;
+@@ -4471,8 +4468,6 @@ static void rt5665_remove(struct snd_soc_component *component)
+ struct rt5665_priv *rt5665 = snd_soc_component_get_drvdata(component);
+
+ regmap_write(rt5665->regmap, RT5665_RESET, 0);
+-
+- regulator_bulk_disable(ARRAY_SIZE(rt5665->supplies), rt5665->supplies);
+ }
+
+ #ifdef CONFIG_PM
+@@ -4758,7 +4753,7 @@ static int rt5665_i2c_probe(struct i2c_client *i2c)
+ {
+ struct rt5665_platform_data *pdata = dev_get_platdata(&i2c->dev);
+ struct rt5665_priv *rt5665;
+- int i, ret;
++ int ret;
+ unsigned int val;
+
+ rt5665 = devm_kzalloc(&i2c->dev, sizeof(struct rt5665_priv),
+@@ -4774,24 +4769,13 @@ static int rt5665_i2c_probe(struct i2c_client *i2c)
+ else
+ rt5665_parse_dt(rt5665, &i2c->dev);
+
+- for (i = 0; i < ARRAY_SIZE(rt5665->supplies); i++)
+- rt5665->supplies[i].supply = rt5665_supply_names[i];
+-
+- ret = devm_regulator_bulk_get(&i2c->dev, ARRAY_SIZE(rt5665->supplies),
+- rt5665->supplies);
++ ret = devm_regulator_bulk_get_enable(&i2c->dev, ARRAY_SIZE(rt5665_supply_names),
++ rt5665_supply_names);
+ if (ret != 0) {
+ dev_err(&i2c->dev, "Failed to request supplies: %d\n", ret);
+ return ret;
+ }
+
+- ret = regulator_bulk_enable(ARRAY_SIZE(rt5665->supplies),
+- rt5665->supplies);
+- if (ret != 0) {
+- dev_err(&i2c->dev, "Failed to enable supplies: %d\n", ret);
+- return ret;
+- }
+-
+-
+ rt5665->gpiod_ldo1_en = devm_gpiod_get_optional(&i2c->dev,
+ "realtek,ldo1-en",
+ GPIOD_OUT_HIGH);
+diff --git a/sound/soc/codecs/wsa884x.c b/sound/soc/codecs/wsa884x.c
+index 86df5152c547bc..560a2c04b69553 100644
+--- a/sound/soc/codecs/wsa884x.c
++++ b/sound/soc/codecs/wsa884x.c
+@@ -1875,7 +1875,7 @@ static int wsa884x_get_temp(struct wsa884x_priv *wsa884x, long *temp)
+ * Reading temperature is possible only when Power Amplifier is
+ * off. Report last cached data.
+ */
+- *temp = wsa884x->temperature;
++ *temp = wsa884x->temperature * 1000;
+ return 0;
+ }
+
+@@ -1934,7 +1934,7 @@ static int wsa884x_get_temp(struct wsa884x_priv *wsa884x, long *temp)
+ if ((val > WSA884X_LOW_TEMP_THRESHOLD) &&
+ (val < WSA884X_HIGH_TEMP_THRESHOLD)) {
+ wsa884x->temperature = val;
+- *temp = val;
++ *temp = val * 1000;
+ ret = 0;
+ } else {
+ ret = -EAGAIN;
+diff --git a/sound/soc/fsl/imx-card.c b/sound/soc/fsl/imx-card.c
+index a7215bad648457..93dbe40008c009 100644
+--- a/sound/soc/fsl/imx-card.c
++++ b/sound/soc/fsl/imx-card.c
+@@ -738,6 +738,8 @@ static int imx_card_probe(struct platform_device *pdev)
+ data->dapm_routes[i].sink =
+ devm_kasprintf(&pdev->dev, GFP_KERNEL, "%d %s",
+ i + 1, "Playback");
++ if (!data->dapm_routes[i].sink)
++ return -ENOMEM;
+ data->dapm_routes[i].source = "CPU-Playback";
+ }
+ }
+@@ -755,6 +757,8 @@ static int imx_card_probe(struct platform_device *pdev)
+ data->dapm_routes[i].source =
+ devm_kasprintf(&pdev->dev, GFP_KERNEL, "%d %s",
+ i + 1, "Capture");
++ if (!data->dapm_routes[i].source)
++ return -ENOMEM;
+ data->dapm_routes[i].sink = "CPU-Capture";
+ }
+ }
+diff --git a/sound/soc/ti/j721e-evm.c b/sound/soc/ti/j721e-evm.c
+index d9d1e021f5b2ee..0f96cc45578d8c 100644
+--- a/sound/soc/ti/j721e-evm.c
++++ b/sound/soc/ti/j721e-evm.c
+@@ -182,6 +182,8 @@ static int j721e_configure_refclk(struct j721e_priv *priv,
+ clk_id = J721E_CLK_PARENT_48000;
+ else if (!(rate % 11025) && priv->pll_rates[J721E_CLK_PARENT_44100])
+ clk_id = J721E_CLK_PARENT_44100;
++ else if (!(rate % 11025) && priv->pll_rates[J721E_CLK_PARENT_48000])
++ clk_id = J721E_CLK_PARENT_48000;
+ else
+ return ret;
+
+diff --git a/tools/arch/x86/lib/insn.c b/tools/arch/x86/lib/insn.c
+index ab5cdc3337dacb..e91d4c4e1c1621 100644
+--- a/tools/arch/x86/lib/insn.c
++++ b/tools/arch/x86/lib/insn.c
+@@ -13,7 +13,7 @@
+ #endif
+ #include "../include/asm/inat.h" /* __ignore_sync_check__ */
+ #include "../include/asm/insn.h" /* __ignore_sync_check__ */
+-#include "../include/linux/unaligned.h" /* __ignore_sync_check__ */
++#include <linux/unaligned.h> /* __ignore_sync_check__ */
+
+ #include <linux/errno.h>
+ #include <linux/kconfig.h>
+diff --git a/tools/lib/bpf/linker.c b/tools/lib/bpf/linker.c
+index 777600822d8e45..179f6b31cbd6fa 100644
+--- a/tools/lib/bpf/linker.c
++++ b/tools/lib/bpf/linker.c
+@@ -2007,7 +2007,7 @@ static int linker_append_elf_sym(struct bpf_linker *linker, struct src_obj *obj,
+
+ obj->sym_map[src_sym_idx] = dst_sym_idx;
+
+- if (sym_type == STT_SECTION && dst_sym) {
++ if (sym_type == STT_SECTION && dst_sec) {
+ dst_sec->sec_sym_idx = dst_sym_idx;
+ dst_sym->st_value = 0;
+ }
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 3c3e5760e81b83..286a2c0af02aa8 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -4153,7 +4153,7 @@ static bool ignore_unreachable_insn(struct objtool_file *file, struct instructio
+ * It may also insert a UD2 after calling a __noreturn function.
+ */
+ prev_insn = prev_insn_same_sec(file, insn);
+- if (prev_insn->dead_end &&
++ if (prev_insn && prev_insn->dead_end &&
+ (insn->type == INSN_BUG ||
+ (insn->type == INSN_JUMP_UNCONDITIONAL &&
+ insn->jump_dest && insn->jump_dest->type == INSN_BUG)))
+@@ -4575,35 +4575,6 @@ static int validate_sls(struct objtool_file *file)
+ return warnings;
+ }
+
+-static bool ignore_noreturn_call(struct instruction *insn)
+-{
+- struct symbol *call_dest = insn_call_dest(insn);
+-
+- /*
+- * FIXME: hack, we need a real noreturn solution
+- *
+- * Problem is, exc_double_fault() may or may not return, depending on
+- * whether CONFIG_X86_ESPFIX64 is set. But objtool has no visibility
+- * to the kernel config.
+- *
+- * Other potential ways to fix it:
+- *
+- * - have compiler communicate __noreturn functions somehow
+- * - remove CONFIG_X86_ESPFIX64
+- * - read the .config file
+- * - add a cmdline option
+- * - create a generic objtool annotation format (vs a bunch of custom
+- * formats) and annotate it
+- */
+- if (!strcmp(call_dest->name, "exc_double_fault")) {
+- /* prevent further unreachable warnings for the caller */
+- insn->sym->warned = 1;
+- return true;
+- }
+-
+- return false;
+-}
+-
+ static int validate_reachable_instructions(struct objtool_file *file)
+ {
+ struct instruction *insn, *prev_insn;
+@@ -4620,7 +4591,7 @@ static int validate_reachable_instructions(struct objtool_file *file)
+ prev_insn = prev_insn_same_sec(file, insn);
+ if (prev_insn && prev_insn->dead_end) {
+ call_dest = insn_call_dest(prev_insn);
+- if (call_dest && !ignore_noreturn_call(prev_insn)) {
++ if (call_dest) {
+ WARN_INSN(insn, "%s() is missing a __noreturn annotation",
+ call_dest->name);
+ warnings++;
+@@ -4643,6 +4614,8 @@ static int disas_funcs(const char *funcs)
+ char *cmd;
+
+ cross_compile = getenv("CROSS_COMPILE");
++ if (!cross_compile)
++ cross_compile = "";
+
+ objdump_str = "%sobjdump -wdr %s | gawk -M -v _funcs='%s' '"
+ "BEGIN { split(_funcs, funcs); }"
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 2ce71d2e5fae05..b102a4c525e4b0 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -513,13 +513,14 @@ ifeq ($(feature-setns), 1)
+ $(call detected,CONFIG_SETNS)
+ endif
+
++ifeq ($(feature-reallocarray), 0)
++ CFLAGS += -DCOMPAT_NEED_REALLOCARRAY
++endif
++
+ ifdef CORESIGHT
+ $(call feature_check,libopencsd)
+ ifeq ($(feature-libopencsd), 1)
+ CFLAGS += -DHAVE_CSTRACE_SUPPORT $(LIBOPENCSD_CFLAGS)
+- ifeq ($(feature-reallocarray), 0)
+- CFLAGS += -DCOMPAT_NEED_REALLOCARRAY
+- endif
+ LDFLAGS += $(LIBOPENCSD_LDFLAGS)
+ EXTLIBS += $(OPENCSDLIBS)
+ $(call detected,CONFIG_LIBOPENCSD)
+@@ -1135,9 +1136,6 @@ ifndef NO_AUXTRACE
+ ifndef NO_AUXTRACE
+ $(call detected,CONFIG_AUXTRACE)
+ CFLAGS += -DHAVE_AUXTRACE_SUPPORT
+- ifeq ($(feature-reallocarray), 0)
+- CFLAGS += -DCOMPAT_NEED_REALLOCARRAY
+- endif
+ endif
+ endif
+
+diff --git a/tools/perf/Makefile.perf b/tools/perf/Makefile.perf
+index 9dd2e8d3f3c9b7..8ee59ecb14110f 100644
+--- a/tools/perf/Makefile.perf
++++ b/tools/perf/Makefile.perf
+@@ -164,7 +164,7 @@ ifneq ($(OUTPUT),)
+ VPATH += $(OUTPUT)
+ export VPATH
+ # create symlink to the original source
+-SOURCE := $(shell ln -sf $(srctree)/tools/perf $(OUTPUT)/source)
++SOURCE := $(shell ln -sfn $(srctree)/tools/perf $(OUTPUT)/source)
+ endif
+
+ ifeq ($(V),1)
+diff --git a/tools/perf/bench/syscall.c b/tools/perf/bench/syscall.c
+index ea4dfc07cbd6b8..e7dc216f717f5a 100644
+--- a/tools/perf/bench/syscall.c
++++ b/tools/perf/bench/syscall.c
+@@ -22,8 +22,7 @@
+ #define __NR_fork -1
+ #endif
+
+-#define LOOPS_DEFAULT 10000000
+-static int loops = LOOPS_DEFAULT;
++static int loops;
+
+ static const struct option options[] = {
+ OPT_INTEGER('l', "loop", &loops, "Specify number of loops"),
+@@ -80,6 +79,18 @@ static int bench_syscall_common(int argc, const char **argv, int syscall)
+ const char *name = NULL;
+ int i;
+
++ switch (syscall) {
++ case __NR_fork:
++ case __NR_execve:
++ /* Limit default loop to 10000 times to save time */
++ loops = 10000;
++ break;
++ default:
++ loops = 10000000;
++ break;
++ }
++
++ /* Options -l and --loops override default above */
+ argc = parse_options(argc, argv, options, bench_syscall_usage, 0);
+
+ gettimeofday(&start, NULL);
+@@ -94,16 +105,9 @@ static int bench_syscall_common(int argc, const char **argv, int syscall)
+ break;
+ case __NR_fork:
+ test_fork();
+- /* Only loop 10000 times to save time */
+- if (i == 10000)
+- loops = 10000;
+ break;
+ case __NR_execve:
+ test_execve();
+- /* Only loop 10000 times to save time */
+- if (i == 10000)
+- loops = 10000;
+- break;
+ default:
+ break;
+ }
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 645deec294c842..8700c39680662a 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -1551,12 +1551,12 @@ int cmd_report(int argc, const char **argv)
+ input_name = "perf.data";
+ }
+
++repeat:
+ data.path = input_name;
+ data.force = symbol_conf.force;
+
+ symbol_conf.skip_empty = report.skip_empty;
+
+-repeat:
+ perf_tool__init(&report.tool, ordered_events);
+ report.tool.sample = process_sample_event;
+ report.tool.mmap = perf_event__process_mmap;
+diff --git a/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/metrics.json b/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/metrics.json
+index c5d1d22bd034b1..5228f94a793f95 100644
+--- a/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/metrics.json
++++ b/tools/perf/pmu-events/arch/arm64/ampere/ampereonex/metrics.json
+@@ -229,19 +229,19 @@
+ },
+ {
+ "MetricName": "slots_lost_misspeculation_fraction",
+- "MetricExpr": "(OP_SPEC - OP_RETIRED) / (CPU_CYCLES * #slots)",
++ "MetricExpr": "100 * (OP_SPEC - OP_RETIRED) / (CPU_CYCLES * #slots)",
+ "BriefDescription": "Fraction of slots lost due to misspeculation",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricGroup": "Default;TopdownL1",
+- "ScaleUnit": "100percent of slots"
++ "ScaleUnit": "1percent of slots"
+ },
+ {
+ "MetricName": "retired_fraction",
+- "MetricExpr": "OP_RETIRED / (CPU_CYCLES * #slots)",
++ "MetricExpr": "100 * OP_RETIRED / (CPU_CYCLES * #slots)",
+ "BriefDescription": "Fraction of slots retiring, useful work",
+ "DefaultMetricgroupName": "TopdownL1",
+ "MetricGroup": "Default;TopdownL1",
+- "ScaleUnit": "100percent of slots"
++ "ScaleUnit": "1percent of slots"
+ },
+ {
+ "MetricName": "backend_core",
+@@ -266,7 +266,7 @@
+ },
+ {
+ "MetricName": "frontend_bandwidth",
+- "MetricExpr": "frontend_bound - frontend_latency",
++ "MetricExpr": "frontend_bound - 100 * frontend_latency",
+ "BriefDescription": "Fraction of slots the CPU did not dispatch at full bandwidth - able to dispatch partial slots only (1, 2, or 3 uops)",
+ "MetricGroup": "TopdownL2",
+ "ScaleUnit": "1percent of slots"
+diff --git a/tools/perf/tests/shell/coresight/asm_pure_loop/asm_pure_loop.S b/tools/perf/tests/shell/coresight/asm_pure_loop/asm_pure_loop.S
+index 75cf084a927d3d..5777600467723f 100644
+--- a/tools/perf/tests/shell/coresight/asm_pure_loop/asm_pure_loop.S
++++ b/tools/perf/tests/shell/coresight/asm_pure_loop/asm_pure_loop.S
+@@ -26,3 +26,5 @@ skip:
+ mov x0, #0
+ mov x8, #93 // __NR_exit syscall
+ svc #0
++
++.section .note.GNU-stack, "", @progbits
+diff --git a/tools/perf/tests/shell/record_bpf_filter.sh b/tools/perf/tests/shell/record_bpf_filter.sh
+index 1b58ccc1fd882d..4d6c3c1b7fb925 100755
+--- a/tools/perf/tests/shell/record_bpf_filter.sh
++++ b/tools/perf/tests/shell/record_bpf_filter.sh
+@@ -89,7 +89,7 @@ test_bpf_filter_fail() {
+ test_bpf_filter_group() {
+ echo "Group bpf-filter test"
+
+- if ! perf record -e task-clock --filter 'period > 1000 || ip > 0' \
++ if ! perf record -e task-clock --filter 'period > 1000, ip > 0' \
+ -o /dev/null true 2>/dev/null
+ then
+ echo "Group bpf-filter test [Failed should succeed]"
+@@ -97,7 +97,7 @@ test_bpf_filter_group() {
+ return
+ fi
+
+- if ! perf record -e task-clock --filter 'cpu > 0 || ip > 0' \
++ if ! perf record -e task-clock --filter 'period > 1000 , cpu > 0 || ip > 0' \
+ -o /dev/null true 2>&1 | grep -q PERF_SAMPLE_CPU
+ then
+ echo "Group bpf-filter test [Failed forbidden CPU]"
+diff --git a/tools/perf/util/arm-spe.c b/tools/perf/util/arm-spe.c
+index 138ffc71b32dd7..2c06f2a85400e1 100644
+--- a/tools/perf/util/arm-spe.c
++++ b/tools/perf/util/arm-spe.c
+@@ -37,6 +37,8 @@
+ #include "../../arch/arm64/include/asm/cputype.h"
+ #define MAX_TIMESTAMP (~0ULL)
+
++#define is_ldst_op(op) (!!((op) & ARM_SPE_OP_LDST))
++
+ struct arm_spe {
+ struct auxtrace auxtrace;
+ struct auxtrace_queues queues;
+@@ -520,6 +522,10 @@ static u64 arm_spe__synth_data_source(const struct arm_spe_record *record, u64 m
+ union perf_mem_data_src data_src = { .mem_op = PERF_MEM_OP_NA };
+ bool is_neoverse = is_midr_in_range_list(midr, neoverse_spe);
+
++ /* Only synthesize data source for LDST operations */
++ if (!is_ldst_op(record->op))
++ return 0;
++
+ if (record->op & ARM_SPE_OP_LD)
+ data_src.mem_op = PERF_MEM_OP_LOAD;
+ else if (record->op & ARM_SPE_OP_ST)
+@@ -619,7 +625,7 @@ static int arm_spe_sample(struct arm_spe_queue *speq)
+ * When data_src is zero it means the record is not a memory operation,
+ * skip to synthesize memory sample for this case.
+ */
+- if (spe->sample_memory && data_src) {
++ if (spe->sample_memory && is_ldst_op(record->op)) {
+ err = arm_spe__synth_mem_sample(speq, spe->memory_id, data_src);
+ if (err)
+ return err;
+diff --git a/tools/perf/util/bpf-filter.l b/tools/perf/util/bpf-filter.l
+index f313404f95a90d..6aa65ade33851b 100644
+--- a/tools/perf/util/bpf-filter.l
++++ b/tools/perf/util/bpf-filter.l
+@@ -76,7 +76,7 @@ static int path_or_error(void)
+ num_dec [0-9]+
+ num_hex 0[Xx][0-9a-fA-F]+
+ space [ \t]+
+-path [^ \t\n]+
++path [^ \t\n,]+
+ ident [_a-zA-Z][_a-zA-Z0-9]+
+
+ %%
+diff --git a/tools/perf/util/comm.c b/tools/perf/util/comm.c
+index 49b79cf0c5cc51..8aa456d7c2cd2d 100644
+--- a/tools/perf/util/comm.c
++++ b/tools/perf/util/comm.c
+@@ -5,6 +5,8 @@
+ #include <internal/rc_check.h>
+ #include <linux/refcount.h>
+ #include <linux/zalloc.h>
++#include <tools/libc_compat.h> // reallocarray
++
+ #include "rwsem.h"
+
+ DECLARE_RC_STRUCT(comm_str) {
+diff --git a/tools/perf/util/debug.c b/tools/perf/util/debug.c
+index d633d15329fa09..e56330c85fe7e1 100644
+--- a/tools/perf/util/debug.c
++++ b/tools/perf/util/debug.c
+@@ -46,8 +46,8 @@ int debug_type_profile;
+ FILE *debug_file(void)
+ {
+ if (!_debug_file) {
+- pr_warning_once("debug_file not set");
+ debug_set_file(stderr);
++ pr_warning_once("debug_file not set");
+ }
+ return _debug_file;
+ }
+diff --git a/tools/perf/util/dso.h b/tools/perf/util/dso.h
+index bb8e8f444054d8..c0472a41147c3c 100644
+--- a/tools/perf/util/dso.h
++++ b/tools/perf/util/dso.h
+@@ -808,7 +808,9 @@ static inline bool dso__is_kcore(const struct dso *dso)
+
+ static inline bool dso__is_kallsyms(const struct dso *dso)
+ {
+- return RC_CHK_ACCESS(dso)->kernel && RC_CHK_ACCESS(dso)->long_name[0] != '/';
++ enum dso_binary_type bt = dso__binary_type(dso);
++
++ return bt == DSO_BINARY_TYPE__KALLSYMS || bt == DSO_BINARY_TYPE__GUEST_KALLSYMS;
+ }
+
+ bool dso__is_object_file(const struct dso *dso);
+diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
+index a9df84692d4a88..dac87dccaaaa5d 100644
+--- a/tools/perf/util/evlist.c
++++ b/tools/perf/util/evlist.c
+@@ -1434,19 +1434,18 @@ static int evlist__create_syswide_maps(struct evlist *evlist)
+ */
+ cpus = perf_cpu_map__new_online_cpus();
+ if (!cpus)
+- goto out;
++ return -ENOMEM;
+
+ threads = perf_thread_map__new_dummy();
+- if (!threads)
+- goto out_put;
++ if (!threads) {
++ perf_cpu_map__put(cpus);
++ return -ENOMEM;
++ }
+
+ perf_evlist__set_maps(&evlist->core, cpus, threads);
+-
+ perf_thread_map__put(threads);
+-out_put:
+ perf_cpu_map__put(cpus);
+-out:
+- return -ENOMEM;
++ return 0;
+ }
+
+ int evlist__open(struct evlist *evlist)
+diff --git a/tools/perf/util/intel-tpebs.c b/tools/perf/util/intel-tpebs.c
+index 50a3c3e0716065..2c421b475b3b8b 100644
+--- a/tools/perf/util/intel-tpebs.c
++++ b/tools/perf/util/intel-tpebs.c
+@@ -254,7 +254,7 @@ int tpebs_start(struct evlist *evsel_list)
+ new = zalloc(sizeof(*new));
+ if (!new) {
+ ret = -1;
+- zfree(name);
++ zfree(&name);
+ goto err;
+ }
+ new->name = name;
+diff --git a/tools/perf/util/pmu.c b/tools/perf/util/pmu.c
+index ed893c3c6ad938..8b4e346808b4c2 100644
+--- a/tools/perf/util/pmu.c
++++ b/tools/perf/util/pmu.c
+@@ -593,7 +593,7 @@ static int perf_pmu__new_alias(struct perf_pmu *pmu, const char *name,
+ };
+ if (pmu_events_table__find_event(pmu->events_table, pmu, name,
+ update_alias, &data) == 0)
+- pmu->cpu_json_aliases++;
++ pmu->cpu_common_json_aliases++;
+ }
+ pmu->sysfs_aliases++;
+ break;
+@@ -1807,9 +1807,10 @@ size_t perf_pmu__num_events(struct perf_pmu *pmu)
+ if (pmu->cpu_aliases_added)
+ nr += pmu->cpu_json_aliases;
+ else if (pmu->events_table)
+- nr += pmu_events_table__num_events(pmu->events_table, pmu) - pmu->cpu_json_aliases;
++ nr += pmu_events_table__num_events(pmu->events_table, pmu) -
++ pmu->cpu_common_json_aliases;
+ else
+- assert(pmu->cpu_json_aliases == 0);
++ assert(pmu->cpu_json_aliases == 0 && pmu->cpu_common_json_aliases == 0);
+
+ return pmu->selectable ? nr + 1 : nr;
+ }
+diff --git a/tools/perf/util/pmu.h b/tools/perf/util/pmu.h
+index 4397c48ad569a3..bcd278b9b546fb 100644
+--- a/tools/perf/util/pmu.h
++++ b/tools/perf/util/pmu.h
+@@ -131,6 +131,11 @@ struct perf_pmu {
+ uint32_t cpu_json_aliases;
+ /** @sys_json_aliases: Number of json event aliases loaded matching the PMU's identifier. */
+ uint32_t sys_json_aliases;
++ /**
++ * @cpu_common_json_aliases: Number of json events that overlapped with sysfs when
++ * loading all sysfs events.
++ */
++ uint32_t cpu_common_json_aliases;
+ /** @sysfs_aliases_loaded: Are sysfs aliases loaded from disk? */
+ bool sysfs_aliases_loaded;
+ /**
+diff --git a/tools/perf/util/pmus.c b/tools/perf/util/pmus.c
+index d7d67e09d759bb..362596ed272945 100644
+--- a/tools/perf/util/pmus.c
++++ b/tools/perf/util/pmus.c
+@@ -701,11 +701,25 @@ char *perf_pmus__default_pmu_name(void)
+ struct perf_pmu *evsel__find_pmu(const struct evsel *evsel)
+ {
+ struct perf_pmu *pmu = evsel->pmu;
++ bool legacy_core_type;
+
+- if (!pmu) {
+- pmu = perf_pmus__find_by_type(evsel->core.attr.type);
+- ((struct evsel *)evsel)->pmu = pmu;
++ if (pmu)
++ return pmu;
++
++ pmu = perf_pmus__find_by_type(evsel->core.attr.type);
++ legacy_core_type =
++ evsel->core.attr.type == PERF_TYPE_HARDWARE ||
++ evsel->core.attr.type == PERF_TYPE_HW_CACHE;
++ if (!pmu && legacy_core_type) {
++ if (perf_pmus__supports_extended_type()) {
++ u32 type = evsel->core.attr.config >> PERF_PMU_TYPE_SHIFT;
++
++ pmu = perf_pmus__find_by_type(type);
++ } else {
++ pmu = perf_pmus__find_core_pmu();
++ }
+ }
++ ((struct evsel *)evsel)->pmu = pmu;
+ return pmu;
+ }
+
+diff --git a/tools/perf/util/python.c b/tools/perf/util/python.c
+index ee3d43a7ba4570..e7f36ea9e2fa12 100644
+--- a/tools/perf/util/python.c
++++ b/tools/perf/util/python.c
+@@ -79,7 +79,7 @@ struct pyrf_event {
+ };
+
+ #define sample_members \
+- sample_member_def(sample_ip, ip, T_ULONGLONG, "event type"), \
++ sample_member_def(sample_ip, ip, T_ULONGLONG, "event ip"), \
+ sample_member_def(sample_pid, pid, T_INT, "event pid"), \
+ sample_member_def(sample_tid, tid, T_INT, "event tid"), \
+ sample_member_def(sample_time, time, T_ULONGLONG, "event timestamp"), \
+@@ -512,6 +512,11 @@ static PyObject *pyrf_event__new(union perf_event *event)
+ event->header.type == PERF_RECORD_SWITCH_CPU_WIDE))
+ return NULL;
+
++ // FIXME this better be dynamic or we need to parse everything
++ // before calling perf_mmap__consume(), including tracepoint fields.
++ if (sizeof(pevent->event) < event->header.size)
++ return NULL;
++
+ ptype = pyrf_event__type[event->header.type];
+ pevent = PyObject_New(struct pyrf_event, ptype);
+ if (pevent != NULL)
+@@ -1011,20 +1016,22 @@ static PyObject *pyrf_evlist__read_on_cpu(struct pyrf_evlist *pevlist,
+
+ evsel = evlist__event2evsel(evlist, event);
+ if (!evsel) {
++ Py_DECREF(pyevent);
+ Py_INCREF(Py_None);
+ return Py_None;
+ }
+
+ pevent->evsel = evsel;
+
+- err = evsel__parse_sample(evsel, event, &pevent->sample);
+-
+- /* Consume the even only after we parsed it out. */
+ perf_mmap__consume(&md->core);
+
+- if (err)
++ err = evsel__parse_sample(evsel, &pevent->event, &pevent->sample);
++ if (err) {
++ Py_DECREF(pyevent);
+ return PyErr_Format(PyExc_OSError,
+ "perf: can't parse sample, err=%d", err);
++ }
++
+ return pyevent;
+ }
+ end:
+diff --git a/tools/perf/util/stat-shadow.c b/tools/perf/util/stat-shadow.c
+index 99376c12dd8ec9..7c49997fab3a3a 100644
+--- a/tools/perf/util/stat-shadow.c
++++ b/tools/perf/util/stat-shadow.c
+@@ -154,6 +154,7 @@ static double find_stat(const struct evsel *evsel, int aggr_idx, enum stat_type
+ {
+ const struct evsel *cur;
+ int evsel_ctx = evsel_context(evsel);
++ struct perf_pmu *evsel_pmu = evsel__find_pmu(evsel);
+
+ evlist__for_each_entry(evsel->evlist, cur) {
+ struct perf_stat_aggr *aggr;
+@@ -180,7 +181,7 @@ static double find_stat(const struct evsel *evsel, int aggr_idx, enum stat_type
+ * Except the SW CLOCK events,
+ * ignore if not the PMU we're looking for.
+ */
+- if ((type != STAT_NSECS) && (evsel->pmu != cur->pmu))
++ if ((type != STAT_NSECS) && (evsel_pmu != evsel__find_pmu(cur)))
+ continue;
+
+ aggr = &cur->stats->aggr[aggr_idx];
+diff --git a/tools/perf/util/units.c b/tools/perf/util/units.c
+index 32c39cfe209b3b..4c6a86e1cb54b2 100644
+--- a/tools/perf/util/units.c
++++ b/tools/perf/util/units.c
+@@ -64,7 +64,7 @@ unsigned long convert_unit(unsigned long value, char *unit)
+
+ int unit_number__scnprintf(char *buf, size_t size, u64 n)
+ {
+- char unit[4] = "BKMG";
++ char unit[] = "BKMG";
+ int i = 0;
+
+ while (((n / 1024) > 1) && (i < 3)) {
+diff --git a/tools/power/x86/turbostat/turbostat.8 b/tools/power/x86/turbostat/turbostat.8
+index 56c7ff6efcdabc..a3cf1d17163ae7 100644
+--- a/tools/power/x86/turbostat/turbostat.8
++++ b/tools/power/x86/turbostat/turbostat.8
+@@ -168,6 +168,8 @@ The system configuration dump (if --quiet is not used) is followed by statistics
+ .PP
+ \fBPkgTmp\fP Degrees Celsius reported by the per-package Package Thermal Monitor.
+ .PP
+\fBCoreThr\fP Core Thermal Throttling events during the measurement interval. Note that events since boot can be found in /sys/devices/system/cpu/cpu*/thermal_throttle/*
++.PP
+ \fBGFX%rc6\fP The percentage of time the GPU is in the "render C6" state, rc6, during the measurement interval. From /sys/class/drm/card0/power/rc6_residency_ms or /sys/class/drm/card0/gt/gt0/rc6_residency_ms or /sys/class/drm/card0/device/tile0/gtN/gtidle/idle_residency_ms depending on the graphics driver being used.
+ .PP
+ \fBGFXMHz\fP Instantaneous snapshot of what sysfs presents at the end of the measurement interval. From /sys/class/graphics/fb0/device/drm/card0/gt_cur_freq_mhz or /sys/class/drm/card0/gt_cur_freq_mhz or /sys/class/drm/card0/gt/gt0/rps_cur_freq_mhz or /sys/class/drm/card0/device/tile0/gtN/freq0/cur_freq depending on the graphics driver being used.
+diff --git a/tools/power/x86/turbostat/turbostat.c b/tools/power/x86/turbostat/turbostat.c
+index 235e82fe7d0a56..77ef60980ee581 100644
+--- a/tools/power/x86/turbostat/turbostat.c
++++ b/tools/power/x86/turbostat/turbostat.c
+@@ -3242,7 +3242,7 @@ void delta_core(struct core_data *new, struct core_data *old)
+ old->c6 = new->c6 - old->c6;
+ old->c7 = new->c7 - old->c7;
+ old->core_temp_c = new->core_temp_c;
+- old->core_throt_cnt = new->core_throt_cnt;
++ old->core_throt_cnt = new->core_throt_cnt - old->core_throt_cnt;
+ old->mc6_us = new->mc6_us - old->mc6_us;
+
+ DELTA_WRAP32(new->core_energy.raw_value, old->core_energy.raw_value);
+diff --git a/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c b/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c
+index cc184e4420f6e3..67557cda220835 100644
+--- a/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c
++++ b/tools/testing/selftests/bpf/prog_tests/bloom_filter_map.c
+@@ -6,6 +6,10 @@
+ #include <test_progs.h>
+ #include "bloom_filter_map.skel.h"
+
++#ifndef NUMA_NO_NODE
++#define NUMA_NO_NODE (-1)
++#endif
++
+ static void test_fail_cases(void)
+ {
+ LIBBPF_OPTS(bpf_map_create_opts, opts);
+@@ -69,6 +73,7 @@ static void test_success_cases(void)
+
+ /* Create a map */
+ opts.map_flags = BPF_F_ZERO_SEED | BPF_F_NUMA_NODE;
++ opts.numa_node = NUMA_NO_NODE;
+ fd = bpf_map_create(BPF_MAP_TYPE_BLOOM_FILTER, NULL, 0, sizeof(value), 100, &opts);
+ if (!ASSERT_GE(fd, 0, "bpf_map_create bloom filter success case"))
+ return;
+diff --git a/tools/testing/selftests/bpf/prog_tests/tailcalls.c b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+index 40f22454cf05b0..1f0977742741f3 100644
+--- a/tools/testing/selftests/bpf/prog_tests/tailcalls.c
++++ b/tools/testing/selftests/bpf/prog_tests/tailcalls.c
+@@ -1599,6 +1599,7 @@ static void test_tailcall_bpf2bpf_freplace(void)
+ goto out;
+
+ err = bpf_link__destroy(freplace_link);
++ freplace_link = NULL;
+ if (!ASSERT_OK(err, "destroy link"))
+ goto out;
+
+diff --git a/tools/testing/selftests/bpf/progs/strncmp_bench.c b/tools/testing/selftests/bpf/progs/strncmp_bench.c
+index 18373a7df76e6c..f47bf88f8d2a73 100644
+--- a/tools/testing/selftests/bpf/progs/strncmp_bench.c
++++ b/tools/testing/selftests/bpf/progs/strncmp_bench.c
+@@ -35,7 +35,10 @@ static __always_inline int local_strncmp(const char *s1, unsigned int sz,
+ SEC("tp/syscalls/sys_enter_getpgid")
+ int strncmp_no_helper(void *ctx)
+ {
+- if (local_strncmp(str, cmp_str_len + 1, target) < 0)
++ const char *target_str = target;
++
++ barrier_var(target_str);
++ if (local_strncmp(str, cmp_str_len + 1, target_str) < 0)
+ __sync_add_and_fetch(&hits, 1);
+ return 0;
+ }
+diff --git a/tools/testing/selftests/mm/cow.c b/tools/testing/selftests/mm/cow.c
+index 1238e1c5aae150..d87c5b1763ff15 100644
+--- a/tools/testing/selftests/mm/cow.c
++++ b/tools/testing/selftests/mm/cow.c
+@@ -876,7 +876,7 @@ static void do_run_with_thp(test_fn fn, enum thp_run thp_run, size_t thpsize)
+ mremap_size = thpsize / 2;
+ mremap_mem = mmap(NULL, mremap_size, PROT_NONE,
+ MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+- if (mem == MAP_FAILED) {
++ if (mremap_mem == MAP_FAILED) {
+ ksft_test_result_fail("mmap() failed\n");
+ goto munmap;
+ }
+diff --git a/tools/testing/selftests/net/netfilter/br_netfilter.sh b/tools/testing/selftests/net/netfilter/br_netfilter.sh
+index c28379a965d838..1559ba275105ed 100755
+--- a/tools/testing/selftests/net/netfilter/br_netfilter.sh
++++ b/tools/testing/selftests/net/netfilter/br_netfilter.sh
+@@ -13,6 +13,12 @@ source lib.sh
+
+ checktool "nft --version" "run test without nft tool"
+
++read t < /proc/sys/kernel/tainted
++if [ "$t" -ne 0 ];then
++ echo SKIP: kernel is tainted
++ exit $ksft_skip
++fi
++
+ cleanup() {
+ cleanup_all_ns
+ }
+@@ -165,6 +171,7 @@ if [ "$t" -eq 0 ];then
+ echo PASS: kernel not tainted
+ else
+ echo ERROR: kernel is tainted
++ dmesg
+ ret=1
+ fi
+
+diff --git a/tools/testing/selftests/net/netfilter/br_netfilter_queue.sh b/tools/testing/selftests/net/netfilter/br_netfilter_queue.sh
+index 6a764d70ab06f9..4788641717d935 100755
+--- a/tools/testing/selftests/net/netfilter/br_netfilter_queue.sh
++++ b/tools/testing/selftests/net/netfilter/br_netfilter_queue.sh
+@@ -4,6 +4,12 @@ source lib.sh
+
+ checktool "nft --version" "run test without nft tool"
+
++read t < /proc/sys/kernel/tainted
++if [ "$t" -ne 0 ];then
++ echo SKIP: kernel is tainted
++ exit $ksft_skip
++fi
++
+ cleanup() {
+ cleanup_all_ns
+ }
+@@ -72,6 +78,7 @@ if [ "$t" -eq 0 ];then
+ echo PASS: kernel not tainted
+ else
+ echo ERROR: kernel is tainted
++ dmesg
+ exit 1
+ fi
+
+diff --git a/tools/testing/selftests/net/netfilter/nft_queue.sh b/tools/testing/selftests/net/netfilter/nft_queue.sh
+index a9d109fcc15c25..00fe1a6c1f30c4 100755
+--- a/tools/testing/selftests/net/netfilter/nft_queue.sh
++++ b/tools/testing/selftests/net/netfilter/nft_queue.sh
+@@ -593,6 +593,7 @@ EOF
+ echo "PASS: queue program exiting while packets queued"
+ else
+ echo "TAINT: queue program exiting while packets queued"
++ dmesg
+ ret=1
+ fi
+ }
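
One note for readers of the arm-spe.c hunks above: is_ldst_op() is a plain
bitmask test, and the second hunk gates memory-sample synthesis on the
operation type directly instead of inferring it from a zero data_src. A
minimal standalone sketch of the pattern follows; the bit position used
here is an assumption for illustration, not the kernel's actual value.

#include <stdio.h>

/* Illustrative bit position only; the real encoding lives in perf's
 * arm-spe decoder headers. */
#define ARM_SPE_OP_LDST (1u << 0)

/* Same shape as the macro the patch adds: non-zero iff the sampled
 * operation is a load/store. */
#define is_ldst_op(op) (!!((op) & ARM_SPE_OP_LDST))

int main(void)
{
	unsigned int ldst = ARM_SPE_OP_LDST;	/* e.g. a load record */
	unsigned int other = 1u << 4;		/* e.g. a branch record */

	printf("ldst=%d other=%d\n", is_ldst_op(ldst), is_ldst_op(other));
	return 0;
}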
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-04-10 13:50 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-04-10 13:50 UTC (permalink / raw
To: gentoo-commits
commit: 39b5396d38f81c7ab29618580608f8a8cfce9d74
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Apr 10 13:50:02 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Apr 10 13:50:02 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=39b5396d
Remove redundant patch
Removed:
2400_wifi-mt76-mt7921-null-ptr-deref-fix.patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 --
2400_wifi-mt76-mt7921-null-ptr-deref-fix.patch | 74 --------------------------
2 files changed, 78 deletions(-)
diff --git a/0000_README b/0000_README
index 26583822..7e2e4141 100644
--- a/0000_README
+++ b/0000_README
@@ -155,10 +155,6 @@ Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
-Patch: 2400_wifi-mt76-mt7921-null-ptr-deref-fix.patch
-From: https://github.com/nbd168/wireless/commit/adc3fd2a2277b7cc0b61692463771bf9bd298036
-Desc: wifi: mt76: mt7921: fix kernel panic due to null pointer dereference
-
Patch: 2910_bfp-mark-get-entry-ip-as--maybe-unused.patch
From: https://www.spinics.net/lists/stable/msg604665.html
Desc: bpf: mark get_entry_ip as __maybe_unused
diff --git a/2400_wifi-mt76-mt7921-null-ptr-deref-fix.patch b/2400_wifi-mt76-mt7921-null-ptr-deref-fix.patch
deleted file mode 100644
index 1cc1dbf3..00000000
--- a/2400_wifi-mt76-mt7921-null-ptr-deref-fix.patch
+++ /dev/null
@@ -1,74 +0,0 @@
-From adc3fd2a2277b7cc0b61692463771bf9bd298036 Mon Sep 17 00:00:00 2001
-From: Ming Yen Hsieh <mingyen.hsieh@mediatek.com>
-Date: Tue, 18 Feb 2025 11:33:42 +0800
-Subject: [PATCH] wifi: mt76: mt7921: fix kernel panic due to null pointer
- dereference
-
-Address a kernel panic caused by a null pointer dereference in the
-`mt792x_rx_get_wcid` function. The issue arises because the `deflink` structure
-is not properly initialized with the `sta` context. This patch ensures that the
-`deflink` structure is correctly linked to the `sta` context, preventing the
-null pointer dereference.
-
- BUG: kernel NULL pointer dereference, address: 0000000000000400
- #PF: supervisor read access in kernel mode
- #PF: error_code(0x0000) - not-present page
- PGD 0 P4D 0
- Oops: Oops: 0000 [#1] PREEMPT SMP NOPTI
- CPU: 0 UID: 0 PID: 470 Comm: mt76-usb-rx phy Not tainted 6.12.13-gentoo-dist #1
- Hardware name: /AMD HUDSON-M1, BIOS 4.6.4 11/15/2011
- RIP: 0010:mt792x_rx_get_wcid+0x48/0x140 [mt792x_lib]
- RSP: 0018:ffffa147c055fd98 EFLAGS: 00010202
- RAX: 0000000000000000 RBX: ffff8e9ecb652000 RCX: 0000000000000000
- RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff8e9ecb652000
- RBP: 0000000000000685 R08: ffff8e9ec6570000 R09: 0000000000000000
- R10: ffff8e9ecd2ca000 R11: ffff8e9f22a217c0 R12: 0000000038010119
- R13: 0000000080843801 R14: ffff8e9ec6570000 R15: ffff8e9ecb652000
- FS: 0000000000000000(0000) GS:ffff8e9f22a00000(0000) knlGS:0000000000000000
- CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
- CR2: 0000000000000400 CR3: 000000000d2ea000 CR4: 00000000000006f0
- Call Trace:
- <TASK>
- ? __die_body.cold+0x19/0x27
- ? page_fault_oops+0x15a/0x2f0
- ? search_module_extables+0x19/0x60
- ? search_bpf_extables+0x5f/0x80
- ? exc_page_fault+0x7e/0x180
- ? asm_exc_page_fault+0x26/0x30
- ? mt792x_rx_get_wcid+0x48/0x140 [mt792x_lib]
- mt7921_queue_rx_skb+0x1c6/0xaa0 [mt7921_common]
- mt76u_alloc_queues+0x784/0x810 [mt76_usb]
- ? __pfx___mt76_worker_fn+0x10/0x10 [mt76]
- __mt76_worker_fn+0x4f/0x80 [mt76]
- kthread+0xd2/0x100
- ? __pfx_kthread+0x10/0x10
- ret_from_fork+0x34/0x50
- ? __pfx_kthread+0x10/0x10
- ret_from_fork_asm+0x1a/0x30
- </TASK>
- ---[ end trace 0000000000000000 ]---
-
-Reported-by: Nick Morrow <usbwifi2024@gmail.com>
-Closes: https://github.com/morrownr/USB-WiFi/issues/577
-Cc: stable@vger.kernel.org
-Fixes: 90c10286b176 ("wifi: mt76: mt7925: Update mt792x_rx_get_wcid for per-link STA")
-Signed-off-by: Ming Yen Hsieh <mingyen.hsieh@mediatek.com>
-Tested-by: Salah Coronya <salah.coronya@gmail.com>
-Link: https://patch.msgid.link/20250218033343.1999648-1-mingyen.hsieh@mediatek.com
-Signed-off-by: Felix Fietkau <nbd@nbd.name>
----
- drivers/net/wireless/mediatek/mt76/mt7921/main.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/drivers/net/wireless/mediatek/mt76/mt7921/main.c b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
-index 13e58c328aff..78b77a54d195 100644
---- a/drivers/net/wireless/mediatek/mt76/mt7921/main.c
-+++ b/drivers/net/wireless/mediatek/mt76/mt7921/main.c
-@@ -811,6 +811,7 @@ int mt7921_mac_sta_add(struct mt76_dev *mdev, struct ieee80211_vif *vif,
- msta->deflink.wcid.phy_idx = mvif->bss_conf.mt76.band_idx;
- msta->deflink.wcid.tx_info |= MT_WCID_TX_INFO_SET;
- msta->deflink.last_txs = jiffies;
-+ msta->deflink.sta = msta;
-
- ret = mt76_connac_pm_wake(&dev->mphy, &dev->pm);
- if (ret)
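
The commit message quoted above pins the crash on an uninitialized
deflink.sta back-pointer. As a self-contained sketch of the invariant the
one-line fix restores (the struct layout below is a simplified stand-in,
not the real mt76 definitions):

#include <assert.h>
#include <stddef.h>

struct sta;

/* Simplified stand-ins for the mt76 structures named in the quoted
 * commit message; only the field relevant to the crash is kept. */
struct sta_link {
	struct sta *sta;	/* back-pointer the fix initializes */
};

struct sta {
	struct sta_link deflink;
};

/* The fix establishes the back-pointer at station-add time, so a later
 * RX-path lookup (mt792x_rx_get_wcid in the trace) never sees NULL. */
static void sta_add(struct sta *msta)
{
	msta->deflink.sta = msta;
}

int main(void)
{
	struct sta s = { .deflink = { .sta = NULL } };

	sta_add(&s);
	assert(s.deflink.sta == &s);
	return 0;
}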
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-04-20 9:38 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-04-20 9:38 UTC (permalink / raw
To: gentoo-commits
commit: a28004010231768f718bec49a133cfe566a60c83
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Apr 20 09:38:11 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Apr 20 09:38:11 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a2800401
Linux patch 6.12.24
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1023_linux-6.12.24.patch | 16325 +++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 16329 insertions(+)
diff --git a/0000_README b/0000_README
index 7e2e4141..6b594792 100644
--- a/0000_README
+++ b/0000_README
@@ -135,6 +135,10 @@ Patch: 1022_linux-6.12.23.patch
From: https://www.kernel.org
Desc: Linux 6.12.23
+Patch: 1023_linux-6.12.24.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.24
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1023_linux-6.12.24.patch b/1023_linux-6.12.24.patch
new file mode 100644
index 00000000..a8202e32
--- /dev/null
+++ b/1023_linux-6.12.24.patch
@@ -0,0 +1,16325 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index d401577b5a6ace..607a8937f17549 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -3028,6 +3028,8 @@
+ * max_sec_lba48: Set or clear transfer size limit to
+ 65535 sectors.
+
++ * external: Mark port as external (hotplug-capable).
++
+ * [no]lpm: Enable or disable link power management.
+
+ * [no]setxfer: Indicate if transfer speed mode setting
+diff --git a/Documentation/devicetree/bindings/arm/qcom,coresight-tpda.yaml b/Documentation/devicetree/bindings/arm/qcom,coresight-tpda.yaml
+index 76163abed655a2..5ed40f21b8eb5d 100644
+--- a/Documentation/devicetree/bindings/arm/qcom,coresight-tpda.yaml
++++ b/Documentation/devicetree/bindings/arm/qcom,coresight-tpda.yaml
+@@ -55,8 +55,7 @@ properties:
+ - const: arm,primecell
+
+ reg:
+- minItems: 1
+- maxItems: 2
++ maxItems: 1
+
+ clocks:
+ maxItems: 1
+diff --git a/Documentation/devicetree/bindings/arm/qcom,coresight-tpdm.yaml b/Documentation/devicetree/bindings/arm/qcom,coresight-tpdm.yaml
+index 8eec07d9d45428..07d21a3617f5b2 100644
+--- a/Documentation/devicetree/bindings/arm/qcom,coresight-tpdm.yaml
++++ b/Documentation/devicetree/bindings/arm/qcom,coresight-tpdm.yaml
+@@ -41,8 +41,7 @@ properties:
+ - const: arm,primecell
+
+ reg:
+- minItems: 1
+- maxItems: 2
++ maxItems: 1
+
+ qcom,dsb-element-bits:
+ description:
+diff --git a/Documentation/devicetree/bindings/media/i2c/st,st-mipid02.yaml b/Documentation/devicetree/bindings/media/i2c/st,st-mipid02.yaml
+index b68141264c0e9f..4d40e75b4e1eff 100644
+--- a/Documentation/devicetree/bindings/media/i2c/st,st-mipid02.yaml
++++ b/Documentation/devicetree/bindings/media/i2c/st,st-mipid02.yaml
+@@ -71,7 +71,7 @@ properties:
+ description:
+ Any lane can be inverted or not.
+ minItems: 1
+- maxItems: 2
++ maxItems: 3
+
+ required:
+ - data-lanes
+diff --git a/Makefile b/Makefile
+index 6a2a60eb67a3e7..e1fa425089c220 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 23
++SUBLEVEL = 24
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -1013,6 +1013,9 @@ ifdef CONFIG_CC_IS_GCC
+ KBUILD_CFLAGS += -fconserve-stack
+ endif
+
++# Ensure compilers do not transform certain loops into calls to wcslen()
++KBUILD_CFLAGS += -fno-builtin-wcslen
++
+ # change __FILE__ to the relative path from the srctree
+ KBUILD_CPPFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)
+
+diff --git a/arch/arm64/boot/dts/exynos/google/gs101.dtsi b/arch/arm64/boot/dts/exynos/google/gs101.dtsi
+index 302c5beb224aa4..b8f8255f840b13 100644
+--- a/arch/arm64/boot/dts/exynos/google/gs101.dtsi
++++ b/arch/arm64/boot/dts/exynos/google/gs101.dtsi
+@@ -1451,6 +1451,7 @@ pinctrl_gsacore: pinctrl@17a80000 {
+ /* TODO: update once support for this CMU exists */
+ clocks = <0>;
+ clock-names = "pclk";
++ status = "disabled";
+ };
+
+ cmu_top: clock-controller@1e080000 {
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173.dtsi b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+index 3458be7f7f6114..f49ec749590609 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+@@ -1255,8 +1255,7 @@ dpi0_out: endpoint {
+ };
+
+ pwm0: pwm@1401e000 {
+- compatible = "mediatek,mt8173-disp-pwm",
+- "mediatek,mt6595-disp-pwm";
++ compatible = "mediatek,mt8173-disp-pwm";
+ reg = <0 0x1401e000 0 0x1000>;
+ #pwm-cells = <2>;
+ clocks = <&mmsys CLK_MM_DISP_PWM026M>,
+@@ -1266,8 +1265,7 @@ pwm0: pwm@1401e000 {
+ };
+
+ pwm1: pwm@1401f000 {
+- compatible = "mediatek,mt8173-disp-pwm",
+- "mediatek,mt6595-disp-pwm";
++ compatible = "mediatek,mt8173-disp-pwm";
+ reg = <0 0x1401f000 0 0x1000>;
+ #pwm-cells = <2>;
+ clocks = <&mmsys CLK_MM_DISP_PWM126M>,
+diff --git a/arch/arm64/boot/dts/nvidia/tegra234-p3768-0000+p3767.dtsi b/arch/arm64/boot/dts/nvidia/tegra234-p3768-0000+p3767.dtsi
+index 19340d13f789f0..41821354bbdae6 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra234-p3768-0000+p3767.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra234-p3768-0000+p3767.dtsi
+@@ -227,13 +227,6 @@ key-power {
+ wakeup-event-action = <EV_ACT_ASSERTED>;
+ wakeup-source;
+ };
+-
+- key-suspend {
+- label = "Suspend";
+- gpios = <&gpio TEGRA234_MAIN_GPIO(G, 2) GPIO_ACTIVE_LOW>;
+- linux,input-type = <EV_KEY>;
+- linux,code = <KEY_SLEEP>;
+- };
+ };
+
+ fan: pwm-fan {
+diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
+index 488f8e75134959..2a4e686e633c62 100644
+--- a/arch/arm64/include/asm/cputype.h
++++ b/arch/arm64/include/asm/cputype.h
+@@ -75,6 +75,7 @@
+ #define ARM_CPU_PART_CORTEX_A76 0xD0B
+ #define ARM_CPU_PART_NEOVERSE_N1 0xD0C
+ #define ARM_CPU_PART_CORTEX_A77 0xD0D
++#define ARM_CPU_PART_CORTEX_A76AE 0xD0E
+ #define ARM_CPU_PART_NEOVERSE_V1 0xD40
+ #define ARM_CPU_PART_CORTEX_A78 0xD41
+ #define ARM_CPU_PART_CORTEX_A78AE 0xD42
+@@ -119,6 +120,7 @@
+ #define QCOM_CPU_PART_KRYO 0x200
+ #define QCOM_CPU_PART_KRYO_2XX_GOLD 0x800
+ #define QCOM_CPU_PART_KRYO_2XX_SILVER 0x801
++#define QCOM_CPU_PART_KRYO_3XX_GOLD 0x802
+ #define QCOM_CPU_PART_KRYO_3XX_SILVER 0x803
+ #define QCOM_CPU_PART_KRYO_4XX_GOLD 0x804
+ #define QCOM_CPU_PART_KRYO_4XX_SILVER 0x805
+@@ -158,6 +160,7 @@
+ #define MIDR_CORTEX_A76 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76)
+ #define MIDR_NEOVERSE_N1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_N1)
+ #define MIDR_CORTEX_A77 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A77)
++#define MIDR_CORTEX_A76AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A76AE)
+ #define MIDR_NEOVERSE_V1 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_NEOVERSE_V1)
+ #define MIDR_CORTEX_A78 MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78)
+ #define MIDR_CORTEX_A78AE MIDR_CPU_MODEL(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A78AE)
+@@ -195,6 +198,7 @@
+ #define MIDR_QCOM_KRYO MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO)
+ #define MIDR_QCOM_KRYO_2XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_GOLD)
+ #define MIDR_QCOM_KRYO_2XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_2XX_SILVER)
++#define MIDR_QCOM_KRYO_3XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_3XX_GOLD)
+ #define MIDR_QCOM_KRYO_3XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_3XX_SILVER)
+ #define MIDR_QCOM_KRYO_4XX_GOLD MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_GOLD)
+ #define MIDR_QCOM_KRYO_4XX_SILVER MIDR_CPU_MODEL(ARM_CPU_IMP_QCOM, QCOM_CPU_PART_KRYO_4XX_SILVER)
+diff --git a/arch/arm64/include/asm/spectre.h b/arch/arm64/include/asm/spectre.h
+index 0c4d9045c31f47..f1524cdeacf1c4 100644
+--- a/arch/arm64/include/asm/spectre.h
++++ b/arch/arm64/include/asm/spectre.h
+@@ -97,7 +97,6 @@ enum mitigation_state arm64_get_meltdown_state(void);
+
+ enum mitigation_state arm64_get_spectre_bhb_state(void);
+ bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry, int scope);
+-u8 spectre_bhb_loop_affected(int scope);
+ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *__unused);
+ bool try_emulate_el1_ssbs(struct pt_regs *regs, u32 instr);
+
+diff --git a/arch/arm64/include/asm/traps.h b/arch/arm64/include/asm/traps.h
+index d780d1bd2eacb9..82cf1f879c61df 100644
+--- a/arch/arm64/include/asm/traps.h
++++ b/arch/arm64/include/asm/traps.h
+@@ -109,10 +109,9 @@ static inline void arm64_mops_reset_regs(struct user_pt_regs *regs, unsigned lon
+ int dstreg = ESR_ELx_MOPS_ISS_DESTREG(esr);
+ int srcreg = ESR_ELx_MOPS_ISS_SRCREG(esr);
+ int sizereg = ESR_ELx_MOPS_ISS_SIZEREG(esr);
+- unsigned long dst, src, size;
++ unsigned long dst, size;
+
+ dst = regs->regs[dstreg];
+- src = regs->regs[srcreg];
+ size = regs->regs[sizereg];
+
+ /*
+@@ -129,6 +128,7 @@ static inline void arm64_mops_reset_regs(struct user_pt_regs *regs, unsigned lon
+ }
+ } else {
+ /* CPY* instruction */
++ unsigned long src = regs->regs[srcreg];
+ if (!(option_a ^ wrong_option)) {
+ /* Format is from Option B */
+ if (regs->pstate & PSR_N_BIT) {
+diff --git a/arch/arm64/kernel/proton-pack.c b/arch/arm64/kernel/proton-pack.c
+index da53722f95d41a..0f51fd10b4b063 100644
+--- a/arch/arm64/kernel/proton-pack.c
++++ b/arch/arm64/kernel/proton-pack.c
+@@ -845,52 +845,86 @@ static unsigned long system_bhb_mitigations;
+ * This must be called with SCOPE_LOCAL_CPU for each type of CPU, before any
+ * SCOPE_SYSTEM call will give the right answer.
+ */
+-u8 spectre_bhb_loop_affected(int scope)
++static bool is_spectre_bhb_safe(int scope)
++{
++ static const struct midr_range spectre_bhb_safe_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A35),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A53),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A55),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A510),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A520),
++ MIDR_ALL_VERSIONS(MIDR_BRAHMA_B53),
++ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_2XX_SILVER),
++ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_3XX_SILVER),
++ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_4XX_SILVER),
++ {},
++ };
++ static bool all_safe = true;
++
++ if (scope != SCOPE_LOCAL_CPU)
++ return all_safe;
++
++ if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_safe_list))
++ return true;
++
++ all_safe = false;
++
++ return false;
++}
++
++static u8 spectre_bhb_loop_affected(void)
+ {
+ u8 k = 0;
+- static u8 max_bhb_k;
+-
+- if (scope == SCOPE_LOCAL_CPU) {
+- static const struct midr_range spectre_bhb_k32_list[] = {
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A78AE),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
+- MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
+- MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1),
+- {},
+- };
+- static const struct midr_range spectre_bhb_k24_list[] = {
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A77),
+- MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
+- {},
+- };
+- static const struct midr_range spectre_bhb_k11_list[] = {
+- MIDR_ALL_VERSIONS(MIDR_AMPERE1),
+- {},
+- };
+- static const struct midr_range spectre_bhb_k8_list[] = {
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
+- {},
+- };
+-
+- if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k32_list))
+- k = 32;
+- else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list))
+- k = 24;
+- else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k11_list))
+- k = 11;
+- else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list))
+- k = 8;
+-
+- max_bhb_k = max(max_bhb_k, k);
+- } else {
+- k = max_bhb_k;
+- }
++
++ static const struct midr_range spectre_bhb_k132_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_X3),
++ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V2),
++ };
++ static const struct midr_range spectre_bhb_k38_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A715),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A720),
++ };
++ static const struct midr_range spectre_bhb_k32_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78AE),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A78C),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_X1),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A710),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_X2),
++ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N2),
++ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_V1),
++ {},
++ };
++ static const struct midr_range spectre_bhb_k24_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A76),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A76AE),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A77),
++ MIDR_ALL_VERSIONS(MIDR_NEOVERSE_N1),
++ MIDR_ALL_VERSIONS(MIDR_QCOM_KRYO_4XX_GOLD),
++ {},
++ };
++ static const struct midr_range spectre_bhb_k11_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_AMPERE1),
++ {},
++ };
++ static const struct midr_range spectre_bhb_k8_list[] = {
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A72),
++ MIDR_ALL_VERSIONS(MIDR_CORTEX_A57),
++ {},
++ };
++
++ if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k132_list))
++ k = 132;
++ else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k38_list))
++ k = 38;
++ else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k32_list))
++ k = 32;
++ else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k24_list))
++ k = 24;
++ else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k11_list))
++ k = 11;
++ else if (is_midr_in_range_list(read_cpuid_id(), spectre_bhb_k8_list))
++ k = 8;
+
+ return k;
+ }
+@@ -916,29 +950,13 @@ static enum mitigation_state spectre_bhb_get_cpu_fw_mitigation_state(void)
+ }
+ }
+
+-static bool is_spectre_bhb_fw_affected(int scope)
++static bool has_spectre_bhb_fw_mitigation(void)
+ {
+- static bool system_affected;
+ enum mitigation_state fw_state;
+ bool has_smccc = arm_smccc_1_1_get_conduit() != SMCCC_CONDUIT_NONE;
+- static const struct midr_range spectre_bhb_firmware_mitigated_list[] = {
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A73),
+- MIDR_ALL_VERSIONS(MIDR_CORTEX_A75),
+- {},
+- };
+- bool cpu_in_list = is_midr_in_range_list(read_cpuid_id(),
+- spectre_bhb_firmware_mitigated_list);
+-
+- if (scope != SCOPE_LOCAL_CPU)
+- return system_affected;
+
+ fw_state = spectre_bhb_get_cpu_fw_mitigation_state();
+- if (cpu_in_list || (has_smccc && fw_state == SPECTRE_MITIGATED)) {
+- system_affected = true;
+- return true;
+- }
+-
+- return false;
++ return has_smccc && fw_state == SPECTRE_MITIGATED;
+ }
+
+ static bool supports_ecbhb(int scope)
+@@ -954,6 +972,8 @@ static bool supports_ecbhb(int scope)
+ ID_AA64MMFR1_EL1_ECBHB_SHIFT);
+ }
+
++static u8 max_bhb_k;
++
+ bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry,
+ int scope)
+ {
+@@ -962,16 +982,18 @@ bool is_spectre_bhb_affected(const struct arm64_cpu_capabilities *entry,
+ if (supports_csv2p3(scope))
+ return false;
+
+- if (supports_clearbhb(scope))
+- return true;
+-
+- if (spectre_bhb_loop_affected(scope))
+- return true;
++ if (is_spectre_bhb_safe(scope))
++ return false;
+
+- if (is_spectre_bhb_fw_affected(scope))
+- return true;
++ /*
++ * At this point the core isn't known to be "safe" so we're going to
++ * assume it's vulnerable. We still need to update `max_bhb_k`, though,
++ * but only if we aren't mitigating with clearbhb.
++ */
++ if (scope == SCOPE_LOCAL_CPU && !supports_clearbhb(SCOPE_LOCAL_CPU))
++ max_bhb_k = max(max_bhb_k, spectre_bhb_loop_affected());
+
+- return false;
++ return true;
+ }
+
+ static void this_cpu_set_vectors(enum arm64_bp_harden_el1_vectors slot)
+@@ -1002,7 +1024,7 @@ early_param("nospectre_bhb", parse_spectre_bhb_param);
+ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
+ {
+ bp_hardening_cb_t cpu_cb;
+- enum mitigation_state fw_state, state = SPECTRE_VULNERABLE;
++ enum mitigation_state state = SPECTRE_VULNERABLE;
+ struct bp_hardening_data *data = this_cpu_ptr(&bp_hardening_data);
+
+ if (!is_spectre_bhb_affected(entry, SCOPE_LOCAL_CPU))
+@@ -1028,7 +1050,7 @@ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
+ this_cpu_set_vectors(EL1_VECTOR_BHB_CLEAR_INSN);
+ state = SPECTRE_MITIGATED;
+ set_bit(BHB_INSN, &system_bhb_mitigations);
+- } else if (spectre_bhb_loop_affected(SCOPE_LOCAL_CPU)) {
++ } else if (spectre_bhb_loop_affected()) {
+ /*
+ * Ensure KVM uses the indirect vector which will have the
+ * branchy-loop added. A57/A72-r0 will already have selected
+@@ -1041,32 +1063,29 @@ void spectre_bhb_enable_mitigation(const struct arm64_cpu_capabilities *entry)
+ this_cpu_set_vectors(EL1_VECTOR_BHB_LOOP);
+ state = SPECTRE_MITIGATED;
+ set_bit(BHB_LOOP, &system_bhb_mitigations);
+- } else if (is_spectre_bhb_fw_affected(SCOPE_LOCAL_CPU)) {
+- fw_state = spectre_bhb_get_cpu_fw_mitigation_state();
+- if (fw_state == SPECTRE_MITIGATED) {
+- /*
+- * Ensure KVM uses one of the spectre bp_hardening
+- * vectors. The indirect vector doesn't include the EL3
+- * call, so needs upgrading to
+- * HYP_VECTOR_SPECTRE_INDIRECT.
+- */
+- if (!data->slot || data->slot == HYP_VECTOR_INDIRECT)
+- data->slot += 1;
+-
+- this_cpu_set_vectors(EL1_VECTOR_BHB_FW);
+-
+- /*
+- * The WA3 call in the vectors supersedes the WA1 call
+- * made during context-switch. Uninstall any firmware
+- * bp_hardening callback.
+- */
+- cpu_cb = spectre_v2_get_sw_mitigation_cb();
+- if (__this_cpu_read(bp_hardening_data.fn) != cpu_cb)
+- __this_cpu_write(bp_hardening_data.fn, NULL);
+-
+- state = SPECTRE_MITIGATED;
+- set_bit(BHB_FW, &system_bhb_mitigations);
+- }
++ } else if (has_spectre_bhb_fw_mitigation()) {
++ /*
++ * Ensure KVM uses one of the spectre bp_hardening
++ * vectors. The indirect vector doesn't include the EL3
++ * call, so needs upgrading to
++ * HYP_VECTOR_SPECTRE_INDIRECT.
++ */
++ if (!data->slot || data->slot == HYP_VECTOR_INDIRECT)
++ data->slot += 1;
++
++ this_cpu_set_vectors(EL1_VECTOR_BHB_FW);
++
++ /*
++ * The WA3 call in the vectors supersedes the WA1 call
++ * made during context-switch. Uninstall any firmware
++ * bp_hardening callback.
++ */
++ cpu_cb = spectre_v2_get_sw_mitigation_cb();
++ if (__this_cpu_read(bp_hardening_data.fn) != cpu_cb)
++ __this_cpu_write(bp_hardening_data.fn, NULL);
++
++ state = SPECTRE_MITIGATED;
++ set_bit(BHB_FW, &system_bhb_mitigations);
+ }
+
+ update_mitigation_state(&spectre_bhb_state, state);
+@@ -1100,7 +1119,6 @@ void noinstr spectre_bhb_patch_loop_iter(struct alt_instr *alt,
+ {
+ u8 rd;
+ u32 insn;
+- u16 loop_count = spectre_bhb_loop_affected(SCOPE_SYSTEM);
+
+ BUG_ON(nr_inst != 1); /* MOV -> MOV */
+
+@@ -1109,7 +1127,7 @@ void noinstr spectre_bhb_patch_loop_iter(struct alt_instr *alt,
+
+ insn = le32_to_cpu(*origptr);
+ rd = aarch64_insn_decode_register(AARCH64_INSN_REGTYPE_RD, insn);
+- insn = aarch64_insn_gen_movewide(rd, loop_count, 0,
++ insn = aarch64_insn_gen_movewide(rd, max_bhb_k, 0,
+ AARCH64_INSN_VARIANT_64BIT,
+ AARCH64_INSN_MOVEWIDE_ZERO);
+ *updptr++ = cpu_to_le32(insn);
+diff --git a/arch/arm64/kvm/arm.c b/arch/arm64/kvm/arm.c
+index 634d3f62481827..7d301da8ff2899 100644
+--- a/arch/arm64/kvm/arm.c
++++ b/arch/arm64/kvm/arm.c
+@@ -493,7 +493,11 @@ int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
+ if (err)
+ return err;
+
+- return kvm_share_hyp(vcpu, vcpu + 1);
++ err = kvm_share_hyp(vcpu, vcpu + 1);
++ if (err)
++ kvm_vgic_vcpu_destroy(vcpu);
++
++ return err;
+ }
+
+ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
+diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
+index e59c628c93f20d..9bcd51fd67d4e0 100644
+--- a/arch/arm64/mm/mmu.c
++++ b/arch/arm64/mm/mmu.c
+@@ -1360,7 +1360,8 @@ int arch_add_memory(int nid, u64 start, u64 size,
+ __remove_pgd_mapping(swapper_pg_dir,
+ __phys_to_virt(start), size);
+ else {
+- max_pfn = PFN_UP(start + size);
++ /* Address of hotplugged memory can be smaller */
++ max_pfn = max(max_pfn, PFN_UP(start + size));
+ max_low_pfn = max_pfn;
+ }
+
+diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
+index f14329989e9a71..4b6ce4f07bc2c3 100644
+--- a/arch/powerpc/kvm/powerpc.c
++++ b/arch/powerpc/kvm/powerpc.c
+@@ -550,12 +550,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
+
+ #ifdef CONFIG_PPC_BOOK3S_64
+ case KVM_CAP_SPAPR_TCE:
++ fallthrough;
+ case KVM_CAP_SPAPR_TCE_64:
+- r = 1;
+- break;
+ case KVM_CAP_SPAPR_TCE_VFIO:
+- r = !!cpu_has_feature(CPU_FTR_HVMODE);
+- break;
+ case KVM_CAP_PPC_RTAS:
+ case KVM_CAP_PPC_FIXUP_HCALL:
+ case KVM_CAP_PPC_ENABLE_HCALL:
+diff --git a/arch/s390/Makefile b/arch/s390/Makefile
+index 9b772093278704..5b97af31170928 100644
+--- a/arch/s390/Makefile
++++ b/arch/s390/Makefile
+@@ -15,7 +15,7 @@ KBUILD_CFLAGS_MODULE += -fPIC
+ KBUILD_AFLAGS += -m64
+ KBUILD_CFLAGS += -m64
+ KBUILD_CFLAGS += -fPIC
+-LDFLAGS_vmlinux := -no-pie --emit-relocs --discard-none
++LDFLAGS_vmlinux := $(call ld-option,-no-pie) --emit-relocs --discard-none
+ extra_tools := relocs
+ aflags_dwarf := -Wa,-gdwarf-2
+ KBUILD_AFLAGS_DECOMPRESSOR := $(CLANG_FLAGS) -m64 -D__ASSEMBLY__
+diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
+index c3075e4a8efc31..6d6b057b562fda 100644
+--- a/arch/s390/kernel/perf_cpum_cf.c
++++ b/arch/s390/kernel/perf_cpum_cf.c
+@@ -858,18 +858,13 @@ static int cpumf_pmu_event_type(struct perf_event *event)
+ static int cpumf_pmu_event_init(struct perf_event *event)
+ {
+ unsigned int type = event->attr.type;
+- int err;
++ int err = -ENOENT;
+
+ if (type == PERF_TYPE_HARDWARE || type == PERF_TYPE_RAW)
+ err = __hw_perf_event_init(event, type);
+ else if (event->pmu->type == type)
+ /* Registered as unknown PMU */
+ err = __hw_perf_event_init(event, cpumf_pmu_event_type(event));
+- else
+- return -ENOENT;
+-
+- if (unlikely(err) && event->destroy)
+- event->destroy(event);
+
+ return err;
+ }
+@@ -1819,8 +1814,6 @@ static int cfdiag_event_init(struct perf_event *event)
+ event->destroy = hw_perf_event_destroy;
+
+ err = cfdiag_event_init2(event);
+- if (unlikely(err))
+- event->destroy(event);
+ out:
+ return err;
+ }
+diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
+index 331e0654d61d78..efdd6ead7ba812 100644
+--- a/arch/s390/kernel/perf_cpum_sf.c
++++ b/arch/s390/kernel/perf_cpum_sf.c
+@@ -898,9 +898,6 @@ static int cpumsf_pmu_event_init(struct perf_event *event)
+ event->attr.exclude_idle = 0;
+
+ err = __hw_perf_event_init(event);
+- if (unlikely(err))
+- if (event->destroy)
+- event->destroy(event);
+ return err;
+ }
+
+diff --git a/arch/s390/pci/pci_bus.c b/arch/s390/pci/pci_bus.c
+index c3854682934557..23c27c6320130b 100644
+--- a/arch/s390/pci/pci_bus.c
++++ b/arch/s390/pci/pci_bus.c
+@@ -335,6 +335,9 @@ static bool zpci_bus_is_isolated_vf(struct zpci_bus *zbus, struct zpci_dev *zdev
+ {
+ struct pci_dev *pdev;
+
++ if (!zdev->vfn)
++ return false;
++
+ pdev = zpci_iov_find_parent_pf(zbus, zdev);
+ if (!pdev)
+ return true;
+diff --git a/arch/s390/pci/pci_mmio.c b/arch/s390/pci/pci_mmio.c
+index de5c0b389a3ec8..4779c3cb6cfab2 100644
+--- a/arch/s390/pci/pci_mmio.c
++++ b/arch/s390/pci/pci_mmio.c
+@@ -171,8 +171,12 @@ SYSCALL_DEFINE3(s390_pci_mmio_write, unsigned long, mmio_addr,
+ args.address = mmio_addr;
+ args.vma = vma;
+ ret = follow_pfnmap_start(&args);
+- if (ret)
+- goto out_unlock_mmap;
++ if (ret) {
++ fixup_user_fault(current->mm, mmio_addr, FAULT_FLAG_WRITE, NULL);
++ ret = follow_pfnmap_start(&args);
++ if (ret)
++ goto out_unlock_mmap;
++ }
+
+ io_addr = (void __iomem *)((args.pfn << PAGE_SHIFT) |
+ (mmio_addr & ~PAGE_MASK));
+@@ -305,14 +309,18 @@ SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr,
+ if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
+ goto out_unlock_mmap;
+ ret = -EACCES;
+- if (!(vma->vm_flags & VM_WRITE))
++ if (!(vma->vm_flags & VM_READ))
+ goto out_unlock_mmap;
+
+ args.vma = vma;
+ args.address = mmio_addr;
+ ret = follow_pfnmap_start(&args);
+- if (ret)
+- goto out_unlock_mmap;
++ if (ret) {
++ fixup_user_fault(current->mm, mmio_addr, 0, NULL);
++ ret = follow_pfnmap_start(&args);
++ if (ret)
++ goto out_unlock_mmap;
++ }
+
+ io_addr = (void __iomem *)((args.pfn << PAGE_SHIFT) |
+ (mmio_addr & ~PAGE_MASK));
+diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
+index 2b7f358762c187..dc28f2c4eee3f2 100644
+--- a/arch/sparc/include/asm/pgtable_64.h
++++ b/arch/sparc/include/asm/pgtable_64.h
+@@ -936,7 +936,6 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
+ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pte, unsigned int nr)
+ {
+- arch_enter_lazy_mmu_mode();
+ for (;;) {
+ __set_pte_at(mm, addr, ptep, pte, 0);
+ if (--nr == 0)
+@@ -945,7 +944,6 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+ pte_val(pte) += PAGE_SIZE;
+ addr += PAGE_SIZE;
+ }
+- arch_leave_lazy_mmu_mode();
+ }
+ #define set_ptes set_ptes
+
+diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
+index 8648a50afe8899..a35ddcca5e7668 100644
+--- a/arch/sparc/mm/tlb.c
++++ b/arch/sparc/mm/tlb.c
+@@ -52,8 +52,10 @@ void flush_tlb_pending(void)
+
+ void arch_enter_lazy_mmu_mode(void)
+ {
+- struct tlb_batch *tb = this_cpu_ptr(&tlb_batch);
++ struct tlb_batch *tb;
+
++ preempt_disable();
++ tb = this_cpu_ptr(&tlb_batch);
+ tb->active = 1;
+ }
+
+@@ -64,6 +66,7 @@ void arch_leave_lazy_mmu_mode(void)
+ if (tb->tlb_nr)
+ flush_tlb_pending();
+ tb->active = 0;
++ preempt_enable();
+ }
+
+ static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index db38d2b9b78868..e54da3b4d334e4 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -2434,18 +2434,20 @@ config CC_HAS_NAMED_AS
+ def_bool $(success,echo 'int __seg_fs fs; int __seg_gs gs;' | $(CC) -x c - -S -o /dev/null)
+ depends on CC_IS_GCC
+
++#
++# -fsanitize=kernel-address (KASAN) and -fsanitize=thread (KCSAN)
++# are incompatible with named address spaces with GCC < 13.3
++# (see GCC PR sanitizer/111736 and also PR sanitizer/115172).
++#
++
+ config CC_HAS_NAMED_AS_FIXED_SANITIZERS
+- def_bool CC_IS_GCC && GCC_VERSION >= 130300
++ def_bool y
++ depends on !(KASAN || KCSAN) || GCC_VERSION >= 130300
++ depends on !(UBSAN_BOOL && KASAN) || GCC_VERSION >= 140200
+
+ config USE_X86_SEG_SUPPORT
+- def_bool y
+- depends on CC_HAS_NAMED_AS
+- #
+- # -fsanitize=kernel-address (KASAN) and -fsanitize=thread
+- # (KCSAN) are incompatible with named address spaces with
+- # GCC < 13.3 - see GCC PR sanitizer/111736.
+- #
+- depends on !(KASAN || KCSAN) || CC_HAS_NAMED_AS_FIXED_SANITIZERS
++ def_bool CC_HAS_NAMED_AS
++ depends on CC_HAS_NAMED_AS_FIXED_SANITIZERS
+
+ config CC_HAS_SLS
+ def_bool $(cc-option,-mharden-sls=all)
+diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
+index cf7fc2b8e3ce1f..1c2db11a2c3cb9 100644
+--- a/arch/x86/include/asm/irqflags.h
++++ b/arch/x86/include/asm/irqflags.h
+@@ -76,6 +76,28 @@ static __always_inline void native_local_irq_restore(unsigned long flags)
+
+ #endif
+
++#ifndef CONFIG_PARAVIRT
++#ifndef __ASSEMBLY__
++/*
++ * Used in the idle loop; sti takes one instruction cycle
++ * to complete:
++ */
++static __always_inline void arch_safe_halt(void)
++{
++ native_safe_halt();
++}
++
++/*
++ * Used when interrupts are already enabled or to
++ * shutdown the processor:
++ */
++static __always_inline void halt(void)
++{
++ native_halt();
++}
++#endif /* __ASSEMBLY__ */
++#endif /* CONFIG_PARAVIRT */
++
+ #ifdef CONFIG_PARAVIRT_XXL
+ #include <asm/paravirt.h>
+ #else
+@@ -97,24 +119,6 @@ static __always_inline void arch_local_irq_enable(void)
+ native_irq_enable();
+ }
+
+-/*
+- * Used in the idle loop; sti takes one instruction cycle
+- * to complete:
+- */
+-static __always_inline void arch_safe_halt(void)
+-{
+- native_safe_halt();
+-}
+-
+-/*
+- * Used when interrupts are already enabled or to
+- * shutdown the processor:
+- */
+-static __always_inline void halt(void)
+-{
+- native_halt();
+-}
+-
+ /*
+ * For spinlocks, etc:
+ */
+diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
+index d4eb9e1d61b8ef..75d4c994f5e2a5 100644
+--- a/arch/x86/include/asm/paravirt.h
++++ b/arch/x86/include/asm/paravirt.h
+@@ -107,6 +107,16 @@ static inline void notify_page_enc_status_changed(unsigned long pfn,
+ PVOP_VCALL3(mmu.notify_page_enc_status_changed, pfn, npages, enc);
+ }
+
++static __always_inline void arch_safe_halt(void)
++{
++ PVOP_VCALL0(irq.safe_halt);
++}
++
++static inline void halt(void)
++{
++ PVOP_VCALL0(irq.halt);
++}
++
+ #ifdef CONFIG_PARAVIRT_XXL
+ static inline void load_sp0(unsigned long sp0)
+ {
+@@ -170,16 +180,6 @@ static inline void __write_cr4(unsigned long x)
+ PVOP_VCALL1(cpu.write_cr4, x);
+ }
+
+-static __always_inline void arch_safe_halt(void)
+-{
+- PVOP_VCALL0(irq.safe_halt);
+-}
+-
+-static inline void halt(void)
+-{
+- PVOP_VCALL0(irq.halt);
+-}
+-
+ extern noinstr void pv_native_wbinvd(void);
+
+ static __always_inline void wbinvd(void)
+diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
+index 8d4fbe1be48954..9334fdd1f63502 100644
+--- a/arch/x86/include/asm/paravirt_types.h
++++ b/arch/x86/include/asm/paravirt_types.h
+@@ -122,10 +122,9 @@ struct pv_irq_ops {
+ struct paravirt_callee_save save_fl;
+ struct paravirt_callee_save irq_disable;
+ struct paravirt_callee_save irq_enable;
+-
++#endif
+ void (*safe_halt)(void);
+ void (*halt)(void);
+-#endif
+ } __no_randomize_layout;
+
+ struct pv_mmu_ops {
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index c70b86f1f2954f..63adda8a143f93 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -23,6 +23,8 @@
+ #include <linux/serial_core.h>
+ #include <linux/pgtable.h>
+
++#include <xen/xen.h>
++
+ #include <asm/e820/api.h>
+ #include <asm/irqdomain.h>
+ #include <asm/pci_x86.h>
+@@ -1730,6 +1732,15 @@ int __init acpi_mps_check(void)
+ {
+ #if defined(CONFIG_X86_LOCAL_APIC) && !defined(CONFIG_X86_MPPARSE)
+ /* mptable code is not built-in*/
++
++ /*
++ * Xen disables ACPI in PV DomU guests but it still emulates APIC and
++ * supports SMP. Returning early here ensures that APIC is not disabled
++ * unnecessarily and the guest is not limited to a single vCPU.
++ */
++ if (xen_pv_domain() && !xen_initial_domain())
++ return 0;
++
+ if (acpi_disabled || acpi_noirq) {
+ pr_warn("MPS support code is not built-in, using acpi=off or acpi=noirq or pci=noacpi may have problem\n");
+ return 1;
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 79d2e17f6582e9..425bed00b2e071 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -627,7 +627,7 @@ static void init_amd_k8(struct cpuinfo_x86 *c)
+ * (model = 0x14) and later actually support it.
+ * (AMD Erratum #110, docId: 25759).
+ */
+- if (c->x86_model < 0x14 && cpu_has(c, X86_FEATURE_LAHF_LM)) {
++ if (c->x86_model < 0x14 && cpu_has(c, X86_FEATURE_LAHF_LM) && !cpu_has(c, X86_FEATURE_HYPERVISOR)) {
+ clear_cpu_cap(c, X86_FEATURE_LAHF_LM);
+ if (!rdmsrl_amd_safe(0xc001100d, &value)) {
+ value &= ~BIT_64(32);
+diff --git a/arch/x86/kernel/e820.c b/arch/x86/kernel/e820.c
+index 4893d30ce43844..b4746eb8b11526 100644
+--- a/arch/x86/kernel/e820.c
++++ b/arch/x86/kernel/e820.c
+@@ -754,22 +754,21 @@ void __init e820__memory_setup_extended(u64 phys_addr, u32 data_len)
+ void __init e820__register_nosave_regions(unsigned long limit_pfn)
+ {
+ int i;
+- unsigned long pfn = 0;
++ u64 last_addr = 0;
+
+ for (i = 0; i < e820_table->nr_entries; i++) {
+ struct e820_entry *entry = &e820_table->entries[i];
+
+- if (pfn < PFN_UP(entry->addr))
+- register_nosave_region(pfn, PFN_UP(entry->addr));
+-
+- pfn = PFN_DOWN(entry->addr + entry->size);
+-
+ if (entry->type != E820_TYPE_RAM && entry->type != E820_TYPE_RESERVED_KERN)
+- register_nosave_region(PFN_UP(entry->addr), pfn);
++ continue;
+
+- if (pfn >= limit_pfn)
+- break;
++ if (last_addr < entry->addr)
++ register_nosave_region(PFN_DOWN(last_addr), PFN_UP(entry->addr));
++
++ last_addr = entry->addr + entry->size;
+ }
++
++ register_nosave_region(PFN_DOWN(last_addr), limit_pfn);
+ }
+
+ #ifdef CONFIG_ACPI
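The rewritten e820 loop tracks the end of the previous usable entry and registers every gap between usable entries, plus the tail up to limit_pfn, as a nosave region. A hedged standalone model of that walk, with a fake three-entry table instead of e820_table and register_nosave_region() reduced to a printf():

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SHIFT 12
    #define PFN_UP(x)   (((x) + (1ULL << PAGE_SHIFT) - 1) >> PAGE_SHIFT)
    #define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

    struct entry { uint64_t addr, size; int usable; };

    /* Model: in the kernel this records a hibernation nosave range. */
    static void register_nosave_region(uint64_t s, uint64_t e)
    {
            if (s < e)
                    printf("nosave: pfn %llu-%llu\n",
                           (unsigned long long)s, (unsigned long long)e);
    }

    static void register_nosave_regions(const struct entry *tbl, int n,
                                        uint64_t limit_pfn)
    {
            uint64_t last_addr = 0;

            for (int i = 0; i < n; i++) {
                    if (!tbl[i].usable)     /* skip: becomes part of a hole */
                            continue;
                    if (last_addr < tbl[i].addr)
                            register_nosave_region(PFN_DOWN(last_addr),
                                                   PFN_UP(tbl[i].addr));
                    last_addr = tbl[i].addr + tbl[i].size;
            }
            register_nosave_region(PFN_DOWN(last_addr), limit_pfn);
    }

    int main(void)
    {
            /* RAM at [0x1000,0x5000) and [0x9000,0xc000), hole between. */
            const struct entry tbl[] = {
                    { 0x1000, 0x4000, 1 },
                    { 0x5000, 0x4000, 0 },  /* reserved: treated as a hole */
                    { 0x9000, 0x3000, 1 },
            };
            register_nosave_regions(tbl, 3, 0x100);
            return 0;
    }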
+diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
+index fec38153355581..0c1b915d7efac8 100644
+--- a/arch/x86/kernel/paravirt.c
++++ b/arch/x86/kernel/paravirt.c
+@@ -100,6 +100,11 @@ int paravirt_disable_iospace(void)
+ return request_resource(&ioport_resource, &reserve_ioports);
+ }
+
++static noinstr void pv_native_safe_halt(void)
++{
++ native_safe_halt();
++}
++
+ #ifdef CONFIG_PARAVIRT_XXL
+ static noinstr void pv_native_write_cr2(unsigned long val)
+ {
+@@ -121,10 +126,6 @@ noinstr void pv_native_wbinvd(void)
+ native_wbinvd();
+ }
+
+-static noinstr void pv_native_safe_halt(void)
+-{
+- native_safe_halt();
+-}
+ #endif
+
+ struct pv_info pv_info = {
+@@ -182,9 +183,11 @@ struct paravirt_patch_template pv_ops = {
+ .irq.save_fl = __PV_IS_CALLEE_SAVE(pv_native_save_fl),
+ .irq.irq_disable = __PV_IS_CALLEE_SAVE(pv_native_irq_disable),
+ .irq.irq_enable = __PV_IS_CALLEE_SAVE(pv_native_irq_enable),
++#endif /* CONFIG_PARAVIRT_XXL */
++
++ /* Irq HLT ops. */
+ .irq.safe_halt = pv_native_safe_halt,
+ .irq.halt = native_halt,
+-#endif /* CONFIG_PARAVIRT_XXL */
+
+ /* Mmu ops. */
+ .mmu.flush_tlb_user = native_flush_tlb_local,
+diff --git a/arch/x86/kernel/signal_32.c b/arch/x86/kernel/signal_32.c
+index ef654530bf5a93..98123ff10506c6 100644
+--- a/arch/x86/kernel/signal_32.c
++++ b/arch/x86/kernel/signal_32.c
+@@ -33,25 +33,55 @@
+ #include <asm/smap.h>
+ #include <asm/gsseg.h>
+
++/*
++ * The first GDT descriptor is reserved as 'NULL descriptor'. As bits 0
++ * and 1 of a segment selector, i.e., the RPL bits, are NOT used to index
++ * GDT, selector values 0~3 all point to the NULL descriptor, thus values
++ * 0, 1, 2 and 3 are all valid NULL selector values.
++ *
++ * However IRET zeros ES, FS, GS, and DS segment registers if any of them
++ * is found to have any nonzero NULL selector value, which can be used by
++ * userspace in pre-FRED systems to spot any interrupt/exception by loading
++ * a nonzero NULL selector and waiting for it to become zero. Before FRED
++ * there was nothing software could do to prevent such an information leak.
++ *
++ * ERETU, the only legit instruction to return to userspace from kernel
++ * under FRED, by design does NOT zero any segment register to avoid this
++ * problematic behavior.
++ *
++ * As such, leave NULL selector values 0~3 unchanged.
++ */
++static inline u16 fixup_rpl(u16 sel)
++{
++ return sel <= 3 ? sel : sel | 3;
++}
++
+ #ifdef CONFIG_IA32_EMULATION
+ #include <asm/unistd_32_ia32.h>
+
+ static inline void reload_segments(struct sigcontext_32 *sc)
+ {
+- unsigned int cur;
++ u16 cur;
+
++ /*
++ * Reload fs and gs if they have changed in the signal
++ * handler. This does not handle long fs/gs base changes in
++ * the handler, but does not clobber them at least in the
++ * normal case.
++ */
+ savesegment(gs, cur);
+- if ((sc->gs | 0x03) != cur)
+- load_gs_index(sc->gs | 0x03);
++ if (fixup_rpl(sc->gs) != cur)
++ load_gs_index(fixup_rpl(sc->gs));
+ savesegment(fs, cur);
+- if ((sc->fs | 0x03) != cur)
+- loadsegment(fs, sc->fs | 0x03);
++ if (fixup_rpl(sc->fs) != cur)
++ loadsegment(fs, fixup_rpl(sc->fs));
++
+ savesegment(ds, cur);
+- if ((sc->ds | 0x03) != cur)
+- loadsegment(ds, sc->ds | 0x03);
++ if (fixup_rpl(sc->ds) != cur)
++ loadsegment(ds, fixup_rpl(sc->ds));
+ savesegment(es, cur);
+- if ((sc->es | 0x03) != cur)
+- loadsegment(es, sc->es | 0x03);
++ if (fixup_rpl(sc->es) != cur)
++ loadsegment(es, fixup_rpl(sc->es));
+ }
+
+ #define sigset32_t compat_sigset_t
+@@ -105,18 +135,12 @@ static bool ia32_restore_sigcontext(struct pt_regs *regs,
+ regs->orig_ax = -1;
+
+ #ifdef CONFIG_IA32_EMULATION
+- /*
+- * Reload fs and gs if they have changed in the signal
+- * handler. This does not handle long fs/gs base changes in
+- * the handler, but does not clobber them at least in the
+- * normal case.
+- */
+ reload_segments(&sc);
+ #else
+- loadsegment(gs, sc.gs);
+- regs->fs = sc.fs;
+- regs->es = sc.es;
+- regs->ds = sc.ds;
++ loadsegment(gs, fixup_rpl(sc.gs));
++ regs->fs = fixup_rpl(sc.fs);
++ regs->es = fixup_rpl(sc.es);
++ regs->ds = fixup_rpl(sc.ds);
+ #endif
+
+ return fpu__restore_sig(compat_ptr(sc.fpstate), 1);
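fixup_rpl() above is pure arithmetic, so its contract can be checked in isolation. A minimal userspace harness (hypothetical test, not part of the patch):

    #include <assert.h>
    #include <stdint.h>

    /* Selector values 0-3 all index the NULL GDT descriptor, so they are
     * left alone; everything else gets RPL forced to 3 (user mode). */
    static uint16_t fixup_rpl(uint16_t sel)
    {
            return sel <= 3 ? sel : sel | 3;
    }

    int main(void)
    {
            assert(fixup_rpl(0) == 0);       /* NULL selectors pass through */
            assert(fixup_rpl(3) == 3);
            assert(fixup_rpl(0x28) == 0x2b); /* RPL bits forced to 11b */
            assert(fixup_rpl(0x2b) == 0x2b); /* already RPL 3: unchanged */
            return 0;
    }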
+diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
+index 9157b4485dedce..c92e43f2d0c4ec 100644
+--- a/arch/x86/kvm/cpuid.c
++++ b/arch/x86/kvm/cpuid.c
+@@ -1047,8 +1047,8 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ }
+ break;
+ case 0xa: { /* Architectural Performance Monitoring */
+- union cpuid10_eax eax;
+- union cpuid10_edx edx;
++ union cpuid10_eax eax = { };
++ union cpuid10_edx edx = { };
+
+ if (!enable_pmu || !static_cpu_has(X86_FEATURE_ARCH_PERFMON)) {
+ entry->eax = entry->ebx = entry->ecx = entry->edx = 0;
+@@ -1064,8 +1064,6 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+
+ if (kvm_pmu_cap.version)
+ edx.split.anythread_deprecated = 1;
+- edx.split.reserved1 = 0;
+- edx.split.reserved2 = 0;
+
+ entry->eax = eax.full;
+ entry->ebx = kvm_pmu_cap.events_mask;
+@@ -1383,7 +1381,7 @@ static inline int __do_cpuid_func(struct kvm_cpuid_array *array, u32 function)
+ break;
+ /* AMD Extended Performance Monitoring and Debug */
+ case 0x80000022: {
+- union cpuid_0x80000022_ebx ebx;
++ union cpuid_0x80000022_ebx ebx = { };
+
+ entry->ecx = entry->edx = 0;
+ if (!enable_pmu || !kvm_cpu_cap_has(X86_FEATURE_PERFMON_V2)) {
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 45337a3fc03cd7..1a4ca471d63df6 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -11769,6 +11769,8 @@ int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
+ if (kvm_mpx_supported())
+ kvm_load_guest_fpu(vcpu);
+
++ kvm_vcpu_srcu_read_lock(vcpu);
++
+ r = kvm_apic_accept_events(vcpu);
+ if (r < 0)
+ goto out;
+@@ -11782,6 +11784,8 @@ int kvm_arch_vcpu_ioctl_get_mpstate(struct kvm_vcpu *vcpu,
+ mp_state->mp_state = vcpu->arch.mp_state;
+
+ out:
++ kvm_vcpu_srcu_read_unlock(vcpu);
++
+ if (kvm_mpx_supported())
+ kvm_put_guest_fpu(vcpu);
+ vcpu_put(vcpu);
+diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
+index 44f7b2ea6a073f..69ceb967d73e9c 100644
+--- a/arch/x86/mm/pat/set_memory.c
++++ b/arch/x86/mm/pat/set_memory.c
+@@ -2422,7 +2422,7 @@ static int __set_pages_np(struct page *page, int numpages)
+ .pgd = NULL,
+ .numpages = numpages,
+ .mask_set = __pgprot(0),
+- .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW),
++ .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY),
+ .flags = CPA_NO_CHECK_ALIAS };
+
+ /*
+@@ -2501,7 +2501,7 @@ int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
+ .pgd = pgd,
+ .numpages = numpages,
+ .mask_set = __pgprot(0),
+- .mask_clr = __pgprot(~page_flags & (_PAGE_NX|_PAGE_RW)),
++ .mask_clr = __pgprot(~page_flags & (_PAGE_NX|_PAGE_RW|_PAGE_DIRTY)),
+ .flags = CPA_NO_CHECK_ALIAS,
+ };
+
+@@ -2544,7 +2544,7 @@ int __init kernel_unmap_pages_in_pgd(pgd_t *pgd, unsigned long address,
+ .pgd = pgd,
+ .numpages = numpages,
+ .mask_set = __pgprot(0),
+- .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW),
++ .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY),
+ .flags = CPA_NO_CHECK_ALIAS,
+ };
+
+diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
+index b4f3784f27e956..0c950bbca309ff 100644
+--- a/arch/x86/xen/enlighten.c
++++ b/arch/x86/xen/enlighten.c
+@@ -70,6 +70,9 @@ EXPORT_SYMBOL(xen_start_flags);
+ */
+ struct shared_info *HYPERVISOR_shared_info = &xen_dummy_shared_info;
+
++/* Number of pages released from the initial allocation. */
++unsigned long xen_released_pages;
++
+ static __ref void xen_get_vendor(void)
+ {
+ init_cpu_devs();
+@@ -465,6 +468,13 @@ int __init arch_xen_unpopulated_init(struct resource **res)
+ xen_free_unpopulated_pages(1, &pg);
+ }
+
++ /*
++ * Account for the region being in the physmap but unpopulated.
++ * The value in xen_released_pages is used by the balloon
++ * driver to know how much of the physmap is unpopulated and
++ * set an accurate initial memory target.
++ */
++ xen_released_pages += xen_extra_mem[i].n_pfns;
+ /* Zero so region is not also added to the balloon driver. */
+ xen_extra_mem[i].n_pfns = 0;
+ }
+diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
+index c3db71d96c434a..3823e52aef523c 100644
+--- a/arch/x86/xen/setup.c
++++ b/arch/x86/xen/setup.c
+@@ -37,9 +37,6 @@
+
+ #define GB(x) ((uint64_t)(x) * 1024 * 1024 * 1024)
+
+-/* Number of pages released from the initial allocation. */
+-unsigned long xen_released_pages;
+-
+ /* Memory map would allow PCI passthrough. */
+ bool xen_pv_pci_possible;
+
+diff --git a/drivers/accel/ivpu/ivpu_debugfs.c b/drivers/accel/ivpu/ivpu_debugfs.c
+index 8d50981594d153..eccedb0c8886bf 100644
+--- a/drivers/accel/ivpu/ivpu_debugfs.c
++++ b/drivers/accel/ivpu/ivpu_debugfs.c
+@@ -331,7 +331,7 @@ ivpu_force_recovery_fn(struct file *file, const char __user *user_buf, size_t si
+ return -EINVAL;
+
+ ret = ivpu_rpm_get(vdev);
+- if (ret)
++ if (ret < 0)
+ return ret;
+
+ ivpu_pm_trigger_recovery(vdev, "debugfs");
+@@ -408,7 +408,7 @@ static int dct_active_set(void *data, u64 active_percent)
+ return -EINVAL;
+
+ ret = ivpu_rpm_get(vdev);
+- if (ret)
++ if (ret < 0)
+ return ret;
+
+ if (active_percent)
+diff --git a/drivers/accel/ivpu/ivpu_ipc.c b/drivers/accel/ivpu/ivpu_ipc.c
+index 13c8a12162e89e..f0402dc847582a 100644
+--- a/drivers/accel/ivpu/ivpu_ipc.c
++++ b/drivers/accel/ivpu/ivpu_ipc.c
+@@ -299,7 +299,8 @@ ivpu_ipc_send_receive_internal(struct ivpu_device *vdev, struct vpu_jsm_msg *req
+ struct ivpu_ipc_consumer cons;
+ int ret;
+
+- drm_WARN_ON(&vdev->drm, pm_runtime_status_suspended(vdev->drm.dev));
++ drm_WARN_ON(&vdev->drm, pm_runtime_status_suspended(vdev->drm.dev) &&
++ pm_runtime_enabled(vdev->drm.dev));
+
+ ivpu_ipc_consumer_add(vdev, &cons, channel, NULL);
+
+diff --git a/drivers/accel/ivpu/ivpu_ms.c b/drivers/accel/ivpu/ivpu_ms.c
+index 2f9d37f5c208a9..a961002fe25b2b 100644
+--- a/drivers/accel/ivpu/ivpu_ms.c
++++ b/drivers/accel/ivpu/ivpu_ms.c
+@@ -4,6 +4,7 @@
+ */
+
+ #include <drm/drm_file.h>
++#include <linux/pm_runtime.h>
+
+ #include "ivpu_drv.h"
+ #include "ivpu_gem.h"
+@@ -44,6 +45,10 @@ int ivpu_ms_start_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
+ args->sampling_period_ns < MS_MIN_SAMPLE_PERIOD_NS)
+ return -EINVAL;
+
++ ret = ivpu_rpm_get(vdev);
++ if (ret < 0)
++ return ret;
++
+ mutex_lock(&file_priv->ms_lock);
+
+ if (get_instance_by_mask(file_priv, args->metric_group_mask)) {
+@@ -96,6 +101,8 @@ int ivpu_ms_start_ioctl(struct drm_device *dev, void *data, struct drm_file *fil
+ kfree(ms);
+ unlock:
+ mutex_unlock(&file_priv->ms_lock);
++
++ ivpu_rpm_put(vdev);
+ return ret;
+ }
+
+@@ -160,6 +167,10 @@ int ivpu_ms_get_data_ioctl(struct drm_device *dev, void *data, struct drm_file *
+ if (!args->metric_group_mask)
+ return -EINVAL;
+
++ ret = ivpu_rpm_get(vdev);
++ if (ret < 0)
++ return ret;
++
+ mutex_lock(&file_priv->ms_lock);
+
+ ms = get_instance_by_mask(file_priv, args->metric_group_mask);
+@@ -187,6 +198,7 @@ int ivpu_ms_get_data_ioctl(struct drm_device *dev, void *data, struct drm_file *
+ unlock:
+ mutex_unlock(&file_priv->ms_lock);
+
++ ivpu_rpm_put(vdev);
+ return ret;
+ }
+
+@@ -204,11 +216,17 @@ int ivpu_ms_stop_ioctl(struct drm_device *dev, void *data, struct drm_file *file
+ {
+ struct ivpu_file_priv *file_priv = file->driver_priv;
+ struct drm_ivpu_metric_streamer_stop *args = data;
++ struct ivpu_device *vdev = file_priv->vdev;
+ struct ivpu_ms_instance *ms;
++ int ret;
+
+ if (!args->metric_group_mask)
+ return -EINVAL;
+
++ ret = ivpu_rpm_get(vdev);
++ if (ret < 0)
++ return ret;
++
+ mutex_lock(&file_priv->ms_lock);
+
+ ms = get_instance_by_mask(file_priv, args->metric_group_mask);
+@@ -217,6 +235,7 @@ int ivpu_ms_stop_ioctl(struct drm_device *dev, void *data, struct drm_file *file
+
+ mutex_unlock(&file_priv->ms_lock);
+
++ ivpu_rpm_put(vdev);
+ return ms ? 0 : -EINVAL;
+ }
+
+@@ -281,6 +300,9 @@ int ivpu_ms_get_info_ioctl(struct drm_device *dev, void *data, struct drm_file *
+ void ivpu_ms_cleanup(struct ivpu_file_priv *file_priv)
+ {
+ struct ivpu_ms_instance *ms, *tmp;
++ struct ivpu_device *vdev = file_priv->vdev;
++
++ pm_runtime_get_sync(vdev->drm.dev);
+
+ mutex_lock(&file_priv->ms_lock);
+
+@@ -293,6 +315,8 @@ void ivpu_ms_cleanup(struct ivpu_file_priv *file_priv)
+ free_instance(file_priv, ms);
+
+ mutex_unlock(&file_priv->ms_lock);
++
++ pm_runtime_put_autosuspend(vdev->drm.dev);
+ }
+
+ void ivpu_ms_cleanup_all(struct ivpu_device *vdev)
+diff --git a/drivers/acpi/platform_profile.c b/drivers/acpi/platform_profile.c
+index d2f7fd7743a13d..11278f785526d4 100644
+--- a/drivers/acpi/platform_profile.c
++++ b/drivers/acpi/platform_profile.c
+@@ -22,8 +22,8 @@ static const char * const profile_names[] = {
+ };
+ static_assert(ARRAY_SIZE(profile_names) == PLATFORM_PROFILE_LAST);
+
+-static ssize_t platform_profile_choices_show(struct device *dev,
+- struct device_attribute *attr,
++static ssize_t platform_profile_choices_show(struct kobject *kobj,
++ struct kobj_attribute *attr,
+ char *buf)
+ {
+ int len = 0;
+@@ -49,8 +49,8 @@ static ssize_t platform_profile_choices_show(struct device *dev,
+ return len;
+ }
+
+-static ssize_t platform_profile_show(struct device *dev,
+- struct device_attribute *attr,
++static ssize_t platform_profile_show(struct kobject *kobj,
++ struct kobj_attribute *attr,
+ char *buf)
+ {
+ enum platform_profile_option profile = PLATFORM_PROFILE_BALANCED;
+@@ -77,8 +77,8 @@ static ssize_t platform_profile_show(struct device *dev,
+ return sysfs_emit(buf, "%s\n", profile_names[profile]);
+ }
+
+-static ssize_t platform_profile_store(struct device *dev,
+- struct device_attribute *attr,
++static ssize_t platform_profile_store(struct kobject *kobj,
++ struct kobj_attribute *attr,
+ const char *buf, size_t count)
+ {
+ int err, i;
+@@ -115,12 +115,12 @@ static ssize_t platform_profile_store(struct device *dev,
+ return count;
+ }
+
+-static DEVICE_ATTR_RO(platform_profile_choices);
+-static DEVICE_ATTR_RW(platform_profile);
++static struct kobj_attribute attr_platform_profile_choices = __ATTR_RO(platform_profile_choices);
++static struct kobj_attribute attr_platform_profile = __ATTR_RW(platform_profile);
+
+ static struct attribute *platform_profile_attrs[] = {
+- &dev_attr_platform_profile_choices.attr,
+- &dev_attr_platform_profile.attr,
++ &attr_platform_profile_choices.attr,
++ &attr_platform_profile.attr,
+ NULL
+ };
+
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 45f63b09828a1a..650122deb480d0 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -63,6 +63,7 @@ enum board_ids {
+ board_ahci_pcs_quirk_no_devslp,
+ board_ahci_pcs_quirk_no_sntf,
+ board_ahci_yes_fbs,
++ board_ahci_yes_fbs_atapi_dma,
+
+ /* board IDs for specific chipsets in alphabetical order */
+ board_ahci_al,
+@@ -188,6 +189,14 @@ static const struct ata_port_info ahci_port_info[] = {
+ .udma_mask = ATA_UDMA6,
+ .port_ops = &ahci_ops,
+ },
++ [board_ahci_yes_fbs_atapi_dma] = {
++ AHCI_HFLAGS (AHCI_HFLAG_YES_FBS |
++ AHCI_HFLAG_ATAPI_DMA_QUIRK),
++ .flags = AHCI_FLAG_COMMON,
++ .pio_mask = ATA_PIO4,
++ .udma_mask = ATA_UDMA6,
++ .port_ops = &ahci_ops,
++ },
+ /* by chipsets */
+ [board_ahci_al] = {
+ AHCI_HFLAGS (AHCI_HFLAG_NO_PMP | AHCI_HFLAG_NO_MSI),
+@@ -589,6 +598,8 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ .driver_data = board_ahci_yes_fbs },
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x91a3),
+ .driver_data = board_ahci_yes_fbs },
++ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9215),
++ .driver_data = board_ahci_yes_fbs_atapi_dma },
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9230),
+ .driver_data = board_ahci_yes_fbs },
+ { PCI_DEVICE(PCI_VENDOR_ID_MARVELL_EXT, 0x9235),
+diff --git a/drivers/ata/ahci.h b/drivers/ata/ahci.h
+index 8f40f75ba08cff..10a5fe02f0a453 100644
+--- a/drivers/ata/ahci.h
++++ b/drivers/ata/ahci.h
+@@ -246,6 +246,7 @@ enum {
+ AHCI_HFLAG_NO_SXS = BIT(26), /* SXS not supported */
+ AHCI_HFLAG_43BIT_ONLY = BIT(27), /* 43bit DMA addr limit */
+ AHCI_HFLAG_INTEL_PCS_QUIRK = BIT(28), /* apply Intel PCS quirk */
++ AHCI_HFLAG_ATAPI_DMA_QUIRK = BIT(29), /* force ATAPI to use DMA */
+
+ /* ap->flags bits */
+
+diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
+index fdfa7b2662180b..a28ffe1e596918 100644
+--- a/drivers/ata/libahci.c
++++ b/drivers/ata/libahci.c
+@@ -1321,6 +1321,10 @@ static void ahci_dev_config(struct ata_device *dev)
+ {
+ struct ahci_host_priv *hpriv = dev->link->ap->host->private_data;
+
++ if ((dev->class == ATA_DEV_ATAPI) &&
++ (hpriv->flags & AHCI_HFLAG_ATAPI_DMA_QUIRK))
++ dev->quirks |= ATA_QUIRK_ATAPI_MOD16_DMA;
++
+ if (hpriv->flags & AHCI_HFLAG_SECT255) {
+ dev->max_sectors = 255;
+ ata_dev_info(dev,
+diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
+index d956735e2a7645..0cb97181d10a9e 100644
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -88,6 +88,7 @@ struct ata_force_param {
+ unsigned int xfer_mask;
+ unsigned int quirk_on;
+ unsigned int quirk_off;
++ unsigned int pflags_on;
+ u16 lflags_on;
+ u16 lflags_off;
+ };
+@@ -331,6 +332,35 @@ void ata_force_cbl(struct ata_port *ap)
+ }
+ }
+
++/**
++ * ata_force_pflags - force port flags according to libata.force
++ * @ap: ATA port of interest
++ *
++ * Force port flags according to libata.force and whine about it.
++ *
++ * LOCKING:
++ * EH context.
++ */
++static void ata_force_pflags(struct ata_port *ap)
++{
++ int i;
++
++ for (i = ata_force_tbl_size - 1; i >= 0; i--) {
++ const struct ata_force_ent *fe = &ata_force_tbl[i];
++
++ if (fe->port != -1 && fe->port != ap->print_id)
++ continue;
++
++ /* let pflags stack */
++ if (fe->param.pflags_on) {
++ ap->pflags |= fe->param.pflags_on;
++ ata_port_notice(ap,
++ "FORCE: port flag 0x%x forced -> 0x%x\n",
++ fe->param.pflags_on, ap->pflags);
++ }
++ }
++}
++
+ /**
+ * ata_force_link_limits - force link limits according to libata.force
+ * @link: ATA link of interest
+@@ -486,6 +516,7 @@ static void ata_force_quirks(struct ata_device *dev)
+ }
+ }
+ #else
++static inline void ata_force_pflags(struct ata_port *ap) { }
+ static inline void ata_force_link_limits(struct ata_link *link) { }
+ static inline void ata_force_xfermask(struct ata_device *dev) { }
+ static inline void ata_force_quirks(struct ata_device *dev) { }
+@@ -5460,6 +5491,8 @@ struct ata_port *ata_port_alloc(struct ata_host *host)
+ #endif
+ ata_sff_port_init(ap);
+
++ ata_force_pflags(ap);
++
+ return ap;
+ }
+ EXPORT_SYMBOL_GPL(ata_port_alloc);
+@@ -6272,6 +6305,9 @@ EXPORT_SYMBOL_GPL(ata_platform_remove_one);
+ { "no" #name, .lflags_on = (flags) }, \
+ { #name, .lflags_off = (flags) }
+
++#define force_pflag_on(name, flags) \
++ { #name, .pflags_on = (flags) }
++
+ #define force_quirk_on(name, flag) \
+ { #name, .quirk_on = (flag) }
+
+@@ -6331,6 +6367,8 @@ static const struct ata_force_param force_tbl[] __initconst = {
+ force_lflag_on(rstonce, ATA_LFLAG_RST_ONCE),
+ force_lflag_onoff(dbdelay, ATA_LFLAG_NO_DEBOUNCE_DELAY),
+
++ force_pflag_on(external, ATA_PFLAG_EXTERNAL),
++
+ force_quirk_onoff(ncq, ATA_QUIRK_NONCQ),
+ force_quirk_onoff(ncqtrim, ATA_QUIRK_NO_NCQ_TRIM),
+ force_quirk_onoff(ncqati, ATA_QUIRK_NO_NCQ_ON_ATI),
+diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
+index 3b303d4ae37a01..16cd676eae1f9a 100644
+--- a/drivers/ata/libata-eh.c
++++ b/drivers/ata/libata-eh.c
+@@ -1542,8 +1542,15 @@ unsigned int atapi_eh_request_sense(struct ata_device *dev,
+ tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
+ tf.command = ATA_CMD_PACKET;
+
+- /* is it pointless to prefer PIO for "safety reasons"? */
+- if (ap->flags & ATA_FLAG_PIO_DMA) {
++ /*
++ * Do not use DMA if the connected device only supports PIO, even if the
++ * port prefers PIO commands via DMA.
++ *
++ * Ideally, we should call atapi_check_dma() to check if it is safe for
++ * the LLD to use DMA for REQUEST_SENSE, but we don't have a qc.
++ * Since we can't check the command, perhaps we should only use PIO?
++ */
++ if ((ap->flags & ATA_FLAG_PIO_DMA) && !(dev->flags & ATA_DFLAG_PIO)) {
+ tf.protocol = ATAPI_PROT_DMA;
+ tf.feature |= ATAPI_PKT_DMA;
+ } else {
+diff --git a/drivers/ata/pata_pxa.c b/drivers/ata/pata_pxa.c
+index 538bd3423d859d..1bdcd6ee741d36 100644
+--- a/drivers/ata/pata_pxa.c
++++ b/drivers/ata/pata_pxa.c
+@@ -223,10 +223,16 @@ static int pxa_ata_probe(struct platform_device *pdev)
+
+ ap->ioaddr.cmd_addr = devm_ioremap(&pdev->dev, cmd_res->start,
+ resource_size(cmd_res));
++ if (!ap->ioaddr.cmd_addr)
++ return -ENOMEM;
+ ap->ioaddr.ctl_addr = devm_ioremap(&pdev->dev, ctl_res->start,
+ resource_size(ctl_res));
++ if (!ap->ioaddr.ctl_addr)
++ return -ENOMEM;
+ ap->ioaddr.bmdma_addr = devm_ioremap(&pdev->dev, dma_res->start,
+ resource_size(dma_res));
++ if (!ap->ioaddr.bmdma_addr)
++ return -ENOMEM;
+
+ /*
+ * Adjust register offsets
+diff --git a/drivers/ata/sata_sx4.c b/drivers/ata/sata_sx4.c
+index a482741eb181ff..c3042eca6332df 100644
+--- a/drivers/ata/sata_sx4.c
++++ b/drivers/ata/sata_sx4.c
+@@ -1117,9 +1117,14 @@ static int pdc20621_prog_dimm0(struct ata_host *host)
+ mmio += PDC_CHIP0_OFS;
+
+ for (i = 0; i < ARRAY_SIZE(pdc_i2c_read_data); i++)
+- pdc20621_i2c_read(host, PDC_DIMM0_SPD_DEV_ADDRESS,
+- pdc_i2c_read_data[i].reg,
+- &spd0[pdc_i2c_read_data[i].ofs]);
++ if (!pdc20621_i2c_read(host, PDC_DIMM0_SPD_DEV_ADDRESS,
++ pdc_i2c_read_data[i].reg,
++ &spd0[pdc_i2c_read_data[i].ofs])) {
++ dev_err(host->dev,
++ "Failed in i2c read at index %d: device=%#x, reg=%#x\n",
++ i, PDC_DIMM0_SPD_DEV_ADDRESS, pdc_i2c_read_data[i].reg);
++ return -EIO;
++ }
+
+ data |= (spd0[4] - 8) | ((spd0[21] != 0) << 3) | ((spd0[3]-11) << 4);
+ data |= ((spd0[17] / 4) << 6) | ((spd0[5] / 2) << 7) |
+@@ -1284,6 +1289,8 @@ static unsigned int pdc20621_dimm_init(struct ata_host *host)
+
+ /* Programming DIMM0 Module Control Register (index_CID0:80h) */
+ size = pdc20621_prog_dimm0(host);
++ if (size < 0)
++ return size;
+ dev_dbg(host->dev, "Local DIMM Size = %dMB\n", size);
+
+ /* Programming DIMM Module Global Control Register (index_CID0:88h) */
+diff --git a/drivers/auxdisplay/hd44780.c b/drivers/auxdisplay/hd44780.c
+index 025dc6855cb253..41807ce363399d 100644
+--- a/drivers/auxdisplay/hd44780.c
++++ b/drivers/auxdisplay/hd44780.c
+@@ -313,7 +313,7 @@ static int hd44780_probe(struct platform_device *pdev)
+ fail3:
+ kfree(hd);
+ fail2:
+- kfree(lcd);
++ charlcd_free(lcd);
+ fail1:
+ kfree(hdc);
+ return ret;
+@@ -328,7 +328,7 @@ static void hd44780_remove(struct platform_device *pdev)
+ kfree(hdc->hd44780);
+ kfree(lcd->drvdata);
+
+- kfree(lcd);
++ charlcd_free(lcd);
+ }
+
+ static const struct of_device_id hd44780_of_match[] = {
+diff --git a/drivers/base/devres.c b/drivers/base/devres.c
+index 2152eec0c1352c..68224f2f83fff2 100644
+--- a/drivers/base/devres.c
++++ b/drivers/base/devres.c
+@@ -687,6 +687,13 @@ int devres_release_group(struct device *dev, void *id)
+ spin_unlock_irqrestore(&dev->devres_lock, flags);
+
+ release_nodes(dev, &todo);
++ } else if (list_empty(&dev->devres_head)) {
++ /*
++ * dev is probably dying via devres_release_all(): groups
++ * have already been removed and are in the process of
++ * being released - don't touch and don't warn.
++ */
++ spin_unlock_irqrestore(&dev->devres_lock, flags);
+ } else {
+ WARN_ON(1);
+ spin_unlock_irqrestore(&dev->devres_lock, flags);
+diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
+index 79b7bd8bfd4584..38b9e485e520d5 100644
+--- a/drivers/block/ublk_drv.c
++++ b/drivers/block/ublk_drv.c
+@@ -681,22 +681,44 @@ static int ublk_max_cmd_buf_size(void)
+ return __ublk_queue_cmd_buf_size(UBLK_MAX_QUEUE_DEPTH);
+ }
+
+-static inline bool ublk_queue_can_use_recovery_reissue(
+- struct ublk_queue *ubq)
++/*
++ * Should I/O outstanding to the ublk server when it exits be reissued?
++ * If not, outstanding I/O will get errors.
++ */
++static inline bool ublk_nosrv_should_reissue_outstanding(struct ublk_device *ub)
+ {
+- return (ubq->flags & UBLK_F_USER_RECOVERY) &&
+- (ubq->flags & UBLK_F_USER_RECOVERY_REISSUE);
++ return (ub->dev_info.flags & UBLK_F_USER_RECOVERY) &&
++ (ub->dev_info.flags & UBLK_F_USER_RECOVERY_REISSUE);
+ }
+
+-static inline bool ublk_queue_can_use_recovery(
+- struct ublk_queue *ubq)
++/*
++ * Should I/O issued while there is no ublk server be queued? If not, I/O
++ * issued while there is no ublk server will get errors.
++ */
++static inline bool ublk_nosrv_dev_should_queue_io(struct ublk_device *ub)
++{
++ return ub->dev_info.flags & UBLK_F_USER_RECOVERY;
++}
++
++/*
++ * Same as ublk_nosrv_dev_should_queue_io, but uses a queue-local copy
++ * of the device flags for smaller cache footprint - better for fast
++ * paths.
++ */
++static inline bool ublk_nosrv_should_queue_io(struct ublk_queue *ubq)
+ {
+ return ubq->flags & UBLK_F_USER_RECOVERY;
+ }
+
+-static inline bool ublk_can_use_recovery(struct ublk_device *ub)
++/*
++ * Should ublk devices be stopped (i.e. no recovery possible) when the
++ * ublk server exits? If not, devices can be used again by a future
++ * incarnation of a ublk server via the start_recovery/end_recovery
++ * commands.
++ */
++static inline bool ublk_nosrv_should_stop_dev(struct ublk_device *ub)
+ {
+- return ub->dev_info.flags & UBLK_F_USER_RECOVERY;
++ return !(ub->dev_info.flags & UBLK_F_USER_RECOVERY);
+ }
+
+ static void ublk_free_disk(struct gendisk *disk)
+@@ -1059,6 +1081,25 @@ static void ublk_complete_rq(struct kref *ref)
+ __ublk_complete_rq(req);
+ }
+
++static void ublk_do_fail_rq(struct request *req)
++{
++ struct ublk_queue *ubq = req->mq_hctx->driver_data;
++
++ if (ublk_nosrv_should_reissue_outstanding(ubq->dev))
++ blk_mq_requeue_request(req, false);
++ else
++ __ublk_complete_rq(req);
++}
++
++static void ublk_fail_rq_fn(struct kref *ref)
++{
++ struct ublk_rq_data *data = container_of(ref, struct ublk_rq_data,
++ ref);
++ struct request *req = blk_mq_rq_from_pdu(data);
++
++ ublk_do_fail_rq(req);
++}
++
+ /*
+ * Since __ublk_rq_task_work always fails requests immediately during
+ * exiting, __ublk_fail_req() is only called from abort context during
+@@ -1072,10 +1113,13 @@ static void __ublk_fail_req(struct ublk_queue *ubq, struct ublk_io *io,
+ {
+ WARN_ON_ONCE(io->flags & UBLK_IO_FLAG_ACTIVE);
+
+- if (ublk_queue_can_use_recovery_reissue(ubq))
+- blk_mq_requeue_request(req, false);
+- else
+- ublk_put_req_ref(ubq, req);
++ if (ublk_need_req_ref(ubq)) {
++ struct ublk_rq_data *data = blk_mq_rq_to_pdu(req);
++
++ kref_put(&data->ref, ublk_fail_rq_fn);
++ } else {
++ ublk_do_fail_rq(req);
++ }
+ }
+
+ static void ubq_complete_io_cmd(struct ublk_io *io, int res,
+@@ -1100,7 +1144,7 @@ static inline void __ublk_abort_rq(struct ublk_queue *ubq,
+ struct request *rq)
+ {
+ /* We cannot process this rq so just requeue it. */
+- if (ublk_queue_can_use_recovery(ubq))
++ if (ublk_nosrv_dev_should_queue_io(ubq->dev))
+ blk_mq_requeue_request(rq, false);
+ else
+ blk_mq_end_request(rq, BLK_STS_IOERR);
+@@ -1245,10 +1289,10 @@ static enum blk_eh_timer_return ublk_timeout(struct request *rq)
+ struct ublk_device *ub = ubq->dev;
+
+ if (ublk_abort_requests(ub, ubq)) {
+- if (ublk_can_use_recovery(ub))
+- schedule_work(&ub->quiesce_work);
+- else
++ if (ublk_nosrv_should_stop_dev(ub))
+ schedule_work(&ub->stop_work);
++ else
++ schedule_work(&ub->quiesce_work);
+ }
+ return BLK_EH_DONE;
+ }
+@@ -1277,7 +1321,7 @@ static blk_status_t ublk_queue_rq(struct blk_mq_hw_ctx *hctx,
+ * Note: force_abort is guaranteed to be seen because it is set
+ * before request queue is unquiesced.
+ */
+- if (ublk_queue_can_use_recovery(ubq) && unlikely(ubq->force_abort))
++ if (ublk_nosrv_should_queue_io(ubq) && unlikely(ubq->force_abort))
+ return BLK_STS_IOERR;
+
+ if (unlikely(ubq->canceling)) {
+@@ -1517,10 +1561,10 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
+ ublk_cancel_cmd(ubq, io, issue_flags);
+
+ if (need_schedule) {
+- if (ublk_can_use_recovery(ub))
+- schedule_work(&ub->quiesce_work);
+- else
++ if (ublk_nosrv_should_stop_dev(ub))
+ schedule_work(&ub->stop_work);
++ else
++ schedule_work(&ub->quiesce_work);
+ }
+ }
+
+@@ -1640,7 +1684,7 @@ static void ublk_stop_dev(struct ublk_device *ub)
+ mutex_lock(&ub->mutex);
+ if (ub->dev_info.state == UBLK_S_DEV_DEAD)
+ goto unlock;
+- if (ublk_can_use_recovery(ub)) {
++ if (ublk_nosrv_dev_should_queue_io(ub)) {
+ if (ub->dev_info.state == UBLK_S_DEV_LIVE)
+ __ublk_quiesce_dev(ub);
+ ublk_unquiesce_dev(ub);
+@@ -2738,7 +2782,7 @@ static int ublk_ctrl_start_recovery(struct ublk_device *ub,
+ int i;
+
+ mutex_lock(&ub->mutex);
+- if (!ublk_can_use_recovery(ub))
++ if (ublk_nosrv_should_stop_dev(ub))
+ goto out_unlock;
+ if (!ub->nr_queues_ready)
+ goto out_unlock;
+@@ -2791,7 +2835,7 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
+ __func__, ub->dev_info.nr_hw_queues, header->dev_id);
+
+ mutex_lock(&ub->mutex);
+- if (!ublk_can_use_recovery(ub))
++ if (ublk_nosrv_should_stop_dev(ub))
+ goto out_unlock;
+
+ if (ub->dev_info.state != UBLK_S_DEV_QUIESCED) {
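The rename splits what used to be two overloaded helpers into three independent questions about no-server behavior. A standalone truth-table model of those predicates, using illustrative flag bits rather than the real UBLK_F_* UAPI values:

    #include <stdio.h>
    #include <stdbool.h>

    #define F_USER_RECOVERY         (1u << 0)   /* illustrative bit values */
    #define F_USER_RECOVERY_REISSUE (1u << 1)

    static bool nosrv_should_reissue_outstanding(unsigned f)
    {
            return (f & F_USER_RECOVERY) && (f & F_USER_RECOVERY_REISSUE);
    }

    static bool nosrv_should_queue_io(unsigned f)
    {
            return f & F_USER_RECOVERY;
    }

    static bool nosrv_should_stop_dev(unsigned f)
    {
            return !(f & F_USER_RECOVERY);
    }

    int main(void)
    {
            unsigned cases[] = { 0, F_USER_RECOVERY,
                                 F_USER_RECOVERY | F_USER_RECOVERY_REISSUE };
            for (int i = 0; i < 3; i++) {
                    unsigned f = cases[i];
                    printf("flags=%#x stop=%d queue=%d reissue=%d\n", f,
                           nosrv_should_stop_dev(f),
                           nosrv_should_queue_io(f),
                           nosrv_should_reissue_outstanding(f));
            }
            return 0;
    }

With the recovery flag alone the device quiesces and outstanding I/O errors out; adding the reissue flag makes outstanding I/O requeue instead.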
+diff --git a/drivers/bluetooth/btintel_pcie.c b/drivers/bluetooth/btintel_pcie.c
+index 53f6b4f76bccdd..ab465e13c1f60f 100644
+--- a/drivers/bluetooth/btintel_pcie.c
++++ b/drivers/bluetooth/btintel_pcie.c
+@@ -36,6 +36,7 @@
+ /* Intel Bluetooth PCIe device id table */
+ static const struct pci_device_id btintel_pcie_table[] = {
+ { BTINTEL_PCI_DEVICE(0xA876, PCI_ANY_ID) },
++ { BTINTEL_PCI_DEVICE(0xE476, PCI_ANY_ID) },
+ { 0 }
+ };
+ MODULE_DEVICE_TABLE(pci, btintel_pcie_table);
+diff --git a/drivers/bluetooth/btqca.c b/drivers/bluetooth/btqca.c
+index 04d02c746ec0fd..dd2c0485b9848d 100644
+--- a/drivers/bluetooth/btqca.c
++++ b/drivers/bluetooth/btqca.c
+@@ -785,6 +785,7 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+ const char *firmware_name)
+ {
+ struct qca_fw_config config = {};
++ const char *variant = "";
+ int err;
+ u8 rom_ver = 0;
+ u32 soc_ver;
+@@ -879,13 +880,11 @@ int qca_uart_setup(struct hci_dev *hdev, uint8_t baudrate,
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+ case QCA_WCN3998:
+- if (le32_to_cpu(ver.soc_id) == QCA_WCN3991_SOC_ID) {
+- snprintf(config.fwname, sizeof(config.fwname),
+- "qca/crnv%02xu.bin", rom_ver);
+- } else {
+- snprintf(config.fwname, sizeof(config.fwname),
+- "qca/crnv%02x.bin", rom_ver);
+- }
++ if (le32_to_cpu(ver.soc_id) == QCA_WCN3991_SOC_ID)
++ variant = "u";
++
++ snprintf(config.fwname, sizeof(config.fwname),
++ "qca/crnv%02x%s.bin", rom_ver, variant);
+ break;
+ case QCA_WCN3988:
+ snprintf(config.fwname, sizeof(config.fwname),
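The consolidated format string above replaces two nearly identical snprintf() calls with one that takes an optional variant suffix. A trivial standalone check of the two resulting firmware names (the rom_ver value is made up):

    #include <assert.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char fwname[64];
            const char *variant = "";
            unsigned rom_ver = 0x02;

            /* Non-WCN3991 parts: no suffix. */
            snprintf(fwname, sizeof(fwname), "qca/crnv%02x%s.bin",
                     rom_ver, variant);
            assert(strcmp(fwname, "qca/crnv02.bin") == 0);

            /* WCN3991 appends "u" before the extension. */
            variant = "u";
            snprintf(fwname, sizeof(fwname), "qca/crnv%02x%s.bin",
                     rom_ver, variant);
            assert(strcmp(fwname, "qca/crnv02u.bin") == 0);
            return 0;
    }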
+diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
+index 3a0b9dc98707f5..151054a718522a 100644
+--- a/drivers/bluetooth/btusb.c
++++ b/drivers/bluetooth/btusb.c
+@@ -626,6 +626,10 @@ static const struct usb_device_id quirks_table[] = {
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x0489, 0xe102), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe152), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
++ { USB_DEVICE(0x0489, 0xe153), .driver_info = BTUSB_MEDIATEK |
++ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x04ca, 0x3804), .driver_info = BTUSB_MEDIATEK |
+ BTUSB_WIDEBAND_SPEECH },
+ { USB_DEVICE(0x04ca, 0x38e4), .driver_info = BTUSB_MEDIATEK |
+diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
+index 395d66e32a2ea9..2f322f890b81f2 100644
+--- a/drivers/bluetooth/hci_ldisc.c
++++ b/drivers/bluetooth/hci_ldisc.c
+@@ -102,7 +102,8 @@ static inline struct sk_buff *hci_uart_dequeue(struct hci_uart *hu)
+ if (!skb) {
+ percpu_down_read(&hu->proto_lock);
+
+- if (test_bit(HCI_UART_PROTO_READY, &hu->flags))
++ if (test_bit(HCI_UART_PROTO_READY, &hu->flags) ||
++ test_bit(HCI_UART_PROTO_INIT, &hu->flags))
+ skb = hu->proto->dequeue(hu);
+
+ percpu_up_read(&hu->proto_lock);
+@@ -124,7 +125,8 @@ int hci_uart_tx_wakeup(struct hci_uart *hu)
+ if (!percpu_down_read_trylock(&hu->proto_lock))
+ return 0;
+
+- if (!test_bit(HCI_UART_PROTO_READY, &hu->flags))
++ if (!test_bit(HCI_UART_PROTO_READY, &hu->flags) &&
++ !test_bit(HCI_UART_PROTO_INIT, &hu->flags))
+ goto no_schedule;
+
+ set_bit(HCI_UART_TX_WAKEUP, &hu->tx_state);
+@@ -278,7 +280,8 @@ static int hci_uart_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
+
+ percpu_down_read(&hu->proto_lock);
+
+- if (!test_bit(HCI_UART_PROTO_READY, &hu->flags)) {
++ if (!test_bit(HCI_UART_PROTO_READY, &hu->flags) &&
++ !test_bit(HCI_UART_PROTO_INIT, &hu->flags)) {
+ percpu_up_read(&hu->proto_lock);
+ return -EUNATCH;
+ }
+@@ -585,7 +588,8 @@ static void hci_uart_tty_wakeup(struct tty_struct *tty)
+ if (tty != hu->tty)
+ return;
+
+- if (test_bit(HCI_UART_PROTO_READY, &hu->flags))
++ if (test_bit(HCI_UART_PROTO_READY, &hu->flags) ||
++ test_bit(HCI_UART_PROTO_INIT, &hu->flags))
+ hci_uart_tx_wakeup(hu);
+ }
+
+@@ -611,7 +615,8 @@ static void hci_uart_tty_receive(struct tty_struct *tty, const u8 *data,
+
+ percpu_down_read(&hu->proto_lock);
+
+- if (!test_bit(HCI_UART_PROTO_READY, &hu->flags)) {
++ if (!test_bit(HCI_UART_PROTO_READY, &hu->flags) &&
++ !test_bit(HCI_UART_PROTO_INIT, &hu->flags)) {
+ percpu_up_read(&hu->proto_lock);
+ return;
+ }
+@@ -707,12 +712,16 @@ static int hci_uart_set_proto(struct hci_uart *hu, int id)
+
+ hu->proto = p;
+
++ set_bit(HCI_UART_PROTO_INIT, &hu->flags);
++
+ err = hci_uart_register_dev(hu);
+ if (err) {
+ return err;
+ }
+
+ set_bit(HCI_UART_PROTO_READY, &hu->flags);
++ clear_bit(HCI_UART_PROTO_INIT, &hu->flags);
++
+ return 0;
+ }
+
+diff --git a/drivers/bluetooth/hci_qca.c b/drivers/bluetooth/hci_qca.c
+index 37fddf6055bebb..1837622ea625a8 100644
+--- a/drivers/bluetooth/hci_qca.c
++++ b/drivers/bluetooth/hci_qca.c
+@@ -2353,6 +2353,7 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ switch (qcadev->btsoc_type) {
+ case QCA_WCN6855:
+ case QCA_WCN7850:
++ case QCA_WCN6750:
+ if (!device_property_present(&serdev->dev, "enable-gpios")) {
+ /*
+ * Backward compatibility with old DT sources. If the
+@@ -2372,7 +2373,6 @@ static int qca_serdev_probe(struct serdev_device *serdev)
+ case QCA_WCN3990:
+ case QCA_WCN3991:
+ case QCA_WCN3998:
+- case QCA_WCN6750:
+ qcadev->bt_power->dev = &serdev->dev;
+ err = qca_init_regulators(qcadev->bt_power, data->vregs,
+ data->num_vregs);
+diff --git a/drivers/bluetooth/hci_uart.h b/drivers/bluetooth/hci_uart.h
+index fbf3079b92a533..5ea5dd80e297c7 100644
+--- a/drivers/bluetooth/hci_uart.h
++++ b/drivers/bluetooth/hci_uart.h
+@@ -90,6 +90,7 @@ struct hci_uart {
+ #define HCI_UART_REGISTERED 1
+ #define HCI_UART_PROTO_READY 2
+ #define HCI_UART_NO_SUSPEND_NOTIFIER 3
++#define HCI_UART_PROTO_INIT 4
+
+ /* TX states */
+ #define HCI_UART_SENDING 1
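All of the hci_ldisc.c call sites above now test the same two-bit gate, so traffic is allowed both once setup has started (HCI_UART_PROTO_INIT) and once it has finished (HCI_UART_PROTO_READY). A small model of that gate, with illustrative bit positions:

    #include <assert.h>
    #include <stdbool.h>

    #define PROTO_READY 2   /* illustrative bit positions */
    #define PROTO_INIT  4

    static bool may_touch_proto(unsigned long flags)
    {
            /* Allow once setup has begun (INIT) or completed (READY). */
            return (flags & (1UL << PROTO_READY)) ||
                   (flags & (1UL << PROTO_INIT));
    }

    int main(void)
    {
            unsigned long flags = 0;
            assert(!may_touch_proto(flags));        /* before set_proto */
            flags |= 1UL << PROTO_INIT;             /* setup in progress */
            assert(may_touch_proto(flags));
            flags |= 1UL << PROTO_READY;            /* setup complete */
            flags &= ~(1UL << PROTO_INIT);
            assert(may_touch_proto(flags));
            return 0;
    }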
+diff --git a/drivers/bus/mhi/host/main.c b/drivers/bus/mhi/host/main.c
+index 4de75674f19350..aa8a0ef697c779 100644
+--- a/drivers/bus/mhi/host/main.c
++++ b/drivers/bus/mhi/host/main.c
+@@ -1207,11 +1207,16 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+ struct mhi_ring_element *mhi_tre;
+ struct mhi_buf_info *buf_info;
+ int eot, eob, chain, bei;
+- int ret;
++ int ret = 0;
+
+ /* Protect accesses for reading and incrementing WP */
+ write_lock_bh(&mhi_chan->lock);
+
++ if (mhi_chan->ch_state != MHI_CH_STATE_ENABLED) {
++ ret = -ENODEV;
++ goto out;
++ }
++
+ buf_ring = &mhi_chan->buf_ring;
+ tre_ring = &mhi_chan->tre_ring;
+
+@@ -1229,10 +1234,8 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+
+ if (!info->pre_mapped) {
+ ret = mhi_cntrl->map_single(mhi_cntrl, buf_info);
+- if (ret) {
+- write_unlock_bh(&mhi_chan->lock);
+- return ret;
+- }
++ if (ret)
++ goto out;
+ }
+
+ eob = !!(flags & MHI_EOB);
+@@ -1250,9 +1253,10 @@ int mhi_gen_tre(struct mhi_controller *mhi_cntrl, struct mhi_chan *mhi_chan,
+ mhi_add_ring_element(mhi_cntrl, tre_ring);
+ mhi_add_ring_element(mhi_cntrl, buf_ring);
+
++out:
+ write_unlock_bh(&mhi_chan->lock);
+
+- return 0;
++ return ret;
+ }
+
+ int mhi_queue_buf(struct mhi_device *mhi_dev, enum dma_data_direction dir,
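The mhi_gen_tre() hunks are a textbook conversion to a single unlock point: the new channel-state check and the old mapping-failure path both jump to one out: label instead of unlocking inline. A minimal pthread sketch of the same shape, with a hypothetical resource check in place of the MHI ring logic (build with -pthread):

    #include <errno.h>
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static bool channel_enabled = false;    /* stands in for ch_state */

    static int queue_element(int map_fails)
    {
            int ret = 0;

            pthread_mutex_lock(&lock);

            if (!channel_enabled) {         /* new early check, under lock */
                    ret = -ENODEV;
                    goto out;
            }
            if (map_fails) {                /* former inline-unlock path */
                    ret = -ENOMEM;
                    goto out;
            }
            /* ... ring updates would go here ... */
    out:
            pthread_mutex_unlock(&lock);    /* the one and only unlock */
            return ret;
    }

    int main(void)
    {
            printf("disabled: %d\n", queue_element(0));  /* -ENODEV */
            channel_enabled = true;
            printf("map fail: %d\n", queue_element(1));  /* -ENOMEM */
            printf("ok:       %d\n", queue_element(0));  /* 0 */
            return 0;
    }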
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index 7df7abaf3e526b..e25daf2396d37b 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -168,6 +168,11 @@ int tpm_try_get_ops(struct tpm_chip *chip)
+ goto out_ops;
+
+ mutex_lock(&chip->tpm_mutex);
++
++ /* tpm_chip_start may issue IO that is denied while suspended */
++ if (chip->flags & TPM_CHIP_FLAG_SUSPENDED)
++ goto out_lock;
++
+ rc = tpm_chip_start(chip);
+ if (rc)
+ goto out_lock;
+@@ -300,6 +305,7 @@ int tpm_class_shutdown(struct device *dev)
+ down_write(&chip->ops_sem);
+ if (chip->flags & TPM_CHIP_FLAG_TPM2) {
+ if (!tpm_chip_start(chip)) {
++ tpm2_end_auth_session(chip);
+ tpm2_shutdown(chip, TPM2_SU_CLEAR);
+ tpm_chip_stop(chip);
+ }
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index b1daa0d7b341b1..f62f7871edbdb0 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -445,18 +445,11 @@ int tpm_get_random(struct tpm_chip *chip, u8 *out, size_t max)
+ if (!chip)
+ return -ENODEV;
+
+- /* Give back zero bytes, as TPM chip has not yet fully resumed: */
+- if (chip->flags & TPM_CHIP_FLAG_SUSPENDED) {
+- rc = 0;
+- goto out;
+- }
+-
+ if (chip->flags & TPM_CHIP_FLAG_TPM2)
+ rc = tpm2_get_random(chip, out, max);
+ else
+ rc = tpm1_get_random(chip, out, max);
+
+-out:
+ tpm_put_ops(chip);
+ return rc;
+ }
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index fdef214b9f6bff..ed0d3d8449b306 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -114,11 +114,10 @@ static int wait_for_tpm_stat(struct tpm_chip *chip, u8 mask,
+ return 0;
+ /* process status changes without irq support */
+ do {
++ usleep_range(priv->timeout_min, priv->timeout_max);
+ status = chip->ops->status(chip);
+ if ((status & mask) == mask)
+ return 0;
+- usleep_range(priv->timeout_min,
+- priv->timeout_max);
+ } while (time_before(jiffies, stop));
+ return -ETIME;
+ }
+@@ -464,7 +463,10 @@ static int tpm_tis_send_data(struct tpm_chip *chip, const u8 *buf, size_t len)
+
+ if (wait_for_tpm_stat(chip, TPM_STS_VALID, chip->timeout_c,
+ &priv->int_queue, false) < 0) {
+- rc = -ETIME;
++ if (test_bit(TPM_TIS_STATUS_VALID_RETRY, &priv->flags))
++ rc = -EAGAIN;
++ else
++ rc = -ETIME;
+ goto out_err;
+ }
+ status = tpm_tis_status(chip);
+@@ -481,7 +483,10 @@ static int tpm_tis_send_data(struct tpm_chip *chip, const u8 *buf, size_t len)
+
+ if (wait_for_tpm_stat(chip, TPM_STS_VALID, chip->timeout_c,
+ &priv->int_queue, false) < 0) {
+- rc = -ETIME;
++ if (test_bit(TPM_TIS_STATUS_VALID_RETRY, &priv->flags))
++ rc = -EAGAIN;
++ else
++ rc = -ETIME;
+ goto out_err;
+ }
+ status = tpm_tis_status(chip);
+@@ -546,9 +551,11 @@ static int tpm_tis_send_main(struct tpm_chip *chip, const u8 *buf, size_t len)
+ if (rc >= 0)
+ /* Data transfer done successfully */
+ break;
+- else if (rc != -EIO)
++ else if (rc != -EAGAIN && rc != -EIO)
+ /* Data transfer failed, not recoverable */
+ return rc;
++
++ usleep_range(priv->timeout_min, priv->timeout_max);
+ }
+
+ /* go and do it */
+@@ -1144,6 +1151,9 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ priv->timeout_max = TIS_TIMEOUT_MAX_ATML;
+ }
+
++ if (priv->manufacturer_id == TPM_VID_IFX)
++ set_bit(TPM_TIS_STATUS_VALID_RETRY, &priv->flags);
++
+ if (is_bsw()) {
+ priv->ilb_base_addr = ioremap(INTEL_LEGACY_BLK_BASE_ADDR,
+ ILB_REMAP_SIZE);
+diff --git a/drivers/char/tpm/tpm_tis_core.h b/drivers/char/tpm/tpm_tis_core.h
+index 690ad8e9b73190..970d02c337c7f1 100644
+--- a/drivers/char/tpm/tpm_tis_core.h
++++ b/drivers/char/tpm/tpm_tis_core.h
+@@ -89,6 +89,7 @@ enum tpm_tis_flags {
+ TPM_TIS_INVALID_STATUS = 1,
+ TPM_TIS_DEFAULT_CANCELLATION = 2,
+ TPM_TIS_IRQ_TESTED = 3,
++ TPM_TIS_STATUS_VALID_RETRY = 4,
+ };
+
+ struct tpm_tis_data {
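Two polling behaviors change in tpm_tis_core.c: the status loop now sleeps before sampling (so a slow TPM is not read immediately after the command), and Infineon parts (TPM_TIS_STATUS_VALID_RETRY) turn a validity timeout into a retriable -EAGAIN with a delay between attempts. A standalone sketch of the sleep-first poll loop; the simulated device and timing values are invented for illustration:

    #include <errno.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Hypothetical device: reports ready after a few polls. */
    static int polls_until_ready = 3;
    static bool device_ready(void)
    {
            return --polls_until_ready <= 0;
    }

    /* Sleep-first polling: wait one interval, then sample, until timeout. */
    static int wait_for_ready(unsigned sleep_us, unsigned timeout_ms)
    {
            struct timespec start, now;
            clock_gettime(CLOCK_MONOTONIC, &start);

            do {
                    usleep(sleep_us);               /* sleep BEFORE the read */
                    if (device_ready())
                            return 0;
                    clock_gettime(CLOCK_MONOTONIC, &now);
            } while ((now.tv_sec - start.tv_sec) * 1000 +
                     (now.tv_nsec - start.tv_nsec) / 1000000 < timeout_ms);

            return -ETIME;
    }

    int main(void)
    {
            printf("wait_for_ready: %d\n", wait_for_ready(1000, 100));
            return 0;
    }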
+diff --git a/drivers/clk/qcom/clk-branch.c b/drivers/clk/qcom/clk-branch.c
+index 229480c5b075a0..0f10090d4ae681 100644
+--- a/drivers/clk/qcom/clk-branch.c
++++ b/drivers/clk/qcom/clk-branch.c
+@@ -28,7 +28,7 @@ static bool clk_branch_in_hwcg_mode(const struct clk_branch *br)
+
+ static bool clk_branch_check_halt(const struct clk_branch *br, bool enabling)
+ {
+- bool invert = (br->halt_check == BRANCH_HALT_ENABLE);
++ bool invert = (br->halt_check & BRANCH_HALT_ENABLE);
+ u32 val;
+
+ regmap_read(br->clkr.regmap, br->halt_reg, &val);
+@@ -44,7 +44,7 @@ static bool clk_branch2_check_halt(const struct clk_branch *br, bool enabling)
+ {
+ u32 val;
+ u32 mask;
+- bool invert = (br->halt_check == BRANCH_HALT_ENABLE);
++ bool invert = (br->halt_check & BRANCH_HALT_ENABLE);
+
+ mask = CBCR_NOC_FSM_STATUS;
+ mask |= CBCR_CLK_OFF;
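The one-character change from == to & matters because halt_check is a bit field, not a plain enum value: an encoding that combines BRANCH_HALT_ENABLE with another modifier bit would never compare equal, silently dropping the halt-bit inversion. A standalone demonstration with illustrative flag encodings (the extra modifier bit is hypothetical):

    #include <assert.h>
    #include <stdbool.h>

    #define BRANCH_HALT        0x0
    #define BRANCH_HALT_ENABLE 0x1
    #define BRANCH_HALT_VOTED  0x2   /* hypothetical extra modifier bit */

    static bool invert_eq(unsigned halt_check)
    {
            return halt_check == BRANCH_HALT_ENABLE;   /* old, buggy */
    }

    static bool invert_and(unsigned halt_check)
    {
            return halt_check & BRANCH_HALT_ENABLE;    /* fixed */
    }

    int main(void)
    {
            unsigned combined = BRANCH_HALT_ENABLE | BRANCH_HALT_VOTED;

            assert(invert_eq(BRANCH_HALT_ENABLE));  /* both agree here */
            assert(invert_and(BRANCH_HALT_ENABLE));

            assert(!invert_eq(combined));  /* == misses the combined case */
            assert(invert_and(combined));  /* & still sees the ENABLE bit */
            return 0;
    }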
+diff --git a/drivers/clk/qcom/gdsc.c b/drivers/clk/qcom/gdsc.c
+index fa5fe4c2a2ee77..208fc430ec98f1 100644
+--- a/drivers/clk/qcom/gdsc.c
++++ b/drivers/clk/qcom/gdsc.c
+@@ -292,6 +292,9 @@ static int gdsc_enable(struct generic_pm_domain *domain)
+ */
+ udelay(1);
+
++ if (sc->flags & RETAIN_FF_ENABLE)
++ gdsc_retain_ff_on(sc);
++
+ /* Turn on HW trigger mode if supported */
+ if (sc->flags & HW_CTRL) {
+ ret = gdsc_hwctrl(sc, true);
+@@ -308,9 +311,6 @@ static int gdsc_enable(struct generic_pm_domain *domain)
+ udelay(1);
+ }
+
+- if (sc->flags & RETAIN_FF_ENABLE)
+- gdsc_retain_ff_on(sc);
+-
+ return 0;
+ }
+
+@@ -457,13 +457,6 @@ static int gdsc_init(struct gdsc *sc)
+ goto err_disable_supply;
+ }
+
+- /* Turn on HW trigger mode if supported */
+- if (sc->flags & HW_CTRL) {
+- ret = gdsc_hwctrl(sc, true);
+- if (ret < 0)
+- goto err_disable_supply;
+- }
+-
+ /*
+ * Make sure the retain bit is set if the GDSC is already on,
+ * otherwise we end up turning off the GDSC and destroying all
+@@ -471,6 +464,14 @@ static int gdsc_init(struct gdsc *sc)
+ */
+ if (sc->flags & RETAIN_FF_ENABLE)
+ gdsc_retain_ff_on(sc);
++
++ /* Turn on HW trigger mode if supported */
++ if (sc->flags & HW_CTRL) {
++ ret = gdsc_hwctrl(sc, true);
++ if (ret < 0)
++ goto err_disable_supply;
++ }
++
+ } else if (sc->flags & ALWAYS_ON) {
+ /* If ALWAYS_ON GDSCs are not ON, turn them ON */
+ gdsc_enable(&sc->pd);
+@@ -506,6 +507,23 @@ static int gdsc_init(struct gdsc *sc)
+ return ret;
+ }
+
++static void gdsc_pm_subdomain_remove(struct gdsc_desc *desc, size_t num)
++{
++ struct device *dev = desc->dev;
++ struct gdsc **scs = desc->scs;
++ int i;
++
++ /* Remove subdomains */
++ for (i = num - 1; i >= 0; i--) {
++ if (!scs[i])
++ continue;
++ if (scs[i]->parent)
++ pm_genpd_remove_subdomain(scs[i]->parent, &scs[i]->pd);
++ else if (!IS_ERR_OR_NULL(dev->pm_domain))
++ pm_genpd_remove_subdomain(pd_to_genpd(dev->pm_domain), &scs[i]->pd);
++ }
++}
++
+ int gdsc_register(struct gdsc_desc *desc,
+ struct reset_controller_dev *rcdev, struct regmap *regmap)
+ {
+@@ -555,30 +573,27 @@ int gdsc_register(struct gdsc_desc *desc,
+ if (!scs[i])
+ continue;
+ if (scs[i]->parent)
+- pm_genpd_add_subdomain(scs[i]->parent, &scs[i]->pd);
++ ret = pm_genpd_add_subdomain(scs[i]->parent, &scs[i]->pd);
+ else if (!IS_ERR_OR_NULL(dev->pm_domain))
+- pm_genpd_add_subdomain(pd_to_genpd(dev->pm_domain), &scs[i]->pd);
++ ret = pm_genpd_add_subdomain(pd_to_genpd(dev->pm_domain), &scs[i]->pd);
++ if (ret)
++ goto err_pm_subdomain_remove;
+ }
+
+ return of_genpd_add_provider_onecell(dev->of_node, data);
++
++err_pm_subdomain_remove:
++ gdsc_pm_subdomain_remove(desc, i);
++
++ return ret;
+ }
+
+ void gdsc_unregister(struct gdsc_desc *desc)
+ {
+- int i;
+ struct device *dev = desc->dev;
+- struct gdsc **scs = desc->scs;
+ size_t num = desc->num;
+
+- /* Remove subdomains */
+- for (i = 0; i < num; i++) {
+- if (!scs[i])
+- continue;
+- if (scs[i]->parent)
+- pm_genpd_remove_subdomain(scs[i]->parent, &scs[i]->pd);
+- else if (!IS_ERR_OR_NULL(dev->pm_domain))
+- pm_genpd_remove_subdomain(pd_to_genpd(dev->pm_domain), &scs[i]->pd);
+- }
++ gdsc_pm_subdomain_remove(desc, num);
+ of_genpd_del_provider(dev->of_node);
+ }
+
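gdsc_register() now checks pm_genpd_add_subdomain() and unwinds, in reverse order, only the subdomains added so far; the same helper then serves gdsc_unregister(). A generic standalone model of that add-with-rollback pattern (the failing index is hypothetical):

    #include <errno.h>
    #include <stdio.h>

    #define N 4

    static int added[N];

    /* Hypothetical: adding item 2 fails, to exercise the rollback. */
    static int add_one(int i)
    {
            if (i == 2)
                    return -EINVAL;
            added[i] = 1;
            printf("added %d\n", i);
            return 0;
    }

    static void remove_upto(int num)
    {
            /* Tear down in reverse, mirroring gdsc_pm_subdomain_remove(). */
            for (int i = num - 1; i >= 0; i--) {
                    if (!added[i])
                            continue;
                    added[i] = 0;
                    printf("removed %d\n", i);
            }
    }

    int main(void)
    {
            int i, ret = 0;

            for (i = 0; i < N; i++) {
                    ret = add_one(i);
                    if (ret)
                            break;
            }
            if (ret)
                    remove_upto(i);   /* unwind only what was added */
            return ret ? 1 : 0;
    }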
+diff --git a/drivers/clk/renesas/r9a07g043-cpg.c b/drivers/clk/renesas/r9a07g043-cpg.c
+index c3c2b0c4398330..fce2eecfa8c03c 100644
+--- a/drivers/clk/renesas/r9a07g043-cpg.c
++++ b/drivers/clk/renesas/r9a07g043-cpg.c
+@@ -89,7 +89,9 @@ static const struct clk_div_table dtable_1_32[] = {
+
+ /* Mux clock tables */
+ static const char * const sel_pll3_3[] = { ".pll3_533", ".pll3_400" };
++#ifdef CONFIG_ARM64
+ static const char * const sel_pll6_2[] = { ".pll6_250", ".pll5_250" };
++#endif
+ static const char * const sel_sdhi[] = { ".clk_533", ".clk_400", ".clk_266" };
+
+ static const u32 mtable_sdhi[] = { 1, 2, 3 };
+@@ -137,7 +139,12 @@ static const struct cpg_core_clk r9a07g043_core_clks[] __initconst = {
+ DEF_DIV("P2", R9A07G043_CLK_P2, CLK_PLL3_DIV2_4_2, DIVPL3A, dtable_1_32),
+ DEF_FIXED("M0", R9A07G043_CLK_M0, CLK_PLL3_DIV2_4, 1, 1),
+ DEF_FIXED("ZT", R9A07G043_CLK_ZT, CLK_PLL3_DIV2_4_2, 1, 1),
++#ifdef CONFIG_ARM64
+ DEF_MUX("HP", R9A07G043_CLK_HP, SEL_PLL6_2, sel_pll6_2),
++#endif
++#ifdef CONFIG_RISCV
++ DEF_FIXED("HP", R9A07G043_CLK_HP, CLK_PLL6_250, 1, 1),
++#endif
+ DEF_FIXED("SPI0", R9A07G043_CLK_SPI0, CLK_DIV_PLL3_C, 1, 2),
+ DEF_FIXED("SPI1", R9A07G043_CLK_SPI1, CLK_DIV_PLL3_C, 1, 4),
+ DEF_SD_MUX("SD0", R9A07G043_CLK_SD0, SEL_SDHI0, SEL_SDHI0_STS, sel_sdhi,
+diff --git a/drivers/clocksource/timer-stm32-lp.c b/drivers/clocksource/timer-stm32-lp.c
+index a4c95161cb22c4..193e4f643358bc 100644
+--- a/drivers/clocksource/timer-stm32-lp.c
++++ b/drivers/clocksource/timer-stm32-lp.c
+@@ -168,9 +168,7 @@ static int stm32_clkevent_lp_probe(struct platform_device *pdev)
+ }
+
+ if (of_property_read_bool(pdev->dev.parent->of_node, "wakeup-source")) {
+- ret = device_init_wakeup(&pdev->dev, true);
+- if (ret)
+- goto out_clk_disable;
++ device_set_wakeup_capable(&pdev->dev, true);
+
+ ret = dev_pm_set_wake_irq(&pdev->dev, irq);
+ if (ret)
+diff --git a/drivers/crypto/ccp/sp-pci.c b/drivers/crypto/ccp/sp-pci.c
+index 248d98fd8c48d0..157f9a9ed63616 100644
+--- a/drivers/crypto/ccp/sp-pci.c
++++ b/drivers/crypto/ccp/sp-pci.c
+@@ -189,14 +189,17 @@ static bool sp_pci_is_master(struct sp_device *sp)
+ pdev_new = to_pci_dev(dev_new);
+ pdev_cur = to_pci_dev(dev_cur);
+
+- if (pdev_new->bus->number < pdev_cur->bus->number)
+- return true;
++ if (pci_domain_nr(pdev_new->bus) != pci_domain_nr(pdev_cur->bus))
++ return pci_domain_nr(pdev_new->bus) < pci_domain_nr(pdev_cur->bus);
+
+- if (PCI_SLOT(pdev_new->devfn) < PCI_SLOT(pdev_cur->devfn))
+- return true;
++ if (pdev_new->bus->number != pdev_cur->bus->number)
++ return pdev_new->bus->number < pdev_cur->bus->number;
+
+- if (PCI_FUNC(pdev_new->devfn) < PCI_FUNC(pdev_cur->devfn))
+- return true;
++ if (PCI_SLOT(pdev_new->devfn) != PCI_SLOT(pdev_cur->devfn))
++ return PCI_SLOT(pdev_new->devfn) < PCI_SLOT(pdev_cur->devfn);
++
++ if (PCI_FUNC(pdev_new->devfn) != PCI_FUNC(pdev_cur->devfn))
++ return PCI_FUNC(pdev_new->devfn) < PCI_FUNC(pdev_cur->devfn);
+
+ return false;
+ }
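The rewritten sp_pci_is_master() comparison is a lexicographic order over (domain, bus, slot, function). The old chain of independent less-than checks could declare a device "smaller" when an earlier key was actually larger, and it ignored the PCI domain entirely. A standalone model of both versions, with simplified stand-ins for the pci_dev fields:

    #include <assert.h>
    #include <stdbool.h>

    struct loc { int domain, bus, slot, func; };

    static bool less_old(struct loc a, struct loc b)   /* buggy original */
    {
            if (a.bus < b.bus)   return true;
            if (a.slot < b.slot) return true;
            if (a.func < b.func) return true;
            return false;
    }

    static bool less_new(struct loc a, struct loc b)   /* lexicographic */
    {
            if (a.domain != b.domain) return a.domain < b.domain;
            if (a.bus != b.bus)       return a.bus < b.bus;
            if (a.slot != b.slot)     return a.slot < b.slot;
            if (a.func != b.func)     return a.func < b.func;
            return false;
    }

    int main(void)
    {
            struct loc a = { 0, 2, 0, 0 }, b = { 0, 1, 3, 0 };

            /* a is on a higher bus, so it must NOT sort before b ... */
            assert(!less_new(a, b));
            /* ... but the old code said it did, because slot 0 < slot 3. */
            assert(less_old(a, b));
            return 0;
    }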
+diff --git a/drivers/gpio/gpio-tegra186.c b/drivers/gpio/gpio-tegra186.c
+index 1ecb733a5e88b4..45543ab5073f66 100644
+--- a/drivers/gpio/gpio-tegra186.c
++++ b/drivers/gpio/gpio-tegra186.c
+@@ -823,6 +823,7 @@ static int tegra186_gpio_probe(struct platform_device *pdev)
+ struct gpio_irq_chip *irq;
+ struct tegra_gpio *gpio;
+ struct device_node *np;
++ struct resource *res;
+ char **names;
+ int err;
+
+@@ -842,19 +843,19 @@ static int tegra186_gpio_probe(struct platform_device *pdev)
+ gpio->num_banks++;
+
+ /* get register apertures */
+- gpio->secure = devm_platform_ioremap_resource_byname(pdev, "security");
+- if (IS_ERR(gpio->secure)) {
+- gpio->secure = devm_platform_ioremap_resource(pdev, 0);
+- if (IS_ERR(gpio->secure))
+- return PTR_ERR(gpio->secure);
+- }
+-
+- gpio->base = devm_platform_ioremap_resource_byname(pdev, "gpio");
+- if (IS_ERR(gpio->base)) {
+- gpio->base = devm_platform_ioremap_resource(pdev, 1);
+- if (IS_ERR(gpio->base))
+- return PTR_ERR(gpio->base);
+- }
++ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "security");
++ if (!res)
++ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
++ gpio->secure = devm_ioremap_resource(&pdev->dev, res);
++ if (IS_ERR(gpio->secure))
++ return PTR_ERR(gpio->secure);
++
++ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "gpio");
++ if (!res)
++ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
++ gpio->base = devm_ioremap_resource(&pdev->dev, res);
++ if (IS_ERR(gpio->base))
++ return PTR_ERR(gpio->base);
+
+ err = platform_irq_count(pdev);
+ if (err < 0)
+diff --git a/drivers/gpio/gpio-zynq.c b/drivers/gpio/gpio-zynq.c
+index 1a42336dfc1d4a..cc53e6940ad7e6 100644
+--- a/drivers/gpio/gpio-zynq.c
++++ b/drivers/gpio/gpio-zynq.c
+@@ -1011,6 +1011,7 @@ static void zynq_gpio_remove(struct platform_device *pdev)
+ ret = pm_runtime_get_sync(&pdev->dev);
+ if (ret < 0)
+ dev_warn(&pdev->dev, "pm_runtime_get_sync() Failed\n");
++ device_init_wakeup(&pdev->dev, 0);
+ gpiochip_remove(&gpio->chip);
+ device_set_wakeup_capable(&pdev->dev, 0);
+ pm_runtime_disable(&pdev->dev);
+diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
+index 880f1efcaca534..e543129d360500 100644
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -193,6 +193,8 @@ static void of_gpio_try_fixup_polarity(const struct device_node *np,
+ */
+ { "himax,hx8357", "gpios-reset", false },
+ { "himax,hx8369", "gpios-reset", false },
++#endif
++#if IS_ENABLED(CONFIG_MTD_NAND_JZ4780)
+ /*
+ * The rb-gpios semantics was undocumented and qi,lb60 (along with
+ * the ingenic driver) got it wrong. The active state encodes the
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 96845541b2d255..31d4df96889812 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -6575,18 +6575,26 @@ struct dma_fence *amdgpu_device_switch_gang(struct amdgpu_device *adev,
+ {
+ struct dma_fence *old = NULL;
+
++ dma_fence_get(gang);
+ do {
+ dma_fence_put(old);
+ old = amdgpu_device_get_gang(adev);
+ if (old == gang)
+ break;
+
+- if (!dma_fence_is_signaled(old))
++ if (!dma_fence_is_signaled(old)) {
++ dma_fence_put(gang);
+ return old;
++ }
+
+ } while (cmpxchg((struct dma_fence __force **)&adev->gang_submit,
+ old, gang) != old);
+
++ /*
++ * Drop it once for the exchanged reference in adev and once for the
++ * thread local reference acquired in amdgpu_device_get_gang().
++ */
++ dma_fence_put(old);
+ dma_fence_put(old);
+ return NULL;
+ }
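The extra dma_fence_get()/dma_fence_put() pair above balances two distinct references: the one the adev->gang_submit slot takes over, and the thread-local one returned by amdgpu_device_get_gang(). A much-simplified refcount model of the successful exchange (toy object, no cmpxchg loop, purely illustrative):

    #include <assert.h>
    #include <stdio.h>

    struct obj { int refs; };   /* stands in for dma_fence */

    static void get(struct obj *o) { if (o) o->refs++; }
    static void put(struct obj *o) { if (o) o->refs--; }

    int main(void)
    {
            struct obj old_fence = { .refs = 1 };   /* ref held by the slot */
            struct obj new_fence = { .refs = 1 };   /* caller's reference   */
            struct obj *slot = &old_fence;

            get(&new_fence);     /* reference the slot will own */
            get(slot);           /* local ref, like get_gang() takes */
            struct obj *old = slot;
            slot = &new_fence;   /* the cmpxchg in the real code */

            /* Drop once for the slot's old reference, once for the local
             * reference taken just above - the hunk's "double put". */
            put(old);
            put(old);

            assert(old_fence.refs == 0);    /* old fully released */
            assert(new_fence.refs == 2);    /* caller + slot */
            put(&new_fence);                /* caller drops its own ref */
            assert(new_fence.refs == 1);    /* slot keeps it alive */
            printf("ok\n");
            return 0;
    }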
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+index 73e02141a6e215..37d53578825b33 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+@@ -2434,8 +2434,6 @@ int amdgpu_vm_init(struct amdgpu_device *adev, struct amdgpu_vm *vm,
+ spin_lock_init(&vm->status_lock);
+ INIT_LIST_HEAD(&vm->freed);
+ INIT_LIST_HEAD(&vm->done);
+- INIT_LIST_HEAD(&vm->pt_freed);
+- INIT_WORK(&vm->pt_free_work, amdgpu_vm_pt_free_work);
+ INIT_KFIFO(vm->faults);
+
+ r = amdgpu_vm_init_entities(adev, vm);
+@@ -2607,8 +2605,6 @@ void amdgpu_vm_fini(struct amdgpu_device *adev, struct amdgpu_vm *vm)
+
+ amdgpu_amdkfd_gpuvm_destroy_cb(adev, vm);
+
+- flush_work(&vm->pt_free_work);
+-
+ root = amdgpu_bo_ref(vm->root.bo);
+ amdgpu_bo_reserve(root, true);
+ amdgpu_vm_put_task_info(vm->task_info);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+index 52dd7cdfdc8145..ee893527a4f1db 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.h
+@@ -360,10 +360,6 @@ struct amdgpu_vm {
+ /* BOs which are invalidated, has been updated in the PTs */
+ struct list_head done;
+
+- /* PT BOs scheduled to free and fill with zero if vm_resv is not hold */
+- struct list_head pt_freed;
+- struct work_struct pt_free_work;
+-
+ /* contains the page directory */
+ struct amdgpu_vm_bo_base root;
+ struct dma_fence *last_update;
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
+index f78a0434a48fa2..54ae0e9bc6d772 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm_pt.c
+@@ -546,27 +546,6 @@ static void amdgpu_vm_pt_free(struct amdgpu_vm_bo_base *entry)
+ amdgpu_bo_unref(&entry->bo);
+ }
+
+-void amdgpu_vm_pt_free_work(struct work_struct *work)
+-{
+- struct amdgpu_vm_bo_base *entry, *next;
+- struct amdgpu_vm *vm;
+- LIST_HEAD(pt_freed);
+-
+- vm = container_of(work, struct amdgpu_vm, pt_free_work);
+-
+- spin_lock(&vm->status_lock);
+- list_splice_init(&vm->pt_freed, &pt_freed);
+- spin_unlock(&vm->status_lock);
+-
+- /* flush_work in amdgpu_vm_fini ensure vm->root.bo is valid. */
+- amdgpu_bo_reserve(vm->root.bo, true);
+-
+- list_for_each_entry_safe(entry, next, &pt_freed, vm_status)
+- amdgpu_vm_pt_free(entry);
+-
+- amdgpu_bo_unreserve(vm->root.bo);
+-}
+-
+ /**
+ * amdgpu_vm_pt_free_list - free PD/PT levels
+ *
+@@ -579,19 +558,15 @@ void amdgpu_vm_pt_free_list(struct amdgpu_device *adev,
+ struct amdgpu_vm_update_params *params)
+ {
+ struct amdgpu_vm_bo_base *entry, *next;
+- struct amdgpu_vm *vm = params->vm;
+ bool unlocked = params->unlocked;
+
+ if (list_empty(&params->tlb_flush_waitlist))
+ return;
+
+- if (unlocked) {
+- spin_lock(&vm->status_lock);
+- list_splice_init(&params->tlb_flush_waitlist, &vm->pt_freed);
+- spin_unlock(&vm->status_lock);
+- schedule_work(&vm->pt_free_work);
+- return;
+- }
++ /*
++ * An unlocked unmap only clears page table leaves; warn if asked to free a page entry here.
++ */
++ WARN_ON(unlocked);
+
+ list_for_each_entry_safe(entry, next, &params->tlb_flush_waitlist, vm_status)
+ amdgpu_vm_pt_free(entry);
+@@ -899,7 +874,15 @@ int amdgpu_vm_ptes_update(struct amdgpu_vm_update_params *params,
+ incr = (uint64_t)AMDGPU_GPU_PAGE_SIZE << shift;
+ mask = amdgpu_vm_pt_entries_mask(adev, cursor.level);
+ pe_start = ((cursor.pfn >> shift) & mask) * 8;
+- entry_end = ((uint64_t)mask + 1) << shift;
++
++ if (cursor.level < AMDGPU_VM_PTB && params->unlocked)
++ /*
++ * An unlocked unmap from the MMU notifier may hit a huge page whose leaf is a
++ * PDE entry; clear only that one entry, then search again for a PDE or PTE leaf.
++ */
++ entry_end = 1ULL << shift;
++ else
++ entry_end = ((uint64_t)mask + 1) << shift;
+ entry_end += cursor.pfn & ~(entry_end - 1);
+ entry_end = min(entry_end, end);
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+index 3e6b4736a7feaa..67b5f3d7ff8e91 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
+@@ -212,6 +212,11 @@ static int set_queue_properties_from_user(struct queue_properties *q_properties,
+ return -EINVAL;
+ }
+
++ if (args->ring_size < KFD_MIN_QUEUE_RING_SIZE) {
++ args->ring_size = KFD_MIN_QUEUE_RING_SIZE;
++ pr_debug("Size lower. clamped to KFD_MIN_QUEUE_RING_SIZE");
++ }
++
+ if (!access_ok((const void __user *) args->read_pointer_address,
+ sizeof(uint32_t))) {
+ pr_err("Can't access read pointer\n");
+@@ -461,6 +466,11 @@ static int kfd_ioctl_update_queue(struct file *filp, struct kfd_process *p,
+ return -EINVAL;
+ }
+
++ if (args->ring_size < KFD_MIN_QUEUE_RING_SIZE) {
++ args->ring_size = KFD_MIN_QUEUE_RING_SIZE;
++ pr_debug("Size lower. clamped to KFD_MIN_QUEUE_RING_SIZE");
++ }
++
+ properties.queue_address = args->ring_base_address;
+ properties.queue_size = args->ring_size;
+ properties.queue_percent = args->queue_percentage & 0xFF;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index d350c7ce35b3d6..9186ef0bd2a32a 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -1493,6 +1493,11 @@ int kfd_debugfs_hang_hws(struct kfd_node *dev)
+ return -EINVAL;
+ }
+
++ if (dev->kfd->shared_resources.enable_mes) {
++ dev_err(dev->adev->dev, "Inducing MES hang is not supported\n");
++ return -EINVAL;
++ }
++
+ return dqm_debugfs_hang_hws(dev->dqm);
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process.c b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+index 264bd764f6f27d..0ec8b457494bd7 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process.c
+@@ -35,6 +35,7 @@
+ #include <linux/pm_runtime.h>
+ #include "amdgpu_amdkfd.h"
+ #include "amdgpu.h"
++#include "amdgpu_reset.h"
+
+ struct mm_struct;
+
+@@ -1140,6 +1141,17 @@ static void kfd_process_remove_sysfs(struct kfd_process *p)
+ p->kobj = NULL;
+ }
+
++/*
++ * If any GPU is undergoing a reset, wait for the reset to complete.
++ */
++static void kfd_process_wait_gpu_reset_complete(struct kfd_process *p)
++{
++ int i;
++
++ for (i = 0; i < p->n_pdds; i++)
++ flush_workqueue(p->pdds[i]->dev->adev->reset_domain->wq);
++}
++
+ /* No process locking is needed in this function, because the process
+ * is not findable any more. We must assume that no other thread is
+ * using it any more, otherwise we couldn't safely free the process
+@@ -1154,6 +1166,11 @@ static void kfd_process_wq_release(struct work_struct *work)
+ kfd_process_dequeue_from_all_devices(p);
+ pqm_uninit(&p->pqm);
+
++ /*
++ * If a GPU is in reset, user queues may still be running; wait for the reset to complete.
++ */
++ kfd_process_wait_gpu_reset_complete(p);
++
+ /* Signal the eviction fence after user mode queues are
+ * destroyed. This allows any BOs to be freed without
+ * triggering pointless evictions or waiting for fences.
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+index ac777244ee0a18..4078a81761871c 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_process_queue_manager.c
+@@ -546,7 +546,7 @@ int pqm_destroy_queue(struct process_queue_manager *pqm, unsigned int qid)
+ pr_err("Pasid 0x%x destroy queue %d failed, ret %d\n",
+ pqm->process->pasid,
+ pqn->q->properties.queue_id, retval);
+- if (retval != -ETIME)
++ if (retval != -ETIME && retval != -EIO)
+ goto err_destroy_queue;
+ }
+ kfd_procfs_del_queue(pqn->q);
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+index 8c61dee5ca0db1..b50283864dcd26 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_svm.c
+@@ -2992,19 +2992,6 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
+ goto out;
+ }
+
+- /* check if this page fault time stamp is before svms->checkpoint_ts */
+- if (svms->checkpoint_ts[gpuidx] != 0) {
+- if (amdgpu_ih_ts_after(ts, svms->checkpoint_ts[gpuidx])) {
+- pr_debug("draining retry fault, drop fault 0x%llx\n", addr);
+- r = 0;
+- goto out;
+- } else
+- /* ts is after svms->checkpoint_ts now, reset svms->checkpoint_ts
+- * to zero to avoid following ts wrap around give wrong comparing
+- */
+- svms->checkpoint_ts[gpuidx] = 0;
+- }
+-
+ if (!p->xnack_enabled) {
+ pr_debug("XNACK not enabled for pasid 0x%x\n", pasid);
+ r = -EFAULT;
+@@ -3024,6 +3011,21 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
+ mmap_read_lock(mm);
+ retry_write_locked:
+ mutex_lock(&svms->lock);
++
++ /* check if this page fault time stamp is before svms->checkpoint_ts */
++ if (svms->checkpoint_ts[gpuidx] != 0) {
++ if (amdgpu_ih_ts_after(ts, svms->checkpoint_ts[gpuidx])) {
++ pr_debug("draining retry fault, drop fault 0x%llx\n", addr);
++ r = -EAGAIN;
++ goto out_unlock_svms;
++ } else {
++ /* ts is now after svms->checkpoint_ts; reset svms->checkpoint_ts
++ * to zero so a later timestamp wraparound cannot yield a wrong comparison
++ */
++ svms->checkpoint_ts[gpuidx] = 0;
++ }
++ }
++
+ prange = svm_range_from_addr(svms, addr, NULL);
+ if (!prange) {
+ pr_debug("failed to find prange svms 0x%p address [0x%llx]\n",
+@@ -3148,7 +3150,8 @@ svm_range_restore_pages(struct amdgpu_device *adev, unsigned int pasid,
+ mutex_unlock(&svms->lock);
+ mmap_read_unlock(mm);
+
+- svm_range_count_fault(node, p, gpuidx);
++ if (r != -EAGAIN)
++ svm_range_count_fault(node, p, gpuidx);
+
+ mmput(mm);
+ out:
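Moving the checkpoint_ts test under svms->lock closes the race, and the comparison itself has to be wraparound-safe. That is the classic signed-difference idiom (the same trick as the kernel's time_after(); amdgpu_ih_ts_after() applies the idea to 48-bit IH timestamps). A runnable sketch:

#include <stdio.h>
#include <stdint.h>

/* True if a is after b, even if the counter wrapped in between. */
static int ts_after(uint64_t a, uint64_t b)
{
    return (int64_t)(a - b) > 0;
}

int main(void)
{
    uint64_t near_wrap = UINT64_MAX - 5;

    printf("%d\n", ts_after(near_wrap + 10, near_wrap)); /* 1: wrapped, still after */
    printf("%d\n", ts_after(near_wrap, near_wrap + 10)); /* 0: before */
    return 0;
}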
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c
+index 1ed21c1b86a5bb..a966abd4078810 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_dc_resource_mgmt.c
+@@ -532,26 +532,6 @@ static void calculate_odm_slices(const struct dc_stream_state *stream, unsigned
+ odm_slice_end_x[odm_factor - 1] = stream->src.width - 1;
+ }
+
+-static bool is_plane_in_odm_slice(const struct dc_plane_state *plane, unsigned int slice_index, unsigned int *odm_slice_end_x, unsigned int num_slices)
+-{
+- unsigned int slice_start_x, slice_end_x;
+-
+- if (slice_index == 0)
+- slice_start_x = 0;
+- else
+- slice_start_x = odm_slice_end_x[slice_index - 1] + 1;
+-
+- slice_end_x = odm_slice_end_x[slice_index];
+-
+- if (plane->clip_rect.x + plane->clip_rect.width < slice_start_x)
+- return false;
+-
+- if (plane->clip_rect.x > slice_end_x)
+- return false;
+-
+- return true;
+-}
+-
+ static void add_odm_slice_to_odm_tree(struct dml2_context *ctx,
+ struct dc_state *state,
+ struct dc_pipe_mapping_scratch *scratch,
+@@ -791,12 +771,6 @@ static void map_pipes_for_plane(struct dml2_context *ctx, struct dc_state *state
+ sort_pipes_for_splitting(&scratch->pipe_pool);
+
+ for (odm_slice_index = 0; odm_slice_index < scratch->odm_info.odm_factor; odm_slice_index++) {
+- // We build the tree for one ODM slice at a time.
+- // Each ODM slice shares a common OPP
+- if (!is_plane_in_odm_slice(plane, odm_slice_index, scratch->odm_info.odm_slice_end_x, scratch->odm_info.odm_factor)) {
+- continue;
+- }
+-
+ // Now we have a list of all pipes to be used for this plane/stream, now setup the tree.
+ scratch->odm_info.next_higher_pipe_for_odm_slice[odm_slice_index] = add_plane_to_blend_tree(ctx, state,
+ plane,
+diff --git a/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c b/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
+index a65a0ddee64672..c671908ba7d06c 100644
+--- a/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
++++ b/drivers/gpu/drm/amd/display/dc/hubp/dcn31/dcn31_hubp.c
+@@ -44,7 +44,7 @@ void hubp31_set_unbounded_requesting(struct hubp *hubp, bool enable)
+ struct dcn20_hubp *hubp2 = TO_DCN20_HUBP(hubp);
+
+ REG_UPDATE(DCHUBP_CNTL, HUBP_UNBOUNDED_REQ_MODE, enable);
+- REG_UPDATE(CURSOR_CONTROL, CURSOR_REQ_MODE, enable);
++ REG_UPDATE(CURSOR_CONTROL, CURSOR_REQ_MODE, 1);
+ }
+
+ void hubp31_soft_reset(struct hubp *hubp, bool reset)
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+index fd0530251c6e5a..d725af14af371a 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn10/dcn10_hwseq.c
+@@ -1992,20 +1992,11 @@ static void delay_cursor_until_vupdate(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ dc->hwss.get_position(&pipe_ctx, 1, &position);
+ vpos = position.vertical_count;
+
+- /* Avoid wraparound calculation issues */
+- vupdate_start += stream->timing.v_total;
+- vupdate_end += stream->timing.v_total;
+- vpos += stream->timing.v_total;
+-
+ if (vpos <= vupdate_start) {
+ /* VPOS is in VACTIVE or back porch. */
+ lines_to_vupdate = vupdate_start - vpos;
+- } else if (vpos > vupdate_end) {
+- /* VPOS is in the front porch. */
+- return;
+ } else {
+- /* VPOS is in VUPDATE. */
+- lines_to_vupdate = 0;
++ lines_to_vupdate = stream->timing.v_total - vpos + vupdate_start;
+ }
+
+ /* Calculate time until VUPDATE in microseconds. */
+@@ -2013,13 +2004,18 @@ static void delay_cursor_until_vupdate(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz;
+ us_to_vupdate = lines_to_vupdate * us_per_line;
+
++ /* Stall out until the cursor update completes. */
++ if (vupdate_end < vupdate_start)
++ vupdate_end += stream->timing.v_total;
++
++ /* Position is within the vupdate window (between start and end) */
++ if (lines_to_vupdate > stream->timing.v_total - vupdate_end + vupdate_start)
++ us_to_vupdate = 0;
++
+ /* 70 us is a conservative estimate of cursor update time*/
+ if (us_to_vupdate > 70)
+ return;
+
+- /* Stall out until the cursor update completes. */
+- if (vupdate_end < vupdate_start)
+- vupdate_end += stream->timing.v_total;
+ us_vupdate = (vupdate_end - vupdate_start + 1) * us_per_line;
+ udelay(us_to_vupdate + us_vupdate);
+ }
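The rework folds the old three cases into two: a vpos at or before vupdate_start counts down directly, anything else wraps around to the next frame's vupdate_start, and a position already inside the VUPDATE window is detected by comparing against the number of lines outside the window. Worked through with illustrative timing values (not taken from a real mode):

#include <stdio.h>

int main(void)
{
    int v_total = 1125, vupdate_start = 1090, vupdate_end = 1100;
    int vpos = 1095;  /* inside the VUPDATE window */
    int lines_to_vupdate, window;

    if (vpos <= vupdate_start)
        lines_to_vupdate = vupdate_start - vpos;
    else /* wrap to the next frame's vupdate_start */
        lines_to_vupdate = v_total - vpos + vupdate_start;  /* 1120 */

    /* Lines outside the vupdate window in one frame: */
    window = v_total - vupdate_end + vupdate_start;         /* 1115 */

    if (lines_to_vupdate > window)
        printf("inside VUPDATE: stall immediately\n");      /* taken here */
    else
        printf("%d lines until VUPDATE\n", lines_to_vupdate);
    return 0;
}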
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+index a71c6117d7e547..0115d26b5af92d 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/amd_powerplay.c
+@@ -51,6 +51,11 @@ static int amd_powerplay_create(struct amdgpu_device *adev)
+ hwmgr->adev = adev;
+ hwmgr->not_vf = !amdgpu_sriov_vf(adev);
+ hwmgr->device = amdgpu_cgs_create_device(adev);
++ if (!hwmgr->device) {
++ kfree(hwmgr);
++ return -ENOMEM;
++ }
++
+ mutex_init(&hwmgr->msg_lock);
+ hwmgr->chip_family = adev->family;
+ hwmgr->chip_id = adev->asic_type;
+diff --git a/drivers/gpu/drm/drm_atomic_helper.c b/drivers/gpu/drm/drm_atomic_helper.c
+index 5186d2114a5037..32902f77f00dd8 100644
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1376,7 +1376,7 @@ crtc_set_mode(struct drm_device *dev, struct drm_atomic_state *old_state)
+ mode = &new_crtc_state->mode;
+ adjusted_mode = &new_crtc_state->adjusted_mode;
+
+- if (!new_crtc_state->mode_changed)
++ if (!new_crtc_state->mode_changed && !new_crtc_state->connectors_changed)
+ continue;
+
+ drm_dbg_atomic(dev, "modeset on [ENCODER:%d:%s]\n",
+diff --git a/drivers/gpu/drm/drm_debugfs.c b/drivers/gpu/drm/drm_debugfs.c
+index 9d3e6dd68810e3..98a37dc3324e4f 100644
+--- a/drivers/gpu/drm/drm_debugfs.c
++++ b/drivers/gpu/drm/drm_debugfs.c
+@@ -743,7 +743,7 @@ static int bridges_show(struct seq_file *m, void *data)
+ unsigned int idx = 0;
+
+ drm_for_each_bridge_in_chain(encoder, bridge) {
+- drm_printf(&p, "bridge[%d]: %ps\n", idx++, bridge->funcs);
++ drm_printf(&p, "bridge[%u]: %ps\n", idx++, bridge->funcs);
+ drm_printf(&p, "\ttype: [%d] %s\n",
+ bridge->type,
+ drm_get_connector_type_name(bridge->type));
+diff --git a/drivers/gpu/drm/drm_panel.c b/drivers/gpu/drm/drm_panel.c
+index 19ab0a794add31..fd8fa2e0ef6fac 100644
+--- a/drivers/gpu/drm/drm_panel.c
++++ b/drivers/gpu/drm/drm_panel.c
+@@ -49,7 +49,7 @@ static LIST_HEAD(panel_list);
+ * @dev: parent device of the panel
+ * @funcs: panel operations
+ * @connector_type: the connector type (DRM_MODE_CONNECTOR_*) corresponding to
+- * the panel interface
++ * the panel interface (must NOT be DRM_MODE_CONNECTOR_Unknown)
+ *
+ * Initialize the panel structure for subsequent registration with
+ * drm_panel_add().
+@@ -57,6 +57,9 @@ static LIST_HEAD(panel_list);
+ void drm_panel_init(struct drm_panel *panel, struct device *dev,
+ const struct drm_panel_funcs *funcs, int connector_type)
+ {
++ if (connector_type == DRM_MODE_CONNECTOR_Unknown)
++ DRM_WARN("%s: %s: a valid connector type is required!\n", __func__, dev_name(dev));
++
+ INIT_LIST_HEAD(&panel->list);
+ INIT_LIST_HEAD(&panel->followers);
+ mutex_init(&panel->follower_lock);
+diff --git a/drivers/gpu/drm/drm_panel_orientation_quirks.c b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+index 4a73821b81f6fd..c554ad8f246b65 100644
+--- a/drivers/gpu/drm/drm_panel_orientation_quirks.c
++++ b/drivers/gpu/drm/drm_panel_orientation_quirks.c
+@@ -93,6 +93,12 @@ static const struct drm_dmi_panel_orientation_data onegx1_pro = {
+ .orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+ };
+
++static const struct drm_dmi_panel_orientation_data lcd640x960_leftside_up = {
++ .width = 640,
++ .height = 960,
++ .orientation = DRM_MODE_PANEL_ORIENTATION_LEFT_UP,
++};
++
+ static const struct drm_dmi_panel_orientation_data lcd720x1280_rightside_up = {
+ .width = 720,
+ .height = 1280,
+@@ -123,6 +129,12 @@ static const struct drm_dmi_panel_orientation_data lcd1080x1920_rightside_up = {
+ .orientation = DRM_MODE_PANEL_ORIENTATION_RIGHT_UP,
+ };
+
++static const struct drm_dmi_panel_orientation_data lcd1200x1920_leftside_up = {
++ .width = 1200,
++ .height = 1920,
++ .orientation = DRM_MODE_PANEL_ORIENTATION_LEFT_UP,
++};
++
+ static const struct drm_dmi_panel_orientation_data lcd1200x1920_rightside_up = {
+ .width = 1200,
+ .height = 1920,
+@@ -184,10 +196,10 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T103HAF"),
+ },
+ .driver_data = (void *)&lcd800x1280_rightside_up,
+- }, { /* AYA NEO AYANEO 2 */
++ }, { /* AYA NEO AYANEO 2/2S */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
+- DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "AYANEO 2"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "AYANEO 2"),
+ },
+ .driver_data = (void *)&lcd1200x1920_rightside_up,
+ }, { /* AYA NEO 2021 */
+@@ -202,6 +214,18 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "AIR"),
+ },
+ .driver_data = (void *)&lcd1080x1920_leftside_up,
++ }, { /* AYA NEO Flip DS Bottom Screen */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "FLIP DS"),
++ },
++ .driver_data = (void *)&lcd640x960_leftside_up,
++ }, { /* AYA NEO Flip KB/DS Top Screen */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "FLIP"),
++ },
++ .driver_data = (void *)&lcd1080x1920_leftside_up,
+ }, { /* AYA NEO Founder */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYA NEO"),
+@@ -226,6 +250,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_MATCH(DMI_BOARD_NAME, "KUN"),
+ },
+ .driver_data = (void *)&lcd1600x2560_rightside_up,
++ }, { /* AYA NEO SLIDE */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "AYANEO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "SLIDE"),
++ },
++ .driver_data = (void *)&lcd1080x1920_leftside_up,
+ }, { /* AYN Loki Max */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ayn"),
+@@ -315,6 +345,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_BOARD_NAME, "Default string"),
+ },
+ .driver_data = (void *)&gpd_win2,
++ }, { /* GPD Win 2 (correct DMI strings) */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "GPD"),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "WIN2")
++ },
++ .driver_data = (void *)&lcd720x1280_rightside_up,
+ }, { /* GPD Win 3 */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "GPD"),
+@@ -443,6 +479,12 @@ static const struct dmi_system_id orientation_data[] = {
+ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ONE XPLAYER"),
+ },
+ .driver_data = (void *)&lcd1600x2560_leftside_up,
++ }, { /* OneXPlayer Mini (Intel) */
++ .matches = {
++ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ONE-NETBOOK TECHNOLOGY CO., LTD."),
++ DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "ONE XPLAYER"),
++ },
++ .driver_data = (void *)&lcd1200x1920_leftside_up,
+ }, { /* OrangePi Neo */
+ .matches = {
+ DMI_EXACT_MATCH(DMI_SYS_VENDOR, "OrangePi"),
+diff --git a/drivers/gpu/drm/i915/gt/intel_rc6.c b/drivers/gpu/drm/i915/gt/intel_rc6.c
+index 9378d5901c4939..9ca42589da4dad 100644
+--- a/drivers/gpu/drm/i915/gt/intel_rc6.c
++++ b/drivers/gpu/drm/i915/gt/intel_rc6.c
+@@ -117,21 +117,10 @@ static void gen11_rc6_enable(struct intel_rc6 *rc6)
+ GEN6_RC_CTL_RC6_ENABLE |
+ GEN6_RC_CTL_EI_MODE(1);
+
+- /*
+- * BSpec 52698 - Render powergating must be off.
+- * FIXME BSpec is outdated, disabling powergating for MTL is just
+- * temporary wa and should be removed after fixing real cause
+- * of forcewake timeouts.
+- */
+- if (IS_GFX_GT_IP_RANGE(gt, IP_VER(12, 70), IP_VER(12, 74)))
+- pg_enable =
+- GEN9_MEDIA_PG_ENABLE |
+- GEN11_MEDIA_SAMPLER_PG_ENABLE;
+- else
+- pg_enable =
+- GEN9_RENDER_PG_ENABLE |
+- GEN9_MEDIA_PG_ENABLE |
+- GEN11_MEDIA_SAMPLER_PG_ENABLE;
++ pg_enable =
++ GEN9_RENDER_PG_ENABLE |
++ GEN9_MEDIA_PG_ENABLE |
++ GEN11_MEDIA_SAMPLER_PG_ENABLE;
+
+ if (GRAPHICS_VER(gt->i915) >= 12 && !IS_DG1(gt->i915)) {
+ for (i = 0; i < I915_MAX_VCS; i++)
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.c b/drivers/gpu/drm/i915/gt/uc/intel_huc.c
+index 2d9152eb728255..24fdce844d9e3e 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_huc.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.c
+@@ -317,6 +317,11 @@ void intel_huc_init_early(struct intel_huc *huc)
+ }
+ }
+
++void intel_huc_fini_late(struct intel_huc *huc)
++{
++ delayed_huc_load_fini(huc);
++}
++
+ #define HUC_LOAD_MODE_STRING(x) (x ? "GSC" : "legacy")
+ static int check_huc_loading_mode(struct intel_huc *huc)
+ {
+@@ -414,12 +419,6 @@ int intel_huc_init(struct intel_huc *huc)
+
+ void intel_huc_fini(struct intel_huc *huc)
+ {
+- /*
+- * the fence is initialized in init_early, so we need to clean it up
+- * even if HuC loading is off.
+- */
+- delayed_huc_load_fini(huc);
+-
+ if (huc->heci_pkt)
+ i915_vma_unpin_and_release(&huc->heci_pkt, 0);
+
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_huc.h b/drivers/gpu/drm/i915/gt/uc/intel_huc.h
+index ba5cb08e9e7bf1..09aff3148f7ddb 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_huc.h
++++ b/drivers/gpu/drm/i915/gt/uc/intel_huc.h
+@@ -55,6 +55,7 @@ struct intel_huc {
+
+ int intel_huc_sanitize(struct intel_huc *huc);
+ void intel_huc_init_early(struct intel_huc *huc);
++void intel_huc_fini_late(struct intel_huc *huc);
+ int intel_huc_init(struct intel_huc *huc);
+ void intel_huc_fini(struct intel_huc *huc);
+ void intel_huc_suspend(struct intel_huc *huc);
+diff --git a/drivers/gpu/drm/i915/gt/uc/intel_uc.c b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+index 5b8080ec5315b6..4f751ce74214d4 100644
+--- a/drivers/gpu/drm/i915/gt/uc/intel_uc.c
++++ b/drivers/gpu/drm/i915/gt/uc/intel_uc.c
+@@ -136,6 +136,7 @@ void intel_uc_init_late(struct intel_uc *uc)
+
+ void intel_uc_driver_late_release(struct intel_uc *uc)
+ {
++ intel_huc_fini_late(&uc->huc);
+ }
+
+ /**
+diff --git a/drivers/gpu/drm/i915/selftests/i915_selftest.c b/drivers/gpu/drm/i915/selftests/i915_selftest.c
+index fee76c1d2f4500..889281819c5b13 100644
+--- a/drivers/gpu/drm/i915/selftests/i915_selftest.c
++++ b/drivers/gpu/drm/i915/selftests/i915_selftest.c
+@@ -23,7 +23,9 @@
+
+ #include <linux/random.h>
+
++#include "gt/intel_gt.h"
+ #include "gt/intel_gt_pm.h"
++#include "gt/intel_gt_regs.h"
+ #include "gt/uc/intel_gsc_fw.h"
+
+ #include "i915_driver.h"
+@@ -253,11 +255,27 @@ int i915_mock_selftests(void)
+ int i915_live_selftests(struct pci_dev *pdev)
+ {
+ struct drm_i915_private *i915 = pdev_to_i915(pdev);
++ struct intel_uncore *uncore = &i915->uncore;
+ int err;
++ u32 pg_enable;
++ intel_wakeref_t wakeref;
+
+ if (!i915_selftest.live)
+ return 0;
+
++ /*
++ * FIXME Disable render powergating; this is a temporary workaround and should
++ * be removed after fixing the real cause of the forcewake timeouts.
++ */
++ with_intel_runtime_pm(uncore->rpm, wakeref) {
++ if (IS_GFX_GT_IP_RANGE(to_gt(i915), IP_VER(12, 00), IP_VER(12, 74))) {
++ pg_enable = intel_uncore_read(uncore, GEN9_PG_ENABLE);
++ if (pg_enable & GEN9_RENDER_PG_ENABLE)
++ intel_uncore_write_fw(uncore, GEN9_PG_ENABLE,
++ pg_enable & ~GEN9_RENDER_PG_ENABLE);
++ }
++ }
++
+ __wait_gsc_proxy_completed(i915);
+ __wait_gsc_huc_load_completed(i915);
+
+diff --git a/drivers/gpu/drm/mediatek/mtk_dpi.c b/drivers/gpu/drm/mediatek/mtk_dpi.c
+index a08d2065495432..9c11d3158324c1 100644
+--- a/drivers/gpu/drm/mediatek/mtk_dpi.c
++++ b/drivers/gpu/drm/mediatek/mtk_dpi.c
+@@ -127,14 +127,14 @@ struct mtk_dpi_yc_limit {
+ * @is_ck_de_pol: Support CK/DE polarity.
+ * @swap_input_support: Support input swap function.
+ * @support_direct_pin: IP supports direct connection to dpi panels.
+- * @input_2pixel: Input pixel of dp_intf is 2 pixel per round, so enable this
+- * config to enable this feature.
+ * @dimension_mask: Mask used for HWIDTH, HPORCH, VSYNC_WIDTH and VSYNC_PORCH
+ * (no shift).
+ * @hvsize_mask: Mask of HSIZE and VSIZE mask (no shift).
+ * @channel_swap_shift: Shift value of channel swap.
+ * @yuv422_en_bit: Enable bit of yuv422.
+ * @csc_enable_bit: Enable bit of CSC.
++ * @input_2p_en_bit: Enable bit for the two-pixels-per-round input feature.
++ * If set, implies that the feature must be enabled.
+ * @pixels_per_iter: Quantity of transferred pixels per iteration.
+ * @edge_cfg_in_mmsys: If the edge configuration for DPI's output needs to be set in MMSYS.
+ */
+@@ -148,12 +148,12 @@ struct mtk_dpi_conf {
+ bool is_ck_de_pol;
+ bool swap_input_support;
+ bool support_direct_pin;
+- bool input_2pixel;
+ u32 dimension_mask;
+ u32 hvsize_mask;
+ u32 channel_swap_shift;
+ u32 yuv422_en_bit;
+ u32 csc_enable_bit;
++ u32 input_2p_en_bit;
+ u32 pixels_per_iter;
+ bool edge_cfg_in_mmsys;
+ };
+@@ -471,6 +471,7 @@ static void mtk_dpi_power_off(struct mtk_dpi *dpi)
+
+ mtk_dpi_disable(dpi);
+ clk_disable_unprepare(dpi->pixel_clk);
++ clk_disable_unprepare(dpi->tvd_clk);
+ clk_disable_unprepare(dpi->engine_clk);
+ }
+
+@@ -487,6 +488,12 @@ static int mtk_dpi_power_on(struct mtk_dpi *dpi)
+ goto err_refcount;
+ }
+
++ ret = clk_prepare_enable(dpi->tvd_clk);
++ if (ret) {
++ dev_err(dpi->dev, "Failed to enable tvd pll: %d\n", ret);
++ goto err_engine;
++ }
++
+ ret = clk_prepare_enable(dpi->pixel_clk);
+ if (ret) {
+ dev_err(dpi->dev, "Failed to enable pixel clock: %d\n", ret);
+@@ -496,6 +503,8 @@ static int mtk_dpi_power_on(struct mtk_dpi *dpi)
+ return 0;
+
+ err_pixel:
++ clk_disable_unprepare(dpi->tvd_clk);
++err_engine:
+ clk_disable_unprepare(dpi->engine_clk);
+ err_refcount:
+ dpi->refcount--;
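The tvd_clk enable slots in between the engine and pixel clocks, and each error label unwinds exactly what was acquired before it, in reverse order. A self-contained model of the idiom (acquire/release are illustrative stand-ins for clk_prepare_enable()/clk_disable_unprepare()):

#include <stdio.h>

static int acquire(const char *name) { printf("enable %s\n", name); return 0; }
static void release(const char *name) { printf("disable %s\n", name); }

static int power_on(void)
{
    int ret;

    ret = acquire("engine");
    if (ret)
        goto err;
    ret = acquire("tvd");
    if (ret)
        goto err_engine;
    ret = acquire("pixel");
    if (ret)
        goto err_tvd;
    return 0;

err_tvd:            /* labels release in reverse acquisition order */
    release("tvd");
err_engine:
    release("engine");
err:
    return ret;
}

int main(void) { return power_on(); }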
+@@ -610,9 +619,9 @@ static int mtk_dpi_set_display_mode(struct mtk_dpi *dpi,
+ mtk_dpi_dual_edge(dpi);
+ mtk_dpi_config_disable_edge(dpi);
+ }
+- if (dpi->conf->input_2pixel) {
+- mtk_dpi_mask(dpi, DPI_CON, DPINTF_INPUT_2P_EN,
+- DPINTF_INPUT_2P_EN);
++ if (dpi->conf->input_2p_en_bit) {
++ mtk_dpi_mask(dpi, DPI_CON, dpi->conf->input_2p_en_bit,
++ dpi->conf->input_2p_en_bit);
+ }
+ mtk_dpi_sw_reset(dpi, false);
+
+@@ -992,12 +1001,12 @@ static const struct mtk_dpi_conf mt8195_dpintf_conf = {
+ .output_fmts = mt8195_output_fmts,
+ .num_output_fmts = ARRAY_SIZE(mt8195_output_fmts),
+ .pixels_per_iter = 4,
+- .input_2pixel = true,
+ .dimension_mask = DPINTF_HPW_MASK,
+ .hvsize_mask = DPINTF_HSIZE_MASK,
+ .channel_swap_shift = DPINTF_CH_SWAP,
+ .yuv422_en_bit = DPINTF_YUV422_EN,
+ .csc_enable_bit = DPINTF_CSC_ENABLE,
++ .input_2p_en_bit = DPINTF_INPUT_2P_EN,
+ };
+
+ static int mtk_dpi_probe(struct platform_device *pdev)
+diff --git a/drivers/gpu/drm/tests/drm_client_modeset_test.c b/drivers/gpu/drm/tests/drm_client_modeset_test.c
+index 7516f6cb36e4e3..3e9518d7b8b7eb 100644
+--- a/drivers/gpu/drm/tests/drm_client_modeset_test.c
++++ b/drivers/gpu/drm/tests/drm_client_modeset_test.c
+@@ -95,6 +95,9 @@ static void drm_test_pick_cmdline_res_1920_1080_60(struct kunit *test)
+ expected_mode = drm_mode_find_dmt(priv->drm, 1920, 1080, 60, false);
+ KUNIT_ASSERT_NOT_NULL(test, expected_mode);
+
++ ret = drm_kunit_add_mode_destroy_action(test, expected_mode);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ KUNIT_ASSERT_TRUE(test,
+ drm_mode_parse_command_line_for_connector(cmdline,
+ connector,
+diff --git a/drivers/gpu/drm/tests/drm_cmdline_parser_test.c b/drivers/gpu/drm/tests/drm_cmdline_parser_test.c
+index 59c8408c453c2e..1cfcb597b088b4 100644
+--- a/drivers/gpu/drm/tests/drm_cmdline_parser_test.c
++++ b/drivers/gpu/drm/tests/drm_cmdline_parser_test.c
+@@ -7,6 +7,7 @@
+ #include <kunit/test.h>
+
+ #include <drm/drm_connector.h>
++#include <drm/drm_kunit_helpers.h>
+ #include <drm/drm_modes.h>
+
+ static const struct drm_connector no_connector = {};
+@@ -955,8 +956,15 @@ struct drm_cmdline_tv_option_test {
+ static void drm_test_cmdline_tv_options(struct kunit *test)
+ {
+ const struct drm_cmdline_tv_option_test *params = test->param_value;
+- const struct drm_display_mode *expected_mode = params->mode_fn(NULL);
++ struct drm_display_mode *expected_mode;
+ struct drm_cmdline_mode mode = { };
++ int ret;
++
++ expected_mode = params->mode_fn(NULL);
++ KUNIT_ASSERT_NOT_NULL(test, expected_mode);
++
++ ret = drm_kunit_add_mode_destroy_action(test, expected_mode);
++ KUNIT_ASSERT_EQ(test, ret, 0);
+
+ KUNIT_EXPECT_TRUE(test, drm_mode_parse_command_line_for_connector(params->cmdline,
+ &no_connector, &mode));
+diff --git a/drivers/gpu/drm/tests/drm_kunit_helpers.c b/drivers/gpu/drm/tests/drm_kunit_helpers.c
+index 3c0b7824c0be37..922c4b6ed1dc9b 100644
+--- a/drivers/gpu/drm/tests/drm_kunit_helpers.c
++++ b/drivers/gpu/drm/tests/drm_kunit_helpers.c
+@@ -319,6 +319,28 @@ static void kunit_action_drm_mode_destroy(void *ptr)
+ drm_mode_destroy(NULL, mode);
+ }
+
++/**
++ * drm_kunit_add_mode_destroy_action() - Add a drm_destroy_mode kunit action
++ * @test: The test context object
++ * @mode: The drm_display_mode to destroy eventually
++ *
++ * Registers a kunit action that will destroy the drm_display_mode at
++ * the end of the test.
++ *
++ * If the action cannot be registered, the drm_display_mode is destroyed immediately.
++ *
++ * Returns:
++ * 0 on success, an error code otherwise.
++ */
++int drm_kunit_add_mode_destroy_action(struct kunit *test,
++ struct drm_display_mode *mode)
++{
++ return kunit_add_action_or_reset(test,
++ kunit_action_drm_mode_destroy,
++ mode);
++}
++EXPORT_SYMBOL_GPL(drm_kunit_add_mode_destroy_action);
++
+ /**
+ * drm_kunit_display_mode_from_cea_vic() - return a mode for CEA VIC for a KUnit test
+ * @test: The test context object
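A sketch of how a test consumes the new helper; because it wraps kunit_add_action_or_reset(), a failed registration destroys the mode immediately, so the caller never leaks it on the error path (kernel-style sketch; drm is an assumed struct drm_device pointer):

#include <kunit/test.h>
#include <drm/drm_kunit_helpers.h>
#include <drm/drm_modes.h>

static void example_mode_test(struct kunit *test, struct drm_device *drm)
{
    struct drm_display_mode *mode;
    int ret;

    mode = drm_mode_find_dmt(drm, 1024, 768, 60, false);
    KUNIT_ASSERT_NOT_NULL(test, mode);

    /* From here on the mode is freed automatically at test teardown. */
    ret = drm_kunit_add_mode_destroy_action(test, mode);
    KUNIT_ASSERT_EQ(test, ret, 0);
}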
+diff --git a/drivers/gpu/drm/tests/drm_modes_test.c b/drivers/gpu/drm/tests/drm_modes_test.c
+index 6ed51f99e133c9..7ba646d87856f5 100644
+--- a/drivers/gpu/drm/tests/drm_modes_test.c
++++ b/drivers/gpu/drm/tests/drm_modes_test.c
+@@ -40,6 +40,7 @@ static void drm_test_modes_analog_tv_ntsc_480i(struct kunit *test)
+ {
+ struct drm_test_modes_priv *priv = test->priv;
+ struct drm_display_mode *mode;
++ int ret;
+
+ mode = drm_analog_tv_mode(priv->drm,
+ DRM_MODE_TV_MODE_NTSC,
+@@ -47,6 +48,9 @@ static void drm_test_modes_analog_tv_ntsc_480i(struct kunit *test)
+ true);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
++ ret = drm_kunit_add_mode_destroy_action(test, mode);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ KUNIT_EXPECT_EQ(test, drm_mode_vrefresh(mode), 60);
+ KUNIT_EXPECT_EQ(test, mode->hdisplay, 720);
+
+@@ -70,6 +74,7 @@ static void drm_test_modes_analog_tv_ntsc_480i_inlined(struct kunit *test)
+ {
+ struct drm_test_modes_priv *priv = test->priv;
+ struct drm_display_mode *expected, *mode;
++ int ret;
+
+ expected = drm_analog_tv_mode(priv->drm,
+ DRM_MODE_TV_MODE_NTSC,
+@@ -77,9 +82,15 @@ static void drm_test_modes_analog_tv_ntsc_480i_inlined(struct kunit *test)
+ true);
+ KUNIT_ASSERT_NOT_NULL(test, expected);
+
++ ret = drm_kunit_add_mode_destroy_action(test, expected);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ mode = drm_mode_analog_ntsc_480i(priv->drm);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
++ ret = drm_kunit_add_mode_destroy_action(test, mode);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ KUNIT_EXPECT_TRUE(test, drm_mode_equal(expected, mode));
+ }
+
+@@ -87,6 +98,7 @@ static void drm_test_modes_analog_tv_pal_576i(struct kunit *test)
+ {
+ struct drm_test_modes_priv *priv = test->priv;
+ struct drm_display_mode *mode;
++ int ret;
+
+ mode = drm_analog_tv_mode(priv->drm,
+ DRM_MODE_TV_MODE_PAL,
+@@ -94,6 +106,9 @@ static void drm_test_modes_analog_tv_pal_576i(struct kunit *test)
+ true);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
++ ret = drm_kunit_add_mode_destroy_action(test, mode);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ KUNIT_EXPECT_EQ(test, drm_mode_vrefresh(mode), 50);
+ KUNIT_EXPECT_EQ(test, mode->hdisplay, 720);
+
+@@ -117,6 +132,7 @@ static void drm_test_modes_analog_tv_pal_576i_inlined(struct kunit *test)
+ {
+ struct drm_test_modes_priv *priv = test->priv;
+ struct drm_display_mode *expected, *mode;
++ int ret;
+
+ expected = drm_analog_tv_mode(priv->drm,
+ DRM_MODE_TV_MODE_PAL,
+@@ -124,9 +140,15 @@ static void drm_test_modes_analog_tv_pal_576i_inlined(struct kunit *test)
+ true);
+ KUNIT_ASSERT_NOT_NULL(test, expected);
+
++ ret = drm_kunit_add_mode_destroy_action(test, expected);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ mode = drm_mode_analog_pal_576i(priv->drm);
+ KUNIT_ASSERT_NOT_NULL(test, mode);
+
++ ret = drm_kunit_add_mode_destroy_action(test, mode);
++ KUNIT_ASSERT_EQ(test, ret, 0);
++
+ KUNIT_EXPECT_TRUE(test, drm_mode_equal(expected, mode));
+ }
+
+diff --git a/drivers/gpu/drm/tests/drm_probe_helper_test.c b/drivers/gpu/drm/tests/drm_probe_helper_test.c
+index bc09ff38aca18e..db0e4f5df275e8 100644
+--- a/drivers/gpu/drm/tests/drm_probe_helper_test.c
++++ b/drivers/gpu/drm/tests/drm_probe_helper_test.c
+@@ -98,7 +98,7 @@ drm_test_connector_helper_tv_get_modes_check(struct kunit *test)
+ struct drm_connector *connector = &priv->connector;
+ struct drm_cmdline_mode *cmdline = &connector->cmdline_mode;
+ struct drm_display_mode *mode;
+- const struct drm_display_mode *expected;
++ struct drm_display_mode *expected;
+ size_t len;
+ int ret;
+
+@@ -134,6 +134,9 @@ drm_test_connector_helper_tv_get_modes_check(struct kunit *test)
+
+ KUNIT_EXPECT_TRUE(test, drm_mode_equal(mode, expected));
+ KUNIT_EXPECT_TRUE(test, mode->type & DRM_MODE_TYPE_PREFERRED);
++
++ ret = drm_kunit_add_mode_destroy_action(test, expected);
++ KUNIT_ASSERT_EQ(test, ret, 0);
+ }
+
+ if (params->num_expected_modes >= 2) {
+@@ -145,6 +148,9 @@ drm_test_connector_helper_tv_get_modes_check(struct kunit *test)
+
+ KUNIT_EXPECT_TRUE(test, drm_mode_equal(mode, expected));
+ KUNIT_EXPECT_FALSE(test, mode->type & DRM_MODE_TYPE_PREFERRED);
++
++ ret = drm_kunit_add_mode_destroy_action(test, expected);
++ KUNIT_ASSERT_EQ(test, ret, 0);
+ }
+
+ mutex_unlock(&priv->drm->mode_config.mutex);
+diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
+index 98fe8573e054e9..17ba15132a9840 100644
+--- a/drivers/gpu/drm/xe/xe_gt.c
++++ b/drivers/gpu/drm/xe/xe_gt.c
+@@ -32,6 +32,7 @@
+ #include "xe_gt_pagefault.h"
+ #include "xe_gt_printk.h"
+ #include "xe_gt_sriov_pf.h"
++#include "xe_gt_sriov_vf.h"
+ #include "xe_gt_sysfs.h"
+ #include "xe_gt_tlb_invalidation.h"
+ #include "xe_gt_topology.h"
+@@ -647,6 +648,9 @@ static int do_gt_reset(struct xe_gt *gt)
+ {
+ int err;
+
++ if (IS_SRIOV_VF(gt_to_xe(gt)))
++ return xe_gt_sriov_vf_reset(gt);
++
+ xe_gsc_wa_14015076503(gt, true);
+
+ xe_mmio_write32(gt, GDRST, GRDOM_FULL);
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
+index 4ebc82e607af65..f982d6f9f218d8 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.c
+@@ -57,6 +57,22 @@ static int vf_reset_guc_state(struct xe_gt *gt)
+ return err;
+ }
+
++/**
++ * xe_gt_sriov_vf_reset - Reset GuC VF internal state.
++ * @gt: the &xe_gt
++ *
++ * It requires functional `GuC MMIO based communication`_.
++ *
++ * Return: 0 on success or a negative error code on failure.
++ */
++int xe_gt_sriov_vf_reset(struct xe_gt *gt)
++{
++ if (!xe_device_uc_enabled(gt_to_xe(gt)))
++ return -ENODEV;
++
++ return vf_reset_guc_state(gt);
++}
++
+ static int guc_action_match_version(struct xe_guc *guc,
+ u32 wanted_branch, u32 wanted_major, u32 wanted_minor,
+ u32 *branch, u32 *major, u32 *minor, u32 *patch)
+diff --git a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
+index e541ce57bec246..576ff5e795a8b0 100644
+--- a/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
++++ b/drivers/gpu/drm/xe/xe_gt_sriov_vf.h
+@@ -12,6 +12,7 @@ struct drm_printer;
+ struct xe_gt;
+ struct xe_reg;
+
++int xe_gt_sriov_vf_reset(struct xe_gt *gt);
+ int xe_gt_sriov_vf_bootstrap(struct xe_gt *gt);
+ int xe_gt_sriov_vf_query_config(struct xe_gt *gt);
+ int xe_gt_sriov_vf_connect(struct xe_gt *gt);
+diff --git a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
+index b53e8d2accdbd7..a440442b4d7270 100644
+--- a/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
++++ b/drivers/gpu/drm/xe/xe_hw_engine_class_sysfs.c
+@@ -32,14 +32,61 @@ bool xe_hw_engine_timeout_in_range(u64 timeout, u64 min, u64 max)
+ return timeout >= min && timeout <= max;
+ }
+
+-static void kobj_xe_hw_engine_release(struct kobject *kobj)
++static void xe_hw_engine_sysfs_kobj_release(struct kobject *kobj)
+ {
+ kfree(kobj);
+ }
+
++static ssize_t xe_hw_engine_class_sysfs_attr_show(struct kobject *kobj,
++ struct attribute *attr,
++ char *buf)
++{
++ struct xe_device *xe = kobj_to_xe(kobj);
++ struct kobj_attribute *kattr;
++ ssize_t ret = -EIO;
++
++ kattr = container_of(attr, struct kobj_attribute, attr);
++ if (kattr->show) {
++ xe_pm_runtime_get(xe);
++ ret = kattr->show(kobj, kattr, buf);
++ xe_pm_runtime_put(xe);
++ }
++
++ return ret;
++}
++
++static ssize_t xe_hw_engine_class_sysfs_attr_store(struct kobject *kobj,
++ struct attribute *attr,
++ const char *buf,
++ size_t count)
++{
++ struct xe_device *xe = kobj_to_xe(kobj);
++ struct kobj_attribute *kattr;
++ ssize_t ret = -EIO;
++
++ kattr = container_of(attr, struct kobj_attribute, attr);
++ if (kattr->store) {
++ xe_pm_runtime_get(xe);
++ ret = kattr->store(kobj, kattr, buf, count);
++ xe_pm_runtime_put(xe);
++ }
++
++ return ret;
++}
++
++static const struct sysfs_ops xe_hw_engine_class_sysfs_ops = {
++ .show = xe_hw_engine_class_sysfs_attr_show,
++ .store = xe_hw_engine_class_sysfs_attr_store,
++};
++
+ static const struct kobj_type kobj_xe_hw_engine_type = {
+- .release = kobj_xe_hw_engine_release,
+- .sysfs_ops = &kobj_sysfs_ops
++ .release = xe_hw_engine_sysfs_kobj_release,
++ .sysfs_ops = &xe_hw_engine_class_sysfs_ops,
++};
++
++static const struct kobj_type kobj_xe_hw_engine_type_def = {
++ .release = xe_hw_engine_sysfs_kobj_release,
++ .sysfs_ops = &kobj_sysfs_ops,
+ };
+
+ static ssize_t job_timeout_max_store(struct kobject *kobj,
+@@ -543,7 +590,7 @@ static int xe_add_hw_engine_class_defaults(struct xe_device *xe,
+ if (!kobj)
+ return -ENOMEM;
+
+- kobject_init(kobj, &kobj_xe_hw_engine_type);
++ kobject_init(kobj, &kobj_xe_hw_engine_type_def);
+ err = kobject_add(kobj, parent, "%s", ".defaults");
+ if (err)
+ goto err_object;
+@@ -559,57 +606,6 @@ static int xe_add_hw_engine_class_defaults(struct xe_device *xe,
+ return err;
+ }
+
+-static void xe_hw_engine_sysfs_kobj_release(struct kobject *kobj)
+-{
+- kfree(kobj);
+-}
+-
+-static ssize_t xe_hw_engine_class_sysfs_attr_show(struct kobject *kobj,
+- struct attribute *attr,
+- char *buf)
+-{
+- struct xe_device *xe = kobj_to_xe(kobj);
+- struct kobj_attribute *kattr;
+- ssize_t ret = -EIO;
+-
+- kattr = container_of(attr, struct kobj_attribute, attr);
+- if (kattr->show) {
+- xe_pm_runtime_get(xe);
+- ret = kattr->show(kobj, kattr, buf);
+- xe_pm_runtime_put(xe);
+- }
+-
+- return ret;
+-}
+-
+-static ssize_t xe_hw_engine_class_sysfs_attr_store(struct kobject *kobj,
+- struct attribute *attr,
+- const char *buf,
+- size_t count)
+-{
+- struct xe_device *xe = kobj_to_xe(kobj);
+- struct kobj_attribute *kattr;
+- ssize_t ret = -EIO;
+-
+- kattr = container_of(attr, struct kobj_attribute, attr);
+- if (kattr->store) {
+- xe_pm_runtime_get(xe);
+- ret = kattr->store(kobj, kattr, buf, count);
+- xe_pm_runtime_put(xe);
+- }
+-
+- return ret;
+-}
+-
+-static const struct sysfs_ops xe_hw_engine_class_sysfs_ops = {
+- .show = xe_hw_engine_class_sysfs_attr_show,
+- .store = xe_hw_engine_class_sysfs_attr_store,
+-};
+-
+-static const struct kobj_type xe_hw_engine_sysfs_kobj_type = {
+- .release = xe_hw_engine_sysfs_kobj_release,
+- .sysfs_ops = &xe_hw_engine_class_sysfs_ops,
+-};
+
+ static void hw_engine_class_sysfs_fini(void *arg)
+ {
+@@ -640,7 +636,7 @@ int xe_hw_engine_class_sysfs_init(struct xe_gt *gt)
+ if (!kobj)
+ return -ENOMEM;
+
+- kobject_init(kobj, &xe_hw_engine_sysfs_kobj_type);
++ kobject_init(kobj, &kobj_xe_hw_engine_type);
+
+ err = kobject_add(kobj, gt->sysfs, "engines");
+ if (err)
+diff --git a/drivers/gpu/drm/xe/xe_tuning.c b/drivers/gpu/drm/xe/xe_tuning.c
+index 0d5e04158917be..1fb12da21c9e4c 100644
+--- a/drivers/gpu/drm/xe/xe_tuning.c
++++ b/drivers/gpu/drm/xe/xe_tuning.c
+@@ -97,14 +97,6 @@ static const struct xe_rtp_entry_sr engine_tunings[] = {
+ };
+
+ static const struct xe_rtp_entry_sr lrc_tunings[] = {
+- { XE_RTP_NAME("Tuning: ganged timer, also known as 16011163337"),
+- XE_RTP_RULES(GRAPHICS_VERSION_RANGE(1200, 1210), ENGINE_CLASS(RENDER)),
+- /* read verification is ignored due to 1608008084. */
+- XE_RTP_ACTIONS(FIELD_SET_NO_READ_MASK(FF_MODE2,
+- FF_MODE2_GS_TIMER_MASK,
+- FF_MODE2_GS_TIMER_224))
+- },
+-
+ /* DG2 */
+
+ { XE_RTP_NAME("Tuning: L3 cache"),
+diff --git a/drivers/gpu/drm/xe/xe_wa.c b/drivers/gpu/drm/xe/xe_wa.c
+index 37e592b2bf062a..0a1905f8d380a8 100644
+--- a/drivers/gpu/drm/xe/xe_wa.c
++++ b/drivers/gpu/drm/xe/xe_wa.c
+@@ -606,6 +606,13 @@ static const struct xe_rtp_entry_sr engine_was[] = {
+ };
+
+ static const struct xe_rtp_entry_sr lrc_was[] = {
++ { XE_RTP_NAME("16011163337"),
++ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(1200, 1210), ENGINE_CLASS(RENDER)),
++ /* read verification is ignored due to 1608008084. */
++ XE_RTP_ACTIONS(FIELD_SET_NO_READ_MASK(FF_MODE2,
++ FF_MODE2_GS_TIMER_MASK,
++ FF_MODE2_GS_TIMER_224))
++ },
+ { XE_RTP_NAME("1409342910, 14010698770, 14010443199, 1408979724, 1409178076, 1409207793, 1409217633, 1409252684, 1409347922, 1409142259"),
+ XE_RTP_RULES(GRAPHICS_VERSION_RANGE(1200, 1210)),
+ XE_RTP_ACTIONS(SET(COMMON_SLICE_CHICKEN3,
+diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
+index 4500d7653b05ee..95a4ede2709917 100644
+--- a/drivers/hid/Kconfig
++++ b/drivers/hid/Kconfig
+@@ -1205,6 +1205,20 @@ config HID_U2FZERO
+ allow setting the brightness to anything but 1, which will
+ trigger a single blink and immediately reset back to 0.
+
++config HID_UNIVERSAL_PIDFF
++ tristate "universal-pidff: extended USB PID driver compatibility and usage"
++ depends on USB_HID
++ depends on HID_PID
++ help
++ Extended PID support for selected devices.
++
++ Contains report fixups, an extended usable button range and
++ pidff quirk management to improve compatibility with slightly
++ non-compliant USB PID devices, plus better fuzz/flat values
++ for high-precision direct drive devices.
++
++ Supports Moza Racing, Cammus, VRS, FFBeast and more.
++
+ config HID_WACOM
+ tristate "Wacom Intuos/Graphire tablet support (USB)"
+ depends on USB_HID
+diff --git a/drivers/hid/Makefile b/drivers/hid/Makefile
+index f2900ee2ef8582..27ee02bf6f26d3 100644
+--- a/drivers/hid/Makefile
++++ b/drivers/hid/Makefile
+@@ -139,6 +139,7 @@ hid-uclogic-objs := hid-uclogic-core.o \
+ hid-uclogic-params.o
+ obj-$(CONFIG_HID_UCLOGIC) += hid-uclogic.o
+ obj-$(CONFIG_HID_UDRAW_PS3) += hid-udraw-ps3.o
++obj-$(CONFIG_HID_UNIVERSAL_PIDFF) += hid-universal-pidff.o
+ obj-$(CONFIG_HID_LED) += hid-led.o
+ obj-$(CONFIG_HID_XIAOMI) += hid-xiaomi.o
+ obj-$(CONFIG_HID_XINMO) += hid-xinmo.o
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index c6ae7c4268b84c..92baa34f42f28a 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -190,6 +190,12 @@
+ #define USB_DEVICE_ID_APPLE_TOUCHBAR_BACKLIGHT 0x8102
+ #define USB_DEVICE_ID_APPLE_TOUCHBAR_DISPLAY 0x8302
+
++#define USB_VENDOR_ID_ASETEK 0x2433
++#define USB_DEVICE_ID_ASETEK_INVICTA 0xf300
++#define USB_DEVICE_ID_ASETEK_FORTE 0xf301
++#define USB_DEVICE_ID_ASETEK_LA_PRIMA 0xf303
++#define USB_DEVICE_ID_ASETEK_TONY_KANAAN 0xf306
++
+ #define USB_VENDOR_ID_ASUS 0x0486
+ #define USB_DEVICE_ID_ASUS_T91MT 0x0185
+ #define USB_DEVICE_ID_ASUSTEK_MULTITOUCH_YFO 0x0186
+@@ -262,6 +268,10 @@
+ #define USB_DEVICE_ID_BTC_EMPREX_REMOTE 0x5578
+ #define USB_DEVICE_ID_BTC_EMPREX_REMOTE_2 0x5577
+
++#define USB_VENDOR_ID_CAMMUS 0x3416
++#define USB_DEVICE_ID_CAMMUS_C5 0x0301
++#define USB_DEVICE_ID_CAMMUS_C12 0x0302
++
+ #define USB_VENDOR_ID_CANDO 0x2087
+ #define USB_DEVICE_ID_CANDO_PIXCIR_MULTI_TOUCH 0x0703
+ #define USB_DEVICE_ID_CANDO_MULTI_TOUCH 0x0a01
+@@ -453,6 +463,11 @@
+ #define USB_VENDOR_ID_EVISION 0x320f
+ #define USB_DEVICE_ID_EVISION_ICL01 0x5041
+
++#define USB_VENDOR_ID_FFBEAST 0x045b
++#define USB_DEVICE_ID_FFBEAST_JOYSTICK 0x58f9
++#define USB_DEVICE_ID_FFBEAST_RUDDER 0x5968
++#define USB_DEVICE_ID_FFBEAST_WHEEL 0x59d7
++
+ #define USB_VENDOR_ID_FLATFROG 0x25b5
+ #define USB_DEVICE_ID_MULTITOUCH_3200 0x0002
+
+@@ -813,6 +828,13 @@
+ #define I2C_DEVICE_ID_LG_8001 0x8001
+ #define I2C_DEVICE_ID_LG_7010 0x7010
+
++#define USB_VENDOR_ID_LITE_STAR 0x11ff
++#define USB_DEVICE_ID_PXN_V10 0x3245
++#define USB_DEVICE_ID_PXN_V12 0x1212
++#define USB_DEVICE_ID_PXN_V12_LITE 0x1112
++#define USB_DEVICE_ID_PXN_V12_LITE_2 0x1211
++#define USB_DEVICE_LITE_STAR_GT987_FF 0x2141
++
+ #define USB_VENDOR_ID_LOGITECH 0x046d
+ #define USB_DEVICE_ID_LOGITECH_Z_10_SPK 0x0a07
+ #define USB_DEVICE_ID_LOGITECH_AUDIOHUB 0x0a0e
+@@ -960,6 +982,18 @@
+ #define USB_VENDOR_ID_MONTEREY 0x0566
+ #define USB_DEVICE_ID_GENIUS_KB29E 0x3004
+
++#define USB_VENDOR_ID_MOZA 0x346e
++#define USB_DEVICE_ID_MOZA_R3 0x0005
++#define USB_DEVICE_ID_MOZA_R3_2 0x0015
++#define USB_DEVICE_ID_MOZA_R5 0x0004
++#define USB_DEVICE_ID_MOZA_R5_2 0x0014
++#define USB_DEVICE_ID_MOZA_R9 0x0002
++#define USB_DEVICE_ID_MOZA_R9_2 0x0012
++#define USB_DEVICE_ID_MOZA_R12 0x0006
++#define USB_DEVICE_ID_MOZA_R12_2 0x0016
++#define USB_DEVICE_ID_MOZA_R16_R21 0x0000
++#define USB_DEVICE_ID_MOZA_R16_R21_2 0x0010
++
+ #define USB_VENDOR_ID_MSI 0x1770
+ #define USB_DEVICE_ID_MSI_GT683R_LED_PANEL 0xff00
+
+@@ -1371,6 +1405,9 @@
+ #define USB_DEVICE_ID_VELLEMAN_K8061_FIRST 0x8061
+ #define USB_DEVICE_ID_VELLEMAN_K8061_LAST 0x8068
+
++#define USB_VENDOR_ID_VRS 0x0483
++#define USB_DEVICE_ID_VRS_DFP 0xa355
++
+ #define USB_VENDOR_ID_VTL 0x0306
+ #define USB_DEVICE_ID_VTL_MULTITOUCH_FF3F 0xff3f
+
+diff --git a/drivers/hid/hid-universal-pidff.c b/drivers/hid/hid-universal-pidff.c
+new file mode 100644
+index 00000000000000..5b89ec7b5c26c5
+--- /dev/null
++++ b/drivers/hid/hid-universal-pidff.c
+@@ -0,0 +1,202 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/*
++ * HID UNIVERSAL PIDFF
++ * hid-pidff wrapper for PID-enabled devices
++ * Handles device reports, quirks and extends usable button range
++ *
++ * Copyright (c) 2024, 2025 Oleg Makarenko
++ * Copyright (c) 2024, 2025 Tomasz Pakuła
++ */
++
++#include <linux/device.h>
++#include <linux/hid.h>
++#include <linux/module.h>
++#include <linux/input-event-codes.h>
++#include "hid-ids.h"
++#include "usbhid/hid-pidff.h"
++
++#define JOY_RANGE (BTN_DEAD - BTN_JOYSTICK + 1)
++
++/*
++ * Map buttons manually to extend the default joystick button limit
++ */
++static int universal_pidff_input_mapping(struct hid_device *hdev,
++ struct hid_input *hi, struct hid_field *field, struct hid_usage *usage,
++ unsigned long **bit, int *max)
++{
++ if ((usage->hid & HID_USAGE_PAGE) != HID_UP_BUTTON)
++ return 0;
++
++ if (field->application != HID_GD_JOYSTICK)
++ return 0;
++
++ int button = ((usage->hid - 1) & HID_USAGE);
++ int code = button + BTN_JOYSTICK;
++
++ /* Detect the end of the JOYSTICK button range */
++ if (code > BTN_DEAD)
++ code = button + KEY_NEXT_FAVORITE - JOY_RANGE;
++
++ /*
++ * Map overflowing buttons to KEY_RESERVED so they are not
++ * ignored and can still trigger MSC_SCAN
++ */
++ if (code > KEY_MAX)
++ code = KEY_RESERVED;
++
++ hid_map_usage(hi, usage, bit, max, EV_KEY, code);
++ hid_dbg(hdev, "Button %d: usage %d", button, code);
++ return 1;
++}
++
++/*
++ * Check if the device is PID and initialize it
++ * Add quirks after initialisation
++ */
++static int universal_pidff_probe(struct hid_device *hdev,
++ const struct hid_device_id *id)
++{
++ int i, error;
++ error = hid_parse(hdev);
++ if (error) {
++ hid_err(hdev, "HID parse failed\n");
++ goto err;
++ }
++
++ error = hid_hw_start(hdev, HID_CONNECT_DEFAULT & ~HID_CONNECT_FF);
++ if (error) {
++ hid_err(hdev, "HID hw start failed\n");
++ goto err;
++ }
++
++ /* Check if device contains PID usage page */
++ error = 1;
++ for (i = 0; i < hdev->collection_size; i++)
++ if ((hdev->collection[i].usage & HID_USAGE_PAGE) == HID_UP_PID) {
++ error = 0;
++ hid_dbg(hdev, "PID usage page found\n");
++ break;
++ }
++
++ /*
++ * Do not fail as this might be the second "device"
++ * just for additional buttons/axes. Exit cleanly if force
++ * feedback usage page wasn't found (included devices were
++ * tested and confirmed to be USB PID after all).
++ */
++ if (error) {
++ hid_dbg(hdev, "PID usage page not found in the descriptor\n");
++ return 0;
++ }
++
++ /* Check if HID_PID support is enabled */
++ int (*init_function)(struct hid_device *, u32);
++ init_function = hid_pidff_init_with_quirks;
++
++ if (!init_function) {
++ hid_warn(hdev, "HID_PID support not enabled!\n");
++ return 0;
++ }
++
++ error = init_function(hdev, id->driver_data);
++ if (error) {
++ hid_warn(hdev, "Error initialising force feedback\n");
++ goto err;
++ }
++
++ hid_info(hdev, "Universal pidff driver loaded sucessfully!");
++
++ return 0;
++err:
++ return error;
++}
++
++static int universal_pidff_input_configured(struct hid_device *hdev,
++ struct hid_input *hidinput)
++{
++ int axis;
++ struct input_dev *input = hidinput->input;
++
++ if (!input->absinfo)
++ return 0;
++
++ /* Decrease fuzz and deadzone on available axes */
++ for (axis = ABS_X; axis <= ABS_BRAKE; axis++) {
++ if (!test_bit(axis, input->absbit))
++ continue;
++
++ input_set_abs_params(input, axis,
++ input->absinfo[axis].minimum,
++ input->absinfo[axis].maximum,
++ axis == ABS_X ? 0 : 8, 0);
++ }
++
++ /* Remove fuzz and deadzone from the second joystick axis */
++ if (hdev->vendor == USB_VENDOR_ID_FFBEAST &&
++ hdev->product == USB_DEVICE_ID_FFBEAST_JOYSTICK)
++ input_set_abs_params(input, ABS_Y,
++ input->absinfo[ABS_Y].minimum,
++ input->absinfo[ABS_Y].maximum, 0, 0);
++
++ return 0;
++}
++
++static const struct hid_device_id universal_pidff_devices[] = {
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R3),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R3_2),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R5),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R5_2),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R9),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R9_2),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R12),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R12_2),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R16_R21),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_MOZA, USB_DEVICE_ID_MOZA_R16_R21_2),
++ .driver_data = HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION },
++ { HID_USB_DEVICE(USB_VENDOR_ID_CAMMUS, USB_DEVICE_ID_CAMMUS_C5) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_CAMMUS, USB_DEVICE_ID_CAMMUS_C12) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_VRS, USB_DEVICE_ID_VRS_DFP),
++ .driver_data = HID_PIDFF_QUIRK_PERMISSIVE_CONTROL },
++ { HID_USB_DEVICE(USB_VENDOR_ID_FFBEAST, USB_DEVICE_ID_FFBEAST_JOYSTICK), },
++ { HID_USB_DEVICE(USB_VENDOR_ID_FFBEAST, USB_DEVICE_ID_FFBEAST_RUDDER), },
++ { HID_USB_DEVICE(USB_VENDOR_ID_FFBEAST, USB_DEVICE_ID_FFBEAST_WHEEL) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LITE_STAR, USB_DEVICE_ID_PXN_V10),
++ .driver_data = HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LITE_STAR, USB_DEVICE_ID_PXN_V12),
++ .driver_data = HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LITE_STAR, USB_DEVICE_ID_PXN_V12_LITE),
++ .driver_data = HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LITE_STAR, USB_DEVICE_ID_PXN_V12_LITE_2),
++ .driver_data = HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY },
++ { HID_USB_DEVICE(USB_VENDOR_ID_LITE_STAR, USB_DEVICE_LITE_STAR_GT987_FF),
++ .driver_data = HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY },
++ { HID_USB_DEVICE(USB_VENDOR_ID_ASETEK, USB_DEVICE_ID_ASETEK_INVICTA) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_ASETEK, USB_DEVICE_ID_ASETEK_FORTE) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_ASETEK, USB_DEVICE_ID_ASETEK_LA_PRIMA) },
++ { HID_USB_DEVICE(USB_VENDOR_ID_ASETEK, USB_DEVICE_ID_ASETEK_TONY_KANAAN) },
++ { }
++};
++MODULE_DEVICE_TABLE(hid, universal_pidff_devices);
++
++static struct hid_driver universal_pidff = {
++ .name = "hid-universal-pidff",
++ .id_table = universal_pidff_devices,
++ .input_mapping = universal_pidff_input_mapping,
++ .probe = universal_pidff_probe,
++ .input_configured = universal_pidff_input_configured
++};
++module_hid_driver(universal_pidff);
++
++MODULE_DESCRIPTION("Universal driver for USB PID Force Feedback devices");
++MODULE_LICENSE("GPL");
++MODULE_AUTHOR("Oleg Makarenko <oleg@makarenk.ooo>");
++MODULE_AUTHOR("Tomasz Pakuła <tomasz.pakula.oficjalny@gmail.com>");
+diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c
+index a9e85bdd4cc656..bf0f51ef0149ff 100644
+--- a/drivers/hid/usbhid/hid-core.c
++++ b/drivers/hid/usbhid/hid-core.c
+@@ -35,6 +35,7 @@
+ #include <linux/hid-debug.h>
+ #include <linux/hidraw.h>
+ #include "usbhid.h"
++#include "hid-pidff.h"
+
+ /*
+ * Version Information
+diff --git a/drivers/hid/usbhid/hid-pidff.c b/drivers/hid/usbhid/hid-pidff.c
+index 3b4ee21cd81119..8dfd2c554a2762 100644
+--- a/drivers/hid/usbhid/hid-pidff.c
++++ b/drivers/hid/usbhid/hid-pidff.c
+@@ -3,27 +3,27 @@
+ * Force feedback driver for USB HID PID compliant devices
+ *
+ * Copyright (c) 2005, 2006 Anssi Hannula <anssi.hannula@gmail.com>
++ * Upgraded 2025 by Oleg Makarenko and Tomasz Pakuła
+ */
+
+-/*
+- */
+-
+-/* #define DEBUG */
+-
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
++#include "hid-pidff.h"
+ #include <linux/input.h>
+ #include <linux/slab.h>
+ #include <linux/usb.h>
+-
+ #include <linux/hid.h>
++#include <linux/minmax.h>
+
+-#include "usbhid.h"
+
+ #define PID_EFFECTS_MAX 64
++#define PID_INFINITE U16_MAX
+
+-/* Report usage table used to put reports into an array */
++/* Linux Force Feedback API uses milliseconds as the time unit */
++#define FF_TIME_EXPONENT -3
++#define FF_INFINITE 0
+
++/* Report usage table used to put reports into an array */
+ #define PID_SET_EFFECT 0
+ #define PID_EFFECT_OPERATION 1
+ #define PID_DEVICE_GAIN 2
+@@ -44,12 +44,19 @@ static const u8 pidff_reports[] = {
+ 0x21, 0x77, 0x7d, 0x7f, 0x89, 0x90, 0x96, 0xab,
+ 0x5a, 0x5f, 0x6e, 0x73, 0x74
+ };
++/*
++ * device_control is really 0x95, but 0x96 is specified
++ * as it is the usage of the only field in that report.
++ */
+
+-/* device_control is really 0x95, but 0x96 specified as it is the usage of
+-the only field in that report */
++/* PID special fields */
++#define PID_EFFECT_TYPE 0x25
++#define PID_DIRECTION 0x57
++#define PID_EFFECT_OPERATION_ARRAY 0x78
++#define PID_BLOCK_LOAD_STATUS 0x8b
++#define PID_DEVICE_CONTROL_ARRAY 0x96
+
+ /* Value usage tables used to put fields and values into arrays */
+-
+ #define PID_EFFECT_BLOCK_INDEX 0
+
+ #define PID_DURATION 1
+@@ -107,10 +114,13 @@ static const u8 pidff_device_gain[] = { 0x7e };
+ static const u8 pidff_pool[] = { 0x80, 0x83, 0xa9 };
+
+ /* Special field key tables used to put special field keys into arrays */
+-
+ #define PID_ENABLE_ACTUATORS 0
+-#define PID_RESET 1
+-static const u8 pidff_device_control[] = { 0x97, 0x9a };
++#define PID_DISABLE_ACTUATORS 1
++#define PID_STOP_ALL_EFFECTS 2
++#define PID_RESET 3
++#define PID_PAUSE 4
++#define PID_CONTINUE 5
++static const u8 pidff_device_control[] = { 0x97, 0x98, 0x99, 0x9a, 0x9b, 0x9c };
+
+ #define PID_CONSTANT 0
+ #define PID_RAMP 1
+@@ -130,12 +140,16 @@ static const u8 pidff_effect_types[] = {
+
+ #define PID_BLOCK_LOAD_SUCCESS 0
+ #define PID_BLOCK_LOAD_FULL 1
+-static const u8 pidff_block_load_status[] = { 0x8c, 0x8d };
++#define PID_BLOCK_LOAD_ERROR 2
++static const u8 pidff_block_load_status[] = { 0x8c, 0x8d, 0x8e};
+
+ #define PID_EFFECT_START 0
+ #define PID_EFFECT_STOP 1
+ static const u8 pidff_effect_operation_status[] = { 0x79, 0x7b };
+
++/* Polar direction 90 degrees (East) */
++#define PIDFF_FIXED_WHEEL_DIRECTION 0x4000
++
+ struct pidff_usage {
+ struct hid_field *field;
+ s32 *value;
+@@ -159,8 +173,10 @@ struct pidff_device {
+ struct pidff_usage effect_operation[sizeof(pidff_effect_operation)];
+ struct pidff_usage block_free[sizeof(pidff_block_free)];
+
+- /* Special field is a field that is not composed of
+- usage<->value pairs that pidff_usage values are */
++ /*
++ * A special field is one that is not composed of the
++ * usage<->value pairs that pidff_usage values are
++ */
+
+ /* Special field in create_new_effect */
+ struct hid_field *create_new_effect_type;
+@@ -184,30 +200,61 @@ struct pidff_device {
+ int operation_id[sizeof(pidff_effect_operation_status)];
+
+ int pid_id[PID_EFFECTS_MAX];
++
++ u32 quirks;
++ u8 effect_count;
+ };
+
++/*
++ * Clamp value for a given field
++ */
++static s32 pidff_clamp(s32 i, struct hid_field *field)
++{
++ s32 clamped = clamp(i, field->logical_minimum, field->logical_maximum);
++ pr_debug("clamped from %d to %d", i, clamped);
++ return clamped;
++}
++
+ /*
+ * Scale an unsigned value with range 0..max for the given field
+ */
+ static int pidff_rescale(int i, int max, struct hid_field *field)
+ {
+ return i * (field->logical_maximum - field->logical_minimum) / max +
+- field->logical_minimum;
++ field->logical_minimum;
+ }
+
+ /*
+- * Scale a signed value in range -0x8000..0x7fff for the given field
++ * Scale a signed value in range S16_MIN..S16_MAX for the given field
+ */
+ static int pidff_rescale_signed(int i, struct hid_field *field)
+ {
+- return i == 0 ? 0 : i >
+- 0 ? i * field->logical_maximum / 0x7fff : i *
+- field->logical_minimum / -0x8000;
++ if (i > 0) return i * field->logical_maximum / S16_MAX;
++ if (i < 0) return i * field->logical_minimum / S16_MIN;
++ return 0;
++}
++
++/*
++ * Scale time value from Linux default (ms) to field units
++ */
++static u32 pidff_rescale_time(u16 time, struct hid_field *field)
++{
++ u32 scaled_time = time;
++ int exponent = field->unit_exponent;
++ pr_debug("time field exponent: %d\n", exponent);
++
++	for (; exponent < FF_TIME_EXPONENT; exponent++)
++		scaled_time *= 10;
++	for (; exponent > FF_TIME_EXPONENT; exponent--)
++		scaled_time /= 10;
++
++ pr_debug("time calculated from %d to %d\n", time, scaled_time);
++ return scaled_time;
+ }
+
+ static void pidff_set(struct pidff_usage *usage, u16 value)
+ {
+- usage->value[0] = pidff_rescale(value, 0xffff, usage->field);
++ usage->value[0] = pidff_rescale(value, U16_MAX, usage->field);
+ pr_debug("calculated from %d to %d\n", value, usage->value[0]);
+ }
+
+@@ -218,14 +265,35 @@ static void pidff_set_signed(struct pidff_usage *usage, s16 value)
+ else {
+ if (value < 0)
+ usage->value[0] =
+- pidff_rescale(-value, 0x8000, usage->field);
++ pidff_rescale(-value, -S16_MIN, usage->field);
+ else
+ usage->value[0] =
+- pidff_rescale(value, 0x7fff, usage->field);
++ pidff_rescale(value, S16_MAX, usage->field);
+ }
+ pr_debug("calculated from %d to %d\n", value, usage->value[0]);
+ }
+
++static void pidff_set_time(struct pidff_usage *usage, u16 time)
++{
++ u32 modified_time = pidff_rescale_time(time, usage->field);
++ usage->value[0] = pidff_clamp(modified_time, usage->field);
++}
++
++static void pidff_set_duration(struct pidff_usage *usage, u16 duration)
++{
++ /* Infinite value conversion from Linux API -> PID */
++ if (duration == FF_INFINITE)
++ duration = PID_INFINITE;
++
++ /* PID defines INFINITE as the max possible value for duration field */
++ if (duration == PID_INFINITE) {
++ usage->value[0] = (1U << usage->field->report_size) - 1;
++ return;
++ }
++
++ pidff_set_time(usage, duration);
++}
++
+ /*
+ * Send envelope report to the device
+ */
+@@ -233,19 +301,21 @@ static void pidff_set_envelope_report(struct pidff_device *pidff,
+ struct ff_envelope *envelope)
+ {
+ pidff->set_envelope[PID_EFFECT_BLOCK_INDEX].value[0] =
+- pidff->block_load[PID_EFFECT_BLOCK_INDEX].value[0];
++ pidff->block_load[PID_EFFECT_BLOCK_INDEX].value[0];
+
+ pidff->set_envelope[PID_ATTACK_LEVEL].value[0] =
+- pidff_rescale(envelope->attack_level >
+- 0x7fff ? 0x7fff : envelope->attack_level, 0x7fff,
+- pidff->set_envelope[PID_ATTACK_LEVEL].field);
++ pidff_rescale(envelope->attack_level >
++ S16_MAX ? S16_MAX : envelope->attack_level, S16_MAX,
++ pidff->set_envelope[PID_ATTACK_LEVEL].field);
+ pidff->set_envelope[PID_FADE_LEVEL].value[0] =
+- pidff_rescale(envelope->fade_level >
+- 0x7fff ? 0x7fff : envelope->fade_level, 0x7fff,
+- pidff->set_envelope[PID_FADE_LEVEL].field);
++ pidff_rescale(envelope->fade_level >
++ S16_MAX ? S16_MAX : envelope->fade_level, S16_MAX,
++ pidff->set_envelope[PID_FADE_LEVEL].field);
+
+- pidff->set_envelope[PID_ATTACK_TIME].value[0] = envelope->attack_length;
+- pidff->set_envelope[PID_FADE_TIME].value[0] = envelope->fade_length;
++ pidff_set_time(&pidff->set_envelope[PID_ATTACK_TIME],
++ envelope->attack_length);
++ pidff_set_time(&pidff->set_envelope[PID_FADE_TIME],
++ envelope->fade_length);
+
+ hid_dbg(pidff->hid, "attack %u => %d\n",
+ envelope->attack_level,
+@@ -261,10 +331,22 @@ static void pidff_set_envelope_report(struct pidff_device *pidff,
+ static int pidff_needs_set_envelope(struct ff_envelope *envelope,
+ struct ff_envelope *old)
+ {
+- return envelope->attack_level != old->attack_level ||
+- envelope->fade_level != old->fade_level ||
++ bool needs_new_envelope;
++ needs_new_envelope = envelope->attack_level != 0 ||
++ envelope->fade_level != 0 ||
++ envelope->attack_length != 0 ||
++ envelope->fade_length != 0;
++
++ if (!needs_new_envelope)
++ return false;
++
++ if (!old)
++ return needs_new_envelope;
++
++ return envelope->attack_level != old->attack_level ||
++ envelope->fade_level != old->fade_level ||
+ envelope->attack_length != old->attack_length ||
+- envelope->fade_length != old->fade_length;
++ envelope->fade_length != old->fade_length;
+ }
+
+ /*
+@@ -301,17 +383,27 @@ static void pidff_set_effect_report(struct pidff_device *pidff,
+ pidff->block_load[PID_EFFECT_BLOCK_INDEX].value[0];
+ pidff->set_effect_type->value[0] =
+ pidff->create_new_effect_type->value[0];
+- pidff->set_effect[PID_DURATION].value[0] = effect->replay.length;
++
++ pidff_set_duration(&pidff->set_effect[PID_DURATION],
++ effect->replay.length);
++
+ pidff->set_effect[PID_TRIGGER_BUTTON].value[0] = effect->trigger.button;
+- pidff->set_effect[PID_TRIGGER_REPEAT_INT].value[0] =
+- effect->trigger.interval;
++ pidff_set_time(&pidff->set_effect[PID_TRIGGER_REPEAT_INT],
++ effect->trigger.interval);
+ pidff->set_effect[PID_GAIN].value[0] =
+ pidff->set_effect[PID_GAIN].field->logical_maximum;
+ pidff->set_effect[PID_DIRECTION_ENABLE].value[0] = 1;
+- pidff->effect_direction->value[0] =
+- pidff_rescale(effect->direction, 0xffff,
+- pidff->effect_direction);
+- pidff->set_effect[PID_START_DELAY].value[0] = effect->replay.delay;
++
++ /* Use fixed direction if needed */
++ pidff->effect_direction->value[0] = pidff_rescale(
++ pidff->quirks & HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION ?
++ PIDFF_FIXED_WHEEL_DIRECTION : effect->direction,
++ U16_MAX, pidff->effect_direction);
++
++ /* Omit setting delay field if it's missing */
++ if (!(pidff->quirks & HID_PIDFF_QUIRK_MISSING_DELAY))
++ pidff_set_time(&pidff->set_effect[PID_START_DELAY],
++ effect->replay.delay);
+
+ hid_hw_request(pidff->hid, pidff->reports[PID_SET_EFFECT],
+ HID_REQ_SET_REPORT);
+@@ -343,11 +435,11 @@ static void pidff_set_periodic_report(struct pidff_device *pidff,
+ pidff_set_signed(&pidff->set_periodic[PID_OFFSET],
+ effect->u.periodic.offset);
+ pidff_set(&pidff->set_periodic[PID_PHASE], effect->u.periodic.phase);
+- pidff->set_periodic[PID_PERIOD].value[0] = effect->u.periodic.period;
++ pidff_set_time(&pidff->set_periodic[PID_PERIOD],
++ effect->u.periodic.period);
+
+ hid_hw_request(pidff->hid, pidff->reports[PID_SET_PERIODIC],
+ HID_REQ_SET_REPORT);
+-
+ }
+
+ /*
+@@ -368,13 +460,19 @@ static int pidff_needs_set_periodic(struct ff_effect *effect,
+ static void pidff_set_condition_report(struct pidff_device *pidff,
+ struct ff_effect *effect)
+ {
+- int i;
++ int i, max_axis;
++
++ /* Devices missing Parameter Block Offset can only have one axis */
++ max_axis = pidff->quirks & HID_PIDFF_QUIRK_MISSING_PBO ? 1 : 2;
+
+ pidff->set_condition[PID_EFFECT_BLOCK_INDEX].value[0] =
+ pidff->block_load[PID_EFFECT_BLOCK_INDEX].value[0];
+
+- for (i = 0; i < 2; i++) {
+- pidff->set_condition[PID_PARAM_BLOCK_OFFSET].value[0] = i;
++ for (i = 0; i < max_axis; i++) {
++ /* Omit Parameter Block Offset if missing */
++ if (!(pidff->quirks & HID_PIDFF_QUIRK_MISSING_PBO))
++ pidff->set_condition[PID_PARAM_BLOCK_OFFSET].value[0] = i;
++
+ pidff_set_signed(&pidff->set_condition[PID_CP_OFFSET],
+ effect->u.condition[i].center);
+ pidff_set_signed(&pidff->set_condition[PID_POS_COEFFICIENT],
+@@ -441,9 +539,104 @@ static int pidff_needs_set_ramp(struct ff_effect *effect, struct ff_effect *old)
+ effect->u.ramp.end_level != old->u.ramp.end_level;
+ }
+
++/*
++ * Set device gain
++ */
++static void pidff_set_gain_report(struct pidff_device *pidff, u16 gain)
++{
++ if (!pidff->device_gain[PID_DEVICE_GAIN_FIELD].field)
++ return;
++
++ pidff_set(&pidff->device_gain[PID_DEVICE_GAIN_FIELD], gain);
++ hid_hw_request(pidff->hid, pidff->reports[PID_DEVICE_GAIN],
++ HID_REQ_SET_REPORT);
++}
++
++/*
++ * Send device control report to the device
++ */
++static void pidff_set_device_control(struct pidff_device *pidff, int field)
++{
++ int i, index;
++ int field_index = pidff->control_id[field];
++
++ if (field_index < 1)
++ return;
++
++ /* Detect if the field is a bitmask variable or an array */
++ if (pidff->device_control->flags & HID_MAIN_ITEM_VARIABLE) {
++ hid_dbg(pidff->hid, "DEVICE_CONTROL is a bitmask\n");
++
++ /* Clear current bitmask */
++		for (i = 0; i < sizeof(pidff_device_control); i++) {
++ index = pidff->control_id[i];
++ if (index < 1)
++ continue;
++
++ pidff->device_control->value[index - 1] = 0;
++ }
++
++ pidff->device_control->value[field_index - 1] = 1;
++ } else {
++ hid_dbg(pidff->hid, "DEVICE_CONTROL is an array\n");
++ pidff->device_control->value[0] = field_index;
++ }
++
++ hid_hw_request(pidff->hid, pidff->reports[PID_DEVICE_CONTROL], HID_REQ_SET_REPORT);
++ hid_hw_wait(pidff->hid);
++}
++
++/*
++ * Modify actuators state
++ */
++static void pidff_set_actuators(struct pidff_device *pidff, bool enable)
++{
++ hid_dbg(pidff->hid, "%s actuators\n", enable ? "Enable" : "Disable");
++ pidff_set_device_control(pidff,
++ enable ? PID_ENABLE_ACTUATORS : PID_DISABLE_ACTUATORS);
++}
++
++/*
++ * Reset the device, stop all effects, enable actuators
++ */
++static void pidff_reset(struct pidff_device *pidff)
++{
++ /* We reset twice as sometimes hid_wait_io isn't waiting long enough */
++ pidff_set_device_control(pidff, PID_RESET);
++ pidff_set_device_control(pidff, PID_RESET);
++ pidff->effect_count = 0;
++
++ pidff_set_device_control(pidff, PID_STOP_ALL_EFFECTS);
++ pidff_set_actuators(pidff, 1);
++}
++
++/*
++ * Fetch pool report
++ */
++static void pidff_fetch_pool(struct pidff_device *pidff)
++{
++ int i;
++ struct hid_device *hid = pidff->hid;
++
++ /* Repeat if PID_SIMULTANEOUS_MAX < 2 to make sure it's correct */
++	for (i = 0; i < 20; i++) {
++ hid_hw_request(hid, pidff->reports[PID_POOL], HID_REQ_GET_REPORT);
++ hid_hw_wait(hid);
++
++ if (!pidff->pool[PID_SIMULTANEOUS_MAX].value)
++ return;
++ if (pidff->pool[PID_SIMULTANEOUS_MAX].value[0] >= 2)
++ return;
++ }
++ hid_warn(hid, "device reports %d simultaneous effects\n",
++ pidff->pool[PID_SIMULTANEOUS_MAX].value[0]);
++}
++
+ /*
+ * Send a request for effect upload to the device
+ *
++ * Reset and enable actuators if no effects were present on the device
++ *
+ * Returns 0 if device reported success, -ENOSPC if the device reported memory
+ * is full. Upon unknown response the function will retry for 60 times, if
+ * still unsuccessful -EIO is returned.
+@@ -452,6 +645,9 @@ static int pidff_request_effect_upload(struct pidff_device *pidff, int efnum)
+ {
+ int j;
+
++ if (!pidff->effect_count)
++ pidff_reset(pidff);
++
+ pidff->create_new_effect_type->value[0] = efnum;
+ hid_hw_request(pidff->hid, pidff->reports[PID_CREATE_NEW_EFFECT],
+ HID_REQ_SET_REPORT);
+@@ -471,6 +667,8 @@ static int pidff_request_effect_upload(struct pidff_device *pidff, int efnum)
+ hid_dbg(pidff->hid, "device reported free memory: %d bytes\n",
+ pidff->block_load[PID_RAM_POOL_AVAILABLE].value ?
+ pidff->block_load[PID_RAM_POOL_AVAILABLE].value[0] : -1);
++
++ pidff->effect_count++;
+ return 0;
+ }
+ if (pidff->block_load_status->value[0] ==
+@@ -480,6 +678,11 @@ static int pidff_request_effect_upload(struct pidff_device *pidff, int efnum)
+ pidff->block_load[PID_RAM_POOL_AVAILABLE].value[0] : -1);
+ return -ENOSPC;
+ }
++ if (pidff->block_load_status->value[0] ==
++ pidff->status_id[PID_BLOCK_LOAD_ERROR]) {
++ hid_dbg(pidff->hid, "device error during effect creation\n");
++ return -EREMOTEIO;
++ }
+ }
+ hid_err(pidff->hid, "pid_block_load failed 60 times\n");
+ return -EIO;
+@@ -498,7 +701,8 @@ static void pidff_playback_pid(struct pidff_device *pidff, int pid_id, int n)
+ } else {
+ pidff->effect_operation_status->value[0] =
+ pidff->operation_id[PID_EFFECT_START];
+- pidff->effect_operation[PID_LOOP_COUNT].value[0] = n;
++ pidff->effect_operation[PID_LOOP_COUNT].value[0] =
++ pidff_clamp(n, pidff->effect_operation[PID_LOOP_COUNT].field);
+ }
+
+ hid_hw_request(pidff->hid, pidff->reports[PID_EFFECT_OPERATION],
+@@ -511,20 +715,22 @@ static void pidff_playback_pid(struct pidff_device *pidff, int pid_id, int n)
+ static int pidff_playback(struct input_dev *dev, int effect_id, int value)
+ {
+ struct pidff_device *pidff = dev->ff->private;
+-
+ pidff_playback_pid(pidff, pidff->pid_id[effect_id], value);
+-
+ return 0;
+ }
+
+ /*
+ * Erase effect with PID id
++ * Decrease the device effect counter
+ */
+ static void pidff_erase_pid(struct pidff_device *pidff, int pid_id)
+ {
+ pidff->block_free[PID_EFFECT_BLOCK_INDEX].value[0] = pid_id;
+ hid_hw_request(pidff->hid, pidff->reports[PID_BLOCK_FREE],
+ HID_REQ_SET_REPORT);
++
++ if (pidff->effect_count > 0)
++ pidff->effect_count--;
+ }
+
+ /*
+@@ -537,8 +743,11 @@ static int pidff_erase_effect(struct input_dev *dev, int effect_id)
+
+ hid_dbg(pidff->hid, "starting to erase %d/%d\n",
+ effect_id, pidff->pid_id[effect_id]);
+- /* Wait for the queue to clear. We do not want a full fifo to
+- prevent the effect removal. */
++
++ /*
++ * Wait for the queue to clear. We do not want
++ * a full fifo to prevent the effect removal.
++ */
+ hid_hw_wait(pidff->hid);
+ pidff_playback_pid(pidff, pid_id, 0);
+ pidff_erase_pid(pidff, pid_id);
+@@ -574,11 +783,9 @@ static int pidff_upload_effect(struct input_dev *dev, struct ff_effect *effect,
+ pidff_set_effect_report(pidff, effect);
+ if (!old || pidff_needs_set_constant(effect, old))
+ pidff_set_constant_force_report(pidff, effect);
+- if (!old ||
+- pidff_needs_set_envelope(&effect->u.constant.envelope,
+- &old->u.constant.envelope))
+- pidff_set_envelope_report(pidff,
+- &effect->u.constant.envelope);
++ if (pidff_needs_set_envelope(&effect->u.constant.envelope,
++ old ? &old->u.constant.envelope : NULL))
++ pidff_set_envelope_report(pidff, &effect->u.constant.envelope);
+ break;
+
+ case FF_PERIODIC:
+@@ -604,6 +811,9 @@ static int pidff_upload_effect(struct input_dev *dev, struct ff_effect *effect,
+ return -EINVAL;
+ }
+
++ if (pidff->quirks & HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY)
++ type_id = PID_SINE;
++
+ error = pidff_request_effect_upload(pidff,
+ pidff->type_id[type_id]);
+ if (error)
+@@ -613,11 +823,9 @@ static int pidff_upload_effect(struct input_dev *dev, struct ff_effect *effect,
+ pidff_set_effect_report(pidff, effect);
+ if (!old || pidff_needs_set_periodic(effect, old))
+ pidff_set_periodic_report(pidff, effect);
+- if (!old ||
+- pidff_needs_set_envelope(&effect->u.periodic.envelope,
+- &old->u.periodic.envelope))
+- pidff_set_envelope_report(pidff,
+- &effect->u.periodic.envelope);
++ if (pidff_needs_set_envelope(&effect->u.periodic.envelope,
++ old ? &old->u.periodic.envelope : NULL))
++ pidff_set_envelope_report(pidff, &effect->u.periodic.envelope);
+ break;
+
+ case FF_RAMP:
+@@ -631,56 +839,32 @@ static int pidff_upload_effect(struct input_dev *dev, struct ff_effect *effect,
+ pidff_set_effect_report(pidff, effect);
+ if (!old || pidff_needs_set_ramp(effect, old))
+ pidff_set_ramp_force_report(pidff, effect);
+- if (!old ||
+- pidff_needs_set_envelope(&effect->u.ramp.envelope,
+- &old->u.ramp.envelope))
+- pidff_set_envelope_report(pidff,
+- &effect->u.ramp.envelope);
++ if (pidff_needs_set_envelope(&effect->u.ramp.envelope,
++ old ? &old->u.ramp.envelope : NULL))
++ pidff_set_envelope_report(pidff, &effect->u.ramp.envelope);
+ break;
+
+ case FF_SPRING:
+- if (!old) {
+- error = pidff_request_effect_upload(pidff,
+- pidff->type_id[PID_SPRING]);
+- if (error)
+- return error;
+- }
+- if (!old || pidff_needs_set_effect(effect, old))
+- pidff_set_effect_report(pidff, effect);
+- if (!old || pidff_needs_set_condition(effect, old))
+- pidff_set_condition_report(pidff, effect);
+- break;
+-
+- case FF_FRICTION:
+- if (!old) {
+- error = pidff_request_effect_upload(pidff,
+- pidff->type_id[PID_FRICTION]);
+- if (error)
+- return error;
+- }
+- if (!old || pidff_needs_set_effect(effect, old))
+- pidff_set_effect_report(pidff, effect);
+- if (!old || pidff_needs_set_condition(effect, old))
+- pidff_set_condition_report(pidff, effect);
+- break;
+-
+ case FF_DAMPER:
+- if (!old) {
+- error = pidff_request_effect_upload(pidff,
+- pidff->type_id[PID_DAMPER]);
+- if (error)
+- return error;
+- }
+- if (!old || pidff_needs_set_effect(effect, old))
+- pidff_set_effect_report(pidff, effect);
+- if (!old || pidff_needs_set_condition(effect, old))
+- pidff_set_condition_report(pidff, effect);
+- break;
+-
+ case FF_INERTIA:
++ case FF_FRICTION:
+ if (!old) {
++			switch (effect->type) {
++ case FF_SPRING:
++ type_id = PID_SPRING;
++ break;
++ case FF_DAMPER:
++ type_id = PID_DAMPER;
++ break;
++ case FF_INERTIA:
++ type_id = PID_INERTIA;
++ break;
++ case FF_FRICTION:
++ type_id = PID_FRICTION;
++ break;
++ }
+ error = pidff_request_effect_upload(pidff,
+- pidff->type_id[PID_INERTIA]);
++ pidff->type_id[type_id]);
+ if (error)
+ return error;
+ }
+@@ -709,11 +893,7 @@ static int pidff_upload_effect(struct input_dev *dev, struct ff_effect *effect,
+ */
+ static void pidff_set_gain(struct input_dev *dev, u16 gain)
+ {
+- struct pidff_device *pidff = dev->ff->private;
+-
+- pidff_set(&pidff->device_gain[PID_DEVICE_GAIN_FIELD], gain);
+- hid_hw_request(pidff->hid, pidff->reports[PID_DEVICE_GAIN],
+- HID_REQ_SET_REPORT);
++ pidff_set_gain_report(dev->ff->private, gain);
+ }
+
+ static void pidff_autocenter(struct pidff_device *pidff, u16 magnitude)
+@@ -736,7 +916,10 @@ static void pidff_autocenter(struct pidff_device *pidff, u16 magnitude)
+ pidff->set_effect[PID_TRIGGER_REPEAT_INT].value[0] = 0;
+ pidff_set(&pidff->set_effect[PID_GAIN], magnitude);
+ pidff->set_effect[PID_DIRECTION_ENABLE].value[0] = 1;
+- pidff->set_effect[PID_START_DELAY].value[0] = 0;
++
++ /* Omit setting delay field if it's missing */
++ if (!(pidff->quirks & HID_PIDFF_QUIRK_MISSING_DELAY))
++ pidff->set_effect[PID_START_DELAY].value[0] = 0;
+
+ hid_hw_request(pidff->hid, pidff->reports[PID_SET_EFFECT],
+ HID_REQ_SET_REPORT);
+@@ -747,9 +930,7 @@ static void pidff_autocenter(struct pidff_device *pidff, u16 magnitude)
+ */
+ static void pidff_set_autocenter(struct input_dev *dev, u16 magnitude)
+ {
+- struct pidff_device *pidff = dev->ff->private;
+-
+- pidff_autocenter(pidff, magnitude);
++ pidff_autocenter(dev->ff->private, magnitude);
+ }
+
+ /*
+@@ -758,7 +939,13 @@ static void pidff_set_autocenter(struct input_dev *dev, u16 magnitude)
+ static int pidff_find_fields(struct pidff_usage *usage, const u8 *table,
+ struct hid_report *report, int count, int strict)
+ {
++ if (!report) {
++ pr_debug("pidff_find_fields, null report\n");
++ return -1;
++ }
++
+ int i, j, k, found;
++ int return_value = 0;
+
+ for (k = 0; k < count; k++) {
+ found = 0;
+@@ -783,12 +970,22 @@ static int pidff_find_fields(struct pidff_usage *usage, const u8 *table,
+ if (found)
+ break;
+ }
+- if (!found && strict) {
++ if (!found && table[k] == pidff_set_effect[PID_START_DELAY]) {
++ pr_debug("Delay field not found, but that's OK\n");
++ pr_debug("Setting MISSING_DELAY quirk\n");
++ return_value |= HID_PIDFF_QUIRK_MISSING_DELAY;
++ }
++ else if (!found && table[k] == pidff_set_condition[PID_PARAM_BLOCK_OFFSET]) {
++ pr_debug("PBO field not found, but that's OK\n");
++ pr_debug("Setting MISSING_PBO quirk\n");
++ return_value |= HID_PIDFF_QUIRK_MISSING_PBO;
++ }
++ else if (!found && strict) {
+ pr_debug("failed to locate %d\n", k);
+ return -1;
+ }
+ }
+- return 0;
++ return return_value;
+ }
+
+ /*
+@@ -871,6 +1068,11 @@ static int pidff_reports_ok(struct pidff_device *pidff)
+ static struct hid_field *pidff_find_special_field(struct hid_report *report,
+ int usage, int enforce_min)
+ {
++ if (!report) {
++ pr_debug("pidff_find_special_field, null report\n");
++ return NULL;
++ }
++
+ int i;
+
+ for (i = 0; i < report->maxfield; i++) {
+@@ -923,22 +1125,24 @@ static int pidff_find_special_fields(struct pidff_device *pidff)
+
+ pidff->create_new_effect_type =
+ pidff_find_special_field(pidff->reports[PID_CREATE_NEW_EFFECT],
+- 0x25, 1);
++ PID_EFFECT_TYPE, 1);
+ pidff->set_effect_type =
+ pidff_find_special_field(pidff->reports[PID_SET_EFFECT],
+- 0x25, 1);
++ PID_EFFECT_TYPE, 1);
+ pidff->effect_direction =
+ pidff_find_special_field(pidff->reports[PID_SET_EFFECT],
+- 0x57, 0);
++ PID_DIRECTION, 0);
+ pidff->device_control =
+ pidff_find_special_field(pidff->reports[PID_DEVICE_CONTROL],
+- 0x96, 1);
++ PID_DEVICE_CONTROL_ARRAY,
++ !(pidff->quirks & HID_PIDFF_QUIRK_PERMISSIVE_CONTROL));
++
+ pidff->block_load_status =
+ pidff_find_special_field(pidff->reports[PID_BLOCK_LOAD],
+- 0x8b, 1);
++ PID_BLOCK_LOAD_STATUS, 1);
+ pidff->effect_operation_status =
+ pidff_find_special_field(pidff->reports[PID_EFFECT_OPERATION],
+- 0x78, 1);
++ PID_EFFECT_OPERATION_ARRAY, 1);
+
+ hid_dbg(pidff->hid, "search done\n");
+
+@@ -967,10 +1171,6 @@ static int pidff_find_special_fields(struct pidff_device *pidff)
+ return -1;
+ }
+
+- pidff_find_special_keys(pidff->control_id, pidff->device_control,
+- pidff_device_control,
+- sizeof(pidff_device_control));
+-
+ PIDFF_FIND_SPECIAL_KEYS(control_id, device_control, device_control);
+
+ if (!PIDFF_FIND_SPECIAL_KEYS(type_id, create_new_effect_type,
+@@ -1049,7 +1249,6 @@ static int pidff_find_effects(struct pidff_device *pidff,
+ set_bit(FF_FRICTION, dev->ffbit);
+
+ return 0;
+-
+ }
+
+ #define PIDFF_FIND_FIELDS(name, report, strict) \
+@@ -1062,12 +1261,19 @@ static int pidff_find_effects(struct pidff_device *pidff,
+ */
+ static int pidff_init_fields(struct pidff_device *pidff, struct input_dev *dev)
+ {
+- int envelope_ok = 0;
++ int status = 0;
+
+- if (PIDFF_FIND_FIELDS(set_effect, PID_SET_EFFECT, 1)) {
++ /* Save info about the device not having the DELAY ffb field. */
++ status = PIDFF_FIND_FIELDS(set_effect, PID_SET_EFFECT, 1);
++ if (status == -1) {
+ hid_err(pidff->hid, "unknown set_effect report layout\n");
+ return -ENODEV;
+ }
++ pidff->quirks |= status;
++
++ if (status & HID_PIDFF_QUIRK_MISSING_DELAY)
++ hid_dbg(pidff->hid, "Adding MISSING_DELAY quirk\n");
++
+
+ PIDFF_FIND_FIELDS(block_load, PID_BLOCK_LOAD, 0);
+ if (!pidff->block_load[PID_EFFECT_BLOCK_INDEX].value) {
+@@ -1085,13 +1291,10 @@ static int pidff_init_fields(struct pidff_device *pidff, struct input_dev *dev)
+ return -ENODEV;
+ }
+
+- if (!PIDFF_FIND_FIELDS(set_envelope, PID_SET_ENVELOPE, 1))
+- envelope_ok = 1;
+-
+ if (pidff_find_special_fields(pidff) || pidff_find_effects(pidff, dev))
+ return -ENODEV;
+
+- if (!envelope_ok) {
++ if (PIDFF_FIND_FIELDS(set_envelope, PID_SET_ENVELOPE, 1)) {
+ if (test_and_clear_bit(FF_CONSTANT, dev->ffbit))
+ hid_warn(pidff->hid,
+ "has constant effect but no envelope\n");
+@@ -1116,16 +1319,20 @@ static int pidff_init_fields(struct pidff_device *pidff, struct input_dev *dev)
+ clear_bit(FF_RAMP, dev->ffbit);
+ }
+
+- if ((test_bit(FF_SPRING, dev->ffbit) ||
+- test_bit(FF_DAMPER, dev->ffbit) ||
+- test_bit(FF_FRICTION, dev->ffbit) ||
+- test_bit(FF_INERTIA, dev->ffbit)) &&
+- PIDFF_FIND_FIELDS(set_condition, PID_SET_CONDITION, 1)) {
+- hid_warn(pidff->hid, "unknown condition effect layout\n");
+- clear_bit(FF_SPRING, dev->ffbit);
+- clear_bit(FF_DAMPER, dev->ffbit);
+- clear_bit(FF_FRICTION, dev->ffbit);
+- clear_bit(FF_INERTIA, dev->ffbit);
++ if (test_bit(FF_SPRING, dev->ffbit) ||
++ test_bit(FF_DAMPER, dev->ffbit) ||
++ test_bit(FF_FRICTION, dev->ffbit) ||
++ test_bit(FF_INERTIA, dev->ffbit)) {
++ status = PIDFF_FIND_FIELDS(set_condition, PID_SET_CONDITION, 1);
++
++ if (status < 0) {
++ hid_warn(pidff->hid, "unknown condition effect layout\n");
++ clear_bit(FF_SPRING, dev->ffbit);
++ clear_bit(FF_DAMPER, dev->ffbit);
++ clear_bit(FF_FRICTION, dev->ffbit);
++ clear_bit(FF_INERTIA, dev->ffbit);
++ }
++ pidff->quirks |= status;
+ }
+
+ if (test_bit(FF_PERIODIC, dev->ffbit) &&
+@@ -1142,46 +1349,6 @@ static int pidff_init_fields(struct pidff_device *pidff, struct input_dev *dev)
+ return 0;
+ }
+
+-/*
+- * Reset the device
+- */
+-static void pidff_reset(struct pidff_device *pidff)
+-{
+- struct hid_device *hid = pidff->hid;
+- int i = 0;
+-
+- pidff->device_control->value[0] = pidff->control_id[PID_RESET];
+- /* We reset twice as sometimes hid_wait_io isn't waiting long enough */
+- hid_hw_request(hid, pidff->reports[PID_DEVICE_CONTROL], HID_REQ_SET_REPORT);
+- hid_hw_wait(hid);
+- hid_hw_request(hid, pidff->reports[PID_DEVICE_CONTROL], HID_REQ_SET_REPORT);
+- hid_hw_wait(hid);
+-
+- pidff->device_control->value[0] =
+- pidff->control_id[PID_ENABLE_ACTUATORS];
+- hid_hw_request(hid, pidff->reports[PID_DEVICE_CONTROL], HID_REQ_SET_REPORT);
+- hid_hw_wait(hid);
+-
+- /* pool report is sometimes messed up, refetch it */
+- hid_hw_request(hid, pidff->reports[PID_POOL], HID_REQ_GET_REPORT);
+- hid_hw_wait(hid);
+-
+- if (pidff->pool[PID_SIMULTANEOUS_MAX].value) {
+- while (pidff->pool[PID_SIMULTANEOUS_MAX].value[0] < 2) {
+- if (i++ > 20) {
+- hid_warn(pidff->hid,
+- "device reports %d simultaneous effects\n",
+- pidff->pool[PID_SIMULTANEOUS_MAX].value[0]);
+- break;
+- }
+- hid_dbg(pidff->hid, "pid_pool requested again\n");
+- hid_hw_request(hid, pidff->reports[PID_POOL],
+- HID_REQ_GET_REPORT);
+- hid_hw_wait(hid);
+- }
+- }
+-}
+-
+ /*
+ * Test if autocenter modification is using the supported method
+ */
+@@ -1206,24 +1373,23 @@ static int pidff_check_autocenter(struct pidff_device *pidff,
+
+ if (pidff->block_load[PID_EFFECT_BLOCK_INDEX].value[0] ==
+ pidff->block_load[PID_EFFECT_BLOCK_INDEX].field->logical_minimum + 1) {
+- pidff_autocenter(pidff, 0xffff);
++ pidff_autocenter(pidff, U16_MAX);
+ set_bit(FF_AUTOCENTER, dev->ffbit);
+ } else {
+ hid_notice(pidff->hid,
+ "device has unknown autocenter control method\n");
+ }
+-
+ pidff_erase_pid(pidff,
+ pidff->block_load[PID_EFFECT_BLOCK_INDEX].value[0]);
+
+ return 0;
+-
+ }
+
+ /*
+ * Check if the device is PID and initialize it
++ * Set initial quirks
+ */
+-int hid_pidff_init(struct hid_device *hid)
++int hid_pidff_init_with_quirks(struct hid_device *hid, u32 initial_quirks)
+ {
+ struct pidff_device *pidff;
+ struct hid_input *hidinput = list_entry(hid->inputs.next,
+@@ -1245,6 +1411,8 @@ int hid_pidff_init(struct hid_device *hid)
+ return -ENOMEM;
+
+ pidff->hid = hid;
++ pidff->quirks = initial_quirks;
++ pidff->effect_count = 0;
+
+ hid_device_io_start(hid);
+
+@@ -1261,14 +1429,9 @@ int hid_pidff_init(struct hid_device *hid)
+ if (error)
+ goto fail;
+
+- pidff_reset(pidff);
+-
+- if (test_bit(FF_GAIN, dev->ffbit)) {
+- pidff_set(&pidff->device_gain[PID_DEVICE_GAIN_FIELD], 0xffff);
+- hid_hw_request(hid, pidff->reports[PID_DEVICE_GAIN],
+- HID_REQ_SET_REPORT);
+- }
+-
++ /* pool report is sometimes messed up, refetch it */
++ pidff_fetch_pool(pidff);
++ pidff_set_gain_report(pidff, U16_MAX);
+ error = pidff_check_autocenter(pidff, dev);
+ if (error)
+ goto fail;
+@@ -1311,6 +1474,7 @@ int hid_pidff_init(struct hid_device *hid)
+ ff->playback = pidff_playback;
+
+ hid_info(dev, "Force feedback for USB HID PID devices by Anssi Hannula <anssi.hannula@gmail.com>\n");
++ hid_dbg(dev, "Active quirks mask: 0x%x\n", pidff->quirks);
+
+ hid_device_io_stop(hid);
+
+@@ -1322,3 +1486,14 @@ int hid_pidff_init(struct hid_device *hid)
+ kfree(pidff);
+ return error;
+ }
++EXPORT_SYMBOL_GPL(hid_pidff_init_with_quirks);
++
++/*
++ * Check if the device is PID and initialize it
++ * Wrapper made to keep the compatibility with old
++ * init function
++ */
++int hid_pidff_init(struct hid_device *hid)
++{
++ return hid_pidff_init_with_quirks(hid, 0);
++}
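The pidff_rescale_time() logic above converts Linux force-feedback times (milliseconds) into whatever unit a report field declares through its HID unit exponent. Below is a minimal standalone sketch of that conversion, assuming FF_TIME_EXPONENT is -3 (milliseconds) as the surrounding code implies; the function and the sample exponents are illustrative, not the driver's actual code.

	#include <stdint.h>
	#include <stdio.h>

	#define FF_TIME_EXPONENT -3	/* assumption: base unit is ms */

	static uint32_t rescale_time(uint16_t ms, int field_exponent)
	{
		uint32_t scaled = ms;
		int e = field_exponent;

		/* Field uses a smaller unit (e.g. 10^-4 s): multiply. */
		for (; e < FF_TIME_EXPONENT; e++)
			scaled *= 10;
		/* Field uses a larger unit (e.g. 10^-2 s): divide. */
		for (; e > FF_TIME_EXPONENT; e--)
			scaled /= 10;
		return scaled;
	}

	int main(void)
	{
		/* 1500 ms in 10^-4 s units -> 15000 */
		printf("%u\n", rescale_time(1500, -4));
		/* 1500 ms in 10^-2 s units -> 150 */
		printf("%u\n", rescale_time(1500, -2));
		return 0;
	}

The result is then clamped to the field's logical range by pidff_clamp(), so a device with a narrow duration field still receives a representable value.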
+diff --git a/drivers/hid/usbhid/hid-pidff.h b/drivers/hid/usbhid/hid-pidff.h
+new file mode 100644
+index 00000000000000..dda571e0a5bd38
+--- /dev/null
++++ b/drivers/hid/usbhid/hid-pidff.h
+@@ -0,0 +1,33 @@
++/* SPDX-License-Identifier: GPL-2.0-or-later */
++#ifndef __HID_PIDFF_H
++#define __HID_PIDFF_H
++
++#include <linux/hid.h>
++
++/* HID PIDFF quirks */
++
++/* Delay field (0xA7) missing. Skip it during set effect report upload */
++#define HID_PIDFF_QUIRK_MISSING_DELAY BIT(0)
++
++/* Missing Parameter block offset (0x23). Skip it during SET_CONDITION
++   report upload */
++#define HID_PIDFF_QUIRK_MISSING_PBO BIT(1)
++
++/* Initialise device control field even if logical_minimum != 1 */
++#define HID_PIDFF_QUIRK_PERMISSIVE_CONTROL BIT(2)
++
++/* Use fixed 0x4000 direction during SET_EFFECT report upload */
++#define HID_PIDFF_QUIRK_FIX_WHEEL_DIRECTION BIT(3)
++
++/* Force all periodic effects to be uploaded as SINE */
++#define HID_PIDFF_QUIRK_PERIODIC_SINE_ONLY BIT(4)
++
++#ifdef CONFIG_HID_PID
++int hid_pidff_init(struct hid_device *hid);
++int hid_pidff_init_with_quirks(struct hid_device *hid, u32 initial_quirks);
++#else
++#define hid_pidff_init NULL
++#define hid_pidff_init_with_quirks NULL
++#endif
++
++#endif
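The new header encodes each hardware oddity as one bit in a u32 quirks mask, so callers can OR quirks together at probe time and test them cheaply on the hot path. The following small userspace sketch shows the same pattern; BIT() is re-declared locally and the quirk names are shortened stand-ins for the HID_PIDFF_QUIRK_* constants.

	#include <stdint.h>
	#include <stdio.h>

	#define BIT(n) (1U << (n))

	#define QUIRK_MISSING_DELAY       BIT(0)
	#define QUIRK_MISSING_PBO         BIT(1)
	#define QUIRK_PERMISSIVE_CONTROL  BIT(2)

	int main(void)
	{
		uint32_t quirks = 0;

		/* A driver would OR in quirks as it discovers them... */
		quirks |= QUIRK_MISSING_DELAY | QUIRK_PERMISSIVE_CONTROL;

		/* ...and test individual bits at the point of use. */
		if (!(quirks & QUIRK_MISSING_DELAY))
			printf("would set the delay field\n");
		else
			printf("skipping the missing delay field\n");

		printf("active quirks mask: 0x%x\n", quirks);
		return 0;
	}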
+diff --git a/drivers/hsi/clients/ssi_protocol.c b/drivers/hsi/clients/ssi_protocol.c
+index afe470f3661c77..6105ea9a6c6aa2 100644
+--- a/drivers/hsi/clients/ssi_protocol.c
++++ b/drivers/hsi/clients/ssi_protocol.c
+@@ -401,6 +401,7 @@ static void ssip_reset(struct hsi_client *cl)
+ del_timer(&ssi->rx_wd);
+ del_timer(&ssi->tx_wd);
+ del_timer(&ssi->keep_alive);
++ cancel_work_sync(&ssi->work);
+ ssi->main_state = 0;
+ ssi->send_state = 0;
+ ssi->recv_state = 0;
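The one-line ssip_reset() fix above ensures the queued work item has finished before the state it reads is zeroed. Here is a toy pthreads analogue of that ordering, with pthread_join() standing in for cancel_work_sync(); the state variable and functions are invented for illustration.

	#include <pthread.h>
	#include <stdio.h>

	static int state = 42;		/* stands in for ssi->main_state etc. */
	static pthread_t worker;

	static void *work_fn(void *arg)
	{
		(void)arg;
		printf("worker sees state %d\n", state);
		return NULL;
	}

	static void reset(void)
	{
		/* Equivalent of cancel_work_sync(): wait until the work
		 * has finished before invalidating what it reads. */
		pthread_join(worker, NULL);
		state = 0;
	}

	int main(void)
	{
		pthread_create(&worker, NULL, work_fn, NULL);
		reset();
		printf("state after reset: %d\n", state);
		return 0;
	}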
+diff --git a/drivers/i3c/master.c b/drivers/i3c/master.c
+index 53ab814b676ffd..7c1dc42b809bfc 100644
+--- a/drivers/i3c/master.c
++++ b/drivers/i3c/master.c
+@@ -2553,6 +2553,9 @@ static void i3c_master_unregister_i3c_devs(struct i3c_master_controller *master)
+ */
+ void i3c_master_queue_ibi(struct i3c_dev_desc *dev, struct i3c_ibi_slot *slot)
+ {
++ if (!dev->ibi || !slot)
++ return;
++
+ atomic_inc(&dev->ibi->pending_ibis);
+ queue_work(dev->ibi->wq, &slot->work);
+ }
+diff --git a/drivers/i3c/master/svc-i3c-master.c b/drivers/i3c/master/svc-i3c-master.c
+index 87f98fa8afd582..42102baabcddad 100644
+--- a/drivers/i3c/master/svc-i3c-master.c
++++ b/drivers/i3c/master/svc-i3c-master.c
+@@ -378,7 +378,7 @@ static int svc_i3c_master_handle_ibi(struct svc_i3c_master *master,
+ slot->len < SVC_I3C_FIFO_SIZE) {
+ mdatactrl = readl(master->regs + SVC_I3C_MDATACTRL);
+ count = SVC_I3C_MDATACTRL_RXCOUNT(mdatactrl);
+- readsl(master->regs + SVC_I3C_MRDATAB, buf, count);
++ readsb(master->regs + SVC_I3C_MRDATAB, buf, count);
+ slot->len += count;
+ buf += count;
+ }
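The readsl()-to-readsb() change matters because SVC_I3C_MRDATAB is a byte-wide FIFO register and RXCOUNT is a byte count: a 32-bit string read would issue one register access per four counted bytes and scatter the data. The simulated byte FIFO below shows the byte-per-count access pattern; the buffer contents are made up.

	#include <stdint.h>
	#include <stdio.h>

	static const uint8_t fifo_data[] = { 0xde, 0xad, 0xbe, 0xef, 0x42 };
	static unsigned int fifo_pos;

	/* One byte-wide register read, like readb() on MRDATAB. */
	static uint8_t fifo_read_byte(void)
	{
		return fifo_data[fifo_pos++];
	}

	/* readsb()-style: one register read per byte of count. */
	static void read_fifo(uint8_t *buf, unsigned int count)
	{
		while (count--)
			*buf++ = fifo_read_byte();
	}

	int main(void)
	{
		uint8_t buf[8] = { 0 };

		fifo_pos = 0;
		read_fifo(buf, 5);	/* RXCOUNT reported 5 bytes */
		for (int i = 0; i < 5; i++)
			printf("%02x ", buf[i]);
		printf("\n");
		return 0;
	}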
+diff --git a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+index d525ab43a4aebf..dd7d030d2e8909 100644
+--- a/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
++++ b/drivers/iommu/arm/arm-smmu-v3/tegra241-cmdqv.c
+@@ -487,17 +487,6 @@ static int tegra241_cmdqv_hw_reset(struct arm_smmu_device *smmu)
+
+ /* VCMDQ Resource Helpers */
+
+-static void tegra241_vcmdq_free_smmu_cmdq(struct tegra241_vcmdq *vcmdq)
+-{
+- struct arm_smmu_queue *q = &vcmdq->cmdq.q;
+- size_t nents = 1 << q->llq.max_n_shift;
+- size_t qsz = nents << CMDQ_ENT_SZ_SHIFT;
+-
+- if (!q->base)
+- return;
+- dmam_free_coherent(vcmdq->cmdqv->smmu.dev, qsz, q->base, q->base_dma);
+-}
+-
+ static int tegra241_vcmdq_alloc_smmu_cmdq(struct tegra241_vcmdq *vcmdq)
+ {
+ struct arm_smmu_device *smmu = &vcmdq->cmdqv->smmu;
+@@ -560,7 +549,8 @@ static void tegra241_vintf_free_lvcmdq(struct tegra241_vintf *vintf, u16 lidx)
+ struct tegra241_vcmdq *vcmdq = vintf->lvcmdqs[lidx];
+ char header[64];
+
+- tegra241_vcmdq_free_smmu_cmdq(vcmdq);
++ /* Note that the lvcmdq queue memory space is managed by devres */
++
+ tegra241_vintf_deinit_lvcmdq(vintf, lidx);
+
+ dev_dbg(vintf->cmdqv->dev,
+@@ -768,13 +758,13 @@ static int tegra241_cmdqv_init_structures(struct arm_smmu_device *smmu)
+
+ vintf = kzalloc(sizeof(*vintf), GFP_KERNEL);
+ if (!vintf)
+- goto out_fallback;
++ return -ENOMEM;
+
+ /* Init VINTF0 for in-kernel use */
+ ret = tegra241_cmdqv_init_vintf(cmdqv, 0, vintf);
+ if (ret) {
+ dev_err(cmdqv->dev, "failed to init vintf0: %d\n", ret);
+- goto free_vintf;
++ return ret;
+ }
+
+ /* Preallocate logical VCMDQs to VINTF0 */
+@@ -783,24 +773,12 @@ static int tegra241_cmdqv_init_structures(struct arm_smmu_device *smmu)
+
+ vcmdq = tegra241_vintf_alloc_lvcmdq(vintf, lidx);
+ if (IS_ERR(vcmdq))
+- goto free_lvcmdq;
++ return PTR_ERR(vcmdq);
+ }
+
+ /* Now, we are ready to run all the impl ops */
+ smmu->impl_ops = &tegra241_cmdqv_impl_ops;
+ return 0;
+-
+-free_lvcmdq:
+- for (lidx--; lidx >= 0; lidx--)
+- tegra241_vintf_free_lvcmdq(vintf, lidx);
+- tegra241_cmdqv_deinit_vintf(cmdqv, vintf->idx);
+-free_vintf:
+- kfree(vintf);
+-out_fallback:
+- dev_info(smmu->impl_dev, "Falling back to standard SMMU CMDQ\n");
+- smmu->options &= ~ARM_SMMU_OPT_TEGRA241_CMDQV;
+- tegra241_cmdqv_remove(smmu);
+- return 0;
+ }
+
+ #ifdef CONFIG_IOMMU_DEBUGFS
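tegra241_vcmdq_free_smmu_cmdq() could be deleted because the queue memory comes from a dmam_*() (device-managed) allocation: devres ties the buffer's lifetime to the device, so an explicit free risks a double free. The toy model below captures the devres idea, with malloc/free standing in for the DMA API; the struct names are invented.

	#include <stdio.h>
	#include <stdlib.h>

	struct devres { void *ptr; struct devres *next; };
	struct device { struct devres *res; };

	static void *devm_alloc(struct device *dev, size_t size)
	{
		struct devres *dr = malloc(sizeof(*dr));

		if (!dr)
			return NULL;
		dr->ptr = calloc(1, size);
		dr->next = dev->res;
		dev->res = dr;
		return dr->ptr;
	}

	static void device_release(struct device *dev)
	{
		while (dev->res) {
			struct devres *dr = dev->res;

			dev->res = dr->next;
			free(dr->ptr);	/* managed release, no driver code */
			free(dr);
		}
	}

	int main(void)
	{
		struct device dev = { 0 };
		void *q = devm_alloc(&dev, 4096);	/* like dmam_alloc_coherent() */

		printf("queue at %p, freed automatically below\n", q);
		device_release(&dev);	/* driver must NOT free q itself */
		return 0;
	}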
+diff --git a/drivers/iommu/exynos-iommu.c b/drivers/iommu/exynos-iommu.c
+index c666ecab955d21..7465dbb6fa80c8 100644
+--- a/drivers/iommu/exynos-iommu.c
++++ b/drivers/iommu/exynos-iommu.c
+@@ -832,7 +832,7 @@ static int __maybe_unused exynos_sysmmu_suspend(struct device *dev)
+ struct exynos_iommu_owner *owner = dev_iommu_priv_get(master);
+
+ mutex_lock(&owner->rpm_lock);
+- if (&data->domain->domain != &exynos_identity_domain) {
++ if (data->domain) {
+ dev_dbg(data->sysmmu, "saving state\n");
+ __sysmmu_disable(data);
+ }
+@@ -850,7 +850,7 @@ static int __maybe_unused exynos_sysmmu_resume(struct device *dev)
+ struct exynos_iommu_owner *owner = dev_iommu_priv_get(master);
+
+ mutex_lock(&owner->rpm_lock);
+- if (&data->domain->domain != &exynos_identity_domain) {
++ if (data->domain) {
+ dev_dbg(data->sysmmu, "restoring state\n");
+ __sysmmu_enable(data);
+ }
+diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
+index 9c46a4cd384842..038a66388564a8 100644
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -3174,6 +3174,7 @@ static int __init probe_acpi_namespace_devices(void)
+ if (dev->bus != &acpi_bus_type)
+ continue;
+
++ up_read(&dmar_global_lock);
+ adev = to_acpi_device(dev);
+ mutex_lock(&adev->physical_node_lock);
+ list_for_each_entry(pn,
+@@ -3183,6 +3184,7 @@ static int __init probe_acpi_namespace_devices(void)
+ break;
+ }
+ mutex_unlock(&adev->physical_node_lock);
++ down_read(&dmar_global_lock);
+
+ if (ret)
+ return ret;
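The up_read()/down_read() pair added around the physical-node walk releases the coarse dmar_global_lock before sleeping on adev->physical_node_lock, avoiding an ABBA deadlock with paths that take the two locks in the opposite order. Below is a compact pthreads sketch of that drop-then-reacquire pattern; the lock names are illustrative.

	#include <pthread.h>
	#include <stdio.h>

	static pthread_rwlock_t global_lock = PTHREAD_RWLOCK_INITIALIZER;
	static pthread_mutex_t node_lock = PTHREAD_MUTEX_INITIALIZER;

	static void probe_one(void)
	{
		/* Drop the global lock before the finer-grained mutex. */
		pthread_rwlock_unlock(&global_lock);
		pthread_mutex_lock(&node_lock);
		printf("walking physical nodes\n");
		pthread_mutex_unlock(&node_lock);
		pthread_rwlock_rdlock(&global_lock);
	}

	int main(void)
	{
		pthread_rwlock_rdlock(&global_lock);
		probe_one();
		pthread_rwlock_unlock(&global_lock);
		return 0;
	}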
+diff --git a/drivers/iommu/intel/irq_remapping.c b/drivers/iommu/intel/irq_remapping.c
+index 7a6d188e3bea09..71b3383b7115cb 100644
+--- a/drivers/iommu/intel/irq_remapping.c
++++ b/drivers/iommu/intel/irq_remapping.c
+@@ -26,11 +26,6 @@
+ #include "../iommu-pages.h"
+ #include "cap_audit.h"
+
+-enum irq_mode {
+- IRQ_REMAPPING,
+- IRQ_POSTING,
+-};
+-
+ struct ioapic_scope {
+ struct intel_iommu *iommu;
+ unsigned int id;
+@@ -50,8 +45,8 @@ struct irq_2_iommu {
+ u16 irte_index;
+ u16 sub_handle;
+ u8 irte_mask;
+- enum irq_mode mode;
+ bool posted_msi;
++ bool posted_vcpu;
+ };
+
+ struct intel_ir_data {
+@@ -139,7 +134,6 @@ static int alloc_irte(struct intel_iommu *iommu,
+ irq_iommu->irte_index = index;
+ irq_iommu->sub_handle = 0;
+ irq_iommu->irte_mask = mask;
+- irq_iommu->mode = IRQ_REMAPPING;
+ }
+ raw_spin_unlock_irqrestore(&irq_2_ir_lock, flags);
+
+@@ -194,8 +188,6 @@ static int modify_irte(struct irq_2_iommu *irq_iommu,
+
+ rc = qi_flush_iec(iommu, index, 0);
+
+- /* Update iommu mode according to the IRTE mode */
+- irq_iommu->mode = irte->pst ? IRQ_POSTING : IRQ_REMAPPING;
+ raw_spin_unlock_irqrestore(&irq_2_ir_lock, flags);
+
+ return rc;
+@@ -1173,7 +1165,26 @@ static void intel_ir_reconfigure_irte_posted(struct irq_data *irqd)
+ static inline void intel_ir_reconfigure_irte_posted(struct irq_data *irqd) {}
+ #endif
+
+-static void intel_ir_reconfigure_irte(struct irq_data *irqd, bool force)
++static void __intel_ir_reconfigure_irte(struct irq_data *irqd, bool force_host)
++{
++ struct intel_ir_data *ir_data = irqd->chip_data;
++
++ /*
++ * Don't modify IRTEs for IRQs that are being posted to vCPUs if the
++ * host CPU affinity changes.
++ */
++ if (ir_data->irq_2_iommu.posted_vcpu && !force_host)
++ return;
++
++ ir_data->irq_2_iommu.posted_vcpu = false;
++
++ if (ir_data->irq_2_iommu.posted_msi)
++ intel_ir_reconfigure_irte_posted(irqd);
++ else
++ modify_irte(&ir_data->irq_2_iommu, &ir_data->irte_entry);
++}
++
++static void intel_ir_reconfigure_irte(struct irq_data *irqd, bool force_host)
+ {
+ struct intel_ir_data *ir_data = irqd->chip_data;
+ struct irte *irte = &ir_data->irte_entry;
+@@ -1186,10 +1197,7 @@ static void intel_ir_reconfigure_irte(struct irq_data *irqd, bool force)
+ irte->vector = cfg->vector;
+ irte->dest_id = IRTE_DEST(cfg->dest_apicid);
+
+- if (ir_data->irq_2_iommu.posted_msi)
+- intel_ir_reconfigure_irte_posted(irqd);
+- else if (force || ir_data->irq_2_iommu.mode == IRQ_REMAPPING)
+- modify_irte(&ir_data->irq_2_iommu, irte);
++ __intel_ir_reconfigure_irte(irqd, force_host);
+ }
+
+ /*
+@@ -1244,7 +1252,7 @@ static int intel_ir_set_vcpu_affinity(struct irq_data *data, void *info)
+
+ /* stop posting interrupts, back to the default mode */
+ if (!vcpu_pi_info) {
+- modify_irte(&ir_data->irq_2_iommu, &ir_data->irte_entry);
++ __intel_ir_reconfigure_irte(data, true);
+ } else {
+ struct irte irte_pi;
+
+@@ -1267,6 +1275,7 @@ static int intel_ir_set_vcpu_affinity(struct irq_data *data, void *info)
+ irte_pi.pda_h = (vcpu_pi_info->pi_desc_addr >> 32) &
+ ~(-1UL << PDA_HIGH_BIT);
+
++ ir_data->irq_2_iommu.posted_vcpu = true;
+ modify_irte(&ir_data->irq_2_iommu, &irte_pi);
+ }
+
+@@ -1282,43 +1291,44 @@ static struct irq_chip intel_ir_chip = {
+ };
+
+ /*
+- * With posted MSIs, all vectors are multiplexed into a single notification
+- * vector. Devices MSIs are then dispatched in a demux loop where
+- * EOIs can be coalesced as well.
++ * With posted MSIs, the MSI vectors are multiplexed into a single notification
++ * vector, and only the notification vector is sent to the APIC IRR. Device
++ * MSIs are then dispatched in a demux loop that harvests the MSIs from the
++ * CPU's Posted Interrupt Request bitmap. I.e. Posted MSIs never get sent to
++ * the APIC IRR, and thus do not need an EOI. The notification handler instead
++ * performs a single EOI after processing the PIR.
+ *
+- * "INTEL-IR-POST" IRQ chip does not do EOI on ACK, thus the dummy irq_ack()
+- * function. Instead EOI is performed by the posted interrupt notification
+- * handler.
++ * Note! Pending SMP/CPU affinity changes, which are per MSI, must still be
++ * honored, only the APIC EOI is omitted.
+ *
+ * For the example below, 3 MSIs are coalesced into one CPU notification. Only
+- * one apic_eoi() is needed.
++ * one apic_eoi() is needed, but each MSI needs to process pending changes to
++ * its CPU affinity.
+ *
+ * __sysvec_posted_msi_notification()
+ * irq_enter();
+ * handle_edge_irq()
+ * irq_chip_ack_parent()
+- * dummy(); // No EOI
++ * irq_move_irq(); // No EOI
+ * handle_irq_event()
+ * driver_handler()
+ * handle_edge_irq()
+ * irq_chip_ack_parent()
+- * dummy(); // No EOI
++ * irq_move_irq(); // No EOI
+ * handle_irq_event()
+ * driver_handler()
+ * handle_edge_irq()
+ * irq_chip_ack_parent()
+- * dummy(); // No EOI
++ * irq_move_irq(); // No EOI
+ * handle_irq_event()
+ * driver_handler()
+ * apic_eoi()
+ * irq_exit()
++ *
+ */
+-
+-static void dummy_ack(struct irq_data *d) { }
+-
+ static struct irq_chip intel_ir_chip_post_msi = {
+ .name = "INTEL-IR-POST",
+- .irq_ack = dummy_ack,
++ .irq_ack = irq_move_irq,
+ .irq_set_affinity = intel_ir_set_affinity,
+ .irq_compose_msi_msg = intel_ir_compose_msi_msg,
+ .irq_set_vcpu_affinity = intel_ir_set_vcpu_affinity,
+@@ -1494,6 +1504,9 @@ static void intel_irq_remapping_deactivate(struct irq_domain *domain,
+ struct intel_ir_data *data = irq_data->chip_data;
+ struct irte entry;
+
++ WARN_ON_ONCE(data->irq_2_iommu.posted_vcpu);
++ data->irq_2_iommu.posted_vcpu = false;
++
+ memset(&entry, 0, sizeof(entry));
+ modify_irte(&data->irq_2_iommu, &entry);
+ }
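The new posted_vcpu flag is the key invariant here: once an IRTE has been handed to a vCPU for posting, host-side affinity changes must leave it alone until something explicitly forces a return to remapped mode. A minimal model of that guard follows; struct entry and its fields are invented for illustration.

	#include <stdbool.h>
	#include <stdio.h>

	struct entry { bool posted_vcpu; int dest; };

	static void reconfigure(struct entry *e, int new_dest, bool force_host)
	{
		if (e->posted_vcpu && !force_host)
			return;		/* leave the vCPU posting intact */
		e->posted_vcpu = false;
		e->dest = new_dest;
	}

	int main(void)
	{
		struct entry e = { .posted_vcpu = true, .dest = 1 };

		reconfigure(&e, 2, false);	/* ignored: still posted */
		printf("dest=%d posted=%d\n", e.dest, e.posted_vcpu);
		reconfigure(&e, 2, true);	/* forced back to host mode */
		printf("dest=%d posted=%d\n", e.dest, e.posted_vcpu);
		return 0;
	}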
+diff --git a/drivers/iommu/iommufd/device.c b/drivers/iommu/iommufd/device.c
+index 5fd3dd42029015..3fd8920e79ffb9 100644
+--- a/drivers/iommu/iommufd/device.c
++++ b/drivers/iommu/iommufd/device.c
+@@ -352,6 +352,122 @@ iommufd_device_attach_reserved_iova(struct iommufd_device *idev,
+ return 0;
+ }
+
++/* The device attach/detach/replace helpers for attach_handle */
++
++/* Check if idev is attached to igroup->hwpt */
++static bool iommufd_device_is_attached(struct iommufd_device *idev)
++{
++ struct iommufd_device *cur;
++
++ list_for_each_entry(cur, &idev->igroup->device_list, group_item)
++ if (cur == idev)
++ return true;
++ return false;
++}
++
++static int iommufd_hwpt_attach_device(struct iommufd_hw_pagetable *hwpt,
++ struct iommufd_device *idev)
++{
++ struct iommufd_attach_handle *handle;
++ int rc;
++
++ lockdep_assert_held(&idev->igroup->lock);
++
++ handle = kzalloc(sizeof(*handle), GFP_KERNEL);
++ if (!handle)
++ return -ENOMEM;
++
++ if (hwpt->fault) {
++ rc = iommufd_fault_iopf_enable(idev);
++ if (rc)
++ goto out_free_handle;
++ }
++
++ handle->idev = idev;
++ rc = iommu_attach_group_handle(hwpt->domain, idev->igroup->group,
++ &handle->handle);
++ if (rc)
++ goto out_disable_iopf;
++
++ return 0;
++
++out_disable_iopf:
++ if (hwpt->fault)
++ iommufd_fault_iopf_disable(idev);
++out_free_handle:
++ kfree(handle);
++ return rc;
++}
++
++static struct iommufd_attach_handle *
++iommufd_device_get_attach_handle(struct iommufd_device *idev)
++{
++ struct iommu_attach_handle *handle;
++
++ lockdep_assert_held(&idev->igroup->lock);
++
++ handle =
++ iommu_attach_handle_get(idev->igroup->group, IOMMU_NO_PASID, 0);
++ if (IS_ERR(handle))
++ return NULL;
++ return to_iommufd_handle(handle);
++}
++
++static void iommufd_hwpt_detach_device(struct iommufd_hw_pagetable *hwpt,
++ struct iommufd_device *idev)
++{
++ struct iommufd_attach_handle *handle;
++
++ handle = iommufd_device_get_attach_handle(idev);
++ iommu_detach_group_handle(hwpt->domain, idev->igroup->group);
++ if (hwpt->fault) {
++ iommufd_auto_response_faults(hwpt, handle);
++ iommufd_fault_iopf_disable(idev);
++ }
++ kfree(handle);
++}
++
++static int iommufd_hwpt_replace_device(struct iommufd_device *idev,
++ struct iommufd_hw_pagetable *hwpt,
++ struct iommufd_hw_pagetable *old)
++{
++ struct iommufd_attach_handle *handle, *old_handle =
++ iommufd_device_get_attach_handle(idev);
++ int rc;
++
++ handle = kzalloc(sizeof(*handle), GFP_KERNEL);
++ if (!handle)
++ return -ENOMEM;
++
++ if (hwpt->fault && !old->fault) {
++ rc = iommufd_fault_iopf_enable(idev);
++ if (rc)
++ goto out_free_handle;
++ }
++
++ handle->idev = idev;
++ rc = iommu_replace_group_handle(idev->igroup->group, hwpt->domain,
++ &handle->handle);
++ if (rc)
++ goto out_disable_iopf;
++
++ if (old->fault) {
++ iommufd_auto_response_faults(hwpt, old_handle);
++ if (!hwpt->fault)
++ iommufd_fault_iopf_disable(idev);
++ }
++ kfree(old_handle);
++
++ return 0;
++
++out_disable_iopf:
++ if (hwpt->fault && !old->fault)
++ iommufd_fault_iopf_disable(idev);
++out_free_handle:
++ kfree(handle);
++ return rc;
++}
++
+ int iommufd_hw_pagetable_attach(struct iommufd_hw_pagetable *hwpt,
+ struct iommufd_device *idev)
+ {
+@@ -488,6 +604,11 @@ iommufd_device_do_replace(struct iommufd_device *idev,
+ goto err_unlock;
+ }
+
++ if (!iommufd_device_is_attached(idev)) {
++ rc = -EINVAL;
++ goto err_unlock;
++ }
++
+ if (hwpt == igroup->hwpt) {
+ mutex_unlock(&idev->igroup->lock);
+ return NULL;
+@@ -1127,7 +1248,7 @@ int iommufd_access_rw(struct iommufd_access *access, unsigned long iova,
+ struct io_pagetable *iopt;
+ struct iopt_area *area;
+ unsigned long last_iova;
+- int rc;
++ int rc = -EINVAL;
+
+ if (!length)
+ return -EINVAL;
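The new attach helpers follow the kernel's usual unwind discipline: each completed step gets a label, and failure jumps to the label that undoes everything done so far, in reverse order. The standalone skeleton below shows that shape with stubbed steps; enable_iopf() and attach_domain() are placeholders, and the attach is forced to fail so the unwind path runs.

	#include <errno.h>
	#include <stdio.h>
	#include <stdlib.h>

	static int enable_iopf(void)   { return 0; }
	static void disable_iopf(void) { printf("iopf disabled\n"); }
	static int attach_domain(void) { return -EBUSY; /* simulated failure */ }

	static int attach_device(int has_fault)
	{
		void *handle = calloc(1, 64);
		int rc;

		if (!handle)
			return -ENOMEM;

		if (has_fault) {
			rc = enable_iopf();
			if (rc)
				goto out_free_handle;
		}

		rc = attach_domain();
		if (rc)
			goto out_disable_iopf;

		return 0;

	out_disable_iopf:
		if (has_fault)
			disable_iopf();
	out_free_handle:
		free(handle);
		return rc;
	}

	int main(void)
	{
		printf("attach: %d\n", attach_device(1));
		return 0;
	}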
+diff --git a/drivers/iommu/iommufd/fault.c b/drivers/iommu/iommufd/fault.c
+index 95e2e99ab27241..1b0812f8bf840a 100644
+--- a/drivers/iommu/iommufd/fault.c
++++ b/drivers/iommu/iommufd/fault.c
+@@ -16,7 +16,7 @@
+ #include "../iommu-priv.h"
+ #include "iommufd_private.h"
+
+-static int iommufd_fault_iopf_enable(struct iommufd_device *idev)
++int iommufd_fault_iopf_enable(struct iommufd_device *idev)
+ {
+ struct device *dev = idev->dev;
+ int ret;
+@@ -45,7 +45,7 @@ static int iommufd_fault_iopf_enable(struct iommufd_device *idev)
+ return ret;
+ }
+
+-static void iommufd_fault_iopf_disable(struct iommufd_device *idev)
++void iommufd_fault_iopf_disable(struct iommufd_device *idev)
+ {
+ mutex_lock(&idev->iopf_lock);
+ if (!WARN_ON(idev->iopf_enabled == 0)) {
+@@ -93,8 +93,8 @@ int iommufd_fault_domain_attach_dev(struct iommufd_hw_pagetable *hwpt,
+ return ret;
+ }
+
+-static void iommufd_auto_response_faults(struct iommufd_hw_pagetable *hwpt,
+- struct iommufd_attach_handle *handle)
++void iommufd_auto_response_faults(struct iommufd_hw_pagetable *hwpt,
++ struct iommufd_attach_handle *handle)
+ {
+ struct iommufd_fault *fault = hwpt->fault;
+ struct iopf_group *group, *next;
+diff --git a/drivers/iommu/iommufd/iommufd_private.h b/drivers/iommu/iommufd/iommufd_private.h
+index c1f82cb6824256..18cdf1391a0348 100644
+--- a/drivers/iommu/iommufd/iommufd_private.h
++++ b/drivers/iommu/iommufd/iommufd_private.h
+@@ -523,35 +523,10 @@ int iommufd_fault_domain_replace_dev(struct iommufd_device *idev,
+ struct iommufd_hw_pagetable *hwpt,
+ struct iommufd_hw_pagetable *old);
+
+-static inline int iommufd_hwpt_attach_device(struct iommufd_hw_pagetable *hwpt,
+- struct iommufd_device *idev)
+-{
+- if (hwpt->fault)
+- return iommufd_fault_domain_attach_dev(hwpt, idev);
+-
+- return iommu_attach_group(hwpt->domain, idev->igroup->group);
+-}
+-
+-static inline void iommufd_hwpt_detach_device(struct iommufd_hw_pagetable *hwpt,
+- struct iommufd_device *idev)
+-{
+- if (hwpt->fault) {
+- iommufd_fault_domain_detach_dev(hwpt, idev);
+- return;
+- }
+-
+- iommu_detach_group(hwpt->domain, idev->igroup->group);
+-}
+-
+-static inline int iommufd_hwpt_replace_device(struct iommufd_device *idev,
+- struct iommufd_hw_pagetable *hwpt,
+- struct iommufd_hw_pagetable *old)
+-{
+- if (old->fault || hwpt->fault)
+- return iommufd_fault_domain_replace_dev(idev, hwpt, old);
+-
+- return iommu_group_replace_domain(idev->igroup->group, hwpt->domain);
+-}
++int iommufd_fault_iopf_enable(struct iommufd_device *idev);
++void iommufd_fault_iopf_disable(struct iommufd_device *idev);
++void iommufd_auto_response_faults(struct iommufd_hw_pagetable *hwpt,
++ struct iommufd_attach_handle *handle);
+
+ #ifdef CONFIG_IOMMUFD_TEST
+ int iommufd_test(struct iommufd_ucmd *ucmd);
+diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
+index 6a2707fe7a78c0..32deab732209ec 100644
+--- a/drivers/iommu/mtk_iommu.c
++++ b/drivers/iommu/mtk_iommu.c
+@@ -1371,15 +1371,6 @@ static int mtk_iommu_probe(struct platform_device *pdev)
+ platform_set_drvdata(pdev, data);
+ mutex_init(&data->mutex);
+
+- ret = iommu_device_sysfs_add(&data->iommu, dev, NULL,
+- "mtk-iommu.%pa", &ioaddr);
+- if (ret)
+- goto out_link_remove;
+-
+- ret = iommu_device_register(&data->iommu, &mtk_iommu_ops, dev);
+- if (ret)
+- goto out_sysfs_remove;
+-
+ if (MTK_IOMMU_HAS_FLAG(data->plat_data, SHARE_PGTABLE)) {
+ list_add_tail(&data->list, data->plat_data->hw_list);
+ data->hw_list = data->plat_data->hw_list;
+@@ -1389,19 +1380,28 @@ static int mtk_iommu_probe(struct platform_device *pdev)
+ data->hw_list = &data->hw_list_head;
+ }
+
++ ret = iommu_device_sysfs_add(&data->iommu, dev, NULL,
++ "mtk-iommu.%pa", &ioaddr);
++ if (ret)
++ goto out_list_del;
++
++ ret = iommu_device_register(&data->iommu, &mtk_iommu_ops, dev);
++ if (ret)
++ goto out_sysfs_remove;
++
+ if (MTK_IOMMU_IS_TYPE(data->plat_data, MTK_IOMMU_TYPE_MM)) {
+ ret = component_master_add_with_match(dev, &mtk_iommu_com_ops, match);
+ if (ret)
+- goto out_list_del;
++ goto out_device_unregister;
+ }
+ return ret;
+
+-out_list_del:
+- list_del(&data->list);
++out_device_unregister:
+ iommu_device_unregister(&data->iommu);
+ out_sysfs_remove:
+ iommu_device_sysfs_remove(&data->iommu);
+-out_link_remove:
++out_list_del:
++ list_del(&data->list);
+ if (MTK_IOMMU_IS_TYPE(data->plat_data, MTK_IOMMU_TYPE_MM))
+ device_link_remove(data->smicomm_dev, dev);
+ out_runtime_disable:
+diff --git a/drivers/leds/rgb/leds-qcom-lpg.c b/drivers/leds/rgb/leds-qcom-lpg.c
+index f3c9ef2bfa572f..5d8e27e2e7ae71 100644
+--- a/drivers/leds/rgb/leds-qcom-lpg.c
++++ b/drivers/leds/rgb/leds-qcom-lpg.c
+@@ -461,7 +461,7 @@ static int lpg_calc_freq(struct lpg_channel *chan, uint64_t period)
+ max_res = LPG_RESOLUTION_9BIT;
+ }
+
+- min_period = div64_u64((u64)NSEC_PER_SEC * (1 << pwm_resolution_arr[0]),
++ min_period = div64_u64((u64)NSEC_PER_SEC * ((1 << pwm_resolution_arr[0]) - 1),
+ clk_rate_arr[clk_len - 1]);
+ if (period <= min_period)
+ return -EINVAL;
+@@ -482,7 +482,7 @@ static int lpg_calc_freq(struct lpg_channel *chan, uint64_t period)
+ */
+
+ for (i = 0; i < pwm_resolution_count; i++) {
+- resolution = 1 << pwm_resolution_arr[i];
++ resolution = (1 << pwm_resolution_arr[i]) - 1;
+ for (clk_sel = 1; clk_sel < clk_len; clk_sel++) {
+ u64 numerator = period * clk_rate_arr[clk_sel];
+
+@@ -529,7 +529,7 @@ static void lpg_calc_duty(struct lpg_channel *chan, uint64_t duty)
+ unsigned int clk_rate;
+
+ if (chan->subtype == LPG_SUBTYPE_HI_RES_PWM) {
+- max = LPG_RESOLUTION_15BIT - 1;
++ max = BIT(lpg_pwm_resolution_hi_res[chan->pwm_resolution_sel]) - 1;
+ clk_rate = lpg_clk_rates_hi_res[chan->clk_sel];
+ } else {
+ max = LPG_RESOLUTION_9BIT - 1;
+@@ -1291,7 +1291,7 @@ static int lpg_pwm_get_state(struct pwm_chip *chip, struct pwm_device *pwm,
+ if (ret)
+ return ret;
+
+- state->period = DIV_ROUND_UP_ULL((u64)NSEC_PER_SEC * (1 << resolution) *
++ state->period = DIV_ROUND_UP_ULL((u64)NSEC_PER_SEC * ((1 << resolution) - 1) *
+ pre_div * (1 << m), refclk);
+ state->duty_cycle = DIV_ROUND_UP_ULL((u64)NSEC_PER_SEC * pwm_value * pre_div * (1 << m), refclk);
+ } else {
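The (1 << resolution) - 1 fixes reflect that an N-bit PWM counter yields 2^N - 1 steps per cycle, so period math based on 2^N overstates both the minimum period and the reported period. The few lines of arithmetic below show the corrected minimum-period calculation; the 19.2 MHz clock and 9-bit resolution are example values, not taken from the driver tables.

	#include <stdint.h>
	#include <stdio.h>

	#define NSEC_PER_SEC 1000000000ULL

	int main(void)
	{
		unsigned int resolution = 9;		/* 9-bit PWM */
		uint64_t clk_rate = 19200000;		/* 19.2 MHz source clock */
		uint64_t steps = (1ULL << resolution) - 1;	/* 511, not 512 */
		uint64_t min_period = NSEC_PER_SEC * steps / clk_rate;

		printf("steps=%llu min_period=%lluns\n",
		       (unsigned long long)steps,
		       (unsigned long long)min_period);
		return 0;
	}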
+diff --git a/drivers/mailbox/tegra-hsp.c b/drivers/mailbox/tegra-hsp.c
+index 46c921000a34cf..76f54f8b6b6c5e 100644
+--- a/drivers/mailbox/tegra-hsp.c
++++ b/drivers/mailbox/tegra-hsp.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (c) 2016-2023, NVIDIA CORPORATION. All rights reserved.
++ * Copyright (c) 2016-2025, NVIDIA CORPORATION. All rights reserved.
+ */
+
+ #include <linux/delay.h>
+@@ -28,12 +28,6 @@
+ #define HSP_INT_FULL_MASK 0xff
+
+ #define HSP_INT_DIMENSIONING 0x380
+-#define HSP_nSM_SHIFT 0
+-#define HSP_nSS_SHIFT 4
+-#define HSP_nAS_SHIFT 8
+-#define HSP_nDB_SHIFT 12
+-#define HSP_nSI_SHIFT 16
+-#define HSP_nINT_MASK 0xf
+
+ #define HSP_DB_TRIGGER 0x0
+ #define HSP_DB_ENABLE 0x4
+@@ -97,6 +91,20 @@ struct tegra_hsp_soc {
+ bool has_per_mb_ie;
+ bool has_128_bit_mb;
+ unsigned int reg_stride;
++
++ /* Shifts for dimensioning register. */
++ unsigned int si_shift;
++ unsigned int db_shift;
++ unsigned int as_shift;
++ unsigned int ss_shift;
++ unsigned int sm_shift;
++
++ /* Masks for dimensioning register. */
++ unsigned int si_mask;
++ unsigned int db_mask;
++ unsigned int as_mask;
++ unsigned int ss_mask;
++ unsigned int sm_mask;
+ };
+
+ struct tegra_hsp {
+@@ -747,11 +755,11 @@ static int tegra_hsp_probe(struct platform_device *pdev)
+ return PTR_ERR(hsp->regs);
+
+ value = tegra_hsp_readl(hsp, HSP_INT_DIMENSIONING);
+- hsp->num_sm = (value >> HSP_nSM_SHIFT) & HSP_nINT_MASK;
+- hsp->num_ss = (value >> HSP_nSS_SHIFT) & HSP_nINT_MASK;
+- hsp->num_as = (value >> HSP_nAS_SHIFT) & HSP_nINT_MASK;
+- hsp->num_db = (value >> HSP_nDB_SHIFT) & HSP_nINT_MASK;
+- hsp->num_si = (value >> HSP_nSI_SHIFT) & HSP_nINT_MASK;
++ hsp->num_sm = (value >> hsp->soc->sm_shift) & hsp->soc->sm_mask;
++ hsp->num_ss = (value >> hsp->soc->ss_shift) & hsp->soc->ss_mask;
++ hsp->num_as = (value >> hsp->soc->as_shift) & hsp->soc->as_mask;
++ hsp->num_db = (value >> hsp->soc->db_shift) & hsp->soc->db_mask;
++ hsp->num_si = (value >> hsp->soc->si_shift) & hsp->soc->si_mask;
+
+ err = platform_get_irq_byname_optional(pdev, "doorbell");
+ if (err >= 0)
+@@ -915,6 +923,16 @@ static const struct tegra_hsp_soc tegra186_hsp_soc = {
+ .has_per_mb_ie = false,
+ .has_128_bit_mb = false,
+ .reg_stride = 0x100,
++ .si_shift = 16,
++ .db_shift = 12,
++ .as_shift = 8,
++ .ss_shift = 4,
++ .sm_shift = 0,
++ .si_mask = 0xf,
++ .db_mask = 0xf,
++ .as_mask = 0xf,
++ .ss_mask = 0xf,
++ .sm_mask = 0xf,
+ };
+
+ static const struct tegra_hsp_soc tegra194_hsp_soc = {
+@@ -922,6 +940,16 @@ static const struct tegra_hsp_soc tegra194_hsp_soc = {
+ .has_per_mb_ie = true,
+ .has_128_bit_mb = false,
+ .reg_stride = 0x100,
++ .si_shift = 16,
++ .db_shift = 12,
++ .as_shift = 8,
++ .ss_shift = 4,
++ .sm_shift = 0,
++ .si_mask = 0xf,
++ .db_mask = 0xf,
++ .as_mask = 0xf,
++ .ss_mask = 0xf,
++ .sm_mask = 0xf,
+ };
+
+ static const struct tegra_hsp_soc tegra234_hsp_soc = {
+@@ -929,6 +957,16 @@ static const struct tegra_hsp_soc tegra234_hsp_soc = {
+ .has_per_mb_ie = false,
+ .has_128_bit_mb = true,
+ .reg_stride = 0x100,
++ .si_shift = 16,
++ .db_shift = 12,
++ .as_shift = 8,
++ .ss_shift = 4,
++ .sm_shift = 0,
++ .si_mask = 0xf,
++ .db_mask = 0xf,
++ .as_mask = 0xf,
++ .ss_mask = 0xf,
++ .sm_mask = 0xf,
+ };
+
+ static const struct tegra_hsp_soc tegra264_hsp_soc = {
+@@ -936,6 +974,16 @@ static const struct tegra_hsp_soc tegra264_hsp_soc = {
+ .has_per_mb_ie = false,
+ .has_128_bit_mb = true,
+ .reg_stride = 0x1000,
++ .si_shift = 17,
++ .db_shift = 12,
++ .as_shift = 8,
++ .ss_shift = 4,
++ .sm_shift = 0,
++ .si_mask = 0x1f,
++ .db_mask = 0x1f,
++ .as_mask = 0xf,
++ .ss_mask = 0xf,
++ .sm_mask = 0xf,
+ };
+
+ static const struct of_device_id tegra_hsp_match[] = {
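Replacing the fixed HSP_n*_SHIFT defines with per-SoC shift/mask members lets Tegra264's wider dimensioning fields (5-bit SI/DB counts at new offsets) decode through the same code path as older chips. The sketch below shows that table-driven field extraction; the register value is made up.

	#include <stdint.h>
	#include <stdio.h>

	struct soc_layout {
		unsigned int si_shift, si_mask;
		unsigned int sm_shift, sm_mask;
	};

	static const struct soc_layout tegra186 = { 16, 0xf,  0, 0xf };
	static const struct soc_layout tegra264 = { 17, 0x1f, 0, 0xf };

	static unsigned int field(uint32_t reg, unsigned int shift,
				  unsigned int mask)
	{
		return (reg >> shift) & mask;
	}

	int main(void)
	{
		uint32_t dim = 0x00260008;	/* invented dimensioning value */

		printf("t186 num_si=%u\n",
		       field(dim, tegra186.si_shift, tegra186.si_mask));
		printf("t264 num_si=%u\n",
		       field(dim, tegra264.si_shift, tegra264.si_mask));
		printf("num_sm=%u\n",
		       field(dim, tegra186.sm_shift, tegra186.sm_mask));
		return 0;
	}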
+diff --git a/drivers/md/dm-ebs-target.c b/drivers/md/dm-ebs-target.c
+index 18ae45dcbfb28b..b19b0142a690a3 100644
+--- a/drivers/md/dm-ebs-target.c
++++ b/drivers/md/dm-ebs-target.c
+@@ -390,6 +390,12 @@ static int ebs_map(struct dm_target *ti, struct bio *bio)
+ return DM_MAPIO_REMAPPED;
+ }
+
++static void ebs_postsuspend(struct dm_target *ti)
++{
++ struct ebs_c *ec = ti->private;
++ dm_bufio_client_reset(ec->bufio);
++}
++
+ static void ebs_status(struct dm_target *ti, status_type_t type,
+ unsigned int status_flags, char *result, unsigned int maxlen)
+ {
+@@ -447,6 +453,7 @@ static struct target_type ebs_target = {
+ .ctr = ebs_ctr,
+ .dtr = ebs_dtr,
+ .map = ebs_map,
++ .postsuspend = ebs_postsuspend,
+ .status = ebs_status,
+ .io_hints = ebs_io_hints,
+ .prepare_ioctl = ebs_prepare_ioctl,
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 555dc06b942287..b35b779b170443 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -21,6 +21,7 @@
+ #include <linux/reboot.h>
+ #include <crypto/hash.h>
+ #include <crypto/skcipher.h>
++#include <crypto/utils.h>
+ #include <linux/async_tx.h>
+ #include <linux/dm-bufio.h>
+
+@@ -516,7 +517,7 @@ static int sb_mac(struct dm_integrity_c *ic, bool wr)
+ dm_integrity_io_error(ic, "crypto_shash_digest", r);
+ return r;
+ }
+- if (memcmp(mac, actual_mac, mac_size)) {
++ if (crypto_memneq(mac, actual_mac, mac_size)) {
+ dm_integrity_io_error(ic, "superblock mac", -EILSEQ);
+ dm_audit_log_target(DM_MSG_PREFIX, "mac-superblock", ic->ti, 0);
+ return -EILSEQ;
+@@ -859,7 +860,7 @@ static void rw_section_mac(struct dm_integrity_c *ic, unsigned int section, bool
+ if (likely(wr))
+ memcpy(&js->mac, result + (j * JOURNAL_MAC_PER_SECTOR), JOURNAL_MAC_PER_SECTOR);
+ else {
+- if (memcmp(&js->mac, result + (j * JOURNAL_MAC_PER_SECTOR), JOURNAL_MAC_PER_SECTOR)) {
++ if (crypto_memneq(&js->mac, result + (j * JOURNAL_MAC_PER_SECTOR), JOURNAL_MAC_PER_SECTOR)) {
+ dm_integrity_io_error(ic, "journal mac", -EILSEQ);
+ dm_audit_log_target(DM_MSG_PREFIX, "mac-journal", ic->ti, 0);
+ }
+@@ -1401,10 +1402,9 @@ static bool find_newer_committed_node(struct dm_integrity_c *ic, struct journal_
+ static int dm_integrity_rw_tag(struct dm_integrity_c *ic, unsigned char *tag, sector_t *metadata_block,
+ unsigned int *metadata_offset, unsigned int total_size, int op)
+ {
+-#define MAY_BE_FILLER 1
+-#define MAY_BE_HASH 2
+ unsigned int hash_offset = 0;
+- unsigned int may_be = MAY_BE_HASH | (ic->discard ? MAY_BE_FILLER : 0);
++ unsigned char mismatch_hash = 0;
++ unsigned char mismatch_filler = !ic->discard;
+
+ do {
+ unsigned char *data, *dp;
+@@ -1425,7 +1425,7 @@ static int dm_integrity_rw_tag(struct dm_integrity_c *ic, unsigned char *tag, se
+ if (op == TAG_READ) {
+ memcpy(tag, dp, to_copy);
+ } else if (op == TAG_WRITE) {
+- if (memcmp(dp, tag, to_copy)) {
++ if (crypto_memneq(dp, tag, to_copy)) {
+ memcpy(dp, tag, to_copy);
+ dm_bufio_mark_partial_buffer_dirty(b, *metadata_offset, *metadata_offset + to_copy);
+ }
+@@ -1433,29 +1433,30 @@ static int dm_integrity_rw_tag(struct dm_integrity_c *ic, unsigned char *tag, se
+ /* e.g.: op == TAG_CMP */
+
+ if (likely(is_power_of_2(ic->tag_size))) {
+- if (unlikely(memcmp(dp, tag, to_copy)))
+- if (unlikely(!ic->discard) ||
+- unlikely(memchr_inv(dp, DISCARD_FILLER, to_copy) != NULL)) {
+- goto thorough_test;
+- }
++ if (unlikely(crypto_memneq(dp, tag, to_copy)))
++ goto thorough_test;
+ } else {
+ unsigned int i, ts;
+ thorough_test:
+ ts = total_size;
+
+ for (i = 0; i < to_copy; i++, ts--) {
+- if (unlikely(dp[i] != tag[i]))
+- may_be &= ~MAY_BE_HASH;
+- if (likely(dp[i] != DISCARD_FILLER))
+- may_be &= ~MAY_BE_FILLER;
++ /*
++ * Warning: the control flow must not be
++ * dependent on match/mismatch of
++ * individual bytes.
++ */
++ mismatch_hash |= dp[i] ^ tag[i];
++ mismatch_filler |= dp[i] ^ DISCARD_FILLER;
+ hash_offset++;
+ if (unlikely(hash_offset == ic->tag_size)) {
+- if (unlikely(!may_be)) {
++ if (unlikely(mismatch_hash) && unlikely(mismatch_filler)) {
+ dm_bufio_release(b);
+ return ts;
+ }
+ hash_offset = 0;
+- may_be = MAY_BE_HASH | (ic->discard ? MAY_BE_FILLER : 0);
++ mismatch_hash = 0;
++ mismatch_filler = !ic->discard;
+ }
+ }
+ }
+@@ -1476,8 +1477,6 @@ static int dm_integrity_rw_tag(struct dm_integrity_c *ic, unsigned char *tag, se
+ } while (unlikely(total_size));
+
+ return 0;
+-#undef MAY_BE_FILLER
+-#undef MAY_BE_HASH
+ }
+
+ struct flush_request {
+@@ -2076,7 +2075,7 @@ static bool __journal_read_write(struct dm_integrity_io *dio, struct bio *bio,
+ char checksums_onstack[MAX_T(size_t, HASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
+
+ integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack);
+- if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
++ if (unlikely(crypto_memneq(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
+ DMERR_LIMIT("Checksum failed when reading from journal, at sector 0x%llx",
+ logical_sector);
+ dm_audit_log_bio(DM_MSG_PREFIX, "journal-checksum",
+@@ -2595,7 +2594,7 @@ static void dm_integrity_inline_recheck(struct work_struct *w)
+ bio_put(outgoing_bio);
+
+ integrity_sector_checksum(ic, dio->bio_details.bi_iter.bi_sector, outgoing_data, digest);
+- if (unlikely(memcmp(digest, dio->integrity_payload, min(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)))) {
++ if (unlikely(crypto_memneq(digest, dio->integrity_payload, min(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)))) {
+ DMERR_LIMIT("%pg: Checksum failed at sector 0x%llx",
+ ic->dev->bdev, dio->bio_details.bi_iter.bi_sector);
+ atomic64_inc(&ic->number_of_mismatches);
+@@ -2634,7 +2633,7 @@ static int dm_integrity_end_io(struct dm_target *ti, struct bio *bio, blk_status
+ char *mem = bvec_kmap_local(&bv);
+ //memset(mem, 0xff, ic->sectors_per_block << SECTOR_SHIFT);
+ integrity_sector_checksum(ic, dio->bio_details.bi_iter.bi_sector, mem, digest);
+- if (unlikely(memcmp(digest, dio->integrity_payload + pos,
++ if (unlikely(crypto_memneq(digest, dio->integrity_payload + pos,
+ min(crypto_shash_digestsize(ic->internal_hash), ic->tag_size)))) {
+ kunmap_local(mem);
+ dm_integrity_free_payload(dio);
+@@ -2911,7 +2910,7 @@ static void do_journal_write(struct dm_integrity_c *ic, unsigned int write_start
+
+ integrity_sector_checksum(ic, sec + ((l - j) << ic->sb->log2_sectors_per_block),
+ (char *)access_journal_data(ic, i, l), test_tag);
+- if (unlikely(memcmp(test_tag, journal_entry_tag(ic, je2), ic->tag_size))) {
++ if (unlikely(crypto_memneq(test_tag, journal_entry_tag(ic, je2), ic->tag_size))) {
+ dm_integrity_io_error(ic, "tag mismatch when replaying journal", -EILSEQ);
+ dm_audit_log_target(DM_MSG_PREFIX, "integrity-replay-journal", ic->ti, 0);
+ }
+@@ -5081,16 +5080,19 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned int argc, char **argv
+
+ ic->recalc_bitmap = dm_integrity_alloc_page_list(n_bitmap_pages);
+ if (!ic->recalc_bitmap) {
++ ti->error = "Could not allocate memory for bitmap";
+ r = -ENOMEM;
+ goto bad;
+ }
+ ic->may_write_bitmap = dm_integrity_alloc_page_list(n_bitmap_pages);
+ if (!ic->may_write_bitmap) {
++ ti->error = "Could not allocate memory for bitmap";
+ r = -ENOMEM;
+ goto bad;
+ }
+ ic->bbs = kvmalloc_array(ic->n_bitmap_blocks, sizeof(struct bitmap_block_status), GFP_KERNEL);
+ if (!ic->bbs) {
++ ti->error = "Could not allocate memory for bitmap";
+ r = -ENOMEM;
+ goto bad;
+ }
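
The dm-integrity hunks above swap memcmp() for crypto_memneq() and replace
the early-exit MAY_BE_* flags with XOR accumulators, so that neither the
comparison result nor the loop's control flow leaks, via timing, which byte
of a MAC or tag mismatched. A minimal userspace sketch of the same
constant-time idea (the kernel's crypto_memneq() is additionally hardened
against compiler optimizations that could reintroduce data-dependent code):

#include <stddef.h>

static int ct_memneq(const void *a, const void *b, size_t len)
{
	const unsigned char *pa = a, *pb = b;
	unsigned char diff = 0;
	size_t i;

	/* Accumulate differences; no data-dependent branches or exits. */
	for (i = 0; i < len; i++)
		diff |= pa[i] ^ pb[i];

	return diff != 0;	/* nonzero iff the buffers differ */
}
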
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index c142ec5458b70f..53ba0fbdf495c8 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -796,6 +796,13 @@ static int verity_map(struct dm_target *ti, struct bio *bio)
+ return DM_MAPIO_SUBMITTED;
+ }
+
++static void verity_postsuspend(struct dm_target *ti)
++{
++ struct dm_verity *v = ti->private;
++ flush_workqueue(v->verify_wq);
++ dm_bufio_client_reset(v->bufio);
++}
++
+ /*
+ * Status: V (valid) or C (corruption found)
+ */
+@@ -1766,6 +1773,7 @@ static struct target_type verity_target = {
+ .ctr = verity_ctr,
+ .dtr = verity_dtr,
+ .map = verity_map,
++ .postsuspend = verity_postsuspend,
+ .status = verity_status,
+ .prepare_ioctl = verity_prepare_ioctl,
+ .iterate_devices = verity_iterate_devices,
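
The new postsuspend hook is a quiesce pattern: flushing the verify
workqueue guarantees no in-flight verification work still references the
buffer cache, and resetting the dm-bufio client drops cached hash/data
blocks so they are re-read, and re-verified, after resume. The same shape
reduced to its two steps:

/* Sketch of the two-step quiesce; v->verify_wq and v->bufio are the
 * target's verification workqueue and dm-bufio client, as above. */
static void example_postsuspend(struct dm_target *ti)
{
	struct dm_verity *v = ti->private;

	flush_workqueue(v->verify_wq);		/* 1. drain in-flight work */
	dm_bufio_client_reset(v->bufio);	/* 2. drop cached blocks */
}
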
+diff --git a/drivers/media/common/siano/smsdvb-main.c b/drivers/media/common/siano/smsdvb-main.c
+index 44d8fe8b220e79..9b1a650ed055c9 100644
+--- a/drivers/media/common/siano/smsdvb-main.c
++++ b/drivers/media/common/siano/smsdvb-main.c
+@@ -1243,6 +1243,8 @@ static int __init smsdvb_module_init(void)
+ smsdvb_debugfs_register();
+
+ rc = smscore_register_hotplug(smsdvb_hotplug);
++ if (rc)
++ smsdvb_debugfs_unregister();
+
+ pr_debug("\n");
+
+diff --git a/drivers/media/i2c/adv748x/adv748x.h b/drivers/media/i2c/adv748x/adv748x.h
+index 9bc0121d0eff39..2c1db5968af8e7 100644
+--- a/drivers/media/i2c/adv748x/adv748x.h
++++ b/drivers/media/i2c/adv748x/adv748x.h
+@@ -320,7 +320,7 @@ struct adv748x_state {
+
+ /* Free run pattern select */
+ #define ADV748X_SDP_FRP 0x14
+-#define ADV748X_SDP_FRP_MASK GENMASK(3, 1)
++#define ADV748X_SDP_FRP_MASK GENMASK(2, 0)
+
+ /* Saturation */
+ #define ADV748X_SDP_SD_SAT_U 0xe3 /* user_map_rw_reg_e3 */
+diff --git a/drivers/media/i2c/ccs/ccs-core.c b/drivers/media/i2c/ccs/ccs-core.c
+index cb21df46bab169..4b7d8039b1c9fc 100644
+--- a/drivers/media/i2c/ccs/ccs-core.c
++++ b/drivers/media/i2c/ccs/ccs-core.c
+@@ -3562,6 +3562,7 @@ static int ccs_probe(struct i2c_client *client)
+ out_disable_runtime_pm:
+ pm_runtime_put_noidle(&client->dev);
+ pm_runtime_disable(&client->dev);
++ pm_runtime_set_suspended(&client->dev);
+
+ out_cleanup:
+ ccs_cleanup(sensor);
+@@ -3591,9 +3592,10 @@ static void ccs_remove(struct i2c_client *client)
+ v4l2_async_unregister_subdev(subdev);
+
+ pm_runtime_disable(&client->dev);
+- if (!pm_runtime_status_suspended(&client->dev))
++ if (!pm_runtime_status_suspended(&client->dev)) {
+ ccs_power_off(&client->dev);
+- pm_runtime_set_suspended(&client->dev);
++ pm_runtime_set_suspended(&client->dev);
++ }
+
+ for (i = 0; i < sensor->ssds_used; i++)
+ v4l2_device_unregister_subdev(&sensor->ssds[i].sd);
+diff --git a/drivers/media/i2c/hi556.c b/drivers/media/i2c/hi556.c
+index f31f9886c924e4..0e89aff9c664da 100644
+--- a/drivers/media/i2c/hi556.c
++++ b/drivers/media/i2c/hi556.c
+@@ -1230,12 +1230,13 @@ static int hi556_check_hwcfg(struct device *dev)
+ ret = fwnode_property_read_u32(fwnode, "clock-frequency", &mclk);
+ if (ret) {
+ dev_err(dev, "can't get clock frequency");
+- return ret;
++ goto check_hwcfg_error;
+ }
+
+ if (mclk != HI556_MCLK) {
+ dev_err(dev, "external clock %d is not supported", mclk);
+- return -EINVAL;
++ ret = -EINVAL;
++ goto check_hwcfg_error;
+ }
+
+ if (bus_cfg.bus.mipi_csi2.num_data_lanes != 2) {
+diff --git a/drivers/media/i2c/imx214.c b/drivers/media/i2c/imx214.c
+index 4962cfe7c83d62..6a393e18267f42 100644
+--- a/drivers/media/i2c/imx214.c
++++ b/drivers/media/i2c/imx214.c
+@@ -1075,10 +1075,6 @@ static int imx214_probe(struct i2c_client *client)
+ */
+ imx214_power_on(imx214->dev);
+
+- pm_runtime_set_active(imx214->dev);
+- pm_runtime_enable(imx214->dev);
+- pm_runtime_idle(imx214->dev);
+-
+ ret = imx214_ctrls_init(imx214);
+ if (ret < 0)
+ goto error_power_off;
+@@ -1099,22 +1095,30 @@ static int imx214_probe(struct i2c_client *client)
+
+ imx214_entity_init_state(&imx214->sd, NULL);
+
++ pm_runtime_set_active(imx214->dev);
++ pm_runtime_enable(imx214->dev);
++
+ ret = v4l2_async_register_subdev_sensor(&imx214->sd);
+ if (ret < 0) {
+ dev_err(dev, "could not register v4l2 device\n");
+ goto free_entity;
+ }
+
++ pm_runtime_idle(imx214->dev);
++
+ return 0;
+
+ free_entity:
++ pm_runtime_disable(imx214->dev);
++ pm_runtime_set_suspended(&client->dev);
+ media_entity_cleanup(&imx214->sd.entity);
++
+ free_ctrl:
+ mutex_destroy(&imx214->mutex);
+ v4l2_ctrl_handler_free(&imx214->ctrls);
++
+ error_power_off:
+- pm_runtime_disable(imx214->dev);
+- regulator_bulk_disable(IMX214_NUM_SUPPLIES, imx214->supplies);
++ imx214_power_off(imx214->dev);
+
+ return ret;
+ }
+@@ -1127,11 +1131,12 @@ static void imx214_remove(struct i2c_client *client)
+ v4l2_async_unregister_subdev(&imx214->sd);
+ media_entity_cleanup(&imx214->sd.entity);
+ v4l2_ctrl_handler_free(&imx214->ctrls);
+-
+- pm_runtime_disable(&client->dev);
+- pm_runtime_set_suspended(&client->dev);
+-
+ mutex_destroy(&imx214->mutex);
++ pm_runtime_disable(&client->dev);
++ if (!pm_runtime_status_suspended(&client->dev)) {
++ imx214_power_off(imx214->dev);
++ pm_runtime_set_suspended(&client->dev);
++ }
+ }
+
+ static const struct of_device_id imx214_of_match[] = {
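
The imx214 hunks above, like the ccs and imx219 changes in this series,
converge on one teardown rule: disable runtime PM first, and only power the
device off and call pm_runtime_set_suspended() when the device was not
already runtime-suspended, so the PM core's status and reference counting
stay consistent. The pattern in isolation:

/* Kernel-context sketch; example_power_off() is a hypothetical
 * stand-in for the driver's power-off routine. */
static void example_remove(struct i2c_client *client)
{
	pm_runtime_disable(&client->dev);
	if (!pm_runtime_status_suspended(&client->dev)) {
		example_power_off(&client->dev);
		pm_runtime_set_suspended(&client->dev);
	}
}
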
+diff --git a/drivers/media/i2c/imx219.c b/drivers/media/i2c/imx219.c
+index e78a80b2bb2e45..906aa314b7f84c 100644
+--- a/drivers/media/i2c/imx219.c
++++ b/drivers/media/i2c/imx219.c
+@@ -134,10 +134,11 @@
+
+ /* Pixel rate is fixed for all the modes */
+ #define IMX219_PIXEL_RATE 182400000
+-#define IMX219_PIXEL_RATE_4LANE 280800000
++#define IMX219_PIXEL_RATE_4LANE 281600000
+
+ #define IMX219_DEFAULT_LINK_FREQ 456000000
+-#define IMX219_DEFAULT_LINK_FREQ_4LANE 363000000
++#define IMX219_DEFAULT_LINK_FREQ_4LANE_UNSUPPORTED 363000000
++#define IMX219_DEFAULT_LINK_FREQ_4LANE 364000000
+
+ /* IMX219 native and active pixel array size. */
+ #define IMX219_NATIVE_WIDTH 3296U
+@@ -169,15 +170,6 @@ static const struct cci_reg_sequence imx219_common_regs[] = {
+ { CCI_REG8(0x30eb), 0x05 },
+ { CCI_REG8(0x30eb), 0x09 },
+
+- /* PLL Clock Table */
+- { IMX219_REG_VTPXCK_DIV, 5 },
+- { IMX219_REG_VTSYCK_DIV, 1 },
+- { IMX219_REG_PREPLLCK_VT_DIV, 3 }, /* 0x03 = AUTO set */
+- { IMX219_REG_PREPLLCK_OP_DIV, 3 }, /* 0x03 = AUTO set */
+- { IMX219_REG_PLL_VT_MPY, 57 },
+- { IMX219_REG_OPSYCK_DIV, 1 },
+- { IMX219_REG_PLL_OP_MPY, 114 },
+-
+ /* Undocumented registers */
+ { CCI_REG8(0x455e), 0x00 },
+ { CCI_REG8(0x471e), 0x4b },
+@@ -202,12 +194,45 @@ static const struct cci_reg_sequence imx219_common_regs[] = {
+ { IMX219_REG_EXCK_FREQ, IMX219_EXCK_FREQ(IMX219_XCLK_FREQ / 1000000) },
+ };
+
++static const struct cci_reg_sequence imx219_2lane_regs[] = {
++ /* PLL Clock Table */
++ { IMX219_REG_VTPXCK_DIV, 5 },
++ { IMX219_REG_VTSYCK_DIV, 1 },
++ { IMX219_REG_PREPLLCK_VT_DIV, 3 }, /* 0x03 = AUTO set */
++ { IMX219_REG_PREPLLCK_OP_DIV, 3 }, /* 0x03 = AUTO set */
++ { IMX219_REG_PLL_VT_MPY, 57 },
++ { IMX219_REG_OPSYCK_DIV, 1 },
++ { IMX219_REG_PLL_OP_MPY, 114 },
++
++ /* 2-Lane CSI Mode */
++ { IMX219_REG_CSI_LANE_MODE, IMX219_CSI_2_LANE_MODE },
++};
++
++static const struct cci_reg_sequence imx219_4lane_regs[] = {
++ /* PLL Clock Table */
++ { IMX219_REG_VTPXCK_DIV, 5 },
++ { IMX219_REG_VTSYCK_DIV, 1 },
++ { IMX219_REG_PREPLLCK_VT_DIV, 3 }, /* 0x03 = AUTO set */
++ { IMX219_REG_PREPLLCK_OP_DIV, 3 }, /* 0x03 = AUTO set */
++ { IMX219_REG_PLL_VT_MPY, 88 },
++ { IMX219_REG_OPSYCK_DIV, 1 },
++ { IMX219_REG_PLL_OP_MPY, 91 },
++
++ /* 4-Lane CSI Mode */
++ { IMX219_REG_CSI_LANE_MODE, IMX219_CSI_4_LANE_MODE },
++};
++
+ static const s64 imx219_link_freq_menu[] = {
+ IMX219_DEFAULT_LINK_FREQ,
+ };
+
+ static const s64 imx219_link_freq_4lane_menu[] = {
+ IMX219_DEFAULT_LINK_FREQ_4LANE,
++ /*
++ * This will never be advertised to userspace, but will be used for
++ * v4l2_link_freq_to_bitmap
++ */
++ IMX219_DEFAULT_LINK_FREQ_4LANE_UNSUPPORTED,
+ };
+
+ static const char * const imx219_test_pattern_menu[] = {
+@@ -663,9 +688,11 @@ static int imx219_set_framefmt(struct imx219 *imx219,
+
+ static int imx219_configure_lanes(struct imx219 *imx219)
+ {
+- return cci_write(imx219->regmap, IMX219_REG_CSI_LANE_MODE,
+- imx219->lanes == 2 ? IMX219_CSI_2_LANE_MODE :
+- IMX219_CSI_4_LANE_MODE, NULL);
++ /* Write the appropriate PLL settings for the number of MIPI lanes */
++ return cci_multi_reg_write(imx219->regmap,
++ imx219->lanes == 2 ? imx219_2lane_regs : imx219_4lane_regs,
++ imx219->lanes == 2 ? ARRAY_SIZE(imx219_2lane_regs) :
++ ARRAY_SIZE(imx219_4lane_regs), NULL);
+ };
+
+ static int imx219_start_streaming(struct imx219 *imx219,
+@@ -1042,6 +1069,7 @@ static int imx219_check_hwcfg(struct device *dev, struct imx219 *imx219)
+ struct v4l2_fwnode_endpoint ep_cfg = {
+ .bus_type = V4L2_MBUS_CSI2_DPHY
+ };
++ unsigned long link_freq_bitmap;
+ int ret = -EINVAL;
+
+ endpoint = fwnode_graph_get_next_endpoint(dev_fwnode(dev), NULL);
+@@ -1063,23 +1091,40 @@ static int imx219_check_hwcfg(struct device *dev, struct imx219 *imx219)
+ imx219->lanes = ep_cfg.bus.mipi_csi2.num_data_lanes;
+
+ /* Check the link frequency set in device tree */
+- if (!ep_cfg.nr_of_link_frequencies) {
+- dev_err_probe(dev, -EINVAL,
+- "link-frequency property not found in DT\n");
+- goto error_out;
++ switch (imx219->lanes) {
++ case 2:
++ ret = v4l2_link_freq_to_bitmap(dev,
++ ep_cfg.link_frequencies,
++ ep_cfg.nr_of_link_frequencies,
++ imx219_link_freq_menu,
++ ARRAY_SIZE(imx219_link_freq_menu),
++ &link_freq_bitmap);
++ break;
++ case 4:
++ ret = v4l2_link_freq_to_bitmap(dev,
++ ep_cfg.link_frequencies,
++ ep_cfg.nr_of_link_frequencies,
++ imx219_link_freq_4lane_menu,
++ ARRAY_SIZE(imx219_link_freq_4lane_menu),
++ &link_freq_bitmap);
++
++ if (!ret && (link_freq_bitmap & BIT(1))) {
++ dev_warn(dev, "Link frequency of %d not supported, but has been incorrectly advertised previously\n",
++ IMX219_DEFAULT_LINK_FREQ_4LANE_UNSUPPORTED);
++ dev_warn(dev, "Using link frequency of %d\n",
++ IMX219_DEFAULT_LINK_FREQ_4LANE);
++ link_freq_bitmap |= BIT(0);
++ }
++ break;
+ }
+
+- if (ep_cfg.nr_of_link_frequencies != 1 ||
+- (ep_cfg.link_frequencies[0] != ((imx219->lanes == 2) ?
+- IMX219_DEFAULT_LINK_FREQ : IMX219_DEFAULT_LINK_FREQ_4LANE))) {
++ if (ret || !(link_freq_bitmap & BIT(0))) {
++ ret = -EINVAL;
+ dev_err_probe(dev, -EINVAL,
+ "Link frequency not supported: %lld\n",
+ ep_cfg.link_frequencies[0]);
+- goto error_out;
+ }
+
+- ret = 0;
+-
+ error_out:
+ v4l2_fwnode_endpoint_free(&ep_cfg);
+ fwnode_handle_put(endpoint);
+@@ -1186,6 +1231,9 @@ static int imx219_probe(struct i2c_client *client)
+ goto error_media_entity;
+ }
+
++ pm_runtime_set_active(dev);
++ pm_runtime_enable(dev);
++
+ ret = v4l2_async_register_subdev_sensor(&imx219->sd);
+ if (ret < 0) {
+ dev_err_probe(dev, ret,
+@@ -1193,15 +1241,14 @@ static int imx219_probe(struct i2c_client *client)
+ goto error_subdev_cleanup;
+ }
+
+- /* Enable runtime PM and turn off the device */
+- pm_runtime_set_active(dev);
+- pm_runtime_enable(dev);
+ pm_runtime_idle(dev);
+
+ return 0;
+
+ error_subdev_cleanup:
+ v4l2_subdev_cleanup(&imx219->sd);
++ pm_runtime_disable(dev);
++ pm_runtime_set_suspended(dev);
+
+ error_media_entity:
+ media_entity_cleanup(&imx219->sd.entity);
+@@ -1226,9 +1273,10 @@ static void imx219_remove(struct i2c_client *client)
+ imx219_free_controls(imx219);
+
+ pm_runtime_disable(&client->dev);
+- if (!pm_runtime_status_suspended(&client->dev))
++ if (!pm_runtime_status_suspended(&client->dev)) {
+ imx219_power_off(&client->dev);
+- pm_runtime_set_suspended(&client->dev);
++ pm_runtime_set_suspended(&client->dev);
++ }
+ }
+
+ static const struct of_device_id imx219_dt_ids[] = {
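
The imx219 endpoint check now delegates to v4l2_link_freq_to_bitmap(),
which intersects the frequencies listed in the device tree with the
driver's menu and returns the overlap as a bitmap; the driver can then
accept, warn about, or reject individual entries, as done above for the
previously mis-advertised 4-lane value. The core of that call pattern,
assuming a driver-side menu[] array and the parsed endpoint ep_cfg from
above:

unsigned long bitmap;
int ret;

ret = v4l2_link_freq_to_bitmap(dev, ep_cfg.link_frequencies,
			       ep_cfg.nr_of_link_frequencies,
			       menu, ARRAY_SIZE(menu), &bitmap);
/* bit n set <=> menu[n] was also listed in the device tree */
if (ret || !(bitmap & BIT(0)))
	return -EINVAL;	/* the one supported frequency is missing */
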
+diff --git a/drivers/media/i2c/imx319.c b/drivers/media/i2c/imx319.c
+index dd1b4ff983dcb1..701840f4a5cc00 100644
+--- a/drivers/media/i2c/imx319.c
++++ b/drivers/media/i2c/imx319.c
+@@ -2442,17 +2442,19 @@ static int imx319_probe(struct i2c_client *client)
+ if (full_power)
+ pm_runtime_set_active(&client->dev);
+ pm_runtime_enable(&client->dev);
+- pm_runtime_idle(&client->dev);
+
+ ret = v4l2_async_register_subdev_sensor(&imx319->sd);
+ if (ret < 0)
+ goto error_media_entity_pm;
+
++ pm_runtime_idle(&client->dev);
++
+ return 0;
+
+ error_media_entity_pm:
+ pm_runtime_disable(&client->dev);
+- pm_runtime_set_suspended(&client->dev);
++ if (full_power)
++ pm_runtime_set_suspended(&client->dev);
+ media_entity_cleanup(&imx319->sd.entity);
+
+ error_handler_free:
+@@ -2474,7 +2476,8 @@ static void imx319_remove(struct i2c_client *client)
+ v4l2_ctrl_handler_free(sd->ctrl_handler);
+
+ pm_runtime_disable(&client->dev);
+- pm_runtime_set_suspended(&client->dev);
++ if (!pm_runtime_status_suspended(&client->dev))
++ pm_runtime_set_suspended(&client->dev);
+
+ mutex_destroy(&imx319->mutex);
+ }
+diff --git a/drivers/media/i2c/ov7251.c b/drivers/media/i2c/ov7251.c
+index 30f61e04ecaf51..3226888d77e9c7 100644
+--- a/drivers/media/i2c/ov7251.c
++++ b/drivers/media/i2c/ov7251.c
+@@ -922,6 +922,8 @@ static int ov7251_set_power_on(struct device *dev)
+ return ret;
+ }
+
++ usleep_range(1000, 1100);
++
+ gpiod_set_value_cansleep(ov7251->enable_gpio, 1);
+
+ /* wait at least 65536 external clock cycles */
+@@ -1696,7 +1698,7 @@ static int ov7251_probe(struct i2c_client *client)
+ return PTR_ERR(ov7251->analog_regulator);
+ }
+
+- ov7251->enable_gpio = devm_gpiod_get(dev, "enable", GPIOD_OUT_HIGH);
++ ov7251->enable_gpio = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW);
+ if (IS_ERR(ov7251->enable_gpio)) {
+ dev_err(dev, "cannot get enable gpio\n");
+ return PTR_ERR(ov7251->enable_gpio);
+diff --git a/drivers/media/pci/intel/ipu6/ipu6-isys-video.c b/drivers/media/pci/intel/ipu6/ipu6-isys-video.c
+index b37561352ead38..48388c0c851ca1 100644
+--- a/drivers/media/pci/intel/ipu6/ipu6-isys-video.c
++++ b/drivers/media/pci/intel/ipu6/ipu6-isys-video.c
+@@ -1296,6 +1296,7 @@ int ipu6_isys_video_init(struct ipu6_isys_video *av)
+ av->vdev.release = video_device_release_empty;
+ av->vdev.fops = &isys_fops;
+ av->vdev.v4l2_dev = &av->isys->v4l2_dev;
++ av->vdev.dev_parent = &av->isys->adev->isp->pdev->dev;
+ if (!av->vdev.ioctl_ops)
+ av->vdev.ioctl_ops = &ipu6_v4l2_ioctl_ops;
+ av->vdev.queue = &av->aq.vbq;
+diff --git a/drivers/media/pci/mgb4/mgb4_cmt.c b/drivers/media/pci/mgb4/mgb4_cmt.c
+index a25b68403bc608..c22ef51436ed5d 100644
+--- a/drivers/media/pci/mgb4/mgb4_cmt.c
++++ b/drivers/media/pci/mgb4/mgb4_cmt.c
+@@ -135,8 +135,8 @@ static const u16 cmt_vals_out[][15] = {
+ };
+
+ static const u16 cmt_vals_in[][13] = {
+- {0x1082, 0x0000, 0x5104, 0x0000, 0x11C7, 0x0000, 0x1041, 0x02BC, 0x7C01, 0xFFE9, 0x9900, 0x9908, 0x8100},
+ {0x1104, 0x0000, 0x9208, 0x0000, 0x138E, 0x0000, 0x1041, 0x015E, 0x7C01, 0xFFE9, 0x0100, 0x0908, 0x1000},
++ {0x1082, 0x0000, 0x5104, 0x0000, 0x11C7, 0x0000, 0x1041, 0x02BC, 0x7C01, 0xFFE9, 0x9900, 0x9908, 0x8100},
+ };
+
+ static const u32 cmt_addrs_out[][15] = {
+@@ -206,10 +206,11 @@ u32 mgb4_cmt_set_vout_freq(struct mgb4_vout_dev *voutdev, unsigned int freq)
+
+ mgb4_write_reg(video, regs->config, 0x1 | (config & ~0x3));
+
++ mgb4_mask_reg(video, regs->config, 0x100, 0x100);
++
+ for (i = 0; i < ARRAY_SIZE(cmt_addrs_out[0]); i++)
+ mgb4_write_reg(&voutdev->mgbdev->cmt, addr[i], reg_set[i]);
+
+- mgb4_mask_reg(video, regs->config, 0x100, 0x100);
+ mgb4_mask_reg(video, regs->config, 0x100, 0x0);
+
+ mgb4_write_reg(video, regs->config, config & ~0x1);
+@@ -236,10 +237,11 @@ void mgb4_cmt_set_vin_freq_range(struct mgb4_vin_dev *vindev,
+
+ mgb4_write_reg(video, regs->config, 0x1 | (config & ~0x3));
+
++ mgb4_mask_reg(video, regs->config, 0x1000, 0x1000);
++
+ for (i = 0; i < ARRAY_SIZE(cmt_addrs_in[0]); i++)
+ mgb4_write_reg(&vindev->mgbdev->cmt, addr[i], reg_set[i]);
+
+- mgb4_mask_reg(video, regs->config, 0x1000, 0x1000);
+ mgb4_mask_reg(video, regs->config, 0x1000, 0x0);
+
+ mgb4_write_reg(video, regs->config, config & ~0x1);
+diff --git a/drivers/media/platform/chips-media/wave5/wave5-hw.c b/drivers/media/platform/chips-media/wave5/wave5-hw.c
+index c89aafabc74213..710311d8511386 100644
+--- a/drivers/media/platform/chips-media/wave5/wave5-hw.c
++++ b/drivers/media/platform/chips-media/wave5/wave5-hw.c
+@@ -576,7 +576,7 @@ int wave5_vpu_build_up_dec_param(struct vpu_instance *inst,
+ vpu_write_reg(inst->dev, W5_CMD_NUM_CQ_DEPTH_M1,
+ WAVE521_COMMAND_QUEUE_DEPTH - 1);
+ }
+-
++ vpu_write_reg(inst->dev, W5_CMD_ERR_CONCEAL, 0);
+ ret = send_firmware_command(inst, W5_CREATE_INSTANCE, true, NULL, NULL);
+ if (ret) {
+ wave5_vdi_free_dma_memory(vpu_dev, &p_dec_info->vb_work);
+diff --git a/drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c b/drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c
+index 0c5c9a8de91faa..e238447c88bbf3 100644
+--- a/drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c
++++ b/drivers/media/platform/chips-media/wave5/wave5-vpu-dec.c
+@@ -1424,10 +1424,24 @@ static int wave5_vpu_dec_start_streaming(struct vb2_queue *q, unsigned int count
+ if (ret)
+ goto free_bitstream_vbuf;
+ } else if (q->type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) {
++ struct dec_initial_info *initial_info =
++ &inst->codec_info->dec_info.initial_info;
++
+ if (inst->state == VPU_INST_STATE_STOP)
+ ret = switch_state(inst, VPU_INST_STATE_INIT_SEQ);
+ if (ret)
+ goto return_buffers;
++
++ if (inst->state == VPU_INST_STATE_INIT_SEQ &&
++ inst->dev->product_code == WAVE521C_CODE) {
++ if (initial_info->luma_bitdepth != 8) {
++ dev_info(inst->dev->dev, "%s: no support for %d bit depth",
++ __func__, initial_info->luma_bitdepth);
++ ret = -EINVAL;
++ goto return_buffers;
++ }
++ }
++
+ }
+
+ return ret;
+@@ -1446,6 +1460,16 @@ static int streamoff_output(struct vb2_queue *q)
+ struct vb2_v4l2_buffer *buf;
+ int ret;
+ dma_addr_t new_rd_ptr;
++ struct dec_output_info dec_info;
++ unsigned int i;
++
++ for (i = 0; i < v4l2_m2m_num_dst_bufs_ready(m2m_ctx); i++) {
++ ret = wave5_vpu_dec_set_disp_flag(inst, i);
++ if (ret)
++ dev_dbg(inst->dev->dev,
++ "%s: Setting display flag of buf index: %u, fail: %d\n",
++ __func__, i, ret);
++ }
+
+ while ((buf = v4l2_m2m_src_buf_remove(m2m_ctx))) {
+ dev_dbg(inst->dev->dev, "%s: (Multiplanar) buf type %4u | index %4u\n",
+@@ -1453,6 +1477,11 @@ static int streamoff_output(struct vb2_queue *q)
+ v4l2_m2m_buf_done(buf, VB2_BUF_STATE_ERROR);
+ }
+
++ while (wave5_vpu_dec_get_output_info(inst, &dec_info) == 0) {
++ if (dec_info.index_frame_display >= 0)
++ wave5_vpu_dec_set_disp_flag(inst, dec_info.index_frame_display);
++ }
++
+ ret = wave5_vpu_flush_instance(inst);
+ if (ret)
+ return ret;
+@@ -1535,7 +1564,7 @@ static void wave5_vpu_dec_stop_streaming(struct vb2_queue *q)
+ break;
+
+ if (wave5_vpu_dec_get_output_info(inst, &dec_output_info))
+- dev_dbg(inst->dev->dev, "Getting decoding results from fw, fail\n");
++ dev_dbg(inst->dev->dev, "there is no output info\n");
+ }
+
+ v4l2_m2m_update_stop_streaming_state(m2m_ctx, q);
+diff --git a/drivers/media/platform/chips-media/wave5/wave5-vpu.c b/drivers/media/platform/chips-media/wave5/wave5-vpu.c
+index 7273254ecb0349..b13c5cd46d7e48 100644
+--- a/drivers/media/platform/chips-media/wave5/wave5-vpu.c
++++ b/drivers/media/platform/chips-media/wave5/wave5-vpu.c
+@@ -54,12 +54,12 @@ static void wave5_vpu_handle_irq(void *dev_id)
+ struct vpu_device *dev = dev_id;
+
+ irq_reason = wave5_vdi_read_register(dev, W5_VPU_VINT_REASON);
++ seq_done = wave5_vdi_read_register(dev, W5_RET_SEQ_DONE_INSTANCE_INFO);
++ cmd_done = wave5_vdi_read_register(dev, W5_RET_QUEUE_CMD_DONE_INST);
+ wave5_vdi_write_register(dev, W5_VPU_VINT_REASON_CLR, irq_reason);
+ wave5_vdi_write_register(dev, W5_VPU_VINT_CLEAR, 0x1);
+
+ list_for_each_entry(inst, &dev->instances, list) {
+- seq_done = wave5_vdi_read_register(dev, W5_RET_SEQ_DONE_INSTANCE_INFO);
+- cmd_done = wave5_vdi_read_register(dev, W5_RET_QUEUE_CMD_DONE_INST);
+
+ if (irq_reason & BIT(INT_WAVE5_INIT_SEQ) ||
+ irq_reason & BIT(INT_WAVE5_ENC_SET_PARAM)) {
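
Moving the two W5_RET_* reads above the VINT clear follows the standard
rule for latched interrupt status: snapshot every register that belongs to
the event before acknowledging it (and do so once, not per instance inside
the loop), otherwise a new completion arriving after the ack can clobber
the values the handler is still consuming. Schematically, with placeholder
accessors:

/* Hypothetical accessors standing in for the W5_* register reads. */
unsigned int read_status(void);
unsigned int read_side_info(void);
void ack_irq(unsigned int bits);
void handle_event(unsigned int status, unsigned int info);

void example_irq_handler(void)
{
	unsigned int status = read_status();	/* 1. snapshot first */
	unsigned int info = read_side_info();

	ack_irq(status);			/* 2. then acknowledge */
	handle_event(status, info);		/* 3. act on the snapshot */
}
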
+diff --git a/drivers/media/platform/chips-media/wave5/wave5-vpuapi.c b/drivers/media/platform/chips-media/wave5/wave5-vpuapi.c
+index 1a3efb638dde5a..65fdabcd9d2921 100644
+--- a/drivers/media/platform/chips-media/wave5/wave5-vpuapi.c
++++ b/drivers/media/platform/chips-media/wave5/wave5-vpuapi.c
+@@ -73,6 +73,16 @@ int wave5_vpu_flush_instance(struct vpu_instance *inst)
+ inst->type == VPU_INST_TYPE_DEC ? "DECODER" : "ENCODER", inst->id);
+ mutex_unlock(&inst->dev->hw_lock);
+ return -ETIMEDOUT;
++ } else if (ret == -EBUSY) {
++ struct dec_output_info dec_info;
++
++ mutex_unlock(&inst->dev->hw_lock);
++ wave5_vpu_dec_get_output_info(inst, &dec_info);
++ ret = mutex_lock_interruptible(&inst->dev->hw_lock);
++ if (ret)
++ return ret;
++ if (dec_info.index_frame_display > 0)
++ wave5_vpu_dec_set_disp_flag(inst, dec_info.index_frame_display);
+ }
+ } while (ret != 0);
+ mutex_unlock(&inst->dev->hw_lock);
+diff --git a/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_scp.c b/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_scp.c
+index ff23b225db705a..1b0bc47355c05f 100644
+--- a/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_scp.c
++++ b/drivers/media/platform/mediatek/vcodec/common/mtk_vcodec_fw_scp.c
+@@ -79,8 +79,11 @@ struct mtk_vcodec_fw *mtk_vcodec_fw_scp_init(void *priv, enum mtk_vcodec_fw_use
+ }
+
+ fw = devm_kzalloc(&plat_dev->dev, sizeof(*fw), GFP_KERNEL);
+- if (!fw)
++ if (!fw) {
++ scp_put(scp);
+ return ERR_PTR(-ENOMEM);
++ }
++
+ fw->type = SCP;
+ fw->ops = &mtk_vcodec_rproc_msg;
+ fw->scp = scp;
+diff --git a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp9_req_lat_if.c b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp9_req_lat_if.c
+index eea709d9382091..47c302745c1de9 100644
+--- a/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp9_req_lat_if.c
++++ b/drivers/media/platform/mediatek/vcodec/decoder/vdec/vdec_vp9_req_lat_if.c
+@@ -1188,7 +1188,8 @@ static int vdec_vp9_slice_setup_lat(struct vdec_vp9_slice_instance *instance,
+ return ret;
+ }
+
+-static
++/* clang stack usage explodes if this is inlined */
++static noinline_for_stack
+ void vdec_vp9_slice_map_counts_eob_coef(unsigned int i, unsigned int j, unsigned int k,
+ struct vdec_vp9_slice_frame_counts *counts,
+ struct v4l2_vp9_frame_symbol_counts *counts_helper)
+diff --git a/drivers/media/platform/mediatek/vcodec/encoder/venc/venc_h264_if.c b/drivers/media/platform/mediatek/vcodec/encoder/venc/venc_h264_if.c
+index f8145998fcaf78..8522f71fc901d5 100644
+--- a/drivers/media/platform/mediatek/vcodec/encoder/venc/venc_h264_if.c
++++ b/drivers/media/platform/mediatek/vcodec/encoder/venc/venc_h264_if.c
+@@ -594,7 +594,11 @@ static int h264_enc_init(struct mtk_vcodec_enc_ctx *ctx)
+
+ inst->ctx = ctx;
+ inst->vpu_inst.ctx = ctx;
+- inst->vpu_inst.id = is_ext ? SCP_IPI_VENC_H264 : IPI_VENC_H264;
++ if (is_ext)
++ inst->vpu_inst.id = SCP_IPI_VENC_H264;
++ else
++ inst->vpu_inst.id = IPI_VENC_H264;
++
+ inst->hw_base = mtk_vcodec_get_reg_addr(inst->ctx->dev->reg_base, VENC_SYS);
+
+ ret = vpu_enc_init(&inst->vpu_inst);
+diff --git a/drivers/media/platform/nuvoton/npcm-video.c b/drivers/media/platform/nuvoton/npcm-video.c
+index db454c9d2641f8..e0dee768a3be1d 100644
+--- a/drivers/media/platform/nuvoton/npcm-video.c
++++ b/drivers/media/platform/nuvoton/npcm-video.c
+@@ -1650,8 +1650,8 @@ static int npcm_video_setup_video(struct npcm_video *video)
+
+ static int npcm_video_ece_init(struct npcm_video *video)
+ {
++ struct device_node *ece_node __free(device_node) = NULL;
+ struct device *dev = video->dev;
+- struct device_node *ece_node;
+ struct platform_device *ece_pdev;
+ void __iomem *regs;
+
+@@ -1671,7 +1671,7 @@ static int npcm_video_ece_init(struct npcm_video *video)
+ dev_err(dev, "Failed to find ECE device\n");
+ return -ENODEV;
+ }
+- of_node_put(ece_node);
++ struct device *ece_dev __free(put_device) = &ece_pdev->dev;
+
+ regs = devm_platform_ioremap_resource(ece_pdev, 0);
+ if (IS_ERR(regs)) {
+@@ -1686,7 +1686,7 @@ static int npcm_video_ece_init(struct npcm_video *video)
+ return PTR_ERR(video->ece.regmap);
+ }
+
+- video->ece.reset = devm_reset_control_get(&ece_pdev->dev, NULL);
++ video->ece.reset = devm_reset_control_get(ece_dev, NULL);
+ if (IS_ERR(video->ece.reset)) {
+ dev_err(dev, "Failed to get ECE reset control in DTS\n");
+ return PTR_ERR(video->ece.reset);
+diff --git a/drivers/media/platform/qcom/venus/hfi_parser.c b/drivers/media/platform/qcom/venus/hfi_parser.c
+index 3df241dc3a118b..1b3db2caa99fe4 100644
+--- a/drivers/media/platform/qcom/venus/hfi_parser.c
++++ b/drivers/media/platform/qcom/venus/hfi_parser.c
+@@ -19,6 +19,8 @@ static void init_codecs(struct venus_core *core)
+ struct hfi_plat_caps *caps = core->caps, *cap;
+ unsigned long bit;
+
++ core->codecs_count = 0;
++
+ if (hweight_long(core->dec_codecs) + hweight_long(core->enc_codecs) > MAX_CODEC_NUM)
+ return;
+
+@@ -62,7 +64,7 @@ fill_buf_mode(struct hfi_plat_caps *cap, const void *data, unsigned int num)
+ cap->cap_bufs_mode_dynamic = true;
+ }
+
+-static void
++static int
+ parse_alloc_mode(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ {
+ struct hfi_buffer_alloc_mode_supported *mode = data;
+@@ -70,7 +72,7 @@ parse_alloc_mode(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ u32 *type;
+
+ if (num_entries > MAX_ALLOC_MODE_ENTRIES)
+- return;
++ return -EINVAL;
+
+ type = mode->data;
+
+@@ -82,6 +84,8 @@ parse_alloc_mode(struct venus_core *core, u32 codecs, u32 domain, void *data)
+
+ type++;
+ }
++
++ return sizeof(*mode);
+ }
+
+ static void fill_profile_level(struct hfi_plat_caps *cap, const void *data,
+@@ -96,7 +100,7 @@ static void fill_profile_level(struct hfi_plat_caps *cap, const void *data,
+ cap->num_pl += num;
+ }
+
+-static void
++static int
+ parse_profile_level(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ {
+ struct hfi_profile_level_supported *pl = data;
+@@ -104,12 +108,14 @@ parse_profile_level(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ struct hfi_profile_level pl_arr[HFI_MAX_PROFILE_COUNT] = {};
+
+ if (pl->profile_count > HFI_MAX_PROFILE_COUNT)
+- return;
++ return -EINVAL;
+
+ memcpy(pl_arr, proflevel, pl->profile_count * sizeof(*proflevel));
+
+ for_each_codec(core->caps, ARRAY_SIZE(core->caps), codecs, domain,
+ fill_profile_level, pl_arr, pl->profile_count);
++
++ return pl->profile_count * sizeof(*proflevel) + sizeof(u32);
+ }
+
+ static void
+@@ -124,7 +130,7 @@ fill_caps(struct hfi_plat_caps *cap, const void *data, unsigned int num)
+ cap->num_caps += num;
+ }
+
+-static void
++static int
+ parse_caps(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ {
+ struct hfi_capabilities *caps = data;
+@@ -133,12 +139,14 @@ parse_caps(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ struct hfi_capability caps_arr[MAX_CAP_ENTRIES] = {};
+
+ if (num_caps > MAX_CAP_ENTRIES)
+- return;
++ return -EINVAL;
+
+ memcpy(caps_arr, cap, num_caps * sizeof(*cap));
+
+ for_each_codec(core->caps, ARRAY_SIZE(core->caps), codecs, domain,
+ fill_caps, caps_arr, num_caps);
++
++ return sizeof(*caps);
+ }
+
+ static void fill_raw_fmts(struct hfi_plat_caps *cap, const void *fmts,
+@@ -153,7 +161,7 @@ static void fill_raw_fmts(struct hfi_plat_caps *cap, const void *fmts,
+ cap->num_fmts += num_fmts;
+ }
+
+-static void
++static int
+ parse_raw_formats(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ {
+ struct hfi_uncompressed_format_supported *fmt = data;
+@@ -162,7 +170,8 @@ parse_raw_formats(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ struct raw_formats rawfmts[MAX_FMT_ENTRIES] = {};
+ u32 entries = fmt->format_entries;
+ unsigned int i = 0;
+- u32 num_planes;
++ u32 num_planes = 0;
++ u32 size;
+
+ while (entries) {
+ num_planes = pinfo->num_planes;
+@@ -172,7 +181,7 @@ parse_raw_formats(struct venus_core *core, u32 codecs, u32 domain, void *data)
+ i++;
+
+ if (i >= MAX_FMT_ENTRIES)
+- return;
++ return -EINVAL;
+
+ if (pinfo->num_planes > MAX_PLANES)
+ break;
+@@ -184,9 +193,13 @@ parse_raw_formats(struct venus_core *core, u32 codecs, u32 domain, void *data)
+
+ for_each_codec(core->caps, ARRAY_SIZE(core->caps), codecs, domain,
+ fill_raw_fmts, rawfmts, i);
++ size = fmt->format_entries * (sizeof(*constr) * num_planes + 2 * sizeof(u32))
++ + 2 * sizeof(u32);
++
++ return size;
+ }
+
+-static void parse_codecs(struct venus_core *core, void *data)
++static int parse_codecs(struct venus_core *core, void *data)
+ {
+ struct hfi_codec_supported *codecs = data;
+
+@@ -198,21 +211,27 @@ static void parse_codecs(struct venus_core *core, void *data)
+ core->dec_codecs &= ~HFI_VIDEO_CODEC_SPARK;
+ core->enc_codecs &= ~HFI_VIDEO_CODEC_HEVC;
+ }
++
++ return sizeof(*codecs);
+ }
+
+-static void parse_max_sessions(struct venus_core *core, const void *data)
++static int parse_max_sessions(struct venus_core *core, const void *data)
+ {
+ const struct hfi_max_sessions_supported *sessions = data;
+
+ core->max_sessions_supported = sessions->max_sessions;
++
++ return sizeof(*sessions);
+ }
+
+-static void parse_codecs_mask(u32 *codecs, u32 *domain, void *data)
++static int parse_codecs_mask(u32 *codecs, u32 *domain, void *data)
+ {
+ struct hfi_codec_mask_supported *mask = data;
+
+ *codecs = mask->codecs;
+ *domain = mask->video_domains;
++
++ return sizeof(*mask);
+ }
+
+ static void parser_init(struct venus_inst *inst, u32 *codecs, u32 *domain)
+@@ -281,8 +300,9 @@ static int hfi_platform_parser(struct venus_core *core, struct venus_inst *inst)
+ u32 hfi_parser(struct venus_core *core, struct venus_inst *inst, void *buf,
+ u32 size)
+ {
+- unsigned int words_count = size >> 2;
+- u32 *word = buf, *data, codecs = 0, domain = 0;
++ u32 *words = buf, *payload, codecs = 0, domain = 0;
++ u32 *frame_size = buf + size;
++ u32 rem_bytes = size;
+ int ret;
+
+ ret = hfi_platform_parser(core, inst);
+@@ -299,38 +319,66 @@ u32 hfi_parser(struct venus_core *core, struct venus_inst *inst, void *buf,
+ memset(core->caps, 0, sizeof(core->caps));
+ }
+
+- while (words_count) {
+- data = word + 1;
++ while (words < frame_size) {
++ payload = words + 1;
+
+- switch (*word) {
++ switch (*words) {
+ case HFI_PROPERTY_PARAM_CODEC_SUPPORTED:
+- parse_codecs(core, data);
++ if (rem_bytes <= sizeof(struct hfi_codec_supported))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_codecs(core, payload);
++ if (ret < 0)
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
+ init_codecs(core);
+ break;
+ case HFI_PROPERTY_PARAM_MAX_SESSIONS_SUPPORTED:
+- parse_max_sessions(core, data);
++ if (rem_bytes <= sizeof(struct hfi_max_sessions_supported))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_max_sessions(core, payload);
+ break;
+ case HFI_PROPERTY_PARAM_CODEC_MASK_SUPPORTED:
+- parse_codecs_mask(&codecs, &domain, data);
++ if (rem_bytes <= sizeof(struct hfi_codec_mask_supported))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_codecs_mask(&codecs, &domain, payload);
+ break;
+ case HFI_PROPERTY_PARAM_UNCOMPRESSED_FORMAT_SUPPORTED:
+- parse_raw_formats(core, codecs, domain, data);
++ if (rem_bytes <= sizeof(struct hfi_uncompressed_format_supported))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_raw_formats(core, codecs, domain, payload);
+ break;
+ case HFI_PROPERTY_PARAM_CAPABILITY_SUPPORTED:
+- parse_caps(core, codecs, domain, data);
++ if (rem_bytes <= sizeof(struct hfi_capabilities))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_caps(core, codecs, domain, payload);
+ break;
+ case HFI_PROPERTY_PARAM_PROFILE_LEVEL_SUPPORTED:
+- parse_profile_level(core, codecs, domain, data);
++ if (rem_bytes <= sizeof(struct hfi_profile_level_supported))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_profile_level(core, codecs, domain, payload);
+ break;
+ case HFI_PROPERTY_PARAM_BUFFER_ALLOC_MODE_SUPPORTED:
+- parse_alloc_mode(core, codecs, domain, data);
++ if (rem_bytes <= sizeof(struct hfi_buffer_alloc_mode_supported))
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ ret = parse_alloc_mode(core, codecs, domain, payload);
+ break;
+ default:
++ ret = sizeof(u32);
+ break;
+ }
+
+- word++;
+- words_count--;
++ if (ret < 0)
++ return HFI_ERR_SYS_INSUFFICIENT_RESOURCES;
++
++ words += ret / sizeof(u32);
++ rem_bytes -= ret;
+ }
+
+ if (!core->max_sessions_supported)
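
The hfi_parser rework above turns each property handler into a
length-returning function and makes the main loop advance by the number of
bytes actually consumed, checking the remaining size first, so a short or
corrupt firmware packet fails with HFI_ERR_SYS_INSUFFICIENT_RESOURCES
instead of driving the cursor out of bounds. The skeleton of such a
bounds-checked walk, with an illustrative parse_one():

#include <stdint.h>
#include <stddef.h>

/* Hypothetical per-property parser: validates that the payload fits in
 * 'rem' and returns bytes consumed (at least sizeof(uint32_t)), or a
 * negative error. */
int parse_one(uint32_t prop, const uint32_t *payload, size_t rem);

int parse_buffer(const uint32_t *buf, size_t size_bytes)
{
	const uint32_t *words = buf;
	const uint32_t *end = buf + size_bytes / sizeof(uint32_t);
	size_t rem = size_bytes;

	while (words < end) {
		int used = parse_one(*words, words + 1, rem);

		if (used < 0)
			return used;	/* malformed: stop, don't skid on */
		words += used / sizeof(uint32_t);
		rem -= used;
	}
	return 0;
}
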
+diff --git a/drivers/media/platform/qcom/venus/hfi_venus.c b/drivers/media/platform/qcom/venus/hfi_venus.c
+index f9437b6412b91c..ab93757fff4b31 100644
+--- a/drivers/media/platform/qcom/venus/hfi_venus.c
++++ b/drivers/media/platform/qcom/venus/hfi_venus.c
+@@ -187,6 +187,9 @@ static int venus_write_queue(struct venus_hfi_device *hdev,
+ /* ensure rd/wr indices's are read from memory */
+ rmb();
+
++ if (qsize > IFACEQ_QUEUE_SIZE / 4)
++ return -EINVAL;
++
+ if (wr_idx >= rd_idx)
+ empty_space = qsize - (wr_idx - rd_idx);
+ else
+@@ -255,6 +258,9 @@ static int venus_read_queue(struct venus_hfi_device *hdev,
+ wr_idx = qhdr->write_idx;
+ qsize = qhdr->q_size;
+
++ if (qsize > IFACEQ_QUEUE_SIZE / 4)
++ return -EINVAL;
++
+ /* make sure data is valid before using it */
+ rmb();
+
+@@ -1035,18 +1041,26 @@ static void venus_sfr_print(struct venus_hfi_device *hdev)
+ {
+ struct device *dev = hdev->core->dev;
+ struct hfi_sfr *sfr = hdev->sfr.kva;
++ u32 size;
+ void *p;
+
+ if (!sfr)
+ return;
+
+- p = memchr(sfr->data, '\0', sfr->buf_size);
++ size = sfr->buf_size;
++ if (!size)
++ return;
++
++ if (size > ALIGNED_SFR_SIZE)
++ size = ALIGNED_SFR_SIZE;
++
++ p = memchr(sfr->data, '\0', size);
+ /*
+ * SFR isn't guaranteed to be NULL terminated since SYS_ERROR indicates
+ * that Venus is in the process of crashing.
+ */
+ if (!p)
+- sfr->data[sfr->buf_size - 1] = '\0';
++ sfr->data[size - 1] = '\0';
+
+ dev_err_ratelimited(dev, "SFR message from FW: %s\n", sfr->data);
+ }
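
venus_sfr_print() now treats sfr->buf_size as untrusted input: it lives in
memory the firmware writes while crashing, so it is clamped to the size the
driver actually allocated (ALIGNED_SFR_SIZE) before being used to probe
for, or place, the terminating NUL. The same sanitization as a small
standalone helper:

#include <string.h>
#include <stddef.h>

/* Clamp a firmware-supplied length to the mapped size and make sure
 * the string is NUL-terminated before it is printed. */
static const char *sanitize_fw_string(char *data, size_t fw_len,
				      size_t mapped_len)
{
	size_t len = fw_len;

	if (!len)
		return NULL;		/* nothing was written yet */
	if (len > mapped_len)
		len = mapped_len;	/* never trust the fw length */
	if (!memchr(data, '\0', len))
		data[len - 1] = '\0';
	return data;
}
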
+diff --git a/drivers/media/platform/rockchip/rga/rga-hw.c b/drivers/media/platform/rockchip/rga/rga-hw.c
+index 11c3d72347572b..b2ef3beec5258a 100644
+--- a/drivers/media/platform/rockchip/rga/rga-hw.c
++++ b/drivers/media/platform/rockchip/rga/rga-hw.c
+@@ -376,7 +376,7 @@ static void rga_cmd_set_dst_info(struct rga_ctx *ctx,
+ * Configure the dest framebuffer base address with pixel offset.
+ */
+ offsets = rga_get_addr_offset(&ctx->out, offset, dst_x, dst_y, dst_w, dst_h);
+- dst_offset = rga_lookup_draw_pos(&offsets, mir_mode, rot_mode);
++ dst_offset = rga_lookup_draw_pos(&offsets, rot_mode, mir_mode);
+
+ dest[(RGA_DST_Y_RGB_BASE_ADDR - RGA_MODE_BASE_REG) >> 2] =
+ dst_offset->y_off;
+diff --git a/drivers/media/platform/samsung/s5p-mfc/s5p_mfc_opr_v6.c b/drivers/media/platform/samsung/s5p-mfc/s5p_mfc_opr_v6.c
+index 73f7af674c01bd..0c636090d723de 100644
+--- a/drivers/media/platform/samsung/s5p-mfc/s5p_mfc_opr_v6.c
++++ b/drivers/media/platform/samsung/s5p-mfc/s5p_mfc_opr_v6.c
+@@ -549,8 +549,9 @@ static void s5p_mfc_enc_calc_src_size_v6(struct s5p_mfc_ctx *ctx)
+ case V4L2_PIX_FMT_NV21M:
+ ctx->stride[0] = ALIGN(ctx->img_width, S5P_FIMV_NV12M_HALIGN_V6);
+ ctx->stride[1] = ALIGN(ctx->img_width, S5P_FIMV_NV12M_HALIGN_V6);
+- ctx->luma_size = ctx->stride[0] * ALIGN(ctx->img_height, 16);
+- ctx->chroma_size = ctx->stride[0] * ALIGN(ctx->img_height / 2, 16);
++ ctx->luma_size = ALIGN(ctx->stride[0] * ALIGN(ctx->img_height, 16), 256);
++ ctx->chroma_size = ALIGN(ctx->stride[0] * ALIGN(ctx->img_height / 2, 16),
++ 256);
+ break;
+ case V4L2_PIX_FMT_YUV420M:
+ case V4L2_PIX_FMT_YVU420M:
+diff --git a/drivers/media/platform/st/stm32/dma2d/dma2d.c b/drivers/media/platform/st/stm32/dma2d/dma2d.c
+index 92f1edee58f899..3c64e91260250e 100644
+--- a/drivers/media/platform/st/stm32/dma2d/dma2d.c
++++ b/drivers/media/platform/st/stm32/dma2d/dma2d.c
+@@ -492,7 +492,8 @@ static void device_run(void *prv)
+ dst->sequence = frm_cap->sequence++;
+ v4l2_m2m_buf_copy_metadata(src, dst, true);
+
+- clk_enable(dev->gate);
++ if (clk_enable(dev->gate))
++ goto end;
+
+ dma2d_config_fg(dev, frm_out,
+ vb2_dma_contig_plane_dma_addr(&src->vb2_buf, 0));
+diff --git a/drivers/media/rc/streamzap.c b/drivers/media/rc/streamzap.c
+index 2ce62fe5d60f5a..d3b48a0dd1f474 100644
+--- a/drivers/media/rc/streamzap.c
++++ b/drivers/media/rc/streamzap.c
+@@ -138,39 +138,10 @@ static void sz_push_half_space(struct streamzap_ir *sz,
+ sz_push_full_space(sz, value & SZ_SPACE_MASK);
+ }
+
+-/*
+- * streamzap_callback - usb IRQ handler callback
+- *
+- * This procedure is invoked on reception of data from
+- * the usb remote.
+- */
+-static void streamzap_callback(struct urb *urb)
++static void sz_process_ir_data(struct streamzap_ir *sz, int len)
+ {
+- struct streamzap_ir *sz;
+ unsigned int i;
+- int len;
+-
+- if (!urb)
+- return;
+-
+- sz = urb->context;
+- len = urb->actual_length;
+-
+- switch (urb->status) {
+- case -ECONNRESET:
+- case -ENOENT:
+- case -ESHUTDOWN:
+- /*
+- * this urb is terminated, clean up.
+- * sz might already be invalid at this point
+- */
+- dev_err(sz->dev, "urb terminated, status: %d\n", urb->status);
+- return;
+- default:
+- break;
+- }
+
+- dev_dbg(sz->dev, "%s: received urb, len %d\n", __func__, len);
+ for (i = 0; i < len; i++) {
+ dev_dbg(sz->dev, "sz->buf_in[%d]: %x\n",
+ i, (unsigned char)sz->buf_in[i]);
+@@ -219,6 +190,43 @@ static void streamzap_callback(struct urb *urb)
+ }
+
+ ir_raw_event_handle(sz->rdev);
++}
++
++/*
++ * streamzap_callback - usb IRQ handler callback
++ *
++ * This procedure is invoked on reception of data from
++ * the usb remote.
++ */
++static void streamzap_callback(struct urb *urb)
++{
++ struct streamzap_ir *sz;
++ int len;
++
++ if (!urb)
++ return;
++
++ sz = urb->context;
++ len = urb->actual_length;
++
++ switch (urb->status) {
++ case 0:
++ dev_dbg(sz->dev, "%s: received urb, len %d\n", __func__, len);
++ sz_process_ir_data(sz, len);
++ break;
++ case -ECONNRESET:
++ case -ENOENT:
++ case -ESHUTDOWN:
++ /*
++ * this urb is terminated, clean up.
++ * sz might already be invalid at this point
++ */
++ dev_err(sz->dev, "urb terminated, status: %d\n", urb->status);
++ return;
++ default:
++ break;
++ }
++
+ usb_submit_urb(urb, GFP_ATOMIC);
+ }
+
+diff --git a/drivers/media/test-drivers/vim2m.c b/drivers/media/test-drivers/vim2m.c
+index 3e3b424b486058..8ca6459286ba6e 100644
+--- a/drivers/media/test-drivers/vim2m.c
++++ b/drivers/media/test-drivers/vim2m.c
+@@ -1316,9 +1316,6 @@ static int vim2m_probe(struct platform_device *pdev)
+ vfd->v4l2_dev = &dev->v4l2_dev;
+
+ video_set_drvdata(vfd, dev);
+- v4l2_info(&dev->v4l2_dev,
+- "Device registered as /dev/video%d\n", vfd->num);
+-
+ platform_set_drvdata(pdev, dev);
+
+ dev->m2m_dev = v4l2_m2m_init(&m2m_ops);
+@@ -1345,6 +1342,9 @@ static int vim2m_probe(struct platform_device *pdev)
+ goto error_m2m;
+ }
+
++ v4l2_info(&dev->v4l2_dev,
++ "Device registered as /dev/video%d\n", vfd->num);
++
+ #ifdef CONFIG_MEDIA_CONTROLLER
+ ret = v4l2_m2m_register_media_controller(dev->m2m_dev, vfd,
+ MEDIA_ENT_F_PROC_VIDEO_SCALER);
+diff --git a/drivers/media/test-drivers/visl/visl-core.c b/drivers/media/test-drivers/visl/visl-core.c
+index c46464bcaf2e13..93239391f2cf64 100644
+--- a/drivers/media/test-drivers/visl/visl-core.c
++++ b/drivers/media/test-drivers/visl/visl-core.c
+@@ -161,9 +161,15 @@ static const struct visl_ctrl_desc visl_h264_ctrl_descs[] = {
+ },
+ {
+ .cfg.id = V4L2_CID_STATELESS_H264_DECODE_MODE,
++ .cfg.min = V4L2_STATELESS_H264_DECODE_MODE_SLICE_BASED,
++ .cfg.max = V4L2_STATELESS_H264_DECODE_MODE_FRAME_BASED,
++ .cfg.def = V4L2_STATELESS_H264_DECODE_MODE_SLICE_BASED,
+ },
+ {
+ .cfg.id = V4L2_CID_STATELESS_H264_START_CODE,
++ .cfg.min = V4L2_STATELESS_H264_START_CODE_NONE,
++ .cfg.max = V4L2_STATELESS_H264_START_CODE_ANNEX_B,
++ .cfg.def = V4L2_STATELESS_H264_START_CODE_NONE,
+ },
+ {
+ .cfg.id = V4L2_CID_STATELESS_H264_SLICE_PARAMS,
+@@ -198,9 +204,15 @@ static const struct visl_ctrl_desc visl_hevc_ctrl_descs[] = {
+ },
+ {
+ .cfg.id = V4L2_CID_STATELESS_HEVC_DECODE_MODE,
++ .cfg.min = V4L2_STATELESS_HEVC_DECODE_MODE_SLICE_BASED,
++ .cfg.max = V4L2_STATELESS_HEVC_DECODE_MODE_FRAME_BASED,
++ .cfg.def = V4L2_STATELESS_HEVC_DECODE_MODE_SLICE_BASED,
+ },
+ {
+ .cfg.id = V4L2_CID_STATELESS_HEVC_START_CODE,
++ .cfg.min = V4L2_STATELESS_HEVC_START_CODE_NONE,
++ .cfg.max = V4L2_STATELESS_HEVC_START_CODE_ANNEX_B,
++ .cfg.def = V4L2_STATELESS_HEVC_START_CODE_NONE,
+ },
+ {
+ .cfg.id = V4L2_CID_STATELESS_HEVC_ENTRY_POINT_OFFSETS,
+diff --git a/drivers/media/usb/uvc/uvc_driver.c b/drivers/media/usb/uvc/uvc_driver.c
+index 4d8e00b425f443..a0d683d2664719 100644
+--- a/drivers/media/usb/uvc/uvc_driver.c
++++ b/drivers/media/usb/uvc/uvc_driver.c
+@@ -3039,6 +3039,15 @@ static const struct usb_device_id uvc_ids[] = {
+ .bInterfaceProtocol = 0,
+ .driver_info = UVC_INFO_QUIRK(UVC_QUIRK_PROBE_MINMAX
+ | UVC_QUIRK_IGNORE_SELECTOR_UNIT) },
++ /* Actions Microelectronics Co. Display capture-UVC05 */
++ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
++ | USB_DEVICE_ID_MATCH_INT_INFO,
++ .idVendor = 0x1de1,
++ .idProduct = 0xf105,
++ .bInterfaceClass = USB_CLASS_VIDEO,
++ .bInterfaceSubClass = 1,
++ .bInterfaceProtocol = 0,
++ .driver_info = UVC_INFO_QUIRK(UVC_QUIRK_DISABLE_AUTOSUSPEND) },
+ /* NXP Semiconductors IR VIDEO */
+ { .match_flags = USB_DEVICE_ID_MATCH_DEVICE
+ | USB_DEVICE_ID_MATCH_INT_INFO,
+diff --git a/drivers/media/v4l2-core/v4l2-dv-timings.c b/drivers/media/v4l2-core/v4l2-dv-timings.c
+index 2cf5dcee0ce800..4d05873892c168 100644
+--- a/drivers/media/v4l2-core/v4l2-dv-timings.c
++++ b/drivers/media/v4l2-core/v4l2-dv-timings.c
+@@ -764,7 +764,7 @@ bool v4l2_detect_gtf(unsigned int frame_height,
+ u64 num;
+ u32 den;
+
+- num = ((image_width * GTF_D_C_PRIME * (u64)hfreq) -
++ num = (((u64)image_width * GTF_D_C_PRIME * hfreq) -
+ ((u64)image_width * GTF_D_M_PRIME * 1000));
+ den = (hfreq * (100 - GTF_D_C_PRIME) + GTF_D_M_PRIME * 1000) *
+ (2 * GTF_CELL_GRAN);
+@@ -774,7 +774,7 @@ bool v4l2_detect_gtf(unsigned int frame_height,
+ u64 num;
+ u32 den;
+
+- num = ((image_width * GTF_S_C_PRIME * (u64)hfreq) -
++ num = (((u64)image_width * GTF_S_C_PRIME * hfreq) -
+ ((u64)image_width * GTF_S_M_PRIME * 1000));
+ den = (hfreq * (100 - GTF_S_C_PRIME) + GTF_S_M_PRIME * 1000) *
+ (2 * GTF_CELL_GRAN);
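
Moving the (u64) cast onto the leading factor matters because C multiplies
in the operands' type: in the old code image_width and GTF_*_C_PRIME were
multiplied in 32 bits first, wrapping at 2^32 before the cast on hfreq ever
took effect. Promoting the first operand makes the whole chain 64-bit. A
compilable illustration of the difference:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t a = 100000, b = 100000;

	uint64_t wrong = (uint64_t)(a * b);	/* wraps at 2^32 first */
	uint64_t right = (uint64_t)a * b;	/* full 64-bit product */

	/* prints 1410065408 vs 10000000000 */
	printf("%llu vs %llu\n",
	       (unsigned long long)wrong, (unsigned long long)right);
	return 0;
}
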
+diff --git a/drivers/mfd/ene-kb3930.c b/drivers/mfd/ene-kb3930.c
+index fa0ad2f14a3961..9460a67acb0b5e 100644
+--- a/drivers/mfd/ene-kb3930.c
++++ b/drivers/mfd/ene-kb3930.c
+@@ -162,7 +162,7 @@ static int kb3930_probe(struct i2c_client *client)
+ devm_gpiod_get_array_optional(dev, "off", GPIOD_IN);
+ if (IS_ERR(ddata->off_gpios))
+ return PTR_ERR(ddata->off_gpios);
+- if (ddata->off_gpios->ndescs < 2) {
++ if (ddata->off_gpios && ddata->off_gpios->ndescs < 2) {
+ dev_err(dev, "invalid off-gpios property\n");
+ return -EINVAL;
+ }
+diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
+index 3aaaf47fa4ee20..8dea2b44fd8bfe 100644
+--- a/drivers/misc/pci_endpoint_test.c
++++ b/drivers/misc/pci_endpoint_test.c
+@@ -85,7 +85,6 @@
+ #define PCI_DEVICE_ID_RENESAS_R8A774E1 0x0025
+ #define PCI_DEVICE_ID_RENESAS_R8A779F0 0x0031
+
+-#define PCI_VENDOR_ID_ROCKCHIP 0x1d87
+ #define PCI_DEVICE_ID_ROCKCHIP_RK3588 0x3588
+
+ static DEFINE_IDA(pci_endpoint_test_ida);
+@@ -235,7 +234,7 @@ static bool pci_endpoint_test_request_irq(struct pci_endpoint_test *test)
+ return true;
+
+ fail:
+- switch (irq_type) {
++ switch (test->irq_type) {
+ case IRQ_TYPE_INTX:
+ dev_err(dev, "Failed to request IRQ %d for Legacy\n",
+ pci_irq_vector(pdev, i));
+diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
+index e9f6e4e622901a..55158540c28cfd 100644
+--- a/drivers/mmc/host/dw_mmc.c
++++ b/drivers/mmc/host/dw_mmc.c
+@@ -2579,6 +2579,91 @@ static void dw_mci_pull_data64(struct dw_mci *host, void *buf, int cnt)
+ }
+ }
+
++static void dw_mci_push_data64_32(struct dw_mci *host, void *buf, int cnt)
++{
++ struct mmc_data *data = host->data;
++ int init_cnt = cnt;
++
++ /* try and push anything in the part_buf */
++ if (unlikely(host->part_buf_count)) {
++ int len = dw_mci_push_part_bytes(host, buf, cnt);
++
++ buf += len;
++ cnt -= len;
++
++ if (host->part_buf_count == 8) {
++ mci_fifo_l_writeq(host->fifo_reg, host->part_buf);
++ host->part_buf_count = 0;
++ }
++ }
++#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
++ if (unlikely((unsigned long)buf & 0x7)) {
++ while (cnt >= 8) {
++ u64 aligned_buf[16];
++ int len = min(cnt & -8, (int)sizeof(aligned_buf));
++ int items = len >> 3;
++ int i;
++ /* memcpy from input buffer into aligned buffer */
++ memcpy(aligned_buf, buf, len);
++ buf += len;
++ cnt -= len;
++ /* push data from aligned buffer into fifo */
++ for (i = 0; i < items; ++i)
++ mci_fifo_l_writeq(host->fifo_reg, aligned_buf[i]);
++ }
++ } else
++#endif
++ {
++ u64 *pdata = buf;
++
++ for (; cnt >= 8; cnt -= 8)
++ mci_fifo_l_writeq(host->fifo_reg, *pdata++);
++ buf = pdata;
++ }
++ /* put anything remaining in the part_buf */
++ if (cnt) {
++ dw_mci_set_part_bytes(host, buf, cnt);
++ /* Push data if we have reached the expected data length */
++ if ((data->bytes_xfered + init_cnt) ==
++ (data->blksz * data->blocks))
++ mci_fifo_l_writeq(host->fifo_reg, host->part_buf);
++ }
++}
++
++static void dw_mci_pull_data64_32(struct dw_mci *host, void *buf, int cnt)
++{
++#ifndef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
++ if (unlikely((unsigned long)buf & 0x7)) {
++ while (cnt >= 8) {
++ /* pull data from fifo into aligned buffer */
++ u64 aligned_buf[16];
++ int len = min(cnt & -8, (int)sizeof(aligned_buf));
++ int items = len >> 3;
++ int i;
++
++ for (i = 0; i < items; ++i)
++ aligned_buf[i] = mci_fifo_l_readq(host->fifo_reg);
++
++ /* memcpy from aligned buffer into output buffer */
++ memcpy(buf, aligned_buf, len);
++ buf += len;
++ cnt -= len;
++ }
++ } else
++#endif
++ {
++ u64 *pdata = buf;
++
++ for (; cnt >= 8; cnt -= 8)
++ *pdata++ = mci_fifo_l_readq(host->fifo_reg);
++ buf = pdata;
++ }
++ if (cnt) {
++ host->part_buf = mci_fifo_l_readq(host->fifo_reg);
++ dw_mci_pull_final_bytes(host, buf, cnt);
++ }
++}
++
+ static void dw_mci_pull_data(struct dw_mci *host, void *buf, int cnt)
+ {
+ int len;
+@@ -3379,8 +3464,13 @@ int dw_mci_probe(struct dw_mci *host)
+ width = 16;
+ host->data_shift = 1;
+ } else if (i == 2) {
+- host->push_data = dw_mci_push_data64;
+- host->pull_data = dw_mci_pull_data64;
++ if ((host->quirks & DW_MMC_QUIRK_FIFO64_32)) {
++ host->push_data = dw_mci_push_data64_32;
++ host->pull_data = dw_mci_pull_data64_32;
++ } else {
++ host->push_data = dw_mci_push_data64;
++ host->pull_data = dw_mci_pull_data64;
++ }
+ width = 64;
+ host->data_shift = 3;
+ } else {
+diff --git a/drivers/mmc/host/dw_mmc.h b/drivers/mmc/host/dw_mmc.h
+index 6447b916990dcd..5463392dc81105 100644
+--- a/drivers/mmc/host/dw_mmc.h
++++ b/drivers/mmc/host/dw_mmc.h
+@@ -281,6 +281,8 @@ struct dw_mci_board {
+
+ /* Support for longer data read timeout */
+ #define DW_MMC_QUIRK_EXTENDED_TMOUT BIT(0)
++/* Force 32-bit access to the FIFO */
++#define DW_MMC_QUIRK_FIFO64_32 BIT(1)
+
+ #define DW_MMC_240A 0x240a
+ #define DW_MMC_280A 0x280a
+@@ -472,6 +474,31 @@ struct dw_mci_board {
+ #define mci_fifo_writel(__value, __reg) __raw_writel(__reg, __value)
+ #define mci_fifo_writeq(__value, __reg) __raw_writeq(__reg, __value)
+
++/*
++ * Some dw_mmc devices have 64-bit FIFOs, but expect them to be
++ * accessed using two 32-bit accesses. If such a controller is used
++ * with a 64-bit kernel, this has to be done explicitly.
++ */
++static inline u64 mci_fifo_l_readq(void __iomem *addr)
++{
++ u64 ans;
++ u32 proxy[2];
++
++ proxy[0] = mci_fifo_readl(addr);
++ proxy[1] = mci_fifo_readl(addr + 4);
++ memcpy(&ans, proxy, 8);
++ return ans;
++}
++
++static inline void mci_fifo_l_writeq(void __iomem *addr, u64 value)
++{
++ u32 proxy[2];
++
++ memcpy(proxy, &value, 8);
++ mci_fifo_writel(addr, proxy[0]);
++ mci_fifo_writel(addr + 4, proxy[1]);
++}
++
+ /* Register access macros */
+ #define mci_readl(dev, reg) \
+ readl_relaxed((dev)->regs + SDMMC_##reg)
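
The new mci_fifo_l_readq()/mci_fifo_l_writeq() helpers deliberately go
through a u32 proxy[2] plus memcpy() rather than pointer casts: the quirk's
whole point is to issue two distinct 32-bit bus accesses, and memcpy() is
the strict-aliasing-safe way to reassemble the halves into a u64. The
reassembly step in isolation:

#include <stdint.h>
#include <string.h>

/* Strict-aliasing-safe join of two 32-bit reads into a u64, mirroring
 * mci_fifo_l_readq() above; the result lands in host byte order, as it
 * does in the kernel helper. */
static uint64_t join_halves(uint32_t first_read, uint32_t second_read)
{
	uint32_t proxy[2] = { first_read, second_read };
	uint64_t out;

	memcpy(&out, proxy, sizeof(out));
	return out;
}
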
+diff --git a/drivers/mtd/inftlcore.c b/drivers/mtd/inftlcore.c
+index 9739387cff8c91..58c6e1743f5c65 100644
+--- a/drivers/mtd/inftlcore.c
++++ b/drivers/mtd/inftlcore.c
+@@ -482,10 +482,11 @@ static inline u16 INFTL_findwriteunit(struct INFTLrecord *inftl, unsigned block)
+ silly = MAX_LOOPS;
+
+ while (thisEUN <= inftl->lastEUN) {
+- inftl_read_oob(mtd, (thisEUN * inftl->EraseSize) +
+- blockofs, 8, &retlen, (char *)&bci);
+-
+- status = bci.Status | bci.Status1;
++ if (inftl_read_oob(mtd, (thisEUN * inftl->EraseSize) +
++ blockofs, 8, &retlen, (char *)&bci) < 0)
++ status = SECTOR_IGNORE;
++ else
++ status = bci.Status | bci.Status1;
+ pr_debug("INFTL: status of block %d in EUN %d is %x\n",
+ block , writeEUN, status);
+
+diff --git a/drivers/mtd/mtdpstore.c b/drivers/mtd/mtdpstore.c
+index 7ac8ac90130685..9cf3872e37ae14 100644
+--- a/drivers/mtd/mtdpstore.c
++++ b/drivers/mtd/mtdpstore.c
+@@ -417,11 +417,14 @@ static void mtdpstore_notify_add(struct mtd_info *mtd)
+ }
+
+ longcnt = BITS_TO_LONGS(div_u64(mtd->size, info->kmsg_size));
+- cxt->rmmap = kcalloc(longcnt, sizeof(long), GFP_KERNEL);
+- cxt->usedmap = kcalloc(longcnt, sizeof(long), GFP_KERNEL);
++ cxt->rmmap = devm_kcalloc(&mtd->dev, longcnt, sizeof(long), GFP_KERNEL);
++ cxt->usedmap = devm_kcalloc(&mtd->dev, longcnt, sizeof(long), GFP_KERNEL);
+
+ longcnt = BITS_TO_LONGS(div_u64(mtd->size, mtd->erasesize));
+- cxt->badmap = kcalloc(longcnt, sizeof(long), GFP_KERNEL);
++ cxt->badmap = devm_kcalloc(&mtd->dev, longcnt, sizeof(long), GFP_KERNEL);
++
++ if (!cxt->rmmap || !cxt->usedmap || !cxt->badmap)
++ return;
+
+ /* just support dmesg right now */
+ cxt->dev.flags = PSTORE_FLAGS_DMESG;
+@@ -527,9 +530,6 @@ static void mtdpstore_notify_remove(struct mtd_info *mtd)
+ mtdpstore_flush_removed(cxt);
+
+ unregister_pstore_device(&cxt->dev);
+- kfree(cxt->badmap);
+- kfree(cxt->usedmap);
+- kfree(cxt->rmmap);
+ cxt->mtd = NULL;
+ cxt->index = -1;
+ }
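
The mtdpstore change swaps plain kcalloc() for devm_kcalloc() tied to the
mtd device, which is why the explicit kfree() calls disappear from the
remove path: device-managed memory is released automatically when the
device goes away, and a partial allocation failure needs no manual unwind.
The resulting pattern in brief:

cxt->rmmap   = devm_kcalloc(&mtd->dev, longcnt, sizeof(long), GFP_KERNEL);
cxt->usedmap = devm_kcalloc(&mtd->dev, longcnt, sizeof(long), GFP_KERNEL);
if (!cxt->rmmap || !cxt->usedmap)
	return;	/* nothing to unwind: devm frees on device teardown */
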
+diff --git a/drivers/mtd/nand/raw/brcmnand/brcmnand.c b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+index e76df6a00ed4f5..2eb44c1428fbc2 100644
+--- a/drivers/mtd/nand/raw/brcmnand/brcmnand.c
++++ b/drivers/mtd/nand/raw/brcmnand/brcmnand.c
+@@ -3008,7 +3008,7 @@ static int brcmnand_resume(struct device *dev)
+ brcmnand_save_restore_cs_config(host, 1);
+
+ /* Reset the chip, required by some chips after power-up */
+- nand_reset_op(chip);
++ nand_reset(chip, 0);
+ }
+
+ return 0;
+diff --git a/drivers/mtd/nand/raw/r852.c b/drivers/mtd/nand/raw/r852.c
+index ed0cf732d20e40..36cfe03cd4ac3b 100644
+--- a/drivers/mtd/nand/raw/r852.c
++++ b/drivers/mtd/nand/raw/r852.c
+@@ -387,6 +387,9 @@ static int r852_wait(struct nand_chip *chip)
+ static int r852_ready(struct nand_chip *chip)
+ {
+ struct r852_device *dev = r852_get_dev(nand_to_mtd(chip));
++ if (dev->card_unstable)
++ return 0;
++
+ return !(r852_read_reg(dev, R852_CARD_STA) & R852_CARD_STA_BUSY);
+ }
+
+diff --git a/drivers/net/can/flexcan/flexcan-core.c b/drivers/net/can/flexcan/flexcan-core.c
+index b080740bcb104f..fca290afb5329a 100644
+--- a/drivers/net/can/flexcan/flexcan-core.c
++++ b/drivers/net/can/flexcan/flexcan-core.c
+@@ -386,6 +386,16 @@ static const struct flexcan_devtype_data fsl_lx2160a_r1_devtype_data = {
+ FLEXCAN_QUIRK_SUPPORT_RX_MAILBOX_RTR,
+ };
+
++static const struct flexcan_devtype_data nxp_s32g2_devtype_data = {
++ .quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS |
++ FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_BROKEN_PERR_STATE |
++ FLEXCAN_QUIRK_USE_RX_MAILBOX | FLEXCAN_QUIRK_SUPPORT_FD |
++ FLEXCAN_QUIRK_SUPPORT_ECC | FLEXCAN_QUIRK_NR_IRQ_3 |
++ FLEXCAN_QUIRK_SUPPORT_RX_MAILBOX |
++ FLEXCAN_QUIRK_SUPPORT_RX_MAILBOX_RTR |
++ FLEXCAN_QUIRK_SECONDARY_MB_IRQ,
++};
++
+ static const struct can_bittiming_const flexcan_bittiming_const = {
+ .name = DRV_NAME,
+ .tseg1_min = 4,
+@@ -1762,14 +1772,25 @@ static int flexcan_open(struct net_device *dev)
+ goto out_free_irq_boff;
+ }
+
++ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SECONDARY_MB_IRQ) {
++ err = request_irq(priv->irq_secondary_mb,
++ flexcan_irq, IRQF_SHARED, dev->name, dev);
++ if (err)
++ goto out_free_irq_err;
++ }
++
+ flexcan_chip_interrupts_enable(dev);
+
+ netif_start_queue(dev);
+
+ return 0;
+
++ out_free_irq_err:
++ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_NR_IRQ_3)
++ free_irq(priv->irq_err, dev);
+ out_free_irq_boff:
+- free_irq(priv->irq_boff, dev);
++ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_NR_IRQ_3)
++ free_irq(priv->irq_boff, dev);
+ out_free_irq:
+ free_irq(dev->irq, dev);
+ out_can_rx_offload_disable:
+@@ -1794,6 +1815,9 @@ static int flexcan_close(struct net_device *dev)
+ netif_stop_queue(dev);
+ flexcan_chip_interrupts_disable(dev);
+
++ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SECONDARY_MB_IRQ)
++ free_irq(priv->irq_secondary_mb, dev);
++
+ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_NR_IRQ_3) {
+ free_irq(priv->irq_err, dev);
+ free_irq(priv->irq_boff, dev);
+@@ -2041,6 +2065,7 @@ static const struct of_device_id flexcan_of_match[] = {
+ { .compatible = "fsl,vf610-flexcan", .data = &fsl_vf610_devtype_data, },
+ { .compatible = "fsl,ls1021ar2-flexcan", .data = &fsl_ls1021a_r2_devtype_data, },
+ { .compatible = "fsl,lx2160ar1-flexcan", .data = &fsl_lx2160a_r1_devtype_data, },
++ { .compatible = "nxp,s32g2-flexcan", .data = &nxp_s32g2_devtype_data, },
+ { /* sentinel */ },
+ };
+ MODULE_DEVICE_TABLE(of, flexcan_of_match);
+@@ -2187,6 +2212,14 @@ static int flexcan_probe(struct platform_device *pdev)
+ }
+ }
+
++ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SECONDARY_MB_IRQ) {
++ priv->irq_secondary_mb = platform_get_irq_byname(pdev, "mb-1");
++ if (priv->irq_secondary_mb < 0) {
++ err = priv->irq_secondary_mb;
++ goto failed_platform_get_irq;
++ }
++ }
++
+ if (priv->devtype_data.quirks & FLEXCAN_QUIRK_SUPPORT_FD) {
+ priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD |
+ CAN_CTRLMODE_FD_NON_ISO;
+diff --git a/drivers/net/can/flexcan/flexcan.h b/drivers/net/can/flexcan/flexcan.h
+index 4933d8c7439e62..2cf886618c9621 100644
+--- a/drivers/net/can/flexcan/flexcan.h
++++ b/drivers/net/can/flexcan/flexcan.h
+@@ -70,6 +70,10 @@
+ #define FLEXCAN_QUIRK_SUPPORT_RX_FIFO BIT(16)
+ /* Setup stop mode with ATF SCMI protocol to support wakeup */
+ #define FLEXCAN_QUIRK_SETUP_STOP_MODE_SCMI BIT(17)
++/* Device has two separate interrupt lines for two mailbox ranges, which
++ * both need to have an interrupt handler registered.
++ */
++#define FLEXCAN_QUIRK_SECONDARY_MB_IRQ BIT(18)
+
+ struct flexcan_devtype_data {
+ u32 quirks; /* quirks needed for different IP cores */
+@@ -107,6 +111,7 @@ struct flexcan_priv {
+
+ int irq_boff;
+ int irq_err;
++ int irq_secondary_mb;
+
+ /* IPC handle when setup stop mode by System Controller firmware(scfw) */
+ struct imx_sc_ipc *sc_ipc_handle;
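The FLEXCAN_QUIRK_SECONDARY_MB_IRQ changes above gate both the request_irq()
in flexcan_open() and the matching free_irq() in flexcan_close(), and the
open() error path now unwinds only the IRQs that were actually requested, in
reverse order. A minimal sketch of that quirk-gated request/unwind pattern;
the example_* names and quirk bit are illustrative assumptions, not the
driver's code:

#include <linux/interrupt.h>

/* Hypothetical device state; the quirk bit is assumed. */
struct example_priv {
	int irq_main;
	int irq_secondary;
	u32 quirks;
};
#define EXAMPLE_QUIRK_SECONDARY_IRQ BIT(0)

static irqreturn_t example_irq(int irq, void *dev_id);

static int example_open(struct example_priv *priv)
{
	int err;

	err = request_irq(priv->irq_main, example_irq, IRQF_SHARED,
			  "example", priv);
	if (err)
		return err;

	if (priv->quirks & EXAMPLE_QUIRK_SECONDARY_IRQ) {
		err = request_irq(priv->irq_secondary, example_irq,
				  IRQF_SHARED, "example", priv);
		if (err)
			goto out_free_main;	/* unwind in reverse order */
	}

	return 0;

out_free_main:
	free_irq(priv->irq_main, priv);
	return err;
}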
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index 5935100e7d65f8..e20d9d62032e31 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -3691,6 +3691,21 @@ static int mv88e6xxx_stats_setup(struct mv88e6xxx_chip *chip)
+ return mv88e6xxx_g1_stats_clear(chip);
+ }
+
++static int mv88e6320_setup_errata(struct mv88e6xxx_chip *chip)
++{
++ u16 dummy;
++ int err;
++
++ /* Workaround for erratum
++ * 3.3 RGMII timing may be out of spec when transmit delay is enabled
++ */
++ err = mv88e6xxx_port_hidden_write(chip, 0, 0xf, 0x7, 0xe000);
++ if (err)
++ return err;
++
++ return mv88e6xxx_port_hidden_read(chip, 0, 0xf, 0x7, &dummy);
++}
++
+ /* Check if the errata has already been applied. */
+ static bool mv88e6390_setup_errata_applied(struct mv88e6xxx_chip *chip)
+ {
+@@ -5144,6 +5159,7 @@ static const struct mv88e6xxx_ops mv88e6290_ops = {
+
+ static const struct mv88e6xxx_ops mv88e6320_ops = {
+ /* MV88E6XXX_FAMILY_6320 */
++ .setup_errata = mv88e6320_setup_errata,
+ .ieee_pri_map = mv88e6085_g1_ieee_pri_map,
+ .ip_pri_map = mv88e6085_g1_ip_pri_map,
+ .irl_init_all = mv88e6352_g2_irl_init_all,
+@@ -5193,6 +5209,7 @@ static const struct mv88e6xxx_ops mv88e6320_ops = {
+
+ static const struct mv88e6xxx_ops mv88e6321_ops = {
+ /* MV88E6XXX_FAMILY_6320 */
++ .setup_errata = mv88e6320_setup_errata,
+ .ieee_pri_map = mv88e6085_g1_ieee_pri_map,
+ .ip_pri_map = mv88e6085_g1_ip_pri_map,
+ .irl_init_all = mv88e6352_g2_irl_init_all,
+@@ -6154,7 +6171,8 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .num_databases = 4096,
+ .num_macs = 8192,
+ .num_ports = 7,
+- .num_internal_phys = 5,
++ .num_internal_phys = 2,
++ .internal_phys_offset = 3,
+ .num_gpio = 15,
+ .max_vid = 4095,
+ .max_sid = 63,
+@@ -6348,7 +6366,8 @@ static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ .num_databases = 4096,
+ .num_macs = 8192,
+ .num_ports = 7,
+- .num_internal_phys = 5,
++ .num_internal_phys = 2,
++ .internal_phys_offset = 3,
+ .num_gpio = 15,
+ .max_vid = 4095,
+ .max_sid = 63,
+diff --git a/drivers/net/ethernet/google/gve/gve_ethtool.c b/drivers/net/ethernet/google/gve/gve_ethtool.c
+index bdfc6e77b2af56..1f5db1096d4a40 100644
+--- a/drivers/net/ethernet/google/gve/gve_ethtool.c
++++ b/drivers/net/ethernet/google/gve/gve_ethtool.c
+@@ -392,7 +392,9 @@ gve_get_ethtool_stats(struct net_device *netdev,
+ */
+ data[i++] = 0;
+ data[i++] = 0;
+- data[i++] = tx->dqo_tx.tail - tx->dqo_tx.head;
++ data[i++] =
++ (tx->dqo_tx.tail - tx->dqo_tx.head) &
++ tx->mask;
+ }
+ do {
+ start =
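The gve change masks the difference of two ring indices that wrap at the
ring size: once tail wraps past head, the raw subtraction underflows to a
huge value, while masking with the ring mask recovers the in-flight count.
A self-contained illustration with a hypothetical 256-entry ring:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t mask = 256 - 1;		/* hypothetical ring size */
	uint32_t head = 250, tail = 6;		/* tail has wrapped past head */

	uint32_t raw = tail - head;		 /* 4294967052: bogus */
	uint32_t pending = (tail - head) & mask; /* 12 entries in flight */

	printf("raw=%u pending=%u\n", raw, pending);
	return 0;
}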
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
+index 0f844c14485a0e..35acc07bd96489 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/qos.c
+@@ -165,6 +165,11 @@ static void __otx2_qos_txschq_cfg(struct otx2_nic *pfvf,
+
+ otx2_config_sched_shaping(pfvf, node, cfg, &num_regs);
+ } else if (level == NIX_TXSCH_LVL_TL2) {
++ /* configure parent txschq */
++ cfg->reg[num_regs] = NIX_AF_TL2X_PARENT(node->schq);
++ cfg->regval[num_regs] = (u64)hw->tx_link << 16;
++ num_regs++;
++
+ /* configure link cfg */
+ if (level == pfvf->qos.link_cfg_lvl) {
+ cfg->reg[num_regs] = NIX_AF_TL3_TL2X_LINKX_CFG(node->schq, hw->tx_link);
+diff --git a/drivers/net/ethernet/microsoft/mana/mana_en.c b/drivers/net/ethernet/microsoft/mana/mana_en.c
+index b2d206dec70c8a..12c22261dd3a8b 100644
+--- a/drivers/net/ethernet/microsoft/mana/mana_en.c
++++ b/drivers/net/ethernet/microsoft/mana/mana_en.c
+@@ -636,30 +636,16 @@ int mana_pre_alloc_rxbufs(struct mana_port_context *mpc, int new_mtu, int num_qu
+ mpc->rxbpre_total = 0;
+
+ for (i = 0; i < num_rxb; i++) {
+- if (mpc->rxbpre_alloc_size > PAGE_SIZE) {
+- va = netdev_alloc_frag(mpc->rxbpre_alloc_size);
+- if (!va)
+- goto error;
+-
+- page = virt_to_head_page(va);
+- /* Check if the frag falls back to single page */
+- if (compound_order(page) <
+- get_order(mpc->rxbpre_alloc_size)) {
+- put_page(page);
+- goto error;
+- }
+- } else {
+- page = dev_alloc_page();
+- if (!page)
+- goto error;
++ page = dev_alloc_pages(get_order(mpc->rxbpre_alloc_size));
++ if (!page)
++ goto error;
+
+- va = page_to_virt(page);
+- }
++ va = page_to_virt(page);
+
+ da = dma_map_single(dev, va + mpc->rxbpre_headroom,
+ mpc->rxbpre_datasize, DMA_FROM_DEVICE);
+ if (dma_mapping_error(dev, da)) {
+- put_page(virt_to_head_page(va));
++ put_page(page);
+ goto error;
+ }
+
+@@ -1618,7 +1604,7 @@ static void mana_rx_skb(void *buf_va, bool from_pool,
+ }
+
+ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
+- dma_addr_t *da, bool *from_pool, bool is_napi)
++ dma_addr_t *da, bool *from_pool)
+ {
+ struct page *page;
+ void *va;
+@@ -1629,21 +1615,6 @@ static void *mana_get_rxfrag(struct mana_rxq *rxq, struct device *dev,
+ if (rxq->xdp_save_va) {
+ va = rxq->xdp_save_va;
+ rxq->xdp_save_va = NULL;
+- } else if (rxq->alloc_size > PAGE_SIZE) {
+- if (is_napi)
+- va = napi_alloc_frag(rxq->alloc_size);
+- else
+- va = netdev_alloc_frag(rxq->alloc_size);
+-
+- if (!va)
+- return NULL;
+-
+- page = virt_to_head_page(va);
+- /* Check if the frag falls back to single page */
+- if (compound_order(page) < get_order(rxq->alloc_size)) {
+- put_page(page);
+- return NULL;
+- }
+ } else {
+ page = page_pool_dev_alloc_pages(rxq->page_pool);
+ if (!page)
+@@ -1676,7 +1647,7 @@ static void mana_refill_rx_oob(struct device *dev, struct mana_rxq *rxq,
+ dma_addr_t da;
+ void *va;
+
+- va = mana_get_rxfrag(rxq, dev, &da, &from_pool, true);
++ va = mana_get_rxfrag(rxq, dev, &da, &from_pool);
+ if (!va)
+ return;
+
+@@ -2083,7 +2054,7 @@ static int mana_fill_rx_oob(struct mana_recv_buf_oob *rx_oob, u32 mem_key,
+ if (mpc->rxbufs_pre)
+ va = mana_get_rxbuf_pre(rxq, &da);
+ else
+- va = mana_get_rxfrag(rxq, dev, &da, &from_pool, false);
++ va = mana_get_rxfrag(rxq, dev, &da, &from_pool);
+
+ if (!va)
+ return -ENOMEM;
+@@ -2169,6 +2140,7 @@ static int mana_create_page_pool(struct mana_rxq *rxq, struct gdma_context *gc)
+ pprm.nid = gc->numa_node;
+ pprm.napi = &rxq->rx_cq.napi;
+ pprm.netdev = rxq->ndev;
++ pprm.order = get_order(rxq->alloc_size);
+
+ rxq->page_pool = page_pool_create(&pprm);
+
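Setting pprm.order makes the page pool hand out compound pages large enough
for one full receive buffer, which is what allows the frag-allocator
fallback paths above to be removed. A hedged sketch of such a pool setup;
the pool depth and example_* name are assumptions, not the driver's actual
configuration:

#include <linux/mm.h>
#include <linux/netdevice.h>
#include <net/page_pool/helpers.h>

static struct page_pool *example_create_pool(struct device *dev,
					     unsigned int buf_size,
					     struct napi_struct *napi)
{
	struct page_pool_params pprm = {
		.pool_size = 256,		/* assumed depth */
		.order = get_order(buf_size),	/* one buffer per page */
		.dev = dev,
		.napi = napi,
	};

	return page_pool_create(&pprm);
}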
+diff --git a/drivers/net/ethernet/wangxun/libwx/wx_lib.c b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+index 2b3d6586f44a53..71c891d14fb626 100644
+--- a/drivers/net/ethernet/wangxun/libwx/wx_lib.c
++++ b/drivers/net/ethernet/wangxun/libwx/wx_lib.c
+@@ -309,7 +309,8 @@ static bool wx_alloc_mapped_page(struct wx_ring *rx_ring,
+ return true;
+
+ page = page_pool_dev_alloc_pages(rx_ring->page_pool);
+- WARN_ON(!page);
++ if (unlikely(!page))
++ return false;
+ dma = page_pool_get_dma_addr(page);
+
+ bi->page_dma = dma;
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 119dfa2d6643a9..8af44224480f15 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -289,6 +289,46 @@ static bool phy_drv_wol_enabled(struct phy_device *phydev)
+ return wol.wolopts != 0;
+ }
+
++static void phy_link_change(struct phy_device *phydev, bool up)
++{
++ struct net_device *netdev = phydev->attached_dev;
++
++ if (up)
++ netif_carrier_on(netdev);
++ else
++ netif_carrier_off(netdev);
++ phydev->adjust_link(netdev);
++ if (phydev->mii_ts && phydev->mii_ts->link_state)
++ phydev->mii_ts->link_state(phydev->mii_ts, phydev);
++}
++
++/**
++ * phy_uses_state_machine - test whether consumer driver uses PAL state machine
++ * @phydev: the target PHY device structure
++ *
++ * Ultimately, this aims to indirectly determine whether the PHY is attached
++ * to a consumer which uses the state machine by calling phy_start() and
++ * phy_stop().
++ *
++ * When the PHY driver consumer uses phylib, it must have previously called
++ * phy_connect_direct() or one of its derivatives, so that phy_prepare_link()
++ * has set up a hook for monitoring state changes.
++ *
++ * When the PHY driver is used by the MAC driver consumer through phylink (the
++ * only other provider of a phy_link_change() method), using the PHY state
++ * machine is not optional.
++ *
++ * Return: true if consumer calls phy_start() and phy_stop(), false otherwise.
++ */
++static bool phy_uses_state_machine(struct phy_device *phydev)
++{
++ if (phydev->phy_link_change == phy_link_change)
++ return phydev->attached_dev && phydev->adjust_link;
++
++ /* phydev->phy_link_change is implicitly phylink_phy_change() */
++ return true;
++}
++
+ static bool mdio_bus_phy_may_suspend(struct phy_device *phydev)
+ {
+ struct device_driver *drv = phydev->mdio.dev.driver;
+@@ -355,7 +395,7 @@ static __maybe_unused int mdio_bus_phy_suspend(struct device *dev)
+ * may call phy routines that try to grab the same lock, and that may
+ * lead to a deadlock.
+ */
+- if (phydev->attached_dev && phydev->adjust_link)
++ if (phy_uses_state_machine(phydev))
+ phy_stop_machine(phydev);
+
+ if (!mdio_bus_phy_may_suspend(phydev))
+@@ -409,7 +449,7 @@ static __maybe_unused int mdio_bus_phy_resume(struct device *dev)
+ }
+ }
+
+- if (phydev->attached_dev && phydev->adjust_link)
++ if (phy_uses_state_machine(phydev))
+ phy_start_machine(phydev);
+
+ return 0;
+@@ -1101,19 +1141,6 @@ struct phy_device *phy_find_first(struct mii_bus *bus)
+ }
+ EXPORT_SYMBOL(phy_find_first);
+
+-static void phy_link_change(struct phy_device *phydev, bool up)
+-{
+- struct net_device *netdev = phydev->attached_dev;
+-
+- if (up)
+- netif_carrier_on(netdev);
+- else
+- netif_carrier_off(netdev);
+- phydev->adjust_link(netdev);
+- if (phydev->mii_ts && phydev->mii_ts->link_state)
+- phydev->mii_ts->link_state(phydev->mii_ts, phydev);
+-}
+-
+ /**
+ * phy_prepare_link - prepares the PHY layer to monitor link status
+ * @phydev: target phy_device struct
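phy_uses_state_machine() tells the two consumer types apart by comparing
the installed phy_link_change callback against phylib's own handler, so the
suspend/resume paths above only stop and start the state machine for
consumers that actually run it. The compare-the-callback technique in
isolation, with hypothetical names (plain C, not kernel code):

#include <stdbool.h>
#include <stddef.h>

struct consumer {
	void (*link_change)(struct consumer *c, bool up);
	void *attached_dev;	/* only set by the library-style consumer */
};

static void library_link_change(struct consumer *c, bool up)
{
	(void)c; (void)up;	/* library's default handler */
}

/* Identify the consumer style by which callback it installed. */
static bool uses_state_machine(struct consumer *c)
{
	if (c->link_change == library_link_change)
		return c->attached_dev != NULL;

	return true;	/* any other provider implies the state machine */
}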
+diff --git a/drivers/net/phy/sfp.c b/drivers/net/phy/sfp.c
+index dcec92625cf651..7b33993f7001e4 100644
+--- a/drivers/net/phy/sfp.c
++++ b/drivers/net/phy/sfp.c
+@@ -385,7 +385,7 @@ static void sfp_fixup_rollball(struct sfp *sfp)
+ sfp->phy_t_retry = msecs_to_jiffies(1000);
+ }
+
+-static void sfp_fixup_fs_2_5gt(struct sfp *sfp)
++static void sfp_fixup_rollball_wait4s(struct sfp *sfp)
+ {
+ sfp_fixup_rollball(sfp);
+
+@@ -399,7 +399,7 @@ static void sfp_fixup_fs_2_5gt(struct sfp *sfp)
+ static void sfp_fixup_fs_10gt(struct sfp *sfp)
+ {
+ sfp_fixup_10gbaset_30m(sfp);
+- sfp_fixup_fs_2_5gt(sfp);
++ sfp_fixup_rollball_wait4s(sfp);
+ }
+
+ static void sfp_fixup_halny_gsfp(struct sfp *sfp)
+@@ -479,9 +479,10 @@ static const struct sfp_quirk sfp_quirks[] = {
+ // PHY.
+ SFP_QUIRK_F("FS", "SFP-10G-T", sfp_fixup_fs_10gt),
+
+- // Fiberstore SFP-2.5G-T uses Rollball protocol to talk to the PHY and
+- // needs 4 sec wait before probing the PHY.
+- SFP_QUIRK_F("FS", "SFP-2.5G-T", sfp_fixup_fs_2_5gt),
++ // Fiberstore SFP-2.5G-T and SFP-10GM-T use the Rollball protocol to talk
++ // to the PHY and need a 4 second wait before probing the PHY.
++ SFP_QUIRK_F("FS", "SFP-2.5G-T", sfp_fixup_rollball_wait4s),
++ SFP_QUIRK_F("FS", "SFP-10GM-T", sfp_fixup_rollball_wait4s),
+
+ // Fiberstore GPON-ONU-34-20BI can operate at 2500base-X, but report 1.2GBd
+ // NRZ in their EEPROM
+@@ -515,6 +516,8 @@ static const struct sfp_quirk sfp_quirks[] = {
+
+ SFP_QUIRK_F("OEM", "SFP-10G-T", sfp_fixup_rollball_cc),
+ SFP_QUIRK_M("OEM", "SFP-2.5G-T", sfp_quirk_oem_2_5g),
++ SFP_QUIRK_M("OEM", "SFP-2.5G-BX10-D", sfp_quirk_2500basex),
++ SFP_QUIRK_M("OEM", "SFP-2.5G-BX10-U", sfp_quirk_2500basex),
+ SFP_QUIRK_F("OEM", "RTSFP-10", sfp_fixup_rollball_cc),
+ SFP_QUIRK_F("OEM", "RTSFP-10G", sfp_fixup_rollball_cc),
+ SFP_QUIRK_F("Turris", "RTSFP-2.5G", sfp_fixup_rollball),
+diff --git a/drivers/net/ppp/ppp_synctty.c b/drivers/net/ppp/ppp_synctty.c
+index 644e99fc3623f5..9c4932198931f3 100644
+--- a/drivers/net/ppp/ppp_synctty.c
++++ b/drivers/net/ppp/ppp_synctty.c
+@@ -506,6 +506,11 @@ ppp_sync_txmunge(struct syncppp *ap, struct sk_buff *skb)
+ unsigned char *data;
+ int islcp;
+
++ /* Ensure we can safely access protocol field and LCP code */
++ if (!pskb_may_pull(skb, 3)) {
++ kfree_skb(skb);
++ return NULL;
++ }
+ data = skb->data;
+ proto = get_unaligned_be16(data);
+
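pskb_may_pull() guarantees that the first n bytes of the packet are present
in the skb's linear area before they are dereferenced; here the frame is
dropped if the two protocol bytes and the LCP code byte cannot be pulled. A
hedged sketch of the guard-before-parse pattern with an assumed example_*
name:

#include <linux/skbuff.h>
#include <linux/unaligned.h>

static int example_read_proto(struct sk_buff *skb, u16 *proto)
{
	/* 2 protocol bytes plus 1 code byte, as in the fix above */
	if (!pskb_may_pull(skb, 3))
		return -EINVAL;

	/* pskb_may_pull() may relocate data, so read skb->data afterwards */
	*proto = get_unaligned_be16(skb->data);
	return 0;
}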
+diff --git a/drivers/net/usb/asix_devices.c b/drivers/net/usb/asix_devices.c
+index 57d6e5abc30e88..da24941a6e4446 100644
+--- a/drivers/net/usb/asix_devices.c
++++ b/drivers/net/usb/asix_devices.c
+@@ -1421,6 +1421,19 @@ static const struct driver_info hg20f9_info = {
+ .data = FLAG_EEPROM_MAC,
+ };
+
++static const struct driver_info lyconsys_fibergecko100_info = {
++ .description = "LyconSys FiberGecko 100 USB 2.0 to SFP Adapter",
++ .bind = ax88178_bind,
++ .status = asix_status,
++ .link_reset = ax88178_link_reset,
++ .reset = ax88178_link_reset,
++ .flags = FLAG_ETHER | FLAG_FRAMING_AX | FLAG_LINK_INTR |
++ FLAG_MULTI_PACKET,
++ .rx_fixup = asix_rx_fixup_common,
++ .tx_fixup = asix_tx_fixup,
++ .data = 0x20061201,
++};
++
+ static const struct usb_device_id products [] = {
+ {
+ // Linksys USB200M
+@@ -1578,6 +1591,10 @@ static const struct usb_device_id products [] = {
+ // Linux Automation GmbH USB 10Base-T1L
+ USB_DEVICE(0x33f7, 0x0004),
+ .driver_info = (unsigned long) &lxausb_t1l_info,
++}, {
++ /* LyconSys FiberGecko 100 */
++ USB_DEVICE(0x1d2a, 0x0801),
++ .driver_info = (unsigned long) &lyconsys_fibergecko100_info,
+ },
+ { }, // END
+ };
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index a6469235d904e7..a032c1ded40634 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -783,6 +783,13 @@ static const struct usb_device_id products[] = {
+ .driver_info = 0,
+ },
+
++/* Lenovo ThinkPad Hybrid USB-C with USB-A Dock (40af0135eu, based on Realtek RTL8153) */
++{
++ USB_DEVICE_AND_INTERFACE_INFO(LENOVO_VENDOR_ID, 0xa359, USB_CLASS_COMM,
++ USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
++ .driver_info = 0,
++},
++
+ /* Aquantia AQtion USB to 5GbE Controller (based on AQC111U) */
+ {
+ USB_DEVICE_AND_INTERFACE_INFO(AQUANTIA_VENDOR_ID, 0xc101,
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 468c739740463d..96fa3857d8e257 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -785,6 +785,7 @@ enum rtl8152_flags {
+ #define DEVICE_ID_THINKPAD_USB_C_DONGLE 0x720c
+ #define DEVICE_ID_THINKPAD_USB_C_DOCK_GEN2 0xa387
+ #define DEVICE_ID_THINKPAD_USB_C_DOCK_GEN3 0x3062
++#define DEVICE_ID_THINKPAD_HYBRID_USB_C_DOCK 0xa359
+
+ struct tally_counter {
+ __le64 tx_packets;
+@@ -9787,6 +9788,7 @@ static bool rtl8152_supports_lenovo_macpassthru(struct usb_device *udev)
+ case DEVICE_ID_THINKPAD_USB_C_DOCK_GEN2:
+ case DEVICE_ID_THINKPAD_USB_C_DOCK_GEN3:
+ case DEVICE_ID_THINKPAD_USB_C_DONGLE:
++ case DEVICE_ID_THINKPAD_HYBRID_USB_C_DOCK:
+ return 1;
+ }
+ } else if (vendor_id == VENDOR_ID_REALTEK && parent_vendor_id == VENDOR_ID_LENOVO) {
+@@ -10064,6 +10066,8 @@ static const struct usb_device_id rtl8152_table[] = {
+ { USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0927) },
+ { USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0c5e) },
+ { USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101) },
++
++ /* Lenovo */
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x304f) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x3054) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x3062) },
+@@ -10074,7 +10078,9 @@ static const struct usb_device_id rtl8152_table[] = {
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x720c) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x7214) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0x721e) },
++ { USB_DEVICE(VENDOR_ID_LENOVO, 0xa359) },
+ { USB_DEVICE(VENDOR_ID_LENOVO, 0xa387) },
++
+ { USB_DEVICE(VENDOR_ID_LINKSYS, 0x0041) },
+ { USB_DEVICE(VENDOR_ID_NVIDIA, 0x09ff) },
+ { USB_DEVICE(VENDOR_ID_TPLINK, 0x0601) },
+diff --git a/drivers/net/usb/r8153_ecm.c b/drivers/net/usb/r8153_ecm.c
+index 20b2df8d74ae1b..8d860dacdf49b2 100644
+--- a/drivers/net/usb/r8153_ecm.c
++++ b/drivers/net/usb/r8153_ecm.c
+@@ -135,6 +135,12 @@ static const struct usb_device_id products[] = {
+ USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+ .driver_info = (unsigned long)&r8153_info,
+ },
++/* Lenovo ThinkPad Hybrid USB-C with USB-A Dock (40af0135eu, based on Realtek RTL8153) */
++{
++ USB_DEVICE_AND_INTERFACE_INFO(VENDOR_ID_LENOVO, 0xa359, USB_CLASS_COMM,
++ USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
++ .driver_info = (unsigned long)&r8153_info,
++},
+
+ { }, /* END */
+ };
+diff --git a/drivers/net/wireless/ath/ath11k/ahb.c b/drivers/net/wireless/ath/ath11k/ahb.c
+index 97b12f51ef28c0..9389dc5f4a3dac 100644
+--- a/drivers/net/wireless/ath/ath11k/ahb.c
++++ b/drivers/net/wireless/ath/ath11k/ahb.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+ * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2022-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include <linux/module.h>
+@@ -1290,6 +1290,7 @@ static void ath11k_ahb_remove(struct platform_device *pdev)
+ ath11k_core_deinit(ab);
+
+ qmi_fail:
++ ath11k_fw_destroy(ab);
+ ath11k_ahb_free_resources(ab);
+ }
+
+@@ -1309,6 +1310,7 @@ static void ath11k_ahb_shutdown(struct platform_device *pdev)
+ ath11k_core_deinit(ab);
+
+ free_resources:
++ ath11k_fw_destroy(ab);
+ ath11k_ahb_free_resources(ab);
+ }
+
+diff --git a/drivers/net/wireless/ath/ath11k/core.c b/drivers/net/wireless/ath/ath11k/core.c
+index ccf4ad35fdc335..7eba6ee054ffef 100644
+--- a/drivers/net/wireless/ath/ath11k/core.c
++++ b/drivers/net/wireless/ath/ath11k/core.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+ * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include <linux/module.h>
+@@ -2214,7 +2214,6 @@ void ath11k_core_deinit(struct ath11k_base *ab)
+ ath11k_hif_power_down(ab);
+ ath11k_mac_destroy(ab);
+ ath11k_core_soc_destroy(ab);
+- ath11k_fw_destroy(ab);
+ }
+ EXPORT_SYMBOL(ath11k_core_deinit);
+
+diff --git a/drivers/net/wireless/ath/ath11k/dp.c b/drivers/net/wireless/ath/ath11k/dp.c
+index fbf666d0ecf1dc..f124b7329e1ac2 100644
+--- a/drivers/net/wireless/ath/ath11k/dp.c
++++ b/drivers/net/wireless/ath/ath11k/dp.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+ * Copyright (c) 2018-2019 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include <crypto/hash.h>
+@@ -104,14 +104,12 @@ void ath11k_dp_srng_cleanup(struct ath11k_base *ab, struct dp_srng *ring)
+ if (!ring->vaddr_unaligned)
+ return;
+
+- if (ring->cached) {
+- dma_unmap_single(ab->dev, ring->paddr_unaligned, ring->size,
+- DMA_FROM_DEVICE);
+- kfree(ring->vaddr_unaligned);
+- } else {
++ if (ring->cached)
++ dma_free_noncoherent(ab->dev, ring->size, ring->vaddr_unaligned,
++ ring->paddr_unaligned, DMA_FROM_DEVICE);
++ else
+ dma_free_coherent(ab->dev, ring->size, ring->vaddr_unaligned,
+ ring->paddr_unaligned);
+- }
+
+ ring->vaddr_unaligned = NULL;
+ }
+@@ -249,25 +247,14 @@ int ath11k_dp_srng_setup(struct ath11k_base *ab, struct dp_srng *ring,
+ default:
+ cached = false;
+ }
+-
+- if (cached) {
+- ring->vaddr_unaligned = kzalloc(ring->size, GFP_KERNEL);
+- if (!ring->vaddr_unaligned)
+- return -ENOMEM;
+-
+- ring->paddr_unaligned = dma_map_single(ab->dev,
+- ring->vaddr_unaligned,
+- ring->size,
+- DMA_FROM_DEVICE);
+- if (dma_mapping_error(ab->dev, ring->paddr_unaligned)) {
+- kfree(ring->vaddr_unaligned);
+- ring->vaddr_unaligned = NULL;
+- return -ENOMEM;
+- }
+- }
+ }
+
+- if (!cached)
++ if (cached)
++ ring->vaddr_unaligned = dma_alloc_noncoherent(ab->dev, ring->size,
++ &ring->paddr_unaligned,
++ DMA_FROM_DEVICE,
++ GFP_KERNEL);
++ else
+ ring->vaddr_unaligned = dma_alloc_coherent(ab->dev, ring->size,
+ &ring->paddr_unaligned,
+ GFP_KERNEL);
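The cached-ring path now uses the noncoherent DMA API instead of an
open-coded kzalloc() plus dma_map_single(), and allocation and teardown
must stay paired: memory from dma_alloc_noncoherent() is released with
dma_free_noncoherent() using the same size and direction. A minimal sketch
of that pairing (example_* names assumed):

#include <linux/dma-mapping.h>

static void *example_ring_alloc(struct device *dev, size_t size,
				dma_addr_t *paddr)
{
	return dma_alloc_noncoherent(dev, size, paddr, DMA_FROM_DEVICE,
				     GFP_KERNEL);
}

static void example_ring_free(struct device *dev, size_t size, void *vaddr,
			      dma_addr_t paddr)
{
	dma_free_noncoherent(dev, size, vaddr, paddr, DMA_FROM_DEVICE);
}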
+diff --git a/drivers/net/wireless/ath/ath11k/fw.c b/drivers/net/wireless/ath/ath11k/fw.c
+index 4e36292a79db89..cbbd8e57119f28 100644
+--- a/drivers/net/wireless/ath/ath11k/fw.c
++++ b/drivers/net/wireless/ath/ath11k/fw.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+- * Copyright (c) 2022-2023, Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2022-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include "core.h"
+@@ -166,3 +166,4 @@ void ath11k_fw_destroy(struct ath11k_base *ab)
+ {
+ release_firmware(ab->fw.fw);
+ }
++EXPORT_SYMBOL(ath11k_fw_destroy);
+diff --git a/drivers/net/wireless/ath/ath11k/pci.c b/drivers/net/wireless/ath/ath11k/pci.c
+index be9d2c69cc4137..6ebfa5d02e2e54 100644
+--- a/drivers/net/wireless/ath/ath11k/pci.c
++++ b/drivers/net/wireless/ath/ath11k/pci.c
+@@ -1,7 +1,7 @@
+ // SPDX-License-Identifier: BSD-3-Clause-Clear
+ /*
+ * Copyright (c) 2019-2020 The Linux Foundation. All rights reserved.
+- * Copyright (c) 2021-2024 Qualcomm Innovation Center, Inc. All rights reserved.
++ * Copyright (c) 2021-2025 Qualcomm Innovation Center, Inc. All rights reserved.
+ */
+
+ #include <linux/module.h>
+@@ -981,6 +981,7 @@ static void ath11k_pci_remove(struct pci_dev *pdev)
+ ath11k_core_deinit(ab);
+
+ qmi_fail:
++ ath11k_fw_destroy(ab);
+ ath11k_mhi_unregister(ab_pci);
+
+ ath11k_pcic_free_irq(ab);
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
+index 5c6749bc4039d2..1706ec27eb9c0f 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
+@@ -2533,7 +2533,7 @@ int ath12k_dp_mon_rx_process_stats(struct ath12k *ar, int mac_id,
+ dest_idx = 0;
+ move_next:
+ ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
+- ath12k_hal_srng_src_get_next_entry(ab, srng);
++ ath12k_hal_srng_dst_get_next_entry(ab, srng);
+ num_buffs_reaped++;
+ }
+
+diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c
+index 91e3393f7b5f40..4cbba96121a114 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_rx.c
++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c
+@@ -2470,6 +2470,29 @@ static void ath12k_dp_rx_deliver_msdu(struct ath12k *ar, struct napi_struct *nap
+ ieee80211_rx_napi(ath12k_ar_to_hw(ar), pubsta, msdu, napi);
+ }
+
++static bool ath12k_dp_rx_check_nwifi_hdr_len_valid(struct ath12k_base *ab,
++ struct hal_rx_desc *rx_desc,
++ struct sk_buff *msdu)
++{
++ struct ieee80211_hdr *hdr;
++ u8 decap_type;
++ u32 hdr_len;
++
++ decap_type = ath12k_dp_rx_h_decap_type(ab, rx_desc);
++ if (decap_type != DP_RX_DECAP_TYPE_NATIVE_WIFI)
++ return true;
++
++ hdr = (struct ieee80211_hdr *)msdu->data;
++ hdr_len = ieee80211_hdrlen(hdr->frame_control);
++
++ if (likely(hdr_len <= DP_MAX_NWIFI_HDR_LEN))
++ return true;
++
++ ab->soc_stats.invalid_rbm++;
++ WARN_ON_ONCE(1);
++ return false;
++}
++
+ static int ath12k_dp_rx_process_msdu(struct ath12k *ar,
+ struct sk_buff *msdu,
+ struct sk_buff_head *msdu_list,
+@@ -2528,6 +2551,11 @@ static int ath12k_dp_rx_process_msdu(struct ath12k *ar,
+ }
+ }
+
++ if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, rx_desc, msdu))) {
++ ret = -EINVAL;
++ goto free_out;
++ }
++
+ ath12k_dp_rx_h_ppdu(ar, rx_desc, rx_status);
+ ath12k_dp_rx_h_mpdu(ar, msdu, rx_desc, rx_status);
+
+@@ -2880,6 +2908,9 @@ static int ath12k_dp_rx_h_verify_tkip_mic(struct ath12k *ar, struct ath12k_peer
+ RX_FLAG_IV_STRIPPED | RX_FLAG_DECRYPTED;
+ skb_pull(msdu, hal_rx_desc_sz);
+
++ if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, rx_desc, msdu)))
++ return -EINVAL;
++
+ ath12k_dp_rx_h_ppdu(ar, rx_desc, rxs);
+ ath12k_dp_rx_h_undecap(ar, msdu, rx_desc,
+ HAL_ENCRYPT_TYPE_TKIP_MIC, rxs, true);
+@@ -3600,6 +3631,9 @@ static int ath12k_dp_rx_h_null_q_desc(struct ath12k *ar, struct sk_buff *msdu,
+ skb_put(msdu, hal_rx_desc_sz + l3pad_bytes + msdu_len);
+ skb_pull(msdu, hal_rx_desc_sz + l3pad_bytes);
+ }
++ if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, desc, msdu)))
++ return -EINVAL;
++
+ ath12k_dp_rx_h_ppdu(ar, desc, status);
+
+ ath12k_dp_rx_h_mpdu(ar, msdu, desc, status);
+@@ -3644,7 +3678,7 @@ static bool ath12k_dp_rx_h_reo_err(struct ath12k *ar, struct sk_buff *msdu,
+ return drop;
+ }
+
+-static void ath12k_dp_rx_h_tkip_mic_err(struct ath12k *ar, struct sk_buff *msdu,
++static bool ath12k_dp_rx_h_tkip_mic_err(struct ath12k *ar, struct sk_buff *msdu,
+ struct ieee80211_rx_status *status)
+ {
+ struct ath12k_base *ab = ar->ab;
+@@ -3662,6 +3696,9 @@ static void ath12k_dp_rx_h_tkip_mic_err(struct ath12k *ar, struct sk_buff *msdu,
+ skb_put(msdu, hal_rx_desc_sz + l3pad_bytes + msdu_len);
+ skb_pull(msdu, hal_rx_desc_sz + l3pad_bytes);
+
++ if (unlikely(!ath12k_dp_rx_check_nwifi_hdr_len_valid(ab, desc, msdu)))
++ return true;
++
+ ath12k_dp_rx_h_ppdu(ar, desc, status);
+
+ status->flag |= (RX_FLAG_MMIC_STRIPPED | RX_FLAG_MMIC_ERROR |
+@@ -3669,6 +3706,7 @@ static void ath12k_dp_rx_h_tkip_mic_err(struct ath12k *ar, struct sk_buff *msdu,
+
+ ath12k_dp_rx_h_undecap(ar, msdu, desc,
+ HAL_ENCRYPT_TYPE_TKIP_MIC, status, false);
++ return false;
+ }
+
+ static bool ath12k_dp_rx_h_rxdma_err(struct ath12k *ar, struct sk_buff *msdu,
+@@ -3687,7 +3725,7 @@ static bool ath12k_dp_rx_h_rxdma_err(struct ath12k *ar, struct sk_buff *msdu,
+ case HAL_REO_ENTR_RING_RXDMA_ECODE_TKIP_MIC_ERR:
+ err_bitmap = ath12k_dp_rx_h_mpdu_err(ab, rx_desc);
+ if (err_bitmap & HAL_RX_MPDU_ERR_TKIP_MIC) {
+- ath12k_dp_rx_h_tkip_mic_err(ar, msdu, status);
++ drop = ath12k_dp_rx_h_tkip_mic_err(ar, msdu, status);
+ break;
+ }
+ fallthrough;
+diff --git a/drivers/net/wireless/ath/ath12k/pci.c b/drivers/net/wireless/ath/ath12k/pci.c
+index bd269aa1740bcd..2ff866e1d7d5bb 100644
+--- a/drivers/net/wireless/ath/ath12k/pci.c
++++ b/drivers/net/wireless/ath/ath12k/pci.c
+@@ -1541,6 +1541,7 @@ static void ath12k_pci_remove(struct pci_dev *pdev)
+ ath12k_core_deinit(ab);
+
+ qmi_fail:
++ ath12k_fw_unmap(ab);
+ ath12k_mhi_unregister(ab_pci);
+
+ ath12k_pci_free_irq(ab);
+diff --git a/drivers/net/wireless/mediatek/mt76/eeprom.c b/drivers/net/wireless/mediatek/mt76/eeprom.c
+index 0bc66cc19acd1e..443517d06c9fa9 100644
+--- a/drivers/net/wireless/mediatek/mt76/eeprom.c
++++ b/drivers/net/wireless/mediatek/mt76/eeprom.c
+@@ -95,6 +95,10 @@ int mt76_get_of_data_from_mtd(struct mt76_dev *dev, void *eep, int offset, int l
+
+ #ifdef CONFIG_NL80211_TESTMODE
+ dev->test_mtd.name = devm_kstrdup(dev->dev, part, GFP_KERNEL);
++ if (!dev->test_mtd.name) {
++ ret = -ENOMEM;
++ goto out_put_node;
++ }
+ dev->test_mtd.offset = offset;
+ #endif
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76.h b/drivers/net/wireless/mediatek/mt76/mt76.h
+index 0b75a45ad2e821..e2e9b5ece74e21 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76.h
+@@ -755,6 +755,7 @@ struct mt76_testmode_data {
+
+ struct mt76_vif {
+ u8 idx;
++ u8 link_idx;
+ u8 omac_idx;
+ u8 band_idx;
+ u8 wmm_idx;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+index 7d07e720e4ec1d..452579ccc49228 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76_connac_mcu.c
+@@ -1164,7 +1164,7 @@ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
+ .tag = cpu_to_le16(DEV_INFO_ACTIVE),
+ .len = cpu_to_le16(sizeof(struct req_tlv)),
+ .active = enable,
+- .link_idx = mvif->idx,
++ .link_idx = mvif->link_idx,
+ },
+ };
+ struct {
+@@ -1187,7 +1187,7 @@ int mt76_connac_mcu_uni_add_dev(struct mt76_phy *phy,
+ .bmc_tx_wlan_idx = cpu_to_le16(wcid->idx),
+ .sta_idx = cpu_to_le16(wcid->idx),
+ .conn_state = 1,
+- .link_idx = mvif->idx,
++ .link_idx = mvif->link_idx,
+ },
+ };
+ int err, idx, cmd, len;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
+index e832ad53e2393b..a4f4d12f904e7c 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
+@@ -22,6 +22,7 @@ static const struct usb_device_id mt76x2u_device_table[] = {
+ { USB_DEVICE(0x0846, 0x9053) }, /* Netgear A6210 */
+ { USB_DEVICE(0x045e, 0x02e6) }, /* XBox One Wireless Adapter */
+ { USB_DEVICE(0x045e, 0x02fe) }, /* XBox One Wireless Adapter */
++ { USB_DEVICE(0x2357, 0x0137) }, /* TP-Link TL-WDN6200 */
+ { },
+ };
+
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/main.c b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+index ddc67423efe2cb..d2a98c92e1147d 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/main.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/main.c
+@@ -256,7 +256,7 @@ int mt7925_init_mlo_caps(struct mt792x_phy *phy)
+
+ ext_capab[0].eml_capabilities = phy->eml_cap;
+ ext_capab[0].mld_capa_and_ops =
+- u16_encode_bits(1, IEEE80211_MLD_CAP_OP_MAX_SIMUL_LINKS);
++ u16_encode_bits(0, IEEE80211_MLD_CAP_OP_MAX_SIMUL_LINKS);
+
+ wiphy->flags |= WIPHY_FLAG_SUPPORTS_MLO;
+ wiphy->iftype_ext_capab = ext_capab;
+@@ -356,10 +356,15 @@ static int mt7925_mac_link_bss_add(struct mt792x_dev *dev,
+ struct mt76_txq *mtxq;
+ int idx, ret = 0;
+
+- mconf->mt76.idx = __ffs64(~dev->mt76.vif_mask);
+- if (mconf->mt76.idx >= MT792x_MAX_INTERFACES) {
+- ret = -ENOSPC;
+- goto out;
++ if (vif->type == NL80211_IFTYPE_P2P_DEVICE) {
++ mconf->mt76.idx = MT792x_MAX_INTERFACES;
++ } else {
++ mconf->mt76.idx = __ffs64(~dev->mt76.vif_mask);
++
++ if (mconf->mt76.idx >= MT792x_MAX_INTERFACES) {
++ ret = -ENOSPC;
++ goto out;
++ }
+ }
+
+ mconf->mt76.omac_idx = ieee80211_vif_is_mld(vif) ?
+@@ -367,6 +372,7 @@ static int mt7925_mac_link_bss_add(struct mt792x_dev *dev,
+ mconf->mt76.band_idx = 0xff;
+ mconf->mt76.wmm_idx = ieee80211_vif_is_mld(vif) ?
+ 0 : mconf->mt76.idx % MT76_CONNAC_MAX_WMM_SETS;
++ mconf->mt76.link_idx = hweight16(mvif->valid_links);
+
+ if (mvif->phy->mt76->chandef.chan->band != NL80211_BAND_2GHZ)
+ mconf->mt76.basic_rates_idx = MT792x_BASIC_RATES_TBL + 4;
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+index c7eba60897d276..8476f9caa98dbf 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.c
+@@ -3119,13 +3119,14 @@ __mt7925_mcu_set_clc(struct mt792x_dev *dev, u8 *alpha2,
+ .env = env_cap,
+ };
+ int ret, valid_cnt = 0;
+- u8 i, *pos;
++ u8 *pos, *last_pos;
+
+ if (!clc)
+ return 0;
+
+ pos = clc->data + sizeof(*seg) * clc->nr_seg;
+- for (i = 0; i < clc->nr_country; i++) {
++ last_pos = clc->data + le32_to_cpu(*(__le32 *)(clc->data + 4));
++ while (pos < last_pos) {
+ struct mt7925_clc_rule *rule = (struct mt7925_clc_rule *)pos;
+
+ pos += sizeof(*rule);
+diff --git a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
+index fe6a613ba00889..887427e0760aed 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
++++ b/drivers/net/wireless/mediatek/mt76/mt7925/mcu.h
+@@ -566,8 +566,8 @@ struct mt7925_wow_pattern_tlv {
+ u8 offset;
+ u8 mask[MT76_CONNAC_WOW_MASK_MAX_LEN];
+ u8 pattern[MT76_CONNAC_WOW_PATTEN_MAX_LEN];
+- u8 rsv[7];
+-} __packed;
++ u8 rsv[4];
++};
+
+ struct roc_acquire_tlv {
+ __le16 tag;
+diff --git a/drivers/ntb/ntb_transport.c b/drivers/ntb/ntb_transport.c
+index a22ea4a4b202bd..4f775c3e218f45 100644
+--- a/drivers/ntb/ntb_transport.c
++++ b/drivers/ntb/ntb_transport.c
+@@ -1353,7 +1353,7 @@ static int ntb_transport_probe(struct ntb_client *self, struct ntb_dev *ndev)
+ qp_count = ilog2(qp_bitmap);
+ if (nt->use_msi) {
+ qp_count -= 1;
+- nt->msi_db_mask = 1 << qp_count;
++ nt->msi_db_mask = BIT_ULL(qp_count);
+ ntb_db_clear_mask(ndev, nt->msi_db_mask);
+ }
+
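msi_db_mask is a 64-bit doorbell mask, but the literal 1 is a 32-bit int,
so 1 << qp_count is undefined once qp_count reaches 31; BIT_ULL() shifts an
unsigned long long instead. A self-contained illustration with a
hypothetical qp_count:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	unsigned int qp_count = 40;	/* hypothetical: beyond bit 31 */

	/* (1 << qp_count) would shift a 32-bit int: undefined behaviour */
	uint64_t mask = 1ULL << qp_count; /* what BIT_ULL(qp_count) expands to */

	printf("msi_db_mask = %#llx\n", (unsigned long long)mask);
	return 0;
}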
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index e1abb27927ff74..da195d61a9664c 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -478,7 +478,7 @@ fcloop_t2h_xmt_ls_rsp(struct nvme_fc_local_port *localport,
+ if (targetport) {
+ tport = targetport->private;
+ spin_lock(&tport->lock);
+- list_add_tail(&tport->ls_list, &tls_req->ls_list);
++ list_add_tail(&tls_req->ls_list, &tport->ls_list);
+ spin_unlock(&tport->lock);
+ queue_work(nvmet_wq, &tport->ls_work);
+ }
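list_add_tail() takes the new entry first and the list head second; with
the arguments swapped, as before this fix, the list head itself is spliced
onto the request and the queue is corrupted. The correct orientation in
isolation (hypothetical names):

#include <linux/list.h>

struct example_req {
	struct list_head node;
};

static void example_enqueue(struct list_head *queue, struct example_req *req)
{
	list_add_tail(&req->node, queue);	/* entry first, head second */
}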
+diff --git a/drivers/of/irq.c b/drivers/of/irq.c
+index 1fb329c0a55b8c..5fbfc4d4e06e49 100644
+--- a/drivers/of/irq.c
++++ b/drivers/of/irq.c
+@@ -16,6 +16,7 @@
+
+ #define pr_fmt(fmt) "OF: " fmt
+
++#include <linux/cleanup.h>
+ #include <linux/device.h>
+ #include <linux/errno.h>
+ #include <linux/list.h>
+@@ -38,11 +39,15 @@
+ unsigned int irq_of_parse_and_map(struct device_node *dev, int index)
+ {
+ struct of_phandle_args oirq;
++ unsigned int ret;
+
+ if (of_irq_parse_one(dev, index, &oirq))
+ return 0;
+
+- return irq_create_of_mapping(&oirq);
++ ret = irq_create_of_mapping(&oirq);
++ of_node_put(oirq.np);
++
++ return ret;
+ }
+ EXPORT_SYMBOL_GPL(irq_of_parse_and_map);
+
+@@ -165,6 +170,8 @@ const __be32 *of_irq_parse_imap_parent(const __be32 *imap, int len, struct of_ph
+ * the specifier for each map, and then returns the translated map.
+ *
+ * Return: 0 on success and a negative number on error
++ *
++ * Note: refcount of node @out_irq->np is increased by 1 on success.
+ */
+ int of_irq_parse_raw(const __be32 *addr, struct of_phandle_args *out_irq)
+ {
+@@ -310,6 +317,12 @@ int of_irq_parse_raw(const __be32 *addr, struct of_phandle_args *out_irq)
+ addrsize = (imap - match_array) - intsize;
+
+ if (ipar == newpar) {
++ /*
++ * We already hold a reference on @ipar, and
++ * of_irq_parse_imap_parent() took another one via its
++ * alias @newpar, so drop the duplicate here.
++ */
++ of_node_put(ipar);
+ pr_debug("%pOF interrupt-map entry to self\n", ipar);
+ return 0;
+ }
+@@ -339,10 +352,12 @@ EXPORT_SYMBOL_GPL(of_irq_parse_raw);
+ * This function resolves an interrupt for a node by walking the interrupt tree,
+ * finding which interrupt controller node it is attached to, and returning the
+ * interrupt specifier that can be used to retrieve a Linux IRQ number.
++ *
++ * Note: refcount of node @out_irq->np is increased by 1 on success.
+ */
+ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_args *out_irq)
+ {
+- struct device_node *p;
++ struct device_node __free(device_node) *p = NULL;
+ const __be32 *addr;
+ u32 intsize;
+ int i, res, addr_len;
+@@ -367,41 +382,33 @@ int of_irq_parse_one(struct device_node *device, int index, struct of_phandle_ar
+ /* Try the new-style interrupts-extended first */
+ res = of_parse_phandle_with_args(device, "interrupts-extended",
+ "#interrupt-cells", index, out_irq);
+- if (!res)
+- return of_irq_parse_raw(addr_buf, out_irq);
+-
+- /* Look for the interrupt parent. */
+- p = of_irq_find_parent(device);
+- if (p == NULL)
+- return -EINVAL;
++ if (!res) {
++ p = out_irq->np;
++ } else {
++ /* Look for the interrupt parent. */
++ p = of_irq_find_parent(device);
++ /* Get size of interrupt specifier */
++ if (!p || of_property_read_u32(p, "#interrupt-cells", &intsize))
++ return -EINVAL;
++
++ pr_debug(" parent=%pOF, intsize=%d\n", p, intsize);
++
++ /* Copy intspec into irq structure */
++ out_irq->np = p;
++ out_irq->args_count = intsize;
++ for (i = 0; i < intsize; i++) {
++ res = of_property_read_u32_index(device, "interrupts",
++ (index * intsize) + i,
++ out_irq->args + i);
++ if (res)
++ return res;
++ }
+
+- /* Get size of interrupt specifier */
+- if (of_property_read_u32(p, "#interrupt-cells", &intsize)) {
+- res = -EINVAL;
+- goto out;
++ pr_debug(" intspec=%d\n", *out_irq->args);
+ }
+
+- pr_debug(" parent=%pOF, intsize=%d\n", p, intsize);
+-
+- /* Copy intspec into irq structure */
+- out_irq->np = p;
+- out_irq->args_count = intsize;
+- for (i = 0; i < intsize; i++) {
+- res = of_property_read_u32_index(device, "interrupts",
+- (index * intsize) + i,
+- out_irq->args + i);
+- if (res)
+- goto out;
+- }
+-
+- pr_debug(" intspec=%d\n", *out_irq->args);
+-
+-
+ /* Check if there are any interrupt-map translations to process */
+- res = of_irq_parse_raw(addr_buf, out_irq);
+- out:
+- of_node_put(p);
+- return res;
++ return of_irq_parse_raw(addr_buf, out_irq);
+ }
+ EXPORT_SYMBOL_GPL(of_irq_parse_one);
+
+@@ -505,8 +512,10 @@ int of_irq_count(struct device_node *dev)
+ struct of_phandle_args irq;
+ int nr = 0;
+
+- while (of_irq_parse_one(dev, nr, &irq) == 0)
++ while (of_irq_parse_one(dev, nr, &irq) == 0) {
++ of_node_put(irq.np);
+ nr++;
++ }
+
+ return nr;
+ }
+@@ -623,6 +632,8 @@ void __init of_irq_init(const struct of_device_id *matches)
+ __func__, desc->dev, desc->dev,
+ desc->interrupt_parent);
+ of_node_clear_flag(desc->dev, OF_POPULATED);
++ of_node_put(desc->interrupt_parent);
++ of_node_put(desc->dev);
+ kfree(desc);
+ continue;
+ }
+@@ -653,6 +664,7 @@ void __init of_irq_init(const struct of_device_id *matches)
+ err:
+ list_for_each_entry_safe(desc, temp_desc, &intc_desc_list, list) {
+ list_del(&desc->list);
++ of_node_put(desc->interrupt_parent);
+ of_node_put(desc->dev);
+ kfree(desc);
+ }
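The refcounting changes above establish a caller contract: a successful
of_irq_parse_one() (and of_irq_parse_raw()) returns with a reference held
on out_irq->np, which the caller must drop. A minimal sketch of a compliant
caller, mirroring the patched irq_of_parse_and_map():

#include <linux/irqdomain.h>
#include <linux/of.h>
#include <linux/of_irq.h>

static unsigned int example_parse_and_map(struct device_node *dev, int index)
{
	struct of_phandle_args oirq;
	unsigned int virq;

	if (of_irq_parse_one(dev, index, &oirq))
		return 0;

	virq = irq_create_of_mapping(&oirq);
	of_node_put(oirq.np);		/* release the parser's reference */

	return virq;
}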
+diff --git a/drivers/pci/controller/cadence/pci-j721e.c b/drivers/pci/controller/cadence/pci-j721e.c
+index e091c3e55b5c6f..bae829ac759e12 100644
+--- a/drivers/pci/controller/cadence/pci-j721e.c
++++ b/drivers/pci/controller/cadence/pci-j721e.c
+@@ -355,6 +355,7 @@ static const struct j721e_pcie_data j7200_pcie_rc_data = {
+ static const struct j721e_pcie_data j7200_pcie_ep_data = {
+ .mode = PCI_MODE_EP,
+ .quirk_detect_quiet_flag = true,
++ .linkdown_irq_regfield = J7200_LINK_DOWN,
+ .quirk_disable_flr = true,
+ .max_lanes = 2,
+ };
+@@ -376,13 +377,13 @@ static const struct j721e_pcie_data j784s4_pcie_rc_data = {
+ .mode = PCI_MODE_RC,
+ .quirk_retrain_flag = true,
+ .byte_access_allowed = false,
+- .linkdown_irq_regfield = LINK_DOWN,
++ .linkdown_irq_regfield = J7200_LINK_DOWN,
+ .max_lanes = 4,
+ };
+
+ static const struct j721e_pcie_data j784s4_pcie_ep_data = {
+ .mode = PCI_MODE_EP,
+- .linkdown_irq_regfield = LINK_DOWN,
++ .linkdown_irq_regfield = J7200_LINK_DOWN,
+ .max_lanes = 4,
+ };
+
+diff --git a/drivers/pci/controller/pcie-brcmstb.c b/drivers/pci/controller/pcie-brcmstb.c
+index 582fa110708781..792d24cea5747b 100644
+--- a/drivers/pci/controller/pcie-brcmstb.c
++++ b/drivers/pci/controller/pcie-brcmstb.c
+@@ -1786,7 +1786,7 @@ static struct pci_ops brcm7425_pcie_ops = {
+
+ static int brcm_pcie_probe(struct platform_device *pdev)
+ {
+- struct device_node *np = pdev->dev.of_node, *msi_np;
++ struct device_node *np = pdev->dev.of_node;
+ struct pci_host_bridge *bridge;
+ const struct pcie_cfg_data *data;
+ struct brcm_pcie *pcie;
+@@ -1890,9 +1890,14 @@ static int brcm_pcie_probe(struct platform_device *pdev)
+ goto fail;
+ }
+
+- msi_np = of_parse_phandle(pcie->np, "msi-parent", 0);
+- if (pci_msi_enabled() && msi_np == pcie->np) {
+- ret = brcm_pcie_enable_msi(pcie);
++ if (pci_msi_enabled()) {
++ struct device_node *msi_np = of_parse_phandle(pcie->np, "msi-parent", 0);
++
++ if (msi_np == pcie->np)
++ ret = brcm_pcie_enable_msi(pcie);
++
++ of_node_put(msi_np);
++
+ if (ret) {
+ dev_err(pcie->dev, "probe of internal MSI failed");
+ goto fail;
+diff --git a/drivers/pci/controller/pcie-rockchip-host.c b/drivers/pci/controller/pcie-rockchip-host.c
+index cbec711148253a..481dcc476c556b 100644
+--- a/drivers/pci/controller/pcie-rockchip-host.c
++++ b/drivers/pci/controller/pcie-rockchip-host.c
+@@ -367,7 +367,7 @@ static int rockchip_pcie_host_init_port(struct rockchip_pcie *rockchip)
+ }
+ }
+
+- rockchip_pcie_write(rockchip, ROCKCHIP_VENDOR_ID,
++ rockchip_pcie_write(rockchip, PCI_VENDOR_ID_ROCKCHIP,
+ PCIE_CORE_CONFIG_VENDOR);
+ rockchip_pcie_write(rockchip,
+ PCI_CLASS_BRIDGE_PCI_NORMAL << 8,
+diff --git a/drivers/pci/controller/pcie-rockchip.h b/drivers/pci/controller/pcie-rockchip.h
+index 15ee949f2485e3..688f51d9bde631 100644
+--- a/drivers/pci/controller/pcie-rockchip.h
++++ b/drivers/pci/controller/pcie-rockchip.h
+@@ -188,7 +188,6 @@
+ #define AXI_WRAPPER_NOR_MSG 0xc
+
+ #define PCIE_RC_SEND_PME_OFF 0x11960
+-#define ROCKCHIP_VENDOR_ID 0x1d87
+ #define PCIE_LINK_IS_L2(x) \
+ (((x) & PCIE_CLIENT_DEBUG_LTSSM_MASK) == PCIE_CLIENT_DEBUG_LTSSM_L2)
+ #define PCIE_LINK_UP(x) \
+diff --git a/drivers/pci/controller/vmd.c b/drivers/pci/controller/vmd.c
+index 9d9596947350f5..94ceec50a2b94c 100644
+--- a/drivers/pci/controller/vmd.c
++++ b/drivers/pci/controller/vmd.c
+@@ -125,7 +125,7 @@ struct vmd_irq_list {
+ struct vmd_dev {
+ struct pci_dev *dev;
+
+- spinlock_t cfg_lock;
++ raw_spinlock_t cfg_lock;
+ void __iomem *cfgbar;
+
+ int msix_count;
+@@ -391,7 +391,7 @@ static int vmd_pci_read(struct pci_bus *bus, unsigned int devfn, int reg,
+ if (!addr)
+ return -EFAULT;
+
+- spin_lock_irqsave(&vmd->cfg_lock, flags);
++ raw_spin_lock_irqsave(&vmd->cfg_lock, flags);
+ switch (len) {
+ case 1:
+ *value = readb(addr);
+@@ -406,7 +406,7 @@ static int vmd_pci_read(struct pci_bus *bus, unsigned int devfn, int reg,
+ ret = -EINVAL;
+ break;
+ }
+- spin_unlock_irqrestore(&vmd->cfg_lock, flags);
++ raw_spin_unlock_irqrestore(&vmd->cfg_lock, flags);
+ return ret;
+ }
+
+@@ -426,7 +426,7 @@ static int vmd_pci_write(struct pci_bus *bus, unsigned int devfn, int reg,
+ if (!addr)
+ return -EFAULT;
+
+- spin_lock_irqsave(&vmd->cfg_lock, flags);
++ raw_spin_lock_irqsave(&vmd->cfg_lock, flags);
+ switch (len) {
+ case 1:
+ writeb(value, addr);
+@@ -444,7 +444,7 @@ static int vmd_pci_write(struct pci_bus *bus, unsigned int devfn, int reg,
+ ret = -EINVAL;
+ break;
+ }
+- spin_unlock_irqrestore(&vmd->cfg_lock, flags);
++ raw_spin_unlock_irqrestore(&vmd->cfg_lock, flags);
+ return ret;
+ }
+
+@@ -1009,7 +1009,7 @@ static int vmd_probe(struct pci_dev *dev, const struct pci_device_id *id)
+ if (features & VMD_FEAT_OFFSET_FIRST_VECTOR)
+ vmd->first_vec = 1;
+
+- spin_lock_init(&vmd->cfg_lock);
++ raw_spin_lock_init(&vmd->cfg_lock);
+ pci_set_drvdata(dev, vmd);
+ err = vmd_enable_domain(vmd, features);
+ if (err)
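The cfg_lock conversion matters under PREEMPT_RT, where a plain spinlock_t
becomes a sleeping lock: config-space accessors can be called from contexts
that must not sleep, so they need a raw_spinlock_t, which stays a true
spinning lock on RT. A hedged sketch of the locking shape (hypothetical
names):

#include <linux/spinlock.h>

static DEFINE_RAW_SPINLOCK(example_cfg_lock);

static void example_cfg_access(void)
{
	unsigned long flags;

	raw_spin_lock_irqsave(&example_cfg_lock, flags);
	/* touch shared config-space state here */
	raw_spin_unlock_irqrestore(&example_cfg_lock, flags);
}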
+diff --git a/drivers/pci/devres.c b/drivers/pci/devres.c
+index 643f85849ef64b..3f2691888c35d3 100644
+--- a/drivers/pci/devres.c
++++ b/drivers/pci/devres.c
+@@ -40,7 +40,7 @@
+ * Legacy struct storing addresses to whole mapped BARs.
+ */
+ struct pcim_iomap_devres {
+- void __iomem *table[PCI_STD_NUM_BARS];
++ void __iomem *table[PCI_NUM_RESOURCES];
+ };
+
+ /* Used to restore the old INTx state on driver detach. */
+@@ -577,7 +577,7 @@ static int pcim_add_mapping_to_legacy_table(struct pci_dev *pdev,
+ {
+ void __iomem **legacy_iomap_table;
+
+- if (bar >= PCI_STD_NUM_BARS)
++ if (!pci_bar_index_is_valid(bar))
+ return -EINVAL;
+
+ legacy_iomap_table = (void __iomem **)pcim_iomap_table(pdev);
+@@ -622,7 +622,7 @@ static void pcim_remove_bar_from_legacy_table(struct pci_dev *pdev, int bar)
+ {
+ void __iomem **legacy_iomap_table;
+
+- if (bar >= PCI_STD_NUM_BARS)
++ if (!pci_bar_index_is_valid(bar))
+ return;
+
+ legacy_iomap_table = (void __iomem **)pcim_iomap_table(pdev);
+@@ -655,6 +655,9 @@ void __iomem *pcim_iomap(struct pci_dev *pdev, int bar, unsigned long maxlen)
+ void __iomem *mapping;
+ struct pcim_addr_devres *res;
+
++ if (!pci_bar_index_is_valid(bar))
++ return NULL;
++
+ res = pcim_addr_devres_alloc(pdev);
+ if (!res)
+ return NULL;
+@@ -722,6 +725,9 @@ void __iomem *pcim_iomap_region(struct pci_dev *pdev, int bar,
+ int ret;
+ struct pcim_addr_devres *res;
+
++ if (!pci_bar_index_is_valid(bar))
++ return IOMEM_ERR_PTR(-EINVAL);
++
+ res = pcim_addr_devres_alloc(pdev);
+ if (!res)
+ return IOMEM_ERR_PTR(-ENOMEM);
+@@ -822,6 +828,9 @@ static int _pcim_request_region(struct pci_dev *pdev, int bar, const char *name,
+ int ret;
+ struct pcim_addr_devres *res;
+
++ if (!pci_bar_index_is_valid(bar))
++ return -EINVAL;
++
+ res = pcim_addr_devres_alloc(pdev);
+ if (!res)
+ return -ENOMEM;
+@@ -1043,6 +1052,9 @@ void __iomem *pcim_iomap_range(struct pci_dev *pdev, int bar,
+ void __iomem *mapping;
+ struct pcim_addr_devres *res;
+
++ if (!pci_bar_index_is_valid(bar))
++ return IOMEM_ERR_PTR(-EINVAL);
++
+ res = pcim_addr_devres_alloc(pdev);
+ if (!res)
+ return IOMEM_ERR_PTR(-ENOMEM);
+diff --git a/drivers/pci/hotplug/pciehp_core.c b/drivers/pci/hotplug/pciehp_core.c
+index ff458e692fedb3..997841c6989359 100644
+--- a/drivers/pci/hotplug/pciehp_core.c
++++ b/drivers/pci/hotplug/pciehp_core.c
+@@ -286,9 +286,12 @@ static int pciehp_suspend(struct pcie_device *dev)
+
+ static bool pciehp_device_replaced(struct controller *ctrl)
+ {
+- struct pci_dev *pdev __free(pci_dev_put);
++ struct pci_dev *pdev __free(pci_dev_put) = NULL;
+ u32 reg;
+
++ if (pci_dev_is_disconnected(ctrl->pcie->port))
++ return false;
++
+ pdev = pci_get_slot(ctrl->pcie->port->subordinate, PCI_DEVFN(0, 0));
+ if (!pdev)
+ return true;
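A __free()-annotated variable has its cleanup routine run on every function
exit, so it must be initialised at declaration; before this fix an early
return could hand an uninitialised pointer to pci_dev_put(). The safe idiom
in isolation (hedged sketch, hypothetical function name):

#include <linux/cleanup.h>
#include <linux/pci.h>

static bool example_slot_populated(struct pci_bus *bus)
{
	/* initialised at declaration; the cleanup skips a NULL pointer */
	struct pci_dev *pdev __free(pci_dev_put) =
		pci_get_slot(bus, PCI_DEVFN(0, 0));

	return pdev != NULL;	/* reference dropped automatically on return */
}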
+diff --git a/drivers/pci/iomap.c b/drivers/pci/iomap.c
+index 9fb7cacc15cdef..fe706ed946dfd2 100644
+--- a/drivers/pci/iomap.c
++++ b/drivers/pci/iomap.c
+@@ -9,6 +9,8 @@
+
+ #include <linux/export.h>
+
++#include "pci.h" /* for pci_bar_index_is_valid() */
++
+ /**
+ * pci_iomap_range - create a virtual mapping cookie for a PCI BAR
+ * @dev: PCI device that owns the BAR
+@@ -33,12 +35,19 @@ void __iomem *pci_iomap_range(struct pci_dev *dev,
+ unsigned long offset,
+ unsigned long maxlen)
+ {
+- resource_size_t start = pci_resource_start(dev, bar);
+- resource_size_t len = pci_resource_len(dev, bar);
+- unsigned long flags = pci_resource_flags(dev, bar);
++ resource_size_t start, len;
++ unsigned long flags;
++
++ if (!pci_bar_index_is_valid(bar))
++ return NULL;
++
++ start = pci_resource_start(dev, bar);
++ len = pci_resource_len(dev, bar);
++ flags = pci_resource_flags(dev, bar);
+
+ if (len <= offset || !start)
+ return NULL;
++
+ len -= offset;
+ start += offset;
+ if (maxlen && len > maxlen)
+@@ -77,16 +86,20 @@ void __iomem *pci_iomap_wc_range(struct pci_dev *dev,
+ unsigned long offset,
+ unsigned long maxlen)
+ {
+- resource_size_t start = pci_resource_start(dev, bar);
+- resource_size_t len = pci_resource_len(dev, bar);
+- unsigned long flags = pci_resource_flags(dev, bar);
++ resource_size_t start, len;
++ unsigned long flags;
+
+-
+- if (flags & IORESOURCE_IO)
++ if (!pci_bar_index_is_valid(bar))
+ return NULL;
+
++ start = pci_resource_start(dev, bar);
++ len = pci_resource_len(dev, bar);
++ flags = pci_resource_flags(dev, bar);
++
+ if (len <= offset || !start)
+ return NULL;
++ if (flags & IORESOURCE_IO)
++ return NULL;
+
+ len -= offset;
+ start += offset;
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index 169aa8fd74a11f..be61fa93d39712 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -3922,6 +3922,9 @@ EXPORT_SYMBOL(pci_enable_atomic_ops_to_root);
+ */
+ void pci_release_region(struct pci_dev *pdev, int bar)
+ {
++ if (!pci_bar_index_is_valid(bar))
++ return;
++
+ /*
+ * This is done for backwards compatibility, because the old PCI devres
+ * API had a mode in which the function became managed if it had been
+@@ -3967,6 +3970,9 @@ EXPORT_SYMBOL(pci_release_region);
+ static int __pci_request_region(struct pci_dev *pdev, int bar,
+ const char *res_name, int exclusive)
+ {
++ if (!pci_bar_index_is_valid(bar))
++ return -EINVAL;
++
+ if (pci_is_managed(pdev)) {
+ if (exclusive == IORESOURCE_EXCLUSIVE)
+ return pcim_request_region_exclusive(pdev, bar, res_name);
+diff --git a/drivers/pci/pci.h b/drivers/pci/pci.h
+index 1cdc2c9547a7e1..65df6d2ac0032e 100644
+--- a/drivers/pci/pci.h
++++ b/drivers/pci/pci.h
+@@ -165,6 +165,22 @@ static inline void pci_wakeup_event(struct pci_dev *dev)
+ pm_wakeup_event(&dev->dev, 100);
+ }
+
++/**
++ * pci_bar_index_is_valid - Check whether a BAR index is within valid range
++ * @bar: BAR index
++ *
++ * Protects against overflowing &struct pci_dev.resource array.
++ *
++ * Return: true for valid index, false otherwise.
++ */
++static inline bool pci_bar_index_is_valid(int bar)
++{
++ if (bar >= 0 && bar < PCI_NUM_RESOURCES)
++ return true;
++
++ return false;
++}
++
+ static inline bool pci_has_subordinate(struct pci_dev *pci_dev)
+ {
+ return !!(pci_dev->subordinate);
+diff --git a/drivers/pci/probe.c b/drivers/pci/probe.c
+index 0e757b23a09f0f..cf7c7886b64203 100644
+--- a/drivers/pci/probe.c
++++ b/drivers/pci/probe.c
+@@ -908,6 +908,7 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
+ resource_size_t offset, next_offset;
+ LIST_HEAD(resources);
+ struct resource *res, *next_res;
++ bool bus_registered = false;
+ char addr[64], *fmt;
+ const char *name;
+ int err;
+@@ -971,6 +972,7 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
+ name = dev_name(&bus->dev);
+
+ err = device_register(&bus->dev);
++ bus_registered = true;
+ if (err)
+ goto unregister;
+
+@@ -1057,12 +1059,15 @@ static int pci_register_host_bridge(struct pci_host_bridge *bridge)
+ unregister:
+ put_device(&bridge->dev);
+ device_del(&bridge->dev);
+-
+ free:
+ #ifdef CONFIG_PCI_DOMAINS_GENERIC
+ pci_bus_release_domain_nr(parent, bus->domain_nr);
+ #endif
+- kfree(bus);
++ if (bus_registered)
++ put_device(&bus->dev);
++ else
++ kfree(bus);
++
+ return err;
+ }
+
+@@ -1171,7 +1176,10 @@ static struct pci_bus *pci_alloc_child_bus(struct pci_bus *parent,
+ add_dev:
+ pci_set_bus_msi_domain(child);
+ ret = device_register(&child->dev);
+- WARN_ON(ret < 0);
++ if (WARN_ON(ret < 0)) {
++ put_device(&child->dev);
++ return NULL;
++ }
+
+ pcibios_add_bus(child);
+
+@@ -1327,8 +1335,6 @@ static int pci_scan_bridge_extend(struct pci_bus *bus, struct pci_dev *dev,
+ pci_write_config_word(dev, PCI_BRIDGE_CONTROL,
+ bctl & ~PCI_BRIDGE_CTL_MASTER_ABORT);
+
+- pci_enable_rrs_sv(dev);
+-
+ if ((secondary || subordinate) && !pcibios_assign_all_busses() &&
+ !is_cardbus && !broken) {
+ unsigned int cmax, buses;
+@@ -1569,6 +1575,11 @@ void set_pcie_port_type(struct pci_dev *pdev)
+ pdev->pcie_cap = pos;
+ pci_read_config_word(pdev, pos + PCI_EXP_FLAGS, ®16);
+ pdev->pcie_flags_reg = reg16;
++
++ type = pci_pcie_type(pdev);
++ if (type == PCI_EXP_TYPE_ROOT_PORT)
++ pci_enable_rrs_sv(pdev);
++
+ pci_read_config_dword(pdev, pos + PCI_EXP_DEVCAP, &pdev->devcap);
+ pdev->pcie_mpss = FIELD_GET(PCI_EXP_DEVCAP_PAYLOAD, pdev->devcap);
+
+@@ -1585,7 +1596,6 @@ void set_pcie_port_type(struct pci_dev *pdev)
+ * correctly so detect impossible configurations here and correct
+ * the port type accordingly.
+ */
+- type = pci_pcie_type(pdev);
+ if (type == PCI_EXP_TYPE_DOWNSTREAM) {
+ /*
+ * If pdev claims to be downstream port but the parent
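The bus_registered flag above implements the device-core contract: once
device_register() has been called, the structure is refcounted and must be
released with put_device(), on success or failure, so that the release()
callback frees it; kfree() is only legal before registration. A hedged
sketch of the contract (hypothetical function name):

#include <linux/device.h>

static int example_add(struct device *dev)
{
	int err;

	err = device_register(dev);
	if (err) {
		put_device(dev);  /* never kfree() after device_register() */
		return err;
	}

	return 0;
}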
+diff --git a/drivers/perf/arm_pmu.c b/drivers/perf/arm_pmu.c
+index 398cce3d76fc44..2f33e69a8caf20 100644
+--- a/drivers/perf/arm_pmu.c
++++ b/drivers/perf/arm_pmu.c
+@@ -342,12 +342,10 @@ armpmu_add(struct perf_event *event, int flags)
+ if (idx < 0)
+ return idx;
+
+- /*
+- * If there is an event in the counter we are going to use then make
+- * sure it is disabled.
+- */
++ /* The newly-allocated counter should be empty */
++ WARN_ON_ONCE(hw_events->events[idx]);
++
+ event->hw.idx = idx;
+- armpmu->disable(event);
+ hw_events->events[idx] = event;
+
+ hwc->state = PERF_HES_STOPPED | PERF_HES_UPTODATE;
+diff --git a/drivers/perf/dwc_pcie_pmu.c b/drivers/perf/dwc_pcie_pmu.c
+index 4ca50f9b6dfed8..7dbda36884c8d3 100644
+--- a/drivers/perf/dwc_pcie_pmu.c
++++ b/drivers/perf/dwc_pcie_pmu.c
+@@ -567,8 +567,10 @@ static int dwc_pcie_register_dev(struct pci_dev *pdev)
+ return PTR_ERR(plat_dev);
+
+ dev_info = kzalloc(sizeof(*dev_info), GFP_KERNEL);
+- if (!dev_info)
++ if (!dev_info) {
++ platform_device_unregister(plat_dev);
+ return -ENOMEM;
++ }
+
+ /* Cache platform device to handle pci device hotplug */
+ dev_info->plat_dev = plat_dev;
+@@ -724,6 +726,15 @@ static struct platform_driver dwc_pcie_pmu_driver = {
+ .driver = {.name = "dwc_pcie_pmu",},
+ };
+
++static void dwc_pcie_cleanup_devices(void)
++{
++ struct dwc_pcie_dev_info *dev_info, *tmp;
++
++ list_for_each_entry_safe(dev_info, tmp, &dwc_pcie_dev_info_head, dev_node) {
++ dwc_pcie_unregister_dev(dev_info);
++ }
++}
++
+ static int __init dwc_pcie_pmu_init(void)
+ {
+ struct pci_dev *pdev = NULL;
+@@ -736,7 +747,7 @@ static int __init dwc_pcie_pmu_init(void)
+ ret = dwc_pcie_register_dev(pdev);
+ if (ret) {
+ pci_dev_put(pdev);
+- return ret;
++ goto err_cleanup;
+ }
+ }
+
+@@ -745,35 +756,35 @@ static int __init dwc_pcie_pmu_init(void)
+ dwc_pcie_pmu_online_cpu,
+ dwc_pcie_pmu_offline_cpu);
+ if (ret < 0)
+- return ret;
++ goto err_cleanup;
+
+ dwc_pcie_pmu_hp_state = ret;
+
+ ret = platform_driver_register(&dwc_pcie_pmu_driver);
+ if (ret)
+- goto platform_driver_register_err;
++ goto err_remove_cpuhp;
+
+ ret = bus_register_notifier(&pci_bus_type, &dwc_pcie_pmu_nb);
+ if (ret)
+- goto platform_driver_register_err;
++ goto err_unregister_driver;
+ notify = true;
+
+ return 0;
+
+-platform_driver_register_err:
++err_unregister_driver:
++ platform_driver_unregister(&dwc_pcie_pmu_driver);
++err_remove_cpuhp:
+ cpuhp_remove_multi_state(dwc_pcie_pmu_hp_state);
+-
++err_cleanup:
++ dwc_pcie_cleanup_devices();
+ return ret;
+ }
+
+ static void __exit dwc_pcie_pmu_exit(void)
+ {
+- struct dwc_pcie_dev_info *dev_info, *tmp;
+-
+ if (notify)
+ bus_unregister_notifier(&pci_bus_type, &dwc_pcie_pmu_nb);
+- list_for_each_entry_safe(dev_info, tmp, &dwc_pcie_dev_info_head, dev_node)
+- dwc_pcie_unregister_dev(dev_info);
++ dwc_pcie_cleanup_devices();
+ platform_driver_unregister(&dwc_pcie_pmu_driver);
+ cpuhp_remove_multi_state(dwc_pcie_pmu_hp_state);
+ }
+diff --git a/drivers/phy/freescale/phy-fsl-imx8m-pcie.c b/drivers/phy/freescale/phy-fsl-imx8m-pcie.c
+index e98361dcdeadfe..afd52392cd5301 100644
+--- a/drivers/phy/freescale/phy-fsl-imx8m-pcie.c
++++ b/drivers/phy/freescale/phy-fsl-imx8m-pcie.c
+@@ -162,6 +162,16 @@ static int imx8_pcie_phy_power_on(struct phy *phy)
+ return ret;
+ }
+
++static int imx8_pcie_phy_power_off(struct phy *phy)
++{
++ struct imx8_pcie_phy *imx8_phy = phy_get_drvdata(phy);
++
++ reset_control_assert(imx8_phy->reset);
++ reset_control_assert(imx8_phy->perst);
++
++ return 0;
++}
++
+ static int imx8_pcie_phy_init(struct phy *phy)
+ {
+ struct imx8_pcie_phy *imx8_phy = phy_get_drvdata(phy);
+@@ -182,6 +192,7 @@ static const struct phy_ops imx8_pcie_phy_ops = {
+ .init = imx8_pcie_phy_init,
+ .exit = imx8_pcie_phy_exit,
+ .power_on = imx8_pcie_phy_power_on,
++ .power_off = imx8_pcie_phy_power_off,
+ .owner = THIS_MODULE,
+ };
+
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index aeaf0d1958f56a..a6bdff7a0bb254 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -1044,8 +1044,7 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ const struct msm_pingroup *g;
+ u32 intr_target_mask = GENMASK(2, 0);
+ unsigned long flags;
+- bool was_enabled;
+- u32 val;
++ u32 val, oldval;
+
+ if (msm_gpio_needs_dual_edge_parent_workaround(d, type)) {
+ set_bit(d->hwirq, pctrl->dual_edge_irqs);
+@@ -1107,8 +1106,7 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ * internal circuitry of TLMM, toggling the RAW_STATUS
+ * could cause the INTR_STATUS to be set for EDGE interrupts.
+ */
+- val = msm_readl_intr_cfg(pctrl, g);
+- was_enabled = val & BIT(g->intr_raw_status_bit);
++ val = oldval = msm_readl_intr_cfg(pctrl, g);
+ val |= BIT(g->intr_raw_status_bit);
+ if (g->intr_detection_width == 2) {
+ val &= ~(3 << g->intr_detection_bit);
+@@ -1161,9 +1159,11 @@ static int msm_gpio_irq_set_type(struct irq_data *d, unsigned int type)
+ /*
+ * The first time we set RAW_STATUS_EN it could trigger an interrupt.
+ * Clear the interrupt. This is safe because we have
+- * IRQCHIP_SET_TYPE_MASKED.
++ * IRQCHIP_SET_TYPE_MASKED. When changing the interrupt type, we could
++ * also still have a non-matching interrupt latched, so clear whenever
++ * making changes to the interrupt configuration.
+ */
+- if (!was_enabled)
++ if (val != oldval)
+ msm_ack_intr_status(pctrl, g);
+
+ if (test_bit(d->hwirq, pctrl->dual_edge_irqs))
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+index 5480e0884abecf..23b4bc1e5da81c 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
++++ b/drivers/pinctrl/samsung/pinctrl-exynos-arm64.c
+@@ -939,83 +939,83 @@ const struct samsung_pinctrl_of_match_data fsd_of_data __initconst = {
+
+ /* pin banks of gs101 pin-controller (ALIVE) */
+ static const struct samsung_pin_bank_data gs101_pin_alive[] = {
+- EXYNOS850_PIN_BANK_EINTW(8, 0x0, "gpa0", 0x00),
+- EXYNOS850_PIN_BANK_EINTW(7, 0x20, "gpa1", 0x04),
+- EXYNOS850_PIN_BANK_EINTW(5, 0x40, "gpa2", 0x08),
+- EXYNOS850_PIN_BANK_EINTW(4, 0x60, "gpa3", 0x0c),
+- EXYNOS850_PIN_BANK_EINTW(4, 0x80, "gpa4", 0x10),
+- EXYNOS850_PIN_BANK_EINTW(7, 0xa0, "gpa5", 0x14),
+- EXYNOS850_PIN_BANK_EINTW(8, 0xc0, "gpa9", 0x18),
+- EXYNOS850_PIN_BANK_EINTW(2, 0xe0, "gpa10", 0x1c),
++ GS101_PIN_BANK_EINTW(8, 0x0, "gpa0", 0x00, 0x00),
++ GS101_PIN_BANK_EINTW(7, 0x20, "gpa1", 0x04, 0x08),
++ GS101_PIN_BANK_EINTW(5, 0x40, "gpa2", 0x08, 0x10),
++ GS101_PIN_BANK_EINTW(4, 0x60, "gpa3", 0x0c, 0x18),
++ GS101_PIN_BANK_EINTW(4, 0x80, "gpa4", 0x10, 0x1c),
++ GS101_PIN_BANK_EINTW(7, 0xa0, "gpa5", 0x14, 0x20),
++ GS101_PIN_BANK_EINTW(8, 0xc0, "gpa9", 0x18, 0x28),
++ GS101_PIN_BANK_EINTW(2, 0xe0, "gpa10", 0x1c, 0x30),
+ };
+
+ /* pin banks of gs101 pin-controller (FAR_ALIVE) */
+ static const struct samsung_pin_bank_data gs101_pin_far_alive[] = {
+- EXYNOS850_PIN_BANK_EINTW(8, 0x0, "gpa6", 0x00),
+- EXYNOS850_PIN_BANK_EINTW(4, 0x20, "gpa7", 0x04),
+- EXYNOS850_PIN_BANK_EINTW(8, 0x40, "gpa8", 0x08),
+- EXYNOS850_PIN_BANK_EINTW(2, 0x60, "gpa11", 0x0c),
++ GS101_PIN_BANK_EINTW(8, 0x0, "gpa6", 0x00, 0x00),
++ GS101_PIN_BANK_EINTW(4, 0x20, "gpa7", 0x04, 0x08),
++ GS101_PIN_BANK_EINTW(8, 0x40, "gpa8", 0x08, 0x0c),
++ GS101_PIN_BANK_EINTW(2, 0x60, "gpa11", 0x0c, 0x14),
+ };
+
+ /* pin banks of gs101 pin-controller (GSACORE) */
+ static const struct samsung_pin_bank_data gs101_pin_gsacore[] = {
+- EXYNOS850_PIN_BANK_EINTG(2, 0x0, "gps0", 0x00),
+- EXYNOS850_PIN_BANK_EINTG(8, 0x20, "gps1", 0x04),
+- EXYNOS850_PIN_BANK_EINTG(3, 0x40, "gps2", 0x08),
++ GS101_PIN_BANK_EINTG(2, 0x0, "gps0", 0x00, 0x00),
++ GS101_PIN_BANK_EINTG(8, 0x20, "gps1", 0x04, 0x04),
++ GS101_PIN_BANK_EINTG(3, 0x40, "gps2", 0x08, 0x0c),
+ };
+
+ /* pin banks of gs101 pin-controller (GSACTRL) */
+ static const struct samsung_pin_bank_data gs101_pin_gsactrl[] = {
+- EXYNOS850_PIN_BANK_EINTW(6, 0x0, "gps3", 0x00),
++ GS101_PIN_BANK_EINTW(6, 0x0, "gps3", 0x00, 0x00),
+ };
+
+ /* pin banks of gs101 pin-controller (PERIC0) */
+ static const struct samsung_pin_bank_data gs101_pin_peric0[] = {
+- EXYNOS850_PIN_BANK_EINTG(5, 0x0, "gpp0", 0x00),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x20, "gpp1", 0x04),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x40, "gpp2", 0x08),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x60, "gpp3", 0x0c),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x80, "gpp4", 0x10),
+- EXYNOS850_PIN_BANK_EINTG(2, 0xa0, "gpp5", 0x14),
+- EXYNOS850_PIN_BANK_EINTG(4, 0xc0, "gpp6", 0x18),
+- EXYNOS850_PIN_BANK_EINTG(2, 0xe0, "gpp7", 0x1c),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x100, "gpp8", 0x20),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x120, "gpp9", 0x24),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x140, "gpp10", 0x28),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x160, "gpp11", 0x2c),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x180, "gpp12", 0x30),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x1a0, "gpp13", 0x34),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x1c0, "gpp14", 0x38),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x1e0, "gpp15", 0x3c),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x200, "gpp16", 0x40),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x220, "gpp17", 0x44),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x240, "gpp18", 0x48),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x260, "gpp19", 0x4c),
++ GS101_PIN_BANK_EINTG(5, 0x0, "gpp0", 0x00, 0x00),
++ GS101_PIN_BANK_EINTG(4, 0x20, "gpp1", 0x04, 0x08),
++ GS101_PIN_BANK_EINTG(4, 0x40, "gpp2", 0x08, 0x0c),
++ GS101_PIN_BANK_EINTG(2, 0x60, "gpp3", 0x0c, 0x10),
++ GS101_PIN_BANK_EINTG(4, 0x80, "gpp4", 0x10, 0x14),
++ GS101_PIN_BANK_EINTG(2, 0xa0, "gpp5", 0x14, 0x18),
++ GS101_PIN_BANK_EINTG(4, 0xc0, "gpp6", 0x18, 0x1c),
++ GS101_PIN_BANK_EINTG(2, 0xe0, "gpp7", 0x1c, 0x20),
++ GS101_PIN_BANK_EINTG(4, 0x100, "gpp8", 0x20, 0x24),
++ GS101_PIN_BANK_EINTG(2, 0x120, "gpp9", 0x24, 0x28),
++ GS101_PIN_BANK_EINTG(4, 0x140, "gpp10", 0x28, 0x2c),
++ GS101_PIN_BANK_EINTG(2, 0x160, "gpp11", 0x2c, 0x30),
++ GS101_PIN_BANK_EINTG(4, 0x180, "gpp12", 0x30, 0x34),
++ GS101_PIN_BANK_EINTG(2, 0x1a0, "gpp13", 0x34, 0x38),
++ GS101_PIN_BANK_EINTG(4, 0x1c0, "gpp14", 0x38, 0x3c),
++ GS101_PIN_BANK_EINTG(2, 0x1e0, "gpp15", 0x3c, 0x40),
++ GS101_PIN_BANK_EINTG(4, 0x200, "gpp16", 0x40, 0x44),
++ GS101_PIN_BANK_EINTG(2, 0x220, "gpp17", 0x44, 0x48),
++ GS101_PIN_BANK_EINTG(4, 0x240, "gpp18", 0x48, 0x4c),
++ GS101_PIN_BANK_EINTG(4, 0x260, "gpp19", 0x4c, 0x50),
+ };
+
+ /* pin banks of gs101 pin-controller (PERIC1) */
+ static const struct samsung_pin_bank_data gs101_pin_peric1[] = {
+- EXYNOS850_PIN_BANK_EINTG(8, 0x0, "gpp20", 0x00),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x20, "gpp21", 0x04),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x40, "gpp22", 0x08),
+- EXYNOS850_PIN_BANK_EINTG(8, 0x60, "gpp23", 0x0c),
+- EXYNOS850_PIN_BANK_EINTG(4, 0x80, "gpp24", 0x10),
+- EXYNOS850_PIN_BANK_EINTG(4, 0xa0, "gpp25", 0x14),
+- EXYNOS850_PIN_BANK_EINTG(5, 0xc0, "gpp26", 0x18),
+- EXYNOS850_PIN_BANK_EINTG(4, 0xe0, "gpp27", 0x1c),
++ GS101_PIN_BANK_EINTG(8, 0x0, "gpp20", 0x00, 0x00),
++ GS101_PIN_BANK_EINTG(4, 0x20, "gpp21", 0x04, 0x08),
++ GS101_PIN_BANK_EINTG(2, 0x40, "gpp22", 0x08, 0x0c),
++ GS101_PIN_BANK_EINTG(8, 0x60, "gpp23", 0x0c, 0x10),
++ GS101_PIN_BANK_EINTG(4, 0x80, "gpp24", 0x10, 0x18),
++ GS101_PIN_BANK_EINTG(4, 0xa0, "gpp25", 0x14, 0x1c),
++ GS101_PIN_BANK_EINTG(5, 0xc0, "gpp26", 0x18, 0x20),
++ GS101_PIN_BANK_EINTG(4, 0xe0, "gpp27", 0x1c, 0x28),
+ };
+
+ /* pin banks of gs101 pin-controller (HSI1) */
+ static const struct samsung_pin_bank_data gs101_pin_hsi1[] = {
+- EXYNOS850_PIN_BANK_EINTG(6, 0x0, "gph0", 0x00),
+- EXYNOS850_PIN_BANK_EINTG(7, 0x20, "gph1", 0x04),
++ GS101_PIN_BANK_EINTG(6, 0x0, "gph0", 0x00, 0x00),
++ GS101_PIN_BANK_EINTG(7, 0x20, "gph1", 0x04, 0x08),
+ };
+
+ /* pin banks of gs101 pin-controller (HSI2) */
+ static const struct samsung_pin_bank_data gs101_pin_hsi2[] = {
+- EXYNOS850_PIN_BANK_EINTG(6, 0x0, "gph2", 0x00),
+- EXYNOS850_PIN_BANK_EINTG(2, 0x20, "gph3", 0x04),
+- EXYNOS850_PIN_BANK_EINTG(6, 0x40, "gph4", 0x08),
++ GS101_PIN_BANK_EINTG(6, 0x0, "gph2", 0x00, 0x00),
++ GS101_PIN_BANK_EINTG(2, 0x20, "gph3", 0x04, 0x08),
++ GS101_PIN_BANK_EINTG(6, 0x40, "gph4", 0x08, 0x0c),
+ };
+
+ static const struct samsung_pin_ctrl gs101_pin_ctrl[] __initconst = {
+diff --git a/drivers/pinctrl/samsung/pinctrl-exynos.h b/drivers/pinctrl/samsung/pinctrl-exynos.h
+index 305cb1d31de491..97a43fa4dfc567 100644
+--- a/drivers/pinctrl/samsung/pinctrl-exynos.h
++++ b/drivers/pinctrl/samsung/pinctrl-exynos.h
+@@ -165,6 +165,28 @@
+ .name = id \
+ }
+
++#define GS101_PIN_BANK_EINTG(pins, reg, id, offs, fltcon_offs) \
++ { \
++ .type = &exynos850_bank_type_off, \
++ .pctl_offset = reg, \
++ .nr_pins = pins, \
++ .eint_type = EINT_TYPE_GPIO, \
++ .eint_offset = offs, \
++ .eint_fltcon_offset = fltcon_offs, \
++ .name = id \
++ }
++
++#define GS101_PIN_BANK_EINTW(pins, reg, id, offs, fltcon_offs) \
++ { \
++ .type = &exynos850_bank_type_alive, \
++ .pctl_offset = reg, \
++ .nr_pins = pins, \
++ .eint_type = EINT_TYPE_WKUP, \
++ .eint_offset = offs, \
++ .eint_fltcon_offset = fltcon_offs, \
++ .name = id \
++ }
++
+ /**
+ * struct exynos_weint_data: irq specific data for all the wakeup interrupts
+ * generated by the external wakeup interrupt controller.
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.c b/drivers/pinctrl/samsung/pinctrl-samsung.c
+index c142cd7920307f..63ac89a802d301 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.c
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.c
+@@ -1230,6 +1230,7 @@ samsung_pinctrl_get_soc_data(struct samsung_pinctrl_drv_data *d,
+ bank->eint_con_offset = bdata->eint_con_offset;
+ bank->eint_mask_offset = bdata->eint_mask_offset;
+ bank->eint_pend_offset = bdata->eint_pend_offset;
++ bank->eint_fltcon_offset = bdata->eint_fltcon_offset;
+ bank->name = bdata->name;
+
+ raw_spin_lock_init(&bank->slock);
+diff --git a/drivers/pinctrl/samsung/pinctrl-samsung.h b/drivers/pinctrl/samsung/pinctrl-samsung.h
+index a1e7377bd890b7..14c3b6b965851e 100644
+--- a/drivers/pinctrl/samsung/pinctrl-samsung.h
++++ b/drivers/pinctrl/samsung/pinctrl-samsung.h
+@@ -144,6 +144,7 @@ struct samsung_pin_bank_type {
+ * @eint_con_offset: ExynosAuto SoC-specific EINT control register offset of bank.
+ * @eint_mask_offset: ExynosAuto SoC-specific EINT mask register offset of bank.
+ * @eint_pend_offset: ExynosAuto SoC-specific EINT pend register offset of bank.
++ * @eint_fltcon_offset: GS101 SoC-specific EINT filter config register offset.
+ * @name: name to be prefixed for each pin in this pin bank.
+ */
+ struct samsung_pin_bank_data {
+@@ -158,6 +159,7 @@ struct samsung_pin_bank_data {
+ u32 eint_con_offset;
+ u32 eint_mask_offset;
+ u32 eint_pend_offset;
++ u32 eint_fltcon_offset;
+ const char *name;
+ };
+
+@@ -175,6 +177,7 @@ struct samsung_pin_bank_data {
+ * @eint_con_offset: ExynosAuto SoC-specific EINT register or interrupt offset of bank.
+ * @eint_mask_offset: ExynosAuto SoC-specific EINT mask register offset of bank.
+ * @eint_pend_offset: ExynosAuto SoC-specific EINT pend register offset of bank.
++ * @eint_fltcon_offset: GS101 SoC-specific EINT filter config register offset.
+ * @name: name to be prefixed for each pin in this pin bank.
+ * @id: id of the bank, propagated to the pin range.
+ * @pin_base: starting pin number of the bank.
+@@ -201,6 +204,7 @@ struct samsung_pin_bank {
+ u32 eint_con_offset;
+ u32 eint_mask_offset;
+ u32 eint_pend_offset;
++ u32 eint_fltcon_offset;
+ const char *name;
+ u32 id;
+
+diff --git a/drivers/platform/chrome/cros_ec_lpc.c b/drivers/platform/chrome/cros_ec_lpc.c
+index 626e2635e3da70..ac198d1fd17073 100644
+--- a/drivers/platform/chrome/cros_ec_lpc.c
++++ b/drivers/platform/chrome/cros_ec_lpc.c
+@@ -30,6 +30,7 @@
+
+ #define DRV_NAME "cros_ec_lpcs"
+ #define ACPI_DRV_NAME "GOOG0004"
++#define FRMW_ACPI_DRV_NAME "FRMWC004"
+
+ /* True if ACPI device is present */
+ static bool cros_ec_lpc_acpi_device_found;
+@@ -460,7 +461,7 @@ static int cros_ec_lpc_probe(struct platform_device *pdev)
+ acpi_status status;
+ struct cros_ec_device *ec_dev;
+ struct cros_ec_lpc *ec_lpc;
+- struct lpc_driver_data *driver_data;
++ const struct lpc_driver_data *driver_data;
+ u8 buf[2] = {};
+ int irq, ret;
+ u32 quirks;
+@@ -472,6 +473,9 @@ static int cros_ec_lpc_probe(struct platform_device *pdev)
+ ec_lpc->mmio_memory_base = EC_LPC_ADDR_MEMMAP;
+
+ driver_data = platform_get_drvdata(pdev);
++ if (!driver_data)
++ driver_data = acpi_device_get_match_data(dev);
++
+ if (driver_data) {
+ quirks = driver_data->quirks;
+
+@@ -625,12 +629,6 @@ static void cros_ec_lpc_remove(struct platform_device *pdev)
+ cros_ec_unregister(ec_dev);
+ }
+
+-static const struct acpi_device_id cros_ec_lpc_acpi_device_ids[] = {
+- { ACPI_DRV_NAME, 0 },
+- { }
+-};
+-MODULE_DEVICE_TABLE(acpi, cros_ec_lpc_acpi_device_ids);
+-
+ static const struct lpc_driver_data framework_laptop_npcx_lpc_driver_data __initconst = {
+ .quirks = CROS_EC_LPC_QUIRK_REMAP_MEMORY,
+ .quirk_mmio_memory_base = 0xE00,
+@@ -642,6 +640,13 @@ static const struct lpc_driver_data framework_laptop_mec_lpc_driver_data __initc
+ .quirk_aml_mutex_name = "ECMT",
+ };
+
++static const struct acpi_device_id cros_ec_lpc_acpi_device_ids[] = {
++ { ACPI_DRV_NAME, 0 },
++ { FRMW_ACPI_DRV_NAME, (kernel_ulong_t)&framework_laptop_npcx_lpc_driver_data },
++ { }
++};
++MODULE_DEVICE_TABLE(acpi, cros_ec_lpc_acpi_device_ids);
++
+ static const struct dmi_system_id cros_ec_lpc_dmi_table[] __initconst = {
+ {
+ /*
+@@ -795,7 +800,8 @@ static int __init cros_ec_lpc_init(void)
+ int ret;
+ const struct dmi_system_id *dmi_match;
+
+- cros_ec_lpc_acpi_device_found = !!cros_ec_lpc_get_device(ACPI_DRV_NAME);
++ cros_ec_lpc_acpi_device_found = !!cros_ec_lpc_get_device(ACPI_DRV_NAME) ||
++ !!cros_ec_lpc_get_device(FRMW_ACPI_DRV_NAME);
+
+ dmi_match = dmi_first_match(cros_ec_lpc_dmi_table);
+
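
The cros_ec_lpc change above lets per-machine quirks arrive either as platform drvdata (set by the DMI table path) or through the ACPI ID table's driver_data field via acpi_device_get_match_data(). A hedged sketch of the fallback pattern follows; the quirk struct and the "ABCD0001" HID are made up for illustration and are not the driver's real names:

    #include <linux/acpi.h>
    #include <linux/module.h>
    #include <linux/platform_device.h>

    struct my_quirks {                 /* hypothetical per-device data */
        u32 flags;
    };

    static const struct my_quirks example_quirks = { .flags = 0x1 };

    static const struct acpi_device_id my_acpi_ids[] = {
        /* driver_data smuggles a pointer through a kernel_ulong_t */
        { "ABCD0001", (kernel_ulong_t)&example_quirks },
        { }
    };
    MODULE_DEVICE_TABLE(acpi, my_acpi_ids);

    static int my_probe(struct platform_device *pdev)
    {
        const struct my_quirks *quirks;

        /* Prefer drvdata set by a board/DMI match, else the ACPI table. */
        quirks = platform_get_drvdata(pdev);
        if (!quirks)
            quirks = acpi_device_get_match_data(&pdev->dev);
        if (quirks)
            dev_info(&pdev->dev, "quirk flags: 0x%x\n", quirks->flags);
        return 0;
    }

    static struct platform_driver my_driver = {
        .probe = my_probe,
        .driver = {
            .name = "my_acpi_example",
            .acpi_match_table = my_acpi_ids,
        },
    };
    module_platform_driver(my_driver);
    MODULE_LICENSE("GPL");
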
+diff --git a/drivers/platform/x86/x86-android-tablets/Kconfig b/drivers/platform/x86/x86-android-tablets/Kconfig
+index 88d9e8f2ff24ec..c98dfbdfb9dda3 100644
+--- a/drivers/platform/x86/x86-android-tablets/Kconfig
++++ b/drivers/platform/x86/x86-android-tablets/Kconfig
+@@ -8,6 +8,7 @@ config X86_ANDROID_TABLETS
+ depends on I2C && SPI && SERIAL_DEV_BUS && ACPI && EFI && GPIOLIB && PMIC_OPREGION
+ select NEW_LEDS
+ select LEDS_CLASS
++ select POWER_SUPPLY
+ help
+ X86 tablets which ship with Android as (part of) the factory image
+ typically have various problems with their DSDTs. The factory kernels
+diff --git a/drivers/pwm/pwm-fsl-ftm.c b/drivers/pwm/pwm-fsl-ftm.c
+index 2510c10ca47303..c45a5fca4cbbd2 100644
+--- a/drivers/pwm/pwm-fsl-ftm.c
++++ b/drivers/pwm/pwm-fsl-ftm.c
+@@ -118,6 +118,9 @@ static unsigned int fsl_pwm_ticks_to_ns(struct fsl_pwm_chip *fpc,
+ unsigned long long exval;
+
+ rate = clk_get_rate(fpc->clk[fpc->period.clk_select]);
++ if (rate >> fpc->period.clk_ps == 0)
++ return 0;
++
+ exval = ticks;
+ exval *= 1000000000UL;
+ do_div(exval, rate >> fpc->period.clk_ps);
+@@ -190,6 +193,9 @@ static unsigned int fsl_pwm_calculate_duty(struct fsl_pwm_chip *fpc,
+ unsigned int period = fpc->period.mod_period + 1;
+ unsigned int period_ns = fsl_pwm_ticks_to_ns(fpc, period);
+
++ if (!period_ns)
++ return 0;
++
+ duty = (unsigned long long)duty_ns * period;
+ do_div(duty, period_ns);
+
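
Both pwm-fsl-ftm hunks above guard a do_div() against a zero divisor: a clock rate of zero, or one shifted down to zero by the prescaler, would otherwise divide by zero, and a zero period would do the same in the duty calculation. A tiny standalone illustration of the guard, with do_div() modeled as plain 64-bit division:

    #include <stdint.h>
    #include <stdio.h>

    /* Convert counter ticks to nanoseconds; 0 means the clock is unusable. */
    static uint64_t ticks_to_ns(uint64_t ticks, unsigned long rate,
                                unsigned int ps)
    {
        /* rate >> ps is the effective counter rate after the prescaler. */
        if ((rate >> ps) == 0)
            return 0;                  /* avoid a divide-by-zero below */

        return ticks * 1000000000ull / (rate >> ps);
    }

    int main(void)
    {
        printf("%llu ns\n", (unsigned long long)ticks_to_ns(100, 66000000, 1));
        printf("%llu ns\n", (unsigned long long)ticks_to_ns(100, 1, 7)); /* guarded */
        return 0;
    }
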
+diff --git a/drivers/pwm/pwm-mediatek.c b/drivers/pwm/pwm-mediatek.c
+index 01dfa0fab80a44..7eaab58314995c 100644
+--- a/drivers/pwm/pwm-mediatek.c
++++ b/drivers/pwm/pwm-mediatek.c
+@@ -121,21 +121,25 @@ static int pwm_mediatek_config(struct pwm_chip *chip, struct pwm_device *pwm,
+ struct pwm_mediatek_chip *pc = to_pwm_mediatek_chip(chip);
+ u32 clkdiv = 0, cnt_period, cnt_duty, reg_width = PWMDWIDTH,
+ reg_thres = PWMTHRES;
++ unsigned long clk_rate;
+ u64 resolution;
+ int ret;
+
+ ret = pwm_mediatek_clk_enable(chip, pwm);
+-
+ if (ret < 0)
+ return ret;
+
++ clk_rate = clk_get_rate(pc->clk_pwms[pwm->hwpwm]);
++ if (!clk_rate)
++ return -EINVAL;
++
+ /* Make sure we use the bus clock and not the 26MHz clock */
+ if (pc->soc->has_ck_26m_sel)
+ writel(0, pc->regs + PWM_CK_26M_SEL);
+
+ /* Using resolution in picosecond gets accuracy higher */
+ resolution = (u64)NSEC_PER_SEC * 1000;
+- do_div(resolution, clk_get_rate(pc->clk_pwms[pwm->hwpwm]));
++ do_div(resolution, clk_rate);
+
+ cnt_period = DIV_ROUND_CLOSEST_ULL((u64)period_ns * 1000, resolution);
+ while (cnt_period > 8191) {
+diff --git a/drivers/pwm/pwm-rcar.c b/drivers/pwm/pwm-rcar.c
+index 2261789cc27dae..578dbdd2d5a721 100644
+--- a/drivers/pwm/pwm-rcar.c
++++ b/drivers/pwm/pwm-rcar.c
+@@ -8,6 +8,7 @@
+ * - The hardware cannot generate a 0% duty cycle.
+ */
+
++#include <linux/bitfield.h>
+ #include <linux/clk.h>
+ #include <linux/err.h>
+ #include <linux/io.h>
+@@ -102,23 +103,24 @@ static void rcar_pwm_set_clock_control(struct rcar_pwm_chip *rp,
+ rcar_pwm_write(rp, value, RCAR_PWMCR);
+ }
+
+-static int rcar_pwm_set_counter(struct rcar_pwm_chip *rp, int div, int duty_ns,
+- int period_ns)
++static int rcar_pwm_set_counter(struct rcar_pwm_chip *rp, int div, u64 duty_ns,
++ u64 period_ns)
+ {
+- unsigned long long one_cycle, tmp; /* 0.01 nanoseconds */
++ unsigned long long tmp;
+ unsigned long clk_rate = clk_get_rate(rp->clk);
+ u32 cyc, ph;
+
+- one_cycle = NSEC_PER_SEC * 100ULL << div;
+- do_div(one_cycle, clk_rate);
++ /* div <= 24 == RCAR_PWM_MAX_DIVISION, so the shift doesn't overflow. */
++ tmp = mul_u64_u64_div_u64(period_ns, clk_rate, (u64)NSEC_PER_SEC << div);
++ if (tmp > FIELD_MAX(RCAR_PWMCNT_CYC0_MASK))
++ tmp = FIELD_MAX(RCAR_PWMCNT_CYC0_MASK);
+
+- tmp = period_ns * 100ULL;
+- do_div(tmp, one_cycle);
+- cyc = (tmp << RCAR_PWMCNT_CYC0_SHIFT) & RCAR_PWMCNT_CYC0_MASK;
++ cyc = FIELD_PREP(RCAR_PWMCNT_CYC0_MASK, tmp);
+
+- tmp = duty_ns * 100ULL;
+- do_div(tmp, one_cycle);
+- ph = tmp & RCAR_PWMCNT_PH0_MASK;
++ tmp = mul_u64_u64_div_u64(duty_ns, clk_rate, (u64)NSEC_PER_SEC << div);
++ if (tmp > FIELD_MAX(RCAR_PWMCNT_PH0_MASK))
++ tmp = FIELD_MAX(RCAR_PWMCNT_PH0_MASK);
++ ph = FIELD_PREP(RCAR_PWMCNT_PH0_MASK, tmp);
+
+ /* Avoid prohibited setting */
+ if (cyc == 0 || ph == 0)
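
The pwm-rcar rewrite above drops the intermediate "0.01 ns per cycle" fixed point in favor of mul_u64_u64_div_u64(), which keeps a 128-bit intermediate so large period values cannot overflow, then clamps the result to the register field with FIELD_MAX() and packs it with FIELD_PREP(). A standalone sketch of the clamp-then-pack idea, modeling the 128-bit multiply with the compiler's unsigned __int128; the field layout here is an example, not the R-Car register:

    #include <stdint.h>
    #include <stdio.h>

    #define CYC_MASK 0xffff0000u       /* example 16-bit field at bits 31:16 */

    /* a * b / c with a 128-bit intermediate, like mul_u64_u64_div_u64(). */
    static uint64_t mul_div(uint64_t a, uint64_t b, uint64_t c)
    {
        return (uint64_t)((unsigned __int128)a * b / c);
    }

    static uint32_t pack_cycles(uint64_t period_ns, uint64_t clk_rate, int div)
    {
        uint64_t max = CYC_MASK >> 16; /* FIELD_MAX(): largest field value */
        uint64_t tmp;

        /* div is bounded, so the shift below cannot overflow 64 bits. */
        tmp = mul_div(period_ns, clk_rate, 1000000000ull << div);
        if (tmp > max)
            tmp = max;                 /* clamp instead of silently wrapping */

        return (uint32_t)(tmp << 16);  /* FIELD_PREP(): shift into place */
    }

    int main(void)
    {
        printf("0x%08x\n", pack_cycles(1000000, 100000000, 4));
        return 0;
    }
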
+diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
+index 21fa7ac849e5c3..4904b831c0a75f 100644
+--- a/drivers/s390/virtio/virtio_ccw.c
++++ b/drivers/s390/virtio/virtio_ccw.c
+@@ -302,11 +302,17 @@ static struct airq_info *new_airq_info(int index)
+ static unsigned long *get_airq_indicator(struct virtqueue *vqs[], int nvqs,
+ u64 *first, void **airq_info)
+ {
+- int i, j;
++ int i, j, queue_idx, highest_queue_idx = -1;
+ struct airq_info *info;
+ unsigned long *indicator_addr = NULL;
+ unsigned long bit, flags;
+
++ /* Array entries without an actual queue pointer must be ignored. */
++ for (i = 0; i < nvqs; i++) {
++ if (vqs[i])
++ highest_queue_idx++;
++ }
++
+ for (i = 0; i < MAX_AIRQ_AREAS && !indicator_addr; i++) {
+ mutex_lock(&airq_areas_lock);
+ if (!airq_areas[i])
+@@ -316,7 +322,7 @@ static unsigned long *get_airq_indicator(struct virtqueue *vqs[], int nvqs,
+ if (!info)
+ return NULL;
+ write_lock_irqsave(&info->lock, flags);
+- bit = airq_iv_alloc(info->aiv, nvqs);
++ bit = airq_iv_alloc(info->aiv, highest_queue_idx + 1);
+ if (bit == -1UL) {
+ /* Not enough vacancies. */
+ write_unlock_irqrestore(&info->lock, flags);
+@@ -325,8 +331,10 @@ static unsigned long *get_airq_indicator(struct virtqueue *vqs[], int nvqs,
+ *first = bit;
+ *airq_info = info;
+ indicator_addr = info->aiv->vector;
+- for (j = 0; j < nvqs; j++) {
+- airq_iv_set_ptr(info->aiv, bit + j,
++ for (j = 0, queue_idx = 0; j < nvqs; j++) {
++ if (!vqs[j])
++ continue;
++ airq_iv_set_ptr(info->aiv, bit + queue_idx++,
+ (unsigned long)vqs[j]);
+ }
+ write_unlock_irqrestore(&info->lock, flags);
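
The virtio_ccw fix above sizes the indicator allocation by counting only non-NULL entries of vqs[] and skips the holes when assigning bits, so a sparse queue array no longer over-allocates or misassigns indicator bits. A small standalone model of the two passes:

    #include <stdio.h>

    int main(void)
    {
        void *vqs[] = { "vq0", NULL, "vq2", "vq3" };  /* sparse queue array */
        int nvqs = 4, i, used = 0, slot = 0;

        /* Pass 1: size the allocation by counting real queues only. */
        for (i = 0; i < nvqs; i++)
            if (vqs[i])
                used++;
        printf("allocate %d indicator bits (not %d)\n", used, nvqs);

        /* Pass 2: assign consecutive bits, skipping the NULL holes. */
        for (i = 0; i < nvqs; i++) {
            if (!vqs[i])
                continue;
            printf("vqs[%d] -> bit %d\n", i, slot++);
        }
        return 0;
    }
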
+diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
+index 1e715fd65a7d4b..ee5a75a4b3bb80 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr.h
++++ b/drivers/scsi/mpi3mr/mpi3mr.h
+@@ -81,13 +81,14 @@ extern atomic64_t event_counter;
+
+ /* Admin queue management definitions */
+ #define MPI3MR_ADMIN_REQ_Q_SIZE (2 * MPI3MR_PAGE_SIZE_4K)
+-#define MPI3MR_ADMIN_REPLY_Q_SIZE (4 * MPI3MR_PAGE_SIZE_4K)
++#define MPI3MR_ADMIN_REPLY_Q_SIZE (8 * MPI3MR_PAGE_SIZE_4K)
+ #define MPI3MR_ADMIN_REQ_FRAME_SZ 128
+ #define MPI3MR_ADMIN_REPLY_FRAME_SZ 16
+
+ /* Operational queue management definitions */
+ #define MPI3MR_OP_REQ_Q_QD 512
+ #define MPI3MR_OP_REP_Q_QD 1024
++#define MPI3MR_OP_REP_Q_QD2K 2048
+ #define MPI3MR_OP_REP_Q_QD4K 4096
+ #define MPI3MR_OP_REQ_Q_SEG_SIZE 4096
+ #define MPI3MR_OP_REP_Q_SEG_SIZE 4096
+@@ -329,6 +330,7 @@ enum mpi3mr_reset_reason {
+ #define MPI3MR_RESET_REASON_OSTYPE_SHIFT 28
+ #define MPI3MR_RESET_REASON_IOCNUM_SHIFT 20
+
+ /* Queue type definitions */
+ enum queue_type {
+ MPI3MR_DEFAULT_QUEUE = 0,
+@@ -388,6 +390,7 @@ struct mpi3mr_ioc_facts {
+ u16 max_msix_vectors;
+ u8 personality;
+ u8 dma_mask;
++ bool max_req_limit;
+ u8 protocol_flags;
+ u8 sge_mod_mask;
+ u8 sge_mod_value;
+@@ -457,6 +460,8 @@ struct op_req_qinfo {
+ * @enable_irq_poll: Flag to indicate polling is enabled
+ * @in_use: Queue is handled by poll/ISR
+ * @qtype: Type of queue (types defined in enum queue_type)
++ * @qfull_watermark: Watermark on pending replies at which the
++ * reply queue is treated as nearly full
+ */
+ struct op_reply_qinfo {
+ u16 ci;
+@@ -472,6 +477,7 @@ struct op_reply_qinfo {
+ bool enable_irq_poll;
+ atomic_t in_use;
+ enum queue_type qtype;
++ u16 qfull_watermark;
+ };
+
+ /**
+@@ -1091,6 +1097,7 @@ struct scmd_priv {
+ * @ts_update_interval: Timestamp update interval
+ * @reset_in_progress: Reset in progress flag
+ * @unrecoverable: Controller unrecoverable flag
+ * @io_admin_reset_sync: Flag to quiesce I/O and admin reply processing during reset
+ * @prev_reset_result: Result of previous reset
+ * @reset_mutex: Controller reset mutex
+ * @reset_waitq: Controller reset wait queue
+@@ -1154,6 +1161,8 @@ struct scmd_priv {
+ * @snapdump_trigger_active: Snapdump trigger active flag
+ * @pci_err_recovery: PCI error recovery in progress
+ * @block_on_pci_err: Block IO during PCI error recovery
+ * @reply_qfull_count: Occurrences of reply queue full avoidance kicking in
+ * @prevent_reply_qfull: Enable reply queue full prevention
+ */
+ struct mpi3mr_ioc {
+ struct list_head list;
+@@ -1277,6 +1286,7 @@ struct mpi3mr_ioc {
+ u16 ts_update_interval;
+ u8 reset_in_progress;
+ u8 unrecoverable;
++ u8 io_admin_reset_sync;
+ int prev_reset_result;
+ struct mutex reset_mutex;
+ wait_queue_head_t reset_waitq;
+@@ -1352,6 +1362,8 @@ struct mpi3mr_ioc {
+ bool fw_release_trigger_active;
+ bool pci_err_recovery;
+ bool block_on_pci_err;
++ atomic_t reply_qfull_count;
++ bool prevent_reply_qfull;
+ };
+
+ /**
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_app.c b/drivers/scsi/mpi3mr/mpi3mr_app.c
+index 7589f48aebc80f..1532436f0f3af1 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_app.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_app.c
+@@ -3060,6 +3060,29 @@ reply_queue_count_show(struct device *dev, struct device_attribute *attr,
+
+ static DEVICE_ATTR_RO(reply_queue_count);
+
++/**
++ * reply_qfull_count_show - Show reply qfull count
++ * @dev: class device
++ * @attr: Device attributes
++ * @buf: Buffer to copy
++ *
++ * Retrieves the current value of the reply_qfull_count from the mrioc structure and
++ * formats it as a string for display.
++ *
++ * Return: sysfs_emit() return
++ */
++static ssize_t
++reply_qfull_count_show(struct device *dev, struct device_attribute *attr,
++ char *buf)
++{
++ struct Scsi_Host *shost = class_to_shost(dev);
++ struct mpi3mr_ioc *mrioc = shost_priv(shost);
++
++ return sysfs_emit(buf, "%u\n", atomic_read(&mrioc->reply_qfull_count));
++}
++
++static DEVICE_ATTR_RO(reply_qfull_count);
++
+ /**
+ * logging_level_show - Show controller debug level
+ * @dev: class device
+@@ -3152,6 +3175,7 @@ static struct attribute *mpi3mr_host_attrs[] = {
+ &dev_attr_fw_queue_depth.attr,
+ &dev_attr_op_req_q_count.attr,
+ &dev_attr_reply_queue_count.attr,
++ &dev_attr_reply_qfull_count.attr,
+ &dev_attr_logging_level.attr,
+ &dev_attr_adp_state.attr,
+ NULL,
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_fw.c b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+index 5ed31fe57474a3..ec5b1ab2871776 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_fw.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_fw.c
+@@ -17,7 +17,7 @@ static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc,
+ struct mpi3_ioc_facts_data *facts_data);
+ static void mpi3mr_pel_wait_complete(struct mpi3mr_ioc *mrioc,
+ struct mpi3mr_drv_cmd *drv_cmd);
+-
++static int mpi3mr_check_op_admin_proc(struct mpi3mr_ioc *mrioc);
+ static int poll_queues;
+ module_param(poll_queues, int, 0444);
+ MODULE_PARM_DESC(poll_queues, "Number of queues for io_uring poll mode. (Range 1 - 126)");
+@@ -459,7 +459,7 @@ int mpi3mr_process_admin_reply_q(struct mpi3mr_ioc *mrioc)
+ }
+
+ do {
+- if (mrioc->unrecoverable)
++ if (mrioc->unrecoverable || mrioc->io_admin_reset_sync)
+ break;
+
+ mrioc->admin_req_ci = le16_to_cpu(reply_desc->request_queue_ci);
+@@ -554,7 +554,7 @@ int mpi3mr_process_op_reply_q(struct mpi3mr_ioc *mrioc,
+ }
+
+ do {
+- if (mrioc->unrecoverable)
++ if (mrioc->unrecoverable || mrioc->io_admin_reset_sync)
+ break;
+
+ req_q_idx = le16_to_cpu(reply_desc->request_queue_id) - 1;
+@@ -2104,15 +2104,22 @@ static int mpi3mr_create_op_reply_q(struct mpi3mr_ioc *mrioc, u16 qidx)
+ }
+
+ reply_qid = qidx + 1;
+- op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD;
+- if ((mrioc->pdev->device == MPI3_MFGPAGE_DEVID_SAS4116) &&
+- !mrioc->pdev->revision)
+- op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD4K;
++
++ if (mrioc->pdev->device == MPI3_MFGPAGE_DEVID_SAS4116) {
++ if (mrioc->pdev->revision)
++ op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD;
++ else
++ op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD4K;
++ } else
++ op_reply_q->num_replies = MPI3MR_OP_REP_Q_QD2K;
++
+ op_reply_q->ci = 0;
+ op_reply_q->ephase = 1;
+ atomic_set(&op_reply_q->pend_ios, 0);
+ atomic_set(&op_reply_q->in_use, 0);
+ op_reply_q->enable_irq_poll = false;
++ op_reply_q->qfull_watermark =
++ op_reply_q->num_replies - (MPI3MR_THRESHOLD_REPLY_COUNT * 2);
+
+ if (!op_reply_q->q_segments) {
+ retval = mpi3mr_alloc_op_reply_q_segments(mrioc, qidx);
+@@ -2416,8 +2423,10 @@ int mpi3mr_op_request_post(struct mpi3mr_ioc *mrioc,
+ void *segment_base_addr;
+ u16 req_sz = mrioc->facts.op_req_sz;
+ struct segments *segments = op_req_q->q_segments;
++ struct op_reply_qinfo *op_reply_q = NULL;
+
+ reply_qidx = op_req_q->reply_qid - 1;
++ op_reply_q = mrioc->op_reply_qinfo + reply_qidx;
+
+ if (mrioc->unrecoverable)
+ return -EFAULT;
+@@ -2448,6 +2457,15 @@ int mpi3mr_op_request_post(struct mpi3mr_ioc *mrioc,
+ goto out;
+ }
+
++ /* Reply queue is nearing full, push back I/Os to the SCSI midlayer */
++ if ((mrioc->prevent_reply_qfull == true) &&
++ (atomic_read(&op_reply_q->pend_ios) >
++ (op_reply_q->qfull_watermark))) {
++ atomic_inc(&mrioc->reply_qfull_count);
++ retval = -EAGAIN;
++ goto out;
++ }
++
+ segment_base_addr = segments[pi / op_req_q->segment_qd].segment;
+ req_entry = (u8 *)segment_base_addr +
+ ((pi % op_req_q->segment_qd) * req_sz);
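
Taken together, the mpi3mr reply-queue changes size each queue, set qfull_watermark a couple of threshold units below capacity, and make the request-post path return -EAGAIN once pending replies cross the watermark, so the SCSI midlayer retries instead of overflowing the reply queue. A compact standalone model of that backpressure check (depth and threshold values are illustrative):

    #include <stdio.h>

    #define QUEUE_DEPTH     2048
    #define THRESHOLD       50
    #define WATERMARK       (QUEUE_DEPTH - 2 * THRESHOLD)

    static int pend_ios;        /* replies posted but not yet consumed */
    static int qfull_hits;      /* how often the guard kicked in */

    static int post_request(void)
    {
        /* Push back to the submitter while the reply queue is nearly full. */
        if (pend_ios > WATERMARK) {
            qfull_hits++;
            return -1;          /* kernel code returns -EAGAIN here */
        }
        pend_ios++;             /* request posted; a reply will follow */
        return 0;
    }

    int main(void)
    {
        int i, rejected = 0;

        for (i = 0; i < QUEUE_DEPTH + 100; i++)
            if (post_request())
                rejected++;

        printf("rejected %d requests, qfull hits %d\n", rejected, qfull_hits);
        return 0;
    }
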
+@@ -3091,6 +3109,9 @@ static void mpi3mr_process_factsdata(struct mpi3mr_ioc *mrioc,
+ mrioc->facts.dma_mask = (facts_flags &
+ MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_MASK) >>
+ MPI3_IOCFACTS_FLAGS_DMA_ADDRESS_WIDTH_SHIFT;
+ mrioc->facts.protocol_flags = facts_data->protocol_flags;
+ mrioc->facts.mpi_version = le32_to_cpu(facts_data->mpi_version.word);
+ mrioc->facts.max_reqs = le16_to_cpu(facts_data->max_outstanding_requests);
+@@ -4214,6 +4235,9 @@ int mpi3mr_init_ioc(struct mpi3mr_ioc *mrioc)
+ mrioc->shost->transportt = mpi3mr_transport_template;
+ }
+
++ if (mrioc->facts.max_req_limit)
++ mrioc->prevent_reply_qfull = true;
++
+ mrioc->reply_sz = mrioc->facts.reply_sz;
+
+ retval = mpi3mr_check_reset_dma_mask(mrioc);
+@@ -4370,6 +4394,7 @@ int mpi3mr_reinit_ioc(struct mpi3mr_ioc *mrioc, u8 is_resume)
+ goto out_failed_noretry;
+ }
+
++ mrioc->io_admin_reset_sync = 0;
+ if (is_resume || mrioc->block_on_pci_err) {
+ dprint_reset(mrioc, "setting up single ISR\n");
+ retval = mpi3mr_setup_isr(mrioc, 1);
+@@ -5228,6 +5253,55 @@ void mpi3mr_pel_get_seqnum_complete(struct mpi3mr_ioc *mrioc,
+ drv_cmd->retry_count = 0;
+ }
+
++/**
++ * mpi3mr_check_op_admin_proc - Check if reply queues are still in use
++ * @mrioc: Adapter instance reference
++ *
++ * Check whether any of the operational reply queues or the admin
++ * reply queue is currently in use. If so, wait up to 10 seconds
++ * for the queues to become idle.
++ *
++ * Return: 0 on success, non-zero on failure.
++ */
++static int mpi3mr_check_op_admin_proc(struct mpi3mr_ioc *mrioc)
++{
++ u16 timeout = 10 * 10; /* 10 seconds, polled in 100ms steps */
++ u16 elapsed_time = 0;
++ bool op_admin_in_use = false;
++
++ do {
++ op_admin_in_use = false;
++
++ /* Check admin_reply queue first to exit early */
++ if (atomic_read(&mrioc->admin_reply_q_in_use) == 1)
++ op_admin_in_use = true;
++ else {
++ /* Check op_reply queues */
++ int i;
++
++ for (i = 0; i < mrioc->num_queues; i++) {
++ if (atomic_read(&mrioc->op_reply_qinfo[i].in_use) == 1) {
++ op_admin_in_use = true;
++ break;
++ }
++ }
++ }
++
++ if (!op_admin_in_use)
++ break;
++
++ msleep(100);
++
++ } while (++elapsed_time < timeout);
++
++ if (op_admin_in_use)
++ return 1;
++
++ return 0;
++}
++
+ /**
+ * mpi3mr_soft_reset_handler - Reset the controller
+ * @mrioc: Adapter instance reference
+@@ -5308,6 +5382,7 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,
+ mpi3mr_wait_for_host_io(mrioc, MPI3MR_RESET_HOST_IOWAIT_TIMEOUT);
+
+ mpi3mr_ioc_disable_intr(mrioc);
++ mrioc->io_admin_reset_sync = 1;
+
+ if (snapdump) {
+ mpi3mr_set_diagsave(mrioc);
+@@ -5335,6 +5410,16 @@ int mpi3mr_soft_reset_handler(struct mpi3mr_ioc *mrioc,
+ ioc_err(mrioc, "Failed to issue soft reset to the ioc\n");
+ goto out;
+ }
++
++ retval = mpi3mr_check_op_admin_proc(mrioc);
++ if (retval) {
++ ioc_err(mrioc, "Soft reset failed due to an Admin or I/O queue polling\n"
++ "thread still processing replies even after a 10 second\n"
++ "timeout. Marking the controller as unrecoverable!\n");
++
++ goto out;
++ }
++
+ if (mrioc->num_io_throttle_group !=
+ mrioc->facts.max_io_throttle_group) {
+ ioc_err(mrioc,
+diff --git a/drivers/scsi/st.c b/drivers/scsi/st.c
+index 0dc37fc6f23678..a17441635ff3ab 100644
+--- a/drivers/scsi/st.c
++++ b/drivers/scsi/st.c
+@@ -4119,7 +4119,7 @@ static void validate_options(void)
+ */
+ static int __init st_setup(char *str)
+ {
+- int i, len, ints[5];
++ int i, len, ints[ARRAY_SIZE(parms) + 1];
+ char *stp;
+
+ stp = get_options(str, ARRAY_SIZE(ints), ints);
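
The st_setup() fix works because get_options() stores the number of parsed integers in ints[0] and the values in ints[1..], so the array needs one slot more than the parameter count; sizing it as ARRAY_SIZE(parms) + 1 keeps that invariant if parms[] ever grows, instead of relying on the magic 5. A userspace model of the contract, with an illustrative subset of the parameter names:

    #include <stdio.h>

    #define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

    static const char *parms[] = { "buffer_kbs", "max_sg_segs", "try_direct_io" };

    int main(void)
    {
        /* Slot 0 holds the count, slots 1..N the values, as get_options() does. */
        int ints[ARRAY_SIZE(parms) + 1] = { 3, 32, 256, 1 };
        int i;

        for (i = 0; i < ints[0] && i < (int)ARRAY_SIZE(parms); i++)
            printf("%s = %d\n", parms[i], ints[i + 1]);
        return 0;
    }
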
+diff --git a/drivers/soc/samsung/exynos-chipid.c b/drivers/soc/samsung/exynos-chipid.c
+index b1118d37779e46..bba8d86ae1bb06 100644
+--- a/drivers/soc/samsung/exynos-chipid.c
++++ b/drivers/soc/samsung/exynos-chipid.c
+@@ -131,6 +131,8 @@ static int exynos_chipid_probe(struct platform_device *pdev)
+
+ soc_dev_attr->revision = devm_kasprintf(&pdev->dev, GFP_KERNEL,
+ "%x", soc_info.revision);
++ if (!soc_dev_attr->revision)
++ return -ENOMEM;
+ soc_dev_attr->soc_id = product_id_to_soc_id(soc_info.product_id);
+ if (!soc_dev_attr->soc_id) {
+ pr_err("Unknown SoC\n");
+diff --git a/drivers/spi/spi-cadence-quadspi.c b/drivers/spi/spi-cadence-quadspi.c
+index 73b1edd0531b43..f9463f263fba16 100644
+--- a/drivers/spi/spi-cadence-quadspi.c
++++ b/drivers/spi/spi-cadence-quadspi.c
+@@ -1634,6 +1634,12 @@ static int cqspi_request_mmap_dma(struct cqspi_st *cqspi)
+ int ret = PTR_ERR(cqspi->rx_chan);
+
+ cqspi->rx_chan = NULL;
++ if (ret == -ENODEV) {
++ /* DMA support is not mandatory */
++ dev_info(&cqspi->pdev->dev, "No Rx DMA available\n");
++ return 0;
++ }
++
+ return dev_err_probe(&cqspi->pdev->dev, ret, "No Rx DMA available\n");
+ }
+ init_completion(&cqspi->rx_dma_complete);
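
In the cqspi hunk above, -ENODEV from the DMA request now means "no channel wired up", which this controller tolerates by falling back to PIO, while every other error, including -EPROBE_DEFER, still fails the probe. A hedged kernel-style sketch of that optional-channel pattern; the helper name is invented for the example, but dma_request_chan_by_mask() and dev_err_probe() are the real APIs:

    #include <linux/device.h>
    #include <linux/dmaengine.h>

    /* Request an optional Rx DMA channel; absence is not a probe failure. */
    static int request_optional_rx_dma(struct device *dev, struct dma_chan **chan)
    {
        dma_cap_mask_t mask;
        int ret;

        dma_cap_zero(mask);
        dma_cap_set(DMA_MEMCPY, mask);

        *chan = dma_request_chan_by_mask(&mask);
        if (IS_ERR(*chan)) {
            ret = PTR_ERR(*chan);
            *chan = NULL;
            if (ret == -ENODEV) {
                /* No channel exists: fall back to PIO quietly. */
                dev_info(dev, "no Rx DMA available, using PIO\n");
                return 0;
            }
            /* -EPROBE_DEFER and real errors still fail the probe. */
            return dev_err_probe(dev, ret, "Rx DMA request failed\n");
        }
        return 0;
    }
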
+diff --git a/drivers/target/target_core_spc.c b/drivers/target/target_core_spc.c
+index ea14a38356814d..61c065702350e0 100644
+--- a/drivers/target/target_core_spc.c
++++ b/drivers/target/target_core_spc.c
+@@ -2243,7 +2243,7 @@ spc_emulate_report_supp_op_codes(struct se_cmd *cmd)
+ response_length += spc_rsoc_encode_command_descriptor(
+ &buf[response_length], rctd, descr);
+ }
+- put_unaligned_be32(response_length - 3, buf);
++ put_unaligned_be32(response_length - 4, buf);
+ } else {
+ response_length = spc_rsoc_encode_one_command_descriptor(
+ &buf[response_length], rctd, descr,
+diff --git a/drivers/thermal/mediatek/lvts_thermal.c b/drivers/thermal/mediatek/lvts_thermal.c
+index 1997e91bb3be94..4b3225377e8f8f 100644
+--- a/drivers/thermal/mediatek/lvts_thermal.c
++++ b/drivers/thermal/mediatek/lvts_thermal.c
+@@ -65,7 +65,7 @@
+ #define LVTS_HW_FILTER 0x0
+ #define LVTS_TSSEL_CONF 0x13121110
+ #define LVTS_CALSCALE_CONF 0x300
+-#define LVTS_MONINT_CONF 0x8300318C
++#define LVTS_MONINT_CONF 0x0300318C
+
+ #define LVTS_MONINT_OFFSET_SENSOR0 0xC
+ #define LVTS_MONINT_OFFSET_SENSOR1 0x180
+@@ -91,8 +91,6 @@
+ #define LVTS_MSR_READ_TIMEOUT_US 400
+ #define LVTS_MSR_READ_WAIT_US (LVTS_MSR_READ_TIMEOUT_US / 2)
+
+-#define LVTS_HW_TSHUT_TEMP 105000
+-
+ #define LVTS_MINIMUM_THRESHOLD 20000
+
+ static int golden_temp = LVTS_GOLDEN_TEMP_DEFAULT;
+@@ -145,7 +143,6 @@ struct lvts_ctrl {
+ struct lvts_sensor sensors[LVTS_SENSOR_MAX];
+ const struct lvts_data *lvts_data;
+ u32 calibration[LVTS_SENSOR_MAX];
+- u32 hw_tshut_raw_temp;
+ u8 valid_sensor_mask;
+ int mode;
+ void __iomem *base;
+@@ -837,14 +834,6 @@ static int lvts_ctrl_init(struct device *dev, struct lvts_domain *lvts_td,
+ */
+ lvts_ctrl[i].mode = lvts_data->lvts_ctrl[i].mode;
+
+- /*
+- * The temperature to raw temperature must be done
+- * after initializing the calibration.
+- */
+- lvts_ctrl[i].hw_tshut_raw_temp =
+- lvts_temp_to_raw(LVTS_HW_TSHUT_TEMP,
+- lvts_data->temp_factor);
+-
+ lvts_ctrl[i].low_thresh = INT_MIN;
+ lvts_ctrl[i].high_thresh = INT_MIN;
+ }
+@@ -860,6 +849,32 @@ static int lvts_ctrl_init(struct device *dev, struct lvts_domain *lvts_td,
+ return 0;
+ }
+
++static void lvts_ctrl_monitor_enable(struct device *dev, struct lvts_ctrl *lvts_ctrl, bool enable)
++{
++ /*
++ * Bitmaps to enable each sensor on filtered mode in the MONCTL0
++ * register.
++ */
++ static const u8 sensor_filt_bitmap[] = { BIT(0), BIT(1), BIT(2), BIT(3) };
++ u32 sensor_map = 0;
++ int i;
++
++ if (lvts_ctrl->mode != LVTS_MSR_FILTERED_MODE)
++ return;
++
++ if (enable) {
++ lvts_for_each_valid_sensor(i, lvts_ctrl)
++ sensor_map |= sensor_filt_bitmap[i];
++ }
++
++ /*
++ * Bits:
++ * 9: Single point access flow
++ * 0-3: Enable sensing point 0-3
++ */
++ writel(sensor_map | BIT(9), LVTS_MONCTL0(lvts_ctrl->base));
++}
++
+ /*
+ * At this point the configuration register is the only place in the
+ * driver where we write multiple values. Per hardware constraint,
+@@ -893,7 +908,6 @@ static int lvts_irq_init(struct lvts_ctrl *lvts_ctrl)
+ * 10 : Selected sensor with bits 19-18
+ * 11 : Reserved
+ */
+- writel(BIT(16), LVTS_PROTCTL(lvts_ctrl->base));
+
+ /*
+ * LVTS_PROTTA : Stage 1 temperature threshold
+@@ -906,8 +920,8 @@ static int lvts_irq_init(struct lvts_ctrl *lvts_ctrl)
+ *
+ * writel(0x0, LVTS_PROTTA(lvts_ctrl->base));
+ * writel(0x0, LVTS_PROTTB(lvts_ctrl->base));
++ * writel(0x0, LVTS_PROTTC(lvts_ctrl->base));
+ */
+- writel(lvts_ctrl->hw_tshut_raw_temp, LVTS_PROTTC(lvts_ctrl->base));
+
+ /*
+ * LVTS_MONINT : Interrupt configuration register
+@@ -1381,8 +1395,11 @@ static int lvts_suspend(struct device *dev)
+
+ lvts_td = dev_get_drvdata(dev);
+
+- for (i = 0; i < lvts_td->num_lvts_ctrl; i++)
++ for (i = 0; i < lvts_td->num_lvts_ctrl; i++) {
++ lvts_ctrl_monitor_enable(dev, &lvts_td->lvts_ctrl[i], false);
++ usleep_range(100, 200);
+ lvts_ctrl_set_enable(&lvts_td->lvts_ctrl[i], false);
++ }
+
+ clk_disable_unprepare(lvts_td->clk);
+
+@@ -1400,8 +1417,11 @@ static int lvts_resume(struct device *dev)
+ if (ret)
+ return ret;
+
+- for (i = 0; i < lvts_td->num_lvts_ctrl; i++)
++ for (i = 0; i < lvts_td->num_lvts_ctrl; i++) {
+ lvts_ctrl_set_enable(&lvts_td->lvts_ctrl[i], true);
++ usleep_range(100, 200);
++ lvts_ctrl_monitor_enable(dev, &lvts_td->lvts_ctrl[i], true);
++ }
+
+ return 0;
+ }
+diff --git a/drivers/thermal/rockchip_thermal.c b/drivers/thermal/rockchip_thermal.c
+index 086ed42dd16cd4..a84f48a752d159 100644
+--- a/drivers/thermal/rockchip_thermal.c
++++ b/drivers/thermal/rockchip_thermal.c
+@@ -386,6 +386,7 @@ static const struct tsadc_table rk3328_code_table[] = {
+ {296, -40000},
+ {304, -35000},
+ {313, -30000},
++ {322, -25000},
+ {331, -20000},
+ {340, -15000},
+ {349, -10000},
+diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
+index 8455f08f5d4060..61424342c09641 100644
+--- a/drivers/vdpa/mlx5/core/mr.c
++++ b/drivers/vdpa/mlx5/core/mr.c
+@@ -190,9 +190,12 @@ static void fill_indir(struct mlx5_vdpa_dev *mvdev, struct mlx5_vdpa_mr *mkey, v
+ klm->bcount = cpu_to_be32(klm_bcount(dmr->end - dmr->start));
+ preve = dmr->end;
+ } else {
++ u64 bcount = min_t(u64, dmr->start - preve, MAX_KLM_SIZE);
++
+ klm->key = cpu_to_be32(mvdev->res.null_mkey);
+- klm->bcount = cpu_to_be32(klm_bcount(dmr->start - preve));
+- preve = dmr->start;
++ klm->bcount = cpu_to_be32(klm_bcount(bcount));
++ preve += bcount;
++
+ goto again;
+ }
+ }
+diff --git a/drivers/video/backlight/led_bl.c b/drivers/video/backlight/led_bl.c
+index c7aefcd6e4e3e3..78260060184575 100644
+--- a/drivers/video/backlight/led_bl.c
++++ b/drivers/video/backlight/led_bl.c
+@@ -229,8 +229,11 @@ static void led_bl_remove(struct platform_device *pdev)
+ backlight_device_unregister(bl);
+
+ led_bl_power_off(priv);
+- for (i = 0; i < priv->nb_leds; i++)
++ for (i = 0; i < priv->nb_leds; i++) {
++ mutex_lock(&priv->leds[i]->led_access);
+ led_sysfs_enable(priv->leds[i]);
++ mutex_unlock(&priv->leds[i]->led_access);
++ }
+ }
+
+ static const struct of_device_id led_bl_of_match[] = {
+diff --git a/drivers/video/fbdev/omap2/omapfb/dss/dispc.c b/drivers/video/fbdev/omap2/omapfb/dss/dispc.c
+index 5832485ab998c4..c29b6236952b31 100644
+--- a/drivers/video/fbdev/omap2/omapfb/dss/dispc.c
++++ b/drivers/video/fbdev/omap2/omapfb/dss/dispc.c
+@@ -2749,9 +2749,13 @@ int dispc_ovl_setup(enum omap_plane plane, const struct omap_overlay_info *oi,
+ bool mem_to_mem)
+ {
+ int r;
+- enum omap_overlay_caps caps = dss_feat_get_overlay_caps(plane);
++ enum omap_overlay_caps caps;
+ enum omap_channel channel;
+
++ if (plane == OMAP_DSS_WB)
++ return -EINVAL;
++
++ caps = dss_feat_get_overlay_caps(plane);
+ channel = dispc_ovl_get_channel_out(plane);
+
+ DSSDBG("dispc_ovl_setup %d, pa %pad, pa_uv %pad, sw %d, %d,%d, %dx%d ->"
+diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
+index 528395133b4f8e..4bd31242bd773c 100644
+--- a/drivers/xen/balloon.c
++++ b/drivers/xen/balloon.c
+@@ -675,7 +675,7 @@ void xen_free_ballooned_pages(unsigned int nr_pages, struct page **pages)
+ }
+ EXPORT_SYMBOL(xen_free_ballooned_pages);
+
+-static void __init balloon_add_regions(void)
++static int __init balloon_add_regions(void)
+ {
+ unsigned long start_pfn, pages;
+ unsigned long pfn, extra_pfn_end;
+@@ -698,26 +698,38 @@ static void __init balloon_add_regions(void)
+ for (pfn = start_pfn; pfn < extra_pfn_end; pfn++)
+ balloon_append(pfn_to_page(pfn));
+
+- balloon_stats.total_pages += extra_pfn_end - start_pfn;
++ /*
++ * Extra regions are accounted for in the physmap, but must be
++ * subtracted from current_pages to balloon down the initial
++ * allocation, because they are already included in total_pages.
++ */
++ if (extra_pfn_end - start_pfn >= balloon_stats.current_pages) {
++ WARN(1, "Extra pages underflow current target");
++ return -ERANGE;
++ }
++ balloon_stats.current_pages -= extra_pfn_end - start_pfn;
+ }
++
++ return 0;
+ }
+
+ static int __init balloon_init(void)
+ {
+ struct task_struct *task;
++ int rc;
+
+ if (!xen_domain())
+ return -ENODEV;
+
+ pr_info("Initialising balloon driver\n");
+
+-#ifdef CONFIG_XEN_PV
+- balloon_stats.current_pages = xen_pv_domain()
+- ? min(xen_start_info->nr_pages - xen_released_pages, max_pfn)
+- : get_num_physpages();
+-#else
+- balloon_stats.current_pages = get_num_physpages();
+-#endif
++ if (xen_released_pages >= get_num_physpages()) {
++ WARN(1, "Released pages underflow current target");
++ return -ERANGE;
++ }
++
++ balloon_stats.current_pages = get_num_physpages() - xen_released_pages;
+ balloon_stats.target_pages = balloon_stats.current_pages;
+ balloon_stats.balloon_low = 0;
+ balloon_stats.balloon_high = 0;
+@@ -734,7 +746,9 @@ static int __init balloon_init(void)
+ register_sysctl_init("xen/balloon", balloon_table);
+ #endif
+
+- balloon_add_regions();
++ rc = balloon_add_regions();
++ if (rc)
++ return rc;
+
+ task = kthread_run(balloon_thread, NULL, "xen-balloon");
+ if (IS_ERR(task)) {
+diff --git a/drivers/xen/xenfs/xensyms.c b/drivers/xen/xenfs/xensyms.c
+index b799bc759c15f4..088b7f02c35866 100644
+--- a/drivers/xen/xenfs/xensyms.c
++++ b/drivers/xen/xenfs/xensyms.c
+@@ -48,7 +48,7 @@ static int xensyms_next_sym(struct xensyms *xs)
+ return -ENOMEM;
+
+ set_xen_guest_handle(symdata->name, xs->name);
+- symdata->symnum--; /* Rewind */
++ symdata->symnum = symnum; /* Rewind */
+
+ ret = HYPERVISOR_platform_op(&xs->op);
+ if (ret < 0)
+@@ -78,7 +78,7 @@ static void *xensyms_next(struct seq_file *m, void *p, loff_t *pos)
+ {
+ struct xensyms *xs = m->private;
+
+- xs->op.u.symdata.symnum = ++(*pos);
++ *pos = xs->op.u.symdata.symnum;
+
+ if (xensyms_next_sym(xs))
+ return NULL;
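
Both xensyms hunks above repair seq_file iteration: xensyms_next_sym() must rewind symnum to the index it asked for, because the hypercall advances it past the symbol returned, and ->next() must derive *pos from that hypervisor-maintained cursor instead of incrementing blindly, or sparse symbol numbering skips entries. A tiny standalone model of cursor-follows-backend iteration:

    #include <stdio.h>

    /* Pretend hypercall: returns the next symbol at or after *cursor and
     * advances *cursor past it; sparse numbering leaves holes. */
    static int backend_next(unsigned long *cursor, const char **name)
    {
        static const char *syms[] = { "sym0", 0, 0, "sym3", "sym4" };

        while (*cursor < 5 && !syms[*cursor])
            (*cursor)++;
        if (*cursor >= 5)
            return -1;
        *name = syms[(*cursor)++];
        return 0;
    }

    int main(void)
    {
        unsigned long pos = 0;  /* seq_file position mirrors the cursor */
        const char *name;

        /* Let the backend drive the position instead of doing pos++. */
        while (!backend_next(&pos, &name))
            printf("%lu: %s\n", pos - 1, name);
        return 0;
    }
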
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index 563f106774e592..19e5f8eaae772d 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -4274,6 +4274,18 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ */
+ btrfs_flush_workqueue(fs_info->delalloc_workers);
+
++ /*
++ * When finishing a compressed write bio we schedule a work queue item
++ * to finish an ordered extent - btrfs_finish_compressed_write_work()
++ * calls btrfs_finish_ordered_extent() which in turns does a call to
++ * btrfs_queue_ordered_fn(), and that queues the ordered extent
++ * completion either in the endio_write_workers work queue or in the
++ * fs_info->endio_freespace_worker work queue. We flush those queues
++ * below, so before we flush them we must flush this queue for the
++ * workers of compressed writes.
++ */
++ flush_workqueue(fs_info->compressed_write_workers);
++
+ /*
+ * After we parked the cleaner kthread, ordered extents may have
+ * completed and created new delayed iputs. If one of the async reclaim
+diff --git a/fs/btrfs/extent-tree.c b/fs/btrfs/extent-tree.c
+index f3e93ba7ec97fa..4ceffbef32987b 100644
+--- a/fs/btrfs/extent-tree.c
++++ b/fs/btrfs/extent-tree.c
+@@ -2897,7 +2897,15 @@ int btrfs_finish_extent_commit(struct btrfs_trans_handle *trans)
+ block_group->length,
+ &trimmed);
+
++ /*
++ * Not strictly necessary to lock, as the block_group should be
++ * read-only from btrfs_delete_unused_bgs().
++ */
++ ASSERT(block_group->ro);
++ spin_lock(&fs_info->unused_bgs_lock);
+ list_del_init(&block_group->bg_list);
++ spin_unlock(&fs_info->unused_bgs_lock);
++
+ btrfs_unfreeze_block_group(block_group);
+ btrfs_put_block_group(block_group);
+
+diff --git a/fs/btrfs/tests/extent-map-tests.c b/fs/btrfs/tests/extent-map-tests.c
+index 56e61ac1cc64c8..609bb6c9c0873f 100644
+--- a/fs/btrfs/tests/extent-map-tests.c
++++ b/fs/btrfs/tests/extent-map-tests.c
+@@ -1045,6 +1045,7 @@ static int test_rmap_block(struct btrfs_fs_info *fs_info,
+ ret = btrfs_add_chunk_map(fs_info, map);
+ if (ret) {
+ test_err("error adding chunk map to mapping tree");
++ btrfs_free_chunk_map(map);
+ goto out_free;
+ }
+
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index 82dd9ee89fbc5b..24806e19c7c410 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -161,7 +161,13 @@ void btrfs_put_transaction(struct btrfs_transaction *transaction)
+ cache = list_first_entry(&transaction->deleted_bgs,
+ struct btrfs_block_group,
+ bg_list);
++ /*
++ * Not strictly necessary to lock, as no other task will be using a
++ * block_group on the deleted_bgs list during a transaction abort.
++ */
++ spin_lock(&transaction->fs_info->unused_bgs_lock);
+ list_del_init(&cache->bg_list);
++ spin_unlock(&transaction->fs_info->unused_bgs_lock);
+ btrfs_unfreeze_block_group(cache);
+ btrfs_put_block_group(cache);
+ }
+@@ -2099,7 +2105,13 @@ static void btrfs_cleanup_pending_block_groups(struct btrfs_trans_handle *trans)
+
+ list_for_each_entry_safe(block_group, tmp, &trans->new_bgs, bg_list) {
+ btrfs_dec_delayed_refs_rsv_bg_inserts(fs_info);
++ /*
++ * Not strictly necessary to lock, as no other task will be using a
++ * block_group on the new_bgs list during a transaction abort.
++ */
++ spin_lock(&fs_info->unused_bgs_lock);
+ list_del_init(&block_group->bg_list);
++ spin_unlock(&fs_info->unused_bgs_lock);
+ }
+ }
+
+diff --git a/fs/btrfs/zoned.c b/fs/btrfs/zoned.c
+index 69d03feea4e0ec..2bb7e32ad94588 100644
+--- a/fs/btrfs/zoned.c
++++ b/fs/btrfs/zoned.c
+@@ -2107,6 +2107,9 @@ bool btrfs_zone_activate(struct btrfs_block_group *block_group)
+ physical = map->stripes[i].physical;
+ zinfo = device->zone_info;
+
++ if (!device->bdev)
++ continue;
++
+ if (zinfo->max_active_zones == 0)
+ continue;
+
+@@ -2268,6 +2271,9 @@ static int do_zone_finish(struct btrfs_block_group *block_group, bool fully_writ
+ struct btrfs_zoned_device_info *zinfo = device->zone_info;
+ unsigned int nofs_flags;
+
++ if (!device->bdev)
++ continue;
++
+ if (zinfo->max_active_zones == 0)
+ continue;
+
+diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
+index 0c01e4423ee2a8..0ad496ceb638d2 100644
+--- a/fs/dlm/lock.c
++++ b/fs/dlm/lock.c
+@@ -741,6 +741,7 @@ static int find_rsb_dir(struct dlm_ls *ls, const void *name, int len,
+ read_lock_bh(&ls->ls_rsbtbl_lock);
+ if (!rsb_flag(r, RSB_HASHED)) {
+ read_unlock_bh(&ls->ls_rsbtbl_lock);
++ error = -EBADR;
+ goto do_new;
+ }
+
+@@ -784,6 +785,7 @@ static int find_rsb_dir(struct dlm_ls *ls, const void *name, int len,
+ }
+ } else {
+ write_unlock_bh(&ls->ls_rsbtbl_lock);
++ error = -EBADR;
+ goto do_new;
+ }
+
+diff --git a/fs/erofs/fileio.c b/fs/erofs/fileio.c
+index 33f8539dda4aeb..17aed5f6c5490d 100644
+--- a/fs/erofs/fileio.c
++++ b/fs/erofs/fileio.c
+@@ -32,6 +32,8 @@ static void erofs_fileio_ki_complete(struct kiocb *iocb, long ret)
+ ret = 0;
+ }
+ if (rq->bio.bi_end_io) {
++ if (ret < 0 && !rq->bio.bi_status)
++ rq->bio.bi_status = errno_to_blk_status(ret);
+ rq->bio.bi_end_io(&rq->bio);
+ } else {
+ bio_for_each_folio_all(fi, &rq->bio) {
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 67a5b937f5a92d..ffa6aa55a1a7a8 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4681,22 +4681,43 @@ static inline void ext4_inode_set_iversion_queried(struct inode *inode, u64 val)
+ inode_set_iversion_queried(inode, val);
+ }
+
+-static const char *check_igot_inode(struct inode *inode, ext4_iget_flags flags)
+-
++static int check_igot_inode(struct inode *inode, ext4_iget_flags flags,
++ const char *function, unsigned int line)
+ {
++ const char *err_str;
++
+ if (flags & EXT4_IGET_EA_INODE) {
+- if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL))
+- return "missing EA_INODE flag";
++ if (!(EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) {
++ err_str = "missing EA_INODE flag";
++ goto error;
++ }
+ if (ext4_test_inode_state(inode, EXT4_STATE_XATTR) ||
+- EXT4_I(inode)->i_file_acl)
+- return "ea_inode with extended attributes";
++ EXT4_I(inode)->i_file_acl) {
++ err_str = "ea_inode with extended attributes";
++ goto error;
++ }
+ } else {
+- if ((EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL))
+- return "unexpected EA_INODE flag";
++ if ((EXT4_I(inode)->i_flags & EXT4_EA_INODE_FL)) {
++ /*
++ * open_by_handle_at() could provide an old inode number
++ * that has since been reused for an ea_inode; this does
++ * not indicate filesystem corruption
++ */
++ if (flags & EXT4_IGET_HANDLE)
++ return -ESTALE;
++ err_str = "unexpected EA_INODE flag";
++ goto error;
++ }
++ }
++ if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD)) {
++ err_str = "unexpected bad inode w/o EXT4_IGET_BAD";
++ goto error;
+ }
+- if (is_bad_inode(inode) && !(flags & EXT4_IGET_BAD))
+- return "unexpected bad inode w/o EXT4_IGET_BAD";
+- return NULL;
++ return 0;
++
++error:
++ ext4_error_inode(inode, function, line, 0, err_str);
++ return -EFSCORRUPTED;
+ }
+
+ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+@@ -4708,7 +4729,6 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ struct ext4_inode_info *ei;
+ struct ext4_super_block *es = EXT4_SB(sb)->s_es;
+ struct inode *inode;
+- const char *err_str;
+ journal_t *journal = EXT4_SB(sb)->s_journal;
+ long ret;
+ loff_t size;
+@@ -4737,10 +4757,10 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ if (!inode)
+ return ERR_PTR(-ENOMEM);
+ if (!(inode->i_state & I_NEW)) {
+- if ((err_str = check_igot_inode(inode, flags)) != NULL) {
+- ext4_error_inode(inode, function, line, 0, err_str);
++ ret = check_igot_inode(inode, flags, function, line);
++ if (ret) {
+ iput(inode);
+- return ERR_PTR(-EFSCORRUPTED);
++ return ERR_PTR(ret);
+ }
+ return inode;
+ }
+@@ -5012,13 +5032,21 @@ struct inode *__ext4_iget(struct super_block *sb, unsigned long ino,
+ ret = -EFSCORRUPTED;
+ goto bad_inode;
+ }
+- if ((err_str = check_igot_inode(inode, flags)) != NULL) {
+- ext4_error_inode(inode, function, line, 0, err_str);
+- ret = -EFSCORRUPTED;
+- goto bad_inode;
++ ret = check_igot_inode(inode, flags, function, line);
++ /*
++ * -ESTALE here means there is nothing inherently wrong with the inode,
++ * it's just not an inode we can return for an fhandle lookup.
++ */
++ if (ret == -ESTALE) {
++ brelse(iloc.bh);
++ unlock_new_inode(inode);
++ iput(inode);
++ return ERR_PTR(-ESTALE);
+ }
+-
++ if (ret)
++ goto bad_inode;
+ brelse(iloc.bh);
++
+ unlock_new_inode(inode);
+ return inode;
+
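
Returning an errno from check_igot_inode() instead of a message string lets __ext4_iget() distinguish a stale-but-healthy inode (-ESTALE, an inode number reused for an EA inode after an old fhandle was issued) from genuine corruption (-EFSCORRUPTED). A minimal standalone model of that triage; the flag names are simplified stand-ins for the ext4 ones:

    #include <errno.h>
    #include <stdio.h>

    #define EFSCORRUPTED EUCLEAN   /* the kernel maps EFSCORRUPTED to EUCLEAN */

    #define IGET_HANDLE 0x1        /* lookup came from an fhandle */
    #define FL_EA_INODE 0x2        /* inode is an extended-attribute inode */

    /* Validate an inode against what the caller expected to find. */
    static int check_inode(unsigned int inode_flags, unsigned int lookup_flags)
    {
        if (inode_flags & FL_EA_INODE) {
            /*
             * An old fhandle may name an inode number since reused for
             * an EA inode; that is stale, not filesystem corruption.
             */
            if (lookup_flags & IGET_HANDLE)
                return -ESTALE;
            return -EFSCORRUPTED;  /* unexpected EA inode elsewhere */
        }
        return 0;
    }

    int main(void)
    {
        printf("%d %d\n", check_inode(FL_EA_INODE, IGET_HANDLE),
               check_inode(FL_EA_INODE, 0));
        return 0;
    }
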
+diff --git a/fs/ext4/namei.c b/fs/ext4/namei.c
+index 790db7eac6c2ad..286f8fcb74cc9d 100644
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -1995,7 +1995,7 @@ static struct ext4_dir_entry_2 *do_split(handle_t *handle, struct inode *dir,
+ * split it in half by count; each resulting block will have at least
+ * half the space free.
+ */
+- if (i > 0)
++ if (i >= 0)
+ split = count - move;
+ else
+ split = count/2;
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index d3795c6c0a9d8e..4291ab3c20be67 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -6906,12 +6906,25 @@ static int ext4_release_dquot(struct dquot *dquot)
+ {
+ int ret, err;
+ handle_t *handle;
++ bool freeze_protected = false;
++
++ /*
++ * Trying to sb_start_intwrite() in a running transaction
++ * can result in a deadlock. Further, running transactions
++ * are already protected from freezing.
++ */
++ if (!ext4_journal_current_handle()) {
++ sb_start_intwrite(dquot->dq_sb);
++ freeze_protected = true;
++ }
+
+ handle = ext4_journal_start(dquot_to_inode(dquot), EXT4_HT_QUOTA,
+ EXT4_QUOTA_DEL_BLOCKS(dquot->dq_sb));
+ if (IS_ERR(handle)) {
+ /* Release dquot anyway to avoid endless cycle in dqput() */
+ dquot_release(dquot);
++ if (freeze_protected)
++ sb_end_intwrite(dquot->dq_sb);
+ return PTR_ERR(handle);
+ }
+ ret = dquot_release(dquot);
+@@ -6922,6 +6935,10 @@ static int ext4_release_dquot(struct dquot *dquot)
+ err = ext4_journal_stop(handle);
+ if (!ret)
+ ret = err;
++
++ if (freeze_protected)
++ sb_end_intwrite(dquot->dq_sb);
++
+ return ret;
+ }
+
+diff --git a/fs/ext4/xattr.c b/fs/ext4/xattr.c
+index 7647e9f6e1903a..6ff94cdf1515c5 100644
+--- a/fs/ext4/xattr.c
++++ b/fs/ext4/xattr.c
+@@ -1176,15 +1176,24 @@ ext4_xattr_inode_dec_ref_all(handle_t *handle, struct inode *parent,
+ {
+ struct inode *ea_inode;
+ struct ext4_xattr_entry *entry;
++ struct ext4_iloc iloc;
+ bool dirty = false;
+ unsigned int ea_ino;
+ int err;
+ int credits;
++ void *end;
++
++ if (block_csum)
++ end = (void *)bh->b_data + bh->b_size;
++ else {
++ ext4_get_inode_loc(parent, &iloc);
++ end = (void *)ext4_raw_inode(&iloc) + EXT4_SB(parent->i_sb)->s_inode_size;
++ }
+
+ /* One credit for dec ref on ea_inode, one for orphan list addition, */
+ credits = 2 + extra_credits;
+
+- for (entry = first; !IS_LAST_ENTRY(entry);
++ for (entry = first; (void *)entry < end && !IS_LAST_ENTRY(entry);
+ entry = EXT4_XATTR_NEXT(entry)) {
+ if (!entry->e_value_inum)
+ continue;
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index efda9a0229816b..86228f82f54d0c 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -1344,21 +1344,13 @@ static void update_ckpt_flags(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ struct f2fs_checkpoint *ckpt = F2FS_CKPT(sbi);
+ unsigned long flags;
+
+- if (cpc->reason & CP_UMOUNT) {
+- if (le32_to_cpu(ckpt->cp_pack_total_block_count) +
+- NM_I(sbi)->nat_bits_blocks > BLKS_PER_SEG(sbi)) {
+- clear_ckpt_flags(sbi, CP_NAT_BITS_FLAG);
+- f2fs_notice(sbi, "Disable nat_bits due to no space");
+- } else if (!is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG) &&
+- f2fs_nat_bitmap_enabled(sbi)) {
+- f2fs_enable_nat_bits(sbi);
+- set_ckpt_flags(sbi, CP_NAT_BITS_FLAG);
+- f2fs_notice(sbi, "Rebuild and enable nat_bits");
+- }
+- }
+-
+ spin_lock_irqsave(&sbi->cp_lock, flags);
+
++ if ((cpc->reason & CP_UMOUNT) &&
++ le32_to_cpu(ckpt->cp_pack_total_block_count) >
++ sbi->blocks_per_seg - NM_I(sbi)->nat_bits_blocks)
++ disable_nat_bits(sbi, false);
++
+ if (cpc->reason & CP_TRIMMED)
+ __set_ckpt_flags(ckpt, CP_TRIMMED_FLAG);
+ else
+@@ -1541,8 +1533,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ start_blk = __start_cp_next_addr(sbi);
+
+ /* write nat bits */
+- if ((cpc->reason & CP_UMOUNT) &&
+- is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG)) {
++ if (enabled_nat_bits(sbi, cpc)) {
+ __u64 cp_ver = cur_cp_version(ckpt);
+ block_t blk;
+
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index b52df8aa95350e..1c783c2e4902ae 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -2231,6 +2231,36 @@ static inline void f2fs_up_write(struct f2fs_rwsem *sem)
+ #endif
+ }
+
++static inline void disable_nat_bits(struct f2fs_sb_info *sbi, bool lock)
++{
++ unsigned long flags;
++ unsigned char *nat_bits;
++
++ /*
++ * In order to re-enable nat_bits we need to call fsck.f2fs by
++ * set_sbi_flag(sbi, SBI_NEED_FSCK). But it may give huge cost,
++ * so let's rely on regular fsck or unclean shutdown.
++ */
++
++ if (lock)
++ spin_lock_irqsave(&sbi->cp_lock, flags);
++ __clear_ckpt_flags(F2FS_CKPT(sbi), CP_NAT_BITS_FLAG);
++ nat_bits = NM_I(sbi)->nat_bits;
++ NM_I(sbi)->nat_bits = NULL;
++ if (lock)
++ spin_unlock_irqrestore(&sbi->cp_lock, flags);
++
++ kvfree(nat_bits);
++}
++
++static inline bool enabled_nat_bits(struct f2fs_sb_info *sbi,
++ struct cp_control *cpc)
++{
++ bool set = is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG);
++
++ return (cpc) ? (cpc->reason & CP_UMOUNT) && set : set;
++}
++
+ static inline void f2fs_lock_op(struct f2fs_sb_info *sbi)
+ {
+ f2fs_down_read(&sbi->cp_rwsem);
+@@ -3671,7 +3701,6 @@ int f2fs_truncate_inode_blocks(struct inode *inode, pgoff_t from);
+ int f2fs_truncate_xattr_node(struct inode *inode);
+ int f2fs_wait_on_node_pages_writeback(struct f2fs_sb_info *sbi,
+ unsigned int seq_id);
+-bool f2fs_nat_bitmap_enabled(struct f2fs_sb_info *sbi);
+ int f2fs_remove_inode_page(struct inode *inode);
+ struct page *f2fs_new_inode_page(struct inode *inode);
+ struct page *f2fs_new_node_page(struct dnode_of_data *dn, unsigned int ofs);
+@@ -3696,7 +3725,6 @@ int f2fs_recover_xattr_data(struct inode *inode, struct page *page);
+ int f2fs_recover_inode_page(struct f2fs_sb_info *sbi, struct page *page);
+ int f2fs_restore_node_summary(struct f2fs_sb_info *sbi,
+ unsigned int segno, struct f2fs_summary_block *sum);
+-void f2fs_enable_nat_bits(struct f2fs_sb_info *sbi);
+ int f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc);
+ int f2fs_build_node_manager(struct f2fs_sb_info *sbi);
+ void f2fs_destroy_node_manager(struct f2fs_sb_info *sbi);
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 10780e37fc7b68..a60db5e795a4c4 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -34,10 +34,8 @@ void f2fs_mark_inode_dirty_sync(struct inode *inode, bool sync)
+ if (f2fs_inode_dirtied(inode, sync))
+ return;
+
+- if (f2fs_is_atomic_file(inode)) {
+- set_inode_flag(inode, FI_ATOMIC_DIRTIED);
++ if (f2fs_is_atomic_file(inode))
+ return;
+- }
+
+ mark_inode_dirty_sync(inode);
+ }
+@@ -751,8 +749,12 @@ void f2fs_update_inode_page(struct inode *inode)
+ if (err == -ENOENT)
+ return;
+
++ if (err == -EFSCORRUPTED)
++ goto stop_checkpoint;
++
+ if (err == -ENOMEM || ++count <= DEFAULT_RETRY_IO_COUNT)
+ goto retry;
++stop_checkpoint:
+ f2fs_stop_checkpoint(sbi, false, STOP_CP_REASON_UPDATE_INODE);
+ return;
+ }
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 4d7b9fd6ef31ab..12c76e3d1cd49d 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1134,7 +1134,14 @@ int f2fs_truncate_inode_blocks(struct inode *inode, pgoff_t from)
+ trace_f2fs_truncate_inode_blocks_enter(inode, from);
+
+ level = get_node_path(inode, from, offset, noffset);
+- if (level < 0) {
++ if (level <= 0) {
++ if (!level) {
++ level = -EFSCORRUPTED;
++ f2fs_err(sbi, "%s: inode ino=%lx has corrupted node block, from:%lu addrs:%u",
++ __func__, inode->i_ino,
++ from, ADDRS_PER_INODE(inode));
++ set_sbi_flag(sbi, SBI_NEED_FSCK);
++ }
+ trace_f2fs_truncate_inode_blocks_exit(inode, level);
+ return level;
+ }
+@@ -2270,24 +2277,6 @@ static void __move_free_nid(struct f2fs_sb_info *sbi, struct free_nid *i,
+ }
+ }
+
+-bool f2fs_nat_bitmap_enabled(struct f2fs_sb_info *sbi)
+-{
+- struct f2fs_nm_info *nm_i = NM_I(sbi);
+- unsigned int i;
+- bool ret = true;
+-
+- f2fs_down_read(&nm_i->nat_tree_lock);
+- for (i = 0; i < nm_i->nat_blocks; i++) {
+- if (!test_bit_le(i, nm_i->nat_block_bitmap)) {
+- ret = false;
+- break;
+- }
+- }
+- f2fs_up_read(&nm_i->nat_tree_lock);
+-
+- return ret;
+-}
+-
+ static void update_free_nid_bitmap(struct f2fs_sb_info *sbi, nid_t nid,
+ bool set, bool build)
+ {
+@@ -2966,23 +2955,7 @@ static void __adjust_nat_entry_set(struct nat_entry_set *nes,
+ list_add_tail(&nes->set_list, head);
+ }
+
+-static void __update_nat_bits(struct f2fs_nm_info *nm_i, unsigned int nat_ofs,
+- unsigned int valid)
+-{
+- if (valid == 0) {
+- __set_bit_le(nat_ofs, nm_i->empty_nat_bits);
+- __clear_bit_le(nat_ofs, nm_i->full_nat_bits);
+- return;
+- }
+-
+- __clear_bit_le(nat_ofs, nm_i->empty_nat_bits);
+- if (valid == NAT_ENTRY_PER_BLOCK)
+- __set_bit_le(nat_ofs, nm_i->full_nat_bits);
+- else
+- __clear_bit_le(nat_ofs, nm_i->full_nat_bits);
+-}
+-
+-static void update_nat_bits(struct f2fs_sb_info *sbi, nid_t start_nid,
++static void __update_nat_bits(struct f2fs_sb_info *sbi, nid_t start_nid,
+ struct page *page)
+ {
+ struct f2fs_nm_info *nm_i = NM_I(sbi);
+@@ -2991,7 +2964,7 @@ static void update_nat_bits(struct f2fs_sb_info *sbi, nid_t start_nid,
+ int valid = 0;
+ int i = 0;
+
+- if (!is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG))
++ if (!enabled_nat_bits(sbi, NULL))
+ return;
+
+ if (nat_index == 0) {
+@@ -3002,36 +2975,17 @@ static void update_nat_bits(struct f2fs_sb_info *sbi, nid_t start_nid,
+ if (le32_to_cpu(nat_blk->entries[i].block_addr) != NULL_ADDR)
+ valid++;
+ }
+-
+- __update_nat_bits(nm_i, nat_index, valid);
+-}
+-
+-void f2fs_enable_nat_bits(struct f2fs_sb_info *sbi)
+-{
+- struct f2fs_nm_info *nm_i = NM_I(sbi);
+- unsigned int nat_ofs;
+-
+- f2fs_down_read(&nm_i->nat_tree_lock);
+-
+- for (nat_ofs = 0; nat_ofs < nm_i->nat_blocks; nat_ofs++) {
+- unsigned int valid = 0, nid_ofs = 0;
+-
+- /* handle nid zero due to it should never be used */
+- if (unlikely(nat_ofs == 0)) {
+- valid = 1;
+- nid_ofs = 1;
+- }
+-
+- for (; nid_ofs < NAT_ENTRY_PER_BLOCK; nid_ofs++) {
+- if (!test_bit_le(nid_ofs,
+- nm_i->free_nid_bitmap[nat_ofs]))
+- valid++;
+- }
+-
+- __update_nat_bits(nm_i, nat_ofs, valid);
++ if (valid == 0) {
++ __set_bit_le(nat_index, nm_i->empty_nat_bits);
++ __clear_bit_le(nat_index, nm_i->full_nat_bits);
++ return;
+ }
+
+- f2fs_up_read(&nm_i->nat_tree_lock);
++ __clear_bit_le(nat_index, nm_i->empty_nat_bits);
++ if (valid == NAT_ENTRY_PER_BLOCK)
++ __set_bit_le(nat_index, nm_i->full_nat_bits);
++ else
++ __clear_bit_le(nat_index, nm_i->full_nat_bits);
+ }
+
+ static int __flush_nat_entry_set(struct f2fs_sb_info *sbi,
+@@ -3050,7 +3004,7 @@ static int __flush_nat_entry_set(struct f2fs_sb_info *sbi,
+ * #1, flush nat entries to journal in current hot data summary block.
+ * #2, flush nat entries to nat page.
+ */
+- if ((cpc->reason & CP_UMOUNT) ||
++ if (enabled_nat_bits(sbi, cpc) ||
+ !__has_cursum_space(journal, set->entry_cnt, NAT_JOURNAL))
+ to_journal = false;
+
+@@ -3097,7 +3051,7 @@ static int __flush_nat_entry_set(struct f2fs_sb_info *sbi,
+ if (to_journal) {
+ up_write(&curseg->journal_rwsem);
+ } else {
+- update_nat_bits(sbi, start_nid, page);
++ __update_nat_bits(sbi, start_nid, page);
+ f2fs_put_page(page, 1);
+ }
+
+@@ -3128,7 +3082,7 @@ int f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ * during unmount, let's flush nat_bits before checking
+ * nat_cnt[DIRTY_NAT].
+ */
+- if (cpc->reason & CP_UMOUNT) {
++ if (enabled_nat_bits(sbi, cpc)) {
+ f2fs_down_write(&nm_i->nat_tree_lock);
+ remove_nats_in_journal(sbi);
+ f2fs_up_write(&nm_i->nat_tree_lock);
+@@ -3144,7 +3098,7 @@ int f2fs_flush_nat_entries(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ * entries, remove all entries from journal and merge them
+ * into nat entry set.
+ */
+- if (cpc->reason & CP_UMOUNT ||
++ if (enabled_nat_bits(sbi, cpc) ||
+ !__has_cursum_space(journal,
+ nm_i->nat_cnt[DIRTY_NAT], NAT_JOURNAL))
+ remove_nats_in_journal(sbi);
+@@ -3181,18 +3135,15 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
+ __u64 cp_ver = cur_cp_version(ckpt);
+ block_t nat_bits_addr;
+
++ if (!enabled_nat_bits(sbi, NULL))
++ return 0;
++
+ nm_i->nat_bits_blocks = F2FS_BLK_ALIGN((nat_bits_bytes << 1) + 8);
+ nm_i->nat_bits = f2fs_kvzalloc(sbi,
+ F2FS_BLK_TO_BYTES(nm_i->nat_bits_blocks), GFP_KERNEL);
+ if (!nm_i->nat_bits)
+ return -ENOMEM;
+
+- nm_i->full_nat_bits = nm_i->nat_bits + 8;
+- nm_i->empty_nat_bits = nm_i->full_nat_bits + nat_bits_bytes;
+-
+- if (!is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG))
+- return 0;
+-
+ nat_bits_addr = __start_cp_addr(sbi) + BLKS_PER_SEG(sbi) -
+ nm_i->nat_bits_blocks;
+ for (i = 0; i < nm_i->nat_bits_blocks; i++) {
+@@ -3209,12 +3160,13 @@ static int __get_nat_bitmaps(struct f2fs_sb_info *sbi)
+
+ cp_ver |= (cur_cp_crc(ckpt) << 32);
+ if (cpu_to_le64(cp_ver) != *(__le64 *)nm_i->nat_bits) {
+- clear_ckpt_flags(sbi, CP_NAT_BITS_FLAG);
+- f2fs_notice(sbi, "Disable nat_bits due to incorrect cp_ver (%llu, %llu)",
+- cp_ver, le64_to_cpu(*(__le64 *)nm_i->nat_bits));
++ disable_nat_bits(sbi, true);
+ return 0;
+ }
+
++ nm_i->full_nat_bits = nm_i->nat_bits + 8;
++ nm_i->empty_nat_bits = nm_i->full_nat_bits + nat_bits_bytes;
++
+ f2fs_notice(sbi, "Found nat_bits in checkpoint");
+ return 0;
+ }
+@@ -3225,7 +3177,7 @@ static inline void load_free_nid_bitmap(struct f2fs_sb_info *sbi)
+ unsigned int i = 0;
+ nid_t nid, last_nid;
+
+- if (!is_set_ckpt_flags(sbi, CP_NAT_BITS_FLAG))
++ if (!enabled_nat_bits(sbi, NULL))
+ return;
+
+ for (i = 0; i < nm_i->nat_blocks; i++) {
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index a622056f27f3a2..573cc4725e2e88 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -1515,6 +1515,10 @@ int f2fs_inode_dirtied(struct inode *inode, bool sync)
+ inc_page_count(sbi, F2FS_DIRTY_IMETA);
+ }
+ spin_unlock(&sbi->inode_lock[DIRTY_META]);
++
++ if (!ret && f2fs_is_atomic_file(inode))
++ set_inode_flag(inode, FI_ATOMIC_DIRTIED);
++
+ return ret;
+ }
+
+diff --git a/fs/file.c b/fs/file.c
+index 4cb952541dd036..b6fb6d18ac3b9b 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -367,17 +367,25 @@ struct files_struct *dup_fd(struct files_struct *oldf, struct fd_range *punch_ho
+ old_fds = old_fdt->fd;
+ new_fds = new_fdt->fd;
+
++ /*
++ * We may be racing against fd allocation from other threads using this
++ * files_struct, despite holding ->file_lock.
++ *
++ * alloc_fd() might have already claimed a slot, while fd_install()
++	 * has not populated it yet. Note the latter operates locklessly, so
++ * the file can show up as we are walking the array below.
++ *
++ * At the same time we know no files will disappear as all other
++ * operations take the lock.
++ *
++ * Instead of trying to placate userspace racing with itself, we
++ * ref the file if we see it and mark the fd slot as unused otherwise.
++ */
+ for (i = open_files; i != 0; i--) {
+- struct file *f = *old_fds++;
++ struct file *f = rcu_dereference_raw(*old_fds++);
+ if (f) {
+ get_file(f);
+ } else {
+- /*
+- * The fd may be claimed in the fd bitmap but not yet
+- * instantiated in the files array if a sibling thread
+- * is partway through open(). So make sure that this
+- * fd is available to the new process.
+- */
+ __clear_open_fd(open_files - i, new_fdt);
+ }
+ rcu_assign_pointer(*new_fds++, f);
+@@ -637,7 +645,7 @@ struct file *file_close_fd_locked(struct files_struct *files, unsigned fd)
+ return NULL;
+
+ fd = array_index_nospec(fd, fdt->max_fds);
+- file = fdt->fd[fd];
++ file = rcu_dereference_raw(fdt->fd[fd]);
+ if (file) {
+ rcu_assign_pointer(fdt->fd[fd], NULL);
+ __put_unused_fd(files, fd);
+@@ -1219,7 +1227,7 @@ __releases(&files->file_lock)
+ */
+ fdt = files_fdtable(files);
+ fd = array_index_nospec(fd, fdt->max_fds);
+- tofree = fdt->fd[fd];
++ tofree = rcu_dereference_raw(fdt->fd[fd]);
+ if (!tofree && fd_is_open(fd, fdt))
+ goto Ebusy;
+ get_file(file);
+diff --git a/fs/jbd2/journal.c b/fs/jbd2/journal.c
+index 97f487c3d8fcf0..c073f5fb98594f 100644
+--- a/fs/jbd2/journal.c
++++ b/fs/jbd2/journal.c
+@@ -1884,7 +1884,6 @@ int jbd2_journal_update_sb_log_tail(journal_t *journal, tid_t tail_tid,
+
+ /* Log is no longer empty */
+ write_lock(&journal->j_state_lock);
+- WARN_ON(!sb->s_sequence);
+ journal->j_flags &= ~JBD2_FLUSHED;
+ write_unlock(&journal->j_state_lock);
+
+diff --git a/fs/jfs/jfs_dmap.c b/fs/jfs/jfs_dmap.c
+index f9009e4f9ffd89..0e1019382cf519 100644
+--- a/fs/jfs/jfs_dmap.c
++++ b/fs/jfs/jfs_dmap.c
+@@ -204,6 +204,10 @@ int dbMount(struct inode *ipbmap)
+ bmp->db_aglevel = le32_to_cpu(dbmp_le->dn_aglevel);
+ bmp->db_agheight = le32_to_cpu(dbmp_le->dn_agheight);
+ bmp->db_agwidth = le32_to_cpu(dbmp_le->dn_agwidth);
++ if (!bmp->db_agwidth) {
++ err = -EINVAL;
++ goto err_release_metapage;
++ }
+ bmp->db_agstart = le32_to_cpu(dbmp_le->dn_agstart);
+ bmp->db_agl2size = le32_to_cpu(dbmp_le->dn_agl2size);
+ if (bmp->db_agl2size > L2MAXL2SIZE - L2MAXAG ||
+@@ -3403,7 +3407,7 @@ int dbExtendFS(struct inode *ipbmap, s64 blkno, s64 nblocks)
+ oldl2agsize = bmp->db_agl2size;
+
+ bmp->db_agl2size = l2agsize;
+- bmp->db_agsize = 1 << l2agsize;
++ bmp->db_agsize = (s64)1 << l2agsize;
+
+ /* compute new number of AG */
+ agno = bmp->db_numag;
+@@ -3666,8 +3670,8 @@ void dbFinalizeBmap(struct inode *ipbmap)
+ * system size is not a multiple of the group size).
+ */
+ inactfree = (inactags && ag_rem) ?
+- ((inactags - 1) << bmp->db_agl2size) + ag_rem
+- : inactags << bmp->db_agl2size;
++ (((s64)inactags - 1) << bmp->db_agl2size) + ag_rem
++ : ((s64)inactags << bmp->db_agl2size);
+
+ /* determine how many free blocks are in the active
+ * allocation groups plus the average number of free blocks
+diff --git a/fs/jfs/jfs_imap.c b/fs/jfs/jfs_imap.c
+index a360b24ed320c0..8ddc14c56501ac 100644
+--- a/fs/jfs/jfs_imap.c
++++ b/fs/jfs/jfs_imap.c
+@@ -102,7 +102,7 @@ int diMount(struct inode *ipimap)
+ * allocate/initialize the in-memory inode map control structure
+ */
+ /* allocate the in-memory inode map control structure. */
+- imap = kmalloc(sizeof(struct inomap), GFP_KERNEL);
++ imap = kzalloc(sizeof(struct inomap), GFP_KERNEL);
+ if (imap == NULL)
+ return -ENOMEM;
+
+@@ -456,7 +456,7 @@ struct inode *diReadSpecial(struct super_block *sb, ino_t inum, int secondary)
+ dp += inum % 8; /* 8 inodes per 4K page */
+
+ /* copy on-disk inode to in-memory inode */
+- if ((copy_from_dinode(dp, ip)) != 0) {
++ if ((copy_from_dinode(dp, ip) != 0) || (ip->i_nlink == 0)) {
+ /* handle bad return by returning NULL for ip */
+ set_nlink(ip, 1); /* Don't want iput() deleting it */
+ iput(ip);
+diff --git a/fs/namespace.c b/fs/namespace.c
+index 73da51ac5a0349..f898de3a6f7056 100644
+--- a/fs/namespace.c
++++ b/fs/namespace.c
+@@ -1986,6 +1986,7 @@ static void warn_mandlock(void)
+ static int can_umount(const struct path *path, int flags)
+ {
+ struct mount *mnt = real_mount(path->mnt);
++ struct super_block *sb = path->dentry->d_sb;
+
+ if (!may_mount())
+ return -EPERM;
+@@ -1995,7 +1996,7 @@ static int can_umount(const struct path *path, int flags)
+ return -EINVAL;
+ if (mnt->mnt.mnt_flags & MNT_LOCKED) /* Check optimistically */
+ return -EINVAL;
+- if (flags & MNT_FORCE && !capable(CAP_SYS_ADMIN))
++ if (flags & MNT_FORCE && !ns_capable(sb->s_user_ns, CAP_SYS_ADMIN))
+ return -EPERM;
+ return 0;
+ }
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index 88c03e18257323..127626aba7a234 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -605,7 +605,7 @@ static int nfs4_xdr_dec_cb_getattr(struct rpc_rqst *rqstp,
+ return status;
+
+ status = decode_cb_op_status(xdr, OP_CB_GETATTR, &cb->cb_status);
+- if (status)
++ if (unlikely(status || cb->cb_status))
+ return status;
+ if (xdr_stream_decode_uint32_array(xdr, bitmap, 3) < 0)
+ return -NFSERR_BAD_XDR;
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index e83629f396044b..2e835e7c107ee0 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -2244,8 +2244,14 @@ static __net_init int nfsd_net_init(struct net *net)
+ NFSD_STATS_COUNTERS_NUM);
+ if (retval)
+ goto out_repcache_error;
++
+ memset(&nn->nfsd_svcstats, 0, sizeof(nn->nfsd_svcstats));
+ nn->nfsd_svcstats.program = &nfsd_programs[0];
++ if (!nfsd_proc_stat_init(net)) {
++ retval = -ENOMEM;
++ goto out_proc_error;
++ }
++
+ for (i = 0; i < sizeof(nn->nfsd_versions); i++)
+ nn->nfsd_versions[i] = nfsd_support_version(i);
+ for (i = 0; i < sizeof(nn->nfsd4_minorversions); i++)
+@@ -2255,12 +2261,13 @@ static __net_init int nfsd_net_init(struct net *net)
+ nfsd4_init_leases_net(nn);
+ get_random_bytes(&nn->siphash_key, sizeof(nn->siphash_key));
+ seqlock_init(&nn->writeverf_lock);
+- nfsd_proc_stat_init(net);
+ #if IS_ENABLED(CONFIG_NFS_LOCALIO)
+ INIT_LIST_HEAD(&nn->local_clients);
+ #endif
+ return 0;
+
++out_proc_error:
++ percpu_counter_destroy_many(nn->counter, NFSD_STATS_COUNTERS_NUM);
+ out_repcache_error:
+ nfsd_idmap_shutdown(net);
+ out_idmap_error:
+diff --git a/fs/nfsd/stats.c b/fs/nfsd/stats.c
+index bb22893f1157e4..f7eaf95e20fc87 100644
+--- a/fs/nfsd/stats.c
++++ b/fs/nfsd/stats.c
+@@ -73,11 +73,11 @@ static int nfsd_show(struct seq_file *seq, void *v)
+
+ DEFINE_PROC_SHOW_ATTRIBUTE(nfsd);
+
+-void nfsd_proc_stat_init(struct net *net)
++struct proc_dir_entry *nfsd_proc_stat_init(struct net *net)
+ {
+ struct nfsd_net *nn = net_generic(net, nfsd_net_id);
+
+- svc_proc_register(net, &nn->nfsd_svcstats, &nfsd_proc_ops);
++ return svc_proc_register(net, &nn->nfsd_svcstats, &nfsd_proc_ops);
+ }
+
+ void nfsd_proc_stat_shutdown(struct net *net)
+diff --git a/fs/nfsd/stats.h b/fs/nfsd/stats.h
+index 04aacb6c36e257..e4efb0e4e56d46 100644
+--- a/fs/nfsd/stats.h
++++ b/fs/nfsd/stats.h
+@@ -10,7 +10,7 @@
+ #include <uapi/linux/nfsd/stats.h>
+ #include <linux/percpu_counter.h>
+
+-void nfsd_proc_stat_init(struct net *net);
++struct proc_dir_entry *nfsd_proc_stat_init(struct net *net);
+ void nfsd_proc_stat_shutdown(struct net *net);
+
+ static inline void nfsd_stats_rc_hits_inc(struct nfsd_net *nn)
+diff --git a/fs/smb/client/cifsencrypt.c b/fs/smb/client/cifsencrypt.c
+index 7a43daacc81595..7c61c1e944c7ae 100644
+--- a/fs/smb/client/cifsencrypt.c
++++ b/fs/smb/client/cifsencrypt.c
+@@ -702,18 +702,12 @@ cifs_crypto_secmech_release(struct TCP_Server_Info *server)
+ cifs_free_hash(&server->secmech.md5);
+ cifs_free_hash(&server->secmech.sha512);
+
+- if (!SERVER_IS_CHAN(server)) {
+- if (server->secmech.enc) {
+- crypto_free_aead(server->secmech.enc);
+- server->secmech.enc = NULL;
+- }
+-
+- if (server->secmech.dec) {
+- crypto_free_aead(server->secmech.dec);
+- server->secmech.dec = NULL;
+- }
+- } else {
++ if (server->secmech.enc) {
++ crypto_free_aead(server->secmech.enc);
+ server->secmech.enc = NULL;
++ }
++ if (server->secmech.dec) {
++ crypto_free_aead(server->secmech.dec);
+ server->secmech.dec = NULL;
+ }
+ }
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 8b8475b4e26277..3aaf5cdce1b720 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -1722,6 +1722,7 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+ /* Grab netns reference for this server. */
+ cifs_set_net_ns(tcp_ses, get_net(current->nsproxy->net_ns));
+
++ tcp_ses->sign = ctx->sign;
+ tcp_ses->conn_id = atomic_inc_return(&tcpSesNextId);
+ tcp_ses->noblockcnt = ctx->rootfs;
+ tcp_ses->noblocksnd = ctx->noblocksnd || ctx->rootfs;
+@@ -2474,6 +2475,8 @@ static int match_tcon(struct cifs_tcon *tcon, struct smb3_fs_context *ctx)
+ return 0;
+ if (tcon->nodelete != ctx->nodelete)
+ return 0;
++ if (tcon->posix_extensions != ctx->linux_ext)
++ return 0;
+ return 1;
+ }
+
+diff --git a/fs/smb/client/fs_context.c b/fs/smb/client/fs_context.c
+index f8bc1da3003781..1f1f4586673a7a 100644
+--- a/fs/smb/client/fs_context.c
++++ b/fs/smb/client/fs_context.c
+@@ -1287,6 +1287,11 @@ static int smb3_fs_context_parse_param(struct fs_context *fc,
+ ctx->closetimeo = HZ * result.uint_32;
+ break;
+ case Opt_echo_interval:
++ if (result.uint_32 < SMB_ECHO_INTERVAL_MIN ||
++ result.uint_32 > SMB_ECHO_INTERVAL_MAX) {
++ cifs_errorf(fc, "echo interval is out of bounds\n");
++ goto cifs_parse_mount_err;
++ }
+ ctx->echo_interval = result.uint_32;
+ break;
+ case Opt_snapshot:
+diff --git a/fs/smb/client/inode.c b/fs/smb/client/inode.c
+index 97151715d1a413..31fce0a1b57191 100644
+--- a/fs/smb/client/inode.c
++++ b/fs/smb/client/inode.c
+@@ -1206,6 +1206,16 @@ static int reparse_info_to_fattr(struct cifs_open_info_data *data,
+ cifs_create_junction_fattr(fattr, sb);
+ goto out;
+ }
++ /*
++	 * If the reparse point is unsupported by the Linux SMB
++	 * client then let it be processed by the SMB server. So
++	 * mask the -EOPNOTSUPP error code. This allows the Linux
++	 * SMB client to send an SMB OPEN request to the server. If
++	 * the server does not support this reparse point either,
++	 * it will return an error when opening the path.
++ */
++ if (rc == -EOPNOTSUPP)
++ rc = 0;
+ }
+ break;
+ }
+diff --git a/fs/smb/client/reparse.c b/fs/smb/client/reparse.c
+index bb246ef0458fb5..b6556fe3dfa11a 100644
+--- a/fs/smb/client/reparse.c
++++ b/fs/smb/client/reparse.c
+@@ -633,8 +633,6 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
+ const char *full_path,
+ bool unicode, struct cifs_open_info_data *data)
+ {
+- struct cifs_tcon *tcon = cifs_sb_master_tcon(cifs_sb);
+-
+ data->reparse.buf = buf;
+
+ /* See MS-FSCC 2.1.2 */
+@@ -658,8 +656,6 @@ int parse_reparse_point(struct reparse_data_buffer *buf,
+ }
+ return 0;
+ default:
+- cifs_tcon_dbg(VFS | ONCE, "unhandled reparse tag: 0x%08x\n",
+- le32_to_cpu(buf->ReparseTag));
+ return -EOPNOTSUPP;
+ }
+ }
+diff --git a/fs/smb/client/sess.c b/fs/smb/client/sess.c
+index 95e14977baeab0..2426fa7405173c 100644
+--- a/fs/smb/client/sess.c
++++ b/fs/smb/client/sess.c
+@@ -550,6 +550,13 @@ cifs_ses_add_channel(struct cifs_ses *ses,
+ ctx->sockopt_tcp_nodelay = ses->server->tcp_nodelay;
+ ctx->echo_interval = ses->server->echo_interval / HZ;
+ ctx->max_credits = ses->server->max_credits;
++ ctx->min_offload = ses->server->min_offload;
++ ctx->compress = ses->server->compression.requested;
++ ctx->dfs_conn = ses->server->dfs_conn;
++ ctx->ignore_signature = ses->server->ignore_signature;
++ ctx->leaf_fullpath = ses->server->leaf_fullpath;
++ ctx->rootfs = ses->server->noblockcnt;
++ ctx->retrans = ses->server->retrans;
+
+ /*
+ * This will be used for encoding/decoding user/domain/pw
+diff --git a/fs/smb/client/smb2misc.c b/fs/smb/client/smb2misc.c
+index f3c4b70b77b94f..cddf273c14aed7 100644
+--- a/fs/smb/client/smb2misc.c
++++ b/fs/smb/client/smb2misc.c
+@@ -816,11 +816,12 @@ smb2_handle_cancelled_close(struct cifs_tcon *tcon, __u64 persistent_fid,
+ WARN_ONCE(tcon->tc_count < 0, "tcon refcount is negative");
+ spin_unlock(&cifs_tcp_ses_lock);
+
+- if (tcon->ses)
++ if (tcon->ses) {
+ server = tcon->ses->server;
+-
+- cifs_server_dbg(FYI, "tid=0x%x: tcon is closing, skipping async close retry of fid %llu %llu\n",
+- tcon->tid, persistent_fid, volatile_fid);
++ cifs_server_dbg(FYI,
++ "tid=0x%x: tcon is closing, skipping async close retry of fid %llu %llu\n",
++ tcon->tid, persistent_fid, volatile_fid);
++ }
+
+ return 0;
+ }
+diff --git a/fs/smb/client/smb2ops.c b/fs/smb/client/smb2ops.c
+index 516be8c0b2a9b4..590b70d71694be 100644
+--- a/fs/smb/client/smb2ops.c
++++ b/fs/smb/client/smb2ops.c
+@@ -4576,9 +4576,9 @@ decrypt_raw_data(struct TCP_Server_Info *server, char *buf,
+ return rc;
+ }
+ } else {
+- if (unlikely(!server->secmech.dec))
+- return -EIO;
+-
++ rc = smb3_crypto_aead_allocate(server);
++ if (unlikely(rc))
++ return rc;
+ tfm = server->secmech.dec;
+ }
+
+diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c
+index 75b13175a2e781..1a7b82664255ab 100644
+--- a/fs/smb/client/smb2pdu.c
++++ b/fs/smb/client/smb2pdu.c
+@@ -1269,15 +1269,8 @@ SMB2_negotiate(const unsigned int xid,
+ cifs_server_dbg(VFS, "Missing expected negotiate contexts\n");
+ }
+
+- if (server->cipher_type && !rc) {
+- if (!SERVER_IS_CHAN(server)) {
+- rc = smb3_crypto_aead_allocate(server);
+- } else {
+- /* For channels, just reuse the primary server crypto secmech. */
+- server->secmech.enc = server->primary_server->secmech.enc;
+- server->secmech.dec = server->primary_server->secmech.dec;
+- }
+- }
++ if (server->cipher_type && !rc)
++ rc = smb3_crypto_aead_allocate(server);
+ neg_exit:
+ free_rsp_buf(resp_buftype, rsp);
+ return rc;
+diff --git a/fs/udf/inode.c b/fs/udf/inode.c
+index 70c907fe8af9eb..4386dd845e4009 100644
+--- a/fs/udf/inode.c
++++ b/fs/udf/inode.c
+@@ -810,6 +810,7 @@ static int inode_getblk(struct inode *inode, struct udf_map_rq *map)
+ }
+ map->oflags = UDF_BLK_MAPPED;
+ map->pblk = udf_get_lb_pblock(inode->i_sb, &eloc, offset);
++ ret = 0;
+ goto out_free;
+ }
+
+diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
+index 7c0bd0b55f8800..199ec6d10b62af 100644
+--- a/fs/userfaultfd.c
++++ b/fs/userfaultfd.c
+@@ -395,32 +395,6 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
+ if (!(vmf->flags & FAULT_FLAG_USER) && (ctx->flags & UFFD_USER_MODE_ONLY))
+ goto out;
+
+- /*
+- * If it's already released don't get it. This avoids to loop
+- * in __get_user_pages if userfaultfd_release waits on the
+- * caller of handle_userfault to release the mmap_lock.
+- */
+- if (unlikely(READ_ONCE(ctx->released))) {
+- /*
+- * Don't return VM_FAULT_SIGBUS in this case, so a non
+- * cooperative manager can close the uffd after the
+- * last UFFDIO_COPY, without risking to trigger an
+- * involuntary SIGBUS if the process was starting the
+- * userfaultfd while the userfaultfd was still armed
+- * (but after the last UFFDIO_COPY). If the uffd
+- * wasn't already closed when the userfault reached
+- * this point, that would normally be solved by
+- * userfaultfd_must_wait returning 'false'.
+- *
+- * If we were to return VM_FAULT_SIGBUS here, the non
+- * cooperative manager would be instead forced to
+- * always call UFFDIO_UNREGISTER before it can safely
+- * close the uffd.
+- */
+- ret = VM_FAULT_NOPAGE;
+- goto out;
+- }
+-
+ /*
+ * Check that we can return VM_FAULT_RETRY.
+ *
+@@ -457,6 +431,31 @@ vm_fault_t handle_userfault(struct vm_fault *vmf, unsigned long reason)
+ if (vmf->flags & FAULT_FLAG_RETRY_NOWAIT)
+ goto out;
+
++ if (unlikely(READ_ONCE(ctx->released))) {
++ /*
++		 * If a concurrent release is detected, do not return
++		 * VM_FAULT_SIGBUS or VM_FAULT_NOPAGE, but instead always
++		 * return VM_FAULT_RETRY with the lock released proactively.
++		 *
++		 * If we were to return VM_FAULT_SIGBUS here, the non
++		 * cooperative manager would instead be forced to always
++		 * call UFFDIO_UNREGISTER before it can safely close the
++		 * uffd, to avoid an involuntary SIGBUS being triggered.
++		 *
++		 * If we were to return VM_FAULT_NOPAGE, it would work for
++		 * the fault path, in which the lock will be released
++		 * later. However for GUP, faultin_page() does nothing
++		 * special on NOPAGE, so GUP would spin retrying without
++		 * releasing the mmap read lock, causing possible livelock.
++		 *
++		 * Here only VM_FAULT_RETRY makes sure the mmap lock is
++		 * released immediately, so that the thread concurrently
++		 * releasing the userfaultfd always makes progress.
++ */
++ release_fault_lock(vmf);
++ goto out;
++ }
++
+ /* take the reference before dropping the mmap_lock */
+ userfaultfd_ctx_get(ctx);
+
+diff --git a/include/drm/drm_kunit_helpers.h b/include/drm/drm_kunit_helpers.h
+index afdd46ef04f70d..c835f113055dc4 100644
+--- a/include/drm/drm_kunit_helpers.h
++++ b/include/drm/drm_kunit_helpers.h
+@@ -120,6 +120,9 @@ drm_kunit_helper_create_crtc(struct kunit *test,
+ const struct drm_crtc_funcs *funcs,
+ const struct drm_crtc_helper_funcs *helper_funcs);
+
++int drm_kunit_add_mode_destroy_action(struct kunit *test,
++ struct drm_display_mode *mode);
++
+ struct drm_display_mode *
+ drm_kunit_display_mode_from_cea_vic(struct kunit *test, struct drm_device *dev,
+ u8 video_code);
+diff --git a/include/drm/intel/i915_pciids.h b/include/drm/intel/i915_pciids.h
+index f35534522d3338..dacea289acaf5a 100644
+--- a/include/drm/intel/i915_pciids.h
++++ b/include/drm/intel/i915_pciids.h
+@@ -809,6 +809,9 @@
+ MACRO__(0xE20B, ## __VA_ARGS__), \
+ MACRO__(0xE20C, ## __VA_ARGS__), \
+ MACRO__(0xE20D, ## __VA_ARGS__), \
+- MACRO__(0xE212, ## __VA_ARGS__)
++ MACRO__(0xE210, ## __VA_ARGS__), \
++ MACRO__(0xE212, ## __VA_ARGS__), \
++ MACRO__(0xE215, ## __VA_ARGS__), \
++ MACRO__(0xE216, ## __VA_ARGS__)
+
+ #endif /* _I915_PCIIDS_H */
+diff --git a/include/linux/cgroup-defs.h b/include/linux/cgroup-defs.h
+index 38b2af336e4a01..252eed781a6e94 100644
+--- a/include/linux/cgroup-defs.h
++++ b/include/linux/cgroup-defs.h
+@@ -711,6 +711,7 @@ struct cgroup_subsys {
+ void (*css_released)(struct cgroup_subsys_state *css);
+ void (*css_free)(struct cgroup_subsys_state *css);
+ void (*css_reset)(struct cgroup_subsys_state *css);
++ void (*css_killed)(struct cgroup_subsys_state *css);
+ void (*css_rstat_flush)(struct cgroup_subsys_state *css, int cpu);
+ int (*css_extra_stat_show)(struct seq_file *seq,
+ struct cgroup_subsys_state *css);
+diff --git a/include/linux/cgroup.h b/include/linux/cgroup.h
+index f8ef47f8a634df..fc1324ed597d6b 100644
+--- a/include/linux/cgroup.h
++++ b/include/linux/cgroup.h
+@@ -343,7 +343,7 @@ static inline u64 cgroup_id(const struct cgroup *cgrp)
+ */
+ static inline bool css_is_dying(struct cgroup_subsys_state *css)
+ {
+- return !(css->flags & CSS_NO_REF) && percpu_ref_is_dying(&css->refcnt);
++ return css->flags & CSS_DYING;
+ }
+
+ static inline void cgroup_get(struct cgroup *cgrp)
+diff --git a/include/linux/hid.h b/include/linux/hid.h
+index dd33423012538d..018de72505b073 100644
+--- a/include/linux/hid.h
++++ b/include/linux/hid.h
+@@ -1221,12 +1221,6 @@ unsigned long hid_lookup_quirk(const struct hid_device *hdev);
+ int hid_quirks_init(char **quirks_param, __u16 bus, int count);
+ void hid_quirks_exit(__u16 bus);
+
+-#ifdef CONFIG_HID_PID
+-int hid_pidff_init(struct hid_device *hid);
+-#else
+-#define hid_pidff_init NULL
+-#endif
+-
+ #define dbg_hid(fmt, ...) pr_debug("%s: " fmt, __FILE__, ##__VA_ARGS__)
+
+ #define hid_err(hid, fmt, ...) \
+diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
+index 4b9ba523978d20..5ce332fc6ff507 100644
+--- a/include/linux/io_uring_types.h
++++ b/include/linux/io_uring_types.h
+@@ -457,6 +457,7 @@ enum {
+ REQ_F_SKIP_LINK_CQES_BIT,
+ REQ_F_SINGLE_POLL_BIT,
+ REQ_F_DOUBLE_POLL_BIT,
++ REQ_F_MULTISHOT_BIT,
+ REQ_F_APOLL_MULTISHOT_BIT,
+ REQ_F_CLEAR_POLLIN_BIT,
+ REQ_F_HASH_LOCKED_BIT,
+@@ -530,6 +531,8 @@ enum {
+ REQ_F_SINGLE_POLL = IO_REQ_FLAG(REQ_F_SINGLE_POLL_BIT),
+ /* double poll may active */
+ REQ_F_DOUBLE_POLL = IO_REQ_FLAG(REQ_F_DOUBLE_POLL_BIT),
++ /* request posts multiple completions, should be set at prep time */
++ REQ_F_MULTISHOT = IO_REQ_FLAG(REQ_F_MULTISHOT_BIT),
+ /* fast poll multishot mode */
+ REQ_F_APOLL_MULTISHOT = IO_REQ_FLAG(REQ_F_APOLL_MULTISHOT_BIT),
+ /* recvmsg special flag, clear EPOLLIN */
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index 2c66ca21801c17..15206450929d5e 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -2330,7 +2330,7 @@ static inline bool kvm_is_visible_memslot(struct kvm_memory_slot *memslot)
+ struct kvm_vcpu *kvm_get_running_vcpu(void);
+ struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
+
+-#ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
++#if IS_ENABLED(CONFIG_HAVE_KVM_IRQ_BYPASS)
+ bool kvm_arch_has_irq_bypass(void);
+ int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *,
+ struct irq_bypass_producer *);
+diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
+index 48c66b84668281..a9244291f5067a 100644
+--- a/include/linux/page-flags.h
++++ b/include/linux/page-flags.h
+@@ -1111,6 +1111,12 @@ static inline bool is_page_hwpoison(const struct page *page)
+ return folio_test_hugetlb(folio) && PageHWPoison(&folio->page);
+ }
+
++static inline bool folio_contain_hwpoisoned_page(struct folio *folio)
++{
++ return folio_test_hwpoison(folio) ||
++ (folio_test_large(folio) && folio_test_has_hwpoisoned(folio));
++}
++
+ bool is_free_buddy_page(const struct page *page);
+
+ PAGEFLAG(Isolated, isolated, PF_ANY);
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index c9dc15355f1bac..c395b3c5c05cfb 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -2605,6 +2605,8 @@
+
+ #define PCI_VENDOR_ID_ZHAOXIN 0x1d17
+
++#define PCI_VENDOR_ID_ROCKCHIP 0x1d87
++
+ #define PCI_VENDOR_ID_HYGON 0x1d94
+
+ #define PCI_VENDOR_ID_META 0x1d9b
+diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
+index 347901525a46ae..0997077bcc52ad 100644
+--- a/include/linux/perf_event.h
++++ b/include/linux/perf_event.h
+@@ -170,6 +170,12 @@ struct hw_perf_event {
+ };
+ struct { /* aux / Intel-PT */
+ u64 aux_config;
++ /*
++ * For AUX area events, aux_paused cannot be a state
++ * flag because it can be updated asynchronously to
++ * state.
++ */
++ unsigned int aux_paused;
+ };
+ struct { /* software */
+ struct hrtimer hrtimer;
+@@ -294,6 +300,7 @@ struct perf_event_pmu_context;
+ #define PERF_PMU_CAP_NO_EXCLUDE 0x0040
+ #define PERF_PMU_CAP_AUX_OUTPUT 0x0080
+ #define PERF_PMU_CAP_EXTENDED_HW_TYPE 0x0100
++#define PERF_PMU_CAP_AUX_PAUSE 0x0200
+
+ /**
+ * pmu::scope
+@@ -384,6 +391,8 @@ struct pmu {
+ #define PERF_EF_START 0x01 /* start the counter when adding */
+ #define PERF_EF_RELOAD 0x02 /* reload the counter when starting */
+ #define PERF_EF_UPDATE 0x04 /* update the counter when stopping */
++#define PERF_EF_PAUSE 0x08 /* AUX area event, pause tracing */
++#define PERF_EF_RESUME 0x10 /* AUX area event, resume tracing */
+
+ /*
+ * Adds/Removes a counter to/from the PMU, can be done inside a
+@@ -423,6 +432,18 @@ struct pmu {
+ *
+ * ->start() with PERF_EF_RELOAD will reprogram the counter
+ * value, must be preceded by a ->stop() with PERF_EF_UPDATE.
++ *
++ * ->stop() with PERF_EF_PAUSE will stop as simply as possible. Will not
++	 * overlap another ->stop() with PERF_EF_PAUSE or a ->start() with
++ * PERF_EF_RESUME.
++ *
++ * ->start() with PERF_EF_RESUME will start as simply as possible but
++ * only if the counter is not otherwise stopped. Will not overlap
++ * another ->start() with PERF_EF_RESUME nor ->stop() with
++ * PERF_EF_PAUSE.
++ *
++ * Notably, PERF_EF_PAUSE/PERF_EF_RESUME *can* be concurrent with other
++	 * ->stop()/->start() invocations, just not with themselves.
+ */
+ void (*start) (struct perf_event *event, int flags);
+ void (*stop) (struct perf_event *event, int flags);
+@@ -652,13 +673,15 @@ struct swevent_hlist {
+ struct rcu_head rcu_head;
+ };
+
+-#define PERF_ATTACH_CONTEXT 0x01
+-#define PERF_ATTACH_GROUP 0x02
+-#define PERF_ATTACH_TASK 0x04
+-#define PERF_ATTACH_TASK_DATA 0x08
+-#define PERF_ATTACH_ITRACE 0x10
+-#define PERF_ATTACH_SCHED_CB 0x20
+-#define PERF_ATTACH_CHILD 0x40
++#define PERF_ATTACH_CONTEXT 0x0001
++#define PERF_ATTACH_GROUP 0x0002
++#define PERF_ATTACH_TASK 0x0004
++#define PERF_ATTACH_TASK_DATA 0x0008
++#define PERF_ATTACH_ITRACE 0x0010
++#define PERF_ATTACH_SCHED_CB 0x0020
++#define PERF_ATTACH_CHILD 0x0040
++#define PERF_ATTACH_EXCLUSIVE 0x0080
++#define PERF_ATTACH_CALLCHAIN 0x0100
+
+ struct bpf_prog;
+ struct perf_cgroup;
+@@ -810,7 +833,6 @@ struct perf_event {
+ struct irq_work pending_disable_irq;
+ struct callback_head pending_task;
+ unsigned int pending_work;
+- struct rcuwait pending_work_wait;
+
+ atomic_t event_limit;
+
+@@ -1685,6 +1707,13 @@ static inline bool has_aux(struct perf_event *event)
+ return event->pmu->setup_aux;
+ }
+
++static inline bool has_aux_action(struct perf_event *event)
++{
++ return event->attr.aux_sample_size ||
++ event->attr.aux_pause ||
++ event->attr.aux_resume;
++}
++
+ static inline bool is_write_backward(struct perf_event *event)
+ {
+ return !!event->attr.write_backward;
+diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
+index 8df030ebd86286..be6ca84db4d85c 100644
+--- a/include/linux/pgtable.h
++++ b/include/linux/pgtable.h
+@@ -201,10 +201,14 @@ static inline int pmd_dirty(pmd_t pmd)
+ * hazard could result in the direct mode hypervisor case, since the actual
+ * write to the page tables may not yet have taken place, so reads though
+ * a raw PTE pointer after it has been modified are not guaranteed to be
+- * up to date. This mode can only be entered and left under the protection of
+- * the page table locks for all page tables which may be modified. In the UP
+- * case, this is required so that preemption is disabled, and in the SMP case,
+- * it must synchronize the delayed page table writes properly on other CPUs.
++ * up to date.
++ *
++ * In the general case, no lock is guaranteed to be held between entry and exit
++ * of the lazy mode. So the implementation must assume preemption may be enabled
++ * and cpu migration is possible; it must take steps to be robust against this.
++ * (In practice, for user PTE updates, the appropriate page table lock(s) are
++ * held, but for kernel PTE updates, no lock is held). Nesting is not permitted
++ * and the mode cannot be used in interrupt context.
+ */
+ #ifndef __HAVE_ARCH_ENTER_LAZY_MMU_MODE
+ #define arch_enter_lazy_mmu_mode() do {} while (0)
+@@ -266,7 +270,6 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+ {
+ page_table_check_ptes_set(mm, ptep, pte, nr);
+
+- arch_enter_lazy_mmu_mode();
+ for (;;) {
+ set_pte(ptep, pte);
+ if (--nr == 0)
+@@ -274,7 +277,6 @@ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+ ptep++;
+ pte = pte_next_pfn(pte);
+ }
+- arch_leave_lazy_mmu_mode();
+ }
+ #endif
+ #define set_pte_at(mm, addr, ptep, pte) set_ptes(mm, addr, ptep, pte, 1)
+diff --git a/include/linux/printk.h b/include/linux/printk.h
+index eca9bb2ee637b6..0cb647ecd77f54 100644
+--- a/include/linux/printk.h
++++ b/include/linux/printk.h
+@@ -204,6 +204,7 @@ void printk_legacy_allow_panic_sync(void);
+ extern bool nbcon_device_try_acquire(struct console *con);
+ extern void nbcon_device_release(struct console *con);
+ void nbcon_atomic_flush_unsafe(void);
++bool pr_flush(int timeout_ms, bool reset_on_progress);
+ #else
+ static inline __printf(1, 0)
+ int vprintk(const char *s, va_list args)
+@@ -304,6 +305,11 @@ static inline void nbcon_atomic_flush_unsafe(void)
+ {
+ }
+
++static inline bool pr_flush(int timeout_ms, bool reset_on_progress)
++{
++ return true;
++}
++
+ #endif
+
+ bool this_cpu_in_panic(void);
+diff --git a/include/linux/tpm.h b/include/linux/tpm.h
+index 20a40ade803086..6c3125300c009a 100644
+--- a/include/linux/tpm.h
++++ b/include/linux/tpm.h
+@@ -335,6 +335,7 @@ enum tpm2_cc_attrs {
+ #define TPM_VID_WINBOND 0x1050
+ #define TPM_VID_STM 0x104A
+ #define TPM_VID_ATML 0x1114
++#define TPM_VID_IFX 0x15D1
+
+ enum tpm_chip_flags {
+ TPM_CHIP_FLAG_BOOTSTRAPPED = BIT(0),
+diff --git a/include/net/bluetooth/hci.h b/include/net/bluetooth/hci.h
+index dd10e02bfc746e..71d24328764065 100644
+--- a/include/net/bluetooth/hci.h
++++ b/include/net/bluetooth/hci.h
+@@ -353,6 +353,22 @@ enum {
+ * during the hdev->setup vendor callback.
+ */
+ HCI_QUIRK_FIXUP_LE_EXT_ADV_REPORT_PHY,
++
++ /* When this quirk is set, the HCI_OP_READ_VOICE_SETTING command is
++ * skipped. This is required for a subset of the CSR controller clones
++ * which erroneously claim to support it.
++ *
++ * This quirk must be set before hci_register_dev is called.
++ */
++ HCI_QUIRK_BROKEN_READ_VOICE_SETTING,
++
++ /* When this quirk is set, the HCI_OP_READ_PAGE_SCAN_TYPE command is
++ * skipped. This is required for a subset of the CSR controller clones
++ * which erroneously claim to support it.
++ *
++ * This quirk must be set before hci_register_dev is called.
++ */
++ HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE,
+ };
+
+ /* HCI device flags */
+diff --git a/include/net/bluetooth/hci_core.h b/include/net/bluetooth/hci_core.h
+index c95f7e6ba25514..4245910ffc4a2d 100644
+--- a/include/net/bluetooth/hci_core.h
++++ b/include/net/bluetooth/hci_core.h
+@@ -1921,6 +1921,10 @@ void hci_conn_del_sysfs(struct hci_conn *conn);
+ ((dev)->commands[20] & 0x10 && \
+ !test_bit(HCI_QUIRK_BROKEN_READ_ENC_KEY_SIZE, &hdev->quirks))
+
++#define read_voice_setting_capable(dev) \
++ ((dev)->commands[9] & 0x04 && \
++ !test_bit(HCI_QUIRK_BROKEN_READ_VOICE_SETTING, &(dev)->quirks))
++
+ /* Use enhanced synchronous connection if command is supported and its quirk
+ * has not been set.
+ */
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index 5b712582f9a9ce..3b964f8834e719 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -2826,6 +2826,11 @@ struct ieee80211_txq {
+ * implements MLO, so operation can continue on other links when one
+ * link is switching.
+ *
++ * @IEEE80211_HW_STRICT: strictly enforce certain things mandated by the spec
++ * but otherwise ignored/worked around for interoperability. This is a
++ * HW flag so drivers can opt in at their own discretion, e.g. in
++ * testing.
++ *
+ * @NUM_IEEE80211_HW_FLAGS: number of hardware flags, used for sizing arrays
+ */
+ enum ieee80211_hw_flags {
+@@ -2885,6 +2890,7 @@ enum ieee80211_hw_flags {
+ IEEE80211_HW_DISALLOW_PUNCTURING,
+ IEEE80211_HW_DISALLOW_PUNCTURING_5GHZ,
+ IEEE80211_HW_HANDLES_QUIET_CSA,
++ IEEE80211_HW_STRICT,
+
+ /* keep last, obviously */
+ NUM_IEEE80211_HW_FLAGS
+diff --git a/include/net/sctp/structs.h b/include/net/sctp/structs.h
+index 31248cfdfb235f..dcd288fa1bb6fb 100644
+--- a/include/net/sctp/structs.h
++++ b/include/net/sctp/structs.h
+@@ -775,6 +775,7 @@ struct sctp_transport {
+
+ /* Reference counting. */
+ refcount_t refcnt;
++ __u32 dead:1,
+ /* RTO-Pending : A flag used to track if one of the DATA
+ * chunks sent to this address is currently being
+ * used to compute a RTT. If this flag is 0,
+@@ -784,7 +785,7 @@ struct sctp_transport {
+ * calculation completes (i.e. the DATA chunk
+ * is SACK'd) clear this flag.
+ */
+- __u32 rto_pending:1,
++ rto_pending:1,
+
+ /*
+ * hb_sent : a flag that signals that we have a pending
+diff --git a/include/net/sock.h b/include/net/sock.h
+index fa055cf1785efd..fa9b9dadbe1709 100644
+--- a/include/net/sock.h
++++ b/include/net/sock.h
+@@ -338,6 +338,8 @@ struct sk_filter;
+ * @sk_txtime_unused: unused txtime flags
+ * @ns_tracker: tracker for netns reference
+ * @sk_user_frags: xarray of pages the user is holding a reference on.
++ * @sk_owner: reference to the real owner of the socket that calls
++ * sock_lock_init_class_and_name().
+ */
+ struct sock {
+ /*
+@@ -544,6 +546,10 @@ struct sock {
+ struct rcu_head sk_rcu;
+ netns_tracker ns_tracker;
+ struct xarray sk_user_frags;
++
++#if IS_ENABLED(CONFIG_PROVE_LOCKING) && IS_ENABLED(CONFIG_MODULES)
++ struct module *sk_owner;
++#endif
+ };
+
+ struct sock_bh_locked {
+@@ -1585,6 +1591,35 @@ static inline void sk_mem_uncharge(struct sock *sk, int size)
+ sk_mem_reclaim(sk);
+ }
+
++#if IS_ENABLED(CONFIG_PROVE_LOCKING) && IS_ENABLED(CONFIG_MODULES)
++static inline void sk_owner_set(struct sock *sk, struct module *owner)
++{
++ __module_get(owner);
++ sk->sk_owner = owner;
++}
++
++static inline void sk_owner_clear(struct sock *sk)
++{
++ sk->sk_owner = NULL;
++}
++
++static inline void sk_owner_put(struct sock *sk)
++{
++ module_put(sk->sk_owner);
++}
++#else
++static inline void sk_owner_set(struct sock *sk, struct module *owner)
++{
++}
++
++static inline void sk_owner_clear(struct sock *sk)
++{
++}
++
++static inline void sk_owner_put(struct sock *sk)
++{
++}
++#endif
+ /*
+ * Macro so as to not evaluate some arguments when
+ * lockdep is not enabled.
+@@ -1594,13 +1629,14 @@ static inline void sk_mem_uncharge(struct sock *sk, int size)
+ */
+ #define sock_lock_init_class_and_name(sk, sname, skey, name, key) \
+ do { \
++ sk_owner_set(sk, THIS_MODULE); \
+ sk->sk_lock.owned = 0; \
+ init_waitqueue_head(&sk->sk_lock.wq); \
+ spin_lock_init(&(sk)->sk_lock.slock); \
+ debug_check_no_locks_freed((void *)&(sk)->sk_lock, \
+- sizeof((sk)->sk_lock)); \
++ sizeof((sk)->sk_lock)); \
+ lockdep_set_class_and_name(&(sk)->sk_lock.slock, \
+- (skey), (sname)); \
++ (skey), (sname)); \
+ lockdep_init_map(&(sk)->sk_lock.dep_map, (name), (key), 0); \
+ } while (0)
+
+diff --git a/include/uapi/linux/kfd_ioctl.h b/include/uapi/linux/kfd_ioctl.h
+index 717307d6b5b74c..3e1c11d9d9808f 100644
+--- a/include/uapi/linux/kfd_ioctl.h
++++ b/include/uapi/linux/kfd_ioctl.h
+@@ -62,6 +62,8 @@ struct kfd_ioctl_get_version_args {
+ #define KFD_MAX_QUEUE_PERCENTAGE 100
+ #define KFD_MAX_QUEUE_PRIORITY 15
+
++#define KFD_MIN_QUEUE_RING_SIZE 1024
++
+ struct kfd_ioctl_create_queue_args {
+ __u64 ring_base_address; /* to KFD */
+ __u64 write_pointer_address; /* from KFD */
+diff --git a/include/uapi/linux/landlock.h b/include/uapi/linux/landlock.h
+index 33745642f7875a..c223572f82296b 100644
+--- a/include/uapi/linux/landlock.h
++++ b/include/uapi/linux/landlock.h
+@@ -57,9 +57,11 @@ struct landlock_ruleset_attr {
+ *
+ * - %LANDLOCK_CREATE_RULESET_VERSION: Get the highest supported Landlock ABI
+ * version.
++ * - %LANDLOCK_CREATE_RULESET_ERRATA: Get a bitmask of fixed issues.
+ */
+ /* clang-format off */
+ #define LANDLOCK_CREATE_RULESET_VERSION (1U << 0)
++#define LANDLOCK_CREATE_RULESET_ERRATA (1U << 1)
+ /* clang-format on */
+
+ /**
+diff --git a/include/uapi/linux/perf_event.h b/include/uapi/linux/perf_event.h
+index 4842c36fdf8019..0524d541d4e3d5 100644
+--- a/include/uapi/linux/perf_event.h
++++ b/include/uapi/linux/perf_event.h
+@@ -511,7 +511,16 @@ struct perf_event_attr {
+ __u16 sample_max_stack;
+ __u16 __reserved_2;
+ __u32 aux_sample_size;
+- __u32 __reserved_3;
++
++ union {
++ __u32 aux_action;
++ struct {
++ __u32 aux_start_paused : 1, /* start AUX area tracing paused */
++ aux_pause : 1, /* on overflow, pause AUX area tracing */
++ aux_resume : 1, /* on overflow, resume AUX area tracing */
++ __reserved_3 : 29;
++ };
++ };
+
+ /*
+ * User provided data if sigtrap=1, passed back to user via
+diff --git a/include/uapi/linux/psp-sev.h b/include/uapi/linux/psp-sev.h
+index 832c15d9155bdb..eeb20dfb1fdaa4 100644
+--- a/include/uapi/linux/psp-sev.h
++++ b/include/uapi/linux/psp-sev.h
+@@ -73,13 +73,20 @@ typedef enum {
+ SEV_RET_INVALID_PARAM,
+ SEV_RET_RESOURCE_LIMIT,
+ SEV_RET_SECURE_DATA_INVALID,
+- SEV_RET_INVALID_KEY = 0x27,
+- SEV_RET_INVALID_PAGE_SIZE,
+- SEV_RET_INVALID_PAGE_STATE,
+- SEV_RET_INVALID_MDATA_ENTRY,
+- SEV_RET_INVALID_PAGE_OWNER,
+- SEV_RET_INVALID_PAGE_AEAD_OFLOW,
+- SEV_RET_RMP_INIT_REQUIRED,
++ SEV_RET_INVALID_PAGE_SIZE = 0x0019,
++ SEV_RET_INVALID_PAGE_STATE = 0x001A,
++ SEV_RET_INVALID_MDATA_ENTRY = 0x001B,
++ SEV_RET_INVALID_PAGE_OWNER = 0x001C,
++ SEV_RET_AEAD_OFLOW = 0x001D,
++ SEV_RET_EXIT_RING_BUFFER = 0x001F,
++ SEV_RET_RMP_INIT_REQUIRED = 0x0020,
++ SEV_RET_BAD_SVN = 0x0021,
++ SEV_RET_BAD_VERSION = 0x0022,
++ SEV_RET_SHUTDOWN_REQUIRED = 0x0023,
++ SEV_RET_UPDATE_FAILED = 0x0024,
++ SEV_RET_RESTORE_REQUIRED = 0x0025,
++ SEV_RET_RMP_INITIALIZATION_FAILED = 0x0026,
++ SEV_RET_INVALID_KEY = 0x0027,
+ SEV_RET_MAX,
+ } sev_ret_code;
+
+diff --git a/include/uapi/linux/rkisp1-config.h b/include/uapi/linux/rkisp1-config.h
+index 430daceafac705..2d995f3c1ca378 100644
+--- a/include/uapi/linux/rkisp1-config.h
++++ b/include/uapi/linux/rkisp1-config.h
+@@ -1528,7 +1528,7 @@ enum rksip1_ext_param_buffer_version {
+ * The expected memory layout of the parameters buffer is::
+ *
+ * +-------------------- struct rkisp1_ext_params_cfg -------------------+
+- * | version = RKISP_EXT_PARAMS_BUFFER_V1; |
++ * | version = RKISP1_EXT_PARAM_BUFFER_V1; |
+ * | data_size = sizeof(struct rkisp1_ext_params_bls_config) |
+ * | + sizeof(struct rkisp1_ext_params_dpcc_config); |
+ * | +------------------------- data ---------------------------------+ |
+diff --git a/include/xen/interface/xen-mca.h b/include/xen/interface/xen-mca.h
+index 464aa6b3a5f928..1c9afbe8cc2600 100644
+--- a/include/xen/interface/xen-mca.h
++++ b/include/xen/interface/xen-mca.h
+@@ -372,7 +372,7 @@ struct xen_mce {
+ #define XEN_MCE_LOG_LEN 32
+
+ struct xen_mce_log {
+- char signature[12]; /* "MACHINECHECK" */
++ char signature[12] __nonstring; /* "MACHINECHECK" */
+ unsigned len; /* = XEN_MCE_LOG_LEN */
+ unsigned next;
+ unsigned flags;
+diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
+index cf28d29fffbf0e..19de7129ae0b35 100644
+--- a/io_uring/io_uring.c
++++ b/io_uring/io_uring.c
+@@ -1821,7 +1821,7 @@ void io_wq_submit_work(struct io_wq_work *work)
+ * Don't allow any multishot execution from io-wq. It's more restrictive
+ * than necessary and also cleaner.
+ */
+- if (req->flags & REQ_F_APOLL_MULTISHOT) {
++ if (req->flags & (REQ_F_MULTISHOT|REQ_F_APOLL_MULTISHOT)) {
+ err = -EBADFD;
+ if (!io_file_can_poll(req))
+ goto fail;
+@@ -1832,7 +1832,7 @@ void io_wq_submit_work(struct io_wq_work *work)
+ goto fail;
+ return;
+ } else {
+- req->flags &= ~REQ_F_APOLL_MULTISHOT;
++ req->flags &= ~(REQ_F_APOLL_MULTISHOT|REQ_F_MULTISHOT);
+ }
+ }
+
+diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c
+index e1895952066eeb..7a8c3a004800ed 100644
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -484,6 +484,8 @@ int io_provide_buffers_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe
+ p->nbufs = tmp;
+ p->addr = READ_ONCE(sqe->addr);
+ p->len = READ_ONCE(sqe->len);
++ if (!p->len)
++ return -EINVAL;
+
+ if (check_mul_overflow((unsigned long)p->len, (unsigned long)p->nbufs,
+ &size))
+diff --git a/io_uring/net.c b/io_uring/net.c
+index 7ea99e082e97e7..384915d931b72c 100644
+--- a/io_uring/net.c
++++ b/io_uring/net.c
+@@ -435,6 +435,7 @@ int io_sendmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ sr->msg_flags |= MSG_WAITALL;
+ sr->buf_group = req->buf_index;
+ req->buf_list = NULL;
++ req->flags |= REQ_F_MULTISHOT;
+ }
+
+ #ifdef CONFIG_COMPAT
+@@ -1616,6 +1617,8 @@ int io_accept(struct io_kiocb *req, unsigned int issue_flags)
+ }
+
+ io_req_set_res(req, ret, cflags);
++ if (!(issue_flags & IO_URING_F_MULTISHOT))
++ return IOU_OK;
+ return IOU_STOP_MULTISHOT;
+ }
+
+diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
+index 216535e055e112..4378f3eff25d25 100644
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -5909,6 +5909,12 @@ static void kill_css(struct cgroup_subsys_state *css)
+ if (css->flags & CSS_DYING)
+ return;
+
++ /*
++ * Call css_killed(), if defined, before setting the CSS_DYING flag
++ */
++ if (css->ss->css_killed)
++ css->ss->css_killed(css);
++
+ css->flags |= CSS_DYING;
+
+ /*
+diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
+index 24ece85fd3b126..839f88ba17f7d3 100644
+--- a/kernel/cgroup/cpuset.c
++++ b/kernel/cgroup/cpuset.c
+@@ -84,9 +84,19 @@ static bool have_boot_isolcpus;
+ static struct list_head remote_children;
+
+ /*
+- * A flag to force sched domain rebuild at the end of an operation while
+- * inhibiting it in the intermediate stages when set. Currently it is only
+- * set in hotplug code.
++ * A flag to force sched domain rebuild at the end of an operation.
++ * It can be set in
++ * - update_partition_sd_lb()
++ * - remote_partition_check()
++ * - update_cpumasks_hier()
++ * - cpuset_update_flag()
++ * - cpuset_hotplug_update_tasks()
++ * - cpuset_handle_hotplug()
++ *
++ * Protected by cpuset_mutex (with cpus_read_lock held) or cpus_write_lock.
++ *
++ * Note that update_relax_domain_level() in cpuset-v1.c can still call
++ * rebuild_sched_domains_locked() directly without using this flag.
+ */
+ static bool force_sd_rebuild;
+
+@@ -283,6 +293,12 @@ static inline void dec_attach_in_progress(struct cpuset *cs)
+ mutex_unlock(&cpuset_mutex);
+ }
+
++static inline bool cpuset_v2(void)
++{
++ return !IS_ENABLED(CONFIG_CPUSETS_V1) ||
++ cgroup_subsys_on_dfl(cpuset_cgrp_subsys);
++}
++
+ /*
+ * Cgroup v2 behavior is used on the "cpus" and "mems" control files when
+ * on default hierarchy or when the cpuset_v2_mode flag is set by mounting
+@@ -293,7 +309,7 @@ static inline void dec_attach_in_progress(struct cpuset *cs)
+ */
+ static inline bool is_in_v2_mode(void)
+ {
+- return cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
++ return cpuset_v2() ||
+ (cpuset_cgrp_subsys.root->flags & CGRP_ROOT_CPUSET_V2_MODE);
+ }
+
+@@ -728,7 +744,7 @@ static int generate_sched_domains(cpumask_var_t **domains,
+ int nslot; /* next empty doms[] struct cpumask slot */
+ struct cgroup_subsys_state *pos_css;
+ bool root_load_balance = is_sched_load_balance(&top_cpuset);
+- bool cgrpv2 = cgroup_subsys_on_dfl(cpuset_cgrp_subsys);
++ bool cgrpv2 = cpuset_v2();
+ int nslot_update;
+
+ doms = NULL;
+@@ -998,6 +1014,7 @@ void rebuild_sched_domains_locked(void)
+
+ lockdep_assert_cpus_held();
+ lockdep_assert_held(&cpuset_mutex);
++ force_sd_rebuild = false;
+
+ /*
+ * If we have raced with CPU hotplug, return early to avoid
+@@ -1172,8 +1189,8 @@ static void update_partition_sd_lb(struct cpuset *cs, int old_prs)
+ clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
+ }
+
+- if (rebuild_domains && !force_sd_rebuild)
+- rebuild_sched_domains_locked();
++ if (rebuild_domains)
++ cpuset_force_rebuild();
+ }
+
+ /*
+@@ -1195,7 +1212,7 @@ static void reset_partition_data(struct cpuset *cs)
+ {
+ struct cpuset *parent = parent_cs(cs);
+
+- if (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys))
++ if (!cpuset_v2())
+ return;
+
+ lockdep_assert_held(&callback_lock);
+@@ -1383,6 +1400,7 @@ static int remote_partition_enable(struct cpuset *cs, int new_prs,
+ list_add(&cs->remote_sibling, &remote_children);
+ spin_unlock_irq(&callback_lock);
+ update_unbound_workqueue_cpumask(isolcpus_updated);
++ cs->prs_err = 0;
+
+ /*
+ * Proprogate changes in top_cpuset's effective_cpus down the hierarchy.
+@@ -1413,9 +1431,11 @@ static void remote_partition_disable(struct cpuset *cs, struct tmpmasks *tmp)
+ list_del_init(&cs->remote_sibling);
+ isolcpus_updated = partition_xcpus_del(cs->partition_root_state,
+ NULL, tmp->new_cpus);
+- cs->partition_root_state = -cs->partition_root_state;
+- if (!cs->prs_err)
+- cs->prs_err = PERR_INVCPUS;
++ if (cs->prs_err)
++ cs->partition_root_state = -cs->partition_root_state;
++ else
++ cs->partition_root_state = PRS_MEMBER;
++
+ reset_partition_data(cs);
+ spin_unlock_irq(&callback_lock);
+ update_unbound_workqueue_cpumask(isolcpus_updated);
+@@ -1448,8 +1468,10 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *newmask,
+
+ WARN_ON_ONCE(!cpumask_subset(cs->effective_xcpus, subpartitions_cpus));
+
+- if (cpumask_empty(newmask))
++ if (cpumask_empty(newmask)) {
++ cs->prs_err = PERR_CPUSEMPTY;
+ goto invalidate;
++ }
+
+ adding = cpumask_andnot(tmp->addmask, newmask, cs->effective_xcpus);
+ deleting = cpumask_andnot(tmp->delmask, cs->effective_xcpus, newmask);
+@@ -1459,10 +1481,15 @@ static void remote_cpus_update(struct cpuset *cs, struct cpumask *newmask,
+ * not allocated to other partitions and there are effective_cpus
+ * left in the top cpuset.
+ */
+- if (adding && (!capable(CAP_SYS_ADMIN) ||
+- cpumask_intersects(tmp->addmask, subpartitions_cpus) ||
+- cpumask_subset(top_cpuset.effective_cpus, tmp->addmask)))
+- goto invalidate;
++ if (adding) {
++ if (!capable(CAP_SYS_ADMIN))
++ cs->prs_err = PERR_ACCESS;
++ else if (cpumask_intersects(tmp->addmask, subpartitions_cpus) ||
++ cpumask_subset(top_cpuset.effective_cpus, tmp->addmask))
++ cs->prs_err = PERR_NOCPUS;
++ if (cs->prs_err)
++ goto invalidate;
++ }
+
+ spin_lock_irq(&callback_lock);
+ if (adding)
+@@ -1520,8 +1547,8 @@ static void remote_partition_check(struct cpuset *cs, struct cpumask *newmask,
+ remote_partition_disable(child, tmp);
+ disable_cnt++;
+ }
+- if (disable_cnt && !force_sd_rebuild)
+- rebuild_sched_domains_locked();
++ if (disable_cnt)
++ cpuset_force_rebuild();
+ }
+
+ /*
+@@ -1578,7 +1605,7 @@ static bool prstate_housekeeping_conflict(int prstate, struct cpumask *new_cpus)
+ * The partcmd_update command is used by update_cpumasks_hier() with newmask
+ * NULL and update_cpumask() with newmask set. The partcmd_invalidate is used
+ * by update_cpumask() with NULL newmask. In both cases, the callers won't
+- * check for error and so partition_root_state and prs_error will be updated
++ * check for error and so partition_root_state and prs_err will be updated
+ * directly.
+ */
+ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
+@@ -1656,9 +1683,9 @@ static int update_parent_effective_cpumask(struct cpuset *cs, int cmd,
+ if (nocpu)
+ return PERR_NOCPUS;
+
+- cpumask_copy(tmp->delmask, xcpus);
+- deleting = true;
+- subparts_delta++;
++ deleting = cpumask_and(tmp->delmask, xcpus, parent->effective_xcpus);
++ if (deleting)
++ subparts_delta++;
+ new_prs = (cmd == partcmd_enable) ? PRS_ROOT : PRS_ISOLATED;
+ } else if (cmd == partcmd_disable) {
+ /*
+@@ -1930,12 +1957,6 @@ static void compute_partition_effective_cpumask(struct cpuset *cs,
+ rcu_read_unlock();
+ }
+
+-/*
+- * update_cpumasks_hier() flags
+- */
+-#define HIER_CHECKALL 0x01 /* Check all cpusets with no skipping */
+-#define HIER_NO_SD_REBUILD 0x02 /* Don't rebuild sched domains */
+-
+ /*
+ * update_cpumasks_hier - Update effective cpumasks and tasks in the subtree
+ * @cs: the cpuset to consider
+@@ -1950,7 +1971,7 @@ static void compute_partition_effective_cpumask(struct cpuset *cs,
+ * Called with cpuset_mutex held
+ */
+ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
+- int flags)
++ bool force)
+ {
+ struct cpuset *cp;
+ struct cgroup_subsys_state *pos_css;
+@@ -2015,12 +2036,12 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
+ * Skip the whole subtree if
+ * 1) the cpumask remains the same,
+ * 2) has no partition root state,
+- * 3) HIER_CHECKALL flag not set, and
++ * 3) force flag not set, and
+ * 4) for v2 load balance state same as its parent.
+ */
+- if (!cp->partition_root_state && !(flags & HIER_CHECKALL) &&
++ if (!cp->partition_root_state && !force &&
+ cpumask_equal(tmp->new_cpus, cp->effective_cpus) &&
+- (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
++ (!cpuset_v2() ||
+ (is_sched_load_balance(parent) == is_sched_load_balance(cp)))) {
+ pos_css = css_rightmost_descendant(pos_css);
+ continue;
+@@ -2094,8 +2115,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
+ * from parent if current cpuset isn't a valid partition root
+ * and their load balance states differ.
+ */
+- if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
+- !is_partition_valid(cp) &&
++ if (cpuset_v2() && !is_partition_valid(cp) &&
+ (is_sched_load_balance(parent) != is_sched_load_balance(cp))) {
+ if (is_sched_load_balance(parent))
+ set_bit(CS_SCHED_LOAD_BALANCE, &cp->flags);
+@@ -2111,8 +2131,7 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
+ */
+ if (!cpumask_empty(cp->cpus_allowed) &&
+ is_sched_load_balance(cp) &&
+- (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
+- is_partition_valid(cp)))
++ (!cpuset_v2() || is_partition_valid(cp)))
+ need_rebuild_sched_domains = true;
+
+ rcu_read_lock();
+@@ -2120,9 +2139,8 @@ static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
+ }
+ rcu_read_unlock();
+
+- if (need_rebuild_sched_domains && !(flags & HIER_NO_SD_REBUILD) &&
+- !force_sd_rebuild)
+- rebuild_sched_domains_locked();
++ if (need_rebuild_sched_domains)
++ cpuset_force_rebuild();
+ }
+
+ /**
+@@ -2149,9 +2167,7 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
+ * directly.
+ *
+ * The update_cpumasks_hier() function may sleep. So we have to
+- * release the RCU read lock before calling it. HIER_NO_SD_REBUILD
+- * flag is used to suppress rebuild of sched domains as the callers
+- * will take care of that.
++ * release the RCU read lock before calling it.
+ */
+ rcu_read_lock();
+ cpuset_for_each_child(sibling, pos_css, parent) {
+@@ -2167,7 +2183,7 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
+ continue;
+
+ rcu_read_unlock();
+- update_cpumasks_hier(sibling, tmp, HIER_NO_SD_REBUILD);
++ update_cpumasks_hier(sibling, tmp, false);
+ rcu_read_lock();
+ css_put(&sibling->css);
+ }
+@@ -2187,7 +2203,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
+ struct tmpmasks tmp;
+ struct cpuset *parent = parent_cs(cs);
+ bool invalidate = false;
+- int hier_flags = 0;
++ bool force = false;
+ int old_prs = cs->partition_root_state;
+
+ /* top_cpuset.cpus_allowed tracks cpu_online_mask; it's read-only */
+@@ -2248,12 +2264,11 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
+ * Check all the descendants in update_cpumasks_hier() if
+ * effective_xcpus is to be changed.
+ */
+- if (!cpumask_equal(cs->effective_xcpus, trialcs->effective_xcpus))
+- hier_flags = HIER_CHECKALL;
++ force = !cpumask_equal(cs->effective_xcpus, trialcs->effective_xcpus);
+
+ retval = validate_change(cs, trialcs);
+
+- if ((retval == -EINVAL) && cgroup_subsys_on_dfl(cpuset_cgrp_subsys)) {
++ if ((retval == -EINVAL) && cpuset_v2()) {
+ struct cgroup_subsys_state *css;
+ struct cpuset *cp;
+
+@@ -2317,7 +2332,7 @@ static int update_cpumask(struct cpuset *cs, struct cpuset *trialcs,
+ spin_unlock_irq(&callback_lock);
+
+ /* effective_cpus/effective_xcpus will be updated here */
+- update_cpumasks_hier(cs, &tmp, hier_flags);
++ update_cpumasks_hier(cs, &tmp, force);
+
+ /* Update CS_SCHED_LOAD_BALANCE and/or sched_domains, if necessary */
+ if (cs->partition_root_state)
+@@ -2342,7 +2357,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
+ struct tmpmasks tmp;
+ struct cpuset *parent = parent_cs(cs);
+ bool invalidate = false;
+- int hier_flags = 0;
++ bool force = false;
+ int old_prs = cs->partition_root_state;
+
+ if (!*buf) {
+@@ -2365,8 +2380,7 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
+ * Check all the descendants in update_cpumasks_hier() if
+ * effective_xcpus is to be changed.
+ */
+- if (!cpumask_equal(cs->effective_xcpus, trialcs->effective_xcpus))
+- hier_flags = HIER_CHECKALL;
++ force = !cpumask_equal(cs->effective_xcpus, trialcs->effective_xcpus);
+
+ retval = validate_change(cs, trialcs);
+ if (retval)
+@@ -2419,8 +2433,8 @@ static int update_exclusive_cpumask(struct cpuset *cs, struct cpuset *trialcs,
+ * of the subtree when it is a valid partition root or effective_xcpus
+ * is updated.
+ */
+- if (is_partition_valid(cs) || hier_flags)
+- update_cpumasks_hier(cs, &tmp, hier_flags);
++ if (is_partition_valid(cs) || force)
++ update_cpumasks_hier(cs, &tmp, force);
+
+ /* Update CS_SCHED_LOAD_BALANCE and/or sched_domains, if necessary */
+ if (cs->partition_root_state)
+@@ -2745,9 +2759,12 @@ int cpuset_update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
+ cs->flags = trialcs->flags;
+ spin_unlock_irq(&callback_lock);
+
+- if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed &&
+- !force_sd_rebuild)
+- rebuild_sched_domains_locked();
++ if (!cpumask_empty(trialcs->cpus_allowed) && balance_flag_changed) {
++ if (cpuset_v2())
++ cpuset_force_rebuild();
++ else
++ rebuild_sched_domains_locked();
++ }
+
+ if (spread_flag_changed)
+ cpuset1_update_tasks_flags(cs);
+@@ -2861,12 +2878,14 @@ static int update_prstate(struct cpuset *cs, int new_prs)
+ update_unbound_workqueue_cpumask(new_xcpus_state);
+
+ /* Force update if switching back to member */
+- update_cpumasks_hier(cs, &tmpmask, !new_prs ? HIER_CHECKALL : 0);
++ update_cpumasks_hier(cs, &tmpmask, !new_prs);
+
+ /* Update sched domains and load balance flag */
+ update_partition_sd_lb(cs, old_prs);
+
+ notify_partition_change(cs, old_prs);
++ if (force_sd_rebuild)
++ rebuild_sched_domains_locked();
+ free_cpumasks(NULL, &tmpmask);
+ return 0;
+ }
+@@ -2927,8 +2946,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
+ * migration permission derives from hierarchy ownership in
+ * cgroup_procs_write_permission()).
+ */
+- if (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) ||
+- (cpus_updated || mems_updated)) {
++ if (!cpuset_v2() || (cpus_updated || mems_updated)) {
+ ret = security_task_setscheduler(task);
+ if (ret)
+ goto out_unlock;
+@@ -3042,8 +3060,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
+ * in effective cpus and mems. In that case, we can optimize out
+ * by skipping the task iteration and update.
+ */
+- if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
+- !cpus_updated && !mems_updated) {
++ if (cpuset_v2() && !cpus_updated && !mems_updated) {
+ cpuset_attach_nodemask_to = cs->effective_mems;
+ goto out;
+ }
+@@ -3137,6 +3154,8 @@ ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
+ }
+
+ free_cpuset(trialcs);
++ if (force_sd_rebuild)
++ rebuild_sched_domains_locked();
+ out_unlock:
+ mutex_unlock(&cpuset_mutex);
+ cpus_read_unlock();
+@@ -3366,7 +3385,7 @@ cpuset_css_alloc(struct cgroup_subsys_state *parent_css)
+ INIT_LIST_HEAD(&cs->remote_sibling);
+
+ /* Set CS_MEMORY_MIGRATE for default hierarchy */
+- if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys))
++ if (cpuset_v2())
+ __set_bit(CS_MEMORY_MIGRATE, &cs->flags);
+
+ return &cs->css;
+@@ -3393,8 +3412,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
+ /*
+ * For v2, clear CS_SCHED_LOAD_BALANCE if parent is isolated
+ */
+- if (cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
+- !is_sched_load_balance(parent))
++ if (cpuset_v2() && !is_sched_load_balance(parent))
+ clear_bit(CS_SCHED_LOAD_BALANCE, &cs->flags);
+
+ cpuset_inc();
+@@ -3461,11 +3479,7 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
+ cpus_read_lock();
+ mutex_lock(&cpuset_mutex);
+
+- if (is_partition_valid(cs))
+- update_prstate(cs, 0);
+-
+- if (!cgroup_subsys_on_dfl(cpuset_cgrp_subsys) &&
+- is_sched_load_balance(cs))
++ if (!cpuset_v2() && is_sched_load_balance(cs))
+ cpuset_update_flag(CS_SCHED_LOAD_BALANCE, cs, 0);
+
+ cpuset_dec();
+@@ -3475,6 +3489,22 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
+ cpus_read_unlock();
+ }
+
++static void cpuset_css_killed(struct cgroup_subsys_state *css)
++{
++ struct cpuset *cs = css_cs(css);
++
++ cpus_read_lock();
++ mutex_lock(&cpuset_mutex);
++
++ /* Reset valid partition back to member */
++ if (is_partition_valid(cs))
++ update_prstate(cs, PRS_MEMBER);
++
++ mutex_unlock(&cpuset_mutex);
++ cpus_read_unlock();
++
++}
++
+ static void cpuset_css_free(struct cgroup_subsys_state *css)
+ {
+ struct cpuset *cs = css_cs(css);
+@@ -3596,6 +3626,7 @@ struct cgroup_subsys cpuset_cgrp_subsys = {
+ .css_alloc = cpuset_css_alloc,
+ .css_online = cpuset_css_online,
+ .css_offline = cpuset_css_offline,
++ .css_killed = cpuset_css_killed,
+ .css_free = cpuset_css_free,
+ .can_attach = cpuset_can_attach,
+ .cancel_attach = cpuset_cancel_attach,
+@@ -3726,6 +3757,7 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
+
+ if (remote && cpumask_empty(&new_cpus) &&
+ partition_is_populated(cs, NULL)) {
++ cs->prs_err = PERR_HOTPLUG;
+ remote_partition_disable(cs, tmp);
+ compute_effective_cpumask(&new_cpus, cs, parent);
+ remote = false;
+@@ -3879,11 +3911,9 @@ static void cpuset_handle_hotplug(void)
+ rcu_read_unlock();
+ }
+
+- /* rebuild sched domains if cpus_allowed has changed */
+- if (force_sd_rebuild) {
+- force_sd_rebuild = false;
++ /* rebuild sched domains if necessary */
++ if (force_sd_rebuild)
+ rebuild_sched_domains_cpuslocked();
+- }
+
+ free_cpumasks(NULL, ptmp);
+ }
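+The cpuset hunks above convert the inner update paths to only mark
+force_sd_rebuild (via cpuset_force_rebuild()) and leave the single
+rebuild_sched_domains_locked() call to the outermost callers. A minimal
+userspace C sketch of that defer-and-flush pattern (all names here are
+illustrative stand-ins, not the kernel's):
+
+	#include <stdbool.h>
+	#include <stdio.h>
+
+	static bool force_rebuild;
+
+	static void rebuild_domains(void)
+	{
+		printf("rebuild sched domains once\n");
+	}
+
+	/* inner update helper: only records that a rebuild is needed */
+	static void update_one(int id)
+	{
+		printf("update cpuset %d\n", id);
+		force_rebuild = true;
+	}
+
+	int main(void)
+	{
+		for (int i = 0; i < 3; i++)
+			update_one(i);
+		if (force_rebuild) {	/* one rebuild after all updates */
+			force_rebuild = false;
+			rebuild_domains();
+		}
+		return 0;
+	}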
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index b5ccf52bb71baa..97af53c43608e4 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -2146,7 +2146,7 @@ static void perf_put_aux_event(struct perf_event *event)
+
+ static bool perf_need_aux_event(struct perf_event *event)
+ {
+- return !!event->attr.aux_output || !!event->attr.aux_sample_size;
++ return event->attr.aux_output || has_aux_action(event);
+ }
+
+ static int perf_get_aux_event(struct perf_event *event,
+@@ -2171,6 +2171,10 @@ static int perf_get_aux_event(struct perf_event *event,
+ !perf_aux_output_match(event, group_leader))
+ return 0;
+
++ if ((event->attr.aux_pause || event->attr.aux_resume) &&
++ !(group_leader->pmu->capabilities & PERF_PMU_CAP_AUX_PAUSE))
++ return 0;
++
+ if (event->attr.aux_sample_size && !group_leader->pmu->snapshot_aux)
+ return 0;
+
+@@ -5258,6 +5262,8 @@ static int exclusive_event_init(struct perf_event *event)
+ return -EBUSY;
+ }
+
++ event->attach_state |= PERF_ATTACH_EXCLUSIVE;
++
+ return 0;
+ }
+
+@@ -5265,14 +5271,13 @@ static void exclusive_event_destroy(struct perf_event *event)
+ {
+ struct pmu *pmu = event->pmu;
+
+- if (!is_exclusive_pmu(pmu))
+- return;
+-
+ /* see comment in exclusive_event_init() */
+ if (event->attach_state & PERF_ATTACH_TASK)
+ atomic_dec(&pmu->exclusive_cnt);
+ else
+ atomic_inc(&pmu->exclusive_cnt);
++
++ event->attach_state &= ~PERF_ATTACH_EXCLUSIVE;
+ }
+
+ static bool exclusive_event_match(struct perf_event *e1, struct perf_event *e2)
+@@ -5307,35 +5312,58 @@ static bool exclusive_event_installable(struct perf_event *event,
+ static void perf_addr_filters_splice(struct perf_event *event,
+ struct list_head *head);
+
+-static void perf_pending_task_sync(struct perf_event *event)
++/* vs perf_event_alloc() error */
++static void __free_event(struct perf_event *event)
+ {
+- struct callback_head *head = &event->pending_task;
++ if (event->attach_state & PERF_ATTACH_CALLCHAIN)
++ put_callchain_buffers();
++
++ kfree(event->addr_filter_ranges);
++
++ if (event->attach_state & PERF_ATTACH_EXCLUSIVE)
++ exclusive_event_destroy(event);
++
++ if (is_cgroup_event(event))
++ perf_detach_cgroup(event);
++
++ if (event->destroy)
++ event->destroy(event);
+
+- if (!event->pending_work)
+- return;
+ /*
+- * If the task is queued to the current task's queue, we
+- * obviously can't wait for it to complete. Simply cancel it.
++ * Must be after ->destroy(), due to uprobe_perf_close() using
++ * hw.target.
+ */
+- if (task_work_cancel(current, head)) {
+- event->pending_work = 0;
+- local_dec(&event->ctx->nr_no_switch_fast);
+- return;
++ if (event->hw.target)
++ put_task_struct(event->hw.target);
++
++ if (event->pmu_ctx) {
++ /*
++ * put_pmu_ctx() needs an event->ctx reference, because of
++ * epc->ctx.
++ */
++ WARN_ON_ONCE(!event->ctx);
++ WARN_ON_ONCE(event->pmu_ctx->ctx != event->ctx);
++ put_pmu_ctx(event->pmu_ctx);
+ }
+
+ /*
+- * All accesses related to the event are within the same RCU section in
+- * perf_pending_task(). The RCU grace period before the event is freed
+- * will make sure all those accesses are complete by then.
++ * perf_event_free_task() relies on put_ctx() being 'last', in
++ * particular all task references must be cleaned up.
+ */
+- rcuwait_wait_event(&event->pending_work_wait, !event->pending_work, TASK_UNINTERRUPTIBLE);
++ if (event->ctx)
++ put_ctx(event->ctx);
++
++ if (event->pmu)
++ module_put(event->pmu->module);
++
++ call_rcu(&event->rcu_head, free_event_rcu);
+ }
+
++/* vs perf_event_alloc() success */
+ static void _free_event(struct perf_event *event)
+ {
+ irq_work_sync(&event->pending_irq);
+ irq_work_sync(&event->pending_disable_irq);
+- perf_pending_task_sync(event);
+
+ unaccount_event(event);
+
+@@ -5353,42 +5381,10 @@ static void _free_event(struct perf_event *event)
+ mutex_unlock(&event->mmap_mutex);
+ }
+
+- if (is_cgroup_event(event))
+- perf_detach_cgroup(event);
+-
+- if (!event->parent) {
+- if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
+- put_callchain_buffers();
+- }
+-
+ perf_event_free_bpf_prog(event);
+ perf_addr_filters_splice(event, NULL);
+- kfree(event->addr_filter_ranges);
+-
+- if (event->destroy)
+- event->destroy(event);
+-
+- /*
+- * Must be after ->destroy(), due to uprobe_perf_close() using
+- * hw.target.
+- */
+- if (event->hw.target)
+- put_task_struct(event->hw.target);
+
+- if (event->pmu_ctx)
+- put_pmu_ctx(event->pmu_ctx);
+-
+- /*
+- * perf_event_free_task() relies on put_ctx() being 'last', in particular
+- * all task references must be cleaned up.
+- */
+- if (event->ctx)
+- put_ctx(event->ctx);
+-
+- exclusive_event_destroy(event);
+- module_put(event->pmu->module);
+-
+- call_rcu(&event->rcu_head, free_event_rcu);
++ __free_event(event);
+ }
+
+ /*
+@@ -5460,10 +5456,17 @@ static void perf_remove_from_owner(struct perf_event *event)
+
+ static void put_event(struct perf_event *event)
+ {
++ struct perf_event *parent;
++
+ if (!atomic_long_dec_and_test(&event->refcount))
+ return;
+
++ parent = event->parent;
+ _free_event(event);
++
++ /* Matches the refcount bump in inherit_event() */
++ if (parent)
++ put_event(parent);
+ }
+
+ /*
+@@ -5547,11 +5550,6 @@ int perf_event_release_kernel(struct perf_event *event)
+ if (tmp == child) {
+ perf_remove_from_context(child, DETACH_GROUP);
+ list_move(&child->child_list, &free_list);
+- /*
+- * This matches the refcount bump in inherit_event();
+- * this can't be the last reference.
+- */
+- put_event(event);
+ } else {
+ var = &ctx->refcount;
+ }
+@@ -5577,7 +5575,8 @@ int perf_event_release_kernel(struct perf_event *event)
+ void *var = &child->ctx->refcount;
+
+ list_del(&child->child_list);
+- free_event(child);
++ /* Last reference unless ->pending_task work is pending */
++ put_event(child);
+
+ /*
+ * Wake any perf_event_free_task() waiting for this event to be
+@@ -5588,7 +5587,11 @@ int perf_event_release_kernel(struct perf_event *event)
+ }
+
+ no_ctx:
+- put_event(event); /* Must be the 'last' reference */
++ /*
++ * Last reference unless ->pending_task work is pending on this event
++ * or any of its children.
++ */
++ put_event(event);
+ return 0;
+ }
+ EXPORT_SYMBOL_GPL(perf_event_release_kernel);
+@@ -6973,12 +6976,6 @@ static void perf_pending_task(struct callback_head *head)
+ struct perf_event *event = container_of(head, struct perf_event, pending_task);
+ int rctx;
+
+- /*
+- * All accesses to the event must belong to the same implicit RCU read-side
+- * critical section as the ->pending_work reset. See comment in
+- * perf_pending_task_sync().
+- */
+- rcu_read_lock();
+ /*
+ * If we 'fail' here, that's OK, it means recursion is already disabled
+ * and we won't recurse 'further'.
+@@ -6989,9 +6986,8 @@ static void perf_pending_task(struct callback_head *head)
+ event->pending_work = 0;
+ perf_sigtrap(event);
+ local_dec(&event->ctx->nr_no_switch_fast);
+- rcuwait_wake_up(&event->pending_work_wait);
+ }
+- rcu_read_unlock();
++ put_event(event);
+
+ if (rctx >= 0)
+ perf_swevent_put_recursion_context(rctx);
+@@ -8029,6 +8025,49 @@ void perf_prepare_header(struct perf_event_header *header,
+ WARN_ON_ONCE(header->size & 7);
+ }
+
++static void __perf_event_aux_pause(struct perf_event *event, bool pause)
++{
++ if (pause) {
++ if (!event->hw.aux_paused) {
++ event->hw.aux_paused = 1;
++ event->pmu->stop(event, PERF_EF_PAUSE);
++ }
++ } else {
++ if (event->hw.aux_paused) {
++ event->hw.aux_paused = 0;
++ event->pmu->start(event, PERF_EF_RESUME);
++ }
++ }
++}
++
++static void perf_event_aux_pause(struct perf_event *event, bool pause)
++{
++ struct perf_buffer *rb;
++
++ if (WARN_ON_ONCE(!event))
++ return;
++
++ rb = ring_buffer_get(event);
++ if (!rb)
++ return;
++
++ scoped_guard (irqsave) {
++ /*
++ * Guard against self-recursion here. Another event could trip
++ * this same path from NMI context.
++ */
++ if (READ_ONCE(rb->aux_in_pause_resume))
++ break;
++
++ WRITE_ONCE(rb->aux_in_pause_resume, 1);
++ barrier();
++ __perf_event_aux_pause(event, pause);
++ barrier();
++ WRITE_ONCE(rb->aux_in_pause_resume, 0);
++ }
++ ring_buffer_put(rb);
++}
++
+ static __always_inline int
+ __perf_event_output(struct perf_event *event,
+ struct perf_sample_data *data,
+@@ -9832,9 +9871,12 @@ static int __perf_event_overflow(struct perf_event *event,
+
+ ret = __perf_event_account_interrupt(event, throttle);
+
++ if (event->attr.aux_pause)
++ perf_event_aux_pause(event->aux_event, true);
++
+ if (event->prog && event->prog->type == BPF_PROG_TYPE_PERF_EVENT &&
+ !bpf_overflow_handler(event, data, regs))
+- return ret;
++ goto out;
+
+ /*
+ * XXX event_limit might not quite work as expected on inherited
+@@ -9868,6 +9910,7 @@ static int __perf_event_overflow(struct perf_event *event,
+ !task_work_add(current, &event->pending_task, notify_mode)) {
+ event->pending_work = pending_id;
+ local_inc(&event->ctx->nr_no_switch_fast);
++ WARN_ON_ONCE(!atomic_long_inc_not_zero(&event->refcount));
+
+ event->pending_addr = 0;
+ if (valid_sample && (data->sample_flags & PERF_SAMPLE_ADDR))
+@@ -9896,6 +9939,9 @@ static int __perf_event_overflow(struct perf_event *event,
+ event->pending_wakeup = 1;
+ irq_work_queue(&event->pending_irq);
+ }
++out:
++ if (event->attr.aux_resume)
++ perf_event_aux_pause(event->aux_event, false);
+
+ return ret;
+ }
+@@ -11961,8 +12007,10 @@ static int perf_try_init_event(struct pmu *pmu, struct perf_event *event)
+ event->destroy(event);
+ }
+
+- if (ret)
++ if (ret) {
++ event->pmu = NULL;
+ module_put(pmu->module);
++ }
+
+ return ret;
+ }
+@@ -12211,7 +12259,6 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ init_irq_work(&event->pending_irq, perf_pending_irq);
+ event->pending_disable_irq = IRQ_WORK_INIT_HARD(perf_pending_disable);
+ init_task_work(&event->pending_task, perf_pending_task);
+- rcuwait_init(&event->pending_work_wait);
+
+ mutex_init(&event->mmap_mutex);
+ raw_spin_lock_init(&event->addr_filters.lock);
+@@ -12290,7 +12337,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ * See perf_output_read().
+ */
+ if (has_inherit_and_sample_read(attr) && !(attr->sample_type & PERF_SAMPLE_TID))
+- goto err_ns;
++ goto err;
+
+ if (!has_branch_stack(event))
+ event->attr.branch_sample_type = 0;
+@@ -12298,7 +12345,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ pmu = perf_init_event(event);
+ if (IS_ERR(pmu)) {
+ err = PTR_ERR(pmu);
+- goto err_ns;
++ goto err;
+ }
+
+ /*
+@@ -12308,24 +12355,38 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ */
+ if (pmu->task_ctx_nr == perf_invalid_context && (task || cgroup_fd != -1)) {
+ err = -EINVAL;
+- goto err_pmu;
++ goto err;
+ }
+
+ if (event->attr.aux_output &&
+- !(pmu->capabilities & PERF_PMU_CAP_AUX_OUTPUT)) {
++ (!(pmu->capabilities & PERF_PMU_CAP_AUX_OUTPUT) ||
++ event->attr.aux_pause || event->attr.aux_resume)) {
+ err = -EOPNOTSUPP;
+- goto err_pmu;
++ goto err;
++ }
++
++ if (event->attr.aux_pause && event->attr.aux_resume) {
++ err = -EINVAL;
++ goto err;
++ }
++
++ if (event->attr.aux_start_paused) {
++ if (!(pmu->capabilities & PERF_PMU_CAP_AUX_PAUSE)) {
++ err = -EOPNOTSUPP;
++ goto err;
++ }
++ event->hw.aux_paused = 1;
+ }
+
+ if (cgroup_fd != -1) {
+ err = perf_cgroup_connect(cgroup_fd, event, attr, group_leader);
+ if (err)
+- goto err_pmu;
++ goto err;
+ }
+
+ err = exclusive_event_init(event);
+ if (err)
+- goto err_pmu;
++ goto err;
+
+ if (has_addr_filter(event)) {
+ event->addr_filter_ranges = kcalloc(pmu->nr_addr_filters,
+@@ -12333,7 +12394,7 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ GFP_KERNEL);
+ if (!event->addr_filter_ranges) {
+ err = -ENOMEM;
+- goto err_per_task;
++ goto err;
+ }
+
+ /*
+@@ -12358,41 +12419,22 @@ perf_event_alloc(struct perf_event_attr *attr, int cpu,
+ if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN) {
+ err = get_callchain_buffers(attr->sample_max_stack);
+ if (err)
+- goto err_addr_filters;
++ goto err;
++ event->attach_state |= PERF_ATTACH_CALLCHAIN;
+ }
+ }
+
+ err = security_perf_event_alloc(event);
+ if (err)
+- goto err_callchain_buffer;
++ goto err;
+
+ /* symmetric to unaccount_event() in _free_event() */
+ account_event(event);
+
+ return event;
+
+-err_callchain_buffer:
+- if (!event->parent) {
+- if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
+- put_callchain_buffers();
+- }
+-err_addr_filters:
+- kfree(event->addr_filter_ranges);
+-
+-err_per_task:
+- exclusive_event_destroy(event);
+-
+-err_pmu:
+- if (is_cgroup_event(event))
+- perf_detach_cgroup(event);
+- if (event->destroy)
+- event->destroy(event);
+- module_put(pmu->module);
+-err_ns:
+- if (event->hw.target)
+- put_task_struct(event->hw.target);
+- call_rcu(&event->rcu_head, free_event_rcu);
+-
++err:
++ __free_event(event);
+ return ERR_PTR(err);
+ }
+
+@@ -13112,7 +13154,7 @@ perf_event_create_kernel_counter(struct perf_event_attr *attr, int cpu,
+ * Grouping is not supported for kernel events, neither is 'AUX',
+ * make sure the caller's intentions are adjusted.
+ */
+- if (attr->aux_output)
++ if (attr->aux_output || attr->aux_action)
+ return ERR_PTR(-EINVAL);
+
+ event = perf_event_alloc(attr, cpu, task, NULL, NULL,
+@@ -13359,8 +13401,7 @@ perf_event_exit_event(struct perf_event *event, struct perf_event_context *ctx)
+ * Kick perf_poll() for is_event_hup();
+ */
+ perf_event_wakeup(parent_event);
+- free_event(event);
+- put_event(parent_event);
++ put_event(event);
+ return;
+ }
+
+@@ -13478,13 +13519,11 @@ static void perf_free_event(struct perf_event *event,
+ list_del_init(&event->child_list);
+ mutex_unlock(&parent->child_mutex);
+
+- put_event(parent);
+-
+ raw_spin_lock_irq(&ctx->lock);
+ perf_group_detach(event);
+ list_del_event(event, ctx);
+ raw_spin_unlock_irq(&ctx->lock);
+- free_event(event);
++ put_event(event);
+ }
+
+ /*
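+The aux_in_pause_resume flag introduced above is a per-ring-buffer
+recursion guard: a nested pause/resume attempt (for example from NMI
+context) bails out instead of re-entering the PMU callbacks. A hedged,
+single-threaded userspace sketch of the same guard shape (the kernel
+additionally relies on the scoped_guard(irqsave) for local-CPU exclusion):
+
+	#include <stdio.h>
+
+	struct buf {
+		int in_pause_resume;	/* stand-in for rb->aux_in_pause_resume */
+	};
+
+	static void aux_pause(struct buf *rb, int depth)
+	{
+		if (rb->in_pause_resume)
+			return;		/* nested attempt: drop it */
+		rb->in_pause_resume = 1;
+		printf("pause at depth %d\n", depth);
+		if (depth == 0)
+			aux_pause(rb, 1);	/* simulate re-entry */
+		rb->in_pause_resume = 0;
+	}
+
+	int main(void)
+	{
+		struct buf rb = { 0 };
+		aux_pause(&rb, 0);	/* "pause" printed exactly once */
+		return 0;
+	}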
+diff --git a/kernel/events/internal.h b/kernel/events/internal.h
+index e072d995d670f7..249288d82b8dcf 100644
+--- a/kernel/events/internal.h
++++ b/kernel/events/internal.h
+@@ -52,6 +52,7 @@ struct perf_buffer {
+ void (*free_aux)(void *);
+ refcount_t aux_refcount;
+ int aux_in_sampling;
++ int aux_in_pause_resume;
+ void **aux_pages;
+ void *aux_priv;
+
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 536bd471557f5b..53c76dc71f3f57 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -6223,6 +6223,9 @@ static void zap_class(struct pending_free *pf, struct lock_class *class)
+ hlist_del_rcu(&class->hash_entry);
+ WRITE_ONCE(class->key, NULL);
+ WRITE_ONCE(class->name, NULL);
++ /* Class allocated but not used, -1 in nr_unused_locks */
++ if (class->usage_mask == 0)
++ debug_atomic_dec(nr_unused_locks);
+ nr_lock_classes--;
+ __clear_bit(class - lock_classes, lock_classes_in_use);
+ if (class - lock_classes == max_lock_class_idx)
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index b483fcea811b1a..d8bad1eeedd3e5 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -1443,10 +1443,10 @@ static const char * const comp_alg_enabled[] = {
+ static int hibernate_compressor_param_set(const char *compressor,
+ const struct kernel_param *kp)
+ {
+- unsigned int sleep_flags;
+ int index, ret;
+
+- sleep_flags = lock_system_sleep();
++ if (!mutex_trylock(&system_transition_mutex))
++ return -EBUSY;
+
+ index = sysfs_match_string(comp_alg_enabled, compressor);
+ if (index >= 0) {
+@@ -1458,7 +1458,7 @@ static int hibernate_compressor_param_set(const char *compressor,
+ ret = index;
+ }
+
+- unlock_system_sleep(sleep_flags);
++ mutex_unlock(&system_transition_mutex);
+
+ if (ret)
+ pr_debug("Cannot set specified compressor %s\n",
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index 3b75f6e8410b9d..881a26e18c658b 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -2436,7 +2436,6 @@ asmlinkage __visible int _printk(const char *fmt, ...)
+ }
+ EXPORT_SYMBOL(_printk);
+
+-static bool pr_flush(int timeout_ms, bool reset_on_progress);
+ static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress);
+
+ #else /* CONFIG_PRINTK */
+@@ -2449,7 +2448,6 @@ static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progre
+
+ static u64 syslog_seq;
+
+-static bool pr_flush(int timeout_ms, bool reset_on_progress) { return true; }
+ static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progress) { return true; }
+
+ #endif /* CONFIG_PRINTK */
+@@ -4436,7 +4434,7 @@ static bool __pr_flush(struct console *con, int timeout_ms, bool reset_on_progre
+ * Context: Process context. May sleep while acquiring console lock.
+ * Return: true if all usable printers are caught up.
+ */
+-static bool pr_flush(int timeout_ms, bool reset_on_progress)
++bool pr_flush(int timeout_ms, bool reset_on_progress)
+ {
+ return __pr_flush(NULL, timeout_ms, reset_on_progress);
+ }
+diff --git a/kernel/reboot.c b/kernel/reboot.c
+index f05dbde2c93fe7..d6ee090eda943c 100644
+--- a/kernel/reboot.c
++++ b/kernel/reboot.c
+@@ -697,6 +697,7 @@ void kernel_power_off(void)
+ migrate_to_reboot_cpu();
+ syscore_shutdown();
+ pr_emerg("Power down\n");
++ pr_flush(1000, true);
+ kmsg_dump(KMSG_DUMP_SHUTDOWN);
+ machine_power_off();
+ }
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index e5cab54dfdd142..fcf968490308b9 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -4160,8 +4160,8 @@ static struct scx_dispatch_q *create_dsq(u64 dsq_id, int node)
+
+ init_dsq(dsq, dsq_id);
+
+- ret = rhashtable_insert_fast(&dsq_hash, &dsq->hash_node,
+- dsq_hash_params);
++ ret = rhashtable_lookup_insert_fast(&dsq_hash, &dsq->hash_node,
++ dsq_hash_params);
+ if (ret) {
+ kfree(dsq);
+ return ERR_PTR(ret);
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index dbd375f28ee098..90b59c627bb8e7 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -3523,16 +3523,16 @@ int ftrace_startup_subops(struct ftrace_ops *ops, struct ftrace_ops *subops, int
+ ftrace_hash_empty(subops->func_hash->notrace_hash)) {
+ notrace_hash = EMPTY_HASH;
+ } else {
+- size_bits = max(ops->func_hash->filter_hash->size_bits,
+- subops->func_hash->filter_hash->size_bits);
++ size_bits = max(ops->func_hash->notrace_hash->size_bits,
++ subops->func_hash->notrace_hash->size_bits);
+ notrace_hash = alloc_ftrace_hash(size_bits);
+ if (!notrace_hash) {
+ free_ftrace_hash(filter_hash);
+ return -ENOMEM;
+ }
+
+- ret = intersect_hash(¬race_hash, ops->func_hash->filter_hash,
+- subops->func_hash->filter_hash);
++ ret = intersect_hash(¬race_hash, ops->func_hash->notrace_hash,
++ subops->func_hash->notrace_hash);
+ if (ret < 0) {
+ free_ftrace_hash(filter_hash);
+ free_ftrace_hash(notrace_hash);
+@@ -6848,6 +6848,7 @@ ftrace_graph_set_hash(struct ftrace_hash *hash, char *buffer)
+ }
+ }
+ }
++ cond_resched();
+ } while_for_each_ftrace_rec();
+ out:
+ mutex_unlock(&ftrace_lock);
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 3e252ba16d5c6e..e1ffbed8cc5eb5 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -5994,7 +5994,7 @@ static void rb_update_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+ meta->read = cpu_buffer->read;
+
+ /* Some archs do not have data cache coherency between kernel and user-space */
+- flush_dcache_folio(virt_to_folio(cpu_buffer->meta_page));
++ flush_kernel_vmap_range(cpu_buffer->meta_page, PAGE_SIZE);
+ }
+
+ static void
+@@ -7309,7 +7309,8 @@ int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu)
+
+ out:
+ /* Some archs do not have data cache coherency between kernel and user-space */
+- flush_dcache_folio(virt_to_folio(cpu_buffer->reader_page->page));
++ flush_kernel_vmap_range(cpu_buffer->reader_page->page,
++ buffer->subbuf_size + BUF_PAGE_HDR_SIZE);
+
+ rb_update_meta_page(cpu_buffer);
+
+diff --git a/kernel/trace/trace_events.c b/kernel/trace/trace_events.c
+index 29eba68e07859d..11dea25ef880a5 100644
+--- a/kernel/trace/trace_events.c
++++ b/kernel/trace/trace_events.c
+@@ -790,7 +790,9 @@ static int __ftrace_event_enable_disable(struct trace_event_file *file,
+ clear_bit(EVENT_FILE_FL_RECORDED_TGID_BIT, &file->flags);
+ }
+
+- call->class->reg(call, TRACE_REG_UNREGISTER, file);
++ ret = call->class->reg(call, TRACE_REG_UNREGISTER, file);
++
++ WARN_ON_ONCE(ret);
+ }
+ /* If in SOFT_MODE, just set the SOFT_DISABLE_BIT, else clear it */
+ if (file->flags & EVENT_FILE_FL_SOFT_MODE)
+diff --git a/kernel/trace/trace_events_synth.c b/kernel/trace/trace_events_synth.c
+index 24c9962c40db1a..1b9e32f6442fb5 100644
+--- a/kernel/trace/trace_events_synth.c
++++ b/kernel/trace/trace_events_synth.c
+@@ -377,7 +377,6 @@ static enum print_line_t print_synth_event(struct trace_iterator *iter,
+ union trace_synth_field *data = &entry->fields[n_u64];
+
+ trace_seq_printf(s, print_fmt, se->fields[i]->name,
+- STR_VAR_LEN_MAX,
+ (char *)entry + data->as_dynamic.offset,
+ i == se->n_fields - 1 ? "" : " ");
+ n_u64++;
+diff --git a/kernel/trace/trace_fprobe.c b/kernel/trace/trace_fprobe.c
+index 4acdab16579390..af7d6e2060d9d9 100644
+--- a/kernel/trace/trace_fprobe.c
++++ b/kernel/trace/trace_fprobe.c
+@@ -888,9 +888,15 @@ static void __find_tracepoint_module_cb(struct tracepoint *tp, struct module *mo
+ struct __find_tracepoint_cb_data *data = priv;
+
+ if (!data->tpoint && !strcmp(data->tp_name, tp->name)) {
+- data->tpoint = tp;
+- if (!data->mod)
++ /* If module is not specified, try getting module refcount. */
++ if (!data->mod && mod) {
++ /* If failed to get refcount, ignore this tracepoint. */
++ if (!try_module_get(mod))
++ return;
++
+ data->mod = mod;
++ }
++ data->tpoint = tp;
+ }
+ }
+
+@@ -902,7 +908,11 @@ static void __find_tracepoint_cb(struct tracepoint *tp, void *priv)
+ data->tpoint = tp;
+ }
+
+-/* Find a tracepoint from kernel and module. */
++/*
++ * Find a tracepoint from kernel and module. If the tracepoint is on the module,
++ * the module's refcount is incremented and returned as *@tp_mod. Thus, if it is
++ * not NULL, the caller must call module_put(*tp_mod) after using the tracepoint.
++ */
+ static struct tracepoint *find_tracepoint(const char *tp_name,
+ struct module **tp_mod)
+ {
+@@ -931,7 +941,10 @@ static void reenable_trace_fprobe(struct trace_fprobe *tf)
+ }
+ }
+
+-/* Find a tracepoint from specified module. */
++/*
++ * Find a tracepoint from specified module. In this case, this does not get the
++ * module's refcount. The caller must ensure the module is not freed.
++ */
+ static struct tracepoint *find_tracepoint_in_module(struct module *mod,
+ const char *tp_name)
+ {
+@@ -1167,11 +1180,6 @@ static int __trace_fprobe_create(int argc, const char *argv[])
+ if (is_tracepoint) {
+ ctx.flags |= TPARG_FL_TPOINT;
+ tpoint = find_tracepoint(symbol, &tp_mod);
+- /* lock module until register this tprobe. */
+- if (tp_mod && !try_module_get(tp_mod)) {
+- tpoint = NULL;
+- tp_mod = NULL;
+- }
+ if (tpoint) {
+ ctx.funcname = kallsyms_lookup(
+ (unsigned long)tpoint->probestub,
+diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c
+index 16a5e368e7b77c..578919962e5dff 100644
+--- a/kernel/trace/trace_probe.c
++++ b/kernel/trace/trace_probe.c
+@@ -770,6 +770,10 @@ static int check_prepare_btf_string_fetch(char *typename,
+
+ #ifdef CONFIG_HAVE_FUNCTION_ARG_ACCESS_API
+
++/*
++ * Add the entry code to store the 'argnum'th parameter and return the offset
++ * in the entry data buffer where the data will be stored.
++ */
+ static int __store_entry_arg(struct trace_probe *tp, int argnum)
+ {
+ struct probe_entry_arg *earg = tp->entry_arg;
+@@ -793,6 +797,20 @@ static int __store_entry_arg(struct trace_probe *tp, int argnum)
+ tp->entry_arg = earg;
+ }
+
++ /*
++ * The entry code array repeats pairs of
++ * [FETCH_OP_ARG(argnum)][FETCH_OP_ST_EDATA(offset in entry data buffer)]
++ * and the remaining entries are filled with [FETCH_OP_END].
++ *
++ * To avoid fetching the same function parameter twice, we scan the
++ * entry code array for a FETCH_OP_ARG that already fetches the
++ * 'argnum'th parameter; while scanning, 'offset' is updated to track
++ * the last used offset.
++ * If we reach a FETCH_OP_END without a matching FETCH_OP_ARG entry,
++ * we append a new FETCH_OP_ARG/FETCH_OP_ST_EDATA pair and return its
++ * data offset so that the caller can find the data in the entry data
++ * buffer.
++ */
+ offset = 0;
+ for (i = 0; i < earg->size - 1; i++) {
+ switch (earg->code[i].op) {
+@@ -826,6 +844,16 @@ int traceprobe_get_entry_data_size(struct trace_probe *tp)
+ if (!earg)
+ return 0;
+
++ /*
++ * The earg->code[] array holds the operation sequence that is run in
++ * the entry handler.
++ * The sequence is terminated by FETCH_OP_END, and each datum is stored
++ * in the entry data buffer by a FETCH_OP_ST_EDATA, which writes it at
++ * the data buffer plus its offset; all data are "unsigned long" sized.
++ * The offset is increased each time a datum is stored, so the buffer
++ * size is given by the last FETCH_OP_ST_EDATA in the code array.
++ */
+ for (i = 0; i < earg->size; i++) {
+ switch (earg->code[i].op) {
+ case FETCH_OP_END:
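+The two comments added above describe how earg->code[] is laid out and
+scanned. A small userspace C sketch of that scan, with invented stand-in
+opcodes rather than the kernel's fetch_op definitions:
+
+	#include <stdio.h>
+
+	enum op { OP_END, OP_ARG, OP_ST_EDATA };
+	struct code { enum op op; int val; };
+
+	/* Return the buffer offset for 'argnum', reusing an existing
+	 * [OP_ARG][OP_ST_EDATA] pair or reporting where a new one goes. */
+	static int find_offset(struct code *c, int size, int argnum)
+	{
+		int offset = 0;
+
+		for (int i = 0; i < size - 1; i++) {
+			switch (c[i].op) {
+			case OP_ARG:
+				if (c[i].val == argnum)
+					return c[i + 1].val; /* already fetched */
+				break;
+			case OP_ST_EDATA:
+				offset = c[i].val + sizeof(long);
+				break;
+			case OP_END:
+				return offset; /* append a new pair here */
+			}
+		}
+		return -1;
+	}
+
+	int main(void)
+	{
+		struct code c[] = {
+			{ OP_ARG, 1 }, { OP_ST_EDATA, 0 },
+			{ OP_END, 0 }, { OP_END, 0 },
+		};
+		printf("%d\n", find_offset(c, 4, 1)); /* 0: arg 1 reused */
+		printf("%d\n", find_offset(c, 4, 3)); /* 8 on LP64: next slot */
+		return 0;
+	}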
+diff --git a/lib/sg_split.c b/lib/sg_split.c
+index 60a0babebf2efc..0f89aab5c6715b 100644
+--- a/lib/sg_split.c
++++ b/lib/sg_split.c
+@@ -88,8 +88,6 @@ static void sg_split_phys(struct sg_splitter *splitters, const int nb_splits)
+ if (!j) {
+ out_sg->offset += split->skip_sg0;
+ out_sg->length -= split->skip_sg0;
+- } else {
+- out_sg->offset = 0;
+ }
+ sg_dma_address(out_sg) = 0;
+ sg_dma_len(out_sg) = 0;
+diff --git a/lib/zstd/common/portability_macros.h b/lib/zstd/common/portability_macros.h
+index 0e3b2c0a527db7..0dde8bf56595ea 100644
+--- a/lib/zstd/common/portability_macros.h
++++ b/lib/zstd/common/portability_macros.h
+@@ -55,7 +55,7 @@
+ #ifndef DYNAMIC_BMI2
+ #if ((defined(__clang__) && __has_attribute(__target__)) \
+ || (defined(__GNUC__) \
+- && (__GNUC__ >= 5 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 8)))) \
++ && (__GNUC__ >= 11))) \
+ && (defined(__x86_64__) || defined(_M_X64)) \
+ && !defined(__BMI2__)
+ # define DYNAMIC_BMI2 1
+diff --git a/mm/damon/ops-common.c b/mm/damon/ops-common.c
+index d25d99cb5f2bb9..d511be201c4c9e 100644
+--- a/mm/damon/ops-common.c
++++ b/mm/damon/ops-common.c
+@@ -24,7 +24,7 @@ struct folio *damon_get_folio(unsigned long pfn)
+ struct page *page = pfn_to_online_page(pfn);
+ struct folio *folio;
+
+- if (!page || PageTail(page))
++ if (!page)
+ return NULL;
+
+ folio = page_folio(page);
+diff --git a/mm/damon/paddr.c b/mm/damon/paddr.c
+index a9ff35341d65d2..8813038abc6fb3 100644
+--- a/mm/damon/paddr.c
++++ b/mm/damon/paddr.c
+@@ -264,11 +264,14 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
+ damos_add_filter(s, filter);
+ }
+
+- for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
++ addr = r->ar.start;
++ while (addr < r->ar.end) {
+ struct folio *folio = damon_get_folio(PHYS_PFN(addr));
+
+- if (!folio)
++ if (!folio) {
++ addr += PAGE_SIZE;
+ continue;
++ }
+
+ if (damos_pa_filter_out(s, folio))
+ goto put_folio;
+@@ -282,6 +285,7 @@ static unsigned long damon_pa_pageout(struct damon_region *r, struct damos *s)
+ else
+ list_add(&folio->lru, &folio_list);
+ put_folio:
++ addr += folio_size(folio);
+ folio_put(folio);
+ }
+ if (install_young_filter)
+@@ -296,11 +300,14 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
+ {
+ unsigned long addr, applied = 0;
+
+- for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
++ addr = r->ar.start;
++ while (addr < r->ar.end) {
+ struct folio *folio = damon_get_folio(PHYS_PFN(addr));
+
+- if (!folio)
++ if (!folio) {
++ addr += PAGE_SIZE;
+ continue;
++ }
+
+ if (damos_pa_filter_out(s, folio))
+ goto put_folio;
+@@ -311,6 +318,7 @@ static inline unsigned long damon_pa_mark_accessed_or_deactivate(
+ folio_deactivate(folio);
+ applied += folio_nr_pages(folio);
+ put_folio:
++ addr += folio_size(folio);
+ folio_put(folio);
+ }
+ return applied * PAGE_SIZE;
+@@ -454,11 +462,14 @@ static unsigned long damon_pa_migrate(struct damon_region *r, struct damos *s)
+ unsigned long addr, applied;
+ LIST_HEAD(folio_list);
+
+- for (addr = r->ar.start; addr < r->ar.end; addr += PAGE_SIZE) {
++ addr = r->ar.start;
++ while (addr < r->ar.end) {
+ struct folio *folio = damon_get_folio(PHYS_PFN(addr));
+
+- if (!folio)
++ if (!folio) {
++ addr += PAGE_SIZE;
+ continue;
++ }
+
+ if (damos_pa_filter_out(s, folio))
+ goto put_folio;
+@@ -467,6 +478,7 @@ static unsigned long damon_pa_migrate(struct damon_region *r, struct damos *s)
+ goto put_folio;
+ list_add(&folio->lru, &folio_list);
+ put_folio:
++ addr += folio_size(folio);
+ folio_put(folio);
+ }
+ applied = damon_pa_migrate_pages(&folio_list, s->target_nid);
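+The three DAMON loops above stop advancing by a fixed PAGE_SIZE and step
+by folio_size() instead, so a multi-page folio is processed once rather
+than once per base page. A toy C illustration of the stride change
+(folio_size_at() is a made-up stand-in that reports 4-page folios):
+
+	#include <stdio.h>
+
+	#define PAGE_SIZE 4096UL
+
+	static unsigned long folio_size_at(unsigned long addr)
+	{
+		(void)addr;
+		return 4 * PAGE_SIZE;	/* pretend every folio is 4 pages */
+	}
+
+	int main(void)
+	{
+		unsigned long addr = 0, end = 16 * PAGE_SIZE, visits = 0;
+
+		while (addr < end) {	/* patched loop shape */
+			visits++;
+			addr += folio_size_at(addr);
+		}
+		printf("%lu visits for 16 pages\n", visits);	/* 4, not 16 */
+		return 0;
+	}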
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index e28e820fdb7756..ad646fe6688a49 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4863,7 +4863,7 @@ static struct ctl_table hugetlb_table[] = {
+ },
+ };
+
+-static void hugetlb_sysctl_init(void)
++static void __init hugetlb_sysctl_init(void)
+ {
+ register_sysctl_init("vm", hugetlb_table);
+ }
+diff --git a/mm/memory-failure.c b/mm/memory-failure.c
+index fa25a022e64d71..ec1c71abe88dfd 100644
+--- a/mm/memory-failure.c
++++ b/mm/memory-failure.c
+@@ -879,12 +879,17 @@ static int kill_accessing_process(struct task_struct *p, unsigned long pfn,
+ mmap_read_lock(p->mm);
+ ret = walk_page_range(p->mm, 0, TASK_SIZE, &hwpoison_walk_ops,
+ (void *)&priv);
++ /*
++ * ret = 1 when CMCI wins: kill the process with SIGBUS,
++ * regardless of whether try_to_unmap() succeeds or fails.
++ * ret = 0 when the poisoned page is a clean page that has been
++ * dropped: no SIGBUS is needed.
++ */
+ if (ret == 1 && priv.tk.addr)
+ kill_proc(&priv.tk, pfn, flags);
+- else
+- ret = 0;
+ mmap_read_unlock(p->mm);
+- return ret > 0 ? -EHWPOISON : -EFAULT;
++
++ return ret > 0 ? -EHWPOISON : 0;
+ }
+
+ /*
+diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
+index 619445096ef4a6..0a42e9a8caba2a 100644
+--- a/mm/memory_hotplug.c
++++ b/mm/memory_hotplug.c
+@@ -1801,8 +1801,7 @@ static void do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
+ if (unlikely(page_folio(page) != folio))
+ goto put_folio;
+
+- if (folio_test_hwpoison(folio) ||
+- (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {
++ if (folio_contain_hwpoisoned_page(folio)) {
+ if (WARN_ON(folio_test_lru(folio)))
+ folio_isolate_lru(folio);
+ if (folio_mapped(folio)) {
+diff --git a/mm/mremap.c b/mm/mremap.c
+index 1b2edd65c2a172..12af89b4342a7b 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -696,8 +696,8 @@ static unsigned long move_vma(struct vm_area_struct *vma,
+ unsigned long vm_flags = vma->vm_flags;
+ unsigned long new_pgoff;
+ unsigned long moved_len;
+- unsigned long account_start = 0;
+- unsigned long account_end = 0;
++ bool account_start = false;
++ bool account_end = false;
+ unsigned long hiwater_vm;
+ int err = 0;
+ bool need_rmap_locks;
+@@ -781,9 +781,9 @@ static unsigned long move_vma(struct vm_area_struct *vma,
+ if (vm_flags & VM_ACCOUNT && !(flags & MREMAP_DONTUNMAP)) {
+ vm_flags_clear(vma, VM_ACCOUNT);
+ if (vma->vm_start < old_addr)
+- account_start = vma->vm_start;
++ account_start = true;
+ if (vma->vm_end > old_addr + old_len)
+- account_end = vma->vm_end;
++ account_end = true;
+ }
+
+ /*
+@@ -823,7 +823,7 @@ static unsigned long move_vma(struct vm_area_struct *vma,
+ /* OOM: unable to split vma, just get accounts right */
+ if (vm_flags & VM_ACCOUNT && !(flags & MREMAP_DONTUNMAP))
+ vm_acct_memory(old_len >> PAGE_SHIFT);
+- account_start = account_end = 0;
++ account_start = account_end = false;
+ }
+
+ if (vm_flags & VM_LOCKED) {
+diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
+index ae5cc42aa2087a..585a53f7b06f08 100644
+--- a/mm/page_vma_mapped.c
++++ b/mm/page_vma_mapped.c
+@@ -77,6 +77,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
+ * mapped at the @pvmw->pte
+ * @pvmw: page_vma_mapped_walk struct, includes a pair pte and pfn range
+ * for checking
++ * @pte_nr: the number of small pages described by @pvmw->pte.
+ *
+ * page_vma_mapped_walk() found a place where pfn range is *potentially*
+ * mapped. check_pte() has to validate this.
+@@ -93,7 +94,7 @@ static bool map_pte(struct page_vma_mapped_walk *pvmw, spinlock_t **ptlp)
+ * Otherwise, return false.
+ *
+ */
+-static bool check_pte(struct page_vma_mapped_walk *pvmw)
++static bool check_pte(struct page_vma_mapped_walk *pvmw, unsigned long pte_nr)
+ {
+ unsigned long pfn;
+ pte_t ptent = ptep_get(pvmw->pte);
+@@ -126,7 +127,11 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
+ pfn = pte_pfn(ptent);
+ }
+
+- return (pfn - pvmw->pfn) < pvmw->nr_pages;
++ if ((pfn + pte_nr - 1) < pvmw->pfn)
++ return false;
++ if (pfn > (pvmw->pfn + pvmw->nr_pages - 1))
++ return false;
++ return true;
+ }
+
+ /* Returns true if the two ranges overlap. Careful to not overflow. */
+@@ -201,7 +206,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
+ return false;
+
+ pvmw->ptl = huge_pte_lock(hstate, mm, pvmw->pte);
+- if (!check_pte(pvmw))
++ if (!check_pte(pvmw, pages_per_huge_page(hstate)))
+ return not_found(pvmw);
+ return true;
+ }
+@@ -284,7 +289,7 @@ bool page_vma_mapped_walk(struct page_vma_mapped_walk *pvmw)
+ goto next_pte;
+ }
+ this_pte:
+- if (check_pte(pvmw))
++ if (check_pte(pvmw, 1))
+ return true;
+ next_pte:
+ do {
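+check_pte() above now takes the number of pages covered by the PTE and
+does a full interval-overlap test against the walk target, instead of
+only checking whether the first pfn lands inside it. A minimal sketch of
+that comparison, using the same boundary arithmetic:
+
+	#include <stdbool.h>
+	#include <stdio.h>
+
+	/* overlap of [pfn, pfn + pte_nr) with [walk_pfn, walk_pfn + nr_pages) */
+	static bool ranges_overlap(unsigned long pfn, unsigned long pte_nr,
+				   unsigned long walk_pfn, unsigned long nr_pages)
+	{
+		if ((pfn + pte_nr - 1) < walk_pfn)
+			return false;
+		if (pfn > (walk_pfn + nr_pages - 1))
+			return false;
+		return true;
+	}
+
+	int main(void)
+	{
+		/* a 512-page mapping starting below the target still hits */
+		printf("%d\n", ranges_overlap(0, 512, 10, 1));	/* 1 */
+		printf("%d\n", ranges_overlap(512, 512, 10, 1));	/* 0 */
+		return 0;
+	}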
+diff --git a/mm/rmap.c b/mm/rmap.c
+index 73d5998677d40f..674362de029d2a 100644
+--- a/mm/rmap.c
++++ b/mm/rmap.c
+@@ -2488,7 +2488,7 @@ static bool folio_make_device_exclusive(struct folio *folio,
+ * Restrict to anonymous folios for now to avoid potential writeback
+ * issues.
+ */
+- if (!folio_test_anon(folio))
++ if (!folio_test_anon(folio) || folio_test_hugetlb(folio))
+ return false;
+
+ rmap_walk(folio, &rwc);
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 5960e5035f9835..88fd6e2a2dcf8a 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -3042,8 +3042,7 @@ shmem_write_begin(struct file *file, struct address_space *mapping,
+ if (ret)
+ return ret;
+
+- if (folio_test_hwpoison(folio) ||
+- (folio_test_large(folio) && folio_test_has_hwpoisoned(folio))) {
++ if (folio_contain_hwpoisoned_page(folio)) {
+ folio_unlock(folio);
+ folio_put(folio);
+ return -EIO;
+diff --git a/mm/vmscan.c b/mm/vmscan.c
+index 77d015d5db0c5b..39b3c7f35ea85c 100644
+--- a/mm/vmscan.c
++++ b/mm/vmscan.c
+@@ -7557,7 +7557,7 @@ int node_reclaim(struct pglist_data *pgdat, gfp_t gfp_mask, unsigned int order)
+ return NODE_RECLAIM_NOSCAN;
+
+ ret = __node_reclaim(pgdat, gfp_mask, order);
+- clear_bit(PGDAT_RECLAIM_LOCKED, &pgdat->flags);
++ clear_bit_unlock(PGDAT_RECLAIM_LOCKED, &pgdat->flags);
+
+ if (ret)
+ count_vm_event(PGSCAN_ZONE_RECLAIM_SUCCESS);
+diff --git a/net/8021q/vlan_dev.c b/net/8021q/vlan_dev.c
+index 458040e8a0e0be..9184cf7eb12864 100644
+--- a/net/8021q/vlan_dev.c
++++ b/net/8021q/vlan_dev.c
+@@ -273,17 +273,6 @@ static int vlan_dev_open(struct net_device *dev)
+ goto out;
+ }
+
+- if (dev->flags & IFF_ALLMULTI) {
+- err = dev_set_allmulti(real_dev, 1);
+- if (err < 0)
+- goto del_unicast;
+- }
+- if (dev->flags & IFF_PROMISC) {
+- err = dev_set_promiscuity(real_dev, 1);
+- if (err < 0)
+- goto clear_allmulti;
+- }
+-
+ ether_addr_copy(vlan->real_dev_addr, real_dev->dev_addr);
+
+ if (vlan->flags & VLAN_FLAG_GVRP)
+@@ -297,12 +286,6 @@ static int vlan_dev_open(struct net_device *dev)
+ netif_carrier_on(dev);
+ return 0;
+
+-clear_allmulti:
+- if (dev->flags & IFF_ALLMULTI)
+- dev_set_allmulti(real_dev, -1);
+-del_unicast:
+- if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr))
+- dev_uc_del(real_dev, dev->dev_addr);
+ out:
+ netif_carrier_off(dev);
+ return err;
+@@ -315,10 +298,6 @@ static int vlan_dev_stop(struct net_device *dev)
+
+ dev_mc_unsync(real_dev, dev);
+ dev_uc_unsync(real_dev, dev);
+- if (dev->flags & IFF_ALLMULTI)
+- dev_set_allmulti(real_dev, -1);
+- if (dev->flags & IFF_PROMISC)
+- dev_set_promiscuity(real_dev, -1);
+
+ if (!ether_addr_equal(dev->dev_addr, real_dev->dev_addr))
+ dev_uc_del(real_dev, dev->dev_addr);
+@@ -490,12 +469,10 @@ static void vlan_dev_change_rx_flags(struct net_device *dev, int change)
+ {
+ struct net_device *real_dev = vlan_dev_priv(dev)->real_dev;
+
+- if (dev->flags & IFF_UP) {
+- if (change & IFF_ALLMULTI)
+- dev_set_allmulti(real_dev, dev->flags & IFF_ALLMULTI ? 1 : -1);
+- if (change & IFF_PROMISC)
+- dev_set_promiscuity(real_dev, dev->flags & IFF_PROMISC ? 1 : -1);
+- }
++ if (change & IFF_ALLMULTI)
++ dev_set_allmulti(real_dev, dev->flags & IFF_ALLMULTI ? 1 : -1);
++ if (change & IFF_PROMISC)
++ dev_set_promiscuity(real_dev, dev->flags & IFF_PROMISC ? 1 : -1);
+ }
+
+ static void vlan_dev_set_rx_mode(struct net_device *vlan_dev)
+diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c
+index 7b2b04d6b85630..cb4d47ae129e8b 100644
+--- a/net/bluetooth/hci_sync.c
++++ b/net/bluetooth/hci_sync.c
+@@ -3720,6 +3720,9 @@ static int hci_read_local_name_sync(struct hci_dev *hdev)
+ /* Read Voice Setting */
+ static int hci_read_voice_setting_sync(struct hci_dev *hdev)
+ {
++ if (!read_voice_setting_capable(hdev))
++ return 0;
++
+ return __hci_cmd_sync_status(hdev, HCI_OP_READ_VOICE_SETTING,
+ 0, NULL, HCI_CMD_TIMEOUT);
+ }
+@@ -4153,7 +4156,8 @@ static int hci_read_page_scan_type_sync(struct hci_dev *hdev)
+ * support the Read Page Scan Type command. Check support for
+ * this command in the bit mask of supported commands.
+ */
+- if (!(hdev->commands[13] & 0x01))
++ if (!(hdev->commands[13] & 0x01) ||
++ test_bit(HCI_QUIRK_BROKEN_READ_PAGE_SCAN_TYPE, &hdev->quirks))
+ return 0;
+
+ return __hci_cmd_sync_status(hdev, HCI_OP_READ_PAGE_SCAN_TYPE,
+diff --git a/net/core/filter.c b/net/core/filter.c
+index a2f990bf51e5e1..790345c2546b7b 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -218,24 +218,36 @@ BPF_CALL_3(bpf_skb_get_nlattr_nest, struct sk_buff *, skb, u32, a, u32, x)
+ return 0;
+ }
+
++static int bpf_skb_load_helper_convert_offset(const struct sk_buff *skb, int offset)
++{
++ if (likely(offset >= 0))
++ return offset;
++
++ if (offset >= SKF_NET_OFF)
++ return offset - SKF_NET_OFF + skb_network_offset(skb);
++
++ if (offset >= SKF_LL_OFF && skb_mac_header_was_set(skb))
++ return offset - SKF_LL_OFF + skb_mac_offset(skb);
++
++ return INT_MIN;
++}
++
+ BPF_CALL_4(bpf_skb_load_helper_8, const struct sk_buff *, skb, const void *,
+ data, int, headlen, int, offset)
+ {
+- u8 tmp, *ptr;
++ u8 tmp;
+ const int len = sizeof(tmp);
+
+- if (offset >= 0) {
+- if (headlen - offset >= len)
+- return *(u8 *)(data + offset);
+- if (!skb_copy_bits(skb, offset, &tmp, sizeof(tmp)))
+- return tmp;
+- } else {
+- ptr = bpf_internal_load_pointer_neg_helper(skb, offset, len);
+- if (likely(ptr))
+- return *(u8 *)ptr;
+- }
++ offset = bpf_skb_load_helper_convert_offset(skb, offset);
++ if (offset == INT_MIN)
++ return -EFAULT;
+
+- return -EFAULT;
++ if (headlen - offset >= len)
++ return *(u8 *)(data + offset);
++ if (!skb_copy_bits(skb, offset, &tmp, sizeof(tmp)))
++ return tmp;
++ else
++ return -EFAULT;
+ }
+
+ BPF_CALL_2(bpf_skb_load_helper_8_no_cache, const struct sk_buff *, skb,
+@@ -248,21 +260,19 @@ BPF_CALL_2(bpf_skb_load_helper_8_no_cache, const struct sk_buff *, skb,
+ BPF_CALL_4(bpf_skb_load_helper_16, const struct sk_buff *, skb, const void *,
+ data, int, headlen, int, offset)
+ {
+- __be16 tmp, *ptr;
++ __be16 tmp;
+ const int len = sizeof(tmp);
+
+- if (offset >= 0) {
+- if (headlen - offset >= len)
+- return get_unaligned_be16(data + offset);
+- if (!skb_copy_bits(skb, offset, &tmp, sizeof(tmp)))
+- return be16_to_cpu(tmp);
+- } else {
+- ptr = bpf_internal_load_pointer_neg_helper(skb, offset, len);
+- if (likely(ptr))
+- return get_unaligned_be16(ptr);
+- }
++ offset = bpf_skb_load_helper_convert_offset(skb, offset);
++ if (offset == INT_MIN)
++ return -EFAULT;
+
+- return -EFAULT;
++ if (headlen - offset >= len)
++ return get_unaligned_be16(data + offset);
++ if (!skb_copy_bits(skb, offset, &tmp, sizeof(tmp)))
++ return be16_to_cpu(tmp);
++ else
++ return -EFAULT;
+ }
+
+ BPF_CALL_2(bpf_skb_load_helper_16_no_cache, const struct sk_buff *, skb,
+@@ -275,21 +285,19 @@ BPF_CALL_2(bpf_skb_load_helper_16_no_cache, const struct sk_buff *, skb,
+ BPF_CALL_4(bpf_skb_load_helper_32, const struct sk_buff *, skb, const void *,
+ data, int, headlen, int, offset)
+ {
+- __be32 tmp, *ptr;
++ __be32 tmp;
+ const int len = sizeof(tmp);
+
+- if (likely(offset >= 0)) {
+- if (headlen - offset >= len)
+- return get_unaligned_be32(data + offset);
+- if (!skb_copy_bits(skb, offset, &tmp, sizeof(tmp)))
+- return be32_to_cpu(tmp);
+- } else {
+- ptr = bpf_internal_load_pointer_neg_helper(skb, offset, len);
+- if (likely(ptr))
+- return get_unaligned_be32(ptr);
+- }
++ offset = bpf_skb_load_helper_convert_offset(skb, offset);
++ if (offset == INT_MIN)
++ return -EFAULT;
+
+- return -EFAULT;
++ if (headlen - offset >= len)
++ return get_unaligned_be32(data + offset);
++ if (!skb_copy_bits(skb, offset, &tmp, sizeof(tmp)))
++ return be32_to_cpu(tmp);
++ else
++ return -EFAULT;
+ }
+
+ BPF_CALL_2(bpf_skb_load_helper_32_no_cache, const struct sk_buff *, skb,
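+bpf_skb_load_helper_convert_offset() above centralizes the translation of
+classic-BPF negative offsets into real skb offsets for all three load
+helpers. A userspace sketch of the mapping (the SKF_* bases match
+include/uapi/linux/filter.h; the header offsets fed to it are invented
+for the demo):
+
+	#include <limits.h>
+	#include <stdio.h>
+
+	#define SKF_NET_OFF	(-0x100000)
+	#define SKF_LL_OFF	(-0x200000)
+
+	static int convert_offset(int offset, int net_off, int mac_off,
+				  int has_mac)
+	{
+		if (offset >= 0)
+			return offset;
+		if (offset >= SKF_NET_OFF)
+			return offset - SKF_NET_OFF + net_off;
+		if (offset >= SKF_LL_OFF && has_mac)
+			return offset - SKF_LL_OFF + mac_off;
+		return INT_MIN;	/* sentinel: caller returns -EFAULT */
+	}
+
+	int main(void)
+	{
+		/* byte 2 of a network header that starts at skb offset 14 */
+		printf("%d\n", convert_offset(SKF_NET_OFF + 2, 14, 0, 1));
+		/* link-layer access with no MAC header set must fail */
+		printf("%d\n", convert_offset(SKF_LL_OFF + 2, 14, 0, 0) == INT_MIN);
+		return 0;
+	}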
+diff --git a/net/core/page_pool.c b/net/core/page_pool.c
+index a813d30d213536..7b20f6fcb82c02 100644
+--- a/net/core/page_pool.c
++++ b/net/core/page_pool.c
+@@ -1066,7 +1066,13 @@ static void page_pool_release_retry(struct work_struct *wq)
+ int inflight;
+
+ inflight = page_pool_release(pool);
+- if (!inflight)
++ /* In rare cases, a driver bug may cause inflight to go negative.
++ * Don't reschedule release if inflight is 0 or negative.
++ * - If 0, the page_pool has been destroyed.
++ * - If negative, we will never recover.
++ * In both cases no reschedule is necessary.
++ */
++ if (inflight <= 0)
+ return;
+
+ /* Periodic warning for page pools the user can't see */
+diff --git a/net/core/page_pool_user.c b/net/core/page_pool_user.c
+index 48335766c1bfd6..8d31c71bea1a39 100644
+--- a/net/core/page_pool_user.c
++++ b/net/core/page_pool_user.c
+@@ -353,7 +353,7 @@ void page_pool_unlist(struct page_pool *pool)
+ int page_pool_check_memory_provider(struct net_device *dev,
+ struct netdev_rx_queue *rxq)
+ {
+- struct net_devmem_dmabuf_binding *binding = rxq->mp_params.mp_priv;
++ void *binding = rxq->mp_params.mp_priv;
+ struct page_pool *pool;
+ struct hlist_node *n;
+
+diff --git a/net/core/sock.c b/net/core/sock.c
+index a83f64a1d96a29..0842dc9189bf80 100644
+--- a/net/core/sock.c
++++ b/net/core/sock.c
+@@ -2107,6 +2107,8 @@ int sk_getsockopt(struct sock *sk, int level, int optname,
+ */
+ static inline void sock_lock_init(struct sock *sk)
+ {
++ sk_owner_clear(sk);
++
+ if (sk->sk_kern_sock)
+ sock_lock_init_class_and_name(
+ sk,
+@@ -2203,6 +2205,9 @@ static void sk_prot_free(struct proto *prot, struct sock *sk)
+ cgroup_sk_free(&sk->sk_cgrp_data);
+ mem_cgroup_sk_free(sk);
+ security_sk_free(sk);
++
++ sk_owner_put(sk);
++
+ if (slab != NULL)
+ kmem_cache_free(slab, sk);
+ else
+diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c
+index e233dfc8ca4bec..a52be67139d0ac 100644
+--- a/net/ethtool/netlink.c
++++ b/net/ethtool/netlink.c
+@@ -490,7 +490,7 @@ static int ethnl_default_doit(struct sk_buff *skb, struct genl_info *info)
+ ret = ops->prepare_data(req_info, reply_data, info);
+ rtnl_unlock();
+ if (ret < 0)
+- goto err_cleanup;
++ goto err_dev;
+ ret = ops->reply_size(req_info, reply_data);
+ if (ret < 0)
+ goto err_cleanup;
+@@ -548,7 +548,7 @@ static int ethnl_default_dump_one(struct sk_buff *skb, struct net_device *dev,
+ ret = ctx->ops->prepare_data(ctx->req_info, ctx->reply_data, info);
+ rtnl_unlock();
+ if (ret < 0)
+- goto out;
++ goto out_cancel;
+ ret = ethnl_fill_reply_header(skb, dev, ctx->ops->hdr_attr);
+ if (ret < 0)
+ goto out;
+@@ -557,6 +557,7 @@ static int ethnl_default_dump_one(struct sk_buff *skb, struct net_device *dev,
+ out:
+ if (ctx->ops->cleanup_data)
+ ctx->ops->cleanup_data(ctx->reply_data);
++out_cancel:
+ ctx->reply_data->dev = NULL;
+ if (ret < 0)
+ genlmsg_cancel(skb, ehdr);
+@@ -760,7 +761,7 @@ static void ethnl_default_notify(struct net_device *dev, unsigned int cmd,
+ ethnl_init_reply_data(reply_data, ops, dev);
+ ret = ops->prepare_data(req_info, reply_data, &info);
+ if (ret < 0)
+- goto err_cleanup;
++ goto err_rep;
+ ret = ops->reply_size(req_info, reply_data);
+ if (ret < 0)
+ goto err_cleanup;
+@@ -795,6 +796,7 @@ static void ethnl_default_notify(struct net_device *dev, unsigned int cmd,
+ err_cleanup:
+ if (ops->cleanup_data)
+ ops->cleanup_data(reply_data);
++err_rep:
+ kfree(reply_data);
+ kfree(req_info);
+ return;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 987492dcb07ca8..bae8ece3e881e0 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -470,10 +470,10 @@ void fib6_select_path(const struct net *net, struct fib6_result *res,
+ goto out;
+
+ hash = fl6->mp_hash;
+- if (hash <= atomic_read(&first->fib6_nh->fib_nh_upper_bound) &&
+- rt6_score_route(first->fib6_nh, first->fib6_flags, oif,
+- strict) >= 0) {
+- match = first;
++ if (hash <= atomic_read(&first->fib6_nh->fib_nh_upper_bound)) {
++ if (rt6_score_route(first->fib6_nh, first->fib6_flags, oif,
++ strict) >= 0)
++ match = first;
+ goto out;
+ }
+
+diff --git a/net/mac80211/debugfs.c b/net/mac80211/debugfs.c
+index 02b5476a4376c0..a0710ae0e7a499 100644
+--- a/net/mac80211/debugfs.c
++++ b/net/mac80211/debugfs.c
+@@ -499,6 +499,7 @@ static const char *hw_flag_names[] = {
+ FLAG(DISALLOW_PUNCTURING),
+ FLAG(DISALLOW_PUNCTURING_5GHZ),
+ FLAG(HANDLES_QUIET_CSA),
++ FLAG(STRICT),
+ #undef FLAG
+ };
+
+@@ -531,6 +532,46 @@ static ssize_t hwflags_read(struct file *file, char __user *user_buf,
+ return rv;
+ }
+
++static ssize_t hwflags_write(struct file *file, const char __user *user_buf,
++ size_t count, loff_t *ppos)
++{
++ struct ieee80211_local *local = file->private_data;
++ char buf[100];
++ int val;
++
++ if (count >= sizeof(buf))
++ return -EINVAL;
++
++ if (copy_from_user(buf, user_buf, count))
++ return -EFAULT;
++
++ if (count && buf[count - 1] == '\n')
++ buf[count - 1] = '\0';
++ else
++ buf[count] = '\0';
++
++ if (sscanf(buf, "strict=%d", &val) == 1) {
++ switch (val) {
++ case 0:
++ ieee80211_hw_set(&local->hw, STRICT);
++ return count;
++ case 1:
++ __clear_bit(IEEE80211_HW_STRICT, local->hw.flags);
++ return count;
++ default:
++ return -EINVAL;
++ }
++ }
++
++ return -EINVAL;
++}
++
++static const struct file_operations hwflags_ops = {
++ .open = simple_open,
++ .read = hwflags_read,
++ .write = hwflags_write,
++};
++
+ static ssize_t misc_read(struct file *file, char __user *user_buf,
+ size_t count, loff_t *ppos)
+ {
+@@ -581,7 +622,6 @@ static ssize_t queues_read(struct file *file, char __user *user_buf,
+ return simple_read_from_buffer(user_buf, count, ppos, buf, res);
+ }
+
+-DEBUGFS_READONLY_FILE_OPS(hwflags);
+ DEBUGFS_READONLY_FILE_OPS(queues);
+ DEBUGFS_READONLY_FILE_OPS(misc);
+
+@@ -659,7 +699,7 @@ void debugfs_hw_add(struct ieee80211_local *local)
+ #ifdef CONFIG_PM
+ DEBUGFS_ADD_MODE(reset, 0200);
+ #endif
+- DEBUGFS_ADD(hwflags);
++ DEBUGFS_ADD_MODE(hwflags, 0600);
+ DEBUGFS_ADD(user_power);
+ DEBUGFS_ADD(power);
+ DEBUGFS_ADD(hw_conf);
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index 8bbfa45e1796df..dbcd75c5d778e6 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -8,7 +8,7 @@
+ * Copyright 2008, Johannes Berg <johannes@sipsolutions.net>
+ * Copyright 2013-2014 Intel Mobile Communications GmbH
+ * Copyright (c) 2016 Intel Deutschland GmbH
+- * Copyright (C) 2018-2024 Intel Corporation
++ * Copyright (C) 2018-2025 Intel Corporation
+ */
+ #include <linux/slab.h>
+ #include <linux/kernel.h>
+@@ -812,6 +812,9 @@ static void ieee80211_set_multicast_list(struct net_device *dev)
+ */
+ static void ieee80211_teardown_sdata(struct ieee80211_sub_if_data *sdata)
+ {
++ if (WARN_ON(!list_empty(&sdata->work.entry)))
++ wiphy_work_cancel(sdata->local->hw.wiphy, &sdata->work);
++
+ /* free extra data */
+ ieee80211_free_keys(sdata, false);
+
+diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
+index 579d0f24ac9d61..2922a9fec950dd 100644
+--- a/net/mac80211/mesh_hwmp.c
++++ b/net/mac80211/mesh_hwmp.c
+@@ -367,6 +367,12 @@ u32 airtime_link_metric_get(struct ieee80211_local *local,
+ return (u32)result;
+ }
+
++/* Check that the first metric is at least 10% better than the second one */
++static bool is_metric_better(u32 x, u32 y)
++{
++ return (x < y) && (x < (y - x / 10));
++}
++
+ /**
+ * hwmp_route_info_get - Update routing info to originator and transmitter
+ *
+@@ -458,8 +464,8 @@ static u32 hwmp_route_info_get(struct ieee80211_sub_if_data *sdata,
+ (mpath->sn == orig_sn &&
+ (rcu_access_pointer(mpath->next_hop) !=
+ sta ?
+- mult_frac(new_metric, 10, 9) :
+- new_metric) >= mpath->metric)) {
++ !is_metric_better(new_metric, mpath->metric) :
++ new_metric >= mpath->metric))) {
+ process = false;
+ fresh_info = false;
+ }
+@@ -533,8 +539,8 @@ static u32 hwmp_route_info_get(struct ieee80211_sub_if_data *sdata,
+ if ((mpath->flags & MESH_PATH_FIXED) ||
+ ((mpath->flags & MESH_PATH_ACTIVE) &&
+ ((rcu_access_pointer(mpath->next_hop) != sta ?
+- mult_frac(last_hop_metric, 10, 9) :
+- last_hop_metric) > mpath->metric)))
++ !is_metric_better(last_hop_metric, mpath->metric) :
++ last_hop_metric > mpath->metric))))
+ fresh_info = false;
+ } else {
+ mpath = mesh_path_add(sdata, ta);
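+is_metric_better() above adds roughly 10% hysteresis before an existing
+mesh path is replaced by a marginally cheaper one. A quick userspace
+check of the integer arithmetic (a hedged sketch; only the comparison is
+taken from the patch):
+
+	#include <stdio.h>
+	#include <stdint.h>
+
+	/* x is "better" than y only if it undercuts y by more than x/10 */
+	static int is_metric_better(uint32_t x, uint32_t y)
+	{
+		return (x < y) && (x < (y - x / 10));
+	}
+
+	int main(void)
+	{
+		printf("%d\n", is_metric_better(100, 105)); /* 0: within band */
+		printf("%d\n", is_metric_better(100, 120)); /* 1: clearly better */
+		return 0;
+	}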
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 88751b0eb317a3..ad0d040569dcd3 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -166,6 +166,9 @@ ieee80211_determine_ap_chan(struct ieee80211_sub_if_data *sdata,
+ bool no_vht = false;
+ u32 ht_cfreq;
+
++ if (ieee80211_hw_check(&sdata->local->hw, STRICT))
++ ignore_ht_channel_mismatch = false;
++
+ *chandef = (struct cfg80211_chan_def) {
+ .chan = channel,
+ .width = NL80211_CHAN_WIDTH_20_NOHT,
+@@ -385,7 +388,7 @@ ieee80211_verify_peer_he_mcs_support(struct ieee80211_sub_if_data *sdata,
+ * zeroes, which is nonsense, and completely inconsistent with itself
+ * (it doesn't have 8 streams). Accept the settings in this case anyway.
+ */
+- if (!ap_min_req_set)
++ if (!ieee80211_hw_check(&sdata->local->hw, STRICT) && !ap_min_req_set)
+ return true;
+
+ /* make sure the AP is consistent with itself
+@@ -445,7 +448,7 @@ ieee80211_verify_sta_he_mcs_support(struct ieee80211_sub_if_data *sdata,
+ * zeroes, which is nonsense, and completely inconsistent with itself
+ * (it doesn't have 8 streams). Accept the settings in this case anyway.
+ */
+- if (!ap_min_req_set)
++ if (!ieee80211_hw_check(&sdata->local->hw, STRICT) && !ap_min_req_set)
+ return true;
+
+ /* Need to go over for 80MHz, 160MHz and for 80+80 */
+@@ -1212,13 +1215,15 @@ static bool ieee80211_add_vht_ie(struct ieee80211_sub_if_data *sdata,
+ * Some APs apparently get confused if our capabilities are better
+ * than theirs, so restrict what we advertise in the assoc request.
+ */
+- if (!(ap_vht_cap->vht_cap_info &
+- cpu_to_le32(IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)))
+- cap &= ~(IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE |
+- IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE);
+- else if (!(ap_vht_cap->vht_cap_info &
+- cpu_to_le32(IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE)))
+- cap &= ~IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE;
++ if (!ieee80211_hw_check(&local->hw, STRICT)) {
++ if (!(ap_vht_cap->vht_cap_info &
++ cpu_to_le32(IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE)))
++ cap &= ~(IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE |
++ IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE);
++ else if (!(ap_vht_cap->vht_cap_info &
++ cpu_to_le32(IEEE80211_VHT_CAP_MU_BEAMFORMER_CAPABLE)))
++ cap &= ~IEEE80211_VHT_CAP_MU_BEAMFORMEE_CAPABLE;
++ }
+
+ /*
+ * If some other vif is using the MU-MIMO capability we cannot associate
+@@ -1260,14 +1265,16 @@ static bool ieee80211_add_vht_ie(struct ieee80211_sub_if_data *sdata,
+ return mu_mimo_owner;
+ }
+
+-static void ieee80211_assoc_add_rates(struct sk_buff *skb,
++static void ieee80211_assoc_add_rates(struct ieee80211_local *local,
++ struct sk_buff *skb,
+ enum nl80211_chan_width width,
+ struct ieee80211_supported_band *sband,
+ struct ieee80211_mgd_assoc_data *assoc_data)
+ {
+ u32 rates;
+
+- if (assoc_data->supp_rates_len) {
++ if (assoc_data->supp_rates_len &&
++ !ieee80211_hw_check(&local->hw, STRICT)) {
+ /*
+ * Get all rates supported by the device and the AP as
+ * some APs don't like getting a superset of their rates
+@@ -1481,7 +1488,7 @@ static size_t ieee80211_assoc_link_elems(struct ieee80211_sub_if_data *sdata,
+ *capab |= WLAN_CAPABILITY_SPECTRUM_MGMT;
+
+ if (sband->band != NL80211_BAND_S1GHZ)
+- ieee80211_assoc_add_rates(skb, width, sband, assoc_data);
++ ieee80211_assoc_add_rates(local, skb, width, sband, assoc_data);
+
+ if (*capab & WLAN_CAPABILITY_SPECTRUM_MGMT ||
+ *capab & WLAN_CAPABILITY_RADIO_MEASURE) {
+@@ -1925,7 +1932,8 @@ static int ieee80211_send_assoc(struct ieee80211_sub_if_data *sdata)
+ * for some reason check it and want it to be set, set the bit for all
+ * pre-EHT connections as we used to do.
+ */
+- if (link->u.mgd.conn.mode < IEEE80211_CONN_MODE_EHT)
++ if (link->u.mgd.conn.mode < IEEE80211_CONN_MODE_EHT &&
++ !ieee80211_hw_check(&local->hw, STRICT))
+ capab |= WLAN_CAPABILITY_ESS;
+
+ /* add the elements for the assoc (main) link */
+@@ -4710,7 +4718,7 @@ static bool ieee80211_assoc_config_link(struct ieee80211_link_data *link,
+ * 2G/3G/4G wifi routers, reported models include the "Onda PN51T",
+ * "Vodafone PocketWiFi 2", "ZTE MF60" and a similar T-Mobile device.
+ */
+- if (!is_6ghz &&
++ if (!ieee80211_hw_check(&local->hw, STRICT) && !is_6ghz &&
+ ((assoc_data->wmm && !elems->wmm_param) ||
+ (link->u.mgd.conn.mode >= IEEE80211_CONN_MODE_HT &&
+ (!elems->ht_cap_elem || !elems->ht_operation)) ||
+@@ -4846,6 +4854,15 @@ static bool ieee80211_assoc_config_link(struct ieee80211_link_data *link,
+ bss_vht_cap = (const void *)elem->data;
+ }
+
++ if (ieee80211_hw_check(&local->hw, STRICT) &&
++ (!bss_vht_cap || memcmp(bss_vht_cap, elems->vht_cap_elem,
++ sizeof(*bss_vht_cap)))) {
++ rcu_read_unlock();
++ ret = false;
++ link_info(link, "VHT capabilities mismatch\n");
++ goto out;
++ }
++
+ ieee80211_vht_cap_ie_to_sta_vht_cap(sdata, sband,
+ elems->vht_cap_elem,
+ bss_vht_cap, link_sta);
+diff --git a/net/mptcp/sockopt.c b/net/mptcp/sockopt.c
+index 505445a9598faf..3caa0a9d3b3885 100644
+--- a/net/mptcp/sockopt.c
++++ b/net/mptcp/sockopt.c
+@@ -1419,6 +1419,12 @@ static int mptcp_getsockopt_v4(struct mptcp_sock *msk, int optname,
+ switch (optname) {
+ case IP_TOS:
+ return mptcp_put_int_option(msk, optval, optlen, READ_ONCE(inet_sk(sk)->tos));
++ case IP_FREEBIND:
++ return mptcp_put_int_option(msk, optval, optlen,
++ inet_test_bit(FREEBIND, sk));
++ case IP_TRANSPARENT:
++ return mptcp_put_int_option(msk, optval, optlen,
++ inet_test_bit(TRANSPARENT, sk));
+ case IP_BIND_ADDRESS_NO_PORT:
+ return mptcp_put_int_option(msk, optval, optlen,
+ inet_test_bit(BIND_ADDRESS_NO_PORT, sk));
+@@ -1430,6 +1436,26 @@ static int mptcp_getsockopt_v4(struct mptcp_sock *msk, int optname,
+ return -EOPNOTSUPP;
+ }
+
++static int mptcp_getsockopt_v6(struct mptcp_sock *msk, int optname,
++ char __user *optval, int __user *optlen)
++{
++ struct sock *sk = (void *)msk;
++
++ switch (optname) {
++ case IPV6_V6ONLY:
++ return mptcp_put_int_option(msk, optval, optlen,
++ sk->sk_ipv6only);
++ case IPV6_TRANSPARENT:
++ return mptcp_put_int_option(msk, optval, optlen,
++ inet_test_bit(TRANSPARENT, sk));
++ case IPV6_FREEBIND:
++ return mptcp_put_int_option(msk, optval, optlen,
++ inet_test_bit(FREEBIND, sk));
++ }
++
++ return -EOPNOTSUPP;
++}
++
+ static int mptcp_getsockopt_sol_mptcp(struct mptcp_sock *msk, int optname,
+ char __user *optval, int __user *optlen)
+ {
+@@ -1469,6 +1495,8 @@ int mptcp_getsockopt(struct sock *sk, int level, int optname,
+
+ if (level == SOL_IP)
+ return mptcp_getsockopt_v4(msk, optname, optval, option);
++ if (level == SOL_IPV6)
++ return mptcp_getsockopt_v6(msk, optname, optval, option);
+ if (level == SOL_TCP)
+ return mptcp_getsockopt_sol_tcp(msk, optname, optval, option);
+ if (level == SOL_MPTCP)
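With the SOL_IPV6 branch wired up, IPV6_V6ONLY, IPV6_TRANSPARENT and IPV6_FREEBIND become readable on MPTCP sockets instead of falling through to -EOPNOTSUPP. A minimal userspace probe (a sketch; the IPPROTO_MPTCP fallback define mirrors the kernel UAPI value in case your headers lack it):

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

#ifndef IPPROTO_MPTCP
#define IPPROTO_MPTCP 262	/* value from the kernel UAPI */
#endif

int main(void)
{
	int fd = socket(AF_INET6, SOCK_STREAM, IPPROTO_MPTCP);
	int val = 0;
	socklen_t len = sizeof(val);

	if (fd < 0) {
		perror("socket");	/* kernel without MPTCP support */
		return 1;
	}
	/* Before this change the call failed with EOPNOTSUPP. */
	if (getsockopt(fd, SOL_IPV6, IPV6_V6ONLY, &val, &len) == 0)
		printf("IPV6_V6ONLY = %d\n", val);
	else
		perror("getsockopt");
	return 0;
}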
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index b56bbee7312c48..4c2aa45c466d93 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -754,8 +754,6 @@ static bool subflow_hmac_valid(const struct request_sock *req,
+
+ subflow_req = mptcp_subflow_rsk(req);
+ msk = subflow_req->msk;
+- if (!msk)
+- return false;
+
+ subflow_generate_hmac(READ_ONCE(msk->remote_key),
+ READ_ONCE(msk->local_key),
+@@ -853,12 +851,8 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+
+ } else if (subflow_req->mp_join) {
+ mptcp_get_options(skb, &mp_opt);
+- if (!(mp_opt.suboptions & OPTION_MPTCP_MPJ_ACK) ||
+- !subflow_hmac_valid(req, &mp_opt) ||
+- !mptcp_can_accept_new_subflow(subflow_req->msk)) {
+- SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINACKMAC);
++ if (!(mp_opt.suboptions & OPTION_MPTCP_MPJ_ACK))
+ fallback = true;
+- }
+ }
+
+ create_child:
+@@ -908,6 +902,17 @@ static struct sock *subflow_syn_recv_sock(const struct sock *sk,
+ goto dispose_child;
+ }
+
++ if (!subflow_hmac_valid(req, &mp_opt)) {
++ SUBFLOW_REQ_INC_STATS(req, MPTCP_MIB_JOINACKMAC);
++ subflow_add_reset_reason(skb, MPTCP_RST_EPROHIBIT);
++ goto dispose_child;
++ }
++
++ if (!mptcp_can_accept_new_subflow(owner)) {
++ subflow_add_reset_reason(skb, MPTCP_RST_EPROHIBIT);
++ goto dispose_child;
++ }
++
+ /* move the msk reference ownership to the subflow */
+ subflow_req->msk = NULL;
+ ctx->conn = (struct sock *)owner;
+diff --git a/net/netfilter/nft_set_pipapo_avx2.c b/net/netfilter/nft_set_pipapo_avx2.c
+index b8d3c3213efee5..c15db28c5ebc43 100644
+--- a/net/netfilter/nft_set_pipapo_avx2.c
++++ b/net/netfilter/nft_set_pipapo_avx2.c
+@@ -994,8 +994,9 @@ static int nft_pipapo_avx2_lookup_8b_16(unsigned long *map, unsigned long *fill,
+ NFT_PIPAPO_AVX2_BUCKET_LOAD8(5, lt, 8, pkt[8], bsize);
+
+ NFT_PIPAPO_AVX2_AND(6, 2, 3);
++ NFT_PIPAPO_AVX2_AND(3, 4, 7);
+ NFT_PIPAPO_AVX2_BUCKET_LOAD8(7, lt, 9, pkt[9], bsize);
+- NFT_PIPAPO_AVX2_AND(0, 4, 5);
++ NFT_PIPAPO_AVX2_AND(0, 3, 5);
+ NFT_PIPAPO_AVX2_BUCKET_LOAD8(1, lt, 10, pkt[10], bsize);
+ NFT_PIPAPO_AVX2_AND(2, 6, 7);
+ NFT_PIPAPO_AVX2_BUCKET_LOAD8(3, lt, 11, pkt[11], bsize);
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index 998ea3b5badfce..a3bab5e27e71bb 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -2051,6 +2051,7 @@ static int tcf_fill_node(struct net *net, struct sk_buff *skb,
+ struct tcmsg *tcm;
+ struct nlmsghdr *nlh;
+ unsigned char *b = skb_tail_pointer(skb);
++ int ret = -EMSGSIZE;
+
+ nlh = nlmsg_put(skb, portid, seq, event, sizeof(*tcm), flags);
+ if (!nlh)
+@@ -2095,11 +2096,45 @@ static int tcf_fill_node(struct net *net, struct sk_buff *skb,
+
+ return skb->len;
+
++cls_op_not_supp:
++ ret = -EOPNOTSUPP;
+ out_nlmsg_trim:
+ nla_put_failure:
+-cls_op_not_supp:
+ nlmsg_trim(skb, b);
+- return -1;
++ return ret;
++}
++
++static struct sk_buff *tfilter_notify_prep(struct net *net,
++ struct sk_buff *oskb,
++ struct nlmsghdr *n,
++ struct tcf_proto *tp,
++ struct tcf_block *block,
++ struct Qdisc *q, u32 parent,
++ void *fh, int event,
++ u32 portid, bool rtnl_held,
++ struct netlink_ext_ack *extack)
++{
++ unsigned int size = oskb ? max(NLMSG_GOODSIZE, oskb->len) : NLMSG_GOODSIZE;
++ struct sk_buff *skb;
++ int ret;
++
++retry:
++ skb = alloc_skb(size, GFP_KERNEL);
++ if (!skb)
++ return ERR_PTR(-ENOBUFS);
++
++ ret = tcf_fill_node(net, skb, tp, block, q, parent, fh, portid,
++ n->nlmsg_seq, n->nlmsg_flags, event, false,
++ rtnl_held, extack);
++ if (ret <= 0) {
++ kfree_skb(skb);
++ if (ret == -EMSGSIZE) {
++ size += NLMSG_GOODSIZE;
++ goto retry;
++ }
++ return ERR_PTR(-EINVAL);
++ }
++ return skb;
+ }
+
+ static int tfilter_notify(struct net *net, struct sk_buff *oskb,
+@@ -2115,16 +2150,10 @@ static int tfilter_notify(struct net *net, struct sk_buff *oskb,
+ if (!unicast && !rtnl_notify_needed(net, n->nlmsg_flags, RTNLGRP_TC))
+ return 0;
+
+- skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
+- if (!skb)
+- return -ENOBUFS;
+-
+- if (tcf_fill_node(net, skb, tp, block, q, parent, fh, portid,
+- n->nlmsg_seq, n->nlmsg_flags, event,
+- false, rtnl_held, extack) <= 0) {
+- kfree_skb(skb);
+- return -EINVAL;
+- }
++ skb = tfilter_notify_prep(net, oskb, n, tp, block, q, parent, fh, event,
++ portid, rtnl_held, extack);
++ if (IS_ERR(skb))
++ return PTR_ERR(skb);
+
+ if (unicast)
+ err = rtnl_unicast(skb, net, portid);
+@@ -2147,16 +2176,11 @@ static int tfilter_del_notify(struct net *net, struct sk_buff *oskb,
+ if (!rtnl_notify_needed(net, n->nlmsg_flags, RTNLGRP_TC))
+ return tp->ops->delete(tp, fh, last, rtnl_held, extack);
+
+- skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
+- if (!skb)
+- return -ENOBUFS;
+-
+- if (tcf_fill_node(net, skb, tp, block, q, parent, fh, portid,
+- n->nlmsg_seq, n->nlmsg_flags, RTM_DELTFILTER,
+- false, rtnl_held, extack) <= 0) {
++ skb = tfilter_notify_prep(net, oskb, n, tp, block, q, parent, fh,
++ RTM_DELTFILTER, portid, rtnl_held, extack);
++ if (IS_ERR(skb)) {
+ NL_SET_ERR_MSG(extack, "Failed to build del event notification");
+- kfree_skb(skb);
+- return -EINVAL;
++ return PTR_ERR(skb);
+ }
+
+ err = tp->ops->delete(tp, fh, last, rtnl_held, extack);
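tfilter_notify_prep() factors out an allocate/fill/retry loop: start from NLMSG_GOODSIZE (or the request's own size) and grow only when tcf_fill_node() reports -EMSGSIZE, which it can now do instead of a bare -1. The same pattern in a self-contained userspace form (prep(), fill() and CHUNK are illustrative names):

#include <errno.h>
#include <stdlib.h>

#define CHUNK 4096	/* stands in for NLMSG_GOODSIZE */

/* fill() writes a message into buf and returns its length, or
 * -EMSGSIZE when buf is too small -- the contract tcf_fill_node()
 * now honours. */
static void *prep(int (*fill)(void *buf, size_t size), size_t *out_len)
{
	size_t size = CHUNK;

	for (;;) {
		void *buf = malloc(size);
		int ret;

		if (!buf)
			return NULL;
		ret = fill(buf, size);
		if (ret > 0) {
			*out_len = (size_t)ret;
			return buf;
		}
		free(buf);
		if (ret != -EMSGSIZE)
			return NULL;	/* hard failure: do not retry */
		size += CHUNK;		/* did not fit: grow and retry */
	}
}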
+diff --git a/net/sched/sch_codel.c b/net/sched/sch_codel.c
+index 3e8d4fe4d91e3e..e1f6e7618debd4 100644
+--- a/net/sched/sch_codel.c
++++ b/net/sched/sch_codel.c
+@@ -65,10 +65,7 @@ static struct sk_buff *codel_qdisc_dequeue(struct Qdisc *sch)
+ &q->stats, qdisc_pkt_len, codel_get_enqueue_time,
+ drop_func, dequeue_func);
+
+- /* We cant call qdisc_tree_reduce_backlog() if our qlen is 0,
+- * or HTB crashes. Defer it for next round.
+- */
+- if (q->stats.drop_count && sch->q.qlen) {
++ if (q->stats.drop_count) {
+ qdisc_tree_reduce_backlog(sch, q->stats.drop_count, q->stats.drop_len);
+ q->stats.drop_count = 0;
+ q->stats.drop_len = 0;
+diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
+index 4f908c11ba9528..778f6e5966be80 100644
+--- a/net/sched/sch_fq_codel.c
++++ b/net/sched/sch_fq_codel.c
+@@ -314,10 +314,8 @@ static struct sk_buff *fq_codel_dequeue(struct Qdisc *sch)
+ }
+ qdisc_bstats_update(sch, skb);
+ flow->deficit -= qdisc_pkt_len(skb);
+- /* We cant call qdisc_tree_reduce_backlog() if our qlen is 0,
+- * or HTB crashes. Defer it for next round.
+- */
+- if (q->cstats.drop_count && sch->q.qlen) {
++
++ if (q->cstats.drop_count) {
+ qdisc_tree_reduce_backlog(sch, q->cstats.drop_count,
+ q->cstats.drop_len);
+ q->cstats.drop_count = 0;
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index 65d5b59da58303..58b42dcf8f2013 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -631,6 +631,15 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt,
+ struct red_parms *p = NULL;
+ struct sk_buff *to_free = NULL;
+ struct sk_buff *tail = NULL;
++ unsigned int maxflows;
++ unsigned int quantum;
++ unsigned int divisor;
++ int perturb_period;
++ u8 headdrop;
++ u8 maxdepth;
++ int limit;
++ u8 flags;
++
+
+ if (opt->nla_len < nla_attr_size(sizeof(*ctl)))
+ return -EINVAL;
+@@ -652,39 +661,64 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt,
+ if (!p)
+ return -ENOMEM;
+ }
+- if (ctl->limit == 1) {
+- NL_SET_ERR_MSG_MOD(extack, "invalid limit");
+- return -EINVAL;
+- }
++
+ sch_tree_lock(sch);
++
++ limit = q->limit;
++ divisor = q->divisor;
++ headdrop = q->headdrop;
++ maxdepth = q->maxdepth;
++ maxflows = q->maxflows;
++ perturb_period = q->perturb_period;
++ quantum = q->quantum;
++ flags = q->flags;
++
++ /* update and validate configuration */
+ if (ctl->quantum)
+- q->quantum = ctl->quantum;
+- WRITE_ONCE(q->perturb_period, ctl->perturb_period * HZ);
++ quantum = ctl->quantum;
++ perturb_period = ctl->perturb_period * HZ;
+ if (ctl->flows)
+- q->maxflows = min_t(u32, ctl->flows, SFQ_MAX_FLOWS);
++ maxflows = min_t(u32, ctl->flows, SFQ_MAX_FLOWS);
+ if (ctl->divisor) {
+- q->divisor = ctl->divisor;
+- q->maxflows = min_t(u32, q->maxflows, q->divisor);
++ divisor = ctl->divisor;
++ maxflows = min_t(u32, maxflows, divisor);
+ }
+ if (ctl_v1) {
+ if (ctl_v1->depth)
+- q->maxdepth = min_t(u32, ctl_v1->depth, SFQ_MAX_DEPTH);
++ maxdepth = min_t(u32, ctl_v1->depth, SFQ_MAX_DEPTH);
+ if (p) {
+- swap(q->red_parms, p);
+- red_set_parms(q->red_parms,
++ red_set_parms(p,
+ ctl_v1->qth_min, ctl_v1->qth_max,
+ ctl_v1->Wlog,
+ ctl_v1->Plog, ctl_v1->Scell_log,
+ NULL,
+ ctl_v1->max_P);
+ }
+- q->flags = ctl_v1->flags;
+- q->headdrop = ctl_v1->headdrop;
++ flags = ctl_v1->flags;
++ headdrop = ctl_v1->headdrop;
+ }
+ if (ctl->limit) {
+- q->limit = min_t(u32, ctl->limit, q->maxdepth * q->maxflows);
+- q->maxflows = min_t(u32, q->maxflows, q->limit);
++ limit = min_t(u32, ctl->limit, maxdepth * maxflows);
++ maxflows = min_t(u32, maxflows, limit);
+ }
++ if (limit == 1) {
++ sch_tree_unlock(sch);
++ kfree(p);
++ NL_SET_ERR_MSG_MOD(extack, "invalid limit");
++ return -EINVAL;
++ }
++
++ /* commit configuration */
++ q->limit = limit;
++ q->divisor = divisor;
++ q->headdrop = headdrop;
++ q->maxdepth = maxdepth;
++ q->maxflows = maxflows;
++ WRITE_ONCE(q->perturb_period, perturb_period);
++ q->quantum = quantum;
++ q->flags = flags;
++ if (p)
++ swap(q->red_parms, p);
+
+ qlen = sch->q.qlen;
+ while (sch->q.qlen > q->limit) {
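The sfq_change() rework snapshots every tunable into locals, applies the request to the copy, validates the combined result (the limit == 1 rejection now runs after the limit/maxflows clamping, where it can actually trigger), and commits only on success, so a rejected netlink request no longer leaves the qdisc half-configured. A compact sketch of that validate-then-commit shape (struct cfg and change_cfg() are illustrative):

#include <errno.h>

struct cfg {
	unsigned int limit;
	unsigned int maxflows;
};

/* Mutate a local copy, check the combined result, and publish only
 * on success so the live state is never left half-updated. */
static int change_cfg(struct cfg *live, const struct cfg *req)
{
	struct cfg next = *live;	/* snapshot the current state */

	if (req->maxflows)
		next.maxflows = req->maxflows;
	if (req->limit) {
		next.limit = req->limit;
		if (next.maxflows > next.limit)	/* keep invariants coupled */
			next.maxflows = next.limit;
	}
	if (next.limit == 1)		/* only checkable after clamping */
		return -EINVAL;

	*live = next;			/* commit (under sch_tree_lock()
					 * in the real code) */
	return 0;
}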
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 36ee34f483d703..53725ee7ba06d7 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -72,8 +72,9 @@
+ /* Forward declarations for internal helper functions. */
+ static bool sctp_writeable(const struct sock *sk);
+ static void sctp_wfree(struct sk_buff *skb);
+-static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
+- size_t msg_len);
++static int sctp_wait_for_sndbuf(struct sctp_association *asoc,
++ struct sctp_transport *transport,
++ long *timeo_p, size_t msg_len);
+ static int sctp_wait_for_packet(struct sock *sk, int *err, long *timeo_p);
+ static int sctp_wait_for_connect(struct sctp_association *, long *timeo_p);
+ static int sctp_wait_for_accept(struct sock *sk, long timeo);
+@@ -1828,7 +1829,7 @@ static int sctp_sendmsg_to_asoc(struct sctp_association *asoc,
+
+ if (sctp_wspace(asoc) <= 0 || !sk_wmem_schedule(sk, msg_len)) {
+ timeo = sock_sndtimeo(sk, msg->msg_flags & MSG_DONTWAIT);
+- err = sctp_wait_for_sndbuf(asoc, &timeo, msg_len);
++ err = sctp_wait_for_sndbuf(asoc, transport, &timeo, msg_len);
+ if (err)
+ goto err;
+ if (unlikely(sinfo->sinfo_stream >= asoc->stream.outcnt)) {
+@@ -9214,8 +9215,9 @@ void sctp_sock_rfree(struct sk_buff *skb)
+
+
+ /* Helper function to wait for space in the sndbuf. */
+-static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
+- size_t msg_len)
++static int sctp_wait_for_sndbuf(struct sctp_association *asoc,
++ struct sctp_transport *transport,
++ long *timeo_p, size_t msg_len)
+ {
+ struct sock *sk = asoc->base.sk;
+ long current_timeo = *timeo_p;
+@@ -9225,7 +9227,9 @@ static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
+ pr_debug("%s: asoc:%p, timeo:%ld, msg_len:%zu\n", __func__, asoc,
+ *timeo_p, msg_len);
+
+- /* Increment the association's refcnt. */
++ /* Increment the transport and association's refcnt. */
++ if (transport)
++ sctp_transport_hold(transport);
+ sctp_association_hold(asoc);
+
+ /* Wait on the association specific sndbuf space. */
+@@ -9234,7 +9238,7 @@ static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
+ TASK_INTERRUPTIBLE);
+ if (asoc->base.dead)
+ goto do_dead;
+- if (!*timeo_p)
++ if ((!*timeo_p) || (transport && transport->dead))
+ goto do_nonblock;
+ if (sk->sk_err || asoc->state >= SCTP_STATE_SHUTDOWN_PENDING)
+ goto do_error;
+@@ -9259,7 +9263,9 @@ static int sctp_wait_for_sndbuf(struct sctp_association *asoc, long *timeo_p,
+ out:
+ finish_wait(&asoc->wait, &wait);
+
+- /* Release the association's refcnt. */
++ /* Release the transport and association's refcnt. */
++ if (transport)
++ sctp_transport_put(transport);
+ sctp_association_put(asoc);
+
+ return err;
+diff --git a/net/sctp/transport.c b/net/sctp/transport.c
+index 2abe45af98e7c6..31eca29b6cfbfb 100644
+--- a/net/sctp/transport.c
++++ b/net/sctp/transport.c
+@@ -117,6 +117,8 @@ struct sctp_transport *sctp_transport_new(struct net *net,
+ */
+ void sctp_transport_free(struct sctp_transport *transport)
+ {
++ transport->dead = 1;
++
+ /* Try to delete the heartbeat timer. */
+ if (del_timer(&transport->hb_timer))
+ sctp_transport_put(transport);
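sctp_wait_for_sndbuf() can now observe that the transport it is blocked on has been torn down: sctp_transport_free() raises the new dead flag, and the waiter pins the transport with a refcount across the sleep so reading the flag stays safe. The general shape, reduced to C11 atomics (a sketch, not the kernel's wait-queue machinery):

#include <errno.h>
#include <stdatomic.h>
#include <stdbool.h>

struct transport {
	atomic_int refcnt;
	atomic_bool dead;
};

static void transport_hold(struct transport *t)
{
	atomic_fetch_add(&t->refcnt, 1);
}

static void transport_put(struct transport *t)
{
	if (atomic_fetch_sub(&t->refcnt, 1) == 1) {
		/* last reference: the real code frees here */
	}
}

/* Pin the object before sleeping, re-check the dead flag on every
 * wakeup, and drop the pin on the way out -- the shape added to
 * sctp_wait_for_sndbuf(). */
static int wait_for_space(struct transport *t, bool (*have_space)(void))
{
	int err = 0;

	transport_hold(t);
	while (!have_space()) {
		if (atomic_load(&t->dead)) {	/* transport torn down */
			err = -EAGAIN;
			break;
		}
		/* sleep/wakeup elided */
	}
	transport_put(t);
	return err;
}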
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+index c3fbf0779d4ab6..aca8bdf65d729f 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+@@ -621,7 +621,8 @@ static void __svc_rdma_free(struct work_struct *work)
+ /* Destroy the CM ID */
+ rdma_destroy_id(rdma->sc_cm_id);
+
+- rpcrdma_rn_unregister(device, &rdma->sc_rn);
++ if (!test_bit(XPT_LISTENER, &rdma->sc_xprt.xpt_flags))
++ rpcrdma_rn_unregister(device, &rdma->sc_rn);
+ kfree(rdma);
+ }
+
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index 5c2088a469cea1..5689e1f4854797 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -1046,6 +1046,7 @@ int tipc_link_xmit(struct tipc_link *l, struct sk_buff_head *list,
+ if (unlikely(l->backlog[imp].len >= l->backlog[imp].limit)) {
+ if (imp == TIPC_SYSTEM_IMPORTANCE) {
+ pr_warn("%s<%s>, link overflow", link_rst_msg, l->name);
++ __skb_queue_purge(list);
+ return -ENOBUFS;
+ }
+ rc = link_schedule_user(l, hdr);
+diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c
+index 6b4b9f2749a6fd..0acf313deb01ff 100644
+--- a/net/tls/tls_main.c
++++ b/net/tls/tls_main.c
+@@ -809,6 +809,11 @@ static int tls_setsockopt(struct sock *sk, int level, int optname,
+ return do_tls_setsockopt(sk, optname, optval, optlen);
+ }
+
++static int tls_disconnect(struct sock *sk, int flags)
++{
++ return -EOPNOTSUPP;
++}
++
+ struct tls_context *tls_ctx_create(struct sock *sk)
+ {
+ struct inet_connection_sock *icsk = inet_csk(sk);
+@@ -904,6 +909,7 @@ static void build_protos(struct proto prot[TLS_NUM_CONFIG][TLS_NUM_CONFIG],
+ prot[TLS_BASE][TLS_BASE] = *base;
+ prot[TLS_BASE][TLS_BASE].setsockopt = tls_setsockopt;
+ prot[TLS_BASE][TLS_BASE].getsockopt = tls_getsockopt;
++ prot[TLS_BASE][TLS_BASE].disconnect = tls_disconnect;
+ prot[TLS_BASE][TLS_BASE].close = tls_sk_proto_close;
+
+ prot[TLS_SW][TLS_BASE] = prot[TLS_BASE][TLS_BASE];
+diff --git a/scripts/generate_builtin_ranges.awk b/scripts/generate_builtin_ranges.awk
+index b9ec761b3befc4..d4bd5c2b998ca2 100755
+--- a/scripts/generate_builtin_ranges.awk
++++ b/scripts/generate_builtin_ranges.awk
+@@ -282,6 +282,11 @@ ARGIND == 2 && !anchor && NF == 2 && $1 ~ /^0x/ && $2 !~ /^0x/ {
+ # section.
+ #
+ ARGIND == 2 && sect && NF == 4 && /^ [^ \*]/ && !($1 in sect_addend) {
++ # There are a few sections with constant data (without symbols) that
++ # can get resized during linking, so it is best to ignore them.
++ if ($1 ~ /^\.rodata\.(cst|str)[0-9]/)
++ next;
++
+ if (!($1 in sect_base)) {
+ sect_base[$1] = base;
+
+diff --git a/security/integrity/ima/ima.h b/security/integrity/ima/ima.h
+index abfdb4905ca2ac..56bf2f55d9387d 100644
+--- a/security/integrity/ima/ima.h
++++ b/security/integrity/ima/ima.h
+@@ -181,7 +181,8 @@ struct ima_kexec_hdr {
+ #define IMA_UPDATE_XATTR 1
+ #define IMA_CHANGE_ATTR 2
+ #define IMA_DIGSIG 3
+-#define IMA_MUST_MEASURE 4
++#define IMA_MAY_EMIT_TOMTOU 4
++#define IMA_EMITTED_OPENWRITERS 5
+
+ /* IMA integrity metadata associated with an inode */
+ struct ima_iint_cache {
+diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
+index 4b213de8dcb40c..a9aab10bebcaa1 100644
+--- a/security/integrity/ima/ima_main.c
++++ b/security/integrity/ima/ima_main.c
+@@ -129,16 +129,22 @@ static void ima_rdwr_violation_check(struct file *file,
+ if (atomic_read(&inode->i_readcount) && IS_IMA(inode)) {
+ if (!iint)
+ iint = ima_iint_find(inode);
++
+ /* IMA_MEASURE is set from reader side */
+- if (iint && test_bit(IMA_MUST_MEASURE,
+- &iint->atomic_flags))
++ if (iint && test_and_clear_bit(IMA_MAY_EMIT_TOMTOU,
++ &iint->atomic_flags))
+ send_tomtou = true;
+ }
+ } else {
+ if (must_measure)
+- set_bit(IMA_MUST_MEASURE, &iint->atomic_flags);
+- if (inode_is_open_for_write(inode) && must_measure)
+- send_writers = true;
++ set_bit(IMA_MAY_EMIT_TOMTOU, &iint->atomic_flags);
++
++ /* Limit number of open_writers violations */
++ if (inode_is_open_for_write(inode) && must_measure) {
++ if (!test_and_set_bit(IMA_EMITTED_OPENWRITERS,
++ &iint->atomic_flags))
++ send_writers = true;
++ }
+ }
+
+ if (!send_tomtou && !send_writers)
+@@ -167,6 +173,8 @@ static void ima_check_last_writer(struct ima_iint_cache *iint,
+ if (atomic_read(&inode->i_writecount) == 1) {
+ struct kstat stat;
+
++ clear_bit(IMA_EMITTED_OPENWRITERS, &iint->atomic_flags);
++
+ update = test_and_clear_bit(IMA_UPDATE_XATTR,
+ &iint->atomic_flags);
+ if ((iint->flags & IMA_NEW_FILE) ||
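Switching from test_bit() to test_and_clear_bit()/test_and_set_bit() makes both violation paths edge-triggered: a ToMToU violation fires at most once per armed reader, and an open-writers violation fires once per writing window, rearmed by the clear_bit() when the last writer closes. The one-shot idiom in miniature (userspace C11 atomics standing in for the iint flag bits):

#include <stdatomic.h>
#include <stdio.h>

static atomic_flag emitted = ATOMIC_FLAG_INIT;

/* Edge-triggered emission: the first caller wins; later callers stay
 * silent until rearm() runs (clear_bit() in ima_check_last_writer()). */
static void maybe_emit(void)
{
	if (!atomic_flag_test_and_set(&emitted))
		printf("violation logged once\n");
}

static void rearm(void)
{
	atomic_flag_clear(&emitted);
}

int main(void)
{
	maybe_emit();	/* logs */
	maybe_emit();	/* silent */
	rearm();	/* last writer closed */
	maybe_emit();	/* logs again */
	return 0;
}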
+diff --git a/security/landlock/errata.h b/security/landlock/errata.h
+new file mode 100644
+index 00000000000000..8e626accac1011
+--- /dev/null
++++ b/security/landlock/errata.h
+@@ -0,0 +1,99 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++/*
++ * Landlock - Errata information
++ *
++ * Copyright © 2025 Microsoft Corporation
++ */
++
++#ifndef _SECURITY_LANDLOCK_ERRATA_H
++#define _SECURITY_LANDLOCK_ERRATA_H
++
++#include <linux/init.h>
++
++struct landlock_erratum {
++ const int abi;
++ const u8 number;
++};
++
++/* clang-format off */
++#define LANDLOCK_ERRATUM(NUMBER) \
++ { \
++ .abi = LANDLOCK_ERRATA_ABI, \
++ .number = NUMBER, \
++ },
++/* clang-format on */
++
++/*
++ * Some fixes may require user space to check if they are applied on the running
++ * kernel before using a specific feature. For instance, this applies when a
++ * restriction was previously too restrictive and is now getting relaxed (for
++ * compatibility or semantic reasons). However, non-visible changes for
++ * legitimate use (e.g. security fixes) do not require an erratum.
++ */
++static const struct landlock_erratum landlock_errata_init[] __initconst = {
++
++/*
++ * Only Sparse may not implement __has_include. If a compiler does not
++ * implement __has_include, a warning will be printed at boot time (see
++ * setup.c).
++ */
++#ifdef __has_include
++
++#define LANDLOCK_ERRATA_ABI 1
++#if __has_include("errata/abi-1.h")
++#include "errata/abi-1.h"
++#endif
++#undef LANDLOCK_ERRATA_ABI
++
++#define LANDLOCK_ERRATA_ABI 2
++#if __has_include("errata/abi-2.h")
++#include "errata/abi-2.h"
++#endif
++#undef LANDLOCK_ERRATA_ABI
++
++#define LANDLOCK_ERRATA_ABI 3
++#if __has_include("errata/abi-3.h")
++#include "errata/abi-3.h"
++#endif
++#undef LANDLOCK_ERRATA_ABI
++
++#define LANDLOCK_ERRATA_ABI 4
++#if __has_include("errata/abi-4.h")
++#include "errata/abi-4.h"
++#endif
++#undef LANDLOCK_ERRATA_ABI
++
++#define LANDLOCK_ERRATA_ABI 5
++#if __has_include("errata/abi-5.h")
++#include "errata/abi-5.h"
++#endif
++#undef LANDLOCK_ERRATA_ABI
++
++#define LANDLOCK_ERRATA_ABI 6
++#if __has_include("errata/abi-6.h")
++#include "errata/abi-6.h"
++#endif
++#undef LANDLOCK_ERRATA_ABI
++
++/*
++ * For each new erratum, we need to include all the ABI files up to the impacted
++ * ABI to make all potential future intermediate errata easy to backport.
++ *
++ * If such change involves more than one ABI addition, then it must be in a
++ * dedicated commit with the same Fixes tag as used for the actual fix.
++ *
++ * Each commit creating a new security/landlock/errata/abi-*.h file must have a
++ * Depends-on tag to reference the commit that previously added the line to
++ * include this new file, except if the original Fixes tag is enough.
++ *
++ * Each erratum must be documented in its related ABI file, and a dedicated
++ * commit must update Documentation/userspace-api/landlock.rst to include this
++ * erratum. This commit will not be backported.
++ */
++
++#endif
++
++ {}
++};
++
++#endif /* _SECURITY_LANDLOCK_ERRATA_H */
+diff --git a/security/landlock/errata/abi-4.h b/security/landlock/errata/abi-4.h
+new file mode 100644
+index 00000000000000..c052ee54f89f60
+--- /dev/null
++++ b/security/landlock/errata/abi-4.h
+@@ -0,0 +1,15 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++/**
++ * DOC: erratum_1
++ *
++ * Erratum 1: TCP socket identification
++ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++ *
++ * This fix addresses an issue where IPv4 and IPv6 stream sockets (e.g., SMC,
++ * MPTCP, or SCTP) were incorrectly restricted by TCP access rights during
++ * :manpage:`bind(2)` and :manpage:`connect(2)` operations. This change ensures
++ * that only TCP sockets are subject to TCP access rights, allowing other
++ * protocols to operate without unnecessary restrictions.
++ */
++LANDLOCK_ERRATUM(1)
+diff --git a/security/landlock/errata/abi-6.h b/security/landlock/errata/abi-6.h
+new file mode 100644
+index 00000000000000..df7bc0e1fdf472
+--- /dev/null
++++ b/security/landlock/errata/abi-6.h
+@@ -0,0 +1,19 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++/**
++ * DOC: erratum_2
++ *
++ * Erratum 2: Scoped signal handling
++ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
++ *
++ * This fix addresses an issue where signal scoping was overly restrictive,
++ * preventing sandboxed threads from signaling other threads within the same
++ * process if they belonged to different domains. Because threads are not
++ * security boundaries, user space might assume that any thread within the same
++ * process can send signals between themselves (see :manpage:`nptl(7)` and
++ * :manpage:`libpsx(3)`). Consistent with :manpage:`ptrace(2)` behavior, direct
++ * interaction between threads of the same process should always be allowed.
++ * This change ensures that any thread is allowed to send signals to any other
++ * thread within the same process, regardless of their domain.
++ */
++LANDLOCK_ERRATUM(2)
+diff --git a/security/landlock/fs.c b/security/landlock/fs.c
+index 7adb25150488fc..511e6ae8b79c9e 100644
+--- a/security/landlock/fs.c
++++ b/security/landlock/fs.c
+@@ -27,7 +27,9 @@
+ #include <linux/mount.h>
+ #include <linux/namei.h>
+ #include <linux/path.h>
++#include <linux/pid.h>
+ #include <linux/rcupdate.h>
++#include <linux/sched/signal.h>
+ #include <linux/spinlock.h>
+ #include <linux/stat.h>
+ #include <linux/types.h>
+@@ -1623,21 +1625,46 @@ static int hook_file_ioctl_compat(struct file *file, unsigned int cmd,
+ return -EACCES;
+ }
+
+-static void hook_file_set_fowner(struct file *file)
++/*
++ * Always allow sending signals between threads of the same process. This
++ * ensures consistency with hook_task_kill().
++ */
++static bool control_current_fowner(struct fown_struct *const fown)
+ {
+- struct landlock_ruleset *new_dom, *prev_dom;
++ struct task_struct *p;
+
+ /*
+ * Lock already held by __f_setown(), see commit 26f204380a3c ("fs: Fix
+ * file_set_fowner LSM hook inconsistencies").
+ */
+- lockdep_assert_held(&file_f_owner(file)->lock);
+- new_dom = landlock_get_current_domain();
+- landlock_get_ruleset(new_dom);
++ lockdep_assert_held(&fown->lock);
++
++ /*
++ * Some callers (e.g. fcntl_dirnotify) may not be in an RCU read-side
++ * critical section.
++ */
++ guard(rcu)();
++ p = pid_task(fown->pid, fown->pid_type);
++ if (!p)
++ return true;
++
++ return !same_thread_group(p, current);
++}
++
++static void hook_file_set_fowner(struct file *file)
++{
++ struct landlock_ruleset *prev_dom;
++ struct landlock_ruleset *new_dom = NULL;
++
++ if (control_current_fowner(file_f_owner(file))) {
++ new_dom = landlock_get_current_domain();
++ landlock_get_ruleset(new_dom);
++ }
++
+ prev_dom = landlock_file(file)->fown_domain;
+ landlock_file(file)->fown_domain = new_dom;
+
+- /* Called in an RCU read-side critical section. */
++ /* May be called in an RCU read-side critical section. */
+ landlock_put_ruleset_deferred(prev_dom);
+ }
+
+diff --git a/security/landlock/setup.c b/security/landlock/setup.c
+index 28519a45b11ffb..0c85ea27e40990 100644
+--- a/security/landlock/setup.c
++++ b/security/landlock/setup.c
+@@ -6,12 +6,14 @@
+ * Copyright © 2018-2020 ANSSI
+ */
+
++#include <linux/bits.h>
+ #include <linux/init.h>
+ #include <linux/lsm_hooks.h>
+ #include <uapi/linux/lsm.h>
+
+ #include "common.h"
+ #include "cred.h"
++#include "errata.h"
+ #include "fs.h"
+ #include "net.h"
+ #include "setup.h"
+@@ -19,6 +21,11 @@
+
+ bool landlock_initialized __ro_after_init = false;
+
++const struct lsm_id landlock_lsmid = {
++ .name = LANDLOCK_NAME,
++ .id = LSM_ID_LANDLOCK,
++};
++
+ struct lsm_blob_sizes landlock_blob_sizes __ro_after_init = {
+ .lbs_cred = sizeof(struct landlock_cred_security),
+ .lbs_file = sizeof(struct landlock_file_security),
+@@ -26,13 +33,36 @@ struct lsm_blob_sizes landlock_blob_sizes __ro_after_init = {
+ .lbs_superblock = sizeof(struct landlock_superblock_security),
+ };
+
+-const struct lsm_id landlock_lsmid = {
+- .name = LANDLOCK_NAME,
+- .id = LSM_ID_LANDLOCK,
+-};
++int landlock_errata __ro_after_init;
++
++static void __init compute_errata(void)
++{
++ size_t i;
++
++#ifndef __has_include
++ /*
++ * This is a safeguard to make sure the compiler implements
++ * __has_include (see errata.h).
++ */
++ WARN_ON_ONCE(1);
++ return;
++#endif
++
++ for (i = 0; landlock_errata_init[i].number; i++) {
++ const int prev_errata = landlock_errata;
++
++ if (WARN_ON_ONCE(landlock_errata_init[i].abi >
++ landlock_abi_version))
++ continue;
++
++ landlock_errata |= BIT(landlock_errata_init[i].number - 1);
++ WARN_ON_ONCE(prev_errata == landlock_errata);
++ }
++}
+
+ static int __init landlock_init(void)
+ {
++ compute_errata();
+ landlock_add_cred_hooks();
+ landlock_add_task_hooks();
+ landlock_add_fs_hooks();
+diff --git a/security/landlock/setup.h b/security/landlock/setup.h
+index c4252d46d49d48..fca307c35fee5d 100644
+--- a/security/landlock/setup.h
++++ b/security/landlock/setup.h
+@@ -11,7 +11,10 @@
+
+ #include <linux/lsm_hooks.h>
+
++extern const int landlock_abi_version;
++
+ extern bool landlock_initialized;
++extern int landlock_errata;
+
+ extern struct lsm_blob_sizes landlock_blob_sizes;
+ extern const struct lsm_id landlock_lsmid;
+diff --git a/security/landlock/syscalls.c b/security/landlock/syscalls.c
+index c097d356fa4535..4fa2d09f657aee 100644
+--- a/security/landlock/syscalls.c
++++ b/security/landlock/syscalls.c
+@@ -159,7 +159,9 @@ static const struct file_operations ruleset_fops = {
+ * the new ruleset.
+ * @size: Size of the pointed &struct landlock_ruleset_attr (needed for
+ * backward and forward compatibility).
+- * @flags: Supported value: %LANDLOCK_CREATE_RULESET_VERSION.
++ * @flags: Supported value:
++ * - %LANDLOCK_CREATE_RULESET_VERSION
++ * - %LANDLOCK_CREATE_RULESET_ERRATA
+ *
+ * This system call enables to create a new Landlock ruleset, and returns the
+ * related file descriptor on success.
+@@ -168,6 +170,10 @@ static const struct file_operations ruleset_fops = {
+ * 0, then the returned value is the highest supported Landlock ABI version
+ * (starting at 1).
+ *
++ * If @flags is %LANDLOCK_CREATE_RULESET_ERRATA and @attr is NULL and @size is
++ * 0, then the returned value is a bitmask of fixed issues for the current
++ * Landlock ABI version.
++ *
+ * Possible returned errors are:
+ *
+ * - %EOPNOTSUPP: Landlock is supported by the kernel but disabled at boot time;
+@@ -191,9 +197,15 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
+ return -EOPNOTSUPP;
+
+ if (flags) {
+- if ((flags == LANDLOCK_CREATE_RULESET_VERSION) && !attr &&
+- !size)
+- return LANDLOCK_ABI_VERSION;
++ if (attr || size)
++ return -EINVAL;
++
++ if (flags == LANDLOCK_CREATE_RULESET_VERSION)
++ return landlock_abi_version;
++
++ if (flags == LANDLOCK_CREATE_RULESET_ERRATA)
++ return landlock_errata;
++
+ return -EINVAL;
+ }
+
+@@ -234,6 +246,8 @@ SYSCALL_DEFINE3(landlock_create_ruleset,
+ return ruleset_fd;
+ }
+
++const int landlock_abi_version = LANDLOCK_ABI_VERSION;
++
+ /*
+ * Returns an owned ruleset from a FD. It is thus needed to call
+ * landlock_put_ruleset() on the return value.
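User space can query the new bitmask through the same degenerate landlock_create_ruleset() call already used for the ABI version. A minimal probe (a sketch: SYS_landlock_create_ruleset needs a glibc recent enough to define it, and the fallback define mirrors the flag value added by this series):

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef LANDLOCK_CREATE_RULESET_ERRATA
#define LANDLOCK_CREATE_RULESET_ERRATA (1U << 1)	/* UAPI value from this series */
#endif

int main(void)
{
	long errata = syscall(SYS_landlock_create_ruleset, (void *)0, 0,
			      LANDLOCK_CREATE_RULESET_ERRATA);

	if (errata < 0) {
		perror("landlock_create_ruleset");
		return 1;	/* older kernel, or Landlock disabled */
	}
	/* Bit N-1 set means erratum N is fixed for this ABI. */
	printf("errata bitmask: %#lx (erratum 1 %s)\n", errata,
	       (errata & 1) ? "fixed" : "not fixed");
	return 0;
}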
+diff --git a/security/landlock/task.c b/security/landlock/task.c
+index dc7dab78392edc..4578ce6e319d83 100644
+--- a/security/landlock/task.c
++++ b/security/landlock/task.c
+@@ -13,6 +13,7 @@
+ #include <linux/lsm_hooks.h>
+ #include <linux/rcupdate.h>
+ #include <linux/sched.h>
++#include <linux/sched/signal.h>
+ #include <net/af_unix.h>
+ #include <net/sock.h>
+
+@@ -264,6 +265,17 @@ static int hook_task_kill(struct task_struct *const p,
+ /* Dealing with USB IO. */
+ dom = landlock_cred(cred)->domain;
+ } else {
++ /*
++ * Always allow sending signals between threads of the same process.
++ * This is required for process credential changes by the Native POSIX
++ * Threads Library and implemented by the set*id(2) wrappers and
++ * libcap(3) with tgkill(2). See nptl(7) and libpsx(3).
++ *
++ * This exception is similar to the __ptrace_may_access() one.
++ */
++ if (same_thread_group(p, current))
++ return 0;
++
+ dom = landlock_get_current_domain();
+ }
+ dom = landlock_get_applicable_domain(dom, signal_scope);
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index cb9925948175f9..25b1984898ab21 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -37,6 +37,7 @@
+ #include <linux/completion.h>
+ #include <linux/acpi.h>
+ #include <linux/pgtable.h>
++#include <linux/dmi.h>
+
+ #ifdef CONFIG_X86
+ /* for snoop control */
+@@ -1360,8 +1361,21 @@ static void azx_free(struct azx *chip)
+ if (use_vga_switcheroo(hda)) {
+ if (chip->disabled && hda->probe_continued)
+ snd_hda_unlock_devices(&chip->bus);
+- if (hda->vga_switcheroo_registered)
++ if (hda->vga_switcheroo_registered) {
+ vga_switcheroo_unregister_client(chip->pci);
++
++ /* Some GPUs don't have sound, and azx_first_init fails,
++ * leaving the device probed but non-functional. As long
++ * as it's probed, the PCI subsystem keeps its runtime
++ * PM status as active. Force it to suspended (as we
++ * actually stop the chip) to allow GPU to suspend via
++ * vga_switcheroo, and print a warning.
++ */
++ dev_warn(&pci->dev, "GPU sound probed, but not operational: please add a quirk to driver_denylist\n");
++ pm_runtime_disable(&pci->dev);
++ pm_runtime_set_suspended(&pci->dev);
++ pm_runtime_enable(&pci->dev);
++ }
+ }
+
+ if (bus->chip_init) {
+@@ -2071,6 +2085,27 @@ static const struct pci_device_id driver_denylist[] = {
+ {}
+ };
+
++static struct pci_device_id driver_denylist_ideapad_z570[] = {
++ { PCI_DEVICE_SUB(0x10de, 0x0bea, 0x0000, 0x0000) }, /* NVIDIA GF108 HDA */
++ {}
++};
++
++/* DMI-based denylist, to be used when:
++ * - PCI subsystem IDs are zero, impossible to distinguish from valid sound cards.
++ * - Different modifications of the same laptop use different GPU models.
++ */
++static const struct dmi_system_id driver_denylist_dmi[] = {
++ {
++ /* No HDA in NVIDIA DGPU. BIOS disables it, but quirk_nvidia_hda() reenables. */
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_VERSION, "Ideapad Z570"),
++ },
++ .driver_data = &driver_denylist_ideapad_z570,
++ },
++ {}
++};
++
+ static const struct hda_controller_ops pci_hda_ops = {
+ .disable_msi_reset_irq = disable_msi_reset_irq,
+ .position_check = azx_position_check,
+@@ -2081,6 +2116,7 @@ static DECLARE_BITMAP(probed_devs, SNDRV_CARDS);
+ static int azx_probe(struct pci_dev *pci,
+ const struct pci_device_id *pci_id)
+ {
++ const struct dmi_system_id *dmi;
+ struct snd_card *card;
+ struct hda_intel *hda;
+ struct azx *chip;
+@@ -2093,6 +2129,12 @@ static int azx_probe(struct pci_dev *pci,
+ return -ENODEV;
+ }
+
++ dmi = dmi_first_match(driver_denylist_dmi);
++ if (dmi && pci_match_id(dmi->driver_data, pci)) {
++ dev_info(&pci->dev, "Skipping the device on the DMI denylist\n");
++ return -ENODEV;
++ }
++
+ dev = find_first_zero_bit(probed_devs, SNDRV_CARDS);
+ if (dev >= SNDRV_CARDS)
+ return -ENODEV;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 59e59fdc38f2c4..0bf833c9602155 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -4744,6 +4744,22 @@ static void alc245_fixup_hp_mute_led_coefbit(struct hda_codec *codec,
+ }
+ }
+
++static void alc245_fixup_hp_mute_led_v1_coefbit(struct hda_codec *codec,
++ const struct hda_fixup *fix,
++ int action)
++{
++ struct alc_spec *spec = codec->spec;
++
++ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++ spec->mute_led_polarity = 0;
++ spec->mute_led_coef.idx = 0x0b;
++ spec->mute_led_coef.mask = 1 << 3;
++ spec->mute_led_coef.on = 1 << 3;
++ spec->mute_led_coef.off = 0;
++ snd_hda_gen_add_mute_led_cdev(codec, coef_mute_led_set);
++ }
++}
++
+ /* turn on/off mic-mute LED per capture hook by coef bit */
+ static int coef_micmute_led_set(struct led_classdev *led_cdev,
+ enum led_brightness brightness)
+@@ -7851,6 +7867,7 @@ enum {
+ ALC287_FIXUP_TAS2781_I2C,
+ ALC287_FIXUP_YOGA7_14ARB7_I2C,
+ ALC245_FIXUP_HP_MUTE_LED_COEFBIT,
++ ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT,
+ ALC245_FIXUP_HP_X360_MUTE_LEDS,
+ ALC287_FIXUP_THINKPAD_I2S_SPK,
+ ALC287_FIXUP_MG_RTKC_CSAMP_CS35L41_I2C_THINKPAD,
+@@ -10084,6 +10101,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc245_fixup_hp_mute_led_coefbit,
+ },
++ [ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc245_fixup_hp_mute_led_v1_coefbit,
++ },
+ [ALC245_FIXUP_HP_X360_MUTE_LEDS] = {
+ .type = HDA_FIXUP_FUNC,
+ .v.func = alc245_fixup_hp_mute_led_coefbit,
+@@ -10569,6 +10590,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x103c, 0x8b97, "HP", ALC236_FIXUP_HP_MUTE_LED_MICMUTE_VREF),
+ SND_PCI_QUIRK(0x103c, 0x8bb3, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x103c, 0x8bb4, "HP Slim OMEN", ALC287_FIXUP_CS35L41_I2C_2),
++ SND_PCI_QUIRK(0x103c, 0x8bcd, "HP Omen 16-xd0xxx", ALC245_FIXUP_HP_MUTE_LED_V1_COEFBIT),
+ SND_PCI_QUIRK(0x103c, 0x8bdd, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x103c, 0x8bde, "HP Envy 17", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x103c, 0x8bdf, "HP Envy 15", ALC287_FIXUP_CS35L41_I2C_2),
+diff --git a/sound/soc/amd/ps/acp63.h b/sound/soc/amd/ps/acp63.h
+index 39208305dd6c3c..f9759c9342cf38 100644
+--- a/sound/soc/amd/ps/acp63.h
++++ b/sound/soc/amd/ps/acp63.h
+@@ -11,6 +11,7 @@
+ #define ACP_DEVICE_ID 0x15E2
+ #define ACP63_REG_START 0x1240000
+ #define ACP63_REG_END 0x125C000
++#define ACP63_PCI_REV 0x63
+
+ #define ACP_SOFT_RESET_SOFTRESET_AUDDONE_MASK 0x00010001
+ #define ACP_PGFSM_CNTL_POWER_ON_MASK 1
+diff --git a/sound/soc/amd/ps/pci-ps.c b/sound/soc/amd/ps/pci-ps.c
+index 5c4a0be7a78892..aec3150ecf5812 100644
+--- a/sound/soc/amd/ps/pci-ps.c
++++ b/sound/soc/amd/ps/pci-ps.c
+@@ -559,7 +559,7 @@ static int snd_acp63_probe(struct pci_dev *pci,
+
+ /* Pink Sardine device check */
+ switch (pci->revision) {
+- case 0x63:
++ case ACP63_PCI_REV:
+ break;
+ default:
+ dev_dbg(&pci->dev, "acp63 pci device not found\n");
+diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c
+index a7637056972aab..e632f16c910250 100644
+--- a/sound/soc/amd/yc/acp6x-mach.c
++++ b/sound/soc/amd/yc/acp6x-mach.c
+@@ -339,6 +339,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "83Q3"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "LENOVO"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "83J2"),
++ }
++ },
+ {
+ .driver_data = &acp6x_card,
+ .matches = {
+@@ -584,6 +591,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_VERSION, "pang13"),
+ }
+ },
++ {
++ .driver_data = &acp6x_card,
++ .matches = {
++ DMI_MATCH(DMI_BOARD_VENDOR, "Micro-Star International Co., Ltd."),
++ DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 15 C7UCX"),
++ }
++ },
+ {}
+ };
+
+diff --git a/sound/soc/codecs/wcd937x.c b/sound/soc/codecs/wcd937x.c
+index 08fb13a334a4cc..9c1997a42334d6 100644
+--- a/sound/soc/codecs/wcd937x.c
++++ b/sound/soc/codecs/wcd937x.c
+@@ -2564,6 +2564,7 @@ static int wcd937x_soc_codec_probe(struct snd_soc_component *component)
+ ARRAY_SIZE(wcd9375_dapm_widgets));
+ if (ret < 0) {
+ dev_err(component->dev, "Failed to add snd_ctls\n");
++ wcd_clsh_ctrl_free(wcd937x->clsh_info);
+ return ret;
+ }
+
+@@ -2571,6 +2572,7 @@ static int wcd937x_soc_codec_probe(struct snd_soc_component *component)
+ ARRAY_SIZE(wcd9375_audio_map));
+ if (ret < 0) {
+ dev_err(component->dev, "Failed to add routes\n");
++ wcd_clsh_ctrl_free(wcd937x->clsh_info);
+ return ret;
+ }
+ }
+diff --git a/sound/soc/fsl/fsl_audmix.c b/sound/soc/fsl/fsl_audmix.c
+index 3cd9a66b70a157..7981d598ba139b 100644
+--- a/sound/soc/fsl/fsl_audmix.c
++++ b/sound/soc/fsl/fsl_audmix.c
+@@ -488,11 +488,17 @@ static int fsl_audmix_probe(struct platform_device *pdev)
+ goto err_disable_pm;
+ }
+
+- priv->pdev = platform_device_register_data(dev, "imx-audmix", 0, NULL, 0);
+- if (IS_ERR(priv->pdev)) {
+- ret = PTR_ERR(priv->pdev);
+- dev_err(dev, "failed to register platform: %d\n", ret);
+- goto err_disable_pm;
++ /*
++	 * If the "dais" property exists, register the imx-audmix card driver;
++	 * otherwise, the device is expected to be linked by the audio graph card.
++ */
++ if (of_find_property(pdev->dev.of_node, "dais", NULL)) {
++ priv->pdev = platform_device_register_data(dev, "imx-audmix", 0, NULL, 0);
++ if (IS_ERR(priv->pdev)) {
++ ret = PTR_ERR(priv->pdev);
++ dev_err(dev, "failed to register platform: %d\n", ret);
++ goto err_disable_pm;
++ }
+ }
+
+ return 0;
+diff --git a/sound/soc/intel/common/soc-acpi-intel-adl-match.c b/sound/soc/intel/common/soc-acpi-intel-adl-match.c
+index bb1324fb588e97..a68efbe98948f4 100644
+--- a/sound/soc/intel/common/soc-acpi-intel-adl-match.c
++++ b/sound/soc/intel/common/soc-acpi-intel-adl-match.c
+@@ -214,6 +214,15 @@ static const struct snd_soc_acpi_adr_device rt1316_1_group2_adr[] = {
+ }
+ };
+
++static const struct snd_soc_acpi_adr_device rt1316_2_group2_adr[] = {
++ {
++ .adr = 0x000232025D131601ull,
++ .num_endpoints = 1,
++ .endpoints = &spk_r_endpoint,
++ .name_prefix = "rt1316-2"
++ }
++};
++
+ static const struct snd_soc_acpi_adr_device rt1316_1_single_adr[] = {
+ {
+ .adr = 0x000130025D131601ull,
+@@ -547,6 +556,20 @@ static const struct snd_soc_acpi_link_adr adl_chromebook_base[] = {
+ {}
+ };
+
++static const struct snd_soc_acpi_link_adr adl_sdw_rt1316_link02[] = {
++ {
++ .mask = BIT(0),
++ .num_adr = ARRAY_SIZE(rt1316_0_group2_adr),
++ .adr_d = rt1316_0_group2_adr,
++ },
++ {
++ .mask = BIT(2),
++ .num_adr = ARRAY_SIZE(rt1316_2_group2_adr),
++ .adr_d = rt1316_2_group2_adr,
++ },
++ {}
++};
++
+ static const struct snd_soc_acpi_codecs adl_max98357a_amp = {
+ .num_codecs = 1,
+ .codecs = {"MX98357A"}
+@@ -749,6 +772,12 @@ struct snd_soc_acpi_mach snd_soc_acpi_intel_adl_sdw_machines[] = {
+ .drv_name = "sof_sdw",
+ .sof_tplg_filename = "sof-adl-sdw-max98373-rt5682.tplg",
+ },
++ {
++ .link_mask = BIT(0) | BIT(2),
++ .links = adl_sdw_rt1316_link02,
++ .drv_name = "sof_sdw",
++ .sof_tplg_filename = "sof-adl-rt1316-l02.tplg",
++ },
+ {},
+ };
+ EXPORT_SYMBOL_GPL(snd_soc_acpi_intel_adl_sdw_machines);
+diff --git a/sound/soc/qcom/qdsp6/q6apm-dai.c b/sound/soc/qcom/qdsp6/q6apm-dai.c
+index c9404b5934c7e6..2cd522108221a2 100644
+--- a/sound/soc/qcom/qdsp6/q6apm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6apm-dai.c
+@@ -24,8 +24,8 @@
+ #define PLAYBACK_MIN_PERIOD_SIZE 128
+ #define CAPTURE_MIN_NUM_PERIODS 2
+ #define CAPTURE_MAX_NUM_PERIODS 8
+-#define CAPTURE_MAX_PERIOD_SIZE 4096
+-#define CAPTURE_MIN_PERIOD_SIZE 320
++#define CAPTURE_MAX_PERIOD_SIZE 65536
++#define CAPTURE_MIN_PERIOD_SIZE 6144
+ #define BUFFER_BYTES_MAX (PLAYBACK_MAX_NUM_PERIODS * PLAYBACK_MAX_PERIOD_SIZE)
+ #define BUFFER_BYTES_MIN (PLAYBACK_MIN_NUM_PERIODS * PLAYBACK_MIN_PERIOD_SIZE)
+ #define COMPR_PLAYBACK_MAX_FRAGMENT_SIZE (128 * 1024)
+@@ -64,12 +64,12 @@ struct q6apm_dai_rtd {
+ phys_addr_t phys;
+ unsigned int pcm_size;
+ unsigned int pcm_count;
+- unsigned int pos; /* Buffer position */
+ unsigned int periods;
+ unsigned int bytes_sent;
+ unsigned int bytes_received;
+ unsigned int copied_total;
+ uint16_t bits_per_sample;
++ snd_pcm_uframes_t queue_ptr;
+ bool next_track;
+ enum stream_state state;
+ struct q6apm_graph *graph;
+@@ -123,25 +123,16 @@ static void event_handler(uint32_t opcode, uint32_t token, void *payload, void *
+ {
+ struct q6apm_dai_rtd *prtd = priv;
+ struct snd_pcm_substream *substream = prtd->substream;
+- unsigned long flags;
+
+ switch (opcode) {
+ case APM_CLIENT_EVENT_CMD_EOS_DONE:
+ prtd->state = Q6APM_STREAM_STOPPED;
+ break;
+ case APM_CLIENT_EVENT_DATA_WRITE_DONE:
+- spin_lock_irqsave(&prtd->lock, flags);
+- prtd->pos += prtd->pcm_count;
+- spin_unlock_irqrestore(&prtd->lock, flags);
+ snd_pcm_period_elapsed(substream);
+- if (prtd->state == Q6APM_STREAM_RUNNING)
+- q6apm_write_async(prtd->graph, prtd->pcm_count, 0, 0, 0);
+
+ break;
+ case APM_CLIENT_EVENT_DATA_READ_DONE:
+- spin_lock_irqsave(&prtd->lock, flags);
+- prtd->pos += prtd->pcm_count;
+- spin_unlock_irqrestore(&prtd->lock, flags);
+ snd_pcm_period_elapsed(substream);
+ if (prtd->state == Q6APM_STREAM_RUNNING)
+ q6apm_read(prtd->graph);
+@@ -248,7 +239,6 @@ static int q6apm_dai_prepare(struct snd_soc_component *component,
+ }
+
+ prtd->pcm_count = snd_pcm_lib_period_bytes(substream);
+- prtd->pos = 0;
+ /* rate and channels are sent to audio driver */
+ ret = q6apm_graph_media_format_shmem(prtd->graph, &cfg);
+ if (ret < 0) {
+@@ -294,6 +284,27 @@ static int q6apm_dai_prepare(struct snd_soc_component *component,
+ return 0;
+ }
+
++static int q6apm_dai_ack(struct snd_soc_component *component, struct snd_pcm_substream *substream)
++{
++ struct snd_pcm_runtime *runtime = substream->runtime;
++ struct q6apm_dai_rtd *prtd = runtime->private_data;
++ int i, ret = 0, avail_periods;
++
++ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
++ avail_periods = (runtime->control->appl_ptr - prtd->queue_ptr)/runtime->period_size;
++ for (i = 0; i < avail_periods; i++) {
++ ret = q6apm_write_async(prtd->graph, prtd->pcm_count, 0, 0, NO_TIMESTAMP);
++ if (ret < 0) {
++ dev_err(component->dev, "Error queuing playback buffer %d\n", ret);
++ return ret;
++ }
++ prtd->queue_ptr += runtime->period_size;
++ }
++ }
++
++ return ret;
++}
++
+ static int q6apm_dai_trigger(struct snd_soc_component *component,
+ struct snd_pcm_substream *substream, int cmd)
+ {
+@@ -305,9 +316,6 @@ static int q6apm_dai_trigger(struct snd_soc_component *component,
+ case SNDRV_PCM_TRIGGER_START:
+ case SNDRV_PCM_TRIGGER_RESUME:
+ case SNDRV_PCM_TRIGGER_PAUSE_RELEASE:
+- /* start writing buffers for playback only as we already queued capture buffers */
+- if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+- ret = q6apm_write_async(prtd->graph, prtd->pcm_count, 0, 0, 0);
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ /* TODO support be handled via SoftPause Module */
+@@ -377,13 +385,14 @@ static int q6apm_dai_open(struct snd_soc_component *component,
+ }
+ }
+
+- ret = snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_BYTES, 32);
++	/* set up 10ms latency to accommodate DSP restrictions */
++ ret = snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_PERIOD_SIZE, 480);
+ if (ret < 0) {
+ dev_err(dev, "constraint for period bytes step ret = %d\n", ret);
+ goto err;
+ }
+
+- ret = snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_BUFFER_BYTES, 32);
++ ret = snd_pcm_hw_constraint_step(runtime, 0, SNDRV_PCM_HW_PARAM_BUFFER_SIZE, 480);
+ if (ret < 0) {
+ dev_err(dev, "constraint for buffer bytes step ret = %d\n", ret);
+ goto err;
+@@ -428,16 +437,12 @@ static snd_pcm_uframes_t q6apm_dai_pointer(struct snd_soc_component *component,
+ struct snd_pcm_runtime *runtime = substream->runtime;
+ struct q6apm_dai_rtd *prtd = runtime->private_data;
+ snd_pcm_uframes_t ptr;
+- unsigned long flags;
+
+- spin_lock_irqsave(&prtd->lock, flags);
+- if (prtd->pos == prtd->pcm_size)
+- prtd->pos = 0;
+-
+- ptr = bytes_to_frames(runtime, prtd->pos);
+- spin_unlock_irqrestore(&prtd->lock, flags);
++ ptr = q6apm_get_hw_pointer(prtd->graph, substream->stream) * runtime->period_size;
++ if (ptr)
++ return ptr - 1;
+
+- return ptr;
++ return 0;
+ }
+
+ static int q6apm_dai_hw_params(struct snd_soc_component *component,
+@@ -652,8 +657,6 @@ static int q6apm_dai_compr_set_params(struct snd_soc_component *component,
+ prtd->pcm_size = runtime->fragments * runtime->fragment_size;
+ prtd->bits_per_sample = 16;
+
+- prtd->pos = 0;
+-
+ if (prtd->next_track != true) {
+ memcpy(&prtd->codec, codec, sizeof(*codec));
+
+@@ -836,6 +839,7 @@ static const struct snd_soc_component_driver q6apm_fe_dai_component = {
+ .hw_params = q6apm_dai_hw_params,
+ .pointer = q6apm_dai_pointer,
+ .trigger = q6apm_dai_trigger,
++ .ack = q6apm_dai_ack,
+ .compress_ops = &q6apm_dai_compress_ops,
+ .use_dai_pcm_id = true,
+ };
+diff --git a/sound/soc/qcom/qdsp6/q6apm.c b/sound/soc/qcom/qdsp6/q6apm.c
+index 2a2a5bd98110bc..ca57413cb7847a 100644
+--- a/sound/soc/qcom/qdsp6/q6apm.c
++++ b/sound/soc/qcom/qdsp6/q6apm.c
+@@ -494,6 +494,19 @@ int q6apm_read(struct q6apm_graph *graph)
+ }
+ EXPORT_SYMBOL_GPL(q6apm_read);
+
++int q6apm_get_hw_pointer(struct q6apm_graph *graph, int dir)
++{
++ struct audioreach_graph_data *data;
++
++ if (dir == SNDRV_PCM_STREAM_PLAYBACK)
++ data = &graph->rx_data;
++ else
++ data = &graph->tx_data;
++
++ return (int)atomic_read(&data->hw_ptr);
++}
++EXPORT_SYMBOL_GPL(q6apm_get_hw_pointer);
++
+ static int graph_callback(struct gpr_resp_pkt *data, void *priv, int op)
+ {
+ struct data_cmd_rsp_rd_sh_mem_ep_data_buffer_done_v2 *rd_done;
+@@ -520,7 +533,8 @@ static int graph_callback(struct gpr_resp_pkt *data, void *priv, int op)
+ done = data->payload;
+ phys = graph->rx_data.buf[token].phys;
+ mutex_unlock(&graph->lock);
+-
++ /* token numbering starts at 0 */
++ atomic_set(&graph->rx_data.hw_ptr, token + 1);
+ if (lower_32_bits(phys) == done->buf_addr_lsw &&
+ upper_32_bits(phys) == done->buf_addr_msw) {
+ graph->result.opcode = hdr->opcode;
+@@ -553,6 +567,8 @@ static int graph_callback(struct gpr_resp_pkt *data, void *priv, int op)
+ rd_done = data->payload;
+ phys = graph->tx_data.buf[hdr->token].phys;
+ mutex_unlock(&graph->lock);
++ /* token numbering starts at 0 */
++ atomic_set(&graph->tx_data.hw_ptr, hdr->token + 1);
+
+ if (upper_32_bits(phys) == rd_done->buf_addr_msw &&
+ lower_32_bits(phys) == rd_done->buf_addr_lsw) {
+diff --git a/sound/soc/qcom/qdsp6/q6apm.h b/sound/soc/qcom/qdsp6/q6apm.h
+index c248c8d2b1ab7f..7ce08b401e3102 100644
+--- a/sound/soc/qcom/qdsp6/q6apm.h
++++ b/sound/soc/qcom/qdsp6/q6apm.h
+@@ -2,6 +2,7 @@
+ #ifndef __Q6APM_H__
+ #define __Q6APM_H__
+ #include <linux/types.h>
++#include <linux/atomic.h>
+ #include <linux/slab.h>
+ #include <linux/wait.h>
+ #include <linux/kernel.h>
+@@ -77,6 +78,7 @@ struct audioreach_graph_data {
+ uint32_t num_periods;
+ uint32_t dsp_buf;
+ uint32_t mem_map_handle;
++ atomic_t hw_ptr;
+ };
+
+ struct audioreach_graph {
+@@ -150,4 +152,5 @@ int q6apm_enable_compress_module(struct device *dev, struct q6apm_graph *graph,
+ int q6apm_remove_initial_silence(struct device *dev, struct q6apm_graph *graph, uint32_t samples);
+ int q6apm_remove_trailing_silence(struct device *dev, struct q6apm_graph *graph, uint32_t samples);
+ int q6apm_set_real_module_id(struct device *dev, struct q6apm_graph *graph, uint32_t codec_id);
++int q6apm_get_hw_pointer(struct q6apm_graph *graph, int dir);
+ #endif /* __APM_GRAPH_ */
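As I read the new pointer callback, the DSP-side token count recorded in hw_ptr stands in for a byte offset: completed periods times the period size gives one frame past the hardware position, so the callback backs off by one frame. The arithmetic in isolation (hypothetical helper, not part of the patch):

/* Frames for the ALSA pointer callback: completed_periods is the
 * token-derived count from q6apm_get_hw_pointer(); the product is
 * one frame past the hardware position, so back off by one. */
static unsigned long hw_pointer_frames(unsigned int completed_periods,
				       unsigned long period_size)
{
	unsigned long ptr = (unsigned long)completed_periods * period_size;

	return ptr ? ptr - 1 : 0;
}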
+diff --git a/sound/soc/qcom/qdsp6/q6asm-dai.c b/sound/soc/qcom/qdsp6/q6asm-dai.c
+index 045100c9435271..a400c9a31fead5 100644
+--- a/sound/soc/qcom/qdsp6/q6asm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6asm-dai.c
+@@ -892,9 +892,7 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
+
+ if (ret < 0) {
+ dev_err(dev, "q6asm_open_write failed\n");
+- q6asm_audio_client_free(prtd->audio_client);
+- prtd->audio_client = NULL;
+- return ret;
++ goto open_err;
+ }
+ }
+
+@@ -903,7 +901,7 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
+ prtd->session_id, dir);
+ if (ret) {
+ dev_err(dev, "Stream reg failed ret:%d\n", ret);
+- return ret;
++ goto q6_err;
+ }
+
+ ret = __q6asm_dai_compr_set_codec_params(component, stream,
+@@ -911,7 +909,7 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
+ prtd->stream_id);
+ if (ret) {
+ dev_err(dev, "codec param setup failed ret:%d\n", ret);
+- return ret;
++ goto q6_err;
+ }
+
+ ret = q6asm_map_memory_regions(dir, prtd->audio_client, prtd->phys,
+@@ -920,12 +918,21 @@ static int q6asm_dai_compr_set_params(struct snd_soc_component *component,
+
+ if (ret < 0) {
+ dev_err(dev, "Buffer Mapping failed ret:%d\n", ret);
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto q6_err;
+ }
+
+ prtd->state = Q6ASM_STREAM_RUNNING;
+
+ return 0;
++
++q6_err:
++ q6asm_cmd(prtd->audio_client, prtd->stream_id, CMD_CLOSE);
++
++open_err:
++ q6asm_audio_client_free(prtd->audio_client);
++ prtd->audio_client = NULL;
++ return ret;
+ }
+
+ static int q6asm_dai_compr_set_metadata(struct snd_soc_component *component,
+diff --git a/sound/soc/sof/topology.c b/sound/soc/sof/topology.c
+index b3fca5fd87d68c..37ca15cc5728ca 100644
+--- a/sound/soc/sof/topology.c
++++ b/sound/soc/sof/topology.c
+@@ -1269,8 +1269,8 @@ static int sof_widget_parse_tokens(struct snd_soc_component *scomp, struct snd_s
+ struct snd_sof_tuple *new_tuples;
+
+ num_tuples += token_list[object_token_list[i]].count * (num_sets - 1);
+- new_tuples = krealloc(swidget->tuples,
+- sizeof(*new_tuples) * num_tuples, GFP_KERNEL);
++ new_tuples = krealloc_array(swidget->tuples,
++ num_tuples, sizeof(*new_tuples), GFP_KERNEL);
+ if (!new_tuples) {
+ ret = -ENOMEM;
+ goto err;
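krealloc_array() checks the count-times-element-size multiplication for overflow, which the open-coded sizeof(*new_tuples) * num_tuples could silently wrap. The userspace counterpart is reallocarray(3), e.g. (grow() is an illustrative wrapper):

#define _GNU_SOURCE
#include <stdlib.h>

/* reallocarray() fails with ENOMEM instead of silently wrapping
 * when nmemb * size overflows -- the hazard krealloc_array()
 * closes in the kernel. */
static int grow(int **arr, size_t new_count)
{
	int *tmp = reallocarray(*arr, new_count, sizeof(**arr));

	if (!tmp)
		return -1;	/* the old allocation is still valid */
	*arr = tmp;
	return 0;
}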
+diff --git a/sound/usb/midi.c b/sound/usb/midi.c
+index 779d97d31f170e..826ac870f24690 100644
+--- a/sound/usb/midi.c
++++ b/sound/usb/midi.c
+@@ -489,16 +489,84 @@ static void ch345_broken_sysex_input(struct snd_usb_midi_in_endpoint *ep,
+
+ /*
+ * CME protocol: like the standard protocol, but SysEx commands are sent as a
+- * single USB packet preceded by a 0x0F byte.
++ * single USB packet preceded by a 0x0F byte, as are system realtime
++ * messages and MIDI Active Sensing.
++ * Also, multiple messages can be sent in the same packet.
+ */
+ static void snd_usbmidi_cme_input(struct snd_usb_midi_in_endpoint *ep,
+ uint8_t *buffer, int buffer_length)
+ {
+- if (buffer_length < 2 || (buffer[0] & 0x0f) != 0x0f)
+- snd_usbmidi_standard_input(ep, buffer, buffer_length);
+- else
+- snd_usbmidi_input_data(ep, buffer[0] >> 4,
+- &buffer[1], buffer_length - 1);
++ int remaining = buffer_length;
++
++ /*
++	 * CME devices send sysex, song position pointer, system realtime
++ * and active sensing using CIN 0x0f, which in the standard
++ * is only intended for single byte unparsed data.
++ * So we need to interpret these here before sending them on.
++ * By default, we assume single byte data, which is true
++ * for system realtime (midi clock, start, stop and continue)
++ * and active sensing, and handle the other (known) cases
++ * separately.
++ * In contrast to the standard, CME does not split sysex
++ * into multiple 4-byte packets, but lumps everything together
++ * into one. In addition, CME can string multiple messages
++ * together in the same packet; pressing the Record button
++	 * on a UF6 sends a sysex message directly followed
++ * by a song position pointer in the same packet.
++ * For it to have any reasonable meaning, a sysex message
++ * needs to be at least 3 bytes in length (0xf0, id, 0xf7),
++ * corresponding to a packet size of 4 bytes, and the ones sent
++ * by CME devices are 6 or 7 bytes, making the packet fragments
++ * 7 or 8 bytes long (six or seven bytes plus preceding CN+CIN byte).
++ * For the other types, the packet size is always 4 bytes,
++ * as per the standard, with the data size being 3 for SPP
++ * and 1 for the others.
++ * Thus all packet fragments are at least 4 bytes long, so we can
++	 * skip anything that is shorter; this also conveniently skips
++ * packets with size 0, which CME devices continuously send when
++ * they have nothing better to do.
++ * Another quirk is that sometimes multiple messages are sent
++ * in the same packet. This has been observed for midi clock
++ * and active sensing i.e. 0x0f 0xf8 0x00 0x00 0x0f 0xfe 0x00 0x00,
++ * but also multiple note ons/offs, and control change together
++ * with MIDI clock. Similarly, some sysex messages are followed by
++ * the song position pointer in the same packet, and occasionally
++ * additionally by a midi clock or active sensing.
++ * We handle this by looping over all data and parsing it along the way.
++ */
++ while (remaining >= 4) {
++ int source_length = 4; /* default */
++
++ if ((buffer[0] & 0x0f) == 0x0f) {
++ int data_length = 1; /* default */
++
++ if (buffer[1] == 0xf0) {
++ /* Sysex: Find EOX and send on whole message. */
++ /* To kick off the search, skip the first
++ * two bytes (CN+CIN and SYSEX (0xf0).
++ */
++ uint8_t *tmp_buf = buffer + 2;
++ int tmp_length = remaining - 2;
++
++ while (tmp_length > 1 && *tmp_buf != 0xf7) {
++ tmp_buf++;
++ tmp_length--;
++ }
++ data_length = tmp_buf - buffer;
++ source_length = data_length + 1;
++ } else if (buffer[1] == 0xf2) {
++ /* Three byte song position pointer */
++ data_length = 3;
++ }
++ snd_usbmidi_input_data(ep, buffer[0] >> 4,
++ &buffer[1], data_length);
++ } else {
++ /* normal channel events */
++ snd_usbmidi_standard_input(ep, buffer, source_length);
++ }
++ buffer += source_length;
++ remaining -= source_length;
++ }
+ }
+
+ /*
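Walking the new loop over the midi-clock-plus-active-sensing packet quoted in
the comment (0x0f 0xf8 0x00 0x00 0x0f 0xfe 0x00 0x00) makes the framing
concrete: both fragments have CIN 0x0f and default to one data byte, so the
loop emits 0xf8 and then 0xfe as two separate messages. A standalone sketch of
just that walk, assuming the same framing and no sysex in the sample:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* midi clock + active sensing in one packet */
        uint8_t buf[] = { 0x0f, 0xf8, 0x00, 0x00, 0x0f, 0xfe, 0x00, 0x00 };
        uint8_t *p = buf;
        int remaining = sizeof(buf);

        while (remaining >= 4) {
            int source_length = 4; /* no sysex here, so always 4 */

            if ((p[0] & 0x0f) == 0x0f)
                printf("CIN 0x0f, status 0x%02x, 1 data byte\n", p[1]);
            p += source_length;
            remaining -= source_length;
        }
        return 0; /* prints two lines: 0xf8, then 0xfe */
    }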
+diff --git a/tools/lib/bpf/btf_dump.c b/tools/lib/bpf/btf_dump.c
+index 0a7327541c17f1..46cce18c830864 100644
+--- a/tools/lib/bpf/btf_dump.c
++++ b/tools/lib/bpf/btf_dump.c
+@@ -867,8 +867,8 @@ static void btf_dump_emit_bit_padding(const struct btf_dump *d,
+ } pads[] = {
+ {"long", d->ptr_sz * 8}, {"int", 32}, {"short", 16}, {"char", 8}
+ };
+- int new_off, pad_bits, bits, i;
+- const char *pad_type;
++ int new_off = 0, pad_bits = 0, bits, i;
++ const char *pad_type = NULL;
+
+ if (cur_off >= next_off)
+ return; /* no gap */
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 286a2c0af02aa8..127862fa05c619 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -3990,6 +3990,11 @@ static int validate_unret(struct objtool_file *file, struct instruction *insn)
+ WARN_INSN(insn, "RET before UNTRAIN");
+ return 1;
+
++ case INSN_CONTEXT_SWITCH:
++ if (insn_func(insn))
++ break;
++ return 0;
++
+ case INSN_NOP:
+ if (insn->retpoline_safe)
+ return 0;
+diff --git a/tools/power/cpupower/bench/parse.c b/tools/power/cpupower/bench/parse.c
+index e63dc11fa3a533..48e25be6e16356 100644
+--- a/tools/power/cpupower/bench/parse.c
++++ b/tools/power/cpupower/bench/parse.c
+@@ -120,6 +120,10 @@ FILE *prepare_output(const char *dirname)
+ struct config *prepare_default_config()
+ {
+ struct config *config = malloc(sizeof(struct config));
++ if (!config) {
++ perror("malloc");
++ return NULL;
++ }
+
+ dprintf("loading defaults\n");
+
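Returning NULL here only helps if callers check for it; the fix moves the
failure to a point where it can be reported cleanly. Caller-side shape
(a sketch, not code from the bench tool):

    struct config *config = prepare_default_config();

    if (!config)
        return EXIT_FAILURE; /* allocation failed, already reported */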
+diff --git a/tools/testing/ktest/ktest.pl b/tools/testing/ktest/ktest.pl
+index c76ad0be54e2ed..7e524601e01ada 100755
+--- a/tools/testing/ktest/ktest.pl
++++ b/tools/testing/ktest/ktest.pl
+@@ -4303,6 +4303,14 @@ if (defined($opt{"LOG_FILE"})) {
+ if ($opt{"CLEAR_LOG"}) {
+ unlink $opt{"LOG_FILE"};
+ }
++
++ if (! -e $opt{"LOG_FILE"} && $opt{"LOG_FILE"} =~ m,^(.*/),) {
++ my $dir = $1;
++ if (! -d $dir) {
++ mkpath($dir) or die "Failed to create directories '$dir': $!";
++ print "\nThe log directory $dir did not exist, so it was created.\n";
++ }
++ }
+ open(LOG, ">> $opt{LOG_FILE}") or die "Can't write to $opt{LOG_FILE}";
+ LOG->autoflush(1);
+ }
+diff --git a/tools/testing/selftests/futex/functional/futex_wait_wouldblock.c b/tools/testing/selftests/futex/functional/futex_wait_wouldblock.c
+index 7d7a6a06cdb75b..2d8230da906429 100644
+--- a/tools/testing/selftests/futex/functional/futex_wait_wouldblock.c
++++ b/tools/testing/selftests/futex/functional/futex_wait_wouldblock.c
+@@ -98,7 +98,7 @@ int main(int argc, char *argv[])
+ info("Calling futex_waitv on f1: %u @ %p with val=%u\n", f1, &f1, f1+1);
+ res = futex_waitv(&waitv, 1, 0, &to, CLOCK_MONOTONIC);
+ if (!res || errno != EWOULDBLOCK) {
+- ksft_test_result_pass("futex_waitv returned: %d %s\n",
++ ksft_test_result_fail("futex_waitv returned: %d %s\n",
+ res ? errno : res,
+ res ? strerror(errno) : "");
+ ret = RET_FAIL;
+diff --git a/tools/testing/selftests/landlock/base_test.c b/tools/testing/selftests/landlock/base_test.c
+index 1bc16fde2e8aea..4766f8fec9f605 100644
+--- a/tools/testing/selftests/landlock/base_test.c
++++ b/tools/testing/selftests/landlock/base_test.c
+@@ -98,10 +98,54 @@ TEST(abi_version)
+ ASSERT_EQ(EINVAL, errno);
+ }
+
++/*
++ * Old source trees might not have the set of Kselftest fixes related to kernel
++ * UAPI headers.
++ */
++#ifndef LANDLOCK_CREATE_RULESET_ERRATA
++#define LANDLOCK_CREATE_RULESET_ERRATA (1U << 1)
++#endif
++
++TEST(errata)
++{
++ const struct landlock_ruleset_attr ruleset_attr = {
++ .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE,
++ };
++ int errata;
++
++ errata = landlock_create_ruleset(NULL, 0,
++ LANDLOCK_CREATE_RULESET_ERRATA);
++ /* The errata bitmask will not be backported to tests. */
++ ASSERT_LE(0, errata);
++ TH_LOG("errata: 0x%x", errata);
++
++ ASSERT_EQ(-1, landlock_create_ruleset(&ruleset_attr, 0,
++ LANDLOCK_CREATE_RULESET_ERRATA));
++ ASSERT_EQ(EINVAL, errno);
++
++ ASSERT_EQ(-1, landlock_create_ruleset(NULL, sizeof(ruleset_attr),
++ LANDLOCK_CREATE_RULESET_ERRATA));
++ ASSERT_EQ(EINVAL, errno);
++
++ ASSERT_EQ(-1,
++ landlock_create_ruleset(&ruleset_attr, sizeof(ruleset_attr),
++ LANDLOCK_CREATE_RULESET_ERRATA));
++ ASSERT_EQ(EINVAL, errno);
++
++ ASSERT_EQ(-1, landlock_create_ruleset(
++ NULL, 0,
++ LANDLOCK_CREATE_RULESET_VERSION |
++ LANDLOCK_CREATE_RULESET_ERRATA));
++ ASSERT_EQ(-1, landlock_create_ruleset(NULL, 0,
++ LANDLOCK_CREATE_RULESET_ERRATA |
++ 1 << 31));
++ ASSERT_EQ(EINVAL, errno);
++}
++
+ /* Tests ordering of syscall argument checks. */
+ TEST(create_ruleset_checks_ordering)
+ {
+- const int last_flag = LANDLOCK_CREATE_RULESET_VERSION;
++ const int last_flag = LANDLOCK_CREATE_RULESET_ERRATA;
+ const int invalid_flag = last_flag << 1;
+ int ruleset_fd;
+ const struct landlock_ruleset_attr ruleset_attr = {
+diff --git a/tools/testing/selftests/landlock/common.h b/tools/testing/selftests/landlock/common.h
+index 40a2def50b837e..60afc1ce11bcd7 100644
+--- a/tools/testing/selftests/landlock/common.h
++++ b/tools/testing/selftests/landlock/common.h
+@@ -68,6 +68,7 @@ static void _init_caps(struct __test_metadata *const _metadata, bool drop_all)
+ CAP_MKNOD,
+ CAP_NET_ADMIN,
+ CAP_NET_BIND_SERVICE,
++ CAP_SETUID,
+ CAP_SYS_ADMIN,
+ CAP_SYS_CHROOT,
+ /* clang-format on */
+diff --git a/tools/testing/selftests/landlock/scoped_signal_test.c b/tools/testing/selftests/landlock/scoped_signal_test.c
+index 475ee62a832d6d..d8bf33417619f6 100644
+--- a/tools/testing/selftests/landlock/scoped_signal_test.c
++++ b/tools/testing/selftests/landlock/scoped_signal_test.c
+@@ -249,47 +249,67 @@ TEST_F(scoped_domains, check_access_signal)
+ _metadata->exit_code = KSFT_FAIL;
+ }
+
+-static int thread_pipe[2];
+-
+ enum thread_return {
+ THREAD_INVALID = 0,
+ THREAD_SUCCESS = 1,
+ THREAD_ERROR = 2,
++ THREAD_TEST_FAILED = 3,
+ };
+
+-void *thread_func(void *arg)
++static void *thread_sync(void *arg)
+ {
++ const int pipe_read = *(int *)arg;
+ char buf;
+
+- if (read(thread_pipe[0], &buf, 1) != 1)
++ if (read(pipe_read, &buf, 1) != 1)
+ return (void *)THREAD_ERROR;
+
+ return (void *)THREAD_SUCCESS;
+ }
+
+-TEST(signal_scoping_threads)
++TEST(signal_scoping_thread_before)
+ {
+- pthread_t no_sandbox_thread, scoped_thread;
++ pthread_t no_sandbox_thread;
+ enum thread_return ret = THREAD_INVALID;
++ int thread_pipe[2];
+
+ drop_caps(_metadata);
+ ASSERT_EQ(0, pipe2(thread_pipe, O_CLOEXEC));
+
+- ASSERT_EQ(0,
+- pthread_create(&no_sandbox_thread, NULL, thread_func, NULL));
++ ASSERT_EQ(0, pthread_create(&no_sandbox_thread, NULL, thread_sync,
++ &thread_pipe[0]));
+
+- /* Restricts the domain after creating the first thread. */
++ /* Enforces restriction after creating the thread. */
+ create_scoped_domain(_metadata, LANDLOCK_SCOPE_SIGNAL);
+
+- ASSERT_EQ(EPERM, pthread_kill(no_sandbox_thread, 0));
+- ASSERT_EQ(1, write(thread_pipe[1], ".", 1));
+-
+- ASSERT_EQ(0, pthread_create(&scoped_thread, NULL, thread_func, NULL));
+- ASSERT_EQ(0, pthread_kill(scoped_thread, 0));
+- ASSERT_EQ(1, write(thread_pipe[1], ".", 1));
++ EXPECT_EQ(0, pthread_kill(no_sandbox_thread, 0));
++ EXPECT_EQ(1, write(thread_pipe[1], ".", 1));
+
+ EXPECT_EQ(0, pthread_join(no_sandbox_thread, (void **)&ret));
+ EXPECT_EQ(THREAD_SUCCESS, ret);
++
++ EXPECT_EQ(0, close(thread_pipe[0]));
++ EXPECT_EQ(0, close(thread_pipe[1]));
++}
++
++TEST(signal_scoping_thread_after)
++{
++ pthread_t scoped_thread;
++ enum thread_return ret = THREAD_INVALID;
++ int thread_pipe[2];
++
++ drop_caps(_metadata);
++ ASSERT_EQ(0, pipe2(thread_pipe, O_CLOEXEC));
++
++ /* Enforces restriction before creating the thread. */
++ create_scoped_domain(_metadata, LANDLOCK_SCOPE_SIGNAL);
++
++ ASSERT_EQ(0, pthread_create(&scoped_thread, NULL, thread_sync,
++ &thread_pipe[0]));
++
++ EXPECT_EQ(0, pthread_kill(scoped_thread, 0));
++ EXPECT_EQ(1, write(thread_pipe[1], ".", 1));
++
+ EXPECT_EQ(0, pthread_join(scoped_thread, (void **)&ret));
+ EXPECT_EQ(THREAD_SUCCESS, ret);
+
+@@ -297,6 +317,64 @@ TEST(signal_scoping_threads)
+ EXPECT_EQ(0, close(thread_pipe[1]));
+ }
+
++struct thread_setuid_args {
++ int pipe_read, new_uid;
++};
++
++void *thread_setuid(void *ptr)
++{
++ const struct thread_setuid_args *arg = ptr;
++ char buf;
++
++ if (read(arg->pipe_read, &buf, 1) != 1)
++ return (void *)THREAD_ERROR;
++
++ /* libc's setuid() should update all thread's credentials. */
++ if (getuid() != arg->new_uid)
++ return (void *)THREAD_TEST_FAILED;
++
++ return (void *)THREAD_SUCCESS;
++}
++
++TEST(signal_scoping_thread_setuid)
++{
++ struct thread_setuid_args arg;
++ pthread_t no_sandbox_thread;
++ enum thread_return ret = THREAD_INVALID;
++ int pipe_parent[2];
++ int prev_uid;
++
++ disable_caps(_metadata);
++
++ /* This test does not need to be run as root. */
++ prev_uid = getuid();
++ arg.new_uid = prev_uid + 1;
++ EXPECT_LT(0, arg.new_uid);
++
++ ASSERT_EQ(0, pipe2(pipe_parent, O_CLOEXEC));
++ arg.pipe_read = pipe_parent[0];
++
++ /* Capabilities must be set before creating a new thread. */
++ set_cap(_metadata, CAP_SETUID);
++ ASSERT_EQ(0, pthread_create(&no_sandbox_thread, NULL, thread_setuid,
++ &arg));
++
++ /* Enforces restriction after creating the thread. */
++ create_scoped_domain(_metadata, LANDLOCK_SCOPE_SIGNAL);
++
++ EXPECT_NE(arg.new_uid, getuid());
++ EXPECT_EQ(0, setuid(arg.new_uid));
++ EXPECT_EQ(arg.new_uid, getuid());
++ EXPECT_EQ(1, write(pipe_parent[1], ".", 1));
++
++ EXPECT_EQ(0, pthread_join(no_sandbox_thread, (void **)&ret));
++ EXPECT_EQ(THREAD_SUCCESS, ret);
++
++ clear_cap(_metadata, CAP_SETUID);
++ EXPECT_EQ(0, close(pipe_parent[0]));
++ EXPECT_EQ(0, close(pipe_parent[1]));
++}
++
+ const short backlog = 10;
+
+ static volatile sig_atomic_t signal_received;
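The new thread_setuid test leans on a libc detail that is easy to miss: the
kernel keeps credentials per task, so glibc implements setuid() by internally
signalling every thread of the process (the setxid broadcast) until all of
them have switched uid. That cross-thread signalling is exactly what a
Landlock signal scope must not break. A minimal unprivileged demo of the
visible behaviour; setuid(getuid()) is a no-op that still takes the broadcast
path:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static void *worker(void *unused)
    {
        (void)unused;
        printf("thread sees uid %d\n", (int)getuid());
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        if (setuid(getuid())) /* glibc broadcasts this to all threads */
            perror("setuid");
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        return 0;
    }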
+diff --git a/tools/testing/selftests/net/mptcp/mptcp_connect.c b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+index d240d02fa443a1..c83a8b47bbdfa5 100644
+--- a/tools/testing/selftests/net/mptcp/mptcp_connect.c
++++ b/tools/testing/selftests/net/mptcp/mptcp_connect.c
+@@ -1270,7 +1270,7 @@ int main_loop(void)
+
+ if (cfg_input && cfg_sockopt_types.mptfo) {
+ fd_in = open(cfg_input, O_RDONLY);
+- if (fd < 0)
++ if (fd_in < 0)
+ xerror("can't open %s:%d", cfg_input, errno);
+ }
+
+@@ -1293,13 +1293,13 @@ int main_loop(void)
+
+ if (cfg_input && !cfg_sockopt_types.mptfo) {
+ fd_in = open(cfg_input, O_RDONLY);
+- if (fd < 0)
++ if (fd_in < 0)
+ xerror("can't open %s:%d", cfg_input, errno);
+ }
+
+ ret = copyfd_io(fd_in, fd, 1, 0, &winfo);
+ if (ret)
+- return ret;
++ goto out;
+
+ if (cfg_truncate > 0) {
+ shutdown(fd, SHUT_WR);
+@@ -1320,7 +1320,10 @@ int main_loop(void)
+ close(fd);
+ }
+
+- return 0;
++out:
++ if (cfg_input)
++ close(fd_in);
++ return ret;
+ }
+
+ int parse_proto(const char *proto)
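The mptcp_connect changes are the classic single-exit cleanup idiom: once the
function owns fd_in, every failure after that point must flow through one
label so the descriptor cannot leak, and the success return reuses the same
path. A compact sketch of the pattern; do_work() is a stand-in, not a
function from the selftest:

    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int do_work(int fd) { (void)fd; return 0; } /* stand-in */

    static int process(const char *path)
    {
        int ret;
        int fd = open(path, O_RDONLY);

        if (fd < 0)
            return -1;

        ret = do_work(fd);
        if (ret)
            goto out;

        /* ... further steps, each jumping to out on error ... */
    out:
        close(fd); /* runs on success and on every error path */
        return ret;
    }

    int main(void)
    {
        return process("/dev/null") ? EXIT_FAILURE : EXIT_SUCCESS;
    }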
+diff --git a/virt/kvm/Kconfig b/virt/kvm/Kconfig
+index fd6a3010afa833..1f51a4d906b877 100644
+--- a/virt/kvm/Kconfig
++++ b/virt/kvm/Kconfig
+@@ -75,7 +75,7 @@ config KVM_COMPAT
+ depends on KVM && COMPAT && !(S390 || ARM64 || RISCV)
+
+ config HAVE_KVM_IRQ_BYPASS
+- bool
++ tristate
+ select IRQ_BYPASS_MANAGER
+
+ config HAVE_KVM_VCPU_ASYNC_IOCTL
+diff --git a/virt/kvm/eventfd.c b/virt/kvm/eventfd.c
+index 6b390b622b728e..929c7980fda6a4 100644
+--- a/virt/kvm/eventfd.c
++++ b/virt/kvm/eventfd.c
+@@ -149,7 +149,7 @@ irqfd_shutdown(struct work_struct *work)
+ /*
+ * It is now safe to release the object's resources
+ */
+-#ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
++#if IS_ENABLED(CONFIG_HAVE_KVM_IRQ_BYPASS)
+ irq_bypass_unregister_consumer(&irqfd->consumer);
+ #endif
+ eventfd_ctx_put(irqfd->eventfd);
+@@ -274,7 +274,7 @@ static void irqfd_update(struct kvm *kvm, struct kvm_kernel_irqfd *irqfd)
+ write_seqcount_end(&irqfd->irq_entry_sc);
+ }
+
+-#ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
++#if IS_ENABLED(CONFIG_HAVE_KVM_IRQ_BYPASS)
+ void __attribute__((weak)) kvm_arch_irq_bypass_stop(
+ struct irq_bypass_consumer *cons)
+ {
+@@ -425,7 +425,7 @@ kvm_irqfd_assign(struct kvm *kvm, struct kvm_irqfd *args)
+ if (events & EPOLLIN)
+ schedule_work(&irqfd->inject);
+
+-#ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
++#if IS_ENABLED(CONFIG_HAVE_KVM_IRQ_BYPASS)
+ if (kvm_arch_has_irq_bypass()) {
+ irqfd->consumer.token = (void *)irqfd->eventfd;
+ irqfd->consumer.add_producer = kvm_arch_irq_bypass_add_producer;
+@@ -618,14 +618,14 @@ void kvm_irq_routing_update(struct kvm *kvm)
+ spin_lock_irq(&kvm->irqfds.lock);
+
+ list_for_each_entry(irqfd, &kvm->irqfds.items, list) {
+-#ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
++#if IS_ENABLED(CONFIG_HAVE_KVM_IRQ_BYPASS)
+ /* Under irqfds.lock, so can read irq_entry safely */
+ struct kvm_kernel_irq_routing_entry old = irqfd->irq_entry;
+ #endif
+
+ irqfd_update(kvm, irqfd);
+
+-#ifdef CONFIG_HAVE_KVM_IRQ_BYPASS
++#if IS_ENABLED(CONFIG_HAVE_KVM_IRQ_BYPASS)
+ if (irqfd->producer &&
+ kvm_arch_irqfd_route_changed(&old, &irqfd->irq_entry)) {
+ int ret = kvm_arch_update_irqfd_routing(
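The #ifdef-to-IS_ENABLED() conversion is forced by the Kconfig hunk above:
once HAVE_KVM_IRQ_BYPASS is tristate, a =m build defines
CONFIG_HAVE_KVM_IRQ_BYPASS_MODULE rather than CONFIG_HAVE_KVM_IRQ_BYPASS, so a
plain #ifdef silently compiles the bypass hooks out. A compact reproduction of
the IS_ENABLED() machinery from include/linux/kconfig.h, with a stand-in
CONFIG_FOO symbol, shows the difference:

    #include <stdio.h>

    #define __ARG_PLACEHOLDER_1 0,
    #define __take_second_arg(__ignored, val, ...) val
    #define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
    #define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
    #define __is_defined(x) ___is_defined(x)
    #define IS_ENABLED(option) \
        (__is_defined(option) || __is_defined(option##_MODULE))

    #define CONFIG_FOO_MODULE 1 /* what CONFIG_FOO=m generates */

    int main(void)
    {
    #ifdef CONFIG_FOO
        puts("ifdef sees it"); /* never printed for a =m build */
    #endif
        if (IS_ENABLED(CONFIG_FOO)) /* evaluates to 1 for =y and =m */
            puts("IS_ENABLED sees it");
        return 0;
    }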
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-04-22 18:48 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-04-22 18:48 UTC (permalink / raw
To: gentoo-commits
commit: f9872df78d5468a6053ce8e99dbdd4a344188222
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 22 18:48:35 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Apr 22 18:48:35 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f9872df7
x86/insn_decoder_test: allow longer symbol-names
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 ++
...sn-decoder-test-allow-longer-symbol-names.patch | 49 ++++++++++++++++++++++
2 files changed, 53 insertions(+)
diff --git a/0000_README b/0000_README
index 6b594792..b04d2cdd 100644
--- a/0000_README
+++ b/0000_README
@@ -155,6 +155,10 @@ Patch: 1730_parisc-Disable-prctl.patch
From: https://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux.git
Desc: prctl: Temporarily disable prctl(PR_SET_MDWE) on parisc
+Patch: 1740_x86-insn-decoder-test-allow-longer-symbol-names.patch
+From: https://gitlab.com/cki-project/kernel-ark/-/commit/8d4a52c3921d278f27241fc0c6949d8fdc13a7f5
+Desc: x86/insn_decoder_test: allow longer symbol-names
+
Patch: 2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
From: https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
Desc: Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
diff --git a/1740_x86-insn-decoder-test-allow-longer-symbol-names.patch b/1740_x86-insn-decoder-test-allow-longer-symbol-names.patch
new file mode 100644
index 00000000..70c706ba
--- /dev/null
+++ b/1740_x86-insn-decoder-test-allow-longer-symbol-names.patch
@@ -0,0 +1,49 @@
+From 8d4a52c3921d278f27241fc0c6949d8fdc13a7f5 Mon Sep 17 00:00:00 2001
+From: David Rheinsberg <david@readahead.eu>
+Date: Tue, 24 Jan 2023 12:04:59 +0100
+Subject: [PATCH] x86/insn_decoder_test: allow longer symbol-names
+
+Increase the allowed line-length of the insn-decoder-test to 4k to allow
+for symbol-names longer than 256 characters.
+
+The insn-decoder-test takes objdump output as input, which may contain
+symbol-names as instruction arguments. With rust-code entering the
+kernel, those symbol-names will include mangled-symbols which might
+exceed the current line-length-limit of the tool.
+
+By bumping the line-length-limit of the tool to 4k, we get a reasonable
+buffer for all objdump outputs I have seen so far. Unfortunately, ELF
+symbol-names are not restricted in length, so technically this might
+still end up failing if we encounter longer names in the future.
+
+My compile-failure looks like this:
+
+ arch/x86/tools/insn_decoder_test: error: malformed line 1152000:
+ tBb_+0xf2>
+
+..which overflowed by 10 characters reading this line:
+
+ ffffffff81458193: 74 3d je ffffffff814581d2 <_RNvXse_NtNtNtCshGpAVYOtgW1_4core4iter8adapters7flattenINtB5_13FlattenCompatINtNtB7_3map3MapNtNtNtBb_3str4iter5CharsNtB1v_17CharEscapeDefaultENtNtBb_4char13EscapeDefaultENtNtBb_3fmt5Debug3fmtBb_+0xf2>
+
+Signed-off-by: David Rheinsberg <david@readahead.eu>
+Signed-off-by: Scott Weaver <scweaver@redhat.com>
+---
+ arch/x86/tools/insn_decoder_test.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/x86/tools/insn_decoder_test.c b/arch/x86/tools/insn_decoder_test.c
+index 472540aeabc23..366e07546344b 100644
+--- a/arch/x86/tools/insn_decoder_test.c
++++ b/arch/x86/tools/insn_decoder_test.c
+@@ -106,7 +106,7 @@ static void parse_args(int argc, char **argv)
+ }
+ }
+
+-#define BUFSIZE 256
++#define BUFSIZE 4096
+
+ int main(int argc, char **argv)
+ {
+--
+GitLab
+
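The failure mode described above is easy to reproduce with any fixed-size line
reader: fgets() hands back the first BUFSIZE-1 bytes of an overlong line, and
the leftover tail then parses as a second, malformed "line". A small sketch of
detecting the truncation; this is not the decoder test itself:

    #include <stdio.h>
    #include <string.h>

    #define BUFSIZE 4096

    int main(void)
    {
        char line[BUFSIZE];

        while (fgets(line, sizeof(line), stdin)) {
            /* no newline and not at EOF: the line was split */
            if (!strchr(line, '\n') && !feof(stdin))
                fprintf(stderr, "line longer than %d bytes\n", BUFSIZE);
        }
        return 0;
    }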
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-04-25 11:47 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-04-25 11:47 UTC (permalink / raw
To: gentoo-commits
commit: a8bb1202a1b4d7cdd7d8e5bf773b6e5b93796b80
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 25 11:47:35 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr 25 11:47:35 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a8bb1202
Linux patch 6.12.25
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 4 +
1024_linux-6.12.25.patch | 8312 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 8316 insertions(+)
diff --git a/0000_README b/0000_README
index b04d2cdd..e07d8d2e 100644
--- a/0000_README
+++ b/0000_README
@@ -139,6 +139,10 @@ Patch: 1023_linux-6.12.24.patch
From: https://www.kernel.org
Desc: Linux 6.12.24
+Patch: 1024_linux-6.12.25.patch
+From: https://www.kernel.org
+Desc: Linux 6.12.25
+
Patch: 1500_fortify-copy-size-value-range-tracking-fix.patch
From: https://git.kernel.org/
Desc: fortify: Hide run-time copy size from value range tracking
diff --git a/1024_linux-6.12.25.patch b/1024_linux-6.12.25.patch
new file mode 100644
index 00000000..0bba333c
--- /dev/null
+++ b/1024_linux-6.12.25.patch
@@ -0,0 +1,8312 @@
+diff --git a/Documentation/arch/arm64/booting.rst b/Documentation/arch/arm64/booting.rst
+index b57776a68f156d..15bcd1b4003a73 100644
+--- a/Documentation/arch/arm64/booting.rst
++++ b/Documentation/arch/arm64/booting.rst
+@@ -285,6 +285,12 @@ Before jumping into the kernel, the following conditions must be met:
+
+ - SCR_EL3.FGTEn (bit 27) must be initialised to 0b1.
+
++ For CPUs with the Fine Grained Traps 2 (FEAT_FGT2) extension present:
++
++ - If EL3 is present and the kernel is entered at EL2:
++
++ - SCR_EL3.FGTEn2 (bit 59) must be initialised to 0b1.
++
+ For CPUs with support for HCRX_EL2 (FEAT_HCX) present:
+
+ - If EL3 is present and the kernel is entered at EL2:
+@@ -379,6 +385,22 @@ Before jumping into the kernel, the following conditions must be met:
+
+ - SMCR_EL2.EZT0 (bit 30) must be initialised to 0b1.
+
++ For CPUs with the Performance Monitors Extension (FEAT_PMUv3p9):
++
++ - If EL3 is present:
++
++ - MDCR_EL3.EnPM2 (bit 7) must be initialised to 0b1.
++
++ - If the kernel is entered at EL1 and EL2 is present:
++
++ - HDFGRTR2_EL2.nPMICNTR_EL0 (bit 2) must be initialised to 0b1.
++ - HDFGRTR2_EL2.nPMICFILTR_EL0 (bit 3) must be initialised to 0b1.
++ - HDFGRTR2_EL2.nPMUACR_EL1 (bit 4) must be initialised to 0b1.
++
++ - HDFGWTR2_EL2.nPMICNTR_EL0 (bit 2) must be initialised to 0b1.
++ - HDFGWTR2_EL2.nPMICFILTR_EL0 (bit 3) must be initialised to 0b1.
++ - HDFGWTR2_EL2.nPMUACR_EL1 (bit 4) must be initialised to 0b1.
++
+ For CPUs with Memory Copy and Memory Set instructions (FEAT_MOPS):
+
+ - If the kernel is entered at EL1 and EL2 is present:
+diff --git a/Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml b/Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml
+index 31295be910130c..234089b5954ddb 100644
+--- a/Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml
++++ b/Documentation/devicetree/bindings/soc/fsl/fsl,ls1028a-reset.yaml
+@@ -7,7 +7,7 @@ $schema: http://devicetree.org/meta-schemas/core.yaml#
+ title: Freescale Layerscape Reset Registers Module
+
+ maintainers:
+- - Frank Li
++ - Frank Li <Frank.Li@nxp.com>
+
+ description:
+ Reset Module includes chip reset, service processor control and Reset Control
+diff --git a/Documentation/netlink/specs/ovs_vport.yaml b/Documentation/netlink/specs/ovs_vport.yaml
+index 86ba9ac2a52103..b538bb99ee9b5f 100644
+--- a/Documentation/netlink/specs/ovs_vport.yaml
++++ b/Documentation/netlink/specs/ovs_vport.yaml
+@@ -123,12 +123,12 @@ attribute-sets:
+
+ operations:
+ name-prefix: ovs-vport-cmd-
++ fixed-header: ovs-header
+ list:
+ -
+ name: new
+ doc: Create a new OVS vport
+ attribute-set: vport
+- fixed-header: ovs-header
+ do:
+ request:
+ attributes:
+@@ -141,7 +141,6 @@ operations:
+ name: del
+ doc: Delete existing OVS vport from a data path
+ attribute-set: vport
+- fixed-header: ovs-header
+ do:
+ request:
+ attributes:
+@@ -152,7 +151,6 @@ operations:
+ name: get
+ doc: Get / dump OVS vport configuration and state
+ attribute-set: vport
+- fixed-header: ovs-header
+ do: &vport-get-op
+ request:
+ attributes:
+diff --git a/Documentation/netlink/specs/rt_link.yaml b/Documentation/netlink/specs/rt_link.yaml
+index 0c4d5d40cae905..a048fc30389d68 100644
+--- a/Documentation/netlink/specs/rt_link.yaml
++++ b/Documentation/netlink/specs/rt_link.yaml
+@@ -1094,11 +1094,10 @@ attribute-sets:
+ -
+ name: prop-list
+ type: nest
+- nested-attributes: link-attrs
++ nested-attributes: prop-list-link-attrs
+ -
+ name: alt-ifname
+ type: string
+- multi-attr: true
+ -
+ name: perm-address
+ type: binary
+@@ -1137,6 +1136,13 @@ attribute-sets:
+ name: dpll-pin
+ type: nest
+ nested-attributes: link-dpll-pin-attrs
++ -
++ name: prop-list-link-attrs
++ subset-of: link-attrs
++ attributes:
++ -
++ name: alt-ifname
++ multi-attr: true
+ -
+ name: af-spec-attrs
+ attributes:
+@@ -2071,9 +2077,10 @@ attribute-sets:
+ type: u32
+ -
+ name: mctp-attrs
++ name-prefix: ifla-mctp-
+ attributes:
+ -
+- name: mctp-net
++ name: net
+ type: u32
+ -
+ name: stats-attrs
+@@ -2319,7 +2326,6 @@ operations:
+ - min-mtu
+ - max-mtu
+ - prop-list
+- - alt-ifname
+ - perm-address
+ - proto-down-reason
+ - parent-dev-name
+diff --git a/Documentation/wmi/devices/msi-wmi-platform.rst b/Documentation/wmi/devices/msi-wmi-platform.rst
+index 31a13694289238..73197b31926a57 100644
+--- a/Documentation/wmi/devices/msi-wmi-platform.rst
++++ b/Documentation/wmi/devices/msi-wmi-platform.rst
+@@ -138,6 +138,10 @@ input data, the meaning of which depends on the subfeature being accessed.
+ The output buffer contains a single byte which signals success or failure (``0x00`` on failure)
+ and 31 bytes of output data, the meaning of which depends on the subfeature being accessed.
+
++.. note::
++ The ACPI control method responsible for handling the WMI method calls is not thread-safe.
++ This is a firmware bug that needs to be handled inside the driver itself.
++
+ WMI method Get_EC()
+ -------------------
+
+diff --git a/Makefile b/Makefile
+index e1fa425089c220..93f4ba25a45336 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 6
+ PATCHLEVEL = 12
+-SUBLEVEL = 24
++SUBLEVEL = 25
+ EXTRAVERSION =
+ NAME = Baby Opossum Posse
+
+@@ -455,7 +455,6 @@ export rust_common_flags := --edition=2021 \
+ -Wclippy::ignored_unit_patterns \
+ -Wclippy::mut_mut \
+ -Wclippy::needless_bitwise_bool \
+- -Wclippy::needless_continue \
+ -Aclippy::needless_lifetimes \
+ -Wclippy::no_mangle_with_rust_abi \
+ -Wclippy::undocumented_unsafe_blocks \
+@@ -1016,6 +1015,9 @@ endif
+ # Ensure compilers do not transform certain loops into calls to wcslen()
+ KBUILD_CFLAGS += -fno-builtin-wcslen
+
++# Ensure compilers do not transform certain loops into calls to wcslen()
++KBUILD_CFLAGS += -fno-builtin-wcslen
++
+ # change __FILE__ to the relative path from the srctree
+ KBUILD_CPPFLAGS += $(call cc-option,-fmacro-prefix-map=$(srctree)/=)
+
+diff --git a/arch/arm64/include/asm/el2_setup.h b/arch/arm64/include/asm/el2_setup.h
+index e0ffdf13a18b3f..bdbe9e08664a69 100644
+--- a/arch/arm64/include/asm/el2_setup.h
++++ b/arch/arm64/include/asm/el2_setup.h
+@@ -215,6 +215,30 @@
+ .Lskip_fgt_\@:
+ .endm
+
++.macro __init_el2_fgt2
++ mrs x1, id_aa64mmfr0_el1
++ ubfx x1, x1, #ID_AA64MMFR0_EL1_FGT_SHIFT, #4
++ cmp x1, #ID_AA64MMFR0_EL1_FGT_FGT2
++ b.lt .Lskip_fgt2_\@
++
++ mov x0, xzr
++ mrs x1, id_aa64dfr0_el1
++ ubfx x1, x1, #ID_AA64DFR0_EL1_PMUVer_SHIFT, #4
++ cmp x1, #ID_AA64DFR0_EL1_PMUVer_V3P9
++ b.lt .Lskip_pmuv3p9_\@
++
++ orr x0, x0, #HDFGRTR2_EL2_nPMICNTR_EL0
++ orr x0, x0, #HDFGRTR2_EL2_nPMICFILTR_EL0
++ orr x0, x0, #HDFGRTR2_EL2_nPMUACR_EL1
++.Lskip_pmuv3p9_\@:
++ msr_s SYS_HDFGRTR2_EL2, x0
++ msr_s SYS_HDFGWTR2_EL2, x0
++ msr_s SYS_HFGRTR2_EL2, xzr
++ msr_s SYS_HFGWTR2_EL2, xzr
++ msr_s SYS_HFGITR2_EL2, xzr
++.Lskip_fgt2_\@:
++.endm
++
+ .macro __init_el2_nvhe_prepare_eret
+ mov x0, #INIT_PSTATE_EL1
+ msr spsr_el2, x0
+@@ -240,6 +264,7 @@
+ __init_el2_nvhe_idregs
+ __init_el2_cptr
+ __init_el2_fgt
++ __init_el2_fgt2
+ .endm
+
+ #ifndef __KVM_NVHE_HYPERVISOR__
+diff --git a/arch/arm64/tools/sysreg b/arch/arm64/tools/sysreg
+index 8d637ac4b7c6b9..362bcfa0aed18f 100644
+--- a/arch/arm64/tools/sysreg
++++ b/arch/arm64/tools/sysreg
+@@ -1238,6 +1238,7 @@ UnsignedEnum 11:8 PMUVer
+ 0b0110 V3P5
+ 0b0111 V3P7
+ 0b1000 V3P8
++ 0b1001 V3P9
+ 0b1111 IMP_DEF
+ EndEnum
+ UnsignedEnum 7:4 TraceVer
+@@ -1556,6 +1557,7 @@ EndEnum
+ UnsignedEnum 59:56 FGT
+ 0b0000 NI
+ 0b0001 IMP
++ 0b0010 FGT2
+ EndEnum
+ Res0 55:48
+ UnsignedEnum 47:44 EXS
+@@ -1617,6 +1619,7 @@ Enum 3:0 PARANGE
+ 0b0100 44
+ 0b0101 48
+ 0b0110 52
++ 0b0111 56
+ EndEnum
+ EndSysreg
+
+@@ -2463,6 +2466,101 @@ Field 1 ICIALLU
+ Field 0 ICIALLUIS
+ EndSysreg
+
++Sysreg HDFGRTR2_EL2 3 4 3 1 0
++Res0 63:25
++Field 24 nPMBMAR_EL1
++Field 23 nMDSTEPOP_EL1
++Field 22 nTRBMPAM_EL1
++Res0 21
++Field 20 nTRCITECR_EL1
++Field 19 nPMSDSFR_EL1
++Field 18 nSPMDEVAFF_EL1
++Field 17 nSPMID
++Field 16 nSPMSCR_EL1
++Field 15 nSPMACCESSR_EL1
++Field 14 nSPMCR_EL0
++Field 13 nSPMOVS
++Field 12 nSPMINTEN
++Field 11 nSPMCNTEN
++Field 10 nSPMSELR_EL0
++Field 9 nSPMEVTYPERn_EL0
++Field 8 nSPMEVCNTRn_EL0
++Field 7 nPMSSCR_EL1
++Field 6 nPMSSDATA
++Field 5 nMDSELR_EL1
++Field 4 nPMUACR_EL1
++Field 3 nPMICFILTR_EL0
++Field 2 nPMICNTR_EL0
++Field 1 nPMIAR_EL1
++Field 0 nPMECR_EL1
++EndSysreg
++
++Sysreg HDFGWTR2_EL2 3 4 3 1 1
++Res0 63:25
++Field 24 nPMBMAR_EL1
++Field 23 nMDSTEPOP_EL1
++Field 22 nTRBMPAM_EL1
++Field 21 nPMZR_EL0
++Field 20 nTRCITECR_EL1
++Field 19 nPMSDSFR_EL1
++Res0 18:17
++Field 16 nSPMSCR_EL1
++Field 15 nSPMACCESSR_EL1
++Field 14 nSPMCR_EL0
++Field 13 nSPMOVS
++Field 12 nSPMINTEN
++Field 11 nSPMCNTEN
++Field 10 nSPMSELR_EL0
++Field 9 nSPMEVTYPERn_EL0
++Field 8 nSPMEVCNTRn_EL0
++Field 7 nPMSSCR_EL1
++Res0 6
++Field 5 nMDSELR_EL1
++Field 4 nPMUACR_EL1
++Field 3 nPMICFILTR_EL0
++Field 2 nPMICNTR_EL0
++Field 1 nPMIAR_EL1
++Field 0 nPMECR_EL1
++EndSysreg
++
++Sysreg HFGRTR2_EL2 3 4 3 1 2
++Res0 63:15
++Field 14 nACTLRALIAS_EL1
++Field 13 nACTLRMASK_EL1
++Field 12 nTCR2ALIAS_EL1
++Field 11 nTCRALIAS_EL1
++Field 10 nSCTLRALIAS2_EL1
++Field 9 nSCTLRALIAS_EL1
++Field 8 nCPACRALIAS_EL1
++Field 7 nTCR2MASK_EL1
++Field 6 nTCRMASK_EL1
++Field 5 nSCTLR2MASK_EL1
++Field 4 nSCTLRMASK_EL1
++Field 3 nCPACRMASK_EL1
++Field 2 nRCWSMASK_EL1
++Field 1 nERXGSR_EL1
++Field 0 nPFAR_EL1
++EndSysreg
++
++Sysreg HFGWTR2_EL2 3 4 3 1 3
++Res0 63:15
++Field 14 nACTLRALIAS_EL1
++Field 13 nACTLRMASK_EL1
++Field 12 nTCR2ALIAS_EL1
++Field 11 nTCRALIAS_EL1
++Field 10 nSCTLRALIAS2_EL1
++Field 9 nSCTLRALIAS_EL1
++Field 8 nCPACRALIAS_EL1
++Field 7 nTCR2MASK_EL1
++Field 6 nTCRMASK_EL1
++Field 5 nSCTLR2MASK_EL1
++Field 4 nSCTLRMASK_EL1
++Field 3 nCPACRMASK_EL1
++Field 2 nRCWSMASK_EL1
++Res0 1
++Field 0 nPFAR_EL1
++EndSysreg
++
+ Sysreg HDFGRTR_EL2 3 4 3 1 4
+ Field 63 PMBIDR_EL1
+ Field 62 nPMSNEVFR_EL1
+@@ -2635,6 +2733,12 @@ Field 1 AMEVCNTR00_EL0
+ Field 0 AMCNTEN0
+ EndSysreg
+
++Sysreg HFGITR2_EL2 3 4 3 1 7
++Res0 63:2
++Field 1 nDCCIVAPS
++Field 0 TSBCSYNC
++EndSysreg
++
+ Sysreg ZCR_EL2 3 4 1 2 0
+ Fields ZCR_ELx
+ EndSysreg
+diff --git a/arch/loongarch/kernel/acpi.c b/arch/loongarch/kernel/acpi.c
+index 382a09a7152c30..1120ac2824f6e8 100644
+--- a/arch/loongarch/kernel/acpi.c
++++ b/arch/loongarch/kernel/acpi.c
+@@ -249,18 +249,6 @@ static __init int setup_node(int pxm)
+ return acpi_map_pxm_to_node(pxm);
+ }
+
+-/*
+- * Callback for SLIT parsing. pxm_to_node() returns NUMA_NO_NODE for
+- * I/O localities since SRAT does not list them. I/O localities are
+- * not supported at this point.
+- */
+-unsigned int numa_distance_cnt;
+-
+-static inline unsigned int get_numa_distances_cnt(struct acpi_table_slit *slit)
+-{
+- return slit->locality_count;
+-}
+-
+ void __init numa_set_distance(int from, int to, int distance)
+ {
+ if ((u8)distance != distance || (from == to && distance != LOCAL_DISTANCE)) {
+diff --git a/arch/mips/dec/prom/init.c b/arch/mips/dec/prom/init.c
+index cb12eb211a49e0..8d74d7d6c05b47 100644
+--- a/arch/mips/dec/prom/init.c
++++ b/arch/mips/dec/prom/init.c
+@@ -42,7 +42,7 @@ int (*__pmax_close)(int);
+ * Detect which PROM the DECSTATION has, and set the callback vectors
+ * appropriately.
+ */
+-void __init which_prom(s32 magic, s32 *prom_vec)
++static void __init which_prom(s32 magic, s32 *prom_vec)
+ {
+ /*
+ * No sign of the REX PROM's magic number means we assume a non-REX
+diff --git a/arch/mips/include/asm/ds1287.h b/arch/mips/include/asm/ds1287.h
+index 46cfb01f9a14e7..51cb61fd4c0330 100644
+--- a/arch/mips/include/asm/ds1287.h
++++ b/arch/mips/include/asm/ds1287.h
+@@ -8,7 +8,7 @@
+ #define __ASM_DS1287_H
+
+ extern int ds1287_timer_state(void);
+-extern void ds1287_set_base_clock(unsigned int clock);
++extern int ds1287_set_base_clock(unsigned int hz);
+ extern int ds1287_clockevent_init(int irq);
+
+ #endif
+diff --git a/arch/mips/kernel/cevt-ds1287.c b/arch/mips/kernel/cevt-ds1287.c
+index 9a47fbcd4638a6..de64d6bb7ba36c 100644
+--- a/arch/mips/kernel/cevt-ds1287.c
++++ b/arch/mips/kernel/cevt-ds1287.c
+@@ -10,6 +10,7 @@
+ #include <linux/mc146818rtc.h>
+ #include <linux/irq.h>
+
++#include <asm/ds1287.h>
+ #include <asm/time.h>
+
+ int ds1287_timer_state(void)
+diff --git a/arch/riscv/include/asm/kgdb.h b/arch/riscv/include/asm/kgdb.h
+index 46677daf708bd0..cc11c4544cffd1 100644
+--- a/arch/riscv/include/asm/kgdb.h
++++ b/arch/riscv/include/asm/kgdb.h
+@@ -19,16 +19,9 @@
+
+ #ifndef __ASSEMBLY__
+
++void arch_kgdb_breakpoint(void);
+ extern unsigned long kgdb_compiled_break;
+
+-static inline void arch_kgdb_breakpoint(void)
+-{
+- asm(".global kgdb_compiled_break\n"
+- ".option norvc\n"
+- "kgdb_compiled_break: ebreak\n"
+- ".option rvc\n");
+-}
+-
+ #endif /* !__ASSEMBLY__ */
+
+ #define DBG_REG_ZERO "zero"
+diff --git a/arch/riscv/include/asm/syscall.h b/arch/riscv/include/asm/syscall.h
+index 121fff429dce66..eceabf59ae482a 100644
+--- a/arch/riscv/include/asm/syscall.h
++++ b/arch/riscv/include/asm/syscall.h
+@@ -62,8 +62,11 @@ static inline void syscall_get_arguments(struct task_struct *task,
+ unsigned long *args)
+ {
+ args[0] = regs->orig_a0;
+- args++;
+- memcpy(args, &regs->a1, 5 * sizeof(args[0]));
++ args[1] = regs->a1;
++ args[2] = regs->a2;
++ args[3] = regs->a3;
++ args[4] = regs->a4;
++ args[5] = regs->a5;
+ }
+
+ static inline int syscall_get_arch(struct task_struct *task)
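Replacing the memcpy() with per-field assignments is not only cosmetic: a
memcpy() that starts at &regs->a1 and runs across five members reads past the
bounds of the a1 member itself, which fortified string checks can flag as an
overread even though the registers are adjacent in pt_regs. The field-by-field
form keeps every access inside one member. A sketch with an illustrative
struct layout:

    struct regs { unsigned long a0, a1, a2, a3, a4, a5; };

    static void get_args(unsigned long args[6], const struct regs *regs)
    {
        args[0] = regs->a0;
        args[1] = regs->a1; /* each access stays inside one member */
        args[2] = regs->a2;
        args[3] = regs->a3;
        args[4] = regs->a4;
        args[5] = regs->a5;
    }

    int main(void)
    {
        struct regs r = { 0, 1, 2, 3, 4, 5 };
        unsigned long args[6];

        get_args(args, &r);
        return (int)args[5]; /* 5 */
    }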
+diff --git a/arch/riscv/kernel/kgdb.c b/arch/riscv/kernel/kgdb.c
+index 2e0266ae6bd728..9f3db3503dabd6 100644
+--- a/arch/riscv/kernel/kgdb.c
++++ b/arch/riscv/kernel/kgdb.c
+@@ -254,6 +254,12 @@ void kgdb_arch_set_pc(struct pt_regs *regs, unsigned long pc)
+ regs->epc = pc;
+ }
+
++noinline void arch_kgdb_breakpoint(void)
++{
++ asm(".global kgdb_compiled_break\n"
++ "kgdb_compiled_break: ebreak\n");
++}
++
+ void kgdb_arch_handle_qxfer_pkt(char *remcom_in_buffer,
+ char *remcom_out_buffer)
+ {
+diff --git a/arch/riscv/kernel/module-sections.c b/arch/riscv/kernel/module-sections.c
+index e264e59e596e80..91d0b355ceeff6 100644
+--- a/arch/riscv/kernel/module-sections.c
++++ b/arch/riscv/kernel/module-sections.c
+@@ -73,16 +73,17 @@ static bool duplicate_rela(const Elf_Rela *rela, int idx)
+ static void count_max_entries(Elf_Rela *relas, int num,
+ unsigned int *plts, unsigned int *gots)
+ {
+- unsigned int type, i;
+-
+- for (i = 0; i < num; i++) {
+- type = ELF_RISCV_R_TYPE(relas[i].r_info);
+- if (type == R_RISCV_CALL_PLT) {
++ for (int i = 0; i < num; i++) {
++ switch (ELF_R_TYPE(relas[i].r_info)) {
++ case R_RISCV_CALL_PLT:
++ case R_RISCV_PLT32:
+ if (!duplicate_rela(relas, i))
+ (*plts)++;
+- } else if (type == R_RISCV_GOT_HI20) {
++ break;
++ case R_RISCV_GOT_HI20:
+ if (!duplicate_rela(relas, i))
+ (*gots)++;
++ break;
+ }
+ }
+ }
+diff --git a/arch/riscv/kernel/module.c b/arch/riscv/kernel/module.c
+index 47d0ebeec93c23..7f6147c18033b2 100644
+--- a/arch/riscv/kernel/module.c
++++ b/arch/riscv/kernel/module.c
+@@ -648,7 +648,7 @@ process_accumulated_relocations(struct module *me,
+ kfree(bucket_iter);
+ }
+
+- kfree(*relocation_hashtable);
++ kvfree(*relocation_hashtable);
+ }
+
+ static int add_relocation_to_accumulate(struct module *me, int type,
+@@ -752,9 +752,10 @@ initialize_relocation_hashtable(unsigned int num_relocations,
+
+ hashtable_size <<= should_double_size;
+
+- *relocation_hashtable = kmalloc_array(hashtable_size,
+- sizeof(**relocation_hashtable),
+- GFP_KERNEL);
++ /* Number of relocations may be large, so kvmalloc it */
++ *relocation_hashtable = kvmalloc_array(hashtable_size,
++ sizeof(**relocation_hashtable),
++ GFP_KERNEL);
+ if (!*relocation_hashtable)
+ return 0;
+
+@@ -859,7 +860,7 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
+ }
+
+ j++;
+- if (j > sechdrs[relsec].sh_size / sizeof(*rel))
++ if (j == num_relocations)
+ j = 0;
+
+ } while (j_idx != j);
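Switching the hashtable to kvmalloc_array()/kvfree() fits its sizing: the
helper tries a physically contiguous kmalloc first and falls back to vmalloc
when the request is large, with the same overflow-checked count * size
multiplication as kmalloc_array(). The usual kernel-side shape, with
illustrative names and kernel context assumed:

    #include <linux/slab.h>

    struct entry { u32 key; };

    static int build_table(unsigned int nr)
    {
        struct entry *tbl = kvmalloc_array(nr, sizeof(*tbl), GFP_KERNEL);

        if (!tbl)
            return -ENOMEM;
        /* ... fill and use tbl ... */
        kvfree(tbl); /* correct for kmalloc- and vmalloc-backed memory */
        return 0;
    }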
+diff --git a/arch/riscv/kernel/setup.c b/arch/riscv/kernel/setup.c
+index 7934613a98c883..194bda6d74ce72 100644
+--- a/arch/riscv/kernel/setup.c
++++ b/arch/riscv/kernel/setup.c
+@@ -66,6 +66,9 @@ static struct resource bss_res = { .name = "Kernel bss", };
+ static struct resource elfcorehdr_res = { .name = "ELF Core hdr", };
+ #endif
+
++static int num_standard_resources;
++static struct resource *standard_resources;
++
+ static int __init add_resource(struct resource *parent,
+ struct resource *res)
+ {
+@@ -139,7 +142,7 @@ static void __init init_resources(void)
+ struct resource *res = NULL;
+ struct resource *mem_res = NULL;
+ size_t mem_res_sz = 0;
+- int num_resources = 0, res_idx = 0;
++ int num_resources = 0, res_idx = 0, non_resv_res = 0;
+ int ret = 0;
+
+ /* + 1 as memblock_alloc() might increase memblock.reserved.cnt */
+@@ -195,6 +198,7 @@ static void __init init_resources(void)
+ /* Add /memory regions to the resource tree */
+ for_each_mem_region(region) {
+ res = &mem_res[res_idx--];
++ non_resv_res++;
+
+ if (unlikely(memblock_is_nomap(region))) {
+ res->name = "Reserved";
+@@ -212,6 +216,9 @@ static void __init init_resources(void)
+ goto error;
+ }
+
++ num_standard_resources = non_resv_res;
++ standard_resources = &mem_res[res_idx + 1];
++
+ /* Clean-up any unused pre-allocated resources */
+ if (res_idx >= 0)
+ memblock_free(mem_res, (res_idx + 1) * sizeof(*mem_res));
+@@ -223,6 +230,33 @@ static void __init init_resources(void)
+ memblock_free(mem_res, mem_res_sz);
+ }
+
++static int __init reserve_memblock_reserved_regions(void)
++{
++ u64 i, j;
++
++ for (i = 0; i < num_standard_resources; i++) {
++ struct resource *mem = &standard_resources[i];
++ phys_addr_t r_start, r_end, mem_size = resource_size(mem);
++
++ if (!memblock_is_region_reserved(mem->start, mem_size))
++ continue;
++
++ for_each_reserved_mem_range(j, &r_start, &r_end) {
++ resource_size_t start, end;
++
++ start = max(PFN_PHYS(PFN_DOWN(r_start)), mem->start);
++ end = min(PFN_PHYS(PFN_UP(r_end)) - 1, mem->end);
++
++ if (start > mem->end || end < mem->start)
++ continue;
++
++ reserve_region_with_split(mem, start, end, "Reserved");
++ }
++ }
++
++ return 0;
++}
++arch_initcall(reserve_memblock_reserved_regions);
+
+ static void __init parse_dtb(void)
+ {
+diff --git a/arch/x86/boot/compressed/mem.c b/arch/x86/boot/compressed/mem.c
+index dbba332e4a12d7..f676156d9f3db4 100644
+--- a/arch/x86/boot/compressed/mem.c
++++ b/arch/x86/boot/compressed/mem.c
+@@ -34,11 +34,14 @@ static bool early_is_tdx_guest(void)
+
+ void arch_accept_memory(phys_addr_t start, phys_addr_t end)
+ {
++ static bool sevsnp;
++
+ /* Platform-specific memory-acceptance call goes here */
+ if (early_is_tdx_guest()) {
+ if (!tdx_accept_memory(start, end))
+ panic("TDX: Failed to accept memory\n");
+- } else if (sev_snp_enabled()) {
++ } else if (sevsnp || (sev_get_status() & MSR_AMD64_SEV_SNP_ENABLED)) {
++ sevsnp = true;
+ snp_accept_memory(start, end);
+ } else {
+ error("Cannot accept memory: unknown platform\n");
+diff --git a/arch/x86/boot/compressed/sev.c b/arch/x86/boot/compressed/sev.c
+index cd44e120fe5377..f49f7eef1dba07 100644
+--- a/arch/x86/boot/compressed/sev.c
++++ b/arch/x86/boot/compressed/sev.c
+@@ -164,10 +164,7 @@ bool sev_snp_enabled(void)
+
+ static void __page_state_change(unsigned long paddr, enum psc_op op)
+ {
+- u64 val;
+-
+- if (!sev_snp_enabled())
+- return;
++ u64 val, msr;
+
+ /*
+ * If private -> shared then invalidate the page before requesting the
+@@ -176,6 +173,9 @@ static void __page_state_change(unsigned long paddr, enum psc_op op)
+ if (op == SNP_PAGE_STATE_SHARED)
+ pvalidate_4k_page(paddr, paddr, false);
+
++ /* Save the current GHCB MSR value */
++ msr = sev_es_rd_ghcb_msr();
++
+ /* Issue VMGEXIT to change the page state in RMP table. */
+ sev_es_wr_ghcb_msr(GHCB_MSR_PSC_REQ_GFN(paddr >> PAGE_SHIFT, op));
+ VMGEXIT();
+@@ -185,6 +185,9 @@ static void __page_state_change(unsigned long paddr, enum psc_op op)
+ if ((GHCB_RESP_CODE(val) != GHCB_MSR_PSC_RESP) || GHCB_MSR_PSC_RESP_VAL(val))
+ sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+
++ /* Restore the GHCB MSR value */
++ sev_es_wr_ghcb_msr(msr);
++
+ /*
+ * Now that page state is changed in the RMP table, validate it so that it is
+ * consistent with the RMP entry.
+@@ -195,11 +198,17 @@ static void __page_state_change(unsigned long paddr, enum psc_op op)
+
+ void snp_set_page_private(unsigned long paddr)
+ {
++ if (!sev_snp_enabled())
++ return;
++
+ __page_state_change(paddr, SNP_PAGE_STATE_PRIVATE);
+ }
+
+ void snp_set_page_shared(unsigned long paddr)
+ {
++ if (!sev_snp_enabled())
++ return;
++
+ __page_state_change(paddr, SNP_PAGE_STATE_SHARED);
+ }
+
+@@ -223,56 +232,10 @@ static bool early_setup_ghcb(void)
+ return true;
+ }
+
+-static phys_addr_t __snp_accept_memory(struct snp_psc_desc *desc,
+- phys_addr_t pa, phys_addr_t pa_end)
+-{
+- struct psc_hdr *hdr;
+- struct psc_entry *e;
+- unsigned int i;
+-
+- hdr = &desc->hdr;
+- memset(hdr, 0, sizeof(*hdr));
+-
+- e = desc->entries;
+-
+- i = 0;
+- while (pa < pa_end && i < VMGEXIT_PSC_MAX_ENTRY) {
+- hdr->end_entry = i;
+-
+- e->gfn = pa >> PAGE_SHIFT;
+- e->operation = SNP_PAGE_STATE_PRIVATE;
+- if (IS_ALIGNED(pa, PMD_SIZE) && (pa_end - pa) >= PMD_SIZE) {
+- e->pagesize = RMP_PG_SIZE_2M;
+- pa += PMD_SIZE;
+- } else {
+- e->pagesize = RMP_PG_SIZE_4K;
+- pa += PAGE_SIZE;
+- }
+-
+- e++;
+- i++;
+- }
+-
+- if (vmgexit_psc(boot_ghcb, desc))
+- sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+-
+- pvalidate_pages(desc);
+-
+- return pa;
+-}
+-
+ void snp_accept_memory(phys_addr_t start, phys_addr_t end)
+ {
+- struct snp_psc_desc desc = {};
+- unsigned int i;
+- phys_addr_t pa;
+-
+- if (!boot_ghcb && !early_setup_ghcb())
+- sev_es_terminate(SEV_TERM_SET_LINUX, GHCB_TERM_PSC);
+-
+- pa = start;
+- while (pa < end)
+- pa = __snp_accept_memory(&desc, pa, end);
++ for (phys_addr_t pa = start; pa < end; pa += PAGE_SIZE)
++ __page_state_change(pa, SNP_PAGE_STATE_PRIVATE);
+ }
+
+ void sev_es_shutdown_ghcb(void)
+diff --git a/arch/x86/boot/compressed/sev.h b/arch/x86/boot/compressed/sev.h
+index fc725a981b093b..4e463f33186df4 100644
+--- a/arch/x86/boot/compressed/sev.h
++++ b/arch/x86/boot/compressed/sev.h
+@@ -12,11 +12,13 @@
+
+ bool sev_snp_enabled(void);
+ void snp_accept_memory(phys_addr_t start, phys_addr_t end);
++u64 sev_get_status(void);
+
+ #else
+
+ static inline bool sev_snp_enabled(void) { return false; }
+ static inline void snp_accept_memory(phys_addr_t start, phys_addr_t end) { }
++static inline u64 sev_get_status(void) { return 0; }
+
+ #endif
+
+diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
+index 1617aa3efd68b1..1b82bcc6fa5564 100644
+--- a/arch/x86/events/intel/ds.c
++++ b/arch/x86/events/intel/ds.c
+@@ -1317,8 +1317,10 @@ static u64 pebs_update_adaptive_cfg(struct perf_event *event)
+ * + precise_ip < 2 for the non event IP
+ * + For RTM TSX weight we need GPRs for the abort code.
+ */
+- gprs = (sample_type & PERF_SAMPLE_REGS_INTR) &&
+- (attr->sample_regs_intr & PEBS_GP_REGS);
++ gprs = ((sample_type & PERF_SAMPLE_REGS_INTR) &&
++ (attr->sample_regs_intr & PEBS_GP_REGS)) ||
++ ((sample_type & PERF_SAMPLE_REGS_USER) &&
++ (attr->sample_regs_user & PEBS_GP_REGS));
+
+ tsx_weight = (sample_type & PERF_SAMPLE_WEIGHT_TYPE) &&
+ ((attr->config & INTEL_ARCH_EVENT_MASK) ==
+@@ -1970,7 +1972,7 @@ static void setup_pebs_adaptive_sample_data(struct perf_event *event,
+ regs->flags &= ~PERF_EFLAGS_EXACT;
+ }
+
+- if (sample_type & PERF_SAMPLE_REGS_INTR)
++ if (sample_type & (PERF_SAMPLE_REGS_INTR | PERF_SAMPLE_REGS_USER))
+ adaptive_pebs_save_regs(regs, gprs);
+ }
+
+diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
+index ca98744343b89e..543609d1231efc 100644
+--- a/arch/x86/events/intel/uncore_snbep.c
++++ b/arch/x86/events/intel/uncore_snbep.c
+@@ -4891,28 +4891,28 @@ static struct uncore_event_desc snr_uncore_iio_freerunning_events[] = {
+ INTEL_UNCORE_EVENT_DESC(ioclk, "event=0xff,umask=0x10"),
+ /* Free-Running IIO BANDWIDTH IN Counters */
+ INTEL_UNCORE_EVENT_DESC(bw_in_port0, "event=0xff,umask=0x20"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port1, "event=0xff,umask=0x21"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port2, "event=0xff,umask=0x22"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port3, "event=0xff,umask=0x23"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port4, "event=0xff,umask=0x24"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port4.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port5, "event=0xff,umask=0x25"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port5.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port6, "event=0xff,umask=0x26"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port6.unit, "MiB"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port7, "event=0xff,umask=0x27"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.814697266e-6"),
++ INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.0517578125e-5"),
+ INTEL_UNCORE_EVENT_DESC(bw_in_port7.unit, "MiB"),
+ { /* end: all zeroes */ },
+ };
+@@ -5485,37 +5485,6 @@ static struct freerunning_counters icx_iio_freerunning[] = {
+ [ICX_IIO_MSR_BW_IN] = { 0xaa0, 0x1, 0x10, 8, 48, icx_iio_bw_freerunning_box_offsets },
+ };
+
+-static struct uncore_event_desc icx_uncore_iio_freerunning_events[] = {
+- /* Free-Running IIO CLOCKS Counter */
+- INTEL_UNCORE_EVENT_DESC(ioclk, "event=0xff,umask=0x10"),
+- /* Free-Running IIO BANDWIDTH IN Counters */
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0, "event=0xff,umask=0x20"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1, "event=0xff,umask=0x21"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2, "event=0xff,umask=0x22"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3, "event=0xff,umask=0x23"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4, "event=0xff,umask=0x24"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5, "event=0xff,umask=0x25"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6, "event=0xff,umask=0x26"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7, "event=0xff,umask=0x27"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7.unit, "MiB"),
+- { /* end: all zeroes */ },
+-};
+-
+ static struct intel_uncore_type icx_uncore_iio_free_running = {
+ .name = "iio_free_running",
+ .num_counters = 9,
+@@ -5523,7 +5492,7 @@ static struct intel_uncore_type icx_uncore_iio_free_running = {
+ .num_freerunning_types = ICX_IIO_FREERUNNING_TYPE_MAX,
+ .freerunning = icx_iio_freerunning,
+ .ops = &skx_uncore_iio_freerunning_ops,
+- .event_descs = icx_uncore_iio_freerunning_events,
++ .event_descs = snr_uncore_iio_freerunning_events,
+ .format_group = &skx_uncore_iio_freerunning_format_group,
+ };
+
+@@ -6320,69 +6289,13 @@ static struct freerunning_counters spr_iio_freerunning[] = {
+ [SPR_IIO_MSR_BW_OUT] = { 0x3808, 0x1, 0x10, 8, 48 },
+ };
+
+-static struct uncore_event_desc spr_uncore_iio_freerunning_events[] = {
+- /* Free-Running IIO CLOCKS Counter */
+- INTEL_UNCORE_EVENT_DESC(ioclk, "event=0xff,umask=0x10"),
+- /* Free-Running IIO BANDWIDTH IN Counters */
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0, "event=0xff,umask=0x20"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port0.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1, "event=0xff,umask=0x21"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port1.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2, "event=0xff,umask=0x22"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port2.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3, "event=0xff,umask=0x23"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port3.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4, "event=0xff,umask=0x24"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port4.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5, "event=0xff,umask=0x25"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port5.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6, "event=0xff,umask=0x26"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port6.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7, "event=0xff,umask=0x27"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_in_port7.unit, "MiB"),
+- /* Free-Running IIO BANDWIDTH OUT Counters */
+- INTEL_UNCORE_EVENT_DESC(bw_out_port0, "event=0xff,umask=0x30"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port0.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port0.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port1, "event=0xff,umask=0x31"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port1.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port1.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port2, "event=0xff,umask=0x32"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port2.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port2.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port3, "event=0xff,umask=0x33"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port3.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port3.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port4, "event=0xff,umask=0x34"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port4.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port4.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port5, "event=0xff,umask=0x35"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port5.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port5.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port6, "event=0xff,umask=0x36"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port6.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port6.unit, "MiB"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port7, "event=0xff,umask=0x37"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port7.scale, "3.814697266e-6"),
+- INTEL_UNCORE_EVENT_DESC(bw_out_port7.unit, "MiB"),
+- { /* end: all zeroes */ },
+-};
+-
+ static struct intel_uncore_type spr_uncore_iio_free_running = {
+ .name = "iio_free_running",
+ .num_counters = 17,
+ .num_freerunning_types = SPR_IIO_FREERUNNING_TYPE_MAX,
+ .freerunning = spr_iio_freerunning,
+ .ops = &skx_uncore_iio_freerunning_ops,
+- .event_descs = spr_uncore_iio_freerunning_events,
++ .event_descs = snr_uncore_iio_freerunning_events,
+ .format_group = &skx_uncore_iio_freerunning_format_group,
+ };
+
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 425bed00b2e071..e432910859cb1a 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -862,6 +862,16 @@ static void init_amd_zen1(struct cpuinfo_x86 *c)
+
+ pr_notice_once("AMD Zen1 DIV0 bug detected. Disable SMT for full protection.\n");
+ setup_force_cpu_bug(X86_BUG_DIV0);
++
++ /*
++ * Turn off the Instructions Retired free counter on machines that are
++ * susceptible to erratum #1054 "Instructions Retired Performance
++ * Counter May Be Inaccurate".
++ */
++ if (c->x86_model < 0x30) {
++ msr_clear_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);
++ clear_cpu_cap(c, X86_FEATURE_IRPERF);
++ }
+ }
+
+ static bool cpu_has_zenbleed_microcode(void)
+@@ -1045,13 +1055,8 @@ static void init_amd(struct cpuinfo_x86 *c)
+ if (!cpu_feature_enabled(X86_FEATURE_XENPV))
+ set_cpu_bug(c, X86_BUG_SYSRET_SS_ATTRS);
+
+- /*
+- * Turn on the Instructions Retired free counter on machines not
+- * susceptible to erratum #1054 "Instructions Retired Performance
+- * Counter May Be Inaccurate".
+- */
+- if (cpu_has(c, X86_FEATURE_IRPERF) &&
+- (boot_cpu_has(X86_FEATURE_ZEN1) && c->x86_model > 0x2f))
++ /* Enable the Instructions Retired free counter */
++ if (cpu_has(c, X86_FEATURE_IRPERF))
+ msr_set_bit(MSR_K7_HWCR, MSR_K7_HWCR_IRPERF_EN_BIT);
+
+ check_null_seg_clears_base(c);
+diff --git a/arch/x86/kernel/cpu/microcode/amd.c b/arch/x86/kernel/cpu/microcode/amd.c
+index 5cd735728fa028..093d3ca43c4674 100644
+--- a/arch/x86/kernel/cpu/microcode/amd.c
++++ b/arch/x86/kernel/cpu/microcode/amd.c
+@@ -199,6 +199,12 @@ static bool need_sha_check(u32 cur_rev)
+ case 0xa70c0: return cur_rev <= 0xa70C009; break;
+ case 0xaa001: return cur_rev <= 0xaa00116; break;
+ case 0xaa002: return cur_rev <= 0xaa00218; break;
++ case 0xb0021: return cur_rev <= 0xb002146; break;
++ case 0xb1010: return cur_rev <= 0xb101046; break;
++ case 0xb2040: return cur_rev <= 0xb204031; break;
++ case 0xb4040: return cur_rev <= 0xb404031; break;
++ case 0xb6000: return cur_rev <= 0xb600031; break;
++ case 0xb7000: return cur_rev <= 0xb700031; break;
+ default: break;
+ }
+
+@@ -214,8 +220,7 @@ static bool verify_sha256_digest(u32 patch_id, u32 cur_rev, const u8 *data, unsi
+ struct sha256_state s;
+ int i;
+
+- if (x86_family(bsp_cpuid_1_eax) < 0x17 ||
+- x86_family(bsp_cpuid_1_eax) > 0x19)
++ if (x86_family(bsp_cpuid_1_eax) < 0x17)
+ return true;
+
+ if (!need_sha_check(cur_rev))
+diff --git a/arch/x86/xen/multicalls.c b/arch/x86/xen/multicalls.c
+index 10c660fae8b300..7237d56a9d3f01 100644
+--- a/arch/x86/xen/multicalls.c
++++ b/arch/x86/xen/multicalls.c
+@@ -54,14 +54,20 @@ struct mc_debug_data {
+
+ static DEFINE_PER_CPU(struct mc_buffer, mc_buffer);
+ static struct mc_debug_data mc_debug_data_early __initdata;
+-static DEFINE_PER_CPU(struct mc_debug_data *, mc_debug_data) =
+- &mc_debug_data_early;
+ static struct mc_debug_data __percpu *mc_debug_data_ptr;
+ DEFINE_PER_CPU(unsigned long, xen_mc_irq_flags);
+
+ static struct static_key mc_debug __ro_after_init;
+ static bool mc_debug_enabled __initdata;
+
++static struct mc_debug_data * __ref get_mc_debug(void)
++{
++ if (!mc_debug_data_ptr)
++ return &mc_debug_data_early;
++
++ return this_cpu_ptr(mc_debug_data_ptr);
++}
++
+ static int __init xen_parse_mc_debug(char *arg)
+ {
+ mc_debug_enabled = true;
+@@ -71,20 +77,16 @@ static int __init xen_parse_mc_debug(char *arg)
+ }
+ early_param("xen_mc_debug", xen_parse_mc_debug);
+
+-void mc_percpu_init(unsigned int cpu)
+-{
+- per_cpu(mc_debug_data, cpu) = per_cpu_ptr(mc_debug_data_ptr, cpu);
+-}
+-
+ static int __init mc_debug_enable(void)
+ {
+ unsigned long flags;
++ struct mc_debug_data __percpu *mcdb;
+
+ if (!mc_debug_enabled)
+ return 0;
+
+- mc_debug_data_ptr = alloc_percpu(struct mc_debug_data);
+- if (!mc_debug_data_ptr) {
++ mcdb = alloc_percpu(struct mc_debug_data);
++ if (!mcdb) {
+ pr_err("xen_mc_debug inactive\n");
+ static_key_slow_dec(&mc_debug);
+ return -ENOMEM;
+@@ -93,7 +95,7 @@ static int __init mc_debug_enable(void)
+ /* Be careful when switching to percpu debug data. */
+ local_irq_save(flags);
+ xen_mc_flush();
+- mc_percpu_init(0);
++ mc_debug_data_ptr = mcdb;
+ local_irq_restore(flags);
+
+ pr_info("xen_mc_debug active\n");
+@@ -155,7 +157,7 @@ void xen_mc_flush(void)
+ trace_xen_mc_flush(b->mcidx, b->argidx, b->cbidx);
+
+ if (static_key_false(&mc_debug)) {
+- mcdb = __this_cpu_read(mc_debug_data);
++ mcdb = get_mc_debug();
+ memcpy(mcdb->entries, b->entries,
+ b->mcidx * sizeof(struct multicall_entry));
+ }
+@@ -235,7 +237,7 @@ struct multicall_space __xen_mc_entry(size_t args)
+
+ ret.mc = &b->entries[b->mcidx];
+ if (static_key_false(&mc_debug)) {
+- struct mc_debug_data *mcdb = __this_cpu_read(mc_debug_data);
++ struct mc_debug_data *mcdb = get_mc_debug();
+
+ mcdb->caller[b->mcidx] = __builtin_return_address(0);
+ mcdb->argsz[b->mcidx] = args;
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index 6863d3da7decfc..7ea57f728b89db 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -305,7 +305,6 @@ static int xen_pv_kick_ap(unsigned int cpu, struct task_struct *idle)
+ return rc;
+
+ xen_pmu_init(cpu);
+- mc_percpu_init(cpu);
+
+ /*
+ * Why is this a BUG? If the hypercall fails then everything can be
+diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
+index 63c13a2ccf556a..25e318ef27d6b0 100644
+--- a/arch/x86/xen/xen-ops.h
++++ b/arch/x86/xen/xen-ops.h
+@@ -261,9 +261,6 @@ void xen_mc_callback(void (*fn)(void *), void *data);
+ */
+ struct multicall_space xen_mc_extend_args(unsigned long op, size_t arg_size);
+
+-/* Do percpu data initialization for multicalls. */
+-void mc_percpu_init(unsigned int cpu);
+-
+ extern bool is_xen_pmu;
+
+ irqreturn_t xen_pmu_irq_handler(int irq, void *dev_id);
+diff --git a/block/bio-integrity.c b/block/bio-integrity.c
+index 9aed61fcd0bf94..456026c4a3c962 100644
+--- a/block/bio-integrity.c
++++ b/block/bio-integrity.c
+@@ -104,16 +104,12 @@ struct bio_integrity_payload *bio_integrity_alloc(struct bio *bio,
+ }
+ EXPORT_SYMBOL(bio_integrity_alloc);
+
+-static void bio_integrity_unpin_bvec(struct bio_vec *bv, int nr_vecs,
+- bool dirty)
++static void bio_integrity_unpin_bvec(struct bio_vec *bv, int nr_vecs)
+ {
+ int i;
+
+- for (i = 0; i < nr_vecs; i++) {
+- if (dirty && !PageCompound(bv[i].bv_page))
+- set_page_dirty_lock(bv[i].bv_page);
++ for (i = 0; i < nr_vecs; i++)
+ unpin_user_page(bv[i].bv_page);
+- }
+ }
+
+ static void bio_integrity_uncopy_user(struct bio_integrity_payload *bip)
+@@ -129,7 +125,7 @@ static void bio_integrity_uncopy_user(struct bio_integrity_payload *bip)
+ ret = copy_to_iter(bvec_virt(bounce_bvec), bytes, &orig_iter);
+ WARN_ON_ONCE(ret != bytes);
+
+- bio_integrity_unpin_bvec(orig_bvecs, orig_nr_vecs, true);
++ bio_integrity_unpin_bvec(orig_bvecs, orig_nr_vecs);
+ }
+
+ /**
+@@ -149,8 +145,7 @@ void bio_integrity_unmap_user(struct bio *bio)
+ return;
+ }
+
+- bio_integrity_unpin_bvec(bip->bip_vec, bip->bip_max_vcnt,
+- bio_data_dir(bio) == READ);
++ bio_integrity_unpin_bvec(bip->bip_vec, bip->bip_max_vcnt);
+ }
+
+ /**
+@@ -236,7 +231,7 @@ static int bio_integrity_copy_user(struct bio *bio, struct bio_vec *bvec,
+ }
+
+ if (write)
+- bio_integrity_unpin_bvec(bvec, nr_vecs, false);
++ bio_integrity_unpin_bvec(bvec, nr_vecs);
+ else
+ memcpy(&bip->bip_vec[1], bvec, nr_vecs * sizeof(*bvec));
+
+@@ -362,7 +357,7 @@ int bio_integrity_map_user(struct bio *bio, void __user *ubuf, ssize_t bytes,
+ return 0;
+
+ release_pages:
+- bio_integrity_unpin_bvec(bvec, nr_bvecs, false);
++ bio_integrity_unpin_bvec(bvec, nr_bvecs);
+ free_bvec:
+ if (bvec != stack_vec)
+ kfree(bvec);
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 42023addf9cda6..c7b6c1f7635978 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -1121,8 +1121,8 @@ void blk_start_plug_nr_ios(struct blk_plug *plug, unsigned short nr_ios)
+ return;
+
+ plug->cur_ktime = 0;
+- plug->mq_list = NULL;
+- plug->cached_rq = NULL;
++ rq_list_init(&plug->mq_list);
++ rq_list_init(&plug->cached_rqs);
+ plug->nr_ios = min_t(unsigned short, nr_ios, BLK_MAX_REQUEST_COUNT);
+ plug->rq_count = 0;
+ plug->multiple_queues = false;
+@@ -1218,7 +1218,7 @@ void __blk_flush_plug(struct blk_plug *plug, bool from_schedule)
+ * queue for cached requests, we don't want a blocked task holding
+ * up a queue freeze/quiesce event.
+ */
+- if (unlikely(!rq_list_empty(plug->cached_rq)))
++ if (unlikely(!rq_list_empty(&plug->cached_rqs)))
+ blk_mq_free_plug_rqs(plug);
+
+ plug->cur_ktime = 0;
+diff --git a/block/blk-merge.c b/block/blk-merge.c
+index 5baa950f34fe21..ceac64e796ea82 100644
+--- a/block/blk-merge.c
++++ b/block/blk-merge.c
+@@ -1175,7 +1175,7 @@ bool blk_attempt_plug_merge(struct request_queue *q, struct bio *bio,
+ struct blk_plug *plug = current->plug;
+ struct request *rq;
+
+- if (!plug || rq_list_empty(plug->mq_list))
++ if (!plug || rq_list_empty(&plug->mq_list))
+ return false;
+
+ rq_list_for_each(&plug->mq_list, rq) {
+diff --git a/block/blk-mq-cpumap.c b/block/blk-mq-cpumap.c
+index 9638b25fd52124..ad8d6a363f24ae 100644
+--- a/block/blk-mq-cpumap.c
++++ b/block/blk-mq-cpumap.c
+@@ -11,6 +11,7 @@
+ #include <linux/smp.h>
+ #include <linux/cpu.h>
+ #include <linux/group_cpus.h>
++#include <linux/device/bus.h>
+
+ #include "blk.h"
+ #include "blk-mq.h"
+@@ -54,3 +55,39 @@ int blk_mq_hw_queue_to_node(struct blk_mq_queue_map *qmap, unsigned int index)
+
+ return NUMA_NO_NODE;
+ }
++
++/**
++ * blk_mq_map_hw_queues - Create CPU to hardware queue mapping
++ * @qmap: CPU to hardware queue map
++ * @dev: The device to map queues
++ * @offset: Queue offset to use for the device
++ *
++ * Create a CPU to hardware queue mapping in @qmap. The struct bus_type
++ * irq_get_affinity callback will be used to retrieve the affinity.
++ */
++void blk_mq_map_hw_queues(struct blk_mq_queue_map *qmap,
++ struct device *dev, unsigned int offset)
++
++{
++ const struct cpumask *mask;
++ unsigned int queue, cpu;
++
++ if (!dev->bus->irq_get_affinity)
++ goto fallback;
++
++ for (queue = 0; queue < qmap->nr_queues; queue++) {
++ mask = dev->bus->irq_get_affinity(dev, queue + offset);
++ if (!mask)
++ goto fallback;
++
++ for_each_cpu(cpu, mask)
++ qmap->mq_map[cpu] = qmap->queue_offset + queue;
++ }
++
++ return;
++
++fallback:
++ WARN_ON_ONCE(qmap->nr_queues > 1);
++ blk_mq_clear_mq_map(qmap);
++}
++EXPORT_SYMBOL_GPL(blk_mq_map_hw_queues);
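[Editor's note] blk_mq_map_hw_queues() walks each hardware queue's IRQ affinity mask and records the owning queue per CPU, degrading to a single-queue map when no affinity source exists. A plain C sketch of that mapping loop over hypothetical masks, not the block-layer API itself:

    #include <stdio.h>

    #define NR_CPUS 8
    #define NR_QUEUES 2

    /* Hypothetical per-queue affinity masks (1 = queue may serve that CPU). */
    static const unsigned int affinity[NR_QUEUES][NR_CPUS] = {
            { 1, 1, 1, 1, 0, 0, 0, 0 },
            { 0, 0, 0, 0, 1, 1, 1, 1 },
    };
    static const int have_affinity = 1;   /* flip to 0 to exercise the fallback */

    int main(void)
    {
            unsigned int mq_map[NR_CPUS] = { 0 };
            unsigned int q, cpu;

            if (!have_affinity)
                    goto fallback;
            for (q = 0; q < NR_QUEUES; q++)
                    for (cpu = 0; cpu < NR_CPUS; cpu++)
                            if (affinity[q][cpu])
                                    mq_map[cpu] = q;  /* later queues overwrite overlaps */
            goto print;

    fallback:   /* no affinity source: degrade to a single-queue mapping */
            for (cpu = 0; cpu < NR_CPUS; cpu++)
                    mq_map[cpu] = 0;
    print:
            for (cpu = 0; cpu < NR_CPUS; cpu++)
                    printf("cpu%u -> hwq %u\n", cpu, mq_map[cpu]);
            return 0;
    }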
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 662e52ab06467f..f26bee56269363 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -506,7 +506,7 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data)
+ prefetch(tags->static_rqs[tag]);
+ tag_mask &= ~(1UL << i);
+ rq = blk_mq_rq_ctx_init(data, tags, tag);
+- rq_list_add(data->cached_rq, rq);
++ rq_list_add_head(data->cached_rqs, rq);
+ nr++;
+ }
+ if (!(data->rq_flags & RQF_SCHED_TAGS))
+@@ -515,7 +515,7 @@ __blk_mq_alloc_requests_batch(struct blk_mq_alloc_data *data)
+ percpu_ref_get_many(&data->q->q_usage_counter, nr - 1);
+ data->nr_tags -= nr;
+
+- return rq_list_pop(data->cached_rq);
++ return rq_list_pop(data->cached_rqs);
+ }
+
+ static struct request *__blk_mq_alloc_requests(struct blk_mq_alloc_data *data)
+@@ -612,7 +612,7 @@ static struct request *blk_mq_rq_cache_fill(struct request_queue *q,
+ .flags = flags,
+ .cmd_flags = opf,
+ .nr_tags = plug->nr_ios,
+- .cached_rq = &plug->cached_rq,
++ .cached_rqs = &plug->cached_rqs,
+ };
+ struct request *rq;
+
+@@ -637,14 +637,14 @@ static struct request *blk_mq_alloc_cached_request(struct request_queue *q,
+ if (!plug)
+ return NULL;
+
+- if (rq_list_empty(plug->cached_rq)) {
++ if (rq_list_empty(&plug->cached_rqs)) {
+ if (plug->nr_ios == 1)
+ return NULL;
+ rq = blk_mq_rq_cache_fill(q, plug, opf, flags);
+ if (!rq)
+ return NULL;
+ } else {
+- rq = rq_list_peek(&plug->cached_rq);
++ rq = rq_list_peek(&plug->cached_rqs);
+ if (!rq || rq->q != q)
+ return NULL;
+
+@@ -653,7 +653,7 @@ static struct request *blk_mq_alloc_cached_request(struct request_queue *q,
+ if (op_is_flush(rq->cmd_flags) != op_is_flush(opf))
+ return NULL;
+
+- plug->cached_rq = rq_list_next(rq);
++ rq_list_pop(&plug->cached_rqs);
+ blk_mq_rq_time_init(rq, 0);
+ }
+
+@@ -830,7 +830,7 @@ void blk_mq_free_plug_rqs(struct blk_plug *plug)
+ {
+ struct request *rq;
+
+- while ((rq = rq_list_pop(&plug->cached_rq)) != NULL)
++ while ((rq = rq_list_pop(&plug->cached_rqs)) != NULL)
+ blk_mq_free_request(rq);
+ }
+
+@@ -1386,8 +1386,7 @@ static void blk_add_rq_to_plug(struct blk_plug *plug, struct request *rq)
+ */
+ if (!plug->has_elevator && (rq->rq_flags & RQF_SCHED_TAGS))
+ plug->has_elevator = true;
+- rq->rq_next = NULL;
+- rq_list_add(&plug->mq_list, rq);
++ rq_list_add_tail(&plug->mq_list, rq);
+ plug->rq_count++;
+ }
+
+@@ -2781,7 +2780,7 @@ static void blk_mq_plug_issue_direct(struct blk_plug *plug)
+ blk_status_t ret = BLK_STS_OK;
+
+ while ((rq = rq_list_pop(&plug->mq_list))) {
+- bool last = rq_list_empty(plug->mq_list);
++ bool last = rq_list_empty(&plug->mq_list);
+
+ if (hctx != rq->mq_hctx) {
+ if (hctx) {
+@@ -2824,8 +2823,7 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
+ {
+ struct blk_mq_hw_ctx *this_hctx = NULL;
+ struct blk_mq_ctx *this_ctx = NULL;
+- struct request *requeue_list = NULL;
+- struct request **requeue_lastp = &requeue_list;
++ struct rq_list requeue_list = {};
+ unsigned int depth = 0;
+ bool is_passthrough = false;
+ LIST_HEAD(list);
+@@ -2839,12 +2837,12 @@ static void blk_mq_dispatch_plug_list(struct blk_plug *plug, bool from_sched)
+ is_passthrough = blk_rq_is_passthrough(rq);
+ } else if (this_hctx != rq->mq_hctx || this_ctx != rq->mq_ctx ||
+ is_passthrough != blk_rq_is_passthrough(rq)) {
+- rq_list_add_tail(&requeue_lastp, rq);
++ rq_list_add_tail(&requeue_list, rq);
+ continue;
+ }
+- list_add(&rq->queuelist, &list);
++ list_add_tail(&rq->queuelist, &list);
+ depth++;
+- } while (!rq_list_empty(plug->mq_list));
++ } while (!rq_list_empty(&plug->mq_list));
+
+ plug->mq_list = requeue_list;
+ trace_block_unplug(this_hctx->queue, depth, !from_sched);
+@@ -2899,19 +2897,19 @@ void blk_mq_flush_plug_list(struct blk_plug *plug, bool from_schedule)
+ if (q->mq_ops->queue_rqs) {
+ blk_mq_run_dispatch_ops(q,
+ __blk_mq_flush_plug_list(q, plug));
+- if (rq_list_empty(plug->mq_list))
++ if (rq_list_empty(&plug->mq_list))
+ return;
+ }
+
+ blk_mq_run_dispatch_ops(q,
+ blk_mq_plug_issue_direct(plug));
+- if (rq_list_empty(plug->mq_list))
++ if (rq_list_empty(&plug->mq_list))
+ return;
+ }
+
+ do {
+ blk_mq_dispatch_plug_list(plug, from_schedule);
+- } while (!rq_list_empty(plug->mq_list));
++ } while (!rq_list_empty(&plug->mq_list));
+ }
+
+ static void blk_mq_try_issue_list_directly(struct blk_mq_hw_ctx *hctx,
+@@ -2976,7 +2974,7 @@ static struct request *blk_mq_get_new_requests(struct request_queue *q,
+ if (plug) {
+ data.nr_tags = plug->nr_ios;
+ plug->nr_ios = 1;
+- data.cached_rq = &plug->cached_rq;
++ data.cached_rqs = &plug->cached_rqs;
+ }
+
+ rq = __blk_mq_alloc_requests(&data);
+@@ -2999,7 +2997,7 @@ static struct request *blk_mq_peek_cached_request(struct blk_plug *plug,
+
+ if (!plug)
+ return NULL;
+- rq = rq_list_peek(&plug->cached_rq);
++ rq = rq_list_peek(&plug->cached_rqs);
+ if (!rq || rq->q != q)
+ return NULL;
+ if (type != rq->mq_hctx->type &&
+@@ -3013,14 +3011,14 @@ static struct request *blk_mq_peek_cached_request(struct blk_plug *plug,
+ static void blk_mq_use_cached_rq(struct request *rq, struct blk_plug *plug,
+ struct bio *bio)
+ {
+- WARN_ON_ONCE(rq_list_peek(&plug->cached_rq) != rq);
++ if (rq_list_pop(&plug->cached_rqs) != rq)
++ WARN_ON_ONCE(1);
+
+ /*
+ * If any qos ->throttle() end up blocking, we will have flushed the
+ * plug and hence killed the cached_rq list as well. Pop this entry
+ * before we throttle.
+ */
+- plug->cached_rq = rq_list_next(rq);
+ rq_qos_throttle(rq->q, bio);
+
+ blk_mq_rq_time_init(rq, 0);
+diff --git a/block/blk-mq.h b/block/blk-mq.h
+index 364c0415293cf7..a80d3b3105f9ed 100644
+--- a/block/blk-mq.h
++++ b/block/blk-mq.h
+@@ -155,7 +155,7 @@ struct blk_mq_alloc_data {
+
+ /* allocate multiple requests/tags in one go */
+ unsigned int nr_tags;
+- struct request **cached_rq;
++ struct rq_list *cached_rqs;
+
+ /* input & output parameter */
+ struct blk_mq_ctx *ctx;
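[Editor's note] The conversion above moves from open-coded struct request * head pointers plus a "lastp" tail double-pointer to a struct rq_list carrying head and tail explicitly, so {} is an empty list and tail insertion is O(1). An illustrative stand-alone C analogue of the list shape (hypothetical node type; struct rq_list has the same shape over struct request, with rq_next as the link field):

    #include <stdio.h>
    #include <stddef.h>

    struct node { struct node *next; int id; };
    struct list { struct node *head, *tail; };

    static void add_head(struct list *l, struct node *n)
    {
            n->next = l->head;
            l->head = n;
            if (!l->tail)
                    l->tail = n;
    }

    static void add_tail(struct list *l, struct node *n)
    {
            n->next = NULL;
            if (l->tail)
                    l->tail->next = n;
            else
                    l->head = n;
            l->tail = n;
    }

    static struct node *pop(struct list *l)
    {
            struct node *n = l->head;
            if (n) {
                    l->head = n->next;
                    if (!l->head)
                            l->tail = NULL;
            }
            return n;
    }

    int main(void)
    {
            struct list l = { 0 };      /* {} is a valid empty list */
            struct node a = { .id = 1 }, b = { .id = 2 };
            add_tail(&l, &a);
            add_head(&l, &b);
            for (struct node *n; (n = pop(&l)); )
                    printf("%d\n", n->id);
            return 0;
    }

Tracking the tail directly removes the easy-to-misuse requeue_lastp double-pointer idiom the old code needed for ordered requeueing.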
+diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
+index 692b27266220fe..0e2520d929e1db 100644
+--- a/block/blk-sysfs.c
++++ b/block/blk-sysfs.c
+@@ -813,6 +813,8 @@ int blk_register_queue(struct gendisk *disk)
+ out_debugfs_remove:
+ blk_debugfs_remove(disk);
+ mutex_unlock(&q->sysfs_lock);
++ if (queue_is_mq(q))
++ blk_mq_sysfs_unregister(disk);
+ out_put_queue_kobj:
+ kobject_put(&disk->queue_kobj);
+ mutex_unlock(&q->sysfs_dir_lock);
+diff --git a/drivers/ata/libata-sata.c b/drivers/ata/libata-sata.c
+index 9c76fb1ad2ec50..a7442dc0bd8e10 100644
+--- a/drivers/ata/libata-sata.c
++++ b/drivers/ata/libata-sata.c
+@@ -1510,6 +1510,8 @@ int ata_eh_get_ncq_success_sense(struct ata_link *link)
+ unsigned int err_mask, tag;
+ u8 *sense, sk = 0, asc = 0, ascq = 0;
+ u64 sense_valid, val;
++ u16 extended_sense;
++ bool aux_icc_valid;
+ int ret = 0;
+
+ err_mask = ata_read_log_page(dev, ATA_LOG_SENSE_NCQ, 0, buf, 2);
+@@ -1529,6 +1531,8 @@ int ata_eh_get_ncq_success_sense(struct ata_link *link)
+
+ sense_valid = (u64)buf[8] | ((u64)buf[9] << 8) |
+ ((u64)buf[10] << 16) | ((u64)buf[11] << 24);
++ extended_sense = get_unaligned_le16(&buf[14]);
++ aux_icc_valid = extended_sense & BIT(15);
+
+ ata_qc_for_each_raw(ap, qc, tag) {
+ if (!(qc->flags & ATA_QCFLAG_EH) ||
+@@ -1556,6 +1560,17 @@ int ata_eh_get_ncq_success_sense(struct ata_link *link)
+ continue;
+ }
+
++ qc->result_tf.nsect = sense[6];
++ qc->result_tf.hob_nsect = sense[7];
++ qc->result_tf.lbal = sense[8];
++ qc->result_tf.lbam = sense[9];
++ qc->result_tf.lbah = sense[10];
++ qc->result_tf.hob_lbal = sense[11];
++ qc->result_tf.hob_lbam = sense[12];
++ qc->result_tf.hob_lbah = sense[13];
++ if (aux_icc_valid)
++ qc->result_tf.auxiliary = get_unaligned_le32(&sense[16]);
++
+ /* Set sense without also setting scsicmd->result */
+ scsi_build_sense_buffer(dev->flags & ATA_DFLAG_D_SENSE,
+ qc->scsicmd->sense_buffer, sk,
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 86cc3b19faae86..8827a768284ac4 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -233,72 +233,6 @@ static void loop_set_size(struct loop_device *lo, loff_t size)
+ kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE);
+ }
+
+-static int lo_write_bvec(struct file *file, struct bio_vec *bvec, loff_t *ppos)
+-{
+- struct iov_iter i;
+- ssize_t bw;
+-
+- iov_iter_bvec(&i, ITER_SOURCE, bvec, 1, bvec->bv_len);
+-
+- bw = vfs_iter_write(file, &i, ppos, 0);
+-
+- if (likely(bw == bvec->bv_len))
+- return 0;
+-
+- printk_ratelimited(KERN_ERR
+- "loop: Write error at byte offset %llu, length %i.\n",
+- (unsigned long long)*ppos, bvec->bv_len);
+- if (bw >= 0)
+- bw = -EIO;
+- return bw;
+-}
+-
+-static int lo_write_simple(struct loop_device *lo, struct request *rq,
+- loff_t pos)
+-{
+- struct bio_vec bvec;
+- struct req_iterator iter;
+- int ret = 0;
+-
+- rq_for_each_segment(bvec, rq, iter) {
+- ret = lo_write_bvec(lo->lo_backing_file, &bvec, &pos);
+- if (ret < 0)
+- break;
+- cond_resched();
+- }
+-
+- return ret;
+-}
+-
+-static int lo_read_simple(struct loop_device *lo, struct request *rq,
+- loff_t pos)
+-{
+- struct bio_vec bvec;
+- struct req_iterator iter;
+- struct iov_iter i;
+- ssize_t len;
+-
+- rq_for_each_segment(bvec, rq, iter) {
+- iov_iter_bvec(&i, ITER_DEST, &bvec, 1, bvec.bv_len);
+- len = vfs_iter_read(lo->lo_backing_file, &i, &pos, 0);
+- if (len < 0)
+- return len;
+-
+- flush_dcache_page(bvec.bv_page);
+-
+- if (len != bvec.bv_len) {
+- struct bio *bio;
+-
+- __rq_for_each_bio(bio, rq)
+- zero_fill_bio(bio);
+- break;
+- }
+- cond_resched();
+- }
+-
+- return 0;
+-}
+-
+ static void loop_clear_limits(struct loop_device *lo, int mode)
+ {
+ struct queue_limits lim = queue_limits_start_update(lo->lo_queue);
+@@ -357,7 +291,7 @@ static void lo_complete_rq(struct request *rq)
+ struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);
+ blk_status_t ret = BLK_STS_OK;
+
+- if (!cmd->use_aio || cmd->ret < 0 || cmd->ret == blk_rq_bytes(rq) ||
++ if (cmd->ret < 0 || cmd->ret == blk_rq_bytes(rq) ||
+ req_op(rq) != REQ_OP_READ) {
+ if (cmd->ret < 0)
+ ret = errno_to_blk_status(cmd->ret);
+@@ -373,14 +307,13 @@ static void lo_complete_rq(struct request *rq)
+ cmd->ret = 0;
+ blk_mq_requeue_request(rq, true);
+ } else {
+- if (cmd->use_aio) {
+- struct bio *bio = rq->bio;
++ struct bio *bio = rq->bio;
+
+- while (bio) {
+- zero_fill_bio(bio);
+- bio = bio->bi_next;
+- }
++ while (bio) {
++ zero_fill_bio(bio);
++ bio = bio->bi_next;
+ }
++
+ ret = BLK_STS_IOERR;
+ end_io:
+ blk_mq_end_request(rq, ret);
+@@ -460,9 +393,14 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
+
+ cmd->iocb.ki_pos = pos;
+ cmd->iocb.ki_filp = file;
+- cmd->iocb.ki_complete = lo_rw_aio_complete;
+- cmd->iocb.ki_flags = IOCB_DIRECT;
+- cmd->iocb.ki_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
++ cmd->iocb.ki_ioprio = req_get_ioprio(rq);
++ if (cmd->use_aio) {
++ cmd->iocb.ki_complete = lo_rw_aio_complete;
++ cmd->iocb.ki_flags = IOCB_DIRECT;
++ } else {
++ cmd->iocb.ki_complete = NULL;
++ cmd->iocb.ki_flags = 0;
++ }
+
+ if (rw == ITER_SOURCE)
+ ret = file->f_op->write_iter(&cmd->iocb, &iter);
+@@ -473,7 +411,7 @@ static int lo_rw_aio(struct loop_device *lo, struct loop_cmd *cmd,
+
+ if (ret != -EIOCBQUEUED)
+ lo_rw_aio_complete(&cmd->iocb, ret);
+- return 0;
++ return -EIOCBQUEUED;
+ }
+
+ static int do_req_filebacked(struct loop_device *lo, struct request *rq)
+@@ -481,15 +419,6 @@ static int do_req_filebacked(struct loop_device *lo, struct request *rq)
+ struct loop_cmd *cmd = blk_mq_rq_to_pdu(rq);
+ loff_t pos = ((loff_t) blk_rq_pos(rq) << 9) + lo->lo_offset;
+
+- /*
+- * lo_write_simple and lo_read_simple should have been covered
+- * by io submit style function like lo_rw_aio(), one blocker
+- * is that lo_read_simple() need to call flush_dcache_page after
+- * the page is written from kernel, and it isn't easy to handle
+- * this in io submit style function which submits all segments
+- * of the req at one time. And direct read IO doesn't need to
+- * run flush_dcache_page().
+- */
+ switch (req_op(rq)) {
+ case REQ_OP_FLUSH:
+ return lo_req_flush(lo, rq);
+@@ -505,15 +434,9 @@ static int do_req_filebacked(struct loop_device *lo, struct request *rq)
+ case REQ_OP_DISCARD:
+ return lo_fallocate(lo, rq, pos, FALLOC_FL_PUNCH_HOLE);
+ case REQ_OP_WRITE:
+- if (cmd->use_aio)
+- return lo_rw_aio(lo, cmd, pos, ITER_SOURCE);
+- else
+- return lo_write_simple(lo, rq, pos);
++ return lo_rw_aio(lo, cmd, pos, ITER_SOURCE);
+ case REQ_OP_READ:
+- if (cmd->use_aio)
+- return lo_rw_aio(lo, cmd, pos, ITER_DEST);
+- else
+- return lo_read_simple(lo, rq, pos);
++ return lo_rw_aio(lo, cmd, pos, ITER_DEST);
+ default:
+ WARN_ON_ONCE(1);
+ return -EIO;
+@@ -645,19 +568,20 @@ static int loop_change_fd(struct loop_device *lo, struct block_device *bdev,
+ * dependency.
+ */
+ fput(old_file);
++ dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
+ if (partscan)
+ loop_reread_partitions(lo);
+
+ error = 0;
+ done:
+- /* enable and uncork uevent now that we are done */
+- dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
++ kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE);
+ return error;
+
+ out_err:
+ loop_global_unlock(lo, is_loop);
+ out_putf:
+ fput(file);
++ dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
+ goto done;
+ }
+
+@@ -1111,8 +1035,8 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
+ if (partscan)
+ clear_bit(GD_SUPPRESS_PART_SCAN, &lo->lo_disk->state);
+
+- /* enable and uncork uevent now that we are done */
+ dev_set_uevent_suppress(disk_to_dev(lo->lo_disk), 0);
++ kobject_uevent(&disk_to_dev(lo->lo_disk)->kobj, KOBJ_CHANGE);
+
+ loop_global_unlock(lo, is_loop);
+ if (partscan)
+@@ -1888,7 +1812,6 @@ static void loop_handle_cmd(struct loop_cmd *cmd)
+ struct loop_device *lo = rq->q->queuedata;
+ int ret = 0;
+ struct mem_cgroup *old_memcg = NULL;
+- const bool use_aio = cmd->use_aio;
+
+ if (write && (lo->lo_flags & LO_FLAGS_READ_ONLY)) {
+ ret = -EIO;
+@@ -1918,7 +1841,7 @@ static void loop_handle_cmd(struct loop_cmd *cmd)
+ }
+ failed:
+ /* complete non-aio request */
+- if (!use_aio || ret) {
++ if (ret != -EIOCBQUEUED) {
+ if (ret == -EOPNOTSUPP)
+ cmd->ret = ret;
+ else
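[Editor's note] With the simple read/write paths gone, the loop driver settles on one completion convention: the submit path returns -EIOCBQUEUED when completion will arrive asynchronously, and the caller completes the request inline for any other value. A hedged user-space sketch of that convention (the errno value matches the kernel's definition, but the functions are hypothetical):

    #include <stdio.h>

    #define EIOCBQUEUED 529   /* kernel value; used here only as a sentinel */

    /* Hypothetical submit: -EIOCBQUEUED means "completion comes later". */
    static int submit(int async)
    {
            if (async)
                    return -EIOCBQUEUED;  /* a callback will finish the request */
            return 0;                     /* done synchronously, complete now */
    }

    static void handle(int async)
    {
            int ret = submit(async);
            if (ret != -EIOCBQUEUED)
                    printf("complete inline (ret=%d)\n", ret);
            else
                    printf("deferred to completion callback\n");
    }

    int main(void)
    {
            handle(0);
            handle(1);
            return 0;
    }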
+diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
+index c479348ce8ff69..f10369ad90f768 100644
+--- a/drivers/block/null_blk/main.c
++++ b/drivers/block/null_blk/main.c
+@@ -1638,10 +1638,9 @@ static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx,
+ return BLK_STS_OK;
+ }
+
+-static void null_queue_rqs(struct request **rqlist)
++static void null_queue_rqs(struct rq_list *rqlist)
+ {
+- struct request *requeue_list = NULL;
+- struct request **requeue_lastp = &requeue_list;
++ struct rq_list requeue_list = {};
+ struct blk_mq_queue_data bd = { };
+ blk_status_t ret;
+
+@@ -1651,8 +1650,8 @@ static void null_queue_rqs(struct request **rqlist)
+ bd.rq = rq;
+ ret = null_queue_rq(rq->mq_hctx, &bd);
+ if (ret != BLK_STS_OK)
+- rq_list_add_tail(&requeue_lastp, rq);
+- } while (!rq_list_empty(*rqlist));
++ rq_list_add_tail(&requeue_list, rq);
++ } while (!rq_list_empty(rqlist));
+
+ *rqlist = requeue_list;
+ }
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 44a6937a4b65cc..fd6c565f8a507c 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -472,7 +472,7 @@ static bool virtblk_prep_rq_batch(struct request *req)
+ }
+
+ static void virtblk_add_req_batch(struct virtio_blk_vq *vq,
+- struct request **rqlist)
++ struct rq_list *rqlist)
+ {
+ struct request *req;
+ unsigned long flags;
+@@ -499,11 +499,10 @@ static void virtblk_add_req_batch(struct virtio_blk_vq *vq,
+ virtqueue_notify(vq->vq);
+ }
+
+-static void virtio_queue_rqs(struct request **rqlist)
++static void virtio_queue_rqs(struct rq_list *rqlist)
+ {
+- struct request *submit_list = NULL;
+- struct request *requeue_list = NULL;
+- struct request **requeue_lastp = &requeue_list;
++ struct rq_list submit_list = { };
++ struct rq_list requeue_list = { };
+ struct virtio_blk_vq *vq = NULL;
+ struct request *req;
+
+@@ -515,9 +514,9 @@ static void virtio_queue_rqs(struct request **rqlist)
+ vq = this_vq;
+
+ if (virtblk_prep_rq_batch(req))
+- rq_list_add(&submit_list, req); /* reverse order */
++ rq_list_add_tail(&submit_list, req);
+ else
+- rq_list_add_tail(&requeue_lastp, req);
++ rq_list_add_tail(&requeue_list, req);
+ }
+
+ if (vq)
+diff --git a/drivers/bluetooth/btrtl.c b/drivers/bluetooth/btrtl.c
+index 0a6ca6dfb94841..59eb9486642232 100644
+--- a/drivers/bluetooth/btrtl.c
++++ b/drivers/bluetooth/btrtl.c
+@@ -1215,6 +1215,8 @@ struct btrtl_device_info *btrtl_initialize(struct hci_dev *hdev,
+ rtl_dev_err(hdev, "mandatory config file %s not found",
+ btrtl_dev->ic_info->cfg_name);
+ ret = btrtl_dev->cfg_len;
++ if (!ret)
++ ret = -EINVAL;
+ goto err_free;
+ }
+ }
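[Editor's note] The btrtl fix closes a classic error-path bug: a config length of zero flowed into ret, turning a missing mandatory file into a success return. A tiny C sketch of the pattern and the guard (hypothetical loader; -22 standing in for -EINVAL):

    #include <stdio.h>

    /* Hypothetical loader: returns payload length, 0 if absent, <0 on error. */
    static int load_len(int present) { return present ? 42 : 0; }

    static int init(int present)
    {
            int ret = load_len(present);
            if (ret <= 0) {
                    if (!ret)
                            ret = -22;   /* 0 would otherwise read as success */
                    return ret;
            }
            return 0;
    }

    int main(void)
    {
            printf("%d\n", init(1));   /* 0: config found */
            printf("%d\n", init(0));   /* -22: missing file now fails */
            return 0;
    }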
+diff --git a/drivers/bluetooth/hci_vhci.c b/drivers/bluetooth/hci_vhci.c
+index 7651321d351ccd..9ac22e4a070bef 100644
+--- a/drivers/bluetooth/hci_vhci.c
++++ b/drivers/bluetooth/hci_vhci.c
+@@ -289,18 +289,18 @@ static void vhci_coredump(struct hci_dev *hdev)
+
+ static void vhci_coredump_hdr(struct hci_dev *hdev, struct sk_buff *skb)
+ {
+- char buf[80];
++ const char *buf;
+
+- snprintf(buf, sizeof(buf), "Controller Name: vhci_ctrl\n");
++ buf = "Controller Name: vhci_ctrl\n";
+ skb_put_data(skb, buf, strlen(buf));
+
+- snprintf(buf, sizeof(buf), "Firmware Version: vhci_fw\n");
++ buf = "Firmware Version: vhci_fw\n";
+ skb_put_data(skb, buf, strlen(buf));
+
+- snprintf(buf, sizeof(buf), "Driver: vhci_drv\n");
++ buf = "Driver: vhci_drv\n";
+ skb_put_data(skb, buf, strlen(buf));
+
+- snprintf(buf, sizeof(buf), "Vendor: vhci\n");
++ buf = "Vendor: vhci\n";
+ skb_put_data(skb, buf, strlen(buf));
+ }
+
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index f98c9438760c97..67b4e3d18ffe22 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -2748,10 +2748,18 @@ EXPORT_SYMBOL(cpufreq_update_policy);
+ */
+ void cpufreq_update_limits(unsigned int cpu)
+ {
++ struct cpufreq_policy *policy;
++
++ policy = cpufreq_cpu_get(cpu);
++ if (!policy)
++ return;
++
+ if (cpufreq_driver->update_limits)
+ cpufreq_driver->update_limits(cpu);
+ else
+ cpufreq_update_policy(cpu);
++
++ cpufreq_cpu_put(policy);
+ }
+ EXPORT_SYMBOL_GPL(cpufreq_update_limits);
+
+diff --git a/drivers/crypto/caam/qi.c b/drivers/crypto/caam/qi.c
+index 8ed2bb01a619fd..44860630050019 100644
+--- a/drivers/crypto/caam/qi.c
++++ b/drivers/crypto/caam/qi.c
+@@ -122,12 +122,12 @@ int caam_qi_enqueue(struct device *qidev, struct caam_drv_req *req)
+ qm_fd_addr_set64(&fd, addr);
+
+ do {
++ refcount_inc(&req->drv_ctx->refcnt);
+ ret = qman_enqueue(req->drv_ctx->req_fq, &fd);
+- if (likely(!ret)) {
+- refcount_inc(&req->drv_ctx->refcnt);
++ if (likely(!ret))
+ return 0;
+- }
+
++ refcount_dec(&req->drv_ctx->refcnt);
+ if (ret != -EBUSY)
+ break;
+ num_retries++;
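[Editor's note] The caam change takes the reference before the frame is enqueued, so a completion that fires immediately can never drop the counter below its true value; the increment is rolled back only if the enqueue itself fails. A user-space sketch of that ordering, with hypothetical names and a single producer:

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int refcnt = 1;   /* one reference held by the owner */

    /* Hypothetical enqueue that may fail. */
    static int enqueue(int fail)
    {
            return fail ? -1 : 0;
    }

    static int submit(int fail)
    {
            /* Take the reference *before* the work becomes visible to the
             * completion path, so a fast completion cannot underflow it. */
            atomic_fetch_add(&refcnt, 1);
            if (enqueue(fail) == 0)
                    return 0;
            atomic_fetch_sub(&refcnt, 1);   /* roll back on failure */
            return -1;
    }

    int main(void)
    {
            submit(0);
            printf("refcnt after success: %d\n", atomic_load(&refcnt));
            submit(1);
            printf("refcnt after failure: %d\n", atomic_load(&refcnt));
            return 0;
    }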
+diff --git a/drivers/crypto/tegra/tegra-se-aes.c b/drivers/crypto/tegra/tegra-se-aes.c
+index 0ed0515e1ed54c..cd52807e76afdb 100644
+--- a/drivers/crypto/tegra/tegra-se-aes.c
++++ b/drivers/crypto/tegra/tegra-se-aes.c
+@@ -263,13 +263,7 @@ static int tegra_aes_do_one_req(struct crypto_engine *engine, void *areq)
+ unsigned int cmdlen;
+ int ret;
+
+- rctx->datbuf.buf = dma_alloc_coherent(se->dev, SE_AES_BUFLEN,
+- &rctx->datbuf.addr, GFP_KERNEL);
+- if (!rctx->datbuf.buf)
+- return -ENOMEM;
+-
+- rctx->datbuf.size = SE_AES_BUFLEN;
+- rctx->iv = (u32 *)req->iv;
++ rctx->iv = (ctx->alg == SE_ALG_ECB) ? NULL : (u32 *)req->iv;
+ rctx->len = req->cryptlen;
+
+ /* Pad input to AES Block size */
+@@ -278,6 +272,12 @@ static int tegra_aes_do_one_req(struct crypto_engine *engine, void *areq)
+ rctx->len += AES_BLOCK_SIZE - (rctx->len % AES_BLOCK_SIZE);
+ }
+
++ rctx->datbuf.size = rctx->len;
++ rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->datbuf.size,
++ &rctx->datbuf.addr, GFP_KERNEL);
++ if (!rctx->datbuf.buf)
++ return -ENOMEM;
++
+ scatterwalk_map_and_copy(rctx->datbuf.buf, req->src, 0, req->cryptlen, 0);
+
+ /* Prepare the command and submit for execution */
+@@ -289,7 +289,7 @@ static int tegra_aes_do_one_req(struct crypto_engine *engine, void *areq)
+ scatterwalk_map_and_copy(rctx->datbuf.buf, req->dst, 0, req->cryptlen, 1);
+
+ /* Free the buffer */
+- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
++ dma_free_coherent(ctx->se->dev, rctx->datbuf.size,
+ rctx->datbuf.buf, rctx->datbuf.addr);
+
+ crypto_finalize_skcipher_request(se->engine, req, ret);
+@@ -443,9 +443,6 @@ static int tegra_aes_crypt(struct skcipher_request *req, bool encrypt)
+ if (!req->cryptlen)
+ return 0;
+
+- if (ctx->alg == SE_ALG_ECB)
+- req->iv = NULL;
+-
+ rctx->encrypt = encrypt;
+ rctx->config = tegra234_aes_cfg(ctx->alg, encrypt);
+ rctx->crypto_config = tegra234_aes_crypto_cfg(ctx->alg, encrypt);
+@@ -1120,6 +1117,11 @@ static int tegra_ccm_crypt_init(struct aead_request *req, struct tegra_se *se,
+ rctx->assoclen = req->assoclen;
+ rctx->authsize = crypto_aead_authsize(tfm);
+
++ if (rctx->encrypt)
++ rctx->cryptlen = req->cryptlen;
++ else
++ rctx->cryptlen = req->cryptlen - rctx->authsize;
++
+ memcpy(iv, req->iv, 16);
+
+ ret = tegra_ccm_check_iv(iv);
+@@ -1148,30 +1150,26 @@ static int tegra_ccm_do_one_req(struct crypto_engine *engine, void *areq)
+ struct tegra_se *se = ctx->se;
+ int ret;
+
++ ret = tegra_ccm_crypt_init(req, se, rctx);
++ if (ret)
++ return ret;
++
+ /* Allocate buffers required */
+- rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN,
++ rctx->inbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen + 100;
++ rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->inbuf.size,
+ &rctx->inbuf.addr, GFP_KERNEL);
+ if (!rctx->inbuf.buf)
+ return -ENOMEM;
+
+- rctx->inbuf.size = SE_AES_BUFLEN;
+-
+- rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN,
++ rctx->outbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen + 100;
++ rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->outbuf.size,
+ &rctx->outbuf.addr, GFP_KERNEL);
+ if (!rctx->outbuf.buf) {
+ ret = -ENOMEM;
+ goto outbuf_err;
+ }
+
+- rctx->outbuf.size = SE_AES_BUFLEN;
+-
+- ret = tegra_ccm_crypt_init(req, se, rctx);
+- if (ret)
+- goto out;
+-
+ if (rctx->encrypt) {
+- rctx->cryptlen = req->cryptlen;
+-
+ /* CBC MAC Operation */
+ ret = tegra_ccm_compute_auth(ctx, rctx);
+ if (ret)
+@@ -1182,10 +1180,6 @@ static int tegra_ccm_do_one_req(struct crypto_engine *engine, void *areq)
+ if (ret)
+ goto out;
+ } else {
+- rctx->cryptlen = req->cryptlen - ctx->authsize;
+- if (ret)
+- goto out;
+-
+ /* CTR operation */
+ ret = tegra_ccm_do_ctr(ctx, rctx);
+ if (ret)
+@@ -1198,11 +1192,11 @@ static int tegra_ccm_do_one_req(struct crypto_engine *engine, void *areq)
+ }
+
+ out:
+- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
++ dma_free_coherent(ctx->se->dev, rctx->inbuf.size,
+ rctx->outbuf.buf, rctx->outbuf.addr);
+
+ outbuf_err:
+- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
++ dma_free_coherent(ctx->se->dev, rctx->outbuf.size,
+ rctx->inbuf.buf, rctx->inbuf.addr);
+
+ crypto_finalize_aead_request(ctx->se->engine, req, ret);
+@@ -1218,23 +1212,6 @@ static int tegra_gcm_do_one_req(struct crypto_engine *engine, void *areq)
+ struct tegra_aead_reqctx *rctx = aead_request_ctx(req);
+ int ret;
+
+- /* Allocate buffers required */
+- rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN,
+- &rctx->inbuf.addr, GFP_KERNEL);
+- if (!rctx->inbuf.buf)
+- return -ENOMEM;
+-
+- rctx->inbuf.size = SE_AES_BUFLEN;
+-
+- rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, SE_AES_BUFLEN,
+- &rctx->outbuf.addr, GFP_KERNEL);
+- if (!rctx->outbuf.buf) {
+- ret = -ENOMEM;
+- goto outbuf_err;
+- }
+-
+- rctx->outbuf.size = SE_AES_BUFLEN;
+-
+ rctx->src_sg = req->src;
+ rctx->dst_sg = req->dst;
+ rctx->assoclen = req->assoclen;
+@@ -1248,6 +1225,21 @@ static int tegra_gcm_do_one_req(struct crypto_engine *engine, void *areq)
+ memcpy(rctx->iv, req->iv, GCM_AES_IV_SIZE);
+ rctx->iv[3] = (1 << 24);
+
++ /* Allocate buffers required */
++ rctx->inbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen;
++ rctx->inbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->inbuf.size,
++ &rctx->inbuf.addr, GFP_KERNEL);
++ if (!rctx->inbuf.buf)
++ return -ENOMEM;
++
++ rctx->outbuf.size = rctx->assoclen + rctx->authsize + rctx->cryptlen;
++ rctx->outbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->outbuf.size,
++ &rctx->outbuf.addr, GFP_KERNEL);
++ if (!rctx->outbuf.buf) {
++ ret = -ENOMEM;
++ goto outbuf_err;
++ }
++
+ /* If there is associated data perform GMAC operation */
+ if (rctx->assoclen) {
+ ret = tegra_gcm_do_gmac(ctx, rctx);
+@@ -1271,11 +1263,11 @@ static int tegra_gcm_do_one_req(struct crypto_engine *engine, void *areq)
+ ret = tegra_gcm_do_verify(ctx->se, rctx);
+
+ out:
+- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
++ dma_free_coherent(ctx->se->dev, rctx->outbuf.size,
+ rctx->outbuf.buf, rctx->outbuf.addr);
+
+ outbuf_err:
+- dma_free_coherent(ctx->se->dev, SE_AES_BUFLEN,
++ dma_free_coherent(ctx->se->dev, rctx->inbuf.size,
+ rctx->inbuf.buf, rctx->inbuf.addr);
+
+ /* Finalize the request if there are no errors */
+@@ -1502,6 +1494,11 @@ static int tegra_cmac_do_update(struct ahash_request *req)
+ return 0;
+ }
+
++ rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->datbuf.size,
++ &rctx->datbuf.addr, GFP_KERNEL);
++ if (!rctx->datbuf.buf)
++ return -ENOMEM;
++
+ /* Copy the previous residue first */
+ if (rctx->residue.size)
+ memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
+@@ -1527,6 +1524,9 @@ static int tegra_cmac_do_update(struct ahash_request *req)
+
+ tegra_cmac_copy_result(ctx->se, rctx);
+
++ dma_free_coherent(ctx->se->dev, rctx->datbuf.size,
++ rctx->datbuf.buf, rctx->datbuf.addr);
++
+ return ret;
+ }
+
+@@ -1541,10 +1541,20 @@ static int tegra_cmac_do_final(struct ahash_request *req)
+
+ if (!req->nbytes && !rctx->total_len && ctx->fallback_tfm) {
+ return crypto_shash_tfm_digest(ctx->fallback_tfm,
+- rctx->datbuf.buf, 0, req->result);
++ NULL, 0, req->result);
++ }
++
++ if (rctx->residue.size) {
++ rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->residue.size,
++ &rctx->datbuf.addr, GFP_KERNEL);
++ if (!rctx->datbuf.buf) {
++ ret = -ENOMEM;
++ goto out_free;
++ }
++
++ memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
+ }
+
+- memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
+ rctx->datbuf.size = rctx->residue.size;
+ rctx->total_len += rctx->residue.size;
+ rctx->config = tegra234_aes_cfg(SE_ALG_CMAC, 0);
+@@ -1570,8 +1580,10 @@ static int tegra_cmac_do_final(struct ahash_request *req)
+ writel(0, se->base + se->hw->regs->result + (i * 4));
+
+ out:
+- dma_free_coherent(se->dev, SE_SHA_BUFLEN,
+- rctx->datbuf.buf, rctx->datbuf.addr);
++ if (rctx->residue.size)
++ dma_free_coherent(se->dev, rctx->datbuf.size,
++ rctx->datbuf.buf, rctx->datbuf.addr);
++out_free:
+ dma_free_coherent(se->dev, crypto_ahash_blocksize(tfm) * 2,
+ rctx->residue.buf, rctx->residue.addr);
+ return ret;
+@@ -1683,28 +1695,15 @@ static int tegra_cmac_init(struct ahash_request *req)
+ rctx->residue.buf = dma_alloc_coherent(se->dev, rctx->blk_size * 2,
+ &rctx->residue.addr, GFP_KERNEL);
+ if (!rctx->residue.buf)
+- goto resbuf_fail;
++ return -ENOMEM;
+
+ rctx->residue.size = 0;
+
+- rctx->datbuf.buf = dma_alloc_coherent(se->dev, SE_SHA_BUFLEN,
+- &rctx->datbuf.addr, GFP_KERNEL);
+- if (!rctx->datbuf.buf)
+- goto datbuf_fail;
+-
+- rctx->datbuf.size = 0;
+-
+ /* Clear any previous result */
+ for (i = 0; i < CMAC_RESULT_REG_COUNT; i++)
+ writel(0, se->base + se->hw->regs->result + (i * 4));
+
+ return 0;
+-
+-datbuf_fail:
+- dma_free_coherent(se->dev, rctx->blk_size, rctx->residue.buf,
+- rctx->residue.addr);
+-resbuf_fail:
+- return -ENOMEM;
+ }
+
+ static int tegra_cmac_setkey(struct crypto_ahash *tfm, const u8 *key,
+diff --git a/drivers/crypto/tegra/tegra-se-hash.c b/drivers/crypto/tegra/tegra-se-hash.c
+index 726e30c0e63ebb..451b8eaab16aab 100644
+--- a/drivers/crypto/tegra/tegra-se-hash.c
++++ b/drivers/crypto/tegra/tegra-se-hash.c
+@@ -332,6 +332,11 @@ static int tegra_sha_do_update(struct ahash_request *req)
+ return 0;
+ }
+
++ rctx->datbuf.buf = dma_alloc_coherent(ctx->se->dev, rctx->datbuf.size,
++ &rctx->datbuf.addr, GFP_KERNEL);
++ if (!rctx->datbuf.buf)
++ return -ENOMEM;
++
+ /* Copy the previous residue first */
+ if (rctx->residue.size)
+ memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
+@@ -368,6 +373,9 @@ static int tegra_sha_do_update(struct ahash_request *req)
+ if (!(rctx->task & SHA_FINAL))
+ tegra_sha_copy_hash_result(se, rctx);
+
++ dma_free_coherent(ctx->se->dev, rctx->datbuf.size,
++ rctx->datbuf.buf, rctx->datbuf.addr);
++
+ return ret;
+ }
+
+@@ -380,7 +388,17 @@ static int tegra_sha_do_final(struct ahash_request *req)
+ u32 *cpuvaddr = se->cmdbuf->addr;
+ int size, ret = 0;
+
+- memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
++ if (rctx->residue.size) {
++ rctx->datbuf.buf = dma_alloc_coherent(se->dev, rctx->residue.size,
++ &rctx->datbuf.addr, GFP_KERNEL);
++ if (!rctx->datbuf.buf) {
++ ret = -ENOMEM;
++ goto out_free;
++ }
++
++ memcpy(rctx->datbuf.buf, rctx->residue.buf, rctx->residue.size);
++ }
++
+ rctx->datbuf.size = rctx->residue.size;
+ rctx->total_len += rctx->residue.size;
+
+@@ -397,8 +415,10 @@ static int tegra_sha_do_final(struct ahash_request *req)
+ memcpy(req->result, rctx->digest.buf, rctx->digest.size);
+
+ out:
+- dma_free_coherent(se->dev, SE_SHA_BUFLEN,
+- rctx->datbuf.buf, rctx->datbuf.addr);
++ if (rctx->residue.size)
++ dma_free_coherent(se->dev, rctx->datbuf.size,
++ rctx->datbuf.buf, rctx->datbuf.addr);
++out_free:
+ dma_free_coherent(se->dev, crypto_ahash_blocksize(tfm),
+ rctx->residue.buf, rctx->residue.addr);
+ dma_free_coherent(se->dev, rctx->digest.size, rctx->digest.buf,
+@@ -534,19 +554,11 @@ static int tegra_sha_init(struct ahash_request *req)
+ if (!rctx->residue.buf)
+ goto resbuf_fail;
+
+- rctx->datbuf.buf = dma_alloc_coherent(se->dev, SE_SHA_BUFLEN,
+- &rctx->datbuf.addr, GFP_KERNEL);
+- if (!rctx->datbuf.buf)
+- goto datbuf_fail;
+-
+ return 0;
+
+-datbuf_fail:
+- dma_free_coherent(se->dev, rctx->blk_size, rctx->residue.buf,
+- rctx->residue.addr);
+ resbuf_fail:
+- dma_free_coherent(se->dev, SE_SHA_BUFLEN, rctx->datbuf.buf,
+- rctx->datbuf.addr);
++ dma_free_coherent(se->dev, rctx->digest.size, rctx->digest.buf,
++ rctx->digest.addr);
+ digbuf_fail:
+ return -ENOMEM;
+ }
+diff --git a/drivers/crypto/tegra/tegra-se.h b/drivers/crypto/tegra/tegra-se.h
+index b54aefe717a174..e196a90eedb92c 100644
+--- a/drivers/crypto/tegra/tegra-se.h
++++ b/drivers/crypto/tegra/tegra-se.h
+@@ -340,8 +340,6 @@
+ #define SE_CRYPTO_CTR_REG_COUNT 4
+ #define SE_MAX_KEYSLOT 15
+ #define SE_MAX_MEM_ALLOC SZ_4M
+-#define SE_AES_BUFLEN 0x8000
+-#define SE_SHA_BUFLEN 0x2000
+
+ #define SHA_FIRST BIT(0)
+ #define SHA_UPDATE BIT(1)
+diff --git a/drivers/dma-buf/sw_sync.c b/drivers/dma-buf/sw_sync.c
+index c353029789cf1a..1290886f065e33 100644
+--- a/drivers/dma-buf/sw_sync.c
++++ b/drivers/dma-buf/sw_sync.c
+@@ -444,15 +444,17 @@ static int sw_sync_ioctl_get_deadline(struct sync_timeline *obj, unsigned long a
+ return -EINVAL;
+
+ pt = dma_fence_to_sync_pt(fence);
+- if (!pt)
+- return -EINVAL;
++ if (!pt) {
++ ret = -EINVAL;
++ goto put_fence;
++ }
+
+ spin_lock_irqsave(fence->lock, flags);
+- if (test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) {
+- data.deadline_ns = ktime_to_ns(pt->deadline);
+- } else {
++ if (!test_bit(SW_SYNC_HAS_DEADLINE_BIT, &fence->flags)) {
+ ret = -ENOENT;
++ goto unlock;
+ }
++ data.deadline_ns = ktime_to_ns(pt->deadline);
+ spin_unlock_irqrestore(fence->lock, flags);
+
+ dma_fence_put(fence);
+@@ -464,6 +466,13 @@ static int sw_sync_ioctl_get_deadline(struct sync_timeline *obj, unsigned long a
+ return -EFAULT;
+
+ return 0;
++
++unlock:
++ spin_unlock_irqrestore(fence->lock, flags);
++put_fence:
++ dma_fence_put(fence);
++
++ return ret;
+ }
+
+ static long sw_sync_ioctl(struct file *file, unsigned int cmd,
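[Editor's note] The sw_sync fix converts early returns that leaked the fence reference into a single unwind ladder, releasing resources in reverse acquisition order. A compact C sketch of the goto-cleanup idiom it adopts, with hypothetical resources:

    #include <stdio.h>

    static int  get_resource(void) { return 1; }
    static void put_resource(void) { printf("put\n"); }
    static void lock(void)         { printf("lock\n"); }
    static void unlock(void)       { printf("unlock\n"); }

    static int do_op(int fail_mid, int fail_locked)
    {
            int ret = 0;

            if (!get_resource())
                    return -1;

            if (fail_mid) {
                    ret = -1;
                    goto put;        /* release what was taken, nothing more */
            }

            lock();
            if (fail_locked) {
                    ret = -2;
                    goto unlock_put; /* unwind in reverse acquisition order */
            }
            unlock();
            put_resource();
            return 0;

    unlock_put:
            unlock();
    put:
            put_resource();
            return ret;
    }

    int main(void)
    {
            printf("ret=%d\n", do_op(0, 1));
            printf("ret=%d\n", do_op(1, 0));
            return 0;
    }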
+diff --git a/drivers/firmware/efi/libstub/efistub.h b/drivers/firmware/efi/libstub/efistub.h
+index 685098f9626f2b..eebcdf653d0729 100644
+--- a/drivers/firmware/efi/libstub/efistub.h
++++ b/drivers/firmware/efi/libstub/efistub.h
+@@ -171,7 +171,7 @@ void efi_set_u64_split(u64 data, u32 *lo, u32 *hi)
+ * the EFI memory map. Other related structures, e.g. x86 e820ext, need
+ * to factor in this headroom requirement as well.
+ */
+-#define EFI_MMAP_NR_SLACK_SLOTS 8
++#define EFI_MMAP_NR_SLACK_SLOTS 32
+
+ typedef struct efi_generic_dev_path efi_device_path_protocol_t;
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
+index 45affc02548c16..a3a7d20ab4fea9 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_bios.c
+@@ -437,6 +437,13 @@ static bool amdgpu_get_bios_apu(struct amdgpu_device *adev)
+ return true;
+ }
+
++static bool amdgpu_prefer_rom_resource(struct amdgpu_device *adev)
++{
++ struct resource *res = &adev->pdev->resource[PCI_ROM_RESOURCE];
++
++ return (res->flags & IORESOURCE_ROM_SHADOW);
++}
++
+ static bool amdgpu_get_bios_dgpu(struct amdgpu_device *adev)
+ {
+ if (amdgpu_atrm_get_bios(adev)) {
+@@ -455,14 +462,27 @@ static bool amdgpu_get_bios_dgpu(struct amdgpu_device *adev)
+ goto success;
+ }
+
+- if (amdgpu_read_platform_bios(adev)) {
+- dev_info(adev->dev, "Fetched VBIOS from platform\n");
+- goto success;
+- }
++ if (amdgpu_prefer_rom_resource(adev)) {
++ if (amdgpu_read_bios(adev)) {
++ dev_info(adev->dev, "Fetched VBIOS from ROM BAR\n");
++ goto success;
++ }
+
+- if (amdgpu_read_bios(adev)) {
+- dev_info(adev->dev, "Fetched VBIOS from ROM BAR\n");
+- goto success;
++ if (amdgpu_read_platform_bios(adev)) {
++ dev_info(adev->dev, "Fetched VBIOS from platform\n");
++ goto success;
++ }
++
++ } else {
++ if (amdgpu_read_platform_bios(adev)) {
++ dev_info(adev->dev, "Fetched VBIOS from platform\n");
++ goto success;
++ }
++
++ if (amdgpu_read_bios(adev)) {
++ dev_info(adev->dev, "Fetched VBIOS from ROM BAR\n");
++ goto success;
++ }
+ }
+
+ if (amdgpu_read_bios_from_rom(adev)) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 31d4df96889812..24d007715a14ae 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3322,6 +3322,7 @@ static int amdgpu_device_ip_fini(struct amdgpu_device *adev)
+ amdgpu_device_mem_scratch_fini(adev);
+ amdgpu_ib_pool_fini(adev);
+ amdgpu_seq64_fini(adev);
++ amdgpu_doorbell_fini(adev);
+ }
+
+ r = adev->ip_blocks[i].version->funcs->sw_fini((void *)adev);
+@@ -4670,7 +4671,6 @@ void amdgpu_device_fini_sw(struct amdgpu_device *adev)
+
+ iounmap(adev->rmmio);
+ adev->rmmio = NULL;
+- amdgpu_doorbell_fini(adev);
+ drm_dev_exit(idx);
+ }
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+index 8e81a83d37d846..2f90fff1b9ddc0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c
+@@ -181,7 +181,7 @@ static void amdgpu_dma_buf_unmap(struct dma_buf_attachment *attach,
+ struct sg_table *sgt,
+ enum dma_data_direction dir)
+ {
+- if (sgt->sgl->page_link) {
++ if (sg_page(sgt->sgl)) {
+ dma_unmap_sgtable(attach->dev, sgt, dir, 0);
+ sg_free_table(sgt);
+ kfree(sgt);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+index 7978d5189c37d4..a9eb0927a7664f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+@@ -1795,7 +1795,6 @@ static const u16 amdgpu_unsupported_pciidlist[] = {
+ };
+
+ static const struct pci_device_id pciidlist[] = {
+-#ifdef CONFIG_DRM_AMDGPU_SI
+ {0x1002, 0x6780, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI},
+ {0x1002, 0x6784, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI},
+ {0x1002, 0x6788, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TAHITI},
+@@ -1868,8 +1867,6 @@ static const struct pci_device_id pciidlist[] = {
+ {0x1002, 0x6665, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|AMD_IS_MOBILITY},
+ {0x1002, 0x6667, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|AMD_IS_MOBILITY},
+ {0x1002, 0x666F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_HAINAN|AMD_IS_MOBILITY},
+-#endif
+-#ifdef CONFIG_DRM_AMDGPU_CIK
+ /* Kaveri */
+ {0x1002, 0x1304, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|AMD_IS_MOBILITY|AMD_IS_APU},
+ {0x1002, 0x1305, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|AMD_IS_APU},
+@@ -1952,7 +1949,6 @@ static const struct pci_device_id pciidlist[] = {
+ {0x1002, 0x985D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_MULLINS|AMD_IS_MOBILITY|AMD_IS_APU},
+ {0x1002, 0x985E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_MULLINS|AMD_IS_MOBILITY|AMD_IS_APU},
+ {0x1002, 0x985F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_MULLINS|AMD_IS_MOBILITY|AMD_IS_APU},
+-#endif
+ /* topaz */
+ {0x1002, 0x6900, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TOPAZ},
+ {0x1002, 0x6901, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_TOPAZ},
+@@ -2284,14 +2280,14 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
+ return -ENOTSUPP;
+ }
+
++ switch (flags & AMD_ASIC_MASK) {
++ case CHIP_TAHITI:
++ case CHIP_PITCAIRN:
++ case CHIP_VERDE:
++ case CHIP_OLAND:
++ case CHIP_HAINAN:
+ #ifdef CONFIG_DRM_AMDGPU_SI
+- if (!amdgpu_si_support) {
+- switch (flags & AMD_ASIC_MASK) {
+- case CHIP_TAHITI:
+- case CHIP_PITCAIRN:
+- case CHIP_VERDE:
+- case CHIP_OLAND:
+- case CHIP_HAINAN:
++ if (!amdgpu_si_support) {
+ dev_info(&pdev->dev,
+ "SI support provided by radeon.\n");
+ dev_info(&pdev->dev,
+@@ -2299,16 +2295,18 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
+ );
+ return -ENODEV;
+ }
+- }
++ break;
++#else
++ dev_info(&pdev->dev, "amdgpu is built without SI support.\n");
++ return -ENODEV;
+ #endif
++ case CHIP_KAVERI:
++ case CHIP_BONAIRE:
++ case CHIP_HAWAII:
++ case CHIP_KABINI:
++ case CHIP_MULLINS:
+ #ifdef CONFIG_DRM_AMDGPU_CIK
+- if (!amdgpu_cik_support) {
+- switch (flags & AMD_ASIC_MASK) {
+- case CHIP_KAVERI:
+- case CHIP_BONAIRE:
+- case CHIP_HAWAII:
+- case CHIP_KABINI:
+- case CHIP_MULLINS:
++ if (!amdgpu_cik_support) {
+ dev_info(&pdev->dev,
+ "CIK support provided by radeon.\n");
+ dev_info(&pdev->dev,
+@@ -2316,8 +2314,14 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
+ );
+ return -ENODEV;
+ }
+- }
++ break;
++#else
++ dev_info(&pdev->dev, "amdgpu is built without CIK support.\n");
++ return -ENODEV;
+ #endif
++ default:
++ break;
++ }
+
+ adev = devm_drm_dev_alloc(&pdev->dev, &amdgpu_kms_driver, typeof(*adev), ddev);
+ if (IS_ERR(adev))
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+index 971419e3a9bbdf..4c4bdc4f51b294 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
+@@ -161,8 +161,8 @@ void amdgpu_bo_placement_from_domain(struct amdgpu_bo *abo, u32 domain)
+ * When GTT is just an alternative to VRAM make sure that we
+ * only use it as fallback and still try to fill up VRAM first.
+ */
+- if (domain & abo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM &&
+- !(adev->flags & AMD_IS_APU))
++ if (abo->tbo.resource && !(adev->flags & AMD_IS_APU) &&
++ domain & abo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM)
+ places[c].flags |= TTM_PL_FLAG_FALLBACK;
+ c++;
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+index 231a3d490ea8e3..7a773fcd7752c2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v11_0.c
+@@ -859,6 +859,10 @@ static void mes_v11_0_get_fw_version(struct amdgpu_device *adev)
+ {
+ int pipe;
+
++ /* return early if we have already fetched these */
++ if (adev->mes.sched_version && adev->mes.kiq_version)
++ return;
++
+ /* get MES scheduler/KIQ versions */
+ mutex_lock(&adev->srbm_mutex);
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+index b3175ff676f33c..459f7b8d72b4d1 100644
+--- a/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/mes_v12_0.c
+@@ -1225,17 +1225,20 @@ static int mes_v12_0_queue_init(struct amdgpu_device *adev,
+ mes_v12_0_queue_init_register(ring);
+ }
+
+- /* get MES scheduler/KIQ versions */
+- mutex_lock(&adev->srbm_mutex);
+- soc21_grbm_select(adev, 3, pipe, 0, 0);
++ if (((pipe == AMDGPU_MES_SCHED_PIPE) && !adev->mes.sched_version) ||
++ ((pipe == AMDGPU_MES_KIQ_PIPE) && !adev->mes.kiq_version)) {
++ /* get MES scheduler/KIQ versions */
++ mutex_lock(&adev->srbm_mutex);
++ soc21_grbm_select(adev, 3, pipe, 0, 0);
+
+- if (pipe == AMDGPU_MES_SCHED_PIPE)
+- adev->mes.sched_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO);
+- else if (pipe == AMDGPU_MES_KIQ_PIPE && adev->enable_mes_kiq)
+- adev->mes.kiq_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO);
++ if (pipe == AMDGPU_MES_SCHED_PIPE)
++ adev->mes.sched_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO);
++ else if (pipe == AMDGPU_MES_KIQ_PIPE && adev->enable_mes_kiq)
++ adev->mes.kiq_version = RREG32_SOC15(GC, 0, regCP_MES_GP3_LO);
+
+- soc21_grbm_select(adev, 0, 0, 0, 0);
+- mutex_unlock(&adev->srbm_mutex);
++ soc21_grbm_select(adev, 0, 0, 0, 0);
++ mutex_unlock(&adev->srbm_mutex);
++ }
+
+ return 0;
+ }
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 260b6b8d29fd6c..c22da13859bd51 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1690,6 +1690,13 @@ static const struct dmi_system_id dmi_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "HP Elite mt645 G8 Mobile Thin Client"),
+ },
+ },
++ {
++ .callback = edp0_on_dp1_callback,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 645 14 inch G11 Notebook PC"),
++ },
++ },
+ {
+ .callback = edp0_on_dp1_callback,
+ .matches = {
+@@ -1697,6 +1704,20 @@ static const struct dmi_system_id dmi_quirk_table[] = {
+ DMI_MATCH(DMI_PRODUCT_NAME, "HP EliteBook 665 16 inch G11 Notebook PC"),
+ },
+ },
++ {
++ .callback = edp0_on_dp1_callback,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook 445 14 inch G11 Notebook PC"),
++ },
++ },
++ {
++ .callback = edp0_on_dp1_callback,
++ .matches = {
++ DMI_MATCH(DMI_SYS_VENDOR, "HP"),
++ DMI_MATCH(DMI_PRODUCT_NAME, "HP ProBook 465 16 inch G11 Notebook PC"),
++ },
++ },
+ {}
+ /* TODO: refactor this from a fixed table to a dynamic option */
+ };
+@@ -8458,14 +8479,39 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
+ int offdelay;
+
+ if (acrtc_state) {
+- if (amdgpu_ip_version(adev, DCE_HWIP, 0) <
+- IP_VERSION(3, 5, 0) ||
+- acrtc_state->stream->link->psr_settings.psr_version <
+- DC_PSR_VERSION_UNSUPPORTED ||
+- !(adev->flags & AMD_IS_APU)) {
+- timing = &acrtc_state->stream->timing;
+-
+- /* at least 2 frames */
++ timing = &acrtc_state->stream->timing;
++
++ /*
++ * Depending on when the HW latching event of double-buffered
++ * registers happen relative to the PSR SDP deadline, and how
++ * bad the Panel clock has drifted since the last ALPM off
++ * event, there can be up to 3 frames of delay between sending
++ * the PSR exit cmd to DMUB fw, and when the panel starts
++ * displaying live frames.
++ *
++ * We can set:
++ *
++ * 20/100 * offdelay_ms = 3_frames_ms
++ * => offdelay_ms = 5 * 3_frames_ms
++ *
++ * This ensures that `3_frames_ms` will only be experienced as a
++ * 20% delay on top of how long the display has been static, and
++ * thus make the delay less perceivable.
++ */
++ if (acrtc_state->stream->link->psr_settings.psr_version <
++ DC_PSR_VERSION_UNSUPPORTED) {
++ offdelay = DIV64_U64_ROUND_UP((u64)5 * 3 * 10 *
++ timing->v_total *
++ timing->h_total,
++ timing->pix_clk_100hz);
++ config.offdelay_ms = offdelay ?: 30;
++ } else if (amdgpu_ip_version(adev, DCE_HWIP, 0) <
++ IP_VERSION(3, 5, 0) ||
++ !(adev->flags & AMD_IS_APU)) {
++ /*
++ * Older HW and DGPU have issues with instant off;
++ * use a 2 frame offdelay.
++ */
+ offdelay = DIV64_U64_ROUND_UP((u64)20 *
+ timing->v_total *
+ timing->h_total,
+@@ -8473,6 +8519,8 @@ static void manage_dm_interrupts(struct amdgpu_device *adev,
+
+ config.offdelay_ms = offdelay ?: 30;
+ } else {
++ /* offdelay_ms = 0 will never disable vblank */
++ config.offdelay_ms = 1;
+ config.disable_immediate = true;
+ }
+
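[Editor's note] The new comment's arithmetic can be checked against a concrete timing. Assuming a hypothetical 1080p60 CEA mode (h_total 2200, v_total 1125, pixel clock 148.5 MHz, i.e. 1485000 in the 100 Hz units the formula uses), it yields a ~17 ms frame and a 250 ms offdelay:

    #include <stdio.h>

    int main(void)
    {
            /* Hypothetical 1080p60 timing, not taken from the patch. */
            unsigned long long h_total = 2200, v_total = 1125;
            unsigned long long pix_clk_100hz = 1485000;

            /* frame time in ms, rounded up: v_total * h_total * 10 / clk */
            unsigned long long frame_ms =
                    (v_total * h_total * 10 + pix_clk_100hz - 1) / pix_clk_100hz;

            /* offdelay = 5 * 3 frames, matching 20/100 * offdelay = 3 frames */
            unsigned long long offdelay =
                    (5ULL * 3 * 10 * v_total * h_total + pix_clk_100hz - 1) /
                    pix_clk_100hz;

            printf("frame ~%llu ms, offdelay ~%llu ms\n", frame_ms, offdelay);
            return 0;
    }

For this mode the 5x factor turns a worst-case 3-frame (50 ms) exit latency into a delay hidden behind 250 ms of guaranteed-static screen time.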
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+index 70fcfae8e4c552..2ac56e79df05e6 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_crtc.c
+@@ -113,6 +113,7 @@ bool amdgpu_dm_crtc_vrr_active(const struct dm_crtc_state *dm_state)
+ *
+ * Panel Replay and PSR SU
+ * - Enable when:
++ * - VRR is disabled
+ * - vblank counter is disabled
+ * - entry is allowed: usermode demonstrates an adequate number of fast
+ * commits)
+@@ -131,19 +132,20 @@ static void amdgpu_dm_crtc_set_panel_sr_feature(
+ bool is_sr_active = (link->replay_settings.replay_allow_active ||
+ link->psr_settings.psr_allow_active);
+ bool is_crc_window_active = false;
++ bool vrr_active = amdgpu_dm_crtc_vrr_active_irq(vblank_work->acrtc);
+
+ #ifdef CONFIG_DRM_AMD_SECURE_DISPLAY
+ is_crc_window_active =
+ amdgpu_dm_crc_window_is_activated(&vblank_work->acrtc->base);
+ #endif
+
+- if (link->replay_settings.replay_feature_enabled &&
++ if (link->replay_settings.replay_feature_enabled && !vrr_active &&
+ allow_sr_entry && !is_sr_active && !is_crc_window_active) {
+ amdgpu_dm_replay_enable(vblank_work->stream, true);
+ } else if (vblank_enabled) {
+ if (link->psr_settings.psr_version < DC_PSR_VERSION_SU_1 && is_sr_active)
+ amdgpu_dm_psr_disable(vblank_work->stream, false);
+- } else if (link->psr_settings.psr_feature_enabled &&
++ } else if (link->psr_settings.psr_feature_enabled && !vrr_active &&
+ allow_sr_entry && !is_sr_active && !is_crc_window_active) {
+
+ struct amdgpu_dm_connector *aconn =
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c
+index d35dd507cb9f85..cb187604744e96 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml21/dml21_wrapper.c
+@@ -87,6 +87,8 @@ static void dml21_init(const struct dc *in_dc, struct dml2_context **dml_ctx, co
+ /* Store configuration options */
+ (*dml_ctx)->config = *config;
+
++ DC_FP_START();
++
+ /*Initialize SOCBB and DCNIP params */
+ dml21_initialize_soc_bb_params(&(*dml_ctx)->v21.dml_init, config, in_dc);
+ dml21_initialize_ip_params(&(*dml_ctx)->v21.dml_init, config, in_dc);
+@@ -97,6 +99,8 @@ static void dml21_init(const struct dc *in_dc, struct dml2_context **dml_ctx, co
+
+ /*Initialize DML21 instance */
+ dml2_initialize_instance(&(*dml_ctx)->v21.dml_init);
++
++ DC_FP_END();
+ }
+
+ bool dml21_create(const struct dc *in_dc, struct dml2_context **dml_ctx, const struct dml2_configuration_options *config)
+@@ -277,11 +281,16 @@ bool dml21_validate(const struct dc *in_dc, struct dc_state *context, struct dml
+ {
+ bool out = false;
+
++ DC_FP_START();
++
+ /* Use dml_validate_only for fast_validate path */
+- if (fast_validate) {
++ if (fast_validate)
+ out = dml21_check_mode_support(in_dc, context, dml_ctx);
+- } else
++ else
+ out = dml21_mode_check_and_programming(in_dc, context, dml_ctx);
++
++ DC_FP_END();
++
+ return out;
+ }
+
+@@ -420,8 +429,12 @@ void dml21_copy(struct dml2_context *dst_dml_ctx,
+
+ dst_dml_ctx->v21.mode_programming.programming = dst_dml2_programming;
+
++ DC_FP_START();
++
+ /* need to initialize copied instance for internal references to be correct */
+ dml2_initialize_instance(&dst_dml_ctx->v21.dml_init);
++
++ DC_FP_END();
+ }
+
+ bool dml21_create_copy(struct dml2_context **dst_dml_ctx,
+diff --git a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+index 4d64c45930da49..cb2cb89dfecb2c 100644
+--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_wrapper.c
+@@ -734,11 +734,16 @@ bool dml2_validate(const struct dc *in_dc, struct dc_state *context, struct dml2
+ return out;
+ }
+
++ DC_FP_START();
++
+ /* Use dml_validate_only for fast_validate path */
+ if (fast_validate)
+ out = dml2_validate_only(context);
+ else
+ out = dml2_validate_and_build_resource(in_dc, context);
++
++ DC_FP_END();
++
+ return out;
+ }
+
+@@ -779,11 +784,15 @@ static void dml2_init(const struct dc *in_dc, const struct dml2_configuration_op
+ break;
+ }
+
++ DC_FP_START();
++
+ initialize_dml2_ip_params(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.ip);
+
+ initialize_dml2_soc_bbox(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.soc);
+
+ initialize_dml2_soc_states(*dml2, in_dc, &(*dml2)->v20.dml_core_ctx.soc, &(*dml2)->v20.dml_core_ctx.states);
++
++ DC_FP_END();
+ }
+
+ bool dml2_create(const struct dc *in_dc, const struct dml2_configuration_options *config, struct dml2_context **dml2)
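[Editor's note] The dml2/dml21 changes bracket every floating-point region with DC_FP_START()/DC_FP_END(), since kernel code broadly may not touch the FPU outside such a guard. A loose user-space analogue of the scoped-guard shape, with a depth counter standing in for the real FPU save/restore:

    #include <stdio.h>

    static int fp_depth;   /* stand-in for the FPU-enable nesting count */

    static void fp_begin(void) { fp_depth++; }   /* DC_FP_START() analogue */
    static void fp_end(void)   { fp_depth--; }   /* DC_FP_END() analogue */

    static double validate(double clk_mhz)
    {
            double r;
            fp_begin();             /* all FP math confined to the region */
            r = clk_mhz * 1.10;     /* e.g. a 10% margin computation */
            fp_end();
            return r;
    }

    int main(void)
    {
            printf("%.1f (depth=%d)\n", validate(400.0), fp_depth);
            return 0;
    }

In the real driver the guard must enclose initialization, copy, and validate paths alike, which is exactly what the hunks above add.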
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+index 36d12db8d02256..f5f1ccd8303cf3 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn20/dcn20_hwseq.c
+@@ -3003,7 +3003,11 @@ void dcn20_enable_stream(struct pipe_ctx *pipe_ctx)
+ dccg->funcs->set_dpstreamclk(dccg, DTBCLK0, tg->inst, dp_hpo_inst);
+
+ phyd32clk = get_phyd32clk_src(link);
+- dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
++ if (link->cur_link_settings.link_rate == LINK_RATE_UNKNOWN) {
++ dccg->funcs->disable_symclk32_se(dccg, dp_hpo_inst);
++ } else {
++ dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
++ }
+ } else {
+ if (dccg->funcs->enable_symclk_se)
+ dccg->funcs->enable_symclk_se(dccg, stream_enc->stream_enc_inst,
+diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+index 0b743669f23b44..62f1e597787e69 100644
+--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn401/dcn401_hwseq.c
+@@ -1001,8 +1001,11 @@ void dcn401_enable_stream(struct pipe_ctx *pipe_ctx)
+ if (dc_is_dp_signal(pipe_ctx->stream->signal) || dc_is_virtual_signal(pipe_ctx->stream->signal)) {
+ if (dc->link_srv->dp_is_128b_132b_signal(pipe_ctx)) {
+ dccg->funcs->set_dpstreamclk(dccg, DPREFCLK, tg->inst, dp_hpo_inst);
+-
+- dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
++ if (link->cur_link_settings.link_rate == LINK_RATE_UNKNOWN) {
++ dccg->funcs->disable_symclk32_se(dccg, dp_hpo_inst);
++ } else {
++ dccg->funcs->enable_symclk32_se(dccg, dp_hpo_inst, phyd32clk);
++ }
+ } else {
+ /* need to set DTBCLK_P source to DPREFCLK for DP8B10B */
+ dccg->funcs->set_dtbclk_p_src(dccg, DPREFCLK, tg->inst);
+diff --git a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+index 80386f698ae4de..0ca6358a9782e2 100644
+--- a/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/resource/dcn31/dcn31_resource.c
+@@ -891,7 +891,7 @@ static const struct dc_debug_options debug_defaults_drv = {
+ .disable_z10 = true,
+ .enable_legacy_fast_update = true,
+ .enable_z9_disable_interface = true, /* Allow support for the PMFW interface for disable Z9*/
+- .dml_hostvm_override = DML_HOSTVM_NO_OVERRIDE,
++ .dml_hostvm_override = DML_HOSTVM_OVERRIDE_FALSE,
+ .using_dml2 = false,
+ };
+
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_thermal.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_thermal.c
+index a8fc0fa44db69d..ba5c1237fcfe1a 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_thermal.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/smu7_thermal.c
+@@ -267,10 +267,10 @@ int smu7_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t speed)
+ if (hwmgr->thermal_controller.fanInfo.bNoFan ||
+ (hwmgr->thermal_controller.fanInfo.
+ ucTachometerPulsesPerRevolution == 0) ||
+- speed == 0 ||
++ (!speed || speed > UINT_MAX/8) ||
+ (speed < hwmgr->thermal_controller.fanInfo.ulMinRPM) ||
+ (speed > hwmgr->thermal_controller.fanInfo.ulMaxRPM))
+- return 0;
++ return -EINVAL;
+
+ if (PP_CAP(PHM_PlatformCaps_MicrocodeFanControl))
+ smu7_fan_ctrl_stop_smc_fan_control(hwmgr);
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c
+index 379012494da57b..56423aedf3fa7c 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega10_thermal.c
+@@ -307,10 +307,10 @@ int vega10_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t speed)
+ int result = 0;
+
+ if (hwmgr->thermal_controller.fanInfo.bNoFan ||
+- speed == 0 ||
++ (!speed || speed > UINT_MAX/8) ||
+ (speed < hwmgr->thermal_controller.fanInfo.ulMinRPM) ||
+ (speed > hwmgr->thermal_controller.fanInfo.ulMaxRPM))
+- return -1;
++ return -EINVAL;
+
+ if (PP_CAP(PHM_PlatformCaps_MicrocodeFanControl))
+ result = vega10_fan_ctrl_stop_smc_fan_control(hwmgr);
+diff --git a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_thermal.c b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_thermal.c
+index a3331ffb2daf7f..1b1c88590156cd 100644
+--- a/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_thermal.c
++++ b/drivers/gpu/drm/amd/pm/powerplay/hwmgr/vega20_thermal.c
+@@ -191,7 +191,7 @@ int vega20_fan_ctrl_set_fan_speed_rpm(struct pp_hwmgr *hwmgr, uint32_t speed)
+ uint32_t tach_period, crystal_clock_freq;
+ int result = 0;
+
+- if (!speed)
++ if (!speed || speed > UINT_MAX/8)
+ return -EINVAL;
+
+ if (PP_CAP(PHM_PlatformCaps_MicrocodeFanControl)) {
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+index fc1297fecc62e0..d4b954b22441c5 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/arcturus_ppt.c
+@@ -1267,6 +1267,9 @@ static int arcturus_set_fan_speed_rpm(struct smu_context *smu,
+ uint32_t crystal_clock_freq = 2500;
+ uint32_t tach_period;
+
++ if (!speed || speed > UINT_MAX/8)
++ return -EINVAL;
++
+ tach_period = 60 * crystal_clock_freq * 10000 / (8 * speed);
+ WREG32_SOC15(THM, 0, mmCG_TACH_CTRL_ARCT,
+ REG_SET_FIELD(RREG32_SOC15(THM, 0, mmCG_TACH_CTRL_ARCT),
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+index 16fcd9dcd202e0..6c61e87359dd48 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/smu_v11_0.c
+@@ -1199,7 +1199,7 @@ int smu_v11_0_set_fan_speed_rpm(struct smu_context *smu,
+ uint32_t crystal_clock_freq = 2500;
+ uint32_t tach_period;
+
+- if (speed == 0)
++ if (!speed || speed > UINT_MAX/8)
+ return -EINVAL;
+ /*
+ * To prevent from possible overheat, some ASICs may have requirement
+diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+index 2024a85fa11bd5..4f78c84da780c7 100644
+--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
+@@ -1228,7 +1228,7 @@ int smu_v13_0_set_fan_speed_rpm(struct smu_context *smu,
+ uint32_t tach_period;
+ int ret;
+
+- if (!speed)
++ if (!speed || speed > UINT_MAX/8)
+ return -EINVAL;
+
+ ret = smu_v13_0_auto_fan_control(smu, 0);
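
[Editorial note, not part of the patch] The fan-control hunks above all add the same `!speed || speed > UINT_MAX/8` guard, and they all feed the same expression: `tach_period = 60 * crystal_clock_freq * 10000 / (8 * speed)`. Zero is rejected as a divisor, and anything above UINT_MAX/8 is rejected because `8 * speed` would wrap in 32-bit arithmetic before the division. A minimal userspace sketch of the wrap:

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Mirrors the driver expression: every operand is 32-bit, so
     * 8 * speed is reduced mod 2^32 before the division happens. */
    static uint32_t tach_period(uint32_t crystal_clock_freq, uint32_t speed)
    {
        return 60u * crystal_clock_freq * 10000u / (8u * speed);
    }

    int main(void)
    {
        uint32_t bogus = UINT_MAX / 8 + 2;      /* 8 * bogus wraps to 8 */

        printf("sane:  %u\n", tach_period(2500, 3000));   /* 62500 */
        printf("bogus: %u\n", tach_period(2500, bogus));  /* 187500000 */
        /* The bogus period is accepted silently; UINT_MAX / 8 + 1 is worse
         * still, since 8 * speed wraps to 0 and the division faults. The
         * new check returns -EINVAL for both. */
        return 0;
    }
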
+diff --git a/drivers/gpu/drm/ast/ast_dp.c b/drivers/gpu/drm/ast/ast_dp.c
+index 00b364f9a71e54..5dadc895e7f26b 100644
+--- a/drivers/gpu/drm/ast/ast_dp.c
++++ b/drivers/gpu/drm/ast/ast_dp.c
+@@ -17,6 +17,12 @@ static bool ast_astdp_is_connected(struct ast_device *ast)
+ {
+ if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDF, AST_IO_VGACRDF_HPD))
+ return false;
++ /*
++ * HPD might be set even if no monitor is connected, so also check that
++ * the link training was successful.
++ */
++ if (!ast_get_index_reg_mask(ast, AST_IO_VGACRI, 0xDC, AST_IO_VGACRDC_LINK_SUCCESS))
++ return false;
+ return true;
+ }
+
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index d5eb8de645a9a3..4f8899cd125d9d 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -1006,7 +1006,9 @@ static bool vrr_params_changed(const struct intel_crtc_state *old_crtc_state,
+ old_crtc_state->vrr.vmin != new_crtc_state->vrr.vmin ||
+ old_crtc_state->vrr.vmax != new_crtc_state->vrr.vmax ||
+ old_crtc_state->vrr.guardband != new_crtc_state->vrr.guardband ||
+- old_crtc_state->vrr.pipeline_full != new_crtc_state->vrr.pipeline_full;
++ old_crtc_state->vrr.pipeline_full != new_crtc_state->vrr.pipeline_full ||
++ old_crtc_state->vrr.vsync_start != new_crtc_state->vrr.vsync_start ||
++ old_crtc_state->vrr.vsync_end != new_crtc_state->vrr.vsync_end;
+ }
+
+ static bool cmrr_params_changed(const struct intel_crtc_state *old_crtc_state,
+diff --git a/drivers/gpu/drm/i915/gvt/opregion.c b/drivers/gpu/drm/i915/gvt/opregion.c
+index 908f910420c20c..4ef45520e199af 100644
+--- a/drivers/gpu/drm/i915/gvt/opregion.c
++++ b/drivers/gpu/drm/i915/gvt/opregion.c
+@@ -222,7 +222,6 @@ int intel_vgpu_init_opregion(struct intel_vgpu *vgpu)
+ u8 *buf;
+ struct opregion_header *header;
+ struct vbt v;
+- const char opregion_signature[16] = OPREGION_SIGNATURE;
+
+ gvt_dbg_core("init vgpu%d opregion\n", vgpu->id);
+ vgpu_opregion(vgpu)->va = (void *)__get_free_pages(GFP_KERNEL |
+@@ -236,8 +235,10 @@ int intel_vgpu_init_opregion(struct intel_vgpu *vgpu)
+ /* emulated opregion with VBT mailbox only */
+ buf = (u8 *)vgpu_opregion(vgpu)->va;
+ header = (struct opregion_header *)buf;
+- memcpy(header->signature, opregion_signature,
+- sizeof(opregion_signature));
++
++ static_assert(sizeof(header->signature) == sizeof(OPREGION_SIGNATURE) - 1);
++ memcpy(header->signature, OPREGION_SIGNATURE, sizeof(header->signature));
++
+ header->size = 0x8;
+ header->opregion_ver = 0x02000000;
+ header->mboxes = MBOX_VBT;
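
[Editorial note, not part of the patch] The gvt hunk trades a stack copy of the signature for a direct copy out of the string literal, with a static_assert pinning the one subtle point: the destination field is exactly the literal's length without its NUL terminator. The same idiom in isolation (a sketch; `SIG` and the struct stand in for the real opregion layout):

    #include <assert.h>   /* static_assert, C11 */
    #include <string.h>

    #define SIG "IntelGraphicsMem"        /* 16 characters + implicit NUL */

    struct header {
        char signature[16];               /* deliberately not NUL-terminated */
    };

    static void fill(struct header *h)
    {
        /* sizeof(SIG) counts the NUL, hence the -1. If the literal and the
         * field ever drift apart, this fails at compile time instead of
         * silently truncating or overrunning. */
        static_assert(sizeof(h->signature) == sizeof(SIG) - 1, "size mismatch");
        memcpy(h->signature, SIG, sizeof(h->signature));
    }

    int main(void) { struct header h; fill(&h); return 0; }
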
+diff --git a/drivers/gpu/drm/imagination/pvr_fw.c b/drivers/gpu/drm/imagination/pvr_fw.c
+index 3debc9870a82ae..d09c4c68411627 100644
+--- a/drivers/gpu/drm/imagination/pvr_fw.c
++++ b/drivers/gpu/drm/imagination/pvr_fw.c
+@@ -732,7 +732,7 @@ pvr_fw_process(struct pvr_device *pvr_dev)
+ fw_mem->core_data, fw_mem->core_code_alloc_size);
+
+ if (err)
+- goto err_free_fw_core_data_obj;
++ goto err_free_kdata;
+
+ memcpy(fw_code_ptr, fw_mem->code, fw_mem->code_alloc_size);
+ memcpy(fw_data_ptr, fw_mem->data, fw_mem->data_alloc_size);
+@@ -742,10 +742,14 @@ pvr_fw_process(struct pvr_device *pvr_dev)
+ memcpy(fw_core_data_ptr, fw_mem->core_data, fw_mem->core_data_alloc_size);
+
+ /* We're finished with the firmware section memory on the CPU, unmap. */
+- if (fw_core_data_ptr)
++ if (fw_core_data_ptr) {
+ pvr_fw_object_vunmap(fw_mem->core_data_obj);
+- if (fw_core_code_ptr)
++ fw_core_data_ptr = NULL;
++ }
++ if (fw_core_code_ptr) {
+ pvr_fw_object_vunmap(fw_mem->core_code_obj);
++ fw_core_code_ptr = NULL;
++ }
+ pvr_fw_object_vunmap(fw_mem->data_obj);
+ fw_data_ptr = NULL;
+ pvr_fw_object_vunmap(fw_mem->code_obj);
+@@ -753,7 +757,7 @@ pvr_fw_process(struct pvr_device *pvr_dev)
+
+ err = pvr_fw_create_fwif_connection_ctl(pvr_dev);
+ if (err)
+- goto err_free_fw_core_data_obj;
++ goto err_free_kdata;
+
+ return 0;
+
+@@ -763,13 +767,16 @@ pvr_fw_process(struct pvr_device *pvr_dev)
+ kfree(fw_mem->data);
+ kfree(fw_mem->code);
+
+-err_free_fw_core_data_obj:
+ if (fw_core_data_ptr)
+- pvr_fw_object_unmap_and_destroy(fw_mem->core_data_obj);
++ pvr_fw_object_vunmap(fw_mem->core_data_obj);
++ if (fw_mem->core_data_obj)
++ pvr_fw_object_destroy(fw_mem->core_data_obj);
+
+ err_free_fw_core_code_obj:
+ if (fw_core_code_ptr)
+- pvr_fw_object_unmap_and_destroy(fw_mem->core_code_obj);
++ pvr_fw_object_vunmap(fw_mem->core_code_obj);
++ if (fw_mem->core_code_obj)
++ pvr_fw_object_destroy(fw_mem->core_code_obj);
+
+ err_free_fw_data_obj:
+ if (fw_data_ptr)
+@@ -836,6 +843,12 @@ pvr_fw_cleanup(struct pvr_device *pvr_dev)
+ struct pvr_fw_mem *fw_mem = &pvr_dev->fw_dev.mem;
+
+ pvr_fw_fini_fwif_connection_ctl(pvr_dev);
++
++ kfree(fw_mem->core_data);
++ kfree(fw_mem->core_code);
++ kfree(fw_mem->data);
++ kfree(fw_mem->code);
++
+ if (fw_mem->core_code_obj)
+ pvr_fw_object_destroy(fw_mem->core_code_obj);
+ if (fw_mem->core_data_obj)
+diff --git a/drivers/gpu/drm/imagination/pvr_job.c b/drivers/gpu/drm/imagination/pvr_job.c
+index 78c2f3c6dce019..6a15c1d2d871d3 100644
+--- a/drivers/gpu/drm/imagination/pvr_job.c
++++ b/drivers/gpu/drm/imagination/pvr_job.c
+@@ -684,6 +684,13 @@ pvr_jobs_link_geom_frag(struct pvr_job_data *job_data, u32 *job_count)
+ geom_job->paired_job = frag_job;
+ frag_job->paired_job = geom_job;
+
++ /* The geometry job pvr_job structure is used when the fragment
++ * job is being prepared by the GPU scheduler. Have the fragment
++ * job hold a reference on the geometry job to prevent it being
++ * freed until the fragment job has finished with it.
++ */
++ pvr_job_get(geom_job);
++
+ /* Skip the fragment job we just paired to the geometry job. */
+ i++;
+ }
+diff --git a/drivers/gpu/drm/imagination/pvr_queue.c b/drivers/gpu/drm/imagination/pvr_queue.c
+index 87780cc7c0c322..130473cfdfc9b7 100644
+--- a/drivers/gpu/drm/imagination/pvr_queue.c
++++ b/drivers/gpu/drm/imagination/pvr_queue.c
+@@ -866,6 +866,10 @@ static void pvr_queue_free_job(struct drm_sched_job *sched_job)
+ struct pvr_job *job = container_of(sched_job, struct pvr_job, base);
+
+ drm_sched_job_cleanup(sched_job);
++
++ if (job->type == DRM_PVR_JOB_TYPE_FRAGMENT && job->paired_job)
++ pvr_job_put(job->paired_job);
++
+ job->paired_job = NULL;
+ pvr_job_put(job);
+ }
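
[Editorial note, not part of the patch] Taken together, the two imagination hunks implement one lifetime rule: when a geometry/fragment pair is linked, the fragment job takes a reference on the geometry job (whose pvr_job structure the scheduler still touches while preparing the fragment), and that reference is dropped only in the scheduler's free path. The rule reduced to a plain refcount sketch, with illustrative structs rather than the driver's:

    #include <stdio.h>
    #include <stdlib.h>

    struct job {
        int refs;
        struct job *paired_job;     /* fragment -> geometry back-reference */
    };

    static struct job *job_get(struct job *j) { j->refs++; return j; }

    static void job_put(struct job *j)
    {
        if (--j->refs == 0) {
            printf("freeing job %p\n", (void *)j);
            free(j);
        }
    }

    /* Pairing: the fragment keeps the geometry job alive until it is freed. */
    static void link_pair(struct job *geom, struct job *frag)
    {
        frag->paired_job = job_get(geom);
    }

    static void free_frag(struct job *frag)
    {
        if (frag->paired_job)
            job_put(frag->paired_job);   /* may be the last reference */
        job_put(frag);
    }

    int main(void)
    {
        struct job *geom = calloc(1, sizeof(*geom));
        struct job *frag = calloc(1, sizeof(*frag));

        geom->refs = frag->refs = 1;
        link_pair(geom, frag);
        job_put(geom);      /* submitter's ref gone; frag still holds one */
        free_frag(frag);    /* geometry is only freed here */
        return 0;
    }
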
+diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c
+index fb71658c3117b2..6067d08aeee34b 100644
+--- a/drivers/gpu/drm/mgag200/mgag200_mode.c
++++ b/drivers/gpu/drm/mgag200/mgag200_mode.c
+@@ -223,7 +223,7 @@ void mgag200_set_mode_regs(struct mga_device *mdev, const struct drm_display_mod
+ vsyncstr = mode->crtc_vsync_start - 1;
+ vsyncend = mode->crtc_vsync_end - 1;
+ vtotal = mode->crtc_vtotal - 2;
+- vblkstr = mode->crtc_vblank_start;
++ vblkstr = mode->crtc_vblank_start - 1;
+ vblkend = vtotal + 1;
+
+ linecomp = vdispend;
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+index e386b059187acf..67fa528f546d33 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -1126,49 +1126,50 @@ static void a6xx_gmu_shutdown(struct a6xx_gmu *gmu)
+ struct a6xx_gpu *a6xx_gpu = container_of(gmu, struct a6xx_gpu, gmu);
+ struct adreno_gpu *adreno_gpu = &a6xx_gpu->base;
+ u32 val;
++ int ret;
+
+ /*
+- * The GMU may still be in slumber unless the GPU started so check and
+- * skip putting it back into slumber if so
++ * GMU firmware's internal power state gets messed up if we send "prepare_slumber" hfi when
++ * oob_gpu handshake wasn't done after the last wake up. So do a dummy handshake here when
++ * required
+ */
+- val = gmu_read(gmu, REG_A6XX_GPU_GMU_CX_GMU_RPMH_POWER_STATE);
++ if (adreno_gpu->base.needs_hw_init) {
++ if (a6xx_gmu_set_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET))
++ goto force_off;
+
+- if (val != 0xf) {
+- int ret = a6xx_gmu_wait_for_idle(gmu);
++ a6xx_gmu_clear_oob(&a6xx_gpu->gmu, GMU_OOB_GPU_SET);
++ }
+
+- /* If the GMU isn't responding assume it is hung */
+- if (ret) {
+- a6xx_gmu_force_off(gmu);
+- return;
+- }
++ ret = a6xx_gmu_wait_for_idle(gmu);
+
+- a6xx_bus_clear_pending_transactions(adreno_gpu, a6xx_gpu->hung);
++ /* If the GMU isn't responding assume it is hung */
++ if (ret)
++ goto force_off;
+
+- /* tell the GMU we want to slumber */
+- ret = a6xx_gmu_notify_slumber(gmu);
+- if (ret) {
+- a6xx_gmu_force_off(gmu);
+- return;
+- }
++ a6xx_bus_clear_pending_transactions(adreno_gpu, a6xx_gpu->hung);
+
+- ret = gmu_poll_timeout(gmu,
+- REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS, val,
+- !(val & A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS_GPUBUSYIGNAHB),
+- 100, 10000);
++ /* tell the GMU we want to slumber */
++ ret = a6xx_gmu_notify_slumber(gmu);
++ if (ret)
++ goto force_off;
+
+- /*
+- * Let the user know we failed to slumber but don't worry too
+- * much because we are powering down anyway
+- */
++ ret = gmu_poll_timeout(gmu,
++ REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS, val,
++ !(val & A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS_GPUBUSYIGNAHB),
++ 100, 10000);
+
+- if (ret)
+- DRM_DEV_ERROR(gmu->dev,
+- "Unable to slumber GMU: status = 0%x/0%x\n",
+- gmu_read(gmu,
+- REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS),
+- gmu_read(gmu,
+- REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS2));
+- }
++ /*
++ * Let the user know we failed to slumber but don't worry too
++ * much because we are powering down anyway
++ */
++
++ if (ret)
++ DRM_DEV_ERROR(gmu->dev,
++ "Unable to slumber GMU: status = 0%x/0%x\n",
++ gmu_read(gmu,
++ REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS),
++ gmu_read(gmu,
++ REG_A6XX_GPU_GMU_AO_GPU_CX_BUSY_STATUS2));
+
+ /* Turn off HFI */
+ a6xx_hfi_stop(gmu);
+@@ -1178,6 +1179,11 @@ static void a6xx_gmu_shutdown(struct a6xx_gmu *gmu)
+
+ /* Tell RPMh to power off the GPU */
+ a6xx_rpmh_stop(gmu);
++
++ return;
++
++force_off:
++ a6xx_gmu_force_off(gmu);
+ }
+
+
+diff --git a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+index 702b8d4b349723..d903ad9c0b5fb8 100644
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gpu.c
+@@ -233,10 +233,10 @@ static void a6xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ break;
+ fallthrough;
+ case MSM_SUBMIT_CMD_BUF:
+- OUT_PKT7(ring, CP_INDIRECT_BUFFER_PFE, 3);
++ OUT_PKT7(ring, CP_INDIRECT_BUFFER, 3);
+ OUT_RING(ring, lower_32_bits(submit->cmd[i].iova));
+ OUT_RING(ring, upper_32_bits(submit->cmd[i].iova));
+- OUT_RING(ring, submit->cmd[i].size);
++ OUT_RING(ring, A5XX_CP_INDIRECT_BUFFER_2_IB_SIZE(submit->cmd[i].size));
+ ibs++;
+ break;
+ }
+@@ -319,10 +319,10 @@ static void a7xx_submit(struct msm_gpu *gpu, struct msm_gem_submit *submit)
+ break;
+ fallthrough;
+ case MSM_SUBMIT_CMD_BUF:
+- OUT_PKT7(ring, CP_INDIRECT_BUFFER_PFE, 3);
++ OUT_PKT7(ring, CP_INDIRECT_BUFFER, 3);
+ OUT_RING(ring, lower_32_bits(submit->cmd[i].iova));
+ OUT_RING(ring, upper_32_bits(submit->cmd[i].iova));
+- OUT_RING(ring, submit->cmd[i].size);
++ OUT_RING(ring, A5XX_CP_INDIRECT_BUFFER_2_IB_SIZE(submit->cmd[i].size));
+ ibs++;
+ break;
+ }
+diff --git a/drivers/gpu/drm/msm/dsi/dsi_host.c b/drivers/gpu/drm/msm/dsi/dsi_host.c
+index 7459fb8c517746..d22e01751f5eeb 100644
+--- a/drivers/gpu/drm/msm/dsi/dsi_host.c
++++ b/drivers/gpu/drm/msm/dsi/dsi_host.c
+@@ -1827,8 +1827,15 @@ static int dsi_host_parse_dt(struct msm_dsi_host *msm_host)
+ __func__, ret);
+ goto err;
+ }
+- if (!ret)
++ if (!ret) {
+ msm_dsi->te_source = devm_kstrdup(dev, te_source, GFP_KERNEL);
++ if (!msm_dsi->te_source) {
++ DRM_DEV_ERROR(dev, "%s: failed to allocate te_source\n",
++ __func__);
++ ret = -ENOMEM;
++ goto err;
++ }
++ }
+ ret = 0;
+
+ if (of_property_read_bool(np, "syscon-sfpb")) {
+diff --git a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
+index cab01af55d2226..c6cdc5c003dc07 100644
+--- a/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
++++ b/drivers/gpu/drm/msm/registers/adreno/adreno_pm4.xml
+@@ -2264,5 +2264,12 @@ opcode: CP_LOAD_STATE4 (30) (4 dwords)
+ </reg32>
+ </domain>
+
++<domain name="CP_INDIRECT_BUFFER" width="32" varset="chip" prefix="chip" variants="A5XX-">
++ <reg64 offset="0" name="IB_BASE" type="address"/>
++ <reg32 offset="2" name="2">
++ <bitfield name="IB_SIZE" low="0" high="19"/>
++ </reg32>
++</domain>
++
+ </database>
+
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
+index db961eade2257f..2016c1e7242fe3 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
+@@ -144,6 +144,9 @@ nouveau_bo_del_ttm(struct ttm_buffer_object *bo)
+ nouveau_bo_del_io_reserve_lru(bo);
+ nv10_bo_put_tile_region(dev, nvbo->tile, NULL);
+
++ if (bo->base.import_attach)
++ drm_prime_gem_destroy(&bo->base, bo->sg);
++
+ /*
+ * If nouveau_bo_new() allocated this buffer, the GEM object was never
+ * initialized, so don't attempt to release it.
+diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c
+index 9ae2cee1c7c580..67e3c99de73ae6 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_gem.c
++++ b/drivers/gpu/drm/nouveau/nouveau_gem.c
+@@ -87,9 +87,6 @@ nouveau_gem_object_del(struct drm_gem_object *gem)
+ return;
+ }
+
+- if (gem->import_attach)
+- drm_prime_gem_destroy(gem, nvbo->bo.sg);
+-
+ ttm_bo_put(&nvbo->bo);
+
+ pm_runtime_mark_last_busy(dev);
+diff --git a/drivers/gpu/drm/sti/Makefile b/drivers/gpu/drm/sti/Makefile
+index f203ac5514ae0b..f778a4eee7c9cf 100644
+--- a/drivers/gpu/drm/sti/Makefile
++++ b/drivers/gpu/drm/sti/Makefile
+@@ -7,8 +7,6 @@ sti-drm-y := \
+ sti_compositor.o \
+ sti_crtc.o \
+ sti_plane.o \
+- sti_crtc.o \
+- sti_plane.o \
+ sti_hdmi.o \
+ sti_hdmi_tx3g4c28phy.o \
+ sti_dvo.o \
+diff --git a/drivers/gpu/drm/tiny/repaper.c b/drivers/gpu/drm/tiny/repaper.c
+index 1f78aa3d26bbd4..768dfea15aec09 100644
+--- a/drivers/gpu/drm/tiny/repaper.c
++++ b/drivers/gpu/drm/tiny/repaper.c
+@@ -455,7 +455,7 @@ static void repaper_frame_fixed_repeat(struct repaper_epd *epd, u8 fixed_value,
+ enum repaper_stage stage)
+ {
+ u64 start = local_clock();
+- u64 end = start + (epd->factored_stage_time * 1000 * 1000);
++ u64 end = start + ((u64)epd->factored_stage_time * 1000 * 1000);
+
+ do {
+ repaper_frame_fixed(epd, fixed_value, stage);
+@@ -466,7 +466,7 @@ static void repaper_frame_data_repeat(struct repaper_epd *epd, const u8 *image,
+ const u8 *mask, enum repaper_stage stage)
+ {
+ u64 start = local_clock();
+- u64 end = start + (epd->factored_stage_time * 1000 * 1000);
++ u64 end = start + ((u64)epd->factored_stage_time * 1000 * 1000);
+
+ do {
+ repaper_frame_data(epd, image, mask, stage);
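
[Editorial note, not part of the patch] The repaper hunk is the usual C promotion pitfall: `factored_stage_time * 1000 * 1000` is evaluated entirely in 32 bits and only then widened to u64, so any stage time above about 4.29 seconds wraps before it ever reaches `end`. Casting one operand forces the whole chain into 64-bit arithmetic. In isolation:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t stage_ms = 5000;                           /* > 4294 ms */
        uint64_t wrong = stage_ms * 1000 * 1000;            /* wraps in u32, then widens */
        uint64_t right = (uint64_t)stage_ms * 1000 * 1000;  /* widened before multiplying */

        printf("wrong: %llu\n", (unsigned long long)wrong);  /* 705032704 */
        printf("right: %llu\n", (unsigned long long)right);  /* 5000000000 */
        return 0;
    }
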
+diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
+index 3066cfdb054cc0..4a6aa36619fe39 100644
+--- a/drivers/gpu/drm/v3d/v3d_sched.c
++++ b/drivers/gpu/drm/v3d/v3d_sched.c
+@@ -410,7 +410,8 @@ v3d_rewrite_csd_job_wg_counts_from_indirect(struct v3d_cpu_job *job)
+ struct v3d_bo *bo = to_v3d_bo(job->base.bo[0]);
+ struct v3d_bo *indirect = to_v3d_bo(indirect_csd->indirect);
+ struct drm_v3d_submit_csd *args = &indirect_csd->job->args;
+- u32 *wg_counts;
++ struct v3d_dev *v3d = job->base.v3d;
++ u32 num_batches, *wg_counts;
+
+ v3d_get_bo_vaddr(bo);
+ v3d_get_bo_vaddr(indirect);
+@@ -423,8 +424,17 @@ v3d_rewrite_csd_job_wg_counts_from_indirect(struct v3d_cpu_job *job)
+ args->cfg[0] = wg_counts[0] << V3D_CSD_CFG012_WG_COUNT_SHIFT;
+ args->cfg[1] = wg_counts[1] << V3D_CSD_CFG012_WG_COUNT_SHIFT;
+ args->cfg[2] = wg_counts[2] << V3D_CSD_CFG012_WG_COUNT_SHIFT;
+- args->cfg[4] = DIV_ROUND_UP(indirect_csd->wg_size, 16) *
+- (wg_counts[0] * wg_counts[1] * wg_counts[2]) - 1;
++
++ num_batches = DIV_ROUND_UP(indirect_csd->wg_size, 16) *
++ (wg_counts[0] * wg_counts[1] * wg_counts[2]);
++
++ /* V3D 7.1.6 and later don't subtract 1 from the number of batches */
++ if (v3d->ver < 71 || (v3d->ver == 71 && v3d->rev < 6))
++ args->cfg[4] = num_batches - 1;
++ else
++ args->cfg[4] = num_batches;
++
++ WARN_ON(args->cfg[4] == ~0);
+
+ for (int i = 0; i < 3; i++) {
+ /* 0xffffffff indicates that the uniform rewrite is not needed */
+diff --git a/drivers/gpu/drm/xe/xe_dma_buf.c b/drivers/gpu/drm/xe/xe_dma_buf.c
+index f3bf7d3157b479..78204578443f46 100644
+--- a/drivers/gpu/drm/xe/xe_dma_buf.c
++++ b/drivers/gpu/drm/xe/xe_dma_buf.c
+@@ -145,10 +145,7 @@ static void xe_dma_buf_unmap(struct dma_buf_attachment *attach,
+ struct sg_table *sgt,
+ enum dma_data_direction dir)
+ {
+- struct dma_buf *dma_buf = attach->dmabuf;
+- struct xe_bo *bo = gem_to_xe_bo(dma_buf->priv);
+-
+- if (!xe_bo_is_vram(bo)) {
++ if (sg_page(sgt->sgl)) {
+ dma_unmap_sgtable(attach->dev, sgt, dir, 0);
+ sg_free_table(sgt);
+ kfree(sgt);
+diff --git a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+index ace1fe831a7b72..98a450271f5cee 100644
+--- a/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
++++ b/drivers/gpu/drm/xe/xe_gt_tlb_invalidation.c
+@@ -310,6 +310,13 @@ int xe_gt_tlb_invalidation_ggtt(struct xe_gt *gt)
+ return 0;
+ }
+
++/*
++ * Ensure that roundup_pow_of_two(length) doesn't overflow.
++ * Note that roundup_pow_of_two() operates on unsigned long,
++ * not on u64.
++ */
++#define MAX_RANGE_TLB_INVALIDATION_LENGTH (rounddown_pow_of_two(ULONG_MAX))
++
+ /**
+ * xe_gt_tlb_invalidation_range - Issue a TLB invalidation on this GT for an
+ * address range
+@@ -334,6 +341,7 @@ int xe_gt_tlb_invalidation_range(struct xe_gt *gt,
+ struct xe_device *xe = gt_to_xe(gt);
+ #define MAX_TLB_INVALIDATION_LEN 7
+ u32 action[MAX_TLB_INVALIDATION_LEN];
++ u64 length = end - start;
+ int len = 0;
+
+ xe_gt_assert(gt, fence);
+@@ -346,11 +354,11 @@ int xe_gt_tlb_invalidation_range(struct xe_gt *gt,
+
+ action[len++] = XE_GUC_ACTION_TLB_INVALIDATION;
+ action[len++] = 0; /* seqno, replaced in send_tlb_invalidation */
+- if (!xe->info.has_range_tlb_invalidation) {
++ if (!xe->info.has_range_tlb_invalidation ||
++ length > MAX_RANGE_TLB_INVALIDATION_LENGTH) {
+ action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL);
+ } else {
+ u64 orig_start = start;
+- u64 length = end - start;
+ u64 align;
+
+ if (length < SZ_4K)
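
[Editorial note, not part of the patch] The xe hunk caps ranged invalidations at the largest power of two an unsigned long can hold, falling back to a full TLB flush above that, because `roundup_pow_of_two()` on anything larger would overflow. The decision in miniature (a sketch hard-coding the 64-bit value of the cap, 1 << 63; the real macro tracks unsigned long):

    #include <stdbool.h>
    #include <stdint.h>

    /* rounddown_pow_of_two(ULONG_MAX) on a 64-bit kernel. Any length above
     * this cannot survive roundup_pow_of_two() without overflowing. */
    #define MAX_RANGE_TLB_INVALIDATION_LENGTH (UINT64_C(1) << 63)

    static bool needs_full_flush(bool has_range_tlb_invalidation,
                                 uint64_t start, uint64_t end)
    {
        uint64_t length = end - start;

        return !has_range_tlb_invalidation ||
               length > MAX_RANGE_TLB_INVALIDATION_LENGTH;
    }

    int main(void)
    {
        /* A 1 TiB range is representable; ranged invalidation is used. */
        return needs_full_flush(true, 0, UINT64_C(1) << 40) ? 1 : 0;
    }
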
+diff --git a/drivers/gpu/drm/xe/xe_guc_ads.c b/drivers/gpu/drm/xe/xe_guc_ads.c
+index d1902a8581ca11..e144fd41c0a762 100644
+--- a/drivers/gpu/drm/xe/xe_guc_ads.c
++++ b/drivers/gpu/drm/xe/xe_guc_ads.c
+@@ -483,24 +483,52 @@ static void fill_engine_enable_masks(struct xe_gt *gt,
+ engine_enable_mask(gt, XE_ENGINE_CLASS_OTHER));
+ }
+
+-static void guc_prep_golden_lrc_null(struct xe_guc_ads *ads)
++/*
++ * Write the offsets corresponding to the golden LRCs. The actual data is
++ * populated later by guc_golden_lrc_populate()
++ */
++static void guc_golden_lrc_init(struct xe_guc_ads *ads)
+ {
+ struct xe_device *xe = ads_to_xe(ads);
++ struct xe_gt *gt = ads_to_gt(ads);
+ struct iosys_map info_map = IOSYS_MAP_INIT_OFFSET(ads_to_map(ads),
+ offsetof(struct __guc_ads_blob, system_info));
+- u8 guc_class;
++ size_t alloc_size, real_size;
++ u32 addr_ggtt, offset;
++ int class;
++
++ offset = guc_ads_golden_lrc_offset(ads);
++ addr_ggtt = xe_bo_ggtt_addr(ads->bo) + offset;
++
++ for (class = 0; class < XE_ENGINE_CLASS_MAX; ++class) {
++ u8 guc_class;
++
++ guc_class = xe_engine_class_to_guc_class(class);
+
+- for (guc_class = 0; guc_class <= GUC_MAX_ENGINE_CLASSES; ++guc_class) {
+ if (!info_map_read(xe, &info_map,
+ engine_enabled_masks[guc_class]))
+ continue;
+
++ real_size = xe_gt_lrc_size(gt, class);
++ alloc_size = PAGE_ALIGN(real_size);
++
++ /*
++ * This interface is slightly confusing. We need to pass the
++ * base address of the full golden context and the size of just
++ * the engine state, which is the section of the context image
++ * that starts after the execlists LRC registers. This is
++ * required to allow the GuC to restore just the engine state
++ * when a watchdog reset occurs.
++ * We calculate the engine state size by removing the size of
++ * what comes before it in the context image (which is identical
++ * on all engines).
++ */
+ ads_blob_write(ads, ads.eng_state_size[guc_class],
+- guc_ads_golden_lrc_size(ads) -
+- xe_lrc_skip_size(xe));
++ real_size - xe_lrc_skip_size(xe));
+ ads_blob_write(ads, ads.golden_context_lrca[guc_class],
+- xe_bo_ggtt_addr(ads->bo) +
+- guc_ads_golden_lrc_offset(ads));
++ addr_ggtt);
++
++ addr_ggtt += alloc_size;
+ }
+ }
+
+@@ -710,7 +738,7 @@ void xe_guc_ads_populate_minimal(struct xe_guc_ads *ads)
+
+ xe_map_memset(ads_to_xe(ads), ads_to_map(ads), 0, 0, ads->bo->size);
+ guc_policies_init(ads);
+- guc_prep_golden_lrc_null(ads);
++ guc_golden_lrc_init(ads);
+ guc_mapping_table_init_invalid(gt, &info_map);
+ guc_doorbell_init(ads);
+
+@@ -736,7 +764,7 @@ void xe_guc_ads_populate(struct xe_guc_ads *ads)
+ guc_policies_init(ads);
+ fill_engine_enable_masks(gt, &info_map);
+ guc_mmio_reg_state_init(ads);
+- guc_prep_golden_lrc_null(ads);
++ guc_golden_lrc_init(ads);
+ guc_mapping_table_init(gt, &info_map);
+ guc_capture_list_init(ads);
+ guc_doorbell_init(ads);
+@@ -756,18 +784,22 @@ void xe_guc_ads_populate(struct xe_guc_ads *ads)
+ guc_ads_private_data_offset(ads));
+ }
+
+-static void guc_populate_golden_lrc(struct xe_guc_ads *ads)
++/*
++ * After the golden LRC's are recorded for each engine class by the first
++ * submission, copy them to the ADS, as initialized earlier by
++ * guc_golden_lrc_init().
++ */
++static void guc_golden_lrc_populate(struct xe_guc_ads *ads)
+ {
+ struct xe_device *xe = ads_to_xe(ads);
+ struct xe_gt *gt = ads_to_gt(ads);
+ struct iosys_map info_map = IOSYS_MAP_INIT_OFFSET(ads_to_map(ads),
+ offsetof(struct __guc_ads_blob, system_info));
+ size_t total_size = 0, alloc_size, real_size;
+- u32 addr_ggtt, offset;
++ u32 offset;
+ int class;
+
+ offset = guc_ads_golden_lrc_offset(ads);
+- addr_ggtt = xe_bo_ggtt_addr(ads->bo) + offset;
+
+ for (class = 0; class < XE_ENGINE_CLASS_MAX; ++class) {
+ u8 guc_class;
+@@ -784,26 +816,9 @@ static void guc_populate_golden_lrc(struct xe_guc_ads *ads)
+ alloc_size = PAGE_ALIGN(real_size);
+ total_size += alloc_size;
+
+- /*
+- * This interface is slightly confusing. We need to pass the
+- * base address of the full golden context and the size of just
+- * the engine state, which is the section of the context image
+- * that starts after the execlists LRC registers. This is
+- * required to allow the GuC to restore just the engine state
+- * when a watchdog reset occurs.
+- * We calculate the engine state size by removing the size of
+- * what comes before it in the context image (which is identical
+- * on all engines).
+- */
+- ads_blob_write(ads, ads.eng_state_size[guc_class],
+- real_size - xe_lrc_skip_size(xe));
+- ads_blob_write(ads, ads.golden_context_lrca[guc_class],
+- addr_ggtt);
+-
+ xe_map_memcpy_to(xe, ads_to_map(ads), offset,
+ gt->default_lrc[class], real_size);
+
+- addr_ggtt += alloc_size;
+ offset += alloc_size;
+ }
+
+@@ -812,7 +827,7 @@ static void guc_populate_golden_lrc(struct xe_guc_ads *ads)
+
+ void xe_guc_ads_populate_post_load(struct xe_guc_ads *ads)
+ {
+- guc_populate_golden_lrc(ads);
++ guc_golden_lrc_populate(ads);
+ }
+
+ static int guc_ads_action_update_policies(struct xe_guc_ads *ads, u32 policy_offset)
+diff --git a/drivers/gpu/drm/xe/xe_hmm.c b/drivers/gpu/drm/xe/xe_hmm.c
+index f6bc4f29d7538e..3d0278c3db9355 100644
+--- a/drivers/gpu/drm/xe/xe_hmm.c
++++ b/drivers/gpu/drm/xe/xe_hmm.c
+@@ -19,29 +19,6 @@ static u64 xe_npages_in_range(unsigned long start, unsigned long end)
+ return (end - start) >> PAGE_SHIFT;
+ }
+
+-/**
+- * xe_mark_range_accessed() - mark a range is accessed, so core mm
+- * have such information for memory eviction or write back to
+- * hard disk
+- * @range: the range to mark
+- * @write: if write to this range, we mark pages in this range
+- * as dirty
+- */
+-static void xe_mark_range_accessed(struct hmm_range *range, bool write)
+-{
+- struct page *page;
+- u64 i, npages;
+-
+- npages = xe_npages_in_range(range->start, range->end);
+- for (i = 0; i < npages; i++) {
+- page = hmm_pfn_to_page(range->hmm_pfns[i]);
+- if (write)
+- set_page_dirty_lock(page);
+-
+- mark_page_accessed(page);
+- }
+-}
+-
+ static int xe_alloc_sg(struct xe_device *xe, struct sg_table *st,
+ struct hmm_range *range, struct rw_semaphore *notifier_sem)
+ {
+@@ -331,7 +308,6 @@ int xe_hmm_userptr_populate_range(struct xe_userptr_vma *uvma,
+ if (ret)
+ goto out_unlock;
+
+- xe_mark_range_accessed(&hmm_range, write);
+ userptr->sg = &userptr->sgt;
+ xe_hmm_userptr_set_mapped(uvma);
+ userptr->notifier_seq = hmm_range.notifier_seq;
+diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
+index 1b97d90aaddaf4..6431697c616939 100644
+--- a/drivers/gpu/drm/xe/xe_migrate.c
++++ b/drivers/gpu/drm/xe/xe_migrate.c
+@@ -1177,7 +1177,7 @@ struct dma_fence *xe_migrate_clear(struct xe_migrate *m,
+ err_sync:
+ /* Sync partial copies if any. FIXME: job_mutex? */
+ if (fence) {
+- dma_fence_wait(m->fence, false);
++ dma_fence_wait(fence, false);
+ dma_fence_put(fence);
+ }
+
+diff --git a/drivers/i2c/busses/i2c-cros-ec-tunnel.c b/drivers/i2c/busses/i2c-cros-ec-tunnel.c
+index ab2688bd4d338a..e19cb62d6796d9 100644
+--- a/drivers/i2c/busses/i2c-cros-ec-tunnel.c
++++ b/drivers/i2c/busses/i2c-cros-ec-tunnel.c
+@@ -247,6 +247,9 @@ static int ec_i2c_probe(struct platform_device *pdev)
+ u32 remote_bus;
+ int err;
+
++ if (!ec)
++ return dev_err_probe(dev, -EPROBE_DEFER, "couldn't find parent EC device\n");
++
+ if (!ec->cmd_xfer) {
+ dev_err(dev, "Missing sendrecv\n");
+ return -EINVAL;
+diff --git a/drivers/i2c/i2c-atr.c b/drivers/i2c/i2c-atr.c
+index 0d54d0b5e32731..5342e934aa5e40 100644
+--- a/drivers/i2c/i2c-atr.c
++++ b/drivers/i2c/i2c-atr.c
+@@ -8,12 +8,12 @@
+ * Originally based on i2c-mux.c
+ */
+
+-#include <linux/fwnode.h>
+ #include <linux/i2c-atr.h>
+ #include <linux/i2c.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
+ #include <linux/mutex.h>
++#include <linux/property.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 91db10515d7472..176d0b3e448870 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -72,6 +72,8 @@ static const char * const cma_events[] = {
+ static void cma_iboe_set_mgid(struct sockaddr *addr, union ib_gid *mgid,
+ enum ib_gid_type gid_type);
+
++static void cma_netevent_work_handler(struct work_struct *_work);
++
+ const char *__attribute_const__ rdma_event_msg(enum rdma_cm_event_type event)
+ {
+ size_t index = event;
+@@ -1033,6 +1035,7 @@ __rdma_create_id(struct net *net, rdma_cm_event_handler event_handler,
+ get_random_bytes(&id_priv->seq_num, sizeof id_priv->seq_num);
+ id_priv->id.route.addr.dev_addr.net = get_net(net);
+ id_priv->seq_num &= 0x00ffffff;
++ INIT_WORK(&id_priv->id.net_work, cma_netevent_work_handler);
+
+ rdma_restrack_new(&id_priv->res, RDMA_RESTRACK_CM_ID);
+ if (parent)
+@@ -5227,7 +5230,6 @@ static int cma_netevent_callback(struct notifier_block *self,
+ if (!memcmp(current_id->id.route.addr.dev_addr.dst_dev_addr,
+ neigh->ha, ETH_ALEN))
+ continue;
+-		INIT_WORK(&current_id->id.net_work, cma_netevent_work_handler);
+ 		cma_id_get(current_id);
+ 		queue_work(cma_wq, &current_id->id.net_work);
+ }
+diff --git a/drivers/infiniband/core/umem_odp.c b/drivers/infiniband/core/umem_odp.c
+index e9fa22d31c2332..c48ef608302055 100644
+--- a/drivers/infiniband/core/umem_odp.c
++++ b/drivers/infiniband/core/umem_odp.c
+@@ -76,12 +76,14 @@ static inline int ib_init_umem_odp(struct ib_umem_odp *umem_odp,
+
+ npfns = (end - start) >> PAGE_SHIFT;
+ umem_odp->pfn_list = kvcalloc(
+- npfns, sizeof(*umem_odp->pfn_list), GFP_KERNEL);
++ npfns, sizeof(*umem_odp->pfn_list),
++ GFP_KERNEL | __GFP_NOWARN);
+ if (!umem_odp->pfn_list)
+ return -ENOMEM;
+
+ umem_odp->dma_list = kvcalloc(
+- ndmas, sizeof(*umem_odp->dma_list), GFP_KERNEL);
++ ndmas, sizeof(*umem_odp->dma_list),
++ GFP_KERNEL | __GFP_NOWARN);
+ if (!umem_odp->dma_list) {
+ ret = -ENOMEM;
+ goto out_pfn_list;
+diff --git a/drivers/infiniband/hw/hns/hns_roce_main.c b/drivers/infiniband/hw/hns/hns_roce_main.c
+index cf89a8db4f64cd..8d0b63d4b50a6c 100644
+--- a/drivers/infiniband/hw/hns/hns_roce_main.c
++++ b/drivers/infiniband/hw/hns/hns_roce_main.c
+@@ -763,7 +763,7 @@ static int hns_roce_register_device(struct hns_roce_dev *hr_dev)
+ if (ret)
+ return ret;
+ }
+- dma_set_max_seg_size(dev, UINT_MAX);
++ dma_set_max_seg_size(dev, SZ_2G);
+ ret = ib_register_device(ib_dev, "hns_%d", dev);
+ if (ret) {
+ dev_err(dev, "ib_register_device failed!\n");
+diff --git a/drivers/infiniband/hw/usnic/usnic_ib_main.c b/drivers/infiniband/hw/usnic/usnic_ib_main.c
+index 13b654ddd3cc8d..bcf7d8607d56ef 100644
+--- a/drivers/infiniband/hw/usnic/usnic_ib_main.c
++++ b/drivers/infiniband/hw/usnic/usnic_ib_main.c
+@@ -380,7 +380,7 @@ static void *usnic_ib_device_add(struct pci_dev *dev)
+ if (!us_ibdev) {
+ usnic_err("Device %s context alloc failed\n",
+ netdev_name(pci_get_drvdata(dev)));
+- return ERR_PTR(-EFAULT);
++ return NULL;
+ }
+
+ us_ibdev->ufdev = usnic_fwd_dev_alloc(dev);
+@@ -500,8 +500,8 @@ static struct usnic_ib_dev *usnic_ib_discover_pf(struct usnic_vnic *vnic)
+ }
+
+ us_ibdev = usnic_ib_device_add(parent_pci);
+- if (IS_ERR_OR_NULL(us_ibdev)) {
+- us_ibdev = us_ibdev ? us_ibdev : ERR_PTR(-EFAULT);
++ if (!us_ibdev) {
++ us_ibdev = ERR_PTR(-EFAULT);
+ goto out;
+ }
+
+@@ -569,10 +569,10 @@ static int usnic_ib_pci_probe(struct pci_dev *pdev,
+ }
+
+ pf = usnic_ib_discover_pf(vf->vnic);
+- if (IS_ERR_OR_NULL(pf)) {
+- usnic_err("Failed to discover pf of vnic %s with err%ld\n",
+- pci_name(pdev), PTR_ERR(pf));
+- err = pf ? PTR_ERR(pf) : -EFAULT;
++ if (IS_ERR(pf)) {
++ err = PTR_ERR(pf);
++ usnic_err("Failed to discover pf of vnic %s with err%d\n",
++ pci_name(pdev), err);
+ goto out_clean_vnic;
+ }
+
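
[Editorial note, not part of the patch] The usnic changes settle on one error convention per function: usnic_ib_device_add() reports failure with NULL, usnic_ib_discover_pf() with ERR_PTR(), and each caller then tests exactly the shape its callee produces instead of hedging with IS_ERR_OR_NULL(). A userspace sketch of the two conventions, with the <linux/err.h> helpers re-derived locally:

    #include <errno.h>
    #include <stdio.h>

    #define MAX_ERRNO 4095

    /* Minimal re-implementations of the <linux/err.h> helpers: errnos are
     * encoded in the top 4095 values of the pointer space. */
    static void *ERR_PTR(long err) { return (void *)err; }
    static long PTR_ERR(void *p)   { return (long)p; }
    static int IS_ERR(void *p)
    {
        return (unsigned long)p >= (unsigned long)-MAX_ERRNO;
    }

    static void *device_add(int fail)        /* NULL-on-failure convention */
    {
        static int dev;
        return fail ? NULL : &dev;
    }

    static void *discover_pf(int fail)       /* ERR_PTR-on-failure convention */
    {
        void *dev = device_add(fail);
        return dev ? dev : ERR_PTR(-EFAULT);
    }

    int main(void)
    {
        void *pf = discover_pf(1);
        if (IS_ERR(pf))                       /* no IS_ERR_OR_NULL needed */
            printf("discover failed: %ld\n", PTR_ERR(pf));
        return 0;
    }
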
+diff --git a/drivers/md/md-bitmap.c b/drivers/md/md-bitmap.c
+index 2e3087556adb37..fbb4f57010da69 100644
+--- a/drivers/md/md-bitmap.c
++++ b/drivers/md/md-bitmap.c
+@@ -2355,9 +2355,8 @@ static int bitmap_get_stats(void *data, struct md_bitmap_stats *stats)
+
+ if (!bitmap)
+ return -ENOENT;
+- if (bitmap->mddev->bitmap_info.external)
+- return -ENOENT;
+- if (!bitmap->storage.sb_page) /* no superblock */
++ if (!bitmap->mddev->bitmap_info.external &&
++ !bitmap->storage.sb_page)
+ return -EINVAL;
+ sb = kmap_local_page(bitmap->storage.sb_page);
+ stats->sync_size = le64_to_cpu(sb->sync_size);
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index fff28aea23c89e..7809b951e09aa0 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -629,6 +629,12 @@ static void __mddev_put(struct mddev *mddev)
+ queue_work(md_misc_wq, &mddev->del_work);
+ }
+
++static void mddev_put_locked(struct mddev *mddev)
++{
++ if (atomic_dec_and_test(&mddev->active))
++ __mddev_put(mddev);
++}
++
+ void mddev_put(struct mddev *mddev)
+ {
+ if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
+@@ -8461,9 +8467,7 @@ static int md_seq_show(struct seq_file *seq, void *v)
+ if (mddev == list_last_entry(&all_mddevs, struct mddev, all_mddevs))
+ status_unused(seq);
+
+- if (atomic_dec_and_test(&mddev->active))
+- __mddev_put(mddev);
+-
++ mddev_put_locked(mddev);
+ return 0;
+ }
+
+@@ -9886,11 +9890,11 @@ EXPORT_SYMBOL_GPL(rdev_clear_badblocks);
+ static int md_notify_reboot(struct notifier_block *this,
+ unsigned long code, void *x)
+ {
+- struct mddev *mddev, *n;
++ struct mddev *mddev;
+ int need_delay = 0;
+
+ spin_lock(&all_mddevs_lock);
+- list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) {
++ list_for_each_entry(mddev, &all_mddevs, all_mddevs) {
+ if (!mddev_get(mddev))
+ continue;
+ spin_unlock(&all_mddevs_lock);
+@@ -9902,8 +9906,8 @@ static int md_notify_reboot(struct notifier_block *this,
+ mddev_unlock(mddev);
+ }
+ need_delay = 1;
+- mddev_put(mddev);
+ spin_lock(&all_mddevs_lock);
++ mddev_put_locked(mddev);
+ }
+ spin_unlock(&all_mddevs_lock);
+
+@@ -10236,7 +10240,7 @@ void md_autostart_arrays(int part)
+
+ static __exit void md_exit(void)
+ {
+- struct mddev *mddev, *n;
++ struct mddev *mddev;
+ int delay = 1;
+
+ unregister_blkdev(MD_MAJOR,"md");
+@@ -10257,7 +10261,7 @@ static __exit void md_exit(void)
+ remove_proc_entry("mdstat", NULL);
+
+ spin_lock(&all_mddevs_lock);
+- list_for_each_entry_safe(mddev, n, &all_mddevs, all_mddevs) {
++ list_for_each_entry(mddev, &all_mddevs, all_mddevs) {
+ if (!mddev_get(mddev))
+ continue;
+ spin_unlock(&all_mddevs_lock);
+@@ -10269,8 +10273,8 @@ static __exit void md_exit(void)
+ * the mddev for destruction by a workqueue, and the
+ * destroy_workqueue() below will wait for that to complete.
+ */
+- mddev_put(mddev);
+ spin_lock(&all_mddevs_lock);
++ mddev_put_locked(mddev);
+ }
+ spin_unlock(&all_mddevs_lock);
+
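
[Editorial note, not part of the patch] Both md loops now reacquire all_mddevs_lock before dropping their reference, and mddev_put_locked() performs the final put while the list lock is held, so the entry cannot vanish underneath the iteration and the `_safe` list walk is no longer needed. The shape of the pattern with a plain mutex and refcount (a sketch, not md's real teardown path):

    #include <pthread.h>

    struct obj {
        int refs;                    /* final put happens under list_lock */
        struct obj *next;
    };

    static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
    static struct obj *head;

    /* Dropping the last reference under list_lock means no concurrent
     * walker can be mid-iteration over this entry. Unlink here and defer
     * the actual free (md queues del_work), so the walker's o->next read
     * below stays valid. */
    static void obj_put_locked(struct obj *o)
    {
        if (--o->refs == 0) {
            /* unlink from the list, schedule deferred destruction */
        }
    }

    static void walk(void (*fn)(struct obj *))
    {
        struct obj *o;

        pthread_mutex_lock(&list_lock);
        for (o = head; o; o = o->next) {
            o->refs++;                       /* pin across the unlocked call */
            pthread_mutex_unlock(&list_lock);
            fn(o);                           /* may sleep, take other locks */
            pthread_mutex_lock(&list_lock);
            obj_put_locked(o);               /* o->next still readable */
        }
        pthread_mutex_unlock(&list_lock);
    }

    int main(void) { walk(0); return 0; }
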
+diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
+index a214fed4f16226..cc194f6ec18dab 100644
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -1687,6 +1687,7 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
+ * The discard bio returns only first r10bio finishes
+ */
+ if (first_copy) {
++ md_account_bio(mddev, &bio);
+ r10_bio->master_bio = bio;
+ set_bit(R10BIO_Discard, &r10_bio->state);
+ first_copy = false;
+diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
+index 8dea2b44fd8bfe..e22afb420d099e 100644
+--- a/drivers/misc/pci_endpoint_test.c
++++ b/drivers/misc/pci_endpoint_test.c
+@@ -251,6 +251,9 @@ static bool pci_endpoint_test_request_irq(struct pci_endpoint_test *test)
+ break;
+ }
+
++ test->num_irqs = i;
++ pci_endpoint_test_release_irq(test);
++
+ return false;
+ }
+
+@@ -738,6 +741,7 @@ static bool pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
+ if (!pci_endpoint_test_request_irq(test))
+ goto err;
+
++ irq_type = test->irq_type;
+ return true;
+
+ err:
+diff --git a/drivers/net/can/rockchip/rockchip_canfd-core.c b/drivers/net/can/rockchip/rockchip_canfd-core.c
+index d9a937ba126c3c..ac514766d431ce 100644
+--- a/drivers/net/can/rockchip/rockchip_canfd-core.c
++++ b/drivers/net/can/rockchip/rockchip_canfd-core.c
+@@ -907,15 +907,16 @@ static int rkcanfd_probe(struct platform_device *pdev)
+ priv->can.data_bittiming_const = &rkcanfd_data_bittiming_const;
+ priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
+ CAN_CTRLMODE_BERR_REPORTING;
+- if (!(priv->devtype_data.quirks & RKCANFD_QUIRK_CANFD_BROKEN))
+- priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD;
+ priv->can.do_set_mode = rkcanfd_set_mode;
+ priv->can.do_get_berr_counter = rkcanfd_get_berr_counter;
+ priv->ndev = ndev;
+
+ match = device_get_match_data(&pdev->dev);
+- if (match)
++ if (match) {
+ priv->devtype_data = *(struct rkcanfd_devtype_data *)match;
++ if (!(priv->devtype_data.quirks & RKCANFD_QUIRK_CANFD_BROKEN))
++ priv->can.ctrlmode_supported |= CAN_CTRLMODE_FD;
++ }
+
+ err = can_rx_offload_add_manual(ndev, &priv->offload,
+ RKCANFD_NAPI_WEIGHT);
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index c39cb119e760db..d4600ab0b70b3b 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -737,6 +737,15 @@ static void b53_enable_mib(struct b53_device *dev)
+ b53_write8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, gc);
+ }
+
++static void b53_enable_stp(struct b53_device *dev)
++{
++ u8 gc;
++
++ b53_read8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, &gc);
++ gc |= GC_RX_BPDU_EN;
++ b53_write8(dev, B53_MGMT_PAGE, B53_GLOBAL_CONFIG, gc);
++}
++
+ static u16 b53_default_pvid(struct b53_device *dev)
+ {
+ if (is5325(dev) || is5365(dev))
+@@ -876,6 +885,7 @@ static int b53_switch_reset(struct b53_device *dev)
+ }
+
+ b53_enable_mib(dev);
++ b53_enable_stp(dev);
+
+ return b53_flush_arl(dev, FAST_AGE_STATIC);
+ }
+diff --git a/drivers/net/dsa/mv88e6xxx/chip.c b/drivers/net/dsa/mv88e6xxx/chip.c
+index e20d9d62032e31..df1df601541217 100644
+--- a/drivers/net/dsa/mv88e6xxx/chip.c
++++ b/drivers/net/dsa/mv88e6xxx/chip.c
+@@ -1878,6 +1878,8 @@ static int mv88e6xxx_vtu_get(struct mv88e6xxx_chip *chip, u16 vid,
+ if (!chip->info->ops->vtu_getnext)
+ return -EOPNOTSUPP;
+
++ memset(entry, 0, sizeof(*entry));
++
+ entry->vid = vid ? vid - 1 : mv88e6xxx_max_vid(chip);
+ entry->valid = false;
+
+@@ -2013,7 +2015,16 @@ static int mv88e6xxx_mst_put(struct mv88e6xxx_chip *chip, u8 sid)
+ struct mv88e6xxx_mst *mst, *tmp;
+ int err;
+
+- if (!sid)
++ /* If the SID is zero, it is for a VLAN mapped to the default MSTI,
++ * and mv88e6xxx_stu_setup() made sure it is always present, and thus,
++ * should not be removed here.
++ *
++ * If the chip lacks STU support, numerically the "sid" variable will
++ * happen to also be zero, but we don't want to rely on that fact, so
++ * we explicitly test that first. In that case, there is also nothing
++ * to do here.
++ */
++ if (!mv88e6xxx_has_stu(chip) || !sid)
+ return 0;
+
+ list_for_each_entry_safe(mst, tmp, &chip->msts, node) {
+diff --git a/drivers/net/dsa/mv88e6xxx/devlink.c b/drivers/net/dsa/mv88e6xxx/devlink.c
+index a08dab75e0c0c1..f57fde02077d22 100644
+--- a/drivers/net/dsa/mv88e6xxx/devlink.c
++++ b/drivers/net/dsa/mv88e6xxx/devlink.c
+@@ -743,7 +743,8 @@ void mv88e6xxx_teardown_devlink_regions_global(struct dsa_switch *ds)
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(mv88e6xxx_regions); i++)
+- dsa_devlink_region_destroy(chip->regions[i]);
++ if (chip->regions[i])
++ dsa_devlink_region_destroy(chip->regions[i]);
+ }
+
+ void mv88e6xxx_teardown_devlink_regions_port(struct dsa_switch *ds, int port)
+diff --git a/drivers/net/ethernet/amd/pds_core/debugfs.c b/drivers/net/ethernet/amd/pds_core/debugfs.c
+index ac37a4e738ae7d..04c5e3abd8d706 100644
+--- a/drivers/net/ethernet/amd/pds_core/debugfs.c
++++ b/drivers/net/ethernet/amd/pds_core/debugfs.c
+@@ -154,8 +154,9 @@ void pdsc_debugfs_add_qcq(struct pdsc *pdsc, struct pdsc_qcq *qcq)
+ debugfs_create_u32("index", 0400, intr_dentry, &intr->index);
+ debugfs_create_u32("vector", 0400, intr_dentry, &intr->vector);
+
+- intr_ctrl_regset = kzalloc(sizeof(*intr_ctrl_regset),
+- GFP_KERNEL);
++ intr_ctrl_regset = devm_kzalloc(pdsc->dev,
++ sizeof(*intr_ctrl_regset),
++ GFP_KERNEL);
+ if (!intr_ctrl_regset)
+ return;
+ intr_ctrl_regset->regs = intr_ctrl_regs;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index e7580df13229a6..016dcfec8d4965 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -758,7 +758,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ dev_kfree_skb_any(skb);
+ tx_kick_pending:
+ if (BNXT_TX_PTP_IS_SET(lflags)) {
+- txr->tx_buf_ring[txr->tx_prod].is_ts_pkt = 0;
++ txr->tx_buf_ring[RING_TX(bp, txr->tx_prod)].is_ts_pkt = 0;
+ atomic64_inc(&bp->ptp_cfg->stats.ts_err);
+ if (!(bp->fw_cap & BNXT_FW_CAP_TX_TS_CMP))
+ /* set SKB to err so PTP worker will clean up */
+@@ -766,7 +766,7 @@ static netdev_tx_t bnxt_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ }
+ if (txr->kick_pending)
+ bnxt_txr_db_kick(bp, txr, txr->tx_prod);
+- txr->tx_buf_ring[txr->tx_prod].skb = NULL;
++ txr->tx_buf_ring[RING_TX(bp, txr->tx_prod)].skb = NULL;
+ dev_core_stats_tx_dropped_inc(dev);
+ return NETDEV_TX_OK;
+ }
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
+index 7f3f5afa864f4a..1546c3db08f093 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ethtool.c
+@@ -2270,6 +2270,7 @@ int cxgb4_init_ethtool_filters(struct adapter *adap)
+ eth_filter->port[i].bmap = bitmap_zalloc(nentries, GFP_KERNEL);
+ if (!eth_filter->port[i].bmap) {
+ ret = -ENOMEM;
++ kvfree(eth_filter->port[i].loc_array);
+ goto free_eth_finfo;
+ }
+ }
+diff --git a/drivers/net/ethernet/intel/igc/igc.h b/drivers/net/ethernet/intel/igc/igc.h
+index eac0f966e0e4c5..323db1e2be3886 100644
+--- a/drivers/net/ethernet/intel/igc/igc.h
++++ b/drivers/net/ethernet/intel/igc/igc.h
+@@ -319,6 +319,7 @@ struct igc_adapter {
+ struct timespec64 prev_ptp_time; /* Pre-reset PTP clock */
+ ktime_t ptp_reset_start; /* Reset time in clock mono */
+ struct system_time_snapshot snapshot;
++ struct mutex ptm_lock; /* Only allow one PTM transaction at a time */
+
+ char fw_version[32];
+
+diff --git a/drivers/net/ethernet/intel/igc/igc_defines.h b/drivers/net/ethernet/intel/igc/igc_defines.h
+index 8e449904aa7dbd..d19325b0e6e0ba 100644
+--- a/drivers/net/ethernet/intel/igc/igc_defines.h
++++ b/drivers/net/ethernet/intel/igc/igc_defines.h
+@@ -574,7 +574,10 @@
+ #define IGC_PTM_CTRL_SHRT_CYC(usec) (((usec) & 0x3f) << 2)
+ #define IGC_PTM_CTRL_PTM_TO(usec) (((usec) & 0xff) << 8)
+
+-#define IGC_PTM_SHORT_CYC_DEFAULT 1 /* Default short cycle interval */
++/* A short cycle time of 1us theoretically should work, but appears to be too
++ * short in practice.
++ */
++#define IGC_PTM_SHORT_CYC_DEFAULT 4 /* Default short cycle interval */
+ #define IGC_PTM_CYC_TIME_DEFAULT 5 /* Default PTM cycle time */
+ #define IGC_PTM_TIMEOUT_DEFAULT 255 /* Default timeout for PTM errors */
+
+@@ -593,6 +596,7 @@
+ #define IGC_PTM_STAT_T4M1_OVFL BIT(3) /* T4 minus T1 overflow */
+ #define IGC_PTM_STAT_ADJUST_1ST BIT(4) /* 1588 timer adjusted during 1st PTM cycle */
+ #define IGC_PTM_STAT_ADJUST_CYC BIT(5) /* 1588 timer adjusted during non-1st PTM cycle */
++#define IGC_PTM_STAT_ALL GENMASK(5, 0) /* Used to clear all status */
+
+ /* PCIe PTM Cycle Control */
+ #define IGC_PTM_CYCLE_CTRL_CYC_TIME(msec) ((msec) & 0x3ff) /* PTM Cycle Time (msec) */
+diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c
+index 1ec9e8cc99d947..082b0baf5d37c5 100644
+--- a/drivers/net/ethernet/intel/igc/igc_main.c
++++ b/drivers/net/ethernet/intel/igc/igc_main.c
+@@ -7173,6 +7173,7 @@ static int igc_probe(struct pci_dev *pdev,
+
+ err_register:
+ igc_release_hw_control(adapter);
++ igc_ptp_stop(adapter);
+ err_eeprom:
+ if (!igc_check_reset_block(hw))
+ igc_reset_phy(hw);
+diff --git a/drivers/net/ethernet/intel/igc/igc_ptp.c b/drivers/net/ethernet/intel/igc/igc_ptp.c
+index 946edbad43022c..612ed26a29c5d4 100644
+--- a/drivers/net/ethernet/intel/igc/igc_ptp.c
++++ b/drivers/net/ethernet/intel/igc/igc_ptp.c
+@@ -974,45 +974,62 @@ static void igc_ptm_log_error(struct igc_adapter *adapter, u32 ptm_stat)
+ }
+ }
+
++/* The PTM lock: adapter->ptm_lock must be held when calling igc_ptm_trigger() */
++static void igc_ptm_trigger(struct igc_hw *hw)
++{
++ u32 ctrl;
++
++ /* To "manually" start the PTM cycle we need to set the
++ * trigger (TRIG) bit
++ */
++ ctrl = rd32(IGC_PTM_CTRL);
++ ctrl |= IGC_PTM_CTRL_TRIG;
++ wr32(IGC_PTM_CTRL, ctrl);
++ /* Perform flush after write to CTRL register otherwise
++ * transaction may not start
++ */
++ wrfl();
++}
++
++/* The PTM lock: adapter->ptm_lock must be held when calling igc_ptm_reset() */
++static void igc_ptm_reset(struct igc_hw *hw)
++{
++ u32 ctrl;
++
++ ctrl = rd32(IGC_PTM_CTRL);
++ ctrl &= ~IGC_PTM_CTRL_TRIG;
++ wr32(IGC_PTM_CTRL, ctrl);
++ /* Write to clear all status */
++ wr32(IGC_PTM_STAT, IGC_PTM_STAT_ALL);
++}
++
+ static int igc_phc_get_syncdevicetime(ktime_t *device,
+ struct system_counterval_t *system,
+ void *ctx)
+ {
+- u32 stat, t2_curr_h, t2_curr_l, ctrl;
+ struct igc_adapter *adapter = ctx;
+ struct igc_hw *hw = &adapter->hw;
++ u32 stat, t2_curr_h, t2_curr_l;
+ int err, count = 100;
+ ktime_t t1, t2_curr;
+
+- /* Get a snapshot of system clocks to use as historic value. */
+- ktime_get_snapshot(&adapter->snapshot);
+-
++ /* Doing this in a loop because in the event of a
++ * badly timed (ha!) system clock adjustment, we may
++ * get PTM errors from the PCI root, but these errors
++ * are transitory. Repeating the process returns valid
++ * data eventually.
++ */
+ do {
+- /* Doing this in a loop because in the event of a
+- * badly timed (ha!) system clock adjustment, we may
+- * get PTM errors from the PCI root, but these errors
+- * are transitory. Repeating the process returns valid
+- * data eventually.
+- */
++ /* Get a snapshot of system clocks to use as historic value. */
++ ktime_get_snapshot(&adapter->snapshot);
+
+- /* To "manually" start the PTM cycle we need to clear and
+- * then set again the TRIG bit.
+- */
+- ctrl = rd32(IGC_PTM_CTRL);
+- ctrl &= ~IGC_PTM_CTRL_TRIG;
+- wr32(IGC_PTM_CTRL, ctrl);
+- ctrl |= IGC_PTM_CTRL_TRIG;
+- wr32(IGC_PTM_CTRL, ctrl);
+-
+- /* The cycle only starts "for real" when software notifies
+- * that it has read the registers, this is done by setting
+- * VALID bit.
+- */
+- wr32(IGC_PTM_STAT, IGC_PTM_STAT_VALID);
++ igc_ptm_trigger(hw);
+
+ err = readx_poll_timeout(rd32, IGC_PTM_STAT, stat,
+ stat, IGC_PTM_STAT_SLEEP,
+ IGC_PTM_STAT_TIMEOUT);
++ igc_ptm_reset(hw);
++
+ if (err < 0) {
+ netdev_err(adapter->netdev, "Timeout reading IGC_PTM_STAT register\n");
+ return err;
+@@ -1021,15 +1038,7 @@ static int igc_phc_get_syncdevicetime(ktime_t *device,
+ if ((stat & IGC_PTM_STAT_VALID) == IGC_PTM_STAT_VALID)
+ break;
+
+- if (stat & ~IGC_PTM_STAT_VALID) {
+- /* An error occurred, log it. */
+- igc_ptm_log_error(adapter, stat);
+- /* The STAT register is write-1-to-clear (W1C),
+- * so write the previous error status to clear it.
+- */
+- wr32(IGC_PTM_STAT, stat);
+- continue;
+- }
++ igc_ptm_log_error(adapter, stat);
+ } while (--count);
+
+ if (!count) {
+@@ -1061,9 +1070,16 @@ static int igc_ptp_getcrosststamp(struct ptp_clock_info *ptp,
+ {
+ struct igc_adapter *adapter = container_of(ptp, struct igc_adapter,
+ ptp_caps);
++ int ret;
++
++ /* This blocks until any in progress PTM transactions complete */
++ mutex_lock(&adapter->ptm_lock);
+
+- return get_device_system_crosststamp(igc_phc_get_syncdevicetime,
+- adapter, &adapter->snapshot, cts);
++ ret = get_device_system_crosststamp(igc_phc_get_syncdevicetime,
++ adapter, &adapter->snapshot, cts);
++ mutex_unlock(&adapter->ptm_lock);
++
++ return ret;
+ }
+
+ static int igc_ptp_getcyclesx64(struct ptp_clock_info *ptp,
+@@ -1162,6 +1178,7 @@ void igc_ptp_init(struct igc_adapter *adapter)
+ spin_lock_init(&adapter->ptp_tx_lock);
+ spin_lock_init(&adapter->free_timer_lock);
+ spin_lock_init(&adapter->tmreg_lock);
++ mutex_init(&adapter->ptm_lock);
+
+ adapter->tstamp_config.rx_filter = HWTSTAMP_FILTER_NONE;
+ adapter->tstamp_config.tx_type = HWTSTAMP_TX_OFF;
+@@ -1174,6 +1191,7 @@ void igc_ptp_init(struct igc_adapter *adapter)
+ if (IS_ERR(adapter->ptp_clock)) {
+ adapter->ptp_clock = NULL;
+ netdev_err(netdev, "ptp_clock_register failed\n");
++ mutex_destroy(&adapter->ptm_lock);
+ } else if (adapter->ptp_clock) {
+ netdev_info(netdev, "PHC added\n");
+ adapter->ptp_flags |= IGC_PTP_ENABLED;
+@@ -1203,10 +1221,12 @@ static void igc_ptm_stop(struct igc_adapter *adapter)
+ struct igc_hw *hw = &adapter->hw;
+ u32 ctrl;
+
++ mutex_lock(&adapter->ptm_lock);
+ ctrl = rd32(IGC_PTM_CTRL);
+ ctrl &= ~IGC_PTM_CTRL_EN;
+
+ wr32(IGC_PTM_CTRL, ctrl);
++ mutex_unlock(&adapter->ptm_lock);
+ }
+
+ /**
+@@ -1237,13 +1257,18 @@ void igc_ptp_suspend(struct igc_adapter *adapter)
+ **/
+ void igc_ptp_stop(struct igc_adapter *adapter)
+ {
++ if (!(adapter->ptp_flags & IGC_PTP_ENABLED))
++ return;
++
+ igc_ptp_suspend(adapter);
+
++ adapter->ptp_flags &= ~IGC_PTP_ENABLED;
+ if (adapter->ptp_clock) {
+ ptp_clock_unregister(adapter->ptp_clock);
+ netdev_info(adapter->netdev, "PHC removed\n");
+ adapter->ptp_flags &= ~IGC_PTP_ENABLED;
+ }
++ mutex_destroy(&adapter->ptm_lock);
+ }
+
+ /**
+@@ -1255,10 +1280,13 @@ void igc_ptp_stop(struct igc_adapter *adapter)
+ void igc_ptp_reset(struct igc_adapter *adapter)
+ {
+ struct igc_hw *hw = &adapter->hw;
+- u32 cycle_ctrl, ctrl;
++ u32 cycle_ctrl, ctrl, stat;
+ unsigned long flags;
+ u32 timadj;
+
++ if (!(adapter->ptp_flags & IGC_PTP_ENABLED))
++ return;
++
+ /* reset the tstamp_config */
+ igc_ptp_set_timestamp_mode(adapter, &adapter->tstamp_config);
+
+@@ -1280,6 +1308,7 @@ void igc_ptp_reset(struct igc_adapter *adapter)
+ if (!igc_is_crosststamp_supported(adapter))
+ break;
+
++ mutex_lock(&adapter->ptm_lock);
+ wr32(IGC_PCIE_DIG_DELAY, IGC_PCIE_DIG_DELAY_DEFAULT);
+ wr32(IGC_PCIE_PHY_DELAY, IGC_PCIE_PHY_DELAY_DEFAULT);
+
+@@ -1290,14 +1319,20 @@ void igc_ptp_reset(struct igc_adapter *adapter)
+ ctrl = IGC_PTM_CTRL_EN |
+ IGC_PTM_CTRL_START_NOW |
+ IGC_PTM_CTRL_SHRT_CYC(IGC_PTM_SHORT_CYC_DEFAULT) |
+- IGC_PTM_CTRL_PTM_TO(IGC_PTM_TIMEOUT_DEFAULT) |
+- IGC_PTM_CTRL_TRIG;
++ IGC_PTM_CTRL_PTM_TO(IGC_PTM_TIMEOUT_DEFAULT);
+
+ wr32(IGC_PTM_CTRL, ctrl);
+
+ /* Force the first cycle to run. */
+- wr32(IGC_PTM_STAT, IGC_PTM_STAT_VALID);
++ igc_ptm_trigger(hw);
++
++ if (readx_poll_timeout_atomic(rd32, IGC_PTM_STAT, stat,
++ stat, IGC_PTM_STAT_SLEEP,
++ IGC_PTM_STAT_TIMEOUT))
++ netdev_err(adapter->netdev, "Timeout reading IGC_PTM_STAT register\n");
+
++ igc_ptm_reset(hw);
++ mutex_unlock(&adapter->ptm_lock);
+ break;
+ default:
+ /* No work to do. */
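
[Editorial note, not part of the patch] The igc rework funnels every PTM dialogue through one trigger/poll/reset sequence guarded by ptm_lock, so a cross-timestamp request can never observe a cycle started by igc_ptp_reset(), or vice versa. The control flow reduced to its skeleton (the register helpers here are stand-ins; the real ones are rd32/wr32 on IGC_PTM_CTRL and IGC_PTM_STAT):

    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>

    static pthread_mutex_t ptm_lock = PTHREAD_MUTEX_INITIALIZER;
    static uint32_t ptm_stat;                /* stand-in for IGC_PTM_STAT */

    static void ptm_trigger(void)    { ptm_stat = 1; /* set TRIG, flush; pretend hw posts VALID */ }
    static void ptm_reset(void)      { ptm_stat = 0; /* clear TRIG, W1C all status */ }
    static bool ptm_poll_valid(void) { return ptm_stat != 0; }

    /* One PTM transaction, start to finish, under the lock: nothing else
     * can retrigger or clear the status registers mid-cycle. */
    static int ptm_transact(void)
    {
        int err = 0;

        pthread_mutex_lock(&ptm_lock);
        ptm_trigger();
        if (!ptm_poll_valid())
            err = -1;                /* readx_poll_timeout in the driver */
        ptm_reset();
        pthread_mutex_unlock(&ptm_lock);
        return err;
    }

    int main(void) { return ptm_transact(); }
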
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index ed7313c10a0524..d408dcda76d794 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -734,7 +734,7 @@ static void mtk_set_queue_speed(struct mtk_eth *eth, unsigned int idx,
+ case SPEED_100:
+ val |= MTK_QTX_SCH_MAX_RATE_EN |
+ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 103) |
+- FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 3);
++ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 3) |
+ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 1);
+ break;
+ case SPEED_1000:
+@@ -757,13 +757,13 @@ static void mtk_set_queue_speed(struct mtk_eth *eth, unsigned int idx,
+ case SPEED_100:
+ val |= MTK_QTX_SCH_MAX_RATE_EN |
+ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 1) |
+- FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 5);
++ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 5) |
+ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 1);
+ break;
+ case SPEED_1000:
+ val |= MTK_QTX_SCH_MAX_RATE_EN |
+- FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 10) |
+- FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 5) |
++ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_MAN, 1) |
++ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_EXP, 6) |
+ FIELD_PREP(MTK_QTX_SCH_MAX_RATE_WEIGHT, 10);
+ break;
+ default:
+@@ -823,9 +823,25 @@ static const struct phylink_mac_ops mtk_phylink_ops = {
+ .mac_link_up = mtk_mac_link_up,
+ };
+
++static void mtk_mdio_config(struct mtk_eth *eth)
++{
++ u32 val;
++
++ /* Configure MDC Divider */
++ val = FIELD_PREP(PPSC_MDC_CFG, eth->mdc_divider);
++
++ /* Configure MDC Turbo Mode */
++ if (mtk_is_netsys_v3_or_greater(eth))
++ mtk_m32(eth, 0, MISC_MDC_TURBO, MTK_MAC_MISC_V3);
++ else
++ val |= PPSC_MDC_TURBO;
++
++ mtk_m32(eth, PPSC_MDC_CFG, val, MTK_PPSC);
++}
++
+ static int mtk_mdio_init(struct mtk_eth *eth)
+ {
+- unsigned int max_clk = 2500000, divider;
++ unsigned int max_clk = 2500000;
+ struct device_node *mii_np;
+ int ret;
+ u32 val;
+@@ -865,20 +881,9 @@ static int mtk_mdio_init(struct mtk_eth *eth)
+ }
+ max_clk = val;
+ }
+- divider = min_t(unsigned int, DIV_ROUND_UP(MDC_MAX_FREQ, max_clk), 63);
+-
+- /* Configure MDC Turbo Mode */
+- if (mtk_is_netsys_v3_or_greater(eth))
+- mtk_m32(eth, 0, MISC_MDC_TURBO, MTK_MAC_MISC_V3);
+-
+- /* Configure MDC Divider */
+- val = FIELD_PREP(PPSC_MDC_CFG, divider);
+- if (!mtk_is_netsys_v3_or_greater(eth))
+- val |= PPSC_MDC_TURBO;
+- mtk_m32(eth, PPSC_MDC_CFG, val, MTK_PPSC);
+-
+- dev_dbg(eth->dev, "MDC is running on %d Hz\n", MDC_MAX_FREQ / divider);
+-
++ eth->mdc_divider = min_t(unsigned int, DIV_ROUND_UP(MDC_MAX_FREQ, max_clk), 63);
++ mtk_mdio_config(eth);
++ dev_dbg(eth->dev, "MDC is running on %d Hz\n", MDC_MAX_FREQ / eth->mdc_divider);
+ ret = of_mdiobus_register(eth->mii_bus, mii_np);
+
+ err_put_node:
+@@ -3269,7 +3274,7 @@ static int mtk_start_dma(struct mtk_eth *eth)
+ if (mtk_is_netsys_v2_or_greater(eth))
+ val |= MTK_MUTLI_CNT | MTK_RESV_BUF |
+ MTK_WCOMP_EN | MTK_DMAD_WR_WDONE |
+- MTK_CHK_DDONE_EN | MTK_LEAKY_BUCKET_EN;
++ MTK_CHK_DDONE_EN;
+ else
+ val |= MTK_RX_BT_32DWORDS;
+ mtk_w32(eth, val, reg_map->qdma.glo_cfg);
+@@ -3928,6 +3933,10 @@ static int mtk_hw_init(struct mtk_eth *eth, bool reset)
+ else
+ mtk_hw_reset(eth);
+
++ /* No MT7628/88 support yet */
++ if (reset && !MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628))
++ mtk_mdio_config(eth);
++
+ if (mtk_is_netsys_v3_or_greater(eth)) {
+ /* Set FE to PDMAv2 if necessary */
+ val = mtk_r32(eth, MTK_FE_GLO_MISC);
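
[Editorial note, not part of the patch] Caching mdc_divider in struct mtk_eth lets mtk_hw_init() reapply the MDC configuration after a reset wipes it, instead of computing it only once at mdiobus registration. The divider arithmetic itself is a clamped ceiling division; the sketch below assumes a 25 MHz MDC_MAX_FREQ, which is illustrative rather than taken from the driver:

    #include <stdio.h>

    #define MDC_MAX_FREQ 25000000u    /* assumed peripheral clock, Hz */

    /* DIV_ROUND_UP(MDC_MAX_FREQ, max_clk), clamped to the 6-bit field. */
    static unsigned int mdc_divider(unsigned int max_clk)
    {
        unsigned int div = (MDC_MAX_FREQ + max_clk - 1) / max_clk;

        return div > 63 ? 63 : div;
    }

    int main(void)
    {
        unsigned int div = mdc_divider(2500000);   /* 2.5 MHz target */

        printf("divider: %u, MDC runs at %u Hz\n", div, MDC_MAX_FREQ / div);
        return 0;
    }
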
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+index 0d5225f1d3eef6..8d7b6818d86012 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+@@ -1260,6 +1260,7 @@ struct mtk_eth {
+ struct clk *clks[MTK_CLK_MAX];
+
+ struct mii_bus *mii_bus;
++ unsigned int mdc_divider;
+ struct work_struct pending_work;
+ unsigned long state;
+
+diff --git a/drivers/net/ethernet/ti/am65-cpsw-nuss.c b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+index 308a2b72a65de3..a21e7c0afbfdc8 100644
+--- a/drivers/net/ethernet/ti/am65-cpsw-nuss.c
++++ b/drivers/net/ethernet/ti/am65-cpsw-nuss.c
+@@ -2680,7 +2680,7 @@ static int am65_cpsw_nuss_init_slave_ports(struct am65_cpsw_common *common)
+ of_property_read_bool(port_np, "ti,mac-only");
+
+ /* get phy/link info */
+- port->slave.port_np = port_np;
++ port->slave.port_np = of_node_get(port_np);
+ ret = of_get_phy_mode(port_np, &port->slave.phy_if);
+ if (ret) {
+ dev_err(dev, "%pOF read phy-mode err %d\n",
+@@ -2741,6 +2741,17 @@ static void am65_cpsw_nuss_phylink_cleanup(struct am65_cpsw_common *common)
+ }
+ }
+
++static void am65_cpsw_remove_dt(struct am65_cpsw_common *common)
++{
++ struct am65_cpsw_port *port;
++ int i;
++
++ for (i = 0; i < common->port_num; i++) {
++ port = &common->ports[i];
++ of_node_put(port->slave.port_np);
++ }
++}
++
+ static int
+ am65_cpsw_nuss_init_port_ndev(struct am65_cpsw_common *common, u32 port_idx)
+ {
+@@ -3647,6 +3658,7 @@ static int am65_cpsw_nuss_probe(struct platform_device *pdev)
+ am65_cpsw_nuss_cleanup_ndev(common);
+ am65_cpsw_nuss_phylink_cleanup(common);
+ am65_cpts_release(common->cpts);
++ am65_cpsw_remove_dt(common);
+ err_of_clear:
+ if (common->mdio_dev)
+ of_platform_device_destroy(common->mdio_dev, NULL);
+@@ -3686,6 +3698,7 @@ static void am65_cpsw_nuss_remove(struct platform_device *pdev)
+ am65_cpsw_nuss_phylink_cleanup(common);
+ am65_cpts_release(common->cpts);
+ am65_cpsw_disable_serdes_phy(common);
++ am65_cpsw_remove_dt(common);
+
+ if (common->mdio_dev)
+ of_platform_device_destroy(common->mdio_dev, NULL);
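The pairing above follows the usual device-tree refcount rule: a driver that stores a device_node pointer beyond the scope of the iterator that produced it must take its own reference. A hedged sketch of the pattern with hypothetical names (the real code takes the reference in am65_cpsw_nuss_init_slave_ports() and drops it in am65_cpsw_remove_dt() on both the probe error path and remove):

#include <linux/of.h>

struct my_port {
	struct device_node *np;
};

static void my_port_bind(struct my_port *port, struct device_node *np)
{
	port->np = of_node_get(np);	/* hold a reference while the pointer lives */
}

static void my_port_unbind(struct my_port *port)
{
	of_node_put(port->np);		/* balances of_node_get() in bind */
	port->np = NULL;
}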
+diff --git a/drivers/net/ethernet/ti/icssg/icss_iep.c b/drivers/net/ethernet/ti/icssg/icss_iep.c
+index d59c1744840af2..2a1c43316f462b 100644
+--- a/drivers/net/ethernet/ti/icssg/icss_iep.c
++++ b/drivers/net/ethernet/ti/icssg/icss_iep.c
+@@ -406,66 +406,79 @@ static void icss_iep_update_to_next_boundary(struct icss_iep *iep, u64 start_ns)
+ static int icss_iep_perout_enable_hw(struct icss_iep *iep,
+ struct ptp_perout_request *req, int on)
+ {
++ struct timespec64 ts;
++ u64 ns_start;
++ u64 ns_width;
+ int ret;
+ u64 cmp;
+
++ if (!on) {
++ /* Disable CMP 1 */
++ regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
++ IEP_CMP_CFG_CMP_EN(1), 0);
++
++ /* clear CMP regs */
++ regmap_write(iep->map, ICSS_IEP_CMP1_REG0, 0);
++ if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT)
++ regmap_write(iep->map, ICSS_IEP_CMP1_REG1, 0);
++
++ /* Disable sync */
++ regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, 0);
++
++ return 0;
++ }
++
++ /* Calculate width of the signal for PPS/PEROUT handling */
++ ts.tv_sec = req->on.sec;
++ ts.tv_nsec = req->on.nsec;
++ ns_width = timespec64_to_ns(&ts);
++
++ if (req->flags & PTP_PEROUT_PHASE) {
++ ts.tv_sec = req->phase.sec;
++ ts.tv_nsec = req->phase.nsec;
++ ns_start = timespec64_to_ns(&ts);
++ } else {
++ ns_start = 0;
++ }
++
+ if (iep->ops && iep->ops->perout_enable) {
+ ret = iep->ops->perout_enable(iep->clockops_data, req, on, &cmp);
+ if (ret)
+ return ret;
+
+- if (on) {
+- /* Configure CMP */
+- regmap_write(iep->map, ICSS_IEP_CMP1_REG0, lower_32_bits(cmp));
+- if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT)
+- regmap_write(iep->map, ICSS_IEP_CMP1_REG1, upper_32_bits(cmp));
+- /* Configure SYNC, 1ms pulse width */
+- regmap_write(iep->map, ICSS_IEP_SYNC_PWIDTH_REG, 1000000);
+- regmap_write(iep->map, ICSS_IEP_SYNC0_PERIOD_REG, 0);
+- regmap_write(iep->map, ICSS_IEP_SYNC_START_REG, 0);
+- regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, 0); /* one-shot mode */
+- /* Enable CMP 1 */
+- regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
+- IEP_CMP_CFG_CMP_EN(1), IEP_CMP_CFG_CMP_EN(1));
+- } else {
+- /* Disable CMP 1 */
+- regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
+- IEP_CMP_CFG_CMP_EN(1), 0);
+-
+- /* clear regs */
+- regmap_write(iep->map, ICSS_IEP_CMP1_REG0, 0);
+- if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT)
+- regmap_write(iep->map, ICSS_IEP_CMP1_REG1, 0);
+- }
++ /* Configure CMP */
++ regmap_write(iep->map, ICSS_IEP_CMP1_REG0, lower_32_bits(cmp));
++ if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT)
++ regmap_write(iep->map, ICSS_IEP_CMP1_REG1, upper_32_bits(cmp));
++ /* Configure SYNC, based on req on width */
++ regmap_write(iep->map, ICSS_IEP_SYNC_PWIDTH_REG,
++ div_u64(ns_width, iep->def_inc));
++ regmap_write(iep->map, ICSS_IEP_SYNC0_PERIOD_REG, 0);
++ regmap_write(iep->map, ICSS_IEP_SYNC_START_REG,
++ div_u64(ns_start, iep->def_inc));
++ regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, 0); /* one-shot mode */
++ /* Enable CMP 1 */
++ regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
++ IEP_CMP_CFG_CMP_EN(1), IEP_CMP_CFG_CMP_EN(1));
+ } else {
+- if (on) {
+- u64 start_ns;
+-
+- iep->period = ((u64)req->period.sec * NSEC_PER_SEC) +
+- req->period.nsec;
+- start_ns = ((u64)req->period.sec * NSEC_PER_SEC)
+- + req->period.nsec;
+- icss_iep_update_to_next_boundary(iep, start_ns);
+-
+- /* Enable Sync in single shot mode */
+- regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG,
+- IEP_SYNC_CTRL_SYNC_N_EN(0) | IEP_SYNC_CTRL_SYNC_EN);
+- /* Enable CMP 1 */
+- regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
+- IEP_CMP_CFG_CMP_EN(1), IEP_CMP_CFG_CMP_EN(1));
+- } else {
+- /* Disable CMP 1 */
+- regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
+- IEP_CMP_CFG_CMP_EN(1), 0);
+-
+- /* clear CMP regs */
+- regmap_write(iep->map, ICSS_IEP_CMP1_REG0, 0);
+- if (iep->plat_data->flags & ICSS_IEP_64BIT_COUNTER_SUPPORT)
+- regmap_write(iep->map, ICSS_IEP_CMP1_REG1, 0);
+-
+- /* Disable sync */
+- regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG, 0);
+- }
++ u64 start_ns;
++
++ iep->period = ((u64)req->period.sec * NSEC_PER_SEC) +
++ req->period.nsec;
++ start_ns = ((u64)req->period.sec * NSEC_PER_SEC)
++ + req->period.nsec;
++ icss_iep_update_to_next_boundary(iep, start_ns);
++
++ regmap_write(iep->map, ICSS_IEP_SYNC_PWIDTH_REG,
++ div_u64(ns_width, iep->def_inc));
++ regmap_write(iep->map, ICSS_IEP_SYNC_START_REG,
++ div_u64(ns_start, iep->def_inc));
++ /* Enable Sync in single shot mode */
++ regmap_write(iep->map, ICSS_IEP_SYNC_CTRL_REG,
++ IEP_SYNC_CTRL_SYNC_N_EN(0) | IEP_SYNC_CTRL_SYNC_EN);
++ /* Enable CMP 1 */
++ regmap_update_bits(iep->map, ICSS_IEP_CMP_CFG_REG,
++ IEP_CMP_CFG_CMP_EN(1), IEP_CMP_CFG_CMP_EN(1));
+ }
+
+ return 0;
+@@ -474,7 +487,41 @@ static int icss_iep_perout_enable_hw(struct icss_iep *iep,
+ static int icss_iep_perout_enable(struct icss_iep *iep,
+ struct ptp_perout_request *req, int on)
+ {
+- return -EOPNOTSUPP;
++ int ret = 0;
++
++ if (!on)
++ goto disable;
++
++ /* Reject requests with unsupported flags */
++ if (req->flags & ~(PTP_PEROUT_DUTY_CYCLE |
++ PTP_PEROUT_PHASE))
++ return -EOPNOTSUPP;
++
++ /* Set default "on" time (1ms) for the signal if not passed by the app */
++ if (!(req->flags & PTP_PEROUT_DUTY_CYCLE)) {
++ req->on.sec = 0;
++ req->on.nsec = NSEC_PER_MSEC;
++ }
++
++disable:
++ mutex_lock(&iep->ptp_clk_mutex);
++
++ if (iep->pps_enabled) {
++ ret = -EBUSY;
++ goto exit;
++ }
++
++ if (iep->perout_enabled == !!on)
++ goto exit;
++
++ ret = icss_iep_perout_enable_hw(iep, req, on);
++ if (!ret)
++ iep->perout_enabled = !!on;
++
++exit:
++ mutex_unlock(&iep->ptp_clk_mutex);
++
++ return ret;
+ }
+
+ static void icss_iep_cap_cmp_work(struct work_struct *work)
+@@ -549,10 +596,13 @@ static int icss_iep_pps_enable(struct icss_iep *iep, int on)
+ if (on) {
+ ns = icss_iep_gettime(iep, NULL);
+ ts = ns_to_timespec64(ns);
++ rq.perout.flags = 0;
+ rq.perout.period.sec = 1;
+ rq.perout.period.nsec = 0;
+ rq.perout.start.sec = ts.tv_sec + 2;
+ rq.perout.start.nsec = 0;
++ rq.perout.on.sec = 0;
++ rq.perout.on.nsec = NSEC_PER_MSEC;
+ ret = icss_iep_perout_enable_hw(iep, &rq.perout, on);
+ } else {
+ ret = icss_iep_perout_enable_hw(iep, &rq.perout, on);
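Both SYNC_PWIDTH and SYNC_START above are programmed in IEP clock cycles, so nanosecond values from the perout request are divided by def_inc, the nanoseconds-per-cycle increment. A hedged sketch of that conversion (the 4 ns increment in the example is an assumption, corresponding to a 250 MHz IEP clock):

#include <linux/math64.h>

static u32 ns_to_iep_cycles(u64 ns, u32 def_inc)
{
	return div_u64(ns, def_inc);	/* nanoseconds -> IEP clock cycles */
}

/* the 1 ms default "on" time with def_inc = 4 -> 250000 cycles */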
+diff --git a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
+index 53aeae2f884b01..1be2a5cc4a83c3 100644
+--- a/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
++++ b/drivers/net/ethernet/wangxun/ngbe/ngbe_main.c
+@@ -607,7 +607,7 @@ static int ngbe_probe(struct pci_dev *pdev,
+ /* setup the private structure */
+ err = ngbe_sw_init(wx);
+ if (err)
+- goto err_free_mac_table;
++ goto err_pci_release_regions;
+
+ /* check if flash load is done after hw power up */
+ err = wx_check_flash_load(wx, NGBE_SPI_ILDR_STATUS_PERST);
+@@ -701,6 +701,7 @@ static int ngbe_probe(struct pci_dev *pdev,
+ err_clear_interrupt_scheme:
+ wx_clear_interrupt_scheme(wx);
+ err_free_mac_table:
++ kfree(wx->rss_key);
+ kfree(wx->mac_table);
+ err_pci_release_regions:
+ pci_release_selected_regions(pdev,
+diff --git a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+index f7745026803643..7e352837184fad 100644
+--- a/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
++++ b/drivers/net/ethernet/wangxun/txgbe/txgbe_main.c
+@@ -559,7 +559,7 @@ static int txgbe_probe(struct pci_dev *pdev,
+ /* setup the private structure */
+ err = txgbe_sw_init(wx);
+ if (err)
+- goto err_free_mac_table;
++ goto err_pci_release_regions;
+
+ /* check if flash load is done after hw power up */
+ err = wx_check_flash_load(wx, TXGBE_SPI_ILDR_STATUS_PERST);
+@@ -717,6 +717,7 @@ static int txgbe_probe(struct pci_dev *pdev,
+ wx_clear_interrupt_scheme(wx);
+ wx_control_hw(wx, false);
+ err_free_mac_table:
++ kfree(wx->rss_key);
+ kfree(wx->mac_table);
+ err_pci_release_regions:
+ pci_release_selected_regions(pdev,
+diff --git a/drivers/net/wireless/ath/ath12k/dp_mon.c b/drivers/net/wireless/ath/ath12k/dp_mon.c
+index 1706ec27eb9c0f..4c98b9de1e5840 100644
+--- a/drivers/net/wireless/ath/ath12k/dp_mon.c
++++ b/drivers/net/wireless/ath/ath12k/dp_mon.c
+@@ -2118,7 +2118,7 @@ int ath12k_dp_mon_srng_process(struct ath12k *ar, int mac_id, int *budget,
+ dest_idx = 0;
+ move_next:
+ ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
+- ath12k_hal_srng_src_get_next_entry(ab, srng);
++ ath12k_hal_srng_dst_get_next_entry(ab, srng);
+ num_buffs_reaped++;
+ }
+
+@@ -2533,7 +2533,7 @@ int ath12k_dp_mon_rx_process_stats(struct ath12k *ar, int mac_id,
+ dest_idx = 0;
+ move_next:
+ ath12k_dp_mon_buf_replenish(ab, buf_ring, 1);
+- ath12k_hal_srng_dst_get_next_entry(ab, srng);
++ ath12k_hal_srng_src_get_next_entry(ab, srng);
+ num_buffs_reaped++;
+ }
+
+diff --git a/drivers/net/wireless/atmel/at76c50x-usb.c b/drivers/net/wireless/atmel/at76c50x-usb.c
+index 504e05ea30f298..97ea7ab0f49102 100644
+--- a/drivers/net/wireless/atmel/at76c50x-usb.c
++++ b/drivers/net/wireless/atmel/at76c50x-usb.c
+@@ -2552,7 +2552,7 @@ static void at76_disconnect(struct usb_interface *interface)
+
+ wiphy_info(priv->hw->wiphy, "disconnecting\n");
+ at76_delete_device(priv);
+- usb_put_dev(priv->udev);
++ usb_put_dev(interface_to_usbdev(interface));
+ dev_info(&interface->dev, "disconnected\n");
+ }
+
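The one-liner above avoids a use-after-free: at76_delete_device() frees priv, so priv->udev must not be dereferenced afterwards, while the interface pointer stays valid for the whole disconnect callback. A sketch of the safe ordering, equivalent to the patch (which reads the device off the interface after the delete):

struct usb_device *udev = interface_to_usbdev(interface);	/* interface outlives priv */

at76_delete_device(priv);	/* frees priv; priv->udev is now dangling */
usb_put_dev(udev);		/* drop the reference taken at probe time */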
+diff --git a/drivers/net/wireless/ti/wl1251/tx.c b/drivers/net/wireless/ti/wl1251/tx.c
+index 474b603c121cba..adb4840b048932 100644
+--- a/drivers/net/wireless/ti/wl1251/tx.c
++++ b/drivers/net/wireless/ti/wl1251/tx.c
+@@ -342,8 +342,10 @@ void wl1251_tx_work(struct work_struct *work)
+ while ((skb = skb_dequeue(&wl->tx_queue))) {
+ if (!woken_up) {
+ ret = wl1251_ps_elp_wakeup(wl);
+- if (ret < 0)
++ if (ret < 0) {
++ skb_queue_head(&wl->tx_queue, skb);
+ goto out;
++ }
+ woken_up = true;
+ }
+
+diff --git a/drivers/nvme/host/apple.c b/drivers/nvme/host/apple.c
+index e79a0adf13950b..328f5a103628fe 100644
+--- a/drivers/nvme/host/apple.c
++++ b/drivers/nvme/host/apple.c
+@@ -650,7 +650,7 @@ static bool apple_nvme_handle_cq(struct apple_nvme_queue *q, bool force)
+
+ found = apple_nvme_poll_cq(q, &iob);
+
+- if (!rq_list_empty(iob.req_list))
++ if (!rq_list_empty(&iob.req_list))
+ apple_nvme_complete_batch(&iob);
+
+ return found;
+diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
+index af45a1b865ee10..e70618e8d35eb4 100644
+--- a/drivers/nvme/host/pci.c
++++ b/drivers/nvme/host/pci.c
+@@ -985,7 +985,7 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
+ return BLK_STS_OK;
+ }
+
+-static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct request **rqlist)
++static void nvme_submit_cmds(struct nvme_queue *nvmeq, struct rq_list *rqlist)
+ {
+ struct request *req;
+
+@@ -1013,11 +1013,10 @@ static bool nvme_prep_rq_batch(struct nvme_queue *nvmeq, struct request *req)
+ return nvme_prep_rq(nvmeq->dev, req) == BLK_STS_OK;
+ }
+
+-static void nvme_queue_rqs(struct request **rqlist)
++static void nvme_queue_rqs(struct rq_list *rqlist)
+ {
+- struct request *submit_list = NULL;
+- struct request *requeue_list = NULL;
+- struct request **requeue_lastp = &requeue_list;
++ struct rq_list submit_list = { };
++ struct rq_list requeue_list = { };
+ struct nvme_queue *nvmeq = NULL;
+ struct request *req;
+
+@@ -1027,9 +1026,9 @@ static void nvme_queue_rqs(struct request **rqlist)
+ nvmeq = req->mq_hctx->driver_data;
+
+ if (nvme_prep_rq_batch(nvmeq, req))
+- rq_list_add(&submit_list, req); /* reverse order */
++ rq_list_add_tail(&submit_list, req);
+ else
+- rq_list_add_tail(&requeue_lastp, req);
++ rq_list_add_tail(&requeue_list, req);
+ }
+
+ if (nvmeq)
+@@ -1176,7 +1175,7 @@ static irqreturn_t nvme_irq(int irq, void *data)
+ DEFINE_IO_COMP_BATCH(iob);
+
+ if (nvme_poll_cq(nvmeq, &iob)) {
+- if (!rq_list_empty(iob.req_list))
++ if (!rq_list_empty(&iob.req_list))
+ nvme_pci_complete_batch(&iob);
+ return IRQ_HANDLED;
+ }
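The conversion above replaces the open-coded singly linked struct request * lists with the dedicated struct rq_list, whose head/tail pair makes rq_list_add_tail() O(1); batches are now built in arrival order, where the old rq_list_add() prepended and produced a reversed submit list. A hedged sketch of the resulting loop shape (can_batch() is a hypothetical stand-in for nvme_prep_rq_batch()):

struct rq_list submit_list = { };
struct rq_list requeue_list = { };
struct request *req;

while ((req = rq_list_pop(rqlist))) {
	if (can_batch(req))
		rq_list_add_tail(&submit_list, req);	/* FIFO, no longer reversed */
	else
		rq_list_add_tail(&requeue_list, req);
}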
+diff --git a/drivers/nvme/target/fc.c b/drivers/nvme/target/fc.c
+index 3ef4beacde3257..7318b736d41417 100644
+--- a/drivers/nvme/target/fc.c
++++ b/drivers/nvme/target/fc.c
+@@ -172,20 +172,6 @@ struct nvmet_fc_tgt_assoc {
+ struct work_struct del_work;
+ };
+
+-
+-static inline int
+-nvmet_fc_iodnum(struct nvmet_fc_ls_iod *iodptr)
+-{
+- return (iodptr - iodptr->tgtport->iod);
+-}
+-
+-static inline int
+-nvmet_fc_fodnum(struct nvmet_fc_fcp_iod *fodptr)
+-{
+- return (fodptr - fodptr->queue->fod);
+-}
+-
+-
+ /*
+ * Association and Connection IDs:
+ *
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index be61fa93d39712..25c07af1686b9b 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -5534,8 +5534,6 @@ static bool pci_bus_resettable(struct pci_bus *bus)
+ return false;
+
+ list_for_each_entry(dev, &bus->devices, bus_list) {
+- if (!pci_reset_supported(dev))
+- return false;
+ if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET ||
+ (dev->subordinate && !pci_bus_resettable(dev->subordinate)))
+ return false;
+@@ -5612,8 +5610,6 @@ static bool pci_slot_resettable(struct pci_slot *slot)
+ list_for_each_entry(dev, &slot->bus->devices, bus_list) {
+ if (!dev->slot || dev->slot != slot)
+ continue;
+- if (!pci_reset_supported(dev))
+- return false;
+ if (dev->dev_flags & PCI_DEV_FLAGS_NO_BUS_RESET ||
+ (dev->subordinate && !pci_bus_resettable(dev->subordinate)))
+ return false;
+diff --git a/drivers/platform/x86/amd/pmf/auto-mode.c b/drivers/platform/x86/amd/pmf/auto-mode.c
+index 02ff68be10d012..a184922bba8d65 100644
+--- a/drivers/platform/x86/amd/pmf/auto-mode.c
++++ b/drivers/platform/x86/amd/pmf/auto-mode.c
+@@ -120,9 +120,9 @@ static void amd_pmf_set_automode(struct amd_pmf_dev *dev, int idx,
+ amd_pmf_send_cmd(dev, SET_SPPT_APU_ONLY, false, pwr_ctrl->sppt_apu_only, NULL);
+ amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false, pwr_ctrl->stt_min, NULL);
+ amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false,
+- pwr_ctrl->stt_skin_temp[STT_TEMP_APU], NULL);
++ fixp_q88_fromint(pwr_ctrl->stt_skin_temp[STT_TEMP_APU]), NULL);
+ amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false,
+- pwr_ctrl->stt_skin_temp[STT_TEMP_HS2], NULL);
++ fixp_q88_fromint(pwr_ctrl->stt_skin_temp[STT_TEMP_HS2]), NULL);
+
+ if (is_apmf_func_supported(dev, APMF_FUNC_SET_FAN_IDX))
+ apmf_update_fan_idx(dev, config_store.mode_set[idx].fan_control.manual,
+diff --git a/drivers/platform/x86/amd/pmf/cnqf.c b/drivers/platform/x86/amd/pmf/cnqf.c
+index bc8899e15c914b..207a0b33d8d368 100644
+--- a/drivers/platform/x86/amd/pmf/cnqf.c
++++ b/drivers/platform/x86/amd/pmf/cnqf.c
+@@ -81,10 +81,10 @@ static int amd_pmf_set_cnqf(struct amd_pmf_dev *dev, int src, int idx,
+ amd_pmf_send_cmd(dev, SET_SPPT, false, pc->sppt, NULL);
+ amd_pmf_send_cmd(dev, SET_SPPT_APU_ONLY, false, pc->sppt_apu_only, NULL);
+ amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false, pc->stt_min, NULL);
+- amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false, pc->stt_skin_temp[STT_TEMP_APU],
+- NULL);
+- amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false, pc->stt_skin_temp[STT_TEMP_HS2],
+- NULL);
++ amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false,
++ fixp_q88_fromint(pc->stt_skin_temp[STT_TEMP_APU]), NULL);
++ amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false,
++ fixp_q88_fromint(pc->stt_skin_temp[STT_TEMP_HS2]), NULL);
+
+ if (is_apmf_func_supported(dev, APMF_FUNC_SET_FAN_IDX))
+ apmf_update_fan_idx(dev,
+diff --git a/drivers/platform/x86/amd/pmf/core.c b/drivers/platform/x86/amd/pmf/core.c
+index 347bb43a5f2b75..719caa2a00f056 100644
+--- a/drivers/platform/x86/amd/pmf/core.c
++++ b/drivers/platform/x86/amd/pmf/core.c
+@@ -176,6 +176,20 @@ static void __maybe_unused amd_pmf_dump_registers(struct amd_pmf_dev *dev)
+ dev_dbg(dev->dev, "AMD_PMF_REGISTER_MESSAGE:%x\n", value);
+ }
+
++/**
++ * fixp_q88_fromint: Convert integer to Q8.8
++ * @val: input value
++ *
++ * Converts an integer into binary fixed point format where 8 bits
++ * are used for the integer part and 8 bits for the fractional part.
++ *
++ * Return: unsigned integer converted to Q8.8 format
++ */
++u32 fixp_q88_fromint(u32 val)
++{
++ return val << 8;
++}
++
+ int amd_pmf_send_cmd(struct amd_pmf_dev *dev, u8 message, bool get, u32 arg, u32 *data)
+ {
+ int rc;
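A standalone sketch of the Q8.8 conversion just added (userspace names, for illustration only): with 8 integer and 8 fractional bits, converting an integer is a left shift by 8, after which the low byte carries fractions in 1/256 steps.

#include <stdint.h>
#include <stdio.h>

static uint32_t q88_fromint(uint32_t val)
{
	return val << 8;	/* integer part lands in bits 15:8 */
}

int main(void)
{
	/* a 43 degree skin-temperature limit becomes 0x2b00 in Q8.8 */
	printf("0x%04x\n", q88_fromint(43));
	return 0;
}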
+diff --git a/drivers/platform/x86/amd/pmf/pmf.h b/drivers/platform/x86/amd/pmf/pmf.h
+index 43ba1b9aa1811a..34ba0309a33a2f 100644
+--- a/drivers/platform/x86/amd/pmf/pmf.h
++++ b/drivers/platform/x86/amd/pmf/pmf.h
+@@ -746,6 +746,7 @@ int apmf_install_handler(struct amd_pmf_dev *pmf_dev);
+ int apmf_os_power_slider_update(struct amd_pmf_dev *dev, u8 flag);
+ int amd_pmf_set_dram_addr(struct amd_pmf_dev *dev, bool alloc_buffer);
+ int amd_pmf_notify_sbios_heartbeat_event_v2(struct amd_pmf_dev *dev, u8 flag);
++u32 fixp_q88_fromint(u32 val);
+
+ /* SPS Layer */
+ int amd_pmf_get_pprof_modes(struct amd_pmf_dev *pmf);
+diff --git a/drivers/platform/x86/amd/pmf/sps.c b/drivers/platform/x86/amd/pmf/sps.c
+index 92f7fb22277dca..3a24209f7df03e 100644
+--- a/drivers/platform/x86/amd/pmf/sps.c
++++ b/drivers/platform/x86/amd/pmf/sps.c
+@@ -198,9 +198,11 @@ static void amd_pmf_update_slider_v2(struct amd_pmf_dev *dev, int idx)
+ amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false,
+ apts_config_store.val[idx].stt_min_limit, NULL);
+ amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false,
+- apts_config_store.val[idx].stt_skin_temp_limit_apu, NULL);
++ fixp_q88_fromint(apts_config_store.val[idx].stt_skin_temp_limit_apu),
++ NULL);
+ amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false,
+- apts_config_store.val[idx].stt_skin_temp_limit_hs2, NULL);
++ fixp_q88_fromint(apts_config_store.val[idx].stt_skin_temp_limit_hs2),
++ NULL);
+ }
+
+ void amd_pmf_update_slider(struct amd_pmf_dev *dev, bool op, int idx,
+@@ -217,9 +219,11 @@ void amd_pmf_update_slider(struct amd_pmf_dev *dev, bool op, int idx,
+ amd_pmf_send_cmd(dev, SET_STT_MIN_LIMIT, false,
+ config_store.prop[src][idx].stt_min, NULL);
+ amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false,
+- config_store.prop[src][idx].stt_skin_temp[STT_TEMP_APU], NULL);
++ fixp_q88_fromint(config_store.prop[src][idx].stt_skin_temp[STT_TEMP_APU]),
++ NULL);
+ amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false,
+- config_store.prop[src][idx].stt_skin_temp[STT_TEMP_HS2], NULL);
++ fixp_q88_fromint(config_store.prop[src][idx].stt_skin_temp[STT_TEMP_HS2]),
++ NULL);
+ } else if (op == SLIDER_OP_GET) {
+ amd_pmf_send_cmd(dev, GET_SPL, true, ARG_NONE, &table->prop[src][idx].spl);
+ amd_pmf_send_cmd(dev, GET_FPPT, true, ARG_NONE, &table->prop[src][idx].fppt);
+diff --git a/drivers/platform/x86/amd/pmf/tee-if.c b/drivers/platform/x86/amd/pmf/tee-if.c
+index 09131507d7a925..cb5abab2210a7b 100644
+--- a/drivers/platform/x86/amd/pmf/tee-if.c
++++ b/drivers/platform/x86/amd/pmf/tee-if.c
+@@ -123,7 +123,8 @@ static void amd_pmf_apply_policies(struct amd_pmf_dev *dev, struct ta_pmf_enact_
+
+ case PMF_POLICY_STT_SKINTEMP_APU:
+ if (dev->prev_data->stt_skintemp_apu != val) {
+- amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false, val, NULL);
++ amd_pmf_send_cmd(dev, SET_STT_LIMIT_APU, false,
++ fixp_q88_fromint(val), NULL);
+ dev_dbg(dev->dev, "update STT_SKINTEMP_APU: %u\n", val);
+ dev->prev_data->stt_skintemp_apu = val;
+ }
+@@ -131,7 +132,8 @@ static void amd_pmf_apply_policies(struct amd_pmf_dev *dev, struct ta_pmf_enact_
+
+ case PMF_POLICY_STT_SKINTEMP_HS2:
+ if (dev->prev_data->stt_skintemp_hs2 != val) {
+- amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false, val, NULL);
++ amd_pmf_send_cmd(dev, SET_STT_LIMIT_HS2, false,
++ fixp_q88_fromint(val), NULL);
+ dev_dbg(dev->dev, "update STT_SKINTEMP_HS2: %u\n", val);
+ dev->prev_data->stt_skintemp_hs2 = val;
+ }
+diff --git a/drivers/platform/x86/asus-laptop.c b/drivers/platform/x86/asus-laptop.c
+index 9d7e6b712abf11..8d2e6d8be9e54a 100644
+--- a/drivers/platform/x86/asus-laptop.c
++++ b/drivers/platform/x86/asus-laptop.c
+@@ -426,11 +426,14 @@ static int asus_pega_lucid_set(struct asus_laptop *asus, int unit, bool enable)
+
+ static int pega_acc_axis(struct asus_laptop *asus, int curr, char *method)
+ {
++ unsigned long long val = (unsigned long long)curr;
++ acpi_status status;
+ int i, delta;
+- unsigned long long val;
+- for (i = 0; i < PEGA_ACC_RETRIES; i++) {
+- acpi_evaluate_integer(asus->handle, method, NULL, &val);
+
++ for (i = 0; i < PEGA_ACC_RETRIES; i++) {
++ status = acpi_evaluate_integer(asus->handle, method, NULL, &val);
++ if (ACPI_FAILURE(status))
++ continue;
+ /* The output is noisy. From reading the ASL
+ * disassembly, timeout errors are returned with 1's
+ * in the high word, and the lack of locking around
+diff --git a/drivers/platform/x86/msi-wmi-platform.c b/drivers/platform/x86/msi-wmi-platform.c
+index 9b5c7f8c79b0dd..dc5e9878cb6822 100644
+--- a/drivers/platform/x86/msi-wmi-platform.c
++++ b/drivers/platform/x86/msi-wmi-platform.c
+@@ -10,6 +10,7 @@
+ #include <linux/acpi.h>
+ #include <linux/bits.h>
+ #include <linux/bitfield.h>
++#include <linux/cleanup.h>
+ #include <linux/debugfs.h>
+ #include <linux/device.h>
+ #include <linux/device/driver.h>
+@@ -17,6 +18,7 @@
+ #include <linux/hwmon.h>
+ #include <linux/kernel.h>
+ #include <linux/module.h>
++#include <linux/mutex.h>
+ #include <linux/printk.h>
+ #include <linux/rwsem.h>
+ #include <linux/types.h>
+@@ -76,8 +78,13 @@ enum msi_wmi_platform_method {
+ MSI_PLATFORM_GET_WMI = 0x1d,
+ };
+
+-struct msi_wmi_platform_debugfs_data {
++struct msi_wmi_platform_data {
+ struct wmi_device *wdev;
++ struct mutex wmi_lock; /* Necessary when calling WMI methods */
++};
++
++struct msi_wmi_platform_debugfs_data {
++ struct msi_wmi_platform_data *data;
+ enum msi_wmi_platform_method method;
+ struct rw_semaphore buffer_lock; /* Protects debugfs buffer */
+ size_t length;
+@@ -132,8 +139,9 @@ static int msi_wmi_platform_parse_buffer(union acpi_object *obj, u8 *output, siz
+ return 0;
+ }
+
+-static int msi_wmi_platform_query(struct wmi_device *wdev, enum msi_wmi_platform_method method,
+- u8 *input, size_t input_length, u8 *output, size_t output_length)
++static int msi_wmi_platform_query(struct msi_wmi_platform_data *data,
++ enum msi_wmi_platform_method method, u8 *input,
++ size_t input_length, u8 *output, size_t output_length)
+ {
+ struct acpi_buffer out = { ACPI_ALLOCATE_BUFFER, NULL };
+ struct acpi_buffer in = {
+@@ -147,9 +155,15 @@ static int msi_wmi_platform_query(struct wmi_device *wdev, enum msi_wmi_platform
+ if (!input_length || !output_length)
+ return -EINVAL;
+
+- status = wmidev_evaluate_method(wdev, 0x0, method, &in, &out);
+- if (ACPI_FAILURE(status))
+- return -EIO;
++ /*
++ * The ACPI control method responsible for handling the WMI method calls
++ * is not thread-safe. Because of this we have to do the locking ourselves.
++ */
++ scoped_guard(mutex, &data->wmi_lock) {
++ status = wmidev_evaluate_method(data->wdev, 0x0, method, &in, &out);
++ if (ACPI_FAILURE(status))
++ return -EIO;
++ }
+
+ obj = out.pointer;
+ if (!obj)
+@@ -170,22 +184,22 @@ static umode_t msi_wmi_platform_is_visible(const void *drvdata, enum hwmon_senso
+ static int msi_wmi_platform_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
+ int channel, long *val)
+ {
+- struct wmi_device *wdev = dev_get_drvdata(dev);
++ struct msi_wmi_platform_data *data = dev_get_drvdata(dev);
+ u8 input[32] = { 0 };
+ u8 output[32];
+- u16 data;
++ u16 value;
+ int ret;
+
+- ret = msi_wmi_platform_query(wdev, MSI_PLATFORM_GET_FAN, input, sizeof(input), output,
++ ret = msi_wmi_platform_query(data, MSI_PLATFORM_GET_FAN, input, sizeof(input), output,
+ sizeof(output));
+ if (ret < 0)
+ return ret;
+
+- data = get_unaligned_be16(&output[channel * 2 + 1]);
+- if (!data)
++ value = get_unaligned_be16(&output[channel * 2 + 1]);
++ if (!value)
+ *val = 0;
+ else
+- *val = 480000 / data;
++ *val = 480000 / value;
+
+ return 0;
+ }
+@@ -231,7 +245,7 @@ static ssize_t msi_wmi_platform_write(struct file *fp, const char __user *input,
+ return ret;
+
+ down_write(&data->buffer_lock);
+- ret = msi_wmi_platform_query(data->wdev, data->method, payload, data->length, data->buffer,
++ ret = msi_wmi_platform_query(data->data, data->method, payload, data->length, data->buffer,
+ data->length);
+ up_write(&data->buffer_lock);
+
+@@ -277,17 +291,17 @@ static void msi_wmi_platform_debugfs_remove(void *data)
+ debugfs_remove_recursive(dir);
+ }
+
+-static void msi_wmi_platform_debugfs_add(struct wmi_device *wdev, struct dentry *dir,
++static void msi_wmi_platform_debugfs_add(struct msi_wmi_platform_data *drvdata, struct dentry *dir,
+ const char *name, enum msi_wmi_platform_method method)
+ {
+ struct msi_wmi_platform_debugfs_data *data;
+ struct dentry *entry;
+
+- data = devm_kzalloc(&wdev->dev, sizeof(*data), GFP_KERNEL);
++ data = devm_kzalloc(&drvdata->wdev->dev, sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return;
+
+- data->wdev = wdev;
++ data->data = drvdata;
+ data->method = method;
+ init_rwsem(&data->buffer_lock);
+
+@@ -298,82 +312,82 @@ static void msi_wmi_platform_debugfs_add(struct wmi_device *wdev, struct dentry
+
+ entry = debugfs_create_file(name, 0600, dir, data, &msi_wmi_platform_debugfs_fops);
+ if (IS_ERR(entry))
+- devm_kfree(&wdev->dev, data);
++ devm_kfree(&drvdata->wdev->dev, data);
+ }
+
+-static void msi_wmi_platform_debugfs_init(struct wmi_device *wdev)
++static void msi_wmi_platform_debugfs_init(struct msi_wmi_platform_data *data)
+ {
+ struct dentry *dir;
+ char dir_name[64];
+ int ret, method;
+
+- scnprintf(dir_name, ARRAY_SIZE(dir_name), "%s-%s", DRIVER_NAME, dev_name(&wdev->dev));
++ scnprintf(dir_name, ARRAY_SIZE(dir_name), "%s-%s", DRIVER_NAME, dev_name(&data->wdev->dev));
+
+ dir = debugfs_create_dir(dir_name, NULL);
+ if (IS_ERR(dir))
+ return;
+
+- ret = devm_add_action_or_reset(&wdev->dev, msi_wmi_platform_debugfs_remove, dir);
++ ret = devm_add_action_or_reset(&data->wdev->dev, msi_wmi_platform_debugfs_remove, dir);
+ if (ret < 0)
+ return;
+
+ for (method = MSI_PLATFORM_GET_PACKAGE; method <= MSI_PLATFORM_GET_WMI; method++)
+- msi_wmi_platform_debugfs_add(wdev, dir, msi_wmi_platform_debugfs_names[method - 1],
++ msi_wmi_platform_debugfs_add(data, dir, msi_wmi_platform_debugfs_names[method - 1],
+ method);
+ }
+
+-static int msi_wmi_platform_hwmon_init(struct wmi_device *wdev)
++static int msi_wmi_platform_hwmon_init(struct msi_wmi_platform_data *data)
+ {
+ struct device *hdev;
+
+- hdev = devm_hwmon_device_register_with_info(&wdev->dev, "msi_wmi_platform", wdev,
++ hdev = devm_hwmon_device_register_with_info(&data->wdev->dev, "msi_wmi_platform", data,
+ &msi_wmi_platform_chip_info, NULL);
+
+ return PTR_ERR_OR_ZERO(hdev);
+ }
+
+-static int msi_wmi_platform_ec_init(struct wmi_device *wdev)
++static int msi_wmi_platform_ec_init(struct msi_wmi_platform_data *data)
+ {
+ u8 input[32] = { 0 };
+ u8 output[32];
+ u8 flags;
+ int ret;
+
+- ret = msi_wmi_platform_query(wdev, MSI_PLATFORM_GET_EC, input, sizeof(input), output,
++ ret = msi_wmi_platform_query(data, MSI_PLATFORM_GET_EC, input, sizeof(input), output,
+ sizeof(output));
+ if (ret < 0)
+ return ret;
+
+ flags = output[MSI_PLATFORM_EC_FLAGS_OFFSET];
+
+- dev_dbg(&wdev->dev, "EC RAM version %lu.%lu\n",
++ dev_dbg(&data->wdev->dev, "EC RAM version %lu.%lu\n",
+ FIELD_GET(MSI_PLATFORM_EC_MAJOR_MASK, flags),
+ FIELD_GET(MSI_PLATFORM_EC_MINOR_MASK, flags));
+- dev_dbg(&wdev->dev, "EC firmware version %.28s\n",
++ dev_dbg(&data->wdev->dev, "EC firmware version %.28s\n",
+ &output[MSI_PLATFORM_EC_VERSION_OFFSET]);
+
+ if (!(flags & MSI_PLATFORM_EC_IS_TIGERLAKE)) {
+ if (!force)
+ return -ENODEV;
+
+- dev_warn(&wdev->dev, "Loading on a non-Tigerlake platform\n");
++ dev_warn(&data->wdev->dev, "Loading on a non-Tigerlake platform\n");
+ }
+
+ return 0;
+ }
+
+-static int msi_wmi_platform_init(struct wmi_device *wdev)
++static int msi_wmi_platform_init(struct msi_wmi_platform_data *data)
+ {
+ u8 input[32] = { 0 };
+ u8 output[32];
+ int ret;
+
+- ret = msi_wmi_platform_query(wdev, MSI_PLATFORM_GET_WMI, input, sizeof(input), output,
++ ret = msi_wmi_platform_query(data, MSI_PLATFORM_GET_WMI, input, sizeof(input), output,
+ sizeof(output));
+ if (ret < 0)
+ return ret;
+
+- dev_dbg(&wdev->dev, "WMI interface version %u.%u\n",
++ dev_dbg(&data->wdev->dev, "WMI interface version %u.%u\n",
+ output[MSI_PLATFORM_WMI_MAJOR_OFFSET],
+ output[MSI_PLATFORM_WMI_MINOR_OFFSET]);
+
+@@ -381,7 +395,8 @@ static int msi_wmi_platform_init(struct wmi_device *wdev)
+ if (!force)
+ return -ENODEV;
+
+- dev_warn(&wdev->dev, "Loading despite unsupported WMI interface version (%u.%u)\n",
++ dev_warn(&data->wdev->dev,
++ "Loading despite unsupported WMI interface version (%u.%u)\n",
+ output[MSI_PLATFORM_WMI_MAJOR_OFFSET],
+ output[MSI_PLATFORM_WMI_MINOR_OFFSET]);
+ }
+@@ -391,19 +406,31 @@ static int msi_wmi_platform_init(struct wmi_device *wdev)
+
+ static int msi_wmi_platform_probe(struct wmi_device *wdev, const void *context)
+ {
++ struct msi_wmi_platform_data *data;
+ int ret;
+
+- ret = msi_wmi_platform_init(wdev);
++ data = devm_kzalloc(&wdev->dev, sizeof(*data), GFP_KERNEL);
++ if (!data)
++ return -ENOMEM;
++
++ data->wdev = wdev;
++ dev_set_drvdata(&wdev->dev, data);
++
++ ret = devm_mutex_init(&wdev->dev, &data->wmi_lock);
++ if (ret < 0)
++ return ret;
++
++ ret = msi_wmi_platform_init(data);
+ if (ret < 0)
+ return ret;
+
+- ret = msi_wmi_platform_ec_init(wdev);
++ ret = msi_wmi_platform_ec_init(data);
+ if (ret < 0)
+ return ret;
+
+- msi_wmi_platform_debugfs_init(wdev);
++ msi_wmi_platform_debugfs_init(data);
+
+- return msi_wmi_platform_hwmon_init(wdev);
++ return msi_wmi_platform_hwmon_init(data);
+ }
+
+ static const struct wmi_device_id msi_wmi_platform_id_table[] = {
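scoped_guard() from linux/cleanup.h, which the patch starts using, takes the lock for the enclosed block and releases it on every exit path, including early returns. A minimal hedged sketch (firmware_busy() is hypothetical):

#include <linux/cleanup.h>
#include <linux/mutex.h>

static DEFINE_MUTEX(method_lock);

static int call_firmware_method(void)
{
	scoped_guard(mutex, &method_lock) {
		/* the non-reentrant ACPI/WMI call would happen here */
		if (firmware_busy())
			return -EIO;	/* mutex dropped automatically */
	}

	return 0;
}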
+diff --git a/drivers/ptp/ptp_ocp.c b/drivers/ptp/ptp_ocp.c
+index 120db96d9e95d6..0eeb503e06c230 100644
+--- a/drivers/ptp/ptp_ocp.c
++++ b/drivers/ptp/ptp_ocp.c
+@@ -2067,6 +2067,7 @@ ptp_ocp_signal_set(struct ptp_ocp *bp, int gen, struct ptp_ocp_signal *s)
+ if (!s->start) {
+ /* roundup() does not work on 32-bit systems */
+ s->start = DIV64_U64_ROUND_UP(start_ns, s->period);
++ s->start *= s->period;
+ s->start = ktime_add(s->start, s->phase);
+ }
+
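The fix above implements roundup() by hand because the generic macro would need a 64-by-64 division, which 32-bit kernels cannot do with plain C operators: the quotient is rounded up with DIV64_U64_ROUND_UP() and then multiplied back by the period. A sketch of the equivalent helper:

#include <linux/math64.h>

static u64 round_up_to_period(u64 start_ns, u64 period)
{
	u64 n = DIV64_U64_ROUND_UP(start_ns, period);	/* ceil(start/period) */

	return n * period;	/* first period boundary at or after start_ns */
}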
+diff --git a/drivers/ras/amd/atl/internal.h b/drivers/ras/amd/atl/internal.h
+index 143d04c779a821..b7c7d5ba4d9dd1 100644
+--- a/drivers/ras/amd/atl/internal.h
++++ b/drivers/ras/amd/atl/internal.h
+@@ -361,4 +361,7 @@ static inline void atl_debug_on_bad_intlv_mode(struct addr_ctx *ctx)
+ atl_debug(ctx, "Unrecognized interleave mode: %u", ctx->map.intlv_mode);
+ }
+
++#define MI300_UMC_MCA_COL GENMASK(5, 1)
++#define MI300_UMC_MCA_ROW13 BIT(23)
++
+ #endif /* __AMD_ATL_INTERNAL_H__ */
+diff --git a/drivers/ras/amd/atl/umc.c b/drivers/ras/amd/atl/umc.c
+index dc8aa12f63c811..6e072b7667e98b 100644
+--- a/drivers/ras/amd/atl/umc.c
++++ b/drivers/ras/amd/atl/umc.c
+@@ -229,7 +229,6 @@ int get_umc_info_mi300(void)
+ * Additionally, the PC and Bank bits may be hashed. This must be accounted for before
+ * reconstructing the normalized address.
+ */
+-#define MI300_UMC_MCA_COL GENMASK(5, 1)
+ #define MI300_UMC_MCA_BANK GENMASK(9, 6)
+ #define MI300_UMC_MCA_ROW GENMASK(24, 10)
+ #define MI300_UMC_MCA_PC BIT(25)
+@@ -320,7 +319,7 @@ static unsigned long convert_dram_to_norm_addr_mi300(unsigned long addr)
+ * See amd_atl::convert_dram_to_norm_addr_mi300() for MI300 address formats.
+ */
+ #define MI300_NUM_COL BIT(HWEIGHT(MI300_UMC_MCA_COL))
+-static void retire_row_mi300(struct atl_err *a_err)
++static void _retire_row_mi300(struct atl_err *a_err)
+ {
+ unsigned long addr;
+ struct page *p;
+@@ -351,6 +350,22 @@ static void retire_row_mi300(struct atl_err *a_err)
+ }
+ }
+
++/*
++ * In addition to the column bits, the row[13] bit should also be included when
++ * calculating addresses affected by a physical row.
++ *
++ * Instead of running through another loop over a single bit, just run through
++ * the column bits twice and flip the row[13] bit in-between.
++ *
++ * See MI300_UMC_MCA_ROW for the row bits in MCA_ADDR_UMC value.
++ */
++static void retire_row_mi300(struct atl_err *a_err)
++{
++ _retire_row_mi300(a_err);
++ a_err->addr ^= MI300_UMC_MCA_ROW13;
++ _retire_row_mi300(a_err);
++}
++
+ void amd_retire_dram_row(struct atl_err *a_err)
+ {
+ if (df_cfg.rev == DF4p5 && df_cfg.flags.heterogeneous)
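A hedged sketch of the double pass introduced above (walk_all_columns() stands in for the column loop inside _retire_row_mi300()): every address sharing a physical row differs only in the column bits and row[13], so the row is covered by walking all column values once for each value of row[13].

static void retire_row(struct atl_err *err)
{
	walk_all_columns(err);			/* row[13] as reported in MCA_ADDR */
	err->addr ^= MI300_UMC_MCA_ROW13;	/* flip row[13] */
	walk_all_columns(err);			/* the other half of the row */
}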
+diff --git a/drivers/ras/amd/fmpm.c b/drivers/ras/amd/fmpm.c
+index 90de737fbc9097..8877c6ff64c468 100644
+--- a/drivers/ras/amd/fmpm.c
++++ b/drivers/ras/amd/fmpm.c
+@@ -250,6 +250,13 @@ static bool rec_has_valid_entries(struct fru_rec *rec)
+ return true;
+ }
+
++/*
++ * Row retirement is done on MI300 systems, and some bits are 'don't
++ * care' for comparing addresses with unique physical rows. This
++ * includes all column bits and the row[13] bit.
++ */
++#define MASK_ADDR(addr) ((addr) & ~(MI300_UMC_MCA_ROW13 | MI300_UMC_MCA_COL))
++
+ static bool fpds_equal(struct cper_fru_poison_desc *old, struct cper_fru_poison_desc *new)
+ {
+ /*
+@@ -258,7 +265,7 @@ static bool fpds_equal(struct cper_fru_poison_desc *old, struct cper_fru_poison_
+ *
+ * Also, order the checks from most->least likely to fail to shortcut the code.
+ */
+- if (old->addr != new->addr)
++ if (MASK_ADDR(old->addr) != MASK_ADDR(new->addr))
+ return false;
+
+ if (old->hw_id != new->hw_id)
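A short sketch of the comparison above: two poison descriptors refer to the same physical row exactly when their addresses agree after the don't-care bits (all column bits plus row[13]) are masked off, which is why the MI300_UMC_MCA_* masks were moved into the shared internal.h.

static bool same_row(u64 a, u64 b)
{
	return MASK_ADDR(a) == MASK_ADDR(b);	/* MASK_ADDR as defined above */
}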
+diff --git a/drivers/scsi/fnic/fnic_main.c b/drivers/scsi/fnic/fnic_main.c
+index adec0df24bc475..1cb517f731f4ac 100644
+--- a/drivers/scsi/fnic/fnic_main.c
++++ b/drivers/scsi/fnic/fnic_main.c
+@@ -16,7 +16,6 @@
+ #include <linux/spinlock.h>
+ #include <linux/workqueue.h>
+ #include <linux/if_ether.h>
+-#include <linux/blk-mq-pci.h>
+ #include <scsi/fc/fc_fip.h>
+ #include <scsi/scsi_host.h>
+ #include <scsi/scsi_transport.h>
+@@ -601,7 +600,7 @@ void fnic_mq_map_queues_cpus(struct Scsi_Host *host)
+ return;
+ }
+
+- blk_mq_pci_map_queues(qmap, l_pdev, FNIC_PCI_OFFSET);
++ blk_mq_map_hw_queues(qmap, &l_pdev->dev, FNIC_PCI_OFFSET);
+ }
+
+ static int fnic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+diff --git a/drivers/scsi/hisi_sas/hisi_sas.h b/drivers/scsi/hisi_sas/hisi_sas.h
+index d223f482488fc6..010479a354eeeb 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas.h
++++ b/drivers/scsi/hisi_sas/hisi_sas.h
+@@ -9,7 +9,6 @@
+
+ #include <linux/acpi.h>
+ #include <linux/blk-mq.h>
+-#include <linux/blk-mq-pci.h>
+ #include <linux/clk.h>
+ #include <linux/debugfs.h>
+ #include <linux/dmapool.h>
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+index 342d75f12051d2..89ff33daba4041 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v2_hw.c
+@@ -2501,6 +2501,7 @@ static void prep_ata_v2_hw(struct hisi_hba *hisi_hba,
+ struct hisi_sas_port *port = to_hisi_sas_port(sas_port);
+ struct sas_ata_task *ata_task = &task->ata_task;
+ struct sas_tmf_task *tmf = slot->tmf;
++ int phy_id;
+ u8 *buf_cmd;
+ int has_data = 0, hdr_tag = 0;
+ u32 dw0, dw1 = 0, dw2 = 0;
+@@ -2508,10 +2509,14 @@ static void prep_ata_v2_hw(struct hisi_hba *hisi_hba,
+ /* create header */
+ /* dw0 */
+ dw0 = port->id << CMD_HDR_PORT_OFF;
+- if (parent_dev && dev_is_expander(parent_dev->dev_type))
++ if (parent_dev && dev_is_expander(parent_dev->dev_type)) {
+ dw0 |= 3 << CMD_HDR_CMD_OFF;
+- else
++ } else {
++ phy_id = device->phy->identify.phy_identifier;
++ dw0 |= (1U << phy_id) << CMD_HDR_PHY_ID_OFF;
++ dw0 |= CMD_HDR_FORCE_PHY_MSK;
+ dw0 |= 4 << CMD_HDR_CMD_OFF;
++ }
+
+ if (tmf && ata_task->force_phy) {
+ dw0 |= CMD_HDR_FORCE_PHY_MSK;
+diff --git a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+index cd394d8c9f07f0..2b04556681a1ac 100644
+--- a/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
++++ b/drivers/scsi/hisi_sas/hisi_sas_v3_hw.c
+@@ -358,6 +358,10 @@
+ #define CMD_HDR_RESP_REPORT_MSK (0x1 << CMD_HDR_RESP_REPORT_OFF)
+ #define CMD_HDR_TLR_CTRL_OFF 6
+ #define CMD_HDR_TLR_CTRL_MSK (0x3 << CMD_HDR_TLR_CTRL_OFF)
++#define CMD_HDR_PHY_ID_OFF 8
++#define CMD_HDR_PHY_ID_MSK (0x1ff << CMD_HDR_PHY_ID_OFF)
++#define CMD_HDR_FORCE_PHY_OFF 17
++#define CMD_HDR_FORCE_PHY_MSK (0x1U << CMD_HDR_FORCE_PHY_OFF)
+ #define CMD_HDR_PORT_OFF 18
+ #define CMD_HDR_PORT_MSK (0xf << CMD_HDR_PORT_OFF)
+ #define CMD_HDR_PRIORITY_OFF 27
+@@ -1425,15 +1429,21 @@ static void prep_ata_v3_hw(struct hisi_hba *hisi_hba,
+ struct hisi_sas_cmd_hdr *hdr = slot->cmd_hdr;
+ struct asd_sas_port *sas_port = device->port;
+ struct hisi_sas_port *port = to_hisi_sas_port(sas_port);
++ int phy_id;
+ u8 *buf_cmd;
+ int has_data = 0, hdr_tag = 0;
+ u32 dw1 = 0, dw2 = 0;
+
+ hdr->dw0 = cpu_to_le32(port->id << CMD_HDR_PORT_OFF);
+- if (parent_dev && dev_is_expander(parent_dev->dev_type))
++ if (parent_dev && dev_is_expander(parent_dev->dev_type)) {
+ hdr->dw0 |= cpu_to_le32(3 << CMD_HDR_CMD_OFF);
+- else
++ } else {
++ phy_id = device->phy->identify.phy_identifier;
++ hdr->dw0 |= cpu_to_le32((1U << phy_id)
++ << CMD_HDR_PHY_ID_OFF);
++ hdr->dw0 |= CMD_HDR_FORCE_PHY_MSK;
+ hdr->dw0 |= cpu_to_le32(4U << CMD_HDR_CMD_OFF);
++ }
+
+ switch (task->data_dir) {
+ case DMA_TO_DEVICE:
+@@ -3323,8 +3333,8 @@ static void hisi_sas_map_queues(struct Scsi_Host *shost)
+ if (i == HCTX_TYPE_POLL)
+ blk_mq_map_queues(qmap);
+ else
+- blk_mq_pci_map_queues(qmap, hisi_hba->pci_dev,
+- BASE_VECTORS_V3_HW);
++ blk_mq_map_hw_queues(qmap, hisi_hba->dev,
++ BASE_VECTORS_V3_HW);
+ qoff += qmap->nr_queues;
+ }
+ }
+diff --git a/drivers/scsi/megaraid/megaraid_sas_base.c b/drivers/scsi/megaraid/megaraid_sas_base.c
+index 50f1dcb6d58460..21f22e913cd08d 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -37,7 +37,6 @@
+ #include <linux/poll.h>
+ #include <linux/vmalloc.h>
+ #include <linux/irq_poll.h>
+-#include <linux/blk-mq-pci.h>
+
+ #include <scsi/scsi.h>
+ #include <scsi/scsi_cmnd.h>
+@@ -2104,6 +2103,9 @@ static int megasas_device_configure(struct scsi_device *sdev,
+ /* This sdev property may change post OCR */
+ megasas_set_dynamic_target_properties(sdev, lim, is_target_prop);
+
++ if (!MEGASAS_IS_LOGICAL(sdev))
++ sdev->no_vpd_size = 1;
++
+ mutex_unlock(&instance->reset_mutex);
+
+ return 0;
+@@ -3193,7 +3195,7 @@ static void megasas_map_queues(struct Scsi_Host *shost)
+ map = &shost->tag_set.map[HCTX_TYPE_DEFAULT];
+ map->nr_queues = instance->msix_vectors - offset;
+ map->queue_offset = 0;
+- blk_mq_pci_map_queues(map, instance->pdev, offset);
++ blk_mq_map_hw_queues(map, &instance->pdev->dev, offset);
+ qoff += map->nr_queues;
+ offset += map->nr_queues;
+
+@@ -3663,8 +3665,10 @@ megasas_complete_cmd(struct megasas_instance *instance, struct megasas_cmd *cmd,
+
+ case MFI_STAT_SCSI_IO_FAILED:
+ case MFI_STAT_LD_INIT_IN_PROGRESS:
+- cmd->scmd->result =
+- (DID_ERROR << 16) | hdr->scsi_status;
++ if (hdr->scsi_status == 0xf0)
++ cmd->scmd->result = (DID_ERROR << 16) | SAM_STAT_CHECK_CONDITION;
++ else
++ cmd->scmd->result = (DID_ERROR << 16) | hdr->scsi_status;
+ break;
+
+ case MFI_STAT_SCSI_DONE_WITH_ERROR:
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index 1eec23da28e2d6..1eea4df9e47d35 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -2043,7 +2043,10 @@ map_cmd_status(struct fusion_context *fusion,
+
+ case MFI_STAT_SCSI_IO_FAILED:
+ case MFI_STAT_LD_INIT_IN_PROGRESS:
+- scmd->result = (DID_ERROR << 16) | ext_status;
++ if (ext_status == 0xf0)
++ scmd->result = (DID_ERROR << 16) | SAM_STAT_CHECK_CONDITION;
++ else
++ scmd->result = (DID_ERROR << 16) | ext_status;
+ break;
+
+ case MFI_STAT_SCSI_DONE_WITH_ERROR:
+diff --git a/drivers/scsi/mpi3mr/mpi3mr.h b/drivers/scsi/mpi3mr/mpi3mr.h
+index ee5a75a4b3bb80..ab7c5f1fc04121 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr.h
++++ b/drivers/scsi/mpi3mr/mpi3mr.h
+@@ -12,7 +12,6 @@
+
+ #include <linux/blkdev.h>
+ #include <linux/blk-mq.h>
+-#include <linux/blk-mq-pci.h>
+ #include <linux/delay.h>
+ #include <linux/dmapool.h>
+ #include <linux/errno.h>
+diff --git a/drivers/scsi/mpi3mr/mpi3mr_os.c b/drivers/scsi/mpi3mr/mpi3mr_os.c
+index 1bef88130d0c06..1e8735538b238e 100644
+--- a/drivers/scsi/mpi3mr/mpi3mr_os.c
++++ b/drivers/scsi/mpi3mr/mpi3mr_os.c
+@@ -4042,7 +4042,7 @@ static void mpi3mr_map_queues(struct Scsi_Host *shost)
+ */
+ map->queue_offset = qoff;
+ if (i != HCTX_TYPE_POLL)
+- blk_mq_pci_map_queues(map, mrioc->pdev, offset);
++ blk_mq_map_hw_queues(map, &mrioc->pdev->dev, offset);
+ else
+ blk_mq_map_queues(map);
+
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index f2a55aa5fe6503..9599d7a5002868 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -53,7 +53,6 @@
+ #include <linux/pci.h>
+ #include <linux/interrupt.h>
+ #include <linux/raid_class.h>
+-#include <linux/blk-mq-pci.h>
+ #include <linux/unaligned.h>
+
+ #include "mpt3sas_base.h"
+@@ -11890,7 +11889,7 @@ static void scsih_map_queues(struct Scsi_Host *shost)
+ */
+ map->queue_offset = qoff;
+ if (i != HCTX_TYPE_POLL)
+- blk_mq_pci_map_queues(map, ioc->pdev, offset);
++ blk_mq_map_hw_queues(map, &ioc->pdev->dev, offset);
+ else
+ blk_mq_map_queues(map);
+
+diff --git a/drivers/scsi/pm8001/pm8001_init.c b/drivers/scsi/pm8001/pm8001_init.c
+index 33e1eba62ca12c..b53b1ae5b74c30 100644
+--- a/drivers/scsi/pm8001/pm8001_init.c
++++ b/drivers/scsi/pm8001/pm8001_init.c
+@@ -101,7 +101,7 @@ static void pm8001_map_queues(struct Scsi_Host *shost)
+ struct blk_mq_queue_map *qmap = &shost->tag_set.map[HCTX_TYPE_DEFAULT];
+
+ if (pm8001_ha->number_of_intr > 1) {
+- blk_mq_pci_map_queues(qmap, pm8001_ha->pdev, 1);
++ blk_mq_map_hw_queues(qmap, &pm8001_ha->pdev->dev, 1);
+ return;
+ }
+
+diff --git a/drivers/scsi/pm8001/pm8001_sas.h b/drivers/scsi/pm8001/pm8001_sas.h
+index ced6721380a853..c46470e0cf63b7 100644
+--- a/drivers/scsi/pm8001/pm8001_sas.h
++++ b/drivers/scsi/pm8001/pm8001_sas.h
+@@ -56,7 +56,6 @@
+ #include <scsi/sas_ata.h>
+ #include <linux/atomic.h>
+ #include <linux/blk-mq.h>
+-#include <linux/blk-mq-pci.h>
+ #include "pm8001_defs.h"
+
+ #define DRV_NAME "pm80xx"
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index 8f4cc136a9c9c4..8ee2e337c9e1b7 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -8,7 +8,6 @@
+ #include <linux/delay.h>
+ #include <linux/nvme.h>
+ #include <linux/nvme-fc.h>
+-#include <linux/blk-mq-pci.h>
+ #include <linux/blk-mq.h>
+
+ static struct nvme_fc_port_template qla_nvme_fc_transport;
+@@ -841,7 +840,7 @@ static void qla_nvme_map_queues(struct nvme_fc_local_port *lport,
+ {
+ struct scsi_qla_host *vha = lport->private;
+
+- blk_mq_pci_map_queues(map, vha->hw->pdev, vha->irq_offset);
++ blk_mq_map_hw_queues(map, &vha->hw->pdev->dev, vha->irq_offset);
+ }
+
+ static void qla_nvme_localport_delete(struct nvme_fc_local_port *lport)
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 7ab717ed72327e..31535beaaa161c 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -13,7 +13,6 @@
+ #include <linux/mutex.h>
+ #include <linux/kobject.h>
+ #include <linux/slab.h>
+-#include <linux/blk-mq-pci.h>
+ #include <linux/refcount.h>
+ #include <linux/crash_dump.h>
+ #include <linux/trace_events.h>
+@@ -8071,7 +8070,8 @@ static void qla2xxx_map_queues(struct Scsi_Host *shost)
+ if (USER_CTRL_IRQ(vha->hw) || !vha->hw->mqiobase)
+ blk_mq_map_queues(qmap);
+ else
+- blk_mq_pci_map_queues(qmap, vha->hw->pdev, vha->irq_offset);
++ blk_mq_map_hw_queues(qmap, &vha->hw->pdev->dev,
++ vha->irq_offset);
+ }
+
+ struct scsi_host_template qla2xxx_driver_template = {
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index 9b47f91c5b9720..8274fe0ec7146f 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -3209,11 +3209,14 @@ iscsi_set_host_param(struct iscsi_transport *transport,
+ }
+
+ /* see similar check in iscsi_if_set_param() */
+- if (strlen(data) > ev->u.set_host_param.len)
+- return -EINVAL;
++ if (strlen(data) > ev->u.set_host_param.len) {
++ err = -EINVAL;
++ goto out;
++ }
+
+ err = transport->set_host_param(shost, ev->u.set_host_param.param,
+ data, ev->u.set_host_param.len);
++out:
+ scsi_host_put(shost);
+ return err;
+ }
+diff --git a/drivers/scsi/smartpqi/smartpqi_init.c b/drivers/scsi/smartpqi/smartpqi_init.c
+index 870f37b7054644..d919a74746a056 100644
+--- a/drivers/scsi/smartpqi/smartpqi_init.c
++++ b/drivers/scsi/smartpqi/smartpqi_init.c
+@@ -19,7 +19,7 @@
+ #include <linux/bcd.h>
+ #include <linux/reboot.h>
+ #include <linux/cciss_ioctl.h>
+-#include <linux/blk-mq-pci.h>
++#include <linux/crash_dump.h>
+ #include <scsi/scsi_host.h>
+ #include <scsi/scsi_cmnd.h>
+ #include <scsi/scsi_device.h>
+@@ -5247,7 +5247,7 @@ static void pqi_calculate_io_resources(struct pqi_ctrl_info *ctrl_info)
+ ctrl_info->error_buffer_length =
+ ctrl_info->max_io_slots * PQI_ERROR_BUFFER_ELEMENT_LENGTH;
+
+- if (reset_devices)
++ if (is_kdump_kernel())
+ max_transfer_size = min(ctrl_info->max_transfer_size,
+ PQI_MAX_TRANSFER_SIZE_KDUMP);
+ else
+@@ -5276,7 +5276,7 @@ static void pqi_calculate_queue_resources(struct pqi_ctrl_info *ctrl_info)
+ u16 num_elements_per_iq;
+ u16 num_elements_per_oq;
+
+- if (reset_devices) {
++ if (is_kdump_kernel()) {
+ num_queue_groups = 1;
+ } else {
+ int num_cpus;
+@@ -6547,10 +6547,10 @@ static void pqi_map_queues(struct Scsi_Host *shost)
+ struct pqi_ctrl_info *ctrl_info = shost_to_hba(shost);
+
+ if (!ctrl_info->disable_managed_interrupts)
+- return blk_mq_pci_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
+- ctrl_info->pci_dev, 0);
++ blk_mq_map_hw_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT],
++ &ctrl_info->pci_dev->dev, 0);
+ else
+- return blk_mq_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT]);
++ blk_mq_map_queues(&shost->tag_set.map[HCTX_TYPE_DEFAULT]);
+ }
+
+ static inline bool pqi_is_tape_changer_device(struct pqi_scsi_dev *device)
+@@ -8288,12 +8288,12 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info)
+ u32 product_id;
+
+ if (reset_devices) {
+- if (pqi_is_fw_triage_supported(ctrl_info)) {
++ if (is_kdump_kernel() && pqi_is_fw_triage_supported(ctrl_info)) {
+ rc = sis_wait_for_fw_triage_completion(ctrl_info);
+ if (rc)
+ return rc;
+ }
+- if (sis_is_ctrl_logging_supported(ctrl_info)) {
++ if (is_kdump_kernel() && sis_is_ctrl_logging_supported(ctrl_info)) {
+ sis_notify_kdump(ctrl_info);
+ rc = sis_wait_for_ctrl_logging_completion(ctrl_info);
+ if (rc)
+@@ -8344,7 +8344,7 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info)
+ ctrl_info->product_id = (u8)product_id;
+ ctrl_info->product_revision = (u8)(product_id >> 8);
+
+- if (reset_devices) {
++ if (is_kdump_kernel()) {
+ if (ctrl_info->max_outstanding_requests >
+ PQI_MAX_OUTSTANDING_REQUESTS_KDUMP)
+ ctrl_info->max_outstanding_requests =
+@@ -8480,7 +8480,7 @@ static int pqi_ctrl_init(struct pqi_ctrl_info *ctrl_info)
+ if (rc)
+ return rc;
+
+- if (ctrl_info->ctrl_logging_supported && !reset_devices) {
++ if (ctrl_info->ctrl_logging_supported && !is_kdump_kernel()) {
+ pqi_host_setup_buffer(ctrl_info, &ctrl_info->ctrl_log_memory, PQI_CTRL_LOG_TOTAL_SIZE, PQI_CTRL_LOG_MIN_SIZE);
+ pqi_host_memory_update(ctrl_info, &ctrl_info->ctrl_log_memory, PQI_VENDOR_GENERAL_CTRL_LOG_MEMORY_UPDATE);
+ }
+diff --git a/drivers/ufs/host/ufs-exynos.c b/drivers/ufs/host/ufs-exynos.c
+index 98505c68103d0e..f2cbfc2d399cdb 100644
+--- a/drivers/ufs/host/ufs-exynos.c
++++ b/drivers/ufs/host/ufs-exynos.c
+@@ -915,6 +915,12 @@ static int exynos_ufs_phy_init(struct exynos_ufs *ufs)
+ }
+
+ phy_set_bus_width(generic_phy, ufs->avail_ln_rx);
++
++ if (generic_phy->power_count) {
++ phy_power_off(generic_phy);
++ phy_exit(generic_phy);
++ }
++
+ ret = phy_init(generic_phy);
+ if (ret) {
+ dev_err(hba->dev, "%s: phy init failed, ret = %d\n",
+diff --git a/fs/Kconfig b/fs/Kconfig
+index aae170fc279524..3117304676331c 100644
+--- a/fs/Kconfig
++++ b/fs/Kconfig
+@@ -369,6 +369,7 @@ config GRACE_PERIOD
+ config LOCKD
+ tristate
+ depends on FILE_LOCKING
++ select CRC32
+ select GRACE_PERIOD
+
+ config LOCKD_V4
+diff --git a/fs/btrfs/super.c b/fs/btrfs/super.c
+index 73343503ea60e4..08ccf5d5e14407 100644
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -1140,8 +1140,7 @@ static int btrfs_show_options(struct seq_file *seq, struct dentry *dentry)
+ subvol_name = btrfs_get_subvol_name_from_objectid(info,
+ btrfs_root_id(BTRFS_I(d_inode(dentry))->root));
+ if (!IS_ERR(subvol_name)) {
+- seq_puts(seq, ",subvol=");
+- seq_escape(seq, subvol_name, " \t\n\\");
++ seq_show_option(seq, "subvol", subvol_name);
+ kfree(subvol_name);
+ }
+ return 0;
+diff --git a/fs/fuse/virtio_fs.c b/fs/fuse/virtio_fs.c
+index d220e28e755fef..749c9f66d74c6d 100644
+--- a/fs/fuse/virtio_fs.c
++++ b/fs/fuse/virtio_fs.c
+@@ -1663,6 +1663,9 @@ static int virtio_fs_get_tree(struct fs_context *fsc)
+ unsigned int virtqueue_size;
+ int err = -EIO;
+
++ if (!fsc->source)
++ return invalf(fsc, "No source specified");
++
+ /* This gets a reference on virtio_fs object. This ptr gets installed
+ * in fc->iq->priv. Once fuse_conn is going away, it calls ->put()
+ * to drop the reference to this object.
+diff --git a/fs/hfs/bnode.c b/fs/hfs/bnode.c
+index 6add6ebfef8967..cb823a8a6ba960 100644
+--- a/fs/hfs/bnode.c
++++ b/fs/hfs/bnode.c
+@@ -67,6 +67,12 @@ void hfs_bnode_read_key(struct hfs_bnode *node, void *key, int off)
+ else
+ key_len = tree->max_key_len + 1;
+
++ if (key_len > sizeof(hfs_btree_key) || key_len < 1) {
++ memset(key, 0, sizeof(hfs_btree_key));
++ pr_err("hfs: Invalid key length: %d\n", key_len);
++ return;
++ }
++
+ hfs_bnode_read(node, key, off, key_len);
+ }
+
+diff --git a/fs/hfsplus/bnode.c b/fs/hfsplus/bnode.c
+index 87974d5e679156..079ea80534f7de 100644
+--- a/fs/hfsplus/bnode.c
++++ b/fs/hfsplus/bnode.c
+@@ -67,6 +67,12 @@ void hfs_bnode_read_key(struct hfs_bnode *node, void *key, int off)
+ else
+ key_len = tree->max_key_len + 2;
+
++ if (key_len > sizeof(hfsplus_btree_key) || key_len < 1) {
++ memset(key, 0, sizeof(hfsplus_btree_key));
++ pr_err("hfsplus: Invalid key length: %d\n", key_len);
++ return;
++ }
++
+ hfs_bnode_read(node, key, off, key_len);
+ }
+
+diff --git a/fs/isofs/export.c b/fs/isofs/export.c
+index 35768a63fb1d23..421d247fae5230 100644
+--- a/fs/isofs/export.c
++++ b/fs/isofs/export.c
+@@ -180,7 +180,7 @@ static struct dentry *isofs_fh_to_parent(struct super_block *sb,
+ return NULL;
+
+ return isofs_export_iget(sb,
+- fh_len > 2 ? ifid->parent_block : 0,
++ fh_len > 3 ? ifid->parent_block : 0,
+ ifid->parent_offset,
+ fh_len > 4 ? ifid->parent_generation : 0);
+ }
+diff --git a/fs/nfs/Kconfig b/fs/nfs/Kconfig
+index d3f76101ad4b91..07932ce9246c17 100644
+--- a/fs/nfs/Kconfig
++++ b/fs/nfs/Kconfig
+@@ -2,6 +2,7 @@
+ config NFS_FS
+ tristate "NFS client support"
+ depends on INET && FILE_LOCKING && MULTIUSER
++ select CRC32
+ select LOCKD
+ select SUNRPC
+ select NFS_COMMON
+@@ -196,7 +197,6 @@ config NFS_USE_KERNEL_DNS
+ config NFS_DEBUG
+ bool
+ depends on NFS_FS && SUNRPC_DEBUG
+- select CRC32
+ default y
+
+ config NFS_DISABLE_UDP_SUPPORT
+diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h
+index 6bcc4b0e00ab72..8b568a514fd1c6 100644
+--- a/fs/nfs/internal.h
++++ b/fs/nfs/internal.h
+@@ -895,18 +895,11 @@ u64 nfs_timespec_to_change_attr(const struct timespec64 *ts)
+ return ((u64)ts->tv_sec << 30) + ts->tv_nsec;
+ }
+
+-#ifdef CONFIG_CRC32
+ static inline u32 nfs_stateid_hash(const nfs4_stateid *stateid)
+ {
+ return ~crc32_le(0xFFFFFFFF, &stateid->other[0],
+ NFS4_STATEID_OTHER_SIZE);
+ }
+-#else
+-static inline u32 nfs_stateid_hash(nfs4_stateid *stateid)
+-{
+- return 0;
+-}
+-#endif
+
+ static inline bool nfs_error_is_fatal(int err)
+ {
+diff --git a/fs/nfs/nfs4session.h b/fs/nfs/nfs4session.h
+index 351616c61df541..f9c291e2165cd8 100644
+--- a/fs/nfs/nfs4session.h
++++ b/fs/nfs/nfs4session.h
+@@ -148,16 +148,12 @@ static inline void nfs4_copy_sessionid(struct nfs4_sessionid *dst,
+ memcpy(dst->data, src->data, NFS4_MAX_SESSIONID_LEN);
+ }
+
+-#ifdef CONFIG_CRC32
+ /*
+ * nfs_session_id_hash - calculate the crc32 hash for the session id
+ * @session - pointer to session
+ */
+ #define nfs_session_id_hash(sess_id) \
+ (~crc32_le(0xFFFFFFFF, &(sess_id)->data[0], sizeof((sess_id)->data)))
+-#else
+-#define nfs_session_id_hash(session) (0)
+-#endif
+ #else /* defined(CONFIG_NFS_V4_1) */
+
+ static inline int nfs4_init_session(struct nfs_client *clp)
+diff --git a/fs/nfsd/Kconfig b/fs/nfsd/Kconfig
+index c0bd1509ccd480..9eb2e795c43c4c 100644
+--- a/fs/nfsd/Kconfig
++++ b/fs/nfsd/Kconfig
+@@ -4,6 +4,7 @@ config NFSD
+ depends on INET
+ depends on FILE_LOCKING
+ depends on FSNOTIFY
++ select CRC32
+ select LOCKD
+ select SUNRPC
+ select EXPORTFS
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 5e81c819c3846a..c50839a015e94f 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -5287,7 +5287,7 @@ static void nfsd_break_one_deleg(struct nfs4_delegation *dp)
+ queued = nfsd4_run_cb(&dp->dl_recall);
+ WARN_ON_ONCE(!queued);
+ if (!queued)
+- nfs4_put_stid(&dp->dl_stid);
++ refcount_dec(&dp->dl_stid.sc_count);
+ }
+
+ /* Called from break_lease() with flc_lock held. */
+diff --git a/fs/nfsd/nfsfh.h b/fs/nfsd/nfsfh.h
+index 876152a91f122f..5103c2f4d2253a 100644
+--- a/fs/nfsd/nfsfh.h
++++ b/fs/nfsd/nfsfh.h
+@@ -267,7 +267,6 @@ static inline bool fh_fsid_match(const struct knfsd_fh *fh1,
+ return true;
+ }
+
+-#ifdef CONFIG_CRC32
+ /**
+ * knfsd_fh_hash - calculate the crc32 hash for the filehandle
+ * @fh - pointer to filehandle
+@@ -279,12 +278,6 @@ static inline u32 knfsd_fh_hash(const struct knfsd_fh *fh)
+ {
+ return ~crc32_le(0xFFFFFFFF, fh->fh_raw, fh->fh_size);
+ }
+-#else
+-static inline u32 knfsd_fh_hash(const struct knfsd_fh *fh)
+-{
+- return 0;
+-}
+-#endif
+
+ /**
+ * fh_clear_pre_post_attrs - Reset pre/post attributes
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 844874b4a91a94..500a9634ad5334 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -547,8 +547,6 @@ int ovl_set_metacopy_xattr(struct ovl_fs *ofs, struct dentry *d,
+ bool ovl_is_metacopy_dentry(struct dentry *dentry);
+ char *ovl_get_redirect_xattr(struct ovl_fs *ofs, const struct path *path, int padding);
+ int ovl_ensure_verity_loaded(struct path *path);
+-int ovl_get_verity_xattr(struct ovl_fs *ofs, const struct path *path,
+- u8 *digest_buf, int *buf_length);
+ int ovl_validate_verity(struct ovl_fs *ofs,
+ struct path *metapath,
+ struct path *datapath);
+diff --git a/fs/overlayfs/super.c b/fs/overlayfs/super.c
+index fe511192f83cb0..87a36c6eea5f36 100644
+--- a/fs/overlayfs/super.c
++++ b/fs/overlayfs/super.c
+@@ -1119,6 +1119,11 @@ static struct ovl_entry *ovl_get_lowerstack(struct super_block *sb,
+ return ERR_PTR(-EINVAL);
+ }
+
++ if (ctx->nr == ctx->nr_data) {
++ pr_err("at least one non-data lowerdir is required\n");
++ return ERR_PTR(-EINVAL);
++ }
++
+ err = -EINVAL;
+ for (i = 0; i < ctx->nr; i++) {
+ l = &ctx->lower[i];
+diff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h
+index 907af3cbffd1bc..90b7b30abfbd87 100644
+--- a/fs/smb/client/cifsproto.h
++++ b/fs/smb/client/cifsproto.h
+@@ -160,6 +160,8 @@ extern int cifs_get_writable_path(struct cifs_tcon *tcon, const char *name,
+ extern struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *, bool);
+ extern int cifs_get_readable_path(struct cifs_tcon *tcon, const char *name,
+ struct cifsFileInfo **ret_file);
++extern int cifs_get_hardlink_path(struct cifs_tcon *tcon, struct inode *inode,
++ struct file *file);
+ extern unsigned int smbCalcSize(void *buf);
+ extern int decode_negTokenInit(unsigned char *security_blob, int length,
+ struct TCP_Server_Info *server);
+diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c
+index 3aaf5cdce1b720..d5549e06a533d8 100644
+--- a/fs/smb/client/connect.c
++++ b/fs/smb/client/connect.c
+@@ -316,7 +316,6 @@ cifs_abort_connection(struct TCP_Server_Info *server)
+ server->ssocket->flags);
+ sock_release(server->ssocket);
+ server->ssocket = NULL;
+- put_net(cifs_net_ns(server));
+ }
+ server->sequence_number = 0;
+ server->session_estab = false;
+@@ -988,13 +987,9 @@ clean_demultiplex_info(struct TCP_Server_Info *server)
+ msleep(125);
+ if (cifs_rdma_enabled(server))
+ smbd_destroy(server);
+-
+ if (server->ssocket) {
+ sock_release(server->ssocket);
+ server->ssocket = NULL;
+-
+- /* Release netns reference for the socket. */
+- put_net(cifs_net_ns(server));
+ }
+
+ if (!list_empty(&server->pending_mid_q)) {
+@@ -1042,7 +1037,6 @@ clean_demultiplex_info(struct TCP_Server_Info *server)
+ */
+ }
+
+- /* Release netns reference for this server. */
+ put_net(cifs_net_ns(server));
+ kfree(server->leaf_fullpath);
+ kfree(server->hostname);
+@@ -1718,8 +1712,6 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+
+ tcp_ses->ops = ctx->ops;
+ tcp_ses->vals = ctx->vals;
+-
+- /* Grab netns reference for this server. */
+ cifs_set_net_ns(tcp_ses, get_net(current->nsproxy->net_ns));
+
+ tcp_ses->sign = ctx->sign;
+@@ -1852,7 +1844,6 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+ out_err_crypto_release:
+ cifs_crypto_secmech_release(tcp_ses);
+
+- /* Release netns reference for this server. */
+ put_net(cifs_net_ns(tcp_ses));
+
+ out_err:
+@@ -1861,10 +1852,8 @@ cifs_get_tcp_session(struct smb3_fs_context *ctx,
+ cifs_put_tcp_session(tcp_ses->primary_server, false);
+ kfree(tcp_ses->hostname);
+ kfree(tcp_ses->leaf_fullpath);
+- if (tcp_ses->ssocket) {
++ if (tcp_ses->ssocket)
+ sock_release(tcp_ses->ssocket);
+- put_net(cifs_net_ns(tcp_ses));
+- }
+ kfree(tcp_ses);
+ }
+ return ERR_PTR(rc);
+@@ -3132,24 +3121,20 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ socket = server->ssocket;
+ } else {
+ struct net *net = cifs_net_ns(server);
++ struct sock *sk;
+
+- rc = sock_create_kern(net, sfamily, SOCK_STREAM, IPPROTO_TCP, &server->ssocket);
++ rc = __sock_create(net, sfamily, SOCK_STREAM,
++ IPPROTO_TCP, &server->ssocket, 1);
+ if (rc < 0) {
+ cifs_server_dbg(VFS, "Error %d creating socket\n", rc);
+ return rc;
+ }
+
+- /*
+- * Grab netns reference for the socket.
+- *
+- * This reference will be released in several situations:
+- * - In the failure path before the cifsd thread is started.
+- * - In the all place where server->socket is released, it is
+- * also set to NULL.
+- * - Ultimately in clean_demultiplex_info(), during the final
+- * teardown.
+- */
+- get_net(net);
++ sk = server->ssocket->sk;
++ __netns_tracker_free(net, &sk->ns_tracker, false);
++ sk->sk_net_refcnt = 1;
++ get_net_track(net, &sk->ns_tracker, GFP_KERNEL);
++ sock_inuse_add(net, 1);
+
+ /* BB other socket options to set KEEPALIVE, NODELAY? */
+ cifs_dbg(FYI, "Socket created\n");
+@@ -3201,7 +3186,6 @@ generic_ip_connect(struct TCP_Server_Info *server)
+ if (rc < 0) {
+ cifs_dbg(FYI, "Error %d connecting to server\n", rc);
+ trace_smb3_connect_err(server->hostname, server->conn_id, &server->dstaddr, rc);
+- put_net(cifs_net_ns(server));
+ sock_release(socket);
+ server->ssocket = NULL;
+ return rc;
+diff --git a/fs/smb/client/file.c b/fs/smb/client/file.c
+index 313c851fc1c122..0f6fec042f6a03 100644
+--- a/fs/smb/client/file.c
++++ b/fs/smb/client/file.c
+@@ -1002,6 +1002,11 @@ int cifs_open(struct inode *inode, struct file *file)
+ } else {
+ _cifsFileInfo_put(cfile, true, false);
+ }
++ } else {
++ /* hard link on the deferred close file */
++ rc = cifs_get_hardlink_path(tcon, inode, file);
++ if (rc)
++ cifs_close_deferred_file(CIFS_I(inode));
+ }
+
+ if (server->oplocks)
+@@ -2066,6 +2071,29 @@ cifs_move_llist(struct list_head *source, struct list_head *dest)
+ list_move(li, dest);
+ }
+
++int
++cifs_get_hardlink_path(struct cifs_tcon *tcon, struct inode *inode,
++ struct file *file)
++{
++ struct cifsFileInfo *open_file = NULL;
++ struct cifsInodeInfo *cinode = CIFS_I(inode);
++ int rc = 0;
++
++ spin_lock(&tcon->open_file_lock);
++ spin_lock(&cinode->open_file_lock);
++
++ list_for_each_entry(open_file, &cinode->openFileList, flist) {
++ if (file->f_flags == open_file->f_flags) {
++ rc = -EINVAL;
++ break;
++ }
++ }
++
++ spin_unlock(&cinode->open_file_lock);
++ spin_unlock(&tcon->open_file_lock);
++ return rc;
++}
++
+ void
+ cifs_free_llist(struct list_head *llist)
+ {
+diff --git a/fs/smb/server/oplock.c b/fs/smb/server/oplock.c
+index deacf78b4400cc..e2ba0dadb5fbf7 100644
+--- a/fs/smb/server/oplock.c
++++ b/fs/smb/server/oplock.c
+@@ -129,14 +129,6 @@ static void free_opinfo(struct oplock_info *opinfo)
+ kfree(opinfo);
+ }
+
+-static inline void opinfo_free_rcu(struct rcu_head *rcu_head)
+-{
+- struct oplock_info *opinfo;
+-
+- opinfo = container_of(rcu_head, struct oplock_info, rcu_head);
+- free_opinfo(opinfo);
+-}
+-
+ struct oplock_info *opinfo_get(struct ksmbd_file *fp)
+ {
+ struct oplock_info *opinfo;
+@@ -157,8 +149,8 @@ static struct oplock_info *opinfo_get_list(struct ksmbd_inode *ci)
+ if (list_empty(&ci->m_op_list))
+ return NULL;
+
+- rcu_read_lock();
+- opinfo = list_first_or_null_rcu(&ci->m_op_list, struct oplock_info,
++ down_read(&ci->m_lock);
++ opinfo = list_first_entry(&ci->m_op_list, struct oplock_info,
+ op_entry);
+ if (opinfo) {
+ if (opinfo->conn == NULL ||
+@@ -171,8 +163,7 @@ static struct oplock_info *opinfo_get_list(struct ksmbd_inode *ci)
+ }
+ }
+ }
+-
+- rcu_read_unlock();
++ up_read(&ci->m_lock);
+
+ return opinfo;
+ }
+@@ -185,7 +176,7 @@ void opinfo_put(struct oplock_info *opinfo)
+ if (!atomic_dec_and_test(&opinfo->refcount))
+ return;
+
+- call_rcu(&opinfo->rcu_head, opinfo_free_rcu);
++ free_opinfo(opinfo);
+ }
+
+ static void opinfo_add(struct oplock_info *opinfo)
+@@ -193,7 +184,7 @@ static void opinfo_add(struct oplock_info *opinfo)
+ struct ksmbd_inode *ci = opinfo->o_fp->f_ci;
+
+ down_write(&ci->m_lock);
+- list_add_rcu(&opinfo->op_entry, &ci->m_op_list);
++ list_add(&opinfo->op_entry, &ci->m_op_list);
+ up_write(&ci->m_lock);
+ }
+
+@@ -207,7 +198,7 @@ static void opinfo_del(struct oplock_info *opinfo)
+ write_unlock(&lease_list_lock);
+ }
+ down_write(&ci->m_lock);
+- list_del_rcu(&opinfo->op_entry);
++ list_del(&opinfo->op_entry);
+ up_write(&ci->m_lock);
+ }
+
+@@ -1347,8 +1338,8 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ ci = fp->f_ci;
+ op = opinfo_get(fp);
+
+- rcu_read_lock();
+- list_for_each_entry_rcu(brk_op, &ci->m_op_list, op_entry) {
++ down_read(&ci->m_lock);
++ list_for_each_entry(brk_op, &ci->m_op_list, op_entry) {
+ if (brk_op->conn == NULL)
+ continue;
+
+@@ -1358,7 +1349,6 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ if (ksmbd_conn_releasing(brk_op->conn))
+ continue;
+
+- rcu_read_unlock();
+ if (brk_op->is_lease && (brk_op->o_lease->state &
+ (~(SMB2_LEASE_READ_CACHING_LE |
+ SMB2_LEASE_HANDLE_CACHING_LE)))) {
+@@ -1388,9 +1378,8 @@ void smb_break_all_levII_oplock(struct ksmbd_work *work, struct ksmbd_file *fp,
+ oplock_break(brk_op, SMB2_OPLOCK_LEVEL_NONE, NULL);
+ next:
+ opinfo_put(brk_op);
+- rcu_read_lock();
+ }
+- rcu_read_unlock();
++ up_read(&ci->m_lock);
+
+ if (op)
+ opinfo_put(op);
+diff --git a/fs/smb/server/oplock.h b/fs/smb/server/oplock.h
+index 3f64f07872638e..9a56eaadd0dd8f 100644
+--- a/fs/smb/server/oplock.h
++++ b/fs/smb/server/oplock.h
+@@ -71,7 +71,6 @@ struct oplock_info {
+ struct list_head lease_entry;
+ wait_queue_head_t oplock_q; /* Other server threads */
+ wait_queue_head_t oplock_brk; /* oplock breaking wait */
+- struct rcu_head rcu_head;
+ };
+
+ struct lease_break_info {
+diff --git a/fs/smb/server/smb2pdu.c b/fs/smb/server/smb2pdu.c
+index 7fea86edc71763..129517a0c5c739 100644
+--- a/fs/smb/server/smb2pdu.c
++++ b/fs/smb/server/smb2pdu.c
+@@ -1599,8 +1599,10 @@ static int krb5_authenticate(struct ksmbd_work *work,
+ if (prev_sess_id && prev_sess_id != sess->id)
+ destroy_previous_session(conn, sess->user, prev_sess_id);
+
+- if (sess->state == SMB2_SESSION_VALID)
++ if (sess->state == SMB2_SESSION_VALID) {
+ ksmbd_free_user(sess->user);
++ sess->user = NULL;
++ }
+
+ retval = ksmbd_krb5_authenticate(sess, in_blob, in_len,
+ out_blob, &out_len);
+diff --git a/fs/smb/server/transport_ipc.c b/fs/smb/server/transport_ipc.c
+index 87af57cf35a157..9b3c68014aee28 100644
+--- a/fs/smb/server/transport_ipc.c
++++ b/fs/smb/server/transport_ipc.c
+@@ -310,7 +310,11 @@ static int ipc_server_config_on_startup(struct ksmbd_startup_request *req)
+ server_conf.signing = req->signing;
+ server_conf.tcp_port = req->tcp_port;
+ server_conf.ipc_timeout = req->ipc_timeout * HZ;
+- server_conf.deadtime = req->deadtime * SMB_ECHO_INTERVAL;
++ if (check_mul_overflow(req->deadtime, SMB_ECHO_INTERVAL,
++ &server_conf.deadtime)) {
++ ret = -EINVAL;
++ goto out;
++ }
+ server_conf.share_fake_fscaps = req->share_fake_fscaps;
+ ksmbd_init_domain(req->sub_auth);
+
+@@ -336,6 +340,7 @@ static int ipc_server_config_on_startup(struct ksmbd_startup_request *req)
+ ret |= ksmbd_set_work_group(req->work_group);
+ ret |= ksmbd_tcp_set_interfaces(KSMBD_STARTUP_CONFIG_INTERFACES(req),
+ req->ifc_list_sz);
++out:
+ if (ret) {
+ pr_err("Server configuration error: %s %s %s\n",
+ req->netbios_name, req->server_string,
+diff --git a/fs/smb/server/vfs.c b/fs/smb/server/vfs.c
+index ee825971abd9ab..8fd070e31fa7dd 100644
+--- a/fs/smb/server/vfs.c
++++ b/fs/smb/server/vfs.c
+@@ -496,7 +496,8 @@ int ksmbd_vfs_write(struct ksmbd_work *work, struct ksmbd_file *fp,
+ int err = 0;
+
+ if (work->conn->connection_type) {
+- if (!(fp->daccess & (FILE_WRITE_DATA_LE | FILE_APPEND_DATA_LE))) {
++ if (!(fp->daccess & (FILE_WRITE_DATA_LE | FILE_APPEND_DATA_LE)) ||
++ S_ISDIR(file_inode(fp->filp)->i_mode)) {
+ pr_err("no right to write(%pD)\n", fp->filp);
+ err = -EACCES;
+ goto out;
+diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
+index 8e7af9a03b41dd..e721148c95d07d 100644
+--- a/include/linux/backing-dev.h
++++ b/include/linux/backing-dev.h
+@@ -249,6 +249,7 @@ static inline struct bdi_writeback *inode_to_wb(const struct inode *inode)
+ {
+ #ifdef CONFIG_LOCKDEP
+ WARN_ON_ONCE(debug_locks &&
++ (inode->i_sb->s_iflags & SB_I_CGROUPWB) &&
+ (!lockdep_is_held(&inode->i_lock) &&
+ !lockdep_is_held(&inode->i_mapping->i_pages.xa_lock) &&
+ !lockdep_is_held(&inode->i_wb->list_lock)));
+diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
+index 7b5e5388c3801a..318245b4e38fb3 100644
+--- a/include/linux/blk-mq.h
++++ b/include/linux/blk-mq.h
+@@ -230,62 +230,61 @@ static inline unsigned short req_get_ioprio(struct request *req)
+ #define rq_dma_dir(rq) \
+ (op_is_write(req_op(rq)) ? DMA_TO_DEVICE : DMA_FROM_DEVICE)
+
+-#define rq_list_add(listptr, rq) do { \
+- (rq)->rq_next = *(listptr); \
+- *(listptr) = rq; \
+-} while (0)
+-
+-#define rq_list_add_tail(lastpptr, rq) do { \
+- (rq)->rq_next = NULL; \
+- **(lastpptr) = rq; \
+- *(lastpptr) = &rq->rq_next; \
+-} while (0)
+-
+-#define rq_list_pop(listptr) \
+-({ \
+- struct request *__req = NULL; \
+- if ((listptr) && *(listptr)) { \
+- __req = *(listptr); \
+- *(listptr) = __req->rq_next; \
+- } \
+- __req; \
+-})
++static inline int rq_list_empty(const struct rq_list *rl)
++{
++ return rl->head == NULL;
++}
+
+-#define rq_list_peek(listptr) \
+-({ \
+- struct request *__req = NULL; \
+- if ((listptr) && *(listptr)) \
+- __req = *(listptr); \
+- __req; \
+-})
++static inline void rq_list_init(struct rq_list *rl)
++{
++ rl->head = NULL;
++ rl->tail = NULL;
++}
+
+-#define rq_list_for_each(listptr, pos) \
+- for (pos = rq_list_peek((listptr)); pos; pos = rq_list_next(pos))
++static inline void rq_list_add_tail(struct rq_list *rl, struct request *rq)
++{
++ rq->rq_next = NULL;
++ if (rl->tail)
++ rl->tail->rq_next = rq;
++ else
++ rl->head = rq;
++ rl->tail = rq;
++}
+
+-#define rq_list_for_each_safe(listptr, pos, nxt) \
+- for (pos = rq_list_peek((listptr)), nxt = rq_list_next(pos); \
+- pos; pos = nxt, nxt = pos ? rq_list_next(pos) : NULL)
++static inline void rq_list_add_head(struct rq_list *rl, struct request *rq)
++{
++ rq->rq_next = rl->head;
++ rl->head = rq;
++ if (!rl->tail)
++ rl->tail = rq;
++}
+
+-#define rq_list_next(rq) (rq)->rq_next
+-#define rq_list_empty(list) ((list) == (struct request *) NULL)
++static inline struct request *rq_list_pop(struct rq_list *rl)
++{
++ struct request *rq = rl->head;
+
+-/**
+- * rq_list_move() - move a struct request from one list to another
+- * @src: The source list @rq is currently in
+- * @dst: The destination list that @rq will be appended to
+- * @rq: The request to move
+- * @prev: The request preceding @rq in @src (NULL if @rq is the head)
+- */
+-static inline void rq_list_move(struct request **src, struct request **dst,
+- struct request *rq, struct request *prev)
++ if (rq) {
++ rl->head = rl->head->rq_next;
++ if (!rl->head)
++ rl->tail = NULL;
++ rq->rq_next = NULL;
++ }
++
++ return rq;
++}
++
++static inline struct request *rq_list_peek(struct rq_list *rl)
+ {
+- if (prev)
+- prev->rq_next = rq->rq_next;
+- else
+- *src = rq->rq_next;
+- rq_list_add(dst, rq);
++ return rl->head;
+ }
+
++#define rq_list_for_each(rl, pos) \
++ for (pos = rq_list_peek((rl)); (pos); pos = pos->rq_next)
++
++#define rq_list_for_each_safe(rl, pos, nxt) \
++ for (pos = rq_list_peek((rl)), nxt = pos->rq_next; \
++ pos; pos = nxt, nxt = pos ? pos->rq_next : NULL)
++
+ /**
+ * enum blk_eh_timer_return - How the timeout handler should proceed
+ * @BLK_EH_DONE: The block driver completed the command or will complete it at
+@@ -577,7 +576,7 @@ struct blk_mq_ops {
+ * empty the @rqlist completely, then the rest will be queued
+ * individually by the block layer upon return.
+ */
+- void (*queue_rqs)(struct request **rqlist);
++ void (*queue_rqs)(struct rq_list *rqlist);
+
+ /**
+ * @get_budget: Reserve budget before queue request, once .queue_rq is
+@@ -910,7 +909,7 @@ static inline bool blk_mq_add_to_batch(struct request *req,
+ else if (iob->complete != complete)
+ return false;
+ iob->need_ts |= blk_mq_need_time_stamp(req);
+- rq_list_add(&iob->req_list, req);
++ rq_list_add_head(&iob->req_list, req);
+ return true;
+ }
+
+@@ -947,6 +946,8 @@ void blk_mq_unfreeze_queue_non_owner(struct request_queue *q);
+ void blk_freeze_queue_start_non_owner(struct request_queue *q);
+
+ void blk_mq_map_queues(struct blk_mq_queue_map *qmap);
++void blk_mq_map_hw_queues(struct blk_mq_queue_map *qmap,
++ struct device *dev, unsigned int offset);
+ void blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set, int nr_hw_queues);
+
+ void blk_mq_quiesce_queue_nowait(struct request_queue *q);
+diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
+index 8f37c5dd52b215..b94dc4b796f5a1 100644
+--- a/include/linux/blkdev.h
++++ b/include/linux/blkdev.h
+@@ -995,6 +995,11 @@ extern void blk_put_queue(struct request_queue *);
+
+ void blk_mark_disk_dead(struct gendisk *disk);
+
++struct rq_list {
++ struct request *head;
++ struct request *tail;
++};
++
+ #ifdef CONFIG_BLOCK
+ /*
+ * blk_plug permits building a queue of related requests by holding the I/O
+@@ -1008,10 +1013,10 @@ void blk_mark_disk_dead(struct gendisk *disk);
+ * blk_flush_plug() is called.
+ */
+ struct blk_plug {
+- struct request *mq_list; /* blk-mq requests */
++ struct rq_list mq_list; /* blk-mq requests */
+
+ /* if ios_left is > 1, we can batch tag/rq allocations */
+- struct request *cached_rq;
++ struct rq_list cached_rqs;
+ u64 cur_ktime;
+ unsigned short nr_ios;
+
+@@ -1660,7 +1665,7 @@ int bdev_thaw(struct block_device *bdev);
+ void bdev_fput(struct file *bdev_file);
+
+ struct io_comp_batch {
+- struct request *req_list;
++ struct rq_list req_list;
+ bool need_ts;
+ void (*complete)(struct io_comp_batch *);
+ };
+diff --git a/include/linux/bpf.h b/include/linux/bpf.h
+index a7af13f550e0d4..1150a595aa54c2 100644
+--- a/include/linux/bpf.h
++++ b/include/linux/bpf.h
+@@ -1499,6 +1499,7 @@ struct bpf_prog_aux {
+ bool exception_cb;
+ bool exception_boundary;
+ bool is_extended; /* true if extended by freplace program */
++ bool changes_pkt_data;
+ u64 prog_array_member_cnt; /* counts how many times as member of prog_array */
+ struct mutex ext_mutex; /* mutex for is_extended and prog_array_member_cnt */
+ struct bpf_arena *arena;
+diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
+index 4513372c5bc8e0..50eeb5b86ed70b 100644
+--- a/include/linux/bpf_verifier.h
++++ b/include/linux/bpf_verifier.h
+@@ -668,6 +668,7 @@ struct bpf_subprog_info {
+ bool args_cached: 1;
+ /* true if bpf_fastcall stack region is used by functions that can't be inlined */
+ bool keep_fastcall_stack: 1;
++ bool changes_pkt_data: 1;
+
+ u8 arg_cnt;
+ struct bpf_subprog_arg_info args[MAX_BPF_FUNC_REG_ARGS];
+diff --git a/include/linux/device/bus.h b/include/linux/device/bus.h
+index cdc4757217f9bb..b18658bce2c381 100644
+--- a/include/linux/device/bus.h
++++ b/include/linux/device/bus.h
+@@ -48,6 +48,7 @@ struct fwnode_handle;
+ * will never get called until they do.
+ * @remove: Called when a device removed from this bus.
+ * @shutdown: Called at shut-down time to quiesce the device.
++ * @irq_get_affinity: Get IRQ affinity mask for the device on this bus.
+ *
+ * @online: Called to put the device back online (after offlining it).
+ * @offline: Called to put the device offline for hot-removal. May fail.
+@@ -87,6 +88,8 @@ struct bus_type {
+ void (*sync_state)(struct device *dev);
+ void (*remove)(struct device *dev);
+ void (*shutdown)(struct device *dev);
++ const struct cpumask *(*irq_get_affinity)(struct device *dev,
++ unsigned int irq_vec);
+
+ int (*online)(struct device *dev);
+ int (*offline)(struct device *dev);
+diff --git a/include/linux/nfs.h b/include/linux/nfs.h
+index 9ad727ddfedb34..0906a0b40c6aa5 100644
+--- a/include/linux/nfs.h
++++ b/include/linux/nfs.h
+@@ -55,7 +55,6 @@ enum nfs3_stable_how {
+ NFS_INVALID_STABLE_HOW = -1
+ };
+
+-#ifdef CONFIG_CRC32
+ /**
+ * nfs_fhandle_hash - calculate the crc32 hash for the filehandle
+ * @fh - pointer to filehandle
+@@ -67,10 +66,4 @@ static inline u32 nfs_fhandle_hash(const struct nfs_fh *fh)
+ {
+ return ~crc32_le(0xFFFFFFFF, &fh->data[0], fh->size);
+ }
+-#else /* CONFIG_CRC32 */
+-static inline u32 nfs_fhandle_hash(const struct nfs_fh *fh)
+-{
+- return 0;
+-}
+-#endif /* CONFIG_CRC32 */
+ #endif /* _LINUX_NFS_H */
+diff --git a/io_uring/rw.c b/io_uring/rw.c
+index 6abc495602a4e9..a1ed64760eba2d 100644
+--- a/io_uring/rw.c
++++ b/io_uring/rw.c
+@@ -1190,12 +1190,12 @@ int io_do_iopoll(struct io_ring_ctx *ctx, bool force_nonspin)
+ poll_flags |= BLK_POLL_ONESHOT;
+
+ /* iopoll may have completed current req */
+- if (!rq_list_empty(iob.req_list) ||
++ if (!rq_list_empty(&iob.req_list) ||
+ READ_ONCE(req->iopoll_completed))
+ break;
+ }
+
+- if (!rq_list_empty(iob.req_list))
++ if (!rq_list_empty(&iob.req_list))
+ iob.complete(&iob);
+ else if (!pos)
+ return 0;
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 9000806ee3bae8..d2ef289993f20d 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2528,16 +2528,36 @@ static int cmp_subprogs(const void *a, const void *b)
+ ((struct bpf_subprog_info *)b)->start;
+ }
+
++/* Find subprogram that contains instruction at 'off' */
++static struct bpf_subprog_info *find_containing_subprog(struct bpf_verifier_env *env, int off)
++{
++ struct bpf_subprog_info *vals = env->subprog_info;
++ int l, r, m;
++
++ if (off >= env->prog->len || off < 0 || env->subprog_cnt == 0)
++ return NULL;
++
++ l = 0;
++ r = env->subprog_cnt - 1;
++ while (l < r) {
++ m = l + (r - l + 1) / 2;
++ if (vals[m].start <= off)
++ l = m;
++ else
++ r = m - 1;
++ }
++ return &vals[l];
++}
++
++/* Find subprogram that starts exactly at 'off' */
+ static int find_subprog(struct bpf_verifier_env *env, int off)
+ {
+ struct bpf_subprog_info *p;
+
+- p = bsearch(&off, env->subprog_info, env->subprog_cnt,
+- sizeof(env->subprog_info[0]), cmp_subprogs);
+- if (!p)
++ p = find_containing_subprog(env, off);
++ if (!p || p->start != off)
+ return -ENOENT;
+ return p - env->subprog_info;
+-
+ }
+
+ static int add_subprog(struct bpf_verifier_env *env, int off)
+@@ -9811,6 +9831,8 @@ static int check_func_call(struct bpf_verifier_env *env, struct bpf_insn *insn,
+
+ verbose(env, "Func#%d ('%s') is global and assumed valid.\n",
+ subprog, sub_name);
++ if (env->subprog_info[subprog].changes_pkt_data)
++ clear_all_pkt_pointers(env);
+ /* mark global subprog for verifying after main prog */
+ subprog_aux(env, subprog)->called = true;
+ clear_caller_saved_regs(env, caller->regs);
+@@ -16001,6 +16023,29 @@ static int check_return_code(struct bpf_verifier_env *env, int regno, const char
+ return 0;
+ }
+
++static void mark_subprog_changes_pkt_data(struct bpf_verifier_env *env, int off)
++{
++ struct bpf_subprog_info *subprog;
++
++ subprog = find_containing_subprog(env, off);
++ subprog->changes_pkt_data = true;
++}
++
++/* 't' is an index of a call-site.
++ * 'w' is a callee entry point.
++ * Eventually this function would be called when env->cfg.insn_state[w] == EXPLORED.
++ * Rely on DFS traversal order and absence of recursive calls to guarantee that
++ * callee's changes_pkt_data marks would be correct at that moment.
++ */
++static void merge_callee_effects(struct bpf_verifier_env *env, int t, int w)
++{
++ struct bpf_subprog_info *caller, *callee;
++
++ caller = find_containing_subprog(env, t);
++ callee = find_containing_subprog(env, w);
++ caller->changes_pkt_data |= callee->changes_pkt_data;
++}
++
+ /* non-recursive DFS pseudo code
+ * 1 procedure DFS-iterative(G,v):
+ * 2 label v as discovered
+@@ -16134,6 +16179,7 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns,
+ bool visit_callee)
+ {
+ int ret, insn_sz;
++ int w;
+
+ insn_sz = bpf_is_ldimm64(&insns[t]) ? 2 : 1;
+ ret = push_insn(t, t + insn_sz, FALLTHROUGH, env);
+@@ -16145,8 +16191,10 @@ static int visit_func_call_insn(int t, struct bpf_insn *insns,
+ mark_jmp_point(env, t + insn_sz);
+
+ if (visit_callee) {
++ w = t + insns[t].imm + 1;
+ mark_prune_point(env, t);
+- ret = push_insn(t, t + insns[t].imm + 1, BRANCH, env);
++ merge_callee_effects(env, t, w);
++ ret = push_insn(t, w, BRANCH, env);
+ }
+ return ret;
+ }
+@@ -16466,6 +16514,8 @@ static int visit_insn(int t, struct bpf_verifier_env *env)
+ mark_prune_point(env, t);
+ mark_jmp_point(env, t);
+ }
++ if (bpf_helper_call(insn) && bpf_helper_changes_pkt_data(insn->imm))
++ mark_subprog_changes_pkt_data(env, t);
+ if (insn->src_reg == BPF_PSEUDO_KFUNC_CALL) {
+ struct bpf_kfunc_call_arg_meta meta;
+
+@@ -16600,6 +16650,7 @@ static int check_cfg(struct bpf_verifier_env *env)
+ }
+ }
+ ret = 0; /* cfg looks good */
++ env->prog->aux->changes_pkt_data = env->subprog_info[0].changes_pkt_data;
+
+ err_free:
+ kvfree(insn_state);
+@@ -20102,6 +20153,7 @@ static int jit_subprogs(struct bpf_verifier_env *env)
+ func[i]->aux->num_exentries = num_exentries;
+ func[i]->aux->tail_call_reachable = env->subprog_info[i].tail_call_reachable;
+ func[i]->aux->exception_cb = env->subprog_info[i].is_exception_cb;
++ func[i]->aux->changes_pkt_data = env->subprog_info[i].changes_pkt_data;
+ if (!i)
+ func[i]->aux->exception_boundary = env->seen_exception;
+ func[i] = bpf_int_jit_compile(func[i]);
+@@ -21938,6 +21990,7 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
+ }
+ if (tgt_prog) {
+ struct bpf_prog_aux *aux = tgt_prog->aux;
++ bool tgt_changes_pkt_data;
+
+ if (bpf_prog_is_dev_bound(prog->aux) &&
+ !bpf_prog_dev_bound_match(prog, tgt_prog)) {
+@@ -21972,6 +22025,14 @@ int bpf_check_attach_target(struct bpf_verifier_log *log,
+ "Extension programs should be JITed\n");
+ return -EINVAL;
+ }
++ tgt_changes_pkt_data = aux->func
++ ? aux->func[subprog]->aux->changes_pkt_data
++ : aux->changes_pkt_data;
++ if (prog->aux->changes_pkt_data && !tgt_changes_pkt_data) {
++ bpf_log(log,
++ "Extension program changes packet data, while original does not\n");
++ return -EINVAL;
++ }
+ }
+ if (!tgt_prog->jited) {
+ bpf_log(log, "Can attach to only JITed progs\n");
+@@ -22437,10 +22498,6 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
+ if (ret < 0)
+ goto skip_full_check;
+
+- ret = check_attach_btf_id(env);
+- if (ret)
+- goto skip_full_check;
+-
+ ret = resolve_pseudo_ldimm64(env);
+ if (ret < 0)
+ goto skip_full_check;
+@@ -22455,6 +22512,10 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3
+ if (ret < 0)
+ goto skip_full_check;
+
++ ret = check_attach_btf_id(env);
++ if (ret)
++ goto skip_full_check;
++
+ ret = mark_fastcall_patterns(env);
+ if (ret < 0)
+ goto skip_full_check;
+diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
+index e51d5ce730be15..e5ced97d9681c1 100644
+--- a/kernel/sched/cpufreq_schedutil.c
++++ b/kernel/sched/cpufreq_schedutil.c
+@@ -81,9 +81,20 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
+ if (!cpufreq_this_cpu_can_update(sg_policy->policy))
+ return false;
+
+- if (unlikely(sg_policy->limits_changed)) {
+- sg_policy->limits_changed = false;
+- sg_policy->need_freq_update = cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS);
++ if (unlikely(READ_ONCE(sg_policy->limits_changed))) {
++ WRITE_ONCE(sg_policy->limits_changed, false);
++ sg_policy->need_freq_update = true;
++
++ /*
++ * The above limits_changed update must occur before the reads
++ * of policy limits in cpufreq_driver_resolve_freq() or a policy
++ * limits update might be missed, so use a memory barrier to
++ * ensure it.
++ *
++ * This pairs with the write memory barrier in sugov_limits().
++ */
++ smp_mb();
++
+ return true;
+ }
+
+@@ -95,10 +106,22 @@ static bool sugov_should_update_freq(struct sugov_policy *sg_policy, u64 time)
+ static bool sugov_update_next_freq(struct sugov_policy *sg_policy, u64 time,
+ unsigned int next_freq)
+ {
+- if (sg_policy->need_freq_update)
++ if (sg_policy->need_freq_update) {
+ sg_policy->need_freq_update = false;
+- else if (sg_policy->next_freq == next_freq)
++ /*
++ * The policy limits have changed, but if the return value of
++ * cpufreq_driver_resolve_freq() after applying the new limits
++ * is still equal to the previously selected frequency, the
++ * driver callback need not be invoked unless the driver
++ * specifically wants that to happen on every update of the
++ * policy limits.
++ */
++ if (sg_policy->next_freq == next_freq &&
++ !cpufreq_driver_test_flags(CPUFREQ_NEED_UPDATE_LIMITS))
++ return false;
++ } else if (sg_policy->next_freq == next_freq) {
+ return false;
++ }
+
+ sg_policy->next_freq = next_freq;
+ sg_policy->last_freq_update_time = time;
+@@ -365,7 +388,7 @@ static inline bool sugov_hold_freq(struct sugov_cpu *sg_cpu) { return false; }
+ static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
+ {
+ if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
+- sg_cpu->sg_policy->limits_changed = true;
++ WRITE_ONCE(sg_cpu->sg_policy->limits_changed, true);
+ }
+
+ static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
+@@ -888,7 +911,16 @@ static void sugov_limits(struct cpufreq_policy *policy)
+ mutex_unlock(&sg_policy->work_lock);
+ }
+
+- sg_policy->limits_changed = true;
++ /*
++ * The limits_changed update below must take place before the updates
++ * of policy limits in cpufreq_set_policy() or a policy limits update
++ * might be missed, so use a memory barrier to ensure it.
++ *
++ * This pairs with the memory barrier in sugov_should_update_freq().
++ */
++ smp_wmb();
++
++ WRITE_ONCE(sg_policy->limits_changed, true);
+ }
+
+ struct cpufreq_governor schedutil_gov = {
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index 90b59c627bb8e7..e67d67f7b90650 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -5944,9 +5944,10 @@ int register_ftrace_direct(struct ftrace_ops *ops, unsigned long addr)
+
+ /* Make a copy hash to place the new and the old entries in */
+ size = hash->count + direct_functions->count;
+- if (size > 32)
+- size = 32;
+- new_hash = alloc_ftrace_hash(fls(size));
++ size = fls(size);
++ if (size > FTRACE_HASH_MAX_BITS)
++ size = FTRACE_HASH_MAX_BITS;
++ new_hash = alloc_ftrace_hash(size);
+ if (!new_hash)
+ goto out_unlock;
+
+diff --git a/kernel/trace/trace_events_filter.c b/kernel/trace/trace_events_filter.c
+index 0c611b281a5b5f..f50c2ad43f3d82 100644
+--- a/kernel/trace/trace_events_filter.c
++++ b/kernel/trace/trace_events_filter.c
+@@ -808,7 +808,7 @@ static __always_inline char *test_string(char *str)
+ kstr = ubuf->buffer;
+
+ /* For safety, do not trust the string pointer */
+- if (!strncpy_from_kernel_nofault(kstr, str, USTRING_BUF_SIZE))
++ if (strncpy_from_kernel_nofault(kstr, str, USTRING_BUF_SIZE) < 0)
+ return NULL;
+ return kstr;
+ }
+@@ -827,7 +827,7 @@ static __always_inline char *test_ustring(char *str)
+
+ /* user space address? */
+ ustr = (char __user *)str;
+- if (!strncpy_from_user_nofault(kstr, ustr, USTRING_BUF_SIZE))
++ if (strncpy_from_user_nofault(kstr, ustr, USTRING_BUF_SIZE) < 0)
+ return NULL;
+
+ return kstr;
+diff --git a/lib/string.c b/lib/string.c
+index 76327b51e36f25..e657809fa71892 100644
+--- a/lib/string.c
++++ b/lib/string.c
+@@ -113,6 +113,7 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
+ if (count == 0 || WARN_ON_ONCE(count > INT_MAX))
+ return -E2BIG;
+
++#ifndef CONFIG_DCACHE_WORD_ACCESS
+ #ifdef CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS
+ /*
+ * If src is unaligned, don't cross a page boundary,
+@@ -127,12 +128,14 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
+ /* If src or dest is unaligned, don't do word-at-a-time. */
+ if (((long) dest | (long) src) & (sizeof(long) - 1))
+ max = 0;
++#endif
+ #endif
+
+ /*
+- * read_word_at_a_time() below may read uninitialized bytes after the
+- * trailing zero and use them in comparisons. Disable this optimization
+- * under KMSAN to prevent false positive reports.
++ * load_unaligned_zeropad() or read_word_at_a_time() below may read
++ * uninitialized bytes after the trailing zero and use them in
++ * comparisons. Disable this optimization under KMSAN to prevent
++ * false positive reports.
+ */
+ if (IS_ENABLED(CONFIG_KMSAN))
+ max = 0;
+@@ -140,7 +143,11 @@ ssize_t sized_strscpy(char *dest, const char *src, size_t count)
+ while (max >= sizeof(unsigned long)) {
+ unsigned long c, data;
+
++#ifdef CONFIG_DCACHE_WORD_ACCESS
++ c = load_unaligned_zeropad(src+res);
++#else
+ c = read_word_at_a_time(src+res);
++#endif
+ if (has_zero(c, &data, &constants)) {
+ data = prep_zero_mask(c, data, &constants);
+ data = create_zero_mask(data);
+diff --git a/mm/compaction.c b/mm/compaction.c
+index 77dbb9022b47f0..eb5474dea04d9d 100644
+--- a/mm/compaction.c
++++ b/mm/compaction.c
+@@ -980,13 +980,13 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
+ }
+
+ if (PageHuge(page)) {
++ const unsigned int order = compound_order(page);
+ /*
+ * skip hugetlbfs if we are not compacting for pages
+ * bigger than its order. THPs and other compound pages
+ * are handled below.
+ */
+ if (!cc->alloc_contig) {
+- const unsigned int order = compound_order(page);
+
+ if (order <= MAX_PAGE_ORDER) {
+ low_pfn += (1UL << order) - 1;
+@@ -1010,8 +1010,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
+ /* Do not report -EBUSY down the chain */
+ if (ret == -EBUSY)
+ ret = 0;
+- low_pfn += compound_nr(page) - 1;
+- nr_scanned += compound_nr(page) - 1;
++ low_pfn += (1UL << order) - 1;
++ nr_scanned += (1UL << order) - 1;
+ goto isolate_fail;
+ }
+
+diff --git a/mm/filemap.c b/mm/filemap.c
+index 3c37ad6c598bbb..fa18e71f9c8895 100644
+--- a/mm/filemap.c
++++ b/mm/filemap.c
+@@ -2222,6 +2222,7 @@ unsigned filemap_get_folios_contig(struct address_space *mapping,
+ *start = folio->index + nr;
+ goto out;
+ }
++ xas_advance(&xas, folio_next_index(folio) - 1);
+ continue;
+ put_folio:
+ folio_put(folio);
+diff --git a/mm/gup.c b/mm/gup.c
+index d27e7c9e2596ce..90866b827b60f4 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -2213,8 +2213,8 @@ size_t fault_in_safe_writeable(const char __user *uaddr, size_t size)
+ } while (start != end);
+ mmap_read_unlock(mm);
+
+- if (size > (unsigned long)uaddr - start)
+- return size - ((unsigned long)uaddr - start);
++ if (size > start - (unsigned long)uaddr)
++ return size - (start - (unsigned long)uaddr);
+ return 0;
+ }
+ EXPORT_SYMBOL(fault_in_safe_writeable);
+diff --git a/mm/memory.c b/mm/memory.c
+index 99dceaf6a10579..b6daa0e673a549 100644
+--- a/mm/memory.c
++++ b/mm/memory.c
+@@ -2811,11 +2811,11 @@ static int apply_to_pte_range(struct mm_struct *mm, pmd_t *pmd,
+ if (fn) {
+ do {
+ if (create || !pte_none(ptep_get(pte))) {
+- err = fn(pte++, addr, data);
++ err = fn(pte, addr, data);
+ if (err)
+ break;
+ }
+- } while (addr += PAGE_SIZE, addr != end);
++ } while (pte++, addr += PAGE_SIZE, addr != end);
+ }
+ *mask |= PGTBL_PTE_MODIFIED;
+
+diff --git a/mm/slub.c b/mm/slub.c
+index b9447a955f6112..c26d9cd107ccbc 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -1960,6 +1960,11 @@ static inline void handle_failed_objexts_alloc(unsigned long obj_exts,
+ #define OBJCGS_CLEAR_MASK (__GFP_DMA | __GFP_RECLAIMABLE | \
+ __GFP_ACCOUNT | __GFP_NOFAIL)
+
++static inline void init_slab_obj_exts(struct slab *slab)
++{
++ slab->obj_exts = 0;
++}
++
+ int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
+ gfp_t gfp, bool new_slab)
+ {
+@@ -2044,6 +2049,10 @@ static inline bool need_slab_obj_ext(void)
+
+ #else /* CONFIG_SLAB_OBJ_EXT */
+
++static inline void init_slab_obj_exts(struct slab *slab)
++{
++}
++
+ static int alloc_slab_obj_exts(struct slab *slab, struct kmem_cache *s,
+ gfp_t gfp, bool new_slab)
+ {
+@@ -2613,6 +2622,7 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
+ slab->objects = oo_objects(oo);
+ slab->inuse = 0;
+ slab->frozen = 0;
++ init_slab_obj_exts(slab);
+
+ account_slab(slab, oo_order(oo), s, flags);
+
+diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
+index 080a00d916f6b6..748b52ce856755 100644
+--- a/mm/userfaultfd.c
++++ b/mm/userfaultfd.c
+@@ -1873,6 +1873,14 @@ struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
+ unsigned long end)
+ {
+ struct vm_area_struct *ret;
++ bool give_up_on_oom = false;
++
++ /*
++ * If we are modifying only and not splitting, just give up on the merge
++ * if OOM prevents us from merging successfully.
++ */
++ if (start == vma->vm_start && end == vma->vm_end)
++ give_up_on_oom = true;
+
+ /* Reset ptes for the whole vma range if wr-protected */
+ if (userfaultfd_wp(vma))
+@@ -1880,7 +1888,7 @@ struct vm_area_struct *userfaultfd_clear_vma(struct vma_iterator *vmi,
+
+ ret = vma_modify_flags_uffd(vmi, prev, vma, start, end,
+ vma->vm_flags & ~__VM_UFFD_FLAGS,
+- NULL_VM_UFFD_CTX);
++ NULL_VM_UFFD_CTX, give_up_on_oom);
+
+ /*
+ * In the vma_merge() successful mprotect-like case 8:
+@@ -1931,7 +1939,8 @@ int userfaultfd_register_range(struct userfaultfd_ctx *ctx,
+ new_flags = (vma->vm_flags & ~__VM_UFFD_FLAGS) | vm_flags;
+ vma = vma_modify_flags_uffd(&vmi, prev, vma, start, vma_end,
+ new_flags,
+- (struct vm_userfaultfd_ctx){ctx});
++ (struct vm_userfaultfd_ctx){ctx},
++ /* give_up_on_oom = */false);
+ if (IS_ERR(vma))
+ return PTR_ERR(vma);
+
+diff --git a/mm/vma.c b/mm/vma.c
+index c9ddc06b672a52..9b4517944901dd 100644
+--- a/mm/vma.c
++++ b/mm/vma.c
+@@ -846,7 +846,13 @@ static struct vm_area_struct *vma_merge_existing_range(struct vma_merge_struct *
+ if (anon_dup)
+ unlink_anon_vmas(anon_dup);
+
+- vmg->state = VMA_MERGE_ERROR_NOMEM;
++ /*
++ * We've cleaned up any cloned anon_vma's, no VMAs have been
++ * modified, no harm no foul if the user requests that we not
++ * report this and just give up, leaving the VMAs unmerged.
++ */
++ if (!vmg->give_up_on_oom)
++ vmg->state = VMA_MERGE_ERROR_NOMEM;
+ return NULL;
+ }
+
+@@ -859,7 +865,15 @@ static struct vm_area_struct *vma_merge_existing_range(struct vma_merge_struct *
+ abort:
+ vma_iter_set(vmg->vmi, start);
+ vma_iter_load(vmg->vmi);
+- vmg->state = VMA_MERGE_ERROR_NOMEM;
++
++ /*
++ * This means we have failed to clone anon_vma's correctly, but no
++ * actual changes to VMAs have occurred, so no harm no foul - if the
++ * user doesn't want this reported and instead just wants to give up on
++ * the merge, allow it.
++ */
++ if (!vmg->give_up_on_oom)
++ vmg->state = VMA_MERGE_ERROR_NOMEM;
+ return NULL;
+ }
+
+@@ -1033,9 +1047,15 @@ int vma_expand(struct vma_merge_struct *vmg)
+ return 0;
+
+ nomem:
+- vmg->state = VMA_MERGE_ERROR_NOMEM;
+ if (anon_dup)
+ unlink_anon_vmas(anon_dup);
++ /*
++ * If the user requests that we just give up on OOM, we are safe to do so
++ * here, as commit merge provides this contract to us. Nothing has been
++ * changed - no harm no foul, just don't report it.
++ */
++ if (!vmg->give_up_on_oom)
++ vmg->state = VMA_MERGE_ERROR_NOMEM;
+ return -ENOMEM;
+ }
+
+@@ -1428,6 +1448,13 @@ static struct vm_area_struct *vma_modify(struct vma_merge_struct *vmg)
+ if (vmg_nomem(vmg))
+ return ERR_PTR(-ENOMEM);
+
++ /*
++ * Split can fail for reasons other than OOM, so if the user requests
++ * this it's probably a mistake.
++ */
++ VM_WARN_ON(vmg->give_up_on_oom &&
++ (vma->vm_start != start || vma->vm_end != end));
++
+ /* Split any preceding portion of the VMA. */
+ if (vma->vm_start < start) {
+ int err = split_vma(vmg->vmi, vma, start, 1);
+@@ -1496,12 +1523,15 @@ struct vm_area_struct
+ struct vm_area_struct *vma,
+ unsigned long start, unsigned long end,
+ unsigned long new_flags,
+- struct vm_userfaultfd_ctx new_ctx)
++ struct vm_userfaultfd_ctx new_ctx,
++ bool give_up_on_oom)
+ {
+ VMG_VMA_STATE(vmg, vmi, prev, vma, start, end);
+
+ vmg.flags = new_flags;
+ vmg.uffd_ctx = new_ctx;
++ if (give_up_on_oom)
++ vmg.give_up_on_oom = true;
+
+ return vma_modify(&vmg);
+ }
+diff --git a/mm/vma.h b/mm/vma.h
+index d58068c0ff2eaa..729fe3741e897b 100644
+--- a/mm/vma.h
++++ b/mm/vma.h
+@@ -87,6 +87,12 @@ struct vma_merge_struct {
+ struct anon_vma_name *anon_name;
+ enum vma_merge_flags merge_flags;
+ enum vma_merge_state state;
++
++ /*
++ * If a merge is possible, but an OOM error occurs, give up and don't
++ * execute the merge, returning NULL.
++ */
++ bool give_up_on_oom :1;
+ };
+
+ static inline bool vmg_nomem(struct vma_merge_struct *vmg)
+@@ -303,7 +309,8 @@ struct vm_area_struct
+ struct vm_area_struct *vma,
+ unsigned long start, unsigned long end,
+ unsigned long new_flags,
+- struct vm_userfaultfd_ctx new_ctx);
++ struct vm_userfaultfd_ctx new_ctx,
++ bool give_up_on_oom);
+
+ struct vm_area_struct *vma_merge_new_range(struct vma_merge_struct *vmg);
+
+diff --git a/net/bluetooth/hci_event.c b/net/bluetooth/hci_event.c
+index d64117be62cc44..96ad1b75d1c4d4 100644
+--- a/net/bluetooth/hci_event.c
++++ b/net/bluetooth/hci_event.c
+@@ -6150,11 +6150,12 @@ static void process_adv_report(struct hci_dev *hdev, u8 type, bdaddr_t *bdaddr,
+ * event or send an immediate device found event if the data
+ * should not be stored for later.
+ */
+- if (!ext_adv && !has_pending_adv_report(hdev)) {
++ if (!has_pending_adv_report(hdev)) {
+ /* If the report will trigger a SCAN_REQ store it for
+ * later merging.
+ */
+- if (type == LE_ADV_IND || type == LE_ADV_SCAN_IND) {
++ if (!ext_adv && (type == LE_ADV_IND ||
++ type == LE_ADV_SCAN_IND)) {
+ store_pending_adv_report(hdev, bdaddr, bdaddr_type,
+ rssi, flags, data, len);
+ return;
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index c27ea70f71e1e1..a55388fbf07c84 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -3956,7 +3956,8 @@ static void l2cap_connect(struct l2cap_conn *conn, struct l2cap_cmd_hdr *cmd,
+
+ /* Check if the ACL is secure enough (if not SDP) */
+ if (psm != cpu_to_le16(L2CAP_PSM_SDP) &&
+- !hci_conn_check_link_mode(conn->hcon)) {
++ (!hci_conn_check_link_mode(conn->hcon) ||
++ !l2cap_check_enc_key_size(conn->hcon))) {
+ conn->disc_reason = HCI_ERROR_AUTH_FAILURE;
+ result = L2CAP_CR_SEC_BLOCK;
+ goto response;
+@@ -7503,8 +7504,24 @@ void l2cap_recv_acldata(struct hci_conn *hcon, struct sk_buff *skb, u16 flags)
+ if (skb->len > len) {
+ BT_ERR("Frame is too long (len %u, expected len %d)",
+ skb->len, len);
++ /* PTS test cases L2CAP/COS/CED/BI-14-C and BI-15-C
++ * (Multiple Signaling Command in one PDU, Data
++ * Truncated, BR/EDR) send a C-frame to the IUT with
++ * PDU Length set to 8 and Channel ID set to the
++ * correct signaling channel for the logical link.
++ * The Information payload contains one L2CAP_ECHO_REQ
++ * packet with Data Length set to 0 with 0 octets of
++ * echo data and one invalid command packet due to
++ * data truncated in PDU but present in HCI packet.
++ *
++ * Shorten the socket buffer to the PDU length to
++ * allow valid commands from the PDU to be processed
++ * before marking the socket unreliable.
++ */
++ skb->len = len;
++ l2cap_recv_frame(conn, skb);
+ l2cap_conn_unreliable(conn, ECOMM);
+- goto drop;
++ goto unlock;
+ }
+
+ /* Append fragment into frame (with header) */
+diff --git a/net/bridge/br_vlan.c b/net/bridge/br_vlan.c
+index 89f51ea4cabece..f2efb58d152bc2 100644
+--- a/net/bridge/br_vlan.c
++++ b/net/bridge/br_vlan.c
+@@ -715,8 +715,8 @@ static int br_vlan_add_existing(struct net_bridge *br,
+ u16 flags, bool *changed,
+ struct netlink_ext_ack *extack)
+ {
+- bool would_change = __vlan_flags_would_change(vlan, flags);
+ bool becomes_brentry = false;
++ bool would_change = false;
+ int err;
+
+ if (!br_vlan_is_brentry(vlan)) {
+@@ -725,6 +725,8 @@ static int br_vlan_add_existing(struct net_bridge *br,
+ return -EINVAL;
+
+ becomes_brentry = true;
++ } else {
++ would_change = __vlan_flags_would_change(vlan, flags);
+ }
+
+ /* Master VLANs that aren't brentries weren't notified before,
+diff --git a/net/dsa/dsa.c b/net/dsa/dsa.c
+index 1664547deffd07..ac3a252969cb61 100644
+--- a/net/dsa/dsa.c
++++ b/net/dsa/dsa.c
+@@ -862,6 +862,16 @@ static void dsa_tree_teardown_lags(struct dsa_switch_tree *dst)
+ kfree(dst->lags);
+ }
+
++static void dsa_tree_teardown_routing_table(struct dsa_switch_tree *dst)
++{
++ struct dsa_link *dl, *next;
++
++ list_for_each_entry_safe(dl, next, &dst->rtable, list) {
++ list_del(&dl->list);
++ kfree(dl);
++ }
++}
++
+ static int dsa_tree_setup(struct dsa_switch_tree *dst)
+ {
+ bool complete;
+@@ -879,7 +889,7 @@ static int dsa_tree_setup(struct dsa_switch_tree *dst)
+
+ err = dsa_tree_setup_cpu_ports(dst);
+ if (err)
+- return err;
++ goto teardown_rtable;
+
+ err = dsa_tree_setup_switches(dst);
+ if (err)
+@@ -911,14 +921,14 @@ static int dsa_tree_setup(struct dsa_switch_tree *dst)
+ dsa_tree_teardown_switches(dst);
+ teardown_cpu_ports:
+ dsa_tree_teardown_cpu_ports(dst);
++teardown_rtable:
++ dsa_tree_teardown_routing_table(dst);
+
+ return err;
+ }
+
+ static void dsa_tree_teardown(struct dsa_switch_tree *dst)
+ {
+- struct dsa_link *dl, *next;
+-
+ if (!dst->setup)
+ return;
+
+@@ -932,10 +942,7 @@ static void dsa_tree_teardown(struct dsa_switch_tree *dst)
+
+ dsa_tree_teardown_cpu_ports(dst);
+
+- list_for_each_entry_safe(dl, next, &dst->rtable, list) {
+- list_del(&dl->list);
+- kfree(dl);
+- }
++ dsa_tree_teardown_routing_table(dst);
+
+ pr_info("DSA: tree %d torn down\n", dst->index);
+
+@@ -1478,12 +1485,44 @@ static int dsa_switch_parse(struct dsa_switch *ds, struct dsa_chip_data *cd)
+
+ static void dsa_switch_release_ports(struct dsa_switch *ds)
+ {
++ struct dsa_mac_addr *a, *tmp;
+ struct dsa_port *dp, *next;
++ struct dsa_vlan *v, *n;
+
+ dsa_switch_for_each_port_safe(dp, next, ds) {
+- WARN_ON(!list_empty(&dp->fdbs));
+- WARN_ON(!list_empty(&dp->mdbs));
+- WARN_ON(!list_empty(&dp->vlans));
++ /* These are either entries that upper layers lost track of
++ * (probably due to bugs), or entries installed through interfaces
++ * where one does not necessarily have to remove them, like
++ * ndo_dflt_fdb_add().
++ */
++ list_for_each_entry_safe(a, tmp, &dp->fdbs, list) {
++ dev_info(ds->dev,
++ "Cleaning up unicast address %pM vid %u from port %d\n",
++ a->addr, a->vid, dp->index);
++ list_del(&a->list);
++ kfree(a);
++ }
++
++ list_for_each_entry_safe(a, tmp, &dp->mdbs, list) {
++ dev_info(ds->dev,
++ "Cleaning up multicast address %pM vid %u from port %d\n",
++ a->addr, a->vid, dp->index);
++ list_del(&a->list);
++ kfree(a);
++ }
++
++ /* These are entries that upper layers have lost track of,
++ * probably due to bugs, but also due to dsa_port_do_vlan_del()
++ * having failed and the VLAN entry still lingering on.
++ */
++ list_for_each_entry_safe(v, n, &dp->vlans, list) {
++ dev_info(ds->dev,
++ "Cleaning up vid %u from port %d\n",
++ v->vid, dp->index);
++ list_del(&v->list);
++ kfree(v);
++ }
++
+ list_del(&dp->list);
+ kfree(dp);
+ }
+diff --git a/net/dsa/tag_8021q.c b/net/dsa/tag_8021q.c
+index 3ee53e28ec2e9f..53e03fd8071b4a 100644
+--- a/net/dsa/tag_8021q.c
++++ b/net/dsa/tag_8021q.c
+@@ -197,7 +197,7 @@ static int dsa_port_do_tag_8021q_vlan_del(struct dsa_port *dp, u16 vid)
+
+ err = ds->ops->tag_8021q_vlan_del(ds, port, vid);
+ if (err) {
+- refcount_inc(&v->refcount);
++ refcount_set(&v->refcount, 1);
+ return err;
+ }
+
+diff --git a/net/ethtool/cmis_cdb.c b/net/ethtool/cmis_cdb.c
+index 4d558114795203..8bf99295bfbe96 100644
+--- a/net/ethtool/cmis_cdb.c
++++ b/net/ethtool/cmis_cdb.c
+@@ -346,7 +346,7 @@ ethtool_cmis_module_poll(struct net_device *dev,
+ struct netlink_ext_ack extack = {};
+ int err;
+
+- ethtool_cmis_page_init(&page_data, 0, offset, sizeof(rpl));
++ ethtool_cmis_page_init(&page_data, 0, offset, sizeof(*rpl));
+ page_data.data = (u8 *)rpl;
+
+ err = ops->get_module_eeprom_by_page(dev, &page_data, &extack);
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index bae8ece3e881e0..d9ab070e78e052 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1771,6 +1771,7 @@ static int rt6_insert_exception(struct rt6_info *nrt,
+ if (!err) {
+ spin_lock_bh(&f6i->fib6_table->tb6_lock);
+ fib6_update_sernum(net, f6i);
++ fib6_add_gc_list(f6i);
+ spin_unlock_bh(&f6i->fib6_table->tb6_lock);
+ fib6_force_start_gc(net);
+ }
+diff --git a/net/mac80211/iface.c b/net/mac80211/iface.c
+index dbcd75c5d778e6..7e1e561ef76c1c 100644
+--- a/net/mac80211/iface.c
++++ b/net/mac80211/iface.c
+@@ -667,6 +667,9 @@ static void ieee80211_do_stop(struct ieee80211_sub_if_data *sdata, bool going_do
+ if (sdata->vif.type == NL80211_IFTYPE_AP_VLAN)
+ ieee80211_txq_remove_vlan(local, sdata);
+
++ if (sdata->vif.txq)
++ ieee80211_txq_purge(sdata->local, to_txq_info(sdata->vif.txq));
++
+ sdata->bss = NULL;
+
+ if (local->open_count == 0)
+diff --git a/net/mctp/af_mctp.c b/net/mctp/af_mctp.c
+index f6de136008f6f9..57850d4dac5db9 100644
+--- a/net/mctp/af_mctp.c
++++ b/net/mctp/af_mctp.c
+@@ -630,6 +630,9 @@ static int mctp_sk_hash(struct sock *sk)
+ {
+ struct net *net = sock_net(sk);
+
++ /* Bind lookup runs under RCU; the sock must remain live during that. */
++ sock_set_flag(sk, SOCK_RCU_FREE);
++
+ mutex_lock(&net->mctp.bind_lock);
+ sk_add_node_rcu(sk, &net->mctp.binds);
+ mutex_unlock(&net->mctp.bind_lock);
+diff --git a/net/openvswitch/flow_netlink.c b/net/openvswitch/flow_netlink.c
+index 0df89240b73361..305daf57a4f9dd 100644
+--- a/net/openvswitch/flow_netlink.c
++++ b/net/openvswitch/flow_netlink.c
+@@ -2876,7 +2876,8 @@ static int validate_set(const struct nlattr *a,
+ size_t key_len;
+
+ /* There can be only one key in a action */
+- if (nla_total_size(nla_len(ovs_key)) != nla_len(a))
++ if (!nla_ok(ovs_key, nla_len(a)) ||
++ nla_total_size(nla_len(ovs_key)) != nla_len(a))
+ return -EINVAL;
+
+ key_len = nla_len(ovs_key);
+diff --git a/net/smc/af_smc.c b/net/smc/af_smc.c
+index ebc41a7b13dbec..78b0e6dba0a2b7 100644
+--- a/net/smc/af_smc.c
++++ b/net/smc/af_smc.c
+@@ -362,6 +362,9 @@ static void smc_destruct(struct sock *sk)
+ return;
+ }
+
++static struct lock_class_key smc_key;
++static struct lock_class_key smc_slock_key;
++
+ void smc_sk_init(struct net *net, struct sock *sk, int protocol)
+ {
+ struct smc_sock *smc = smc_sk(sk);
+@@ -375,6 +378,8 @@ void smc_sk_init(struct net *net, struct sock *sk, int protocol)
+ INIT_WORK(&smc->connect_work, smc_connect_work);
+ INIT_DELAYED_WORK(&smc->conn.tx_work, smc_tx_work);
+ INIT_LIST_HEAD(&smc->accept_q);
++ sock_lock_init_class_and_name(sk, "slock-AF_SMC", &smc_slock_key,
++ "sk_lock-AF_SMC", &smc_key);
+ spin_lock_init(&smc->accept_q_lock);
+ spin_lock_init(&smc->conn.send_lock);
+ sk->sk_prot->hash(sk);
+diff --git a/scripts/Makefile.compiler b/scripts/Makefile.compiler
+index e0842496d26ed7..c6cd729b65cbfb 100644
+--- a/scripts/Makefile.compiler
++++ b/scripts/Makefile.compiler
+@@ -75,8 +75,8 @@ ld-option = $(call try-run, $(LD) $(KBUILD_LDFLAGS) $(1) -v,$(1),$(2),$(3))
+ # Usage: MY_RUSTFLAGS += $(call __rustc-option,$(RUSTC),$(MY_RUSTFLAGS),-Cinstrument-coverage,-Zinstrument-coverage)
+ # TODO: remove RUSTC_BOOTSTRAP=1 when we raise the minimum GNU Make version to 4.4
+ __rustc-option = $(call try-run,\
+- echo '#![allow(missing_docs)]#![feature(no_core)]#![no_core]' | RUSTC_BOOTSTRAP=1\
+- $(1) --sysroot=/dev/null $(filter-out --sysroot=/dev/null,$(2)) $(3)\
++ echo '$(pound)![allow(missing_docs)]$(pound)![feature(no_core)]$(pound)![no_core]' | RUSTC_BOOTSTRAP=1\
++ $(1) --sysroot=/dev/null $(filter-out --sysroot=/dev/null --target=%,$(2)) $(3)\
+ --crate-type=rlib --out-dir=$(TMPOUT) --emit=obj=- - >/dev/null,$(3),$(4))
+
+ # rustc-option
+diff --git a/scripts/generate_rust_analyzer.py b/scripts/generate_rust_analyzer.py
+index d1f5adbf33f91c..690f9830f06482 100755
+--- a/scripts/generate_rust_analyzer.py
++++ b/scripts/generate_rust_analyzer.py
+@@ -90,6 +90,12 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+ ["core", "compiler_builtins"],
+ )
+
++ append_crate(
++ "ffi",
++ srctree / "rust" / "ffi.rs",
++ ["core", "compiler_builtins"],
++ )
++
+ def append_crate_with_generated(
+ display_name,
+ deps,
+@@ -109,9 +115,9 @@ def generate_crates(srctree, objtree, sysroot_src, external_src, cfgs):
+ "exclude_dirs": [],
+ }
+
+- append_crate_with_generated("bindings", ["core"])
+- append_crate_with_generated("uapi", ["core"])
+- append_crate_with_generated("kernel", ["core", "macros", "build_error", "bindings", "uapi"])
++ append_crate_with_generated("bindings", ["core", "ffi"])
++ append_crate_with_generated("uapi", ["core", "ffi"])
++ append_crate_with_generated("kernel", ["core", "macros", "build_error", "ffi", "bindings", "uapi"])
+
+ def is_root_crate(build_file, target):
+ try:
+diff --git a/sound/pci/hda/Kconfig b/sound/pci/hda/Kconfig
+index dbf933c18a8219..fd9391e61b3d98 100644
+--- a/sound/pci/hda/Kconfig
++++ b/sound/pci/hda/Kconfig
+@@ -96,9 +96,7 @@ config SND_HDA_CIRRUS_SCODEC
+
+ config SND_HDA_CIRRUS_SCODEC_KUNIT_TEST
+ tristate "KUnit test for Cirrus side-codec library" if !KUNIT_ALL_TESTS
+- select SND_HDA_CIRRUS_SCODEC
+- select GPIOLIB
+- depends on KUNIT
++ depends on SND_HDA_CIRRUS_SCODEC && GPIOLIB && KUNIT
+ default KUNIT_ALL_TESTS
+ help
+ This builds KUnit tests for the cirrus side-codec library.
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 0bf833c9602155..4171aa22747c33 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -6603,6 +6603,16 @@ static void alc285_fixup_speaker2_to_dac1(struct hda_codec *codec,
+ }
+ }
+
++/* disable DAC3 (0x06) selection on NID 0x15 - share Speaker/Bass Speaker DAC 0x03 */
++static void alc294_fixup_bass_speaker_15(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++ static const hda_nid_t conn[] = { 0x02, 0x03 };
++ snd_hda_override_conn_list(codec, 0x15, ARRAY_SIZE(conn), conn);
++ }
++}
++
+ /* Hook to update amp GPIO4 for automute */
+ static void alc280_hp_gpio4_automute_hook(struct hda_codec *codec,
+ struct hda_jack_callback *jack)
+@@ -7587,6 +7597,16 @@ static void alc287_fixup_lenovo_thinkpad_with_alc1318(struct hda_codec *codec,
+ spec->gen.pcm_playback_hook = alc287_alc1318_playback_pcm_hook;
+ }
+
++/*
++ * Clear COEF 0x0d (PCBEEP passthrough) bit 0x40 where BIOS sets it wrongly
++ * at PM resume
++ */
++static void alc283_fixup_dell_hp_resume(struct hda_codec *codec,
++ const struct hda_fixup *fix, int action)
++{
++ if (action == HDA_FIXUP_ACT_INIT)
++ alc_write_coef_idx(codec, 0xd, 0x2800);
++}
+
+ enum {
+ ALC269_FIXUP_GPIO2,
+@@ -7888,6 +7908,9 @@ enum {
+ ALC245_FIXUP_CLEVO_NOISY_MIC,
+ ALC269_FIXUP_VAIO_VJFH52_MIC_NO_PRESENCE,
+ ALC233_FIXUP_MEDION_MTL_SPK,
++ ALC294_FIXUP_BASS_SPEAKER_15,
++ ALC283_FIXUP_DELL_HP_RESUME,
++ ALC294_FIXUP_ASUS_CS35L41_SPI_2,
+ };
+
+ /* A special fixup for Lenovo C940 and Yoga Duet 7;
+@@ -10222,6 +10245,20 @@ static const struct hda_fixup alc269_fixups[] = {
+ { }
+ },
+ },
++ [ALC294_FIXUP_BASS_SPEAKER_15] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc294_fixup_bass_speaker_15,
++ },
++ [ALC283_FIXUP_DELL_HP_RESUME] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = alc283_fixup_dell_hp_resume,
++ },
++ [ALC294_FIXUP_ASUS_CS35L41_SPI_2] = {
++ .type = HDA_FIXUP_FUNC,
++ .v.func = cs35l41_fixup_spi_two,
++ .chained = true,
++ .chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC,
++ },
+ };
+
+ static const struct hda_quirk alc269_fixup_tbl[] = {
+@@ -10282,6 +10319,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1028, 0x05f4, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x05f5, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1028, 0x05f6, "Dell", ALC269_FIXUP_DELL1_MIC_NO_PRESENCE),
++ SND_PCI_QUIRK(0x1028, 0x0604, "Dell Venue 11 Pro 7130", ALC283_FIXUP_DELL_HP_RESUME),
+ SND_PCI_QUIRK(0x1028, 0x0615, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK),
+ SND_PCI_QUIRK(0x1028, 0x0616, "Dell Vostro 5470", ALC290_FIXUP_SUBWOOFER_HSJACK),
+ SND_PCI_QUIRK(0x1028, 0x062c, "Dell Latitude E5550", ALC292_FIXUP_DELL_E7X),
+@@ -10684,7 +10722,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x12a0, "ASUS X441UV", ALC233_FIXUP_EAPD_COEF_AND_MIC_NO_PRESENCE),
+ SND_PCI_QUIRK(0x1043, 0x12a3, "Asus N7691ZM", ALC269_FIXUP_ASUS_N7601ZM),
+ SND_PCI_QUIRK(0x1043, 0x12af, "ASUS UX582ZS", ALC245_FIXUP_CS35L41_SPI_2),
+- SND_PCI_QUIRK(0x1043, 0x12b4, "ASUS B3405CCA / P3405CCA", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x12b4, "ASUS B3405CCA / P3405CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x12e0, "ASUS X541SA", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x12f0, "ASUS X541UV", ALC256_FIXUP_ASUS_MIC),
+ SND_PCI_QUIRK(0x1043, 0x1313, "Asus K42JZ", ALC269VB_FIXUP_ASUS_MIC_NO_PRESENCE),
+@@ -10750,6 +10788,7 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1d42, "ASUS Zephyrus G14 2022", ALC289_FIXUP_ASUS_GA401),
+ SND_PCI_QUIRK(0x1043, 0x1d4e, "ASUS TM420", ALC256_FIXUP_ASUS_HPE),
+ SND_PCI_QUIRK(0x1043, 0x1da2, "ASUS UP6502ZA/ZD", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x1df3, "ASUS UM5606WA", ALC294_FIXUP_BASS_SPEAKER_15),
+ SND_PCI_QUIRK(0x1043, 0x1e02, "ASUS UX3402ZA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x1e11, "ASUS Zephyrus G15", ALC289_FIXUP_ASUS_GA502),
+ SND_PCI_QUIRK(0x1043, 0x1e12, "ASUS UM3402", ALC287_FIXUP_CS35L41_I2C_2),
+@@ -10772,14 +10811,14 @@ static const struct hda_quirk alc269_fixup_tbl[] = {
+ SND_PCI_QUIRK(0x1043, 0x1fb3, "ASUS ROG Flow Z13 GZ302EA", ALC287_FIXUP_CS35L41_I2C_2),
+ SND_PCI_QUIRK(0x1043, 0x3011, "ASUS B5605CVA", ALC245_FIXUP_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x3030, "ASUS ZN270IE", ALC256_FIXUP_ASUS_AIO_GPIO2),
+- SND_PCI_QUIRK(0x1043, 0x3061, "ASUS B3405CCA", ALC245_FIXUP_CS35L41_SPI_2),
+- SND_PCI_QUIRK(0x1043, 0x3071, "ASUS B5405CCA", ALC245_FIXUP_CS35L41_SPI_2),
+- SND_PCI_QUIRK(0x1043, 0x30c1, "ASUS B3605CCA / P3605CCA", ALC245_FIXUP_CS35L41_SPI_2),
+- SND_PCI_QUIRK(0x1043, 0x30d1, "ASUS B5405CCA", ALC245_FIXUP_CS35L41_SPI_2),
+- SND_PCI_QUIRK(0x1043, 0x30e1, "ASUS B5605CCA", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x3061, "ASUS B3405CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x3071, "ASUS B5405CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x30c1, "ASUS B3605CCA / P3605CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x30d1, "ASUS B5405CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x30e1, "ASUS B5605CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x31d0, "ASUS Zen AIO 27 Z272SD_A272SD", ALC274_FIXUP_ASUS_ZEN_AIO_27),
+- SND_PCI_QUIRK(0x1043, 0x31e1, "ASUS B5605CCA", ALC245_FIXUP_CS35L41_SPI_2),
+- SND_PCI_QUIRK(0x1043, 0x31f1, "ASUS B3605CCA", ALC245_FIXUP_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x31e1, "ASUS B5605CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
++ SND_PCI_QUIRK(0x1043, 0x31f1, "ASUS B3605CCA", ALC294_FIXUP_ASUS_CS35L41_SPI_2),
+ SND_PCI_QUIRK(0x1043, 0x3a20, "ASUS G614JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a30, "ASUS G814JVR/JIR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+ SND_PCI_QUIRK(0x1043, 0x3a40, "ASUS G814JZR", ALC285_FIXUP_ASUS_SPI_REAR_SPEAKERS),
+diff --git a/sound/soc/codecs/cs42l43-jack.c b/sound/soc/codecs/cs42l43-jack.c
+index d9ab003e166bfa..73d764fc853929 100644
+--- a/sound/soc/codecs/cs42l43-jack.c
++++ b/sound/soc/codecs/cs42l43-jack.c
+@@ -702,6 +702,9 @@ static void cs42l43_clear_jack(struct cs42l43_codec *priv)
+ CS42L43_PGA_WIDESWING_MODE_EN_MASK, 0);
+ regmap_update_bits(cs42l43->regmap, CS42L43_STEREO_MIC_CTRL,
+ CS42L43_JACK_STEREO_CONFIG_MASK, 0);
++ regmap_update_bits(cs42l43->regmap, CS42L43_STEREO_MIC_CLAMP_CTRL,
++ CS42L43_SMIC_HPAMP_CLAMP_DIS_FRC_MASK,
++ CS42L43_SMIC_HPAMP_CLAMP_DIS_FRC_MASK);
+ regmap_update_bits(cs42l43->regmap, CS42L43_HS2,
+ CS42L43_HSDET_MODE_MASK | CS42L43_HSDET_MANUAL_MODE_MASK,
+ 0x2 << CS42L43_HSDET_MODE_SHIFT);
+diff --git a/sound/soc/codecs/lpass-wsa-macro.c b/sound/soc/codecs/lpass-wsa-macro.c
+index c989d82d1d3c17..81bab8299eae4b 100644
+--- a/sound/soc/codecs/lpass-wsa-macro.c
++++ b/sound/soc/codecs/lpass-wsa-macro.c
+@@ -63,6 +63,10 @@
+ #define CDC_WSA_TX_SPKR_PROT_CLK_DISABLE 0
+ #define CDC_WSA_TX_SPKR_PROT_PCM_RATE_MASK GENMASK(3, 0)
+ #define CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K 0
++#define CDC_WSA_TX_SPKR_PROT_PCM_RATE_16K 1
++#define CDC_WSA_TX_SPKR_PROT_PCM_RATE_24K 2
++#define CDC_WSA_TX_SPKR_PROT_PCM_RATE_32K 3
++#define CDC_WSA_TX_SPKR_PROT_PCM_RATE_48K 4
+ #define CDC_WSA_TX0_SPKR_PROT_PATH_CFG0 (0x0248)
+ #define CDC_WSA_TX1_SPKR_PROT_PATH_CTL (0x0264)
+ #define CDC_WSA_TX1_SPKR_PROT_PATH_CFG0 (0x0268)
+@@ -407,6 +411,7 @@ struct wsa_macro {
+ int ear_spkr_gain;
+ int spkr_gain_offset;
+ int spkr_mode;
++ u32 pcm_rate_vi;
+ int is_softclip_on[WSA_MACRO_SOFTCLIP_MAX];
+ int softclip_clk_users[WSA_MACRO_SOFTCLIP_MAX];
+ struct regmap *regmap;
+@@ -1280,6 +1285,7 @@ static int wsa_macro_hw_params(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *dai)
+ {
+ struct snd_soc_component *component = dai->component;
++ struct wsa_macro *wsa = snd_soc_component_get_drvdata(component);
+ int ret;
+
+ switch (substream->stream) {
+@@ -1291,6 +1297,11 @@ static int wsa_macro_hw_params(struct snd_pcm_substream *substream,
+ __func__, params_rate(params));
+ return ret;
+ }
++ break;
++ case SNDRV_PCM_STREAM_CAPTURE:
++ if (dai->id == WSA_MACRO_AIF_VI)
++ wsa->pcm_rate_vi = params_rate(params);
++
+ break;
+ default:
+ break;
+@@ -1448,35 +1459,11 @@ static void wsa_macro_mclk_enable(struct wsa_macro *wsa, bool mclk_enable)
+ }
+ }
+
+-static int wsa_macro_mclk_event(struct snd_soc_dapm_widget *w,
+- struct snd_kcontrol *kcontrol, int event)
++static void wsa_macro_enable_disable_vi_sense(struct snd_soc_component *component, bool enable,
++ u32 tx_reg0, u32 tx_reg1, u32 val)
+ {
+- struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+- struct wsa_macro *wsa = snd_soc_component_get_drvdata(component);
+-
+- wsa_macro_mclk_enable(wsa, event == SND_SOC_DAPM_PRE_PMU);
+- return 0;
+-}
+-
+-static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w,
+- struct snd_kcontrol *kcontrol,
+- int event)
+-{
+- struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
+- struct wsa_macro *wsa = snd_soc_component_get_drvdata(component);
+- u32 tx_reg0, tx_reg1;
+-
+- if (test_bit(WSA_MACRO_TX0, &wsa->active_ch_mask[WSA_MACRO_AIF_VI])) {
+- tx_reg0 = CDC_WSA_TX0_SPKR_PROT_PATH_CTL;
+- tx_reg1 = CDC_WSA_TX1_SPKR_PROT_PATH_CTL;
+- } else if (test_bit(WSA_MACRO_TX1, &wsa->active_ch_mask[WSA_MACRO_AIF_VI])) {
+- tx_reg0 = CDC_WSA_TX2_SPKR_PROT_PATH_CTL;
+- tx_reg1 = CDC_WSA_TX3_SPKR_PROT_PATH_CTL;
+- }
+-
+- switch (event) {
+- case SND_SOC_DAPM_POST_PMU:
+- /* Enable V&I sensing */
++ if (enable) {
++ /* Enable V&I sensing */
+ snd_soc_component_update_bits(component, tx_reg0,
+ CDC_WSA_TX_SPKR_PROT_RESET_MASK,
+ CDC_WSA_TX_SPKR_PROT_RESET);
+@@ -1485,10 +1472,10 @@ static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w,
+ CDC_WSA_TX_SPKR_PROT_RESET);
+ snd_soc_component_update_bits(component, tx_reg0,
+ CDC_WSA_TX_SPKR_PROT_PCM_RATE_MASK,
+- CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K);
++ val);
+ snd_soc_component_update_bits(component, tx_reg1,
+ CDC_WSA_TX_SPKR_PROT_PCM_RATE_MASK,
+- CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K);
++ val);
+ snd_soc_component_update_bits(component, tx_reg0,
+ CDC_WSA_TX_SPKR_PROT_CLK_EN_MASK,
+ CDC_WSA_TX_SPKR_PROT_CLK_ENABLE);
+@@ -1501,9 +1488,7 @@ static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w,
+ snd_soc_component_update_bits(component, tx_reg1,
+ CDC_WSA_TX_SPKR_PROT_RESET_MASK,
+ CDC_WSA_TX_SPKR_PROT_NO_RESET);
+- break;
+- case SND_SOC_DAPM_POST_PMD:
+- /* Disable V&I sensing */
++ } else {
+ snd_soc_component_update_bits(component, tx_reg0,
+ CDC_WSA_TX_SPKR_PROT_RESET_MASK,
+ CDC_WSA_TX_SPKR_PROT_RESET);
+@@ -1516,6 +1501,72 @@ static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w,
+ snd_soc_component_update_bits(component, tx_reg1,
+ CDC_WSA_TX_SPKR_PROT_CLK_EN_MASK,
+ CDC_WSA_TX_SPKR_PROT_CLK_DISABLE);
++ }
++}
++
++static void wsa_macro_enable_disable_vi_feedback(struct snd_soc_component *component,
++ bool enable, u32 rate)
++{
++ struct wsa_macro *wsa = snd_soc_component_get_drvdata(component);
++
++ if (test_bit(WSA_MACRO_TX0, &wsa->active_ch_mask[WSA_MACRO_AIF_VI]))
++ wsa_macro_enable_disable_vi_sense(component, enable,
++ CDC_WSA_TX0_SPKR_PROT_PATH_CTL,
++ CDC_WSA_TX1_SPKR_PROT_PATH_CTL, rate);
++
++ if (test_bit(WSA_MACRO_TX1, &wsa->active_ch_mask[WSA_MACRO_AIF_VI]))
++ wsa_macro_enable_disable_vi_sense(component, enable,
++ CDC_WSA_TX2_SPKR_PROT_PATH_CTL,
++ CDC_WSA_TX3_SPKR_PROT_PATH_CTL, rate);
++}
++
++static int wsa_macro_mclk_event(struct snd_soc_dapm_widget *w,
++ struct snd_kcontrol *kcontrol, int event)
++{
++ struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
++ struct wsa_macro *wsa = snd_soc_component_get_drvdata(component);
++
++ wsa_macro_mclk_enable(wsa, event == SND_SOC_DAPM_PRE_PMU);
++ return 0;
++}
++
++static int wsa_macro_enable_vi_feedback(struct snd_soc_dapm_widget *w,
++ struct snd_kcontrol *kcontrol,
++ int event)
++{
++ struct snd_soc_component *component = snd_soc_dapm_to_component(w->dapm);
++ struct wsa_macro *wsa = snd_soc_component_get_drvdata(component);
++ u32 rate_val;
++
++ switch (wsa->pcm_rate_vi) {
++ case 8000:
++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K;
++ break;
++ case 16000:
++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_16K;
++ break;
++ case 24000:
++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_24K;
++ break;
++ case 32000:
++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_32K;
++ break;
++ case 48000:
++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_48K;
++ break;
++ default:
++ rate_val = CDC_WSA_TX_SPKR_PROT_PCM_RATE_8K;
++ break;
++ }
++
++ switch (event) {
++ case SND_SOC_DAPM_POST_PMU:
++ /* Enable V&I sensing */
++ wsa_macro_enable_disable_vi_feedback(component, true, rate_val);
++ break;
++ case SND_SOC_DAPM_POST_PMD:
++ /* Disable V&I sensing */
++ wsa_macro_enable_disable_vi_feedback(component, false, rate_val);
+ break;
+ }
+
+diff --git a/sound/soc/dwc/dwc-i2s.c b/sound/soc/dwc/dwc-i2s.c
+index 57b789d7fbedd4..5b4f20dbf7bba4 100644
+--- a/sound/soc/dwc/dwc-i2s.c
++++ b/sound/soc/dwc/dwc-i2s.c
+@@ -199,12 +199,10 @@ static void i2s_start(struct dw_i2s_dev *dev,
+ else
+ i2s_write_reg(dev->i2s_base, IRER, 1);
+
+- /* I2S needs to enable IRQ to make a handshake with DMAC on the JH7110 SoC */
+- if (dev->use_pio || dev->is_jh7110)
+- i2s_enable_irqs(dev, substream->stream, config->chan_nr);
+- else
++ if (!(dev->use_pio || dev->is_jh7110))
+ i2s_enable_dma(dev, substream->stream);
+
++ i2s_enable_irqs(dev, substream->stream, config->chan_nr);
+ i2s_write_reg(dev->i2s_base, CER, 1);
+ }
+
+@@ -218,11 +216,12 @@ static void i2s_stop(struct dw_i2s_dev *dev,
+ else
+ i2s_write_reg(dev->i2s_base, IRER, 0);
+
+- if (dev->use_pio || dev->is_jh7110)
+- i2s_disable_irqs(dev, substream->stream, 8);
+- else
++ if (!(dev->use_pio || dev->is_jh7110))
+ i2s_disable_dma(dev, substream->stream);
+
++ i2s_disable_irqs(dev, substream->stream, 8);
++
++
+ if (!dev->active) {
+ i2s_write_reg(dev->i2s_base, CER, 0);
+ i2s_write_reg(dev->i2s_base, IER, 0);
+diff --git a/sound/soc/fsl/fsl_qmc_audio.c b/sound/soc/fsl/fsl_qmc_audio.c
+index 8668abd3520800..d41cb6f3efcacc 100644
+--- a/sound/soc/fsl/fsl_qmc_audio.c
++++ b/sound/soc/fsl/fsl_qmc_audio.c
+@@ -250,6 +250,9 @@ static int qmc_audio_pcm_trigger(struct snd_soc_component *component,
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ bitmap_zero(prtd->chans_pending, 64);
++ prtd->buffer_ended = 0;
++ prtd->ch_dma_addr_current = prtd->ch_dma_addr_start;
++
+ if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ for (i = 0; i < prtd->channels; i++)
+ prtd->qmc_dai->chans[i].prtd_tx = prtd;
+diff --git a/sound/soc/intel/avs/pcm.c b/sound/soc/intel/avs/pcm.c
+index 945f9c0a6a5455..15defce0f3eb84 100644
+--- a/sound/soc/intel/avs/pcm.c
++++ b/sound/soc/intel/avs/pcm.c
+@@ -925,7 +925,8 @@ static int avs_component_probe(struct snd_soc_component *component)
+ else
+ mach->tplg_filename = devm_kasprintf(adev->dev, GFP_KERNEL,
+ "hda-generic-tplg.bin");
+-
++ if (!mach->tplg_filename)
++ return -ENOMEM;
+ filename = kasprintf(GFP_KERNEL, "%s/%s", component->driver->topology_name_prefix,
+ mach->tplg_filename);
+ if (!filename)
+diff --git a/sound/soc/intel/boards/sof_sdw.c b/sound/soc/intel/boards/sof_sdw.c
+index 380fc3be8c932e..5911a055865160 100644
+--- a/sound/soc/intel/boards/sof_sdw.c
++++ b/sound/soc/intel/boards/sof_sdw.c
+@@ -688,6 +688,7 @@ static const struct dmi_system_id sof_sdw_quirk_table[] = {
+
+ static const struct snd_pci_quirk sof_sdw_ssid_quirk_table[] = {
+ SND_PCI_QUIRK(0x1043, 0x1e13, "ASUS Zenbook S14", SOC_SDW_CODEC_MIC),
++ SND_PCI_QUIRK(0x1043, 0x1f43, "ASUS Zenbook S16", SOC_SDW_CODEC_MIC),
+ {}
+ };
+
+diff --git a/sound/soc/qcom/lpass.h b/sound/soc/qcom/lpass.h
+index 27a2bf9a661393..de3ec6f594c11c 100644
+--- a/sound/soc/qcom/lpass.h
++++ b/sound/soc/qcom/lpass.h
+@@ -13,10 +13,11 @@
+ #include <linux/platform_device.h>
+ #include <linux/regmap.h>
+ #include <dt-bindings/sound/qcom,lpass.h>
++#include <dt-bindings/sound/qcom,q6afe.h>
+ #include "lpass-hdmi.h"
+
+ #define LPASS_AHBIX_CLOCK_FREQUENCY 131072000
+-#define LPASS_MAX_PORTS (LPASS_CDC_DMA_VA_TX8 + 1)
++#define LPASS_MAX_PORTS (DISPLAY_PORT_RX_7 + 1)
+ #define LPASS_MAX_MI2S_PORTS (8)
+ #define LPASS_MAX_DMA_CHANNELS (8)
+ #define LPASS_MAX_HDMI_DMA_CHANNELS (4)
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 127862fa05c619..ce3ea0c2de0425 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -217,6 +217,7 @@ static bool is_rust_noreturn(const struct symbol *func)
+ str_ends_with(func->name, "_4core9panicking14panic_nounwind") ||
+ str_ends_with(func->name, "_4core9panicking18panic_bounds_check") ||
+ str_ends_with(func->name, "_4core9panicking19assert_failed_inner") ||
++ str_ends_with(func->name, "_4core9panicking30panic_null_pointer_dereference") ||
+ str_ends_with(func->name, "_4core9panicking36panic_misaligned_pointer_dereference") ||
+ strstr(func->name, "_4core9panicking13assert_failed") ||
+ strstr(func->name, "_4core9panicking11panic_const24panic_const_") ||
+diff --git a/tools/testing/kunit/qemu_configs/sh.py b/tools/testing/kunit/qemu_configs/sh.py
+index 78a474a5b95f3a..f00cb89fdef6aa 100644
+--- a/tools/testing/kunit/qemu_configs/sh.py
++++ b/tools/testing/kunit/qemu_configs/sh.py
+@@ -7,7 +7,9 @@ CONFIG_CPU_SUBTYPE_SH7751R=y
+ CONFIG_MEMORY_START=0x0c000000
+ CONFIG_SH_RTS7751R2D=y
+ CONFIG_RTS7751R2D_PLUS=y
+-CONFIG_SERIAL_SH_SCI=y''',
++CONFIG_SERIAL_SH_SCI=y
++CONFIG_CMDLINE_EXTEND=y
++''',
+ qemu_arch='sh4',
+ kernel_path='arch/sh/boot/zImage',
+ kernel_command_line='console=ttySC1',
+diff --git a/tools/testing/selftests/bpf/prog_tests/changes_pkt_data.c b/tools/testing/selftests/bpf/prog_tests/changes_pkt_data.c
+new file mode 100644
+index 00000000000000..7526de3790814c
+--- /dev/null
++++ b/tools/testing/selftests/bpf/prog_tests/changes_pkt_data.c
+@@ -0,0 +1,107 @@
++// SPDX-License-Identifier: GPL-2.0
++#include "bpf/libbpf.h"
++#include "changes_pkt_data_freplace.skel.h"
++#include "changes_pkt_data.skel.h"
++#include <test_progs.h>
++
++static void print_verifier_log(const char *log)
++{
++ if (env.verbosity >= VERBOSE_VERY)
++ fprintf(stdout, "VERIFIER LOG:\n=============\n%s=============\n", log);
++}
++
++static void test_aux(const char *main_prog_name,
++ const char *to_be_replaced,
++ const char *replacement,
++ bool expect_load)
++{
++ struct changes_pkt_data_freplace *freplace = NULL;
++ struct bpf_program *freplace_prog = NULL;
++ struct bpf_program *main_prog = NULL;
++ LIBBPF_OPTS(bpf_object_open_opts, opts);
++ struct changes_pkt_data *main = NULL;
++ char log[16*1024];
++ int err;
++
++ opts.kernel_log_buf = log;
++ opts.kernel_log_size = sizeof(log);
++ if (env.verbosity >= VERBOSE_SUPER)
++ opts.kernel_log_level = 1 | 2 | 4;
++ main = changes_pkt_data__open_opts(&opts);
++ if (!ASSERT_OK_PTR(main, "changes_pkt_data__open"))
++ goto out;
++ main_prog = bpf_object__find_program_by_name(main->obj, main_prog_name);
++ if (!ASSERT_OK_PTR(main_prog, "main_prog"))
++ goto out;
++ bpf_program__set_autoload(main_prog, true);
++ err = changes_pkt_data__load(main);
++ print_verifier_log(log);
++ if (!ASSERT_OK(err, "changes_pkt_data__load"))
++ goto out;
++ freplace = changes_pkt_data_freplace__open_opts(&opts);
++ if (!ASSERT_OK_PTR(freplace, "changes_pkt_data_freplace__open"))
++ goto out;
++ freplace_prog = bpf_object__find_program_by_name(freplace->obj, replacement);
++ if (!ASSERT_OK_PTR(freplace_prog, "freplace_prog"))
++ goto out;
++ bpf_program__set_autoload(freplace_prog, true);
++ bpf_program__set_autoattach(freplace_prog, true);
++ bpf_program__set_attach_target(freplace_prog,
++ bpf_program__fd(main_prog),
++ to_be_replaced);
++ err = changes_pkt_data_freplace__load(freplace);
++ print_verifier_log(log);
++ if (expect_load) {
++ ASSERT_OK(err, "changes_pkt_data_freplace__load");
++ } else {
++ ASSERT_ERR(err, "changes_pkt_data_freplace__load");
++ ASSERT_HAS_SUBSTR(log, "Extension program changes packet data", "error log");
++ }
++
++out:
++ changes_pkt_data_freplace__destroy(freplace);
++ changes_pkt_data__destroy(main);
++}
++
++/* There are two global subprograms in both changes_pkt_data.skel.h:
++ * - one changes packet data;
++ * - another does not.
++ * It is ok to freplace subprograms that change packet data with those
++ * that either do or do not. It is only ok to freplace subprograms
++ * that do not change packet data with those that do not as well.
++ * The below tests check outcomes for each combination of such freplace.
++ * Also test a case when main subprogram itself is replaced and is a single
++ * subprogram in a program.
++ */
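/* Editorial summary (not part of the original test): the expected result
 * passed to test_aux() below is mains[i].changes || !replacements[j].changes,
 * so only replacing a non-changing subprogram with a changing one must fail:
 *
 *   to_be_replaced changes | replacement changes | freplace load
 *   -----------------------+---------------------+--------------
 *   yes                    | yes                 | ok
 *   yes                    | no                  | ok
 *   no                     | no                  | ok
 *   no                     | yes                 | rejected
 */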
++void test_changes_pkt_data_freplace(void)
++{
++ struct {
++ const char *main;
++ const char *to_be_replaced;
++ bool changes;
++ } mains[] = {
++ { "main_with_subprogs", "changes_pkt_data", true },
++ { "main_with_subprogs", "does_not_change_pkt_data", false },
++ { "main_changes", "main_changes", true },
++ { "main_does_not_change", "main_does_not_change", false },
++ };
++ struct {
++ const char *func;
++ bool changes;
++ } replacements[] = {
++ { "changes_pkt_data", true },
++ { "does_not_change_pkt_data", false }
++ };
++ char buf[64];
++
++ for (int i = 0; i < ARRAY_SIZE(mains); ++i) {
++ for (int j = 0; j < ARRAY_SIZE(replacements); ++j) {
++ snprintf(buf, sizeof(buf), "%s_with_%s",
++ mains[i].to_be_replaced, replacements[j].func);
++ if (!test__start_subtest(buf))
++ continue;
++ test_aux(mains[i].main, mains[i].to_be_replaced, replacements[j].func,
++ mains[i].changes || !replacements[j].changes);
++ }
++ }
++}
+diff --git a/tools/testing/selftests/bpf/progs/changes_pkt_data.c b/tools/testing/selftests/bpf/progs/changes_pkt_data.c
+new file mode 100644
+index 00000000000000..43cada48b28ad4
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/changes_pkt_data.c
+@@ -0,0 +1,39 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/bpf.h>
++#include <bpf/bpf_helpers.h>
++
++__noinline
++long changes_pkt_data(struct __sk_buff *sk)
++{
++ return bpf_skb_pull_data(sk, 0);
++}
++
++__noinline __weak
++long does_not_change_pkt_data(struct __sk_buff *sk)
++{
++ return 0;
++}
++
++SEC("?tc")
++int main_with_subprogs(struct __sk_buff *sk)
++{
++ changes_pkt_data(sk);
++ does_not_change_pkt_data(sk);
++ return 0;
++}
++
++SEC("?tc")
++int main_changes(struct __sk_buff *sk)
++{
++ bpf_skb_pull_data(sk, 0);
++ return 0;
++}
++
++SEC("?tc")
++int main_does_not_change(struct __sk_buff *sk)
++{
++ return 0;
++}
++
++char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/bpf/progs/changes_pkt_data_freplace.c b/tools/testing/selftests/bpf/progs/changes_pkt_data_freplace.c
+new file mode 100644
+index 00000000000000..f9a622705f1b3b
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/changes_pkt_data_freplace.c
+@@ -0,0 +1,18 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/bpf.h>
++#include <bpf/bpf_helpers.h>
++
++SEC("?freplace")
++long changes_pkt_data(struct __sk_buff *sk)
++{
++ return bpf_skb_pull_data(sk, 0);
++}
++
++SEC("?freplace")
++long does_not_change_pkt_data(struct __sk_buff *sk)
++{
++ return 0;
++}
++
++char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/bpf/progs/raw_tp_null.c b/tools/testing/selftests/bpf/progs/raw_tp_null.c
+index 457f34c151e32f..5927054b6dd96f 100644
+--- a/tools/testing/selftests/bpf/progs/raw_tp_null.c
++++ b/tools/testing/selftests/bpf/progs/raw_tp_null.c
+@@ -3,6 +3,7 @@
+
+ #include <vmlinux.h>
+ #include <bpf/bpf_tracing.h>
++#include "bpf_misc.h"
+
+ char _license[] SEC("license") = "GPL";
+
+@@ -17,16 +18,14 @@ int BPF_PROG(test_raw_tp_null, struct sk_buff *skb)
+ if (task->pid != tid)
+ return 0;
+
+- i = i + skb->mark + 1;
+- /* The compiler may move the NULL check before this deref, which causes
+- * the load to fail as deref of scalar. Prevent that by using a barrier.
++ /* If dead code elimination kicks in, the increment +=2 will be
++ * removed. For raw_tp programs attaching to tracepoints in kernel
++ * modules, we mark input arguments as PTR_MAYBE_NULL, so branch
++ * prediction should never kick in.
+ */
+- barrier();
+- /* If dead code elimination kicks in, the increment below will
+- * be removed. For raw_tp programs, we mark input arguments as
+- * PTR_MAYBE_NULL, so branch prediction should never kick in.
+- */
+- if (!skb)
+- i += 2;
++ asm volatile ("%[i] += 1; if %[ctx] != 0 goto +1; %[i] += 2;"
++ : [i]"+r"(i)
++ : [ctx]"r"(skb)
++ : "memory");
+ return 0;
+ }
+diff --git a/tools/testing/selftests/bpf/progs/verifier_sock.c b/tools/testing/selftests/bpf/progs/verifier_sock.c
+index ee76b51005abe7..3c8f6646e33dae 100644
+--- a/tools/testing/selftests/bpf/progs/verifier_sock.c
++++ b/tools/testing/selftests/bpf/progs/verifier_sock.c
+@@ -50,6 +50,13 @@ struct {
+ __uint(map_flags, BPF_F_NO_PREALLOC);
+ } sk_storage_map SEC(".maps");
+
++struct {
++ __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
++ __uint(max_entries, 1);
++ __uint(key_size, sizeof(__u32));
++ __uint(value_size, sizeof(__u32));
++} jmp_table SEC(".maps");
++
+ SEC("cgroup/skb")
+ __description("skb->sk: no NULL check")
+ __failure __msg("invalid mem access 'sock_common_or_null'")
+@@ -977,4 +984,53 @@ l1_%=: r0 = *(u8*)(r7 + 0); \
+ : __clobber_all);
+ }
+
++__noinline
++long skb_pull_data2(struct __sk_buff *sk, __u32 len)
++{
++ return bpf_skb_pull_data(sk, len);
++}
++
++__noinline
++long skb_pull_data1(struct __sk_buff *sk, __u32 len)
++{
++ return skb_pull_data2(sk, len);
++}
++
++/* global function calls bpf_skb_pull_data(), which invalidates packet
++ * pointers established before global function call.
++ */
++SEC("tc")
++__failure __msg("invalid mem access")
++int invalidate_pkt_pointers_from_global_func(struct __sk_buff *sk)
++{
++ int *p = (void *)(long)sk->data;
++
++ if ((void *)(p + 1) > (void *)(long)sk->data_end)
++ return TCX_DROP;
++ skb_pull_data1(sk, 0);
++ *p = 42; /* this is unsafe */
++ return TCX_PASS;
++}
++
++__noinline
++int tail_call(struct __sk_buff *sk)
++{
++ bpf_tail_call_static(sk, &jmp_table, 0);
++ return 0;
++}
++
++/* Tail calls invalidate packet pointers. */
++SEC("tc")
++__failure __msg("invalid mem access")
++int invalidate_pkt_pointers_by_tail_call(struct __sk_buff *sk)
++{
++ int *p = (void *)(long)sk->data;
++
++ if ((void *)(p + 1) > (void *)(long)sk->data_end)
++ return TCX_DROP;
++ tail_call(sk);
++ *p = 42; /* this is unsafe */
++ return TCX_PASS;
++}
++
+ char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+index 67df7b47087f03..e1fe16bcbbe880 100755
+--- a/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
++++ b/tools/testing/selftests/mm/charge_reserved_hugetlb.sh
+@@ -29,7 +29,7 @@ fi
+ if [[ $cgroup2 ]]; then
+ cgroup_path=$(mount -t cgroup2 | head -1 | awk '{print $3}')
+ if [[ -z "$cgroup_path" ]]; then
+- cgroup_path=/dev/cgroup/memory
++ cgroup_path=$(mktemp -d)
+ mount -t cgroup2 none $cgroup_path
+ do_umount=1
+ fi
+@@ -37,7 +37,7 @@ if [[ $cgroup2 ]]; then
+ else
+ cgroup_path=$(mount -t cgroup | grep ",hugetlb" | awk '{print $3}')
+ if [[ -z "$cgroup_path" ]]; then
+- cgroup_path=/dev/cgroup/memory
++ cgroup_path=$(mktemp -d)
+ mount -t cgroup memory,hugetlb $cgroup_path
+ do_umount=1
+ fi
+diff --git a/tools/testing/selftests/mm/hugetlb_reparenting_test.sh b/tools/testing/selftests/mm/hugetlb_reparenting_test.sh
+index 11f9bbe7dc222b..0b0d4ba1af2771 100755
+--- a/tools/testing/selftests/mm/hugetlb_reparenting_test.sh
++++ b/tools/testing/selftests/mm/hugetlb_reparenting_test.sh
+@@ -23,7 +23,7 @@ fi
+ if [[ $cgroup2 ]]; then
+ CGROUP_ROOT=$(mount -t cgroup2 | head -1 | awk '{print $3}')
+ if [[ -z "$CGROUP_ROOT" ]]; then
+- CGROUP_ROOT=/dev/cgroup/memory
++ CGROUP_ROOT=$(mktemp -d)
+ mount -t cgroup2 none $CGROUP_ROOT
+ do_umount=1
+ fi
+diff --git a/tools/testing/shared/linux.c b/tools/testing/shared/linux.c
+index 17263696b5d880..61b3f571f7a708 100644
+--- a/tools/testing/shared/linux.c
++++ b/tools/testing/shared/linux.c
+@@ -147,7 +147,7 @@ void kmem_cache_free(struct kmem_cache *cachep, void *objp)
+ void kmem_cache_free_bulk(struct kmem_cache *cachep, size_t size, void **list)
+ {
+ if (kmalloc_verbose)
+- pr_debug("Bulk free %p[0-%lu]\n", list, size - 1);
++ pr_debug("Bulk free %p[0-%zu]\n", list, size - 1);
+
+ pthread_mutex_lock(&cachep->lock);
+ for (int i = 0; i < size; i++)
+@@ -165,7 +165,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *cachep, gfp_t gfp, size_t size,
+ size_t i;
+
+ if (kmalloc_verbose)
+- pr_debug("Bulk alloc %lu\n", size);
++ pr_debug("Bulk alloc %zu\n", size);
+
+ pthread_mutex_lock(&cachep->lock);
+ if (cachep->nr_objs >= size) {
* [gentoo-commits] proj/linux-patches:6.12 commit in: /
@ 2025-04-25 11:54 Mike Pagano
0 siblings, 0 replies; 43+ messages in thread
From: Mike Pagano @ 2025-04-25 11:54 UTC (permalink / raw
To: gentoo-commits
commit: 13c5d1f1019dc81393a5e15a0b7f0da86eb07edd
Author: Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 25 11:53:47 2025 +0000
Commit: Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr 25 11:53:47 2025 +0000
URL: https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=13c5d1f1
Remove no longer applying BMQ patch
Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>
0000_README | 8 -
5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch | 11302 -------------------------
5021_BMQ-and-PDS-gentoo-defaults.patch | 13 -
3 files changed, 11323 deletions(-)
diff --git a/0000_README b/0000_README
index e07d8d2e..f5e1ddec 100644
--- a/0000_README
+++ b/0000_README
@@ -194,11 +194,3 @@ Desc: Add Gentoo Linux support config settings and defaults.
Patch: 5010_enable-cpu-optimizations-universal.patch
From: https://github.com/graysky2/kernel_compiler_patch
Desc: Kernel >= 5.15 patch enables gcc = v11.1+ optimizations for additional CPUs.
-
-Patch: 5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch
-From: https://gitlab.com/alfredchen/projectc
-Desc: BMQ(BitMap Queue) Scheduler. A new CPU scheduler developed from PDS(incld). Inspired by the scheduler in zircon.
-
-Patch: 5021_BMQ-and-PDS-gentoo-defaults.patch
-From: https://gitweb.gentoo.org/proj/linux-patches.git/
-Desc: Set defaults for BMQ. default to n
diff --git a/5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch b/5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch
deleted file mode 100644
index 532813fd..00000000
--- a/5020_BMQ-and-PDS-io-scheduler-v6.12-r1.patch
+++ /dev/null
@@ -1,11302 +0,0 @@
-diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
-index f8bc1630eba0..1b90768a0916 100644
---- a/Documentation/admin-guide/sysctl/kernel.rst
-+++ b/Documentation/admin-guide/sysctl/kernel.rst
-@@ -1673,3 +1673,12 @@ is 10 seconds.
-
- The softlockup threshold is (``2 * watchdog_thresh``). Setting this
- tunable to zero will disable lockup detection altogether.
-+
-+yield_type:
-+===========
-+
-+BMQ/PDS CPU scheduler only. This determines what type of yield calls
-+to sched_yield() will be performed.
-+
-+ 0 - No yield.
-+ 1 - Requeue task. (default)
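A minimal editorial sketch of how this sysctl might be consumed, assuming the
sched_yield_type variable the patch defines in alt_core.c; yield_task() is a
hypothetical helper, not part of the patch:

        /* Editorial sketch, not from the patch: dispatch on yield_type. */
        static void do_sched_yield(void)
        {
                struct rq *rq = this_rq();

                if (0 == sched_yield_type)      /* 0 - No yield. */
                        return;

                /* 1 - Requeue task: move current to the tail of its
                 * priority queue so peers get a chance to run first. */
                yield_task(rq->curr, rq);       /* hypothetical helper */
                schedule();
        }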
-diff --git a/Documentation/scheduler/sched-BMQ.txt b/Documentation/scheduler/sched-BMQ.txt
-new file mode 100644
-index 000000000000..05c84eec0f31
---- /dev/null
-+++ b/Documentation/scheduler/sched-BMQ.txt
-@@ -0,0 +1,110 @@
-+ BitMap queue CPU Scheduler
-+ --------------------------
-+
-+CONTENT
-+========
-+
-+ Background
-+ Design
-+ Overview
-+ Task policy
-+ Priority management
-+ BitMap Queue
-+ CPU Assignment and Migration
-+
-+
-+Background
-+==========
-+
-+BitMap Queue CPU scheduler, referred to as BMQ from here on, is an evolution
-+of previous Priority and Deadline based Skiplist multiple queue scheduler(PDS),
-+and inspired by Zircon scheduler. The goal of it is to keep the scheduler code
-+simple, while efficiency and scalable for interactive tasks, such as desktop,
-+movie playback and gaming etc.
-+
-+Design
-+======
-+
-+Overview
-+--------
-+
-+BMQ use per CPU run queue design, each CPU(logical) has it's own run queue,
-+each CPU is responsible for scheduling the tasks that are putting into it's
-+run queue.
-+
-+The run queue is a set of priority queues. Note that these queues are fifo
-+queue for non-rt tasks or priority queue for rt tasks in data structure. See
-+BitMap Queue below for details. BMQ is optimized for non-rt tasks, since
-+most applications are non-rt tasks. Whether a queue is fifo or priority,
-+each queue is an ordered list of runnable tasks awaiting execution, and the
-+data structures are the same. When it is time for a new task to run, the
-+scheduler simply looks for the lowest numbered queue that contains a task
-+and runs the first task from the head of that queue. The per-CPU idle task
-+is also in the run queue, so the scheduler can always find a task to run
-+from its run queue.
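A minimal editorial sketch of that lowest-numbered-queue lookup, mirroring the
bitmap-plus-list-heads layout of struct sched_queue later in this patch
(find_first_bit() is the standard kernel helper; pick_next() is an
illustrative name):

        /* One bit per priority level; bit N set means heads[N] is non-empty. */
        struct sched_queue {
                DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
                struct list_head heads[SCHED_LEVELS];
        };

        /* Run the first task of the lowest numbered non-empty queue. The
         * per-CPU idle task always sits in the last queue, so the scan
         * never comes up empty. */
        static struct task_struct *pick_next(struct sched_queue *q)
        {
                int prio = find_first_bit(q->bitmap, SCHED_QUEUE_BITS);

                return list_first_entry(&q->heads[prio],
                                        struct task_struct, sq_node);
        }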
-+
-+Each task is assigned the same timeslice (default 4ms) when it is picked to
-+start running. A task is reinserted at the end of the appropriate priority
-+queue when it uses up its whole timeslice. When the scheduler selects a new
-+task from the priority queue, it sets the CPU's preemption timer for the
-+remainder of the previous timeslice. When that timer fires, the scheduler
-+stops execution of that task, selects another task and starts over again.
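A tick-driven editorial simplification of that timeslice bookkeeping, assuming
the per-task time_slice field this patch adds to task_struct and the 4 ms
sysctl_sched_base_slice default from alt_core.c; requeue_task() is a
hypothetical helper:

        /* Editorial sketch: charge the running task and requeue on expiry. */
        static void sched_tick(struct rq *rq)
        {
                struct task_struct *p = rq->curr;

                p->time_slice -= TICK_NSEC;
                if (p->time_slice <= 0) {
                        p->time_slice = sysctl_sched_base_slice; /* 4 ms in ns */
                        requeue_task(p, rq);   /* hypothetical: tail of queue */
                }
        }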
-+
-+If a task blocks waiting for a shared resource, it is taken out of its
-+priority queue and placed in a wait queue for the shared resource. When it
-+is unblocked, it is reinserted in the appropriate priority queue of an
-+eligible CPU.
-+
-+Task policy
-+-----------
-+
-+BMQ supports DEADLINE, FIFO, RR, NORMAL, BATCH and IDLE task policies like
-+the mainline CFS scheduler. But BMQ is heavily optimized for non-rt tasks,
-+that is, NORMAL/BATCH/IDLE policy tasks. Below are the implementation
-+details for each policy.
-+
-+DEADLINE
-+ It is squashed as priority 0 FIFO task.
-+
-+FIFO/RR
-+ All RT tasks share one single priority queue in the BMQ run queue design.
-+The complexity of the insert operation is O(n). BMQ is not designed for
-+systems that run mostly rt policy tasks.
-+
-+NORMAL/BATCH/IDLE
-+ BATCH and IDLE tasks are treated as the same policy. They compete for CPU
-+with NORMAL policy tasks, but they just don't get boosted. To control the
-+priority of NORMAL/BATCH/IDLE tasks, simply use the nice level.
-+
-+ISO
-+ ISO policy is not supported in BMQ. Please use nice level -20 NORMAL policy
-+task instead.
-+
-+Priority management
-+-------------------
-+
-+RT tasks have priorities from 0 to 99. For non-rt tasks, there are two
-+factors used to determine the effective priority of a task; the effective
-+priority is what determines which queue the task will be in.
-+
-+The first factor is simply the task's static priority, which is assigned
-+from the task's nice level: [-20, 19] from userland's point of view and
-+[0, 39] internally.
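For illustration, the conversion is a fixed offset (NICE_TO_STATIC is an
illustrative macro; the kernel's own NICE_TO_PRIO() additionally adds the
rt-priority offset):

        /* Userland nice [-20, 19] maps linearly onto internal [0, 39]. */
        #define NICE_TO_STATIC(nice)    ((nice) + 20)  /* -20 -> 0, 19 -> 39 */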
-+
-+The second factor is the priority boost. This is a value bounded between
-+[-MAX_PRIORITY_ADJ, MAX_PRIORITY_ADJ] used to offset the base priority; it
-+is modified in the following cases (see the sketch after this list):
-+
-+*When a thread has used up its entire timeslice, always deboost its boost
-+by increasing it by one.
-+*When a thread gives up cpu control (voluntarily or not) to reschedule, and
-+its switch-in time (the time it has run since it was last switched in) is
-+below the threshold based on its priority boost, boost its boost by
-+decreasing it by one, capped at 0 (it won't go negative).
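A minimal editorial sketch of those two adjustments, using the boost_prio
field and MAX_PRIORITY_ADJ bound this patch introduces; a larger boost_prio
means a lower effective priority, so "deboost" increments and "boost"
decrements, capped as the text describes (the real BMQ code differs in
detail):

        static void deboost(struct task_struct *p) /* used its whole slice */
        {
                if (p->boost_prio < MAX_PRIORITY_ADJ)
                        p->boost_prio++;
        }

        static void boost(struct task_struct *p)   /* quick voluntary switch */
        {
                if (p->boost_prio > 0)             /* capped at 0, per text */
                        p->boost_prio--;
        }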
-+
-+The intent in this system is to ensure that interactive threads are serviced
-+quickly. These are usually the threads that interact directly with the user
-+and cause user-perceivable latency. These threads usually do little work and
-+spend most of their time blocked awaiting another user event. So they get the
-+priority boost from unblocking while background threads that do most of the
-+processing receive the priority penalty for using their entire timeslice.
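Combining the two factors, a simplified editorial sketch of the effective
priority that selects the queue index (the real implementation also offsets
into the MIN_NORMAL_PRIO range and handles rt tasks separately):

        /* Illustrative: effective priority = static priority + boost. */
        static int task_sched_prio(const struct task_struct *p)
        {
                return p->static_prio + p->boost_prio;
        }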
-diff --git a/fs/proc/base.c b/fs/proc/base.c
-index b31283d81c52..e27c5c7b05f6 100644
---- a/fs/proc/base.c
-+++ b/fs/proc/base.c
-@@ -516,7 +516,7 @@ static int proc_pid_schedstat(struct seq_file *m, struct pid_namespace *ns,
- seq_puts(m, "0 0 0\n");
- else
- seq_printf(m, "%llu %llu %lu\n",
-- (unsigned long long)task->se.sum_exec_runtime,
-+ (unsigned long long)tsk_seruntime(task),
- (unsigned long long)task->sched_info.run_delay,
- task->sched_info.pcount);
-
-diff --git a/include/asm-generic/resource.h b/include/asm-generic/resource.h
-index 8874f681b056..59eb72bf7d5f 100644
---- a/include/asm-generic/resource.h
-+++ b/include/asm-generic/resource.h
-@@ -23,7 +23,7 @@
- [RLIMIT_LOCKS] = { RLIM_INFINITY, RLIM_INFINITY }, \
- [RLIMIT_SIGPENDING] = { 0, 0 }, \
- [RLIMIT_MSGQUEUE] = { MQ_BYTES_MAX, MQ_BYTES_MAX }, \
-- [RLIMIT_NICE] = { 0, 0 }, \
-+ [RLIMIT_NICE] = { 30, 30 }, \
- [RLIMIT_RTPRIO] = { 0, 0 }, \
- [RLIMIT_RTTIME] = { RLIM_INFINITY, RLIM_INFINITY }, \
- }
-diff --git a/include/linux/sched.h b/include/linux/sched.h
-index bb343136ddd0..6adfea989b7b 100644
---- a/include/linux/sched.h
-+++ b/include/linux/sched.h
-@@ -804,9 +804,13 @@ struct task_struct {
- struct alloc_tag *alloc_tag;
- #endif
-
--#ifdef CONFIG_SMP
-+#if defined(CONFIG_SMP) || defined(CONFIG_SCHED_ALT)
- int on_cpu;
-+#endif
-+
-+#ifdef CONFIG_SMP
- struct __call_single_node wake_entry;
-+#ifndef CONFIG_SCHED_ALT
- unsigned int wakee_flips;
- unsigned long wakee_flip_decay_ts;
- struct task_struct *last_wakee;
-@@ -820,6 +824,7 @@ struct task_struct {
- */
- int recent_used_cpu;
- int wake_cpu;
-+#endif /* !CONFIG_SCHED_ALT */
- #endif
- int on_rq;
-
-@@ -828,6 +833,19 @@ struct task_struct {
- int normal_prio;
- unsigned int rt_priority;
-
-+#ifdef CONFIG_SCHED_ALT
-+ u64 last_ran;
-+ s64 time_slice;
-+ struct list_head sq_node;
-+#ifdef CONFIG_SCHED_BMQ
-+ int boost_prio;
-+#endif /* CONFIG_SCHED_BMQ */
-+#ifdef CONFIG_SCHED_PDS
-+ u64 deadline;
-+#endif /* CONFIG_SCHED_PDS */
-+ /* sched_clock time spent running */
-+ u64 sched_time;
-+#else /* !CONFIG_SCHED_ALT */
- struct sched_entity se;
- struct sched_rt_entity rt;
- struct sched_dl_entity dl;
-@@ -842,6 +860,7 @@ struct task_struct {
- unsigned long core_cookie;
- unsigned int core_occupation;
- #endif
-+#endif /* !CONFIG_SCHED_ALT */
-
- #ifdef CONFIG_CGROUP_SCHED
- struct task_group *sched_task_group;
-@@ -878,11 +897,15 @@ struct task_struct {
- const cpumask_t *cpus_ptr;
- cpumask_t *user_cpus_ptr;
- cpumask_t cpus_mask;
-+#ifndef CONFIG_SCHED_ALT
- void *migration_pending;
-+#endif
- #ifdef CONFIG_SMP
- unsigned short migration_disabled;
- #endif
-+#ifndef CONFIG_SCHED_ALT
- unsigned short migration_flags;
-+#endif
-
- #ifdef CONFIG_PREEMPT_RCU
- int rcu_read_lock_nesting;
-@@ -914,8 +937,10 @@ struct task_struct {
-
- struct list_head tasks;
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- struct plist_node pushable_tasks;
- struct rb_node pushable_dl_tasks;
-+#endif
- #endif
-
- struct mm_struct *mm;
-@@ -1609,6 +1634,15 @@ struct task_struct {
- */
- };
-
-+#ifdef CONFIG_SCHED_ALT
-+#define tsk_seruntime(t) ((t)->sched_time)
-+/* replace the uncertain rt_timeout with 0UL */
-+#define tsk_rttimeout(t) (0UL)
-+#else /* CFS */
-+#define tsk_seruntime(t) ((t)->se.sum_exec_runtime)
-+#define tsk_rttimeout(t) ((t)->rt.timeout)
-+#endif /* !CONFIG_SCHED_ALT */
-+
- #define TASK_REPORT_IDLE (TASK_REPORT + 1)
- #define TASK_REPORT_MAX (TASK_REPORT_IDLE << 1)
-
-@@ -2135,7 +2169,11 @@ static inline void set_task_cpu(struct task_struct *p, unsigned int cpu)
-
- static inline bool task_is_runnable(struct task_struct *p)
- {
-+#ifdef CONFIG_SCHED_ALT
-+ return p->on_rq;
-+#else
- return p->on_rq && !p->se.sched_delayed;
-+#endif /* !CONFIG_SCHED_ALT */
- }
-
- extern bool sched_task_on_rq(struct task_struct *p);
-diff --git a/include/linux/sched/deadline.h b/include/linux/sched/deadline.h
-index 3a912ab42bb5..269a1513a153 100644
---- a/include/linux/sched/deadline.h
-+++ b/include/linux/sched/deadline.h
-@@ -2,6 +2,25 @@
- #ifndef _LINUX_SCHED_DEADLINE_H
- #define _LINUX_SCHED_DEADLINE_H
-
-+#ifdef CONFIG_SCHED_ALT
-+
-+static inline int dl_task(struct task_struct *p)
-+{
-+ return 0;
-+}
-+
-+#ifdef CONFIG_SCHED_BMQ
-+#define __tsk_deadline(p) (0UL)
-+#endif
-+
-+#ifdef CONFIG_SCHED_PDS
-+#define __tsk_deadline(p) ((((u64) ((p)->prio))<<56) | (p)->deadline)
-+#endif
-+
-+#else
-+
-+#define __tsk_deadline(p) ((p)->dl.deadline)
-+
- /*
- * SCHED_DEADLINE tasks has negative priorities, reflecting
- * the fact that any of them has higher prio than RT and
-@@ -23,6 +42,7 @@ static inline bool dl_task(struct task_struct *p)
- {
- return dl_prio(p->prio);
- }
-+#endif /* CONFIG_SCHED_ALT */
-
- static inline bool dl_time_before(u64 a, u64 b)
- {
-diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
-index 6ab43b4f72f9..ef1cff556c5e 100644
---- a/include/linux/sched/prio.h
-+++ b/include/linux/sched/prio.h
-@@ -19,6 +19,28 @@
- #define MAX_PRIO (MAX_RT_PRIO + NICE_WIDTH)
- #define DEFAULT_PRIO (MAX_RT_PRIO + NICE_WIDTH / 2)
-
-+#ifdef CONFIG_SCHED_ALT
-+
-+/* Undefine MAX_PRIO and DEFAULT_PRIO */
-+#undef MAX_PRIO
-+#undef DEFAULT_PRIO
-+
-+/* +/- priority levels from the base priority */
-+#ifdef CONFIG_SCHED_BMQ
-+#define MAX_PRIORITY_ADJ (12)
-+#endif
-+
-+#ifdef CONFIG_SCHED_PDS
-+#define MAX_PRIORITY_ADJ (0)
-+#endif
-+
-+#define MIN_NORMAL_PRIO (128)
-+#define NORMAL_PRIO_NUM (64)
-+#define MAX_PRIO (MIN_NORMAL_PRIO + NORMAL_PRIO_NUM)
-+#define DEFAULT_PRIO (MAX_PRIO - MAX_PRIORITY_ADJ - NICE_WIDTH / 2)
-+
-+#endif /* CONFIG_SCHED_ALT */
-+
- /*
- * Convert user-nice values [ -20 ... 0 ... 19 ]
- * to static priority [ MAX_RT_PRIO..MAX_PRIO-1 ],
-diff --git a/include/linux/sched/rt.h b/include/linux/sched/rt.h
-index 4e3338103654..6dfef878fe3b 100644
---- a/include/linux/sched/rt.h
-+++ b/include/linux/sched/rt.h
-@@ -45,8 +45,10 @@ static inline bool rt_or_dl_task_policy(struct task_struct *tsk)
-
- if (policy == SCHED_FIFO || policy == SCHED_RR)
- return true;
-+#ifndef CONFIG_SCHED_ALT
- if (policy == SCHED_DEADLINE)
- return true;
-+#endif
- return false;
- }
-
-diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
-index 4237daa5ac7a..3cebd93c49c8 100644
---- a/include/linux/sched/topology.h
-+++ b/include/linux/sched/topology.h
-@@ -244,7 +244,8 @@ static inline bool cpus_share_resources(int this_cpu, int that_cpu)
-
- #endif /* !CONFIG_SMP */
-
--#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL)
-+#if defined(CONFIG_ENERGY_MODEL) && defined(CONFIG_CPU_FREQ_GOV_SCHEDUTIL) && \
-+ !defined(CONFIG_SCHED_ALT)
- extern void rebuild_sched_domains_energy(void);
- #else
- static inline void rebuild_sched_domains_energy(void)
-diff --git a/init/Kconfig b/init/Kconfig
-index c521e1421ad4..4a397b48a453 100644
---- a/init/Kconfig
-+++ b/init/Kconfig
-@@ -652,6 +652,7 @@ config TASK_IO_ACCOUNTING
-
- config PSI
- bool "Pressure stall information tracking"
-+ depends on !SCHED_ALT
- select KERNFS
- help
- Collect metrics that indicate how overcommitted the CPU, memory,
-@@ -863,6 +864,35 @@ config UCLAMP_BUCKETS_COUNT
-
- If in doubt, use the default value.
-
-+menuconfig SCHED_ALT
-+ bool "Alternative CPU Schedulers"
-+ default y
-+ help
-+ This feature enables alternative CPU schedulers.
-+
-+if SCHED_ALT
-+
-+choice
-+ prompt "Alternative CPU Scheduler"
-+ default SCHED_BMQ
-+
-+config SCHED_BMQ
-+ bool "BMQ CPU scheduler"
-+ help
-+ The BitMap Queue CPU scheduler for excellent interactivity and
-+ responsiveness on the desktop and solid scalability on normal
-+ hardware and commodity servers.
-+
-+config SCHED_PDS
-+ bool "PDS CPU scheduler"
-+ help
-+ The Priority and Deadline based Skip list multiple queue CPU
-+ Scheduler.
-+
-+endchoice
-+
-+endif
-+
- endmenu
-
- #
-@@ -928,6 +958,7 @@ config NUMA_BALANCING
- depends on ARCH_SUPPORTS_NUMA_BALANCING
- depends on !ARCH_WANT_NUMA_VARIABLE_LOCALITY
- depends on SMP && NUMA && MIGRATION && !PREEMPT_RT
-+ depends on !SCHED_ALT
- help
- This option adds support for automatic NUMA aware memory/task placement.
- The mechanism is quite primitive and is based on migrating memory when
-@@ -1334,6 +1365,7 @@ config CHECKPOINT_RESTORE
-
- config SCHED_AUTOGROUP
- bool "Automatic process group scheduling"
-+ depends on !SCHED_ALT
- select CGROUPS
- select CGROUP_SCHED
- select FAIR_GROUP_SCHED
-diff --git a/init/init_task.c b/init/init_task.c
-index 136a8231355a..12c01ab8e718 100644
---- a/init/init_task.c
-+++ b/init/init_task.c
-@@ -71,9 +71,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
- .stack = init_stack,
- .usage = REFCOUNT_INIT(2),
- .flags = PF_KTHREAD,
-+#ifdef CONFIG_SCHED_ALT
-+ .on_cpu = 1,
-+ .prio = DEFAULT_PRIO,
-+ .static_prio = DEFAULT_PRIO,
-+ .normal_prio = DEFAULT_PRIO,
-+#else
- .prio = MAX_PRIO - 20,
- .static_prio = MAX_PRIO - 20,
- .normal_prio = MAX_PRIO - 20,
-+#endif
- .policy = SCHED_NORMAL,
- .cpus_ptr = &init_task.cpus_mask,
- .user_cpus_ptr = NULL,
-@@ -86,6 +93,16 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
- .restart_block = {
- .fn = do_no_restart_syscall,
- },
-+#ifdef CONFIG_SCHED_ALT
-+ .sq_node = LIST_HEAD_INIT(init_task.sq_node),
-+#ifdef CONFIG_SCHED_BMQ
-+ .boost_prio = 0,
-+#endif
-+#ifdef CONFIG_SCHED_PDS
-+ .deadline = 0,
-+#endif
-+ .time_slice = HZ,
-+#else
- .se = {
- .group_node = LIST_HEAD_INIT(init_task.se.group_node),
- },
-@@ -93,10 +110,13 @@ struct task_struct init_task __aligned(L1_CACHE_BYTES) = {
- .run_list = LIST_HEAD_INIT(init_task.rt.run_list),
- .time_slice = RR_TIMESLICE,
- },
-+#endif
- .tasks = LIST_HEAD_INIT(init_task.tasks),
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_SMP
- .pushable_tasks = PLIST_NODE_INIT(init_task.pushable_tasks, MAX_PRIO),
- #endif
-+#endif
- #ifdef CONFIG_CGROUP_SCHED
- .sched_task_group = &root_task_group,
- #endif
-diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
-index fe782cd77388..d27d2154d71a 100644
---- a/kernel/Kconfig.preempt
-+++ b/kernel/Kconfig.preempt
-@@ -117,7 +117,7 @@ config PREEMPT_DYNAMIC
-
- config SCHED_CORE
- bool "Core Scheduling for SMT"
-- depends on SCHED_SMT
-+ depends on SCHED_SMT && !SCHED_ALT
- help
- This option permits Core Scheduling, a means of coordinated task
- selection across SMT siblings. When enabled -- see
-@@ -135,7 +135,7 @@ config SCHED_CORE
-
- config SCHED_CLASS_EXT
- bool "Extensible Scheduling Class"
-- depends on BPF_SYSCALL && BPF_JIT && DEBUG_INFO_BTF
-+ depends on BPF_SYSCALL && BPF_JIT && DEBUG_INFO_BTF && !SCHED_ALT
- select STACKTRACE if STACKTRACE_SUPPORT
- help
- This option enables a new scheduler class sched_ext (SCX), which
-diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
-index a4dd285cdf39..5b4ebe58d032 100644
---- a/kernel/cgroup/cpuset.c
-+++ b/kernel/cgroup/cpuset.c
-@@ -620,7 +620,7 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
- return ret;
- }
-
--#ifdef CONFIG_SMP
-+#if defined(CONFIG_SMP) && !defined(CONFIG_SCHED_ALT)
- /*
- * Helper routine for generate_sched_domains().
- * Do cpusets a, b have overlapping effective cpus_allowed masks?
-@@ -1031,7 +1031,7 @@ void rebuild_sched_domains_locked(void)
- /* Have scheduler rebuild the domains */
- partition_and_rebuild_sched_domains(ndoms, doms, attr);
- }
--#else /* !CONFIG_SMP */
-+#else /* !CONFIG_SMP || CONFIG_SCHED_ALT */
- void rebuild_sched_domains_locked(void)
- {
- }
-@@ -2926,12 +2926,15 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
- goto out_unlock;
- }
-
-+#ifndef CONFIG_SCHED_ALT
- if (dl_task(task)) {
- cs->nr_migrate_dl_tasks++;
- cs->sum_migrate_dl_bw += task->dl.dl_bw;
- }
-+#endif
- }
-
-+#ifndef CONFIG_SCHED_ALT
- if (!cs->nr_migrate_dl_tasks)
- goto out_success;
-
-@@ -2952,6 +2955,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
- }
-
- out_success:
-+#endif
- /*
- * Mark attach is in progress. This makes validate_change() fail
- * changes which zero cpus/mems_allowed.
-@@ -2973,12 +2977,14 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
- mutex_lock(&cpuset_mutex);
- dec_attach_in_progress_locked(cs);
-
-+#ifndef CONFIG_SCHED_ALT
- if (cs->nr_migrate_dl_tasks) {
- int cpu = cpumask_any(cs->effective_cpus);
-
- dl_bw_free(cpu, cs->sum_migrate_dl_bw);
- reset_migrate_dl_data(cs);
- }
-+#endif
-
- mutex_unlock(&cpuset_mutex);
- }
-diff --git a/kernel/delayacct.c b/kernel/delayacct.c
-index dead51de8eb5..8edef9676ab3 100644
---- a/kernel/delayacct.c
-+++ b/kernel/delayacct.c
-@@ -149,7 +149,7 @@ int delayacct_add_tsk(struct taskstats *d, struct task_struct *tsk)
- */
- t1 = tsk->sched_info.pcount;
- t2 = tsk->sched_info.run_delay;
-- t3 = tsk->se.sum_exec_runtime;
-+ t3 = tsk_seruntime(tsk);
-
- d->cpu_count += t1;
-
-diff --git a/kernel/exit.c b/kernel/exit.c
-index 619f0014c33b..7dc53ddd45a8 100644
---- a/kernel/exit.c
-+++ b/kernel/exit.c
-@@ -175,7 +175,7 @@ static void __exit_signal(struct task_struct *tsk)
- sig->curr_target = next_thread(tsk);
- }
-
-- add_device_randomness((const void*) &tsk->se.sum_exec_runtime,
-+ add_device_randomness((const void*) &tsk_seruntime(tsk),
- sizeof(unsigned long long));
-
- /*
-@@ -196,7 +196,7 @@ static void __exit_signal(struct task_struct *tsk)
- sig->inblock += task_io_get_inblock(tsk);
- sig->oublock += task_io_get_oublock(tsk);
- task_io_accounting_add(&sig->ioac, &tsk->ioac);
-- sig->sum_sched_runtime += tsk->se.sum_exec_runtime;
-+ sig->sum_sched_runtime += tsk_seruntime(tsk);
- sig->nr_threads--;
- __unhash_process(tsk, group_dead);
- write_sequnlock(&sig->stats_lock);
-diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
-index ebebd0eec7f6..802112207855 100644
---- a/kernel/locking/rtmutex.c
-+++ b/kernel/locking/rtmutex.c
-@@ -363,7 +363,7 @@ waiter_update_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
- lockdep_assert(RB_EMPTY_NODE(&waiter->tree.entry));
-
- waiter->tree.prio = __waiter_prio(task);
-- waiter->tree.deadline = task->dl.deadline;
-+ waiter->tree.deadline = __tsk_deadline(task);
- }
-
- /*
-@@ -384,16 +384,20 @@ waiter_clone_prio(struct rt_mutex_waiter *waiter, struct task_struct *task)
- * Only use with rt_waiter_node_{less,equal}()
- */
- #define task_to_waiter_node(p) \
-- &(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = (p)->dl.deadline }
-+ &(struct rt_waiter_node){ .prio = __waiter_prio(p), .deadline = __tsk_deadline(p) }
- #define task_to_waiter(p) \
- &(struct rt_mutex_waiter){ .tree = *task_to_waiter_node(p) }
-
- static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
- struct rt_waiter_node *right)
- {
-+#ifdef CONFIG_SCHED_PDS
-+ return (left->deadline < right->deadline);
-+#else
- if (left->prio < right->prio)
- return 1;
-
-+#ifndef CONFIG_SCHED_BMQ
- /*
- * If both waiters have dl_prio(), we check the deadlines of the
- * associated tasks.
-@@ -402,16 +406,22 @@ static __always_inline int rt_waiter_node_less(struct rt_waiter_node *left,
- */
- if (dl_prio(left->prio))
- return dl_time_before(left->deadline, right->deadline);
-+#endif
-
- return 0;
-+#endif
- }
-
- static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
- struct rt_waiter_node *right)
- {
-+#ifdef CONFIG_SCHED_PDS
-+ return (left->deadline == right->deadline);
-+#else
- if (left->prio != right->prio)
- return 0;
-
-+#ifndef CONFIG_SCHED_BMQ
- /*
- * If both waiters have dl_prio(), we check the deadlines of the
- * associated tasks.
-@@ -420,8 +430,10 @@ static __always_inline int rt_waiter_node_equal(struct rt_waiter_node *left,
- */
- if (dl_prio(left->prio))
- return left->deadline == right->deadline;
-+#endif
-
- return 1;
-+#endif
- }
-
- static inline bool rt_mutex_steal(struct rt_mutex_waiter *waiter,
-diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
-index 76d204b7d29c..de1a52f963e5 100644
---- a/kernel/locking/ww_mutex.h
-+++ b/kernel/locking/ww_mutex.h
-@@ -247,6 +247,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
-
- /* equal static prio */
-
-+#ifndef CONFIG_SCHED_ALT
- if (dl_prio(a_prio)) {
- if (dl_time_before(b->task->dl.deadline,
- a->task->dl.deadline))
-@@ -256,6 +257,7 @@ __ww_ctx_less(struct ww_acquire_ctx *a, struct ww_acquire_ctx *b)
- b->task->dl.deadline))
- return false;
- }
-+#endif
-
- /* equal prio */
- }
-diff --git a/kernel/sched/Makefile b/kernel/sched/Makefile
-index 976092b7bd45..31d587c16ec1 100644
---- a/kernel/sched/Makefile
-+++ b/kernel/sched/Makefile
-@@ -28,7 +28,12 @@ endif
- # These compilation units have roughly the same size and complexity - so their
- # build parallelizes well and finishes roughly at once:
- #
-+ifdef CONFIG_SCHED_ALT
-+obj-y += alt_core.o
-+obj-$(CONFIG_SCHED_DEBUG) += alt_debug.o
-+else
- obj-y += core.o
- obj-y += fair.o
-+endif
- obj-y += build_policy.o
- obj-y += build_utility.o
-diff --git a/kernel/sched/alt_core.c b/kernel/sched/alt_core.c
-new file mode 100644
-index 000000000000..0a08bc0176ac
---- /dev/null
-+++ b/kernel/sched/alt_core.c
-@@ -0,0 +1,7515 @@
-+/*
-+ * kernel/sched/alt_core.c
-+ *
-+ * Core alternative kernel scheduler code and related syscalls
-+ *
-+ * Copyright (C) 1991-2002 Linus Torvalds
-+ *
-+ * 2009-08-13 Brainfuck deadline scheduling policy by Con Kolivas deletes
-+ * a whole lot of those previous things.
-+ * 2017-09-06 Priority and Deadline based Skip list multiple queue kernel
-+ * scheduler by Alfred Chen.
-+ * 2019-02-20 BMQ(BitMap Queue) kernel scheduler by Alfred Chen.
-+ */
-+#include <linux/sched/clock.h>
-+#include <linux/sched/cputime.h>
-+#include <linux/sched/debug.h>
-+#include <linux/sched/hotplug.h>
-+#include <linux/sched/init.h>
-+#include <linux/sched/isolation.h>
-+#include <linux/sched/loadavg.h>
-+#include <linux/sched/mm.h>
-+#include <linux/sched/nohz.h>
-+#include <linux/sched/stat.h>
-+#include <linux/sched/wake_q.h>
-+
-+#include <linux/blkdev.h>
-+#include <linux/context_tracking.h>
-+#include <linux/cpuset.h>
-+#include <linux/delayacct.h>
-+#include <linux/init_task.h>
-+#include <linux/kcov.h>
-+#include <linux/kprobes.h>
-+#include <linux/nmi.h>
-+#include <linux/rseq.h>
-+#include <linux/scs.h>
-+
-+#include <uapi/linux/sched/types.h>
-+
-+#include <asm/irq_regs.h>
-+#include <asm/switch_to.h>
-+
-+#define CREATE_TRACE_POINTS
-+#include <trace/events/sched.h>
-+#include <trace/events/ipi.h>
-+#undef CREATE_TRACE_POINTS
-+
-+#include "sched.h"
-+#include "smp.h"
-+
-+#include "pelt.h"
-+
-+#include "../../io_uring/io-wq.h"
-+#include "../smpboot.h"
-+
-+EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpu);
-+EXPORT_TRACEPOINT_SYMBOL_GPL(ipi_send_cpumask);
-+
-+/*
-+ * Export tracepoints that act as a bare tracehook (ie: have no trace event
-+ * associated with them) to allow external modules to probe them.
-+ */
-+EXPORT_TRACEPOINT_SYMBOL_GPL(pelt_irq_tp);
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+#define sched_feat(x) (1)
-+/*
-+ * Print a warning if need_resched is set for the given duration (if
-+ * LATENCY_WARN is enabled).
-+ *
-+ * If sysctl_resched_latency_warn_once is set, only one warning will be shown
-+ * per boot.
-+ */
-+__read_mostly int sysctl_resched_latency_warn_ms = 100;
-+__read_mostly int sysctl_resched_latency_warn_once = 1;
-+#else
-+#define sched_feat(x) (0)
-+#endif /* CONFIG_SCHED_DEBUG */
-+
-+#define ALT_SCHED_VERSION "v6.12-r1"
-+
-+#define STOP_PRIO (MAX_RT_PRIO - 1)
-+
-+/*
-+ * Time slice
-+ * (default: 4 msec, units: nanoseconds)
-+ */
-+unsigned int sysctl_sched_base_slice __read_mostly = (4 << 20);
-+
-+#include "alt_core.h"
-+#include "alt_topology.h"
-+
-+/* Reschedule if less than this many μs left */
-+#define RESCHED_NS (100 << 10)
-+
-+/**
-+ * sched_yield_type - Type of yield that sched_yield() will perform.
-+ * 0: No yield.
-+ * 1: Requeue task. (default)
-+ */
-+int sched_yield_type __read_mostly = 1;
-+
-+#ifdef CONFIG_SMP
-+cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
-+
-+DEFINE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
-+DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
-+DEFINE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_topo_end_mask);
-+
-+#ifdef CONFIG_SCHED_SMT
-+DEFINE_STATIC_KEY_FALSE(sched_smt_present);
-+EXPORT_SYMBOL_GPL(sched_smt_present);
-+
-+cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
-+#endif
-+
-+/*
-+ * Keep a unique ID per domain (we use the first CPUs number in the cpumask of
-+ * the domain), this allows us to quickly tell if two cpus are in the same cache
-+ * domain, see cpus_share_cache().
-+ */
-+DEFINE_PER_CPU(int, sd_llc_id);
-+#endif /* CONFIG_SMP */
-+
-+DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-+
-+#ifndef prepare_arch_switch
-+# define prepare_arch_switch(next) do { } while (0)
-+#endif
-+#ifndef finish_arch_post_lock_switch
-+# define finish_arch_post_lock_switch() do { } while (0)
-+#endif
-+
-+static cpumask_t sched_preempt_mask[SCHED_QUEUE_BITS + 2] ____cacheline_aligned_in_smp;
-+
-+cpumask_t *const sched_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS - 1];
-+cpumask_t *const sched_sg_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
-+cpumask_t *const sched_pcore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS];
-+cpumask_t *const sched_ecore_idle_mask = &sched_preempt_mask[SCHED_QUEUE_BITS + 1];
-+
-+/* task function */
-+static inline const struct cpumask *task_user_cpus(struct task_struct *p)
-+{
-+ if (!p->user_cpus_ptr)
-+ return cpu_possible_mask; /* &init_task.cpus_mask */
-+ return p->user_cpus_ptr;
-+}
-+
-+/* sched_queue related functions */
-+static inline void sched_queue_init(struct sched_queue *q)
-+{
-+ int i;
-+
-+ bitmap_zero(q->bitmap, SCHED_QUEUE_BITS);
-+ for(i = 0; i < SCHED_LEVELS; i++)
-+ INIT_LIST_HEAD(&q->heads[i]);
-+}
-+
-+/*
-+ * Init idle task and put into queue structure of rq
-+ * IMPORTANT: may be called multiple times for a single cpu
-+ */
-+static inline void sched_queue_init_idle(struct sched_queue *q,
-+ struct task_struct *idle)
-+{
-+ INIT_LIST_HEAD(&q->heads[IDLE_TASK_SCHED_PRIO]);
-+ list_add_tail(&idle->sq_node, &q->heads[IDLE_TASK_SCHED_PRIO]);
-+ idle->on_rq = TASK_ON_RQ_QUEUED;
-+}
-+
-+#define CLEAR_CACHED_PREEMPT_MASK(pr, low, high, cpu) \
-+ if (low < pr && pr <= high) \
-+ cpumask_clear_cpu(cpu, sched_preempt_mask + pr);
-+
-+#define SET_CACHED_PREEMPT_MASK(pr, low, high, cpu) \
-+ if (low < pr && pr <= high) \
-+ cpumask_set_cpu(cpu, sched_preempt_mask + pr);
-+
-+static atomic_t sched_prio_record = ATOMIC_INIT(0);
-+
-+/* water mark related functions */
-+static inline void update_sched_preempt_mask(struct rq *rq)
-+{
-+ int prio = find_first_bit(rq->queue.bitmap, SCHED_QUEUE_BITS);
-+ int last_prio = rq->prio;
-+ int cpu, pr;
-+
-+ if (prio == last_prio)
-+ return;
-+
-+ rq->prio = prio;
-+#ifdef CONFIG_SCHED_PDS
-+ rq->prio_idx = sched_prio2idx(rq->prio, rq);
-+#endif
-+ cpu = cpu_of(rq);
-+ pr = atomic_read(&sched_prio_record);
-+
-+ if (prio < last_prio) {
-+ if (IDLE_TASK_SCHED_PRIO == last_prio) {
-+ rq->clear_idle_mask_func(cpu, sched_idle_mask);
-+ last_prio -= 2;
-+ }
-+ CLEAR_CACHED_PREEMPT_MASK(pr, prio, last_prio, cpu);
-+
-+ return;
-+ }
-+ /* last_prio < prio */
-+ if (IDLE_TASK_SCHED_PRIO == prio) {
-+ rq->set_idle_mask_func(cpu, sched_idle_mask);
-+ prio -= 2;
-+ }
-+ SET_CACHED_PREEMPT_MASK(pr, last_prio, prio, cpu);
-+}
-+
-+/*
-+ * Serialization rules:
-+ *
-+ * Lock order:
-+ *
-+ * p->pi_lock
-+ * rq->lock
-+ * hrtimer_cpu_base->lock (hrtimer_start() for bandwidth controls)
-+ *
-+ * rq1->lock
-+ * rq2->lock where: rq1 < rq2
-+ *
-+ * Regular state:
-+ *
-+ * Normal scheduling state is serialized by rq->lock. __schedule() takes the
-+ * local CPU's rq->lock, it optionally removes the task from the runqueue and
-+ * always looks at the local rq data structures to find the most eligible task
-+ * to run next.
-+ *
-+ * Task enqueue is also under rq->lock, possibly taken from another CPU.
-+ * Wakeups from another LLC domain might use an IPI to transfer the enqueue to
-+ * the local CPU to avoid bouncing the runqueue state around [ see
-+ * ttwu_queue_wakelist() ]
-+ *
-+ * Task wakeup, specifically wakeups that involve migration, are horribly
-+ * complicated to avoid having to take two rq->locks.
-+ *
-+ * Special state:
-+ *
-+ * System-calls and anything external will use task_rq_lock() which acquires
-+ * both p->pi_lock and rq->lock. As a consequence the state they change is
-+ * stable while holding either lock:
-+ *
-+ * - sched_setaffinity()/
-+ * set_cpus_allowed_ptr(): p->cpus_ptr, p->nr_cpus_allowed
-+ * - set_user_nice(): p->se.load, p->*prio
-+ * - __sched_setscheduler(): p->sched_class, p->policy, p->*prio,
-+ * p->se.load, p->rt_priority,
-+ * p->dl.dl_{runtime, deadline, period, flags, bw, density}
-+ * - sched_setnuma(): p->numa_preferred_nid
-+ * - sched_move_task(): p->sched_task_group
-+ * - uclamp_update_active() p->uclamp*
-+ *
-+ * p->state <- TASK_*:
-+ *
-+ * is changed locklessly using set_current_state(), __set_current_state() or
-+ * set_special_state(), see their respective comments, or by
-+ * try_to_wake_up(). This latter uses p->pi_lock to serialize against
-+ * concurrent self.
-+ *
-+ * p->on_rq <- { 0, 1 = TASK_ON_RQ_QUEUED, 2 = TASK_ON_RQ_MIGRATING }:
-+ *
-+ * is set by activate_task() and cleared by deactivate_task(), under
-+ * rq->lock. Non-zero indicates the task is runnable, the special
-+ * ON_RQ_MIGRATING state is used for migration without holding both
-+ * rq->locks. It indicates task_cpu() is not stable, see task_rq_lock().
-+ *
-+ * Additionally it is possible to be ->on_rq but still be considered not
-+ * runnable when p->se.sched_delayed is true. These tasks are on the runqueue
-+ * but will be dequeued as soon as they get picked again. See the
-+ * task_is_runnable() helper.
-+ *
-+ * p->on_cpu <- { 0, 1 }:
-+ *
-+ * is set by prepare_task() and cleared by finish_task() such that it will be
-+ * set before p is scheduled-in and cleared after p is scheduled-out, both
-+ * under rq->lock. Non-zero indicates the task is running on its CPU.
-+ *
-+ * [ The astute reader will observe that it is possible for two tasks on one
-+ * CPU to have ->on_cpu = 1 at the same time. ]
-+ *
-+ * task_cpu(p): is changed by set_task_cpu(), the rules are:
-+ *
-+ * - Don't call set_task_cpu() on a blocked task:
-+ *
-+ * We don't care what CPU we're not running on, this simplifies hotplug,
-+ * the CPU assignment of blocked tasks isn't required to be valid.
-+ *
-+ * - for try_to_wake_up(), called under p->pi_lock:
-+ *
-+ * This allows try_to_wake_up() to only take one rq->lock, see its comment.
-+ *
-+ * - for migration called under rq->lock:
-+ * [ see task_on_rq_migrating() in task_rq_lock() ]
-+ *
-+ * o move_queued_task()
-+ * o detach_task()
-+ *
-+ * - for migration called under double_rq_lock():
-+ *
-+ * o __migrate_swap_task()
-+ * o push_rt_task() / pull_rt_task()
-+ * o push_dl_task() / pull_dl_task()
-+ * o dl_task_offline_migration()
-+ *
-+ */
-+
-+/*
-+ * Context: p->pi_lock
-+ */
-+static inline struct rq *
-+task_access_lock_irqsave(struct task_struct *p, raw_spinlock_t **plock, unsigned long *flags)
-+{
-+ struct rq *rq;
-+ for (;;) {
-+ rq = task_rq(p);
-+ if (p->on_cpu || task_on_rq_queued(p)) {
-+ raw_spin_lock_irqsave(&rq->lock, *flags);
-+ if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
-+ *plock = &rq->lock;
-+ return rq;
-+ }
-+ raw_spin_unlock_irqrestore(&rq->lock, *flags);
-+ } else if (task_on_rq_migrating(p)) {
-+ do {
-+ cpu_relax();
-+ } while (unlikely(task_on_rq_migrating(p)));
-+ } else {
-+ raw_spin_lock_irqsave(&p->pi_lock, *flags);
-+ if (likely(!p->on_cpu && !p->on_rq && rq == task_rq(p))) {
-+ *plock = &p->pi_lock;
-+ return rq;
-+ }
-+ raw_spin_unlock_irqrestore(&p->pi_lock, *flags);
-+ }
-+ }
-+}
-+
-+static inline void
-+task_access_unlock_irqrestore(struct task_struct *p, raw_spinlock_t *lock, unsigned long *flags)
-+{
-+ raw_spin_unlock_irqrestore(lock, *flags);
-+}
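-+
-+/*
-+ * Illustrative usage (editorial, not part of the original patch): the
-+ * pair above lets a caller pin an arbitrary task's state without
-+ * knowing in advance which lock protects it:
-+ *
-+ *	raw_spinlock_t *lock;
-+ *	unsigned long flags;
-+ *	struct rq *rq = task_access_lock_irqsave(p, &lock, &flags);
-+ *
-+ *	... p is stable here: pinned by rq->lock if on_cpu/queued,
-+ *	... or by p->pi_lock if fully blocked ...
-+ *
-+ *	task_access_unlock_irqrestore(p, lock, &flags);
-+ */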
-+
-+/*
-+ * __task_rq_lock - lock the rq @p resides on.
-+ */
-+struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+ __acquires(rq->lock)
-+{
-+ struct rq *rq;
-+
-+ lockdep_assert_held(&p->pi_lock);
-+
-+ for (;;) {
-+ rq = task_rq(p);
-+ raw_spin_lock(&rq->lock);
-+ if (likely(rq == task_rq(p) && !task_on_rq_migrating(p)))
-+ return rq;
-+ raw_spin_unlock(&rq->lock);
-+
-+ while (unlikely(task_on_rq_migrating(p)))
-+ cpu_relax();
-+ }
-+}
-+
-+/*
-+ * task_rq_lock - lock p->pi_lock and lock the rq @p resides on.
-+ */
-+struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+ __acquires(p->pi_lock)
-+ __acquires(rq->lock)
-+{
-+ struct rq *rq;
-+
-+ for (;;) {
-+ raw_spin_lock_irqsave(&p->pi_lock, rf->flags);
-+ rq = task_rq(p);
-+ raw_spin_lock(&rq->lock);
-+ /*
-+ * move_queued_task() task_rq_lock()
-+ *
-+ * ACQUIRE (rq->lock)
-+ * [S] ->on_rq = MIGRATING [L] rq = task_rq()
-+ * WMB (__set_task_cpu()) ACQUIRE (rq->lock);
-+ * [S] ->cpu = new_cpu [L] task_rq()
-+ * [L] ->on_rq
-+ * RELEASE (rq->lock)
-+ *
-+ * If we observe the old CPU in task_rq_lock(), the acquire of
-+ * the old rq->lock will fully serialize against the stores.
-+ *
-+ * If we observe the new CPU in task_rq_lock(), the address
-+ * dependency headed by '[L] rq = task_rq()' and the acquire
-+ * will pair with the WMB to ensure we then also see migrating.
-+ */
-+ if (likely(rq == task_rq(p) && !task_on_rq_migrating(p))) {
-+ return rq;
-+ }
-+ raw_spin_unlock(&rq->lock);
-+ raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
-+
-+ while (unlikely(task_on_rq_migrating(p)))
-+ cpu_relax();
-+ }
-+}
-+
-+static inline void rq_lock_irqsave(struct rq *rq, struct rq_flags *rf)
-+ __acquires(rq->lock)
-+{
-+ raw_spin_lock_irqsave(&rq->lock, rf->flags);
-+}
-+
-+static inline void rq_unlock_irqrestore(struct rq *rq, struct rq_flags *rf)
-+ __releases(rq->lock)
-+{
-+ raw_spin_unlock_irqrestore(&rq->lock, rf->flags);
-+}
-+
-+DEFINE_LOCK_GUARD_1(rq_lock_irqsave, struct rq,
-+ rq_lock_irqsave(_T->lock, &_T->rf),
-+ rq_unlock_irqrestore(_T->lock, &_T->rf),
-+ struct rq_flags rf)
-+
-+void raw_spin_rq_lock_nested(struct rq *rq, int subclass)
-+{
-+ raw_spinlock_t *lock;
-+
-+ /* Matches synchronize_rcu() in __sched_core_enable() */
-+ preempt_disable();
-+
-+ for (;;) {
-+ lock = __rq_lockp(rq);
-+ raw_spin_lock_nested(lock, subclass);
-+ if (likely(lock == __rq_lockp(rq))) {
-+ /* preempt_count *MUST* be > 1 */
-+ preempt_enable_no_resched();
-+ return;
-+ }
-+ raw_spin_unlock(lock);
-+ }
-+}
-+
-+void raw_spin_rq_unlock(struct rq *rq)
-+{
-+ raw_spin_unlock(rq_lockp(rq));
-+}
-+
-+/*
-+ * RQ-clock updating methods:
-+ */
-+
-+static void update_rq_clock_task(struct rq *rq, s64 delta)
-+{
-+/*
-+ * In theory, the compiler should just see 0 here, and optimize out the call
-+ * to sched_rt_avg_update. But I don't trust it...
-+ */
-+ s64 __maybe_unused steal = 0, irq_delta = 0;
-+
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+ irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
-+
-+ /*
-+ * Since irq_time is only updated on {soft,}irq_exit, we might run into
-+ * this case when a previous update_rq_clock() happened inside a
-+ * {soft,}IRQ region.
-+ *
-+ * When this happens, we stop ->clock_task and only update the
-+ * prev_irq_time stamp to account for the part that fit, so that a next
-+ * update will consume the rest. This ensures ->clock_task is
-+ * monotonic.
-+ *
-+ * It does however cause some slight miss-attribution of {soft,}IRQ
-+ * time, a more accurate solution would be to update the irq_time using
-+ * the current rq->clock timestamp, except that would require using
-+ * atomic ops.
-+ */
-+ if (irq_delta > delta)
-+ irq_delta = delta;
-+
-+ rq->prev_irq_time += irq_delta;
-+ delta -= irq_delta;
-+ delayacct_irq(rq->curr, irq_delta);
-+#endif
-+#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
-+ if (static_key_false((&paravirt_steal_rq_enabled))) {
-+ steal = paravirt_steal_clock(cpu_of(rq));
-+ steal -= rq->prev_steal_time_rq;
-+
-+ if (unlikely(steal > delta))
-+ steal = delta;
-+
-+ rq->prev_steal_time_rq += steal;
-+ delta -= steal;
-+ }
-+#endif
-+
-+ rq->clock_task += delta;
-+
-+#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
-+ if ((irq_delta + steal))
-+ update_irq_load_avg(rq, irq_delta + steal);
-+#endif
-+}
-+
-+static inline void update_rq_clock(struct rq *rq)
-+{
-+ s64 delta = sched_clock_cpu(cpu_of(rq)) - rq->clock;
-+
-+ if (unlikely(delta <= 0))
-+ return;
-+ rq->clock += delta;
-+ sched_update_rq_clock(rq);
-+ update_rq_clock_task(rq, delta);
-+}
-+
-+/*
-+ * RQ Load update routine
-+ */
-+#define RQ_LOAD_HISTORY_BITS (sizeof(s32) * 8ULL)
-+#define RQ_UTIL_SHIFT (8)
-+#define RQ_LOAD_HISTORY_TO_UTIL(l) (((l) >> (RQ_LOAD_HISTORY_BITS - 1 - RQ_UTIL_SHIFT)) & 0xff)
-+
-+#define LOAD_BLOCK(t) ((t) >> 17)
-+#define LOAD_HALF_BLOCK(t) ((t) >> 16)
-+#define BLOCK_MASK(t) ((t) & ((0x01 << 18) - 1))
-+#define LOAD_BLOCK_BIT(b) (1UL << (RQ_LOAD_HISTORY_BITS - 1 - (b)))
-+#define CURRENT_LOAD_BIT LOAD_BLOCK_BIT(0)
-+
-+static inline void rq_load_update(struct rq *rq)
-+{
-+ u64 time = rq->clock;
-+ u64 delta = min(LOAD_BLOCK(time) - LOAD_BLOCK(rq->load_stamp), RQ_LOAD_HISTORY_BITS - 1);
-+ u64 prev = !!(rq->load_history & CURRENT_LOAD_BIT);
-+ u64 curr = !!rq->nr_running;
-+
-+ if (delta) {
-+ rq->load_history = rq->load_history >> delta;
-+
-+ if (delta < RQ_UTIL_SHIFT) {
-+ rq->load_block += (~BLOCK_MASK(rq->load_stamp)) * prev;
-+ if (!!LOAD_HALF_BLOCK(rq->load_block) ^ curr)
-+ rq->load_history ^= LOAD_BLOCK_BIT(delta);
-+ }
-+
-+ rq->load_block = BLOCK_MASK(time) * prev;
-+ } else {
-+ rq->load_block += (time - rq->load_stamp) * prev;
-+ }
-+ if (prev ^ curr)
-+ rq->load_history ^= CURRENT_LOAD_BIT;
-+ rq->load_stamp = time;
-+}
-+
-+unsigned long rq_load_util(struct rq *rq, unsigned long max)
-+{
-+ return RQ_LOAD_HISTORY_TO_UTIL(rq->load_history) * (max >> RQ_UTIL_SHIFT);
-+}
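-+
-+/*
-+ * Editorial worked example (not part of the original patch):
-+ * rq->load_history is a 32-bit shift register of busy/idle samples,
-+ * one per 2^17 ns (~131 us) block, with the in-progress block tracked
-+ * by CURRENT_LOAD_BIT at the top. RQ_LOAD_HISTORY_TO_UTIL() extracts
-+ * the eight most recent completed samples, so a history whose top
-+ * bits are 0b0_11110000... yields 0xf0 = 240, and with max = 1024:
-+ *
-+ *	rq_load_util(rq, 1024) = 240 * (1024 >> 8) = 960	(~94% busy)
-+ */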
-+
-+#ifdef CONFIG_SMP
-+unsigned long sched_cpu_util(int cpu)
-+{
-+ return rq_load_util(cpu_rq(cpu), arch_scale_cpu_capacity(cpu));
-+}
-+#endif /* CONFIG_SMP */
-+
-+#ifdef CONFIG_CPU_FREQ
-+/**
-+ * cpufreq_update_util - Take a note about CPU utilization changes.
-+ * @rq: Runqueue to carry out the update for.
-+ * @flags: Update reason flags.
-+ *
-+ * This function is called by the scheduler on the CPU whose utilization is
-+ * being updated.
-+ *
-+ * It can only be called from RCU-sched read-side critical sections.
-+ *
-+ * The way cpufreq is currently arranged requires it to evaluate the CPU
-+ * performance state (frequency/voltage) on a regular basis to prevent it from
-+ * being stuck in a completely inadequate performance level for too long.
-+ * That is not guaranteed to happen if the updates are only triggered from CFS
-+ * and DL, though, because those updates may not be coming in when only
-+ * RT tasks are active all the time.
-+ *
-+ * As a workaround for that issue, this function is called periodically by the
-+ * RT sched class to trigger extra cpufreq updates to prevent it from stalling,
-+ * but that really is a band-aid. Going forward it should be replaced with
-+ * solutions targeted more specifically at RT tasks.
-+ */
-+static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
-+{
-+ struct update_util_data *data;
-+
-+#ifdef CONFIG_SMP
-+ rq_load_update(rq);
-+#endif
-+ data = rcu_dereference_sched(*per_cpu_ptr(&cpufreq_update_util_data, cpu_of(rq)));
-+ if (data)
-+ data->func(data, rq_clock(rq), flags);
-+}
-+#else
-+static inline void cpufreq_update_util(struct rq *rq, unsigned int flags)
-+{
-+#ifdef CONFIG_SMP
-+ rq_load_update(rq);
-+#endif
-+}
-+#endif /* CONFIG_CPU_FREQ */
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+/*
-+ * Tick may be needed by tasks in the runqueue depending on their policy and
-+ * requirements. If tick is needed, let's send the target an IPI to kick it out
-+ * of nohz mode if necessary.
-+ */
-+static inline void sched_update_tick_dependency(struct rq *rq)
-+{
-+ int cpu = cpu_of(rq);
-+
-+ if (!tick_nohz_full_cpu(cpu))
-+ return;
-+
-+ if (rq->nr_running < 2)
-+ tick_nohz_dep_clear_cpu(cpu, TICK_DEP_BIT_SCHED);
-+ else
-+ tick_nohz_dep_set_cpu(cpu, TICK_DEP_BIT_SCHED);
-+}
-+#else /* !CONFIG_NO_HZ_FULL */
-+static inline void sched_update_tick_dependency(struct rq *rq) { }
-+#endif
-+
-+bool sched_task_on_rq(struct task_struct *p)
-+{
-+ return task_on_rq_queued(p);
-+}
-+
-+unsigned long get_wchan(struct task_struct *p)
-+{
-+ unsigned long ip = 0;
-+ unsigned int state;
-+
-+ if (!p || p == current)
-+ return 0;
-+
-+ /* Only get wchan if task is blocked and we can keep it that way. */
-+ raw_spin_lock_irq(&p->pi_lock);
-+ state = READ_ONCE(p->__state);
-+ smp_rmb(); /* see try_to_wake_up() */
-+ if (state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq)
-+ ip = __get_wchan(p);
-+ raw_spin_unlock_irq(&p->pi_lock);
-+
-+ return ip;
-+}
-+
-+/*
-+ * Add/Remove/Requeue task to/from the runqueue routines
-+ * Context: rq->lock
-+ */
-+#define __SCHED_DEQUEUE_TASK(p, rq, flags, func) \
-+ sched_info_dequeue(rq, p); \
-+ \
-+ __list_del_entry(&p->sq_node); \
-+ if (p->sq_node.prev == p->sq_node.next) { \
-+ clear_bit(sched_idx2prio(p->sq_node.next - &rq->queue.heads[0], rq), \
-+ rq->queue.bitmap); \
-+ func; \
-+ }
-+
-+#define __SCHED_ENQUEUE_TASK(p, rq, flags, func) \
-+ sched_info_enqueue(rq, p); \
-+ { \
-+ int idx, prio; \
-+ TASK_SCHED_PRIO_IDX(p, rq, idx, prio); \
-+ list_add_tail(&p->sq_node, &rq->queue.heads[idx]); \
-+ if (list_is_first(&p->sq_node, &rq->queue.heads[idx])) { \
-+ set_bit(prio, rq->queue.bitmap); \
-+ func; \
-+ } \
-+ }
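-+
-+/*
-+ * Editorial note (not part of the original patch): rq->queue is the
-+ * BitMap Queue itself -- an array of per-priority list heads plus a
-+ * bitmap with one bit per priority level. A level's bit is set exactly
-+ * while its list is non-empty, so the dequeue macro's
-+ * "p->sq_node.prev == p->sq_node.next" test (only the list head left)
-+ * detects the last task leaving a level, and the enqueue macro's
-+ * list_is_first() test detects the first task entering one.
-+ */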
-+
-+static inline void dequeue_task(struct task_struct *p, struct rq *rq, int flags)
-+{
-+#ifdef ALT_SCHED_DEBUG
-+ lockdep_assert_held(&rq->lock);
-+
-+ /*printk(KERN_INFO "sched: dequeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
-+ WARN_ONCE(task_rq(p) != rq, "sched: dequeue task residing on cpu%d from cpu%d\n",
-+ task_cpu(p), cpu_of(rq));
-+#endif
-+
-+ __SCHED_DEQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
-+ --rq->nr_running;
-+#ifdef CONFIG_SMP
-+ if (1 == rq->nr_running)
-+ cpumask_clear_cpu(cpu_of(rq), &sched_rq_pending_mask);
-+#endif
-+
-+ sched_update_tick_dependency(rq);
-+}
-+
-+static inline void enqueue_task(struct task_struct *p, struct rq *rq, int flags)
-+{
-+#ifdef ALT_SCHED_DEBUG
-+ lockdep_assert_held(&rq->lock);
-+
-+ /*printk(KERN_INFO "sched: enqueue(%d) %px %d\n", cpu_of(rq), p, p->prio);*/
-+ WARN_ONCE(task_rq(p) != rq, "sched: enqueue task residing on cpu%d to cpu%d\n",
-+ task_cpu(p), cpu_of(rq));
-+#endif
-+
-+ __SCHED_ENQUEUE_TASK(p, rq, flags, update_sched_preempt_mask(rq));
-+ ++rq->nr_running;
-+#ifdef CONFIG_SMP
-+ if (2 == rq->nr_running)
-+ cpumask_set_cpu(cpu_of(rq), &sched_rq_pending_mask);
-+#endif
-+
-+ sched_update_tick_dependency(rq);
-+}
-+
-+void requeue_task(struct task_struct *p, struct rq *rq)
-+{
-+ struct list_head *node = &p->sq_node;
-+ int deq_idx, idx, prio;
-+
-+ TASK_SCHED_PRIO_IDX(p, rq, idx, prio);
-+#ifdef ALT_SCHED_DEBUG
-+ lockdep_assert_held(&rq->lock);
-+ /*printk(KERN_INFO "sched: requeue(%d) %px %016llx\n", cpu_of(rq), p, p->deadline);*/
-+ WARN_ONCE(task_rq(p) != rq, "sched: cpu[%d] requeue task residing on cpu%d\n",
-+ cpu_of(rq), task_cpu(p));
-+#endif
-+ if (list_is_last(node, &rq->queue.heads[idx]))
-+ return;
-+
-+ __list_del_entry(node);
-+ if (node->prev == node->next && (deq_idx = node->next - &rq->queue.heads[0]) != idx)
-+ clear_bit(sched_idx2prio(deq_idx, rq), rq->queue.bitmap);
-+
-+ list_add_tail(node, &rq->queue.heads[idx]);
-+ if (list_is_first(node, &rq->queue.heads[idx]))
-+ set_bit(prio, rq->queue.bitmap);
-+ update_sched_preempt_mask(rq);
-+}
-+
-+/*
-+ * try_cmpxchg based fetch_or() macro so it works for different integer types:
-+ */
-+#define fetch_or(ptr, mask) \
-+ ({ \
-+ typeof(ptr) _ptr = (ptr); \
-+ typeof(mask) _mask = (mask); \
-+ typeof(*_ptr) _val = *_ptr; \
-+ \
-+ do { \
-+ } while (!try_cmpxchg(_ptr, &_val, _val | _mask)); \
-+ _val; \
-+})
-+
-+#if defined(CONFIG_SMP) && defined(TIF_POLLING_NRFLAG)
-+/*
-+ * Atomically set TIF_NEED_RESCHED and test for TIF_POLLING_NRFLAG,
-+ * this avoids any races wrt polling state changes and thereby avoids
-+ * spurious IPIs.
-+ */
-+static inline bool set_nr_and_not_polling(struct task_struct *p)
-+{
-+ struct thread_info *ti = task_thread_info(p);
-+ return !(fetch_or(&ti->flags, _TIF_NEED_RESCHED) & _TIF_POLLING_NRFLAG);
-+}
-+
-+/*
-+ * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
-+ *
-+ * If this returns true, then the idle task promises to call
-+ * sched_ttwu_pending() and reschedule soon.
-+ */
-+static bool set_nr_if_polling(struct task_struct *p)
-+{
-+ struct thread_info *ti = task_thread_info(p);
-+ typeof(ti->flags) val = READ_ONCE(ti->flags);
-+
-+ do {
-+ if (!(val & _TIF_POLLING_NRFLAG))
-+ return false;
-+ if (val & _TIF_NEED_RESCHED)
-+ return true;
-+ } while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
-+
-+ return true;
-+}
-+
-+#else
-+static inline bool set_nr_and_not_polling(struct task_struct *p)
-+{
-+ set_tsk_need_resched(p);
-+ return true;
-+}
-+
-+#ifdef CONFIG_SMP
-+static inline bool set_nr_if_polling(struct task_struct *p)
-+{
-+ return false;
-+}
-+#endif
-+#endif
-+
-+static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
-+{
-+ struct wake_q_node *node = &task->wake_q;
-+
-+ /*
-+ * Atomically grab the task, if ->wake_q is !nil already it means
-+ * it's already queued (either by us or someone else) and will get the
-+ * wakeup due to that.
-+ *
-+ * In order to ensure that a pending wakeup will observe our pending
-+ * state, even in the failed case, an explicit smp_mb() must be used.
-+ */
-+ smp_mb__before_atomic();
-+ if (unlikely(cmpxchg_relaxed(&node->next, NULL, WAKE_Q_TAIL)))
-+ return false;
-+
-+ /*
-+ * The head is context local, there can be no concurrency.
-+ */
-+ *head->lastp = node;
-+ head->lastp = &node->next;
-+ return true;
-+}
-+
-+/**
-+ * wake_q_add() - queue a wakeup for 'later' waking.
-+ * @head: the wake_q_head to add @task to
-+ * @task: the task to queue for 'later' wakeup
-+ *
-+ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
-+ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
-+ * instantly.
-+ *
-+ * This function must be used as-if it were wake_up_process(); IOW the task
-+ * must be ready to be woken at this location.
-+ */
-+void wake_q_add(struct wake_q_head *head, struct task_struct *task)
-+{
-+ if (__wake_q_add(head, task))
-+ get_task_struct(task);
-+}
-+
-+/**
-+ * wake_q_add_safe() - safely queue a wakeup for 'later' waking.
-+ * @head: the wake_q_head to add @task to
-+ * @task: the task to queue for 'later' wakeup
-+ *
-+ * Queue a task for later wakeup, most likely by the wake_up_q() call in the
-+ * same context, _HOWEVER_ this is not guaranteed, the wakeup can come
-+ * instantly.
-+ *
-+ * This function must be used as-if it were wake_up_process(); IOW the task
-+ * must be ready to be woken at this location.
-+ *
-+ * This function is essentially a task-safe equivalent to wake_q_add(). Callers
-+ * that already hold reference to @task can call the 'safe' version and trust
-+ * wake_q to do the right thing depending whether or not the @task is already
-+ * queued for wakeup.
-+ */
-+void wake_q_add_safe(struct wake_q_head *head, struct task_struct *task)
-+{
-+ if (!__wake_q_add(head, task))
-+ put_task_struct(task);
-+}
-+
-+void wake_up_q(struct wake_q_head *head)
-+{
-+ struct wake_q_node *node = head->first;
-+
-+ while (node != WAKE_Q_TAIL) {
-+ struct task_struct *task;
-+
-+ task = container_of(node, struct task_struct, wake_q);
-+ /* task can safely be re-inserted now: */
-+ node = node->next;
-+ task->wake_q.next = NULL;
-+
-+ /*
-+ * wake_up_process() executes a full barrier, which pairs with
-+ * the queueing in wake_q_add() so as not to miss wakeups.
-+ */
-+ wake_up_process(task);
-+ put_task_struct(task);
-+ }
-+}
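-+
-+/*
-+ * Illustrative usage (editorial, not part of the original patch);
-+ * "some_lock" stands for whatever lock the caller holds:
-+ *
-+ *	DEFINE_WAKE_Q(wake_q);
-+ *
-+ *	raw_spin_lock(&some_lock);
-+ *	wake_q_add(&wake_q, p);		-- collect wakeups under the lock
-+ *	raw_spin_unlock(&some_lock);
-+ *
-+ *	wake_up_q(&wake_q);		-- issue them after dropping it
-+ */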
-+
-+/*
-+ * resched_curr - mark rq's current task 'to be rescheduled now'.
-+ *
-+ * On UP this means the setting of the need_resched flag, on SMP it
-+ * might also involve a cross-CPU call to trigger the scheduler on
-+ * the target CPU.
-+ */
-+static inline void resched_curr(struct rq *rq)
-+{
-+ struct task_struct *curr = rq->curr;
-+ int cpu;
-+
-+ lockdep_assert_held(&rq->lock);
-+
-+ if (test_tsk_need_resched(curr))
-+ return;
-+
-+ cpu = cpu_of(rq);
-+ if (cpu == smp_processor_id()) {
-+ set_tsk_need_resched(curr);
-+ set_preempt_need_resched();
-+ return;
-+ }
-+
-+ if (set_nr_and_not_polling(curr))
-+ smp_send_reschedule(cpu);
-+ else
-+ trace_sched_wake_idle_without_ipi(cpu);
-+}
-+
-+void resched_cpu(int cpu)
-+{
-+ struct rq *rq = cpu_rq(cpu);
-+ unsigned long flags;
-+
-+ raw_spin_lock_irqsave(&rq->lock, flags);
-+ if (cpu_online(cpu) || cpu == smp_processor_id())
-+ resched_curr(rq);
-+ raw_spin_unlock_irqrestore(&rq->lock, flags);
-+}
-+
-+#ifdef CONFIG_SMP
-+#ifdef CONFIG_NO_HZ_COMMON
-+/*
-+ * Stub in this scheduler: the CPU's going-idle state is not recorded
-+ * here, so no idle load balancing information is kept.
-+ */
-+void nohz_balance_enter_idle(int cpu) {}
-+
-+/*
-+ * In the semi idle case, use the nearest busy CPU for migrating timers
-+ * from an idle CPU. This is good for power-savings.
-+ *
-+ * We don't do similar optimization for completely idle system, as
-+ * selecting an idle CPU will add more delays to the timers than intended
-+ * (as that CPU's timer base may not be up to date wrt jiffies etc).
-+ */
-+int get_nohz_timer_target(void)
-+{
-+ int i, cpu = smp_processor_id(), default_cpu = -1;
-+ struct cpumask *mask;
-+ const struct cpumask *hk_mask;
-+
-+ if (housekeeping_cpu(cpu, HK_TYPE_TIMER)) {
-+ if (!idle_cpu(cpu))
-+ return cpu;
-+ default_cpu = cpu;
-+ }
-+
-+ hk_mask = housekeeping_cpumask(HK_TYPE_TIMER);
-+
-+ for (mask = per_cpu(sched_cpu_topo_masks, cpu);
-+ mask < per_cpu(sched_cpu_topo_end_mask, cpu); mask++)
-+ for_each_cpu_and(i, mask, hk_mask)
-+ if (!idle_cpu(i))
-+ return i;
-+
-+ if (default_cpu == -1)
-+ default_cpu = housekeeping_any_cpu(HK_TYPE_TIMER);
-+ cpu = default_cpu;
-+
-+ return cpu;
-+}
-+
-+/*
-+ * When add_timer_on() enqueues a timer into the timer wheel of an
-+ * idle CPU then this timer might expire before the next timer event
-+ * which is scheduled to wake up that CPU. In case of a completely
-+ * idle system the next event might even be infinite time into the
-+ * future. wake_up_idle_cpu() ensures that the CPU is woken up and
-+ * leaves the inner idle loop so the newly added timer is taken into
-+ * account when the CPU goes back to idle and evaluates the timer
-+ * wheel for the next timer event.
-+ */
-+static inline void wake_up_idle_cpu(int cpu)
-+{
-+ struct rq *rq = cpu_rq(cpu);
-+
-+ if (cpu == smp_processor_id())
-+ return;
-+
-+ /*
-+ * Set TIF_NEED_RESCHED and send an IPI if in the non-polling
-+ * part of the idle loop. This forces an exit from the idle loop
-+ * and a round trip to schedule(). Now this could be optimized
-+ * because a simple new idle loop iteration is enough to
-+ * re-evaluate the next tick. Provided some re-ordering of tick
-+ * nohz functions that would need to follow TIF_NR_POLLING
-+ * clearing:
-+ *
-+ * - On most architectures, a simple fetch_or on ti::flags with a
-+ * "0" value would be enough to know if an IPI needs to be sent.
-+ *
-+ * - x86 needs to perform a last need_resched() check between
-+ * monitor and mwait which doesn't take timers into account.
-+ * There a dedicated TIF_TIMER flag would be required to
-+ * fetch_or here and be checked along with TIF_NEED_RESCHED
-+ * before mwait().
-+ *
-+ * However, remote timer enqueue is not such a frequent event
-+ * and testing of the above solutions didn't appear to show
-+ * much benefit.
-+ */
-+ if (set_nr_and_not_polling(rq->idle))
-+ smp_send_reschedule(cpu);
-+ else
-+ trace_sched_wake_idle_without_ipi(cpu);
-+}
-+
-+static inline bool wake_up_full_nohz_cpu(int cpu)
-+{
-+ /*
-+ * We just need the target to call irq_exit() and re-evaluate
-+ * the next tick. The nohz full kick at least implies that.
-+ * If needed we can still optimize that later with an
-+ * empty IRQ.
-+ */
-+ if (cpu_is_offline(cpu))
-+ return true; /* Don't try to wake offline CPUs. */
-+ if (tick_nohz_full_cpu(cpu)) {
-+ if (cpu != smp_processor_id() ||
-+ tick_nohz_tick_stopped())
-+ tick_nohz_full_kick_cpu(cpu);
-+ return true;
-+ }
-+
-+ return false;
-+}
-+
-+void wake_up_nohz_cpu(int cpu)
-+{
-+ if (!wake_up_full_nohz_cpu(cpu))
-+ wake_up_idle_cpu(cpu);
-+}
-+
-+static void nohz_csd_func(void *info)
-+{
-+ struct rq *rq = info;
-+ int cpu = cpu_of(rq);
-+ unsigned int flags;
-+
-+ /*
-+ * Release the rq::nohz_csd.
-+ */
-+ flags = atomic_fetch_andnot(NOHZ_KICK_MASK, nohz_flags(cpu));
-+ WARN_ON(!(flags & NOHZ_KICK_MASK));
-+
-+ rq->idle_balance = idle_cpu(cpu);
-+ if (rq->idle_balance && !need_resched()) {
-+ rq->nohz_idle_balance = flags;
-+ raise_softirq_irqoff(SCHED_SOFTIRQ);
-+ }
-+}
-+
-+#endif /* CONFIG_NO_HZ_COMMON */
-+#endif /* CONFIG_SMP */
-+
-+static inline void wakeup_preempt(struct rq *rq)
-+{
-+ if (sched_rq_first_task(rq) != rq->curr)
-+ resched_curr(rq);
-+}
-+
-+static __always_inline
-+int __task_state_match(struct task_struct *p, unsigned int state)
-+{
-+ if (READ_ONCE(p->__state) & state)
-+ return 1;
-+
-+ if (READ_ONCE(p->saved_state) & state)
-+ return -1;
-+
-+ return 0;
-+}
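-+
-+/*
-+ * Editorial note (not part of the original patch): the return values
-+ * above are 1 when p->__state matches, -1 when only p->saved_state
-+ * matches (state stashed away, e.g. while blocked on a PREEMPT_RT
-+ * sleeping lock), and 0 when neither matches.
-+ */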
-+
-+static __always_inline
-+int task_state_match(struct task_struct *p, unsigned int state)
-+{
-+ /*
-+ * Serialize against current_save_and_set_rtlock_wait_state(),
-+ * current_restore_rtlock_saved_state(), and __refrigerator().
-+ */
-+ guard(raw_spinlock_irq)(&p->pi_lock);
-+
-+ return __task_state_match(p, state);
-+}
-+
-+/*
-+ * wait_task_inactive - wait for a thread to unschedule.
-+ *
-+ * Wait for the thread to block in any of the states set in @match_state.
-+ * If it changes, i.e. @p might have woken up, then return zero. When we
-+ * succeed in waiting for @p to be off its CPU, we return a positive number
-+ * (its total switch count). If a second call a short while later returns the
-+ * same number, the caller can be sure that @p has remained unscheduled the
-+ * whole time.
-+ *
-+ * The caller must ensure that the task *will* unschedule sometime soon,
-+ * else this function might spin for a *long* time. This function can't
-+ * be called with interrupts off, or it may introduce deadlock with
-+ * smp_call_function() if an IPI is sent by the same process we are
-+ * waiting to become inactive.
-+ */
-+unsigned long wait_task_inactive(struct task_struct *p, unsigned int match_state)
-+{
-+ unsigned long flags;
-+ int running, queued, match;
-+ unsigned long ncsw;
-+ struct rq *rq;
-+ raw_spinlock_t *lock;
-+
-+ for (;;) {
-+ rq = task_rq(p);
-+
-+ /*
-+ * If the task is actively running on another CPU
-+ * still, just relax and busy-wait without holding
-+ * any locks.
-+ *
-+ * NOTE! Since we don't hold any locks, it's not
-+ * even sure that "rq" stays as the right runqueue!
-+ * But we don't care, since this will return false
-+ * if the runqueue has changed and p is actually now
-+ * running somewhere else!
-+ */
-+ while (task_on_cpu(p)) {
-+ if (!task_state_match(p, match_state))
-+ return 0;
-+ cpu_relax();
-+ }
-+
-+ /*
-+ * Ok, time to look more closely! We need the rq
-+ * lock now, to be *sure*. If we're wrong, we'll
-+ * just go back and repeat.
-+ */
-+ task_access_lock_irqsave(p, &lock, &flags);
-+ trace_sched_wait_task(p);
-+ running = task_on_cpu(p);
-+ queued = p->on_rq;
-+ ncsw = 0;
-+ if ((match = __task_state_match(p, match_state))) {
-+ /*
-+ * When matching on p->saved_state, consider this task
-+ * still queued so it will wait.
-+ */
-+ if (match < 0)
-+ queued = 1;
-+ ncsw = p->nvcsw | LONG_MIN; /* sets MSB */
-+ }
-+ task_access_unlock_irqrestore(p, lock, &flags);
-+
-+ /*
-+ * If it changed from the expected state, bail out now.
-+ */
-+ if (unlikely(!ncsw))
-+ break;
-+
-+ /*
-+ * Was it really running after all now that we
-+ * checked with the proper locks actually held?
-+ *
-+ * Oops. Go back and try again..
-+ */
-+ if (unlikely(running)) {
-+ cpu_relax();
-+ continue;
-+ }
-+
-+ /*
-+ * It's not enough that it's not actively running,
-+ * it must be off the runqueue _entirely_, and not
-+ * preempted!
-+ *
-+ * So if it was still runnable (but just not actively
-+ * running right now), it's preempted, and we should
-+ * yield - it could be a while.
-+ */
-+ if (unlikely(queued)) {
-+ ktime_t to = NSEC_PER_SEC / HZ;
-+
-+ set_current_state(TASK_UNINTERRUPTIBLE);
-+ schedule_hrtimeout(&to, HRTIMER_MODE_REL_HARD);
-+ continue;
-+ }
-+
-+ /*
-+ * Ahh, all good. It wasn't running, and it wasn't
-+ * runnable, which means that it will never become
-+ * running in the future either. We're all done!
-+ */
-+ break;
-+ }
-+
-+ return ncsw;
-+}
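-+
-+/*
-+ * Editorial sketch of the "call it twice" idiom described above (not
-+ * part of the original patch):
-+ *
-+ *	unsigned long ncsw = wait_task_inactive(p, TASK_UNINTERRUPTIBLE);
-+ *
-+ *	... inspect the now-inactive @p ...
-+ *
-+ *	if (wait_task_inactive(p, TASK_UNINTERRUPTIBLE) != ncsw)
-+ *		goto again;	-- @p ran in between, sampled state is stale
-+ */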
-+
-+#ifdef CONFIG_SCHED_HRTICK
-+/*
-+ * Use HR-timers to deliver accurate preemption points.
-+ */
-+
-+static void hrtick_clear(struct rq *rq)
-+{
-+ if (hrtimer_active(&rq->hrtick_timer))
-+ hrtimer_cancel(&rq->hrtick_timer);
-+}
-+
-+/*
-+ * High-resolution timer tick.
-+ * Runs from hardirq context with interrupts disabled.
-+ */
-+static enum hrtimer_restart hrtick(struct hrtimer *timer)
-+{
-+ struct rq *rq = container_of(timer, struct rq, hrtick_timer);
-+
-+ WARN_ON_ONCE(cpu_of(rq) != smp_processor_id());
-+
-+ raw_spin_lock(&rq->lock);
-+ resched_curr(rq);
-+ raw_spin_unlock(&rq->lock);
-+
-+ return HRTIMER_NORESTART;
-+}
-+
-+/*
-+ * Use hrtick when:
-+ * - enabled by features
-+ * - hrtimer is actually high res
-+ */
-+static inline int hrtick_enabled(struct rq *rq)
-+{
-+ /*
-+ * Alt schedule FW doesn't support sched_feat yet
-+ if (!sched_feat(HRTICK))
-+ return 0;
-+ */
-+ if (!cpu_active(cpu_of(rq)))
-+ return 0;
-+ return hrtimer_is_hres_active(&rq->hrtick_timer);
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+static void __hrtick_restart(struct rq *rq)
-+{
-+ struct hrtimer *timer = &rq->hrtick_timer;
-+ ktime_t time = rq->hrtick_time;
-+
-+ hrtimer_start(timer, time, HRTIMER_MODE_ABS_PINNED_HARD);
-+}
-+
-+/*
-+ * called from hardirq (IPI) context
-+ */
-+static void __hrtick_start(void *arg)
-+{
-+ struct rq *rq = arg;
-+
-+ raw_spin_lock(&rq->lock);
-+ __hrtick_restart(rq);
-+ raw_spin_unlock(&rq->lock);
-+}
-+
-+/*
-+ * Called to set the hrtick timer state.
-+ *
-+ * called with rq->lock held and IRQs disabled
-+ */
-+static inline void hrtick_start(struct rq *rq, u64 delay)
-+{
-+ struct hrtimer *timer = &rq->hrtick_timer;
-+ s64 delta;
-+
-+ /*
-+ * Don't schedule slices shorter than 10000ns, that just
-+ * doesn't make sense and can cause timer DoS.
-+ */
-+ delta = max_t(s64, delay, 10000LL);
-+
-+ rq->hrtick_time = ktime_add_ns(timer->base->get_time(), delta);
-+
-+ if (rq == this_rq())
-+ __hrtick_restart(rq);
-+ else
-+ smp_call_function_single_async(cpu_of(rq), &rq->hrtick_csd);
-+}
-+
-+#else
-+/*
-+ * Called to set the hrtick timer state.
-+ *
-+ * called with rq->lock held and IRQs disabled
-+ */
-+static inline void hrtick_start(struct rq *rq, u64 delay)
-+{
-+ /*
-+ * Don't schedule slices shorter than 10000ns, that just
-+ * doesn't make sense. Rely on vruntime for fairness.
-+ */
-+ delay = max_t(u64, delay, 10000LL);
-+ hrtimer_start(&rq->hrtick_timer, ns_to_ktime(delay),
-+ HRTIMER_MODE_REL_PINNED_HARD);
-+}
-+#endif /* CONFIG_SMP */
-+
-+static void hrtick_rq_init(struct rq *rq)
-+{
-+#ifdef CONFIG_SMP
-+ INIT_CSD(&rq->hrtick_csd, __hrtick_start, rq);
-+#endif
-+
-+ hrtimer_init(&rq->hrtick_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
-+ rq->hrtick_timer.function = hrtick;
-+}
-+#else /* CONFIG_SCHED_HRTICK */
-+static inline int hrtick_enabled(struct rq *rq)
-+{
-+ return 0;
-+}
-+
-+static inline void hrtick_clear(struct rq *rq)
-+{
-+}
-+
-+static inline void hrtick_rq_init(struct rq *rq)
-+{
-+}
-+#endif /* CONFIG_SCHED_HRTICK */
-+
-+/*
-+ * activate_task - move a task to the runqueue.
-+ *
-+ * Context: rq->lock
-+ */
-+static void activate_task(struct task_struct *p, struct rq *rq)
-+{
-+ enqueue_task(p, rq, ENQUEUE_WAKEUP);
-+
-+ WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
-+ ASSERT_EXCLUSIVE_WRITER(p->on_rq);
-+
-+ /*
-+ * If in_iowait is set, the code below may not trigger any cpufreq
-+ * utilization updates, so do it here explicitly with the IOWAIT flag
-+ * passed.
-+ */
-+ cpufreq_update_util(rq, SCHED_CPUFREQ_IOWAIT * p->in_iowait);
-+}
-+
-+static void block_task(struct rq *rq, struct task_struct *p)
-+{
-+ dequeue_task(p, rq, DEQUEUE_SLEEP);
-+
-+ WRITE_ONCE(p->on_rq, 0);
-+ ASSERT_EXCLUSIVE_WRITER(p->on_rq);
-+ if (p->sched_contributes_to_load)
-+ rq->nr_uninterruptible++;
-+
-+ if (p->in_iowait) {
-+ atomic_inc(&rq->nr_iowait);
-+ delayacct_blkio_start();
-+ }
-+}
-+
-+static inline void __set_task_cpu(struct task_struct *p, unsigned int cpu)
-+{
-+#ifdef CONFIG_SMP
-+ /*
-+ * After ->cpu is set up to a new value, task_access_lock(p, ...) can be
-+ * successfully executed on another CPU. We must ensure that updates of
-+ * per-task data have been completed by this moment.
-+ */
-+ smp_wmb();
-+
-+ WRITE_ONCE(task_thread_info(p)->cpu, cpu);
-+#endif
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
-+{
-+#ifdef CONFIG_SCHED_DEBUG
-+ unsigned int state = READ_ONCE(p->__state);
-+
-+ /*
-+ * We should never call set_task_cpu() on a blocked task,
-+ * ttwu() will sort out the placement.
-+ */
-+ WARN_ON_ONCE(state != TASK_RUNNING && state != TASK_WAKING && !p->on_rq);
-+
-+#ifdef CONFIG_LOCKDEP
-+ /*
-+ * The caller should hold either p->pi_lock or rq->lock, when changing
-+ * a task's CPU. ->pi_lock for waking tasks, rq->lock for runnable tasks.
-+ *
-+ * sched_move_task() holds both and thus holding either pins the cgroup,
-+ * see task_group().
-+ */
-+ WARN_ON_ONCE(debug_locks && !(lockdep_is_held(&p->pi_lock) ||
-+ lockdep_is_held(&task_rq(p)->lock)));
-+#endif
-+ /*
-+ * Clearly, migrating tasks to offline CPUs is a fairly daft thing.
-+ */
-+ WARN_ON_ONCE(!cpu_online(new_cpu));
-+
-+ WARN_ON_ONCE(is_migration_disabled(p));
-+#endif
-+ trace_sched_migrate_task(p, new_cpu);
-+
-+ if (task_cpu(p) != new_cpu) {
-+ rseq_migrate(p);
-+ sched_mm_cid_migrate_from(p);
-+ perf_event_task_migrate(p);
-+ }
-+
-+ __set_task_cpu(p, new_cpu);
-+}
-+
-+static void
-+__do_set_cpus_ptr(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+ /*
-+ * This here violates the locking rules for affinity, since we're only
-+ * supposed to change these variables while holding both rq->lock and
-+ * p->pi_lock.
-+ *
-+ * HOWEVER, it magically works, because ttwu() is the only code that
-+ * accesses these variables under p->pi_lock and only does so after
-+ * smp_cond_load_acquire(&p->on_cpu, !VAL), and we're in __schedule()
-+ * before finish_task().
-+ *
-+ * XXX do further audits, this smells like something putrid.
-+ */
-+ SCHED_WARN_ON(!p->on_cpu);
-+ p->cpus_ptr = new_mask;
-+}
-+
-+void migrate_disable(void)
-+{
-+ struct task_struct *p = current;
-+ int cpu;
-+
-+ if (p->migration_disabled) {
-+#ifdef CONFIG_DEBUG_PREEMPT
-+ /*
-+ * Warn about overflow half-way through the range.
-+ */
-+ WARN_ON_ONCE((s16)p->migration_disabled < 0);
-+#endif
-+ p->migration_disabled++;
-+ return;
-+ }
-+
-+ guard(preempt)();
-+ cpu = smp_processor_id();
-+ if (cpumask_test_cpu(cpu, &p->cpus_mask)) {
-+ cpu_rq(cpu)->nr_pinned++;
-+ p->migration_disabled = 1;
-+ /*
-+ * Violates locking rules! see comment in __do_set_cpus_ptr().
-+ */
-+ if (p->cpus_ptr == &p->cpus_mask)
-+ __do_set_cpus_ptr(p, cpumask_of(cpu));
-+ }
-+}
-+EXPORT_SYMBOL_GPL(migrate_disable);
-+
-+void migrate_enable(void)
-+{
-+ struct task_struct *p = current;
-+
-+#ifdef CONFIG_DEBUG_PREEMPT
-+ /*
-+ * Check both overflow from migrate_disable() and superfluous
-+ * migrate_enable().
-+ */
-+ if (WARN_ON_ONCE((s16)p->migration_disabled <= 0))
-+ return;
-+#endif
-+
-+ if (p->migration_disabled > 1) {
-+ p->migration_disabled--;
-+ return;
-+ }
-+
-+ /*
-+ * Ensure stop_task runs either before or after this, and that
-+ * __set_cpus_allowed_ptr(SCA_MIGRATE_ENABLE) doesn't schedule().
-+ */
-+ guard(preempt)();
-+ /*
-+ * Assumption: current should be running on allowed cpu
-+ */
-+ WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &p->cpus_mask));
-+ if (p->cpus_ptr != &p->cpus_mask)
-+ __do_set_cpus_ptr(p, &p->cpus_mask);
-+ /*
-+ * Mustn't clear migration_disabled() until cpus_ptr points back at the
-+ * regular cpus_mask, otherwise things that race (eg.
-+ * select_fallback_rq) get confused.
-+ */
-+ barrier();
-+ p->migration_disabled = 0;
-+ this_rq()->nr_pinned--;
-+}
-+EXPORT_SYMBOL_GPL(migrate_enable);
-+
-+static void __migrate_force_enable(struct task_struct *p, struct rq *rq)
-+{
-+ if (likely(p->cpus_ptr != &p->cpus_mask))
-+ __do_set_cpus_ptr(p, &p->cpus_mask);
-+ p->migration_disabled = 0;
-+ /* When p is migrate_disabled, rq->lock should be held */
-+ rq->nr_pinned--;
-+}
-+
-+static inline bool rq_has_pinned_tasks(struct rq *rq)
-+{
-+ return rq->nr_pinned;
-+}
-+
-+/*
-+ * Per-CPU kthreads are allowed to run on !active && online CPUs, see
-+ * __set_cpus_allowed_ptr() and select_fallback_rq().
-+ */
-+static inline bool is_cpu_allowed(struct task_struct *p, int cpu)
-+{
-+ /* When not in the task's cpumask, no point in looking further. */
-+ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
-+ return false;
-+
-+ /* migrate_disabled() must be allowed to finish. */
-+ if (is_migration_disabled(p))
-+ return cpu_online(cpu);
-+
-+ /* Non-kernel threads are not allowed during online or offline transitions. */
-+ if (!(p->flags & PF_KTHREAD))
-+ return cpu_active(cpu) && task_cpu_possible(cpu, p);
-+
-+ /* KTHREAD_IS_PER_CPU is always allowed. */
-+ if (kthread_is_per_cpu(p))
-+ return cpu_online(cpu);
-+
-+ /* Regular kernel threads don't get to stay during offline. */
-+ if (cpu_dying(cpu))
-+ return false;
-+
-+ /* But are allowed during online. */
-+ return cpu_online(cpu);
-+}
-+
-+/*
-+ * This is how migration works:
-+ *
-+ * 1) we invoke migration_cpu_stop() on the target CPU using
-+ * stop_one_cpu().
-+ * 2) stopper starts to run (implicitly forcing the migrated thread
-+ * off the CPU)
-+ * 3) it checks whether the migrated task is still in the wrong runqueue.
-+ * 4) if it's in the wrong runqueue then the migration thread removes
-+ * it and puts it into the right queue.
-+ * 5) stopper completes and stop_one_cpu() returns and the migration
-+ * is done.
-+ */
-+
-+/*
-+ * move_queued_task - move a queued task to new rq.
-+ *
-+ * Returns (locked) new rq. Old rq's lock is released.
-+ */
-+struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu)
-+{
-+ lockdep_assert_held(&rq->lock);
-+
-+ WRITE_ONCE(p->on_rq, TASK_ON_RQ_MIGRATING);
-+ dequeue_task(p, rq, 0);
-+ set_task_cpu(p, new_cpu);
-+ raw_spin_unlock(&rq->lock);
-+
-+ rq = cpu_rq(new_cpu);
-+
-+ raw_spin_lock(&rq->lock);
-+ WARN_ON_ONCE(task_cpu(p) != new_cpu);
-+
-+ sched_mm_cid_migrate_to(rq, p);
-+
-+ sched_task_sanity_check(p, rq);
-+ enqueue_task(p, rq, 0);
-+ WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
-+ wakeup_preempt(rq);
-+
-+ return rq;
-+}
-+
-+struct migration_arg {
-+ struct task_struct *task;
-+ int dest_cpu;
-+};
-+
-+/*
-+ * Move (not current) task off this CPU, onto the destination CPU. We're doing
-+ * this because either it can't run here any more (set_cpus_allowed()
-+ * away from this CPU, or CPU going down), or because we're
-+ * attempting to rebalance this task on exec (sched_exec).
-+ *
-+ * So we race with normal scheduler movements, but that's OK, as long
-+ * as the task is no longer on this CPU.
-+ */
-+static struct rq *__migrate_task(struct rq *rq, struct task_struct *p, int dest_cpu)
-+{
-+ /* Affinity changed (again). */
-+ if (!is_cpu_allowed(p, dest_cpu))
-+ return rq;
-+
-+ return move_queued_task(rq, p, dest_cpu);
-+}
-+
-+/*
-+ * migration_cpu_stop - this will be executed by a high-prio stopper thread
-+ * and performs thread migration by bumping thread off CPU then
-+ * 'pushing' onto another runqueue.
-+ */
-+static int migration_cpu_stop(void *data)
-+{
-+ struct migration_arg *arg = data;
-+ struct task_struct *p = arg->task;
-+ struct rq *rq = this_rq();
-+ unsigned long flags;
-+
-+ /*
-+ * The original target CPU might have gone down and we might
-+ * be on another CPU but it doesn't matter.
-+ */
-+ local_irq_save(flags);
-+ /*
-+ * We need to explicitly wake pending tasks before running
-+ * __migrate_task() such that we will not miss enforcing cpus_ptr
-+ * during wakeups, see set_cpus_allowed_ptr()'s TASK_WAKING test.
-+ */
-+ flush_smp_call_function_queue();
-+
-+ raw_spin_lock(&p->pi_lock);
-+ raw_spin_lock(&rq->lock);
-+ /*
-+ * If task_rq(p) != rq, it cannot be migrated here, because we're
-+ * holding rq->lock, if p->on_rq == 0 it cannot get enqueued because
-+ * we're holding p->pi_lock.
-+ */
-+ if (task_rq(p) == rq && task_on_rq_queued(p)) {
-+ update_rq_clock(rq);
-+ rq = __migrate_task(rq, p, arg->dest_cpu);
-+ }
-+ raw_spin_unlock(&rq->lock);
-+ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+ return 0;
-+}
-+
-+static inline void
-+set_cpus_allowed_common(struct task_struct *p, struct affinity_context *ctx)
-+{
-+ cpumask_copy(&p->cpus_mask, ctx->new_mask);
-+ p->nr_cpus_allowed = cpumask_weight(ctx->new_mask);
-+
-+ /*
-+ * Swap in a new user_cpus_ptr if SCA_USER flag set
-+ */
-+ if (ctx->flags & SCA_USER)
-+ swap(p->user_cpus_ptr, ctx->user_mask);
-+}
-+
-+static void
-+__do_set_cpus_allowed(struct task_struct *p, struct affinity_context *ctx)
-+{
-+ lockdep_assert_held(&p->pi_lock);
-+ set_cpus_allowed_common(p, ctx);
-+}
-+
-+/*
-+ * Used for kthread_bind() and select_fallback_rq(), in both cases the user
-+ * affinity (if any) should be destroyed too.
-+ */
-+void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+ struct affinity_context ac = {
-+ .new_mask = new_mask,
-+ .user_mask = NULL,
-+ .flags = SCA_USER, /* clear the user requested mask */
-+ };
-+ union cpumask_rcuhead {
-+ cpumask_t cpumask;
-+ struct rcu_head rcu;
-+ };
-+
-+ __do_set_cpus_allowed(p, &ac);
-+
-+ if (is_migration_disabled(p) && !cpumask_test_cpu(task_cpu(p), &p->cpus_mask))
-+ __migrate_force_enable(p, task_rq(p));
-+
-+ /*
-+ * Because this is called with p->pi_lock held, it is not possible
-+ * to use kfree() here (when PREEMPT_RT=y), therefore punt to using
-+ * kfree_rcu().
-+ */
-+ kfree_rcu((union cpumask_rcuhead *)ac.user_mask, rcu);
-+}
-+
-+int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
-+ int node)
-+{
-+ cpumask_t *user_mask;
-+ unsigned long flags;
-+
-+ /*
-+ * Always clear dst->user_cpus_ptr first as their user_cpus_ptr's
-+ * may differ by now due to racing.
-+ */
-+ dst->user_cpus_ptr = NULL;
-+
-+ /*
-+ * This check is racy and losing the race is a valid situation.
-+ * It is not worth the extra overhead of taking the pi_lock on
-+ * every fork/clone.
-+ */
-+ if (data_race(!src->user_cpus_ptr))
-+ return 0;
-+
-+ user_mask = alloc_user_cpus_ptr(node);
-+ if (!user_mask)
-+ return -ENOMEM;
-+
-+ /*
-+ * Use pi_lock to protect content of user_cpus_ptr
-+ *
-+ * Though unlikely, user_cpus_ptr can be reset to NULL by a concurrent
-+ * do_set_cpus_allowed().
-+ */
-+ raw_spin_lock_irqsave(&src->pi_lock, flags);
-+ if (src->user_cpus_ptr) {
-+ swap(dst->user_cpus_ptr, user_mask);
-+ cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
-+ }
-+ raw_spin_unlock_irqrestore(&src->pi_lock, flags);
-+
-+ if (unlikely(user_mask))
-+ kfree(user_mask);
-+
-+ return 0;
-+}
-+
-+static inline struct cpumask *clear_user_cpus_ptr(struct task_struct *p)
-+{
-+ struct cpumask *user_mask = NULL;
-+
-+ swap(p->user_cpus_ptr, user_mask);
-+
-+ return user_mask;
-+}
-+
-+void release_user_cpus_ptr(struct task_struct *p)
-+{
-+ kfree(clear_user_cpus_ptr(p));
-+}
-+
-+#endif
-+
-+/**
-+ * task_curr - is this task currently executing on a CPU?
-+ * @p: the task in question.
-+ *
-+ * Return: 1 if the task is currently executing. 0 otherwise.
-+ */
-+inline int task_curr(const struct task_struct *p)
-+{
-+ return cpu_curr(task_cpu(p)) == p;
-+}
-+
-+#ifdef CONFIG_SMP
-+/***
-+ * kick_process - kick a running thread to enter/exit the kernel
-+ * @p: the to-be-kicked thread
-+ *
-+ * Cause a process which is running on another CPU to enter
-+ * kernel-mode, without any delay. (to get signals handled.)
-+ *
-+ * NOTE: this function doesn't have to take the runqueue lock,
-+ * because all it wants to ensure is that the remote task enters
-+ * the kernel. If the IPI races and the task has been migrated
-+ * to another CPU then no harm is done and the purpose has been
-+ * achieved as well.
-+ */
-+void kick_process(struct task_struct *p)
-+{
-+ guard(preempt)();
-+ int cpu = task_cpu(p);
-+
-+ if ((cpu != smp_processor_id()) && task_curr(p))
-+ smp_send_reschedule(cpu);
-+}
-+EXPORT_SYMBOL_GPL(kick_process);
-+
-+/*
-+ * ->cpus_ptr is protected by both rq->lock and p->pi_lock
-+ *
-+ * A few notes on cpu_active vs cpu_online:
-+ *
-+ * - cpu_active must be a subset of cpu_online
-+ *
-+ * - on CPU-up we allow per-CPU kthreads on the online && !active CPU,
-+ * see __set_cpus_allowed_ptr(). At this point the newly online
-+ * CPU isn't yet part of the sched domains, and balancing will not
-+ * see it.
-+ *
-+ * - on cpu-down we clear cpu_active() to mask the sched domains and
-+ *   keep the load balancer from placing new tasks on the to-be-removed
-+ * CPU. Existing tasks will remain running there and will be taken
-+ * off.
-+ *
-+ * This means that fallback selection must not select !active CPUs.
-+ * And can assume that any active CPU must be online. Conversely
-+ * select_task_rq() below may allow selection of !active CPUs in order
-+ * to satisfy the above rules.
-+ */
-+static int select_fallback_rq(int cpu, struct task_struct *p)
-+{
-+ int nid = cpu_to_node(cpu);
-+ const struct cpumask *nodemask = NULL;
-+ enum { cpuset, possible, fail } state = cpuset;
-+ int dest_cpu;
-+
-+ /*
-+ * If the node that the CPU is on has been offlined, cpu_to_node()
-+ * will return -1. There is no CPU on the node, and we should
-+ * select the CPU on the other node.
-+ */
-+ if (nid != -1) {
-+ nodemask = cpumask_of_node(nid);
-+
-+ /* Look for allowed, online CPU in same node. */
-+ for_each_cpu(dest_cpu, nodemask) {
-+ if (is_cpu_allowed(p, dest_cpu))
-+ return dest_cpu;
-+ }
-+ }
-+
-+ for (;;) {
-+ /* Any allowed, online CPU? */
-+ for_each_cpu(dest_cpu, p->cpus_ptr) {
-+ if (!is_cpu_allowed(p, dest_cpu))
-+ continue;
-+ goto out;
-+ }
-+
-+ /* No more Mr. Nice Guy. */
-+ switch (state) {
-+ case cpuset:
-+ if (cpuset_cpus_allowed_fallback(p)) {
-+ state = possible;
-+ break;
-+ }
-+ fallthrough;
-+ case possible:
-+ /*
-+ * XXX When called from select_task_rq() we only
-+ * hold p->pi_lock and again violate locking order.
-+ *
-+ * More yuck to audit.
-+ */
-+ do_set_cpus_allowed(p, task_cpu_possible_mask(p));
-+ state = fail;
-+ break;
-+
-+ case fail:
-+ BUG();
-+ break;
-+ }
-+ }
-+
-+out:
-+ if (state != cpuset) {
-+ /*
-+ * Don't tell them about moving exiting tasks or
-+ * kernel threads (both mm NULL), since they never
-+ * leave kernel.
-+ */
-+ if (p->mm && printk_ratelimit()) {
-+ printk_deferred("process %d (%s) no longer affine to cpu%d\n",
-+ task_pid_nr(p), p->comm, cpu);
-+ }
-+ }
-+
-+ return dest_cpu;
-+}
-+
-+static inline void
-+sched_preempt_mask_flush(cpumask_t *mask, int prio, int ref)
-+{
-+ int cpu;
-+
-+ cpumask_copy(mask, sched_preempt_mask + ref);
-+ if (prio < ref) {
-+ for_each_clear_bit(cpu, cpumask_bits(mask), nr_cpumask_bits) {
-+ if (prio < cpu_rq(cpu)->prio)
-+ cpumask_set_cpu(cpu, mask);
-+ }
-+ } else {
-+ for_each_cpu_andnot(cpu, mask, sched_idle_mask) {
-+ if (prio >= cpu_rq(cpu)->prio)
-+ cpumask_clear_cpu(cpu, mask);
-+ }
-+ }
-+}
-+
-+static inline int
-+preempt_mask_check(cpumask_t *preempt_mask, cpumask_t *allow_mask, int prio)
-+{
-+ cpumask_t *mask = sched_preempt_mask + prio;
-+ int pr = atomic_read(&sched_prio_record);
-+
-+ if (pr != prio && SCHED_QUEUE_BITS - 1 != prio) {
-+ sched_preempt_mask_flush(mask, prio, pr);
-+ atomic_set(&sched_prio_record, prio);
-+ }
-+
-+ return cpumask_and(preempt_mask, allow_mask, mask);
-+}
-+
-+__read_mostly idle_select_func_t idle_select_func ____cacheline_aligned_in_smp = cpumask_and;
-+
-+static inline int select_task_rq(struct task_struct *p)
-+{
-+ cpumask_t allow_mask, mask;
-+
-+ if (unlikely(!cpumask_and(&allow_mask, p->cpus_ptr, cpu_active_mask)))
-+ return select_fallback_rq(task_cpu(p), p);
-+
-+ if (idle_select_func(&mask, &allow_mask, sched_idle_mask) ||
-+ preempt_mask_check(&mask, &allow_mask, task_sched_prio(p)))
-+ return best_mask_cpu(task_cpu(p), &mask);
-+
-+ return best_mask_cpu(task_cpu(p), &allow_mask);
-+}
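-+
-+/*
-+ * Editorial note (not part of the original patch): placement above is a
-+ * three-stage cascade -- prefer an allowed idle CPU, then an allowed CPU
-+ * currently running lower-priority work than @p (the preempt mask), and
-+ * finally any allowed CPU; each stage picks the topologically closest
-+ * candidate via best_mask_cpu().
-+ */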
-+
-+void sched_set_stop_task(int cpu, struct task_struct *stop)
-+{
-+ static struct lock_class_key stop_pi_lock;
-+ struct sched_param stop_param = { .sched_priority = STOP_PRIO };
-+ struct sched_param start_param = { .sched_priority = 0 };
-+ struct task_struct *old_stop = cpu_rq(cpu)->stop;
-+
-+ if (stop) {
-+ /*
-+ * Make it appear like a SCHED_FIFO task; it's something
-+ * userspace knows about and won't get confused about.
-+ *
-+ * Also, it will make PI more or less work without too
-+ * much confusion -- but then, stop work should not
-+ * rely on PI working anyway.
-+ */
-+ sched_setscheduler_nocheck(stop, SCHED_FIFO, &stop_param);
-+
-+ /*
-+ * The PI code calls rt_mutex_setprio() with ->pi_lock held to
-+ * adjust the effective priority of a task. As a result,
-+ * rt_mutex_setprio() can trigger (RT) balancing operations,
-+ * which can then trigger wakeups of the stop thread to push
-+ * around the current task.
-+ *
-+ * The stop task itself will never be part of the PI-chain, it
-+ * never blocks, therefore that ->pi_lock recursion is safe.
-+ * Tell lockdep about this by placing the stop->pi_lock in its
-+ * own class.
-+ */
-+ lockdep_set_class(&stop->pi_lock, &stop_pi_lock);
-+ }
-+
-+ cpu_rq(cpu)->stop = stop;
-+
-+ if (old_stop) {
-+ /*
-+ * Reset it back to a normal scheduling policy so that
-+ * it can die in pieces.
-+ */
-+ sched_setscheduler_nocheck(old_stop, SCHED_NORMAL, &start_param);
-+ }
-+}
-+
-+static int affine_move_task(struct rq *rq, struct task_struct *p, int dest_cpu,
-+ raw_spinlock_t *lock, unsigned long irq_flags)
-+ __releases(rq->lock)
-+ __releases(p->pi_lock)
-+{
-+ /* Can the task run on the task's current CPU? If so, we're done */
-+ if (!cpumask_test_cpu(task_cpu(p), &p->cpus_mask)) {
-+ if (is_migration_disabled(p))
-+ __migrate_force_enable(p, rq);
-+
-+ if (task_on_cpu(p) || READ_ONCE(p->__state) == TASK_WAKING) {
-+ struct migration_arg arg = { p, dest_cpu };
-+
-+ /* Need help from migration thread: drop lock and wait. */
-+ __task_access_unlock(p, lock);
-+ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+ stop_one_cpu(cpu_of(rq), migration_cpu_stop, &arg);
-+ return 0;
-+ }
-+ if (task_on_rq_queued(p)) {
-+ /*
-+ * OK, since we're going to drop the lock immediately
-+ * afterwards anyway.
-+ */
-+ update_rq_clock(rq);
-+ rq = move_queued_task(rq, p, dest_cpu);
-+ lock = &rq->lock;
-+ }
-+ }
-+ __task_access_unlock(p, lock);
-+ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+ return 0;
-+}
-+
-+static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
-+ struct affinity_context *ctx,
-+ struct rq *rq,
-+ raw_spinlock_t *lock,
-+ unsigned long irq_flags)
-+{
-+ const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
-+ const struct cpumask *cpu_valid_mask = cpu_active_mask;
-+ bool kthread = p->flags & PF_KTHREAD;
-+ int dest_cpu;
-+ int ret = 0;
-+
-+ if (kthread || is_migration_disabled(p)) {
-+ /*
-+ * Kernel threads are allowed on online && !active CPUs,
-+ * however, during cpu-hot-unplug, even these might get pushed
-+ * away if not KTHREAD_IS_PER_CPU.
-+ *
-+ * Specifically, migration_disabled() tasks must not fail the
-+ * cpumask_any_and_distribute() pick below, esp. so on
-+ * SCA_MIGRATE_ENABLE, otherwise we'll not call
-+ * set_cpus_allowed_common() and actually reset p->cpus_ptr.
-+ */
-+ cpu_valid_mask = cpu_online_mask;
-+ }
-+
-+ if (!kthread && !cpumask_subset(ctx->new_mask, cpu_allowed_mask)) {
-+ ret = -EINVAL;
-+ goto out;
-+ }
-+
-+ /*
-+ * Must re-check here, to close a race against __kthread_bind(),
-+ * sched_setaffinity() is not guaranteed to observe the flag.
-+ */
-+ if ((ctx->flags & SCA_CHECK) && (p->flags & PF_NO_SETAFFINITY)) {
-+ ret = -EINVAL;
-+ goto out;
-+ }
-+
-+ if (cpumask_equal(&p->cpus_mask, ctx->new_mask))
-+ goto out;
-+
-+ dest_cpu = cpumask_any_and(cpu_valid_mask, ctx->new_mask);
-+ if (dest_cpu >= nr_cpu_ids) {
-+ ret = -EINVAL;
-+ goto out;
-+ }
-+
-+ __do_set_cpus_allowed(p, ctx);
-+
-+ return affine_move_task(rq, p, dest_cpu, lock, irq_flags);
-+
-+out:
-+ __task_access_unlock(p, lock);
-+ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+
-+ return ret;
-+}
-+
-+/*
-+ * Change a given task's CPU affinity. Migrate the thread to a
-+ * proper CPU and schedule it away if the CPU it's executing on
-+ * is removed from the allowed bitmask.
-+ *
-+ * NOTE: the caller must have a valid reference to the task, the
-+ * task must not exit() & deallocate itself prematurely. The
-+ * call is not atomic; no spinlocks may be held.
-+ */
-+int __set_cpus_allowed_ptr(struct task_struct *p,
-+ struct affinity_context *ctx)
-+{
-+ unsigned long irq_flags;
-+ struct rq *rq;
-+ raw_spinlock_t *lock;
-+
-+ raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
-+ rq = __task_access_lock(p, &lock);
-+ /*
-+ * Masking should be skipped if SCA_USER or any of the SCA_MIGRATE_*
-+ * flags are set.
-+ */
-+ if (p->user_cpus_ptr &&
-+ !(ctx->flags & SCA_USER) &&
-+ cpumask_and(rq->scratch_mask, ctx->new_mask, p->user_cpus_ptr))
-+ ctx->new_mask = rq->scratch_mask;
-+
-+ return __set_cpus_allowed_ptr_locked(p, ctx, rq, lock, irq_flags);
-+}
-+
-+int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
-+{
-+ struct affinity_context ac = {
-+ .new_mask = new_mask,
-+ .flags = 0,
-+ };
-+
-+ return __set_cpus_allowed_ptr(p, &ac);
-+}
-+EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
-+
-+/*
-+ * Change a given task's CPU affinity to the intersection of its current
-+ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
-+ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
-+ * affinity or use cpu_online_mask instead.
-+ *
-+ * If the resulting mask is empty, leave the affinity unchanged and return
-+ * -EINVAL.
-+ */
-+static int restrict_cpus_allowed_ptr(struct task_struct *p,
-+ struct cpumask *new_mask,
-+ const struct cpumask *subset_mask)
-+{
-+ struct affinity_context ac = {
-+ .new_mask = new_mask,
-+ .flags = 0,
-+ };
-+ unsigned long irq_flags;
-+ raw_spinlock_t *lock;
-+ struct rq *rq;
-+ int err;
-+
-+ raw_spin_lock_irqsave(&p->pi_lock, irq_flags);
-+ rq = __task_access_lock(p, &lock);
-+
-+ if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
-+ err = -EINVAL;
-+ goto err_unlock;
-+ }
-+
-+ return __set_cpus_allowed_ptr_locked(p, &ac, rq, lock, irq_flags);
-+
-+err_unlock:
-+ __task_access_unlock(p, lock);
-+ raw_spin_unlock_irqrestore(&p->pi_lock, irq_flags);
-+ return err;
-+}
-+
-+/*
-+ * Restrict the CPU affinity of task @p so that it is a subset of
-+ * task_cpu_possible_mask() and point @p->user_cpus_ptr to a copy of the
-+ * old affinity mask. If the resulting mask is empty, we warn and walk
-+ * up the cpuset hierarchy until we find a suitable mask.
-+ */
-+void force_compatible_cpus_allowed_ptr(struct task_struct *p)
-+{
-+ cpumask_var_t new_mask;
-+ const struct cpumask *override_mask = task_cpu_possible_mask(p);
-+
-+ alloc_cpumask_var(&new_mask, GFP_KERNEL);
-+
-+ /*
-+ * __migrate_task() can fail silently in the face of concurrent
-+ * offlining of the chosen destination CPU, so take the hotplug
-+ * lock to ensure that the migration succeeds.
-+ */
-+ cpus_read_lock();
-+ if (!cpumask_available(new_mask))
-+ goto out_set_mask;
-+
-+ if (!restrict_cpus_allowed_ptr(p, new_mask, override_mask))
-+ goto out_free_mask;
-+
-+ /*
-+ * We failed to find a valid subset of the affinity mask for the
-+ * task, so override it based on its cpuset hierarchy.
-+ */
-+ cpuset_cpus_allowed(p, new_mask);
-+ override_mask = new_mask;
-+
-+out_set_mask:
-+ if (printk_ratelimit()) {
-+ printk_deferred("Overriding affinity for process %d (%s) to CPUs %*pbl\n",
-+ task_pid_nr(p), p->comm,
-+ cpumask_pr_args(override_mask));
-+ }
-+
-+ WARN_ON(set_cpus_allowed_ptr(p, override_mask));
-+out_free_mask:
-+ cpus_read_unlock();
-+ free_cpumask_var(new_mask);
-+}
-+
-+/*
-+ * Restore the affinity of a task @p which was previously restricted by a
-+ * call to force_compatible_cpus_allowed_ptr().
-+ *
-+ * It is the caller's responsibility to serialise this with any calls to
-+ * force_compatible_cpus_allowed_ptr(@p).
-+ */
-+void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
-+{
-+ struct affinity_context ac = {
-+ .new_mask = task_user_cpus(p),
-+ .flags = 0,
-+ };
-+ int ret;
-+
-+ /*
-+ * Try to restore the old affinity mask with __sched_setaffinity().
-+ * Cpuset masking will be done there too.
-+ */
-+ ret = __sched_setaffinity(p, &ac);
-+ WARN_ON_ONCE(ret);
-+}
-+
-+#else /* CONFIG_SMP */
-+
-+static inline int select_task_rq(struct task_struct *p)
-+{
-+ return 0;
-+}
-+
-+static inline bool rq_has_pinned_tasks(struct rq *rq)
-+{
-+ return false;
-+}
-+
-+#endif /* !CONFIG_SMP */
-+
-+static void
-+ttwu_stat(struct task_struct *p, int cpu, int wake_flags)
-+{
-+ struct rq *rq;
-+
-+ if (!schedstat_enabled())
-+ return;
-+
-+ rq = this_rq();
-+
-+#ifdef CONFIG_SMP
-+ if (cpu == rq->cpu) {
-+ __schedstat_inc(rq->ttwu_local);
-+ __schedstat_inc(p->stats.nr_wakeups_local);
-+ } else {
-+ /* Alt schedule FW ToDo:
-+ * How to do ttwu_wake_remote
-+ */
-+ }
-+#endif /* CONFIG_SMP */
-+
-+ __schedstat_inc(rq->ttwu_count);
-+ __schedstat_inc(p->stats.nr_wakeups);
-+}
-+
-+/*
-+ * Mark the task runnable.
-+ */
-+static inline void ttwu_do_wakeup(struct task_struct *p)
-+{
-+ WRITE_ONCE(p->__state, TASK_RUNNING);
-+ trace_sched_wakeup(p);
-+}
-+
-+static inline void
-+ttwu_do_activate(struct rq *rq, struct task_struct *p, int wake_flags)
-+{
-+ if (p->sched_contributes_to_load)
-+ rq->nr_uninterruptible--;
-+
-+ if (
-+#ifdef CONFIG_SMP
-+ !(wake_flags & WF_MIGRATED) &&
-+#endif
-+ p->in_iowait) {
-+ delayacct_blkio_end(p);
-+ atomic_dec(&task_rq(p)->nr_iowait);
-+ }
-+
-+ activate_task(p, rq);
-+ wakeup_preempt(rq);
-+
-+ ttwu_do_wakeup(p);
-+}
-+
-+/*
-+ * Consider @p being inside a wait loop:
-+ *
-+ * for (;;) {
-+ * set_current_state(TASK_UNINTERRUPTIBLE);
-+ *
-+ * if (CONDITION)
-+ * break;
-+ *
-+ * schedule();
-+ * }
-+ * __set_current_state(TASK_RUNNING);
-+ *
-+ * between set_current_state() and schedule(). In this case @p is still
-+ * runnable, so all that needs doing is change p->state back to TASK_RUNNING in
-+ * an atomic manner.
-+ *
-+ * By taking task_rq(p)->lock we serialize against schedule(), if @p->on_rq
-+ * then schedule() must still happen and p->state can be changed to
-+ * TASK_RUNNING. Otherwise we lost the race, schedule() has happened, and we
-+ * need to do a full wakeup with enqueue.
-+ *
-+ * Returns: %true when the wakeup is done,
-+ * %false otherwise.
-+ */
-+static int ttwu_runnable(struct task_struct *p, int wake_flags)
-+{
-+ struct rq *rq;
-+ raw_spinlock_t *lock;
-+ int ret = 0;
-+
-+ rq = __task_access_lock(p, &lock);
-+ if (task_on_rq_queued(p)) {
-+ if (!task_on_cpu(p)) {
-+ /*
-+ * When on_rq && !on_cpu the task is preempted, see if
-+ * it should preempt the task that is current now.
-+ */
-+ update_rq_clock(rq);
-+ wakeup_preempt(rq);
-+ }
-+ ttwu_do_wakeup(p);
-+ ret = 1;
-+ }
-+ __task_access_unlock(p, lock);
-+
-+ return ret;
-+}
-+
-+#ifdef CONFIG_SMP
-+void sched_ttwu_pending(void *arg)
-+{
-+ struct llist_node *llist = arg;
-+ struct rq *rq = this_rq();
-+ struct task_struct *p, *t;
-+ struct rq_flags rf;
-+
-+ if (!llist)
-+ return;
-+
-+ rq_lock_irqsave(rq, &rf);
-+ update_rq_clock(rq);
-+
-+ llist_for_each_entry_safe(p, t, llist, wake_entry.llist) {
-+ if (WARN_ON_ONCE(p->on_cpu))
-+ smp_cond_load_acquire(&p->on_cpu, !VAL);
-+
-+ if (WARN_ON_ONCE(task_cpu(p) != cpu_of(rq)))
-+ set_task_cpu(p, cpu_of(rq));
-+
-+ ttwu_do_activate(rq, p, p->sched_remote_wakeup ? WF_MIGRATED : 0);
-+ }
-+
-+ /*
-+ * Must be after enqueueing at least one task such that
-+ * idle_cpu() does not observe a false-negative -- if it does,
-+ * it is possible for select_idle_siblings() to stack a number
-+ * of tasks on this CPU during that window.
-+ *
-+ * It is OK to clear ttwu_pending when another task is pending.
-+ * We will receive an IPI after local IRQs are enabled and then
-+ * enqueue it. Since nr_running > 0 by then, idle_cpu() will
-+ * always get the correct result.
-+ */
-+ WRITE_ONCE(rq->ttwu_pending, 0);
-+ rq_unlock_irqrestore(rq, &rf);
-+}
-+
-+/*
-+ * Prepare the scene for sending an IPI for a remote smp_call
-+ *
-+ * Returns true if the caller can proceed with sending the IPI.
-+ * Returns false otherwise.
-+ */
-+bool call_function_single_prep_ipi(int cpu)
-+{
-+ if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
-+ trace_sched_wake_idle_without_ipi(cpu);
-+ return false;
-+ }
-+
-+ return true;
-+}
-+
-+/*
-+ * Queue a task on the target CPU's wake_list and wake the CPU via IPI if
-+ * necessary. The wakee CPU on receipt of the IPI will queue the task
-+ * via sched_ttwu_pending() for activation so the wakee incurs the cost
-+ * of the wakeup instead of the waker.
-+ */
-+static void __ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+ struct rq *rq = cpu_rq(cpu);
-+
-+ p->sched_remote_wakeup = !!(wake_flags & WF_MIGRATED);
-+
-+ WRITE_ONCE(rq->ttwu_pending, 1);
-+ __smp_call_single_queue(cpu, &p->wake_entry.llist);
-+}
-+
-+static inline bool ttwu_queue_cond(struct task_struct *p, int cpu)
-+{
-+ /*
-+ * Do not complicate things with the async wake_list while the CPU is
-+ * in hotplug state.
-+ */
-+ if (!cpu_active(cpu))
-+ return false;
-+
-+ /* Ensure the task will still be allowed to run on the CPU. */
-+ if (!cpumask_test_cpu(cpu, p->cpus_ptr))
-+ return false;
-+
-+ /*
-+ * If the CPU does not share cache, then queue the task on the
-+ * remote rq's wakelist to avoid accessing remote data.
-+ */
-+ if (!cpus_share_cache(smp_processor_id(), cpu))
-+ return true;
-+
-+ if (cpu == smp_processor_id())
-+ return false;
-+
-+ /*
-+ * If the wakee CPU is idle, or the task is descheduling and is the
-+ * only running task on the CPU, then use the wakelist to offload
-+ * the task activation to the idle (or soon-to-be-idle) CPU, as
-+ * the current CPU is likely busy. nr_running is checked to
-+ * avoid unnecessary task stacking.
-+ *
-+ * Note that we can only get here with (wakee) p->on_rq=0,
-+ * p->on_cpu can be whatever, we've done the dequeue, so
-+ * the wakee has been accounted out of ->nr_running.
-+ */
-+ if (!cpu_rq(cpu)->nr_running)
-+ return true;
-+
-+ return false;
-+}
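-+
-+/*
-+ * A summary sketch of the conditions above (the CPU names are
-+ * placeholders, not real symbols):
-+ *
-+ *     ttwu_queue_cond(p, smp_processor_id())         -> false (local CPU)
-+ *     ttwu_queue_cond(p, inactive_cpu)               -> false (!cpu_active())
-+ *     ttwu_queue_cond(p, no_shared_cache_remote_cpu) -> true
-+ *     ttwu_queue_cond(p, idle_shared_cache_cpu)      -> true  (nr_running == 0)
-+ */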
-+
-+static bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+ if (__is_defined(ALT_SCHED_TTWU_QUEUE) && ttwu_queue_cond(p, cpu)) {
-+ sched_clock_cpu(cpu); /* Sync clocks across CPUs */
-+ __ttwu_queue_wakelist(p, cpu, wake_flags);
-+ return true;
-+ }
-+
-+ return false;
-+}
-+
-+void wake_up_if_idle(int cpu)
-+{
-+ struct rq *rq = cpu_rq(cpu);
-+
-+ guard(rcu)();
-+ if (is_idle_task(rcu_dereference(rq->curr))) {
-+ guard(raw_spinlock_irqsave)(&rq->lock);
-+ if (is_idle_task(rq->curr))
-+ resched_curr(rq);
-+ }
-+}
-+
-+extern struct static_key_false sched_asym_cpucapacity;
-+
-+static __always_inline bool sched_asym_cpucap_active(void)
-+{
-+ return static_branch_unlikely(&sched_asym_cpucapacity);
-+}
-+
-+bool cpus_equal_capacity(int this_cpu, int that_cpu)
-+{
-+ if (!sched_asym_cpucap_active())
-+ return true;
-+
-+ if (this_cpu == that_cpu)
-+ return true;
-+
-+ return arch_scale_cpu_capacity(this_cpu) == arch_scale_cpu_capacity(that_cpu);
-+}
-+
-+bool cpus_share_cache(int this_cpu, int that_cpu)
-+{
-+ if (this_cpu == that_cpu)
-+ return true;
-+
-+ return per_cpu(sd_llc_id, this_cpu) == per_cpu(sd_llc_id, that_cpu);
-+}
-+#else /* !CONFIG_SMP */
-+
-+static inline bool ttwu_queue_wakelist(struct task_struct *p, int cpu, int wake_flags)
-+{
-+ return false;
-+}
-+
-+#endif /* CONFIG_SMP */
-+
-+static inline void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
-+{
-+ struct rq *rq = cpu_rq(cpu);
-+
-+ if (ttwu_queue_wakelist(p, cpu, wake_flags))
-+ return;
-+
-+ raw_spin_lock(&rq->lock);
-+ update_rq_clock(rq);
-+ ttwu_do_activate(rq, p, wake_flags);
-+ raw_spin_unlock(&rq->lock);
-+}
-+
-+/*
-+ * Invoked from try_to_wake_up() to check whether the task can be woken up.
-+ *
-+ * The caller holds p::pi_lock if p != current or has preemption
-+ * disabled when p == current.
-+ *
-+ * The rules of saved_state:
-+ *
-+ * The related locking code always holds p::pi_lock when updating
-+ * p::saved_state, which means the code is fully serialized in both cases.
-+ *
-+ * For PREEMPT_RT, the lock wait and lock wakeups happen via TASK_RTLOCK_WAIT.
-+ * No other bits set. This allows to distinguish all wakeup scenarios.
-+ *
-+ * For FREEZER, the wakeup happens via TASK_FROZEN. No other bits set. This
-+ * allows us to prevent early wakeup of tasks before they can be run on
-+ * asymmetric ISA architectures (e.g. ARMv9).
-+ */
-+static __always_inline
-+bool ttwu_state_match(struct task_struct *p, unsigned int state, int *success)
-+{
-+ int match;
-+
-+ if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
-+ WARN_ON_ONCE((state & TASK_RTLOCK_WAIT) &&
-+ state != TASK_RTLOCK_WAIT);
-+ }
-+
-+ *success = !!(match = __task_state_match(p, state));
-+
-+ /*
-+ * Saved state preserves the task state across blocking on
-+ * an RT lock or TASK_FREEZABLE tasks. If the state matches,
-+ * set p::saved_state to TASK_RUNNING, but do not wake the task
-+ * because it waits for a lock wakeup or __thaw_task(). Also
-+ * indicate success because from the regular waker's point of
-+ * view this has succeeded.
-+ *
-+ * After acquiring the lock the task will restore p::__state
-+ * from p::saved_state which ensures that the regular
-+ * wakeup is not lost. The restore will also set
-+ * p::saved_state to TASK_RUNNING so any further tests will
-+ * not result in false positives vs. @success
-+ */
-+ if (match < 0)
-+ p->saved_state = TASK_RUNNING;
-+
-+ return match > 0;
-+}
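-+
-+/*
-+ * A sketch of the three outcomes above, assuming __task_state_match()
-+ * returns a negative value for a saved_state-only match as the comment
-+ * describes:
-+ *
-+ *     match > 0:  p->__state matched @state, do a full wakeup
-+ *     match < 0:  only p::saved_state matched; absorb the wakeup by
-+ *                 setting saved_state to TASK_RUNNING and report success
-+ *     match == 0: no match, no wakeup, report failure
-+ */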
-+
-+/*
-+ * Notes on Program-Order guarantees on SMP systems.
-+ *
-+ * MIGRATION
-+ *
-+ * The basic program-order guarantee on SMP systems is that when a task [t]
-+ * migrates, all its activity on its old CPU [c0] happens-before any subsequent
-+ * execution on its new CPU [c1].
-+ *
-+ * For migration (of runnable tasks) this is provided by the following means:
-+ *
-+ * A) UNLOCK of the rq(c0)->lock scheduling out task t
-+ * B) migration for t is required to synchronize *both* rq(c0)->lock and
-+ * rq(c1)->lock (if not at the same time, then in that order).
-+ * C) LOCK of the rq(c1)->lock scheduling in task
-+ *
-+ * Transitivity guarantees that B happens after A and C after B.
-+ * Note: we only require RCpc transitivity.
-+ * Note: the CPU doing B need not be c0 or c1
-+ *
-+ * Example:
-+ *
-+ * CPU0 CPU1 CPU2
-+ *
-+ * LOCK rq(0)->lock
-+ * sched-out X
-+ * sched-in Y
-+ * UNLOCK rq(0)->lock
-+ *
-+ * LOCK rq(0)->lock // orders against CPU0
-+ * dequeue X
-+ * UNLOCK rq(0)->lock
-+ *
-+ * LOCK rq(1)->lock
-+ * enqueue X
-+ * UNLOCK rq(1)->lock
-+ *
-+ * LOCK rq(1)->lock // orders against CPU2
-+ * sched-out Z
-+ * sched-in X
-+ * UNLOCK rq(1)->lock
-+ *
-+ *
-+ * BLOCKING -- aka. SLEEP + WAKEUP
-+ *
-+ * For blocking we (obviously) need to provide the same guarantee as for
-+ * migration. However the means are completely different as there is no lock
-+ * chain to provide order. Instead we do:
-+ *
-+ * 1) smp_store_release(X->on_cpu, 0) -- finish_task()
-+ * 2) smp_cond_load_acquire(!X->on_cpu) -- try_to_wake_up()
-+ *
-+ * Example:
-+ *
-+ * CPU0 (schedule) CPU1 (try_to_wake_up) CPU2 (schedule)
-+ *
-+ * LOCK rq(0)->lock LOCK X->pi_lock
-+ * dequeue X
-+ * sched-out X
-+ * smp_store_release(X->on_cpu, 0);
-+ *
-+ * smp_cond_load_acquire(&X->on_cpu, !VAL);
-+ * X->state = WAKING
-+ * set_task_cpu(X,2)
-+ *
-+ * LOCK rq(2)->lock
-+ * enqueue X
-+ * X->state = RUNNING
-+ * UNLOCK rq(2)->lock
-+ *
-+ * LOCK rq(2)->lock // orders against CPU1
-+ * sched-out Z
-+ * sched-in X
-+ * UNLOCK rq(2)->lock
-+ *
-+ * UNLOCK X->pi_lock
-+ * UNLOCK rq(0)->lock
-+ *
-+ *
-+ * However, for wakeups there is a second guarantee we must provide, namely we
-+ * must observe the state that led to our wakeup. That is, not only must our
-+ * task observe its own prior state; it must also observe the stores prior to
-+ * its wakeup.
-+ *
-+ * This means that any means of doing remote wakeups must order the CPU doing
-+ * the wakeup against the CPU the task is going to end up running on. This,
-+ * however, is already required for the regular Program-Order guarantee above,
-+ * since the waking CPU is the one issuing the ACQUIRE (smp_cond_load_acquire).
-+ *
-+ */
-+
-+/**
-+ * try_to_wake_up - wake up a thread
-+ * @p: the thread to be awakened
-+ * @state: the mask of task states that can be woken
-+ * @wake_flags: wake modifier flags (WF_*)
-+ *
-+ * Conceptually does:
-+ *
-+ * If (@state & @p->state) @p->state = TASK_RUNNING.
-+ *
-+ * If the task was not queued/runnable, also place it back on a runqueue.
-+ *
-+ * This function is atomic against schedule() which would dequeue the task.
-+ *
-+ * It issues a full memory barrier before accessing @p->state, see the comment
-+ * with set_current_state().
-+ *
-+ * Uses p->pi_lock to serialize against concurrent wake-ups.
-+ *
-+ * Relies on p->pi_lock stabilizing:
-+ * - p->sched_class
-+ * - p->cpus_ptr
-+ * - p->sched_task_group
-+ * in order to do migration, see its use of select_task_rq()/set_task_cpu().
-+ *
-+ * Tries really hard to only take one task_rq(p)->lock for performance.
-+ * Takes rq->lock in:
-+ * - ttwu_runnable() -- old rq, unavoidable, see comment there;
-+ * - ttwu_queue() -- new rq, for enqueue of the task;
-+ * - psi_ttwu_dequeue() -- much sadness :-( accounting will kill us.
-+ *
-+ * As a consequence we race really badly with just about everything. See the
-+ * many memory barriers and their comments for details.
-+ *
-+ * Return: %true if @p->state changes (an actual wakeup was done),
-+ * %false otherwise.
-+ */
-+int try_to_wake_up(struct task_struct *p, unsigned int state, int wake_flags)
-+{
-+ guard(preempt)();
-+ int cpu, success = 0;
-+
-+ if (p == current) {
-+ /*
-+ * We're waking current, this means 'p->on_rq' and 'task_cpu(p)
-+ * == smp_processor_id()'. Together this means we can special
-+ * case the whole 'p->on_rq && ttwu_runnable()' case below
-+ * without taking any locks.
-+ *
-+ * In particular:
-+ * - we rely on Program-Order guarantees for all the ordering,
-+ * - we're serialized against set_special_state() by virtue of
-+ * it disabling IRQs (this allows not taking ->pi_lock).
-+ */
-+ if (!ttwu_state_match(p, state, &success))
-+ goto out;
-+
-+ trace_sched_waking(p);
-+ ttwu_do_wakeup(p);
-+ goto out;
-+ }
-+
-+ /*
-+ * If we are going to wake up a thread waiting for CONDITION we
-+ * need to ensure that CONDITION=1 done by the caller can not be
-+ * reordered with p->state check below. This pairs with smp_store_mb()
-+ * in set_current_state() that the waiting thread does.
-+ */
-+ scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
-+ smp_mb__after_spinlock();
-+ if (!ttwu_state_match(p, state, &success))
-+ break;
-+
-+ trace_sched_waking(p);
-+
-+ /*
-+ * Ensure we load p->on_rq _after_ p->state, otherwise it would
-+ * be possible to, falsely, observe p->on_rq == 0 and get stuck
-+ * in smp_cond_load_acquire() below.
-+ *
-+ * sched_ttwu_pending() try_to_wake_up()
-+ * STORE p->on_rq = 1 LOAD p->state
-+ * UNLOCK rq->lock
-+ *
-+ * __schedule() (switch to task 'p')
-+ * LOCK rq->lock smp_rmb();
-+ * smp_mb__after_spinlock();
-+ * UNLOCK rq->lock
-+ *
-+ * [task p]
-+ * STORE p->state = UNINTERRUPTIBLE LOAD p->on_rq
-+ *
-+ * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
-+ * __schedule(). See the comment for smp_mb__after_spinlock().
-+ *
-+ * A similar smp_rmb() lives in __task_needs_rq_lock().
-+ */
-+ smp_rmb();
-+ if (READ_ONCE(p->on_rq) && ttwu_runnable(p, wake_flags))
-+ break;
-+
-+#ifdef CONFIG_SMP
-+ /*
-+ * Ensure we load p->on_cpu _after_ p->on_rq, otherwise it would be
-+ * possible to, falsely, observe p->on_cpu == 0.
-+ *
-+ * One must be running (->on_cpu == 1) in order to remove oneself
-+ * from the runqueue.
-+ *
-+ * __schedule() (switch to task 'p') try_to_wake_up()
-+ * STORE p->on_cpu = 1 LOAD p->on_rq
-+ * UNLOCK rq->lock
-+ *
-+ * __schedule() (put 'p' to sleep)
-+ * LOCK rq->lock smp_rmb();
-+ * smp_mb__after_spinlock();
-+ * STORE p->on_rq = 0 LOAD p->on_cpu
-+ *
-+ * Pairs with the LOCK+smp_mb__after_spinlock() on rq->lock in
-+ * __schedule(). See the comment for smp_mb__after_spinlock().
-+ *
-+ * Form a control-dep-acquire with p->on_rq == 0 above, to ensure
-+ * schedule()'s deactivate_task() has 'happened' and p will no longer
-+ * care about its own p->state. See the comment in __schedule().
-+ */
-+ smp_acquire__after_ctrl_dep();
-+
-+ /*
-+ * We're doing the wakeup (@success == 1), they did a dequeue (p->on_rq
-+ * == 0), which means we need to do an enqueue, change p->state to
-+ * TASK_WAKING such that we can unlock p->pi_lock before doing the
-+ * enqueue, such as ttwu_queue_wakelist().
-+ */
-+ WRITE_ONCE(p->__state, TASK_WAKING);
-+
-+ /*
-+ * If the owning (remote) CPU is still in the middle of schedule() with
-+ * this task as prev, consider queueing p on the remote CPU's wake_list,
-+ * which potentially sends an IPI instead of spinning on p->on_cpu to
-+ * let the waker make forward progress. This is safe because IRQs are
-+ * disabled and the IPI will deliver after on_cpu is cleared.
-+ *
-+ * Ensure we load task_cpu(p) after p->on_cpu:
-+ *
-+ * set_task_cpu(p, cpu);
-+ * STORE p->cpu = @cpu
-+ * __schedule() (switch to task 'p')
-+ * LOCK rq->lock
-+ * smp_mb__after_spin_lock() smp_cond_load_acquire(&p->on_cpu)
-+ * STORE p->on_cpu = 1 LOAD p->cpu
-+ *
-+ * to ensure we observe the correct CPU on which the task is currently
-+ * scheduling.
-+ */
-+ if (smp_load_acquire(&p->on_cpu) &&
-+ ttwu_queue_wakelist(p, task_cpu(p), wake_flags))
-+ break;
-+
-+ /*
-+ * If the owning (remote) CPU is still in the middle of schedule() with
-+ * this task as prev, wait until it's done referencing the task.
-+ *
-+ * Pairs with the smp_store_release() in finish_task().
-+ *
-+ * This ensures that tasks getting woken will be fully ordered against
-+ * their previous state and preserve Program Order.
-+ */
-+ smp_cond_load_acquire(&p->on_cpu, !VAL);
-+
-+ sched_task_ttwu(p);
-+
-+ if ((wake_flags & WF_CURRENT_CPU) &&
-+ cpumask_test_cpu(smp_processor_id(), p->cpus_ptr))
-+ cpu = smp_processor_id();
-+ else
-+ cpu = select_task_rq(p);
-+
-+ if (cpu != task_cpu(p)) {
-+ if (p->in_iowait) {
-+ delayacct_blkio_end(p);
-+ atomic_dec(&task_rq(p)->nr_iowait);
-+ }
-+
-+ wake_flags |= WF_MIGRATED;
-+ set_task_cpu(p, cpu);
-+ }
-+#else
-+ sched_task_ttwu(p);
-+
-+ cpu = task_cpu(p);
-+#endif /* CONFIG_SMP */
-+
-+ ttwu_queue(p, cpu, wake_flags);
-+ }
-+out:
-+ if (success)
-+ ttwu_stat(p, task_cpu(p), wake_flags);
-+
-+ return success;
-+}
-+
-+static bool __task_needs_rq_lock(struct task_struct *p)
-+{
-+ unsigned int state = READ_ONCE(p->__state);
-+
-+ /*
-+ * Since pi->lock blocks try_to_wake_up(), we don't need rq->lock when
-+ * the task is blocked. Make sure to check @state since ttwu() can drop
-+ * locks at the end, see ttwu_queue_wakelist().
-+ */
-+ if (state == TASK_RUNNING || state == TASK_WAKING)
-+ return true;
-+
-+ /*
-+ * Ensure we load p->on_rq after p->__state, otherwise it would be
-+ * possible to, falsely, observe p->on_rq == 0.
-+ *
-+ * See try_to_wake_up() for a longer comment.
-+ */
-+ smp_rmb();
-+ if (p->on_rq)
-+ return true;
-+
-+#ifdef CONFIG_SMP
-+ /*
-+ * Ensure the task has finished __schedule() and will not be referenced
-+ * anymore. Again, see try_to_wake_up() for a longer comment.
-+ */
-+ smp_rmb();
-+ smp_cond_load_acquire(&p->on_cpu, !VAL);
-+#endif
-+
-+ return false;
-+}
-+
-+/**
-+ * task_call_func - Invoke a function on task in fixed state
-+ * @p: Process for which the function is to be invoked, can be @current.
-+ * @func: Function to invoke.
-+ * @arg: Argument to function.
-+ *
-+ * Fix the task in its current state by avoiding wakeups and/or rq operations
-+ * and call @func(@arg) on it. This function can use task_is_runnable() and
-+ * task_curr() to work out what the state is, if required. Given that @func
-+ * can be invoked with a runqueue lock held, it had better be quite
-+ * lightweight.
-+ *
-+ * Returns:
-+ * Whatever @func returns
-+ */
-+int task_call_func(struct task_struct *p, task_call_f func, void *arg)
-+{
-+ struct rq *rq = NULL;
-+ struct rq_flags rf;
-+ int ret;
-+
-+ raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
-+
-+ if (__task_needs_rq_lock(p))
-+ rq = __task_rq_lock(p, &rf);
-+
-+ /*
-+ * At this point the task is pinned; either:
-+ * - blocked and we're holding off wakeups (pi->lock)
-+ * - woken, and we're holding off enqueue (rq->lock)
-+ * - queued, and we're holding off schedule (rq->lock)
-+ * - running, and we're holding off de-schedule (rq->lock)
-+ *
-+ * The called function (@func) can use: task_curr(), p->on_rq and
-+ * p->__state to differentiate between these states.
-+ */
-+ ret = func(p, arg);
-+
-+ if (rq)
-+ __task_rq_unlock(rq, &rf);
-+
-+ raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
-+ return ret;
-+}
-+
-+/**
-+ * cpu_curr_snapshot - Return a snapshot of the currently running task
-+ * @cpu: The CPU on which to snapshot the task.
-+ *
-+ * Returns the task_struct pointer of the task "currently" running on
-+ * the specified CPU. If the same task is running on that CPU throughout,
-+ * the return value will be a pointer to that task's task_struct structure.
-+ * If the CPU did any context switches even vaguely concurrently with the
-+ * execution of this function, the return value will be a pointer to the
-+ * task_struct structure of a randomly chosen task that was running on
-+ * that CPU somewhere around the time that this function was executing.
-+ *
-+ * If the specified CPU was offline, the return value is whatever it
-+ * is, perhaps a pointer to the task_struct structure of that CPU's idle
-+ * task, but there is no guarantee. Callers wishing a useful return
-+ * value must take some action to ensure that the specified CPU remains
-+ * online throughout.
-+ *
-+ * This function executes full memory barriers before and after fetching
-+ * the pointer, which permits the caller to confine this function's fetch
-+ * with respect to the caller's accesses to other shared variables.
-+ */
-+struct task_struct *cpu_curr_snapshot(int cpu)
-+{
-+ struct task_struct *t;
-+
-+ smp_mb(); /* Pairing determined by caller's synchronization design. */
-+ t = rcu_dereference(cpu_curr(cpu));
-+ smp_mb(); /* Pairing determined by caller's synchronization design. */
-+ return t;
-+}
-+
-+/**
-+ * wake_up_process - Wake up a specific process
-+ * @p: The process to be woken up.
-+ *
-+ * Attempt to wake up the nominated process and move it to the set of runnable
-+ * processes.
-+ *
-+ * Return: 1 if the process was woken up, 0 if it was already running.
-+ *
-+ * This function executes a full memory barrier before accessing the task state.
-+ */
-+int wake_up_process(struct task_struct *p)
-+{
-+ return try_to_wake_up(p, TASK_NORMAL, 0);
-+}
-+EXPORT_SYMBOL(wake_up_process);
-+
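-+/*
-+ * A minimal usage sketch for wake_up_process() ("waiter" and "done" are
-+ * placeholder names, not symbols defined in this file): the waiter
-+ * publishes its state before checking the condition, the waker sets the
-+ * condition before waking, and try_to_wake_up()'s barriers order the two:
-+ *
-+ *     // waiter                                // waker
-+ *     for (;;) {
-+ *         set_current_state(TASK_UNINTERRUPTIBLE);
-+ *         if (READ_ONCE(done))                 WRITE_ONCE(done, 1);
-+ *             break;                           wake_up_process(waiter);
-+ *         schedule();
-+ *     }
-+ *     __set_current_state(TASK_RUNNING);
-+ */
-+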
-+int wake_up_state(struct task_struct *p, unsigned int state)
-+{
-+ return try_to_wake_up(p, state, 0);
-+}
-+
-+/*
-+ * Perform scheduler related setup for a newly forked process p.
-+ * p is forked by current.
-+ *
-+ * __sched_fork() is basic setup used by init_idle() too:
-+ */
-+static inline void __sched_fork(unsigned long clone_flags, struct task_struct *p)
-+{
-+ p->on_rq = 0;
-+ p->on_cpu = 0;
-+ p->utime = 0;
-+ p->stime = 0;
-+ p->sched_time = 0;
-+
-+#ifdef CONFIG_SCHEDSTATS
-+ /* Even if schedstat is disabled, there should not be garbage */
-+ memset(&p->stats, 0, sizeof(p->stats));
-+#endif
-+
-+#ifdef CONFIG_PREEMPT_NOTIFIERS
-+ INIT_HLIST_HEAD(&p->preempt_notifiers);
-+#endif
-+
-+#ifdef CONFIG_COMPACTION
-+ p->capture_control = NULL;
-+#endif
-+#ifdef CONFIG_SMP
-+ p->wake_entry.u_flags = CSD_TYPE_TTWU;
-+#endif
-+ init_sched_mm_cid(p);
-+}
-+
-+/*
-+ * fork()/clone()-time setup:
-+ */
-+int sched_fork(unsigned long clone_flags, struct task_struct *p)
-+{
-+ __sched_fork(clone_flags, p);
-+ /*
-+ * We mark the process as NEW here. This guarantees that
-+ * nobody will actually run it, and a signal or other external
-+ * event cannot wake it up and insert it on the runqueue either.
-+ */
-+ p->__state = TASK_NEW;
-+
-+ /*
-+ * Make sure we do not leak PI boosting priority to the child.
-+ */
-+ p->prio = current->normal_prio;
-+
-+ /*
-+ * Revert to default priority/policy on fork if requested.
-+ */
-+ if (unlikely(p->sched_reset_on_fork)) {
-+ if (task_has_rt_policy(p)) {
-+ p->policy = SCHED_NORMAL;
-+ p->static_prio = NICE_TO_PRIO(0);
-+ p->rt_priority = 0;
-+ } else if (PRIO_TO_NICE(p->static_prio) < 0)
-+ p->static_prio = NICE_TO_PRIO(0);
-+
-+ p->prio = p->normal_prio = p->static_prio;
-+
-+ /*
-+ * We don't need the reset flag anymore after the fork. It has
-+ * fulfilled its duty:
-+ */
-+ p->sched_reset_on_fork = 0;
-+ }
-+
-+#ifdef CONFIG_SCHED_INFO
-+ if (unlikely(sched_info_on()))
-+ memset(&p->sched_info, 0, sizeof(p->sched_info));
-+#endif
-+ init_task_preempt_count(p);
-+
-+ return 0;
-+}
-+
-+int sched_cgroup_fork(struct task_struct *p, struct kernel_clone_args *kargs)
-+{
-+ unsigned long flags;
-+ struct rq *rq;
-+
-+ /*
-+ * Because we're not yet on the pid-hash, p->pi_lock isn't strictly
-+ * required yet, but lockdep gets upset if rules are violated.
-+ */
-+ raw_spin_lock_irqsave(&p->pi_lock, flags);
-+ /*
-+ * Share the timeslice between parent and child, so that the
-+ * total amount of pending timeslices in the system doesn't change,
-+ * resulting in more scheduling fairness.
-+ */
-+ rq = this_rq();
-+ raw_spin_lock(&rq->lock);
-+
-+ rq->curr->time_slice /= 2;
-+ p->time_slice = rq->curr->time_slice;
-+#ifdef CONFIG_SCHED_HRTICK
-+ hrtick_start(rq, rq->curr->time_slice);
-+#endif
-+
-+ if (p->time_slice < RESCHED_NS) {
-+ p->time_slice = sysctl_sched_base_slice;
-+ resched_curr(rq);
-+ }
-+ sched_task_fork(p, rq);
-+ raw_spin_unlock(&rq->lock);
-+
-+ rseq_migrate(p);
-+ /*
-+ * We're setting the CPU for the first time and we don't migrate,
-+ * so use __set_task_cpu().
-+ */
-+ __set_task_cpu(p, smp_processor_id());
-+ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+ return 0;
-+}
-+
-+void sched_cancel_fork(struct task_struct *p)
-+{
-+}
-+
-+void sched_post_fork(struct task_struct *p)
-+{
-+}
-+
-+#ifdef CONFIG_SCHEDSTATS
-+
-+DEFINE_STATIC_KEY_FALSE(sched_schedstats);
-+
-+static void set_schedstats(bool enabled)
-+{
-+ if (enabled)
-+ static_branch_enable(&sched_schedstats);
-+ else
-+ static_branch_disable(&sched_schedstats);
-+}
-+
-+void force_schedstat_enabled(void)
-+{
-+ if (!schedstat_enabled()) {
-+ pr_info("kernel profiling enabled schedstats, disable via kernel.sched_schedstats.\n");
-+ static_branch_enable(&sched_schedstats);
-+ }
-+}
-+
-+static int __init setup_schedstats(char *str)
-+{
-+ int ret = 0;
-+ if (!str)
-+ goto out;
-+
-+ if (!strcmp(str, "enable")) {
-+ set_schedstats(true);
-+ ret = 1;
-+ } else if (!strcmp(str, "disable")) {
-+ set_schedstats(false);
-+ ret = 1;
-+ }
-+out:
-+ if (!ret)
-+ pr_warn("Unable to parse schedstats=\n");
-+
-+ return ret;
-+}
-+__setup("schedstats=", setup_schedstats);
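-+
-+/*
-+ * Usage sketch: schedstats can be enabled at boot with "schedstats=enable"
-+ * on the kernel command line (parsed above), or toggled at run time via
-+ * the sysctl registered below when CONFIG_PROC_SYSCTL is enabled:
-+ *
-+ *     # echo 1 > /proc/sys/kernel/sched_schedstats
-+ */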
-+
-+#ifdef CONFIG_PROC_SYSCTL
-+static int sysctl_schedstats(const struct ctl_table *table, int write, void *buffer,
-+ size_t *lenp, loff_t *ppos)
-+{
-+ struct ctl_table t;
-+ int err;
-+ int state = static_branch_likely(&sched_schedstats);
-+
-+ if (write && !capable(CAP_SYS_ADMIN))
-+ return -EPERM;
-+
-+ t = *table;
-+ t.data = &state;
-+ err = proc_dointvec_minmax(&t, write, buffer, lenp, ppos);
-+ if (err < 0)
-+ return err;
-+ if (write)
-+ set_schedstats(state);
-+ return err;
-+}
-+
-+static struct ctl_table sched_core_sysctls[] = {
-+ {
-+ .procname = "sched_schedstats",
-+ .data = NULL,
-+ .maxlen = sizeof(unsigned int),
-+ .mode = 0644,
-+ .proc_handler = sysctl_schedstats,
-+ .extra1 = SYSCTL_ZERO,
-+ .extra2 = SYSCTL_ONE,
-+ },
-+};
-+static int __init sched_core_sysctl_init(void)
-+{
-+ register_sysctl_init("kernel", sched_core_sysctls);
-+ return 0;
-+}
-+late_initcall(sched_core_sysctl_init);
-+#endif /* CONFIG_PROC_SYSCTL */
-+#endif /* CONFIG_SCHEDSTATS */
-+
-+/*
-+ * wake_up_new_task - wake up a newly created task for the first time.
-+ *
-+ * This function will do some initial scheduler statistics housekeeping
-+ * that must be done for every newly created context, then puts the task
-+ * on the runqueue and wakes it.
-+ */
-+void wake_up_new_task(struct task_struct *p)
-+{
-+ unsigned long flags;
-+ struct rq *rq;
-+
-+ raw_spin_lock_irqsave(&p->pi_lock, flags);
-+ WRITE_ONCE(p->__state, TASK_RUNNING);
-+ rq = cpu_rq(select_task_rq(p));
-+#ifdef CONFIG_SMP
-+ rseq_migrate(p);
-+ /*
-+ * Fork balancing, do it here and not earlier because:
-+ * - cpus_ptr can change in the fork path
-+ * - any previously selected CPU might disappear through hotplug
-+ *
-+ * Use __set_task_cpu() to avoid calling sched_class::migrate_task_rq,
-+ * as we're not fully set-up yet.
-+ */
-+ __set_task_cpu(p, cpu_of(rq));
-+#endif
-+
-+ raw_spin_lock(&rq->lock);
-+ update_rq_clock(rq);
-+
-+ activate_task(p, rq);
-+ trace_sched_wakeup_new(p);
-+ wakeup_preempt(rq);
-+
-+ raw_spin_unlock(&rq->lock);
-+ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+}
-+
-+#ifdef CONFIG_PREEMPT_NOTIFIERS
-+
-+static DEFINE_STATIC_KEY_FALSE(preempt_notifier_key);
-+
-+void preempt_notifier_inc(void)
-+{
-+ static_branch_inc(&preempt_notifier_key);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_inc);
-+
-+void preempt_notifier_dec(void)
-+{
-+ static_branch_dec(&preempt_notifier_key);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_dec);
-+
-+/**
-+ * preempt_notifier_register - tell me when current is being preempted & rescheduled
-+ * @notifier: notifier struct to register
-+ */
-+void preempt_notifier_register(struct preempt_notifier *notifier)
-+{
-+ if (!static_branch_unlikely(&preempt_notifier_key))
-+ WARN(1, "registering preempt_notifier while notifiers disabled\n");
-+
-+ hlist_add_head(¬ifier->link, ¤t->preempt_notifiers);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_register);
-+
-+/**
-+ * preempt_notifier_unregister - no longer interested in preemption notifications
-+ * @notifier: notifier struct to unregister
-+ *
-+ * This is *not* safe to call from within a preemption notifier.
-+ */
-+void preempt_notifier_unregister(struct preempt_notifier *notifier)
-+{
-+ hlist_del(¬ifier->link);
-+}
-+EXPORT_SYMBOL_GPL(preempt_notifier_unregister);
-+
-+static void __fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+ struct preempt_notifier *notifier;
-+
-+ hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
-+ notifier->ops->sched_in(notifier, raw_smp_processor_id());
-+}
-+
-+static __always_inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+ if (static_branch_unlikely(&preempt_notifier_key))
-+ __fire_sched_in_preempt_notifiers(curr);
-+}
-+
-+static void
-+__fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+ struct task_struct *next)
-+{
-+ struct preempt_notifier *notifier;
-+
-+ hlist_for_each_entry(notifier, &curr->preempt_notifiers, link)
-+ notifier->ops->sched_out(notifier, next);
-+}
-+
-+static __always_inline void
-+fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+ struct task_struct *next)
-+{
-+ if (static_branch_unlikely(&preempt_notifier_key))
-+ __fire_sched_out_preempt_notifiers(curr, next);
-+}
-+
-+#else /* !CONFIG_PREEMPT_NOTIFIERS */
-+
-+static inline void fire_sched_in_preempt_notifiers(struct task_struct *curr)
-+{
-+}
-+
-+static inline void
-+fire_sched_out_preempt_notifiers(struct task_struct *curr,
-+ struct task_struct *next)
-+{
-+}
-+
-+#endif /* CONFIG_PREEMPT_NOTIFIERS */
-+
-+static inline void prepare_task(struct task_struct *next)
-+{
-+ /*
-+ * Claim the task as running; we do this before switching to it
-+ * such that any running task will have this set.
-+ *
-+ * See the smp_load_acquire(&p->on_cpu) case in ttwu() and
-+ * its ordering comment.
-+ */
-+ WRITE_ONCE(next->on_cpu, 1);
-+}
-+
-+static inline void finish_task(struct task_struct *prev)
-+{
-+#ifdef CONFIG_SMP
-+ /*
-+ * This must be the very last reference to @prev from this CPU. After
-+ * p->on_cpu is cleared, the task can be moved to a different CPU. We
-+ * must ensure this doesn't happen until the switch is completely
-+ * finished.
-+ *
-+ * In particular, the load of prev->state in finish_task_switch() must
-+ * happen before this.
-+ *
-+ * Pairs with the smp_cond_load_acquire() in try_to_wake_up().
-+ */
-+ smp_store_release(&prev->on_cpu, 0);
-+#else
-+ prev->on_cpu = 0;
-+#endif
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+static void do_balance_callbacks(struct rq *rq, struct balance_callback *head)
-+{
-+ void (*func)(struct rq *rq);
-+ struct balance_callback *next;
-+
-+ lockdep_assert_held(&rq->lock);
-+
-+ while (head) {
-+ func = (void (*)(struct rq *))head->func;
-+ next = head->next;
-+ head->next = NULL;
-+ head = next;
-+
-+ func(rq);
-+ }
-+}
-+
-+static void balance_push(struct rq *rq);
-+
-+/*
-+ * balance_push_callback is a right abuse of the callback interface and plays
-+ * by significantly different rules.
-+ *
-+ * Where the normal balance_callback's purpose is to be run in the same context
-+ * that queued it (only later, when it's safe to drop rq->lock again),
-+ * balance_push_callback is specifically targeted at __schedule().
-+ *
-+ * This abuse is tolerated because it places all the unlikely/odd cases behind
-+ * a single test, namely: rq->balance_callback == NULL.
-+ */
-+struct balance_callback balance_push_callback = {
-+ .next = NULL,
-+ .func = balance_push,
-+};
-+
-+static inline struct balance_callback *
-+__splice_balance_callbacks(struct rq *rq, bool split)
-+{
-+ struct balance_callback *head = rq->balance_callback;
-+
-+ if (likely(!head))
-+ return NULL;
-+
-+ lockdep_assert_rq_held(rq);
-+ /*
-+ * Must not take balance_push_callback off the list when
-+ * splice_balance_callbacks() and balance_callbacks() are not
-+ * in the same rq->lock section.
-+ *
-+ * In that case it would be possible for __schedule() to interleave
-+ * and observe the list empty.
-+ */
-+ if (split && head == &balance_push_callback)
-+ head = NULL;
-+ else
-+ rq->balance_callback = NULL;
-+
-+ return head;
-+}
-+
-+struct balance_callback *splice_balance_callbacks(struct rq *rq)
-+{
-+ return __splice_balance_callbacks(rq, true);
-+}
-+
-+static void __balance_callbacks(struct rq *rq)
-+{
-+ do_balance_callbacks(rq, __splice_balance_callbacks(rq, false));
-+}
-+
-+void balance_callbacks(struct rq *rq, struct balance_callback *head)
-+{
-+ unsigned long flags;
-+
-+ if (unlikely(head)) {
-+ raw_spin_lock_irqsave(&rq->lock, flags);
-+ do_balance_callbacks(rq, head);
-+ raw_spin_unlock_irqrestore(&rq->lock, flags);
-+ }
-+}
-+
-+#else
-+
-+static inline void __balance_callbacks(struct rq *rq)
-+{
-+}
-+#endif
-+
-+static inline void
-+prepare_lock_switch(struct rq *rq, struct task_struct *next)
-+{
-+ /*
-+ * The runqueue lock will be released by the next
-+ * task (which is an invalid locking op but in the case
-+ * of the scheduler it's an obvious special-case), so we
-+ * do an early lockdep release here:
-+ */
-+ spin_release(&rq->lock.dep_map, _THIS_IP_);
-+#ifdef CONFIG_DEBUG_SPINLOCK
-+ /* this is a valid case when another task releases the spinlock */
-+ rq->lock.owner = next;
-+#endif
-+}
-+
-+static inline void finish_lock_switch(struct rq *rq)
-+{
-+ /*
-+ * If we are tracking spinlock dependencies then we have to
-+ * fix up the runqueue lock - which gets 'carried over' from
-+ * prev into current:
-+ */
-+ spin_acquire(&rq->lock.dep_map, 0, 0, _THIS_IP_);
-+ __balance_callbacks(rq);
-+ raw_spin_unlock_irq(&rq->lock);
-+}
-+
-+/*
-+ * NOP if the arch has not defined these:
-+ */
-+
-+#ifndef prepare_arch_switch
-+# define prepare_arch_switch(next) do { } while (0)
-+#endif
-+
-+#ifndef finish_arch_post_lock_switch
-+# define finish_arch_post_lock_switch() do { } while (0)
-+#endif
-+
-+static inline void kmap_local_sched_out(void)
-+{
-+#ifdef CONFIG_KMAP_LOCAL
-+ if (unlikely(current->kmap_ctrl.idx))
-+ __kmap_local_sched_out();
-+#endif
-+}
-+
-+static inline void kmap_local_sched_in(void)
-+{
-+#ifdef CONFIG_KMAP_LOCAL
-+ if (unlikely(current->kmap_ctrl.idx))
-+ __kmap_local_sched_in();
-+#endif
-+}
-+
-+/**
-+ * prepare_task_switch - prepare to switch tasks
-+ * @rq: the runqueue preparing to switch
-+ * @next: the task we are going to switch to.
-+ *
-+ * This is called with the rq lock held and interrupts off. It must
-+ * be paired with a subsequent finish_task_switch after the context
-+ * switch.
-+ *
-+ * prepare_task_switch sets up locking and calls architecture specific
-+ * hooks.
-+ */
-+static inline void
-+prepare_task_switch(struct rq *rq, struct task_struct *prev,
-+ struct task_struct *next)
-+{
-+ kcov_prepare_switch(prev);
-+ sched_info_switch(rq, prev, next);
-+ perf_event_task_sched_out(prev, next);
-+ rseq_preempt(prev);
-+ fire_sched_out_preempt_notifiers(prev, next);
-+ kmap_local_sched_out();
-+ prepare_task(next);
-+ prepare_arch_switch(next);
-+}
-+
-+/**
-+ * finish_task_switch - clean up after a task-switch
-+ * @rq: runqueue associated with task-switch
-+ * @prev: the thread we just switched away from.
-+ *
-+ * finish_task_switch must be called after the context switch, paired
-+ * with a prepare_task_switch call before the context switch.
-+ * finish_task_switch will reconcile locking set up by prepare_task_switch,
-+ * and do any other architecture-specific cleanup actions.
-+ *
-+ * Note that we may have delayed dropping an mm in context_switch(). If
-+ * so, we finish that here outside of the runqueue lock. (Doing it
-+ * with the lock held can cause deadlocks; see schedule() for
-+ * details.)
-+ *
-+ * The context switch has flipped the stack from under us and restored the
-+ * local variables which were saved when this task called schedule() in the
-+ * past. 'prev == current' is still correct but we need to recalculate this_rq
-+ * because prev may have moved to another CPU.
-+ */
-+static struct rq *finish_task_switch(struct task_struct *prev)
-+ __releases(rq->lock)
-+{
-+ struct rq *rq = this_rq();
-+ struct mm_struct *mm = rq->prev_mm;
-+ unsigned int prev_state;
-+
-+ /*
-+ * The previous task will have left us with a preempt_count of 2
-+ * because it left us after:
-+ *
-+ * schedule()
-+ * preempt_disable(); // 1
-+ * __schedule()
-+ * raw_spin_lock_irq(&rq->lock) // 2
-+ *
-+ * Also, see FORK_PREEMPT_COUNT.
-+ */
-+ if (WARN_ONCE(preempt_count() != 2*PREEMPT_DISABLE_OFFSET,
-+ "corrupted preempt_count: %s/%d/0x%x\n",
-+ current->comm, current->pid, preempt_count()))
-+ preempt_count_set(FORK_PREEMPT_COUNT);
-+
-+ rq->prev_mm = NULL;
-+
-+ /*
-+ * A task struct has one reference for the use as "current".
-+ * If a task dies, then it sets TASK_DEAD in tsk->state and calls
-+ * schedule one last time. The schedule call will never return, and
-+ * the scheduled task must drop that reference.
-+ *
-+ * We must observe prev->state before clearing prev->on_cpu (in
-+ * finish_task), otherwise a concurrent wakeup can get prev
-+ * running on another CPU and we could race with its RUNNING -> DEAD
-+ * transition, resulting in a double drop.
-+ */
-+ prev_state = READ_ONCE(prev->__state);
-+ vtime_task_switch(prev);
-+ perf_event_task_sched_in(prev, current);
-+ finish_task(prev);
-+ tick_nohz_task_switch();
-+ finish_lock_switch(rq);
-+ finish_arch_post_lock_switch();
-+ kcov_finish_switch(current);
-+ /*
-+ * kmap_local_sched_out() is invoked with rq::lock held and
-+ * interrupts disabled. There is no requirement for that, but the
-+ * sched out code does not have an interrupt enabled section.
-+ * Restoring the maps on sched in does not require interrupts being
-+ * disabled either.
-+ */
-+ kmap_local_sched_in();
-+
-+ fire_sched_in_preempt_notifiers(current);
-+ /*
-+ * When switching through a kernel thread, the loop in
-+ * membarrier_{private,global}_expedited() may have observed that
-+ * kernel thread and not issued an IPI. It is therefore possible to
-+ * schedule between user->kernel->user threads without passing through
-+ * switch_mm(). Membarrier requires a barrier after storing to
-+ * rq->curr, before returning to userspace, so provide them here:
-+ *
-+ * - a full memory barrier for {PRIVATE,GLOBAL}_EXPEDITED, implicitly
-+ * provided by mmdrop(),
-+ * - a sync_core for SYNC_CORE.
-+ */
-+ if (mm) {
-+ membarrier_mm_sync_core_before_usermode(mm);
-+ mmdrop_sched(mm);
-+ }
-+ if (unlikely(prev_state == TASK_DEAD)) {
-+ /* Task is done with its stack. */
-+ put_task_stack(prev);
-+
-+ put_task_struct_rcu_user(prev);
-+ }
-+
-+ return rq;
-+}
-+
-+/**
-+ * schedule_tail - first thing a freshly forked thread must call.
-+ * @prev: the thread we just switched away from.
-+ */
-+asmlinkage __visible void schedule_tail(struct task_struct *prev)
-+ __releases(rq->lock)
-+{
-+ /*
-+ * New tasks start with FORK_PREEMPT_COUNT, see there and
-+ * finish_task_switch() for details.
-+ *
-+ * finish_task_switch() will drop rq->lock() and lower preempt_count
-+ * and the preempt_enable() will end up enabling preemption (on
-+ * PREEMPT_COUNT kernels).
-+ */
-+
-+ finish_task_switch(prev);
-+ preempt_enable();
-+
-+ if (current->set_child_tid)
-+ put_user(task_pid_vnr(current), current->set_child_tid);
-+
-+ calculate_sigpending();
-+}
-+
-+/*
-+ * context_switch - switch to the new MM and the new thread's register state.
-+ */
-+static __always_inline struct rq *
-+context_switch(struct rq *rq, struct task_struct *prev,
-+ struct task_struct *next)
-+{
-+ prepare_task_switch(rq, prev, next);
-+
-+ /*
-+ * For paravirt, this is coupled with an exit in switch_to to
-+ * combine the page table reload and the switch backend into
-+ * one hypercall.
-+ */
-+ arch_start_context_switch(prev);
-+
-+ /*
-+ * kernel -> kernel lazy + transfer active
-+ * user -> kernel lazy + mmgrab() active
-+ *
-+ * kernel -> user switch + mmdrop() active
-+ * user -> user switch
-+ *
-+ * switch_mm_cid() needs to be updated if the barriers provided
-+ * by context_switch() are modified.
-+ */
-+ if (!next->mm) { // to kernel
-+ enter_lazy_tlb(prev->active_mm, next);
-+
-+ next->active_mm = prev->active_mm;
-+ if (prev->mm) // from user
-+ mmgrab(prev->active_mm);
-+ else
-+ prev->active_mm = NULL;
-+ } else { // to user
-+ membarrier_switch_mm(rq, prev->active_mm, next->mm);
-+ /*
-+ * sys_membarrier() requires an smp_mb() between setting
-+ * rq->curr / membarrier_switch_mm() and returning to userspace.
-+ *
-+ * The below provides this either through switch_mm(), or in
-+ * case 'prev->active_mm == next->mm' through
-+ * finish_task_switch()'s mmdrop().
-+ */
-+ switch_mm_irqs_off(prev->active_mm, next->mm, next);
-+ lru_gen_use_mm(next->mm);
-+
-+ if (!prev->mm) { // from kernel
-+ /* will mmdrop() in finish_task_switch(). */
-+ rq->prev_mm = prev->active_mm;
-+ prev->active_mm = NULL;
-+ }
-+ }
-+
-+ /* switch_mm_cid() requires the memory barriers above. */
-+ switch_mm_cid(rq, prev, next);
-+
-+ prepare_lock_switch(rq, next);
-+
-+ /* Here we just switch the register state and the stack. */
-+ switch_to(prev, next, prev);
-+ barrier();
-+
-+ return finish_task_switch(prev);
-+}
-+
-+/*
-+ * nr_running, nr_uninterruptible and nr_context_switches:
-+ *
-+ * externally visible scheduler statistics: current number of runnable
-+ * threads, total number of context switches performed since bootup.
-+ */
-+unsigned int nr_running(void)
-+{
-+ unsigned int i, sum = 0;
-+
-+ for_each_online_cpu(i)
-+ sum += cpu_rq(i)->nr_running;
-+
-+ return sum;
-+}
-+
-+/*
-+ * Check if only the current task is running on the CPU.
-+ *
-+ * Caution: this function does not check that the caller has disabled
-+ * preemption, thus the result might have a time-of-check-to-time-of-use
-+ * race. The caller is responsible for using it correctly, for example:
-+ *
-+ * - from a non-preemptible section (of course)
-+ *
-+ * - from a thread that is bound to a single CPU
-+ *
-+ * - in a loop with very short iterations (e.g. a polling loop)
-+ */
-+bool single_task_running(void)
-+{
-+ return raw_rq()->nr_running == 1;
-+}
-+EXPORT_SYMBOL(single_task_running);
-+
-+unsigned long long nr_context_switches_cpu(int cpu)
-+{
-+ return cpu_rq(cpu)->nr_switches;
-+}
-+
-+unsigned long long nr_context_switches(void)
-+{
-+ int i;
-+ unsigned long long sum = 0;
-+
-+ for_each_possible_cpu(i)
-+ sum += cpu_rq(i)->nr_switches;
-+
-+ return sum;
-+}
-+
-+/*
-+ * Consumers of these two interfaces, like for example the cpuidle menu
-+ * governor, are using nonsensical data: they prefer shallow idle state
-+ * selection for a CPU that has IO-wait pending, even though that CPU might
-+ * not end up running the task when it does become runnable.
-+ */
-+
-+unsigned int nr_iowait_cpu(int cpu)
-+{
-+ return atomic_read(&cpu_rq(cpu)->nr_iowait);
-+}
-+
-+/*
-+ * IO-wait accounting, and how it's mostly bollocks (on SMP).
-+ *
-+ * The idea behind IO-wait accounting is to account the idle time that we could
-+ * have spent running if it were not for IO. That is, if we were to improve the
-+ * storage performance, we'd have a proportional reduction in IO-wait time.
-+ *
-+ * This all works nicely on UP, where, when a task blocks on IO, we account
-+ * idle time as IO-wait, because if the storage were faster, it could've been
-+ * running and we'd not be idle.
-+ *
-+ * This has been extended to SMP, by doing the same for each CPU. This however
-+ * is broken.
-+ *
-+ * Imagine for instance the case where two tasks block on one CPU; only that
-+ * CPU will have IO-wait accounted, while the other has regular idle. Even
-+ * though, if the storage were faster, both could've run at the same time,
-+ * utilising both CPUs.
-+ *
-+ * This means that, when looking globally, the current IO-wait accounting on
-+ * SMP is a lower bound, by reason of under-accounting.
-+ *
-+ * Worse, since the numbers are provided per CPU, they are sometimes
-+ * interpreted per CPU, and that is nonsensical. A blocked task isn't strictly
-+ * associated with any one particular CPU; it can wake on another CPU than it
-+ * blocked on. This means the per CPU IO-wait number is meaningless.
-+ *
-+ * Task CPU affinities can make all that even more 'interesting'.
-+ */
-+
-+unsigned int nr_iowait(void)
-+{
-+ unsigned int i, sum = 0;
-+
-+ for_each_possible_cpu(i)
-+ sum += nr_iowait_cpu(i);
-+
-+ return sum;
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+/*
-+ * sched_exec - execve() is a valuable balancing opportunity, because at
-+ * this point the task has the smallest effective memory and cache
-+ * footprint.
-+ */
-+void sched_exec(void)
-+{
-+}
-+
-+#endif
-+
-+DEFINE_PER_CPU(struct kernel_stat, kstat);
-+DEFINE_PER_CPU(struct kernel_cpustat, kernel_cpustat);
-+
-+EXPORT_PER_CPU_SYMBOL(kstat);
-+EXPORT_PER_CPU_SYMBOL(kernel_cpustat);
-+
-+static inline void update_curr(struct rq *rq, struct task_struct *p)
-+{
-+ s64 ns = rq->clock_task - p->last_ran;
-+
-+ p->sched_time += ns;
-+ cgroup_account_cputime(p, ns);
-+ account_group_exec_runtime(p, ns);
-+
-+ p->time_slice -= ns;
-+ p->last_ran = rq->clock_task;
-+}
-+
-+/*
-+ * Return accounted runtime for the task.
-+ * In case the task is currently running, also include the pending runtime
-+ * that has not been accounted yet.
-+ */
-+unsigned long long task_sched_runtime(struct task_struct *p)
-+{
-+ unsigned long flags;
-+ struct rq *rq;
-+ raw_spinlock_t *lock;
-+ u64 ns;
-+
-+#if defined(CONFIG_64BIT) && defined(CONFIG_SMP)
-+ /*
-+ * 64-bit doesn't need locks to atomically read a 64-bit value.
-+ * So we have an optimization chance when the task's delta_exec is 0.
-+ * Reading ->on_cpu is racy, but this is OK.
-+ *
-+ * If we race with it leaving CPU, we'll take a lock. So we're correct.
-+ * If we race with it entering CPU, unaccounted time is 0. This is
-+ * indistinguishable from the read occurring a few cycles earlier.
-+ * If we see ->on_cpu without ->on_rq, the task is leaving, and has
-+ * been accounted, so we're correct here as well.
-+ */
-+ if (!p->on_cpu || !task_on_rq_queued(p))
-+ return tsk_seruntime(p);
-+#endif
-+
-+ rq = task_access_lock_irqsave(p, &lock, &flags);
-+ /*
-+ * Must be ->curr _and_ ->on_rq. If dequeued, we would
-+ * project cycles that may never be accounted to this
-+ * thread, breaking clock_gettime().
-+ */
-+ if (p == rq->curr && task_on_rq_queued(p)) {
-+ update_rq_clock(rq);
-+ update_curr(rq, p);
-+ }
-+ ns = tsk_seruntime(p);
-+ task_access_unlock_irqrestore(p, lock, &flags);
-+
-+ return ns;
-+}
-+
-+/* This manages tasks that have run out of time slice during a sched_tick() */
-+static inline void scheduler_task_tick(struct rq *rq)
-+{
-+ struct task_struct *p = rq->curr;
-+
-+ if (is_idle_task(p))
-+ return;
-+
-+ update_curr(rq, p);
-+ cpufreq_update_util(rq, 0);
-+
-+ /*
-+ * Tasks that have less than RESCHED_NS of time slice left will be
-+ * rescheduled.
-+ */
-+ if (p->time_slice >= RESCHED_NS)
-+ return;
-+ set_tsk_need_resched(p);
-+ set_preempt_need_resched();
-+}
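-+
-+/*
-+ * A worked example, assuming HZ=1000 and a 4ms sysctl_sched_base_slice
-+ * (both assumptions, they depend on the configuration): update_curr()
-+ * subtracts roughly 1ms from p->time_slice at each tick, so after about
-+ * four ticks the slice drops below RESCHED_NS and the task is marked
-+ * for rescheduling.
-+ */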
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+static u64 cpu_resched_latency(struct rq *rq)
-+{
-+ int latency_warn_ms = READ_ONCE(sysctl_resched_latency_warn_ms);
-+ u64 resched_latency, now = rq_clock(rq);
-+ static bool warned_once;
-+
-+ if (sysctl_resched_latency_warn_once && warned_once)
-+ return 0;
-+
-+ if (!need_resched() || !latency_warn_ms)
-+ return 0;
-+
-+ if (system_state == SYSTEM_BOOTING)
-+ return 0;
-+
-+ if (!rq->last_seen_need_resched_ns) {
-+ rq->last_seen_need_resched_ns = now;
-+ rq->ticks_without_resched = 0;
-+ return 0;
-+ }
-+
-+ rq->ticks_without_resched++;
-+ resched_latency = now - rq->last_seen_need_resched_ns;
-+ if (resched_latency <= latency_warn_ms * NSEC_PER_MSEC)
-+ return 0;
-+
-+ warned_once = true;
-+
-+ return resched_latency;
-+}
-+
-+static int __init setup_resched_latency_warn_ms(char *str)
-+{
-+ long val;
-+
-+ if ((kstrtol(str, 0, &val))) {
-+ pr_warn("Unable to set resched_latency_warn_ms\n");
-+ return 1;
-+ }
-+
-+ sysctl_resched_latency_warn_ms = val;
-+ return 1;
-+}
-+__setup("resched_latency_warn_ms=", setup_resched_latency_warn_ms);
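-+
-+/*
-+ * Usage sketch: booting with "resched_latency_warn_ms=100" raises the
-+ * warning threshold to 100ms, while "resched_latency_warn_ms=0" disables
-+ * the warning entirely (see the !latency_warn_ms check above). The
-+ * LATENCY_WARN scheduler feature must also be set for the warning to fire.
-+ */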
-+#else
-+static inline u64 cpu_resched_latency(struct rq *rq) { return 0; }
-+#endif /* CONFIG_SCHED_DEBUG */
-+
-+/*
-+ * This function gets called by the timer code, with HZ frequency.
-+ * We call it with interrupts disabled.
-+ */
-+void sched_tick(void)
-+{
-+ int cpu __maybe_unused = smp_processor_id();
-+ struct rq *rq = cpu_rq(cpu);
-+ struct task_struct *curr = rq->curr;
-+ u64 resched_latency;
-+
-+ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
-+ arch_scale_freq_tick();
-+
-+ sched_clock_tick();
-+
-+ raw_spin_lock(&rq->lock);
-+ update_rq_clock(rq);
-+
-+ scheduler_task_tick(rq);
-+ if (sched_feat(LATENCY_WARN))
-+ resched_latency = cpu_resched_latency(rq);
-+ calc_global_load_tick(rq);
-+
-+ task_tick_mm_cid(rq, rq->curr);
-+
-+ raw_spin_unlock(&rq->lock);
-+
-+ if (sched_feat(LATENCY_WARN) && resched_latency)
-+ resched_latency_warn(cpu, resched_latency);
-+
-+ perf_event_task_tick();
-+
-+ if (curr->flags & PF_WQ_WORKER)
-+ wq_worker_tick(curr);
-+}
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+
-+struct tick_work {
-+ int cpu;
-+ atomic_t state;
-+ struct delayed_work work;
-+};
-+/* Values for ->state, see diagram below. */
-+#define TICK_SCHED_REMOTE_OFFLINE 0
-+#define TICK_SCHED_REMOTE_OFFLINING 1
-+#define TICK_SCHED_REMOTE_RUNNING 2
-+
-+/*
-+ * State diagram for ->state:
-+ *
-+ *
-+ * TICK_SCHED_REMOTE_OFFLINE
-+ * | ^
-+ * | |
-+ * | | sched_tick_remote()
-+ * | |
-+ * | |
-+ * +--TICK_SCHED_REMOTE_OFFLINING
-+ * | ^
-+ * | |
-+ * sched_tick_start() | | sched_tick_stop()
-+ * | |
-+ * V |
-+ * TICK_SCHED_REMOTE_RUNNING
-+ *
-+ *
-+ * Other transitions get WARN_ON_ONCE(), except that sched_tick_remote()
-+ * and sched_tick_start() are happy to leave the state in RUNNING.
-+ */
-+
-+static struct tick_work __percpu *tick_work_cpu;
-+
-+static void sched_tick_remote(struct work_struct *work)
-+{
-+ struct delayed_work *dwork = to_delayed_work(work);
-+ struct tick_work *twork = container_of(dwork, struct tick_work, work);
-+ int cpu = twork->cpu;
-+ struct rq *rq = cpu_rq(cpu);
-+ int os;
-+
-+ /*
-+ * Handle the tick only if it appears the remote CPU is running in full
-+ * dynticks mode. The check is racy by nature, but missing a tick or
-+ * having one too many is no big deal because the scheduler tick updates
-+ * statistics and checks timeslices in a time-independent way, regardless
-+ * of when exactly it is running.
-+ */
-+ if (tick_nohz_tick_stopped_cpu(cpu)) {
-+ guard(raw_spinlock_irqsave)(&rq->lock);
-+ struct task_struct *curr = rq->curr;
-+
-+ if (cpu_online(cpu)) {
-+ update_rq_clock(rq);
-+
-+ if (!is_idle_task(curr)) {
-+ /*
-+ * Make sure the next tick runs within a
-+ * reasonable amount of time.
-+ */
-+ u64 delta = rq_clock_task(rq) - curr->last_ran;
-+ WARN_ON_ONCE(delta > (u64)NSEC_PER_SEC * 3);
-+ }
-+ scheduler_task_tick(rq);
-+
-+ calc_load_nohz_remote(rq);
-+ }
-+ }
-+
-+ /*
-+ * Run the remote tick once per second (1Hz). This arbitrary
-+ * interval is long enough to avoid overload but short enough
-+ * to keep scheduler-internal stats reasonably up to date. But
-+ * first update state to reflect hotplug activity if required.
-+ */
-+ os = atomic_fetch_add_unless(&twork->state, -1, TICK_SCHED_REMOTE_RUNNING);
-+ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_OFFLINE);
-+ if (os == TICK_SCHED_REMOTE_RUNNING)
-+ queue_delayed_work(system_unbound_wq, dwork, HZ);
-+}
-+
-+static void sched_tick_start(int cpu)
-+{
-+ int os;
-+ struct tick_work *twork;
-+
-+ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
-+ return;
-+
-+ WARN_ON_ONCE(!tick_work_cpu);
-+
-+ twork = per_cpu_ptr(tick_work_cpu, cpu);
-+ os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_RUNNING);
-+ WARN_ON_ONCE(os == TICK_SCHED_REMOTE_RUNNING);
-+ if (os == TICK_SCHED_REMOTE_OFFLINE) {
-+ twork->cpu = cpu;
-+ INIT_DELAYED_WORK(&twork->work, sched_tick_remote);
-+ queue_delayed_work(system_unbound_wq, &twork->work, HZ);
-+ }
-+}
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+static void sched_tick_stop(int cpu)
-+{
-+ struct tick_work *twork;
-+ int os;
-+
-+ if (housekeeping_cpu(cpu, HK_TYPE_TICK))
-+ return;
-+
-+ WARN_ON_ONCE(!tick_work_cpu);
-+
-+ twork = per_cpu_ptr(tick_work_cpu, cpu);
-+ /* There cannot be competing actions, but don't rely on stop-machine. */
-+ os = atomic_xchg(&twork->state, TICK_SCHED_REMOTE_OFFLINING);
-+ WARN_ON_ONCE(os != TICK_SCHED_REMOTE_RUNNING);
-+ /* Don't cancel, as this would mess up the state machine. */
-+}
-+#endif /* CONFIG_HOTPLUG_CPU */
-+
-+int __init sched_tick_offload_init(void)
-+{
-+ tick_work_cpu = alloc_percpu(struct tick_work);
-+ BUG_ON(!tick_work_cpu);
-+ return 0;
-+}
-+
-+#else /* !CONFIG_NO_HZ_FULL */
-+static inline void sched_tick_start(int cpu) { }
-+static inline void sched_tick_stop(int cpu) { }
-+#endif
-+
-+#if defined(CONFIG_PREEMPTION) && (defined(CONFIG_DEBUG_PREEMPT) || \
-+ defined(CONFIG_PREEMPT_TRACER))
-+/*
-+ * If the value passed in is equal to the current preempt count
-+ * then we just disabled preemption. Start timing the latency.
-+ */
-+static inline void preempt_latency_start(int val)
-+{
-+ if (preempt_count() == val) {
-+ unsigned long ip = get_lock_parent_ip();
-+#ifdef CONFIG_DEBUG_PREEMPT
-+ current->preempt_disable_ip = ip;
-+#endif
-+ trace_preempt_off(CALLER_ADDR0, ip);
-+ }
-+}
-+
-+void preempt_count_add(int val)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+ /*
-+ * Underflow?
-+ */
-+ if (DEBUG_LOCKS_WARN_ON((preempt_count() < 0)))
-+ return;
-+#endif
-+ __preempt_count_add(val);
-+#ifdef CONFIG_DEBUG_PREEMPT
-+ /*
-+ * Spinlock count overflowing soon?
-+ */
-+ DEBUG_LOCKS_WARN_ON((preempt_count() & PREEMPT_MASK) >=
-+ PREEMPT_MASK - 10);
-+#endif
-+ preempt_latency_start(val);
-+}
-+EXPORT_SYMBOL(preempt_count_add);
-+NOKPROBE_SYMBOL(preempt_count_add);
-+
-+/*
-+ * If the value passed in is equal to the current preempt count
-+ * then we just enabled preemption. Stop timing the latency.
-+ */
-+static inline void preempt_latency_stop(int val)
-+{
-+ if (preempt_count() == val)
-+ trace_preempt_on(CALLER_ADDR0, get_lock_parent_ip());
-+}
-+
-+void preempt_count_sub(int val)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+ /*
-+ * Underflow?
-+ */
-+ if (DEBUG_LOCKS_WARN_ON(val > preempt_count()))
-+ return;
-+ /*
-+ * Is the spinlock portion underflowing?
-+ */
-+ if (DEBUG_LOCKS_WARN_ON((val < PREEMPT_MASK) &&
-+ !(preempt_count() & PREEMPT_MASK)))
-+ return;
-+#endif
-+
-+ preempt_latency_stop(val);
-+ __preempt_count_sub(val);
-+}
-+EXPORT_SYMBOL(preempt_count_sub);
-+NOKPROBE_SYMBOL(preempt_count_sub);
-+
-+#else
-+static inline void preempt_latency_start(int val) { }
-+static inline void preempt_latency_stop(int val) { }
-+#endif
-+
-+static inline unsigned long get_preempt_disable_ip(struct task_struct *p)
-+{
-+#ifdef CONFIG_DEBUG_PREEMPT
-+ return p->preempt_disable_ip;
-+#else
-+ return 0;
-+#endif
-+}
-+
-+/*
-+ * Print scheduling while atomic bug:
-+ */
-+static noinline void __schedule_bug(struct task_struct *prev)
-+{
-+ /* Save this before calling printk(), since that will clobber it */
-+ unsigned long preempt_disable_ip = get_preempt_disable_ip(current);
-+
-+ if (oops_in_progress)
-+ return;
-+
-+ printk(KERN_ERR "BUG: scheduling while atomic: %s/%d/0x%08x\n",
-+ prev->comm, prev->pid, preempt_count());
-+
-+ debug_show_held_locks(prev);
-+ print_modules();
-+ if (irqs_disabled())
-+ print_irqtrace_events(prev);
-+ if (IS_ENABLED(CONFIG_DEBUG_PREEMPT)) {
-+ pr_err("Preemption disabled at:");
-+ print_ip_sym(KERN_ERR, preempt_disable_ip);
-+ }
-+ check_panic_on_warn("scheduling while atomic");
-+
-+ dump_stack();
-+ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+
-+/*
-+ * Various schedule()-time debugging checks and statistics:
-+ */
-+static inline void schedule_debug(struct task_struct *prev, bool preempt)
-+{
-+#ifdef CONFIG_SCHED_STACK_END_CHECK
-+ if (task_stack_end_corrupted(prev))
-+ panic("corrupted stack end detected inside scheduler\n");
-+
-+ if (task_scs_end_corrupted(prev))
-+ panic("corrupted shadow stack detected inside scheduler\n");
-+#endif
-+
-+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
-+ if (!preempt && READ_ONCE(prev->__state) && prev->non_block_count) {
-+ printk(KERN_ERR "BUG: scheduling in a non-blocking section: %s/%d/%i\n",
-+ prev->comm, prev->pid, prev->non_block_count);
-+ dump_stack();
-+ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+ }
-+#endif
-+
-+ if (unlikely(in_atomic_preempt_off())) {
-+ __schedule_bug(prev);
-+ preempt_count_set(PREEMPT_DISABLED);
-+ }
-+ rcu_sleep_check();
-+ SCHED_WARN_ON(ct_state() == CT_STATE_USER);
-+
-+ profile_hit(SCHED_PROFILING, __builtin_return_address(0));
-+
-+ schedstat_inc(this_rq()->sched_count);
-+}
-+
-+#ifdef ALT_SCHED_DEBUG
-+void alt_sched_debug(void)
-+{
-+ printk(KERN_INFO "sched: pending: 0x%04lx, idle: 0x%04lx, sg_idle: 0x%04lx,"
-+ " ecore_idle: 0x%04lx\n",
-+ sched_rq_pending_mask.bits[0],
-+ sched_idle_mask->bits[0],
-+ sched_pcore_idle_mask->bits[0],
-+ sched_ecore_idle_mask->bits[0]);
-+}
-+#endif
-+
-+#ifdef CONFIG_SMP
-+
-+#ifdef CONFIG_PREEMPT_RT
-+#define SCHED_NR_MIGRATE_BREAK 8
-+#else
-+#define SCHED_NR_MIGRATE_BREAK 32
-+#endif
-+
-+const_debug unsigned int sysctl_sched_nr_migrate = SCHED_NR_MIGRATE_BREAK;
-+
-+/*
-+ * Migrate pending tasks in @rq to @dest_cpu
-+ */
-+static inline int
-+migrate_pending_tasks(struct rq *rq, struct rq *dest_rq, const int dest_cpu)
-+{
-+ struct task_struct *p, *skip = rq->curr;
-+ int nr_migrated = 0;
-+ int nr_tries = min(rq->nr_running / 2, sysctl_sched_nr_migrate);
-+
-+ /* Workaround to check that rq->curr is still on the rq */
-+ if (!task_on_rq_queued(skip))
-+ return 0;
-+
-+ while (skip != rq->idle && nr_tries &&
-+ (p = sched_rq_next_task(skip, rq)) != rq->idle) {
-+ skip = sched_rq_next_task(p, rq);
-+ if (cpumask_test_cpu(dest_cpu, p->cpus_ptr)) {
-+ __SCHED_DEQUEUE_TASK(p, rq, 0, );
-+ set_task_cpu(p, dest_cpu);
-+ sched_task_sanity_check(p, dest_rq);
-+ sched_mm_cid_migrate_to(dest_rq, p);
-+ __SCHED_ENQUEUE_TASK(p, dest_rq, 0, );
-+ nr_migrated++;
-+ }
-+ nr_tries--;
-+ }
-+
-+ return nr_migrated;
-+}
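-+
-+/*
-+ * A worked example of the migration budget above: with 9 runnable tasks
-+ * on the source rq and the default sysctl_sched_nr_migrate of 32 (8 on
-+ * PREEMPT_RT, see SCHED_NR_MIGRATE_BREAK), nr_tries = min(9 / 2, 32) = 4,
-+ * so at most four candidates after rq->curr are examined and only those
-+ * allowed on the destination CPU are actually moved.
-+ */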
-+
-+static inline int take_other_rq_tasks(struct rq *rq, int cpu)
-+{
-+ cpumask_t *topo_mask, *end_mask, chk;
-+
-+ if (unlikely(!rq->online))
-+ return 0;
-+
-+ if (cpumask_empty(&sched_rq_pending_mask))
-+ return 0;
-+
-+ topo_mask = per_cpu(sched_cpu_topo_masks, cpu);
-+ end_mask = per_cpu(sched_cpu_topo_end_mask, cpu);
-+ do {
-+ int i;
-+
-+ if (!cpumask_and(&chk, &sched_rq_pending_mask, topo_mask))
-+ continue;
-+
-+ for_each_cpu_wrap(i, &chk, cpu) {
-+ int nr_migrated;
-+ struct rq *src_rq;
-+
-+ src_rq = cpu_rq(i);
-+ if (!do_raw_spin_trylock(&src_rq->lock))
-+ continue;
-+ spin_acquire(&src_rq->lock.dep_map,
-+ SINGLE_DEPTH_NESTING, 1, _RET_IP_);
-+
-+ if ((nr_migrated = migrate_pending_tasks(src_rq, rq, cpu))) {
-+ src_rq->nr_running -= nr_migrated;
-+ if (src_rq->nr_running < 2)
-+ cpumask_clear_cpu(i, &sched_rq_pending_mask);
-+
-+ spin_release(&src_rq->lock.dep_map, _RET_IP_);
-+ do_raw_spin_unlock(&src_rq->lock);
-+
-+ rq->nr_running += nr_migrated;
-+ if (rq->nr_running > 1)
-+ cpumask_set_cpu(cpu, &sched_rq_pending_mask);
-+
-+ update_sched_preempt_mask(rq);
-+ cpufreq_update_util(rq, 0);
-+
-+ return 1;
-+ }
-+
-+ spin_release(&src_rq->lock.dep_map, _RET_IP_);
-+ do_raw_spin_unlock(&src_rq->lock);
-+ }
-+ } while (++topo_mask < end_mask);
-+
-+ return 0;
-+}
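-+
-+/*
-+ * Illustration (editorial sketch, not from the original code): on an SMT2
-+ * machine the topo_mask walk above tries the closest level first, roughly:
-+ *
-+ *	level 0: SMT siblings of this CPU
-+ *	level 1: CPUs sharing the LLC / coregroup
-+ *	level 2: remaining cores, then all other online CPUs
-+ *
-+ * Each candidate runqueue is only trylocked, so a contended lock skips to
-+ * the next CPU in the level instead of spinning.
-+ */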
-+#endif
-+
-+static inline void time_slice_expired(struct task_struct *p, struct rq *rq)
-+{
-+ p->time_slice = sysctl_sched_base_slice;
-+
-+ sched_task_renew(p, rq);
-+
-+ if (SCHED_FIFO != p->policy && task_on_rq_queued(p))
-+ requeue_task(p, rq);
-+}
-+
-+/*
-+ * Timeslices below RESCHED_NS are considered as good as expired as there's no
-+ * point rescheduling when there's so little time left.
-+ */
-+static inline void check_curr(struct task_struct *p, struct rq *rq)
-+{
-+ if (unlikely(rq->idle == p))
-+ return;
-+
-+ update_curr(rq, p);
-+
-+ if (p->time_slice < RESCHED_NS)
-+ time_slice_expired(p, rq);
-+}
-+
-+static inline struct task_struct *
-+choose_next_task(struct rq *rq, int cpu)
-+{
-+ struct task_struct *next = sched_rq_first_task(rq);
-+
-+ if (next == rq->idle) {
-+#ifdef CONFIG_SMP
-+ if (!take_other_rq_tasks(rq, cpu)) {
-+ if (likely(rq->balance_func && rq->online))
-+ rq->balance_func(rq, cpu);
-+#endif /* CONFIG_SMP */
-+
-+ schedstat_inc(rq->sched_goidle);
-+ /*printk(KERN_INFO "sched: choose_next_task(%d) idle %px\n", cpu, next);*/
-+ return next;
-+#ifdef CONFIG_SMP
-+ }
-+ next = sched_rq_first_task(rq);
-+#endif
-+ }
-+#ifdef CONFIG_HIGH_RES_TIMERS
-+ hrtick_start(rq, next->time_slice);
-+#endif
-+ /*printk(KERN_INFO "sched: choose_next_task(%d) next %px\n", cpu, next);*/
-+ return next;
-+}
-+
-+/*
-+ * Constants for the sched_mode argument of __schedule().
-+ *
-+ * The mode argument allows RT enabled kernels to differentiate a
-+ * preemption from blocking on an 'sleeping' spin/rwlock.
-+ */
-+ #define SM_IDLE (-1)
-+ #define SM_NONE 0
-+ #define SM_PREEMPT 1
-+ #define SM_RTLOCK_WAIT 2
-+
-+/*
-+ * schedule() is the main scheduler function.
-+ *
-+ * The main means of driving the scheduler and thus entering this function are:
-+ *
-+ * 1. Explicit blocking: mutex, semaphore, waitqueue, etc.
-+ *
-+ * 2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
-+ * paths. For example, see arch/x86/entry_64.S.
-+ *
-+ * To drive preemption between tasks, the scheduler sets the flag in timer
-+ * interrupt handler sched_tick().
-+ *
-+ * 3. Wakeups don't really cause entry into schedule(). They add a
-+ * task to the run-queue and that's it.
-+ *
-+ * Now, if the new task added to the run-queue preempts the current
-+ * task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
-+ * called on the nearest possible occasion:
-+ *
-+ * - If the kernel is preemptible (CONFIG_PREEMPTION=y):
-+ *
-+ * - in syscall or exception context, at the next outermost
-+ * preempt_enable(). (this might be as soon as the wake_up()'s
-+ * spin_unlock()!)
-+ *
-+ * - in IRQ context, return from interrupt-handler to
-+ * preemptible context
-+ *
-+ * - If the kernel is not preemptible (CONFIG_PREEMPTION is not set)
-+ * then at the next:
-+ *
-+ * - cond_resched() call
-+ * - explicit schedule() call
-+ * - return from syscall or exception to user-space
-+ * - return from interrupt-handler to user-space
-+ *
-+ * WARNING: must be called with preemption disabled!
-+ */
-+static void __sched notrace __schedule(int sched_mode)
-+{
-+ struct task_struct *prev, *next;
-+ /*
-+ * On PREEMPT_RT kernels, SM_RTLOCK_WAIT is noted
-+ * as a preemption by schedule_debug() and RCU.
-+ */
-+ bool preempt = sched_mode > SM_NONE;
-+ unsigned long *switch_count;
-+ unsigned long prev_state;
-+ struct rq *rq;
-+ int cpu;
-+
-+ cpu = smp_processor_id();
-+ rq = cpu_rq(cpu);
-+ prev = rq->curr;
-+
-+ schedule_debug(prev, preempt);
-+
-+ /* Bypass the sched_feat(HRTICK) check, which the Alt schedule framework doesn't support */
-+ hrtick_clear(rq);
-+
-+ local_irq_disable();
-+ rcu_note_context_switch(preempt);
-+
-+ /*
-+ * Make sure that signal_pending_state()->signal_pending() below
-+ * can't be reordered with __set_current_state(TASK_INTERRUPTIBLE)
-+ * done by the caller to avoid the race with signal_wake_up():
-+ *
-+ * __set_current_state(@state) signal_wake_up()
-+ * schedule() set_tsk_thread_flag(p, TIF_SIGPENDING)
-+ * wake_up_state(p, state)
-+ * LOCK rq->lock LOCK p->pi_state
-+ * smp_mb__after_spinlock() smp_mb__after_spinlock()
-+ * if (signal_pending_state()) if (p->state & @state)
-+ *
-+ * Also, the membarrier system call requires a full memory barrier
-+ * after coming from user-space, before storing to rq->curr; this
-+ * barrier matches a full barrier in the proximity of the membarrier
-+ * system call exit.
-+ */
-+ raw_spin_lock(&rq->lock);
-+ smp_mb__after_spinlock();
-+
-+ update_rq_clock(rq);
-+
-+ switch_count = &prev->nivcsw;
-+
-+ /* Task state changes only consider SM_PREEMPT as preemption */
-+ preempt = sched_mode == SM_PREEMPT;
-+
-+ /*
-+ * We must load prev->state once (task_struct::state is volatile), such
-+ * that we form a control dependency vs deactivate_task() below.
-+ */
-+ prev_state = READ_ONCE(prev->__state);
-+ if (sched_mode == SM_IDLE) {
-+ if (!rq->nr_running) {
-+ next = prev;
-+ goto picked;
-+ }
-+ } else if (!preempt && prev_state) {
-+ if (signal_pending_state(prev_state, prev)) {
-+ WRITE_ONCE(prev->__state, TASK_RUNNING);
-+ } else {
-+ prev->sched_contributes_to_load =
-+ (prev_state & TASK_UNINTERRUPTIBLE) &&
-+ !(prev_state & TASK_NOLOAD) &&
-+ !(prev_state & TASK_FROZEN);
-+
-+ /*
-+ * __schedule() ttwu()
-+ * prev_state = prev->state; if (p->on_rq && ...)
-+ * if (prev_state) goto out;
-+ * p->on_rq = 0; smp_acquire__after_ctrl_dep();
-+ * p->state = TASK_WAKING
-+ *
-+ * Where __schedule() and ttwu() have matching control dependencies.
-+ *
-+ * After this, schedule() must not care about p->state any more.
-+ */
-+ sched_task_deactivate(prev, rq);
-+ block_task(rq, prev);
-+ }
-+ switch_count = &prev->nvcsw;
-+ }
-+
-+ check_curr(prev, rq);
-+
-+ next = choose_next_task(rq, cpu);
-+picked:
-+ clear_tsk_need_resched(prev);
-+ clear_preempt_need_resched();
-+#ifdef CONFIG_SCHED_DEBUG
-+ rq->last_seen_need_resched_ns = 0;
-+#endif
-+
-+ if (likely(prev != next)) {
-+ next->last_ran = rq->clock_task;
-+
-+ /*printk(KERN_INFO "sched: %px -> %px\n", prev, next);*/
-+ rq->nr_switches++;
-+ /*
-+ * RCU users of rcu_dereference(rq->curr) may not see
-+ * changes to task_struct made by pick_next_task().
-+ */
-+ RCU_INIT_POINTER(rq->curr, next);
-+ /*
-+ * The membarrier system call requires each architecture
-+ * to have a full memory barrier after updating
-+ * rq->curr, before returning to user-space.
-+ *
-+ * Here are the schemes providing that barrier on the
-+ * various architectures:
-+ * - mm ? switch_mm() : mmdrop() for x86, s390, sparc, PowerPC,
-+ * RISC-V. switch_mm() relies on membarrier_arch_switch_mm()
-+ * on PowerPC and on RISC-V.
-+ * - finish_lock_switch() for weakly-ordered
-+ * architectures where spin_unlock is a full barrier,
-+ * - switch_to() for arm64 (weakly-ordered, spin_unlock
-+ * is a RELEASE barrier),
-+ *
-+ * The barrier matches a full barrier in the proximity of
-+ * the membarrier system call entry.
-+ *
-+ * On RISC-V, this barrier pairing is also needed for the
-+ * SYNC_CORE command when switching between processes, cf.
-+ * the inline comments in membarrier_arch_switch_mm().
-+ */
-+ ++*switch_count;
-+
-+ trace_sched_switch(preempt, prev, next, prev_state);
-+
-+ /* Also unlocks the rq: */
-+ rq = context_switch(rq, prev, next);
-+
-+ cpu = cpu_of(rq);
-+ } else {
-+ __balance_callbacks(rq);
-+ raw_spin_unlock_irq(&rq->lock);
-+ }
-+}
-+
-+void __noreturn do_task_dead(void)
-+{
-+ /* Causes final put_task_struct in finish_task_switch(): */
-+ set_special_state(TASK_DEAD);
-+
-+ /* Tell freezer to ignore us: */
-+ current->flags |= PF_NOFREEZE;
-+
-+ __schedule(SM_NONE);
-+ BUG();
-+
-+ /* Avoid "noreturn function does return" - but don't continue if BUG() is a NOP: */
-+ for (;;)
-+ cpu_relax();
-+}
-+
-+static inline void sched_submit_work(struct task_struct *tsk)
-+{
-+ static DEFINE_WAIT_OVERRIDE_MAP(sched_map, LD_WAIT_CONFIG);
-+ unsigned int task_flags;
-+
-+ /*
-+ * Establish LD_WAIT_CONFIG context to ensure none of the code called
-+ * will use a blocking primitive -- which would lead to recursion.
-+ */
-+ lock_map_acquire_try(&sched_map);
-+
-+ task_flags = tsk->flags;
-+ /*
-+ * If a worker goes to sleep, notify and ask workqueue whether it
-+ * wants to wake up a task to maintain concurrency.
-+ */
-+ if (task_flags & PF_WQ_WORKER)
-+ wq_worker_sleeping(tsk);
-+ else if (task_flags & PF_IO_WORKER)
-+ io_wq_worker_sleeping(tsk);
-+
-+ /*
-+ * spinlock and rwlock must not flush block requests. This will
-+ * deadlock if the callback attempts to acquire a lock which is
-+ * already acquired.
-+ */
-+ SCHED_WARN_ON(current->__state & TASK_RTLOCK_WAIT);
-+
-+ /*
-+ * If we are going to sleep and we have plugged IO queued,
-+ * make sure to submit it to avoid deadlocks.
-+ */
-+ blk_flush_plug(tsk->plug, true);
-+
-+ lock_map_release(&sched_map);
-+}
-+
-+static void sched_update_worker(struct task_struct *tsk)
-+{
-+ if (tsk->flags & (PF_WQ_WORKER | PF_IO_WORKER | PF_BLOCK_TS)) {
-+ if (tsk->flags & PF_BLOCK_TS)
-+ blk_plug_invalidate_ts(tsk);
-+ if (tsk->flags & PF_WQ_WORKER)
-+ wq_worker_running(tsk);
-+ else if (tsk->flags & PF_IO_WORKER)
-+ io_wq_worker_running(tsk);
-+ }
-+}
-+
-+static __always_inline void __schedule_loop(int sched_mode)
-+{
-+ do {
-+ preempt_disable();
-+ __schedule(sched_mode);
-+ sched_preempt_enable_no_resched();
-+ } while (need_resched());
-+}
-+
-+asmlinkage __visible void __sched schedule(void)
-+{
-+ struct task_struct *tsk = current;
-+
-+#ifdef CONFIG_RT_MUTEXES
-+ lockdep_assert(!tsk->sched_rt_mutex);
-+#endif
-+
-+ if (!task_is_running(tsk))
-+ sched_submit_work(tsk);
-+ __schedule_loop(SM_NONE);
-+ sched_update_worker(tsk);
-+}
-+EXPORT_SYMBOL(schedule);
-+
-+/*
-+ * synchronize_rcu_tasks() makes sure that no task is stuck in preempted
-+ * state (have scheduled out non-voluntarily) by making sure that all
-+ * tasks have either left the run queue or have gone into user space.
-+ * As idle tasks do not do either, they must not ever be preempted
-+ * (schedule out non-voluntarily).
-+ *
-+ * schedule_idle() is similar to schedule_preempt_disabled() except that it
-+ * never enables preemption because it does not call sched_submit_work().
-+ */
-+void __sched schedule_idle(void)
-+{
-+ /*
-+ * As this skips calling sched_submit_work(), which the idle task does
-+ * regardless because that function is a NOP when the task is in a
-+ * TASK_RUNNING state, make sure this isn't used someplace that the
-+ * current task can be in any other state. Note, idle is always in the
-+ * TASK_RUNNING state.
-+ */
-+ WARN_ON_ONCE(current->__state);
-+ do {
-+ __schedule(SM_IDLE);
-+ } while (need_resched());
-+}
-+
-+#if defined(CONFIG_CONTEXT_TRACKING_USER) && !defined(CONFIG_HAVE_CONTEXT_TRACKING_USER_OFFSTACK)
-+asmlinkage __visible void __sched schedule_user(void)
-+{
-+ /*
-+ * If we come here after a random call to set_need_resched(),
-+ * or we have been woken up remotely but the IPI has not yet arrived,
-+ * we haven't yet exited the RCU idle mode. Do it here manually until
-+ * we find a better solution.
-+ *
-+ * NB: There are buggy callers of this function. Ideally we
-+ * should warn if prev_state != CT_STATE_USER, but that will trigger
-+ * too frequently to make sense yet.
-+ */
-+ enum ctx_state prev_state = exception_enter();
-+ schedule();
-+ exception_exit(prev_state);
-+}
-+#endif
-+
-+/**
-+ * schedule_preempt_disabled - called with preemption disabled
-+ *
-+ * Returns with preemption disabled. Note: preempt_count must be 1
-+ */
-+void __sched schedule_preempt_disabled(void)
-+{
-+ sched_preempt_enable_no_resched();
-+ schedule();
-+ preempt_disable();
-+}
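-+
-+/*
-+ * Pairing sketch (modelled on kthread(); done is illustrative): the caller
-+ * owns the preempt-disabled section across the sleep:
-+ *
-+ *	__set_current_state(TASK_UNINTERRUPTIBLE);
-+ *	preempt_disable();
-+ *	complete(done);			// wake our creator
-+ *	schedule_preempt_disabled();	// sleep; returns preempt-disabled
-+ *	preempt_enable();
-+ */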
-+
-+#ifdef CONFIG_PREEMPT_RT
-+void __sched notrace schedule_rtlock(void)
-+{
-+ __schedule_loop(SM_RTLOCK_WAIT);
-+}
-+NOKPROBE_SYMBOL(schedule_rtlock);
-+#endif
-+
-+static void __sched notrace preempt_schedule_common(void)
-+{
-+ do {
-+ /*
-+ * Because the function tracer can trace preempt_count_sub()
-+ * and it also uses preempt_enable/disable_notrace(), if
-+ * NEED_RESCHED is set, the preempt_enable_notrace() called
-+ * by the function tracer will call this function again and
-+ * cause infinite recursion.
-+ *
-+ * Preemption must be disabled here before the function
-+ * tracer can trace. Break up preempt_disable() into two
-+ * calls. One to disable preemption without fear of being
-+ * traced. The other to still record the preemption latency,
-+ * which can also be traced by the function tracer.
-+ */
-+ preempt_disable_notrace();
-+ preempt_latency_start(1);
-+ __schedule(SM_PREEMPT);
-+ preempt_latency_stop(1);
-+ preempt_enable_no_resched_notrace();
-+
-+ /*
-+ * Check again in case we missed a preemption opportunity
-+ * between schedule and now.
-+ */
-+ } while (need_resched());
-+}
-+
-+#ifdef CONFIG_PREEMPTION
-+/*
-+ * This is the entry point to schedule() from in-kernel preemption
-+ * off of preempt_enable.
-+ */
-+asmlinkage __visible void __sched notrace preempt_schedule(void)
-+{
-+ /*
-+ * If there is a non-zero preempt_count or interrupts are disabled,
-+ * we do not want to preempt the current task. Just return..
-+ */
-+ if (likely(!preemptible()))
-+ return;
-+
-+ preempt_schedule_common();
-+}
-+NOKPROBE_SYMBOL(preempt_schedule);
-+EXPORT_SYMBOL(preempt_schedule);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#ifndef preempt_schedule_dynamic_enabled
-+#define preempt_schedule_dynamic_enabled preempt_schedule
-+#define preempt_schedule_dynamic_disabled NULL
-+#endif
-+DEFINE_STATIC_CALL(preempt_schedule, preempt_schedule_dynamic_enabled);
-+EXPORT_STATIC_CALL_TRAMP(preempt_schedule);
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule);
-+void __sched notrace dynamic_preempt_schedule(void)
-+{
-+ if (!static_branch_unlikely(&sk_dynamic_preempt_schedule))
-+ return;
-+ preempt_schedule();
-+}
-+NOKPROBE_SYMBOL(dynamic_preempt_schedule);
-+EXPORT_SYMBOL(dynamic_preempt_schedule);
-+#endif
-+#endif
-+
-+/**
-+ * preempt_schedule_notrace - preempt_schedule called by tracing
-+ *
-+ * The tracing infrastructure uses preempt_enable_notrace to prevent
-+ * recursion and tracing preempt enabling caused by the tracing
-+ * infrastructure itself. But as tracing can happen in areas coming
-+ * from userspace or just about to enter userspace, a preempt enable
-+ * can occur before user_exit() is called. This will cause the scheduler
-+ * to be called when the system is still in usermode.
-+ *
-+ * To prevent this, the preempt_enable_notrace will use this function
-+ * instead of preempt_schedule() to exit user context if needed before
-+ * calling the scheduler.
-+ */
-+asmlinkage __visible void __sched notrace preempt_schedule_notrace(void)
-+{
-+ enum ctx_state prev_ctx;
-+
-+ if (likely(!preemptible()))
-+ return;
-+
-+ do {
-+ /*
-+ * Because the function tracer can trace preempt_count_sub()
-+ * and it also uses preempt_enable/disable_notrace(), if
-+ * NEED_RESCHED is set, the preempt_enable_notrace() called
-+ * by the function tracer will call this function again and
-+ * cause infinite recursion.
-+ *
-+ * Preemption must be disabled here before the function
-+ * tracer can trace. Break up preempt_disable() into two
-+ * calls. One to disable preemption without fear of being
-+ * traced. The other to still record the preemption latency,
-+ * which can also be traced by the function tracer.
-+ */
-+ preempt_disable_notrace();
-+ preempt_latency_start(1);
-+ /*
-+ * Needs preempt disabled in case user_exit() is traced
-+ * and the tracer calls preempt_enable_notrace() causing
-+ * an infinite recursion.
-+ */
-+ prev_ctx = exception_enter();
-+ __schedule(SM_PREEMPT);
-+ exception_exit(prev_ctx);
-+
-+ preempt_latency_stop(1);
-+ preempt_enable_no_resched_notrace();
-+ } while (need_resched());
-+}
-+EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#ifndef preempt_schedule_notrace_dynamic_enabled
-+#define preempt_schedule_notrace_dynamic_enabled preempt_schedule_notrace
-+#define preempt_schedule_notrace_dynamic_disabled NULL
-+#endif
-+DEFINE_STATIC_CALL(preempt_schedule_notrace, preempt_schedule_notrace_dynamic_enabled);
-+EXPORT_STATIC_CALL_TRAMP(preempt_schedule_notrace);
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+static DEFINE_STATIC_KEY_TRUE(sk_dynamic_preempt_schedule_notrace);
-+void __sched notrace dynamic_preempt_schedule_notrace(void)
-+{
-+ if (!static_branch_unlikely(&sk_dynamic_preempt_schedule_notrace))
-+ return;
-+ preempt_schedule_notrace();
-+}
-+NOKPROBE_SYMBOL(dynamic_preempt_schedule_notrace);
-+EXPORT_SYMBOL(dynamic_preempt_schedule_notrace);
-+#endif
-+#endif
-+
-+#endif /* CONFIG_PREEMPTION */
-+
-+/*
-+ * This is the entry point to schedule() from kernel preemption
-+ * off of IRQ context.
-+ * Note that this is called and returns with IRQs disabled. This will
-+ * protect us against recursive calling from IRQ contexts.
-+ */
-+asmlinkage __visible void __sched preempt_schedule_irq(void)
-+{
-+ enum ctx_state prev_state;
-+
-+ /* Catch callers which need to be fixed */
-+ BUG_ON(preempt_count() || !irqs_disabled());
-+
-+ prev_state = exception_enter();
-+
-+ do {
-+ preempt_disable();
-+ local_irq_enable();
-+ __schedule(SM_PREEMPT);
-+ local_irq_disable();
-+ sched_preempt_enable_no_resched();
-+ } while (need_resched());
-+
-+ exception_exit(prev_state);
-+}
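-+
-+/*
-+ * Caller sketch (hedged; the exact site lives in per-architecture entry
-+ * code): on return from an interrupt to kernel context:
-+ *
-+ *	if (!preempt_count() && need_resched())
-+ *		preempt_schedule_irq();		// IRQs are still disabled here
-+ */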
-+
-+int default_wake_function(wait_queue_entry_t *curr, unsigned mode, int wake_flags,
-+ void *key)
-+{
-+ WARN_ON_ONCE(IS_ENABLED(CONFIG_SCHED_DEBUG) && wake_flags & ~(WF_SYNC|WF_CURRENT_CPU));
-+ return try_to_wake_up(curr->private, mode, wake_flags);
-+}
-+EXPORT_SYMBOL(default_wake_function);
-+
-+void check_task_changed(struct task_struct *p, struct rq *rq)
-+{
-+ /* Trigger resched if task sched_prio has been modified. */
-+ if (task_on_rq_queued(p)) {
-+ update_rq_clock(rq);
-+ requeue_task(p, rq);
-+ wakeup_preempt(rq);
-+ }
-+}
-+
-+void __setscheduler_prio(struct task_struct *p, int prio)
-+{
-+ p->prio = prio;
-+}
-+
-+#ifdef CONFIG_RT_MUTEXES
-+
-+/*
-+ * Would be more useful with typeof()/auto_type but they don't mix with
-+ * bit-fields. Since it's a local thing, use int. Keep the generic sounding
-+ * name such that if someone were to implement this function we get to compare
-+ * notes.
-+ */
-+#define fetch_and_set(x, v) ({ int _x = (x); (x) = (v); _x; })
-+
-+void rt_mutex_pre_schedule(void)
-+{
-+ lockdep_assert(!fetch_and_set(current->sched_rt_mutex, 1));
-+ sched_submit_work(current);
-+}
-+
-+void rt_mutex_schedule(void)
-+{
-+ lockdep_assert(current->sched_rt_mutex);
-+ __schedule_loop(SM_NONE);
-+}
-+
-+void rt_mutex_post_schedule(void)
-+{
-+ sched_update_worker(current);
-+ lockdep_assert(fetch_and_set(current->sched_rt_mutex, 0));
-+}
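-+
-+/*
-+ * Pairing sketch (modelled on the rtmutex-based locking slowpaths): the
-+ * pre/post hooks bracket the section that may block on the lock:
-+ *
-+ *	rt_mutex_pre_schedule();
-+ *	ret = __rt_mutex_lock(&lock->rtmutex, state);	// may rt_mutex_schedule()
-+ *	rt_mutex_post_schedule();
-+ */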
-+
-+/*
-+ * rt_mutex_setprio - set the current priority of a task
-+ * @p: task to boost
-+ * @pi_task: donor task
-+ *
-+ * This function changes the 'effective' priority of a task. It does
-+ * not touch ->normal_prio like __setscheduler().
-+ *
-+ * Used by the rt_mutex code to implement priority inheritance
-+ * logic. Call site only calls if the priority of the task changed.
-+ */
-+void rt_mutex_setprio(struct task_struct *p, struct task_struct *pi_task)
-+{
-+ int prio;
-+ struct rq *rq;
-+ raw_spinlock_t *lock;
-+
-+ /* XXX used to be waiter->prio, not waiter->task->prio */
-+ prio = __rt_effective_prio(pi_task, p->normal_prio);
-+
-+ /*
-+ * If nothing changed, bail early.
-+ */
-+ if (p->pi_top_task == pi_task && prio == p->prio)
-+ return;
-+
-+ rq = __task_access_lock(p, &lock);
-+ /*
-+ * Set under pi_lock && rq->lock, such that the value can be used under
-+ * either lock.
-+ *
-+ * Note that it takes a lot of trickiness to make this pointer cache work
-+ * right. rt_mutex_slowunlock()+rt_mutex_postunlock() work together to
-+ * ensure a task is de-boosted (pi_task is set to NULL) before the
-+ * task is allowed to run again (and can exit). This ensures the pointer
-+ * points to a blocked task -- which guarantees the task is present.
-+ */
-+ p->pi_top_task = pi_task;
-+
-+ /*
-+ * For FIFO/RR we only need to set prio, if that matches we're done.
-+ */
-+ if (prio == p->prio)
-+ goto out_unlock;
-+
-+ /*
-+ * Idle task boosting is a no-no in general. There is one
-+ * exception, when PREEMPT_RT and NOHZ is active:
-+ *
-+ * The idle task calls get_next_timer_interrupt() and holds
-+ * the timer wheel base->lock on the CPU and another CPU wants
-+ * to access the timer (probably to cancel it). We can safely
-+ * ignore the boosting request, as the idle CPU runs this code
-+ * with interrupts disabled and will complete the lock
-+ * protected section without being interrupted. So there is no
-+ * real need to boost.
-+ */
-+ if (unlikely(p == rq->idle)) {
-+ WARN_ON(p != rq->curr);
-+ WARN_ON(p->pi_blocked_on);
-+ goto out_unlock;
-+ }
-+
-+ trace_sched_pi_setprio(p, pi_task);
-+
-+ __setscheduler_prio(p, prio);
-+
-+ check_task_changed(p, rq);
-+out_unlock:
-+ /* Prevent rq from going away on us: */
-+ preempt_disable();
-+
-+ if (task_on_rq_queued(p))
-+ __balance_callbacks(rq);
-+ __task_access_unlock(p, lock);
-+
-+ preempt_enable();
-+}
-+#endif
-+
-+#if !defined(CONFIG_PREEMPTION) || defined(CONFIG_PREEMPT_DYNAMIC)
-+int __sched __cond_resched(void)
-+{
-+ if (should_resched(0)) {
-+ preempt_schedule_common();
-+ return 1;
-+ }
-+ /*
-+ * In preemptible kernels, ->rcu_read_lock_nesting tells the tick
-+ * whether the current CPU is in an RCU read-side critical section,
-+ * so the tick can report quiescent states even for CPUs looping
-+ * in kernel context. In contrast, in non-preemptible kernels,
-+ * RCU readers leave no in-memory hints, which means that CPU-bound
-+ * processes executing in kernel context might never report an
-+ * RCU quiescent state. Therefore, the following code causes
-+ * cond_resched() to report a quiescent state, but only when RCU
-+ * is in urgent need of one.
-+ */
-+#ifndef CONFIG_PREEMPT_RCU
-+ rcu_all_qs();
-+#endif
-+ return 0;
-+}
-+EXPORT_SYMBOL(__cond_resched);
-+#endif
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#define cond_resched_dynamic_enabled __cond_resched
-+#define cond_resched_dynamic_disabled ((void *)&__static_call_return0)
-+DEFINE_STATIC_CALL_RET0(cond_resched, __cond_resched);
-+EXPORT_STATIC_CALL_TRAMP(cond_resched);
-+
-+#define might_resched_dynamic_enabled __cond_resched
-+#define might_resched_dynamic_disabled ((void *)&__static_call_return0)
-+DEFINE_STATIC_CALL_RET0(might_resched, __cond_resched);
-+EXPORT_STATIC_CALL_TRAMP(might_resched);
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+static DEFINE_STATIC_KEY_FALSE(sk_dynamic_cond_resched);
-+int __sched dynamic_cond_resched(void)
-+{
-+ klp_sched_try_switch();
-+ if (!static_branch_unlikely(&sk_dynamic_cond_resched))
-+ return 0;
-+ return __cond_resched();
-+}
-+EXPORT_SYMBOL(dynamic_cond_resched);
-+
-+static DEFINE_STATIC_KEY_FALSE(sk_dynamic_might_resched);
-+int __sched dynamic_might_resched(void)
-+{
-+ if (!static_branch_unlikely(&sk_dynamic_might_resched))
-+ return 0;
-+ return __cond_resched();
-+}
-+EXPORT_SYMBOL(dynamic_might_resched);
-+#endif
-+#endif
-+
-+/*
-+ * __cond_resched_lock() - if a reschedule is pending, drop the given lock,
-+ * call schedule, and on return reacquire the lock.
-+ *
-+ * This works OK both with and without CONFIG_PREEMPTION. We do strange low-level
-+ * operations here to prevent schedule() from being called twice (once via
-+ * spin_unlock(), once by hand).
-+ */
-+int __cond_resched_lock(spinlock_t *lock)
-+{
-+ int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+ int ret = 0;
-+
-+ lockdep_assert_held(lock);
-+
-+ if (spin_needbreak(lock) || resched) {
-+ spin_unlock(lock);
-+ if (!_cond_resched())
-+ cpu_relax();
-+ ret = 1;
-+ spin_lock(lock);
-+ }
-+ return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_lock);
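-+
-+/*
-+ * Usage sketch (via the cond_resched_lock() wrapper; lock/table/process()
-+ * are illustrative): break up a long scan under a contended spinlock:
-+ *
-+ *	spin_lock(&lock);
-+ *	for (i = 0; i < nr; i++) {
-+ *		process(table[i]);
-+ *		cond_resched_lock(&lock);	// may drop, schedule, re-acquire
-+ *	}
-+ *	spin_unlock(&lock);
-+ */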
-+
-+int __cond_resched_rwlock_read(rwlock_t *lock)
-+{
-+ int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+ int ret = 0;
-+
-+ lockdep_assert_held_read(lock);
-+
-+ if (rwlock_needbreak(lock) || resched) {
-+ read_unlock(lock);
-+ if (!_cond_resched())
-+ cpu_relax();
-+ ret = 1;
-+ read_lock(lock);
-+ }
-+ return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_rwlock_read);
-+
-+int __cond_resched_rwlock_write(rwlock_t *lock)
-+{
-+ int resched = should_resched(PREEMPT_LOCK_OFFSET);
-+ int ret = 0;
-+
-+ lockdep_assert_held_write(lock);
-+
-+ if (rwlock_needbreak(lock) || resched) {
-+ write_unlock(lock);
-+ if (!_cond_resched())
-+ cpu_relax();
-+ ret = 1;
-+ write_lock(lock);
-+ }
-+ return ret;
-+}
-+EXPORT_SYMBOL(__cond_resched_rwlock_write);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+
-+#ifdef CONFIG_GENERIC_ENTRY
-+#include <linux/entry-common.h>
-+#endif
-+
-+/*
-+ * SC:cond_resched
-+ * SC:might_resched
-+ * SC:preempt_schedule
-+ * SC:preempt_schedule_notrace
-+ * SC:irqentry_exit_cond_resched
-+ *
-+ *
-+ * NONE:
-+ * cond_resched <- __cond_resched
-+ * might_resched <- RET0
-+ * preempt_schedule <- NOP
-+ * preempt_schedule_notrace <- NOP
-+ * irqentry_exit_cond_resched <- NOP
-+ *
-+ * VOLUNTARY:
-+ * cond_resched <- __cond_resched
-+ * might_resched <- __cond_resched
-+ * preempt_schedule <- NOP
-+ * preempt_schedule_notrace <- NOP
-+ * irqentry_exit_cond_resched <- NOP
-+ *
-+ * FULL:
-+ * cond_resched <- RET0
-+ * might_resched <- RET0
-+ * preempt_schedule <- preempt_schedule
-+ * preempt_schedule_notrace <- preempt_schedule_notrace
-+ * irqentry_exit_cond_resched <- irqentry_exit_cond_resched
-+ */
-+
-+enum {
-+ preempt_dynamic_undefined = -1,
-+ preempt_dynamic_none,
-+ preempt_dynamic_voluntary,
-+ preempt_dynamic_full,
-+};
-+
-+int preempt_dynamic_mode = preempt_dynamic_undefined;
-+
-+int sched_dynamic_mode(const char *str)
-+{
-+ if (!strcmp(str, "none"))
-+ return preempt_dynamic_none;
-+
-+ if (!strcmp(str, "voluntary"))
-+ return preempt_dynamic_voluntary;
-+
-+ if (!strcmp(str, "full"))
-+ return preempt_dynamic_full;
-+
-+ return -EINVAL;
-+}
-+
-+#if defined(CONFIG_HAVE_PREEMPT_DYNAMIC_CALL)
-+#define preempt_dynamic_enable(f) static_call_update(f, f##_dynamic_enabled)
-+#define preempt_dynamic_disable(f) static_call_update(f, f##_dynamic_disabled)
-+#elif defined(CONFIG_HAVE_PREEMPT_DYNAMIC_KEY)
-+#define preempt_dynamic_enable(f) static_key_enable(&sk_dynamic_##f.key)
-+#define preempt_dynamic_disable(f) static_key_disable(&sk_dynamic_##f.key)
-+#else
-+#error "Unsupported PREEMPT_DYNAMIC mechanism"
-+#endif
-+
-+static DEFINE_MUTEX(sched_dynamic_mutex);
-+static bool klp_override;
-+
-+static void __sched_dynamic_update(int mode)
-+{
-+ /*
-+ * Avoid {NONE,VOLUNTARY} -> FULL transitions from ever ending up in
-+ * the ZERO state, which is invalid.
-+ */
-+ if (!klp_override)
-+ preempt_dynamic_enable(cond_resched);
-+ preempt_dynamic_enable(might_resched);
-+ preempt_dynamic_enable(preempt_schedule);
-+ preempt_dynamic_enable(preempt_schedule_notrace);
-+ preempt_dynamic_enable(irqentry_exit_cond_resched);
-+
-+ switch (mode) {
-+ case preempt_dynamic_none:
-+ if (!klp_override)
-+ preempt_dynamic_enable(cond_resched);
-+ preempt_dynamic_disable(might_resched);
-+ preempt_dynamic_disable(preempt_schedule);
-+ preempt_dynamic_disable(preempt_schedule_notrace);
-+ preempt_dynamic_disable(irqentry_exit_cond_resched);
-+ if (mode != preempt_dynamic_mode)
-+ pr_info("Dynamic Preempt: none\n");
-+ break;
-+
-+ case preempt_dynamic_voluntary:
-+ if (!klp_override)
-+ preempt_dynamic_enable(cond_resched);
-+ preempt_dynamic_enable(might_resched);
-+ preempt_dynamic_disable(preempt_schedule);
-+ preempt_dynamic_disable(preempt_schedule_notrace);
-+ preempt_dynamic_disable(irqentry_exit_cond_resched);
-+ if (mode != preempt_dynamic_mode)
-+ pr_info("Dynamic Preempt: voluntary\n");
-+ break;
-+
-+ case preempt_dynamic_full:
-+ if (!klp_override)
-+ preempt_dynamic_disable(cond_resched);
-+ preempt_dynamic_disable(might_resched);
-+ preempt_dynamic_enable(preempt_schedule);
-+ preempt_dynamic_enable(preempt_schedule_notrace);
-+ preempt_dynamic_enable(irqentry_exit_cond_resched);
-+ if (mode != preempt_dynamic_mode)
-+ pr_info("Dynamic Preempt: full\n");
-+ break;
-+ }
-+
-+ preempt_dynamic_mode = mode;
-+}
-+
-+void sched_dynamic_update(int mode)
-+{
-+ mutex_lock(&sched_dynamic_mutex);
-+ __sched_dynamic_update(mode);
-+ mutex_unlock(&sched_dynamic_mutex);
-+}
-+
-+#ifdef CONFIG_HAVE_PREEMPT_DYNAMIC_CALL
-+
-+static int klp_cond_resched(void)
-+{
-+ __klp_sched_try_switch();
-+ return __cond_resched();
-+}
-+
-+void sched_dynamic_klp_enable(void)
-+{
-+ mutex_lock(&sched_dynamic_mutex);
-+
-+ klp_override = true;
-+ static_call_update(cond_resched, klp_cond_resched);
-+
-+ mutex_unlock(&sched_dynamic_mutex);
-+}
-+
-+void sched_dynamic_klp_disable(void)
-+{
-+ mutex_lock(&sched_dynamic_mutex);
-+
-+ klp_override = false;
-+ __sched_dynamic_update(preempt_dynamic_mode);
-+
-+ mutex_unlock(&sched_dynamic_mutex);
-+}
-+
-+#endif /* CONFIG_HAVE_PREEMPT_DYNAMIC_CALL */
-+
-+static int __init setup_preempt_mode(char *str)
-+{
-+ int mode = sched_dynamic_mode(str);
-+
-+ if (mode < 0) {
-+ pr_warn("Dynamic Preempt: unsupported mode: %s\n", str);
-+ return 0;
-+ }
-+
-+ sched_dynamic_update(mode);
-+ return 1;
-+}
-+__setup("preempt=", setup_preempt_mode);
-+
-+static void __init preempt_dynamic_init(void)
-+{
-+ if (preempt_dynamic_mode == preempt_dynamic_undefined) {
-+ if (IS_ENABLED(CONFIG_PREEMPT_NONE)) {
-+ sched_dynamic_update(preempt_dynamic_none);
-+ } else if (IS_ENABLED(CONFIG_PREEMPT_VOLUNTARY)) {
-+ sched_dynamic_update(preempt_dynamic_voluntary);
-+ } else {
-+ /* Default static call setting, nothing to do */
-+ WARN_ON_ONCE(!IS_ENABLED(CONFIG_PREEMPT));
-+ preempt_dynamic_mode = preempt_dynamic_full;
-+ pr_info("Dynamic Preempt: full\n");
-+ }
-+ }
-+}
-+
-+#define PREEMPT_MODEL_ACCESSOR(mode) \
-+ bool preempt_model_##mode(void) \
-+ { \
-+ WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined); \
-+ return preempt_dynamic_mode == preempt_dynamic_##mode; \
-+ } \
-+ EXPORT_SYMBOL_GPL(preempt_model_##mode)
-+
-+PREEMPT_MODEL_ACCESSOR(none);
-+PREEMPT_MODEL_ACCESSOR(voluntary);
-+PREEMPT_MODEL_ACCESSOR(full);
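-+
-+/*
-+ * For reference, PREEMPT_MODEL_ACCESSOR(none) above expands to:
-+ *
-+ *	bool preempt_model_none(void)
-+ *	{
-+ *		WARN_ON_ONCE(preempt_dynamic_mode == preempt_dynamic_undefined);
-+ *		return preempt_dynamic_mode == preempt_dynamic_none;
-+ *	}
-+ *	EXPORT_SYMBOL_GPL(preempt_model_none);
-+ */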
-+
-+#else /* !CONFIG_PREEMPT_DYNAMIC: */
-+
-+static inline void preempt_dynamic_init(void) { }
-+
-+#endif /* CONFIG_PREEMPT_DYNAMIC */
-+
-+int io_schedule_prepare(void)
-+{
-+ int old_iowait = current->in_iowait;
-+
-+ current->in_iowait = 1;
-+ blk_flush_plug(current->plug, true);
-+ return old_iowait;
-+}
-+
-+void io_schedule_finish(int token)
-+{
-+ current->in_iowait = token;
-+}
-+
-+/*
-+ * This task is about to go to sleep on IO. Increment rq->nr_iowait so
-+ * that process accounting knows that this is a task in IO wait state.
-+ *
-+ * But don't do that if it is a deliberate, throttling IO wait (this task
-+ * has set its backing_dev_info: the queue against which it should throttle)
-+ */
-+
-+long __sched io_schedule_timeout(long timeout)
-+{
-+ int token;
-+ long ret;
-+
-+ token = io_schedule_prepare();
-+ ret = schedule_timeout(timeout);
-+ io_schedule_finish(token);
-+
-+ return ret;
-+}
-+EXPORT_SYMBOL(io_schedule_timeout);
-+
-+void __sched io_schedule(void)
-+{
-+ int token;
-+
-+ token = io_schedule_prepare();
-+ schedule();
-+ io_schedule_finish(token);
-+}
-+EXPORT_SYMBOL(io_schedule);
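-+
-+/*
-+ * Pairing sketch (modelled on mutex_lock_io()): any sleep can be accounted
-+ * as I/O wait by bracketing it with the prepare/finish helpers:
-+ *
-+ *	int token = io_schedule_prepare();
-+ *	mutex_lock(lock);		// time slept here counts as iowait
-+ *	io_schedule_finish(token);
-+ */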
-+
-+void sched_show_task(struct task_struct *p)
-+{
-+ unsigned long free;
-+ int ppid;
-+
-+ if (!try_get_task_stack(p))
-+ return;
-+
-+ pr_info("task:%-15.15s state:%c", p->comm, task_state_to_char(p));
-+
-+ if (task_is_running(p))
-+ pr_cont(" running task ");
-+ free = stack_not_used(p);
-+ ppid = 0;
-+ rcu_read_lock();
-+ if (pid_alive(p))
-+ ppid = task_pid_nr(rcu_dereference(p->real_parent));
-+ rcu_read_unlock();
-+ pr_cont(" stack:%-5lu pid:%-5d tgid:%-5d ppid:%-6d flags:0x%08lx\n",
-+ free, task_pid_nr(p), task_tgid_nr(p),
-+ ppid, read_task_thread_flags(p));
-+
-+ print_worker_info(KERN_INFO, p);
-+ print_stop_info(KERN_INFO, p);
-+ show_stack(p, NULL, KERN_INFO);
-+ put_task_stack(p);
-+}
-+EXPORT_SYMBOL_GPL(sched_show_task);
-+
-+static inline bool
-+state_filter_match(unsigned long state_filter, struct task_struct *p)
-+{
-+ unsigned int state = READ_ONCE(p->__state);
-+
-+ /* no filter, everything matches */
-+ if (!state_filter)
-+ return true;
-+
-+ /* filter, but doesn't match */
-+ if (!(state & state_filter))
-+ return false;
-+
-+ /*
-+ * When looking for TASK_UNINTERRUPTIBLE skip TASK_IDLE (allows
-+ * TASK_KILLABLE).
-+ */
-+ if (state_filter == TASK_UNINTERRUPTIBLE && (state & TASK_NOLOAD))
-+ return false;
-+
-+ return true;
-+}
-+
-+
-+void show_state_filter(unsigned int state_filter)
-+{
-+ struct task_struct *g, *p;
-+
-+ rcu_read_lock();
-+ for_each_process_thread(g, p) {
-+ /*
-+ * reset the NMI-timeout, since listing all tasks on a slow
-+ * console might take a lot of time:
-+ * Also, reset softlockup watchdogs on all CPUs, because
-+ * another CPU might be blocked waiting for us to process
-+ * an IPI.
-+ */
-+ touch_nmi_watchdog();
-+ touch_all_softlockup_watchdogs();
-+ if (state_filter_match(state_filter, p))
-+ sched_show_task(p);
-+ }
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+ /* TODO: Alt schedule FW should support this
-+ if (!state_filter)
-+ sysrq_sched_debug_show();
-+ */
-+#endif
-+ rcu_read_unlock();
-+ /*
-+ * Only show locks if all tasks are dumped:
-+ */
-+ if (!state_filter)
-+ debug_show_all_locks();
-+}
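-+
-+/*
-+ * Caller sketch: the sysrq handlers pick the filter, e.g.:
-+ *
-+ *	show_state_filter(0);				// sysrq-t: all tasks
-+ *	show_state_filter(TASK_UNINTERRUPTIBLE);	// sysrq-w: blocked tasks
-+ */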
-+
-+void dump_cpu_task(int cpu)
-+{
-+ if (in_hardirq() && cpu == smp_processor_id()) {
-+ struct pt_regs *regs;
-+
-+ regs = get_irq_regs();
-+ if (regs) {
-+ show_regs(regs);
-+ return;
-+ }
-+ }
-+
-+ if (trigger_single_cpu_backtrace(cpu))
-+ return;
-+
-+ pr_info("Task dump for CPU %d:\n", cpu);
-+ sched_show_task(cpu_curr(cpu));
-+}
-+
-+/**
-+ * init_idle - set up an idle thread for a given CPU
-+ * @idle: task in question
-+ * @cpu: CPU the idle task belongs to
-+ *
-+ * NOTE: this function does not set the idle thread's NEED_RESCHED
-+ * flag, to make booting more robust.
-+ */
-+void __init init_idle(struct task_struct *idle, int cpu)
-+{
-+#ifdef CONFIG_SMP
-+ struct affinity_context ac = (struct affinity_context) {
-+ .new_mask = cpumask_of(cpu),
-+ .flags = 0,
-+ };
-+#endif
-+ struct rq *rq = cpu_rq(cpu);
-+ unsigned long flags;
-+
-+ __sched_fork(0, idle);
-+
-+ raw_spin_lock_irqsave(&idle->pi_lock, flags);
-+ raw_spin_lock(&rq->lock);
-+
-+ idle->last_ran = rq->clock_task;
-+ idle->__state = TASK_RUNNING;
-+ /*
-+ * PF_KTHREAD should already be set at this point; regardless, make it
-+ * look like a proper per-CPU kthread.
-+ */
-+ idle->flags |= PF_KTHREAD | PF_NO_SETAFFINITY;
-+ kthread_set_per_cpu(idle, cpu);
-+
-+ sched_queue_init_idle(&rq->queue, idle);
-+
-+#ifdef CONFIG_SMP
-+ /*
-+ * It's possible that init_idle() gets called multiple times on a task,
-+ * in that case do_set_cpus_allowed() will not do the right thing.
-+ *
-+ * And since this is boot we can forgo the serialisation.
-+ */
-+ set_cpus_allowed_common(idle, &ac);
-+#endif
-+
-+ /* Silence PROVE_RCU */
-+ rcu_read_lock();
-+ __set_task_cpu(idle, cpu);
-+ rcu_read_unlock();
-+
-+ rq->idle = idle;
-+ rcu_assign_pointer(rq->curr, idle);
-+ idle->on_cpu = 1;
-+
-+ raw_spin_unlock(&rq->lock);
-+ raw_spin_unlock_irqrestore(&idle->pi_lock, flags);
-+
-+ /* Set the preempt count _outside_ the spinlocks! */
-+ init_idle_preempt_count(idle, cpu);
-+
-+ ftrace_graph_init_idle_task(idle, cpu);
-+ vtime_init_idle(idle, cpu);
-+#ifdef CONFIG_SMP
-+ sprintf(idle->comm, "%s/%d", INIT_TASK_COMM, cpu);
-+#endif
-+}
-+
-+#ifdef CONFIG_SMP
-+
-+int cpuset_cpumask_can_shrink(const struct cpumask __maybe_unused *cur,
-+ const struct cpumask __maybe_unused *trial)
-+{
-+ return 1;
-+}
-+
-+int task_can_attach(struct task_struct *p)
-+{
-+ int ret = 0;
-+
-+ /*
-+ * Kthreads which disallow setaffinity shouldn't be moved
-+ * to a new cpuset; we don't want to change their CPU
-+ * affinity and isolating such threads by their set of
-+ * allowed nodes is unnecessary. Thus, cpusets are not
-+ * applicable for such threads. This prevents checking for
-+ * success of set_cpus_allowed_ptr() on all attached tasks
-+ * before cpus_mask may be changed.
-+ */
-+ if (p->flags & PF_NO_SETAFFINITY)
-+ ret = -EINVAL;
-+
-+ return ret;
-+}
-+
-+bool sched_smp_initialized __read_mostly;
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+/*
-+ * Ensures that the idle task is using init_mm right before its CPU goes
-+ * offline.
-+ */
-+void idle_task_exit(void)
-+{
-+ struct mm_struct *mm = current->active_mm;
-+
-+ BUG_ON(current != this_rq()->idle);
-+
-+ if (mm != &init_mm) {
-+ switch_mm(mm, &init_mm, current);
-+ finish_arch_post_lock_switch();
-+ }
-+
-+ /* finish_cpu(), as ran on the BP, will clean up the active_mm state */
-+}
-+
-+static int __balance_push_cpu_stop(void *arg)
-+{
-+ struct task_struct *p = arg;
-+ struct rq *rq = this_rq();
-+ struct rq_flags rf;
-+ int cpu;
-+
-+ raw_spin_lock_irq(&p->pi_lock);
-+ rq_lock(rq, &rf);
-+
-+ update_rq_clock(rq);
-+
-+ if (task_rq(p) == rq && task_on_rq_queued(p)) {
-+ cpu = select_fallback_rq(rq->cpu, p);
-+ rq = __migrate_task(rq, p, cpu);
-+ }
-+
-+ rq_unlock(rq, &rf);
-+ raw_spin_unlock_irq(&p->pi_lock);
-+
-+ put_task_struct(p);
-+
-+ return 0;
-+}
-+
-+static DEFINE_PER_CPU(struct cpu_stop_work, push_work);
-+
-+/*
-+ * This is enabled below SCHED_AP_ACTIVE, i.e. when !cpu_active(), but it
-+ * only takes effect while the CPU is going down.
-+ */
-+static void balance_push(struct rq *rq)
-+{
-+ struct task_struct *push_task = rq->curr;
-+
-+ lockdep_assert_held(&rq->lock);
-+
-+ /*
-+ * Ensure the thing is persistent until balance_push_set(.on = false);
-+ */
-+ rq->balance_callback = &balance_push_callback;
-+
-+ /*
-+ * Only active while going offline and when invoked on the outgoing
-+ * CPU.
-+ */
-+ if (!cpu_dying(rq->cpu) || rq != this_rq())
-+ return;
-+
-+ /*
-+ * Both the cpu-hotplug and stop task are in this case and are
-+ * required to complete the hotplug process.
-+ */
-+ if (kthread_is_per_cpu(push_task) ||
-+ is_migration_disabled(push_task)) {
-+
-+ /*
-+ * If this is the idle task on the outgoing CPU try to wake
-+ * up the hotplug control thread which might wait for the
-+ * last task to vanish. The rcuwait_active() check is
-+ * accurate here because the waiter is pinned on this CPU
-+ * and can't obviously be running in parallel.
-+ *
-+ * On RT kernels this also has to check whether there are
-+ * pinned and scheduled out tasks on the runqueue. They
-+ * need to leave the migrate disabled section first.
-+ */
-+ if (!rq->nr_running && !rq_has_pinned_tasks(rq) &&
-+ rcuwait_active(&rq->hotplug_wait)) {
-+ raw_spin_unlock(&rq->lock);
-+ rcuwait_wake_up(&rq->hotplug_wait);
-+ raw_spin_lock(&rq->lock);
-+ }
-+ return;
-+ }
-+
-+ get_task_struct(push_task);
-+ /*
-+ * Temporarily drop rq->lock such that we can wake-up the stop task.
-+ * Both preemption and IRQs are still disabled.
-+ */
-+ preempt_disable();
-+ raw_spin_unlock(&rq->lock);
-+ stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
-+ this_cpu_ptr(&push_work));
-+ preempt_enable();
-+ /*
-+ * At this point need_resched() is true and we'll take the loop in
-+ * schedule(). The next pick is obviously going to be the stop task
-+ * which kthread_is_per_cpu() and will push this task away.
-+ */
-+ raw_spin_lock(&rq->lock);
-+}
-+
-+static void balance_push_set(int cpu, bool on)
-+{
-+ struct rq *rq = cpu_rq(cpu);
-+ struct rq_flags rf;
-+
-+ rq_lock_irqsave(rq, &rf);
-+ if (on) {
-+ WARN_ON_ONCE(rq->balance_callback);
-+ rq->balance_callback = &balance_push_callback;
-+ } else if (rq->balance_callback == &balance_push_callback) {
-+ rq->balance_callback = NULL;
-+ }
-+ rq_unlock_irqrestore(rq, &rf);
-+}
-+
-+/*
-+ * Invoked from a CPU's hotplug control thread after the CPU has been marked
-+ * inactive. All tasks which are not per CPU kernel threads are either
-+ * pushed off this CPU now via balance_push() or placed on a different CPU
-+ * during wakeup. Wait until the CPU is quiescent.
-+ */
-+static void balance_hotplug_wait(void)
-+{
-+ struct rq *rq = this_rq();
-+
-+ rcuwait_wait_event(&rq->hotplug_wait,
-+ rq->nr_running == 1 && !rq_has_pinned_tasks(rq),
-+ TASK_UNINTERRUPTIBLE);
-+}
-+
-+#else
-+
-+static void balance_push(struct rq *rq)
-+{
-+}
-+
-+static void balance_push_set(int cpu, bool on)
-+{
-+}
-+
-+static inline void balance_hotplug_wait(void)
-+{
-+}
-+#endif /* CONFIG_HOTPLUG_CPU */
-+
-+static void set_rq_offline(struct rq *rq)
-+{
-+ if (rq->online) {
-+ update_rq_clock(rq);
-+ rq->online = false;
-+ }
-+}
-+
-+static void set_rq_online(struct rq *rq)
-+{
-+ if (!rq->online)
-+ rq->online = true;
-+}
-+
-+static inline void sched_set_rq_online(struct rq *rq, int cpu)
-+{
-+ unsigned long flags;
-+
-+ raw_spin_lock_irqsave(&rq->lock, flags);
-+ set_rq_online(rq);
-+ raw_spin_unlock_irqrestore(&rq->lock, flags);
-+}
-+
-+static inline void sched_set_rq_offline(struct rq *rq, int cpu)
-+{
-+ unsigned long flags;
-+
-+ raw_spin_lock_irqsave(&rq->lock, flags);
-+ set_rq_offline(rq);
-+ raw_spin_unlock_irqrestore(&rq->lock, flags);
-+}
-+
-+/*
-+ * used to mark begin/end of suspend/resume:
-+ */
-+static int num_cpus_frozen;
-+
-+/*
-+ * Update cpusets according to cpu_active mask. If cpusets are
-+ * disabled, cpuset_update_active_cpus() becomes a simple wrapper
-+ * around partition_sched_domains().
-+ *
-+ * If we come here as part of a suspend/resume, don't touch cpusets because we
-+ * want to restore it back to its original state upon resume anyway.
-+ */
-+static void cpuset_cpu_active(void)
-+{
-+ if (cpuhp_tasks_frozen) {
-+ /*
-+ * num_cpus_frozen tracks how many CPUs are involved in suspend
-+ * resume sequence. As long as this is not the last online
-+ * operation in the resume sequence, just build a single sched
-+ * domain, ignoring cpusets.
-+ */
-+ partition_sched_domains(1, NULL, NULL);
-+ if (--num_cpus_frozen)
-+ return;
-+ /*
-+ * This is the last CPU online operation. So fall through and
-+ * restore the original sched domains by considering the
-+ * cpuset configurations.
-+ */
-+ cpuset_force_rebuild();
-+ }
-+
-+ cpuset_update_active_cpus();
-+}
-+
-+static int cpuset_cpu_inactive(unsigned int cpu)
-+{
-+ if (!cpuhp_tasks_frozen) {
-+ cpuset_update_active_cpus();
-+ } else {
-+ num_cpus_frozen++;
-+ partition_sched_domains(1, NULL, NULL);
-+ }
-+ return 0;
-+}
-+
-+static inline void sched_smt_present_inc(int cpu)
-+{
-+#ifdef CONFIG_SCHED_SMT
-+ if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
-+ static_branch_inc_cpuslocked(&sched_smt_present);
-+ cpumask_or(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
-+ }
-+#endif
-+}
-+
-+static inline void sched_smt_present_dec(int cpu)
-+{
-+#ifdef CONFIG_SCHED_SMT
-+ if (cpumask_weight(cpu_smt_mask(cpu)) == 2) {
-+ static_branch_dec_cpuslocked(&sched_smt_present);
-+ if (!static_branch_likely(&sched_smt_present))
-+ cpumask_clear(sched_pcore_idle_mask);
-+ cpumask_andnot(&sched_smt_mask, &sched_smt_mask, cpu_smt_mask(cpu));
-+ }
-+#endif
-+}
-+
-+int sched_cpu_activate(unsigned int cpu)
-+{
-+ struct rq *rq = cpu_rq(cpu);
-+
-+ /*
-+ * Clear the balance_push callback and prepare to schedule
-+ * regular tasks.
-+ */
-+ balance_push_set(cpu, false);
-+
-+ set_cpu_active(cpu, true);
-+
-+ if (sched_smp_initialized)
-+ cpuset_cpu_active();
-+
-+ /*
-+ * Put the rq online, if not already. This happens:
-+ *
-+ * 1) In the early boot process, because we build the real domains
-+ * after all cpus have been brought up.
-+ *
-+ * 2) At runtime, if cpuset_cpu_active() fails to rebuild the
-+ * domains.
-+ */
-+ sched_set_rq_online(rq, cpu);
-+
-+ /*
-+ * When going up, increment the number of cores with SMT present.
-+ */
-+ sched_smt_present_inc(cpu);
-+
-+ return 0;
-+}
-+
-+int sched_cpu_deactivate(unsigned int cpu)
-+{
-+ struct rq *rq = cpu_rq(cpu);
-+ int ret;
-+
-+ set_cpu_active(cpu, false);
-+
-+ /*
-+ * From this point forward, this CPU will refuse to run any task that
-+ * is not: migrate_disable() or KTHREAD_IS_PER_CPU, and will actively
-+ * push those tasks away until this gets cleared, see
-+ * sched_cpu_dying().
-+ */
-+ balance_push_set(cpu, true);
-+
-+ /*
-+ * We've cleared cpu_active_mask, wait for all preempt-disabled and RCU
-+ * users of this state to go away such that all new such users will
-+ * observe it.
-+ *
-+ * Specifically, we rely on ttwu to no longer target this CPU, see
-+ * ttwu_queue_cond() and is_cpu_allowed().
-+ *
-+ * Do the sync before parking the smpboot threads to take care of the RCU boost case.
-+ */
-+ synchronize_rcu();
-+
-+ sched_set_rq_offline(rq, cpu);
-+
-+ /*
-+ * When going down, decrement the number of cores with SMT present.
-+ */
-+ sched_smt_present_dec(cpu);
-+
-+ if (!sched_smp_initialized)
-+ return 0;
-+
-+ ret = cpuset_cpu_inactive(cpu);
-+ if (ret) {
-+ sched_smt_present_inc(cpu);
-+ sched_set_rq_online(rq, cpu);
-+ balance_push_set(cpu, false);
-+ set_cpu_active(cpu, true);
-+ return ret;
-+ }
-+
-+ return 0;
-+}
-+
-+static void sched_rq_cpu_starting(unsigned int cpu)
-+{
-+ struct rq *rq = cpu_rq(cpu);
-+
-+ rq->calc_load_update = calc_load_update;
-+}
-+
-+int sched_cpu_starting(unsigned int cpu)
-+{
-+ sched_rq_cpu_starting(cpu);
-+ sched_tick_start(cpu);
-+ return 0;
-+}
-+
-+#ifdef CONFIG_HOTPLUG_CPU
-+
-+/*
-+ * Invoked immediately before the stopper thread is invoked to bring the
-+ * CPU down completely. At this point all per CPU kthreads except the
-+ * hotplug thread (current) and the stopper thread (inactive) have been
-+ * either parked or have been unbound from the outgoing CPU. Ensure that
-+ * any of those which might be on the way out are gone.
-+ *
-+ * If after this point a bound task is being woken on this CPU then the
-+ * responsible hotplug callback has failed to do its job.
-+ * sched_cpu_dying() will catch it with the appropriate fireworks.
-+ */
-+int sched_cpu_wait_empty(unsigned int cpu)
-+{
-+ balance_hotplug_wait();
-+ return 0;
-+}
-+
-+/*
-+ * Since this CPU is going 'away' for a while, fold any nr_active delta we
-+ * might have. Called from the CPU stopper task after ensuring that the
-+ * stopper is the last running task on the CPU, so nr_active count is
-+ * stable. We need to take the tear-down thread which is calling this into
-+ * account, so we hand in adjust = 1 to the load calculation.
-+ *
-+ * Also see the comment "Global load-average calculations".
-+ */
-+static void calc_load_migrate(struct rq *rq)
-+{
-+ long delta = calc_load_fold_active(rq, 1);
-+
-+ if (delta)
-+ atomic_long_add(delta, &calc_load_tasks);
-+}
-+
-+static void dump_rq_tasks(struct rq *rq, const char *loglvl)
-+{
-+ struct task_struct *g, *p;
-+ int cpu = cpu_of(rq);
-+
-+ lockdep_assert_held(&rq->lock);
-+
-+ printk("%sCPU%d enqueued tasks (%u total):\n", loglvl, cpu, rq->nr_running);
-+ for_each_process_thread(g, p) {
-+ if (task_cpu(p) != cpu)
-+ continue;
-+
-+ if (!task_on_rq_queued(p))
-+ continue;
-+
-+ printk("%s\tpid: %d, name: %s\n", loglvl, p->pid, p->comm);
-+ }
-+}
-+
-+int sched_cpu_dying(unsigned int cpu)
-+{
-+ struct rq *rq = cpu_rq(cpu);
-+ unsigned long flags;
-+
-+ /* Handle pending wakeups and then migrate everything off */
-+ sched_tick_stop(cpu);
-+
-+ raw_spin_lock_irqsave(&rq->lock, flags);
-+ if (rq->nr_running != 1 || rq_has_pinned_tasks(rq)) {
-+ WARN(true, "Dying CPU not properly vacated!");
-+ dump_rq_tasks(rq, KERN_WARNING);
-+ }
-+ raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+ calc_load_migrate(rq);
-+ hrtick_clear(rq);
-+ return 0;
-+}
-+#endif
-+
-+#ifdef CONFIG_SMP
-+static void sched_init_topology_cpumask_early(void)
-+{
-+ int cpu;
-+ cpumask_t *tmp;
-+
-+ for_each_possible_cpu(cpu) {
-+ /* init topo masks */
-+ tmp = per_cpu(sched_cpu_topo_masks, cpu);
-+
-+ cpumask_copy(tmp, cpu_possible_mask);
-+ per_cpu(sched_cpu_llc_mask, cpu) = tmp;
-+ per_cpu(sched_cpu_topo_end_mask, cpu) = ++tmp;
-+ }
-+}
-+
-+#define TOPOLOGY_CPUMASK(name, mask, last)\
-+ if (cpumask_and(topo, topo, mask)) { \
-+ cpumask_copy(topo, mask); \
-+ printk(KERN_INFO "sched: cpu#%02d topo: 0x%08lx - "#name, \
-+ cpu, (topo++)->bits[0]); \
-+ } \
-+ if (!last) \
-+ bitmap_complement(cpumask_bits(topo), cpumask_bits(mask), \
-+ nr_cpumask_bits);
-+
-+static void sched_init_topology_cpumask(void)
-+{
-+ int cpu;
-+ cpumask_t *topo;
-+
-+ for_each_online_cpu(cpu) {
-+ topo = per_cpu(sched_cpu_topo_masks, cpu);
-+
-+ bitmap_complement(cpumask_bits(topo), cpumask_bits(cpumask_of(cpu)),
-+ nr_cpumask_bits);
-+#ifdef CONFIG_SCHED_SMT
-+ TOPOLOGY_CPUMASK(smt, topology_sibling_cpumask(cpu), false);
-+#endif
-+ TOPOLOGY_CPUMASK(cluster, topology_cluster_cpumask(cpu), false);
-+
-+ per_cpu(sd_llc_id, cpu) = cpumask_first(cpu_coregroup_mask(cpu));
-+ per_cpu(sched_cpu_llc_mask, cpu) = topo;
-+ TOPOLOGY_CPUMASK(coregroup, cpu_coregroup_mask(cpu), false);
-+
-+ TOPOLOGY_CPUMASK(core, topology_core_cpumask(cpu), false);
-+
-+ TOPOLOGY_CPUMASK(others, cpu_online_mask, true);
-+
-+ per_cpu(sched_cpu_topo_end_mask, cpu) = topo;
-+ printk(KERN_INFO "sched: cpu#%02d llc_id = %d, llc_mask idx = %d\n",
-+ cpu, per_cpu(sd_llc_id, cpu),
-+ (int) (per_cpu(sched_cpu_llc_mask, cpu) -
-+ per_cpu(sched_cpu_topo_masks, cpu)));
-+ }
-+}
-+#endif
-+
-+void __init sched_init_smp(void)
-+{
-+ /* Move init over to a non-isolated CPU */
-+ if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_TYPE_DOMAIN)) < 0)
-+ BUG();
-+ current->flags &= ~PF_NO_SETAFFINITY;
-+
-+ sched_init_topology();
-+ sched_init_topology_cpumask();
-+
-+ sched_smp_initialized = true;
-+}
-+
-+static int __init migration_init(void)
-+{
-+ sched_cpu_starting(smp_processor_id());
-+ return 0;
-+}
-+early_initcall(migration_init);
-+
-+#else
-+void __init sched_init_smp(void)
-+{
-+ cpu_rq(0)->idle->time_slice = sysctl_sched_base_slice;
-+}
-+#endif /* CONFIG_SMP */
-+
-+int in_sched_functions(unsigned long addr)
-+{
-+ return in_lock_functions(addr) ||
-+ (addr >= (unsigned long)__sched_text_start
-+ && addr < (unsigned long)__sched_text_end);
-+}
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+/*
-+ * Default task group.
-+ * Every task in the system belongs to this group at bootup.
-+ */
-+struct task_group root_task_group;
-+LIST_HEAD(task_groups);
-+
-+/* Cacheline aligned slab cache for task_group */
-+static struct kmem_cache *task_group_cache __ro_after_init;
-+#endif /* CONFIG_CGROUP_SCHED */
-+
-+void __init sched_init(void)
-+{
-+ int i;
-+ struct rq *rq;
-+
-+ printk(KERN_INFO "sched/alt: "ALT_SCHED_NAME" CPU Scheduler "ALT_SCHED_VERSION\
-+ " by Alfred Chen.\n");
-+
-+ wait_bit_init();
-+
-+#ifdef CONFIG_SMP
-+ for (i = 0; i < SCHED_QUEUE_BITS; i++)
-+ cpumask_copy(sched_preempt_mask + i, cpu_present_mask);
-+#endif
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+ task_group_cache = KMEM_CACHE(task_group, 0);
-+
-+ list_add(&root_task_group.list, &task_groups);
-+ INIT_LIST_HEAD(&root_task_group.children);
-+ INIT_LIST_HEAD(&root_task_group.siblings);
-+#endif /* CONFIG_CGROUP_SCHED */
-+ for_each_possible_cpu(i) {
-+ rq = cpu_rq(i);
-+
-+ sched_queue_init(&rq->queue);
-+ rq->prio = IDLE_TASK_SCHED_PRIO;
-+#ifdef CONFIG_SCHED_PDS
-+ rq->prio_idx = rq->prio;
-+#endif
-+
-+ raw_spin_lock_init(&rq->lock);
-+ rq->nr_running = rq->nr_uninterruptible = 0;
-+ rq->calc_load_active = 0;
-+ rq->calc_load_update = jiffies + LOAD_FREQ;
-+#ifdef CONFIG_SMP
-+ rq->online = false;
-+ rq->cpu = i;
-+
-+ rq->clear_idle_mask_func = cpumask_clear_cpu;
-+ rq->set_idle_mask_func = cpumask_set_cpu;
-+ rq->balance_func = NULL;
-+ rq->active_balance_arg.active = 0;
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+ INIT_CSD(&rq->nohz_csd, nohz_csd_func, rq);
-+#endif
-+ rq->balance_callback = &balance_push_callback;
-+#ifdef CONFIG_HOTPLUG_CPU
-+ rcuwait_init(&rq->hotplug_wait);
-+#endif
-+#endif /* CONFIG_SMP */
-+ rq->nr_switches = 0;
-+
-+ hrtick_rq_init(rq);
-+ atomic_set(&rq->nr_iowait, 0);
-+
-+ zalloc_cpumask_var_node(&rq->scratch_mask, GFP_KERNEL, cpu_to_node(i));
-+ }
-+#ifdef CONFIG_SMP
-+ /* Set rq->online for cpu 0 */
-+ cpu_rq(0)->online = true;
-+#endif
-+ /*
-+ * The boot idle thread does lazy MMU switching as well:
-+ */
-+ mmgrab(&init_mm);
-+ enter_lazy_tlb(&init_mm, current);
-+
-+ /*
-+ * The idle task doesn't need the kthread struct to function, but it
-+ * is dressed up as a per-CPU kthread and thus needs to play the part
-+ * if we want to avoid special-casing it in code that deals with per-CPU
-+ * kthreads.
-+ */
-+ WARN_ON(!set_kthread_struct(current));
-+
-+ /*
-+ * Make us the idle thread. Technically, schedule() should not be
-+ * called from this thread, however somewhere below it might be,
-+ * but because we are the idle thread, we just pick up running again
-+ * when this runqueue becomes "idle".
-+ */
-+ init_idle(current, smp_processor_id());
-+
-+ calc_load_update = jiffies + LOAD_FREQ;
-+
-+#ifdef CONFIG_SMP
-+ idle_thread_set_boot_cpu();
-+ balance_push_set(smp_processor_id(), false);
-+
-+ sched_init_topology_cpumask_early();
-+#endif /* SMP */
-+
-+ preempt_dynamic_init();
-+}
-+
-+#ifdef CONFIG_DEBUG_ATOMIC_SLEEP
-+
-+void __might_sleep(const char *file, int line)
-+{
-+ unsigned int state = get_current_state();
-+ /*
-+ * Blocking primitives will set (and therefore destroy) current->state,
-+ * since we will exit with TASK_RUNNING make sure we enter with it,
-+ * otherwise we will destroy state.
-+ */
-+ WARN_ONCE(state != TASK_RUNNING && current->task_state_change,
-+ "do not call blocking ops when !TASK_RUNNING; "
-+ "state=%x set at [<%p>] %pS\n", state,
-+ (void *)current->task_state_change,
-+ (void *)current->task_state_change);
-+
-+ __might_resched(file, line, 0);
-+}
-+EXPORT_SYMBOL(__might_sleep);
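-+
-+/*
-+ * Typical trigger (sketch; lock/size are illustrative): a sleeping
-+ * allocation inside an atomic section trips this check:
-+ *
-+ *	spin_lock(&lock);
-+ *	p = kmalloc(size, GFP_KERNEL);	// may sleep -> "sleeping function called from invalid context"
-+ *	spin_unlock(&lock);
-+ */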
-+
-+static void print_preempt_disable_ip(int preempt_offset, unsigned long ip)
-+{
-+ if (!IS_ENABLED(CONFIG_DEBUG_PREEMPT))
-+ return;
-+
-+ if (preempt_count() == preempt_offset)
-+ return;
-+
-+ pr_err("Preemption disabled at:");
-+ print_ip_sym(KERN_ERR, ip);
-+}
-+
-+static inline bool resched_offsets_ok(unsigned int offsets)
-+{
-+ unsigned int nested = preempt_count();
-+
-+ nested += rcu_preempt_depth() << MIGHT_RESCHED_RCU_SHIFT;
-+
-+ return nested == offsets;
-+}
-+
-+void __might_resched(const char *file, int line, unsigned int offsets)
-+{
-+ /* Ratelimiting timestamp: */
-+ static unsigned long prev_jiffy;
-+
-+ unsigned long preempt_disable_ip;
-+
-+ /* WARN_ON_ONCE() by default, no rate limit required: */
-+ rcu_sleep_check();
-+
-+ if ((resched_offsets_ok(offsets) && !irqs_disabled() &&
-+ !is_idle_task(current) && !current->non_block_count) ||
-+ system_state == SYSTEM_BOOTING || system_state > SYSTEM_RUNNING ||
-+ oops_in_progress)
-+ return;
-+ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+ return;
-+ prev_jiffy = jiffies;
-+
-+ /* Save this before calling printk(), since that will clobber it: */
-+ preempt_disable_ip = get_preempt_disable_ip(current);
-+
-+ pr_err("BUG: sleeping function called from invalid context at %s:%d\n",
-+ file, line);
-+ pr_err("in_atomic(): %d, irqs_disabled(): %d, non_block: %d, pid: %d, name: %s\n",
-+ in_atomic(), irqs_disabled(), current->non_block_count,
-+ current->pid, current->comm);
-+ pr_err("preempt_count: %x, expected: %x\n", preempt_count(),
-+ offsets & MIGHT_RESCHED_PREEMPT_MASK);
-+
-+ if (IS_ENABLED(CONFIG_PREEMPT_RCU)) {
-+ pr_err("RCU nest depth: %d, expected: %u\n",
-+ rcu_preempt_depth(), offsets >> MIGHT_RESCHED_RCU_SHIFT);
-+ }
-+
-+ if (task_stack_end_corrupted(current))
-+ pr_emerg("Thread overran stack, or stack corrupted\n");
-+
-+ debug_show_held_locks(current);
-+ if (irqs_disabled())
-+ print_irqtrace_events(current);
-+
-+ print_preempt_disable_ip(offsets & MIGHT_RESCHED_PREEMPT_MASK,
-+ preempt_disable_ip);
-+
-+ dump_stack();
-+ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL(__might_resched);
-+
-+void __cant_sleep(const char *file, int line, int preempt_offset)
-+{
-+ static unsigned long prev_jiffy;
-+
-+ if (irqs_disabled())
-+ return;
-+
-+ if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
-+ return;
-+
-+ if (preempt_count() > preempt_offset)
-+ return;
-+
-+ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+ return;
-+ prev_jiffy = jiffies;
-+
-+ printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
-+ printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
-+ in_atomic(), irqs_disabled(),
-+ current->pid, current->comm);
-+
-+ debug_show_held_locks(current);
-+ dump_stack();
-+ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL_GPL(__cant_sleep);
-+
-+#ifdef CONFIG_SMP
-+void __cant_migrate(const char *file, int line)
-+{
-+ static unsigned long prev_jiffy;
-+
-+ if (irqs_disabled())
-+ return;
-+
-+ if (is_migration_disabled(current))
-+ return;
-+
-+ if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
-+ return;
-+
-+ if (preempt_count() > 0)
-+ return;
-+
-+ if (time_before(jiffies, prev_jiffy + HZ) && prev_jiffy)
-+ return;
-+ prev_jiffy = jiffies;
-+
-+ pr_err("BUG: assuming non migratable context at %s:%d\n", file, line);
-+ pr_err("in_atomic(): %d, irqs_disabled(): %d, migration_disabled() %u pid: %d, name: %s\n",
-+ in_atomic(), irqs_disabled(), is_migration_disabled(current),
-+ current->pid, current->comm);
-+
-+ debug_show_held_locks(current);
-+ dump_stack();
-+ add_taint(TAINT_WARN, LOCKDEP_STILL_OK);
-+}
-+EXPORT_SYMBOL_GPL(__cant_migrate);
-+#endif
-+#endif
-+
-+#ifdef CONFIG_MAGIC_SYSRQ
-+void normalize_rt_tasks(void)
-+{
-+ struct task_struct *g, *p;
-+ struct sched_attr attr = {
-+ .sched_policy = SCHED_NORMAL,
-+ };
-+
-+ read_lock(&tasklist_lock);
-+ for_each_process_thread(g, p) {
-+ /*
-+ * Only normalize user tasks:
-+ */
-+ if (p->flags & PF_KTHREAD)
-+ continue;
-+
-+ schedstat_set(p->stats.wait_start, 0);
-+ schedstat_set(p->stats.sleep_start, 0);
-+ schedstat_set(p->stats.block_start, 0);
-+
-+ if (!rt_or_dl_task(p)) {
-+ /*
-+ * Renice negative nice level userspace
-+ * tasks back to 0:
-+ */
-+ if (task_nice(p) < 0)
-+ set_user_nice(p, 0);
-+ continue;
-+ }
-+
-+ __sched_setscheduler(p, &attr, false, false);
-+ }
-+ read_unlock(&tasklist_lock);
-+}
-+#endif /* CONFIG_MAGIC_SYSRQ */
-+
-+#if defined(CONFIG_KGDB_KDB)
-+/*
-+ * These functions are only useful for KDB.
-+ *
-+ * They can only be called when the whole system has been
-+ * stopped - every CPU needs to be quiescent, and no scheduling
-+ * activity can take place. Using them for anything else would
-+ * be a serious bug, and as a result, they aren't even visible
-+ * under any other configuration.
-+ */
-+
-+/**
-+ * curr_task - return the current task for a given CPU.
-+ * @cpu: the processor in question.
-+ *
-+ * ONLY VALID WHEN THE WHOLE SYSTEM IS STOPPED!
-+ *
-+ * Return: The current task for @cpu.
-+ */
-+struct task_struct *curr_task(int cpu)
-+{
-+ return cpu_curr(cpu);
-+}
-+
-+#endif /* defined(CONFIG_KGDB_KDB) */
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+static void sched_free_group(struct task_group *tg)
-+{
-+ kmem_cache_free(task_group_cache, tg);
-+}
-+
-+static void sched_free_group_rcu(struct rcu_head *rhp)
-+{
-+ sched_free_group(container_of(rhp, struct task_group, rcu));
-+}
-+
-+static void sched_unregister_group(struct task_group *tg)
-+{
-+ /*
-+ * We have to wait for yet another RCU grace period to expire, as
-+ * print_cfs_stats() might run concurrently.
-+ */
-+ call_rcu(&tg->rcu, sched_free_group_rcu);
-+}
-+
-+/* allocate runqueue etc for a new task group */
-+struct task_group *sched_create_group(struct task_group *parent)
-+{
-+ struct task_group *tg;
-+
-+ tg = kmem_cache_alloc(task_group_cache, GFP_KERNEL | __GFP_ZERO);
-+ if (!tg)
-+ return ERR_PTR(-ENOMEM);
-+
-+ return tg;
-+}
-+
-+void sched_online_group(struct task_group *tg, struct task_group *parent)
-+{
-+}
-+
-+/* RCU callback to free various structures associated with a task group */
-+static void sched_unregister_group_rcu(struct rcu_head *rhp)
-+{
-+ /* Now it should be safe to free those cfs_rqs: */
-+ sched_unregister_group(container_of(rhp, struct task_group, rcu));
-+}
-+
-+void sched_destroy_group(struct task_group *tg)
-+{
-+	/* Wait for possible concurrent references to cfs_rqs to complete: */
-+ call_rcu(&tg->rcu, sched_unregister_group_rcu);
-+}
-+
-+void sched_release_group(struct task_group *tg)
-+{
-+}
-+
-+static inline struct task_group *css_tg(struct cgroup_subsys_state *css)
-+{
-+ return css ? container_of(css, struct task_group, css) : NULL;
-+}
-+
-+static struct cgroup_subsys_state *
-+cpu_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
-+{
-+ struct task_group *parent = css_tg(parent_css);
-+ struct task_group *tg;
-+
-+ if (!parent) {
-+ /* This is early initialization for the top cgroup */
-+ return &root_task_group.css;
-+ }
-+
-+ tg = sched_create_group(parent);
-+ if (IS_ERR(tg))
-+ return ERR_PTR(-ENOMEM);
-+ return &tg->css;
-+}
-+
-+/* Expose task group only after completing cgroup initialization */
-+static int cpu_cgroup_css_online(struct cgroup_subsys_state *css)
-+{
-+ struct task_group *tg = css_tg(css);
-+ struct task_group *parent = css_tg(css->parent);
-+
-+ if (parent)
-+ sched_online_group(tg, parent);
-+ return 0;
-+}
-+
-+static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
-+{
-+ struct task_group *tg = css_tg(css);
-+
-+ sched_release_group(tg);
-+}
-+
-+static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
-+{
-+ struct task_group *tg = css_tg(css);
-+
-+ /*
-+ * Relies on the RCU grace period between css_released() and this.
-+ */
-+ sched_unregister_group(tg);
-+}
-+
-+#ifdef CONFIG_RT_GROUP_SCHED
-+static int cpu_cgroup_can_attach(struct cgroup_taskset *tset)
-+{
-+ return 0;
-+}
-+#endif
-+
-+static void cpu_cgroup_attach(struct cgroup_taskset *tset)
-+{
-+}
-+
-+#ifdef CONFIG_GROUP_SCHED_WEIGHT
-+static int sched_group_set_shares(struct task_group *tg, unsigned long shares)
-+{
-+ return 0;
-+}
-+
-+static int sched_group_set_idle(struct task_group *tg, long idle)
-+{
-+ return 0;
-+}
-+
-+static int cpu_shares_write_u64(struct cgroup_subsys_state *css,
-+ struct cftype *cftype, u64 shareval)
-+{
-+ return sched_group_set_shares(css_tg(css), shareval);
-+}
-+
-+static u64 cpu_shares_read_u64(struct cgroup_subsys_state *css,
-+ struct cftype *cft)
-+{
-+ return 0;
-+}
-+
-+static s64 cpu_idle_read_s64(struct cgroup_subsys_state *css,
-+ struct cftype *cft)
-+{
-+ return 0;
-+}
-+
-+static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
-+ struct cftype *cft, s64 idle)
-+{
-+ return sched_group_set_idle(css_tg(css), idle);
-+}
-+#endif
-+
-+#ifdef CONFIG_CFS_BANDWIDTH
-+static s64 cpu_cfs_quota_read_s64(struct cgroup_subsys_state *css,
-+ struct cftype *cft)
-+{
-+ return 0;
-+}
-+
-+static int cpu_cfs_quota_write_s64(struct cgroup_subsys_state *css,
-+ struct cftype *cftype, s64 cfs_quota_us)
-+{
-+ return 0;
-+}
-+
-+static u64 cpu_cfs_period_read_u64(struct cgroup_subsys_state *css,
-+ struct cftype *cft)
-+{
-+ return 0;
-+}
-+
-+static int cpu_cfs_period_write_u64(struct cgroup_subsys_state *css,
-+ struct cftype *cftype, u64 cfs_period_us)
-+{
-+ return 0;
-+}
-+
-+static u64 cpu_cfs_burst_read_u64(struct cgroup_subsys_state *css,
-+ struct cftype *cft)
-+{
-+ return 0;
-+}
-+
-+static int cpu_cfs_burst_write_u64(struct cgroup_subsys_state *css,
-+ struct cftype *cftype, u64 cfs_burst_us)
-+{
-+ return 0;
-+}
-+
-+static int cpu_cfs_stat_show(struct seq_file *sf, void *v)
-+{
-+ return 0;
-+}
-+
-+static int cpu_cfs_local_stat_show(struct seq_file *sf, void *v)
-+{
-+ return 0;
-+}
-+#endif
-+
-+#ifdef CONFIG_RT_GROUP_SCHED
-+static int cpu_rt_runtime_write(struct cgroup_subsys_state *css,
-+ struct cftype *cft, s64 val)
-+{
-+ return 0;
-+}
-+
-+static s64 cpu_rt_runtime_read(struct cgroup_subsys_state *css,
-+ struct cftype *cft)
-+{
-+ return 0;
-+}
-+
-+static int cpu_rt_period_write_uint(struct cgroup_subsys_state *css,
-+ struct cftype *cftype, u64 rt_period_us)
-+{
-+ return 0;
-+}
-+
-+static u64 cpu_rt_period_read_uint(struct cgroup_subsys_state *css,
-+ struct cftype *cft)
-+{
-+ return 0;
-+}
-+#endif
-+
-+#ifdef CONFIG_UCLAMP_TASK_GROUP
-+static int cpu_uclamp_min_show(struct seq_file *sf, void *v)
-+{
-+ return 0;
-+}
-+
-+static int cpu_uclamp_max_show(struct seq_file *sf, void *v)
-+{
-+ return 0;
-+}
-+
-+static ssize_t cpu_uclamp_min_write(struct kernfs_open_file *of,
-+ char *buf, size_t nbytes,
-+ loff_t off)
-+{
-+ return nbytes;
-+}
-+
-+static ssize_t cpu_uclamp_max_write(struct kernfs_open_file *of,
-+ char *buf, size_t nbytes,
-+ loff_t off)
-+{
-+ return nbytes;
-+}
-+#endif
-+
-+static struct cftype cpu_legacy_files[] = {
-+#ifdef CONFIG_GROUP_SCHED_WEIGHT
-+ {
-+ .name = "shares",
-+ .read_u64 = cpu_shares_read_u64,
-+ .write_u64 = cpu_shares_write_u64,
-+ },
-+ {
-+ .name = "idle",
-+ .read_s64 = cpu_idle_read_s64,
-+ .write_s64 = cpu_idle_write_s64,
-+ },
-+#endif
-+#ifdef CONFIG_CFS_BANDWIDTH
-+ {
-+ .name = "cfs_quota_us",
-+ .read_s64 = cpu_cfs_quota_read_s64,
-+ .write_s64 = cpu_cfs_quota_write_s64,
-+ },
-+ {
-+ .name = "cfs_period_us",
-+ .read_u64 = cpu_cfs_period_read_u64,
-+ .write_u64 = cpu_cfs_period_write_u64,
-+ },
-+ {
-+ .name = "cfs_burst_us",
-+ .read_u64 = cpu_cfs_burst_read_u64,
-+ .write_u64 = cpu_cfs_burst_write_u64,
-+ },
-+ {
-+ .name = "stat",
-+ .seq_show = cpu_cfs_stat_show,
-+ },
-+ {
-+ .name = "stat.local",
-+ .seq_show = cpu_cfs_local_stat_show,
-+ },
-+#endif
-+#ifdef CONFIG_RT_GROUP_SCHED
-+ {
-+ .name = "rt_runtime_us",
-+ .read_s64 = cpu_rt_runtime_read,
-+ .write_s64 = cpu_rt_runtime_write,
-+ },
-+ {
-+ .name = "rt_period_us",
-+ .read_u64 = cpu_rt_period_read_uint,
-+ .write_u64 = cpu_rt_period_write_uint,
-+ },
-+#endif
-+#ifdef CONFIG_UCLAMP_TASK_GROUP
-+ {
-+ .name = "uclamp.min",
-+ .flags = CFTYPE_NOT_ON_ROOT,
-+ .seq_show = cpu_uclamp_min_show,
-+ .write = cpu_uclamp_min_write,
-+ },
-+ {
-+ .name = "uclamp.max",
-+ .flags = CFTYPE_NOT_ON_ROOT,
-+ .seq_show = cpu_uclamp_max_show,
-+ .write = cpu_uclamp_max_write,
-+ },
-+#endif
-+ { } /* Terminate */
-+};
-+
-+#ifdef CONFIG_GROUP_SCHED_WEIGHT
-+static u64 cpu_weight_read_u64(struct cgroup_subsys_state *css,
-+ struct cftype *cft)
-+{
-+ return 0;
-+}
-+
-+static int cpu_weight_write_u64(struct cgroup_subsys_state *css,
-+ struct cftype *cft, u64 weight)
-+{
-+ return 0;
-+}
-+
-+static s64 cpu_weight_nice_read_s64(struct cgroup_subsys_state *css,
-+ struct cftype *cft)
-+{
-+ return 0;
-+}
-+
-+static int cpu_weight_nice_write_s64(struct cgroup_subsys_state *css,
-+ struct cftype *cft, s64 nice)
-+{
-+ return 0;
-+}
-+#endif
-+
-+#ifdef CONFIG_CFS_BANDWIDTH
-+static int cpu_max_show(struct seq_file *sf, void *v)
-+{
-+ return 0;
-+}
-+
-+static ssize_t cpu_max_write(struct kernfs_open_file *of,
-+ char *buf, size_t nbytes, loff_t off)
-+{
-+ return nbytes;
-+}
-+#endif
-+
-+static struct cftype cpu_files[] = {
-+#ifdef CONFIG_GROUP_SCHED_WEIGHT
-+ {
-+ .name = "weight",
-+ .flags = CFTYPE_NOT_ON_ROOT,
-+ .read_u64 = cpu_weight_read_u64,
-+ .write_u64 = cpu_weight_write_u64,
-+ },
-+ {
-+ .name = "weight.nice",
-+ .flags = CFTYPE_NOT_ON_ROOT,
-+ .read_s64 = cpu_weight_nice_read_s64,
-+ .write_s64 = cpu_weight_nice_write_s64,
-+ },
-+ {
-+ .name = "idle",
-+ .flags = CFTYPE_NOT_ON_ROOT,
-+ .read_s64 = cpu_idle_read_s64,
-+ .write_s64 = cpu_idle_write_s64,
-+ },
-+#endif
-+#ifdef CONFIG_CFS_BANDWIDTH
-+ {
-+ .name = "max",
-+ .flags = CFTYPE_NOT_ON_ROOT,
-+ .seq_show = cpu_max_show,
-+ .write = cpu_max_write,
-+ },
-+ {
-+ .name = "max.burst",
-+ .flags = CFTYPE_NOT_ON_ROOT,
-+ .read_u64 = cpu_cfs_burst_read_u64,
-+ .write_u64 = cpu_cfs_burst_write_u64,
-+ },
-+#endif
-+#ifdef CONFIG_UCLAMP_TASK_GROUP
-+ {
-+ .name = "uclamp.min",
-+ .flags = CFTYPE_NOT_ON_ROOT,
-+ .seq_show = cpu_uclamp_min_show,
-+ .write = cpu_uclamp_min_write,
-+ },
-+ {
-+ .name = "uclamp.max",
-+ .flags = CFTYPE_NOT_ON_ROOT,
-+ .seq_show = cpu_uclamp_max_show,
-+ .write = cpu_uclamp_max_write,
-+ },
-+#endif
-+ { } /* terminate */
-+};
-+
-+static int cpu_extra_stat_show(struct seq_file *sf,
-+ struct cgroup_subsys_state *css)
-+{
-+ return 0;
-+}
-+
-+static int cpu_local_stat_show(struct seq_file *sf,
-+ struct cgroup_subsys_state *css)
-+{
-+ return 0;
-+}
-+
-+struct cgroup_subsys cpu_cgrp_subsys = {
-+ .css_alloc = cpu_cgroup_css_alloc,
-+ .css_online = cpu_cgroup_css_online,
-+ .css_released = cpu_cgroup_css_released,
-+ .css_free = cpu_cgroup_css_free,
-+ .css_extra_stat_show = cpu_extra_stat_show,
-+ .css_local_stat_show = cpu_local_stat_show,
-+#ifdef CONFIG_RT_GROUP_SCHED
-+ .can_attach = cpu_cgroup_can_attach,
-+#endif
-+ .attach = cpu_cgroup_attach,
-+ .legacy_cftypes = cpu_legacy_files,
-+ .dfl_cftypes = cpu_files,
-+ .early_init = true,
-+ .threaded = true,
-+};
-+#endif /* CONFIG_CGROUP_SCHED */
-+
-+#undef CREATE_TRACE_POINTS
-+
-+#ifdef CONFIG_SCHED_MM_CID
-+
-+/*
-+ * @cid_lock: Guarantee forward-progress of cid allocation.
-+ *
-+ * Concurrency ID allocation within a bitmap is mostly lock-free. The cid_lock
-+ * is only used when contention is detected by the lock-free allocation so
-+ * forward progress can be guaranteed.
-+ */
-+DEFINE_RAW_SPINLOCK(cid_lock);
-+
-+/*
-+ * @use_cid_lock: Select cid allocation behavior: lock-free vs spinlock.
-+ *
-+ * When @use_cid_lock is 0, the cid allocation is lock-free. When contention is
-+ * detected, it is set to 1 to ensure that all newly coming allocations are
-+ * serialized by @cid_lock until the allocation which detected contention
-+ * completes and sets @use_cid_lock back to 0. This guarantees forward progress
-+ * of a cid allocation.
-+ */
-+int use_cid_lock;
-+
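A hedged user-space model of the fallback described above (illustrative only; all names are invented): allocation first tries a lock-free bitmap claim, and a thread that loses the race flips use_lock so later allocators serialize on a mutex until the contended allocation completes.

#include <stdatomic.h>
#include <pthread.h>

#define NR_CIDS 64

static atomic_ulong cid_bitmap;	/* bit set => cid in use */
static atomic_int use_lock;	/* models use_cid_lock */
static pthread_mutex_t cid_mutex = PTHREAD_MUTEX_INITIALIZER;

static int try_get_cid(void)
{
	for (int cid = 0; cid < NR_CIDS; cid++) {
		unsigned long bit = 1UL << cid;

		if (atomic_load(&cid_bitmap) & bit)
			continue;
		if (!(atomic_fetch_or(&cid_bitmap, bit) & bit))
			return cid;	/* we set the bit first: cid is ours */
	}
	return -1;			/* bitmap transiently full */
}

static void put_cid(int cid)
{
	atomic_fetch_and(&cid_bitmap, ~(1UL << cid));
}

static int get_cid(void)
{
	int cid;

	if (!atomic_load(&use_lock)) {
		cid = try_get_cid();
		if (cid >= 0)
			return cid;	/* common lock-free fast path */
	}
	/* Contention: serialize all allocators until we succeed. */
	pthread_mutex_lock(&cid_mutex);
	atomic_store(&use_lock, 1);
	while ((cid = try_get_cid()) < 0)
		;	/* in the kernel this ends once a cid is freed */
	atomic_store(&use_lock, 0);
	pthread_mutex_unlock(&cid_mutex);
	return cid;
}
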
-+/*
-+ * mm_cid remote-clear implements a lock-free algorithm to clear per-mm/cpu cid
-+ * concurrently with respect to the execution of the source runqueue context
-+ * switch.
-+ *
-+ * There is one basic property we want to guarantee here:
-+ *
-+ * (1) Remote-clear should _never_ mark a per-cpu cid UNSET when it is actively
-+ * used by a task. That would lead to concurrent allocation of the cid and
-+ * userspace corruption.
-+ *
-+ * Provide this guarantee by introducing a Dekker memory ordering to guarantee
-+ * that a pair of loads observe at least one of a pair of stores, which can be
-+ * shown as:
-+ *
-+ * X = Y = 0
-+ *
-+ * w[X]=1 w[Y]=1
-+ * MB MB
-+ * r[Y]=y r[X]=x
-+ *
-+ * Which guarantees that x==0 && y==0 is impossible. But rather than using
-+ * values 0 and 1, this algorithm cares about specific state transitions of the
-+ * runqueue current task (as updated by the scheduler context switch), and the
-+ * per-mm/cpu cid value.
-+ *
-+ * Let's introduce task (Y) which has task->mm == mm and task (N) which has
-+ * task->mm != mm for the rest of the discussion. There are two scheduler state
-+ * transitions on context switch we care about:
-+ *
-+ * (TSA) Store to rq->curr with transition from (N) to (Y)
-+ *
-+ * (TSB) Store to rq->curr with transition from (Y) to (N)
-+ *
-+ * On the remote-clear side, there is one transition we care about:
-+ *
-+ * (TMA) cmpxchg to *pcpu_cid to set the LAZY flag
-+ *
-+ * There is also a transition to UNSET state which can be performed from all
-+ * sides (scheduler, remote-clear). It is always performed with a cmpxchg which
-+ * guarantees that only a single thread will succeed:
-+ *
-+ * (TMB) cmpxchg to *pcpu_cid to mark UNSET
-+ *
-+ * Just to be clear, what we do _not_ want to happen is a transition to UNSET
-+ * when a thread is actively using the cid (property (1)).
-+ *
-+ * Let's look at the relevant combinations of TSA/TSB, and TMA transitions.
-+ *
-+ * Scenario A) (TSA)+(TMA) (from next task perspective)
-+ *
-+ * CPU0 CPU1
-+ *
-+ * Context switch CS-1 Remote-clear
-+ * - store to rq->curr: (N)->(Y) (TSA) - cmpxchg to *pcpu_cid to LAZY (TMA)
-+ * (implied barrier after cmpxchg)
-+ * - switch_mm_cid()
-+ * - memory barrier (see switch_mm_cid()
-+ * comment explaining how this barrier
-+ * is combined with other scheduler
-+ * barriers)
-+ * - mm_cid_get (next)
-+ * - READ_ONCE(*pcpu_cid) - rcu_dereference(src_rq->curr)
-+ *
-+ * This Dekker ensures that either task (Y) is observed by the
-+ * rcu_dereference() or the LAZY flag is observed by READ_ONCE(), or both are
-+ * observed.
-+ *
-+ * If task (Y) store is observed by rcu_dereference(), it means that there is
-+ * still an active task on the cpu. Remote-clear will therefore not transition
-+ * to UNSET, which fulfills property (1).
-+ *
-+ * If task (Y) is not observed, but the lazy flag is observed by READ_ONCE(),
-+ * it will move its state to UNSET, which clears the percpu cid perhaps
-+ * uselessly (which is not an issue for correctness). Because task (Y) is not
-+ * observed, CPU1 can move ahead to set the state to UNSET. Because moving
-+ * state to UNSET is done with a cmpxchg expecting that the old state has the
-+ * LAZY flag set, only one thread will successfully UNSET.
-+ *
-+ * If both states (LAZY flag and task (Y)) are observed, the thread on CPU0
-+ * will observe the LAZY flag and transition to UNSET (perhaps uselessly), and
-+ * CPU1 will observe task (Y) and do nothing more, which is fine.
-+ *
-+ * What we are effectively preventing with this Dekker is a scenario where
-+ * neither LAZY flag nor store (Y) are observed, which would fail property (1)
-+ * because this would UNSET a cid which is actively used.
-+ */
-+
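The ordering above is the classic store-buffering (Dekker) pattern. As a self-contained sanity check, here is a user-space C11 sketch (illustrative only, not part of the patch) of the guarantee the comment relies on: with a full fence between each store and the opposing load, both loads observing 0 is impossible.

#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int X, Y;
static int r_x, r_y;

static void *thread_a(void *arg)
{
	(void)arg;
	atomic_store_explicit(&X, 1, memory_order_relaxed);	/* w[X]=1 */
	atomic_thread_fence(memory_order_seq_cst);		/* MB */
	r_y = atomic_load_explicit(&Y, memory_order_relaxed);	/* r[Y] */
	return NULL;
}

static void *thread_b(void *arg)
{
	(void)arg;
	atomic_store_explicit(&Y, 1, memory_order_relaxed);	/* w[Y]=1 */
	atomic_thread_fence(memory_order_seq_cst);		/* MB */
	r_x = atomic_load_explicit(&X, memory_order_relaxed);	/* r[X] */
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, thread_a, NULL);
	pthread_create(&b, NULL, thread_b, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	assert(r_x == 1 || r_y == 1);	/* x==0 && y==0 cannot happen */
	printf("r_x=%d r_y=%d\n", r_x, r_y);
	return 0;
}
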
-+void sched_mm_cid_migrate_from(struct task_struct *t)
-+{
-+ t->migrate_from_cpu = task_cpu(t);
-+}
-+
-+static
-+int __sched_mm_cid_migrate_from_fetch_cid(struct rq *src_rq,
-+ struct task_struct *t,
-+ struct mm_cid *src_pcpu_cid)
-+{
-+ struct mm_struct *mm = t->mm;
-+ struct task_struct *src_task;
-+ int src_cid, last_mm_cid;
-+
-+ if (!mm)
-+ return -1;
-+
-+ last_mm_cid = t->last_mm_cid;
-+ /*
-+ * If the migrated task has no last cid, or if the current
-+ * task on src rq uses the cid, it means the source cid does not need
-+ * to be moved to the destination cpu.
-+ */
-+ if (last_mm_cid == -1)
-+ return -1;
-+ src_cid = READ_ONCE(src_pcpu_cid->cid);
-+ if (!mm_cid_is_valid(src_cid) || last_mm_cid != src_cid)
-+ return -1;
-+
-+ /*
-+ * If we observe an active task using the mm on this rq, it means we
-+ * are not the last task to be migrated from this cpu for this mm, so
-+ * there is no need to move src_cid to the destination cpu.
-+ */
-+ guard(rcu)();
-+ src_task = rcu_dereference(src_rq->curr);
-+ if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
-+ t->last_mm_cid = -1;
-+ return -1;
-+ }
-+
-+ return src_cid;
-+}
-+
-+static
-+int __sched_mm_cid_migrate_from_try_steal_cid(struct rq *src_rq,
-+ struct task_struct *t,
-+ struct mm_cid *src_pcpu_cid,
-+ int src_cid)
-+{
-+ struct task_struct *src_task;
-+ struct mm_struct *mm = t->mm;
-+ int lazy_cid;
-+
-+ if (src_cid == -1)
-+ return -1;
-+
-+ /*
-+ * Attempt to clear the source cpu cid to move it to the destination
-+ * cpu.
-+ */
-+ lazy_cid = mm_cid_set_lazy_put(src_cid);
-+ if (!try_cmpxchg(&src_pcpu_cid->cid, &src_cid, lazy_cid))
-+ return -1;
-+
-+ /*
-+ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+ * rq->curr->mm matches the scheduler barrier in context_switch()
-+ * between store to rq->curr and load of prev and next task's
-+ * per-mm/cpu cid.
-+ *
-+ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+ * rq->curr->mm_cid_active matches the barrier in
-+ * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
-+ * sched_mm_cid_after_execve() between store to t->mm_cid_active and
-+ * load of per-mm/cpu cid.
-+ */
-+
-+ /*
-+ * If we observe an active task using the mm on this rq after setting
-+ * the lazy-put flag, this task will be responsible for transitioning
-+ * from lazy-put flag set to MM_CID_UNSET.
-+ */
-+ scoped_guard (rcu) {
-+ src_task = rcu_dereference(src_rq->curr);
-+ if (READ_ONCE(src_task->mm_cid_active) && src_task->mm == mm) {
-+ /*
-+ * We observed an active task for this mm, there is therefore
-+ * no point in moving this cid to the destination cpu.
-+ */
-+ t->last_mm_cid = -1;
-+ return -1;
-+ }
-+ }
-+
-+ /*
-+ * The src_cid is unused, so it can be unset.
-+ */
-+ if (!try_cmpxchg(&src_pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
-+ return -1;
-+ return src_cid;
-+}
-+
-+/*
-+ * Migration to dst cpu. Called with dst_rq lock held.
-+ * Interrupts are disabled, which keeps the window of cid ownership without the
-+ * source rq lock held small.
-+ */
-+void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t)
-+{
-+ struct mm_cid *src_pcpu_cid, *dst_pcpu_cid;
-+ struct mm_struct *mm = t->mm;
-+ int src_cid, dst_cid, src_cpu;
-+ struct rq *src_rq;
-+
-+ lockdep_assert_rq_held(dst_rq);
-+
-+ if (!mm)
-+ return;
-+ src_cpu = t->migrate_from_cpu;
-+ if (src_cpu == -1) {
-+ t->last_mm_cid = -1;
-+ return;
-+ }
-+ /*
-+ * Move the src cid if the dst cid is unset. This keeps id
-+ * allocation closest to 0 in cases where few threads migrate around
-+ * many CPUs.
-+ *
-+ * If destination cid is already set, we may have to just clear
-+	 * the src cid to ensure compactness in frequent migration
-+	 * scenarios.
-+ *
-+ * It is not useful to clear the src cid when the number of threads is
-+ * greater or equal to the number of allowed CPUs, because user-space
-+ * can expect that the number of allowed cids can reach the number of
-+ * allowed CPUs.
-+ */
-+ dst_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(dst_rq));
-+ dst_cid = READ_ONCE(dst_pcpu_cid->cid);
-+ if (!mm_cid_is_unset(dst_cid) &&
-+ atomic_read(&mm->mm_users) >= t->nr_cpus_allowed)
-+ return;
-+ src_pcpu_cid = per_cpu_ptr(mm->pcpu_cid, src_cpu);
-+ src_rq = cpu_rq(src_cpu);
-+ src_cid = __sched_mm_cid_migrate_from_fetch_cid(src_rq, t, src_pcpu_cid);
-+ if (src_cid == -1)
-+ return;
-+ src_cid = __sched_mm_cid_migrate_from_try_steal_cid(src_rq, t, src_pcpu_cid,
-+ src_cid);
-+ if (src_cid == -1)
-+ return;
-+ if (!mm_cid_is_unset(dst_cid)) {
-+ __mm_cid_put(mm, src_cid);
-+ return;
-+ }
-+ /* Move src_cid to dst cpu. */
-+ mm_cid_snapshot_time(dst_rq, mm);
-+ WRITE_ONCE(dst_pcpu_cid->cid, src_cid);
-+}
-+
-+static void sched_mm_cid_remote_clear(struct mm_struct *mm, struct mm_cid *pcpu_cid,
-+ int cpu)
-+{
-+ struct rq *rq = cpu_rq(cpu);
-+ struct task_struct *t;
-+ int cid, lazy_cid;
-+
-+ cid = READ_ONCE(pcpu_cid->cid);
-+ if (!mm_cid_is_valid(cid))
-+ return;
-+
-+ /*
-+ * Clear the cpu cid if it is set to keep cid allocation compact. If
-+ * there happens to be other tasks left on the source cpu using this
-+ * mm, the next task using this mm will reallocate its cid on context
-+ * switch.
-+ */
-+ lazy_cid = mm_cid_set_lazy_put(cid);
-+ if (!try_cmpxchg(&pcpu_cid->cid, &cid, lazy_cid))
-+ return;
-+
-+ /*
-+ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+ * rq->curr->mm matches the scheduler barrier in context_switch()
-+ * between store to rq->curr and load of prev and next task's
-+ * per-mm/cpu cid.
-+ *
-+ * The implicit barrier after cmpxchg per-mm/cpu cid before loading
-+ * rq->curr->mm_cid_active matches the barrier in
-+ * sched_mm_cid_exit_signals(), sched_mm_cid_before_execve(), and
-+ * sched_mm_cid_after_execve() between store to t->mm_cid_active and
-+ * load of per-mm/cpu cid.
-+ */
-+
-+ /*
-+ * If we observe an active task using the mm on this rq after setting
-+ * the lazy-put flag, that task will be responsible for transitioning
-+ * from lazy-put flag set to MM_CID_UNSET.
-+ */
-+ scoped_guard (rcu) {
-+ t = rcu_dereference(rq->curr);
-+ if (READ_ONCE(t->mm_cid_active) && t->mm == mm)
-+ return;
-+ }
-+
-+ /*
-+ * The cid is unused, so it can be unset.
-+ * Disable interrupts to keep the window of cid ownership without rq
-+ * lock small.
-+ */
-+ scoped_guard (irqsave) {
-+ if (try_cmpxchg(&pcpu_cid->cid, &lazy_cid, MM_CID_UNSET))
-+ __mm_cid_put(mm, cid);
-+ }
-+}
-+
-+static void sched_mm_cid_remote_clear_old(struct mm_struct *mm, int cpu)
-+{
-+ struct rq *rq = cpu_rq(cpu);
-+ struct mm_cid *pcpu_cid;
-+ struct task_struct *curr;
-+ u64 rq_clock;
-+
-+ /*
-+ * rq->clock load is racy on 32-bit but one spurious clear once in a
-+ * while is irrelevant.
-+ */
-+ rq_clock = READ_ONCE(rq->clock);
-+ pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
-+
-+ /*
-+ * In order to take care of infrequently scheduled tasks, bump the time
-+ * snapshot associated with this cid if an active task using the mm is
-+ * observed on this rq.
-+ */
-+ scoped_guard (rcu) {
-+ curr = rcu_dereference(rq->curr);
-+ if (READ_ONCE(curr->mm_cid_active) && curr->mm == mm) {
-+ WRITE_ONCE(pcpu_cid->time, rq_clock);
-+ return;
-+ }
-+ }
-+
-+ if (rq_clock < pcpu_cid->time + SCHED_MM_CID_PERIOD_NS)
-+ return;
-+ sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
-+}
-+
-+static void sched_mm_cid_remote_clear_weight(struct mm_struct *mm, int cpu,
-+ int weight)
-+{
-+ struct mm_cid *pcpu_cid;
-+ int cid;
-+
-+ pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu);
-+ cid = READ_ONCE(pcpu_cid->cid);
-+ if (!mm_cid_is_valid(cid) || cid < weight)
-+ return;
-+ sched_mm_cid_remote_clear(mm, pcpu_cid, cpu);
-+}
-+
-+static void task_mm_cid_work(struct callback_head *work)
-+{
-+ unsigned long now = jiffies, old_scan, next_scan;
-+ struct task_struct *t = current;
-+ struct cpumask *cidmask;
-+ struct mm_struct *mm;
-+ int weight, cpu;
-+
-+ SCHED_WARN_ON(t != container_of(work, struct task_struct, cid_work));
-+
-+ work->next = work; /* Prevent double-add */
-+ if (t->flags & PF_EXITING)
-+ return;
-+ mm = t->mm;
-+ if (!mm)
-+ return;
-+ old_scan = READ_ONCE(mm->mm_cid_next_scan);
-+ next_scan = now + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-+ if (!old_scan) {
-+ unsigned long res;
-+
-+ res = cmpxchg(&mm->mm_cid_next_scan, old_scan, next_scan);
-+ if (res != old_scan)
-+ old_scan = res;
-+ else
-+ old_scan = next_scan;
-+ }
-+ if (time_before(now, old_scan))
-+ return;
-+ if (!try_cmpxchg(&mm->mm_cid_next_scan, &old_scan, next_scan))
-+ return;
-+ cidmask = mm_cidmask(mm);
-+ /* Clear cids that were not recently used. */
-+ for_each_possible_cpu(cpu)
-+ sched_mm_cid_remote_clear_old(mm, cpu);
-+ weight = cpumask_weight(cidmask);
-+ /*
-+ * Clear cids that are greater or equal to the cidmask weight to
-+ * recompact it.
-+ */
-+ for_each_possible_cpu(cpu)
-+ sched_mm_cid_remote_clear_weight(mm, cpu, weight);
-+}
-+
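The second pass above clears every cid that is greater than or equal to the cidmask weight, which is what recompacts the id space toward zero. A toy stand-alone illustration of that rule (plain C, invented values):

#include <stdio.h>

int main(void)
{
	unsigned long cidmask = 0x45;	/* cids 0, 2 and 6 in use */
	int weight = __builtin_popcountl(cidmask);	/* 3 live cids */

	for (int cid = 0; cid < 8; cid++)
		if ((cidmask >> cid & 1) && cid >= weight)
			printf("cid %d >= weight %d: clear to recompact\n",
			       cid, weight);
	return 0;	/* prints: cid 6 >= weight 3: ... */
}
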
-+void init_sched_mm_cid(struct task_struct *t)
-+{
-+ struct mm_struct *mm = t->mm;
-+ int mm_users = 0;
-+
-+ if (mm) {
-+ mm_users = atomic_read(&mm->mm_users);
-+ if (mm_users == 1)
-+ mm->mm_cid_next_scan = jiffies + msecs_to_jiffies(MM_CID_SCAN_DELAY);
-+ }
-+ t->cid_work.next = &t->cid_work; /* Protect against double add */
-+ init_task_work(&t->cid_work, task_mm_cid_work);
-+}
-+
-+void task_tick_mm_cid(struct rq *rq, struct task_struct *curr)
-+{
-+ struct callback_head *work = &curr->cid_work;
-+ unsigned long now = jiffies;
-+
-+ if (!curr->mm || (curr->flags & (PF_EXITING | PF_KTHREAD)) ||
-+ work->next != work)
-+ return;
-+ if (time_before(now, READ_ONCE(curr->mm->mm_cid_next_scan)))
-+ return;
-+
-+ /* No page allocation under rq lock */
-+ task_work_add(curr, work, TWA_RESUME | TWAF_NO_ALLOC);
-+}
-+
-+void sched_mm_cid_exit_signals(struct task_struct *t)
-+{
-+ struct mm_struct *mm = t->mm;
-+ struct rq *rq;
-+
-+ if (!mm)
-+ return;
-+
-+ preempt_disable();
-+ rq = this_rq();
-+ guard(rq_lock_irqsave)(rq);
-+ preempt_enable_no_resched(); /* holding spinlock */
-+ WRITE_ONCE(t->mm_cid_active, 0);
-+ /*
-+ * Store t->mm_cid_active before loading per-mm/cpu cid.
-+ * Matches barrier in sched_mm_cid_remote_clear_old().
-+ */
-+ smp_mb();
-+ mm_cid_put(mm);
-+ t->last_mm_cid = t->mm_cid = -1;
-+}
-+
-+void sched_mm_cid_before_execve(struct task_struct *t)
-+{
-+ struct mm_struct *mm = t->mm;
-+ struct rq *rq;
-+
-+ if (!mm)
-+ return;
-+
-+ preempt_disable();
-+ rq = this_rq();
-+ guard(rq_lock_irqsave)(rq);
-+ preempt_enable_no_resched(); /* holding spinlock */
-+ WRITE_ONCE(t->mm_cid_active, 0);
-+ /*
-+ * Store t->mm_cid_active before loading per-mm/cpu cid.
-+ * Matches barrier in sched_mm_cid_remote_clear_old().
-+ */
-+ smp_mb();
-+ mm_cid_put(mm);
-+ t->last_mm_cid = t->mm_cid = -1;
-+}
-+
-+void sched_mm_cid_after_execve(struct task_struct *t)
-+{
-+ struct mm_struct *mm = t->mm;
-+ struct rq *rq;
-+
-+ if (!mm)
-+ return;
-+
-+ preempt_disable();
-+ rq = this_rq();
-+ scoped_guard (rq_lock_irqsave, rq) {
-+ preempt_enable_no_resched(); /* holding spinlock */
-+ WRITE_ONCE(t->mm_cid_active, 1);
-+ /*
-+ * Store t->mm_cid_active before loading per-mm/cpu cid.
-+ * Matches barrier in sched_mm_cid_remote_clear_old().
-+ */
-+ smp_mb();
-+ t->last_mm_cid = t->mm_cid = mm_cid_get(rq, mm);
-+ }
-+ rseq_set_notify_resume(t);
-+}
-+
-+void sched_mm_cid_fork(struct task_struct *t)
-+{
-+ WARN_ON_ONCE(!t->mm || t->mm_cid != -1);
-+ t->mm_cid_active = 1;
-+}
-+#endif
-diff --git a/kernel/sched/alt_core.h b/kernel/sched/alt_core.h
-new file mode 100644
-index 000000000000..12d76d9d290e
---- /dev/null
-+++ b/kernel/sched/alt_core.h
-@@ -0,0 +1,213 @@
-+#ifndef _KERNEL_SCHED_ALT_CORE_H
-+#define _KERNEL_SCHED_ALT_CORE_H
-+
-+/*
-+ * Compile time debug macro
-+ * #define ALT_SCHED_DEBUG
-+ */
-+
-+/*
-+ * Task related inlined functions
-+ */
-+static inline bool is_migration_disabled(struct task_struct *p)
-+{
-+#ifdef CONFIG_SMP
-+ return p->migration_disabled;
-+#else
-+ return false;
-+#endif
-+}
-+
-+/* rt_prio(prio) defined in include/linux/sched/rt.h */
-+#define rt_task(p) rt_prio((p)->prio)
-+#define rt_policy(policy) ((policy) == SCHED_FIFO || (policy) == SCHED_RR)
-+#define task_has_rt_policy(p) (rt_policy((p)->policy))
-+
-+struct affinity_context {
-+ const struct cpumask *new_mask;
-+ struct cpumask *user_mask;
-+ unsigned int flags;
-+};
-+
-+/* CONFIG_SCHED_CLASS_EXT is not supported */
-+#define scx_switched_all() false
-+
-+#define SCA_CHECK 0x01
-+#define SCA_MIGRATE_DISABLE 0x02
-+#define SCA_MIGRATE_ENABLE 0x04
-+#define SCA_USER 0x08
-+
-+#ifdef CONFIG_SMP
-+
-+extern int __set_cpus_allowed_ptr(struct task_struct *p, struct affinity_context *ctx);
-+
-+static inline cpumask_t *alloc_user_cpus_ptr(int node)
-+{
-+ /*
-+	 * See do_set_cpus_allowed() for the rcu_head usage.
-+ */
-+ int size = max_t(int, cpumask_size(), sizeof(struct rcu_head));
-+
-+ return kmalloc_node(size, GFP_KERNEL, node);
-+}
-+
-+#else /* !CONFIG_SMP: */
-+
-+static inline int __set_cpus_allowed_ptr(struct task_struct *p,
-+ struct affinity_context *ctx)
-+{
-+ return set_cpus_allowed_ptr(p, ctx->new_mask);
-+}
-+
-+static inline cpumask_t *alloc_user_cpus_ptr(int node)
-+{
-+ return NULL;
-+}
-+
-+#endif /* !CONFIG_SMP */
-+
-+#ifdef CONFIG_RT_MUTEXES
-+
-+static inline int __rt_effective_prio(struct task_struct *pi_task, int prio)
-+{
-+ if (pi_task)
-+ prio = min(prio, pi_task->prio);
-+
-+ return prio;
-+}
-+
-+static inline int rt_effective_prio(struct task_struct *p, int prio)
-+{
-+ struct task_struct *pi_task = rt_mutex_get_top_task(p);
-+
-+ return __rt_effective_prio(pi_task, prio);
-+}
-+
-+#else /* !CONFIG_RT_MUTEXES: */
-+
-+static inline int rt_effective_prio(struct task_struct *p, int prio)
-+{
-+ return prio;
-+}
-+
-+#endif /* !CONFIG_RT_MUTEXES */
-+
-+extern int __sched_setscheduler(struct task_struct *p, const struct sched_attr *attr, bool user, bool pi);
-+extern int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx);
-+extern void __setscheduler_prio(struct task_struct *p, int prio);
-+
-+/*
-+ * Context API
-+ */
-+static inline struct rq *__task_access_lock(struct task_struct *p, raw_spinlock_t **plock)
-+{
-+ struct rq *rq;
-+ for (;;) {
-+ rq = task_rq(p);
-+ if (p->on_cpu || task_on_rq_queued(p)) {
-+ raw_spin_lock(&rq->lock);
-+ if (likely((p->on_cpu || task_on_rq_queued(p)) && rq == task_rq(p))) {
-+ *plock = &rq->lock;
-+ return rq;
-+ }
-+ raw_spin_unlock(&rq->lock);
-+ } else if (task_on_rq_migrating(p)) {
-+ do {
-+ cpu_relax();
-+ } while (unlikely(task_on_rq_migrating(p)));
-+ } else {
-+ *plock = NULL;
-+ return rq;
-+ }
-+ }
-+}
-+
-+static inline void __task_access_unlock(struct task_struct *p, raw_spinlock_t *lock)
-+{
-+ if (NULL != lock)
-+ raw_spin_unlock(lock);
-+}
-+
-+void check_task_changed(struct task_struct *p, struct rq *rq);
-+
-+/*
-+ * RQ related inlined functions
-+ */
-+
-+/*
-+ * This routine assumes that the idle task is always in the queue
-+ */
-+static inline struct task_struct *sched_rq_first_task(struct rq *rq)
-+{
-+ const struct list_head *head = &rq->queue.heads[sched_rq_prio_idx(rq)];
-+
-+ return list_first_entry(head, struct task_struct, sq_node);
-+}
-+
-+static inline struct task_struct *sched_rq_next_task(struct task_struct *p, struct rq *rq)
-+{
-+ struct list_head *next = p->sq_node.next;
-+
-+ if (&rq->queue.heads[0] <= next && next < &rq->queue.heads[SCHED_LEVELS]) {
-+ struct list_head *head;
-+ unsigned long idx = next - &rq->queue.heads[0];
-+
-+ idx = find_next_bit(rq->queue.bitmap, SCHED_QUEUE_BITS,
-+ sched_idx2prio(idx, rq) + 1);
-+ head = &rq->queue.heads[sched_prio2idx(idx, rq)];
-+
-+ return list_first_entry(head, struct task_struct, sq_node);
-+ }
-+
-+ return list_next_entry(p, sq_node);
-+}
-+
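Both helpers above lean on the core BMQ structure: one bitmap bit per non-empty priority list, so picking work is a find-first-bit plus a list-head lookup. A minimal user-space model of that lookup (simplified: one bitmap word, front insertion, and none of the prio/idx remapping or idle-level handling):

#include <stdio.h>

#define LEVELS 64

struct task { int pid; struct task *next; };

static unsigned long bitmap;		/* bit p set => queue p non-empty */
static struct task *heads[LEVELS];

static void enqueue(struct task *t, int prio)
{
	t->next = heads[prio];
	heads[prio] = t;
	bitmap |= 1UL << prio;
}

static struct task *first_task(void)
{
	if (!bitmap)
		return NULL;
	/* Lowest set bit is the highest-priority non-empty queue. */
	return heads[__builtin_ctzl(bitmap)];
}

int main(void)
{
	struct task a = { .pid = 1 }, b = { .pid = 2 };

	enqueue(&a, 40);
	enqueue(&b, 12);	/* numerically lower level = higher prio */
	printf("next: pid %d\n", first_task()->pid);	/* pid 2 */
	return 0;
}
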
-+extern void requeue_task(struct task_struct *p, struct rq *rq);
-+
-+#ifdef ALT_SCHED_DEBUG
-+extern void alt_sched_debug(void);
-+#else
-+static inline void alt_sched_debug(void) {}
-+#endif
-+
-+extern int sched_yield_type;
-+
-+#ifdef CONFIG_SMP
-+extern cpumask_t sched_rq_pending_mask ____cacheline_aligned_in_smp;
-+
-+DECLARE_STATIC_KEY_FALSE(sched_smt_present);
-+DECLARE_PER_CPU_ALIGNED(cpumask_t *, sched_cpu_llc_mask);
-+
-+extern cpumask_t sched_smt_mask ____cacheline_aligned_in_smp;
-+
-+extern cpumask_t *const sched_idle_mask;
-+extern cpumask_t *const sched_sg_idle_mask;
-+extern cpumask_t *const sched_pcore_idle_mask;
-+extern cpumask_t *const sched_ecore_idle_mask;
-+
-+extern struct rq *move_queued_task(struct rq *rq, struct task_struct *p, int new_cpu);
-+
-+typedef bool (*idle_select_func_t)(struct cpumask *dstp, const struct cpumask *src1p,
-+ const struct cpumask *src2p);
-+
-+extern idle_select_func_t idle_select_func;
-+#endif
-+
-+/* balance callback */
-+#ifdef CONFIG_SMP
-+extern struct balance_callback *splice_balance_callbacks(struct rq *rq);
-+extern void balance_callbacks(struct rq *rq, struct balance_callback *head);
-+#else
-+
-+static inline struct balance_callback *splice_balance_callbacks(struct rq *rq)
-+{
-+ return NULL;
-+}
-+
-+static inline void balance_callbacks(struct rq *rq, struct balance_callback *head)
-+{
-+}
-+
-+#endif
-+
-+#endif /* _KERNEL_SCHED_ALT_CORE_H */
-diff --git a/kernel/sched/alt_debug.c b/kernel/sched/alt_debug.c
-new file mode 100644
-index 000000000000..1dbd7eb6a434
---- /dev/null
-+++ b/kernel/sched/alt_debug.c
-@@ -0,0 +1,32 @@
-+/*
-+ * kernel/sched/alt_debug.c
-+ *
-+ * Print the alt scheduler debugging details
-+ *
-+ * Author: Alfred Chen
-+ * Date : 2020
-+ */
-+#include "sched.h"
-+#include "linux/sched/debug.h"
-+
-+/*
-+ * This allows printing both to /proc/sched_debug and
-+ * to the console
-+ */
-+#define SEQ_printf(m, x...) \
-+ do { \
-+ if (m) \
-+ seq_printf(m, x); \
-+ else \
-+ pr_cont(x); \
-+ } while (0)
-+
-+void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
-+ struct seq_file *m)
-+{
-+ SEQ_printf(m, "%s (%d, #threads: %d)\n", p->comm, task_pid_nr_ns(p, ns),
-+ get_nr_threads(p));
-+}
-+
-+void proc_sched_set_task(struct task_struct *p)
-+{}
-diff --git a/kernel/sched/alt_sched.h b/kernel/sched/alt_sched.h
-new file mode 100644
-index 000000000000..7fb3433c5c41
---- /dev/null
-+++ b/kernel/sched/alt_sched.h
-@@ -0,0 +1,997 @@
-+#ifndef _KERNEL_SCHED_ALT_SCHED_H
-+#define _KERNEL_SCHED_ALT_SCHED_H
-+
-+#include <linux/context_tracking.h>
-+#include <linux/profile.h>
-+#include <linux/stop_machine.h>
-+#include <linux/syscalls.h>
-+#include <linux/tick.h>
-+
-+#include <trace/events/power.h>
-+#include <trace/events/sched.h>
-+
-+#include "../workqueue_internal.h"
-+
-+#include "cpupri.h"
-+
-+#ifdef CONFIG_CGROUP_SCHED
-+/* task group related information */
-+struct task_group {
-+ struct cgroup_subsys_state css;
-+
-+ struct rcu_head rcu;
-+ struct list_head list;
-+
-+ struct task_group *parent;
-+ struct list_head siblings;
-+ struct list_head children;
-+};
-+
-+extern struct task_group *sched_create_group(struct task_group *parent);
-+extern void sched_online_group(struct task_group *tg,
-+ struct task_group *parent);
-+extern void sched_destroy_group(struct task_group *tg);
-+extern void sched_release_group(struct task_group *tg);
-+#endif /* CONFIG_CGROUP_SCHED */
-+
-+#define MIN_SCHED_NORMAL_PRIO (32)
-+/*
-+ * levels: RT(0-24), reserved(25-31), NORMAL(32-63), cpu idle task(64)
-+ *
-+ * -- BMQ --
-+ * NORMAL: (lower boost range 12, NICE_WIDTH 40, higher boost range 12) / 2
-+ * -- PDS --
-+ * NORMAL: SCHED_EDGE_DELTA + ((NICE_WIDTH 40) / 2)
-+ */
-+#define SCHED_LEVELS (64 + 1)
-+
-+#define IDLE_TASK_SCHED_PRIO (SCHED_LEVELS - 1)
-+
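For orientation, a throwaway user-space printout of how these constants partition the 65 levels, mirroring the comment above (the BMQ/PDS boost arithmetic itself is not reproduced here):

#include <stdio.h>

#define MIN_SCHED_NORMAL_PRIO	32
#define SCHED_LEVELS		(64 + 1)
#define IDLE_TASK_SCHED_PRIO	(SCHED_LEVELS - 1)

int main(void)
{
	printf("RT levels      : 0..24\n");
	printf("reserved       : 25..%d\n", MIN_SCHED_NORMAL_PRIO - 1);
	printf("NORMAL levels  : %d..%d\n", MIN_SCHED_NORMAL_PRIO,
	       IDLE_TASK_SCHED_PRIO - 1);
	printf("idle task level: %d\n", IDLE_TASK_SCHED_PRIO);
	return 0;
}
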
-+#ifdef CONFIG_SCHED_DEBUG
-+# define SCHED_WARN_ON(x) WARN_ONCE(x, #x)
-+extern void resched_latency_warn(int cpu, u64 latency);
-+#else
-+# define SCHED_WARN_ON(x) ({ (void)(x), 0; })
-+static inline void resched_latency_warn(int cpu, u64 latency) {}
-+#endif
-+
-+/*
-+ * Increase resolution of nice-level calculations for 64-bit architectures.
-+ * The extra resolution improves shares distribution and load balancing of
-+ * low-weight task groups (eg. nice +19 on an autogroup), deeper taskgroup
-+ * hierarchies, especially on larger systems. This is not a user-visible change
-+ * and does not change the user-interface for setting shares/weights.
-+ *
-+ * We increase resolution only if we have enough bits to allow this increased
-+ * resolution (i.e. 64-bit). The costs for increasing resolution when 32-bit
-+ * are pretty high and the returns do not justify the increased costs.
-+ *
-+ * Really only required when CONFIG_FAIR_GROUP_SCHED=y is also set, but to
-+ * increase coverage and consistency always enable it on 64-bit platforms.
-+ */
-+#ifdef CONFIG_64BIT
-+# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load(w) ((w) << SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load_down(w) \
-+({ \
-+ unsigned long __w = (w); \
-+ if (__w) \
-+ __w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
-+ __w; \
-+})
-+#else
-+# define NICE_0_LOAD_SHIFT (SCHED_FIXEDPOINT_SHIFT)
-+# define scale_load(w) (w)
-+# define scale_load_down(w) (w)
-+#endif
-+
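To make the fixed-point trick concrete, a small sketch (assumes SCHED_FIXEDPOINT_SHIFT == 10 and the nice-0 weight of 1024; uses the GCC statement-expression form of the macros above):

#include <stdio.h>

#define SCHED_FIXEDPOINT_SHIFT	10
#define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
#define scale_load_down(w)					\
({								\
	unsigned long __w = (w);				\
	if (__w)						\
		__w = (__w >> SCHED_FIXEDPOINT_SHIFT) > 2UL ?	\
		      (__w >> SCHED_FIXEDPOINT_SHIFT) : 2UL;	\
	__w;							\
})

int main(void)
{
	unsigned long nice0 = 1024UL;	/* NICE_0 weight */

	/* Round trip: 1024 -> 1048576 -> 1024. */
	printf("%lu -> %lu -> %lu\n", nice0, scale_load(nice0),
	       scale_load_down(scale_load(nice0)));
	/* Non-zero weights never scale down below 2. */
	printf("scale_load_down(1) = %lu\n", scale_load_down(1UL));
	return 0;
}
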
-+/*
-+ * Tunables that become constants when CONFIG_SCHED_DEBUG is off:
-+ */
-+#ifdef CONFIG_SCHED_DEBUG
-+# define const_debug __read_mostly
-+#else
-+# define const_debug const
-+#endif
-+
-+/* task_struct::on_rq states: */
-+#define TASK_ON_RQ_QUEUED 1
-+#define TASK_ON_RQ_MIGRATING 2
-+
-+static inline int task_on_rq_queued(struct task_struct *p)
-+{
-+ return p->on_rq == TASK_ON_RQ_QUEUED;
-+}
-+
-+static inline int task_on_rq_migrating(struct task_struct *p)
-+{
-+ return READ_ONCE(p->on_rq) == TASK_ON_RQ_MIGRATING;
-+}
-+
-+/* Wake flags. The first three directly map to some SD flag value */
-+#define WF_EXEC 0x02 /* Wakeup after exec; maps to SD_BALANCE_EXEC */
-+#define WF_FORK 0x04 /* Wakeup after fork; maps to SD_BALANCE_FORK */
-+#define WF_TTWU 0x08 /* Wakeup; maps to SD_BALANCE_WAKE */
-+
-+#define WF_SYNC 0x10 /* Waker goes to sleep after wakeup */
-+#define WF_MIGRATED 0x20 /* Internal use, task got migrated */
-+#define WF_CURRENT_CPU 0x40 /* Prefer to move the wakee to the current CPU. */
-+
-+#ifdef CONFIG_SMP
-+static_assert(WF_EXEC == SD_BALANCE_EXEC);
-+static_assert(WF_FORK == SD_BALANCE_FORK);
-+static_assert(WF_TTWU == SD_BALANCE_WAKE);
-+#endif
-+
-+#define SCHED_QUEUE_BITS (SCHED_LEVELS - 1)
-+
-+struct sched_queue {
-+ DECLARE_BITMAP(bitmap, SCHED_QUEUE_BITS);
-+ struct list_head heads[SCHED_LEVELS];
-+};
-+
-+struct rq;
-+struct cpuidle_state;
-+
-+struct balance_callback {
-+ struct balance_callback *next;
-+ void (*func)(struct rq *rq);
-+};
-+
-+typedef void (*balance_func_t)(struct rq *rq, int cpu);
-+typedef void (*set_idle_mask_func_t)(unsigned int cpu, struct cpumask *dstp);
-+typedef void (*clear_idle_mask_func_t)(int cpu, struct cpumask *dstp);
-+
-+struct balance_arg {
-+ struct task_struct *task;
-+ int active;
-+ cpumask_t *cpumask;
-+};
-+
-+/*
-+ * This is the main, per-CPU runqueue data structure.
-+ * This data should only be modified by the local cpu.
-+ */
-+struct rq {
-+ /* runqueue lock: */
-+ raw_spinlock_t lock;
-+
-+ struct task_struct __rcu *curr;
-+ struct task_struct *idle;
-+ struct task_struct *stop;
-+ struct mm_struct *prev_mm;
-+
-+ struct sched_queue queue ____cacheline_aligned;
-+
-+ int prio;
-+#ifdef CONFIG_SCHED_PDS
-+ int prio_idx;
-+ u64 time_edge;
-+#endif
-+
-+ /* switch count */
-+ u64 nr_switches;
-+
-+ atomic_t nr_iowait;
-+
-+#ifdef CONFIG_SCHED_DEBUG
-+ u64 last_seen_need_resched_ns;
-+ int ticks_without_resched;
-+#endif
-+
-+#ifdef CONFIG_MEMBARRIER
-+ int membarrier_state;
-+#endif
-+
-+ set_idle_mask_func_t set_idle_mask_func;
-+ clear_idle_mask_func_t clear_idle_mask_func;
-+
-+#ifdef CONFIG_SMP
-+ int cpu; /* cpu of this runqueue */
-+ bool online;
-+
-+ unsigned int ttwu_pending;
-+ unsigned char nohz_idle_balance;
-+ unsigned char idle_balance;
-+
-+#ifdef CONFIG_HAVE_SCHED_AVG_IRQ
-+ struct sched_avg avg_irq;
-+#endif
-+
-+ balance_func_t balance_func;
-+ struct balance_arg active_balance_arg ____cacheline_aligned;
-+ struct cpu_stop_work active_balance_work;
-+
-+ struct balance_callback *balance_callback;
-+#ifdef CONFIG_HOTPLUG_CPU
-+ struct rcuwait hotplug_wait;
-+#endif
-+ unsigned int nr_pinned;
-+
-+#endif /* CONFIG_SMP */
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+ u64 prev_irq_time;
-+#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
-+#ifdef CONFIG_PARAVIRT
-+ u64 prev_steal_time;
-+#endif /* CONFIG_PARAVIRT */
-+#ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
-+ u64 prev_steal_time_rq;
-+#endif /* CONFIG_PARAVIRT_TIME_ACCOUNTING */
-+
-+	/* For general cpu load util */
-+ s32 load_history;
-+ u64 load_block;
-+ u64 load_stamp;
-+
-+ /* calc_load related fields */
-+ unsigned long calc_load_update;
-+ long calc_load_active;
-+
-+ /* Ensure that all clocks are in the same cache line */
-+ u64 clock ____cacheline_aligned;
-+ u64 clock_task;
-+
-+ unsigned int nr_running;
-+ unsigned long nr_uninterruptible;
-+
-+#ifdef CONFIG_SCHED_HRTICK
-+#ifdef CONFIG_SMP
-+ call_single_data_t hrtick_csd;
-+#endif
-+ struct hrtimer hrtick_timer;
-+ ktime_t hrtick_time;
-+#endif
-+
-+#ifdef CONFIG_SCHEDSTATS
-+
-+ /* latency stats */
-+ struct sched_info rq_sched_info;
-+ unsigned long long rq_cpu_time;
-+ /* could above be rq->cfs_rq.exec_clock + rq->rt_rq.rt_runtime ? */
-+
-+ /* sys_sched_yield() stats */
-+ unsigned int yld_count;
-+
-+ /* schedule() stats */
-+ unsigned int sched_switch;
-+ unsigned int sched_count;
-+ unsigned int sched_goidle;
-+
-+ /* try_to_wake_up() stats */
-+ unsigned int ttwu_count;
-+ unsigned int ttwu_local;
-+#endif /* CONFIG_SCHEDSTATS */
-+
-+#ifdef CONFIG_CPU_IDLE
-+ /* Must be inspected within a rcu lock section */
-+ struct cpuidle_state *idle_state;
-+#endif
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+#ifdef CONFIG_SMP
-+ call_single_data_t nohz_csd;
-+#endif
-+ atomic_t nohz_flags;
-+#endif /* CONFIG_NO_HZ_COMMON */
-+
-+ /* Scratch cpumask to be temporarily used under rq_lock */
-+ cpumask_var_t scratch_mask;
-+};
-+
-+extern unsigned int sysctl_sched_base_slice;
-+
-+extern unsigned long rq_load_util(struct rq *rq, unsigned long max);
-+
-+extern unsigned long calc_load_update;
-+extern atomic_long_t calc_load_tasks;
-+
-+extern void calc_global_load_tick(struct rq *this_rq);
-+extern long calc_load_fold_active(struct rq *this_rq, long adjust);
-+
-+DECLARE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues);
-+#define cpu_rq(cpu) (&per_cpu(runqueues, (cpu)))
-+#define this_rq() this_cpu_ptr(&runqueues)
-+#define task_rq(p) cpu_rq(task_cpu(p))
-+#define cpu_curr(cpu) (cpu_rq(cpu)->curr)
-+#define raw_rq() raw_cpu_ptr(&runqueues)
-+
-+#ifdef CONFIG_SMP
-+#if defined(CONFIG_SCHED_DEBUG) && defined(CONFIG_SYSCTL)
-+void register_sched_domain_sysctl(void);
-+void unregister_sched_domain_sysctl(void);
-+#else
-+static inline void register_sched_domain_sysctl(void)
-+{
-+}
-+static inline void unregister_sched_domain_sysctl(void)
-+{
-+}
-+#endif
-+
-+extern bool sched_smp_initialized;
-+
-+enum {
-+#ifdef CONFIG_SCHED_SMT
-+ SMT_LEVEL_SPACE_HOLDER,
-+#endif
-+ COREGROUP_LEVEL_SPACE_HOLDER,
-+ CORE_LEVEL_SPACE_HOLDER,
-+ OTHER_LEVEL_SPACE_HOLDER,
-+ NR_CPU_AFFINITY_LEVELS
-+};
-+
-+DECLARE_PER_CPU_ALIGNED(cpumask_t [NR_CPU_AFFINITY_LEVELS], sched_cpu_topo_masks);
-+
-+static inline int
-+__best_mask_cpu(const cpumask_t *cpumask, const cpumask_t *mask)
-+{
-+ int cpu;
-+
-+ while ((cpu = cpumask_any_and(cpumask, mask)) >= nr_cpu_ids)
-+ mask++;
-+
-+ return cpu;
-+}
-+
-+static inline int best_mask_cpu(int cpu, const cpumask_t *mask)
-+{
-+ return __best_mask_cpu(mask, per_cpu(sched_cpu_topo_masks, cpu));
-+}
-+
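The walk above tries each per-CPU affinity level in nearest-first order and returns a CPU from the first level that intersects the given mask. A user-space sketch (hypothetical 4-CPU, SMT-paired topology; as in the kernel, the final level must cover all CPUs or the walk would never terminate):

#include <stdio.h>

/* Affinity masks for cpu0, nearest level first. */
static const unsigned long cpu0_topo_masks[] = {
	0x3,	/* SMT siblings: cpu0 + cpu1 */
	0xf,	/* whole package: cpu0..cpu3 (covers everything) */
};

static int best_mask_cpu(const unsigned long *masks, unsigned long allowed)
{
	for (;;) {
		unsigned long hit = *masks++ & allowed;

		if (hit)
			return __builtin_ctzl(hit);	/* lowest matching cpu */
	}
}

int main(void)
{
	/* Task allowed only on cpus 2-3: SMT level misses, package hits. */
	printf("picked cpu %d\n", best_mask_cpu(cpu0_topo_masks, 0xc));
	return 0;
}
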
-+#endif
-+
-+#ifndef arch_scale_freq_tick
-+static __always_inline
-+void arch_scale_freq_tick(void)
-+{
-+}
-+#endif
-+
-+#ifndef arch_scale_freq_capacity
-+static __always_inline
-+unsigned long arch_scale_freq_capacity(int cpu)
-+{
-+ return SCHED_CAPACITY_SCALE;
-+}
-+#endif
-+
-+static inline u64 __rq_clock_broken(struct rq *rq)
-+{
-+ return READ_ONCE(rq->clock);
-+}
-+
-+static inline u64 rq_clock(struct rq *rq)
-+{
-+ /*
-+	 * Relax lockdep_assert_held() checking as in VRQ; callers such as
-+	 * sched_info_xxxx() may not hold rq->lock.
-+	 * lockdep_assert_held(&rq->lock);
-+ */
-+ return rq->clock;
-+}
-+
-+static inline u64 rq_clock_task(struct rq *rq)
-+{
-+ /*
-+	 * Relax lockdep_assert_held() checking as in VRQ; callers such as
-+	 * sched_info_xxxx() may not hold rq->lock.
-+	 * lockdep_assert_held(&rq->lock);
-+ */
-+ return rq->clock_task;
-+}
-+
-+/*
-+ * {de,en}queue flags:
-+ *
-+ * DEQUEUE_SLEEP - task is no longer runnable
-+ * ENQUEUE_WAKEUP - task just became runnable
-+ *
-+ */
-+
-+#define DEQUEUE_SLEEP 0x01
-+
-+#define ENQUEUE_WAKEUP 0x01
-+
-+
-+/*
-+ * Below are scheduler APIs used elsewhere in the kernel.
-+ * They use a dummy rq_flags.
-+ * TODO: BMQ needs to support these APIs for compatibility with mainline
-+ * scheduler code.
-+ */
-+struct rq_flags {
-+ unsigned long flags;
-+};
-+
-+struct rq *__task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+ __acquires(rq->lock);
-+
-+struct rq *task_rq_lock(struct task_struct *p, struct rq_flags *rf)
-+ __acquires(p->pi_lock)
-+ __acquires(rq->lock);
-+
-+static inline void __task_rq_unlock(struct rq *rq, struct rq_flags *rf)
-+ __releases(rq->lock)
-+{
-+ raw_spin_unlock(&rq->lock);
-+}
-+
-+static inline void
-+task_rq_unlock(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
-+ __releases(rq->lock)
-+ __releases(p->pi_lock)
-+{
-+ raw_spin_unlock(&rq->lock);
-+ raw_spin_unlock_irqrestore(&p->pi_lock, rf->flags);
-+}
-+
-+static inline void
-+rq_lock(struct rq *rq, struct rq_flags *rf)
-+ __acquires(rq->lock)
-+{
-+ raw_spin_lock(&rq->lock);
-+}
-+
-+static inline void
-+rq_unlock(struct rq *rq, struct rq_flags *rf)
-+ __releases(rq->lock)
-+{
-+ raw_spin_unlock(&rq->lock);
-+}
-+
-+static inline void
-+rq_lock_irq(struct rq *rq, struct rq_flags *rf)
-+ __acquires(rq->lock)
-+{
-+ raw_spin_lock_irq(&rq->lock);
-+}
-+
-+static inline void
-+rq_unlock_irq(struct rq *rq, struct rq_flags *rf)
-+ __releases(rq->lock)
-+{
-+ raw_spin_unlock_irq(&rq->lock);
-+}
-+
-+static inline struct rq *
-+this_rq_lock_irq(struct rq_flags *rf)
-+ __acquires(rq->lock)
-+{
-+ struct rq *rq;
-+
-+ local_irq_disable();
-+ rq = this_rq();
-+ raw_spin_lock(&rq->lock);
-+
-+ return rq;
-+}
-+
-+static inline raw_spinlock_t *__rq_lockp(struct rq *rq)
-+{
-+ return &rq->lock;
-+}
-+
-+static inline raw_spinlock_t *rq_lockp(struct rq *rq)
-+{
-+ return __rq_lockp(rq);
-+}
-+
-+static inline void lockdep_assert_rq_held(struct rq *rq)
-+{
-+ lockdep_assert_held(__rq_lockp(rq));
-+}
-+
-+extern void raw_spin_rq_lock_nested(struct rq *rq, int subclass);
-+extern void raw_spin_rq_unlock(struct rq *rq);
-+
-+static inline void raw_spin_rq_lock(struct rq *rq)
-+{
-+ raw_spin_rq_lock_nested(rq, 0);
-+}
-+
-+static inline void raw_spin_rq_lock_irq(struct rq *rq)
-+{
-+ local_irq_disable();
-+ raw_spin_rq_lock(rq);
-+}
-+
-+static inline void raw_spin_rq_unlock_irq(struct rq *rq)
-+{
-+ raw_spin_rq_unlock(rq);
-+ local_irq_enable();
-+}
-+
-+static inline int task_current(struct rq *rq, struct task_struct *p)
-+{
-+ return rq->curr == p;
-+}
-+
-+static inline bool task_on_cpu(struct task_struct *p)
-+{
-+ return p->on_cpu;
-+}
-+
-+extern struct static_key_false sched_schedstats;
-+
-+#ifdef CONFIG_CPU_IDLE
-+static inline void idle_set_state(struct rq *rq,
-+ struct cpuidle_state *idle_state)
-+{
-+ rq->idle_state = idle_state;
-+}
-+
-+static inline struct cpuidle_state *idle_get_state(struct rq *rq)
-+{
-+ WARN_ON(!rcu_read_lock_held());
-+ return rq->idle_state;
-+}
-+#else
-+static inline void idle_set_state(struct rq *rq,
-+ struct cpuidle_state *idle_state)
-+{
-+}
-+
-+static inline struct cpuidle_state *idle_get_state(struct rq *rq)
-+{
-+ return NULL;
-+}
-+#endif
-+
-+static inline int cpu_of(const struct rq *rq)
-+{
-+#ifdef CONFIG_SMP
-+ return rq->cpu;
-+#else
-+ return 0;
-+#endif
-+}
-+
-+extern void resched_cpu(int cpu);
-+
-+#include "stats.h"
-+
-+#ifdef CONFIG_NO_HZ_COMMON
-+#define NOHZ_BALANCE_KICK_BIT 0
-+#define NOHZ_STATS_KICK_BIT 1
-+
-+#define NOHZ_BALANCE_KICK BIT(NOHZ_BALANCE_KICK_BIT)
-+#define NOHZ_STATS_KICK BIT(NOHZ_STATS_KICK_BIT)
-+
-+#define NOHZ_KICK_MASK (NOHZ_BALANCE_KICK | NOHZ_STATS_KICK)
-+
-+#define nohz_flags(cpu) (&cpu_rq(cpu)->nohz_flags)
-+
-+/* TODO: needed?
-+extern void nohz_balance_exit_idle(struct rq *rq);
-+#else
-+static inline void nohz_balance_exit_idle(struct rq *rq) { }
-+*/
-+#endif
-+
-+#ifdef CONFIG_IRQ_TIME_ACCOUNTING
-+struct irqtime {
-+ u64 total;
-+ u64 tick_delta;
-+ u64 irq_start_time;
-+ struct u64_stats_sync sync;
-+};
-+
-+DECLARE_PER_CPU(struct irqtime, cpu_irqtime);
-+
-+/*
-+ * Returns the irqtime minus the softirq time computed by ksoftirqd.
-+ * Otherwise ksoftirqd's sum_exec_runtime would have its own runtime
-+ * subtracted from it and never move forward.
-+ */
-+static inline u64 irq_time_read(int cpu)
-+{
-+ struct irqtime *irqtime = &per_cpu(cpu_irqtime, cpu);
-+ unsigned int seq;
-+ u64 total;
-+
-+ do {
-+ seq = __u64_stats_fetch_begin(&irqtime->sync);
-+ total = irqtime->total;
-+ } while (__u64_stats_fetch_retry(&irqtime->sync, seq));
-+
-+ return total;
-+}
-+#endif /* CONFIG_IRQ_TIME_ACCOUNTING */
-+
-+#ifdef CONFIG_CPU_FREQ
-+DECLARE_PER_CPU(struct update_util_data __rcu *, cpufreq_update_util_data);
-+#endif /* CONFIG_CPU_FREQ */
-+
-+#ifdef CONFIG_NO_HZ_FULL
-+extern int __init sched_tick_offload_init(void);
-+#else
-+static inline int sched_tick_offload_init(void) { return 0; }
-+#endif
-+
-+#ifdef arch_scale_freq_capacity
-+#ifndef arch_scale_freq_invariant
-+#define arch_scale_freq_invariant() (true)
-+#endif
-+#else /* arch_scale_freq_capacity */
-+#define arch_scale_freq_invariant() (false)
-+#endif
-+
-+#ifdef CONFIG_SMP
-+unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
-+ unsigned long min,
-+ unsigned long max);
-+#endif /* CONFIG_SMP */
-+
-+extern void schedule_idle(void);
-+
-+#define cap_scale(v, s) ((v)*(s) >> SCHED_CAPACITY_SHIFT)
-+
-+/*
-+ * !! For sched_setattr_nocheck() (kernel) only !!
-+ *
-+ * This is actually gross. :(
-+ *
-+ * It is used to make schedutil kworker(s) higher priority than SCHED_DEADLINE
-+ * tasks, but still be able to sleep. We need this on platforms that cannot
-+ * atomically change clock frequency. Remove once fast switching is
-+ * available on such platforms.
-+ *
-+ * SUGOV stands for SchedUtil GOVernor.
-+ */
-+#define SCHED_FLAG_SUGOV 0x10000000
-+
-+#ifdef CONFIG_MEMBARRIER
-+/*
-+ * The scheduler provides memory barriers required by membarrier between:
-+ * - prior user-space memory accesses and store to rq->membarrier_state,
-+ * - store to rq->membarrier_state and following user-space memory accesses.
-+ * In the same way it provides those guarantees around store to rq->curr.
-+ */
-+static inline void membarrier_switch_mm(struct rq *rq,
-+ struct mm_struct *prev_mm,
-+ struct mm_struct *next_mm)
-+{
-+ int membarrier_state;
-+
-+ if (prev_mm == next_mm)
-+ return;
-+
-+ membarrier_state = atomic_read(&next_mm->membarrier_state);
-+ if (READ_ONCE(rq->membarrier_state) == membarrier_state)
-+ return;
-+
-+ WRITE_ONCE(rq->membarrier_state, membarrier_state);
-+}
-+#else
-+static inline void membarrier_switch_mm(struct rq *rq,
-+ struct mm_struct *prev_mm,
-+ struct mm_struct *next_mm)
-+{
-+}
-+#endif
-+
-+#ifdef CONFIG_NUMA
-+extern int sched_numa_find_closest(const struct cpumask *cpus, int cpu);
-+#else
-+static inline int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
-+{
-+ return nr_cpu_ids;
-+}
-+#endif
-+
-+extern void swake_up_all_locked(struct swait_queue_head *q);
-+extern void __prepare_to_swait(struct swait_queue_head *q, struct swait_queue *wait);
-+
-+extern int try_to_wake_up(struct task_struct *tsk, unsigned int state, int wake_flags);
-+
-+#ifdef CONFIG_PREEMPT_DYNAMIC
-+extern int preempt_dynamic_mode;
-+extern int sched_dynamic_mode(const char *str);
-+extern void sched_dynamic_update(int mode);
-+#endif
-+
-+static inline void nohz_run_idle_balance(int cpu) { }
-+
-+static inline unsigned long
-+uclamp_eff_value(struct task_struct *p, enum uclamp_id clamp_id)
-+{
-+ if (clamp_id == UCLAMP_MIN)
-+ return 0;
-+
-+ return SCHED_CAPACITY_SCALE;
-+}
-+
-+static inline bool uclamp_rq_is_capped(struct rq *rq) { return false; }
-+
-+static inline bool uclamp_is_used(void)
-+{
-+ return false;
-+}
-+
-+static inline unsigned long
-+uclamp_rq_get(struct rq *rq, enum uclamp_id clamp_id)
-+{
-+ if (clamp_id == UCLAMP_MIN)
-+ return 0;
-+
-+ return SCHED_CAPACITY_SCALE;
-+}
-+
-+static inline void
-+uclamp_rq_set(struct rq *rq, enum uclamp_id clamp_id, unsigned int value)
-+{
-+}
-+
-+static inline bool uclamp_rq_is_idle(struct rq *rq)
-+{
-+ return false;
-+}
-+
-+#ifdef CONFIG_SCHED_MM_CID
-+
-+#define SCHED_MM_CID_PERIOD_NS (100ULL * 1000000) /* 100ms */
-+#define MM_CID_SCAN_DELAY 100 /* 100ms */
-+
-+extern raw_spinlock_t cid_lock;
-+extern int use_cid_lock;
-+
-+extern void sched_mm_cid_migrate_from(struct task_struct *t);
-+extern void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t);
-+extern void task_tick_mm_cid(struct rq *rq, struct task_struct *curr);
-+extern void init_sched_mm_cid(struct task_struct *t);
-+
-+static inline void __mm_cid_put(struct mm_struct *mm, int cid)
-+{
-+ if (cid < 0)
-+ return;
-+ cpumask_clear_cpu(cid, mm_cidmask(mm));
-+}
-+
-+/*
-+ * The per-mm/cpu cid can have the MM_CID_LAZY_PUT flag set or transition to
-+ * the MM_CID_UNSET state without holding the rq lock, but the rq lock needs to
-+ * be held to transition to other states.
-+ *
-+ * State transitions synchronized with cmpxchg or try_cmpxchg need to be
-+ * consistent across cpus, which prevents use of this_cpu_cmpxchg.
-+ */
-+static inline void mm_cid_put_lazy(struct task_struct *t)
-+{
-+ struct mm_struct *mm = t->mm;
-+ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-+ int cid;
-+
-+ lockdep_assert_irqs_disabled();
-+ cid = __this_cpu_read(pcpu_cid->cid);
-+ if (!mm_cid_is_lazy_put(cid) ||
-+ !try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
-+ return;
-+ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
-+}
-+
-+static inline int mm_cid_pcpu_unset(struct mm_struct *mm)
-+{
-+ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-+ int cid, res;
-+
-+ lockdep_assert_irqs_disabled();
-+ cid = __this_cpu_read(pcpu_cid->cid);
-+ for (;;) {
-+ if (mm_cid_is_unset(cid))
-+ return MM_CID_UNSET;
-+ /*
-+ * Attempt transition from valid or lazy-put to unset.
-+ */
-+ res = cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, cid, MM_CID_UNSET);
-+ if (res == cid)
-+ break;
-+ cid = res;
-+ }
-+ return cid;
-+}
-+
-+static inline void mm_cid_put(struct mm_struct *mm)
-+{
-+ int cid;
-+
-+ lockdep_assert_irqs_disabled();
-+ cid = mm_cid_pcpu_unset(mm);
-+ if (cid == MM_CID_UNSET)
-+ return;
-+ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
-+}
-+
-+static inline int __mm_cid_try_get(struct mm_struct *mm)
-+{
-+ struct cpumask *cpumask;
-+ int cid;
-+
-+ cpumask = mm_cidmask(mm);
-+ /*
-+ * Retry finding first zero bit if the mask is temporarily
-+ * filled. This only happens during concurrent remote-clear
-+ * which owns a cid without holding a rq lock.
-+ */
-+ for (;;) {
-+ cid = cpumask_first_zero(cpumask);
-+ if (cid < nr_cpu_ids)
-+ break;
-+ cpu_relax();
-+ }
-+ if (cpumask_test_and_set_cpu(cid, cpumask))
-+ return -1;
-+ return cid;
-+}
-+
-+/*
-+ * Save a snapshot of the current runqueue time of this cpu
-+ * with the per-cpu cid value, to allow estimating how recently it was used.
-+ */
-+static inline void mm_cid_snapshot_time(struct rq *rq, struct mm_struct *mm)
-+{
-+ struct mm_cid *pcpu_cid = per_cpu_ptr(mm->pcpu_cid, cpu_of(rq));
-+
-+ lockdep_assert_rq_held(rq);
-+ WRITE_ONCE(pcpu_cid->time, rq->clock);
-+}
-+
-+static inline int __mm_cid_get(struct rq *rq, struct mm_struct *mm)
-+{
-+ int cid;
-+
-+ /*
-+ * All allocations (even those using the cid_lock) are lock-free. If
-+ * use_cid_lock is set, hold the cid_lock to perform cid allocation to
-+ * guarantee forward progress.
-+ */
-+ if (!READ_ONCE(use_cid_lock)) {
-+ cid = __mm_cid_try_get(mm);
-+ if (cid >= 0)
-+ goto end;
-+ raw_spin_lock(&cid_lock);
-+ } else {
-+ raw_spin_lock(&cid_lock);
-+ cid = __mm_cid_try_get(mm);
-+ if (cid >= 0)
-+ goto unlock;
-+ }
-+
-+ /*
-+ * cid concurrently allocated. Retry while forcing following
-+ * allocations to use the cid_lock to ensure forward progress.
-+ */
-+ WRITE_ONCE(use_cid_lock, 1);
-+ /*
-+ * Set use_cid_lock before allocation. Only care about program order
-+ * because this is only required for forward progress.
-+ */
-+ barrier();
-+ /*
-+ * Retry until it succeeds. It is guaranteed to eventually succeed once
-+ * all newcoming allocations observe the use_cid_lock flag set.
-+ */
-+ do {
-+ cid = __mm_cid_try_get(mm);
-+ cpu_relax();
-+ } while (cid < 0);
-+ /*
-+ * Allocate before clearing use_cid_lock. Only care about
-+ * program order because this is for forward progress.
-+ */
-+ barrier();
-+ WRITE_ONCE(use_cid_lock, 0);
-+unlock:
-+ raw_spin_unlock(&cid_lock);
-+end:
-+ mm_cid_snapshot_time(rq, mm);
-+ return cid;
-+}
-+
-+static inline int mm_cid_get(struct rq *rq, struct mm_struct *mm)
-+{
-+ struct mm_cid __percpu *pcpu_cid = mm->pcpu_cid;
-+ struct cpumask *cpumask;
-+ int cid;
-+
-+ lockdep_assert_rq_held(rq);
-+ cpumask = mm_cidmask(mm);
-+ cid = __this_cpu_read(pcpu_cid->cid);
-+ if (mm_cid_is_valid(cid)) {
-+ mm_cid_snapshot_time(rq, mm);
-+ return cid;
-+ }
-+ if (mm_cid_is_lazy_put(cid)) {
-+ if (try_cmpxchg(&this_cpu_ptr(pcpu_cid)->cid, &cid, MM_CID_UNSET))
-+ __mm_cid_put(mm, mm_cid_clear_lazy_put(cid));
-+ }
-+ cid = __mm_cid_get(rq, mm);
-+ __this_cpu_write(pcpu_cid->cid, cid);
-+ return cid;
-+}
-+
-+static inline void switch_mm_cid(struct rq *rq,
-+ struct task_struct *prev,
-+ struct task_struct *next)
-+{
-+ /*
-+ * Provide a memory barrier between rq->curr store and load of
-+ * {prev,next}->mm->pcpu_cid[cpu] on rq->curr->mm transition.
-+ *
-+ * Should be adapted if context_switch() is modified.
-+ */
-+ if (!next->mm) { // to kernel
-+ /*
-+ * user -> kernel transition does not guarantee a barrier, but
-+ * we can use the fact that it performs an atomic operation in
-+ * mmgrab().
-+ */
-+ if (prev->mm) // from user
-+ smp_mb__after_mmgrab();
-+ /*
-+ * kernel -> kernel transition does not change rq->curr->mm
-+ * state. It stays NULL.
-+ */
-+ } else { // to user
-+ /*
-+ * kernel -> user transition does not provide a barrier
-+ * between rq->curr store and load of {prev,next}->mm->pcpu_cid[cpu].
-+ * Provide it here.
-+ */
-+ if (!prev->mm) // from kernel
-+ smp_mb();
-+ /*
-+ * user -> user transition guarantees a memory barrier through
-+ * switch_mm() when current->mm changes. If current->mm is
-+ * unchanged, no barrier is needed.
-+ */
-+ }
-+ if (prev->mm_cid_active) {
-+ mm_cid_snapshot_time(rq, prev->mm);
-+ mm_cid_put_lazy(prev);
-+ prev->mm_cid = -1;
-+ }
-+ if (next->mm_cid_active)
-+ next->last_mm_cid = next->mm_cid = mm_cid_get(rq, next->mm);
-+}
-+
-+#else
-+static inline void switch_mm_cid(struct rq *rq, struct task_struct *prev, struct task_struct *next) { }
-+static inline void sched_mm_cid_migrate_from(struct task_struct *t) { }
-+static inline void sched_mm_cid_migrate_to(struct rq *dst_rq, struct task_struct *t) { }
-+static inline void task_tick_mm_cid(struct rq *rq, struct task_struct *curr) { }
-+static inline void init_sched_mm_cid(struct task_struct *t) { }
-+#endif
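
The __mm_cid_get() path above is a lock-free fast path with a locked fallback: the first allocator that loses a race sets use_cid_lock, funnelling later allocators through cid_lock until an allocation succeeds. A minimal user-space sketch of the same forward-progress pattern, using C11 atomics and a pthread mutex in place of the kernel's cpumask and raw spinlock (all names are illustrative, not from the patch; the mask holds up to word-size ids):

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static atomic_ulong id_mask;        /* bit set => id in use */
    static atomic_int use_lock;         /* mirrors use_cid_lock */
    static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;

    /* One attempt to claim the lowest clear bit; -1 if the mask is full. */
    static int try_get_id(void)
    {
        unsigned long old = atomic_load(&id_mask);

        while (old != ~0UL) {
            int id = __builtin_ctzl(~old);    /* lowest zero bit */

            if (atomic_compare_exchange_weak(&id_mask, &old, old | (1UL << id)))
                return id;
            /* CAS failure reloaded 'old'; retry */
        }
        return -1;
    }

    static int get_id(void)
    {
        int id;

        if (!atomic_load(&use_lock)) {
            id = try_get_id();              /* lock-free fast path */
            if (id >= 0)
                return id;
            pthread_mutex_lock(&fallback_lock);
        } else {
            pthread_mutex_lock(&fallback_lock);
            id = try_get_id();
            if (id >= 0)
                goto unlock;
        }
        /*
         * Contention: force later allocators through the lock until this
         * allocation succeeds, guaranteeing forward progress.
         */
        atomic_store(&use_lock, 1);
        do {
            id = try_get_id();              /* spins until a holder releases */
        } while (id < 0);
        atomic_store(&use_lock, 0);
    unlock:
        pthread_mutex_unlock(&fallback_lock);
        return id;
    }

    static void put_id(int id)
    {
        atomic_fetch_and(&id_mask, ~(1UL << id));
    }

    int main(void)
    {
        int a = get_id(), b = get_id();

        printf("ids: %d %d\n", a, b);       /* ids: 0 1 */
        put_id(a);
        put_id(b);
        return 0;
    }
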
-+
-+#ifdef CONFIG_SMP
-+extern struct balance_callback balance_push_callback;
-+
-+static inline void
-+queue_balance_callback(struct rq *rq,
-+ struct balance_callback *head,
-+ void (*func)(struct rq *rq))
-+{
-+ lockdep_assert_rq_held(rq);
-+
-+ /*
-+ * Don't (re)queue an already queued item; nor queue anything when
-+ * balance_push() is active, see the comment with
-+ * balance_push_callback.
-+ */
-+ if (unlikely(head->next || rq->balance_callback == &balance_push_callback))
-+ return;
-+
-+ head->func = func;
-+ head->next = rq->balance_callback;
-+ rq->balance_callback = head;
-+}
-+#endif /* CONFIG_SMP */
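
queue_balance_callback() is an intrusive, lock-protected push onto a per-rq singly linked list, with a non-NULL next pointer doubling as the "already queued" flag. A stripped-down sketch of that pattern; self-linking the tail so the flag also covers the last node is a simplification of mine, and the real kernel list is additionally guarded against balance_push_callback:

    #include <stdio.h>

    struct callback {
        struct callback *next;    /* non-NULL doubles as "already queued" */
        void (*func)(void *ctx);
    };

    struct queue {
        struct callback *head;    /* caller is assumed to hold the queue lock */
    };

    static void queue_callback(struct queue *q, struct callback *cb,
                               void (*func)(void *ctx))
    {
        if (cb->next)             /* don't (re)queue a queued node */
            return;

        cb->func = func;
        cb->next = q->head ? q->head : cb;   /* tail self-links */
        q->head = cb;
    }

    static void run_callbacks(struct queue *q, void *ctx)
    {
        struct callback *cb = q->head;

        q->head = NULL;
        while (cb) {
            struct callback *next = (cb->next == cb) ? NULL : cb->next;

            cb->next = NULL;      /* mark dequeued so it can be re-queued */
            cb->func(ctx);
            cb = next;
        }
    }

    static void balance_cb(void *ctx)
    {
        printf("balance on cpu %d\n", *(int *)ctx);
    }

    int main(void)
    {
        struct queue q = { 0 };
        struct callback cb = { 0 };
        int cpu = 3;

        queue_callback(&q, &cb, balance_cb);
        queue_callback(&q, &cb, balance_cb);  /* duplicate: ignored */
        run_callbacks(&q, &cpu);              /* prints exactly once */
        return 0;
    }
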
-+
-+#ifdef CONFIG_SCHED_BMQ
-+#include "bmq.h"
-+#endif
-+#ifdef CONFIG_SCHED_PDS
-+#include "pds.h"
-+#endif
-+
-+#endif /* _KERNEL_SCHED_ALT_SCHED_H */
-diff --git a/kernel/sched/alt_topology.c b/kernel/sched/alt_topology.c
-new file mode 100644
-index 000000000000..2266138ee783
---- /dev/null
-+++ b/kernel/sched/alt_topology.c
-@@ -0,0 +1,350 @@
-+#include "alt_core.h"
-+#include "alt_topology.h"
-+
-+#ifdef CONFIG_SMP
-+
-+static cpumask_t sched_pcore_mask ____cacheline_aligned_in_smp;
-+
-+static int __init sched_pcore_mask_setup(char *str)
-+{
-+ if (cpulist_parse(str, &sched_pcore_mask))
-+ pr_warn("sched/alt: pcore_cpus= incorrect CPU range\n");
-+
-+ return 0;
-+}
-+__setup("pcore_cpus=", sched_pcore_mask_setup);
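
cpulist_parse() accepts the same "0-3,6"-style list syntax used across kernel boot parameters. A rough user-space equivalent for up to 64 CPUs, handy for experimenting with the pcore_cpus= format (illustrative, not the kernel implementation):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Parse a cpulist like "0-3,6" into a bitmask; returns 0 on success. */
    static int parse_cpulist(const char *s, uint64_t *mask)
    {
        *mask = 0;
        while (*s) {
            char *end;
            long a = strtol(s, &end, 10), b = a;

            if (end == s || a < 0 || a > 63)
                return -1;
            if (*end == '-') {
                s = end + 1;
                b = strtol(s, &end, 10);
                if (end == s || b < a || b > 63)
                    return -1;
            }
            for (long i = a; i <= b; i++)
                *mask |= 1ULL << i;
            s = (*end == ',') ? end + 1 : end;
            if (*end && *end != ',')
                return -1;
        }
        return 0;
    }

    int main(void)
    {
        uint64_t mask;

        if (!parse_cpulist("0-3,6", &mask))
            printf("pcore mask: 0x%llx\n", (unsigned long long)mask);
        /* prints: pcore mask: 0x4f */
        return 0;
    }
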
-+
-+/*
-+ * set/clear idle mask functions
-+ */
-+#ifdef CONFIG_SCHED_SMT
-+static void set_idle_mask_smt(unsigned int cpu, struct cpumask *dstp)
-+{
-+ cpumask_set_cpu(cpu, dstp);
-+ if (cpumask_subset(cpu_smt_mask(cpu), sched_idle_mask))
-+ cpumask_or(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
-+}
-+
-+static void clear_idle_mask_smt(int cpu, struct cpumask *dstp)
-+{
-+ cpumask_clear_cpu(cpu, dstp);
-+ cpumask_andnot(sched_sg_idle_mask, sched_sg_idle_mask, cpu_smt_mask(cpu));
-+}
-+#endif
-+
-+static void set_idle_mask_pcore(unsigned int cpu, struct cpumask *dstp)
-+{
-+ cpumask_set_cpu(cpu, dstp);
-+ cpumask_set_cpu(cpu, sched_pcore_idle_mask);
-+}
-+
-+static void clear_idle_mask_pcore(int cpu, struct cpumask *dstp)
-+{
-+ cpumask_clear_cpu(cpu, dstp);
-+ cpumask_clear_cpu(cpu, sched_pcore_idle_mask);
-+}
-+
-+static void set_idle_mask_ecore(unsigned int cpu, struct cpumask *dstp)
-+{
-+ cpumask_set_cpu(cpu, dstp);
-+ cpumask_set_cpu(cpu, sched_ecore_idle_mask);
-+}
-+
-+static void clear_idle_mask_ecore(int cpu, struct cpumask *dstp)
-+{
-+ cpumask_clear_cpu(cpu, dstp);
-+ cpumask_clear_cpu(cpu, sched_ecore_idle_mask);
-+}
-+
-+/*
-+ * Idle cpu/rq selection functions
-+ */
-+#ifdef CONFIG_SCHED_SMT
-+static bool p1_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
-+ const struct cpumask *src2p)
-+{
-+ return cpumask_and(dstp, src1p, src2p + 1) ||
-+ cpumask_and(dstp, src1p, src2p);
-+}
-+#endif
-+
-+static bool p1p2_idle_select_func(struct cpumask *dstp, const struct cpumask *src1p,
-+ const struct cpumask *src2p)
-+{
-+ return cpumask_and(dstp, src1p, src2p + 1) ||
-+ cpumask_and(dstp, src1p, src2p + 2) ||
-+ cpumask_and(dstp, src1p, src2p);
-+}
-+
-+/* common balance functions */
-+static int active_balance_cpu_stop(void *data)
-+{
-+ struct balance_arg *arg = data;
-+ struct task_struct *p = arg->task;
-+ struct rq *rq = this_rq();
-+ unsigned long flags;
-+ cpumask_t tmp;
-+
-+ local_irq_save(flags);
-+
-+ raw_spin_lock(&p->pi_lock);
-+ raw_spin_lock(&rq->lock);
-+
-+ arg->active = 0;
-+
-+ if (task_on_rq_queued(p) && task_rq(p) == rq &&
-+ cpumask_and(&tmp, p->cpus_ptr, arg->cpumask) &&
-+ !is_migration_disabled(p)) {
-+ int dcpu = __best_mask_cpu(&tmp, per_cpu(sched_cpu_llc_mask, cpu_of(rq)));
-+ rq = move_queued_task(rq, p, dcpu);
-+ }
-+
-+ raw_spin_unlock(&rq->lock);
-+ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+ return 0;
-+}
-+
-+/* trigger_active_balance - for @rq */
-+static inline int
-+trigger_active_balance(struct rq *src_rq, struct rq *rq, cpumask_t *target_mask)
-+{
-+ struct balance_arg *arg;
-+ unsigned long flags;
-+ struct task_struct *p;
-+ int res;
-+
-+ if (!raw_spin_trylock_irqsave(&rq->lock, flags))
-+ return 0;
-+
-+ arg = &rq->active_balance_arg;
-+ res = (1 == rq->nr_running) && \
-+ !is_migration_disabled((p = sched_rq_first_task(rq))) && \
-+ cpumask_intersects(p->cpus_ptr, target_mask) && \
-+ !arg->active;
-+ if (res) {
-+ arg->task = p;
-+ arg->cpumask = target_mask;
-+
-+ arg->active = 1;
-+ }
-+
-+ raw_spin_unlock_irqrestore(&rq->lock, flags);
-+
-+ if (res) {
-+ preempt_disable();
-+ raw_spin_unlock(&src_rq->lock);
-+
-+ stop_one_cpu_nowait(cpu_of(rq), active_balance_cpu_stop, arg,
-+ &rq->active_balance_work);
-+
-+ preempt_enable();
-+ raw_spin_lock(&src_rq->lock);
-+ }
-+
-+ return res;
-+}
-+
-+static inline int
-+ecore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
-+{
-+ if (cpumask_andnot(single_task_mask, single_task_mask, &sched_pcore_mask)) {
-+ int i, cpu = cpu_of(rq);
-+
-+ for_each_cpu_wrap(i, single_task_mask, cpu)
-+ if (trigger_active_balance(rq, cpu_rq(i), target_mask))
-+ return 1;
-+ }
-+
-+ return 0;
-+}
-+
-+static DEFINE_PER_CPU(struct balance_callback, active_balance_head);
-+
-+#ifdef CONFIG_SCHED_SMT
-+static inline int
-+smt_pcore_source_balance(struct rq *rq, cpumask_t *single_task_mask, cpumask_t *target_mask)
-+{
-+ cpumask_t smt_single_mask;
-+
-+ if (cpumask_and(&smt_single_mask, single_task_mask, &sched_smt_mask)) {
-+ int i, cpu = cpu_of(rq);
-+
-+ for_each_cpu_wrap(i, &smt_single_mask, cpu) {
-+ if (cpumask_subset(cpu_smt_mask(i), &smt_single_mask) &&
-+ trigger_active_balance(rq, cpu_rq(i), target_mask))
-+ return 1;
-+ }
-+ }
-+
-+ return 0;
-+}
-+
-+/* smt p core balance functions */
-+static inline void smt_pcore_balance(struct rq *rq)
-+{
-+ cpumask_t single_task_mask;
-+
-+ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
-+ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
-+ (/* smt core group balance */
-+ (static_key_count(&sched_smt_present.key) > 1 &&
-+ smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)
-+ ) ||
-+ /* e core to idle smt core balance */
-+ ecore_source_balance(rq, &single_task_mask, sched_sg_idle_mask)))
-+ return;
-+}
-+
-+static void smt_pcore_balance_func(struct rq *rq, const int cpu)
-+{
-+ if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
-+ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_pcore_balance);
-+}
-+
-+/* smt balance functions */
-+static inline void smt_balance(struct rq *rq)
-+{
-+ cpumask_t single_task_mask;
-+
-+ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
-+ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
-+ static_key_count(&sched_smt_present.key) > 1 &&
-+ smt_pcore_source_balance(rq, &single_task_mask, sched_sg_idle_mask))
-+ return;
-+}
-+
-+static void smt_balance_func(struct rq *rq, const int cpu)
-+{
-+ if (cpumask_test_cpu(cpu, sched_sg_idle_mask))
-+ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), smt_balance);
-+}
-+
-+/* e core balance functions */
-+static inline void ecore_balance(struct rq *rq)
-+{
-+ cpumask_t single_task_mask;
-+
-+ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
-+ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
-+ /* smt occupied p core to idle e core balance */
-+ smt_pcore_source_balance(rq, &single_task_mask, sched_ecore_idle_mask))
-+ return;
-+}
-+
-+static void ecore_balance_func(struct rq *rq, const int cpu)
-+{
-+ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), ecore_balance);
-+}
-+#endif /* CONFIG_SCHED_SMT */
-+
-+/* p core balance functions */
-+static inline void pcore_balance(struct rq *rq)
-+{
-+ cpumask_t single_task_mask;
-+
-+ if (cpumask_andnot(&single_task_mask, cpu_active_mask, sched_idle_mask) &&
-+ cpumask_andnot(&single_task_mask, &single_task_mask, &sched_rq_pending_mask) &&
-+ /* idle e core to p core balance */
-+ ecore_source_balance(rq, &single_task_mask, sched_pcore_idle_mask))
-+ return;
-+}
-+
-+static void pcore_balance_func(struct rq *rq, const int cpu)
-+{
-+ queue_balance_callback(rq, &per_cpu(active_balance_head, cpu), pcore_balance);
-+}
-+
-+#ifdef ALT_SCHED_DEBUG
-+#define SCHED_DEBUG_INFO(...) printk(KERN_INFO __VA_ARGS__)
-+#else
-+#define SCHED_DEBUG_INFO(...) do { } while(0)
-+#endif
-+
-+#define SET_IDLE_SELECT_FUNC(func) \
-+{ \
-+ idle_select_func = func; \
-+ printk(KERN_INFO "sched: "#func); \
-+}
-+
-+#define SET_RQ_BALANCE_FUNC(rq, cpu, func) \
-+{ \
-+ rq->balance_func = func; \
-+ SCHED_DEBUG_INFO("sched: cpu#%02d -> "#func, cpu); \
-+}
-+
-+#define SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_func, clear_func) \
-+{ \
-+ rq->set_idle_mask_func = set_func; \
-+ rq->clear_idle_mask_func = clear_func; \
-+ SCHED_DEBUG_INFO("sched: cpu#%02d -> "#set_func" "#clear_func, cpu); \
-+}
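
The SET_* helpers above lean on preprocessor stringification (#func) so the boot log names the exact function that was installed. A standalone illustration of the trick, wrapped in do { } while (0) hygiene that the originals omit:

    #include <stdio.h>

    /* Stringize the function identifier so the log names the handler. */
    #define SET_HANDLER(slot, fn)                       \
    do {                                                \
        (slot) = (fn);                                  \
        printf("sched: handler -> " #fn "\n");          \
    } while (0)

    static void pcore_balance_func(void) { }

    int main(void)
    {
        void (*handler)(void);

        SET_HANDLER(handler, pcore_balance_func);
        /* prints: sched: handler -> pcore_balance_func */
        handler();
        return 0;
    }
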
-+
-+void sched_init_topology(void)
-+{
-+ int cpu;
-+ struct rq *rq;
-+ cpumask_t sched_ecore_mask = { CPU_BITS_NONE };
-+ int ecore_present = 0;
-+
-+#ifdef CONFIG_SCHED_SMT
-+ if (!cpumask_empty(&sched_smt_mask))
-+ printk(KERN_INFO "sched: smt mask: 0x%08lx\n", sched_smt_mask.bits[0]);
-+#endif
-+
-+ if (!cpumask_empty(&sched_pcore_mask)) {
-+ cpumask_andnot(&sched_ecore_mask, cpu_online_mask, &sched_pcore_mask);
-+ printk(KERN_INFO "sched: pcore mask: 0x%08lx, ecore mask: 0x%08lx\n",
-+ sched_pcore_mask.bits[0], sched_ecore_mask.bits[0]);
-+
-+ ecore_present = !cpumask_empty(&sched_ecore_mask);
-+ }
-+
-+#ifdef CONFIG_SCHED_SMT
-+ /* idle select function */
-+ if (cpumask_equal(&sched_smt_mask, cpu_online_mask)) {
-+ SET_IDLE_SELECT_FUNC(p1_idle_select_func);
-+ } else
-+#endif
-+ if (!cpumask_empty(&sched_pcore_mask)) {
-+ SET_IDLE_SELECT_FUNC(p1p2_idle_select_func);
-+ }
-+
-+ for_each_online_cpu(cpu) {
-+ rq = cpu_rq(cpu);
-+ /* take the chance to reset the time slice for idle tasks */
-+ rq->idle->time_slice = sysctl_sched_base_slice;
-+
-+#ifdef CONFIG_SCHED_SMT
-+ if (cpumask_weight(cpu_smt_mask(cpu)) > 1) {
-+ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_smt, clear_idle_mask_smt);
-+
-+ if (cpumask_test_cpu(cpu, &sched_pcore_mask) &&
-+ !cpumask_intersects(&sched_ecore_mask, &sched_smt_mask)) {
-+ SET_RQ_BALANCE_FUNC(rq, cpu, smt_pcore_balance_func);
-+ } else {
-+ SET_RQ_BALANCE_FUNC(rq, cpu, smt_balance_func);
-+ }
-+
-+ continue;
-+ }
-+#endif
-+ /* !SMT or only one cpu in sg */
-+ if (cpumask_test_cpu(cpu, &sched_pcore_mask)) {
-+ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_pcore, clear_idle_mask_pcore);
-+
-+ if (ecore_present)
-+ SET_RQ_BALANCE_FUNC(rq, cpu, pcore_balance_func);
-+
-+ continue;
-+ }
-+ if (cpumask_test_cpu(cpu, &sched_ecore_mask)) {
-+ SET_RQ_IDLE_MASK_FUNC(rq, cpu, set_idle_mask_ecore, clear_idle_mask_ecore);
-+#ifdef CONFIG_SCHED_SMT
-+ if (cpumask_intersects(&sched_pcore_mask, &sched_smt_mask))
-+ SET_RQ_BALANCE_FUNC(rq, cpu, ecore_balance_func);
-+#endif
-+ }
-+ }
-+}
-+#endif /* CONFIG_SMP */
-diff --git a/kernel/sched/alt_topology.h b/kernel/sched/alt_topology.h
-new file mode 100644
-index 000000000000..076174cd2bc6
---- /dev/null
-+++ b/kernel/sched/alt_topology.h
-@@ -0,0 +1,6 @@
-+#ifndef _KERNEL_SCHED_ALT_TOPOLOGY_H
-+#define _KERNEL_SCHED_ALT_TOPOLOGY_H
-+
-+extern void sched_init_topology(void);
-+
-+#endif /* _KERNEL_SCHED_ALT_TOPOLOGY_H */
-diff --git a/kernel/sched/bmq.h b/kernel/sched/bmq.h
-new file mode 100644
-index 000000000000..5a7835246ec3
---- /dev/null
-+++ b/kernel/sched/bmq.h
-@@ -0,0 +1,103 @@
-+#ifndef _KERNEL_SCHED_BMQ_H
-+#define _KERNEL_SCHED_BMQ_H
-+
-+#define ALT_SCHED_NAME "BMQ"
-+
-+/*
-+ * BMQ only routines
-+ */
-+static inline void boost_task(struct task_struct *p, int n)
-+{
-+ int limit;
-+
-+ switch (p->policy) {
-+ case SCHED_NORMAL:
-+ limit = -MAX_PRIORITY_ADJ;
-+ break;
-+ case SCHED_BATCH:
-+ limit = 0;
-+ break;
-+ default:
-+ return;
-+ }
-+
-+ p->boost_prio = max(limit, p->boost_prio - n);
-+}
-+
-+static inline void deboost_task(struct task_struct *p)
-+{
-+ if (p->boost_prio < MAX_PRIORITY_ADJ)
-+ p->boost_prio++;
-+}
-+
-+/*
-+ * Common interfaces
-+ */
-+static inline void sched_timeslice_imp(const int timeslice_ms) {}
-+
-+/* This API is used in task_prio(); its return value is read by human users */
-+static inline int
-+task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
-+{
-+ return p->prio + p->boost_prio - MIN_NORMAL_PRIO;
-+}
-+
-+static inline int task_sched_prio(const struct task_struct *p)
-+{
-+ return (p->prio < MIN_NORMAL_PRIO)? (p->prio >> 2) :
-+ MIN_SCHED_NORMAL_PRIO + (p->prio + p->boost_prio - MIN_NORMAL_PRIO) / 2;
-+}
-+
-+#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio) \
-+ prio = task_sched_prio(p); \
-+ idx = prio;
-+
-+static inline int sched_prio2idx(int prio, struct rq *rq)
-+{
-+ return prio;
-+}
-+
-+static inline int sched_idx2prio(int idx, struct rq *rq)
-+{
-+ return idx;
-+}
-+
-+static inline int sched_rq_prio_idx(struct rq *rq)
-+{
-+ return rq->prio;
-+}
-+
-+static inline int task_running_nice(struct task_struct *p)
-+{
-+ return (p->prio + p->boost_prio > DEFAULT_PRIO);
-+}
-+
-+static inline void sched_update_rq_clock(struct rq *rq) {}
-+
-+static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
-+{
-+ deboost_task(p);
-+}
-+
-+static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq) {}
-+static inline void sched_task_fork(struct task_struct *p, struct rq *rq) {}
-+
-+static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
-+{
-+ p->boost_prio = MAX_PRIORITY_ADJ;
-+}
-+
-+static inline void sched_task_ttwu(struct task_struct *p)
-+{
-+ s64 delta = this_rq()->clock_task - p->last_ran;
-+
-+ if (likely(delta > 0))
-+ boost_task(p, delta >> 22);
-+}
-+
-+static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq)
-+{
-+ boost_task(p, 1);
-+}
-+
-+#endif /* _KERNEL_SCHED_BMQ_H */
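
For a feel of the BMQ mapping, here is task_sched_prio() evaluated standalone. MIN_NORMAL_PRIO and MIN_SCHED_NORMAL_PRIO are defined elsewhere in the patch (alt_sched.h); the values below are assumptions for illustration, as are the sample kernel priorities:

    #include <stdio.h>

    /* Illustrative constants only; the real values are not in this hunk. */
    #define MIN_NORMAL_PRIO        128
    #define MIN_SCHED_NORMAL_PRIO  32

    static int task_sched_prio(int prio, int boost_prio)
    {
        return (prio < MIN_NORMAL_PRIO) ? (prio >> 2) :
            MIN_SCHED_NORMAL_PRIO + (prio + boost_prio - MIN_NORMAL_PRIO) / 2;
    }

    int main(void)
    {
        /* An RT task: the queue index comes straight from the RT priority. */
        printf("rt prio 8        -> queue %d\n", task_sched_prio(8, 0));   /* 2 */
        /* A normal task, unboosted vs. wakeup-boosted: a negative
         * boost_prio lowers the index, so the task is picked sooner. */
        printf("normal, boost  0 -> queue %d\n", task_sched_prio(148, 0));  /* 42 */
        printf("normal, boost -7 -> queue %d\n", task_sched_prio(148, -7)); /* 38 */
        return 0;
    }
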
-diff --git a/kernel/sched/build_policy.c b/kernel/sched/build_policy.c
-index fae1f5c921eb..1e06434b5b9b 100644
---- a/kernel/sched/build_policy.c
-+++ b/kernel/sched/build_policy.c
-@@ -49,15 +49,21 @@
-
- #include "idle.c"
-
-+#ifndef CONFIG_SCHED_ALT
- #include "rt.c"
-+#endif
-
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- # include "cpudeadline.c"
-+#endif
- # include "pelt.c"
- #endif
-
- #include "cputime.c"
-+#ifndef CONFIG_SCHED_ALT
- #include "deadline.c"
-+#endif
-
- #ifdef CONFIG_SCHED_CLASS_EXT
- # include "ext.c"
-diff --git a/kernel/sched/build_utility.c b/kernel/sched/build_utility.c
-index 80a3df49ab47..58d04aa73634 100644
---- a/kernel/sched/build_utility.c
-+++ b/kernel/sched/build_utility.c
-@@ -56,6 +56,10 @@
-
- #include "clock.c"
-
-+#ifdef CONFIG_SCHED_ALT
-+# include "alt_topology.c"
-+#endif
-+
- #ifdef CONFIG_CGROUP_CPUACCT
- # include "cpuacct.c"
- #endif
-@@ -84,7 +88,9 @@
-
- #ifdef CONFIG_SMP
- # include "cpupri.c"
-+#ifndef CONFIG_SCHED_ALT
- # include "stop_task.c"
-+#endif
- # include "topology.c"
- #endif
-
-diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
-index c6ba15388ea7..56590821f074 100644
---- a/kernel/sched/cpufreq_schedutil.c
-+++ b/kernel/sched/cpufreq_schedutil.c
-@@ -197,6 +197,7 @@ unsigned long sugov_effective_cpu_perf(int cpu, unsigned long actual,
-
- static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
- {
-+#ifndef CONFIG_SCHED_ALT
- unsigned long min, max, util = scx_cpuperf_target(sg_cpu->cpu);
-
- if (!scx_switched_all())
-@@ -205,6 +206,10 @@ static void sugov_get_util(struct sugov_cpu *sg_cpu, unsigned long boost)
- util = max(util, boost);
- sg_cpu->bw_min = min;
- sg_cpu->util = sugov_effective_cpu_perf(sg_cpu->cpu, util, min, max);
-+#else /* CONFIG_SCHED_ALT */
-+ sg_cpu->bw_min = 0;
-+ sg_cpu->util = rq_load_util(cpu_rq(sg_cpu->cpu), arch_scale_cpu_capacity(sg_cpu->cpu));
-+#endif /* CONFIG_SCHED_ALT */
- }
-
- /**
-@@ -364,8 +369,10 @@ static inline bool sugov_hold_freq(struct sugov_cpu *sg_cpu) { return false; }
- */
- static inline void ignore_dl_rate_limit(struct sugov_cpu *sg_cpu)
- {
-+#ifndef CONFIG_SCHED_ALT
- if (cpu_bw_dl(cpu_rq(sg_cpu->cpu)) > sg_cpu->bw_min)
- sg_cpu->sg_policy->limits_changed = true;
-+#endif
- }
-
- static inline bool sugov_update_single_common(struct sugov_cpu *sg_cpu,
-@@ -684,6 +691,7 @@ static int sugov_kthread_create(struct sugov_policy *sg_policy)
- }
-
- ret = sched_setattr_nocheck(thread, &attr);
-+
- if (ret) {
- kthread_stop(thread);
- pr_warn("%s: failed to set SCHED_DEADLINE\n", __func__);
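
Under CONFIG_SCHED_ALT, schedutil takes utilization from rq_load_util() (defined elsewhere in the patch) scaled against arch_scale_cpu_capacity(), instead of the PELT/scx signals. A plausible sketch of that kind of load-to-capacity scaling; the representation and arithmetic here are assumptions, not the patch's code:

    #include <stdio.h>
    #include <stdint.h>

    #define SCHED_CAPACITY_SCALE 1024    /* as in the kernel */

    /*
     * Scale a raw load sample (0..load_max) into the 0..capacity range,
     * which is the shape of work a rq_load_util()-style helper must do.
     */
    static unsigned long load_to_util(uint64_t load, uint64_t load_max,
                                      unsigned long capacity)
    {
        if (!load_max)
            return 0;
        if (load > load_max)
            load = load_max;
        return (unsigned long)((load * capacity) / load_max);
    }

    int main(void)
    {
        /* A half-loaded rq on a full-capacity CPU gives util ~512. */
        printf("util = %lu\n", load_to_util(512, 1024, SCHED_CAPACITY_SCALE));
        return 0;
    }
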
-diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
-index 0bed0fa1acd9..031affa09446 100644
---- a/kernel/sched/cputime.c
-+++ b/kernel/sched/cputime.c
-@@ -126,7 +126,7 @@ void account_user_time(struct task_struct *p, u64 cputime)
- p->utime += cputime;
- account_group_user_time(p, cputime);
-
-- index = (task_nice(p) > 0) ? CPUTIME_NICE : CPUTIME_USER;
-+ index = task_running_nice(p) ? CPUTIME_NICE : CPUTIME_USER;
-
- /* Add user time to cpustat. */
- task_group_account_field(p, index, cputime);
-@@ -150,7 +150,7 @@ void account_guest_time(struct task_struct *p, u64 cputime)
- p->gtime += cputime;
-
- /* Add guest time to cpustat. */
-- if (task_nice(p) > 0) {
-+ if (task_running_nice(p)) {
- task_group_account_field(p, CPUTIME_NICE, cputime);
- cpustat[CPUTIME_GUEST_NICE] += cputime;
- } else {
-@@ -288,7 +288,7 @@ static inline u64 account_other_time(u64 max)
- #ifdef CONFIG_64BIT
- static inline u64 read_sum_exec_runtime(struct task_struct *t)
- {
-- return t->se.sum_exec_runtime;
-+ return tsk_seruntime(t);
- }
- #else
- static u64 read_sum_exec_runtime(struct task_struct *t)
-@@ -298,7 +298,7 @@ static u64 read_sum_exec_runtime(struct task_struct *t)
- struct rq *rq;
-
- rq = task_rq_lock(t, &rf);
-- ns = t->se.sum_exec_runtime;
-+ ns = tsk_seruntime(t);
- task_rq_unlock(rq, t, &rf);
-
- return ns;
-@@ -623,7 +623,7 @@ void cputime_adjust(struct task_cputime *curr, struct prev_cputime *prev,
- void task_cputime_adjusted(struct task_struct *p, u64 *ut, u64 *st)
- {
- struct task_cputime cputime = {
-- .sum_exec_runtime = p->se.sum_exec_runtime,
-+ .sum_exec_runtime = tsk_seruntime(p),
- };
-
- if (task_cputime(p, &cputime.utime, &cputime.stime))
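
These cputime hunks route every read of p->se.sum_exec_runtime through tsk_seruntime(), so BMQ/PDS can substitute their own accounting field (sched_time, as the workqueue hunk further down shows). A sketch of that compile-time dispatch with illustrative struct layouts; the real macro lives in the patched include/linux/sched.h, which is not part of this excerpt:

    #include <stdio.h>

    struct task_alt { unsigned long long sched_time; };
    struct task_cfs { struct { unsigned long long sum_exec_runtime; } se; };

    #ifdef CONFIG_SCHED_ALT
    typedef struct task_alt task_t;
    # define tsk_seruntime(t)    ((t)->sched_time)
    #else
    typedef struct task_cfs task_t;
    # define tsk_seruntime(t)    ((t)->se.sum_exec_runtime)
    #endif

    int main(void)
    {
    #ifdef CONFIG_SCHED_ALT
        task_t t = { .sched_time = 123456789ULL };
    #else
        task_t t = { .se.sum_exec_runtime = 123456789ULL };
    #endif
        /* The call site is identical under either scheduler flavour. */
        printf("runtime: %llu ns\n", tsk_seruntime(&t));
        return 0;
    }
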
-diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
-index f4035c7a0fa1..4df4ad88d6a9 100644
---- a/kernel/sched/debug.c
-+++ b/kernel/sched/debug.c
-@@ -7,6 +7,7 @@
- * Copyright(C) 2007, Red Hat, Inc., Ingo Molnar
- */
-
-+#ifndef CONFIG_SCHED_ALT
- /*
- * This allows printing both to /sys/kernel/debug/sched/debug and
- * to the console
-@@ -215,6 +216,7 @@ static const struct file_operations sched_scaling_fops = {
- };
-
- #endif /* SMP */
-+#endif /* !CONFIG_SCHED_ALT */
-
- #ifdef CONFIG_PREEMPT_DYNAMIC
-
-@@ -278,6 +280,7 @@ static const struct file_operations sched_dynamic_fops = {
-
- #endif /* CONFIG_PREEMPT_DYNAMIC */
-
-+#ifndef CONFIG_SCHED_ALT
- __read_mostly bool sched_debug_verbose;
-
- #ifdef CONFIG_SMP
-@@ -468,9 +471,11 @@ static const struct file_operations fair_server_period_fops = {
- .llseek = seq_lseek,
- .release = single_release,
- };
-+#endif /* !CONFIG_SCHED_ALT */
-
- static struct dentry *debugfs_sched;
-
-+#ifndef CONFIG_SCHED_ALT
- static void debugfs_fair_server_init(void)
- {
- struct dentry *d_fair;
-@@ -491,6 +496,7 @@ static void debugfs_fair_server_init(void)
- debugfs_create_file("period", 0644, d_cpu, (void *) cpu, &fair_server_period_fops);
- }
- }
-+#endif /* !CONFIG_SCHED_ALT */
-
- static __init int sched_init_debug(void)
- {
-@@ -498,14 +504,17 @@ static __init int sched_init_debug(void)
-
- debugfs_sched = debugfs_create_dir("sched", NULL);
-
-+#ifndef CONFIG_SCHED_ALT
- debugfs_create_file("features", 0644, debugfs_sched, NULL, &sched_feat_fops);
- debugfs_create_file_unsafe("verbose", 0644, debugfs_sched, &sched_debug_verbose, &sched_verbose_fops);
-+#endif /* !CONFIG_SCHED_ALT */
- #ifdef CONFIG_PREEMPT_DYNAMIC
- debugfs_create_file("preempt", 0644, debugfs_sched, NULL, &sched_dynamic_fops);
- #endif
-
- debugfs_create_u32("base_slice_ns", 0644, debugfs_sched, &sysctl_sched_base_slice);
-
-+#ifndef CONFIG_SCHED_ALT
- debugfs_create_u32("latency_warn_ms", 0644, debugfs_sched, &sysctl_resched_latency_warn_ms);
- debugfs_create_u32("latency_warn_once", 0644, debugfs_sched, &sysctl_resched_latency_warn_once);
-
-@@ -530,13 +539,17 @@ static __init int sched_init_debug(void)
- #endif
-
- debugfs_create_file("debug", 0444, debugfs_sched, NULL, &sched_debug_fops);
-+#endif /* !CONFIG_SCHED_ALT */
-
-+#ifndef CONFIG_SCHED_ALT
- debugfs_fair_server_init();
-+#endif /* !CONFIG_SCHED_ALT */
-
- return 0;
- }
- late_initcall(sched_init_debug);
-
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_SMP
-
- static cpumask_var_t sd_sysctl_cpus;
-@@ -1288,6 +1301,7 @@ void proc_sched_set_task(struct task_struct *p)
- memset(&p->stats, 0, sizeof(p->stats));
- #endif
- }
-+#endif /* !CONFIG_SCHED_ALT */
-
- void resched_latency_warn(int cpu, u64 latency)
- {
-diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
-index d2f096bb274c..36071f4b7b7f 100644
---- a/kernel/sched/idle.c
-+++ b/kernel/sched/idle.c
-@@ -424,6 +424,7 @@ void cpu_startup_entry(enum cpuhp_state state)
- do_idle();
- }
-
-+#ifndef CONFIG_SCHED_ALT
- /*
- * idle-task scheduling class.
- */
-@@ -538,3 +539,4 @@ DEFINE_SCHED_CLASS(idle) = {
- .switched_to = switched_to_idle,
- .update_curr = update_curr_idle,
- };
-+#endif
-diff --git a/kernel/sched/pds.h b/kernel/sched/pds.h
-new file mode 100644
-index 000000000000..fe3099071eb7
---- /dev/null
-+++ b/kernel/sched/pds.h
-@@ -0,0 +1,139 @@
-+#ifndef _KERNEL_SCHED_PDS_H
-+#define _KERNEL_SCHED_PDS_H
-+
-+#define ALT_SCHED_NAME "PDS"
-+
-+static const u64 RT_MASK = ((1ULL << MIN_SCHED_NORMAL_PRIO) - 1);
-+
-+#define SCHED_NORMAL_PRIO_NUM (32)
-+#define SCHED_EDGE_DELTA (SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)
-+
-+/* PDS assumes SCHED_NORMAL_PRIO_NUM is a power of 2 */
-+#define SCHED_NORMAL_PRIO_MOD(x) ((x) & (SCHED_NORMAL_PRIO_NUM - 1))
-+
-+/* default time slice 4ms -> shift 22, 2 time slice slots -> shift 23 */
-+static __read_mostly int sched_timeslice_shift = 23;
-+
-+/*
-+ * Common interfaces
-+ */
-+static inline int
-+task_sched_prio_normal(const struct task_struct *p, const struct rq *rq)
-+{
-+ u64 sched_dl = max(p->deadline, rq->time_edge);
-+
-+#ifdef ALT_SCHED_DEBUG
-+ if (WARN_ONCE(sched_dl - rq->time_edge > NORMAL_PRIO_NUM - 1,
-+ "pds: task_sched_prio_normal() delta %lld\n", sched_dl - rq->time_edge))
-+ return SCHED_NORMAL_PRIO_NUM - 1;
-+#endif
-+
-+ return sched_dl - rq->time_edge;
-+}
-+
-+static inline int task_sched_prio(const struct task_struct *p)
-+{
-+ return (p->prio < MIN_NORMAL_PRIO) ? (p->prio >> 2) :
-+ MIN_SCHED_NORMAL_PRIO + task_sched_prio_normal(p, task_rq(p));
-+}
-+
-+#define TASK_SCHED_PRIO_IDX(p, rq, idx, prio) \
-+ if (p->prio < MIN_NORMAL_PRIO) { \
-+ prio = p->prio >> 2; \
-+ idx = prio; \
-+ } else { \
-+ u64 sched_dl = max(p->deadline, rq->time_edge); \
-+ prio = MIN_SCHED_NORMAL_PRIO + sched_dl - rq->time_edge; \
-+ idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_dl); \
-+ }
-+
-+static inline int sched_prio2idx(int sched_prio, struct rq *rq)
-+{
-+ return (IDLE_TASK_SCHED_PRIO == sched_prio || sched_prio < MIN_SCHED_NORMAL_PRIO) ?
-+ sched_prio :
-+ MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_prio + rq->time_edge);
-+}
-+
-+static inline int sched_idx2prio(int sched_idx, struct rq *rq)
-+{
-+ return (sched_idx < MIN_SCHED_NORMAL_PRIO) ?
-+ sched_idx :
-+ MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(sched_idx - rq->time_edge);
-+}
-+
-+static inline int sched_rq_prio_idx(struct rq *rq)
-+{
-+ return rq->prio_idx;
-+}
-+
-+static inline int task_running_nice(struct task_struct *p)
-+{
-+ return (p->prio > DEFAULT_PRIO);
-+}
-+
-+static inline void sched_update_rq_clock(struct rq *rq)
-+{
-+ struct list_head head;
-+ u64 old = rq->time_edge;
-+ u64 now = rq->clock >> sched_timeslice_shift;
-+ u64 prio, delta;
-+ DECLARE_BITMAP(normal, SCHED_QUEUE_BITS);
-+
-+ if (now == old)
-+ return;
-+
-+ rq->time_edge = now;
-+ delta = min_t(u64, SCHED_NORMAL_PRIO_NUM, now - old);
-+ INIT_LIST_HEAD(&head);
-+
-+ prio = MIN_SCHED_NORMAL_PRIO;
-+ for_each_set_bit_from(prio, rq->queue.bitmap, MIN_SCHED_NORMAL_PRIO + delta)
-+ list_splice_tail_init(rq->queue.heads + MIN_SCHED_NORMAL_PRIO +
-+ SCHED_NORMAL_PRIO_MOD(prio + old), &head);
-+
-+ bitmap_shift_right(normal, rq->queue.bitmap, delta, SCHED_QUEUE_BITS);
-+ if (!list_empty(&head)) {
-+ u64 idx = MIN_SCHED_NORMAL_PRIO + SCHED_NORMAL_PRIO_MOD(now);
-+
-+ __list_splice(&head, rq->queue.heads + idx, rq->queue.heads[idx].next);
-+ set_bit(MIN_SCHED_NORMAL_PRIO, normal);
-+ }
-+ bitmap_replace(rq->queue.bitmap, normal, rq->queue.bitmap,
-+ (const unsigned long *)&RT_MASK, SCHED_QUEUE_BITS);
-+
-+ if (rq->prio < MIN_SCHED_NORMAL_PRIO || IDLE_TASK_SCHED_PRIO == rq->prio)
-+ return;
-+
-+ rq->prio = max_t(u64, MIN_SCHED_NORMAL_PRIO, rq->prio - delta);
-+ rq->prio_idx = sched_prio2idx(rq->prio, rq);
-+}
-+
-+static inline void sched_task_renew(struct task_struct *p, const struct rq *rq)
-+{
-+ if (p->prio >= MIN_NORMAL_PRIO)
-+ p->deadline = rq->time_edge + SCHED_EDGE_DELTA +
-+ (p->static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
-+}
-+
-+static inline void sched_task_sanity_check(struct task_struct *p, struct rq *rq)
-+{
-+ u64 max_dl = rq->time_edge + SCHED_EDGE_DELTA + NICE_WIDTH / 2 - 1;
-+ if (unlikely(p->deadline > max_dl))
-+ p->deadline = max_dl;
-+}
-+
-+static inline void sched_task_fork(struct task_struct *p, struct rq *rq)
-+{
-+ sched_task_renew(p, rq);
-+}
-+
-+static inline void do_sched_yield_type_1(struct task_struct *p, struct rq *rq)
-+{
-+ p->time_slice = sysctl_sched_base_slice;
-+ sched_task_renew(p, rq);
-+}
-+
-+static inline void sched_task_ttwu(struct task_struct *p) {}
-+static inline void sched_task_deactivate(struct task_struct *p, struct rq *rq) {}
-+
-+#endif /* _KERNEL_SCHED_PDS_H */
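
The PDS deadline arithmetic is easy to check by hand: a renewing task gets a deadline SCHED_EDGE_DELTA + (static_prio - (MAX_PRIO - NICE_WIDTH)) / 2 slots past rq->time_edge, so nicer tasks land in later queue slots. A standalone evaluation using the mainline constants MAX_PRIO = 140 and NICE_WIDTH = 40 (the time_edge value is made up):

    #include <stdio.h>
    #include <stdint.h>

    #define MAX_PRIO               140
    #define NICE_WIDTH             40
    #define SCHED_NORMAL_PRIO_NUM  32
    #define SCHED_EDGE_DELTA       (SCHED_NORMAL_PRIO_NUM - NICE_WIDTH / 2)

    static uint64_t pds_deadline(uint64_t time_edge, int static_prio)
    {
        return time_edge + SCHED_EDGE_DELTA +
            (static_prio - (MAX_PRIO - NICE_WIDTH)) / 2;
    }

    int main(void)
    {
        uint64_t edge = 1000;    /* rq->clock >> 23, illustrative value */

        /* nice -20, 0, +19 correspond to static_prio 100, 120, 139 */
        printf("nice -20: deadline %llu\n",
               (unsigned long long)pds_deadline(edge, 100));   /* 1012 */
        printf("nice   0: deadline %llu\n",
               (unsigned long long)pds_deadline(edge, 120));   /* 1022 */
        printf("nice +19: deadline %llu\n",
               (unsigned long long)pds_deadline(edge, 139));   /* 1031 */
        return 0;
    }
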
-diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c
-index a9c65d97b3ca..a66431e6527c 100644
---- a/kernel/sched/pelt.c
-+++ b/kernel/sched/pelt.c
-@@ -266,6 +266,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load)
- WRITE_ONCE(sa->util_avg, sa->util_sum / divider);
- }
-
-+#ifndef CONFIG_SCHED_ALT
- /*
- * sched_entity:
- *
-@@ -383,8 +384,9 @@ int update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
-
- return 0;
- }
-+#endif
-
--#ifdef CONFIG_SCHED_HW_PRESSURE
-+#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
- /*
- * hardware:
- *
-@@ -468,6 +470,7 @@ int update_irq_load_avg(struct rq *rq, u64 running)
- }
- #endif
-
-+#ifndef CONFIG_SCHED_ALT
- /*
-+ * Load avg and utilization metrics need to be updated periodically and before
- * consumption. This function updates the metrics for all subsystems except for
-@@ -487,3 +490,4 @@ bool update_other_load_avgs(struct rq *rq)
- update_hw_load_avg(rq_clock_task(rq), rq, hw_pressure) |
- update_irq_load_avg(rq, 0);
- }
-+#endif /* !CONFIG_SCHED_ALT */
-diff --git a/kernel/sched/pelt.h b/kernel/sched/pelt.h
-index f4f6a0875c66..ee780f2b6c17 100644
---- a/kernel/sched/pelt.h
-+++ b/kernel/sched/pelt.h
-@@ -1,14 +1,16 @@
- #ifdef CONFIG_SMP
- #include "sched-pelt.h"
-
-+#ifndef CONFIG_SCHED_ALT
- int __update_load_avg_blocked_se(u64 now, struct sched_entity *se);
- int __update_load_avg_se(u64 now, struct cfs_rq *cfs_rq, struct sched_entity *se);
- int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq);
- int update_rt_rq_load_avg(u64 now, struct rq *rq, int running);
- int update_dl_rq_load_avg(u64 now, struct rq *rq, int running);
- bool update_other_load_avgs(struct rq *rq);
-+#endif
-
--#ifdef CONFIG_SCHED_HW_PRESSURE
-+#if defined(CONFIG_SCHED_HW_PRESSURE) && !defined(CONFIG_SCHED_ALT)
- int update_hw_load_avg(u64 now, struct rq *rq, u64 capacity);
-
- static inline u64 hw_load_avg(struct rq *rq)
-@@ -45,6 +47,7 @@ static inline u32 get_pelt_divider(struct sched_avg *avg)
- return PELT_MIN_DIVIDER + avg->period_contrib;
- }
-
-+#ifndef CONFIG_SCHED_ALT
- static inline void cfs_se_util_change(struct sched_avg *avg)
- {
- unsigned int enqueued;
-@@ -181,9 +184,11 @@ static inline u64 cfs_rq_clock_pelt(struct cfs_rq *cfs_rq)
- return rq_clock_pelt(rq_of(cfs_rq));
- }
- #endif
-+#endif /* CONFIG_SCHED_ALT */
-
- #else
-
-+#ifndef CONFIG_SCHED_ALT
- static inline int
- update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
- {
-@@ -201,6 +206,7 @@ update_dl_rq_load_avg(u64 now, struct rq *rq, int running)
- {
- return 0;
- }
-+#endif
-
- static inline int
- update_hw_load_avg(u64 now, struct rq *rq, u64 capacity)
-diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
-index c03b3d7b320e..08ee4a9cd6a5 100644
---- a/kernel/sched/sched.h
-+++ b/kernel/sched/sched.h
-@@ -5,6 +5,10 @@
- #ifndef _KERNEL_SCHED_SCHED_H
- #define _KERNEL_SCHED_SCHED_H
-
-+#ifdef CONFIG_SCHED_ALT
-+#include "alt_sched.h"
-+#else
-+
- #include <linux/sched/affinity.h>
- #include <linux/sched/autogroup.h>
- #include <linux/sched/cpufreq.h>
-@@ -3878,4 +3882,9 @@ void sched_enq_and_set_task(struct sched_enq_and_set_ctx *ctx);
-
- #include "ext.h"
-
-+static inline int task_running_nice(struct task_struct *p)
-+{
-+ return (task_nice(p) > 0);
-+}
-+#endif /* !CONFIG_SCHED_ALT */
- #endif /* _KERNEL_SCHED_SCHED_H */
-diff --git a/kernel/sched/stats.c b/kernel/sched/stats.c
-index eb0cdcd4d921..72224ecb5cbf 100644
---- a/kernel/sched/stats.c
-+++ b/kernel/sched/stats.c
-@@ -115,8 +115,10 @@ static int show_schedstat(struct seq_file *seq, void *v)
- } else {
- struct rq *rq;
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- struct sched_domain *sd;
- int dcount = 0;
-+#endif
- #endif
- cpu = (unsigned long)(v - 2);
- rq = cpu_rq(cpu);
-@@ -133,6 +135,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
- seq_printf(seq, "\n");
-
- #ifdef CONFIG_SMP
-+#ifndef CONFIG_SCHED_ALT
- /* domain-specific stats */
- rcu_read_lock();
- for_each_domain(cpu, sd) {
-@@ -160,6 +163,7 @@ static int show_schedstat(struct seq_file *seq, void *v)
- sd->ttwu_move_balance);
- }
- rcu_read_unlock();
-+#endif
- #endif
- }
- return 0;
-diff --git a/kernel/sched/stats.h b/kernel/sched/stats.h
-index 767e098a3bd1..4cbf4d3e611e 100644
---- a/kernel/sched/stats.h
-+++ b/kernel/sched/stats.h
-@@ -89,6 +89,7 @@ static inline void rq_sched_info_depart (struct rq *rq, unsigned long long delt
-
- #endif /* CONFIG_SCHEDSTATS */
-
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_FAIR_GROUP_SCHED
- struct sched_entity_stats {
- struct sched_entity se;
-@@ -105,6 +106,7 @@ __schedstats_from_se(struct sched_entity *se)
- #endif
- return &task_of(se)->stats;
- }
-+#endif /* CONFIG_SCHED_ALT */
-
- #ifdef CONFIG_PSI
- void psi_task_change(struct task_struct *task, int clear, int set);
-diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
-index 24f9f90b6574..9aa01e45c920 100644
---- a/kernel/sched/syscalls.c
-+++ b/kernel/sched/syscalls.c
-@@ -16,6 +16,14 @@
- #include "sched.h"
- #include "autogroup.h"
-
-+#ifdef CONFIG_SCHED_ALT
-+#include "alt_core.h"
-+
-+static inline int __normal_prio(int policy, int rt_prio, int static_prio)
-+{
-+ return rt_policy(policy) ? (MAX_RT_PRIO - 1 - rt_prio) : static_prio;
-+}
-+#else /* !CONFIG_SCHED_ALT */
- static inline int __normal_prio(int policy, int rt_prio, int nice)
- {
- int prio;
-@@ -29,6 +37,7 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
-
- return prio;
- }
-+#endif /* !CONFIG_SCHED_ALT */
-
- /*
- * Calculate the expected normal priority: i.e. priority
-@@ -39,7 +48,11 @@ static inline int __normal_prio(int policy, int rt_prio, int nice)
- */
- static inline int normal_prio(struct task_struct *p)
- {
-+#ifdef CONFIG_SCHED_ALT
-+ return __normal_prio(p->policy, p->rt_priority, p->static_prio);
-+#else /* !CONFIG_SCHED_ALT */
- return __normal_prio(p->policy, p->rt_priority, PRIO_TO_NICE(p->static_prio));
-+#endif /* !CONFIG_SCHED_ALT */
- }
-
- /*
-@@ -64,6 +77,37 @@ static int effective_prio(struct task_struct *p)
-
- void set_user_nice(struct task_struct *p, long nice)
- {
-+#ifdef CONFIG_SCHED_ALT
-+ unsigned long flags;
-+ struct rq *rq;
-+ raw_spinlock_t *lock;
-+
-+ if (task_nice(p) == nice || nice < MIN_NICE || nice > MAX_NICE)
-+ return;
-+ /*
-+ * We have to be careful, if called from sys_setpriority(),
-+ * the task might be in the middle of scheduling on another CPU.
-+ */
-+ raw_spin_lock_irqsave(&p->pi_lock, flags);
-+ rq = __task_access_lock(p, &lock);
-+
-+ p->static_prio = NICE_TO_PRIO(nice);
-+ /*
-+ * The RT priorities are set via sched_setscheduler(), but we still
-+ * allow the 'normal' nice value to be set - but as expected
-+ * it won't have any effect on scheduling until the task is
-+ * back to SCHED_NORMAL/SCHED_BATCH:
-+ */
-+ if (task_has_rt_policy(p))
-+ goto out_unlock;
-+
-+ p->prio = effective_prio(p);
-+
-+ check_task_changed(p, rq);
-+out_unlock:
-+ __task_access_unlock(p, lock);
-+ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+#else
- bool queued, running;
- struct rq *rq;
- int old_prio;
-@@ -112,6 +156,7 @@ void set_user_nice(struct task_struct *p, long nice)
- * lowered its priority, then reschedule its CPU:
- */
- p->sched_class->prio_changed(rq, p, old_prio);
-+#endif /* !CONFIG_SCHED_ALT */
- }
- EXPORT_SYMBOL(set_user_nice);
-
-@@ -190,7 +235,19 @@ SYSCALL_DEFINE1(nice, int, increment)
- */
- int task_prio(const struct task_struct *p)
- {
-+#ifdef CONFIG_SCHED_ALT
-+/*
-+ * sched policy              return value   kernel prio    user prio/nice
-+ *
-+ * (BMQ)normal, batch, idle  [0 ... 53]     [100 ... 139]  0/[-20 ... 19]/[-7 ... 7]
-+ * (PDS)normal, batch, idle  [0 ... 39]     100            0/[-20 ... 19]
-+ * fifo, rr                  [-1 ... -100]  [99 ... 0]     [0 ... 99]
-+ */
-+ return (p->prio < MAX_RT_PRIO) ? p->prio - MAX_RT_PRIO :
-+ task_sched_prio_normal(p, task_rq(p));
-+#else
- return p->prio - MAX_RT_PRIO;
-+#endif /* !CONFIG_SCHED_ALT */
- }
-
- /**
-@@ -300,10 +357,13 @@ static void __setscheduler_params(struct task_struct *p,
-
- p->policy = policy;
-
-+#ifndef CONFIG_SCHED_ALT
- if (dl_policy(policy)) {
- __setparam_dl(p, attr);
- } else if (fair_policy(policy)) {
-+#endif /* !CONFIG_SCHED_ALT */
- p->static_prio = NICE_TO_PRIO(attr->sched_nice);
-+#ifndef CONFIG_SCHED_ALT
- if (attr->sched_runtime) {
- p->se.custom_slice = 1;
- p->se.slice = clamp_t(u64, attr->sched_runtime,
-@@ -322,6 +382,7 @@ static void __setscheduler_params(struct task_struct *p,
- /* when switching back to non-rt policy, restore timerslack */
- p->timer_slack_ns = p->default_timer_slack_ns;
- }
-+#endif /* !CONFIG_SCHED_ALT */
-
- /*
- * __sched_setscheduler() ensures attr->sched_priority == 0 when
-@@ -330,7 +391,9 @@ static void __setscheduler_params(struct task_struct *p,
- */
- p->rt_priority = attr->sched_priority;
- p->normal_prio = normal_prio(p);
-+#ifndef CONFIG_SCHED_ALT
- set_load_weight(p, true);
-+#endif /* !CONFIG_SCHED_ALT */
- }
-
- /*
-@@ -346,6 +409,8 @@ static bool check_same_owner(struct task_struct *p)
- uid_eq(cred->euid, pcred->uid));
- }
-
-+#ifndef CONFIG_SCHED_ALT
-+
- #ifdef CONFIG_UCLAMP_TASK
-
- static int uclamp_validate(struct task_struct *p,
-@@ -459,6 +524,7 @@ static inline int uclamp_validate(struct task_struct *p,
- static void __setscheduler_uclamp(struct task_struct *p,
- const struct sched_attr *attr) { }
- #endif
-+#endif /* !CONFIG_SCHED_ALT */
-
- /*
- * Allow unprivileged RT tasks to decrease priority.
-@@ -469,11 +535,13 @@ static int user_check_sched_setscheduler(struct task_struct *p,
- const struct sched_attr *attr,
- int policy, int reset_on_fork)
- {
-+#ifndef CONFIG_SCHED_ALT
- if (fair_policy(policy)) {
- if (attr->sched_nice < task_nice(p) &&
- !is_nice_reduction(p, attr->sched_nice))
- goto req_priv;
- }
-+#endif /* !CONFIG_SCHED_ALT */
-
- if (rt_policy(policy)) {
- unsigned long rlim_rtprio = task_rlimit(p, RLIMIT_RTPRIO);
-@@ -488,6 +556,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
- goto req_priv;
- }
-
-+#ifndef CONFIG_SCHED_ALT
- /*
- * Can't set/change SCHED_DEADLINE policy at all for now
- * (safest behavior); in the future we would like to allow
-@@ -505,6 +574,7 @@ static int user_check_sched_setscheduler(struct task_struct *p,
- if (!is_nice_reduction(p, task_nice(p)))
- goto req_priv;
- }
-+#endif /* !CONFIG_SCHED_ALT */
-
- /* Can't change other user's priorities: */
- if (!check_same_owner(p))
-@@ -527,6 +597,158 @@ int __sched_setscheduler(struct task_struct *p,
- const struct sched_attr *attr,
- bool user, bool pi)
- {
-+#ifdef CONFIG_SCHED_ALT
-+ const struct sched_attr dl_squash_attr = {
-+ .size = sizeof(struct sched_attr),
-+ .sched_policy = SCHED_FIFO,
-+ .sched_nice = 0,
-+ .sched_priority = 99,
-+ };
-+ int oldpolicy = -1, policy = attr->sched_policy;
-+ int retval, newprio;
-+ struct balance_callback *head;
-+ unsigned long flags;
-+ struct rq *rq;
-+ int reset_on_fork;
-+ raw_spinlock_t *lock;
-+
-+ /* The pi code expects interrupts enabled */
-+ BUG_ON(pi && in_interrupt());
-+
-+ /*
-+ * Alt schedule FW supports SCHED_DEADLINE by squashing it into prio-0 SCHED_FIFO
-+ */
-+ if (unlikely(SCHED_DEADLINE == policy)) {
-+ attr = &dl_squash_attr;
-+ policy = attr->sched_policy;
-+ }
-+recheck:
-+ /* Double check policy once rq lock held */
-+ if (policy < 0) {
-+ reset_on_fork = p->sched_reset_on_fork;
-+ policy = oldpolicy = p->policy;
-+ } else {
-+ reset_on_fork = !!(attr->sched_flags & SCHED_RESET_ON_FORK);
-+
-+ if (policy > SCHED_IDLE)
-+ return -EINVAL;
-+ }
-+
-+ if (attr->sched_flags & ~(SCHED_FLAG_ALL))
-+ return -EINVAL;
-+
-+ /*
-+ * Valid priorities for SCHED_FIFO and SCHED_RR are
-+ * 1..MAX_RT_PRIO-1, valid priority for SCHED_NORMAL and
-+ * SCHED_BATCH and SCHED_IDLE is 0.
-+ */
-+ if (attr->sched_priority < 0 ||
-+ (p->mm && attr->sched_priority > MAX_RT_PRIO - 1) ||
-+ (!p->mm && attr->sched_priority > MAX_RT_PRIO - 1))
-+ return -EINVAL;
-+ if ((SCHED_RR == policy || SCHED_FIFO == policy) !=
-+ (attr->sched_priority != 0))
-+ return -EINVAL;
-+
-+ if (user) {
-+ retval = user_check_sched_setscheduler(p, attr, policy, reset_on_fork);
-+ if (retval)
-+ return retval;
-+
-+ retval = security_task_setscheduler(p);
-+ if (retval)
-+ return retval;
-+ }
-+
-+ /*
-+ * Make sure no PI-waiters arrive (or leave) while we are
-+ * changing the priority of the task:
-+ */
-+ raw_spin_lock_irqsave(&p->pi_lock, flags);
-+
-+ /*
-+ * To be able to change p->policy safely, task_access_lock()
-+ * must be called.
-+ * If task_access_lock() is used here:
-+ * For the task p which is not running, reading rq->stop is
-+ * racy but acceptable as ->stop doesn't change much.
-+ * An enhancement can be made to read rq->stop safely.
-+ */
-+ rq = __task_access_lock(p, &lock);
-+
-+ /*
-+ * Changing the policy of the stop thread is a very bad idea
-+ */
-+ if (p == rq->stop) {
-+ retval = -EINVAL;
-+ goto unlock;
-+ }
-+
-+ /*
-+ * If not changing anything there's no need to proceed further:
-+ */
-+ if (unlikely(policy == p->policy)) {
-+ if (rt_policy(policy) && attr->sched_priority != p->rt_priority)
-+ goto change;
-+ if (!rt_policy(policy) &&
-+ NICE_TO_PRIO(attr->sched_nice) != p->static_prio)
-+ goto change;
-+
-+ p->sched_reset_on_fork = reset_on_fork;
-+ retval = 0;
-+ goto unlock;
-+ }
-+change:
-+
-+ /* Re-check policy now with rq lock held */
-+ if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
-+ policy = oldpolicy = -1;
-+ __task_access_unlock(p, lock);
-+ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+ goto recheck;
-+ }
-+
-+ p->sched_reset_on_fork = reset_on_fork;
-+
-+ newprio = __normal_prio(policy, attr->sched_priority, NICE_TO_PRIO(attr->sched_nice));
-+ if (pi) {
-+ /*
-+ * Take priority boosted tasks into account. If the new
-+ * effective priority is unchanged, we just store the new
-+ * normal parameters and do not touch the scheduler class and
-+ * the runqueue. This will be done when the task deboosts
-+ * itself.
-+ */
-+ newprio = rt_effective_prio(p, newprio);
-+ }
-+
-+ if (!(attr->sched_flags & SCHED_FLAG_KEEP_PARAMS)) {
-+ __setscheduler_params(p, attr);
-+ __setscheduler_prio(p, newprio);
-+ }
-+
-+ check_task_changed(p, rq);
-+
-+ /* Avoid rq from going away on us: */
-+ preempt_disable();
-+ head = splice_balance_callbacks(rq);
-+ __task_access_unlock(p, lock);
-+ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+
-+ if (pi)
-+ rt_mutex_adjust_pi(p);
-+
-+ /* Run balance callbacks after we've adjusted the PI chain: */
-+ balance_callbacks(rq, head);
-+ preempt_enable();
-+
-+ return 0;
-+
-+unlock:
-+ __task_access_unlock(p, lock);
-+ raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-+ return retval;
-+#else /* !CONFIG_SCHED_ALT */
- int oldpolicy = -1, policy = attr->sched_policy;
- int retval, oldprio, newprio, queued, running;
- const struct sched_class *prev_class, *next_class;
-@@ -764,6 +986,7 @@ int __sched_setscheduler(struct task_struct *p,
- if (cpuset_locked)
- cpuset_unlock();
- return retval;
-+#endif /* !CONFIG_SCHED_ALT */
- }
-
- static int _sched_setscheduler(struct task_struct *p, int policy,
-@@ -775,8 +998,10 @@ static int _sched_setscheduler(struct task_struct *p, int policy,
- .sched_nice = PRIO_TO_NICE(p->static_prio),
- };
-
-+#ifndef CONFIG_SCHED_ALT
- if (p->se.custom_slice)
- attr.sched_runtime = p->se.slice;
-+#endif /* !CONFIG_SCHED_ALT */
-
- /* Fixup the legacy SCHED_RESET_ON_FORK hack. */
- if ((policy != SETPARAM_POLICY) && (policy & SCHED_RESET_ON_FORK)) {
-@@ -944,13 +1169,18 @@ static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *a
-
- static void get_params(struct task_struct *p, struct sched_attr *attr)
- {
-- if (task_has_dl_policy(p)) {
-+#ifndef CONFIG_SCHED_ALT
-+ if (task_has_dl_policy(p))
- __getparam_dl(p, attr);
-- } else if (task_has_rt_policy(p)) {
-+ else
-+#endif
-+ if (task_has_rt_policy(p)) {
- attr->sched_priority = p->rt_priority;
- } else {
- attr->sched_nice = task_nice(p);
-+#ifndef CONFIG_SCHED_ALT
- attr->sched_runtime = p->se.slice;
-+#endif
- }
- }
-
-@@ -1170,6 +1400,7 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
- #ifdef CONFIG_SMP
- int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
- {
-+#ifndef CONFIG_SCHED_ALT
- /*
- * If the task isn't a deadline task or admission control is
- * disabled then we don't care about affinity changes.
-@@ -1186,6 +1417,7 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
- guard(rcu)();
- if (!cpumask_subset(task_rq(p)->rd->span, mask))
- return -EBUSY;
-+#endif
-
- return 0;
- }
-@@ -1210,9 +1442,11 @@ int __sched_setaffinity(struct task_struct *p, struct affinity_context *ctx)
- ctx->new_mask = new_mask;
- ctx->flags |= SCA_CHECK;
-
-+#ifndef CONFIG_SCHED_ALT
- retval = dl_task_check_affinity(p, new_mask);
- if (retval)
- goto out_free_new_mask;
-+#endif
-
- retval = __set_cpus_allowed_ptr(p, ctx);
- if (retval)
-@@ -1392,13 +1626,34 @@ SYSCALL_DEFINE3(sched_getaffinity, pid_t, pid, unsigned int, len,
-
- static void do_sched_yield(void)
- {
-- struct rq_flags rf;
- struct rq *rq;
-+ struct rq_flags rf;
-+
-+#ifdef CONFIG_SCHED_ALT
-+ struct task_struct *p;
-+
-+ if (!sched_yield_type)
-+ return;
-
- rq = this_rq_lock_irq(&rf);
-
-+ schedstat_inc(rq->yld_count);
-+
-+ p = current;
-+ if (rt_task(p)) {
-+ if (task_on_rq_queued(p))
-+ requeue_task(p, rq);
-+ } else if (rq->nr_running > 1) {
-+ do_sched_yield_type_1(p, rq);
-+ if (task_on_rq_queued(p))
-+ requeue_task(p, rq);
-+ }
-+#else /* !CONFIG_SCHED_ALT */
-+ rq = this_rq_lock_irq(&rf);
-+
- schedstat_inc(rq->yld_count);
- current->sched_class->yield_task(rq);
-+#endif /* !CONFIG_SCHED_ALT */
-
- preempt_disable();
- rq_unlock_irq(rq, &rf);
-@@ -1467,6 +1722,9 @@ EXPORT_SYMBOL(yield);
- */
- int __sched yield_to(struct task_struct *p, bool preempt)
- {
-+#ifdef CONFIG_SCHED_ALT
-+ return 0;
-+#else /* !CONFIG_SCHED_ALT */
- struct task_struct *curr = current;
- struct rq *rq, *p_rq;
- int yielded = 0;
-@@ -1512,6 +1770,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
- schedule();
-
- return yielded;
-+#endif /* !CONFIG_SCHED_ALT */
- }
- EXPORT_SYMBOL_GPL(yield_to);
-
-@@ -1532,7 +1791,9 @@ SYSCALL_DEFINE1(sched_get_priority_max, int, policy)
- case SCHED_RR:
- ret = MAX_RT_PRIO-1;
- break;
-+#ifndef CONFIG_SCHED_ALT
- case SCHED_DEADLINE:
-+#endif
- case SCHED_NORMAL:
- case SCHED_BATCH:
- case SCHED_IDLE:
-@@ -1560,7 +1821,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
- case SCHED_RR:
- ret = 1;
- break;
-+#ifndef CONFIG_SCHED_ALT
- case SCHED_DEADLINE:
-+#endif
- case SCHED_NORMAL:
- case SCHED_BATCH:
- case SCHED_IDLE:
-@@ -1572,7 +1835,9 @@ SYSCALL_DEFINE1(sched_get_priority_min, int, policy)
-
- static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
- {
-+#ifndef CONFIG_SCHED_ALT
- unsigned int time_slice = 0;
-+#endif
- int retval;
-
- if (pid < 0)
-@@ -1587,6 +1852,7 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
- if (retval)
- return retval;
-
-+#ifndef CONFIG_SCHED_ALT
- scoped_guard (task_rq_lock, p) {
- struct rq *rq = scope.rq;
- if (p->sched_class->get_rr_interval)
-@@ -1595,6 +1861,13 @@ static int sched_rr_get_interval(pid_t pid, struct timespec64 *t)
- }
-
- jiffies_to_timespec64(time_slice, t);
-+#else
-+ }
-+
-+ alt_sched_debug();
-+
-+ *t = ns_to_timespec64(sysctl_sched_base_slice);
-+#endif /* !CONFIG_SCHED_ALT */
- return 0;
- }
-
-diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
-index 9748a4c8d668..1e2bdd70d69a 100644
---- a/kernel/sched/topology.c
-+++ b/kernel/sched/topology.c
-@@ -3,6 +3,7 @@
- * Scheduler topology setup/handling methods
- */
-
-+#ifndef CONFIG_SCHED_ALT
- #include <linux/bsearch.h>
-
- DEFINE_MUTEX(sched_domains_mutex);
-@@ -1459,8 +1460,10 @@ static void asym_cpu_capacity_scan(void)
- */
-
- static int default_relax_domain_level = -1;
-+#endif /* CONFIG_SCHED_ALT */
- int sched_domain_level_max;
-
-+#ifndef CONFIG_SCHED_ALT
- static int __init setup_relax_domain_level(char *str)
- {
- if (kstrtoint(str, 0, &default_relax_domain_level))
-@@ -1695,6 +1698,7 @@ sd_init(struct sched_domain_topology_level *tl,
-
- return sd;
- }
-+#endif /* CONFIG_SCHED_ALT */
-
- /*
- * Topology list, bottom-up.
-@@ -1731,6 +1735,7 @@ void __init set_sched_topology(struct sched_domain_topology_level *tl)
- sched_domain_topology_saved = NULL;
- }
-
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_NUMA
-
- static const struct cpumask *sd_numa_mask(int cpu)
-@@ -2797,3 +2802,28 @@ void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
- partition_sched_domains_locked(ndoms_new, doms_new, dattr_new);
- mutex_unlock(&sched_domains_mutex);
- }
-+#else /* CONFIG_SCHED_ALT */
-+DEFINE_STATIC_KEY_FALSE(sched_asym_cpucapacity);
-+
-+void partition_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
-+ struct sched_domain_attr *dattr_new)
-+{}
-+
-+#ifdef CONFIG_NUMA
-+int sched_numa_find_closest(const struct cpumask *cpus, int cpu)
-+{
-+ return best_mask_cpu(cpu, cpus);
-+}
-+
-+int sched_numa_find_nth_cpu(const struct cpumask *cpus, int cpu, int node)
-+{
-+ return cpumask_nth(cpu, cpus);
-+}
-+
-+const struct cpumask *sched_numa_hop_mask(unsigned int node, unsigned int hops)
-+{
-+ return ERR_PTR(-EOPNOTSUPP);
-+}
-+EXPORT_SYMBOL_GPL(sched_numa_hop_mask);
-+#endif /* CONFIG_NUMA */
-+#endif
-diff --git a/kernel/sysctl.c b/kernel/sysctl.c
-index 79e6cb1d5c48..61bc0352e233 100644
---- a/kernel/sysctl.c
-+++ b/kernel/sysctl.c
-@@ -92,6 +92,10 @@ EXPORT_SYMBOL_GPL(sysctl_long_vals);
-
- /* Constants used for minimum and maximum */
-
-+#ifdef CONFIG_SCHED_ALT
-+extern int sched_yield_type;
-+#endif
-+
- #ifdef CONFIG_PERF_EVENTS
- static const int six_hundred_forty_kb = 640 * 1024;
- #endif
-@@ -1907,6 +1911,17 @@ static struct ctl_table kern_table[] = {
- .proc_handler = proc_dointvec,
- },
- #endif
-+#ifdef CONFIG_SCHED_ALT
-+ {
-+ .procname = "yield_type",
-+ .data = &sched_yield_type,
-+ .maxlen = sizeof (int),
-+ .mode = 0644,
-+ .proc_handler = &proc_dointvec_minmax,
-+ .extra1 = SYSCTL_ZERO,
-+ .extra2 = SYSCTL_TWO,
-+ },
-+#endif
- #if defined(CONFIG_S390) && defined(CONFIG_SMP)
- {
- .procname = "spin_retry",
-diff --git a/kernel/time/posix-cpu-timers.c b/kernel/time/posix-cpu-timers.c
-index 6bcee4704059..cf88205fd4a2 100644
---- a/kernel/time/posix-cpu-timers.c
-+++ b/kernel/time/posix-cpu-timers.c
-@@ -223,7 +223,7 @@ static void task_sample_cputime(struct task_struct *p, u64 *samples)
- u64 stime, utime;
-
- task_cputime(p, &utime, &stime);
-- store_samples(samples, stime, utime, p->se.sum_exec_runtime);
-+ store_samples(samples, stime, utime, tsk_seruntime(p));
- }
-
- static void proc_sample_cputime_atomic(struct task_cputime_atomic *at,
-@@ -830,6 +830,7 @@ static void collect_posix_cputimers(struct posix_cputimers *pct, u64 *samples,
- }
- }
-
-+#ifndef CONFIG_SCHED_ALT
- static inline void check_dl_overrun(struct task_struct *tsk)
- {
- if (tsk->dl.dl_overrun) {
-@@ -837,6 +838,7 @@ static inline void check_dl_overrun(struct task_struct *tsk)
- send_signal_locked(SIGXCPU, SEND_SIG_PRIV, tsk, PIDTYPE_TGID);
- }
- }
-+#endif
-
- static bool check_rlimit(u64 time, u64 limit, int signo, bool rt, bool hard)
- {
-@@ -864,8 +866,10 @@ static void check_thread_timers(struct task_struct *tsk,
- u64 samples[CPUCLOCK_MAX];
- unsigned long soft;
-
-+#ifndef CONFIG_SCHED_ALT
- if (dl_task(tsk))
- check_dl_overrun(tsk);
-+#endif
-
- if (expiry_cache_is_inactive(pct))
- return;
-@@ -879,7 +883,7 @@ static void check_thread_timers(struct task_struct *tsk,
- soft = task_rlimit(tsk, RLIMIT_RTTIME);
- if (soft != RLIM_INFINITY) {
- /* Task RT timeout is accounted in jiffies. RTTIME is usec */
-- unsigned long rttime = tsk->rt.timeout * (USEC_PER_SEC / HZ);
-+ unsigned long rttime = tsk_rttimeout(tsk) * (USEC_PER_SEC / HZ);
- unsigned long hard = task_rlimit_max(tsk, RLIMIT_RTTIME);
-
- /* At the hard limit, send SIGKILL. No further action. */
-@@ -1115,8 +1119,10 @@ static inline bool fastpath_timer_check(struct task_struct *tsk)
- return true;
- }
-
-+#ifndef CONFIG_SCHED_ALT
- if (dl_task(tsk) && tsk->dl.dl_overrun)
- return true;
-+#endif
-
- return false;
- }
-diff --git a/kernel/trace/trace_osnoise.c b/kernel/trace/trace_osnoise.c
-index a50ed23bee77..be0477666049 100644
---- a/kernel/trace/trace_osnoise.c
-+++ b/kernel/trace/trace_osnoise.c
-@@ -1665,6 +1665,9 @@ static void osnoise_sleep(bool skip_period)
- */
- static inline int osnoise_migration_pending(void)
- {
-+#ifdef CONFIG_SCHED_ALT
-+ return 0;
-+#else
- if (!current->migration_pending)
- return 0;
-
-@@ -1686,6 +1689,7 @@ static inline int osnoise_migration_pending(void)
- mutex_unlock(&interface_lock);
-
- return 1;
-+#endif
- }
-
- /*
-diff --git a/kernel/trace/trace_selftest.c b/kernel/trace/trace_selftest.c
-index 1469dd8075fa..803527a0e48a 100644
---- a/kernel/trace/trace_selftest.c
-+++ b/kernel/trace/trace_selftest.c
-@@ -1419,10 +1419,15 @@ static int trace_wakeup_test_thread(void *data)
- {
- /* Make this a -deadline thread */
- static const struct sched_attr attr = {
-+#ifdef CONFIG_SCHED_ALT
-+ /* No deadline on BMQ/PDS, use RR */
-+ .sched_policy = SCHED_RR,
-+#else
- .sched_policy = SCHED_DEADLINE,
- .sched_runtime = 100000ULL,
- .sched_deadline = 10000000ULL,
- .sched_period = 10000000ULL
-+#endif
- };
- struct wakeup_test_data *x = data;
-
-diff --git a/kernel/workqueue.c b/kernel/workqueue.c
-index 9949ffad8df0..90eac9d802a8 100644
---- a/kernel/workqueue.c
-+++ b/kernel/workqueue.c
-@@ -1247,6 +1247,7 @@ static bool kick_pool(struct worker_pool *pool)
-
- p = worker->task;
-
-+#ifndef CONFIG_SCHED_ALT
- #ifdef CONFIG_SMP
- /*
- * Idle @worker is about to execute @work and waking up provides an
-@@ -1276,6 +1277,8 @@ static bool kick_pool(struct worker_pool *pool)
- }
- }
- #endif
-+#endif /* !CONFIG_SCHED_ALT */
-+
- wake_up_process(p);
- return true;
- }
-@@ -1404,7 +1407,11 @@ void wq_worker_running(struct task_struct *task)
- * CPU intensive auto-detection cares about how long a work item hogged
- * CPU without sleeping. Reset the starting timestamp on wakeup.
- */
-+#ifdef CONFIG_SCHED_ALT
-+ worker->current_at = worker->task->sched_time;
-+#else
- worker->current_at = worker->task->se.sum_exec_runtime;
-+#endif
-
- WRITE_ONCE(worker->sleeping, 0);
- }
-@@ -1489,7 +1496,11 @@ void wq_worker_tick(struct task_struct *task)
- * We probably want to make this prettier in the future.
- */
- if ((worker->flags & WORKER_NOT_RUNNING) || READ_ONCE(worker->sleeping) ||
-+#ifdef CONFIG_SCHED_ALT
-+ worker->task->sched_time - worker->current_at <
-+#else
- worker->task->se.sum_exec_runtime - worker->current_at <
-+#endif
- wq_cpu_intensive_thresh_us * NSEC_PER_USEC)
- return;
-
-@@ -3157,7 +3168,11 @@ __acquires(&pool->lock)
- worker->current_func = work->func;
- worker->current_pwq = pwq;
- if (worker->task)
-+#ifdef CONFIG_SCHED_ALT
-+ worker->current_at = worker->task->sched_time;
-+#else
- worker->current_at = worker->task->se.sum_exec_runtime;
-+#endif
- work_data = *work_data_bits(work);
- worker->current_color = get_work_color(work_data);
-
diff --git a/5021_BMQ-and-PDS-gentoo-defaults.patch b/5021_BMQ-and-PDS-gentoo-defaults.patch
deleted file mode 100644
index 7748d78c..00000000
--- a/5021_BMQ-and-PDS-gentoo-defaults.patch
+++ /dev/null
@@ -1,13 +0,0 @@
---- a/init/Kconfig 2024-11-13 14:45:36.566335895 -0500
-+++ b/init/Kconfig 2024-11-13 14:47:02.670787774 -0500
-@@ -860,8 +860,9 @@ config UCLAMP_BUCKETS_COUNT
- If in doubt, use the default value.
-
- menuconfig SCHED_ALT
-+ depends on X86_64
- bool "Alternative CPU Schedulers"
-- default y
-+ default n
- help
- This feature enables the alternative CPU schedulers.
-